From xen-devel-bounces@lists.xen.org Sat Dec 01 02:30:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 02:30:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TecqX-000418-Lc; Sat, 01 Dec 2012 02:30:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TecqV-0003VM-IE
	for xen-devel@lists.xensource.com; Sat, 01 Dec 2012 02:30:23 +0000
Received: from [85.158.138.51:54117] by server-14.bemta-3.messagelabs.com id
	80/F4-31424-EBB69B05; Sat, 01 Dec 2012 02:30:22 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354329020!30319489!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYyMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4076 invoked from network); 1 Dec 2012 02:30:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Dec 2012 02:30:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,195,1355097600"; d="scan'208";a="16101106"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Dec 2012 02:30:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 1 Dec 2012 02:30:20 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TecqS-0001yE-2C;
	Sat, 01 Dec 2012 02:30:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TecqR-0001F3-T0;
	Sat, 01 Dec 2012 02:30:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1TecqR-0001F3-T0@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Dec 2012 02:30:19 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-unstable bisection] complete build-amd64-oldkern
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-unstable
xen branch xen-unstable
job build-amd64-oldkern
test xen-build

Tree: linux http://xenbits.xen.org/linux-2.6.18-xen.hg
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-unstable.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
  Bug introduced:  07396f4fb476
  Bug not present: eb3bf80d8dac


  changeset:   26216:07396f4fb476
  user:        Samuel Thibault <samuel.thibault@ens-lyon.org>
  date:        Fri Nov 30 09:32:27 2012 +0000
      
      [minios] Add xenbus shutdown control support
      
      Add a thread that watches the xenbus shutdown control path and
      notifies a wait queue.
      
      Add a convenient HYPERVISOR_shutdown inline for minios shutdown.
      
      Add proper shutdown to the minios test application.
      
      Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
      Committed-by: Keir Fraser <keir@xen.org>
      
      

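As background for the changeset above: the mechanism it describes (a watcher thread observing a control path and waking a wait queue) can be sketched generically. This is a minimal illustrative model only, not the Mini-OS code; ShutdownWatcher and its methods are hypothetical names standing in for the xenbus watch and wait-queue primitives.

```python
# Illustrative sketch of the watcher-thread / wait-queue pattern the
# changeset describes. Not Mini-OS code; all names here are hypothetical.
import threading

class ShutdownWatcher:
    def __init__(self):
        self._cond = threading.Condition()
        self._reason = None          # e.g. "poweroff", "reboot"; None = none yet

    def post(self, reason):
        """Called from the watcher thread when the control path changes."""
        with self._cond:
            self._reason = reason
            self._cond.notify_all()  # wake everything on the wait queue

    def wait(self, timeout=None):
        """Block until a shutdown request arrives; return its reason."""
        with self._cond:
            self._cond.wait_for(lambda: self._reason is not None,
                                timeout=timeout)
            return self._reason

watcher = ShutdownWatcher()
t = threading.Thread(target=watcher.post, args=("poweroff",))
t.start()
print(watcher.wait())  # -> poweroff
t.join()
```

The point of the pattern is that consumers sleep on the condition rather than polling the control path.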

For bisection revision-tuple graph see:
   http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-unstable.build-amd64-oldkern.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Searching for failure / basis pass:
 14526 fail [host=gall-mite] / 14506 ok.
Failure / basis pass flights: 14526 / 14506
Tree: linux http://xenbits.xen.org/linux-2.6.18-xen.hg
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
Latest b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
Basis pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 9d88ac6046d8
Generating revisions with ./adhoc-revtuple-generator  http://xenbits.xen.org/linux-2.6.18-xen.hg#b9b0a1e9130e-b9b0a1e9130e git://xenbits.xen.org/staging/qemu-xen-unstable.git#bacc0d302445c75f18f4c826750fb5853b60e7ca-bacc0d302445c75f18f4c826750fb5853b60e7ca git://xenbits.xen.org/staging/qemu-upstream-unstable.git#1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c-1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c http://xenbits.xen.org/hg/staging/xen-unstable.hg#9d88ac6046d8-8037099671f3
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found
Loaded 402 nodes in revision graph
Searching for test results:
 14544 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 07396f4fb476
 14506 pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 9d88ac6046d8
 14494 [host=lace-bug]
 14515 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
 14518 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
 14533 pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 9d88ac6046d8
 14519 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
 14534 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
 14496 [host=bush-cricket]
 14535 pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 0cf1f79bc4f8
 14536 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c f3b6af40e79e
 14520 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
 14537 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 07396f4fb476
 14538 pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c eb3bf80d8dac
 14526 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 8037099671f3
 14539 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 07396f4fb476
 14540 pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c eb3bf80d8dac
 14541 fail b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c 07396f4fb476
 14542 pass b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c eb3bf80d8dac
Searching for interesting versions
 Result found: flight 14506 (pass), for basis pass
 Result found: flight 14515 (fail), for basis failure
 Repro found: flight 14533 (pass), for basis pass
 Repro found: flight 14534 (fail), for basis failure
 0 revisions at b9b0a1e9130e bacc0d302445c75f18f4c826750fb5853b60e7ca 1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c eb3bf80d8dac
No revisions left to test, checking graph state.
 Result found: flight 14538 (pass), for last pass
 Result found: flight 14539 (fail), for first failure
 Repro found: flight 14540 (pass), for last pass
 Repro found: flight 14541 (fail), for first failure
 Repro found: flight 14542 (pass), for last pass
 Repro found: flight 14544 (fail), for first failure
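
The search logged above is a standard last-pass/first-fail narrowing. A minimal sketch of that step, assuming the linear revision order implied by the results (the real harness bisects over a revision-tuple graph and re-runs flights to confirm each outcome):

```python
# Schematic bisection between a known-good basis and a known failure.
# The revision order below is assumed from the pass/fail results above.
def bisect(revs, test):
    """test(revs[0]) passes, test(revs[-1]) fails.
    Returns (last_pass, first_fail)."""
    lo, hi = 0, len(revs) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test(revs[mid]):
            lo = mid                 # still passing: bug is later
        else:
            hi = mid                 # failing: bug is here or earlier
    return revs[lo], revs[hi]

revs = ["9d88ac6046d8", "0cf1f79bc4f8", "eb3bf80d8dac",
        "07396f4fb476", "f3b6af40e79e", "8037099671f3"]
broken = revs.index("07396f4fb476")
last_pass, first_fail = bisect(revs, lambda r: revs.index(r) < broken)
print(last_pass, first_fail)  # -> eb3bf80d8dac 07396f4fb476
```

Each probe here corresponds to one flight in the log; the repro flights re-test the same endpoints to rule out intermittent failures.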

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-unstable.hg
  Bug introduced:  07396f4fb476
  Bug not present: eb3bf80d8dac

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-unstable.hg
searching for changes
no changes found

  changeset:   26216:07396f4fb476
  user:        Samuel Thibault <samuel.thibault@ens-lyon.org>
  date:        Fri Nov 30 09:32:27 2012 +0000
      
      [minios] Add xenbus shutdown control support
      
      Add a thread that watches the xenbus shutdown control path and
      notifies a wait queue.
      
      Add a convenient HYPERVISOR_shutdown inline for minios shutdown.
      
      Add proper shutdown to the minios test application.
      
      Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
      Committed-by: Keir Fraser <keir@xen.org>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-unstable.build-amd64-oldkern.xen-build.{dot,ps,png,html}.
----------------------------------------
14544: tolerable ALL FAIL

flight 14544 xen-unstable real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14544/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build               fail baseline untested


jobs:
 build-amd64-oldkern                                          fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 06:18:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 06:18:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TegOQ-0007ej-Ou; Sat, 01 Dec 2012 06:17:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TegOO-0007ee-P6
	for xen-devel@lists.xensource.com; Sat, 01 Dec 2012 06:17:37 +0000
Received: from [85.158.139.83:64936] by server-1.bemta-5.messagelabs.com id
	56/93-09311-FF0A9B05; Sat, 01 Dec 2012 06:17:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354342654!27949153!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYyMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31674 invoked from network); 1 Dec 2012 06:17:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Dec 2012 06:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,195,1355097600"; d="scan'208";a="16101808"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Dec 2012 06:17:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 1 Dec 2012 06:17:32 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TegOK-00037I-3R;
	Sat, 01 Dec 2012 06:17:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TegOK-0002tH-31;
	Sat, 01 Dec 2012 06:17:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14543-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Dec 2012 06:17:32 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14543: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14543 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14543/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
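
The categories above (REGR. vs. the baseline flight, fail "like" the baseline, blocked, never pass) amount to a simple decision rule per test step. A rough illustrative sketch under that reading, not sg-report-flight's actual code:

```python
# Hypothetical classifier mirroring the report's categories; the real
# harness logic lives in osstest's sg-report-flight and differs in detail.
def classify(step, baseline, ever_passed):
    if step == "blocked":
        return "blocked n/a"            # a prerequisite (e.g. the build) failed
    if step == "pass":
        return "pass"
    # step failed:
    if baseline == "fail":
        return "fail like <baseline>"   # baseline failed too: allowable
    if not ever_passed:
        return "fail never pass"        # known-never-working: not blocking
    return "fail REGR. vs. <baseline>"  # passed before, fails now: regression

print(classify("fail", "pass", True))   # -> fail REGR. vs. <baseline>
print(classify("fail", "fail", True))   # -> fail like <baseline>
```

Only the last category blocks a push of the tested revision.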

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 06:18:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 06:18:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TegOQ-0007ej-Ou; Sat, 01 Dec 2012 06:17:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TegOO-0007ee-P6
	for xen-devel@lists.xensource.com; Sat, 01 Dec 2012 06:17:37 +0000
Received: from [85.158.139.83:64936] by server-1.bemta-5.messagelabs.com id
	56/93-09311-FF0A9B05; Sat, 01 Dec 2012 06:17:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354342654!27949153!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYyMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31674 invoked from network); 1 Dec 2012 06:17:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Dec 2012 06:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,195,1355097600"; d="scan'208";a="16101808"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Dec 2012 06:17:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 1 Dec 2012 06:17:32 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TegOK-00037I-3R;
	Sat, 01 Dec 2012 06:17:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TegOK-0002tH-31;
	Sat, 01 Dec 2012 06:17:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14543-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Dec 2012 06:17:32 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14543: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14543 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14543/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 07:52:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 07:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TehrN-0001cx-LV; Sat, 01 Dec 2012 07:51:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TehrL-0001cs-RB
	for xen-devel@lists.xen.org; Sat, 01 Dec 2012 07:51:36 +0000
Received: from [85.158.139.83:62535] by server-5.bemta-5.messagelabs.com id
	B5/B5-11353-607B9B05; Sat, 01 Dec 2012 07:51:34 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1354348294!20562938!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14024 invoked from network); 1 Dec 2012 07:51:34 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Dec 2012 07:51:34 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so142398wib.14
	for <xen-devel@lists.xen.org>; Fri, 30 Nov 2012 23:51:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=aZDu4lEwxvPS7KSYTD7G5sYxsEuWFn4T1yCc/uOsNNg=;
	b=dMOr69ddgGRM8FiTe0QZKjk1qeZJckgR1tUH9LSQL7XujjghbalRwJ2vgVvJd0/VmI
	0VzxEjFlA3/RkSJTxnpD6/MpW5QSef5g+2ZcZYu3VfRQA12Oqwh3dS8DWExRr/BafL8H
	CVp2ZS2ZiU6o0aE5PsJZB19UH4olRlkYvLRIQX/BSHjzBDlO0kKHJR65fKJywgr3GSkx
	9L6dP1V3KtAcAdyjjsqIoGuCD9U7DKjgeKyxo3A9GDNqMntQMnqkqqOx0URdn/ObmCkD
	guUMjr1ctGIjzr3zk8ws9bEdfgX0lX+pziLwGfO+jp1RgbCZbeG7BXIhcmvSXyhXBorN
	kSbw==
Received: by 10.216.226.137 with SMTP id b9mr1261461weq.137.1354348294077;
	Fri, 30 Nov 2012 23:51:34 -0800 (PST)
Received: from [192.168.1.88]
	(host86-183-153-239.range86-183.btcentralplus.com. [86.183.153.239])
	by mx.google.com with ESMTPS id gz3sm1858373wib.2.2012.11.30.23.51.32
	(version=SSLv3 cipher=OTHER); Fri, 30 Nov 2012 23:51:33 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Sat, 01 Dec 2012 07:51:29 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <CCDF6781.464C6%keir.xen@gmail.com>
Thread-Topic: [PATCH] mini-os: shutdown_thread depends on xenbus
Thread-Index: Ac3PmKvseozAajEXTU2kRH81TCExVw==
In-Reply-To: <20121130231354.GA5857@type.youpi.perso.aquilenet.fr>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mini-os: shutdown_thread depends on xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/11/2012 23:13, "Samuel Thibault" <samuel.thibault@ens-lyon.org> wrote:

> Daniel De Graaf, le Fri 30 Nov 2012 15:44:49 -0500, a écrit :
>> This fixes the build of the xenstore stub domain, which should never be
>> shut down and so does not need this feature.
>
> Oops, indeed.
> We should probably also comment out the wait queue and variables, to
> make sure no code references it?

I applied Daniel's patch to fix the build. Feel free to follow up with
further cleanup, of course.

 -- Keir

> Samuel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 08:45:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 08:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Teigy-0003GU-M4; Sat, 01 Dec 2012 08:44:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <axboe@kernel.dk>) id 1Teigw-0003GP-Fw
	for xen-devel@lists.xensource.com; Sat, 01 Dec 2012 08:44:54 +0000
Received: from [85.158.143.35:35592] by server-2.bemta-4.messagelabs.com id
	E8/53-28922-583C9B05; Sat, 01 Dec 2012 08:44:53 +0000
X-Env-Sender: axboe@kernel.dk
X-Msg-Ref: server-4.tower-21.messagelabs.com!1354351489!5321314!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTA1MTEz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6457 invoked from network); 1 Dec 2012 08:44:50 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Dec 2012 08:44:50 -0000
Received: from 87-104-106-3-dynamic-customer.profibernet.dk ([87.104.106.3]
	helo=kernel.dk)
	by merlin.infradead.org with esmtpsa (Exim 4.76 #1 (Red Hat Linux))
	id 1Teigp-0004Cm-ST; Sat, 01 Dec 2012 08:44:48 +0000
Received: from [192.168.0.26] (lenny.home.kernel.dk [192.168.0.26])
	by kernel.dk (Postfix) with ESMTPA id 3F9F356330;
	Sat,  1 Dec 2012 09:44:46 +0100 (CET)
Message-ID: <50B9C37E.8050607@kernel.dk>
Date: Sat, 01 Dec 2012 09:44:46 +0100
From: Jens Axboe <axboe@kernel.dk>
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <20121130220649.GA1893@phenom.dumpdata.com>
In-Reply-To: <20121130220649.GA1893@phenom.dumpdata.com>
X-Enigmail-Version: 1.4.6
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.8 bug-fixes..
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-11-30 23:06, Konrad Rzeszutek Wilk wrote:
> Hey Jens
> 
> Please git pull the following branch
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.8
> 
> which has one cleanup and one tiny fix.

Pulled, thanks Konrad.

-- 
Jens Axboe


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 09:10:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 09:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tej5Z-0003rg-TG; Sat, 01 Dec 2012 09:10:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tej5Y-0003rb-MF
	for xen-devel@lists.xen.org; Sat, 01 Dec 2012 09:10:20 +0000
Received: from [85.158.143.35:16749] by server-2.bemta-4.messagelabs.com id
	8A/DB-28922-B79C9B05; Sat, 01 Dec 2012 09:10:19 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1354353018!4631900!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19259 invoked from network); 1 Dec 2012 09:10:18 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Dec 2012 09:10:18 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tej4W-00026F-ST; Sat, 01 Dec 2012 09:09:16 +0000
Date: Sat, 1 Dec 2012 09:09:16 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121201090916.GA7987@ocelot.phlegethon.org>
References: <50B8EE13.6070301@citrix.com> <50B9050B.7090709@citrix.com>
	<20121130202420.GC95877@ocelot.phlegethon.org>
	<50B93964.4040609@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B93964.4040609@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: Mats Petersson <mats.petersson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 22:55 +0000 on 30 Nov (1354316132), Andrew Cooper wrote:
> Any spinlocking whatsoever is out, until we completely fix the
> re-entrant entry to do_{nmi,mce}(), at which point spinlocks are
> ok so long as they are exclusively used inside their respective
> handlers.

AFAICT spinlocks are bad news even if they're only in their respective
handlers, as one CPU can get (NMI (MCE)) while the other gets (MCE (NMI)).

> >>> 2) Faults on the MCE path will re-enable NMIs, as will the iret of the
> >>> MCE itself if an MCE interrupts an NMI.
> >> The same questions apply as to #1 (just replace NMI with MCE)
> > Andrew pointed out that some MCE code uses rdmsr_safe().
> >
> > FWIW, I think that constraining MCE and NMI code not to do anything that
> > can fault is perfectly reasonable.  The MCE code has grown a lot
> > recently and probably needs an audit to check for spinlocks, faults &c.
> 
> Yes.  However, being able to deal gracefully with the case where we miss
> things on code review which touches the NMI/MCE paths is certainly
> better than crashing in subtle ways.

Agreed.

> At the moment, the clearing of the MCIP bit is quite early in a few of
> the cpu family specific MCE handlers.  As it is an architectural MSR, I
> was considering moving it outside the family specific handlers, and making
> it one of the last things on the MCE path, to help reduce the race
> condition window until we properly fix reentrant MCEs.

Why not have a per-cpu mce-in-progress flag, and clear MCIP early?  That
way you get a panic instead of silently losing a CPU.

> >>> [1] In an effort to prevent a flamewar with my comment, the situation we
> >>> find ourselves in now is almost certainly the result of unforeseen
> >>> interactions of individual features, but we are left to pick up the many
> >>> pieces in a way which can't completely be solved.
> >>>
> >> Happy to have my comments completely shot down into little bits, but I'm 
> >> worried that we're looking to solve a problem that doesn't actually 
> >> need solving - at least as long as the code in the respective handlers 
> >> is "doing the right thing", and if that happens to be broken, then we 
> >> should fix THAT, not build lots of extra code to recover from such a thing.
> > I agree.  The only things we can't fix by DTRT in do_nmi() and do_mce()
> > are:
> >  - IRET in SMM mode re-enabling NMIs; and
> >  - detecting _every_ case where we get a nested NMI/MCE (all we can 
> >    do is detect _most_ cases, but the detection is just so we can print
> >    a message before the crash).
> 
> We would need to modify the asm stubs to detect nesting.

We can detect _almost all_ nesting from C with an in-progress flag.  We
can probably expand that to cover all nesting by pushing the
flag-setting/flag-clearing out to the asm but that'd still be only a
couple of lines of change - a lot simpler than what we'd need to allow
nested MCE/NMIs to continue without crashing.

>  I think it is
> unreasonable to expect the C functions do_{nmi,mce}() to be reentrantly
> safe.

Good. :)  I'm not suggesting that.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 09:10:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 09:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tej5Z-0003rg-TG; Sat, 01 Dec 2012 09:10:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tej5Y-0003rb-MF
	for xen-devel@lists.xen.org; Sat, 01 Dec 2012 09:10:20 +0000
Received: from [85.158.143.35:16749] by server-2.bemta-4.messagelabs.com id
	8A/DB-28922-B79C9B05; Sat, 01 Dec 2012 09:10:19 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1354353018!4631900!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19259 invoked from network); 1 Dec 2012 09:10:18 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Dec 2012 09:10:18 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tej4W-00026F-ST; Sat, 01 Dec 2012 09:09:16 +0000
Date: Sat, 1 Dec 2012 09:09:16 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121201090916.GA7987@ocelot.phlegethon.org>
References: <50B8EE13.6070301@citrix.com> <50B9050B.7090709@citrix.com>
	<20121130202420.GC95877@ocelot.phlegethon.org>
	<50B93964.4040609@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B93964.4040609@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: Mats Petersson <mats.petersson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 22:55 +0000 on 30 Nov (1354316132), Andrew Cooper wrote:
> Any spinlocking whatsoever is out until we completely fix the
> re-entrant entry to do_{nmi,mce}(), at which point spinlocks are OK so
> long as they are used exclusively inside their respective handlers.

AFAICT spinlocks are bad news even if they're only in their respective
handlers, as one CPU can get (NMI (MCE)) while the other gets (MCE (NMI)).

> >>> 2) Faults on the MCE path will re-enable NMIs, as will the iret of the
> >>> MCE itself if an MCE interrupts an NMI.
> >> The same questions apply as to #1 (just replace NMI with MCE)
> > Andrew pointed out that some MCE code uses rdmsr_safe().
> >
> > FWIW, I think that constraining MCE and NMI code not to do anything that
> > can fault is perfectly reasonable.  The MCE code has grown a lot
> > recently and probably needs an audit to check for spinlocks, faults &c.
> 
> Yes.  However, being able to deal gracefully with the case where we miss
> things on code review which touch the NMI/MCE paths is certainly
> better than crashing in subtle ways.

Agreed.

> At the moment, the clearing of the MCIP bit is quite early in a few of
> the cpu family specific MCE handlers.  As it is an architectural MSR, I
> was considering moving it outside the family specific handlers, and
> making it one of the last things on the MCE path, to help reduce the
> race condition window until we properly fix reentrant MCEs.

Why not have a per-cpu mce-in-progress flag, and clear MCIP early?  That
way you get a panic instead of silently losing a CPU.

> >>> [1] In an effort to prevent a flamewar with my comment, the situation we
> >>> find ourselves in now is almost certainly the result of unforeseen
> >>> interactions of individual features, but we are left to pick up the many
> >>> pieces in a way which can't completely be solved.
> >>>
> >> Happy to have my comments completely shot down into little bits, but I'm 
> >> worrying that we're looking to solve a problem that doesn't actually 
> >> need solving - at least as long as the code in the respective handlers 
> >> is "doing the right thing", and if that happens to be broken, then we 
> >> should fix THAT, not build lots of extra code to recover from such a thing.
> > I agree.  The only things we can't fix by DTRT in do_nmi() and do_mce()
> > are:
> >  - IRET in SMM mode re-enabling NMIs; and
> >  - detecting _every_ case where we get a nested NMI/MCE (all we can 
> >    do is detect _most_ cases, but the detection is just so we can print
> >    a message before the crash).
> 
> We would need to modify the asm stubs to detect nesting.

We can detect _almost all_ nesting from C with an in-progress flag.  We
can probably expand that to cover all nesting by pushing the
flag-setting/flag-clearing out to the asm but that'd still be only a
couple of lines of change - a lot simpler than what we'd need to allow
nested MCE/NMIs to continue without crashing.

>  I think it is
> unreasonable to expect the C functions do_{nmi,mce}() to be reentrantly
> safe.

Good. :)  I'm not suggesting that.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 14:04:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 14:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TenfR-0002Bk-DY; Sat, 01 Dec 2012 14:03:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TenfQ-0002Bf-AP
	for xen-devel@lists.xen.org; Sat, 01 Dec 2012 14:03:40 +0000
Received: from [85.158.139.211:56586] by server-8.bemta-5.messagelabs.com id
	00/A2-06050-B3E0AB05; Sat, 01 Dec 2012 14:03:39 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354370618!18641206!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA1MTEzMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1367 invoked from network); 1 Dec 2012 14:03:39 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Dec 2012 14:03:39 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 4B5E71257;
	Sat,  1 Dec 2012 16:03:37 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id BED82EC027; Sat,  1 Dec 2012 16:03:36 +0200 (EET)
Date: Sat, 1 Dec 2012 16:03:36 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121201140336.GS8912@reaktio.net>
References: <201211070206.qA7261Bp028589@wind.enjellic.com>
	<1352272948.12977.20.camel@hastur.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1352272948.12977.20.camel@hastur.hellion.org.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

IanJ: Just a reminder to commit these two patches to xen-4.1-testing.. 

It'd be good to have them for Xen 4.1.4.

Thanks,

-- Pasi

On Wed, Nov 07, 2012 at 08:22:28AM +0100, Ian Campbell wrote:
> On Wed, 2012-11-07 at 02:06 +0000, Dr. Greg Wettstein wrote:
> > ---------------------------------------------------------------------------
> > Backport of the following patch from development:
> > 
> > # User Ian Campbell <[hidden email]>
> > # Date 1309968705 -3600
> > # Node ID e4781aedf817c5ab36f6f3077e44c43c566a2812
> > # Parent 700d0f03d50aa6619d313c1ff6aea7fd429d28a7
> > libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> > 
> > This patch properly terminates the tapdisk2 process(es) started
> > to service a virtual block device.
> > 
> > Signed-off-by: Greg Wettstein <greg@enjellic.com>
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> > 
> > diff -r 700d0f03d50a tools/blktap2/control/tap-ctl-list.c
> > --- a/tools/blktap2/control/tap-ctl-list.c	Mon Oct 29 09:04:48 2012 +0100
> > +++ b/tools/blktap2/control/tap-ctl-list.c	Tue Nov 06 19:52:48 2012 -0600
> > @@ -506,17 +506,15 @@ out:
> >  }
> >  
> >  int
> > -tap_ctl_find_minor(const char *type, const char *path)
> > +tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
> >  {
> >  	tap_list_t **list, **_entry;
> > -	int minor, err;
> > +	int ret = -ENOENT, err;
> >  
> >  	err = tap_ctl_list(&list);
> >  	if (err)
> >  		return err;
> >  
> > -	minor = -1;
> > -
> >  	for (_entry = list; *_entry != NULL; ++_entry) {
> >  		tap_list_t *entry  = *_entry;
> >  
> > @@ -526,11 +524,13 @@ tap_ctl_find_minor(const char *type, con
> >  		if (path && (!entry->path || strcmp(entry->path, path)))
> >  			continue;
> >  
> > -		minor = entry->minor;
> > +		*tap = *entry;
> > +		tap->type = tap->path = NULL;
> > +		ret = 0;
> >  		break;
> >  	}
> >  
> >  	tap_ctl_free_list(list);
> >  
> > -	return minor >= 0 ? minor : -ENOENT;
> > +	return ret;
> >  }
> > diff -r 700d0f03d50a tools/blktap2/control/tap-ctl.h
> > --- a/tools/blktap2/control/tap-ctl.h	Mon Oct 29 09:04:48 2012 +0100
> > +++ b/tools/blktap2/control/tap-ctl.h	Tue Nov 06 19:52:48 2012 -0600
> > @@ -76,7 +76,7 @@ int tap_ctl_get_driver_id(const char *ha
> >  
> >  int tap_ctl_list(tap_list_t ***list);
> >  void tap_ctl_free_list(tap_list_t **list);
> > -int tap_ctl_find_minor(const char *type, const char *path);
> > +int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
> >  
> >  int tap_ctl_allocate(int *minor, char **devname);
> >  int tap_ctl_free(const int minor);
> > diff -r 700d0f03d50a tools/libxl/libxl_blktap2.c
> > --- a/tools/libxl/libxl_blktap2.c	Mon Oct 29 09:04:48 2012 +0100
> > +++ b/tools/libxl/libxl_blktap2.c	Tue Nov 06 19:52:48 2012 -0600
> > @@ -18,6 +18,8 @@
> >  
> >  #include "tap-ctl.h"
> >  
> > +#include <string.h>
> > +
> >  int libxl__blktap_enabled(libxl__gc *gc)
> >  {
> >      const char *msg;
> > @@ -30,12 +32,13 @@ const char *libxl__blktap_devpath(libxl_
> >  {
> >      const char *type;
> >      char *params, *devname = NULL;
> > -    int minor, err;
> > +    tap_list_t tap;
> > +    int err;
> >  
> >      type = libxl__device_disk_string_of_format(format);
> > -    minor = tap_ctl_find_minor(type, disk);
> > -    if (minor >= 0) {
> > -        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", minor);
> > +    err = tap_ctl_find(type, disk, &tap);
> > +    if (err == 0) {
> > +        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", tap.minor);
> >          if (devname)
> >              return devname;
> >      }
> > @@ -49,3 +52,28 @@ const char *libxl__blktap_devpath(libxl_
> >  
> >      return NULL;
> >  }
> > +
> > +
> > +void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
> > +{
> > +    char *path, *params, *type, *disk;
> > +    int err;
> > +    tap_list_t tap;
> > +
> > +    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
> > +    if (!path) return;
> > +
> > +    params = libxl__xs_read(gc, XBT_NULL, path);
> > +    if (!params) return;
> > +
> > +    type = params;
> > +    disk = strchr(params, ':');
> > +    if (!disk) return;
> > +
> > +    *disk++ = '\0';
> > +
> > +    err = tap_ctl_find(type, disk, &tap);
> > +    if (err < 0) return;
> > +
> > +    tap_ctl_destroy(tap.id, tap.minor);
> > +}
> > diff -r 700d0f03d50a tools/libxl/libxl_device.c
> > --- a/tools/libxl/libxl_device.c	Mon Oct 29 09:04:48 2012 +0100
> > +++ b/tools/libxl/libxl_device.c	Tue Nov 06 19:52:48 2012 -0600
> > @@ -250,6 +250,7 @@ int libxl__device_destroy(libxl_ctx *ctx
> >      if (!state)
> >          goto out;
> >      if (atoi(state) != 4) {
> > +        libxl__device_destroy_tapdisk(&gc, be_path);
> >          xs_rm(ctx->xsh, XBT_NULL, be_path);
> >          goto out;
> >      }
> > @@ -368,6 +369,7 @@ int libxl__devices_destroy(libxl_ctx *ct
> >              }
> >          }
> >      }
> > +    libxl__device_destroy_tapdisk(&gc, be_path);
> >  out:
> >      libxl__free_all(&gc);
> >      return 0;
> > diff -r 700d0f03d50a tools/libxl/libxl_internal.h
> > --- a/tools/libxl/libxl_internal.h	Mon Oct 29 09:04:48 2012 +0100
> > +++ b/tools/libxl/libxl_internal.h	Tue Nov 06 19:52:48 2012 -0600
> > @@ -314,6 +314,12 @@ _hidden const char *libxl__blktap_devpat
> >                                   const char *disk,
> >                                   libxl_disk_format format);
> >  
> > +/* libxl__device_destroy_tapdisk:
> > + *   Destroys any tapdisk process associated with the backend represented
> > + *   by be_path.
> > + */
> > +_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
> > +
> >  _hidden char *libxl__uuid2string(libxl__gc *gc, const libxl_uuid uuid);
> >  
> >  struct libxl__xen_console_reader {
> > diff -r 700d0f03d50a tools/libxl/libxl_noblktap2.c
> > --- a/tools/libxl/libxl_noblktap2.c	Mon Oct 29 09:04:48 2012 +0100
> > +++ b/tools/libxl/libxl_noblktap2.c	Tue Nov 06 19:52:48 2012 -0600
> > @@ -27,3 +27,7 @@ const char *libxl__blktap_devpath(libxl_
> >  {
> >      return NULL;
> >  }
> > +
> > +void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
> > +{
> > +}
> > ---------------------------------------------------------------------------
> > 
> > As always,
> > Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
> > 4206 N. 19th Ave.           Specializing in information infra-structure
> > Fargo, ND  58102            development.
> > PH: 701-281-1686
> > FAX: 701-281-3949           EMAIL: greg@enjellic.com
> > ------------------------------------------------------------------------------
> > "Man, despite his artistic pretensions, his sophistication and many
> >  accomplishments, owes the fact of his existence to a six-inch layer of
> >  topsoil and the fact that it rains."
> >                                 -- Anonymous writer on perspective.
> >                                    GAUSSIAN quote.
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
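The interface change in the patch above moves tap_ctl_find() to the common 0/-errno return convention, with the result delivered through a caller-supplied tap_list_t instead of being encoded in the return value (which made minor 0 indistinguishable from failure in some callers). The sketch below illustrates only that calling convention; the struct layout and the matching logic are reduced placeholders, not the real blktap2 code:

```c
#include <errno.h>
#include <string.h>

/*
 * Simplified stand-in for the new tap_ctl_find() convention: the
 * return value carries success (0) or -errno, and the match, if any,
 * is copied into a caller-supplied struct.  Fields and the "one
 * running tapdisk" behaviour here are placeholders for illustration.
 */
typedef struct { int id; int minor; } tap_list_t;

int tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
{
    /* pretend exactly one tapdisk is running: a vhd with minor 3 */
    if (strcmp(type, "vhd") == 0 && strcmp(path, "/srv/disk.vhd") == 0) {
        tap->id = 1;
        tap->minor = 3;
        return 0;
    }
    return -ENOENT;
}
```

Callers such as libxl__blktap_devpath() then test `err == 0` and read `tap.minor`, rather than comparing a combined minor-or-errno return against zero.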

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 16:20:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 16:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tepmq-0005g4-GI; Sat, 01 Dec 2012 16:19:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tepmo-0005fz-Hg
	for xen-devel@lists.xensource.com; Sat, 01 Dec 2012 16:19:26 +0000
Received: from [85.158.138.51:8127] by server-14.bemta-3.messagelabs.com id
	5B/AC-31424-D0E2AB05; Sat, 01 Dec 2012 16:19:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354378763!32372401!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31702 invoked from network); 1 Dec 2012 16:19:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Dec 2012 16:19:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,197,1355097600"; d="scan'208";a="16104703"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Dec 2012 16:19:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 1 Dec 2012 16:19:22 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tepmk-00068k-GH;
	Sat, 01 Dec 2012 16:19:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tepmk-0004Jv-13;
	Sat, 01 Dec 2012 16:19:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14547-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Dec 2012 16:19:22 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14547: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14547 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14547/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 16:39:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 16:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Teq5g-00065w-9Z; Sat, 01 Dec 2012 16:38:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Teq5e-00065r-JP
	for xen-devel@lists.xen.org; Sat, 01 Dec 2012 16:38:55 +0000
Received: from [85.158.143.99:7785] by server-2.bemta-4.messagelabs.com id
	02/34-28922-D923AB05; Sat, 01 Dec 2012 16:38:53 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354379932!18006102!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA1MTEzMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29869 invoked from network); 1 Dec 2012 16:38:53 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Dec 2012 16:38:53 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B3C182205;
	Sat,  1 Dec 2012 18:38:51 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 726BDEC027; Sat,  1 Dec 2012 18:38:51 +0200 (EET)
Date: Sat, 1 Dec 2012 18:38:51 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121201163851.GU8912@reaktio.net>
References: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] vscsiif: allow larger segments-per-request
 values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On Tue, Nov 27, 2012 at 11:37:31AM +0000, Jan Beulich wrote:
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 

Konrad: I wonder if this should be applied to:
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/xen-scsi.v1.0


-- Pasi


> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;
> 
> 
> 

> vscsiif: allow larger segments-per-request values
> 
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;

> --- sle11sp3.orig/drivers/xen/scsiback/common.h	2012-06-06 13:53:26.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/common.h	2012-11-22 14:55:58.000000000 +0100
> @@ -94,10 +94,15 @@ struct vscsibk_info {
>  	unsigned int waiting_reqs;
>  	struct page **mmap_pages;
>  
> +	struct pending_req *preq;
> +
> +	union {
> +		struct gnttab_map_grant_ref   *gmap;
> +		struct gnttab_unmap_grant_ref *gunmap;
> +	};
>  };
>  
> -typedef struct {
> -	unsigned char act;
> +typedef struct pending_req {
>  	struct vscsibk_info *info;
>  	struct scsi_device *sdev;
>  
> @@ -114,7 +119,8 @@ typedef struct {
>  	
>  	uint32_t request_bufflen;
>  	struct scatterlist *sgl;
> -	grant_ref_t gref[VSCSIIF_SG_TABLESIZE];
> +	grant_ref_t *gref;
> +	vscsiif_segment_t *segs;
>  
>  	int32_t rslt;
>  	uint32_t resid;
> @@ -123,7 +129,7 @@ typedef struct {
>  	struct list_head free_list;
>  } pending_req_t;
>  
> -
> +extern unsigned int vscsiif_segs;
>  
>  #define scsiback_get(_b) (atomic_inc(&(_b)->nr_unreplied_reqs))
>  #define scsiback_put(_b)				\
> @@ -163,7 +169,7 @@ void scsiback_release_translation_entry(
>  
>  void scsiback_cmd_exec(pending_req_t *pending_req);
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req);
> +			uint32_t resid, pending_req_t *, uint8_t act);
>  void scsiback_fast_flush_area(pending_req_t *req);
>  
>  void scsiback_rsp_emulation(pending_req_t *pending_req);
> --- sle11sp3.orig/drivers/xen/scsiback/emulate.c	2012-01-11 12:14:54.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/emulate.c	2012-11-22 14:29:27.000000000 +0100
> @@ -352,7 +352,9 @@ void scsiback_req_emulation_or_cmdexec(p
>  	else {
>  		scsiback_fast_flush_area(pending_req);
>  		scsiback_do_resp_with_sense(pending_req->sense_buffer,
> -		  pending_req->rslt, pending_req->resid, pending_req);
> +					    pending_req->rslt,
> +					    pending_req->resid, pending_req,
> +					    VSCSIIF_ACT_SCSI_CDB);
>  	}
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/interface.c	2011-10-10 11:58:37.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/interface.c	2012-11-13 13:21:10.000000000 +0100
> @@ -51,6 +51,13 @@ struct vscsibk_info *vscsibk_info_alloc(
>  	if (!info)
>  		return ERR_PTR(-ENOMEM);
>  
> +	info->gmap = kcalloc(max(sizeof(*info->gmap), sizeof(*info->gunmap)),
> +			     vscsiif_segs, GFP_KERNEL);
> +	if (!info->gmap) {
> +		kfree(info);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
>  	info->domid = domid;
>  	spin_lock_init(&info->ring_lock);
>  	atomic_set(&info->nr_unreplied_reqs, 0);
> @@ -120,6 +127,7 @@ void scsiback_disconnect(struct vscsibk_
>  
>  void scsiback_free(struct vscsibk_info *info)
>  {
> +	kfree(info->gmap);
>  	kmem_cache_free(scsiback_cachep, info);
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:11.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:16.000000000 +0100
> @@ -56,6 +56,10 @@ int vscsiif_reqs = VSCSIIF_BACK_MAX_PEND
>  module_param_named(reqs, vscsiif_reqs, int, 0);
>  MODULE_PARM_DESC(reqs, "Number of scsiback requests to allocate");
>  
> +unsigned int vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(segs, vscsiif_segs, uint, 0);
> +MODULE_PARM_DESC(segs, "Number of segments to allow per request");
> +
>  static unsigned int log_print_stat = 0;
>  module_param(log_print_stat, int, 0644);
>  
> @@ -67,7 +71,7 @@ static grant_handle_t *pending_grant_han
>  
>  static int vaddr_pagenr(pending_req_t *req, int seg)
>  {
> -	return (req - pending_reqs) * VSCSIIF_SG_TABLESIZE + seg;
> +	return (req - pending_reqs) * vscsiif_segs + seg;
>  }
>  
>  static unsigned long vaddr(pending_req_t *req, int seg)
> @@ -82,7 +86,7 @@ static unsigned long vaddr(pending_req_t
>  
>  void scsiback_fast_flush_area(pending_req_t *req)
>  {
> -	struct gnttab_unmap_grant_ref unmap[VSCSIIF_SG_TABLESIZE];
> +	struct gnttab_unmap_grant_ref *unmap = req->info->gunmap;
>  	unsigned int i, invcount = 0;
>  	grant_handle_t handle;
>  	int err;
> @@ -117,6 +121,7 @@ static pending_req_t * alloc_req(struct 
>  	if (!list_empty(&pending_free)) {
>  		req = list_entry(pending_free.next, pending_req_t, free_list);
>  		list_del(&req->free_list);
> +		req->nr_segments = 0;
>  	}
>  	spin_unlock_irqrestore(&pending_free_lock, flags);
>  	return req;
> @@ -144,7 +149,8 @@ static void scsiback_notify_work(struct 
>  }
>  
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req)
> +				 uint32_t resid, pending_req_t *pending_req,
> +				 uint8_t act)
>  {
>  	vscsiif_response_t *ring_res;
>  	struct vscsibk_info *info = pending_req->info;
> @@ -159,6 +165,7 @@ void scsiback_do_resp_with_sense(char *s
>  	ring_res = RING_GET_RESPONSE(&info->ring, info->ring.rsp_prod_pvt);
>  	info->ring.rsp_prod_pvt++;
>  
> +	ring_res->act    = act;
>  	ring_res->rslt   = result;
>  	ring_res->rqid   = pending_req->rqid;
>  
> @@ -186,7 +193,8 @@ void scsiback_do_resp_with_sense(char *s
>  	if (notify)
>  		notify_remote_via_irq(info->irq);
>  
> -	free_req(pending_req);
> +	if (act != VSCSIIF_ACT_SCSI_SG_PRESET)
> +		free_req(pending_req);
>  }
>  
>  static void scsiback_print_status(char *sense_buffer, int errors,
> @@ -225,25 +233,25 @@ static void scsiback_cmd_done(struct req
>  		scsiback_rsp_emulation(pending_req);
>  
>  	scsiback_fast_flush_area(pending_req);
> -	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req);
> +	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req,
> +				    VSCSIIF_ACT_SCSI_CDB);
>  	scsiback_put(pending_req->info);
>  
>  	__blk_put_request(req->q, req);
>  }
>  
>  
> -static int scsiback_gnttab_data_map(vscsiif_request_t *ring_req,
> -					pending_req_t *pending_req)
> +static int scsiback_gnttab_data_map(const vscsiif_segment_t *segs,
> +				    unsigned int nr_segs,
> +				    pending_req_t *pending_req)
>  {
>  	u32 flags;
> -	int write;
> -	int i, err = 0;
> -	unsigned int data_len = 0;
> -	struct gnttab_map_grant_ref map[VSCSIIF_SG_TABLESIZE];
> +	int write, err = 0;
> +	unsigned int i, j, data_len = 0;
>  	struct vscsibk_info *info   = pending_req->info;
> -
> +	struct gnttab_map_grant_ref *map = info->gmap;
>  	int data_dir = (int)pending_req->sc_data_direction;
> -	unsigned int nr_segments = (unsigned int)pending_req->nr_segments;
> +	unsigned int nr_segments = pending_req->nr_segments + nr_segs;
>  
>  	write = (data_dir == DMA_TO_DEVICE);
>  
> @@ -264,14 +272,20 @@ static int scsiback_gnttab_data_map(vscs
>  		if (write)
>  			flags |= GNTMAP_readonly;
>  
> -		for (i = 0; i < nr_segments; i++)
> +		for (i = 0; i < pending_req->nr_segments; i++)
>  			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> -						ring_req->seg[i].gref,
> +						pending_req->segs[i].gref,
> +						info->domid);
> +		for (j = 0; i < nr_segments; i++, j++)
> +			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> +						segs[j].gref,
>  						info->domid);
>  
> +
>  		err = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nr_segments);
>  		BUG_ON(err);
>  
> +		j = 0;
>  		for_each_sg (pending_req->sgl, sg, nr_segments, i) {
>  			struct page *pg;
>  
> @@ -294,8 +308,15 @@ static int scsiback_gnttab_data_map(vscs
>  			set_phys_to_machine(page_to_pfn(pg),
>  				FOREIGN_FRAME(map[i].dev_bus_addr >> PAGE_SHIFT));
>  
> -			sg_set_page(sg, pg, ring_req->seg[i].length,
> -				    ring_req->seg[i].offset);
> +			if (i < pending_req->nr_segments)
> +				sg_set_page(sg, pg,
> +					    pending_req->segs[i].length,
> +					    pending_req->segs[i].offset);
> +			else {
> +				sg_set_page(sg, pg, segs[j].length,
> +					    segs[j].offset);
> +				++j;
> +			}
>  			data_len += sg->length;
>  
>  			barrier();
> @@ -306,6 +327,8 @@ static int scsiback_gnttab_data_map(vscs
>  
>  		}
>  
> +		pending_req->nr_segments = nr_segments;
> +
>  		if (err)
>  			goto fail_flush;
>  	}
> @@ -471,7 +494,8 @@ static void scsiback_device_reset_exec(p
>  	scsiback_get(info);
>  	err = scsi_reset_provider(sdev, SCSI_TRY_RESET_DEVICE);
>  
> -	scsiback_do_resp_with_sense(NULL, err, 0, pending_req);
> +	scsiback_do_resp_with_sense(NULL, err, 0, pending_req,
> +				    VSCSIIF_ACT_SCSI_RESET);
>  	scsiback_put(info);
>  
>  	return;
> @@ -489,13 +513,11 @@ static int prepare_pending_reqs(struct v
>  {
>  	struct scsi_device *sdev;
>  	struct ids_tuple vir;
> +	unsigned int nr_segs;
>  	int err = -EINVAL;
>  
>  	DPRINTK("%s\n",__FUNCTION__);
>  
> -	pending_req->rqid       = ring_req->rqid;
> -	pending_req->act        = ring_req->act;
> -
>  	pending_req->info       = info;
>  
>  	pending_req->v_chn = vir.chn = ring_req->channel;
> @@ -525,11 +547,10 @@ static int prepare_pending_reqs(struct v
>  		goto invalid_value;
>  	}
>  
> -	pending_req->nr_segments = ring_req->nr_segments;
> +	nr_segs = ring_req->nr_segments;
>  	barrier();
> -	if (pending_req->nr_segments > VSCSIIF_SG_TABLESIZE) {
> -		DPRINTK("scsiback: invalid parameter nr_seg = %d\n",
> -			pending_req->nr_segments);
> +	if (pending_req->nr_segments + nr_segs > vscsiif_segs) {
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
>  		err = -EINVAL;
>  		goto invalid_value;
>  	}
> @@ -546,7 +567,7 @@ static int prepare_pending_reqs(struct v
>  	
>  	pending_req->timeout_per_command = ring_req->timeout_per_command;
>  
> -	if(scsiback_gnttab_data_map(ring_req, pending_req)) {
> +	if (scsiback_gnttab_data_map(ring_req->seg, nr_segs, pending_req)) {
>  		DPRINTK("scsiback: invalid buffer\n");
>  		err = -EINVAL;
>  		goto invalid_value;
> @@ -558,6 +579,20 @@ invalid_value:
>  	return err;
>  }
>  
> +static void latch_segments(pending_req_t *pending_req,
> +			   const struct vscsiif_sg_list *sgl)
> +{
> +	unsigned int nr_segs = sgl->nr_segments;
> +
> +	barrier();
> +	if (pending_req->nr_segments + nr_segs <= vscsiif_segs) {
> +		memcpy(pending_req->segs + pending_req->nr_segments,
> +		       sgl->seg, nr_segs * sizeof(*sgl->seg));
> +		pending_req->nr_segments += nr_segs;
> +	}
> +	else
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
> +}
>  
>  static int _scsiback_do_cmd_fn(struct vscsibk_info *info)
>  {
> @@ -575,9 +610,11 @@ static int _scsiback_do_cmd_fn(struct vs
>  	rmb();
>  
>  	while ((rc != rp)) {
> +		int act, rqid;
> +
>  		if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
>  			break;
> -		pending_req = alloc_req(info);
> +		pending_req = info->preq ?: alloc_req(info);
>  		if (NULL == pending_req) {
>  			more_to_do = 1;
>  			break;
> @@ -586,32 +623,55 @@ static int _scsiback_do_cmd_fn(struct vs
>  		ring_req = RING_GET_REQUEST(ring, rc);
>  		ring->req_cons = ++rc;
>  
> +		act = ring_req->act;
> +		rqid = ring_req->rqid;
> +		barrier();
> +		if (!pending_req->nr_segments)
> +			pending_req->rqid = rqid;
> +		else if (pending_req->rqid != rqid)
> +			DPRINTK("scsiback: invalid rqid %04x, expected %04x\n",
> +				rqid, pending_req->rqid);
> +
> +		info->preq = NULL;
> +		if (pending_req->rqid != rqid) {
> +			scsiback_do_resp_with_sense(NULL, DRIVER_INVALID << 24,
> +						    0, pending_req, act);
> +			continue;
> +		}
> +
> +		if (act == VSCSIIF_ACT_SCSI_SG_PRESET) {
> +			latch_segments(pending_req, (void *)ring_req);
> +			info->preq = pending_req;
> +			scsiback_do_resp_with_sense(NULL, 0, 0,
> +						    pending_req, act);
> +			continue;
> +		}
> +
>  		err = prepare_pending_reqs(info, ring_req,
>  						pending_req);
>  		if (err == -EINVAL) {
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		} else if (err == -ENODEV) {
>  			scsiback_do_resp_with_sense(NULL, (DID_NO_CONNECT << 16),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  
> -		if (pending_req->act == VSCSIIF_ACT_SCSI_CDB) {
> -
> +		if (act == VSCSIIF_ACT_SCSI_CDB) {
>  			/* The Host mode is through as for Emulation. */
>  			if (info->feature == VSCSI_TYPE_HOST)
>  				scsiback_cmd_exec(pending_req);
>  			else
>  				scsiback_req_emulation_or_cmdexec(pending_req);
>  
> -		} else if (pending_req->act == VSCSIIF_ACT_SCSI_RESET) {
> +		} else if (act == VSCSIIF_ACT_SCSI_RESET) {
>  			scsiback_device_reset_exec(pending_req);
>  		} else {
>  			pr_err("scsiback: invalid parameter for request\n");
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  	}
> @@ -673,17 +733,32 @@ static int __init scsiback_init(void)
>  	if (!is_running_on_xen())
>  		return -ENODEV;
>  
> -	mmap_pages = vscsiif_reqs * VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs < VSCSIIF_SG_TABLESIZE)
> +		vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs != (uint8_t)vscsiif_segs)
> +		return -EINVAL;
> +	mmap_pages = vscsiif_reqs * vscsiif_segs;
>  
>  	pending_reqs          = kzalloc(sizeof(pending_reqs[0]) *
>  					vscsiif_reqs, GFP_KERNEL);
> +	if (!pending_reqs)
> +		return -ENOMEM;
>  	pending_grant_handles = kmalloc(sizeof(pending_grant_handles[0]) *
>  					mmap_pages, GFP_KERNEL);
>  	pending_pages         = alloc_empty_pages_and_pagevec(mmap_pages);
>  
> -	if (!pending_reqs || !pending_grant_handles || !pending_pages)
> +	if (!pending_grant_handles || !pending_pages)
>  		goto out_of_memory;
>  
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		pending_reqs[i].gref = kcalloc(sizeof(*pending_reqs->gref),
> +					       vscsiif_segs, GFP_KERNEL);
> +		pending_reqs[i].segs = kcalloc(sizeof(*pending_reqs->segs),
> +					       vscsiif_segs, GFP_KERNEL);
> +		if (!pending_reqs[i].gref || !pending_reqs[i].segs)
> +			goto out_of_memory;
> +	}
> +
>  	for (i = 0; i < mmap_pages; i++)
>  		pending_grant_handles[i] = SCSIBACK_INVALID_HANDLE;
>  
> @@ -705,6 +780,10 @@ static int __init scsiback_init(void)
>  out_interface:
>  	scsiback_interface_exit();
>  out_of_memory:
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
>  	free_empty_pages_and_pagevec(pending_pages, mmap_pages);
> @@ -715,12 +794,17 @@ out_of_memory:
>  #if 0
>  static void __exit scsiback_exit(void)
>  {
> +	unsigned int i;
> +
>  	scsiback_xenbus_unregister();
>  	scsiback_interface_exit();
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
> -	free_empty_pages_and_pagevec(pending_pages, (vscsiif_reqs * VSCSIIF_SG_TABLESIZE));
> -
> +	free_empty_pages_and_pagevec(pending_pages, vscsiif_reqs * vscsiif_segs);
>  }
>  #endif
>  
> --- sle11sp3.orig/drivers/xen/scsiback/xenbus.c	2011-06-30 17:04:59.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/xenbus.c	2012-11-13 14:36:16.000000000 +0100
> @@ -339,6 +339,13 @@ static int scsiback_probe(struct xenbus_
>  	if (val)
>  		be->info->feature = VSCSI_TYPE_HOST;
>  
> +	if (vscsiif_segs > VSCSIIF_SG_TABLESIZE) {
> +		err = xenbus_printf(XBT_NIL, dev->nodename, "segs-per-req",
> +				    "%u", vscsiif_segs);
> +		if (err)
> +			xenbus_dev_error(dev, err, "writing segs-per-req");
> +	}
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> --- sle11sp3.orig/drivers/xen/scsifront/common.h	2011-01-31 17:29:16.000000000 +0100
> +++ sle11sp3/drivers/xen/scsifront/common.h	2012-11-22 13:45:50.000000000 +0100
> @@ -95,7 +95,7 @@ struct vscsifrnt_shadow {
>  
>  	/* requested struct scsi_cmnd is stored from kernel */
>  	unsigned long req_scsi_cmnd;
> -	int gref[VSCSIIF_SG_TABLESIZE];
> +	int gref[SG_ALL];
>  };
>  
>  struct vscsifrnt_info {
> @@ -110,7 +110,6 @@ struct vscsifrnt_info {
>  
>  	grant_ref_t ring_ref;
>  	struct vscsiif_front_ring ring;
> -	struct vscsiif_response	ring_res;
>  
>  	struct vscsifrnt_shadow shadow[VSCSIIF_MAX_REQS];
>  	uint32_t shadow_free;
> @@ -119,6 +118,12 @@ struct vscsifrnt_info {
>  	wait_queue_head_t wq;
>  	unsigned int waiting_resp;
>  
> +	struct {
> +		struct scsi_cmnd *sc;
> +		unsigned int rqid;
> +		unsigned int done;
> +		vscsiif_segment_t segs[];
> +	} active;
>  };
>  
>  #define DPRINTK(_f, _a...)				\
> --- sle11sp3.orig/drivers/xen/scsifront/scsifront.c	2011-06-28 18:57:14.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/scsifront.c	2012-11-22 16:37:35.000000000 +0100
> @@ -106,6 +106,66 @@ irqreturn_t scsifront_intr(int irq, void
>  	return IRQ_HANDLED;
>  }
>  
> +static bool push_cmd_to_ring(struct vscsifrnt_info *info,
> +			     vscsiif_request_t *ring_req)
> +{
> +	unsigned int left, rqid = info->active.rqid;
> +	struct scsi_cmnd *sc;
> +
> +	for (; ; ring_req = NULL) {
> +		struct vscsiif_sg_list *sgl;
> +
> +		if (!ring_req) {
> +			struct vscsiif_front_ring *ring = &info->ring;
> +
> +			ring_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +			ring->req_prod_pvt++;
> +			ring_req->rqid = rqid;
> +		}
> +
> +		left = info->shadow[rqid].nr_segments - info->active.done;
> +		if (left <= VSCSIIF_SG_TABLESIZE)
> +			break;
> +
> +		sgl = (void *)ring_req;
> +		sgl->act = VSCSIIF_ACT_SCSI_SG_PRESET;
> +
> +		if (left > VSCSIIF_SG_LIST_SIZE)
> +			left = VSCSIIF_SG_LIST_SIZE;
> +		memcpy(sgl->seg, info->active.segs + info->active.done,
> +		       left * sizeof(*sgl->seg));
> +
> +		sgl->nr_segments = left;
> +		info->active.done += left;
> +
> +		if (RING_FULL(&info->ring))
> +			return false;
> +	}
> +
> +	sc = info->active.sc;
> +
> +	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> +	ring_req->id      = sc->device->id;
> +	ring_req->lun     = sc->device->lun;
> +	ring_req->channel = sc->device->channel;
> +	ring_req->cmd_len = sc->cmd_len;
> +
> +	if ( sc->cmd_len )
> +		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> +	else
> +		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> +
> +	ring_req->sc_data_direction   = sc->sc_data_direction;
> +	ring_req->timeout_per_command = sc->request->timeout / HZ;
> +	ring_req->nr_segments         = left;
> +
> +	memcpy(ring_req->seg, info->active.segs + info->active.done,
> +	       left * sizeof(*ring_req->seg));
> +
> +	info->active.sc = NULL;
> +
> +	return !RING_FULL(&info->ring);
> +}
>  
>  static void scsifront_gnttab_done(struct vscsifrnt_shadow *s, uint32_t id)
>  {
> @@ -194,6 +254,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		
>  		ring_res = RING_GET_RESPONSE(&info->ring, i);
>  
> +		if (info->host->sg_tablesize > VSCSIIF_SG_TABLESIZE) {
> +			u8 act = ring_res->act;
> +
> +			if (act == VSCSIIF_ACT_SCSI_SG_PRESET)
> +				continue;
> +			if (act != info->shadow[ring_res->rqid].act)
> +				DPRINTK("Bogus backend response (%02x vs %02x)\n",
> +					act, info->shadow[ring_res->rqid].act);
> +		}
> +
>  		if (info->shadow[ring_res->rqid].act == VSCSIIF_ACT_SCSI_CDB)
>  			scsifront_cdb_cmd_done(info, ring_res);
>  		else
> @@ -208,8 +278,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		info->ring.sring->rsp_event = i + 1;
>  	}
>  
> -	spin_unlock_irqrestore(&info->io_lock, flags);
> +	spin_unlock(&info->io_lock);
> +
> +	spin_lock(info->host->host_lock);
> +
> +	if (info->active.sc && !RING_FULL(&info->ring)) {
> +		push_cmd_to_ring(info, NULL);
> +		scsifront_do_request(info);
> +	}
>  
> +	spin_unlock_irqrestore(info->host->host_lock, flags);
>  
>  	/* Yield point for this unbounded loop. */
>  	cond_resched();
> @@ -242,7 +320,8 @@ int scsifront_schedule(void *data)
>  
>  
>  static int map_data_for_request(struct vscsifrnt_info *info,
> -		struct scsi_cmnd *sc, vscsiif_request_t *ring_req, uint32_t id)
> +				struct scsi_cmnd *sc,
> +				struct vscsifrnt_shadow *shadow)
>  {
>  	grant_ref_t gref_head;
>  	struct page *page;
> @@ -254,7 +333,7 @@ static int map_data_for_request(struct v
>  	if (sc->sc_data_direction == DMA_NONE)
>  		return 0;
>  
> -	err = gnttab_alloc_grant_references(VSCSIIF_SG_TABLESIZE, &gref_head);
> +	err = gnttab_alloc_grant_references(info->host->sg_tablesize, &gref_head);
>  	if (err) {
>  		pr_err("scsifront: gnttab_alloc_grant_references() error\n");
>  		return -ENOMEM;
> @@ -266,7 +345,7 @@ static int map_data_for_request(struct v
>  		unsigned int data_len = scsi_bufflen(sc);
>  
>  		nr_pages = (data_len + sgl->offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -		if (nr_pages > VSCSIIF_SG_TABLESIZE) {
> +		if (nr_pages > info->host->sg_tablesize) {
>  			pr_err("scsifront: Unable to map request_buffer for command!\n");
>  			ref_cnt = (-E2BIG);
>  			goto big_to_sg;
> @@ -294,10 +373,10 @@ static int map_data_for_request(struct v
>  				gnttab_grant_foreign_access_ref(ref, info->dev->otherend_id,
>  					buffer_pfn, write);
>  
> -				info->shadow[id].gref[ref_cnt]  = ref;
> -				ring_req->seg[ref_cnt].gref     = ref;
> -				ring_req->seg[ref_cnt].offset   = (uint16_t)off;
> -				ring_req->seg[ref_cnt].length   = (uint16_t)bytes;
> +				shadow->gref[ref_cnt] = ref;
> +				info->active.segs[ref_cnt].gref   = ref;
> +				info->active.segs[ref_cnt].offset = off;
> +				info->active.segs[ref_cnt].length = bytes;
>  
>  				buffer_pfn++;
>  				len -= bytes;
> @@ -336,34 +415,27 @@ static int scsifront_queuecommand(struct
>  		return SCSI_MLQUEUE_HOST_BUSY;
>  	}
>  
> +	if (info->active.sc && !push_cmd_to_ring(info, NULL)) {
> +		scsifront_do_request(info);
> +		spin_unlock_irqrestore(shost->host_lock, flags);
> +		return SCSI_MLQUEUE_HOST_BUSY;
> +	}
> +
>  	sc->result    = 0;
>  
>  	ring_req          = scsifront_pre_request(info);
>  	rqid              = ring_req->rqid;
> -	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> -
> -	ring_req->id      = sc->device->id;
> -	ring_req->lun     = sc->device->lun;
> -	ring_req->channel = sc->device->channel;
> -	ring_req->cmd_len = sc->cmd_len;
>  
>  	BUG_ON(sc->cmd_len > VSCSIIF_MAX_COMMAND_SIZE);
>  
> -	if ( sc->cmd_len )
> -		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> -	else
> -		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> -
> -	ring_req->sc_data_direction   = (uint8_t)sc->sc_data_direction;
> -	ring_req->timeout_per_command = (sc->request->timeout / HZ);
> -
>  	info->shadow[rqid].req_scsi_cmnd     = (unsigned long)sc;
>  	info->shadow[rqid].sc_data_direction = sc->sc_data_direction;
> -	info->shadow[rqid].act               = ring_req->act;
> +	info->shadow[rqid].act               = VSCSIIF_ACT_SCSI_CDB;
>  
> -	ref_cnt = map_data_for_request(info, sc, ring_req, rqid);
> +	ref_cnt = map_data_for_request(info, sc, &info->shadow[rqid]);
>  	if (ref_cnt < 0) {
>  		add_id_to_freelist(info, rqid);
> +		scsifront_do_request(info);
>  		spin_unlock_irqrestore(shost->host_lock, flags);
>  		if (ref_cnt == (-ENOMEM))
>  			return SCSI_MLQUEUE_HOST_BUSY;
> @@ -372,9 +444,13 @@ static int scsifront_queuecommand(struct
>  		return 0;
>  	}
>  
> -	ring_req->nr_segments          = (uint8_t)ref_cnt;
>  	info->shadow[rqid].nr_segments = ref_cnt;
>  
> +	info->active.sc  = sc;
> +	info->active.rqid = rqid;
> +	info->active.done = 0;
> +	push_cmd_to_ring(info, ring_req);
> +
>  	scsifront_do_request(info);
>  	spin_unlock_irqrestore(shost->host_lock, flags);
>  
> --- sle11sp3.orig/drivers/xen/scsifront/xenbus.c	2012-10-02 14:32:45.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/xenbus.c	2012-11-21 13:35:47.000000000 +0100
> @@ -43,6 +43,10 @@
>    #define DEFAULT_TASK_COMM_LEN	TASK_COMM_LEN
>  #endif
>  
> +static unsigned int max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(max_segs, max_nr_segs, uint, 0);
> +MODULE_PARM_DESC(max_segs, "Maximum number of segments per request");
> +
>  extern struct scsi_host_template scsifront_sht;
>  
>  static void scsifront_free(struct vscsifrnt_info *info)
> @@ -181,7 +185,9 @@ static int scsifront_probe(struct xenbus
>  	int i, err = -ENOMEM;
>  	char name[DEFAULT_TASK_COMM_LEN];
>  
> -	host = scsi_host_alloc(&scsifront_sht, sizeof(*info));
> +	host = scsi_host_alloc(&scsifront_sht,
> +			       offsetof(struct vscsifrnt_info,
> +					active.segs[max_nr_segs]));
>  	if (!host) {
>  		xenbus_dev_fatal(dev, err, "fail to allocate scsi host");
>  		return err;
> @@ -223,7 +229,7 @@ static int scsifront_probe(struct xenbus
>  	host->max_id      = VSCSIIF_MAX_TARGET;
>  	host->max_channel = 0;
>  	host->max_lun     = VSCSIIF_MAX_LUN;
> -	host->max_sectors = (VSCSIIF_SG_TABLESIZE - 1) * PAGE_SIZE / 512;
> +	host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
>  	host->max_cmd_len = VSCSIIF_MAX_COMMAND_SIZE;
>  
>  	err = scsi_add_host(host, &dev->dev);
> @@ -278,6 +284,23 @@ static int scsifront_disconnect(struct v
>  	return 0;
>  }
>  
> +static void scsifront_read_backend_params(struct xenbus_device *dev,
> +					  struct vscsifrnt_info *info)
> +{
> +	unsigned int nr_segs;
> +	int ret;
> +	struct Scsi_Host *host = info->host;
> +
> +	ret = xenbus_scanf(XBT_NIL, dev->otherend, "segs-per-req", "%u",
> +			   &nr_segs);
> +	if (ret == 1 && nr_segs > host->sg_tablesize) {
> +		host->sg_tablesize = min(nr_segs, max_nr_segs);
> +		dev_info(&dev->dev, "using up to %d SG entries\n",
> +			 host->sg_tablesize);
> +		host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
> +	}
> +}
> +
>  #define VSCSIFRONT_OP_ADD_LUN	1
>  #define VSCSIFRONT_OP_DEL_LUN	2
>  
> @@ -368,6 +391,7 @@ static void scsifront_backend_changed(st
>  		break;
>  
>  	case XenbusStateConnected:
> +		scsifront_read_backend_params(dev, info);
>  		if (xenbus_read_driver_state(dev->nodename) ==
>  			XenbusStateInitialised) {
>  			scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN);
> @@ -413,8 +437,13 @@ static DEFINE_XENBUS_DRIVER(scsifront, ,
>  	.otherend_changed	= scsifront_backend_changed,
>  );
>  
> -int scsifront_xenbus_init(void)
> +int __init scsifront_xenbus_init(void)
>  {
> +	if (max_nr_segs > SG_ALL)
> +		max_nr_segs = SG_ALL;
> +	if (max_nr_segs < VSCSIIF_SG_TABLESIZE)
> +		max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +
>  	return xenbus_register_frontend(&scsifront_driver);
>  }
>  
> --- sle11sp3.orig/include/xen/interface/io/vscsiif.h	2008-07-21 11:00:33.000000000 +0200
> +++ sle11sp3/include/xen/interface/io/vscsiif.h	2012-11-22 14:32:31.000000000 +0100
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  
>  #define VSCSIIF_BACK_MAX_PENDING_REQS    128
> @@ -53,6 +54,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -69,18 +76,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 16:39:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Dec 2012 16:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Teq5g-00065w-9Z; Sat, 01 Dec 2012 16:38:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Teq5e-00065r-JP
	for xen-devel@lists.xen.org; Sat, 01 Dec 2012 16:38:55 +0000
Received: from [85.158.143.99:7785] by server-2.bemta-4.messagelabs.com id
	02/34-28922-D923AB05; Sat, 01 Dec 2012 16:38:53 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354379932!18006102!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA1MTEzMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29869 invoked from network); 1 Dec 2012 16:38:53 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Dec 2012 16:38:53 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B3C182205;
	Sat,  1 Dec 2012 18:38:51 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 726BDEC027; Sat,  1 Dec 2012 18:38:51 +0200 (EET)
Date: Sat, 1 Dec 2012 18:38:51 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121201163851.GU8912@reaktio.net>
References: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] vscsiif: allow larger segments-per-request
 values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On Tue, Nov 27, 2012 at 11:37:31AM +0000, Jan Beulich wrote:
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 

Konrad: I wonder if this should be applied to:
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/xen-scsi.v1.0


-- Pasi


> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;
> 
> 
> 

> vscsiif: allow larger segments-per-request values
> 
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;

> --- sle11sp3.orig/drivers/xen/scsiback/common.h	2012-06-06 13:53:26.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/common.h	2012-11-22 14:55:58.000000000 +0100
> @@ -94,10 +94,15 @@ struct vscsibk_info {
>  	unsigned int waiting_reqs;
>  	struct page **mmap_pages;
>  
> +	struct pending_req *preq;
> +
> +	union {
> +		struct gnttab_map_grant_ref   *gmap;
> +		struct gnttab_unmap_grant_ref *gunmap;
> +	};
>  };
>  
> -typedef struct {
> -	unsigned char act;
> +typedef struct pending_req {
>  	struct vscsibk_info *info;
>  	struct scsi_device *sdev;
>  
> @@ -114,7 +119,8 @@ typedef struct {
>  	
>  	uint32_t request_bufflen;
>  	struct scatterlist *sgl;
> -	grant_ref_t gref[VSCSIIF_SG_TABLESIZE];
> +	grant_ref_t *gref;
> +	vscsiif_segment_t *segs;
>  
>  	int32_t rslt;
>  	uint32_t resid;
> @@ -123,7 +129,7 @@ typedef struct {
>  	struct list_head free_list;
>  } pending_req_t;
>  
> -
> +extern unsigned int vscsiif_segs;
>  
>  #define scsiback_get(_b) (atomic_inc(&(_b)->nr_unreplied_reqs))
>  #define scsiback_put(_b)				\
> @@ -163,7 +169,7 @@ void scsiback_release_translation_entry(
>  
>  void scsiback_cmd_exec(pending_req_t *pending_req);
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req);
> +			uint32_t resid, pending_req_t *, uint8_t act);
>  void scsiback_fast_flush_area(pending_req_t *req);
>  
>  void scsiback_rsp_emulation(pending_req_t *pending_req);
> --- sle11sp3.orig/drivers/xen/scsiback/emulate.c	2012-01-11 12:14:54.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/emulate.c	2012-11-22 14:29:27.000000000 +0100
> @@ -352,7 +352,9 @@ void scsiback_req_emulation_or_cmdexec(p
>  	else {
>  		scsiback_fast_flush_area(pending_req);
>  		scsiback_do_resp_with_sense(pending_req->sense_buffer,
> -		  pending_req->rslt, pending_req->resid, pending_req);
> +					    pending_req->rslt,
> +					    pending_req->resid, pending_req,
> +					    VSCSIIF_ACT_SCSI_CDB);
>  	}
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/interface.c	2011-10-10 11:58:37.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/interface.c	2012-11-13 13:21:10.000000000 +0100
> @@ -51,6 +51,13 @@ struct vscsibk_info *vscsibk_info_alloc(
>  	if (!info)
>  		return ERR_PTR(-ENOMEM);
>  
> +	info->gmap = kcalloc(max(sizeof(*info->gmap), sizeof(*info->gunmap)),
> +			     vscsiif_segs, GFP_KERNEL);
> +	if (!info->gmap) {
> +		kfree(info);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
>  	info->domid = domid;
>  	spin_lock_init(&info->ring_lock);
>  	atomic_set(&info->nr_unreplied_reqs, 0);
> @@ -120,6 +127,7 @@ void scsiback_disconnect(struct vscsibk_
>  
>  void scsiback_free(struct vscsibk_info *info)
>  {
> +	kfree(info->gmap);
>  	kmem_cache_free(scsiback_cachep, info);
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:11.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:16.000000000 +0100
> @@ -56,6 +56,10 @@ int vscsiif_reqs = VSCSIIF_BACK_MAX_PEND
>  module_param_named(reqs, vscsiif_reqs, int, 0);
>  MODULE_PARM_DESC(reqs, "Number of scsiback requests to allocate");
>  
> +unsigned int vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(segs, vscsiif_segs, uint, 0);
> +MODULE_PARM_DESC(segs, "Number of segments to allow per request");
> +
>  static unsigned int log_print_stat = 0;
>  module_param(log_print_stat, int, 0644);
>  
> @@ -67,7 +71,7 @@ static grant_handle_t *pending_grant_han
>  
>  static int vaddr_pagenr(pending_req_t *req, int seg)
>  {
> -	return (req - pending_reqs) * VSCSIIF_SG_TABLESIZE + seg;
> +	return (req - pending_reqs) * vscsiif_segs + seg;
>  }
>  
>  static unsigned long vaddr(pending_req_t *req, int seg)
> @@ -82,7 +86,7 @@ static unsigned long vaddr(pending_req_t
>  
>  void scsiback_fast_flush_area(pending_req_t *req)
>  {
> -	struct gnttab_unmap_grant_ref unmap[VSCSIIF_SG_TABLESIZE];
> +	struct gnttab_unmap_grant_ref *unmap = req->info->gunmap;
>  	unsigned int i, invcount = 0;
>  	grant_handle_t handle;
>  	int err;
> @@ -117,6 +121,7 @@ static pending_req_t * alloc_req(struct 
>  	if (!list_empty(&pending_free)) {
>  		req = list_entry(pending_free.next, pending_req_t, free_list);
>  		list_del(&req->free_list);
> +		req->nr_segments = 0;
>  	}
>  	spin_unlock_irqrestore(&pending_free_lock, flags);
>  	return req;
> @@ -144,7 +149,8 @@ static void scsiback_notify_work(struct 
>  }
>  
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req)
> +				 uint32_t resid, pending_req_t *pending_req,
> +				 uint8_t act)
>  {
>  	vscsiif_response_t *ring_res;
>  	struct vscsibk_info *info = pending_req->info;
> @@ -159,6 +165,7 @@ void scsiback_do_resp_with_sense(char *s
>  	ring_res = RING_GET_RESPONSE(&info->ring, info->ring.rsp_prod_pvt);
>  	info->ring.rsp_prod_pvt++;
>  
> +	ring_res->act    = act;
>  	ring_res->rslt   = result;
>  	ring_res->rqid   = pending_req->rqid;
>  
> @@ -186,7 +193,8 @@ void scsiback_do_resp_with_sense(char *s
>  	if (notify)
>  		notify_remote_via_irq(info->irq);
>  
> -	free_req(pending_req);
> +	if (act != VSCSIIF_ACT_SCSI_SG_PRESET)
> +		free_req(pending_req);
>  }
>  
>  static void scsiback_print_status(char *sense_buffer, int errors,
> @@ -225,25 +233,25 @@ static void scsiback_cmd_done(struct req
>  		scsiback_rsp_emulation(pending_req);
>  
>  	scsiback_fast_flush_area(pending_req);
> -	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req);
> +	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req,
> +				    VSCSIIF_ACT_SCSI_CDB);
>  	scsiback_put(pending_req->info);
>  
>  	__blk_put_request(req->q, req);
>  }
>  
>  
> -static int scsiback_gnttab_data_map(vscsiif_request_t *ring_req,
> -					pending_req_t *pending_req)
> +static int scsiback_gnttab_data_map(const vscsiif_segment_t *segs,
> +				    unsigned int nr_segs,
> +				    pending_req_t *pending_req)
>  {
>  	u32 flags;
> -	int write;
> -	int i, err = 0;
> -	unsigned int data_len = 0;
> -	struct gnttab_map_grant_ref map[VSCSIIF_SG_TABLESIZE];
> +	int write, err = 0;
> +	unsigned int i, j, data_len = 0;
>  	struct vscsibk_info *info   = pending_req->info;
> -
> +	struct gnttab_map_grant_ref *map = info->gmap;
>  	int data_dir = (int)pending_req->sc_data_direction;
> -	unsigned int nr_segments = (unsigned int)pending_req->nr_segments;
> +	unsigned int nr_segments = pending_req->nr_segments + nr_segs;
>  
>  	write = (data_dir == DMA_TO_DEVICE);
>  
> @@ -264,14 +272,20 @@ static int scsiback_gnttab_data_map(vscs
>  		if (write)
>  			flags |= GNTMAP_readonly;
>  
> -		for (i = 0; i < nr_segments; i++)
> +		for (i = 0; i < pending_req->nr_segments; i++)
>  			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> -						ring_req->seg[i].gref,
> +						pending_req->segs[i].gref,
> +						info->domid);
> +		for (j = 0; i < nr_segments; i++, j++)
> +			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> +						segs[j].gref,
>  						info->domid);
>  
> +
>  		err = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nr_segments);
>  		BUG_ON(err);
>  
> +		j = 0;
>  		for_each_sg (pending_req->sgl, sg, nr_segments, i) {
>  			struct page *pg;
>  
> @@ -294,8 +308,15 @@ static int scsiback_gnttab_data_map(vscs
>  			set_phys_to_machine(page_to_pfn(pg),
>  				FOREIGN_FRAME(map[i].dev_bus_addr >> PAGE_SHIFT));
>  
> -			sg_set_page(sg, pg, ring_req->seg[i].length,
> -				    ring_req->seg[i].offset);
> +			if (i < pending_req->nr_segments)
> +				sg_set_page(sg, pg,
> +					    pending_req->segs[i].length,
> +					    pending_req->segs[i].offset);
> +			else {
> +				sg_set_page(sg, pg, segs[j].length,
> +					    segs[j].offset);
> +				++j;
> +			}
>  			data_len += sg->length;
>  
>  			barrier();
> @@ -306,6 +327,8 @@ static int scsiback_gnttab_data_map(vscs
>  
>  		}
>  
> +		pending_req->nr_segments = nr_segments;
> +
>  		if (err)
>  			goto fail_flush;
>  	}
> @@ -471,7 +494,8 @@ static void scsiback_device_reset_exec(p
>  	scsiback_get(info);
>  	err = scsi_reset_provider(sdev, SCSI_TRY_RESET_DEVICE);
>  
> -	scsiback_do_resp_with_sense(NULL, err, 0, pending_req);
> +	scsiback_do_resp_with_sense(NULL, err, 0, pending_req,
> +				    VSCSIIF_ACT_SCSI_RESET);
>  	scsiback_put(info);
>  
>  	return;
> @@ -489,13 +513,11 @@ static int prepare_pending_reqs(struct v
>  {
>  	struct scsi_device *sdev;
>  	struct ids_tuple vir;
> +	unsigned int nr_segs;
>  	int err = -EINVAL;
>  
>  	DPRINTK("%s\n",__FUNCTION__);
>  
> -	pending_req->rqid       = ring_req->rqid;
> -	pending_req->act        = ring_req->act;
> -
>  	pending_req->info       = info;
>  
>  	pending_req->v_chn = vir.chn = ring_req->channel;
> @@ -525,11 +547,10 @@ static int prepare_pending_reqs(struct v
>  		goto invalid_value;
>  	}
>  
> -	pending_req->nr_segments = ring_req->nr_segments;
> +	nr_segs = ring_req->nr_segments;
>  	barrier();
> -	if (pending_req->nr_segments > VSCSIIF_SG_TABLESIZE) {
> -		DPRINTK("scsiback: invalid parameter nr_seg = %d\n",
> -			pending_req->nr_segments);
> +	if (pending_req->nr_segments + nr_segs > vscsiif_segs) {
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
>  		err = -EINVAL;
>  		goto invalid_value;
>  	}
> @@ -546,7 +567,7 @@ static int prepare_pending_reqs(struct v
>  	
>  	pending_req->timeout_per_command = ring_req->timeout_per_command;
>  
> -	if(scsiback_gnttab_data_map(ring_req, pending_req)) {
> +	if (scsiback_gnttab_data_map(ring_req->seg, nr_segs, pending_req)) {
>  		DPRINTK("scsiback: invalid buffer\n");
>  		err = -EINVAL;
>  		goto invalid_value;
> @@ -558,6 +579,20 @@ invalid_value:
>  	return err;
>  }
>  
> +static void latch_segments(pending_req_t *pending_req,
> +			   const struct vscsiif_sg_list *sgl)
> +{
> +	unsigned int nr_segs = sgl->nr_segments;
> +
> +	barrier();
> +	if (pending_req->nr_segments + nr_segs <= vscsiif_segs) {
> +		memcpy(pending_req->segs + pending_req->nr_segments,
> +		       sgl->seg, nr_segs * sizeof(*sgl->seg));
> +		pending_req->nr_segments += nr_segs;
> +	}
> +	else
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
> +}
>  
>  static int _scsiback_do_cmd_fn(struct vscsibk_info *info)
>  {
> @@ -575,9 +610,11 @@ static int _scsiback_do_cmd_fn(struct vs
>  	rmb();
>  
>  	while ((rc != rp)) {
> +		int act, rqid;
> +
>  		if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
>  			break;
> -		pending_req = alloc_req(info);
> +		pending_req = info->preq ?: alloc_req(info);
>  		if (NULL == pending_req) {
>  			more_to_do = 1;
>  			break;
> @@ -586,32 +623,55 @@ static int _scsiback_do_cmd_fn(struct vs
>  		ring_req = RING_GET_REQUEST(ring, rc);
>  		ring->req_cons = ++rc;
>  
> +		act = ring_req->act;
> +		rqid = ring_req->rqid;
> +		barrier();
> +		if (!pending_req->nr_segments)
> +			pending_req->rqid = rqid;
> +		else if (pending_req->rqid != rqid)
> +			DPRINTK("scsiback: invalid rqid %04x, expected %04x\n",
> +				rqid, pending_req->rqid);
> +
> +		info->preq = NULL;
> +		if (pending_req->rqid != rqid) {
> +			scsiback_do_resp_with_sense(NULL, DRIVER_INVALID << 24,
> +						    0, pending_req, act);
> +			continue;
> +		}
> +
> +		if (act == VSCSIIF_ACT_SCSI_SG_PRESET) {
> +			latch_segments(pending_req, (void *)ring_req);
> +			info->preq = pending_req;
> +			scsiback_do_resp_with_sense(NULL, 0, 0,
> +						    pending_req, act);
> +			continue;
> +		}
> +
>  		err = prepare_pending_reqs(info, ring_req,
>  						pending_req);
>  		if (err == -EINVAL) {
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		} else if (err == -ENODEV) {
>  			scsiback_do_resp_with_sense(NULL, (DID_NO_CONNECT << 16),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  
> -		if (pending_req->act == VSCSIIF_ACT_SCSI_CDB) {
> -
> +		if (act == VSCSIIF_ACT_SCSI_CDB) {
>  			/* The Host mode is through as for Emulation. */
>  			if (info->feature == VSCSI_TYPE_HOST)
>  				scsiback_cmd_exec(pending_req);
>  			else
>  				scsiback_req_emulation_or_cmdexec(pending_req);
>  
> -		} else if (pending_req->act == VSCSIIF_ACT_SCSI_RESET) {
> +		} else if (act == VSCSIIF_ACT_SCSI_RESET) {
>  			scsiback_device_reset_exec(pending_req);
>  		} else {
>  			pr_err("scsiback: invalid parameter for request\n");
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  	}
> @@ -673,17 +733,32 @@ static int __init scsiback_init(void)
>  	if (!is_running_on_xen())
>  		return -ENODEV;
>  
> -	mmap_pages = vscsiif_reqs * VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs < VSCSIIF_SG_TABLESIZE)
> +		vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs != (uint8_t)vscsiif_segs)
> +		return -EINVAL;
> +	mmap_pages = vscsiif_reqs * vscsiif_segs;
>  
>  	pending_reqs          = kzalloc(sizeof(pending_reqs[0]) *
>  					vscsiif_reqs, GFP_KERNEL);
> +	if (!pending_reqs)
> +		return -ENOMEM;
>  	pending_grant_handles = kmalloc(sizeof(pending_grant_handles[0]) *
>  					mmap_pages, GFP_KERNEL);
>  	pending_pages         = alloc_empty_pages_and_pagevec(mmap_pages);
>  
> -	if (!pending_reqs || !pending_grant_handles || !pending_pages)
> +	if (!pending_grant_handles || !pending_pages)
>  		goto out_of_memory;
>  
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		pending_reqs[i].gref = kcalloc(sizeof(*pending_reqs->gref),
> +					       vscsiif_segs, GFP_KERNEL);
> +		pending_reqs[i].segs = kcalloc(sizeof(*pending_reqs->segs),
> +					       vscsiif_segs, GFP_KERNEL);
> +		if (!pending_reqs[i].gref || !pending_reqs[i].segs)
> +			goto out_of_memory;
> +	}
> +
>  	for (i = 0; i < mmap_pages; i++)
>  		pending_grant_handles[i] = SCSIBACK_INVALID_HANDLE;
>  
> @@ -705,6 +780,10 @@ static int __init scsiback_init(void)
>  out_interface:
>  	scsiback_interface_exit();
>  out_of_memory:
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
>  	free_empty_pages_and_pagevec(pending_pages, mmap_pages);
> @@ -715,12 +794,17 @@ out_of_memory:
>  #if 0
>  static void __exit scsiback_exit(void)
>  {
> +	unsigned int i;
> +
>  	scsiback_xenbus_unregister();
>  	scsiback_interface_exit();
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
> -	free_empty_pages_and_pagevec(pending_pages, (vscsiif_reqs * VSCSIIF_SG_TABLESIZE));
> -
> +	free_empty_pages_and_pagevec(pending_pages, vscsiif_reqs * vscsiif_segs);
>  }
>  #endif
>  
> --- sle11sp3.orig/drivers/xen/scsiback/xenbus.c	2011-06-30 17:04:59.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/xenbus.c	2012-11-13 14:36:16.000000000 +0100
> @@ -339,6 +339,13 @@ static int scsiback_probe(struct xenbus_
>  	if (val)
>  		be->info->feature = VSCSI_TYPE_HOST;
>  
> +	if (vscsiif_segs > VSCSIIF_SG_TABLESIZE) {
> +		err = xenbus_printf(XBT_NIL, dev->nodename, "segs-per-req",
> +				    "%u", vscsiif_segs);
> +		if (err)
> +			xenbus_dev_error(dev, err, "writing segs-per-req");
> +	}
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> --- sle11sp3.orig/drivers/xen/scsifront/common.h	2011-01-31 17:29:16.000000000 +0100
> +++ sle11sp3/drivers/xen/scsifront/common.h	2012-11-22 13:45:50.000000000 +0100
> @@ -95,7 +95,7 @@ struct vscsifrnt_shadow {
>  
>  	/* requested struct scsi_cmnd is stored from kernel */
>  	unsigned long req_scsi_cmnd;
> -	int gref[VSCSIIF_SG_TABLESIZE];
> +	int gref[SG_ALL];
>  };
>  
>  struct vscsifrnt_info {
> @@ -110,7 +110,6 @@ struct vscsifrnt_info {
>  
>  	grant_ref_t ring_ref;
>  	struct vscsiif_front_ring ring;
> -	struct vscsiif_response	ring_res;
>  
>  	struct vscsifrnt_shadow shadow[VSCSIIF_MAX_REQS];
>  	uint32_t shadow_free;
> @@ -119,6 +118,12 @@ struct vscsifrnt_info {
>  	wait_queue_head_t wq;
>  	unsigned int waiting_resp;
>  
> +	struct {
> +		struct scsi_cmnd *sc;
> +		unsigned int rqid;
> +		unsigned int done;
> +		vscsiif_segment_t segs[];
> +	} active;
>  };
>  
>  #define DPRINTK(_f, _a...)				\
> --- sle11sp3.orig/drivers/xen/scsifront/scsifront.c	2011-06-28 18:57:14.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/scsifront.c	2012-11-22 16:37:35.000000000 +0100
> @@ -106,6 +106,66 @@ irqreturn_t scsifront_intr(int irq, void
>  	return IRQ_HANDLED;
>  }
>  
> +static bool push_cmd_to_ring(struct vscsifrnt_info *info,
> +			     vscsiif_request_t *ring_req)
> +{
> +	unsigned int left, rqid = info->active.rqid;
> +	struct scsi_cmnd *sc;
> +
> +	for (; ; ring_req = NULL) {
> +		struct vscsiif_sg_list *sgl;
> +
> +		if (!ring_req) {
> +			struct vscsiif_front_ring *ring = &info->ring;
> +
> +			ring_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +			ring->req_prod_pvt++;
> +			ring_req->rqid = rqid;
> +		}
> +
> +		left = info->shadow[rqid].nr_segments - info->active.done;
> +		if (left <= VSCSIIF_SG_TABLESIZE)
> +			break;
> +
> +		sgl = (void *)ring_req;
> +		sgl->act = VSCSIIF_ACT_SCSI_SG_PRESET;
> +
> +		if (left > VSCSIIF_SG_LIST_SIZE)
> +			left = VSCSIIF_SG_LIST_SIZE;
> +		memcpy(sgl->seg, info->active.segs + info->active.done,
> +		       left * sizeof(*sgl->seg));
> +
> +		sgl->nr_segments = left;
> +		info->active.done += left;
> +
> +		if (RING_FULL(&info->ring))
> +			return false;
> +	}
> +
> +	sc = info->active.sc;
> +
> +	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> +	ring_req->id      = sc->device->id;
> +	ring_req->lun     = sc->device->lun;
> +	ring_req->channel = sc->device->channel;
> +	ring_req->cmd_len = sc->cmd_len;
> +
> +	if ( sc->cmd_len )
> +		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> +	else
> +		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> +
> +	ring_req->sc_data_direction   = sc->sc_data_direction;
> +	ring_req->timeout_per_command = sc->request->timeout / HZ;
> +	ring_req->nr_segments         = left;
> +
> +	memcpy(ring_req->seg, info->active.segs + info->active.done,
> +               left * sizeof(*ring_req->seg));
> +
> +	info->active.sc = NULL;
> +
> +	return !RING_FULL(&info->ring);
> +}
>  
>  static void scsifront_gnttab_done(struct vscsifrnt_shadow *s, uint32_t id)
>  {
> @@ -194,6 +254,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		
>  		ring_res = RING_GET_RESPONSE(&info->ring, i);
>  
> +		if (info->host->sg_tablesize > VSCSIIF_SG_TABLESIZE) {
> +			u8 act = ring_res->act;
> +
> +			if (act == VSCSIIF_ACT_SCSI_SG_PRESET)
> +				continue;
> +			if (act != info->shadow[ring_res->rqid].act)
> +				DPRINTK("Bogus backend response (%02x vs %02x)\n",
> +					act, info->shadow[ring_res->rqid].act);
> +		}
> +
>  		if (info->shadow[ring_res->rqid].act == VSCSIIF_ACT_SCSI_CDB)
>  			scsifront_cdb_cmd_done(info, ring_res);
>  		else
> @@ -208,8 +278,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		info->ring.sring->rsp_event = i + 1;
>  	}
>  
> -	spin_unlock_irqrestore(&info->io_lock, flags);
> +	spin_unlock(&info->io_lock);
> +
> +	spin_lock(info->host->host_lock);
> +
> +	if (info->active.sc && !RING_FULL(&info->ring)) {
> +		push_cmd_to_ring(info, NULL);
> +		scsifront_do_request(info);
> +	}
>  
> +	spin_unlock_irqrestore(info->host->host_lock, flags);
>  
>  	/* Yield point for this unbounded loop. */
>  	cond_resched();
> @@ -242,7 +320,8 @@ int scsifront_schedule(void *data)
>  
>  
>  static int map_data_for_request(struct vscsifrnt_info *info,
> -		struct scsi_cmnd *sc, vscsiif_request_t *ring_req, uint32_t id)
> +				struct scsi_cmnd *sc,
> +				struct vscsifrnt_shadow *shadow)
>  {
>  	grant_ref_t gref_head;
>  	struct page *page;
> @@ -254,7 +333,7 @@ static int map_data_for_request(struct v
>  	if (sc->sc_data_direction == DMA_NONE)
>  		return 0;
>  
> -	err = gnttab_alloc_grant_references(VSCSIIF_SG_TABLESIZE, &gref_head);
> +	err = gnttab_alloc_grant_references(info->host->sg_tablesize, &gref_head);
>  	if (err) {
>  		pr_err("scsifront: gnttab_alloc_grant_references() error\n");
>  		return -ENOMEM;
> @@ -266,7 +345,7 @@ static int map_data_for_request(struct v
>  		unsigned int data_len = scsi_bufflen(sc);
>  
>  		nr_pages = (data_len + sgl->offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -		if (nr_pages > VSCSIIF_SG_TABLESIZE) {
> +		if (nr_pages > info->host->sg_tablesize) {
>  			pr_err("scsifront: Unable to map request_buffer for command!\n");
>  			ref_cnt = (-E2BIG);
>  			goto big_to_sg;
> @@ -294,10 +373,10 @@ static int map_data_for_request(struct v
>  				gnttab_grant_foreign_access_ref(ref, info->dev->otherend_id,
>  					buffer_pfn, write);
>  
> -				info->shadow[id].gref[ref_cnt]  = ref;
> -				ring_req->seg[ref_cnt].gref     = ref;
> -				ring_req->seg[ref_cnt].offset   = (uint16_t)off;
> -				ring_req->seg[ref_cnt].length   = (uint16_t)bytes;
> +				shadow->gref[ref_cnt] = ref;
> +				info->active.segs[ref_cnt].gref   = ref;
> +				info->active.segs[ref_cnt].offset = off;
> +				info->active.segs[ref_cnt].length = bytes;
>  
>  				buffer_pfn++;
>  				len -= bytes;
> @@ -336,34 +415,27 @@ static int scsifront_queuecommand(struct
>  		return SCSI_MLQUEUE_HOST_BUSY;
>  	}
>  
> +	if (info->active.sc && !push_cmd_to_ring(info, NULL)) {
> +		scsifront_do_request(info);
> +		spin_unlock_irqrestore(shost->host_lock, flags);
> +		return SCSI_MLQUEUE_HOST_BUSY;
> +	}
> +
>  	sc->result    = 0;
>  
>  	ring_req          = scsifront_pre_request(info);
>  	rqid              = ring_req->rqid;
> -	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> -
> -	ring_req->id      = sc->device->id;
> -	ring_req->lun     = sc->device->lun;
> -	ring_req->channel = sc->device->channel;
> -	ring_req->cmd_len = sc->cmd_len;
>  
>  	BUG_ON(sc->cmd_len > VSCSIIF_MAX_COMMAND_SIZE);
>  
> -	if ( sc->cmd_len )
> -		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> -	else
> -		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> -
> -	ring_req->sc_data_direction   = (uint8_t)sc->sc_data_direction;
> -	ring_req->timeout_per_command = (sc->request->timeout / HZ);
> -
>  	info->shadow[rqid].req_scsi_cmnd     = (unsigned long)sc;
>  	info->shadow[rqid].sc_data_direction = sc->sc_data_direction;
> -	info->shadow[rqid].act               = ring_req->act;
> +	info->shadow[rqid].act               = VSCSIIF_ACT_SCSI_CDB;
>  
> -	ref_cnt = map_data_for_request(info, sc, ring_req, rqid);
> +	ref_cnt = map_data_for_request(info, sc, &info->shadow[rqid]);
>  	if (ref_cnt < 0) {
>  		add_id_to_freelist(info, rqid);
> +		scsifront_do_request(info);
>  		spin_unlock_irqrestore(shost->host_lock, flags);
>  		if (ref_cnt == (-ENOMEM))
>  			return SCSI_MLQUEUE_HOST_BUSY;
> @@ -372,9 +444,13 @@ static int scsifront_queuecommand(struct
>  		return 0;
>  	}
>  
> -	ring_req->nr_segments          = (uint8_t)ref_cnt;
>  	info->shadow[rqid].nr_segments = ref_cnt;
>  
> +	info->active.sc  = sc;
> +	info->active.rqid = rqid;
> +	info->active.done = 0;
> +	push_cmd_to_ring(info, ring_req);
> +
>  	scsifront_do_request(info);
>  	spin_unlock_irqrestore(shost->host_lock, flags);
>  
> --- sle11sp3.orig/drivers/xen/scsifront/xenbus.c	2012-10-02 14:32:45.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/xenbus.c	2012-11-21 13:35:47.000000000 +0100
> @@ -43,6 +43,10 @@
>    #define DEFAULT_TASK_COMM_LEN	TASK_COMM_LEN
>  #endif
>  
> +static unsigned int max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(max_segs, max_nr_segs, uint, 0);
> +MODULE_PARM_DESC(max_segs, "Maximum number of segments per request");
> +
>  extern struct scsi_host_template scsifront_sht;
>  
>  static void scsifront_free(struct vscsifrnt_info *info)
> @@ -181,7 +185,9 @@ static int scsifront_probe(struct xenbus
>  	int i, err = -ENOMEM;
>  	char name[DEFAULT_TASK_COMM_LEN];
>  
> -	host = scsi_host_alloc(&scsifront_sht, sizeof(*info));
> +	host = scsi_host_alloc(&scsifront_sht,
> +			       offsetof(struct vscsifrnt_info,
> +					active.segs[max_nr_segs]));
>  	if (!host) {
>  		xenbus_dev_fatal(dev, err, "fail to allocate scsi host");
>  		return err;
> @@ -223,7 +229,7 @@ static int scsifront_probe(struct xenbus
>  	host->max_id      = VSCSIIF_MAX_TARGET;
>  	host->max_channel = 0;
>  	host->max_lun     = VSCSIIF_MAX_LUN;
> -	host->max_sectors = (VSCSIIF_SG_TABLESIZE - 1) * PAGE_SIZE / 512;
> +	host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
>  	host->max_cmd_len = VSCSIIF_MAX_COMMAND_SIZE;
>  
>  	err = scsi_add_host(host, &dev->dev);
> @@ -278,6 +284,23 @@ static int scsifront_disconnect(struct v
>  	return 0;
>  }
>  
> +static void scsifront_read_backend_params(struct xenbus_device *dev,
> +					  struct vscsifrnt_info *info)
> +{
> +	unsigned int nr_segs;
> +	int ret;
> +	struct Scsi_Host *host = info->host;
> +
> +	ret = xenbus_scanf(XBT_NIL, dev->otherend, "segs-per-req", "%u",
> +			   &nr_segs);
> +	if (ret == 1 && nr_segs > host->sg_tablesize) {
> +		host->sg_tablesize = min(nr_segs, max_nr_segs);
> +		dev_info(&dev->dev, "using up to %d SG entries\n",
> +			 host->sg_tablesize);
> +		host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
> +	}
> +}
> +
>  #define VSCSIFRONT_OP_ADD_LUN	1
>  #define VSCSIFRONT_OP_DEL_LUN	2
>  
> @@ -368,6 +391,7 @@ static void scsifront_backend_changed(st
>  		break;
>  
>  	case XenbusStateConnected:
> +		scsifront_read_backend_params(dev, info);
>  		if (xenbus_read_driver_state(dev->nodename) ==
>  			XenbusStateInitialised) {
>  			scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN);
> @@ -413,8 +437,13 @@ static DEFINE_XENBUS_DRIVER(scsifront, ,
>  	.otherend_changed	= scsifront_backend_changed,
>  );
>  
> -int scsifront_xenbus_init(void)
> +int __init scsifront_xenbus_init(void)
>  {
> +	if (max_nr_segs > SG_ALL)
> +		max_nr_segs = SG_ALL;
> +	if (max_nr_segs < VSCSIIF_SG_TABLESIZE)
> +		max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +
>  	return xenbus_register_frontend(&scsifront_driver);
>  }
>  
> --- sle11sp3.orig/include/xen/interface/io/vscsiif.h	2008-07-21 11:00:33.000000000 +0200
> +++ sle11sp3/include/xen/interface/io/vscsiif.h	2012-11-22 14:32:31.000000000 +0100
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  
>  #define VSCSIIF_BACK_MAX_PENDING_REQS    128
> @@ -53,6 +54,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -69,18 +76,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 01 20:47:39 2012
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14548-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Dec 2012 20:46:55 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14548: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14548 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14548/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 00:25:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 00:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TexMt-0008UW-1q; Sun, 02 Dec 2012 00:25:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1TexMs-0008UM-AY
	for xen-devel@lists.xen.org; Sun, 02 Dec 2012 00:25:10 +0000
Received: from [193.109.254.147:29011] by server-11.bemta-14.messagelabs.com
	id 1D/24-29027-5EF9AB05; Sun, 02 Dec 2012 00:25:09 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354407907!8612285!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4839 invoked from network); 2 Dec 2012 00:25:07 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-2.tower-27.messagelabs.com with SMTP;
	2 Dec 2012 00:25:07 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id qB20OxM7012533;
	Sat, 1 Dec 2012 18:25:00 -0600
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id qB20OxtB012532;
	Sat, 1 Dec 2012 18:24:59 -0600
Date: Sat, 1 Dec 2012 18:24:59 -0600
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201212020024.qB20OxtB012532@wind.enjellic.com>
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: xen-devel@lists.xen.org
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Sat, 01 Dec 2012 18:25:00 -0600 (CST)
Subject: [Xen-devel] Updated ATI passthrough patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, hope the weekend is going well for everyone.

I just put a set of updated patches to support ATI passthrough on a
primary video adapter on the FTP site.  The URLs are as follows:

	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.ati-passthrough.patch

	ftp://ftp.enjellic.com/pub/xen/xen-4.2.0.ati-passthrough.patch

These patches have been validated to work up through kernel 3.4.18
with the xm control plane.  We are currently working on validating
whether or not there are passthrough issues with xl.

The original ATI pass-through patches posted to xen-devel fail with a
qemu-dm segmentation fault on recent kernels.  This is caused by
changes which have been made with respect to proper enforcement of the
permitted port ranges on the ioperm() system call.
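
The failure mode can be sketched as follows; this is an illustrative reimplementation of the range check (the long-documented 0x400-port limit of ioperm()), not the actual kernel code:

```c
/* ioperm(from, num, on) can only grant access to I/O ports below 0x400
 * (1024); higher ports require iopl() instead.  Recent kernels enforce
 * this limit strictly, so a caller that previously got away with an
 * over-wide range now sees the call fail -- and qemu-dm crashed rather
 * than handling that error. */

#define IO_BITMAP_BITS 1024   /* ports coverable by ioperm() on x86 */

/* Illustrative version of the validity check, not the kernel source. */
static int ioperm_range_ok(unsigned long from, unsigned long num)
{
    /* reject empty ranges, overflow, and ranges past the bitmap */
    return num > 0 && from + num > from && from + num <= IO_BITMAP_BITS;
}
```

A graphics passthrough setup typically needs I/O ranges well above port 1024, which is why the old patches hit this path.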

Since these patches allow a primary graphics adapter to be used for
passthrough I wanted to remind everyone of the availability of the
following utility script:

	ftp://ftp.enjellic.com/pub/xen/run-passthrough

This script automates the process of unplugging and re-plugging a video card
and optionally a USB controller.  A script like this, or a network
login, is needed in order to use passthrough on a primary graphics
adapter.

Have a good week.

Greg

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"There's nothing in the middle of the road 'cept yellow lines and
 squashed armadillos."
                                -- Mike Hightower

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 01:14:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 01:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tey8Q-0007Lv-VD; Sun, 02 Dec 2012 01:14:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tey8P-0007Lq-E6
	for xen-devel@lists.xensource.com; Sun, 02 Dec 2012 01:14:17 +0000
Received: from [85.158.137.99:22216] by server-10.bemta-3.messagelabs.com id
	1C/53-19806-86BAAB05; Sun, 02 Dec 2012 01:14:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354410855!17523759!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8227 invoked from network); 2 Dec 2012 01:14:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Dec 2012 01:14:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,199,1355097600"; d="scan'208";a="16106236"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Dec 2012 01:14:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sun, 2 Dec 2012 01:14:14 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tey8M-0000PA-Qm;
	Sun, 02 Dec 2012 01:14:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tey8M-0007A3-EU;
	Sun, 02 Dec 2012 01:14:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14549-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Dec 2012 01:14:14 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14549: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14549 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14549/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 06:02:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 06:02:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tf2cS-0005vE-2U; Sun, 02 Dec 2012 06:01:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tf2cQ-0005v9-Ca
	for xen-devel@lists.xensource.com; Sun, 02 Dec 2012 06:01:34 +0000
Received: from [85.158.143.99:58844] by server-2.bemta-4.messagelabs.com id
	C4/8F-28922-DBEEAB05; Sun, 02 Dec 2012 06:01:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1354428092!24309392!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7035 invoked from network); 2 Dec 2012 06:01:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Dec 2012 06:01:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,200,1355097600"; d="scan'208";a="16106867"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Dec 2012 06:01:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sun, 2 Dec 2012 06:01:32 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tf2cO-0001p5-6Y;
	Sun, 02 Dec 2012 06:01:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tf2cN-0000sF-UA;
	Sun, 02 Dec 2012 06:01:32 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14550-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Dec 2012 06:01:31 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14550: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14550 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14550/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 10:08:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 10:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tf6Ss-0003hY-EZ; Sun, 02 Dec 2012 10:07:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tf6Sr-0003hT-Lb
	for xen-devel@lists.xensource.com; Sun, 02 Dec 2012 10:07:57 +0000
Received: from [85.158.143.35:4464] by server-3.bemta-4.messagelabs.com id
	A4/40-06841-C782BB05; Sun, 02 Dec 2012 10:07:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1354442876!11437309!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 420 invoked from network); 2 Dec 2012 10:07:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Dec 2012 10:07:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,201,1355097600"; d="scan'208";a="16107751"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Dec 2012 10:07:56 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Sun, 2 Dec 2012
	10:07:56 +0000
Message-ID: <1354442875.14179.2.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen.org" <ian.jackson@eu.citrix.com>
Date: Sun, 2 Dec 2012 10:07:55 +0000
In-Reply-To: <osstest-14549-mainreport@xen.org>
References: <osstest-14549-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 14549: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-12-02 at 01:14 +0000, xen.org wrote:
> flight 14549 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/14549/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-i386                    4 xen-build                 fail REGR. vs. 14482
>  build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

        fatal: Out of memory? mmap failed: Cannot allocate memory
        fatal: The remote end hung up unexpectedly
        make[1]: *** [qemu-xen-dir-find] Error 128
        
xenbits had nearly thirty stuck git-upload-pack processes, mostly dating
from 6 October. I've killed them; hopefully this will unwedge things.

netstat reports loads of ports stuck in FIN_WAIT1, the other end of most
of which is 178.172.232.70. I didn't think to check this before I killed
the extra git processes, but I'd bet those connections belonged to them.
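The cleanup described above could be scripted roughly as follows. This is
a hedged sketch, not the commands actually run on xenbits: the one-day
threshold, the use of `ps`/`netstat`, and the column layout assumed for
`netstat -tn` output are all assumptions.

```shell
# Sketch only; assumes a Linux host with procps and net-tools installed.

# 1. Spot long-running git-upload-pack processes (elapsed > 1 day), e.g.:
#      ps -C git-upload-pack -o pid=,etimes=,args= | awk '$2 > 86400'
#    then kill them, e.g. with:  pkill -f git-upload-pack

# 2. Count FIN_WAIT1 sockets per remote host.  Expects `netstat -tn`-style
#    lines on stdin: proto, recv-q, send-q, local addr, foreign addr, state.
count_fin_wait1() {
  awk '$6 == "FIN_WAIT1" { split($5, addr, ":"); seen[addr[1]]++ }
       END { for (host in seen) print seen[host], host }' | sort -rn
}
# Usage:  netstat -tn | count_fin_wait1
```

Grouping by remote address makes a single misbehaving peer (such as the
178.172.232.70 host mentioned above) stand out immediately.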

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 10:34:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 10:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tf6s7-0004RX-M8; Sun, 02 Dec 2012 10:34:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tf6s6-0004RS-8m
	for xen-devel@lists.xensource.com; Sun, 02 Dec 2012 10:34:02 +0000
Received: from [85.158.137.99:2645] by server-4.bemta-3.messagelabs.com id
	D5/E0-30023-99E2BB05; Sun, 02 Dec 2012 10:34:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354444440!12408350!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20369 invoked from network); 2 Dec 2012 10:34:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Dec 2012 10:34:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,201,1355097600"; d="scan'208";a="16107941"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Dec 2012 10:34:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sun, 2 Dec 2012 10:34:00 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tf6s4-0003DN-4n;
	Sun, 02 Dec 2012 10:34:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tf6s3-0001fP-VX;
	Sun, 02 Dec 2012 10:34:00 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14551-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Dec 2012 10:33:59 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14551: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14551 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14551/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 16:26:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 16:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfCMY-0004gv-FA; Sun, 02 Dec 2012 16:25:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfCMW-0004gq-S9
	for xen-devel@lists.xensource.com; Sun, 02 Dec 2012 16:25:49 +0000
Received: from [85.158.137.99:9794] by server-16.bemta-3.messagelabs.com id
	F7/00-07461-7018BB05; Sun, 02 Dec 2012 16:25:43 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354465542!12770545!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzNzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3492 invoked from network); 2 Dec 2012 16:25:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Dec 2012 16:25:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,201,1355097600"; d="scan'208";a="16109411"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Dec 2012 16:25:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sun, 2 Dec 2012 16:25:41 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfCMP-0004ws-B2;
	Sun, 02 Dec 2012 16:25:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfCMP-0007so-0M;
	Sun, 02 Dec 2012 16:25:41 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14552-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Dec 2012 16:25:41 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14552: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14552 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14552/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 20:30:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
From xen-devel-bounces@lists.xen.org Sun Dec 02 20:30:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 20:30:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfGAZ-0005vB-L1; Sun, 02 Dec 2012 20:29:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <agrier@poofygoof.com>) id 1TfEBV-0005WK-K6
	for xen-devel@lists.xen.org; Sun, 02 Dec 2012 18:22:33 +0000
Received: from [193.109.254.147:65038] by server-13.bemta-14.messagelabs.com
	id 79/41-11239-86C9BB05; Sun, 02 Dec 2012 18:22:32 +0000
X-Env-Sender: agrier@poofygoof.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354472550!1408799!1
X-Originating-IP: [209.162.215.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3813 invoked from network); 2 Dec 2012 18:22:31 -0000
Received: from 209-162-215-114.dq1sn.easystreet.com (HELO smtp.poofygoof.com)
	(209.162.215.114) by server-7.tower-27.messagelabs.com with SMTP;
	2 Dec 2012 18:22:31 -0000
Received: from arwen.poofy.goof.com (skolem.poofy.goof.com [10.0.0.19])
	by smtp.poofygoof.com (Postfix) with ESMTP
	id 0FD9E2940; Sun,  2 Dec 2012 10:22:27 -0800 (PST)
Received: by arwen.poofy.goof.com (Postfix, from userid 100)
	id 9772533153; Sun,  2 Dec 2012 10:22:25 -0800 (PST)
Date: Sun, 2 Dec 2012 10:22:24 -0800
From: "Aaron J. Grier" <agrier@poofygoof.com>
To: Thor Lancelot Simon <tls@panix.com>
Message-ID: <20121202182224.GU15413@arwen.poofy.goof.com>
References: <1354108851-2383-1-git-send-email-roger.pau@citrix.com>
	<1354108851-2383-2-git-send-email-roger.pau@citrix.com>
	<20121128132634.GA18277@panix.com> <50B622B4.6010103@citrix.com>
	<20121128152919.GA11947@panix.com> <50B63262.3030407@citrix.com>
	<50B64D15.2060004@boogers.sf.ca.us> <50B65592.2000000@citrix.com>
	<20121128184639.GB11388@panix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121128184639.GB11388@panix.com>
User-Agent: Mutt/1.4.2.3i
X-Mailman-Approved-At: Sun, 02 Dec 2012 20:29:41 +0000
Cc: Jeff Rizzo <riz@boogers.sf.ca.us>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"port-xen@netbsd.org" <port-xen@netbsd.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: switch NetBSD image file
	handling to Qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Nov 28, 2012 at 01:46:40PM -0500, Thor Lancelot Simon wrote:
> On Wed, Nov 28, 2012 at 07:18:58PM +0100, Roger Pau Monné wrote:
> > Well, I think the performance is not that bad, given that we are
> > using qemu-traditional and the Dom0 is not MP. We can achieve better
> > performance by fixing qemu-upstream to work with NetBSD and of
> > course getting a MP Dom0.
> 
> Why is it a reasonable assumption that adding more CPU resource will
> make a significant difference to I/O throughput?

throughput might not be affected, but wouldn't additional code execution
affect latency?

-- 
  Aaron J. Grier | "Not your ordinary poofy goof." | agrier@poofygoof.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 20:30:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 20:30:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfGAa-0005vJ-0o; Sun, 02 Dec 2012 20:29:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <agrier@poofygoof.com>) id 1TfEKr-0005Y4-KZ
	for xen-devel@lists.xen.org; Sun, 02 Dec 2012 18:32:13 +0000
Received: from [85.158.139.83:57008] by server-4.bemta-5.messagelabs.com id
	81/1C-15011-CAE9BB05; Sun, 02 Dec 2012 18:32:12 +0000
X-Env-Sender: agrier@poofygoof.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354473131!16772977!1
X-Originating-IP: [209.162.215.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13521 invoked from network); 2 Dec 2012 18:32:11 -0000
Received: from 209-162-215-114.dq1sn.easystreet.com (HELO smtp.poofygoof.com)
	(209.162.215.114) by server-8.tower-182.messagelabs.com with SMTP;
	2 Dec 2012 18:32:11 -0000
Received: from arwen.poofy.goof.com (skolem.poofy.goof.com [10.0.0.19])
	by smtp.poofygoof.com (Postfix) with ESMTP
	id 9123A2940; Sun,  2 Dec 2012 10:32:06 -0800 (PST)
Received: by arwen.poofy.goof.com (Postfix, from userid 100)
	id 0A62B33153; Sun,  2 Dec 2012 10:32:04 -0800 (PST)
Date: Sun, 2 Dec 2012 10:32:04 -0800
From: "Aaron J. Grier" <agrier@poofygoof.com>
To: Brian Buhrow <buhrow@nfbcal.org>
Message-ID: <20121202183204.GV15413@arwen.poofy.goof.com>
References: <50B73382.8070300@citrix.com>
	<201211291818.qATIIfbX012104@lothlorien.nfbcal.org>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <201211291818.qATIIfbX012104@lothlorien.nfbcal.org>
User-Agent: Mutt/1.4.2.3i
X-Mailman-Approved-At: Sun, 02 Dec 2012 20:29:41 +0000
Cc: Toby Karyadi <toby.karyadi@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"port-xen@netbsd.org" <port-xen@NetBSD.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: switch NetBSD image file
	handling to Qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Nov 29, 2012 at 10:18:41AM -0800, Brian Buhrow wrote:
> 1. Look at the filesystem type of the filesystem in which the target
> resides.  If it's an NFS filesystem, call qemu.  Otherwise, go to step
> 2.

vnd can handle backing files on NFS.
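
Brian's step 1 amounts to choosing a block backend based on the filesystem type under the target path. A minimal Python sketch of that decision logic (the `pick_backend` helper and its mount-table argument are hypothetical, purely for illustration; libxl itself is C and would ask the kernel, e.g. via statvfs(2) on NetBSD, rather than take a table):

```python
def pick_backend(target_path, mount_table):
    """Sketch of Brian's proposed step 1: qemu for NFS targets, vnd otherwise.

    mount_table maps mount-point prefixes to filesystem type names,
    e.g. {"/": "ffs", "/mnt/nfs": "nfs"}.  (Hypothetical helper --
    a real implementation would query the kernel for the fs type.)
    """
    # Longest matching mount-point prefix wins, mirroring how the
    # kernel resolves which mounted filesystem a path lives on.
    mount = max((m for m in mount_table if target_path.startswith(m)), key=len)
    return "qemu" if mount_table[mount] == "nfs" else "vnd"

print(pick_backend("/mnt/nfs/images/disk.img", {"/": "ffs", "/mnt/nfs": "nfs"}))  # prints "qemu"
```

As the reply above notes, vnd can in fact back files on NFS, so an NFS check like this would be a policy choice (e.g. for performance or locking behaviour) rather than a hard correctness requirement.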

-- 
  Aaron J. Grier | "Not your ordinary poofy goof." | agrier@poofygoof.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 02 20:59:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Dec 2012 20:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfGcS-0006H5-Gb; Sun, 02 Dec 2012 20:58:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfGcR-0006H0-97
	for xen-devel@lists.xensource.com; Sun, 02 Dec 2012 20:58:31 +0000
Received: from [85.158.139.83:44395] by server-3.bemta-5.messagelabs.com id
	14/F4-18736-6F0CBB05; Sun, 02 Dec 2012 20:58:30 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354481909!28048191!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzODQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31370 invoked from network); 2 Dec 2012 20:58:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Dec 2012 20:58:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,201,1355097600"; d="scan'208";a="16112401"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Dec 2012 20:58:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sun, 2 Dec 2012 20:58:28 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfGcO-0006I3-M3;
	Sun, 02 Dec 2012 20:58:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfGcO-0008Bc-DC;
	Sun, 02 Dec 2012 20:58:28 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14553-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Dec 2012 20:58:28 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14553: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14553 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14553/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2   fail pass in 14552

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14552 never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 01:48:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 01:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfL8V-000337-BG; Mon, 03 Dec 2012 01:47:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfL8T-000332-An
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 01:47:53 +0000
Received: from [85.158.137.99:37488] by server-5.bemta-3.messagelabs.com id
	22/7D-26311-8C40CB05; Mon, 03 Dec 2012 01:47:52 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1354499269!12695687!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzODQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20082 invoked from network); 3 Dec 2012 01:47:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 01:47:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,202,1355097600"; d="scan'208";a="16113537"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 01:47:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 3 Dec 2012 01:47:49 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfL8P-0007iK-7a;
	Mon, 03 Dec 2012 01:47:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfL8J-0001hT-Vw;
	Mon, 03 Dec 2012 01:47:49 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14554-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Dec 2012 01:47:44 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14554: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14554 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14554/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-win 12 guest-localmigrate/x10     fail pass in 14553
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail in 14553 pass in 14554

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop           fail in 14553 never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 03:52:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 03:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfN4A-0004Uu-KW; Mon, 03 Dec 2012 03:51:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfN48-0004Up-Hp
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 03:51:32 +0000
Received: from [85.158.143.99:51330] by server-1.bemta-4.messagelabs.com id
	84/9A-27934-3C12CB05; Mon, 03 Dec 2012 03:51:31 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354506689!16550606!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11732 invoked from network); 3 Dec 2012 03:51:30 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 03:51:30 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so3941739iej.32
	for <xen-devel@lists.xen.org>; Sun, 02 Dec 2012 19:47:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=8dl+wDI/Rh+zSnAG3q0IaLKiD6jkY+SnAaPT8GzpIQo=;
	b=cHRlyFn8LoPCS1B6E5pmobElU0P+IxvHVFkt4AQMI5GDK5JLzd8VNtjlCoTShN7ErV
	P8r2Z/OaLMhuoI0BCbAtLPEJpWXrIm8Rl9SzWJccb0c26vy0n7gxaxb0EG2lYigWpiqH
	BNQJPkg+mPlGo8JrhhwI04h4QcL6OVT7pDWmM69qjq8S5oIRCcHJxnjrnQMs+TbR8P/d
	E34csC58ujHtbk3Z6zdtv/SAxYg6kHFO0Slbrly3OmhERwNO0JMrQqQhDtSo934w3D/v
	NYWp9Bqg5t2WHJVu2b0oJDxXt3H1KOB2G5YUMcJ12wZcU0Shsv6UwjuKeu33ADMQzY3o
	vb3g==
MIME-Version: 1.0
Received: by 10.50.34.226 with SMTP id c2mr876686igj.24.1354506474914; Sun, 02
	Dec 2012 19:47:54 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Sun, 2 Dec 2012 19:47:54 -0800 (PST)
Date: Mon, 3 Dec 2012 11:47:54 +0800
X-Google-Sender-Auth: 1SUre7HSzZ5xkFDVtiBATk39drw
Message-ID: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0642646848559356174=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0642646848559356174==
Content-Type: multipart/alternative; boundary=14dae934059548840b04cfea9d57

--14dae934059548840b04cfea9d57
Content-Type: text/plain; charset=ISO-8859-1

Hi developers,
I have hit some domU issues, and the log suggests a missing interrupt.
Details are here:
http://www.gossamer-threads.com/lists/xen/users/263938#263938
In summary, this is the suspicious log line:

(XEN) vmsi.c:122:d32767 Unsupported delivery mode 3

I have checked the code in question and found that mode 3 is a 'reserved_1'
mode. I want to trace down the source of this mode setting to root-cause
the issue, but I am not a Xen developer, and am even a newbie as a Xen user.
Could anybody give me instructions on how to enable detailed debug logging?
Better still would be advice on experiments to perform, switches to try
out, etc.

My SW config:
dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
domU: Debian wheezy 3.2.x stock kernel.

Thanks,
Timothy

--14dae934059548840b04cfea9d57--


--===============0642646848559356174==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0642646848559356174==--


List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0642646848559356174=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0642646848559356174==
Content-Type: multipart/alternative; boundary=14dae934059548840b04cfea9d57

--14dae934059548840b04cfea9d57
Content-Type: text/plain; charset=ISO-8859-1

Hi developers,
I met some domU issues, and the log suggests a missing interrupt.
Details from here:
http://www.gossamer-threads.com/lists/xen/users/263938#263938
In summary, this is the suspicious log:

(XEN) vmsi.c:122:d32767 Unsupported delivery mode 3

I've checked the code in question and found that mode 3 is a 'reserved_1'
mode.
I want to track down the source of this mode setting so I can root-cause the
issue.
But I'm not a Xen developer, and I'm still a newbie as a Xen user.
Could anybody give me instructions on how to enable detailed debug logging?
It would be even better if I could get advice about experiments to perform /
switches to try out, etc.

My SW config:
dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
domU: Debian wheezy 3.2.x stock kernel.

Thanks,
Timothy

--14dae934059548840b04cfea9d57
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi developers,<br>I met some domU issues and the log suggests missing inter=
rupt.<br>Details from here: <a href=3D"http://www.gossamer-threads.com/list=
s/xen/users/263938#263938">http://www.gossamer-threads.com/lists/xen/users/=
263938#263938</a><br>
In summary, this is the suspicious log:<br><br>(XEN) vmsi.c:122:d32767 Unsu=
pported delivery mode 3<br><br>I&#39;ve checked the code in question and fo=
und that mode 3 is an &#39;reserved_1&#39; mode.<br>I want to trace down th=
e source of this mode setting to root-cause the issue.<br>
But I&#39;m not an xen developer, and am even a newbie as a xen user. <br>C=
ould anybody give me instructions about how to enable detailed debug log?<b=
r>It could be better if I can get advice about experiments to perform / swi=
tches to try out etc.<br>
<br>My SW config:<br>dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.=
3-4)<br>domU: Debian wheezy 3.2.x stock kernel.<br><br>Thanks,<br>Timothy<b=
r>

--14dae934059548840b04cfea9d57--


--===============0642646848559356174==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0642646848559356174==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 06:09:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 06:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfPD8-0005N0-P0; Mon, 03 Dec 2012 06:08:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TfPD6-0005Mv-Uc
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 06:08:57 +0000
Received: from [85.158.137.99:63326] by server-8.bemta-3.messagelabs.com id
	44/3F-07786-3F14CB05; Mon, 03 Dec 2012 06:08:51 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1354514929!15260788!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM1NjE0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31054 invoked from network); 3 Dec 2012 06:08:50 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-15.tower-217.messagelabs.com with SMTP;
	3 Dec 2012 06:08:50 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 02 Dec 2012 22:08:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,204,1355126400"; d="scan'208";a="225680350"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by azsmga001.ch.intel.com with ESMTP; 02 Dec 2012 22:08:45 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 2 Dec 2012 22:08:44 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 2 Dec 2012 22:08:44 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Mon, 3 Dec 2012 14:08:42 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "wei.huang2@amd.com"
	<wei.huang2@amd.com>, "weiwang.dd@gmail.com" <weiwang.dd@gmail.com>, Keir
	Fraser <keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
	faults for devices used by Xen or Dom0
Thread-Index: AQHNzuAehTAqIGeL2k2bTlhbc3Vd6ZgGlvcw
Date: Mon, 3 Dec 2012 06:08:41 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A483456440339D1E9@SHSMSX101.ccr.corp.intel.com>
References: <5097FD2902000078000A66BF@nat28.tlf.novell.com>
	<CCDE3016.54628%keir@xen.org>
	<50B88F8002000078000ACC8A@nat28.tlf.novell.com>
In-Reply-To: <50B88F8002000078000ACC8A@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Tim Deegan <tim@xen.org>, Dario Faggioli <raistlin@linux.it>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
 faults for devices used by Xen or Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan,
If the phantom device support for IOMMU is in upstream, is this patch still needed? Basically, I can't figure out why several faults should be allowed before disabling bus mastering. Did you run into real issues? Thanks!
Xiantao
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, November 30, 2012 5:51 PM
> To: wei.huang2@amd.com; weiwang.dd@gmail.com; Zhang, Xiantao; Keir
> Fraser
> Cc: Dario Faggioli; xen-devel; Tim Deegan
> Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
> faults for devices used by Xen or Dom0
> 
> >>> On 30.11.12 at 10:42, Keir Fraser <keir@xen.org> wrote:
> > On 05/11/2012 16:53, "Jan Beulich" <JBeulich@suse.com> wrote:
> >
> >> Under the assumption that in these cases recurring faults aren't a
> >> security issue and it can be expected that the drivers there are
> >> going to try to take care of the problem.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >
> > This one's sat a while with no comments...
> 
> It's in already (26133:fdb69dd527cd), with Tim's and Dario's ack (who were
> the ones involved in creating the original change this modifies).
> 
> But yes, we're having a general response problem for IOMMU related stuff -
> I already asked for this to be a topic on the next community call.
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 03 07:07:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQ7R-0005rV-KV; Mon, 03 Dec 2012 07:07:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TfQ7P-0005rQ-K6
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 07:07:07 +0000
Received: from [85.158.143.99:22511] by server-3.bemta-4.messagelabs.com id
	62/33-06841-A9F4CB05; Mon, 03 Dec 2012 07:07:06 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354518388!27742632!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTU5Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24190 invoked from network); 3 Dec 2012 07:06:29 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-15.tower-216.messagelabs.com with SMTP;
	3 Dec 2012 07:06:29 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 02 Dec 2012 23:06:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,204,1355126400"; 
	d="scan'208,217";a="256416314"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga001.fm.intel.com with ESMTP; 02 Dec 2012 23:06:27 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 2 Dec 2012 23:06:27 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 2 Dec 2012 23:06:26 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Mon, 3 Dec 2012 15:06:25 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: G.R. <firemeteor@users.sourceforge.net>, xen-devel
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Issue about domU missing interrupt
Thread-Index: AQHN0QnCpYidZyDBa0yhgSm2LVST+pgGoyJg
Date: Mon, 3 Dec 2012 07:06:24 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A483456440339D2CD@SHSMSX101.ccr.corp.intel.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
In-Reply-To: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4255746506243060713=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4255746506243060713==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_"

--_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Maybe you need to provide more information about your VGA device, for example
the output of "lspci -vvv". In addition, from your log it seems the expansion
ROM BAR is not handled correctly. You may refer to this wiki page to check
whether something is missing on your side:
http://wiki.xen.org/wiki/Xen_VGA_Passthrough
Xiantao

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.o=
rg] On Behalf Of G.R.
Sent: Monday, December 03, 2012 11:48 AM
To: xen-devel
Subject: [Xen-devel] Issue about domU missing interrupt

Hi developers,
I met some domU issues and the log suggests missing interrupt.
Details from here: http://www.gossamer-threads.com/lists/xen/users/263938#2=
63938
In summary, this is the suspicious log:

(XEN) vmsi.c:122:d32767 Unsupported delivery mode 3

I've checked the code in question and found that mode 3 is an 'reserved_1' =
mode.
I want to trace down the source of this mode setting to root-cause the issu=
e.
But I'm not an xen developer, and am even a newbie as a xen user.
Could anybody give me instructions about how to enable detailed debug log?
It could be better if I can get advice about experiments to perform / switc=
hes to try out etc.

My SW config:
dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
domU: Debian wheezy 3.2.x stock kernel.

Thanks,
Timothy

--_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:Tahoma;
	panose-1:2 11 6 4 3 5 4 4 2 4;}
@font-face
	{font-family:"\@SimSun";
	panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:12.0pt;
	font-family:"Times New Roman","serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal-reply;
	font-family:"Calibri","sans-serif";
	color:#1F497D;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-family:"Calibri","sans-serif";}
@page WordSection1
	{size:8.5in 11.0in;
	margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"EN-US" link=3D"blue" vlink=3D"purple">
<div class=3D"WordSection1">
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;color:#1F497D">Maybe you need to &nbsp;p=
rovide more information about your VGA device,&nbsp; for example,&nbsp; &#8=
220;lspci &#8211;vvv&#8221;. &nbsp;&nbsp;In addition,&nbsp; from your log, =
seems expansion rom bar is not
 correctly handled. &nbsp;You may refer to this wiki page to check whether =
something is missed in your side. &nbsp;&nbsp;<a href=3D"http://wiki.xen.or=
g/wiki/Xen_VGA_Passthrough">http://wiki.xen.org/wiki/Xen_VGA_Passthrough</a=
><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;color:#1F497D">Xiantao<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;color:#1F497D"><o:p>&nbsp;</o:p></span><=
/p>
<div style=3D"border:none;border-left:solid blue 1.5pt;padding:0in 0in 0in =
4.0pt">
<div>
<div style=3D"border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in =
0in 0in">
<p class=3D"MsoNormal"><b><span style=3D"font-size:10.0pt;font-family:&quot=
;Tahoma&quot;,&quot;sans-serif&quot;">From:</span></b><span style=3D"font-s=
ize:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;"> xen-deve=
l-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org]
<b>On Behalf Of </b>G.R.<br>
<b>Sent:</b> Monday, December 03, 2012 11:48 AM<br>
<b>To:</b> xen-devel<br>
<b>Subject:</b> [Xen-devel] Issue about domU missing interrupt<o:p></o:p></=
span></p>
</div>
</div>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">Hi developers,<br>
I met some domU issues and the log suggests missing interrupt.<br>
Details from here: <a href=3D"http://www.gossamer-threads.com/lists/xen/use=
rs/263938#263938">
http://www.gossamer-threads.com/lists/xen/users/263938#263938</a><br>
In summary, this is the suspicious log:<br>
<br>
(XEN) vmsi.c:122:d32767 Unsupported delivery mode 3<br>
<br>
I've checked the code in question and found that mode 3 is an 'reserved_1' =
mode.<br>
I want to trace down the source of this mode setting to root-cause the issu=
e.<br>
But I'm not an xen developer, and am even a newbie as a xen user. <br>
Could anybody give me instructions about how to enable detailed debug log?<=
br>
It could be better if I can get advice about experiments to perform / switc=
hes to try out etc.<br>
<br>
My SW config:<br>
dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)<br>
domU: Debian wheezy 3.2.x stock kernel.<br>
<br>
Thanks,<br>
Timothy<o:p></o:p></p>
</div>
</div>
</body>
</html>

--_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_--


--===============4255746506243060713==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4255746506243060713==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 07:07:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQ7R-0005rV-KV; Mon, 03 Dec 2012 07:07:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TfQ7P-0005rQ-K6
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 07:07:07 +0000
Received: from [85.158.143.99:22511] by server-3.bemta-4.messagelabs.com id
	62/33-06841-A9F4CB05; Mon, 03 Dec 2012 07:07:06 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354518388!27742632!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTU5Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24190 invoked from network); 3 Dec 2012 07:06:29 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-15.tower-216.messagelabs.com with SMTP;
	3 Dec 2012 07:06:29 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 02 Dec 2012 23:06:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,204,1355126400"; 
	d="scan'208,217";a="256416314"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga001.fm.intel.com with ESMTP; 02 Dec 2012 23:06:27 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 2 Dec 2012 23:06:27 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 2 Dec 2012 23:06:26 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Mon, 3 Dec 2012 15:06:25 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: G.R. <firemeteor@users.sourceforge.net>, xen-devel
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Issue about domU missing interrupt
Thread-Index: AQHN0QnCpYidZyDBa0yhgSm2LVST+pgGoyJg
Date: Mon, 3 Dec 2012 07:06:24 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A483456440339D2CD@SHSMSX101.ccr.corp.intel.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
In-Reply-To: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4255746506243060713=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4255746506243060713==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_"

--_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Maybe you need to provide more information about your VGA device, for example
the output of "lspci -vvv". In addition, from your log it seems the expansion
ROM BAR is not handled correctly. You may refer to this wiki page to check
whether something is missing on your side:
http://wiki.xen.org/wiki/Xen_VGA_Passthrough
Xiantao

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.o=
rg] On Behalf Of G.R.
Sent: Monday, December 03, 2012 11:48 AM
To: xen-devel
Subject: [Xen-devel] Issue about domU missing interrupt

Hi developers,
I met some domU issues and the log suggests missing interrupt.
Details from here: http://www.gossamer-threads.com/lists/xen/users/263938#2=
63938
In summary, this is the suspicious log:

(XEN) vmsi.c:122:d32767 Unsupported delivery mode 3

I've checked the code in question and found that mode 3 is an 'reserved_1' =
mode.
I want to trace down the source of this mode setting to root-cause the issu=
e.
But I'm not an xen developer, and am even a newbie as a xen user.
Could anybody give me instructions about how to enable detailed debug log?
It could be better if I can get advice about experiments to perform / switc=
hes to try out etc.

My SW config:
dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
domU: Debian wheezy 3.2.x stock kernel.

Thanks,
Timothy

--_000_B6C2EB9186482D47BD0C5A9A483456440339D2CDSHSMSX101ccrcor_--


--===============4255746506243060713==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4255746506243060713==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 07:09:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQ9k-0005wK-6K; Mon, 03 Dec 2012 07:09:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfQ9i-0005wC-SH
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 07:09:31 +0000
Received: from [85.158.138.51:12855] by server-12.bemta-3.messagelabs.com id
	C6/FC-22757-5205CB05; Mon, 03 Dec 2012 07:09:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354518564!32399507!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzODQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29658 invoked from network); 3 Dec 2012 07:09:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 07:09:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,204,1355097600"; d="scan'208";a="16114871"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 07:09:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 3 Dec 2012 07:09:24 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfQ9c-00010G-Ck;
	Mon, 03 Dec 2012 07:09:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfQ9b-0002cC-T6;
	Mon, 03 Dec 2012 07:09:24 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14555-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Dec 2012 07:09:23 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14555: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14555 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14555/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install     fail pass in 14554
 test-amd64-amd64-xl-qemut-win 12 guest-localmigrate/x10 fail in 14554 pass in 14555

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14554 never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 07:37:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:37:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQaN-0006Ed-OP; Mon, 03 Dec 2012 07:37:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <maheen_butt26@yahoo.com>) id 1TfQaL-0006EY-T0
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 07:37:02 +0000
Received: from [85.158.138.51:28901] by server-11.bemta-3.messagelabs.com id
	0E/08-19361-C965CB05; Mon, 03 Dec 2012 07:37:00 +0000
X-Env-Sender: maheen_butt26@yahoo.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354520164!30503965!1
X-Originating-IP: [98.138.90.68]
X-SpamReason: No, hits=1.0 required=7.0 tests=FROM_HAS_ULINE_NUMS,
	HTML_60_70, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12862 invoked from network); 3 Dec 2012 07:36:05 -0000
Received: from nm5.bullet.mail.ne1.yahoo.com (HELO
	nm5.bullet.mail.ne1.yahoo.com) (98.138.90.68)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 07:36:05 -0000
Received: from [98.138.90.51] by nm5.bullet.mail.ne1.yahoo.com with NNFMP;
	03 Dec 2012 07:36:04 -0000
Received: from [98.138.89.253] by tm4.bullet.mail.ne1.yahoo.com with NNFMP;
	03 Dec 2012 07:36:03 -0000
Received: from [127.0.0.1] by omp1045.mail.ne1.yahoo.com with NNFMP;
	03 Dec 2012 07:36:03 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 807795.82174.bm@omp1045.mail.ne1.yahoo.com
Received: (qmail 34357 invoked by uid 60001); 3 Dec 2012 07:36:03 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1354520163; bh=/rqC2wc/LbHXE+XGvCax/sLeFl1Nw0dHQrvf98Ccj04=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=LA+/ieCVW3icI9Sd5r8USSIamfNAUj7ZKaoBjY1fl3Bo+icwpkIWaDQ2i5u+N7EyndRbcawGso1ZkSWoXzjC3jfF/peYIQVKzGVrfSZ43Q6r75d/uYtlbXLoTXOOHnwN5AwhoTwSQ40+FYJfsNcoVTmlkGp4EFk65QHRY9hP6D0=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=GD9WlxGeoUfvjtGtcDM6kMLoMNgE43E9YOKyQ3pQiDY9vBYxnmGr2v8KofvWq3JDVvVipO+nnh2CSC9FWXJWKMDMN2mmpXFf8aR4ypdwu+jywK96YaXPEN8zb9tif3mix+oQiLqX+9YKf05nvk0NS2CEe2tXhPwF+2WmYl6hoF0=;
X-YMail-OSG: 17dzHzoVM1l2f9HgGr662nRRqNwYypi.JwDw7ke_Xlg4fgS
	JEMW8oQ6nFJB8Xe8qPlrOeF2LXlJKhPhZsfBNiL35XQreytg22SBfSHSUpwM
	lcNnuq3XgDMKescgzEXCzvgqXW2ZGCvCqO3RCKOZm015p2W_Borhg5vmDH5N
	s__AUPnrW2IFwcf.rBmz1dqmGEntH7MtXARyPGRCKrKz3GKbXvKlGg9TJmFg
	h.GmMRyjlBTZ_2YYByTmPb3Jvo_TqPtBYCHsc3LxAvPHRLwgcx3cBvscUgK7
	YibcSpp7prFuqeKpP69fay_8CFxGEgTGzbDrDR25iZ4atSi_5Z_TL_hyWs5R
	mnaZm198LVKwVO4lAaVxmr5wV8fpVJaIn.W04zh02hEdihcmB3lTqyG27oDc
	MTMksyIZM5HOmTKVgOPaPKyO0DQZgo5wjYWYDqimORLo2jlG0yP42hGOWX1a
	zgHbdUy7TNCG14Lk1XwKNOep6WBV2K1aLPb37nXdw13oa8I9xNYsPDOJDRax
	onKd5mQ--
Received: from [111.68.102.23] by web120002.mail.ne1.yahoo.com via HTTP;
	Sun, 02 Dec 2012 23:36:03 PST
X-Rocket-MIMEInfo: 001.001,
	SGkgYWxsLAoKSSdtIGludmVzdGlnYXRpbmcgWGVuIGJvb3R1cCBzZXF1ZW5jZSBhbmQgSSdtIHN0dWNrIHdpdGggdGhlIGZ1bmN0aW9uIHNldF9jdXJyZW50KChzdHJ1Y3QgdmNwdSAqKTB4ZmZmZmYwMDApOwp0aGUgaGlnaCBsZXZlbCBpZGVhIGlzIHRoYXQgaXQgaXMgYXNzaWduaW5nIFZDUFUgdG8gcGh5c2ljYWwgQ1BVIGJ1dCBJIGNhbid0IHVuZGVyc3RhbmQgdGhlIGZvbGxvd2luZyAKCgoxKSB3aGF0IGlzIHRoZSBuZWVkIG9mIHRoaXMgaGFyZCBjb2RlZCBhZGRyZXNzP8KgIGl0IGlzIHNlZW1lZCB0aGEBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.128.478
Message-ID: <1354520163.12932.YahooMailNeo@web120002.mail.ne1.yahoo.com>
Date: Sun, 2 Dec 2012 23:36:03 -0800 (PST)
From: maheen butt <maheen_butt26@yahoo.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
MIME-Version: 1.0
Subject: [Xen-devel] set_current in Xen booting sequence
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: maheen butt <maheen_butt26@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4258545890987973407=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4258545890987973407==
Content-Type: multipart/alternative; boundary="-299933220-1941740821-1354520163=:12932"

---299933220-1941740821-1354520163=:12932
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit

Hi all,

I'm investigating the Xen bootup sequence and I'm stuck at the function
set_current((struct vcpu *)0xfffff000);
The high-level idea is that it assigns a VCPU to the physical CPU, but I
can't understand the following:

1) Why is this hard-coded address needed? It seems that a vcpu exists at
this address, but I have no idea who put a vcpu at that particular address.
(The same address is passed in both the ARM and x86 cases.)
2) I have no idea what exactly get_cpu_info() (called by set_current) is
doing (logically it ANDs and ORs with the sp register)?

Thanks
---299933220-1941740821-1354520163=:12932--


--===============4258545890987973407==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4258545890987973407==--



From xen-devel-bounces@lists.xen.org Mon Dec 03 07:37:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQao-0006G9-AD; Mon, 03 Dec 2012 07:37:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfQam-0006Fz-7r
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 07:37:28 +0000
Received: from [193.109.254.147:56414] by server-3.bemta-14.messagelabs.com id
	27/6A-01317-7B65CB05; Mon, 03 Dec 2012 07:37:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354520212!6714464!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24410 invoked from network); 3 Dec 2012 07:36:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 07:36:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 07:36:51 +0000
Message-Id: <50BC649E02000078000AD289@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 07:36:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <5097FD2902000078000A66BF@nat28.tlf.novell.com>
	<CCDE3016.54628%keir@xen.org>
	<50B88F8002000078000ACC8A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339D1E9@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A483456440339D1E9@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: KeirFraser <keir@xen.org>, "wei.huang2@amd.com" <wei.huang2@amd.com>,
	Tim Deegan <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>,
	Dario Faggioli <raistlin@linux.it>,
	"weiwang.dd@gmail.com" <weiwang.dd@gmail.com>
Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
 faults for devices used by Xen or Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 07:08, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> If the phantom device support for IOMMU is in upstream, is this patch still
> needed?

The phantom function support is unrelated to the behavioral adjustment here.

>  Basically,  I can't figure out why several faults should be allowed 
> before disabling bus mastering.   Did you meet some real issues ?   Thanks!

I observed quite a different driver failure pattern with and without
this adjustment, but in a contrived environment only. From the
customer data for the problem that prompted the phantom function
work, I could also conclude the same (comparing the driver failure
under native Linux with IOMMU turned on and the one under Xen).

But in any case, I am of the opinion that an occasional fault
shouldn't give reason to disable the device altogether - what we're
aiming at is solely to keep Xen and other domains functional (which
doesn't require as drastic an action as was carried out prior to this
adjustment). Also, afaict native Linux doesn't have any such
disabling behavior at all.

Jan

>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Friday, November 30, 2012 5:51 PM
>> To: wei.huang2@amd.com; weiwang.dd@gmail.com; Zhang, Xiantao; Keir
>> Fraser
>> Cc: Dario Faggioli; xen-devel; Tim Deegan
>> Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
>> faults for devices used by Xen or Dom0
>> 
>> >>> On 30.11.12 at 10:42, Keir Fraser <keir@xen.org> wrote:
>> > On 05/11/2012 16:53, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >
>> >> Under the assumption that in these cases recurring faults aren't a
>> >> security issue and it can be expected that the drivers there are
>> >> going to try to take care of the problem.
>> >>
>> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> >
>> > This one's sat a while with no comments...
>> 
>> It's in already (26133:fdb69dd527cd), with Tim's and Dario's ack (who were
>> the ones involved in creating the original change this modifies).
>> 
>> But yes, we're having a general response problem for IOMMU related stuff -
>> I already asked for this to be a topic on the next community call.
>> 
>> Jan




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 03 07:39:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQcl-0006QM-QV; Mon, 03 Dec 2012 07:39:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfQck-0006QB-TN
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 07:39:31 +0000
Received: from [193.109.254.147:31635] by server-6.bemta-14.messagelabs.com id
	3F/27-02788-2375CB05; Mon, 03 Dec 2012 07:39:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354520369!1451104!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6030 invoked from network); 3 Dec 2012 07:39:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 07:39:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 07:39:29 +0000
Message-Id: <50BC653E02000078000AD28C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 07:39:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Marek Marczykowski" <marmarek@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
In-Reply-To: <50B8DC55.8000308@invisiblethingslab.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 17:18, Marek Marczykowski <marmarek@invisiblethingslab.com>
wrote:
> On 30.11.2012 17:12, Jan Beulich wrote:
>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12 5:07 PM >>>
>>> That was the rare case when resume worked at all... in most cases I got a
>>> reboot before anything appeared on the screen (even the backlight is off) - xen
>>> panic? dom0 kernel panic?
>> 
>> Without serial console we won't get very far from here.
>> 
>>> I don't have serial console, but have USB-to-serial port - is it possible to
>>> use it as xen console (in xen 4.1.3)?
>> 
>> Not that I'm aware of. But 4.1.x isn't very interesting from a development
>> perspective anyway. If you had the same problems still with 4.3-unstable,
>> then that'd be of much more interest to analyze, and you could use the
>> EHCI debug port (if one of your controllers has one) based serial console.
> 
> Is it possible to use libxl from xen 4.1 with a newer hypervisor? My libxl is
> somewhat patched and porting it to a newer version will require some effort.

I don't think so, but I also don't think you need a libxl at all for the
purposes here (dealing with S3 is a Dom0-only thing).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 03 07:55:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 07:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfQsH-0006jL-CV; Mon, 03 Dec 2012 07:55:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfQsG-0006jG-9w
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 07:55:32 +0000
Received: from [85.158.143.99:24037] by server-1.bemta-4.messagelabs.com id
	0C/12-27934-3FA5CB05; Mon, 03 Dec 2012 07:55:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1354521330!20241848!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13596 invoked from network); 3 Dec 2012 07:55:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 07:55:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 07:55:30 +0000
Message-Id: <50BC68FE02000078000AD2B1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 07:55:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <50B498E502000078000AB8B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A4834564403398F24@SHSMSX101.ccr.corp.intel.com>
	<50B7243602000078000AC61A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033994C8@SHSMSX101.ccr.corp.intel.com>
	<50B7389202000078000AC669@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339B365@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A483456440339B365@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATS and dependent features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 13:29, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:

> 
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Thursday, November 29, 2012 5:28 PM
>> To: Zhang, Xiantao
>> Cc: xen-devel
>> Subject: RE: ATS and dependent features
>> 
>> >>> On 29.11.12 at 10:19, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> wrote:
>> 
>> >
>> >> -----Original Message-----
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: Thursday, November 29, 2012 4:01 PM
>> >> To: Zhang, Xiantao
>> >> Cc: xen-devel
>> >> Subject: RE: ATS and dependent features
>> >>
>> >> >>> On 29.11.12 at 02:07, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> >> wrote:
>> >> > ATS should be a host feature controlled by the IOMMU, and I don't think
>> >> > dom0 can control it, given Xen's architecture.
>> >>
>> >> "Can" or "should"? Because from all I can tell it currently clearly does.
>> >
>> > I mean Xen shouldn't allow these capabilities to be detected by
>> > dom0.  If it does, we need to fix it.
>> 
>> It sort of hides it - all callers sit in the kernel's IOMMU code, and IOMMU
>> detection is being prevented. So it looks like the code is simply dead when
>> running on top of Xen.
> 
> I'm curious why dom0's !XEN kernel option for these features can solve the 
> issue you encountered. 

It doesn't "solve" the problem in that sense: as said, the code in
question only has callers in IOMMU code, which itself depends on
!XEN in our kernels (to be clear - I'm talking about forward-ported
kernels here, not pv-ops ones). So upstream probably just has to
live with that code being dead (at the moment, when run on top of
Xen) and accept the risk of a caller appearing elsewhere. In our
kernels, by making these options also dependent upon !XEN, we can
actually detect (and actively deal with) any new caller elsewhere
in the code, thus eliminating the risk of bad interaction between
Dom0 and Xen.
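The arrangement described above (IOMMU support gated on !XEN in the
forward-ported kernels) might look roughly like this in Kconfig terms;
the option name and dependencies are illustrative only, not taken from
any actual kernel tree:

```kconfig
# Hypothetical sketch - option name and prompt are illustrative only.
config INTEL_IOMMU
	bool "Support for Intel IOMMU"
	depends on !XEN
	help
	  With the option gated on !XEN, any new caller of the IOMMU
	  code elsewhere in the tree surfaces as a build failure in a
	  Xen-enabled configuration, instead of silently becoming dead
	  (or misbehaving) code when running on top of Xen.
```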

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 08:10:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 08:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfR63-0007M8-Ch; Mon, 03 Dec 2012 08:09:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfR62-0007M3-11
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 08:09:46 +0000
Received: from [85.158.143.99:31419] by server-2.bemta-4.messagelabs.com id
	C2/78-28922-94E5CB05; Mon, 03 Dec 2012 08:09:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354522184!26874420!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32375 invoked from network); 3 Dec 2012 08:09:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 08:09:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 08:09:44 +0000
Message-Id: <50BC6C5302000078000AD2BF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 08:09:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mats Petersson" <mats.petersson@citrix.com>
References: <50B8EE13.6070301@citrix.com> <50B9050B.7090709@citrix.com>
In-Reply-To: <50B9050B.7090709@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 20:12, Mats Petersson <mats.petersson@citrix.com> wrote:
> On 30/11/12 17:34, Andrew Cooper wrote:
>> 4) "Fake NMIs" can be caused by hardware with access to the INTR pin
>> (very unlikely in modern systems with the LAPIC supporting virtual wire
>> mode), or by software executing an `int $0x2`.  This can cause the NMI
>> handler to run on the NMI stack, but without the normal hardware NMI
>> cessation logic being triggered.
>>
>> 5) "Fake MCEs" can be caused by software executing `int $0x18`, and by
>> any MSI/IOMMU/IOAPIC programmed to deliver vector 0x18.  Normally, this
>> could only be caused by a bug in Xen, although it is also possible on a
>> system without interrupt remapping. (Where the host administrator has
>> accepted the documented security issue, and decided still to pass-through
>> a device to a trusted VM, and the VM in question has a buggy driver for
>> the passed-through hardware)
> Surely both 4 & 5 are "bad guest behaviour", and whilst it's a "nice to 
> have" to catch that, it's no different from running on bare metal doing 
> daft things with vectors or writing code that doesn't behave at all 
> "friendly". (4 is only available to Xen developers, who we hope are 
> most of the time sane enough not to try these crazy things in a "live" 
> system that matters.) 5 is only available if you have pass-through 
> enabled. I don't think either is a particularly likely cause of real, in 
> the field, problems.

If these were guest exposed, we'd have a much more severe
problem. But as Tim said, they aren't, hence we "only" need to
exclude Xen bugs here (and decide on how far we may want to go
with working around possible hardware bugs).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 08:27:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 08:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfRMb-0007ZO-2x; Mon, 03 Dec 2012 08:26:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfRMZ-0007ZJ-Iq
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 08:26:51 +0000
Received: from [85.158.138.51:40143] by server-8.bemta-3.messagelabs.com id
	78/87-07786-A426CB05; Mon, 03 Dec 2012 08:26:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354523208!32337771!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2491 invoked from network); 3 Dec 2012 08:26:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 08:26:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 08:26:47 +0000
Message-Id: <50BC705402000078000AD2D1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 08:26:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <50B8EE13.6070301@citrix.com>
In-Reply-To: <50B8EE13.6070301@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 18:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 1) Faults on the NMI path will re-enable NMIs before the handler
> returns, leading to reentrant behaviour.  We should audit the NMI path
> to try and remove any needless cases which might fault, but getting a
> fault-free path will be hard (and is not going to solve the reentrant
> behaviour itself).
> 
> 2) Faults on the MCE path will re-enable NMIs, as will the iret of the
> MCE itself if an MCE interrupts an NMI.

As apparently agreed later in the thread - we just need to exclude
the potential for faults inside the NMI and MCE paths. The only
reason I could see this needing to change would be if we intended
to add an extensive NMI producer like native Linux's perf
subsystem.

> 3) SMM mode executing an iret will re-enable NMIs.  There is nothing we
> can do to prevent this, and as an SMI can interrupt NMIs and MCEs, no
> way to predict if/when it may happen.  The best we can do is accept that
> it might happen, and try to deal with the after effects.

I don't see us needing to deal with that in any way. SMM using IRET
carelessly is just plain wrong. IIRC SMM (just like VMEXIT) has a
save/restore field for the NMI mask, so if SMM handlers make proper
use of it, there should be no problem.

> 4) "Fake NMIs" can be caused by hardware with access to the INTR pin
> (very unlikely in modern systems with the LAPIC supporting virtual wire
> mode), or by software executing an `int $0x2`.  This can cause the NMI
> handler to run on the NMI stack, but without the normal hardware NMI
> cessation logic being triggered.
> 
> 5) "Fake MCEs" can be caused by software executing `int $0x18`, and by
> any MSI/IOMMU/IOAPIC programmed to deliver vector 0x18.  Normally, this
> could only be caused by a bug in Xen, although it is also possible on a
> system without interrupt remapping. (Where the host administrator has
> accepted the documented security issue, and decided still to pass-through
> a device to a trusted VM, and the VM in question has a buggy driver for
> the passed-through hardware)

Fake exceptions, as was also already said by others, are a Xen or
hardware bug and hence shouldn't need extra precautions either.

> 9) The NMI handler when returning to ring3 will leave NMIs latched, as
> it uses the sysret path.

This is a little imprecise: the problem arises only when entering
the scheduler on the way out of an NMI and resuming an unaware
PV vCPU on the given pCPU. Apart from forcing an early IRET in
that case (we can't be on the special NMI stack then, as the NMI
entry path switches to the normal stack when entered from PV
guest context, entry from VMX context happens on the normal
stack anyway, and entry from hypervisor context [which includes
the SVM case] doesn't end up handling softirqs on the exit path),
another option would be to clear the TRAP_syscall flag when
resuming a PV vCPU in the scheduler.

But the early IRET solution has other benefits (keeping the NMI
disabled window short), so would be preferable imo.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 08:28:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 08:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfROE-0007dy-Lo; Mon, 03 Dec 2012 08:28:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfROC-0007dq-OB
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 08:28:32 +0000
Received: from [85.158.138.51:51914] by server-16.bemta-3.messagelabs.com id
	08/C5-07461-FA26CB05; Mon, 03 Dec 2012 08:28:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354523310!32514271!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27103 invoked from network); 3 Dec 2012 08:28:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 08:28:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 08:28:30 +0000
Message-Id: <50BC70BB02000078000AD2DF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 08:28:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
 "Tim Deegan" <tim@xen.org>
References: <50B8EE13.6070301@citrix.com>
	<20121130175604.GB95877@ocelot.phlegethon.org>
In-Reply-To: <20121130175604.GB95877@ocelot.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 18:56, Tim Deegan <tim@xen.org> wrote:
> At 17:34 +0000 on 30 Nov (1354296851), Andrew Cooper wrote:
> For the record, we also came up with a much simpler solution, which I
> prefer:
>  - The MCE handler should never return to Xen with IRET.
>  - The NMI handler should always return with IRET.
>  - There should be no faulting code in the NMI or MCE handlers.
> 
> That covers all the interesting cases except (3), (4) and (7) below, and
> a simple per-cpu {nmi,mce}-in-progress flag will be enough to detect
> (and crash) on _almost_ all cases where that bites us (the other cases
> will crash less politely from their stacks being smashed).
> 
> Even if we go on to build some more bulletproof solution, I think we
> should consider implementing that now, as the baseline.

Fully agree. As said in an earlier reply to Andrew's original mail,
dealing with (3) and (4) doesn't seem necessary to me. 
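[Editor's note: the per-cpu in-progress flag Tim proposes above could be modelled roughly as below. This is an illustrative sketch only — NR_CPUS, the plain array storage, and the function names are placeholders, not Xen's actual per-cpu machinery.]

```c
#include <stdbool.h>

#define NR_CPUS 256   /* placeholder; Xen sizes this at build time */

/* One flag per CPU; set on NMI entry, cleared on exit.  If it is
 * already set on entry, a nested NMI has bitten us (e.g. an earlier
 * IRET re-enabled NMI delivery prematurely), and the right response
 * is to crash loudly rather than run on a corrupted stack. */
static bool nmi_in_progress[NR_CPUS];

static bool nmi_enter(unsigned int cpu)
{
    if (nmi_in_progress[cpu])
        return false;             /* nested NMI detected: crash politely */
    nmi_in_progress[cpu] = true;
    return true;
}

static void nmi_exit(unsigned int cpu)
{
    nmi_in_progress[cpu] = false;
}
```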

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 03 08:45:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 08:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfReC-0007vS-Dk; Mon, 03 Dec 2012 08:45:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfReA-0007vN-QR
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 08:45:02 +0000
Received: from [85.158.138.51:45766] by server-15.bemta-3.messagelabs.com id
	17/CD-23779-9866CB05; Mon, 03 Dec 2012 08:44:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1354524295!28399700!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11365 invoked from network); 3 Dec 2012 08:44:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 08:44:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 08:44:41 +0000
Message-Id: <50BC748502000078000AD2F7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 08:44:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Anil Madhavapeddy" <anil@recoil.org>
References: <47152122-2636-48ED-BD08-B8C69B7F2D18@recoil.org>
In-Reply-To: <47152122-2636-48ED-BD08-B8C69B7F2D18@recoil.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Balraj Singh <balraj.singh@cl.cam.ac.uk>,
	Steven Smith <steven.smith@cl.cam.ac.uk>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initialising MXCSR in PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 18:14, Anil Madhavapeddy <anil@recoil.org> wrote:
> We're seeing floating point exceptions on some (but not all) machines when 
> doing FP operations inside MiniOS.
> 
> It looks like the FPU and SSE control registers are not set to good default 
> values (which by default mask various FP exceptions) when MiniOS is started 
> as a PV guest. If the call to fpu_init below is not made, the division 
> generates a precision error and fails.  
> 
> Do PV guests all need to explicitly initialise MXCSR, or has something 
> changed in Xen to trigger this now?  It seems to have been set in the past, 
> and only happens on some hosts.

As you're not telling us where you do and do not observe this bad
behavior, I'm wondering if you're simply running into the issue
addressed by -unstable c/s 24157:7b5e1cb94bfa.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 08:46:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 08:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfRfc-0007zg-Tt; Mon, 03 Dec 2012 08:46:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfRfb-0007zW-GD
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 08:46:31 +0000
Received: from [85.158.138.51:55539] by server-3.bemta-3.messagelabs.com id
	73/E5-31566-6E66CB05; Mon, 03 Dec 2012 08:46:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354524389!32517061!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23126 invoked from network); 3 Dec 2012 08:46:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 08:46:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 08:46:29 +0000
Message-Id: <50BC74F202000078000AD2FA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 08:46:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "G.R." <firemeteor@users.sourceforge.net>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
In-Reply-To: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 04:47, "G.R." <firemeteor@users.sourceforge.net> wrote:
> Hi developers,
> I met some domU issues and the log suggests missing interrupt.
> Details from here:
> http://www.gossamer-threads.com/lists/xen/users/263938#263938 
> In summary, this is the suspicious log:
> 
> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
> 
> I've checked the code in question and found that mode 3 is an 'reserved_1'
> mode.
> I want to trace down the source of this mode setting to root-cause the
> issue.
> But I'm not an xen developer, and am even a newbie as a xen user.
> Could anybody give me instructions about how to enable detailed debug log?
> It could be better if I can get advice about experiments to perform /
> switches to try out etc.

Please check the list archives; this issue was discussed at great
length a couple of months ago (and should be fixed in current
trees).
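[Editor's note: for context, in the MSI data register (and the local APIC ICR) the delivery mode is the 3-bit field at bits 10:8, and encoding 3 is indeed reserved — which is why vmsi.c rejects it. A decoding sketch; the enumerator names are illustrative, the field layout follows the Intel SDM.]

```c
#include <stdint.h>

/* MSI data register delivery modes, bits 10:8 (Intel SDM). */
enum msi_delivery_mode {
    DELIVERY_FIXED     = 0,
    DELIVERY_LOWEST    = 1,
    DELIVERY_SMI       = 2,
    DELIVERY_RESERVED1 = 3,   /* the "reserved_1" mode from the log */
    DELIVERY_NMI       = 4,
    DELIVERY_INIT      = 5,
    DELIVERY_RESERVED2 = 6,
    DELIVERY_EXTINT    = 7,
};

static inline unsigned int msi_delivery_mode(uint32_t msi_data)
{
    return (msi_data >> 8) & 0x7;   /* extract bits 10:8 */
}
```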

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 08:51:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 08:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfRjw-0008Df-Pc; Mon, 03 Dec 2012 08:51:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfRjv-0008DB-SV
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 08:50:59 +0000
Received: from [85.158.137.99:6003] by server-14.bemta-3.messagelabs.com id
	F1/30-31424-3F76CB05; Mon, 03 Dec 2012 08:50:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1354524654!17604358!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7151 invoked from network); 3 Dec 2012 08:50:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 08:50:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 08:50:54 +0000
Message-Id: <50BC75FC02000078000AD310@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 08:50:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "maheen butt" <maheen_butt26@yahoo.com>
References: <1354520163.12932.YahooMailNeo@web120002.mail.ne1.yahoo.com>
In-Reply-To: <1354520163.12932.YahooMailNeo@web120002.mail.ne1.yahoo.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] set_current in Xen booting sequence
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 08:36, maheen butt <maheen_butt26@yahoo.com> wrote:
> Hi all,
> 
> I'm investigating Xen bootup sequence and I'm stuck with the function 
> set_current((struct vcpu *)0xfffff000);
> the high level idea is that it is assigning VCPU to physical CPU but I can't 
> understand the following 
> 
> 
> 1) what is the need of this hard coded address?  it is seemed that vcpu 
> exist on this address (but no idea who put vcpu on that particular 
> address)?
> (the same address is passed both in case of ARM and X86)

This is merely a debugging safeguard, making sure accesses
through "current" crash rather than going unnoticed.

> 2) no idea about what exactly get_cpu_info() (called by set_current) is 
> doing..  (logically doing & and or with sp register)?

It does what its name says - getting the cpu_info structure's
address for the current CPU. As that structure lives at the end of
the stack, arithmetic on the stack pointer is the appropriate thing
to do here.
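[Editor's note: the stack-pointer arithmetic Jan refers to can be modelled as below. This is a sketch under assumed values — STACK_SIZE and the struct layout are placeholders, not Xen's actual definitions. Because each CPU's stack is a naturally aligned block with the cpu_info structure at its top, any address within the stack can be turned into the cpu_info address by OR-ing in the low bits and stepping back over the structure.]

```c
#include <stdint.h>

#define STACK_SIZE (8u * 4096u)   /* placeholder: an 8-page, naturally aligned stack */

struct cpu_info {
    void *current_vcpu;           /* what set_current() writes */
    /* ... guest register frame, per-cpu scratch ... */
};

/* OR the stack pointer with (STACK_SIZE - 1) to reach the last byte
 * of the stack block, add 1 to get one-past-the-end, then step back
 * over the cpu_info structure that sits there. */
static inline struct cpu_info *get_cpu_info_from(uintptr_t sp)
{
    return (struct cpu_info *)(((sp | (STACK_SIZE - 1)) + 1)
                               - sizeof(struct cpu_info));
}
```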

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 10:16:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 10:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfT3q-0000kw-OO; Mon, 03 Dec 2012 10:15:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TfT3p-0000kr-0U
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 10:15:37 +0000
Received: from [193.109.254.147:35285] by server-14.bemta-14.messagelabs.com
	id B0/F8-14517-8CB7CB05; Mon, 03 Dec 2012 10:15:36 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354529710!2295527!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10665 invoked from network); 3 Dec 2012 10:15:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 10:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,205,1355097600"; d="scan'208";a="46356899"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	03 Dec 2012 10:15:10 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Mon, 3 Dec 2012
	05:15:09 -0500
Message-ID: <50BC7BAC.8050107@citrix.com>
Date: Mon, 3 Dec 2012 10:15:08 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
In-Reply-To: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 03:47, G.R. wrote:
> Hi developers,
> I met some domU issues and the log suggests missing interrupt.
> Details from here: 
> http://www.gossamer-threads.com/lists/xen/users/263938#263938
> In summary, this is the suspicious log:
>
> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>
> I've checked the code in question and found that mode 3 is an 
> 'reserved_1' mode.
> I want to trace down the source of this mode setting to root-cause the 
> issue.
> But I'm not an xen developer, and am even a newbie as a xen user.
> Could anybody give me instructions about how to enable detailed debug log?
> It could be better if I can get advice about experiments to perform / 
> switches to try out etc.
>
> My SW config:
> dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> domU: Debian wheezy 3.2.x stock kernel.
>
> Thanks,
> Timothy
Are you passing hardware (PCI Passthrough) to the HVM guest?
What are the exact messages in the DomU?

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 10:16:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 10:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfT3q-0000kw-OO; Mon, 03 Dec 2012 10:15:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TfT3p-0000kr-0U
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 10:15:37 +0000
Received: from [193.109.254.147:35285] by server-14.bemta-14.messagelabs.com
	id B0/F8-14517-8CB7CB05; Mon, 03 Dec 2012 10:15:36 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354529710!2295527!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10665 invoked from network); 3 Dec 2012 10:15:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 10:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,205,1355097600"; d="scan'208";a="46356899"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	03 Dec 2012 10:15:10 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Mon, 3 Dec 2012
	05:15:09 -0500
Message-ID: <50BC7BAC.8050107@citrix.com>
Date: Mon, 3 Dec 2012 10:15:08 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
In-Reply-To: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 03:47, G.R. wrote:
> Hi developers,
> I ran into some domU issues, and the log suggests a missing interrupt.
> Details from here: 
> http://www.gossamer-threads.com/lists/xen/users/263938#263938
> In summary, this is the suspicious log:
>
> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>
> I've checked the code in question and found that mode 3 is a
> 'reserved_1' mode.
> I want to trace the source of this mode setting to root-cause the
> issue.
> But I'm not a Xen developer, and I'm even a newbie as a Xen user.
> Could anybody give me instructions on how to enable detailed debug logging?
> It would be even better if I could get advice about experiments to perform /
> switches to try out, etc.
>
> My SW config:
> dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> domU: Debian wheezy 3.2.x stock kernel.
>
> Thanks,
> Timothy
Are you passing hardware (PCI Passthrough) to the HVM guest?
What are the exact messages in the DomU?

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 10:24:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 10:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfTC4-0000vf-Oy; Mon, 03 Dec 2012 10:24:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfTC3-0000va-4V
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 10:24:07 +0000
Received: from [85.158.143.35:45515] by server-3.bemta-4.messagelabs.com id
	90/A9-06841-5CD7CB05; Mon, 03 Dec 2012 10:24:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354530243!4940312!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28227 invoked from network); 3 Dec 2012 10:24:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 10:24:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,205,1355097600"; d="scan'208";a="16119263"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 10:24:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	10:24:03 +0000
Message-ID: <1354530241.774.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Dec 2012 10:24:01 +0000
In-Reply-To: <osstest-14555-mainreport@xen.org>
References: <osstest-14555-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 14555: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 07:09 +0000, xen.org wrote:
> flight 14555 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/14555/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-i386                    4 xen-build                 fail REGR. vs. 14482
>  build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Same failure as 14549, so killing the rogue git processes on xenbits
didn't help.
        make[1]: Leaving directory `/home/osstest/build.14554.build-i386/xen-unstable/docs'
        fatal: Out of memory? mmap failed: Cannot allocate memory
        fatal: The remote end hung up unexpectedly
        make[1]: *** [qemu-xen-traditional-dir-find] Error 128
        make[1]: Leaving directory `/home/osstest/build.14554.build-i386/xen-unstable/tools'
        make: *** [tools/qemu-xen-traditional-dir] Error 2
        make: *** Waiting for unfinished jobs....
        using cache /volatile/git-cache...
        locked cache /volatile/git-cache...
        processing /usr/local/bin/git clone git://xenbits.xen.org/staging/qemu-upstream-unstable.git qemu-xen-dir-remote.tmp...
        Cloning into qemu-xen-dir-remote.tmp...
        fatal: Out of memory? mmap failed: Cannot allocate memory
        fatal: The remote end hung up unexpectedly

ISTR that in the past we've needed to clear the local cache of git
objects?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 10:35:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 10:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfTMh-00016L-0t; Mon, 03 Dec 2012 10:35:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfTMf-00016G-Qa
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 10:35:06 +0000
Received: from [85.158.137.99:12123] by server-7.bemta-3.messagelabs.com id
	38/8B-01713-8508CB05; Mon, 03 Dec 2012 10:35:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1354530874!17626092!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31122 invoked from network); 3 Dec 2012 10:34:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 10:34:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,205,1355097600"; d="scan'208";a="16119557"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 10:34:12 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	10:34:12 +0000
Message-ID: <50BC8023.9010408@citrix.com>
Date: Mon, 3 Dec 2012 11:34:11 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] Driver domains communication protocol proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I no longer have the original message, so I'm going to reply to a
copy-paste from the xen mailing list archive. Sorry for the inconvenience.

> During some discussions and handwaving, including discussions with
> some experts on the Xenserver/XCP storage architecture, we came up
> with what we think might be a plausible proposal for an architecture
> for communication between toolstack and driver domain, for storage at
> least.
>
> I offered to write it up.  The abstract proposal is as I understand
> the consensus from our conversation.  The concrete protocol is my own
> invention.
>
> Please comment.  After a round of review here we should consider
> whether some of the assumptions need review from the communities
> involved in "other" backends (particularly, the BSDs).
>
> (FAOD the implementation of something like this is not 4.3 material,
> but it may inform some API decisions etc. we take in 4.2.)
>
> Ian.
>
>
> Components
>
>  toolstack
>
>  guest
>     Might be the toolstack domain, or an (intended) guest vm.
>
>  driver domain
>     Responsible for providing the disk service to guests.
>     Consists, internally, of (at least):
>        control plane
>        backend
>     but we avoid exposing this internal implementation detail.
>
>     We permit different driver domains on a single host, serving
>     different guests or the same guests.
>
>     The toolstack is expected to know the domid of the driver domain.
>
>  driver domain kind
>     We permit different "kinds" of driver domain, perhaps implemented
>     by completely different code, which support different facilities.
>
>     Each driver domain kind needs to document what targets (see
>     below) are valid and how they are specified, and what preparatory
>     steps may need to be taken eg at system boot.
>
>     Driver domain kinds do not have a formal presence in the API.
>
> Objects
>
>  target
>      A kind of name.
>
>      Combination of a physical location and data format plus all other
>      information needed by the underlying mechanisms, or relating to
>      the data format, needed to access it.
>
>      These names are assigned by the driver domain kind; the names may
>      be an open class; no facility provided via this API to enumerate
>      these.
>
>      Syntactically, these are key/value pairs, mapping short string
>      keys to shortish string values, suitable for storage in a
>      xenstore directory.
>
>  vdi
>      This host's intent to access a specific target.
>      Non-persistent, created on request by toolstack, enumerable.
>      Possible states: inactive/active.
>      Abstract operations: prepare, activate, deactivate, unprepare.
>
>      (We call the "create" operation for this object "prepare" to
>      avoid confusion with other kinds of "create".)
>
>      The toolstack promises that no two vdis for the same target
>      will simultaneously be active, even if the two vdis are on
>      different hosts.
>
>  vbd
>      Provision of a facility for a guest to access a particular target
>      via a particular vdi.  There may be zero or more of these at any
>      point for a particular vdi.
>
>      Non-persistent, created on request by toolstack, enumerable.
>      Abstract operations: plug, unplug.
>
>      (We call the "create" operation for this object "plug" to avoid
>      confusion with other kinds of "create".)
>
>      vbds may be created/destroyed, and the underlying vdi
>      activated/deactivated, in any order.  However IO is only possible
>      to a vbd when the corresponding vdi is active.  The reason for
>      requiring activation as a separate step is to allow as much of
>      the setup for an incoming migration domain's storage to be done
>      before committing to the migration and entering the "domain is
>      down" stage, during which access is switched from the old to the
>      new host.
>
>      We will consider here the case of a vbd which provides
>      service as a Xen vbd backend.  Other cases (eg, the driver domain
>      is the same as the toolstack domain and the vbd provides a block
>      device in the toolstack domain) can be regarded as
>      optimisations/shortcuts.
>
> Concrete protocol
>
>  The toolstack gives instructions to the driver domain, and receives
>  results, via xenstore, in the path:
>    /local/domain/<driverdomid>/backendctrl/vdi
>  Both driver domain and toolstack have write access to the whole of
>  this area.
>
>  Each vdi which has been requested and/or exists, corresponds to a
>  path .../backendctrl/vdi/<vdi> where <vdi> is a string (of
>  alphanumerics, hyphens and underscores) chosen by the toolstack.
>  Inside this, there are the following nodes:
>
>  /local/domain/<driverdomid>/backendctrl/vdi/<vdi>/
>    state       The current state.  Values are "inactive", "active",
>                or ENOENT meaning the vdi does not exist.
>                Set by the driver domain in response to requests.
>
>    request     Operation requested by the toolstack and currently
>                being performed.  Created by the toolstack, but may
>                then not be modified by the toolstack.  Deleted
>                by the driver domain when the operation has completed.
>
>                The values of "request" are:
>                  prepare
>                  activate
>                  deactivate
>                  unprepare
>                  plug <vbd>
>                  unplug <vbd>
>                <vbd> is an id chosen by the toolstack like <vdi>
>
>    result      errno value (in decimal, Xen error number) best
>                describing the results of the most recently completed
>                operation; 0 means success.  Created or set by the
>                driver domain in the same transaction as it deletes
>                request.  The toolstack may delete this.
>
>    result_msg  Optional UTF-8 string explaining any error; does not
>                exist when result is "0".  Created or deleted by the
>                driver domain whenever the driver domain sets result.
>                The toolstack may delete this.
>
>    t/*         The target name.  Must be written by the toolstack.
>                But it may not be removed or changed while either
>                state or request exists.
>
>    vbd/<vbd>/state
>                The state of a vbd, "ok" or ENOENT.
>                Set or deleted by the driver domain in response to
>                requests.
>
>    vbd/<vbd>/frontend
>                The frontend path (complete path in xenstore) which the
>                xen vbd should be servicing.  Set by the toolstack
>                with the plug request and not modified until after
>                completion of unplug.
>
>    vbd/<vbd>/backend
>                The backend path (complete path in xenstore) which the
>                driver domain has chosen for the vbd.  Set by the
>                driver domain in response to a plug request.
>
>    vbd/<vbd>/b-copy/*
>                The driver domain may request, in response to plug,
>                that the toolstack copy these values to the specified
>                backend directory, in the same transaction as it
>                creates the frontend.  Set by the driver domain in
>                response to a plug request; may be deleted by the
>                toolstack.  DEPRECATED, see below.
>
> The operations:
>
>  prepare
>         Creates a vdi from a target.
>         Preconditions:
>             state ENOENT
>             request ENOENT
>         Request (xenstore writes by toolstack):
>             request = "prepare"
>             t/* as appropriate
>         Results on success (xenstore writes by driver domain):
>             request ENOENT    } applies to success from all operations,
>             result = "0"      }  will not be restated below
>             state = "inactive"
>         Results on error (applies to all operations):  }
>             request ENOENT                             }  applies
>             result = some decimal integer errno value  }   to all
>             result_msg = ENOENT or a string            }    failures
>
>  activate
>         Preconditions:
>             state = "inactive"
>             request ENOENT
>         Request:
>             request = "activate"
>         Results on success:
>             state = "active"
>
>  deactivate
>         Preconditions:
>             state = "active"
>             request ENOENT
>         Request:
>             request = "deactivate"
>         Results on success:
>             state = "inactive"
>
>  unprepare
>         Preconditions:
>             state != ENOENT
>             request ENOENT
>         Request:
>             request = "unprepare"
>         Results on success:
>             state = ENOENT
>
>  removal, modification, etc. of an unprepared vdi:
>         Preconditions:
>             state ENOENT
>             request ENOENT
>         Request:
>             any changes to <vdi> directory which do
>              not create "state" or "request"
>         Results:
>             ignored - no response from driver domain
>
>  plug <vbd>
>         Preconditions:
>             state ENOENT

I'm not sure about this, but shouldn't state = "active" or at least
"prepared"? Maybe I don't understand the protocol correctly, but to be
able to plug a vbd, shouldn't the underlying vdi be prepared first?

Also, as far as I understand, each vdi has only one vbd, so why is the
<vbd> parameter needed in both the plug and unplug operations?

>             request ENOENT
>             vbd/<vbd>/state ENOENT
>             <frontend> ENOENT
>         Request:
>             request = "plug <vbd>"
>             vbd/<vbd>/frontend = <frontend> ("/local/domain/<guest>/...")
>         Results on success:
>             vbd/<vbd>/state = "ok"
>             vbd/<vbd>/backend = <rel-backend>
>                 (<rel-backend> is the backend path relative to the
>                  driver domain's home directory in xenstore)
>             vbd/<vbd>/b-copy/*  may be created    } at least one of these
>             <backend>/*  may come into existence  }  must happen
>         Next step (xenstore write) by toolstack:
>             <frontend>  created and populated, specifically
>             <frontend>/backend = <backend>
>                 ("/local/domain/<driverdomid>/<rel-backend>")
>             <backend>    created if necessary
>             <backend>/*  copied from  vbd/<vbd>/b-copy/*  if any
>             <backend>/frontend = <frontend>  unless already set
>
>  unplug <vbd>
>         Preconditions:
>             state ENOENT
>             request ENOENT
>             vbd/<vbd>/state "ok"
>         Request:
>             request = "unplug <vbd>"
>             <frontend> ENOENT
>         Results on success:
>             vbd/<vbd>/state ENOENT
>             <backend> ENOENT

So the flow of the protocol is (if everything returns success):

connection: prepare -> activate -> plug
disconnection: unplug -> deactivate -> unprepare

>
>  The toolstack and driver domains should not store state of their own,
>  not required for these communication purposes, in the backendctrl/
>  directory in xenstore.  If the driver domain wishes to make records
>  for its own use in xenstore, it should do so in a different directory
>  of its choice (eg, /local/domain/<driverdomid>/private/<something>).
>
>
> Notes regarding driver domains whose block backend implementation is
> controlled from the actual xenstore backend directory:
>
>  The b-copy/* feature exists for compatibility with some of these.  If
>  such a backend cannot cope with the backend directory coming into
>  existence before the corresponding frontend directory, then it is
>  necessary to create and populate the backend in the same xenstore
>  transaction as the creation of the frontend.  However, such backends
>  should be fixed; the b-copy/* feature is deprecated and will be
>  withdrawn at some point.
>
>  Note that a vbd may be created with the vdi inactive.  In this case

So in this case, the connection may happen with:

connection: prepare -> plug -> activate?

I frankly find this vbd/vdi naming very confusing.

>  the frontend and backend directories will exist, but the information
>  needed to start up the backend properly may be lacking until the vdi
>  is activated.  For example, if the existence of a suitable block
>  device in the driver domain depends on vdi activation, the block
>  device id cannot be made known to the backend until after the backend
>  directory has already been created and perhaps has existed for some
>  time.  It is believed that existing backends cope with this, because
>  they use a "hotplug script" approach - where the backend directory is
>  created without specifying the device node, and this backend directory
>  creation causes the invocation of machinery which establishes the
>  device node, which is subsequently written to xenstore.
>
>
> Question
>
>  What about network interfaces and other kinds of backend?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 10:35:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 10:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfTMh-00016L-0t; Mon, 03 Dec 2012 10:35:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfTMf-00016G-Qa
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 10:35:06 +0000
Received: from [85.158.137.99:12123] by server-7.bemta-3.messagelabs.com id
	38/8B-01713-8508CB05; Mon, 03 Dec 2012 10:35:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1354530874!17626092!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31122 invoked from network); 3 Dec 2012 10:34:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 10:34:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,205,1355097600"; d="scan'208";a="16119557"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 10:34:12 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	10:34:12 +0000
Message-ID: <50BC8023.9010408@citrix.com>
Date: Mon, 3 Dec 2012 11:34:11 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] Driver domains communication protocol proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I no longer have the original message, so I'm going to reply in a
copy-paste of xen mailing list archive. Sorry for the inconvenience.

> During some discussions and handwaving, including discussions with
> some experts on the Xenserver/XCP storage architecture, we came up
> with what we think might be a plausible proposal for an architecture
> for communication between toolstack and driver domain, for storage at
> least.
>
> I offered to write it up.  The abstract proposal is as I understand
> the consensus from our conversation.  The concrete protocol is my own
> invention.
>
> Please comments.  After a round of review here we should consider
> whether some of the assumptions need review from the communities
> involved in "other" backends (particularly, the BSDs).
>
> (FAOD the implementation of something like this is not 4.3 material,
> but it may inform some API decisions etc. we take in 4.2.)
>
> Ian.
>
>
> Components
>
>  toolstack
>
>  guest
>     Might be the toolstack domain, or an (intended) guest vm.
>
>  driver domain
>     Responsible for providing the disk service to guests.
>     Consists, internally, of (at least):
>        control plane
>        backend
>     but we avoid exposing this internal implementation detail.
>
>     We permit different driver domains on a single host, serving
>     different guests or the same guests.
>
>     The toolstack is expected to know the domid of the driver domain.
>
>  driver domain kind
>     We permit different "kinds" of driver domain, perhaps implemented
>     by completely different code, which support different facilities.
>
>     Each driver domain kind needs to document what targets (see
>     below) are valid and how they are specified, and what preparatory
>     steps may need to be taken eg at system boot.
>
>     Driver domain kinds do not have a formal presence in the API.
>
> Objects
>
>  target
>      A kind of name.
>
>      The combination of a physical location and a data format, plus
>      any other information (needed by the underlying mechanisms, or
>      relating to the data format) required to access it.
>
>      These names are assigned by the driver domain kind; the names may
>      be an open class; no facility is provided via this API to
>      enumerate them.
>
>      Syntactically, these are key/value pairs, mapping short string
>      keys to shortish string values, suitable for storage in a
>      xenstore directory.
>
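
As an illustration (not part of the proposal): a target for a
hypothetical LVM-backed driver domain kind might be a key/value set
like the one below.  The key names ("kind", "vg", "lv") are invented
purely for this example; the proposal leaves key assignment to each
driver domain kind.  A rough Python sketch of how such pairs map onto
xenstore writes:

```python
# Illustrative only: a "target" as flat key/value pairs, as they might
# be written under .../vdi/<vdi>/t/ in xenstore.  The key names used
# here are hypothetical -- each driver domain kind defines its own.
def target_to_xenstore_writes(vdi_path, target):
    """Turn a target dict into a list of (path, value) xenstore writes."""
    return [("%s/t/%s" % (vdi_path, key), value)
            for key, value in sorted(target.items())]

writes = target_to_xenstore_writes(
    "/local/domain/5/backendctrl/vdi/disk0",
    {"kind": "lvm", "vg": "vg_guests", "lv": "guest1-root"})
```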
>  vdi
>      This host's intent to access a specific target.
>      Non-persistent, created on request by toolstack, enumerable.
>      Possible states: inactive/active.
>      Abstract operations: prepare, activate, deactivate, unprepare.
>
>      (We call the "create" operation for this object "prepare" to
>      avoid confusion with other kinds of "create".)
>
>      The toolstack promises that no two vdis for the same target
>      will simultaneously be active, even if the two vdis are on
>      different hosts.
>
>  vbd
>      Provision of a facility for a guest to access a particular target
>      via a particular vdi.  There may be zero or more of these at any
>      point for a particular vdi.
>
>      Non-persistent, created on request by toolstack, enumerable.
>      Abstract operations: plug, unplug.
>
>      (We call the "create" operation for this object "plug" to avoid
>      confusion with other kinds of "create".)
>
>      vbds may be created/destroyed, and the underlying vdi
>      activated/deactivated, in any order.  However IO is only possible
>      to a vbd when the corresponding vdi is active.  The reason for
>      requiring activation as a separate step is to allow as much of
>      the setup for an incoming migration domain's storage to be done
>      before committing to the migration and entering the "domain is
>      down" stage, during which access is switched from the old to the
>      new host.
>
>      We will consider here the case of a vbd which provides
>      service as a Xen vbd backend.  Other cases (eg, the driver domain
>      is the same as the toolstack domain and the vbd provides a block
>      device in the toolstack domain) can be regarded as
>      optimisations/shortcuts.
>
> Concrete protocol
>
>  The toolstack gives instructions to the driver domain, and receives
>  results, via xenstore, in the path:
>    /local/domain/<driverdomid>/backendctrl/vdi
>  Both driver domain and toolstack have write access to the whole of
>  this area.
>
>  Each vdi which has been requested and/or exists, corresponds to a
>  path .../backendctrl/vdi/<vdi> where <vdi> is a string (of
>  alphanumerics, hyphens and underscores) chosen by the toolstack.
>  Inside this, there are the following nodes:
>
>  /local/domain/<driverdomid>/backendctrl/vdi/<vdi>/
>    state       The current state.  Values are "inactive", "active",
>                or ENOENT meaning the vdi does not exist.
>                Set by the driver domain in response to requests.
>
>    request     Operation requested by the toolstack and currently
>                being performed.  Created by the toolstack, but may
>                then not be modified by the toolstack.  Deleted
>                by the driver domain when the operation has completed.
>
>                The values of "request" are:
>                  prepare
>                  activate
>                  deactivate
>                  unprepare
>                  plug <vbd>
>                  unplug <vbd>
>                <vbd> is an id chosen by the toolstack like <vdi>
>
>    result      errno value (in decimal, Xen error number) best
>                describing the results of the most recently completed
>                operation; 0 means success.  Created or set by the
>                driver domain in the same transaction as it deletes
>                request.  The toolstack may delete this.
>
>    result_msg  Optional UTF-8 string explaining any error; does not
>                exist when result is "0".  Created or deleted by the
>                driver domain whenever the driver domain sets result.
>                The toolstack may delete this.
>
>    t/*         The target name.  Must be written by the toolstack,
>                but may not be removed or changed while either of
>                state or request exists.
>
>    vbd/<vbd>/state
>                The state of a vbd, "ok" or ENOENT.
>                Set or deleted by the driver domain in response to
>                requests.
>
>    vbd/<vbd>/frontend
>                The frontend path (complete path in xenstore) which the
>                xen vbd should be servicing.  Set by the toolstack
>                with the plug request and not modified until after
>                completion of unplug.
>
>    vbd/<vbd>/backend
>                The backend path (complete path in xenstore) which the
>                driver domain has chosen for the vbd.  Set by the
>                driver domain in response to a plug request.
>
>    vbd/<vbd>/b-copy/*
>                The driver domain may request, in response to plug,
>                that the toolstack copy these values to the specified
>                backend directory, in the same transaction as it
>                creates the frontend.  Set by the driver domain in
>                response to a plug request; may be deleted by the
>                toolstack.  DEPRECATED, see below.
>
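
To make the request/result handshake concrete, here is a rough Python
simulation, with a plain dict standing in for the
.../backendctrl/vdi/<vdi> directory.  A real implementation would use
xenstore transactions and watches; the function names below are mine,
not part of the proposal:

```python
# Minimal simulation of the request/result handshake.  A plain dict
# stands in for the xenstore directory; "key absent" models ENOENT.
def toolstack_request(node, op):
    assert "request" not in node          # precondition: request ENOENT
    node["request"] = op                  # toolstack may not modify it again

def driverdomain_complete(node, new_state, errno=0, msg=None):
    # Done in one xenstore transaction: delete request, set result/state.
    del node["request"]
    node["result"] = str(errno)
    if errno and msg:
        node["result_msg"] = msg
    else:
        node.pop("result_msg", None)      # result_msg absent when result is "0"
    if new_state is None:
        node.pop("state", None)           # state ENOENT: vdi no longer exists
    else:
        node["state"] = new_state

vdi = {}
toolstack_request(vdi, "prepare")
driverdomain_complete(vdi, "inactive")
```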
> The operations:
>
>  prepare
>         Creates a vdi from a target.
>         Preconditions:
>             state ENOENT
>             request ENOENT
>         Request (xenstore writes by toolstack):
>             request = "prepare"
>             t/* as appropriate
>         Results on success (xenstore writes by driver domain):
>             request ENOENT    } applies to success from all operations,
>             result = "0"      }  will not be restated below
>             state = "inactive"
>         Results on error (applies to all operations):  }
>             request ENOENT                             }  applies
>             result = some decimal integer errno value  }   to all
>             result_msg = ENOENT or a string            }    failures
>
>  activate
>         Preconditions:
>             state = "inactive"
>             request ENOENT
>         Request:
>             request = "activate"
>         Results on success:
>             state = "active"
>
>  deactivate
>         Preconditions:
>             state = "active"
>             request ENOENT
>         Request:
>             request = "deactivate"
>         Results on success:
>             state = "inactive"
>
>  unprepare
>         Preconditions:
>             state != ENOENT
>             request ENOENT
>         Request:
>             request = "unprepare"
>         Results on success:
>             state = ENOENT
>
>  removal, modification, etc. of an unprepared vdi:
>         Preconditions:
>             state ENOENT
>             request ENOENT
>         Request:
>             any changes to <vdi> directory which do
>              not create "state" or "request"
>         Results:
>             ignored - no response from driver domain
>
>  plug <vbd>
>         Preconditions:
>             state ENOENT

I'm not sure about this, but shouldn't state = "active", or at least
"prepared"? Maybe I don't understand the protocol correctly, but to be
able to plug a vbd, shouldn't the underlying vdi be prepared first?

Also, as far as I understand, each vdi only has one vbd, so why is the
<vbd> parameter needed in both the plug and unplug operations?

>             request ENOENT
>             vbd/<vbd>/state ENOENT
>             <frontend> ENOENT
>         Request:
>             request = "plug <vbd>"
>             vbd/<vbd>/frontend = <frontend> ("/local/domain/<guest>/...")
>         Results on success:
>             vbd/<vbd>/state = "ok"
>             vbd/<vbd>/backend = <rel-backend>
>                 (<rel-backend> is the backend path relative to the
>                  driver domain's home directory in xenstore)
>             vbd/<vbd>/b-copy/*  may be created    } at least one of these
>             <backend>/*  may come into existence  }  must happen
>         Next step (xenstore write) by toolstack:
>             <frontend>  created and populated, specifically
>             <frontend>/backend = <backend>
>                 ("/local/domain/<driverdomid>/<rel-backend>")
>             <backend>    created if necessary
>             <backend>/*  copied from  vbd/<vbd>/b-copy/*  if any
>             <backend>/frontend = <frontend>  unless already set
>
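
A sketch of the toolstack's follow-up writes after a successful plug
(the "Next step by toolstack" above), again with a dict standing in
for xenstore.  The domids and paths in the example are made up, and
b-copy is modelled as a nested dict for brevity:

```python
# Sketch of the toolstack's writes after the driver domain has
# completed "plug <vbd>".  All of these writes would happen in the
# same xenstore transaction that creates the frontend directory.
def toolstack_after_plug(store, driverdomid, vdi_node, vbd, frontend):
    rel_backend = vdi_node["vbd/%s/backend" % vbd]
    backend = "/local/domain/%d/%s" % (driverdomid, rel_backend)
    store[frontend + "/backend"] = backend
    # Copy any b-copy/* values into the backend dir (deprecated feature).
    for key, value in vdi_node.get("vbd/%s/b-copy" % vbd, {}).items():
        store[backend + "/" + key] = value
    store.setdefault(backend + "/frontend", frontend)
    return backend

store = {}
vdi_node = {"vbd/vbd0/backend": "backend/vbd/7/51712"}
backend = toolstack_after_plug(store, 5, vdi_node, "vbd0",
                               "/local/domain/7/device/vbd/51712")
```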
>  unplug <vbd>
>         Preconditions:
>             state ENOENT
>             request ENOENT
>             vbd/<vbd>/state "ok"
>         Request:
>             request = "unplug <vbd>"
>             <frontend> ENOENT
>         Results on success:
>             vbd/<vbd>/state ENOENT
>             <backend> ENOENT

So the flow of the protocol is (if everything returns success):

connection: prepare -> activate -> plug
disconnection: unplug -> deactivate -> unprepare
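
If I read it right, the lifecycle can be modelled as a small state
machine.  The sketch below follows my reading (plug requires the vdi
to at least exist; requiring no remaining vbds before unprepare is my
assumption, the proposal doesn't say):

```python
# Illustrative state machine for the vdi/vbd lifecycle.  None models
# the ENOENT state; asserts encode the stated preconditions.
class Vdi:
    def __init__(self):
        self.state = None            # ENOENT
        self.vbds = set()

    def prepare(self):
        assert self.state is None
        self.state = "inactive"

    def activate(self):
        assert self.state == "inactive"
        self.state = "active"

    def deactivate(self):
        assert self.state == "active"
        self.state = "inactive"

    def unprepare(self):
        # "no remaining vbds" is my assumption, not in the proposal
        assert self.state is not None and not self.vbds
        self.state = None

    def plug(self, vbd):
        # vbds may be plugged with the vdi merely prepared; IO needs "active"
        assert self.state is not None and vbd not in self.vbds
        self.vbds.add(vbd)

    def unplug(self, vbd):
        self.vbds.discard(vbd)

v = Vdi()
for step in (v.prepare, v.activate, lambda: v.plug("vbd0"),
             lambda: v.unplug("vbd0"), v.deactivate, v.unprepare):
    step()
```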

>
>  The toolstack and driver domains should not store state of their own,
>  beyond what is required for these communication purposes, in the
>  backendctrl/ directory in xenstore.  If the driver domain wishes to
>  make records for its own use in xenstore, it should do so in a
>  different directory of its choice (eg,
>  /local/domain/<driverdomid>/private/<something>).
>
>
> Notes regarding driver domains whose block backend implementation is
> controlled from the actual xenstore backend directory:
>
>  The b-copy/* feature exists for compatibility with some of these.  If
>  such a backend cannot cope with the backend directory coming into
>  existence before the corresponding frontend directory, then it is
>  necessary to create and populate the backend in the same xenstore
>  transaction as the creation of the frontend.  However, such backends
>  should be fixed; the b-copy/* feature is deprecated and will be
>  withdrawn at some point.
>
>  Note that a vbd may be created with the vdi inactive.  In this case

So in this case, the connection may happen with:

connection: prepare -> plug -> activate?

I frankly find this vbd/vdi naming very confusing.

>  the frontend and backend directories will exist, but the information
>  needed to start up the backend properly may be lacking until the vdi
>  is activated.  For example, if the existence of a suitable block
>  device in the driver domain depends on vdi activation, the block
>  device id cannot be made known to the backend until after the backend
>  directory has already been created and perhaps has existed for some
>  time.  It is believed that existing backends cope with this, because
>  they use a "hotplug script" approach - where the backend directory is
>  created without specifying the device node, and this backend directory
>  creation causes the invocation of machinery which establishes the
>  device node, which is subsequently written to xenstore.
>
>
> Question
>
>  What about network interfaces and other kinds of backend?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 11:24:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 11:24:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfU8M-0001hc-Gb; Mon, 03 Dec 2012 11:24:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TfU8K-0001hV-Kv
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 11:24:21 +0000
Received: from [85.158.139.83:52356] by server-8.bemta-5.messagelabs.com id
	DF/F8-06050-3EB8CB05; Mon, 03 Dec 2012 11:24:19 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354533847!16866646!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5534 invoked from network); 3 Dec 2012 11:24:18 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 11:24:18 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so1807329vbi.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 03:24:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=NZQeqyiyskMZcd9IH8nW377Ay0fkg89D0Hi5w0dIW44=;
	b=K3NhEVJTub0KYHjN5IrYgZgNEV2Tngk+/cQx6bVLgpN8FWwsftesVKKeGF6ufHTw39
	0TSLiJ92eExK9tgpcy3SF1mcOkLyaCQjMPACAq5KEPlZwdoEyb40A5AMdWB3q9RI5b9B
	NTAqyPUKkJmFxfLl6N87tPWGPm/p8TsVUGC6DlGvWzJakyeOklTfhqYQl1gHbEiQhFF0
	KrmP7EysTWGW9TkvWh4l9+O+nDjPPYSDzak6aaQsiRRexzHfADNf2khSv8uocbuj4BQ7
	a9XpS2Wka/9zcxWmAGB3luei2uZ4OpZ3Si1CJGP79QFKLx8Evql5RRwXQ7Cz7eAJByWB
	nOAw==
MIME-Version: 1.0
Received: by 10.52.92.144 with SMTP id cm16mr7284990vdb.36.1354533847342; Mon,
	03 Dec 2012 03:24:07 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Mon, 3 Dec 2012 03:24:07 -0800 (PST)
In-Reply-To: <50B8EE13.6070301@citrix.com>
References: <50B8EE13.6070301@citrix.com>
Date: Mon, 3 Dec 2012 11:24:07 +0000
X-Google-Sender-Auth: 26HgYnukaMIxcxP-k7Tl6Bk1ia4
Message-ID: <CAFLBxZbs0649ABJkRj4HsQDvHr__O6isfjPsQJWrRmfvJokxJA@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6554213251488682100=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6554213251488682100==
Content-Type: multipart/alternative; boundary=20cf307cff96ce9e8204cff0fc47

--20cf307cff96ce9e8204cff0fc47
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Nov 30, 2012 at 5:34 PM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> 3) SMM mode executing an iret will re-enable NMIs.  There is nothing we
> can do to prevent this, and as an SMI can interrupt NMIs and MCEs, no
> way to predict if/when it may happen.  The best we can do is accept that
> it might happen, and try to deal with the after effects.
>

Did you actually mean IRET, or did you mean RSM?  Does it make a difference?


> As for one possible solution which we can't use:
>
> If it were not for the sysret stupidness[1] of requiring the hypervisor
> to move to the guest stack before executing the `sysret` instruction, we
> could do away with the stack tables for NMIs and MCEs altogether, and
> the above craziness would be easy to fix.  However, the overhead of
> always using iret to return to ring3 is not likely to be acceptable,
> meaning that we cannot "fix" the problem by discarding interrupt stacks
> and doing everything properly on the main hypervisor stack.
>

64-bit Intel processors have SYSEXIT, right?  It's worth pointing out the
following alternatives, even if we never actually use them:

1. Use SYSEXIT on Intel processors and let the bugs (or some subset of
them) remain on AMD
2. Use SYSEXIT on Intel processors and IRET on AMD

Given that AMD has cut back their investment in OSS development, and is
talking about moving to ARM, it may only be a matter of time before Intel
is the only important player in the x86 world.


> [1] In an effort to prevent a flamewar with my comment: the situation we
> find ourselves in now is almost certainly the result of unforeseen
> interactions of individual features, but we are left to pick up the many
> pieces in a way which can't completely be solved.
>

The very first time I heard that SYSRET didn't restore the stack pointer, I
thought it was an obviously stupid idea that would cause all kinds of crazy
bugs. When you're designing operating systems, a little paranoia is a good
thing, and I can't help but think that the architects that let this go
through made a big mistake here.

 -George

--20cf307cff96ce9e8204cff0fc47
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Fri, Nov 30, 2012 at 5:34 PM, Andrew Cooper <span dir=3D"ltr">&lt;<a hre=
f=3D"mailto:andrew.cooper3@citrix.com" target=3D"_blank">andrew.cooper3@cit=
rix.com</a>&gt;</span> wrote:<br><div class=3D"gmail_extra"><div class=3D"g=
mail_quote">
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">3) SMM mode executing an iret will re-enable=
 NMIs. =A0There is nothing we<br>
can do to prevent this, and as an SMI can interrupt NMIs and MCEs, no<br>
way to predict if/when it may happen. =A0The best we can do is accept that<=
br>
it might happen, and try to deal with the after effects.<br></blockquote><d=
iv><br>Did you actually mean IRET, or did you mean RSM?=A0 Does it make a d=
ifference?<br>=A0</div><blockquote class=3D"gmail_quote" style=3D"margin:0 =
0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

As for 1 possible solution which we cant use:<br>
<br>
If it were not for the sysret stupidness[1] of requiring the hypervisor<br>
to move to the guest stack before executing the `sysret` instruction, we<br=
>
could do away with the stack tables for NMIs and MCEs alltogether, and<br>
the above crazyness would be easy to fix. =A0However, the overhead of<br>
always using iret to return to ring3 is not likely to be acceptable,<br>
meaning that we cannot &quot;fix&quot; the problem by discarding interrupt =
stacks<br>
and doing everything properly on the main hypervisor stack.<br></blockquote=
><div><br>64-bit Intel processors have SYSEXIT, right?=A0 It&#39;s worth po=
inting out the following alternatives, even if we never actually use them:<=
br>
<br>1. Use SYSEXIT on Intel processors and let the bugs (or some subset of =
them) remain on AMD<br>2. Use SYSEXIT on Intel processors and IRET on AMD<b=
r><br>Given that AMD has cut back their investment in OSS development, and =
is talking about moving to ARM, it may only be a matter of time before Inte=
l is the only important player in the x86 world.<br>
=A0</div><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;borde=
r-left:1px #ccc solid;padding-left:1ex">
[1] In an effort to prevent a flamewar with my comment, the situation we<br=
>
find outself in now is almost certainly the result of unforseen<br>
interactions of individual features, but we are left to pick up the many<br=
>
pieces in way which cant completely be solved.<br></blockquote><div><br>The=
 very first time I heard that SYSRET didn&#39;t restore the stack pointer, =
I thought it was an obviously stupid idea that would cause all kinds of cra=
zy bugs. When you&#39;re designing operating systems, a little paranoia is =
a good thing, and I can&#39;t help but think that the architects that let t=
his go through made a big mistake here.<br>
<br>=A0-George<br></div></div><br></div>

--20cf307cff96ce9e8204cff0fc47--


--===============6554213251488682100==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6554213251488682100==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 11:30:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 11:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfUDp-0001sX-It; Mon, 03 Dec 2012 11:30:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TfUDn-0001sP-N3
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 11:30:00 +0000
Received: from [85.158.139.211:63555] by server-16.bemta-5.messagelabs.com id
	70/8C-21311-63D8CB05; Mon, 03 Dec 2012 11:29:58 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354534196!18869990!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6411 invoked from network); 3 Dec 2012 11:29:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 11:29:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,205,1355097600"; d="scan'208";a="46362337"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 11:29:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 06:29:55 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TfUDj-0000WU-BZ;
	Mon, 03 Dec 2012 11:29:55 +0000
Message-ID: <50BC8BD7.3020908@eu.citrix.com>
Date: Mon, 3 Dec 2012 11:24:07 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: "Liu, Jinsong" <jinsong.liu@intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/11/12 18:51, Liu, Jinsong wrote:
>
> Not exactly I think. Reading the broken page will trigger a serious SRAR error. Under such case hypervisor will inject a vMCE to the guest which was migrating, not dom0. The reason of this injection is, guest is best one to handle it, w/ sufficient clue/status/information (other component like hypervisor/dom0 are not proper). For xl migration process, after return from MCE context, it *again* read the broken page ... this will kill system entirely --> so we definitely not care migration any more.

Why would the vMCE be sent to the guest, rather than to the vcpu that was 
running when the SRAR error occurred (i.e., the vcpu on which the 
migration process was running)?

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 11:38:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 11:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfULz-00026u-8T; Mon, 03 Dec 2012 11:38:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfULx-00026o-Rz
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 11:38:26 +0000
Received: from [85.158.137.99:34697] by server-16.bemta-3.messagelabs.com id
	77/7D-07461-13F8CB05; Mon, 03 Dec 2012 11:38:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1354534703!12767312!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21964 invoked from network); 3 Dec 2012 11:38:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 11:38:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,206,1355097600"; d="scan'208";a="16121183"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 11:38:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 3 Dec 2012 11:38:11 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfULj-0002mu-6u;
	Mon, 03 Dec 2012 11:38:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfULh-0002gP-8i;
	Mon, 03 Dec 2012 11:38:10 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14556-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Dec 2012 11:38:09 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14556: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14556 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14556/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2   fail pass in 14555
 test-amd64-amd64-xl-qemuu-win7-amd64 7 windows-install fail in 14555 pass in 14556

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14555 never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 11:46:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 11:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfUTo-0002Hm-7d; Mon, 03 Dec 2012 11:46:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TfUTm-0002Hh-N8
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 11:46:30 +0000
Received: from [193.109.254.147:25877] by server-10.bemta-14.messagelabs.com
	id 26/14-31741-5119CB05; Mon, 03 Dec 2012 11:46:29 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1354535135!1713757!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10426 invoked from network); 3 Dec 2012 11:45:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 11:45:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,206,1355097600"; d="scan'208";a="216165758"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	03 Dec 2012 11:45:34 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Mon, 3 Dec 2012
	06:45:34 -0500
Message-ID: <50BC90DD.5050907@citrix.com>
Date: Mon, 3 Dec 2012 11:45:33 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50B8EE13.6070301@citrix.com>
	<CAFLBxZbs0649ABJkRj4HsQDvHr__O6isfjPsQJWrRmfvJokxJA@mail.gmail.com>
In-Reply-To: <CAFLBxZbs0649ABJkRj4HsQDvHr__O6isfjPsQJWrRmfvJokxJA@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 11:24, George Dunlap wrote:
> On Fri, Nov 30, 2012 at 5:34 PM, Andrew Cooper 
> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>
>     3) SMM mode executing an iret will re-enable NMIs.  There is
>     nothing we
>     can do to prevent this, and as an SMI can interrupt NMIs and MCEs, no
>     way to predict if/when it may happen.  The best we can do is
>     accept that
>     it might happen, and try to deal with the after effects.
>
>
> Did you actually mean IRET, or did you mean RSM?  Does it make a 
> difference?

If, for some obscure reason, the SMM code decides, for example, to run 
code like "int 0x21", where the int 0x21 handler ends with the rather 
predictable IRET to return to the caller, then you would indeed "unlock" 
the NMI blocking that was put in place when the processor took the NMI. 
An NMI will still not interrupt the SMM code, but it WILL interrupt the 
code that was running before the SMI was taken - which could be an NMI 
handler that doesn't expect another NMI.

RSM doesn't, in and of itself [unless the SMM code has been "messing" 
with the saved state], alter the NMI state in any way other than 
restoring it to its previous value.
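The blocking behaviour described above can be condensed into a toy model (purely illustrative - real hardware carries far more state than this): taking an NMI masks further NMIs, and any IRET, no matter where it executes (including inside SMM), drops that mask again.

```python
def deliver(events):
    """Toy model of x86 NMI masking, for illustration only.

    An "NMI" event is delivered only while unmasked; taking one masks
    further NMIs until some "IRET" event executes.  Returns the number
    of NMIs actually delivered.
    """
    masked = False
    delivered = 0
    for ev in events:
        if ev == "NMI" and not masked:
            delivered += 1
            masked = True            # hardware blocks nested NMIs
        elif ev == "IRET":
            masked = False           # any IRET re-enables NMI delivery
    return delivered

# Back-to-back NMIs: the second is blocked...
assert deliver(["NMI", "NMI"]) == 1
# ...but an IRET executed in between (e.g. by SMM code) lets it through:
assert deliver(["NMI", "IRET", "NMI"]) == 2
```

This is only a sketch of the masking rule under discussion, not a description of any particular CPU.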
>
>     As for 1 possible solution which we cant use:
>
>     If it were not for the sysret stupidness[1] of requiring the
>     hypervisor
>     to move to the guest stack before executing the `sysret`
>     instruction, we
>     could do away with the stack tables for NMIs and MCEs alltogether, and
>     the above crazyness would be easy to fix.  However, the overhead of
>     always using iret to return to ring3 is not likely to be acceptable,
>     meaning that we cannot "fix" the problem by discarding interrupt
>     stacks
>     and doing everything properly on the main hypervisor stack.
>
>
> 64-bit Intel processors have SYSEXIT, right?  It's worth pointing out 
> the following alternatives, even if we never actually use them:
>
> 1. Use SYSEXIT on Intel processors and let the bugs (or some subset of 
> them) remain on AMD
> 2. Use SYSEXIT on Intel processors and IRET on AMD
> Given that AMD has cut back their investment in OSS development, and 
> is talking about moving to ARM, it may only be a matter of time before 
> Intel is the only important player in the x86 world.
Surely we would still want to support existing machines with AMD 
processors. And as far as possible, we should keep the code 
architecture-independent. We do not want a bunch of "IF 
processor=INTEL" in the assembler code [and even less "#if 
BUILD_FOR_INTEL" and separate binaries, I would expect].

SYSCALL and SYSRET are the pair of instructions corresponding to 
SYSENTER and SYSEXIT, but for 64-bit OSes. (Don't ask me why they 
decided to add a new pair of instructions rather than just alter the 
behaviour of SYSENTER/SYSEXIT - I'm sure there was some reason for 
this, but it's beyond my understanding.)

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 12:32:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 12:32:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVBo-0002rY-6e; Mon, 03 Dec 2012 12:32:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfVBm-0002rT-Om
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 12:31:58 +0000
Received: from [85.158.137.99:28868] by server-16.bemta-3.messagelabs.com id
	6E/74-07461-DBB9CB05; Mon, 03 Dec 2012 12:31:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354537912!17015161!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15188 invoked from network); 3 Dec 2012 12:31:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 12:31:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 12:31:53 +0000
Message-Id: <50BCA9C302000078000AD3BE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 12:31:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mats Petersson" <mats.petersson@citrix.com>
References: <50B8EE13.6070301@citrix.com>
	<CAFLBxZbs0649ABJkRj4HsQDvHr__O6isfjPsQJWrRmfvJokxJA@mail.gmail.com>
	<50BC90DD.5050907@citrix.com>
In-Reply-To: <50BC90DD.5050907@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 12:45, Mats Petersson <mats.petersson@citrix.com> wrote:
> On 03/12/12 11:24, George Dunlap wrote:
>> On Fri, Nov 30, 2012 at 5:34 PM, Andrew Cooper 
>> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>>
>>     3) SMM mode executing an iret will re-enable NMIs.  There is
>>     nothing we
>>     can do to prevent this, and as an SMI can interrupt NMIs and MCEs, no
>>     way to predict if/when it may happen.  The best we can do is
>>     accept that
>>     it might happen, and try to deal with the after effects.
>>
>>
>> Did you actually mean IRET, or did you mean RSM?  Does it make a 
>> difference?
> 
> If, for some obscure reason, the SMM code decides, for example, to run 
> code like "int 0x21", where the int 0x21 handler ends with the rather 
> predictable IRET to return to the caller, then you would indeed "unlock" 
> the NMI blocking that happens from the NMI being taken by the processor. 
> NMI will still not interrupt the SMM code, but it WILL interrupt the 
> code that was running before SMI was taken - which could be an NMI 
> handler, that doesn't expect another NMI.
> 
> RSM doesn't, in and of itself [unless "messing" with the saved state] 
> alter the NMI state in other ways than "restore to previous value".

And isn't it this "restore to previous value" that makes this a
non-issue - after RSM, NMIs would again be masked?
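The point can be sketched as a tiny model of the SMI/RSM round trip. This is an illustration under the assumption that RSM restores the NMI mask saved at SMI entry - which is exactly the open question in this thread, so the behaviour is a parameter rather than a claim about real CPUs:

```python
def smm_round_trip(masked_before, smm_executes_iret, rsm_restores_mask):
    """Toy model: NMI-mask state after SMI entry -> SMM code -> RSM.

    rsm_restores_mask is an explicit assumption, not a statement of
    fact about any real processor.
    """
    saved = masked_before            # latched on SMI entry
    masked = masked_before
    if smm_executes_iret:
        masked = False               # an IRET inside SMM drops the mask
    if rsm_restores_mask:
        masked = saved               # RSM puts the saved mask back
    return masked

# If RSM restores the mask, an IRET inside SMM does no lasting damage:
assert smm_round_trip(True, True, True) is True
# If it doesn't, the interrupted NMI handler is left exposed:
assert smm_round_trip(True, True, False) is False
```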

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 12:39:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 12:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVIp-000310-Cm; Mon, 03 Dec 2012 12:39:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfVIn-00030u-Aa
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 12:39:13 +0000
Received: from [85.158.143.99:15659] by server-3.bemta-4.messagelabs.com id
	33/89-06841-F6D9CB05; Mon, 03 Dec 2012 12:39:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1354535426!20285151!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21125 invoked from network); 3 Dec 2012 11:50:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 11:50:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 11:50:26 +0000
Message-Id: <50BCA00E02000078000AD3AA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 11:50:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"George Dunlap" <dunlapg@umich.edu>
References: <50B8EE13.6070301@citrix.com>
	<CAFLBxZbs0649ABJkRj4HsQDvHr__O6isfjPsQJWrRmfvJokxJA@mail.gmail.com>
In-Reply-To: <CAFLBxZbs0649ABJkRj4HsQDvHr__O6isfjPsQJWrRmfvJokxJA@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Woes of NMIs and MCEs, and possibly how to fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 12:24, George Dunlap <dunlapg@umich.edu> wrote:
> On Fri, Nov 30, 2012 at 5:34 PM, Andrew Cooper 
>> As for 1 possible solution which we cant use:
>>
>> If it were not for the sysret stupidness[1] of requiring the hypervisor
>> to move to the guest stack before executing the `sysret` instruction, we
>> could do away with the stack tables for NMIs and MCEs alltogether, and
>> the above crazyness would be easy to fix.  However, the overhead of
>> always using iret to return to ring3 is not likely to be acceptable,
>> meaning that we cannot "fix" the problem by discarding interrupt stacks
>> and doing everything properly on the main hypervisor stack.
>>
> 
> 64-bit Intel processors have SYSEXIT, right?  It's worth pointing out the
> following alternatives, even if we never actually use them:
> 
> 1. Use SYSEXIT on Intel processors and let the bugs (or some subset of
> them) remain on AMD
> 2. Use SYSEXIT on Intel processors and IRET on AMD

SYSEXIT isn't very suitable because you'd have to corrupt %edx,
i.e. it couldn't be used for hypercalls with just 1 or 2 arguments.

Plus our GDT layout doesn't match that needed by SYSEXIT, yet
some of the selector values are part of the ABI.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 12:43:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 12:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVMX-00039s-1l; Mon, 03 Dec 2012 12:43:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfVMV-00039m-LB
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 12:43:03 +0000
Received: from [85.158.139.83:4971] by server-15.bemta-5.messagelabs.com id
	6A/3F-26920-65E9CB05; Mon, 03 Dec 2012 12:43:02 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1354538580!23865772!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26793 invoked from network); 3 Dec 2012 12:43:01 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 12:43:01 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so4591823iej.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 04:43:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=OwlkPj6OS2ENy8QWELLdru5Sy6VAooak5Tkn5E1MIEE=;
	b=qqziyjhkDcT6vYcfcK/GlLw3rT7S+GSf92YQYAj1Sg2NrxZeEtP2GUPXPUSrinOsVz
	mpfTd5BaaSMisaUINtHn2OWpYmWuWKXj0kSXAXhLVCOGqX3VC24oZ8N69j/uegsH++MC
	SGffbKNWbLn/GZVCE0ns+Ts+aV/1/vPOGJIHVwGDS+MOidihkXTWXRYlrNQoLfOTxo7W
	rtxGTy3erzobK0wudAF1GjM/YYJ+g+mgip4okaZCzmRJgCgGY5TgGg0iCXe7izlPvqcb
	IFSLvyI5NapKJyCsdMgIuhuOQmOkCAvRGid6VApTAkhDiJdgE5hPppUsTu8UECvhZ7+J
	tvCg==
MIME-Version: 1.0
Received: by 10.50.34.226 with SMTP id c2mr1842052igj.24.1354538580160; Mon,
	03 Dec 2012 04:43:00 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 04:43:00 -0800 (PST)
In-Reply-To: <50BC74F202000078000AD2FA@nat28.tlf.novell.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC74F202000078000AD2FA@nat28.tlf.novell.com>
Date: Mon, 3 Dec 2012 20:43:00 +0800
X-Google-Sender-Auth: b876Rvd5UPMHfdEcYiZD1QzVHN4
Message-ID: <CAKhsbWYh+FPR0rE1LBNVNEanqLKGSD-McCZz-oMRC80yqVk6kQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4004368494276883589=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4004368494276883589==
Content-Type: multipart/alternative; boundary=14dae9340595e7b08304cff21693

--14dae9340595e7b08304cff21693
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Dec 3, 2012 at 4:46 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 03.12.12 at 04:47, "G.R." <firemeteor@users.sourceforge.net> wrote:
> > Hi developers,
> > I met some domU issues and the log suggests missing interrupt.
> > Details from here:
> > http://www.gossamer-threads.com/lists/xen/users/263938#263938
> > In summary, this is the suspicious log:
> >
> > (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
> >
> > I've checked the code in question and found that mode 3 is an
> 'reserved_1'
> > mode.
> > I want to trace down the source of this mode setting to root-cause the
> > issue.
> > But I'm not an xen developer, and am even a newbie as a xen user.
> > Could anybody give me instructions about how to enable detailed debug
> log?
> > It could be better if I can get advice about experiments to perform /
> > switches to try out etc.
>
> Please check the list archives, this issue was discussed in great
> lengths a couple of months ago (and should be fixed in current
> trees).
>
>
Thanks, I just found the thread you mentioned:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html

I can confirm that the Debian version of Xen 4.1.3 is still missing this
patch.
Given that Xen 4.1.3 was released after the patch, I guess it was only
intended for v4.2.0?

Do you believe this patch would also work for v4.1.3?
Anyway, I'm going to try my luck later...

Thanks,
Timothy

--14dae9340595e7b08304cff21693
--14dae9340595e7b08304cff21693--


--===============4004368494276883589==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4004368494276883589==--



From xen-devel-bounces@lists.xen.org Mon Dec 03 13:07:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVj7-0003T0-8p; Mon, 03 Dec 2012 13:06:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfVj5-0003Sp-Bc
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:06:23 +0000
Received: from [85.158.139.211:62767] by server-11.bemta-5.messagelabs.com id
	94/7F-03409-EC3ACB05; Mon, 03 Dec 2012 13:06:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1354539968!18797949!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13158 invoked from network); 3 Dec 2012 13:06:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 13:06:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 13:06:11 +0000
Message-Id: <50BCB1CD02000078000AD3F7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 13:06:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "G.R." <firemeteor@users.sourceforge.net>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC74F202000078000AD2FA@nat28.tlf.novell.com>
	<CAKhsbWYh+FPR0rE1LBNVNEanqLKGSD-McCZz-oMRC80yqVk6kQ@mail.gmail.com>
In-Reply-To: <CAKhsbWYh+FPR0rE1LBNVNEanqLKGSD-McCZz-oMRC80yqVk6kQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 13:43, "G.R." <firemeteor@users.sourceforge.net> wrote:
> On Mon, Dec 3, 2012 at 4:46 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 03.12.12 at 04:47, "G.R." <firemeteor@users.sourceforge.net> wrote:
>> > Hi developers,
>> > I met some domU issues and the log suggests missing interrupt.
>> > Details from here:
>> > http://www.gossamer-threads.com/lists/xen/users/263938#263938 
>> > In summary, this is the suspicious log:
>> >
>> > (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>> >
>> > I've checked the code in question and found that mode 3 is an
>> 'reserved_1'
>> > mode.
>> > I want to trace down the source of this mode setting to root-cause the
>> > issue.
>> > But I'm not an xen developer, and am even a newbie as a xen user.
>> > Could anybody give me instructions about how to enable detailed debug
>> log?
>> > It could be better if I can get advice about experiments to perform /
>> > switches to try out etc.
>>
>> Please check the list archives, this issue was discussed in great
>> lengths a couple of months ago (and should be fixed in current
>> trees).
>>
>>
> Thanks, I just found the thread you mentioned:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html 
> 
> I can confirm that the debian version of the xen 4.1.3 still misses this
> patch.
> Given the fact that xen 4.1.3 releases after the patch, I guess it is only
> intended for v4.2.0?
> 
> Do you believe this patch should work for v4.1.3 either?

I don't think a backport of it to 4.1.x has been requested to date.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 13:07:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVjg-0003Xy-Mj; Mon, 03 Dec 2012 13:07:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfVje-0003XI-L0
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:06:59 +0000
Received: from [85.158.138.51:24251] by server-15.bemta-3.messagelabs.com id
	4F/7A-23779-1F3ACB05; Mon, 03 Dec 2012 13:06:57 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354540015!12764229!1
X-Originating-IP: [209.85.223.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12426 invoked from network); 3 Dec 2012 13:06:56 -0000
Received: from mail-ie0-f172.google.com (HELO mail-ie0-f172.google.com)
	(209.85.223.172)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 13:06:56 -0000
Received: by mail-ie0-f172.google.com with SMTP id c13so5009216ieb.17
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 05:06:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=bdD8/vNJvWlrt2TpZkQOs6s/rYuKxQ3eNi8epBDxGzk=;
	b=Hc5F3uaQmY9UBpMf6td/Qzn0TZkF3XDr1L9TcYSWW7SGLZ8MydEuOCN7ncPZZGCmJc
	esNXOkgFBjaqkNEVlmye04NWGq39hiGP300K88Q/FltSDKzcIMbuzOOGoEdycJ83Ab3R
	ynwjXQDXXMwnYC1BR42PDTuiEtyGEFgY05/PKpBf1X7r8Lo1oNgU7QbZiierB2Pj9TJq
	6HCZ2PVI6PLpOv88jMU/pljn4Tso80u88bDpPjO9G8L/P3PH3QeUMjIrxViRdAiwFM+b
	+AdRTOuus5ebh+ojARwaOKBBEnAay1V676mqJjFxHjtH8+CbQZSUo5JGozX4vUvCyN9/
	hspA==
MIME-Version: 1.0
Received: by 10.50.178.106 with SMTP id cx10mr5976711igc.24.1354540000615;
	Mon, 03 Dec 2012 05:06:40 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 05:06:40 -0800 (PST)
In-Reply-To: <B6C2EB9186482D47BD0C5A9A483456440339D2CD@SHSMSX101.ccr.corp.intel.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<B6C2EB9186482D47BD0C5A9A483456440339D2CD@SHSMSX101.ccr.corp.intel.com>
Date: Mon, 3 Dec 2012 21:06:40 +0800
X-Google-Sender-Auth: eTK7Tix7SW-QYFe1fb_q_DiIusw
Message-ID: <CAKhsbWaA5vqWMxr_v4_ADct5WqrCTrRSOAHQjO=dh7M6nLLNrw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2949336765687338523=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2949336765687338523==
Content-Type: multipart/alternative; boundary=e89a8f5036c8921d1904cff26b69

--e89a8f5036c8921d1904cff26b69
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

On Mon, Dec 3, 2012 at 3:06 PM, Zhang, Xiantao <xiantao.zhang@intel.com> wrote:

>  Maybe you need to provide more information about your VGA device, for
> example, "lspci -vvv". In addition, from your log, it seems the expansion
> ROM BAR is not correctly handled. You may refer to this wiki page to check
> whether something is missing on your side:
> http://wiki.xen.org/wiki/Xen_VGA_Passthrough
>
> Xiantao
>
I'm using the IGD that comes with the H77M chipset; here is the lspci -vvv
output from dom0:
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd
Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA
controller])
    Subsystem: ASRock Incorporation Device 0162
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort-
<MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 95
    Region 0: Memory at f7800000 (64-bit, non-prefetchable) [size=4M]
    Region 2: Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Region 4: I/O ports at f000 [size=64]
    Expansion ROM at <unassigned> [disabled]
    Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Address: fee00018  Data: 0000
    Capabilities: [d0] Power Management version 2
        Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [a4] PCI Advanced Features
        AFCap: TP+ FLR+
        AFCtrl: FLR-
        AFStatus: TP-
    Kernel driver in use: i915

And in domU respectively:

00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd
Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA
controller])
    Subsystem: ASRock Incorporation Device 0162
    Physical Slot: 2
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort-
<MAbort- >SERR- <PERR- INTx+
    Latency: 64
    Interrupt: pin A routed to IRQ 78
    Region 0: Memory at f1000000 (64-bit, non-prefetchable) [size=4M]
    Region 2: Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Region 4: I/O ports at c100 [size=64]
    Expansion ROM at <unassigned> [disabled]
    Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Address: fee33000  Data: 4300
    Capabilities: [d0] Power Management version 2
        Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Kernel driver in use: i915

They look pretty much the same to me, apart from some of the interrupt
configuration.
The 'Expansion ROM' is disabled in both cases.
But what does this mean after all? Could you give a brief introduction for
my education?

I did not find anything obviously missing from the wiki page.
I would like to note that I'm using an ASRock H77M-ITX board.
Even though the chipset is not formally classified as VT-d capable (yes, I
noticed your email domain),
Intel is still shipping H77-based VT-d capable boards (
http://www.intel.com/support/motherboards/desktop/sb/CS-030922.htm).
I guess this should not count as a missing piece, right?

Thanks,
Timothy

>
> *From:* xen-devel-bounces@lists.xen.org [mailto:
> xen-devel-bounces@lists.xen.org] *On Behalf Of *G.R.
> *Sent:* Monday, December 03, 2012 11:48 AM
> *To:* xen-devel
> *Subject:* [Xen-devel] Issue about domU missing interrupt
>
> Hi developers,
> I met some domU issues and the log suggests missing interrupt.
> Details from here:
> http://www.gossamer-threads.com/lists/xen/users/263938#263938
> In summary, this is the suspicious log:
>
> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>
> I've checked the code in question and found that mode 3 is an 'reserved_1'
> mode.
> I want to trace down the source of this mode setting to root-cause the
> issue.
> But I'm not an xen developer, and am even a newbie as a xen user.
> Could anybody give me instructions about how to enable detailed debug log?
> It could be better if I can get advice about experiments to perform /
> switches to try out etc.
>
> My SW config:
> dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> domU: Debian wheezy 3.2.x stock kernel.
>
> Thanks,
> Timothy
>
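On the MSI capability dumps above: in the x86 MSI message data format (per the Intel SDM), the delivery mode lives in bits 10:8 of the Data register. So the domU value `Data: 4300` decodes to delivery mode 3, i.e. exactly the reserved mode that vmsi.c rejects, while the dom0 value `Data: 0000` decodes to mode 0 (Fixed). A minimal decoding sketch; the helper names here are illustrative, not from Xen's source:

```python
# Decode the delivery-mode field (bits 10:8) of an x86 MSI message data
# register, per the Intel SDM MSI message data format.

DELIVERY_MODES = {
    0: "Fixed",
    1: "Lowest Priority",
    2: "SMI",
    3: "Reserved",      # the 'reserved_1' mode vmsi.c complains about
    4: "NMI",
    5: "INIT",
    6: "Reserved",
    7: "ExtINT",
}

def msi_delivery_mode(data: int) -> int:
    """Extract bits 10:8 of the MSI data register."""
    return (data >> 8) & 0x7

# Values taken from the lspci dumps above:
for label, data in [("dom0", 0x0000), ("domU", 0x4300)]:
    mode = msi_delivery_mode(data)
    print(f"{label}: Data {data:04x} -> delivery mode {mode} "
          f"({DELIVERY_MODES[mode]})")
```

If this reading is right, it points at whatever programmed the guest-visible MSI data field, rather than at the device itself.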

--e89a8f5036c8921d1904cff26b69
Content-Type: text/html; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

<div class=3D"gmail_extra"><br><div class=3D"gmail_quote">On Mon, Dec 3, 20=
12 at 3:06 PM, Zhang, Xiantao <span dir=3D"ltr">&lt;<a href=3D"mailto:xiant=
ao.zhang@intel.com" target=3D"_blank">xiantao.zhang@intel.com</a>&gt;</span=
> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-=
left:1px solid rgb(204,204,204);padding-left:1ex">





<div link=3D"blue" vlink=3D"purple" lang=3D"EN-US">
<div>
<p class=3D""><span style=3D"font-size:11pt;font-family:&quot;Calibri&quot;=
,&quot;sans-serif&quot;;color:rgb(31,73,125)">Maybe you need to =A0provide =
more information about your VGA device,=A0 for example,=A0 =93lspci =96vvv=
=94. =A0=A0In addition,=A0 from your log, seems expansion rom bar is not
 correctly handled. =A0You may refer to this wiki page to check whether som=
ething is missed in your side. =A0=A0<a href=3D"http://wiki.xen.org/wiki/Xe=
n_VGA_Passthrough" target=3D"_blank">http://wiki.xen.org/wiki/Xen_VGA_Passt=
hrough</a><u></u><u></u></span></p>

<p class=3D""><span style=3D"font-size:11pt;font-family:&quot;Calibri&quot;=
,&quot;sans-serif&quot;;color:rgb(31,73,125)">Xiantao<u></u><u></u></span><=
/p>
<p class=3D""><span style=3D"font-size:11pt;font-family:&quot;Calibri&quot;=
,&quot;sans-serif&quot;;color:rgb(31,73,125)"><u></u></span></p></div></div=
></blockquote><div>I&#39;m using the IGD coming with the H77M chipset, here=
 are the lspci -vvv output from dom0:<br>
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Ge=
n Core processor Graphics Controller (rev 09) (prog-if 00 [VGA controller])=
<br>=A0=A0=A0 Subsystem: ASRock Incorporation Device 0162<br>=A0=A0=A0 Cont=
rol: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- S=
ERR- FastB2B- DisINTx+<br>
=A0=A0=A0 Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=3Dfast &gt;TAbor=
t- &lt;TAbort- &lt;MAbort- &gt;SERR- &lt;PERR- INTx-<br>=A0=A0=A0 Latency: =
0<br>=A0=A0=A0 Interrupt: pin A routed to IRQ 95<br>=A0=A0=A0 Region 0: Mem=
ory at f7800000 (64-bit, non-prefetchable) [size=3D4M]<br>
=A0=A0=A0 Region 2: Memory at e0000000 (64-bit, prefetchable) [size=3D256M]=
<br>=A0=A0=A0 Region 4: I/O ports at f000 [size=3D64]<br>=A0=A0=A0 Expansio=
n ROM at &lt;unassigned&gt; [disabled]<br>=A0=A0=A0 Capabilities: [90] MSI:=
 Enable+ Count=3D1/1 Maskable- 64bit-<br>
=A0=A0=A0 =A0=A0=A0 Address: fee00018=A0 Data: 0000<br>=A0=A0=A0 Capabiliti=
es: [d0] Power Management version 2<br>=A0=A0=A0 =A0=A0=A0 Flags: PMEClk- D=
SI+ D1- D2- AuxCurrent=3D0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)<br>=A0=A0=A0 =
=A0=A0=A0 Status: D0 NoSoftRst- PME-Enable- DSel=3D0 DScale=3D0 PME-<br>
=A0=A0=A0 Capabilities: [a4] PCI Advanced Features<br>=A0=A0=A0 =A0=A0=A0 A=
FCap: TP+ FLR+<br>=A0=A0=A0 =A0=A0=A0 AFCtrl: FLR-<br>=A0=A0=A0 =A0=A0=A0 A=
FStatus: TP-<br>=A0=A0=A0 Kernel driver in use: i915<br><br>And in domU res=
pectively:<br><br>00:02.0 VGA compatible controller: Intel Corporation Xeon=
 E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09) (prog-if 00=
 [VGA controller])<br>

From xen-devel-bounces@lists.xen.org Mon Dec 03 13:07:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVjg-0003Xy-Mj; Mon, 03 Dec 2012 13:07:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfVje-0003XI-L0
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:06:59 +0000
Received: from [85.158.138.51:24251] by server-15.bemta-3.messagelabs.com id
	4F/7A-23779-1F3ACB05; Mon, 03 Dec 2012 13:06:57 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354540015!12764229!1
X-Originating-IP: [209.85.223.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12426 invoked from network); 3 Dec 2012 13:06:56 -0000
Received: from mail-ie0-f172.google.com (HELO mail-ie0-f172.google.com)
	(209.85.223.172)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 13:06:56 -0000
Received: by mail-ie0-f172.google.com with SMTP id c13so5009216ieb.17
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 05:06:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=bdD8/vNJvWlrt2TpZkQOs6s/rYuKxQ3eNi8epBDxGzk=;
	b=Hc5F3uaQmY9UBpMf6td/Qzn0TZkF3XDr1L9TcYSWW7SGLZ8MydEuOCN7ncPZZGCmJc
	esNXOkgFBjaqkNEVlmye04NWGq39hiGP300K88Q/FltSDKzcIMbuzOOGoEdycJ83Ab3R
	ynwjXQDXXMwnYC1BR42PDTuiEtyGEFgY05/PKpBf1X7r8Lo1oNgU7QbZiierB2Pj9TJq
	6HCZ2PVI6PLpOv88jMU/pljn4Tso80u88bDpPjO9G8L/P3PH3QeUMjIrxViRdAiwFM+b
	+AdRTOuus5ebh+ojARwaOKBBEnAay1V676mqJjFxHjtH8+CbQZSUo5JGozX4vUvCyN9/
	hspA==
MIME-Version: 1.0
Received: by 10.50.178.106 with SMTP id cx10mr5976711igc.24.1354540000615;
	Mon, 03 Dec 2012 05:06:40 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 05:06:40 -0800 (PST)
In-Reply-To: <B6C2EB9186482D47BD0C5A9A483456440339D2CD@SHSMSX101.ccr.corp.intel.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<B6C2EB9186482D47BD0C5A9A483456440339D2CD@SHSMSX101.ccr.corp.intel.com>
Date: Mon, 3 Dec 2012 21:06:40 +0800
X-Google-Sender-Auth: eTK7Tix7SW-QYFe1fb_q_DiIusw
Message-ID: <CAKhsbWaA5vqWMxr_v4_ADct5WqrCTrRSOAHQjO=dh7M6nLLNrw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2949336765687338523=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2949336765687338523==
Content-Type: multipart/alternative; boundary=e89a8f5036c8921d1904cff26b69

--e89a8f5036c8921d1904cff26b69
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

On Mon, Dec 3, 2012 at 3:06 PM, Zhang, Xiantao <xiantao.zhang@intel.com> wrote:

> Maybe you need to provide more information about your VGA device, for
> example, "lspci -vvv". In addition, from your log it seems the expansion
> ROM BAR is not correctly handled. You may refer to this wiki page to check
> whether something is missing on your side.
> http://wiki.xen.org/wiki/Xen_VGA_Passthrough
>
> Xiantao
>
I'm using the IGD that comes with the H77M chipset; here is the lspci -vvv
output from dom0:
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd
Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA
controller])
    Subsystem: ASRock Incorporation Device 0162
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort-
<MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 95
    Region 0: Memory at f7800000 (64-bit, non-prefetchable) [size=4M]
    Region 2: Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Region 4: I/O ports at f000 [size=64]
    Expansion ROM at <unassigned> [disabled]
    Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Address: fee00018  Data: 0000
    Capabilities: [d0] Power Management version 2
        Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [a4] PCI Advanced Features
        AFCap: TP+ FLR+
        AFCtrl: FLR-
        AFStatus: TP-
    Kernel driver in use: i915

And in domU respectively:

00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd
Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA
controller])
    Subsystem: ASRock Incorporation Device 0162
    Physical Slot: 2
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort-
<MAbort- >SERR- <PERR- INTx+
    Latency: 64
    Interrupt: pin A routed to IRQ 78
    Region 0: Memory at f1000000 (64-bit, non-prefetchable) [size=4M]
    Region 2: Memory at e0000000 (64-bit, prefetchable) [size=256M]
    Region 4: I/O ports at c100 [size=64]
    Expansion ROM at <unassigned> [disabled]
    Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Address: fee33000  Data: 4300
    Capabilities: [d0] Power Management version 2
        Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Kernel driver in use: i915

They look pretty much the same to me, except for some interrupt
configuration.
The 'expansion ROM' is disabled in both cases.
But what does that mean, after all? Could you give me a brief introduction,
for my education?
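For what it's worth, Xen's "Unsupported delivery mode 3" message and the domU's MSI Data value 4300 above seem to line up. A minimal sketch (my own helper, decoding the standard x86 MSI Message Data layout: vector in bits 7:0, delivery mode in bits 10:8, trigger in bit 15) shows that 0x4300 encodes the reserved delivery mode 3, while dom0's 0x0000 is a plain fixed-mode message:

```python
# Decode an x86 MSI Message Data register value into its fields.
# Delivery mode 3 is a reserved encoding, which is what Xen's vmsi.c rejects.
DELIVERY_MODES = {
    0: "fixed", 1: "lowest-priority", 2: "SMI", 3: "reserved",
    4: "NMI", 5: "INIT", 6: "reserved", 7: "ExtINT",
}

def decode_msi_data(data: int) -> dict:
    """Split an MSI Message Data value into vector/delivery-mode/trigger."""
    mode = (data >> 8) & 0x7
    return {
        "vector": data & 0xFF,
        "delivery_mode": mode,
        "delivery_mode_name": DELIVERY_MODES[mode],
        "trigger": "level" if (data >> 15) & 1 else "edge",
    }
```

Decoding the two values from the lspci listings: `decode_msi_data(0x0000)` gives fixed mode, vector 0 (the dom0 side), while `decode_msi_data(0x4300)` gives delivery mode 3, i.e. "reserved", matching the hypervisor complaint about the domU.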

I did not find anything obviously missing from the wiki page.
I would like to note that I'm using an ASRock H77M-ITX board.
Even though the chipset is not formally classified as VT-d capable (yes, I
noticed your email domain),
Intel is still shipping H77-based VT-d-capable boards (
http://www.intel.com/support/motherboards/desktop/sb/CS-030922.htm).
I guess this should not count as a missing piece, right?

Thanks,
Timothy

>
> *From:* xen-devel-bounces@lists.xen.org [mailto:
> xen-devel-bounces@lists.xen.org] *On Behalf Of *G.R.
> *Sent:* Monday, December 03, 2012 11:48 AM
> *To:* xen-devel
> *Subject:* [Xen-devel] Issue about domU missing interrupt
>
> Hi developers,
> I met some domU issues and the log suggests missing interrupt.
> Details from here:
> http://www.gossamer-threads.com/lists/xen/users/263938#263938
> In summary, this is the suspicious log:
>
> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>
> I've checked the code in question and found that mode 3 is a 'reserved_1'
> mode.
> I want to trace down the source of this mode setting to root-cause the
> issue.
> But I'm not an xen developer, and am even a newbie as a xen user.
> Could anybody give me instructions about how to enable detailed debug log?
> It could be better if I can get advice about experiments to perform /
> switches to try out etc.
>
> My SW config:
> dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> domU: Debian wheezy 3.2.x stock kernel.
>
> Thanks,
> Timothy
>

--e89a8f5036c8921d1904cff26b69--


--===============2949336765687338523==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2949336765687338523==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 13:15:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVrV-0004FE-S9; Mon, 03 Dec 2012 13:15:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfVrV-0004F6-0x
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:15:05 +0000
Received: from [85.158.139.83:6180] by server-16.bemta-5.messagelabs.com id
	EF/AE-21311-7D5ACB05; Mon, 03 Dec 2012 13:15:03 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354540495!28157676!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27347 invoked from network); 3 Dec 2012 13:14:57 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 13:14:57 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so4649506iej.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 05:14:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=7pk67qxt3wsyExlCUpX/3WLeN4/fqrKXAFzeIgR54zY=;
	b=XtT4Kb3GnkEkc7B6Pxl6hZsn96woUOeu1BUptngD00b5kJY625WRZSzrRDqeFzRh1C
	YXmsnLMOwUSKs0eGMtJy+YXu6z4mpRQ/i9BsgjGaSsVHOEbOMXTlQPZJ3Rax8zZR8Li3
	wL/+5N4uPJeqN3oElGmG/vyDzzSmZ1Y3oK3je6oePTMEdWnoGw0Ua5bof7BuNtzYxtg1
	uX2ewHjfo7kuYjWnMWcyeKB0o/HSkwGLC5jAaZbRrTH5+6zanL83HPgCDygbXFD3mHGK
	pJ9eJ4Ka7+V54+mVY4kfC039S2ef2LhwSBaN5SCpZ2Y9GlLY6BoF1qJxznvRNeRy4+/7
	794A==
MIME-Version: 1.0
Received: by 10.42.41.144 with SMTP id p16mr7216719ice.39.1354540495343; Mon,
	03 Dec 2012 05:14:55 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 05:14:55 -0800 (PST)
In-Reply-To: <50BC7BAC.8050107@citrix.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
Date: Mon, 3 Dec 2012 21:14:55 +0800
X-Google-Sender-Auth: 1UrNEsKdFp-tZ5ctjw4-REqqATg
Message-ID: <CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Mats Petersson <mats.petersson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2115664862363947947=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2115664862363947947==
Content-Type: multipart/alternative; boundary=20cf301d42320f0ee104cff28910

--20cf301d42320f0ee104cff28910
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson <mats.petersson@citrix.com> wrote:

> On 03/12/12 03:47, G.R. wrote:
>
>> Hi developers,
>> I met some domU issues and the log suggests missing interrupt.
>> Details from here: http://www.gossamer-threads.com/lists/xen/users/263938#263938
>> In summary, this is the suspicious log:
>>
>> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>
>> I've checked the code in question and found that mode 3 is a
>> 'reserved_1' mode.
>> I want to trace down the source of this mode setting to root-cause the
>> issue.
>> But I'm not an xen developer, and am even a newbie as a xen user.
>> Could anybody give me instructions about how to enable detailed debug log?
>> It could be better if I can get advice about experiments to perform /
>> switches to try out etc.
>>
>> My SW config:
>> dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>> domU: Debian wheezy 3.2.x stock kernel.
>>
>> Thanks,
>> Timothy
>>
> Are you passing hardware (PCI Passthrough) to the HVM guest?
> What are the exact messages in the DomU?
>
>
Yes, I'm doing PCI passthrough (the IGD, audio, and USB controllers).
But this is actually a PVHVM guest, since the Debian stock kernel has pvops
enabled.
And when I tried another, non-pvops Linux distro (OpenELEC v2.0), I did not
see such an MSI-related error message.
Actually, with that domU I do not see anything obviously wrong in the log,
but I also see nothing on the panel (the panel receives no signal and goes
into power saving) :-(


Back to the issue I was reporting, the domU log looks like this:

Dec 2 21:52:44 debvm kernel: [ 1085.604071] [drm:i915_hangcheck_ring_idle]
*ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
3354], missed IRQ?
Dec 2 21:56:50 debvm kernel: [ 1332.076071] [drm:i915_hangcheck_ring_idle]
*ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
11297], missed IRQ?
Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
timeout, switching to polling mode: last cmd=0x000f0000
Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
codec, disabling MSI: last cmd=0x002f0600
Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
timeout, switching to single_cmd mode: last cmd=0x002f0600


Thanks,
Timothy
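On the question of experiments to try: one low-effort check is to dump the device's PCI config space in both dom0 and the domU and compare what is actually programmed into the MSI capability. A hedged sketch of such a helper follows; it assumes the 32-bit MSI capability layout (consistent with the "64bit-" flag in the lspci output earlier in the thread), and the sysfs path mentioned below is the standard Linux location:

```python
# Walk the PCI capability list in a 256-byte config-space dump and return
# the programmed MSI (enable, address, data) triple, or None if no MSI
# capability is present. Assumes the 32-bit MSI layout (no 64-bit address).
import struct

MSI_CAP_ID = 0x05

def find_msi(config: bytes):
    if not (config[0x06] >> 4) & 1:   # Status register bit 4: capability list
        return None
    off = config[0x34] & 0xFC         # head of the capability list
    while off:
        cap_id, nxt = config[off], config[off + 1]
        if cap_id == MSI_CAP_ID:
            # Message Control (16-bit), Message Address (32-bit), Message Data (16-bit)
            ctrl, addr, data = struct.unpack_from("<HIH", config, off + 2)
            return bool(ctrl & 1), addr, data
        off = nxt & 0xFC
    return None
```

Running this in both dom0 and the guest against, e.g., /sys/bus/pci/devices/0000:00:02.0/config (the IGD in this thread) and comparing the (enable, address, data) triples would show directly what each side believes is programmed; going by the lspci listings above, the domU side should come back with data 0x4300, the value that decodes to the reserved delivery mode.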

--20cf301d42320f0ee104cff28910--


--===============2115664862363947947==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2115664862363947947==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 13:15:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVrV-0004FE-S9; Mon, 03 Dec 2012 13:15:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfVrV-0004F6-0x
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:15:05 +0000
Received: from [85.158.139.83:6180] by server-16.bemta-5.messagelabs.com id
	EF/AE-21311-7D5ACB05; Mon, 03 Dec 2012 13:15:03 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354540495!28157676!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27347 invoked from network); 3 Dec 2012 13:14:57 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 13:14:57 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so4649506iej.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 05:14:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=7pk67qxt3wsyExlCUpX/3WLeN4/fqrKXAFzeIgR54zY=;
	b=XtT4Kb3GnkEkc7B6Pxl6hZsn96woUOeu1BUptngD00b5kJY625WRZSzrRDqeFzRh1C
	YXmsnLMOwUSKs0eGMtJy+YXu6z4mpRQ/i9BsgjGaSsVHOEbOMXTlQPZJ3Rax8zZR8Li3
	wL/+5N4uPJeqN3oElGmG/vyDzzSmZ1Y3oK3je6oePTMEdWnoGw0Ua5bof7BuNtzYxtg1
	uX2ewHjfo7kuYjWnMWcyeKB0o/HSkwGLC5jAaZbRrTH5+6zanL83HPgCDygbXFD3mHGK
	pJ9eJ4Ka7+V54+mVY4kfC039S2ef2LhwSBaN5SCpZ2Y9GlLY6BoF1qJxznvRNeRy4+/7
	794A==
MIME-Version: 1.0
Received: by 10.42.41.144 with SMTP id p16mr7216719ice.39.1354540495343; Mon,
	03 Dec 2012 05:14:55 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 05:14:55 -0800 (PST)
In-Reply-To: <50BC7BAC.8050107@citrix.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
Date: Mon, 3 Dec 2012 21:14:55 +0800
X-Google-Sender-Auth: 1UrNEsKdFp-tZ5ctjw4-REqqATg
Message-ID: <CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Mats Petersson <mats.petersson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2115664862363947947=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2115664862363947947==
Content-Type: multipart/alternative; boundary=20cf301d42320f0ee104cff28910

--20cf301d42320f0ee104cff28910
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson <mats.petersson@citrix.com>wrote:

> On 03/12/12 03:47, G.R. wrote:
>
>> Hi developers,
>> I've hit some domU issues, and the log suggests a missing interrupt.
>> Details from here: http://www.gossamer-threads.com/lists/xen/users/263938#263938
>> In summary, this is the suspicious log:
>>
>> (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>
>> I've checked the code in question and found that mode 3 is a
>> 'reserved_1' mode.
>> I want to track down the source of this mode setting to root-cause the
>> issue.
>> But I'm not a Xen developer, and I'm even a newbie as a Xen user.
>> Could anybody give me instructions on how to enable detailed debug logging?
>> It would be even better if I could get advice about experiments to perform /
>> switches to try out, etc.
>>
>> My SW config:
>> dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>> domU: Debian wheezy 3.2.x stock kernel.
>>
>> Thanks,
>> Timothy
>>
> Are you passing hardware (PCI Passthrough) to the HVM guest?
> What are the exact messages in the DomU?
>
>
Yes, I'm doing PCI passthrough (the IGD, audio && USB controllers).
But this is actually a PVHVM guest, since the Debian stock kernel has PVOPS
enabled.
And when I tried another Linux distro with PVOPS disabled (OpenELEC v2.0), I
did not see such MSI-related error messages.
Actually, with that domU I do not see anything obviously wrong in the log,
but I also see nothing on the panel (the panel receives no signal and goes
into power-saving) :-(
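
For reference, a passthrough setup like the one described above corresponds
to a guest config along these lines. This is a sketch only: the BDFs are
illustrative, taken from a typical Intel layout where 00:02.0 is the IGD,
not taken from this thread -- check lspci for the real ones:

```
# Illustrative xl/xm HVM guest config fragment for IGD + audio + USB
# passthrough.  The BDFs below are assumptions, not from this report.
builder = 'hvm'
pci = [ '00:02.0',    # Intel integrated graphics (IGD)
        '00:1b.0',    # HDA audio controller
        '00:1d.0' ]   # USB controller
gfx_passthru = 1      # needed for primary/IGD graphics passthrough
```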


Back to the issue I was reporting, the domU log looks like this:

Dec 2 21:52:44 debvm kernel: [ 1085.604071] [drm:i915_hangcheck_ring_idle]
*ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
3354], missed IRQ?
Dec 2 21:56:50 debvm kernel: [ 1332.076071] [drm:i915_hangcheck_ring_idle]
*ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
11297], missed IRQ?
Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
timeout, switching to polling mode: last cmd=0x000f0000
Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
codec, disabling MSI: last cmd=0x002f0600
Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
timeout, switching to single_cmd mode: last cmd=0x002f0600


Thanks,
Timothy
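
On the debug-logging question raised in the original mail, a common starting
point is to raise the hypervisor's log levels and read its message ring. A
sketch only -- the option and debug-key names below are from the Xen
documentation of roughly this era, so verify them against the installed
version:

```shell
# On the Xen (hypervisor) line of the bootloader config, add:
#   loglvl=all guest_loglvl=all
# then reboot.  Afterwards, read the hypervisor message ring:
xl dmesg                  # 'xm dmesg' with the xend toolstack on 4.1
# Debug keys dump extra state into the same ring:
xl debug-keys q           # domain / vCPU state
xl debug-keys i           # interrupt bindings
xl dmesg | grep -i 'delivery mode'   # find the vmsi warnings again
```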

--20cf301d42320f0ee104cff28910--


--===============2115664862363947947==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2115664862363947947==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 13:20:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfVwC-0004VS-Iu; Mon, 03 Dec 2012 13:19:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TfVwA-0004VD-OL
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:19:54 +0000
Received: from [85.158.137.99:15472] by server-11.bemta-3.messagelabs.com id
	F2/33-19361-9F6ACB05; Mon, 03 Dec 2012 13:19:53 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354540792!17688213!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14274 invoked from network); 3 Dec 2012 13:19:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 13:19:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,206,1355097600"; d="scan'208";a="46371233"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	03 Dec 2012 13:19:48 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Mon, 3 Dec 2012
	08:19:48 -0500
Message-ID: <50BCA6F3.8060804@citrix.com>
Date: Mon, 3 Dec 2012 13:19:47 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: G.R. <firemeteor@users.sourceforge.net>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
In-Reply-To: <CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 13:14, G.R. wrote:
> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson 
> <mats.petersson@citrix.com <mailto:mats.petersson@citrix.com>> wrote:
>
>     On 03/12/12 03:47, G.R. wrote:
>
>         Hi developers,
>         I met some domU issues and the log suggests missing interrupt.
>         Details from here:
>         http://www.gossamer-threads.com/lists/xen/users/263938#263938
>         In summary, this is the suspicious log:
>
>         (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>
>         I've checked the code in question and found that mode 3 is an
>         'reserved_1' mode.
>         I want to trace down the source of this mode setting to
>         root-cause the issue.
>         But I'm not an xen developer, and am even a newbie as a xen user.
>         Could anybody give me instructions about how to enable
>         detailed debug log?
>         It could be better if I can get advice about experiments to
>         perform / switches to try out etc.
>
>         My SW config:
>         dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>         domU: Debian wheezy 3.2.x stock kernel.
>
>         Thanks,
>         Timothy
>
>     Are you passing hardware (PCI Passthrough) to the HVM guest?
>     What are the exact messages in the DomU?
>
>
> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
> But this is actually a PVHVM guest since debian stock kernel has PVOP 
> enabled.
> And when I tried another PVOP disabled linux distro (openelec v2.0), I 
> did not see such msi related error message.
> Actually, with that domU I do not see anything obvious wrong from the 
> log, but I also see nothing from panel (panel receive no signal and go 
> power-saving) :-(
>
>
> Back to the issue I was reporting, the domU log looks like this:
>
> Dec 2 21:52:44 debvm kernel: [ 1085.604071] 
> [drm:i915_hangcheck_ring_idle]
> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
> 3354], missed IRQ?
> Dec 2 21:56:50 debvm kernel: [ 1332.076071] 
> [drm:i915_hangcheck_ring_idle]
> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
> 11297], missed IRQ?
> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
> timeout, switching to polling mode: last cmd=0x000f0000
> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
> codec, disabling MSI: last cmd=0x002f0600
> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
> timeout, switching to single_cmd mode: last cmd=0x002f0600
>
>
> Thanks,
> Timothy
It does sound like there is a fix in 4.2.0, as indicated by Jan, that 
addresses this. I'm not fully clued up on what the policy for backporting 
fixes is, and I haven't looked at the complexity of the fix itself, but 
either updating to 4.2.0 or a (personal) backport sounds like the 
right solution here.

Unfortunately, I hadn't seen Jan's reply by the time I wrote my response 
to your original email.

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 13:58:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 13:58:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfWXT-0005GG-H3; Mon, 03 Dec 2012 13:58:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TfWXS-0005GB-F3
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 13:58:26 +0000
Received: from [85.158.137.99:49841] by server-15.bemta-3.messagelabs.com id
	06/B5-23779-100BCB05; Mon, 03 Dec 2012 13:58:25 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1354543103!12795122!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14483 invoked from network); 3 Dec 2012 13:58:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 13:58:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,206,1355097600"; d="scan'208";a="216177542"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	03 Dec 2012 13:58:08 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Mon, 3 Dec 2012
	08:58:08 -0500
Message-ID: <50BCAFEF.7040300@citrix.com>
Date: Mon, 3 Dec 2012 13:58:07 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com>
In-Reply-To: <50BCA6F3.8060804@citrix.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 13:19, Mats Petersson wrote:
> On 03/12/12 13:14, G.R. wrote:
>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>> <mats.petersson@citrix.com <mailto:mats.petersson@citrix.com>> wrote:
>>
>>      On 03/12/12 03:47, G.R. wrote:
>>
>>          Hi developers,
>>          I met some domU issues and the log suggests missing interrupt.
>>          Details from here:
>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>>          In summary, this is the suspicious log:
>>
>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>
>>          I've checked the code in question and found that mode 3 is an
>>          'reserved_1' mode.
>>          I want to trace down the source of this mode setting to
>>          root-cause the issue.
>>          But I'm not an xen developer, and am even a newbie as a xen user.
>>          Could anybody give me instructions about how to enable
>>          detailed debug log?
>>          It could be better if I can get advice about experiments to
>>          perform / switches to try out etc.
>>
>>          My SW config:
>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>          domU: Debian wheezy 3.2.x stock kernel.
>>
>>          Thanks,
>>          Timothy
>>
>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>      What are the exact messages in the DomU?
>>
>>
>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>> enabled.
>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>> did not see such msi related error message.
>> Actually, with that domU I do not see anything obvious wrong from the
>> log, but I also see nothing from panel (panel receive no signal and go
>> power-saving) :-(
>>
>>
>> Back to the issue I was reporting, the domU log looks like this:
>>
>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>> [drm:i915_hangcheck_ring_idle]
>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>> 3354], missed IRQ?
>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>> [drm:i915_hangcheck_ring_idle]
>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>> 11297], missed IRQ?
>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>> timeout, switching to polling mode: last cmd=0x000f0000
>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>> codec, disabling MSI: last cmd=0x002f0600
>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>
>>
>> Thanks,
>> Timothy
> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
> fixes this. I'm not fully clued up to what the policy for backporting
> fixes are, and I haven't looked at the complexity of the fix itself, but
> either updating to the 4.2.0 or a (personal) backport sounds like the
> right solution here.
I had a quick look, and it doesn't look that hard to backport that patch.

--
Mats
>
> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
> to your original email.
>
> --
> Mats
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 15:23:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 15:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfXrL-0006E9-RK; Mon, 03 Dec 2012 15:23:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1TfXrJ-0006E4-Nk
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 15:23:02 +0000
Received: from [193.109.254.147:52435] by server-9.bemta-14.messagelabs.com id
	2B/1E-30773-5D3CCB05; Mon, 03 Dec 2012 15:23:01 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354548143!8522260!1
X-Originating-IP: [209.85.220.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7321 invoked from network); 3 Dec 2012 15:22:25 -0000
Received: from mail-pa0-f45.google.com (HELO mail-pa0-f45.google.com)
	(209.85.220.45)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 15:22:25 -0000
Received: by mail-pa0-f45.google.com with SMTP id bg2so1911884pad.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 07:22:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=XtbX3TEeKVL8FYYsYJuTN6NwKhn0EHv79whLCqZpEGM=;
	b=tuFBYkQY8GauW8TVvyZQtLmL4Kdz+vGhvCdzUghzMlhKa6LASCLHcr+2ww8ww7AOKm
	9rknINt9yCX8smHa+jXp5m7YTA9lbuoWkewWDfTm+0Piiw+qN8cimGk6pHZ112u8FSIM
	wB3OyADzFuuka+KUSds9eDOUgGJMey3IfcpIiYg3qFoCo+XvG3FeAK4/NDnzZHdsn5XX
	modM04c/IN7BSzuRvStR7epnsZeuV04me/Lg0oSayPeWeOWn9eg73mSsinPutDiFgFzJ
	BItIQTx8MzrKBA4n5clDuQr9eKDqnAu4cLGQ6tmdMJ9OrtIAvawkr1ZYNSlANjRZVgbF
	oBSw==
Received: by 10.68.236.131 with SMTP id uu3mr29907629pbc.104.1354548143459;
	Mon, 03 Dec 2012 07:22:23 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id c8sm8217743pav.4.2012.12.03.07.22.21
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 07:22:22 -0800 (PST)
Date: Mon, 3 Dec 2012 10:22:18 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: greg@enjellic.com
Message-ID: <20121203152216.GA11151@phenom.dumpdata.com>
References: <201212020024.qB20OxtB012532@wind.enjellic.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <201212020024.qB20OxtB012532@wind.enjellic.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Updated ATI passthrough patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Dec 01, 2012 at 06:24:59PM -0600, Dr. Greg Wettstein wrote:
> Hi, hope the weekend is going well for everyone.
> 
> I just put a set of updated patches to support ATI passthrough on a
> primary video adapter on the FTP site.  The URLs are as follows:
> 
> 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.ati-passthrough.patch
> 
> 	ftp://ftp.enjellic.com/pub/xen/xen-4.2.0.ati-passthrough.patch
> 
> These patches have been validated to work up through kernel 3.4.18
> with the xm control plane.  We are currently working on validating
> whether or not there are passthrough issues with xl.
> 
> The original ATI pass-through patches posted to xen-devel fail with a
> qemu-dm segmentation fault on recent kernels.  This is caused by
> changes which have been made with respect to proper enforcement of the
> permitted port ranges on the ioperm() system call.
> 
> Since these patches allow a primary graphics adapter to be used for
> passthrough I wanted to remind everyone of the availability of the
> following utility script:
> 
> 	ftp://ftp.enjellic.com/pub/xen/run-passthrough
> 
> Which automates the process of unplugging and re-plugging a video card
> and optionally a USB controller.  A script like this, or a network
> login, is needed in order to use passthrough on a primary graphics
> adapter.
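[Editorial note: the unplug/re-plug flow such a script automates is typically done through the pciback sysfs interface. A minimal sketch follows; the BDF value and the dry-run wrapper are illustrative assumptions, not taken from the script at the FTP URL above.]

```sh
#!/bin/sh
# Sketch of detaching a device from its host driver and handing it to
# xen-pciback. Assumptions: xen-pciback is loaded; BDF is a placeholder;
# set DRYRUN=0 and run as root to actually write to sysfs.
BDF="0000:01:00.0"
DRYRUN=1

sysfs_write() {
    # In dry-run mode, print the action instead of touching sysfs.
    if [ "$DRYRUN" -eq 1 ]; then
        printf 'echo %s > %s\n' "$1" "$2"
    else
        printf '%s\n' "$1" > "$2"
    fi
}

detach_for_passthrough() {
    # Unbind from the current driver, then register the slot with pciback.
    sysfs_write "$BDF" "/sys/bus/pci/devices/$BDF/driver/unbind"
    sysfs_write "$BDF" "/sys/bus/pci/drivers/pciback/new_slot"
    sysfs_write "$BDF" "/sys/bus/pci/drivers/pciback/bind"
}

detach_for_passthrough
```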

Great! What type of config options do you end up using for your guests?
gfx_pass.., msix_translate=.. ? Or just normal

pci=[BDF] ?
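[Editorial note: for readers unfamiliar with the options being asked about, a minimal xm/xl guest config fragment for PCI passthrough might look like the sketch below. The BDF is a placeholder, and the option names shown are the common xm-era spellings, given as an assumption rather than quoted from this thread.]

```
# HVM guest with a GPU passed through (placeholder BDF)
pci = [ '01:00.0' ]
# Optional knobs of the kind asked about above:
gfx_passthru = 1
pci_msitranslate = 1
```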

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 15:41:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 15:41:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfY8h-0006SK-He; Mon, 03 Dec 2012 15:40:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfY8g-0006SF-CD
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 15:40:58 +0000
Received: from [85.158.139.211:62319] by server-14.bemta-5.messagelabs.com id
	55/54-21768-908CCB05; Mon, 03 Dec 2012 15:40:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354549255!18854648!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8566 invoked from network); 3 Dec 2012 15:40:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 15:40:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="46405776"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 15:40:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 10:40:54 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfXxP-0004K8-CV;
	Mon, 03 Dec 2012 15:29:19 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 15:29:18 +0000
Message-ID: <1354548558-10039-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen: arm: Use $(OBJCOPY) not bare objcopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reported-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/arch/arm/Makefile |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index fd92b72..4c61b04 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -46,7 +46,7 @@ $(TARGET): $(TARGET)-syms $(TARGET).bin
 
 #
 $(TARGET).bin: $(TARGET)-syms
-	objcopy -O binary -S $< $@
+	$(OBJCOPY) -O binary -S $< $@
 
 #$(TARGET): $(TARGET)-syms $(efi-y) boot/mkelf32
 #	./boot/mkelf32 $(TARGET)-syms $(TARGET) 0x100000 \
-- 
1.7.9.1
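[Editorial note: the point of the one-line patch above is that a bare "objcopy" always runs the host tool, while $(OBJCOPY) picks up the cross-compiler prefix. A sketch of the usual Kbuild-style convention, which is an assumption here rather than part of the patch:]

```make
# CROSS_COMPILE prefix (placeholder triplet) selects the cross binutils.
CROSS_COMPILE ?= arm-linux-gnueabihf-
OBJCOPY       := $(CROSS_COMPILE)objcopy

$(TARGET).bin: $(TARGET)-syms
	$(OBJCOPY) -O binary -S $< $@
```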


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 15:46:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 15:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfYDK-0006ZX-Cw; Mon, 03 Dec 2012 15:45:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1TfYDI-0006ZS-Eq
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 15:45:44 +0000
Received: from [85.158.143.99:39793] by server-2.bemta-4.messagelabs.com id
	20/EC-28922-729CCB05; Mon, 03 Dec 2012 15:45:43 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354549536!21470422!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMDA2ODc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16688 invoked from network); 3 Dec 2012 15:45:39 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 15:45:39 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3FjRw3001022
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 15:45:28 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3FjQwS022596
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 15:45:27 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3FjPOf030007; Mon, 3 Dec 2012 09:45:25 -0600
MIME-Version: 1.0
Message-ID: <fe40d10d-4b47-4263-810e-2d684b8f67e8@default>
Date: Mon, 3 Dec 2012 07:45:24 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <38a09df7-3d82-403f-9153-9a151b220a41@default>
	<50B48C8A02000078000AB851@nat28.tlf.novell.com>
	<1354012308.5830.175.camel@zakaz.uk.xensource.com>
	<4e524dad-b67a-4115-b0fd-72f9b372abf1@default>
	<20663.34439.774586.111797@mariner.uk.xensource.com>
In-Reply-To: <20663.34439.774586.111797@mariner.uk.xensource.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6665.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>, Zhigang Wang <zhigang.x.wang@oracle.com>
Subject: Re: [Xen-devel] Please ack XENMEM_claim_pages hypercall?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> Subject: RE: Please ack XENMEM_claim_pages hypercall?
> 
> Dan Magenheimer writes ("RE: Please ack XENMEM_claim_pages hypercall?"):
> > From a single-system-xl-toolstack-centric perspective ("paradigm"),
> > I can see your point.
> 
> I don't think this is the case.  What you are doing is putting this
> node-specific claim functionality in the hypervisor.  I still think it
> should be done outside the hypervisor.  This does not mean that there
> has to be a single omniscient piece of software for an entire
> cluster.
>
>  <remainder deleted>

Hi Ian --

Thanks for taking the time to write a detailed response.
I realized that I had promised you a complete summary
of the problem and alternate solutions; but then I got
involved in demonstrating and refining a prototype
and failed to deliver.  I had thought the patch
summary would serve that purpose, but now see that it
is not sufficiently comprehensive.

So in an attempt to fulfill my promise and provide
all of the necessary information you may require, I
am in the process of writing up a complete summary
and will send it on a new thread, hopefully later
today (US time).

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:20:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfYkU-0007Kn-He; Mon, 03 Dec 2012 16:20:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfYkT-0007Ki-4q
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 16:20:01 +0000
Received: from [85.158.138.51:58643] by server-2.bemta-3.messagelabs.com id
	1B/A2-04744-031DCB05; Mon, 03 Dec 2012 16:20:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354551599!32426692!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30832 invoked from network); 3 Dec 2012 16:19:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:19:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16128275"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 16:19:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	16:19:59 +0000
Message-ID: <1354551597.2693.21.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 3 Dec 2012 16:19:57 +0000
In-Reply-To: <alpine.DEB.2.02.1211301501150.5310@kaball.uk.xensource.com>
References: <1352823779.7491.94.camel@zakaz.uk.xensource.com>
	<1352823804-28482-4-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1211301501150.5310@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 04/12] arm: parse modules from DT during
 early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> > new file mode 100644
> > index 0000000..2609450
> > --- /dev/null
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -0,0 +1,27 @@
> > +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> > +node of the device tree.
> > +
> > +Each node has the form /chosen/module@<N> and contains the following
> > +properties:
> 
> Wouldn't it be better to move all the modules under /chosen/modules or
> /chosen/multiboot?

Why, what's the benefit?

I'm happy to do whatever is more normal in DT. Is that this:
	/foo/bar@1
	/foo/bar@2
or
	/foo/bar/bar@1
	/foo/bar/bar@2

The second (which I think is what you are suggesting) seems pretty
redundant.

> 
> 
> > +- compatible
> > +
> > +	Must be "xen,multiboot-module"
> > +
> > +- start
> > +
> > +	Physical address of the start of this module
> > +
> > +- end
> > +
> > +	Physical address of the end of this module
> 
> start and end could be encoded as one reg

Done.

> 
> 
> > +- bootargs (optional)
> > +
> > +	Command line associated with this module
> > +
> > +The following modules are understood
> > +
> > +- 1 -- the domain 0 kernel
> > +- 2 -- the domain 0 ramdisk
> 
> It would be nice if we could express this via the compatible property
> instead.
> So the linux kernel could be compatible "linux,kernel" and the initrd
> "linux,initrd", in addition to (or instead of) "xen,multiboot-module".
> Given that they go from the most specific to the less specific, it would
> become:
> 
> compatible = "linux,kernel", "xen,multiboot-module";

This bakes the word "linux" into the interface and would require a new
compatible tag and code changes in Xen for each new dom0 kernel type,
which I think we want to avoid. (maybe the code changes are unavoidable
in practice, but in principle...)

"xen,dom0-kernel", "xen,multiboot-module"

Might be an option?

I'm going to repost what I have without changing this bit yet.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:20:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfYkU-0007Kn-He; Mon, 03 Dec 2012 16:20:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfYkT-0007Ki-4q
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 16:20:01 +0000
Received: from [85.158.138.51:58643] by server-2.bemta-3.messagelabs.com id
	1B/A2-04744-031DCB05; Mon, 03 Dec 2012 16:20:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354551599!32426692!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30832 invoked from network); 3 Dec 2012 16:19:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:19:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16128275"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 16:19:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	16:19:59 +0000
Message-ID: <1354551597.2693.21.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 3 Dec 2012 16:19:57 +0000
In-Reply-To: <alpine.DEB.2.02.1211301501150.5310@kaball.uk.xensource.com>
References: <1352823779.7491.94.camel@zakaz.uk.xensource.com>
	<1352823804-28482-4-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1211301501150.5310@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 04/12] arm: parse modules from DT during
 early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> > new file mode 100644
> > index 0000000..2609450
> > --- /dev/null
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -0,0 +1,27 @@
> > +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> > +node of the device tree.
> > +
> > +Each node has the form /chosen/module@<N> and contains the following
> > +properties:
> 
> Wouldn't it be better to move all the modules under /chosen/modules or
> /chosen/multiboot?

Why, what's the benefit?

I'm happy to do whatever is more normal in DT. Is that this:
	/foo/bar@1
	/foo/bar@2
or
	/foo/bar/bar@1
	/foo/bar/bar@2

The second (which I think is what you are suggesting) seems pretty
redundant.

> 
> 
> > +- compatible
> > +
> > +	Must be "xen,multiboot-module"
> > +
> > +- start
> > +
> > +	Physical address of the start of this module
> > +
> > +- end
> > +
> > +	Physical address of the end of this module
> 
> start and end could be encoded as one reg

Done.

> 
> 
> > +- bootargs (optional)
> > +
> > +	Command line associated with this module
> > +
> > +The following modules are understood
> > +
> > +- 1 -- the domain 0 kernel
> > +- 2 -- the domain 0 ramdisk
> 
> It would be nice if we could express this via the compatible property
> instead.
> So the linux kernel could be compatible "linux,kernel" and the initrd
> "linux,initrd", in addition to (or instead of) "xen,multiboot-module".
> Given that they go from the most specific to the less specific, it would
> become:
> 
> compatible = "linux,kernel", "xen,multiboot-module";

This bakes the word "linux" into the interface and would require a new
compatible tag and code changes in Xen for each new dom0 kernel type,
which I think we want to avoid. (maybe the code changes are unavoidable
in practice, but in principle...)

"xen,dom0-kernel", "xen,multiboot-module"

Might be an option?
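
For concreteness, a /chosen fragment under this scheme might look something
like the following. This is purely illustrative: the reg values are made up,
"xen,dom0-kernel" is only the strawman above, "xen,dom0-ramdisk" is invented
to parallel it, and reg assumes the start/end-to-reg change already agreed:

```dts
/chosen {
	module@0 {
		compatible = "xen,dom0-kernel", "xen,multiboot-module";
		reg = <0x80000000 0x00400000>; /* hypothetical start, size */
		bootargs = "console=hvc0 root=/dev/xvda1 ro";
	};

	module@1 {
		compatible = "xen,dom0-ramdisk", "xen,multiboot-module";
		reg = <0x80800000 0x00200000>; /* hypothetical start, size */
	};
};
```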

I'm going to repost what I have without changing this bit yet.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:25:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfYp6-0007Ua-8t; Mon, 03 Dec 2012 16:24:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfYp5-0007US-3U
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 16:24:47 +0000
Received: from [85.158.139.83:60968] by server-9.bemta-5.messagelabs.com id
	76/E3-29295-E42DCB05; Mon, 03 Dec 2012 16:24:46 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354551885!28153964!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25627 invoked from network); 3 Dec 2012 16:24:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:24:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16128400"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 16:24:44 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	16:24:44 +0000
Message-ID: <50BCD24B.1010608@citrix.com>
Date: Mon, 3 Dec 2012 17:24:43 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/11/12 16:17, Stefano Stabellini wrote:
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 9d20086..c40f597 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -144,7 +144,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>      if (!b_info->device_model_version) {
>          if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
>              b_info->device_model_version =
> -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;

Is there any way we can keep qemu-traditional as the default for NetBSD?
Upstream QEMU does not currently work on NetBSD, and I'm afraid it would
need some heavy patching.

Could a helper function be added to libxl_{netbsd/linux}.c to decide
which device model to use?

>          else {
>              const char *dm;
>              int rc;
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfYrt-0007cL-Rz; Mon, 03 Dec 2012 16:27:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfYrr-0007cB-Pv
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 16:27:40 +0000
Received: from [85.158.138.51:49490] by server-14.bemta-3.messagelabs.com id
	77/E0-31424-8F2DCB05; Mon, 03 Dec 2012 16:27:36 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354552056!32500243!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22112 invoked from network); 3 Dec 2012 16:27:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:27:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16128450"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 16:27:35 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	16:27:35 +0000
Message-ID: <50BCD2F6.8060106@citrix.com>
Date: Mon, 3 Dec 2012 17:27:34 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1352822539-56930-1-git-send-email-roger.pau@citrix.com>
	<20646.26443.690228.204535@mariner.uk.xensource.com>
In-Reply-To: <20646.26443.690228.204535@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/11/12 17:18, Ian Jackson wrote:
> Roger Pau Monne writes ("[Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
>> qemu-stubdom was stripping the prefix from the "params" xenstore
>> key in xenstore_parse_domain_config, which was then saved stripped in
>> a variable. In xenstore_process_event we compare the "param" from
>> xenstore (not stripped) with the stripped "param" saved in the
>> variable, which leads to a medium change (even if there isn't any),
>> since we are comparing something like aio:/path/to/file with
>> /path/to/file. This only happens one time, since
>> xenstore_parse_domain_config is the only place where we strip the
>> prefix. The result of this bug is the following:
> 
> I have been thinking about this.
> 
> The reason I'm reluctant to apply this patch is that I'm worried it
> might cause some non-stubdom-related breakage.  I know it feels EBW,
> but perhaps the answer is _more_ #ifdef STUBDOM rather than less ?
> 
> Or do you think I should just read the code closely enough to
> understand it and your patch ?  I suspect it's a can of worms...

Yes, it's a can of worms indeed.

The non-stubdom path is not modified, and the code changes (the 1st block
of the patch) are contained inside an #ifdef STUBDOM (which is not visible
in the patch itself, because the #ifdef is already there).

Thanks, Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:37:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:37:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZ1M-0007rG-0E; Mon, 03 Dec 2012 16:37:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TfZ1L-0007rB-BP
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 16:37:27 +0000
Received: from [193.109.254.147:43001] by server-7.bemta-14.messagelabs.com id
	0A/F3-02272-645DCB05; Mon, 03 Dec 2012 16:37:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354552644!3558268!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15634 invoked from network); 3 Dec 2012 16:37:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:37:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="46414215"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 16:37:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 11:37:23 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TfYtJ-0005Fh-4P;
	Mon, 03 Dec 2012 16:29:09 +0000
Message-ID: <1354552154.18784.9.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 16:29:14 +0000
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

There was some discussion about extending the number of event channels
back in September [0].

Regarding Jan's comment in [0], I don't think allowing the user to specify
an arbitrary number of levels is a good idea. Only the last level should
be shared among vcpus; the other levels should live in a percpu struct to
allow for quicker lookup. Letting the user specify the number of levels
would be too complicated to implement and would blow up the percpu section
(since its size grows exponentially with the number of levels). Three
levels should be quite enough. See the maths below.

Number of event channels:
 * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
 * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 256k
The third level is effectively a new ABI, so I chose unsigned long long
here to get more event channels.

Pages occupied by the third level (if PAGE_SIZE=4k):
 * 32bit: 64k  / 8 / 4k = 2
 * 64bit: 256k / 8 / 4k = 8

Making the second level percpu will incur some memory overhead; in effect
we move the array currently in the shared info into the percpu struct:
 * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 bytes
 * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 bytes

What concerns me is that the struct evtchn buckets are currently all
allocated at once during the initialization phase. To save memory inside
Xen, the internal allocation/free scheme for evtchn needs to be modified.
Ian suggested we allocate a small number of buckets at start of day and
then dynamically fault in more as required.

To sum up:
     1. The guest should allocate pages for the third-level evtchn bitmap.
     2. The guest should register the third-level pages via a new
        hypercall op.
     3. The hypervisor should set up the third-level evtchn in that
        hypercall op.
     4. Only the last level (the third in this case) should be shared
        among vcpus.
     5. We need a flexible allocation/free scheme for struct evtchn.
     6. Debug dumping should use a snapshot to avoid holding the event
        lock for too long. (Jan's concern in [0])

Any comments are welcome.


Wei.

[0] http://thread.gmane.org/gmane.comp.emulators.xen.devel/139921



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:37:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:37:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZ1M-0007rG-0E; Mon, 03 Dec 2012 16:37:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TfZ1L-0007rB-BP
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 16:37:27 +0000
Received: from [193.109.254.147:43001] by server-7.bemta-14.messagelabs.com id
	0A/F3-02272-645DCB05; Mon, 03 Dec 2012 16:37:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354552644!3558268!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15634 invoked from network); 3 Dec 2012 16:37:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:37:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="46414215"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 16:37:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 11:37:23 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TfYtJ-0005Fh-4P;
	Mon, 03 Dec 2012 16:29:09 +0000
Message-ID: <1354552154.18784.9.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 16:29:14 +0000
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

There has been discussion on extending number of event channels back in
September [0].

Regarding Jan's comment in [0], I don't think allowing the user to
specify an arbitrary number of levels is a good idea. Only the last
level should be shared among vcpus; the other levels should live in the
percpu struct to allow quicker lookup. Letting the user specify the
number of levels would be too complicated to implement and would blow up
the percpu section (since its size grows exponentially with the number
of levels). Three levels should be quite enough. See the maths below.

Number of event channels:
 * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
 * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 256k
Basically the third level is a new ABI, so I chose to use unsigned long
long here to get more event channels.

Pages occupied by the third level (if PAGE_SIZE=4k):
 * 32bit: 64k  / 8 / 4k = 2
 * 64bit: 256k / 8 / 4k = 8

Making the second level percpu will incur some overhead: in effect we
move the array currently in the shared info page into the percpu struct:
 * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 bytes
 * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 bytes

What concerns me is that the struct evtchn buckets are allocated all at
once during the initialization phase. To save memory inside Xen, the
internal allocation/free scheme for evtchn needs to be modified. Ian
suggested allocating a small number of buckets at start of day and then
dynamically faulting in more as required.

To sum up:
     1. The guest should allocate pages for the third-level evtchn
        bitmap.
     2. The guest should register the third-level pages via a new
        hypercall op.
     3. The hypervisor should set up the third-level evtchn in that
        hypercall op.
     4. Only the last level (the third, in this case) should be shared
        among vcpus.
     5. We need a flexible allocation/free scheme for struct evtchn.
     6. Debug dumping should use a snapshot to avoid holding the event
        lock for too long. (Jan's concern in [0])

Any comments are welcome.


Wei.

[0] http://thread.gmane.org/gmane.comp.emulators.xen.devel/139921



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:45:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZ97-00082G-1Z; Mon, 03 Dec 2012 16:45:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TfZ95-000827-8P
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 16:45:27 +0000
Received: from [85.158.139.83:44161] by server-8.bemta-5.messagelabs.com id
	E1/65-06050-627DCB05; Mon, 03 Dec 2012 16:45:26 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354553125!20953503!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25354 invoked from network); 3 Dec 2012 16:45:25 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 16:45:25 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TfZ92-0009hO-HS; Mon, 03 Dec 2012 16:45:24 +0000
Date: Mon, 3 Dec 2012 16:45:24 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121203164524.GB32690@ocelot.phlegethon.org>
References: <1354548558-10039-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354548558-10039-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: Use $(OBJCOPY) not bare objcopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:29 +0000 on 03 Dec (1354548558), Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Reported-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

> ---
>  xen/arch/arm/Makefile |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index fd92b72..4c61b04 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -46,7 +46,7 @@ $(TARGET): $(TARGET)-syms $(TARGET).bin
>  
>  #
>  $(TARGET).bin: $(TARGET)-syms
> -	objcopy -O binary -S $< $@
> +	$(OBJCOPY) -O binary -S $< $@
>  
>  #$(TARGET): $(TARGET)-syms $(efi-y) boot/mkelf32
>  #	./boot/mkelf32 $(TARGET)-syms $(TARGET) 0x100000 \
> -- 
> 1.7.9.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:48:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:48:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZCD-00089M-LD; Mon, 03 Dec 2012 16:48:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TfZCC-00089G-3h
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 16:48:40 +0000
Received: from [85.158.143.35:38727] by server-2.bemta-4.messagelabs.com id
	4A/6F-28922-7E7DCB05; Mon, 03 Dec 2012 16:48:39 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354553318!16050154!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26641 invoked from network); 3 Dec 2012 16:48:38 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:48:38 -0000
Received: by mail-ea0-f171.google.com with SMTP id n10so1507895eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 08:48:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:message-id:user-agent:date:from:to:cc;
	bh=DQo5S7FdqEN/I7jIYg0tIpO89X4AMhPR5Rh1CT4mEEg=;
	b=BicIkz8jL9OGJjN2LPYK0cdNZJXEPSc7KTKxxQ92/S8LJxNyDMjtlTB1OpWa5V4Afa
	gftwWcVPe7fy0CyoljIN+QOGbyuZSr2u3ybrsD3ryOSjYcj3EubcARTxuDbxFebzHqPY
	7/aRYoT8gkxYdnTBseUdSt3LPE7vbGo4KD/XQvuciF4StBAkuu7lWZea41fCewoW1W6f
	M9ZZ2Rf9nrBzxWdceYUVTFTbBbE0SELbeieJ5hyilv94hHNK5etAdSYwy0iPvAIDd/Yj
	nA50g7SqOGAL3XfVmXeA6cKPajL8du8h8FSPxYxJkVFpXE4jRfzEL3ZfT6uEpuantg07
	0AMg==
Received: by 10.14.199.5 with SMTP id w5mr12356896een.31.1354553317652;
	Mon, 03 Dec 2012 08:48:37 -0800 (PST)
Received: from [127.0.1.1] (ip-178-48.sn2.eutelia.it. [83.211.178.48])
	by mx.google.com with ESMTPS id a44sm32223644eeo.7.2012.12.03.08.48.36
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 08:48:36 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1354552497@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 03 Dec 2012 17:34:57 +0100
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xensource.com>
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 3] xen: sched_credit: fix tickling and add
	some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This small series deals with some weirdness in the mechanism by which the
credit scheduler chooses which PCPU to tickle upon a VCPU wake-up.  Details
are available in the changelog of the first patch.

The new approach has been extensively benchmarked and proved itself either
beneficial or harmless. That is, it does not introduce any significant
overhead or performance regressions while, for some workloads, it improves
performance quite noticeably (e.g., `sysbench --test=memory').

Full results are in the first changelog too.

The rest of the series introduces some macros to enable generating
per-scheduler tracing events, retaining the possibility of distinguishing them,
even with more than one scheduler running at any given time (via cpupools), and
adds some tracing to the credit scheduler.

Thanks and Regards, Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:48:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZCF-00089b-1j; Mon, 03 Dec 2012 16:48:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TfZCE-00089R-D7
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 16:48:42 +0000
Received: from [85.158.143.35:17371] by server-3.bemta-4.messagelabs.com id
	8F/91-06841-9E7DCB05; Mon, 03 Dec 2012 16:48:41 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354553318!16050154!2
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26653 invoked from network); 3 Dec 2012 16:48:39 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:48:39 -0000
Received: by mail-ea0-f171.google.com with SMTP id n10so1507895eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 08:48:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=y5qw/kbXSJUvS9yRajx4W2DfEDUVovBg8Y8DBR+uuT0=;
	b=P7cHuUjhXnbWRsCP/RCT6sj5cqzb8muP88Y9pLwjv8CpO/N6GfvDPVIVc7uKH7tk5N
	ATJj5tUA3pOSS65EUWFIKyP2qdGYlYk1f1hpIDE4/At4io78N03+c/xmrBglLlpcNAoG
	zUDUTQ84loiXYFzR9jHsltus3t+OPm7anSHA3wCH4IsEB6hYDtE+qCM7hpT7JQXLd9pd
	aOiEMB28ERPzUyqWtJYg6URLPicTZE8nXSQtHNoKaNFn7NN7cEG/3u5VId9b0u9yrnMm
	bSO29D+VKIk4d8Zu3VvTUvHFRNYYZQVphVQJpKMwUUL9R/fxXrIpYXgMJ758SkrTG9Yw
	nf3w==
Received: by 10.14.175.198 with SMTP id z46mr38023935eel.26.1354553318973;
	Mon, 03 Dec 2012 08:48:38 -0800 (PST)
Received: from [127.0.1.1] (ip-178-48.sn2.eutelia.it. [83.211.178.48])
	by mx.google.com with ESMTPS id a44sm32223644eeo.7.2012.12.03.08.48.37
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 08:48:38 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: dde3de6d81a3014f1d13740b80829f86a1f1643d
Message-Id: <dde3de6d81a3014f1d13.1354552498@Solace>
In-Reply-To: <patchbomb.1354552497@Solace>
References: <patchbomb.1354552497@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 03 Dec 2012 17:34:58 +0100
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xensource.com>
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 3] xen: sched_credit,
	improve tickling of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Right now, when a VCPU wakes up, we check whether it should preempt
what is running on the PCPU, and whether or not the waking VCPU can
be migrated (by tickling some idlers). However, this can result in
suboptimal or even wrong behaviour, as explained here:

 http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html

This change, instead, when deciding which PCPUs to tickle upon a VCPU
wake-up, considers both what is likely to happen on the PCPU where the
wakeup occurs and whether or not there are idle PCPUs where the waking
VCPU could run.
In fact, if there are idlers where the new VCPU can run, we can
avoid interrupting the running VCPU. OTOH, if there aren't any such
PCPUs, preemption and migration are the way to go.

This has been tested by running the following benchmarks inside 2,
6 and 10 VMs concurrently, on a shared host, each with 2 VCPUs and
960 MB of memory (host has 16 ways and 12 GB RAM).

1) All VMs had 'cpus="all"' in their config file.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 50.078467 +/- 1.6676162 | 49.704933 +/- 0.0277184 |
 | 6   | 63.259472 +/- 0.1137586 | 62.227367 +/- 0.3880619 |
 | 10  | 91.246797 +/- 0.1154008 | 91.174820 +/- 0.0928781 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 485.56333 +/- 6.0527356 | 525.57833 +/- 25.085826 |
 | 6   | 401.36278 +/- 1.9745916 | 421.96111 +/- 9.0364048 |
 | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 43150.63 +/- 1359.5616  | 42720.632 +/- 1937.4488 |
 | 6   | 29274.29 +/- 1024.4042  | 29518.171 +/- 1014.5239 |
 | 10  | 19061.28 +/- 512.88561  | 19050.141 +/- 458.77327 |


2) All VMs had their VCPUs statically pinned to the host's PCPUs.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
 | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
 | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
 | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
 | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 49591.057 +/- 952.93384 | 49610.98  +/- 1242.1675 |
 | 6   | 33538.247 +/- 1089.2115 | 33682.222 +/- 1216.1078 |
 | 10  | 21927.870 +/- 831.88742 | 21801.138 +/- 561.97068 |


The numbers show that the change either has no or very limited impact
(the specjbb2005 case) or, where it does have an impact, it is an
actual improvement in performance, especially in the sysbench-memory
case.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -249,13 +249,25 @@ static inline void
     struct csched_vcpu * const cur =
         CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
-    cpumask_t mask;
+    cpumask_t mask, idle_mask;
 
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    /* If strictly higher priority than current VCPU, signal the CPU */
-    if ( new->pri > cur->pri )
+    /* Check whether or not there are idlers that can run new */
+    cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
+
+    /*
+     * Should we ask cpu to reschedule? Well, if new can preempt cur,
+     * and there isn't any other place where it can run, we do. OTOH,
+     * if there are idlers where new can run, we can avoid interrupting
+     * cur and ask them to come and pick new up. So no, in that case, we
+     * do not signal cpu, avoiding an unnecessary migration of a running
+     * VCPU. It is true that we are (probably) migrating new, but as it
+     * is waking up, it likely is cache-cold anyway.
+     */
+    if ( new->pri > cur->pri &&
+         (cur->pri == CSCHED_PRI_IDLE || cpumask_empty(&idle_mask)) )
     {
         if ( cur->pri == CSCHED_PRI_IDLE )
             SCHED_STAT_CRANK(tickle_local_idler);
@@ -270,7 +282,7 @@ static inline void
     }
 
     /*
-     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
+     * If this CPU has at least two runnable VCPUs, we tickle some idlers to
      * let them know there is runnable work in the system...
      */
     if ( cur->pri > CSCHED_PRI_IDLE )
@@ -281,9 +293,16 @@ static inline void
         }
         else
         {
-            cpumask_t idle_mask;
 
-            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
+            /*
+             * If there aren't idlers for new, then letting new preempt cur and
+             * try to migrate cur becomes inevitable. If that is the case, update
+             * the mask of the to-be-tickled CPUs accordingly (i.e., with cur's
+             * idlers instead of new's).
+             */
+            if ( new->pri > cur->pri && cpumask_empty(&idle_mask) )
+                cpumask_and(&idle_mask, prv->idlers, cur->vcpu->cpu_affinity);
+
             if ( !cpumask_empty(&idle_mask) )
             {
                 SCHED_STAT_CRANK(tickle_idlers_some);
@@ -296,7 +315,6 @@ static inline void
                 else
                     cpumask_or(&mask, &mask, &idle_mask);
             }
-            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
         }
     }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:48:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZCF-00089b-1j; Mon, 03 Dec 2012 16:48:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TfZCE-00089R-D7
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 16:48:42 +0000
Received: from [85.158.143.35:17371] by server-3.bemta-4.messagelabs.com id
	8F/91-06841-9E7DCB05; Mon, 03 Dec 2012 16:48:41 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354553318!16050154!2
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26653 invoked from network); 3 Dec 2012 16:48:39 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:48:39 -0000
Received: by mail-ea0-f171.google.com with SMTP id n10so1507895eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 08:48:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=y5qw/kbXSJUvS9yRajx4W2DfEDUVovBg8Y8DBR+uuT0=;
	b=P7cHuUjhXnbWRsCP/RCT6sj5cqzb8muP88Y9pLwjv8CpO/N6GfvDPVIVc7uKH7tk5N
	ATJj5tUA3pOSS65EUWFIKyP2qdGYlYk1f1hpIDE4/At4io78N03+c/xmrBglLlpcNAoG
	zUDUTQ84loiXYFzR9jHsltus3t+OPm7anSHA3wCH4IsEB6hYDtE+qCM7hpT7JQXLd9pd
	aOiEMB28ERPzUyqWtJYg6URLPicTZE8nXSQtHNoKaNFn7NN7cEG/3u5VId9b0u9yrnMm
	bSO29D+VKIk4d8Zu3VvTUvHFRNYYZQVphVQJpKMwUUL9R/fxXrIpYXgMJ758SkrTG9Yw
	nf3w==
Received: by 10.14.175.198 with SMTP id z46mr38023935eel.26.1354553318973;
	Mon, 03 Dec 2012 08:48:38 -0800 (PST)
Received: from [127.0.1.1] (ip-178-48.sn2.eutelia.it. [83.211.178.48])
	by mx.google.com with ESMTPS id a44sm32223644eeo.7.2012.12.03.08.48.37
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 08:48:38 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: dde3de6d81a3014f1d13740b80829f86a1f1643d
Message-Id: <dde3de6d81a3014f1d13.1354552498@Solace>
In-Reply-To: <patchbomb.1354552497@Solace>
References: <patchbomb.1354552497@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 03 Dec 2012 17:34:58 +0100
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xensource.com>
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 3] xen: sched_credit,
	improve tickling of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Right now, when a VCPU wakes-up, we check if the it should preempt
what is running on the PCPU, and whether or not the waking VCPU can
be migrated (by tickling some idlers). However, this can result in
suboptimal or even wrong behaviour, as explained here:

 http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html

With this change, when deciding which PCPUs to tickle upon a VCPU
wake-up, we instead consider both what is likely to happen on the
PCPU where the wakeup occurs and whether there are idle PCPUs where
the waking VCPU could run.
In fact, if there are idlers on which the new VCPU can run, we can
avoid interrupting the running VCPU. On the other hand, if there are
no such PCPUs, preemption and migration are the way to go.

This has been tested by running the following benchmarks inside 2,
6 and 10 VMs concurrently on a shared host, each VM with 2 VCPUs and
960 MB of memory (the host has 16 logical CPUs and 12 GB of RAM).

1) All VMs had 'cpus="all"' in their config file.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 50.078467 +/- 1.6676162 | 49.704933 +/- 0.0277184 |
 | 6   | 63.259472 +/- 0.1137586 | 62.227367 +/- 0.3880619 |
 | 10  | 91.246797 +/- 0.1154008 | 91.174820 +/- 0.0928781 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 485.56333 +/- 6.0527356 | 525.57833 +/- 25.085826 |
 | 6   | 401.36278 +/- 1.9745916 | 421.96111 +/- 9.0364048 |
 | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 43150.63 +/- 1359.5616  | 42720.632 +/- 1937.4488 |
 | 6   | 29274.29 +/- 1024.4042  | 29518.171 +/- 1014.5239 |
 | 10  | 19061.28 +/- 512.88561  | 19050.141 +/- 458.77327 |


2) All VMs had their VCPUs statically pinned to the host's PCPUs.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
 | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
 | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
 | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
 | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 49591.057 +/- 952.93384 | 49610.98  +/- 1242.1675 |
 | 6   | 33538.247 +/- 1089.2115 | 33682.222 +/- 1216.1078 |
 | 10  | 21927.870 +/- 831.88742 | 21801.138 +/- 561.97068 |


The numbers show that the change either has no or very limited impact
(the specjbb2005 case) or, when it does have an impact, yields an
actual performance improvement, especially in the sysbench-memory
case.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -249,13 +249,25 @@ static inline void
     struct csched_vcpu * const cur =
         CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
-    cpumask_t mask;
+    cpumask_t mask, idle_mask;
 
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    /* If strictly higher priority than current VCPU, signal the CPU */
-    if ( new->pri > cur->pri )
+    /* Check whether or not there are idlers that can run new */
+    cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
+
+    /*
+     * Should we ask cpu to reschedule? Well, if new can preempt cur,
+     * and there isn't any other place where it can run, we do. OTOH,
+     * if there are idlers where new can run, we can avoid interrupting
+     * cur and ask them to come and pick new up. So no, in that case, we
+     * do not signal cpu, avoiding an unnecessary migration of a running
+     * VCPU. It is true that we are (probably) migrating new, but as it
+     * is waking up, it likely is cache-cold anyway.
+     */
+    if ( new->pri > cur->pri &&
+         (cur->pri == CSCHED_PRI_IDLE || cpumask_empty(&idle_mask)) )
     {
         if ( cur->pri == CSCHED_PRI_IDLE )
             SCHED_STAT_CRANK(tickle_local_idler);
@@ -270,7 +282,7 @@ static inline void
     }
 
     /*
-     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
+     * If this CPU has at least two runnable VCPUs, we tickle some idlers to
      * let them know there is runnable work in the system...
      */
     if ( cur->pri > CSCHED_PRI_IDLE )
@@ -281,9 +293,16 @@ static inline void
         }
         else
         {
-            cpumask_t idle_mask;
 
-            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
+            /*
+             * If there aren't idlers for new, then letting new preempt cur and
+             * trying to migrate cur becomes inevitable. If that is the case, update
+             * the mask of the to-be-tickled CPUs accordingly (i.e., with cur's
+             * idlers instead of new's).
+             */
+            if ( new->pri > cur->pri && cpumask_empty(&idle_mask) )
+                cpumask_and(&idle_mask, prv->idlers, cur->vcpu->cpu_affinity);
+
             if ( !cpumask_empty(&idle_mask) )
             {
                 SCHED_STAT_CRANK(tickle_idlers_some);
@@ -296,7 +315,6 @@ static inline void
                 else
                     cpumask_or(&mask, &mask, &idle_mask);
             }
-            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
         }
     }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:48:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZCH-0008AH-0k; Mon, 03 Dec 2012 16:48:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TfZCF-00089V-5r
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 16:48:43 +0000
Received: from [85.158.139.211:12396] by server-15.bemta-5.messagelabs.com id
	D5/DA-26920-AE7DCB05; Mon, 03 Dec 2012 16:48:42 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354553320!18909673!2
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5219 invoked from network); 3 Dec 2012 16:48:41 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:48:41 -0000
Received: by mail-ee0-f43.google.com with SMTP id e49so2080293eek.30
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 08:48:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=+TU17WPxHBgxG99mDpdN/iBjmY6frnTebwGH4lY6ujQ=;
	b=KWWvH/+ZuzXuwSFiTTb9UqEonP5q6sKVrSF1YVnZTn5MmzyFZkKc3sFYUTeMElzVed
	bkHWVQzRNfi1fh1Gce2+YbnSPmbOhD8QIkiKHFg7O8EQ/WX2ZUCxzkEtecbUn3mWky8J
	upqQLz/E8lTEmv2Aunrsy0cnGqZfzRftc0rIJ/QcCcgOW6/nXBo4fwMY/x/9tLcBNdGW
	ypSearwzBvZxgaZCnd7QXzGtBHnqqMas/kQ8LRIkt7wahQ2BiTSKzrHaRnDbfJQbcVMr
	mVx/Pj9nABUg9vDnL4dG43mV31HdmsK71qouRrD0bGRZZOoREqAcXwPgzSsrLWZAqF8h
	R2Zw==
Received: by 10.14.209.193 with SMTP id s41mr38389774eeo.9.1354553321648;
	Mon, 03 Dec 2012 08:48:41 -0800 (PST)
Received: from [127.0.1.1] (ip-178-48.sn2.eutelia.it. [83.211.178.48])
	by mx.google.com with ESMTPS id a44sm32223644eeo.7.2012.12.03.08.48.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 08:48:41 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: bf4b06eda19a03a6aa0bf03cdb38c24c13ccc28f
Message-Id: <bf4b06eda19a03a6aa0b.1354552500@Solace>
In-Reply-To: <patchbomb.1354552497@Solace>
References: <patchbomb.1354552497@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 03 Dec 2012 17:35:00 +0100
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xensource.com>
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add trace events covering tickling and PCPU selection in the credit
scheduler.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -21,6 +21,7 @@
 #include <asm/atomic.h>
 #include <xen/errno.h>
 #include <xen/keyhandler.h>
+#include <xen/trace.h>
 
 
 /*
@@ -95,6 +96,20 @@
 
 
 /*
+ * Credit tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED_EVENT(_e)     ((TRC_SCHED_CLASS|TRC_MASK_CSCHED) + _e)
+
+#define TRC_CSCHED_SCHED_TASKLET TRC_CSCHED_EVENT(1)
+#define TRC_CSCHED_ACCOUNT_START TRC_CSCHED_EVENT(2)
+#define TRC_CSCHED_ACCOUNT_STOP  TRC_CSCHED_EVENT(3)
+#define TRC_CSCHED_STOLEN_VCPU   TRC_CSCHED_EVENT(4)
+#define TRC_CSCHED_PICKED_CPU    TRC_CSCHED_EVENT(5)
+#define TRC_CSCHED_TICKLE        TRC_CSCHED_EVENT(6)
+
+
+/*
  * Boot parameters
  */
 static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
@@ -318,9 +333,26 @@ static inline void
         }
     }
 
-    /* Send scheduler interrupts to designated CPUs */
     if ( !cpumask_empty(&mask) )
+    {
+        if ( unlikely(tb_init_done) )
+        {
+            /* Use trace_var(); TRACE_*() would recheck !tb_init_done each time */
+            for_each_cpu(cpu, &mask)
+            {
+                struct {
+                    unsigned cpu:8;
+                } d;
+                d.cpu = cpu;
+                trace_var(TRC_CSCHED_TICKLE, 0,
+                          sizeof(d),
+                          (unsigned char*)&d);
+            }
+        }
+
+        /* Send scheduler interrupts to designated CPUs */
         cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
+    }
 }
 
 static void
@@ -552,6 +584,8 @@ static int
     if ( commit && spc )
        spc->idle_bias = cpu;
 
+    TRACE_3D(TRC_CSCHED_PICKED_CPU, vc->domain->domain_id, vc->vcpu_id, cpu);
+
     return cpu;
 }
 
@@ -584,6 +618,9 @@ static inline void
         }
     }
 
+    TRACE_3D(TRC_CSCHED_ACCOUNT_START, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
+
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
@@ -606,6 +643,9 @@ static inline void
     {
         list_del_init(&sdom->active_sdom_elem);
     }
+
+    TRACE_3D(TRC_CSCHED_ACCOUNT_STOP, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
 }
 
 static void
@@ -1238,6 +1278,8 @@ csched_runq_steal(int peer_cpu, int cpu,
             if (__csched_vcpu_is_migrateable(vc, cpu))
             {
                 /* We got a candidate. Grab it! */
+                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
+                         vc->domain->domain_id, vc->vcpu_id);
                 SCHED_VCPU_STAT_CRANK(speer, migrate_q);
                 SCHED_STAT_CRANK(migrate_queued);
                 WARN_ON(vc->is_urgent);
@@ -1398,6 +1440,7 @@ csched_schedule(
     /* Tasklet work (which runs in idle VCPU context) overrides all else. */
     if ( tasklet_work_scheduled )
     {
+        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
         snext = CSCHED_VCPU(idle_vcpu[cpu]);
         snext->pri = CSCHED_PRI_TS_BOOST;
     }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 16:48:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 16:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZCG-0008A6-KV; Mon, 03 Dec 2012 16:48:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TfZCE-00089U-Mi
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 16:48:43 +0000
Received: from [85.158.139.211:12351] by server-11.bemta-5.messagelabs.com id
	E1/87-03409-9E7DCB05; Mon, 03 Dec 2012 16:48:41 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354553320!18909673!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5101 invoked from network); 3 Dec 2012 16:48:40 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 16:48:40 -0000
Received: by mail-ee0-f43.google.com with SMTP id e49so2080293eek.30
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 08:48:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=N/49yvYBZ+JUR6KpPhjSuGe/iyemD+HiU13tp9MBtic=;
	b=WdkIsIG74Z6m0j8t5/kP3BQ9Fwxk2Y5rtGYX0q49MlUlna98sm3Dr4lqZE3zOhya0W
	UdTyl43OPlzPWKNnAxTy1RCcnCP/6l1Fn+hqUhZX3D/zx7N5dY7cccqptUvkk4rcGXr6
	XSn+kRGGWV3wSxg+uZbBKfzs9wf9nODdrfQ7UptKSk3TVRFJnoWF0+DIP2FyjnS3eoeH
	wZo6NB/TSWqrTqrdIEg2vFN7v8UdJ7V16DeSGjLx3AE04XLxM76H9cYEbAslfTD+LAKb
	WdqPtCy0I9HN7/6cv6/9HG1/JUH/pwNQfLx2TJ2pbKzWYqSiWRNmfzD9Bvb/6sBM6p92
	XbVg==
Received: by 10.14.2.196 with SMTP id 44mr38161572eef.25.1354553320393;
	Mon, 03 Dec 2012 08:48:40 -0800 (PST)
Received: from [127.0.1.1] (ip-178-48.sn2.eutelia.it. [83.211.178.48])
	by mx.google.com with ESMTPS id a44sm32223644eeo.7.2012.12.03.08.48.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 08:48:39 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 7265520b0188740d3dfa5226c48b2b6c5be09b5c
Message-Id: <7265520b0188740d3dfa.1354552499@Solace>
In-Reply-To: <patchbomb.1354552497@Solace>
References: <patchbomb.1354552497@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 03 Dec 2012 17:34:59 +0100
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xensource.com>
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 3] xen: tracing: introduce per-scheduler
	trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes it possible to define scheduler-specific trace events
within each scheduler without worrying about overlaps, and without
giving up the ability to identify them unambiguously. The latter is
useful because, thanks to cpupools, more than one scheduler can be
running at the same time.

The event ID field is 12 bits wide, and this change uses the upper 3
of them for the 'scheduler ID'. This limits us to 8 schedulers and to
512 scheduler-specific trace events each, both of which seem
reasonable limits for now.

This also converts the existing credit2 tracing (the only scheduler
generating tracing events up to now) to the new system.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -29,18 +29,24 @@
 #define d2printk(x...)
 //#define d2printk printk
 
-#define TRC_CSCHED2_TICK        TRC_SCHED_CLASS + 1
-#define TRC_CSCHED2_RUNQ_POS    TRC_SCHED_CLASS + 2
-#define TRC_CSCHED2_CREDIT_BURN TRC_SCHED_CLASS + 3
-#define TRC_CSCHED2_CREDIT_ADD  TRC_SCHED_CLASS + 4
-#define TRC_CSCHED2_TICKLE_CHECK TRC_SCHED_CLASS + 5
-#define TRC_CSCHED2_TICKLE       TRC_SCHED_CLASS + 6
-#define TRC_CSCHED2_CREDIT_RESET TRC_SCHED_CLASS + 7
-#define TRC_CSCHED2_SCHED_TASKLET TRC_SCHED_CLASS + 8
-#define TRC_CSCHED2_UPDATE_LOAD   TRC_SCHED_CLASS + 9
-#define TRC_CSCHED2_RUNQ_ASSIGN   TRC_SCHED_CLASS + 10
-#define TRC_CSCHED2_UPDATE_VCPU_LOAD   TRC_SCHED_CLASS + 11
-#define TRC_CSCHED2_UPDATE_RUNQ_LOAD   TRC_SCHED_CLASS + 12
+/*
+ * Credit2 tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED2_EVENT(_e)        ((TRC_SCHED_CLASS|TRC_MASK_CSCHED2) + _e)
+
+#define TRC_CSCHED2_TICK             (TRC_CSCHED2_EVENT(1))
+#define TRC_CSCHED2_RUNQ_POS         (TRC_CSCHED2_EVENT(2))
+#define TRC_CSCHED2_CREDIT_BURN      (TRC_CSCHED2_EVENT(3))
+#define TRC_CSCHED2_CREDIT_ADD       (TRC_CSCHED2_EVENT(4))
+#define TRC_CSCHED2_TICKLE_CHECK     (TRC_CSCHED2_EVENT(5))
+#define TRC_CSCHED2_TICKLE           (TRC_CSCHED2_EVENT(6))
+#define TRC_CSCHED2_CREDIT_RESET     (TRC_CSCHED2_EVENT(7))
+#define TRC_CSCHED2_SCHED_TASKLET    (TRC_CSCHED2_EVENT(8))
+#define TRC_CSCHED2_UPDATE_LOAD      (TRC_CSCHED2_EVENT(9))
+#define TRC_CSCHED2_RUNQ_ASSIGN      (TRC_CSCHED2_EVENT(10))
+#define TRC_CSCHED2_UPDATE_VCPU_LOAD (TRC_CSCHED2_EVENT(11))
+#define TRC_CSCHED2_UPDATE_RUNQ_LOAD (TRC_CSCHED2_EVENT(12))
 
 /*
  * WARNING: This is still in an experimental phase.  Status and work can be found at the
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -57,6 +57,26 @@
 #define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
 #define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
 
+/*
+ * Per-scheduler masks, to identify scheduler specific events.
+ *
+ * The highest 3 bits of the last 12 bits of the above TRC_SCHED_*
+ * tracing classes are reserved for encoding what scheduler is producing
+ * the event. The last 9 bits are where the scheduler specific event is
+ * encoded.
+ *
+ * This means we can have at most 8 tracing scheduling masks (which
+ * means at most 8 schedulers generating tracing events) and, in each
+ * scheduler, up to 512 different events.
+ */
+#define TRC_SCHED_ID_BITS    3
+#define TRC_SCHED_MASK_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
+
+#define TRC_MASK_CSCHED      (0 << TRC_SCHED_MASK_SHIFT)
+#define TRC_MASK_CSCHED2     (1 << TRC_SCHED_MASK_SHIFT)
+#define TRC_MASK_SEDF        (2 << TRC_SCHED_MASK_SHIFT)
+#define TRC_MASK_ARINC653    (3 << TRC_SCHED_MASK_SHIFT)
+
 /* Trace classes for Hardware */
 #define TRC_HW_PM           0x00801000   /* Power management traces */
 #define TRC_HW_IRQ          0x00802000   /* Traces relating to the handling of IRQs */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

                 /* We got a candidate. Grab it! */
+                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
+                         vc->domain->domain_id, vc->vcpu_id);
                 SCHED_VCPU_STAT_CRANK(speer, migrate_q);
                 SCHED_STAT_CRANK(migrate_queued);
                 WARN_ON(vc->is_urgent);
@@ -1398,6 +1440,7 @@ csched_schedule(
     /* Tasklet work (which runs in idle VCPU context) overrides all else. */
     if ( tasklet_work_scheduled )
     {
+        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
         snext = CSCHED_VCPU(idle_vcpu[cpu]);
         snext->pri = CSCHED_PRI_TS_BOOST;
     }
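The TRC_CSCHED_TICKLE record added above carries a single 32-bit payload word whose low byte is the tickled CPU. A minimal offline decoder can be sketched as follows (a sketch only: the 0x22000 class value is a placeholder, and the record layout assumed here is the usual xentrace one, where bits 0-27 of the first word hold the event id, bits 28-30 the number of extra 32-bit words, and bit 31 flags an embedded TSC):

```python
import struct

# Hypothetical class value for illustration; the real TRC_SCHED_CLASS
# lives in xen/include/public/trace.h.
TRC_SCHED_CLASS = 0x22000

def parse_record(buf, off=0):
    """Decode one trace record: event word, optional TSC, extra words."""
    (word,) = struct.unpack_from("<I", buf, off)
    off += 4
    event = word & 0x0FFFFFFF          # bits 0-27: event id
    n_extra = (word >> 28) & 0x7       # bits 28-30: number of extra u32s
    tsc = None
    if word & 0x80000000:              # bit 31: TSC present
        (tsc,) = struct.unpack_from("<Q", buf, off)
        off += 8
    extra = list(struct.unpack_from("<%dI" % n_extra, buf, off))
    off += 4 * n_extra
    return {"event": event, "tsc": tsc, "extra": extra}, off

# Synthetic TICKLE record: event 6 in the class, one payload word
# whose low byte is the tickled CPU, no TSC.
word = (1 << 28) | (TRC_SCHED_CLASS + 6)
rec = struct.pack("<II", word, 3)      # cpu = 3
decoded, _ = parse_record(rec)
cpu = decoded["extra"][0] & 0xFF
```

The patch's struct packs the cpu into an 8-bit field of a 4-byte record, which is why only the low byte of the payload word is meaningful.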

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:10:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXN-0000Tv-GY; Mon, 03 Dec 2012 17:10:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXL-0000Tq-C9
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:10:31 +0000
Received: from [85.158.139.211:23295] by server-2.bemta-5.messagelabs.com id
	9C/DF-04892-60DDCB05; Mon, 03 Dec 2012 17:10:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1354554629!18776955!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32448 invoked from network); 3 Dec 2012 17:10:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:10:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16129369"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	17:10:12 +0000
Message-ID: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Mon, 3 Dec 2012 17:10:11 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH 00/08 V2] arm: support for initial modules (e.g.
 dom0) and DTB supplied in RAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The following series implements support for initial images and DTB in
RAM, as opposed to in flash (dom0 kernel) or compiled into the
hypervisor (DTB). It arranges not to clobber these with either the
hypervisor text on relocation or with the heaps, and frees them when
appropriate.

Most of this is independent of the specific bootloader protocol which is
used to tell Xen where these modules actually are, but I have included a
simple PoC bootloader protocol based around device tree which is similar
to the protocol used by Linux to find its initrd
(where /chosen/linux,initrd-{start,end} indicate the physical addresses).

The PoC protocol is documented in docs/misc/arm/device-tree/booting.txt
which is added by this series.

Some of the smaller patches went in already, but most of the meat
remains.

The major change this time is that the protocol now uses a reg property
for the address and size, e.g. something like this:
	/chosen/modules@1 {
		bootargs = "root=/dev/mmcblk0 ro debug";
		reg = <0x200000 0x2000>;
	};
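The reg encoding above packs big-endian cells; with #address-cells = #size-cells = 1 the example describes an 8 KiB module at physical address 0x200000. A sketch of decoding such a property from a flattened tree (parse_reg is an illustrative helper, not part of this series):

```python
import struct

def parse_reg(prop, address_cells=1, size_cells=1):
    """Decode a flattened-device-tree 'reg' property (big-endian
    32-bit cells) into a list of (address, size) pairs."""
    cells = struct.unpack(">%dI" % (len(prop) // 4), prop)
    step = address_cells + size_cells
    entries = []
    for i in range(0, len(cells), step):
        addr = 0
        for c in cells[i:i + address_cells]:
            addr = (addr << 32) | c        # fold high cells in first
        size = 0
        for c in cells[i + address_cells:i + step]:
            size = (size << 32) | c
        entries.append((addr, size))
    return entries

# The example from the series: reg = <0x200000 0x2000>
prop = struct.pack(">II", 0x200000, 0x2000)
[(addr, size)] = parse_reg(prop)
```

Larger platforms would bump #address-cells/#size-cells to 2 for 64-bit addresses; the same folding handles that case.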

I have pushed an updated boot-wrapper to:
git://xenbits.xen.org/people/ianc/boot-wrapper.git

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXs-0000WI-AK; Mon, 03 Dec 2012 17:11:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXq-0000VZ-Ii
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:02 +0000
Received: from [85.158.143.99:20267] by server-3.bemta-4.messagelabs.com id
	23/6B-06841-52DDCB05; Mon, 03 Dec 2012 17:11:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354554659!27546836!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11049 invoked from network); 3 Dec 2012 17:11:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221046"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:32 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Pa;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:29 +0000
Message-ID: <1354554631-17861-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6/8] arm: discard boot modules after building
	domain 0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/domain_build.c |    3 +++
 xen/arch/arm/setup.c        |   16 ++++++++++++++++
 xen/include/asm-arm/setup.h |    2 ++
 3 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a9e7f43..e96ed10 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -10,6 +10,7 @@
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/guest_access.h>
+#include <asm/setup.h>
 
 #include "gic.h"
 #include "kernel.h"
@@ -308,6 +309,8 @@ int construct_dom0(struct domain *d)
     dtb_load(&kinfo);
     kernel_load(&kinfo);
 
+    discard_initial_modules();
+
     clear_bit(_VPF_down, &v->pause_flags);
 
     memset(regs, 0, sizeof(*regs));
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 9f08daf..1eb8f77 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -68,6 +68,22 @@ static void __init processor_id(void)
            READ_CP32(ID_ISAR3), READ_CP32(ID_ISAR4), READ_CP32(ID_ISAR5));
 }
 
+void __init discard_initial_modules(void)
+{
+    struct dt_module_info *mi = &early_info.modules;
+    int i;
+
+    for ( i = 1; i <= mi->nr_mods; i++ )
+    {
+        paddr_t s = mi->module[i].start;
+        paddr_t e = s + PAGE_ALIGN(mi->module[i].size);
+
+        init_domheap_pages(s, e);
+    }
+
+    mi->nr_mods = 0;
+}
+
 /*
  * Returns the end address of the highest region in the range s..e
  * with required size and alignment that does not conflict with the
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 8769f66..3267db0 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -9,6 +9,8 @@ void arch_get_xen_caps(xen_capabilities_info_t *info);
 
 int construct_dom0(struct domain *d);
 
+void discard_initial_modules(void);
+
 #endif
 /*
  * Local variables:
-- 
1.7.9.1
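The range freed by discard_initial_modules above spans from the module start to its page-aligned end. A small standalone model of that arithmetic (the round-up mirrors Xen's PAGE_ALIGN macro; the 4 KiB page size is an assumption of the model, not taken from the patch):

```python
PAGE_SIZE = 4096

def page_align(x):
    """Round x up to the next page boundary (as Xen's PAGE_ALIGN does)."""
    return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1)

def module_range(start, size):
    """Return the [start, end) physical range freed for one boot module."""
    return start, start + page_align(size)

s, e = module_range(0x200000, 0x1801)  # a 6145-byte module
```

Aligning the size up ensures the partially used final page is also handed back to the domheap.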


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXu-0000Xe-Rv; Mon, 03 Dec 2012 17:11:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXs-0000Vu-7m
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:04 +0000
Received: from [85.158.143.35:44469] by server-2.bemta-4.messagelabs.com id
	C0/68-28922-72DDCB05; Mon, 03 Dec 2012 17:11:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354554661!13422717!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10899 invoked from network); 3 Dec 2012 17:11:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221051"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:32 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-OL;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:28 +0000
Message-ID: <1354554631-17861-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/8] arm: load dom0 kernel from first boot module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: - correct limit check in try_zimage_prepare
    - copy zimage header to a local buffer to avoid issues with
      crossing page boundaries.
    - handle non page aligned source and destinations when loading
    - use a BUFFERABLE mapping when loading kernel from RAM.
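The non-page-aligned handling reduces to splitting the copy at destination page boundaries. A standalone model of that arithmetic (a pure model, not the hypervisor code; it clamps each chunk both to the page end and to the bytes remaining):

```python
PAGE_SIZE = 4096

def copy_chunks(load_addr, length):
    """Split a copy of `length` bytes to virtual `load_addr` into
    (in-page offset, chunk length) pairs, one per destination page."""
    out = []
    offs = 0
    while offs < length:
        s = (load_addr + offs) & (PAGE_SIZE - 1)  # offset within dest page
        l = min(PAGE_SIZE - s, length - offs)     # clamp to page and to end
        out.append((s, l))
        offs += l
    return out

aligned = copy_chunks(0x1000, 8192)    # two full pages
unaligned = copy_chunks(0x1100, 4096)  # straddles one page boundary
```

An unaligned destination yields a short first chunk up to the boundary, then page-sized chunks, then a short tail.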
---
 xen/arch/arm/kernel.c |   91 ++++++++++++++++++++++++++++++++++--------------
 xen/arch/arm/kernel.h |   11 ++++++
 2 files changed, 75 insertions(+), 27 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 2d56130..c9265d7 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -22,6 +22,7 @@
 #define ZIMAGE_MAGIC_OFFSET 0x24
 #define ZIMAGE_START_OFFSET 0x28
 #define ZIMAGE_END_OFFSET   0x2c
+#define ZIMAGE_HEADER_LEN   0x30
 
 #define ZIMAGE_MAGIC 0x016f2818
 
@@ -65,40 +66,42 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 static void kernel_zimage_load(struct kernel_info *info)
 {
     paddr_t load_addr = info->zimage.load_addr;
+    paddr_t paddr = info->zimage.kernel_addr;
+    paddr_t attr = info->load_attr;
     paddr_t len = info->zimage.len;
-    paddr_t flash = KERNEL_FLASH_ADDRESS;
-    void *src = (void *)FIXMAP_ADDR(FIXMAP_MISC);
     unsigned long offs;
 
-    printk("Loading %"PRIpaddr" byte zImage from flash %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr": [",
-           len, flash, load_addr, load_addr + len);
-    for ( offs = 0; offs < len; offs += PAGE_SIZE )
+    printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
+           paddr, load_addr, load_addr + len);
+    for ( offs = 0; offs < len; )
     {
-        paddr_t ma = gvirt_to_maddr(load_addr + offs);
+        paddr_t s, l, ma = gvirt_to_maddr(load_addr + offs);
         void *dst = map_domain_page(ma>>PAGE_SHIFT);
 
-        if ( ( offs % (1<<20) ) == 0 )
-            printk(".");
+        s = offs & ~PAGE_MASK;
+        l = min(PAGE_SIZE - s, len - offs);
 
-        set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED);
-        memcpy(dst, src, PAGE_SIZE);
-        clear_fixmap(FIXMAP_MISC);
+        copy_from_paddr(dst + s, paddr + offs, l, attr);
 
         unmap_domain_page(dst);
+        offs += l;
     }
-    printk("]\n");
 }
 
 /**
  * Check the image is a zImage and return the load address and length
  */
-static int kernel_try_zimage_prepare(struct kernel_info *info)
+static int kernel_try_zimage_prepare(struct kernel_info *info,
+                                     paddr_t addr, paddr_t size)
 {
-    uint32_t *zimage = (void *)FIXMAP_ADDR(FIXMAP_MISC);
+    uint32_t zimage[ZIMAGE_HEADER_LEN/4];
     uint32_t start, end;
     struct minimal_dtb_header dtb_hdr;
 
-    set_fixmap(FIXMAP_MISC, KERNEL_FLASH_ADDRESS >> PAGE_SHIFT, DEV_SHARED);
+    if ( size < ZIMAGE_HEADER_LEN )
+        return -EINVAL;
+
+    copy_from_paddr(zimage, addr, sizeof(zimage), DEV_SHARED);
 
     if (zimage[ZIMAGE_MAGIC_OFFSET/4] != ZIMAGE_MAGIC)
         return -EINVAL;
@@ -106,16 +109,26 @@ static int kernel_try_zimage_prepare(struct kernel_info *info)
     start = zimage[ZIMAGE_START_OFFSET/4];
     end = zimage[ZIMAGE_END_OFFSET/4];
 
-    clear_fixmap(FIXMAP_MISC);
+    if ( (end - start) > size )
+        return -EINVAL;
 
     /*
      * Check for an appended DTB.
      */
-    copy_from_paddr(&dtb_hdr, KERNEL_FLASH_ADDRESS + end - start, sizeof(dtb_hdr), DEV_SHARED);
-    if (be32_to_cpu(dtb_hdr.magic) == DTB_MAGIC) {
-        end += be32_to_cpu(dtb_hdr.total_size);
+    if ( end - start + sizeof(dtb_hdr) <= size )
+    {
+        copy_from_paddr(&dtb_hdr, addr + end - start,
+                        sizeof(dtb_hdr), DEV_SHARED);
+        if (be32_to_cpu(dtb_hdr.magic) == DTB_MAGIC) {
+            end += be32_to_cpu(dtb_hdr.total_size);
+
+            if ( end - start > size )
+                return -EINVAL;
+        }
     }
 
+    info->zimage.kernel_addr = addr;
+
     /*
      * If start is zero, the zImage is position independent -- load it
      * at 32k from start of RAM.
@@ -142,25 +155,26 @@ static void kernel_elf_load(struct kernel_info *info)
     free_xenheap_pages(info->kernel_img, info->kernel_order);
 }
 
-static int kernel_try_elf_prepare(struct kernel_info *info)
+static int kernel_try_elf_prepare(struct kernel_info *info,
+                                  paddr_t addr, paddr_t size)
 {
     int rc;
 
-    info->kernel_order = get_order_from_bytes(KERNEL_FLASH_SIZE);
+    info->kernel_order = get_order_from_bytes(size);
     info->kernel_img = alloc_xenheap_pages(info->kernel_order, 0);
     if ( info->kernel_img == NULL )
         panic("Cannot allocate temporary buffer for kernel.\n");
 
-    copy_from_paddr(info->kernel_img, KERNEL_FLASH_ADDRESS, KERNEL_FLASH_SIZE, DEV_SHARED);
+    copy_from_paddr(info->kernel_img, addr, size, info->load_attr);
 
-    if ( (rc = elf_init(&info->elf.elf, info->kernel_img, KERNEL_FLASH_SIZE )) != 0 )
-        return rc;
+    if ( (rc = elf_init(&info->elf.elf, info->kernel_img, size )) != 0 )
+        goto err;
 #ifdef VERBOSE
     elf_set_verbose(&info->elf.elf);
 #endif
     elf_parse_binary(&info->elf.elf);
     if ( (rc = elf_xen_parse(&info->elf.elf, &info->elf.parms)) != 0 )
-        return rc;
+        goto err;
 
     /*
      * TODO: can the ELF header be used to find the physical address
@@ -170,15 +184,38 @@ static int kernel_try_elf_prepare(struct kernel_info *info)
     info->load = kernel_elf_load;
 
     return 0;
+err:
+    free_xenheap_pages(info->kernel_img, info->kernel_order);
+    return rc;
 }
 
 int kernel_prepare(struct kernel_info *info)
 {
     int rc;
 
-    rc = kernel_try_zimage_prepare(info);
+    paddr_t start, size;
+
+    if ( early_info.modules.nr_mods > 1 )
+        panic("Cannot handle dom0 initrd yet\n");
+
+    if ( early_info.modules.nr_mods < 1 )
+    {
+        printk("No boot modules found, trying flash\n");
+        start = KERNEL_FLASH_ADDRESS;
+        size = KERNEL_FLASH_SIZE;
+        info->load_attr = DEV_SHARED;
+    }
+    else
+    {
+        printk("Loading kernel from boot module 1\n");
+        start = early_info.modules.module[1].start;
+        size = early_info.modules.module[1].size;
+        info->load_attr = BUFFERABLE;
+    }
+
+    rc = kernel_try_zimage_prepare(info, start, size);
     if (rc < 0)
-        rc = kernel_try_elf_prepare(info);
+        rc = kernel_try_elf_prepare(info, start, size);
 
     return rc;
 }
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index 4533568..49fe9da 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -22,6 +22,7 @@ struct kernel_info {
 
     union {
         struct {
+            paddr_t kernel_addr;
             paddr_t load_addr;
             paddr_t len;
         } zimage;
@@ -33,9 +34,19 @@ struct kernel_info {
     };
 
     void (*load)(struct kernel_info *info);
+    int load_attr;
 };
 
 int kernel_prepare(struct kernel_info *info);
 void kernel_load(struct kernel_info *info);
 
 #endif /* #ifdef __ARCH_ARM_KERNEL_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: - correct limit check in try_zimage_prepare
    - copy zimage header to a local buffer to avoid issues with
      crossing page boundaries.
    - handle non-page-aligned sources and destinations when loading
    - use a BUFFERABLE mapping when loading kernel from RAM.
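The header handling the changelog describes can be sketched as a standalone check. This is an illustrative sketch, not the patch's code: the helper name `check_zimage` is hypothetical, the offsets and the copy-to-local-buffer step are taken from the patch, and the header words are read in host byte order (the real zImage header is little-endian, as on the ARM target):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Offsets into the zImage header, as used by the patch. */
#define ZIMAGE_MAGIC_OFFSET 0x24
#define ZIMAGE_START_OFFSET 0x28
#define ZIMAGE_END_OFFSET   0x2c
#define ZIMAGE_HEADER_LEN   0x30

#define ZIMAGE_MAGIC 0x016f2818

/* Return 0 if buf (of length size) starts with a plausible zImage
 * header whose advertised payload fits within size, else -1. */
static int check_zimage(const uint8_t *buf, uint32_t size)
{
    uint32_t hdr[ZIMAGE_HEADER_LEN / 4];
    uint32_t start, end;

    if (size < ZIMAGE_HEADER_LEN)
        return -1;

    /* Copy into a local buffer first, mirroring the patch's fix for
     * header reads that cross a page boundary in the source mapping. */
    memcpy(hdr, buf, sizeof(hdr));

    if (hdr[ZIMAGE_MAGIC_OFFSET / 4] != ZIMAGE_MAGIC)
        return -1;

    start = hdr[ZIMAGE_START_OFFSET / 4];
    end   = hdr[ZIMAGE_END_OFFSET / 4];

    /* The v3 changelog's "correct limit check": the image described by
     * the header must not exceed the module/flash region it came from. */
    if (end - start > size)
        return -1;

    return 0;
}
```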
---
 xen/arch/arm/kernel.c |   91 ++++++++++++++++++++++++++++++++++--------------
 xen/arch/arm/kernel.h |   11 ++++++
 2 files changed, 75 insertions(+), 27 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 2d56130..c9265d7 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -22,6 +22,7 @@
 #define ZIMAGE_MAGIC_OFFSET 0x24
 #define ZIMAGE_START_OFFSET 0x28
 #define ZIMAGE_END_OFFSET   0x2c
+#define ZIMAGE_HEADER_LEN   0x30
 
 #define ZIMAGE_MAGIC 0x016f2818
 
@@ -65,40 +66,42 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 static void kernel_zimage_load(struct kernel_info *info)
 {
     paddr_t load_addr = info->zimage.load_addr;
+    paddr_t paddr = info->zimage.kernel_addr;
+    paddr_t attr = info->load_attr;
     paddr_t len = info->zimage.len;
-    paddr_t flash = KERNEL_FLASH_ADDRESS;
-    void *src = (void *)FIXMAP_ADDR(FIXMAP_MISC);
     unsigned long offs;
 
-    printk("Loading %"PRIpaddr" byte zImage from flash %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr": [",
-           len, flash, load_addr, load_addr + len);
-    for ( offs = 0; offs < len; offs += PAGE_SIZE )
+    printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
+           paddr, load_addr, load_addr + len);
+    for ( offs = 0; offs < len; )
     {
-        paddr_t ma = gvirt_to_maddr(load_addr + offs);
+        paddr_t s, l, ma = gvirt_to_maddr(load_addr + offs);
         void *dst = map_domain_page(ma>>PAGE_SHIFT);
 
-        if ( ( offs % (1<<20) ) == 0 )
-            printk(".");
+        s = offs & ~PAGE_MASK;
+        l = min(PAGE_SIZE - s, len);
 
-        set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED);
-        memcpy(dst, src, PAGE_SIZE);
-        clear_fixmap(FIXMAP_MISC);
+        copy_from_paddr(dst + s, paddr + offs, l, attr);
 
         unmap_domain_page(dst);
+        offs += l;
     }
-    printk("]\n");
 }
 
 /**
  * Check the image is a zImage and return the load address and length
  */
-static int kernel_try_zimage_prepare(struct kernel_info *info)
+static int kernel_try_zimage_prepare(struct kernel_info *info,
+                                     paddr_t addr, paddr_t size)
 {
-    uint32_t *zimage = (void *)FIXMAP_ADDR(FIXMAP_MISC);
+    uint32_t zimage[ZIMAGE_HEADER_LEN/4];
     uint32_t start, end;
     struct minimal_dtb_header dtb_hdr;
 
-    set_fixmap(FIXMAP_MISC, KERNEL_FLASH_ADDRESS >> PAGE_SHIFT, DEV_SHARED);
+    if ( size < ZIMAGE_HEADER_LEN )
+        return -EINVAL;
+
+    copy_from_paddr(zimage, addr, sizeof(zimage), DEV_SHARED);
 
     if (zimage[ZIMAGE_MAGIC_OFFSET/4] != ZIMAGE_MAGIC)
         return -EINVAL;
@@ -106,16 +109,26 @@ static int kernel_try_zimage_prepare(struct kernel_info *info)
     start = zimage[ZIMAGE_START_OFFSET/4];
     end = zimage[ZIMAGE_END_OFFSET/4];
 
-    clear_fixmap(FIXMAP_MISC);
+    if ( (end - start) > size )
+        return -EINVAL;
 
     /*
      * Check for an appended DTB.
      */
-    copy_from_paddr(&dtb_hdr, KERNEL_FLASH_ADDRESS + end - start, sizeof(dtb_hdr), DEV_SHARED);
-    if (be32_to_cpu(dtb_hdr.magic) == DTB_MAGIC) {
-        end += be32_to_cpu(dtb_hdr.total_size);
+    if ( addr + end - start + sizeof(dtb_hdr) <= size )
+    {
+        copy_from_paddr(&dtb_hdr, addr + end - start,
+                        sizeof(dtb_hdr), DEV_SHARED);
+        if (be32_to_cpu(dtb_hdr.magic) == DTB_MAGIC) {
+            end += be32_to_cpu(dtb_hdr.total_size);
+
+            if ( end > addr + size )
+                return -EINVAL;
+        }
     }
 
+    info->zimage.kernel_addr = addr;
+
     /*
      * If start is zero, the zImage is position independent -- load it
      * at 32k from start of RAM.
@@ -142,25 +155,26 @@ static void kernel_elf_load(struct kernel_info *info)
     free_xenheap_pages(info->kernel_img, info->kernel_order);
 }
 
-static int kernel_try_elf_prepare(struct kernel_info *info)
+static int kernel_try_elf_prepare(struct kernel_info *info,
+                                  paddr_t addr, paddr_t size)
 {
     int rc;
 
-    info->kernel_order = get_order_from_bytes(KERNEL_FLASH_SIZE);
+    info->kernel_order = get_order_from_bytes(size);
     info->kernel_img = alloc_xenheap_pages(info->kernel_order, 0);
     if ( info->kernel_img == NULL )
         panic("Cannot allocate temporary buffer for kernel.\n");
 
-    copy_from_paddr(info->kernel_img, KERNEL_FLASH_ADDRESS, KERNEL_FLASH_SIZE, DEV_SHARED);
+    copy_from_paddr(info->kernel_img, addr, size, info->load_attr);
 
-    if ( (rc = elf_init(&info->elf.elf, info->kernel_img, KERNEL_FLASH_SIZE )) != 0 )
-        return rc;
+    if ( (rc = elf_init(&info->elf.elf, info->kernel_img, size )) != 0 )
+        goto err;
 #ifdef VERBOSE
     elf_set_verbose(&info->elf.elf);
 #endif
     elf_parse_binary(&info->elf.elf);
     if ( (rc = elf_xen_parse(&info->elf.elf, &info->elf.parms)) != 0 )
-        return rc;
+        goto err;
 
     /*
      * TODO: can the ELF header be used to find the physical address
@@ -170,15 +184,38 @@ static int kernel_try_elf_prepare(struct kernel_info *info)
     info->load = kernel_elf_load;
 
     return 0;
+err:
+    free_xenheap_pages(info->kernel_img, info->kernel_order);
+    return rc;
 }
 
 int kernel_prepare(struct kernel_info *info)
 {
     int rc;
 
-    rc = kernel_try_zimage_prepare(info);
+    paddr_t start, size;
+
+    if ( early_info.modules.nr_mods > 1 )
+        panic("Cannot handle dom0 initrd yet\n");
+
+    if ( early_info.modules.nr_mods < 1 )
+    {
+        printk("No boot modules found, trying flash\n");
+        start = KERNEL_FLASH_ADDRESS;
+        size = KERNEL_FLASH_SIZE;
+        info->load_attr = DEV_SHARED;
+    }
+    else
+    {
+        printk("Loading kernel from boot module 1\n");
+        start = early_info.modules.module[1].start;
+        size = early_info.modules.module[1].size;
+        info->load_attr = BUFFERABLE;
+    }
+
+    rc = kernel_try_zimage_prepare(info, start, size);
     if (rc < 0)
-        rc = kernel_try_elf_prepare(info);
+        rc = kernel_try_elf_prepare(info, start, size);
 
     return rc;
 }
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index 4533568..49fe9da 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -22,6 +22,7 @@ struct kernel_info {
 
     union {
         struct {
+            paddr_t kernel_addr;
             paddr_t load_addr;
             paddr_t len;
         } zimage;
@@ -33,9 +34,19 @@ struct kernel_info {
     };
 
     void (*load)(struct kernel_info *info);
+    int load_attr;
 };
 
 int kernel_prepare(struct kernel_info *info);
 void kernel_load(struct kernel_info *info);
 
 #endif /* #ifdef __ARCH_ARM_KERNEL_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXs-0000WV-MP; Mon, 03 Dec 2012 17:11:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXq-0000VY-OK
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:02 +0000
Received: from [85.158.137.99:13523] by server-15.bemta-3.messagelabs.com id
	1A/98-23779-52DDCB05; Mon, 03 Dec 2012 17:11:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354554658!17065432!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11026 invoked from network); 3 Dec 2012 17:11:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221044"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:32 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Lu;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:27 +0000
Message-ID: <1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with cells
	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: early_panic instead of BUG_ON
v2: drop unrelated white space fixup
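The limitation this patch guards against can be sketched in isolation: `get_val()` shifts one 32-bit cell at a time into a `u64`, so with more than two cells the high cells are silently shifted out. This sketch is hypothetical (stand-alone names, returning -1 where the patch calls `early_panic`; `be32_to_host` stands in for the fdt byte-order conversion):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the fdt byte-order helper: device-tree cells are
 * big-endian 32-bit values. A plain byteswap here assumes a
 * little-endian host. */
static uint32_t be32_to_host(uint32_t v)
{
    return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
           ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

/* Mirror of get_val(): accumulate cells into a u64. A third cell
 * would shift the first one out of the accumulator entirely, which
 * is why the patch panics rather than truncating silently. */
static int get_val(const uint32_t **cell, uint32_t cells, uint64_t *val)
{
    *val = 0;

    if (cells > 2)
        return -1;  /* the patch calls early_panic() here */

    while (cells--) {
        *val <<= 32;
        *val |= be32_to_host(*(*cell)++);
    }
    return 0;
}
```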
---
 xen/common/device_tree.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 9eb316f..5a0a1a6 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
 {
     *val = 0;
 
+    if ( cells > 2 )
+        early_panic("dtb value contains > 2 cells\n");
+
     while ( cells-- )
     {
         *val <<= 32;
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXt-0000Wo-EZ; Mon, 03 Dec 2012 17:11:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXr-0000VT-71
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:03 +0000
Received: from [85.158.137.99:63794] by server-9.bemta-3.messagelabs.com id
	4C/0B-02388-42DDCB05; Mon, 03 Dec 2012 17:11:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354554658!17065432!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10975 invoked from network); 3 Dec 2012 17:10:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:10:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221041"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:31 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Gs;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:24 +0000
Message-ID: <1354554631-17861-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/8] arm: parse modules from DT during early
	boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The bootloader should populate /chosen/module@<N>/ for each module it
wishes to pass to the hypervisor. The content of these nodes is
described in docs/misc/arm/device-tree/booting.txt

The hypervisor allows for 2 modules (@1==kernel and @2==initrd).
Currently we don't do anything with them.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: Use a reg = < > property for the module address/length.
v2: Reserve the zeroeth module for Xen itself (not used yet)
    Use a more idiomatic DT layout
    Document said layout

xen: arm: use a reg for the module addresses
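The module-numbering rules enforced by `process_chosen_node()` can be sketched as a small standalone check (the helper name `parse_module_nr` is hypothetical; the accepted range, the reservation of module 0 for Xen, and the `strtol`-on-unit-address parsing follow the patch):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NR_MODULES 2  /* from the patch: @1 == kernel, @2 == initrd */

/* Mirror of the numbering rules in process_chosen_node(): a node
 * named "module@<N>" is valid only for 1 <= N <= NR_MODULES (module 0
 * is reserved for Xen itself). Returns N, or -1 where the patch
 * would early_panic(). */
static int parse_module_nr(const char *name)
{
    long nr;

    if (strncmp(name, "module@", strlen("module@")) != 0)
        return -1;  /* not a module node at all */

    nr = strtol(name + strlen("module@"), NULL, 10);
    if (nr <= 0)
        return -1;  /* "Invalid module number" */
    if (nr > NR_MODULES)
        return -1;  /* "too many modules" */

    return (int)nr;
}
```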
---
 docs/misc/arm/device-tree/booting.txt |   24 ++++++++++++
 xen/common/device_tree.c              |   65 +++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |   14 +++++++
 3 files changed, 103 insertions(+), 0 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/booting.txt

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
new file mode 100644
index 0000000..2761b91
--- /dev/null
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -0,0 +1,24 @@
+Xen is passed the dom0 kernel and initrd via a reference in the /chosen
+node of the device tree.
+
+Each node has the form /chosen/module@<N> and contains the following
+properties:
+
+- compatible
+
+	Must be "xen,multiboot-module"
+
+- reg
+
+	Specifies the physical address of the module in RAM and the
+	length of the module.
+
+- bootargs (optional)
+
+	Command line associated with this module
+
+The following modules are understood
+
+- 1 -- the domain 0 kernel
+- 2 -- the domain 0 ramdisk
+
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index da0af77..9eb316f 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -270,6 +270,69 @@ static void __init process_cpu_node(const void *fdt, int node,
     cpumask_set_cpu(start, &cpu_possible_map);
 }
 
+static void __init process_chosen_node(const void *fdt, int node,
+                                       const char *name,
+                                       u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const u32 *cell;
+    int nr, depth, nr_modules = 0;
+    struct dt_mb_module *mod;
+    int len;
+
+    for ( depth = 0;
+          depth >= 0;
+          node = fdt_next_node(fdt, node, &depth) )
+    {
+        name = fdt_get_name(fdt, node, NULL);
+        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
+
+            if ( fdt_node_check_compatible(fdt, node,
+                                           "xen,multiboot-module" ) != 0 )
+                early_panic("%s not a compatible module node\n", name);
+
+            nr = simple_strtol(name + strlen("module@"), NULL, 10);
+            if ( nr <= 0 )
+                early_panic("Invalid module number %d\n", nr);
+
+            if ( nr > NR_MODULES )
+                early_panic("too many modules %d > %d\n", nr, NR_MODULES);
+            if ( nr > nr_modules )
+                nr_modules = nr;
+
+            mod = &early_info.modules.module[nr];
+
+            prop = fdt_get_property(fdt, node, "reg", NULL);
+            if ( !prop )
+                early_panic("node %s missing `reg' property\n", name);
+
+            cell = (const u32 *)prop->data;
+            device_tree_get_reg(&cell, address_cells, size_cells,
+                                &mod->start, &mod->size);
+
+            prop = fdt_get_property(fdt, node, "bootargs", &len);
+            if ( prop )
+            {
+                if ( len > sizeof(mod->cmdline) )
+                    early_panic("module %d command line too long\n", nr);
+
+                safe_strcpy(mod->cmdline, prop->data);
+            }
+            else
+                mod->cmdline[0] = 0;
+        }
+    }
+
+    for ( nr = 1 ; nr < nr_modules ; nr++ )
+    {
+        mod = &early_info.modules.module[nr];
+        if ( !mod->start || !mod->size )
+            early_panic("module %d  missing / invalid\n", nr);
+    }
+
+    early_info.modules.nr_mods = nr_modules;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -279,6 +342,8 @@ static int __init early_scan_node(const void *fdt,
         process_memory_node(fdt, node, name, address_cells, size_cells);
     else if ( device_tree_type_matches(fdt, node, "cpu") )
         process_cpu_node(fdt, node, name, address_cells, size_cells);
+    else if ( device_tree_node_matches(fdt, node, "chosen") )
+        process_chosen_node(fdt, node, name, address_cells, size_cells);
 
     return 0;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 4d010c0..c383677 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #define DEVICE_TREE_MAX_DEPTH 16
 
 #define NR_MEM_BANKS 8
+#define NR_MODULES 2
 
 struct membank {
     paddr_t start;
@@ -26,8 +27,21 @@ struct dt_mem_info {
     struct membank bank[NR_MEM_BANKS];
 };
 
+struct dt_mb_module {
+    paddr_t start;
+    paddr_t size;
+    char cmdline[1024];
+};
+
+struct dt_module_info {
+    int nr_mods;
+    /* Module 0 is Xen itself, followed by the provided modules-proper */
+    struct dt_mb_module module[NR_MODULES + 1];
+};
+
 struct dt_early_info {
     struct dt_mem_info mem;
+    struct dt_module_info modules;
 };
 
 typedef int (*device_tree_node_func)(const void *fdt,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXs-0000WV-MP; Mon, 03 Dec 2012 17:11:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXq-0000VY-OK
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:02 +0000
Received: from [85.158.137.99:13523] by server-15.bemta-3.messagelabs.com id
	1A/98-23779-52DDCB05; Mon, 03 Dec 2012 17:11:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354554658!17065432!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11026 invoked from network); 3 Dec 2012 17:11:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221044"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:32 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Lu;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:27 +0000
Message-ID: <1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with cells
	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: early_panic instead of BUG_ON
v2: drop unrelated white space fixup
---
 xen/common/device_tree.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 9eb316f..5a0a1a6 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
 {
     *val = 0;
 
+    if ( cells > 2 )
+        early_panic("dtb value contains > 2 cells\n");
+
     while ( cells-- )
     {
         *val <<= 32;
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXt-0000Wo-EZ; Mon, 03 Dec 2012 17:11:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXr-0000VT-71
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:03 +0000
Received: from [85.158.137.99:63794] by server-9.bemta-3.messagelabs.com id
	4C/0B-02388-42DDCB05; Mon, 03 Dec 2012 17:11:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354554658!17065432!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10975 invoked from network); 3 Dec 2012 17:10:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:10:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221041"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:31 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Gs;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:24 +0000
Message-ID: <1354554631-17861-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/8] arm: parse modules from DT during early
	boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The bootloader should populate /chosen/module@<N>/ for each module it
wishes to pass to the hypervisor. The content of these nodes is
described in docs/misc/arm/device-tree/booting.txt

The hypervisor allows for 2 modules (@1==kernel and @2==initrd).
Currently we don't do anything with them.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3: Use a reg = < > property for the module address/length.
v2: Reserve the zeroeth module for Xen itself (not used yet)
    Use a more idiomatic DT layout
    Document said layout

xen: arm: use a reg for the module addresses
---
 docs/misc/arm/device-tree/booting.txt |   24 ++++++++++++
 xen/common/device_tree.c              |   65 +++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |   14 +++++++
 3 files changed, 103 insertions(+), 0 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/booting.txt

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
new file mode 100644
index 0000000..2761b91
--- /dev/null
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -0,0 +1,24 @@
+Xen is passed the dom0 kernel and initrd via a reference in the /chosen
+node of the device tree.
+
+Each node has the form /chosen/module@<N> and contains the following
+properties:
+
+- compatible
+
+	Must be "xen,multiboot-module"
+
+- reg
+
+	Specifies the physical address of the module in RAM and the
+	length of the module.
+
+- bootargs (optional)
+
+	Command line associated with this module
+
+The following modules are understood
+
+- 1 -- the domain 0 kernel
+- 2 -- the domain 0 ramdisk
+
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index da0af77..9eb316f 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -270,6 +270,69 @@ static void __init process_cpu_node(const void *fdt, int node,
     cpumask_set_cpu(start, &cpu_possible_map);
 }
 
+static void __init process_chosen_node(const void *fdt, int node,
+                                       const char *name,
+                                       u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const u32 *cell;
+    int nr, depth, nr_modules = 0;
+    struct dt_mb_module *mod;
+    int len;
+
+    for ( depth = 0;
+          depth >= 0;
+          node = fdt_next_node(fdt, node, &depth) )
+    {
+        name = fdt_get_name(fdt, node, NULL);
+        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
+
+            if ( fdt_node_check_compatible(fdt, node,
+                                           "xen,multiboot-module" ) != 0 )
+                early_panic("%s not a compatible module node\n", name);
+
+            nr = simple_strtol(name + strlen("module@"), NULL, 10);
+            if ( nr <= 0 )
+                early_panic("Invalid module number %d\n", nr);
+
+            if ( nr > NR_MODULES )
+                early_panic("too many modules %d > %d\n", nr, NR_MODULES);
+            if ( nr > nr_modules )
+                nr_modules = nr;
+
+            mod = &early_info.modules.module[nr];
+
+            prop = fdt_get_property(fdt, node, "reg", NULL);
+            if ( !prop )
+                early_panic("node %s missing `reg' property\n", name);
+
+            cell = (const u32 *)prop->data;
+            device_tree_get_reg(&cell, address_cells, size_cells,
+                                &mod->start, &mod->size);
+
+            prop = fdt_get_property(fdt, node, "bootargs", &len);
+            if ( prop )
+            {
+                if ( len > sizeof(mod->cmdline) )
+                    early_panic("module %d command line too long\n", nr);
+
+                safe_strcpy(mod->cmdline, prop->data);
+            }
+            else
+                mod->cmdline[0] = 0;
+        }
+    }
+
+    for ( nr = 1 ; nr <= nr_modules ; nr++ )
+    {
+        mod = &early_info.modules.module[nr];
+        if ( !mod->start || !mod->size )
+            early_panic("module %d missing / invalid\n", nr);
+    }
+
+    early_info.modules.nr_mods = nr_modules;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -279,6 +342,8 @@ static int __init early_scan_node(const void *fdt,
         process_memory_node(fdt, node, name, address_cells, size_cells);
     else if ( device_tree_type_matches(fdt, node, "cpu") )
         process_cpu_node(fdt, node, name, address_cells, size_cells);
+    else if ( device_tree_node_matches(fdt, node, "chosen") )
+        process_chosen_node(fdt, node, name, address_cells, size_cells);
 
     return 0;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 4d010c0..c383677 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #define DEVICE_TREE_MAX_DEPTH 16
 
 #define NR_MEM_BANKS 8
+#define NR_MODULES 2
 
 struct membank {
     paddr_t start;
@@ -26,8 +27,21 @@ struct dt_mem_info {
     struct membank bank[NR_MEM_BANKS];
 };
 
+struct dt_mb_module {
+    paddr_t start;
+    paddr_t size;
+    char cmdline[1024];
+};
+
+struct dt_module_info {
+    int nr_mods;
+    /* Module 0 is Xen itself, followed by the provided modules-proper */
+    struct dt_mb_module module[NR_MODULES + 1];
+};
+
 struct dt_early_info {
     struct dt_mem_info mem;
+    struct dt_module_info modules;
 };
 
 typedef int (*device_tree_node_func)(const void *fdt,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
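For illustration only, a /chosen fragment matching the layout documented in the patch above might look like the following device-tree source; the addresses, sizes, and bootargs values here are invented for the example, not taken from the patch:

```dts
/ {
    chosen {
        module@1 {
            compatible = "xen,multiboot-module";
            /* dom0 kernel: physical load address and length */
            reg = <0x80008000 0x00400000>;
            bootargs = "console=hvc0 root=/dev/xvda";
        };
        module@2 {
            compatible = "xen,multiboot-module";
            /* dom0 ramdisk */
            reg = <0x81000000 0x00200000>;
        };
    };
};
```

Each `reg` cell count is governed by the parent's #address-cells and #size-cells, which is why process_chosen_node() passes address_cells and size_cells through to device_tree_get_reg().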

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXq-0000Vm-Tn; Mon, 03 Dec 2012 17:11:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXq-0000VW-2N
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:02 +0000
Received: from [85.158.143.99:20236] by server-1.bemta-4.messagelabs.com id
	02/E0-27934-52DDCB05; Mon, 03 Dec 2012 17:11:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354554659!27546836!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11021 invoked from network); 3 Dec 2012 17:11:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221043"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:31 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Kg;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:26 +0000
Message-ID: <1354554631-17861-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/8] arm: avoid allocating the heaps over
	modules or xen itself.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/setup.c |   89 +++++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 78 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index a97455e..9f08daf 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -73,7 +73,8 @@ static void __init processor_id(void)
  * with required size and alignment that does not conflict with the
  * modules from first_mod to nr_modules.
  *
- * For non-recursive callers first_mod should normally be 1.
+ * For non-recursive callers first_mod should normally be 0 (all
+ * modules and Xen itself) or 1 (all modules but not Xen).
  */
 static paddr_t __init consider_modules(paddr_t s, paddr_t e,
                                        uint32_t size, paddr_t align,
@@ -106,6 +107,34 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     return e;
 }
 
+/*
+ * Return the end of the non-module region starting at s; in other
+ * words, the start of the next module at or after s.
+ *
+ * Also returns the end of that module in *n.
+ */
+static paddr_t __init next_module(paddr_t s, paddr_t *n)
+{
+    struct dt_module_info *mi = &early_info.modules;
+    paddr_t lowest = ~(paddr_t)0;
+    int i;
+
+    for ( i = 0; i <= mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( mod_s < s )
+            continue;
+        if ( mod_s > lowest )
+            continue;
+        lowest = mod_s;
+        *n = mod_e;
+    }
+    return lowest;
+}
+
+
 /**
  * get_xen_paddr - get physical address to relocate Xen to
  *
@@ -159,6 +188,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     paddr_t ram_start;
     paddr_t ram_end;
     paddr_t ram_size;
+    paddr_t s, e;
     unsigned long ram_pages;
     unsigned long heap_pages, xenheap_pages, domheap_pages;
     unsigned long dtb_pages;
@@ -176,22 +206,37 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     ram_pages = ram_size >> PAGE_SHIFT;
 
     /*
-     * Calculate the sizes for the heaps using these constraints:
+     * Locate the xenheap using these constraints:
      *
-     *  - heaps must be 32 MiB aligned
-     *  - must not include Xen itself
-     *  - xen heap must be at most 1 GiB
+     *  - must be 32 MiB aligned
+     *  - must not include Xen itself or the boot modules
+     *  - must be at most 1 GiB
+     *  - must be at least 128 MiB
      *
-     * XXX: needs a platform with at least 1GiB of RAM or the dom
-     * heap will be empty and no domains can be created.
+     * We try to allocate the largest xenheap possible within these
+     * constraints.
      */
-    heap_pages = (ram_size >> PAGE_SHIFT) - (32 << (20 - PAGE_SHIFT));
+    heap_pages = (ram_size >> PAGE_SHIFT);
     xenheap_pages = min(1ul << (30 - PAGE_SHIFT), heap_pages);
+
+    do
+    {
+        e = consider_modules(ram_start, ram_end, xenheap_pages<<PAGE_SHIFT,
+                             32<<20, 0);
+        if ( e )
+            break;
+
+        xenheap_pages >>= 1;
+    } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
+
+    if ( !e )
+        panic("Not enough space for xenheap\n");
+
     domheap_pages = heap_pages - xenheap_pages;
 
     printk("Xen heap: %lu pages  Dom heap: %lu pages\n", xenheap_pages, domheap_pages);
 
-    setup_xenheap_mappings(ram_start >> PAGE_SHIFT, xenheap_pages);
+    setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
     /*
      * Need a single mapped page for populating bootmem_region_list
@@ -215,8 +260,30 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     copy_from_paddr(device_tree_flattened, dtb_paddr, dtb_size, BUFFERABLE);
 
     /* Add non-xenheap memory */
-    init_boot_pages(pfn_to_paddr(xenheap_mfn_start + xenheap_pages),
-                    pfn_to_paddr(xenheap_mfn_start + xenheap_pages + domheap_pages));
+    s = ram_start;
+    while ( s < ram_end )
+    {
+        paddr_t n = ram_end;
+
+        e = next_module(s, &n);
+
+        if ( e == ~(paddr_t)0 )
+        {
+            e = n = ram_end;
+        }
+
+        /* Avoid the xenheap */
+        if ( s < ((xenheap_mfn_start+xenheap_pages) << PAGE_SHIFT)
+             && (xenheap_mfn_start << PAGE_SHIFT) < e )
+        {
+            e = pfn_to_paddr(xenheap_mfn_start);
+            n = pfn_to_paddr(xenheap_mfn_start+xenheap_pages);
+        }
+
+        init_boot_pages(s, e);
+
+        s = n;
+    }
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXt-0000Wf-2N; Mon, 03 Dec 2012 17:11:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXq-0000VV-T2
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:03 +0000
Received: from [85.158.137.99:63838] by server-13.bemta-3.messagelabs.com id
	99/3C-24887-52DDCB05; Mon, 03 Dec 2012 17:11:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354554658!17065432!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10985 invoked from network); 3 Dec 2012 17:11:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221042"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:31 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-ID;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:25 +0000
Message-ID: <1354554631-17861-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/8] arm: avoid placing Xen over any modules.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This will still fail if the modules are such that Xen is pushed out of
the top 32M of RAM since it will then overlap with the domheap (or
possibly xenheap). This will be dealt with later.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Xen is module 0, modules start at 1.
---
 xen/arch/arm/setup.c |   68 ++++++++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..a97455e 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -68,17 +68,55 @@ static void __init processor_id(void)
            READ_CP32(ID_ISAR3), READ_CP32(ID_ISAR4), READ_CP32(ID_ISAR5));
 }
 
+/*
+ * Returns the end address of the highest region in the range s..e
+ * with required size and alignment that does not conflict with the
+ * modules from first_mod to nr_modules.
+ *
+ * For non-recursive callers first_mod should normally be 1.
+ */
+static paddr_t __init consider_modules(paddr_t s, paddr_t e,
+                                       uint32_t size, paddr_t align,
+                                       int first_mod)
+{
+    const struct dt_module_info *mi = &early_info.modules;
+    int i;
+
+    s = (s+align-1) & ~(align-1);
+    e = e & ~(align-1);
+
+    if ( s > e || e - s < size )
+        return 0;
+
+    for ( i = first_mod; i <= mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( s < mod_e && mod_s < e )
+        {
+            mod_e = consider_modules(mod_e, e, size, align, i+1);
+            if ( mod_e )
+                return mod_e;
+
+            return consider_modules(s, mod_s, size, align, i+1);
+        }
+    }
+
+    return e;
+}
+
 /**
  * get_xen_paddr - get physical address to relocate Xen to
  *
- * Xen is relocated to the top of RAM and aligned to a XEN_PADDR_ALIGN
- * boundary.
+ * Xen is relocated to as near to the top of RAM as possible and
+ * aligned to a XEN_PADDR_ALIGN boundary.
  */
 static paddr_t __init get_xen_paddr(void)
 {
     struct dt_mem_info *mi = &early_info.mem;
     paddr_t min_size;
-    paddr_t paddr = 0, t;
+    paddr_t paddr = 0;
     int i;
 
     min_size = (_end - _start + (XEN_PADDR_ALIGN-1)) & ~(XEN_PADDR_ALIGN-1);
@@ -86,17 +124,33 @@ static paddr_t __init get_xen_paddr(void)
     /* Find the highest bank with enough space. */
     for ( i = 0; i < mi->nr_banks; i++ )
     {
-        if ( mi->bank[i].size >= min_size )
+        const struct membank *bank = &mi->bank[i];
+        paddr_t s, e;
+
+        if ( bank->size >= min_size )
         {
-            t = mi->bank[i].start + mi->bank[i].size - min_size;
-            if ( t > paddr )
-                paddr = t;
+            e = consider_modules(bank->start, bank->start + bank->size,
+                                 min_size, XEN_PADDR_ALIGN, 1);
+            if ( !e )
+                continue;
+
+            s = e - min_size;
+
+            if ( s > paddr )
+                paddr = s;
         }
     }
 
     if ( !paddr )
         early_panic("Not enough memory to relocate Xen\n");
 
+    early_printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+                 paddr, paddr + min_size);
+
+    /* Xen is module 0 */
+    early_info.modules.module[0].start = paddr;
+    early_info.modules.module[0].size = min_size;
+
     return paddr;
 }
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXt-0000X2-To; Mon, 03 Dec 2012 17:11:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXr-0000Vu-EB
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:03 +0000
Received: from [85.158.143.99:46157] by server-2.bemta-4.messagelabs.com id
	AD/58-28922-62DDCB05; Mon, 03 Dec 2012 17:11:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354554659!27546836!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11092 invoked from network); 3 Dec 2012 17:11:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221049"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:32 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-T8;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:31 +0000
Message-ID: <1354554631-17861-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 8/8] xen: strip /chosen/module@<N>/* from dom0
	device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These nodes are used by Xen to find the initial modules.

Drop only the nodes compatible with "xen,multiboot-module", in case
someone else has a similar idea for module nodes.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3 - use a helper to filter out DT elements which are not for dom0.
     Better than an ad-hoc break in the middle of a loop.
---
 xen/arch/arm/domain_build.c |   39 +++++++++++++++++++++++++++++++++++++--
 1 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7a964f7..8624be2 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -172,6 +172,39 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
     return prop;
 }
 
+/* Returns the next node in fdt (starting from offset) which should be
+ * passed through to dom0.
+ */
+static int fdt_next_dom0_node(const void *fdt, int node,
+                              int *depth_out,
+                              int parents[DEVICE_TREE_MAX_DEPTH])
+{
+    int depth = *depth_out;
+
+    while ( (node = fdt_next_node(fdt, node, &depth)) &&
+            node >= 0 && depth >= 0 )
+    {
+        if ( depth >= DEVICE_TREE_MAX_DEPTH )
+            break;
+
+        parents[depth] = node;
+
+        /* Skip /chosen/module@<N>/ and all subnodes */
+        if ( depth >= 2 &&
+             device_tree_node_matches(fdt, parents[1], "chosen") &&
+             device_tree_node_matches(fdt, parents[2], "module") &&
+             fdt_node_check_compatible(fdt, parents[2],
+                                       "xen,multiboot-module" ) == 0 )
+            continue;
+
+        /* We've arrived at a node which dom0 is interested in. */
+        break;
+    }
+
+    *depth_out = depth;
+    return node;
+}
+
 static int write_nodes(struct domain *d, struct kernel_info *kinfo,
                        const void *fdt)
 {
@@ -179,11 +212,12 @@ static int write_nodes(struct domain *d, struct kernel_info *kinfo,
     int depth = 0, last_depth = -1;
     u32 address_cells[DEVICE_TREE_MAX_DEPTH];
     u32 size_cells[DEVICE_TREE_MAX_DEPTH];
+    int parents[DEVICE_TREE_MAX_DEPTH];
     int ret;
 
     for ( node = 0, depth = 0;
           node >= 0 && depth >= 0;
-          node = fdt_next_node(fdt, node, &depth) )
+          node = fdt_next_dom0_node(fdt, node, &depth, parents) )
     {
         const char *name;
 
@@ -191,7 +225,8 @@ static int write_nodes(struct domain *d, struct kernel_info *kinfo,
 
         if ( depth >= DEVICE_TREE_MAX_DEPTH )
         {
-            printk("warning: node `%s' is nested too deep\n", name);
+            printk("warning: node `%s' is nested too deep (%d)\n",
+                   name, depth);
             continue;
         }
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:11:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZXu-0000XB-AU; Mon, 03 Dec 2012 17:11:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZXr-0000Vt-Hv
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:11:03 +0000
Received: from [85.158.137.99:27389] by server-2.bemta-3.messagelabs.com id
	E2/39-04744-62DDCB05; Mon, 03 Dec 2012 17:11:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354554658!17065432!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11090 invoked from network); 3 Dec 2012 17:11:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:11:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216221047"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:10:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:10:32 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfZXL-0005on-Ql;
	Mon, 03 Dec 2012 17:10:31 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 17:10:30 +0000
Message-ID: <1354554631-17861-7-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 7/8] arm: use /chosen/module@1/bootargs for
	domain 0 command line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fall back to xen,dom0-bootargs if this isn't present.

Ideally this would use module1-args iff the kernel came from
module@1/{start,end} and the existing xen,dom0-bootargs if the kernel
came from flash, but this approach is simpler and has the same effect
in practice.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: update for new DT layout
---
 xen/arch/arm/domain_build.c |   28 ++++++++++++++++++++++++----
 1 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e96ed10..7a964f7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -86,8 +86,13 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
                             int node, const char *name, int depth,
                             u32 address_cells, u32 size_cells)
 {
+    const char *bootargs = NULL;
     int prop;
 
+    if ( early_info.modules.nr_mods >= 1 &&
+         early_info.modules.module[1].cmdline[0] )
+        bootargs = &early_info.modules.module[1].cmdline[0];
+
     for ( prop = fdt_first_property_offset(fdt, node);
           prop >= 0;
           prop = fdt_next_property_offset(fdt, prop) )
@@ -104,15 +109,22 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
         prop_len  = fdt32_to_cpu(p->len);
 
         /*
-         * In chosen node: replace bootargs with value from
-         * xen,dom0-bootargs.
+         * In chosen node:
+         *
+         * * remember xen,dom0-bootargs if we don't already have
+         *   bootargs (from module #1, above).
+         * * remove bootargs and xen,dom0-bootargs.
          */
         if ( device_tree_node_matches(fdt, node, "chosen") )
         {
             if ( strcmp(prop_name, "bootargs") == 0 )
                 continue;
-            if ( strcmp(prop_name, "xen,dom0-bootargs") == 0 )
-                prop_name = "bootargs";
+            else if ( strcmp(prop_name, "xen,dom0-bootargs") == 0 )
+            {
+                if ( !bootargs )
+                    bootargs = prop_data;
+                continue;
+            }
         }
         /*
          * In a memory node: adjust reg property.
@@ -147,6 +159,14 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
         xfree(new_data);
     }
 
+    if ( device_tree_node_matches(fdt, node, "chosen") && bootargs )
+        fdt_property(kinfo->fdt, "bootargs", bootargs, strlen(bootargs));
+
+    /*
+     * XXX should populate /chosen/linux,initrd-{start,end} here if we
+     * have module[2]
+     */
+
     if ( prop == -FDT_ERR_NOTFOUND )
         return 0;
     return prop;
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:12:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZZT-0001I1-Ih; Mon, 03 Dec 2012 17:12:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZZR-0001H1-RQ
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 17:12:41 +0000
Received: from [85.158.137.99:23942] by server-14.bemta-3.messagelabs.com id
	6B/93-31424-48DDCB05; Mon, 03 Dec 2012 17:12:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354554755!12657557!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29812 invoked from network); 3 Dec 2012 17:12:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:12:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16129459"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:12:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	17:12:35 +0000
Message-ID: <1354554754.2693.31.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <raistlin@linux.it>
Date: Mon, 3 Dec 2012 17:12:34 +0000
In-Reply-To: <dde3de6d81a3014f1d13.1354552498@Solace>
References: <patchbomb.1354552497@Solace>
	<dde3de6d81a3014f1d13.1354552498@Solace>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 3] xen: sched_credit,
 improve tickling of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 16:34 +0000, Dario Faggioli wrote:
> +            /*
> +             * If there aren't idlers for new, then letting new preempt cur and
> +             * try to migrate cur becomes ineviable. If that is the case, update

"inevitable"

> +             * the mask of the to-be-tickled CPUs accordingly (i.e., with cur's
> +             * idlers instead of new's).
> +             */
> +            if ( new->pri > cur->pri && cpumask_empty(&idle_mask) )
> +                cpumask_and(&idle_mask, prv->idlers, cur->vcpu->cpu_affinity);
> +

(I have nothing more intelligent to say...)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:12:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZZc-0001Li-0p; Mon, 03 Dec 2012 17:12:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TfZZa-0001KY-7x
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:12:50 +0000
Received: from [85.158.138.51:5407] by server-15.bemta-3.messagelabs.com id
	6D/7B-23779-19DDCB05; Mon, 03 Dec 2012 17:12:49 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354554766!32470096!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18228 invoked from network); 3 Dec 2012 17:12:47 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:12:47 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so2517976vcb.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 09:12:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=fUyi96NGc1KgqA/REhmJd7+Cd01T87SRS6okeDb2t9s=;
	b=v7QsOcX8xTiAiXS/yA/iunN4Ll6bmlS6iuQp4l4pAPmlBwGKxGCIQ+8y13ZjL9VKc7
	NH4Qb9o80uYI7xKpbib2echYD13b5WLMVWofoDQddtTbhv/IGkiUasEAU/XsfXI0aPeF
	0KNE26lN2SUUYBVxrEeBWHPlb6BEcnGlC+Mut1wyqVNozANwCVX5h9rcBQHmLSKXIpZr
	44iapGmkc3sBgYOcHGLoDLFzgVxTR4rChLZpEXxE2oFB6SfEG7721UBG7rlylPmU+lWd
	dUPBsiTgsGT99O/q+mRzXwz67NvUAybgQQ32MTcH/MbDeS3O8nFg0OedLzDSmKLXqz1S
	zKhQ==
MIME-Version: 1.0
Received: by 10.52.92.144 with SMTP id cm16mr8018727vdb.36.1354554766170; Mon,
	03 Dec 2012 09:12:46 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Mon, 3 Dec 2012 09:12:46 -0800 (PST)
In-Reply-To: <CAFLBxZYZWtargvXXUDrVNpRKouLgqmPth09qwqxmBFYnuGTGMw@mail.gmail.com>
References: <1352999682-11535-1-git-send-email-george.dunlap@eu.citrix.com>
	<1353346964.18229.151.camel@zakaz.uk.xensource.com>
	<CAFLBxZYZWtargvXXUDrVNpRKouLgqmPth09qwqxmBFYnuGTGMw@mail.gmail.com>
Date: Mon, 3 Dec 2012 17:12:46 +0000
X-Google-Sender-Auth: KCDUigAOYzob_ZLaXnzLntpTSFM
Message-ID: <CAFLBxZYoq0WPPeV0rNf3T=vvo9i9RvfKskAh_9hsNrTiSW6F7g@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] Make all public hosting providers
 eligible for the pre-disclosure list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4956836619025486511=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4956836619025486511==
Content-Type: multipart/alternative; boundary=20cf307cff96aa9bfc04cff5dba7

--20cf307cff96aa9bfc04cff5dba7
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Nov 27, 2012 at 12:05 PM, George Dunlap <George.Dunlap@eu.citrix.com
> wrote:

> On Mon, Nov 19, 2012 at 5:42 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:
>
>> > +    <p>Here as a rule of thumb, "public hosting provider" means
>> > +    "selling virtualization services to the general public";
>> > +    "large-scale" and "widely deployed" means an installed base of
>> > +    300,000 or more Xen guests.  Other well-established organisations
>> > +    with a mature security response process will be considered on a
>> > +    case-by-case basis.</p>
>>
>> I agree with Ian that relaxing this for software vendors also seems the
>> correct thing to do.
>>
>
> If we're going to include software vendors, we need some simple criteria
> to define what a "real" software vendor is.  The idea of asking for a link
> from cloud providers pointing to public rates and a security policy, which
> Ian Jackson proffered, was a good one.  I suppose we could do something
> similar for software providers: a link to a web page with either
> download-able install images, or prices, perhaps?
>

Thinking a bit more about this one -- if someone had (say) a Debian
derivative that looked like it was basically just a different default set
of packages -- IOW, a very small amount of effort to create and maintain --
that wouldn't qualify for being on the list, right?  So it seems to me like
"amount of effort spent" is kind of what we're looking for, right?  I mean,
if 2-3 developers are spending 3-4 hours per week each working on
something, then it's clearly a project we could consider, even if it's
really small.  I'm sure that would include QubesOS, ArchLinux, Edubuntu,
Scientific Linux, &c &c.  But if it's one guy spending half an hour every
six months, he doesn't really need to be on the list I don't think.

This would be a "rule of thumb" or "guideline" rather than a rule.

Thoughts?

 -George

--20cf307cff96aa9bfc04cff5dba7
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Tue, Nov 27, 2012 at 12:05 PM, George Dunlap <span dir=3D"ltr">&lt;<a hr=
ef=3D"mailto:George.Dunlap@eu.citrix.com" target=3D"_blank">George.Dunlap@e=
u.citrix.com</a>&gt;</span> wrote:<br><div class=3D"gmail_extra"><div class=
=3D"gmail_quote">
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"im">On Mon, Nov 19, 2012 at 5:=
42 PM, Ian Campbell <span dir=3D"ltr">&lt;<a href=3D"mailto:Ian.Campbell@ci=
trix.com" target=3D"_blank">Ian.Campbell@citrix.com</a>&gt;</span> wrote:<b=
r>
</div><div class=3D"gmail_extra"><div class=3D"gmail_quote"><div class=3D"i=
m">
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">&gt; + =A0 =A0&lt;p&gt;Here as a rule of thu=
mb, &quot;public hosting provider&quot; means<br><div><div>
&gt; + =A0 =A0&quot;selling virtualization services to the general public&q=
uot;;<br>
&gt; + =A0 =A0&quot;large-scale&quot; and &quot;widely deployed&quot; means=
 an installed base of<br>
&gt; + =A0 =A0300,000 or more Xen guests. =A0Other well-established organis=
ations<br>
&gt; + =A0 =A0with a mature security response process will be considered on=
 a<br>
&gt; + =A0 =A0case-by-case basis.&lt;/p&gt;<br>
<br>
</div></div>I agree with Ian that relaxing this for software vendors also s=
eems the<br>
correct thing to do.<br></blockquote></div><div><br>If we&#39;re going to i=
nclude software vendors, we need some simple criteria to define what a &quo=
t;real&quot; software vendor is.=A0 The idea of asking for a link from clou=
d providers pointing to public rates and a security policy, which Ian Jacks=
on proffered, was a good one.=A0 I suppose we could do something similar fo=
r software providers: a link to a web page with either download-able instal=
l images, or prices, perhaps?<br>
</div></div></div></blockquote><div><br>Thinking a bit more about this one =
-- if someone had (say) a Debian derivative that looked like it was basical=
ly just a different default set of packages -- IOW, a very small amount of =
effort to create and maintain -- that wouldn&#39;t qualify for being on the=
 list, right?=A0 So it seems to me like &quot;amount of effort spent&quot; =
is kind of what we&#39;re looking for, right?=A0 I mean, if 2-3 developers =
are spending 3-4 hours per week each working on something, then it&#39;s cl=
early a project we could consider, even if it&#39;s really small.=A0 I&#39;=
m sure that would include QubesOS, ArchLinux, Edubuntu, Scientific Linux, &=
amp;c &amp;c.=A0 But if it&#39;s one guy spending half an hour every six mo=
nths, he doesn&#39;t really need to be on the list I don&#39;t think.<br>
<br>This would be a &quot;rule of thumb&quot; or &quot;guideline&quot; rath=
er than a rule.<br><br>Thoughts?<br><br>=A0-George<br></div></div><br></div=
>

--20cf307cff96aa9bfc04cff5dba7--


--===============4956836619025486511==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4956836619025486511==--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:19:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:19:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZg4-00027k-TV; Mon, 03 Dec 2012 17:19:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfZg3-00027b-AX
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 17:19:31 +0000
Received: from [85.158.143.99:18410] by server-1.bemta-4.messagelabs.com id
	12/E8-27934-12FDCB05; Mon, 03 Dec 2012 17:19:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354555169!18239625!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29241 invoked from network); 3 Dec 2012 17:19:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:19:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16129580"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:19:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 3 Dec 2012 17:19:28 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfZg0-0005Fc-Gn;
	Mon, 03 Dec 2012 17:19:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfZg0-00089P-2C;
	Mon, 03 Dec 2012 17:19:28 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14557-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Dec 2012 17:19:28 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14557: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14557 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14557/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 14482
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 14482

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install     fail pass in 14556
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail in 14556 pass in 14557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14556 never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14556 never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 428 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:27:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZnK-0002ME-SZ; Mon, 03 Dec 2012 17:27:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfZnI-0002M9-KO
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:27:00 +0000
Received: from [193.109.254.147:46522] by server-4.bemta-14.messagelabs.com id
	72/67-18856-3E0ECB05; Mon, 03 Dec 2012 17:26:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354555619!1533091!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6676 invoked from network); 3 Dec 2012 17:26:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:26:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16129706"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:26:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	17:26:58 +0000
Message-ID: <1354555617.2693.38.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Mon, 3 Dec 2012 17:26:57 +0000
In-Reply-To: <CAFLBxZYoq0WPPeV0rNf3T=vvo9i9RvfKskAh_9hsNrTiSW6F7g@mail.gmail.com>
References: <1352999682-11535-1-git-send-email-george.dunlap@eu.citrix.com>
	<1353346964.18229.151.camel@zakaz.uk.xensource.com>
	<CAFLBxZYZWtargvXXUDrVNpRKouLgqmPth09qwqxmBFYnuGTGMw@mail.gmail.com>
	<CAFLBxZYoq0WPPeV0rNf3T=vvo9i9RvfKskAh_9hsNrTiSW6F7g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] Make all public hosting providers
 eligible for the pre-disclosure list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 17:12 +0000, George Dunlap wrote:
> On Tue, Nov 27, 2012 at 12:05 PM, George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>         On Mon, Nov 19, 2012 at 5:42 PM, Ian Campbell
>         <Ian.Campbell@citrix.com> wrote:
>         
>                 > +    <p>Here as a rule of thumb, "public hosting
>                 provider" means
>                 > +    "selling virtualization services to the general
>                 public";
>                 > +    "large-scale" and "widely deployed" means an
>                 installed base of
>                 > +    300,000 or more Xen guests.  Other
>                 well-established organisations
>                 > +    with a mature security response process will be
>                 considered on a
>                 > +    case-by-case basis.</p>
>                 
>                 
>                 I agree with Ian that relaxing this for software
>                 vendors also seems the
>                 correct thing to do.
>         
>         If we're going to include software vendors, we need some
>         simple criteria to define what a "real" software vendor is.
>         The idea of asking for a link from cloud providers pointing to
>         public rates and a security policy, which Ian Jackson
>         proffered, was a good one.  I suppose we could do something
>         similar for software providers: a link to a web page with
>         either download-able install images, or prices, perhaps?
>         
> 
> Thinking a bit more about this one -- if someone had (say) a Debian
> derivative that looked like it was basically just a different default
> set of packages -- IOW, a very small amount of effort to create and
> maintain -- that wouldn't qualify for being on the list, right?  So it
> seems to me like "amount of effort spent" is kind of what we're
> looking for, right?  I mean, if 2-3 developers are spending 3-4 hours
> per week each working on something, then it's clearly a project we
> could consider, even if it's really small.  I'm sure that would
> include QubesOS, ArchLinux, Edubuntu, Scientific Linux, &c &c.  But if
> it's one guy spending half an hour every six months, he doesn't really
> need to be on the list I don't think.

Is "deviation from upstream" a factor at all?

e.g. if a Debian derivative just ships the Xen bits direct from Debian
then there doesn't seem to be much call for them to be on the list; they
can simply ship the security update too. This would be true even if
there were dozens of engineers working full time.

On the other hand, if they are packaging Xen themselves, or modifying
the Debian package substantially, that would potentially be somewhat
different, even if they were small (like your example above).

> This would be a "rule of thumb" or "guideline" rather than a rule.

Yes, I think most of these things will have to be.

I went looking for the linux-distros list inclusion criteria, in the
hopes we could just piggy back off that, but I can't find it right now.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:35:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfZvg-0002Yk-UV; Mon, 03 Dec 2012 17:35:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfZvf-0002Yf-7B
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:35:39 +0000
Received: from [85.158.137.99:28936] by server-6.bemta-3.messagelabs.com id
	EC/CD-28265-AE2ECB05; Mon, 03 Dec 2012 17:35:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354556137!17068067!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18285 invoked from network); 3 Dec 2012 17:35:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 17:35:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 17:35:35 +0000
Message-Id: <50BCF0F502000078000AD53F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 17:35:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <Wei.Liu2@citrix.com>
References: <1354552154.18784.9.camel@iceland>
In-Reply-To: <1354552154.18784.9.camel@iceland>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 17:29, Wei Liu <Wei.Liu2@citrix.com> wrote:
> Regarding Jan's comment in [0], I don't think allowing the user to
> specify an arbitrary number of levels is a good idea. Only the last
> level should be shared among vcpus; the other levels should be in a
> percpu struct to allow for quicker lookup. Letting the user specify
> the number of levels would be too complicated to implement and would
> blow up the percpu section (since the size grows exponentially).
> Three levels should be quite enough. See the maths below.

I didn't ask you to implement more than three levels; I just asked for
the interface to establish the number of levels a guest wants to use,
to allow for higher numbers (passing which would result in -EINVAL in
your implementation).

> Number of event channels:
>  * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
>  * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 512k
> Basically the third level is a new ABI, so I choose to use unsigned long
> long here to get more event channels.

Please don't: this would make things less consistent to handle, at
least in the guest-side code. And I don't see why you would need to do
so anyway (or else your argument above against further levels would
become questionable).

> Pages occupied by the third level (if PAGE_SIZE=4k):
>  * 32bit: 64k  / 8 / 4k = 2
>  * 64bit: 512k / 8 / 4k = 16
> 
> Making the second level percpu will incur overhead. In effect we move
> the array in shared_info into the percpu struct:
>  * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 byte
>  * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 byte
> 
> What concerns me is that the struct evtchn buckets are allocated all at
> once during initialization phrase. To save memory inside Xen, the
> internal allocation/free scheme for evtchn needs to be modified. Ian
> suggested we do small number of buckets at start of day then dynamically
> fault in more as required.
> 
> To sum up:
>      1. Guest should allocate pages for third level evtchn.
>      2. Guest should register third level pages via a new hypercall op.

Doesn't the guest also need to set up space for the 2nd level?

Jan

>      3. Hypervisor should setup third level evtchn in that hypercall op.
>      4. Only last level (third in this case) should be shared among
>         vcpus.
>      5. Need a flexible allocation/free scheme of struct evtchn.
>      6. Debug dumping should use snapshot to avoid holding event lock
>         for too long. (Jan's concern in [0])
> 
> Any comments are welcome.
> 
> 
> Wei.
> 
> [0] http://thread.gmane.org/gmane.comp.emulators.xen.devel/139921 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:43:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:43:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfa3P-0002nf-5J; Mon, 03 Dec 2012 17:43:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfa3N-0002nY-T3
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:43:37 +0000
Received: from [85.158.143.99:33249] by server-1.bemta-4.messagelabs.com id
	A4/2D-27934-9C4ECB05; Mon, 03 Dec 2012 17:43:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1354556616!24512893!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10666 invoked from network); 3 Dec 2012 17:43:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:43:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16130150"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:43:36 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	17:43:36 +0000
Message-ID: <1354556615.2693.41.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Date: Mon, 3 Dec 2012 17:43:35 +0000
In-Reply-To: <1354552154.18784.9.camel@iceland>
References: <1354552154.18784.9.camel@iceland>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 16:29 +0000, Wei Liu wrote:
> Hi all
> 
> There has been discussion on extending number of event channels back in
> September [0].

While we are changing the ABI anyway, it would be nice to consider the
possibility for some small-ish number of per-vcpu evtchns.

These would potentially be useful for things like IPI vectors and such
which every vcpu has.

e.g. reusing the per-vcpu space in evtchn_pending_sel would give 32 or
64 which would be more than sufficient.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:48:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:48:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfa8G-0002we-TB; Mon, 03 Dec 2012 17:48:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tfa8F-0002wZ-VM
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:48:40 +0000
Received: from [85.158.143.35:34907] by server-2.bemta-4.messagelabs.com id
	3F/E8-28922-7F5ECB05; Mon, 03 Dec 2012 17:48:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354556913!13426619!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25326 invoked from network); 3 Dec 2012 17:48:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 17:48:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 17:48:33 +0000
Message-Id: <50BCF40002000078000AD584@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 17:48:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>, "Wei Liu" <Wei.Liu2@citrix.com>
References: <1354552154.18784.9.camel@iceland>
	<1354556615.2693.41.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354556615.2693.41.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 18:43, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2012-12-03 at 16:29 +0000, Wei Liu wrote:
>> Hi all
>> 
>> There has been discussion on extending number of event channels back in
>> September [0].
> 
> While we are changing the ABI anyway, it would be nice to consider the
> possibility for some small-ish number of per-vcpu evtchns.
> 
> These would potentially be useful for things like IPI vectors and such
> which every vcpu has.

While I agree to this, ...

> e.g. reusing the per-vcpu space in evtchn_pending_sel would give 32 or
> 64 which would be more than sufficient.

... I would have hoped for this field to retain its use as top level
selector.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:50:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:50:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaA0-00033m-D1; Mon, 03 Dec 2012 17:50:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfa9z-00033f-DH
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:50:27 +0000
Received: from [85.158.143.35:40573] by server-3.bemta-4.messagelabs.com id
	85/1E-06841-266ECB05; Mon, 03 Dec 2012 17:50:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354557022!16056555!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16085 invoked from network); 3 Dec 2012 17:50:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:50:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16130268"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:50:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	17:50:22 +0000
Message-ID: <1354557020.2693.42.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 3 Dec 2012 17:50:20 +0000
In-Reply-To: <50BCF40002000078000AD584@nat28.tlf.novell.com>
References: <1354552154.18784.9.camel@iceland>
	<1354556615.2693.41.camel@zakaz.uk.xensource.com>
	<50BCF40002000078000AD584@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 17:48 +0000, Jan Beulich wrote:
> >>> On 03.12.12 at 18:43, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2012-12-03 at 16:29 +0000, Wei Liu wrote:
> >> Hi all
> >> 
> >> There has been discussion on extending number of event channels back in
> >> September [0].
> > 
> > While we are changing the ABI anyway, it would be nice to consider the
> > possibility for some small-ish number of per-vcpu evtchns.
> > 
> > These would potentially be useful for things like IPI vectors and such
> > which every vcpu has.
> 
> While I agree to this, ...
> 
> > e.g. reusing the per-vcpu space in evtchn_pending_sel would give 32 or
> > 64 which would be more than sufficient.
> 
> ... I would have hoped for this field to retain its use as top level
> selector.

Yes, I'm not sure why I thought it became unused...

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBQ-0003FU-SR; Mon, 03 Dec 2012 17:51:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBO-0003Ez-Jp; Mon, 03 Dec 2012 17:51:55 +0000
Received: from [85.158.139.211:48146] by server-10.bemta-5.messagelabs.com id
	5E/BE-09257-9B6ECB05; Mon, 03 Dec 2012 17:51:53 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1354557111!18824868!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13587 invoked from network); 3 Dec 2012 17:51:52 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-5.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:52 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBD-0002Mr-Qx; Mon, 03 Dec 2012 17:51:43 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBC-00066G-S2; Mon, 03 Dec 2012 17:51:42 +0000
Date: Mon, 03 Dec 2012 17:51:42 +0000
Message-Id: <E1TfaBC-00066G-S2@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 26 (CVE-2012-5510) - Grant table
 version switch list corruption vulnerability
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5510 / XSA-26
                             version 3

       Grant table version switch list corruption vulnerability

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Downgrading the grant table version of a guest involves freeing its
status pages. This freeing was incomplete - the page(s) are freed back
to the allocator, but not removed from the domain's tracking
list. This would cause list corruption, eventually leading to a
hypervisor crash.

IMPACT
======

A malicious guest administrator can cause Xen to crash, leading to a
denial of service.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.0 onwards are vulnerable.

Versions 3.4 and earlier are not vulnerable.

MITIGATION
==========

Running only guests with trusted kernels will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa26-4.1.patch             Xen 4.1.x
xsa26-4.2.patch             Xen 4.2.x
xsa26-unstable.patch        xen-unstable


$ sha256sum xsa26*.patch
b4674ddaf9a9786d5e7e5e4f248f6095e118184df581036e0531b5db5e1d645b  xsa26-4.1.patch
a6e2ed7bae3e62d4294fdb48e8a5418b1de8e0e690f4fea4bb430d2b7cf758e6  xsa26-4.2.patch
ac2d5a82f0dba0f4213607a0e3bb9be586d90173bbadc4b402c2f19fbe4b2cf3  xsa26-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ1AAoJEIP+FMlX6CvZBHIH/jI42gGLsThzGlgkFg2aqE74
EUKIPZE4DLQNl6oTQ/fp0dfJgsQ8XHldovl4EphWK+oO0osloE2HjAY5mesOraui
IIQHRkbosbDshDcSqFDndl+xjAEk1ohlGMMpSdUImIHdFF8ZJneXdK11cqxMtCKR
27ych3lDViqy0OqxFGRZpsBE0hHqU7aiL8Orr+tI4sANnd/qVfZcdqizoTRuAJX3
KOmaq+8VwoRSeppAvVgcnGkDLyCd5udRLNEenjrFo1YkC01bVIdbD59/ZwEIC6eZ
iR7bvppV1nuq9WnbCkx+FVkNc9AuGwUZMOdePH2PwLYqIZGMBi9uqUD3Y0HHMoo=
=OtT0
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa26-4.1.patch"
Content-Disposition: attachment; filename="xsa26-4.1.patch"
Content-Transfer-Encoding: base64

Z250dGFiOiBmaXggcmVsZWFzaW5nIG9mIG1lbW9yeSB1cG9uIHN3aXRjaGVz
IGJldHdlZW4gdmVyc2lvbnMKCmdudHRhYl91bnBvcHVsYXRlX3N0YXR1c19m
cmFtZXMoKSBpbmNvbXBsZXRlbHkgZnJlZWQgdGhlIHBhZ2VzCnByZXZpb3Vz
bHkgdXNlZCBhcyBzdGF0dXMgZnJhbWUgaW4gdGhhdCB0aGV5IGRpZCBub3Qg
Z2V0IHJlbW92ZWQgZnJvbQp0aGUgZG9tYWluJ3MgeGVucGFnZV9saXN0LCB0
aHVzIGNhdXNpbmcgc3Vic2VxdWVudCBsaXN0IGNvcnJ1cHRpb24Kd2hlbiB0
aG9zZSBwYWdlcyBkaWQgZ2V0IGFsbG9jYXRlZCBhZ2FpbiBmb3IgdGhlIHNh
bWUgb3IgYW5vdGhlciBwdXJwb3NlLgoKU2ltaWxhcmx5LCBncmFudF90YWJs
ZV9jcmVhdGUoKSBhbmQgZ250dGFiX2dyb3dfdGFibGUoKSBib3RoIGltcHJv
cGVybHkKY2xlYW4gdXAgaW4gdGhlIGV2ZW50IG9mIGFuIGVycm9yIC0gcGFn
ZXMgYWxyZWFkeSBzaGFyZWQgd2l0aCB0aGUgZ3Vlc3QKY2FuJ3QgYmUgZnJl
ZWQgYnkganVzdCBwYXNzaW5nIHRoZW0gdG8gZnJlZV94ZW5oZWFwX3BhZ2Uo
KS4gRml4IHRoaXMgYnkKc2hhcmluZyB0aGUgcGFnZXMgb25seSBhZnRlciBh
bGwgYWxsb2NhdGlvbnMgc3VjY2VlZGVkLgoKVGhpcyBpcyBDVkUtMjAxMi01
NTEwIC8gWFNBLTI2LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogSWFuIENhbXBiZWxsIDxpYW4u
Y2FtcGJlbGxAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2dyYW50X3RhYmxlLmMgYi94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKaW5k
ZXggNmMwYWE2Zi4uYTE4MGFlZiAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9n
cmFudF90YWJsZS5jCisrKyBiL3hlbi9jb21tb24vZ3JhbnRfdGFibGUuYwpA
QCAtMTEyNiwxMiArMTEyNiwxMyBAQCBmYXVsdDoKIH0KIAogc3RhdGljIGlu
dAotZ250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoc3RydWN0IGRvbWFp
biAqZCwgc3RydWN0IGdyYW50X3RhYmxlICpndCkKK2dudHRhYl9wb3B1bGF0
ZV9zdGF0dXNfZnJhbWVzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBncmFu
dF90YWJsZSAqZ3QsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1
bnNpZ25lZCBpbnQgcmVxX25yX2ZyYW1lcykKIHsKICAgICB1bnNpZ25lZCBp
OwogICAgIHVuc2lnbmVkIHJlcV9zdGF0dXNfZnJhbWVzOwogCi0gICAgcmVx
X3N0YXR1c19mcmFtZXMgPSBncmFudF90b19zdGF0dXNfZnJhbWVzKGd0LT5u
cl9ncmFudF9mcmFtZXMpOworICAgIHJlcV9zdGF0dXNfZnJhbWVzID0gZ3Jh
bnRfdG9fc3RhdHVzX2ZyYW1lcyhyZXFfbnJfZnJhbWVzKTsKICAgICBmb3Ig
KCBpID0gbnJfc3RhdHVzX2ZyYW1lcyhndCk7IGkgPCByZXFfc3RhdHVzX2Zy
YW1lczsgaSsrICkKICAgICB7CiAgICAgICAgIGlmICggKGd0LT5zdGF0dXNb
aV0gPSBhbGxvY194ZW5oZWFwX3BhZ2UoKSkgPT0gTlVMTCApCkBAIC0xMTYy
LDcgKzExNjMsMTIgQEAgZ250dGFiX3VucG9wdWxhdGVfc3RhdHVzX2ZyYW1l
cyhzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZ3JhbnRfdGFibGUgKmd0KQog
CiAgICAgZm9yICggaSA9IDA7IGkgPCBucl9zdGF0dXNfZnJhbWVzKGd0KTsg
aSsrICkKICAgICB7Ci0gICAgICAgIHBhZ2Vfc2V0X293bmVyKHZpcnRfdG9f
cGFnZShndC0+c3RhdHVzW2ldKSwgZG9tX3hlbik7CisgICAgICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnID0gdmlydF90b19wYWdlKGd0LT5zdGF0dXNbaV0p
OworCisgICAgICAgIEJVR19PTihwYWdlX2dldF9vd25lcihwZykgIT0gZCk7
CisgICAgICAgIGlmICggdGVzdF9hbmRfY2xlYXJfYml0KF9QR0NfYWxsb2Nh
dGVkLCAmcGctPmNvdW50X2luZm8pICkKKyAgICAgICAgICAgIHB1dF9wYWdl
KHBnKTsKKyAgICAgICAgQlVHX09OKHBnLT5jb3VudF9pbmZvICYgflBHQ194
ZW5faGVhcCk7CiAgICAgICAgIGZyZWVfeGVuaGVhcF9wYWdlKGd0LT5zdGF0
dXNbaV0pOwogICAgICAgICBndC0+c3RhdHVzW2ldID0gTlVMTDsKICAgICB9
CkBAIC0xMjAwLDE5ICsxMjA2LDE4IEBAIGdudHRhYl9ncm93X3RhYmxlKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCByZXFfbnJfZnJhbWVzKQog
ICAgICAgICBjbGVhcl9wYWdlKGd0LT5zaGFyZWRfcmF3W2ldKTsKICAgICB9
CiAKLSAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFtZXMgd2l0aCB0
aGUgcmVjaXBpZW50IGRvbWFpbiAqLwotICAgIGZvciAoIGkgPSBucl9ncmFu
dF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsrICkKLSAgICAg
ICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwgaSk7Ci0KLSAg
ICBndC0+bnJfZ3JhbnRfZnJhbWVzID0gcmVxX25yX2ZyYW1lczsKLQogICAg
IC8qIFN0YXR1cyBwYWdlcyAtIHZlcnNpb24gMiAqLwogICAgIGlmIChndC0+
Z3RfdmVyc2lvbiA+IDEpCiAgICAgewotICAgICAgICBpZiAoIGdudHRhYl9w
b3B1bGF0ZV9zdGF0dXNfZnJhbWVzKGQsIGd0KSApCisgICAgICAgIGlmICgg
Z250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoZCwgZ3QsIHJlcV9ucl9m
cmFtZXMpICkKICAgICAgICAgICAgIGdvdG8gc2hhcmVkX2FsbG9jX2ZhaWxl
ZDsKICAgICB9CiAKKyAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFt
ZXMgd2l0aCB0aGUgcmVjaXBpZW50IGRvbWFpbiAqLworICAgIGZvciAoIGkg
PSBucl9ncmFudF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsr
ICkKKyAgICAgICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwg
aSk7CisgICAgZ3QtPm5yX2dyYW50X2ZyYW1lcyA9IHJlcV9ucl9mcmFtZXM7
CisKICAgICByZXR1cm4gMTsKIAogc2hhcmVkX2FsbG9jX2ZhaWxlZDoKQEAg
LTIxMzQsNyArMjEzOSw3IEBAIGdudHRhYl9zZXRfdmVyc2lvbihYRU5fR1VF
U1RfSEFORExFKGdudHRhYl9zZXRfdmVyc2lvbl90IHVvcCkpCiAKICAgICBp
ZiAoIG9wLnZlcnNpb24gPT0gMiAmJiBndC0+Z3RfdmVyc2lvbiA8IDIgKQog
ICAgIHsKLSAgICAgICAgcmVzID0gZ250dGFiX3BvcHVsYXRlX3N0YXR1c19m
cmFtZXMoZCwgZ3QpOworICAgICAgICByZXMgPSBnbnR0YWJfcG9wdWxhdGVf
c3RhdHVzX2ZyYW1lcyhkLCBndCwgbnJfZ3JhbnRfZnJhbWVzKGd0KSk7CiAg
ICAgICAgIGlmICggcmVzIDwgMCkKICAgICAgICAgICAgIGdvdG8gb3V0X3Vu
bG9jazsKICAgICB9CkBAIC0yNDQ5LDkgKzI0NTQsNiBAQCBncmFudF90YWJs
ZV9jcmVhdGUoCiAgICAgICAgIGNsZWFyX3BhZ2UodC0+c2hhcmVkX3Jhd1tp
XSk7CiAgICAgfQogICAgIAotICAgIGZvciAoIGkgPSAwOyBpIDwgSU5JVElB
TF9OUl9HUkFOVF9GUkFNRVM7IGkrKyApCi0gICAgICAgIGdudHRhYl9jcmVh
dGVfc2hhcmVkX3BhZ2UoZCwgdCwgaSk7Ci0KICAgICAvKiBTdGF0dXMgcGFn
ZXMgZm9yIGdyYW50IHRhYmxlIC0gZm9yIHZlcnNpb24gMiAqLwogICAgIHQt
PnN0YXR1cyA9IHhtYWxsb2NfYXJyYXkoZ3JhbnRfc3RhdHVzX3QgKiwKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGdyYW50X3RvX3N0YXR1c19m
cmFtZXMobWF4X25yX2dyYW50X2ZyYW1lcykpOwpAQCAtMjQ1OSw2ICsyNDYx
LDEwIEBAIGdyYW50X3RhYmxlX2NyZWF0ZSgKICAgICAgICAgZ290byBub19t
ZW1fNDsKICAgICBtZW1zZXQodC0+c3RhdHVzLCAwLAogICAgICAgICAgICBn
cmFudF90b19zdGF0dXNfZnJhbWVzKG1heF9ucl9ncmFudF9mcmFtZXMpICog
c2l6ZW9mKHQtPnN0YXR1c1swXSkpOworCisgICAgZm9yICggaSA9IDA7IGkg
PCBJTklUSUFMX05SX0dSQU5UX0ZSQU1FUzsgaSsrICkKKyAgICAgICAgZ250
dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCB0LCBpKTsKKwogICAgIHQtPm5y
X3N0YXR1c19mcmFtZXMgPSAwOwogCiAgICAgLyogT2theSwgaW5zdGFsbCB0
aGUgc3RydWN0dXJlLiAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa26-4.2.patch"
Content-Disposition: attachment; filename="xsa26-4.2.patch"
Content-Transfer-Encoding: base64

Z250dGFiOiBmaXggcmVsZWFzaW5nIG9mIG1lbW9yeSB1cG9uIHN3aXRjaGVz
IGJldHdlZW4gdmVyc2lvbnMKCmdudHRhYl91bnBvcHVsYXRlX3N0YXR1c19m
cmFtZXMoKSBpbmNvbXBsZXRlbHkgZnJlZWQgdGhlIHBhZ2VzCnByZXZpb3Vz
bHkgdXNlZCBhcyBzdGF0dXMgZnJhbWUgaW4gdGhhdCB0aGV5IGRpZCBub3Qg
Z2V0IHJlbW92ZWQgZnJvbQp0aGUgZG9tYWluJ3MgeGVucGFnZV9saXN0LCB0
aHVzIGNhdXNpbmcgc3Vic2VxdWVudCBsaXN0IGNvcnJ1cHRpb24Kd2hlbiB0
aG9zZSBwYWdlcyBkaWQgZ2V0IGFsbG9jYXRlZCBhZ2FpbiBmb3IgdGhlIHNh
bWUgb3IgYW5vdGhlciBwdXJwb3NlLgoKU2ltaWxhcmx5LCBncmFudF90YWJs
ZV9jcmVhdGUoKSBhbmQgZ250dGFiX2dyb3dfdGFibGUoKSBib3RoIGltcHJv
cGVybHkKY2xlYW4gdXAgaW4gdGhlIGV2ZW50IG9mIGFuIGVycm9yIC0gcGFn
ZXMgYWxyZWFkeSBzaGFyZWQgd2l0aCB0aGUgZ3Vlc3QKY2FuJ3QgYmUgZnJl
ZWQgYnkganVzdCBwYXNzaW5nIHRoZW0gdG8gZnJlZV94ZW5oZWFwX3BhZ2Uo
KS4gRml4IHRoaXMgYnkKc2hhcmluZyB0aGUgcGFnZXMgb25seSBhZnRlciBh
bGwgYWxsb2NhdGlvbnMgc3VjY2VlZGVkLgoKVGhpcyBpcyBDVkUtMjAxMi01
NTEwIC8gWFNBLTI2LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogSWFuIENhbXBiZWxsIDxpYW4u
Y2FtcGJlbGxAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2dyYW50X3RhYmxlLmMgYi94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKaW5k
ZXggYzAxYWQwMC4uNmZiMmJlOSAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9n
cmFudF90YWJsZS5jCisrKyBiL3hlbi9jb21tb24vZ3JhbnRfdGFibGUuYwpA
QCAtMTE3MywxMiArMTE3MywxMyBAQCBmYXVsdDoKIH0KIAogc3RhdGljIGlu
dAotZ250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoc3RydWN0IGRvbWFp
biAqZCwgc3RydWN0IGdyYW50X3RhYmxlICpndCkKK2dudHRhYl9wb3B1bGF0
ZV9zdGF0dXNfZnJhbWVzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBncmFu
dF90YWJsZSAqZ3QsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1
bnNpZ25lZCBpbnQgcmVxX25yX2ZyYW1lcykKIHsKICAgICB1bnNpZ25lZCBp
OwogICAgIHVuc2lnbmVkIHJlcV9zdGF0dXNfZnJhbWVzOwogCi0gICAgcmVx
X3N0YXR1c19mcmFtZXMgPSBncmFudF90b19zdGF0dXNfZnJhbWVzKGd0LT5u
cl9ncmFudF9mcmFtZXMpOworICAgIHJlcV9zdGF0dXNfZnJhbWVzID0gZ3Jh
bnRfdG9fc3RhdHVzX2ZyYW1lcyhyZXFfbnJfZnJhbWVzKTsKICAgICBmb3Ig
KCBpID0gbnJfc3RhdHVzX2ZyYW1lcyhndCk7IGkgPCByZXFfc3RhdHVzX2Zy
YW1lczsgaSsrICkKICAgICB7CiAgICAgICAgIGlmICggKGd0LT5zdGF0dXNb
aV0gPSBhbGxvY194ZW5oZWFwX3BhZ2UoKSkgPT0gTlVMTCApCkBAIC0xMjA5
LDcgKzEyMTAsMTIgQEAgZ250dGFiX3VucG9wdWxhdGVfc3RhdHVzX2ZyYW1l
cyhzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZ3JhbnRfdGFibGUgKmd0KQog
CiAgICAgZm9yICggaSA9IDA7IGkgPCBucl9zdGF0dXNfZnJhbWVzKGd0KTsg
aSsrICkKICAgICB7Ci0gICAgICAgIHBhZ2Vfc2V0X293bmVyKHZpcnRfdG9f
cGFnZShndC0+c3RhdHVzW2ldKSwgZG9tX3hlbik7CisgICAgICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnID0gdmlydF90b19wYWdlKGd0LT5zdGF0dXNbaV0p
OworCisgICAgICAgIEJVR19PTihwYWdlX2dldF9vd25lcihwZykgIT0gZCk7
CisgICAgICAgIGlmICggdGVzdF9hbmRfY2xlYXJfYml0KF9QR0NfYWxsb2Nh
dGVkLCAmcGctPmNvdW50X2luZm8pICkKKyAgICAgICAgICAgIHB1dF9wYWdl
KHBnKTsKKyAgICAgICAgQlVHX09OKHBnLT5jb3VudF9pbmZvICYgflBHQ194
ZW5faGVhcCk7CiAgICAgICAgIGZyZWVfeGVuaGVhcF9wYWdlKGd0LT5zdGF0
dXNbaV0pOwogICAgICAgICBndC0+c3RhdHVzW2ldID0gTlVMTDsKICAgICB9
CkBAIC0xMjQ3LDE5ICsxMjUzLDE4IEBAIGdudHRhYl9ncm93X3RhYmxlKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCByZXFfbnJfZnJhbWVzKQog
ICAgICAgICBjbGVhcl9wYWdlKGd0LT5zaGFyZWRfcmF3W2ldKTsKICAgICB9
CiAKLSAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFtZXMgd2l0aCB0
aGUgcmVjaXBpZW50IGRvbWFpbiAqLwotICAgIGZvciAoIGkgPSBucl9ncmFu
dF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsrICkKLSAgICAg
ICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwgaSk7Ci0KLSAg
ICBndC0+bnJfZ3JhbnRfZnJhbWVzID0gcmVxX25yX2ZyYW1lczsKLQogICAg
IC8qIFN0YXR1cyBwYWdlcyAtIHZlcnNpb24gMiAqLwogICAgIGlmIChndC0+
Z3RfdmVyc2lvbiA+IDEpCiAgICAgewotICAgICAgICBpZiAoIGdudHRhYl9w
b3B1bGF0ZV9zdGF0dXNfZnJhbWVzKGQsIGd0KSApCisgICAgICAgIGlmICgg
Z250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoZCwgZ3QsIHJlcV9ucl9m
cmFtZXMpICkKICAgICAgICAgICAgIGdvdG8gc2hhcmVkX2FsbG9jX2ZhaWxl
ZDsKICAgICB9CiAKKyAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFt
ZXMgd2l0aCB0aGUgcmVjaXBpZW50IGRvbWFpbiAqLworICAgIGZvciAoIGkg
PSBucl9ncmFudF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsr
ICkKKyAgICAgICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwg
aSk7CisgICAgZ3QtPm5yX2dyYW50X2ZyYW1lcyA9IHJlcV9ucl9mcmFtZXM7
CisKICAgICByZXR1cm4gMTsKIAogc2hhcmVkX2FsbG9jX2ZhaWxlZDoKQEAg
LTIxNTcsNyArMjE2Miw3IEBAIGdudHRhYl9zZXRfdmVyc2lvbihYRU5fR1VF
U1RfSEFORExFKGdudHRhYl9zZXRfdmVyc2lvbl90IHVvcCkpCiAKICAgICBp
ZiAoIG9wLnZlcnNpb24gPT0gMiAmJiBndC0+Z3RfdmVyc2lvbiA8IDIgKQog
ICAgIHsKLSAgICAgICAgcmVzID0gZ250dGFiX3BvcHVsYXRlX3N0YXR1c19m
cmFtZXMoZCwgZ3QpOworICAgICAgICByZXMgPSBnbnR0YWJfcG9wdWxhdGVf
c3RhdHVzX2ZyYW1lcyhkLCBndCwgbnJfZ3JhbnRfZnJhbWVzKGd0KSk7CiAg
ICAgICAgIGlmICggcmVzIDwgMCkKICAgICAgICAgICAgIGdvdG8gb3V0X3Vu
bG9jazsKICAgICB9CkBAIC0yNjAwLDE0ICsyNjA1LDE1IEBAIGdyYW50X3Rh
YmxlX2NyZWF0ZSgKICAgICAgICAgY2xlYXJfcGFnZSh0LT5zaGFyZWRfcmF3
W2ldKTsKICAgICB9CiAgICAgCi0gICAgZm9yICggaSA9IDA7IGkgPCBJTklU
SUFMX05SX0dSQU5UX0ZSQU1FUzsgaSsrICkKLSAgICAgICAgZ250dGFiX2Ny
ZWF0ZV9zaGFyZWRfcGFnZShkLCB0LCBpKTsKLQogICAgIC8qIFN0YXR1cyBw
YWdlcyBmb3IgZ3JhbnQgdGFibGUgLSBmb3IgdmVyc2lvbiAyICovCiAgICAg
dC0+c3RhdHVzID0geHphbGxvY19hcnJheShncmFudF9zdGF0dXNfdCAqLAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ3JhbnRfdG9fc3RhdHVz
X2ZyYW1lcyhtYXhfbnJfZ3JhbnRfZnJhbWVzKSk7CiAgICAgaWYgKCB0LT5z
dGF0dXMgPT0gTlVMTCApCiAgICAgICAgIGdvdG8gbm9fbWVtXzQ7CisKKyAg
ICBmb3IgKCBpID0gMDsgaSA8IElOSVRJQUxfTlJfR1JBTlRfRlJBTUVTOyBp
KysgKQorICAgICAgICBnbnR0YWJfY3JlYXRlX3NoYXJlZF9wYWdlKGQsIHQs
IGkpOworCiAgICAgdC0+bnJfc3RhdHVzX2ZyYW1lcyA9IDA7CiAKICAgICAv
KiBPa2F5LCBpbnN0YWxsIHRoZSBzdHJ1Y3R1cmUuICovCg==

--=separator
Content-Type: application/octet-stream; name="xsa26-unstable.patch"
Content-Disposition: attachment; filename="xsa26-unstable.patch"
Content-Transfer-Encoding: base64

Z250dGFiOiBmaXggcmVsZWFzaW5nIG9mIG1lbW9yeSB1cG9uIHN3aXRjaGVz
IGJldHdlZW4gdmVyc2lvbnMKCmdudHRhYl91bnBvcHVsYXRlX3N0YXR1c19m
cmFtZXMoKSBpbmNvbXBsZXRlbHkgZnJlZWQgdGhlIHBhZ2VzCnByZXZpb3Vz
bHkgdXNlZCBhcyBzdGF0dXMgZnJhbWUgaW4gdGhhdCB0aGV5IGRpZCBub3Qg
Z2V0IHJlbW92ZWQgZnJvbQp0aGUgZG9tYWluJ3MgeGVucGFnZV9saXN0LCB0
aHVzIGNhdXNpbmcgc3Vic2VxdWVudCBsaXN0IGNvcnJ1cHRpb24Kd2hlbiB0
aG9zZSBwYWdlcyBkaWQgZ2V0IGFsbG9jYXRlZCBhZ2FpbiBmb3IgdGhlIHNh
bWUgb3IgYW5vdGhlciBwdXJwb3NlLgoKU2ltaWxhcmx5LCBncmFudF90YWJs
ZV9jcmVhdGUoKSBhbmQgZ250dGFiX2dyb3dfdGFibGUoKSBib3RoIGltcHJv
cGVybHkKY2xlYW4gdXAgaW4gdGhlIGV2ZW50IG9mIGFuIGVycm9yIC0gcGFn
ZXMgYWxyZWFkeSBzaGFyZWQgd2l0aCB0aGUgZ3Vlc3QKY2FuJ3QgYmUgZnJl
ZWQgYnkganVzdCBwYXNzaW5nIHRoZW0gdG8gZnJlZV94ZW5oZWFwX3BhZ2Uo
KS4gRml4IHRoaXMgYnkKc2hhcmluZyB0aGUgcGFnZXMgb25seSBhZnRlciBh
bGwgYWxsb2NhdGlvbnMgc3VjY2VlZGVkLgoKVGhpcyBpcyBDVkUtMjAxMi01
NTEwIC8gWFNBLTI2LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogSWFuIENhbXBiZWxsIDxpYW4u
Y2FtcGJlbGxAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2dyYW50X3RhYmxlLmMgYi94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKaW5k
ZXggNzkxMjc2OS4uZWM5ZWNmNCAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9n
cmFudF90YWJsZS5jCisrKyBiL3hlbi9jb21tb24vZ3JhbnRfdGFibGUuYwpA
QCAtMTIwOCwxMiArMTIwOCwxMyBAQCBmYXVsdDoKIH0KIAogc3RhdGljIGlu
dAotZ250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoc3RydWN0IGRvbWFp
biAqZCwgc3RydWN0IGdyYW50X3RhYmxlICpndCkKK2dudHRhYl9wb3B1bGF0
ZV9zdGF0dXNfZnJhbWVzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBncmFu
dF90YWJsZSAqZ3QsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1
bnNpZ25lZCBpbnQgcmVxX25yX2ZyYW1lcykKIHsKICAgICB1bnNpZ25lZCBp
OwogICAgIHVuc2lnbmVkIHJlcV9zdGF0dXNfZnJhbWVzOwogCi0gICAgcmVx
X3N0YXR1c19mcmFtZXMgPSBncmFudF90b19zdGF0dXNfZnJhbWVzKGd0LT5u
cl9ncmFudF9mcmFtZXMpOworICAgIHJlcV9zdGF0dXNfZnJhbWVzID0gZ3Jh
bnRfdG9fc3RhdHVzX2ZyYW1lcyhyZXFfbnJfZnJhbWVzKTsKICAgICBmb3Ig
KCBpID0gbnJfc3RhdHVzX2ZyYW1lcyhndCk7IGkgPCByZXFfc3RhdHVzX2Zy
YW1lczsgaSsrICkKICAgICB7CiAgICAgICAgIGlmICggKGd0LT5zdGF0dXNb
aV0gPSBhbGxvY194ZW5oZWFwX3BhZ2UoKSkgPT0gTlVMTCApCkBAIC0xMjQ0
LDcgKzEyNDUsMTIgQEAgZ250dGFiX3VucG9wdWxhdGVfc3RhdHVzX2ZyYW1l
cyhzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZ3JhbnRfdGFibGUgKmd0KQog
CiAgICAgZm9yICggaSA9IDA7IGkgPCBucl9zdGF0dXNfZnJhbWVzKGd0KTsg
aSsrICkKICAgICB7Ci0gICAgICAgIHBhZ2Vfc2V0X293bmVyKHZpcnRfdG9f
cGFnZShndC0+c3RhdHVzW2ldKSwgZG9tX3hlbik7CisgICAgICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnID0gdmlydF90b19wYWdlKGd0LT5zdGF0dXNbaV0p
OworCisgICAgICAgIEJVR19PTihwYWdlX2dldF9vd25lcihwZykgIT0gZCk7
CisgICAgICAgIGlmICggdGVzdF9hbmRfY2xlYXJfYml0KF9QR0NfYWxsb2Nh
dGVkLCAmcGctPmNvdW50X2luZm8pICkKKyAgICAgICAgICAgIHB1dF9wYWdl
KHBnKTsKKyAgICAgICAgQlVHX09OKHBnLT5jb3VudF9pbmZvICYgflBHQ194
ZW5faGVhcCk7CiAgICAgICAgIGZyZWVfeGVuaGVhcF9wYWdlKGd0LT5zdGF0
dXNbaV0pOwogICAgICAgICBndC0+c3RhdHVzW2ldID0gTlVMTDsKICAgICB9
CkBAIC0xMjgyLDE5ICsxMjg4LDE4IEBAIGdudHRhYl9ncm93X3RhYmxlKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCByZXFfbnJfZnJhbWVzKQog
ICAgICAgICBjbGVhcl9wYWdlKGd0LT5zaGFyZWRfcmF3W2ldKTsKICAgICB9
CiAKLSAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFtZXMgd2l0aCB0
aGUgcmVjaXBpZW50IGRvbWFpbiAqLwotICAgIGZvciAoIGkgPSBucl9ncmFu
dF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsrICkKLSAgICAg
ICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwgaSk7Ci0KLSAg
ICBndC0+bnJfZ3JhbnRfZnJhbWVzID0gcmVxX25yX2ZyYW1lczsKLQogICAg
IC8qIFN0YXR1cyBwYWdlcyAtIHZlcnNpb24gMiAqLwogICAgIGlmIChndC0+
Z3RfdmVyc2lvbiA+IDEpCiAgICAgewotICAgICAgICBpZiAoIGdudHRhYl9w
b3B1bGF0ZV9zdGF0dXNfZnJhbWVzKGQsIGd0KSApCisgICAgICAgIGlmICgg
Z250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoZCwgZ3QsIHJlcV9ucl9m
cmFtZXMpICkKICAgICAgICAgICAgIGdvdG8gc2hhcmVkX2FsbG9jX2ZhaWxl
ZDsKICAgICB9CiAKKyAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFt
ZXMgd2l0aCB0aGUgcmVjaXBpZW50IGRvbWFpbiAqLworICAgIGZvciAoIGkg
PSBucl9ncmFudF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsr
ICkKKyAgICAgICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwg
aSk7CisgICAgZ3QtPm5yX2dyYW50X2ZyYW1lcyA9IHJlcV9ucl9mcmFtZXM7
CisKICAgICByZXR1cm4gMTsKIAogc2hhcmVkX2FsbG9jX2ZhaWxlZDoKQEAg
LTIxOTIsNyArMjE5Nyw3IEBAIGdudHRhYl9zZXRfdmVyc2lvbihYRU5fR1VF
U1RfSEFORExFX1BBUkFNKGdudHRhYl9zZXRfdmVyc2lvbl90IHVvcCkpCiAK
ICAgICBpZiAoIG9wLnZlcnNpb24gPT0gMiAmJiBndC0+Z3RfdmVyc2lvbiA8
IDIgKQogICAgIHsKLSAgICAgICAgcmVzID0gZ250dGFiX3BvcHVsYXRlX3N0
YXR1c19mcmFtZXMoZCwgZ3QpOworICAgICAgICByZXMgPSBnbnR0YWJfcG9w
dWxhdGVfc3RhdHVzX2ZyYW1lcyhkLCBndCwgbnJfZ3JhbnRfZnJhbWVzKGd0
KSk7CiAgICAgICAgIGlmICggcmVzIDwgMCkKICAgICAgICAgICAgIGdvdG8g
b3V0X3VubG9jazsKICAgICB9CkBAIC0yNjI4LDE0ICsyNjMzLDE1IEBAIGdy
YW50X3RhYmxlX2NyZWF0ZSgKICAgICAgICAgY2xlYXJfcGFnZSh0LT5zaGFy
ZWRfcmF3W2ldKTsKICAgICB9CiAgICAgCi0gICAgZm9yICggaSA9IDA7IGkg
PCBJTklUSUFMX05SX0dSQU5UX0ZSQU1FUzsgaSsrICkKLSAgICAgICAgZ250
dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCB0LCBpKTsKLQogICAgIC8qIFN0
YXR1cyBwYWdlcyBmb3IgZ3JhbnQgdGFibGUgLSBmb3IgdmVyc2lvbiAyICov
CiAgICAgdC0+c3RhdHVzID0geHphbGxvY19hcnJheShncmFudF9zdGF0dXNf
dCAqLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ3JhbnRfdG9f
c3RhdHVzX2ZyYW1lcyhtYXhfbnJfZ3JhbnRfZnJhbWVzKSk7CiAgICAgaWYg
KCB0LT5zdGF0dXMgPT0gTlVMTCApCiAgICAgICAgIGdvdG8gbm9fbWVtXzQ7
CisKKyAgICBmb3IgKCBpID0gMDsgaSA8IElOSVRJQUxfTlJfR1JBTlRfRlJB
TUVTOyBpKysgKQorICAgICAgICBnbnR0YWJfY3JlYXRlX3NoYXJlZF9wYWdl
KGQsIHQsIGkpOworCiAgICAgdC0+bnJfc3RhdHVzX2ZyYW1lcyA9IDA7CiAK
ICAgICAvKiBPa2F5LCBpbnN0YWxsIHRoZSBzdHJ1Y3R1cmUuICovCg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBQ-0003FU-SR; Mon, 03 Dec 2012 17:51:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBO-0003Ez-Jp; Mon, 03 Dec 2012 17:51:55 +0000
Received: from [85.158.139.211:48146] by server-10.bemta-5.messagelabs.com id
	5E/BE-09257-9B6ECB05; Mon, 03 Dec 2012 17:51:53 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1354557111!18824868!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13587 invoked from network); 3 Dec 2012 17:51:52 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-5.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:52 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBD-0002Mr-Qx; Mon, 03 Dec 2012 17:51:43 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBC-00066G-S2; Mon, 03 Dec 2012 17:51:42 +0000
Date: Mon, 03 Dec 2012 17:51:42 +0000
Message-Id: <E1TfaBC-00066G-S2@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 26 (CVE-2012-5510) - Grant table
 version switch list corruption vulnerability
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5510 / XSA-26
                             version 3

       Grant table version switch list corruption vulnerability

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Downgrading the grant table version of a guest involves freeing its
status pages. This freeing was incomplete - the page(s) are freed back
to the allocator, but not removed from the domain's tracking
list. This would cause list corruption, eventually leading to a
hypervisor crash.

IMPACT
======

A malicious guest administrator can cause Xen to crash, leading to a
denial of service attack.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.0 onwards are vulnerable.

Versions 3.4 and earlier are not vulnerable.

MITIGATION
==========

Running only guests with trusted kernels will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa26-4.1.patch             Xen 4.1.x
xsa26-4.2.patch             Xen 4.2.x
xsa26-unstable.patch        xen-unstable


$ sha256sum xsa26*.patch
b4674ddaf9a9786d5e7e5e4f248f6095e118184df581036e0531b5db5e1d645b  xsa26-4.1.patch
a6e2ed7bae3e62d4294fdb48e8a5418b1de8e0e690f4fea4bb430d2b7cf758e6  xsa26-4.2.patch
ac2d5a82f0dba0f4213607a0e3bb9be586d90173bbadc4b402c2f19fbe4b2cf3  xsa26-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ1AAoJEIP+FMlX6CvZBHIH/jI42gGLsThzGlgkFg2aqE74
EUKIPZE4DLQNl6oTQ/fp0dfJgsQ8XHldovl4EphWK+oO0osloE2HjAY5mesOraui
IIQHRkbosbDshDcSqFDndl+xjAEk1ohlGMMpSdUImIHdFF8ZJneXdK11cqxMtCKR
27ych3lDViqy0OqxFGRZpsBE0hHqU7aiL8Orr+tI4sANnd/qVfZcdqizoTRuAJX3
KOmaq+8VwoRSeppAvVgcnGkDLyCd5udRLNEenjrFo1YkC01bVIdbD59/ZwEIC6eZ
iR7bvppV1nuq9WnbCkx+FVkNc9AuGwUZMOdePH2PwLYqIZGMBi9uqUD3Y0HHMoo=
=OtT0
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa26-4.1.patch"
Content-Disposition: attachment; filename="xsa26-4.1.patch"
Content-Transfer-Encoding: base64

Z250dGFiOiBmaXggcmVsZWFzaW5nIG9mIG1lbW9yeSB1cG9uIHN3aXRjaGVz
IGJldHdlZW4gdmVyc2lvbnMKCmdudHRhYl91bnBvcHVsYXRlX3N0YXR1c19m
cmFtZXMoKSBpbmNvbXBsZXRlbHkgZnJlZWQgdGhlIHBhZ2VzCnByZXZpb3Vz
bHkgdXNlZCBhcyBzdGF0dXMgZnJhbWUgaW4gdGhhdCB0aGV5IGRpZCBub3Qg
Z2V0IHJlbW92ZWQgZnJvbQp0aGUgZG9tYWluJ3MgeGVucGFnZV9saXN0LCB0
aHVzIGNhdXNpbmcgc3Vic2VxdWVudCBsaXN0IGNvcnJ1cHRpb24Kd2hlbiB0
aG9zZSBwYWdlcyBkaWQgZ2V0IGFsbG9jYXRlZCBhZ2FpbiBmb3IgdGhlIHNh
bWUgb3IgYW5vdGhlciBwdXJwb3NlLgoKU2ltaWxhcmx5LCBncmFudF90YWJs
ZV9jcmVhdGUoKSBhbmQgZ250dGFiX2dyb3dfdGFibGUoKSBib3RoIGltcHJv
cGVybHkKY2xlYW4gdXAgaW4gdGhlIGV2ZW50IG9mIGFuIGVycm9yIC0gcGFn
ZXMgYWxyZWFkeSBzaGFyZWQgd2l0aCB0aGUgZ3Vlc3QKY2FuJ3QgYmUgZnJl
ZWQgYnkganVzdCBwYXNzaW5nIHRoZW0gdG8gZnJlZV94ZW5oZWFwX3BhZ2Uo
KS4gRml4IHRoaXMgYnkKc2hhcmluZyB0aGUgcGFnZXMgb25seSBhZnRlciBh
bGwgYWxsb2NhdGlvbnMgc3VjY2VlZGVkLgoKVGhpcyBpcyBDVkUtMjAxMi01
NTEwIC8gWFNBLTI2LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogSWFuIENhbXBiZWxsIDxpYW4u
Y2FtcGJlbGxAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2dyYW50X3RhYmxlLmMgYi94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKaW5k
ZXggNmMwYWE2Zi4uYTE4MGFlZiAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9n
cmFudF90YWJsZS5jCisrKyBiL3hlbi9jb21tb24vZ3JhbnRfdGFibGUuYwpA
QCAtMTEyNiwxMiArMTEyNiwxMyBAQCBmYXVsdDoKIH0KIAogc3RhdGljIGlu
dAotZ250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoc3RydWN0IGRvbWFp
biAqZCwgc3RydWN0IGdyYW50X3RhYmxlICpndCkKK2dudHRhYl9wb3B1bGF0
ZV9zdGF0dXNfZnJhbWVzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBncmFu
dF90YWJsZSAqZ3QsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1
bnNpZ25lZCBpbnQgcmVxX25yX2ZyYW1lcykKIHsKICAgICB1bnNpZ25lZCBp
OwogICAgIHVuc2lnbmVkIHJlcV9zdGF0dXNfZnJhbWVzOwogCi0gICAgcmVx
X3N0YXR1c19mcmFtZXMgPSBncmFudF90b19zdGF0dXNfZnJhbWVzKGd0LT5u
cl9ncmFudF9mcmFtZXMpOworICAgIHJlcV9zdGF0dXNfZnJhbWVzID0gZ3Jh
bnRfdG9fc3RhdHVzX2ZyYW1lcyhyZXFfbnJfZnJhbWVzKTsKICAgICBmb3Ig
KCBpID0gbnJfc3RhdHVzX2ZyYW1lcyhndCk7IGkgPCByZXFfc3RhdHVzX2Zy
YW1lczsgaSsrICkKICAgICB7CiAgICAgICAgIGlmICggKGd0LT5zdGF0dXNb
aV0gPSBhbGxvY194ZW5oZWFwX3BhZ2UoKSkgPT0gTlVMTCApCkBAIC0xMTYy
LDcgKzExNjMsMTIgQEAgZ250dGFiX3VucG9wdWxhdGVfc3RhdHVzX2ZyYW1l
cyhzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZ3JhbnRfdGFibGUgKmd0KQog
CiAgICAgZm9yICggaSA9IDA7IGkgPCBucl9zdGF0dXNfZnJhbWVzKGd0KTsg
aSsrICkKICAgICB7Ci0gICAgICAgIHBhZ2Vfc2V0X293bmVyKHZpcnRfdG9f
cGFnZShndC0+c3RhdHVzW2ldKSwgZG9tX3hlbik7CisgICAgICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnID0gdmlydF90b19wYWdlKGd0LT5zdGF0dXNbaV0p
OworCisgICAgICAgIEJVR19PTihwYWdlX2dldF9vd25lcihwZykgIT0gZCk7
CisgICAgICAgIGlmICggdGVzdF9hbmRfY2xlYXJfYml0KF9QR0NfYWxsb2Nh
dGVkLCAmcGctPmNvdW50X2luZm8pICkKKyAgICAgICAgICAgIHB1dF9wYWdl
KHBnKTsKKyAgICAgICAgQlVHX09OKHBnLT5jb3VudF9pbmZvICYgflBHQ194
ZW5faGVhcCk7CiAgICAgICAgIGZyZWVfeGVuaGVhcF9wYWdlKGd0LT5zdGF0
dXNbaV0pOwogICAgICAgICBndC0+c3RhdHVzW2ldID0gTlVMTDsKICAgICB9
CkBAIC0xMjAwLDE5ICsxMjA2LDE4IEBAIGdudHRhYl9ncm93X3RhYmxlKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCByZXFfbnJfZnJhbWVzKQog
ICAgICAgICBjbGVhcl9wYWdlKGd0LT5zaGFyZWRfcmF3W2ldKTsKICAgICB9
CiAKLSAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFtZXMgd2l0aCB0
aGUgcmVjaXBpZW50IGRvbWFpbiAqLwotICAgIGZvciAoIGkgPSBucl9ncmFu
dF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsrICkKLSAgICAg
ICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwgaSk7Ci0KLSAg
ICBndC0+bnJfZ3JhbnRfZnJhbWVzID0gcmVxX25yX2ZyYW1lczsKLQogICAg
IC8qIFN0YXR1cyBwYWdlcyAtIHZlcnNpb24gMiAqLwogICAgIGlmIChndC0+
Z3RfdmVyc2lvbiA+IDEpCiAgICAgewotICAgICAgICBpZiAoIGdudHRhYl9w
b3B1bGF0ZV9zdGF0dXNfZnJhbWVzKGQsIGd0KSApCisgICAgICAgIGlmICgg
Z250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoZCwgZ3QsIHJlcV9ucl9m
cmFtZXMpICkKICAgICAgICAgICAgIGdvdG8gc2hhcmVkX2FsbG9jX2ZhaWxl
ZDsKICAgICB9CiAKKyAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFt
ZXMgd2l0aCB0aGUgcmVjaXBpZW50IGRvbWFpbiAqLworICAgIGZvciAoIGkg
PSBucl9ncmFudF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsr
ICkKKyAgICAgICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwg
aSk7CisgICAgZ3QtPm5yX2dyYW50X2ZyYW1lcyA9IHJlcV9ucl9mcmFtZXM7
CisKICAgICByZXR1cm4gMTsKIAogc2hhcmVkX2FsbG9jX2ZhaWxlZDoKQEAg
LTIxMzQsNyArMjEzOSw3IEBAIGdudHRhYl9zZXRfdmVyc2lvbihYRU5fR1VF
U1RfSEFORExFKGdudHRhYl9zZXRfdmVyc2lvbl90IHVvcCkpCiAKICAgICBp
ZiAoIG9wLnZlcnNpb24gPT0gMiAmJiBndC0+Z3RfdmVyc2lvbiA8IDIgKQog
ICAgIHsKLSAgICAgICAgcmVzID0gZ250dGFiX3BvcHVsYXRlX3N0YXR1c19m
cmFtZXMoZCwgZ3QpOworICAgICAgICByZXMgPSBnbnR0YWJfcG9wdWxhdGVf
c3RhdHVzX2ZyYW1lcyhkLCBndCwgbnJfZ3JhbnRfZnJhbWVzKGd0KSk7CiAg
ICAgICAgIGlmICggcmVzIDwgMCkKICAgICAgICAgICAgIGdvdG8gb3V0X3Vu
bG9jazsKICAgICB9CkBAIC0yNDQ5LDkgKzI0NTQsNiBAQCBncmFudF90YWJs
ZV9jcmVhdGUoCiAgICAgICAgIGNsZWFyX3BhZ2UodC0+c2hhcmVkX3Jhd1tp
XSk7CiAgICAgfQogICAgIAotICAgIGZvciAoIGkgPSAwOyBpIDwgSU5JVElB
TF9OUl9HUkFOVF9GUkFNRVM7IGkrKyApCi0gICAgICAgIGdudHRhYl9jcmVh
dGVfc2hhcmVkX3BhZ2UoZCwgdCwgaSk7Ci0KICAgICAvKiBTdGF0dXMgcGFn
ZXMgZm9yIGdyYW50IHRhYmxlIC0gZm9yIHZlcnNpb24gMiAqLwogICAgIHQt
PnN0YXR1cyA9IHhtYWxsb2NfYXJyYXkoZ3JhbnRfc3RhdHVzX3QgKiwKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGdyYW50X3RvX3N0YXR1c19m
cmFtZXMobWF4X25yX2dyYW50X2ZyYW1lcykpOwpAQCAtMjQ1OSw2ICsyNDYx
LDEwIEBAIGdyYW50X3RhYmxlX2NyZWF0ZSgKICAgICAgICAgZ290byBub19t
ZW1fNDsKICAgICBtZW1zZXQodC0+c3RhdHVzLCAwLAogICAgICAgICAgICBn
cmFudF90b19zdGF0dXNfZnJhbWVzKG1heF9ucl9ncmFudF9mcmFtZXMpICog
c2l6ZW9mKHQtPnN0YXR1c1swXSkpOworCisgICAgZm9yICggaSA9IDA7IGkg
PCBJTklUSUFMX05SX0dSQU5UX0ZSQU1FUzsgaSsrICkKKyAgICAgICAgZ250
dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCB0LCBpKTsKKwogICAgIHQtPm5y
X3N0YXR1c19mcmFtZXMgPSAwOwogCiAgICAgLyogT2theSwgaW5zdGFsbCB0
aGUgc3RydWN0dXJlLiAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa26-4.2.patch"
Content-Disposition: attachment; filename="xsa26-4.2.patch"
Content-Transfer-Encoding: base64

Z250dGFiOiBmaXggcmVsZWFzaW5nIG9mIG1lbW9yeSB1cG9uIHN3aXRjaGVz
IGJldHdlZW4gdmVyc2lvbnMKCmdudHRhYl91bnBvcHVsYXRlX3N0YXR1c19m
cmFtZXMoKSBpbmNvbXBsZXRlbHkgZnJlZWQgdGhlIHBhZ2VzCnByZXZpb3Vz
bHkgdXNlZCBhcyBzdGF0dXMgZnJhbWUgaW4gdGhhdCB0aGV5IGRpZCBub3Qg
Z2V0IHJlbW92ZWQgZnJvbQp0aGUgZG9tYWluJ3MgeGVucGFnZV9saXN0LCB0
aHVzIGNhdXNpbmcgc3Vic2VxdWVudCBsaXN0IGNvcnJ1cHRpb24Kd2hlbiB0
aG9zZSBwYWdlcyBkaWQgZ2V0IGFsbG9jYXRlZCBhZ2FpbiBmb3IgdGhlIHNh
bWUgb3IgYW5vdGhlciBwdXJwb3NlLgoKU2ltaWxhcmx5LCBncmFudF90YWJs
ZV9jcmVhdGUoKSBhbmQgZ250dGFiX2dyb3dfdGFibGUoKSBib3RoIGltcHJv
cGVybHkKY2xlYW4gdXAgaW4gdGhlIGV2ZW50IG9mIGFuIGVycm9yIC0gcGFn
ZXMgYWxyZWFkeSBzaGFyZWQgd2l0aCB0aGUgZ3Vlc3QKY2FuJ3QgYmUgZnJl
ZWQgYnkganVzdCBwYXNzaW5nIHRoZW0gdG8gZnJlZV94ZW5oZWFwX3BhZ2Uo
KS4gRml4IHRoaXMgYnkKc2hhcmluZyB0aGUgcGFnZXMgb25seSBhZnRlciBh
bGwgYWxsb2NhdGlvbnMgc3VjY2VlZGVkLgoKVGhpcyBpcyBDVkUtMjAxMi01
NTEwIC8gWFNBLTI2LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogSWFuIENhbXBiZWxsIDxpYW4u
Y2FtcGJlbGxAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2dyYW50X3RhYmxlLmMgYi94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKaW5k
ZXggYzAxYWQwMC4uNmZiMmJlOSAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9n
cmFudF90YWJsZS5jCisrKyBiL3hlbi9jb21tb24vZ3JhbnRfdGFibGUuYwpA
QCAtMTE3MywxMiArMTE3MywxMyBAQCBmYXVsdDoKIH0KIAogc3RhdGljIGlu
dAotZ250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoc3RydWN0IGRvbWFp
biAqZCwgc3RydWN0IGdyYW50X3RhYmxlICpndCkKK2dudHRhYl9wb3B1bGF0
ZV9zdGF0dXNfZnJhbWVzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBncmFu
dF90YWJsZSAqZ3QsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1
bnNpZ25lZCBpbnQgcmVxX25yX2ZyYW1lcykKIHsKICAgICB1bnNpZ25lZCBp
OwogICAgIHVuc2lnbmVkIHJlcV9zdGF0dXNfZnJhbWVzOwogCi0gICAgcmVx
X3N0YXR1c19mcmFtZXMgPSBncmFudF90b19zdGF0dXNfZnJhbWVzKGd0LT5u
cl9ncmFudF9mcmFtZXMpOworICAgIHJlcV9zdGF0dXNfZnJhbWVzID0gZ3Jh
bnRfdG9fc3RhdHVzX2ZyYW1lcyhyZXFfbnJfZnJhbWVzKTsKICAgICBmb3Ig
KCBpID0gbnJfc3RhdHVzX2ZyYW1lcyhndCk7IGkgPCByZXFfc3RhdHVzX2Zy
YW1lczsgaSsrICkKICAgICB7CiAgICAgICAgIGlmICggKGd0LT5zdGF0dXNb
aV0gPSBhbGxvY194ZW5oZWFwX3BhZ2UoKSkgPT0gTlVMTCApCkBAIC0xMjA5
LDcgKzEyMTAsMTIgQEAgZ250dGFiX3VucG9wdWxhdGVfc3RhdHVzX2ZyYW1l
cyhzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZ3JhbnRfdGFibGUgKmd0KQog
CiAgICAgZm9yICggaSA9IDA7IGkgPCBucl9zdGF0dXNfZnJhbWVzKGd0KTsg
aSsrICkKICAgICB7Ci0gICAgICAgIHBhZ2Vfc2V0X293bmVyKHZpcnRfdG9f
cGFnZShndC0+c3RhdHVzW2ldKSwgZG9tX3hlbik7CisgICAgICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnID0gdmlydF90b19wYWdlKGd0LT5zdGF0dXNbaV0p
OworCisgICAgICAgIEJVR19PTihwYWdlX2dldF9vd25lcihwZykgIT0gZCk7
CisgICAgICAgIGlmICggdGVzdF9hbmRfY2xlYXJfYml0KF9QR0NfYWxsb2Nh
dGVkLCAmcGctPmNvdW50X2luZm8pICkKKyAgICAgICAgICAgIHB1dF9wYWdl
KHBnKTsKKyAgICAgICAgQlVHX09OKHBnLT5jb3VudF9pbmZvICYgflBHQ194
ZW5faGVhcCk7CiAgICAgICAgIGZyZWVfeGVuaGVhcF9wYWdlKGd0LT5zdGF0
dXNbaV0pOwogICAgICAgICBndC0+c3RhdHVzW2ldID0gTlVMTDsKICAgICB9
CkBAIC0xMjQ3LDE5ICsxMjUzLDE4IEBAIGdudHRhYl9ncm93X3RhYmxlKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCByZXFfbnJfZnJhbWVzKQog
ICAgICAgICBjbGVhcl9wYWdlKGd0LT5zaGFyZWRfcmF3W2ldKTsKICAgICB9
CiAKLSAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFtZXMgd2l0aCB0
aGUgcmVjaXBpZW50IGRvbWFpbiAqLwotICAgIGZvciAoIGkgPSBucl9ncmFu
dF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsrICkKLSAgICAg
ICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwgaSk7Ci0KLSAg
ICBndC0+bnJfZ3JhbnRfZnJhbWVzID0gcmVxX25yX2ZyYW1lczsKLQogICAg
IC8qIFN0YXR1cyBwYWdlcyAtIHZlcnNpb24gMiAqLwogICAgIGlmIChndC0+
Z3RfdmVyc2lvbiA+IDEpCiAgICAgewotICAgICAgICBpZiAoIGdudHRhYl9w
b3B1bGF0ZV9zdGF0dXNfZnJhbWVzKGQsIGd0KSApCisgICAgICAgIGlmICgg
Z250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoZCwgZ3QsIHJlcV9ucl9m
cmFtZXMpICkKICAgICAgICAgICAgIGdvdG8gc2hhcmVkX2FsbG9jX2ZhaWxl
ZDsKICAgICB9CiAKKyAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFt
ZXMgd2l0aCB0aGUgcmVjaXBpZW50IGRvbWFpbiAqLworICAgIGZvciAoIGkg
PSBucl9ncmFudF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsr
ICkKKyAgICAgICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwg
aSk7CisgICAgZ3QtPm5yX2dyYW50X2ZyYW1lcyA9IHJlcV9ucl9mcmFtZXM7
CisKICAgICByZXR1cm4gMTsKIAogc2hhcmVkX2FsbG9jX2ZhaWxlZDoKQEAg
LTIxNTcsNyArMjE2Miw3IEBAIGdudHRhYl9zZXRfdmVyc2lvbihYRU5fR1VF
U1RfSEFORExFKGdudHRhYl9zZXRfdmVyc2lvbl90IHVvcCkpCiAKICAgICBp
ZiAoIG9wLnZlcnNpb24gPT0gMiAmJiBndC0+Z3RfdmVyc2lvbiA8IDIgKQog
ICAgIHsKLSAgICAgICAgcmVzID0gZ250dGFiX3BvcHVsYXRlX3N0YXR1c19m
cmFtZXMoZCwgZ3QpOworICAgICAgICByZXMgPSBnbnR0YWJfcG9wdWxhdGVf
c3RhdHVzX2ZyYW1lcyhkLCBndCwgbnJfZ3JhbnRfZnJhbWVzKGd0KSk7CiAg
ICAgICAgIGlmICggcmVzIDwgMCkKICAgICAgICAgICAgIGdvdG8gb3V0X3Vu
bG9jazsKICAgICB9CkBAIC0yNjAwLDE0ICsyNjA1LDE1IEBAIGdyYW50X3Rh
YmxlX2NyZWF0ZSgKICAgICAgICAgY2xlYXJfcGFnZSh0LT5zaGFyZWRfcmF3
W2ldKTsKICAgICB9CiAgICAgCi0gICAgZm9yICggaSA9IDA7IGkgPCBJTklU
SUFMX05SX0dSQU5UX0ZSQU1FUzsgaSsrICkKLSAgICAgICAgZ250dGFiX2Ny
ZWF0ZV9zaGFyZWRfcGFnZShkLCB0LCBpKTsKLQogICAgIC8qIFN0YXR1cyBw
YWdlcyBmb3IgZ3JhbnQgdGFibGUgLSBmb3IgdmVyc2lvbiAyICovCiAgICAg
dC0+c3RhdHVzID0geHphbGxvY19hcnJheShncmFudF9zdGF0dXNfdCAqLAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ3JhbnRfdG9fc3RhdHVz
X2ZyYW1lcyhtYXhfbnJfZ3JhbnRfZnJhbWVzKSk7CiAgICAgaWYgKCB0LT5z
dGF0dXMgPT0gTlVMTCApCiAgICAgICAgIGdvdG8gbm9fbWVtXzQ7CisKKyAg
ICBmb3IgKCBpID0gMDsgaSA8IElOSVRJQUxfTlJfR1JBTlRfRlJBTUVTOyBp
KysgKQorICAgICAgICBnbnR0YWJfY3JlYXRlX3NoYXJlZF9wYWdlKGQsIHQs
IGkpOworCiAgICAgdC0+bnJfc3RhdHVzX2ZyYW1lcyA9IDA7CiAKICAgICAv
KiBPa2F5LCBpbnN0YWxsIHRoZSBzdHJ1Y3R1cmUuICovCg==

--=separator
Content-Type: application/octet-stream; name="xsa26-unstable.patch"
Content-Disposition: attachment; filename="xsa26-unstable.patch"
Content-Transfer-Encoding: base64

Z250dGFiOiBmaXggcmVsZWFzaW5nIG9mIG1lbW9yeSB1cG9uIHN3aXRjaGVz
IGJldHdlZW4gdmVyc2lvbnMKCmdudHRhYl91bnBvcHVsYXRlX3N0YXR1c19m
cmFtZXMoKSBpbmNvbXBsZXRlbHkgZnJlZWQgdGhlIHBhZ2VzCnByZXZpb3Vz
bHkgdXNlZCBhcyBzdGF0dXMgZnJhbWUgaW4gdGhhdCB0aGV5IGRpZCBub3Qg
Z2V0IHJlbW92ZWQgZnJvbQp0aGUgZG9tYWluJ3MgeGVucGFnZV9saXN0LCB0
aHVzIGNhdXNpbmcgc3Vic2VxdWVudCBsaXN0IGNvcnJ1cHRpb24Kd2hlbiB0
aG9zZSBwYWdlcyBkaWQgZ2V0IGFsbG9jYXRlZCBhZ2FpbiBmb3IgdGhlIHNh
bWUgb3IgYW5vdGhlciBwdXJwb3NlLgoKU2ltaWxhcmx5LCBncmFudF90YWJs
ZV9jcmVhdGUoKSBhbmQgZ250dGFiX2dyb3dfdGFibGUoKSBib3RoIGltcHJv
cGVybHkKY2xlYW4gdXAgaW4gdGhlIGV2ZW50IG9mIGFuIGVycm9yIC0gcGFn
ZXMgYWxyZWFkeSBzaGFyZWQgd2l0aCB0aGUgZ3Vlc3QKY2FuJ3QgYmUgZnJl
ZWQgYnkganVzdCBwYXNzaW5nIHRoZW0gdG8gZnJlZV94ZW5oZWFwX3BhZ2Uo
KS4gRml4IHRoaXMgYnkKc2hhcmluZyB0aGUgcGFnZXMgb25seSBhZnRlciBh
bGwgYWxsb2NhdGlvbnMgc3VjY2VlZGVkLgoKVGhpcyBpcyBDVkUtMjAxMi01
NTEwIC8gWFNBLTI2LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogSWFuIENhbXBiZWxsIDxpYW4u
Y2FtcGJlbGxAY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2dyYW50X3RhYmxlLmMgYi94ZW4vY29tbW9uL2dyYW50X3RhYmxlLmMKaW5k
ZXggNzkxMjc2OS4uZWM5ZWNmNCAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9n
cmFudF90YWJsZS5jCisrKyBiL3hlbi9jb21tb24vZ3JhbnRfdGFibGUuYwpA
QCAtMTIwOCwxMiArMTIwOCwxMyBAQCBmYXVsdDoKIH0KIAogc3RhdGljIGlu
dAotZ250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoc3RydWN0IGRvbWFp
biAqZCwgc3RydWN0IGdyYW50X3RhYmxlICpndCkKK2dudHRhYl9wb3B1bGF0
ZV9zdGF0dXNfZnJhbWVzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVjdCBncmFu
dF90YWJsZSAqZ3QsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1
bnNpZ25lZCBpbnQgcmVxX25yX2ZyYW1lcykKIHsKICAgICB1bnNpZ25lZCBp
OwogICAgIHVuc2lnbmVkIHJlcV9zdGF0dXNfZnJhbWVzOwogCi0gICAgcmVx
X3N0YXR1c19mcmFtZXMgPSBncmFudF90b19zdGF0dXNfZnJhbWVzKGd0LT5u
cl9ncmFudF9mcmFtZXMpOworICAgIHJlcV9zdGF0dXNfZnJhbWVzID0gZ3Jh
bnRfdG9fc3RhdHVzX2ZyYW1lcyhyZXFfbnJfZnJhbWVzKTsKICAgICBmb3Ig
KCBpID0gbnJfc3RhdHVzX2ZyYW1lcyhndCk7IGkgPCByZXFfc3RhdHVzX2Zy
YW1lczsgaSsrICkKICAgICB7CiAgICAgICAgIGlmICggKGd0LT5zdGF0dXNb
aV0gPSBhbGxvY194ZW5oZWFwX3BhZ2UoKSkgPT0gTlVMTCApCkBAIC0xMjQ0
LDcgKzEyNDUsMTIgQEAgZ250dGFiX3VucG9wdWxhdGVfc3RhdHVzX2ZyYW1l
cyhzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgZ3JhbnRfdGFibGUgKmd0KQog
CiAgICAgZm9yICggaSA9IDA7IGkgPCBucl9zdGF0dXNfZnJhbWVzKGd0KTsg
aSsrICkKICAgICB7Ci0gICAgICAgIHBhZ2Vfc2V0X293bmVyKHZpcnRfdG9f
cGFnZShndC0+c3RhdHVzW2ldKSwgZG9tX3hlbik7CisgICAgICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnID0gdmlydF90b19wYWdlKGd0LT5zdGF0dXNbaV0p
OworCisgICAgICAgIEJVR19PTihwYWdlX2dldF9vd25lcihwZykgIT0gZCk7
CisgICAgICAgIGlmICggdGVzdF9hbmRfY2xlYXJfYml0KF9QR0NfYWxsb2Nh
dGVkLCAmcGctPmNvdW50X2luZm8pICkKKyAgICAgICAgICAgIHB1dF9wYWdl
KHBnKTsKKyAgICAgICAgQlVHX09OKHBnLT5jb3VudF9pbmZvICYgflBHQ194
ZW5faGVhcCk7CiAgICAgICAgIGZyZWVfeGVuaGVhcF9wYWdlKGd0LT5zdGF0
dXNbaV0pOwogICAgICAgICBndC0+c3RhdHVzW2ldID0gTlVMTDsKICAgICB9
CkBAIC0xMjgyLDE5ICsxMjg4LDE4IEBAIGdudHRhYl9ncm93X3RhYmxlKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCByZXFfbnJfZnJhbWVzKQog
ICAgICAgICBjbGVhcl9wYWdlKGd0LT5zaGFyZWRfcmF3W2ldKTsKICAgICB9
CiAKLSAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFtZXMgd2l0aCB0
aGUgcmVjaXBpZW50IGRvbWFpbiAqLwotICAgIGZvciAoIGkgPSBucl9ncmFu
dF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsrICkKLSAgICAg
ICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwgaSk7Ci0KLSAg
ICBndC0+bnJfZ3JhbnRfZnJhbWVzID0gcmVxX25yX2ZyYW1lczsKLQogICAg
IC8qIFN0YXR1cyBwYWdlcyAtIHZlcnNpb24gMiAqLwogICAgIGlmIChndC0+
Z3RfdmVyc2lvbiA+IDEpCiAgICAgewotICAgICAgICBpZiAoIGdudHRhYl9w
b3B1bGF0ZV9zdGF0dXNfZnJhbWVzKGQsIGd0KSApCisgICAgICAgIGlmICgg
Z250dGFiX3BvcHVsYXRlX3N0YXR1c19mcmFtZXMoZCwgZ3QsIHJlcV9ucl9m
cmFtZXMpICkKICAgICAgICAgICAgIGdvdG8gc2hhcmVkX2FsbG9jX2ZhaWxl
ZDsKICAgICB9CiAKKyAgICAvKiBTaGFyZSB0aGUgbmV3IHNoYXJlZCBmcmFt
ZXMgd2l0aCB0aGUgcmVjaXBpZW50IGRvbWFpbiAqLworICAgIGZvciAoIGkg
PSBucl9ncmFudF9mcmFtZXMoZ3QpOyBpIDwgcmVxX25yX2ZyYW1lczsgaSsr
ICkKKyAgICAgICAgZ250dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCBndCwg
aSk7CisgICAgZ3QtPm5yX2dyYW50X2ZyYW1lcyA9IHJlcV9ucl9mcmFtZXM7
CisKICAgICByZXR1cm4gMTsKIAogc2hhcmVkX2FsbG9jX2ZhaWxlZDoKQEAg
LTIxOTIsNyArMjE5Nyw3IEBAIGdudHRhYl9zZXRfdmVyc2lvbihYRU5fR1VF
U1RfSEFORExFX1BBUkFNKGdudHRhYl9zZXRfdmVyc2lvbl90IHVvcCkpCiAK
ICAgICBpZiAoIG9wLnZlcnNpb24gPT0gMiAmJiBndC0+Z3RfdmVyc2lvbiA8
IDIgKQogICAgIHsKLSAgICAgICAgcmVzID0gZ250dGFiX3BvcHVsYXRlX3N0
YXR1c19mcmFtZXMoZCwgZ3QpOworICAgICAgICByZXMgPSBnbnR0YWJfcG9w
dWxhdGVfc3RhdHVzX2ZyYW1lcyhkLCBndCwgbnJfZ3JhbnRfZnJhbWVzKGd0
KSk7CiAgICAgICAgIGlmICggcmVzIDwgMCkKICAgICAgICAgICAgIGdvdG8g
b3V0X3VubG9jazsKICAgICB9CkBAIC0yNjI4LDE0ICsyNjMzLDE1IEBAIGdy
YW50X3RhYmxlX2NyZWF0ZSgKICAgICAgICAgY2xlYXJfcGFnZSh0LT5zaGFy
ZWRfcmF3W2ldKTsKICAgICB9CiAgICAgCi0gICAgZm9yICggaSA9IDA7IGkg
PCBJTklUSUFMX05SX0dSQU5UX0ZSQU1FUzsgaSsrICkKLSAgICAgICAgZ250
dGFiX2NyZWF0ZV9zaGFyZWRfcGFnZShkLCB0LCBpKTsKLQogICAgIC8qIFN0
YXR1cyBwYWdlcyBmb3IgZ3JhbnQgdGFibGUgLSBmb3IgdmVyc2lvbiAyICov
CiAgICAgdC0+c3RhdHVzID0geHphbGxvY19hcnJheShncmFudF9zdGF0dXNf
dCAqLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ3JhbnRfdG9f
c3RhdHVzX2ZyYW1lcyhtYXhfbnJfZ3JhbnRfZnJhbWVzKSk7CiAgICAgaWYg
KCB0LT5zdGF0dXMgPT0gTlVMTCApCiAgICAgICAgIGdvdG8gbm9fbWVtXzQ7
CisKKyAgICBmb3IgKCBpID0gMDsgaSA8IElOSVRJQUxfTlJfR1JBTlRfRlJB
TUVTOyBpKysgKQorICAgICAgICBnbnR0YWJfY3JlYXRlX3NoYXJlZF9wYWdl
KGQsIHQsIGkpOworCiAgICAgdC0+bnJfc3RhdHVzX2ZyYW1lcyA9IDA7CiAK
ICAgICAvKiBPa2F5LCBpbnN0YWxsIHRoZSBzdHJ1Y3R1cmUuICovCg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBX-0003Jq-6Y; Mon, 03 Dec 2012 17:52:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBU-0003FH-JP; Mon, 03 Dec 2012 17:52:00 +0000
Received: from [85.158.138.51:39190] by server-7.bemta-3.messagelabs.com id
	E3/F4-01713-0C6ECB05; Mon, 03 Dec 2012 17:52:00 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354557117!32536337!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24587 invoked from network); 3 Dec 2012 17:51:58 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-11.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:58 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBG-0002NP-Sj; Mon, 03 Dec 2012 17:51:46 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBG-000688-Od; Mon, 03 Dec 2012 17:51:46 +0000
Date: Mon, 03 Dec 2012 17:51:46 +0000
Message-Id: <E1TfaBG-000688-Od@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 30 (CVE-2012-5514) - Broken error
 handling in guest_physmap_mark_populate_on_demand()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5514 / XSA-30
                              version 4

    Broken error handling in guest_physmap_mark_populate_on_demand()

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

guest_physmap_mark_populate_on_demand(), before carrying out its actual
operation, checks that the subject GFNs are not in use. If that check fails,
the code prints a message and bypasses the gfn_unlock() matching the
gfn_lock() carried out before entering the loop.

Further, the function is exposed to guests acting on their own behalf.
While we believe that this does not cause any further issues, we have
not conducted a thorough enough review to be sure.  Instead, it should
be exposed only to privileged domains.

IMPACT
======

A malicious guest administrator can cause Xen to hang.

VULNERABLE SYSTEMS
==================

All Xen versions from 3.4 onwards are vulnerable.

The vulnerability is only exposed by HVM guests.

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa30-4.1.patch             Xen 4.1.x
xsa30-4.2.patch             Xen 4.2.x
xsa30-unstable.patch        xen-unstable

$ sha256sum xsa30*.patch
586adda04271e91e42f42bb53636e2aa6fc7379e2c2c4b825e7ec6e34350669e  xsa30-4.1.patch
c410bffb90a551be30fde5ec4593c361b69e9c261878255fdb4f8447e7177418  xsa30-4.2.patch
2270eed8b89e4e28c4c79e5a284203632a7189474d6f0a6152d6cf56b287497b  xsa30-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ3AAoJEIP+FMlX6CvZjRgIAIF1cvAxVM3nE55HwvIlMWto
ldpam6YtFKAIr5XXBD6IQ0NrghJNNXyeZT4bxSdQAqyqUg9tYgkIMgYJx3kxQuVZ
uhUIyg+mL5bZ+kN1TkHTVPVF1X1D0WbRDD//3V3MV8q6Dy1OEfTaQVb7ZLaNmwv5
tmZ0+D6nrMe24UEr5RjzupBgX5iMeGdKyh87Zg/OM0CG5y8EQOaxlb9i47K/DLDh
l4lc6Jpxz1+tW9B9T/SUDiH37BABturvr1XvDsbencuNZeicLr8y1YKDgf2OyN5L
RfCjSNadtJRBV4BcyGTqdboZfnmavGqmYoDdJg3eSRZ+ls9PZ9hyEMETaRsCeOc=
=MBWJ
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa30-4.1.patch"
Content-Disposition: attachment; filename="xsa30-4.1.patch"
Content-Transfer-Encoding: base64

eGVuOiBmaXggZXJyb3IgaGFuZGxpbmcgb2YgZ3Vlc3RfcGh5c21hcF9tYXJr
X3BvcHVsYXRlX29uX2RlbWFuZCgpCgpUaGUgb25seSB1c2VyIG9mIHRoZSAi
b3V0IiBsYWJlbCBieXBhc3NlcyBhIG5lY2Vzc2FyeSB1bmxvY2ssIHRodXMK
ZW5hYmxpbmcgdGhlIGNhbGxlciB0byBsb2NrIHVwIFhlbi4KCkFsc28sIHRo
ZSBmdW5jdGlvbiB3YXMgbmV2ZXIgbWVhbnQgdG8gYmUgY2FsbGVkIGJ5IGEg
Z3Vlc3QgZm9yIGl0c2VsZiwKc28gcmF0aGVyIHRoYW4gaW5zcGVjdGluZyB0
aGUgY29kZSBwYXRocyBpbiBkZXB0aCBmb3IgcG90ZW50aWFsIG90aGVyCnBy
b2JsZW1zIHRoaXMgbWlnaHQgY2F1c2UsIGFuZCBhZGp1c3RpbmcgZS5nLiB0
aGUgbm9uLWd1ZXN0IHByaW50aygpCmluIHRoZSBhYm92ZSBlcnJvciBwYXRo
LCBqdXN0IGRpc2FsbG93IHRoZSBndWVzdCBhY2Nlc3MgdG8gaXQuCgpGaW5h
bGx5LCB0aGUgcHJpbnRrKCkgKGNvbnNpZGVyaW5nIGl0cyBwb3RlbnRpYWwg
b2Ygc3BhbW1pbmcgdGhlIGxvZywKdGhlIG1vcmUgdGhhdCBpdCdzIG5vdCB1
c2luZyBYRU5MT0dfR1VFU1QpLCBpcyBiZWluZyBjb252ZXJ0ZWQgdG8KUDJN
X0RFQlVHKCksIGFzIGRlYnVnZ2luZyBpcyB3aGF0IGl0IGFwcGFyZW50bHkg
d2FzIGFkZGVkIGZvciBpbiB0aGUKZmlyc3QgcGxhY2UuCgpUaGlzIGlzIFhT
QS0zMCAvIENWRS0yMDEyLTU1MTQuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1
bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBJYW4gQ2FtcGJl
bGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29tPgpBY2tlZC1ieTogR2Vvcmdl
IER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBldS5jaXRyaXguY29tPgpBY2tlZC1i
eTogSWFuIEphY2tzb24gPGlhbi5qYWNrc29uQGV1LmNpdHJpeC5jb20+Cgpk
aWZmIC1yIDU2MzkwNDdkNmM5ZiB4ZW4vYXJjaC94ODYvbW0vcDJtLmMKLS0t
IGEveGVuL2FyY2gveDg2L21tL3AybS5jCU1vbiBOb3YgMTkgMDk6NDM6NDgg
MjAxMiArMDEwMAorKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLmMJVGh1IE5v
diAyMiAxNzowNzozNyAyMDEyICswMDAwCkBAIC0yNDEyLDYgKzI0MTIsOSBA
QCBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25fZGVtYW5kKHN0CiAg
ICAgbWZuX3Qgb21mbjsKICAgICBpbnQgcmMgPSAwOwogCisgICAgaWYgKCAh
SVNfUFJJVl9GT1IoY3VycmVudC0+ZG9tYWluLCBkKSApCisgICAgICAgIHJl
dHVybiAtRVBFUk07CisKICAgICBpZiAoICFwYWdpbmdfbW9kZV90cmFuc2xh
dGUoZCkgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKIApAQCAtMjQzMCw4
ICsyNDMzLDcgQEAgZ3Vlc3RfcGh5c21hcF9tYXJrX3BvcHVsYXRlX29uX2Rl
bWFuZChzdAogICAgICAgICBvbWZuID0gZ2ZuX3RvX21mbl9xdWVyeShwMm0s
IGdmbiArIGksICZvdCk7CiAgICAgICAgIGlmICggcDJtX2lzX3JhbShvdCkg
KQogICAgICAgICB7Ci0gICAgICAgICAgICBwcmludGsoIiVzOiBnZm5fdG9f
bWZuIHJldHVybmVkIHR5cGUgJWQhXG4iLAotICAgICAgICAgICAgICAgICAg
IF9fZnVuY19fLCBvdCk7CisgICAgICAgICAgICBQMk1fREVCVUcoImdmbl90
b19tZm4gcmV0dXJuZWQgdHlwZSAlZCFcbiIsIG90KTsKICAgICAgICAgICAg
IHJjID0gLUVCVVNZOwogICAgICAgICAgICAgZ290byBvdXQ7CiAgICAgICAg
IH0KQEAgLTI0NTMsMTAgKzI0NTUsMTAgQEAgZ3Vlc3RfcGh5c21hcF9tYXJr
X3BvcHVsYXRlX29uX2RlbWFuZChzdAogICAgICAgICBCVUdfT04ocDJtLT5w
b2QuZW50cnlfY291bnQgPCAwKTsKICAgICB9CiAKK291dDoKICAgICBhdWRp
dF9wMm0ocDJtLCAxKTsKICAgICBwMm1fdW5sb2NrKHAybSk7CiAKLW91dDoK
ICAgICByZXR1cm4gcmM7CiB9CiAK

--=separator
Content-Type: application/octet-stream; name="xsa30-4.2.patch"
Content-Disposition: attachment; filename="xsa30-4.2.patch"
Content-Transfer-Encoding: base64

eGVuOiBmaXggZXJyb3IgaGFuZGxpbmcgb2YgZ3Vlc3RfcGh5c21hcF9tYXJr
X3BvcHVsYXRlX29uX2RlbWFuZCgpCgpUaGUgb25seSB1c2VyIG9mIHRoZSAi
b3V0IiBsYWJlbCBieXBhc3NlcyBhIG5lY2Vzc2FyeSB1bmxvY2ssIHRodXMK
ZW5hYmxpbmcgdGhlIGNhbGxlciB0byBsb2NrIHVwIFhlbi4KCkFsc28sIHRo
ZSBmdW5jdGlvbiB3YXMgbmV2ZXIgbWVhbnQgdG8gYmUgY2FsbGVkIGJ5IGEg
Z3Vlc3QgZm9yIGl0c2VsZiwKc28gcmF0aGVyIHRoYW4gaW5zcGVjdGluZyB0
aGUgY29kZSBwYXRocyBpbiBkZXB0aCBmb3IgcG90ZW50aWFsIG90aGVyCnBy
b2JsZW1zIHRoaXMgbWlnaHQgY2F1c2UsIGFuZCBhZGp1c3RpbmcgZS5nLiB0
aGUgbm9uLWd1ZXN0IHByaW50aygpCmluIHRoZSBhYm92ZSBlcnJvciBwYXRo
LCBqdXN0IGRpc2FsbG93IHRoZSBndWVzdCBhY2Nlc3MgdG8gaXQuCgpGaW5h
bGx5LCB0aGUgcHJpbnRrKCkgKGNvbnNpZGVyaW5nIGl0cyBwb3RlbnRpYWwg
b2Ygc3BhbW1pbmcgdGhlIGxvZywKdGhlIG1vcmUgdGhhdCBpdCdzIG5vdCB1
c2luZyBYRU5MT0dfR1VFU1QpLCBpcyBiZWluZyBjb252ZXJ0ZWQgdG8KUDJN
X0RFQlVHKCksIGFzIGRlYnVnZ2luZyBpcyB3aGF0IGl0IGFwcGFyZW50bHkg
d2FzIGFkZGVkIGZvciBpbiB0aGUKZmlyc3QgcGxhY2UuCgpUaGlzIGlzIFhT
QS0zMCAvIENWRS0yMDEyLTU1MTQuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1
bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBJYW4gQ2FtcGJl
bGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29tPgpBY2tlZC1ieTogR2Vvcmdl
IER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBldS5jaXRyaXguY29tPgpBY2tlZC1i
eTogSWFuIEphY2tzb24gPGlhbi5qYWNrc29uQGV1LmNpdHJpeC5jb20+Cgpk
aWZmIC1yIDdjNGQ4MDZiMzc1MyB4ZW4vYXJjaC94ODYvbW0vcDJtLXBvZC5j
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tcG9kLmMJRnJpIE5vdiAxNiAx
NTo1NjoxNCAyMDEyICswMDAwCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0t
cG9kLmMJVGh1IE5vdiAyMiAxNzowMjozMiAyMDEyICswMDAwCkBAIC0xMTE3
LDYgKzExMTcsOSBAQCBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25f
ZGVtYW5kKHN0CiAgICAgbWZuX3Qgb21mbjsKICAgICBpbnQgcmMgPSAwOwog
CisgICAgaWYgKCAhSVNfUFJJVl9GT1IoY3VycmVudC0+ZG9tYWluLCBkKSAp
CisgICAgICAgIHJldHVybiAtRVBFUk07CisKICAgICBpZiAoICFwYWdpbmdf
bW9kZV90cmFuc2xhdGUoZCkgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsK
IApAQCAtMTEzNSw4ICsxMTM4LDcgQEAgZ3Vlc3RfcGh5c21hcF9tYXJrX3Bv
cHVsYXRlX29uX2RlbWFuZChzdAogICAgICAgICBvbWZuID0gcDJtLT5nZXRf
ZW50cnkocDJtLCBnZm4gKyBpLCAmb3QsICZhLCAwLCBOVUxMKTsKICAgICAg
ICAgaWYgKCBwMm1faXNfcmFtKG90KSApCiAgICAgICAgIHsKLSAgICAgICAg
ICAgIHByaW50aygiJXM6IGdmbl90b19tZm4gcmV0dXJuZWQgdHlwZSAlZCFc
biIsCi0gICAgICAgICAgICAgICAgICAgX19mdW5jX18sIG90KTsKKyAgICAg
ICAgICAgIFAyTV9ERUJVRygiZ2ZuX3RvX21mbiByZXR1cm5lZCB0eXBlICVk
IVxuIiwgb3QpOwogICAgICAgICAgICAgcmMgPSAtRUJVU1k7CiAgICAgICAg
ICAgICBnb3RvIG91dDsKICAgICAgICAgfQpAQCAtMTE2MCw5ICsxMTYyLDkg
QEAgZ3Vlc3RfcGh5c21hcF9tYXJrX3BvcHVsYXRlX29uX2RlbWFuZChzdAog
ICAgICAgICBwb2RfdW5sb2NrKHAybSk7CiAgICAgfQogCitvdXQ6CiAgICAg
Z2ZuX3VubG9jayhwMm0sIGdmbiwgb3JkZXIpOwogCi1vdXQ6CiAgICAgcmV0
dXJuIHJjOwogfQogCg==

--=separator
Content-Type: application/octet-stream; name="xsa30-unstable.patch"
Content-Disposition: attachment; filename="xsa30-unstable.patch"
Content-Transfer-Encoding: base64

eGVuOiBmaXggZXJyb3IgaGFuZGxpbmcgb2YgZ3Vlc3RfcGh5c21hcF9tYXJr
X3BvcHVsYXRlX29uX2RlbWFuZCgpCgpUaGUgb25seSB1c2VyIG9mIHRoZSAi
b3V0IiBsYWJlbCBieXBhc3NlcyBhIG5lY2Vzc2FyeSB1bmxvY2ssIHRodXMK
ZW5hYmxpbmcgdGhlIGNhbGxlciB0byBsb2NrIHVwIFhlbi4KCkFsc28sIHRo
ZSBmdW5jdGlvbiB3YXMgbmV2ZXIgbWVhbnQgdG8gYmUgY2FsbGVkIGJ5IGEg
Z3Vlc3QgZm9yIGl0c2VsZiwKc28gcmF0aGVyIHRoYW4gaW5zcGVjdGluZyB0
aGUgY29kZSBwYXRocyBpbiBkZXB0aCBmb3IgcG90ZW50aWFsIG90aGVyCnBy
b2JsZW1zIHRoaXMgbWlnaHQgY2F1c2UsIGFuZCBhZGp1c3RpbmcgZS5nLiB0
aGUgbm9uLWd1ZXN0IHByaW50aygpCmluIHRoZSBhYm92ZSBlcnJvciBwYXRo
LCBqdXN0IGRpc2FsbG93IHRoZSBndWVzdCBhY2Nlc3MgdG8gaXQuCgpGaW5h
bGx5LCB0aGUgcHJpbnRrKCkgKGNvbnNpZGVyaW5nIGl0cyBwb3RlbnRpYWwg
b2Ygc3BhbW1pbmcgdGhlIGxvZywKdGhlIG1vcmUgdGhhdCBpdCdzIG5vdCB1
c2luZyBYRU5MT0dfR1VFU1QpLCBpcyBiZWluZyBjb252ZXJ0ZWQgdG8KUDJN
X0RFQlVHKCksIGFzIGRlYnVnZ2luZyBpcyB3aGF0IGl0IGFwcGFyZW50bHkg
d2FzIGFkZGVkIGZvciBpbiB0aGUKZmlyc3QgcGxhY2UuCgpUaGlzIGlzIFhT
QS0zMCAvIENWRS0yMDEyLTU1MTQuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1
bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBJYW4gQ2FtcGJl
bGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29tPgpBY2tlZC1ieTogR2Vvcmdl
IER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBldS5jaXRyaXguY29tPgpBY2tlZC1i
eTogSWFuIEphY2tzb24gPGlhbi5qYWNrc29uQGV1LmNpdHJpeC5jb20+Cgot
LS0gYS94ZW4vYXJjaC94ODYvbW0vcDJtLXBvZC5jCisrKyBiL3hlbi9hcmNo
L3g4Ni9tbS9wMm0tcG9kLmMKQEAgLTExMTcsNiArMTExNyw5IEBAIGd1ZXN0
X3BoeXNtYXBfbWFya19wb3B1bGF0ZV9vbl9kZW1hbmQoc3QKICAgICBtZm5f
dCBvbWZuOwogICAgIGludCByYyA9IDA7CiAKKyAgICBpZiAoICFJU19QUklW
X0ZPUihjdXJyZW50LT5kb21haW4sIGQpICkKKyAgICAgICAgcmV0dXJuIC1F
UEVSTTsKKwogICAgIGlmICggIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShkKSAp
CiAgICAgICAgIHJldHVybiAtRUlOVkFMOwogCkBAIC0xMTMxLDggKzExMzQs
NyBAQCBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25fZGVtYW5kKHN0
CiAgICAgICAgIG9tZm4gPSBwMm0tPmdldF9lbnRyeShwMm0sIGdmbiArIGks
ICZvdCwgJmEsIDAsIE5VTEwpOwogICAgICAgICBpZiAoIHAybV9pc19yYW0o
b3QpICkKICAgICAgICAgewotICAgICAgICAgICAgcHJpbnRrKCIlczogZ2Zu
X3RvX21mbiByZXR1cm5lZCB0eXBlICVkIVxuIiwKLSAgICAgICAgICAgICAg
ICAgICBfX2Z1bmNfXywgb3QpOworICAgICAgICAgICAgUDJNX0RFQlVHKCJn
Zm5fdG9fbWZuIHJldHVybmVkIHR5cGUgJWQhXG4iLCBvdCk7CiAgICAgICAg
ICAgICByYyA9IC1FQlVTWTsKICAgICAgICAgICAgIGdvdG8gb3V0OwogICAg
ICAgICB9CkBAIC0xMTU2LDkgKzExNTgsOSBAQCBndWVzdF9waHlzbWFwX21h
cmtfcG9wdWxhdGVfb25fZGVtYW5kKHN0CiAgICAgICAgIHBvZF91bmxvY2so
cDJtKTsKICAgICB9CiAKK291dDoKICAgICBnZm5fdW5sb2NrKHAybSwgZ2Zu
LCBvcmRlcik7CiAKLW91dDoKICAgICByZXR1cm4gcmM7CiB9CiAK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBU-0003Hi-NF; Mon, 03 Dec 2012 17:52:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBT-0003Ez-2H; Mon, 03 Dec 2012 17:51:59 +0000
Received: from [85.158.139.83:64222] by server-10.bemta-5.messagelabs.com id
	C0/EE-09257-EB6ECB05; Mon, 03 Dec 2012 17:51:58 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354557116!25493258!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18144 invoked from network); 3 Dec 2012 17:51:57 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-4.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:57 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBH-0002Nb-J6; Mon, 03 Dec 2012 17:51:47 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBH-00068a-7n; Mon, 03 Dec 2012 17:51:47 +0000
Date: Mon, 03 Dec 2012 17:51:47 +0000
Message-Id: <E1TfaBH-00068a-7n@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 31 (CVE-2012-5515) - Several
 memory hypercall operations allow invalid extent order values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5515 / XSA-31
                             version 3

  Several memory hypercall operations allow invalid extent order values

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Allowing arbitrary extent_order input values for XENMEM_decrease_reservation,
XENMEM_populate_physmap, and XENMEM_exchange can cause an arbitrarily long
time to be spent in loops, without giving other vital code a chance to
execute.  It may also leave inconsistent state behind upon completion of
these hypercalls.

IMPACT
======

A malicious guest administrator can cause Xen to hang.

VULNERABLE SYSTEMS
==================

All Xen versions are vulnerable.  However, older versions (not supporting
Populate-on-Demand, i.e. before 3.4) may only be theoretically affected.

MITIGATION
==========

Running only trusted guest kernels will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa31-4.1.patch             Xen 4.1.x
xsa31-4.2-unstable.patch    Xen 4.2.x, xen-unstable


$ sha256sum xsa31*.patch
8e4bb43999d1a72d7f1b6ad3e66d0c173ca711c8145c5804b025eaa63d2c1691  xsa31-4.1.patch
090d0cca3eddaee798e5f06a8d5f469d47f874c657abcd6028248d949d36da81  xsa31-4.2-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ4AAoJEIP+FMlX6CvZhCgIAIAkB8EpoFU0vwCW26toELFh
3odZ8kji4hBoIaR6vOj4BIrSuTxC+0TZl3JGSwxQ+zo2k15njNqPZM/8m5kztLzZ
K79GXhSRb6zo96EmAhxX6wU4qpBdDH7htdAsO74ApHdfw3hw9yXY2h+OkwiYTO6J
K0TegvNYoJ+9NJ4ePTgZpHp4B1H4ymtvw84uzNBJQ6ePR95lV4aOq7h1loIvMPzB
Mcxy+3LTAZasK7yYZLClyHXR46pN41qbMawKYNMp70+fQvyP58P6cExwZ4ODrbHf
dfgEg2yNeI4YXzOx2vbRSDRDAzf4lhGHq9fXhUpNF/denRJJCC9r/E0+nWTzWog=
=CUvM
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa31-4.1.patch"
Content-Disposition: attachment; filename="xsa31-4.1.patch"
Content-Transfer-Encoding: base64

bWVtb3A6IGxpbWl0IGd1ZXN0IHNwZWNpZmllZCBleHRlbnQgb3JkZXIKCkFs
bG93aW5nIHVuYm91bmRlZCBvcmRlciB2YWx1ZXMgaGVyZSBjYXVzZXMgYWxt
b3N0IHVuYm91bmRlZCBsb29wcwphbmQvb3IgcGFydGlhbGx5IGluY29tcGxl
dGUgcmVxdWVzdHMsIHBhcnRpY3VsYXJseSBpbiBQb0QgY29kZS4KClRoZSBh
ZGRlZCByYW5nZSBjaGVja3MgaW4gcG9wdWxhdGVfcGh5c21hcCgpLCBkZWNy
ZWFzZV9yZXNlcnZhdGlvbigpLAphbmQgdGhlICJpbiIgb25lIGluIG1lbW9y
eV9leGNoYW5nZSgpIGFyY2hpdGVjdHVyYWxseSBhbGwgY291bGQgdXNlClBB
RERSX0JJVFMgLSBQQUdFX1NISUZULCBhbmQgYXJlIGJlaW5nIGFydGlmaWNp
YWxseSBjb25zdHJhaW5lZCB0bwpNQVhfT1JERVIuCgpUaGlzIGlzIFhTQS0z
MSAvIENWRS0yMDEyLTU1MTUuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBUaW0gRGVlZ2FuIDx0
aW1AeGVuLm9yZz4KQWNrZWQtYnk6IElhbiBKYWNrc29uIDxpYW4uamFja3Nv
bkBldS5jaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vbWVt
b3J5LmMgYi94ZW4vY29tbW9uL21lbW9yeS5jCmluZGV4IDRlN2MyMzQuLjli
OWZiMTggMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24vbWVtb3J5LmMKKysrIGIv
eGVuL2NvbW1vbi9tZW1vcnkuYwpAQCAtMTE3LDcgKzExNyw4IEBAIHN0YXRp
YyB2b2lkIHBvcHVsYXRlX3BoeXNtYXAoc3RydWN0IG1lbW9wX2FyZ3MgKmEp
CiAKICAgICAgICAgaWYgKCBhLT5tZW1mbGFncyAmIE1FTUZfcG9wdWxhdGVf
b25fZGVtYW5kICkKICAgICAgICAgewotICAgICAgICAgICAgaWYgKCBndWVz
dF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25fZGVtYW5kKGQsIGdwZm4sCisg
ICAgICAgICAgICBpZiAoIGEtPmV4dGVudF9vcmRlciA+IE1BWF9PUkRFUiB8
fAorICAgICAgICAgICAgICAgICBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxh
dGVfb25fZGVtYW5kKGQsIGdwZm4sCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYS0+ZXh0ZW50X29y
ZGVyKSA8IDAgKQogICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAg
ICB9CkBAIC0yMTYsNyArMjE3LDggQEAgc3RhdGljIHZvaWQgZGVjcmVhc2Vf
cmVzZXJ2YXRpb24oc3RydWN0IG1lbW9wX2FyZ3MgKmEpCiAgICAgeGVuX3Bm
bl90IGdtZm47CiAKICAgICBpZiAoICFndWVzdF9oYW5kbGVfc3VicmFuZ2Vf
b2theShhLT5leHRlbnRfbGlzdCwgYS0+bnJfZG9uZSwKLSAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9leHRlbnRzLTEpICkK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9l
eHRlbnRzLTEpIHx8CisgICAgICAgICBhLT5leHRlbnRfb3JkZXIgPiBNQVhf
T1JERVIgKQogICAgICAgICByZXR1cm47CiAKICAgICBmb3IgKCBpID0gYS0+
bnJfZG9uZTsgaSA8IGEtPm5yX2V4dGVudHM7IGkrKyApCkBAIC0yNzgsNiAr
MjgwLDkgQEAgc3RhdGljIGxvbmcgbWVtb3J5X2V4Y2hhbmdlKFhFTl9HVUVT
VF9IQU5ETEUoeGVuX21lbW9yeV9leGNoYW5nZV90KSBhcmcpCiAgICAgaWYg
KCAoZXhjaC5ucl9leGNoYW5nZWQgPiBleGNoLmluLm5yX2V4dGVudHMpIHx8
CiAgICAgICAgICAvKiBJbnB1dCBhbmQgb3V0cHV0IGRvbWFpbiBpZGVudGlm
aWVycyBtYXRjaD8gKi8KICAgICAgICAgIChleGNoLmluLmRvbWlkICE9IGV4
Y2gub3V0LmRvbWlkKSB8fAorICAgICAgICAgLyogRXh0ZW50IG9yZGVycyBh
cmUgc2Vuc2libGU/ICovCisgICAgICAgICAoZXhjaC5pbi5leHRlbnRfb3Jk
ZXIgPiBNQVhfT1JERVIpIHx8CisgICAgICAgICAoZXhjaC5vdXQuZXh0ZW50
X29yZGVyID4gTUFYX09SREVSKSB8fAogICAgICAgICAgLyogU2l6ZXMgb2Yg
aW5wdXQgYW5kIG91dHB1dCBsaXN0cyBkbyBub3Qgb3ZlcmZsb3cgYSBsb25n
PyAqLwogICAgICAgICAgKCh+MFVMID4+IGV4Y2guaW4uZXh0ZW50X29yZGVy
KSA8IGV4Y2guaW4ubnJfZXh0ZW50cykgfHwKICAgICAgICAgICgofjBVTCA+
PiBleGNoLm91dC5leHRlbnRfb3JkZXIpIDwgZXhjaC5vdXQubnJfZXh0ZW50
cykgfHwK

--=separator
Content-Type: application/octet-stream; name="xsa31-4.2-unstable.patch"
Content-Disposition: attachment; filename="xsa31-4.2-unstable.patch"
Content-Transfer-Encoding: base64

bWVtb3A6IGxpbWl0IGd1ZXN0IHNwZWNpZmllZCBleHRlbnQgb3JkZXIKCkFs
bG93aW5nIHVuYm91bmRlZCBvcmRlciB2YWx1ZXMgaGVyZSBjYXVzZXMgYWxt
b3N0IHVuYm91bmRlZCBsb29wcwphbmQvb3IgcGFydGlhbGx5IGluY29tcGxl
dGUgcmVxdWVzdHMsIHBhcnRpY3VsYXJseSBpbiBQb0QgY29kZS4KClRoZSBh
ZGRlZCByYW5nZSBjaGVja3MgaW4gcG9wdWxhdGVfcGh5c21hcCgpLCBkZWNy
ZWFzZV9yZXNlcnZhdGlvbigpLAphbmQgdGhlICJpbiIgb25lIGluIG1lbW9y
eV9leGNoYW5nZSgpIGFyY2hpdGVjdHVyYWxseSBhbGwgY291bGQgdXNlClBB
RERSX0JJVFMgLSBQQUdFX1NISUZULCBhbmQgYXJlIGJlaW5nIGFydGlmaWNp
YWxseSBjb25zdHJhaW5lZCB0bwpNQVhfT1JERVIuCgpUaGlzIGlzIFhTQS0z
MSAvIENWRS0yMDEyLTU1MTUuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBUaW0gRGVlZ2FuIDx0
aW1AeGVuLm9yZz4KQWNrZWQtYnk6IElhbiBKYWNrc29uIDxpYW4uamFja3Nv
bkBldS5jaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vbWVt
b3J5LmMgYi94ZW4vY29tbW9uL21lbW9yeS5jCmluZGV4IDgzZTI2NjYuLjJl
NTZkNDYgMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24vbWVtb3J5LmMKKysrIGIv
eGVuL2NvbW1vbi9tZW1vcnkuYwpAQCAtMTE1LDcgKzExNSw4IEBAIHN0YXRp
YyB2b2lkIHBvcHVsYXRlX3BoeXNtYXAoc3RydWN0IG1lbW9wX2FyZ3MgKmEp
CiAKICAgICAgICAgaWYgKCBhLT5tZW1mbGFncyAmIE1FTUZfcG9wdWxhdGVf
b25fZGVtYW5kICkKICAgICAgICAgewotICAgICAgICAgICAgaWYgKCBndWVz
dF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25fZGVtYW5kKGQsIGdwZm4sCisg
ICAgICAgICAgICBpZiAoIGEtPmV4dGVudF9vcmRlciA+IE1BWF9PUkRFUiB8
fAorICAgICAgICAgICAgICAgICBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxh
dGVfb25fZGVtYW5kKGQsIGdwZm4sCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYS0+ZXh0ZW50X29y
ZGVyKSA8IDAgKQogICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAg
ICB9CkBAIC0yMzUsNyArMjM2LDggQEAgc3RhdGljIHZvaWQgZGVjcmVhc2Vf
cmVzZXJ2YXRpb24oc3RydWN0IG1lbW9wX2FyZ3MgKmEpCiAgICAgeGVuX3Bm
bl90IGdtZm47CiAKICAgICBpZiAoICFndWVzdF9oYW5kbGVfc3VicmFuZ2Vf
b2theShhLT5leHRlbnRfbGlzdCwgYS0+bnJfZG9uZSwKLSAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9leHRlbnRzLTEpICkK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9l
eHRlbnRzLTEpIHx8CisgICAgICAgICBhLT5leHRlbnRfb3JkZXIgPiBNQVhf
T1JERVIgKQogICAgICAgICByZXR1cm47CiAKICAgICBmb3IgKCBpID0gYS0+
bnJfZG9uZTsgaSA8IGEtPm5yX2V4dGVudHM7IGkrKyApCkBAIC0yOTcsNiAr
Mjk5LDkgQEAgc3RhdGljIGxvbmcgbWVtb3J5X2V4Y2hhbmdlKFhFTl9HVUVT
VF9IQU5ETEVfUEFSQU0oeGVuX21lbW9yeV9leGNoYW5nZV90KSBhcmcpCiAg
ICAgaWYgKCAoZXhjaC5ucl9leGNoYW5nZWQgPiBleGNoLmluLm5yX2V4dGVu
dHMpIHx8CiAgICAgICAgICAvKiBJbnB1dCBhbmQgb3V0cHV0IGRvbWFpbiBp
ZGVudGlmaWVycyBtYXRjaD8gKi8KICAgICAgICAgIChleGNoLmluLmRvbWlk
ICE9IGV4Y2gub3V0LmRvbWlkKSB8fAorICAgICAgICAgLyogRXh0ZW50IG9y
ZGVycyBhcmUgc2Vuc2libGU/ICovCisgICAgICAgICAoZXhjaC5pbi5leHRl
bnRfb3JkZXIgPiBNQVhfT1JERVIpIHx8CisgICAgICAgICAoZXhjaC5vdXQu
ZXh0ZW50X29yZGVyID4gTUFYX09SREVSKSB8fAogICAgICAgICAgLyogU2l6
ZXMgb2YgaW5wdXQgYW5kIG91dHB1dCBsaXN0cyBkbyBub3Qgb3ZlcmZsb3cg
YSBsb25nPyAqLwogICAgICAgICAgKCh+MFVMID4+IGV4Y2guaW4uZXh0ZW50
X29yZGVyKSA8IGV4Y2guaW4ubnJfZXh0ZW50cykgfHwKICAgICAgICAgICgo
fjBVTCA+PiBleGNoLm91dC5leHRlbnRfb3JkZXIpIDwgZXhjaC5vdXQubnJf
ZXh0ZW50cykgfHwK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBU-0003Hi-NF; Mon, 03 Dec 2012 17:52:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBT-0003Ez-2H; Mon, 03 Dec 2012 17:51:59 +0000
Received: from [85.158.139.83:64222] by server-10.bemta-5.messagelabs.com id
	C0/EE-09257-EB6ECB05; Mon, 03 Dec 2012 17:51:58 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354557116!25493258!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18144 invoked from network); 3 Dec 2012 17:51:57 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-4.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:57 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBH-0002Nb-J6; Mon, 03 Dec 2012 17:51:47 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBH-00068a-7n; Mon, 03 Dec 2012 17:51:47 +0000
Date: Mon, 03 Dec 2012 17:51:47 +0000
Message-Id: <E1TfaBH-00068a-7n@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 31 (CVE-2012-5515) - Several
 memory hypercall operations allow invalid extent order values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5515 / XSA-31
                             version 3

  Several memory hypercall operations allow invalid extent order values

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

Allowing arbitrary extent_order input values for XENMEM_decrease_reservation,
XENMEM_populate_physmap, and XENMEM_exchange can cause an arbitrarily long
time to be spent in loops without giving other vital code a chance to
execute.  It may also leave inconsistent state behind when these
hypercalls complete.

IMPACT
======

A malicious guest administrator can cause Xen to hang.

VULNERABLE SYSTEMS
==================

All Xen versions are vulnerable.  However, older versions (not supporting
Populate-on-Demand, i.e. before 3.4) may only be theoretically affected.

MITIGATION
==========

Running only trusted guest kernels will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa31-4.1.patch             Xen 4.1.x
xsa31-4.2-unstable.patch    Xen 4.2.x, xen-unstable


$ sha256sum xsa31*.patch
8e4bb43999d1a72d7f1b6ad3e66d0c173ca711c8145c5804b025eaa63d2c1691  xsa31-4.1.patch
090d0cca3eddaee798e5f06a8d5f469d47f874c657abcd6028248d949d36da81  xsa31-4.2-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ4AAoJEIP+FMlX6CvZhCgIAIAkB8EpoFU0vwCW26toELFh
3odZ8kji4hBoIaR6vOj4BIrSuTxC+0TZl3JGSwxQ+zo2k15njNqPZM/8m5kztLzZ
K79GXhSRb6zo96EmAhxX6wU4qpBdDH7htdAsO74ApHdfw3hw9yXY2h+OkwiYTO6J
K0TegvNYoJ+9NJ4ePTgZpHp4B1H4ymtvw84uzNBJQ6ePR95lV4aOq7h1loIvMPzB
Mcxy+3LTAZasK7yYZLClyHXR46pN41qbMawKYNMp70+fQvyP58P6cExwZ4ODrbHf
dfgEg2yNeI4YXzOx2vbRSDRDAzf4lhGHq9fXhUpNF/denRJJCC9r/E0+nWTzWog=
=CUvM
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa31-4.1.patch"
Content-Disposition: attachment; filename="xsa31-4.1.patch"
Content-Transfer-Encoding: base64

bWVtb3A6IGxpbWl0IGd1ZXN0IHNwZWNpZmllZCBleHRlbnQgb3JkZXIKCkFs
bG93aW5nIHVuYm91bmRlZCBvcmRlciB2YWx1ZXMgaGVyZSBjYXVzZXMgYWxt
b3N0IHVuYm91bmRlZCBsb29wcwphbmQvb3IgcGFydGlhbGx5IGluY29tcGxl
dGUgcmVxdWVzdHMsIHBhcnRpY3VsYXJseSBpbiBQb0QgY29kZS4KClRoZSBh
ZGRlZCByYW5nZSBjaGVja3MgaW4gcG9wdWxhdGVfcGh5c21hcCgpLCBkZWNy
ZWFzZV9yZXNlcnZhdGlvbigpLAphbmQgdGhlICJpbiIgb25lIGluIG1lbW9y
eV9leGNoYW5nZSgpIGFyY2hpdGVjdHVyYWxseSBhbGwgY291bGQgdXNlClBB
RERSX0JJVFMgLSBQQUdFX1NISUZULCBhbmQgYXJlIGJlaW5nIGFydGlmaWNp
YWxseSBjb25zdHJhaW5lZCB0bwpNQVhfT1JERVIuCgpUaGlzIGlzIFhTQS0z
MSAvIENWRS0yMDEyLTU1MTUuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBUaW0gRGVlZ2FuIDx0
aW1AeGVuLm9yZz4KQWNrZWQtYnk6IElhbiBKYWNrc29uIDxpYW4uamFja3Nv
bkBldS5jaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vbWVt
b3J5LmMgYi94ZW4vY29tbW9uL21lbW9yeS5jCmluZGV4IDRlN2MyMzQuLjli
OWZiMTggMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24vbWVtb3J5LmMKKysrIGIv
eGVuL2NvbW1vbi9tZW1vcnkuYwpAQCAtMTE3LDcgKzExNyw4IEBAIHN0YXRp
YyB2b2lkIHBvcHVsYXRlX3BoeXNtYXAoc3RydWN0IG1lbW9wX2FyZ3MgKmEp
CiAKICAgICAgICAgaWYgKCBhLT5tZW1mbGFncyAmIE1FTUZfcG9wdWxhdGVf
b25fZGVtYW5kICkKICAgICAgICAgewotICAgICAgICAgICAgaWYgKCBndWVz
dF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25fZGVtYW5kKGQsIGdwZm4sCisg
ICAgICAgICAgICBpZiAoIGEtPmV4dGVudF9vcmRlciA+IE1BWF9PUkRFUiB8
fAorICAgICAgICAgICAgICAgICBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxh
dGVfb25fZGVtYW5kKGQsIGdwZm4sCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYS0+ZXh0ZW50X29y
ZGVyKSA8IDAgKQogICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAg
ICB9CkBAIC0yMTYsNyArMjE3LDggQEAgc3RhdGljIHZvaWQgZGVjcmVhc2Vf
cmVzZXJ2YXRpb24oc3RydWN0IG1lbW9wX2FyZ3MgKmEpCiAgICAgeGVuX3Bm
bl90IGdtZm47CiAKICAgICBpZiAoICFndWVzdF9oYW5kbGVfc3VicmFuZ2Vf
b2theShhLT5leHRlbnRfbGlzdCwgYS0+bnJfZG9uZSwKLSAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9leHRlbnRzLTEpICkK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9l
eHRlbnRzLTEpIHx8CisgICAgICAgICBhLT5leHRlbnRfb3JkZXIgPiBNQVhf
T1JERVIgKQogICAgICAgICByZXR1cm47CiAKICAgICBmb3IgKCBpID0gYS0+
bnJfZG9uZTsgaSA8IGEtPm5yX2V4dGVudHM7IGkrKyApCkBAIC0yNzgsNiAr
MjgwLDkgQEAgc3RhdGljIGxvbmcgbWVtb3J5X2V4Y2hhbmdlKFhFTl9HVUVT
VF9IQU5ETEUoeGVuX21lbW9yeV9leGNoYW5nZV90KSBhcmcpCiAgICAgaWYg
KCAoZXhjaC5ucl9leGNoYW5nZWQgPiBleGNoLmluLm5yX2V4dGVudHMpIHx8
CiAgICAgICAgICAvKiBJbnB1dCBhbmQgb3V0cHV0IGRvbWFpbiBpZGVudGlm
aWVycyBtYXRjaD8gKi8KICAgICAgICAgIChleGNoLmluLmRvbWlkICE9IGV4
Y2gub3V0LmRvbWlkKSB8fAorICAgICAgICAgLyogRXh0ZW50IG9yZGVycyBh
cmUgc2Vuc2libGU/ICovCisgICAgICAgICAoZXhjaC5pbi5leHRlbnRfb3Jk
ZXIgPiBNQVhfT1JERVIpIHx8CisgICAgICAgICAoZXhjaC5vdXQuZXh0ZW50
X29yZGVyID4gTUFYX09SREVSKSB8fAogICAgICAgICAgLyogU2l6ZXMgb2Yg
aW5wdXQgYW5kIG91dHB1dCBsaXN0cyBkbyBub3Qgb3ZlcmZsb3cgYSBsb25n
PyAqLwogICAgICAgICAgKCh+MFVMID4+IGV4Y2guaW4uZXh0ZW50X29yZGVy
KSA8IGV4Y2guaW4ubnJfZXh0ZW50cykgfHwKICAgICAgICAgICgofjBVTCA+
PiBleGNoLm91dC5leHRlbnRfb3JkZXIpIDwgZXhjaC5vdXQubnJfZXh0ZW50
cykgfHwK

--=separator
Content-Type: application/octet-stream; name="xsa31-4.2-unstable.patch"
Content-Disposition: attachment; filename="xsa31-4.2-unstable.patch"
Content-Transfer-Encoding: base64

bWVtb3A6IGxpbWl0IGd1ZXN0IHNwZWNpZmllZCBleHRlbnQgb3JkZXIKCkFs
bG93aW5nIHVuYm91bmRlZCBvcmRlciB2YWx1ZXMgaGVyZSBjYXVzZXMgYWxt
b3N0IHVuYm91bmRlZCBsb29wcwphbmQvb3IgcGFydGlhbGx5IGluY29tcGxl
dGUgcmVxdWVzdHMsIHBhcnRpY3VsYXJseSBpbiBQb0QgY29kZS4KClRoZSBh
ZGRlZCByYW5nZSBjaGVja3MgaW4gcG9wdWxhdGVfcGh5c21hcCgpLCBkZWNy
ZWFzZV9yZXNlcnZhdGlvbigpLAphbmQgdGhlICJpbiIgb25lIGluIG1lbW9y
eV9leGNoYW5nZSgpIGFyY2hpdGVjdHVyYWxseSBhbGwgY291bGQgdXNlClBB
RERSX0JJVFMgLSBQQUdFX1NISUZULCBhbmQgYXJlIGJlaW5nIGFydGlmaWNp
YWxseSBjb25zdHJhaW5lZCB0bwpNQVhfT1JERVIuCgpUaGlzIGlzIFhTQS0z
MSAvIENWRS0yMDEyLTU1MTUuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGlj
aCA8amJldWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBUaW0gRGVlZ2FuIDx0
aW1AeGVuLm9yZz4KQWNrZWQtYnk6IElhbiBKYWNrc29uIDxpYW4uamFja3Nv
bkBldS5jaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vbWVt
b3J5LmMgYi94ZW4vY29tbW9uL21lbW9yeS5jCmluZGV4IDgzZTI2NjYuLjJl
NTZkNDYgMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24vbWVtb3J5LmMKKysrIGIv
eGVuL2NvbW1vbi9tZW1vcnkuYwpAQCAtMTE1LDcgKzExNSw4IEBAIHN0YXRp
YyB2b2lkIHBvcHVsYXRlX3BoeXNtYXAoc3RydWN0IG1lbW9wX2FyZ3MgKmEp
CiAKICAgICAgICAgaWYgKCBhLT5tZW1mbGFncyAmIE1FTUZfcG9wdWxhdGVf
b25fZGVtYW5kICkKICAgICAgICAgewotICAgICAgICAgICAgaWYgKCBndWVz
dF9waHlzbWFwX21hcmtfcG9wdWxhdGVfb25fZGVtYW5kKGQsIGdwZm4sCisg
ICAgICAgICAgICBpZiAoIGEtPmV4dGVudF9vcmRlciA+IE1BWF9PUkRFUiB8
fAorICAgICAgICAgICAgICAgICBndWVzdF9waHlzbWFwX21hcmtfcG9wdWxh
dGVfb25fZGVtYW5kKGQsIGdwZm4sCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYS0+ZXh0ZW50X29y
ZGVyKSA8IDAgKQogICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAg
ICB9CkBAIC0yMzUsNyArMjM2LDggQEAgc3RhdGljIHZvaWQgZGVjcmVhc2Vf
cmVzZXJ2YXRpb24oc3RydWN0IG1lbW9wX2FyZ3MgKmEpCiAgICAgeGVuX3Bm
bl90IGdtZm47CiAKICAgICBpZiAoICFndWVzdF9oYW5kbGVfc3VicmFuZ2Vf
b2theShhLT5leHRlbnRfbGlzdCwgYS0+bnJfZG9uZSwKLSAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9leHRlbnRzLTEpICkK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhLT5ucl9l
eHRlbnRzLTEpIHx8CisgICAgICAgICBhLT5leHRlbnRfb3JkZXIgPiBNQVhf
T1JERVIgKQogICAgICAgICByZXR1cm47CiAKICAgICBmb3IgKCBpID0gYS0+
bnJfZG9uZTsgaSA8IGEtPm5yX2V4dGVudHM7IGkrKyApCkBAIC0yOTcsNiAr
Mjk5LDkgQEAgc3RhdGljIGxvbmcgbWVtb3J5X2V4Y2hhbmdlKFhFTl9HVUVT
VF9IQU5ETEVfUEFSQU0oeGVuX21lbW9yeV9leGNoYW5nZV90KSBhcmcpCiAg
ICAgaWYgKCAoZXhjaC5ucl9leGNoYW5nZWQgPiBleGNoLmluLm5yX2V4dGVu
dHMpIHx8CiAgICAgICAgICAvKiBJbnB1dCBhbmQgb3V0cHV0IGRvbWFpbiBp
ZGVudGlmaWVycyBtYXRjaD8gKi8KICAgICAgICAgIChleGNoLmluLmRvbWlk
ICE9IGV4Y2gub3V0LmRvbWlkKSB8fAorICAgICAgICAgLyogRXh0ZW50IG9y
ZGVycyBhcmUgc2Vuc2libGU/ICovCisgICAgICAgICAoZXhjaC5pbi5leHRl
bnRfb3JkZXIgPiBNQVhfT1JERVIpIHx8CisgICAgICAgICAoZXhjaC5vdXQu
ZXh0ZW50X29yZGVyID4gTUFYX09SREVSKSB8fAogICAgICAgICAgLyogU2l6
ZXMgb2YgaW5wdXQgYW5kIG91dHB1dCBsaXN0cyBkbyBub3Qgb3ZlcmZsb3cg
YSBsb25nPyAqLwogICAgICAgICAgKCh+MFVMID4+IGV4Y2guaW4uZXh0ZW50
X29yZGVyKSA8IGV4Y2guaW4ubnJfZXh0ZW50cykgfHwKICAgICAgICAgICgo
fjBVTCA+PiBleGNoLm91dC5leHRlbnRfb3JkZXIpIDwgZXhjaC5vdXQubnJf
ZXh0ZW50cykgfHwK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBT-0003GS-3U; Mon, 03 Dec 2012 17:51:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBS-0003Fe-3x; Mon, 03 Dec 2012 17:51:58 +0000
Received: from [85.158.138.51:23328] by server-3.bemta-3.messagelabs.com id
	66/DA-31566-DB6ECB05; Mon, 03 Dec 2012 17:51:57 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354557115!24619203!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26654 invoked from network); 3 Dec 2012 17:51:56 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-12.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:56 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBF-0002N3-DV; Mon, 03 Dec 2012 17:51:45 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBF-000679-B7; Mon, 03 Dec 2012 17:51:45 +0000
Date: Mon, 03 Dec 2012 17:51:45 +0000
Message-Id: <E1TfaBF-000679-B7@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 28 (CVE-2012-5512) -
 HVMOP_get_mem_access crash / HVMOP_set_mem_access information leak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5512 / XSA-28
                             version 3

  HVMOP_get_mem_access crash / HVMOP_set_mem_access information leak

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The HVMOP_set_mem_access operation handler uses an input as an array index
before range checking it.

IMPACT
======

A malicious guest administrator can cause Xen to crash.  If the
out-of-bounds array access does not crash Xen, the arbitrary value read
will be used if the caller reads back the default access through the
HVMOP_get_mem_access operation, thus causing an information leak.  The
caller cannot, however, directly control the address from which to read,
since the value read in the first step is used as an array index again in
the second step.

VULNERABLE SYSTEMS
==================

Only Xen version 4.1 is vulnerable.

The vulnerability is only exposed to HVM guests.

MITIGATION
==========

Running only PV guests, or ensuring that the controlling domain of HVM
guests (e.g. dom0 or stubdom) only uses trusted code, will avoid this
vulnerability.

RESOLUTION
==========

The attached patch resolves this issue.


$ sha256sum xsa28*.patch
6282314c4ea0d76ac55473e5fc7d863e045c9f566899eb93c60e5d22f38e8319  xsa28-4.1.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ2AAoJEIP+FMlX6CvZDfEH/jKbLcOY6taduyPubvWjLqUj
5moVGJMcdTUnjEOe4TH6zcax4Ce98J5BptHjCkeIIm4A70bcdfFR7Kb8i1Pr1ZA6
jpo/fbDtn4+YVAJrMlZWhPspJU2lZSSYc+Tu3eVrX78OX4RZ/Ubb+KRGhaSkRn/a
r14VFvNBwhSmOXFXqFI0IiCRJBctyLOxF32P3lZB3PXUepxsezjrUeYKKZ6qGkSX
kdufkWYgZV4iKpb8WEwDOdWbs/hE7ru6vHCEE798T8I7BscQF+O8B+2ewVK/iCoo
AgjGkqWsKhc119lSjdud8LP3A4cXWhhuHSOlmIc+gNz91IsvG3DErzQizc0wtLk=
=GkYq
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa28-4.1.patch"
Content-Disposition: attachment; filename="xsa28-4.1.patch"
Content-Transfer-Encoding: base64

eDg2L0hWTTogcmFuZ2UgY2hlY2sgeGVuX2h2bV9zZXRfbWVtX2FjY2Vzcy5o
dm1tZW1fYWNjZXNzIGJlZm9yZSB1c2UKCk90aGVyd2lzZSBhbiBvdXQgb2Yg
Ym91bmRzIGFycmF5IGFjY2VzcyBjYW4gaGFwcGVuIGlmIGNoYW5naW5nIHRo
ZQpkZWZhdWx0IGFjY2VzcyBpcyBiZWluZyByZXF1ZXN0ZWQsIHdoaWNoIC0g
aWYgaXQgZG9lc24ndCBjcmFzaCBYZW4gLQp3b3VsZCBzdWJzZXF1ZW50bHkg
YWxsb3cgcmVhZGluZyBhcmJpdHJhcnkgbWVtb3J5IHRocm91Z2gKSFZNT1Bf
Z2V0X21lbV9hY2Nlc3MgKGFnYWluLCB1bmxlc3MgdGhhdCBvcGVyYXRpb24g
Y3Jhc2hlcyBYZW4pLgoKVGhpcyBpcyBYU0EtMjggLyBDVkUtMjAxMi01NTEy
LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpBY2tlZC1ieTogVGltIERlZWdhbiA8dGltQHhlbi5vcmc+CkFja2Vk
LWJ5OiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29tPgoK
ZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9odm0vaHZtLmMgYi94ZW4vYXJj
aC94ODYvaHZtL2h2bS5jCmluZGV4IDY2Y2Y4MDUuLjA4YjY0MTggMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9odm0vaHZtLmMKKysrIGIveGVuL2FyY2gv
eDg2L2h2bS9odm0uYwpAQCAtMzY5OSw3ICszNjk5LDcgQEAgbG9uZyBkb19o
dm1fb3AodW5zaWduZWQgbG9uZyBvcCwgWEVOX0dVRVNUX0hBTkRMRSh2b2lk
KSBhcmcpCiAgICAgICAgICAgICByZXR1cm4gcmM7CiAKICAgICAgICAgcmMg
PSAtRUlOVkFMOwotICAgICAgICBpZiAoICFpc19odm1fZG9tYWluKGQpICkK
KyAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCBhLmh2bW1lbV9h
Y2Nlc3MgPj0gQVJSQVlfU0laRShtZW1hY2Nlc3MpICkKICAgICAgICAgICAg
IGdvdG8gcGFyYW1fZmFpbDU7CiAKICAgICAgICAgcDJtID0gcDJtX2dldF9o
b3N0cDJtKGQpOwpAQCAtMzcxOSw5ICszNzE5LDYgQEAgbG9uZyBkb19odm1f
b3AodW5zaWduZWQgbG9uZyBvcCwgWEVOX0dVRVNUX0hBTkRMRSh2b2lkKSBh
cmcpCiAgICAgICAgICAgICAgKChhLmZpcnN0X3BmbiArIGEubnIgLSAxKSA+
IGRvbWFpbl9nZXRfbWF4aW11bV9ncGZuKGQpKSApCiAgICAgICAgICAgICBn
b3RvIHBhcmFtX2ZhaWw1OwogICAgICAgICAgICAgCi0gICAgICAgIGlmICgg
YS5odm1tZW1fYWNjZXNzID49IEFSUkFZX1NJWkUobWVtYWNjZXNzKSApCi0g
ICAgICAgICAgICBnb3RvIHBhcmFtX2ZhaWw1OwotCiAgICAgICAgIGZvciAo
IHBmbiA9IGEuZmlyc3RfcGZuOyBwZm4gPCBhLmZpcnN0X3BmbiArIGEubnI7
IHBmbisrICkKICAgICAgICAgewogICAgICAgICAgICAgcDJtX3R5cGVfdCB0
Owo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBW-0003Iv-1A; Mon, 03 Dec 2012 17:52:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBT-0003GY-VZ; Mon, 03 Dec 2012 17:52:00 +0000
Received: from [85.158.138.51:23427] by server-8.bemta-3.messagelabs.com id
	3F/D4-07786-EB6ECB05; Mon, 03 Dec 2012 17:51:58 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354557116!32440021!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30664 invoked from network); 3 Dec 2012 17:51:57 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-16.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:57 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBI-0002Np-9g; Mon, 03 Dec 2012 17:51:48 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBI-000691-0m; Mon, 03 Dec 2012 17:51:48 +0000
Date: Mon, 03 Dec 2012 17:51:48 +0000
Message-Id: <E1TfaBI-000691-0m@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 32 (CVE-2012-5525) - several
 hypercalls do not validate input GFNs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5525 / XSA-32
			      version 4

	     several hypercalls do not validate input GFNs

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

The function get_page_from_gfn does not validate its input GFN. An
invalid GFN passed to a hypercall which uses this function will cause
the hypervisor to read off the end of the frame table and potentially
crash.
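
For illustration only (this is not the Xen source): the pre-patch bug and the fixed pattern can be sketched as a bounds check on a caller-supplied frame number before a fixed-size table is indexed. The names `frame_table`, `page_info`, `lookup_frame`, and `FRAME_TABLE_SIZE` are hypothetical stand-ins; the real fix adds an `mfn_valid(gfn)` test in get_page_from_gfn (see the attached patches).

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for Xen's frame table: a fixed-size array of
 * per-page bookkeeping structures, indexed by frame number. */
#define FRAME_TABLE_SIZE 16

struct page_info {
    int refcount;
};

static struct page_info frame_table[FRAME_TABLE_SIZE];

/* Patched pattern: reject an out-of-range frame number before the
 * table is indexed (the analogue of the added mfn_valid(gfn) check),
 * and only then take a reference. Pre-patch code indexed the table
 * unconditionally, reading off its end for invalid GFNs. */
struct page_info *lookup_frame(unsigned long gfn)
{
    if (gfn >= FRAME_TABLE_SIZE)   /* analogue of !mfn_valid(gfn) */
        return NULL;
    frame_table[gfn].refcount++;   /* analogue of get_page() succeeding */
    return &frame_table[gfn];
}
```

Because `gfn` is unsigned, the single `>=` comparison also rejects "negative" values a guest might pass.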

IMPACT
======

A malicious guest administrator of a PV guest can cause Xen to crash.
If the out-of-bounds access does not lead to a crash, a carefully
crafted privilege escalation cannot be excluded, even though the guest
doesn't itself control the values written.

VULNERABLE SYSTEMS
==================

Only Xen 4.2 and Xen unstable are vulnerable. Xen 4.1 and earlier are
not vulnerable.

The vulnerability is exposed only to PV guests.

MITIGATION
==========

Running only trusted PV guest kernels will avoid this vulnerability.

Running only HVM guests will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa32-4.2.patch             Xen 4.2.x, xen-unstable
xsa32-unstable.patch        xen-unstable


$ sha256sum xsa32*.patch
ad25c9298b543ef7af40e9f09cae232d36efc1932804678355ab724a19e3afd9  xsa32-4.2.patch
734cff82a93f032165ef26633acb30a499cc063141c2b16fccb294703718fcb0  xsa32-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOWxAAoJEIP+FMlX6CvZ9uUH/RM5PGHxWTuFv11kAEJAaQK7
m3dB9GZvjRo/zcRTrSQX2JCumM8rwXffNR9oUHQkC3WxRPjyNRdsiI02sSRLSDAh
q2tsalK1PpFNX2DRrOezWrkBA2zR7pnGe3sCzgO3sGGpqMMoG5+u6/IcZHu86LGm
zk+e0hMHtuurz6+uB0w8TJoLge4XSTw0K3ck70vCL4ysKmyOcEWcAgDmNA+OwnQ8
duw4UGkXLrxCF1X7RbAh31lUWPSLxPvxsytja+78/9ggpQRxZkF5x6T4oABcZ7jg
vjzYkNN3MdN41RIbmZps1SECLm/SKoOvsBxfOJArf0DYgVmJloxZrLK4TyquCDk=
=oEp3
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa32-4.2.patch"
Content-Disposition: attachment; filename="xsa32-4.2.patch"
Content-Transfer-Encoding: base64

eDg2OiBnZXRfcGFnZV9mcm9tX2dmbigpIG11c3QgcmV0dXJuIE5VTEwgZm9y
IGludmFsaWQgR0ZOcwoKLi4uIGFsc28gaW4gdGhlIG5vbi10cmFuc2xhdGVk
IGNhc2UuCgpUaGlzIGlzIFhTQS0zMiAvIENWRS0yMDEyLXh4eHguCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkFj
a2VkLWJ5OiBUaW0gRGVlZ2FuIDx0aW1AeGVuLm9yZz4KCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oIGIveGVuL2luY2x1ZGUvYXNt
LXg4Ni9wMm0uaAppbmRleCA3YTdjN2ViLi5kNTY2NWI4IDEwMDY0NAotLS0g
YS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvcDJtLmgKQEAgLTQwMCw3ICs0MDAsNyBAQCBzdGF0aWMgaW5s
aW5lIHN0cnVjdCBwYWdlX2luZm8gKmdldF9wYWdlX2Zyb21fZ2ZuKAogICAg
IGlmICh0KQogICAgICAgICAqdCA9IHAybV9yYW1fcnc7CiAgICAgcGFnZSA9
IF9fbWZuX3RvX3BhZ2UoZ2ZuKTsKLSAgICByZXR1cm4gZ2V0X3BhZ2UocGFn
ZSwgZCkgPyBwYWdlIDogTlVMTDsKKyAgICByZXR1cm4gbWZuX3ZhbGlkKGdm
bikgJiYgZ2V0X3BhZ2UocGFnZSwgZCkgPyBwYWdlIDogTlVMTDsKIH0KIAog
Cg==

--=separator
Content-Type: application/octet-stream; name="xsa32-unstable.patch"
Content-Disposition: attachment; filename="xsa32-unstable.patch"
Content-Transfer-Encoding: base64

eDg2OiBnZXRfcGFnZV9mcm9tX2dmbigpIG11c3QgcmV0dXJuIE5VTEwgZm9y
IGludmFsaWQgR0ZOcwoKLi4uIGFsc28gaW4gdGhlIG5vbi10cmFuc2xhdGVk
IGNhc2UuCgpUaGlzIGlzIFhTQS0zMiAvIENWRS0yMDEyLXh4eHguCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkFj
a2VkLWJ5OiBUaW0gRGVlZ2FuIDx0aW1AeGVuLm9yZz4KCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oIGIveGVuL2luY2x1ZGUvYXNt
LXg4Ni9wMm0uaAppbmRleCAyOGJlNGU4Li45MDdhODE3IDEwMDY0NAotLS0g
YS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvcDJtLmgKQEAgLTM4NCw3ICszODQsNyBAQCBzdGF0aWMgaW5s
aW5lIHN0cnVjdCBwYWdlX2luZm8gKmdldF9wYWdlX2Zyb21fZ2ZuKAogICAg
IGlmICh0KQogICAgICAgICAqdCA9IHAybV9yYW1fcnc7CiAgICAgcGFnZSA9
IF9fbWZuX3RvX3BhZ2UoZ2ZuKTsKLSAgICByZXR1cm4gZ2V0X3BhZ2UocGFn
ZSwgZCkgPyBwYWdlIDogTlVMTDsKKyAgICByZXR1cm4gbWZuX3ZhbGlkKGdm
bikgJiYgZ2V0X3BhZ2UocGFnZSwgZCkgPyBwYWdlIDogTlVMTDsKIH0KIAog
Cg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBY-0003Kp-8g; Mon, 03 Dec 2012 17:52:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBU-0003H8-Qg; Mon, 03 Dec 2012 17:52:01 +0000
Received: from [85.158.138.51:23491] by server-11.bemta-3.messagelabs.com id
	F5/50-19361-FB6ECB05; Mon, 03 Dec 2012 17:51:59 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354557117!32614814!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21329 invoked from network); 3 Dec 2012 17:51:58 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-5.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 17:51:58 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBG-0002NF-9i; Mon, 03 Dec 2012 17:51:46 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBF-00067d-Un; Mon, 03 Dec 2012 17:51:45 +0000
Date: Mon, 03 Dec 2012 17:51:45 +0000
Message-Id: <E1TfaBF-00067d-Un@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 29 (CVE-2012-5513) -
 XENMEM_exchange may overwrite hypervisor memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5513 / XSA-29
                             version 3

           XENMEM_exchange may overwrite hypervisor memory

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The handler for XENMEM_exchange accesses guest memory without range checking
the guest provided addresses, thus allowing these accesses to include the
hypervisor reserved range.

IMPACT
======

A malicious guest administrator can cause Xen to crash.  If the access
outside the guest's address-space bounds does not lead to a crash, a
carefully crafted privilege escalation cannot be excluded, even though
the guest doesn't itself control the values written.

VULNERABLE SYSTEMS
==================

All Xen versions are vulnerable.

The vulnerability is only exposed to PV guests.

MITIGATION
==========

Running only HVM guests, or ensuring that PV guests only use trusted kernels,
will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa29-4.1.patch             Xen 4.1.x
xsa29-4.2-unstable.patch    Xen 4.2.x, xen-unstable


$ sha256sum xsa29*.patch
7246a5534bc1e6a47bb6a860f6eb61c8353ad8b46209310783e823b4f7e2eae8  xsa29-4.1.patch
54dcd3ac5c84903bfb04f8591107a74c27b079815f2c6843212e05f776873c73  xsa29-4.2-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ3AAoJEIP+FMlX6CvZ7u8IAM01+jNn5fwdGmoo/LIdH885
nWr5aSc+qMqVuSvla0KKh1SOLFaVWFgovLN1Sfu2hAxLgrK3HxN86RqHU/vLo0k0
KTFM+9xQlxhJNQzyQSiDryH/qSrHTQI6ERxUEYgfjtTieK8y30SZqkd6jBmwoir/
nAMMP8oFmVevM2WfYEWjNNsWPaiUlUYP13qxiWGPcGzhcNNKRwcmrIY4N+F6kHID
Ipl4l5vhoeSaQ0fKkcJKHa+3QGd+706jHZ5VTCwPdWBCnBJLFuMWbc2UlyIg2EB9
N+3Olwf3jCF0zIzBJkomA+FAg+D7kw31DCjc+y1PdGIyuoMkk+JRwYFVkZcKLi4=
=pD8C
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa29-4.1.patch"
Content-Disposition: attachment; filename="xsa29-4.1.patch"
Content-Transfer-Encoding: base64

eGVuOiBhZGQgbWlzc2luZyBndWVzdCBhZGRyZXNzIHJhbmdlIGNoZWNrcyB0
byBYRU5NRU1fZXhjaGFuZ2UgaGFuZGxlcnMKCkV2ZXIgc2luY2UgaXRzIGV4
aXN0ZW5jZSAoMy4wLjMgaWlyYykgdGhlIGhhbmRsZXIgZm9yIHRoaXMgaGFz
IGJlZW4KdXNpbmcgbm9uIGFkZHJlc3MgcmFuZ2UgY2hlY2tpbmcgZ3Vlc3Qg
bWVtb3J5IGFjY2Vzc29ycyAoaS5lLgp0aGUgb25lcyBwcmVmaXhlZCB3aXRo
IHR3byB1bmRlcnNjb3Jlcykgd2l0aG91dCBmaXJzdCByYW5nZQpjaGVja2lu
ZyB0aGUgYWNjZXNzZWQgc3BhY2UgKHZpYSBndWVzdF9oYW5kbGVfb2theSgp
KSwgYWxsb3dpbmcKYSBndWVzdCB0byBhY2Nlc3MgYW5kIG92ZXJ3cml0ZSBo
eXBlcnZpc29yIG1lbW9yeS4KClRoaXMgaXMgWFNBLTI5IC8gQ1ZFLTIwMTIt
NTUxMy4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IElhbiBDYW1wYmVsbCA8aWFuLmNhbXBiZWxs
QGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmphY2tz
b25AZXUuY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL2Nv
bXBhdC9tZW1vcnkuYyBiL3hlbi9jb21tb24vY29tcGF0L21lbW9yeS5jCmlu
ZGV4IDI0MDI5ODQuLjFkODc3ZmMgMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24v
Y29tcGF0L21lbW9yeS5jCisrKyBiL3hlbi9jb21tb24vY29tcGF0L21lbW9y
eS5jCkBAIC0xMTQsNiArMTE0LDEyIEBAIGludCBjb21wYXRfbWVtb3J5X29w
KHVuc2lnbmVkIGludCBjbWQsIFhFTl9HVUVTVF9IQU5ETEUodm9pZCkgY29t
cGF0KQogICAgICAgICAgICAgICAgICAgKGNtcC54Y2hnLm91dC5ucl9leHRl
bnRzIDw8IGNtcC54Y2hnLm91dC5leHRlbnRfb3JkZXIpKSApCiAgICAgICAg
ICAgICAgICAgcmV0dXJuIC1FSU5WQUw7CiAKKyAgICAgICAgICAgIGlmICgg
IWNvbXBhdF9oYW5kbGVfb2theShjbXAueGNoZy5pbi5leHRlbnRfc3RhcnQs
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY21wLnhj
aGcuaW4ubnJfZXh0ZW50cykgfHwKKyAgICAgICAgICAgICAgICAgIWNvbXBh
dF9oYW5kbGVfb2theShjbXAueGNoZy5vdXQuZXh0ZW50X3N0YXJ0LAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNtcC54Y2hnLm91
dC5ucl9leHRlbnRzKSApCisgICAgICAgICAgICAgICAgcmV0dXJuIC1FRkFV
TFQ7CisKICAgICAgICAgICAgIHN0YXJ0X2V4dGVudCA9IGNtcC54Y2hnLm5y
X2V4Y2hhbmdlZDsKICAgICAgICAgICAgIGVuZF9leHRlbnQgPSAoQ09NUEFU
X0FSR19YTEFUX1NJWkUgLSBzaXplb2YoKm5hdC54Y2hnKSkgLwogICAgICAg
ICAgICAgICAgICAgICAgICAgICgoKDFVIDw8IEFCUyhvcmRlcl9kZWx0YSkp
ICsgMSkgKgpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9tZW1vcnkuYyBiL3hl
bi9jb21tb24vbWVtb3J5LmMKaW5kZXggNGU3YzIzNC4uNTkzNzlkMyAxMDA2
NDQKLS0tIGEveGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4vY29tbW9u
L21lbW9yeS5jCkBAIC0yODksNiArMjg5LDEzIEBAIHN0YXRpYyBsb25nIG1l
bW9yeV9leGNoYW5nZShYRU5fR1VFU1RfSEFORExFKHhlbl9tZW1vcnlfZXhj
aGFuZ2VfdCkgYXJnKQogICAgICAgICBnb3RvIGZhaWxfZWFybHk7CiAgICAg
fQogCisgICAgaWYgKCAhZ3Vlc3RfaGFuZGxlX29rYXkoZXhjaC5pbi5leHRl
bnRfc3RhcnQsIGV4Y2guaW4ubnJfZXh0ZW50cykgfHwKKyAgICAgICAgICFn
dWVzdF9oYW5kbGVfb2theShleGNoLm91dC5leHRlbnRfc3RhcnQsIGV4Y2gu
b3V0Lm5yX2V4dGVudHMpICkKKyAgICB7CisgICAgICAgIHJjID0gLUVGQVVM
VDsKKyAgICAgICAgZ290byBmYWlsX2Vhcmx5OworICAgIH0KKwogICAgIC8q
IE9ubHkgcHJpdmlsZWdlZCBndWVzdHMgY2FuIGFsbG9jYXRlIG11bHRpLXBh
Z2UgY29udGlndW91cyBleHRlbnRzLiAqLwogICAgIGlmICggIW11bHRpcGFn
ZV9hbGxvY2F0aW9uX3Blcm1pdHRlZChjdXJyZW50LT5kb21haW4sCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGV4Y2guaW4u
ZXh0ZW50X29yZGVyKSB8fAo=

--=separator
Content-Type: application/octet-stream; name="xsa29-4.2-unstable.patch"
Content-Disposition: attachment; filename="xsa29-4.2-unstable.patch"
Content-Transfer-Encoding: base64

eGVuOiBhZGQgbWlzc2luZyBndWVzdCBhZGRyZXNzIHJhbmdlIGNoZWNrcyB0
byBYRU5NRU1fZXhjaGFuZ2UgaGFuZGxlcnMKCkV2ZXIgc2luY2UgaXRzIGV4
aXN0ZW5jZSAoMy4wLjMgaWlyYykgdGhlIGhhbmRsZXIgZm9yIHRoaXMgaGFz
IGJlZW4KdXNpbmcgbm9uIGFkZHJlc3MgcmFuZ2UgY2hlY2tpbmcgZ3Vlc3Qg
bWVtb3J5IGFjY2Vzc29ycyAoaS5lLgp0aGUgb25lcyBwcmVmaXhlZCB3aXRo
IHR3byB1bmRlcnNjb3Jlcykgd2l0aG91dCBmaXJzdCByYW5nZQpjaGVja2lu
ZyB0aGUgYWNjZXNzZWQgc3BhY2UgKHZpYSBndWVzdF9oYW5kbGVfb2theSgp
KSwgYWxsb3dpbmcKYSBndWVzdCB0byBhY2Nlc3MgYW5kIG92ZXJ3cml0ZSBo
eXBlcnZpc29yIG1lbW9yeS4KClRoaXMgaXMgWFNBLTI5IC8gQ1ZFLTIwMTIt
NTUxMy4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IElhbiBDYW1wYmVsbCA8aWFuLmNhbXBiZWxs
QGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmphY2tz
b25AZXUuY2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL2Nv
bXBhdC9tZW1vcnkuYyBiL3hlbi9jb21tb24vY29tcGF0L21lbW9yeS5jCmlu
ZGV4IDk5NjE1MWMuLmE0OWY1MWIgMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24v
Y29tcGF0L21lbW9yeS5jCisrKyBiL3hlbi9jb21tb24vY29tcGF0L21lbW9y
eS5jCkBAIC0xMTUsNiArMTE1LDEyIEBAIGludCBjb21wYXRfbWVtb3J5X29w
KHVuc2lnbmVkIGludCBjbWQsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9p
ZCkgY29tcGF0KQogICAgICAgICAgICAgICAgICAgKGNtcC54Y2hnLm91dC5u
cl9leHRlbnRzIDw8IGNtcC54Y2hnLm91dC5leHRlbnRfb3JkZXIpKSApCiAg
ICAgICAgICAgICAgICAgcmV0dXJuIC1FSU5WQUw7CiAKKyAgICAgICAgICAg
IGlmICggIWNvbXBhdF9oYW5kbGVfb2theShjbXAueGNoZy5pbi5leHRlbnRf
c3RhcnQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Y21wLnhjaGcuaW4ubnJfZXh0ZW50cykgfHwKKyAgICAgICAgICAgICAgICAg
IWNvbXBhdF9oYW5kbGVfb2theShjbXAueGNoZy5vdXQuZXh0ZW50X3N0YXJ0
LAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNtcC54
Y2hnLm91dC5ucl9leHRlbnRzKSApCisgICAgICAgICAgICAgICAgcmV0dXJu
IC1FRkFVTFQ7CisKICAgICAgICAgICAgIHN0YXJ0X2V4dGVudCA9IGNtcC54
Y2hnLm5yX2V4Y2hhbmdlZDsKICAgICAgICAgICAgIGVuZF9leHRlbnQgPSAo
Q09NUEFUX0FSR19YTEFUX1NJWkUgLSBzaXplb2YoKm5hdC54Y2hnKSkgLwog
ICAgICAgICAgICAgICAgICAgICAgICAgICgoKDFVIDw8IEFCUyhvcmRlcl9k
ZWx0YSkpICsgMSkgKgpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9tZW1vcnku
YyBiL3hlbi9jb21tb24vbWVtb3J5LmMKaW5kZXggODNlMjY2Ni4uYmRiNmVk
OCAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4v
Y29tbW9uL21lbW9yeS5jCkBAIC0zMDgsNiArMzA4LDEzIEBAIHN0YXRpYyBs
b25nIG1lbW9yeV9leGNoYW5nZShYRU5fR1VFU1RfSEFORExFX1BBUkFNKHhl
bl9tZW1vcnlfZXhjaGFuZ2VfdCkgYXJnKQogICAgICAgICBnb3RvIGZhaWxf
ZWFybHk7CiAgICAgfQogCisgICAgaWYgKCAhZ3Vlc3RfaGFuZGxlX29rYXko
ZXhjaC5pbi5leHRlbnRfc3RhcnQsIGV4Y2guaW4ubnJfZXh0ZW50cykgfHwK
KyAgICAgICAgICFndWVzdF9oYW5kbGVfb2theShleGNoLm91dC5leHRlbnRf
c3RhcnQsIGV4Y2gub3V0Lm5yX2V4dGVudHMpICkKKyAgICB7CisgICAgICAg
IHJjID0gLUVGQVVMVDsKKyAgICAgICAgZ290byBmYWlsX2Vhcmx5OworICAg
IH0KKwogICAgIC8qIE9ubHkgcHJpdmlsZWdlZCBndWVzdHMgY2FuIGFsbG9j
YXRlIG11bHRpLXBhZ2UgY29udGlndW91cyBleHRlbnRzLiAqLwogICAgIGlm
ICggIW11bHRpcGFnZV9hbGxvY2F0aW9uX3Blcm1pdHRlZChjdXJyZW50LT5k
b21haW4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGV4Y2guaW4uZXh0ZW50X29yZGVyKSB8fAo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


Y2hnLm91dC5ucl9leHRlbnRzKSApCisgICAgICAgICAgICAgICAgcmV0dXJu
IC1FRkFVTFQ7CisKICAgICAgICAgICAgIHN0YXJ0X2V4dGVudCA9IGNtcC54
Y2hnLm5yX2V4Y2hhbmdlZDsKICAgICAgICAgICAgIGVuZF9leHRlbnQgPSAo
Q09NUEFUX0FSR19YTEFUX1NJWkUgLSBzaXplb2YoKm5hdC54Y2hnKSkgLwog
ICAgICAgICAgICAgICAgICAgICAgICAgICgoKDFVIDw8IEFCUyhvcmRlcl9k
ZWx0YSkpICsgMSkgKgpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9tZW1vcnku
YyBiL3hlbi9jb21tb24vbWVtb3J5LmMKaW5kZXggODNlMjY2Ni4uYmRiNmVk
OCAxMDA2NDQKLS0tIGEveGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4v
Y29tbW9uL21lbW9yeS5jCkBAIC0zMDgsNiArMzA4LDEzIEBAIHN0YXRpYyBs
b25nIG1lbW9yeV9leGNoYW5nZShYRU5fR1VFU1RfSEFORExFX1BBUkFNKHhl
bl9tZW1vcnlfZXhjaGFuZ2VfdCkgYXJnKQogICAgICAgICBnb3RvIGZhaWxf
ZWFybHk7CiAgICAgfQogCisgICAgaWYgKCAhZ3Vlc3RfaGFuZGxlX29rYXko
ZXhjaC5pbi5leHRlbnRfc3RhcnQsIGV4Y2guaW4ubnJfZXh0ZW50cykgfHwK
KyAgICAgICAgICFndWVzdF9oYW5kbGVfb2theShleGNoLm91dC5leHRlbnRf
c3RhcnQsIGV4Y2gub3V0Lm5yX2V4dGVudHMpICkKKyAgICB7CisgICAgICAg
IHJjID0gLUVGQVVMVDsKKyAgICAgICAgZ290byBmYWlsX2Vhcmx5OworICAg
IH0KKwogICAgIC8qIE9ubHkgcHJpdmlsZWdlZCBndWVzdHMgY2FuIGFsbG9j
YXRlIG11bHRpLXBhZ2UgY29udGlndW91cyBleHRlbnRzLiAqLwogICAgIGlm
ICggIW11bHRpcGFnZV9hbGxvY2F0aW9uX3Blcm1pdHRlZChjdXJyZW50LT5k
b21haW4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGV4Y2guaW4uZXh0ZW50X29yZGVyKSB8fAo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 17:52:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:52:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaBx-0003fB-LZ; Mon, 03 Dec 2012 17:52:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TfaBw-0003de-76
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:52:28 +0000
Received: from [85.158.143.35:46459] by server-1.bemta-4.messagelabs.com id
	18/D3-27934-BD6ECB05; Mon, 03 Dec 2012 17:52:27 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354557144!16056750!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21647 invoked from network); 3 Dec 2012 17:52:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:52:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216227090"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:52:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 12:52:23 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TfaBr-0006QJ-As;
	Mon, 03 Dec 2012 17:52:23 +0000
Message-ID: <1354557148.18784.21.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 3 Dec 2012 17:52:28 +0000
In-Reply-To: <50BCF0F502000078000AD53F@nat28.tlf.novell.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
> >>> On 03.12.12 at 17:29, Wei Liu <Wei.Liu2@citrix.com> wrote:
> > Regarding Jan's comment in [0], I don't think allowing the user to
> > specify an arbitrary number of levels is a good idea. Only the last
> > level should be shared among vcpus; the other levels should live in
> > the percpu struct to allow for quicker lookup. Letting the user
> > specify the level count would be too complicated to implement and
> > would blow up the percpu section (since its size grows
> > exponentially). Three levels should be quite enough. See maths below.
> 
> I didn't ask to implement more than three levels, I just asked for
> the interface to establish the number of levels a guest wants to
> use to allow for higher numbers (passing of which would result in
> -EINVAL in your implementation).
> 

Ah, I understand now. How about something like this:

struct EVTCHNOP_reg_nlevel {
    int levels;
    void *level_specified_reg_struct;
};
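
A minimal sketch of how the hypervisor side of such an op might look, along the lines Jan asks for: accept any level count in the interface, but reject counts the implementation does not support with -EINVAL. The names `evtchnop_reg_nlevel`, `reg_nlevel` and `MAX_IMPLEMENTED_LEVELS` are illustrative assumptions, not the actual Xen API:

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical limit: three levels are implemented today, but the
 * interface itself does not bake that number in. */
#define MAX_IMPLEMENTED_LEVELS 3

struct evtchnop_reg_nlevel {
    int levels;                /* number of levels the guest wants */
    void *level_specific_reg;  /* per-level registration data */
};

/* Validate the requested level count before doing any setup; counts
 * beyond what is implemented fail with -EINVAL rather than being
 * unrepresentable in the ABI. */
static int reg_nlevel(const struct evtchnop_reg_nlevel *op)
{
    if (op == NULL || op->levels < 1 || op->levels > MAX_IMPLEMENTED_LEVELS)
        return -EINVAL;
    /* ... per-level registration would go here ... */
    return 0;
}
```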

> > Number of event channels:
> >  * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
> >  * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 512k
> > Basically the third level is a new ABI, so I choose to use unsigned long
> > long here to get more event channels.
> 
> Please don't: This would make things less consistent to handle
> at least in the guest side code. And I don't see why you would
> have a need to do so anyway (or else your argument above
> against further levels would become questionable).
> 

It was suggested by Ian to use unsigned long long. Ian, why do you
prefer unsigned long long to unsigned long?

> > Pages occupied by the third level (if PAGE_SIZE=4k):
> >  * 32bit: 64k  / 8 / 4k = 2
> >  * 64bit: 512k / 8 / 4k = 16
> > 
> > Making second level percpu will incur overhead. In fact we move the
> > array in shared info into percpu struct:
> >  * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 byte
> >  * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 byte
> > 
> > What concerns me is that the struct evtchn buckets are allocated all at
> > once during the initialization phase. To save memory inside Xen, the
> > internal allocation/free scheme for evtchn needs to be modified. Ian
> > suggested we allocate a small number of buckets at start of day, then
> > dynamically fault in more as required.
> > 
> > To sum up:
> >      1. Guest should allocate pages for third level evtchn.
> >      2. Guest should register third level pages via a new hypercall op.
> 
> Doesn't the guest also need to set up space for the 2nd level?
> 

Yes. That will be embedded in the percpu struct vcpu_info, which will
also be registered via the same hypercall op.


Wei.

> Jan
> 
> >      3. Hypervisor should setup third level evtchn in that hypercall op.
> >      4. Only last level (third in this case) should be shared among
> >         vcpus.
> >      5. Need a flexible allocation/free scheme of struct evtchn.
> >      6. Debug dumping should use snapshot to avoid holding event lock
> >         for too long. (Jan's concern in [0])
> > 
> > Any comments are welcomed.
> > 
> > 
> > Wei.
> > 
> > [0] http://thread.gmane.org/gmane.comp.emulators.xen.devel/139921 
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:54:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaEC-0004zX-Jl; Mon, 03 Dec 2012 17:54:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfaEB-0004yT-AM
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:54:47 +0000
Received: from [85.158.139.83:17298] by server-4.bemta-5.messagelabs.com id
	59/7C-15011-667ECB05; Mon, 03 Dec 2012 17:54:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354557284!27661010!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30575 invoked from network); 3 Dec 2012 17:54:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:54:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16130378"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:54:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 3 Dec 2012 17:54:44 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TfaE7-0005QD-MK	for xen-devel@lists.xen.org;
	Mon, 03 Dec 2012 17:54:43 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TfaE7-0003Vj-IC	for
	xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:54:43 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20668.59235.466325.65434@mariner.uk.xensource.com>
Date: Mon, 3 Dec 2012 17:54:43 +0000
To: xen-devel@lists.xen.org
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: [Xen-devel] Uncontrolled disclosure of advisories XSA-26 to XSA-32
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We just sent the message below to the security advisory predisclosure
list, relating to the release of XSA-26 to XSA-32.  As you will see,
these have now been publicly released.

We'll have a proper conversation about this in a week or two.

Thanks for your attention,
Ian.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

We regret to announce that a member of the predisclosure list
discovered today that they had failed to abort their disclosure
process in response to the embargo extension for XSA-26 to XSA-32.
The information in XSA-26 to XSA-32 has been publicly available since
at least Friday the 30th of November.

They reported the situation to us.  Under the circumstances we must
regard the embargo as at an end.  All members of the predisclosure
list are advised to publish and deploy their updates for XSA-26 to
XSA-32 inclusive as soon as they are able to do so.

Updated versions of XSA-26 to XSA-32, stating that they are now
public, will be sent out shortly - both to the predisclosure list and
to the public lists, according to the usual process.

As usual when we have had difficulties with the process the Xen.org
security team will conduct a full post mortem.  The post mortem will
consider the decision to extend the embargo, as well as the decision
now to regard the embargo as over.  As before, to allow members of the
community to concentrate entirely on patching their systems right now,
we will delay starting that conversation until at least Thursday the
13th of December.

Xen.org security team
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOEHAAoJEIP+FMlX6CvZpGUIAKx0W9bSoUiywC7B3WXhcvfO
Zl+7D60p8w6FjZRD/YU04r4AYblg1nKGI6zlROXtbjj8UyFCtHglYPAnNfJKmV4C
nyKHtg8iuiNV6zPYlEoU7rLAu4QwN/dFRmMOFAQr2Qilxu7D12e8vM1jP79c5lU6
w0ujSnJZxnrVTn/sZiOS1SgHsy7MVAyglOYFl4tT+LYbuxUl/G4QpccpM4ilJ7CC
ELXQtfyQcvEzXQuWB9fTUS+0d+1ilx8ASXhnnHZtT+juxp/s6AXqCJZBbCbTWZDQ
9T0qrur96marKTK15XilPQN3XgoCQrZgLccndDpmIq9HBTx3tSLyrB9EbTF+5WY=
=Dd4h
-----END PGP SIGNATURE-----

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:54:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaEJ-00053g-28; Mon, 03 Dec 2012 17:54:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TfaEI-00052Z-89
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 17:54:54 +0000
Received: from [193.109.254.147:20136] by server-6.bemta-14.messagelabs.com id
	B1/38-02788-D67ECB05; Mon, 03 Dec 2012 17:54:53 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354557284!8791900!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	MIME_QP_LONG_LINE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22797 invoked from network); 3 Dec 2012 17:54:44 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:54:44 -0000
Received: by mail-wi0-f179.google.com with SMTP id o1so1327582wic.6
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 09:54:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=tjRUCYngbuWiiEa4TStoOJsz0o/neV/xwRvvr/AR+UM=;
	b=RqYB8G45x6T+NAkdTFdRH+pF0W/1SW6o98w03O50lL5317Q3rjHV9UAvzonvtwjjqt
	HdcIkuxdpNHVkuk2GNencAycQsoNz1YQIw5aGIMrFIb/PMJ685dUOpE4HRhN0YWLQ0AP
	Uolil8W+HhegPVBcrU/hJxnzblGmnxkVW0uM0kj4Z6jenYn52X98tjFCDwnu1TreT/RO
	Ueiag9h7myXAxzoZqPCJ3SBiUjmiiEeDlk0q0QW6khdXvYGYprIj22eCqhBRUl0738WM
	9VzGdFnt8yKq197hHdvHoHLYLkK29zYj74ISFYtn6k9dE0e8BSeuJO88R5pr/AHB+PHh
	/RSg==
Received: by 10.216.143.169 with SMTP id l41mr3954590wej.146.1354557284457;
	Mon, 03 Dec 2012 09:54:44 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id dw4sm13540831wib.1.2012.12.03.09.54.42
	(version=SSLv3 cipher=OTHER); Mon, 03 Dec 2012 09:54:43 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Mon, 03 Dec 2012 17:54:39 +0000
From: Keir Fraser <keir@xen.org>
To: maheen butt <maheen_butt26@yahoo.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <CCE297DF.54A0A%keir@xen.org>
Thread-Topic: [Xen-devel] set_current in Xen booting sequence
Thread-Index: Ac3Rf0OrJ7ymVPxWRUq4sJFei4QwQA==
In-Reply-To: <1354520163.12932.YahooMailNeo@web120002.mail.ne1.yahoo.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] set_current in Xen booting sequence
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4356708209104267251=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--===============4356708209104267251==
Content-type: multipart/alternative;
	boundary="B_3437402082_30241066"

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3437402082_30241066
Content-type: text/plain;
	charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

It's a dummy or poison value to catch invalid use of current before it
is properly set up.

 -- Keir
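
The poison-pointer idiom Keir describes can be sketched in plain C. This is an illustrative assumption of the pattern, not Xen's actual implementation; `INVALID_VCPU`, `current_vcpu` and `current_is_valid` are made-up names (Xen simply uses the literal `0xfffff000`):

```c
#include <stdbool.h>
#include <stddef.h>

struct vcpu;  /* opaque here */

/* A dummy address expected not to hold a valid vcpu; dereferencing it
 * before a real vcpu is installed should fault loudly instead of
 * silently reading stale or NULL memory. */
#define INVALID_VCPU ((struct vcpu *)0xfffff000UL)

static struct vcpu *current_vcpu = INVALID_VCPU;

static void set_current(struct vcpu *v)
{
    current_vcpu = v;
}

/* Early code can cheaply check whether current has been set up yet. */
static bool current_is_valid(void)
{
    return current_vcpu != INVALID_VCPU && current_vcpu != NULL;
}
```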

On 03/12/2012 07:36, "maheen butt" <maheen_butt26@yahoo.com> wrote:

> Hi all,
> 
> I'm investigating Xen bootup sequence and I'm stuck with the function
> set_current((struct vcpu *)0xfffff000);
> the high level idea is that it is assigning VCPU to physical CPU but I can't
> understand the following
> 
> 1) what is the need of this hard coded address?  it is seemed that vcpu exist
> on this address (but no idea who put vcpu on that particular address)?
> (the same address is passed both in case of ARM and X86)
> 2) no idea about what exactly get_cpu_info() (called by set_current) is
> doing..  (logically doing & and or with sp register)?
> 
> Thanks
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
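Keir's answer and the get_cpu_info() question can be illustrated with a small sketch. This is not the actual Xen source (the real code lives in the per-architecture current.h headers): STACK_SIZE, the struct layout, and the function names below are assumptions made for the example.

```c
/* Sketch only, not Xen code.  Assumption: struct cpu_info lives at the
 * top of each per-CPU hypervisor stack, and STACK_SIZE is a power of
 * two, so the stack base can be recovered by masking the stack pointer. */
#include <stdint.h>

#define STACK_SIZE (8u * 4096u)   /* hypothetical stack size */

struct vcpu;                      /* opaque here */

struct cpu_info {
    struct vcpu *current_vcpu;    /* what `current` resolves to */
    /* ... saved registers, per-CPU scratch ... */
};

/* The "AND and OR with the sp register": AND clears the low bits of sp
 * to find the stack base; adding STACK_SIZE - sizeof(struct cpu_info)
 * steps to the cpu_info sitting at the top of that stack. */
static inline struct cpu_info *get_cpu_info_from(uintptr_t sp)
{
    uintptr_t base = sp & ~(uintptr_t)(STACK_SIZE - 1);
    return (struct cpu_info *)(base + STACK_SIZE - sizeof(struct cpu_info));
}

static inline void set_current_from(uintptr_t sp, struct vcpu *v)
{
    /* Early in boot this is called with 0xfffff000: no vcpu lives
     * there, so any premature use of `current` faults immediately. */
    get_cpu_info_from(sp)->current_vcpu = v;
}
```

In other words, nothing puts a vcpu at 0xfffff000; the value is deliberately invalid so that dereferencing current before a real vcpu is installed crashes loudly rather than silently corrupting state.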




--B_3437402082_30241066--




--===============4356708209104267251==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4356708209104267251==--




From xen-devel-bounces@lists.xen.org Mon Dec 03 17:56:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaFJ-0005Y5-HY; Mon, 03 Dec 2012 17:55:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1TfaFH-0005X8-OS
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 17:55:55 +0000
Received: from [85.158.137.99:33570] by server-11.bemta-3.messagelabs.com id
	1A/C4-19361-BA7ECB05; Mon, 03 Dec 2012 17:55:55 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354557353!16857561!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23916 invoked from network); 3 Dec 2012 17:55:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:55:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16130400"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:55:53 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Mon, 3 Dec 2012
	17:55:53 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 3 Dec 2012 17:55:52 +0000
Thread-Topic: [Xen-devel] [PATCH 0 of 5 RFC] blktap3: Introduce xenio daemon
	(coordinate blkfront with tapdisk).
Thread-Index: Ac3OQ/9KXYKNDjAiQ6OBDFGJgx6xNADOrJBw
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE012D554A56E8@LONPMAILBOX01.citrite.net>
References: <patchbomb.1354112410@makatos-desktop>
	<1354201968.6269.14.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354201968.6269.14.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 0 of 5 RFC] blktap3: Introduce xenio daemon
 (coordinate blkfront with tapdisk).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Wed, 2012-11-28 at 14:20 +0000, Thanos Makatos wrote:
> > This patch series introduces the xenio daemon.
> 
> Can we call it something less generic?

Sure, though I don't really have a good alternative. Conceptually this
daemon is blkback's user-space counterpart, but naming it blkback would,
I believe, cause a lot of confusion. What about "tapback"? (Just a
suggestion.)

> 
> > This daemon is responsible for coordinating blkfront and tapdisk,
> > when blkfront changes state. It does so by monitoring XenStore for
> > blkfront state changes. The daemon creates/destroys the shared ring,
> > instructs the tapdisk to connect to/disconnect from it, and manages
> > the state of the back-end.
> 
> So xenio creates the ring and manages xenstore? How does tapdisk get
> access to the ring?

Exactly: xenio creates the ring and manages xenstore. Tapdisk accesses
the shared ring via a static library that will be introduced by another
patch series -- the series I'm currently working on. I'll update this
series' description to clarify this.
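The coordination described above can be sketched as a pure decision function. This is a hypothetical illustration, not code from the blktap3 series; all identifiers except the XenbusState values (which match Xen's public io/xenbus.h) are invented for the example, and which states trigger a connect is itself an assumption.

```c
/* Hypothetical sketch of the daemon's reaction to blkfront state
 * changes; none of these names come from the real blktap3 code. */

/* XenbusState values, as in Xen's public/io/xenbus.h */
enum xenbus_state {
    XENBUS_STATE_INITIALISING = 1,
    XENBUS_STATE_INIT_WAIT    = 2,
    XENBUS_STATE_INITIALISED  = 3,
    XENBUS_STATE_CONNECTED    = 4,
    XENBUS_STATE_CLOSING      = 5,
    XENBUS_STATE_CLOSED       = 6,
};

enum backend_action {
    ACT_NOTHING,
    ACT_CONNECT,     /* create/map the shared ring, tell tapdisk to attach */
    ACT_DISCONNECT,  /* tell tapdisk to detach, destroy the ring */
};

/* Given the state blkfront just wrote to xenstore, decide what the
 * daemon should do next. */
static enum backend_action on_frontend_state(enum xenbus_state s)
{
    switch (s) {
    case XENBUS_STATE_INITIALISED:
    case XENBUS_STATE_CONNECTED:
        return ACT_CONNECT;
    case XENBUS_STATE_CLOSING:
    case XENBUS_STATE_CLOSED:
        return ACT_DISCONNECT;
    default:
        return ACT_NOTHING;
    }
}
```

A real daemon would sit in a xenstore watch loop, feed each observed frontend state through a function like this, and then update the backend's own state node accordingly.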

> 
> >
> > This series requires the RFC series described in
> > http://lists.xen.org/archives/html/xen-devel/2012-11/msg00875.html
> > in order to compile.
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:57:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaGy-0005wv-2d; Mon, 03 Dec 2012 17:57:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfaGw-0005wb-BV
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:57:38 +0000
Received: from [85.158.139.83:30403] by server-11.bemta-5.messagelabs.com id
	7F/89-03409-118ECB05; Mon, 03 Dec 2012 17:57:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1354557457!28206298!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27492 invoked from network); 3 Dec 2012 17:57:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:57:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16130427"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:57:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	17:57:36 +0000
Message-ID: <1354557455.2693.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Date: Mon, 3 Dec 2012 17:57:35 +0000
In-Reply-To: <1354557148.18784.21.camel@iceland>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 17:52 +0000, Wei Liu wrote:
> On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
> > >>> On 03.12.12 at 17:29, Wei Liu <Wei.Liu2@citrix.com> wrote:
> > > Regarding Jan's comment in [0], I don't think allowing the user to
> > > specify an arbitrary number of levels is a good idea. Only the last
> > > level should be shared among vcpus; the other levels should live in
> > > per-CPU structs to allow quicker lookup. Letting the user specify the
> > > number of levels would be too complicated to implement and would blow
> > > up the per-CPU section (since the size grows exponentially). Three
> > > levels should be quite enough; see the maths below.
> > 
> > I didn't ask to implement more than three levels, I just asked for
> > the interface to establish the number of levels a guest wants to
> > use to allow for higher numbers (passing of which would result in
> > -EINVAL in your implementation).
> > 
> 
> Ah, I understand now. How about something like this:
> 
> struct EVTCHNOP_reg_nlevel {
>     int levels;
>     void *level_specified_reg_struct;
> };
> 
> > > Number of event channels:
> > >  * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
> > >  * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 512k
> > > Basically the third level is a new ABI, so I choose to use unsigned long
> > > long here to get more event channels.
> > 
> > Please don't: This would make things less consistent to handle
> > at least in the guest side code. And I don't see why you would
> > have a need to do so anyway (or else your argument above
> > against further levels would become questionable).
> > 
> 
> It was suggested by Ian to use unsigned long long. Ian, why do you
> prefer unsigned long long to unsigned long?

I thought having 32 and 64 bit be the same might simplify some things,
but if not then that's fine.

Is 32k event channels going to be enough in the long run? I suppose any
system capable of running such a number of guests ought to be using 64
bit == 512k which should at least last a bit longer.

> > > Pages occupied by the third level (if PAGE_SIZE=4k):
> > >  * 32bit: 64k  / 8 / 4k = 2
> > >  * 64bit: 512k / 8 / 4k = 16
> > > 
> > > Making second level percpu will incur overhead. In fact we move the
> > > array in shared info into percpu struct:
> > >  * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 byte
> > >  * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 byte
> > > 
> > > What concerns me is that the struct evtchn buckets are all allocated at
> > > once during the initialization phase. To save memory inside Xen, the
> > > internal allocation/free scheme for evtchn needs to be modified. Ian
> > > suggested we allocate a small number of buckets at start of day and then
> > > dynamically fault in more as required.
> > > 
> > > To sum up:
> > >      1. Guest should allocate pages for third level evtchn.
> > >      2. Guest should register third level pages via a new hypercall op.
> > 
> > Doesn't the guest also need to set up space for the 2nd level?
> > 
> 
> Yes. That will be embedded in the percpu struct vcpu_info, which will
> also be registered via the same hypercall op.

NB that there is already a vcpu info placement hypercall. I have no
problem making this be a prerequisite for this work.

Ian.
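The 32-bit figures in the maths quoted above (1024 third-level words of unsigned long long giving 64k event channels in 2 pages) can be checked mechanically; the helper names below are ours, not a proposed interface.

```c
#define BITS_PER_BYTE 8u
#define PAGE_SIZE_4K  4096u

/* Bits (one per event channel) in a bitmap of `words` words of
 * `word_bytes` bytes each. */
static unsigned long bitmap_bits(unsigned long words, unsigned long word_bytes)
{
    return words * word_bytes * BITS_PER_BYTE;
}

/* 4k pages needed to hold that bitmap. */
static unsigned long bitmap_pages(unsigned long bits)
{
    return bits / BITS_PER_BYTE / PAGE_SIZE_4K;
}

/* bitmap_bits(1024, 8) == 65536 (64k); bitmap_pages of that == 2.
 * The per-vCPU second level costs sizeof(long) * 8 words of
 * sizeof(long) bytes each: 4*8*4 = 128 bytes on 32-bit and
 * 8*8*8 = 512 bytes on 64-bit, as quoted in the thread. */
```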


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 17:59:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 17:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaIh-0006IA-KZ; Mon, 03 Dec 2012 17:59:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robert.phillips@citrix.com>) id 1TfaIg-0006Hs-An
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 17:59:26 +0000
Received: from [85.158.139.211:48579] by server-2.bemta-5.messagelabs.com id
	3F/71-04892-D78ECB05; Mon, 03 Dec 2012 17:59:25 +0000
X-Env-Sender: robert.phillips@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354557563!18893283!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26346 invoked from network); 3 Dec 2012 17:59:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 17:59:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="216228117"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 17:59:18 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.209]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi;
	Mon, 3 Dec 2012 12:59:18 -0500
From: Robert Phillips <robert.phillips@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>, Kouya Shimura <kouya@jp.fujitsu.com>
Date: Mon, 3 Dec 2012 12:59:27 -0500
Thread-Topic: [Xen-devel] [PATCH] x86/hap: fix race condition between
	ENABLE_LOGDIRTY and track_dirty_vram hypercall
Thread-Index: Ac3OR+qSABBO901aTB+ByaSwXsBj5ADM5xyw
Message-ID: <048EAD622912254A9DEA24C1734613C18C87CC821E@FTLPMAILBOX02.citrite.net>
References: <50B7087D.20407@jp.fujitsu.com>
	<20121129154041.GD80627@ocelot.phlegethon.org>
In-Reply-To: <20121129154041.GD80627@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/hap: fix race condition between
 ENABLE_LOGDIRTY and track_dirty_vram hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Tim,

> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, November 29, 2012 10:41 AM
> To: Kouya Shimura
> Cc: Robert Phillips; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH] x86/hap: fix race condition between
> ENABLE_LOGDIRTY and track_dirty_vram hypercall
> 
> At 16:02 +0900 on 29 Nov (1354204941), Kouya Shimura wrote:
> > I'm not sure why paging_lock() is used partially in hap_XXX_vram_tracking
> > functions. Thus, this patch introduces a new lock.
> > It would be better to use paging_lock() instead of the new lock
> > since shadow paging mode (not HAP mode) uses paging_lock to avoid
> > this race condition.
> 
> I think you're right - it would be better to use the paging_lock.
> 
> Cc'ing Robert Phillips, who's got a big patch outstanding that touches
> the locking in this code.  I think the right thing to do is make sure
> his patch fixes the issue and then backport just the locking parts of it
> to older trees.
> 
> Robert, in your patch you do wrap this all in the paging_lock, but then
> unlock to call various enable and disable routines.  Is there a version
> of this race condition there, where some other CPU might call
> LOG_DIRTY_ENABLE while you've temporarily dropped the lock?

My proposed patch does not modify the problematic locking code so, 
unfortunately, it preserves the race condition that Kouya Shimura 
has discovered.  

I question whether his proposed patch would be suitable for the 
multiple frame buffer situation that my proposed patch addresses.
It is possible that a guest might be updating its frame buffers when 
live migration starts, and the same race would result.

I think the domain.arch.paging.log_dirty function pointers are problematic:
they are modified and executed without the benefit of locking.

I am uncomfortable with adding another lock.

I will look at updating my patch to avoid the race and will (hopefully) 
avoid adding another lock.
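
[The race under discussion is the classic check-then-act window that opens
when a lock is dropped between the check and the action. A schematic in
plain pthreads (illustrative only, not Xen code; the variable names are
made up to mirror the log-dirty/VRAM-tracking discussion):]

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int log_dirty_enabled;
static int vram_tracking;

/* Racy shape: the state is checked under the lock, but acted on after
 * the lock is dropped, so another CPU can flip log_dirty_enabled in
 * between. */
static void enable_vram_tracking_racy(void)
{
    pthread_mutex_lock(&lock);
    int enabled = log_dirty_enabled;
    pthread_mutex_unlock(&lock);
    /* window: state may have changed here */
    if (!enabled)
        vram_tracking = 1;
}

/* Safe shape: check and act under the same lock (the role the
 * paging_lock plays in the shadow code). */
static void enable_vram_tracking_safe(void)
{
    pthread_mutex_lock(&lock);
    if (!log_dirty_enabled)
        vram_tracking = 1;
    pthread_mutex_unlock(&lock);
}
```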

-- rsp

> 
> Cheers,
> 
> Tim.


From xen-devel-bounces@lists.xen.org Mon Dec 03 18:00:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaJv-0006cy-8Z; Mon, 03 Dec 2012 18:00:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfaJt-0006ck-HP
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:00:41 +0000
Received: from [85.158.138.51:44194] by server-11.bemta-3.messagelabs.com id
	CD/3A-19361-8C8ECB05; Mon, 03 Dec 2012 18:00:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354557639!32513530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27990 invoked from network); 3 Dec 2012 18:00:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Dec 2012 18:00:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Dec 2012 18:00:38 +0000
Message-Id: <50BCF6D502000078000AD6AB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 03 Dec 2012 18:00:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <Wei.Liu2@citrix.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
In-Reply-To: <1354557148.18784.21.camel@iceland>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 18:52, Wei Liu <Wei.Liu2@citrix.com> wrote:
> On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
>> >>> On 03.12.12 at 17:29, Wei Liu <Wei.Liu2@citrix.com> wrote:
>> > Regarding Jan's comment in [0], I don't think allowing the user to
>> > specify an arbitrary number of levels is a good idea. Only the last
>> > level should be shared among vcpus; the other levels should be in a
>> > percpu struct to allow for quicker lookup. Letting the user specify
>> > the number of levels would be too complicated to implement and would
>> > blow up the percpu section (since the size grows exponentially).
>> > Three levels should be quite enough. See the maths below.
>> 
>> I didn't ask to implement more than three levels, I just asked for
>> the interface to establish the number of levels a guest wants to
>> use to allow for higher numbers (passing of which would result in
>> -EINVAL in your implementation).
>> 
> 
> Ah, I understand now. How about something like this:
> 
> struct EVTCHNOP_reg_nlevel {
>     int levels;
>     void *level_specified_reg_struct;
> };

Yes, just "unsigned int" please.

>> > To sum up:
>> >      1. Guest should allocate pages for third level evtchn.
>> >      2. Guest should register third level pages via a new hypercall op.
>> 
>> Doesn't the guest also need to set up space for the 2nd level?
>> 
> 
> Yes. That will be embedded in the percpu struct vcpu_info, which will
> also be registered via the same hypercall op.

"struct vcpu_info"? Same hypercall? Or are you mixing up types?

Jan



From xen-devel-bounces@lists.xen.org Mon Dec 03 18:05:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaO3-0008Cp-9n; Mon, 03 Dec 2012 18:04:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1TfaO2-0008BL-2D
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 18:04:58 +0000
Received: from [193.109.254.147:64122] by server-16.bemta-14.messagelabs.com
	id B0/27-09215-9C9ECB05; Mon, 03 Dec 2012 18:04:57 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354557896!6123432!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17715 invoked from network); 3 Dec 2012 18:04:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:04:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="16130600"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 18:04:44 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Mon, 3 Dec 2012
	18:04:44 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 3 Dec 2012 18:04:43 +0000
Thread-Topic: [Xen-devel] [PATCH 1 of 5 RFC] blktap3: Introduce fundamental
	xenio headers
Thread-Index: Ac3OREOIqPeII3abQkGs8BRUtnQBUwDO4gvg
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE012D554A56F0@LONPMAILBOX01.citrite.net>
References: <patchbomb.1354112410@makatos-desktop>
	<d426fc26719748805aaf.1354112411@makatos-desktop>
	<1354202083.6269.15.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354202083.6269.15.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 1 of 5 RFC] blktap3: Introduce fundamental
 xenio headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > diff -r 84f51929a064 -r d426fc267197 tools/blktap3/xenio/xenio-
> common.h
> > --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
> > +++ b/tools/blktap3/xenio/xenio-common.h	Wed Nov 28 14:11:43
> 2012 +0000
> > @@ -0,0 +1,34 @@
> > +/*
> > + * Copyright (C) 2012      Citrix Ltd.
> > + *
> > + * This program is free software; you can redistribute it and/or
> modify
> > + * it under the terms of the GNU Lesser General Public License as
> published
> > + * by the Free Software Foundation; version 2.1 only. with the
> special
> > + * exception on linking described in file LICENSE.
> 
> I don't think I've seen a LICENSE file yet (thinking back I think this
> was true of the previous series as well).
> 
> What is the special exception?
> 

There's no exception; I assigned the license rather hastily. I'll have a closer look at this.

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:05:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:05:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaOZ-0008Sh-RG; Mon, 03 Dec 2012 18:05:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TfaOY-0008S1-5h
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:05:30 +0000
Received: from [85.158.138.51:29677] by server-13.bemta-3.messagelabs.com id
	31/F0-24887-9E9ECB05; Mon, 03 Dec 2012 18:05:29 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354557926!32514229!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9309 invoked from network); 3 Dec 2012 18:05:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:05:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="46427667"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 18:05:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 13:05:26 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TfaOT-0006bl-Nj;
	Mon, 03 Dec 2012 18:05:25 +0000
Message-ID: <50BCE889.9080909@eu.citrix.com>
Date: Mon, 3 Dec 2012 17:59:37 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1352999682-11535-1-git-send-email-george.dunlap@eu.citrix.com>
	<1353346964.18229.151.camel@zakaz.uk.xensource.com>
	<CAFLBxZYZWtargvXXUDrVNpRKouLgqmPth09qwqxmBFYnuGTGMw@mail.gmail.com>
	<CAFLBxZYoq0WPPeV0rNf3T=vvo9i9RvfKskAh_9hsNrTiSW6F7g@mail.gmail.com>
	<1354555617.2693.38.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354555617.2693.38.camel@zakaz.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] Make all public hosting providers
 eligible for the pre-disclosure list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 17:26, Ian Campbell wrote:
> On Mon, 2012-12-03 at 17:12 +0000, George Dunlap wrote:
>> On Tue, Nov 27, 2012 at 12:05 PM, George Dunlap
>> <George.Dunlap@eu.citrix.com> wrote:
>>          On Mon, Nov 19, 2012 at 5:42 PM, Ian Campbell
>>          <Ian.Campbell@citrix.com> wrote:
>>          
>>                  > +    <p>Here as a rule of thumb, "public hosting
>>                  provider" means
>>                  > +    "selling virtualization services to the general
>>                  public";
>>                  > +    "large-scale" and "widely deployed" means an
>>                  installed base of
>>                  > +    300,000 or more Xen guests.  Other
>>                  well-established organisations
>>                  > +    with a mature security response process will be
>>                  considered on a
>>                  > +    case-by-case basis.</p>
>>                  
>>                  
>>                  I agree with Ian that relaxing this for software
>>                  vendors also seems the
>>                  correct thing to do.
>>          
>>          If we're going to include software vendors, we need some
>>          simple criteria to define what a "real" software vendor is.
>>          The idea of asking for a link from cloud providers pointing to
>>          public rates and a security policy, which Ian Jackson
>>          proffered, was a good one.  I suppose we could do something
>>          similar for software providers: a link to a web page with
>>          either download-able install images, or prices, perhaps?
>>          
>>
>> Thinking a bit more about this one -- if someone had (say) a Debian
>> derivative that looked like it was basically just a different default
>> set of packages -- IOW, a very small amount of effort to create and
>> maintain -- that wouldn't qualify for being on the list, right?  So it
>> seems to me like "amount of effort spent" is kind of what we're
>> looking for, right?  I mean, if 2-3 developers are spending 3-4 hours
>> per week each working on something, then it's clearly a project we
>> could consider, even if it's really small.  I'm sure that would
>> include QubesOS, ArchLinux, Edubuntu, Scientific Linux, &c &c.  But if
>> it's one guy spending half an hour every six months, he doesn't really
>> need to be on the list I don't think.
> Is "deviation from upstream" a factor at all?
>
> e.g. if a Debian derivative just ships the Xen bits direct from Debian
> then there doesn't seem to be much call for them to be on the list; they
> can simply ship the security update too. This would be true even if
> there were dozens of engineers working full time.

I was going to say that if they're not informed, there may be a longer 
turn-around time; but providers on the list are explicitly allowed to 
say that there *is* a vulnerability, and *when* the disclosure is 
scheduled to be, so if it's just a matter of making the same bits 
available that Debian has made available, it shouldn't be too long for 
those who are prepared.

But how much extra work would you need to do to qualify for the 
list?  Suppose there's a derivative with a single additional patch -- 
that will still require pulling in the source, potentially porting the 
patch, doing a re-build, and some basic re-testing -- the whole thing 
could take a day even for a well-funded project.

If the criterion is so small, and so easy to meet (just re-build the 
package, basically), I'm not sure it's so useful to mention it.

> I went looking for the linux-distros list inclusion criteria, in the
> hopes we could just piggy back off that, but I can't find it right now.

I've got a draft I think is helpful; I'll send a v2 and people can see 
what they think of it.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:09:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaSV-0001Ox-MA; Mon, 03 Dec 2012 18:09:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TfaST-0001Nz-NU
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:09:33 +0000
Received: from [85.158.143.35:32117] by server-1.bemta-4.messagelabs.com id
	20/B0-27934-DDAECB05; Mon, 03 Dec 2012 18:09:33 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1354558170!14684050!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31715 invoked from network); 3 Dec 2012 18:09:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:09:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="46428336"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 18:09:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 13:09:23 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TfaSI-0006ex-Qr;
	Mon, 03 Dec 2012 18:09:22 +0000
Message-ID: <1354558168.18784.26.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 3 Dec 2012 18:09:28 +0000
In-Reply-To: <50BCF6D502000078000AD6AB@nat28.tlf.novell.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
	<50BCF6D502000078000AD6AB@nat28.tlf.novell.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 18:00 +0000, Jan Beulich wrote:
> >>> On 03.12.12 at 18:52, Wei Liu <Wei.Liu2@citrix.com> wrote:
> > On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
> >> >>> On 03.12.12 at 17:29, Wei Liu <Wei.Liu2@citrix.com> wrote:
> >> > Regarding Jan's comment in [0], I don't think allowing the user to
> >> > specify an arbitrary number of levels is a good idea, because only
> >> > the last level should be shared among vcpus; the other levels should
> >> > be in a percpu struct to allow for quicker lookup. Letting the user
> >> > specify levels would be too complicated to implement and would blow
> >> > up the percpu section (since the size grows exponentially). Three
> >> > levels should be quite enough. See maths below.
> >> 
> >> I didn't ask to implement more than three levels, I just asked for
> >> the interface to establish the number of levels a guest wants to
> >> use to allow for higher numbers (passing of which would result in
> >> -EINVAL in your implementation).
> >> 
> > 
> > Ah, I understand now. How about something like this:
> > 
> > struct EVTCHNOP_reg_nlevel {
> >     int levels;
> >     void *level_specified_reg_struct;
> > };
> 
> Yes, just "unsigned int" please.
> 

Right, "unsigned int".

> >> > To sum up:
> >> >      1. Guest should allocate pages for third level evtchn.
> >> >      2. Guest should register third level pages via a new hypercall op.
> >> 
> >> Doesn't the guest also need to set up space for the 2nd level?
> >> 
> > 
> > Yes. That will be embedded in the percpu struct vcpu_info, which will
> > also be registered via the same hypercall op.
> 
> "struct vcpu_info"? Same hypercall? Or are you mixing up types?
> 

What I meant was that the second level will be embedded in struct
vcpu_info, and that the second level will be registered via some
hypercall (not the struct vcpu_info itself).


Wei.

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:11:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaUG-0001ud-AZ; Mon, 03 Dec 2012 18:11:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaUE-0001tp-VK; Mon, 03 Dec 2012 18:11:23 +0000
Received: from [85.158.139.83:44299] by server-7.bemta-5.messagelabs.com id
	D3/51-23096-94BECB05; Mon, 03 Dec 2012 18:11:21 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354558279!28244673!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31739 invoked from network); 3 Dec 2012 18:11:20 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-5.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 18:11:20 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBE-0002Mv-OI; Mon, 03 Dec 2012 17:51:44 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBE-00066j-ID; Mon, 03 Dec 2012 17:51:44 +0000
Date: Mon, 03 Dec 2012 17:51:44 +0000
Message-Id: <E1TfaBE-00066j-ID@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 27 (CVE-2012-5511) - several HVM
 operations do not validate the range of their inputs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5511 / XSA-27
                           version 4

   several HVM operations do not validate the range of their inputs

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Several HVM control operations do not check the size of their inputs
and can tie up a physical CPU for extended periods of time.

In addition dirty video RAM tracking involves clearing the bitmap
provided by the domain controlling the guest (e.g. dom0 or a
stubdom). If the size of that bitmap is overly large, an intermediate
variable on the hypervisor stack may overflow that stack.

IMPACT
======

A malicious guest administrator can cause Xen to become unresponsive
or to crash leading in either case to a Denial of Service.

VULNERABLE SYSTEMS
==================

All Xen versions from 3.4 onwards are vulnerable.

However Xen 4.2 and unstable are not vulnerable to the stack
overflow. Systems running either of these are not vulnerable to the
crash.

Versions 3.4, 4.0 and 4.1 are vulnerable to both the stack overflow and
the physical CPU hang.

The vulnerability is only exposed to HVM guests.

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa27-4.1.patch             Xen 4.1.x
xsa27-4.2.patch             Xen 4.2.x
xsa27-unstable.patch        xen-unstable


$ sha256sum xsa27*.patch
7443da829a7b2dd4b5e0b8db97a8b569e7c10d908ee7c34fa60bc2ddd781be57  xsa27-4.1.patch
462eae827944d1d337a6ebf13a36ea952d7fb76b993b9c29946e1d9cfb5ea2a3  xsa27-4.2.patch
fcb07c6bd78a0d9513a68e2eb3bf0c21ef4d8ff0e6ebf6fdce04a3170303cab6  xsa27-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ2AAoJEIP+FMlX6CvZzqwIAJwIUGfXDA0KvJ/zZWAJm49Q
c5Sn5xK1wZdGdJTlCqAGZSMOmaUP6tofqEWanb6nOg2vRAk7HlDz1JbUw5P8E3H9
mTT9Ro8rOhAIhgD0joT4i2XE77OTuLF85JK0M0fn2XPdUNFraChYUGthXj9+irlc
FOhrLnXBlo34h7V7nY9XGIKAwcYUQnR7RcPasKOCO1OGEYofWKJOSKR9wrIhXiMN
Q2svs4J1+PxNdKpErS+mMwEbnYHBcmxxEZXWktB9plzSqf5FMP4yQ3C5wTu/zrYH
nu8Jj2JNV3NTnZgcviUBysTR+1s+JgVjLU3gtxebh2caqjSKyenPU2yYna5rlfY=
=tfAP
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa27-4.1.patch"
Content-Disposition: attachment; filename="xsa27-4.1.patch"
Content-Transfer-Encoding: base64

aHZtOiBMaW1pdCB0aGUgc2l6ZSBvZiBsYXJnZSBIVk0gb3AgYmF0Y2hlcwoK
RG9pbmcgbGFyZ2UgcDJtIHVwZGF0ZXMgZm9yIEhWTU9QX3RyYWNrX2RpcnR5
X3ZyYW0gd2l0aG91dCBwcmVlbXB0aW9uCnRpZXMgdXAgdGhlIHBoeXNpY2Fs
IHByb2Nlc3Nvci4gSW50ZWdyYXRpbmcgcHJlZW1wdGlvbiBpbnRvIHRoZSBw
Mm0KdXBkYXRlcyBpcyBoYXJkIHNvIHNpbXBseSBsaW1pdCB0byAxR0Igd2hp
Y2ggaXMgc3VmZmljaWVudCBmb3IgYSAxNTAwMAoqIDE1MDAwICogMzJicHAg
ZnJhbWVidWZmZXIuCgpGb3IgSFZNT1BfbW9kaWZpZWRfbWVtb3J5IGFuZCBI
Vk1PUF9zZXRfbWVtX3R5cGUgcHJlZW1wdGlibGUgYWRkIHRoZQpuZWNlc3Nh
cnkgbWFjaGluZXJ5IHRvIGhhbmRsZSBwcmVlbXB0aW9uLgoKVGhpcyBpcyBD
VkUtMjAxMi01NTExIC8gWFNBLTI3LgoKU2lnbmVkLW9mZi1ieTogVGltIERl
ZWdhbiA8dGltQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6IElhbiBDYW1wYmVs
bCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFj
a3NvbiA8aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KCng4Ni9wYWdpbmc6
IERvbid0IGFsbG9jYXRlIHVzZXItY29udHJvbGxlZCBhbW91bnRzIG9mIHN0
YWNrIG1lbW9yeS4KClRoaXMgaXMgWFNBLTI3IC8gQ1ZFLTIwMTItNTUxMS4K
ClNpZ25lZC1vZmYtYnk6IFRpbSBEZWVnYW4gPHRpbUB4ZW4ub3JnPgpBY2tl
ZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgp2MjogUHJv
dmlkZSBkZWZpbml0aW9uIG9mIEdCIHRvIGZpeCB4ODYtMzIgY29tcGlsZS4K
ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxKQmV1bGljaEBzdXNlLmNv
bT4KQWNrZWQtYnk6IElhbiBKYWNrc29uIDxpYW4uamFja3NvbkBldS5jaXRy
aXguY29tPgoKCmRpZmYgLXIgNTYzOTA0N2Q2YzlmIHhlbi9hcmNoL3g4Ni9o
dm0vaHZtLmMKLS0tIGEveGVuL2FyY2gveDg2L2h2bS9odm0uYwlNb24gTm92
IDE5IDA5OjQzOjQ4IDIwMTIgKzAxMDAKKysrIGIveGVuL2FyY2gveDg2L2h2
bS9odm0uYwlNb24gTm92IDE5IDE2OjAwOjMzIDIwMTIgKzAwMDAKQEAgLTM0
NzEsNiArMzQ3MSw5IEBAIGxvbmcgZG9faHZtX29wKHVuc2lnbmVkIGxvbmcg
b3AsIFhFTl9HVUUKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSAp
CiAgICAgICAgICAgICBnb3RvIHBhcmFtX2ZhaWwyOwogCisgICAgICAgIGlm
ICggYS5uciA+IEdCKDEpID4+IFBBR0VfU0hJRlQgKQorICAgICAgICAgICAg
Z290byBwYXJhbV9mYWlsMjsKKwogICAgICAgICByYyA9IHhzbV9odm1fcGFy
YW0oZCwgb3ApOwogICAgICAgICBpZiAoIHJjICkKICAgICAgICAgICAgIGdv
dG8gcGFyYW1fZmFpbDI7CkBAIC0zNDk4LDcgKzM1MDEsNiBAQCBsb25nIGRv
X2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBYRU5fR1VFCiAgICAgICAgIHN0
cnVjdCB4ZW5faHZtX21vZGlmaWVkX21lbW9yeSBhOwogICAgICAgICBzdHJ1
Y3QgZG9tYWluICpkOwogICAgICAgICBzdHJ1Y3QgcDJtX2RvbWFpbiAqcDJt
OwotICAgICAgICB1bnNpZ25lZCBsb25nIHBmbjsKIAogICAgICAgICBpZiAo
IGNvcHlfZnJvbV9ndWVzdCgmYSwgYXJnLCAxKSApCiAgICAgICAgICAgICBy
ZXR1cm4gLUVGQVVMVDsKQEAgLTM1MjYsOCArMzUyOCw5IEBAIGxvbmcgZG9f
aHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAgICAg
IGdvdG8gcGFyYW1fZmFpbDM7CiAKICAgICAgICAgcDJtID0gcDJtX2dldF9o
b3N0cDJtKGQpOwotICAgICAgICBmb3IgKCBwZm4gPSBhLmZpcnN0X3Bmbjsg
cGZuIDwgYS5maXJzdF9wZm4gKyBhLm5yOyBwZm4rKyApCisgICAgICAgIHdo
aWxlICggYS5uciA+IDAgKQogICAgICAgICB7CisgICAgICAgICAgICB1bnNp
Z25lZCBsb25nIHBmbiA9IGEuZmlyc3RfcGZuOwogICAgICAgICAgICAgcDJt
X3R5cGVfdCB0OwogICAgICAgICAgICAgbWZuX3QgbWZuID0gZ2ZuX3RvX21m
bihwMm0sIHBmbiwgJnQpOwogICAgICAgICAgICAgaWYgKCBwMm1faXNfcGFn
aW5nKHQpICkKQEAgLTM1NDgsNiArMzU1MSwxOSBAQCBsb25nIGRvX2h2bV9v
cCh1bnNpZ25lZCBsb25nIG9wLCBYRU5fR1VFCiAgICAgICAgICAgICAgICAg
LyogZG9uJ3QgdGFrZSBhIGxvbmcgdGltZSBhbmQgZG9uJ3QgZGllIGVpdGhl
ciAqLwogICAgICAgICAgICAgICAgIHNoX3JlbW92ZV9zaGFkb3dzKGQtPnZj
cHVbMF0sIG1mbiwgMSwgMCk7CiAgICAgICAgICAgICB9CisKKyAgICAgICAg
ICAgIGEuZmlyc3RfcGZuKys7CisgICAgICAgICAgICBhLm5yLS07CisKKyAg
ICAgICAgICAgIC8qIENoZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBu
b3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYgKCBh
Lm5yID4gMCAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAgICAg
ICAgICAgIHsKKyAgICAgICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3Qo
YXJnLCAmYSwgMSkgKQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FRkFV
TFQ7CisgICAgICAgICAgICAgICAgZWxzZQorICAgICAgICAgICAgICAgICAg
ICByYyA9IC1FQUdBSU47CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAg
ICAgICAgICB9CiAgICAgICAgIH0KIAogICAgIHBhcmFtX2ZhaWwzOgpAQCAt
MzU5NSw3ICszNjExLDYgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9u
ZyBvcCwgWEVOX0dVRQogICAgICAgICBzdHJ1Y3QgeGVuX2h2bV9zZXRfbWVt
X3R5cGUgYTsKICAgICAgICAgc3RydWN0IGRvbWFpbiAqZDsKICAgICAgICAg
c3RydWN0IHAybV9kb21haW4gKnAybTsKLSAgICAgICAgdW5zaWduZWQgbG9u
ZyBwZm47CiAgICAgICAgIAogICAgICAgICAvKiBJbnRlcmZhY2UgdHlwZXMg
dG8gaW50ZXJuYWwgcDJtIHR5cGVzICovCiAgICAgICAgIHAybV90eXBlX3Qg
bWVtdHlwZVtdID0gewpAQCAtMzYyNSw4ICszNjQwLDkgQEAgbG9uZyBkb19o
dm1fb3AodW5zaWduZWQgbG9uZyBvcCwgWEVOX0dVRQogICAgICAgICAgICAg
Z290byBwYXJhbV9mYWlsNDsKIAogICAgICAgICBwMm0gPSBwMm1fZ2V0X2hv
c3RwMm0oZCk7Ci0gICAgICAgIGZvciAoIHBmbiA9IGEuZmlyc3RfcGZuOyBw
Zm4gPCBhLmZpcnN0X3BmbiArIGEubnI7IHBmbisrICkKKyAgICAgICAgd2hp
bGUgKCBhLm5yID4gMCApCiAgICAgICAgIHsKKyAgICAgICAgICAgIHVuc2ln
bmVkIGxvbmcgcGZuID0gYS5maXJzdF9wZm47CiAgICAgICAgICAgICBwMm1f
dHlwZV90IHQ7CiAgICAgICAgICAgICBwMm1fdHlwZV90IG50OwogICAgICAg
ICAgICAgbWZuX3QgbWZuOwpAQCAtMzY2Miw2ICszNjc4LDE5IEBAIGxvbmcg
ZG9faHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAg
ICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsNDsKICAgICAgICAgICAgICAg
ICB9CiAgICAgICAgICAgICB9CisKKyAgICAgICAgICAgIGEuZmlyc3RfcGZu
Kys7CisgICAgICAgICAgICBhLm5yLS07CisKKyAgICAgICAgICAgIC8qIENo
ZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaW50
ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYgKCBhLm5yID4gMCAmJiBoeXBl
cmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAgICAgICAgICAgIHsKKyAgICAg
ICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQor
ICAgICAgICAgICAgICAgICAgICByYyA9IC1FRkFVTFQ7CisgICAgICAgICAg
ICAgICAgZWxzZQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FQUdBSU47
CisgICAgICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsNDsKKyAgICAgICAg
ICAgIH0KICAgICAgICAgfQogCiAgICAgICAgIHJjID0gMDsKZGlmZiAtciA1
NjM5MDQ3ZDZjOWYgeGVuL2FyY2gveDg2L21tL3BhZ2luZy5jCi0tLSBhL3hl
bi9hcmNoL3g4Ni9tbS9wYWdpbmcuYwlNb24gTm92IDE5IDA5OjQzOjQ4IDIw
MTIgKzAxMDAKKysrIGIveGVuL2FyY2gveDg2L21tL3BhZ2luZy5jCU1vbiBO
b3YgMTkgMTY6MDA6MzMgMjAxMiArMDAwMApAQCAtNTI5LDEzICs1MjksMTgg
QEAgaW50IHBhZ2luZ19sb2dfZGlydHlfcmFuZ2Uoc3RydWN0IGRvbWFpbgog
CiAgICAgaWYgKCAhZC0+YXJjaC5wYWdpbmcubG9nX2RpcnR5LmZhdWx0X2Nv
dW50ICYmCiAgICAgICAgICAhZC0+YXJjaC5wYWdpbmcubG9nX2RpcnR5LmRp
cnR5X2NvdW50ICkgewotICAgICAgICBpbnQgc2l6ZSA9IChuciArIEJJVFNf
UEVSX0xPTkcgLSAxKSAvIEJJVFNfUEVSX0xPTkc7Ci0gICAgICAgIHVuc2ln
bmVkIGxvbmcgemVyb2VzW3NpemVdOwotICAgICAgICBtZW1zZXQoemVyb2Vz
LCAweDAwLCBzaXplICogQllURVNfUEVSX0xPTkcpOworICAgICAgICBzdGF0
aWMgdWludDhfdCB6ZXJvZXNbUEFHRV9TSVpFXTsKKyAgICAgICAgaW50IG9m
Ziwgc2l6ZTsKKworICAgICAgICBzaXplID0gKChuciArIEJJVFNfUEVSX0xP
TkcgLSAxKSAvIEJJVFNfUEVSX0xPTkcpICogc2l6ZW9mIChsb25nKTsKICAg
ICAgICAgcnYgPSAwOwotICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3Rfb2Zm
c2V0KGRpcnR5X2JpdG1hcCwgMCwgKHVpbnQ4X3QgKikgemVyb2VzLAotICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNpemUgKiBCWVRFU19Q
RVJfTE9ORykgIT0gMCApCi0gICAgICAgICAgICBydiA9IC1FRkFVTFQ7Cisg
ICAgICAgIGZvciAoIG9mZiA9IDA7ICFydiAmJiBvZmYgPCBzaXplOyBvZmYg
Kz0gc2l6ZW9mIHplcm9lcyApCisgICAgICAgIHsKKyAgICAgICAgICAgIGlu
dCB0b2RvID0gbWluKHNpemUgLSBvZmYsIChpbnQpIFBBR0VfU0laRSk7Cisg
ICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3Rfb2Zmc2V0KGRpcnR5X2Jp
dG1hcCwgb2ZmLCB6ZXJvZXMsIHRvZG8pICkKKyAgICAgICAgICAgICAgICBy
diA9IC1FRkFVTFQ7CisgICAgICAgICAgICBvZmYgKz0gdG9kbzsKKyAgICAg
ICAgfQogICAgICAgICBnb3RvIG91dDsKICAgICB9CiAgICAgZC0+YXJjaC5w
YWdpbmcubG9nX2RpcnR5LmZhdWx0X2NvdW50ID0gMDsKZGlmZiAtciA1NjM5
MDQ3ZDZjOWYgeGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcuaAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCU1vbiBOb3YgMTkgMDk6NDM6
NDggMjAxMiArMDEwMAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZp
Zy5oCU1vbiBOb3YgMTkgMTY6MDA6MzMgMjAxMiArMDAwMApAQCAtMTA4LDYg
KzEwOCw5IEBAIGV4dGVybiB1bnNpZ25lZCBpbnQgdHJhbXBvbGluZV94ZW5f
cGh5c18KIGV4dGVybiB1bnNpZ25lZCBjaGFyIHRyYW1wb2xpbmVfY3B1X3N0
YXJ0ZWQ7CiBleHRlcm4gY2hhciB3YWtldXBfc3RhcnRbXTsKIGV4dGVybiB1
bnNpZ25lZCBpbnQgdmlkZW9fbW9kZSwgdmlkZW9fZmxhZ3M7CisKKyNkZWZp
bmUgR0IoX2diKSAoX2diICMjIFVMIDw8IDMwKQorCiAjZW5kaWYKIAogI2Rl
ZmluZSBhc21saW5rYWdlCkBAIC0xMjMsNyArMTI2LDYgQEAgZXh0ZXJuIHVu
c2lnbmVkIGludCB2aWRlb19tb2RlLCB2aWRlb19mbAogI2RlZmluZSBQTUw0
X0FERFIoX3Nsb3QpICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgKCgoKF9zbG90ICMjIFVMKSA+PiA4KSAqIDB4ZmZmZjAwMDAwMDAwMDAw
MFVMKSB8IFwKICAgICAgKF9zbG90ICMjIFVMIDw8IFBNTDRfRU5UUllfQklU
UykpCi0jZGVmaW5lIEdCKF9nYikgKF9nYiAjIyBVTCA8PCAzMCkKICNlbHNl
CiAjZGVmaW5lIFBNTDRfRU5UUllfQllURVMgKDEgPDwgUE1MNF9FTlRSWV9C
SVRTKQogI2RlZmluZSBQTUw0X0FERFIoX3Nsb3QpICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa27-4.2.patch"
Content-Disposition: attachment; filename="xsa27-4.2.patch"
Content-Transfer-Encoding: base64

aHZtOiBMaW1pdCB0aGUgc2l6ZSBvZiBsYXJnZSBIVk0gb3AgYmF0Y2hlcwoK
RG9pbmcgbGFyZ2UgcDJtIHVwZGF0ZXMgZm9yIEhWTU9QX3RyYWNrX2RpcnR5
X3ZyYW0gd2l0aG91dCBwcmVlbXB0aW9uCnRpZXMgdXAgdGhlIHBoeXNpY2Fs
IHByb2Nlc3Nvci4gSW50ZWdyYXRpbmcgcHJlZW1wdGlvbiBpbnRvIHRoZSBw
Mm0KdXBkYXRlcyBpcyBoYXJkIHNvIHNpbXBseSBsaW1pdCB0byAxR0Igd2hp
Y2ggaXMgc3VmZmljaWVudCBmb3IgYSAxNTAwMAoqIDE1MDAwICogMzJicHAg
ZnJhbWVidWZmZXIuCgpGb3IgSFZNT1BfbW9kaWZpZWRfbWVtb3J5IGFuZCBI
Vk1PUF9zZXRfbWVtX3R5cGUgcHJlZW1wdGlibGUgYWRkIHRoZQpuZWNlc3Nh
cnkgbWFjaGluZXJ5IHRvIGhhbmRsZSBwcmVlbXB0aW9uLgoKVGhpcyBpcyBD
VkUtMjAxMi01NTExIC8gWFNBLTI3LgoKU2lnbmVkLW9mZi1ieTogVGltIERl
ZWdhbiA8dGltQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6IElhbiBDYW1wYmVs
bCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFj
a3NvbiA8aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KCnYyOiBQcm92aWRl
IGRlZmluaXRpb24gb2YgR0IgdG8gZml4IHg4Ni0zMiBjb21waWxlLgoKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPEpCZXVsaWNoQHN1c2UuY29tPgpB
Y2tlZC1ieTogSWFuIEphY2tzb24gPGlhbi5qYWNrc29uQGV1LmNpdHJpeC5j
b20+CgoKZGlmZiAtciA3YzRkODA2YjM3NTMgeGVuL2FyY2gveDg2L2h2bS9o
dm0uYwotLS0gYS94ZW4vYXJjaC94ODYvaHZtL2h2bS5jCUZyaSBOb3YgMTYg
MTU6NTY6MTQgMjAxMiArMDAwMAorKysgYi94ZW4vYXJjaC94ODYvaHZtL2h2
bS5jCU1vbiBOb3YgMTkgMTQ6NDI6MTAgMjAxMiArMDAwMApAQCAtMzk2OSw2
ICszOTY5LDkgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwg
WEVOX0dVRQogICAgICAgICBpZiAoICFpc19odm1fZG9tYWluKGQpICkKICAg
ICAgICAgICAgIGdvdG8gcGFyYW1fZmFpbDI7CiAKKyAgICAgICAgaWYgKCBh
Lm5yID4gR0IoMSkgPj4gUEFHRV9TSElGVCApCisgICAgICAgICAgICBnb3Rv
IHBhcmFtX2ZhaWwyOworCiAgICAgICAgIHJjID0geHNtX2h2bV9wYXJhbShk
LCBvcCk7CiAgICAgICAgIGlmICggcmMgKQogICAgICAgICAgICAgZ290byBw
YXJhbV9mYWlsMjsKQEAgLTM5OTUsNyArMzk5OCw2IEBAIGxvbmcgZG9faHZt
X29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICB7CiAgICAgICAg
IHN0cnVjdCB4ZW5faHZtX21vZGlmaWVkX21lbW9yeSBhOwogICAgICAgICBz
dHJ1Y3QgZG9tYWluICpkOwotICAgICAgICB1bnNpZ25lZCBsb25nIHBmbjsK
IAogICAgICAgICBpZiAoIGNvcHlfZnJvbV9ndWVzdCgmYSwgYXJnLCAxKSAp
CiAgICAgICAgICAgICByZXR1cm4gLUVGQVVMVDsKQEAgLTQwMjIsOSArNDAy
NCwxMSBAQCBsb25nIGRvX2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBYRU5f
R1VFCiAgICAgICAgIGlmICggIXBhZ2luZ19tb2RlX2xvZ19kaXJ0eShkKSAp
CiAgICAgICAgICAgICBnb3RvIHBhcmFtX2ZhaWwzOwogCi0gICAgICAgIGZv
ciAoIHBmbiA9IGEuZmlyc3RfcGZuOyBwZm4gPCBhLmZpcnN0X3BmbiArIGEu
bnI7IHBmbisrICkKKyAgICAgICAgd2hpbGUgKCBhLm5yID4gMCApCiAgICAg
ICAgIHsKKyAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuID0gYS5maXJz
dF9wZm47CiAgICAgICAgICAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlOwor
CiAgICAgICAgICAgICBwYWdlID0gZ2V0X3BhZ2VfZnJvbV9nZm4oZCwgcGZu
LCBOVUxMLCBQMk1fVU5TSEFSRSk7CiAgICAgICAgICAgICBpZiAoIHBhZ2Ug
KQogICAgICAgICAgICAgewpAQCAtNDAzNCw2ICs0MDM4LDE5IEBAIGxvbmcg
ZG9faHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAg
ICAgICAgICBzaF9yZW1vdmVfc2hhZG93cyhkLT52Y3B1WzBdLCBfbWZuKHBh
Z2VfdG9fbWZuKHBhZ2UpKSwgMSwgMCk7CiAgICAgICAgICAgICAgICAgcHV0
X3BhZ2UocGFnZSk7CiAgICAgICAgICAgICB9CisKKyAgICAgICAgICAgIGEu
Zmlyc3RfcGZuKys7CisgICAgICAgICAgICBhLm5yLS07CisKKyAgICAgICAg
ICAgIC8qIENoZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhl
IGxhc3QgaW50ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYgKCBhLm5yID4g
MCAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAgICAgICAgICAg
IHsKKyAgICAgICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAm
YSwgMSkgKQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FRkFVTFQ7Cisg
ICAgICAgICAgICAgICAgZWxzZQorICAgICAgICAgICAgICAgICAgICByYyA9
IC1FQUdBSU47CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAgICAgICAg
ICB9CiAgICAgICAgIH0KIAogICAgIHBhcmFtX2ZhaWwzOgpAQCAtNDA4OSw3
ICs0MTA2LDYgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwg
WEVOX0dVRQogICAgIHsKICAgICAgICAgc3RydWN0IHhlbl9odm1fc2V0X21l
bV90eXBlIGE7CiAgICAgICAgIHN0cnVjdCBkb21haW4gKmQ7Ci0gICAgICAg
IHVuc2lnbmVkIGxvbmcgcGZuOwogICAgICAgICAKICAgICAgICAgLyogSW50
ZXJmYWNlIHR5cGVzIHRvIGludGVybmFsIHAybSB0eXBlcyAqLwogICAgICAg
ICBwMm1fdHlwZV90IG1lbXR5cGVbXSA9IHsKQEAgLTQxMjIsOCArNDEzOCw5
IEBAIGxvbmcgZG9faHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUK
ICAgICAgICAgaWYgKCBhLmh2bW1lbV90eXBlID49IEFSUkFZX1NJWkUobWVt
dHlwZSkgKQogICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsNDsKIAotICAg
ICAgICBmb3IgKCBwZm4gPSBhLmZpcnN0X3BmbjsgcGZuIDwgYS5maXJzdF9w
Zm4gKyBhLm5yOyBwZm4rKyApCisgICAgICAgIHdoaWxlICggYS5uciApCiAg
ICAgICAgIHsKKyAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuID0gYS5m
aXJzdF9wZm47CiAgICAgICAgICAgICBwMm1fdHlwZV90IHQ7CiAgICAgICAg
ICAgICBwMm1fdHlwZV90IG50OwogICAgICAgICAgICAgbWZuX3QgbWZuOwpA
QCAtNDE2Myw2ICs0MTgwLDE5IEBAIGxvbmcgZG9faHZtX29wKHVuc2lnbmVk
IGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAgICAgICAgICB9CiAgICAgICAg
ICAgICB9CiAgICAgICAgICAgICBwdXRfZ2ZuKGQsIHBmbik7CisKKyAgICAg
ICAgICAgIGEuZmlyc3RfcGZuKys7CisgICAgICAgICAgICBhLm5yLS07CisK
KyAgICAgICAgICAgIC8qIENoZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQn
cyBub3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYg
KCBhLm5yID4gMCAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAg
ICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vl
c3QoYXJnLCAmYSwgMSkgKQorICAgICAgICAgICAgICAgICAgICByYyA9IC1F
RkFVTFQ7CisgICAgICAgICAgICAgICAgZWxzZQorICAgICAgICAgICAgICAg
ICAgICByYyA9IC1FQUdBSU47CisgICAgICAgICAgICAgICAgZ290byBwYXJh
bV9mYWlsNDsKKyAgICAgICAgICAgIH0KICAgICAgICAgfQogCiAgICAgICAg
IHJjID0gMDsKZGlmZiAtciA3YzRkODA2YjM3NTMgeGVuL2luY2x1ZGUvYXNt
LXg4Ni9jb25maWcuaAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZp
Zy5oCUZyaSBOb3YgMTYgMTU6NTY6MTQgMjAxMiArMDAwMAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCU1vbiBOb3YgMTkgMTQ6NDI6MTAg
MjAxMiArMDAwMApAQCAtMTE5LDYgKzExOSw5IEBAIGV4dGVybiBjaGFyIHdh
a2V1cF9zdGFydFtdOwogZXh0ZXJuIHVuc2lnbmVkIGludCB2aWRlb19tb2Rl
LCB2aWRlb19mbGFnczsKIGV4dGVybiB1bnNpZ25lZCBzaG9ydCBib290X2Vk
aWRfY2FwczsKIGV4dGVybiB1bnNpZ25lZCBjaGFyIGJvb3RfZWRpZF9pbmZv
WzEyOF07CisKKyNkZWZpbmUgR0IoX2diKSAoX2diICMjIFVMIDw8IDMwKQor
CiAjZW5kaWYKIAogI2RlZmluZSBhc21saW5rYWdlCkBAIC0xMzQsNyArMTM3
LDYgQEAgZXh0ZXJuIHVuc2lnbmVkIGNoYXIgYm9vdF9lZGlkX2luZm9bMTI4
XQogI2RlZmluZSBQTUw0X0FERFIoX3Nsb3QpICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBcCiAgICAgKCgoKF9zbG90ICMjIFVMKSA+PiA4KSAqIDB4
ZmZmZjAwMDAwMDAwMDAwMFVMKSB8IFwKICAgICAgKF9zbG90ICMjIFVMIDw8
IFBNTDRfRU5UUllfQklUUykpCi0jZGVmaW5lIEdCKF9nYikgKF9nYiAjIyBV
TCA8PCAzMCkKICNlbHNlCiAjZGVmaW5lIFBNTDRfRU5UUllfQllURVMgKDEg
PDwgUE1MNF9FTlRSWV9CSVRTKQogI2RlZmluZSBQTUw0X0FERFIoX3Nsb3Qp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa27-unstable.patch"
Content-Disposition: attachment; filename="xsa27-unstable.patch"
Content-Transfer-Encoding: base64

aHZtOiBMaW1pdCB0aGUgc2l6ZSBvZiBsYXJnZSBIVk0gb3AgYmF0Y2hlcwoK
RG9pbmcgbGFyZ2UgcDJtIHVwZGF0ZXMgZm9yIEhWTU9QX3RyYWNrX2RpcnR5
X3ZyYW0gd2l0aG91dCBwcmVlbXB0aW9uCnRpZXMgdXAgdGhlIHBoeXNpY2Fs
IHByb2Nlc3Nvci4gSW50ZWdyYXRpbmcgcHJlZW1wdGlvbiBpbnRvIHRoZSBw
Mm0KdXBkYXRlcyBpcyBoYXJkIHNvIHNpbXBseSBsaW1pdCB0byAxR0Igd2hp
Y2ggaXMgc3VmZmljaWVudCBmb3IgYSAxNTAwMAoqIDE1MDAwICogMzJicHAg
ZnJhbWVidWZmZXIuCgpGb3IgSFZNT1BfbW9kaWZpZWRfbWVtb3J5IGFuZCBI
Vk1PUF9zZXRfbWVtX3R5cGUgcHJlZW1wdGlibGUgYWRkIHRoZQpuZWNlc3Nh
cnkgbWFjaGluZXJ5IHRvIGhhbmRsZSBwcmVlbXB0aW9uLgoKVGhpcyBpcyBD
VkUtMjAxMi01NTExIC8gWFNBLTI3LgoKU2lnbmVkLW9mZi1ieTogVGltIERl
ZWdhbiA8dGltQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6IElhbiBDYW1wYmVs
bCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFj
a3NvbiA8aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvaHZtL2h2bS5jIGIveGVuL2FyY2gveDg2L2h2bS9o
dm0uYwppbmRleCAzNGRhMmY1Li4yZDQ2ZDk4IDEwMDY0NAotLS0gYS94ZW4v
YXJjaC94ODYvaHZtL2h2bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9odm0vaHZt
LmMKQEAgLTM5ODQsNiArMzk4NCw5IEBAIGxvbmcgZG9faHZtX29wKHVuc2ln
bmVkIGxvbmcgb3AsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkgYXJn
KQogICAgICAgICBpZiAoICFpc19odm1fZG9tYWluKGQpICkKICAgICAgICAg
ICAgIGdvdG8gcGFyYW1fZmFpbDI7CiAKKyAgICAgICAgaWYgKCBhLm5yID4g
R0IoMSkgPj4gUEFHRV9TSElGVCApCisgICAgICAgICAgICBnb3RvIHBhcmFt
X2ZhaWwyOworCiAgICAgICAgIHJjID0geHNtX2h2bV9wYXJhbShkLCBvcCk7
CiAgICAgICAgIGlmICggcmMgKQogICAgICAgICAgICAgZ290byBwYXJhbV9m
YWlsMjsKQEAgLTQwMTAsNyArNDAxMyw2IEBAIGxvbmcgZG9faHZtX29wKHVu
c2lnbmVkIGxvbmcgb3AsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkg
YXJnKQogICAgIHsKICAgICAgICAgc3RydWN0IHhlbl9odm1fbW9kaWZpZWRf
bWVtb3J5IGE7CiAgICAgICAgIHN0cnVjdCBkb21haW4gKmQ7Ci0gICAgICAg
IHVuc2lnbmVkIGxvbmcgcGZuOwogCiAgICAgICAgIGlmICggY29weV9mcm9t
X2d1ZXN0KCZhLCBhcmcsIDEpICkKICAgICAgICAgICAgIHJldHVybiAtRUZB
VUxUOwpAQCAtNDAzNyw5ICs0MDM5LDExIEBAIGxvbmcgZG9faHZtX29wKHVu
c2lnbmVkIGxvbmcgb3AsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkg
YXJnKQogICAgICAgICBpZiAoICFwYWdpbmdfbW9kZV9sb2dfZGlydHkoZCkg
KQogICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsMzsKIAotICAgICAgICBm
b3IgKCBwZm4gPSBhLmZpcnN0X3BmbjsgcGZuIDwgYS5maXJzdF9wZm4gKyBh
Lm5yOyBwZm4rKyApCisgICAgICAgIHdoaWxlICggYS5uciA+IDAgKQogICAg
ICAgICB7CisgICAgICAgICAgICB1bnNpZ25lZCBsb25nIHBmbiA9IGEuZmly
c3RfcGZuOwogICAgICAgICAgICAgc3RydWN0IHBhZ2VfaW5mbyAqcGFnZTsK
KwogICAgICAgICAgICAgcGFnZSA9IGdldF9wYWdlX2Zyb21fZ2ZuKGQsIHBm
biwgTlVMTCwgUDJNX1VOU0hBUkUpOwogICAgICAgICAgICAgaWYgKCBwYWdl
ICkKICAgICAgICAgICAgIHsKQEAgLTQwNDksNiArNDA1MywxOSBAQCBsb25n
IGRvX2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBYRU5fR1VFU1RfSEFORExF
X1BBUkFNKHZvaWQpIGFyZykKICAgICAgICAgICAgICAgICBzaF9yZW1vdmVf
c2hhZG93cyhkLT52Y3B1WzBdLCBfbWZuKHBhZ2VfdG9fbWZuKHBhZ2UpKSwg
MSwgMCk7CiAgICAgICAgICAgICAgICAgcHV0X3BhZ2UocGFnZSk7CiAgICAg
ICAgICAgICB9CisKKyAgICAgICAgICAgIGEuZmlyc3RfcGZuKys7CisgICAg
ICAgICAgICBhLm5yLS07CisKKyAgICAgICAgICAgIC8qIENoZWNrIGZvciBj
b250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAq
LworICAgICAgICAgICAgaWYgKCBhLm5yID4gMCAmJiBoeXBlcmNhbGxfcHJl
ZW1wdF9jaGVjaygpICkKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAg
ICBpZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQorICAgICAgICAg
ICAgICAgICAgICByYyA9IC1FRkFVTFQ7CisgICAgICAgICAgICAgICAgZWxz
ZQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FQUdBSU47CisgICAgICAg
ICAgICAgICAgYnJlYWs7CisgICAgICAgICAgICB9CiAgICAgICAgIH0KIAog
ICAgIHBhcmFtX2ZhaWwzOgpAQCAtNDEwNCw3ICs0MTIxLDYgQEAgbG9uZyBk
b19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwgWEVOX0dVRVNUX0hBTkRMRV9Q
QVJBTSh2b2lkKSBhcmcpCiAgICAgewogICAgICAgICBzdHJ1Y3QgeGVuX2h2
bV9zZXRfbWVtX3R5cGUgYTsKICAgICAgICAgc3RydWN0IGRvbWFpbiAqZDsK
LSAgICAgICAgdW5zaWduZWQgbG9uZyBwZm47CiAgICAgICAgIAogICAgICAg
ICAvKiBJbnRlcmZhY2UgdHlwZXMgdG8gaW50ZXJuYWwgcDJtIHR5cGVzICov
CiAgICAgICAgIHAybV90eXBlX3QgbWVtdHlwZVtdID0gewpAQCAtNDEzNyw4
ICs0MTUzLDkgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwg
WEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBhcmcpCiAgICAgICAgIGlm
ICggYS5odm1tZW1fdHlwZSA+PSBBUlJBWV9TSVpFKG1lbXR5cGUpICkKICAg
ICAgICAgICAgIGdvdG8gcGFyYW1fZmFpbDQ7CiAKLSAgICAgICAgZm9yICgg
cGZuID0gYS5maXJzdF9wZm47IHBmbiA8IGEuZmlyc3RfcGZuICsgYS5ucjsg
cGZuKysgKQorICAgICAgICB3aGlsZSAoIGEubnIgKQogICAgICAgICB7Cisg
ICAgICAgICAgICB1bnNpZ25lZCBsb25nIHBmbiA9IGEuZmlyc3RfcGZuOwog
ICAgICAgICAgICAgcDJtX3R5cGVfdCB0OwogICAgICAgICAgICAgcDJtX3R5
cGVfdCBudDsKICAgICAgICAgICAgIG1mbl90IG1mbjsKQEAgLTQxNzgsNiAr
NDE5NSwxOSBAQCBsb25nIGRvX2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBY
RU5fR1VFU1RfSEFORExFX1BBUkFNKHZvaWQpIGFyZykKICAgICAgICAgICAg
ICAgICB9CiAgICAgICAgICAgICB9CiAgICAgICAgICAgICBwdXRfZ2ZuKGQs
IHBmbik7CisKKyAgICAgICAgICAgIGEuZmlyc3RfcGZuKys7CisgICAgICAg
ICAgICBhLm5yLS07CisKKyAgICAgICAgICAgIC8qIENoZWNrIGZvciBjb250
aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAqLwor
ICAgICAgICAgICAgaWYgKCBhLm5yID4gMCAmJiBoeXBlcmNhbGxfcHJlZW1w
dF9jaGVjaygpICkKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICBp
ZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQorICAgICAgICAgICAg
ICAgICAgICByYyA9IC1FRkFVTFQ7CisgICAgICAgICAgICAgICAgZWxzZQor
ICAgICAgICAgICAgICAgICAgICByYyA9IC1FQUdBSU47CisgICAgICAgICAg
ICAgICAgZ290byBwYXJhbV9mYWlsNDsKKyAgICAgICAgICAgIH0KICAgICAg
ICAgfQogCiAgICAgICAgIHJjID0gMDsK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 18:11:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaUG-0001ud-AZ; Mon, 03 Dec 2012 18:11:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaUE-0001tp-VK; Mon, 03 Dec 2012 18:11:23 +0000
Received: from [85.158.139.83:44299] by server-7.bemta-5.messagelabs.com id
	D3/51-23096-94BECB05; Mon, 03 Dec 2012 18:11:21 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354558279!28244673!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31739 invoked from network); 3 Dec 2012 18:11:20 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-5.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 18:11:20 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBE-0002Mv-OI; Mon, 03 Dec 2012 17:51:44 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1TfaBE-00066j-ID; Mon, 03 Dec 2012 17:51:44 +0000
Date: Mon, 03 Dec 2012 17:51:44 +0000
Message-Id: <E1TfaBE-00066j-ID@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 27 (CVE-2012-5511) - several HVM
 operations do not validate the range of their inputs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

	     Xen Security Advisory CVE-2012-5511 / XSA-27
                           version 4

   several HVM operations do not validate the range of their inputs

UPDATES IN VERSION 4
====================

Public release.

ISSUE DESCRIPTION
=================

Several HVM control operations do not check the size of their inputs
and can tie up a physical CPU for extended periods of time.

In addition, dirty video RAM tracking involves clearing the bitmap
provided by the domain controlling the guest (e.g. dom0 or a
stubdom). If the size of that bitmap is overly large, an intermediate
variable on the hypervisor stack may overflow that stack.

IMPACT
======

A malicious guest administrator can cause Xen to become unresponsive
or to crash, leading in either case to a Denial of Service.

VULNERABLE SYSTEMS
==================

All Xen versions from 3.4 onwards are vulnerable.

However, Xen 4.2 and xen-unstable are not vulnerable to the stack
overflow. Systems running either of these are not vulnerable to the
crash.

Versions 3.4, 4.0 and 4.1 are vulnerable to both the stack overflow
and the physical CPU hang.

The vulnerability is only exposed to HVM guests.

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa27-4.1.patch             Xen 4.1.x
xsa27-4.2.patch             Xen 4.2.x
xsa27-unstable.patch        xen-unstable


$ sha256sum xsa27*.patch
7443da829a7b2dd4b5e0b8db97a8b569e7c10d908ee7c34fa60bc2ddd781be57  xsa27-4.1.patch
462eae827944d1d337a6ebf13a36ea952d7fb76b993b9c29946e1d9cfb5ea2a3  xsa27-4.2.patch
fcb07c6bd78a0d9513a68e2eb3bf0c21ef4d8ff0e6ebf6fdce04a3170303cab6  xsa27-unstable.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQvOJ2AAoJEIP+FMlX6CvZzqwIAJwIUGfXDA0KvJ/zZWAJm49Q
c5Sn5xK1wZdGdJTlCqAGZSMOmaUP6tofqEWanb6nOg2vRAk7HlDz1JbUw5P8E3H9
mTT9Ro8rOhAIhgD0joT4i2XE77OTuLF85JK0M0fn2XPdUNFraChYUGthXj9+irlc
FOhrLnXBlo34h7V7nY9XGIKAwcYUQnR7RcPasKOCO1OGEYofWKJOSKR9wrIhXiMN
Q2svs4J1+PxNdKpErS+mMwEbnYHBcmxxEZXWktB9plzSqf5FMP4yQ3C5wTu/zrYH
nu8Jj2JNV3NTnZgcviUBysTR+1s+JgVjLU3gtxebh2caqjSKyenPU2yYna5rlfY=
=tfAP
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa27-4.1.patch"
Content-Disposition: attachment; filename="xsa27-4.1.patch"
Content-Transfer-Encoding: base64

aHZtOiBMaW1pdCB0aGUgc2l6ZSBvZiBsYXJnZSBIVk0gb3AgYmF0Y2hlcwoK
RG9pbmcgbGFyZ2UgcDJtIHVwZGF0ZXMgZm9yIEhWTU9QX3RyYWNrX2RpcnR5
X3ZyYW0gd2l0aG91dCBwcmVlbXB0aW9uCnRpZXMgdXAgdGhlIHBoeXNpY2Fs
IHByb2Nlc3Nvci4gSW50ZWdyYXRpbmcgcHJlZW1wdGlvbiBpbnRvIHRoZSBw
Mm0KdXBkYXRlcyBpcyBoYXJkIHNvIHNpbXBseSBsaW1pdCB0byAxR0Igd2hp
Y2ggaXMgc3VmZmljaWVudCBmb3IgYSAxNTAwMAoqIDE1MDAwICogMzJicHAg
ZnJhbWVidWZmZXIuCgpGb3IgSFZNT1BfbW9kaWZpZWRfbWVtb3J5IGFuZCBI
Vk1PUF9zZXRfbWVtX3R5cGUgcHJlZW1wdGlibGUgYWRkIHRoZQpuZWNlc3Nh
cnkgbWFjaGluZXJ5IHRvIGhhbmRsZSBwcmVlbXB0aW9uLgoKVGhpcyBpcyBD
VkUtMjAxMi01NTExIC8gWFNBLTI3LgoKU2lnbmVkLW9mZi1ieTogVGltIERl
ZWdhbiA8dGltQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6IElhbiBDYW1wYmVs
bCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFj
a3NvbiA8aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KCng4Ni9wYWdpbmc6
IERvbid0IGFsbG9jYXRlIHVzZXItY29udHJvbGxlZCBhbW91bnRzIG9mIHN0
YWNrIG1lbW9yeS4KClRoaXMgaXMgWFNBLTI3IC8gQ1ZFLTIwMTItNTUxMS4K
ClNpZ25lZC1vZmYtYnk6IFRpbSBEZWVnYW4gPHRpbUB4ZW4ub3JnPgpBY2tl
ZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgp2MjogUHJv
dmlkZSBkZWZpbml0aW9uIG9mIEdCIHRvIGZpeCB4ODYtMzIgY29tcGlsZS4K
ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxKQmV1bGljaEBzdXNlLmNv
bT4KQWNrZWQtYnk6IElhbiBKYWNrc29uIDxpYW4uamFja3NvbkBldS5jaXRy
aXguY29tPgoKCmRpZmYgLXIgNTYzOTA0N2Q2YzlmIHhlbi9hcmNoL3g4Ni9o
dm0vaHZtLmMKLS0tIGEveGVuL2FyY2gveDg2L2h2bS9odm0uYwlNb24gTm92
IDE5IDA5OjQzOjQ4IDIwMTIgKzAxMDAKKysrIGIveGVuL2FyY2gveDg2L2h2
bS9odm0uYwlNb24gTm92IDE5IDE2OjAwOjMzIDIwMTIgKzAwMDAKQEAgLTM0
NzEsNiArMzQ3MSw5IEBAIGxvbmcgZG9faHZtX29wKHVuc2lnbmVkIGxvbmcg
b3AsIFhFTl9HVUUKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSAp
CiAgICAgICAgICAgICBnb3RvIHBhcmFtX2ZhaWwyOwogCisgICAgICAgIGlm
ICggYS5uciA+IEdCKDEpID4+IFBBR0VfU0hJRlQgKQorICAgICAgICAgICAg
Z290byBwYXJhbV9mYWlsMjsKKwogICAgICAgICByYyA9IHhzbV9odm1fcGFy
YW0oZCwgb3ApOwogICAgICAgICBpZiAoIHJjICkKICAgICAgICAgICAgIGdv
dG8gcGFyYW1fZmFpbDI7CkBAIC0zNDk4LDcgKzM1MDEsNiBAQCBsb25nIGRv
X2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBYRU5fR1VFCiAgICAgICAgIHN0
cnVjdCB4ZW5faHZtX21vZGlmaWVkX21lbW9yeSBhOwogICAgICAgICBzdHJ1
Y3QgZG9tYWluICpkOwogICAgICAgICBzdHJ1Y3QgcDJtX2RvbWFpbiAqcDJt
OwotICAgICAgICB1bnNpZ25lZCBsb25nIHBmbjsKIAogICAgICAgICBpZiAo
IGNvcHlfZnJvbV9ndWVzdCgmYSwgYXJnLCAxKSApCiAgICAgICAgICAgICBy
ZXR1cm4gLUVGQVVMVDsKQEAgLTM1MjYsOCArMzUyOCw5IEBAIGxvbmcgZG9f
aHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAgICAg
IGdvdG8gcGFyYW1fZmFpbDM7CiAKICAgICAgICAgcDJtID0gcDJtX2dldF9o
b3N0cDJtKGQpOwotICAgICAgICBmb3IgKCBwZm4gPSBhLmZpcnN0X3Bmbjsg
cGZuIDwgYS5maXJzdF9wZm4gKyBhLm5yOyBwZm4rKyApCisgICAgICAgIHdo
aWxlICggYS5uciA+IDAgKQogICAgICAgICB7CisgICAgICAgICAgICB1bnNp
Z25lZCBsb25nIHBmbiA9IGEuZmlyc3RfcGZuOwogICAgICAgICAgICAgcDJt
X3R5cGVfdCB0OwogICAgICAgICAgICAgbWZuX3QgbWZuID0gZ2ZuX3RvX21m
bihwMm0sIHBmbiwgJnQpOwogICAgICAgICAgICAgaWYgKCBwMm1faXNfcGFn
aW5nKHQpICkKQEAgLTM1NDgsNiArMzU1MSwxOSBAQCBsb25nIGRvX2h2bV9v
cCh1bnNpZ25lZCBsb25nIG9wLCBYRU5fR1VFCiAgICAgICAgICAgICAgICAg
LyogZG9uJ3QgdGFrZSBhIGxvbmcgdGltZSBhbmQgZG9uJ3QgZGllIGVpdGhl
ciAqLwogICAgICAgICAgICAgICAgIHNoX3JlbW92ZV9zaGFkb3dzKGQtPnZj
cHVbMF0sIG1mbiwgMSwgMCk7CiAgICAgICAgICAgICB9CisKKyAgICAgICAg
ICAgIGEuZmlyc3RfcGZuKys7CisgICAgICAgICAgICBhLm5yLS07CisKKyAg
ICAgICAgICAgIC8qIENoZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBu
b3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYgKCBh
Lm5yID4gMCAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAgICAg
ICAgICAgIHsKKyAgICAgICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3Qo
YXJnLCAmYSwgMSkgKQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FRkFV
TFQ7CisgICAgICAgICAgICAgICAgZWxzZQorICAgICAgICAgICAgICAgICAg
ICByYyA9IC1FQUdBSU47CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAg
ICAgICAgICB9CiAgICAgICAgIH0KIAogICAgIHBhcmFtX2ZhaWwzOgpAQCAt
MzU5NSw3ICszNjExLDYgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9u
ZyBvcCwgWEVOX0dVRQogICAgICAgICBzdHJ1Y3QgeGVuX2h2bV9zZXRfbWVt
X3R5cGUgYTsKICAgICAgICAgc3RydWN0IGRvbWFpbiAqZDsKICAgICAgICAg
c3RydWN0IHAybV9kb21haW4gKnAybTsKLSAgICAgICAgdW5zaWduZWQgbG9u
ZyBwZm47CiAgICAgICAgIAogICAgICAgICAvKiBJbnRlcmZhY2UgdHlwZXMg
dG8gaW50ZXJuYWwgcDJtIHR5cGVzICovCiAgICAgICAgIHAybV90eXBlX3Qg
bWVtdHlwZVtdID0gewpAQCAtMzYyNSw4ICszNjQwLDkgQEAgbG9uZyBkb19o
dm1fb3AodW5zaWduZWQgbG9uZyBvcCwgWEVOX0dVRQogICAgICAgICAgICAg
Z290byBwYXJhbV9mYWlsNDsKIAogICAgICAgICBwMm0gPSBwMm1fZ2V0X2hv
c3RwMm0oZCk7Ci0gICAgICAgIGZvciAoIHBmbiA9IGEuZmlyc3RfcGZuOyBw
Zm4gPCBhLmZpcnN0X3BmbiArIGEubnI7IHBmbisrICkKKyAgICAgICAgd2hp
bGUgKCBhLm5yID4gMCApCiAgICAgICAgIHsKKyAgICAgICAgICAgIHVuc2ln
bmVkIGxvbmcgcGZuID0gYS5maXJzdF9wZm47CiAgICAgICAgICAgICBwMm1f
dHlwZV90IHQ7CiAgICAgICAgICAgICBwMm1fdHlwZV90IG50OwogICAgICAg
ICAgICAgbWZuX3QgbWZuOwpAQCAtMzY2Miw2ICszNjc4LDE5IEBAIGxvbmcg
ZG9faHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAg
ICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsNDsKICAgICAgICAgICAgICAg
ICB9CiAgICAgICAgICAgICB9CisKKyAgICAgICAgICAgIGEuZmlyc3RfcGZu
Kys7CisgICAgICAgICAgICBhLm5yLS07CisKKyAgICAgICAgICAgIC8qIENo
ZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaW50
ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYgKCBhLm5yID4gMCAmJiBoeXBl
cmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAgICAgICAgICAgIHsKKyAgICAg
ICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQor
ICAgICAgICAgICAgICAgICAgICByYyA9IC1FRkFVTFQ7CisgICAgICAgICAg
ICAgICAgZWxzZQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FQUdBSU47
CisgICAgICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsNDsKKyAgICAgICAg
ICAgIH0KICAgICAgICAgfQogCiAgICAgICAgIHJjID0gMDsKZGlmZiAtciA1
NjM5MDQ3ZDZjOWYgeGVuL2FyY2gveDg2L21tL3BhZ2luZy5jCi0tLSBhL3hl
bi9hcmNoL3g4Ni9tbS9wYWdpbmcuYwlNb24gTm92IDE5IDA5OjQzOjQ4IDIw
MTIgKzAxMDAKKysrIGIveGVuL2FyY2gveDg2L21tL3BhZ2luZy5jCU1vbiBO
b3YgMTkgMTY6MDA6MzMgMjAxMiArMDAwMApAQCAtNTI5LDEzICs1MjksMTgg
QEAgaW50IHBhZ2luZ19sb2dfZGlydHlfcmFuZ2Uoc3RydWN0IGRvbWFpbgog
CiAgICAgaWYgKCAhZC0+YXJjaC5wYWdpbmcubG9nX2RpcnR5LmZhdWx0X2Nv
dW50ICYmCiAgICAgICAgICAhZC0+YXJjaC5wYWdpbmcubG9nX2RpcnR5LmRp
cnR5X2NvdW50ICkgewotICAgICAgICBpbnQgc2l6ZSA9IChuciArIEJJVFNf
UEVSX0xPTkcgLSAxKSAvIEJJVFNfUEVSX0xPTkc7Ci0gICAgICAgIHVuc2ln
bmVkIGxvbmcgemVyb2VzW3NpemVdOwotICAgICAgICBtZW1zZXQoemVyb2Vz
LCAweDAwLCBzaXplICogQllURVNfUEVSX0xPTkcpOworICAgICAgICBzdGF0
aWMgdWludDhfdCB6ZXJvZXNbUEFHRV9TSVpFXTsKKyAgICAgICAgaW50IG9m
Ziwgc2l6ZTsKKworICAgICAgICBzaXplID0gKChuciArIEJJVFNfUEVSX0xP
TkcgLSAxKSAvIEJJVFNfUEVSX0xPTkcpICogc2l6ZW9mIChsb25nKTsKICAg
ICAgICAgcnYgPSAwOwotICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3Rfb2Zm
c2V0KGRpcnR5X2JpdG1hcCwgMCwgKHVpbnQ4X3QgKikgemVyb2VzLAotICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNpemUgKiBCWVRFU19Q
RVJfTE9ORykgIT0gMCApCi0gICAgICAgICAgICBydiA9IC1FRkFVTFQ7Cisg
ICAgICAgIGZvciAoIG9mZiA9IDA7ICFydiAmJiBvZmYgPCBzaXplOyBvZmYg
Kz0gc2l6ZW9mIHplcm9lcyApCisgICAgICAgIHsKKyAgICAgICAgICAgIGlu
dCB0b2RvID0gbWluKHNpemUgLSBvZmYsIChpbnQpIFBBR0VfU0laRSk7Cisg
ICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3Rfb2Zmc2V0KGRpcnR5X2Jp
dG1hcCwgb2ZmLCB6ZXJvZXMsIHRvZG8pICkKKyAgICAgICAgICAgICAgICBy
diA9IC1FRkFVTFQ7CisgICAgICAgICAgICBvZmYgKz0gdG9kbzsKKyAgICAg
ICAgfQogICAgICAgICBnb3RvIG91dDsKICAgICB9CiAgICAgZC0+YXJjaC5w
YWdpbmcubG9nX2RpcnR5LmZhdWx0X2NvdW50ID0gMDsKZGlmZiAtciA1NjM5
MDQ3ZDZjOWYgeGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcuaAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCU1vbiBOb3YgMTkgMDk6NDM6
NDggMjAxMiArMDEwMAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZp
Zy5oCU1vbiBOb3YgMTkgMTY6MDA6MzMgMjAxMiArMDAwMApAQCAtMTA4LDYg
KzEwOCw5IEBAIGV4dGVybiB1bnNpZ25lZCBpbnQgdHJhbXBvbGluZV94ZW5f
cGh5c18KIGV4dGVybiB1bnNpZ25lZCBjaGFyIHRyYW1wb2xpbmVfY3B1X3N0
YXJ0ZWQ7CiBleHRlcm4gY2hhciB3YWtldXBfc3RhcnRbXTsKIGV4dGVybiB1
bnNpZ25lZCBpbnQgdmlkZW9fbW9kZSwgdmlkZW9fZmxhZ3M7CisKKyNkZWZp
bmUgR0IoX2diKSAoX2diICMjIFVMIDw8IDMwKQorCiAjZW5kaWYKIAogI2Rl
ZmluZSBhc21saW5rYWdlCkBAIC0xMjMsNyArMTI2LDYgQEAgZXh0ZXJuIHVu
c2lnbmVkIGludCB2aWRlb19tb2RlLCB2aWRlb19mbAogI2RlZmluZSBQTUw0
X0FERFIoX3Nsb3QpICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgKCgoKF9zbG90ICMjIFVMKSA+PiA4KSAqIDB4ZmZmZjAwMDAwMDAwMDAw
MFVMKSB8IFwKICAgICAgKF9zbG90ICMjIFVMIDw8IFBNTDRfRU5UUllfQklU
UykpCi0jZGVmaW5lIEdCKF9nYikgKF9nYiAjIyBVTCA8PCAzMCkKICNlbHNl
CiAjZGVmaW5lIFBNTDRfRU5UUllfQllURVMgKDEgPDwgUE1MNF9FTlRSWV9C
SVRTKQogI2RlZmluZSBQTUw0X0FERFIoX3Nsb3QpICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa27-4.2.patch"
Content-Disposition: attachment; filename="xsa27-4.2.patch"
Content-Transfer-Encoding: base64

aHZtOiBMaW1pdCB0aGUgc2l6ZSBvZiBsYXJnZSBIVk0gb3AgYmF0Y2hlcwoK
RG9pbmcgbGFyZ2UgcDJtIHVwZGF0ZXMgZm9yIEhWTU9QX3RyYWNrX2RpcnR5
X3ZyYW0gd2l0aG91dCBwcmVlbXB0aW9uCnRpZXMgdXAgdGhlIHBoeXNpY2Fs
IHByb2Nlc3Nvci4gSW50ZWdyYXRpbmcgcHJlZW1wdGlvbiBpbnRvIHRoZSBw
Mm0KdXBkYXRlcyBpcyBoYXJkIHNvIHNpbXBseSBsaW1pdCB0byAxR0Igd2hp
Y2ggaXMgc3VmZmljaWVudCBmb3IgYSAxNTAwMAoqIDE1MDAwICogMzJicHAg
ZnJhbWVidWZmZXIuCgpGb3IgSFZNT1BfbW9kaWZpZWRfbWVtb3J5IGFuZCBI
Vk1PUF9zZXRfbWVtX3R5cGUgcHJlZW1wdGlibGUgYWRkIHRoZQpuZWNlc3Nh
cnkgbWFjaGluZXJ5IHRvIGhhbmRsZSBwcmVlbXB0aW9uLgoKVGhpcyBpcyBD
VkUtMjAxMi01NTExIC8gWFNBLTI3LgoKU2lnbmVkLW9mZi1ieTogVGltIERl
ZWdhbiA8dGltQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6IElhbiBDYW1wYmVs
bCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFj
a3NvbiA8aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KCnYyOiBQcm92aWRl
IGRlZmluaXRpb24gb2YgR0IgdG8gZml4IHg4Ni0zMiBjb21waWxlLgoKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPEpCZXVsaWNoQHN1c2UuY29tPgpB
Y2tlZC1ieTogSWFuIEphY2tzb24gPGlhbi5qYWNrc29uQGV1LmNpdHJpeC5j
b20+CgoKZGlmZiAtciA3YzRkODA2YjM3NTMgeGVuL2FyY2gveDg2L2h2bS9o
dm0uYwotLS0gYS94ZW4vYXJjaC94ODYvaHZtL2h2bS5jCUZyaSBOb3YgMTYg
MTU6NTY6MTQgMjAxMiArMDAwMAorKysgYi94ZW4vYXJjaC94ODYvaHZtL2h2
bS5jCU1vbiBOb3YgMTkgMTQ6NDI6MTAgMjAxMiArMDAwMApAQCAtMzk2OSw2
ICszOTY5LDkgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwg
WEVOX0dVRQogICAgICAgICBpZiAoICFpc19odm1fZG9tYWluKGQpICkKICAg
ICAgICAgICAgIGdvdG8gcGFyYW1fZmFpbDI7CiAKKyAgICAgICAgaWYgKCBh
Lm5yID4gR0IoMSkgPj4gUEFHRV9TSElGVCApCisgICAgICAgICAgICBnb3Rv
IHBhcmFtX2ZhaWwyOworCiAgICAgICAgIHJjID0geHNtX2h2bV9wYXJhbShk
LCBvcCk7CiAgICAgICAgIGlmICggcmMgKQogICAgICAgICAgICAgZ290byBw
YXJhbV9mYWlsMjsKQEAgLTM5OTUsNyArMzk5OCw2IEBAIGxvbmcgZG9faHZt
X29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICB7CiAgICAgICAg
IHN0cnVjdCB4ZW5faHZtX21vZGlmaWVkX21lbW9yeSBhOwogICAgICAgICBz
dHJ1Y3QgZG9tYWluICpkOwotICAgICAgICB1bnNpZ25lZCBsb25nIHBmbjsK
IAogICAgICAgICBpZiAoIGNvcHlfZnJvbV9ndWVzdCgmYSwgYXJnLCAxKSAp
CiAgICAgICAgICAgICByZXR1cm4gLUVGQVVMVDsKQEAgLTQwMjIsOSArNDAy
NCwxMSBAQCBsb25nIGRvX2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBYRU5f
R1VFCiAgICAgICAgIGlmICggIXBhZ2luZ19tb2RlX2xvZ19kaXJ0eShkKSAp
CiAgICAgICAgICAgICBnb3RvIHBhcmFtX2ZhaWwzOwogCi0gICAgICAgIGZv
ciAoIHBmbiA9IGEuZmlyc3RfcGZuOyBwZm4gPCBhLmZpcnN0X3BmbiArIGEu
bnI7IHBmbisrICkKKyAgICAgICAgd2hpbGUgKCBhLm5yID4gMCApCiAgICAg
ICAgIHsKKyAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuID0gYS5maXJz
dF9wZm47CiAgICAgICAgICAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlOwor
CiAgICAgICAgICAgICBwYWdlID0gZ2V0X3BhZ2VfZnJvbV9nZm4oZCwgcGZu
LCBOVUxMLCBQMk1fVU5TSEFSRSk7CiAgICAgICAgICAgICBpZiAoIHBhZ2Ug
KQogICAgICAgICAgICAgewpAQCAtNDAzNCw2ICs0MDM4LDE5IEBAIGxvbmcg
ZG9faHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAg
ICAgICAgICBzaF9yZW1vdmVfc2hhZG93cyhkLT52Y3B1WzBdLCBfbWZuKHBh
Z2VfdG9fbWZuKHBhZ2UpKSwgMSwgMCk7CiAgICAgICAgICAgICAgICAgcHV0
X3BhZ2UocGFnZSk7CiAgICAgICAgICAgICB9CisKKyAgICAgICAgICAgIGEu
Zmlyc3RfcGZuKys7CisgICAgICAgICAgICBhLm5yLS07CisKKyAgICAgICAg
ICAgIC8qIENoZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhl
IGxhc3QgaW50ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYgKCBhLm5yID4g
MCAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAgICAgICAgICAg
IHsKKyAgICAgICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAm
YSwgMSkgKQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FRkFVTFQ7Cisg
ICAgICAgICAgICAgICAgZWxzZQorICAgICAgICAgICAgICAgICAgICByYyA9
IC1FQUdBSU47CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAgICAgICAg
ICB9CiAgICAgICAgIH0KIAogICAgIHBhcmFtX2ZhaWwzOgpAQCAtNDA4OSw3
ICs0MTA2LDYgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwg
WEVOX0dVRQogICAgIHsKICAgICAgICAgc3RydWN0IHhlbl9odm1fc2V0X21l
bV90eXBlIGE7CiAgICAgICAgIHN0cnVjdCBkb21haW4gKmQ7Ci0gICAgICAg
IHVuc2lnbmVkIGxvbmcgcGZuOwogICAgICAgICAKICAgICAgICAgLyogSW50
ZXJmYWNlIHR5cGVzIHRvIGludGVybmFsIHAybSB0eXBlcyAqLwogICAgICAg
ICBwMm1fdHlwZV90IG1lbXR5cGVbXSA9IHsKQEAgLTQxMjIsOCArNDEzOCw5
IEBAIGxvbmcgZG9faHZtX29wKHVuc2lnbmVkIGxvbmcgb3AsIFhFTl9HVUUK
ICAgICAgICAgaWYgKCBhLmh2bW1lbV90eXBlID49IEFSUkFZX1NJWkUobWVt
dHlwZSkgKQogICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsNDsKIAotICAg
ICAgICBmb3IgKCBwZm4gPSBhLmZpcnN0X3BmbjsgcGZuIDwgYS5maXJzdF9w
Zm4gKyBhLm5yOyBwZm4rKyApCisgICAgICAgIHdoaWxlICggYS5uciApCiAg
ICAgICAgIHsKKyAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuID0gYS5m
aXJzdF9wZm47CiAgICAgICAgICAgICBwMm1fdHlwZV90IHQ7CiAgICAgICAg
ICAgICBwMm1fdHlwZV90IG50OwogICAgICAgICAgICAgbWZuX3QgbWZuOwpA
QCAtNDE2Myw2ICs0MTgwLDE5IEBAIGxvbmcgZG9faHZtX29wKHVuc2lnbmVk
IGxvbmcgb3AsIFhFTl9HVUUKICAgICAgICAgICAgICAgICB9CiAgICAgICAg
ICAgICB9CiAgICAgICAgICAgICBwdXRfZ2ZuKGQsIHBmbik7CisKKyAgICAg
ICAgICAgIGEuZmlyc3RfcGZuKys7CisgICAgICAgICAgICBhLm5yLS07CisK
KyAgICAgICAgICAgIC8qIENoZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQn
cyBub3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAqLworICAgICAgICAgICAgaWYg
KCBhLm5yID4gMCAmJiBoeXBlcmNhbGxfcHJlZW1wdF9jaGVjaygpICkKKyAg
ICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICBpZiAoIGNvcHlfdG9fZ3Vl
c3QoYXJnLCAmYSwgMSkgKQorICAgICAgICAgICAgICAgICAgICByYyA9IC1F
RkFVTFQ7CisgICAgICAgICAgICAgICAgZWxzZQorICAgICAgICAgICAgICAg
ICAgICByYyA9IC1FQUdBSU47CisgICAgICAgICAgICAgICAgZ290byBwYXJh
bV9mYWlsNDsKKyAgICAgICAgICAgIH0KICAgICAgICAgfQogCiAgICAgICAg
IHJjID0gMDsKZGlmZiAtciA3YzRkODA2YjM3NTMgeGVuL2luY2x1ZGUvYXNt
LXg4Ni9jb25maWcuaAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZp
Zy5oCUZyaSBOb3YgMTYgMTU6NTY6MTQgMjAxMiArMDAwMAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCU1vbiBOb3YgMTkgMTQ6NDI6MTAg
MjAxMiArMDAwMApAQCAtMTE5LDYgKzExOSw5IEBAIGV4dGVybiBjaGFyIHdh
a2V1cF9zdGFydFtdOwogZXh0ZXJuIHVuc2lnbmVkIGludCB2aWRlb19tb2Rl
LCB2aWRlb19mbGFnczsKIGV4dGVybiB1bnNpZ25lZCBzaG9ydCBib290X2Vk
aWRfY2FwczsKIGV4dGVybiB1bnNpZ25lZCBjaGFyIGJvb3RfZWRpZF9pbmZv
WzEyOF07CisKKyNkZWZpbmUgR0IoX2diKSAoX2diICMjIFVMIDw8IDMwKQor
CiAjZW5kaWYKIAogI2RlZmluZSBhc21saW5rYWdlCkBAIC0xMzQsNyArMTM3
LDYgQEAgZXh0ZXJuIHVuc2lnbmVkIGNoYXIgYm9vdF9lZGlkX2luZm9bMTI4
XQogI2RlZmluZSBQTUw0X0FERFIoX3Nsb3QpICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBcCiAgICAgKCgoKF9zbG90ICMjIFVMKSA+PiA4KSAqIDB4
ZmZmZjAwMDAwMDAwMDAwMFVMKSB8IFwKICAgICAgKF9zbG90ICMjIFVMIDw8
IFBNTDRfRU5UUllfQklUUykpCi0jZGVmaW5lIEdCKF9nYikgKF9nYiAjIyBV
TCA8PCAzMCkKICNlbHNlCiAjZGVmaW5lIFBNTDRfRU5UUllfQllURVMgKDEg
PDwgUE1MNF9FTlRSWV9CSVRTKQogI2RlZmluZSBQTUw0X0FERFIoX3Nsb3Qp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCg==

--=separator
Content-Type: application/octet-stream; name="xsa27-unstable.patch"
Content-Disposition: attachment; filename="xsa27-unstable.patch"
Content-Transfer-Encoding: base64

aHZtOiBMaW1pdCB0aGUgc2l6ZSBvZiBsYXJnZSBIVk0gb3AgYmF0Y2hlcwoK
RG9pbmcgbGFyZ2UgcDJtIHVwZGF0ZXMgZm9yIEhWTU9QX3RyYWNrX2RpcnR5
X3ZyYW0gd2l0aG91dCBwcmVlbXB0aW9uCnRpZXMgdXAgdGhlIHBoeXNpY2Fs
IHByb2Nlc3Nvci4gSW50ZWdyYXRpbmcgcHJlZW1wdGlvbiBpbnRvIHRoZSBw
Mm0KdXBkYXRlcyBpcyBoYXJkIHNvIHNpbXBseSBsaW1pdCB0byAxR0Igd2hp
Y2ggaXMgc3VmZmljaWVudCBmb3IgYSAxNTAwMAoqIDE1MDAwICogMzJicHAg
ZnJhbWVidWZmZXIuCgpGb3IgSFZNT1BfbW9kaWZpZWRfbWVtb3J5IGFuZCBI
Vk1PUF9zZXRfbWVtX3R5cGUgcHJlZW1wdGlibGUgYWRkIHRoZQpuZWNlc3Nh
cnkgbWFjaGluZXJ5IHRvIGhhbmRsZSBwcmVlbXB0aW9uLgoKVGhpcyBpcyBD
VkUtMjAxMi01NTExIC8gWFNBLTI3LgoKU2lnbmVkLW9mZi1ieTogVGltIERl
ZWdhbiA8dGltQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6IElhbiBDYW1wYmVs
bCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CkFja2VkLWJ5OiBJYW4gSmFj
a3NvbiA8aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvaHZtL2h2bS5jIGIveGVuL2FyY2gveDg2L2h2bS9o
dm0uYwppbmRleCAzNGRhMmY1Li4yZDQ2ZDk4IDEwMDY0NAotLS0gYS94ZW4v
YXJjaC94ODYvaHZtL2h2bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9odm0vaHZt
LmMKQEAgLTM5ODQsNiArMzk4NCw5IEBAIGxvbmcgZG9faHZtX29wKHVuc2ln
bmVkIGxvbmcgb3AsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkgYXJn
KQogICAgICAgICBpZiAoICFpc19odm1fZG9tYWluKGQpICkKICAgICAgICAg
ICAgIGdvdG8gcGFyYW1fZmFpbDI7CiAKKyAgICAgICAgaWYgKCBhLm5yID4g
R0IoMSkgPj4gUEFHRV9TSElGVCApCisgICAgICAgICAgICBnb3RvIHBhcmFt
X2ZhaWwyOworCiAgICAgICAgIHJjID0geHNtX2h2bV9wYXJhbShkLCBvcCk7
CiAgICAgICAgIGlmICggcmMgKQogICAgICAgICAgICAgZ290byBwYXJhbV9m
YWlsMjsKQEAgLTQwMTAsNyArNDAxMyw2IEBAIGxvbmcgZG9faHZtX29wKHVu
c2lnbmVkIGxvbmcgb3AsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkg
YXJnKQogICAgIHsKICAgICAgICAgc3RydWN0IHhlbl9odm1fbW9kaWZpZWRf
bWVtb3J5IGE7CiAgICAgICAgIHN0cnVjdCBkb21haW4gKmQ7Ci0gICAgICAg
IHVuc2lnbmVkIGxvbmcgcGZuOwogCiAgICAgICAgIGlmICggY29weV9mcm9t
X2d1ZXN0KCZhLCBhcmcsIDEpICkKICAgICAgICAgICAgIHJldHVybiAtRUZB
VUxUOwpAQCAtNDAzNyw5ICs0MDM5LDExIEBAIGxvbmcgZG9faHZtX29wKHVu
c2lnbmVkIGxvbmcgb3AsIFhFTl9HVUVTVF9IQU5ETEVfUEFSQU0odm9pZCkg
YXJnKQogICAgICAgICBpZiAoICFwYWdpbmdfbW9kZV9sb2dfZGlydHkoZCkg
KQogICAgICAgICAgICAgZ290byBwYXJhbV9mYWlsMzsKIAotICAgICAgICBm
b3IgKCBwZm4gPSBhLmZpcnN0X3BmbjsgcGZuIDwgYS5maXJzdF9wZm4gKyBh
Lm5yOyBwZm4rKyApCisgICAgICAgIHdoaWxlICggYS5uciA+IDAgKQogICAg
ICAgICB7CisgICAgICAgICAgICB1bnNpZ25lZCBsb25nIHBmbiA9IGEuZmly
c3RfcGZuOwogICAgICAgICAgICAgc3RydWN0IHBhZ2VfaW5mbyAqcGFnZTsK
KwogICAgICAgICAgICAgcGFnZSA9IGdldF9wYWdlX2Zyb21fZ2ZuKGQsIHBm
biwgTlVMTCwgUDJNX1VOU0hBUkUpOwogICAgICAgICAgICAgaWYgKCBwYWdl
ICkKICAgICAgICAgICAgIHsKQEAgLTQwNDksNiArNDA1MywxOSBAQCBsb25n
IGRvX2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBYRU5fR1VFU1RfSEFORExF
X1BBUkFNKHZvaWQpIGFyZykKICAgICAgICAgICAgICAgICBzaF9yZW1vdmVf
c2hhZG93cyhkLT52Y3B1WzBdLCBfbWZuKHBhZ2VfdG9fbWZuKHBhZ2UpKSwg
MSwgMCk7CiAgICAgICAgICAgICAgICAgcHV0X3BhZ2UocGFnZSk7CiAgICAg
ICAgICAgICB9CisKKyAgICAgICAgICAgIGEuZmlyc3RfcGZuKys7CisgICAg
ICAgICAgICBhLm5yLS07CisKKyAgICAgICAgICAgIC8qIENoZWNrIGZvciBj
b250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAq
LworICAgICAgICAgICAgaWYgKCBhLm5yID4gMCAmJiBoeXBlcmNhbGxfcHJl
ZW1wdF9jaGVjaygpICkKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAg
ICBpZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQorICAgICAgICAg
ICAgICAgICAgICByYyA9IC1FRkFVTFQ7CisgICAgICAgICAgICAgICAgZWxz
ZQorICAgICAgICAgICAgICAgICAgICByYyA9IC1FQUdBSU47CisgICAgICAg
ICAgICAgICAgYnJlYWs7CisgICAgICAgICAgICB9CiAgICAgICAgIH0KIAog
ICAgIHBhcmFtX2ZhaWwzOgpAQCAtNDEwNCw3ICs0MTIxLDYgQEAgbG9uZyBk
b19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwgWEVOX0dVRVNUX0hBTkRMRV9Q
QVJBTSh2b2lkKSBhcmcpCiAgICAgewogICAgICAgICBzdHJ1Y3QgeGVuX2h2
bV9zZXRfbWVtX3R5cGUgYTsKICAgICAgICAgc3RydWN0IGRvbWFpbiAqZDsK
LSAgICAgICAgdW5zaWduZWQgbG9uZyBwZm47CiAgICAgICAgIAogICAgICAg
ICAvKiBJbnRlcmZhY2UgdHlwZXMgdG8gaW50ZXJuYWwgcDJtIHR5cGVzICov
CiAgICAgICAgIHAybV90eXBlX3QgbWVtdHlwZVtdID0gewpAQCAtNDEzNyw4
ICs0MTUzLDkgQEAgbG9uZyBkb19odm1fb3AodW5zaWduZWQgbG9uZyBvcCwg
WEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh2b2lkKSBhcmcpCiAgICAgICAgIGlm
ICggYS5odm1tZW1fdHlwZSA+PSBBUlJBWV9TSVpFKG1lbXR5cGUpICkKICAg
ICAgICAgICAgIGdvdG8gcGFyYW1fZmFpbDQ7CiAKLSAgICAgICAgZm9yICgg
cGZuID0gYS5maXJzdF9wZm47IHBmbiA8IGEuZmlyc3RfcGZuICsgYS5ucjsg
cGZuKysgKQorICAgICAgICB3aGlsZSAoIGEubnIgKQogICAgICAgICB7Cisg
ICAgICAgICAgICB1bnNpZ25lZCBsb25nIHBmbiA9IGEuZmlyc3RfcGZuOwog
ICAgICAgICAgICAgcDJtX3R5cGVfdCB0OwogICAgICAgICAgICAgcDJtX3R5
cGVfdCBudDsKICAgICAgICAgICAgIG1mbl90IG1mbjsKQEAgLTQxNzgsNiAr
NDE5NSwxOSBAQCBsb25nIGRvX2h2bV9vcCh1bnNpZ25lZCBsb25nIG9wLCBY
RU5fR1VFU1RfSEFORExFX1BBUkFNKHZvaWQpIGFyZykKICAgICAgICAgICAg
ICAgICB9CiAgICAgICAgICAgICB9CiAgICAgICAgICAgICBwdXRfZ2ZuKGQs
IHBmbik7CisKKyAgICAgICAgICAgIGEuZmlyc3RfcGZuKys7CisgICAgICAg
ICAgICBhLm5yLS07CisKKyAgICAgICAgICAgIC8qIENoZWNrIGZvciBjb250
aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaW50ZXJhdGlvbiAqLwor
ICAgICAgICAgICAgaWYgKCBhLm5yID4gMCAmJiBoeXBlcmNhbGxfcHJlZW1w
dF9jaGVjaygpICkKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgICBp
ZiAoIGNvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQorICAgICAgICAgICAg
ICAgICAgICByYyA9IC1FRkFVTFQ7CisgICAgICAgICAgICAgICAgZWxzZQor
ICAgICAgICAgICAgICAgICAgICByYyA9IC1FQUdBSU47CisgICAgICAgICAg
ICAgICAgZ290byBwYXJhbV9mYWlsNDsKKyAgICAgICAgICAgIH0KICAgICAg
ICAgfQogCiAgICAgICAgIHJjID0gMDsK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Dec 03 18:15:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfaY1-0002yC-FC; Mon, 03 Dec 2012 18:15:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TfaXz-0002xs-LB
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:15:15 +0000
Received: from [85.158.139.211:10781] by server-9.bemta-5.messagelabs.com id
	9D/F4-29295-23CECB05; Mon, 03 Dec 2012 18:15:14 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354558512!18893543!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18257 invoked from network); 3 Dec 2012 18:15:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:15:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208";a="46429329"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 18:15:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 13:15:11 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TfaXv-0006kA-Bv;
	Mon, 03 Dec 2012 18:15:11 +0000
Message-ID: <1354558516.18784.31.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 3 Dec 2012 18:15:16 +0000
In-Reply-To: <1354557455.2693.46.camel@zakaz.uk.xensource.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
	<1354557455.2693.46.camel@zakaz.uk.xensource.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 17:57 +0000, Ian Campbell wrote:
> On Mon, 2012-12-03 at 17:52 +0000, Wei Liu wrote:
> > On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
> > > >>> On 03.12.12 at 17:29, Wei Liu <Wei.Liu2@citrix.com> wrote:
> > > > Regarding Jan's comment in [0], I don't think allowing the user to
> > > > specify an arbitrary number of levels is a good idea. Only the last
> > > > level should be shared among vcpus; the other levels should live in
> > > > percpu structs to allow quicker lookup. Letting the user specify the
> > > > number of levels would be too complicated to implement and would blow
> > > > up the percpu section (since the size grows exponentially). Three
> > > > levels should be quite enough. See the maths below.
> > > 
> > > I didn't ask to implement more than three levels, I just asked for
> > > the interface to establish the number of levels a guest wants to
> > > use to allow for higher numbers (passing of which would result in
> > > -EINVAL in your implementation).
> > > 
> > 
> > Ah, I understand now. How about something like this:
> > 
> > struct EVTCHNOP_reg_nlevel {
> >     int levels;
> >     void *level_specified_reg_struct;
> > }
> > 
> > > > Number of event channels:
> > > >  * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
> > > >  * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 512k
> > > > Basically the third level is a new ABI, so I chose to use unsigned
> > > > long long here to get more event channels.
> > > 
> > > Please don't: This would make things less consistent to handle
> > > at least in the guest side code. And I don't see why you would
> > > have a need to do so anyway (or else your argument above
> > > against further levels would become questionable).
> > > 
> > 
> > It was suggested by Ian to use unsigned long long. Ian, why do you
> > prefer unsigned long long to unsigned long?
> 
> I thought having 32 and 64 bit be the same might simplify some things,
> but if not then that's fine.
> 
> Is 32k event channels going to be enough in the long run? I suppose any
> system capable of running such a number of guests ought to be using 64
> bit == 512k which should at least last a bit longer.
> 

I think 32k is quite enough for 32bit machines. And I agree with "system
capable of running such a number of guests ought to be using 64 bit ==
512k" ;-)

> > > > Pages occupied by the third level (if PAGE_SIZE=4k):
> > > >  * 32bit: 64k  / 8 / 4k = 2
> > > >  * 64bit: 512k / 8 / 4k = 16
> > > > 
> > > > Making the second level percpu will incur overhead. In effect we move
> > > > the array in shared_info into the percpu struct:
> > > >  * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 bytes
> > > >  * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 bytes
> > > > 
> > > > What concerns me is that the struct evtchn buckets are allocated all at
> > > > once during initialization phrase. To save memory inside Xen, the
> > > > internal allocation/free scheme for evtchn needs to be modified. Ian
> > > > suggested we do small number of buckets at start of day then dynamically
> > > > fault in more as required.
> > > > 
> > > > To sum up:
> > > >      1. Guest should allocate pages for third level evtchn.
> > > >      2. Guest should register third level pages via a new hypercall op.
> > > 
> > > Doesn't the guest also need to set up space for the 2nd level?
> > > 
> > 
> > Yes. That will be embedded in the percpu struct vcpu_info, which will
> > also be registered via the same hypercall op.
> 
> NB that there is already a vcpu info placement hypercall. I have no
> problem making this be a prerequisite for this work.
> 

I saw that one. But that comes down to implementation, so I didn't
go into details.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:24:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfah3-0003nX-Pj; Mon, 03 Dec 2012 18:24:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tfah2-0003nA-Dc
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:24:36 +0000
Received: from [85.158.143.99:21816] by server-3.bemta-4.messagelabs.com id
	AF/17-06841-36EECB05; Mon, 03 Dec 2012 18:24:35 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354559073!27555656!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10577 invoked from network); 3 Dec 2012 18:24:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:24:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208,217";a="46430358"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 18:24:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 13:24:32 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tfagy-0006rJ-Bg;
	Mon, 03 Dec 2012 18:24:32 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 18:18:38 +0000
Message-ID: <1354558718-11194-1-git-send-email-george.dunlap@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_RFC_v2=5D_Expand_eligibility_for_t?=
	=?utf-8?q?he_pre-disclosure_list?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7233093474945523063=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7233093474945523063==
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

As discussed on the xen-devel mailing list, expand eligibility of the
pre-disclosure list to include any public hosting provider, as well
as any software project:
* Change "Large hosting providers" to "Public hosting providers"
* Remove "widely-deployed" from vendors and distributors
* Add a rule of thumb for what constitutes a "genuine" service
* Add an itemized list of information to be included in the application,
to make expectations clear and (hopefully) applications more streamlined.

NOTE: This RFC is meant to be a way to start a discussion on the exact
wording which will be voted on.  Once it has gone through review from
the xen-devel mailing list, I will post an "RC" and announce it on the
Xen blog, as well as on xen-users.  Once discussion seems to have
converged, I will post a "FINAL" one, which I will put up for a vote.

I hope to post an RC on the blog and on xen-users before I begin my
Christmas holidays, in mid-December.

v2:
 - Include "genuine" software providers, and a rule of thumb for "genuine"
 - Include evidence for software providers
 - Allow "a key signed with a key in the PGP strong set" as evidence
 - Require applicants to state they have read and understand policy
   and will abide by it
 - Minor suggested clarifications
 - Added version message at bottom
 - Made security aliases a requirement

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
---
 security_vulnerability_process.html |   67 +++++++++++++++++++++++++++++------
 1 file changed, 56 insertions(+), 11 deletions(-)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index e305371..9eb7aa5 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -1,7 +1,7 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 <html xmlns="http://www.w3.org/1999/xhtml">
 <head>
-<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<meta http-equiv="Content-Type" content="text/html" />
     <title>Xen.org Security Problem Response Process</title>
 	<meta name="description" content="Xen.org, home of the Xen® hypervisor, the powerful open source industry standard for virtualization.">
 	<meta name="keywords" content="xen 2.0, xen 3.0, hypervisor, server consolidation, open source, ian pratt, virtualization, virtualisation, xen, xensource, security, vulnerability, process">
@@ -14,7 +14,7 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
 
 </script>
 <script type="text/javascript" src="/globals/menu_data_xenorg_main.js"></script>	
-	<meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
+	<meta http-equiv="Content-Type" content="text/html">
 	<link rel="shortcut icon" href="/favicon.ico">
 
 </head>
@@ -194,16 +194,27 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     addresses (ideally, role addresses) of the security response teams for
     significant Xen operators and distributors.</p>
     <p>This includes:<ul>
-      <li>Large-scale hosting providers;</li>
+      <li>Public hosting providers;</li>
       <li>Large-scale organisational users of Xen;</li>
-      <li>Vendors of widely-deployed Xen-based systems;</li>
-      <li>Distributors of widely-deployed operating systems with Xen support.</li>
+      <li>Vendors of Xen-based systems;</li>
+      <li>Distributors of operating systems with Xen support.</li>
     </ul></p>
     <p>This includes both corporations and community institutions.</p>    
-    <p>Here as a rule of thumb "large scale" and "widely deployed" means an
-    installed base of 300,000 or more Xen guests; other well-established
-    organisations with a mature security response process will be considered on
-    a case-by-case basis.</p>    
+    <p>Here "provider", "vendor", and "distributor" are meant to
+      include anyone who is making a genuine service available to the
+      public, whether for a fee or gratis.  For projects providing a
+      service for a fee, the rule of thumb of "genuine" is that you
+      are offering services which people are purchasing.  For gratis
+      projects, the rule of thumb for "genuine" is measured in terms
+      of the amount of time committed to providing the service.  For
+      instance, a software project which has 2-3 active developers,
+      each of whom spend 3-4 hours per week doing development, is very
+      likely to be accepted; whereas a project with a single developer
+      who spends a few hours a month will most likely be rejected.</p>
+    <p>For organizational users, a rule of thumb is that "large-scale"
+      means an installed base of 300,000 or more Xen guests.  Other
+      well-established organisations with a mature security response
+      process will be considered on a case-by-case basis.</p>
     <p>The list of entities on the pre-disclosure list is public. (Just the list
     of projects and organisations, not the actual email addresses.)</p>  
     <p>If there is an embargo, the pre-disclosure list will receive
@@ -229,8 +240,41 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
        <li>The planned disclosure date</li>
     </ul></p>
 
-    <p>Organisations who meet the criteria should contact security@xen if they wish to receive pre-disclosure of advisories. Organisations should not request subscription via the mailing list web interface, any such subscription requests will be rejected and ignored.</p>
-    <p>Normally we would prefer that a role address be used for each organisation, rather than one or more individual's direct email address. This helps to ensure that changes of personnel do not end up effectively dropping an organisation from the list</p>
+    <p>Organisations who meet the criteria should contact security@xen
+      if they wish to receive pre-disclosure of advisories.  Please
+      include in the e-mail: <ul>
+	<li>The name of your organization</li>
+	<li>A brief description of why you fit the criteria, along
+	with evidence to support the claim</li>
+	<li>A security alias e-mail address (no personal addresses --
+	see below)</li>
+	<li>A link to a web page with your security policy
+	statement</li>
+        <li>A statement to the effect that you have read this policy
+          and agree to abide by the terms for inclusion in the list,
+          specifically the requirements regarding confidentiality
+          during an embargo period</li>
+      </ul></p>
+    <p>Evidence that will be considered may include the following: <ul>
+	<li>If you are a public hosting provider, a link to a web page
+	  with your public rates</li>
+	<li>If you are a software provider, a link to a web page where
+	  your software can be downloaded or purchased</li>
+	<li>If you are an open-source project, a link to a mailing
+	  list archive and/or a version control repository
+	  demonstrating active development</li>
+	<li>A public key signed with a key which is in the PGP "strong
+	set"</li>
+    </ul></p>
+
+    <p>Organisations should not request subscription via the mailing
+      list web interface, any such subscription requests will be
+      rejected and ignored.</p>
+    <p>A role address (such as security@company.com) should be used
+      for each organisation, rather than one or more individual's
+      direct email address. This helps to ensure that changes of
+      personnel do not end up effectively dropping an organisation
+      from the list.</p>
 
     
     <h3>Organizations on the pre-disclosure list:</h3>
@@ -253,6 +297,7 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     
     <h2>Change History</h2>
     <ul>
+      <li><b>v1.4 Nov 2012:</b> Predisclosure list criteria changes</li>
       <li><b>v1.3 Aug 2012:</b> Various minor updates</li>
       <li><b>v1.2 Apr 2012:</b> Added pre-disclosure list</li>
       <li><b>v1.1 Feb 2012:</b> Added link to Security Announcements wiki page</li>
-- 
1.7.9.5



--===============7233093474945523063==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7233093474945523063==--

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:24:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfah3-0003nX-Pj; Mon, 03 Dec 2012 18:24:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tfah2-0003nA-Dc
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:24:36 +0000
Received: from [85.158.143.99:21816] by server-3.bemta-4.messagelabs.com id
	AF/17-06841-36EECB05; Mon, 03 Dec 2012 18:24:35 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354559073!27555656!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10577 invoked from network); 3 Dec 2012 18:24:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:24:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,208,1355097600"; d="scan'208,217";a="46430358"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 18:24:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 13:24:32 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tfagy-0006rJ-Bg;
	Mon, 03 Dec 2012 18:24:32 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 3 Dec 2012 18:18:38 +0000
Message-ID: <1354558718-11194-1-git-send-email-george.dunlap@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_RFC_v2=5D_Expand_eligibility_for_t?=
	=?utf-8?q?he_pre-disclosure_list?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7233093474945523063=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7233093474945523063==
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

As discussed on the xen-devel mailing list, expand eligibility for the
pre-disclosure list to include any public hosting provider, as well
as software projects:
* Change "Large hosting providers" to "Public hosting providers"
* Remove "widely-deployed" from vendors and distributors
* Add a rule of thumb for what constitutes a "genuine" provider
* Add an itemized list of information to be included in the application,
to make expectations clear and (hopefully) applications more streamlined.

NOTE: This RFC is meant to be a way to start a discussion on the exact
wording which will be voted on.  Once it has gone through review from
the xen-devel mailing list, I will post an "RC" and announce it on the
Xen blog, as well as on xen-users.  Once discussion seems to have
converged, I will post a "FINAL" one, which I will put up for a vote.

I hope to post an RC on the blog and on xen-users before I begin my
Christmas holidays, in mid-December.

v2:
 - Include "genuine" software providers, and a rule of thumb for "genuine"
 - Include evidence for software providers
 - Allow "a key signed with a key in the PGP strong set" as evidence
 - Require applicants to state they have read and understood the policy
   and will abide by it
 - Minor suggested clarifications
 - Added version message at bottom
 - Made security aliases a requirement

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
---
 security_vulnerability_process.html |   67 +++++++++++++++++++++++++++++------
 1 file changed, 56 insertions(+), 11 deletions(-)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index e305371..9eb7aa5 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -1,7 +1,7 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 <html xmlns="http://www.w3.org/1999/xhtml">
 <head>
-<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<meta http-equiv="Content-Type" content="text/html" />
     <title>Xen.org Security Problem Response Process</title>
 	<meta name="description" content="Xen.org, home of the Xen® hypervisor, the powerful open source industry standard for virtualization.">
 	<meta name="keywords" content="xen 2.0, xen 3.0, hypervisor, server consolidation, open source, ian pratt, virtualization, virtualisation, xen, xensource, security, vulnerability, process">
@@ -14,7 +14,7 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
 
 </script>
 <script type="text/javascript" src="/globals/menu_data_xenorg_main.js"></script>	
-	<meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
+	<meta http-equiv="Content-Type" content="text/html">
 	<link rel="shortcut icon" href="/favicon.ico">
 
 </head>
@@ -194,16 +194,27 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     addresses (ideally, role addresses) of the security response teams for
     significant Xen operators and distributors.</p>
     <p>This includes:<ul>
-      <li>Large-scale hosting providers;</li>
+      <li>Public hosting providers;</li>
       <li>Large-scale organisational users of Xen;</li>
-      <li>Vendors of widely-deployed Xen-based systems;</li>
-      <li>Distributors of widely-deployed operating systems with Xen support.</li>
+      <li>Vendors of Xen-based systems;</li>
+      <li>Distributors of operating systems with Xen support.</li>
     </ul></p>
     <p>This includes both corporations and community institutions.</p>    
-    <p>Here as a rule of thumb "large scale" and "widely deployed" means an
-    installed base of 300,000 or more Xen guests; other well-established
-    organisations with a mature security response process will be considered on
-    a case-by-case basis.</p>    
+    <p>Here "provider", "vendor", and "distributor" is meant to
+      include anyone who is making a genuine service, available to the
+      public, whether for a fee or gratis.  For projects providing a
+      service for a fee, the rule of thumb of "genuine" is that you
+      are offering services which people are purchasing.  For gratis
+      projects, the rule of thumb for "genuine" is measured in terms
+      of the amount of time committed to providing the service.  For
+      instance, a software project which has 2-3 active developers,
+      each of whom spend 3-4 hours per week doing development, is very
+      likely to be accepted; whereas a project with a single developer
+      who spends a few hours a month will most likey be rejected.</p>
+    <p>For organizational users, a rule of thumb is that "large-scale"
+      means an installed base of 300,000 or more Xen guests.  Other
+      well-established organisations with a mature security response
+      process will be considered on a case-by-case basis.</p>
     <p>The list of entities on the pre-disclosure list is public. (Just the list
     of projects and organisations, not the actual email addresses.)</p>  
     <p>If there is an embargo, the pre-disclosure list will receive
@@ -229,8 +240,41 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
        <li>The planned disclosure date</li>
     </ul></p>
 
-    <p>Organisations who meet the criteria should contact security@xen if they wish to receive pre-disclosure of advisories. Organisations should not request subscription via the mailing list web interface, any such subscription requests will be rejected and ignored.</p>
-    <p>Normally we would prefer that a role address be used for each organisation, rather than one or more individual's direct email address. This helps to ensure that changes of personnel do not end up effectively dropping an organisation from the list</p>
+    <p>Organisations who meet the criteria should contact security@xen
+      if they wish to receive pre-disclosure of advisories.  Please
+      include in the e-mail: <ul>
+	<li>The name of your organization</li>
+	<li>A brief description of why you fit the criteria, along
+	with evidence to support the claim</li>
+	<li>A security alias e-mail address (no personal addresses --
+	see below)</li>
+	<li>A link to a web page with your security policy
+	statement</li>
+        <li>A statement to the effect that you have read this policy
+          and agree to abide by the terms for inclusion in the list,
+          specifically the requirements regarding confidentiality
+          during an embargo period</li>
+      </ul></p>
+    <p>Evidence that will be considered may include the following: <ul>
+	<li>If you are a public hosting provider, a link to a web page
+	  with your public rates</li>
+	<li>If you are a software provider, a link to a web page where
+	  your software can be downloaded or purchased</li>
+	<li>If you are an open-source project, a link to a mailing
+	  list archive and/or a version control repository
+	  demonstrating active development</li>
+	<li>A public key signed with a key which is in the PGP "strong
+	set"</li>
+    </ul></p>
+
+    <p>Organisations should not request subscription via the mailing
+      list web interface; any such subscription requests will be
+      rejected and ignored.</p>
+    <p>A role address (such as security@company.com) should be used
+      for each organisation, rather than one or more individuals'
+      direct email addresses. This helps to ensure that changes of
+      personnel do not end up effectively dropping an organisation
+      from the list.</p>
 
     
     <h3>Organizations on the pre-disclosure list:</h3>
@@ -253,6 +297,7 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     
     <h2>Change History</h2>
     <ul>
+      <li><b>v1.4 Nov 2012:</b> Pre-disclosure list criteria changes</li>
       <li><b>v1.3 Aug 2012:</b> Various minor updates</li>
       <li><b>v1.2 Apr 2012:</b> Added pre-disclosure list</li>
       <li><b>v1.1 Feb 2012:</b> Added link to Security Announcements wiki page</li>
-- 
1.7.9.5



--===============7233093474945523063==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7233093474945523063==--

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:26:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfaj2-0003wS-AJ; Mon, 03 Dec 2012 18:26:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Tfaj1-0003wN-1l
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 18:26:39 +0000
Received: from [85.158.139.83:26704] by server-8.bemta-5.messagelabs.com id
	BB/12-06050-EDEECB05; Mon, 03 Dec 2012 18:26:38 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354559197!24231000!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8953 invoked from network); 3 Dec 2012 18:26:37 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:26:37 -0000
Received: by mail-ee0-f43.google.com with SMTP id e49so2153888eek.30
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Dec 2012 10:26:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=n5HB6GaE6NQgQ2FzN8VI8EpO4cqn4kn9O3tySQ/HL10=;
	b=gEjVYvDjkSvVTOTroenLeapiLesF40UmFxEGPLiBPWENyxcLH3oiTdEA0+Gbpa5r24
	4dRkAqZE+zS1hM5naQL6SwxrI0M+KdG2kmNkWAWjbFRqwvdguy0LX/G0/TmHz8p3ock/
	KRlLS4N75DF5AVsn9Pfn0YGPJmr1Dt6IRXuF9H+fxZrL1ruHLaV8JMrMhmvTimRz4ON4
	jSYIrxeR64UM/DWz7XghyG8Vg30dgqL13jULbKoyUFOL2fSGQpRqZ+InMKWruArWAnzo
	v4lHVzGsoj1+RcXL+hnJAJLfvGIx4HF03shSp9ELJffumPYVFyRZBZt44fB3Lo42nDZ7
	DbSg==
Received: by 10.14.213.134 with SMTP id a6mr38895243eep.45.1354559197154;
	Mon, 03 Dec 2012 10:26:37 -0800 (PST)
Received: from [192.168.0.40] (ip-178-48.sn2.eutelia.it. [83.211.178.48])
	by mx.google.com with ESMTPS id z8sm32703409eeo.11.2012.12.03.10.26.35
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 10:26:36 -0800 (PST)
Message-ID: <1354559180.20772.5.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 03 Dec 2012 19:26:20 +0100
In-Reply-To: <1354554754.2693.31.camel@zakaz.uk.xensource.com>
References: <patchbomb.1354552497@Solace>
	<dde3de6d81a3014f1d13.1354552498@Solace>
	<1354554754.2693.31.camel@zakaz.uk.xensource.com>
X-Mailer: Evolution 3.4.4-1
Mime-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 3] xen: sched_credit,
 improve tickling of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6641317753459257487=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6641317753459257487==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-B6+TUMtEb5jDqxxK/td8"


--=-B6+TUMtEb5jDqxxK/td8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2012-12-03 at 17:12 +0000, Ian Campbell wrote:
> On Mon, 2012-12-03 at 16:34 +0000, Dario Faggioli wrote:
> > +            /*
> > +             * If there aren't idlers for new, then letting new preempt cur and
> > +             * try to migrate cur becomes ineviable. If that is the case, update
> 
> "inevitable"
> 
Gha! Again! :-(

I really need that 'comment spellchecker' thing! Actually, looking for
it a little bit more thoroughly, I ran into this (which I didn't know):

http://vimdoc.sourceforge.net/htmldoc/spell.html

As well as into the fact that just doing a ':set spell' in a recent
enough version of Vim, while editing a C file, seems to do the job quite
nicely.
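For reference, the setting Dario mentions can be made permanent with a couple of vimrc lines. This is an illustrative config fragment, not something from the thread:

```vim
" When 'spell' is set in a code buffer, Vim spell-checks only comments
" and strings -- which is exactly the "comment spellchecker" use case.
autocmd FileType c,cpp setlocal spell spelllang=en
```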

> (I have nothing more intelligent to say...)
> 
Well, it really was useful to me, as it triggered the above! :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-B6+TUMtEb5jDqxxK/td8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlC87swACgkQk4XaBE3IOsRdZgCgiTwQXm8seL8C0Tgr2ykSMN30
aiYAn1cifPD/CqhwxeEu2zjQGyB3iI0U
=ygFx
-----END PGP SIGNATURE-----

--=-B6+TUMtEb5jDqxxK/td8--



--===============6641317753459257487==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6641317753459257487==--



From xen-devel-bounces@lists.xen.org Mon Dec 03 18:37:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfatP-0004KS-R1; Mon, 03 Dec 2012 18:37:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TfatO-0004KN-Us
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 18:37:23 +0000
Received: from [85.158.139.83:37463] by server-13.bemta-5.messagelabs.com id
	F9/9C-27809-261FCB05; Mon, 03 Dec 2012 18:37:22 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354559841!24337937!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8726 invoked from network); 3 Dec 2012 18:37:21 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-6.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Dec 2012 18:37:21 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:58779 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tfawy-0000N2-Sc; Mon, 03 Dec 2012 19:41:04 +0100
Date: Mon, 3 Dec 2012 19:37:18 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1671133405.20121203193718@eikelenboom.it>
To: =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
In-Reply-To: <50BCD24B.1010608@citrix.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
	device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 3, 2012, 5:24:43 PM, you wrote:

> On 27/11/12 16:17, Stefano Stabellini wrote:
>> 
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> 
>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>> index 9d20086..c40f597 100644
>> --- a/tools/libxl/libxl_create.c
>> +++ b/tools/libxl/libxl_create.c
>> @@ -144,7 +144,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>      if (!b_info->device_model_version) {
>>          if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
>>              b_info->device_model_version =
>> -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
>> +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;

> Is there any way we may keep qemu-traditional as default for NetBSD?
> Upstream Qemu is not working on NetBSD, and I'm afraid it needs some
> heavy patching.

> Could a helper function be added to libxl_{netbsd/linux}.c to decide
> which device model to use?

And shouldn't the example configuration files and documentation be patched as well?

>>          else {
>>              const char *dm;
>>              int rc;
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>> 





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 18:53:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 18:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfb8L-0005Sf-Gg; Mon, 03 Dec 2012 18:52:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Tfb8J-0005SY-Eh
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 18:52:47 +0000
Received: from [85.158.143.35:33247] by server-1.bemta-4.messagelabs.com id
	94/1B-27934-EF4FCB05; Mon, 03 Dec 2012 18:52:46 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354560762!10449391!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22964 invoked from network); 3 Dec 2012 18:52:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 18:52:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,209,1355097600"; d="scan'208";a="46434309"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	03 Dec 2012 18:52:42 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Mon, 3 Dec 2012
	13:52:42 -0500
Message-ID: <50BCF4F9.8010601@citrix.com>
Date: Mon, 3 Dec 2012 18:52:41 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Wei Liu <Wei.Liu2@citrix.com>
References: <1354552154.18784.9.camel@iceland>
In-Reply-To: <1354552154.18784.9.camel@iceland>
X-Originating-IP: [10.80.2.76]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 16:29, Wei Liu wrote:
> Hi all
> 
> There has been discussion on extending number of event channels back in
> September [0].

It seems that the decision has been made to go for this N-level
approach.  Were any other methods considered?

Would a per-VCPU ring of pending events work?  The ABI will be easier to
extend in the future for more event channels.  The guest side code will
be simpler.  It will be easier to fairly service the events as they will
be processed in the order they were raised.

The complexity would be in ensuring that events were not lost due to
lack of space in the ring.  This may make the ring prohibitively large
or require complex or expensive tracking of pending events inside Xen.
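
A rough sketch of the per-VCPU ring idea above -- the names, sizes and layout here are illustrative assumptions, not an existing Xen ABI. Xen would be the producer and the guest VCPU the consumer; each entry is a pending event-channel port, consumed in the order it was raised:

```c
#include <stdint.h>

#define EVT_RING_SIZE 1024              /* must be a power of two */

/* One ring per VCPU, shared between Xen (producer) and guest (consumer). */
struct evtchn_ring {
    uint32_t prod;                      /* written by Xen */
    uint32_t cons;                      /* written by the guest */
    uint32_t port[EVT_RING_SIZE];       /* pending ports, FIFO order */
};

/* Guest side: pop the next pending port, or -1 if the ring is empty.
 * Real code would need memory barriers between reading prod and port[]. */
static int ring_pop(struct evtchn_ring *r)
{
    if (r->cons == r->prod)
        return -1;
    return (int)r->port[r->cons++ & (EVT_RING_SIZE - 1)];
}
```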

> Regarding Jan's comment in [0], I don't think allowing the user to
> specify an arbitrary number of levels is a good idea. Only the last
> level should be shared among vcpus; the other levels should live in a
> percpu struct to allow quicker lookup. Letting the user specify the
> number of levels would be too complicated to implement and would blow
> up the percpu section (since the size grows exponentially). Three
> levels should be quite enough. See the maths below.
> 
> Number of event channels:
>  * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
>  * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 512k
> Basically the third level is a new ABI, so I choose to use unsigned long
> long here to get more event channels.

32-bit guests will have to treat the unsigned long long as two separate
words and iterate over them individually.

This is easy to do -- we have an experimental build of Xen and the
kernel that extends the number of event channels for 32-bit dom0 to 4096
by having each selector bit select a group of 4 words.
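
The selector-group trick described above could be sketched as follows for a 32-bit guest. The names and the scan routine are illustrative assumptions, not the experimental patch itself; with a 32-bit selector and four 32-bit words per selector bit, this covers 32 * 4 * 32 = 4096 event channels:

```c
#include <stdint.h>

#define WORDS_PER_GROUP 4
#define BITS_PER_WORD   32

/* Scan pending[] (32 groups of 4 words = 128 words) guided by the
 * selector; each set selector bit means its whole 4-word group may
 * contain pending events.  Found port numbers are written to out[]
 * in ascending order; returns how many were found. */
static int scan_pending(uint32_t selector, const uint32_t *pending,
                        unsigned *out)
{
    int n = 0;

    for (unsigned bit = 0; bit < 32; bit++) {
        if (!(selector & (1u << bit)))
            continue;                   /* whole 4-word group is idle */
        for (unsigned w = 0; w < WORDS_PER_GROUP; w++) {
            unsigned idx = bit * WORDS_PER_GROUP + w;
            uint32_t val = pending[idx];
            while (val) {
                unsigned b = (unsigned)__builtin_ctz(val);
                out[n++] = idx * BITS_PER_WORD + b;
                val &= val - 1;         /* clear lowest set bit */
            }
        }
    }
    return n;
}
```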

David

> 
> Pages occupied by the third level (if PAGE_SIZE=4k):
>  * 32bit: 64k  / 8 / 4k = 2
>  * 64bit: 512k / 8 / 4k = 16
> 
> Making second level percpu will incur overhead. In fact we move the
> array in shared info into percpu struct:
>  * 32bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 128 byte
>  * 64bit: sizeof(unsigned long) * 8 * sizeof(unsigned long) = 512 byte
> 
> What concerns me is that the struct evtchn buckets are allocated all at
> once during the initialization phase. To save memory inside Xen, the
> internal allocation/free scheme for evtchn needs to be modified. Ian
> suggested we allocate a small number of buckets at start of day and then
> dynamically fault in more as required.
> 
> To sum up:
>      1. Guest should allocate pages for third level evtchn.
>      2. Guest should register third level pages via a new hypercall op.
>      3. Hypervisor should setup third level evtchn in that hypercall op.
>      4. Only last level (third in this case) should be shared among
>         vcpus.
>      5. Need a flexible allocation/free scheme of struct evtchn.
>      6. Debug dumping should use snapshot to avoid holding event lock
>         for too long. (Jan's concern in [0])
> 
> Any comments are welcome.
> 
> 
> Wei.
> 
> [0] http://thread.gmane.org/gmane.comp.emulators.xen.devel/139921

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 19:11:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 19:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfbQH-0005th-KY; Mon, 03 Dec 2012 19:11:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TfbQG-0005tc-DD
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 19:11:20 +0000
Received: from [85.158.139.83:61189] by server-4.bemta-5.messagelabs.com id
	34/4E-15011-759FCB05; Mon, 03 Dec 2012 19:11:19 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354561877!28250539!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27756 invoked from network); 3 Dec 2012 19:11:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 19:11:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,209,1355097600"; d="scan'208";a="216238491"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 19:11:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 14:11:16 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TfbQC-0007Tg-6U;
	Mon, 03 Dec 2012 19:11:16 +0000
Message-ID: <1354561881.21760.7.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 3 Dec 2012 19:11:21 +0000
In-Reply-To: <50BCF4F9.8010601@citrix.com>
References: <1354552154.18784.9.camel@iceland> <50BCF4F9.8010601@citrix.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 18:52 +0000, David Vrabel wrote:
> On 03/12/12 16:29, Wei Liu wrote:
> > Hi all
> > 
> > There has been discussion on extending number of event channels back in
> > September [0].
> 
> It seems that the decision has been made to go for this N-level
> approach.  Were any other methods considered?
> 

Not yet. The discussion is still open.

> Would a per-VCPU ring of pending events work?  The ABI will be easier to
> extend in the future for more event channels.  The guest side code will
> be simpler.  It will be easier to fairly service the events as they will
> be processed in the order they were raised.
> 

Will there be a scenario where we need to raise some evtchn's priority?
The ring approach is completely fair.

> The complexity would be in ensuring that events were not lost due to
> lack of space in the ring.  This may make the ring prohibitively large
> or require complex or expensive tracking of pending events inside Xen.
> 

This also needs to be considered and evaluated...

> > Regarding Jan's comment in [0], I don't think allowing the user to
> > specify an arbitrary number of levels is a good idea. Only the last
> > level should be shared among vcpus; the other levels should live in a
> > percpu struct to allow quicker lookup. Letting the user specify the
> > number of levels would be too complicated to implement and would blow
> > up the percpu section (since the size grows exponentially). Three
> > levels should be quite enough. See the maths below.
> > 
> > Number of event channels:
> >  * 32bit: 1024 * sizeof(unsigned long long) * BITS_PER_BYTE = 64k
> >  * 64bit: 4096 * sizeof(unsigned long long) * BITS_PER_BYTE = 512k
> > Basically the third level is a new ABI, so I choose to use unsigned long
> > long here to get more event channels.
> 
> 32-bit guests will have to treat the unsigned long long as two separate
> words and iterate over them individually.
> 
> This is easy to do -- we have an experimental build of Xen and the
> kernel that extends the number of event channels for 32-bit dom0 to 4096
> by having each selector bit select a group of 4 words.
> 

4096 is not enough, but this idea of each selector bit selecting more
words could be useful. Can you send me your patches? Then I will see
what I can do.


Wei.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 19:33:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 19:33:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfblP-0006jo-UD; Mon, 03 Dec 2012 19:33:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TfblO-0006jg-15
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 19:33:10 +0000
Received: from [85.158.139.83:28574] by server-14.bemta-5.messagelabs.com id
	50/4F-21768-57EFCB05; Mon, 03 Dec 2012 19:33:09 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354563188!28252615!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MTI5MjU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MTI5MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22450 invoked from network); 3 Dec 2012 19:33:08 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 19:33:08 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFIiS0PG3rw
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-067-187.pools.arcor-ip.net [88.65.67.187])
	by smtp.strato.de (josoe mo36) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id x036c3oB3IDLco ;
	Mon, 3 Dec 2012 20:32:57 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 71DB01884C; Mon,  3 Dec 2012 20:32:56 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Mon,  3 Dec 2012 20:32:48 +0100
Message-Id: <1354563168-6609-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.0.1
Cc: Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	jbeulich@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen/blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

be->mode is obtained from xenbus_read, which does a kmalloc for the
message body. The short string is never released, so do it on blkbk
remove.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

!! Not compile tested !!

 drivers/block/xen-blkback/xenbus.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index f58434c..a6585a4 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -366,6 +366,7 @@ static int xen_blkbk_remove(struct xenbus_device *dev)
 		be->blkif = NULL;
 	}
 
+	kfree(be->mode);
 	kfree(be);
 	dev_set_drvdata(&dev->dev, NULL);
 	return 0;
-- 
1.8.0.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 19:48:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 19:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfbzW-0007HV-EG; Mon, 03 Dec 2012 19:47:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <toby.karyadi@gmail.com>) id 1TfbzV-0007HO-KG
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 19:47:45 +0000
Received: from [85.158.138.51:29411] by server-14.bemta-3.messagelabs.com id
	13/74-31424-0E10DB05; Mon, 03 Dec 2012 19:47:44 +0000
X-Env-Sender: toby.karyadi@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354564063!24462931!1
X-Originating-IP: [76.96.62.32]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3Ni45Ni42Mi4zMiA9PiAxNzcyMDE=\n,sa_preprocessor: 
	QmFkIElQOiA3Ni45Ni42Mi4zMiA9PiAxNzcyMDE=\n,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10740 invoked from network); 3 Dec 2012 19:47:43 -0000
Received: from qmta03.westchester.pa.mail.comcast.net (HELO
	qmta03.westchester.pa.mail.comcast.net) (76.96.62.32)
	by server-3.tower-174.messagelabs.com with SMTP;
	3 Dec 2012 19:47:43 -0000
Received: from omta08.westchester.pa.mail.comcast.net ([76.96.62.12])
	by qmta03.westchester.pa.mail.comcast.net with comcast
	id X42E1k00G0Fqzac537njjo; Mon, 03 Dec 2012 19:47:43 +0000
Received: from koalatu.simplecubes.com ([98.204.234.19])
	by omta08.westchester.pa.mail.comcast.net with comcast
	id X7ni1k00o0Rn0nW3U7ni1F; Mon, 03 Dec 2012 19:47:43 +0000
Received: from quoll.simplecubes.com (quoll.simplecubes.com [192.168.33.37])
	by koalatu.simplecubes.com (Postfix) with ESMTP id 97AA28B2;
	Mon,  3 Dec 2012 14:48:45 -0500 (EST)
Message-ID: <50BD01DE.1050809@gmail.com>
Date: Mon, 03 Dec 2012 14:47:42 -0500
From: Toby Karyadi <toby.karyadi@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<20121129185635.GA1045@asim.lip6.fr> <50B8718C.1090405@citrix.com>
	<20121130085241.GC311@asim.lip6.fr> <50B87A7E.5030001@citrix.com>
	<20121130094143.GA10993@asim.lip6.fr> <50B88325.1050009@citrix.com>
	<1354271552.6269.110.camel@zakaz.uk.xensource.com>
	<20121130103823.GA9562@asim.lip6.fr>
	<1354272201.6269.113.camel@zakaz.uk.xensource.com>
	<20121130105058.GA3457@asim.lip6.fr>
In-Reply-To: <20121130105058.GA3457@asim.lip6.fr>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net;
	s=q20121106; t=1354564063;
	bh=1yObbt6lgM/0eTlFbAbpdnwWD7cCbCI2Re8pYeq+RGc=;
	h=Received:Received:Received:Message-ID:Date:From:MIME-Version:To:
	Subject:Content-Type;
	b=fiYTSucqA6rL16rp3bWHcTvvLBaeD0EJjkaTgvN/Px8rxxDV3feRzt/A8sjLSUfFi
	0Zhd5xQ9TgOTQQoHJUP22eQJhbb5d7i2nEliNJduDXUv/abV1+pkpewL4NSSyi0YAf
	HIVL9Es9te9LOhE5mYpeqqcVt/Hugeh+90/a9wZNLpfWRt5UtNy/+kDq3dE0OYJ/5N
	QApMWpgeDK3C99uSPNxZOUiHiRoQ42+Q+DGDPx13uKRC8g3d33bjrFO7N91ZBdrB41
	UyBn8JOybCZRKgRfFa70njjkrMGb55NOhr48xMkeAu5p6afojwl17+CVoIq8TVFC2v
	vh9cPlgTIXpKw==
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/3] Add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/30/12 5:50 AM, Manuel Bouyer wrote:
> On Fri, Nov 30, 2012 at 10:43:21AM +0000, Ian Campbell wrote:
>> On Fri, 2012-11-30 at 10:38 +0000, Manuel Bouyer wrote:
>>> On Fri, Nov 30, 2012 at 10:32:32AM +0000, Ian Campbell wrote:
>>>> libxl only selects the backend itself if the caller doesn't provide one.
>>>> If the caller sets the backend field !=  UNKNOWN then libxl will (try)
>>>> and use it. This field is exposed by xl via the backendtype= key in the
>>>> disk configuration
>>>> http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt
>>> thanks for pointing this out.
>>> I guess qdisk is the qemu backend, and tap would be the in-kernel backend ?
>> qdisk == qemu, tap == blktap, phy == in kernel.
> OK; but then, how does the script called by xenbackendd know what setup
> is should do ? With xm, it would get a string in the form
> phy:/dev/wd0e
> or
> file:/domains/foo.img
>
> but from what I've understant, this syntax is deprecated now ?

Hi Manuel, thanks for all of your work on xen.

So, the short answer is that the /usr/pkg/etc/xen/scripts/block
executable (which is just a shell script) needs to fish the type of the
backend and other extra info out of xenstore. In the case of NetBSD's
block script, it does this with the xenstore-read utility. When the
block script is called, $1 ($xpath) will be an entry such as
/local/domain/0/backend/vbd/2/768, i.e. the vbd for domU instance #2
with disk instance id 768. $2 ($xstatus) is the reason the block script
is called: 2 for startup, 6 for teardown.

If you look at the block script you can see that it reads the type of
the backend from $xpath/type in xenstore, and that it uses the
xenstore-read and xenstore-write utilities to read and modify xenstore.

So, under $xpath here are the pertinent entries:
- $xpath/type
- $xpath/params
- $xpath/physical-disk

With xm+xend, for disk config file:/var/xen/domu/server001/disk.img, 
prior to the block script getting called:
$xpath/type = 'file'
$xpath/params = '/var/xen/domu/server001/disk.img'
$xpath/physical-disk = <unspecified>

Then, when the block script is called, it detects that the type is
'file' and calls vnconfig to attach the file named by $xpath/params to
an unused vnd device, say vnd2. It then uses xenstore-write to set
$xpath/physical-disk to /dev/vnd2. From that point on, everything is
handled as if the backend were an actual physical device, which it now
effectively is.

If the value of $xpath/type is 'phy', nothing needs to be done, since
$xpath/physical-disk has already been set up properly.
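The flow described above can be condensed into a toy sketch. This is a
hedged, self-contained illustration, not NetBSD's actual block script:
store_set/store_get stand in for the real xenstore-write/xenstore-read
utilities, vn_attach stands in for vnconfig, and the paths and device
names are only examples taken from the discussion above.

```shell
#!/bin/sh
# Toy model of the NetBSD block-script flow. store_set/store_get mimic
# xenstore-write/xenstore-read with files; vn_attach mimics vnconfig.
store="/tmp/xenstore-sketch.$$"; mkdir -p "$store"
store_set() { printf '%s' "$2" > "$store/$(printf '%s' "$1" | tr / _)"; }
store_get() { cat "$store/$(printf '%s' "$1" | tr / _)"; }
vn_attach() { echo "/dev/vnd2"; }   # pretend vnconfig picked vnd2

handle_block() {
    xpath=$1 xstatus=$2             # $1 like .../backend/vbd/2/768
    case $xstatus in
    2)  # startup
        case $(store_get "$xpath/type") in
        file)
            dev=$(vn_attach "$(store_get "$xpath/params")")
            store_set "$xpath/physical-disk" "$dev"
            ;;
        phy) ;;                     # physical-disk already correct
        esac ;;
    6)  ;;                          # teardown: would detach the vnd
    esac
}

# Mimic what xm+xend writes before calling the script:
xp=/local/domain/0/backend/vbd/2/768
store_set "$xp/type" file
store_set "$xp/params" /var/xen/domu/server001/disk.img
handle_block "$xp" 2                # status 2 == startup
result=$(store_get "$xp/physical-disk")
echo "$result"                      # prints /dev/vnd2
rm -rf "$store"
```

The real script additionally has to search for an unused vnd device and
undo the vnconfig on teardown, but the xenstore round-trip is the core
of it.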

Now, the above behavior is how xm+xend+xenbackendd works. There was a
bug in libxl/xl, which I fixed as described here
http://mail-index.netbsd.org/port-xen/2012/05/29/msg007252.html, so
that libxl/xl would behave the same way. You might also notice I made
small changes to allow a custom backend type. I don't mean to keep
tooting my own horn; I just don't want the fix to get lost. I've since
learned that I should probably file a bug report with the patch, but I
don't have much time right now.

Hopefully I haven't bored you to death if you've read this far, but the
direction libxl/xl is going seems somewhat ridiculous: they are
inserting more and more policy alongside functionality, as evidenced by
Roger's effort to simply decree (hence a policy) that 'vnd can't work
with a file on NFS, so be done with it and just route everything via
qemu-dm'. Additionally, they are planning to retire xenbackendd, and
again this is a trend I don't like, where everything gets clumped into
one giant ball called libxl/xl, as opposed to a bunch of small
executables doing specific things. Another illustration: you might ask
who manages the domU in the absence of xend; what happens is that when
you run 'xl create ...', xl daemonizes itself to watch xenstore for the
specific domU it just created. I guess that's cool, but to me it feels
like putting everything into one giant, monolithic, inflexible entity
(shudder).

So, I'm just writing to give you some more information about this whole
libxl/xl business, and hopefully it gives you some arsenal so that the
xen folks don't force policies that are claimed to protect the
'end-user', or that are very Linux-specific.

Cheers,
Toby


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 20:54:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 20:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfd24-0000Q3-Nn; Mon, 03 Dec 2012 20:54:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1Tfd23-0000Py-4O
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 20:54:27 +0000
Received: from [85.158.139.83:26818] by server-4.bemta-5.messagelabs.com id
	CE/F1-15011-2811DB05; Mon, 03 Dec 2012 20:54:26 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354568063!16955985!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gOTE1MDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32543 invoked from network); 3 Dec 2012 20:54:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 20:54:25 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3KsHR8009003
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 20:54:18 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3KsHto001273
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 20:54:17 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3KsGVf004475; Mon, 3 Dec 2012 14:54:16 -0600
MIME-Version: 1.0
Message-ID: <6be01733-f6b1-4bdc-8865-ae38a32f4c7d@default>
Date: Mon, 3 Dec 2012 12:54:16 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6665.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I earlier promised a complete analysis of the problem
addressed by the proposed claim hypercall as well as
an analysis of the alternate solutions.  I had not
yet provided these analyses when I asked for approval
to commit the hypervisor patch, so there was still
a good amount of misunderstanding, and I am trying
to fix that here.

I had hoped this essay could be both concise and complete
but quickly found it to be impossible to be both at the
same time.  So I have erred on the side of verbosity,
but also have attempted to ensure that the analysis
flows smoothly and is understandable to anyone interested
in learning more about memory allocation in Xen.
I'd appreciate feedback from other developers to understand
if I've also achieved that goal.

Ian, Ian, George, and Tim -- I have tagged a few
out-of-flow questions to you with [IIGT].  If I lose
you at any point, I'd especially appreciate your feedback
at those points.  I trust that, first, you will read
this completely.  As I've said, I understand that
Oracle's paradigm may differ in many ways from your
own, so I also trust that you will read it completely
with an open mind.

Thanks,
Dan

PROBLEM STATEMENT OVERVIEW

The fundamental problem is a race; two entities are
competing for part or all of a shared resource: in this case,
physical system RAM.  Normally, a lock is used to mediate
a race.

For memory allocation in Xen, there are two significant
entities, the toolstack and the hypervisor.  And, in
general terms, there are currently two important locks:
one used in the toolstack for domain creation;
and one in the hypervisor used for the buddy allocator.

Considering first only domain creation, the toolstack
lock is taken to ensure that domain creation is serialized.
The lock is taken when domain creation starts, and released
when domain creation is complete.

As system and domain memory requirements grow, the amount
of time to allocate all necessary memory to launch a large
domain is growing and may now exceed several minutes, so
this serialization is increasingly problematic.  The result
is a customer reported problem:  If a customer wants to
launch two or more very large domains, the "wait time"
required by the serialization is unacceptable.

Oracle would like to solve this problem.  And Oracle
would like to solve this problem not just for a single
customer sitting in front of a single machine console, but
for the very complex case of a large number of machines,
with the "agent" on each machine taking independent
actions including automatic load balancing and power
management via migration.  (This complex environment
is sold by Oracle today; it is not a "future vision".)

[IIGT] Completely ignoring any possible solutions to this
problem, is everyone in agreement that this _is_ a problem
that _needs_ to be solved with _some_ change in the Xen
ecosystem?

SOME IMPORTANT BACKGROUND INFORMATION

In the subsequent discussion, it is important to
understand a few things:

While the toolstack lock is held, allocating memory for
the domain creation process is done as a sequence of one
or more hypercalls, each asking the hypervisor to allocate
one or more -- "X" -- slabs of physical RAM, where a slab
is 2**N contiguous aligned pages, also known as an
"order N" allocation.  While the hypercall is defined
to work with any value of N, common values are N=0
(individual pages), N=9 ("hugepages" or "superpages"),
and N=18 ("1GiB pages").  So, for example, if the toolstack
requires 201MiB of memory, it will make two hypercalls:
One with X=100 and N=9 (covering 200MiB), and one with
X=256 and N=0 (covering the remaining 1MiB).
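For concreteness, the slab arithmetic can be checked with a quick
shell sketch (assuming the usual 4KiB base page; the numbers are the
ones used in the example above):

```shell
# Order-N slab sizes and the 201 MiB decomposition, with 4 KiB pages.
page_kib=4
for n in 0 9 18; do
    echo "order $n: $(( page_kib << n )) KiB per slab"
done
# 201 MiB: 100 order-9 slabs cover 200 MiB; the remaining 1 MiB
# needs 256 order-0 pages.
total_kib=$(( 100 * (page_kib << 9) + 256 * page_kib ))
echo "total: $(( total_kib / 1024 )) MiB"   # prints: total: 201 MiB
```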

While the toolstack may ask for a smaller number X of
order==9 slabs, system fragmentation may unpredictably
cause the hypervisor to fail the request, in which case
the toolstack will fall back to a request for 512*X
individual pages.  If there is sufficient RAM in the system,
this request for order==0 pages is guaranteed to succeed.
Thus for a 1TiB domain, the hypervisor must be prepared
to allocate up to 256Mi individual pages.

Note carefully that when the toolstack hypercall asks for
100 slabs, the hypervisor "heaplock" is currently taken
and released 100 times.  Similarly, for 256M individual
pages... 256 million spin_lock-alloc_page-spin_unlocks.
This means that domain creation is not "atomic" inside
the hypervisor, which means that races can and will still
occur.

RULING OUT SOME SIMPLE SOLUTIONS

Is there an elegant simple solution here?

Let's first consider the possibility of removing the toolstack
serialization entirely and/or the possibility that two
independent toolstack threads (or "agents") can simultaneously
request a very large domain creation in parallel.  As described
above, the hypervisor's heaplock is insufficient to serialize RAM
allocation, so the two domain creation processes race.  If there
is sufficient resource for either one to launch, but insufficient
resource for both to launch, the winner of the race is indeterminate,
and one or both launches will fail, possibly after one or both 
domain creation threads have been working for several minutes.
This is a classic "TOCTOU" (time-of-check-time-of-use) race.
If a customer is unhappy waiting several minutes to launch
a domain, they will be even more unhappy waiting for several
minutes to be told that one or both of the launches has failed.
Multi-minute failure is even more unacceptable for an automated
agent trying to, for example, evacuate a machine that the
data center administrator needs to powercycle.

[IIGT: Please hold your objections for a moment... the paragraph
above is discussing the simple solution of removing the serialization;
your suggested solution will be discussed soon.]
 
Next, let's consider the possibility of changing the heaplock
strategy in the hypervisor so that the lock is held not
for one slab but for the entire request of N slabs.  As with
any core hypervisor lock, holding the heaplock for a "long time"
is unacceptable.  To a hypervisor, several minutes is an eternity.
And, in any case, by serializing domain creation in the hypervisor,
we have really only moved the problem from the toolstack into
the hypervisor, not solved the problem.

[IIGT] Are we in agreement that these simple solutions can be
safely ruled out?

CAPACITY ALLOCATION VS RAM ALLOCATION

Looking for a creative solution, one may realize that it is the
page allocation -- especially in large quantities -- that is very
time-consuming.  But, thinking outside of the box, it is not the
actual pages of RAM that we are racing on, but the quantity of pages
required to launch a domain!  If we instead have a way to "claim" a
quantity of pages cheaply now and then allocate the actual physical
RAM pages later, we have changed the race to require only
serialization of the claiming process!  In other words, if some
entity knows the number of pages available in the system, and can
"claim" N pages for the benefit of a domain being launched, the
successful launch of the domain can be ensured.  Well... the domain
launch may still fail for an unrelated reason, but not due to a
memory TOCTOU race.  But, in this case, if the cost (in time) of the
claiming process is very small compared to the cost of the domain
launch, we have solved the memory TOCTOU race with hardly any delay
added to a non-memory-related failure that would have occurred anyway.

This "claim" sounds promising.  But we have made an assumption that
an "entity" has certain knowledge.  In the Xen system, that entity
must be either the toolstack or the hypervisor.  Or, in the Oracle
environment, an "agent"... but an agent and a toolstack are similar
enough for our purposes that we will just use the more broadly-used
term "toolstack".  In using this term, however, it's important to
remember it is necessary to consider the existence of multiple
threads within this toolstack.

Now I quote Ian Jackson: "It is a key design principle of a system
like Xen that the hypervisor should provide only those facilities
which are strictly necessary.  Any functionality which can be
reasonably provided outside the hypervisor should be excluded
from it."

So let's examine the toolstack first.

[IIGT] Still all on the same page (pun intended)?

TOOLSTACK-BASED CAPACITY ALLOCATION

Does the toolstack know how many physical pages of RAM are available?
Yes, it can use a hypercall to find out this information after Xen and
dom0 launch, but before it launches any domain.  Then if it subtracts
the number of pages used when it launches a domain and is aware of
when any domain dies, and adds them back, the toolstack has a pretty
good estimate.  In actuality, the toolstack doesn't _really_ know the
exact number of pages used when a domain is launched, but there
is a poorly-documented "fuzz factor"... the toolstack knows the
number of pages within a few megabytes, which is probably close enough.

This is a fairly good description of how the toolstack works today
and the accounting seems simple enough, so does toolstack-based
capacity allocation solve our original problem?  It would seem so.
Even if there are multiple threads, the accounting -- not the extended
sequence of page allocation for the domain creation -- can be
serialized by a lock in the toolstack.  But note carefully: either
the toolstack and the hypervisor must always be in sync on the
number of available pages (within an acceptable margin of error),
or any query to the hypervisor _and_ the toolstack-based claim must
be paired atomically, i.e. the toolstack lock must be held across
both.  Otherwise we again have another TOCTOU race.  Interesting,
but probably not really a problem.
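The accounting argument above can be illustrated with a toy sketch.
This is a hedged model, not real toolstack code: free_pages stands in
for the hypervisor's free-page count, the numbers are invented, and in
a real multi-threaded toolstack the whole body of claim() would have
to run under the toolstack lock, or two creators could both pass the
check (the TOCTOU race).

```shell
# Toy model of toolstack-based capacity allocation.
free_pages=1000

claim() {   # claim $1 pages; prints ok or fail
    # In a real toolstack, this check-and-subtract is the critical
    # section that must be serialized by the toolstack lock.
    want=$1
    if [ "$free_pages" -ge "$want" ]; then
        free_pages=$(( free_pages - want ))
        echo ok
    else
        echo fail
    fi
}

claim 600    # prints ok   (1000 >= 600; 400 pages remain)
claim 600    # prints fail (only 400 pages remain)
```

The claim itself is just arithmetic, which is why it is cheap compared
to actually allocating the pages; the question the rest of this essay
examines is whether the toolstack's count can be kept accurate.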

Wait, isn't it possible for the toolstack to dynamically change the
number of pages assigned to a domain?  Yes, this is often called
ballooning and the toolstack can do this via a hypercall.  But
that's still OK because each call goes through the toolstack and
it simply needs to add more accounting for when it uses ballooning
to adjust the domain's memory footprint.  So we are still OK.

But wait again... that brings up an interesting point.  Are there
any significant allocations that are done in the hypervisor without
the knowledge and/or permission of the toolstack?  If so, the
toolstack may be missing important information.

So are there any such allocations?  Well... yes. There are a few.
Let's take a moment to enumerate them:

A) In Linux, a privileged user can write to a sysfs file, which writes
to the balloon driver, which makes hypercalls from the guest kernel to
the hypervisor, which adjusts the domain memory footprint, which
changes the number of free pages _without_ the toolstack's knowledge.
The toolstack controls constraints (essentially a minimum and maximum)
which the hypervisor enforces.  The toolstack can set the minimum and
maximum to identical values to essentially disallow Linux from using
this functionality.  Indeed, this is precisely what Citrix's Dynamic
Memory Controller (DMC) does: enforce min==max so that DMC always has
complete control of, and thus knowledge of, any domain memory
footprint changes.  But DMC is not prescribed by the toolstack, and
some real Oracle Linux customers use and depend on the flexibility
provided by in-guest ballooning.  So guest-privileged-user-driven
ballooning is a potential issue for toolstack-based capacity
allocation.

[IIGT: This is why I have brought up DMC several times and have
called this the "Citrix model"... I'm not trying to be snippy
or impugn your morals as maintainers.]

B) Xen's page sharing feature has slowly been completed over a number
of recent Xen releases.  It takes advantage of the fact that many
pages often contain identical data; the hypervisor merges them to save
physical RAM.  When any "shared" page is written, the hypervisor
"splits" the page (aka copy-on-write) by allocating a new physical
page.  There is a long history of this feature in other virtualization
products, and it is known that, under many circumstances, thousands of
splits may occur in any fraction of a second.  The hypervisor does not
notify or ask permission of the toolstack.  So page-splitting is an
issue for toolstack-based capacity allocation, at least as currently
coded in Xen.

[Andre: Please hold your objection here until you read further.]

C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
toolstack for over three years.  It depends on an in-guest-kernel
adaptive technique to constantly adjust the domain memory footprint as
well as hooks in the in-guest-kernel to move data to and from the
hypervisor.  While the data is in the hypervisor's care, interesting
memory-load balancing between guests is done, including optional
compression and deduplication.  All of this has been in Xen since 2009
and has been awaiting changes in the (guest-side) Linux kernel. Those
changes are now merged into the mainstream kernel and are fully
functional in shipping distros.

While a complete description of tmem's guest<->hypervisor interaction
is beyond the scope of this document, it is important to understand
that any tmem-enabled guest kernel may unpredictably request thousands
or even millions of pages directly from the hypervisor via hypercalls,
in a fraction of a second, with absolutely no interaction with the
toolstack.  Further, the guest-side hypercalls that allocate pages
via the hypervisor are done in "atomic" code deep in the Linux mm
subsystem.

Indeed, if one truly understands tmem, it should become clear that
tmem is fundamentally incompatible with toolstack-based capacity
allocation. But let's stop discussing tmem for now and move on.

OK.  So with existing code both in Xen and Linux guests, there are
three challenges to toolstack-based capacity allocation.  We'd
really still like to do capacity allocation in the toolstack.  Can
something be done in the toolstack to "fix" these three cases?

Possibly.  But let's first look at hypervisor-based capacity
allocation: the proposed "XENMEM_claim_pages" hypercall.

HYPERVISOR-BASED CAPACITY ALLOCATION

The posted patch for the claim hypercall is quite simple, but let's
look at it in detail.  The claim hypercall is actually a subop
of an existing hypercall.  After checking parameters for validity,
a new function is called in the core Xen memory management code.
This function takes the hypervisor heaplock, checks for a few
special cases, does some arithmetic to ensure a valid claim, stakes
the claim, releases the hypervisor heaplock, and then returns.  To
review from earlier, the hypervisor heaplock protects _all_ page/slab
allocations, so we can be absolutely certain that there are no other
page allocation races.  This new function is about 35 lines of code,
not counting comments.

The patch includes two other significant changes to the hypervisor:
First, when any adjustment to a domain's memory footprint is made
(either through a toolstack-aware hypercall or one of the three
toolstack-unaware methods described above), the heaplock is
taken, arithmetic is done, and the heaplock is released.  This
is 12 lines of code.  Second, when any memory is allocated within
Xen, a check must be made (with the heaplock already held) to
determine if, given a previous claim, the domain has exceeded
its upper bound, maxmem.  This code is a single conditional test.
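To make that arithmetic concrete, here is a minimal Python sketch of
the conditional in isolation.  The function name and parameters are
mine, not the patch's; the real check operates on the hypervisor's
heap counters with the heaplock already held:

```python
def allocation_permitted(free_pages, total_claimed, domain_claim, request):
    """The single conditional test, in spirit: with the heaplock held,
    an allocation may not dip into pages that other domains have
    claimed but not yet allocated."""
    claimed_by_others = total_claimed - domain_claim
    return request <= free_pages - claimed_by_others

# 1000 pages free, 600 of them claimed by another domain:
# an unclaimed domain may allocate at most the 400 unclaimed pages,
assert allocation_permitted(1000, 600, 0, 400)
assert not allocation_permitted(1000, 600, 0, 401)
# while the claimant itself may draw on its own claim.
assert allocation_permitted(1000, 600, 600, 600)
```

The point of the sketch is only that the per-allocation cost is one
subtraction and one comparison, nothing more.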

With some declarations, but not counting the copious comments,
all told, the new code provided by the patch is well under 100 lines.

What about the toolstack side?  First, it's important to note that
the toolstack changes are entirely optional.  If any toolstack
wishes either not to fix the original problem, or to avoid toolstack-
unaware allocation completely by ignoring the functionality provided
by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
not use the new hypercall.  Second, it's very relevant to note that
the Oracle product uses a combination of a proprietary "manager"
which oversees many machines, and the older open-source xm/xend
toolstack, for which the current Xen toolstack maintainers are no
longer accepting patches.

The preface of the published patch does suggest, however, some
straightforward pseudo-code, as follows:

Current toolstack domain creation memory allocation code fragment:

1. call populate_physmap repeatedly to achieve mem=N memory
2. if any populate_physmap call fails, report -ENOMEM up the stack
3. memory is held until domain dies or the toolstack decreases it

Proposed toolstack domain creation memory allocation code fragment
(new code marked with "+"):

+  call claim for mem=N amount of memory
+  if claim succeeds:
1.  call populate_physmap repeatedly to achieve mem=N memory (failsafe)
+  else
2.  report -ENOMEM up the stack
+  claim is held until mem=N is achieved or the domain dies or is
   forced to 0 by a second hypercall
3. memory is held until domain dies or the toolstack decreases it

Reviewing the pseudo-code, one can readily see that the toolstack
changes required to implement the hypercall are quite small.

To complete this discussion, it has been pointed out that
the proposed hypercall doesn't solve the original problem
for certain classes of legacy domains... but also neither
does it make the problem worse.  It has also been pointed
out that the proposed patch is not (yet) NUMA-aware.

Now let's return to the earlier question:  There are three 
challenges to toolstack-based capacity allocation, which are
all handled easily by in-hypervisor capacity allocation. But we'd
really still like to do capacity allocation in the toolstack.
Can something be done in the toolstack to "fix" these three cases?

The answer is, of course, certainly... anything can be done in
software.  So, recalling Ian Jackson's stated requirement:

 "Any functionality which can be reasonably provided outside the
  hypervisor should be excluded from it."

we are now left to evaluate the subjective term "reasonably".

CAN TOOLSTACK-BASED CAPACITY ALLOCATION OVERCOME THE ISSUES?

In earlier discussion on this topic, when page-splitting was raised
as a concern, some of the authors of Xen's page-sharing feature
pointed out that a mechanism could be designed such that "batches"
of pages were pre-allocated by the toolstack and provided to the
hypervisor to be utilized as needed for page-splitting.  Should the
batch run dry, the hypervisor could stop the domain that was provoking
the page-split until the toolstack could be consulted and the
toolstack, at its leisure, could request the hypervisor to refill
the batch, which then allows the page-split-causing domain to proceed.

But this batch page-allocation isn't implemented in Xen today.

Andres Lagar-Cavilla says "... this is because of shortcomings in the
[Xen] mm layer and its interaction with wait queues, documented
elsewhere."  In other words, this batching proposal requires
significant changes to the hypervisor, which I think we
all agreed we were trying to avoid.

[Note to Andre: I'm not objecting to the need for this functionality
for page-sharing to work with proprietary kernels and DMC; just
pointing out that it, too, is dependent on further hypervisor changes.]

Such an approach makes sense in the min==max model enforced by
DMC but, again, DMC is not prescribed by the toolstack.

Further, this waitqueue solution for page-splitting only awkwardly
works around in-guest ballooning (probably only with more hypervisor
changes, TBD) and would be useless for tmem.  [IIGT: Please argue
this last point only if you feel confident you truly understand how
tmem works.]

So this as-yet-unimplemented solution only really solves a part
of the problem.

Are there any other possibilities proposed?  Ian Jackson has
suggested a somewhat different approach:

Let me quote Ian Jackson again:

"Of course if it is really desired to have each guest make its own
decisions and simply for them to somehow agree to divvy up the
available resources, then even so a new hypervisor mechanism is
not needed.  All that is needed is a way for those guests to
synchronise their accesses and updates to shared records of the
available and in-use memory."

Ian then goes on to say:  "I don't have a detailed counter-proposal
design of course..."

This proposal is certainly possible, but I think most would agree that
it would require some fairly massive changes in OS memory management
design that would run contrary to many years of computing history.
It requires guest OS's to cooperate with each other about basic memory
management decisions.  And to work for tmem, it would require
communication from atomic code in the kernel to user-space, then
communication from user-space in a guest to user-space-in-domain0
and then (presumably... I don't have a design either) back again.
One must also wonder what the performance impact would be.

CONCLUDING REMARKS

"Any functionality which can be reasonably provided outside the
  hypervisor should be excluded from it."

I think this document has described a real customer problem and
a good solution that could be implemented either in the toolstack
or in the hypervisor.  Memory allocation in existing Xen functionality
has been shown to interfere significantly with the toolstack-based
solution and suggested partial solutions to those issues either
require even more hypervisor work, or are completely undesigned and,
at least, call into question the definition of "reasonably".

The hypervisor-based solution has been shown to be extremely
simple, fits very logically with existing Xen memory management
mechanisms/code, and has been reviewed through several iterations
by Xen hypervisor experts.

While I understand completely the Xen maintainers' desire to
fend off unnecessary additions to the hypervisor, I believe
XENMEM_claim_pages is a reasonable and natural hypervisor feature
and I hope you will now Ack the patch.

Acknowledgements: Thanks very much to Konrad for his thorough
read-through and for suggestions on how to soften my combative
style which may have alienated the maintainers more than the
proposal itself.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 20:54:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 20:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfd24-0000Q3-Nn; Mon, 03 Dec 2012 20:54:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1Tfd23-0000Py-4O
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 20:54:27 +0000
Received: from [85.158.139.83:26818] by server-4.bemta-5.messagelabs.com id
	CE/F1-15011-2811DB05; Mon, 03 Dec 2012 20:54:26 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354568063!16955985!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gOTE1MDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32543 invoked from network); 3 Dec 2012 20:54:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 20:54:25 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3KsHR8009003
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 20:54:18 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3KsHto001273
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 20:54:17 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3KsGVf004475; Mon, 3 Dec 2012 14:54:16 -0600
MIME-Version: 1.0
Message-ID: <6be01733-f6b1-4bdc-8865-ae38a32f4c7d@default>
Date: Mon, 3 Dec 2012 12:54:16 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6665.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I earlier promised a complete analysis of the problem
addressed by the proposed claim hypercall as well as
an analysis of the alternate solutions.  I had not
yet provided these analyses when I asked for approval
to commit the hypervisor patch, so there was still
a good amount of misunderstanding, and I am trying
to fix that here.

I had hoped this essay could be both concise and complete
but quickly found it to be impossible to be both at the
same time.  So I have erred on the side of verbosity,
but also have attempted to ensure that the analysis
flows smoothly and is understandable to anyone interested
in learning more about memory allocation in Xen.
I'd appreciate feedback from other developers to understand
if I've also achieved that goal.

Ian, Ian, George, and Tim -- I have tagged a few
out-of-flow questions to you with [IIGT].  If I lose
you at any point, I'd especially appreciate your feedback
at those points.  I trust that, first, you will read
this completely.  As I've said, I understand that
Oracle's paradigm may differ in many ways from your
own, so I also trust that you will read it completely
with an open mind.

Thanks,
Dan

PROBLEM STATEMENT OVERVIEW

The fundamental problem is a race; two entities are
competing for part or all of a shared resource: in this case,
physical system RAM.  Normally, a lock is used to mediate
a race.

For memory allocation in Xen, there are two significant
entities, the toolstack and the hypervisor.  And, in
general terms, there are currently two important locks:
one used in the toolstack for domain creation;
and one in the hypervisor used for the buddy allocator.

Considering first only domain creation, the toolstack
lock is taken to ensure that domain creation is serialized.
The lock is taken when domain creation starts, and released
when domain creation is complete.

As system and domain memory requirements grow, the amount
of time to allocate all necessary memory to launch a large
domain is growing and may now exceed several minutes, so
this serialization is increasingly problematic.  The result
is a customer reported problem:  If a customer wants to
launch two or more very large domains, the "wait time"
required by the serialization is unacceptable.

Oracle would like to solve this problem.  And Oracle
would like to solve this problem not just for a single
customer sitting in front of a single machine console, but
for the very complex case of a large number of machines,
with the "agent" on each machine taking independent
actions including automatic load balancing and power
management via migration.  (This complex environment
is sold by Oracle today; it is not a "future vision".)

[IIGT] Completely ignoring any possible solutions to this
problem, is everyone in agreement that this _is_ a problem
that _needs_ to be solved with _some_ change in the Xen
ecosystem?

SOME IMPORTANT BACKGROUND INFORMATION

In the subsequent discussion, it is important to
understand a few things:

While the toolstack lock is held, allocating memory for
the domain creation process is done as a sequence of one
or more hypercalls, each asking the hypervisor to allocate
one or more -- "X" -- slabs of physical RAM, where a slab
is 2**N contiguous aligned pages, also known as an
"order N" allocation.  While the hypercall is defined
to work with any value of N, common values are N=0
(individual pages), N=9 ("hugepages" or "superpages"),
and N=18 ("1GiB pages").  So, for example, if the toolstack
requires 201MiB of memory, it will make two hypercalls:
One with X=100 and N=9, and one with X=256 and N=0.

While the toolstack may ask for a smaller number X of
order==9 slabs, system fragmentation may unpredictably
cause the hypervisor to fail the request, in which case
the toolstack will fall back to a request for 512*X
individual pages.  If there is sufficient RAM in the system,
this request for order==0 pages is guaranteed to succeed.
Thus for a 1TiB domain, the hypervisor must be prepared
to allocate up to 256Mi individual pages.
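Assuming the usual 4KiB x86 page size, the slab arithmetic above can
be checked with a few lines of Python (`hypercall_plan` is my own
illustrative helper, not a real toolstack function):

```python
PAGE_KIB = 4  # assuming 4KiB pages, as on x86

def hypercall_plan(mem_mib, order=9):
    """Split a memory request into X slabs of 2**order pages plus a
    remainder of order-0 pages, mirroring the toolstack's strategy."""
    total_pages = mem_mib * 1024 // PAGE_KIB
    slab_pages = 2 ** order
    return total_pages // slab_pages, total_pages % slab_pages

# 201MiB -> 100 order-9 slabs (200MiB) plus 256 single pages (1MiB)
assert hypercall_plan(201) == (100, 256)

# Fallback for a 1TiB domain built purely from order-0 pages:
# 256Mi individual pages, i.e. 256Mi heaplock/unlock cycles.
assert hypercall_plan(1024 * 1024, order=0) == (256 * 1024 * 1024, 0)
```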

Note carefully that when the toolstack hypercall asks for
100 slabs, the hypervisor "heaplock" is currently taken
and released 100 times.  Similarly, for 256M individual
pages... 256 million spin_lock-alloc_page-spin_unlocks.
This means that domain creation is not "atomic" inside
the hypervisor, which means that races can and will still
occur.

RULING OUT SOME SIMPLE SOLUTIONS

Is there an elegant simple solution here?

Let's first consider the possibility of removing the toolstack
serialization entirely and/or the possibility that two
independent toolstack threads (or "agents") can simultaneously
request a very large domain creation in parallel.  As described
above, the hypervisor's heaplock is insufficient to serialize RAM
allocation, so the two domain creation processes race.  If there
is sufficient resource for either one to launch, but insufficient
resource for both to launch, the winner of the race is indeterminate,
and one or both launches will fail, possibly after one or both 
domain creation threads have been working for several minutes.
This is a classic "TOCTOU" (time-of-check-time-of-use) race.
If a customer is unhappy waiting several minutes to launch
a domain, they will be even more unhappy waiting for several
minutes to be told that one or both of the launches has failed.
Multi-minute failure is even more unacceptable for an automated
agent trying to, for example, evacuate a machine that the
data center administrator needs to powercycle.

[IIGT: Please hold your objections for a moment... the paragraph
above is discussing the simple solution of removing the serialization;
your suggested solution will be discussed soon.]
 
Next, let's consider the possibility of changing the heaplock
strategy in the hypervisor so that the lock is held not
for one slab but for the entire request of N slabs.  As with
any core hypervisor lock, holding the heaplock for a "long time"
is unacceptable.  To a hypervisor, several minutes is an eternity.
And, in any case, by serializing domain creation in the hypervisor,
we have really only moved the problem from the toolstack into
the hypervisor, not solved the problem.

[IIGT] Are we in agreement that these simple solutions can be
safely ruled out?

CAPACITY ALLOCATION VS RAM ALLOCATION

Looking for a creative solution, one may realize that it is the
page allocation -- especially in large quantities -- that is very
time-consuming.  But, thinking outside of the box, it is not
the actual pages of RAM that we are racing on, but the quantity of
pages required to launch a domain!  If we instead have a way to
"claim" a quantity of pages cheaply now and then allocate the actual
physical RAM pages later, we have changed the race to require only
serialization of the claiming process!  In other words, if some entity
knows the number of pages available in the system, and can "claim"
N pages for the benefit of a domain being launched, the successful
launch of the domain can be ensured.  Well... the domain launch may
still fail for an unrelated reason, but not due to a memory TOCTOU
race.  But, in this case, if the cost (in time) of the claiming
process is very small compared to the cost of the domain launch,
we have solved the memory TOCTOU race with hardly any delay added
to a non-memory-related failure that would have occurred anyway.
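The difference can be illustrated with a toy Python simulation.  This
is entirely my own construction, not Xen code: two launches race for
pages one at a time, versus two launches that must first stake a
serialized claim.

```python
def launch_with_claims(free, requests):
    """Serialize only the cheap claim; the outcome of each launch is
    decided up front, so a loser fails in microseconds, not minutes."""
    results, claimed = [], 0
    for want in requests:
        if want <= free - claimed:
            claimed += want
            results.append("claim-ok")       # launch guaranteed to succeed
        else:
            results.append("enomem-early")   # immediate, cheap failure
    return results

def launch_racing(free, requests):
    """No claims: allocate round-robin one page at a time; a loser
    discovers -ENOMEM only after doing most of its allocation work."""
    remaining = list(requests)
    failed = [False] * len(requests)
    while any(remaining):
        for i in range(len(requests)):
            if remaining[i] == 0:
                continue
            if free == 0:
                failed[i], remaining[i] = True, 0   # late failure
                continue
            free -= 1
            remaining[i] -= 1
    return ["enomem-late" if f else "alloc-ok" for f in failed]

# 10 free pages, two 8-page domains: with claims, exactly one launch
# proceeds; without, both do most of their work and then both fail.
assert launch_with_claims(10, [8, 8]) == ["claim-ok", "enomem-early"]
assert launch_racing(10, [8, 8]) == ["enomem-late", "enomem-late"]
```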

This "claim" sounds promising.  But we have made an assumption that
an "entity" has certain knowledge.  In the Xen system, that entity
must be either the toolstack or the hypervisor.  Or, in the Oracle
environment, an "agent"... but an agent and a toolstack are similar
enough for our purposes that we will just use the more broadly-used
term "toolstack".  In using this term, however, it's important to
remember it is necessary to consider the existence of multiple
threads within this toolstack.

Now I quote Ian Jackson: "It is a key design principle of a system
like Xen that the hypervisor should provide only those facilities
which are strictly necessary.  Any functionality which can be
reasonably provided outside the hypervisor should be excluded
from it."

So let's examine the toolstack first.

[IIGT] Still all on the same page (pun intended)?

TOOLSTACK-BASED CAPACITY ALLOCATION

Does the toolstack know how many physical pages of RAM are available?
Yes, it can use a hypercall to find out this information after Xen and
dom0 launch, but before it launches any domain.  Then if it subtracts
the number of pages used when it launches a domain and is aware of
when any domain dies, and adds them back, the toolstack has a pretty
good estimate.  In actuality, the toolstack doesn't _really_ know the
exact number of pages used when a domain is launched, but there
is a poorly-documented "fuzz factor"... the toolstack knows the
number of pages within a few megabytes, which is probably close enough.

This is a fairly good description of how the toolstack works today
and the accounting seems simple enough, so does toolstack-based
capacity allocation solve our original problem?  It would seem so.
Even if there are multiple threads, the accounting -- not the extended
sequence of page allocation for the domain creation -- can be
serialized by a lock in the toolstack.  But note carefully: either
the toolstack and the hypervisor must always be in sync on the
number of available pages (within an acceptable margin of error);
or any query to the hypervisor _and_ the toolstack-based claim must
be paired atomically, i.e. the toolstack lock must be held across
both.  Otherwise we have yet another TOCTOU race.  Interesting,
but probably not really a problem.
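A sketch of this toolstack-side accounting in Python.  The class and
method names are hypothetical; `query_free_pages` stands in for the
real physinfo hypercall, and `fuzz_pages` for the "fuzz factor"
mentioned above:

```python
import threading

class ToolstackAccountant:
    """Toolstack capacity accounting: one lock is held across both the
    free-page query and the claim, closing the TOCTOU window."""
    def __init__(self, query_free_pages, fuzz_pages=1024):
        self._query = query_free_pages   # stand-in for a physinfo hypercall
        self._lock = threading.Lock()
        self._fuzz = fuzz_pages          # margin for the inexact accounting
        self._reserved = 0               # pages promised to in-flight launches

    def try_reserve(self, pages):
        """Pair the query and the claim atomically under one lock."""
        with self._lock:
            available = self._query() - self._reserved - self._fuzz
            if pages > available:
                return False             # report -ENOMEM up the stack
            self._reserved += pages
            return True

    def launch_done(self, pages):
        """Once the pages are really allocated, the hypervisor's own
        free count reflects them, so the reservation is dropped."""
        with self._lock:
            self._reserved -= pages
```

If the query and the reservation were not under the same lock, a
second thread could pass the same capacity check in between, and we
would be back to the original race.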

Wait, isn't it possible for the toolstack to dynamically change the
number of pages assigned to a domain?  Yes, this is often called
ballooning and the toolstack can do this via a hypercall.  But
that's still OK because each call goes through the toolstack and
it simply needs to add more accounting for when it uses ballooning
to adjust the domain's memory footprint.  So we are still OK.

But wait again... that brings up an interesting point.  Are there
any significant allocations that are done in the hypervisor without
the knowledge and/or permission of the toolstack?  If so, the
toolstack may be missing important information.

So are there any such allocations?  Well... yes. There are a few.
Let's take a moment to enumerate them:

A) In Linux, a privileged user can write to a sysfs file which writes
to the balloon driver which makes hypercalls from the guest kernel to
the hypervisor, which adjusts the domain memory footprint, which
changes the number of free pages _without_ the toolstack's knowledge.
The toolstack controls constraints (essentially a minimum and maximum)
which the hypervisor enforces.  The toolstack can ensure that the
minimum and maximum are identical to essentially disallow Linux from
using this functionality.  Indeed, this is precisely what Citrix's
Dynamic Memory Controller (DMC) does: enforce min==max so that DMC
always has complete control and, thus, knowledge of any domain memory
footprint changes.  But DMC is not prescribed by the toolstack,
and some real Oracle Linux customers use and depend on the flexibility
provided by in-guest ballooning.  So guest-privileged-user-driven
ballooning is a potential issue for toolstack-based capacity allocation.

[IIGT: This is why I have brought up DMC several times and have
called this the "Citrix model"... I'm not trying to be snippy
or impugn your morals as maintainers.]

B) Xen's page sharing feature has slowly been completed over a number
of recent Xen releases.  It takes advantage of the fact that many
pages often contain identical data; the hypervisor merges them to save
physical RAM.  When any "shared" page is written, the hypervisor
"splits" the page (aka, copy-on-write) by allocating a new physical
page.  There is a long history of this feature in other virtualization
products and it is known to be possible that, under many circumstances,
thousands of splits may occur in a fraction of a second.  The
hypervisor does not notify or ask permission of the toolstack.
So, page-splitting is an issue for toolstack-based capacity
allocation, at least as currently coded in Xen.

[Andre: Please hold your objection here until you read further.]

C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
toolstack for over three years.  It depends on an in-guest-kernel
adaptive technique to constantly adjust the domain memory footprint as
well as hooks in the in-guest-kernel to move data to and from the
hypervisor.  While the data is in the hypervisor's care, interesting
memory-load balancing between guests is done, including optional
compression and deduplication.  All of this has been in Xen since 2009
and has been awaiting changes in the (guest-side) Linux kernel. Those
changes are now merged into the mainstream kernel and are fully
functional in shipping distros.

While a complete description of tmem's guest<->hypervisor interaction
is beyond the scope of this document, it is important to understand
that any tmem-enabled guest kernel may unpredictably request thousands
or even millions of pages directly from the hypervisor via hypercalls,
in a fraction of a second, with absolutely no interaction with the
toolstack.  Further, the guest-side hypercalls that allocate pages
via the hypervisor are done in "atomic" code deep in the Linux mm
subsystem.

Indeed, if one truly understands tmem, it should become clear that
tmem is fundamentally incompatible with toolstack-based capacity
allocation. But let's stop discussing tmem for now and move on.

OK.  So with existing code both in Xen and Linux guests, there are
three challenges to toolstack-based capacity allocation.  We'd
really still like to do capacity allocation in the toolstack.  Can
something be done in the toolstack to "fix" these three cases?

Possibly.  But let's first look at hypervisor-based capacity
allocation: the proposed "XENMEM_claim_pages" hypercall.

HYPERVISOR-BASED CAPACITY ALLOCATION

The posted patch for the claim hypercall is quite simple, but let's
look at it in detail.  The claim hypercall is actually a subop
of an existing hypercall.  After checking parameters for validity,
a new function is called in the core Xen memory management code.
This function takes the hypervisor heaplock, checks for a few
special cases, does some arithmetic to ensure a valid claim, stakes
the claim, releases the hypervisor heaplock, and then returns.  To
review from earlier, the hypervisor heaplock protects _all_ page/slab
allocations, so we can be absolutely certain that there are no other
page allocation races.  This new function is about 35 lines of code,
not counting comments.

The patch includes two other significant changes to the hypervisor:
First, when any adjustment to a domain's memory footprint is made
(either through a toolstack-aware hypercall or one of the three
toolstack-unaware methods described above), the heaplock is
taken, arithmetic is done, and the heaplock is released.  This
is 12 lines of code.  Second, when any memory is allocated within
Xen, a check must be made (with the heaplock already held) to
determine if, given a previous claim, the domain has exceeded
its upper bound, maxmem.  This code is a single conditional test.
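Under assumptions of my own (a toy heap with simplified bookkeeping;
these are not the patch's actual data structures or names), the
heaplock-protected claim arithmetic might look like this in Python:

```python
import threading

class ToyHeap:
    """Toy model of the hypervisor heap counters guarded by the heaplock."""
    def __init__(self, free_pages):
        self.lock = threading.Lock()   # stands in for the heaplock
        self.free_pages = free_pages
        self.claims = {}               # domid -> pages claimed, not yet allocated

    def claim_pages(self, domid, pages):
        """Stake (or, with pages=0, cancel) a claim: O(1) arithmetic,
        no actual page allocation, heaplock held only briefly."""
        with self.lock:
            outstanding = sum(self.claims.values())
            if pages > self.free_pages - outstanding:
                return False           # insufficient unclaimed memory
            if pages:
                self.claims[domid] = pages
            else:
                self.claims.pop(domid, None)
            return True

    def alloc_pages(self, domid, pages):
        """Allocate real pages; the domain's own claim is drawn down,
        and nobody may consume pages claimed by another domain."""
        with self.lock:
            own_claim = self.claims.get(domid, 0)
            claimed_by_others = sum(self.claims.values()) - own_claim
            if pages > self.free_pages - claimed_by_others:
                return False
            self.free_pages -= pages
            remaining = max(0, own_claim - pages)
            if remaining:
                self.claims[domid] = remaining
            else:
                self.claims.pop(domid, None)
            return True
```

The claim itself touches only counters, which is why staking it is
cheap regardless of how large the domain will be.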

With some declarations, but not counting the copious comments,
all told, the new code provided by the patch is well under 100 lines.

What about the toolstack side?  First, it's important to note that
the toolstack changes are entirely optional.  If any toolstack
wishes either not to fix the original problem, or to avoid toolstack-
unaware allocation completely by ignoring the functionality provided
by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
not use the new hypercall.  Second, it's very relevant to note that
the Oracle product uses a combination of a proprietary "manager"
which oversees many machines, and the older open-source xm/xend
toolstack, for which the current Xen toolstack maintainers are no
longer accepting patches.

The preface of the published patch does suggest, however, some
straightforward pseudo-code, as follows:

Current toolstack domain creation memory allocation code fragment:

1. call populate_physmap repeatedly to achieve mem=N memory
2. if any populate_physmap call fails, report -ENOMEM up the stack
3. memory is held until domain dies or the toolstack decreases it

Proposed toolstack domain creation memory allocation code fragment
(new code marked with "+"):

+  call claim for mem=N amount of memory
+  if claim succeeds:
1.  call populate_physmap repeatedly to achieve mem=N memory (failsafe)
+  else
2.  report -ENOMEM up the stack
+  claim is held until mem=N is achieved or the domain dies or is
   forced to 0 by a second hypercall
3. memory is held until domain dies or the toolstack decreases it

Reviewing the pseudo-code, one can readily see that the toolstack
changes required to implement the hypercall are quite small.
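The proposed flow above can be sketched in Python with a toy
in-memory stand-in for the hypervisor.  `FakeXen`, `claim`, and
`populate_physmap` here are my own illustrative names loosely
wrapping the XENMEM_claim_pages and XENMEM_populate_physmap subops:

```python
class FakeXen:
    """Toy stand-in for the hypervisor's memory subops."""
    def __init__(self, free_pages):
        self.free_pages = free_pages
        self.claims = {}

    def claim(self, domid, pages):
        """XENMEM_claim_pages-like subop; pages=0 cancels the claim."""
        if pages == 0:
            self.claims.pop(domid, None)
            return True
        if pages > self.free_pages - sum(self.claims.values()):
            return False
        self.claims[domid] = pages
        return True

    def populate_physmap(self, domid, pages):
        """Allocate up to 512 pages per call, drawing down any claim."""
        done = min(pages, 512, self.free_pages)
        self.free_pages -= done
        left = max(0, self.claims.get(domid, 0) - done)
        if left:
            self.claims[domid] = left
        else:
            self.claims.pop(domid, None)
        return done

def create_domain(xen, domid, mem_pages):
    """Proposed flow: stake a cheap claim first, then populate."""
    if not xen.claim(domid, mem_pages):
        raise MemoryError("-ENOMEM: insufficient unclaimed memory")
    try:
        remaining = mem_pages
        while remaining:
            done = xen.populate_physmap(domid, remaining)
            if done == 0:              # failsafe; cannot race any more
                raise MemoryError("-ENOMEM during populate")
            remaining -= done
    finally:
        xen.claim(domid, 0)            # force any residual claim to 0
```

Note where the -ENOMEM now surfaces: at the claim, before any of the
long-running populate work has been done.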

To complete this discussion, it has been pointed out that
the proposed hypercall doesn't solve the original problem
for certain classes of legacy domains... but also neither
does it make the problem worse.  It has also been pointed
out that the proposed patch is not (yet) NUMA-aware.

Now let's return to the earlier question:  There are three 
challenges to toolstack-based capacity allocation, which are
all handled easily by in-hypervisor capacity allocation. But we'd
really still like to do capacity allocation in the toolstack.
Can something be done in the toolstack to "fix" these three cases?

The answer is, of course, certainly... anything can be done in
software.  So, recalling Ian Jackson's stated requirement:

 "Any functionality which can be reasonably provided outside the
  hypervisor should be excluded from it."

we are now left to evaluate the subjective term "reasonably".

CAN TOOLSTACK-BASED CAPACITY ALLOCATION OVERCOME THE ISSUES?

In earlier discussion on this topic, when page-splitting was raised
as a concern, some of the authors of Xen's page-sharing feature
pointed out that a mechanism could be designed such that "batches"
of pages were pre-allocated by the toolstack and provided to the
hypervisor to be utilized as needed for page-splitting.  Should the
batch run dry, the hypervisor could stop the domain that was provoking
the page-split until the toolstack could be consulted and the
toolstack, at its leisure, could request the hypervisor to refill
the batch, which then allows the page-split-causing domain to proceed.

But this batch page-allocation isn't implemented in Xen today.

Andres Lagar-Cavilla says "... this is because of shortcomings in the
[Xen] mm layer and its interaction with wait queues, documented
elsewhere."  In other words, this batching proposal requires
significant changes to the hypervisor, which I think we
all agreed we were trying to avoid.

[Note to Andres: I'm not objecting to the need for this functionality
for page-sharing to work with proprietary kernels and DMC; just
pointing out that it, too, is dependent on further hypervisor changes.]

Such an approach makes sense in the min==max model enforced by
DMC but, again, DMC is not prescribed by the toolstack.

Further, this waitqueue solution for page-splitting only awkwardly
works around in-guest ballooning (probably only with more hypervisor
changes, TBD) and would be useless for tmem.  [IIGT: Please argue
this last point only if you feel confident you truly understand how
tmem works.]

So this as-yet-unimplemented solution only really solves a part
of the problem.

Are there any other possibilities proposed?  Ian Jackson has
suggested a somewhat different approach.  Let me quote him again:

"Of course if it is really desired to have each guest make its own
decisions and simply for them to somehow agree to divvy up the
available resources, then even so a new hypervisor mechanism is
not needed.  All that is needed is a way for those guests to
synchronise their accesses and updates to shared records of the
available and in-use memory."

Ian then goes on to say:  "I don't have a detailed counter-proposal
design of course..."

This proposal is certainly possible, but I think most would agree that
it would require some fairly massive changes in OS memory management
design that would run contrary to many years of computing history.
It requires guest OSes to cooperate with each other about basic memory
management decisions.  And to work for tmem, it would require
communication from atomic code in the kernel to user-space, then
communication from user-space in a guest to user-space in domain0,
and then (presumably... I don't have a design either) back again.
One must also wonder what the performance impact would be.

CONCLUDING REMARKS

 "Any functionality which can be reasonably provided outside the
  hypervisor should be excluded from it."

I think this document has described a real customer problem and
a good solution that could be implemented either in the toolstack
or in the hypervisor.  Memory allocation in existing Xen functionality
has been shown to interfere significantly with the toolstack-based
solution, and the suggested partial solutions to those issues either
require even more hypervisor work or are completely undesigned and,
at the least, call into question the definition of "reasonably".

The hypervisor-based solution has been shown to be extremely
simple, fits very logically with existing Xen memory management
mechanisms/code, and has been reviewed through several iterations
by Xen hypervisor experts.

While I understand completely the Xen maintainers' desire to
fend off unnecessary additions to the hypervisor, I believe
XENMEM_claim_pages is a reasonable and natural hypervisor feature
and I hope you will now Ack the patch.

Acknowledgements: Thanks very much to Konrad for his thorough
read-through and for suggestions on how to soften my combative
style which may have alienated the maintainers more than the
proposal itself.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 20:56:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 20:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfd3l-0000V3-Cs; Mon, 03 Dec 2012 20:56:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tfd3j-0000Ux-Tw
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 20:56:12 +0000
Received: from [193.109.254.147:42325] by server-14.bemta-14.messagelabs.com
	id 3C/E4-14517-BE11DB05; Mon, 03 Dec 2012 20:56:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354568167!2372801!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkwMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32059 invoked from network); 3 Dec 2012 20:56:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 20:56:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,209,1355097600"; d="scan'208";a="216251869"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 20:56:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Mon, 3 Dec 2012 15:56:07 -0500
Received: from [10.80.3.80] (helo=iceland)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <Wei.Liu2@citrix.com>)	id
	1Tfd3e-0000PK-MZ; Mon, 03 Dec 2012 20:56:06 +0000
Date: Mon, 3 Dec 2012 20:56:12 +0000
From: Wei Liu <Wei.Liu2@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20121203205612.GA22913@iceland>
References: <1354552154.18784.9.camel@iceland>
 <50BCF4F9.8010601@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BCF4F9.8010601@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: wei.liu2@citrix.com, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 03, 2012 at 06:52:41PM +0000, David Vrabel wrote:
> On 03/12/12 16:29, Wei Liu wrote:
> > Hi all
> > 
> > There has been discussion on extending number of event channels back in
> > September [0].
> 
> It seems that the decision has been made to go for this N-level
> approach.  Were any other methods considered?
> 
> Would a per-VCPU ring of pending events work?  The ABI will be easier to
> extend in the future for more event channels.  The guest side code will
> be simpler.  It will be easier to fairly service the events as they will
> be processed in the order they were raised.
> 
> The complexity would be in ensuring that events were not lost due to
> lack of space in the ring.  This may make the ring prohibitively large
> or require complex or expensive tracking of pending events inside Xen.
> 

If I understand correctly, in the ring model an event will always be
queued up for processing; won't that be overkill? What if event
generation is much faster than event processing?

In the current implementation, one event channel can be raised multiple
times but it is only processed once.
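
The coalescing behaviour can be modelled with a toy pending bitmap
(port numbers and the 64-channel limit here are illustrative, not the
real ABI):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the current scheme: raising a channel just sets a
 * pending bit, so N raises before the guest runs collapse into a
 * single delivery. */
static uint64_t pending;

static void raise_evtchn(unsigned port)
{
    pending |= 1ULL << port;             /* idempotent: no queue growth */
}

/* Returns how many distinct channels the guest services this pass. */
static int process_pending(void)
{
    int handled = 0;
    for (unsigned port = 0; port < 64; port++) {
        if (pending & (1ULL << port)) {
            pending &= ~(1ULL << port);  /* clear, then handle once */
            handled++;
        }
    }
    return handled;
}
```

A ring, by contrast, would queue one entry per raise, which is where
the overflow concern comes from.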


Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 21:07:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 21:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfdDp-0000vm-Gs; Mon, 03 Dec 2012 21:06:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.carpenter@oracle.com>) id 1TfdDn-0000vh-KE
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 21:06:35 +0000
Received: from [85.158.137.99:13979] by server-16.bemta-3.messagelabs.com id
	B3/22-07461-A541DB05; Mon, 03 Dec 2012 21:06:34 +0000
X-Env-Sender: dan.carpenter@oracle.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354568792!12620630!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMDA2ODc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9073 invoked from network); 3 Dec 2012 21:06:34 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 21:06:34 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3L6SEc015884
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 21:06:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3L6SjC021284
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 21:06:28 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3L6RbN013377; Mon, 3 Dec 2012 15:06:27 -0600
Received: from elgon.mountain (/41.212.103.53)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Dec 2012 13:06:26 -0800
Date: Tue, 4 Dec 2012 00:06:17 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: roger.pau@citrix.com
Message-ID: <20121203210617.GA1513@elgon.mountain>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, virtualization@lists.linux-foundation.org
Subject: Re: [Xen-devel] xen/blkback: Persistent grant maps for xen blk
	drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Roger Pau Monne,

The patch 0a8704a51f38: "xen/blkback: Persistent grant maps for xen 
blk drivers" from Oct 24, 2012, leads to the following warning:
drivers/block/xen-blkfront.c:807 blkif_free()
	 warn: 'persistent_gnt' was already freed.

   807                  llist_for_each_entry(persistent_gnt, all_gnts, node) {
   808                          gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
   809                          __free_page(pfn_to_page(persistent_gnt->pfn));
   810                          kfree(persistent_gnt);
                                      ^^^^^^^^^^^^^^
We dereference this to find the next element in the list.  It will work
if you don't have poisoning enabled and the memory is not reused
immediately by another process.  In other words, without poisoning this
causes rare, hard-to-debug crashes; with poisoning enabled it becomes
an easy-to-debug crash.

   811                  }
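
The usual cure is a "safe" iterator that caches the next pointer before
the body frees the node (in the kernel, the llist_for_each_entry_safe()
helper).  The same discipline in a self-contained toy, with plain
malloc/free standing in for the grant bookkeeping:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for the persistent-grant llist nodes; the point is
 * the iteration pattern, not the grant bookkeeping. */
struct gnt { struct gnt *next; };

static struct gnt *push(struct gnt *head)
{
    struct gnt *g = malloc(sizeof(*g));
    g->next = head;
    return g;
}

/* Frees every node without ever touching freed memory: the successor
 * is saved before free() (kfree() in the driver) runs. */
static int free_all(struct gnt *head)
{
    int n = 0;
    struct gnt *next;
    for (struct gnt *g = head; g; g = next) {
        next = g->next;   /* save before the node is freed */
        free(g);
        n++;
    }
    return n;
}
```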

regards,
dan carpenter


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 21:12:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 21:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfdJ9-00013o-9d; Mon, 03 Dec 2012 21:12:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.carpenter@oracle.com>) id 1TfdJ7-00013j-OO
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 21:12:06 +0000
Received: from [85.158.139.211:32490] by server-3.bemta-5.messagelabs.com id
	BA/E4-18736-5A51DB05; Mon, 03 Dec 2012 21:12:05 +0000
X-Env-Sender: dan.carpenter@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1354569123!17007891!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gOTI3Mzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6794 invoked from network); 3 Dec 2012 21:12:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 21:12:04 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3LBxho026623
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 21:12:00 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3LBwaI021302
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 21:11:59 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3LBwco013374; Mon, 3 Dec 2012 15:11:58 -0600
Received: from elgon.mountain (/41.212.103.53)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Dec 2012 13:11:57 -0800
Date: Tue, 4 Dec 2012 00:11:48 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: roger.pau@citrix.com
Message-ID: <20121203211148.GA3335@elgon.mountain>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, virtualization@lists.linux-foundation.org
Subject: Re: [Xen-devel] xen-blkback: move free persistent grants code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Roger Pau Monne,

The patch 4d4f270f1880: "xen-blkback: move free persistent grants
code" from Nov 16, 2012, leads to the following warning:
drivers/block/xen-blkback/blkback.c:238 free_persistent_gnts()
	 warn: 'persistent_gnt' was already freed.

drivers/block/xen-blkback/blkback.c
   232                  pages[segs_to_unmap] = persistent_gnt->page;
   233                  rb_erase(&persistent_gnt->node, root);
   234                  kfree(persistent_gnt);
                        ^^^^^^^^^^^^^^^^^^^^
kfree();

   235                  num--;
   236  
   237                  if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
   238                          !rb_next(&persistent_gnt->node)) {
                                         ^^^^^^^^^^^^^^^^^^^^^
Dereferenced inside the call to rb_next().

   239                          ret = gnttab_unmap_refs(unmap, NULL, pages,
   240                                  segs_to_unmap);

regards,
dan carpenter


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 21:15:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 21:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfdLh-0001FE-S1; Mon, 03 Dec 2012 21:14:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.carpenter@oracle.com>) id 1TfdLg-0001F5-HW
	for xen-devel@lists.xensource.com; Mon, 03 Dec 2012 21:14:44 +0000
Received: from [85.158.143.35:12230] by server-1.bemta-4.messagelabs.com id
	24/FD-27934-3461DB05; Mon, 03 Dec 2012 21:14:43 +0000
X-Env-Sender: dan.carpenter@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1354569279!15968527!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gOTE1MDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20838 invoked from network); 3 Dec 2012 21:14:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 21:14:43 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3LEafG029307
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 21:14:36 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3LEZIq004011
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 21:14:35 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3LEZ0V014885; Mon, 3 Dec 2012 15:14:35 -0600
Received: from mwanda (/41.212.103.53) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Dec 2012 13:14:34 -0800
Date: Tue, 4 Dec 2012 00:14:26 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: roger.pau@citrix.com
Message-ID: <20121203211425.GB22569@mwanda>
References: <20121203211148.GA3335@elgon.mountain>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121203211148.GA3335@elgon.mountain>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, virtualization@lists.linux-foundation.org
Subject: Re: [Xen-devel] xen-blkback: move free persistent grants code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 12:11:48AM +0300, Dan Carpenter wrote:
> Hello Roger Pau Monne,
> 
> The patch 4d4f270f1880: "xen-blkback: move free persistent grants
> code" from Nov 16, 2012, leads to the following warning:
> drivers/block/xen-blkback/blkback.c:238 free_persistent_gnts()
> 	 warn: 'persistent_gnt' was already freed.
> 
> drivers/block/xen-blkback/blkback.c
>    232                  pages[segs_to_unmap] = persistent_gnt->page;
>    233                  rb_erase(&persistent_gnt->node, root);
>    234                  kfree(persistent_gnt);
>                         ^^^^^^^^^^^^^^^^^^^^
> kfree();
> 

Also, persistent_gnt is the list iterator inside a foreach_grant()
loop.  It needs a _safe() variant, like list_for_each_safe(), which
saves the next entry in the list at the start of each iteration so
we don't dereference a freed entry.
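The pattern Smatch is complaining about, and the _safe() fix, can be sketched in plain userspace C on a singly linked list (a stand-in for the driver's red-black tree; make_list() and free_all() are illustrative helpers, not kernel code):

```c
#include <stdlib.h>

struct node {
    int id;
    struct node *next;
};

/* Illustrative helper: build the list 1..n. */
static struct node *make_list(int n)
{
    struct node *head = NULL;
    while (n > 0) {
        struct node *e = malloc(sizeof(*e));
        e->id = n--;
        e->next = head;
        head = e;
    }
    return head;
}

/*
 * Buggy shape (what the warning points at):
 *
 *     for (cur = head; cur; cur = cur->next)
 *         free(cur);               // cur->next is read after free(cur)
 *
 * Safe shape, mirroring list_for_each_safe(): save ->next before
 * freeing the current entry, so the freed entry is never dereferenced.
 */
static int free_all(struct node *head)
{
    struct node *cur = head, *next;
    int freed = 0;

    while (cur) {
        next = cur->next;   /* saved while cur is still valid */
        free(cur);
        freed++;
        cur = next;
    }
    return freed;
}
```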

regards,
dan carpenter


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 21:28:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 21:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfdZ5-0001iO-9s; Mon, 03 Dec 2012 21:28:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1TfdZ3-0001iJ-RP
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 21:28:34 +0000
Received: from [85.158.138.51:25542] by server-14.bemta-3.messagelabs.com id
	C5/8B-31424-C791DB05; Mon, 03 Dec 2012 21:28:28 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354570105!24471382!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gOTE1MDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6730 invoked from network); 3 Dec 2012 21:28:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Dec 2012 21:28:27 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB3LSKR0010456
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Dec 2012 21:28:21 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB3LSJcZ028407
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Dec 2012 21:28:20 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB3LSJ0t024336; Mon, 3 Dec 2012 15:28:19 -0600
MIME-Version: 1.0
Message-ID: <c50f9fe3-8e72-4417-a414-62c3ecd2a005@default>
Date: Mon, 3 Dec 2012 13:28:18 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
References: <cd59ddd0-be45-47e3-b762-a5d630c2df43@default>
	<50B74637.9030309@citrix.com>
In-Reply-To: <50B74637.9030309@citrix.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6665.5003 (x86)]
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Wilk <konrad.wilk@oracle.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Matthew Daley <mattjd@gmail.com>, TimDeegan <tim@xen.org>,
	xen-devel@lists.xen.org, Jan Beulich <JBeulich@suse.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>
Subject: Re: [Xen-devel] [PATCH v8 1/2] hypervisor: XENMEM_claim_pages
 (subop of existing) hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: David Vrabel [mailto:david.vrabel@citrix.com]

Hi David --

Thanks for your reply!
 
> On 28/11/12 15:50, Dan Magenheimer wrote:
> > This is patch 1of2 of an eighth cut of the patch of the proposed
> > XENMEM_claim_pages hypercall/subop, taking into account review
> > feedback from Jan and Keir and IanC and Matthew Daley, plus some
> > fixes found via runtime debugging (using printk and privcmd only).
> >
> [...]
> >
> > Proposed:
> > - call claim for mem=N amount of memory
> > - if claim succeeds:
> >     call populate_physmap repeatedly to achieve mem=N memory (failsafe)
> >   else
> >     report -ENOMEM up the stack
> > - claim is held until mem=N is achieved or the domain dies or
> >    the toolstack changes it to 0
> > - memory is held until domain dies or the toolstack decreases it
> 
> There is no mechanism for per-NUMA node claim.  Isn't this needed?

It would be a useful extension, but not a necessary one;
IIUC a domain creation succeeds even if optimal NUMA
positioning is not available.  As is, the proposed XENMEM_claim_pages
patch does exactly the same.  If there is a domain creation
option that forces creation to fail when optimal NUMA
positioning is _not_ available, XENMEM_claim_pages
has flag fields to carry the same requirement, so an extension
should be easy to add.
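For reference, the quoted claim/populate flow amounts to roughly the following sketch; xc_claim_pages() and xc_populate_physmap() are hypothetical stand-ins for the real toolstack/hypercall interfaces, stubbed here to always succeed:

```c
#include <errno.h>

/* Hypothetical stubs: pretend the host can satisfy the claim. */
static int xc_claim_pages(unsigned long nr_pages) { (void)nr_pages; return 0; }
static long xc_populate_physmap(unsigned long nr_pages) { return (long)nr_pages; }

static int build_domain_memory(unsigned long mem_n_pages)
{
    /* Claim mem=N up front; the race against other allocations is
     * decided here, first-come-first-served. */
    if (xc_claim_pages(mem_n_pages) < 0)
        return -ENOMEM;                 /* report -ENOMEM up the stack */

    /* With the claim held, populating cannot fail for lack of memory;
     * call populate_physmap repeatedly until mem=N is reached. */
    unsigned long done = 0;
    while (done < mem_n_pages)
        done += (unsigned long)xc_populate_physmap(mem_n_pages - done);

    /* The claim lapses once mem=N is achieved (or the domain dies, or
     * the toolstack resets it to 0). */
    xc_claim_pages(0);
    return 0;
}
```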

> More fundamentally, doesn't this approach result in a worse user
> experience?  It's guaranteeing that a new VM can be started but at the
> expense of existing VMs on that node.

Well, we are talking about a race, and somebody has to win.
Traditionally, software races are decided first-come-first-served.
That's exactly how the proposed XENMEM_claim_pages works.

If you have a chance, please read the document I just posted
(Proposed XENMEM_claim_pages hypercall: Analysis of problems
and alternate solutions).

> When making a VM placement decision, the toolstack needs to consider the
> future memory requirements of the new and existing VMs on the host, and
> not just the current (or, more correctly, the recent) memory usage.
> 
> It seems more useful to me to have the toolstack (for example) track
> historical memory usage of a VM to allow it to make better predictions
> about memory usage.  With a better prediction, the number of failed VM
> creates due to memory shortage will be minimized.  Then, combined with
> reducing the cost of a VM create by optimizing the allocator, the cost
> of occasionally failing a create will be minimal.
> 
> For example, Sally starts her CAD application at 9am, tripling her
> desktop VM instance's memory usage.  If at 08:58 the toolstack claimed
> most of the remaining memory for a new VM, then Sally's VM is going to
> grind to a halt as it swaps to death.
> 
> If the toolstack could predict that the desktop instance's memory usage
> was about to spike (because it had historical data showing this), it
> could have selected a different host and Sally's VM would perform as
> expected.

You are drifting the thread a bit here, but...

The last 4+ years of my life have been built on the fundamental
assumption that nobody, not even the guest kernel itself,
can adequately predict when memory usage is going to spike.
Accurate inference from an external entity across potentially dozens
of VMs is IMHO.... well... um... unlikely.  I could be wrong,
but I believe that, even in academia, no realistic research
solution has been proposed for this.  (If I'm wrong, please send a pointer.)

If one accepts this assumption as true, one must instead
plan to be able to adapt very dynamically when spikes
occur.  That's what tmem does to solve Sally's problem,
though admittedly tmem doesn't work for proprietary guest
kernels. +1 for open source. ;-)

Thanks,
Dan

P.S. If you'd like to learn more about tmem, please let me know,
as it is now available in Fedora and Ubuntu guests as well as
Oracle Linux (and, of course, Xen itself).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 21:34:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 21:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfdej-0001rB-3m; Mon, 03 Dec 2012 21:34:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tfdei-0001r5-B2
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 21:34:24 +0000
Received: from [85.158.138.51:5420] by server-11.bemta-3.messagelabs.com id
	2C/19-19361-FDA1DB05; Mon, 03 Dec 2012 21:34:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1354570462!27466985!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5124 invoked from network); 3 Dec 2012 21:34:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 21:34:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,209,1355097600"; d="scan'208";a="16133330"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 21:34:06 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	21:34:06 +0000
Message-ID: <50BD1ACD.2030001@citrix.com>
Date: Mon, 3 Dec 2012 22:34:05 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Toby Karyadi <toby.karyadi@gmail.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<20121129185635.GA1045@asim.lip6.fr> <50B8718C.1090405@citrix.com>
	<20121130085241.GC311@asim.lip6.fr> <50B87A7E.5030001@citrix.com>
	<20121130094143.GA10993@asim.lip6.fr> <50B88325.1050009@citrix.com>
	<1354271552.6269.110.camel@zakaz.uk.xensource.com>
	<20121130103823.GA9562@asim.lip6.fr>
	<1354272201.6269.113.camel@zakaz.uk.xensource.com>
	<20121130105058.GA3457@asim.lip6.fr> <50BD01DE.1050809@gmail.com>
In-Reply-To: <50BD01DE.1050809@gmail.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/3] Add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 20:47, Toby Karyadi wrote:
> On 11/30/12 5:50 AM, Manuel Bouyer wrote:
>> On Fri, Nov 30, 2012 at 10:43:21AM +0000, Ian Campbell wrote:
>>> On Fri, 2012-11-30 at 10:38 +0000, Manuel Bouyer wrote:
>>>> On Fri, Nov 30, 2012 at 10:32:32AM +0000, Ian Campbell wrote:
>>>>> libxl only selects the backend itself if the caller doesn't provide one.
>>>>> If the caller sets the backend field != UNKNOWN then libxl will (try to)
>>>>> use it. This field is exposed by xl via the backendtype= key in the
>>>>> disk configuration:
>>>>> http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt
>>>> thanks for pointing this out.
>>>> I guess qdisk is the qemu backend, and tap would be the in-kernel backend ?
>>> qdisk == qemu, tap == blktap, phy == in kernel.
>> OK; but then, how does the script called by xenbackendd know what setup
>> it should do? With xm, it would get a string in the form
>> phy:/dev/wd0e
>> or
>> file:/domains/foo.img
>>
>> but from what I've understood, this syntax is deprecated now?
> 
> Hi Manuel, thanks for all of your work on xen.
> 
> So, the short answer is that the /usr/pkg/etc/xen/scripts/block
> executable (which is just a shell script) needs to fish the type of the
> backend and other extra info out of xenstore. In the case of NetBSD's
> block script, it uses the xenstore-read utility. When the block script
> is called, $1 ($xpath) will be an entry such as
> /local/domain/0/backend/vbd/2/768; this is the vbd for domU instance #2
> with disk instance id 768. $2 ($xstatus) is the reason the block script
> is called: 2 for startup, 6 for tear-down.
> 
> If you look at the block script you can also see that it needs to
> fish out the type of the backend, which is located at $xpath/type
> in xenstore. The block script uses the xenstore-read and
> xenstore-write utilities to read and modify xenstore.
> 
> So, under $xpath here are the pertinent entries:
> - $xpath/type
> - $xpath/params
> - $xpath/physical-disk
> 
> With xm+xend, for disk config file:/var/xen/domu/server001/disk.img, 
> prior to the block script getting called:
> $xpath/type = 'file'
> $xpath/params = '/var/xen/domu/server001/disk.img'
> $xpath/physical-disk = <unspecified>
> 
> Then when the block script is called, it detects that the type is 'file',
> and so it calls vnconfig with the value of $xpath/params as the
> backing file, using an unused vnd device, say vnd2. Then it modifies
> $xpath/physical-disk with the xenstore-write utility, setting it
> to /dev/vnd2. After that, everything else is handled as if the
> backend were an actual physical device, which at that point it is.
> 
> If the value of $xpath/type is 'phy', nothing needs to be done, since
> $xpath/physical-disk has already been set up properly.
> 
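For concreteness, the decision logic described above can be sketched in C with xenstore mocked as an in-memory key/value table; xs_read() and xs_write() stand in for the xenstore-read/xenstore-write utilities, and /dev/vnd2 is an illustrative device choice, not what any real script would hardcode:

```c
#include <stdio.h>
#include <string.h>

/* Mock xenstore: a tiny in-memory key/value table. */
static struct { char key[64]; char val[128]; } store[8];
static int nkeys;

static void xs_write(const char *key, const char *val)
{
    for (int i = 0; i < nkeys; i++)
        if (strcmp(store[i].key, key) == 0) {
            snprintf(store[i].val, sizeof store[i].val, "%s", val);
            return;
        }
    snprintf(store[nkeys].key, sizeof store[nkeys].key, "%s", key);
    snprintf(store[nkeys].val, sizeof store[nkeys].val, "%s", val);
    nkeys++;
}

static const char *xs_read(const char *key)
{
    for (int i = 0; i < nkeys; i++)
        if (strcmp(store[i].key, key) == 0)
            return store[i].val;
    return NULL;
}

/* What the block script does for one $xpath: publish physical-disk. */
static const char *block_script(const char *xpath)
{
    char k[96];

    snprintf(k, sizeof k, "%s/type", xpath);
    const char *type = xs_read(k);

    snprintf(k, sizeof k, "%s/physical-disk", xpath);
    if (type && strcmp(type, "file") == 0)
        /* Real script: vnconfig an unused vnd device onto
         * $xpath/params, then publish it; vnd2 is illustrative. */
        xs_write(k, "/dev/vnd2");
    /* type == "phy": nothing to do, physical-disk is already correct. */

    return xs_read(k);
}
```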
> Now, the above behavior is how xm+xend+xenbackendd works. There was a
> bug in libxl/xl that I fixed, as described here:
> http://mail-index.netbsd.org/port-xen/2012/05/29/msg007252.html, so that
> libxl/xl would behave the same way. You might also notice I made small
> changes to allow a custom backend type. I don't mean to keep tooting my
> own horn, but I just don't want the fix to get lost; I just
> learned that I probably should file a bug report with the patch, but I
> don't have much time right now.
> 
> Hopefully I haven't bored you to death if you've read this far, but the
> way libxl/xl is going seems somewhat ridiculous: they are trying
> to insert more and more policy vs functionality, as evidenced by Roger's
> effort to outright decide (hence a policy) that 'well, vnd

I don't understand this whole argument about "policy vs functionality";
from my point of view, what we have now is also a policy: every raw disk
file is attached using the vnd device.

And adding a gntdev certainly isn't a policy; on the other hand, it is
going to give us much more functionality (like the ability to run
backends in userspace, for example supporting blktap3 when it is released).

> can't work with a file on NFS, so be damned with it and just route
> everything via qemu-dm'.

Again, as discussed with Manuel, a better solution has to be found for
this issue, and I'm sure we can reach a consensus.

> Additionally, they are planning to retire
> xenbackendd, and again this is a trend that I don't like, where they clump
> everything into one giant ball called libxl/xl, as opposed to a

The retirement of xenbackendd is done for a good reason: calling hotplug
scripts from libxl allows better control over when hotplug scripts are
called, and also allows better error handling when these scripts fail.
The same scripts are called, so functionality stays the same.

This was done for both NetBSD and Linux; in the past Linux used to call
hotplug scripts from udev, and now they are called from libxl too. I'm
not able to see how a more unified hotplug script interface can
be a bad thing.

> bunch of little executables doing specific things. Another illustration:
> you might ask who's managing the domU in the absence of xend; well, what
> happens is that when you do 'xl create ...', xl daemonizes
> itself to watch xenstore for the specific domU it just created. I
> guess that's cool, but to me it has the feel of putting everything into
> one giant, monolithic, inflexible entity (shudder).

Again, I'm not able to see how this is a problem. In the past you had
xend, which was a gigantic piece of Python code acting as a central
arbiter; now you have a small C daemon for each running domain. libxl is
generally considered a better piece of code, and is much easier to
maintain than xend.

Do you have any concrete technical reason to believe that xend was better
than libxl?

> So, I'm just writing to give you some more information about this whole
> libxl/xl stuff; hopefully it'll give you some arsenal so that some
> of the Xen folks don't try to force policies that are claimed to protect
> the 'end-user', or that are very Linux-specific.

libxl has even split OS-specific routines into separate files, both for
Linux and NetBSD, and frankly I'm not able to see how that is worse than
the amount of patches we had in pkgsrc to get xend working. Now libxl
works out of the box on NetBSD, and much effort has been put into that.

We should focus our efforts on getting Xen to work on NetBSD without a
ton of NetBSD-specific out-of-tree patches, and right now my biggest
concern is getting upstream QEMU working.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 21:34:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 21:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfdej-0001rB-3m; Mon, 03 Dec 2012 21:34:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tfdei-0001r5-B2
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 21:34:24 +0000
Received: from [85.158.138.51:5420] by server-11.bemta-3.messagelabs.com id
	2C/19-19361-FDA1DB05; Mon, 03 Dec 2012 21:34:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1354570462!27466985!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDYzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5124 invoked from network); 3 Dec 2012 21:34:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Dec 2012 21:34:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,209,1355097600"; d="scan'208";a="16133330"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Dec 2012 21:34:06 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Mon, 3 Dec 2012
	21:34:06 +0000
Message-ID: <50BD1ACD.2030001@citrix.com>
Date: Mon, 3 Dec 2012 22:34:05 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Toby Karyadi <toby.karyadi@gmail.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<20121129185635.GA1045@asim.lip6.fr> <50B8718C.1090405@citrix.com>
	<20121130085241.GC311@asim.lip6.fr> <50B87A7E.5030001@citrix.com>
	<20121130094143.GA10993@asim.lip6.fr> <50B88325.1050009@citrix.com>
	<1354271552.6269.110.camel@zakaz.uk.xensource.com>
	<20121130103823.GA9562@asim.lip6.fr>
	<1354272201.6269.113.camel@zakaz.uk.xensource.com>
	<20121130105058.GA3457@asim.lip6.fr> <50BD01DE.1050809@gmail.com>
In-Reply-To: <50BD01DE.1050809@gmail.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/3] Add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 20:47, Toby Karyadi wrote:
> On 11/30/12 5:50 AM, Manuel Bouyer wrote:
>> On Fri, Nov 30, 2012 at 10:43:21AM +0000, Ian Campbell wrote:
>>> On Fri, 2012-11-30 at 10:38 +0000, Manuel Bouyer wrote:
>>>> On Fri, Nov 30, 2012 at 10:32:32AM +0000, Ian Campbell wrote:
>>>>> libxl only selects the backend itself if the caller doesn't provide one.
>>>>> If the caller sets the backend field !=  UNKNOWN then libxl will (try)
>>>>> and use it. This field is exposed by xl via the backendtype= key in the
>>>>> disk configuration
>>>>> http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt
>>>> thanks for pointing this out.
>>>> I guess qdisk is the qemu backend, and tap would be the in-kernel backend ?
>>> qdisk == qemu, tap == blktap, phy == in kernel.
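For reference, that backendtype selection is written in the xl disk
specification like so (an illustrative line; the vdev and device path
are made up):

```
disk = [ 'vdev=xvda, access=rw, backendtype=phy, target=/dev/wd0e' ]
```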
>> OK; but then, how does the script called by xenbackendd know what setup
>> it should do? With xm, it would get a string in the form
>> phy:/dev/wd0e
>> or
>> file:/domains/foo.img
>>
>> but from what I understand, this syntax is deprecated now?
> 
> Hi Manuel, thanks for all of your work on xen.
> 
> So, the short answer is that the /usr/pkg/etc/xen/scripts/block 
> executable (which is just a shell script) needs to fish the type of the 
> backend and other extra info out of xenstore. In the case of NetBSD's 
> block script, it uses the xenstore-read utility. When the block script 
> is called, $1 ($xpath) will be an entry like 
> /local/domain/0/backend/vbd/2/768; that is, a vbd for domU #2 with disk 
> instance id 768. $2 ($xstatus) is the reason the block script is 
> called: 2 for startup and 6 for teardown.
> 
> If you look at the block script you can see that it fishes out the 
> type of the backend from $xpath/type in xenstore, using the 
> xenstore-read and xenstore-write utilities to read and modify xenstore.
> 
> So, under $xpath here are the pertinent entries:
> - $xpath/type
> - $xpath/params
> - $xpath/physical-disk
> 
> With xm+xend, for disk config file:/var/xen/domu/server001/disk.img, 
> prior to the block script getting called:
> $xpath/type = 'file'
> $xpath/params = '/var/xen/domu/server001/disk.img'
> $xpath/physical-disk = <unspecified>
> 
> Then when the block script is called, it detects that the type is 
> 'file' and calls vnconfig to attach the file named by $xpath/params to 
> an unused vnd device, say vnd2. It then uses the xenstore-write utility 
> to set $xpath/physical-disk to /dev/vnd2. After that, everything else 
> is handled as if the backend were an actual physical device, which at 
> that point it is.
> 
> If the value of $xpath/type is 'phy', nothing needs to be done, since 
> $xpath/physical-disk has already been set up properly.
> 
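The flow described above can be sketched as follows. This is only an
illustrative skeleton, not the actual /usr/pkg/etc/xen/scripts/block
script: to keep it self-contained it prints the actions it would take,
where the real script calls xenstore-read/xenstore-write and actually
runs vnconfig. Device and path names are made up.

```shell
#!/bin/sh
# Sketch of the block hotplug script's decision logic.

handle_block() {
    xstatus=$1   # $2 of the real script: 2 = startup, 6 = teardown
    type=$2      # what xenstore-read "$xpath/type" would return
    params=$3    # what xenstore-read "$xpath/params" would return

    case $xstatus in
    2)
        case $type in
        file)
            # Attach the image to an unused vnd device, then record it:
            #   vnconfig vnd2 "$params"
            #   xenstore-write "$xpath/physical-disk" /dev/vnd2
            echo "vnconfig vnd2 $params; physical-disk=/dev/vnd2"
            ;;
        phy)
            # physical-disk already names a real device: nothing to do.
            echo "phy device $params: no setup needed"
            ;;
        esac
        ;;
    6)
        # Teardown: a file-backed disk would need "vnconfig -u" here.
        echo "teardown for $type backend"
        ;;
    esac
}

handle_block 2 file /var/xen/domu/server001/disk.img
handle_block 2 phy /dev/wd0e
```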
> Now, the above behavior is how xm+xend+xenbackendd works. There was a 
> bug in libxl/xl that I fixed as described here 
> http://mail-index.netbsd.org/port-xen/2012/05/29/msg007252.html so that 
> libxl/xl would behave the same way. You might also notice I made small 
> changes to allow custom backend types. I don't mean to keep tooting my 
> own horn; I just don't want the fix to get lost. I've since learned 
> that I should probably file a bug report with the patch, but I don't 
> have much time right now.
> 
> Hopefully I don't bore you to death if you read this far, but the way 
> libxl/xl is going seems somewhat ridiculous, where they are trying 
> to insert more and more policy vs functionality, as evidenced by Roger's 
> effort to outright decide (hence a policy) that 'well, vnd 

I don't understand this whole argument about "policy vs functionality";
from my point of view what we have now is also a policy: every raw disk
file is attached using the vnd device.

And adding a gntdev certainly isn't a policy; on the contrary, it is
going to get us much more functionality, like the ability to run
backends in userspace (for example, blktap3 support when it is released).

> can't work with file on NFS, so be damned with it and just route 
> everything via qemu-dm'. 

Again, as discussed with Manuel, a better solution has to be found for
this issue, and I'm sure we can reach a consensus.

> Additionally, they are planning to retire 
> xenbackendd and again this is a trend that I don't like where they clump 
> everything into one giant ball called the libxl/xl, as opposed to a 

The retirement of xenbackendd is done for a good reason: calling hotplug
scripts from libxl allows better control of when hotplug scripts are
called, and also allows better error handling if these scripts fail. The
same scripts are called, so functionality stays the same.

This was done for both NetBSD and Linux: in the past Linux used to call
hotplug scripts from udev, and now they are called from libxl too. I
don't see how having a more unified hotplug script interface can be a
bad thing.

> bunch of little executables doing specific things. Another illustration: 
> you might ask who's managing the domU in the absence of xend; well what 
> happens is that when you do 'xl create ...' xl would then daemonize 
> itself to watch xenstore for that specific domU that it just created. I 
> guess that's cool, but to me it's got the feel to put everything into 
> one giant, monolithic, unflexible entity (shudder).

Again, I don't see how this is a problem: in the past you had xend, a
gigantic piece of Python code acting as a central arbiter; now you have
a small C daemon for each running domain. libxl is generally considered
a better piece of code, and is much easier to maintain than xend.

Do you have any concrete technical reason to believe that xend was better
than libxl?

> So, I'm just writing to give you some more information about this whole 
> libxl/xl stuff, and hopefully it'll give you some arsenal so that some 
> of the Xen folks don't try to force policies that are claimed to protect 
> the 'end-user' or are very Linux specific.

libxl has even split OS-specific routines into separate files, for both
Linux and NetBSD, and frankly I don't see how that is worse than the
amount of patches we had in pkgsrc to get xend working. Now libxl works
out of the box on NetBSD, and a lot of effort has gone into that.

We should focus our efforts on getting Xen to work on NetBSD without a
ton of NetBSD-specific out-of-tree patches, and right now my biggest
concern is getting upstream QEMU working.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 03 22:56:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Dec 2012 22:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfevV-0004H0-AV; Mon, 03 Dec 2012 22:55:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <toby.karyadi@gmail.com>) id 1TfevU-0004Gv-7s
	for xen-devel@lists.xen.org; Mon, 03 Dec 2012 22:55:48 +0000
Received: from [85.158.139.211:54070] by server-8.bemta-5.messagelabs.com id
	9E/B0-06050-3FD2DB05; Mon, 03 Dec 2012 22:55:47 +0000
X-Env-Sender: toby.karyadi@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1354575345!14682998!1
X-Originating-IP: [76.96.62.40]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3Ni45Ni42Mi40MCA9PiAyMjQ2NjA=\n,sa_preprocessor: 
	QmFkIElQOiA3Ni45Ni42Mi40MCA9PiAyMjQ2NjA=\n,BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2480 invoked from network); 3 Dec 2012 22:55:46 -0000
Received: from qmta04.westchester.pa.mail.comcast.net (HELO
	qmta04.westchester.pa.mail.comcast.net) (76.96.62.40)
	by server-13.tower-206.messagelabs.com with SMTP;
	3 Dec 2012 22:55:46 -0000
Received: from omta06.westchester.pa.mail.comcast.net ([76.96.62.51])
	by qmta04.westchester.pa.mail.comcast.net with comcast
	id X65t1k00216LCl054AumiL; Mon, 03 Dec 2012 22:54:46 +0000
Received: from koalatu.simplecubes.com ([98.204.234.19])
	by omta06.westchester.pa.mail.comcast.net with comcast
	id XAul1k00t0Rn0nW3SAul8a; Mon, 03 Dec 2012 22:54:46 +0000
Received: from quoll.simplecubes.com (quoll.simplecubes.com [192.168.33.37])
	by koalatu.simplecubes.com (Postfix) with ESMTP id 0CAAB8B2;
	Mon,  3 Dec 2012 17:55:49 -0500 (EST)
Message-ID: <50BD2DB5.9010802@gmail.com>
Date: Mon, 03 Dec 2012 17:54:45 -0500
From: Toby Karyadi <toby.karyadi@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<20121129185635.GA1045@asim.lip6.fr> <50B8718C.1090405@citrix.com>
	<20121130085241.GC311@asim.lip6.fr> <50B87A7E.5030001@citrix.com>
	<20121130094143.GA10993@asim.lip6.fr> <50B88325.1050009@citrix.com>
	<1354271552.6269.110.camel@zakaz.uk.xensource.com>
	<20121130103823.GA9562@asim.lip6.fr>
	<1354272201.6269.113.camel@zakaz.uk.xensource.com>
	<20121130105058.GA3457@asim.lip6.fr>
	<50BD01DE.1050809@gmail.com> <50BD1ACD.2030001@citrix.com>
In-Reply-To: <50BD1ACD.2030001@citrix.com>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net;
	s=q20121106; t=1354575286;
	bh=4xw2lnRxphTg1yxjf1dsBi2ozlztKki7YcuyPw/IF94=;
	h=Received:Received:Received:Message-ID:Date:From:MIME-Version:To:
	Subject:Content-Type;
	b=lcxY5e4Pk8d2IArECj5xzwqhfi7/GgGhA9zvJ0Ae6+jKAirtdwUkVF9lMHzGO6N6f
	kwAm52lNodH8Cg88tDazReJmGt9YU0Q9LKrLuWHYCFFjlsmYxuc0LeKzKmk+jf2XTM
	z06NZoAblIspCoAsMaQzMje8oaP7peq0/KKfQNqa/m1oq1aQXnsbZEqGpQJVcvQvb2
	UcX/TBoKOuLiauxh2S3rkU0JJCUPo+QtHcNXFDIu9M1Ss9+kyp6oXuj+ZMems/Zbjj
	eRt0aGJu9RPl4frfA4RmtgSZHaRiixaOvtPE4zY0s961cHSPYfB8PsgcWZxNCwu0bx
	Hqz7JZPHu9wsQ==
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/3] Add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Whoops, this was meant to be a correspondence just for Manuel. But since
the cat is out of the bag...

On 12/3/12 4:34 PM, Roger Pau Monné wrote:
> I don't understand this whole argument about "policy vs functionality";
> from my point of view what we have now is also a policy: every raw disk
> file is attached using the vnd device. And adding a gntdev certainly
> isn't a policy; on the contrary, it is going to get us much more
> functionality, like the ability to run backends in userspace (for
> example, blktap3 support when it is released).

Yes, on NetBSD a file: disk config will use vnd, but only because of the
way the block script is written. The functionality of the block script
is well defined, and therefore much easier to override than, say,
patching libxl. So it's not a 'policy', but more of a default behaviour,
since anyone with some knowledge of shell scripting can modify it to do
something else, e.g. run the vnd under rump (is that possible?) to avoid
blowing up the dom0.

But if it is decided that a disk config beginning with file: will always
use qemu-dm, just on the off chance that using vnd over NFS can blow up
the dom0, and if there is no way to override that, or it is really
difficult (which is relative, I know), then it becomes a 'policy'. Can I
set up iSCSI through hotplug, for example?

To be honest, I haven't looked into how the hotplug scripts work, so
hopefully they can provide equivalent functionality and flexibility.

>> Again, as discussed with Manuel, a better solution has to be found for
>> this issue, and I'm sure we can reach a consensus.

That's what I'm hoping.

> The retirement of xenbackendd is done for a good reason: calling
> hotplug scripts from libxl allows better control of when hotplug
> scripts are called, and also allows better error handling if these
> scripts fail. The same scripts are called, so functionality stays the
> same. This was done for both NetBSD and Linux: in the past Linux used
> to call hotplug scripts from udev, and now they are called from libxl
> too. I don't see how having a more unified hotplug script interface
> can be a bad thing. Again, I don't see how this is a problem: in the
> past you had xend, a gigantic piece of Python code acting as a central
> arbiter; now you have a small C daemon for each running domain. libxl
> is generally considered a better piece of code, and is much easier to
> maintain than xend. Do you have any concrete technical reason to
> believe that xend was better than libxl?

Like I said, I should probably look into the hotplug script framework
before I yammer any further. If it allows overriding file: disk configs
with vnd or whatever, then I'm happy. I don't think xend is better than
libxl; on the contrary, I find the libxl source more understandable, and
much smaller. I couldn't make head or tail of the xend source code, and
I'm a Python programmer most of the time. But by the same token, I knew
I had this blob called xend that did these specific things, and
xenbackendd that did those specific things. There is really nothing that
prevents, say, creating another program called domu_watcher (or
whatever) that is linked against libxl for functionality sharing and
gets launched by xl during 'xl create'; but again, this is more of a
nitpick and a matter of preference.

> libxl has even split OS-specific routines into separate files, for
> both Linux and NetBSD, and frankly I don't see how that is worse than
> the amount of patches we had in pkgsrc to get xend working. Now libxl
> works out of the box on NetBSD, and a lot of effort has gone into
> that. We should focus our efforts on getting Xen to work on NetBSD
> without a ton of NetBSD-specific out-of-tree patches, and right now my
> biggest concern is getting upstream QEMU working.

I noticed that split even in 4.1.2, which I thought was a good start;
however, those compile-time 'plugins' were not useful back then, since
the only choice was whether blktap is supported, to which under a NetBSD
build the answer is 'nah!'. I haven't looked into the 4.2 libxl source
either, so maybe the plugin points are much more extensive now. I know
enough to say that creating a system with plugin support, whether
compile-time or runtime, is not easy, because to a certain degree you
have to expect the unexpected.

If I sound like I'm just griping all the time, it's just that I wish I
had more time to jump in right now, but I don't. So, I do a lot of hand
waving ;-). I understand that you've got to get QEMU working, but that's
orthogonal to letting us shoot ourselves in the foot ;-) I mean, have
you used NetBSD's disklabel?

Anyways...

Cheers,
Toby

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 00:18:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfgCd-0006LX-W2; Tue, 04 Dec 2012 00:17:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TfgCc-0006LS-CJ
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:17:34 +0000
Received: from [85.158.139.211:21896] by server-13.bemta-5.messagelabs.com id
	9E/A6-27809-D114DB05; Tue, 04 Dec 2012 00:17:33 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354580252!18921428!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19364 invoked from network); 4 Dec 2012 00:17:33 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 00:17:33 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 5D20B8408B;
	Tue,  4 Dec 2012 01:17:32 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id ISVQfgt9eBIK; Tue,  4 Dec 2012 01:17:32 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 047AE8408A;
	Tue,  4 Dec 2012 01:17:31 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TfgCY-0002n6-GL; Tue, 04 Dec 2012 01:17:30 +0100
Date: Tue, 4 Dec 2012 01:17:30 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121204001730.GM6055@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	matthew.fioravante@jhuapl.edu, Ian.Campbell@citrix.com,
	xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-8-git-send-email-dgdegra@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354286955-23900-8-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: matthew.fioravante@jhuapl.edu, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 7/9] stubdom/grub: send kernel measurements
	to vTPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel De Graaf, on Fri 30 Nov 2012 09:49:13 -0500, wrote:
> This allows a domU with an arbitrary kernel and initrd to take advantage
> of the static root of trust provided by a vTPM.

> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 00:18:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfgDO-0006Nz-E2; Tue, 04 Dec 2012 00:18:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TfgDN-0006Nq-ON
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:18:21 +0000
Received: from [85.158.143.99:21488] by server-1.bemta-4.messagelabs.com id
	1F/BB-27934-C414DB05; Tue, 04 Dec 2012 00:18:20 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354580300!22588106!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9323 invoked from network); 4 Dec 2012 00:18:20 -0000
Received: from toccata.ens-lyon.fr (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 00:18:20 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id EF7CE8408B;
	Tue,  4 Dec 2012 01:18:19 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id B4QIutqkCw4O; Tue,  4 Dec 2012 01:18:19 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id AB2F68408A;
	Tue,  4 Dec 2012 01:18:19 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TfgDK-0002oF-84; Tue, 04 Dec 2012 01:18:18 +0100
Date: Tue, 4 Dec 2012 01:18:18 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121204001818.GN6055@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	matthew.fioravante@jhuapl.edu, Ian.Campbell@citrix.com,
	xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: matthew.fioravante@jhuapl.edu, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 0/9] vTPM new ABI, extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I let Matthew review the other patches.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 00:19:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfgDs-0006Qe-Re; Tue, 04 Dec 2012 00:18:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TfgDr-0006Q2-0H
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 00:18:51 +0000
Received: from [85.158.138.51:27910] by server-1.bemta-3.messagelabs.com id
	36/F8-12169-A614DB05; Tue, 04 Dec 2012 00:18:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354580329!30639767!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY3MzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13714 invoked from network); 4 Dec 2012 00:18:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 00:18:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,211,1355097600"; d="scan'208";a="16134519"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 00:18:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Tue, 4 Dec 2012 00:18:48 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfgDo-0007cB-EO;
	Tue, 04 Dec 2012 00:18:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfgDo-0004qn-24;
	Tue, 04 Dec 2012 00:18:48 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14558-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Dec 2012 00:18:48 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14558: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14558 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14558/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14482
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14482

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  1c69c938f641

------------------------------------------------------------
People who touched revisions under test:
  Acked-by: Tim Deegan <tim@xen.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wei.liu2@citrix.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=29247e44df47
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 29247e44df47
+ branch=xen-unstable
+ revision=29247e44df47
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 29247e44df47 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 31 changesets with 51 changes to 34 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=29247e44df47
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 29247e44df47
+ branch=xen-unstable
+ revision=29247e44df47
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 29247e44df47 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 31 changesets with 51 changes to 34 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 00:27:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfgM0-0007qk-EG; Tue, 04 Dec 2012 00:27:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TfgLz-0007qf-71
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:27:15 +0000
Received: from [85.158.143.99:17681] by server-3.bemta-4.messagelabs.com id
	DD/CC-06841-2634DB05; Tue, 04 Dec 2012 00:27:14 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354580833!20869925!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27825 invoked from network); 4 Dec 2012 00:27:14 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 00:27:14 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 98D0B84089;
	Tue,  4 Dec 2012 01:27:13 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id m+pkf1CTUu0s; Tue,  4 Dec 2012 01:27:13 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 4BBEF84080;
	Tue,  4 Dec 2012 01:27:13 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TfgLv-0003XE-VL; Tue, 04 Dec 2012 01:27:11 +0100
Date: Tue, 4 Dec 2012 01:27:11 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20121204002711.GO6055@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Keir Fraser <keir.xen@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
References: <20121130231354.GA5857@type.youpi.perso.aquilenet.fr>
	<CCDF6781.464C6%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CCDF6781.464C6%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] mini-os: drop shutdown variables when
	CONFIG_XENBUS=n
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Shutdown variables are meaningless when CONFIG_XENBUS=n, since no
shutdown event can ever happen.  Better to make sure that no code tries
to use them and then waits forever for a shutdown event that will never
arrive.

Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

diff -r 29247e44df47 extras/mini-os/kernel.c
--- a/extras/mini-os/kernel.c	Fri Nov 30 21:51:17 2012 +0000
+++ b/extras/mini-os/kernel.c	Tue Dec 04 01:24:51 2012 +0100
@@ -48,9 +48,11 @@
 
 uint8_t xen_features[XENFEAT_NR_SUBMAPS * 32];
 
+#ifdef CONFIG_XENBUS
 unsigned int do_shutdown = 0;
 unsigned int shutdown_reason;
 DECLARE_WAIT_QUEUE_HEAD(shutdown_queue);
+#endif
 
 void setup_xen_features(void)
 {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 00:30:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:30:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfgPA-00080h-81; Tue, 04 Dec 2012 00:30:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TfgP8-00080Z-9o
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:30:30 +0000
Received: from [85.158.139.83:35142] by server-4.bemta-5.messagelabs.com id
	22/46-15011-5244DB05; Tue, 04 Dec 2012 00:30:29 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354581028!27693188!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29040 invoked from network); 4 Dec 2012 00:30:28 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 00:30:28 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 0B20584089;
	Tue,  4 Dec 2012 01:30:28 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id Q5pVHPyZYoh6; Tue,  4 Dec 2012 01:30:27 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id B890A84080;
	Tue,  4 Dec 2012 01:30:27 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TfgP4-0003YJ-DX; Tue, 04 Dec 2012 01:30:26 +0100
Date: Tue, 4 Dec 2012 01:30:26 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121204003026.GP6055@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Keir (Xen.org)" <keir@xen.org>
References: <20121128215723.GA6109@type>
	<1354187659.25834.147.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354187659.25834.147.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [minios] Add xenbus shutdown control support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Re,

Ian Campbell, on Thu 29 Nov 2012 11:14:19 +0000, wrote:
> On Wed, 2012-11-28 at 21:57 +0000, Samuel Thibault wrote:
> > Add a thread watching the xenbus shutdown control path and notifies a
> > wait queue.

> Why a wait queue rather than a weak function call?

Because it integrates well with existing wait loops.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 00:49:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfghY-0000CM-VS; Tue, 04 Dec 2012 00:49:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sunds@peapod.net>) id 1TfghY-0000CH-8S
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:49:32 +0000
Received: from [85.158.139.211:62823] by server-1.bemta-5.messagelabs.com id
	B8/C0-09311-B984DB05; Tue, 04 Dec 2012 00:49:31 +0000
X-Env-Sender: sunds@peapod.net
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354582170!18923847!1
X-Originating-IP: [209.85.214.45]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18219 invoked from network); 4 Dec 2012 00:49:30 -0000
Received: from mail-bk0-f45.google.com (HELO mail-bk0-f45.google.com)
	(209.85.214.45)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 00:49:30 -0000
Received: by mail-bk0-f45.google.com with SMTP id jk13so1486938bkc.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 16:49:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=O3UwNiiq6+Ovo9JLiI6GGbZxduWiL3QCpRP66YpoaCo=;
	b=mj79y8XaAI1o4WXCJ54QTcsNvWuZfImu1pA4iW0k2ho/lVWJXUqQNiF1V3rmkFlreA
	ZaNyyQF/5qzdwWZRMmC+d3JmwaTueci57tiQZBPKeZO3pdX+aa4a/Kkcp3oLBkHlfn45
	oSKHufS0R04b4qSdgleI/MYxNdBlEj5ZPMXr0oq2m1oV+4H2/q5ZzapWwBye8YgqAo9u
	TE2cBGtA5mNH2dF07Hyzj7IEoxVlZzlBbh3mqYKaqSGhGrhjSF+sdKsicuu3aRN5KhQD
	Y0UzW2j5+bFWBgHZ3+mMa5rrRh5ayBhQNPHRlVBXd3r5d6xffsEKyAA8DZYt2Fy7wNN0
	1HFQ==
MIME-Version: 1.0
Received: by 10.204.154.17 with SMTP id m17mr3519230bkw.89.1354582169887; Mon,
	03 Dec 2012 16:49:29 -0800 (PST)
Received: by 10.204.59.146 with HTTP; Mon, 3 Dec 2012 16:49:29 -0800 (PST)
In-Reply-To: <50A27423.6040104@tycho.nsa.gov>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
Date: Mon, 3 Dec 2012 18:49:29 -0600
Message-ID: <CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
From: D Sundstrom <sunds@peapod.net>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
X-Gm-Message-State: ALoCoQlz3v6k5oN4KFw493T/vgiDZ6Z1y+KDBnbBFBFupvudXhO7Gb9p4rBB3BiHyn9SM3kmAgFc
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8559791445519164607=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8559791445519164607==
Content-Type: multipart/alternative; boundary=0015175df16c0e24e004cffc3d0c

--0015175df16c0e24e004cffc3d0c
Content-Type: text/plain; charset=ISO-8859-1

The issue seems to be that my version of Xen (XenClient XT) does not
support balloon drivers. Any call to the memory_op hypercall to change
the reservation terminates my guest with extreme prejudice.

I'll take that one up with Citrix.  However, can someone explain why
mapping a grant needs to manipulate the balloon reservation?

Specifically, in the 3.7-rc7 Linux kernel tree, in the file
drivers/xen/balloon.c:

At line 512 it tries to get a page out of the balloon; this returns NULL
(no page), so the "if (page)" test at line 513 evaluates to false, and
at line 518 the else branch calls decrease_reservation().


Thanks
David



On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:

> On 11/12/2012 08:15 AM, D Sundstrom wrote:
> > Thank you Pablo.
> >
> > It makes no difference if I run both the src-add and map from the same
> > domain or from different DomU domains.
> > Whichever DomU I run the map function in crashes immediately.
>
> Mapping your own grants (which is what the test run you showed did) might
> cause problems - although it's a bug that needs to be fixed, if so. You
> may want to try using the vchan-node2 tool (tools/libvchan) for testing
> and as an example user.
>
> > You mention Dom0.  I just want to be clear that I'd like to share
> > between two DomU domains.  Have you gotten this to work?
>
> That was the goal of gntalloc/libvchan - it should work (and has for me).
>
> > I also tried the userspace APIs provided by Xen such as
> > xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
> > the same driver IOCTLs, so this isn't a surprise.
> >
> > I'll need to see if I can get some debug info from the DomU kernel to
> > make progress.
>
> You might want to try booting your domU with console=hvc0 and look at
> xl console - that will usually give you useful backtraces. Without that,
> it's rather difficult to tell what the problem is.
>
> > If I can get this to work, are there any restrictions on sharing large
> > amounts of memory?  Say 160Mb?  Or are grant tables intended for a
> > small number of pages?
>
> There are restrictions within the modules (default is 1024 4K pages), and
> in Xen itself for the number of grant table and maptrack pages - but I
> think those can be adjusted via a boot parameter. The grant tables aren't
> currently intended to share large amounts of memory, so you may run in to
> some inefficiencies when doing the map/unmap. If you're using an IOMMU for
> one of the domUs, this may end up being especially costly.
>
> > Thanks,
> > David
> >
> >
> >
> > On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <pllopis@arcos.inf.uc3m.es>
> wrote:
> >>
> >> It is not clear from your output from which domain you are running
> >> each command. It looks like you are trying to issue a grant and map it
> >> from within the same domain. That's probably the reason it crashes.
> >> You are supposed to run this tool from both domains, running the calls
> >> which interface with gntalloc from one domain, and the calls which
> >> interface with gntdev from the other domain.
> >> In any case, the domid you have to specify in the map must be the
> >> domid of the domain which issued the grant. In other words, when
> >> creating a grant, the domid which is granted access is specified. When
> >> mapping a grant, the domid which issued the grant is specified. (i.e.
> >> If you did "src-add 8" from dom0 you should run map 0 1372 from domU
> >> 8)
> >>
> >>>
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >
>
>
> --
> Daniel De Graaf
> National Security Agency
>

--0015175df16c0e24e004cffc3d0c--


--===============8559791445519164607==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8559791445519164607==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 00:49:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfghY-0000CM-VS; Tue, 04 Dec 2012 00:49:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sunds@peapod.net>) id 1TfghY-0000CH-8S
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:49:32 +0000
Received: from [85.158.139.211:62823] by server-1.bemta-5.messagelabs.com id
	B8/C0-09311-B984DB05; Tue, 04 Dec 2012 00:49:31 +0000
X-Env-Sender: sunds@peapod.net
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354582170!18923847!1
X-Originating-IP: [209.85.214.45]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18219 invoked from network); 4 Dec 2012 00:49:30 -0000
Received: from mail-bk0-f45.google.com (HELO mail-bk0-f45.google.com)
	(209.85.214.45)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 00:49:30 -0000
Received: by mail-bk0-f45.google.com with SMTP id jk13so1486938bkc.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 16:49:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=O3UwNiiq6+Ovo9JLiI6GGbZxduWiL3QCpRP66YpoaCo=;
	b=mj79y8XaAI1o4WXCJ54QTcsNvWuZfImu1pA4iW0k2ho/lVWJXUqQNiF1V3rmkFlreA
	ZaNyyQF/5qzdwWZRMmC+d3JmwaTueci57tiQZBPKeZO3pdX+aa4a/Kkcp3oLBkHlfn45
	oSKHufS0R04b4qSdgleI/MYxNdBlEj5ZPMXr0oq2m1oV+4H2/q5ZzapWwBye8YgqAo9u
	TE2cBGtA5mNH2dF07Hyzj7IEoxVlZzlBbh3mqYKaqSGhGrhjSF+sdKsicuu3aRN5KhQD
	Y0UzW2j5+bFWBgHZ3+mMa5rrRh5ayBhQNPHRlVBXd3r5d6xffsEKyAA8DZYt2Fy7wNN0
	1HFQ==
MIME-Version: 1.0
Received: by 10.204.154.17 with SMTP id m17mr3519230bkw.89.1354582169887; Mon,
	03 Dec 2012 16:49:29 -0800 (PST)
Received: by 10.204.59.146 with HTTP; Mon, 3 Dec 2012 16:49:29 -0800 (PST)
In-Reply-To: <50A27423.6040104@tycho.nsa.gov>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
Date: Mon, 3 Dec 2012 18:49:29 -0600
Message-ID: <CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
From: D Sundstrom <sunds@peapod.net>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
X-Gm-Message-State: ALoCoQlz3v6k5oN4KFw493T/vgiDZ6Z1y+KDBnbBFBFupvudXhO7Gb9p4rBB3BiHyn9SM3kmAgFc
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8559791445519164607=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8559791445519164607==
Content-Type: multipart/alternative; boundary=0015175df16c0e24e004cffc3d0c

--0015175df16c0e24e004cffc3d0c
Content-Type: text/plain; charset=ISO-8859-1

The issue seems to be that my version of Xen (XenClient XT) does not support
balloon drivers. Any call to the memory_op hypercall to change the
reservation terminates my guest with extreme prejudice.

I'll take that one up with Citrix.  However, can someone explain why
mapping a grant needs to manipulate the balloon reservation?

Specifically, in the 3.7-RC7 linux kernel tree, the file
drivers/xen/balloon.c:

At line 512 it tries to get a page out of the balloon; this returns NULL
(no page).
The "if (page)" test at line 513 therefore evaluates to false,
and at line 518 the else branch calls decrease_reservation().
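Modeled in userspace C, the fallback path being described looks roughly like the following. This is a simplified sketch of the logic in drivers/xen/balloon.c, not the kernel's real code; all names and the stand-in page value are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the balloon.c path described above: try to take
 * a page from the balloon; if none is available, fall back to shrinking
 * the reservation. Names here are illustrative, not the kernel's. */

static int balloon_pages = 0;    /* pages currently held in the balloon */
static int decrease_calls = 0;   /* how often the fallback was taken    */

static void *balloon_retrieve(void)
{
    if (balloon_pages > 0) {
        balloon_pages--;
        return (void *)1;        /* stand-in for a struct page pointer  */
    }
    return NULL;                 /* balloon empty: no page available    */
}

static void decrease_reservation(void)
{
    /* In the real kernel this path issues the memory_op hypercall
     * (decrease reservation) - the call that kills the XenClient guest. */
    decrease_calls++;
}

static void *alloc_ballooned_page(void)
{
    void *page = balloon_retrieve();  /* try the balloon first          */
    if (page)
        return page;
    decrease_reservation();           /* else branch: shrink reservation */
    return NULL;
}
```

With an empty balloon every allocation falls through to decrease_reservation(), which matches the crash scenario described above.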


Thanks
David



On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:

> On 11/12/2012 08:15 AM, D Sundstrom wrote:
> > Thank you Pablo.
> >
> > It makes no difference if I run both the src-add and map from the same
> > domain or from different DomU domains.
> > Whichever DomU I run the map function in crashes immediately.
>
> Mapping your own grants (which is what the test run you showed did) might
> cause problems - although it's a bug that needs to be fixed, if so. You
> may want to try using the vchan-node2 tool (tools/libvchan) for testing
> and as an example user.
>
> > You mention Dom0.  I just want to be clear that I'd like to share
> > between two DomU domains.  Have you gotten this to work?
>
> That was the goal of gntalloc/libvchan - it should work (and has for me).
>
> > I also tried the userspace APIs provided by Xen such as
> > xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
> > the same driver IOCTLs, so this isn't a surprise.
> >
> > I'll need to see if I can get some debug info from the DomU kernel to
> > make progress.
>
> You might want to try booting your domU with console=hvc0 and look at
> xl console - that will usually give you useful backtraces. Without that,
> it's rather difficult to tell what the problem is.
>
> > If I can get this to work, are there any restrictions on sharing large
> > amounts of memory?  Say 160MB?  Or are grant tables intended for a
> > small number of pages?
>
> There are restrictions within the modules (default is 1024 4K pages), and
> in Xen itself for the number of grant table and maptrack pages - but I
> think those can be adjusted via a boot parameter. The grant tables aren't
> currently intended to share large amounts of memory, so you may run into
> some inefficiencies when doing the map/unmap. If you're using an IOMMU for
> one of the domUs, this may end up being especially costly.
>
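For scale, the 160MB figure from the question works out to well over the module's default cap Daniel quotes. A quick back-of-the-envelope check (numbers taken from the messages above; the code itself is purely illustrative):

```c
#include <assert.h>

/* How many 4K grant pages would a 160MB shared region need, versus
 * the 1024-page module default mentioned above? Illustrative only. */

enum { GRANT_PAGE_SIZE = 4096, MODULE_DEFAULT_PAGES = 1024 };

static long grant_pages_needed(long bytes)
{
    /* Round up to whole pages. */
    return (bytes + GRANT_PAGE_SIZE - 1) / GRANT_PAGE_SIZE;
}
```

160MB needs 40960 grant pages, 40x the default limit, which is why the limits (and the per-page map/unmap cost) matter at that size.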
> > Thanks,
> > David
> >
> >
> >
> > On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <pllopis@arcos.inf.uc3m.es>
> wrote:
> >>
> >> It is not clear from your output from which domain you are running
> >> each command. It looks like you are trying to issue a grant and map it
> >> from within the same domain. That's probably the reason it crashes.
> >> You are supposed to run this tool from both domains, running the calls
> >> which interface with gntalloc from one domain, and the calls which
> >> interface with gntdev from the other domain.
> >> In any case, the domid you have to specify in the map must be the
> >> domid of the domain which issued the grant. In other words, when
> >> creating a grant, the domid which is granted access is specified. When
> >> mapping a grant, the domid which issued the grant is specified. (i.e.
> >> If you did "src-add 8" from dom0 you should run map 0 1372 from domU
> >> 8)
> >>
> >>>
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >
>
>
> --
> Daniel De Graaf
> National Security Agency
>
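Pablo's domid rule quoted above (creating a grant names the domain being granted access; mapping a grant names the domain that issued it) can be captured in a toy model. Everything below is an illustration of that rule only, not the real gntalloc/gntdev or libxc API:

```c
#include <assert.h>

/* Toy model of the grant/map domid rule: the granter records which
 * peer may map; the mapper must name the granter. Illustrative only. */
struct grant {
    int granter;   /* domid that issued the grant        */
    int grantee;   /* domid that is allowed to map it    */
    int ref;       /* grant reference number             */
};

static struct grant issue_grant(int my_domid, int peer_domid, int ref)
{
    struct grant g = { my_domid, peer_domid, ref };
    return g;
}

/* Returns 1 when the map request is consistent with the grant:
 * the caller is the grantee and names the granter's domid. */
static int map_grant(const struct grant *g,
                     int my_domid, int named_domid, int ref)
{
    return g->grantee == my_domid
        && g->granter == named_domid
        && g->ref == ref;
}
```

In the "src-add 8" example above: dom0 issues the grant to domU 8, so domU 8 maps it by naming domid 0; trying it the other way round fails.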

--0015175df16c0e24e004cffc3d0c--


--===============8559791445519164607==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8559791445519164607==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 00:55:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 00:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfgnW-0000R7-Qd; Tue, 04 Dec 2012 00:55:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TfgnV-0000R2-Fp
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 00:55:41 +0000
Received: from [85.158.137.99:9799] by server-12.bemta-3.messagelabs.com id
	90/2C-22757-C0A4DB05; Tue, 04 Dec 2012 00:55:40 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354582539!17747004!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM1NDU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2761 invoked from network); 4 Dec 2012 00:55:40 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-16.tower-217.messagelabs.com with SMTP;
	4 Dec 2012 00:55:40 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 03 Dec 2012 16:55:38 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,211,1355126400"; d="scan'208";a="226118267"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by azsmga001.ch.intel.com with ESMTP; 03 Dec 2012 16:55:32 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 3 Dec 2012 16:55:10 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Tue, 4 Dec 2012 08:55:08 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
	faults for devices used by Xen or Dom0
Thread-Index: AQHNzuAehTAqIGeL2k2bTlhbc3Vd6ZgGlvcw//+XEQCAAaZSAA==
Date: Tue, 4 Dec 2012 00:55:08 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A483456440339E6EB@SHSMSX101.ccr.corp.intel.com>
References: <5097FD2902000078000A66BF@nat28.tlf.novell.com>
	<CCDE3016.54628%keir@xen.org>
	<50B88F8002000078000ACC8A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339D1E9@SHSMSX101.ccr.corp.intel.com>
	<50BC649E02000078000AD289@nat28.tlf.novell.com>
In-Reply-To: <50BC649E02000078000AD289@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: KeirFraser <keir@xen.org>, "wei.huang2@amd.com" <wei.huang2@amd.com>,
	Tim Deegan <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>,
	Dario Faggioli <raistlin@linux.it>,
	"weiwang.dd@gmail.com" <weiwang.dd@gmail.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
 faults for devices used by Xen or Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> >>> On 03.12.12 at 07:08, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> wrote:
> > If the phantom device support for IOMMU is in upstream,  is this patch
> > still needed ?
> 
> Phantom function is unrelated to the behavioral adjustment here.
> 
> >  Basically,  I can't figure out why several faults should be allowed
> > before disabling bus mastering.   Did you meet some real issues ?   Thanks!
> 
> I observed quite a different driver failure pattern with and without this
> adjustment, but in a contrived environment only. From the customer data
> for the problem that prompted the phantom function work, I could also
> conclude the same (comparing the driver failure under native Linux with
> IOMMU turned on and the one under Xen).
> 
> But in any case, I am of the opinion that an occasional fault shouldn't give
> reason to disable the device altogether - what we're aiming at is solely to
> keep Xen and other domains functional (which doesn't require as drastic an
> action as was carried out prior to this adjustment). Also, afaict native Linux
> doesn't have any such disabling behavior at all.

Okay, maybe we need to align this with the native Linux side, and just keep the fault reporting instead of disabling the device. If the number of faults reaches a limit, the hypervisor can choose to suppress its output.
Xiantao
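A minimal sketch of that throttling idea, i.e. keep the device's bus mastering enabled and only cap the fault reporting. The names and the limit below are hypothetical, not actual Xen code:

```c
#include <assert.h>

/* Sketch of the suggestion above: never disable the device on IOMMU
 * faults, just stop printing once a per-device fault budget is spent.
 * FAULT_REPORT_LIMIT and all names are illustrative. */
#define FAULT_REPORT_LIMIT 32

struct fault_state {
    unsigned int count;   /* faults reported so far for this device */
};

/* Returns 1 if this fault should be logged, 0 if suppressed. */
static int iommu_fault_should_report(struct fault_state *s)
{
    if (s->count < FAULT_REPORT_LIMIT) {
        s->count++;
        return 1;
    }
    return 0;             /* over budget: suppress, device stays alive */
}

/* Helper: feed n faults through the filter, count how many get logged. */
static int report_n_faults(struct fault_state *s, int n)
{
    int reported = 0;
    while (n-- > 0)
        reported += iommu_fault_should_report(s);
    return reported;
}
```

The key design point, matching the thread, is that the filter only affects logging; unlike the pre-26133 behavior it never touches the device's bus-mastering bit.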

> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Friday, November 30, 2012 5:51 PM
> >> To: wei.huang2@amd.com; weiwang.dd@gmail.com; Zhang, Xiantao; Keir
> >> Fraser
> >> Cc: Dario Faggioli; xen-devel; Tim Deegan
> >> Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering
> >> on faults for devices used by Xen or Dom0
> >>
> >> >>> On 30.11.12 at 10:42, Keir Fraser <keir@xen.org> wrote:
> >> > On 05/11/2012 16:53, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> >
> >> >> Under the assumption that in these cases recurring faults aren't a
> >> >> security issue and it can be expected that the drivers there are
> >> >> going to try to take care of the problem.
> >> >>
> >> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> >
> >> > This one's sat a while with no comments...
> >>
> >> It's in already (26133:fdb69dd527cd), with Tim's and Dario's ack (who
> >> were the ones involved in creating the original change this modifies).
> >>
> >> But yes, we're having a general response problem for IOMMU related
> >> stuff - I already asked for this to be a topic on the next community call.
> >>
> >> Jan
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 04 01:29:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 01:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfhKH-0004mz-Nz; Tue, 04 Dec 2012 01:29:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TfhKF-0004mu-WA
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 01:29:32 +0000
Received: from [85.158.138.51:31025] by server-13.bemta-3.messagelabs.com id
	82/19-24887-AF15DB05; Tue, 04 Dec 2012 01:29:30 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1354584568!27483276!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzY5OTAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8984 invoked from network); 4 Dec 2012 01:29:29 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-9.tower-174.messagelabs.com with SMTP;
	4 Dec 2012 01:29:29 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 03 Dec 2012 17:28:42 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,211,1355126400"; d="scan'208";a="228525580"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 03 Dec 2012 17:29:27 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 3 Dec 2012 17:29:27 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 3 Dec 2012 17:29:27 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Tue, 4 Dec 2012 09:29:25 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: ATS and dependent features
Thread-Index: AQHNzINiSOuk8WhZTUOv61PMlYmAhZgAAhig///uAQCAAJGFcP//hsAAgAJJkKCAA+YKAIABq+xg
Date: Tue, 4 Dec 2012 01:29:24 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A483456440339E774@SHSMSX101.ccr.corp.intel.com>
References: <50B498E502000078000AB8B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A4834564403398F24@SHSMSX101.ccr.corp.intel.com>
	<50B7243602000078000AC61A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033994C8@SHSMSX101.ccr.corp.intel.com>
	<50B7389202000078000AC669@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339B365@SHSMSX101.ccr.corp.intel.com>
	<50BC68FE02000078000AD2B1@nat28.tlf.novell.com>
In-Reply-To: <50BC68FE02000078000AD2B1@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Xiantao" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATS and dependent features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, December 03, 2012 3:55 PM
> To: Zhang, Xiantao
> Cc: xen-devel
> Subject: RE: ATS and dependent features
> 
> >>> On 30.11.12 at 13:29, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> wrote:
> 
> >
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Thursday, November 29, 2012 5:28 PM
> >> To: Zhang, Xiantao
> >> Cc: xen-devel
> >> Subject: RE: ATS and dependent features
> >>
> >> >>> On 29.11.12 at 10:19, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> >> wrote:
> >>
> >> >
> >> >> -----Original Message-----
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: Thursday, November 29, 2012 4:01 PM
> >> >> To: Zhang, Xiantao
> >> >> Cc: xen-devel
> >> >> Subject: RE: ATS and dependent features
> >> >>
> >> >> >>> On 29.11.12 at 02:07, "Zhang, Xiantao"
> >> >> >>> <xiantao.zhang@intel.com>
> >> >> wrote:
> >> >> > ATS should be a host feature controlled by iommu, and I don't
> >> >> > think
> >> >> > dom0 can control  it from Xen's architecture.
> >> >>
> >> >> "Can" or "should"? Because from all I can tell it currently clearly does.
> >> >
> >> > I mean Xen shouldn't  allow these capabilities can be detected by
> >> > dom0.  If it does, we need to fix it.
> >>
> >> It sort of hides it - all callers sit in the kernel's IOMMU code, and
> >> IOMMU detection is being prevented. So it looks like the code is
> >> simply dead when running on top of Xen.
> >
> > I'm curious why dom0's !Xen kernel option for these features can solve
> > the issue you met.
> 
> It doesn't "solve" the problem in that sense: As said, the code in question
> only has callers in IOMMU code, which itself is dependent on !XEN in our
> kernels (just to make clear - I'm talking about forward ported kernels here,
> not pv-ops ones). So upstream probably just has to live with that code being
> dead (at the moment, when run on top of Xen) and take the risk of there
> appearing a caller elsewhere.
> In our kernels, by making these options also dependent upon !XEN, we can
> then actually detect (and actively deal with) an eventual new caller
> elsewhere in the code, thus eliminating any risk of bad interaction between
> Dom0 and Xen.

I think the !Xen you are talking about is a compile-time option, so this
kernel can only be used for dom0 and can't run on native hardware with
these features enabled? If you don't need to keep the kernel running on
native hardware, I think it is fine.
Xiantao 
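
To illustrate the scheme Jan describes above, a forward-ported kernel could
express the dependency with a Kconfig fragment along these lines (a
hypothetical sketch; the option name and placement are assumptions, not the
actual SUSE kernel configuration):

```kconfig
# Hypothetical sketch: tying an ATS-dependent feature to !XEN so that a
# kernel configured for dom0 cannot enable it. Any new caller appearing
# elsewhere in the tree then fails at build time instead of silently
# interacting with the hypervisor's own ATS handling.
config PCI_ATS
	bool "PCI Address Translation Services support"
	depends on PCI && !XEN
```

Such a kernel, as Xiantao notes, would then lack these features when run on
native hardware.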


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 03:08:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 03:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfir3-0007AW-QL; Tue, 04 Dec 2012 03:07:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tfir1-0007AR-U0
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 03:07:28 +0000
Received: from [193.109.254.147:60051] by server-9.bemta-14.messagelabs.com id
	2F/25-30773-FE86DB05; Tue, 04 Dec 2012 03:07:27 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354590444!2394637!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23856 invoked from network); 4 Dec 2012 03:07:25 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 03:07:25 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so3068100iac.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 19:07:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=E3aiIAvXkY0yBC+zu04NYiId+aOCia+X7gSJ2QWNXl8=;
	b=hZxcNjuxDLWvclMkxJpoHLa5UqhQOTu0gKjSP9/mVcOsf1Ne03l1UGy+gFVzTiCrrV
	jR826prVaXZBiDTckJ1cRhj7mipZ1JJ4R+jXt+EGB8yO06XgwPz3O+gYr6OzhH5UjzV7
	XtzutVB0x4KRRjUBmdO8zhWIhAPN25zFYh6OLz3ukiA/7b3+LSJq5tOgVLCcAzLiThZl
	T7hci3+i4J7rXqHBFvhcJCwiCPl6OT5mS9jFj4uWymZp/sjd/jlZDh93efLmumDg1txx
	JbSMKG5TUXcLRecR5PdksYRkQRKiMIZS1kdPPjYq8xQGaqEhXrmRhkMNG5ozDU/+K1Uy
	H8FA==
MIME-Version: 1.0
Received: by 10.42.179.8 with SMTP id bo8mr9858030icb.0.1354590444026; Mon, 03
	Dec 2012 19:07:24 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 19:07:23 -0800 (PST)
In-Reply-To: <50BCAFEF.7040300@citrix.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
Date: Tue, 4 Dec 2012 11:07:23 +0800
X-Google-Sender-Auth: QkjqKdfHVRBbBSs1H6wzTrP6IoI
Message-ID: <CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Mats Petersson <mats.petersson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6508738499468714629=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6508738499468714629==
Content-Type: multipart/alternative; boundary=90e6ba6e8e203b79bb04cffe2a08

--90e6ba6e8e203b79bb04cffe2a08
Content-Type: text/plain; charset=ISO-8859-1

> I had a quick look, and it doesn't look that hard to backport that patch.

Thanks, Mats.
I'm glad to report that the patch does fix my problem.

And yes, it is really easy to port since the code did not change across
the two releases.
The only change would be the line numbers (3841 vs 3803) and one extra
comment before this line:

 static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)

I'm not sure if you are going to release another maintenance version that
includes this patch, but I'll report this to the Debian maintainers since
Debian is about to freeze for the v7.0 release and v4.2.0 will not make it.



On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson <mats.petersson@citrix.com>wrote:

> On 03/12/12 13:19, Mats Petersson wrote:
>
>> On 03/12/12 13:14, G.R. wrote:
>>
>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>>> <mats.petersson@citrix.com>
>>> wrote:
>>>
>>>      On 03/12/12 03:47, G.R. wrote:
>>>
>>>          Hi developers,
>>>          I met some domU issues and the log suggests missing interrupt.
>>>          Details from here:
>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>>>          In summary, this is the suspicious log:
>>>
>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>>
>>>          I've checked the code in question and found that mode 3 is an
>>>          'reserved_1' mode.
>>>          I want to trace down the source of this mode setting to
>>>          root-cause the issue.
>>>          But I'm not an xen developer, and am even a newbie as a xen
>>> user.
>>>          Could anybody give me instructions about how to enable
>>>          detailed debug log?
>>>          It could be better if I can get advice about experiments to
>>>          perform / switches to try out etc.
>>>
>>>          My SW config:
>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>>          domU: Debian wheezy 3.2.x stock kernel.
>>>
>>>          Thanks,
>>>          Timothy
>>>
>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>>      What are the exact messages in the DomU?
>>>
>>>
>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>>> enabled.
>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>>> did not see such msi related error message.
>>> Actually, with that domU I do not see anything obvious wrong from the
>>> log, but I also see nothing from panel (panel receive no signal and go
>>> power-saving) :-(
>>>
>>>
>>> Back to the issue I was reporting, the domU log looks like this:
>>>
>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>>> [drm:i915_hangcheck_ring_idle]
>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>>> 3354], missed IRQ?
>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>>> [drm:i915_hangcheck_ring_idle]
>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>>> 11297], missed IRQ?
>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>>> timeout, switching to polling mode: last cmd=0x000f0000
>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>>> codec, disabling MSI: last cmd=0x002f0600
>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>>
>>>
>>> Thanks,
>>> Timothy
>>>
>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>> fixes this. I'm not fully clued up to what the policy for backporting
>> fixes are, and I haven't looked at the complexity of the fix itself, but
>> either updating to the 4.2.0 or a (personal) backport sounds like the
>> right solution here.
>>
> I had a quick look, and it doesn't look that hard to backport that patch.
>
> --
> Mats
>
>
>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>> to your original email.
>>
>> --
>> Mats
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--90e6ba6e8e203b79bb04cffe2a08--


--===============6508738499468714629==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6508738499468714629==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 03:08:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 03:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfir3-0007AW-QL; Tue, 04 Dec 2012 03:07:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tfir1-0007AR-U0
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 03:07:28 +0000
Received: from [193.109.254.147:60051] by server-9.bemta-14.messagelabs.com id
	2F/25-30773-FE86DB05; Tue, 04 Dec 2012 03:07:27 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354590444!2394637!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23856 invoked from network); 4 Dec 2012 03:07:25 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 03:07:25 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so3068100iac.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 19:07:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=E3aiIAvXkY0yBC+zu04NYiId+aOCia+X7gSJ2QWNXl8=;
	b=hZxcNjuxDLWvclMkxJpoHLa5UqhQOTu0gKjSP9/mVcOsf1Ne03l1UGy+gFVzTiCrrV
	jR826prVaXZBiDTckJ1cRhj7mipZ1JJ4R+jXt+EGB8yO06XgwPz3O+gYr6OzhH5UjzV7
	XtzutVB0x4KRRjUBmdO8zhWIhAPN25zFYh6OLz3ukiA/7b3+LSJq5tOgVLCcAzLiThZl
	T7hci3+i4J7rXqHBFvhcJCwiCPl6OT5mS9jFj4uWymZp/sjd/jlZDh93efLmumDg1txx
	JbSMKG5TUXcLRecR5PdksYRkQRKiMIZS1kdPPjYq8xQGaqEhXrmRhkMNG5ozDU/+K1Uy
	H8FA==
MIME-Version: 1.0
Received: by 10.42.179.8 with SMTP id bo8mr9858030icb.0.1354590444026; Mon, 03
	Dec 2012 19:07:24 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 19:07:23 -0800 (PST)
In-Reply-To: <50BCAFEF.7040300@citrix.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
Date: Tue, 4 Dec 2012 11:07:23 +0800
X-Google-Sender-Auth: QkjqKdfHVRBbBSs1H6wzTrP6IoI
Message-ID: <CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Mats Petersson <mats.petersson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6508738499468714629=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6508738499468714629==
Content-Type: multipart/alternative; boundary=90e6ba6e8e203b79bb04cffe2a08

--90e6ba6e8e203b79bb04cffe2a08
Content-Type: text/plain; charset=ISO-8859-1

> I had a quick look, and it doesn't look that hard to backport that patch.

Thanks, Mats.
I'm glad to report that the patch does fix my problem.

And yes, it is really easy to port since the code did not change across
the two releases.
The only change would be the line numbers (3841 vs 3803) and one extra
comment before this line:

 static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)

I'm not sure if you are going to release another maintenance version that
includes this patch, but I'll report this to the Debian maintainers since
Debian is about to freeze for the v7.0 release and v4.2.0 will not make it.



On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson <mats.petersson@citrix.com>wrote:

> On 03/12/12 13:19, Mats Petersson wrote:
>
>> On 03/12/12 13:14, G.R. wrote:
>>
>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>>> <mats.petersson@citrix.com>
>>> wrote:
>>>
>>>      On 03/12/12 03:47, G.R. wrote:
>>>
>>>          Hi developers,
>>>          I met some domU issues and the log suggests missing interrupt.
>>>          Details from here:
>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>>>          In summary, this is the suspicious log:
>>>
>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>>
>>>          I've checked the code in question and found that mode 3 is an
>>>          'reserved_1' mode.
>>>          I want to trace down the source of this mode setting to
>>>          root-cause the issue.
>>>          But I'm not an xen developer, and am even a newbie as a xen
>>> user.
>>>          Could anybody give me instructions about how to enable
>>>          detailed debug log?
>>>          It could be better if I can get advice about experiments to
>>>          perform / switches to try out etc.
>>>
>>>          My SW config:
>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>>          domU: Debian wheezy 3.2.x stock kernel.
>>>
>>>          Thanks,
>>>          Timothy
>>>
>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>>      What are the exact messages in the DomU?
>>>
>>>
>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>>> enabled.
>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>>> did not see such msi related error message.
>>> Actually, with that domU I do not see anything obvious wrong from the
>>> log, but I also see nothing from panel (panel receive no signal and go
>>> power-saving) :-(
>>>
>>>
>>> Back to the issue I was reporting, the domU log looks like this:
>>>
>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>>> [drm:i915_hangcheck_ring_idle]
>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>>> 3354], missed IRQ?
>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>>> [drm:i915_hangcheck_ring_idle]
>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>>> 11297], missed IRQ?
>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>>> timeout, switching to polling mode: last cmd=0x000f0000
>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>>> codec, disabling MSI: last cmd=0x002f0600
>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>>
>>>
>>> Thanks,
>>> Timothy
>>>
>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>> fixes this. I'm not fully clued up on what the policy for backporting
>> fixes is, and I haven't looked at the complexity of the fix itself, but
>> either updating to 4.2.0 or a (personal) backport sounds like the
>> right solution here.
>>
> I had a quick look, and it doesn't look that hard to backport that patch.
>
> --
> Mats
>
>
>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>> to your original email.
>>
>> --
>> Mats
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--90e6ba6e8e203b79bb04cffe2a08--


--===============6508738499468714629==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6508738499468714629==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 03:12:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 03:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfivp-0007Ju-OB; Tue, 04 Dec 2012 03:12:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tfivo-0007Jp-C3
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 03:12:24 +0000
Received: from [85.158.137.99:33913] by server-2.bemta-3.messagelabs.com id
	A9/27-04744-71A6DB05; Tue, 04 Dec 2012 03:12:23 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1354590740!14618400!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32573 invoked from network); 4 Dec 2012 03:12:21 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 03:12:21 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so6118608iej.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 19:12:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=MQ9OmqWLUU+XH4HNuw7X6o6TydbaUxovYAapRPVnOdA=;
	b=DDWWXdmHFxc2VlyxfKTOFDtaw8o9nfszwUqogWnnefZmegViayoWa6fJAxBPLj6+YR
	9Jl3Tp03lm7JG+lxA9b6lSbjSf0WrIq69PiNswOdILMty55N9Oc1w8zUpLq5N5wmvvuO
	A2mBfeHH3lX4Mswm8XXBq355Ka2zTtRUX3g3bOEoT9zNpBwdCGvyGQcemGOVZ1UtIK7C
	eLqM0bnfklfTX8e79nKf9wrZ0ES223bZ8fxj7ykao1JF0+Tnf8vWnlu/YZmO2S9qOilV
	jb5lXHXQLt0JXVxMKUAXSvoaEa9hTtyR8nEgugRjmPBZOfhgo2KVg/c8A7Ivcn0nTelP
	2nGQ==
MIME-Version: 1.0
Received: by 10.50.194.132 with SMTP id hw4mr1244141igc.37.1354590740056; Mon,
	03 Dec 2012 19:12:20 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 19:12:19 -0800 (PST)
In-Reply-To: <CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
Date: Tue, 4 Dec 2012 11:12:19 +0800
X-Google-Sender-Auth: Mo_qrzYBVxZocQ-eiyUZv05mqW8
Message-ID: <CAKhsbWbbLk5m8ubg3jyfMOxGVCxi=iLynv79w-6gPEdHSc_x5w@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Mats Petersson <mats.petersson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2009236113536242055=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2009236113536242055==
Content-Type: multipart/alternative; boundary=14dae93411e9e08a6604cffe3b8e

--14dae93411e9e08a6604cffe3b8e
Content-Type: text/plain; charset=ISO-8859-1

PS: I have one more question about the patch:
will it make any difference for a pure HVM guest?
I also observed issues with Windows 7 (BSOD after the Intel display driver
was installed) and OpenELEC 2.0 (Linux 3.2 based with pvops disabled; the
panel reports no signal).
And I haven't had a chance to try them with that patch yet.


On Tue, Dec 4, 2012 at 11:07 AM, G.R. <firemeteor@users.sourceforge.net> wrote:

> > I had a quick look, and it doesn't look that hard to backport that patch.
>
> Thanks, Mats.
> I'm glad to report that the patch does fix my problem.
>
> And yes, it is really easy to port, since the code did not change across
> the two releases.
> The only change would be the line numbers (3841 vs 3803) and one extra
> comment before this line:
>
>  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>
> I'm not sure if you are going to release another maintenance version that
> includes this patch,
> but I'll report this to the Debian maintainer, since it's about to freeze
> for the v7.0 release and v4.2.0 will not make it.
>
>
>
>
> On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson <mats.petersson@citrix.com> wrote:
>
>> On 03/12/12 13:19, Mats Petersson wrote:
>>
>>> On 03/12/12 13:14, G.R. wrote:
>>>
>>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>>>> <mats.petersson@citrix.com>
>>>> wrote:
>>>>
>>>>      On 03/12/12 03:47, G.R. wrote:
>>>>
>>>>          Hi developers,
>>>>          I met some domU issues and the log suggests missing interrupt.
>>>>          Details from here:
>>>>          http://www.gossamer-threads.**com/lists/xen/users/263938#**
>>>> 263938 <http://www.gossamer-threads.com/lists/xen/users/263938#263938>
>>>>          In summary, this is the suspicious log:
>>>>
>>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>>>
>>>>          I've checked the code in question and found that mode 3 is a
>>>>          'reserved_1' mode.
>>>>          I want to trace down the source of this mode setting to
>>>>          root-cause the issue.
>>>>          But I'm not an xen developer, and am even a newbie as a xen
>>>> user.
>>>>          Could anybody give me instructions about how to enable
>>>>          detailed debug log?
>>>>          It could be better if I can get advice about experiments to
>>>>          perform / switches to try out etc.
>>>>
>>>>          My SW config:
>>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>>>          domU: Debian wheezy 3.2.x stock kernel.
>>>>
>>>>          Thanks,
>>>>          Timothy
>>>>
>>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>>>      What are the exact messages in the DomU?
>>>>
>>>>
>>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>>>> enabled.
>>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>>>> did not see such msi related error message.
>>>> Actually, with that domU I do not see anything obvious wrong from the
>>>> log, but I also see nothing from panel (panel receive no signal and go
>>>> power-saving) :-(
>>>>
>>>>
>>>> Back to the issue I was reporting, the domU log looks like this:
>>>>
>>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>>>> 3354], missed IRQ?
>>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297,
>>>> at
>>>> 11297], missed IRQ?
>>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>>>> timeout, switching to polling mode: last cmd=0x000f0000
>>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>>>> codec, disabling MSI: last cmd=0x002f0600
>>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>>>
>>>>
>>>> Thanks,
>>>> Timothy
>>>>
>>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>>> fixes this. I'm not fully clued up on what the policy for backporting
>>> fixes is, and I haven't looked at the complexity of the fix itself, but
>>> either updating to 4.2.0 or a (personal) backport sounds like the
>>> right solution here.
>>>
>> I had a quick look, and it doesn't look that hard to backport that patch.
>>
>> --
>> Mats
>>
>>
>>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>>> to your original email.
>>>
>>> --
>>> Mats
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>>
>>>
>>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>
>

--14dae93411e9e08a6604cffe3b8e--


--===============2009236113536242055==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2009236113536242055==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 03:12:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 03:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfivp-0007Ju-OB; Tue, 04 Dec 2012 03:12:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tfivo-0007Jp-C3
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 03:12:24 +0000
Received: from [85.158.137.99:33913] by server-2.bemta-3.messagelabs.com id
	A9/27-04744-71A6DB05; Tue, 04 Dec 2012 03:12:23 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1354590740!14618400!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32573 invoked from network); 4 Dec 2012 03:12:21 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 03:12:21 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so6118608iej.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 19:12:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=MQ9OmqWLUU+XH4HNuw7X6o6TydbaUxovYAapRPVnOdA=;
	b=DDWWXdmHFxc2VlyxfKTOFDtaw8o9nfszwUqogWnnefZmegViayoWa6fJAxBPLj6+YR
	9Jl3Tp03lm7JG+lxA9b6lSbjSf0WrIq69PiNswOdILMty55N9Oc1w8zUpLq5N5wmvvuO
	A2mBfeHH3lX4Mswm8XXBq355Ka2zTtRUX3g3bOEoT9zNpBwdCGvyGQcemGOVZ1UtIK7C
	eLqM0bnfklfTX8e79nKf9wrZ0ES223bZ8fxj7ykao1JF0+Tnf8vWnlu/YZmO2S9qOilV
	jb5lXHXQLt0JXVxMKUAXSvoaEa9hTtyR8nEgugRjmPBZOfhgo2KVg/c8A7Ivcn0nTelP
	2nGQ==
MIME-Version: 1.0
Received: by 10.50.194.132 with SMTP id hw4mr1244141igc.37.1354590740056; Mon,
	03 Dec 2012 19:12:20 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Mon, 3 Dec 2012 19:12:19 -0800 (PST)
In-Reply-To: <CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
Date: Tue, 4 Dec 2012 11:12:19 +0800
X-Google-Sender-Auth: Mo_qrzYBVxZocQ-eiyUZv05mqW8
Message-ID: <CAKhsbWbbLk5m8ubg3jyfMOxGVCxi=iLynv79w-6gPEdHSc_x5w@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Mats Petersson <mats.petersson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2009236113536242055=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2009236113536242055==
Content-Type: multipart/alternative; boundary=14dae93411e9e08a6604cffe3b8e

--14dae93411e9e08a6604cffe3b8e
Content-Type: text/plain; charset=ISO-8859-1

PS: I got one more question about the patch:
Will it make any difference for pure HVM guest?
I also observed issues for win 7 (BSOD after intel display driver
installed) and openelec 2.0 ( linux 3.2 based with pvops disabled, panel
report no-signal).
And I haven't got chance to try them out with that patch.


On Tue, Dec 4, 2012 at 11:07 AM, G.R. <firemeteor@users.sourceforge.net>wrote:

> > I had a quick look, and it doesn't look that hard to backport that patch.
>
> Thanks, Mat.
> I'm glad to report that the patch do fix my problem.
>
> And yest it is really easy to port since the code did not change across
> the two release.
> The only change would be line numbers (3841 vs 3803) and one extra
> comments before this line:
>
>  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>
> I'm not sure if you are going to release another maintenance version that
> include this patch,
> but I'll report this to Debain maintainer since it's about to freeze for
> v7.0 release and v4.2.0 will not make it.
>
>
>
>
> On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson <mats.petersson@citrix.com>wrote:
>
>> On 03/12/12 13:19, Mats Petersson wrote:
>>
>>> On 03/12/12 13:14, G.R. wrote:
>>>
>>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>>>> <mats.petersson@citrix.com <mailto:mats.petersson@citrix.**com<mats.petersson@citrix.com>>>
>>>> wrote:
>>>>
>>>>      On 03/12/12 03:47, G.R. wrote:
>>>>
>>>>          Hi developers,
>>>>          I met some domU issues and the log suggests missing interrupt.
>>>>          Details from here:
>>>>          http://www.gossamer-threads.**com/lists/xen/users/263938#**
>>>> 263938 <http://www.gossamer-threads.com/lists/xen/users/263938#263938>
>>>>          In summary, this is the suspicious log:
>>>>
>>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>>>
>>>>          I've checked the code in question and found that mode 3 is an
>>>>          'reserved_1' mode.
>>>>          I want to trace down the source of this mode setting to
>>>>          root-cause the issue.
>>>>          But I'm not an xen developer, and am even a newbie as a xen
>>>> user.
>>>>          Could anybody give me instructions about how to enable
>>>>          detailed debug log?
>>>>          It could be better if I can get advice about experiments to
>>>>          perform / switches to try out etc.
>>>>
>>>>          My SW config:
>>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>>>          domU: Debian wheezy 3.2.x stock kernel.
>>>>
>>>>          Thanks,
>>>>          Timothy
>>>>
>>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>>>      What are the exact messages in the DomU?
>>>>
>>>>
>>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>>>> enabled.
>>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>>>> did not see such msi related error message.
>>>> Actually, with that domU I do not see anything obvious wrong from the
>>>> log, but I also see nothing from panel (panel receive no signal and go
>>>> power-saving) :-(
>>>>
>>>>
>>>> Back to the issue I was reporting, the domU log looks like this:
>>>>
>>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>>>> 3354], missed IRQ?
>>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297,
>>>> at
>>>> 11297], missed IRQ?
>>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>>>> timeout, switching to polling mode: last cmd=0x000f0000
>>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>>>> codec, disabling MSI: last cmd=0x002f0600
>>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>>>
>>>>
>>>> Thanks,
>>>> Timothy
>>>>
>>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>>> fixes this. I'm not fully clued up on what the policy for backporting
>>> fixes is, and I haven't looked at the complexity of the fix itself, but
>>> either updating to 4.2.0 or a (personal) backport sounds like the
>>> right solution here.
>>>
>> I had a quick look, and it doesn't look that hard to backport that patch.
>>
>> --
>> Mats
>>
>>
>>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>>> to your original email.
>>>
>>> --
>>> Mats
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>>
>>>
>>>
>>
>>
>
>

--14dae93411e9e08a6604cffe3b8e
Content-Type: text/plain; charset=ISO-8859-1

PS: I got one more question about the patch:
Will it make any difference for a pure HVM guest?
I also observed issues with Win 7 (a BSOD after the Intel display driver
is installed) and openelec 2.0 (Linux 3.2 based with pvops disabled; the
panel reports no signal).
And I haven't got a chance to try them out with that patch.

On Tue, Dec 4, 2012 at 11:07 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>> I had a quick look, and it doesn't look that hard to backport that patch.
>
> Thanks, Mats.
> I'm glad to report that the patch does fix my problem.
>
> And yes, it is really easy to port since the code did not change across
> the two releases.
> The only change would be the line numbers (3841 vs 3803) and one extra
> comment before this line:
>
>     static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>
> I'm not sure if you are going to release another maintenance version
> that includes this patch, but I'll report this to the Debian maintainer
> since it's about to freeze for the v7.0 release and v4.2.0 will not
> make it.

--14dae93411e9e08a6604cffe3b8e--


--===============2009236113536242055==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2009236113536242055==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 03:25:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 03:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfj7q-0007jj-4O; Tue, 04 Dec 2012 03:24:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1Tfj7o-0007jd-9G
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 03:24:48 +0000
Received: from [85.158.143.99:3354] by server-3.bemta-4.messagelabs.com id
	61/22-06841-EFC6DB05; Tue, 04 Dec 2012 03:24:46 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-11.tower-216.messagelabs.com!1354591483!20388668!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25447 invoked from network); 4 Dec 2012 03:24:44 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 03:24:44 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so3078992iac.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 19:24:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=10DopOkvaEdXDUX7pj2bUdJYKtXm+h2z1XxNFIOP/NU=;
	b=A/IY1BPuCGG3xrWd9DSCp9wymEvwW2dOVy0wkg9p0SQXOZZypEdz2M0W1oLrMtEzt8
	m/fWxfLgPTHvc/v2pnn2rYktzWJyMPDWDFv5A7qDDzDQyMmWH15kV5nb0WI2Sihm/17f
	eMnSpfwvz9uRhpyyHQ7h769UAb6WcM175N5EuDk4/4vUHpTAon980pHS0KX6mFsFBvUc
	1vh/vgRnhHygGKmrhUN0rsK7AYTDhcjeZduAU1AdVUyDu+PuGA3iCxYGuUVEqyaeZQ4g
	ZYKyox2V923bkTu1X9tYV5YNTR85L9PUk1yO/aRLWcq/ur38uFPZNz7Zn7hkvQR4Wuxl
	Zy8w==
Received: by 10.50.106.227 with SMTP id gx3mr1336696igb.10.1354591483135;
	Mon, 03 Dec 2012 19:24:43 -0800 (PST)
Received: from [192.168.1.101] (206-248-157-158.dsl.teksavvy.com.
	[206.248.157.158])
	by mx.google.com with ESMTPS id vq4sm8835290igb.10.2012.12.03.19.24.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 19:24:41 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
Date: Mon, 3 Dec 2012 22:24:40 -0500
Message-Id: <49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
To: xen-devel@lists.xen.org
X-Mailer: Apple Mail (2.1499)
X-Gm-Message-State: ALoCoQlM+KRXf3JAW72FHu8bOLThqajhZiAk6Y0w65lq5YrTy/MuO4wp2UFAwvwGqb/MlmxqGh3O
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
	problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> I earlier promised a complete analysis of the problem
> addressed by the proposed claim hypercall as well as
> an analysis of the alternate solutions.  I had not
> yet provided these analyses when I asked for approval
> to commit the hypervisor patch, so there was still
> a good amount of misunderstanding, and I am trying
> to fix that here.
> =

> I had hoped this essay could be both concise and complete
> but quickly found it to be impossible to be both at the
> same time.  So I have erred on the side of verbosity,
> but also have attempted to ensure that the analysis
> flows smoothly and is understandable to anyone interested
> in learning more about memory allocation in Xen.
> I'd appreciate feedback from other developers to understand
> if I've also achieved that goal.
> =

> Ian, Ian, George, and Tim -- I have tagged a few
> out-of-flow questions to you with [IIGT].  If I lose
> you at any point, I'd especially appreciate your feedback
> at those points.  I trust that, first, you will read
> this completely.  As I've said, I understand that
> Oracle's paradigm may differ in many ways from your
> own, so I also trust that you will read it completely
> with an open mind.
> =

> Thanks,
> Dan
> =

> PROBLEM STATEMENT OVERVIEW
> =

> The fundamental problem is a race; two entities are
> competing for part or all of a shared resource: in this case,
> physical system RAM.  Normally, a lock is used to mediate
> a race.
> =

> For memory allocation in Xen, there are two significant
> entities, the toolstack and the hypervisor.  And, in
> general terms, there are currently two important locks:
> one used in the toolstack for domain creation;
> and one in the hypervisor used for the buddy allocator.
> =

> Considering first only domain creation, the toolstack
> lock is taken to ensure that domain creation is serialized.
> The lock is taken when domain creation starts, and released
> when domain creation is complete.
> =

> As system and domain memory requirements grow, the amount
> of time to allocate all necessary memory to launch a large
> domain is growing and may now exceed several minutes, so
> this serialization is increasingly problematic.  The result
> is a customer reported problem:  If a customer wants to
> launch two or more very large domains, the "wait time"
> required by the serialization is unacceptable.
> =

> Oracle would like to solve this problem.  And Oracle
> would like to solve this problem not just for a single
> customer sitting in front of a single machine console, but
> for the very complex case of a large number of machines,
> with the "agent" on each machine taking independent
> actions including automatic load balancing and power
> management via migration.
Hi Dan,
an issue with your reasoning throughout has been the constant invocation
of the multi-host environment as a justification for your proposal. But
this argument is not used in your proposal below beyond this mention in
passing. Further, there is no relation between what you are changing (the
hypervisor) and what you are claiming it is needed for (multi-host VM
management).


>  (This complex environment
> is sold by Oracle today; it is not a "future vision".)
> =

> [IIGT] Completely ignoring any possible solutions to this
> problem, is everyone in agreement that this _is_ a problem
> that _needs_ to be solved with _some_ change in the Xen
> ecosystem?
> =

> SOME IMPORTANT BACKGROUND INFORMATION
> =

> In the subsequent discussion, it is important to
> understand a few things:
> =

> While the toolstack lock is held, allocating memory for
> the domain creation process is done as a sequence of one
> or more hypercalls, each asking the hypervisor to allocate
> one or more -- "X" -- slabs of physical RAM, where a slab
> is 2**N contiguous aligned pages, also known as an
> "order N" allocation.  While the hypercall is defined
> to work with any value of N, common values are N=0
> (individual pages), N=9 ("hugepages" or "superpages"),
> and N=18 ("1GiB pages").  So, for example, if the toolstack
> requires 201MiB of memory, it will make two hypercalls:
> One with X=100 and N=9, and one with X=256 and N=0.
> =

> While the toolstack may ask for a smaller number X of
> order==9 slabs, system fragmentation may unpredictably
> cause the hypervisor to fail the request, in which case
> the toolstack will fall back to a request for 512*X
> individual pages.  If there is sufficient RAM in the system,
> this request for order==0 pages is guaranteed to succeed.
> Thus for a 1TiB domain, the hypervisor must be prepared
> to allocate up to 256Mi individual pages.
> =
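As an illustration of the arithmetic above, here is a toy model of how a
request decomposes into order-9 slabs with an order-0 remainder
(`split_request` is an illustrative name, not a Xen interface):

```python
PAGE_SHIFT = 12  # 4 KiB pages

def split_request(total_bytes):
    """Split a memory request into (X, N) hypercall batches, preferring
    order-9 (2 MiB) slabs and covering the remainder with order-0 pages."""
    pages = total_bytes >> PAGE_SHIFT
    slabs, rest = divmod(pages, 1 << 9)
    batches = []
    if slabs:
        batches.append((slabs, 9))
    if rest:
        batches.append((rest, 0))
    return batches

# 201 MiB = 100 order-9 slabs (200 MiB) + 256 order-0 pages (1 MiB)
print(split_request(201 << 20))  # [(100, 9), (256, 0)]
```

If the order-9 request fails due to fragmentation, the fallback is simply
`[(512 * X, 0)]` for the same amount of memory.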

> Note carefully that when the toolstack hypercall asks for
> 100 slabs, the hypervisor "heaplock" is currently taken
> and released 100 times.  Similarly, for 256M individual
> pages... 256 million spin_lock-alloc_page-spin_unlocks.
> This means that domain creation is not "atomic" inside
> the hypervisor, which means that races can and will still
> occur.
> =

> RULING OUT SOME SIMPLE SOLUTIONS
> =

> Is there an elegant simple solution here?
> =

> Let's first consider the possibility of removing the toolstack
> serialization entirely and/or the possibility that two
> independent toolstack threads (or "agents") can simultaneously
> request a very large domain creation in parallel.  As described
> above, the hypervisor's heaplock is insufficient to serialize RAM
> allocation, so the two domain creation processes race.  If there
> is sufficient resource for either one to launch, but insufficient
> resource for both to launch, the winner of the race is indeterminate,
> and one or both launches will fail, possibly after one or both
> domain creation threads have been working for several minutes.
> This is a classic "TOCTOU" (time-of-check-time-of-use) race.
> If a customer is unhappy waiting several minutes to launch
> a domain, they will be even more unhappy waiting for several
> minutes to be told that one or both of the launches has failed.
> Multi-minute failure is even more unacceptable for an automated
> agent trying to, for example, evacuate a machine that the
> data center administrator needs to powercycle.
> =

> [IIGT: Please hold your objections for a moment... the paragraph
> above is discussing the simple solution of removing the serialization;
> your suggested solution will be discussed soon.]
> =
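The race just described can be reproduced in miniature. This is a
deterministic toy model, not Xen code: two would-be domains each pass the
capacity check, then interleave page-by-page allocations (one heaplock
hold per page), and neither gets what it was promised:

```python
# Model: 150 free pages, two creators each needing 100.
free_pages = 150

def check(need):
    return free_pages >= need      # time-of-check

def alloc_one():
    """One spin_lock-alloc_page-spin_unlock, modeled as a single step."""
    global free_pages
    if free_pages == 0:
        return False
    free_pages -= 1                # time-of-use
    return True

need_a = need_b = 100
assert check(need_a) and check(need_b)   # both checks pass: 150 >= 100
done_a = done_b = 0
for _ in range(need_a):                  # interleave one page at a time
    if done_a < need_a and alloc_one(): done_a += 1
    if done_b < need_b and alloc_one(): done_b += 1
print(done_a, done_b)  # prints: 75 75 -- both launches fail mid-flight
```

Both creators did most of their (expensive) work before discovering the
failure, which is exactly the multi-minute failure mode described above.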

> Next, let's consider the possibility of changing the heaplock
> strategy in the hypervisor so that the lock is held not
> for one slab but for the entire request of N slabs.  As with
> any core hypervisor lock, holding the heaplock for a "long time"
> is unacceptable.  To a hypervisor, several minutes is an eternity.
> And, in any case, by serializing domain creation in the hypervisor,
> we have really only moved the problem from the toolstack into
> the hypervisor, not solved the problem.
> =

> [IIGT] Are we in agreement that these simple solutions can be
> safely ruled out?
> =

> CAPACITY ALLOCATION VS RAM ALLOCATION
> =

> Looking for a creative solution, one may realize that it is the
> page allocation -- especially in large quantities -- that is very
> time-consuming.  But, thinking outside of the box, it is not
> the actual pages of RAM that we are racing on, but the quantity of
> pages required to launch a domain!  If we instead have a way to
> "claim" a quantity of pages cheaply now and then allocate the actual
> physical RAM pages later, we have changed the race to require only
> serialization of the claiming process!  In other words, if some entity
> knows the number of pages available in the system, and can "claim"
> N pages for the benefit of a domain being launched, the successful
> launch of the domain can be ensured.  Well... the domain launch may
> still fail for an unrelated reason, but not due to a memory TOCTOU
> race.  But, in this case, if the cost (in time) of the claiming
> process is very small compared to the cost of the domain launch,
> we have solved the memory TOCTOU race with hardly any delay added
> to a non-memory-related failure that would have occurred anyway.
> =
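The claim idea reduces to a few lines of arithmetic under a single lock.
This is a toy model of the concept, not the actual hypervisor code
(`Heap`, `claim`, and `alloc` are illustrative names):

```python
import threading

class Heap:
    """Toy model of capacity claims: reserve a quantity cheaply now,
    allocate the actual pages later."""
    def __init__(self, free):
        self.lock = threading.Lock()   # stands in for the heaplock
        self.free = free
        self.claimed = 0

    def claim(self, pages):
        with self.lock:                # cheap: arithmetic only, no page walk
            if self.free - self.claimed >= pages:
                self.claimed += pages
                return True
            return False

    def alloc(self, pages):
        with self.lock:                # the slow page-by-page work happens
            self.free -= pages         # later, drawing down the claim
            self.claimed -= pages

heap = Heap(150)
print(heap.claim(100))  # True  -- first would-be domain claims instantly
print(heap.claim(100))  # False -- second fails fast, no minutes wasted
```

Compare this with the interleaved-allocation race above: the loser now
finds out immediately, before any expensive allocation begins.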

> This "claim" sounds promising.  But we have made an assumption that
> an "entity" has certain knowledge.  In the Xen system, that entity
> must be either the toolstack or the hypervisor.  Or, in the Oracle
> environment, an "agent"... but an agent and a toolstack are similar
> enough for our purposes that we will just use the more broadly-used
> term "toolstack".  In using this term, however, it's important to
> remember it is necessary to consider the existence of multiple
> threads within this toolstack.
> =

> Now I quote Ian Jackson: "It is a key design principle of a system
> like Xen that the hypervisor should provide only those facilities
> which are strictly necessary.  Any functionality which can be
> reasonably provided outside the hypervisor should be excluded
> from it."
> =

> So let's examine the toolstack first.
> =

> [IIGT] Still all on the same page (pun intended)?
> =

> TOOLSTACK-BASED CAPACITY ALLOCATION
> =

> Does the toolstack know how many physical pages of RAM are available?
> Yes, it can use a hypercall to find out this information after Xen and
> dom0 launch, but before it launches any domain.  Then if it subtracts
> the number of pages used when it launches a domain and is aware of
> when any domain dies, and adds them back, the toolstack has a pretty
> good estimate.  In actuality, the toolstack doesn't _really_ know the
> exact number of pages used when a domain is launched, but there
> is a poorly-documented "fuzz factor"... the toolstack knows the
> number of pages within a few megabytes, which is probably close enough.
> =

> This is a fairly good description of how the toolstack works today
> and the accounting seems simple enough, so does toolstack-based
> capacity allocation solve our original problem?  It would seem so.
> Even if there are multiple threads, the accounting -- not the extended
> sequence of page allocation for the domain creation -- can be
> serialized by a lock in the toolstack.  But note carefully, either
> the toolstack and the hypervisor must always be in sync on the
> number of available pages (within an acceptable margin of error);
> or any query to the hypervisor _and_ the toolstack-based claim must
> be paired atomically, i.e. the toolstack lock must be held across
> both.  Otherwise we again have another TOCTOU race. Interesting,
> but probably not really a problem.
> =

> Wait, isn't it possible for the toolstack to dynamically change the
> number of pages assigned to a domain?  Yes, this is often called
> ballooning and the toolstack can do this via a hypercall.  But
> that's still OK because each call goes through the toolstack and
> it simply needs to add more accounting for when it uses ballooning
> to adjust the domain's memory footprint.  So we are still OK.
> =

> But wait again... that brings up an interesting point.  Are there
> any significant allocations that are done in the hypervisor without
> the knowledge and/or permission of the toolstack?  If so, the
> toolstack may be missing important information.
> =

> So are there any such allocations?  Well... yes. There are a few.
> Let's take a moment to enumerate them:
> =

> A) In Linux, a privileged user can write to a sysfs file which writes
> to the balloon driver which makes hypercalls from the guest kernel to

A fairly bizarre limitation of a balloon-based approach to memory
management. Why on earth should the guest be allowed to change the size
of its balloon, and therefore its footprint on the host? This may be
justified with arguments pertaining to the stability of the in-guest
workload. What they really reveal are limitations of ballooning. But the
inadequacy of the balloon in itself doesn't automatically translate into
justifying the need for a new hypercall.

> the hypervisor, which adjusts the domain memory footprint, which
> changes the number of free pages _without_ the toolstack knowledge.
> The toolstack controls constraints (essentially a minimum and maximum)
> which the hypervisor enforces.  The toolstack can ensure that the
> minimum and maximum are identical to essentially disallow Linux from
> using this functionality.  Indeed, this is precisely what Citrix's
> Dynamic Memory Controller (DMC) does: enforce min==max so that DMC
> always has complete control and, so, knowledge of any domain memory
> footprint changes.  But DMC is not prescribed by the toolstack,

Neither is enforcing min==max. This was my argument when previously
commenting on this thread. The fact that you have enforcement of a
maximum domain allocation gives you an excellent tool to keep a domain's
unsupervised growth at bay. The toolstack can choose how fine-grained,
how often to be alerted and stall the domain.

> and some real Oracle Linux customers use and depend on the flexibility
> provided by in-guest ballooning.   So guest-privileged-user-driven-
> ballooning is a potential issue for toolstack-based capacity allocation.
> =

> [IIGT: This is why I have brought up DMC several times and have
> called this the "Citrix model"... I'm not trying to be snippy
> or impugn your morals as maintainers.]
> =

> B) Xen's page sharing feature has slowly been completed over a number
> of recent Xen releases.  It takes advantage of the fact that many
> pages often contain identical data; the hypervisor merges them to save

Great care has been taken for this statement to not be exactly true. The
hypervisor discards one of two pages that the toolstack tells it to (and
patches the physmap of the VM previously pointing to the discarded page).
It doesn't merge, nor does it look into contents. The hypervisor doesn't
care about the page contents. This is deliberate, so as to avoid spurious
claims of "you are using technique X!"

> physical RAM.  When any "shared" page is written, the hypervisor
> "splits" the page (aka, copy-on-write) by allocating a new physical
> page.  There is a long history of this feature in other virtualization
> products and it is known to be possible that, under many circumstances,
> thousands of splits may occur in any fraction of a second.  The
> hypervisor does not notify or ask permission of the toolstack.
> So, page-splitting is an issue for toolstack-based capacity
> allocation, at least as currently coded in Xen.
> =

> [Andre: Please hold your objection here until you read further.]

Name is Andres. And please cc me if you'll be addressing me directly!

Note that I don't disagree with your previous statement in itself.
Although "page-splitting" is fairly unique terminology, and confusing (at
least to me). CoW works.

> =

> C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
> toolstack for over three years.  It depends on an in-guest-kernel
> adaptive technique to constantly adjust the domain memory footprint as
> well as hooks in the in-guest-kernel to move data to and from the
> hypervisor.  While the data is in the hypervisor's care, interesting
> memory-load balancing between guests is done, including optional
> compression and deduplication.  All of this has been in Xen since 2009
> and has been awaiting changes in the (guest-side) Linux kernel. Those
> changes are now merged into the mainstream kernel and are fully
> functional in shipping distros.
> =

> While a complete description of tmem's guest<->hypervisor interaction
> is beyond the scope of this document, it is important to understand
> that any tmem-enabled guest kernel may unpredictably request thousands
> or even millions of pages directly via hypercalls from the hypervisor
> in a fraction of a second with absolutely no interaction with the
> toolstack.  Further, the guest-side hypercalls that allocate pages
> via the hypervisor are done in "atomic" code deep in the Linux mm
> subsystem.
> =

> Indeed, if one truly understands tmem, it should become clear that
> tmem is fundamentally incompatible with toolstack-based capacity
> allocation. But let's stop discussing tmem for now and move on.

You have not discussed tmem pool thaw and freeze in this proposal.

> =

> OK.  So with existing code both in Xen and Linux guests, there are
> three challenges to toolstack-based capacity allocation.  We'd
> really still like to do capacity allocation in the toolstack.  Can
> something be done in the toolstack to "fix" these three cases?
> =

> Possibly.  But let's first look at hypervisor-based capacity
> allocation: the proposed "XENMEM_claim_pages" hypercall.
> =

> HYPERVISOR-BASED CAPACITY ALLOCATION
> =

> The posted patch for the claim hypercall is quite simple, but let's
> look at it in detail.  The claim hypercall is actually a subop
> of an existing hypercall.  After checking parameters for validity,
> a new function is called in the core Xen memory management code.
> This function takes the hypervisor heaplock, checks for a few
> special cases, does some arithmetic to ensure a valid claim, stakes
> the claim, releases the hypervisor heaplock, and then returns.  To
> review from earlier, the hypervisor heaplock protects _all_ page/slab
> allocations, so we can be absolutely certain that there are no other
> page allocation races.  This new function is about 35 lines of code,
> not counting comments.
> =

> The patch includes two other significant changes to the hypervisor:
> First, when any adjustment to a domain's memory footprint is made
> (either through a toolstack-aware hypercall or one of the three
> toolstack-unaware methods described above), the heaplock is
> taken, arithmetic is done, and the heaplock is released.  This
> is 12 lines of code.  Second, when any memory is allocated within
> Xen, a check must be made (with the heaplock already held) to
> determine if, given a previous claim, the domain has exceeded
> its upper bound, maxmem.  This code is a single conditional test.
> =

> With some declarations, but not counting the copious comments,
> all told, the new code provided by the patch is well under 100 lines.
> =

> What about the toolstack side?  First, it's important to note that
> the toolstack changes are entirely optional.  If any toolstack
> wishes either to not fix the original problem, or avoid toolstack-
> unaware allocation completely by ignoring the functionality provided
> by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
> not use the new hypercall.

You are ruling out any other possibility here. In particular, but not
limited to, use of max_pages.

>  Second, it's very relevant to note that the Oracle product uses a
> combination of a proprietary "manager"
> which oversees many machines, and the older open-source xm/xend
> toolstack, for which the current Xen toolstack maintainers are no
> longer accepting patches.
> =

> The preface of the published patch does suggest, however, some
> straightforward pseudo-code, as follows:
> =

> Current toolstack domain creation memory allocation code fragment:
> =

> 1. call populate_physmap repeatedly to achieve mem=N memory
> 2. if any populate_physmap call fails, report -ENOMEM up the stack
> 3. memory is held until domain dies or the toolstack decreases it
> =

> Proposed toolstack domain creation memory allocation code fragment
> (new code marked with "+"):
> =

> +  call claim for mem=N amount of memory
> +  if claim succeeds:
> 1.  call populate_physmap repeatedly to achieve mem=N memory (failsafe)
> +  else
> 2.  report -ENOMEM up the stack
> +  claim is held until mem=N is achieved, or the domain dies, or the
>    claim is forced to 0 by a second hypercall
> 3. memory is held until domain dies or the toolstack decreases it
> =

> Reviewing the pseudo-code, one can readily see that the toolstack
> changes required to implement the hypercall are quite small.
> =

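The proposed fragment can be mocked up end to end (an illustrative Python sketch only; `populate_physmap` and the claim are simulated with a plain dict of counters, not real hypercalls, and all names here are invented):

```python
def claim(acct, pages):
    """Simulated claim hypercall: one cheap arithmetic check."""
    if pages > acct["free"] - acct["claimed"]:
        return False
    acct["claimed"] += pages
    return True

def create_domain(acct, mem_pages):
    """Proposed flow: claim first, then the slow populate loop,
    which can no longer lose the memory race (failsafe)."""
    if not claim(acct, mem_pages):
        return "-ENOMEM"                    # reported up the stack at once
    done = 0
    while done < mem_pages:                 # simulated populate_physmap loop
        chunk = min(512, mem_pages - done)  # one order-9 slab's worth
        acct["free"] -= chunk
        acct["claimed"] -= chunk            # claim drawn down as pages arrive
        done += chunk
    return "ok"
```

With 1000 free pages, a first 600-page creation succeeds while a second learns of -ENOMEM immediately at claim time, rather than minutes into allocation.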
> To complete this discussion, it has been pointed out that
> the proposed hypercall doesn't solve the original problem
> for certain classes of legacy domains... but also neither
> does it make the problem worse.  It has also been pointed
> out that the proposed patch is not (yet) NUMA-aware.
>
> Now let's return to the earlier question:  There are three
> challenges to toolstack-based capacity allocation, which are
> all handled easily by in-hypervisor capacity allocation. But we'd
> really still like to do capacity allocation in the toolstack.
> Can something be done in the toolstack to "fix" these three cases?
>
> The answer is, of course, certainly... anything can be done in
> software.  So, recalling Ian Jackson's stated requirement:
>
> "Any functionality which can be reasonably provided outside the
>  hypervisor should be excluded from it."
>
> we are now left to evaluate the subjective term "reasonably".
>
> CAN TOOLSTACK-BASED CAPACITY ALLOCATION OVERCOME THE ISSUES?
>
> In earlier discussion on this topic, when page-splitting was raised
> as a concern, some of the authors of Xen's page-sharing feature
> pointed out that a mechanism could be designed such that "batches"
> of pages were pre-allocated by the toolstack and provided to the
> hypervisor to be utilized as needed for page-splitting.  Should the
> batch run dry, the hypervisor could stop the domain that was provoking
> the page-split until the toolstack could be consulted and the
> toolstack, at its leisure, could request the hypervisor to refill
> the batch, which then allows the page-split-causing domain to proceed.
>
> But this batch page-allocation isn't implemented in Xen today.
>
> Andres Lagar-Cavilla says "... this is because of shortcomings in the
> [Xen] mm layer and its interaction with wait queues, documented
> elsewhere."  In other words, this batching proposal requires
> significant changes to the hypervisor, which I think we
> all agreed we were trying to avoid.

This is a misunderstanding. There is no connection between the batching
proposal and what I was referring to in the quote. Certainly I never
advocated for pre-allocations.

The "significant changes to the hypervisor" statement is FUD. Everyone
you've addressed on this email makes significant changes to the
hypervisor, under the proviso that they are necessary/useful changes.

The interactions between the mm layer and wait queues need fixing,
sooner or later, claim hypercall or not. But they are not a blocker;
they are essentially a race that may trigger under certain
circumstances. That is why they remain a low-priority fix.

>
> [Note to Andre: I'm not objecting to the need for this functionality
> for page-sharing to work with proprietary kernels and DMC; just

Let me nip this in the bud. I use page sharing and other techniques in
an environment that doesn't use Citrix's DMC, nor is focused only on
proprietary kernels...

> pointing out that it, too, is dependent on further hypervisor changes.]

… with 4.2 Xen. It is not perfect and has limitations that I am trying
to fix. But our product ships, and page sharing works for anyone who
would want to consume it, independently of further hypervisor changes.

>
> Such an approach makes sense in the min==max model enforced by
> DMC but, again, DMC is not prescribed by the toolstack.
>
> Further, this waitqueue solution for page-splitting only awkwardly
> works around in-guest ballooning (probably only with more hypervisor
> changes, TBD) and would be useless for tmem.  [IIGT: Please argue
> this last point only if you feel confident you truly understand how
> tmem works.]

I will argue, though, that "waitqueue solution … ballooning" is not
true. Ballooning has never needed, nor does it suddenly need,
hypervisor wait queues.

>
> So this as-yet-unimplemented solution only really solves a part
> of the problem.

As per the previous comments, I don't see your characterization as accurate.

Andres
>
> Are there any other possibilities proposed?  Ian Jackson has
> suggested a somewhat different approach:
>
> Let me quote Ian Jackson again:
>
> "Of course if it is really desired to have each guest make its own
> decisions and simply for them to somehow agree to divvy up the
> available resources, then even so a new hypervisor mechanism is
> not needed.  All that is needed is a way for those guests to
> synchronise their accesses and updates to shared records of the
> available and in-use memory."
>
> Ian then goes on to say:  "I don't have a detailed counter-proposal
> design of course..."
>
> This proposal is certainly possible, but I think most would agree that
> it would require some fairly massive changes in OS memory management
> design that would run contrary to many years of computing history.
> It requires guest OSes to cooperate with each other about basic memory
> management decisions.  And to work for tmem, it would require
> communication from atomic code in the kernel to user-space, then
> communication from user-space in a guest to user-space-in-domain0
> and then (presumably... I don't have a design either) back again.
> One must also wonder what the performance impact would be.
>
> CONCLUDING REMARKS
>
> "Any functionality which can be reasonably provided outside the
>  hypervisor should be excluded from it."
>
> I think this document has described a real customer problem and
> a good solution that could be implemented either in the toolstack
> or in the hypervisor.  Memory allocation in existing Xen functionality
> has been shown to interfere significantly with the toolstack-based
> solution, and the suggested partial solutions to those issues either
> require even more hypervisor work, or are completely undesigned and,
> at least, call into question the definition of "reasonably".
>
> The hypervisor-based solution has been shown to be extremely
> simple, fits very logically with existing Xen memory management
> mechanisms/code, and has been reviewed through several iterations
> by Xen hypervisor experts.
>
> While I understand completely the Xen maintainers' desire to
> fend off unnecessary additions to the hypervisor, I believe
> XENMEM_claim_pages is a reasonable and natural hypervisor feature,
> and I hope you will now Ack the patch.
>
> Acknowledgements: Thanks very much to Konrad for his thorough
> read-through and for suggestions on how to soften my combative
> style, which may have alienated the maintainers more than the
> proposal itself.
>
>
> ------------------------------
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
> End of Xen-devel Digest, Vol 94, Issue 22
> *****************************************


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 03:25:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 03:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfj7q-0007jj-4O; Tue, 04 Dec 2012 03:24:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1Tfj7o-0007jd-9G
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 03:24:48 +0000
Received: from [85.158.143.99:3354] by server-3.bemta-4.messagelabs.com id
	61/22-06841-EFC6DB05; Tue, 04 Dec 2012 03:24:46 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-11.tower-216.messagelabs.com!1354591483!20388668!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25447 invoked from network); 4 Dec 2012 03:24:44 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 03:24:44 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so3078992iac.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 19:24:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=10DopOkvaEdXDUX7pj2bUdJYKtXm+h2z1XxNFIOP/NU=;
	b=A/IY1BPuCGG3xrWd9DSCp9wymEvwW2dOVy0wkg9p0SQXOZZypEdz2M0W1oLrMtEzt8
	m/fWxfLgPTHvc/v2pnn2rYktzWJyMPDWDFv5A7qDDzDQyMmWH15kV5nb0WI2Sihm/17f
	eMnSpfwvz9uRhpyyHQ7h769UAb6WcM175N5EuDk4/4vUHpTAon980pHS0KX6mFsFBvUc
	1vh/vgRnhHygGKmrhUN0rsK7AYTDhcjeZduAU1AdVUyDu+PuGA3iCxYGuUVEqyaeZQ4g
	ZYKyox2V923bkTu1X9tYV5YNTR85L9PUk1yO/aRLWcq/ur38uFPZNz7Zn7hkvQR4Wuxl
	Zy8w==
Received: by 10.50.106.227 with SMTP id gx3mr1336696igb.10.1354591483135;
	Mon, 03 Dec 2012 19:24:43 -0800 (PST)
Received: from [192.168.1.101] (206-248-157-158.dsl.teksavvy.com.
	[206.248.157.158])
	by mx.google.com with ESMTPS id vq4sm8835290igb.10.2012.12.03.19.24.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 03 Dec 2012 19:24:41 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.2 \(1499\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
Date: Mon, 3 Dec 2012 22:24:40 -0500
Message-Id: <49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
To: xen-devel@lists.xen.org
X-Mailer: Apple Mail (2.1499)
X-Gm-Message-State: ALoCoQlM+KRXf3JAW72FHu8bOLThqajhZiAk6Y0w65lq5YrTy/MuO4wp2UFAwvwGqb/MlmxqGh3O
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
	problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> I earlier promised a complete analysis of the problem
> addressed by the proposed claim hypercall as well as
> an analysis of the alternate solutions.  I had not
> yet provided these analyses when I asked for approval
> to commit the hypervisor patch, so there was still
> a good amount of misunderstanding, and I am trying
> to fix that here.
>
> I had hoped this essay could be both concise and complete
> but quickly found it to be impossible to be both at the
> same time.  So I have erred on the side of verbosity,
> but also have attempted to ensure that the analysis
> flows smoothly and is understandable to anyone interested
> in learning more about memory allocation in Xen.
> I'd appreciate feedback from other developers to understand
> if I've also achieved that goal.
>
> Ian, Ian, George, and Tim -- I have tagged a few
> out-of-flow questions to you with [IIGT].  If I lose
> you at any point, I'd especially appreciate your feedback
> at those points.  I trust that, first, you will read
> this completely.  As I've said, I understand that
> Oracle's paradigm may differ in many ways from your
> own, so I also trust that you will read it completely
> with an open mind.
>
> Thanks,
> Dan
>
> PROBLEM STATEMENT OVERVIEW
>
> The fundamental problem is a race; two entities are
> competing for part or all of a shared resource: in this case,
> physical system RAM.  Normally, a lock is used to mediate
> a race.
>
> For memory allocation in Xen, there are two significant
> entities, the toolstack and the hypervisor.  And, in
> general terms, there are currently two important locks:
> one used in the toolstack for domain creation;
> and one in the hypervisor used for the buddy allocator.
>
> Considering first only domain creation, the toolstack
> lock is taken to ensure that domain creation is serialized.
> The lock is taken when domain creation starts, and released
> when domain creation is complete.
>
> As system and domain memory requirements grow, the amount
> of time to allocate all necessary memory to launch a large
> domain is growing and may now exceed several minutes, so
> this serialization is increasingly problematic.  The result
> is a customer-reported problem:  If a customer wants to
> launch two or more very large domains, the "wait time"
> required by the serialization is unacceptable.
>
> Oracle would like to solve this problem.  And Oracle
> would like to solve this problem not just for a single
> customer sitting in front of a single machine console, but
> for the very complex case of a large number of machines,
> with the "agent" on each machine taking independent
> actions including automatic load balancing and power
> management via migration.
Hi Dan,
an issue with your reasoning throughout has been the constant
invocation of the multi-host environment as a justification for your
proposal. But this argument is not used in your proposal below beyond
this mention in passing. Further, there is no relation between what
you are changing (the hypervisor) and what you are claiming it is
needed for (multi-host VM management).

>  (This complex environment
> is sold by Oracle today; it is not a "future vision".)
>
> [IIGT] Completely ignoring any possible solutions to this
> problem, is everyone in agreement that this _is_ a problem
> that _needs_ to be solved with _some_ change in the Xen
> ecosystem?
>
> SOME IMPORTANT BACKGROUND INFORMATION
>
> In the subsequent discussion, it is important to
> understand a few things:
>
> While the toolstack lock is held, allocating memory for
> the domain creation process is done as a sequence of one
> or more hypercalls, each asking the hypervisor to allocate
> one or more -- "X" -- slabs of physical RAM, where a slab
> is 2**N contiguous aligned pages, also known as an
> "order N" allocation.  While the hypercall is defined
> to work with any value of N, common values are N=0
> (individual pages), N=9 ("hugepages" or "superpages"),
> and N=18 ("1GiB pages").  So, for example, if the toolstack
> requires 201MiB of memory, it will make two hypercalls:
> one with X=100 and N=9, and one with X=256 and N=0.
>
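The slab arithmetic above can be sketched as follows (an illustrative Python model, not the real toolstack, which is C and whose fallback logic is more involved):

```python
PAGE_SHIFT = 12        # 4 KiB pages, as on x86 Xen
SUPERPAGE_ORDER = 9    # an order-9 slab is 512 pages (2 MiB)

def allocation_plan(mib):
    """Split a memory size (in MiB) into order-9 slab and order-0 page
    requests, mirroring how a toolstack sizes its allocation hypercalls."""
    pages = (mib * 1024 * 1024) >> PAGE_SHIFT
    superpages, singles = divmod(pages, 1 << SUPERPAGE_ORDER)
    return superpages, singles

# 201 MiB -> 100 order-9 slabs plus 256 order-0 pages
```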
> While the toolstack may ask for a smaller number X of
> order==9 slabs, system fragmentation may unpredictably
> cause the hypervisor to fail the request, in which case
> the toolstack will fall back to a request for 512*X
> individual pages.  If there is sufficient RAM in the system,
> this request for order==0 pages is guaranteed to succeed.
> Thus for a 1TiB domain, the hypervisor must be prepared
> to allocate up to 256Mi individual pages.
>
> Note carefully that when the toolstack hypercall asks for
> 100 slabs, the hypervisor "heaplock" is currently taken
> and released 100 times.  Similarly, for 256Mi individual
> pages... 256 million spin_lock-alloc_page-spin_unlocks.
> This means that domain creation is not "atomic" inside
> the hypervisor, which means that races can and will still
> occur.
>
> RULING OUT SOME SIMPLE SOLUTIONS
>
> Is there an elegant simple solution here?
>
> Let's first consider the possibility of removing the toolstack
> serialization entirely and/or the possibility that two
> independent toolstack threads (or "agents") can simultaneously
> request a very large domain creation in parallel.  As described
> above, the hypervisor's heaplock is insufficient to serialize RAM
> allocation, so the two domain creation processes race.  If there
> is sufficient resource for either one to launch, but insufficient
> resource for both to launch, the winner of the race is indeterminate,
> and one or both launches will fail, possibly after one or both
> domain creation threads have been working for several minutes.
> This is a classic "TOCTOU" (time-of-check-time-of-use) race.
> If a customer is unhappy waiting several minutes to launch
> a domain, they will be even more unhappy waiting for several
> minutes to be told that one or both of the launches has failed.
> Multi-minute failure is even more unacceptable for an automated
> agent trying to, for example, evacuate a machine that the
> data center administrator needs to powercycle.
>
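The race just described can be reproduced with a toy model (illustrative Python, not Xen's C; the page counts and domain sizes are invented). Two 600-page "domains" race for 1000 pages: both may pass the up-front check, yet at most one can complete:

```python
import threading

free_pages = 1000            # total system RAM, in pages (invented)
heaplock = threading.Lock()  # models Xen's per-allocation heaplock

def create_domain(need, results, i):
    """Check capacity once, then allocate page by page -- the gap
    between the check and the allocations is the TOCTOU window."""
    global free_pages
    if free_pages < need:                    # time-of-check
        results[i] = "refused up front"
        return
    for _ in range(need):                    # time-of-use, much later
        with heaplock:                       # held per page, as today
            if free_pages == 0:
                results[i] = "failed after partial allocation"
                return
            free_pages -= 1
    results[i] = "ok"

results = [None, None]
threads = [threading.Thread(target=create_domain, args=(600, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# At most one domain can fully launch; a loser discovers failure only
# after doing (potentially minutes of) allocation work.
```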
> [IIGT: Please hold your objections for a moment... the paragraph
> above is discussing the simple solution of removing the serialization;
> your suggested solution will be discussed soon.]
>
> Next, let's consider the possibility of changing the heaplock
> strategy in the hypervisor so that the lock is held not
> for one slab but for the entire request of N slabs.  As with
> any core hypervisor lock, holding the heaplock for a "long time"
> is unacceptable.  To a hypervisor, several minutes is an eternity.
> And, in any case, by serializing domain creation in the hypervisor,
> we have really only moved the problem from the toolstack into
> the hypervisor, not solved the problem.
>
> [IIGT] Are we in agreement that these simple solutions can be
> safely ruled out?
>
> CAPACITY ALLOCATION VS RAM ALLOCATION
>
> Looking for a creative solution, one may realize that it is the
> page allocation -- especially in large quantities -- that is very
> time-consuming.  But, thinking outside of the box, it is not
> the actual pages of RAM that we are racing on, but the quantity
> of pages required to launch a domain!  If we instead have a way to
> "claim" a quantity of pages cheaply now and then allocate the actual
> physical RAM pages later, we have changed the race to require only
> serialization of the claiming process!  In other words, if some entity
> knows the number of pages available in the system, and can "claim"
> N pages for the benefit of a domain being launched, the successful
> launch of the domain can be ensured.  Well... the domain launch may
> still fail for an unrelated reason, but not due to a memory TOCTOU
> race.  But, in this case, if the cost (in time) of the claiming
> process is very small compared to the cost of the domain launch,
> we have solved the memory TOCTOU race with hardly any delay added
> to a non-memory-related failure that would have occurred anyway.
>
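In sketch form, the claim turns "N slow allocations" into "one cheap arithmetic check". A minimal model follows (illustrative Python; the class and member names are invented, not Xen internals):

```python
import threading

class HeapAccountant:
    """Toy model of claim-style accounting: a claim reserves capacity
    under one short lock hold, so a later, slow populate phase cannot
    lose a memory race to a competing domain creation."""

    def __init__(self, total_pages):
        self._lock = threading.Lock()       # stands in for the heaplock
        self.free_pages = total_pages
        self.outstanding_claims = 0

    def claim(self, pages):
        """Stake a claim: cheap arithmetic, fails fast with no work done."""
        with self._lock:
            if pages > self.free_pages - self.outstanding_claims:
                return False                # -ENOMEM, reported immediately
            self.outstanding_claims += pages
            return True

    def alloc(self, pages):
        """One slab allocation, drawing down the caller's claim."""
        with self._lock:
            if pages > self.free_pages:
                return False
            self.free_pages -= pages
            self.outstanding_claims = max(0, self.outstanding_claims - pages)
            return True
```

Two competing creations then resolve their race at claim time, in microseconds, rather than minutes into allocation.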
> This "claim" sounds promising.  But we have made an assumption that
> an "entity" has certain knowledge.  In the Xen system, that entity
> must be either the toolstack or the hypervisor.  Or, in the Oracle
> environment, an "agent"... but an agent and a toolstack are similar
> enough for our purposes that we will just use the more broadly-used
> term "toolstack".  In using this term, however, it's important to
> remember it is necessary to consider the existence of multiple
> threads within this toolstack.
>
> Now I quote Ian Jackson: "It is a key design principle of a system
> like Xen that the hypervisor should provide only those facilities
> which are strictly necessary.  Any functionality which can be
> reasonably provided outside the hypervisor should be excluded
> from it."
>
> So let's examine the toolstack first.
>
> [IIGT] Still all on the same page (pun intended)?
>
> TOOLSTACK-BASED CAPACITY ALLOCATION
>
> Does the toolstack know how many physical pages of RAM are available?
> Yes, it can use a hypercall to find out this information after Xen and
> dom0 launch, but before it launches any domain.  Then if it subtracts
> the number of pages used when it launches a domain, and adds them
> back when any domain dies, the toolstack has a pretty good estimate.
> In actuality, the toolstack doesn't _really_ know the exact number
> of pages used when a domain is launched, but there is a
> poorly-documented "fuzz factor"... the toolstack knows the number
> of pages within a few megabytes, which is probably close enough.
>
> This is a fairly good description of how the toolstack works today
> and the accounting seems simple enough, so does toolstack-based
> capacity allocation solve our original problem?  It would seem so.
> Even if there are multiple threads, the accounting -- not the extended
> sequence of page allocation for the domain creation -- can be
> serialized by a lock in the toolstack.  But note carefully: either
> the toolstack and the hypervisor must always be in sync on the
> number of available pages (within an acceptable margin of error);
> or any query to the hypervisor _and_ the toolstack-based claim must
> be paired atomically, i.e. the toolstack lock must be held across
> both.  Otherwise we again have another TOCTOU race.  Interesting,
> but probably not really a problem.
>
> Wait, isn't it possible for the toolstack to dynamically change the
> number of pages assigned to a domain?  Yes, this is often called
> ballooning and the toolstack can do this via a hypercall.  But
> that's still OK because each call goes through the toolstack and
> it simply needs to add more accounting for when it uses ballooning
> to adjust the domain's memory footprint.  So we are still OK.
>
> But wait again... that brings up an interesting point.  Are there
> any significant allocations that are done in the hypervisor without
> the knowledge and/or permission of the toolstack?  If so, the
> toolstack may be missing important information.
>
> So are there any such allocations?  Well... yes. There are a few.
> Let's take a moment to enumerate them:
>
> A) In Linux, a privileged user can write to a sysfs file which writes
> to the balloon driver which makes hypercalls from the guest kernel to

A fairly bizarre limitation of a balloon-based approach to memory
management. Why on earth should the guest be allowed to change the
size of its balloon, and therefore its footprint on the host? This may
be justified with arguments pertaining to the stability of the
in-guest workload. What they really reveal are limitations of
ballooning. But the inadequacy of the balloon in itself doesn't
automatically translate into justifying the need for a new hypercall.

> the hypervisor, which adjusts the domain memory footprint, which
> changes the number of free pages _without_ the toolstack's knowledge.
> The toolstack controls constraints (essentially a minimum and maximum)
> which the hypervisor enforces.  The toolstack can ensure that the
> minimum and maximum are identical to essentially disallow Linux from
> using this functionality.  Indeed, this is precisely what Citrix's
> Dynamic Memory Controller (DMC) does: enforce min==max so that DMC
> always has complete control and, so, knowledge of any domain memory
> footprint changes.  But DMC is not prescribed by the toolstack,

Neither is enforcing min==max. This was my argument when previously
commenting on this thread. The fact that you have enforcement of a
maximum domain allocation gives you an excellent tool to keep a
domain's unsupervised growth at bay. The toolstack can choose how
fine-grained, how often to be alerted and stall the domain.

> and some real Oracle Linux customers use and depend on the flexibility
> provided by in-guest ballooning.   So guest-privileged-user-driven
> ballooning is a potential issue for toolstack-based capacity allocation.
>
> [IIGT: This is why I have brought up DMC several times and have
> called this the "Citrix model"... I'm not trying to be snippy
> or impugn your morals as maintainers.]
>
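The min/max bound enforcement both sides refer to reduces to a clamp; a hypothetical sketch (the function name and parameters are invented for illustration, not Xen's actual code):

```python
def balloon_target(requested, minmem, maxmem):
    """Hypothetical sketch of the bound enforcement described above:
    the hypervisor clamps any guest-requested balloon target to the
    toolstack-set [minmem, maxmem] window.  With minmem == maxmem
    (the DMC model) the guest cannot move its footprint at all; with
    a looser maxmem, unsupervised growth is still capped."""
    return max(minmem, min(maxmem, requested))
```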
> B) Xen's page sharing feature has slowly been completed over a number
> of recent Xen releases.  It takes advantage of the fact that many
> pages often contain identical data; the hypervisor merges them to save

Great care has been taken for this statement to not be exactly true.
The hypervisor discards one of two pages that the toolstack tells it
to (and patches the physmap of the VM previously pointing to the
discarded page). It doesn't merge, nor does it look into contents. The
hypervisor doesn't care about the page contents. This is deliberate,
so as to avoid spurious claims of "you are using technique X!"

> physical RAM.  When any "shared" page is written, the hypervisor
> "splits" the page (aka, copy-on-write) by allocating a new physical
> page.  There is a long history of this feature in other virtualization
> products and it is known to be possible that, under many circumstances,
> thousands of splits may occur in any fraction of a second.  The
> hypervisor does not notify or ask permission of the toolstack.
> So, page-splitting is an issue for toolstack-based capacity
> allocation, at least as currently coded in Xen.
>
> [Andre: Please hold your objection here until you read further.]

Name is Andres. And please cc me if you'll be addressing me directly!

Note that I don't disagree with your previous statement in itself.
Although "page-splitting" is fairly unique terminology, and confusing
(at least to me). CoW works.

>
> C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
> toolstack for over three years.  It depends on an in-guest-kernel
> adaptive technique to constantly adjust the domain memory footprint as
> well as hooks in the in-guest-kernel to move data to and from the
> hypervisor.  While the data is in the hypervisor's care, interesting
> memory-load balancing between guests is done, including optional
> compression and deduplication.  All of this has been in Xen since 2009
> and has been awaiting changes in the (guest-side) Linux kernel.  Those
> changes are now merged into the mainstream kernel and are fully
> functional in shipping distros.
>
> While a complete description of tmem's guest<->hypervisor interaction
> is beyond the scope of this document, it is important to understand
> that any tmem-enabled guest kernel may unpredictably request thousands
> or even millions of pages directly via hypercalls from the hypervisor
> in a fraction of a second with absolutely no interaction with the
> toolstack.  Further, the guest-side hypercalls that allocate pages
> via the hypervisor are done in "atomic" code deep in the Linux mm
> subsystem.
>
> Indeed, if one truly understands tmem, it should become clear that
> tmem is fundamentally incompatible with toolstack-based capacity
> allocation. But let's stop discussing tmem for now and move on.

You have not discussed tmem pool thaw and freeze in this proposal.

>
> OK.  So with existing code both in Xen and Linux guests, there are
> three challenges to toolstack-based capacity allocation.  We'd
> really still like to do capacity allocation in the toolstack.  Can
> something be done in the toolstack to "fix" these three cases?
>
> Possibly.  But let's first look at hypervisor-based capacity
> allocation: the proposed "XENMEM_claim_pages" hypercall.
>
> HYPERVISOR-BASED CAPACITY ALLOCATION
>
> The posted patch for the claim hypercall is quite simple, but let's
> look at it in detail.  The claim hypercall is actually a subop
> of an existing hypercall.  After checking parameters for validity,
> a new function is called in the core Xen memory management code.
> This function takes the hypervisor heaplock, checks for a few
> special cases, does some arithmetic to ensure a valid claim, stakes
> the claim, releases the hypervisor heaplock, and then returns.  To
> review from earlier, the hypervisor heaplock protects _all_ page/slab
> allocations, so we can be absolutely certain that there are no other
> page allocation races.  This new function is about 35 lines of code,
> not counting comments.
>
> The patch includes two other significant changes to the hypervisor:
> First, when any adjustment to a domain's memory footprint is made
> (either through a toolstack-aware hypercall or one of the three
> toolstack-unaware methods described above), the heaplock is
> taken, arithmetic is done, and the heaplock is released.  This
> is 12 lines of code.  Second, when any memory is allocated within
> Xen, a check must be made (with the heaplock already held) to
> determine if, given a previous claim, the domain has exceeded
> its upper bound, maxmem.  This code is a single conditional test.
>
> With some declarations, but not counting the copious comments,
> all told, the new code provided by the patch is well under 100 lines.
>
> What about the toolstack side?  First, it's important to note that
> the toolstack changes are entirely optional.  If any toolstack
> wishes either to not fix the original problem, or to avoid toolstack-
> unaware allocation completely by ignoring the functionality provided
> by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
> not use the new hypercall.

You are ruling out any other possibility here. In particular, but not
limited to, use of max_pages.

>  Second, it's very relevant to note that the Oracle product uses a
> combination of a proprietary "manager" which oversees many machines,
> and the older open-source xm/xend toolstack, for which the current
> Xen toolstack maintainers are no longer accepting patches.
>
> The preface of the published patch does suggest, however, some
> straightforward pseudo-code, as follows:
>
> Current toolstack domain creation memory allocation code fragment:
>
> 1. call populate_physmap repeatedly to achieve mem=N memory
> 2. if any populate_physmap call fails, report -ENOMEM up the stack
> 3. memory is held until domain dies or the toolstack decreases it
>
> Proposed toolstack domain creation memory allocation code fragment
> (new code marked with "+"):
>
> +  call claim for mem=N amount of memory
> +  if claim succeeds:
> 1.  call populate_physmap repeatedly to achieve mem=N memory (failsafe)
> +  else
> 2.  report -ENOMEM up the stack
> +  claim is held until mem=N is achieved, or the domain dies, or it is
> +  forced to 0 by a second hypercall
> 3. memory is held until domain dies or the toolstack decreases it
>
> Reviewing the pseudo-code, one can readily see that the toolstack
> changes required to implement the hypercall are quite small.
>
> To complete this discussion, it has been pointed out that
> the proposed hypercall doesn't solve the original problem
> for certain classes of legacy domains... but also neither
> does it make the problem worse.  It has also been pointed
> out that the proposed patch is not (yet) NUMA-aware.
>
> Now let's return to the earlier question:  There are three
> challenges to toolstack-based capacity allocation, which are
> all handled easily by in-hypervisor capacity allocation. But we'd
> really still like to do capacity allocation in the toolstack.
> Can something be done in the toolstack to "fix" these three cases?
>
> The answer is, of course, certainly... anything can be done in
> software.  So, recalling Ian Jackson's stated requirement:
> =

> "Any functionality which can be reasonably provided outside the
>  hypervisor should be excluded from it."
>

> we are now left to evaluate the subjective term "reasonably".
>

> CAN TOOLSTACK-BASED CAPACITY ALLOCATION OVERCOME THE ISSUES?
>

> In earlier discussion on this topic, when page-splitting was raised
> as a concern, some of the authors of Xen's page-sharing feature
> pointed out that a mechanism could be designed such that "batches"
> of pages were pre-allocated by the toolstack and provided to the
> hypervisor to be utilized as needed for page-splitting.  Should the
> batch run dry, the hypervisor could stop the domain that was provoking
> the page-split until the toolstack could be consulted and the
> toolstack, at its leisure, could request the hypervisor to refill
> the batch, which then allows the page-split-causing domain to proceed.
>

> But this batch page-allocation isn't implemented in Xen today.
>

> Andres Lagar-Cavilla says "... this is because of shortcomings in the
> [Xen] mm layer and its interaction with wait queues, documented
> elsewhere."  In other words, this batching proposal requires
> significant changes to the hypervisor, which I think we
> all agreed we were trying to avoid.

This is a misunderstanding. There is no connection between the batching
proposal and what I was referring to in the quote. Certainly I never
advocated for pre-allocations.

The "significant changes to the hypervisor" statement is FUD. Everyone
you've addressed on this email makes significant changes to the
hypervisor, under the proviso that they are necessary/useful changes.

The interactions between the mm layer and wait queues need fixing, sooner
or later, claim hypercall or not. But they are not a blocker; they are
essentially a race that may trigger under certain circumstances. That is
why they remain a low-priority fix.

>

> [Note to Andres: I'm not objecting to the need for this functionality
> for page-sharing to work with proprietary kernels and DMC; just

Let me nip this in the bud. I use page sharing and other techniques in
an environment that doesn't use Citrix's DMC, nor is focused only on
proprietary kernels...

> pointing out that it, too, is dependent on further hypervisor changes.]

… with 4.2 Xen. It is not perfect and has limitations that I am trying
to fix. But our product ships, and page sharing works for anyone who
would want to consume it, independently of further hypervisor changes.

>

> Such an approach makes sense in the min==max model enforced by
> DMC but, again, DMC is not prescribed by the toolstack.
>

> Further, this waitqueue solution for page-splitting only awkwardly
> works around in-guest ballooning (probably only with more hypervisor
> changes, TBD) and would be useless for tmem.  [IIGT: Please argue
> this last point only if you feel confident you truly understand how
> tmem works.]

I will argue though that "waitqueue solution … ballooning" is not true.
Ballooning has never needed hypervisor wait queues, nor does it suddenly
need them now.

>

> So this as-yet-unimplemented solution only really solves a part
> of the problem.

As per the previous comments, I don't see your characterization as accurate.

Andres
>

> Are there any other possibilities proposed?  Ian Jackson has
> suggested a somewhat different approach:
>

> Let me quote Ian Jackson again:
>

> "Of course if it is really desired to have each guest make its own
> decisions and simply for them to somehow agree to divvy up the
> available resources, then even so a new hypervisor mechanism is
> not needed.  All that is needed is a way for those guests to
> synchronise their accesses and updates to shared records of the
> available and in-use memory."
>

> Ian then goes on to say:  "I don't have a detailed counter-proposal
> design of course..."
>

> This proposal is certainly possible, but I think most would agree that
> it would require some fairly massive changes in OS memory management
> design that would run contrary to many years of computing history.
> It requires guest OS's to cooperate with each other about basic memory
> management decisions.  And to work for tmem, it would require
> communication from atomic code in the kernel to user-space, then
> communication from user-space in a guest to user-space-in-domain0
> and then (presumably... I don't have a design either) back again.
> One must also wonder what the performance impact would be.
>

> CONCLUDING REMARKS
>

> "Any functionality which can be reasonably provided outside the
>  hypervisor should be excluded from it."
>

> I think this document has described a real customer problem and
> a good solution that could be implemented either in the toolstack
> or in the hypervisor.  Memory allocation in existing Xen functionality
> has been shown to interfere significantly with the toolstack-based
> solution and suggested partial solutions to those issues either
> require even more hypervisor work, or are completely undesigned and,
> at least, call into question the definition of "reasonably".
>

> The hypervisor-based solution has been shown to be extremely
> simple, fits very logically with existing Xen memory management
> mechanisms/code, and has been reviewed through several iterations
> by Xen hypervisor experts.
>

> While I understand completely the Xen maintainers' desire to
> fend off unnecessary additions to the hypervisor, I believe
> XENMEM_claim_pages is a reasonable and natural hypervisor feature
> and I hope you will now Ack the patch.
>

> Acknowledgements: Thanks very much to Konrad for his thorough
> read-through and for suggestions on how to soften my combative
> style which may have alienated the maintainers more than the
> proposal itself.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb0-0001qm-5P; Tue, 04 Dec 2012 06:03:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflay-0001qa-H3
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:04 +0000
Received: from [85.158.139.211:41274] by server-10.bemta-5.messagelabs.com id
	74/70-09257-7129DB05; Tue, 04 Dec 2012 06:03:03 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26557 invoked from network); 4 Dec 2012 06:03:02 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:02 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918816"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:00 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:21 +0800
Message-Id: <1354600410-3390-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 01/10] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In nested VMX virtualization of MSR bitmaps, the L0 hypervisor traps all
MSR-access VM exits from the L2 guest by disabling the MSR_BITMAP feature.
When handling such a VM exit, the L0 hypervisor checks whether the L1
hypervisor uses the MSR_BITMAP feature and whether the corresponding bit
is set to 1. If so, L0 injects the VM exit into the L1 hypervisor;
otherwise, L0 is responsible for handling the VM exit itself.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame (nvmx->msrbitmap); 
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb2-0001rS-CC; Tue, 04 Dec 2012 06:03:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb0-0001qq-TS
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:07 +0000
Received: from [85.158.139.83:21304] by server-3.bemta-5.messagelabs.com id
	86/78-18736-9129DB05; Tue, 04 Dec 2012 06:03:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354600983!28220820!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzY5OTAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24360 invoked from network); 4 Dec 2012 06:03:04 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-182.messagelabs.com with SMTP;
	4 Dec 2012 06:03:04 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 03 Dec 2012 22:02:16 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="251488528"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 03 Dec 2012 22:03:01 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:22 +0800
Message-Id: <1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   37 +++++++++++++++++++++++++------------
 1 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..cf91c7c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1311,18 +1311,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50;
+               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
         /* 1-seetings */
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        /* Consult SDM for default1 setting */
+        tmp = ( (1<<1) | (1<<2) | (1<<4) );
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
         /* 1-seetings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1342,10 +1344,14 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        /* Consult SDM for default1 setting */
+        if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
+            tmp = 0x401e172;
+        else if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
+            tmp = 0x4006172;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
+
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
@@ -1355,9 +1361,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = (data << 32) | tmp;
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
-        /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+        /* Consult SDM for default1 setting */
+        if ( msr == MSR_IA32_VMX_EXIT_CTLS )
+            tmp = 0x36dff;
+        else if ( msr == MSR_IA32_VMX_TRUE_EXIT_CTLS )
+            tmp = 0x36dfb;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1379,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+        /* Consult SDM for default1 setting */
+        if ( msr == MSR_IA32_VMX_ENTRY_CTLS )
+            tmp = 0x11ff;
+        else if ( msr == MSR_IA32_VMX_TRUE_ENTRY_CTLS )
+            tmp = 0x11fb;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb1-0001rF-V2; Tue, 04 Dec 2012 06:03:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb0-0001qa-4c
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:06 +0000
Received: from [85.158.139.211:60159] by server-10.bemta-5.messagelabs.com id
	ED/70-09257-9129DB05; Tue, 04 Dec 2012 06:03:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26638 invoked from network); 4 Dec 2012 06:03:05 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:05 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:04 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918846"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:04 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:24 +0800
Message-Id: <1354600410-3390-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 04/10] nested vmx: fix handling of RDTSC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If L0 is to handle the TSC access, then we need to update guest EIP by
calling update_guest_eip().

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 ++
 3 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3bb0d99..9fb9562 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1555,7 +1555,7 @@ static int get_instruction_length(void)
     return len;
 }
 
-static void update_guest_eip(void)
+void update_guest_eip(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned long x;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6a8caaa..cf3797c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1626,6 +1626,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
             tsc += __get_vvmcs(nvcpu->nv_vvmcx, TSC_OFFSET);
             regs->eax = (uint32_t)tsc;
             regs->edx = (uint32_t)(tsc >> 32);
+            update_guest_eip();
 
             return 1;
         }
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c4c2fe8..aa5b080 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -399,6 +399,8 @@ void ept_p2m_init(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
+void update_guest_eip(void);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb0-0001qm-5P; Tue, 04 Dec 2012 06:03:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflay-0001qa-H3
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:04 +0000
Received: from [85.158.139.211:41274] by server-10.bemta-5.messagelabs.com id
	74/70-09257-7129DB05; Tue, 04 Dec 2012 06:03:03 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26557 invoked from network); 4 Dec 2012 06:03:02 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:02 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918816"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:00 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:21 +0800
Message-Id: <1354600410-3390-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 01/10] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In nested vmx virtualization for MSR bitmaps, L0 hypervisor will trap all the VM
exit from L2 guest by disable the MSR_BITMAP feature. When handling this VM exit,
L0 hypervisor judges whether L1 hypervisor uses MSR_BITMAP feature and the
corresponding bit is set to 1. If so, L0 will inject such VM exit into L1
hypervisor; otherwise, L0 will be responsible for handling this VM exit.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

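[Editor's note: the MSR-bitmap lookup added by vmx_check_msr_bitmap() above can be sketched in standalone form. This is an illustrative reimplementation of the VMX bitmap layout (read-low at 0x000, read-high at 0x400, write-low at 0x800, write-high at 0xc00), not the Xen code itself; msr_bitmap_intercepted() is a hypothetical name.]

```c
/* Sketch of the 4KiB VMX MSR bitmap: four 1024-byte regions, one bit
 * per MSR, indexed by access direction and MSR range. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MSR_BITMAP_BYTES 4096

/* Returns nonzero if an access to 'msr' would cause a VM exit. */
static int msr_bitmap_intercepted(const uint8_t *bitmap, uint32_t msr,
                                  int is_write)
{
    uint32_t base;

    if (msr <= 0x1fff)
        base = is_write ? 0x800 : 0x000;            /* low MSR range */
    else if (msr >= 0xc0000000u && msr <= 0xc0001fffu) {
        msr &= 0x1fff;
        base = is_write ? 0xc00 : 0x400;            /* high MSR range */
    } else
        return 1;                                   /* out of range: always exit */

    return (bitmap[base + msr / 8] >> (msr % 8)) & 1;
}
```

As in the patch, an access falls through to "intercepted" unless the bitmap explicitly clears the corresponding bit.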
From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb6-0001sj-9K; Tue, 04 Dec 2012 06:03:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb4-0001s2-SN
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:11 +0000
Received: from [193.109.254.147:13747] by server-9.bemta-14.messagelabs.com id
	9C/83-30773-E129DB05; Tue, 04 Dec 2012 06:03:10 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354600988!2404517!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MDk2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32417 invoked from network); 4 Dec 2012 06:03:09 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-27.messagelabs.com with SMTP;
	4 Dec 2012 06:03:09 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 03 Dec 2012 22:03:08 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="258602195"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 03 Dec 2012 22:03:07 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:27 +0800
Message-Id: <1354600410-3390-8-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 07/10] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, the APIC-access
address must be synchronized from the virtual VMCS (vvmcs) into the
shadow VMCS on virtual_vmentry.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 1304636..bc9d39c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1356,7 +1375,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1692,6 +1712,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

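[Editor's note: the gfn-to-mfn translation this patch performs on virtual vmentry can be sketched outside Xen. gfn_to_mfn() and the flat shadow_apic_access_addr variable are stand-ins invented for the example; the real code uses hvm_map_guest_frame_ro()/virt_to_mfn() and __vmwrite(APIC_ACCESS_ADDR, ...).]

```c
/* Illustrative sketch, not Xen's implementation: translate the L1-
 * provided APIC-access guest frame to a machine frame before loading
 * it into the shadow VMCS field. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES (1u << 0)

/* Toy p2m for the example: mfn = gfn + 0x1000 (assumption). */
static uint64_t gfn_to_mfn(uint64_t gfn) { return gfn + 0x1000; }

/* Stand-in for __vmwrite(APIC_ACCESS_ADDR, ...). */
static uint64_t shadow_apic_access_addr;

static void sync_apic_access_addr(uint32_t n2_secondary_ctrl,
                                  uint64_t vvmcs_apic_access_addr)
{
    if (!(n2_secondary_ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
        return;                         /* feature off: nothing to sync */

    uint64_t gfn = vvmcs_apic_access_addr >> PAGE_SHIFT;
    shadow_apic_access_addr = gfn_to_mfn(gfn) << PAGE_SHIFT;
}
```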
From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb1-0001rF-V2; Tue, 04 Dec 2012 06:03:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb0-0001qa-4c
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:06 +0000
Received: from [85.158.139.211:60159] by server-10.bemta-5.messagelabs.com id
	ED/70-09257-9129DB05; Tue, 04 Dec 2012 06:03:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26638 invoked from network); 4 Dec 2012 06:03:05 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:05 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:04 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918846"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:04 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:24 +0800
Message-Id: <1354600410-3390-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 04/10] nested vmx: fix handling of RDTSC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If L0 handles the TSC access itself, guest EIP must be advanced past
the instruction by calling update_guest_eip().

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 ++
 3 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3bb0d99..9fb9562 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1555,7 +1555,7 @@ static int get_instruction_length(void)
     return len;
 }
 
-static void update_guest_eip(void)
+void update_guest_eip(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned long x;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6a8caaa..cf3797c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1626,6 +1626,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
             tsc += __get_vvmcs(nvcpu->nv_vvmcx, TSC_OFFSET);
             regs->eax = (uint32_t)tsc;
             regs->edx = (uint32_t)(tsc >> 32);
+            update_guest_eip();
 
             return 1;
         }
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c4c2fe8..aa5b080 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -399,6 +399,8 @@ void ept_p2m_init(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
+void update_guest_eip(void);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

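[Editor's note: a minimal sketch of the nested RDTSC emulation path, including the EIP advance this patch adds. The structure and helper names are illustrative, not Xen's; in the real code the L1 offset is already folded in by hvm_get_guest_tsc() and the L2 offset comes from the vvmcs TSC_OFFSET field.]

```c
/* L0 emulates RDTSC for L2: apply both TSC offsets, split the result
 * across EDX:EAX, and advance RIP past the 2-byte opcode (0F 31). */
#include <assert.h>
#include <stdint.h>

struct regs { uint32_t eax, edx; uint64_t eip; };

static void emulate_nested_rdtsc(struct regs *r, uint64_t host_tsc,
                                 uint64_t l1_tsc_offset, uint64_t l2_tsc_offset)
{
    uint64_t tsc = host_tsc + l1_tsc_offset + l2_tsc_offset;

    r->eax = (uint32_t)tsc;
    r->edx = (uint32_t)(tsc >> 32);
    r->eip += 2;                 /* the update_guest_eip() step the patch adds */
}
```

Without the final RIP advance, the guest would re-execute RDTSC forever, which is exactly the bug being fixed.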
From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb0-0001r0-Hz; Tue, 04 Dec 2012 06:03:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflaz-0001qh-EP
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:05 +0000
Received: from [85.158.139.211:19009] by server-14.bemta-5.messagelabs.com id
	ED/8D-21768-8129DB05; Tue, 04 Dec 2012 06:03:04 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26596 invoked from network); 4 Dec 2012 06:03:04 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:04 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918832"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:03 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:23 +0800
Message-Id: <1354600410-3390-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 03/10] nested vmx: fix rflags status in virtual
	vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As stated in the SDM, all RFLAGS bits except the always-one reserved
bit 1 are cleared to 0 on VM exit. virtual_vmexit must follow the same
logic.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index cf91c7c..6a8caaa 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -991,7 +991,8 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
 
     regs->eip = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RIP);
     regs->esp = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RSP);
-    regs->eflags = __vmread(GUEST_RFLAGS);
+    /* VM exit clears all bits except bit 1 */
+    regs->eflags = 2;
 
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

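[Editor's note: the constant chosen above can be sanity-checked in isolation; X86_EFLAGS_FIXED is a name assumed for this example.]

```c
/* On VM exit RFLAGS collapses to its architectural reset value: every
 * flag cleared, only the always-one reserved bit 1 set. */
#include <assert.h>
#include <stdint.h>

#define X86_EFLAGS_FIXED (1u << 1)   /* reserved bit, reads as 1 */

static uint64_t rflags_after_vmexit(void)
{
    return X86_EFLAGS_FIXED;         /* == 2, the value the patch loads */
}
```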
From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb2-0001rS-CC; Tue, 04 Dec 2012 06:03:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb0-0001qq-TS
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:07 +0000
Received: from [85.158.139.83:21304] by server-3.bemta-5.messagelabs.com id
	86/78-18736-9129DB05; Tue, 04 Dec 2012 06:03:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354600983!28220820!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzY5OTAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24360 invoked from network); 4 Dec 2012 06:03:04 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-182.messagelabs.com with SMTP;
	4 Dec 2012 06:03:04 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 03 Dec 2012 22:02:16 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="251488528"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 03 Dec 2012 22:03:01 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:22 +0800
Message-Id: <1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   37 +++++++++++++++++++++++++------------
 1 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..cf91c7c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1311,18 +1311,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50;
+               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
         /* 1-settings */
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        /* Consult SDM for default1 setting */
+        tmp = ( (1<<1) | (1<<2) | (1<<4) );
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
         /* 1-settings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1342,10 +1344,14 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        /* Consult SDM for default1 setting */
+        if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
+            tmp = 0x401e172;
+        else if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
+            tmp = 0x4006172;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
+
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
@@ -1355,9 +1361,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = (data << 32) | tmp;
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
-        /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+        /* Consult SDM for default1 setting */
+        if ( msr == MSR_IA32_VMX_EXIT_CTLS )
+            tmp = 0x36dff;
+        else if ( msr == MSR_IA32_VMX_TRUE_EXIT_CTLS )
+            tmp = 0x36dfb;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1379,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+        /* Consult SDM for default1 setting */
+        if ( msr == MSR_IA32_VMX_ENTRY_CTLS )
+            tmp = 0x11ff;
+        else if ( msr == MSR_IA32_VMX_TRUE_ENTRY_CTLS )
+            tmp = 0x11fb;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb6-0001ss-MA; Tue, 04 Dec 2012 06:03:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb5-0001sB-FT
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:11 +0000
Received: from [85.158.139.211:41630] by server-4.bemta-5.messagelabs.com id
	98/F2-15011-E129DB05; Tue, 04 Dec 2012 06:03:10 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!5
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27107 invoked from network); 4 Dec 2012 06:03:10 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:10 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:09 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918865"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:09 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:28 +0800
Message-Id: <1354600410-3390-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 08/10] nested vmx: enable PAUSE and RDPMC
	exiting for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index bc9d39c..bbf5266 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1363,6 +1363,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
+               CPU_BASED_PAUSE_EXITING |
+               CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* Consult SDM for default1 setting */
         if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb8-0001ti-Ab; Tue, 04 Dec 2012 06:03:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb6-0001sd-NR
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:12 +0000
Received: from [85.158.139.211:60460] by server-8.bemta-5.messagelabs.com id
	79/F2-06050-F129DB05; Tue, 04 Dec 2012 06:03:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!6
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27198 invoked from network); 4 Dec 2012 06:03:11 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:11 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918871"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:10 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:29 +0800
Message-Id: <1354600410-3390-10-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 09/10] nested vmx: fix interrupt delivery to L2
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When delivering an interrupt into the L2 guest, the L0 hypervisor needs to
check whether the L1 hypervisor wants to own the interrupt; if not, L0
injects the interrupt directly into the L2 guest.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 3961bc7..ef8b925 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -163,7 +163,7 @@ enum hvm_intblk nvmx_intr_blocked(struct vcpu *v)
 
 static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 {
-    u32 exit_ctrl;
+    u32 ctrl;
 
     if ( nvmx_intr_blocked(v) != hvm_intblk_none )
     {
@@ -176,11 +176,14 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
         if ( intack.source == hvm_intsrc_pic ||
                  intack.source == hvm_intsrc_lapic )
         {
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, PIN_BASED_VM_EXEC_CONTROL);
+            if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
+                return 0;
+
             vmx_inject_extint(intack.vector);
 
-            exit_ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
-                            VM_EXIT_CONTROLS);
-            if ( exit_ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, VM_EXIT_CONTROLS);
+            if ( ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
             {
                 /* for now, duplicate the ack path in vmx_intr_assist */
                 hvm_vcpu_ack_pending_irq(v, intack);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflax-0001qV-OS; Tue, 04 Dec 2012 06:03:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflaw-0001qO-PE
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:02 +0000
Received: from [193.109.254.147:13380] by server-5.bemta-14.messagelabs.com id
	42/73-10257-5129DB05; Tue, 04 Dec 2012 06:03:01 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354600980!1581491!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM1OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16329 invoked from network); 4 Dec 2012 06:03:01 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-7.tower-27.messagelabs.com with SMTP;
	4 Dec 2012 06:03:01 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 03 Dec 2012 22:03:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="175786352"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 03 Dec 2012 22:02:59 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:20 +0800
Message-Id: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH 00/10] nested vmx: bug fixes and feature enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series of patches contains bug fixes and feature enablement for nested VMX; please help review and pull.

Thanks,
Dongxiao

Dongxiao Xu (10):
  nested vmx: emulate MSR bitmaps
  nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
  nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
  nested vmx: fix interrupt delivery to L2 guest
  nested vmx: check host ability when intercept MSR read

 xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 ++++++++
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |  128 ++++++++++++++++++++++++++++++------
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 7 files changed, 148 insertions(+), 25 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb8-0001tv-Md; Tue, 04 Dec 2012 06:03:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb7-0001t9-JN
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:13 +0000
Received: from [85.158.143.99:16721] by server-2.bemta-4.messagelabs.com id
	FE/8F-28922-0229DB05; Tue, 04 Dec 2012 06:03:12 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1354600988!18162548!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM1OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23720 invoked from network); 4 Dec 2012 06:03:08 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-8.tower-216.messagelabs.com with SMTP;
	4 Dec 2012 06:03:08 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 03 Dec 2012 22:03:07 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="175786406"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 03 Dec 2012 22:03:06 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:26 +0800
Message-Id: <1354600410-3390-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 06/10] nested vmx: enable IA32E mode while do VM
	entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0ac78af..1304636 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1388,7 +1388,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
             tmp = 0x11fb;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
-               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
+               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
+               VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
         break;
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tflb2-0001rc-QX; Tue, 04 Dec 2012 06:03:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tflb1-0001rB-Rd
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:08 +0000
Received: from [85.158.139.211:19101] by server-1.bemta-5.messagelabs.com id
	3B/43-09311-A129DB05; Tue, 04 Dec 2012 06:03:06 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354600982!18988522!4
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTE5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26671 invoked from network); 4 Dec 2012 06:03:06 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 06:03:06 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 03 Dec 2012 22:03:06 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="256918849"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 03 Dec 2012 22:03:05 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:25 +0800
Message-Id: <1354600410-3390-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 05/10] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the DR registers we use a lazy restore mechanism on access. Therefore,
when receiving such a VM exit, L0 is responsible for switching to the
correct DR values first, and only then injecting the exit into the L1
hypervisor.
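
The check added by this patch can be sketched as a standalone toy model (hypothetical `toy_*` names, not the actual Xen structures): guest DR values are loaded onto the hardware only on first access, a dirty flag records whether the hardware currently holds guest values, and a nested DR-access exit is forwarded to L1 only when that flag is set.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the lazy debug-register scheme (hypothetical names; not the
 * actual Xen API). Guest DR values are loaded onto the hardware only on
 * first access; flag_dr_dirty records whether the hardware currently holds
 * guest values. */
struct toy_vcpu {
    bool flag_dr_dirty;   /* hardware DRs currently hold guest values */
    bool mov_dr_exiting;  /* L1 requested MOV-DR exiting */
};

/* First guest DR access under L0: load guest DR values, mark them live. */
static void toy_dr_access(struct toy_vcpu *v)
{
    v->flag_dr_dirty = true;
}

/* Forward a DR-access exit to L1 only when L1 asked for MOV-DR exiting and
 * the guest DR state is actually live on the hardware. */
static bool toy_forward_dr_exit(const struct toy_vcpu *v)
{
    return v->mov_dr_exiting && v->flag_dr_dirty;
}
```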

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index cf3797c..0ac78af 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1654,7 +1654,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_MOV_DR_EXITING )
-            nvcpu->nv_vmexit_pending = 1;
+            if ( v->arch.hvm_vcpu.flag_dr_dirty )
+                nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
         ctrl = __n2_exec_control(v);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 06:03:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 06:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TflbM-00020J-GQ; Tue, 04 Dec 2012 06:03:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TflbJ-0001z8-Om
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 06:03:25 +0000
Received: from [85.158.139.83:26079] by server-13.bemta-5.messagelabs.com id
	DE/C2-27809-C229DB05; Tue, 04 Dec 2012 06:03:24 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1354601003!20907543!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM1OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12142 invoked from network); 4 Dec 2012 06:03:24 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-16.tower-182.messagelabs.com with SMTP;
	4 Dec 2012 06:03:24 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 03 Dec 2012 22:03:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,213,1355126400"; d="scan'208";a="226211074"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 03 Dec 2012 22:03:11 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue,  4 Dec 2012 13:53:30 +0800
Message-Id: <1354600410-3390-11-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 10/10] nested vmx: check host ability when
	intercepting MSR reads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the guest hypervisor tries to read a VMX capability MSR, we intercept
the access and return an emulated value. Besides that, we also need to
ensure that the emulated value is compatible with the host's actual
capabilities.
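
The clamping expression repeated throughout this patch can be isolated as a small helper (hypothetical name `clamp_vmx_ctls`, not the Xen function): for the VMX control capability MSRs, bits 63:32 are the allowed-1 settings, so a control may be offered to the guest only if the host also supports it (AND), while bits 31:0 are the allowed-0 settings, so a control the host forces to 1 must stay forced to 1 for the guest as well (OR).

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper showing the bit pattern used by this patch to clamp
 * an emulated VMX capability MSR value against the host's:
 *   high 32 bits (allowed-1): guest may enable a control only if the host
 *                             can (bitwise AND);
 *   low 32 bits (allowed-0):  a control the host forces on stays forced on
 *                             for the guest (bitwise OR). */
static uint64_t clamp_vmx_ctls(uint64_t data, uint64_t host_data)
{
    return ((data & host_data) & (~0ull << 32)) |
           ((data | host_data) & 0xffffffffull);
}
```

For example, an emulated control bit present only in `data`'s allowed-1 half is dropped when the host lacks it, while any must-be-one bit from either side survives in the low half.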

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   19 ++++++++++++++-----
 1 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index bbf5266..f2bba1b 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1319,19 +1319,20 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp = 0;
+    u64 data = 0, host_data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
         return 0;
 
+    rdmsrl(msr, host_data);
+
     /*
      * Remove unsupported features from n1 guest capability MSR
      */
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
-        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
+        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
@@ -1342,6 +1343,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* Consult SDM for default1 setting */
         tmp = ( (1<<1) | (1<<2) | (1<<4) );
         data = ((data | tmp) << 32) | (tmp);
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
@@ -1373,7 +1376,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
             tmp = 0x4006172;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
-
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
@@ -1382,6 +1386,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
     case MSR_IA32_VMX_TRUE_EXIT_CTLS:
@@ -1400,6 +1406,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
 	/* 0-settings */
         data = ((data | tmp) << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
     case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
@@ -1413,8 +1421,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
                VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
-
     case IA32_FEATURE_CONTROL_MSR:
         data = IA32_FEATURE_CONTROL_MSR_LOCK | 
                IA32_FEATURE_CONTROL_MSR_ENABLE_VMXON_OUTSIDE_SMX;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 07:23:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 07:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfmqi-0004iS-6M; Tue, 04 Dec 2012 07:23:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tfmqg-0004iN-ER
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 07:23:22 +0000
Received: from [85.158.143.35:47661] by server-3.bemta-4.messagelabs.com id
	4E/15-06841-9E4ADB05; Tue, 04 Dec 2012 07:23:21 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1354605801!13503420!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29456 invoked from network); 4 Dec 2012 07:23:21 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 07:23:21 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so2401441eek.32
	for <xen-devel@lists.xen.org>; Mon, 03 Dec 2012 23:23:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=/H1QF1Vn/vD/wF7U4Plra4U8XoBkCpJX/s14/vCCjQM=;
	b=d0Au33XFsKezDioCfAM1AFB1D2dQEZgoBg8bwigIIrcmHhWoXx/k5tWt2EIt4fU34N
	ls4vZHVB1u2+2Jb3hY5vd1NKNhlmDXFJeadJ800E/nwTcih3URtN156QkeJrl/sIo5HU
	/h3EPbJsavnca856UxgdcLXMza682IWOgPEHzd173bjjUdvU3NK9HxbOq3a4r3PVexs8
	8FpG0IgUoMZYNPVZFjxm7j3b5a36UltfA3EETPnBf0Jg8sgWLgucC37pZXEdd9/eDrRw
	D3m7zkQ5vaf+lRr0EMS0iT8uAzRzNuI2SXyPNUvcNyKu+BhfuqptQQsHyIpHByeBAfWC
	2aGw==
Received: by 10.14.221.9 with SMTP id q9mr45918462eep.3.1354605801159;
	Mon, 03 Dec 2012 23:23:21 -0800 (PST)
Received: from [192.168.228.100] ([188.25.222.143])
	by mx.google.com with ESMTPS id w3sm912975eel.17.2012.12.03.23.23.19
	(version=SSLv3 cipher=OTHER); Mon, 03 Dec 2012 23:23:20 -0800 (PST)
Message-ID: <50BDA4E4.5030107@gmail.com>
Date: Tue, 04 Dec 2012 09:23:16 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121128 Thunderbird/10.0.11
MIME-Version: 1.0
To: AP <apxeng@gmail.com>
References: <50B77375.9070904@gmail.com> <50B77CB8.1040606@gmail.com>
	<CAGU+auuPUkYmccw9TEOut4T9JAxLduvEEMUc_unvnJExYECtEw@mail.gmail.com>
	<50B91273.6050606@gmail.com>
	<CAGU+ausjwgy+NO4RynjrdXyyPrHO6_MqrwXpqYHAOXrr0vmzrg@mail.gmail.com>
	<50B91AE0.1040206@gmail.com>
	<CAGU+autfQNe5V8rZ6z56NXUra2_jcRQmQHwMY454Xk=8dXpV+w@mail.gmail.com>
	<50B91F69.3040506@gmail.com>
	<CAGU+ausXSpOtBRjp3XbRNc0Jei+sJ62H9Jh=Mwn_WEHdKtCZ=g@mail.gmail.com>
In-Reply-To: <CAGU+ausXSpOtBRjp3XbRNc0Jei+sJ62H9Jh=Mwn_WEHdKtCZ=g@mail.gmail.com>
Cc: jepstein98@gmail.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Mem_event API and MEM_EVENT_REASON_SINGLESTEP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> So if I understand you correctly, simply single-stepping for only the
>> duration of one MEM_EVENT_REASON_SINGLESTEP, which should be the write
>> operation (ignoring the gfn/gla fields of the mem_event), should do the
>> trick?
> 
> That is correct. Let me know how it goes.

So far, so good. It did work when I tried it.

Thank you for your help,
Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 07:24:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 07:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfmr2-0004jH-Jr; Tue, 04 Dec 2012 07:23:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tfmr1-0004j4-Hm
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 07:23:43 +0000
Received: from [85.158.137.99:38858] by server-13.bemta-3.messagelabs.com id
	B5/BA-24887-EF4ADB05; Tue, 04 Dec 2012 07:23:42 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354605821!16920524!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY3MzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2109 invoked from network); 4 Dec 2012 07:23:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 07:23:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,213,1355097600"; d="scan'208";a="16136888"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 07:23:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Tue, 4 Dec 2012 07:22:59 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TfmqI-0001Qv-Kt;
	Tue, 04 Dec 2012 07:22:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TfmqI-00011l-6N;
	Tue, 04 Dec 2012 07:22:58 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14559-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Dec 2012 07:22:58 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14559: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14559 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14559/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14558
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14558

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  29247e44df47
baseline version:
 xen                  29247e44df47

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 08:08:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 08:08:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfnYC-0005xx-Gy; Tue, 04 Dec 2012 08:08:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfnYA-0005xs-Qh
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 08:08:19 +0000
Received: from [193.109.254.147:7860] by server-5.bemta-14.messagelabs.com id
	D5/F1-10257-27FADB05; Tue, 04 Dec 2012 08:08:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354608322!3623793!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 424 invoked from network); 4 Dec 2012 08:05:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 08:05:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 08:05:21 +0000
Message-Id: <50BDBCCB02000078000AD8D8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 08:05:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <Wei.Liu2@citrix.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
	<50BCF6D502000078000AD6AB@nat28.tlf.novell.com>
	<1354558168.18784.26.camel@iceland>
In-Reply-To: <1354558168.18784.26.camel@iceland>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 19:09, Wei Liu <Wei.Liu2@citrix.com> wrote:
> On Mon, 2012-12-03 at 18:00 +0000, Jan Beulich wrote:
>> >>> On 03.12.12 at 18:52, Wei Liu <Wei.Liu2@citrix.com> wrote:
>> > On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
>> >> Doesn't the guest also need to set up space for the 2nd level?
>> >> 
>> > 
>> > Yes. That will be embedded in percpu struct vcpu_info, which will be
>> > also register via the same hypercall op.
>> 
>> "struct vcpu_info"? Same hypercall? Or are you mixing up types?
>> 
> 
> What I meant was the second level will be embedded in struct vcpu_info,
> and the 2nd level will be registered via some hypercall (not the struct
> vcpu_info).

I would strongly recommend against embedding this in
struct vcpu_info, particularly given the intention to
allow for further levels in the future.

Plus I don't think you really can embed this - there's just not
enough space left (I'm sure you're aware that you can't
extend the structure in size); in fact there's no space left at
all in the architecture-independent part of the structure.

The only option you have is to declare the array you need to
add to immediately follow the structure when having used
the placement hypercall. That would probably be acceptable
for the second from the top level, but you'd again run into
(space) issues when wanting more than 3 levels (as you
validly said, all but the leaf level ought to be per-vCPU).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 08:10:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 08:10:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfnZV-00062d-Bz; Tue, 04 Dec 2012 08:09:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1TfnZT-00062S-Gm
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 08:09:39 +0000
Received: from [85.158.139.211:64070] by server-13.bemta-5.messagelabs.com id
	C5/44-27809-2CFADB05; Tue, 04 Dec 2012 08:09:38 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354608576!18972742!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1986 invoked from network); 4 Dec 2012 08:09:37 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-11.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 08:09:37 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id qB4899EX026124;
	Tue, 4 Dec 2012 02:09:10 -0600
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id qB4899UX026123;
	Tue, 4 Dec 2012 02:09:09 -0600
Date: Tue, 4 Dec 2012 02:09:09 -0600
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201212040809.qB4899UX026123@wind.enjellic.com>
In-Reply-To: Konrad Rzeszutek Wilk <konrad@kernel.org>
	"Re: [Xen-devel] Updated ATI passthrough patches." (Dec  3, 10:22am)
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Tue, 04 Dec 2012 02:09:10 -0600 (CST)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Updated ATI passthrough patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Dec 3, 10:22am, Konrad Rzeszutek Wilk wrote:
} Subject: Re: [Xen-devel] Updated ATI passthrough patches.

Good morning, hope the day is going well for everyone.

> On Sat, Dec 01, 2012 at 06:24:59PM -0600, Dr. Greg Wettstein wrote:
> > Hi, hope the weekend is going well for everyone.
> > 
> > I just put a set of updated patches to support ATI passthrough on a
> > primary video adapter on the FTP site.  The URL's are as follows:
> > 
> > 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.ati-passthrough.patch
> > 
> > 	ftp://ftp.enjellic.com/pub/xen/xen-4.2.0.ati-passthrough.patch
> > 
> > These patches have been validated to work up through kernel 3.4.18
> > with the xm control plane.  We are currently working on validating
> > whether or not there are passthrough issues with xl.
> > 
> > The original ATI pass-through patches posted to xen-devel fail with a
> > qemu-dm segmentation fault on recent kernels.  This is caused by
> > changes which have been made with respect to proper enforcement of the
> > permitted port ranges on the ioperm() system call.
> > 
> > Since these patches allow a primary graphics adapter to be used for
> > passthrough I wanted to remind everyone of the availability of the
> > following utility script:
> > 
> > 	ftp://ftp.enjellic.com/pub/xen/run-passthrough
> > 
> > Which automates the process of unplugging and re-plugging a video card
> > and optionally a USB controller.  A script like this, or a network
> > login, is needed in order to use passthrough on a primary graphics
> > adapter.

> Great! What type of config options do you end up using for your guests?
> gfx_pass.., msix_translate=.. ? Or just normal
> 
> pci=[BDF] ?

The configuration is pretty straightforward.  The active parameters
are as follows:

----------------------------------------------------------------------------
builder='hvm'
memory = 3072
name = "Windows"
vif = [ 'type=ioemu, bridge=bridge0, model=e1000' ]
acpi = 1
apic = 1
disk = [ 'phy:/dev/localvg1/winsnap,hda,w' ]
boot="c"
sdl=0
vnc=0
soundhw='ac97'
stdvga=1
gfx_passthru=1
pci=[ '01:00.0', '00:1a.0' ]
----------------------------------------------------------------------------

The 01:00.0 device is the ATI video card:

01:00.0 VGA compatible controller: ATI Technologies Inc Device 6898 (prog-if 00 [VGA controller])
        Subsystem: ASUSTeK Computer Inc. Device 0346
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at b0000000 (64-bit, prefetchable) [size=256M]
        Memory at c1a00000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at 3000 [size=256]
        Expansion ROM at c1a40000 [disabled] [size=128K]
        Capabilities: <access denied>

The 00:1a.0 device is the USB controller which has the keyboard and
mouse on it:

00:1a.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host
Controller (rev 05) (prog-if 20 [EHCI])
        Subsystem: Intel Corporation Device 34ec
        Flags: bus master, medium devsel, latency 0, IRQ 21
        Memory at c1b22000 (32-bit, non-prefetchable) [size=1K]
        Capabilities: <access denied>
        Kernel driver in use: ehci_hcd

This is under a 3.4.18 dom0 kernel which is statically compiled for
the hardware platform being used.

Xen is stock 4.2.0 with the ATI patches noted above applied.

The xm control plane is used since xl appears to be a non-starter with
all this.  A bit more on that later.

The above is rock solid under Windows 7.  I would judge about 30
extended Windows sessions were run over the last week.

Greg

}-- End of excerpt from Konrad Rzeszutek Wilk

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"If we thought this was a trap, we wouldn't be doing it, and as you know,
 we have a lot of lawyers."
                                -- Irving Wladawsky-Berger

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 08:10:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 08:10:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfnZV-00062d-Bz; Tue, 04 Dec 2012 08:09:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1TfnZT-00062S-Gm
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 08:09:39 +0000
Received: from [85.158.139.211:64070] by server-13.bemta-5.messagelabs.com id
	C5/44-27809-2CFADB05; Tue, 04 Dec 2012 08:09:38 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354608576!18972742!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1986 invoked from network); 4 Dec 2012 08:09:37 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-11.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 08:09:37 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id qB4899EX026124;
	Tue, 4 Dec 2012 02:09:10 -0600
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id qB4899UX026123;
	Tue, 4 Dec 2012 02:09:09 -0600
Date: Tue, 4 Dec 2012 02:09:09 -0600
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201212040809.qB4899UX026123@wind.enjellic.com>
In-Reply-To: Konrad Rzeszutek Wilk <konrad@kernel.org>
	"Re: [Xen-devel] Updated ATI passthrough patches." (Dec  3, 10:22am)
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Tue, 04 Dec 2012 02:09:10 -0600 (CST)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Updated ATI passthrough patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Dec 3, 10:22am, Konrad Rzeszutek Wilk wrote:
} Subject: Re: [Xen-devel] Updated ATI passthrough patches.

Good morning, hope the day is going well for everyone.

> On Sat, Dec 01, 2012 at 06:24:59PM -0600, Dr. Greg Wettstein wrote:
> > Hi, hope the weekend is going well for everyone.
> > 
> > I just put a set of updated patches to support ATI passthrough on a
> > primary video adapter on the FTP site.  The URLs are as follows:
> > 
> > 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.ati-passthrough.patch
> > 
> > 	ftp://ftp.enjellic.com/pub/xen/xen-4.2.0.ati-passthrough.patch
> > 
> > These patches have been validated to work up through kernel 3.4.18
> > with the xm control plane.  We are currently working on validating
> > whether or not there are passthrough issues with xl.
> > 
> > The original ATI pass-through patches posted to xen-devel fail with a
> > qemu-dm segmentation fault on recent kernels.  This is caused by
> > changes which have been made with respect to proper enforcement of the
> > permitted port ranges on the ioperm() system call.
> > 
> > Since these patches allow a primary graphics adapter to be used for
> > passthrough I wanted to remind everyone of the availability of the
> > following utility script:
> > 
> > 	ftp://ftp.enjellic.com/pub/xen/run-passthrough
> > 
> > Which automates the process of unplugging and re-plugging a video card
> > and optionally a USB controller.  A script like this, or a network
> > login, is needed in order to use passthrough on a primary graphics
> > adapter.

> Great! What type of config options do you end up using for your guests?
> gfx_pass.., msix_translate=.. ? Or just normal
> 
> pci=[BDF] ?

The configuration is pretty straightforward.  The active parameters
are as follows:

----------------------------------------------------------------------------
builder='hvm'
memory = 3072
name = "Windows"
vif = [ 'type=ioemu, bridge=bridge0, model=e1000' ]
acpi = 1
apic = 1
disk = [ 'phy:/dev/localvg1/winsnap,hda,w' ]
boot="c"
sdl=0
vnc=0
soundhw='ac97'
stdvga=1
gfx_passthru=1
pci=[ '01:00.0', '00:1a.0' ]
----------------------------------------------------------------------------

The 01:00.0 device is the ATI video card:

01:00.0 VGA compatible controller: ATI Technologies Inc Device 6898
(prog-if 00 [VGA controller])
        Subsystem: ASUSTeK Computer Inc. Device 0346
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at b0000000 (64-bit, prefetchable) [size=256M]
        Memory at c1a00000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at 3000 [size=256]
        Expansion ROM at c1a40000 [disabled] [size=128K]
        Capabilities: <access denied>

The 00:1a.0 device is the USB controller which has the keyboard and
mouse on it:

00:1a.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host
Controller (rev 05) (prog-if 20 [EHCI])
        Subsystem: Intel Corporation Device 34ec
        Flags: bus master, medium devsel, latency 0, IRQ 21
        Memory at c1b22000 (32-bit, non-prefetchable) [size=1K]
        Capabilities: <access denied>
        Kernel driver in use: ehci_hcd
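
In case it is useful to anyone scripting this: the slot names fed to the
pci=[...] line are just the BDF prefixes of the lspci output above, and can
be pulled out mechanically.  A quick sketch of my own (the regex and helper
are illustrative, not part of the patches):

```python
import re

# Sample lspci lines as shown above (first line of each device entry).
LSPCI = """\
01:00.0 VGA compatible controller: ATI Technologies Inc Device 6898
00:1a.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05)
"""

# lspci prefixes each device with its bus:device.function (BDF) slot name.
BDF_RE = re.compile(r'^([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s', re.MULTILINE)

def bdfs(lspci_output):
    """Return the BDF slot names suitable for a pci=[...] config line."""
    return BDF_RE.findall(lspci_output)

print(bdfs(LSPCI))  # ['01:00.0', '00:1a.0']
```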

This is under a 3.4.18 dom0 kernel which is statically compiled for
the hardware platform being used.

Xen is stock 4.2.0 with the ATI patches noted above applied.

The xm control plane is used since xl appears to be a non-starter with
all this.  A bit more on that later.

The above is rock solid under Windows 7.  I would judge about 30
extended Windows sessions were run over the last week.

Greg

}-- End of excerpt from Konrad Rzeszutek Wilk

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"If we thought this was a trap, we wouldn't be doing it, and as you know,
 we have a lot of lawyers."
                                -- Irving Wladawsky-Berger

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 08:12:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 08:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfncH-0006JC-8w; Tue, 04 Dec 2012 08:12:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfncF-0006J4-W9
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 08:12:32 +0000
Received: from [193.109.254.147:44127] by server-11.bemta-14.messagelabs.com
	id F6/F4-29027-F60BDB05; Tue, 04 Dec 2012 08:12:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354608748!3624421!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25430 invoked from network); 4 Dec 2012 08:12:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 08:12:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 08:12:28 +0000
Message-Id: <50BDBE7902000078000AD8E1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 08:12:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <50B498E502000078000AB8B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A4834564403398F24@SHSMSX101.ccr.corp.intel.com>
	<50B7243602000078000AC61A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033994C8@SHSMSX101.ccr.corp.intel.com>
	<50B7389202000078000AC669@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339B365@SHSMSX101.ccr.corp.intel.com>
	<50BC68FE02000078000AD2B1@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339E774@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A483456440339E774@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATS and dependent features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 02:29, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:

> 
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Monday, December 03, 2012 3:55 PM
>> To: Zhang, Xiantao
>> Cc: xen-devel
>> Subject: RE: ATS and dependent features
>> 
>> >>> On 30.11.12 at 13:29, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> wrote:
>> 
>> >
>> >> -----Original Message-----
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: Thursday, November 29, 2012 5:28 PM
>> >> To: Zhang, Xiantao
>> >> Cc: xen-devel
>> >> Subject: RE: ATS and dependent features
>> >>
>> >> >>> On 29.11.12 at 10:19, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> >> wrote:
>> >>
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >> Sent: Thursday, November 29, 2012 4:01 PM
>> >> >> To: Zhang, Xiantao
>> >> >> Cc: xen-devel
>> >> >> Subject: RE: ATS and dependent features
>> >> >>
>> >> >> >>> On 29.11.12 at 02:07, "Zhang, Xiantao"
>> >> >> >>> <xiantao.zhang@intel.com>
>> >> >> wrote:
>> >> >> > ATS should be a host feature controlled by the IOMMU, and I don't
>> >> >> > think dom0 can control it from Xen's architecture.
>> >> >>
>> >> >> "Can" or "should"? Because from all I can tell it currently clearly does.
>> >> >
>> >> > I mean Xen shouldn't allow these capabilities to be detected by
>> >> > dom0.  If it does, we need to fix it.
>> >>
>> >> It sort of hides it - all callers sit in the kernel's IOMMU code, and
>> >> IOMMU detection is being prevented. So it looks like the code is
>> >> simply dead when running on top of Xen.
>> >
>> > I'm curious why dom0's !Xen kernel option for these features can solve
>> > the issue you met.
>> 
>> It doesn't "solve" the problem in that sense: As said, the code in question
>> only has callers in IOMMU code, which itself is dependent on !XEN in our
>> kernels (just to make clear - I'm talking about forward ported kernels here,
>> not pv-ops ones). So upstream probably just has to live with that code being
>> dead (at the moment, when run on top of Xen) and take the risk of there
>> appearing a caller elsewhere.
>> In our kernels, by making these options also dependent upon !XEN, we can
>> then actually detect (and actively deal with) an eventual new caller
>> elsewhere in the code, thus eliminating any risk of bad interaction between
>> Dom0 and Xen.
> 
> I think the !Xen you are talking about is a compile option, so this kernel
> can only be used for dom0 and can't run on native hardware with these
> features enabled?  If you don't need to keep the kernel running on native
> hardware, I think it is fine.

Yes, as said - this is for our forward ported kernel. Whether (and if
so how) the pv-ops one can add a similar safeguard I can't tell (and
doubt).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 08:19:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 08:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfnj2-0006Xn-Pi; Tue, 04 Dec 2012 08:19:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tfnj1-0006Xh-5F
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 08:19:31 +0000
Received: from [85.158.143.99:18816] by server-3.bemta-4.messagelabs.com id
	3D/BD-06841-212BDB05; Tue, 04 Dec 2012 08:19:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354609169!18312546!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2920 invoked from network); 4 Dec 2012 08:19:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 08:19:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 08:19:28 +0000
Message-Id: <50BDC01D02000078000AD8FD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 08:19:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <5097FD2902000078000A66BF@nat28.tlf.novell.com>
	<CCDE3016.54628%keir@xen.org>
	<50B88F8002000078000ACC8A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339D1E9@SHSMSX101.ccr.corp.intel.com>
	<50BC649E02000078000AD289@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339E6EB@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A483456440339E6EB@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: KeirFraser <keir@xen.org>, "wei.huang2@amd.com" <wei.huang2@amd.com>,
	Tim Deegan <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>,
	Dario Faggioli <raistlin@linux.it>,
	"weiwang.dd@gmail.com" <weiwang.dd@gmail.com>
Subject: Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on
 faults for devices used by Xen or Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 01:55, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
>>  
>> >>> On 03.12.12 at 07:08, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> wrote:
>> > If the phantom device support for IOMMU is in upstream,  is this patch
>> > still needed ?
>> 
>> Phantom function is unrelated to the behavioral adjustment here.
>> 
>> > Basically, I can't figure out why several faults should be allowed
>> > before disabling bus mastering.  Did you run into some real issues?  Thanks!
>> 
>> I observed quite a different driver failure pattern with and without this
>> adjustment, but in a contrived environment only. From the customer data
>> for the problem that prompted the phantom function work, I could also
>> conclude the same (comparing the driver failure under native Linux with
>> IOMMU turned on and the one under Xen).
>> 
>> But in any case, I am of the opinion that an occasional fault shouldn't give
>> reason to disable the device altogether - what we're aiming at is solely to
>> keep Xen and other domains functional (which doesn't require as drastic an
>> action as was carried out prior to this adjustment). Also, afaict native
>> Linux doesn't have any such disabling behavior at all.
> 
> Okay, maybe we need to align this with the native Linux side, and just keep
> the fault reporting instead of disabling the device. And if the number of
> faults reaches a limit, the hypervisor can choose to suppress its output.

Just suppressing the output is not enough (and I'm sure you're
aware that, unlike native Linux and for whatever obscure
reason, at least the VT-d code doesn't produce _any_ indication
of a fault in the hypervisor log) - the problem that was addressed
with the original change was that with high enough a fault rate,
a CPU could be made busy with just handling these faults.

Having moved the real handling of the fault into a tasklet only
reduced the impact, so I continue to think that shutting off
at least bus mastering on the device is the right thing to do.
But - this shutting off is based on the source ID of the request,
so if the source ID doesn't refer to the device at fault (as e.g.
would currently be the case for the Marvell controllers that
we're adding the phantom function support for), we would
continue to be in trouble. But buggy hardware can certainly be
expected not to be passed through to guests in the first place.
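
To make the trade-off concrete, here is a toy model of the policy being
debated, counting faults per source ID and cutting off bus mastering past a
threshold.  The threshold and the action names are invented for illustration
and correspond to nothing in the actual VT-d or AMD IOMMU code:

```python
class FaultPolicy:
    """Toy model: report each fault, but once a device exceeds a burst
    threshold, shut off its bus mastering so a fault storm cannot keep a
    CPU busy.  The limit of 8 is made up for illustration."""

    def __init__(self, limit=8):
        self.limit = limit
        self.counts = {}
        self.disabled = set()

    def on_fault(self, source_id):
        # source_id is the requester BDF taken from the fault record; as
        # noted above, buggy hardware may report the wrong source ID, in
        # which case the wrong device (or none) gets shut off.
        n = self.counts.get(source_id, 0) + 1
        self.counts[source_id] = n
        if n > self.limit and source_id not in self.disabled:
            self.disabled.add(source_id)
            return "disable-bus-mastering"
        return "report"
```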

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:25:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfokO-0007dm-AF; Tue, 04 Dec 2012 09:25:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfokM-0007dh-SL
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 09:24:59 +0000
Received: from [85.158.137.99:27249] by server-12.bemta-3.messagelabs.com id
	80/7A-22757-561CDB05; Tue, 04 Dec 2012 09:24:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354613051!12746551!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10434 invoked from network); 4 Dec 2012 09:24:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 09:24:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,214,1355097600"; d="scan'208";a="16139338"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 09:24:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	09:24:11 +0000
Message-ID: <1354613049.2693.50.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Tue, 4 Dec 2012 09:24:09 +0000
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE012D554A56E8@LONPMAILBOX01.citrite.net>
References: <patchbomb.1354112410@makatos-desktop>
	<1354201968.6269.14.camel@zakaz.uk.xensource.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE012D554A56E8@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 0 of 5 RFC] blktap3: Introduce xenio daemon
 (coordinate blkfront with tapdisk).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 17:55 +0000, Thanos Makatos wrote:
> > On Wed, 2012-11-28 at 14:20 +0000, Thanos Makatos wrote:
> > > This patch series introduces the xenio daemon.
> > 
> > Can we call it something less generic?
> 
> Sure, though I don't really have good alternatives. Conceptually this
> daemon is blkback's user space counterpart, but naming it blkback
> would bring much confusion I believe. What about "tapback"? (Just a
> suggestion.)

tapback works, or blktapd, blktap3d etc. Even xenioblkd perhaps?

> 
> > 
> > >  This daemon is responsible for
> > > coordinating blkfront and tapdisk, when blkfront changes state. It
> > does so by
> > > monitoring XenStore for blkfront state changes. The daemon
> > creates/destroys the
> > > shared ring, instructs the tapdisk to connect to/disconnect from it,
> > and
> > > manages the state of the back-end.
> > 
> > So xenio creates the ring and manages xenstore? How does tapdisk get
> > access to the ring?
> 
> Exactly, xenio creates the ring and manages xenstore. Tapdisk accesses
> the shared ring via a static library that will be introduced by
> another patch series -- it's the series I'm currently working on. I'll
> update this series' description to clarify this.
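(The coordination described above can be sketched as a dispatch on the
frontend's xenbus state. State names and values follow xen/io/xenbus.h; the
action tokens and handler are illustrative, not the daemon's actual code:)

```c
#include <assert.h>

/* Frontend states, with the numeric values from xen/io/xenbus.h. */
enum xenbus_state {
    XenbusStateUnknown      = 0,
    XenbusStateInitialising = 1,
    XenbusStateInitWait     = 2,
    XenbusStateInitialised  = 3,
    XenbusStateConnected    = 4,
    XenbusStateClosing      = 5,
    XenbusStateClosed       = 6,
};

enum backend_action { ACT_NONE, ACT_CONNECT, ACT_DISCONNECT };

/* Dispatch on a blkfront state change: set up the ring and attach the
 * tapdisk once the frontend is Initialised, tear both down when it
 * goes away.  Real code would perform the actions rather than return
 * a token. */
enum backend_action on_frontend_state(enum xenbus_state s)
{
    switch (s) {
    case XenbusStateInitialised:
        return ACT_CONNECT;     /* create shared ring, connect tapdisk */
    case XenbusStateClosing:
    case XenbusStateClosed:
        return ACT_DISCONNECT;  /* disconnect tapdisk, destroy ring */
    default:
        return ACT_NONE;
    }
}
```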

Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:31:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfoqZ-0007sE-62; Tue, 04 Dec 2012 09:31:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfoqX-0007s8-Vb
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:31:22 +0000
Received: from [193.109.254.147:17798] by server-4.bemta-14.messagelabs.com id
	35/69-18856-9E2CDB05; Tue, 04 Dec 2012 09:31:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354613452!1603257!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16085 invoked from network); 4 Dec 2012 09:30:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 09:30:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,214,1355097600"; d="scan'208";a="16139531"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 09:30:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	09:30:52 +0000
Message-ID: <1354613450.2693.53.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 4 Dec 2012 09:30:50 +0000
In-Reply-To: <50BDBCCB02000078000AD8D8@nat28.tlf.novell.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
	<50BCF6D502000078000AD6AB@nat28.tlf.novell.com>
	<1354558168.18784.26.camel@iceland>
	<50BDBCCB02000078000AD8D8@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 08:05 +0000, Jan Beulich wrote:
> >>> On 03.12.12 at 19:09, Wei Liu <Wei.Liu2@citrix.com> wrote:
> > On Mon, 2012-12-03 at 18:00 +0000, Jan Beulich wrote:
> >> >>> On 03.12.12 at 18:52, Wei Liu <Wei.Liu2@citrix.com> wrote:
> >> > On Mon, 2012-12-03 at 17:35 +0000, Jan Beulich wrote:
> >> >> Doesn't the guest also need to set up space for the 2nd level?
> >> >> 
> >> > 
> >> > Yes. That will be embedded in the percpu struct vcpu_info, which will
> >> > also be registered via the same hypercall op.
> >> 
> >> "struct vcpu_info"? Same hypercall? Or are you mixing up types?
> >> 
> > 
> > What I meant was the second level will be embedded in struct vcpu_info,
> > and the 2nd level will be registered via some hypercall (not the struct
> > vcpu_info).
> 
> I would strongly recommend against embedding in
> struct vcpu_info, particularly in the context of intending to
> allow for having further levels in the future.
> 
> Plus I don't think you really can embed this - there's just not
> enough space left (I'm sure you're aware that you can't
> extend the structure in size), in fact there's no space left at
> all in the architecture independent part of the structure.
> 
> The only option you have is to declare the array you need to
> add to immediately follow the structure when having used
> the placement hypercall. That would probably be acceptable
> for the second from the top level, but you'd again run into
> (space) issues when wanting more than 3 levels (as you
> validly said, all but the leaf level ought to be per-vCPU).

The amount of space needed for levels 1..N-1 ought to be pretty
formulaic for any N, so we could choose to require that enough space be
available after vcpu_info for levels 1..N-1?

With larger N that might become multiple pages; is that a problem? It
might be preferable to have an explicit call which takes potentially
non-contiguous addresses.
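(The "formulaic" amount can be made concrete: assuming a branching factor of
one word's worth of bits per level, the non-leaf levels 1..N-1 need
1 + B + ... + B^(N-2) words in total. An illustrative computation, not the
proposed interface:)

```c
#include <assert.h>

/* With a branching factor of BITS_PER_WORD, level i (counting the top
 * level as 1) needs BITS_PER_WORD^(i-1) words, so levels 1..N-1 need
 * 1 + B + ... + B^(N-2) words in total -- the amount that could be
 * required to be available after vcpu_info. */
#define BITS_PER_WORD 64UL

unsigned long nonleaf_words(unsigned int nr_levels)
{
    unsigned long total = 0, level_words = 1;
    unsigned int i;

    for (i = 1; i < nr_levels; i++) {
        total += level_words;
        level_words *= BITS_PER_WORD;
    }
    return total;
}
```

For N = 4 this already comes to 4161 64-bit words, i.e. more than 32KiB per
vCPU, which is where the multiple-pages concern comes from.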

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:33:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfory-0007wg-Lz; Tue, 04 Dec 2012 09:32:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tforx-0007wa-Fq
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:32:49 +0000
Received: from [85.158.137.99:38371] by server-16.bemta-3.messagelabs.com id
	2E/5E-07461-043CDB05; Tue, 04 Dec 2012 09:32:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354613567!16942985!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7255 invoked from network); 4 Dec 2012 09:32:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 09:32:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,214,1355097600"; d="scan'208";a="16139603"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 09:32:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	09:32:46 +0000
Message-ID: <1354613565.2693.54.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>
Date: Tue, 4 Dec 2012 09:32:45 +0000
In-Reply-To: <20121204003026.GP6055@type.youpi.perso.aquilenet.fr>
References: <20121128215723.GA6109@type>
	<1354187659.25834.147.camel@zakaz.uk.xensource.com>
	<20121204003026.GP6055@type.youpi.perso.aquilenet.fr>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [minios] Add xenbus shutdown control support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 00:30 +0000, Samuel Thibault wrote:
> Re,
> 
> Ian Campbell, on Thu 29 Nov 2012 11:14:19 +0000, wrote:
> > On Wed, 2012-11-28 at 21:57 +0000, Samuel Thibault wrote:
> > > Add a thread watching the xenbus shutdown control path and notifies a
> > > wait queue.
> > 
> > Why a wait queue rather than a weak function call?
> 
> Because it integrates well with existing wait loops.

I was imagining that someone using such a wait loop would simply provide
the weak function to kick the queue themselves, rather than imposing
this design on them from the core. It's not a big deal though.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
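(The weak-function alternative discussed in this message might look like the
following. The names are illustrative, not actual mini-os API:)

```c
#include <assert.h>

/* A weak hook: if the application defines app_shutdown() anywhere, the
 * linker binds it and the core calls it; otherwise the symbol resolves
 * to NULL and the notification is skipped.  An application using a wait
 * loop would define it to kick its own wait queue. */
extern void app_shutdown(unsigned int reason) __attribute__((weak));

/* What the core's shutdown watcher would do instead of kicking a wait
 * queue itself.  Returns 1 if an application hook was invoked. */
int notify_shutdown(unsigned int reason)
{
    if (app_shutdown) {
        app_shutdown(reason);
        return 1;
    }
    return 0;
}
```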

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:34:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfost-00081P-4G; Tue, 04 Dec 2012 09:33:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1Tfosq-000816-Qi
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:33:44 +0000
Received: from [85.158.143.99:17678] by server-2.bemta-4.messagelabs.com id
	C1/9D-28922-873CDB05; Tue, 04 Dec 2012 09:33:44 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354613620!27378795!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2519 invoked from network); 4 Dec 2012 09:33:40 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-3.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	4 Dec 2012 09:33:40 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1Tfosj-0003Qo-06; Tue, 04 Dec 2012 09:33:37 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1Tfosd-0006od-0N; Tue, 04 Dec 2012 09:33:36 +0000
Message-ID: <1354613604.2693.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Bastian Blank <waldi@debian.org>, 695056@bugs.debian.org
Date: Tue, 04 Dec 2012 09:33:24 +0000
In-Reply-To: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
> Package: src:xen
> Version: 4.1.3-4
> Severity: serious
> 
> The bzimage loader used in both libxc and the hypervisor lacks XZ
> support. Debian kernels since 3.6 are compressed with XZ and can't be
> loaded.
> 
> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
> somewhere last year.

Indeed. Jan, this would be a good candidate for a future 4.1.x, I think
(it was already in 4.2.0).
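(As an aside, probing for the XZ format comes down to checking the six-byte
stream magic at the start of the payload; an illustrative check, not the
actual libxc decompression code:)

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Every XZ stream starts with this 6-byte magic; a bzimage loader
 * probes for it (alongside the gzip, bzip2, LZMA, ... magics) when
 * choosing a decompressor. */
static const unsigned char xz_magic[6] = { 0xfd, '7', 'z', 'X', 'Z', 0x00 };

int is_xz(const unsigned char *buf, size_t len)
{
    return len >= sizeof(xz_magic) &&
           memcmp(buf, xz_magic, sizeof(xz_magic)) == 0;
}
```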

Ian.

-- 
Ian Campbell


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:38:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfowp-0008Gp-VN; Tue, 04 Dec 2012 09:37:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tfowp-0008Gg-1w
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:37:51 +0000
Received: from [85.158.139.211:11233] by server-4.bemta-5.messagelabs.com id
	CD/F0-15011-E64CDB05; Tue, 04 Dec 2012 09:37:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354613869!18165522!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12473 invoked from network); 4 Dec 2012 09:37:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 09:37:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 09:37:48 +0000
Message-Id: <50BDD27B02000078000ADA08@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 09:37:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF0F502000078000AD53F@nat28.tlf.novell.com>
	<1354557148.18784.21.camel@iceland>
	<50BCF6D502000078000AD6AB@nat28.tlf.novell.com>
	<1354558168.18784.26.camel@iceland>
	<50BDBCCB02000078000AD8D8@nat28.tlf.novell.com>
	<1354613450.2693.53.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354613450.2693.53.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 10:30, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> The amount of space needed for levels 1..N-1 ought to be pretty
> formulaic for any N, so we could choose to require that enough space be
> available after vcpu_info for levels 1..N-1?
> 
> With larger N that might become multiple pages, is that a problem? It
> might be preferable to have an explicit call which takes potentially
> non-contiguous addresses.

Yes, that was my point about space restriction. The current
registration call allows only for a single page.

Additionally, if we add something variably sized to vcpu_info,
I don't think we should make this immediately successive - that
would prevent fixed size extensions in the future. Instead, we
should have the guest tell at what offset from the base of the
structure the bitmaps start.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:40:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfozP-0008RY-Hn; Tue, 04 Dec 2012 09:40:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TfozN-0008RN-UE
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:40:30 +0000
Received: from [85.158.139.83:44757] by server-12.bemta-5.messagelabs.com id
	41/65-02886-D05CDB05; Tue, 04 Dec 2012 09:40:29 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354614013!17024334!1
X-Originating-IP: [192.134.164.105]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6138 invoked from network); 4 Dec 2012 09:40:14 -0000
Received: from mail4-relais-sop.national.inria.fr (HELO
	mail4-relais-sop.national.inria.fr) (192.134.164.105)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 09:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,214,1355094000"; d="scan'208";a="164696556"
Received: from unknown (HELO type.ipv6) ([193.50.110.200])
	by mail4-relais-sop.national.inria.fr with ESMTP/TLS/DHE-RSA-AES128-SHA;
	04 Dec 2012 10:40:12 +0100
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1Tfoz6-0002Hu-DY; Tue, 04 Dec 2012 10:40:12 +0100
Date: Tue, 4 Dec 2012 10:40:12 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Ian Campbell <Ian.Campbell@citrix.com>, dgdegra@tycho.nsa.gov,
	matthew.fioravante@jhuapl.edu
Message-ID: <20121204094012.GF5906@type.bordeaux.inria.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, dgdegra@tycho.nsa.gov,
	matthew.fioravante@jhuapl.edu,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Keir (Xen.org)" <keir@xen.org>
References: <20121128215723.GA6109@type>
	<1354187659.25834.147.camel@zakaz.uk.xensource.com>
	<20121204003026.GP6055@type.youpi.perso.aquilenet.fr>
	<1354613565.2693.54.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354613565.2693.54.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [minios] Add xenbus shutdown control support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell, on Tue 04 Dec 2012 09:32:45 +0000, wrote:
> On Tue, 2012-12-04 at 00:30 +0000, Samuel Thibault wrote:
> > Ian Campbell, on Thu 29 Nov 2012 11:14:19 +0000, wrote:
> > > On Wed, 2012-11-28 at 21:57 +0000, Samuel Thibault wrote:
> > > > Add a thread watching the xenbus shutdown control path and notifies a
> > > > wait queue.
> > >
> > > Why a wait queue rather than a weak function call?
> >
> > Because it integrates well with existing wait loops.
>
> I was imagining that someone using such a wait loop would simply provide
> the weak function to kick the queue themselves,

Ah, right, in that case it doesn't need much synchronization with the
shutdown event, so it doesn't pose a problem.

> rather than imposing this design on them from the core. It's not a big
> deal though.

But it's nicer to get it the way people would prefer. Daniel, Matthew,
what do you think?

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:53:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpBf-0000TA-TD; Tue, 04 Dec 2012 09:53:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpBd-0000T2-SC
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:53:09 +0000
Received: from [193.109.254.147:9830] by server-14.bemta-14.messagelabs.com id
	FD/7F-14517-508CDB05; Tue, 04 Dec 2012 09:53:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1354614679!8754734!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3879 invoked from network); 4 Dec 2012 09:51:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 09:51:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 09:51:19 +0000
Message-Id: <50BDD5A302000078000ADA26@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 09:51:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <695056@bugs.debian.org>,"Bastian Blank" <waldi@debian.org>,
	"Ian Campbell" <ijc@hellion.org.uk>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
	<1354613604.2693.55.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354613604.2693.55.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 10:33, Ian Campbell <ijc@hellion.org.uk> wrote:
> On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
>> Package: src:xen
>> Version: 4.1.3-4
>> Severity: serious
>> 
>> The bzimage loader used in both libxc and the hypervisor lacks XZ
>> support. Debian kernels since 3.6 are compressed with XZ and can't be
>> loaded.
>> 
>> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
>> somewhere last year.
> 
> Indeed. Jan this would be a good candidate for a future 4.1.x I think
> (it was already in 4.2.0).

Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
towards an RC2 followed by a release within the next few days),
and doing a feature addition like this in a .5 stable release
looks questionable to me in the context of how we managed
stable updates in the past. But I'm open to being outvoted, of
course...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:59:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpHy-0000cL-OA; Tue, 04 Dec 2012 09:59:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpHx-0000cG-Eu
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:59:41 +0000
Received: from [85.158.139.83:26613] by server-5.bemta-5.messagelabs.com id
	70/7C-11353-C89CDB05; Tue, 04 Dec 2012 09:59:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354615179!27609725!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5499 invoked from network); 4 Dec 2012 09:59:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 09:59:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 09:59:39 +0000
Message-Id: <50BDD79802000078000ADA32@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 09:59:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:

I'm sorry, but exposing something that doesn't even have a name
sounds very awkward to me. Please adjust existing code using the
literal number in a prerequisite patch, and then use the added
constant here too.

Jan

> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   37 +++++++++++++++++++++++++------------
>  1 files changed, 25 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 719bfce..cf91c7c 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp;
> +    u64 data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
> @@ -1311,18 +1311,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
>          data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50;
> +               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
>          break;
>      case MSR_IA32_VMX_PINBASED_CTLS:
> +    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
>          /* 1-seetings */
>          data = PIN_BASED_EXT_INTR_MASK |
>                 PIN_BASED_NMI_EXITING |
>                 PIN_BASED_PREEMPT_TIMER;
> -        data <<= 32;
> -	/* 0-settings */
> -        data |= 0;
> +        /* Consult SDM for default1 setting */
> +        tmp = ( (1<<1) | (1<<2) | (1<<4) );
> +        data = ((data | tmp) << 32) | (tmp);
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS:
> +    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
>          /* 1-seetings */
>          data = CPU_BASED_HLT_EXITING |
>                 CPU_BASED_VIRTUAL_INTR_PENDING |
> @@ -1342,10 +1344,14 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 CPU_BASED_VIRTUAL_NMI_PENDING |
>                 CPU_BASED_ACTIVATE_MSR_BITMAP |
>                 CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
> -        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
> -        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
> +        /* Consult SDM for default1 setting */
> +        if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
> +            tmp = 0x401e172;
> +        else if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
> +            tmp = 0x4006172;
>          /* 0-settings */
>          data = ((data | tmp) << 32) | (tmp);
> +
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS2:
>          /* 1-seetings */
> @@ -1355,9 +1361,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = (data << 32) | tmp;
>          break;
>      case MSR_IA32_VMX_EXIT_CTLS:
> -        /* 1-seetings */
> -        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
> -        tmp = 0x36dff;
> +    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> +        /* Consult SDM for default1 setting */
> +        if ( msr == MSR_IA32_VMX_EXIT_CTLS )
> +            tmp = 0x36dff;
> +        else if ( msr == MSR_IA32_VMX_TRUE_EXIT_CTLS )
> +            tmp = 0x36dfb;
>          data = VM_EXIT_ACK_INTR_ON_EXIT |
>                 VM_EXIT_IA32E_MODE |
>                 VM_EXIT_SAVE_PREEMPT_TIMER |
> @@ -1370,8 +1379,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = ((data | tmp) << 32) | tmp;
>          break;
>      case MSR_IA32_VMX_ENTRY_CTLS:
> -        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
> -        tmp = 0x11ff;
> +    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> +        /* Consult SDM for default1 setting */
> +        if ( msr == MSR_IA32_VMX_ENTRY_CTLS )
> +            tmp = 0x11ff;
> +        else if ( msr == MSR_IA32_VMX_TRUE_ENTRY_CTLS )
> +            tmp = 0x11fb;
>          data = VM_ENTRY_LOAD_GUEST_PAT |
>                 VM_ENTRY_LOAD_GUEST_EFER |
>                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 09:59:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 09:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpHy-0000cL-OA; Tue, 04 Dec 2012 09:59:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpHx-0000cG-Eu
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 09:59:41 +0000
Received: from [85.158.139.83:26613] by server-5.bemta-5.messagelabs.com id
	70/7C-11353-C89CDB05; Tue, 04 Dec 2012 09:59:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354615179!27609725!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5499 invoked from network); 4 Dec 2012 09:59:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 09:59:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 09:59:39 +0000
Message-Id: <50BDD79802000078000ADA32@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 09:59:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:

I'm sorry, but exposing something that doesn't even have a name
sounds very awkward to me. Please adjust the existing code that uses
the literal number in a prerequisite patch, and then use the added
constant here too.

Jan

> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   37 +++++++++++++++++++++++++------------
>  1 files changed, 25 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 719bfce..cf91c7c 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp;
> +    u64 data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
> @@ -1311,18 +1311,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
>          data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50;
> +               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
>          break;
>      case MSR_IA32_VMX_PINBASED_CTLS:
> +    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
>          /* 1-seetings */
>          data = PIN_BASED_EXT_INTR_MASK |
>                 PIN_BASED_NMI_EXITING |
>                 PIN_BASED_PREEMPT_TIMER;
> -        data <<= 32;
> -	/* 0-settings */
> -        data |= 0;
> +        /* Consult SDM for default1 setting */
> +        tmp = ( (1<<1) | (1<<2) | (1<<4) );
> +        data = ((data | tmp) << 32) | (tmp);
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS:
> +    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
>          /* 1-seetings */
>          data = CPU_BASED_HLT_EXITING |
>                 CPU_BASED_VIRTUAL_INTR_PENDING |
> @@ -1342,10 +1344,14 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 CPU_BASED_VIRTUAL_NMI_PENDING |
>                 CPU_BASED_ACTIVATE_MSR_BITMAP |
>                 CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
> -        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
> -        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
> +        /* Consult SDM for default1 setting */
> +        if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
> +            tmp = 0x401e172;
> +        else if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
> +            tmp = 0x4006172;
>          /* 0-settings */
>          data = ((data | tmp) << 32) | (tmp);
> +
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS2:
>          /* 1-seetings */
> @@ -1355,9 +1361,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = (data << 32) | tmp;
>          break;
>      case MSR_IA32_VMX_EXIT_CTLS:
> -        /* 1-seetings */
> -        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
> -        tmp = 0x36dff;
> +    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> +        /* Consult SDM for default1 setting */
> +        if ( msr == MSR_IA32_VMX_EXIT_CTLS )
> +            tmp = 0x36dff;
> +        else if ( msr == MSR_IA32_VMX_TRUE_EXIT_CTLS )
> +            tmp = 0x36dfb;
>          data = VM_EXIT_ACK_INTR_ON_EXIT |
>                 VM_EXIT_IA32E_MODE |
>                 VM_EXIT_SAVE_PREEMPT_TIMER |
> @@ -1370,8 +1379,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = ((data | tmp) << 32) | tmp;
>          break;
>      case MSR_IA32_VMX_ENTRY_CTLS:
> -        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
> -        tmp = 0x11ff;
> +    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> +        /* Consult SDM for default1 setting */
> +        if ( msr == MSR_IA32_VMX_ENTRY_CTLS )
> +            tmp = 0x11ff;
> +        else if ( msr == MSR_IA32_VMX_TRUE_ENTRY_CTLS )
> +            tmp = 0x11fb;
>          data = VM_ENTRY_LOAD_GUEST_PAT |
>                 VM_ENTRY_LOAD_GUEST_EFER |
>                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:02:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpKQ-0000nA-As; Tue, 04 Dec 2012 10:02:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpKO-0000n1-NG
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:02:13 +0000
Received: from [85.158.138.51:25692] by server-8.bemta-3.messagelabs.com id
	D7/F5-07786-F1ACDB05; Tue, 04 Dec 2012 10:02:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354615327!27447545!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17109 invoked from network); 4 Dec 2012 10:02:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:02:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:02:06 +0000
Message-Id: <50BDD82B02000078000ADA35@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:02:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-6-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354600410-3390-6-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 05/10] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> For DR registers, we use a lazy restore mechanism on access. Therefore,
> when receiving such a VM exit, L0 is responsible for switching to the
> right DR values before injecting the exit to the L1 hypervisor.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index cf3797c..0ac78af 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1654,7 +1654,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
>      case EXIT_REASON_DR_ACCESS:
>          ctrl = __n2_exec_control(v);
>          if ( ctrl & CPU_BASED_MOV_DR_EXITING )
> -            nvcpu->nv_vmexit_pending = 1;
> +            if ( v->arch.hvm_vcpu.flag_dr_dirty )
> +                nvcpu->nv_vmexit_pending = 1;

Personally I'd prefer if you combined the two if-s.

Jan

>          break;
>      case EXIT_REASON_INVLPG:
>          ctrl = __n2_exec_control(v);
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:02:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpKl-0000pU-O9; Tue, 04 Dec 2012 10:02:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1TfpKj-0000p6-Df
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 10:02:33 +0000
Received: from [85.158.143.35:34459] by server-2.bemta-4.messagelabs.com id
	E1/1C-28922-83ACDB05; Tue, 04 Dec 2012 10:02:32 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354615311!13500067!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7566 invoked from network); 4 Dec 2012 10:01:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 10:01:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,214,1355097600"; d="scan'208";a="16140469"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 10:01:51 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Tue, 4 Dec 2012
	10:01:51 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 4 Dec 2012 10:01:50 +0000
Thread-Topic: [Xen-devel] [PATCH 3 of 5 RFC] blktap3: Introduce xenio.c,
	core xenio daemon functionality
Thread-Index: Ac3OReOK8zNFzHcyR2OU07AZqUHu9wDOvKZg
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE012D554A57A0@LONPMAILBOX01.citrite.net>
References: <patchbomb.1354112410@makatos-desktop>
	<7126fda1424905ed186f.1354112413@makatos-desktop>
	<1354202779.6269.24.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354202779.6269.24.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 3 of 5 RFC] blktap3: Introduce xenio.c,
 core xenio daemon functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 29 November 2012 15:26
> To: Thanos Makatos
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH 3 of 5 RFC] blktap3: Introduce xenio.c,
> core xenio daemon functionality
> 
> On Wed, 2012-11-28 at 14:20 +0000, Thanos Makatos wrote:
> 
> > +/*
> > + * XenStore path components.
> > + *
> > + * TODO "xenio" is defined in the IDL, take it from there instead of
> > + * hard-coding it here
> 
> Is there an IDL somewhere that I've failed to find?

Libxl uses LIBXL__DEVICE_KIND_XENIO instead of LIBXL__DEVICE_KIND_VBD to avoid confusion, and this is defined in tools/libxl/libxl_types_internal.idl. It will be introduced by the final patch, which adds blktap3 support to libxl. I'll extend the comment to clarify this.

> 
> > + */
> > +#define XENIO_BACKEND_NAME     "xenio"
> > +#define XENIO_BACKEND_PATH     "backend/"XENIO_BACKEND_NAME
> > +#define XENIO_BACKEND_TOKEN    "backend-"XENIO_BACKEND_NAME
> > +
> [...]
> > +/**
> > + * Reads the specified XenStore path. The caller must free the
> returned buffer.
> > + *
> > + * @param xs handle to XenStore
> > + * @param xst XenStore transaction (TODO Maybe NULL?)
> > + * @param fmt TODO
> > + * @param ap TODO
> > + * @returns TODO
> > + *
> > + * TODO Why don't we return the data pointer?
> > + */
> > +static char *
> > +xenio_xs_vread(struct xs_handle * const xs, xs_transaction_t xst,
> > +        const char * const fmt, va_list ap) {
> > +    char *path, *data, *s = NULL;
> > +    unsigned int len;
> > +
> > +    assert(xs);
> > +
> > +    path = vmprintf(fmt, ap);
> > +    data = xs_read(xs, xst, path, &len);
> > +    DBG("XS read %s -> %s \n", path, data);
> > +    free(path);
> > +
> > +    if (data) {
> > +        s = strndup(data, len);
> > +        free(data);
> 
> Is this a rather complicated way of ensuring the string is NULL
> terminated?

Probably, I see no other logical explanation -- I'll simplify it.

> 
> I suppose given the xs protocol and/or libxenstore doesn't necessarily
> NULL terminate the data or leave enough room in the allocated buffer to
> add a NULL this might actually be required, yuk!
> 
> > +    }
> > +
> > +    return s;
> > +}
> > +
> [...]
> > +    /*
> > +     * Set a watch on the back-end path using a token.
> > +     *
> > +     * TODO Do we really need to supply a token, given that this is
> the _only_
> > +     * watch on this specific path by this process?
> 
> xenio_backend_read_watch appears to use the token to distinguish the
> two cases of watch which you have?
> 
> > +     */
> > +    nerr = xs_watch(backend.xs, XENIO_BACKEND_PATH,
> XENIO_BACKEND_TOKEN);
> > +    if (!nerr) {
> > +        err = -errno;
> > +        goto fail;
> > +    }
> > +
> > +    backend.ops = ops;
> > +
> > +    return 0;
> > +
> > +fail:
> > +       xenio_backend_destroy();
> > +
> > +    return -err;
> > +}
> 
> There's obviously a lot of code here, I'm not going to pretend I read
> it all, but I did have a skim and commented on what leapt out.

I was actually thinking of splitting this file into smaller ones as it appears too big to review.

> 
> If there are any areas which you think could benefit from closer
> review, e.g. bits which are security sensitive or where you are unsure
> about the compatibility requirements for various guests or something
> like that then please do highlight them and I can take a closer look.
> 
> Some of your TODOs ask more general questions about Xen PV protocols
> etc which I suspect could be answered quickly by someone on this list who
> is already familiar with that niche if you posted them as questions
> rather than part of the patch -- please don't feel you have to reverse
> engineer the whole thing yourself ;-)

Many of these TODOs are notes to myself about bits I still need to understand in order to gain a wider view of what that piece of code does, so I can document it. I'll try to separate these TODOs from the ones for which I need a reviewer's help.

> 
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:03:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpLV-0000z6-65; Tue, 04 Dec 2012 10:03:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpLU-0000yr-CL
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:03:20 +0000
Received: from [85.158.137.99:14611] by server-2.bemta-3.messagelabs.com id
	3E/39-04744-76ACDB05; Tue, 04 Dec 2012 10:03:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354615398!17837558!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31021 invoked from network); 4 Dec 2012 10:03:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:03:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:03:18 +0000
Message-Id: <50BDD87402000078000ADA38@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:03:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-7-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354600410-3390-7-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 06/10] nested vmx: enable IA32E mode while
 do VM entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:

How did things work without this, or if it worked, what does this fix?

Jan

> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 0ac78af..1304636 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1388,7 +1388,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>              tmp = 0x11fb;
>          data = VM_ENTRY_LOAD_GUEST_PAT |
>                 VM_ENTRY_LOAD_GUEST_EFER |
> -               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
> +               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
> +               VM_ENTRY_IA32E_MODE;
>          data = ((data | tmp) << 32) | tmp;
>          break;
>  
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 
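For context, the `((data | tmp) << 32) | tmp` construction in the patched hunk matches the VMX capability-MSR layout as I read the Intel SDM: the low 32 bits report the allowed 0-settings (a 1 bit means that control is fixed to 1), and the high 32 bits report the allowed 1-settings (a 1 bit means that control may be set to 1). A minimal sketch, with `vmx_cap_msr` as a hypothetical helper:

```c
#include <stdint.h>

/*
 * Build a VMX true/capability MSR value (e.g. IA32_VMX_ENTRY_CTLS style):
 *   bits 31:0  - allowed 0-settings: 1 => control must be 1
 *   bits 63:32 - allowed 1-settings: 1 => control may be 1
 * must_be_one therefore appears in both halves, matching the patch's
 * data = ((data | tmp) << 32) | tmp, where tmp holds the fixed-to-1 bits.
 */
static uint64_t vmx_cap_msr(uint32_t must_be_one, uint32_t may_be_one)
{
    return ((uint64_t)(may_be_one | must_be_one) << 32) | must_be_one;
}
```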




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:06:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:06:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpNx-0001Ih-WE; Tue, 04 Dec 2012 10:05:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpNw-0001IS-OA
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:05:52 +0000
Received: from [85.158.143.99:37202] by server-2.bemta-4.messagelabs.com id
	04/52-28922-00BCDB05; Tue, 04 Dec 2012 10:05:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354615541!27384388!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14733 invoked from network); 4 Dec 2012 10:05:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:05:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:05:40 +0000
Message-Id: <50BDD90102000078000ADA54@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:05:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 00/10] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> This series of patches contain some bug fixes and feature enabling for nested 
> vmx, please help to review and pull.

As with the previous series, we would want this series to be ack-ed
by one of the formally listed maintainers.

Beyond that I'd appreciate if you could indicate which of the bug
fixes you sent would be candidates for 4.2.x.

Jan

> Dongxiao Xu (10):
>   nested vmx: emulate MSR bitmaps
>   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
>   nested vmx: fix rflags status in virtual vmexit
>   nested vmx: fix handling of RDTSC
>   nested vmx: fix DR access VM exit
>   nested vmx: enable IA32E mode while do VM entry
>   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
>   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
>   nested vmx: fix interrupt delivery to L2 guest
>   nested vmx: check host ability when intercept MSR read
> 
>  xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
>  xen/arch/x86/hvm/vmx/vmcs.c        |   28 ++++++++
>  xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
>  xen/arch/x86/hvm/vmx/vvmx.c        |  128 ++++++++++++++++++++++++++++++------
>  xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
>  xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
>  xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
>  7 files changed, 148 insertions(+), 25 deletions(-)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:09:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:09:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpRf-0001Vz-LM; Tue, 04 Dec 2012 10:09:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpRd-0001Vr-BR
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:09:41 +0000
Received: from [193.109.254.147:39666] by server-7.bemta-14.messagelabs.com id
	D6/A2-02272-4EBCDB05; Tue, 04 Dec 2012 10:09:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354615772!8901130!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10259 invoked from network); 4 Dec 2012 10:09:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:09:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:09:31 +0000
Message-Id: <50BDD9E702000078000ADA57@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:09:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Huang" <wei.huang2@amd.com>,
	"Wei Wang" <weiwang.dd@gmail.com>,<xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Boris Ostrovsky <boris.ostrovsky@amd.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Ping: [PATCH] IOMMU/ATS: fix maximum queue depth
	calculation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anyone? (I'd really want an ack from both Intel - who originally
contributed the ATS code - and AMD - due to the adjustment of
their later re-arrangements.)

Jan

>>> On 28.11.12 at 12:32, Jan Beulich wrote:
> The capabilities register field is a 5-bit value, and the 5 bits all
> being zero actually means 32 entries.
> 
> Under the assumption that amd_iommu_flush_iotlb() really just tried
> to correct for the miscalculation above when adding 32 to the value,
> that adjustment is also being removed.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
> @@ -321,7 +321,7 @@ void amd_iommu_flush_iotlb(struct pci_de
>  
>      req_id = get_dma_requestor_id(iommu->seg, bdf);
>      queueid = req_id;
> -    maxpend = (ats_pdev->ats_queue_depth + 32) & 0xff;
> +    maxpend = ats_pdev->ats_queue_depth & 0xff;
>  
>      /* send INVALIDATE_IOTLB_PAGES command */
>      spin_lock_irqsave(&iommu->lock, flags);
> --- a/xen/drivers/passthrough/ats.h
> +++ b/xen/drivers/passthrough/ats.h
> @@ -28,7 +28,7 @@ struct pci_ats_dev {
>  
>  #define ATS_REG_CAP    4
>  #define ATS_REG_CTL    6
> -#define ATS_QUEUE_DEPTH_MASK     0xF
> +#define ATS_QUEUE_DEPTH_MASK     0x1f
>  #define ATS_ENABLE               (1<<15)
>  
>  extern struct list_head ats_devices;
> --- a/xen/drivers/passthrough/x86/ats.c
> +++ b/xen/drivers/passthrough/x86/ats.c
> @@ -94,6 +94,8 @@ int enable_ats_device(int seg, int bus, 
>          value = pci_conf_read16(seg, bus, PCI_SLOT(devfn),
>                                  PCI_FUNC(devfn), pos + ATS_REG_CAP);
>          pdev->ats_queue_depth = value & ATS_QUEUE_DEPTH_MASK;
> +        if ( !pdev->ats_queue_depth )
> +            pdev->ats_queue_depth = ATS_QUEUE_DEPTH_MASK + 1;
>          list_add(&pdev->list, &ats_devices);
>      }
>  
> 
> 
> 
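The decode this patch implements — a 5-bit queue-depth field where an all-zero value means the maximum of 32 entries — can be shown standalone (a sketch only; `ats_queue_depth` is an illustrative helper name, with the mask value taken from the patch):

```c
#include <stdint.h>

#define ATS_QUEUE_DEPTH_MASK 0x1f   /* 5-bit field, per the patch above */

/*
 * Decode the ATS capability register's Invalidate Queue Depth field:
 * a value of 0 is defined to mean 32 outstanding invalidate requests.
 */
static unsigned int ats_queue_depth(uint16_t cap_reg)
{
    unsigned int depth = cap_reg & ATS_QUEUE_DEPTH_MASK;

    return depth ? depth : ATS_QUEUE_DEPTH_MASK + 1;
}
```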




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:10:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:10:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpSU-0001a2-3E; Tue, 04 Dec 2012 10:10:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpSS-0001Zr-TH
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:10:33 +0000
Received: from [85.158.138.51:20287] by server-5.bemta-3.messagelabs.com id
	F3/28-26311-81CCDB05; Tue, 04 Dec 2012 10:10:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354615830!27449004!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19712 invoked from network); 4 Dec 2012 10:10:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:10:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:10:30 +0000
Message-Id: <50BDDA2402000078000ADA69@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:10:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] Ping: [PATCH] vscsiif: allow larger
 segments-per-request values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Does silence here imply agreement?

Jan

>>> On 27.11.12 at 12:37, Jan Beulich wrote:
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather 
> */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;
> 
> 
> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:10:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:10:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpSU-0001a2-3E; Tue, 04 Dec 2012 10:10:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpSS-0001Zr-TH
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:10:33 +0000
Received: from [85.158.138.51:20287] by server-5.bemta-3.messagelabs.com id
	F3/28-26311-81CCDB05; Tue, 04 Dec 2012 10:10:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354615830!27449004!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19712 invoked from network); 4 Dec 2012 10:10:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:10:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:10:30 +0000
Message-Id: <50BDDA2402000078000ADA69@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:10:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] Ping: [PATCH] vscsiif: allow larger
 segments-per-request values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Does silence here imply agreement?

Jan

>>> On 27.11.12 at 12:37, Jan Beulich wrote:
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>  uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;
> 
> 
> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:12:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpUS-0001k2-Kv; Tue, 04 Dec 2012 10:12:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TfpUR-0001jw-G1
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:12:35 +0000
Received: from [85.158.143.99:20086] by server-2.bemta-4.messagelabs.com id
	E1/BE-28922-29CCDB05; Tue, 04 Dec 2012 10:12:34 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354615953!27071022!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28283 invoked from network); 4 Dec 2012 10:12:33 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-13.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	4 Dec 2012 10:12:33 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TfpUK-000464-RK; Tue, 04 Dec 2012 10:12:28 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TfpUE-0006wn-Dg; Tue, 04 Dec 2012 10:12:28 +0000
Message-ID: <1354615936.2693.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 04 Dec 2012 10:12:16 +0000
In-Reply-To: <50BDD5A302000078000ADA26@nat28.tlf.novell.com>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
	<1354613604.2693.55.camel@zakaz.uk.xensource.com>
	<50BDD5A302000078000ADA26@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: Bastian Blank <waldi@debian.org>,
	"695056@bugs.debian.org" <695056@bugs.debian.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 09:51 +0000, Jan Beulich wrote:
> >>> On 04.12.12 at 10:33, Ian Campbell <ijc@hellion.org.uk> wrote:
> > On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
> >> Package: src:xen
> >> Version: 4.1.3-4
> >> Severity: serious
> >> 
> >> The bzimage loader used in both libxc and the hypervisor lacks XZ
> >> support. Debian kernels since 3.6 are compressed with XZ and can't be
> >> loaded.
> >> 
> >> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
> >> somewhere last year.
> > 
> > Indeed. Jan, this would be a good candidate for a future 4.1.x, I think
> > (it was already in 4.2.0).
> 
> Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
> towards an RC2 followed by a release within the next few days),
> and doing a feature addition like this in a .5 stable release
> looks questionable to me in the context of how we managed
> stable updates in the past. But I'm open to being outvoted, of
> course...

My thinking was that it is a reasonably self-contained patch (most of
the code is in new files), and without it people won't be able to use
4.1.x with their distro kernels at all, so it's something between a bug
fix and a new feature.

If distros are going to be backporting it anyway they may as well all
get it from us.

Ian.

-- 
Ian Campbell
Current Noise: Anthrax - Stealing From A Thief

Getting there is only half as far as getting there and back.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:16:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpXq-0001vK-9Z; Tue, 04 Dec 2012 10:16:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpXo-0001vE-Hq
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:16:04 +0000
Received: from [85.158.138.51:50224] by server-14.bemta-3.messagelabs.com id
	CA/C2-31424-36DCDB05; Tue, 04 Dec 2012 10:16:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354616162!25609243!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2324 invoked from network); 4 Dec 2012 10:16:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:16:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:16:02 +0000
Message-Id: <50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:15:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
In-Reply-To: <CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Mats Petersson <mats.petersson@citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>>  I had a quick look, and it doesn't look that hard to backport that patch.
> 
> Thanks, Mats.
> I'm glad to report that the patch does fix my problem.
> 
> And yes, it is really easy to port since the code did not change across the
> two releases.
> The only change would be the line numbers (3841 vs 3803) and one extra
> comment before this line:
> 
>  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> 
> I'm not sure if you are going to release another maintenance version that
> includes this patch,
> but I'll report this to the Debian maintainer since it's about to freeze for
> the v7.0 release and v4.2.0 will not make it.

Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
out?

Jan

> On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
> <mats.petersson@citrix.com>wrote:
> 
>> On 03/12/12 13:19, Mats Petersson wrote:
>>
>>> On 03/12/12 13:14, G.R. wrote:
>>>
>>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>>>> <mats.petersson@citrix.com> wrote:
>>>>
>>>>      On 03/12/12 03:47, G.R. wrote:
>>>>
>>>>          Hi developers,
>>>>          I met some domU issues and the log suggests a missing interrupt.
>>>>          Details from here:
>>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>>>>          In summary, this is the suspicious log:
>>>>
>>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>>>
>>>>          I've checked the code in question and found that mode 3 is a
>>>>          'reserved_1' mode.
>>>>          I want to trace down the source of this mode setting to
>>>>          root-cause the issue.
>>>>          But I'm not an xen developer, and am even a newbie as a xen
>>>> user.
>>>>          Could anybody give me instructions about how to enable
>>>>          detailed debug log?
>>>>          It could be better if I can get advice about experiments to
>>>>          perform / switches to try out etc.
>>>>
>>>>          My SW config:
>>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>>>          domU: Debian wheezy 3.2.x stock kernel.
>>>>
>>>>          Thanks,
>>>>          Timothy
>>>>
>>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>>>      What are the exact messages in the DomU?
>>>>
>>>>
>>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>>>> But this is actually a PVHVM guest since the Debian stock kernel has
>>>> PVOP enabled.
>>>> And when I tried another PVOP-disabled Linux distro (OpenELEC v2.0), I
>>>> did not see such an MSI-related error message.
>>>> Actually, with that domU I do not see anything obviously wrong in the
>>>> log, but I also see nothing on the panel (the panel receives no signal
>>>> and goes into power saving) :-(
>>>>
>>>>
>>>> Back to the issue I was reporting, the domU log looks like this:
>>>>
>>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>>>> 3354], missed IRQ?
>>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>>>> 11297], missed IRQ?
>>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>>>> timeout, switching to polling mode: last cmd=0x000f0000
>>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>>>> codec, disabling MSI: last cmd=0x002f0600
>>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>>>
>>>>
>>>> Thanks,
>>>> Timothy
>>>>
>>> It sounds like there is a fix in 4.2.0, as indicated by Jan. I'm not
>>> fully clued up on what the policy for backporting fixes is, and I
>>> haven't looked at the complexity of the fix itself, but either
>>> updating to 4.2.0 or a (personal) backport sounds like the right
>>> solution here.
>>>
>> I had a quick look, and it doesn't look that hard to backport that patch.
>>
>> --
>> Mats
>>
>>
>>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>>> to your original email.
>>>
>>> --
>>> Mats
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org 
>>> http://lists.xen.org/xen-devel 
>>>
>>>
>>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
>>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:16:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfpXq-0001vK-9Z; Tue, 04 Dec 2012 10:16:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfpXo-0001vE-Hq
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:16:04 +0000
Received: from [85.158.138.51:50224] by server-14.bemta-3.messagelabs.com id
	CA/C2-31424-36DCDB05; Tue, 04 Dec 2012 10:16:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354616162!25609243!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2324 invoked from network); 4 Dec 2012 10:16:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:16:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:16:02 +0000
Message-Id: <50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:15:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
In-Reply-To: <CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Mats Petersson <mats.petersson@citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>>  I had a quick look, and it doesn't look that hard to backport that patch.
> 
> Thanks, Mat.
> I'm glad to report that the patch do fix my problem.
> 
> And yest it is really easy to port since the code did not change across the
> two release.
> The only change would be line numbers (3841 vs 3803) and one extra comments
> before this line:
> 
>  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> 
> I'm not sure if you are going to release another maintenance version that
> include this patch,
> but I'll report this to Debain maintainer since it's about to freeze for
> v7.0 release and v4.2.0 will not make it.

Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
out?

Jan

> On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
> <mats.petersson@citrix.com>wrote:
> 
>> On 03/12/12 13:19, Mats Petersson wrote:
>>
>>> On 03/12/12 13:14, G.R. wrote:
>>>
>>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>>>> <mats.petersson@citrix.com 
> <mailto:mats.petersson@citrix.**com<mats.petersson@citrix.com>>>
>>>> wrote:
>>>>
>>>>      On 03/12/12 03:47, G.R. wrote:
>>>>
>>>>          Hi developers,
>>>>          I met some domU issues and the log suggests missing interrupt.
>>>>          Details from here:
>>>>          http://www.gossamer-threads.**com/lists/xen/users/263938#** 
>>>> 263938 <http://www.gossamer-threads.com/lists/xen/users/263938#263938>
>>>>          In summary, this is the suspicious log:
>>>>
>>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>>>>
>>>>          I've checked the code in question and found that mode 3 is an
>>>>          'reserved_1' mode.
>>>>          I want to trace down the source of this mode setting to
>>>>          root-cause the issue.
>>>>          But I'm not an xen developer, and am even a newbie as a xen
>>>> user.
>>>>          Could anybody give me instructions about how to enable
>>>>          detailed debug log?
>>>>          It could be better if I can get advice about experiments to
>>>>          perform / switches to try out etc.
>>>>
>>>>          My SW config:
>>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>>>>          domU: Debian wheezy 3.2.x stock kernel.
>>>>
>>>>          Thanks,
>>>>          Timothy
>>>>
>>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>>>>      What are the exact messages in the DomU?
>>>>
>>>>
>>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>>>> enabled.
>>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>>>> did not see such msi related error message.
>>>> Actually, with that domU I do not see anything obvious wrong from the
>>>> log, but I also see nothing from panel (panel receive no signal and go
>>>> power-saving) :-(
>>>>
>>>>
>>>> Back to the issue I was reporting, the domU log looks like this:
>>>>
>>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>>>> 3354], missed IRQ?
>>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>>>> [drm:i915_hangcheck_ring_idle]
>>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>>>> 11297], missed IRQ?
>>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>>>> timeout, switching to polling mode: last cmd=0x000f0000
>>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>>>> codec, disabling MSI: last cmd=0x002f0600
>>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>>>>
>>>>
>>>> Thanks,
>>>> Timothy
>>>>
>>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>>> fixes this. I'm not fully clued up to what the policy for backporting
>>> fixes are, and I haven't looked at the complexity of the fix itself, but
>>> either updating to the 4.2.0 or a (personal) backport sounds like the
>>> right solution here.
>>>
>> I had a quick look, and it doesn't look that hard to backport that patch.
>>
>> --
>> Mats
>>
>>
>>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>>> to your original email.
>>>
>>> --
>>> Mats
>>>
>>> ______________________________**_________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org 
>>> http://lists.xen.org/xen-devel 
>>>
>>>
>>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
>>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 10:30:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 10:30:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfplV-0002OC-N5; Tue, 04 Dec 2012 10:30:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfplU-0002O7-IL
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 10:30:12 +0000
Received: from [85.158.137.99:63622] by server-6.bemta-3.messagelabs.com id
	FA/72-28265-3B0DDB05; Tue, 04 Dec 2012 10:30:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1354617010!15482877!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19027 invoked from network); 4 Dec 2012 10:30:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 10:30:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 10:30:10 +0000
Message-Id: <50BDDEBF02000078000ADAAC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 10:30:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,<konrad.wilk@oracle.com>
References: <1354563168-6609-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1354563168-6609-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.12.12 at 20:32, Olaf Hering <olaf@aepfle.de> wrote:
> be->mode is obtained from xenbus_read, which does a kmalloc for the
> message body. The short string is never released, so do it on blkbk
> remove.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
> 
> !! Not compile tested !!
> 
>  drivers/block/xen-blkback/xenbus.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/block/xen-blkback/xenbus.c 
> b/drivers/block/xen-blkback/xenbus.c
> index f58434c..a6585a4 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -366,6 +366,7 @@ static int xen_blkbk_remove(struct xenbus_device *dev)
>  		be->blkif = NULL;
>  	}
>  
> +	kfree(be->mode);

This looks necessary but insufficient - there's nothing really
preventing backend_changed() from being called more than once
for a given device (it is simply the handler of a xenbus watch). Hence
I think either that function needs to be guarded against multiple
execution (e.g. by removing the watch from that function itself,
if that's permitted by xenbus), or it needs to properly deal with the
effects this has (including, but probably not limited to, the leaking
of be->mode).

Jan

>  	kfree(be);
>  	dev_set_drvdata(&dev->dev, NULL);
>  	return 0;
> -- 
> 1.8.0.1




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:01:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfqFS-000351-P1; Tue, 04 Dec 2012 11:01:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TfqFR-00034g-LL
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:01:09 +0000
Received: from [85.158.139.83:37596] by server-2.bemta-5.messagelabs.com id
	03/7D-04892-4F7DDB05; Tue, 04 Dec 2012 11:01:08 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354618866!28315988!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA1MDI0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2526 invoked from network); 4 Dec 2012 11:01:07 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 11:01:07 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id CB9DF190F;
	Tue,  4 Dec 2012 13:01:05 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id A13182005F; Tue,  4 Dec 2012 13:01:05 +0200 (EET)
Date: Tue, 4 Dec 2012 13:01:05 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121204110105.GX8912@reaktio.net>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Mats Petersson <mats.petersson@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
> >>  I had a quick look, and it doesn't look that hard to backport that patch.
> > 
> > Thanks, Mats.
> > I'm glad to report that the patch does fix my problem.
> > 
> > And yes, it is really easy to port since the code did not change across
> > the two releases.
> > The only change would be the line numbers (3841 vs 3803) and one extra
> > comment before this line:
> > 
> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> > 
> > I'm not sure if you are going to release another maintenance version that
> > includes this patch,
> > but I'll report this to the Debian maintainer since Debian is about to
> > freeze for the v7.0 release and v4.2.0 will not make it.
> 
> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
> out?
> 

It's already in Xen 4.2 and confirmed to fix a bug in Xen 4.1,
so I'd say it should be a candidate for Xen 4.1.4.

-- Pasi

> Jan
> 
> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
> > <mats.petersson@citrix.com>wrote:
> > 
> >> On 03/12/12 13:19, Mats Petersson wrote:
> >>
> >>> On 03/12/12 13:14, G.R. wrote:
> >>>
> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
> >>>> <mats.petersson@citrix.com>
> >>>> wrote:
> >>>>
> >>>>      On 03/12/12 03:47, G.R. wrote:
> >>>>
> >>>>          Hi developers,
> >>>>          I met some domU issues and the log suggests missing interrupt.
> >>>>          Details from here:
> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
> >>>>          In summary, this is the suspicious log:
> >>>>
> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
> >>>>
> >>>>          I've checked the code in question and found that mode 3 is a
> >>>>          'reserved_1' mode.
> >>>>          I want to trace down the source of this mode setting to
> >>>>          root-cause the issue.
> >>>>          But I'm not a Xen developer, and am only a newbie as a Xen
> >>>> user.
> >>>>          Could anybody give me instructions about how to enable
> >>>>          detailed debug log?
> >>>>          It could be better if I can get advice about experiments to
> >>>>          perform / switches to try out etc.
> >>>>
> >>>>          My SW config:
> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> >>>>          domU: Debian wheezy 3.2.x stock kernel.
> >>>>
> >>>>          Thanks,
> >>>>          Timothy
> >>>>
> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
> >>>>      What are the exact messages in the DomU?
> >>>>
> >>>>
> >>>> Yes, I'm doing PCI passthrough (the IGD, audio and USB controllers).
> >>>> But this is actually a PVHVM guest, since the Debian stock kernel has
> >>>> PVOP enabled.
> >>>> And when I tried another Linux distro with PVOP disabled (OpenELEC
> >>>> v2.0), I did not see such MSI-related error messages.
> >>>> Actually, with that domU I do not see anything obviously wrong in the
> >>>> log, but I also see nothing on the panel (the panel receives no signal
> >>>> and goes into power saving) :-(
> >>>>
> >>>>
> >>>> Back to the issue I was reporting, the domU log looks like this:
> >>>>
> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
> >>>> [drm:i915_hangcheck_ring_idle]
> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
> >>>> 3354], missed IRQ?
> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
> >>>> [drm:i915_hangcheck_ring_idle]
> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
> >>>> 11297], missed IRQ?
> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
> >>>> timeout, switching to polling mode: last cmd=0x000f0000
> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
> >>>> codec, disabling MSI: last cmd=0x002f0600
> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
> >>>>
> >>>>
> >>>> Thanks,
> >>>> Timothy
> >>>>
> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
> >>> addresses this. I'm not fully clued up on what the policy for
> >>> backporting fixes is, and I haven't looked at the complexity of the fix
> >>> itself, but either updating to 4.2.0 or a (personal) backport sounds
> >>> like the right solution here.
> >>>
> >> I had a quick look, and it doesn't look that hard to backport that patch.
> >>
> >> --
> >> Mats
> >>
> >>
> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
> >>> to your original email.
> >>>
> >>> --
> >>> Mats
> >>>
> >>> _______________________________________________
> >>> Xen-devel mailing list
> >>> Xen-devel@lists.xen.org 
> >>> http://lists.xen.org/xen-devel 
> >>>
> >>>
> >>>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org 
> >> http://lists.xen.org/xen-devel 
> >>
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:03:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfqHB-0003Ca-GE; Tue, 04 Dec 2012 11:02:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TfqH9-0003CN-Ch
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:02:55 +0000
Received: from [85.158.138.51:10846] by server-14.bemta-3.messagelabs.com id
	0A/D7-31424-E58DDB05; Tue, 04 Dec 2012 11:02:54 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354618973!25618803!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA1MDI0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12775 invoked from network); 4 Dec 2012 11:02:53 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 11:02:53 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 8EB6F25BA;
	Tue,  4 Dec 2012 13:02:52 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 476832005F; Tue,  4 Dec 2012 13:02:52 +0200 (EET)
Date: Tue, 4 Dec 2012 13:02:52 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <ijc@hellion.org.uk>
Message-ID: <20121204110252.GY8912@reaktio.net>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
	<1354613604.2693.55.camel@zakaz.uk.xensource.com>
	<50BDD5A302000078000ADA26@nat28.tlf.novell.com>
	<1354615936.2693.59.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354615936.2693.59.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Bastian Blank <waldi@debian.org>,
	"695056@bugs.debian.org" <695056@bugs.debian.org>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 10:12:16AM +0000, Ian Campbell wrote:
> On Tue, 2012-12-04 at 09:51 +0000, Jan Beulich wrote:
> > >>> On 04.12.12 at 10:33, Ian Campbell <ijc@hellion.org.uk> wrote:
> > > On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
> > >> Package: src:xen
> > >> Version: 4.1.3-4
> > >> Severity: serious
> > >> 
> > >> The bzimage loader used in both libxc and the hypervisor lacks XZ
> > >> support. Debian kernels since 3.6 are compressed with XZ and can't be
> > >> loaded.
> > >> 
> > >> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
> > >> somewhere last year.
> > > 
> > > Indeed. Jan this would be a good candidate for a future 4.1.x I think
> > > (it was already in 4.2.0).
> > 
> > Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
> > towards an RC2 followed by a release within the next few days),
> > and doing a feature addition like this in a .5 stable release
> > looks questionable to me in the context of how we managed
> > stable updates in the past. But I'm open to being outvoted, of
> > course...
> 
> My thinking was that it is a reasonably self contained patch (most of
> the code is the new files) and without it people won't be able to use
> 4.1.x with their distro kernels at all, so it's something between a bug
> fix and a new feature.
> 
> If distros are going to be backporting it anyway they may as well all
> get it from us.
> 

+1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > >> support. Debian kernels since 3.6 are compressed with XZ and can't be
> > >> loaded.
> > >> 
> > >> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
> > >> somewhere last year.
> > > 
> > > Indeed. Jan, this would be a good candidate for a future 4.1.x, I think
> > > (it was already in 4.2.0).
> > 
> > Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
> > towards an RC2 followed by a release within the next days),
> > and doing a feature addition like this in a .5 stable release
> > looks questionable to me in the context of how we managed
> > stable updates in the past. But I'm open to be outvoted of
> > course...
> 
> My thinking was that it is a reasonably self contained patch (most of
> the code is the new files) and without it people won't be able to use
> 4.1.x with their distro kernels at all, so it's something between a bug
> fix and a new feature.
> 
> If distros are going to be backporting it anyway they may as well all
> get it from us.
> 

+1
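For context on why an XZ-packed bzimage fails outright: the loader picks a decompressor by sniffing the payload's leading magic bytes, so a kernel repacked with XZ is simply rejected by a loader that does not know the XZ magic. A minimal, hypothetical sketch of that classification step (classify_payload and the FMT_* names are illustrative, not the actual libxc/hypervisor code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Magic numbers for two payload formats a bzimage loader may meet.
 * The XZ stream header begins with the six bytes FD '7' 'z' 'X' 'Z' 00. */
static const unsigned char xz_magic[6]   = { 0xFD, '7', 'z', 'X', 'Z', 0x00 };
static const unsigned char gzip_magic[2] = { 0x1F, 0x8B };

enum payload_fmt { FMT_UNKNOWN, FMT_GZIP, FMT_XZ };

/* Hypothetical helper: classify a kernel payload by its leading bytes.
 * A loader without the FMT_XZ branch returns FMT_UNKNOWN for XZ kernels
 * and refuses to load them -- the behaviour reported in this bug. */
static enum payload_fmt classify_payload(const unsigned char *p, size_t len)
{
    if (len >= sizeof(xz_magic) && memcmp(p, xz_magic, sizeof(xz_magic)) == 0)
        return FMT_XZ;
    if (len >= sizeof(gzip_magic) && memcmp(p, gzip_magic, sizeof(gzip_magic)) == 0)
        return FMT_GZIP;
    return FMT_UNKNOWN;
}
```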


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:29:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfqgc-0003gr-Pz; Tue, 04 Dec 2012 11:29:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Tfqgb-0003gm-GE
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:29:13 +0000
Received: from [85.158.139.83:54376] by server-9.bemta-5.messagelabs.com id
	BD/DA-29295-88EDDB05; Tue, 04 Dec 2012 11:29:12 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354620550!28220180!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30671 invoked from network); 4 Dec 2012 11:29:11 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:29:11 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so3489366vcb.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 03:29:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=pinW6NmEi100GDlzKRXzkanim4TprvQuuJnppIUGTeE=;
	b=dPV1cGaGnuwB+d3Gbp1aSpW0tqCVPTc5jDz3g5nkI0pxqMmMrRLeXMGRo8H7j7httb
	T+heS9V/bdRWMczpPRo3tUya3qqUsongGG3hIdupLRVP4XrRyqHwR48rVKgnbv+H83rC
	VEJqWK4PHiXT3WlY2bknsyjknUFoB3L3ua4FMmYqPgifvxdbI09qkHCbjKUT4RDvAHWU
	Gi4bFBYZKlSrJQZOGfzd0NDikcGNSfyi5z2bIjhDtGkWufIA+7G+6jmD25VPICp0nnoe
	DbWClA0Ylz4ykq9ZtO6a3enfUt1jrSYCYJN4CozfWBQyDphoDuvfekd4rZcCQ3cvWh10
	9JKg==
MIME-Version: 1.0
Received: by 10.52.179.130 with SMTP id dg2mr9862542vdc.8.1354620549929; Tue,
	04 Dec 2012 03:29:09 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Tue, 4 Dec 2012 03:29:09 -0800 (PST)
In-Reply-To: <50BCF4F9.8010601@citrix.com>
References: <1354552154.18784.9.camel@iceland>
	<50BCF4F9.8010601@citrix.com>
Date: Tue, 4 Dec 2012 11:29:09 +0000
X-Google-Sender-Auth: u7Q3POqBFdAcEXqU8ZEyQT_7Z3U
Message-ID: <CAFLBxZak4Bj2L9LCWMpp8Te95nj+SF-SDSYoMTXR926KHfx4QQ@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Wei Liu <Wei.Liu2@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1033863716520674471=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1033863716520674471==
Content-Type: multipart/alternative; boundary=bcaec5171ecdaf1cb204d0052cb9

--bcaec5171ecdaf1cb204d0052cb9
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Dec 3, 2012 at 6:52 PM, David Vrabel <david.vrabel@citrix.com> wrote:

> On 03/12/12 16:29, Wei Liu wrote:
> > Hi all
> >
> > There has been discussion on extending number of event channels back in
> > September [0].
>
> It seems that the decision has been made to go for this N-level
> approach.  Were any other methods considered?
>
> Would a per-VCPU ring of pending events work?  The ABI will be easier to
> extend in the future for more event channels.  The guest side code will
> be simpler.  It will be easier to fairly service the events as they will
> be processed in the order they were raised.
>

Not having looked in depth at either solution, it does seem like having
something like this, rather than an ever-growing array of bits that needs
to be examined, would be simpler and more scalable.

 -George

--bcaec5171ecdaf1cb204d0052cb9
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<br><div class=3D"gmail_extra">On Mon, Dec 3, 2012 at 6:52 PM, David Vrabel=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:david.vrabel@citrix.com" target=3D=
"_blank">david.vrabel@citrix.com</a>&gt;</span> wrote:<br><div class=3D"gma=
il_quote">
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"im">On 03/12/12 16:29, Wei Liu=
 wrote:<br>
&gt; Hi all<br>
&gt;<br>
&gt; There has been discussion on extending number of event channels back i=
n<br>
&gt; September [0].<br>
<br>
</div>It seems that the decision has been made to go for this N-level<br>
approach. =A0Were any other methods considered?<br>
<br>
Would a per-VCPU ring of pending events work? =A0The ABI will be easier to<=
br>
extend in the future for more event channels. =A0The guest side code will<b=
r>
be simpler. =A0It will be easier to fairly service the events as they will<=
br>
be processed in the order they were raised.<br></blockquote><div><br>Not ha=
ving looked in depth at either solution, it does seem like having something=
 like this, rather than an ever-growing array of bits that needs to be exam=
ined, would be simpler and more scalable.<br>
<br>=A0-George <br></div></div><br></div>

--bcaec5171ecdaf1cb204d0052cb9--


--===============1033863716520674471==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1033863716520674471==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 11:36:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfqnf-0003tL-P6; Tue, 04 Dec 2012 11:36:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Tfqne-0003tG-7D
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:36:30 +0000
Received: from [85.158.139.83:59368] by server-1.bemta-5.messagelabs.com id
	4C/E5-09311-D30EDB05; Tue, 04 Dec 2012 11:36:29 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354620916!25603380!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26644 invoked from network); 4 Dec 2012 11:35:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:35:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216314123"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	04 Dec 2012 11:35:14 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Tue, 4 Dec 2012
	06:35:14 -0500
Message-ID: <50BDDFF0.6050308@citrix.com>
Date: Tue, 4 Dec 2012 11:35:12 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Wei Liu <Wei.Liu2@citrix.com>
References: <1354552154.18784.9.camel@iceland> <50BCF4F9.8010601@citrix.com>
	<20121203205612.GA22913@iceland>
In-Reply-To: <20121203205612.GA22913@iceland>
X-Originating-IP: [10.80.2.76]
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 20:56, Wei Liu wrote:
> On Mon, Dec 03, 2012 at 06:52:41PM +0000, David Vrabel wrote:
>> On 03/12/12 16:29, Wei Liu wrote:
>>> Hi all
>>>
>>> There has been discussion on extending number of event channels back in
>>> September [0].
>>
>> It seems that the decision has been made to go for this N-level
>> approach.  Were any other methods considered?
>>
>> Would a per-VCPU ring of pending events work?  The ABI will be easier to
>> extend in the future for more event channels.  The guest side code will
>> be simpler.  It will be easier to fairly service the events as they will
>> be processed in the order they were raised.
>>
>> The complexity would be in ensuring that events were not lost due to
>> lack of space in the ring.  This may make the ring prohibitively large
>> or require complex or expensive tracking of pending events inside Xen.
>>
> 
> If I understand correctly, one event will always be queued up for
> processing in the ring model; would this be overkill? What if event
> generation is much faster than processing?
> 
> In the current implementation, one event channel can be raised multiple
> times but it is only processed once.

There would have to be something in Xen to ensure an event is only
added to the ring once, i.e., if it's already in the ring, it doesn't
get added again.  This is the tricky bit and I can't immediately think
of how this would work.

David
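One common way to get the "only added once" property is a per-channel in-ring flag, set on enqueue and cleared on consume, so a second raise before delivery coalesces much like today's pending bit. A hypothetical sketch only, not a proposed Xen interface: all names are illustrative, and a real design would need atomic test-and-set plus an answer for the ring-full case:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define RING_SIZE 16   /* illustrative; real sizing is the open question */
#define NR_EVENTS 64

/* Hypothetical per-VCPU event ring; not from any Xen header. */
struct evt_ring {
    unsigned int prod, cons;          /* free-running producer/consumer */
    unsigned int slot[RING_SIZE];     /* ports, oldest-first */
    bool enqueued[NR_EVENTS];         /* one "already in ring" bit per port */
};

/* Raise an event: enqueue only if it is not already pending in the ring. */
static bool evt_raise(struct evt_ring *r, unsigned int port)
{
    if (r->enqueued[port])
        return true;                  /* coalesced, like today's pending bit */
    if (r->prod - r->cons == RING_SIZE)
        return false;                 /* ring full: the hard case to avoid */
    r->slot[r->prod++ % RING_SIZE] = port;
    r->enqueued[port] = true;
    return true;
}

/* Consume the oldest event, clearing its in-ring bit so it can re-arm. */
static int evt_consume(struct evt_ring *r)
{
    unsigned int port;
    if (r->cons == r->prod)
        return -1;                    /* nothing pending */
    port = r->slot[r->cons++ % RING_SIZE];
    r->enqueued[port] = false;
    return (int)port;
}
```

With at most one entry per channel, the ring is bounded at NR_EVENTS slots, which addresses the "prohibitively large" worry at the cost of one flag per channel.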

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:54:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr4y-0004Mj-Fl; Tue, 04 Dec 2012 11:54:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfr4w-0004Me-O4
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 11:54:22 +0000
Received: from [85.158.143.35:19095] by server-1.bemta-4.messagelabs.com id
	80/7F-27934-E64EDB05; Tue, 04 Dec 2012 11:54:22 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354622035!10530461!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23607 invoked from network); 4 Dec 2012 11:53:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:53:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16145107"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:53:55 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Tue, 4 Dec 2012
	11:53:55 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 4 Dec 2012 11:53:54 +0000
Thread-Topic: [Xen-devel] [PATCH 4 of 5 RFC] blktap3: Introduce xenio daemon
	Makefile
Thread-Index: Ac3ORq/HYvokQBtzTNyCxrR/0Q9ETQDwE6GQ
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3DA@LONPMAILBOX01.citrite.net>
References: <patchbomb.1354112410@makatos-desktop>
	<0f3b6811dad16cc9d93f.1354112414@makatos-desktop>
	<1354203123.6269.30.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354203123.6269.30.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 4 of 5 RFC] blktap3: Introduce xenio daemon
 Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 29 November 2012 15:32
> To: Thanos Makatos
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH 4 of 5 RFC] blktap3: Introduce xenio
> daemon Makefile
> 
> On Wed, 2012-11-28 at 14:20 +0000, Thanos Makatos wrote:
> > diff -r 7126fda14249 -r 0f3b6811dad1 tools/blktap3/xenio/Makefile
> > --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
> > +++ b/tools/blktap3/xenio/Makefile	Wed Nov 28 14:18:46 2012
> +0000
> > @@ -0,0 +1,47 @@
> > +XEN_ROOT := $(CURDIR)/../../../
> > +include $(XEN_ROOT)/tools/Rules.mk
> > +
> > +BLKTAP_ROOT := ..
> > +
> > +INST_DIR ?= /usr/bin
> > +
> > +IBIN = xenio
> > +
> > +override CFLAGS += \
> > +	-I$(BLKTAP_ROOT)/include \
> > +	-I$(BLKTAP_ROOT)/control \
> > +	-I$(XEN_ROOT)/tools/libxc \
> > +	-I$(XEN_ROOT)/tools/xenstore \
> 
> $(CFLAGS_libxenstore) and $(CFLAGS_libxenctrl) please.

Ok.

> 
> > +	-D_GNU_SOURCE \
> > +	$(CFLAGS_xeninclude) \
> > +    -Wall \
> > +    -Wextra \
> > +    -Werror
> > +
> > +# FIXME cause trouble
> > +override CFLAGS += \
> > +    -Wno-old-style-declaration \
> > +    -Wno-sign-compare \
> > +    -Wno-type-limits
> > +
> > +override LDFLAGS = \
> > +    -L$(XEN_ROOT)/tools/xenstore -lxenstore
> 
> $(LDFLAGS_libxenstore)

You mean $(LDLIBS_libxenstore), right?

> 
> You have xenctrl CFLAGS but not LDFLAGS, is that right?

Yes that's an omission, I've added $(LDFLAGS_libxenctrl).

> 
> > +
> > +XENIO-OBJS := log.o
> > +
> > +all: $(IBIN)
> > +
> > +$(BLKTAP_ROOT)/control/libblktapctl.a:
> > +	make -C $(BLKTAP_ROOT)/control libblktapctl.a
> 
> Can you use SUBDIRS in the normal way for this?

Ok.

> 
> > +$(IBIN): $(XENIO-OBJS) xenio.o $(BLKTAP_ROOT)/control/libblktapctl.a
> 
> Static linking on purpose?
> 
> If this is a purely internal library then fine, but I have a feeling
> that libblktapctl is intended as an interface library which others
> (e.g.
> libxl) will want to use?

Indeed libxl uses libblktapctl, does this mean that the xenio daemon must use the .so version as well?

> 
> > +	$(CC) -o $@ $^ $(LDFLAGS)
> > +
> > +install: all
> > +	$(INSTALL_DIR) -p $(DESTDIR)$(INST_DIR)
> > +	$(INSTALL_PROG) $(IBIN) $(DESTDIR)$(INST_DIR)
> > +
> > +clean:
> > +	rm -f *.o *.o.d .*.o.d $(IBIN)
> > +
> > +.PHONY: clean install
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
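Pulling the review points together, a hedged sketch of what the corrected fragment might look like under the tools/Rules.mk conventions (this assumes Rules.mk defines CFLAGS_libxenctrl/CFLAGS_libxenstore and the matching LDLIBS_* variables, as the review suggests; it is not the final patch):

```make
# Sketch only: use the Rules.mk-provided per-library flag variables
# instead of hard-coded -I/-L paths into the source tree.
override CFLAGS += \
	-I$(BLKTAP_ROOT)/include \
	-I$(BLKTAP_ROOT)/control \
	$(CFLAGS_libxenctrl) \
	$(CFLAGS_libxenstore) \
	$(CFLAGS_xeninclude) \
	-D_GNU_SOURCE -Wall -Wextra -Werror

# Libraries go in LDLIBS (after the objects), not LDFLAGS.
LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore)

$(IBIN): $(XENIO-OBJS) xenio.o $(BLKTAP_ROOT)/control/libblktapctl.a
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)
```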

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> Subject: Re: [Xen-devel] [PATCH 4 of 5 RFC] blktap3: Introduce xenio
> daemon Makefile
> 
> On Wed, 2012-11-28 at 14:20 +0000, Thanos Makatos wrote:
> > diff -r 7126fda14249 -r 0f3b6811dad1 tools/blktap3/xenio/Makefile
> > --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
> > +++ b/tools/blktap3/xenio/Makefile	Wed Nov 28 14:18:46 2012
> +0000
> > @@ -0,0 +1,47 @@
> > +XEN_ROOT := $(CURDIR)/../../../
> > +include $(XEN_ROOT)/tools/Rules.mk
> > +
> > +BLKTAP_ROOT := ..
> > +
> > +INST_DIR ?= /usr/bin
> > +
> > +IBIN = xenio
> > +
> > +override CFLAGS += \
> > +	-I$(BLKTAP_ROOT)/include \
> > +	-I$(BLKTAP_ROOT)/control \
> > +	-I$(XEN_ROOT)/tools/libxc \
> > +	-I$(XEN_ROOT)/tools/xenstore \
> 
> $(CFLAGS_libxenstore) and $(CFLAGS_libxenctrl) please.

Ok.

> 
> > +	-D_GNU_SOURCE \
> > +	$(CFLAGS_xeninclude) \
> > +    -Wall \
> > +    -Wextra \
> > +    -Werror
> > +
> > +# FIXME cause trouble
> > +override CFLAGS += \
> > +    -Wno-old-style-declaration \
> > +    -Wno-sign-compare \
> > +    -Wno-type-limits
> > +
> > +override LDFLAGS = \
> > +    -L$(XEN_ROOT)/tools/xenstore -lxenstore
> 
> $(LDFLAGS_libxenstore)

You mean $(LDLIBS_libxenstore), right?

> 
> You have xenctrl CFLAGS but not LDFLAGS, is that right?

Yes, that's an omission; I've added $(LDFLAGS_libxenctrl).

> 
> > +
> > +XENIO-OBJS := log.o
> > +
> > +all: $(IBIN)
> > +
> > +$(BLKTAP_ROOT)/control/libblktapctl.a:
> > +	make -C $(BLKTAP_ROOT)/control libblktapctl.a
> 
> Can you use SUBDIRS in the normal way for this?

Ok.

> 
> > +$(IBIN): $(XENIO-OBJS) xenio.o $(BLKTAP_ROOT)/control/libblktapctl.a
> 
> Static linking on purpose?
> 
> If this is a purely internal library then fine, but I have a feeling
> that libblktapctl is intended as an interface library which others
> (e.g.
> libxl) will want to use?

Indeed, libxl uses libblktapctl. Does this mean that the xenio daemon must use the .so version as well?

> 
> > +	$(CC) -o $@ $^ $(LDFLAGS)
> > +
> > +install: all
> > +	$(INSTALL_DIR) -p $(DESTDIR)$(INST_DIR)
> > +	$(INSTALL_PROG) $(IBIN) $(DESTDIR)$(INST_DIR)
> > +
> > +clean:
> > +	rm -f *.o *.o.d .*.o.d $(IBIN)
> > +
> > +.PHONY: clean install
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:56:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7H-0004Sx-Fj; Tue, 04 Dec 2012 11:56:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7G-0004ST-Qa
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:56:47 +0000
Received: from [85.158.139.83:60739] by server-10.bemta-5.messagelabs.com id
	75/F3-09257-DF4EDB05; Tue, 04 Dec 2012 11:56:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354622202!24441497!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27639 invoked from network); 4 Dec 2012 11:56:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:56:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216316094"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-9X;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:34 +0000
Message-ID: <1354622199-27504-10-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 10/15] xen: arm: stub out domain_get_maximum_gpfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It currently has no callers, so return -ENOSYS until such a time as one
arrives.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    1 -
 xen/arch/arm/mm.c    |    5 +++++
 2 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 7e6f171..fd667e5 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -7,7 +7,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 x:	mov pc, lr
 
 /* Other */
-DUMMY(domain_get_maximum_gpfn);
 DUMMY(domain_relinquish_resources);
 DUMMY(dom_cow);
 DUMMY(send_timer_event);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 329b1d4..718f32d 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -447,6 +447,11 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
     return 0;
 }
 
+unsigned long domain_get_maximum_gpfn(struct domain *d)
+{
+    return -ENOSYS;
+}
+
 void share_xen_page_with_guest(struct page_info *page,
                           struct domain *d, int readonly)
 {
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:56:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7F-0004SJ-El; Tue, 04 Dec 2012 11:56:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7E-0004S0-D8
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:56:44 +0000
Received: from [85.158.139.83:37739] by server-9.bemta-5.messagelabs.com id
	31/01-29295-BF4EDB05; Tue, 04 Dec 2012 11:56:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354622200!28278519!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20079 invoked from network); 4 Dec 2012 11:56:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:56:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216316092"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr79-0005GC-Sl;
	Tue, 04 Dec 2012 11:56:39 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:28 +0000
Message-ID: <1354622199-27504-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 04/15] xen: arm: implement arch_vcpu_reset.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Untested.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domain.c |    5 +++++
 xen/arch/arm/dummy.S  |    1 -
 2 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c5292c7..b7b2d5c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,6 +515,11 @@ int arch_set_info_guest(
     return 0;
 }
 
+void arch_vcpu_reset(struct vcpu *v)
+{
+    vcpu_end_shutdown_deferral(v);
+}
+
 void arch_dump_domain_info(struct domain *d)
 {
 }
diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 5ac6af9..66eb314 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -14,7 +14,6 @@ DUMMY(pirq_guest_unbind);
 DUMMY(pirq_set_affinity);
 
 /* VCPU */
-DUMMY(arch_vcpu_reset);
 NOP(update_vcpu_system_time);
 
 /* Grant Tables */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:56:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7G-0004Sa-Rq; Tue, 04 Dec 2012 11:56:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7F-0004SB-Al
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:56:45 +0000
Received: from [85.158.139.83:60593] by server-8.bemta-5.messagelabs.com id
	D8/9D-06050-CF4EDB05; Tue, 04 Dec 2012 11:56:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354622200!28278519!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20097 invoked from network); 4 Dec 2012 11:56:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:56:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216316093"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-0U;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Dec 2012 11:56:29 +0000
Message-ID: <1354622199-27504-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 05/15] xen: remove nr_irqs_gsi from generic code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The concept is x86-specific.

AFAICT the generic concept here is the number of physical IRQs which
the current hardware has, so call this nr_hw_irqs.

Also, using "defined NR_IRQS" as a stand-in for x86 might have made
sense at one point, but it's cleaner to push the necessary
definitions into asm/irq.h.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Keir (Xen.org) <keir@xen.org>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/arm/dummy.S      |    1 -
 xen/common/domain.c       |    4 ++--
 xen/include/asm-arm/irq.h |    3 +++
 xen/include/asm-x86/irq.h |    4 ++++
 xen/include/xen/irq.h     |    8 --------
 xen/xsm/flask/hooks.c     |    4 ++--
 6 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 66eb314..5d9bcff 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -8,7 +8,6 @@ x:	mov pc, lr
 	
 /* PIRQ support */
 DUMMY(alloc_pirq_struct);
-DUMMY(nr_irqs_gsi);
 DUMMY(pirq_guest_bind);
 DUMMY(pirq_guest_unbind);
 DUMMY(pirq_set_affinity);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 12c8e24..d80461d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -259,9 +259,9 @@ struct domain *domain_create(
         atomic_inc(&d->pause_count);
 
         if ( domid )
-            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
+            d->nr_pirqs = nr_hw_irqs + extra_domU_irqs;
         else
-            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
+            d->nr_pirqs = nr_hw_irqs + extra_dom0_irqs;
         if ( d->nr_pirqs > nr_irqs )
             d->nr_pirqs = nr_irqs;
 
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index abde839..4facaf0 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -21,6 +21,9 @@ struct irq_cfg {
 #define NR_IRQS		1024
 #define nr_irqs NR_IRQS
 
+#define nr_irqs NR_IRQS
+#define nr_hw_irqs NR_IRQS
+
 struct irq_desc;
 
 struct irq_desc *__irq_to_desc(int irq);
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index 5eefb94..6ea5f53 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -11,6 +11,10 @@
 #include <irq_vectors.h>
 #include <asm/percpu.h>
 
+extern unsigned int nr_irqs_gsi;
+extern unsigned int nr_irqs;
+#define nr_hw_irqs nr_irqs_gsi
+
 #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
 			     (1 << (irq)) & io_apic_irqs : \
 			     (irq) < nr_irqs_gsi)
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 5973cce..7386358 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
 
 #include <asm/irq.h>
 
-#ifdef NR_IRQS
-# define nr_irqs NR_IRQS
-# define nr_irqs_gsi NR_IRQS
-#else
-extern unsigned int nr_irqs_gsi;
-extern unsigned int nr_irqs;
-#endif
-
 struct msi_desc;
 /*
  * This is the "IRQ descriptor", which contains various information
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 0ca10d0..595c31e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct avc_audit_data *ad)
     struct irq_desc *desc = irq_to_desc(irq);
     if ( irq >= nr_irqs || irq < 0 )
         return -EINVAL;
-    if ( irq < nr_irqs_gsi ) {
+    if ( irq < nr_hw_irqs ) {
         if (ad) {
             AVC_AUDIT_DATA_INIT(ad, IRQ);
             ad->irq = irq;
@@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     if ( rc )
         return rc;
 
-    if ( irq >= nr_irqs_gsi && msi ) {
+    if ( irq >= nr_hw_irqs && msi ) {
         u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
         AVC_AUDIT_DATA_INIT(&ad, DEV);
         ad.device = machine_bdf;
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

index 5973cce..7386358 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
 
 #include <asm/irq.h>
 
-#ifdef NR_IRQS
-# define nr_irqs NR_IRQS
-# define nr_irqs_gsi NR_IRQS
-#else
-extern unsigned int nr_irqs_gsi;
-extern unsigned int nr_irqs;
-#endif
-
 struct msi_desc;
 /*
  * This is the "IRQ descriptor", which contains various information
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 0ca10d0..595c31e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct avc_audit_data *ad)
     struct irq_desc *desc = irq_to_desc(irq);
     if ( irq >= nr_irqs || irq < 0 )
         return -EINVAL;
-    if ( irq < nr_irqs_gsi ) {
+    if ( irq < nr_hw_irqs ) {
         if (ad) {
             AVC_AUDIT_DATA_INIT(ad, IRQ);
             ad->irq = irq;
@@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     if ( rc )
         return rc;
 
-    if ( irq >= nr_irqs_gsi && msi ) {
+    if ( irq >= nr_hw_irqs && msi ) {
         u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
         AVC_AUDIT_DATA_INIT(&ad, DEV);
         ad.device = machine_bdf;
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:56:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7F-0004SC-1A; Tue, 04 Dec 2012 11:56:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7E-0004Rz-5Q
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:56:44 +0000
Received: from [85.158.139.83:60465] by server-15.bemta-5.messagelabs.com id
	19/D5-26920-BF4EDB05; Tue, 04 Dec 2012 11:56:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354622200!28278519!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20066 invoked from network); 4 Dec 2012 11:56:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:56:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216316091"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-2Q;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:30 +0000
Message-ID: <1354622199-27504-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 06/15] xen: arm: stub out pirq related functions.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM we use GIC functionality to inject virtualised real interrupts
for h/w devices rather than evtchn-pirqs.
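
The stubs below can BUG() safely because of the caller pattern in common code: every pirq operation first obtains a struct pirq, and with alloc_pirq_struct() returning NULL that step always fails. A hedged sketch of that pattern (bind_pirq() is a hypothetical stand-in for the actual Xen call sites):

```c
#include <assert.h>
#include <stddef.h>

struct domain;
struct pirq;

/* ARM stub, as in the patch: no struct pirq is ever allocated. */
static struct pirq *alloc_pirq_struct(struct domain *d)
{
    (void)d;
    return NULL;
}

/* Sketch of a common-code call site: the pirq allocation/lookup fails
 * first, so pirq_guest_bind() and friends are never reached and their
 * BUG() bodies stay unreachable. */
static int bind_pirq(struct domain *d)
{
    struct pirq *info = alloc_pirq_struct(d);
    if ( info == NULL )
        return -1; /* bail out gracefully before any bind */
    return 0;      /* would call pirq_guest_bind(...) here */
}
```
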

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    6 ------
 xen/arch/arm/irq.c   |   29 +++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 5d9bcff..2110bf1 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -6,12 +6,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 	.globl x; \
 x:	mov pc, lr
 	
-/* PIRQ support */
-DUMMY(alloc_pirq_struct);
-DUMMY(pirq_guest_bind);
-DUMMY(pirq_guest_unbind);
-DUMMY(pirq_set_affinity);
-
 /* VCPU */
 NOP(update_vcpu_system_time);
 
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 72e83e6..a50281b 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -192,6 +192,35 @@ out_no_end:
 }
 
 /*
+ * pirq event channels. We don't use these on ARM; instead we use
+ * the features of the GIC to inject virtualised normal interrupts.
+ */
+struct pirq *alloc_pirq_struct(struct domain *d)
+{
+    return NULL;
+}
+
+/*
+ * These are all unreachable given an alloc_pirq_struct which
+ * returns NULL: all callers look up the struct pirq first, which
+ * will fail.
+ */
+int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
+{
+    BUG();
+}
+
+void pirq_guest_unbind(struct domain *d, struct pirq *pirq)
+{
+    BUG();
+}
+
+void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask)
+{
+    BUG();
+}
+
+/*
  * Local variables:
  * mode: C
  * c-set-style: "BSD"
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:57:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:57:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7m-0004cL-0o; Tue, 04 Dec 2012 11:57:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7k-0004bo-PY
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:16 +0000
Received: from [85.158.139.211:29696] by server-9.bemta-5.messagelabs.com id
	B9/92-29295-B15EDB05; Tue, 04 Dec 2012 11:57:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1354622233!18950243!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1782 invoked from network); 4 Dec 2012 11:57:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:57:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16145157"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	11:56:14 +0000
Message-ID: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Tue, 4 Dec 2012 11:56:13 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This was a short-term hack to get something linking quickly, but its
usefulness has now passed.

This series replaces everything in here with proper functions. In many
cases these are still just stubs.

It seems to me that at least some of this stuff consists of x86-isms
which should instead be removed from the common code.

This highlights two large missing pieces of functionality: wallclock
time and cleaning up on domain destroy.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:57:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7o-0004db-TM; Tue, 04 Dec 2012 11:57:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7m-0004cK-Qp
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:18 +0000
Received: from [85.158.143.99:60547] by server-2.bemta-4.messagelabs.com id
	AD/3A-28922-E15EDB05; Tue, 04 Dec 2012 11:57:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354622236!20947205!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24096 invoked from network); 4 Dec 2012 11:57:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:57:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46516008"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-5x;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:32 +0000
Message-ID: <1354622199-27504-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 08/15] xen: arm: stub out steal_page.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Callers handle the failure gracefully; it can be called by
GNTTABOP_transfer, XENMEM_exchange or tmem.
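
A minimal sketch of what "callers handle the failure gracefully" means in practice: a non-zero return from steal_page() fails the individual operation rather than the whole hypervisor. The try_transfer() caller here is hypothetical, standing in for the real GNTTABOP_transfer/XENMEM_exchange call sites:

```c
#include <assert.h>
#include <stddef.h>

struct domain;
struct page_info;

/* ARM stub, as in the patch: stealing pages is never supported. */
static int steal_page(struct domain *d, struct page_info *page,
                      unsigned int memflags)
{
    (void)d; (void)page; (void)memflags;
    return -1;
}

/* Sketch of a caller such as GNTTABOP_transfer: a failed steal just
 * fails this one operation and reports it to the guest. */
static int try_transfer(struct domain *d, struct page_info *page)
{
    if ( steal_page(d, page, 0) )
        return -1; /* transfer refused, nothing fatal */
    return 0;
}
```
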

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    3 ---
 xen/arch/arm/mm.c    |    6 ++++++
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 87159da..f5b0db7 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -6,9 +6,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 	.globl x; \
 x:	mov pc, lr
 	
-/* Grant Tables */
-DUMMY(steal_page);
-
 /* Page Offlining */
 DUMMY(page_is_ram_type);
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 855f83d..687eb55 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -435,6 +435,12 @@ int donate_page(struct domain *d, struct page_info *page, unsigned int memflags)
     return -ENOSYS;
 }
 
+int steal_page(
+    struct domain *d, struct page_info *page, unsigned int memflags)
+{
+    return -1;
+}
+
 void share_xen_page_with_guest(struct page_info *page,
                           struct domain *d, int readonly)
 {
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:57:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7o-0004dK-E6; Tue, 04 Dec 2012 11:57:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7m-0004cK-9p
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:18 +0000
Received: from [85.158.143.99:21675] by server-2.bemta-4.messagelabs.com id
	78/3A-28922-D15EDB05; Tue, 04 Dec 2012 11:57:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354622236!20947205!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24007 invoked from network); 4 Dec 2012 11:57:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:57:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46516007"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr79-0005GC-R0;
	Tue, 04 Dec 2012 11:56:39 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:27 +0000
Message-ID: <1354622199-27504-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 03/15] xen: arm: implement arch_get_info_guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Untested, but basically the inverse of arch_set_info_guest.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domctl.c |   17 +++++++++++++++++
 xen/arch/arm/dummy.S  |    1 -
 2 files changed, 17 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index cf16791..76f31ce 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -8,6 +8,7 @@
 #include <xen/types.h>
 #include <xen/lib.h>
 #include <xen/errno.h>
+#include <xen/sched.h>
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
@@ -16,6 +17,22 @@ long arch_do_domctl(struct xen_domctl *domctl,
     return -ENOSYS;
 }
 
+void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
+{
+    struct vcpu_guest_context *ctxt = c.nat;
+    struct cpu_user_regs *regs = &c.nat->user_regs;
+
+    *regs = v->arch.cpu_info->guest_cpu_user_regs;
+
+    ctxt->sctlr = v->arch.sctlr;
+    ctxt->ttbr0 = v->arch.ttbr0;
+    ctxt->ttbr1 = v->arch.ttbr1;
+    ctxt->ttbcr = v->arch.ttbcr;
+
+    if ( !test_bit(_VPF_down, &v->pause_flags) )
+        ctxt->flags |= VGCF_online;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index bfd948a..5ac6af9 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -14,7 +14,6 @@ DUMMY(pirq_guest_unbind);
 DUMMY(pirq_set_affinity);
 
 /* VCPU */
-DUMMY(arch_get_info_guest);
 DUMMY(arch_vcpu_reset);
 NOP(update_vcpu_system_time);
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:57:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7p-0004dq-95; Tue, 04 Dec 2012 11:57:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7n-0004cq-FX
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:19 +0000
Received: from [85.158.143.99:60567] by server-1.bemta-4.messagelabs.com id
	D0/83-27934-E15EDB05; Tue, 04 Dec 2012 11:57:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354622236!20947205!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24151 invoked from network); 4 Dec 2012 11:57:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:57:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46516009"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-4F;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:31 +0000
Message-ID: <1354622199-27504-7-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 07/15] xen: arm: stub out wallclock time.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We don't currently have much concept of wallclock time on ARM (for
either the hypervisor, dom0 or guests). For now just stub everything
out. Specifically domain_set_time_offset, update_vcpu_system_time and
wallclock_time.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    5 -----
 xen/arch/arm/time.c  |   18 ++++++++++++++++++
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 2110bf1..87159da 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -6,9 +6,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 	.globl x; \
 x:	mov pc, lr
 	
-/* VCPU */
-NOP(update_vcpu_system_time);
-
 /* Grant Tables */
 DUMMY(steal_page);
 
@@ -18,8 +15,6 @@ DUMMY(page_is_ram_type);
 /* Other */
 DUMMY(domain_get_maximum_gpfn);
 DUMMY(domain_relinquish_resources);
-DUMMY(domain_set_time_offset);
 DUMMY(dom_cow);
 DUMMY(send_timer_event);
 DUMMY(share_xen_page_with_privileged_guests);
-DUMMY(wallclock_time);
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index b6d7015..ac606f7 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -25,6 +25,7 @@
 #include <xen/mm.h>
 #include <xen/softirq.h>
 #include <xen/time.h>
+#include <xen/sched.h>
 #include <asm/system.h>
 
 /*
@@ -185,6 +186,23 @@ void udelay(unsigned long usecs)
     isb();
 }
 
+/* VCPU PV clock. */
+void update_vcpu_system_time(struct vcpu *v)
+{
+    /* XXX update shared_info->wc_* */
+}
+
+void domain_set_time_offset(struct domain *d, int32_t time_offset_seconds)
+{
+    d->time_offset_seconds = time_offset_seconds;
+    /* XXX update guest visible wallclock time */
+}
+
+struct tm wallclock_time(void)
+{
+    return (struct tm) { 0 };
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:57:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr7u-0004h7-Mb; Tue, 04 Dec 2012 11:57:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr7s-0004fL-VG
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:25 +0000
Received: from [85.158.137.99:65114] by server-5.bemta-3.messagelabs.com id
	45/70-26311-F15EDB05; Tue, 04 Dec 2012 11:57:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354622237!17530638!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16205 invoked from network); 4 Dec 2012 11:57:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:57:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46516010"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-7n;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:33 +0000
Message-ID: <1354622199-27504-9-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 09/15] xen: arm: stub page_is_ram_type.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Callers are VT-d (so x86-specific) and various bits of page-offlining
support which, although it looks generic (and is in xen/common), does
things like diving into page_info->count_info, which is not generic.

In any case this is only reachable via XEN_SYSCTL_page_offline_op,
which clearly shouldn't be called on ARM just yet.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    3 ---
 xen/arch/arm/mm.c    |    6 ++++++
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index f5b0db7..7e6f171 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -5,9 +5,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 #define  NOP(x) \
 	.globl x; \
 x:	mov pc, lr
-	
-/* Page Offlining */
-DUMMY(page_is_ram_type);
 
 /* Other */
 DUMMY(domain_get_maximum_gpfn);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 687eb55..329b1d4 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -441,6 +441,12 @@ int steal_page(
     return -1;
 }
 
+int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
+{
+    ASSERT(0);
+    return 0;
+}
+
 void share_xen_page_with_guest(struct page_info *page,
                           struct domain *d, int readonly)
 {
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:57:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr84-0004oT-4F; Tue, 04 Dec 2012 11:57:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr83-0004nW-19
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:35 +0000
Received: from [193.109.254.147:57625] by server-4.bemta-14.messagelabs.com id
	91/70-18856-E25EDB05; Tue, 04 Dec 2012 11:57:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354622200!8630831!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8154 invoked from network); 4 Dec 2012 11:56:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:56:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46516005"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr79-0005GC-NG;
	Tue, 04 Dec 2012 11:56:39 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:25 +0000
Message-ID: <1354622199-27504-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 01/15] xen: arm: define node_online_map.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For now just initialise it as a single online node, which is what
asm-arm/numa.h assumes anyway.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S       |    1 -
 xen/arch/arm/smpboot.c     |    3 +++
 xen/include/asm-arm/numa.h |    2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 022338a..4abb30a 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -7,7 +7,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 x:	mov pc, lr
 	
 /* SMP support */
-DUMMY(node_online_map);
 DUMMY(smp_send_state_dump);
 
 /* PIRQ support */
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 6555ac6..351b559 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -38,6 +38,9 @@ EXPORT_SYMBOL(cpu_online_map);
 cpumask_t cpu_possible_map;
 EXPORT_SYMBOL(cpu_possible_map);
 
+/* Fake one node for now. See also include/asm-arm/numa.h */
+nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+
 /* Xen stack for bringing up the first CPU. */
 static unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
        __attribute__((__aligned__(STACK_SIZE)));
diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
index 1b060e6..a1b1f58 100644
--- a/xen/include/asm-arm/numa.h
+++ b/xen/include/asm-arm/numa.h
@@ -1,7 +1,7 @@
 #ifndef __ARCH_ARM_NUMA_H
 #define __ARCH_ARM_NUMA_H
 
-/* Fake one node for now... */
+/* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 #define node_to_cpumask(node)   (cpu_online_map)
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:58:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr8R-00051G-Hz; Tue, 04 Dec 2012 11:57:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr8Q-00050R-15
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 11:57:58 +0000
Received: from [193.109.254.147:15156] by server-16.bemta-14.messagelabs.com
	id E1/E3-09215-545EDB05; Tue, 04 Dec 2012 11:57:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354622200!8630831!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9659 invoked from network); 4 Dec 2012 11:57:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:57:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46516006"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:56:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 06:56:40 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr79-0005GC-PH;
	Tue, 04 Dec 2012 11:56:39 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:26 +0000
Message-ID: <1354622199-27504-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 02/15] xen: arm: make smp_send_state_dump a real
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It still doesn't do anything useful, but at least it isn't in dummy.S!

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    3 ---
 xen/arch/arm/gic.c   |    6 ++++++
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 4abb30a..bfd948a 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -6,9 +6,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 	.globl x; \
 x:	mov pc, lr
 	
-/* SMP support */
-DUMMY(smp_send_state_dump);
-
 /* PIRQ support */
 DUMMY(alloc_pirq_struct);
 DUMMY(nr_irqs_gsi);
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0c6fab9..416e8d8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -330,6 +330,12 @@ void __init gic_init(void)
     spin_unlock(&gic.lock);
 }
 
+void smp_send_state_dump(unsigned int cpu)
+{
+    printk("WARNING: unable to send state dump request to CPU%d\n", cpu);
+    /* XXX TODO -- send an SGI */
+}
+
 /* Set up the per-CPU parts of the GIC for a secondary CPU */
 void __cpuinit gic_init_secondary_cpu(void)
 {
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 11:59:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 11:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfr9b-0005Ua-2w; Tue, 04 Dec 2012 11:59:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfr9Z-0005U1-JR
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 11:59:09 +0000
Received: from [85.158.143.99:12539] by server-3.bemta-4.messagelabs.com id
	C4/F6-06841-C85EDB05; Tue, 04 Dec 2012 11:59:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354622343!27004144!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29738 invoked from network); 4 Dec 2012 11:59:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 11:59:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16145283"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 11:59:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	11:59:03 +0000
Message-ID: <1354622342.2693.75.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Tue, 4 Dec 2012 11:59:02 +0000
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3DA@LONPMAILBOX01.citrite.net>
References: <patchbomb.1354112410@makatos-desktop>
	<0f3b6811dad16cc9d93f.1354112414@makatos-desktop>
	<1354203123.6269.30.camel@zakaz.uk.xensource.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3DA@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 4 of 5 RFC] blktap3: Introduce xenio daemon
 Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 11:53 +0000, Thanos Makatos wrote:

> > > +override LDFLAGS = \
> > > +    -L$(XEN_ROOT)/tools/xenstore -lxenstore
> > 
> > $(LDFLAGS_libxenstore)
> 
> You mean $(LDLIBS_libxenstore), right?

Yes.

> > > +$(IBIN): $(XENIO-OBJS) xenio.o $(BLKTAP_ROOT)/control/libblktapctl.a
> > 
> > Static linking on purpose?
> > 
> > If this is a purely internal library then fine, but I have a feeling
> > that libblktapctl is intended as an interface library which others
> > (e.g.
> > libxl) will want to use?
> 
> Indeed libxl uses libblktapctl, does this mean that the xenio daemon
> must use the .so version as well?

The default assumption should be to use the shared library. Using the
static library would need to be justified.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:09:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrJX-0006e7-FM; Tue, 04 Dec 2012 12:09:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1TfrJV-0006e2-Lm
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 12:09:25 +0000
Received: from [85.158.137.99:14811] by server-6.bemta-3.messagelabs.com id
	46/08-28265-4F7EDB05; Tue, 04 Dec 2012 12:09:24 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354622963!14669450!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15589 invoked from network); 4 Dec 2012 12:09:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:09:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16145555"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:09:02 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Tue, 4 Dec 2012
	12:09:01 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 4 Dec 2012 12:09:01 +0000
Thread-Topic: [Xen-devel] [PATCH 0 of 9 RFC] blktap3: Introduce a small
	subset of blktap3 files
Thread-Index: Ac3OQ7et2LWEfUQ0SsKVNQK5tgKKWgD1CpCw
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3EB@LONPMAILBOX01.citrite.net>
References: <patchbomb.1353090340@makatos-desktop>
	<1354201848.6269.12.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354201848.6269.12.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 0 of 9 RFC] blktap3: Introduce a small
 subset of blktap3 files
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 29 November 2012 15:11
> To: Thanos Makatos
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH 0 of 9 RFC] blktap3: Introduce a small
> subset of blktap3 files
> 
> On Fri, 2012-11-16 at 18:25 +0000, Thanos Makatos wrote:
> > blktap3 is a disk backend driver. It is based on blktap2 but does not
> > require the blktap/blkback kernel modules as it allows tapdisk to
> talk
> > directly to blkfront. This primarily simplifies maintenance, and
> _may_
> > lead to performance improvements. This patch series introduces a
> small
> > subset of files required by blktap3. blktap3 is based on a blktap2
> > fork maintained mostly by Citrix (it lives in github), so these
> > changes are also imported, apart from the blktap3 ones.
> 
> Sorry it took so long to look at this series, it generally looks good,
> thanks. I made a couple of minor comments on some patches and I noticed
> that I agreed with many of your TODOs.
> 
> Can you list explicitly what is in the patch, I think it's the central
> dispatcher/ctl daemon, which spawns the tapdisk processes on demand,
> but not the tapdisk process itself?
> 
> It might also be useful to give a high level overview of the
> architecture. Could you enumerate what the moving parts are and how
> they fit together?
> 
> For example, I think, but I'm guessing a bit, that there is a daemon
> which watches xenstore and spawns processes on demand, is that right?
> Is that daemon called "tap-ctl"?
> 
> There is also an RPC mechanism for talking to either that daemon or the
> individual tapdisk process and a library for clients to speak it? This
> is unix domain socket based or something else?
> 
> Ian.

I'll make a more elaborate description of all the issues you mention in the next round.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:16:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrQN-0006xY-RP; Tue, 04 Dec 2012 12:16:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfrQL-0006xH-GL
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 12:16:29 +0000
Received: from [85.158.143.99:60722] by server-2.bemta-4.messagelabs.com id
	B1/AA-28922-C99EDB05; Tue, 04 Dec 2012 12:16:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354623371!27007309!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17316 invoked from network); 4 Dec 2012 12:16:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:16:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46518169"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:16:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:16:10 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-Jq;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:39 +0000
Message-ID: <1354622199-27504-15-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 15/15] xen: arm: remove now empty dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/Makefile |    1 -
 xen/arch/arm/dummy.S  |    7 -------
 2 files changed, 0 insertions(+), 8 deletions(-)
 delete mode 100644 xen/arch/arm/dummy.S

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index fd92b72..5867fff 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -1,6 +1,5 @@
 subdir-y += lib
 
-obj-y += dummy.o
 obj-y += early_printk.o
 obj-y += entry.o
 obj-y += domain.o
diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
deleted file mode 100644
index a48ec08..0000000
--- a/xen/arch/arm/dummy.S
+++ /dev/null
@@ -1,7 +0,0 @@
-#define DUMMY(x) \
-	.globl x; \
-x:	.word 0xe7f000f0 /* Undefined instruction */
-
-#define  NOP(x) \
-	.globl x; \
-x:	mov pc, lr
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:16:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrQP-0006xq-9D; Tue, 04 Dec 2012 12:16:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfrQO-0006xU-3P
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 12:16:32 +0000
Received: from [85.158.139.83:38869] by server-11.bemta-5.messagelabs.com id
	A2/3F-03409-F99EDB05; Tue, 04 Dec 2012 12:16:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1354623388!28580601!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27720 invoked from network); 4 Dec 2012 12:16:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:16:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216318195"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:16:08 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-DM;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:36 +0000
Message-ID: <1354622199-27504-12-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 12/15] xen: arm: initialise dom_{xen,io,cow}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S        |    1 -
 xen/arch/arm/mm.c           |   28 +++++++++++++++++++++++++++-
 xen/arch/arm/setup.c        |    2 ++
 xen/include/asm-arm/setup.h |    2 ++
 4 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 7189648..3fe4ba6 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -7,6 +7,5 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 x:	mov pc, lr
 
 /* Other */
-DUMMY(dom_cow);
 DUMMY(send_timer_event);
 DUMMY(share_xen_page_with_privileged_guests);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 718f32d..d9c1ff7 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -30,12 +30,13 @@
 #include <xen/event.h>
 #include <xen/guest_access.h>
 #include <xen/domain_page.h>
+#include <xen/err.h>
 #include <asm/page.h>
 #include <asm/current.h>
 #include <public/memory.h>
 #include <xen/sched.h>
 
-struct domain *dom_xen, *dom_io;
+struct domain *dom_xen, *dom_io, *dom_cow;
 
 /* Static start-of-day pagetables that we use before the allocators are up */
 lpae_t xen_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
@@ -206,6 +207,31 @@ void unmap_domain_page(const void *va)
     local_irq_restore(flags);
 }
 
+void __init arch_init_memory(void)
+{
+    /*
+     * Initialise our DOMID_XEN domain.
+     * Any Xen-heap pages that we will allow to be mapped will have
+     * their domain field set to dom_xen.
+     */
+    dom_xen = domain_create(DOMID_XEN, DOMCRF_dummy, 0);
+    BUG_ON(IS_ERR(dom_xen));
+
+    /*
+     * Initialise our DOMID_IO domain.
+     * This domain owns I/O pages that are within the range of the page_info
+     * array. Mappings occur at the priv of the caller.
+     */
+    dom_io = domain_create(DOMID_IO, DOMCRF_dummy, 0);
+    BUG_ON(IS_ERR(dom_io));
+
+    /*
+     * Initialise our COW domain.
+     * This domain owns sharable pages.
+     */
+    dom_cow = domain_create(DOMID_COW, DOMCRF_dummy, 0);
+    BUG_ON(IS_ERR(dom_cow));
+}
 
 /* Boot-time pagetable setup.
  * Changes here may need matching changes in head.S */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..61bf47c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -265,6 +265,8 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     rcu_init();
 
+    arch_init_memory();
+
     local_irq_enable();
 
     smp_prepare_cpus(cpus);
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 8769f66..5c84334 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -3,6 +3,8 @@
 
 #include <public/version.h>
 
+void arch_init_memory(void);
+
 void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx);
 
 void arch_get_xen_caps(xen_capabilities_info_t *info);
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:16:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrQR-0006yP-3A; Tue, 04 Dec 2012 12:16:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfrQQ-0006xt-0D
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 12:16:34 +0000
Received: from [85.158.139.83:13399] by server-5.bemta-5.messagelabs.com id
	BF/9E-11353-1A9EDB05; Tue, 04 Dec 2012 12:16:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1354623388!28580601!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27776 invoked from network); 4 Dec 2012 12:16:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:16:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216318199"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:16:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:16:11 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-GT;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:37 +0000
Message-ID: <1354622199-27504-13-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 13/15] xen: arm: implement send_timer_event.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    1 -
 xen/arch/arm/time.c  |    7 +++++++
 2 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 3fe4ba6..6d4b34f 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -7,5 +7,4 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 x:	mov pc, lr
 
 /* Other */
-DUMMY(send_timer_event);
 DUMMY(share_xen_page_with_privileged_guests);
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index ac606f7..0f9335e 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -26,6 +26,7 @@
 #include <xen/softirq.h>
 #include <xen/time.h>
 #include <xen/sched.h>
+#include <xen/event.h>
 #include <asm/system.h>
 
 /*
@@ -186,6 +187,12 @@ void udelay(unsigned long usecs)
     isb();
 }
 
+/* VCPU PV timers. */
+void send_timer_event(struct vcpu *v)
+{
+    send_guest_vcpu_virq(v, VIRQ_TIMER);
+}
+
 /* VCPU PV clock. */
 void update_vcpu_system_time(struct vcpu *v)
 {
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:16:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrQN-0006xQ-Ci; Tue, 04 Dec 2012 12:16:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfrQL-0006xG-ER
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 12:16:29 +0000
Received: from [85.158.143.99:43960] by server-3.bemta-4.messagelabs.com id
	C4/65-06841-C99EDB05; Tue, 04 Dec 2012 12:16:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354623371!27007309!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16382 invoked from network); 4 Dec 2012 12:16:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:16:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46518166"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:16:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:16:09 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-IA;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:38 +0000
Message-ID: <1354622199-27504-14-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 14/15] xen: arm: implement
	share_xen_page_with_privileged_guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/dummy.S |    3 ---
 xen/arch/arm/mm.c    |    6 ++++++
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 6d4b34f..a48ec08 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -5,6 +5,3 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 #define  NOP(x) \
 	.globl x; \
 x:	mov pc, lr
-
-/* Other */
-DUMMY(share_xen_page_with_privileged_guests);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d9c1ff7..d97b3ea 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -506,6 +506,12 @@ void share_xen_page_with_guest(struct page_info *page,
     spin_unlock(&d->page_alloc_lock);
 }
 
+void share_xen_page_with_privileged_guests(
+    struct page_info *page, int readonly)
+{
+    share_xen_page_with_guest(page, dom_xen, readonly);
+}
+
 static int xenmem_add_to_physmap_one(
     struct domain *d,
     uint16_t space,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+}
+
 static int xenmem_add_to_physmap_one(
     struct domain *d,
     uint16_t space,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:16:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrQQ-0006yE-Me; Tue, 04 Dec 2012 12:16:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfrQO-0006xc-Km
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 12:16:32 +0000
Received: from [85.158.139.83:13278] by server-7.bemta-5.messagelabs.com id
	AE/68-23096-F99EDB05; Tue, 04 Dec 2012 12:16:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1354623388!28580601!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27741 invoked from network); 4 Dec 2012 12:16:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:16:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216318196"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:16:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:16:09 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfr7A-0005GC-BS;
	Tue, 04 Dec 2012 11:56:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 11:56:35 +0000
Message-ID: <1354622199-27504-11-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 11/15] xen: arm: stub
	domain_relinquish_resources.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently unimplemented. Domain teardown in general needs looking at.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domain.c |    7 +++++++
 xen/arch/arm/dummy.S  |    1 -
 2 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index b7b2d5c..7bbad45 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -520,6 +520,13 @@ void arch_vcpu_reset(struct vcpu *v)
     vcpu_end_shutdown_deferral(v);
 }
 
+int domain_relinquish_resources(struct domain *d)
+{
+    /* XXX teardown pagetables, free pages etc */
+    ASSERT(0);
+    return 0;
+}
+
 void arch_dump_domain_info(struct domain *d)
 {
 }
diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index fd667e5..7189648 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -7,7 +7,6 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 x:	mov pc, lr
 
 /* Other */
-DUMMY(domain_relinquish_resources);
 DUMMY(dom_cow);
 DUMMY(send_timer_event);
 DUMMY(share_xen_page_with_privileged_guests);
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:27:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfraz-0007lx-B7; Tue, 04 Dec 2012 12:27:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfray-0007ls-A1
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 12:27:28 +0000
Received: from [85.158.137.99:5875] by server-14.bemta-3.messagelabs.com id
	86/A8-31424-F2CEDB05; Tue, 04 Dec 2012 12:27:27 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354624046!17191268!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27697 invoked from network); 4 Dec 2012 12:27:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:27:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16145973"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:26:55 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Tue, 4 Dec 2012
	12:26:56 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 4 Dec 2012 12:26:55 +0000
Thread-Topic: [Xen-devel] [PATCH 1 of 9 RFC] blktap3: Introduce blktap3 headers
Thread-Index: Ac3OQxCll43Vza/7QCaJLZk0sdh7fAD1ZypA
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3F4@LONPMAILBOX01.citrite.net>
References: <patchbomb.1353090340@makatos-desktop>
	<28d57229042b9fe04a6d.1353090341@makatos-desktop>
	<1354201567.6269.7.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354201567.6269.7.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 1 of 9 RFC] blktap3: Introduce blktap3
 headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 29 November 2012 15:06
> To: Thanos Makatos
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH 1 of 9 RFC] blktap3: Introduce blktap3
> headers
> 
> On Fri, 2012-11-16 at 18:25 +0000, Thanos Makatos wrote:
> > This patch introduces basic blktap3 header files. It also provides a
> > plain Makefile so that the build system doesn't complain.
> 
> You don't want to start listing the headers etc in this Makefile as you
> introduce them?

Ok.

> 
> Which build system complains? We don't yet recurse into here do we?

I meant the blktap3 build system; no, we don't yet recurse here.

In the top-level Makefile (tools/blktap3/Makefile) I had erroneously put "SUBDIRS-y += include", which required a dummy Makefile in tools/blktap3/includes. Removing this line fixes the problem and renders tools/blktap3/includes/Makefile unnecessary.
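
In Makefile terms, the fix described above is just removing the subdirectory registration; a minimal sketch (the "SUBDIRS-y" variable name is from the message, the surrounding entries are hypothetical):

# tools/blktap3/Makefile (excerpt, illustrative)
# SUBDIRS-y += drivers          # hypothetical real subdirectory
# The next line was the error: it recursed into include/, which then
# needed a dummy Makefile just to satisfy the build. It is removed:
# SUBDIRS-y += include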
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:43:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfrq6-0008KQ-IW; Tue, 04 Dec 2012 12:43:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tfrq4-0008KL-1m
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 12:43:04 +0000
Received: from [85.158.139.83:10211] by server-6.bemta-5.messagelabs.com id
	5A/D4-19321-7DFEDB05; Tue, 04 Dec 2012 12:43:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354624981!24450585!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26203 invoked from network); 4 Dec 2012 12:43:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:43:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46520408"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:43:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:43:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tfrq0-0005zL-7H;
	Tue, 04 Dec 2012 12:43:00 +0000
Date: Tue, 4 Dec 2012 12:42:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354551597.2693.21.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212041234120.8801@kaball.uk.xensource.com>
References: <1352823779.7491.94.camel@zakaz.uk.xensource.com>
	<1352823804-28482-4-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1211301501150.5310@kaball.uk.xensource.com>
	<1354551597.2693.21.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 04/12] arm: parse modules from DT during
 early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Dec 2012, Ian Campbell wrote:
> > > diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> > > new file mode 100644
> > > index 0000000..2609450
> > > --- /dev/null
> > > +++ b/docs/misc/arm/device-tree/booting.txt
> > > @@ -0,0 +1,27 @@
> > > +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> > > +node of the device tree.
> > > +
> > > +Each node has the form /chosen/module@<N> and contains the following
> > > +properties:
> > 
> > Wouldn't it be better to move all the modules under /chosen/modules or
> > /chosen/multiboot?
> 
> Why, what's the benefit?
> 
> I'm happy to do whatever is more normal in DT. Is that this:
> 	/foo/bar@1
> 	/foo/bar@2
> or
> 	/foo/bar/bar@1
> 	/foo/bar/bar@2
> 
> The second (which I think is what you are suggesting) seems pretty
> redundant.

To be precise I am suggesting:

/foo/bars/bar@0
/foo/bars/bar@1

I think it is just clearer, especially if more stuff ends up inside
/chosen. Also see how the cpus node is defined, for example.


> > > +- compatible
> > > +
> > > +	Must be "xen,multiboot-module"
> > > +
> > > +- start
> > > +
> > > +	Physical address of the start of this module
> > > +
> > > +- end
> > > +
> > > +	Physical address of the end of this module
> > 
> > start and end could be encoded as one reg
> 
> Done.
> 
> > 
> > 
> > > +- bootargs (optional)
> > > +
> > > +	Command line associated with this module
> > > +
> > > +The following modules are understood
> > > +
> > > +- 1 -- the domain 0 kernel
> > > +- 2 -- the domain 0 ramdisk
> > 
> > It would be nice if we could express this via the compatible property
> > instead.
> > So the linux kernel could be compatible "linux,kernel" and the initrd
> > "linux,initrd", in addition to (or instead of) "xen,multiboot-module".
> > Given that they go from the most specific to the less specific, it would
> > become:
> > 
> > compatible = "linux,kernel", "xen,multiboot-module";
> 
> This bakes the word "linux" into the interface and would require a new
> compatible tag and code changes in Xen for each new dom0 kernel type,
> which I think we want to avoid. (Maybe the code changes are unavoidable
> in practice, but in principle...)
> 
> "xen,dom0-kernel", "xen,multiboot-module"
> 
> Might be an option?
> 
> I'm going to repost what I have without changing this bit yet.

"xen,dom0-kernel" is OK.
However what about the initrd? Does Xen need to know that the second
module is the kernel's initrd or is it just another opaque module from
Xen's point of view?
If Xen needs to know that it is an initrd I think we need to introduce
another compatible string. Maybe the following:

"xen,dom0-initrd"
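
The layout being discussed can be sketched as a device-tree fragment. This is illustrative only: the "modules" container node, the unit addresses, and the reg values are assumptions drawn from the suggestions in this thread, not a settled binding.

/ {
	chosen {
		modules {
			module@0 {
				/* dom0 kernel; most-specific compatible first */
				compatible = "xen,dom0-kernel", "xen,multiboot-module";
				reg = <0x80008000 0x00400000>;	/* start, size (assumed) */
				bootargs = "console=hvc0 root=/dev/xvda";
			};
			module@1 {
				/* dom0 ramdisk, otherwise opaque to Xen */
				compatible = "xen,dom0-initrd", "xen,multiboot-module";
				reg = <0x81000000 0x00200000>;
			};
		};
	};
};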

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:46:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfrtK-0008T6-BX; Tue, 04 Dec 2012 12:46:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TfrtJ-0008T1-C2
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 12:46:25 +0000
Received: from [85.158.138.51:15199] by server-14.bemta-3.messagelabs.com id
	C5/61-31424-0A0FDB05; Tue, 04 Dec 2012 12:46:24 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354625119!27368133!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14018 invoked from network); 4 Dec 2012 12:45:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:45:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46520651"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:45:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:45:18 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TfrsE-00060y-43;
	Tue, 04 Dec 2012 12:45:18 +0000
Date: Tue, 4 Dec 2012 12:45:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50BCD24B.1010608@citrix.com>
Message-ID: <alpine.DEB.2.02.1212041244210.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Dec 2012, Roger Pau Monne wrote:
> On 27/11/12 16:17, Stefano Stabellini wrote:
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 9d20086..c40f597 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -144,7 +144,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
> >      if (!b_info->device_model_version) {
> >          if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
> >              b_info->device_model_version =
> > -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> > +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
> 
> Is there any way we could keep qemu-traditional as the default for NetBSD?
> Upstream QEMU does not work on NetBSD, and I'm afraid it would need some
> heavy patching.
> 
> Could a helper function be added to libxl_{netbsd/linux}.c to decide
> which device model to use?

Yes, we could have a libxl__default_device_model function in
libxl_{netbsd/linux}.c.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:48:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfrv2-00007k-SG; Tue, 04 Dec 2012 12:48:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tfrv1-00007V-Iw
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 12:48:11 +0000
Received: from [85.158.137.99:23072] by server-1.bemta-3.messagelabs.com id
	37/C9-12169-901FDB05; Tue, 04 Dec 2012 12:48:09 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354625287!17195312!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3289 invoked from network); 4 Dec 2012 12:48:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:48:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46520872"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:48:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:48:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tfruv-00063v-Kj;
	Tue, 04 Dec 2012 12:48:05 +0000
Date: Tue, 4 Dec 2012 12:48:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1671133405.20121203193718@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212041245220.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
	<1671133405.20121203193718@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Dec 2012, Sander Eikelenboom wrote:
> Monday, December 3, 2012, 5:24:43 PM, you wrote:
> 
> > On 27/11/12 16:17, Stefano Stabellini wrote:
> >> 
> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> 
> >> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> >> index 9d20086..c40f597 100644
> >> --- a/tools/libxl/libxl_create.c
> >> +++ b/tools/libxl/libxl_create.c
> >> @@ -144,7 +144,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
> >>      if (!b_info->device_model_version) {
> >>          if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
> >>              b_info->device_model_version =
> >> -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> >> +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
> 
> > Is there any way we could keep qemu-traditional as the default for NetBSD?
> > Upstream QEMU does not work on NetBSD, and I'm afraid it would need some
> > heavy patching.
> 
> > Could a helper function be added to libxl_{netbsd/linux}.c to decide
> > which device model to use?
> 
> And shouldn't the example configuration files and documentation be patched as well?

The example config files don't contain anything about the device model.
However I do need to update the xl man page.
Thanks for the reminder!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 12:59:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 12:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfs5R-0000VK-0g; Tue, 04 Dec 2012 12:58:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tfs5P-0000VF-9e
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 12:58:55 +0000
Received: from [85.158.137.99:29683] by server-9.bemta-3.messagelabs.com id
	BA/3E-02388-E83FDB05; Tue, 04 Dec 2012 12:58:54 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1354625932!12962230!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16311 invoked from network); 4 Dec 2012 12:58:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 12:58:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216321759"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 12:58:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 07:58:51 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tfs5L-0006EZ-BI;
	Tue, 04 Dec 2012 12:58:51 +0000
Date: Tue, 4 Dec 2012 12:58:49 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1212041256290.8801@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Changes in v2:
- update the xl man page;
- write a small helper function in libxl_{linux,netbsd}.c to set the
default device_model, so that NetBSD can keep using the old version.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index fe4fac9..69a38b9 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1132,15 +1132,15 @@ guest. Valid values are:
 
 =over 4
 
-=item B<qemu-xen-traditional>
+=item B<qemu-xen>
 
-Use the device-model based upon the historical Xen fork of Qemu.  This
-device-model is currently the default.
+use the device-model merged into the upstream QEMU project.
+This device-model is the default for Linux dom0.
 
-=item B<qemu-xen>
+=item B<qemu-xen-traditional>
 
-use the device-model merged into the upstream QEMU project.  This
-device-model will become the default in a future version of Xen.
+Use the device-model based upon the historical Xen fork of Qemu.
+This device-model is still the default for NetBSD dom0.
 
 =back
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 9d20086..6ec543a 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -143,8 +143,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 
     if (!b_info->device_model_version) {
         if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
-            b_info->device_model_version =
-                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
+            b_info->device_model_version = libxl__default_device_model(gc);
         else {
             const char *dm;
             int rc;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index cba3616..0ea11d1 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1557,6 +1557,10 @@ _hidden libxl__json_object *libxl__json_parse(libxl__gc *gc_opt, const char *s);
   /* Based on /local/domain/$domid/dm-version xenstore key
    * default is qemu xen traditional */
 _hidden int libxl__device_model_version_running(libxl__gc *gc, uint32_t domid);
+  /* Return the system-wide default device model:
+   * qemu-xen for Linux, qemu-xen-traditional for NetBSD.
+   */
+_hidden libxl_device_model_version libxl__default_device_model(libxl__gc *gc);
 
 /* Check how executes hotplug script currently */
 int libxl__hotplug_settings(libxl__gc *gc, xs_transaction_t t);
diff --git a/tools/libxl/libxl_linux.c b/tools/libxl/libxl_linux.c
index 1fed3cd..409e9f2 100644
--- a/tools/libxl/libxl_linux.c
+++ b/tools/libxl/libxl_linux.c
@@ -266,3 +266,8 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
 out:
     return rc;
 }
+
+libxl_device_model_version libxl__default_device_model(libxl__gc *gc)
+{
+    return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
+}
diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
index 9587833..aa04969 100644
--- a/tools/libxl/libxl_netbsd.c
+++ b/tools/libxl/libxl_netbsd.c
@@ -94,3 +94,8 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
 out:
     return rc;
 }
+
+libxl_device_model_version libxl__default_device_model(libxl__gc *gc)
+{
+    return LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
+}

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:01:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfs7u-0000gJ-JG; Tue, 04 Dec 2012 13:01:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tfs7t-0000fj-6l
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 13:01:29 +0000
Received: from [85.158.139.211:49969] by server-6.bemta-5.messagelabs.com id
	FC/18-19321-824FDB05; Tue, 04 Dec 2012 13:01:28 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-206.messagelabs.com!1354626086!19030129!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 975 invoked from network); 4 Dec 2012 13:01:26 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	4 Dec 2012 13:01:26 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:56805 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TfsBT-0001Xf-PO; Tue, 04 Dec 2012 14:05:11 +0100
Date: Tue, 4 Dec 2012 14:01:23 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <152972466.20121204140123@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212041245220.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
	<1671133405.20121203193718@eikelenboom.it>
	<alpine.DEB.2.02.1212041245220.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
	device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, December 4, 2012, 1:48:03 PM, you wrote:

> On Mon, 3 Dec 2012, Sander Eikelenboom wrote:
>> Monday, December 3, 2012, 5:24:43 PM, you wrote:
>> 
>> > On 27/11/12 16:17, Stefano Stabellini wrote:
>> >> 
>> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> >> 
>> >> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>> >> index 9d20086..c40f597 100644
>> >> --- a/tools/libxl/libxl_create.c
>> >> +++ b/tools/libxl/libxl_create.c
>> >> @@ -144,7 +144,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>> >>      if (!b_info->device_model_version) {
>> >>          if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
>> >>              b_info->device_model_version =
>> >> -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
>> >> +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
>> 
>> > Is there any way we may keep qemu-traditional as the default for NetBSD?
>> > Upstream Qemu is not working on NetBSD, and I'm afraid it needs some
>> > heavy patching.
>> 
>> > Could a helper function be added to libxl_{netbsd/linux}.c to decide
>> > which device model to use?
>> 
>> And shouldn't the example configuration files and documentation be patched as well?

> The example config files don't have anything on the device model.
> However I do need to update the xl man page.
> Thanks for the reminder!

What I perhaps missed: are both qemu-traditional and upstream QEMU going to be built and installed side-by-side, so an admin can mix guests with different device models during a transitional phase?

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:07:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfsD5-0000xm-Ad; Tue, 04 Dec 2012 13:06:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TfsD4-0000xe-9u
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 13:06:50 +0000
Received: from [193.109.254.147:40098] by server-11.bemta-14.messagelabs.com
	id E9/76-29027-965FDB05; Tue, 04 Dec 2012 13:06:49 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354626322!9013610!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9088 invoked from network); 4 Dec 2012 13:05:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 13:05:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216322480"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 13:05:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 08:05:21 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TfsBd-0006Lr-1v;
	Tue, 04 Dec 2012 13:05:21 +0000
Date: Tue, 4 Dec 2012 13:05:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <152972466.20121204140123@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212041304330.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
	<1671133405.20121203193718@eikelenboom.it>
	<alpine.DEB.2.02.1212041245220.8801@kaball.uk.xensource.com>
	<152972466.20121204140123@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
> Tuesday, December 4, 2012, 1:48:03 PM, you wrote:
> 
> > On Mon, 3 Dec 2012, Sander Eikelenboom wrote:
> >> Monday, December 3, 2012, 5:24:43 PM, you wrote:
> >> 
> >> > On 27/11/12 16:17, Stefano Stabellini wrote:
> >> >> 
> >> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> >> 
> >> >> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> >> >> index 9d20086..c40f597 100644
> >> >> --- a/tools/libxl/libxl_create.c
> >> >> +++ b/tools/libxl/libxl_create.c
> >> >> @@ -144,7 +144,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
> >> >>      if (!b_info->device_model_version) {
> >> >>          if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
> >> >>              b_info->device_model_version =
> >> >> -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> >> >> +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
> >> 
> >> > Is there any way we may keep qemu-traditional as the default for NetBSD?
> >> > Upstream Qemu is not working on NetBSD, and I'm afraid it needs some
> >> > heavy patching.
> >> 
> >> > Could a helper function be added to libxl_{netbsd/linux}.c to decide
> >> > which device model to use?
> >> 
> >> And shouldn't the example configuration files and documentation be patched as well?
> 
> > The example config files don't have anything on the device model.
> > However I do need to update the xl man page.
> > Thanks for the reminder!
> 
> What I perhaps missed: are both qemu-traditional and upstream QEMU going to be built and installed side-by-side, so an admin can mix guests with different device models during a transitional phase?

Yes: there is a config option to select what device_model you want at VM
creation time. I am only changing the default (if no option is
specified).
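The per-guest override lives in the xl guest config file; a minimal sketch of opting back into the old default (the memory value is made up for the example, device_model_version is the real xl.cfg key):

```
# minimal HVM guest config sketch
builder              = "hvm"
memory               = 1024
# opt back into the traditional device model despite the new default:
device_model_version = "qemu-xen-traditional"
```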

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:28:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfsXw-0001iH-Uw; Tue, 04 Dec 2012 13:28:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TfsXw-0001iC-4E
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 13:28:24 +0000
Received: from [85.158.139.83:50316] by server-3.bemta-5.messagelabs.com id
	E4/0E-18736-77AFDB05; Tue, 04 Dec 2012 13:28:23 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1354627654!20980175!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gMTcwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4798 invoked from network); 4 Dec 2012 13:27:35 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 13:27:35 -0000
Received: from compute5.internal (compute5.nyi.mail.srv.osa [10.202.2.45])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 1909020EA3;
	Tue,  4 Dec 2012 08:27:34 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute5.internal (MEProxy); Tue, 04 Dec 2012 08:27:34 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=jZ
	GezHK4GTWnZgIM5Be80zmL48M=; b=Bd0O08V7rtFRGES54oPY2iMX57wts1pGHb
	ct87mwvqrSQ61ken4cYVfto3UtbNwxIAmXndE0tm+VgqDTN/gQAFwb8EZNS5Mrzd
	3fGxvSp2jikFnMkJWQ2paSXMP3m7sBChbanV9GVxeMbG7/qP09KkTSdnm2N9dFo3
	VcBfWjAn0=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=jZGe
	zHK4GTWnZgIM5Be80zmL48M=; b=asKbYJfZo9CUTuXezG5qI9uM1GW0qjasNrCO
	myuNFhnt0Ugo28dJG/tOI95IIapWc0NthvsJdcKPMLSBfg8sRNPgGwtqVyIGUvPm
	lvz0LEPw4x/9gYuGVO0bWSJAGN4Gwc794vwE/EQjN63Q2nOqjeY74ELUMwpsy/nt
	u7IHqbM=
X-Sasl-enc: Av0twMmC8RMk4MwypliaSBkJ0OAk0PW9+xNLrWwsvkWp 1354627653
Received: from [10.137.1.17] (unknown [193.0.96.15])
	by mail.messagingengine.com (Postfix) with ESMTPA id 6E5148E05EE;
	Tue,  4 Dec 2012 08:27:33 -0500 (EST)
Message-ID: <50BDFA38.7030009@invisiblethingslab.com>
Date: Tue, 04 Dec 2012 14:27:20 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
In-Reply-To: <50BC653E02000078000AD28C@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.6
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4947800748649306789=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============4947800748649306789==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enig3A12A8A56BC9E0BFDC55E0CD"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig3A12A8A56BC9E0BFDC55E0CD
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.12.2012 08:39, Jan Beulich wrote:
>>>> On 30.11.12 at 17:18, Marek Marczykowski <marmarek@invisiblethingslab.com>
> wrote:
>> On 30.11.2012 17:12, Jan Beulich wrote:
>>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12 5:07 PM >>>
>>>> That was the rare case when resume worked at all... in most cases I've
>>>> got a reboot, before anything appears on the screen (even backlight is
>>>> off) - xen panic? dom0 kernel panic?
>>>
>>> Without serial console we won't get very far from here.
>>>
>>>> I don't have a serial console, but have a USB-to-serial port - is it
>>>> possible to use it as a xen console (in xen 4.1.3)?
>>>
>>> Not that I'm aware of. But 4.1.x isn't very interesting from a development
>>> perspective anyway. If you had the same problems still with 4.3-unstable,
>>> then that'd be of much more interest to analyze, and you could use the
>>> EHCI debug port (if one of your controllers has one) based serial console.
>>
>> Is it possible to use libxl from xen 4.1 with a newer hypervisor? My libxl
>> is somewhat patched and porting it to a newer version will require some
>> effort.
>
> I don't think so, but I also don't think you need a libxl at all for the
> purposes here (dealing with S3 is a Dom0-only thing).

I've tested the xen 4.2 hypervisor (without serial console) and it also
rebooted during resume. But it works with xen 4.1.2, so the problem was
introduced between 4.1.2 and 4.1.3. Will try to get console messages from
xen-unstable - perhaps this will give some hints.

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enig3A12A8A56BC9E0BFDC55E0CD
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBAgAGBQJQvfo9AAoJENuP0xzK19csBvgH/iqpEvzU+c4cdUIZTFK+s/9z
IIMtj+qTfPnn8w3nAp80/teB7ndO6JsyOEveMVoGOZS+4jWkKF9fKW+K/aYbYvR1
3pqT8LQv4DBoq1qMYtR7lfnph3tmqrLAKC68ZszXSuWWAOzHrIoyEmkzLUNZ3xWy
mq65QBIgXzPcETwDz0JqI2+hcjnEsr8J5ZchinXw3RvgZSj+7tLDAVZk9uKLyPiu
ta6rIb10vsldI/I8U5kdlanWgbMIJ7Ou2+HFW+fiLuuQ/8vz+g4PW4VI1uukXFna
phUgV+AmgYvNm2K0lXdoT4fSQifJCyGAt1v7TN+eW95/Dsd2e4KnfTXtohdufS8=
=GtVK
-----END PGP SIGNATURE-----

--------------enig3A12A8A56BC9E0BFDC55E0CD--


--===============4947800748649306789==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4947800748649306789==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 13:28:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfsXw-0001iH-Uw; Tue, 04 Dec 2012 13:28:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TfsXw-0001iC-4E
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 13:28:24 +0000
Received: from [85.158.139.83:50316] by server-3.bemta-5.messagelabs.com id
	E4/0E-18736-77AFDB05; Tue, 04 Dec 2012 13:28:23 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1354627654!20980175!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gMTcwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4798 invoked from network); 4 Dec 2012 13:27:35 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 13:27:35 -0000
Received: from compute5.internal (compute5.nyi.mail.srv.osa [10.202.2.45])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 1909020EA3;
	Tue,  4 Dec 2012 08:27:34 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute5.internal (MEProxy); Tue, 04 Dec 2012 08:27:34 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=jZ
	GezHK4GTWnZgIM5Be80zmL48M=; b=Bd0O08V7rtFRGES54oPY2iMX57wts1pGHb
	ct87mwvqrSQ61ken4cYVfto3UtbNwxIAmXndE0tm+VgqDTN/gQAFwb8EZNS5Mrzd
	3fGxvSp2jikFnMkJWQ2paSXMP3m7sBChbanV9GVxeMbG7/qP09KkTSdnm2N9dFo3
	VcBfWjAn0=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=jZGe
	zHK4GTWnZgIM5Be80zmL48M=; b=asKbYJfZo9CUTuXezG5qI9uM1GW0qjasNrCO
	myuNFhnt0Ugo28dJG/tOI95IIapWc0NthvsJdcKPMLSBfg8sRNPgGwtqVyIGUvPm
	lvz0LEPw4x/9gYuGVO0bWSJAGN4Gwc794vwE/EQjN63Q2nOqjeY74ELUMwpsy/nt
	u7IHqbM=
X-Sasl-enc: Av0twMmC8RMk4MwypliaSBkJ0OAk0PW9+xNLrWwsvkWp 1354627653
Received: from [10.137.1.17] (unknown [193.0.96.15])
	by mail.messagingengine.com (Postfix) with ESMTPA id 6E5148E05EE;
	Tue,  4 Dec 2012 08:27:33 -0500 (EST)
Message-ID: <50BDFA38.7030009@invisiblethingslab.com>
Date: Tue, 04 Dec 2012 14:27:20 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
In-Reply-To: <50BC653E02000078000AD28C@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.6
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4947800748649306789=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============4947800748649306789==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enig3A12A8A56BC9E0BFDC55E0CD"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig3A12A8A56BC9E0BFDC55E0CD
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.12.2012 08:39, Jan Beulich wrote:
>>>> On 30.11.12 at 17:18, Marek Marczykowski <marmarek@invisiblethingslab.com>
> wrote:
>> On 30.11.2012 17:12, Jan Beulich wrote:
>>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12 5:07 PM >>>
>>>> That was the rare case when resume worked at all... in most cases I've
>>>> got a reboot before anything appears on the screen (even the backlight
>>>> is off) - a xen panic? A dom0 kernel panic?
>>>
>>> Without a serial console we won't get very far from here.
>>>
>>>> I don't have a serial console, but I do have a USB-to-serial port - is
>>>> it possible to use it as the xen console (in xen 4.1.3)?
>>>
>>> Not that I'm aware of. But 4.1.x isn't very interesting from a
>>> development perspective anyway. If you still had the same problems with
>>> 4.3-unstable, that'd be of much more interest to analyze, and you could
>>> use an EHCI debug port based serial console (if one of your controllers
>>> has one).
>>
>> Is it possible to use libxl from xen 4.1 with a newer hypervisor? My
>> libxl is somewhat patched, and porting it to a newer version will
>> require some effort.
> 
> I don't think so, but I also don't think you need libxl at all for the
> purposes here (dealing with S3 is a Dom0-only thing).

I've tested the xen 4.2 hypervisor (without a serial console) and it also
rebooted during resume. But it works with xen 4.1.2, so the problem was
introduced between 4.1.2 and 4.1.3. I will try to get console messages from
xen-unstable - perhaps that will give some hints.

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enig3A12A8A56BC9E0BFDC55E0CD
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBAgAGBQJQvfo9AAoJENuP0xzK19csBvgH/iqpEvzU+c4cdUIZTFK+s/9z
IIMtj+qTfPnn8w3nAp80/teB7ndO6JsyOEveMVoGOZS+4jWkKF9fKW+K/aYbYvR1
3pqT8LQv4DBoq1qMYtR7lfnph3tmqrLAKC68ZszXSuWWAOzHrIoyEmkzLUNZ3xWy
mq65QBIgXzPcETwDz0JqI2+hcjnEsr8J5ZchinXw3RvgZSj+7tLDAVZk9uKLyPiu
ta6rIb10vsldI/I8U5kdlanWgbMIJ7Ou2+HFW+fiLuuQ/8vz+g4PW4VI1uukXFna
phUgV+AmgYvNm2K0lXdoT4fSQifJCyGAt1v7TN+eW95/Dsd2e4KnfTXtohdufS8=
=GtVK
-----END PGP SIGNATURE-----

--------------enig3A12A8A56BC9E0BFDC55E0CD--


--===============4947800748649306789==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4947800748649306789==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 13:33:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfsc8-0001rn-L4; Tue, 04 Dec 2012 13:32:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tfsc7-0001ri-2O
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 13:32:43 +0000
Received: from [85.158.143.99:36480] by server-1.bemta-4.messagelabs.com id
	85/47-27934-A7BFDB05; Tue, 04 Dec 2012 13:32:42 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354627958!21612618!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTYy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2772 invoked from network); 4 Dec 2012 13:32:39 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-10.tower-216.messagelabs.com with SMTP;
	4 Dec 2012 13:32:39 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 04 Dec 2012 05:31:52 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,215,1355126400"; d="scan'208";a="228751878"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 04 Dec 2012 05:32:06 -0800
Received: from fmsmsx119.amr.corp.intel.com (10.18.22.143) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 05:32:01 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx119.amr.corp.intel.com (10.18.22.143) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 05:32:01 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Tue, 4 Dec 2012 21:31:59 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Wei Huang <wei.huang2@amd.com>, Wei Wang
	<weiwang.dd@gmail.com>
Thread-Topic: Ping: [PATCH] IOMMU/ATS: fix maximum queue depth calculation
Thread-Index: AQHN0geGq+nasw7LNUqw0YLRRU9EbpgIoiCQ
Date: Tue, 4 Dec 2012 13:31:59 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A483456440339F2CA@SHSMSX101.ccr.corp.intel.com>
References: <50BDD9E702000078000ADA57@nat28.tlf.novell.com>
In-Reply-To: <50BDD9E702000078000ADA57@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Boris Ostrovsky <boris.ostrovsky@amd.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Ping: [PATCH] IOMMU/ATS: fix maximum queue depth
	calculation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry for the late reply. The original code was presumably the result of mis-reading the field's width. Thanks!
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 04, 2012 6:09 PM
> To: Wei Huang; Wei Wang; Zhang, Xiantao
> Cc: Boris Ostrovsky; xen-devel
> Subject: Ping: [PATCH] IOMMU/ATS: fix maximum queue depth calculation
> 
> Anyone? (I'd really want an ack from both Intel - who originally
> contributed the ATS code - and AMD - due to the adjustment of their
> later rearrangements.)
> 
> Jan
> 
> >>> On 28.11.12 at 12:32, Jan Beulich wrote:
> > The capabilities register field is a 5-bit value, and the 5 bits all
> > being zero actually means 32 entries.
> >
> > Under the assumption that amd_iommu_flush_iotlb() really just tried to
> > correct for the miscalculation above when adding 32 to the value, that
> > adjustment is also being removed.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >
> > --- a/xen/drivers/passthrough/amd/iommu_cmd.c
> > +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
> > @@ -321,7 +321,7 @@ void amd_iommu_flush_iotlb(struct pci_de
> >
> >      req_id = get_dma_requestor_id(iommu->seg, bdf);
> >      queueid = req_id;
> > -    maxpend = (ats_pdev->ats_queue_depth + 32) & 0xff;
> > +    maxpend = ats_pdev->ats_queue_depth & 0xff;
> >
> >      /* send INVALIDATE_IOTLB_PAGES command */
> >      spin_lock_irqsave(&iommu->lock, flags);
> > --- a/xen/drivers/passthrough/ats.h
> > +++ b/xen/drivers/passthrough/ats.h
> > @@ -28,7 +28,7 @@ struct pci_ats_dev {
> >
> >  #define ATS_REG_CAP    4
> >  #define ATS_REG_CTL    6
> > -#define ATS_QUEUE_DEPTH_MASK     0xF
> > +#define ATS_QUEUE_DEPTH_MASK     0x1f
> >  #define ATS_ENABLE               (1<<15)
> >
> >  extern struct list_head ats_devices;
> > --- a/xen/drivers/passthrough/x86/ats.c
> > +++ b/xen/drivers/passthrough/x86/ats.c
> > @@ -94,6 +94,8 @@ int enable_ats_device(int seg, int bus,
> >          value = pci_conf_read16(seg, bus, PCI_SLOT(devfn),
> >                                  PCI_FUNC(devfn), pos + ATS_REG_CAP);
> >          pdev->ats_queue_depth = value & ATS_QUEUE_DEPTH_MASK;
> > +        if ( !pdev->ats_queue_depth )
> > +            pdev->ats_queue_depth = ATS_QUEUE_DEPTH_MASK + 1;
> >          list_add(&pdev->list, &ats_devices);
> >      }
> >
> >
> >
> >
> 
> 
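[Editor's note] The fix above hinges on the PCIe ATS encoding in which the 5-bit Invalidate Queue Depth field being zero means 32 entries. A minimal standalone sketch of the corrected decode (ats_queue_depth here is a hypothetical helper illustrating the mask/fallback logic, not Xen's actual code path):

```c
#include <assert.h>
#include <stdint.h>

/* Invalidate Queue Depth occupies bits [4:0] of the ATS capability
 * register; per the PCIe ATS encoding, a value of 0 means the device
 * can queue 32 invalidation requests. */
#define ATS_QUEUE_DEPTH_MASK 0x1f

static unsigned int ats_queue_depth(uint16_t cap)
{
    unsigned int depth = cap & ATS_QUEUE_DEPTH_MASK;

    /* 0 is special-cased: it encodes the maximum depth of 32. */
    return depth ? depth : ATS_QUEUE_DEPTH_MASK + 1;
}
```

With the old 4-bit mask (0xF), bit 4 of the field was silently dropped and the +32 adjustment in amd_iommu_flush_iotlb() then over-compensated; masking with 0x1f and special-casing zero makes both call sites agree.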


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:42:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfslk-0002OU-0I; Tue, 04 Dec 2012 13:42:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfslj-0002OP-C1
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 13:42:39 +0000
Received: from [85.158.139.83:3711] by server-15.bemta-5.messagelabs.com id
	E9/59-26920-ECDFDB05; Tue, 04 Dec 2012 13:42:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354628558!24463078!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29453 invoked from network); 4 Dec 2012 13:42:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 13:42:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16147937"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 13:42:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	13:42:37 +0000
Message-ID: <1354628556.2693.77.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Tue, 4 Dec 2012 13:42:36 +0000
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3F4@LONPMAILBOX01.citrite.net>
References: <patchbomb.1353090340@makatos-desktop>
	<28d57229042b9fe04a6d.1353090341@makatos-desktop>
	<1354201567.6269.7.camel@zakaz.uk.xensource.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF3F4@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 1 of 9 RFC] blktap3: Introduce blktap3
 headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 12:26 +0000, Thanos Makatos wrote:
> > -----Original Message-----
> > From: Ian Campbell
> > Sent: 29 November 2012 15:06
> > To: Thanos Makatos
> > Cc: xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] [PATCH 1 of 9 RFC] blktap3: Introduce blktap3
> > headers
> > 
> > On Fri, 2012-11-16 at 18:25 +0000, Thanos Makatos wrote:
> > > This patch introduces basic blktap3 header files. It also provides a
> > > plain Makefile so that the build system doesn't complain.
> > 
> > You don't want to start listing the headers etc in this Makefile as you
> > introduce them?
> 
> Ok.
> 
> > 
> > Which build system complains? We don't yet recurse into here do we?
> 
> I meant the blktap3 build system - no, we don't recurse into it yet.
> 
> In the top-level Makefile (tools/blktap3/Makefile) I had erroneously
> put "SUBDIRS-y += include" and that required a dummy Makefile in
> tools/blktap3/includes. Removing this fixes the problem and renders
> the tools/blktap3/includes/Makefile useless.

Unless you need to install some headers so that third parties can build
against them?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:43:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfsmc-0002RZ-Eh; Tue, 04 Dec 2012 13:43:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tfsmb-0002RR-HK
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 13:43:33 +0000
Received: from [85.158.143.35:10039] by server-3.bemta-4.messagelabs.com id
	17/AD-06841-40EFDB05; Tue, 04 Dec 2012 13:43:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1354628612!4357002!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14807 invoked from network); 4 Dec 2012 13:43:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 13:43:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 13:43:36 +0000
Message-Id: <50BE0C1102000078000ADC8A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 13:43:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1354622199-27504-5-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1354622199-27504-5-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Keir \(Xen.org\)" <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 05/15] xen: remove nr_irqs_gsi from generic
	code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 12:56, Ian Campbell <ian.campbell@citrix.com> wrote:
> The concept is X86 specific.
> 
> AFAICT the generic concept here is the number of physical IRQs which
> the current hardware has, so call this nr_hw_irqs.

Hmm, I don't particularly like this name, as (at least to me) it
gives the appearance of including MSI IRQs as well, which isn't correct.
How about nr_fixed_irqs or nr_static_irqs or some such?

Jan

> Also using "defined NR_IRQS" as a stand-in for x86 might have made
> sense at one point, but it's just cleaner to push the necessary
> definitions into asm/irq.h.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Keir (Xen.org) <keir@xen.org>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  xen/arch/arm/dummy.S      |    1 -
>  xen/common/domain.c       |    4 ++--
>  xen/include/asm-arm/irq.h |    3 +++
>  xen/include/asm-x86/irq.h |    4 ++++
>  xen/include/xen/irq.h     |    8 --------
>  xen/xsm/flask/hooks.c     |    4 ++--
>  6 files changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
> index 66eb314..5d9bcff 100644
> --- a/xen/arch/arm/dummy.S
> +++ b/xen/arch/arm/dummy.S
> @@ -8,7 +8,6 @@ x:	mov pc, lr
>  	
>  /* PIRQ support */
>  DUMMY(alloc_pirq_struct);
> -DUMMY(nr_irqs_gsi);
>  DUMMY(pirq_guest_bind);
>  DUMMY(pirq_guest_unbind);
>  DUMMY(pirq_set_affinity);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 12c8e24..d80461d 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -259,9 +259,9 @@ struct domain *domain_create(
>          atomic_inc(&d->pause_count);
>  
>          if ( domid )
> -            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
> +            d->nr_pirqs = nr_hw_irqs + extra_domU_irqs;
>          else
> -            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
> +            d->nr_pirqs = nr_hw_irqs + extra_dom0_irqs;
>          if ( d->nr_pirqs > nr_irqs )
>              d->nr_pirqs = nr_irqs;
>  
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index abde839..4facaf0 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -21,6 +21,9 @@ struct irq_cfg {
>  #define NR_IRQS		1024
>  #define nr_irqs NR_IRQS
>  
> +#define nr_irqs NR_IRQS
> +#define nr_hw_irqs NR_IRQS
> +
>  struct irq_desc;
>  
>  struct irq_desc *__irq_to_desc(int irq);
> diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
> index 5eefb94..6ea5f53 100644
> --- a/xen/include/asm-x86/irq.h
> +++ b/xen/include/asm-x86/irq.h
> @@ -11,6 +11,10 @@
>  #include <irq_vectors.h>
>  #include <asm/percpu.h>
>  
> +extern unsigned int nr_irqs_gsi;
> +extern unsigned int nr_irqs;
> +#define nr_hw_irqs nr_irqs_gsi
> +
>  #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
>  			     (1 << (irq)) & io_apic_irqs : \
>  			     (irq) < nr_irqs_gsi)
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index 5973cce..7386358 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
>  
>  #include <asm/irq.h>
>  
> -#ifdef NR_IRQS
> -# define nr_irqs NR_IRQS
> -# define nr_irqs_gsi NR_IRQS
> -#else
> -extern unsigned int nr_irqs_gsi;
> -extern unsigned int nr_irqs;
> -#endif
> -
>  struct msi_desc;
>  /*
>   * This is the "IRQ descriptor", which contains various information
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 0ca10d0..595c31e 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct 
> avc_audit_data *ad)
>      struct irq_desc *desc = irq_to_desc(irq);
>      if ( irq >= nr_irqs || irq < 0 )
>          return -EINVAL;
> -    if ( irq < nr_irqs_gsi ) {
> +    if ( irq < nr_hw_irqs ) {
>          if (ad) {
>              AVC_AUDIT_DATA_INIT(ad, IRQ);
>              ad->irq = irq;
> @@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int 
> irq, void *data)
>      if ( rc )
>          return rc;
>  
> -    if ( irq >= nr_irqs_gsi && msi ) {
> +    if ( irq >= nr_hw_irqs && msi ) {
>          u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
>          AVC_AUDIT_DATA_INIT(&ad, DEV);
>          ad.device = machine_bdf;
> -- 
> 1.7.9.1
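
[Editor's note] The domain_create() hunk above computes a per-domain pirq count as the fixed (GSI) IRQ count plus a per-domain-type slack, clamped to the global nr_irqs. A minimal sketch of that clamp; NR_HW_IRQS, NR_IRQS_TOTAL and calc_nr_pirqs are hypothetical stand-ins for the hypervisor's variables, with arbitrary example values:

```c
#include <assert.h>

/* Hypothetical stand-ins for the hypervisor's nr_hw_irqs and nr_irqs. */
#define NR_HW_IRQS     48
#define NR_IRQS_TOTAL  128

/* Mirrors the domain_create() hunk: fixed hardware IRQs plus per-domain
 * slack, clamped so no domain can claim more pirqs than exist globally. */
static unsigned int calc_nr_pirqs(unsigned int extra_irqs)
{
    unsigned int nr_pirqs = NR_HW_IRQS + extra_irqs;

    if (nr_pirqs > NR_IRQS_TOTAL)
        nr_pirqs = NR_IRQS_TOTAL;
    return nr_pirqs;
}
```

The rename leaves this arithmetic unchanged; only the symbol feeding it moves from generic code (nr_irqs_gsi) into the per-architecture asm/irq.h definitions.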



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 12:56, Ian Campbell <ian.campbell@citrix.com> wrote:
> The concept is X86 specific.
> 
> AFAICT the generic concept here is the number of physical IRQs which
> the current hardware has, so call this nr_hw_irqs.

Hmm, I don't particularly like this name, as (at least to me) it
gives the appearance of including MSI ones, which isn't correct.
How about nr_fixed_irqs or nr_static_irqs or some such?

Jan

> Also using "defined NR_IRQS" as a standin for x86 might have made
> sense at one point, but it's just cleaner to push the necessary
> definitions into asm/irq.h.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Keir (Xen.org) <keir@xen.org>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  xen/arch/arm/dummy.S      |    1 -
>  xen/common/domain.c       |    4 ++--
>  xen/include/asm-arm/irq.h |    3 +++
>  xen/include/asm-x86/irq.h |    4 ++++
>  xen/include/xen/irq.h     |    8 --------
>  xen/xsm/flask/hooks.c     |    4 ++--
>  6 files changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
> index 66eb314..5d9bcff 100644
> --- a/xen/arch/arm/dummy.S
> +++ b/xen/arch/arm/dummy.S
> @@ -8,7 +8,6 @@ x:	mov pc, lr
>  	
>  /* PIRQ support */
>  DUMMY(alloc_pirq_struct);
> -DUMMY(nr_irqs_gsi);
>  DUMMY(pirq_guest_bind);
>  DUMMY(pirq_guest_unbind);
>  DUMMY(pirq_set_affinity);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 12c8e24..d80461d 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -259,9 +259,9 @@ struct domain *domain_create(
>          atomic_inc(&d->pause_count);
>  
>          if ( domid )
> -            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
> +            d->nr_pirqs = nr_hw_irqs + extra_domU_irqs;
>          else
> -            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
> +            d->nr_pirqs = nr_hw_irqs + extra_dom0_irqs;
>          if ( d->nr_pirqs > nr_irqs )
>              d->nr_pirqs = nr_irqs;
>  
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index abde839..4facaf0 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -21,6 +21,9 @@ struct irq_cfg {
>  #define NR_IRQS		1024
>  #define nr_irqs NR_IRQS
>  
> +#define nr_irqs NR_IRQS
> +#define nr_hw_irqs NR_IRQS
> +
>  struct irq_desc;
>  
>  struct irq_desc *__irq_to_desc(int irq);
> diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
> index 5eefb94..6ea5f53 100644
> --- a/xen/include/asm-x86/irq.h
> +++ b/xen/include/asm-x86/irq.h
> @@ -11,6 +11,10 @@
>  #include <irq_vectors.h>
>  #include <asm/percpu.h>
>  
> +extern unsigned int nr_irqs_gsi;
> +extern unsigned int nr_irqs;
> +#define nr_hw_irqs nr_irqs_gsi
> +
>  #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
>  			     (1 << (irq)) & io_apic_irqs : \
>  			     (irq) < nr_irqs_gsi)
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index 5973cce..7386358 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
>  
>  #include <asm/irq.h>
>  
> -#ifdef NR_IRQS
> -# define nr_irqs NR_IRQS
> -# define nr_irqs_gsi NR_IRQS
> -#else
> -extern unsigned int nr_irqs_gsi;
> -extern unsigned int nr_irqs;
> -#endif
> -
>  struct msi_desc;
>  /*
>   * This is the "IRQ descriptor", which contains various information
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 0ca10d0..595c31e 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct 
> avc_audit_data *ad)
>      struct irq_desc *desc = irq_to_desc(irq);
>      if ( irq >= nr_irqs || irq < 0 )
>          return -EINVAL;
> -    if ( irq < nr_irqs_gsi ) {
> +    if ( irq < nr_hw_irqs ) {
>          if (ad) {
>              AVC_AUDIT_DATA_INIT(ad, IRQ);
>              ad->irq = irq;
> @@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int 
> irq, void *data)
>      if ( rc )
>          return rc;
>  
> -    if ( irq >= nr_irqs_gsi && msi ) {
> +    if ( irq >= nr_hw_irqs && msi ) {
>          u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
>          AVC_AUDIT_DATA_INIT(&ad, DEV);
>          ad.device = machine_bdf;
> -- 
> 1.7.9.1



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:45:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfsoD-0002ZS-25; Tue, 04 Dec 2012 13:45:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfsoB-0002ZI-6Q
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 13:45:11 +0000
Received: from [193.109.254.147:53136] by server-13.bemta-14.messagelabs.com
	id 00/E2-11239-66EFDB05; Tue, 04 Dec 2012 13:45:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354628662!1635787!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13534 invoked from network); 4 Dec 2012 13:44:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 13:44:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16148087"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 13:44:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	13:44:22 +0000
Message-ID: <1354628660.2693.79.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 4 Dec 2012 13:44:20 +0000
In-Reply-To: <alpine.DEB.2.02.1212041234120.8801@kaball.uk.xensource.com>
References: <1352823779.7491.94.camel@zakaz.uk.xensource.com>
	<1352823804-28482-4-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1211301501150.5310@kaball.uk.xensource.com>
	<1354551597.2693.21.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212041234120.8801@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 04/12] arm: parse modules from DT during
 early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 12:42 +0000, Stefano Stabellini wrote:
> On Mon, 3 Dec 2012, Ian Campbell wrote:
> > > > diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> > > > new file mode 100644
> > > > index 0000000..2609450
> > > > --- /dev/null
> > > > +++ b/docs/misc/arm/device-tree/booting.txt
> > > > @@ -0,0 +1,27 @@
> > > > +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> > > > +node of the device tree.
> > > > +
> > > > +Each node has the form /chosen/module@<N> and contains the following
> > > > +properties:
> > > 
> > > Wouldn't it be better to move all the modules under /chosen/modules or
> > > /chosen/multiboot?
> > 
> > Why, what's the benefit?
> > 
> > I'm happy to do whatever is more normal in DT. Is that this:
> > 	/foo/bar@1
> > 	/foo/bar@2
> > or
> > 	/foo/bar/bar@1
> > 	/foo/bar/bar@2
> > 
> > The second (which I think is what you are suggesting) seems pretty
> > redundant.
> 
> To be precise I am suggesting:
> 
> /foo/bars/bar@0
> /foo/bars/bar@1
> 
> I think it is just clearer, especially if more stuff ends up inside
> /chosen. Also see how the cpus node is defined, for example.

OK.

> > > 
> > > 
> > > > +- bootargs (optional)
> > > > +
> > > > +	Command line associated with this module
> > > > +
> > > > +The following modules are understood
> > > > +
> > > > +- 1 -- the domain 0 kernel
> > > > +- 2 -- the domain 0 ramdisk
> > > 
> > > It would be nice if we could express this via the compatible property
> > > instead.
> > > So the linux kernel could be compatible "linux,kernel" and the initrd
> > > "linux,initrd", in addition to (or instead of) "xen,multiboot-module".
> > > Given that they go from the most specific to the less specific, it would
> > > become:
> > > 
> > > compatible = "linux,kernel", "xen,multiboot-module";
> > 
> > This bakes the word "linux" into the interface and would require a new
> > compatible tag and code changes in Xen for each new dom0 kernel type,
> > which I think we want to avoid. (maybe the code changes are unavoidable
> > in practice, but in principle...)
> > 
> > "xen,dom0-kernel", "xen,multiboot-module"
> > 
> > Might be an option?
> > 
> > I'm going to repost what I have without changing this bit yet.
> 
> "xen,dom0-kernel" is OK.
> However what about the initrd? Does Xen need to know that the second
> module is the kernel's initrd or is it just another opaque module from
> Xen's point of view?
> If Xen needs to know that it is an initrd I think we need to introduce
> another compatible string. Maybe the following:
> 
> "xen,dom0-initrd"

Hm, perhaps it does need to know it is a Linux initrd, or indeed that
the kernel is Linux in order to implement the necessary boot protocol.
IOW in the absence of a more generic boot protocol for ARM maybe we
can't avoid coding OS specifics into the builder :-(
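[Editorial aside: for concreteness, a /chosen layout along the lines Stefano suggests might look as follows. This is a hypothetical sketch only; the node grouping, compatible strings, and the reg/bootargs properties are illustrative, not a spec.]

```dts
/* Hypothetical sketch of grouped module nodes under /chosen,
 * following the cpus-node style grouping suggested above. */
/chosen {
        modules {
                module@0 {
                        compatible = "xen,dom0-kernel", "xen,multiboot-module";
                        reg = <0x80008000 0x3000000>;   /* load address, size: illustrative */
                        bootargs = "console=hvc0 root=/dev/xvda";
                };
                module@1 {
                        compatible = "xen,dom0-initrd", "xen,multiboot-module";
                        reg = <0x84000000 0x400000>;    /* illustrative */
                };
        };
};
```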


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:49:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:49:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfss0-0002o4-Qp; Tue, 04 Dec 2012 13:49:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfsrz-0002nx-1r
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 13:49:07 +0000
Received: from [85.158.139.211:21503] by server-14.bemta-5.messagelabs.com id
	89/FF-21768-25FFDB05; Tue, 04 Dec 2012 13:49:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1354628945!14787582!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9172 invoked from network); 4 Dec 2012 13:49:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 13:49:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16148219"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 13:49:05 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	13:49:05 +0000
Message-ID: <1354628943.2693.80.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 4 Dec 2012 13:49:03 +0000
In-Reply-To: <50BE0C1102000078000ADC8A@nat28.tlf.novell.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1354622199-27504-5-git-send-email-ian.campbell@citrix.com>
	<50BE0C1102000078000ADC8A@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 05/15] xen: remove nr_irqs_gsi from generic
	code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 13:43 +0000, Jan Beulich wrote:
> >>> On 04.12.12 at 12:56, Ian Campbell <ian.campbell@citrix.com> wrote:
> > The concept is X86 specific.
> > 
> > AFAICT the generic concept here is the number of physical IRQs which
> > the current hardware has, so call this nr_hw_irqs.
> 
> Hmm, I don't particularly like this name, as (at least to me) it
> gives the appearance to include MSI ones, which isn't correct.

Ah, I wondered about this then forgot to mention it.

> How about nr_fixed_irqs or nr_static_irqs or some such?

Static would be a nice counter to "dynamic", which is how I tend to think
of MSIs.

> 
> Jan
> 
> > Also using "defined NR_IRQS" as a standin for x86 might have made
> > sense at one point but its just cleaner to push the necessary
> > definitions into asm/irq.h.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Keir (Xen.org) <keir@xen.org>
> > Cc: Jan Beulich <JBeulich@suse.com>
> > Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> > ---
> >  xen/arch/arm/dummy.S      |    1 -
> >  xen/common/domain.c       |    4 ++--
> >  xen/include/asm-arm/irq.h |    3 +++
> >  xen/include/asm-x86/irq.h |    4 ++++
> >  xen/include/xen/irq.h     |    8 --------
> >  xen/xsm/flask/hooks.c     |    4 ++--
> >  6 files changed, 11 insertions(+), 13 deletions(-)
> > 
> > diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
> > index 66eb314..5d9bcff 100644
> > --- a/xen/arch/arm/dummy.S
> > +++ b/xen/arch/arm/dummy.S
> > @@ -8,7 +8,6 @@ x:	mov pc, lr
> >  	
> >  /* PIRQ support */
> >  DUMMY(alloc_pirq_struct);
> > -DUMMY(nr_irqs_gsi);
> >  DUMMY(pirq_guest_bind);
> >  DUMMY(pirq_guest_unbind);
> >  DUMMY(pirq_set_affinity);
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index 12c8e24..d80461d 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -259,9 +259,9 @@ struct domain *domain_create(
> >          atomic_inc(&d->pause_count);
> >  
> >          if ( domid )
> > -            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
> > +            d->nr_pirqs = nr_hw_irqs + extra_domU_irqs;
> >          else
> > -            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
> > +            d->nr_pirqs = nr_hw_irqs + extra_dom0_irqs;
> >          if ( d->nr_pirqs > nr_irqs )
> >              d->nr_pirqs = nr_irqs;
> >  
> > diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> > index abde839..4facaf0 100644
> > --- a/xen/include/asm-arm/irq.h
> > +++ b/xen/include/asm-arm/irq.h
> > @@ -21,6 +21,9 @@ struct irq_cfg {
> >  #define NR_IRQS		1024
> >  #define nr_irqs NR_IRQS
> >  
> > +#define nr_irqs NR_IRQS
> > +#define nr_hw_irqs NR_IRQS
> > +
> >  struct irq_desc;
> >  
> >  struct irq_desc *__irq_to_desc(int irq);
> > diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
> > index 5eefb94..6ea5f53 100644
> > --- a/xen/include/asm-x86/irq.h
> > +++ b/xen/include/asm-x86/irq.h
> > @@ -11,6 +11,10 @@
> >  #include <irq_vectors.h>
> >  #include <asm/percpu.h>
> >  
> > +extern unsigned int nr_irqs_gsi;
> > +extern unsigned int nr_irqs;
> > +#define nr_hw_irqs nr_irqs_gsi
> > +
> >  #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
> >  			     (1 << (irq)) & io_apic_irqs : \
> >  			     (irq) < nr_irqs_gsi)
> > diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> > index 5973cce..7386358 100644
> > --- a/xen/include/xen/irq.h
> > +++ b/xen/include/xen/irq.h
> > @@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
> >  
> >  #include <asm/irq.h>
> >  
> > -#ifdef NR_IRQS
> > -# define nr_irqs NR_IRQS
> > -# define nr_irqs_gsi NR_IRQS
> > -#else
> > -extern unsigned int nr_irqs_gsi;
> > -extern unsigned int nr_irqs;
> > -#endif
> > -
> >  struct msi_desc;
> >  /*
> >   * This is the "IRQ descriptor", which contains various information
> > diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> > index 0ca10d0..595c31e 100644
> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct 
> > avc_audit_data *ad)
> >      struct irq_desc *desc = irq_to_desc(irq);
> >      if ( irq >= nr_irqs || irq < 0 )
> >          return -EINVAL;
> > -    if ( irq < nr_irqs_gsi ) {
> > +    if ( irq < nr_hw_irqs ) {
> >          if (ad) {
> >              AVC_AUDIT_DATA_INIT(ad, IRQ);
> >              ad->irq = irq;
> > @@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int 
> > irq, void *data)
> >      if ( rc )
> >          return rc;
> >  
> > -    if ( irq >= nr_irqs_gsi && msi ) {
> > +    if ( irq >= nr_hw_irqs && msi ) {
> >          u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
> >          AVC_AUDIT_DATA_INIT(&ad, DEV);
> >          ad.device = machine_bdf;
> > -- 
> > 1.7.9.1
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > -#ifdef NR_IRQS
> > -# define nr_irqs NR_IRQS
> > -# define nr_irqs_gsi NR_IRQS
> > -#else
> > -extern unsigned int nr_irqs_gsi;
> > -extern unsigned int nr_irqs;
> > -#endif
> > -
> >  struct msi_desc;
> >  /*
> >   * This is the "IRQ descriptor", which contains various information
> > diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> > index 0ca10d0..595c31e 100644
> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct 
> > avc_audit_data *ad)
> >      struct irq_desc *desc = irq_to_desc(irq);
> >      if ( irq >= nr_irqs || irq < 0 )
> >          return -EINVAL;
> > -    if ( irq < nr_irqs_gsi ) {
> > +    if ( irq < nr_hw_irqs ) {
> >          if (ad) {
> >              AVC_AUDIT_DATA_INIT(ad, IRQ);
> >              ad->irq = irq;
> > @@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int 
> > irq, void *data)
> >      if ( rc )
> >          return rc;
> >  
> > -    if ( irq >= nr_irqs_gsi && msi ) {
> > +    if ( irq >= nr_hw_irqs && msi ) {
> >          u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
> >          AVC_AUDIT_DATA_INIT(&ad, DEV);
> >          ad.device = machine_bdf;
> > -- 
> > 1.7.9.1
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 13:55:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfsxo-0003G1-0X; Tue, 04 Dec 2012 13:55:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lukas@laukamp.me>)
	id 1Tfss0-0002nt-2V; Tue, 04 Dec 2012 13:49:09 +0000
Received: from [85.158.139.83:33473] by server-8.bemta-5.messagelabs.com id
	FC/23-06050-05FFDB05; Tue, 04 Dec 2012 13:49:04 +0000
X-Env-Sender: lukas@laukamp.me
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354628719!28245365!1
X-Originating-IP: [5.9.218.243]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32323 invoked from network); 4 Dec 2012 13:45:20 -0000
Received: from mailer0.lippux.de (HELO mailer0.lippux.de) (5.9.218.243)
	by server-15.tower-182.messagelabs.com with SMTP;
	4 Dec 2012 13:45:20 -0000
Received: from localhost (localhost [127.0.0.1])
	by mailer0.lippux.de (Postfix) with ESMTP id 614982C216;
	Tue,  4 Dec 2012 14:45:33 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mailer1.lippux.de
Received: from mailer0.lippux.de ([127.0.0.1])
	by localhost (mailer0.lippux.de [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id mKOtJAr3dMDb; Tue,  4 Dec 2012 14:45:32 +0100 (CET)
Received: from ashlynn.lippux.de (ashlynn.lippux.de [5.9.218.242])
	by mailer0.lippux.de (Postfix) with ESMTPSA id 04F882C212;
	Tue,  4 Dec 2012 14:45:30 +0100 (CET)
Message-ID: <50BDFE6B.1010800@laukamp.me>
Date: Tue, 04 Dec 2012 14:45:15 +0100
From: Lukas Laukamp <lukas@laukamp.me>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.10) Gecko/20121027 Icedove/10.0.10
MIME-Version: 1.0
To: xen-users@lists.xen.org
References: <CADGo8CWt=uO53ZedJUU0+U6ie_QXPKWY8u1-CDy6wD_pupbdeg@mail.gmail.com>
In-Reply-To: <CADGo8CWt=uO53ZedJUU0+U6ie_QXPKWY8u1-CDy6wD_pupbdeg@mail.gmail.com>
X-Forwarded-Message-Id: <CADGo8CWt=uO53ZedJUU0+U6ie_QXPKWY8u1-CDy6wD_pupbdeg@mail.gmail.com>
Content-Type: multipart/mixed; boundary="------------070505000000060005050803"
X-Mailman-Approved-At: Tue, 04 Dec 2012 13:55:07 +0000
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Fwd: Compilation of Xen 4.2 Utils breaks on NetBSD 6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------070505000000060005050803
Content-Type: multipart/alternative;
 boundary="------------090808050706050703050402"


--------------090808050706050703050402
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hello all,

Because there are still problems building Xen 4.2 on NetBSD (there was
also another thread on the port-xen list), I am forwarding this message
in the hope of finding a solution. The complete output of my build is in
the attached log file.

I used these commands for compilation:

./configure PYTHON=/usr/pkg/bin/python2.7 APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib --prefix=/usr/xen42
gmake PYTHON=/usr/pkg/bin/python2.7 xen
gmake tools

I took the commands from this wiki article: http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD

The build error appears in the tools target in libxl.

This is the last mail from the port-xen list related to this topic:


On 30/11/12 21:16, Mike Bowie wrote:

> On 11/30/12 12:13 PM, Jeff Rizzo wrote:
>> Anyone up for creating a pkgsrc package for xen 4.2?  There's clearly a
>> lot to be done, and my pkgsrc-fu is not all that great.
> I could be up for that... might not be until next week, but if the build
> steps all work out, I should be able to cobble something together into
> pkgsrc/wip. (Which would motivate me to get a box onto 4.2 also...
> double win.)
I would definitely help; this will probably require some Makefile
changes, which I think should be submitted upstream.


Is the problem solvable without big changes to the build system, so that 4.2 runs on a NetBSD 6 box? Or is it not possible to compile the 4.2 toolstack on NetBSD without big changes?



-------- Original-Nachricht --------
Betreff: 	Compilation of Xen 4.2 Utils breaks on NetBSD 6
Datum: 	Mon, 3 Dec 2012 17:19:16 +0000
Von: 	Miguel Clara <miguelmclara@gmail.com>
An: 	port-xen@netbsd.org, lukas@laukamp.me



Lukas Laukamp <lukas <at> laukamp.me <http://laukamp.me>> writes:

 >
 > Hey all,
 >
 > I am trying to compile Xen 4.2 on NetBSD 6. The hypervisor itself
 > compiled fine, but the compilation of the utils breaks with this error:
 >
 > In file included from xl_cmdimpl.c:40:0:
 > libxl_json.h:18:27: fatal error: yajl/yajl_gen.h: No such file or 
directory
 > compilation terminated.
 > gmake[3]: *** [xl_cmdimpl.o] Error 1
 > gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libxl'
 > gmake[2]: *** [subdir-install-libxl] Error 2
 > gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
 > gmake[1]: *** [subdirs-install] Error 2
 > gmake[1]: Leaving directory `/root/xen-4.2.0/tools'
 > gmake: *** [install-tools] Error 2
 > testdom0#
 >
 > I passed the needed options to the configure script so that it searches
 > in /usr/pkg/include/ and /usr/pkg/lib and so on. The file which is
 > reported as missing exists in /usr/pkg/include/yajl/, so I don't
 > understand why it could not be found.
 >
 > Hope that someone could help me.
 >
 > Best Regards
 >
 >

I'm trying to build following the guide at: 
http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD

Everything works fine until I try to build "tools":

gmake[3]: Entering directory `/home/xen/xen-4.2.0/tools/libxl'
rm -f _paths.h.tmp.tmp; echo "SBINDIR=\"/usr/pkg/sbin\"" >>_paths.h.tmp.tmp; echo "BINDIR=\"/usr/pkg/bin\"" >>_paths.h.tmp.tmp; echo "LIBEXEC=\"/usr/pkg/libexec\"" >>_paths.h.tmp.tmp; echo "LIBDIR=\"/usr/pkg/lib\"" >>_paths.h.tmp.tmp; echo "SHAREDIR=\"/usr/pkg/share\"" >>_paths.h.tmp.tmp; echo "PRIVATE_BINDIR=\"/usr/pkg/bin\"" >>_paths.h.tmp.tmp; echo "XENFIRMWAREDIR=\"/usr/pkg/lib/xen/boot\"" >>_paths.h.tmp.tmp; echo "XEN_CONFIG_DIR=\"/usr/pkg/etc/xen\"" >>_paths.h.tmp.tmp; echo "XEN_SCRIPT_DIR=\"/usr/pkg/etc/xen/scripts\"" >>_paths.h.tmp.tmp; echo "XEN_LOCK_DIR=\"/usr/pkg/var/lib\"" >>_paths.h.tmp.tmp; echo "XEN_RUN_DIR=\"/usr/pkg/var/run/xen\"" >>_paths.h.tmp.tmp; echo "XEN_PAGING_DIR=\"/usr/pkg/var/lib/xen/xenpaging\"" >>_paths.h.tmp.tmp; if ! cmp -s _paths.h.tmp.tmp _paths.h.tmp; then mv -f _paths.h.tmp.tmp _paths.h.tmp; else rm -f _paths.h.tmp.tmp; fi
sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp >_paths.h.2.tmp
rm -f _paths.h.tmp
if ! cmp -s _paths.h.2.tmp _paths.h; then mv -f _paths.h.2.tmp _paths.h; else rm -f _paths.h.2.tmp; fi
gcc -pthread -o testidl testidl.o libxlutil.so /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so -Wl,-rpath-link=/home/miguelc/xen-data/xen-4.2.0/tools/libxl/../../tools/libxc -Wl,-rpath-link=/home/xen/xen-4.2.0/tools/libxl/../../tools/xenstore /home/xen/xen-4.2.0/tools/libxl/../../tools/libxc/libxenctrl.so -L/usr/pkg/lib
ld: warning: libyajl.so.2, needed by /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so, not found (try using -rpath or -rpath-link)
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_parse'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_complete_parse'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_null'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_array_open'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_string'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_map_close'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_get_buf'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_free'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_alloc'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_array_close'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_map_open'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_get_error'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_free_error'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_integer'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_alloc'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_free'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: undefined reference to `yajl_gen_bool'
gmake[3]: *** [testidl] Error 1
gmake[3]: Leaving directory `/home/xen/xen-4.2.0/tools/libxl'
gmake[2]: *** [subdir-install-libxl] Error 2
gmake[2]: Leaving directory `/home/xen/xen-4.2.0/tools'
gmake[1]: *** [subdirs-install] Error 2
gmake[1]: Leaving directory `/home/xen/xen-4.2.0/tools'
gmake: *** [install-tools] Error 2
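
As an aside, the long echo/sed pipeline at the top of that log is libxl
generating _paths.h: the sed expression rewrites each NAME=value line
into a C #define. A standalone reproduction of just that sed step, on
one line of sample input:

```shell
# Reproduce the _paths.h generation step from the log:
# NAME="value" becomes '#define NAME "value"'.
printf 'SBINDIR="/usr/pkg/sbin"\n' \
  | sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g"
# prints: #define SBINDIR "/usr/pkg/sbin"
```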


I'm using yajl version 2....  could this be the problem? Is there any patch?

Thanks
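
Following the ld hint in the log ("try using -rpath or -rpath-link"),
one thing worth trying is making the pkgsrc library directory visible
to the linker at both link time and run time. This is only a sketch of
a possible workaround, not a verified fix, and it assumes the tools
makefiles honour LDFLAGS from the environment:

```shell
# Possible workaround sketch (unverified): let ld find libyajl.so.2
# in the pkgsrc prefix and record that directory as a run path.
LDFLAGS="-L/usr/pkg/lib -Wl,-R/usr/pkg/lib" gmake tools
```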


--------------090808050706050703050402--

--------------070505000000060005050803
Content-Type: application/octet-stream;
 name="xen-build.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="xen-build.log"

ZG9tMCMgLi9jb25maWd1cmUgUFlUSE9OPS91c3IvcGtnL2Jpbi9weXRob24yLjcgQVBQRU5E
X0lOQ0xVREVTPS91c3IvcGtnL2luY2x1ZGUgCCBBUFBFTkRfTElCPS91c3IvcGtnL2xpYiAt
LXByZWZpeD0vdXNyL3hlbjQyCmNoZWNraW5nIGJ1aWxkIHN5c3RlbSB0eXBlLi4uIHg4Nl82
NC11bmtub3duLW5ldGJzZDYuMApjaGVja2luZyBob3N0IHN5c3RlbSB0eXBlLi4uIHg4Nl82
NC11bmtub3duLW5ldGJzZDYuMApjaGVja2luZyBmb3IgZ2NjLi4uIGdjYwpjaGVja2luZyB3
aGV0aGVyIHRoZSBDIGNvbXBpbGVyIHdvcmtzLi4uIHllcwpjaGVja2luZyBmb3IgQyBjb21w
aWxlciBkZWZhdWx0IG91dHB1dCBmaWxlIG5hbWUuLi4gYS5vdXQKY2hlY2tpbmcgZm9yIHN1
ZmZpeCBvZiBleGVjdXRhYmxlcy4uLiAKY2hlY2tpbmcgd2hldGhlciB3ZSBhcmUgY3Jvc3Mg
Y29tcGlsaW5nLi4uIG5vCmNoZWNraW5nIGZvciBzdWZmaXggb2Ygb2JqZWN0IGZpbGVzLi4u
IG8KY2hlY2tpbmcgd2hldGhlciB3ZSBhcmUgdXNpbmcgdGhlIEdOVSBDIGNvbXBpbGVyLi4u
IHllcwpjaGVja2luZyB3aGV0aGVyIGdjYyBhY2NlcHRzIC1nLi4uIHllcwpjaGVja2luZyBm
b3IgZ2NjIG9wdGlvbiB0byBhY2NlcHQgSVNPIEM4OS4uLiBub25lIG5lZWRlZApjaGVja2lu
ZyB3aGV0aGVyIG1ha2Ugc2V0cyAkKE1BS0UpLi4uIHllcwpjaGVja2luZyBmb3IgYSBCU0Qt
Y29tcGF0aWJsZSBpbnN0YWxsLi4uIC91c3IvYmluL2luc3RhbGwgLWMKY2hlY2tpbmcgZm9y
IGJpc29uLi4uIC91c3IvcGtnL2Jpbi9iaXNvbgpjaGVja2luZyBmb3IgZmxleC4uLiAvdXNy
L2Jpbi9mbGV4CmNoZWNraW5nIGZvciBwZXJsLi4uIC91c3IvcGtnL2Jpbi9wZXJsCmNoZWNr
aW5nIGZvciBvY2FtbGMuLi4gbm8KY2hlY2tpbmcgZm9yIG9jYW1sLi4uIG5vCmNoZWNraW5n
IGZvciBvY2FtbGRlcC4uLiBubwpjaGVja2luZyBmb3Igb2NhbWxta3RvcC4uLiBubwpjaGVj
a2luZyBmb3Igb2NhbWxta2xpYi4uLiBubwpjaGVja2luZyBmb3Igb2NhbWxkb2MuLi4gbm8K
Y2hlY2tpbmcgZm9yIG9jYW1sYnVpbGQuLi4gbm8KY2hlY2tpbmcgZm9yIGJhc2guLi4gL3Vz
ci9wa2cvYmluL2Jhc2gKY2hlY2tpbmcgZm9yIHB5dGhvbjIuNy4uLiAvdXNyL3BrZy9iaW4v
cHl0aG9uMi43CmNoZWNraW5nIGZvciBweXRob24gdmVyc2lvbiA+PSAyLjMgLi4uIHllcwpj
aGVja2luZyBob3cgdG8gcnVuIHRoZSBDIHByZXByb2Nlc3Nvci4uLiBnY2MgLUUKY2hlY2tp
bmcgZm9yIGdyZXAgdGhhdCBoYW5kbGVzIGxvbmcgbGluZXMgYW5kIC1lLi4uIC91c3IvYmlu
L2dyZXAKY2hlY2tpbmcgZm9yIGVncmVwLi4uIC91c3IvYmluL2dyZXAgLUUKY2hlY2tpbmcg
Zm9yIEFOU0kgQyBoZWFkZXIgZmlsZXMuLi4geWVzCmNoZWNraW5nIGZvciBzeXMvdHlwZXMu
aC4uLiB5ZXMKY2hlY2tpbmcgZm9yIHN5cy9zdGF0LmguLi4geWVzCmNoZWNraW5nIGZvciBz
dGRsaWIuaC4uLiB5ZXMKY2hlY2tpbmcgZm9yIHN0cmluZy5oLi4uIHllcwpjaGVja2luZyBm
b3IgbWVtb3J5LmguLi4geWVzCmNoZWNraW5nIGZvciBzdHJpbmdzLmguLi4geWVzCmNoZWNr
aW5nIGZvciBpbnR0eXBlcy5oLi4uIHllcwpjaGVja2luZyBmb3Igc3RkaW50LmguLi4geWVz
CmNoZWNraW5nIGZvciB1bmlzdGQuaC4uLiB5ZXMKY2hlY2tpbmcgZm9yIHB5dGhvbjIuNy1j
b25maWcuLi4gL3Vzci9wa2cvYmluL3B5dGhvbjIuNy1jb25maWcKY2hlY2tpbmcgUHl0aG9u
LmggdXNhYmlsaXR5Li4uIHllcwpjaGVja2luZyBQeXRob24uaCBwcmVzZW5jZS4uLiB5ZXMK
Y2hlY2tpbmcgZm9yIFB5dGhvbi5oLi4uIHllcwpjaGVja2luZyBmb3IgUHlBcmdfUGFyc2VU
dXBsZSBpbiAtbHB5dGhvbjIuNy4uLiB5ZXMKY2hlY2tpbmcgZm9yIHhnZXR0ZXh0Li4uIC91
c3IvYmluL3hnZXR0ZXh0CmNoZWNraW5nIGZvciBhczg2Li4uIC91c3IvcGtnL2Jpbi9hczg2
CmNoZWNraW5nIGZvciBsZDg2Li4uIC91c3IvcGtnL2Jpbi9sZDg2CmNoZWNraW5nIGZvciBi
Y2MuLi4gL3Vzci9wa2cvYmluL2JjYwpjaGVja2luZyBmb3IgaWFzbC4uLiAvdXNyL2Jpbi9p
YXNsCmNoZWNraW5nIHV1aWQvdXVpZC5oIHVzYWJpbGl0eS4uLiBubwpjaGVja2luZyB1dWlk
L3V1aWQuaCBwcmVzZW5jZS4uLiBubwpjaGVja2luZyBmb3IgdXVpZC91dWlkLmguLi4gbm8K
Y2hlY2tpbmcgdXVpZC5oIHVzYWJpbGl0eS4uLiB5ZXMKY2hlY2tpbmcgdXVpZC5oIHByZXNl
bmNlLi4uIHllcwpjaGVja2luZyBmb3IgdXVpZC5oLi4uIHllcwpjaGVja2luZyBjdXJzZXMu
aCB1c2FiaWxpdHkuLi4geWVzCmNoZWNraW5nIGN1cnNlcy5oIHByZXNlbmNlLi4uIHllcwpj
aGVja2luZyBmb3IgY3Vyc2VzLmguLi4geWVzCmNoZWNraW5nIGZvciBjbGVhciBpbiAtbGN1
cnNlcy4uLiB5ZXMKY2hlY2tpbmcgbmN1cnNlcy5oIHVzYWJpbGl0eS4uLiBubwpjaGVja2lu
ZyBuY3Vyc2VzLmggcHJlc2VuY2UuLi4gbm8KY2hlY2tpbmcgZm9yIG5jdXJzZXMuaC4uLiBu
bwpjaGVja2luZyBmb3IgcGtnLWNvbmZpZy4uLiAvdXNyL3BrZy9iaW4vcGtnLWNvbmZpZwpj
aGVja2luZyBwa2ctY29uZmlnIGlzIGF0IGxlYXN0IHZlcnNpb24gMC45LjAuLi4geWVzCmNo
ZWNraW5nIGZvciBnbGliLi4uIHllcwpjaGVja2luZyBiemxpYi5oIHVzYWJpbGl0eS4uLiB5
ZXMKY2hlY2tpbmcgYnpsaWIuaCBwcmVzZW5jZS4uLiB5ZXMKY2hlY2tpbmcgZm9yIGJ6bGli
LmguLi4geWVzCmNoZWNraW5nIGZvciBCWjJfYnpEZWNvbXByZXNzSW5pdCBpbiAtbGJ6Mi4u
LiB5ZXMKY2hlY2tpbmcgbHptYS5oIHVzYWJpbGl0eS4uLiB5ZXMKY2hlY2tpbmcgbHptYS5o
IHByZXNlbmNlLi4uIHllcwpjaGVja2luZyBmb3IgbHptYS5oLi4uIHllcwpjaGVja2luZyBm
b3IgbHptYV9zdHJlYW1fZGVjb2RlciBpbiAtbGx6bWEuLi4geWVzCmNoZWNraW5nIGx6by9s
em8xeC5oIHVzYWJpbGl0eS4uLiBubwpjaGVja2luZyBsem8vbHpvMXguaCBwcmVzZW5jZS4u
LiBubwpjaGVja2luZyBmb3IgbHpvL2x6bzF4LmguLi4gbm8KY2hlY2tpbmcgZm9yIGlvX3Nl
dHVwIGluIC1sYWlvLi4uIG5vCmNoZWNraW5nIGZvciBNRDUgaW4gLWxjcnlwdG8uLi4geWVz
CmNoZWNraW5nIGZvciBleHQyZnNfb3BlbjIgaW4gLWxleHQyZnMuLi4gbm8KY2hlY2tpbmcg
Zm9yIGdjcnlfbWRfaGFzaF9idWZmZXIgaW4gLWxnY3J5cHQuLi4geWVzCmNoZWNraW5nIGZv
ciBwdGhyZWFkIGZsYWcuLi4gLXB0aHJlYWQKY2hlY2tpbmcgbGlidXRpbC5oIHVzYWJpbGl0
eS4uLiBubwpjaGVja2luZyBsaWJ1dGlsLmggcHJlc2VuY2UuLi4gbm8KY2hlY2tpbmcgZm9y
 libutil.h... no
checking for openpty et al... -lutil
checking for yajl_alloc in -lyajl... yes
checking for deflateCopy in -lz... yes
checking for libiconv_open in -liconv... no
checking yajl/yajl_version.h usability... yes
checking yajl/yajl_version.h presence... yes
checking for yajl/yajl_version.h... yes
configure: creating ./config.status
config.status: creating ../config/Tools.mk
config.status: creating config.h
dom0# gmake PYTHON=/usr/pkg/bin/python2.7 xen
gmake -C xen install
gmake[1]: Entering directory `/root/xen-4.2.0/xen'
gmake -f Rules.mk _install
gmake[2]: Entering directory `/root/xen-4.2.0/xen'
gmake -C tools
gmake[3]: Entering directory `/root/xen-4.2.0/xen/tools'
[ -d figlet ] && gmake -C figlet
gmake[4]: Entering directory `/root/xen-4.2.0/xen/tools/figlet'
gcc -o figlet figlet.c
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/tools/figlet'
gmake symbols
gmake[4]: Entering directory `/root/xen-4.2.0/xen/tools'
gcc -Wall -Werror -Wstrict-prototypes -O2 -fomit-frame-pointer -fno-strict-aliasing -Wdeclaration-after-statement -o symbols symbols.c
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/tools'
gmake[3]: Leaving directory `/root/xen-4.2.0/xen/tools'
gmake -f /root/xen-4.2.0/xen/Rules.mk include/xen/compile.h
gmake[3]: Entering directory `/root/xen-4.2.0/xen'
gmake -C tools
gmake[4]: Entering directory `/root/xen-4.2.0/xen/tools'
[ -d figlet ] && gmake -C figlet
gmake[5]: Entering directory `/root/xen-4.2.0/xen/tools/figlet'
gmake[5]: `figlet' is up to date.
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/tools/figlet'
gmake symbols
gmake[5]: Entering directory `/root/xen-4.2.0/xen/tools'
gmake[5]: `symbols' is up to date.
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/tools'
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/tools'
 __  __            _  _    ____    ___  
 \ \/ /___ _ __   | || |  |___ \  / _ \ 
  \  // _ \ '_ \  | || |_   __) || | | |
  /  \  __/ | | | |__   _| / __/ | |_| |
 /_/\_\___|_| |_|    |_|(_)_____(_)___/ 
                                        
gmake[3]: Leaving directory `/root/xen-4.2.0/xen'
[ -e include/asm ] || ln -sf asm-x86 include/asm
gmake -f /root/xen-4.2.0/xen/Rules.mk -C include
gmake[3]: Entering directory `/root/xen-4.2.0/xen/include'
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/callback.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/callback.c.new
mv -f compat/callback.c.new compat/callback.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/callback.i compat/callback.c
set -e; id=_$(echo compat/callback.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/callback.h.new; \
echo "#define $id" >>compat/callback.h.new; \
echo "#include <xen/compat.h>" >>compat/callback.h.new; \
echo "#include <public/callback.h>" >>compat/callback.h.new; \
echo "#pragma pack(4)" >>compat/callback.h.new; \
grep -v '^# [0-9]' compat/callback.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/callback.h.new; \
echo "#pragma pack()" >>compat/callback.h.new; \
echo "#endif /* $id */" >>compat/callback.h.new
mv -f compat/callback.h.new compat/callback.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/elfnote.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/elfnote.c.new
mv -f compat/elfnote.c.new compat/elfnote.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/elfnote.i compat/elfnote.c
set -e; id=_$(echo compat/elfnote.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/elfnote.h.new; \
echo "#define $id" >>compat/elfnote.h.new; \
echo "#include <xen/compat.h>" >>compat/elfnote.h.new; \
echo "#include <public/elfnote.h>" >>compat/elfnote.h.new; \
echo "#pragma pack(4)" >>compat/elfnote.h.new; \
grep -v '^# [0-9]' compat/elfnote.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/elfnote.h.new; \
echo "#pragma pack()" >>compat/elfnote.h.new; \
echo "#endif /* $id */" >>compat/elfnote.h.new
mv -f compat/elfnote.h.new compat/elfnote.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/event_channel.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/event_channel.c.new
mv -f compat/event_channel.c.new compat/event_channel.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/event_channel.i compat/event_channel.c
set -e; id=_$(echo compat/event_channel.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/event_channel.h.new; \
echo "#define $id" >>compat/event_channel.h.new; \
echo "#include <xen/compat.h>" >>compat/event_channel.h.new; \
echo "#include <public/event_channel.h>" >>compat/event_channel.h.new; \
echo "#pragma pack(4)" >>compat/event_channel.h.new; \
grep -v '^# [0-9]' compat/event_channel.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/event_channel.h.new; \
echo "#pragma pack()" >>compat/event_channel.h.new; \
echo "#endif /* $id */" >>compat/event_channel.h.new
mv -f compat/event_channel.h.new compat/event_channel.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/features.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/features.c.new
mv -f compat/features.c.new compat/features.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/features.i compat/features.c
set -e; id=_$(echo compat/features.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/features.h.new; \
echo "#define $id" >>compat/features.h.new; \
echo "#include <xen/compat.h>" >>compat/features.h.new; \
echo "#include <public/features.h>" >>compat/features.h.new; \
echo "#pragma pack(4)" >>compat/features.h.new; \
grep -v '^# [0-9]' compat/features.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/features.h.new; \
echo "#pragma pack()" >>compat/features.h.new; \
echo "#endif /* $id */" >>compat/features.h.new
mv -f compat/features.h.new compat/features.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/grant_table.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/grant_table.c.new
mv -f compat/grant_table.c.new compat/grant_table.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/grant_table.i compat/grant_table.c
set -e; id=_$(echo compat/grant_table.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/grant_table.h.new; \
echo "#define $id" >>compat/grant_table.h.new; \
echo "#include <xen/compat.h>" >>compat/grant_table.h.new; \
echo "#include <public/grant_table.h>" >>compat/grant_table.h.new; \
echo "#pragma pack(4)" >>compat/grant_table.h.new; \
grep -v '^# [0-9]' compat/grant_table.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/grant_table.h.new; \
echo "#pragma pack()" >>compat/grant_table.h.new; \
echo "#endif /* $id */" >>compat/grant_table.h.new
mv -f compat/grant_table.h.new compat/grant_table.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/kexec.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/kexec.c.new
mv -f compat/kexec.c.new compat/kexec.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/kexec.i compat/kexec.c
set -e; id=_$(echo compat/kexec.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/kexec.h.new; \
echo "#define $id" >>compat/kexec.h.new; \
echo "#include <xen/compat.h>" >>compat/kexec.h.new; \
echo "#include <public/kexec.h>" >>compat/kexec.h.new; \
echo "#pragma pack(4)" >>compat/kexec.h.new; \
grep -v '^# [0-9]' compat/kexec.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/kexec.h.new; \
echo "#pragma pack()" >>compat/kexec.h.new; \
echo "#endif /* $id */" >>compat/kexec.h.new
mv -f compat/kexec.h.new compat/kexec.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/memory.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/memory.c.new
mv -f compat/memory.c.new compat/memory.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/memory.i compat/memory.c
set -e; id=_$(echo compat/memory.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/memory.h.new; \
echo "#define $id" >>compat/memory.h.new; \
echo "#include <xen/compat.h>" >>compat/memory.h.new; \
echo "#include <public/memory.h>" >>compat/memory.h.new; \
echo "#pragma pack(4)" >>compat/memory.h.new; \
grep -v '^# [0-9]' compat/memory.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/memory.h.new; \
echo "#pragma pack()" >>compat/memory.h.new; \
echo "#endif /* $id */" >>compat/memory.h.new
mv -f compat/memory.h.new compat/memory.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/nmi.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/nmi.c.new
mv -f compat/nmi.c.new compat/nmi.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/nmi.i compat/nmi.c
set -e; id=_$(echo compat/nmi.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/nmi.h.new; \
echo "#define $id" >>compat/nmi.h.new; \
echo "#include <xen/compat.h>" >>compat/nmi.h.new; \
echo "#include <public/nmi.h>" >>compat/nmi.h.new; \
echo "#pragma pack(4)" >>compat/nmi.h.new; \
grep -v '^# [0-9]' compat/nmi.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/nmi.h.new; \
echo "#pragma pack()" >>compat/nmi.h.new; \
echo "#endif /* $id */" >>compat/nmi.h.new
mv -f compat/nmi.h.new compat/nmi.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/physdev.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/physdev.c.new
mv -f compat/physdev.c.new compat/physdev.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/physdev.i compat/physdev.c
set -e; id=_$(echo compat/physdev.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/physdev.h.new; \
echo "#define $id" >>compat/physdev.h.new; \
echo "#include <xen/compat.h>" >>compat/physdev.h.new; \
echo "#include <public/physdev.h>" >>compat/physdev.h.new; \
echo "#pragma pack(4)" >>compat/physdev.h.new; \
grep -v '^# [0-9]' compat/physdev.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/physdev.h.new; \
echo "#pragma pack()" >>compat/physdev.h.new; \
echo "#endif /* $id */" >>compat/physdev.h.new
mv -f compat/physdev.h.new compat/physdev.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/platform.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/platform.c.new
mv -f compat/platform.c.new compat/platform.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/platform.i compat/platform.c
set -e; id=_$(echo compat/platform.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/platform.h.new; \
echo "#define $id" >>compat/platform.h.new; \
echo "#include <xen/compat.h>" >>compat/platform.h.new; \
echo "#include <public/platform.h>" >>compat/platform.h.new; \
echo "#pragma pack(4)" >>compat/platform.h.new; \
grep -v '^# [0-9]' compat/platform.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/platform.h.new; \
echo "#pragma pack()" >>compat/platform.h.new; \
echo "#endif /* $id */" >>compat/platform.h.new
mv -f compat/platform.h.new compat/platform.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/sched.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/sched.c.new
mv -f compat/sched.c.new compat/sched.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/sched.i compat/sched.c
set -e; id=_$(echo compat/sched.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/sched.h.new; \
echo "#define $id" >>compat/sched.h.new; \
echo "#include <xen/compat.h>" >>compat/sched.h.new; \
echo "#include <public/sched.h>" >>compat/sched.h.new; \
echo "#pragma pack(4)" >>compat/sched.h.new; \
grep -v '^# [0-9]' compat/sched.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/sched.h.new; \
echo "#pragma pack()" >>compat/sched.h.new; \
echo "#endif /* $id */" >>compat/sched.h.new
mv -f compat/sched.h.new compat/sched.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/tmem.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/tmem.c.new
mv -f compat/tmem.c.new compat/tmem.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/tmem.i compat/tmem.c
set -e; id=_$(echo compat/tmem.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/tmem.h.new; \
echo "#define $id" >>compat/tmem.h.new; \
echo "#include <xen/compat.h>" >>compat/tmem.h.new; \
echo "#include <public/tmem.h>" >>compat/tmem.h.new; \
echo "#pragma pack(4)" >>compat/tmem.h.new; \
grep -v '^# [0-9]' compat/tmem.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/tmem.h.new; \
echo "#pragma pack()" >>compat/tmem.h.new; \
echo "#endif /* $id */" >>compat/tmem.h.new
mv -f compat/tmem.h.new compat/tmem.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/trace.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/trace.c.new
mv -f compat/trace.c.new compat/trace.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/trace.i compat/trace.c
set -e; id=_$(echo compat/trace.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/trace.h.new; \
echo "#define $id" >>compat/trace.h.new; \
echo "#include <xen/compat.h>" >>compat/trace.h.new; \
echo "#include <public/trace.h>" >>compat/trace.h.new; \
echo "#pragma pack(4)" >>compat/trace.h.new; \
grep -v '^# [0-9]' compat/trace.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/trace.h.new; \
echo "#pragma pack()" >>compat/trace.h.new; \
echo "#endif /* $id */" >>compat/trace.h.new
mv -f compat/trace.h.new compat/trace.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/vcpu.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/vcpu.c.new
mv -f compat/vcpu.c.new compat/vcpu.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/vcpu.i compat/vcpu.c
set -e; id=_$(echo compat/vcpu.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/vcpu.h.new; \
echo "#define $id" >>compat/vcpu.h.new; \
echo "#include <xen/compat.h>" >>compat/vcpu.h.new; \
echo "#include <public/vcpu.h>" >>compat/vcpu.h.new; \
echo "#pragma pack(4)" >>compat/vcpu.h.new; \
grep -v '^# [0-9]' compat/vcpu.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/vcpu.h.new; \
echo "#pragma pack()" >>compat/vcpu.h.new; \
echo "#endif /* $id */" >>compat/vcpu.h.new
mv -f compat/vcpu.h.new compat/vcpu.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/version.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/version.c.new
mv -f compat/version.c.new compat/version.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/version.i compat/version.c
set -e; id=_$(echo compat/version.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/version.h.new; \
echo "#define $id" >>compat/version.h.new; \
echo "#include <xen/compat.h>" >>compat/version.h.new; \
echo "#include <public/version.h>" >>compat/version.h.new; \
echo "#pragma pack(4)" >>compat/version.h.new; \
grep -v '^# [0-9]' compat/version.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/version.h.new; \
echo "#pragma pack()" >>compat/version.h.new; \
echo "#endif /* $id */" >>compat/version.h.new
mv -f compat/version.h.new compat/version.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/xen.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/xen.c.new
mv -f compat/xen.c.new compat/xen.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNv
bXBhdC5oIC1tMzIgLW8gY29tcGF0L3hlbi5pIGNvbXBhdC94ZW4uYwpzZXQgLWU7IGlkPV8k
KGVjaG8gY29tcGF0L3hlbi5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9fXycp
OyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L3hlbi5oLm5ldzsgXAplY2hvICIjZGVm
aW5lICRpZCIgPj5jb21wYXQveGVuLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29t
cGF0Lmg+IiA+PmNvbXBhdC94ZW4uaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHB1YmxpYy94
ZW4uaD4iID4+Y29tcGF0L3hlbi5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soNCkiID4+
Y29tcGF0L3hlbi5oLm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0L3hlbi5pIHwg
XAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvY29t
cGF0LWJ1aWxkLWhlYWRlci5weSB8IHVuaXEgPj5jb21wYXQveGVuLmgubmV3OyBcCmVjaG8g
IiNwcmFnbWEgcGFjaygpIiA+PmNvbXBhdC94ZW4uaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8q
ICRpZCAqLyIgPj5jb21wYXQveGVuLmgubmV3Cm12IC1mIGNvbXBhdC94ZW4uaC5uZXcgY29t
cGF0L3hlbi5oCm1rZGlyIC1wIGNvbXBhdApncmVwIC12ICdERUZJTkVfWEVOX0dVRVNUX0hB
TkRMRShsb25nKScgcHVibGljL3hlbmNvbW0uaCB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIu
NyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1zb3VyY2UucHkgPmNv
bXBhdC94ZW5jb21tLmMubmV3Cm12IC1mIGNvbXBhdC94ZW5jb21tLmMubmV3IGNvbXBhdC94
ZW5jb21tLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdI
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNvbXBhdC5oIC1tMzIgLW8gY29tcGF0
L3hlbmNvbW0uaSBjb21wYXQveGVuY29tbS5jCnNldCAtZTsgaWQ9XyQoZWNobyBjb21wYXQv
eGVuY29tbS5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9fXycpOyBcCmVjaG8g
IiNpZm5kZWYgJGlkIiA+Y29tcGF0L3hlbmNvbW0uaC5uZXc7IFwKZWNobyAiI2RlZmluZSAk
aWQiID4+Y29tcGF0L3hlbmNvbW0uaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9jb21w
YXQuaD4iID4+Y29tcGF0L3hlbmNvbW0uaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHB1Ymxp
Yy94ZW5jb21tLmg+IiA+PmNvbXBhdC94ZW5jb21tLmgubmV3OyBcCmVjaG8gIiNwcmFnbWEg
cGFjayg0KSIgPj5jb21wYXQveGVuY29tbS5oLm5ldzsgXApncmVwIC12ICdeIyBbMC05XScg
Y29tcGF0L3hlbmNvbW0uaSB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4t
NC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1oZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0
L3hlbmNvbW0uaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L3hlbmNv
bW0uaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8qICRpZCAqLyIgPj5jb21wYXQveGVuY29tbS5o
Lm5ldwptdiAtZiBjb21wYXQveGVuY29tbS5oLm5ldyBjb21wYXQveGVuY29tbS5oCm1rZGly
IC1wIGNvbXBhdApncmVwIC12ICdERUZJTkVfWEVOX0dVRVNUX0hBTkRMRShsb25nKScgcHVi
bGljL3hlbm9wcm9mLmggfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQu
Mi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtc291cmNlLnB5ID5jb21wYXQveGVub3Byb2Yu
Yy5uZXcKbXYgLWYgY29tcGF0L3hlbm9wcm9mLmMubmV3IGNvbXBhdC94ZW5vcHJvZi5jCmdj
YyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1vIGNvbXBhdC94ZW5vcHJvZi5p
IGNvbXBhdC94ZW5vcHJvZi5jCnNldCAtZTsgaWQ9XyQoZWNobyBjb21wYXQveGVub3Byb2Yu
aCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hvICIjaWZuZGVm
ICRpZCIgPmNvbXBhdC94ZW5vcHJvZi5oLm5ldzsgXAplY2hvICIjZGVmaW5lICRpZCIgPj5j
b21wYXQveGVub3Byb2YuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9jb21wYXQuaD4i
ID4+Y29tcGF0L3hlbm9wcm9mLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxwdWJsaWMveGVu
b3Byb2YuaD4iID4+Y29tcGF0L3hlbm9wcm9mLmgubmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFj
ayg0KSIgPj5jb21wYXQveGVub3Byb2YuaC5uZXc7IFwKZ3JlcCAtdiAnXiMgWzAtOV0nIGNv
bXBhdC94ZW5vcHJvZi5pIHwgXAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00
LjIuMC94ZW4vdG9vbHMvY29tcGF0LWJ1aWxkLWhlYWRlci5weSB8IHVuaXEgPj5jb21wYXQv
eGVub3Byb2YuaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L3hlbm9w
cm9mLmgubmV3OyBcCmVjaG8gIiNlbmRpZiAvKiAkaWQgKi8iID4+Y29tcGF0L3hlbm9wcm9m
LmgubmV3Cm12IC1mIGNvbXBhdC94ZW5vcHJvZi5oLm5ldyBjb21wYXQveGVub3Byb2YuaApt
a2RpciAtcCBjb21wYXQvYXJjaC14ODYKZ3JlcCAtdiAnREVGSU5FX1hFTl9HVUVTVF9IQU5E
TEUobG9uZyknIHB1YmxpYy9hcmNoLXg4Ni94ZW4tbWNhLmggfCBcCi91c3IvcGtnL2Jpbi9w
eXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtc291cmNl
LnB5ID5jb21wYXQvYXJjaC14ODYveGVuLW1jYS5jLm5ldwptdiAtZiBjb21wYXQvYXJjaC14
ODYveGVuLW1jYS5jLm5ldyBjb21wYXQvYXJjaC14ODYveGVuLW1jYS5jCmdjYyAtRSAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRl
Y2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1w
aXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLWluY2x1ZGUg
cHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1vIGNvbXBhdC9hcmNoLXg4Ni94ZW4tbWNhLmkg
Y29tcGF0L2FyY2gteDg2L3hlbi1tY2EuYwpzZXQgLWU7IGlkPV8kKGVjaG8gY29tcGF0L2Fy
Y2gteDg2L3hlbi1tY2EuaCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsg
XAplY2hvICIjaWZuZGVmICRpZCIgPmNvbXBhdC9hcmNoLXg4Ni94ZW4tbWNhLmgubmV3OyBc
CmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNvbXBhdC9hcmNoLXg4Ni94ZW4tbWNhLmgubmV3OyBc
CmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29tcGF0Lmg+IiA+PmNvbXBhdC9hcmNoLXg4Ni94ZW4t
bWNhLmgubmV3OyBcCiBcCmVjaG8gIiNwcmFnbWEgcGFjayg0KSIgPj5jb21wYXQvYXJjaC14
ODYveGVuLW1jYS5oLm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0L2FyY2gteDg2
L3hlbi1tY2EuaSB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAv
eGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1oZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0L2FyY2gt
eDg2L3hlbi1tY2EuaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L2Fy
Y2gteDg2L3hlbi1tY2EuaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8qICRpZCAqLyIgPj5jb21w
YXQvYXJjaC14ODYveGVuLW1jYS5oLm5ldwptdiAtZiBjb21wYXQvYXJjaC14ODYveGVuLW1j
YS5oLm5ldyBjb21wYXQvYXJjaC14ODYveGVuLW1jYS5oCm1rZGlyIC1wIGNvbXBhdC9hcmNo
LXg4NgpncmVwIC12ICdERUZJTkVfWEVOX0dVRVNUX0hBTkRMRShsb25nKScgcHVibGljL2Fy
Y2gteDg2L3hlbi5oIHwgXAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00LjIu
MC94ZW4vdG9vbHMvY29tcGF0LWJ1aWxkLXNvdXJjZS5weSA+Y29tcGF0L2FyY2gteDg2L3hl
bi5jLm5ldwptdiAtZiBjb21wYXQvYXJjaC14ODYveGVuLmMubmV3IGNvbXBhdC9hcmNoLXg4
Ni94ZW4uYwpnY2MgLUUgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50IC1pbmNsdWRlIHB1YmxpYy94ZW4tY29tcGF0LmggLW0zMiAtbyBjb21wYXQv
YXJjaC14ODYveGVuLmkgY29tcGF0L2FyY2gteDg2L3hlbi5jCnNldCAtZTsgaWQ9XyQoZWNo
byBjb21wYXQvYXJjaC14ODYveGVuLmggfCB0ciAnWzpsb3dlcjpdLS8uJyAnWzp1cHBlcjpd
X19fJyk7IFwKZWNobyAiI2lmbmRlZiAkaWQiID5jb21wYXQvYXJjaC14ODYveGVuLmgubmV3
OyBcCmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNvbXBhdC9hcmNoLXg4Ni94ZW4uaC5uZXc7IFwK
ZWNobyAiI2luY2x1ZGUgPHhlbi9jb21wYXQuaD4iID4+Y29tcGF0L2FyY2gteDg2L3hlbi5o
Lm5ldzsgXAogXAplY2hvICIjcHJhZ21hIHBhY2soNCkiID4+Y29tcGF0L2FyY2gteDg2L3hl
bi5oLm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0L2FyY2gteDg2L3hlbi5pIHwg
XAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvY29t
cGF0LWJ1aWxkLWhlYWRlci5weSB8IHVuaXEgPj5jb21wYXQvYXJjaC14ODYveGVuLmgubmV3
OyBcCmVjaG8gIiNwcmFnbWEgcGFjaygpIiA+PmNvbXBhdC9hcmNoLXg4Ni94ZW4uaC5uZXc7
IFwKZWNobyAiI2VuZGlmIC8qICRpZCAqLyIgPj5jb21wYXQvYXJjaC14ODYveGVuLmgubmV3
Cm12IC1mIGNvbXBhdC9hcmNoLXg4Ni94ZW4uaC5uZXcgY29tcGF0L2FyY2gteDg2L3hlbi5o
Cm1rZGlyIC1wIGNvbXBhdC9hcmNoLXg4NgpncmVwIC12ICdERUZJTkVfWEVOX0dVRVNUX0hB
TkRMRShsb25nKScgcHVibGljL2FyY2gteDg2L3hlbi14ODZfMzIuaCB8IFwKL3Vzci9wa2cv
YmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1z
b3VyY2UucHkgPmNvbXBhdC9hcmNoLXg4Ni94ZW4teDg2XzMyLmMubmV3Cm12IC1mIGNvbXBh
dC9hcmNoLXg4Ni94ZW4teDg2XzMyLmMubmV3IGNvbXBhdC9hcmNoLXg4Ni94ZW4teDg2XzMy
LmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNvbXBhdC5oIC1tMzIgLW8gY29tcGF0L2FyY2gt
eDg2L3hlbi14ODZfMzIuaSBjb21wYXQvYXJjaC14ODYveGVuLXg4Nl8zMi5jCnNldCAtZTsg
aWQ9XyQoZWNobyBjb21wYXQvYXJjaC14ODYveGVuLXg4Nl8zMi5oIHwgdHIgJ1s6bG93ZXI6
XS0vLicgJ1s6dXBwZXI6XV9fXycpOyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L2Fy
Y2gteDg2L3hlbi14ODZfMzIuaC5uZXc7IFwKZWNobyAiI2RlZmluZSAkaWQiID4+Y29tcGF0
L2FyY2gteDg2L3hlbi14ODZfMzIuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9jb21w
YXQuaD4iID4+Y29tcGF0L2FyY2gteDg2L3hlbi14ODZfMzIuaC5uZXc7IFwKIFwKZWNobyAi
I3ByYWdtYSBwYWNrKDQpIiA+PmNvbXBhdC9hcmNoLXg4Ni94ZW4teDg2XzMyLmgubmV3OyBc
CmdyZXAgLXYgJ14jIFswLTldJyBjb21wYXQvYXJjaC14ODYveGVuLXg4Nl8zMi5pIHwgXAov
dXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvY29tcGF0
LWJ1aWxkLWhlYWRlci5weSB8IHVuaXEgPj5jb21wYXQvYXJjaC14ODYveGVuLXg4Nl8zMi5o
Lm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soKSIgPj5jb21wYXQvYXJjaC14ODYveGVuLXg4
Nl8zMi5oLm5ldzsgXAplY2hvICIjZW5kaWYgLyogJGlkICovIiA+PmNvbXBhdC9hcmNoLXg4
Ni94ZW4teDg2XzMyLmgubmV3Cm12IC1mIGNvbXBhdC9hcmNoLXg4Ni94ZW4teDg2XzMyLmgu
bmV3IGNvbXBhdC9hcmNoLXg4Ni94ZW4teDg2XzMyLmgKbWtkaXIgLXAgY29tcGF0CmdyZXAg
LXYgJ0RFRklORV9YRU5fR1VFU1RfSEFORExFKGxvbmcpJyBwdWJsaWMvYXJjaC14ODZfMzIu
aCB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xz
L2NvbXBhdC1idWlsZC1zb3VyY2UucHkgPmNvbXBhdC9hcmNoLXg4Nl8zMi5jLm5ldwptdiAt
ZiBjb21wYXQvYXJjaC14ODZfMzIuYy5uZXcgY29tcGF0L2FyY2gteDg2XzMyLmMKZ2NjIC1F
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5j
bHVkZSBwdWJsaWMveGVuLWNvbXBhdC5oIC1tMzIgLW8gY29tcGF0L2FyY2gteDg2XzMyLmkg
Y29tcGF0L2FyY2gteDg2XzMyLmMKc2V0IC1lOyBpZD1fJChlY2hvIGNvbXBhdC9hcmNoLXg4
Nl8zMi5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9fXycpOyBcCmVjaG8gIiNp
Zm5kZWYgJGlkIiA+Y29tcGF0L2FyY2gteDg2XzMyLmgubmV3OyBcCmVjaG8gIiNkZWZpbmUg
JGlkIiA+PmNvbXBhdC9hcmNoLXg4Nl8zMi5oLm5ldzsgXAplY2hvICIjaW5jbHVkZSA8eGVu
L2NvbXBhdC5oPiIgPj5jb21wYXQvYXJjaC14ODZfMzIuaC5uZXc7IFwKIFwKZWNobyAiI3By
YWdtYSBwYWNrKDQpIiA+PmNvbXBhdC9hcmNoLXg4Nl8zMi5oLm5ldzsgXApncmVwIC12ICde
IyBbMC05XScgY29tcGF0L2FyY2gteDg2XzMyLmkgfCBcCi91c3IvcGtnL2Jpbi9weXRob24y
LjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtaGVhZGVyLnB5IHwg
dW5pcSA+PmNvbXBhdC9hcmNoLXg4Nl8zMi5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2so
KSIgPj5jb21wYXQvYXJjaC14ODZfMzIuaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8qICRpZCAq
LyIgPj5jb21wYXQvYXJjaC14ODZfMzIuaC5uZXcKbXYgLWYgY29tcGF0L2FyY2gteDg2XzMy
LmgubmV3IGNvbXBhdC9hcmNoLXg4Nl8zMi5oCmV4cG9ydCBQWVRIT049L3Vzci9wa2cvYmlu
L3B5dGhvbjIuNzsgXApncmVwIC12ICdeWwkgXSojJyB4bGF0LmxzdCB8IFwKd2hpbGUgcmVh
ZCB3aGF0IG5hbWUgaGRyOyBkbyBcCgkvYmluL3NoIC9yb290L3hlbi00LjIuMC94ZW4vdG9v
bHMvZ2V0LWZpZWxkcy5zaCAiJHdoYXQiIGNvbXBhdF8kbmFtZSAkKGVjaG8gY29tcGF0LyRo
ZHIgfCBzZWQgJ3MsQGFyY2hALHg4Nl8zMixnJykgfHwgZXhpdCAkPzsgXApkb25lID5jb21w
YXQveGxhdC5oLm5ldwptdiAtZiBjb21wYXQveGxhdC5oLm5ldyBjb21wYXQveGxhdC5oCmZv
ciBpIGluIHB1YmxpYy90cmFjZS5oIHB1YmxpYy9lbGZub3RlLmggcHVibGljL3RtZW0uaCBw
dWJsaWMvcGxhdGZvcm0uaCBwdWJsaWMvcGh5c2Rldi5oIHB1YmxpYy94ZW4tY29tcGF0Lmgg
cHVibGljL2dyYW50X3RhYmxlLmggcHVibGljL2NhbGxiYWNrLmggcHVibGljL3NjaGVkLmgg
cHVibGljL21lbW9yeS5oIHB1YmxpYy9mZWF0dXJlcy5oIHB1YmxpYy94ZW4uaCBwdWJsaWMv
ZG9tMF9vcHMuaCBwdWJsaWMvbWVtX2V2ZW50LmggcHVibGljL3ZlcnNpb24uaCBwdWJsaWMv
ZXZlbnRfY2hhbm5lbC5oIHB1YmxpYy94ZW5vcHJvZi5oIHB1YmxpYy94ZW5jb21tLmggcHVi
bGljL25taS5oIHB1YmxpYy9rZXhlYy5oIHB1YmxpYy92Y3B1LmggcHVibGljL2lvL3hlbmJ1
cy5oIHB1YmxpYy9pby9saWJ4ZW52Y2hhbi5oIHB1YmxpYy9pby90cG1pZi5oIHB1YmxpYy9p
by9wY2lpZi5oIHB1YmxpYy9pby91c2JpZi5oIHB1YmxpYy9pby9uZXRpZi5oIHB1YmxpYy9p
by9mYmlmLmggcHVibGljL2lvL2ZzaWYuaCBwdWJsaWMvaW8vYmxraWYuaCBwdWJsaWMvaW8v
Y29uc29sZS5oIHB1YmxpYy9pby9yaW5nLmggcHVibGljL2lvL3Byb3RvY29scy5oIHB1Ymxp
Yy9pby9rYmRpZi5oIHB1YmxpYy9pby94c193aXJlLmggcHVibGljL2lvL3ZzY3NpaWYuaCBw
dWJsaWMvaHZtL3BhcmFtcy5oIHB1YmxpYy9odm0vaHZtX2luZm9fdGFibGUuaCBwdWJsaWMv
aHZtL2lvcmVxLmggcHVibGljL2h2bS9odm1fb3AuaCBwdWJsaWMvaHZtL2U4MjAuaDsgZG8g
Z2NjIC1hbnNpIC1pbmNsdWRlIHN0ZGludC5oIC1XYWxsIC1XIC1XZXJyb3IgLVMgLW8gL2Rl
di9udWxsIC14YyAkaSB8fCBleGl0IDE7IGVjaG8gJGk7IGRvbmUgPmhlYWRlcnMuY2hrLm5l
dwptdiBoZWFkZXJzLmNoay5uZXcgaGVhZGVycy5jaGsKcm0gY29tcGF0L3hlbi5jIGNvbXBh
dC9rZXhlYy5pIGNvbXBhdC9hcmNoLXg4Nl8zMi5jIGNvbXBhdC9hcmNoLXg4Ni94ZW4teDg2
XzMyLmMgY29tcGF0L21lbW9yeS5jIGNvbXBhdC9zY2hlZC5jIGNvbXBhdC92Y3B1LmMgY29t
cGF0L3hlbi5pIGNvbXBhdC9waHlzZGV2LmkgY29tcGF0L3RtZW0uaSBjb21wYXQvdHJhY2Uu
aSBjb21wYXQvZmVhdHVyZXMuaSBjb21wYXQvY2FsbGJhY2suYyBjb21wYXQveGVuY29tbS5p
IGNvbXBhdC9hcmNoLXg4Ni94ZW4uaSBjb21wYXQvZWxmbm90ZS5jIGNvbXBhdC9hcmNoLXg4
Ni94ZW4tbWNhLmkgY29tcGF0L3ZlcnNpb24uaSBjb21wYXQvZXZlbnRfY2hhbm5lbC5pIGNv
bXBhdC9wbGF0Zm9ybS5pIGNvbXBhdC9rZXhlYy5jIGNvbXBhdC90bWVtLmMgY29tcGF0L25t
aS5pIGNvbXBhdC9lbGZub3RlLmkgY29tcGF0L3BoeXNkZXYuYyBjb21wYXQvdmNwdS5pIGNv
bXBhdC90cmFjZS5jIGNvbXBhdC9mZWF0dXJlcy5jIGNvbXBhdC9ldmVudF9jaGFubmVsLmMg
Y29tcGF0L2dyYW50X3RhYmxlLmkgY29tcGF0L3hlbmNvbW0uYyBjb21wYXQvYXJjaC14ODYv
eGVuLmMgY29tcGF0L2FyY2gteDg2L3hlbi1tY2EuYyBjb21wYXQvdmVyc2lvbi5jIGNvbXBh
dC9hcmNoLXg4Nl8zMi5pIGNvbXBhdC9wbGF0Zm9ybS5jIGNvbXBhdC9tZW1vcnkuaSBjb21w
YXQvc2NoZWQuaSBjb21wYXQvbm1pLmMgY29tcGF0L2NhbGxiYWNrLmkgY29tcGF0L3hlbm9w
cm9mLmMgY29tcGF0L3hlbm9wcm9mLmkgY29tcGF0L2dyYW50X3RhYmxlLmMgY29tcGF0L2Fy
Y2gteDg2L3hlbi14ODZfMzIuaQpnbWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZScKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9S
dWxlcy5tayAtQyBhcmNoL3g4NiBhc20tb2Zmc2V0cy5zCmdtYWtlWzNdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5hc20tb2Zmc2V0cy5zLmQgLVMg
LW8gYXNtLW9mZnNldHMucyB4ODZfNjQvYXNtLW9mZnNldHMuYwpnbWFrZVszXTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmdtYWtlIC1mIC9y
b290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgaW5jbHVkZS9hc20teDg2L2FzbS1vZmZzZXRz
LmgKZ21ha2VbM106IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbicK
Z21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuJwpnbWFr
ZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIGFyY2gveDg2IC9yb290L3hl
bi00LjIuMC94ZW4veGVuCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC94ZW4vYXJjaC94ODYnCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVs
ZXMubWsgLUMgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9ib290IGJ1aWx0X2luLm8K
Z21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4Ni9ib290JwpnbWFrZSAtZiBidWlsZDMyLm1rIHJlbG9jLlMKZ21ha2VbNV06IEVudGVy
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9ib290JwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0
aW9ucyAtV2Vycm9yIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgIC1jIC1mcGljIHJlbG9j
LmMgLW8gcmVsb2MubwpsZCAtbWVsZl9pMzg2IC1OIC1UdGV4dCAwIC1vIHJlbG9jLmxuayBy
ZWxvYy5vCm9iamNvcHkgLU8gYmluYXJ5IHJlbG9jLmxuayByZWxvYy5iaW4KKG9kIC12IC10
IHggcmVsb2MuYmluIHwgdHIgLXMgJyAnIHwgYXdrICdOUiA+IDEge3ByaW50IHN9IHtzPSQw
fScgfCBcCnNlZCAncy8gLywweC9nJyB8IHNlZCAncy8sMHgkLy8nIHwgc2VkICdzL15bMC05
XSosLyAubG9uZyAvJykgPnJlbG9jLlMKZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2Jvb3QnCmdjYyAtRF9fQVNTRU1CTFlfXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVC
VUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHBy
ZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5oZWFkLm8uZCAtYyBoZWFkLlMgLW8gaGVh
ZC5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBoZWFkLm8KZ21ha2Vb
NF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2Jv
b3QnCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni9lZmkgYnVpbHRfaW4ubwpnbWFrZVs0XTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2VmaScKZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN0dWIuby5kIC1m
c2hvcnQtd2NoYXIgLWMgc3R1Yi5jIC1vIHN0dWIubwpsZCAgICAtbWVsZl94ODZfNjQgIC1y
IC1vIGJ1aWx0X2luLm8gc3R1Yi5vCmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9lZmknCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24gYnVpbHRfaW4u
bwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2Nv
bW1vbicKZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLmJpdG1hcC5vLmQgLWMgYml0bWFwLmMgLW8gYml0bWFwLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNvcmVfcGFya2luZy5vLmQg
LWMgY29yZV9wYXJraW5nLmMgLW8gY29yZV9wYXJraW5nLm8KZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNwdS5vLmQgLWMgY3B1LmMgLW8g
Y3B1Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLmNwdXBvb2wuby5kIC1jIGNwdXBvb2wuYyAtbyBjcHVwb29sLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmRvbWN0bC5vLmQgLWMg
ZG9tY3RsLmMgLW8gZG9tY3RsLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLmRvbWFpbi5vLmQgLWMgZG9tYWluLmMgLW8gZG9tYWluLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmV2
ZW50X2NoYW5uZWwuby5kIC1jIGV2ZW50X2NoYW5uZWwuYyAtbyBldmVudF9jaGFubmVsLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmdy
YW50X3RhYmxlLm8uZCAtYyBncmFudF90YWJsZS5jIC1vIGdyYW50X3RhYmxlLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmlycS5vLmQg
LWMgaXJxLmMgLW8gaXJxLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRp
biAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAt
V2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdl
bmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1
bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAt
V25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3lu
Y2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUg
LW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RI
Uk9VR0ggLU1NRCAtTUYgLmtlcm5lbC5vLmQgLWMga2VybmVsLmMgLW8ga2VybmVsLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmtleWhh
bmRsZXIuby5kIC1jIGtleWhhbmRsZXIuYyAtbyBrZXloYW5kbGVyLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmtleGVjLm8uZCAtYyBr
ZXhlYy5jIC1vIGtleGVjLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRp
biAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAt
V2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdl
bmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1
bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAt
V25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3lu
Y2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUg
LW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RI
Uk9VR0ggLU1NRCAtTUYgLmxpYi5vLmQgLWMgbGliLmMgLW8gbGliLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1lbW9yeS5vLmQgLWMg
bWVtb3J5LmMgLW8gbWVtb3J5Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLm11bHRpY2FsbC5vLmQgLWMgbXVsdGljYWxsLmMgLW8gbXVs
dGljYWxsLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLm5vdGlmaWVyLm8uZCAtYyBub3RpZmllci5jIC1vIG5vdGlmaWVyLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBhZ2VfYWxs
b2Muby5kIC1jIHBhZ2VfYWxsb2MuYyAtbyBwYWdlX2FsbG9jLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnByZWVtcHQuby5kIC1jIHBy
ZWVtcHQuYyAtbyBwcmVlbXB0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnJhbmdlc2V0Lm8uZCAtYyByYW5nZXNldC5jIC1vIHJhbmdl
c2V0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLnNjaGVkX2NyZWRpdC5vLmQgLWMgc2NoZWRfY3JlZGl0LmMgLW8gc2NoZWRfY3JlZGl0
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNjaGVkX2NyZWRpdDIuby5kIC1jIHNjaGVkX2NyZWRpdDIuYyAtbyBzY2hlZF9jcmVkaXQy
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNjaGVkX3NlZGYuby5kIC1jIHNjaGVkX3NlZGYuYyAtbyBzY2hlZF9zZWRmLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNjaGVkX2Fy
aW5jNjUzLm8uZCAtYyBzY2hlZF9hcmluYzY1My5jIC1vIHNjaGVkX2FyaW5jNjUzLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNjaGVk
dWxlLm8uZCAtYyBzY2hlZHVsZS5jIC1vIHNjaGVkdWxlLm8KZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNodXRkb3duLm8uZCAtYyBzaHV0
ZG93bi5jIC1vIHNodXRkb3duLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnNvZnRpcnEuby5kIC1jIHNvZnRpcnEuYyAtbyBzb2Z0aXJx
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNvcnQuby5kIC1jIHNvcnQuYyAtbyBzb3J0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNwaW5sb2NrLm8uZCAtYyBzcGlubG9jay5j
IC1vIHNwaW5sb2NrLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnN0b3BfbWFjaGluZS5vLmQgLWMgc3RvcF9tYWNoaW5lLmMgLW8gc3Rv
cF9tYWNoaW5lLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnN0cmluZy5vLmQgLWMgc3RyaW5nLmMgLW8gc3RyaW5nLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN5bWJvbHMuby5k
IC1jIHN5bWJvbHMuYyAtbyBzeW1ib2xzLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN5c2N0bC5vLmQgLWMgc3lzY3RsLmMgLW8gc3lz
Y3RsLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLnRhc2tsZXQuby5kIC1jIHRhc2tsZXQuYyAtbyB0YXNrbGV0Lm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRpbWUuby5kIC1jIHRp
bWUuYyAtbyB0aW1lLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnRpbWVyLm8uZCAtYyB0aW1lci5jIC1vIHRpbWVyLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRyYWNlLm8uZCAt
YyB0cmFjZS5jIC1vIHRyYWNlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnZlcnNpb24uby5kIC1jIHZlcnNpb24uYyAtbyB2ZXJzaW9u
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnZzcHJpbnRmLm8uZCAtYyB2c3ByaW50Zi5jIC1vIHZzcHJpbnRmLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLndhaXQuby5kIC1jIHdh
aXQuYyAtbyB3YWl0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnhtYWxsb2NfdGxzZi5vLmQgLWMgeG1hbGxvY190bHNmLmMgLW8geG1h
bGxvY190bHNmLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnJjdXBkYXRlLm8uZCAtYyByY3VwZGF0ZS5jIC1vIHJjdXBkYXRlLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRtZW0u
by5kIC1jIHRtZW0uYyAtbyB0bWVtLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8t
YnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5j
bHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0
aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZu
by1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRS
SUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNf
UEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRtZW1feGVuLm8uZCAtYyB0bWVtX3hlbi5jIC1vIHRt
ZW1feGVuLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLnJhZGl4LXRyZWUuby5kIC1jIHJhZGl4LXRyZWUuYyAtbyByYWRpeC10cmVlLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnJi
dHJlZS5vLmQgLWMgcmJ0cmVlLmMgLW8gcmJ0cmVlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmx6by5vLmQgLWMgbHpvLmMgLW8gbHpv
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lnhlbm9wcm9mLm8uZCAtYyB4ZW5vcHJvZi5jIC1vIHhlbm9wcm9mLm8KZ21ha2UgLWYgL3Jv
b3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBjb21wYXQgYnVpbHRfaW4ubwpnbWFrZVs1
XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9jb21w
YXQnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5kb21haW4uby5kIC1jIGRvbWFpbi5jIC1vIGRvbWFpbi5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5rZXJuZWwuby5kIC1jIGtlcm5l
bC5jIC1vIGtlcm5lbC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC5tZW1vcnkuby5kIC1jIG1lbW9yeS5jIC1vIG1lbW9yeS5vCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tdWx0aWNh
bGwuby5kIC1jIG11bHRpY2FsbC5jIC1vIG11bHRpY2FsbC5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC54bGF0Lm8uZCAtYyB4bGF0LmMg
LW8geGxhdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC50bWVtX3hlbi5vLmQgLWMgdG1lbV94ZW4uYyAtbyB0bWVtX3hlbi5vCmxkICAg
IC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBkb21haW4ubyBrZXJuZWwubyBtZW1v
cnkubyBtdWx0aWNhbGwubyB4bGF0Lm8gdG1lbV94ZW4ubwpnbWFrZVs1XTogTGVhdmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vY29tbW9uL2NvbXBhdCcKZ21ha2UgLWYg
L3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBodm0gYnVpbHRfaW4ubwpnbWFrZVs1
XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9odm0n
CmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdy
ZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50
ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAt
Zm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAt
bW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10
YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9f
WEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcu
aCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5z
YXZlLm8uZCAtYyBzYXZlLmMgLW8gc2F2ZS5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8g
YnVpbHRfaW4ubyBzYXZlLm8KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAveGVuL2NvbW1vbi9odm0nCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4v
UnVsZXMubWsgLUMgbGliZWxmIGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRpcmVj
dG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24vbGliZWxmJwpnY2MgLU8yIC1mb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xz
IC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBl
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90
ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAt
bW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hB
U19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRl
IC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAt
REhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliZWxmLXRvb2xzLm8u
ZCAtYyBsaWJlbGYtdG9vbHMuYyAtbyBsaWJlbGYtdG9vbHMubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliZWxmLWxvYWRlci5vLmQg
LWMgbGliZWxmLWxvYWRlci5jIC1vIGxpYmVsZi1sb2FkZXIubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliZWxmLWRvbWluZm8uby5k
IC1jIGxpYmVsZi1kb21pbmZvLmMgLW8gbGliZWxmLWRvbWluZm8ubwpsZCAgICAtbWVsZl94
ODZfNjQgIC1yIC1vIGxpYmVsZi10ZW1wLm8gbGliZWxmLXRvb2xzLm8gbGliZWxmLWxvYWRl
ci5vIGxpYmVsZi1kb21pbmZvLm8Kb2JqY29weSAtLXJlbmFtZS1zZWN0aW9uIC50ZXh0PS5p
bml0LnRleHQgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YT0uaW5pdC5kYXRhIGxpYmVsZi10ZW1w
Lm8gbGliZWxmLm8KbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGxpYmVs
Zi5vCmdtYWtlWzVdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9j
b21tb24vbGliZWxmJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuZGVjb21wcmVzcy5vLmQgLURJTklUX1NFQ1RJT05TX09OTFkgLWMgZGVj
b21wcmVzcy5jIC1vIGRlY29tcHJlc3MubwpvYmpkdW1wIC1oIGRlY29tcHJlc3MubyB8IHNl
ZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiBy
ZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0
YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJy
b3I6IHNpemUgb2YgZGVjb21wcmVzcy5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0
ICQoZXhwciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tz
LDAwKiwwLGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5k
Cm9iamNvcHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFt
ZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUt
c2VjdGlvbiAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNl
Y3Rpb24gLnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0
aW9uIC5yb2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlv
biAuZGF0YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwu
bG9jYWw9LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwu
cm89LmluaXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9j
YWw9LmluaXQuZGF0YS5yZWwucm8ubG9jYWwgZGVjb21wcmVzcy5vIGRlY29tcHJlc3MuaW5p
dC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5idW56aXAyLm8uZCAtRElOSVRfU0VDVElPTlNfT05MWSAtYyBidW56aXAyLmMgLW8gYnVu
emlwMi5vCm9iamR1bXAgLWggYnVuemlwMi5vIHwgc2VkIC1uICcvWzAtOV0ve3MsMDAqLDAs
ZztwfScgfCB3aGlsZSByZWFkIGlkeCBuYW1lIHN6IHJlc3Q7IGRvIFwKCWNhc2UgIiRuYW1l
IiBpbiBcCgkudGV4dHwudGV4dC4qfC5kYXRhfC5kYXRhLip8LmJzcykgXAoJCXRlc3QgJHN6
ICE9IDAgfHwgY29udGludWU7IFwKCQllY2hvICJFcnJvcjogc2l6ZSBvZiBidW56aXAyLm86
JG5hbWUgaXMgMHgkc3oiID4mMjsgXAoJCWV4aXQgJChleHByICRpZHggKyAxKTs7IFwKCWVz
YWM7IFwKZG9uZQpzZWQ6IDE6ICIvWzAtOV0ve3MsMDAqLDAsZztwfSI6IGV4dHJhIGNoYXJh
Y3RlcnMgYXQgdGhlIGVuZCBvZiBwIGNvbW1hbmQKb2JqY29weSAtLXJlbmFtZS1zZWN0aW9u
IC5yb2RhdGE9LmluaXQucm9kYXRhIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjE9
LmluaXQucm9kYXRhLnN0cjEuMSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4yPS5p
bml0LnJvZGF0YS5zdHIxLjIgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuND0uaW5p
dC5yb2RhdGEuc3RyMS40IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjg9LmluaXQu
cm9kYXRhLnN0cjEuOCAtLXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbD0uaW5pdC5kYXRhLnJl
bCAtLXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbC5sb2NhbD0uaW5pdC5kYXRhLnJlbC5sb2Nh
bCAtLXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbC5ybz0uaW5pdC5kYXRhLnJlbC5ybyAtLXJl
bmFtZS1zZWN0aW9uIC5kYXRhLnJlbC5yby5sb2NhbD0uaW5pdC5kYXRhLnJlbC5yby5sb2Nh
bCBidW56aXAyLm8gYnVuemlwMi5pbml0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnVueHouby5kIC1ESU5JVF9TRUNUSU9OU19PTkxZ
IC1jIHVueHouYyAtbyB1bnh6Lm8Kb2JqZHVtcCAtaCB1bnh6Lm8gfCBzZWQgLW4gJy9bMC05
XS97cywwMCosMCxnO3B9JyB8IHdoaWxlIHJlYWQgaWR4IG5hbWUgc3ogcmVzdDsgZG8gXAoJ
Y2FzZSAiJG5hbWUiIGluIFwKCS50ZXh0fC50ZXh0Lip8LmRhdGF8LmRhdGEuKnwuYnNzKSBc
CgkJdGVzdCAkc3ogIT0gMCB8fCBjb250aW51ZTsgXAoJCWVjaG8gIkVycm9yOiBzaXplIG9m
IHVueHoubzokbmFtZSBpcyAweCRzeiIgPiYyOyBcCgkJZXhpdCAkKGV4cHIgJGlkeCArIDEp
OzsgXAoJZXNhYzsgXApkb25lCnNlZDogMTogIi9bMC05XS97cywwMCosMCxnO3B9IjogZXh0
cmEgY2hhcmFjdGVycyBhdCB0aGUgZW5kIG9mIHAgY29tbWFuZApvYmpjb3B5IC0tcmVuYW1l
LXNlY3Rpb24gLnJvZGF0YT0uaW5pdC5yb2RhdGEgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRh
LnN0cjEuMT0uaW5pdC5yb2RhdGEuc3RyMS4xIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5z
dHIxLjI9LmluaXQucm9kYXRhLnN0cjEuMiAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3Ry
MS40PS5pbml0LnJvZGF0YS5zdHIxLjQgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEu
OD0uaW5pdC5yb2RhdGEuc3RyMS44IC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsPS5pbml0
LmRhdGEucmVsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLmxvY2FsPS5pbml0LmRhdGEu
cmVsLmxvY2FsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvPS5pbml0LmRhdGEucmVs
LnJvIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvLmxvY2FsPS5pbml0LmRhdGEucmVs
LnJvLmxvY2FsIHVueHoubyB1bnh6LmluaXQubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudW5sem1hLm8uZCAtRElOSVRfU0VDVElPTlNf
T05MWSAtYyB1bmx6bWEuYyAtbyB1bmx6bWEubwpvYmpkdW1wIC1oIHVubHptYS5vIHwgc2Vk
IC1uICcvWzAtOV0ve3MsMDAqLDAsZztwfScgfCB3aGlsZSByZWFkIGlkeCBuYW1lIHN6IHJl
c3Q7IGRvIFwKCWNhc2UgIiRuYW1lIiBpbiBcCgkudGV4dHwudGV4dC4qfC5kYXRhfC5kYXRh
Lip8LmJzcykgXAoJCXRlc3QgJHN6ICE9IDAgfHwgY29udGludWU7IFwKCQllY2hvICJFcnJv
cjogc2l6ZSBvZiB1bmx6bWEubzokbmFtZSBpcyAweCRzeiIgPiYyOyBcCgkJZXhpdCAkKGV4
cHIgJGlkeCArIDEpOzsgXAoJZXNhYzsgXApkb25lCnNlZDogMTogIi9bMC05XS97cywwMCos
MCxnO3B9IjogZXh0cmEgY2hhcmFjdGVycyBhdCB0aGUgZW5kIG9mIHAgY29tbWFuZApvYmpj
b3B5IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YT0uaW5pdC5yb2RhdGEgLS1yZW5hbWUtc2Vj
dGlvbiAucm9kYXRhLnN0cjEuMT0uaW5pdC5yb2RhdGEuc3RyMS4xIC0tcmVuYW1lLXNlY3Rp
b24gLnJvZGF0YS5zdHIxLjI9LmluaXQucm9kYXRhLnN0cjEuMiAtLXJlbmFtZS1zZWN0aW9u
IC5yb2RhdGEuc3RyMS40PS5pbml0LnJvZGF0YS5zdHIxLjQgLS1yZW5hbWUtc2VjdGlvbiAu
cm9kYXRhLnN0cjEuOD0uaW5pdC5yb2RhdGEuc3RyMS44IC0tcmVuYW1lLXNlY3Rpb24gLmRh
dGEucmVsPS5pbml0LmRhdGEucmVsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLmxvY2Fs
PS5pbml0LmRhdGEucmVsLmxvY2FsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvPS5p
bml0LmRhdGEucmVsLnJvIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvLmxvY2FsPS5p
bml0LmRhdGEucmVsLnJvLmxvY2FsIHVubHptYS5vIHVubHptYS5pbml0Lm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnVubHpvLm8uZCAt
RElOSVRfU0VDVElPTlNfT05MWSAtYyB1bmx6by5jIC1vIHVubHpvLm8Kb2JqZHVtcCAtaCB1
bmx6by5vIHwgc2VkIC1uICcvWzAtOV0ve3MsMDAqLDAsZztwfScgfCB3aGlsZSByZWFkIGlk
eCBuYW1lIHN6IHJlc3Q7IGRvIFwKCWNhc2UgIiRuYW1lIiBpbiBcCgkudGV4dHwudGV4dC4q
fC5kYXRhfC5kYXRhLip8LmJzcykgXAoJCXRlc3QgJHN6ICE9IDAgfHwgY29udGludWU7IFwK
CQllY2hvICJFcnJvcjogc2l6ZSBvZiB1bmx6by5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwK
CQlleGl0ICQoZXhwciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1sw
LTldL3tzLDAwKiwwLGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBj
b21tYW5kCm9iamNvcHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAt
LXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1y
ZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVu
YW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFt
ZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUt
c2VjdGlvbiAuZGF0YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0
YS5yZWwubG9jYWw9LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0
YS5yZWwucm89LmluaXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwu
cm8ubG9jYWw9LmluaXQuZGF0YS5yZWwucm8ubG9jYWwgdW5sem8ubyB1bmx6by5pbml0Lm8K
bGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGJpdG1hcC5vIGNvcmVfcGFy
a2luZy5vIGNwdS5vIGNwdXBvb2wubyBkb21jdGwubyBkb21haW4ubyBldmVudF9jaGFubmVs
Lm8gZ3JhbnRfdGFibGUubyBpcnEubyBrZXJuZWwubyBrZXloYW5kbGVyLm8ga2V4ZWMubyBs
aWIubyBtZW1vcnkubyBtdWx0aWNhbGwubyBub3RpZmllci5vIHBhZ2VfYWxsb2MubyBwcmVl
bXB0Lm8gcmFuZ2VzZXQubyBzY2hlZF9jcmVkaXQubyBzY2hlZF9jcmVkaXQyLm8gc2NoZWRf
c2VkZi5vIHNjaGVkX2FyaW5jNjUzLm8gc2NoZWR1bGUubyBzaHV0ZG93bi5vIHNvZnRpcnEu
byBzb3J0Lm8gc3BpbmxvY2subyBzdG9wX21hY2hpbmUubyBzdHJpbmcubyBzeW1ib2xzLm8g
c3lzY3RsLm8gdGFza2xldC5vIHRpbWUubyB0aW1lci5vIHRyYWNlLm8gdmVyc2lvbi5vIHZz
cHJpbnRmLm8gd2FpdC5vIHhtYWxsb2NfdGxzZi5vIHJjdXBkYXRlLm8gdG1lbS5vIHRtZW1f
eGVuLm8gcmFkaXgtdHJlZS5vIHJidHJlZS5vIGx6by5vIHhlbm9wcm9mLm8gY29tcGF0L2J1
aWx0X2luLm8gaHZtL2J1aWx0X2luLm8gbGliZWxmL2J1aWx0X2luLm8gZGVjb21wcmVzcy5p
bml0Lm8gYnVuemlwMi5pbml0Lm8gdW54ei5pbml0Lm8gdW5sem1hLmluaXQubyB1bmx6by5p
bml0Lm8KZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVu
L2NvbW1vbicKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyAvcm9v
dC94ZW4tNC4yLjAveGVuL2RyaXZlcnMgYnVpbHRfaW4ubwpnbWFrZVs0XTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMnCmdtYWtlIC1mIC9yb290
L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgY2hhciBidWlsdF9pbi5vCmdtYWtlWzVdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9jaGFyJwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuY29u
c29sZS5vLmQgLWMgY29uc29sZS5jIC1vIGNvbnNvbGUubwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubnMxNjU1MC5vLmQgLWMgbnMxNjU1
MC5jIC1vIG5zMTY1NTAubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGlu
IC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1X
ZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2Vu
ZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVs
dCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1X
bmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5j
aHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAt
bm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhS
T1VHSCAtTU1EIC1NRiAuc2VyaWFsLm8uZCAtYyBzZXJpYWwuYyAtbyBzZXJpYWwubwpsZCAg
ICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gY29uc29sZS5vIG5zMTY1NTAubyBz
ZXJpYWwubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94
ZW4vZHJpdmVycy9jaGFyJwpnbWFrZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1r
IC1DIGNwdWZyZXEgYnVpbHRfaW4ubwpnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvY3B1ZnJlcScKZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNwdWZyZXEuby5kIC1jIGNwdWZy
ZXEuYyAtbyBjcHVmcmVxLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRp
biAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAt
V2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdl
bmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1
bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAt
V25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3lu
Y2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUg
LW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RI
Uk9VR0ggLU1NRCAtTUYgLmNwdWZyZXFfb25kZW1hbmQuby5kIC1jIGNwdWZyZXFfb25kZW1h
bmQuYyAtbyBjcHVmcmVxX29uZGVtYW5kLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNwdWZyZXFfbWlzY19nb3Zlcm5vcnMuby5kIC1j
IGNwdWZyZXFfbWlzY19nb3Zlcm5vcnMuYyAtbyBjcHVmcmVxX21pc2NfZ292ZXJub3JzLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnV0
aWxpdHkuby5kIC1jIHV0aWxpdHkuYyAtbyB1dGlsaXR5Lm8KbGQgICAgLW1lbGZfeDg2XzY0
ICAtciAtbyBidWlsdF9pbi5vIGNwdWZyZXEubyBjcHVmcmVxX29uZGVtYW5kLm8gY3B1ZnJl
cV9taXNjX2dvdmVybm9ycy5vIHV0aWxpdHkubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9jcHVmcmVxJwpnbWFrZSAtZiAvcm9v
dC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIHBjaSBidWlsdF9pbi5vCmdtYWtlWzVdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9wY2knCmdj
YyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1
bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXIt
YXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wY2ku
by5kIC1jIHBjaS5jIC1vIHBjaS5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRf
aW4ubyBwY2kubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC94ZW4vZHJpdmVycy9wY2knCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMu
bWsgLUMgcGFzc3Rocm91Z2ggYnVpbHRfaW4ubwpnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gnCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdS5vLmQg
LWMgaW9tbXUuYyAtbyBpb21tdS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5pby5vLmQgLWMgaW8uYyAtbyBpby5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wY2kuby5kIC1jIHBj
aS5jIC1vIHBjaS5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMg
dnRkIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZCcKZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmlvbW11Lm8uZCAtYyBpb21tdS5j
IC1vIGlvbW11Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLmRtYXIuby5kIC1jIGRtYXIuYyAtbyBkbWFyLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnV0aWxzLm8uZCAtYyB1dGls
cy5jIC1vIHV0aWxzLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnFpbnZhbC5vLmQgLWMgcWludmFsLmMgLW8gcWludmFsLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmludHJlbWFw
Lm8uZCAtYyBpbnRyZW1hcC5jIC1vIGludHJlbWFwLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnF1aXJrcy5vLmQgLWMgcXVpcmtzLmMg
LW8gcXVpcmtzLm8KZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyB4
ODYgYnVpbHRfaW4ubwpnbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4NicKZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnZ0ZC5vLmQgLWMgdnRkLmMg
LW8gdnRkLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLmF0cy5vLmQgLWMgYXRzLmMgLW8gYXRzLm8KbGQgICAgLW1lbGZfeDg2XzY0ICAt
ciAtbyBidWlsdF9pbi5vIHZ0ZC5vIGF0cy5vCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9y
eSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC94ODYnCmxk
ICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBpb21tdS5vIGRtYXIubyB1dGls
cy5vIHFpbnZhbC5vIGludHJlbWFwLm8gcXVpcmtzLm8geDg2L2J1aWx0X2luLm8KZ21ha2Vb
Nl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvdnRkJwpnbWFrZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1D
IGFtZCBidWlsdF9pbi5vCmdtYWtlWzZdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQnCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9pbml0Lm8uZCAtYyBp
b21tdV9pbml0LmMgLW8gaW9tbXVfaW5pdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9tYXAuby5kIC1jIGlvbW11X21hcC5j
IC1vIGlvbW11X21hcC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC5wY2lfYW1kX2lvbW11Lm8uZCAtYyBwY2lfYW1kX2lvbW11LmMgLW8g
cGNpX2FtZF9pb21tdS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC5pb21tdV9pbnRyLm8uZCAtYyBpb21tdV9pbnRyLmMgLW8gaW9tbXVf
aW50ci5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21t
b24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25v
LXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1m
bG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0
ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAt
ZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9j
b25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQg
LU1GIC5pb21tdV9jbWQuby5kIC1jIGlvbW11X2NtZC5jIC1vIGlvbW11X2NtZC5vCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9n
dWVzdC5vLmQgLWMgaW9tbXVfZ3Vlc3QuYyAtbyBpb21tdV9ndWVzdC5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9kZXRlY3Qu
by5kIC1ESU5JVF9TRUNUSU9OU19PTkxZIC1jIGlvbW11X2RldGVjdC5jIC1vIGlvbW11X2Rl
dGVjdC5vCm9iamR1bXAgLWggaW9tbXVfZGV0ZWN0Lm8gfCBzZWQgLW4gJy9bMC05XS97cyww
MCosMCxnO3B9JyB8IHdoaWxlIHJlYWQgaWR4IG5hbWUgc3ogcmVzdDsgZG8gXAoJY2FzZSAi
JG5hbWUiIGluIFwKCS50ZXh0fC50ZXh0Lip8LmRhdGF8LmRhdGEuKnwuYnNzKSBcCgkJdGVz
dCAkc3ogIT0gMCB8fCBjb250aW51ZTsgXAoJCWVjaG8gIkVycm9yOiBzaXplIG9mIGlvbW11
X2RldGVjdC5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQoZXhwciAkaWR4ICsg
MSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAwKiwwLGc7cH0iOiBl
eHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9iamNvcHkgLS1yZW5h
bWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1zZWN0aW9uIC5yb2Rh
dGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRh
LnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5z
dHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3Ry
MS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWw9Lmlu
aXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9jYWw9LmluaXQuZGF0
YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89LmluaXQuZGF0YS5y
ZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9LmluaXQuZGF0YS5y
ZWwucm8ubG9jYWwgaW9tbXVfZGV0ZWN0Lm8gaW9tbXVfZGV0ZWN0LmluaXQubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuaW9tbXVfYWNw
aS5vLmQgLURJTklUX1NFQ1RJT05TX09OTFkgLWMgaW9tbXVfYWNwaS5jIC1vIGlvbW11X2Fj
cGkubwpvYmpkdW1wIC1oIGlvbW11X2FjcGkubyB8IHNlZCAtbiAnL1swLTldL3tzLDAwKiww
LGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiByZXN0OyBkbyBcCgljYXNlICIkbmFt
ZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0YS4qfC5ic3MpIFwKCQl0ZXN0ICRz
eiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJyb3I6IHNpemUgb2YgaW9tbXVfYWNw
aS5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQoZXhwciAkaWR4ICsgMSk7OyBc
Cgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAwKiwwLGc7cH0iOiBleHRyYSBj
aGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9iamNvcHkgLS1yZW5hbWUtc2Vj
dGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3Ry
MS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEu
Mj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjQ9
LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS44PS5p
bml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWw9LmluaXQuZGF0
YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9jYWw9LmluaXQuZGF0YS5yZWwu
bG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89LmluaXQuZGF0YS5yZWwucm8g
LS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9LmluaXQuZGF0YS5yZWwucm8u
bG9jYWwgaW9tbXVfYWNwaS5vIGlvbW11X2FjcGkuaW5pdC5vCmxkICAgIC1tZWxmX3g4Nl82
NCAgLXIgLW8gYnVpbHRfaW4ubyBpb21tdV9pbml0Lm8gaW9tbXVfbWFwLm8gcGNpX2FtZF9p
b21tdS5vIGlvbW11X2ludHIubyBpb21tdV9jbWQubyBpb21tdV9ndWVzdC5vIGlvbW11X2Rl
dGVjdC5pbml0Lm8gaW9tbXVfYWNwaS5pbml0Lm8KZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kJwpnbWFr
ZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIHg4NiBidWlsdF9pbi5vCmdt
YWtlWzZdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC94ODYnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0
aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUg
LVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NU
SFJPVUdIIC1NTUQgLU1GIC5hdHMuby5kIC1jIGF0cy5jIC1vIGF0cy5vCmxkICAgIC1tZWxm
X3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBhdHMubwpnbWFrZVs2XTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC94ODYnCmxk
ICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBpb21tdS5vIGlvLm8gcGNpLm8g
dnRkL2J1aWx0X2luLm8gYW1kL2J1aWx0X2luLm8geDg2L2J1aWx0X2luLm8KZ21ha2VbNV06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gnCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgYWNwaSBi
dWlsdF9pbi5vCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC94ZW4vZHJpdmVycy9hY3BpJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
Z2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVm
YXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25z
IC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFz
eW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVU
RSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTU1EIC1NRiAudGFibGVzLm8uZCAtYyB0YWJsZXMuYyAtbyB0YWJsZXMubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubnVt
YS5vLmQgLWMgbnVtYS5jIC1vIG51bWEubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAub3NsLm8uZCAtYyBvc2wuYyAtbyBvc2wubwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5k
YW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFy
aXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1z
dGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1y
ZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVz
IC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9f
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURI
QVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucG1zdGF0
Lm8uZCAtYyBwbXN0YXQuYyAtbyBwbXN0YXQubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .hwregs.o.d -c hwregs.c -o hwregs.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .reboot.o.d -c reboot.c -o reboot.o
gmake -f /root/xen-4.2.0/xen/Rules.mk -C tables built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/drivers/acpi/tables'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbutils.o.d -c tbutils.c -o tbutils.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbfadt.o.d -DINIT_SECTIONS_ONLY -c tbfadt.c -o tbfadt.o
objdump -h tbfadt.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbfadt.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbfadt.o tbfadt.init.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbinstal.o.d -DINIT_SECTIONS_ONLY -c tbinstal.c -o tbinstal.o
objdump -h tbinstal.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbinstal.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbinstal.o tbinstal.init.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbxface.o.d -DINIT_SECTIONS_ONLY -c tbxface.c -o tbxface.o
objdump -h tbxface.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbxface.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbxface.o tbxface.init.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbxfroot.o.d -DINIT_SECTIONS_ONLY -c tbxfroot.c -o tbxfroot.o
objdump -h tbxfroot.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbxfroot.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbxfroot.o tbxfroot.init.o
ld    -melf_x86_64  -r -o built_in.o tbutils.o tbfadt.init.o tbinstal.init.o tbxface.init.o tbxfroot.init.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi/tables'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C utilities built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/drivers/acpi/utilities'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .utglobal.o.d -c utglobal.c -o utglobal.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .utmisc.o.d -DINIT_SECTIONS_ONLY -c utmisc.c -o utmisc.o
objdump -h utmisc.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of utmisc.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local utmisc.o utmisc.init.o
ld    -melf_x86_64  -r -o built_in.o utglobal.o utmisc.init.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi/utilities'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C apei built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/drivers/acpi/apei'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .erst.o.d -c erst.c -o erst.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .apei-base.o.d -c apei-base.c -o apei-base.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .apei-io.o.d -c apei-io.c -o apei-io.o
ld    -melf_x86_64  -r -o built_in.o erst.o apei-base.o apei-io.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi/apei'
ld    -melf_x86_64  -r -o built_in.o tables.o numa.o osl.o pmstat.o hwregs.o reboot.o tables/built_in.o utilities/built_in.o apei/built_in.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C video built_in.o
gmake[5]: Entering directory `/root/xen-4.2.0/xen/drivers/video'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .vga.o.d -c vga.c -o vga.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .font_8x14.o.d -c font_8x14.c -o font_8x14.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .font_8x16.o.d -c font_8x16.c -o font_8x16.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .font_8x8.o.d -c font_8x8.c -o font_8x8.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .vesa.o.d -c vesa.c -o vesa.o
ld    -melf_x86_64  -r -o built_in.o vga.o font_8x14.o font_8x16.o font_8x8.o vesa.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/drivers/video'
ld    -melf_x86_64  -r -o built_in.o char/built_in.o cpufreq/built_in.o pci/built_in.o passthrough/built_in.o acpi/built_in.o video/built_in.o
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/drivers'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/xsm built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/xsm'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .xsm_core.o.d -c xsm_core.c -o xsm_core.o
ld    -melf_x86_64  -r -o built_in.o xsm_core.o
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/xsm'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/arch/x86 built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/arch/x86'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .apic.o.d -c apic.c -o apic.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .bitops.o.d -c bitops.c -o bitops.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .compat.o.d -c compat.c -o compat.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .debug.o.d -c debug.c -o debug.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .delay.o.d -c delay.c -o delay.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .domctl.o.d -c domctl.c -o domctl.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .domain.o.d -c domain.c -o domain.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .e820.o.d -c e820.c -o e820.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .extable.o.d -c extable.c -o extable.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .flushtlb.o.d -c flushtlb.c -o flushtlb.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .platform_hypercall.o.d -c platform_hypercall.c -o platform_hypercall.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .i387.o.d -c i387.c -o i387.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-sta
ck-protector -fno-exceptions -Wnested-externs -mno-red
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmk4MjU5Lm8u
ZCAtYyBpODI1OS5jIC1vIGk4MjU5Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8t
YnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5j
bHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0
aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZu
by1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRS
SUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNf
UEFTU1RIUk9VR0ggLU1NRCAtTUYgLmlvX2FwaWMuby5kIC1jIGlvX2FwaWMuYyAtbyBpb19h
cGljLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLm1zaS5vLmQgLWMgbXNpLmMgLW8gbXNpLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmlvcG9ydF9lbXVsYXRlLm8uZCAtYyBpb3Bv
cnRfZW11bGF0ZS5jIC1vIGlvcG9ydF9lbXVsYXRlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmlycS5vLmQgLWMgaXJxLmMgLW8gaXJx
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lm1pY3JvY29kZV9hbWQuby5kIC1jIG1pY3JvY29kZV9hbWQuYyAtbyBtaWNyb2NvZGVfYW1k
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lm1pY3JvY29kZV9pbnRlbC5vLmQgLWMgbWljcm9jb2RlX2ludGVsLmMgLW8gbWljcm9jb2Rl
X2ludGVsLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLm1pY3JvY29kZS5vLmQgLWMgbWljcm9jb2RlLmMgLW8gbWljcm9jb2RlLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1tLm8u
ZCAtYyBtbS5jIC1vIG1tLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRp
biAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAt
V2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdl
bmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1
bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAt
V25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3lu
Y2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUg
LW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RI
Uk9VR0ggLU1NRCAtTUYgLm1wcGFyc2Uuby5kIC1jIG1wcGFyc2UuYyAtbyBtcHBhcnNlLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm5t
aS5vLmQgLWMgbm1pLmMgLW8gbm1pLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8t
YnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5j
bHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0
aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZu
by1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRS
SUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNf
UEFTU1RIUk9VR0ggLU1NRCAtTUYgLm51bWEuby5kIC1jIG51bWEuYyAtbyBudW1hLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBjaS5v
LmQgLWMgcGNpLmMgLW8gcGNpLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnBlcmNwdS5vLmQgLWMgcGVyY3B1LmMgLW8gcGVyY3B1Lm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBo
eXNkZXYuby5kIC1jIHBoeXNkZXYuYyAtbyBwaHlzZGV2Lm8KZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNldHVwLm8uZCAtYyBzZXR1cC5j
IC1vIHNldHVwLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnNodXRkb3duLm8uZCAtYyBzaHV0ZG93bi5jIC1vIHNodXRkb3duLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNtcC5v
LmQgLWMgc21wLmMgLW8gc21wLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnNtcGJvb3Quby5kIC1jIHNtcGJvb3QuYyAtbyBzbXBib290
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNyYXQuby5kIC1jIHNyYXQuYyAtbyBzcmF0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN0cmluZy5vLmQgLWMgc3RyaW5nLmMgLW8g
c3RyaW5nLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLnN5c2N0bC5vLmQgLWMgc3lzY3RsLmMgLW8gc3lzY3RsLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRpbWUuby5kIC1jIHRp
bWUuYyAtbyB0aW1lLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnRyYWNlLm8uZCAtYyB0cmFjZS5jIC1vIHRyYWNlLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRyYXBzLm8uZCAt
YyB0cmFwcy5jIC1vIHRyYXBzLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnVzZXJjb3B5Lm8uZCAtYyB1c2VyY29weS5jIC1vIHVzZXJj
b3B5Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLng4Nl9lbXVsYXRlLm8uZCAtYyB4ODZfZW11bGF0ZS5jIC1vIHg4Nl9lbXVsYXRlLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1h
Y2hpbmVfa2V4ZWMuby5kIC1jIG1hY2hpbmVfa2V4ZWMuYyAtbyBtYWNoaW5lX2tleGVjLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNy
YXNoLm8uZCAtYyBjcmFzaC5jIC1vIGNyYXNoLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRib290Lm8uZCAtYyB0Ym9vdC5jIC1vIHRi
b290Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLmhwZXQuby5kIC1jIGhwZXQuYyAtbyBocGV0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnhzdGF0ZS5vLmQgLWMgeHN0YXRlLmMg
LW8geHN0YXRlLm8KZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBh
Y3BpIGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni9hY3BpJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliLm8uZCAtYyBsaWIuYyAtbyBsaWIubwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5k
YW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFy
aXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1z
dGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1y
ZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVz
IC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9f
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURI
QVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucG93ZXIu
by5kIC1jIHBvd2VyLmMgLW8gcG93ZXIubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAuc3VzcGVuZC5vLmQgLWMgc3VzcGVuZC5jIC1vIHN1
c3BlbmQubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAuY3B1X2lkbGUuby5kIC1jIGNwdV9pZGxlLmMgLW8gY3B1X2lkbGUubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuY3B1aWRsZV9t
ZW51Lm8uZCAtYyBjcHVpZGxlX21lbnUuYyAtbyBjcHVpZGxlX21lbnUubwpnbWFrZSAtZiAv
cm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIGNwdWZyZXEgYnVpbHRfaW4ubwpnbWFr
ZVs2XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2
L2FjcGkvY3B1ZnJlcScKZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLmNwdWZyZXEuby5kIC1jIGNwdWZyZXEuYyAtbyBjcHVmcmVxLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBvd2Vy
bm93Lm8uZCAtYyBwb3dlcm5vdy5jIC1vIHBvd2Vybm93Lm8KbGQgICAgLW1lbGZfeDg2XzY0
ICAtciAtbyBidWlsdF9pbi5vIGNwdWZyZXEubyBwb3dlcm5vdy5vCmdtYWtlWzZdOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9hY3BpL2NwdWZy
ZXEnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5ib290Lm8uZCAtRElOSVRfU0VDVElPTlNfT05MWSAtYyBib290LmMgLW8gYm9vdC5vCm9i
amR1bXAgLWggYm9vdC5vIHwgc2VkIC1uICcvWzAtOV0ve3MsMDAqLDAsZztwfScgfCB3aGls
ZSByZWFkIGlkeCBuYW1lIHN6IHJlc3Q7IGRvIFwKCWNhc2UgIiRuYW1lIiBpbiBcCgkudGV4
dHwudGV4dC4qfC5kYXRhfC5kYXRhLip8LmJzcykgXAoJCXRlc3QgJHN6ICE9IDAgfHwgY29u
dGludWU7IFwKCQllY2hvICJFcnJvcjogc2l6ZSBvZiBib290Lm86JG5hbWUgaXMgMHgkc3oi
ID4mMjsgXAoJCWV4aXQgJChleHByICRpZHggKyAxKTs7IFwKCWVzYWM7IFwKZG9uZQpzZWQ6
IDE6ICIvWzAtOV0ve3MsMDAqLDAsZztwfSI6IGV4dHJhIGNoYXJhY3RlcnMgYXQgdGhlIGVu
ZCBvZiBwIGNvbW1hbmQKb2JqY29weSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGE9LmluaXQu
cm9kYXRhIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjE9LmluaXQucm9kYXRhLnN0
cjEuMSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4yPS5pbml0LnJvZGF0YS5zdHIx
LjIgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuND0uaW5pdC5yb2RhdGEuc3RyMS40
IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjg9LmluaXQucm9kYXRhLnN0cjEuOCAt
LXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbD0uaW5pdC5kYXRhLnJlbCAtLXJlbmFtZS1zZWN0
aW9uIC5kYXRhLnJlbC5sb2NhbD0uaW5pdC5kYXRhLnJlbC5sb2NhbCAtLXJlbmFtZS1zZWN0
aW9uIC5kYXRhLnJlbC5ybz0uaW5pdC5kYXRhLnJlbC5ybyAtLXJlbmFtZS1zZWN0aW9uIC5k
YXRhLnJlbC5yby5sb2NhbD0uaW5pdC5kYXRhLnJlbC5yby5sb2NhbCBib290Lm8gYm9vdC5p
bml0Lm8KZ2NjIC1EX19BU1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lndha2V1cF9wcm90Lm8uZCAtYyB3YWtldXBfcHJvdC5TIC1vIHdha2V1cF9wcm90Lm8KbGQg
ICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGxpYi5vIHBvd2VyLm8gc3VzcGVu
ZC5vIGNwdV9pZGxlLm8gY3B1aWRsZV9tZW51Lm8gY3B1ZnJlcS9idWlsdF9pbi5vIGJvb3Qu
aW5pdC5vIHdha2V1cF9wcm90Lm8KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2FjcGknCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgY3B1IGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRp
cmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9jcHUnCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5hbWQuby5kIC1jIGFt
ZC5jIC1vIGFtZC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZu
by1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJv
ciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmlj
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1t
c29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0
ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9u
b3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0
ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdI
IC1NTUQgLU1GIC5jb21tb24uby5kIC1jIGNvbW1vbi5jIC1vIGNvbW1vbi5vCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pbnRlbC5vLmQg
LWMgaW50ZWwuYyAtbyBpbnRlbC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5pbnRlbF9jYWNoZWluZm8uby5kIC1jIGludGVsX2NhY2hl
aW5mby5jIC1vIGludGVsX2NhY2hlaW5mby5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94
ZW4vUnVsZXMubWsgLUMgbWNoZWNrIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRp
cmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9jcHUvbWNoZWNrJwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5k
YW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFy
aXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1z
dGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1y
ZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVz
IC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9f
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURI
QVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYW1kX25v
bmZhdGFsLm8uZCAtYyBhbWRfbm9uZmF0YWwuYyAtbyBhbWRfbm9uZmF0YWwubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuazcuby5kIC1j
IGs3LmMgLW8gazcubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuYW1kX2s4Lm8uZCAtYyBhbWRfazguYyAtbyBhbWRfazgubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYW1kX2YxMC5v
LmQgLWMgYW1kX2YxMC5jIC1vIGFtZF9mMTAubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubWN0ZWxlbS5vLmQgLWMgbWN0ZWxlbS5jIC1v
IG1jdGVsZW0ubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8t
Y29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3Ig
LVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
ICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94
ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TU1EIC1NRiAubWNlLm8uZCAtYyBtY2UuYyAtbyBtY2UubwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubWNlLWFwZWkuby5kIC1jIG1jZS1h
cGVpLmMgLW8gbWNlLWFwZWkubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
Z2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVm
YXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25z
IC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFz
eW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVU
RSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTU1EIC1NRiAubWNlX2ludGVsLm8uZCAtYyBtY2VfaW50ZWwuYyAtbyBtY2Vf
aW50ZWwubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAubWNlX2FtZF9xdWlya3Muby5kIC1jIG1jZV9hbWRfcXVpcmtzLmMgLW8gbWNlX2Ft
ZF9xdWlya3MubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8t
Y29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3Ig
LVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
ICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94
ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TU1EIC1NRiAubm9uLWZhdGFsLm8uZCAtYyBub24tZmF0YWwuYyAtbyBub24tZmF0YWwubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudm1j
ZS5vLmQgLWMgdm1jZS5jIC1vIHZtY2UubwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1
aWx0X2luLm8gYW1kX25vbmZhdGFsLm8gazcubyBhbWRfazgubyBhbWRfZjEwLm8gbWN0ZWxl
bS5vIG1jZS5vIG1jZS1hcGVpLm8gbWNlX2ludGVsLm8gbWNlX2FtZF9xdWlya3MubyBub24t
ZmF0YWwubyB2bWNlLm8KZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAveGVuL2FyY2gveDg2L2NwdS9tY2hlY2snCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgbXRyciBidWlsdF9pbi5vCmdtYWtlWzZdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYvY3B1L210cnInCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5nZW5lcmlj
Lm8uZCAtYyBnZW5lcmljLmMgLW8gZ2VuZXJpYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tYWluLm8uZCAtYyBtYWluLmMgLW8gbWFp
bi5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBnZW5lcmljLm8gbWFp
bi5vCmdtYWtlWzZdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9h
cmNoL3g4Ni9jcHUvbXRycicKbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5v
IGFtZC5vIGNvbW1vbi5vIGludGVsLm8gaW50ZWxfY2FjaGVpbmZvLm8gbWNoZWNrL2J1aWx0
X2luLm8gbXRyci9idWlsdF9pbi5vCmdtYWtlWzVdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9jcHUnCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgZ2VuYXBpYyBidWlsdF9pbi5vCmdtYWtlWzVdOiBFbnRlcmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYvZ2VuYXBpYycKZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmJpZ3Nt
cC5vLmQgLWMgYmlnc21wLmMgLW8gYmlnc21wLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLngyYXBpYy5vLmQgLWMgeDJhcGljLmMgLW8g
eDJhcGljLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLmRlZmF1bHQuby5kIC1jIGRlZmF1bHQuYyAtbyBkZWZhdWx0Lm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmRlbGl2ZXJ5Lm8u
ZCAtYyBkZWxpdmVyeS5jIC1vIGRlbGl2ZXJ5Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnByb2JlLm8uZCAtYyBwcm9iZS5jIC1vIHBy
b2JlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLnN1bW1pdC5vLmQgLWMgc3VtbWl0LmMgLW8gc3VtbWl0Lm8KbGQgICAgLW1lbGZfeDg2
XzY0ICAtciAtbyBidWlsdF9pbi5vIGJpZ3NtcC5vIHgyYXBpYy5vIGRlZmF1bHQubyBkZWxp
dmVyeS5vIHByb2JlLm8gc3VtbWl0Lm8KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2dlbmFwaWMnCmdtYWtlIC1mIC9yb290L3hl
bi00LjIuMC94ZW4vUnVsZXMubWsgLUMgaHZtIGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVy
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9odm0nCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5hc2lkLm8u
ZCAtYyBhc2lkLmMgLW8gYXNpZC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5lbXVsYXRlLm8uZCAtYyBlbXVsYXRlLmMgLW8gZW11bGF0
ZS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5ocGV0Lm8uZCAtYyBocGV0LmMgLW8gaHBldC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5odm0uby5kIC1jIGh2bS5jIC1vIGh2bS5v
CmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdy
ZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50
ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAt
Zm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAt
bW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10
YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9f
WEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcu
aCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5p
ODI1NC5vLmQgLWMgaTgyNTQuYyAtbyBpODI1NC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pbnRlcmNlcHQuby5kIC1jIGludGVyY2Vw
dC5jIC1vIGludGVyY2VwdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0
aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUg
LVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NU
SFJPVUdIIC1NTUQgLU1GIC5pby5vLmQgLWMgaW8uYyAtbyBpby5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pcnEuby5kIC1jIGlycS5j
IC1vIGlycS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC5tdHJyLm8uZCAtYyBtdHJyLmMgLW8gbXRyci5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5uZXN0ZWRodm0uby5kIC1jIG5l
c3RlZGh2bS5jIC1vIG5lc3RlZGh2bS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5v
LWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGlu
Y2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2Vw
dGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1m
bm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRU
UklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFT
X1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wbXRpbWVyLm8uZCAtYyBwbXRpbWVyLmMgLW8gcG10
aW1lci5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21t
b24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25v
LXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1m
bG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0
ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAt
ZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9j
b25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQg
LU1GIC5xdWlya3Muby5kIC1jIHF1aXJrcy5jIC1vIHF1aXJrcy5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5ydGMuby5kIC1jIHJ0Yy5j
IC1vIHJ0Yy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC5zYXZlLm8uZCAtYyBzYXZlLmMgLW8gc2F2ZS5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5zdGR2Z2Euby5kIC1jIHN0ZHZn
YS5jIC1vIHN0ZHZnYS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC52aW9hcGljLm8uZCAtYyB2aW9hcGljLmMgLW8gdmlvYXBpYy5vCmdj
YyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1
bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXIt
YXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52aXJp
ZGlhbi5vLmQgLWMgdmlyaWRpYW4uYyAtbyB2aXJpZGlhbi5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52bGFwaWMuby5kIC1jIHZsYXBp
Yy5jIC1vIHZsYXBpYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC52bXNpLm8uZCAtYyB2bXNpLmMgLW8gdm1zaS5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52cGljLm8uZCAtYyB2
cGljLmMgLW8gdnBpYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC52cHQuby5kIC1jIHZwdC5jIC1vIHZwdC5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52cG11Lm8uZCAtYyB2cG11
LmMgLW8gdnBtdS5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMg
c3ZtIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni9odm0vc3ZtJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYXNpZC5vLmQgLWMgYXNpZC5jIC1vIGFzaWQu
bwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1X
cmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2lu
dGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQg
LWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMg
LW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQt
dGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURf
X1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmln
LmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAu
ZW11bGF0ZS5vLmQgLWMgZW11bGF0ZS5jIC1vIGVtdWxhdGUubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuaW50ci5vLmQgLWMgaW50ci5j
IC1vIGludHIubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8t
Y29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3Ig
LVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
ICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94
ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TU1EIC1NRiAubmVzdGVkc3ZtLm8uZCAtYyBuZXN0ZWRzdm0uYyAtbyBuZXN0ZWRzdm0ubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuc3Zt
Lm8uZCAtYyBzdm0uYyAtbyBzdm0ubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1i
dWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNs
dWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
ZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5v
LWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJ
QlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19Q
QVNTVEhST1VHSCAtTU1EIC1NRiAuc3ZtZGVidWcuby5kIC1jIHN2bWRlYnVnLmMgLW8gc3Zt
ZGVidWcubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAudm1jYi5vLmQgLWMgdm1jYi5jIC1vIHZtY2IubwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudnBtdS5vLmQgLWMgdnBtdS5jIC1v
IHZwbXUubwpnY2MgLURfX0FTU0VNQkxZX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxv
YXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVy
bnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndp
bmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcg
LURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29u
ZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1N
RiAuZW50cnkuby5kIC1jIGVudHJ5LlMgLW8gZW50cnkubwpsZCAgICAtbWVsZl94ODZfNjQg
IC1yIC1vIGJ1aWx0X2luLm8gYXNpZC5vIGVtdWxhdGUubyBpbnRyLm8gbmVzdGVkc3ZtLm8g
c3ZtLm8gc3ZtZGVidWcubyB2bWNiLm8gdnBtdS5vIGVudHJ5Lm8KZ21ha2VbNl06IExlYXZp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2h2bS9zdm0nCmdt
YWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgdm14IGJ1aWx0X2luLm8K
Z21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4Ni9odm0vdm14JwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuaW50ci5vLmQgLWMgaW50ci5jIC1vIGludHIubwpnY2MgLU8yIC1mb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xz
IC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBl
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90
ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAt
bW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hB
U19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRl
IC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAt
REhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucmVhbG1vZGUuby5kIC1j
IHJlYWxtb2RlLmMgLW8gcmVhbG1vZGUubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAudm1jcy5vLmQgLWMgdm1jcy5jIC1vIHZtY3Mubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudm14
Lm8uZCAtYyB2bXguYyAtbyB2bXgubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1i
dWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNs
dWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
ZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5v
LWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJ
QlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19Q
QVNTVEhST1VHSCAtTU1EIC1NRiAudnBtdV9jb3JlMi5vLmQgLWMgdnBtdV9jb3JlMi5jIC1v
IHZwbXVfY29yZTIubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAudnZteC5vLmQgLWMgdnZteC5jIC1vIHZ2bXgubwpnY2MgLURfX0FTU0VN
QkxZX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcu
aCAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuZW50cnkuby5kIC1jIGVudHJ5
LlMgLW8gZW50cnkubwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gaW50
ci5vIHJlYWxtb2RlLm8gdm1jcy5vIHZteC5vIHZwbXVfY29yZTIubyB2dm14Lm8gZW50cnku
bwpnbWFrZVs2XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJj
aC94ODYvaHZtL3ZteCcKbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGFz
aWQubyBlbXVsYXRlLm8gaHBldC5vIGh2bS5vIGk4MjU0Lm8gaW50ZXJjZXB0Lm8gaW8ubyBp
cnEubyBtdHJyLm8gbmVzdGVkaHZtLm8gcG10aW1lci5vIHF1aXJrcy5vIHJ0Yy5vIHNhdmUu
byBzdGR2Z2EubyB2aW9hcGljLm8gdmlyaWRpYW4ubyB2bGFwaWMubyB2bXNpLm8gdnBpYy5v
IHZwdC5vIHZwbXUubyBzdm0vYnVpbHRfaW4ubyB2bXgvYnVpbHRfaW4ubwpnbWFrZVs1XTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYvaHZtJwpn
bWFrZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIG1tIGJ1aWx0X2luLm8K
Z21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4Ni9tbScKZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLnBhZ2luZy5vLmQgLWMgcGFnaW5nLmMgLW8gcGFnaW5nLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnAybS5vLmQgLWMgcDJt
LmMgLW8gcDJtLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnAybS1wdC5vLmQgLWMgcDJtLXB0LmMgLW8gcDJtLXB0Lm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnAybS1lcHQuby5k
IC1jIHAybS1lcHQuYyAtbyBwMm0tZXB0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnAybS1wb2Quby5kIC1jIHAybS1wb2QuYyAtbyBw
Mm0tcG9kLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLmd1ZXN0X3dhbGtfMi5vLmQgLURHVUVTVF9QQUdJTkdfTEVWRUxTPTIgLWMgZ3Vl
c3Rfd2Fsay5jIC1vIGd1ZXN0X3dhbGtfMi5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5ndWVzdF93YWxrXzMuby5kIC1ER1VFU1RfUEFH
SU5HX0xFVkVMUz0zIC1jIGd1ZXN0X3dhbGsuYyAtbyBndWVzdF93YWxrXzMubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuZ3Vlc3Rfd2Fs
a180Lm8uZCAtREdVRVNUX1BBR0lOR19MRVZFTFM9NCAtYyBndWVzdF93YWxrLmMgLW8gZ3Vl
c3Rfd2Fsa180Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLm1lbV9ldmVudC5vLmQgLWMgbWVtX2V2ZW50LmMgLW8gbWVtX2V2ZW50Lm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1l
bV9wYWdpbmcuby5kIC1jIG1lbV9wYWdpbmcuYyAtbyBtZW1fcGFnaW5nLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1lbV9zaGFyaW5n
Lm8uZCAtYyBtZW1fc2hhcmluZy5jIC1vIG1lbV9zaGFyaW5nLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1lbV9hY2Nlc3Muby5kIC1j
IG1lbV9hY2Nlc3MuYyAtbyBtZW1fYWNjZXNzLm8KZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4w
L3hlbi9SdWxlcy5tayAtQyBzaGFkb3cgYnVpbHRfaW4ubwpnbWFrZVs2XTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L21tL3NoYWRvdycKZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNvbW1v
bi5vLmQgLWMgY29tbW9uLmMgLW8gY29tbW9uLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmd1ZXN0XzIuby5kIC1ER1VFU1RfUEFHSU5H
X0xFVkVMUz0yIC1jIG11bHRpLmMgLW8gZ3Vlc3RfMi5vCmdjYyAtTzIgLWZvbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5E
RUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRo
cHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5ndWVzdF8zLm8uZCAtREdVRVNUX1BB
R0lOR19MRVZFTFM9MyAtYyBtdWx0aS5jIC1vIGd1ZXN0XzMubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuZ3Vlc3RfNC5vLmQgLURHVUVT
VF9QQUdJTkdfTEVWRUxTPTQgLWMgbXVsdGkuYyAtbyBndWVzdF80Lm8KbGQgICAgLW1lbGZf
eDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGNvbW1vbi5vIGd1ZXN0XzIubyBndWVzdF8zLm8g
Z3Vlc3RfNC5vCmdtYWtlWzZdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4w
L3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cnCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4v
UnVsZXMubWsgLUMgaGFwIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9y
eSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9tbS9oYXAnCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5oYXAuby5kIC1jIGhhcC5j
IC1vIGhhcC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC5ndWVzdF93YWxrXzJsZXZlbC5vLmQgLURHVUVTVF9QQUdJTkdfTEVWRUxTPTIg
LWMgZ3Vlc3Rfd2Fsay5jIC1vIGd1ZXN0X3dhbGtfMmxldmVsLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmd1ZXN0X3dhbGtfM2xldmVs
Lm8uZCAtREdVRVNUX1BBR0lOR19MRVZFTFM9MyAtYyBndWVzdF93YWxrLmMgLW8gZ3Vlc3Rf
d2Fsa18zbGV2ZWwubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuZ3Vlc3Rfd2Fsa180bGV2ZWwuby5kIC1ER1VFU1RfUEFHSU5HX0xFVkVM
Uz00IC1jIGd1ZXN0X3dhbGsuYyAtbyBndWVzdF93YWxrXzRsZXZlbC5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5uZXN0ZWRfaGFwLm8u
ZCAtYyBuZXN0ZWRfaGFwLmMgLW8gbmVzdGVkX2hhcC5vCmxkICAgIC1tZWxmX3g4Nl82NCAg
LXIgLW8gYnVpbHRfaW4ubyBoYXAubyBndWVzdF93YWxrXzJsZXZlbC5vIGd1ZXN0X3dhbGtf
M2xldmVsLm8gZ3Vlc3Rfd2Fsa180bGV2ZWwubyBuZXN0ZWRfaGFwLm8KZ21ha2VbNl06IExl
YXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L21tL2hhcCcK
bGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIHBhZ2luZy5vIHAybS5vIHAy
bS1wdC5vIHAybS1lcHQubyBwMm0tcG9kLm8gZ3Vlc3Rfd2Fsa18yLm8gZ3Vlc3Rfd2Fsa18z
Lm8gZ3Vlc3Rfd2Fsa180Lm8gbWVtX2V2ZW50Lm8gbWVtX3BhZ2luZy5vIG1lbV9zaGFyaW5n
Lm8gbWVtX2FjY2Vzcy5vIHNoYWRvdy9idWlsdF9pbi5vIGhhcC9idWlsdF9pbi5vCmdtYWtl
WzVdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9t
bScKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBvcHJvZmlsZSBi
dWlsdF9pbi5vCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC94ZW4vYXJjaC94ODYvb3Byb2ZpbGUnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5v
LWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGlu
Y2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2Vw
dGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1m
bm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRU
UklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFT
X1BBU1NUSFJPVUdIIC1NTUQgLU1GIC54ZW5vcHJvZi5vLmQgLWMgeGVub3Byb2YuYyAtbyB4
ZW5vcHJvZi5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC5ubWlfaW50Lm8uZCAtYyBubWlfaW50LmMgLW8gbm1pX2ludC5vCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5vcF9tb2RlbF9w
NC5vLmQgLWMgb3BfbW9kZWxfcDQuYyAtbyBvcF9tb2RlbF9wNC5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5vcF9tb2RlbF9wcHJvLm8u
ZCAtYyBvcF9tb2RlbF9wcHJvLmMgLW8gb3BfbW9kZWxfcHByby5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5vcF9tb2RlbF9hdGhsb24u
by5kIC1jIG9wX21vZGVsX2F0aGxvbi5jIC1vIG9wX21vZGVsX2F0aGxvbi5vCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5iYWNrdHJhY2Uu
by5kIC1jIGJhY2t0cmFjZS5jIC1vIGJhY2t0cmFjZS5vCmxkICAgIC1tZWxmX3g4Nl82NCAg
LXIgLW8gYnVpbHRfaW4ubyB4ZW5vcHJvZi5vIG5taV9pbnQubyBvcF9tb2RlbF9wNC5vIG9w
X21vZGVsX3Bwcm8ubyBvcF9tb2RlbF9hdGhsb24ubyBiYWNrdHJhY2UubwpnbWFrZVs1XTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYvb3Byb2Zp
bGUnCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgeDg2XzY0IGJ1
aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4w
L3hlbi9hcmNoL3g4Ni94ODZfNjQnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5tbS5vLmQgLWMgbW0uYyAtbyBtbS5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC50cmFwcy5vLmQgLWMg
dHJhcHMuYyAtbyB0cmFwcy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0
aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUg
LVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NU
SFJPVUdIIC1NTUQgLU1GIC5tYWNoaW5lX2tleGVjLm8uZCAtYyBtYWNoaW5lX2tleGVjLmMg
LW8gbWFjaGluZV9rZXhlYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0
aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUg
LVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NU
SFJPVUdIIC1NTUQgLU1GIC5wY2kuby5kIC1jIHBjaS5jIC1vIHBjaS5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5hY3BpX21tY2ZnLm8u
ZCAtYyBhY3BpX21tY2ZnLmMgLW8gYWNwaV9tbWNmZy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5E
RUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRo
cHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tbWNvbmYtZmFtMTBoLm8uZCAtYyBt
bWNvbmYtZmFtMTBoLmMgLW8gbW1jb25mLWZhbTEwaC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5E
RUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRo
cHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tbWNvbmZpZ182NC5vLmQgLWMgbW1j
b25maWdfNjQuYyAtbyBtbWNvbmZpZ182NC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tbWNvbmZpZy1zaGFyZWQuby5kIC1jIG1tY29u
ZmlnLXNoYXJlZC5jIC1vIG1tY29uZmlnLXNoYXJlZC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5E
RUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRo
cHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jb21wYXQuby5kIC1jIGNvbXBhdC5j
IC1vIGNvbXBhdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZu
by1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJv
ciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmlj
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1t
c29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0
ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9u
b3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0
ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdI
IC1NTUQgLU1GIC5kb21haW4uby5kIC1jIGRvbWFpbi5jIC1vIGRvbWFpbi5vCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5waHlzZGV2Lm8u
ZCAtYyBwaHlzZGV2LmMgLW8gcGh5c2Rldi5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wbGF0Zm9ybV9oeXBlcmNhbGwuby5kIC1jIHBs
YXRmb3JtX2h5cGVyY2FsbC5jIC1vIHBsYXRmb3JtX2h5cGVyY2FsbC5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jcHVfaWRsZS5vLmQg
LWMgY3B1X2lkbGUuYyAtbyBjcHVfaWRsZS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jcHVmcmVxLm8uZCAtYyBjcHVmcmVxLmMgLW8g
Y3B1ZnJlcS5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgY29t
cGF0IGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni94ODZfNjQvY29tcGF0JwpnY2MgLURfX0FTU0VNQkxZX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZu
by1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAt
ZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklM
SVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJT
WCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuZW50cnkuby5kIC1jIGVudHJ5LlMgLW8g
ZW50cnkubwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gZW50cnkubwpn
bWFrZVs2XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94
ODYveDg2XzY0L2NvbXBhdCcKZ2NjIC1EX19BU1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLmVudHJ5Lm8uZCAtYyBlbnRyeS5TIC1vIGVudHJ5Lm8KZ2NjIC1EX19B
U1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29u
ZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmdwcl9zd2l0Y2guby5k
IC1jIGdwcl9zd2l0Y2guUyAtbyBncHJfc3dpdGNoLm8KZ2NjIC1EX19BU1NFTUJMWV9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1m
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNvbXBhdF9rZXhlYy5vLmQgLWMgY29tcGF0
X2tleGVjLlMgLW8gY29tcGF0X2tleGVjLm8KbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBi
dWlsdF9pbi5vIG1tLm8gdHJhcHMubyBtYWNoaW5lX2tleGVjLm8gcGNpLm8gYWNwaV9tbWNm
Zy5vIG1tY29uZi1mYW0xMGgubyBtbWNvbmZpZ182NC5vIG1tY29uZmlnLXNoYXJlZC5vIGNv
bXBhdC5vIGRvbWFpbi5vIHBoeXNkZXYubyBwbGF0Zm9ybV9oeXBlcmNhbGwubyBjcHVfaWRs
ZS5vIGNwdWZyZXEubyBjb21wYXQvYnVpbHRfaW4ubyBlbnRyeS5vIGdwcl9zd2l0Y2gubyBj
b21wYXRfa2V4ZWMubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC94ZW4vYXJjaC94ODYveDg2XzY0JwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYnppbWFnZS5vLmQgLURJTklUX1NFQ1RJT05TX09O
TFkgLWMgYnppbWFnZS5jIC1vIGJ6aW1hZ2UubwpvYmpkdW1wIC1oIGJ6aW1hZ2UubyB8IHNl
ZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiBy
ZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0
YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJy
b3I6IHNpemUgb2YgYnppbWFnZS5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQo
ZXhwciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAw
KiwwLGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9i
amNvcHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1z
ZWN0aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2Vj
dGlvbiAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rp
b24gLnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9u
IC5yb2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAu
ZGF0YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9j
YWw9LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89
LmluaXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9
LmluaXQuZGF0YS5yZWwucm8ubG9jYWwgYnppbWFnZS5vIGJ6aW1hZ2UuaW5pdC5vCmdjYyAt
RF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jbGVhcl9wYWdl
Lm8uZCAtYyBjbGVhcl9wYWdlLlMgLW8gY2xlYXJfcGFnZS5vCmdjYyAtRF9fQVNTRU1CTFlf
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jb3B5X3BhZ2Uuby5kIC1jIGNvcHlf
cGFnZS5TIC1vIGNvcHlfcGFnZS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5kbWlfc2Nhbi5vLmQgLURJTklUX1NFQ1RJT05TX09OTFkg
LWMgZG1pX3NjYW4uYyAtbyBkbWlfc2Nhbi5vCm9iamR1bXAgLWggZG1pX3NjYW4ubyB8IHNl
ZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiBy
ZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0
YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJy
b3I6IHNpemUgb2YgZG1pX3NjYW4ubzokbmFtZSBpcyAweCRzeiIgPiYyOyBcCgkJZXhpdCAk
KGV4cHIgJGlkeCArIDEpOzsgXAoJZXNhYzsgXApkb25lCnNlZDogMTogIi9bMC05XS97cyww
MCosMCxnO3B9IjogZXh0cmEgY2hhcmFjdGVycyBhdCB0aGUgZW5kIG9mIHAgY29tbWFuZApv
Ympjb3B5IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YT0uaW5pdC5yb2RhdGEgLS1yZW5hbWUt
c2VjdGlvbiAucm9kYXRhLnN0cjEuMT0uaW5pdC5yb2RhdGEuc3RyMS4xIC0tcmVuYW1lLXNl
Y3Rpb24gLnJvZGF0YS5zdHIxLjI9LmluaXQucm9kYXRhLnN0cjEuMiAtLXJlbmFtZS1zZWN0
aW9uIC5yb2RhdGEuc3RyMS40PS5pbml0LnJvZGF0YS5zdHIxLjQgLS1yZW5hbWUtc2VjdGlv
biAucm9kYXRhLnN0cjEuOD0uaW5pdC5yb2RhdGEuc3RyMS44IC0tcmVuYW1lLXNlY3Rpb24g
LmRhdGEucmVsPS5pbml0LmRhdGEucmVsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLmxv
Y2FsPS5pbml0LmRhdGEucmVsLmxvY2FsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJv
PS5pbml0LmRhdGEucmVsLnJvIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvLmxvY2Fs
PS5pbml0LmRhdGEucmVsLnJvLmxvY2FsIGRtaV9zY2FuLm8gZG1pX3NjYW4uaW5pdC5vCmdj
YyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1
bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXIt
YXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5kb21h
aW5fYnVpbGQuby5kIC1ESU5JVF9TRUNUSU9OU19PTkxZIC1jIGRvbWFpbl9idWlsZC5jIC1v
IGRvbWFpbl9idWlsZC5vCm9iamR1bXAgLWggZG9tYWluX2J1aWxkLm8gfCBzZWQgLW4gJy9b
MC05XS97cywwMCosMCxnO3B9JyB8IHdoaWxlIHJlYWQgaWR4IG5hbWUgc3ogcmVzdDsgZG8g
XAoJY2FzZSAiJG5hbWUiIGluIFwKCS50ZXh0fC50ZXh0Lip8LmRhdGF8LmRhdGEuKnwuYnNz
KSBcCgkJdGVzdCAkc3ogIT0gMCB8fCBjb250aW51ZTsgXAoJCWVjaG8gIkVycm9yOiBzaXpl
IG9mIGRvbWFpbl9idWlsZC5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQoZXhw
ciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAwKiww
LGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9iamNv
cHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1zZWN0
aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2VjdGlv
biAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rpb24g
LnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9uIC5y
b2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAuZGF0
YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9jYWw9
LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89Lmlu
aXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9Lmlu
aXQuZGF0YS5yZWwucm8ubG9jYWwgZG9tYWluX2J1aWxkLm8gZG9tYWluX2J1aWxkLmluaXQu
bwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gYXBpYy5vIGJpdG9wcy5v
IGNvbXBhdC5vIGRlYnVnLm8gZGVsYXkubyBkb21jdGwubyBkb21haW4ubyBlODIwLm8gZXh0
YWJsZS5vIGZsdXNodGxiLm8gcGxhdGZvcm1faHlwZXJjYWxsLm8gaTM4Ny5vIGk4MjU5Lm8g
aW9fYXBpYy5vIG1zaS5vIGlvcG9ydF9lbXVsYXRlLm8gaXJxLm8gbWljcm9jb2RlX2FtZC5v
IG1pY3JvY29kZV9pbnRlbC5vIG1pY3JvY29kZS5vIG1tLm8gbXBwYXJzZS5vIG5taS5vIG51
bWEubyBwY2kubyBwZXJjcHUubyBwaHlzZGV2Lm8gc2V0dXAubyBzaHV0ZG93bi5vIHNtcC5v
IHNtcGJvb3QubyBzcmF0Lm8gc3RyaW5nLm8gc3lzY3RsLm8gdGltZS5vIHRyYWNlLm8gdHJh
cHMubyB1c2VyY29weS5vIHg4Nl9lbXVsYXRlLm8gbWFjaGluZV9rZXhlYy5vIGNyYXNoLm8g
dGJvb3QubyBocGV0Lm8geHN0YXRlLm8gYWNwaS9idWlsdF9pbi5vIGNwdS9idWlsdF9pbi5v
IGdlbmFwaWMvYnVpbHRfaW4ubyBodm0vYnVpbHRfaW4ubyBtbS9idWlsdF9pbi5vIG9wcm9m
aWxlL2J1aWx0X2luLm8geDg2XzY0L2J1aWx0X2luLm8gYnppbWFnZS5pbml0Lm8gY2xlYXJf
cGFnZS5vIGNvcHlfcGFnZS5vIGRtaV9zY2FuLmluaXQubyBkb21haW5fYnVpbGQuaW5pdC5v
CmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4NicKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyAvcm9vdC94
ZW4tNC4yLjAveGVuL2NyeXB0byBidWlsdF9pbi5vCmdtYWtlWzRdOiBFbnRlcmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vY3J5cHRvJwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucmlqbmRhZWwuby5kIC1jIHJpam5k
YWVsLmMgLW8gcmlqbmRhZWwubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
Z2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVm
YXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25z
IC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFz
eW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVU
RSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTU1EIC1NRiAudm1hYy5vLmQgLWMgdm1hYy5jIC1vIHZtYWMubwpsZCAgICAt
bWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gcmlqbmRhZWwubyB2bWFjLm8KZ21ha2Vb
NF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2NyeXB0bycKbGQg
ICAgLW1lbGZfeDg2XzY0ICAtciAtbyBwcmVsaW5rLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9h
cmNoL3g4Ni9ib290L2J1aWx0X2luLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9l
ZmkvYnVpbHRfaW4ubyAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9idWlsdF9pbi5vIC9y
b290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9idWlsdF9pbi5vIC9yb290L3hlbi00LjIuMC94
ZW4veHNtL2J1aWx0X2luLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9idWlsdF9p
bi5vIC9yb290L3hlbi00LjIuMC94ZW4vY3J5cHRvL2J1aWx0X2luLm8KZ2NjIC1QIC1FIC1V
aTM4NiAtRF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC54ZW4u
bGRzLmQgLW8geGVuLmxkcyB4ZW4ubGRzLlMKc2VkIC1lICdzL3hlblwubGRzXC5vOi94ZW5c
LmxkczovZycgPC54ZW4ubGRzLmQgPi54ZW4ubGRzLmQubmV3Cm12IC1mIC54ZW4ubGRzLmQu
bmV3IC54ZW4ubGRzLmQKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAt
QyAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbiBzeW1ib2xzLWR1bW15Lm8KZ21ha2VbNF06
IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24nCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5zeW1ib2xz
LWR1bW15Lm8uZCAtYyBzeW1ib2xzLWR1bW15LmMgLW8gc3ltYm9scy1kdW1teS5vCmdtYWtl
WzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24nCmxk
ICAgIC1tZWxmX3g4Nl82NCAgLVQgeGVuLmxkcyAtTiBwcmVsaW5rLm8gXAogICAgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9jb21tb24vc3ltYm9scy1kdW1teS5vIC1vIC9yb290L3hlbi00LjIu
MC94ZW4vLnhlbi1zeW1zLjAKbm0gLW4gL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMu
MCB8IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9scyA+L3Jvb3QveGVuLTQuMi4w
L3hlbi8ueGVuLXN5bXMuMC5TCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMu
bWsgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMC5vCmdtYWtlWzRdOiBFbnRlcmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmdjYyAtRF9fQVNT
RU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xz
IC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBl
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC4ueGVuLXN5bXMuMC5vLmQg
LWMgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMC5TIC1vIC9yb290L3hlbi00LjIu
MC94ZW4vLnhlbi1zeW1zLjAubwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmxkICAgIC1tZWxmX3g4Nl82NCAgLVQgeGVuLmxk
cyAtTiBwcmVsaW5rLm8gXAogICAgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMC5v
IC1vIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi1zeW1zLjEKbm0gLW4gL3Jvb3QveGVuLTQu
Mi4wL3hlbi8ueGVuLXN5bXMuMSB8IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9s
cyA+L3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMS5TCmdtYWtlIC1mIC9yb290L3hl
bi00LjIuMC94ZW4vUnVsZXMubWsgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMS5v
CmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJj
aC94ODYnCmdjYyAtRF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9u
IC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1w
b2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC4ueGVuLXN5bXMuMS5vLmQgLWMgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMS5T
IC1vIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi1zeW1zLjEubwpnbWFrZVs0XTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmxkICAgIC1tZWxm
X3g4Nl82NCAgLVQgeGVuLmxkcyAtTiBwcmVsaW5rLm8gXAogICAgL3Jvb3QveGVuLTQuMi4w
L3hlbi8ueGVuLXN5bXMuMS5vIC1vIC9yb290L3hlbi00LjIuMC94ZW4veGVuLXN5bXMKcm0g
LWYgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuWzAtOV0qCjogbGQgICAgLW1lbGZf
eDg2XzY0ICAtciAtbyBwcmVsaW5rLWVmaS5vIC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94
ODYvYm9vdC9idWlsdF9pbi5vIC9yb290L3hlbi00LjIuMC94ZW4vY29tbW9uL2J1aWx0X2lu
Lm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9kcml2ZXJzL2J1aWx0X2luLm8gL3Jvb3QveGVuLTQu
Mi4wL3hlbi94c20vYnVpbHRfaW4ubyAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2J1
aWx0X2luLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9jcnlwdG8vYnVpbHRfaW4ubyBlZmkvYm9v
dC5pbml0Lm8gZWZpL3J1bnRpbWUubyBlZmkvY29tcGF0Lm8KZ2NjIC1QIC1FIC1VaTM4NiAt
REVGSSAtRF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5lZmku
bGRzLmQgLW8gZWZpLmxkcyB4ZW4ubGRzLlMKc2VkIC1lICdzL2VmaVwubGRzXC5vOi9lZmlc
LmxkczovZycgPC5lZmkubGRzLmQgPi5lZmkubGRzLmQubmV3Cm12IC1mIC5lZmkubGRzLmQu
bmV3IC5lZmkubGRzLmQKZ2NjIC1EX19BU1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZu
by1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJv
ciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnJlbG9jcy1kdW1teS5vLmQgLWMgZWZpL3JlbG9jcy1kdW1teS5TIC1vIGVm
aS9yZWxvY3MtZHVtbXkubwpnY2MgLVdhbGwgLVdlcnJvciAtV3N0cmljdC1wcm90b3R5cGVz
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtZm5vLXN0cmljdC1hbGlhc2luZyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtZyAtbyBlZmkvbWtyZWxvYyBlZmkvbWtyZWxvYy5j
CjogbGQgLW1pMzg2cGVwIC0tc3Vic3lzdGVtPTEwIC0taW1hZ2UtYmFzZT0weGZmZmY4MmM0
ODAwMDAwMDAgLS1zdGFjaz0wLDAgLS1oZWFwPTAsMCAtLXN0cmlwLWRlYnVnIC0tc2VjdGlv
bi1hbGlnbm1lbnQ9MHgyMDAwMDAgLS1maWxlLWFsaWdubWVudD0weDIwIC0tbWFqb3ItaW1h
Z2UtdmVyc2lvbj00IC0tbWlub3ItaW1hZ2UtdmVyc2lvbj0yIC0tbWFqb3Itb3MtdmVyc2lv
bj0yIC0tbWlub3Itb3MtdmVyc2lvbj0wIC0tbWFqb3Itc3Vic3lzdGVtLXZlcnNpb249MiAt
LW1pbm9yLXN1YnN5c3RlbS12ZXJzaW9uPTAgLVQgZWZpLmxkcyAtTiBwcmVsaW5rLWVmaS5v
IGVmaS9yZWxvY3MtZHVtbXkubyAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9zeW1ib2xz
LWR1bW15Lm8gLW8gL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4weGZmZmY4MmM0ODAw
MDAwMDAuMCAmJiAgOiBsZCAtbWkzODZwZXAgLS1zdWJzeXN0ZW09MTAgLS1pbWFnZS1iYXNl
PTB4ZmZmZjgyYzRjMDAwMDAwMCAtLXN0YWNrPTAsMCAtLWhlYXA9MCwwIC0tc3RyaXAtZGVi
dWcgLS1zZWN0aW9uLWFsaWdubWVudD0weDIwMDAwMCAtLWZpbGUtYWxpZ25tZW50PTB4MjAg
LS1tYWpvci1pbWFnZS12ZXJzaW9uPTQgLS1taW5vci1pbWFnZS12ZXJzaW9uPTIgLS1tYWpv
ci1vcy12ZXJzaW9uPTIgLS1taW5vci1vcy12ZXJzaW9uPTAgLS1tYWpvci1zdWJzeXN0ZW0t
dmVyc2lvbj0yIC0tbWlub3Itc3Vic3lzdGVtLXZlcnNpb249MCAtVCBlZmkubGRzIC1OIHBy
ZWxpbmstZWZpLm8gZWZpL3JlbG9jcy1kdW1teS5vIC9yb290L3hlbi00LjIuMC94ZW4vY29t
bW9uL3N5bWJvbHMtZHVtbXkubyAtbyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjB4
ZmZmZjgyYzRjMDAwMDAwMC4wICYmIDoKOiBlZmkvbWtyZWxvYyAvcm9vdC94ZW4tNC4yLjAv
eGVuLy54ZW4uZWZpLjB4ZmZmZjgyYzQ4MDAwMDAwMC4wIC9yb290L3hlbi00LjIuMC94ZW4v
Lnhlbi5lZmkuMHhmZmZmODJjNGMwMDAwMDAwLjAgPi9yb290L3hlbi00LjIuMC94ZW4vLnhl
bi5lZmkuMHIuUwo6IG5tIC1uIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmkuMHhmZmZm
ODJjNDgwMDAwMDAwLjAgfCA6IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9scyA+
L3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4wcy5TCjogZ21ha2UgLWYgL3Jvb3QveGVu
LTQuMi4wL3hlbi9SdWxlcy5tayAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjByLm8g
L3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4wcy5vCjogbGQgLW1pMzg2cGVwIC0tc3Vi
c3lzdGVtPTEwIC0taW1hZ2UtYmFzZT0weGZmZmY4MmM0ODAwMDAwMDAgLS1zdGFjaz0wLDAg
LS1oZWFwPTAsMCAtLXN0cmlwLWRlYnVnIC0tc2VjdGlvbi1hbGlnbm1lbnQ9MHgyMDAwMDAg
LS1maWxlLWFsaWdubWVudD0weDIwIC0tbWFqb3ItaW1hZ2UtdmVyc2lvbj00IC0tbWlub3It
aW1hZ2UtdmVyc2lvbj0yIC0tbWFqb3Itb3MtdmVyc2lvbj0yIC0tbWlub3Itb3MtdmVyc2lv
bj0wIC0tbWFqb3Itc3Vic3lzdGVtLXZlcnNpb249MiAtLW1pbm9yLXN1YnN5c3RlbS12ZXJz
aW9uPTAgLVQgZWZpLmxkcyAtTiBwcmVsaW5rLWVmaS5vIC9yb290L3hlbi00LjIuMC94ZW4v
Lnhlbi5lZmkuMHIubyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjBzLm8gLW8gL3Jv
b3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4weGZmZmY4MmM0ODAwMDAwMDAuMSAmJiAgOiBs
ZCAtbWkzODZwZXAgLS1zdWJzeXN0ZW09MTAgLS1pbWFnZS1iYXNlPTB4ZmZmZjgyYzRjMDAw
MDAwMCAtLXN0YWNrPTAsMCAtLWhlYXA9MCwwIC0tc3RyaXAtZGVidWcgLS1zZWN0aW9uLWFs
aWdubWVudD0weDIwMDAwMCAtLWZpbGUtYWxpZ25tZW50PTB4MjAgLS1tYWpvci1pbWFnZS12
ZXJzaW9uPTQgLS1taW5vci1pbWFnZS12ZXJzaW9uPTIgLS1tYWpvci1vcy12ZXJzaW9uPTIg
LS1taW5vci1vcy12ZXJzaW9uPTAgLS1tYWpvci1zdWJzeXN0ZW0tdmVyc2lvbj0yIC0tbWlu
b3Itc3Vic3lzdGVtLXZlcnNpb249MCAtVCBlZmkubGRzIC1OIHByZWxpbmstZWZpLm8gL3Jv
b3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4wci5vIC9yb290L3hlbi00LjIuMC94ZW4vLnhl
bi5lZmkuMHMubyAtbyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjB4ZmZmZjgyYzRj
MDAwMDAwMC4xICYmIDoKOiBlZmkvbWtyZWxvYyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4u
ZWZpLjB4ZmZmZjgyYzQ4MDAwMDAwMC4xIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmku
MHhmZmZmODJjNGMwMDAwMDAwLjEgPi9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmkuMXIu
Uwo6IG5tIC1uIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmkuMHhmZmZmODJjNDgwMDAw
MDAwLjEgfCA6IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9scyA+L3Jvb3QveGVu
LTQuMi4wL3hlbi8ueGVuLmVmaS4xcy5TCjogZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hl
bi9SdWxlcy5tayAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjFyLm8gL3Jvb3QveGVu
LTQuMi4wL3hlbi8ueGVuLmVmaS4xcy5vCjogbGQgLW1pMzg2cGVwIC0tc3Vic3lzdGVtPTEw
IC0taW1hZ2UtYmFzZT0weGZmZmY4MmM0ODAwMDAwMDAgLS1zdGFjaz0wLDAgLS1oZWFwPTAs
MCAtLXN0cmlwLWRlYnVnIC0tc2VjdGlvbi1hbGlnbm1lbnQ9MHgyMDAwMDAgLS1maWxlLWFs
aWdubWVudD0weDIwIC0tbWFqb3ItaW1hZ2UtdmVyc2lvbj00IC0tbWlub3ItaW1hZ2UtdmVy
c2lvbj0yIC0tbWFqb3Itb3MtdmVyc2lvbj0yIC0tbWlub3Itb3MtdmVyc2lvbj0wIC0tbWFq
b3Itc3Vic3lzdGVtLXZlcnNpb249MiAtLW1pbm9yLXN1YnN5c3RlbS12ZXJzaW9uPTAgLVQg
ZWZpLmxkcyAtTiBwcmVsaW5rLWVmaS5vIFwKICAgICAgICAgICAgICAgIC9yb290L3hlbi00
LjIuMC94ZW4vLnhlbi5lZmkuMXIubyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjFz
Lm8gLW8gL3Jvb3QveGVuLTQuMi4wL3hlbi94ZW4uZWZpCmlmIDogZmFsc2U7IHRoZW4gcm0g
LWYgL3Jvb3QveGVuLTQuMi4wL3hlbi94ZW4uZWZpOyBlY2hvICdFRkkgc3VwcG9ydCBkaXNh
YmxlZCc7IGZpCkVGSSBzdXBwb3J0IGRpc2FibGVkCnJtIC1mIC9yb290L3hlbi00LjIuMC94
ZW4vLnhlbi5lZmkuWzAtOV0qCmdjYyAtV2FsbCAtV2Vycm9yIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1vIGJvb3QvbWtlbGYzMiBib290L21rZWxmMzIu
YwouL2Jvb3QvbWtlbGYzMiAvcm9vdC94ZW4tNC4yLjAveGVuL3hlbi1zeW1zIC9yb290L3hl
bi00LjIuMC94ZW4veGVuIDB4MTAwMDAwIFwKYG5tIC1uciAvcm9vdC94ZW4tNC4yLjAveGVu
L3hlbi1zeW1zIHwgaGVhZCAtbiAxIHwgc2VkIC1lICdzL15cKFteIF0qXCkuKi8weFwxLydg
CmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4NicKZ3ppcCAtZiAtOSA8IC9yb290L3hlbi00LjIuMC94ZW4veGVuID4gL3Jvb3QveGVu
LTQuMi4wL3hlbi94ZW4uZ3oubmV3Cm12IC9yb290L3hlbi00LjIuMC94ZW4veGVuLmd6Lm5l
dyAvcm9vdC94ZW4tNC4yLjAveGVuL3hlbi5negpbIC1kIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvYm9vdCBdIHx8IGluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvYm9vdAppbnN0YWxsIC1tMDY0NCAtcCAvcm9vdC94ZW4tNC4yLjAv
eGVuL3hlbi5neiAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL2Jvb3QveGVuLTQuMi4w
Lmd6CmxuIC1mIC1zIHhlbi00LjIuMC5neiAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L2Jvb3QveGVuLTQuMi5negpsbiAtZiAtcyB4ZW4tNC4yLjAuZ3ogL3Jvb3QveGVuLTQuMi4w
L2Rpc3QvaW5zdGFsbC9ib290L3hlbi00Lmd6CmxuIC1mIC1zIHhlbi00LjIuMC5neiAvcm9v
dC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL2Jvb3QveGVuLmd6Cmluc3RhbGwgLW0wNjQ0IC1w
IC9yb290L3hlbi00LjIuMC94ZW4veGVuLXN5bXMgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5z
dGFsbC9ib290L3hlbi1zeW1zLTQuMi4wCmlmIFsgLXIgL3Jvb3QveGVuLTQuMi4wL3hlbi94
ZW4uZWZpIC1hIC1uICcvdXNyL2xpYjY0L2VmaScgXTsgdGhlbiBcCglbIC1kIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaSBdIHx8IGluc3RhbGwgLWQgLW0w
NzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaTsgXAoJ
aW5zdGFsbCAtbTA2NDQgLXAgL3Jvb3QveGVuLTQuMi4wL3hlbi94ZW4uZWZpIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaS94ZW4tNC4yLjAuZWZpOyBcCgls
biAtc2YgeGVuLTQuMi4wLmVmaSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci9s
aWI2NC9lZmkveGVuLTQuMi5lZmk7IFwKCWxuIC1zZiB4ZW4tNC4yLjAuZWZpIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaS94ZW4tNC5lZmk7IFwKCWxuIC1z
ZiB4ZW4tNC4yLjAuZWZpIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0
L2VmaS94ZW4uZWZpOyBcCglpZiBbIC1uICcvYm9vdC9lZmknIC1hIC1uICcnIF07IHRoZW4g
XAoJCWluc3RhbGwgLW0wNjQ0IC1wIC9yb290L3hlbi00LjIuMC94ZW4veGVuLmVmaSAvcm9v
dC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL2Jvb3QvZWZpL2VmaS8veGVuLTQuMi4wLmVmaTsg
XAoJZWxpZiBbICIvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsIiA9ICJkaXN0L2luc3Rh
bGwiIF07IHRoZW4gXAoJCWVjaG8gJ0VGSSBpbnN0YWxsYXRpb24gb25seSBwYXJ0aWFsbHkg
ZG9uZSAoRUZJX1ZFTkRPUiBub3Qgc2V0KScgPiYyOyBcCglmaTsgXApmaQpnbWFrZVsyXTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4nCmdtYWtlWzFdOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbicKZG9tMCMgZ21ha2UgdG9vbHMK
Z21ha2UgLUMgdG9vbHMgcWVtdS14ZW4tdHJhZGl0aW9uYWwtZGlyLWZpbmQKZ21ha2VbMV06
IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpzZXQgLWV4OyBc
CmlmIHRlc3QgLWQgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3FlbXUteGVuLXRy
YWRpdGlvbmFsOyB0aGVuIFwKCW1rZGlyIC1wIHFlbXUteGVuLXRyYWRpdGlvbmFsLWRpcjsg
XAplbHNlIFwKCWV4cG9ydCBHSVQ9Z2l0OyBcCgkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4v
c2NyaXB0cy9naXQtY2hlY2tvdXQuc2ggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xz
L3FlbXUteGVuLXRyYWRpdGlvbmFsIHhlbi00LjIuMCBxZW11LXhlbi10cmFkaXRpb25hbC1k
aXI7IFwKZmkKKyB0ZXN0IC1kICcvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMvcWVt
dS14ZW4tdHJhZGl0aW9uYWwnCisgbWtkaXIgLXAgcWVtdS14ZW4tdHJhZGl0aW9uYWwtZGly
CmdtYWtlWzFdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpn
bWFrZSAtQyB0b29scyBxZW11LXhlbi1kaXItZmluZApnbWFrZVsxXTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmlmIHRlc3QgLWQgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzLy4uL3Rvb2xzL3FlbXUteGVuIDsgdGhlbiBcCglta2RpciAtcCBxZW11LXhl
bi1kaXI7IFwKZWxzZSBcCglleHBvcnQgR0lUPWdpdDsgXAoJL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzLy4uL3NjcmlwdHMvZ2l0LWNoZWNrb3V0LnNoIC9yb290L3hlbi00LjIuMC90b29scy8u
Li90b29scy9xZW11LXhlbiBxZW11LXhlbi00LjIuMCBxZW11LXhlbi1kaXIgOyBcCmZpCmdt
YWtlWzFdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFr
ZSAtQyB0b29scyBpbnN0YWxsCmdtYWtlWzFdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBpbmNsdWRlIGluc3RhbGwKZ21ha2VbM106IEVu
dGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUnCmdtYWtl
IC1DIHhlbi1mb3JlaWduCmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduJwpweXRob24yLjcgbWtoZWFkZXIu
cHkgeDg2XzMyIHg4Nl8zMi5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1m
b3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLXg4Ni94ZW4teDg2XzMy
LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUveGVuLWZvcmVpZ24vLi4vLi4vLi4v
eGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2L3hlbi5oIC9yb290L3hlbi00LjIuMC90b29s
cy9pbmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy94ZW4u
aApweXRob24yLjcgbWtoZWFkZXIucHkgeDg2XzY0IHg4Nl82NC5oIC9yb290L3hlbi00LjIu
MC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1Ymxp
Yy9hcmNoLXg4Ni94ZW4teDg2XzY0LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUv
eGVuLWZvcmVpZ24vLi4vLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2L3hlbi5o
IC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hl
bi9pbmNsdWRlL3B1YmxpYy94ZW4uaApweXRob24yLjcgbWtoZWFkZXIucHkgaWE2NCBpYTY0
LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUveGVuLWZvcmVpZ24vLi4vLi4vLi4v
eGVuL2luY2x1ZGUvcHVibGljL2FyY2gtaWE2NC5oIC9yb290L3hlbi00LjIuMC90b29scy9p
bmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy94ZW4uaApw
eXRob24yLjcgbWtjaGVja2VyLnB5IGNoZWNrZXIuYyB4ODZfMzIgeDg2XzY0IGlhNjQKZ2Nj
IC1XYWxsIC1XZXJyb3IgLVdzdHJpY3QtcHJvdG90eXBlcyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLW8gY2hlY2tlciBjaGVja2VyLmMKLi9jaGVja2VyID4gdG1wLnNpemUKZGlmZiAtdSBy
ZWZlcmVuY2Uuc2l6ZSB0bXAuc2l6ZQpybSB0bXAuc2l6ZQpnbWFrZVs0XTogTGVhdmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduJwpt
a2RpciAtcCB4ZW4vbGliZWxmCmxuIC1zZiAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVk
ZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMvQ09QWUlORyB4ZW4KbG4gLXNmIC9yb290L3hl
bi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9kb21jdGwu
aCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJs
aWMvdHJhY2UuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5j
bHVkZS9wdWJsaWMvZWxmbm90ZS5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4u
Ly4uL3hlbi9pbmNsdWRlL3B1YmxpYy90bWVtLmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2lu
Y2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL3BsYXRmb3JtLmggL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2XzY0
LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVi
bGljL3BoeXNkZXYuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4v
aW5jbHVkZS9wdWJsaWMveGVuLWNvbXBhdC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNs
dWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9ncmFudF90YWJsZS5oIC9yb290L3hlbi00
LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9jYWxsYmFjay5o
IC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1Ymxp
Yy9zY2hlZC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNs
dWRlL3B1YmxpYy9tZW1vcnkuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8u
Li94ZW4vaW5jbHVkZS9wdWJsaWMvZmVhdHVyZXMuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMveGVuLmggL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2XzMyLmgg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGlj
L2RvbTBfb3BzLmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2lu
Y2x1ZGUvcHVibGljL21lbV9ldmVudC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRl
Ly4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy92ZXJzaW9uLmggL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2V2ZW50X2NoYW5uZWwuaCAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMv
eGVub3Byb2YuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5j
bHVkZS9wdWJsaWMveGVuY29tbS5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4u
Ly4uL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLWFybS5oIC9yb290L3hlbi00LjIuMC90b29s
cy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9ubWkuaCAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC1pYTY0Lmgg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGlj
L2tleGVjLmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1
ZGUvcHVibGljL3N5c2N0bC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4u
L3hlbi9pbmNsdWRlL3B1YmxpYy92Y3B1LmggeGVuCmxuIC1zZiAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC1pYTY0IC9yb290
L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNo
LXg4NiAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9w
dWJsaWMvaHZtIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNs
dWRlL3B1YmxpYy9pbyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4v
aW5jbHVkZS9wdWJsaWMveHNtIHhlbgpsbiAtc2YgLi4veGVuLXN5cy9OZXRCU0QgeGVuL3N5
cwpsbiAtc2YgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1
ZGUveGVuL2xpYmVsZi5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hl
bi9pbmNsdWRlL3hlbi9lbGZzdHJ1Y3RzLmggeGVuL2xpYmVsZi8KbG4gLXMgLi4veGVuLWZv
cmVpZ24geGVuL2ZvcmVpZ24KdG91Y2ggeGVuLy5kaXIKL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3Qv
eGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vYXJjaC1pYTY0
Ci9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2luY2x1ZGUveGVuL2FyY2gtaWE2NC9odm0KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1
ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vYXJjaC14ODYKL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAt
bTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVk
ZS94ZW4vYXJjaC14ODYvaHZtCi9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL2ZvcmVpZ24KL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vaHZt
Ci9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2luY2x1ZGUveGVuL2lvCi9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL3N5cwovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aW5jbHVkZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbi94c20KL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2
NDQgLXAgeGVuL0NPUFlJTkcgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVu
NDIvaW5jbHVkZS94ZW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9v
bHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuLyouaCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aW5jbHVkZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW4vYXJjaC1p
YTY0LyouaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRl
L3hlbi9hcmNoLWlhNjQKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9v
bHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuL2FyY2gtaWE2NC9odm0vKi5oIC9yb290
L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL2FyY2gtaWE2
NC9odm0KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3Mt
aW5zdGFsbCAtbTA2NDQgLXAgeGVuL2FyY2gteDg2LyouaCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbi9hcmNoLXg4Ngovcm9vdC94ZW4tNC4y
LjAvdG9vbHMvaW5jbHVkZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4
ZW4vYXJjaC14ODYvaHZtLyouaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94
ZW40Mi9pbmNsdWRlL3hlbi9hcmNoLXg4Ni9odm0KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2lu
Y2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuL2ZvcmVpZ24v
Ki5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVu
L2ZvcmVpZ24KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuL2h2bS8qLmggL3Jvb3QveGVuLTQuMi4wL2Rpc3Qv
aW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vaHZtCi9yb290L3hlbi00LjIuMC90b29s
cy9pbmNsdWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbi9pby8q
LmggL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4v
aW8KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5z
dGFsbCAtbTA2NDQgLXAgeGVuL3N5cy8qLmggL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFs
bC91c3IveGVuNDIvaW5jbHVkZS94ZW4vc3lzCi9yb290L3hlbi00LjIuMC90b29scy9pbmNs
dWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbi94c20vKi5oIC9y
b290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL3hzbQpn
bWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9pbmNs
dWRlJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29s
cycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
JwpnbWFrZSAtQyBsaWJ4YyBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3Rvcnkg
YC9yb290L3hlbi00LjIuMC90b29scy9saWJ4YycKZ21ha2UgbGlicwpnbWFrZVs0XTogRW50
ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMnCmdjYyAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jb3JlLm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYg
LVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfY29yZS5v
IHhjX2NvcmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jb3JlX3g4Ni5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdt
aXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2NvcmVfeDg2Lm8geGNfY29y
ZV94ODYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jcHVwb29sLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfY3B1cG9vbC5vIHhjX2NwdXBvb2wu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54Y19kb21haW4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90
b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9p
bmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19kb21haW4ubyB4Y19kb21haW4uYyAgLUkvdXNy
L3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1N
TUQgLU1GIC54Y19ldnRjaG4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUku
Li8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1J
LiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1w
dGhyZWFkICAtYyAtbyB4Y19ldnRjaG4ubyB4Y19ldnRjaG4uYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Y19nbnR0YWIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4v
Y29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAt
YyAtbyB4Y19nbnR0YWIubyB4Y19nbnR0YWIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19taXNjLm8u
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJl
bGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfbWlz
Yy5vIHhjX21pc2MuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19mbGFzay5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdt
aXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2ZsYXNrLm8geGNfZmxhc2su
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54Y19waHlzZGV2Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJv
dG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfcGh5c2Rldi5vIHhjX3BoeXNkZXYuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54Y19wcml2YXRlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBl
cyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtcHRocmVhZCAgLWMgLW8geGNfcHJpdmF0ZS5vIHhjX3ByaXZhdGUuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQg
LU1GIC54Y19zZWRmLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVh
ZCAgLWMgLW8geGNfc2VkZi5vIHhjX3NlZGYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jc2NoZWQu
by5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xp
YmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19j
c2NoZWQubyB4Y19jc2NoZWQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8t
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jc2NoZWQyLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdl
cnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfY3NjaGVkMi5v
IHhjX2NzY2hlZDIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19hcmluYzY1My5vLmQgLWZuby1vcHRp
mize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_arinc653.o xc_arinc653.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_tbuf.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_tbuf.o xc_tbuf.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_pm.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_pm.o xc_pm.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_cpu_hotplug.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_cpu_hotplug.o xc_cpu_hotplug.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_resume.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_resume.o xc_resume.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_tmem.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_tmem.o xc_tmem.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_mem_event.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_mem_event.o xc_mem_event.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_mem_paging.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_mem_paging.o xc_mem_paging.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_mem_access.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_mem_access.o xc_mem_access.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_memshr.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_memshr.o xc_memshr.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_hcall_buf.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_hcall_buf.o xc_hcall_buf.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_foreign_memory.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_foreign_memory.o xc_foreign_memory.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xtl_core.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xtl_core.o xtl_core.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xtl_logger_stdio.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xtl_logger_stdio.o xtl_logger_stdio.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_pagetab.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_pagetab.o xc_pagetab.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_netbsd.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_netbsd.o xc_netbsd.c  -I/usr/pkg/include
ar rc libxenctrl.a xc_core.o xc_core_x86.o xc_cpupool.o xc_domain.o xc_evtchn.o xc_gnttab.o xc_misc.o xc_flask.o xc_physdev.o xc_private.o xc_sedf.o xc_csched.o xc_csched2.o xc_arinc653.o xc_tbuf.o xc_pm.o xc_cpu_hotplug.o xc_resume.o xc_tmem.o xc_mem_event.o xc_mem_paging.o xc_mem_access.o xc_memshr.o xc_hcall_buf.o xc_foreign_memory.o xtl_core.o xtl_logger_stdio.o xc_pagetab.o xc_netbsd.o
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_core.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_core.opic xc_core.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_core_x86.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_core_x86.opic xc_core_x86.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_cpupool.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_cpupool.opic xc_cpupool.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_domain.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_domain.opic xc_domain.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_evtchn.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_evtchn.opic xc_evtchn.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_gnttab.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_gnttab.opic xc_gnttab.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_misc.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_misc.opic xc_misc.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_flask.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_flask.opic xc_flask.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_physdev.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_physdev.opic xc_physdev.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_private.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_private.opic xc_private.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_sedf.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_sedf.opic xc_sedf.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_csched.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_csched.opic xc_csched.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_csched2.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_csched2.opic xc_csched2.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_arinc653.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_arinc653.opic xc_arinc653.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_tbuf.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_tbuf.opic xc_tbuf.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_pm.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_pm.opic xc_pm.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_cpu_hotplug.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_cpu_hotplug.opic xc_cpu_hotplug.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_resume.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_resume.opic xc_resume.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_tmem.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_tmem.opic xc_tmem.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_mem_event.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_mem_event.opic xc_mem_event.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_mem_paging.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_mem_paging.opic xc_mem_paging.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_mem_access.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_mem_access.opic xc_mem_access.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_memshr.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_memshr.opic xc_memshr.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_hcall_buf.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_hcall_buf.opic xc_hcall_buf.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_foreign_memory.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_foreign_memory.opic xc_foreign_memory.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xtl_core.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xtl_core.opic xtl_core.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xtl_logger_stdio.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xtl_logger_stdio.opic xtl_logger_stdio.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_pagetab.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_pagetab.opic xc_pagetab.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_netbsd.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_netbsd.opic xc_netbsd.c  -I/usr/pkg/include
gcc    -pthread -Wl,-soname -Wl,libxenctrl.so.4.2 -shared -o libxenctrl.so.4.2.0 xc_core.opic xc_core_x86.opic xc_cpupool.opic xc_domain.opic xc_evtchn.opic xc_gnttab.opic xc_misc.opic xc_flask.opic xc_physdev.opic xc_private.opic xc_sedf.opic xc_csched.opic xc_csched2.opic xc_arinc653.opic xc_tbuf.opic xc_pm.opic xc_cpu_hotplug.opic xc_resume.opic xc_tmem.opic xc_mem_event.opic xc_mem_paging.opic xc_mem_access.opic xc_memshr.opic xc_hcall_buf.opic xc_foreign_memory.opic xtl_core.opic xtl_logger_stdio.opic xc_pagetab.opic xc_netbsd.opic    -L/usr/pkg/lib
ln -sf libxenctrl.so.4.2.0 libxenctrl.so.4.2
ln -sf libxenctrl.so.4.2 libxenctrl.so
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xg_private.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xg_private.o xg_private.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_suspend.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_suspend.o xc_suspend.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_domain_restore.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_domain_restore.o xc_domain_restore.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_domain_save.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_domain_save.o xc_domain_save.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_offline_page.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_offline_page.o xc_offline_page.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_compression.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_compression.o xc_compression.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .libelf-tools.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o libelf-tools.o ../../xen/common/libelf/libelf-tools.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .libelf-loader.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o libelf-loader.o ../../xen/common/libelf/libelf-loader.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .libelf-dominfo.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o libelf-dominfo.o ../../xen/common/libelf/libelf-dominfo.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .libelf-relocate.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o libelf-relocate.o ../../xen/common/libelf/libelf-relocate.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_core.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_dom_core.o xc_dom_core.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_boot.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_dom_boot.o xc_dom_boot.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_elfloader.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_dom_elfloader.o xc_dom_elfloader.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_bzimageloader.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread -DHAVE_BZLIB -lbz2 -DHAVE_LZMA -llzma  -c -o xc_dom_bzimageloader.o xc_dom_bzimageloader.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_binloader.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_dom_binloader.o xc_dom_binloader.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_compat_linux.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_dom_compat_linux.o xc_dom_compat_linux.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_dom_x86.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_dom_x86.o xc_dom_x86.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_cpuid_x86.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_cpuid_x86.o xc_cpuid_x86.c  -I/usr/pkg/include
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_hvm_build_x86.o.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -c -o xc_hvm_build_x86.o xc_hvm_build_x86.c  -I/usr/pkg/include
ar rc libxenguest.a xg_private.o xc_suspend.o xc_domain_restore.o xc_domain_save.o xc_offline_page.o xc_compression.o libelf-tools.o libelf-loader.o libelf-dominfo.o libelf-relocate.o xc_dom_core.o xc_dom_boot.o xc_dom_elfloader.o xc_dom_bzimageloader.o xc_dom_binloader.o xc_dom_compat_linux.o xc_dom_x86.o xc_cpuid_x86.o xc_hvm_build_x86.o
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xg_private.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xg_private.opic xg_private.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xc_suspend.opic.d -fno-optimize-sibling-calls  -I../../xen/common/libelf -Werror -Wmissing-prototypes -I. -I/root/xen-4.2.0/tools/libxc/../../tools/include -pthread  -fPIC -c -o xc_suspend.opic xc_suspend.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-p
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21haW5fcmVzdG9yZS5vcGljLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJy
b3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2RvbWFp
bl9yZXN0b3JlLm9waWMgeGNfZG9tYWluX3Jlc3RvcmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC54Y19kb21haW5fc2F2ZS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAt
SS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMg
LUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2RvbWFpbl9zYXZlLm9waWMgeGNfZG9tYWluX3Nh
dmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19vZmZsaW5lX3BhZ2Uub3BpYy5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vy
cm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19vZmZs
aW5lX3BhZ2Uub3BpYyB4Y19vZmZsaW5lX3BhZ2UuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdj
YyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Y19jb21wcmVzc2lvbi5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4u
Ly4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUku
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0
aHJlYWQgIC1mUElDIC1jIC1vIHhjX2NvbXByZXNzaW9uLm9waWMgeGNfY29tcHJlc3Npb24u
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJlbGYtdG9vbHMub3BpYy5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1X
bWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8u
Li8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyBsaWJlbGYtdG9vbHMu
b3BpYyAuLi8uLi94ZW4vY29tbW9uL2xpYmVsZi9saWJlbGYtdG9vbHMuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC5saWJlbGYtbG9hZGVyLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJv
dG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8gbGliZWxmLWxvYWRlci5vcGljIC4uLy4u
L3hlbi9jb21tb24vbGliZWxmL2xpYmVsZi1sb2FkZXIuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC5saWJlbGYtZG9taW5mby5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAt
SS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMg
LUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LXB0aHJlYWQgIC1mUElDIC1jIC1vIGxpYmVsZi1kb21pbmZvLm9waWMgLi4vLi4veGVuL2Nv
bW1vbi9saWJlbGYvbGliZWxmLWRvbWluZm8uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJl
bGYtcmVsb2NhdGUub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8u
Li94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhy
ZWFkICAtZlBJQyAtYyAtbyBsaWJlbGYtcmVsb2NhdGUub3BpYyAuLi8uLi94ZW4vY29tbW9u
L2xpYmVsZi9saWJlbGYtcmVsb2NhdGUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQ
SUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21f
Y29yZS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9j
b21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1m
UElDIC1jIC1vIHhjX2RvbV9jb3JlLm9waWMgeGNfZG9tX2NvcmUuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1N
TUQgLU1GIC54Y19kb21fYm9vdC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2RvbV9ib290Lm9waWMgeGNfZG9tX2Jvb3Qu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21fZWxmbG9hZGVyLm9waWMuZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfZG9tX2Vs
ZmxvYWRlci5vcGljIHhjX2RvbV9lbGZsb2FkZXIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdj
YyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Y19kb21fYnppbWFnZWxvYWRlci5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLXB0aHJlYWQgLURIQVZFX0JaTElCIC1sYnoyIC1ESEFWRV9MWk1BIC1sbHptYSAgLWZQ
SUMgLWMgLW8geGNfZG9tX2J6aW1hZ2Vsb2FkZXIub3BpYyB4Y19kb21fYnppbWFnZWxvYWRl
ci5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhjX2RvbV9iaW5sb2FkZXIub3BpYy5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vy
cm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19kb21f
YmlubG9hZGVyLm9waWMgeGNfZG9tX2JpbmxvYWRlci5jICAtSS91c3IvcGtnL2luY2x1ZGUK
Z2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LnhjX2RvbV9jb21wYXRfbGludXgub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5
cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNs
dWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19kb21fY29tcGF0X2xpbnV4Lm9waWMgeGNf
ZG9tX2NvbXBhdF9saW51eC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEg
LWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhjX2RvbV94ODYub3Bp
Yy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xp
YmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAt
byB4Y19kb21feDg2Lm9waWMgeGNfZG9tX3g4Ni5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2Nj
ICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhj
X2NwdWlkX3g4Ni5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4u
L3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJl
YWQgIC1mUElDIC1jIC1vIHhjX2NwdWlkX3g4Ni5vcGljIHhjX2NwdWlkX3g4Ni5jICAtSS91
c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLnhjX2h2bV9idWlsZF94ODYub3BpYy5kIC1mbm8tb3B0aW1pemUt
c2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlz
c2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8u
Li90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19odm1fYnVpbGRfeDg2
Lm9waWMgeGNfaHZtX2J1aWxkX3g4Ni5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1X
bCwtc29uYW1lIC1XbCxsaWJ4ZW5ndWVzdC5zby40LjIgLXNoYXJlZCAtbyBsaWJ4ZW5ndWVz
dC5zby40LjIuMCB4Z19wcml2YXRlLm9waWMgeGNfc3VzcGVuZC5vcGljIHhjX2RvbWFpbl9y
ZXN0b3JlLm9waWMgeGNfZG9tYWluX3NhdmUub3BpYyB4Y19vZmZsaW5lX3BhZ2Uub3BpYyB4
Y19jb21wcmVzc2lvbi5vcGljIGxpYmVsZi10b29scy5vcGljIGxpYmVsZi1sb2FkZXIub3Bp
YyBsaWJlbGYtZG9taW5mby5vcGljIGxpYmVsZi1yZWxvY2F0ZS5vcGljIHhjX2RvbV9jb3Jl
Lm9waWMgeGNfZG9tX2Jvb3Qub3BpYyB4Y19kb21fZWxmbG9hZGVyLm9waWMgeGNfZG9tX2J6
aW1hZ2Vsb2FkZXIub3BpYyB4Y19kb21fYmlubG9hZGVyLm9waWMgeGNfZG9tX2NvbXBhdF9s
aW51eC5vcGljIHhjX2RvbV94ODYub3BpYyB4Y19jcHVpZF94ODYub3BpYyB4Y19odm1fYnVp
bGRfeDg2Lm9waWMgLURIQVZFX0JaTElCIC1sYnoyIC1ESEFWRV9MWk1BIC1sbHptYSAtbHog
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0
cmwuc28gICAtTC91c3IvcGtnL2xpYgpsbiAtc2YgbGlieGVuZ3Vlc3Quc28uNC4yLjAgbGli
eGVuZ3Vlc3Quc28uNC4yCmxuIC1zZiBsaWJ4ZW5ndWVzdC5zby40LjIgbGlieGVuZ3Vlc3Qu
c28KZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnhlbmN0cmxfb3NkZXBfRU5PU1lTLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJv
dG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGVuY3RybF9vc2RlcF9FTk9TWVMub3Bp
YyB4ZW5jdHJsX29zZGVwX0VOT1NZUy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjIC1nICAg
IC1zaGFyZWQgLW8geGVuY3RybF9vc2RlcF9FTk9TWVMuc28geGVuY3RybF9vc2RlcF9FTk9T
WVMub3BpYyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvbGlieGMv
bGlieGVuY3RybC5zbyAgLUwvdXNyL3BrZy9saWIKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMnCi9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4Yy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWIKL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290
L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUKL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGxp
YnhlbmN0cmwuc28uNC4yLjAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVu
NDIvbGliCi9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1tMDY0NCAtcCBsaWJ4ZW5jdHJsLmEgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5z
dGFsbC91c3IveGVuNDIvbGliCmxuIC1zZiBsaWJ4ZW5jdHJsLnNvLjQuMi4wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9saWJ4ZW5jdHJsLnNvLjQuMgps
biAtc2YgbGlieGVuY3RybC5zby40LjIgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91
c3IveGVuNDIvbGliL2xpYnhlbmN0cmwuc28KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhj
Ly4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbmN0cmwuaCB4ZW5jdHJs
b3NkZXAuaCB4ZW50b29sbG9nLmggL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3Iv
eGVuNDIvaW5jbHVkZQovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
Y3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgbGlieGVuZ3Vlc3Quc28uNC4yLjAgL3Jvb3QveGVu
LTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCi9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4Yy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBsaWJ4ZW5ndWVz
dC5hIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYgpsbiAtc2Yg
bGlieGVuZ3Vlc3Quc28uNC4yLjAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3Iv
eGVuNDIvbGliL2xpYnhlbmd1ZXN0LnNvLjQuMgpsbiAtc2YgbGlieGVuZ3Vlc3Quc28uNC4y
IC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9saWJ4ZW5ndWVz
dC5zbwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvY3Jvc3MtaW5z
dGFsbCAtbTA2NDQgLXAgeGVuZ3Vlc3QuaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9pbmNsdWRlCmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhjJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9y
b290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBmbGFzayBpbnN0YWxsCmdtYWtlWzNdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9mbGFzaycKZ21ha2Vb
NF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrJwpn
bWFrZSAtQyB1dGlscyBpbnN0YWxsCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9y
b290L3hlbi00LjIuMC90b29scy9mbGFzay91dGlscycKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxvYWRwb2xpY3kuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdhbGwgLWcgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90
b29scy9mbGFzay91dGlscy8uLi8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy9mbGFzay91dGlscy8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBsb2Fk
cG9saWN5Lm8gbG9hZHBvbGljeS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGxvYWRw
b2xpY3kubyAgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4uLy4uL3Rv
b2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gLW8gZmxhc2stbG9hZHBvbGljeQpnY2MgIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuc2V0ZW5mb3JjZS5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2FsbCAtZyAtV2Vycm9yIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1jIC1vIHNldGVuZm9yY2UubyBzZXRlbmZvcmNlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgICAgc2V0ZW5mb3JjZS5vICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMv
Li4vLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAtbyBmbGFzay1zZXRlbmZvcmNl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5nZXRl
bmZvcmNlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XYWxsIC1nIC1XZXJy
b3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMv
bGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9v
bHMvaW5jbHVkZSAgLWMgLW8gZ2V0ZW5mb3JjZS5vIGdldGVuZm9yY2UuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgICBnZXRlbmZvcmNlLm8gIC9yb290L3hlbi00LjIuMC90b29scy9m
bGFzay91dGlscy8uLi8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC1vIGZsYXNr
LWdldGVuZm9yY2UKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1N
RCAtTUYgLmxhYmVsLXBjaS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Fs
bCAtZyAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4u
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIGxhYmVsLXBjaS5vIGxhYmVsLXBjaS5jICAt
SS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGxhYmVsLXBjaS5vICAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAt
byBmbGFzay1sYWJlbC1wY2kKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xT
X18gLU1NRCAtTUYgLmdldC1ib29sLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XYWxsIC1nIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMv
Li4vLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRp
bHMvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gZ2V0LWJvb2wubyBnZXQtYm9vbC5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGdldC1ib29sLm8gIC9yb290L3hlbi00LjIu
MC90b29scy9mbGFzay91dGlscy8uLi8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNv
IC1vIGZsYXNrLWdldC1ib29sCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09M
U19fIC1NTUQgLU1GIC5zZXQtYm9vbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtV2FsbCAtZyAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxz
Ly4uLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0
aWxzLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHNldC1ib29sLm8gc2V0LWJvb2wu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICBzZXQtYm9vbC5vICAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5z
byAtbyBmbGFzay1zZXQtYm9vbAovcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMv
Li4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
Zmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgZmxh
c2stbG9hZHBvbGljeSBmbGFzay1zZXRlbmZvcmNlIGZsYXNrLWdldGVuZm9yY2UgZmxhc2st
bGFiZWwtcGNpIGZsYXNrLWdldC1ib29sIGZsYXNrLXNldC1ib29sIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMnCmdtYWtlWzRdOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrJwpnbWFrZVszXTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9mbGFzaycKZ21ha2Vb
Ml06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMg
eGVuc3RvcmUgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUnCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0
cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hF
Tl9UT09MU19fIC1NTUQgLU1GIC54ZW5zdG9yZV9jbGllbnQuby5kIC1mbm8tb3B0aW1pemUt
c2libGluZy1jYWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVu
c3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3Rv
cmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8geGVuc3RvcmVfY2xpZW50Lm8geGVuc3Rv
cmVfY2xpZW50LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueHMub3BpYy5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVu
c3RvcmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAtRFVTRV9QVEhSRUFEICAtZlBJQyAtYyAtbyB4
cy5vcGljIHhzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueHNfbGliLm9waWMuZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1mUElDIC1jIC1vIHhzX2xpYi5vcGlj
IHhzX2xpYi5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1XbCwtc29uYW1lIC1XbCxs
aWJ4ZW5zdG9yZS5zby4zLjAgLXNoYXJlZCAtbyBsaWJ4ZW5zdG9yZS5zby4zLjAuMSB4cy5v
cGljIHhzX2xpYi5vcGljICAtbHB0aHJlYWQgIC1ML3Vzci9wa2cvbGliCmxuIC1zZiBsaWJ4
ZW5zdG9yZS5zby4zLjAuMSBsaWJ4ZW5zdG9yZS5zby4zLjAKbG4gLXNmIGxpYnhlbnN0b3Jl
LnNvLjMuMCBsaWJ4ZW5zdG9yZS5zbwpnY2MgICAgeGVuc3RvcmVfY2xpZW50Lm8gL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0
b3JlLnNvICAtbyB4ZW5zdG9yZSAgLUwvdXNyL3BrZy9saWIKbG4gLWYgeGVuc3RvcmUgeGVu
c3RvcmUtZXhpc3RzCmxuIC1mIHhlbnN0b3JlIHhlbnN0b3JlLWxpc3QKbG4gLWYgeGVuc3Rv
cmUgeGVuc3RvcmUtcmVhZApsbiAtZiB4ZW5zdG9yZSB4ZW5zdG9yZS1ybQpsbiAtZiB4ZW5z
dG9yZSB4ZW5zdG9yZS1jaG1vZApsbiAtZiB4ZW5zdG9yZSB4ZW5zdG9yZS13cml0ZQpsbiAt
ZiB4ZW5zdG9yZSB4ZW5zdG9yZS1scwpsbiAtZiB4ZW5zdG9yZSB4ZW5zdG9yZS13YXRjaApn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVuc3Rv
cmVfY29udHJvbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1J
LiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRlICAt
YyAtbyB4ZW5zdG9yZV9jb250cm9sLm8geGVuc3RvcmVfY29udHJvbC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAgIHhlbnN0b3JlX2NvbnRyb2wubyAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGVuc3RvcmUvLi4vLi4vdG9vbHMveGVuc3RvcmUvbGlieGVuc3RvcmUuc28gIC1vIHhl
bnN0b3JlLWNvbnRyb2wgIC1ML3Vzci9wa2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54cy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8u
Li8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8u
Li90b29scy9pbmNsdWRlICAtYyAtbyB4cy5vIHhzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueHNfbGli
Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkuIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHhzX2xp
Yi5vIHhzX2xpYi5jICAtSS91c3IvcGtnL2luY2x1ZGUKYXIgcmNzIGxpYnhlbnN0b3JlLmEg
eHMubyB4c19saWIubwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1n
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAt
TU1EIC1NRiAueHNfdGRiX2R1bXAuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAg
LVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9v
bHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMv
aW5jbHVkZSAgLWMgLW8geHNfdGRiX2R1bXAubyB4c190ZGJfZHVtcC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnV0aWxzLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUku
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhjIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1j
IC1vIHV0aWxzLm8gdXRpbHMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8t
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC50ZGIuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVu
c3RvcmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gdGRiLm8gdGRiLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAudGFsbG9jLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3Ig
LUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhj
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1jIC1vIHRhbGxvYy5vIHRhbGxvYy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIHhz
X3RkYl9kdW1wLm8gdXRpbHMubyB0ZGIubyB0YWxsb2MubyAtbyB4c190ZGJfZHVtcCAgLUwv
dXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1N
RCAtTUYgLnhlbnN0b3JlZF9jb3JlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgIC1jIC1vIHhlbnN0b3JlZF9jb3JlLm8geGVuc3RvcmVkX2NvcmUuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54ZW5zdG9yZWRfd2F0Y2guby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4v
Li4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4v
dG9vbHMvaW5jbHVkZSAgLWMgLW8geGVuc3RvcmVkX3dhdGNoLm8geGVuc3RvcmVkX3dhdGNo
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5f
VE9PTFNfXyAtTU1EIC1NRiAueGVuc3RvcmVkX2RvbWFpbi5vLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5z
dG9yZS8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9y
ZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4ZW5zdG9yZWRfZG9tYWluLm8geGVuc3Rv
cmVkX2RvbWFpbi5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbnN0b3JlZF90cmFuc2FjdGlvbi5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JLiAtSS9yb290L3hlbi00
LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4ZW5zdG9yZWRf
dHJhbnNhY3Rpb24ubyB4ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5o
YXNodGFibGUuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS4g
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMg
LW8gaGFzaHRhYmxlLm8gaGFzaHRhYmxlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVuc3RvcmVkX25l
dGJzZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JLiAtSS9y
b290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4
ZW5zdG9yZWRfbmV0YnNkLm8geGVuc3RvcmVkX25ldGJzZC5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhl
bnN0b3JlZF9wb3NpeC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9y
IC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4
YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRl
ICAtYyAtbyB4ZW5zdG9yZWRfcG9zaXgubyB4ZW5zdG9yZWRfcG9zaXguYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgICB4ZW5zdG9yZWRfY29yZS5vIHhlbnN0b3JlZF93YXRjaC5vIHhl
bnN0b3JlZF9kb21haW4ubyB4ZW5zdG9yZWRfdHJhbnNhY3Rpb24ubyB4c19saWIubyB0YWxs
b2MubyB1dGlscy5vIHRkYi5vIGhhc2h0YWJsZS5vIHhlbnN0b3JlZF9uZXRic2QubyB4ZW5z
dG9yZWRfcG9zaXgubyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9v
bHMvbGlieGMvbGlieGVuY3RybC5zbyAgLW8geGVuc3RvcmVkICAtTC91c3IvcGtnL2xpYgov
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFs
bCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIv
YmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94
ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9j
cm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9pbmNsdWRlCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8u
Li90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbnN0b3JlLWNvbXBhdAovcm9vdC94ZW4t
NC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3
NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC92YXIvcnVuL3hlbnN0b3JlZAov
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFs
bCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC92YXIvbGliL3hl
bnN0b3JlZAovcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA3NTUgLXAgeGVuc3RvcmVkIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL3NiaW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIHhlbnN0b3JlLWNvbnRyb2wgL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmluCi9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4
ZW5zdG9yZSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9iaW4Kc2V0
IC1lIDsgZm9yIGMgaW4geGVuc3RvcmUtZXhpc3RzIHhlbnN0b3JlLWxpc3QgeGVuc3RvcmUt
cmVhZCB4ZW5zdG9yZS1ybSB4ZW5zdG9yZS1jaG1vZCB4ZW5zdG9yZS13cml0ZSB4ZW5zdG9y
ZS1scyB4ZW5zdG9yZS13YXRjaCA7IGRvIFwKCWxuIC1mIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2Jpbi94ZW5zdG9yZSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
bnN0YWxsL3Vzci94ZW40Mi9iaW4vJHtjfSA7IFwKZG9uZQovcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCi9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCBs
aWJ4ZW5zdG9yZS5zby4zLjAuMSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94
ZW40Mi9saWIKbG4gLXNmIGxpYnhlbnN0b3JlLnNvLjMuMC4xIC9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9saWJ4ZW5zdG9yZS5zby4zLjAKbG4gLXNmIGxp
YnhlbnN0b3JlLnNvLjMuMCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40
Mi9saWIvbGlieGVuc3RvcmUuc28KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIGxpYnhlbnN0b3JlLmEgL3Jvb3Qv
eGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCi9yb290L3hlbi00LjIuMC90
b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW5z
dG9yZS5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUK
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLW0wNjQ0IC1wIHhlbnN0b3JlX2xpYi5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3Rh
bGwvdXNyL3hlbjQyL2luY2x1ZGUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIGNvbXBhdC94cy5oIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuc3RvcmUtY29tcGF0
L3hzLmgKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2Nyb3Nz
LWluc3RhbGwgLW0wNjQ0IC1wIGNvbXBhdC94c19saWIuaCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbnN0b3JlLWNvbXBhdC94c19saWIuaAps
biAtc2YgeGVuc3RvcmUtY29tcGF0L3hzLmggIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3Rh
bGwvdXNyL3hlbjQyL2luY2x1ZGUveHMuaApsbiAtc2YgeGVuc3RvcmUtY29tcGF0L3hzX2xp
Yi5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveHNf
bGliLmgKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGVuc3RvcmUnCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMnCmdtYWtlIC1DIG1pc2MgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYycKZ2NjICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbnBlcmYuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNj
Ly4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5z
dG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8geGVucGVy
Zi5vIHhlbnBlcmYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW5wZXJmIHhl
bnBlcmYubyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4Yy9s
aWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVucG0uby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8geGVucG0ubyB4ZW5wbS5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1vIHhlbnBtIHhlbnBtLm8gL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAgLUwv
dXNyL3BrZy9saWIKZ2NjIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAueGVuLXRtZW0tbGlzdC1wYXJzZS5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hl
bi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgICAgIHhlbi10bWVtLWxpc3QtcGFyc2UuYyAg
IC1vIHhlbi10bWVtLWxpc3QtcGFyc2UKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVO
X1RPT0xTX18gLU1NRCAtTUYgLmd0cmFjZXZpZXcuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8gZ3RyYWNldmlldy5vIGd0
cmFjZXZpZXcuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyBndHJhY2V2aWV3IGd0
cmFjZXZpZXcubyAtbGN1cnNlcyAgLUwvdXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmd0cmFjZXN0YXQuby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9t
aXNjLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94
ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8gZ3Ry
YWNlc3RhdC5vIGd0cmFjZXN0YXQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyBn
dHJhY2VzdGF0IGd0cmFjZXN0YXQubyAgLUwvdXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbmxvY2twcm9mLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9v
bHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9p
bmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMgIC1jIC1v
IHhlbmxvY2twcm9mLm8geGVubG9ja3Byb2YuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
ICAtbyB4ZW5sb2NrcHJvZiB4ZW5sb2NrcHJvZi5vIC9yb290L3hlbi00LjIuMC90b29scy9t
aXNjLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gIC1ML3Vzci9wa2cvbGliCmdj
YyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW53YXRj
aGRvZ2Quby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290
L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90
b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scyAgLWMgLW8geGVud2F0Y2hkb2dkLm8geGVud2F0Y2hkb2dkLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgICAgLW8geGVud2F0Y2hkb2dkIHhlbndhdGNoZG9nZC5vIC9yb290
L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
IC1ML3Vzci9wa2cvbGliCmdjYyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
ZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18g
LU1NRCAtTUYgLnhlbi1kZXRlY3QuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1X
ZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL21pc2MvLi4vLi4vdG9vbHMgICAgICB4ZW4tZGV0ZWN0LmMgICAtbyB4ZW4tZGV0ZWN0
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW4t
aHZtY3R4Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00
LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L21pc2MvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlz
Yy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4v
Li4vdG9vbHMgIC1jIC1vIHhlbi1odm1jdHgubyB4ZW4taHZtY3R4LmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgLW8geGVuLWh2bWN0eCB4ZW4taHZtY3R4Lm8gL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAgLUwvdXNy
L3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnhlbi1odm1jcmFzaC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vy
cm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00
LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9taXNjLy4uLy4uL3Rvb2xzICAtYyAtbyB4ZW4taHZtY3Jhc2gubyB4ZW4taHZtY3Jhc2gu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW4taHZtY3Jhc2ggeGVuLWh2bWNy
YXNoLm8gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGli
eGVuY3RybC5zbyAgLUwvdXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9f
WEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbi1sb3dtZW1kLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMveGVuc3RvcmUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMgIC1jIC1vIHhlbi1sb3dtZW1k
Lm8geGVuLWxvd21lbWQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW4tbG93
bWVtZCB4ZW4tbG93bWVtZC5vIC9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4v
Li4vdG9vbHMveGVuc3RvcmUvbGlieGVuc3RvcmUuc28gIC1ML3Vzci9wa2cvbGliCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW4taHB0b29s
Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90
b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2Mv
Li4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9v
bHMgIC1jIC1vIHhlbi1ocHRvb2wubyB4ZW4taHB0b29sLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgICAgLW8geGVuLWhwdG9vbCB4ZW4taHB0b29sLm8gL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5ndWVzdC5zbyAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZS9saWJ4ZW5zdG9yZS5z
byAgLUwvdXNyL3BrZy9saWIKc2V0IC1lOyBmb3IgZCBpbiA7IGRvIGdtYWtlIC1DICRkOyBk
b25lCi9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9jcm9zcy1pbnN0
YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40
Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2Nyb3NzLWlu
c3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hl
bjQyL2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9weXRob24v
aW5zdGFsbC13cmFwICIvdXNyL3BrZy9iaW4vcHl0aG9uMi43IiAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbWlzYy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW5jb25z
IHhlbi1kZXRlY3QgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmlu
Ci9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL3B5dGhvbi9pbnN0YWxs
LXdyYXAgIi91c3IvcGtnL2Jpbi9weXRob24yLjciIC9yb290L3hlbi00LjIuMC90b29scy9t
aXNjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIHhtIHhlbi1idWd0b29s
IHhlbi1weXRob24tcGF0aCB4ZW5kIHhlbnBlcmYgeHN2aWV3IHhlbnBtIHhlbi10bWVtLWxp
c3QtcGFyc2UgZ3RyYWNldmlldyBndHJhY2VzdGF0IHhlbmxvY2twcm9mIHhlbndhdGNoZG9n
ZCB4ZW4tcmluZ3dhdGNoIHhlbi1odm1jdHggeGVuLWh2bWNyYXNoIHhlbi1sb3dtZW1kIHhl
bi1ocHRvb2wgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbgov
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9weXRob24vaW5zdGFsbC13
cmFwICIvdXNyL3BrZy9iaW4vcHl0aG9uMi43IiAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlz
Yy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW5wdm5ldGJvb3QgL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmluCnNldCAtZTsgZm9yIGQg
aW4gOyBkbyBnbWFrZSAtQyAkZCBpbnN0YWxsLXJlY3Vyc2U7IGRvbmUKZ21ha2VbM106IExl
YXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYycKZ21ha2VbMl06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMgZXhh
bXBsZXMgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvZXhhbXBsZXMnClsgLWQgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFs
bC91c3IveGVuNDIvZXRjL3hlbiBdIHx8IFwKCS9yb290L3hlbi00LjIuMC90b29scy9leGFt
cGxlcy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4t
NC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9ldGMveGVuCnNldCAtZTsgZm9yIGkgaW4g
UkVBRE1FIFJFQURNRS5pbmNvbXBhdGliaWxpdGllczsgXAogICAgZG8gWyAtZSAvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9ldGMveGVuLyRpIF0gfHwgXAogICAg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2V4YW1wbGVzLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLW0wNjQ0IC1wICRpIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2V0Yy94ZW47IFwKZG9uZQpbIC1kIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNy
L3hlbjQyL2V0Yy94ZW4gXSB8fCBcCgkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZXhhbXBsZXMv
Li4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4w
L2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbgpbIC1kIC9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL3hlbjQyL2V0Yy94ZW4vYXV0byBdIHx8IFwKCS9yb290L3hlbi00
LjIuMC90b29scy9leGFtcGxlcy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1
NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9ldGMveGVuL2F1
dG8Kc2V0IC1lOyBmb3IgaSBpbiB4ZW5kLWNvbmZpZy5zeHAgeG0tY29uZmlnLnhtbCB4bWV4
YW1wbGUxICB4bWV4YW1wbGUyIHhtZXhhbXBsZTMgeG1leGFtcGxlLmh2bSB4bWV4YW1wbGUu
aHZtLXN0dWJkb20geG1leGFtcGxlLnB2LWdydWIgeG1leGFtcGxlLm5iZCB4bWV4YW1wbGUu
dnRpIHhsZXhhbXBsZS5odm0geGxleGFtcGxlLnB2bGludXggeGVuZC1wY2ktcXVpcmtzLnN4
cCB4ZW5kLXBjaS1wZXJtaXNzaXZlLnN4cCB4bC5jb25mIGNwdXBvb2w7IFwKICAgIGRvIFsg
LWUgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbi8kaSBd
IHx8IFwKICAgIC9yb290L3hlbi00LjIuMC90b29scy9leGFtcGxlcy8uLi8uLi90b29scy9j
cm9zcy1pbnN0YWxsIC1tMDY0NCAtcCAkaSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9ldGMveGVuOyBcCmRvbmUKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZXhhbXBsZXMnCmdtYWtlWzJdOiBMZWF2aW5nIGRp
cmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlIC1DIGhvdHBsdWcgaW5zdGFs
bApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aG90cGx1ZycKZ21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2hvdHBsdWcnCmdtYWtlIC1DIGNvbW1vbiBpbnN0YWxsCmdtYWtlWzVdOiBFbnRl
cmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ob3RwbHVnL2NvbW1vbicK
cm0gLWYgImhvdHBsdWdwYXRoLnNoIi50bXA7ICBlY2hvICJTQklORElSPVwiL3Vzci94ZW40
Mi9zYmluXCIiID4+ImhvdHBsdWdwYXRoLnNoIi50bXA7ICBlY2hvICJCSU5ESVI9XCIvdXNy
L3hlbjQyL2JpblwiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAiTElCRVhFQz1c
Ii91c3IveGVuNDIvbGliZXhlY1wiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAi
TElCRElSPVwiL3Vzci94ZW40Mi9saWJcIiIgPj4iaG90cGx1Z3BhdGguc2giLnRtcDsgIGVj
aG8gIlNIQVJFRElSPVwiL3Vzci94ZW40Mi9zaGFyZVwiIiA+PiJob3RwbHVncGF0aC5zaCIu
dG1wOyAgZWNobyAiUFJJVkFURV9CSU5ESVI9XCIvdXNyL3hlbjQyL2JpblwiIiA+PiJob3Rw
bHVncGF0aC5zaCIudG1wOyAgZWNobyAiWEVORklSTVdBUkVESVI9XCIvdXNyL3hlbjQyL2xp
Yi94ZW4vYm9vdFwiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAiWEVOX0NPTkZJ
R19ESVI9XCIvdXNyL3hlbjQyL2V0Yy94ZW5cIiIgPj4iaG90cGx1Z3BhdGguc2giLnRtcDsg
IGVjaG8gIlhFTl9TQ1JJUFRfRElSPVwiL3Vzci94ZW40Mi9ldGMveGVuL3NjcmlwdHNcIiIg
Pj4iaG90cGx1Z3BhdGguc2giLnRtcDsgIGVjaG8gIlhFTl9MT0NLX0RJUj1cIi91c3IveGVu
NDIvdmFyL2xpYlwiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAiWEVOX1JVTl9E
SVI9XCIvdXNyL3hlbjQyL3Zhci9ydW4veGVuXCIiID4+ImhvdHBsdWdwYXRoLnNoIi50bXA7
ICBlY2hvICJYRU5fUEFHSU5HX0RJUj1cIi91c3IveGVuNDIvdmFyL2xpYi94ZW4veGVucGFn
aW5nXCIiID4+ImhvdHBsdWdwYXRoLnNoIi50bXA7IAlpZiAhIGNtcCAtcyAiaG90cGx1Z3Bh
dGguc2giLnRtcCAiaG90cGx1Z3BhdGguc2giOyB0aGVuIG12IC1mICJob3RwbHVncGF0aC5z
aCIudG1wICJob3RwbHVncGF0aC5zaCI7IGVsc2Ugcm0gLWYgImhvdHBsdWdwYXRoLnNoIi50
bXA7IGZpClsgLWQgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRj
L3hlbi9zY3JpcHRzIF0gfHwgXAoJL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2hvdHBsdWcvY29t
bW9uLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2V0Yy94ZW4vc2NyaXB0cwpzZXQgLWU7
IGZvciBpIGluICJob3RwbHVncGF0aC5zaCI7IFwKICAgZG8gXAogICAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvaG90cGx1Zy9jb21tb24vLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAt
bTA3NTUgLXAgJGkgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRj
L3hlbi9zY3JpcHRzOyBcCmRvbmUKc2V0IC1lOyBmb3IgaSBpbiA7IFwKICAgZG8gXAogICAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9jb21tb24vLi4vLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA2NDQgLXAgJGkgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91
c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRzOyBcCmRvbmUKZ21ha2VbNV06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9jb21tb24nCmdtYWtlWzRd
OiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2hvdHBsdWcnCmdt
YWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ob3Rw
bHVnJwpnbWFrZSAtQyBOZXRCU0QgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9OZXRCU0QnCi9yb290L3hlbi00
LjIuMC90b29scy9ob3RwbHVnL05ldEJTRC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxs
IC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9l
dGMveGVuL3NjcmlwdHMKc2V0IC1lOyBmb3IgaSBpbiAgYmxvY2sgdmlmLWJyaWRnZSB2aWYt
aXA7IFwKICAgZG8gXAogICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9OZXRCU0Qv
Li4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgJGkgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRzOyBcCmRvbmUKc2V0
IC1lOyBmb3IgaSBpbiA7IFwKICAgZG8gXAogICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90
cGx1Zy9OZXRCU0QvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgJGkg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRz
OyBcCmRvbmUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2hvdHBsdWcvTmV0QlNELy4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2V0Yy9yYy5kCnNldCAtZTsgZm9yIGkgaW4gcmMuZC94ZW5j
b21tb25zIHJjLmQveGVuZCByYy5kL3hlbmRvbWFpbnMgcmMuZC94ZW4td2F0Y2hkb2c7IFwK
ICAgZG8gXAogICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9OZXRCU0QvLi4vLi4v
Li4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgJGkgL3Jvb3QveGVuLTQuMi4wL2Rp
c3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3JjLmQ7IFwKZG9uZQovcm9vdC94ZW4tNC4yLjAv
dG9vbHMvaG90cGx1Zy9OZXRCU0QvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2
NDQgLXAgLi4vY29tbW9uL2hvdHBsdWdwYXRoLnNoIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2V0Yy9yYy5kL3hlbi1ob3RwbHVncGF0aC5zaApnbWFrZVs1XTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ob3RwbHVnL05ldEJT
RCcKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aG90cGx1ZycKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvaG90cGx1ZycKZ21ha2VbMl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scycKZ21ha2UgLUMgeGVudHJhY2UgaW5zdGFsbApnbWFrZVszXTogRW50ZXJp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVudHJhY2UnCmdjYyAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW50cmFjZS5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL3hlbnRyYWNlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbnRyYWNlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHhlbnRyYWNlLm8geGVu
dHJhY2UuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW50cmFjZSB4ZW50cmFj
ZS5vIC9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9saWJ4Yy9s
aWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuc2V0c2l6ZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJs
aW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHNldHNpemUubyBzZXRzaXplLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgLW8geGVudHJhY2Vfc2V0c2l6ZSBzZXRzaXplLm8gL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
IC1ML3Vzci9wa2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54ZW5jdHguby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdl
cnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9saWJ4
YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9pbmNsdWRl
ICAtYyAtbyB4ZW5jdHgubyB4ZW5jdHguYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAt
byB4ZW5jdHggeGVuY3R4Lm8gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4u
L3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gIC1ML3Vzci9wa2cvbGliCi9yb290L3hlbi00
LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1
NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9iaW4KWyAteiAi
eGVuY3R4IiBdIHx8IC9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29s
cy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0
YWxsL3Vzci94ZW40Mi9iaW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL3NoYXJlL21hbi9tYW4xCi9yb290L3hlbi00LjIuMC90b29s
cy94ZW50cmFjZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9v
dC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zaGFyZS9tYW4vbWFuOAovcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVudHJhY2UvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAt
bTA3NTUgLXAgeGVudHJhY2UgeGVudHJhY2Vfc2V0c2l6ZSAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9iaW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNl
Ly4uLy4uL3Rvb2xzL3B5dGhvbi9pbnN0YWxsLXdyYXAgIi91c3IvcGtnL2Jpbi9weXRob24y
LjciIC9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1tMDc1NSAtcCB4ZW50cmFjZV9mb3JtYXQgL3Jvb3QveGVuLTQuMi4wL2Rpc3Qv
aW5zdGFsbC91c3IveGVuNDIvYmluClsgLXogInhlbmN0eCIgXSB8fCAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMveGVudHJhY2UvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAg
eGVuY3R4IC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2Jpbgovcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVudHJhY2UvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAt
bTA2NDQgLXAgeGVudHJhY2VfZm9ybWF0LjEgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFs
bC91c3IveGVuNDIvc2hhcmUvbWFuL21hbjEKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRy
YWNlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbnRyYWNlLjggL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUvbWFuL21hbjgKZ21h
a2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVudHJh
Y2UnCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
JwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMn
CmdtYWtlIC1DIHhjdXRpbHMgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscycKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhjX3Jlc3RvcmUuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0
aWxzLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRpbHMv
Li4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRpbHMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtYyAtbyB4Y19yZXN0b3JlLm8geGNfcmVzdG9yZS5jICAtSS91c3Iv
cGtnL2luY2x1ZGUKZ2NjICAgIHhjX3Jlc3RvcmUubyAtbyB4Y19yZXN0b3JlIC9yb290L3hl
bi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRpbHMvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVu
Z3Vlc3Quc28gIC1ML3Vzci9wa2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0
cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hF
Tl9UT09MU19fIC1NTUQgLU1GIC54Y19zYXZlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90
b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9s
aWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy94ZW5zdG9y
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LWMgLW8geGNfc2F2ZS5vIHhjX3NhdmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICB4
Y19zYXZlLm8gLW8geGNfc2F2ZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8u
Li90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC9yb290L3hlbi00LjIuMC90b29scy94Y3V0
aWxzLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmd1ZXN0LnNvIC9yb290L3hlbi00LjIuMC90
b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAtTC91
c3IvcGtnL2xpYgpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAucmVhZG5vdGVzLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJy
b3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLWMgLW8gcmVh
ZG5vdGVzLm8gcmVhZG5vdGVzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgcmVhZG5v
dGVzLm8gLW8gcmVhZG5vdGVzIC9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4u
L3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRp
bHMvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuZ3Vlc3Quc28gIC1ML3Vzci9wa2cvbGliCmdj
YyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5sc2V2dGNo
bi5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hjdXRpbHMvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9pbmNsdWRlIC1jIC1vIGxzZXZ0Y2huLm8g
bHNldnRjaG4uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICBsc2V2dGNobi5vIC1vIGxz
ZXZ0Y2huIC9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2xpYnhj
L2xpYnhlbmN0cmwuc28gIC1ML3Vzci9wa2cvbGliCi9yb290L3hlbi00LjIuMC90b29scy94
Y3V0aWxzLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGN1dGlscy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4Y19yZXN0
b3JlIHhjX3NhdmUgcmVhZG5vdGVzIGxzZXZ0Y2huIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2JpbgpnbWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scy94Y3V0aWxzJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3Rvcnkg
YC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBmaXJtd2FyZSBpbnN0YWxsCmdtYWtl
WzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2Fy
ZScKR0lUPWdpdCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvLi4vLi4vc2NyaXB0
cy9naXQtY2hlY2tvdXQuc2ggZ2l0Oi8veGVuYml0cy54ZW4ub3JnL3NlYWJpb3MuZ2l0IHJl
bC0xLjYuMy4yIHNlYWJpb3MtZGlyCkNsb25pbmcgaW50byAnc2VhYmlvcy1kaXItcmVtb3Rl
LnRtcCcuLi4KcmVtb3RlOiBDb3VudGluZyBvYmplY3RzOiA2NDkwLCBkb25lLhtbSwpyZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgMCUgKDEvMTM5MSkgICAbW0sNcmVtb3RlOiBD
b21wcmVzc2luZyBvYmplY3RzOiAgIDElICgxNC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAgMiUgKDI4LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Np
bmcgb2JqZWN0czogICAzJSAoNDIvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBv
YmplY3RzOiAgIDQlICg1Ni8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICAgNSUgKDcwLzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czog
ICA2JSAoODQvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDcl
ICg5OC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgOCUgKDEx
Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgOSUgKDEyNi8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMCUgKDE0MC8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMSUgKDE1NC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMiUgKDE2Ny8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMyUgKDE4MS8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNCUgKDE5NS8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNSUgKDIwOS8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNiUgKDIyMy8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICAxNyUgKDIzNy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAxOCUgKDI1MS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICAxOSUgKDI2NS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICAyMCUgKDI3OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICAyMSUgKDI5My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICAyMiUgKDMwNy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICAyMyUgKDMyMC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAy
NCUgKDMzNC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNSUg
KDM0OC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNiUgKDM2
Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNyUgKDM3Ni8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyOCUgKDM5MC8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyOSUgKDQwNC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMCUgKDQxOC8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMSUgKDQzMi8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMiUgKDQ0Ni8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMyUgKDQ2MC8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICAzNCUgKDQ3My8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICAzNSUgKDQ4Ny8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAzNiUgKDUwMS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICAzNyUgKDUxNS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICAzOCUgKDUyOS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICAzOSUgKDU0My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICA0MCUgKDU1Ny8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICA0MSUgKDU3MS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0
MiUgKDU4NS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MyUg
KDU5OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NCUgKDYx
My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NSUgKDYyNi8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NiUgKDY0MC8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NyUgKDY1NC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0OCUgKDY2OC8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0OSUgKDY4Mi8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MCUgKDY5Ni8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MSUgKDcxMC8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MiUgKDcyNC8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICA1MyUgKDczOC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA1NCUgKDc1Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICA1NSUgKDc2Ni8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICA1NiUgKDc3OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICA1NyUgKDc5My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICA1OCUgKDgwNy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICA1OSUgKDgyMS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2
MCUgKDgzNS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MSUg
KDg0OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MiUgKDg2
My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MyUgKDg3Ny8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NCUgKDg5MS8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NSUgKDkwNS8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NiUgKDkxOS8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NyUgKDkzMi8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2OCUgKDk0Ni8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2OSUgKDk2MC8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICA3MCUgKDk3NC8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICA3MSUgKDk4OC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA3MiUgKDEwMDIvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVz
c2luZyBvYmplY3RzOiAgNzMlICgxMDE2LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Np
bmcgb2JqZWN0czogIDc0JSAoMTAzMC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICA3NSUgKDEwNDQvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBv
YmplY3RzOiAgNzYlICgxMDU4LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2Jq
ZWN0czogIDc3JSAoMTA3Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICA3OCUgKDEwODUvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3Rz
OiAgNzklICgxMDk5LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czog
IDgwJSAoMTExMy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA4
MSUgKDExMjcvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgODIl
ICgxMTQxLzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDgzJSAo
MTE1NS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA4NCUgKDEx
NjkvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgODUlICgxMTgz
LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDg2JSAoMTE5Ny8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA4NyUgKDEyMTEvMTM5
MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgODglICgxMjI1LzEzOTEp
ICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDg5JSAoMTIzOC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA5MCUgKDEyNTIvMTM5MSkgICAb
W0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgOTElICgxMjY2LzEzOTEpICAgG1tL
DXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDkyJSAoMTI4MC8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA5MyUgKDEyOTQvMTM5MSkgICAbW0sNcmVt
b3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgOTQlICgxMzA4LzEzOTEpICAgG1tLDXJlbW90
ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDk1JSAoMTMyMi8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICA5NiUgKDEzMzYvMTM5MSkgICAbW0sNcmVtb3RlOiBD
b21wcmVzc2luZyBvYmplY3RzOiAgOTclICgxMzUwLzEzOTEpICAgG1tLDXJlbW90ZTogQ29t
cHJlc3Npbmcgb2JqZWN0czogIDk4JSAoMTM2NC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA5OSUgKDEzNzgvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVz
c2luZyBvYmplY3RzOiAxMDAlICgxMzkxLzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Np
bmcgb2JqZWN0czogMTAwJSAoMTM5MS8xMzkxKSwgZG9uZS4bW0sKUmVjZWl2aW5nIG9iamVj
dHM6ICAgMCUgKDEvNjQ5MCkgICANUmVjZWl2aW5nIG9iamVjdHM6ICAgMSUgKDY1LzY0OTAp
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgIDIlICgxMzAvNjQ5MCkgICANUmVjZWl2aW5nIG9i
amVjdHM6ICAgMyUgKDE5NS82NDkwKSAgIA1SZWNlaXZpbmcgb2JqZWN0czogICA0JSAoMjYw
LzY0OTApICAgDVJlY2VpdmluZyBvYmplY3RzOiAgIDUlICgzMjUvNjQ5MCkgICANUmVjZWl2
aW5nIG9iamVjdHM6ICAgNiUgKDM5MC82NDkwKSwgMTAwLjAwIEtpQiB8IDE3MiBLaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogICA3JSAoNDU1LzY0OTApLCAxMDAuMDAgS2lCIHwgMTcy
IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgIDglICg1MjAvNjQ5MCksIDEwMC4wMCBL
aUIgfCAxNzIgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAgOSUgKDU4NS82NDkwKSwg
MTAwLjAwIEtpQiB8IDE3MiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDEwJSAoNjQ5
LzY0OTApLCAxMDAuMDAgS2lCIHwgMTcyIEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
MTElICg3MTQvNjQ5MCksIDEwMC4wMCBLaUIgfCAxNzIgS2lCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICAxMiUgKDc3OS82NDkwKSwgMTAwLjAwIEtpQiB8IDE3MiBLaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDEzJSAoODQ0LzY0OTApLCAxMDAuMDAgS2lCIHwgMTcyIEtpQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMTQlICg5MDkvNjQ5MCksIDEwMC4wMCBLaUIgfCAx
NzIgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAxNSUgKDk3NC82NDkwKSwgMTAwLjAw
IEtpQiB8IDE3MiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDE2JSAoMTAzOS82NDkw
KSwgMTAwLjAwIEtpQiB8IDE3MiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDE2JSAo
MTA4NC82NDkwKSwgMTAwLjAwIEtpQiB8IDE3MiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDE3JSAoMTEwNC82NDkwKSwgMzI0LjAwIEtpQiB8IDI4MSBLaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDE4JSAoMTE2OS82NDkwKSwgMzI0LjAwIEtpQiB8IDI4MSBLaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDE5JSAoMTIzNC82NDkwKSwgNDg0LjAwIEtpQiB8IDI3
NiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDE5JSAoMTI2NC82NDkwKSwgNDg0LjAw
IEtpQiB8IDI3NiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDIwJSAoMTI5OC82NDkw
KSwgNDg0LjAwIEtpQiB8IDI3NiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDIxJSAo
MTM2My82NDkwKSwgNjM2LjAwIEtpQiB8IDI3MyBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDIyJSAoMTQyOC82NDkwKSwgNjM2LjAwIEtpQiB8IDI3MyBLaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDIzJSAoMTQ5My82NDkwKSwgNjM2LjAwIEtpQiB8IDI3MyBLaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDI0JSAoMTU1OC82NDkwKSwgNjM2LjAwIEtpQiB8IDI3
MyBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDI1JSAoMTYyMy82NDkwKSwgNjM2LjAw
IEtpQiB8IDI3MyBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDI2JSAoMTY4OC82NDkw
KSwgNjM2LjAwIEtpQiB8IDI3MyBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDI3JSAo
MTc1My82NDkwKSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDI4JSAoMTgxOC82NDkwKSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDI4JSAoMTgzMi82NDkwKSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDI5JSAoMTg4My82NDkwKSwgODYwLjAwIEtpQiB8IDI5
NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDMwJSAoMTk0Ny82NDkwKSwgODYwLjAw
IEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDMxJSAoMjAxMi82NDkw
KSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDMyJSAo
MjA3Ny82NDkwKSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDMzJSAoMjE0Mi82NDkwKSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDM0JSAoMjIwNy82NDkwKSwgODYwLjAwIEtpQiB8IDI5NSBLaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDM1JSAoMjI3Mi82NDkwKSwgODYwLjAwIEtpQiB8IDI5
NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDM2JSAoMjMzNy82NDkwKSwgODYwLjAw
IEtpQiB8IDI5NSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDM3JSAoMjQwMi82NDkw
KSwgMS4wMiBNaUIgfCAzMDUgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzOCUgKDI0
NjcvNjQ5MCksIDEuMDIgTWlCIHwgMzA1IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
MzklICgyNTMyLzY0OTApLCAxLjAyIE1pQiB8IDMwNSBLaUIvcyAgIA1SZWNlaXZpbmcgb2Jq
ZWN0czogIDQwJSAoMjU5Ni82NDkwKSwgMS4wMiBNaUIgfCAzMDUgS2lCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICA0MSUgKDI2NjEvNjQ5MCksIDEuMDIgTWlCIHwgMzA1IEtpQi9zICAg
DVJlY2VpdmluZyBvYmplY3RzOiAgNDIlICgyNzI2LzY0OTApLCAxLjAyIE1pQiB8IDMwNSBL
aUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQzJSAoMjc5MS82NDkwKSwgMS4wMiBNaUIg
fCAzMDUgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA0NCUgKDI4NTYvNjQ5MCksIDEu
MDIgTWlCIHwgMzA1IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNDUlICgyOTIxLzY0
OTApLCAxLjAyIE1pQiB8IDMwNSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQ2JSAo
Mjk4Ni82NDkwKSwgMS4wMiBNaUIgfCAzMDUgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA0NyUgKDMwNTEvNjQ5MCksIDEuMDIgTWlCIHwgMzA1IEtpQi9zICAgDVJlY2VpdmluZyBv
YmplY3RzOiAgNDglICgzMTE2LzY0OTApLCAxLjAyIE1pQiB8IDMwNSBLaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDQ5JSAoMzE4MS82NDkwKSwgMS4wMiBNaUIgfCAzMDUgS2lCL3Mg
ICANUmVjZWl2aW5nIG9iamVjdHM6ICA1MCUgKDMyNDUvNjQ5MCksIDEuMDIgTWlCIHwgMzA1
IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTElICgzMzEwLzY0OTApLCAxLjAyIE1p
QiB8IDMwNSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDUyJSAoMzM3NS82NDkwKSwg
MS4wMiBNaUIgfCAzMDUgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA1MyUgKDM0NDAv
NjQ5MCksIDEuMDIgTWlCIHwgMzA1IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTQl
ICgzNTA1LzY0OTApLCAxLjAyIE1pQiB8IDMwNSBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDU1JSAoMzU3MC82NDkwKSwgMS4wMiBNaUIgfCAzMDUgS2lCL3MgICANUmVjZWl2aW5n
IG9iamVjdHM6ICA1NiUgKDM2MzUvNjQ5MCksIDEuMjEgTWlCIHwgMzE0IEtpQi9zICAgDVJl
Y2VpdmluZyBvYmplY3RzOiAgNTclICgzNzAwLzY0OTApLCAxLjIxIE1pQiB8IDMxNCBLaUIv
cyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDU4JSAoMzc2NS82NDkwKSwgMS4yMSBNaUIgfCAz
MTQgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA1OSUgKDM4MzAvNjQ5MCksIDEuMjEg
TWlCIHwgMzE0IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNjAlICgzODk0LzY0OTAp
LCAxLjIxIE1pQiB8IDMxNCBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDYwJSAoMzky
Ny82NDkwKSwgMS4yMSBNaUIgfCAzMTQgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2
MSUgKDM5NTkvNjQ5MCksIDEuMjEgTWlCIHwgMzE0IEtpQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgNjIlICg0MDI0LzY0OTApLCAxLjIxIE1pQiB8IDMxNCBLaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDYzJSAoNDA4OS82NDkwKSwgMS4yMSBNaUIgfCAzMTQgS2lCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICA2NCUgKDQxNTQvNjQ5MCksIDEuMjEgTWlCIHwgMzE0IEtp
Qi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNjUlICg0MjE5LzY0OTApLCAxLjIxIE1pQiB8
IDMxNCBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDY2JSAoNDI4NC82NDkwKSwgMS4y
MSBNaUIgfCAzMTQgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2NyUgKDQzNDkvNjQ5
MCksIDEuMjEgTWlCIHwgMzE0IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNjglICg0
NDE0LzY0OTApLCAxLjIxIE1pQiB8IDMxNCBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czog
IDY5JSAoNDQ3OS82NDkwKSwgMS4yMSBNaUIgfCAzMTQgS2lCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA3MCUgKDQ1NDMvNjQ5MCksIDEuMjEgTWlCIHwgMzE0IEtpQi9zICAgDVJlY2Vp
dmluZyBvYmplY3RzOiAgNzElICg0NjA4LzY0OTApLCAxLjIxIE1pQiB8IDMxNCBLaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDcyJSAoNDY3My82NDkwKSwgMS4yMSBNaUIgfCAzMTQg
S2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA3MyUgKDQ3MzgvNjQ5MCksIDEuMjEgTWlC
IHwgMzE0IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzQlICg0ODAzLzY0OTApLCAx
LjIxIE1pQiB8IDMxNCBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDc1JSAoNDg2OC82
NDkwKSwgMS4yMSBNaUIgfCAzMTQgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA3NiUg
KDQ5MzMvNjQ5MCksIDEuMjEgTWlCIHwgMzE0IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3Rz
OiAgNzclICg0OTk4LzY0OTApLCAxLjIxIE1pQiB8IDMxNCBLaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDc4JSAoNTA2My82NDkwKSwgMS4yMSBNaUIgfCAzMTQgS2lCL3MgICANUmVj
ZWl2aW5nIG9iamVjdHM6ICA3OSUgKDUxMjgvNjQ5MCksIDEuNDMgTWlCIHwgMzI2IEtpQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgODAlICg1MTkyLzY0OTApLCAxLjQzIE1pQiB8IDMy
NiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDgxJSAoNTI1Ny82NDkwKSwgMS40MyBN
aUIgfCAzMjYgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA4MiUgKDUzMjIvNjQ5MCks
IDEuNDMgTWlCIHwgMzI2IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgODMlICg1Mzg3
LzY0OTApLCAxLjQzIE1pQiB8IDMyNiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDg0
JSAoNTQ1Mi82NDkwKSwgMS40MyBNaUIgfCAzMjYgS2lCL3MgICANUmVjZWl2aW5nIG9iamVj
dHM6ICA4NSUgKDU1MTcvNjQ5MCksIDEuNDMgTWlCIHwgMzI2IEtpQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgODYlICg1NTgyLzY0OTApLCAxLjQzIE1pQiB8IDMyNiBLaUIvcyAgIA1S
ZWNlaXZpbmcgb2JqZWN0czogIDg3JSAoNTY0Ny82NDkwKSwgMS40MyBNaUIgfCAzMjYgS2lC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA4OCUgKDU3MTIvNjQ5MCksIDEuNDMgTWlCIHwg
MzI2IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgODklICg1Nzc3LzY0OTApLCAxLjQz
IE1pQiB8IDMyNiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDkwJSAoNTg0MS82NDkw
KSwgMS40MyBNaUIgfCAzMjYgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA5MSUgKDU5
MDYvNjQ5MCksIDEuNDMgTWlCIHwgMzI2IEtpQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
OTIlICg1OTcxLzY0OTApLCAxLjQzIE1pQiB8IDMyNiBLaUIvcyAgIA1SZWNlaXZpbmcgb2Jq
ZWN0czogIDkzJSAoNjAzNi82NDkwKSwgMS40MyBNaUIgfCAzMjYgS2lCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICA5NCUgKDYxMDEvNjQ5MCksIDEuNDMgTWlCIHwgMzI2IEtpQi9zICAg
DVJlY2VpdmluZyBvYmplY3RzOiAgOTUlICg2MTY2LzY0OTApLCAxLjQzIE1pQiB8IDMyNiBL
aUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDk2JSAoNjIzMS82NDkwKSwgMS40MyBNaUIg
fCAzMjYgS2lCL3MgICANcmVtb3RlOiBUb3RhbCA2NDkwIChkZWx0YSA1MTQ3KSwgcmV1c2Vk
IDY0MjAgKGRlbHRhIDUwOTUpG1tLClJlY2VpdmluZyBvYmplY3RzOiAgOTclICg2Mjk2LzY0
OTApLCAxLjQzIE1pQiB8IDMyNiBLaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDk4JSAo
NjM2MS82NDkwKSwgMS40MyBNaUIgfCAzMjYgS2lCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA5OSUgKDY0MjYvNjQ5MCksIDEuNDMgTWlCIHwgMzI2IEtpQi9zICAgDVJlY2VpdmluZyBv
YmplY3RzOiAxMDAlICg2NDkwLzY0OTApLCAxLjQzIE1pQiB8IDMyNiBLaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogMTAwJSAoNjQ5MC82NDkwKSwgMS42MSBNaUIgfCAzMjYgS2lCL3Ms
IGRvbmUuClJlc29sdmluZyBkZWx0YXM6ICAgMCUgKDAvNTE0NykgICANUmVzb2x2aW5nIGRl
bHRhczogICAxJSAoNTQvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogICAzJSAoMjAzLzUx
NDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNCUgKDIxMC81MTQ3KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgIDclICgzNzQvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogICA4JSAoNDIx
LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgOSUgKDQ3My81MTQ3KSAgIA1SZXNvbHZp
bmcgZGVsdGFzOiAgMTAlICg1MjMvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDExJSAo
NTg2LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxMiUgKDY2MS81MTQ3KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgMTMlICg2OTMvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDE0
JSAoNzIxLzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxNSUgKDgyMi81MTQ3KSAgIA1S
ZXNvbHZpbmcgZGVsdGFzOiAgMTYlICg4MzUvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczog
IDE4JSAoOTQ2LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxOSUgKDk5OC81MTQ3KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgMjAlICgxMDcyLzUxNDcpICAgDVJlc29sdmluZyBkZWx0
YXM6ICAyMiUgKDExNjMvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDIzJSAoMTE4OC81
MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMjQlICgxMjQzLzUxNDcpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAyNSUgKDEzMjAvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDI2JSAo
MTM0OC81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMjclICgxNDA4LzUxNDcpICAgDVJl
c29sdmluZyBkZWx0YXM6ICAyOCUgKDE0NDYvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczog
IDI5JSAoMTUxMy81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMzAlICgxNTcxLzUxNDcp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICAzMSUgKDE2MDEvNTE0NykgICANUmVzb2x2aW5nIGRl
bHRhczogIDMyJSAoMTY2MC81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMzMlICgxNzQw
LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAzNCUgKDE3NTYvNTE0NykgICANUmVzb2x2
aW5nIGRlbHRhczogIDM1JSAoMTgwMy81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMzYl
ICgxODU5LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICAzNyUgKDE5MTAvNTE0NykgICAN
UmVzb2x2aW5nIGRlbHRhczogIDM4JSAoMTk3My81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFz
OiAgMzklICgyMDI1LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0MCUgKDIwODYvNTE0
NykgICANUmVzb2x2aW5nIGRlbHRhczogIDQxJSAoMjEyNS81MTQ3KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgNDIlICgyMTY1LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0MyUgKDIy
MTYvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDQ0JSAoMjI4MC81MTQ3KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgNDUlICgyMzUxLzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0
OCUgKDI1MjIvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDQ5JSAoMjUzNC81MTQ3KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgNTAlICgyNjE4LzUxNDcpICAgDVJlc29sdmluZyBkZWx0
YXM6ICA1MSUgKDI2MzcvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDUyJSAoMjY4My81
MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNTMlICgyNzI4LzUxNDcpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICA1NyUgKDI5NjEvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDYwJSAo
MzA5OS81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjElICgzMTUwLzUxNDcpICAgDVJl
c29sdmluZyBkZWx0YXM6ICA2MiUgKDMyMDEvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczog
IDYzJSAoMzI4Ni81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjQlICgzMzEyLzUxNDcp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA2NiUgKDM0MDUvNTE0NykgICANUmVzb2x2aW5nIGRl
bHRhczogIDY5JSAoMzU1Ny81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNzAlICgzNjI0
LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA3MyUgKDM3NjAvNTE0NykgICANUmVzb2x2
aW5nIGRlbHRhczogIDc0JSAoMzgzMy81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNzUl
ICgzOTAyLzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA3NiUgKDM5NTcvNTE0NykgICAN
UmVzb2x2aW5nIGRlbHRhczogIDc3JSAoMzk2OS81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFz
OiAgNzglICg0MDUzLzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4MCUgKDQxMzIvNTE0
NykgICANUmVzb2x2aW5nIGRlbHRhczogIDgzJSAoNDMxMC81MTQ3KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgODQlICg0MzI4LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4NSUgKDQz
OTAvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDg3JSAoNDQ4Mi81MTQ3KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgODglICg0NTU3LzUxNDcpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4
OSUgKDQ2MDEvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDkwJSAoNDY2NC81MTQ3KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgOTElICg0NzA1LzUxNDcpICAgDVJlc29sdmluZyBkZWx0
YXM6ICA5MiUgKDQ3NTgvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDkzJSAoNDgwMy81
MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgOTQlICg0ODc1LzUxNDcpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICA5NSUgKDQ5MzQvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczogIDk2JSAo
NDk2Ny81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgOTclICg1MDA1LzUxNDcpICAgDVJl
c29sdmluZyBkZWx0YXM6ICA5OCUgKDUwNjMvNTE0NykgICANUmVzb2x2aW5nIGRlbHRhczog
IDk5JSAoNTA5OC81MTQ3KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAxMDAlICg1MTQ3LzUxNDcp
ICAgDVJlc29sdmluZyBkZWx0YXM6IDEwMCUgKDUxNDcvNTE0NyksIGRvbmUuClN3aXRjaGVk
IHRvIGEgbmV3IGJyYW5jaCAnZHVtbXknCmNwIHNlYWJpb3MtY29uZmlnIHNlYWJpb3MtZGly
Ly5jb25maWc7CmdtYWtlIFBZVEhPTj1weXRob24yLjcgc3ViZGlycy1hbGwKZ21ha2VbNF06
IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlJwpn
bWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmly
bXdhcmUnCmdtYWtlIC1DIHNlYWJpb3MtZGlyIGFsbApnbWFrZVs2XTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvc2VhYmlvcy1kaXItcmVt
b3RlJwogIEJ1aWxkIEtjb25maWcgY29uZmlnIGZpbGUKICBDb21waWxpbmcgd2hvbGUgcHJv
Z3JhbSBvdXQvY2NvZGUuMTYucwpJbiBmaWxlIGluY2x1ZGVkIGZyb20gc3JjL2lvcG9ydC5o
OjgxOjAsCiAgICAgICAgICAgICAgICAgZnJvbSBzcmMvZmFycHRyLmg6OSwKICAgICAgICAg
ICAgICAgICBmcm9tIHNyYy9vdXRwdXQuYzo5OgpzcmMvdHlwZXMuaDoxMjc6MDogd2Fybmlu
ZzogIl9fc2VjdGlvbiIgcmVkZWZpbmVkCi91c3IvaW5jbHVkZS9zeXMvY2RlZnMuaDozMjA6
MDogbm90ZTogdGhpcyBpcyB0aGUgbG9jYXRpb24gb2YgdGhlIHByZXZpb3VzIGRlZmluaXRp
b24Kc3JjL3R5cGVzLmg6MTMwOjA6IHdhcm5pbmc6ICJfX2FsaWduZWQiIHJlZGVmaW5lZAov
dXNyL2luY2x1ZGUvc3lzL2NkZWZzLmg6MzE5OjA6IG5vdGU6IHRoaXMgaXMgdGhlIGxvY2F0
aW9uIG9mIHRoZSBwcmV2aW91cyBkZWZpbml0aW9uCiAgQ29tcGlsaW5nIHRvIGFzc2VtYmxl
ciBvdXQvYXNtLW9mZnNldHMucwogIEdlbmVyYXRpbmcgb2Zmc2V0IGZpbGUgb3V0L2FzbS1v
ZmZzZXRzLmgKICBDb21waWxpbmcgKDE2Yml0KSBvdXQvY29kZTE2Lm8KICBDb21waWxpbmcg
d2hvbGUgcHJvZ3JhbSBvdXQvY2NvZGUzMmZsYXQubwpJbiBmaWxlIGluY2x1ZGVkIGZyb20g
c3JjL2lvcG9ydC5oOjgxOjAsCiAgICAgICAgICAgICAgICAgZnJvbSBzcmMvZmFycHRyLmg6
OSwKICAgICAgICAgICAgICAgICBmcm9tIHNyYy9vdXRwdXQuYzo5OgpzcmMvdHlwZXMuaDox
Mjc6MDogd2FybmluZzogIl9fc2VjdGlvbiIgcmVkZWZpbmVkCi91c3IvaW5jbHVkZS9zeXMv
Y2RlZnMuaDozMjA6MDogbm90ZTogdGhpcyBpcyB0aGUgbG9jYXRpb24gb2YgdGhlIHByZXZp
b3VzIGRlZmluaXRpb24Kc3JjL3R5cGVzLmg6MTMwOjA6IHdhcm5pbmc6ICJfX2FsaWduZWQi
IHJlZGVmaW5lZAovdXNyL2luY2x1ZGUvc3lzL2NkZWZzLmg6MzE5OjA6IG5vdGU6IHRoaXMg
aXMgdGhlIGxvY2F0aW9uIG9mIHRoZSBwcmV2aW91cyBkZWZpbml0aW9uCiAgQ29tcGlsaW5n
IHdob2xlIHByb2dyYW0gb3V0L2NvZGUzMnNlZy5vCkluIGZpbGUgaW5jbHVkZWQgZnJvbSBz
cmMvaW9wb3J0Lmg6ODE6MCwKICAgICAgICAgICAgICAgICBmcm9tIHNyYy9mYXJwdHIuaDo5
LAogICAgICAgICAgICAgICAgIGZyb20gc3JjL291dHB1dC5jOjk6CnNyYy90eXBlcy5oOjEy
NzowOiB3YXJuaW5nOiAiX19zZWN0aW9uIiByZWRlZmluZWQKL3Vzci9pbmNsdWRlL3N5cy9j
ZGVmcy5oOjMyMDowOiBub3RlOiB0aGlzIGlzIHRoZSBsb2NhdGlvbiBvZiB0aGUgcHJldmlv
dXMgZGVmaW5pdGlvbgpzcmMvdHlwZXMuaDoxMzA6MDogd2FybmluZzogIl9fYWxpZ25lZCIg
cmVkZWZpbmVkCi91c3IvaW5jbHVkZS9zeXMvY2RlZnMuaDozMTk6MDogbm90ZTogdGhpcyBp
cyB0aGUgbG9jYXRpb24gb2YgdGhlIHByZXZpb3VzIGRlZmluaXRpb24KICBCdWlsZGluZyBs
ZCBzY3JpcHRzICh2ZXJzaW9uICIxLjYuMy4yLTIwMTIxMjA0XzEzMjUyNy1kb20wLmxpcHB1
eC5kZSIpCkZpeGVkIHNwYWNlOiAweGUwNWItMHgxMDAwMCAgdG90YWw6IDgxMDEgIHNsYWNr
OiA1ICBQZXJjZW50IHNsYWNrOiAwLjElCjE2Yml0IHNpemU6ICAgICAgICAgICA0MDkxMgoz
MmJpdCBzZWdtZW50ZWQgc2l6ZTogMTU4MAozMmJpdCBmbGF0IHNpemU6ICAgICAgMTM2MzYK
MzJiaXQgZmxhdCBpbml0IHNpemU6IDUzMjMyCiAgTGlua2luZyBvdXQvcm9tMTYubwogIFN0
cmlwcGluZyBvdXQvcm9tMTYuc3RyaXAubwogIExpbmtpbmcgb3V0L3JvbTMyc2VnLm8KICBT
dHJpcHBpbmcgb3V0L3JvbTMyc2VnLnN0cmlwLm8KICBMaW5raW5nIG91dC9yb20ubwogIFBy
ZXBwaW5nIG91dC9iaW9zLmJpbgpUb3RhbCBzaXplOiAxMTE4NTIgIEZpeGVkOiA1NjEzMiAg
RnJlZTogMTkyMjAgKHVzZWQgODUuMyUgb2YgMTI4S2lCIHJvbSkKZ21ha2VbNl06IExlYXZp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvc2VhYmlvcy1k
aXItcmVtb3RlJwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC90b29scy9maXJtd2FyZScKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlJwpnbWFrZSAtQyByb21iaW9zIGFsbApnbWFrZVs2
XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUv
cm9tYmlvcycKZ21ha2VbN106IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2Zpcm13YXJlL3JvbWJpb3MnCmdtYWtlIC1DIDMyYml0IGFsbApnbWFrZVs4XTog
RW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvcm9t
Ymlvcy8zMmJpdCcKZ21ha2VbOV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2Zpcm13YXJlL3JvbWJpb3MvMzJiaXQnCmdtYWtlIC1DIHRjZ2Jpb3MgYWxs
CmdtYWtlWzEwXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
ZmlybXdhcmUvcm9tYmlvcy8zMmJpdC90Y2diaW9zJwpnY2MgICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnRjZ2Jpb3Muby5kIC1m
bm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJlY3Qtc2VnLXJlZnMgIC1X
ZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1mbm8tYnVpbHRp
biAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvcm9tYmlv
cy8zMmJpdC90Y2diaW9zLy4uLy4uLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkuLiAtSS4u
Ly4uICAtYyAtbyB0Y2diaW9zLm8gdGNnYmlvcy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2Nj
ICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQg
LU1GIC50cG1fZHJpdmVycy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8t
dGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIu
MC90b29scy9maXJtd2FyZS9yb21iaW9zLzMyYml0L3RjZ2Jpb3MvLi4vLi4vLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAtSS4uIC1JLi4vLi4gIC1jIC1vIHRwbV9kcml2ZXJzLm8gdHBtX2Ry
aXZlcnMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmxkIC1tZWxmX2kzODYgLXIgdGNnYmlvcy5v
IHRwbV9kcml2ZXJzLm8gLW8gdGNnYmlvc2V4dC5vCmdtYWtlWzEwXTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9yb21iaW9zLzMyYml0L3Rj
Z2Jpb3MnCmdtYWtlWzldOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2Zpcm13YXJlL3JvbWJpb3MvMzJiaXQnCmdtYWtlIDMyYml0Ymlvc19mbGF0LmgKZ21h
a2VbOV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13
YXJlL3JvbWJpb3MvMzJiaXQnCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuMzJiaXRiaW9zLm8uZCAtZm5vLW9wdGltaXpl
LXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZs
b2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL3JvbWJpb3MvMzJiaXQvLi4v
Li4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS4uICAtYyAtbyAzMmJpdGJpb3MubyAzMmJpdGJp
b3MuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudXRpbC5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1m
bG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9yb21iaW9zLzMyYml0Ly4u
Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkuLiAgLWMgLW8gdXRpbC5vIHV0aWwuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMy
IC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAucG1tLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0IC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL3JvbWJpb3MvMzJiaXQvLi4vLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAtSS4uICAtYyAtbyBwbW0ubyBwbW0uYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmxkIC1tZWxmX2kzODYgLXMgLXIgMzJiaXRiaW9zLm8gdGNnYmlvcy90Y2diaW9zZXh0
Lm8gdXRpbC5vIHBtbS5vIC1vIDMyYml0Ymlvc19hbGwubwpzaCBta2hleCBoaWdoYmlvc19h
cnJheSAzMmJpdGJpb3NfYWxsLm8gPiAzMmJpdGJpb3NfZmxhdC5oCmdtYWtlWzldOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL3JvbWJpb3Mv
MzJiaXQnCmdtYWtlWzhdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2Zpcm13YXJlL3JvbWJpb3MvMzJiaXQnCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9y
eSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL3JvbWJpb3MnCmdtYWtlIEJJT1Mt
Ym9jaHMtbGF0ZXN0CmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scy9maXJtd2FyZS9yb21iaW9zJwpnY2MgLW8gYmlvc3N1bXMgYmlvc3N1bXMu
YwpnY2MgLURCWF9TTVBfUFJPQ0VTU09SUz0xIC1FIC1QIHJvbWJpb3MuYyA+IF9yb21iaW9z
Xy5jCmJjYyAtbyByb21iaW9zLnMgLUMtYyAtRF9faTg2X18gLTAgLVMgX3JvbWJpb3NfLmMK
c2VkIC1lICdzL15cLnRleHQvLycgLWUgJ3MvXlwuZGF0YS8vJyByb21iaW9zLnMgPiBfcm9t
Ymlvc18ucwphczg2IF9yb21iaW9zXy5zIC1iIHRtcC5iaW4gLXUtIC13LSAtZyAtMCAtaiAt
TyAtbCByb21iaW9zLnR4dApwZXJsIG1ha2VzeW0ucGVybCA8IHJvbWJpb3MudHh0ID4gcm9t
Ymlvcy5zeW0KbXYgdG1wLmJpbiBCSU9TLWJvY2hzLWxhdGVzdAouL2Jpb3NzdW1zIEJJT1Mt
Ym9jaHMtbGF0ZXN0CgoKUENJLUJpb3MgaGVhZGVyIGF0OiAweEI1QjAKQ3VycmVudCBjaGVj
a3N1bTogICAgIDB4NTgKQ2FsY3VsYXRlZCBjaGVja3N1bTogIDB4NTggIAoKCiRQSVIgaGVh
ZGVyIGF0OiAgICAgMHhCOTAwCkN1cnJlbnQgY2hlY2tzdW06ICAgICAweDM3CkNhbGN1bGF0
ZWQgY2hlY2tzdW06ICAweDI3CiAgU2V0dGluZyBjaGVja3N1bS4KCgpCaW9zIGNoZWNrc3Vt
IGF0OiAgIDB4RkZGRgpDdXJyZW50IGNoZWNrc3VtOiAgICAgMHgwMApDYWxjdWxhdGVkIGNo
ZWNrc3VtOiAgMHhBQSAgU2V0dGluZyBjaGVja3N1bS4Kcm0gLWYgX3JvbWJpb3NfLnMKZ21h
a2VbN106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdh
cmUvcm9tYmlvcycKZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvZmlybXdhcmUvcm9tYmlvcycKZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUnCmdtYWtlWzVdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZScKZ21ha2UgLUMgdmdh
YmlvcyBhbGwKZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2Zpcm13YXJlL3ZnYWJpb3MnCmdjYyAtbyBiaW9zc3VtcyBiaW9zc3Vtcy5jCmdj
YyAtbyB2YmV0YWJsZXMtZ2VuIHZiZXRhYmxlcy1nZW4uYwouL3ZiZXRhYmxlcy1nZW4gPiB2
YmV0YWJsZXMuaApnY2MgLUUgLVAgdmdhYmlvcy5jICAtRFZCRSAiLURWR0FCSU9TX0RBVEU9
XCJgZGF0ZSAnKyVkICViICVZJ2BcIiIgPiBfdmdhYmlvc18uYwpiY2MgLW8gdmdhYmlvcy5z
IC1DLWMgLURfX2k4Nl9fIC1TIC0wIF92Z2FiaW9zXy5jCnNlZCAtZSAncy9eXC50ZXh0Ly8n
IC1lICdzL15cLmRhdGEvLycgdmdhYmlvcy5zID4gX3ZnYWJpb3NfLnMKYXM4NiBfdmdhYmlv
c18ucyAtYiB2Z2FiaW9zLmJpbiAtdSAtdy0gLWcgLTAgLWogLU8gLWwgdmdhYmlvcy50eHQK
cm0gLWYgX3ZnYWJpb3NfLnMgX3ZnYWJpb3NfLmMgdmdhYmlvcy5zCmNwIHZnYWJpb3MuYmlu
IFZHQUJJT1MtbGdwbC1sYXRlc3QuYmluCi4vYmlvc3N1bXMgVkdBQklPUy1sZ3BsLWxhdGVz
dC5iaW4KCkJpb3MgY2hlY2tzdW0gYXQ6ICAgMHg5REZGCkN1cnJlbnQgY2hlY2tzdW06ICAg
ICAweDAwCkNhbGN1bGF0ZWQgY2hlY2tzdW06ICAweEVDICBTZXR0aW5nIGNoZWNrc3VtLgps
cyAtbCBWR0FCSU9TLWxncGwtbGF0ZXN0LmJpbgotcnctci0tci0tICAxIHJvb3QgIHdoZWVs
ICA0MDQ0OCBEZWMgIDQgMTM6MjUgVkdBQklPUy1sZ3BsLWxhdGVzdC5iaW4KZ2NjIC1FIC1Q
IHZnYWJpb3MuYyAgLURWQkUgLURERUJVRyAiLURWR0FCSU9TX0RBVEU9XCJgZGF0ZSAnKyVk
ICViICVZJ2BcIiIgPiBfdmdhYmlvcy1kZWJ1Z18uYwpiY2MgLW8gdmdhYmlvcy1kZWJ1Zy5z
IC1DLWMgLURfX2k4Nl9fIC1TIC0wIF92Z2FiaW9zLWRlYnVnXy5jCnNlZCAtZSAncy9eXC50
ZXh0Ly8nIC1lICdzL15cLmRhdGEvLycgdmdhYmlvcy1kZWJ1Zy5zID4gX3ZnYWJpb3MtZGVi
dWdfLnMKYXM4NiBfdmdhYmlvcy1kZWJ1Z18ucyAtYiB2Z2FiaW9zLmRlYnVnLmJpbiAtdSAt
dy0gLWcgLTAgLWogLU8gLWwgdmdhYmlvcy5kZWJ1Zy50eHQKcm0gLWYgX3ZnYWJpb3MtZGVi
dWdfLnMgX3ZnYWJpb3MtZGVidWdfLmMgdmdhYmlvcy1kZWJ1Zy5zCmNwIHZnYWJpb3MuZGVi
dWcuYmluIFZHQUJJT1MtbGdwbC1sYXRlc3QuZGVidWcuYmluCi4vYmlvc3N1bXMgVkdBQklP
Uy1sZ3BsLWxhdGVzdC5kZWJ1Zy5iaW4KCkJpb3MgY2hlY2tzdW0gYXQ6ICAgMHhBMUZGCkN1
cnJlbnQgY2hlY2tzdW06ICAgICAweDAwCkNhbGN1bGF0ZWQgY2hlY2tzdW06ICAweDU4ICBT
ZXR0aW5nIGNoZWNrc3VtLgpscyAtbCBWR0FCSU9TLWxncGwtbGF0ZXN0LmRlYnVnLmJpbgot
cnctci0tci0tICAxIHJvb3QgIHdoZWVsICA0MTQ3MiBEZWMgIDQgMTM6MjUgVkdBQklPUy1s
Z3BsLWxhdGVzdC5kZWJ1Zy5iaW4KZ2NjIC1FIC1QIHZnYWJpb3MuYyAgLURDSVJSVVMgLURQ
Q0lCSU9TICItRFZHQUJJT1NfREFURT1cImBkYXRlICcrJWQgJWIgJVknYFwiIiA+IF92Z2Fi
aW9zLWNpcnJ1c18uYwpiY2MgLW8gdmdhYmlvcy1jaXJydXMucyAtQy1jIC1EX19pODZfXyAt
UyAtMCBfdmdhYmlvcy1jaXJydXNfLmMKc2VkIC1lICdzL15cLnRleHQvLycgLWUgJ3MvXlwu
ZGF0YS8vJyB2Z2FiaW9zLWNpcnJ1cy5zID4gX3ZnYWJpb3MtY2lycnVzXy5zCmFzODYgX3Zn
YWJpb3MtY2lycnVzXy5zIC1iIHZnYWJpb3MtY2lycnVzLmJpbiAtdSAtdy0gLWcgLTAgLWog
LU8gLWwgdmdhYmlvcy1jaXJydXMudHh0CnJtIC1mIF92Z2FiaW9zLWNpcnJ1c18ucyBfdmdh
Ymlvcy1jaXJydXNfLmMgdmdhYmlvcy1jaXJydXMucwpjcCB2Z2FiaW9zLWNpcnJ1cy5iaW4g
VkdBQklPUy1sZ3BsLWxhdGVzdC5jaXJydXMuYmluCi4vYmlvc3N1bXMgVkdBQklPUy1sZ3Bs
LWxhdGVzdC5jaXJydXMuYmluCgpCaW9zIGNoZWNrc3VtIGF0OiAgIDB4OEJGRgpDdXJyZW50
IGNoZWNrc3VtOiAgICAgMHgwMApDYWxjdWxhdGVkIGNoZWNrc3VtOiAgMHhGMCAgU2V0dGlu
ZyBjaGVja3N1bS4KbHMgLWwgVkdBQklPUy1sZ3BsLWxhdGVzdC5jaXJydXMuYmluCi1ydy1y
LS1yLS0gIDEgcm9vdCAgd2hlZWwgIDM1ODQwIERlYyAgNCAxMzoyNSBWR0FCSU9TLWxncGwt
bGF0ZXN0LmNpcnJ1cy5iaW4KZ2NjIC1FIC1QIHZnYWJpb3MuYyAgLURDSVJSVVMgLURDSVJS
VVNfREVCVUcgLURQQ0lCSU9TICItRFZHQUJJT1NfREFURT1cImBkYXRlICcrJWQgJWIgJVkn
YFwiIiA+IF92Z2FiaW9zLWNpcnJ1cy1kZWJ1Z18uYwpiY2MgLW8gdmdhYmlvcy1jaXJydXMt
ZGVidWcucyAtQy1jIC1EX19pODZfXyAtUyAtMCBfdmdhYmlvcy1jaXJydXMtZGVidWdfLmMK
c2VkIC1lICdzL15cLnRleHQvLycgLWUgJ3MvXlwuZGF0YS8vJyB2Z2FiaW9zLWNpcnJ1cy1k
ZWJ1Zy5zID4gX3ZnYWJpb3MtY2lycnVzLWRlYnVnXy5zCmFzODYgX3ZnYWJpb3MtY2lycnVz
LWRlYnVnXy5zIC1iIHZnYWJpb3MtY2lycnVzLmRlYnVnLmJpbiAtdSAtdy0gLWcgLTAgLWog
LU8gLWwgdmdhYmlvcy1jaXJydXMuZGVidWcudHh0CnJtIC1mIF92Z2FiaW9zLWNpcnJ1cy1k
ZWJ1Z18ucyBfdmdhYmlvcy1jaXJydXMtZGVidWdfLmMgdmdhYmlvcy1jaXJydXMtZGVidWcu
cwpjcCB2Z2FiaW9zLWNpcnJ1cy5kZWJ1Zy5iaW4gVkdBQklPUy1sZ3BsLWxhdGVzdC5jaXJy
dXMuZGVidWcuYmluCi4vYmlvc3N1bXMgVkdBQklPUy1sZ3BsLWxhdGVzdC5jaXJydXMuZGVi
dWcuYmluCgpCaW9zIGNoZWNrc3VtIGF0OiAgIDB4OEJGRgpDdXJyZW50IGNoZWNrc3VtOiAg
ICAgMHgwMApDYWxjdWxhdGVkIGNoZWNrc3VtOiAgMHg2OCAgU2V0dGluZyBjaGVja3N1bS4K
bHMgLWwgVkdBQklPUy1sZ3BsLWxhdGVzdC5jaXJydXMuZGVidWcuYmluCi1ydy1yLS1yLS0g
IDEgcm9vdCAgd2hlZWwgIDM1ODQwIERlYyAgNCAxMzoyNSBWR0FCSU9TLWxncGwtbGF0ZXN0
LmNpcnJ1cy5kZWJ1Zy5iaW4KZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvdmdhYmlvcycKZ21ha2VbNV06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUnCmdtYWtlWzVdOiBFbnRl
cmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZScKZ21ha2Ug
LUMgZXRoZXJib290IGFsbApnbWFrZVs2XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRoZXJib290JwppZiAhIHdnZXQgLU8gX2lweGUu
dGFyLmd6IGh0dHA6Ly94ZW5iaXRzLnhlbi5vcmcveGVuLWV4dGZpbGVzL2lweGUtZ2l0LTlh
OTNkYjNmMDk0NzQ4NGUzMGU3NTNiYmQ2MWExMGIxNzMzNmUyMGUudGFyLmd6OyB0aGVuIFwK
CWdpdCBjbG9uZSBnaXQ6Ly9naXQuaXB4ZS5vcmcvaXB4ZS5naXQgaXB4ZS5naXQ7IFwKCShj
ZCBpcHhlLmdpdCAmJiBnaXQgYXJjaGl2ZSAtLWZvcm1hdD10YXIgLS1wcmVmaXg9aXB4ZS8g
XAoJOWE5M2RiM2YwOTQ3NDg0ZTMwZTc1M2JiZDYxYTEwYjE3MzM2ZTIwZSB8IGd6aXAgPi4u
L19pcHhlLnRhci5neik7IFwKCXJtIC1yZiBpcHhlLmdpdDsgXApmaQp3Z2V0OiBub3QgZm91
bmQKQ2xvbmluZyBpbnRvICdpcHhlLmdpdCcuLi4KcmVtb3RlOiBDb3VudGluZyBvYmplY3Rz
OiAzNzg0OSwgZG9uZS4bW0sKcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDAlICgx
LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgMSUgKDEzMy8x
MzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDIlICgyNjYvMTMy
NzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogICAzJSAoMzk5LzEzMjc2
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgNCUgKDUzMi8xMzI3Nikg
ICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDUlICg2NjQvMTMyNzYpICAg
G1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogICA2JSAoNzk3LzEzMjc2KSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgNyUgKDkzMC8xMzI3NikgICAbW0sN
cmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDglICgxMDYzLzEzMjc2KSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgOSUgKDExOTUvMTMyNzYpICAgG1tLDXJl
bW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDEwJSAoMTMyOC8xMzI3NikgICAbW0sNcmVt
b3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgMTElICgxNDYxLzEzMjc2KSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMiUgKDE1OTQvMTMyNzYpICAgG1tLDXJlbW90
ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDEzJSAoMTcyNi8xMzI3NikgICAbW0sNcmVtb3Rl
OiBDb21wcmVzc2luZyBvYmplY3RzOiAgMTQlICgxODU5LzEzMjc2KSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNSUgKDE5OTIvMTMyNzYpICAgG1tLDXJlbW90ZTog
Q29tcHJlc3Npbmcgb2JqZWN0czogIDE2JSAoMjEyNS8xMzI3NikgICAbW0sNcmVtb3RlOiBD
b21wcmVzc2luZyBvYmplY3RzOiAgMTclICgyMjU3LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICAxOCUgKDIzOTAvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29t
cHJlc3Npbmcgb2JqZWN0czogIDE5JSAoMjUyMy8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21w
cmVzc2luZyBvYmplY3RzOiAgMjAlICgyNjU2LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAyMSUgKDI3ODgvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDIyJSAoMjkyMS8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVz
c2luZyBvYmplY3RzOiAgMjMlICgzMDU0LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICAyNCUgKDMxODcvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Np
bmcgb2JqZWN0czogIDI1JSAoMzMxOS8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2lu
ZyBvYmplY3RzOiAgMjYlICgzNDUyLzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICAyNyUgKDM1ODUvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcg
b2JqZWN0czogIDI4JSAoMzcxOC8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBv
YmplY3RzOiAgMjklICgzODUxLzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICAzMCUgKDM5ODMvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2Jq
ZWN0czogIDMxJSAoNDExNi8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmpl
Y3RzOiAgMzIlICg0MjQ5LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICAzMyUgKDQzODIvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0
czogIDM0JSAoNDUxNC8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3Rz
OiAgMzUlICg0NjQ3LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICAzNiUgKDQ3ODAvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czog
IDM3JSAoNDkxMy8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAg
MzglICg1MDQ1LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAz
OSUgKDUxNzgvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDQw
JSAoNTMxMS8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNDEl
ICg1NDQ0LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MiUg
KDU1NzYvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDQzJSAo
NTcwOS8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNDQlICg1
ODQyLzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NSUgKDU5
NzUvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDQ2JSAoNjEw
Ny8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNDclICg2MjQw
LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0OCUgKDYzNzMv
MTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDQ5JSAoNjUwNi8x
MzI3NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNTAlICg2NjM4LzEz
Mjc2KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MSUgKDY3NzEvMTMy
NzYpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDUyJSAoNjkwNC8xMzI3
NikgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNTMlICg3MDM3LzEzMjc2
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1NCUgKDcxNzAvMTMyNzYp
ICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDU1JSAoNzMwMi8xMzI3Nikg
ICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNTYlICg3NDM1LzEzMjc2KSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1NyUgKDc1NjgvMTMyNzYpICAg
G1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDU4JSAoNzcwMS8xMzI3NikgICAb
W0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNTklICg3ODMzLzEzMjc2KSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MCUgKDc5NjYvMTMyNzYpICAgG1tL
DXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDYxJSAoODA5OS8xMzI3NikgICAbW0sN
cmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNjIlICg4MjMyLzEzMjc2KSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MyUgKDgzNjQvMTMyNzYpICAgG1tLDXJl
bW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDY0JSAoODQ5Ny8xMzI3NikgICAbW0sNcmVt
b3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgNjUlICg4NjMwLzEzMjc2KSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NiUgKDg3NjMvMTMyNzYpICAgG1tLDXJlbW90
ZTogQ29tcHJlc3Npbmcgb2JqZWN0czogIDY3JSAoODg5NS8xMzI3NikgICAbW0sNcmVtb3Rl
OiBDb21wcmVzc2luZyBvYmplY3RzOiAgNjglICg5MDI4LzEzMjc2KSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICA2OSUgKDkxNjEvMTMyNzYpICAgG1tLDXJlbW90ZTog
Q29tcHJlc3Npbmcgb2JqZWN0czogIDcwJSAoOTI5NC8xMzI3NikgICAbW0sNcmVtb3RlOiBD
b21wcmVzc2luZyBvYmplY3RzOiAgNzElICg5NDI2LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICA3MiUgKDk1NTkvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29t
cHJlc3Npbmcgb2JqZWN0czogIDczJSAoOTY5Mi8xMzI3NikgICAbW0sNcmVtb3RlOiBDb21w
cmVzc2luZyBvYmplY3RzOiAgNzQlICg5ODI1LzEzMjc2KSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA3NSUgKDk5NTcvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDc2JSAoMTAwOTAvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDc3JSAoMTAyMjMvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDc4JSAoMTAzNTYvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDc5JSAoMTA0ODkvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDgwJSAoMTA2MjEvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDgxJSAoMTA3NTQvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDgyJSAoMTA4ODcvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDgzJSAoMTEwMjAvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDg0JSAoMTExNTIvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDg1JSAoMTEyODUvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDg2JSAoMTE0MTgvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDg3JSAoMTE1NTEvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDg4JSAoMTE2ODMvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDg5JSAoMTE4MTYvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDkwJSAoMTE5NDkvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDkxJSAoMTIwODIvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDkyJSAoMTIyMTQvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDkzJSAoMTIzNDcvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDk0JSAoMTI0ODAvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDk1JSAoMTI2MTMvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDk2JSAoMTI3NDUvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDk3JSAoMTI4NzgvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDk4JSAoMTMwMTEvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogIDk5JSAoMTMxNDQvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogMTAwJSAoMTMyNzYvMTMyNzYpICAgG1tLDXJlbW90ZTogQ29tcHJl
c3Npbmcgb2JqZWN0czogMTAwJSAoMTMyNzYvMTMyNzYpLCBkb25lLhtbSwpSZWNlaXZpbmcg
b2JqZWN0czogICAwJSAoMS8zNzg0OSkgICANUmVjZWl2aW5nIG9iamVjdHM6ICAgMSUgKDM3
OS8zNzg0OSkgICANUmVjZWl2aW5nIG9iamVjdHM6ICAgMiUgKDc1Ny8zNzg0OSkgICANUmVj
ZWl2aW5nIG9iamVjdHM6ICAgMyUgKDExMzYvMzc4NDkpICAgDVJlY2VpdmluZyBvYmplY3Rz
OiAgIDQlICgxNTE0LzM3ODQ5KSAgIA1SZWNlaXZpbmcgb2JqZWN0czogICA1JSAoMTg5My8z
Nzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
IDYlICgyMjcxLzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2aW5n
IG9iamVjdHM6ICAgNyUgKDI2NTAvMzc4NDkpLCA1NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogICA4JSAoMzAyOC8zNzg0OSksIDU1Ni4wMCBLaUIgfCAx
LjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgIDklICgzNDA3LzM3ODQ5KSwgNTU2
LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAxMCUgKDM3ODUv
Mzc4NDkpLCA1NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czog
IDExJSAoNDE2NC8zNzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgMTIlICg0NTQyLzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3Mg
ICANUmVjZWl2aW5nIG9iamVjdHM6ICAxMyUgKDQ5MjEvMzc4NDkpLCA1NTYuMDAgS2lCIHwg
MS4wNiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDE0JSAoNTI5OS8zNzg0OSksIDU1
Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMTUlICg1Njc4
LzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICAxNiUgKDYwNTYvMzc4NDkpLCA1NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDE3JSAoNjQzNS8zNzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMTglICg2ODEzLzM3ODQ5KSwgNTU2LjAwIEtpQiB8
IDEuMDYgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAxOSUgKDcxOTIvMzc4NDkpLCA1
NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDIwJSAoNzU3
MC8zNzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3Rz
OiAgMjElICg3OTQ5LzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICAyMiUgKDgzMjcvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3Mg
ICANUmVjZWl2aW5nIG9iamVjdHM6ICAyMiUgKDg0MDQvMzc4NDkpLCAxLjMyIE1pQiB8IDEu
MzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAyMyUgKDg3MDYvMzc4NDkpLCAxLjMy
IE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAyNCUgKDkwODQvMzc4
NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAyNSUg
KDk0NjMvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVj
dHM6ICAyNiUgKDk4NDEvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICAyNyUgKDEwMjIwLzM3ODQ5KSwgMS4zMiBNaUIgfCAxLjMwIE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMjglICgxMDU5OC8zNzg0OSksIDEuMzIgTWlCIHwg
MS4zMCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDI5JSAoMTA5NzcvMzc4NDkpLCAx
LjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzMCUgKDExMzU1
LzM3ODQ5KSwgMS4zMiBNaUIgfCAxLjMwIE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
MzElICgxMTczNC8zNzg0OSksIDEuMzIgTWlCIHwgMS4zMCBNaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDMyJSAoMTIxMTIvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICAzMyUgKDEyNDkxLzM3ODQ5KSwgMi4xMCBNaUIgfCAxLjM5
IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMzQlICgxMjg2OS8zNzg0OSksIDIuMTAg
TWlCIHwgMS4zOSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDM1JSAoMTMyNDgvMzc4
NDkpLCAyLjEwIE1pQiB8IDEuMzkgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzNiUg
KDEzNjI2LzM3ODQ5KSwgMi4xMCBNaUIgfCAxLjM5IE1pQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgMzclICgxNDAwNS8zNzg0OSksIDIuMTAgTWlCIHwgMS4zOSBNaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDM4JSAoMTQzODMvMzc4NDkpLCAyLjEwIE1pQiB8IDEuMzkgTWlC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzOSUgKDE0NzYyLzM3ODQ5KSwgMi4xMCBNaUIg
fCAxLjM5IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMzklICgxNDgxMy8zNzg0OSks
IDMuMjggTWlCIHwgMS42MiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQwJSAoMTUx
NDAvMzc4NDkpLCAzLjI4IE1pQiB8IDEuNjIgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA0MSUgKDE1NTE5LzM3ODQ5KSwgMy4yOCBNaUIgfCAxLjYyIE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgNDIlICgxNTg5Ny8zNzg0OSksIDQuNDUgTWlCIHwgMS42NCBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDQzJSAoMTYyNzYvMzc4NDkpLCA0LjQ1IE1pQiB8IDEu
NjQgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA0NCUgKDE2NjU0LzM3ODQ5KSwgNC40
NSBNaUIgfCAxLjY0IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNDUlICgxNzAzMy8z
Nzg0OSksIDQuNDUgTWlCIHwgMS42NCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQ2
JSAoMTc0MTEvMzc4NDkpLCA0LjQ1IE1pQiB8IDEuNjQgTWlCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA0NyUgKDE3NzkwLzM3ODQ5KSwgNC40NSBNaUIgfCAxLjY0IE1pQi9zICAgDVJl
Y2VpdmluZyBvYmplY3RzOiAgNDclICgxNzg2Mi8zNzg0OSksIDQuNDUgTWlCIHwgMS42NCBN
aUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQ4JSAoMTgxNjgvMzc4NDkpLCA1LjQxIE1p
QiB8IDEuNjggTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA0OSUgKDE4NTQ3LzM3ODQ5
KSwgNS40MSBNaUIgfCAxLjY4IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTAlICgx
ODkyNS8zNzg0OSksIDUuNDEgTWlCIHwgMS42OCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDUxJSAoMTkzMDMvMzc4NDkpLCA1LjQxIE1pQiB8IDEuNjggTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICA1MiUgKDE5NjgyLzM3ODQ5KSwgNS40MSBNaUIgfCAxLjY4IE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTMlICgyMDA2MC8zNzg0OSksIDUuNDEgTWlCIHwg
MS42OCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDU0JSAoMjA0MzkvMzc4NDkpLCA1
LjQxIE1pQiB8IDEuNjggTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA1NSUgKDIwODE3
LzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
NTYlICgyMTE5Ni8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDU3JSAoMjE1NzQvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICA1OCUgKDIxOTUzLzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1
IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTklICgyMjMzMS8zNzg0OSksIDYuNTQg
TWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDYwJSAoMjI3MTAvMzc4
NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2MSUg
KDIzMDg4LzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgNjIlICgyMzQ2Ny8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDYzJSAoMjM4NDUvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2MyUgKDI0MTczLzM3ODQ5KSwgNi41NCBNaUIg
fCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNjQlICgyNDIyNC8zNzg0OSks
IDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDY1JSAoMjQ2
MDIvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA2NiUgKDI0OTgxLzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgNjclICgyNTM1OS8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDY4JSAoMjU3MzgvMzc4NDkpLCA2LjU0IE1pQiB8IDEu
NzUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2OSUgKDI2MTE2LzM3ODQ5KSwgNi41
NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzAlICgyNjQ5NS8z
Nzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDcx
JSAoMjY4NzMvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA3MiUgKDI3MjUyLzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJl
Y2VpdmluZyBvYmplY3RzOiAgNzMlICgyNzYzMC8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBN
aUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDc0JSAoMjgwMDkvMzc4NDkpLCA3LjgyIE1p
QiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA3NSUgKDI4Mzg3LzM3ODQ5
KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzYlICgy
ODc2Ni8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDc3JSAoMjkxNDQvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICA3OCUgKDI5NTIzLzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzklICgyOTkwMS8zNzg0OSksIDcuODIgTWlCIHwg
MS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDgwJSAoMzAyODAvMzc4NDkpLCA3
LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA4MSUgKDMwNjU4
LzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
ODIlICgzMTAzNy8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDgzJSAoMzE0MTUvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICA4NCUgKDMxNzk0LzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1
IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgODUlICgzMjE3Mi8zNzg0OSksIDcuODIg
TWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDg2JSAoMzI1NTEvMzc4
NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA4NyUg
KDMyOTI5LzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgODglICgzMzMwOC8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDg5JSAoMzM2ODYvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA5MCUgKDM0MDY1LzM3ODQ5KSwgNy44MiBNaUIg
fCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgOTElICgzNDQ0My8zNzg0OSks
IDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDkyJSAoMzQ4
MjIvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA5MyUgKDM1MjAwLzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgOTQlICgzNTU3OS8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDk1JSAoMzU5NTcvMzc4NDkpLCA3LjgyIE1pQiB8IDEu
ODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA5NiUgKDM2MzM2LzM3ODQ5KSwgNy44
MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgOTclICgzNjcxNC8z
Nzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDk4
JSAoMzcwOTMvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA5OSUgKDM3NDcxLzM3ODQ5KSwgOS4xOCBNaUIgfCAxLjk0IE1pQi9zICAgDXJl
bW90ZTogVG90YWwgMzc4NDkgKGRlbHRhIDI4MTM5KSwgcmV1c2VkIDMxMTk3IChkZWx0YSAy
MzAyMSkbW0sKUmVjZWl2aW5nIG9iamVjdHM6IDEwMCUgKDM3ODQ5LzM3ODQ5KSwgOS4xOCBN
aUIgfCAxLjk0IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAxMDAlICgzNzg0OS8zNzg0
OSksIDkuMjYgTWlCIHwgMi4wNCBNaUIvcywgZG9uZS4KUmVzb2x2aW5nIGRlbHRhczogICAw
JSAoMC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogICAxJSAoMzI0LzI4MTM5KSAgIA1S
ZXNvbHZpbmcgZGVsdGFzOiAgIDIlICg2MjQvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6
ICAgMyUgKDExMTAvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNCUgKDEyMTgvMjgx
MzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNSUgKDE0NDMvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAgNiUgKDE3NTYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNyUg
KDIwMDMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxMCUgKDI4ODkvMjgxMzkpICAg
DVJlc29sdmluZyBkZWx0YXM6ICAxMSUgKDMxMzQvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICAxMiUgKDM0MDcvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxMyUgKDM2ODUv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxNCUgKDM5NjAvMjgxMzkpICAgDVJlc29s
dmluZyBkZWx0YXM6ICAxNSUgKDQyMzgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAx
NiUgKDQ2MTkvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxNyUgKDQ5NTMvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICAxOCUgKDUwNzEvMjgxMzkpICAgDVJlc29sdmluZyBk
ZWx0YXM6ICAxOSUgKDUzNDgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyMCUgKDU2
ODIvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyMSUgKDU5NjAvMjgxMzkpICAgDVJl
c29sdmluZyBkZWx0YXM6ICAyMiUgKDYyNzkvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6
ICAyMyUgKDY0NzMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyNCUgKDY3NjYvMjgx
MzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyNSUgKDcwOTUvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAyNyUgKDc4MDUvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyOCUg
KDc5NTgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyOSUgKDgxNjcvMjgxMzkpICAg
DVJlc29sdmluZyBkZWx0YXM6ICAzMCUgKDg0NjMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICAzMSUgKDg3MzAvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAzMiUgKDkwMTYv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAzNCUgKDk3NTUvMjgxMzkpICAgDVJlc29s
dmluZyBkZWx0YXM6ICAzNSUgKDk4NTYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAz
NiUgKDEwMTY0LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMzclICgxMDQzMS8yODEz
OSkgICANUmVzb2x2aW5nIGRlbHRhczogIDM4JSAoMTA2OTgvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAzOSUgKDExMTkxLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNDAl
ICgxMTMwNC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDQxJSAoMTE1NTUvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA0MiUgKDExODM4LzI4MTM5KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgNDMlICgxMjEwNi8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDQ0JSAo
MTIzOTcvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0NSUgKDEyNjk3LzI4MTM5KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgNDYlICgxMjk4Mi8yODEzOSkgICANUmVzb2x2aW5nIGRl
bHRhczogIDQ3JSAoMTMyMjYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0OCUgKDEz
NTUxLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNDklICgxMzgwNC8yODEzOSkgICAN
UmVzb2x2aW5nIGRlbHRhczogIDUwJSAoMTQwODgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICA1MSUgKDE0MzcwLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNTIlICgxNDYz
Ny8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDUzJSAoMTQ5NjEvMjgxMzkpICAgDVJl
c29sdmluZyBkZWx0YXM6ICA1NCUgKDE1MjI4LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFz
OiAgNTUlICgxNTQ5MC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDU2JSAoMTU3NjUv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA1NyUgKDE2MDYwLzI4MTM5KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgNTglICgxNjMyMi8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczog
IDU5JSAoMTY2NDkvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA2MCUgKDE2OTMxLzI4
MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjElICgxNzE2Ni8yODEzOSkgICANUmVzb2x2
aW5nIGRlbHRhczogIDYyJSAoMTc0NjgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA2
MyUgKDE3NzM0LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjQlICgxODAxNy8yODEz
OSkgICANUmVzb2x2aW5nIGRlbHRhczogIDY1JSAoMTgyOTQvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICA2NiUgKDE4NTkwLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjcl
ICgxODg4NS8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDY4JSAoMTkxNTEvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA2OSUgKDE5NDIwLzI4MTM5KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgNzAlICgxOTcyMC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDcxJSAo
MjAwMDUvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA3MiUgKDIwMjY4LzI4MTM5KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgNzMlICgyMDU1NS8yODEzOSkgICANUmVzb2x2aW5nIGRl
bHRhczogIDc0JSAoMjA4NjMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA3NSUgKDIx
MTA3LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNzYlICgyMTQ0OC8yODEzOSkgICAN
UmVzb2x2aW5nIGRlbHRhczogIDc3JSAoMjE2NzAvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICA3OCUgKDIxOTY3LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNzklICgyMjI0
NS8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDgwJSAoMjI1MTMvMjgxMzkpICAgDVJl
c29sdmluZyBkZWx0YXM6ICA4MSUgKDIyNzk1LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFz
OiAgODIlICgyMzA4MS8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDgzJSAoMjMzNzcv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4NCUgKDIzNjQ0LzI4MTM5KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgODUlICgyMzkzNy8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczog
IDg2JSAoMjQyMDMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4NyUgKDI0NTI3LzI4
MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgODglICgyNDgwOS8yODEzOSkgICANUmVzb2x2
aW5nIGRlbHRhczogIDg5JSAoMjUxMTYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA5
MCUgKDI1MzI3LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgOTElICgyNTYwOS8yODEz
OSkgICANUmVzb2x2aW5nIGRlbHRhczogIDkyJSAoMjU4OTEvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICA5MyUgKDI2MTg2LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgOTQl
ICgyNjQ1OC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDk1JSAoMjY3NDQvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA5NiUgKDI3MDM5LzI4MTM5KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgOTclICgyNzMwNi8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDk4JSAo
Mjc1NzcvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA5OSUgKDI3ODcyLzI4MTM5KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAxMDAlICgyODEzOS8yODEzOSkgICANUmVzb2x2aW5nIGRl
bHRhczogMTAwJSAoMjgxMzkvMjgxMzkpLCBkb25lLgpDaGVja2luZyBvdXQgZmlsZXM6ICAy
NyUgKDM0Mi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAyOCUgKDM0My8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICAyOSUgKDM1NS8xMjIyKSAgIA1DaGVja2luZyBvdXQg
ZmlsZXM6ICAzMCUgKDM2Ny8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzMSUgKDM3
OS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzMiUgKDM5Mi8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICAzMyUgKDQwNC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6
ICAzNCUgKDQxNi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzNSUgKDQyOC8xMjIy
KSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzNiUgKDQ0MC8xMjIyKSAgIA1DaGVja2luZyBv
dXQgZmlsZXM6ICAzNyUgKDQ1My8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzOCUg
KDQ2NS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzOSUgKDQ3Ny8xMjIyKSAgIA1D
aGVja2luZyBvdXQgZmlsZXM6ICA0MCUgKDQ4OS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA0MSUgKDUwMi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0MiUgKDUxNC8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0MyUgKDUyNi8xMjIyKSAgIA1DaGVja2lu
ZyBvdXQgZmlsZXM6ICA0NCUgKDUzOC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0
NSUgKDU1MC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0NiUgKDU2My8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICA0NyUgKDU3NS8xMjIyKSAgIA1DaGVja2luZyBvdXQg
ZmlsZXM6ICA0OCUgKDU4Ny8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0OSUgKDU5
OS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1MCUgKDYxMS8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICA1MSUgKDYyNC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6
ICA1MiUgKDYzNi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1MyUgKDY0OC8xMjIy
KSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1NCUgKDY2MC8xMjIyKSAgIA1DaGVja2luZyBv
dXQgZmlsZXM6ICA1NSUgKDY3My8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1NiUg
KDY4NS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1NyUgKDY5Ny8xMjIyKSAgIA1D
aGVja2luZyBvdXQgZmlsZXM6ICA1OCUgKDcwOS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA1OCUgKDcxNi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1OSUgKDcyMS8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2MCUgKDczNC8xMjIyKSAgIA1DaGVja2lu
ZyBvdXQgZmlsZXM6ICA2MSUgKDc0Ni8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2
MiUgKDc1OC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2MyUgKDc3MC8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICA2NCUgKDc4My8xMjIyKSAgIA1DaGVja2luZyBvdXQg
ZmlsZXM6ICA2NSUgKDc5NS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2NiUgKDgw
Ny8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2NyUgKDgxOS8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICA2OCUgKDgzMS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6
ICA2OSUgKDg0NC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3MCUgKDg1Ni8xMjIy
KSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3MSUgKDg2OC8xMjIyKSAgIA1DaGVja2luZyBv
dXQgZmlsZXM6ICA3MiUgKDg4MC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3MyUg
KDg5My8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3NCUgKDkwNS8xMjIyKSAgIA1D
aGVja2luZyBvdXQgZmlsZXM6ICA3NSUgKDkxNy8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA3NiUgKDkyOS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3NyUgKDk0MS8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3OCUgKDk1NC8xMjIyKSAgIA1DaGVja2lu
ZyBvdXQgZmlsZXM6ICA3OSUgKDk2Ni8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4
MCUgKDk3OC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4MSUgKDk5MC8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICA4MiUgKDEwMDMvMTIyMikgICANQ2hlY2tpbmcgb3V0
IGZpbGVzOiAgODMlICgxMDE1LzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczogIDg0JSAo
MTAyNy8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4NSUgKDEwMzkvMTIyMikgICAN
Q2hlY2tpbmcgb3V0IGZpbGVzOiAgODYlICgxMDUxLzEyMjIpICAgDUNoZWNraW5nIG91dCBm
aWxlczogIDg3JSAoMTA2NC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4OCUgKDEw
NzYvMTIyMikgICANQ2hlY2tpbmcgb3V0IGZpbGVzOiAgODklICgxMDg4LzEyMjIpICAgDUNo
ZWNraW5nIG91dCBmaWxlczogIDkwJSAoMTEwMC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA5MSUgKDExMTMvMTIyMikgICANQ2hlY2tpbmcgb3V0IGZpbGVzOiAgOTIlICgxMTI1
LzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczogIDkzJSAoMTEzNy8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICA5NCUgKDExNDkvMTIyMikgICANQ2hlY2tpbmcgb3V0IGZpbGVz
OiAgOTUlICgxMTYxLzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczogIDk2JSAoMTE3NC8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA5NyUgKDExODYvMTIyMikgICANQ2hlY2tp
bmcgb3V0IGZpbGVzOiAgOTclICgxMTk3LzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczog
IDk4JSAoMTE5OC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA5OSUgKDEyMTAvMTIy
MikgICANQ2hlY2tpbmcgb3V0IGZpbGVzOiAxMDAlICgxMjIyLzEyMjIpICAgDUNoZWNraW5n
IG91dCBmaWxlczogMTAwJSAoMTIyMi8xMjIyKSwgZG9uZS4KbXYgX2lweGUudGFyLmd6IGlw
eGUudGFyLmd6CnJtIC1yZiBpcHhlCmd6aXAgLWRjIGlweGUudGFyLmd6IHwgdGFyIHhmIC0K
Zm9yIGkgaW4gJChjYXQgcGF0Y2hlcy9zZXJpZXMpIDsgZG8gICAgICAgICAgICAgICAgIFwK
ICAgIHBhdGNoIC1kIGlweGUgLXAxIC0tcXVpZXQgPHBhdGNoZXMvJGkgfHwgZXhpdCAxIDsg
XApkb25lCmNhdCBDb25maWcgPj5pcHhlL3NyYy9hcmNoL2kzODYvTWFrZWZpbGUKZ21ha2Ug
LUMgaXB4ZS9zcmMgYmluL3J0bDgxMzkucm9tCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9ldGhlcmJvb3QvaXB4ZS9zcmMn
CnJtIC1mICBiaW4vKi4qICBiaW4vZXJyb3JzCSBiaW4vTklDCSAuL3V0aWwvbnJ2MmIgLi91
dGlsL3piaW4gLi91dGlsL2VsZjJlZmkzMiAuL3V0aWwvZWxmMmVmaTY0IC4vdXRpbC9lZmly
b20gLi91dGlsL2ljY2ZpeCAuL3V0aWwvZWluZm8gVEFHUyBiaW4vc3ltdGFiCiAgW01FRElB
UlVMRVNdIGV4ZQogIFtNRURJQVJVTEVTXSByYXcKICBbTUVESUFSVUxFU10gaGQKICBbTUVE
SUFSVUxFU10gbmJpCiAgW01FRElBUlVMRVNdIGRzawogIFtNRURJQVJVTEVTXSBsa3JuCiAg
W01FRElBUlVMRVNdIGtra3B4ZQogIFtNRURJQVJVTEVTXSBra3B4ZQogIFtNRURJQVJVTEVT
XSBrcHhlCiAgW01FRElBUlVMRVNdIHB4ZQogIFtNRURJQVJVTEVTXSBtcm9tCiAgW01FRElB
UlVMRVNdIHJvbQogIFtSVUxFU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlpc3IuUwog
IFtSVUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb20zMl93cmFwcGVyLlMK
ICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9lbnRyeS5TCiAgW1JVTEVT
XSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2U4MjBtYW5nbGVyLlMKICBbUlVMRVNdIGFy
Y2gvaTM4Ni9wcmVmaXgvbWJyLlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgvcHhlcHJl
Zml4LlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgvcm9tcHJlZml4LlMKICBbUlVMRVNd
IGFyY2gvaTM4Ni9wcmVmaXgvZXhlcHJlZml4LlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVm
aXgvaGRwcmVmaXguUwogIFtSVUxFU10gYXJjaC9pMzg2L3ByZWZpeC91c2JkaXNrLlMKICBb
UlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgva2trcHhlcHJlZml4LlMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9wcmVmaXgva3B4ZXByZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L25i
aXByZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L251bGxwcmVmaXguUwogIFtS
VUxFU10gYXJjaC9pMzg2L3ByZWZpeC9ib290cGFydC5TCiAgW1JVTEVTXSBhcmNoL2kzODYv
cHJlZml4L3VuZGlsb2FkZXIuUwogIFtSVUxFU10gYXJjaC9pMzg2L3ByZWZpeC9ra3B4ZXBy
ZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L3VubnJ2MmIxNi5TCiAgW1JVTEVT
XSBhcmNoL2kzODYvcHJlZml4L2xrcm5wcmVmaXguUwogIFtSVUxFU10gYXJjaC9pMzg2L3By
ZWZpeC91bm5ydjJiLlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgvbXJvbXByZWZpeC5T
CiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L2Rza3ByZWZpeC5TCiAgW1JVTEVTXSBhcmNo
L2kzODYvcHJlZml4L2xpYnByZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvdHJhbnNpdGlv
bnMvbGlicm0uUwogIFtSVUxFU10gYXJjaC9pMzg2L3RyYW5zaXRpb25zL2xpYmEyMC5TCiAg
W1JVTEVTXSBhcmNoL2kzODYvdHJhbnNpdGlvbnMvbGlicG0uUwogIFtSVUxFU10gYXJjaC9p
Mzg2L3RyYW5zaXRpb25zL2xpYmtpci5TCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS9zdGFj
azE2LlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL3N0YWNrLlMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9jb3JlL3NldGptcC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS9nZGJpZHQuUwog
IFtSVUxFU10gYXJjaC9pMzg2L2NvcmUvcGF0Y2hfY2YuUwogIFtSVUxFU10gYXJjaC9pMzg2
L2NvcmUvdmlydGFkZHIuUwogIFtSVUxFU10gdGVzdHMvZ2Ric3R1Yl90ZXN0LlMKICBbUlVM
RVNdIGFyY2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpcm9tLmMKICBbUlVMRVNdIGFyY2gvaTM4
Ni9kcml2ZXJzL25ldC91bmRpbmV0LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9kcml2ZXJzL25l
dC91bmRpLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpb25seS5jCiAg
W1JVTEVTXSBhcmNoL2kzODYvZHJpdmVycy9uZXQvdW5kaWxvYWQuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlwcmVsb2FkLmMKICBbUlVMRVNdIGFyY2gveDg2L3By
ZWZpeC9lZmlkcnZwcmVmaXguYwogIFtSVUxFU10gYXJjaC94ODYvcHJlZml4L2VmaXByZWZp
eC5jCiAgW1JVTEVTXSBhcmNoL3g4Ni9pbnRlcmZhY2UvZWZpL2VmaXg4Nl9uYXAuYwogIFtS
VUxFU10gYXJjaC94ODYvY29yZS94ODZfc3RyaW5nLmMKICBbUlVMRVNdIGFyY2gveDg2L2Nv
cmUvcGNpZGlyZWN0LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9oY2kvY29tbWFuZHMvcmVib290
X2NtZC5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaGNpL2NvbW1hbmRzL3B4ZV9jbWQuYwogIFtS
VUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb21ib290X3Jlc29sdi5jCiAg
W1JVTEVTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbTMyX2NhbGwuYwogIFtS
VUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb21ib290X2NhbGwuYwogIFtS
VUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGVwYXJlbnQvcHhlcGFyZW50X2RoY3AuYwog
IFtSVUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGVwYXJlbnQvcHhlcGFyZW50LmMKICBb
UlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV91ZHAuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX3VuZGkuYwogIFtSVUxFU10gYXJjaC9pMzg2L2lu
dGVyZmFjZS9weGUvcHhlX2xvYWRlci5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW50ZXJmYWNl
L3B4ZS9weGVfZXhpdF9ob29rLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhl
L3B4ZV9wcmVib290LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV90
ZnRwLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9maWxlLmMKICBb
UlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9jYWxsLmMKICBbUlVMRVNdIGFy
Y2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3Nfc21iaW9zLmMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9pbnRlcmZhY2UvcGNiaW9zL21lbXRvcF91bWFsbG9jLmMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3NpbnQuYwogIFtSVUxFU10gYXJjaC9pMzg2L2lu
dGVyZmFjZS9wY2Jpb3MvYmlvc190aW1lci5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW50ZXJm
YWNlL3BjYmlvcy9wY2liaW9zLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNi
aW9zL2ludDEzLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3Nf
bmFwLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbWFnZS9jb21ib290LmMKICBbUlVMRVNdIGFy
Y2gvaTM4Ni9pbWFnZS9lbGZib290LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbWFnZS9ib290
c2VjdG9yLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbWFnZS9tdWx0aWJvb3QuYwogIFtSVUxF
U10gYXJjaC9pMzg2L2ltYWdlL3B4ZV9pbWFnZS5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW1h
Z2UvYnppbWFnZS5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW1hZ2UvbmJpLmMKICBbUlVMRVNd
IGFyY2gvaTM4Ni9pbWFnZS9jb20zMi5jCiAgW1JVTEVTXSBhcmNoL2kzODYvZmlybXdhcmUv
cGNiaW9zL3BucGJpb3MuYwogIFtSVUxFU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9i
aW9zX2NvbnNvbGUuYwogIFtSVUxFU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9mYWtl
ZTgyMC5jCiAgW1JVTEVTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2Jhc2VtZW0uYwog
IFtSVUxFU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9tZW1tYXAuYwogIFtSVUxFU10g
YXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9oaWRlbWVtLmMKICBbUlVMRVNdIGFyY2gvaTM4
Ni90cmFuc2l0aW9ucy9saWJybV9tZ210LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL2R1
bXByZWdzLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL251bGx0cmFwLmMKICBbUlVMRVNd
IGFyY2gvaTM4Ni9jb3JlL3JlbG9jYXRlLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL3g4
Nl9pby5jCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS90aW1lcjIuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2NvcmUvcnVudGltZS5jCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS9waWM4MjU5
LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL2NwdS5jCiAgW1JVTEVTXSBhcmNoL2kzODYv
Y29yZS9nZGJtYWNoLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL3ZpZGVvX3N1YnIuYwog
IFtSVUxFU10gYXJjaC9pMzg2L2NvcmUvYmFzZW1lbV9wYWNrZXQuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2NvcmUvcmR0c2NfdGltZXIuYwogIFtSVUxFU10gY29uZmlnL2NvbmZpZ19yb21w
cmVmaXguYwogIFtSVUxFU10gY29uZmlnL2NvbmZpZy5jCiAgW1JVTEVTXSBjb25maWcvY29u
ZmlnX2ZjLmMKICBbUlVMRVNdIGNvbmZpZy9jb25maWdfZXRoZXJuZXQuYwogIFtSVUxFU10g
Y29uZmlnL2NvbmZpZ19uZXQ4MDIxMS5jCiAgW1JVTEVTXSBjb25maWcvY29uZmlnX2luZmlu
aWJhbmQuYwogIFtSVUxFU10gdXNyL2F1dG9ib290LmMKICBbUlVMRVNdIHVzci9pZm1nbXQu
YwogIFtSVUxFU10gdXNyL2ZjbWdtdC5jCiAgW1JVTEVTXSB1c3IvZGhjcG1nbXQuYwogIFtS
VUxFU10gdXNyL3B4ZW1lbnUuYwogIFtSVUxFU10gdXNyL2ltZ21nbXQuYwogIFtSVUxFU10g
dXNyL2xvdGVzdC5jCiAgW1JVTEVTXSB1c3IvaXdtZ210LmMKICBbUlVMRVNdIHVzci9yb3V0
ZS5jCiAgW1JVTEVTXSB1c3IvcHJvbXB0LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFw
X3JvLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2l0LmMKICBbUlVMRVNdIGhjaS9r
ZXltYXAva2V5bWFwX3NnLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2VzLmMKICBb
UlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2h1LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5
bWFwX2JnLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX25sLmMKICBbUlVMRVNdIGhj
aS9rZXltYXAva2V5bWFwX2N6LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2RlLmMK
ICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2ZpLmMKICBbUlVMRVNdIGhjaS9rZXltYXAv
a2V5bWFwX21rLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3VrLmMKICBbUlVMRVNd
IGhjaS9rZXltYXAva2V5bWFwX3BsLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2F6
LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2ZyLmMKICBbUlVMRVNdIGhjaS9rZXlt
YXAva2V5bWFwX2J5LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX210LmMKICBbUlVM
RVNdIGhjaS9rZXltYXAva2V5bWFwX3dvLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFw
X3VhLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2x0LmMKICBbUlVMRVNdIGhjaS9r
ZXltYXAva2V5bWFwX3NyLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2FsLmMKICBb
UlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3J1LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5
bWFwX2NmLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX25vLmMKICBbUlVMRVNdIGhj
aS9rZXltYXAva2V5bWFwX2V0LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3RoLmMK
ICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3VzLmMKICBbUlVMRVNdIGhjaS9rZXltYXAv
a2V5bWFwX2lsLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2dyLmMKICBbUlVMRVNd
IGhjaS9rZXltYXAva2V5bWFwX2RrLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3B0
LmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy93aWRnZXRzL2VkaXRib3guYwogIFtSVUxFU10g
aGNpL211Y3Vyc2VzL2tiLmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy9jb2xvdXIuYwogIFtS
VUxFU10gaGNpL211Y3Vyc2VzL3Nsay5jCiAgW1JVTEVTXSBoY2kvbXVjdXJzZXMvcHJpbnQu
YwogIFtSVUxFU10gaGNpL211Y3Vyc2VzL3dpbmRvd3MuYwogIFtSVUxFU10gaGNpL211Y3Vy
c2VzL211Y3Vyc2VzLmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy93aW5pbml0LmMKICBbUlVM
RVNdIGhjaS9tdWN1cnNlcy9wcmludF9uYWR2LmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy9h
bnNpX3NjcmVlbi5jCiAgW1JVTEVTXSBoY2kvbXVjdXJzZXMvd2luYXR0cnMuYwogIFtSVUxF
U10gaGNpL211Y3Vyc2VzL2VkZ2luZy5jCiAgW1JVTEVTXSBoY2kvbXVjdXJzZXMvY2xlYXIu
YwogIFtSVUxFU10gaGNpL211Y3Vyc2VzL2FsZXJ0LmMKICBbUlVMRVNdIGhjaS90dWkvc2V0
dGluZ3NfdWkuYwogIFtSVUxFU10gaGNpL3R1aS9sb2dpbl91aS5jCiAgW1JVTEVTXSBoY2kv
Y29tbWFuZHMvdmxhbl9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1hbmRzL2l3bWdtdF9jbWQu
YwogIFtSVUxFU10gaGNpL2NvbW1hbmRzL2xvdGVzdF9jbWQuYwogIFtSVUxFU10gaGNpL2Nv
bW1hbmRzL2ZjbWdtdF9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1hbmRzL2ltYWdlX2NtZC5j
CiAgW1JVTEVTXSBoY2kvY29tbWFuZHMvZGlnZXN0X2NtZC5jCiAgW1JVTEVTXSBoY2kvY29t
bWFuZHMvcm91dGVfY21kLmMKICBbUlVMRVNdIGhjaS9jb21tYW5kcy9kaGNwX2NtZC5jCiAg
W1JVTEVTXSBoY2kvY29tbWFuZHMvdGltZV9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1hbmRz
L2F1dG9ib290X2NtZC5jCiAgW1JVTEVTXSBoY2kvY29tbWFuZHMvZ2Ric3R1Yl9jbWQuYwog
IFtSVUxFU10gaGNpL2NvbW1hbmRzL2lmbWdtdF9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1h
bmRzL3NhbmJvb3RfY21kLmMKICBbUlVMRVNdIGhjaS9jb21tYW5kcy9sb2dpbl9jbWQuYwog
IFtSVUxFU10gaGNpL2NvbW1hbmRzL2NvbmZpZ19jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1h
bmRzL252b19jbWQuYwogIFtSVUxFU10gaGNpL3dpcmVsZXNzX2Vycm9ycy5jCiAgW1JVTEVT
XSBoY2kvZWRpdHN0cmluZy5jCiAgW1JVTEVTXSBoY2kvcmVhZGxpbmUuYwogIFtSVUxFU10g
aGNpL3N0cmVycm9yLmMKICBbUlVMRVNdIGhjaS9zaGVsbC5jCiAgW1JVTEVTXSBoY2kvbGlu
dXhfYXJncy5jCiAgW1JVTEVTXSBjcnlwdG8vYXh0bHMvc2hhMS5jCiAgW1JVTEVTXSBjcnlw
dG8vYXh0bHMvcnNhLmMKICBbUlVMRVNdIGNyeXB0by9heHRscy9iaWdpbnQuYwogIFtSVUxF
U10gY3J5cHRvL2F4dGxzL2Flcy5jCiAgW1JVTEVTXSBjcnlwdG8vY2JjLmMKICBbUlVMRVNd
IGNyeXB0by9heHRsc19zaGExLmMKICBbUlVMRVNdIGNyeXB0by9hZXNfd3JhcC5jCiAgW1JV
TEVTXSBjcnlwdG8vYXh0bHNfYWVzLmMKICBbUlVMRVNdIGNyeXB0by9hc24xLmMKICBbUlVM
RVNdIGNyeXB0by9obWFjLmMKICBbUlVMRVNdIGNyeXB0by9jcmMzMi5jCiAgW1JVTEVTXSBj
cnlwdG8vY3JhbmRvbS5jCiAgW1JVTEVTXSBjcnlwdG8vY3J5cHRvX251bGwuYwogIFtSVUxF
U10gY3J5cHRvL2FyYzQuYwogIFtSVUxFU10gY3J5cHRvL3NoYTFleHRyYS5jCiAgW1JVTEVT
XSBjcnlwdG8veDUwOS5jCiAgW1JVTEVTXSBjcnlwdG8vbWQ1LmMKICBbUlVMRVNdIGNyeXB0
by9jaGFwLmMKICBbUlVMRVNdIHRlc3RzL2xpbmVidWZfdGVzdC5jCiAgW1JVTEVTXSB0ZXN0
cy91bWFsbG9jX3Rlc3QuYwogIFtSVUxFU10gdGVzdHMvYm9mbV90ZXN0LmMKICBbUlVMRVNd
IHRlc3RzL3VyaV90ZXN0LmMKICBbUlVMRVNdIHRlc3RzL3Rlc3QuYwogIFtSVUxFU10gdGVz
dHMvbGlzdF90ZXN0LmMKICBbUlVMRVNdIHRlc3RzL21lbWNweV90ZXN0LmMKICBbUlVMRVNd
IGludGVyZmFjZS9ib2ZtL2JvZm0uYwogIFtSVUxFU10gaW50ZXJmYWNlL3NtYmlvcy9zbWJp
b3MuYwogIFtSVUxFU10gaW50ZXJmYWNlL3NtYmlvcy9zbWJpb3Nfc2V0dGluZ3MuYwogIFtS
VUxFU10gaW50ZXJmYWNlL2VmaS9lZmlfY29uc29sZS5jCiAgW1JVTEVTXSBpbnRlcmZhY2Uv
ZWZpL2VmaV9zbnAuYwogIFtSVUxFU10gaW50ZXJmYWNlL2VmaS9lZmlfcGNpLmMKICBbUlVM
RVNdIGludGVyZmFjZS9lZmkvZWZpX3N0cmVycm9yLmMKICBbUlVMRVNdIGludGVyZmFjZS9l
ZmkvZWZpX2JvZm0uYwogIFtSVUxFU10gaW50ZXJmYWNlL2VmaS9lZmlfdW1hbGxvYy5jCiAg
W1JVTEVTXSBpbnRlcmZhY2UvZWZpL2VmaV9zdHJpbmdzLmMKICBbUlVMRVNdIGludGVyZmFj
ZS9lZmkvZWZpX3RpbWVyLmMKICBbUlVMRVNdIGludGVyZmFjZS9lZmkvZWZpX3NtYmlvcy5j
CiAgW1JVTEVTXSBpbnRlcmZhY2UvZWZpL2VmaV9kcml2ZXIuYwogIFtSVUxFU10gaW50ZXJm
YWNlL2VmaS9lZmlfaW5pdC5jCiAgW1JVTEVTXSBpbnRlcmZhY2UvZWZpL2VmaV91YWNjZXNz
LmMKICBbUlVMRVNdIGludGVyZmFjZS9lZmkvZWZpX2lvLmMKICBbUlVMRVNdIGRyaXZlcnMv
aW5maW5pYmFuZC9saW5kYS5jCiAgW1JVTEVTXSBkcml2ZXJzL2luZmluaWJhbmQvaGVybW9u
LmMKICBbUlVMRVNdIGRyaXZlcnMvaW5maW5pYmFuZC9hcmJlbC5jCiAgW1JVTEVTXSBkcml2
ZXJzL2luZmluaWJhbmQvcWliNzMyMi5jCiAgW1JVTEVTXSBkcml2ZXJzL2luZmluaWJhbmQv
bGluZGFfZncuYwogIFtSVUxFU10gZHJpdmVycy9iaXRiYXNoL2JpdGJhc2guYwogIFtSVUxF
U10gZHJpdmVycy9iaXRiYXNoL3NwaV9iaXQuYwogIFtSVUxFU10gZHJpdmVycy9iaXRiYXNo
L2kyY19iaXQuYwogIFtSVUxFU10gZHJpdmVycy9udnMvc3BpLmMKICBbUlVMRVNdIGRyaXZl
cnMvbnZzL252c3ZwZC5jCiAgW1JVTEVTXSBkcml2ZXJzL252cy90aHJlZXdpcmUuYwogIFtS
VUxFU10gZHJpdmVycy9udnMvbnZzLmMKICBbUlVMRVNdIGRyaXZlcnMvYmxvY2svaWJmdC5j
CiAgW1JVTEVTXSBkcml2ZXJzL2Jsb2NrL2F0YS5jCiAgW1JVTEVTXSBkcml2ZXJzL2Jsb2Nr
L3NycC5jCiAgW1JVTEVTXSBkcml2ZXJzL2Jsb2NrL3Njc2kuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvZWZpL3NucG5ldC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9lZmkvc25wb25seS5j
CiAgW1JVTEVTXSBkcml2ZXJzL25ldC92eGdlL3Z4Z2VfdHJhZmZpYy5jCiAgW1JVTEVTXSBk
cml2ZXJzL25ldC92eGdlL3Z4Z2UuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvdnhnZS92eGdl
X2NvbmZpZy5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC92eGdlL3Z4Z2VfbWFpbi5jCiAgW1JV
TEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfaW5pdC5jCiAgW1JVTEVTXSBkcml2
ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAzX21hYy5jCiAgW1JVTEVTXSBkcml2ZXJz
L25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAzX2NhbGliLmMKICBbUlVMRVNdIGRyaXZlcnMv
bmV0L2F0aC9hdGg5ay9hdGg5a19lZXByb21fOTI4Ny5jCiAgW1JVTEVTXSBkcml2ZXJzL25l
dC9hdGgvYXRoOWsvYXRoOWsuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0
aDlrX2NvbW1vbi5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5
MDAyX2h3LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19jYWxpYi5j
CiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfZWVwcm9tXzRrLmMKICBb
UlVMRVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19lZXByb21fZGVmLmMKICBbUlVM
RVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19tYWMuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwM19lZXByb20uYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwMl9tYWMuYwogIFtSVUxFU10gZHJpdmVycy9u
ZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwMl9jYWxpYi5jCiAgW1JVTEVTXSBkcml2ZXJzL25l
dC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX3BoeS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9h
dGgvYXRoOWsvYXRoOWtfeG1pdC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsv
YXRoOWtfYXI1MDA4X3BoeS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRo
OWtfYXI5MDAzX3BoeS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtf
YW5pLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19tYWluLmMKICBb
UlVMRVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjkwMDNfaHcuYwogIFtSVUxF
U10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2VlcHJvbS5jCiAgW1JVTEVTXSBkcml2
ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfcmVjdi5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9h
dGgvYXRoOWsvYXRoOWtfaHcuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0
aDVrX3Jlc2V0LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1ay5jCiAg
W1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfYXR0YWNoLmMKICBbUlVMRVNd
IGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19yZmtpbGwuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvYXRoL2F0aDVrL2F0aDVrX2dwaW8uYwogIFtSVUxFU10gZHJpdmVycy9uZXQvYXRo
L2F0aDVrL2F0aDVrX3BoeS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRo
NWtfaW5pdHZhbHMuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX2Rt
YS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfcGN1LmMKICBbUlVM
RVNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19kZXNjLmMKICBbUlVMRVNdIGRyaXZl
cnMvbmV0L2F0aC9hdGg1ay9hdGg1a19xY3UuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvYXRo
L2F0aDVrL2F0aDVrX2VlcHJvbS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsv
YXRoNWtfY2Fwcy5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoX2h3LmMKICBbUlVM
RVNdIGRyaXZlcnMvbmV0L2F0aC9hdGhfa2V5LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2F0
aC9hdGhfbWFpbi5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hdGgvYXRoX3JlZ2QuYwogIFtS
VUxFU10gZHJpdmVycy9uZXQvcnRsODE4eC9ydGw4MTgwX2dyZjUxMDEuYwogIFtSVUxFU10g
ZHJpdmVycy9uZXQvcnRsODE4eC9ydGw4MTgwX21heDI4MjAuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvcnRsODE4eC9ydGw4MTg1LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L3J0bDgxOHgv
cnRsODE4eC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9ydGw4MTh4L3J0bDgxODAuYwogIFtS
VUxFU10gZHJpdmVycy9uZXQvcnRsODE4eC9ydGw4MTg1X3J0bDgyMjUuYwogIFtSVUxFU10g
ZHJpdmVycy9uZXQvcnRsODE4eC9ydGw4MTgwX3NhMjQwMC5jCiAgW1JVTEVTXSBkcml2ZXJz
L25ldC9waGFudG9tL3BoYW50b20uYwogIFtSVUxFU10gZHJpdmVycy9uZXQvaWdidmYvaWdi
dmZfbWFpbi5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9pZ2J2Zi9pZ2J2Zl92Zi5jCiAgW1JV
TEVTXSBkcml2ZXJzL25ldC9pZ2J2Zi9pZ2J2Zl9tYnguYwogIFtSVUxFU10gZHJpdmVycy9u
ZXQvaWdiL2lnYl84MjU3NS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9pZ2IvaWdiLmMKICBb
UlVMRVNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfbWFjLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0
L2lnYi9pZ2JfcGh5LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfbWFpbi5jCiAg
W1JVTEVTXSBkcml2ZXJzL25ldC9pZ2IvaWdiX252bS5jCiAgW1JVTEVTXSBkcml2ZXJzL25l
dC9pZ2IvaWdiX2FwaS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9pZ2IvaWdiX21hbmFnZS5j
CiAgW1JVTEVTXSBkcml2ZXJzL25ldC9lMTAwMGUvZTEwMDBlXzgwMDAzZXMybGFuLmMKICBb
UlVMRVNdIGRyaXZlcnMvbmV0L2UxMDAwZS9lMTAwMGVfaWNoOGxhbi5jCiAgW1JVTEVTXSBk
cml2ZXJzL25ldC9lMTAwMGUvZTEwMDBlX21hbmFnZS5jCiAgW1JVTEVTXSBkcml2ZXJzL25l
dC9lMTAwMGUvZTEwMDBlXzgyNTcxLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2UxMDAwZS9l
MTAwMGUuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV9tYWMuYwogIFtS
VUxFU10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV9waHkuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvZTEwMDBlL2UxMDAwZV9udm0uYwogIFtSVUxFU10gZHJpdmVycy9uZXQvZTEwMDBl
L2UxMDAwZV9tYWluLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwXzgyNTQy
LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwXzgyNTQwLmMKICBbUlVMRVNd
IGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwX2FwaS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9l
MTAwMC9lMTAwMF9tYW5hZ2UuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBf
ODI1NDMuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfbnZtLmMKICBbUlVM
RVNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwX21hYy5jCiAgW1JVTEVTXSBkcml2ZXJzL25l
dC9lMTAwMC9lMTAwMF9waHkuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDAu
YwogIFtSVUxFU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfbWFpbi5jCiAgW1JVTEVTXSBk
cml2ZXJzL25ldC9lMTAwMC9lMTAwMF84MjU0MS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9k
ZXBjYS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9hbWQ4MTExZS5jCiAgW1JVTEVTXSBkcml2
ZXJzL25ldC9qbWUuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvcHJpc20yX3BjaS5jCiAgW1JV
TEVTXSBkcml2ZXJzL25ldC8zYzU5NS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC92aWEtcmhp
bmUuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvdzg5Yzg0MC5jCiAgW1JVTEVTXSBkcml2ZXJz
L25ldC9jczg5eDAuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvbmUya19pc2EuYwogIFtSVUxF
U10gZHJpdmVycy9uZXQvaXBvaWIuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvc2t5Mi5jCiAg
W1JVTEVTXSBkcml2ZXJzL25ldC9hdGwxZS5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9sZWdh
Y3kuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvZWVwcm8xMDAuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvM2M1MTUuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvYm54Mi5jCiAgW1JVTEVTXSBk
cml2ZXJzL25ldC9kbWZlLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L25zODM5MC5jCiAgW1JV
TEVTXSBkcml2ZXJzL25ldC9uczgzODIwLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L3BjbmV0
MzIuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvM2M1MDktZWlzYS5jCiAgW1JVTEVTXSBkcml2
ZXJzL25ldC90ZzMuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvM2M1eDkuYwogIFtSVUxFU10g
ZHJpdmVycy9uZXQvc21jOTAwMC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC92aXJ0aW8tbmV0
LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2V0aGVyZmFicmljLmMKICBbUlVMRVNdIGRyaXZl
cnMvbmV0L3dkLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L3NrZ2UuYwogIFtSVUxFU10gZHJp
dmVycy9uZXQvc2lzMTkwLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L25hdHNlbWkuYwogIFtS
VUxFU10gZHJpdmVycy9uZXQvYjQ0LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L2ZvcmNlZGV0
aC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9wcmlzbTJfcGx4LmMKICBbUlVMRVNdIGRyaXZl
cnMvbmV0L3N1bmRhbmNlLmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0L3J0bDgxMzkuYwogIFtS
VUxFU10gZHJpdmVycy9uZXQvZXBpYzEwMC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC8zYzkw
eC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9kYXZpY29tLmMKICBbUlVMRVNdIGRyaXZlcnMv
bmV0LzNjNTA5LmMKICBbUlVMRVNdIGRyaXZlcnMvbmV0LzNjNTI5LmMKICBbUlVMRVNdIGRy
aXZlcnMvbmV0L210ZDgweC5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9teXJpMTBnZS5jCiAg
W1JVTEVTXSBkcml2ZXJzL25ldC9lZXByby5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC9uZS5j
CiAgW1JVTEVTXSBkcml2ZXJzL25ldC92aWEtdmVsb2NpdHkuYwogIFtSVUxFU10gZHJpdmVy
cy9uZXQvcG5pYy5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC90dWxpcC5jCiAgW1JVTEVTXSBk
cml2ZXJzL25ldC9zaXM5MDAuYwogIFtSVUxFU10gZHJpdmVycy9uZXQvcjgxNjkuYwogIFtS
VUxFU10gZHJpdmVycy9uZXQvdGxhbi5jCiAgW1JVTEVTXSBkcml2ZXJzL25ldC8zYzUwMy5j
CiAgW1JVTEVTXSBkcml2ZXJzL2J1cy9wY2kuYwogIFtSVUxFU10gZHJpdmVycy9idXMvaXNh
cG5wLmMKICBbUlVMRVNdIGRyaXZlcnMvYnVzL3ZpcnRpby1yaW5nLmMKICBbUlVMRVNdIGRy
aXZlcnMvYnVzL3ZpcnRpby1wY2kuYwogIFtSVUxFU10gZHJpdmVycy9idXMvaXNhLmMKICBb
UlVMRVNdIGRyaXZlcnMvYnVzL2lzYV9pZHMuYwogIFtSVUxFU10gZHJpdmVycy9idXMvcGNp
ZXh0cmEuYwogIFtSVUxFU10gZHJpdmVycy9idXMvcGNpYmFja3VwLmMKICBbUlVMRVNdIGRy
aXZlcnMvYnVzL3BjaXZwZC5jCiAgW1JVTEVTXSBkcml2ZXJzL2J1cy9tY2EuYwogIFtSVUxF
U10gZHJpdmVycy9idXMvZWlzYS5jCiAgW1JVTEVTXSBpbWFnZS9zY3JpcHQuYwogIFtSVUxF
U10gaW1hZ2UvZWxmLmMKICBbUlVMRVNdIGltYWdlL2VmaV9pbWFnZS5jCiAgW1JVTEVTXSBp
bWFnZS9zZWdtZW50LmMKICBbUlVMRVNdIGltYWdlL2VtYmVkZGVkLmMKICBbUlVMRVNdIG5l
dC84MDIxMS9yYzgwMjExLmMKICBbUlVMRVNdIG5ldC84MDIxMS93cGEuYwogIFtSVUxFU10g
bmV0LzgwMjExL3dwYV9jY21wLmMKICBbUlVMRVNdIG5ldC84MDIxMS9uZXQ4MDIxMS5jCiAg
W1JVTEVTXSBuZXQvODAyMTEvc2VjODAyMTEuYwogIFtSVUxFU10gbmV0LzgwMjExL3dlcC5j
CiAgW1JVTEVTXSBuZXQvODAyMTEvd3BhX3Bzay5jCiAgW1JVTEVTXSBuZXQvODAyMTEvd3Bh
X3RraXAuYwogIFtSVUxFU10gbmV0L2luZmluaWJhbmQvaWJfbWkuYwogIFtSVUxFU10gbmV0
L2luZmluaWJhbmQvaWJfY20uYwogIFtSVUxFU10gbmV0L2luZmluaWJhbmQvaWJfcGFja2V0
LmMKICBbUlVMRVNdIG5ldC9pbmZpbmliYW5kL2liX3NtYy5jCiAgW1JVTEVTXSBuZXQvaW5m
aW5pYmFuZC9pYl9wYXRocmVjLmMKICBbUlVMRVNdIG5ldC9pbmZpbmliYW5kL2liX3NtYS5j
CiAgW1JVTEVTXSBuZXQvaW5maW5pYmFuZC9pYl9jbXJjLmMKICBbUlVMRVNdIG5ldC9pbmZp
bmliYW5kL2liX3NycC5jCiAgW1JVTEVTXSBuZXQvaW5maW5pYmFuZC9pYl9tY2FzdC5jCiAg
W1JVTEVTXSBuZXQvdWRwL2RoY3AuYwogIFtSVUxFU10gbmV0L3VkcC9kbnMuYwogIFtSVUxF
U10gbmV0L3VkcC9zbGFtLmMKICBbUlVMRVNdIG5ldC91ZHAvdGZ0cC5jCiAgW1JVTEVTXSBu
ZXQvdWRwL3N5c2xvZy5jCiAgW1JVTEVTXSBuZXQvdGNwL2h0dHBzLmMKICBbUlVMRVNdIG5l
dC90Y3AvaXNjc2kuYwogIFtSVUxFU10gbmV0L3RjcC9mdHAuYwogIFtSVUxFU10gbmV0L3Rj
cC9odHRwLmMKICBbUlVMRVNdIG5ldC9lYXBvbC5jCiAgW1JVTEVTXSBuZXQvZmNucy5jCiAg
W1JVTEVTXSBuZXQvZmFrZWRoY3AuYwogIFtSVUxFU10gbmV0L2ljbXB2Ni5jCiAgW1JVTEVT
XSBuZXQvbmV0ZGV2X3NldHRpbmdzLmMKICBbUlVMRVNdIG5ldC9mY3AuYwogIFtSVUxFU10g
bmV0L2Zjb2UuYwogIFtSVUxFU10gbmV0L2lvYnBhZC5jCiAgW1JVTEVTXSBuZXQvdGNwLmMK
ICBbUlVMRVNdIG5ldC9taWkuYwogIFtSVUxFU10gbmV0L2FycC5jCiAgW1JVTEVTXSBuZXQv
ZXRoZXJuZXQuYwogIFtSVUxFU10gbmV0L2ZjZWxzLmMKICBbUlVMRVNdIG5ldC90Y3BpcC5j
CiAgW1JVTEVTXSBuZXQvaXB2Ni5jCiAgW1JVTEVTXSBuZXQvYW9lLmMKICBbUlVMRVNdIG5l
dC9yYXJwLmMKICBbUlVMRVNdIG5ldC92bGFuLmMKICBbUlVMRVNdIG5ldC9udWxsbmV0LmMK
ICBbUlVMRVNdIG5ldC9pbmZpbmliYW5kLmMKICBbUlVMRVNdIG5ldC9pcHY0LmMKICBbUlVM
RVNdIG5ldC9ldGhfc2xvdy5jCiAgW1JVTEVTXSBuZXQvdGxzLmMKICBbUlVMRVNdIG5ldC9u
ZHAuYwogIFtSVUxFU10gbmV0L2RoY3Bwa3QuYwogIFtSVUxFU10gbmV0L2NhY2hlZGhjcC5j
CiAgW1JVTEVTXSBuZXQvbmV0ZGV2aWNlLmMKICBbUlVMRVNdIG5ldC9yZXRyeS5jCiAgW1JV
TEVTXSBuZXQvaWNtcC5jCiAgW1JVTEVTXSBuZXQvdWRwLmMKICBbUlVMRVNdIG5ldC9kaGNw
b3B0cy5jCiAgW1JVTEVTXSBuZXQvZmMuYwogIFtSVUxFU10gY29yZS9jdHlwZS5jCiAgW1JV
TEVTXSBjb3JlL2Jhc2VuYW1lLmMKICBbUlVMRVNdIGNvcmUvbnZvLmMKICBbUlVMRVNdIGNv
cmUvZGVidWdfbWQ1LmMKICBbUlVMRVNdIGNvcmUvaW50ZXJmYWNlLmMKICBbUlVMRVNdIGNv
cmUvYnRleHQuYwogIFtSVUxFU10gY29yZS9nZXRvcHQuYwogIFtSVUxFU10gY29yZS9nZXRr
ZXkuYwogIFtSVUxFU10gY29yZS9hc3ByaW50Zi5jCiAgW1JVTEVTXSBjb3JlL2dkYnN0dWIu
YwogIFtSVUxFU10gY29yZS9saW5lYnVmLmMKICBbUlVMRVNdIGNvcmUvZWRkLmMKICBbUlVM
RVNdIGNvcmUvaW5pdC5jCiAgW1JVTEVTXSBjb3JlL3N0cnRvdWxsLmMKICBbUlVMRVNdIGNv
cmUvc2V0dGluZ3MuYwogIFtSVUxFU10gY29yZS9tYWluLmMKICBbUlVMRVNdIGNvcmUvZG93
bmxvYWRlci5jCiAgW1JVTEVTXSBjb3JlL2h3LmMKICBbUlVMRVNdIGNvcmUvYml0b3BzLmMK
ICBbUlVMRVNdIGNvcmUvdnNwcmludGYuYwogIFtSVUxFU10gY29yZS9udWxsX25hcC5jCiAg
W1JVTEVTXSBjb3JlL3hmZXIuYwogIFtSVUxFU10gY29yZS9wY19rYmQuYwogIFtSVUxFU10g
Y29yZS9wb3NpeF9pby5jCiAgW1JVTEVTXSBjb3JlL2dkYnVkcC5jCiAgW1JVTEVTXSBjb3Jl
L2NvbnNvbGUuYwogIFtSVUxFU10gY29yZS9vcGVuLmMKICBbUlVMRVNdIGNvcmUvc2VyaWFs
LmMKICBbUlVMRVNdIGNvcmUvYWNwaS5jCiAgW1JVTEVTXSBjb3JlL3VyaS5jCiAgW1JVTEVT
XSBjb3JlL2Jsb2NrZGV2LmMKICBbUlVMRVNdIGNvcmUvY3Bpby5jCiAgW1JVTEVTXSBjb3Jl
L3RpbWVyLmMKICBbUlVMRVNdIGNvcmUvbWlzYy5jCiAgW1JVTEVTXSBjb3JlL2N3dXJpLmMK
ICBbUlVMRVNdIGNvcmUvaTgyMzY1LmMKICBbUlVMRVNdIGNvcmUvZXJybm8uYwogIFtSVUxF
U10gY29yZS9qb2IuYwogIFtSVUxFU10gY29yZS9wcm9jZXNzLmMKICBbUlVMRVNdIGNvcmUv
Z2Ric2VyaWFsLmMKICBbUlVMRVNdIGNvcmUvZGVidWcuYwogIFtSVUxFU10gY29yZS9mbnJl
Yy5jCiAgW1JVTEVTXSBjb3JlL21hbGxvYy5jCiAgW1JVTEVTXSBjb3JlL2Fuc2llc2MuYwog
IFtSVUxFU10gY29yZS9kZXZpY2UuYwogIFtSVUxFU10gY29yZS9iYXNlNjQuYwogIFtSVUxF
U10gY29yZS9iaXRtYXAuYwogIFtSVUxFU10gY29yZS9leGVjLmMKICBbUlVMRVNdIGNvcmUv
bW9ub2pvYi5jCiAgW1JVTEVTXSBjb3JlL251bGxfc2FuYm9vdC5jCiAgW1JVTEVTXSBjb3Jl
L3N0cmluZ2V4dHJhLmMKICBbUlVMRVNdIGNvcmUvcmFuZG9tLmMKICBbUlVMRVNdIGNvcmUv
cGFyc2VvcHQuYwogIFtSVUxFU10gY29yZS9yZXNvbHYuYwogIFtSVUxFU10gY29yZS9pb2J1
Zi5jCiAgW1JVTEVTXSBjb3JlL2ltYWdlLmMKICBbUlVMRVNdIGNvcmUvc3RyaW5nLmMKICBb
UlVMRVNdIGNvcmUvYmFzZTE2LmMKICBbUlVMRVNdIGNvcmUvYXNzZXJ0LmMKICBbUlVMRVNd
IGNvcmUvcmVmY250LmMKICBbUlVMRVNdIGNvcmUvdXVpZC5jCiAgW1JVTEVTXSBjb3JlL3Nl
cmlhbF9jb25zb2xlLmMKICBbUlVMRVNdIGNvcmUvcGNtY2lhLmMKICBbUlVMRVNdIGxpYmdj
Yy9fX3Vtb2RkaTMuYwogIFtSVUxFU10gbGliZ2NjL19fdWRpdmRpMy5jCiAgW1JVTEVTXSBs
aWJnY2MvX19tb2RkaTMuYwogIFtSVUxFU10gbGliZ2NjL21lbWNweS5jCiAgW1JVTEVTXSBs
aWJnY2MvaWNjLmMKICBbUlVMRVNdIGxpYmdjYy9fX2RpdmRpMy5jCiAgW1JVTEVTXSBsaWJn
Y2MvX191ZGl2bW9kZGk0LmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlp
c3IuUwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbTMyX3dyYXBw
ZXIuUwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZS9weGVfZW50cnkuUwogIFtE
RVBTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2U4MjBtYW5nbGVyLlMKICBbREVQU10g
YXJjaC9pMzg2L3ByZWZpeC9tYnIuUwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4L3B4ZXBy
ZWZpeC5TCiAgW0RFUFNdIGFyY2gvaTM4Ni9wcmVmaXgvcm9tcHJlZml4LlMKICBbREVQU10g
YXJjaC9pMzg2L3ByZWZpeC9leGVwcmVmaXguUwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4
L2hkcHJlZml4LlMKICBbREVQU10gYXJjaC9pMzg2L3ByZWZpeC91c2JkaXNrLlMKICBbREVQ
U10gYXJjaC9pMzg2L3ByZWZpeC9ra2tweGVwcmVmaXguUwogIFtERVBTXSBhcmNoL2kzODYv
cHJlZml4L2tweGVwcmVmaXguUwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4L25iaXByZWZp
eC5TCiAgW0RFUFNdIGFyY2gvaTM4Ni9wcmVmaXgvbnVsbHByZWZpeC5TCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9wcmVmaXgvYm9vdHBhcnQuUwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4L3Vu
ZGlsb2FkZXIuUwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4L2trcHhlcHJlZml4LlMKICBb
REVQU10gYXJjaC9pMzg2L3ByZWZpeC91bm5ydjJiMTYuUwogIFtERVBTXSBhcmNoL2kzODYv
cHJlZml4L2xrcm5wcmVmaXguUwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4L3VubnJ2MmIu
UwogIFtERVBTXSBhcmNoL2kzODYvcHJlZml4L21yb21wcmVmaXguUwogIFtERVBTXSBhcmNo
L2kzODYvcHJlZml4L2Rza3ByZWZpeC5TCiAgW0RFUFNdIGFyY2gvaTM4Ni9wcmVmaXgvbGli
cHJlZml4LlMKICBbREVQU10gYXJjaC9pMzg2L3RyYW5zaXRpb25zL2xpYnJtLlMKICBbREVQ
U10gYXJjaC9pMzg2L3RyYW5zaXRpb25zL2xpYmEyMC5TCiAgW0RFUFNdIGFyY2gvaTM4Ni90
cmFuc2l0aW9ucy9saWJwbS5TCiAgW0RFUFNdIGFyY2gvaTM4Ni90cmFuc2l0aW9ucy9saWJr
aXIuUwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9zdGFjazE2LlMKICBbREVQU10gYXJjaC9p
Mzg2L2NvcmUvc3RhY2suUwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9zZXRqbXAuUwogIFtE
RVBTXSBhcmNoL2kzODYvY29yZS9nZGJpZHQuUwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9w
YXRjaF9jZi5TCiAgW0RFUFNdIGFyY2gvaTM4Ni9jb3JlL3ZpcnRhZGRyLlMKICBbREVQU10g
dGVzdHMvZ2Ric3R1Yl90ZXN0LlMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3Vu
ZGlyb20uYwogIFtERVBTXSBhcmNoL2kzODYvZHJpdmVycy9uZXQvdW5kaW5ldC5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpLmMKICBbREVQU10gYXJjaC9pMzg2L2Ry
aXZlcnMvbmV0L3VuZGlvbmx5LmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3Vu
ZGlsb2FkLmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlwcmVsb2FkLmMK
ICBbREVQU10gYXJjaC94ODYvcHJlZml4L2VmaWRydnByZWZpeC5jCiAgW0RFUFNdIGFyY2gv
eDg2L3ByZWZpeC9lZmlwcmVmaXguYwogIFtERVBTXSBhcmNoL3g4Ni9pbnRlcmZhY2UvZWZp
L2VmaXg4Nl9uYXAuYwogIFtERVBTXSBhcmNoL3g4Ni9jb3JlL3g4Nl9zdHJpbmcuYwogIFtE
RVBTXSBhcmNoL3g4Ni9jb3JlL3BjaWRpcmVjdC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9oY2kv
Y29tbWFuZHMvcmVib290X2NtZC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9oY2kvY29tbWFuZHMv
cHhlX2NtZC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2Uvc3lzbGludXgvY29tYm9v
dF9yZXNvbHYuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbTMy
X2NhbGwuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbWJvb3Rf
Y2FsbC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlcGFyZW50L3B4ZXBhcmVu
dF9kaGNwLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGVwYXJlbnQvcHhlcGFy
ZW50LmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX3VkcC5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV91bmRpLmMKICBbREVQU10gYXJjaC9p
Mzg2L2ludGVyZmFjZS9weGUvcHhlX2xvYWRlci5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRl
cmZhY2UvcHhlL3B4ZV9leGl0X2hvb2suYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNl
L3B4ZS9weGVfcHJlYm9vdC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4
ZV90ZnRwLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX2ZpbGUuYwog
IFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZS9weGVfY2FsbC5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3Nfc21iaW9zLmMKICBbREVQU10gYXJjaC9p
Mzg2L2ludGVyZmFjZS9wY2Jpb3MvbWVtdG9wX3VtYWxsb2MuYwogIFtERVBTXSBhcmNoL2kz
ODYvaW50ZXJmYWNlL3BjYmlvcy9iaW9zaW50LmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVy
ZmFjZS9wY2Jpb3MvYmlvc190aW1lci5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2Uv
cGNiaW9zL3BjaWJpb3MuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3BjYmlvcy9p
bnQxMy5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3NfbmFwLmMK
ICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2NvbWJvb3QuYwogIFtERVBTXSBhcmNoL2kzODYv
aW1hZ2UvZWxmYm9vdC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbWFnZS9ib290c2VjdG9yLmMK
ICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL211bHRpYm9vdC5jCiAgW0RFUFNdIGFyY2gvaTM4
Ni9pbWFnZS9weGVfaW1hZ2UuYwogIFtERVBTXSBhcmNoL2kzODYvaW1hZ2UvYnppbWFnZS5j
CiAgW0RFUFNdIGFyY2gvaTM4Ni9pbWFnZS9uYmkuYwogIFtERVBTXSBhcmNoL2kzODYvaW1h
Z2UvY29tMzIuYwogIFtERVBTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL3BucGJpb3Mu
YwogIFtERVBTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2Jpb3NfY29uc29sZS5jCiAg
W0RFUFNdIGFyY2gvaTM4Ni9maXJtd2FyZS9wY2Jpb3MvZmFrZWU4MjAuYwogIFtERVBTXSBh
cmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2Jhc2VtZW0uYwogIFtERVBTXSBhcmNoL2kzODYv
ZmlybXdhcmUvcGNiaW9zL21lbW1hcC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9maXJtd2FyZS9w
Y2Jpb3MvaGlkZW1lbS5jCiAgW0RFUFNdIGFyY2gvaTM4Ni90cmFuc2l0aW9ucy9saWJybV9t
Z210LmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUvZHVtcHJlZ3MuYwogIFtERVBTXSBhcmNo
L2kzODYvY29yZS9udWxsdHJhcC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9jb3JlL3JlbG9jYXRl
LmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUveDg2X2lvLmMKICBbREVQU10gYXJjaC9pMzg2
L2NvcmUvdGltZXIyLmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUvcnVudGltZS5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9jb3JlL3BpYzgyNTkuYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9j
cHUuYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9nZGJtYWNoLmMKICBbREVQU10gYXJjaC9p
Mzg2L2NvcmUvdmlkZW9fc3Vici5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9jb3JlL2Jhc2VtZW1f
cGFja2V0LmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUvcmR0c2NfdGltZXIuYwogIFtERVBT
XSBjb25maWcvY29uZmlnX3JvbXByZWZpeC5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWcuYwog
IFtERVBTXSBjb25maWcvY29uZmlnX2ZjLmMKICBbREVQU10gY29uZmlnL2NvbmZpZ19ldGhl
cm5ldC5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdfbmV0ODAyMTEuYwogIFtERVBTXSBjb25m
aWcvY29uZmlnX2luZmluaWJhbmQuYwogIFtERVBTXSB1c3IvYXV0b2Jvb3QuYwogIFtERVBT
XSB1c3IvaWZtZ210LmMKICBbREVQU10gdXNyL2ZjbWdtdC5jCiAgW0RFUFNdIHVzci9kaGNw
bWdtdC5jCiAgW0RFUFNdIHVzci9weGVtZW51LmMKICBbREVQU10gdXNyL2ltZ21nbXQuYwog
IFtERVBTXSB1c3IvbG90ZXN0LmMKICBbREVQU10gdXNyL2l3bWdtdC5jCiAgW0RFUFNdIHVz
ci9yb3V0ZS5jCiAgW0RFUFNdIHVzci9wcm9tcHQuYwogIFtERVBTXSBoY2kva2V5bWFwL2tl
eW1hcF9yby5jCiAgW0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX2l0LmMKICBbREVQU10gaGNp
L2tleW1hcC9rZXltYXBfc2cuYwogIFtERVBTXSBoY2kva2V5bWFwL2tleW1hcF9lcy5jCiAg
W0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX2h1LmMKICBbREVQU10gaGNpL2tleW1hcC9rZXlt
YXBfYmcuYwogIFtERVBTXSBoY2kva2V5bWFwL2tleW1hcF9ubC5jCiAgW0RFUFNdIGhjaS9r
ZXltYXAva2V5bWFwX2N6LmMKICBbREVQU10gaGNpL2tleW1hcC9rZXltYXBfZGUuYwogIFtE
RVBTXSBoY2kva2V5bWFwL2tleW1hcF9maS5jCiAgW0RFUFNdIGhjaS9rZXltYXAva2V5bWFw
X21rLmMKICBbREVQU10gaGNpL2tleW1hcC9rZXltYXBfdWsuYwogIFtERVBTXSBoY2kva2V5
bWFwL2tleW1hcF9wbC5jCiAgW0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX2F6LmMKICBbREVQ
U10gaGNpL2tleW1hcC9rZXltYXBfZnIuYwogIFtERVBTXSBoY2kva2V5bWFwL2tleW1hcF9i
eS5jCiAgW0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX210LmMKICBbREVQU10gaGNpL2tleW1h
cC9rZXltYXBfd28uYwogIFtERVBTXSBoY2kva2V5bWFwL2tleW1hcF91YS5jCiAgW0RFUFNd
IGhjaS9rZXltYXAva2V5bWFwX2x0LmMKICBbREVQU10gaGNpL2tleW1hcC9rZXltYXBfc3Iu
YwogIFtERVBTXSBoY2kva2V5bWFwL2tleW1hcF9hbC5jCiAgW0RFUFNdIGhjaS9rZXltYXAv
a2V5bWFwX3J1LmMKICBbREVQU10gaGNpL2tleW1hcC9rZXltYXBfY2YuYwogIFtERVBTXSBo
Y2kva2V5bWFwL2tleW1hcF9uby5jCiAgW0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX2V0LmMK
ICBbREVQU10gaGNpL2tleW1hcC9rZXltYXBfdGguYwogIFtERVBTXSBoY2kva2V5bWFwL2tl
eW1hcF91cy5jCiAgW0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX2lsLmMKICBbREVQU10gaGNp
L2tleW1hcC9rZXltYXBfZ3IuYwogIFtERVBTXSBoY2kva2V5bWFwL2tleW1hcF9kay5jCiAg
W0RFUFNdIGhjaS9rZXltYXAva2V5bWFwX3B0LmMKICBbREVQU10gaGNpL211Y3Vyc2VzL3dp
ZGdldHMvZWRpdGJveC5jCiAgW0RFUFNdIGhjaS9tdWN1cnNlcy9rYi5jCiAgW0RFUFNdIGhj
aS9tdWN1cnNlcy9jb2xvdXIuYwogIFtERVBTXSBoY2kvbXVjdXJzZXMvc2xrLmMKICBbREVQ
U10gaGNpL211Y3Vyc2VzL3ByaW50LmMKICBbREVQU10gaGNpL211Y3Vyc2VzL3dpbmRvd3Mu
YwogIFtERVBTXSBoY2kvbXVjdXJzZXMvbXVjdXJzZXMuYwogIFtERVBTXSBoY2kvbXVjdXJz
ZXMvd2luaW5pdC5jCiAgW0RFUFNdIGhjaS9tdWN1cnNlcy9wcmludF9uYWR2LmMKICBbREVQ
U10gaGNpL211Y3Vyc2VzL2Fuc2lfc2NyZWVuLmMKICBbREVQU10gaGNpL211Y3Vyc2VzL3dp
bmF0dHJzLmMKICBbREVQU10gaGNpL211Y3Vyc2VzL2VkZ2luZy5jCiAgW0RFUFNdIGhjaS9t
dWN1cnNlcy9jbGVhci5jCiAgW0RFUFNdIGhjaS9tdWN1cnNlcy9hbGVydC5jCiAgW0RFUFNd
IGhjaS90dWkvc2V0dGluZ3NfdWkuYwogIFtERVBTXSBoY2kvdHVpL2xvZ2luX3VpLmMKICBb
REVQU10gaGNpL2NvbW1hbmRzL3ZsYW5fY21kLmMKICBbREVQU10gaGNpL2NvbW1hbmRzL2l3
bWdtdF9jbWQuYwogIFtERVBTXSBoY2kvY29tbWFuZHMvbG90ZXN0X2NtZC5jCiAgW0RFUFNd
IGhjaS9jb21tYW5kcy9mY21nbXRfY21kLmMKICBbREVQU10gaGNpL2NvbW1hbmRzL2ltYWdl
X2NtZC5jCiAgW0RFUFNdIGhjaS9jb21tYW5kcy9kaWdlc3RfY21kLmMKICBbREVQU10gaGNp
L2NvbW1hbmRzL3JvdXRlX2NtZC5jCiAgW0RFUFNdIGhjaS9jb21tYW5kcy9kaGNwX2NtZC5j
CiAgW0RFUFNdIGhjaS9jb21tYW5kcy90aW1lX2NtZC5jCiAgW0RFUFNdIGhjaS9jb21tYW5k
cy9hdXRvYm9vdF9jbWQuYwogIFtERVBTXSBoY2kvY29tbWFuZHMvZ2Ric3R1Yl9jbWQuYwog
IFtERVBTXSBoY2kvY29tbWFuZHMvaWZtZ210X2NtZC5jCiAgW0RFUFNdIGhjaS9jb21tYW5k
cy9zYW5ib290X2NtZC5jCiAgW0RFUFNdIGhjaS9jb21tYW5kcy9sb2dpbl9jbWQuYwogIFtE
RVBTXSBoY2kvY29tbWFuZHMvY29uZmlnX2NtZC5jCiAgW0RFUFNdIGhjaS9jb21tYW5kcy9u
dm9fY21kLmMKICBbREVQU10gaGNpL3dpcmVsZXNzX2Vycm9ycy5jCiAgW0RFUFNdIGhjaS9l
ZGl0c3RyaW5nLmMKICBbREVQU10gaGNpL3JlYWRsaW5lLmMKICBbREVQU10gaGNpL3N0cmVy
cm9yLmMKICBbREVQU10gaGNpL3NoZWxsLmMKICBbREVQU10gaGNpL2xpbnV4X2FyZ3MuYwog
IFtERVBTXSBjcnlwdG8vYXh0bHMvc2hhMS5jCiAgW0RFUFNdIGNyeXB0by9heHRscy9yc2Eu
YwogIFtERVBTXSBjcnlwdG8vYXh0bHMvYmlnaW50LmMKICBbREVQU10gY3J5cHRvL2F4dGxz
L2Flcy5jCiAgW0RFUFNdIGNyeXB0by9jYmMuYwogIFtERVBTXSBjcnlwdG8vYXh0bHNfc2hh
MS5jCiAgW0RFUFNdIGNyeXB0by9hZXNfd3JhcC5jCiAgW0RFUFNdIGNyeXB0by9heHRsc19h
ZXMuYwogIFtERVBTXSBjcnlwdG8vYXNuMS5jCiAgW0RFUFNdIGNyeXB0by9obWFjLmMKICBb
REVQU10gY3J5cHRvL2NyYzMyLmMKICBbREVQU10gY3J5cHRvL2NyYW5kb20uYwogIFtERVBT
XSBjcnlwdG8vY3J5cHRvX251bGwuYwogIFtERVBTXSBjcnlwdG8vYXJjNC5jCiAgW0RFUFNd
IGNyeXB0by9zaGExZXh0cmEuYwogIFtERVBTXSBjcnlwdG8veDUwOS5jCiAgW0RFUFNdIGNy
eXB0by9tZDUuYwogIFtERVBTXSBjcnlwdG8vY2hhcC5jCiAgW0RFUFNdIHRlc3RzL2xpbmVi
dWZfdGVzdC5jCiAgW0RFUFNdIHRlc3RzL3VtYWxsb2NfdGVzdC5jCiAgW0RFUFNdIHRlc3Rz
L2JvZm1fdGVzdC5jCiAgW0RFUFNdIHRlc3RzL3VyaV90ZXN0LmMKICBbREVQU10gdGVzdHMv
dGVzdC5jCiAgW0RFUFNdIHRlc3RzL2xpc3RfdGVzdC5jCiAgW0RFUFNdIHRlc3RzL21lbWNw
eV90ZXN0LmMKICBbREVQU10gaW50ZXJmYWNlL2JvZm0vYm9mbS5jCiAgW0RFUFNdIGludGVy
ZmFjZS9zbWJpb3Mvc21iaW9zLmMKICBbREVQU10gaW50ZXJmYWNlL3NtYmlvcy9zbWJpb3Nf
c2V0dGluZ3MuYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV9jb25zb2xlLmMKICBbREVQ
U10gaW50ZXJmYWNlL2VmaS9lZmlfc25wLmMKICBbREVQU10gaW50ZXJmYWNlL2VmaS9lZmlf
cGNpLmMKICBbREVQU10gaW50ZXJmYWNlL2VmaS9lZmlfc3RyZXJyb3IuYwogIFtERVBTXSBp
bnRlcmZhY2UvZWZpL2VmaV9ib2ZtLmMKICBbREVQU10gaW50ZXJmYWNlL2VmaS9lZmlfdW1h
bGxvYy5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZpX3N0cmluZ3MuYwogIFtERVBTXSBp
bnRlcmZhY2UvZWZpL2VmaV90aW1lci5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZpX3Nt
Ymlvcy5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZpX2RyaXZlci5jCiAgW0RFUFNdIGlu
dGVyZmFjZS9lZmkvZWZpX2luaXQuYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV91YWNj
ZXNzLmMKICBbREVQU10gaW50ZXJmYWNlL2VmaS9lZmlfaW8uYwogIFtERVBTXSBkcml2ZXJz
L2luZmluaWJhbmQvbGluZGEuYwogIFtERVBTXSBkcml2ZXJzL2luZmluaWJhbmQvaGVybW9u
LmMKICBbREVQU10gZHJpdmVycy9pbmZpbmliYW5kL2FyYmVsLmMKICBbREVQU10gZHJpdmVy
cy9pbmZpbmliYW5kL3FpYjczMjIuYwogIFtERVBTXSBkcml2ZXJzL2luZmluaWJhbmQvbGlu
ZGFfZncuYwogIFtERVBTXSBkcml2ZXJzL2JpdGJhc2gvYml0YmFzaC5jCiAgW0RFUFNdIGRy
aXZlcnMvYml0YmFzaC9zcGlfYml0LmMKICBbREVQU10gZHJpdmVycy9iaXRiYXNoL2kyY19i
aXQuYwogIFtERVBTXSBkcml2ZXJzL252cy9zcGkuYwogIFtERVBTXSBkcml2ZXJzL252cy9u
dnN2cGQuYwogIFtERVBTXSBkcml2ZXJzL252cy90aHJlZXdpcmUuYwogIFtERVBTXSBkcml2
ZXJzL252cy9udnMuYwogIFtERVBTXSBkcml2ZXJzL2Jsb2NrL2liZnQuYwogIFtERVBTXSBk
cml2ZXJzL2Jsb2NrL2F0YS5jCiAgW0RFUFNdIGRyaXZlcnMvYmxvY2svc3JwLmMKICBbREVQ
U10gZHJpdmVycy9ibG9jay9zY3NpLmMKICBbREVQU10gZHJpdmVycy9uZXQvZWZpL3NucG5l
dC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2VmaS9zbnBvbmx5LmMKICBbREVQU10gZHJpdmVy
cy9uZXQvdnhnZS92eGdlX3RyYWZmaWMuYwogIFtERVBTXSBkcml2ZXJzL25ldC92eGdlL3Z4
Z2UuYwogIFtERVBTXSBkcml2ZXJzL25ldC92eGdlL3Z4Z2VfY29uZmlnLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvdnhnZS92eGdlX21haW4uYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgv
YXRoOWsvYXRoOWtfaW5pdC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5
a19hcjkwMDNfbWFjLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2Fy
OTAwM19jYWxpYi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19lZXBy
b21fOTI4Ny5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5ay5jCiAgW0RF
UFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19jb21tb24uYwogIFtERVBTXSBkcml2
ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX2h3LmMKICBbREVQU10gZHJpdmVycy9u
ZXQvYXRoL2F0aDlrL2F0aDlrX2NhbGliLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0
aDlrL2F0aDlrX2VlcHJvbV80ay5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9h
dGg5a19lZXByb21fZGVmLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlr
X21hYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjkwMDNfZWVw
cm9tLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwMl9tYWMu
YwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX2NhbGliLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwMl9waHkuYwogIFtE
RVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfeG1pdC5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjUwMDhfcGh5LmMKICBbREVQU10gZHJpdmVycy9u
ZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwM19waHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9h
dGgvYXRoOWsvYXRoOWtfYW5pLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0
aDlrX21haW4uYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAz
X2h3LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2VlcHJvbS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19yZWN2LmMKICBbREVQU10gZHJp
dmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2h3LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRo
L2F0aDVrL2F0aDVrX3Jlc2V0LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0
aDVrLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX2F0dGFjaC5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19yZmtpbGwuYwogIFtERVBTXSBk
cml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfZ3Bpby5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0
L2F0aC9hdGg1ay9hdGg1a19waHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsv
YXRoNWtfaW5pdHZhbHMuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtf
ZG1hLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX3BjdS5jCiAgW0RF
UFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19kZXNjLmMKICBbREVQU10gZHJpdmVy
cy9uZXQvYXRoL2F0aDVrL2F0aDVrX3FjdS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9h
dGg1ay9hdGg1a19lZXByb20uYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRo
NWtfY2Fwcy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGhfaHcuYwogIFtERVBTXSBk
cml2ZXJzL25ldC9hdGgvYXRoX2tleS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGhf
bWFpbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGhfcmVnZC5jCiAgW0RFUFNdIGRy
aXZlcnMvbmV0L3J0bDgxOHgvcnRsODE4MF9ncmY1MTAxLmMKICBbREVQU10gZHJpdmVycy9u
ZXQvcnRsODE4eC9ydGw4MTgwX21heDI4MjAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9ydGw4
MTh4L3J0bDgxODUuYwogIFtERVBTXSBkcml2ZXJzL25ldC9ydGw4MTh4L3J0bDgxOHguYwog
IFtERVBTXSBkcml2ZXJzL25ldC9ydGw4MTh4L3J0bDgxODAuYwogIFtERVBTXSBkcml2ZXJz
L25ldC9ydGw4MTh4L3J0bDgxODVfcnRsODIyNS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0
bDgxOHgvcnRsODE4MF9zYTI0MDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9waGFudG9tL3Bo
YW50b20uYwogIFtERVBTXSBkcml2ZXJzL25ldC9pZ2J2Zi9pZ2J2Zl9tYWluLmMKICBbREVQ
U10gZHJpdmVycy9uZXQvaWdidmYvaWdidmZfdmYuYwogIFtERVBTXSBkcml2ZXJzL25ldC9p
Z2J2Zi9pZ2J2Zl9tYnguYwogIFtERVBTXSBkcml2ZXJzL25ldC9pZ2IvaWdiXzgyNTc1LmMK
ICBbREVQU10gZHJpdmVycy9uZXQvaWdiL2lnYi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2ln
Yi9pZ2JfbWFjLmMKICBbREVQU10gZHJpdmVycy9uZXQvaWdiL2lnYl9waHkuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9pZ2IvaWdiX21haW4uYwogIFtERVBTXSBkcml2ZXJzL25ldC9pZ2Iv
aWdiX252bS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfYXBpLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvaWdiL2lnYl9tYW5hZ2UuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAw
MGUvZTEwMDBlXzgwMDAzZXMybGFuLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDBlL2Ux
MDAwZV9pY2g4bGFuLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV9tYW5h
Z2UuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMGUvZTEwMDBlXzgyNTcxLmMKICBbREVQ
U10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2Ux
MDAwZS9lMTAwMGVfbWFjLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV9w
aHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMGUvZTEwMDBlX252bS5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L2UxMDAwZS9lMTAwMGVfbWFpbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0
L2UxMDAwL2UxMDAwXzgyNTQyLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBf
ODI1NDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAwMF9hcGkuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9lMTAwMC9lMTAwMF9tYW5hZ2UuYwogIFtERVBTXSBkcml2ZXJzL25l
dC9lMTAwMC9lMTAwMF84MjU0My5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAw
X252bS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwX21hYy5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwX3BoeS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2Ux
MDAwL2UxMDAwLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfbWFpbi5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwXzgyNTQxLmMKICBbREVQU10gZHJpdmVy
cy9uZXQvZGVwY2EuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hbWQ4MTExZS5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L2ptZS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ByaXNtMl9wY2kuYwog
IFtERVBTXSBkcml2ZXJzL25ldC8zYzU5NS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ZpYS1y
aGluZS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3c4OWM4NDAuYwogIFtERVBTXSBkcml2ZXJz
L25ldC9jczg5eDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9uZTJrX2lzYS5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L2lwb2liLmMKICBbREVQU10gZHJpdmVycy9uZXQvc2t5Mi5jCiAgW0RF
UFNdIGRyaXZlcnMvbmV0L2F0bDFlLmMKICBbREVQU10gZHJpdmVycy9uZXQvbGVnYWN5LmMK
ICBbREVQU10gZHJpdmVycy9uZXQvZWVwcm8xMDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC8z
YzUxNS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2JueDIuYwogIFtERVBTXSBkcml2ZXJzL25l
dC9kbWZlLmMKICBbREVQU10gZHJpdmVycy9uZXQvbnM4MzkwLmMKICBbREVQU10gZHJpdmVy
cy9uZXQvbnM4MzgyMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3BjbmV0MzIuYwogIFtERVBT
XSBkcml2ZXJzL25ldC8zYzUwOS1laXNhLmMKICBbREVQU10gZHJpdmVycy9uZXQvdGczLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvM2M1eDkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9zbWM5
MDAwLmMKICBbREVQU10gZHJpdmVycy9uZXQvdmlydGlvLW5ldC5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L2V0aGVyZmFicmljLmMKICBbREVQU10gZHJpdmVycy9uZXQvd2QuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9za2dlLmMKICBbREVQU10gZHJpdmVycy9uZXQvc2lzMTkwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvbmF0c2VtaS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2I0NC5j
CiAgW0RFUFNdIGRyaXZlcnMvbmV0L2ZvcmNlZGV0aC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0
L3ByaXNtMl9wbHguYwogIFtERVBTXSBkcml2ZXJzL25ldC9zdW5kYW5jZS5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L3J0bDgxMzkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lcGljMTAwLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvM2M5MHguYwogIFtERVBTXSBkcml2ZXJzL25ldC9kYXZp
Y29tLmMKICBbREVQU10gZHJpdmVycy9uZXQvM2M1MDkuYwogIFtERVBTXSBkcml2ZXJzL25l
dC8zYzUyOS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L210ZDgweC5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L215cmkxMGdlLmMKICBbREVQU10gZHJpdmVycy9uZXQvZWVwcm8uYwogIFtERVBT
XSBkcml2ZXJzL25ldC9uZS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ZpYS12ZWxvY2l0eS5j
CiAgW0RFUFNdIGRyaXZlcnMvbmV0L3BuaWMuYwogIFtERVBTXSBkcml2ZXJzL25ldC90dWxp
cC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3NpczkwMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0
L3I4MTY5LmMKICBbREVQU10gZHJpdmVycy9uZXQvdGxhbi5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0LzNjNTAzLmMKICBbREVQU10gZHJpdmVycy9idXMvcGNpLmMKICBbREVQU10gZHJpdmVy
cy9idXMvaXNhcG5wLmMKICBbREVQU10gZHJpdmVycy9idXMvdmlydGlvLXJpbmcuYwogIFtE
RVBTXSBkcml2ZXJzL2J1cy92aXJ0aW8tcGNpLmMKICBbREVQU10gZHJpdmVycy9idXMvaXNh
LmMKICBbREVQU10gZHJpdmVycy9idXMvaXNhX2lkcy5jCiAgW0RFUFNdIGRyaXZlcnMvYnVz
L3BjaWV4dHJhLmMKICBbREVQU10gZHJpdmVycy9idXMvcGNpYmFja3VwLmMKICBbREVQU10g
ZHJpdmVycy9idXMvcGNpdnBkLmMKICBbREVQU10gZHJpdmVycy9idXMvbWNhLmMKICBbREVQ
U10gZHJpdmVycy9idXMvZWlzYS5jCiAgW0RFUFNdIGltYWdlL3NjcmlwdC5jCiAgW0RFUFNd
IGltYWdlL2VsZi5jCiAgW0RFUFNdIGltYWdlL2VmaV9pbWFnZS5jCiAgW0RFUFNdIGltYWdl
L3NlZ21lbnQuYwogIFtERVBTXSBpbWFnZS9lbWJlZGRlZC5jCiAgW0RFUFNdIG5ldC84MDIx
MS9yYzgwMjExLmMKICBbREVQU10gbmV0LzgwMjExL3dwYS5jCiAgW0RFUFNdIG5ldC84MDIx
MS93cGFfY2NtcC5jCiAgW0RFUFNdIG5ldC84MDIxMS9uZXQ4MDIxMS5jCiAgW0RFUFNdIG5l
dC84MDIxMS9zZWM4MDIxMS5jCiAgW0RFUFNdIG5ldC84MDIxMS93ZXAuYwogIFtERVBTXSBu
ZXQvODAyMTEvd3BhX3Bzay5jCiAgW0RFUFNdIG5ldC84MDIxMS93cGFfdGtpcC5jCiAgW0RF
UFNdIG5ldC9pbmZpbmliYW5kL2liX21pLmMKICBbREVQU10gbmV0L2luZmluaWJhbmQvaWJf
Y20uYwogIFtERVBTXSBuZXQvaW5maW5pYmFuZC9pYl9wYWNrZXQuYwogIFtERVBTXSBuZXQv
aW5maW5pYmFuZC9pYl9zbWMuYwogIFtERVBTXSBuZXQvaW5maW5pYmFuZC9pYl9wYXRocmVj
LmMKICBbREVQU10gbmV0L2luZmluaWJhbmQvaWJfc21hLmMKICBbREVQU10gbmV0L2luZmlu
aWJhbmQvaWJfY21yYy5jCiAgW0RFUFNdIG5ldC9pbmZpbmliYW5kL2liX3NycC5jCiAgW0RF
UFNdIG5ldC9pbmZpbmliYW5kL2liX21jYXN0LmMKICBbREVQU10gbmV0L3VkcC9kaGNwLmMK
ICBbREVQU10gbmV0L3VkcC9kbnMuYwogIFtERVBTXSBuZXQvdWRwL3NsYW0uYwogIFtERVBT
XSBuZXQvdWRwL3RmdHAuYwogIFtERVBTXSBuZXQvdWRwL3N5c2xvZy5jCiAgW0RFUFNdIG5l
dC90Y3AvaHR0cHMuYwogIFtERVBTXSBuZXQvdGNwL2lzY3NpLmMKICBbREVQU10gbmV0L3Rj
cC9mdHAuYwogIFtERVBTXSBuZXQvdGNwL2h0dHAuYwogIFtERVBTXSBuZXQvZWFwb2wuYwog
IFtERVBTXSBuZXQvZmNucy5jCiAgW0RFUFNdIG5ldC9mYWtlZGhjcC5jCiAgW0RFUFNdIG5l
dC9pY21wdjYuYwogIFtERVBTXSBuZXQvbmV0ZGV2X3NldHRpbmdzLmMKICBbREVQU10gbmV0
L2ZjcC5jCiAgW0RFUFNdIG5ldC9mY29lLmMKICBbREVQU10gbmV0L2lvYnBhZC5jCiAgW0RF
UFNdIG5ldC90Y3AuYwogIFtERVBTXSBuZXQvbWlpLmMKICBbREVQU10gbmV0L2FycC5jCiAg
W0RFUFNdIG5ldC9ldGhlcm5ldC5jCiAgW0RFUFNdIG5ldC9mY2Vscy5jCiAgW0RFUFNdIG5l
dC90Y3BpcC5jCiAgW0RFUFNdIG5ldC9pcHY2LmMKICBbREVQU10gbmV0L2FvZS5jCiAgW0RF
UFNdIG5ldC9yYXJwLmMKICBbREVQU10gbmV0L3ZsYW4uYwogIFtERVBTXSBuZXQvbnVsbG5l
dC5jCiAgW0RFUFNdIG5ldC9pbmZpbmliYW5kLmMKICBbREVQU10gbmV0L2lwdjQuYwogIFtE
RVBTXSBuZXQvZXRoX3Nsb3cuYwogIFtERVBTXSBuZXQvdGxzLmMKICBbREVQU10gbmV0L25k
cC5jCiAgW0RFUFNdIG5ldC9kaGNwcGt0LmMKICBbREVQU10gbmV0L2NhY2hlZGhjcC5jCiAg
W0RFUFNdIG5ldC9uZXRkZXZpY2UuYwogIFtERVBTXSBuZXQvcmV0cnkuYwogIFtERVBTXSBu
ZXQvaWNtcC5jCiAgW0RFUFNdIG5ldC91ZHAuYwogIFtERVBTXSBuZXQvZGhjcG9wdHMuYwog
IFtERVBTXSBuZXQvZmMuYwogIFtERVBTXSBjb3JlL2N0eXBlLmMKICBbREVQU10gY29yZS9i
YXNlbmFtZS5jCiAgW0RFUFNdIGNvcmUvbnZvLmMKICBbREVQU10gY29yZS9kZWJ1Z19tZDUu
YwogIFtERVBTXSBjb3JlL2ludGVyZmFjZS5jCiAgW0RFUFNdIGNvcmUvYnRleHQuYwogIFtE
RVBTXSBjb3JlL2dldG9wdC5jCiAgW0RFUFNdIGNvcmUvZ2V0a2V5LmMKICBbREVQU10gY29y
ZS9hc3ByaW50Zi5jCiAgW0RFUFNdIGNvcmUvZ2Ric3R1Yi5jCiAgW0RFUFNdIGNvcmUvbGlu
ZWJ1Zi5jCiAgW0RFUFNdIGNvcmUvZWRkLmMKICBbREVQU10gY29yZS9pbml0LmMKICBbREVQ
U10gY29yZS9zdHJ0b3VsbC5jCiAgW0RFUFNdIGNvcmUvc2V0dGluZ3MuYwogIFtERVBTXSBj
b3JlL21haW4uYwogIFtERVBTXSBjb3JlL2Rvd25sb2FkZXIuYwogIFtERVBTXSBjb3JlL2h3
LmMKICBbREVQU10gY29yZS9iaXRvcHMuYwogIFtERVBTXSBjb3JlL3ZzcHJpbnRmLmMKICBb
REVQU10gY29yZS9udWxsX25hcC5jCiAgW0RFUFNdIGNvcmUveGZlci5jCiAgW0RFUFNdIGNv
cmUvcGNfa2JkLmMKICBbREVQU10gY29yZS9wb3NpeF9pby5jCiAgW0RFUFNdIGNvcmUvZ2Ri
dWRwLmMKICBbREVQU10gY29yZS9jb25zb2xlLmMKICBbREVQU10gY29yZS9vcGVuLmMKICBb
REVQU10gY29yZS9zZXJpYWwuYwogIFtERVBTXSBjb3JlL2FjcGkuYwogIFtERVBTXSBjb3Jl
L3VyaS5jCiAgW0RFUFNdIGNvcmUvYmxvY2tkZXYuYwogIFtERVBTXSBjb3JlL2NwaW8uYwog
IFtERVBTXSBjb3JlL3RpbWVyLmMKICBbREVQU10gY29yZS9taXNjLmMKICBbREVQU10gY29y
ZS9jd3VyaS5jCiAgW0RFUFNdIGNvcmUvaTgyMzY1LmMKICBbREVQU10gY29yZS9lcnJuby5j
CiAgW0RFUFNdIGNvcmUvam9iLmMKICBbREVQU10gY29yZS9wcm9jZXNzLmMKICBbREVQU10g
Y29yZS9nZGJzZXJpYWwuYwogIFtERVBTXSBjb3JlL2RlYnVnLmMKICBbREVQU10gY29yZS9m
bnJlYy5jCiAgW0RFUFNdIGNvcmUvbWFsbG9jLmMKICBbREVQU10gY29yZS9hbnNpZXNjLmMK
ICBbREVQU10gY29yZS9kZXZpY2UuYwogIFtERVBTXSBjb3JlL2Jhc2U2NC5jCiAgW0RFUFNd
IGNvcmUvYml0bWFwLmMKICBbREVQU10gY29yZS9leGVjLmMKICBbREVQU10gY29yZS9tb25v
am9iLmMKICBbREVQU10gY29yZS9udWxsX3NhbmJvb3QuYwogIFtERVBTXSBjb3JlL3N0cmlu
Z2V4dHJhLmMKICBbREVQU10gY29yZS9yYW5kb20uYwogIFtERVBTXSBjb3JlL3BhcnNlb3B0
LmMKICBbREVQU10gY29yZS9yZXNvbHYuYwogIFtERVBTXSBjb3JlL2lvYnVmLmMKICBbREVQ
U10gY29yZS9pbWFnZS5jCiAgW0RFUFNdIGNvcmUvc3RyaW5nLmMKICBbREVQU10gY29yZS9i
YXNlMTYuYwogIFtERVBTXSBjb3JlL2Fzc2VydC5jCiAgW0RFUFNdIGNvcmUvcmVmY250LmMK
ICBbREVQU10gY29yZS91dWlkLmMKICBbREVQU10gY29yZS9zZXJpYWxfY29uc29sZS5jCiAg
W0RFUFNdIGNvcmUvcGNtY2lhLmMKICBbREVQU10gbGliZ2NjL19fdW1vZGRpMy5jCiAgW0RF
UFNdIGxpYmdjYy9fX3VkaXZkaTMuYwogIFtERVBTXSBsaWJnY2MvX19tb2RkaTMuYwogIFtE
RVBTXSBsaWJnY2MvbWVtY3B5LmMKICBbREVQU10gbGliZ2NjL2ljYy5jCiAgW0RFUFNdIGxp
YmdjYy9fX2RpdmRpMy5jCiAgW0RFUFNdIGxpYmdjYy9fX3VkaXZtb2RkaTQuYwpnbWFrZVs3
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9l
dGhlcmJvb3QvaXB4ZS9zcmMnCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9ldGhlcmJvb3QvaXB4ZS9zcmMnCiAgW0RFUFNd
IGFyY2gvaTM4Ni9wcmVmaXgvcm9tcHJlZml4LlMKICBbREVQU10gYXJjaC9pMzg2L3ByZWZp
eC9tcm9tcHJlZml4LlMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlyb20u
YwogIFtERVBTXSBhcmNoL2kzODYvZHJpdmVycy9uZXQvdW5kaW5ldC5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpLmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMv
bmV0L3VuZGlvbmx5LmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlsb2Fk
LmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlwcmVsb2FkLmMKICBbREVQ
U10gYXJjaC94ODYvaW50ZXJmYWNlL2VmaS9lZml4ODZfbmFwLmMKICBbREVQU10gYXJjaC94
ODYvY29yZS9wY2lkaXJlY3QuYwogIFtERVBTXSBhcmNoL2kzODYvaGNpL2NvbW1hbmRzL3Jl
Ym9vdF9jbWQuYwogIFtERVBTXSBhcmNoL2kzODYvaGNpL2NvbW1hbmRzL3B4ZV9jbWQuYwog
IFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbWJvb3RfcmVzb2x2LmMK
ICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb20zMl9jYWxsLmMKICBb
REVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb21ib290X2NhbGwuYwogIFtE
RVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZXBhcmVudC9weGVwYXJlbnRfZGhjcC5jCiAg
W0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlcGFyZW50L3B4ZXBhcmVudC5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV91ZHAuYwogIFtERVBTXSBhcmNoL2kz
ODYvaW50ZXJmYWNlL3B4ZS9weGVfdW5kaS5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZh
Y2UvcHhlL3B4ZV9sb2FkZXIuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZS9w
eGVfZXhpdF9ob29rLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX3By
ZWJvb3QuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZS9weGVfdGZ0cC5jCiAg
W0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9maWxlLmMKICBbREVQU10gYXJj
aC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX2NhbGwuYwogIFtERVBTXSBhcmNoL2kzODYvaW50
ZXJmYWNlL3BjYmlvcy9iaW9zX3NtYmlvcy5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZh
Y2UvcGNiaW9zL21lbXRvcF91bWFsbG9jLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFj
ZS9wY2Jpb3MvYmlvc2ludC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9z
L2Jpb3NfdGltZXIuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3BjYmlvcy9wY2li
aW9zLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9wY2Jpb3MvaW50MTMuYwogIFtE
RVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3BjYmlvcy9iaW9zX25hcC5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9pbWFnZS9jb21ib290LmMKICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2VsZmJv
b3QuYwogIFtERVBTXSBhcmNoL2kzODYvaW1hZ2UvYm9vdHNlY3Rvci5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9pbWFnZS9tdWx0aWJvb3QuYwogIFtERVBTXSBhcmNoL2kzODYvaW1hZ2UvcHhl
X2ltYWdlLmMKICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2J6aW1hZ2UuYwogIFtERVBTXSBh
cmNoL2kzODYvaW1hZ2UvbmJpLmMKICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2NvbTMyLmMK
ICBbREVQU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9wbnBiaW9zLmMKICBbREVQU10g
YXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9iaW9zX2NvbnNvbGUuYwogIFtERVBTXSBhcmNo
L2kzODYvZmlybXdhcmUvcGNiaW9zL2Zha2VlODIwLmMKICBbREVQU10gYXJjaC9pMzg2L2Zp
cm13YXJlL3BjYmlvcy9iYXNlbWVtLmMKICBbREVQU10gYXJjaC9pMzg2L2Zpcm13YXJlL3Bj
Ymlvcy9tZW1tYXAuYwogIFtERVBTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2hpZGVt
ZW0uYwogIFtERVBTXSBhcmNoL2kzODYvdHJhbnNpdGlvbnMvbGlicm1fbWdtdC5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9jb3JlL2R1bXByZWdzLmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUv
cmVsb2NhdGUuYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS94ODZfaW8uYwogIFtERVBTXSBh
cmNoL2kzODYvY29yZS90aW1lcjIuYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9ydW50aW1l
LmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUvcGljODI1OS5jCiAgW0RFUFNdIGFyY2gvaTM4
Ni9jb3JlL2dkYm1hY2guYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS92aWRlb19zdWJyLmMK
ICBbREVQU10gYXJjaC9pMzg2L2NvcmUvYmFzZW1lbV9wYWNrZXQuYwogIFtERVBTXSBhcmNo
L2kzODYvY29yZS9yZHRzY190aW1lci5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdfcm9tcHJl
Zml4LmMKICBbREVQU10gY29uZmlnL2NvbmZpZy5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdf
ZmMuYwogIFtERVBTXSBjb25maWcvY29uZmlnX2V0aGVybmV0LmMKICBbREVQU10gY29uZmln
L2NvbmZpZ19uZXQ4MDIxMS5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdfaW5maW5pYmFuZC5j
CiAgW0RFUFNdIHVzci9hdXRvYm9vdC5jCiAgW0RFUFNdIHVzci9pZm1nbXQuYwogIFtERVBT
XSB1c3IvZGhjcG1nbXQuYwogIFtERVBTXSB1c3IvcHhlbWVudS5jCiAgW0RFUFNdIHVzci9p
bWdtZ210LmMKICBbREVQU10gdXNyL3Byb21wdC5jCiAgW0RFUFNdIGhjaS9tdWN1cnNlcy9r
Yi5jCiAgW0RFUFNdIGhjaS90dWkvc2V0dGluZ3NfdWkuYwogIFtERVBTXSBoY2kvY29tbWFu
ZHMvaW1hZ2VfY21kLmMKICBbREVQU10gaGNpL2NvbW1hbmRzL2RpZ2VzdF9jbWQuYwogIFtE
RVBTXSBoY2kvY29tbWFuZHMvdGltZV9jbWQuYwogIFtERVBTXSBoY2kvY29tbWFuZHMvc2Fu
Ym9vdF9jbWQuYwogIFtERVBTXSB0ZXN0cy91bWFsbG9jX3Rlc3QuYwogIFtERVBTXSB0ZXN0
cy9ib2ZtX3Rlc3QuYwogIFtERVBTXSBpbnRlcmZhY2UvYm9mbS9ib2ZtLmMKICBbREVQU10g
aW50ZXJmYWNlL3NtYmlvcy9zbWJpb3MuYwogIFtERVBTXSBpbnRlcmZhY2Uvc21iaW9zL3Nt
Ymlvc19zZXR0aW5ncy5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZpX3NucC5jCiAgW0RF
UFNdIGludGVyZmFjZS9lZmkvZWZpX3BjaS5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZp
X2JvZm0uYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV91bWFsbG9jLmMKICBbREVQU10g
aW50ZXJmYWNlL2VmaS9lZmlfdGltZXIuYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV9z
bWJpb3MuYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV9kcml2ZXIuYwogIFtERVBTXSBp
bnRlcmZhY2UvZWZpL2VmaV91YWNjZXNzLmMKICBbREVQU10gaW50ZXJmYWNlL2VmaS9lZmlf
aW8uYwogIFtERVBTXSBkcml2ZXJzL2luZmluaWJhbmQvbGluZGEuYwogIFtERVBTXSBkcml2
ZXJzL2luZmluaWJhbmQvaGVybW9uLmMKICBbREVQU10gZHJpdmVycy9pbmZpbmliYW5kL2Fy
YmVsLmMKICBbREVQU10gZHJpdmVycy9pbmZpbmliYW5kL3FpYjczMjIuYwogIFtERVBTXSBk
cml2ZXJzL2JpdGJhc2gvc3BpX2JpdC5jCiAgW0RFUFNdIGRyaXZlcnMvYml0YmFzaC9pMmNf
Yml0LmMKICBbREVQU10gZHJpdmVycy9udnMvc3BpLmMKICBbREVQU10gZHJpdmVycy9udnMv
bnZzdnBkLmMKICBbREVQU10gZHJpdmVycy9udnMvdGhyZWV3aXJlLmMKICBbREVQU10gZHJp
dmVycy9ibG9jay9pYmZ0LmMKICBbREVQU10gZHJpdmVycy9ibG9jay9hdGEuYwogIFtERVBT
XSBkcml2ZXJzL2Jsb2NrL3NycC5jCiAgW0RFUFNdIGRyaXZlcnMvYmxvY2svc2NzaS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2VmaS9zbnBuZXQuYwogIFtERVBTXSBkcml2ZXJzL25ldC92
eGdlL3Z4Z2VfdHJhZmZpYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3Z4Z2UvdnhnZS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L3Z4Z2UvdnhnZV9jb25maWcuYwogIFtERVBTXSBkcml2ZXJz
L25ldC92eGdlL3Z4Z2VfbWFpbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9h
dGg5a19pbml0LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAw
M19tYWMuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAzX2Nh
bGliLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2VlcHJvbV85Mjg3
LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrLmMKICBbREVQU10gZHJp
dmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2NvbW1vbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0
L2F0aC9hdGg5ay9hdGg5a19hcjkwMDJfaHcuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgv
YXRoOWsvYXRoOWtfY2FsaWIuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRo
OWtfZWVwcm9tXzRrLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2Vl
cHJvbV9kZWYuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfbWFjLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwM19lZXByb20uYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX21hYy5jCiAgW0RF
UFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjkwMDJfY2FsaWIuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX3BoeS5jCiAgW0RFUFNdIGRy
aXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a194bWl0LmMKICBbREVQU10gZHJpdmVycy9uZXQv
YXRoL2F0aDlrL2F0aDlrX2FyNTAwOF9waHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgv
YXRoOWsvYXRoOWtfYXI5MDAzX3BoeS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5
ay9hdGg5a19hbmkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfbWFp
bi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjkwMDNfaHcuYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfZWVwcm9tLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX3JlY3YuYwogIFtERVBTXSBkcml2ZXJzL25l
dC9hdGgvYXRoOWsvYXRoOWtfaHcuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsv
YXRoNWtfcmVzZXQuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWsuYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfYXR0YWNoLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX3Jma2lsbC5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2F0aC9hdGg1ay9hdGg1a19ncGlvLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0
aDVrL2F0aDVrX3BoeS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19p
bml0dmFscy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19kbWEuYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfcGN1LmMKICBbREVQU10gZHJp
dmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX2Rlc2MuYwogIFtERVBTXSBkcml2ZXJzL25ldC9h
dGgvYXRoNWsvYXRoNWtfcWN1LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0
aDVrX2VlcHJvbS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19jYXBz
LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aF9ody5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2F0aC9hdGhfa2V5LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aF9tYWluLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aF9yZWdkLmMKICBbREVQU10gZHJpdmVycy9u
ZXQvcnRsODE4eC9ydGw4MTgwX2dyZjUxMDEuYwogIFtERVBTXSBkcml2ZXJzL25ldC9ydGw4
MTh4L3J0bDgxODBfbWF4MjgyMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0bDgxOHgvcnRs
ODE4NS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0bDgxOHgvcnRsODE4eC5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L3J0bDgxOHgvcnRsODE4MC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0
bDgxOHgvcnRsODE4NV9ydGw4MjI1LmMKICBbREVQU10gZHJpdmVycy9uZXQvcnRsODE4eC9y
dGw4MTgwX3NhMjQwMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3BoYW50b20vcGhhbnRvbS5j
CiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYnZmL2lnYnZmX21haW4uYwogIFtERVBTXSBkcml2
ZXJzL25ldC9pZ2J2Zi9pZ2J2Zl92Zi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYnZmL2ln
YnZmX21ieC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfODI1NzUuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9pZ2IvaWdiX21hYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9p
Z2JfcGh5LmMKICBbREVQU10gZHJpdmVycy9uZXQvaWdiL2lnYl9tYWluLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvaWdiL2lnYl9udm0uYwogIFtERVBTXSBkcml2ZXJzL25ldC9pZ2IvaWdi
X2FwaS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfbWFuYWdlLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV84MDAwM2VzMmxhbi5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L2UxMDAwZS9lMTAwMGVfaWNoOGxhbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2Ux
MDAwZS9lMTAwMGVfODI1NzEuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMGUvZTEwMDBl
X21hYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwZS9lMTAwMGVfcGh5LmMKICBbREVQ
U10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV9udm0uYwogIFtERVBTXSBkcml2ZXJzL25l
dC9lMTAwMGUvZTEwMDBlX21haW4uYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAw
MF84MjU0Mi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwXzgyNTQwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfYXBpLmMKICBbREVQU10gZHJpdmVycy9u
ZXQvZTEwMDAvZTEwMDBfODI1NDMuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAw
MF9udm0uYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAwMF9tYWMuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9lMTAwMC9lMTAwMF9waHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9l
MTAwMC9lMTAwMF9tYWluLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfODI1
NDEuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hbWQ4MTExZS5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2ptZS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ByaXNtMl9wY2kuYwogIFtERVBTXSBk
cml2ZXJzL25ldC8zYzU5NS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ZpYS1yaGluZS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L3c4OWM4NDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9jczg5
eDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9uZTJrX2lzYS5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2lwb2liLmMKICBbREVQU10gZHJpdmVycy9uZXQvc2t5Mi5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L2F0bDFlLmMKICBbREVQU10gZHJpdmVycy9uZXQvbGVnYWN5LmMKICBbREVQU10g
ZHJpdmVycy9uZXQvZWVwcm8xMDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC8zYzUxNS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2JueDIuYwogIFtERVBTXSBkcml2ZXJzL25ldC9kbWZlLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvbnM4MzkwLmMKICBbREVQU10gZHJpdmVycy9uZXQvbnM4
MzgyMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3BjbmV0MzIuYwogIFtERVBTXSBkcml2ZXJz
L25ldC8zYzUwOS1laXNhLmMKICBbREVQU10gZHJpdmVycy9uZXQvdGczLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvM2M1eDkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9zbWM5MDAwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvdmlydGlvLW5ldC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2V0
aGVyZmFicmljLmMKICBbREVQU10gZHJpdmVycy9uZXQvc2tnZS5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L3NpczE5MC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L25hdHNlbWkuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9iNDQuYwogIFtERVBTXSBkcml2ZXJzL25ldC9mb3JjZWRldGguYwog
IFtERVBTXSBkcml2ZXJzL25ldC9wcmlzbTJfcGx4LmMKICBbREVQU10gZHJpdmVycy9uZXQv
c3VuZGFuY2UuYwogIFtERVBTXSBkcml2ZXJzL25ldC9ydGw4MTM5LmMKICBbREVQU10gZHJp
dmVycy9uZXQvZXBpYzEwMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0LzNjOTB4LmMKICBbREVQ
U10gZHJpdmVycy9uZXQvZGF2aWNvbS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0LzNjNTA5LmMK
ICBbREVQU10gZHJpdmVycy9uZXQvM2M1MjkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9tdGQ4
MHguYwogIFtERVBTXSBkcml2ZXJzL25ldC9teXJpMTBnZS5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2VlcHJvLmMKICBbREVQU10gZHJpdmVycy9uZXQvdmlhLXZlbG9jaXR5LmMKICBbREVQ
U10gZHJpdmVycy9uZXQvcG5pYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3R1bGlwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvc2lzOTAwLmMKICBbREVQU10gZHJpdmVycy9uZXQvcjgxNjku
YwogIFtERVBTXSBkcml2ZXJzL25ldC90bGFuLmMKICBbREVQU10gZHJpdmVycy9idXMvcGNp
LmMKICBbREVQU10gZHJpdmVycy9idXMvaXNhcG5wLmMKICBbREVQU10gZHJpdmVycy9idXMv
dmlydGlvLXJpbmcuYwogIFtERVBTXSBkcml2ZXJzL2J1cy92aXJ0aW8tcGNpLmMKICBbREVQ
U10gZHJpdmVycy9idXMvaXNhLmMKICBbREVQU10gZHJpdmVycy9idXMvcGNpZXh0cmEuYwog
IFtERVBTXSBkcml2ZXJzL2J1cy9wY2liYWNrdXAuYwogIFtERVBTXSBkcml2ZXJzL2J1cy9w
Y2l2cGQuYwogIFtERVBTXSBkcml2ZXJzL2J1cy9tY2EuYwogIFtERVBTXSBkcml2ZXJzL2J1
cy9laXNhLmMKICBbREVQU10gaW1hZ2Uvc2NyaXB0LmMKICBbREVQU10gaW1hZ2UvZWxmLmMK
ICBbREVQU10gaW1hZ2UvZWZpX2ltYWdlLmMKICBbREVQU10gaW1hZ2Uvc2VnbWVudC5jCiAg
W0RFUFNdIGltYWdlL2VtYmVkZGVkLmMKICBbREVQU10gbmV0LzgwMjExL25ldDgwMjExLmMK
ICBbREVQU10gbmV0L2luZmluaWJhbmQvaWJfbWkuYwogIFtERVBTXSBuZXQvaW5maW5pYmFu
ZC9pYl9zbWMuYwogIFtERVBTXSBuZXQvaW5maW5pYmFuZC9pYl9zbWEuYwogIFtERVBTXSBu
ZXQvaW5maW5pYmFuZC9pYl9zcnAuYwogIFtERVBTXSBuZXQvdWRwL2RoY3AuYwogIFtERVBT
XSBuZXQvdWRwL2Rucy5jCiAgW0RFUFNdIG5ldC91ZHAvc2xhbS5jCiAgW0RFUFNdIG5ldC91
ZHAvdGZ0cC5jCiAgW0RFUFNdIG5ldC91ZHAvc3lzbG9nLmMKICBbREVQU10gbmV0L3RjcC9o
dHRwcy5jCiAgW0RFUFNdIG5ldC90Y3AvaXNjc2kuYwogIFtERVBTXSBuZXQvdGNwL2Z0cC5j
CiAgW0RFUFNdIG5ldC90Y3AvaHR0cC5jCiAgW0RFUFNdIG5ldC9mYWtlZGhjcC5jCiAgW0RF
UFNdIG5ldC9uZXRkZXZfc2V0dGluZ3MuYwogIFtERVBTXSBuZXQvZmNwLmMKICBbREVQU10g
bmV0L2Zjb2UuYwogIFtERVBTXSBuZXQvdGNwLmMKICBbREVQU10gbmV0L2FvZS5jCiAgW0RF
UFNdIG5ldC92bGFuLmMKICBbREVQU10gbmV0L2luZmluaWJhbmQuYwogIFtERVBTXSBuZXQv
aXB2NC5jCiAgW0RFUFNdIG5ldC9kaGNwcGt0LmMKICBbREVQU10gbmV0L2NhY2hlZGhjcC5j
CiAgW0RFUFNdIG5ldC9uZXRkZXZpY2UuYwogIFtERVBTXSBuZXQvcmV0cnkuYwogIFtERVBT
XSBuZXQvZGhjcG9wdHMuYwogIFtERVBTXSBuZXQvZmMuYwogIFtERVBTXSBjb3JlL252by5j
CiAgW0RFUFNdIGNvcmUvZ2V0a2V5LmMKICBbREVQU10gY29yZS9zZXR0aW5ncy5jCiAgW0RF
UFNdIGNvcmUvbWFpbi5jCiAgW0RFUFNdIGNvcmUvZG93bmxvYWRlci5jCiAgW0RFUFNdIGNv
cmUvbnVsbF9uYXAuYwogIFtERVBTXSBjb3JlL3BjX2tiZC5jCiAgW0RFUFNdIGNvcmUvcG9z
aXhfaW8uYwogIFtERVBTXSBjb3JlL2dkYnVkcC5jCiAgW0RFUFNdIGNvcmUvY29uc29sZS5j
CiAgW0RFUFNdIGNvcmUvc2VyaWFsLmMKICBbREVQU10gY29yZS9ibG9ja2Rldi5jCiAgW0RF
UFNdIGNvcmUvdGltZXIuYwogIFtERVBTXSBjb3JlL21pc2MuYwogIFtERVBTXSBjb3JlL2Rl
YnVnLmMKICBbREVQU10gY29yZS9mbnJlYy5jCiAgW0RFUFNdIGNvcmUvbWFsbG9jLmMKICBb
REVQU10gY29yZS9leGVjLmMKICBbREVQU10gY29yZS9tb25vam9iLmMKICBbREVQU10gY29y
ZS9udWxsX3NhbmJvb3QuYwogIFtERVBTXSBjb3JlL3JhbmRvbS5jCiAgW0RFUFNdIGNvcmUv
cGFyc2VvcHQuYwogIFtERVBTXSBjb3JlL2ltYWdlLmMKZ21ha2VbN106IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRoZXJib290L2lweGUv
c3JjJwpnbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvZmlybXdhcmUvZXRoZXJib290L2lweGUvc3JjJwogIFtCVUlMRF0gYmluL19fdWRpdm1v
ZGRpNC5vCiAgW0JVSUxEXSBiaW4vX19kaXZkaTMubwogIFtCVUlMRF0gYmluL2ljYy5vCiAg
W0JVSUxEXSBiaW4vbWVtY3B5Lm8KICBbQlVJTERdIGJpbi9fX21vZGRpMy5vCiAgW0JVSUxE
XSBiaW4vX191ZGl2ZGkzLm8KICBbQlVJTERdIGJpbi9fX3Vtb2RkaTMubwogIFtCVUlMRF0g
YmluL3BjbWNpYS5vCiAgW0JVSUxEXSBiaW4vc2VyaWFsX2NvbnNvbGUubwogIFtCVUlMRF0g
YmluL3V1aWQubwogIFtCVUlMRF0gYmluL3JlZmNudC5vCiAgW0JVSUxEXSBiaW4vYXNzZXJ0
Lm8KICBbQlVJTERdIGJpbi9iYXNlMTYubwogIFtCVUlMRF0gYmluL3N0cmluZy5vCiAgW0JV
SUxEXSBiaW4vaW1hZ2UubwogIFtCVUlMRF0gYmluL2lvYnVmLm8KICBbQlVJTERdIGJpbi9y
ZXNvbHYubwogIFtCVUlMRF0gYmluL3BhcnNlb3B0Lm8KICBbQlVJTERdIGJpbi9yYW5kb20u
bwogIFtCVUlMRF0gYmluL3N0cmluZ2V4dHJhLm8KICBbQlVJTERdIGJpbi9udWxsX3NhbmJv
b3QubwogIFtCVUlMRF0gYmluL21vbm9qb2IubwogIFtCVUlMRF0gYmluL2V4ZWMubwogIFtC
VUlMRF0gYmluL2JpdG1hcC5vCiAgW0JVSUxEXSBiaW4vYmFzZTY0Lm8KICBbQlVJTERdIGJp
bi9kZXZpY2UubwogIFtCVUlMRF0gYmluL2Fuc2llc2MubwogIFtCVUlMRF0gYmluL21hbGxv
Yy5vCiAgW0JVSUxEXSBiaW4vZm5yZWMubwogIFtCVUlMRF0gYmluL2RlYnVnLm8KICBbQlVJ
TERdIGJpbi9nZGJzZXJpYWwubwogIFtCVUlMRF0gYmluL3Byb2Nlc3MubwogIFtCVUlMRF0g
YmluL2pvYi5vCiAgW0JVSUxEXSBiaW4vZXJybm8ubwogIFtCVUlMRF0gYmluL2k4MjM2NS5v
CiAgW0JVSUxEXSBiaW4vY3d1cmkubwogIFtCVUlMRF0gYmluL21pc2MubwogIFtCVUlMRF0g
YmluL3RpbWVyLm8KICBbQlVJTERdIGJpbi9jcGlvLm8KICBbQlVJTERdIGJpbi9ibG9ja2Rl
di5vCiAgW0JVSUxEXSBiaW4vdXJpLm8KICBbQlVJTERdIGJpbi9hY3BpLm8KICBbQlVJTERd
IGJpbi9zZXJpYWwubwogIFtCVUlMRF0gYmluL29wZW4ubwogIFtCVUlMRF0gYmluL2NvbnNv
bGUubwogIFtCVUlMRF0gYmluL2dkYnVkcC5vCiAgW0JVSUxEXSBiaW4vcG9zaXhfaW8ubwog
IFtCVUlMRF0gYmluL3BjX2tiZC5vCiAgW0JVSUxEXSBiaW4veGZlci5vCiAgW0JVSUxEXSBi
aW4vbnVsbF9uYXAubwogIFtCVUlMRF0gYmluL3ZzcHJpbnRmLm8KICBbQlVJTERdIGJpbi9i
aXRvcHMubwogIFtCVUlMRF0gYmluL2h3Lm8KICBbQlVJTERdIGJpbi9kb3dubG9hZGVyLm8K
ICBbQlVJTERdIGJpbi9tYWluLm8KICBbQlVJTERdIGJpbi9zZXR0aW5ncy5vCiAgW0JVSUxE
XSBiaW4vc3RydG91bGwubwogIFtCVUlMRF0gYmluL2luaXQubwogIFtCVUlMRF0gYmluL2Vk
ZC5vCiAgW0JVSUxEXSBiaW4vbGluZWJ1Zi5vCiAgW0JVSUxEXSBiaW4vZ2Ric3R1Yi5vCiAg
W0JVSUxEXSBiaW4vYXNwcmludGYubwogIFtCVUlMRF0gYmluL2dldGtleS5vCiAgW0JVSUxE
XSBiaW4vZ2V0b3B0Lm8KICBbQlVJTERdIGJpbi9idGV4dC5vCiAgW0JVSUxEXSBiaW4vaW50
ZXJmYWNlLm8KICBbQlVJTERdIGJpbi9kZWJ1Z19tZDUubwogIFtCVUlMRF0gYmluL252by5v
CiAgW0JVSUxEXSBiaW4vYmFzZW5hbWUubwogIFtCVUlMRF0gYmluL2N0eXBlLm8KICBbQlVJ
TERdIGJpbi9mYy5vCiAgW0JVSUxEXSBiaW4vZGhjcG9wdHMubwogIFtCVUlMRF0gYmluL3Vk
cC5vCiAgW0JVSUxEXSBiaW4vaWNtcC5vCiAgW0JVSUxEXSBiaW4vcmV0cnkubwogIFtCVUlM
RF0gYmluL25ldGRldmljZS5vCiAgW0JVSUxEXSBiaW4vY2FjaGVkaGNwLm8KICBbQlVJTERd
IGJpbi9kaGNwcGt0Lm8KICBbQlVJTERdIGJpbi9uZHAubwogIFtCVUlMRF0gYmluL3Rscy5v
CiAgW0JVSUxEXSBiaW4vZXRoX3Nsb3cubwogIFtCVUlMRF0gYmluL2lwdjQubwogIFtCVUlM
RF0gYmluL2luZmluaWJhbmQubwogIFtCVUlMRF0gYmluL251bGxuZXQubwogIFtCVUlMRF0g
YmluL3ZsYW4ubwogIFtCVUlMRF0gYmluL3JhcnAubwogIFtCVUlMRF0gYmluL2FvZS5vCiAg
W0JVSUxEXSBiaW4vaXB2Ni5vCiAgW0JVSUxEXSBiaW4vdGNwaXAubwogIFtCVUlMRF0gYmlu
L2ZjZWxzLm8KICBbQlVJTERdIGJpbi9ldGhlcm5ldC5vCiAgW0JVSUxEXSBiaW4vYXJwLm8K
ICBbQlVJTERdIGJpbi9taWkubwogIFtCVUlMRF0gYmluL3RjcC5vCiAgW0JVSUxEXSBiaW4v
aW9icGFkLm8KICBbQlVJTERdIGJpbi9mY29lLm8KICBbQlVJTERdIGJpbi9mY3AubwogIFtC
VUlMRF0gYmluL25ldGRldl9zZXR0aW5ncy5vCiAgW0JVSUxEXSBiaW4vaWNtcHY2Lm8KICBb
QlVJTERdIGJpbi9mYWtlZGhjcC5vCiAgW0JVSUxEXSBiaW4vZmNucy5vCiAgW0JVSUxEXSBi
aW4vZWFwb2wubwogIFtCVUlMRF0gYmluL2h0dHAubwogIFtCVUlMRF0gYmluL2Z0cC5vCiAg
W0JVSUxEXSBiaW4vaXNjc2kubwogIFtCVUlMRF0gYmluL2h0dHBzLm8KICBbQlVJTERdIGJp
bi9zeXNsb2cubwogIFtCVUlMRF0gYmluL3RmdHAubwogIFtCVUlMRF0gYmluL3NsYW0ubwog
IFtCVUlMRF0gYmluL2Rucy5vCiAgW0JVSUxEXSBiaW4vZGhjcC5vCiAgW0JVSUxEXSBiaW4v
aWJfbWNhc3QubwogIFtCVUlMRF0gYmluL2liX3NycC5vCiAgW0JVSUxEXSBiaW4vaWJfY21y
Yy5vCiAgW0JVSUxEXSBiaW4vaWJfc21hLm8KICBbQlVJTERdIGJpbi9pYl9wYXRocmVjLm8K
ICBbQlVJTERdIGJpbi9pYl9zbWMubwogIFtCVUlMRF0gYmluL2liX3BhY2tldC5vCiAgW0JV
SUxEXSBiaW4vaWJfY20ubwogIFtCVUlMRF0gYmluL2liX21pLm8KICBbQlVJTERdIGJpbi93
cGFfdGtpcC5vCiAgW0JVSUxEXSBiaW4vd3BhX3Bzay5vCiAgW0JVSUxEXSBiaW4vd2VwLm8K
ICBbQlVJTERdIGJpbi9zZWM4MDIxMS5vCiAgW0JVSUxEXSBiaW4vbmV0ODAyMTEubwogIFtC
VUlMRF0gYmluL3dwYV9jY21wLm8KICBbQlVJTERdIGJpbi93cGEubwogIFtCVUlMRF0gYmlu
L3JjODAyMTEubwogIFtCVUlMRF0gYmluL2VtYmVkZGVkLm8KICBbQlVJTERdIGJpbi9zZWdt
ZW50Lm8KICBbQlVJTERdIGJpbi9lZmlfaW1hZ2UubwogIFtCVUlMRF0gYmluL2VsZi5vCiAg
W0JVSUxEXSBiaW4vc2NyaXB0Lm8KICBbQlVJTERdIGJpbi9laXNhLm8KICBbQlVJTERdIGJp
bi9tY2EubwogIFtCVUlMRF0gYmluL3BjaXZwZC5vCiAgW0JVSUxEXSBiaW4vcGNpYmFja3Vw
Lm8KICBbQlVJTERdIGJpbi9wY2lleHRyYS5vCiAgW0JVSUxEXSBiaW4vaXNhX2lkcy5vCiAg
W0JVSUxEXSBiaW4vaXNhLm8KICBbQlVJTERdIGJpbi92aXJ0aW8tcGNpLm8KICBbQlVJTERd
IGJpbi92aXJ0aW8tcmluZy5vCiAgW0JVSUxEXSBiaW4vaXNhcG5wLm8KICBbQlVJTERdIGJp
bi9wY2kubwogIFtCVUlMRF0gYmluLzNjNTAzLm8KICBbQlVJTERdIGJpbi90bGFuLm8KICBb
QlVJTERdIGJpbi9yODE2OS5vCiAgW0JVSUxEXSBiaW4vc2lzOTAwLm8KICBbQlVJTERdIGJp
bi90dWxpcC5vCiAgW0JVSUxEXSBiaW4vcG5pYy5vCiAgW0JVSUxEXSBiaW4vdmlhLXZlbG9j
aXR5Lm8KICBbQlVJTERdIGJpbi9uZS5vCiAgW0JVSUxEXSBiaW4vZWVwcm8ubwogIFtCVUlM
RF0gYmluL215cmkxMGdlLm8KICBbQlVJTERdIGJpbi9tdGQ4MHgubwogIFtCVUlMRF0gYmlu
LzNjNTI5Lm8KICBbQlVJTERdIGJpbi8zYzUwOS5vCiAgW0JVSUxEXSBiaW4vZGF2aWNvbS5v
CiAgW0JVSUxEXSBiaW4vM2M5MHgubwogIFtCVUlMRF0gYmluL2VwaWMxMDAubwogIFtCVUlM
RF0gYmluL3J0bDgxMzkubwogIFtCVUlMRF0gYmluL3N1bmRhbmNlLm8KICBbQlVJTERdIGJp
bi9wcmlzbTJfcGx4Lm8KICBbQlVJTERdIGJpbi9mb3JjZWRldGgubwogIFtCVUlMRF0gYmlu
L2I0NC5vCiAgW0JVSUxEXSBiaW4vbmF0c2VtaS5vCiAgW0JVSUxEXSBiaW4vc2lzMTkwLm8K
ICBbQlVJTERdIGJpbi9za2dlLm8KICBbQlVJTERdIGJpbi93ZC5vCiAgW0JVSUxEXSBiaW4v
ZXRoZXJmYWJyaWMubwogIFtCVUlMRF0gYmluL3ZpcnRpby1uZXQubwogIFtCVUlMRF0gYmlu
L3NtYzkwMDAubwogIFtCVUlMRF0gYmluLzNjNXg5Lm8KICBbQlVJTERdIGJpbi90ZzMubwog
IFtCVUlMRF0gYmluLzNjNTA5LWVpc2EubwogIFtCVUlMRF0gYmluL3BjbmV0MzIubwogIFtC
VUlMRF0gYmluL25zODM4MjAubwogIFtCVUlMRF0gYmluL25zODM5MC5vCiAgW0JVSUxEXSBi
aW4vZG1mZS5vCiAgW0JVSUxEXSBiaW4vYm54Mi5vCiAgW0JVSUxEXSBiaW4vM2M1MTUubwog
IFtCVUlMRF0gYmluL2VlcHJvMTAwLm8KICBbQlVJTERdIGJpbi9sZWdhY3kubwogIFtCVUlM
RF0gYmluL2F0bDFlLm8KICBbQlVJTERdIGJpbi9za3kyLm8KICBbQlVJTERdIGJpbi9pcG9p
Yi5vCiAgW0JVSUxEXSBiaW4vbmUya19pc2EubwogIFtCVUlMRF0gYmluL2NzODl4MC5vCiAg
W0JVSUxEXSBiaW4vdzg5Yzg0MC5vCiAgW0JVSUxEXSBiaW4vdmlhLXJoaW5lLm8KICBbQlVJ
TERdIGJpbi8zYzU5NS5vCiAgW0JVSUxEXSBiaW4vcHJpc20yX3BjaS5vCiAgW0JVSUxEXSBi
aW4vam1lLm8KICBbQlVJTERdIGJpbi9hbWQ4MTExZS5vCiAgW0JVSUxEXSBiaW4vZGVwY2Eu
bwogIFtCVUlMRF0gYmluL2UxMDAwXzgyNTQxLm8KICBbQlVJTERdIGJpbi9lMTAwMF9tYWlu
Lm8KICBbQlVJTERdIGJpbi9lMTAwMC5vCiAgW0JVSUxEXSBiaW4vZTEwMDBfcGh5Lm8KICBb
QlVJTERdIGJpbi9lMTAwMF9tYWMubwogIFtCVUlMRF0gYmluL2UxMDAwX252bS5vCiAgW0JV
SUxEXSBiaW4vZTEwMDBfODI1NDMubwogIFtCVUlMRF0gYmluL2UxMDAwX21hbmFnZS5vCiAg
W0JVSUxEXSBiaW4vZTEwMDBfYXBpLm8KICBbQlVJTERdIGJpbi9lMTAwMF84MjU0MC5vCiAg
W0JVSUxEXSBiaW4vZTEwMDBfODI1NDIubwogIFtCVUlMRF0gYmluL2UxMDAwZV9tYWluLm8K
ICBbQlVJTERdIGJpbi9lMTAwMGVfbnZtLm8KICBbQlVJTERdIGJpbi9lMTAwMGVfcGh5Lm8K
ICBbQlVJTERdIGJpbi9lMTAwMGVfbWFjLm8KICBbQlVJTERdIGJpbi9lMTAwMGUubwogIFtC
VUlMRF0gYmluL2UxMDAwZV84MjU3MS5vCiAgW0JVSUxEXSBiaW4vZTEwMDBlX21hbmFnZS5v
CiAgW0JVSUxEXSBiaW4vZTEwMDBlX2ljaDhsYW4ubwogIFtCVUlMRF0gYmluL2UxMDAwZV84
MDAwM2VzMmxhbi5vCiAgW0JVSUxEXSBiaW4vaWdiX21hbmFnZS5vCiAgW0JVSUxEXSBiaW4v
aWdiX2FwaS5vCiAgW0JVSUxEXSBiaW4vaWdiX252bS5vCiAgW0JVSUxEXSBiaW4vaWdiX21h
aW4ubwogIFtCVUlMRF0gYmluL2lnYl9waHkubwogIFtCVUlMRF0gYmluL2lnYl9tYWMubwog
IFtCVUlMRF0gYmluL2lnYi5vCiAgW0JVSUxEXSBiaW4vaWdiXzgyNTc1Lm8KICBbQlVJTERd
IGJpbi9pZ2J2Zl9tYngubwogIFtCVUlMRF0gYmluL2lnYnZmX3ZmLm8KICBbQlVJTERdIGJp
bi9pZ2J2Zl9tYWluLm8KICBbQlVJTERdIGJpbi9waGFudG9tLm8KICBbQlVJTERdIGJpbi9y
dGw4MTgwX3NhMjQwMC5vCiAgW0JVSUxEXSBiaW4vcnRsODE4NV9ydGw4MjI1Lm8KICBbQlVJ
TERdIGJpbi9ydGw4MTgwLm8KICBbQlVJTERdIGJpbi9ydGw4MTh4Lm8KICBbQlVJTERdIGJp
bi9ydGw4MTg1Lm8KICBbQlVJTERdIGJpbi9ydGw4MTgwX21heDI4MjAubwogIFtCVUlMRF0g
YmluL3J0bDgxODBfZ3JmNTEwMS5vCiAgW0JVSUxEXSBiaW4vYXRoX3JlZ2QubwogIFtCVUlM
RF0gYmluL2F0aF9tYWluLm8KICBbQlVJTERdIGJpbi9hdGhfa2V5Lm8KICBbQlVJTERdIGJp
bi9hdGhfaHcubwogIFtCVUlMRF0gYmluL2F0aDVrX2NhcHMubwogIFtCVUlMRF0gYmluL2F0
aDVrX2VlcHJvbS5vCiAgW0JVSUxEXSBiaW4vYXRoNWtfcWN1Lm8KICBbQlVJTERdIGJpbi9h
dGg1a19kZXNjLm8KICBbQlVJTERdIGJpbi9hdGg1a19wY3UubwogIFtCVUlMRF0gYmluL2F0
aDVrX2RtYS5vCiAgW0JVSUxEXSBiaW4vYXRoNWtfaW5pdHZhbHMubwogIFtCVUlMRF0gYmlu
L2F0aDVrX3BoeS5vCiAgW0JVSUxEXSBiaW4vYXRoNWtfZ3Bpby5vCiAgW0JVSUxEXSBiaW4v
YXRoNWtfcmZraWxsLm8KICBbQlVJTERdIGJpbi9hdGg1a19hdHRhY2gubwogIFtCVUlMRF0g
YmluL2F0aDVrLm8KICBbQlVJTERdIGJpbi9hdGg1a19yZXNldC5vCiAgW0JVSUxEXSBiaW4v
YXRoOWtfaHcubwogIFtCVUlMRF0gYmluL2F0aDlrX3JlY3YubwogIFtCVUlMRF0gYmluL2F0
aDlrX2VlcHJvbS5vCiAgW0JVSUxEXSBiaW4vYXRoOWtfYXI5MDAzX2h3Lm8KICBbQlVJTERd
IGJpbi9hdGg5a19tYWluLm8KICBbQlVJTERdIGJpbi9hdGg5a19hbmkubwogIFtCVUlMRF0g
YmluL2F0aDlrX2FyOTAwM19waHkubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyNTAwOF9waHku
bwogIFtCVUlMRF0gYmluL2F0aDlrX3htaXQubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyOTAw
Ml9waHkubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyOTAwMl9jYWxpYi5vCiAgW0JVSUxEXSBi
aW4vYXRoOWtfYXI5MDAyX21hYy5vCiAgW0JVSUxEXSBiaW4vYXRoOWtfYXI5MDAzX2VlcHJv
bS5vCiAgW0JVSUxEXSBiaW4vYXRoOWtfbWFjLm8KICBbQlVJTERdIGJpbi9hdGg5a19lZXBy
b21fZGVmLm8KICBbQlVJTERdIGJpbi9hdGg5a19lZXByb21fNGsubwogIFtCVUlMRF0gYmlu
L2F0aDlrX2NhbGliLm8KICBbQlVJTERdIGJpbi9hdGg5a19hcjkwMDJfaHcubwogIFtCVUlM
RF0gYmluL2F0aDlrX2NvbW1vbi5vCiAgW0JVSUxEXSBiaW4vYXRoOWsubwogIFtCVUlMRF0g
YmluL2F0aDlrX2VlcHJvbV85Mjg3Lm8KICBbQlVJTERdIGJpbi9hdGg5a19hcjkwMDNfY2Fs
aWIubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyOTAwM19tYWMubwogIFtCVUlMRF0gYmluL2F0
aDlrX2luaXQubwogIFtCVUlMRF0gYmluL3Z4Z2VfbWFpbi5vCiAgW0JVSUxEXSBiaW4vdnhn
ZV9jb25maWcubwogIFtCVUlMRF0gYmluL3Z4Z2UubwogIFtCVUlMRF0gYmluL3Z4Z2VfdHJh
ZmZpYy5vCiAgW0JVSUxEXSBiaW4vc25wb25seS5vCiAgW0JVSUxEXSBiaW4vc25wbmV0Lm8K
ICBbQlVJTERdIGJpbi9zY3NpLm8KICBbQlVJTERdIGJpbi9zcnAubwogIFtCVUlMRF0gYmlu
L2F0YS5vCiAgW0JVSUxEXSBiaW4vaWJmdC5vCiAgW0JVSUxEXSBiaW4vbnZzLm8KICBbQlVJ
TERdIGJpbi90aHJlZXdpcmUubwogIFtCVUlMRF0gYmluL252c3ZwZC5vCiAgW0JVSUxEXSBi
aW4vc3BpLm8KICBbQlVJTERdIGJpbi9pMmNfYml0Lm8KICBbQlVJTERdIGJpbi9zcGlfYml0
Lm8KICBbQlVJTERdIGJpbi9iaXRiYXNoLm8KICBbQlVJTERdIGJpbi9saW5kYV9mdy5vCiAg
W0JVSUxEXSBiaW4vcWliNzMyMi5vCiAgW0JVSUxEXSBiaW4vYXJiZWwubwogIFtCVUlMRF0g
YmluL2hlcm1vbi5vCiAgW0JVSUxEXSBiaW4vbGluZGEubwogIFtCVUlMRF0gYmluL2VmaV9p
by5vCiAgW0JVSUxEXSBiaW4vZWZpX3VhY2Nlc3MubwogIFtCVUlMRF0gYmluL2VmaV9pbml0
Lm8KICBbQlVJTERdIGJpbi9lZmlfZHJpdmVyLm8KICBbQlVJTERdIGJpbi9lZmlfc21iaW9z
Lm8KICBbQlVJTERdIGJpbi9lZmlfdGltZXIubwogIFtCVUlMRF0gYmluL2VmaV9zdHJpbmdz
Lm8KICBbQlVJTERdIGJpbi9lZmlfdW1hbGxvYy5vCiAgW0JVSUxEXSBiaW4vZWZpX2JvZm0u
bwogIFtCVUlMRF0gYmluL2VmaV9zdHJlcnJvci5vCiAgW0JVSUxEXSBiaW4vZWZpX3BjaS5v
CiAgW0JVSUxEXSBiaW4vZWZpX3NucC5vCiAgW0JVSUxEXSBiaW4vZWZpX2NvbnNvbGUubwog
IFtCVUlMRF0gYmluL3NtYmlvc19zZXR0aW5ncy5vCiAgW0JVSUxEXSBiaW4vc21iaW9zLm8K
ICBbQlVJTERdIGJpbi9ib2ZtLm8KICBbQlVJTERdIGJpbi9tZW1jcHlfdGVzdC5vCiAgW0JV
SUxEXSBiaW4vbGlzdF90ZXN0Lm8KICBbQlVJTERdIGJpbi90ZXN0Lm8KICBbQlVJTERdIGJp
bi91cmlfdGVzdC5vCiAgW0JVSUxEXSBiaW4vYm9mbV90ZXN0Lm8KICBbQlVJTERdIGJpbi91
bWFsbG9jX3Rlc3QubwogIFtCVUlMRF0gYmluL2xpbmVidWZfdGVzdC5vCiAgW0JVSUxEXSBi
aW4vY2hhcC5vCiAgW0JVSUxEXSBiaW4vbWQ1Lm8KICBbQlVJTERdIGJpbi94NTA5Lm8KICBb
QlVJTERdIGJpbi9zaGExZXh0cmEubwogIFtCVUlMRF0gYmluL2FyYzQubwogIFtCVUlMRF0g
YmluL2NyeXB0b19udWxsLm8KICBbQlVJTERdIGJpbi9jcmFuZG9tLm8KICBbQlVJTERdIGJp
bi9jcmMzMi5vCiAgW0JVSUxEXSBiaW4vaG1hYy5vCiAgW0JVSUxEXSBiaW4vYXNuMS5vCiAg
W0JVSUxEXSBiaW4vYXh0bHNfYWVzLm8KICBbQlVJTERdIGJpbi9hZXNfd3JhcC5vCiAgW0JV
SUxEXSBiaW4vYXh0bHNfc2hhMS5vCiAgW0JVSUxEXSBiaW4vY2JjLm8KICBbQlVJTERdIGJp
bi9hZXMubwogIFtCVUlMRF0gYmluL2JpZ2ludC5vCiAgW0JVSUxEXSBiaW4vcnNhLm8KICBb
QlVJTERdIGJpbi9zaGExLm8KICBbQlVJTERdIGJpbi9saW51eF9hcmdzLm8KICBbQlVJTERd
IGJpbi9zaGVsbC5vCiAgW0JVSUxEXSBiaW4vc3RyZXJyb3IubwogIFtCVUlMRF0gYmluL3Jl
YWRsaW5lLm8KICBbQlVJTERdIGJpbi9lZGl0c3RyaW5nLm8KICBbQlVJTERdIGJpbi93aXJl
bGVzc19lcnJvcnMubwogIFtCVUlMRF0gYmluL252b19jbWQubwogIFtCVUlMRF0gYmluL2Nv
bmZpZ19jbWQubwogIFtCVUlMRF0gYmluL2xvZ2luX2NtZC5vCiAgW0JVSUxEXSBiaW4vc2Fu
Ym9vdF9jbWQubwogIFtCVUlMRF0gYmluL2lmbWdtdF9jbWQubwogIFtCVUlMRF0gYmluL2dk
YnN0dWJfY21kLm8KICBbQlVJTERdIGJpbi9hdXRvYm9vdF9jbWQubwogIFtCVUlMRF0gYmlu
L3RpbWVfY21kLm8KICBbQlVJTERdIGJpbi9kaGNwX2NtZC5vCiAgW0JVSUxEXSBiaW4vcm91
dGVfY21kLm8KICBbQlVJTERdIGJpbi9kaWdlc3RfY21kLm8KICBbQlVJTERdIGJpbi9pbWFn
ZV9jbWQubwogIFtCVUlMRF0gYmluL2ZjbWdtdF9jbWQubwogIFtCVUlMRF0gYmluL2xvdGVz
dF9jbWQubwogIFtCVUlMRF0gYmluL2l3bWdtdF9jbWQubwogIFtCVUlMRF0gYmluL3ZsYW5f
Y21kLm8KICBbQlVJTERdIGJpbi9sb2dpbl91aS5vCiAgW0JVSUxEXSBiaW4vc2V0dGluZ3Nf
dWkubwogIFtCVUlMRF0gYmluL2FsZXJ0Lm8KICBbQlVJTERdIGJpbi9jbGVhci5vCiAgW0JV
SUxEXSBiaW4vZWRnaW5nLm8KICBbQlVJTERdIGJpbi93aW5hdHRycy5vCiAgW0JVSUxEXSBi
aW4vYW5zaV9zY3JlZW4ubwogIFtCVUlMRF0gYmluL3ByaW50X25hZHYubwogIFtCVUlMRF0g
YmluL3dpbmluaXQubwogIFtCVUlMRF0gYmluL211Y3Vyc2VzLm8KICBbQlVJTERdIGJpbi93
aW5kb3dzLm8KICBbQlVJTERdIGJpbi9wcmludC5vCiAgW0JVSUxEXSBiaW4vc2xrLm8KICBb
QlVJTERdIGJpbi9jb2xvdXIubwogIFtCVUlMRF0gYmluL2tiLm8KICBbQlVJTERdIGJpbi9l
ZGl0Ym94Lm8KICBbQlVJTERdIGJpbi9rZXltYXBfcHQubwogIFtCVUlMRF0gYmluL2tleW1h
cF9kay5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX2dyLm8KICBbQlVJTERdIGJpbi9rZXltYXBf
aWwubwogIFtCVUlMRF0gYmluL2tleW1hcF91cy5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX3Ro
Lm8KICBbQlVJTERdIGJpbi9rZXltYXBfZXQubwogIFtCVUlMRF0gYmluL2tleW1hcF9uby5v
CiAgW0JVSUxEXSBiaW4va2V5bWFwX2NmLm8KICBbQlVJTERdIGJpbi9rZXltYXBfcnUubwog
IFtCVUlMRF0gYmluL2tleW1hcF9hbC5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX3NyLm8KICBb
QlVJTERdIGJpbi9rZXltYXBfbHQubwogIFtCVUlMRF0gYmluL2tleW1hcF91YS5vCiAgW0JV
SUxEXSBiaW4va2V5bWFwX3dvLm8KICBbQlVJTERdIGJpbi9rZXltYXBfbXQubwogIFtCVUlM
RF0gYmluL2tleW1hcF9ieS5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX2ZyLm8KICBbQlVJTERd
IGJpbi9rZXltYXBfYXoubwogIFtCVUlMRF0gYmluL2tleW1hcF9wbC5vCiAgW0JVSUxEXSBi
aW4va2V5bWFwX3VrLm8KICBbQlVJTERdIGJpbi9rZXltYXBfbWsubwogIFtCVUlMRF0gYmlu
L2tleW1hcF9maS5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX2RlLm8KICBbQlVJTERdIGJpbi9r
ZXltYXBfY3oubwogIFtCVUlMRF0gYmluL2tleW1hcF9ubC5vCiAgW0JVSUxEXSBiaW4va2V5
bWFwX2JnLm8KICBbQlVJTERdIGJpbi9rZXltYXBfaHUubwogIFtCVUlMRF0gYmluL2tleW1h
cF9lcy5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX3NnLm8KICBbQlVJTERdIGJpbi9rZXltYXBf
aXQubwogIFtCVUlMRF0gYmluL2tleW1hcF9yby5vCiAgW0JVSUxEXSBiaW4vcHJvbXB0Lm8K
ICBbQlVJTERdIGJpbi9yb3V0ZS5vCiAgW0JVSUxEXSBiaW4vaXdtZ210Lm8KICBbQlVJTERd
IGJpbi9sb3Rlc3QubwogIFtCVUlMRF0gYmluL2ltZ21nbXQubwogIFtCVUlMRF0gYmluL3B4
ZW1lbnUubwogIFtCVUlMRF0gYmluL2RoY3BtZ210Lm8KICBbQlVJTERdIGJpbi9mY21nbXQu
bwogIFtCVUlMRF0gYmluL2lmbWdtdC5vCiAgW0JVSUxEXSBiaW4vYXV0b2Jvb3QubwogIFtC
VUlMRF0gYmluL2NvbmZpZ19pbmZpbmliYW5kLm8KICBbQlVJTERdIGJpbi9jb25maWdfbmV0
ODAyMTEubwogIFtCVUlMRF0gYmluL2NvbmZpZ19ldGhlcm5ldC5vCiAgW0JVSUxEXSBiaW4v
Y29uZmlnX2ZjLm8KICBbQlVJTERdIGJpbi9jb25maWcubwogIFtCVUlMRF0gYmluL2NvbmZp
Z19yb21wcmVmaXgubwogIFtCVUlMRF0gYmluL3JkdHNjX3RpbWVyLm8KICBbQlVJTERdIGJp
bi9iYXNlbWVtX3BhY2tldC5vCiAgW0JVSUxEXSBiaW4vdmlkZW9fc3Vici5vCiAgW0JVSUxE
XSBiaW4vZ2RibWFjaC5vCiAgW0JVSUxEXSBiaW4vY3B1Lm8KICBbQlVJTERdIGJpbi9waWM4
MjU5Lm8KICBbQlVJTERdIGJpbi9ydW50aW1lLm8KICBbQlVJTERdIGJpbi90aW1lcjIubwog
IFtCVUlMRF0gYmluL3g4Nl9pby5vCiAgW0JVSUxEXSBiaW4vcmVsb2NhdGUubwogIFtCVUlM
RF0gYmluL251bGx0cmFwLm8KICBbQlVJTERdIGJpbi9kdW1wcmVncy5vCiAgW0JVSUxEXSBi
aW4vbGlicm1fbWdtdC5vCiAgW0JVSUxEXSBiaW4vaGlkZW1lbS5vCiAgW0JVSUxEXSBiaW4v
bWVtbWFwLm8KICBbQlVJTERdIGJpbi9iYXNlbWVtLm8KICBbQlVJTERdIGJpbi9mYWtlZTgy
MC5vCiAgW0JVSUxEXSBiaW4vYmlvc19jb25zb2xlLm8KICBbQlVJTERdIGJpbi9wbnBiaW9z
Lm8KICBbQlVJTERdIGJpbi9jb20zMi5vCiAgW0JVSUxEXSBiaW4vbmJpLm8KICBbQlVJTERd
IGJpbi9iemltYWdlLm8KICBbQlVJTERdIGJpbi9weGVfaW1hZ2UubwogIFtCVUlMRF0gYmlu
L211bHRpYm9vdC5vCiAgW0JVSUxEXSBiaW4vYm9vdHNlY3Rvci5vCiAgW0JVSUxEXSBiaW4v
ZWxmYm9vdC5vCiAgW0JVSUxEXSBiaW4vY29tYm9vdC5vCiAgW0JVSUxEXSBiaW4vYmlvc19u
YXAubwogIFtCVUlMRF0gYmluL2ludDEzLm8KICBbQlVJTERdIGJpbi9wY2liaW9zLm8KICBb
QlVJTERdIGJpbi9iaW9zX3RpbWVyLm8KICBbQlVJTERdIGJpbi9iaW9zaW50Lm8KICBbQlVJ
TERdIGJpbi9tZW10b3BfdW1hbGxvYy5vCiAgW0JVSUxEXSBiaW4vYmlvc19zbWJpb3Mubwog
IFtCVUlMRF0gYmluL3B4ZV9jYWxsLm8KICBbQlVJTERdIGJpbi9weGVfZmlsZS5vCiAgW0JV
SUxEXSBiaW4vcHhlX3RmdHAubwogIFtCVUlMRF0gYmluL3B4ZV9wcmVib290Lm8KICBbQlVJ
TERdIGJpbi9weGVfZXhpdF9ob29rLm8KICBbQlVJTERdIGJpbi9weGVfbG9hZGVyLm8KICBb
QlVJTERdIGJpbi9weGVfdW5kaS5vCiAgW0JVSUxEXSBiaW4vcHhlX3VkcC5vCiAgW0JVSUxE
XSBiaW4vcHhlcGFyZW50Lm8KICBbQlVJTERdIGJpbi9weGVwYXJlbnRfZGhjcC5vCiAgW0JV
SUxEXSBiaW4vY29tYm9vdF9jYWxsLm8KICBbQlVJTERdIGJpbi9jb20zMl9jYWxsLm8KICBb
QlVJTERdIGJpbi9jb21ib290X3Jlc29sdi5vCiAgW0JVSUxEXSBiaW4vcHhlX2NtZC5vCiAg
W0JVSUxEXSBiaW4vcmVib290X2NtZC5vCiAgW0JVSUxEXSBiaW4vcGNpZGlyZWN0Lm8KICBb
QlVJTERdIGJpbi94ODZfc3RyaW5nLm8KICBbQlVJTERdIGJpbi9lZml4ODZfbmFwLm8KICBb
QlVJTERdIGJpbi9lZmlwcmVmaXgubwogIFtCVUlMRF0gYmluL2VmaWRydnByZWZpeC5vCiAg
W0JVSUxEXSBiaW4vdW5kaXByZWxvYWQubwogIFtCVUlMRF0gYmluL3VuZGlsb2FkLm8KICBb
QlVJTERdIGJpbi91bmRpb25seS5vCiAgW0JVSUxEXSBiaW4vdW5kaS5vCiAgW0JVSUxEXSBi
aW4vdW5kaW5ldC5vCiAgW0JVSUxEXSBiaW4vdW5kaXJvbS5vCiAgW0JVSUxEXSBiaW4vZ2Ri
c3R1Yl90ZXN0Lm8KICBbQlVJTERdIGJpbi92aXJ0YWRkci5vCiAgW0JVSUxEXSBiaW4vcGF0
Y2hfY2YubwogIFtCVUlMRF0gYmluL2dkYmlkdC5vCiAgW0JVSUxEXSBiaW4vc2V0am1wLm8K
ICBbQlVJTERdIGJpbi9zdGFjay5vCiAgW0JVSUxEXSBiaW4vc3RhY2sxNi5vCiAgW0JVSUxE
XSBiaW4vbGlia2lyLm8KICBbQlVJTERdIGJpbi9saWJwbS5vCiAgW0JVSUxEXSBiaW4vbGli
YTIwLm8KICBbQlVJTERdIGJpbi9saWJybS5vCiAgW0JVSUxEXSBiaW4vbGlicHJlZml4Lm8K
ICBbQlVJTERdIGJpbi9kc2twcmVmaXgubwogIFtCVUlMRF0gYmluL21yb21wcmVmaXgubwog
IFtCVUlMRF0gYmluL3VubnJ2MmIubwogIFtCVUlMRF0gYmluL2xrcm5wcmVmaXgubwogIFtC
VUlMRF0gYmluL3VubnJ2MmIxNi5vCiAgW0JVSUxEXSBiaW4va2tweGVwcmVmaXgubwogIFtC
VUlMRF0gYmluL3VuZGlsb2FkZXIubwogIFtCVUlMRF0gYmluL2Jvb3RwYXJ0Lm8KICBbQlVJ
TERdIGJpbi9udWxscHJlZml4Lm8KICBbQlVJTERdIGJpbi9uYmlwcmVmaXgubwogIFtCVUlM
RF0gYmluL2tweGVwcmVmaXgubwogIFtCVUlMRF0gYmluL2tra3B4ZXByZWZpeC5vCiAgW0JV
SUxEXSBiaW4vdXNiZGlzay5vCiAgW0JVSUxEXSBiaW4vaGRwcmVmaXgubwogIFtCVUlMRF0g
YmluL2V4ZXByZWZpeC5vCiAgW0JVSUxEXSBiaW4vcm9tcHJlZml4Lm8KICBbQlVJTERdIGJp
bi9weGVwcmVmaXgubwogIFtCVUlMRF0gYmluL21ici5vCiAgW0JVSUxEXSBiaW4vZTgyMG1h
bmdsZXIubwogIFtCVUlMRF0gYmluL3B4ZV9lbnRyeS5vCiAgW0JVSUxEXSBiaW4vY29tMzJf
d3JhcHBlci5vCiAgW0JVSUxEXSBiaW4vdW5kaWlzci5vCiAgW0FSXSBiaW4vYmxpYi5hCmFy
OiBjcmVhdGluZyBiaW4vYmxpYi5hCiAgW0hPU1RDQ10gdXRpbC96YmluCiAgW0xEXSBiaW4v
cnRsODEzOS5yb20udG1wCiAgW0JJTl0gYmluL3J0bDgxMzkucm9tLmJpbgogIFtaSU5GT10g
YmluL3J0bDgxMzkucm9tLnppbmZvCiAgW1pCSU5dIGJpbi9ydGw4MTM5LnJvbS56YmluCiAg
W0ZJTklTSF0gYmluL3J0bDgxMzkucm9tCnJtIGJpbi9ydGw4MTM5LnJvbS56YmluIGJpbi9y
dGw4MTM5LnJvbS5iaW4gYmluL3J0bDgxMzkucm9tLnppbmZvCmdtYWtlWzddOiBMZWF2aW5n
IGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2V0aGVyYm9vdC9p
cHhlL3NyYycKZ21ha2UgLUMgaXB4ZS9zcmMgYmluLzgwODYxMDBlLnJvbQpnbWFrZVs3XTog
RW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRo
ZXJib290L2lweGUvc3JjJwogIFtMRF0gYmluLzgwODYxMDBlLnJvbS50bXAKICBbQklOXSBi
aW4vODA4NjEwMGUucm9tLmJpbgogIFtaSU5GT10gYmluLzgwODYxMDBlLnJvbS56aW5mbwog
IFtaQklOXSBiaW4vODA4NjEwMGUucm9tLnpiaW4KICBbRklOSVNIXSBiaW4vODA4NjEwMGUu
cm9tCnJtIGJpbi84MDg2MTAwZS5yb20uemJpbiBiaW4vODA4NjEwMGUucm9tLmJpbiBiaW4v
ODA4NjEwMGUucm9tLnppbmZvCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2V0aGVyYm9vdC9pcHhlL3NyYycKZ21ha2VbNl06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRo
ZXJib290JwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy9maXJtd2FyZScKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2Zpcm13YXJlJwpnbWFrZSAtQyBodm1sb2FkZXIgYWxsCmdtYWtlWzZd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9o
dm1sb2FkZXInCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC90b29scy9maXJtd2FyZS9odm1sb2FkZXInCmdtYWtlIC1DIGFjcGkgYWxsCmdtYWtlWzhd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9o
dm1sb2FkZXIvYWNwaScKaWFzbCAtdnMgLXAgc3NkdF9zMyAtdGMgc3NkdF9zMy5hc2wKQVNM
IElucHV0OiAgc3NkdF9zMy5hc2wgLSAzNCBsaW5lcywgMTA2NyBieXRlcywgMSBrZXl3b3Jk
cwpBTUwgT3V0cHV0OiBzc2R0X3MzLmFtbCAtIDQ5IGJ5dGVzLCAxIG5hbWVkIG9iamVjdHMs
IDAgZXhlY3V0YWJsZSBvcGNvZGVzCgpDb21waWxhdGlvbiBjb21wbGV0ZS4gMCBFcnJvcnMs
IDAgV2FybmluZ3MsIDAgUmVtYXJrcywgNCBPcHRpbWl6YXRpb25zCnNlZCAtZSAncy9BbWxD
b2RlL3NzZHRfczMvZycgc3NkdF9zMy5oZXggPnNzZHRfczMuaApybSAtZiBzc2R0X3MzLmhl
eCBzc2R0X3MzLmFtbAppYXNsIC12cyAtcCBzc2R0X3M0IC10YyBzc2R0X3M0LmFzbApBU0wg
SW5wdXQ6ICBzc2R0X3M0LmFzbCAtIDM0IGxpbmVzLCAxMDY3IGJ5dGVzLCAxIGtleXdvcmRz
CkFNTCBPdXRwdXQ6IHNzZHRfczQuYW1sIC0gNDkgYnl0ZXMsIDEgbmFtZWQgb2JqZWN0cywg
MCBleGVjdXRhYmxlIG9wY29kZXMKCkNvbXBpbGF0aW9uIGNvbXBsZXRlLiAwIEVycm9ycywg
MCBXYXJuaW5ncywgMCBSZW1hcmtzLCA0IE9wdGltaXphdGlvbnMKc2VkIC1lICdzL0FtbENv
ZGUvc3NkdF9zNC9nJyBzc2R0X3M0LmhleCA+c3NkdF9zNC5oCnJtIC1mIHNzZHRfczQuaGV4
IHNzZHRfczQuYW1sCmlhc2wgLXZzIC1wIHNzZHRfcG0gLXRjIHNzZHRfcG0uYXNsCkFTTCBJ
bnB1dDogIHNzZHRfcG0uYXNsIC0gNDI1IGxpbmVzLCAxMjc1NCBieXRlcywgMTkyIGtleXdv
cmRzCkFNTCBPdXRwdXQ6IHNzZHRfcG0uYW1sIC0gMTQ5NCBieXRlcywgNjQgbmFtZWQgb2Jq
ZWN0cywgMTI4IGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29tcGlsYXRpb24gY29tcGxldGUuIDAg
RXJyb3JzLCAwIFdhcm5pbmdzLCAwIFJlbWFya3MsIDMxIE9wdGltaXphdGlvbnMKc2VkIC1l
ICdzL0FtbENvZGUvc3NkdF9wbS9nJyBzc2R0X3BtLmhleCA+c3NkdF9wbS5oCnJtIC1mIHNz
ZHRfcG0uaGV4IHNzZHRfcG0uYW1sCmlhc2wgLXZzIC1wIHNzZHRfdHBtIC10YyBzc2R0X3Rw
bS5hc2wKQVNMIElucHV0OiAgc3NkdF90cG0uYXNsIC0gMzMgbGluZXMsIDEwNDYgYnl0ZXMs
IDMga2V5d29yZHMKQU1MIE91dHB1dDogc3NkdF90cG0uYW1sIC0gNzYgYnl0ZXMsIDMgbmFt
ZWQgb2JqZWN0cywgMCBleGVjdXRhYmxlIG9wY29kZXMKCkNvbXBpbGF0aW9uIGNvbXBsZXRl
LiAwIEVycm9ycywgMCBXYXJuaW5ncywgMCBSZW1hcmtzLCAwIE9wdGltaXphdGlvbnMKc2Vk
IC1lICdzL0FtbENvZGUvc3NkdF90cG0vZycgc3NkdF90cG0uaGV4ID5zc2R0X3RwbS5oCnJt
IC1mIHNzZHRfdHBtLmhleCBzc2R0X3RwbS5hbWwKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5idWlsZC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJv
ciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1t
c29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIv
YWNwaS8uLi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBidWlsZC5vIGJ1aWxkLmMg
IC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgLVdhbGwgLVdlcnJvciAtV3N0cmljdC1wcm90b3R5
cGVzIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtZm5vLXN0cmljdC1hbGlhc2luZyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJt
d2FyZS9odm1sb2FkZXIvYWNwaS8uLi8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1vIG1rX2Rz
ZHQgbWtfZHNkdC5jCmF3ayAnTlIgPiAxIHtwcmludCBzfSB7cz0kMH0nIGRzZHQuYXNsID4g
ZHNkdF9hbnljcHUuYXNsCi4vbWtfZHNkdCAtLW1heGNwdSBhbnkgID4+IGRzZHRfYW55Y3B1
LmFzbAppYXNsIC12cyAtcCBkc2R0X2FueWNwdSAtdGMgZHNkdF9hbnljcHUuYXNsCmRzZHRf
YW55Y3B1LmFzbCAgIDUyODM6ICAgICAgICAgICAgIFJldHVybiAoIFxfU0IuUFJTQygpICkK
V2FybmluZyAgMTEyOCAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBe
IFJlc2VydmVkIG1ldGhvZCBzaG91bGQgbm90IHJldHVybiBhIHZhbHVlIChfTDAyKQoKQVNM
IElucHV0OiAgZHNkdF9hbnljcHUuYXNsIC0gMTA5MzYgbGluZXMsIDM4NjYxOCBieXRlcywg
Nzk1OSBrZXl3b3JkcwpBTUwgT3V0cHV0OiBkc2R0X2FueWNwdS5hbWwgLSA3MDQyMSBieXRl
cywgMjQ1NiBuYW1lZCBvYmplY3RzLCA1NTAzIGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29tcGls
YXRpb24gY29tcGxldGUuIDAgRXJyb3JzLCAxIFdhcm5pbmdzLCAwIFJlbWFya3MsIDI2MTQg
T3B0aW1pemF0aW9ucwpzZWQgLWUgJ3MvQW1sQ29kZS9kc2R0X2FueWNwdS9nJyBkc2R0X2Fu
eWNwdS5oZXggPmRzZHRfYW55Y3B1LmMKZWNobyAiaW50IGRzZHRfYW55Y3B1X2xlbj1zaXpl
b2YoZHNkdF9hbnljcHUpOyIgPj5kc2R0X2FueWNwdS5jCnJtIC1mIGRzZHRfYW55Y3B1LmFt
bCBkc2R0X2FueWNwdS5oZXgKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
MzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5kc2R0X2FueWNwdS5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1m
bG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvYWNwaS8u
Li8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBkc2R0X2FueWNwdS5vIGRzZHRfYW55
Y3B1LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQphd2sgJ05SID4gMSB7cHJpbnQgc30ge3M9JDB9
JyBkc2R0LmFzbCA+IGRzZHRfMTVjcHUuYXNsCi4vbWtfZHNkdCAtLW1heGNwdSAxNSAgPj4g
ZHNkdF8xNWNwdS5hc2wKaWFzbCAtdnMgLXAgZHNkdF8xNWNwdSAtdGMgZHNkdF8xNWNwdS5h
c2wKZHNkdF8xNWNwdS5hc2wgICAgOTg5OiAgICAgICAgICAgICBSZXR1cm4gKCBcX1NCLlBS
U0MoKSApCldhcm5pbmcgIDExMjggLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBeIFJlc2VydmVkIG1ldGhvZCBzaG91bGQgbm90IHJldHVybiBhIHZhbHVlIChfTDAy
KQoKQVNMIElucHV0OiAgZHNkdF8xNWNwdS5hc2wgLSA2NjQyIGxpbmVzLCAyNDQ2MTEgYnl0
ZXMsIDQ3Njcga2V5d29yZHMKQU1MIE91dHB1dDogZHNkdF8xNWNwdS5hbWwgLSA0ODExOCBi
eXRlcywgMTU1MiBuYW1lZCBvYmplY3RzLCAzMjE1IGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29t
cGlsYXRpb24gY29tcGxldGUuIDAgRXJyb3JzLCAxIFdhcm5pbmdzLCAwIFJlbWFya3MsIDEw
NDYgT3B0aW1pemF0aW9ucwpzZWQgLWUgJ3MvQW1sQ29kZS9kc2R0XzE1Y3B1L2cnIGRzZHRf
MTVjcHUuaGV4ID5kc2R0XzE1Y3B1LmMKZWNobyAiaW50IGRzZHRfMTVjcHVfbGVuPXNpemVv
Zihkc2R0XzE1Y3B1KTsiID4+ZHNkdF8xNWNwdS5jCnJtIC1mIGRzZHRfMTVjcHUuYW1sIGRz
ZHRfMTVjcHUuaGV4CmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1t
YXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAuZHNkdF8xNWNwdS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJs
aW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAt
SS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvYWNwaS8uLi8uLi8u
Li8uLi90b29scy9pbmNsdWRlICAtYyAtbyBkc2R0XzE1Y3B1Lm8gZHNkdF8xNWNwdS5jICAt
SS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
MzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5zdGF0aWNfdGFibGVzLm8uZCAtZm5vLW9wdGlt
aXplLXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0
LWZsb2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci9hY3Bp
Ly4uLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHN0YXRpY190YWJsZXMubyBzdGF0
aWNfdGFibGVzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQphd2sgJ05SID4gMSB7cHJpbnQgc30g
e3M9JDB9JyBkc2R0LmFzbCA+IGRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbAouL21rX2RzZHQg
LS1kbS12ZXJzaW9uIHFlbXUteGVuID4+IGRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbAppYXNs
IC12cyAtcCBkc2R0X2FueWNwdV9xZW11X3hlbiAtdGMgZHNkdF9hbnljcHVfcWVtdV94ZW4u
YXNsCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDUyODM6ICAgICAgICAgICAgIFJldHVy
biAoIFxfU0IuUFJTQygpICkKV2FybmluZyAgMTEyOCAtICAgICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0wwMikKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU2NzY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU2ODQ6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU2OTI6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3MDA6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3MDg6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3MTY6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3MjQ6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3MzI6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3NDA6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3NDg6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3NTY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3NjQ6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3NzI6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3ODA6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3ODg6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3OTY6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4MDQ6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4MTI6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4MjA6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4Mjg6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4MzY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4NDQ6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4NTI6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4NjA6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4Njg6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4NzY6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4ODQ6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4OTI6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU5MDA6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU5MDg6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU5MTY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCkFTTCBJbnB1dDogIGRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAtIDYxMjIg
bGluZXMsIDIwMzM0OSBieXRlcywgNDMyNSBrZXl3b3JkcwpBTUwgT3V0cHV0OiBkc2R0X2Fu
eWNwdV9xZW11X3hlbi5hbWwgLSAzNDEzMyBieXRlcywgMTMwMCBuYW1lZCBvYmplY3RzLCAz
MDI1IGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29tcGlsYXRpb24gY29tcGxldGUuIDAgRXJyb3Jz
LCAzMiBXYXJuaW5ncywgMCBSZW1hcmtzLCAyNTg2IE9wdGltaXphdGlvbnMKc2VkIC1lICdz
L0FtbENvZGUvZHNkdF9hbnljcHVfcWVtdV94ZW4vZycgZHNkdF9hbnljcHVfcWVtdV94ZW4u
aGV4ID5kc2R0X2FueWNwdV9xZW11X3hlbi5jCmVjaG8gImludCBkc2R0X2FueWNwdV9xZW11
X3hlbl9sZW49c2l6ZW9mKGRzZHRfYW55Y3B1X3FlbXVfeGVuKTsiID4+ZHNkdF9hbnljcHVf
cWVtdV94ZW4uYwpybSAtZiBkc2R0X2FueWNwdV9xZW11X3hlbi5hbWwgZHNkdF9hbnljcHVf
cWVtdV94ZW4uaGV4CmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1t
YXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAuZHNkdF9hbnljcHVfcWVtdV94ZW4uby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJlY3Qtc2VnLXJlZnMgIC1XZXJyb3Ig
LWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1mbm8tYnVpbHRpbiAtbXNv
ZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyL2Fj
cGkvLi4vLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gZHNkdF9hbnljcHVfcWVtdV94
ZW4ubyBkc2R0X2FueWNwdV9xZW11X3hlbi5jICAtSS91c3IvcGtnL2luY2x1ZGUKYXIgcmMg
YWNwaS5hIGJ1aWxkLm8gZHNkdF9hbnljcHUubyBkc2R0XzE1Y3B1Lm8gc3RhdGljX3RhYmxl
cy5vIGRzZHRfYW55Y3B1X3FlbXVfeGVuLm8KZ21ha2VbOF06IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyL2FjcGknCmdtYWtl
WzddOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJl
L2h2bWxvYWRlcicKZ21ha2UgaHZtbG9hZGVyCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXInCmVjaG8gIi8q
IEF1dG9nZW5lcmF0ZWQgZmlsZS4gRE8gTk9UIEVESVQgKi8iID4gcm9tcy5pbmMubmV3CmVj
aG8gIiNpZmRlZiBST01fSU5DTFVERV9ST01CSU9TIiA+PiByb21zLmluYy5uZXcKc2ggLi9t
a2hleCByb21iaW9zIC4uL3JvbWJpb3MvQklPUy1ib2Nocy1sYXRlc3QgPj4gcm9tcy5pbmMu
bmV3CmVjaG8gIiNlbmRpZiIgPj4gcm9tcy5pbmMubmV3CmVjaG8gIiNpZmRlZiBST01fSU5D
TFVERV9TRUFCSU9TIiA+PiByb21zLmluYy5uZXcKc2ggLi9ta2hleCBzZWFiaW9zIC4uL3Nl
YWJpb3MtZGlyL291dC9iaW9zLmJpbiA+PiByb21zLmluYy5uZXcKZWNobyAiI2VuZGlmIiA+
PiByb21zLmluYy5uZXcKZWNobyAiI2lmZGVmIFJPTV9JTkNMVURFX1ZHQUJJT1MiID4+IHJv
bXMuaW5jLm5ldwpzaCAuL21raGV4IHZnYWJpb3Nfc3RkdmdhIC4uL3ZnYWJpb3MvVkdBQklP
Uy1sZ3BsLWxhdGVzdC5iaW4gPj4gcm9tcy5pbmMubmV3CmVjaG8gIiNlbmRpZiIgPj4gcm9t
cy5pbmMubmV3CmVjaG8gIiNpZmRlZiBST01fSU5DTFVERV9WR0FCSU9TIiA+PiByb21zLmlu
Yy5uZXcKc2ggLi9ta2hleCB2Z2FiaW9zX2NpcnJ1c3ZnYSAuLi92Z2FiaW9zL1ZHQUJJT1Mt
bGdwbC1sYXRlc3QuY2lycnVzLmJpbiA+PiByb21zLmluYy5uZXcKZWNobyAiI2VuZGlmIiA+
PiByb21zLmluYy5uZXcKZWNobyAiI2lmZGVmIFJPTV9JTkNMVURFX0VUSEVSQk9PVCIgPj4g
cm9tcy5pbmMubmV3CnNoIC4vbWtoZXggZXRoZXJib290IC4uL2V0aGVyYm9vdC9pcHhlL3Ny
Yy9iaW4vcnRsODEzOS5yb20gLi4vZXRoZXJib290L2lweGUvc3JjL2Jpbi84MDg2MTAwZS5y
b20gPj4gcm9tcy5pbmMubmV3CmVjaG8gIiNlbmRpZiIgPj4gcm9tcy5pbmMubmV3Cm12IHJv
bXMuaW5jLm5ldyByb21zLmluYwpnY2MgICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmh2bWxvYWRlci5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1m
bG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvLi4vLi4v
Li4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1j
IC1vIGh2bWxvYWRlci5vIGh2bWxvYWRlci5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC5tcF90YWJsZXMuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1k
aXJlY3Qtc2VnLXJlZnMgIC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvZmlybXdhcmUvaHZtbG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVf
Uk9NQklPUyAtREVOQUJMRV9TRUFCSU9TICAtYyAtbyBtcF90YWJsZXMubyBtcF90YWJsZXMu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudXRpbC5vLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9h
dCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1jIC1v
IHV0aWwubyB1dGlsLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnNtYmlvcy5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAg
LVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWls
dGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1s
b2FkZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxF
X1NFQUJJT1MgLURfX1NNQklPU19EQVRFX189IlwiMTIvMDQvMjAxMlwiIiAgLWMgLW8gc21i
aW9zLm8gc21iaW9zLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnNtcC5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdl
cnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGlu
IC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2Fk
ZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NF
QUJJT1MgIC1jIC1vIHNtcC5vIHNtcC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5j
YWNoZWF0dHIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJl
Y3Qtc2VnLXJlZnMgIC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVfUk9N
QklPUyAtREVOQUJMRV9TRUFCSU9TICAtYyAtbyBjYWNoZWF0dHIubyBjYWNoZWF0dHIuYyAg
LUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVuYnVzLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8uLi8uLi90
b29scy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAgLWMgLW8g
eGVuYnVzLm8geGVuYnVzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmU4MjAuby5k
IC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJlY3Qtc2VnLXJlZnMg
IC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1mbm8tYnVp
bHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZt
bG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVfUk9NQklPUyAtREVOQUJM
RV9TRUFCSU9TICAtYyAtbyBlODIwLm8gZTgyMC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2Nj
ICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQg
LU1GIC5wY2kuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJl
Y3Qtc2VnLXJlZnMgIC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVfUk9N
QklPUyAtREVOQUJMRV9TRUFCSU9TICAtYyAtbyBwY2kubyBwY2kuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1tYXJjaD1p
Njg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAucGlyLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgLW1u
by10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0IC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1E
RU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAgLWMgLW8gcGlyLm8gcGlyLmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW0z
MiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmN0eXBlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0IC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8uLi8uLi90b29s
cy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAgLWMgLW8gY3R5
cGUubyBjdHlwZS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC50ZXN0cy5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdl
cnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGlu
IC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2Fk
ZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NF
QUJJT1MgIC1jIC1vIHRlc3RzLm8gdGVzdHMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAub3B0aW9ucm9tcy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxz
LWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90
b29scy9maXJtd2FyZS9odm1sb2FkZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJM
RV9ST01CSU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1jIC1vIG9wdGlvbnJvbXMubyBvcHRpb25y
b21zLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLjMyYml0Ymlvc19zdXBwb3J0Lm8u
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZz
ICAtV2Vycm9yIC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1
aWx0aW4gLW1zb2Z0LWZsb2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2
bWxvYWRlci8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFC
TEVfU0VBQklPUyAgLWMgLW8gMzJiaXRiaW9zX3N1cHBvcnQubyAzMmJpdGJpb3Nfc3VwcG9y
dC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5yb21iaW9zLm8uZCAtZm5vLW9wdGlt
aXplLXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0
LWZsb2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8u
Li8uLi90b29scy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAg
LWMgLW8gcm9tYmlvcy5vIHJvbWJpb3MuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
c2VhYmlvcy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVj
dC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9m
aXJtd2FyZS9odm1sb2FkZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01C
SU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1jIC1vIHNlYWJpb3MubyBzZWFiaW9zLmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpsZCAtbWVsZl9pMzg2IC1OIC1UdGV4dCAweDEwMDAwMCAtbyBodm1s
b2FkZXIudG1wIGh2bWxvYWRlci5vIG1wX3RhYmxlcy5vIHV0aWwubyBzbWJpb3MubyBzbXAu
byBjYWNoZWF0dHIubyB4ZW5idXMubyBlODIwLm8gcGNpLm8gcGlyLm8gY3R5cGUubyB0ZXN0
cy5vIG9wdGlvbnJvbXMubyAzMmJpdGJpb3Nfc3VwcG9ydC5vIHJvbWJpb3MubyBzZWFiaW9z
Lm8gYWNwaS9hY3BpLmEKb2JqY29weSBodm1sb2FkZXIudG1wIGh2bWxvYWRlcgpybSAtZiBo
dm1sb2FkZXIudG1wCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlcicKZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyJwpnbWFrZVs1
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZScK
Z21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmly
bXdhcmUnClsgLWQgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGli
L3hlbi9ib290IF0gfHwgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2xpYi94ZW4vYm9vdApbICEgLWUgaHZtbG9hZGVyL2h2bWxvYWRl
ciBdIHx8IC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS8uLi8uLi90b29scy9jcm9z
cy1pbnN0YWxsIC1tMDY0NCAtcCBodm1sb2FkZXIvaHZtbG9hZGVyIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi94ZW4vYm9vdApnbWFrZVszXTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZScKZ21ha2VbMl06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMgY29u
c29sZSBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scy9jb25zb2xlJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAudXRpbHMuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAg
LVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL3hlbnN0b3Jl
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVkZSAg
LWMgLW8gZGFlbW9uL3V0aWxzLm8gZGFlbW9uL3V0aWxzLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubWFp
bi5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvY29uc29sZS8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvY29uc29sZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBkYWVtb24vbWFpbi5v
IGRhZW1vbi9tYWluLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuaW8uby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rv
b2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9v
bHMvaW5jbHVkZSAgLWMgLW8gZGFlbW9uL2lvLm8gZGFlbW9uL2lvLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgZGFlbW9uL3V0aWxzLm8gZGFlbW9uL21haW4ubyBkYWVtb24vaW8u
byAtbyB4ZW5jb25zb2xlZCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvY29uc29sZS8uLi8uLi90
b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xl
Ly4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAtbHV0aWwgLWxydCAgLUwv
dXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1N
RCAtTUYgLm1haW4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
SS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290
L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gY2xp
ZW50L21haW4ubyBjbGllbnQvbWFpbi5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGNs
aWVudC9tYWluLm8gLW8geGVuY29uc29sZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvY29uc29s
ZS8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC9yb290L3hlbi00LjIuMC90b29s
cy9jb25zb2xlLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAgIC1ML3Vz
ci9wa2cvbGliCi9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwv
L3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIHhlbmNvbnNvbGVkIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvL3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy9j
b25zb2xlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9v
bHMvY29uc29sZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW5jb25z
b2xlIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2JpbgpnbWFrZVsz
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlJwpn
bWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21h
a2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFr
ZSAtQyB4ZW5tb24gaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVubW9uJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAuc2V0bWFzay5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90
b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMv
aW5jbHVkZSAgLWMgLW8gc2V0bWFzay5vIHNldG1hc2suYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgICBzZXRtYXNrLm8gLW8geGVudHJhY2Vfc2V0bWFzayAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMveGVubW9uLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gIC1ML3Vzci9w
a2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC54ZW5iYWtlZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8geGVu
YmFrZWQubyB4ZW5iYWtlZC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIHhlbmJha2Vk
Lm8gLW8geGVuYmFrZWQgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90b29s
cy9saWJ4Yy9saWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgovcm9vdC94ZW4tNC4yLjAv
dG9vbHMveGVubW9uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9y
b290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4
ZW5iYWtlZCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zYmluL3hl
bmJha2VkCi9yb290L3hlbi00LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMvY3Jvc3Mt
aW5zdGFsbCAtbTA3NTUgLXAgeGVudHJhY2Vfc2V0bWFzayAgL3Jvb3QveGVuLTQuMi4wL2Rp
c3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbi94ZW50cmFjZV9zZXRtYXNrCi9yb290L3hlbi00
LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAg
eGVubW9uLnB5ICAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zYmlu
L3hlbm1vbi5weQovcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVubW9uLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwv
dXNyL3hlbjQyL3NoYXJlL2RvYy94ZW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8u
Li8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBSRUFETUUgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUvZG9jL3hlbi9SRUFETUUueGVubW9u
CmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bm1vbicKZ21ha2VbMl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMnCmdtYWtlWzJdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29s
cycKZ21ha2UgLUMgeGVuc3RhdCBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0JwpnbWFrZVs0XTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2UgLUMgbGli
eGVuc3RhdCBpbnN0YWxsCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy94ZW5zdGF0L2xpYnhlbnN0YXQnCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW5zdGF0Lm8uZCAtZm5vLW9wdGltaXpl
LXNpYmxpbmctY2FsbHMgIC1mUElDIC1Jc3JjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bnN0YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdGF0L2xpYnhlbnN0YXQvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L2xpYnhlbnN0YXQvLi4vLi4vLi4vdG9vbHMv
eGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9saWJ4ZW5zdGF0Ly4u
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9s
aWJ4ZW5zdGF0Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHNyYy94ZW5zdGF0Lm8g
c3JjL3hlbnN0YXQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW5zdGF0X25ldGJzZC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtZlBJQyAtSXNyYyAtSS9yb290L3hlbi00LjIuMC90
b29scy94ZW5zdGF0L2xpYnhlbnN0YXQvLi4vLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9saWJ4ZW5zdGF0Ly4uLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9saWJ4ZW5zdGF0Ly4uLy4uLy4u
L3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVu
c3RhdC8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bnN0YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBzcmMveGVu
c3RhdF9uZXRic2QubyBzcmMveGVuc3RhdF9uZXRic2QuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CnNyYy94ZW5zdGF0X25ldGJzZC5jOjc5OjEyOiB3YXJuaW5nOiAncmVhZF9hdHRyaWJ1dGVz
X3ZiZCcgZGVmaW5lZCBidXQgbm90IHVzZWQKYXIgcmMgc3JjL2xpYnhlbnN0YXQuYSBzcmMv
eGVuc3RhdC5vIHNyYy94ZW5zdGF0X25ldGJzZC5vCnJhbmxpYiBzcmMvbGlieGVuc3RhdC5h
CmdjYyAgICAtV2wsLXNvbmFtZSAtV2wsbGlieGVuc3RhdC5zby4wIC1zaGFyZWQgLW8gc3Jj
L2xpYnhlbnN0YXQuc28uMC4wIFwKICAgIHNyYy94ZW5zdGF0Lm8gc3JjL3hlbnN0YXRfbmV0
YnNkLm8gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC8uLi8uLi8u
Li90b29scy94ZW5zdG9yZS9saWJ4ZW5zdG9yZS5zbyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RhdC9saWJ4ZW5zdGF0Ly4uLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
IC1ML3Vzci9wa2cvbGliCmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAuMCBzcmMvbGlieGVuc3Rh
dC5zby4wCmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAgc3JjL2xpYnhlbnN0YXQuc28KL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9jcm9z
cy1pbnN0YWxsIC1tMDY0NCAtcCBzcmMveGVuc3RhdC5oIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0
YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBz
cmMvbGlieGVuc3RhdC5hIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2xpYi9saWJ4ZW5zdGF0LmEKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVu
c3RhdC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCBzcmMvbGlieGVu
c3RhdC5zby4wLjAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGli
CmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAuMCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9saWIvbGlieGVuc3RhdC5zby4wCmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliL2xpYnhlbnN0YXQu
c28KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RhdC9saWJ4ZW5zdGF0JwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scy94ZW5zdGF0JwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2UgLUMgeGVudG9wIGluc3Rh
bGwKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L3hlbnN0YXQveGVudG9wJwpnY2MgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54ZW50b3AuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1ER0ND
X1BSSU5URiAtV2FsbCAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQv
eGVudG9wLy4uLy4uLy4uL3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC9zcmMgLURIT1NUX05l
dEJTRCAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L3hlbnRvcC8uLi8uLi8uLi90
b29scyAgICAgIHhlbnRvcC5jICAtV2wsLXJwYXRoLWxpbms9L3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbnN0YXQveGVudG9wLy4uLy4uLy4uL3Rvb2xzL2xpYnhjIC1XbCwtcnBhdGgtbGlu
az0vcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC94ZW50b3AvLi4vLi4vLi4vdG9vbHMv
eGVuc3RvcmUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQveGVudG9wLy4uLy4uLy4u
L3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC9zcmMvbGlieGVuc3RhdC5zbyAtbGN1cnNlcyAg
LW8geGVudG9wCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L3hlbnRvcC8uLi8uLi8u
Li90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0
L3hlbnRvcC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW50b3Ag
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbi94ZW50b3AKL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQveGVudG9wLy4uLy4uLy4uL3Rvb2xzL2Nyb3Nz
LWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNy
L3hlbjQyL3NoYXJlL21hbi9tYW4xCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L3hl
bnRvcC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW50b3AuMSAv
cm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zaGFyZS9tYW4vbWFuMS94
ZW50b3AuMQpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy94ZW5zdGF0L3hlbnRvcCcKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2VbMl06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBFbnRlcmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMgbGliYWlvIGluc3RhbGwK
Z21ha2VbM106IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YmFpbycKZ21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYmFpby9zcmMnCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4g
LWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlvX3F1ZXVlX2luaXQu
b2wgaW9fcXVldWVfaW5pdC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAt
SS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlvX3F1ZXVlX3Jl
bGVhc2Uub2wgaW9fcXVldWVfcmVsZWFzZS5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxl
cyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlv
X3F1ZXVlX3dhaXQub2wgaW9fcXVldWVfd2FpdC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRm
aWxlcyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1v
IGlvX3F1ZXVlX3J1bi5vbCBpb19xdWV1ZV9ydW4uYwpnY2MgLW5vc3RkbGliIC1ub3N0YXJ0
ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAt
byBpb19nZXRldmVudHMub2wgaW9fZ2V0ZXZlbnRzLmMKZ2NjIC1ub3N0ZGxpYiAtbm9zdGFy
dGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9taXQtZnJhbWUtcG9pbnRlciAtTzIgLWZQSUMgLWMg
LW8gaW9fc3VibWl0Lm9sIGlvX3N1Ym1pdC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxl
cyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlv
X2NhbmNlbC5vbCBpb19jYW5jZWwuYwpnY2MgLW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdh
bGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAtbyBpb19zZXR1
cC5vbCBpb19zZXR1cC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4g
LWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlvX2Rlc3Ryb3kub2wg
aW9fZGVzdHJveS5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4gLWcg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIHJhd19zeXNjYWxsLm9sIHJh
d19zeXNjYWxsLmMKZ2NjIC1ub3N0ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtTzIgLWZQSUMgLWMgLW8gY29tcGF0LTBfMS5vbCBjb21w
YXQtMF8xLmMKcm0gLWYgbGliYWlvLmEKYXIgciBsaWJhaW8uYSBpb19xdWV1ZV9pbml0Lm9s
IGlvX3F1ZXVlX3JlbGVhc2Uub2wgaW9fcXVldWVfd2FpdC5vbCBpb19xdWV1ZV9ydW4ub2wg
aW9fZ2V0ZXZlbnRzLm9sIGlvX3N1Ym1pdC5vbCBpb19jYW5jZWwub2wgaW9fc2V0dXAub2wg
aW9fZGVzdHJveS5vbCByYXdfc3lzY2FsbC5vbCBjb21wYXQtMF8xLm9sCmFyOiBjcmVhdGlu
ZyBsaWJhaW8uYQpyYW5saWIgbGliYWlvLmEKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0
YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAt
YyAtbyBpb19xdWV1ZV9pbml0Lm9zIGlvX3F1ZXVlX2luaXQuYwpnY2MgLXNoYXJlZCAtbm9z
dGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIg
LU8yIC1mUElDIC1jIC1vIGlvX3F1ZXVlX3JlbGVhc2Uub3MgaW9fcXVldWVfcmVsZWFzZS5j
CmdjYyAtc2hhcmVkIC1ub3N0ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtTzIgLWZQSUMgLWMgLW8gaW9fcXVldWVfd2FpdC5vcyBpb19x
dWV1ZV93YWl0LmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwg
LUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAtbyBpb19xdWV1ZV9y
dW4ub3MgaW9fcXVldWVfcnVuLmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0YXJ0Zmls
ZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAtbyBp
b19nZXRldmVudHMub3MgaW9fZ2V0ZXZlbnRzLmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1u
b3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJ
QyAtYyAtbyBpb19zdWJtaXQub3MgaW9fc3VibWl0LmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGli
IC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAt
ZlBJQyAtYyAtbyBpb19jYW5jZWwub3MgaW9fY2FuY2VsLmMKZ2NjIC1zaGFyZWQgLW5vc3Rk
bGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1P
MiAtZlBJQyAtYyAtbyBpb19zZXR1cC5vcyBpb19zZXR1cC5jCmdjYyAtc2hhcmVkIC1ub3N0
ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9taXQtZnJhbWUtcG9pbnRlciAt
TzIgLWZQSUMgLWMgLW8gaW9fZGVzdHJveS5vcyBpb19kZXN0cm95LmMKZ2NjIC1zaGFyZWQg
LW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1PMiAtZlBJQyAtYyAtbyByYXdfc3lzY2FsbC5vcyByYXdfc3lzY2FsbC5jCmdjYyAt
c2hhcmVkIC1ub3N0ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9taXQtZnJh
bWUtcG9pbnRlciAtTzIgLWZQSUMgLWMgLW8gY29tcGF0LTBfMS5vcyBjb21wYXQtMF8xLmMK
Z2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21p
dC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtV2wsLS12ZXJzaW9uLXNjcmlwdD1saWJhaW8u
bWFwIC1XbCwtc29uYW1lPWxpYmFpby5zby4xIC1vIGxpYmFpby5zby4xLjAuMSBpb19xdWV1
ZV9pbml0Lm9zIGlvX3F1ZXVlX3JlbGVhc2Uub3MgaW9fcXVldWVfd2FpdC5vcyBpb19xdWV1
ZV9ydW4ub3MgaW9fZ2V0ZXZlbnRzLm9zIGlvX3N1Ym1pdC5vcyBpb19jYW5jZWwub3MgaW9f
c2V0dXAub3MgaW9fZGVzdHJveS5vcyByYXdfc3lzY2FsbC5vcyBjb21wYXQtMF8xLm9zIApn
bWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJh
aW8vc3JjJwpnbWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy9saWJhaW8nCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMnCmdtYWtlIC1DIGJsa3RhcDIgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMicKZ21ha2VbNF06IEVu
dGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDInCmdtYWtl
IC1DIGluY2x1ZGUgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi9pbmNsdWRlJwovcm9vdC94ZW4tNC4yLjAvdG9v
bHMvYmxrdGFwMi9pbmNsdWRlLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0w
NzU1IC1wIC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1
ZGUKZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
YmxrdGFwMi9pbmNsdWRlJwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy9ibGt0YXAyJwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMicKZ21ha2UgLUMgbHZtIGluc3RhbGwKZ21h
a2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3Rh
cDIvbHZtJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAubHZtLXV0aWwuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLXVudXNlZCAtSS4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAgLWMgLW8gbHZt
LXV0aWwubyBsdm0tdXRpbC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ21ha2VbNV06IExlYXZp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi9sdm0nCmdtYWtl
WzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDIn
CmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9i
bGt0YXAyJwpnbWFrZSAtQyB2aGQgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi92aGQnCmdtYWtlWzZdOiBFbnRl
cmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ibGt0YXAyL3ZoZCcKZ21h
a2UgLUMgbGliIGFsbApnbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvYmxrdGFwMi92aGQvbGliJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlidmhkLm8uZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9T
T1VSQ0UgLWZQSUMgLWcgIC1jIC1vIGxpYnZoZC5vIGxpYnZoZC5jICAtSS91c3IvcGtnL2lu
Y2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LmxpYnZoZC1qb3VybmFsLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJy
b3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9TT1VSQ0UgLWZQSUMgLWcg
IC1jIC1vIGxpYnZoZC1qb3VybmFsLm8gbGlidmhkLWpvdXJuYWwuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC52aGQtdXRpbC1jb2FsZXNjZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAt
V2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElD
IC1nICAtYyAtbyB2aGQtdXRpbC1jb2FsZXNjZS5vIHZoZC11dGlsLWNvYWxlc2NlLmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAudmhkLXV0aWwtY3JlYXRlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9TT1VS
Q0UgLWZQSUMgLWcgIC1jIC1vIHZoZC11dGlsLWNyZWF0ZS5vIHZoZC11dGlsLWNyZWF0ZS5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLWZpbGwuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NP
VVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtZmlsbC5vIHZoZC11dGlsLWZpbGwuYyAg
LUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09M
U19fIC1NTUQgLU1GIC52aGQtdXRpbC1tb2RpZnkuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NP
VVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtbW9kaWZ5Lm8gdmhkLXV0aWwtbW9kaWZ5
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5f
VE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtcXVlcnkuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05V
X1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtcXVlcnkubyB2aGQtdXRpbC1xdWVy
eS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVO
X1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLXJlYWQuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05V
X1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtcmVhZC5vIHZoZC11dGlsLXJlYWQu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1yZXBhaXIuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05V
X1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtcmVwYWlyLm8gdmhkLXV0aWwtcmVw
YWlyLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtcmVzaXplLm8uZCAtZm5vLW9wdGltaXpl
LXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1E
X0dOVV9TT1VSQ0UgLWZQSUMgLWcgIC1jIC1vIHZoZC11dGlsLXJlc2l6ZS5vIHZoZC11dGls
LXJlc2l6ZS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLXJldmVydC5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVk
ZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAtbyB2aGQtdXRpbC1yZXZlcnQubyB2aGQt
dXRpbC1yZXZlcnQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1zZXQtZmllbGQuby5kIC1m
bm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4u
L2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtc2V0LWZp
ZWxkLm8gdmhkLXV0aWwtc2V0LWZpZWxkLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtc25h
cHNob3Quby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVu
dXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhk
LXV0aWwtc25hcHNob3QubyB2aGQtdXRpbC1zbmFwc2hvdC5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnZo
ZC11dGlsLXNjYW4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMg
LW8gdmhkLXV0aWwtc2Nhbi5vIHZoZC11dGlsLXNjYW4uYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQt
dXRpbC1jaGVjay5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAt
byB2aGQtdXRpbC1jaGVjay5vIHZoZC11dGlsLWNoZWNrLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudmhk
LXV0aWwtdXVpZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAt
byB2aGQtdXRpbC11dWlkLm8gdmhkLXV0aWwtdXVpZC5jICAtSS91c3IvcGtnL2luY2x1ZGUK
Z2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnJlbGF0
aXZlLXBhdGguby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25v
LXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8g
cmVsYXRpdmUtcGF0aC5vIHJlbGF0aXZlLXBhdGguYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdj
YyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5hdG9taWNp
by5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2Vk
IC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAtbyBhdG9taWNp
by5vIGF0b21pY2lvLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQphciByYyBsaWJ2aGQuYSBsaWJ2
aGQubyBsaWJ2aGQtam91cm5hbC5vIHZoZC11dGlsLWNvYWxlc2NlLm8gdmhkLXV0aWwtY3Jl
YXRlLm8gdmhkLXV0aWwtZmlsbC5vIHZoZC11dGlsLW1vZGlmeS5vIHZoZC11dGlsLXF1ZXJ5
Lm8gdmhkLXV0aWwtcmVhZC5vIHZoZC11dGlsLXJlcGFpci5vIHZoZC11dGlsLXJlc2l6ZS5v
IHZoZC11dGlsLXJldmVydC5vIHZoZC11dGlsLXNldC1maWVsZC5vIHZoZC11dGlsLXNuYXBz
aG90Lm8gdmhkLXV0aWwtc2Nhbi5vIHZoZC11dGlsLWNoZWNrLm8gdmhkLXV0aWwtdXVpZC5v
IHJlbGF0aXZlLXBhdGgubyBhdG9taWNpby5vIC4uLy4uL2x2bS9sdm0tdXRpbC5vCmdjYyAg
LURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ2
aGQub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVu
dXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQSUMgLWMg
LW8gbGlidmhkLm9waWMgbGlidmhkLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElD
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlidmhkLWpv
dXJuYWwub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25v
LXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQSUMg
LWMgLW8gbGlidmhkLWpvdXJuYWwub3BpYyBsaWJ2aGQtam91cm5hbC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
ZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18g
LU1NRCAtTUYgLnZoZC11dGlsLWNvYWxlc2NlLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9T
T1VSQ0UgLWZQSUMgLWcgIC1mUElDIC1jIC1vIHZoZC11dGlsLWNvYWxlc2NlLm9waWMgdmhk
LXV0aWwtY29hbGVzY2UuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1jcmVhdGUu
b3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNl
ZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQSUMgLWMgLW8g
dmhkLXV0aWwtY3JlYXRlLm9waWMgdmhkLXV0aWwtY3JlYXRlLmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAudmhkLXV0aWwtZmlsbC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1m
UElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1maWxsLm9waWMgdmhkLXV0aWwtZmlsbC5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9f
WEVOX1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLW1vZGlmeS5vcGljLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVk
ZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1tb2RpZnku
b3BpYyB2aGQtdXRpbC1tb2RpZnkuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1x
dWVyeS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
dW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAt
YyAtbyB2aGQtdXRpbC1xdWVyeS5vcGljIHZoZC11dGlsLXF1ZXJ5LmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1n
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAt
TU1EIC1NRiAudmhkLXV0aWwtcmVhZC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNh
bGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNF
IC1mUElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1yZWFkLm9waWMgdmhkLXV0aWwtcmVh
ZC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLXJlcGFpci5vcGljLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5j
bHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1yZXBh
aXIub3BpYyB2aGQtdXRpbC1yZXBhaXIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQ
SUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRp
bC1yZXNpemUub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQ
SUMgLWMgLW8gdmhkLXV0aWwtcmVzaXplLm9waWMgdmhkLXV0aWwtcmVzaXplLmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtcmV2ZXJ0Lm9waWMuZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dO
VV9TT1VSQ0UgLWZQSUMgLWcgIC1mUElDIC1jIC1vIHZoZC11dGlsLXJldmVydC5vcGljIHZo
ZC11dGlsLXJldmVydC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZu
by1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLXNldC1maWVs
ZC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51
c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAt
byB2aGQtdXRpbC1zZXQtZmllbGQub3BpYyB2aGQtdXRpbC1zZXQtZmllbGQuYyAgLUkvdXNy
L3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09M
U19fIC1NTUQgLU1GIC52aGQtdXRpbC1zbmFwc2hvdC5vcGljLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9H
TlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1zbmFwc2hvdC5vcGlj
IHZoZC11dGlsLXNuYXBzaG90LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtc2Nh
bi5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51
c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAt
byB2aGQtdXRpbC1zY2FuLm9waWMgdmhkLXV0aWwtc2Nhbi5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnZoZC11dGlsLWNoZWNrLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9TT1VSQ0UgLWZQ
SUMgLWcgIC1mUElDIC1jIC1vIHZoZC11dGlsLWNoZWNrLm9waWMgdmhkLXV0aWwtY2hlY2su
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC11dWlkLm9waWMuZCAtZm5vLW9wdGlt
aXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRl
IC1EX0dOVV9TT1VSQ0UgLWZQSUMgLWcgIC1mUElDIC1jIC1vIHZoZC11dGlsLXV1aWQub3Bp
YyB2aGQtdXRpbC11dWlkLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAucmVsYXRpdmUtcGF0aC5v
cGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2Vk
IC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAtbyBy
ZWxhdGl2ZS1wYXRoLm9waWMgcmVsYXRpdmUtcGF0aC5jICAtSS91c3IvcGtnL2luY2x1ZGUK
Z2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LmF0b21pY2lvLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3Ig
LVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9TT1VSQ0UgLWZQSUMgLWcgIC1m
UElDIC1jIC1vIGF0b21pY2lvLm9waWMgYXRvbWljaW8uYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC5sdm0tdXRpbC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9y
IC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAt
ZlBJQyAtYyAtbyAuLi8uLi9sdm0vbHZtLXV0aWwub3BpYyAuLi8uLi9sdm0vbHZtLXV0aWwu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAtV2wsLXNvbmFtZSxsaWJ2aGQuc28uMS4wIC1z
aGFyZWQgXAoJICAgLW8gbGlidmhkLnNvLjEuMC4wIGxpYnZoZC5vcGljIGxpYnZoZC1qb3Vy
bmFsLm9waWMgdmhkLXV0aWwtY29hbGVzY2Uub3BpYyB2aGQtdXRpbC1jcmVhdGUub3BpYyB2
aGQtdXRpbC1maWxsLm9waWMgdmhkLXV0aWwtbW9kaWZ5Lm9waWMgdmhkLXV0aWwtcXVlcnku
b3BpYyB2aGQtdXRpbC1yZWFkLm9waWMgdmhkLXV0aWwtcmVwYWlyLm9waWMgdmhkLXV0aWwt
cmVzaXplLm9waWMgdmhkLXV0aWwtcmV2ZXJ0Lm9waWMgdmhkLXV0aWwtc2V0LWZpZWxkLm9w
aWMgdmhkLXV0aWwtc25hcHNob3Qub3BpYyB2aGQtdXRpbC1zY2FuLm9waWMgdmhkLXV0aWwt
Y2hlY2sub3BpYyB2aGQtdXRpbC11dWlkLm9waWMgcmVsYXRpdmUtcGF0aC5vcGljIGF0b21p
Y2lvLm9waWMgLi4vLi4vbHZtL2x2bS11dGlsLm9waWMgCmxuIC1zZiBsaWJ2aGQuc28uMS4w
LjAgbGlidmhkLnNvLjEuMApsbiAtc2YgbGlidmhkLnNvLjEuMCBsaWJ2aGQuc28KZ21ha2Vb
N106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi92
aGQvbGliJwpnbWFrZVs2XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy9ibGt0YXAyL3ZoZCcKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xT
X18gLU1NRCAtTUYgLnZoZC11dGlsLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi9pbmNsdWRlIC1EX0dOVV9TT1VSQ0UgLWZQSUMg
IC1jIC1vIHZoZC11dGlsLm8gdmhkLXV0aWwuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
ICAtbyB2aGQtdXRpbCB2aGQtdXRpbC5vIC1MbGliIC1sdmhkCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXBkYXRlLm8uZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi9pbmNsdWRl
IC1EX0dOVV9TT1VSQ0UgLWZQSUMgIC1jIC1vIHZoZC11cGRhdGUubyB2aGQtdXBkYXRlLmMg
IC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgLW8gdmhkLXVwZGF0ZSB2aGQtdXBkYXRlLm8g
LUxsaWIgLWx2aGQKZ21ha2Ugc3ViZGlycy1pbnN0YWxsCmdtYWtlWzZdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ibGt0YXAyL3ZoZCcKZ21ha2VbN106
IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDIvdmhk
JwpnbWFrZSAtQyBsaWIgaW5zdGFsbApnbWFrZVs4XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi92aGQvbGliJwovcm9vdC94ZW4tNC4yLjAv
dG9vbHMvYmxrdGFwMi92aGQvbGliLy4uLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwg
LWQgLW0wNzU1IC1wIC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2xpYgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi92aGQvbGliLy4uLy4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGxpYnZoZC5hIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxr
dGFwMi92aGQvbGliLy4uLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1w
IGxpYnZoZC5zby4xLjAuMCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40
Mi9saWIKbG4gLXNmIGxpYnZoZC5zby4xLjAuMCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0
YWxsL3Vzci94ZW40Mi9saWIvbGlidmhkLnNvLjEuMApsbiAtc2YgbGlidmhkLnNvLjEuMCAv
cm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWIvbGlidmhkLnNvCmdt
YWtlWzhdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3Rh
cDIvdmhkL2xpYicKZ21ha2VbN106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvYmxrdGFwMi92aGQnCmdtYWtlWzZdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDIvdmhkJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
YmxrdGFwMi92aGQvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAg
LXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbgovcm9vdC94
ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi92aGQvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFs
bCAtbTA3NTUgLXAgdmhkLXV0aWwgdmhkLXVwZGF0ZSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
bnN0YWxsL3Vzci94ZW40Mi9zYmluCmdtYWtlWzVdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDIvdmhkJwpnbWFrZVs0XTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ibGt0YXAyJwpnbWFrZVszXTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ibGt0YXAyJwpnbWFrZVsyXTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVu
dGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyB4ZW5i
YWNrZW5kZCBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy94ZW5iYWNrZW5kZCcKZ2NjIC1EWEVOX1NDUklQVF9ESVI9IlwiL3Vz
ci94ZW40Mi9ldGMveGVuL3NjcmlwdHNcIiIgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54ZW5iYWNrZW5kZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbmJhY2tlbmRkLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbmJhY2tlbmRk
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHhlbmJhY2tlbmRkLm8geGVuYmFja2VuZGQu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICB4ZW5iYWNrZW5kZC5vIC1vIHhlbmJhY2tl
bmRkIC9yb290L3hlbi00LjIuMC90b29scy94ZW5iYWNrZW5kZC8uLi8uLi90b29scy94ZW5z
dG9yZS9saWJ4ZW5zdG9yZS5zbyAgLUwvdXNyL3BrZy9saWIKL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbmJhY2tlbmRkLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1w
IC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hlbmJhY2tlbmRkLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0w
NzU1IC1wIHhlbmJhY2tlbmRkIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hl
bjQyL3NiaW4KZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMveGVuYmFja2VuZGQnCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMnCmdtYWtlIC1DIGxpYmZzaW1hZ2UgaW5zdGFsbApnbWFrZVszXTog
RW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZScK
Z21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YmZzaW1hZ2UnCmdtYWtlIC1DIGNvbW1vbiBpbnN0YWxsCmdtYWtlWzVdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL2NvbW1vbicKTWFr
ZWZpbGU6MzU6IHdhcm5pbmc6IG92ZXJyaWRpbmcgcmVjaXBlIGZvciB0YXJnZXQgYGNsZWFu
Jwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS9jb21tb24vLi4vLi4vLi4vdG9v
bHMvbGliZnNpbWFnZS9SdWxlcy5tazoyNTogd2FybmluZzogaWdub3Jpbmcgb2xkIHJlY2lw
ZSBmb3IgdGFyZ2V0IGBjbGVhbicKTWFrZWZpbGU6MzU6IHdhcm5pbmc6IG92ZXJyaWRpbmcg
cmVjaXBlIGZvciB0YXJnZXQgYGRpc3RjbGVhbicKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YmZzaW1hZ2UvY29tbW9uLy4uLy4uLy4uL3Rvb2xzL2xpYmZzaW1hZ2UvUnVsZXMubWs6MjU6
IHdhcm5pbmc6IGlnbm9yaW5nIG9sZCByZWNpcGUgZm9yIHRhcmdldCBgZGlzdGNsZWFuJwpn
Y2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
ZnNpbWFnZS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV25vLXVua25v
d24tcHJhZ21hcyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL2NvbW1vbi8u
Li8uLi8uLi90b29scy9saWJmc2ltYWdlL2NvbW1vbi8gLURGU0lNQUdFX0ZTRElSPVwiL3Vz
ci94ZW40Mi9saWIvZnNcIiAtV2Vycm9yIC1EX0dOVV9TT1VSQ0UgLXB0aHJlYWQgIC1mUElD
IC1jIC1vIGZzaW1hZ2Uub3BpYyBmc2ltYWdlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2Mg
IC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuZnNp
bWFnZV9wbHVnaW4ub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVduby11
bmtub3duLXByYWdtYXMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS9jb21t
b24vLi4vLi4vLi4vdG9vbHMvbGliZnNpbWFnZS9jb21tb24vIC1ERlNJTUFHRV9GU0RJUj1c
Ii91c3IveGVuNDIvbGliL2ZzXCIgLVdlcnJvciAtRF9HTlVfU09VUkNFIC1wdGhyZWFkICAt
ZlBJQyAtYyAtbyBmc2ltYWdlX3BsdWdpbi5vcGljIGZzaW1hZ2VfcGx1Z2luLmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAuZnNpbWFnZV9ncnViLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgIC1Xbm8tdW5rbm93bi1wcmFnbWFzIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYmZzaW1hZ2UvY29tbW9uLy4uLy4uLy4uL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLyAt
REZTSU1BR0VfRlNESVI9XCIvdXNyL3hlbjQyL2xpYi9mc1wiIC1XZXJyb3IgLURfR05VX1NP
VVJDRSAtcHRocmVhZCAgLWZQSUMgLWMgLW8gZnNpbWFnZV9ncnViLm9waWMgZnNpbWFnZV9n
cnViLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1wdGhyZWFkIC1XbCwtc29uYW1lIC1X
bCxsaWJmc2ltYWdlLnNvLjEuMCAtc2hhcmVkIC1vIGxpYmZzaW1hZ2Uuc28uMS4wLjAgZnNp
bWFnZS5vcGljIGZzaW1hZ2VfcGx1Z2luLm9waWMgZnNpbWFnZV9ncnViLm9waWMgCmxuIC1z
ZiBsaWJmc2ltYWdlLnNvLjEuMC4wIGxpYmZzaW1hZ2Uuc28uMS4wCmxuIC1zZiBsaWJmc2lt
YWdlLnNvLjEuMCBsaWJmc2ltYWdlLnNvCi9yb290L3hlbi00LjIuMC90b29scy9saWJmc2lt
YWdlL2NvbW1vbi8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAv
cm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWIKL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2luY2x1ZGUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLy4uLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGxpYmZzaW1hZ2Uuc28uMS4wLjAg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCmxuIC1zZiBsaWJm
c2ltYWdlLnNvLjEuMC4wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2xpYi9saWJmc2ltYWdlLnNvLjEuMApsbiAtc2YgbGliZnNpbWFnZS5zby4xLjAgL3Jvb3Qv
eGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliL2xpYmZzaW1hZ2Uuc28KL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLy4uLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLW0wNjQ0IC1wIGZzaW1hZ2UuaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
bnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlCi9yb290L3hlbi00LjIuMC90b29scy9saWJmc2lt
YWdlL2NvbW1vbi8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBmc2lt
YWdlX3BsdWdpbi5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2lu
Y2x1ZGUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLy4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIGZzaW1hZ2VfZ3J1Yi5oIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUKZ21ha2VbNV06IExlYXZp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS9jb21tb24n
CmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YmZzaW1hZ2UnCmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC90b29scy9saWJmc2ltYWdlJwpnbWFrZSAtQyB1ZnMgaW5zdGFsbApnbWFrZVs1XTogRW50
ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS91ZnMn
CmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC5mc3lzX3Vmcy5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV25vLXVu
a25vd24tcHJhZ21hcyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL3Vmcy8u
Li8uLi8uLi90b29scy9saWJmc2ltYWdlL2NvbW1vbi8gLURGU0lNQUdFX0ZTRElSPVwiL3Vz
ci94ZW40Mi9saWIvZnNcIiAtV2Vycm9yIC1EX0dOVV9TT1VSQ0UgIC1mUElDIC1jIC1vIGZz
eXNfdWZzLm9waWMgZnN5c191ZnMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtTC4u
L2NvbW1vbi8gLXNoYXJlZCAtbyBmc2ltYWdlLnNvIGZzeXNfdWZzLm9waWMgLWxmc2ltYWdl
ICAgLUwvdXNyL3BrZy9saWIKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvdWZz
Ly4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00
LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9mcy91ZnMKL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYmZzaW1hZ2UvdWZzLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0w
NzU1IC1wIGZzaW1hZ2Uuc28gL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVu
NDIvbGliL2ZzL3VmcwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scy9saWJmc2ltYWdlL3VmcycKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZScKZ21ha2VbNF06IEVudGVyaW5n
IGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UnCmdtYWtlIC1D
IHJlaXNlcmZzIGluc3RhbGwKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvcmVpc2VyZnMnCmdjYyAgLURQSUMgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5mc3lzX3JlaXNlcmZzLm9w
aWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1Xbm8tdW5rbm93bi1wcmFnbWFz
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvcmVpc2VyZnMvLi4vLi4vLi4v
dG9vbHMvbGliZnNpbWFnZS9jb21tb24vIC1ERlNJTUFHRV9GU0RJUj1cIi91c3IveGVuNDIv
bGliL2ZzXCIgLVdlcnJvciAtRF9HTlVfU09VUkNFICAtZlBJQyAtYyAtbyBmc3lzX3JlaXNl
cmZzLm9waWMgZnN5c19yZWlzZXJmcy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1M
Li4vY29tbW9uLyAtc2hhcmVkIC1vIGZzaW1hZ2Uuc28gZnN5c19yZWlzZXJmcy5vcGljIC1s
ZnNpbWFnZSAgIC1ML3Vzci9wa2cvbGliCi9yb290L3hlbi00LjIuMC90b29scy9saWJmc2lt
YWdlL3JlaXNlcmZzLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1w
IC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9mcy9yZWlzZXJm
cwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS9yZWlzZXJmcy8uLi8uLi8uLi90
b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCBmc2ltYWdlLnNvIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9mcy9yZWlzZXJmcwpnbWFrZVs1XTogTGVh
dmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL3JlaXNl
cmZzJwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29s
cy9saWJmc2ltYWdlJwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGliZnNpbWFnZScKZ21ha2UgLUMgaXNvOTY2MCBpbnN0YWxsCmdtYWtl
WzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJmc2lt
YWdlL2lzbzk2NjAnCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09M
U19fIC1NTUQgLU1GIC5mc3lzX2lzbzk2NjAub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVduby11bmtub3duLXByYWdtYXMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGliZnNpbWFnZS9pc285NjYwLy4uLy4uLy4uL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLyAt
REZTSU1BR0VfRlNESVI9XCIvdXNyL3hlbjQyL2xpYi9mc1wiIC1XZXJyb3IgLURfR05VX1NP
VVJDRSAgLWZQSUMgLWMgLW8gZnN5c19pc285NjYwLm9waWMgZnN5c19pc285NjYwLmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgLUwuLi9jb21tb24vIC1zaGFyZWQgLW8gZnNpbWFn
ZS5zbyBmc3lzX2lzbzk2NjAub3BpYyAtbGZzaW1hZ2UgICAtTC91c3IvcGtnL2xpYgovcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS9pc285NjYwLy4uLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwv
dXNyL3hlbjQyL2xpYi9mcy9pc285NjYwCi9yb290L3hlbi00LjIuMC90b29scy9saWJmc2lt
YWdlL2lzbzk2NjAvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgZnNp
bWFnZS5zbyAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWIvZnMv
aXNvOTY2MApnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy9saWJmc2ltYWdlL2lzbzk2NjAnCmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UnCmdtYWtlWzRdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlJwpnbWFrZSAtQyBm
YXQgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGliZnNpbWFnZS9mYXQnCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5mc3lzX2ZhdC5vcGljLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtV25vLXVua25vd24tcHJhZ21hcyAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJmc2ltYWdlL2ZhdC8uLi8uLi8uLi90b29scy9saWJmc2ltYWdlL2NvbW1v
bi8gLURGU0lNQUdFX0ZTRElSPVwiL3Vzci94ZW40Mi9saWIvZnNcIiAtV2Vycm9yIC1EX0dO
VV9TT1VSQ0UgIC1mUElDIC1jIC1vIGZzeXNfZmF0Lm9waWMgZnN5c19mYXQuYyAgLUkvdXNy
L3BrZy9pbmNsdWRlCmdjYyAgICAtTC4uL2NvbW1vbi8gLXNoYXJlZCAtbyBmc2ltYWdlLnNv
IGZzeXNfZmF0Lm9waWMgLWxmc2ltYWdlICAgLUwvdXNyL3BrZy9saWIKL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYmZzaW1hZ2UvZmF0Ly4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwg
LWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xp
Yi9mcy9mYXQKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvZmF0Ly4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGZzaW1hZ2Uuc28gL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliL2ZzL2ZhdApnbWFrZVs1XTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL2ZhdCcKZ21h
a2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNp
bWFnZScKZ21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYmZzaW1hZ2UnCmdtYWtlIC1DIHpmcyBpbnN0YWxsCmdtYWtlWzVdOiBFbnRlcmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL3pmcycKZ2Nj
ICAtRFBJQyAtREZTWVNfWkZTIC1ERlNJTUFHRSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJmc2ltYWdlL3pmcy8uLi8uLi8uLi90b29scy9saWJmc2ltYWdlL3pmcyAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnpmc19sempiLm9waWMuZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1Xbm8tdW5rbm93bi1wcmFnbWFzIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvemZzLy4uLy4uLy4uL3Rvb2xzL2xpYmZzaW1h
Z2UvY29tbW9uLyAtREZTSU1BR0VfRlNESVI9XCIvdXNyL3hlbjQyL2xpYi9mc1wiIC1XZXJy
b3IgLURfR05VX1NPVVJDRSAgLWZQSUMgLWMgLW8gemZzX2x6amIub3BpYyB6ZnNfbHpqYi5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtREZTWVNfWkZTIC1ERlNJTUFHRSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL3pmcy8uLi8uLi8uLi90b29scy9s
aWJmc2ltYWdlL3pmcyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnpmc19zaGEyNTYub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdu
by11bmtub3duLXByYWdtYXMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS96
ZnMvLi4vLi4vLi4vdG9vbHMvbGliZnNpbWFnZS9jb21tb24vIC1ERlNJTUFHRV9GU0RJUj1c
Ii91c3IveGVuNDIvbGliL2ZzXCIgLVdlcnJvciAtRF9HTlVfU09VUkNFICAtZlBJQyAtYyAt
byB6ZnNfc2hhMjU2Lm9waWMgemZzX3NoYTI1Ni5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2Nj
ICAtRFBJQyAtREZTWVNfWkZTIC1ERlNJTUFHRSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJmc2ltYWdlL3pmcy8uLi8uLi8uLi90b29scy9saWJmc2ltYWdlL3pmcyAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnpmc19mbGV0Y2hlci5vcGljLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV25vLXVua25vd24tcHJhZ21hcyAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL3pmcy8uLi8uLi8uLi90b29scy9saWJm
c2ltYWdlL2NvbW1vbi8gLURGU0lNQUdFX0ZTRElSPVwiL3Vzci94ZW40Mi9saWIvZnNcIiAt
V2Vycm9yIC1EX0dOVV9TT1VSQ0UgIC1mUElDIC1jIC1vIHpmc19mbGV0Y2hlci5vcGljIHpm
c19mbGV0Y2hlci5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtREZTWVNfWkZT
IC1ERlNJTUFHRSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJmc2ltYWdlL3pmcy8uLi8u
Li8uLi90b29scy9saWJmc2ltYWdlL3pmcyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLmZzaV96ZnMub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLVduby11bmtub3duLXByYWdtYXMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
ZnNpbWFnZS96ZnMvLi4vLi4vLi4vdG9vbHMvbGliZnNpbWFnZS9jb21tb24vIC1ERlNJTUFH
RV9GU0RJUj1cIi91c3IveGVuNDIvbGliL2ZzXCIgLVdlcnJvciAtRF9HTlVfU09VUkNFICAt
ZlBJQyAtYyAtbyBmc2lfemZzLm9waWMgZnNpX3pmcy5jICAtSS91c3IvcGtnL2luY2x1ZGUK
Z2NjICAtRFBJQyAtREZTWVNfWkZTIC1ERlNJTUFHRSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJmc2ltYWdlL3pmcy8uLi8uLi8uLi90b29scy9saWJmc2ltYWdlL3pmcyAtTzEgLWZu
by1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmZzeXNfemZzLm9waWMuZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1Xbm8tdW5rbm93bi1wcmFnbWFzIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvemZzLy4uLy4uLy4uL3Rvb2xzL2xpYmZz
aW1hZ2UvY29tbW9uLyAtREZTSU1BR0VfRlNESVI9XCIvdXNyL3hlbjQyL2xpYi9mc1wiIC1X
ZXJyb3IgLURfR05VX1NPVVJDRSAgLWZQSUMgLWMgLW8gZnN5c196ZnMub3BpYyBmc3lzX3pm
cy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1MLi4vY29tbW9uLyAtc2hhcmVkIC1v
IGZzaW1hZ2Uuc28gemZzX2x6amIub3BpYyB6ZnNfc2hhMjU2Lm9waWMgemZzX2ZsZXRjaGVy
Lm9waWMgZnNpX3pmcy5vcGljIGZzeXNfemZzLm9waWMgLWxmc2ltYWdlICAgLUwvdXNyL3Br
Zy9saWIKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UvemZzLy4uLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2xpYi9mcy96ZnMKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZz
aW1hZ2UvemZzLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGZzaW1h
Z2Uuc28gL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliL2ZzL3pm
cwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9s
aWJmc2ltYWdlL3pmcycKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGliZnNpbWFnZScKZ21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYmZzaW1hZ2UnCmdtYWtlIC1DIHhmcyBpbnN0YWxs
CmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9s
aWJmc2ltYWdlL3hmcycKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLmZzeXNfeGZzLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1Xbm8tdW5rbm93bi1wcmFnbWFzIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YmZzaW1hZ2UveGZzLy4uLy4uLy4uL3Rvb2xzL2xpYmZzaW1hZ2UvY29tbW9uLyAtREZTSU1B
R0VfRlNESVI9XCIvdXNyL3hlbjQyL2xpYi9mc1wiIC1XZXJyb3IgLURfR05VX1NPVVJDRSAg
LWZQSUMgLWMgLW8gZnN5c194ZnMub3BpYyBmc3lzX3hmcy5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAgIC1MLi4vY29tbW9uLyAtc2hhcmVkIC1vIGZzaW1hZ2Uuc28gZnN5c194ZnMu
b3BpYyAtbGZzaW1hZ2UgICAtTC91c3IvcGtnL2xpYgovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGliZnNpbWFnZS94ZnMvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUg
LXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliL2ZzL3hmcwov
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGliZnNpbWFnZS94ZnMvLi4vLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA3NTUgLXAgZnNpbWFnZS5zbyAvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
nstall/usr/xen42/lib/fs/xfs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/xfs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C ext2fs install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/ext2fs'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_ext2fs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/ext2fs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_ext2fs.opic fsys_ext2fs.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_ext2fs.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/ext2fs/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/ext2fs
/root/xen-4.2.0/tools/libfsimage/ext2fs/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/ext2fs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/ext2fs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
gmake[2]: Entering directory `/root/xen-4.2.0/tools'
set -ex; \
if test -d /root/xen-4.2.0/tools/../tools/qemu-xen-traditional; then \
	mkdir -p qemu-xen-traditional-dir; \
else \
	export GIT=git; \
	/root/xen-4.2.0/tools/../scripts/git-checkout.sh /root/xen-4.2.0/tools/../tools/qemu-xen-traditional xen-4.2.0 qemu-xen-traditional-dir; \
fi
+ test -d '/root/xen-4.2.0/tools/../tools/qemu-xen-traditional'
+ mkdir -p qemu-xen-traditional-dir
set -e; \
	    export PREFIX="/usr/xen42"; export XEN_SCRIPT_DIR="/usr/xen42/etc/xen/scripts"; export XEN_ROOT="/root/xen-4.2.0/tools/.."; \
	cd qemu-xen-traditional-dir; \
	/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-setup \
	--extra-cflags="" \
	; \
	gmake install
sdl-config: not found
sdl-config: not found
Install prefix    /usr/xen42
BIOS directory    /usr/xen42/share/qemu
binary directory  /usr/xen42/bin
Manual directory  /usr/xen42/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path       /root/xen-4.2.0/tools/qemu-xen-traditional
C compiler        gcc
Host C compiler   gcc
ARCH_CFLAGS       -m64
make              gmake
install           install
host CPU          x86_64
host big endian   no
target list       i386-softmmu x86_64-softmmu arm-softmmu cris-softmmu m68k-softmmu mips-softmmu mipsel-softmmu mips64-softmmu mips64el-softmmu ppc-softmmu ppcemb-softmmu ppc64-softmmu sh4-softmmu sh4eb-softmmu sparc-softmmu sparc64-bsd-user 
gprof enabled     no
sparse enabled    no
profiler          no
static build      no
-Werror enabled   no
SDL support       no
OpenGL support    
curses support    no
mingw32 support   no
Audio drivers     oss
Extra audio cards ac97 es1370 sb16
Mixer emulation   no
VNC TLS support   no
kqemu support     no
brlapi support    no
Documentation     no
NPTL support      no
vde support       no
AIO support       yes
Install blobs     yes
KVM support       no - (linux/kvm.h: No such file or directory)
fdt support       no
The error log from compiling the libSDL test is: 
/tmp/qemu-conf--10839-.c:1:17: fatal error: SDL.h: No such file or directory
compilation terminated.
qemu successfuly configured for Xen qemu-dm build
-msse2: not found
gmake[3]: Entering directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir'
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:61: === pciutils-dev package not found - missing /usr/include/pci
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:62: === PCI passthrough capability has been disabled
  CC    qemu-img.o
  [...]
  LINK  qemu-img-xen
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
  CC    readline.o
  [...]
  CC    wm8750.o
In file included from /root/xen-4.2.0/tools/qemu-xen-traditional/hw/wm8750.c:12:0:
/root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.h:153:10: warning: redundant redeclaration of 'popcount'
/usr/include/strings.h:57:14: note: previous declaration of 'popcount' was here
  [... the same 'popcount' warning pair also for net.c, savevm.c and audio/*.c ...]
  CC    vnc.o
  CC    d3des.o
  AR    libqemu_common.a
-msse2: not found
gmake[4]: Entering directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir/i386-dm'
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:61: === pciutils-dev package not found - missing /usr/include/pci
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:62: === PCI passthrough capability has been disabled
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:61: === pciutils-dev package not found - missing /usr/include/pci
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:62: === PCI passthrough capability has been disabled
  CC    i386-dm/vl.o
  [... 'popcount' warning pair also for vl.c, monitor.c, sb16.c, es1370.c, ac97.c, pcspk.c and pc.c ...]
  CC    i386-dm/eepro100.o
/root/xen-4.2.0/tools/qemu-xen-traditional/hw/eepro100.c: In function 'eepro100_read4':
/root/xen-4.2.0/tools/qemu-xen-traditional/hw/eepro100.c:1207:14: warning: 'val' may be used uninitialized in this function
/root/xen-4.2.0/tools/qemu-xen-traditional/hw/eepro100.c: In function 'eepro100_read2':
/root/xen-4.2.0/tools/qemu-xen-traditional/hw/eepro100.c:1184:14: warning: 'val' may be used uninitialized in this function
/root/xen-4.2.0/tools/qemu-xen-traditional/hw/eepro100.c: In function 'eepro100_read1':
/root/xen-4.2.0/tools/qemu-xen-traditional/hw/eepro100.c:1139:13: warning: 'val' may be used uninitialized in this function
  [...]
  AR    i386-dm/libqemu.a
  LINK  i386-dm/qemu-dm
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir/i386-dm'
mkdir -p "/root/xen-4.2.0/dist/install/usr/xen42/bin"
/root/xen-4.2.0/tools/../tools/cross-install -m 755 -s qemu-img-xen  "/root/xen-4.2.0/dist/install/usr/xen42/bin"
mkdir -p "/root/xen-4.2.0/dist/install/usr/xen42/share/xen/qemu"
set -e; for x in bios.bin vgabios.bin vgabios-cirrus.bin ppc_rom.bin video.x openbios-sparc32 openbios-sparc64 openbios-ppc pxe-ne2k_pci.bin pxe-rtl8139.bin pxe-pcnet.bin pxe-e1000.bin bamboo.dtb; do \
	/root/xen-4.2.0/tools/../tools/cross-install -m 644 /root/xen-4.2.0/tools/qemu-xen-traditional/pc-bios/$x "/root/xen-4.2.0/dist/install/usr/xen42/share/xen/qemu"; \
done
mkdir -p "/root/xen-4.2.0/dist/install/usr/xen42/share/xen/qemu/keymaps"
set -e; for x in da     en-gb  et  fr     fr-ch  is  lt  modifiers  no  pt-br  sv ar      de     en-us  fi  fr-be  hr     it  lv  nl         pl  ru     th common  de-ch  es     fo  fr-ca  hu     ja  mk  nl-be      pt  sl     tr; do \
	/root/xen-4.2.0/tools/../tools/cross-install -m 644 /root/xen-4.2.0/tools/qemu-xen-traditional/keymaps/$x "/root/xen-4.2.0/dist/install/usr/xen42/share/xen/qemu/keymaps"; \
done
for d in i386-dm; do \
gmake -C $d install || exit 1 ; \
        done
-msse2: not found
gmake[4]: Entering directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir/i386-dm'
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:61: === pciutils-dev package not found - missing /usr/include/pci
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:62: === PCI passthrough capability has been disabled
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:61: === pciutils-dev package not found - missing /usr/include/pci
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:62: === PCI passthrough capability has been disabled
/root/xen-4.2.0/tools/../tools/cross-install -d -m0755 -p "/root/xen-4.2.0/dist/install//usr/xen42/libexec"
/root/xen-4.2.0/tools/../tools/cross-install -d -m0755 -p "/root/xen-4.2.0/dist/install//usr/xen42/etc/xen/scripts"
/root/xen-4.2.0/tools/../tools/cross-install -m0755 -p /root/xen-4.2.0/tools/../tools/qemu-xen-traditional/i386-dm/qemu-ifup-NetBSD "/root/xen-4.2.0/dist/install//usr/xen42/etc/xen/scripts/qemu-ifup"
/root/xen-4.2.0/tools/../tools/cross-install -m 755 -s qemu-dm "/root/xen-4.2.0/dist/install/usr/xen42/libexec"
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir/i386-dm'
gmake[3]: Leaving directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir'
gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
gmake[2]: Entering directory `/root/xen-4.2.0/tools'
if test -d /root/xen-4.2.0/tools/../tools/qemu-xen ; then \
	mkdir -p qemu-xen-dir; \
else \
	export GIT=git; \
	/root/xen-4.2.0/tools/../scripts/git-checkout.sh /root/xen-4.2.0/tools/../tools/qemu-xen qemu-xen-4.2.0 qemu-xen-dir ; \
fi
if test -d /root/xen-4.2.0/tools/../tools/qemu-xen ; then \
	source=/root/xen-4.2.0/tools/../tools/qemu-xen; \
else \
	source=.; \
fi; \
cd qemu-xen-dir; \
$source/configure --enable-xen --target-list=i386-softmmu \
	--source-path=$source \
	--extra-cflags="-I/root/xen-4.2.0/tools/../tools/include \
	-I/root/xen-4.2.0/tools/../tools/libxc \
	-I/root/xen-4.2.0/tools/../tools/xenstore \
	-I/root/xen-4.2.0/tools/../tools/xenstore/compat \
	" \
	--extra-ldflags="-L/root/xen-4.2.0/tools/../tools/libxc \
	-L/root/xen-4.2.0/tools/../tools/xenstore" \
	--bindir=/usr/xen42/libexec \
	--datadir=/usr/xen42/share/qemu-xen \
	--disable-kvm \
	--python=python2.7 \
	; \
gmake all
Install prefix    /usr/local
BIOS directory    /usr/xen42/share/qemu-xen
binary directory  /usr/xen42/libexec
library directory /usr/local/lib
include directory /usr/local/include
config directory  /usr/local/etc
Manual directory  /usr/local/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path       /root/xen-4.2.0/tools/qemu-xen
C compiler        gcc
Host C compiler   gcc
CFLAGS            -O2 -g 
QEMU_CFLAGS       -m64 -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -I/root/xen-4.2.0/tools/../tools/include 	-I/root/xen-4.2.0/tools/../tools/libxc 	-I/root/xen-4.2.0/tools/../tools/xenstore 	-I/root/xen-4.2.0/tools/../tools/xenstore/compat 	  -fstack-protector-all -Wendif-labels -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits
LDFLAGS           -Wl,--warn-common -m64 -g -L/root/xen-4.2.0/tools/../tools/libxc 	-L/root/xen-4.2.0/tools/../tools/xenstore 
make              gmake
install           install
python            python2.7
smbd              /usr/sbin/smbd
host CPU          x86_64
host big endian   no
target list       i386-softmmu
tcg debug enabled no
Mon debug enabled no
gprof enabled     no
sparse enabled    no
strip binaries    yes
profiler          no
static build      no
-Werror enabled   no
SDL support       no
curses support    no
curl support      yes
check support     no
mingw32 support   no
Audio drivers     oss
Extra audio cards ac97 es1370 sb16 hda
Block whitelist   
Mixer emulation   no
VNC support       yes
VNC TLS support   no
VNC SASL support  no
VNC JPEG support  no
VNC PNG support   no
VNC thread        no
xen support       yes
brlapi support    no
bluez  support    no
Documentation     yes
NPTL support      no
GUEST_BASE        yes
PIE               no
vde support       no
Linux AIO support no
ATTR/XATTR support yes
Install blobs     yes
KVM support       no
TCG interpreter   no
fdt support       no
preadv support    yes
fdatasync         yes
madvise           yes
posix_madvise     yes
uuid support      no
vhost-net support no
Trace backend     nop
Trace output file trace-<pid>
spice support     no
rbd support       no
xfsctl support    no
nss used          no
usb net redir     no
OpenGL support    no
libiscsi support  no
build guest agent yes
gmake[3]: Entering directory `/root/xen-4.2.0/tools/qemu-xen-dir'
  GEN   i386-softmmu/config-devices.mak
  GEN   config-all-devices.mak
gmake[3]: Leaving directory `/root/xen-4.2.0/tools/qemu-xen-dir'
gmake[3]: Entering directory `/root/xen-4.2.0/tools/qemu-xen-dir'
  GEN   qemu-options.texi
  GEN   qemu-monitor.texi
  GEN   qemu-img-cmds.texi
  GEN   qemu-doc.html
/root/xen-4.2.0/tools/qemu-xen/qemu-doc.texi:7: warning: unrecognized encoding name `UTF-8'.
  GEN   qemu-tech.html
/root/xen-4.2.0/tools/qemu-xen/qemu-tech.texi:7: warning: unrecognized encoding name `UTF-8'.
  GEN   qemu.1
  GEN   qemu-img.1
  GEN   qemu-nbd.8
  GEN   QMP/qmp-commands.txt
  GEN   config-host.h
  GEN   trace.h
  GEN   qemu-options.def
  GEN   qmp-commands.h
  GEN   qapi-types.h
  GEN   qapi-visit.h
  GEN   /root/xen-4.2.0/tools/qemu-xen-dir/qapi-generated/qga-qapi-types.h
  GEN   /root/xen-4.2.0/tools/qemu-xen-dir/qapi-generated/qga-qapi-visit.h
  GEN   /root/xen-4.2.0/tools/qemu-xen-dir/qapi-generated/qga-qmp-commands.h
  CC    qemu-ga.o
  [...]
  LINK  qemu-ga
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
  CC    qemu-nbd.o
  [...]
  LINK  qemu-nbd
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
  GEN   qemu-img-cmds.h
  CC    qemu-img.o
  LINK  qemu-img
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
  CC    qemu-io.o
  CC    cmd.o
  LINK  qemu-io
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
  CC    libhw64/vl.o
  [...]
  CC    libdis/i386-dis.o
  GEN   config-target.h
  CC    i386-softmmu/arch_init.o
  CC    i386
LXNvZnRtbXUvY3B1cy5vCiAgR0VOICAgaTM4Ni1zb2Z0bW11L2htcC1jb21tYW5kcy5oCiAg
R0VOICAgaTM4Ni1zb2Z0bW11L3FtcC1jb21tYW5kcy1vbGQuaAogIENDICAgIGkzODYtc29m
dG1tdS9tb25pdG9yLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvbWFjaGluZS5vCiAgQ0MgICAg
aTM4Ni1zb2Z0bW11L2dkYnN0dWIubwogIENDICAgIGkzODYtc29mdG1tdS9iYWxsb29uLm8K
ICBDQyAgICBpMzg2LXNvZnRtbXUvaW9wb3J0Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdmly
dGlvLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdmlydGlvLWJsay5vCiAgQ0MgICAgaTM4Ni1z
b2Z0bW11L3ZpcnRpby1iYWxsb29uLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdmlydGlvLW5l
dC5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L3ZpcnRpby1zZXJpYWwtYnVzLm8KICBDQyAgICBp
Mzg2LXNvZnRtbXUvdmhvc3RfbmV0Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUva3ZtLXN0dWIu
bwogIENDICAgIGkzODYtc29mdG1tdS9tZW1vcnkubwogIENDICAgIGkzODYtc29mdG1tdS94
ZW4tYWxsLm8KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuL3hlbi1hbGwuYzogSW4g
ZnVuY3Rpb24gJ3hlbl9zeW5jX2RpcnR5X2JpdG1hcCc6Ci9yb290L3hlbi00LjIuMC90b29s
cy9xZW11LXhlbi94ZW4tYWxsLmM6NDc5OjEzOiB3YXJuaW5nOiBpbXBsaWNpdCBkZWNsYXJh
dGlvbiBvZiBmdW5jdGlvbiAnZmZzbCcKICBDQyAgICBpMzg2LXNvZnRtbXUveGVuX21hY2hp
bmVfcHYubwogIENDICAgIGkzODYtc29mdG1tdS94ZW5fZG9tYWluYnVpbGQubwogIENDICAg
IGkzODYtc29mdG1tdS94ZW4tbWFwY2FjaGUubwogIENDICAgIGkzODYtc29mdG1tdS9leGVj
Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdHJhbnNsYXRlLWFsbC5vCiAgQ0MgICAgaTM4Ni1z
b2Z0bW11L2NwdS1leGVjLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdHJhbnNsYXRlLm8KICBD
QyAgICBpMzg2LXNvZnRtbXUvdGNnL3RjZy5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L3RjZy9v
cHRpbWl6ZS5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L2ZwdS9zb2Z0ZmxvYXQubwogIENDICAg
IGkzODYtc29mdG1tdS9vcF9oZWxwZXIubwogIENDICAgIGkzODYtc29mdG1tdS9oZWxwZXIu
bwogIENDICAgIGkzODYtc29mdG1tdS9jcHVpZC5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L2Rp
c2FzLm8KICBDQyAgICBpMzg2LXNvZnRtbXUveGVuX3BsYXRmb3JtLm8KICBDQyAgICBpMzg2
LXNvZnRtbXUvdmdhLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvbWMxNDY4MThydGMubwogIEND
ICAgIGkzODYtc29mdG1tdS9wYy5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L2NpcnJ1c192Z2Eu
bwogIENDICAgIGkzODYtc29mdG1tdS9zZ2EubwogIENDICAgIGkzODYtc29mdG1tdS9hcGlj
Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUvaW9hcGljLm8KICBDQyAgICBpMzg2LXNvZnRtbXUv
cGlpeF9wY2kubwogIENDICAgIGkzODYtc29mdG1tdS92bXBvcnQubwogIENDICAgIGkzODYt
c29mdG1tdS9kZXZpY2UtaG90cGx1Zy5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L3BjaS1ob3Rw
bHVnLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvc21iaW9zLm8KICBDQyAgICBpMzg2LXNvZnRt
bXUvd2R0X2liNzAwLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvZGVidWdjb24ubwogIENDICAg
IGkzODYtc29mdG1tdS9tdWx0aWJvb3QubwogIENDICAgIGkzODYtc29mdG1tdS9wY19waWl4
Lm8KICBMSU5LICBpMzg2LXNvZnRtbXUvcWVtdS1zeXN0ZW0taTM4NgovdXNyL2xpYi9saWJj
LnNvOiB3YXJuaW5nOiBtdWx0aXBsZSBjb21tb24gb2YgYGVudmlyb24nCi91c3IvbGliL2Ny
dDAubzogd2FybmluZzogcHJldmlvdXMgY29tbW9uIGlzIGhlcmUKICBBUyAgICBvcHRpb25y
b20vbXVsdGlib290Lm8KICBCdWlsZGluZyBvcHRpb25yb20vbXVsdGlib290LmltZwogIEJ1
aWxkaW5nIG9wdGlvbnJvbS9tdWx0aWJvb3QucmF3CiAgU2lnbmluZyBvcHRpb25yb20vbXVs
dGlib290LmJpbgogIEFTICAgIG9wdGlvbnJvbS9saW51eGJvb3QubwogIEJ1aWxkaW5nIG9w
dGlvbnJvbS9saW51eGJvb3QuaW1nCiAgQnVpbGRpbmcgb3B0aW9ucm9tL2xpbnV4Ym9vdC5y
YXcKICBTaWduaW5nIG9wdGlvbnJvbS9saW51eGJvb3QuYmluCnJtIG11bHRpYm9vdC5vIGxp
bnV4Ym9vdC5yYXcgbGludXhib290LmltZyBtdWx0aWJvb3QucmF3IG11bHRpYm9vdC5pbWcg
bGludXhib290Lm8KZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvcWVtdS14ZW4tZGlyJwpjZCBxZW11LXhlbi1kaXI7IFwKZ21ha2UgaW5zdGFs
bApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
cWVtdS14ZW4tZGlyJwppbnN0YWxsIC1kIC1tIDA3NTUgIi9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL2RvYy9xZW11IgppbnN0YWxsIC1jIC1tIDA2NDQg
cWVtdS1kb2MuaHRtbCAgcWVtdS10ZWNoLmh0bWwgIi9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL2xvY2FsL3NoYXJlL2RvYy9xZW11IgppbnN0YWxsIC1kIC1tIDA3NTUgIi9y
b290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL21hbi9tYW4xIgpp
bnN0YWxsIC1jIC1tIDA2NDQgcWVtdS4xIHFlbXUtaW1nLjEgIi9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL21hbi9tYW4xIgppbnN0YWxsIC1kIC1tIDA3
NTUgIi9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL21hbi9t
YW44IgppbnN0YWxsIC1jIC1tIDA2NDQgcWVtdS1uYmQuOCAiL3Jvb3QveGVuLTQuMi4wL2Rp
c3QvaW5zdGFsbC91c3IvbG9jYWwvc2hhcmUvbWFuL21hbjgiCmluc3RhbGwgLWQgLW0gMDc1
NSAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IvbG9jYWwvZXRjL3FlbXUiCmlu
c3RhbGwgLWMgLW0gMDY0NCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4vc3lzY29u
Zmlncy90YXJnZXQvdGFyZ2V0LXg4Nl82NC5jb25mICIvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
bnN0YWxsL3Vzci9sb2NhbC9ldGMvcWVtdSIKaW5zdGFsbCAtZCAtbSAwNzU1ICIvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWJleGVjIgppbnN0YWxsIC1jIC1t
IDA3NTUgIHFlbXUtZ2EgcWVtdS1uYmQgcWVtdS1pbWcgcWVtdS1pbyAgIi9yb290L3hlbi00
LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYmV4ZWMiCmluc3RhbGwgLWQgLW0gMDc1
NSAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUvcWVtdS14
ZW4iCnNldCAtZTsgZm9yIHggaW4gYmlvcy5iaW4gc2dhYmlvcy5iaW4gdmdhYmlvcy5iaW4g
dmdhYmlvcy1jaXJydXMuYmluIHZnYWJpb3Mtc3RkdmdhLmJpbiB2Z2FiaW9zLXZtd2FyZS5i
aW4gdmdhYmlvcy1xeGwuYmluIHBwY19yb20uYmluIG9wZW5iaW9zLXNwYXJjMzIgb3BlbmJp
b3Mtc3BhcmM2NCBvcGVuYmlvcy1wcGMgcHhlLWUxMDAwLnJvbSBweGUtZWVwcm8xMDAucm9t
IHB4ZS1uZTJrX3BjaS5yb20gcHhlLXBjbmV0LnJvbSBweGUtcnRsODEzOS5yb20gcHhlLXZp
cnRpby5yb20gYmFtYm9vLmR0YiBwZXRhbG9naXgtczNhZHNwMTgwMC5kdGIgcGV0YWxvZ2l4
LW1sNjA1LmR0YiBtcGM4NTQ0ZHMuZHRiIG11bHRpYm9vdC5iaW4gbGludXhib290LmJpbiBz
MzkwLXppcGwucm9tIHNwYXByLXJ0YXMuYmluIHNsb2YuYmluIHBhbGNvZGUtY2xpcHBlcjsg
ZG8gXAoJaW5zdGFsbCAtYyAtbSAwNjQ0IC9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhl
bi9wYy1iaW9zLyR4ICIvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9z
aGFyZS9xZW11LXhlbiI7IFwKZG9uZQppbnN0YWxsIC1kIC1tIDA3NTUgIi9yb290L3hlbi00
LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NoYXJlL3FlbXUteGVuL2tleW1hcHMiCnNl
dCAtZTsgZm9yIHggaW4gZGEgICAgIGVuLWdiICBldCAgZnIgICAgIGZyLWNoICBpcyAgbHQg
IG1vZGlmaWVycyAgbm8gIHB0LWJyICBzdiBhciAgICAgIGRlICAgICBlbi11cyAgZmkgIGZy
LWJlICBociAgICAgaXQgIGx2ICBubCAgICAgICAgIHBsICBydSAgICAgdGggY29tbW9uICBk
ZS1jaCAgZXMgICAgIGZvICBmci1jYSAgaHUgICAgIGphICBtayAgbmwtYmUgICAgICBwdCAg
c2wgICAgIHRyOyBkbyBcCglpbnN0YWxsIC1jIC1tIDA2NDQgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3FlbXUteGVuL3BjLWJpb3Mva2V5bWFwcy8keCAiL3Jvb3QveGVuLTQuMi4wL2Rpc3Qv
aW5zdGFsbC91c3IveGVuNDIvc2hhcmUvcWVtdS14ZW4va2V5bWFwcyI7IFwKZG9uZQpmb3Ig
ZCBpbiBpMzg2LXNvZnRtbXU7IGRvIFwKZ21ha2UgLUMgJGQgaW5zdGFsbCB8fCBleGl0IDEg
OyBcCiAgICAgICAgZG9uZQpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tZGlyL2kzODYtc29mdG1tdScKaW5zdGFsbCAtbSA3
NTUgcWVtdS1zeXN0ZW0taTM4NiAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3Iv
eGVuNDIvbGliZXhlYyIKc3RyaXAgIi9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNy
L3hlbjQyL2xpYmV4ZWMvcWVtdS1zeXN0ZW0taTM4NiIKZ21ha2VbNF06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tZGlyL2kzODYtc29mdG1t
dScKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
cWVtdS14ZW4tZGlyJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzJwpnbWFrZSAtQyB4ZW5wbWQgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVucG1kJwpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVucG1kLm8uZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVucG1kLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bnBtZC8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4ZW5wbWQubyB4ZW5wbWQuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgICB4ZW5wbWQubyAtbyB4ZW5wbWQgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL3hlbnBtZC8uLi8uLi90b29scy94ZW5zdG9yZS9saWJ4ZW5zdG9yZS5zbyAg
LUwvdXNyL3BrZy9saWIKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnBtZC8uLi8uLi90b29s
cy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0
YWxsL3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5wbWQvLi4vLi4v
dG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgeGVucG1kIC9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVucG1kJwpnbWFrZVsyXTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVj
dG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBsaWJ4bCBpbnN0YWxsCmdt
YWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bCcKL3Vzci9wa2cvYmluL3BlcmwgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUveGVuLWV4dGVybmFsL2JzZC1zeXMtcXVldWUtaC1zZWRkZXJ5IC9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlL3hlbi1leHRl
cm5hbC9ic2Qtc3lzLXF1ZXVlLmggLS1wcmVmaXg9bGlieGwgPl9saWJ4bF9saXN0LmgubmV3
CmlmICEgY21wIC1zIF9saWJ4bF9saXN0LmgubmV3IF9saWJ4bF9saXN0Lmg7IHRoZW4gbXYg
LWYgX2xpYnhsX2xpc3QuaC5uZXcgX2xpYnhsX2xpc3QuaDsgZWxzZSBybSAtZiBfbGlieGxf
bGlzdC5oLm5ldzsgZmkKcm0gLWYgX3BhdGhzLmgudG1wLnRtcDsgIGVjaG8gIlNCSU5ESVI9
XCIvdXNyL3hlbjQyL3NiaW5cIiIgPj5fcGF0aHMuaC50bXAudG1wOyAgZWNobyAiQklORElS
PVwiL3Vzci94ZW40Mi9iaW5cIiIgPj5fcGF0aHMuaC50bXAudG1wOyAgZWNobyAiTElCRVhF
Qz1cIi91c3IveGVuNDIvbGliZXhlY1wiIiA+Pl9wYXRocy5oLnRtcC50bXA7ICBlY2hvICJM
SUJESVI9XCIvdXNyL3hlbjQyL2xpYlwiIiA+Pl9wYXRocy5oLnRtcC50bXA7ICBlY2hvICJT
SEFSRURJUj1cIi91c3IveGVuNDIvc2hhcmVcIiIgPj5fcGF0aHMuaC50bXAudG1wOyAgZWNo
byAiUFJJVkFURV9CSU5ESVI9XCIvdXNyL3hlbjQyL2JpblwiIiA+Pl9wYXRocy5oLnRtcC50
bXA7ICBlY2hvICJYRU5GSVJNV0FSRURJUj1cIi91c3IveGVuNDIvbGliL3hlbi9ib290XCIi
ID4+X3BhdGhzLmgudG1wLnRtcDsgIGVjaG8gIlhFTl9DT05GSUdfRElSPVwiL3Vzci94ZW40
Mi9ldGMveGVuXCIiID4+X3BhdGhzLmgudG1wLnRtcDsgIGVjaG8gIlhFTl9TQ1JJUFRfRElS
PVwiL3Vzci94ZW40Mi9ldGMveGVuL3NjcmlwdHNcIiIgPj5fcGF0aHMuaC50bXAudG1wOyAg
ZWNobyAiWEVOX0xPQ0tfRElSPVwiL3Vzci94ZW40Mi92YXIvbGliXCIiID4+X3BhdGhzLmgu
dG1wLnRtcDsgIGVjaG8gIlhFTl9SVU5fRElSPVwiL3Vzci94ZW40Mi92YXIvcnVuL3hlblwi
IiA+Pl9wYXRocy5oLnRtcC50bXA7ICBlY2hvICJYRU5fUEFHSU5HX0RJUj1cIi91c3IveGVu
NDIvdmFyL2xpYi94ZW4veGVucGFnaW5nXCIiID4+X3BhdGhzLmgudG1wLnRtcDsgCWlmICEg
Y21wIC1zIF9wYXRocy5oLnRtcC50bXAgX3BhdGhzLmgudG1wOyB0aGVuIG12IC1mIF9wYXRo
cy5oLnRtcC50bXAgX3BhdGhzLmgudG1wOyBlbHNlIHJtIC1mIF9wYXRocy5oLnRtcC50bXA7
IGZpCnNlZCAtZSAicy9cKFtePV0qXCk9XCguKlwpLyNkZWZpbmUgXDEgXDIvZyIgX3BhdGhz
LmgudG1wID5fcGF0aHMuaC4yLnRtcApybSAtZiBfcGF0aHMuaC50bXAKaWYgISBjbXAgLXMg
X3BhdGhzLmguMi50bXAgX3BhdGhzLmg7IHRoZW4gbXYgLWYgX3BhdGhzLmguMi50bXAgX3Bh
dGhzLmg7IGVsc2Ugcm0gLWYgX3BhdGhzLmguMi50bXA7IGZpCi91c3IvcGtnL2Jpbi9wZXJs
IC13IGxpYnhsX3NhdmVfbXNnc19nZW4ucGwgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0Lmgg
Pl9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5oLm5ldwppZiAhIGNtcCAtcyBfbGlieGxfc2F2
ZV9tc2dzX2NhbGxvdXQuaC5uZXcgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0Lmg7IHRoZW4g
bXYgLWYgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmgubmV3IF9saWJ4bF9zYXZlX21zZ3Nf
Y2FsbG91dC5oOyBlbHNlIHJtIC1mIF9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5oLm5ldzsg
ZmkKL3Vzci9wa2cvYmluL3BlcmwgLXcgbGlieGxfc2F2ZV9tc2dzX2dlbi5wbCBfbGlieGxf
c2F2ZV9tc2dzX2hlbHBlci5oID5fbGlieGxfc2F2ZV9tc2dzX2hlbHBlci5oLm5ldwppZiAh
IGNtcCAtcyBfbGlieGxfc2F2ZV9tc2dzX2hlbHBlci5oLm5ldyBfbGlieGxfc2F2ZV9tc2dz
X2hlbHBlci5oOyB0aGVuIG12IC1mIF9saWJ4bF9zYXZlX21zZ3NfaGVscGVyLmgubmV3IF9s
aWJ4bF9zYXZlX21zZ3NfaGVscGVyLmg7IGVsc2Ugcm0gLWYgX2xpYnhsX3NhdmVfbXNnc19o
ZWxwZXIuaC5uZXc7IGZpCnB5dGhvbjIuNyBnZW50eXBlcy5weSBsaWJ4bF90eXBlcy5pZGwg
X19saWJ4bF90eXBlcy5oIF9fbGlieGxfdHlwZXNfanNvbi5oIF9fbGlieGxfdHlwZXMuYwpQ
YXJzaW5nIGxpYnhsX3R5cGVzLmlkbApvdXRwdXR0aW5nIGxpYnhsIHR5cGUgZGVmaW5pdGlv
bnMgdG8gX19saWJ4bF90eXBlcy5oCm91dHB1dHRpbmcgbGlieGwgSlNPTiBkZWZpbml0aW9u
cyB0byBfX2xpYnhsX3R5cGVzX2pzb24uaApvdXRwdXR0aW5nIGxpYnhsIHR5cGUgaW1wbGVt
ZW50YXRpb25zIHRvIF9fbGlieGxfdHlwZXMuYwppZiAhIGNtcCAtcyBfX2xpYnhsX3R5cGVz
LmggX2xpYnhsX3R5cGVzLmg7IHRoZW4gbXYgLWYgX19saWJ4bF90eXBlcy5oIF9saWJ4bF90
eXBlcy5oOyBlbHNlIHJtIC1mIF9fbGlieGxfdHlwZXMuaDsgZmkKaWYgISBjbXAgLXMgX19s
aWJ4bF90eXBlc19qc29uLmggX2xpYnhsX3R5cGVzX2pzb24uaDsgdGhlbiBtdiAtZiBfX2xp
YnhsX3R5cGVzX2pzb24uaCBfbGlieGxfdHlwZXNfanNvbi5oOyBlbHNlIHJtIC1mIF9fbGli
eGxfdHlwZXNfanNvbi5oOyBmaQppZiAhIGNtcCAtcyBfX2xpYnhsX3R5cGVzLmMgX2xpYnhs
X3R5cGVzLmM7IHRoZW4gbXYgLWYgX19saWJ4bF90eXBlcy5jIF9saWJ4bF90eXBlcy5jOyBl
bHNlIHJtIC1mIF9fbGlieGxfdHlwZXMuYzsgZmkKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLl9saWJ4bC5hcGktZm9yLWNoZWNrLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9pbmNsdWRlIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9jb25maWcuaCAgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
aW5jbHVkZSAgIC1jIC1FIGxpYnhsLmggIC1JL3Vzci9wa2cvaW5jbHVkZSBcCgktRExJQlhM
X0VYVEVSTkFMX0NBTExFUlNfT05MWT1MSUJYTF9FWFRFUk5BTF9DQUxMRVJTX09OTFkgXAoJ
Pl9saWJ4bC5hcGktZm9yLWNoZWNrLm5ldwptdiAtZiBfbGlieGwuYXBpLWZvci1jaGVjay5u
ZXcgX2xpYnhsLmFwaS1mb3ItY2hlY2sKL3Vzci9wa2cvYmluL3BlcmwgY2hlY2stbGlieGwt
YXBpLXJ1bGVzIF9saWJ4bC5hcGktZm9yLWNoZWNrCnRvdWNoIGxpYnhsLmFwaS1vawpnY2Mg
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGwuby5kIC1m
bm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxl
bmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMvY29uZmlnLmggICAtYyAtbyB4bC5vIHhsLmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
eGxfY21kaW1wbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMg
LXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGli
eGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgIC1jIC1vIHhsX2NtZGlt
cGwubyB4bF9jbWRpbXBsLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGxfY21kdGFibGUuby5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0
aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvY29uZmlnLmggICAtYyAtbyB4bF9jbWR0YWJsZS5vIHhsX2NtZHRhYmxlLmMg
IC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAueGxfc3hwLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9u
cyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFs
IC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9s
aWJ4bCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAgLWMg
LW8geGxfc3hwLm8geGxfc3hwLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2NmZ195Lm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1s
ZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAgLWMgLW8gbGlieGx1X2Nm
Z195Lm8gbGlieGx1X2NmZ195LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2NmZ19sLm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1s
ZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAgLWMgLW8gbGlieGx1X2Nm
Z19sLm8gbGlieGx1X2NmZ19sLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2NmZy5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVu
Z3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV9jZmcu
byBsaWJ4bHVfY2ZnLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2Rpc2tfbC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV9kaXNrX2wu
byBsaWJ4bHVfZGlza19sLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2Rpc2suby5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0
aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgICAtYyAtbyBsaWJ4bHVfZGlzay5v
IGxpYnhsdV9kaXNrLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X3ZpZi5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1X
bWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV92aWYubyBsaWJ4
bHVfdmlmLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X3BjaS5vLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2lu
Zy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3Jt
YXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV9wY2kubyBsaWJ4bHVfcGNp
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgLXB0aHJlYWQgLVdsLC1zb25hbWUgLVds
LGxpYnhsdXRpbC5zby4xLjAgLXNoYXJlZCAtbyBsaWJ4bHV0aWwuc28uMS4wLjAgbGlieGx1
X2NmZ195Lm8gbGlieGx1X2NmZ19sLm8gbGlieGx1X2NmZy5vIGxpYnhsdV9kaXNrX2wubyBs
aWJ4bHVfZGlzay5vIGxpYnhsdV92aWYubyBsaWJ4bHVfcGNpLm8gICAtTC91c3IvcGtnL2xp
YgpsbiAtc2YgbGlieGx1dGlsLnNvLjEuMC4wIGxpYnhsdXRpbC5zby4xLjAKbG4gLXNmIGxp
YnhsdXRpbC5zby4xLjAgbGlieGx1dGlsLnNvCnB5dGhvbjIuNyBnZW50eXBlcy5weSBsaWJ4
bF90eXBlc19pbnRlcm5hbC5pZGwgX19saWJ4bF90eXBlc19pbnRlcm5hbC5oIF9fbGlieGxf
dHlwZXNfaW50ZXJuYWxfanNvbi5oIF9fbGlieGxfdHlwZXNfaW50ZXJuYWwuYwpQYXJzaW5n
IGxpYnhsX3R5cGVzX2ludGVybmFsLmlkbApvdXRwdXR0aW5nIGxpYnhsIHR5cGUgZGVmaW5p
dGlvbnMgdG8gX19saWJ4bF90eXBlc19pbnRlcm5hbC5oCm91dHB1dHRpbmcgbGlieGwgSlNP
TiBkZWZpbml0aW9ucyB0byBfX2xpYnhsX3R5cGVzX2ludGVybmFsX2pzb24uaApvdXRwdXR0
aW5nIGxpYnhsIHR5cGUgaW1wbGVtZW50YXRpb25zIHRvIF9fbGlieGxfdHlwZXNfaW50ZXJu
YWwuYwppZiAhIGNtcCAtcyBfX2xpYnhsX3R5cGVzX2ludGVybmFsLmggX2xpYnhsX3R5cGVz
X2ludGVybmFsLmg7IHRoZW4gbXYgLWYgX19saWJ4bF90eXBlc19pbnRlcm5hbC5oIF9saWJ4
bF90eXBlc19pbnRlcm5hbC5oOyBlbHNlIHJtIC1mIF9fbGlieGxfdHlwZXNfaW50ZXJuYWwu
aDsgZmkKaWYgISBjbXAgLXMgX19saWJ4bF90eXBlc19pbnRlcm5hbF9qc29uLmggX2xpYnhs
X3R5cGVzX2ludGVybmFsX2pzb24uaDsgdGhlbiBtdiAtZiBfX2xpYnhsX3R5cGVzX2ludGVy
bmFsX2pzb24uaCBfbGlieGxfdHlwZXNfaW50ZXJuYWxfanNvbi5oOyBlbHNlIHJtIC1mIF9f
bGlieGxfdHlwZXNfaW50ZXJuYWxfanNvbi5oOyBmaQppZiAhIGNtcCAtcyBfX2xpYnhsX3R5
cGVzX2ludGVybmFsLmMgX2xpYnhsX3R5cGVzX2ludGVybmFsLmM7IHRoZW4gbXYgLWYgX19s
aWJ4bF90eXBlc19pbnRlcm5hbC5jIF9saWJ4bF90eXBlc19pbnRlcm5hbC5jOyBlbHNlIHJt
IC1mIF9fbGlieGxfdHlwZXNfaW50ZXJuYWwuYzsgZmkKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmZsZXhhcnJheS5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlz
c2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdm
b3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9jb25maWcuaCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBm
bGV4YXJyYXkubyBmbGV4YXJyYXkuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGwubyBsaWJ4bC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLmxpYnhsX2NyZWF0ZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vy
cm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVdu
by1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4g
LWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8g
bGlieGxfY3JlYXRlLm8gbGlieGxfY3JlYXRlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2Mg
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGxfZG0u
by5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16
ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1vIGxpYnhsX2RtLm8gbGlieGxf
ZG0uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0
cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hF
Tl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF9wY2kuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVj
bGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5v
bmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5z
dG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRl
ICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29u
ZmlnLmggIC1jIC1vIGxpYnhsX3BjaS5vIGxpYnhsX3BjaS5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxp
YnhsX2RvbS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
Zm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0
aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfZG9t
Lm8gbGlieGxfZG9tLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGxfZXhlYy5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1X
bWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfZXhlYy5vIGxpYnhsX2V4ZWMuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC5saWJ4bF94c2hlbHAuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRp
b25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVy
YWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
bGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmgg
IC1jIC1vIGxpYnhsX3hzaGVscC5vIGxpYnhsX3hzaGVscC5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxp
YnhsX2RldmljZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMg
LXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGli
eGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxf
ZGV2aWNlLm8gbGlieGxfZGV2aWNlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGxfaW50ZXJuYWwu
by5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16
ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1vIGxpYnhsX2ludGVybmFsLm8g
bGlieGxfaW50ZXJuYWwuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF91dGlscy5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfdXRpbHMubyBsaWJ4bF91dGlscy5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLmxpYnhsX3V1aWQuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFy
YXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxp
dGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9y
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmln
LmggIC1jIC1vIGxpYnhsX3V1aWQubyBsaWJ4bF91dWlkLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGli
eGxfanNvbi5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
Zm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0
aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfanNv
bi5vIGxpYnhsX2pzb24uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF9hb3V0aWxzLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5n
dGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBsaWJ4bF9hb3V0aWxzLm8gbGlieGxfYW91
dGlscy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9f
WEVOX1RPT0xTX18gLU1NRCAtTUYgLmxpYnhsX251bWEuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3Npbmct
ZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0
LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94
ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNs
dWRlICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
Y29uZmlnLmggIC1jIC1vIGxpYnhsX251bWEubyBsaWJ4bF9udW1hLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAubGlieGxfc2F2ZV9jYWxsb3V0Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9u
cyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFs
IC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAt
YyAtbyBsaWJ4bF9zYXZlX2NhbGxvdXQubyBsaWJ4bF9zYXZlX2NhbGxvdXQuYyAgLUkvdXNy
L3BrZy9pbmNsdWRlCi91c3IvcGtnL2Jpbi9wZXJsIC13IGxpYnhsX3NhdmVfbXNnc19nZW4u
cGwgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmMgPl9saWJ4bF9zYXZlX21zZ3NfY2FsbG91
dC5jLm5ldwppZiAhIGNtcCAtcyBfbGlieGxfc2F2ZV9tc2dzX2NhbGxvdXQuYy5uZXcgX2xp
YnhsX3NhdmVfbXNnc19jYWxsb3V0LmM7IHRoZW4gbXYgLWYgX2xpYnhsX3NhdmVfbXNnc19j
YWxsb3V0LmMubmV3IF9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5jOyBlbHNlIHJtIC1mIF9s
aWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5jLm5ldzsgZmkKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLl9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5v
LmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXpl
cm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90
b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gX2xpYnhsX3NhdmVfbXNnc19j
YWxsb3V0Lm8gX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGli
eGxfcW1wLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1m
b3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRo
cmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBsaWJ4bF9xbXAu
byBsaWJ4bF9xbXAuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF9ldmVudC5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1X
bWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfZXZlbnQubyBsaWJ4bF9ldmVudC5jICAt
SS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xT
X18gLU1NRCAtTUYgLmxpYnhsX2Zvcmsuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRp
b25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVy
YWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
bGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmgg
IC1jIC1vIGxpYnhsX2ZvcmsubyBsaWJ4bF9mb3JrLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAub3NkZXBz
Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQt
emVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBvc2RlcHMubyBvc2RlcHMu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC5saWJ4bF9wYXRocy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNs
YXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9u
bGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9p
bmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0
b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25m
aWcuaCAgLWMgLW8gbGlieGxfcGF0aHMubyBsaWJ4bF9wYXRocy5jICAtSS91c3IvcGtnL2lu
Y2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LmxpYnhsX2Jvb3Rsb2FkZXIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdl
cnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1X
bm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUku
IC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2lu
Y2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1v
IGxpYnhsX2Jvb3Rsb2FkZXIubyBsaWJ4bF9ib290bG9hZGVyLmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
bGlieGxfbm9ibGt0YXAyLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJy
b3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25v
LWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAt
ZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBs
aWJ4bF9ub2Jsa3RhcDIubyBsaWJ4bF9ub2Jsa3RhcDIuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4
bF9jcHVpZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
Zm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0
aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfY3B1
aWQubyBsaWJ4bF9jcHVpZC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxpYnhsX3g4Ni5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfeDg2Lm8gbGlieGxfeDg2LmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAubGlieGxfbmV0YnNkLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2Fs
bHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0
aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRl
cmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xz
L2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5o
ICAtYyAtbyBsaWJ4bF9uZXRic2QubyBsaWJ4bF9uZXRic2QuYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5f
bGlieGxfdHlwZXMuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElD
IC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1vIF9saWJ4
bF90eXBlcy5vIF9saWJ4bF90eXBlcy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEg
LWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxpYnhsX2ZsYXNrLm8u
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVy
by1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBsaWJ4bF9mbGFzay5vIGxpYnhs
X2ZsYXNrLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuX2xpYnhsX3R5cGVzX2ludGVybmFsLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5n
dGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBfbGlieGxfdHlwZXNfaW50ZXJuYWwubyBf
bGlieGxfdHlwZXNfaW50ZXJuYWwuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtcHRo
cmVhZCAtV2wsLXNvbmFtZSAtV2wsbGlieGVubGlnaHQuc28uMi4wIC1zaGFyZWQgLW8gbGli
eGVubGlnaHQuc28uMi4wLjAgZmxleGFycmF5Lm8gbGlieGwubyBsaWJ4bF9jcmVhdGUubyBs
aWJ4bF9kbS5vIGxpYnhsX3BjaS5vIGxpYnhsX2RvbS5vIGxpYnhsX2V4ZWMubyBsaWJ4bF94
c2hlbHAubyBsaWJ4bF9kZXZpY2UubyBsaWJ4bF9pbnRlcm5hbC5vIGxpYnhsX3V0aWxzLm8g
bGlieGxfdXVpZC5vIGxpYnhsX2pzb24ubyBsaWJ4bF9hb3V0aWxzLm8gbGlieGxfbnVtYS5v
IGxpYnhsX3NhdmVfY2FsbG91dC5vIF9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5vIGxpYnhs
X3FtcC5vIGxpYnhsX2V2ZW50Lm8gbGlieGxfZm9yay5vIG9zZGVwcy5vIGxpYnhsX3BhdGhz
Lm8gbGlieGxfYm9vdGxvYWRlci5vIGxpYnhsX25vYmxrdGFwMi5vIGxpYnhsX2NwdWlkLm8g
bGlieGxfeDg2Lm8gbGlieGxfbmV0YnNkLm8gX2xpYnhsX3R5cGVzLm8gbGlieGxfZmxhc2su
byBfbGlieGxfdHlwZXNfaW50ZXJuYWwubyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuZ3Vlc3Quc28gL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAtbHV0
aWwgICAtbHlhamwgIC1ML3Vzci9wa2cvbGliCmxuIC1zZiBsaWJ4ZW5saWdodC5zby4yLjAu
MCBsaWJ4ZW5saWdodC5zby4yLjAKbG4gLXNmIGxpYnhlbmxpZ2h0LnNvLjIuMCBsaWJ4ZW5s
aWdodC5zbwpnY2MgICAgLXB0aHJlYWQgLW8geGwgeGwubyB4bF9jbWRpbXBsLm8geGxfY21k
dGFibGUubyB4bF9zeHAubyBsaWJ4bHV0aWwuc28gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xpYnhlbmxpZ2h0LnNvIC1XbCwtcnBhdGgtbGluaz0v
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLVdsLC1ycGF0
aC1saW5rPS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9y
ZSAgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhl
bmN0cmwuc28gLWx5YWpsICAtTC91c3IvcGtnL2xpYgpweXRob24yLjcgZ2VudGVzdC5weSBs
aWJ4bF90eXBlcy5pZGwgdGVzdGlkbC5jLm5ldwpQYXJzaW5nIGxpYnhsX3R5cGVzLmlkbApt
diB0ZXN0aWRsLmMubmV3IHRlc3RpZGwuYwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAudGVzdGlkbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNs
YXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9u
bGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9saWJ4bCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9s
aWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRl
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1j
IC1vIHRlc3RpZGwubyB0ZXN0aWRsLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgLXB0
aHJlYWQgLW8gdGVzdGlkbCB0ZXN0aWRsLm8gbGlieGx1dGlsLnNvIC9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5zbyAtV2wsLXJw
YXRoLWxpbms9L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhj
IC1XbCwtcnBhdGgtbGluaz0vcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMveGVuc3RvcmUgIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9s
aWJ4Yy9saWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgpsZDogd2FybmluZzogbGlieWFq
bC5zby4yLCBuZWVkZWQgYnkgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2xpYnhsL2xpYnhlbmxpZ2h0LnNvLCBub3QgZm91bmQgKHRyeSB1c2luZyAtcnBhdGgg
b3IgLXJwYXRoLWxpbmspCi9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9saWJ4bC9saWJ4ZW5saWdodC5zbzogdW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9w
YXJzZScKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xp
YnhlbmxpZ2h0LnNvOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2NvbXBsZXRlX3Bh
cnNlJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwvbGli
eGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfZ2VuX251bGwnCi9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdo
dC5zbzogdW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9nZW5fYXJyYXlfb3BlbicKL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xpYnhlbmxpZ2h0
LnNvOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2dlbl9zdHJpbmcnCi9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5zbzog
dW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9nZW5fbWFwX2Nsb3NlJwovcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVu
ZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfZ2VuX2dldF9idWYnCi9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5zbzogdW5kZWZp
bmVkIHJlZmVyZW5jZSB0byBgeWFqbF9mcmVlJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVu
Y2UgdG8gYHlhamxfZ2VuX2FsbG9jJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8g
YHlhamxfZ2VuX2FycmF5X2Nsb3NlJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8g
YHlhamxfZ2VuX21hcF9vcGVuJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlh
amxfZ2V0X2Vycm9yJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
bGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfZnJl
ZV9lcnJvcicKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhs
L2xpYnhlbmxpZ2h0LnNvOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2dlbl9pbnRl
Z2VyJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwvbGli
eGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfYWxsb2MnCi9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5z
bzogdW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9nZW5fZnJlZScKL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xpYnhlbmxpZ2h0LnNvOiB1bmRl
ZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2dlbl9ib29sJwpnbWFrZVszXTogKioqIFt0ZXN0
aWRsXSBFcnJvciAxCmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsJwpnbWFrZVsyXTogKioqIFtzdWJkaXItaW5zdGFsbC1saWJ4bF0g
RXJyb3IgMgpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scycKZ21ha2VbMV06ICoqKiBbc3ViZGlycy1pbnN0YWxsXSBFcnJvciAyCmdtYWtlWzFd
OiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZTogKioq
IFtpbnN0YWxsLXRvb2xzXSBFcnJvciAyCmRvbTAjIGV4aXQKClNjcmlwdCBkb25lIG9uIFR1
ZSBEZWMgIDQgMTM6MzA6MTQgMjAxMgo=
--------------070505000000060005050803
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------070505000000060005050803--


From xen-devel-bounces@lists.xen.org Tue Dec 04 13:55:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 13:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfsxo-0003G1-0X; Tue, 04 Dec 2012 13:55:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lukas@laukamp.me>)
	id 1Tfss0-0002nt-2V; Tue, 04 Dec 2012 13:49:09 +0000
Received: from [85.158.139.83:33473] by server-8.bemta-5.messagelabs.com id
	FC/23-06050-05FFDB05; Tue, 04 Dec 2012 13:49:04 +0000
X-Env-Sender: lukas@laukamp.me
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354628719!28245365!1
X-Originating-IP: [5.9.218.243]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32323 invoked from network); 4 Dec 2012 13:45:20 -0000
Received: from mailer0.lippux.de (HELO mailer0.lippux.de) (5.9.218.243)
	by server-15.tower-182.messagelabs.com with SMTP;
	4 Dec 2012 13:45:20 -0000
Received: from localhost (localhost [127.0.0.1])
	by mailer0.lippux.de (Postfix) with ESMTP id 614982C216;
	Tue,  4 Dec 2012 14:45:33 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mailer1.lippux.de
Received: from mailer0.lippux.de ([127.0.0.1])
	by localhost (mailer0.lippux.de [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id mKOtJAr3dMDb; Tue,  4 Dec 2012 14:45:32 +0100 (CET)
Received: from ashlynn.lippux.de (ashlynn.lippux.de [5.9.218.242])
	by mailer0.lippux.de (Postfix) with ESMTPSA id 04F882C212;
	Tue,  4 Dec 2012 14:45:30 +0100 (CET)
Message-ID: <50BDFE6B.1010800@laukamp.me>
Date: Tue, 04 Dec 2012 14:45:15 +0100
From: Lukas Laukamp <lukas@laukamp.me>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.10) Gecko/20121027 Icedove/10.0.10
MIME-Version: 1.0
To: xen-users@lists.xen.org
References: <CADGo8CWt=uO53ZedJUU0+U6ie_QXPKWY8u1-CDy6wD_pupbdeg@mail.gmail.com>
In-Reply-To: <CADGo8CWt=uO53ZedJUU0+U6ie_QXPKWY8u1-CDy6wD_pupbdeg@mail.gmail.com>
X-Forwarded-Message-Id: <CADGo8CWt=uO53ZedJUU0+U6ie_QXPKWY8u1-CDy6wD_pupbdeg@mail.gmail.com>
Content-Type: multipart/mixed; boundary="------------070505000000060005050803"
X-Mailman-Approved-At: Tue, 04 Dec 2012 13:55:07 +0000
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Fwd: Compilation of Xen 4.2 Utils breaks on NetBSD 6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------070505000000060005050803
Content-Type: multipart/alternative;
 boundary="------------090808050706050703050402"


--------------090808050706050703050402
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hello all,

Because there are still problems building Xen 4.2 on NetBSD (there was 
also another thread on the port-xen list), I am forwarding this message 
to find a solution to the problem. The complete output of my build is 
in the attached log file.

I used these commands for compilation:

./configure PYTHON=/usr/pkg/bin/python2.7 APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib --prefix=/usr/xen42
gmake PYTHON=/usr/pkg/bin/python2.7 xen
gmake tools

I took the commands from this wiki article: http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD

The build error appears in the tools target in libxl.

This is the last mail from the port-xen list related to this topic:


On 30/11/12 21:16, Mike Bowie wrote:

> On 11/30/12 12:13 PM, Jeff Rizzo wrote:
>> Anyone up for creating a pkgsrc package for xen 4.2?  There's clearly a
>> lot to be done, and my pkgsrc-fu is not all that great.
> I could be up for that... might not be until next week, but if the build
> steps all work out, I should be able to cobble something together into
> pkgsrc/wip. (Which would motivate me to get a box onto 4.2 also...
> double win.)
I would definitely help; this will probably require some Makefile
changes, which I think should be submitted upstream.


Is the problem solvable without big changes to the build system, so that 4.2 runs on a NetBSD 6 box? Or is it impossible to compile the 4.2 toolstack on NetBSD without big changes?



-------- Original Message --------
Subject: 	Compilation of Xen 4.2 Utils breaks on NetBSD 6
Date: 	Mon, 3 Dec 2012 17:19:16 +0000
From: 	Miguel Clara <miguelmclara@gmail.com>
To: 	port-xen@netbsd.org, lukas@laukamp.me



Lukas Laukamp <lukas <at> laukamp.me> writes:

 >
 > Hey all,
 >
 > I am trying to compile Xen 4.2 on NetBSD 6. The hypervisor itself
 > compiled fine, but the compilation of the utils breaks with this error:
 >
 > In file included from xl_cmdimpl.c:40:0:
 > libxl_json.h:18:27: fatal error: yajl/yajl_gen.h: No such file or directory
 > compilation terminated.
 > gmake[3]: *** [xl_cmdimpl.o] Error 1
 > gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libxl'
 > gmake[2]: *** [subdir-install-libxl] Error 2
 > gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
 > gmake[1]: *** [subdirs-install] Error 2
 > gmake[1]: Leaving directory `/root/xen-4.2.0/tools'
 > gmake: *** [install-tools] Error 2
 > testdom0#
 >
 > I passed the needed options to the configure script so that it
 > searches in /usr/pkg/include/ and /usr/pkg/lib and so on. The file
 > which is reported as missing exists in /usr/pkg/include/yajl/, so I
 > don't understand why it could not be found.
 >
 > Hope that someone could help me.
 >
 > Best Regards
 >
 >

I'm trying to build following the guide at: 
http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD

All works fine until I try to build "tools"

gmake[3]: Entering directory `/home/xen/xen-4.2.0/tools/libxl'
rm -f _paths.h.tmp.tmp; echo "SBINDIR=\"/usr/pkg/sbin\"" >>_paths.h.tmp.tmp; 
echo "BINDIR=\"/usr/pkg/bin\"" >>_paths.h.tmp.tmp; 
echo "LIBEXEC=\"/usr/pkg/libexec\"" >>_paths.h.tmp.tmp; 
echo "LIBDIR=\"/usr/pkg/lib\"" >>_paths.h.tmp.tmp; 
echo "SHAREDIR=\"/usr/pkg/share\"" >>_paths.h.tmp.tmp; 
echo "PRIVATE_BINDIR=\"/usr/pkg/bin\"" >>_paths.h.tmp.tmp; 
echo "XENFIRMWAREDIR=\"/usr/pkg/lib/xen/boot\"" >>_paths.h.tmp.tmp; 
echo "XEN_CONFIG_DIR=\"/usr/pkg/etc/xen\"" >>_paths.h.tmp.tmp; 
echo "XEN_SCRIPT_DIR=\"/usr/pkg/etc/xen/scripts\"" >>_paths.h.tmp.tmp; 
echo "XEN_LOCK_DIR=\"/usr/pkg/var/lib\"" >>_paths.h.tmp.tmp; 
echo "XEN_RUN_DIR=\"/usr/pkg/var/run/xen\"" >>_paths.h.tmp.tmp; 
echo "XEN_PAGING_DIR=\"/usr/pkg/var/lib/xen/xenpaging\"" >>_paths.h.tmp.tmp; 
if ! cmp -s _paths.h.tmp.tmp _paths.h.tmp; then mv -f _paths.h.tmp.tmp _paths.h.tmp; 
else rm -f _paths.h.tmp.tmp; fi
sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp >_paths.h.2.tmp
rm -f _paths.h.tmp
if ! cmp -s _paths.h.2.tmp _paths.h; then mv -f _paths.h.2.tmp _paths.h; 
else rm -f _paths.h.2.tmp; fi
gcc -pthread -o testidl testidl.o libxlutil.so 
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so 
-Wl,-rpath-link=/home/miguelc/xen-data/xen-4.2.0/tools/libxl/../../tools/libxc 
-Wl,-rpath-link=/home/xen/xen-4.2.0/tools/libxl/../../tools/xenstore 
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxc/libxenctrl.so -L/usr/pkg/lib
ld: warning: libyajl.so.2, needed by 
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so, not 
found (try using -rpath or -rpath-link)
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_parse'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_complete_parse'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_null'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_array_open'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_string'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_map_close'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_get_buf'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_free'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_alloc'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_array_close'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_map_open'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_get_error'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_free_error'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_integer'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_alloc'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_free'
/home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so: 
undefined reference to `yajl_gen_bool'
gmake[3]: *** [testidl] Error 1
gmake[3]: Leaving directory `/home/xen/xen-4.2.0/tools/libxl'
gmake[2]: *** [subdir-install-libxl] Error 2
gmake[2]: Leaving directory `/home/xen/xen-4.2.0/tools'
gmake[1]: *** [subdirs-install] Error 2
gmake[1]: Leaving directory `/home/xen/xen-4.2.0/tools'
gmake: *** [install-tools] Error 2
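(Side note: the _paths.h recipe in the log above completes fine; only the testidl link step fails. The sed stage of that recipe is easy to see in isolation:)

```shell
# Miniature of the _paths.h generation from the log: each
# KEY="value" line is rewritten by sed into a #define.
printf 'SBINDIR="/usr/pkg/sbin"\nLIBDIR="/usr/pkg/lib"\n' > _paths.h.tmp
sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp
rm -f _paths.h.tmp
```

which prints `#define SBINDIR "/usr/pkg/sbin"` and `#define LIBDIR "/usr/pkg/lib"`.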


I'm using yajl version 2... could this be the problem? Is there any patch?

Thanks



--------------090808050706050703050402--

--------------070505000000060005050803
Content-Type: application/octet-stream;
 name="xen-build.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="xen-build.log"

ZG9tMCMgLi9jb25maWd1cmUgUFlUSE9OPS91c3IvcGtnL2Jpbi9weXRob24yLjcgQVBQRU5E
X0lOQ0xVREVTPS91c3IvcGtnL2luY2x1ZGUgCCBBUFBFTkRfTElCPS91c3IvcGtnL2xpYiAt
LXByZWZpeD0vdXNyL3hlbjQyCmNoZWNraW5nIGJ1aWxkIHN5c3RlbSB0eXBlLi4uIHg4Nl82
NC11bmtub3duLW5ldGJzZDYuMApjaGVja2luZyBob3N0IHN5c3RlbSB0eXBlLi4uIHg4Nl82
NC11bmtub3duLW5ldGJzZDYuMApjaGVja2luZyBmb3IgZ2NjLi4uIGdjYwpjaGVja2luZyB3
aGV0aGVyIHRoZSBDIGNvbXBpbGVyIHdvcmtzLi4uIHllcwpjaGVja2luZyBmb3IgQyBjb21w
aWxlciBkZWZhdWx0IG91dHB1dCBmaWxlIG5hbWUuLi4gYS5vdXQKY2hlY2tpbmcgZm9yIHN1
ZmZpeCBvZiBleGVjdXRhYmxlcy4uLiAKY2hlY2tpbmcgd2hldGhlciB3ZSBhcmUgY3Jvc3Mg
Y29tcGlsaW5nLi4uIG5vCmNoZWNraW5nIGZvciBzdWZmaXggb2Ygb2JqZWN0IGZpbGVzLi4u
IG8KY2hlY2tpbmcgd2hldGhlciB3ZSBhcmUgdXNpbmcgdGhlIEdOVSBDIGNvbXBpbGVyLi4u
IHllcwpjaGVja2luZyB3aGV0aGVyIGdjYyBhY2NlcHRzIC1nLi4uIHllcwpjaGVja2luZyBm
b3IgZ2NjIG9wdGlvbiB0byBhY2NlcHQgSVNPIEM4OS4uLiBub25lIG5lZWRlZApjaGVja2lu
ZyB3aGV0aGVyIG1ha2Ugc2V0cyAkKE1BS0UpLi4uIHllcwpjaGVja2luZyBmb3IgYSBCU0Qt
Y29tcGF0aWJsZSBpbnN0YWxsLi4uIC91c3IvYmluL2luc3RhbGwgLWMKY2hlY2tpbmcgZm9y
IGJpc29uLi4uIC91c3IvcGtnL2Jpbi9iaXNvbgpjaGVja2luZyBmb3IgZmxleC4uLiAvdXNy
L2Jpbi9mbGV4CmNoZWNraW5nIGZvciBwZXJsLi4uIC91c3IvcGtnL2Jpbi9wZXJsCmNoZWNr
aW5nIGZvciBvY2FtbGMuLi4gbm8KY2hlY2tpbmcgZm9yIG9jYW1sLi4uIG5vCmNoZWNraW5n
IGZvciBvY2FtbGRlcC4uLiBubwpjaGVja2luZyBmb3Igb2NhbWxta3RvcC4uLiBubwpjaGVj
a2luZyBmb3Igb2NhbWxta2xpYi4uLiBubwpjaGVja2luZyBmb3Igb2NhbWxkb2MuLi4gbm8K
Y2hlY2tpbmcgZm9yIG9jYW1sYnVpbGQuLi4gbm8KY2hlY2tpbmcgZm9yIGJhc2guLi4gL3Vz
ci9wa2cvYmluL2Jhc2gKY2hlY2tpbmcgZm9yIHB5dGhvbjIuNy4uLiAvdXNyL3BrZy9iaW4v
cHl0aG9uMi43CmNoZWNraW5nIGZvciBweXRob24gdmVyc2lvbiA+PSAyLjMgLi4uIHllcwpj
aGVja2luZyBob3cgdG8gcnVuIHRoZSBDIHByZXByb2Nlc3Nvci4uLiBnY2MgLUUKY2hlY2tp
bmcgZm9yIGdyZXAgdGhhdCBoYW5kbGVzIGxvbmcgbGluZXMgYW5kIC1lLi4uIC91c3IvYmlu
L2dyZXAKY2hlY2tpbmcgZm9yIGVncmVwLi4uIC91c3IvYmluL2dyZXAgLUUKY2hlY2tpbmcg
Zm9yIEFOU0kgQyBoZWFkZXIgZmlsZXMuLi4geWVzCmNoZWNraW5nIGZvciBzeXMvdHlwZXMu
aC4uLiB5ZXMKY2hlY2tpbmcgZm9yIHN5cy9zdGF0LmguLi4geWVzCmNoZWNraW5nIGZvciBz
dGRsaWIuaC4uLiB5ZXMKY2hlY2tpbmcgZm9yIHN0cmluZy5oLi4uIHllcwpjaGVja2luZyBm
b3IgbWVtb3J5LmguLi4geWVzCmNoZWNraW5nIGZvciBzdHJpbmdzLmguLi4geWVzCmNoZWNr
aW5nIGZvciBpbnR0eXBlcy5oLi4uIHllcwpjaGVja2luZyBmb3Igc3RkaW50LmguLi4geWVz
CmNoZWNraW5nIGZvciB1bmlzdGQuaC4uLiB5ZXMKY2hlY2tpbmcgZm9yIHB5dGhvbjIuNy1j
b25maWcuLi4gL3Vzci9wa2cvYmluL3B5dGhvbjIuNy1jb25maWcKY2hlY2tpbmcgUHl0aG9u
LmggdXNhYmlsaXR5Li4uIHllcwpjaGVja2luZyBQeXRob24uaCBwcmVzZW5jZS4uLiB5ZXMK
Y2hlY2tpbmcgZm9yIFB5dGhvbi5oLi4uIHllcwpjaGVja2luZyBmb3IgUHlBcmdfUGFyc2VU
dXBsZSBpbiAtbHB5dGhvbjIuNy4uLiB5ZXMKY2hlY2tpbmcgZm9yIHhnZXR0ZXh0Li4uIC91
c3IvYmluL3hnZXR0ZXh0CmNoZWNraW5nIGZvciBhczg2Li4uIC91c3IvcGtnL2Jpbi9hczg2
CmNoZWNraW5nIGZvciBsZDg2Li4uIC91c3IvcGtnL2Jpbi9sZDg2CmNoZWNraW5nIGZvciBi
Y2MuLi4gL3Vzci9wa2cvYmluL2JjYwpjaGVja2luZyBmb3IgaWFzbC4uLiAvdXNyL2Jpbi9p
YXNsCmNoZWNraW5nIHV1aWQvdXVpZC5oIHVzYWJpbGl0eS4uLiBubwpjaGVja2luZyB1dWlk
L3V1aWQuaCBwcmVzZW5jZS4uLiBubwpjaGVja2luZyBmb3IgdXVpZC91dWlkLmguLi4gbm8K
Y2hlY2tpbmcgdXVpZC5oIHVzYWJpbGl0eS4uLiB5ZXMKY2hlY2tpbmcgdXVpZC5oIHByZXNl
bmNlLi4uIHllcwpjaGVja2luZyBmb3IgdXVpZC5oLi4uIHllcwpjaGVja2luZyBjdXJzZXMu
aCB1c2FiaWxpdHkuLi4geWVzCmNoZWNraW5nIGN1cnNlcy5oIHByZXNlbmNlLi4uIHllcwpj
aGVja2luZyBmb3IgY3Vyc2VzLmguLi4geWVzCmNoZWNraW5nIGZvciBjbGVhciBpbiAtbGN1
cnNlcy4uLiB5ZXMKY2hlY2tpbmcgbmN1cnNlcy5oIHVzYWJpbGl0eS4uLiBubwpjaGVja2lu
ZyBuY3Vyc2VzLmggcHJlc2VuY2UuLi4gbm8KY2hlY2tpbmcgZm9yIG5jdXJzZXMuaC4uLiBu
bwpjaGVja2luZyBmb3IgcGtnLWNvbmZpZy4uLiAvdXNyL3BrZy9iaW4vcGtnLWNvbmZpZwpj
aGVja2luZyBwa2ctY29uZmlnIGlzIGF0IGxlYXN0IHZlcnNpb24gMC45LjAuLi4geWVzCmNo
ZWNraW5nIGZvciBnbGliLi4uIHllcwpjaGVja2luZyBiemxpYi5oIHVzYWJpbGl0eS4uLiB5
ZXMKY2hlY2tpbmcgYnpsaWIuaCBwcmVzZW5jZS4uLiB5ZXMKY2hlY2tpbmcgZm9yIGJ6bGli
LmguLi4geWVzCmNoZWNraW5nIGZvciBCWjJfYnpEZWNvbXByZXNzSW5pdCBpbiAtbGJ6Mi4u
LiB5ZXMKY2hlY2tpbmcgbHptYS5oIHVzYWJpbGl0eS4uLiB5ZXMKY2hlY2tpbmcgbHptYS5o
IHByZXNlbmNlLi4uIHllcwpjaGVja2luZyBmb3IgbHptYS5oLi4uIHllcwpjaGVja2luZyBm
b3IgbHptYV9zdHJlYW1fZGVjb2RlciBpbiAtbGx6bWEuLi4geWVzCmNoZWNraW5nIGx6by9s
em8xeC5oIHVzYWJpbGl0eS4uLiBubwpjaGVja2luZyBsem8vbHpvMXguaCBwcmVzZW5jZS4u
LiBubwpjaGVja2luZyBmb3IgbHpvL2x6bzF4LmguLi4gbm8KY2hlY2tpbmcgZm9yIGlvX3Nl
dHVwIGluIC1sYWlvLi4uIG5vCmNoZWNraW5nIGZvciBNRDUgaW4gLWxjcnlwdG8uLi4geWVz
CmNoZWNraW5nIGZvciBleHQyZnNfb3BlbjIgaW4gLWxleHQyZnMuLi4gbm8KY2hlY2tpbmcg
Zm9yIGdjcnlfbWRfaGFzaF9idWZmZXIgaW4gLWxnY3J5cHQuLi4geWVzCmNoZWNraW5nIGZv
ciBwdGhyZWFkIGZsYWcuLi4gLXB0aHJlYWQKY2hlY2tpbmcgbGlidXRpbC5oIHVzYWJpbGl0
eS4uLiBubwpjaGVja2luZyBsaWJ1dGlsLmggcHJlc2VuY2UuLi4gbm8KY2hlY2tpbmcgZm9y
IGxpYnV0aWwuaC4uLiBubwpjaGVja2luZyBmb3Igb3BlbnB0eSBldCBhbC4uLiAtbHV0aWwK
Y2hlY2tpbmcgZm9yIHlhamxfYWxsb2MgaW4gLWx5YWpsLi4uIHllcwpjaGVja2luZyBmb3Ig
ZGVmbGF0ZUNvcHkgaW4gLWx6Li4uIHllcwpjaGVja2luZyBmb3IgbGliaWNvbnZfb3BlbiBp
biAtbGljb252Li4uIG5vCmNoZWNraW5nIHlhamwveWFqbF92ZXJzaW9uLmggdXNhYmlsaXR5
Li4uIHllcwpjaGVja2luZyB5YWpsL3lhamxfdmVyc2lvbi5oIHByZXNlbmNlLi4uIHllcwpj
aGVja2luZyBmb3IgeWFqbC95YWpsX3ZlcnNpb24uaC4uLiB5ZXMKY29uZmlndXJlOiBjcmVh
dGluZyAuL2NvbmZpZy5zdGF0dXMKY29uZmlnLnN0YXR1czogY3JlYXRpbmcgLi4vY29uZmln
L1Rvb2xzLm1rCmNvbmZpZy5zdGF0dXM6IGNyZWF0aW5nIGNvbmZpZy5oCmRvbTAjIGdtYWtl
IFBZVEhPTj0vdXNyL3BrZy9iaW4vcHl0aG9uMi43IHhlbgpnbWFrZSAtQyB4ZW4gaW5zdGFs
bApnbWFrZVsxXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuJwpn
bWFrZSAtZiBSdWxlcy5tayBfaW5zdGFsbApnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAveGVuJwpnbWFrZSAtQyB0b29scwpnbWFrZVszXTogRW50ZXJp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzJwpbIC1kIGZpZ2xldCBd
ICYmIGdtYWtlIC1DIGZpZ2xldApnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAveGVuL3Rvb2xzL2ZpZ2xldCcKZ2NjIC1vIGZpZ2xldCBmaWdsZXQuYwpn
bWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMv
ZmlnbGV0JwpnbWFrZSBzeW1ib2xzCmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9y
b290L3hlbi00LjIuMC94ZW4vdG9vbHMnCmdjYyAtV2FsbCAtV2Vycm9yIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1vIHN5bWJvbHMgc3ltYm9scy5jCmdt
YWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scycK
Z21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xz
JwpnbWFrZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIGluY2x1ZGUveGVuL2Nv
bXBpbGUuaApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAv
eGVuJwpnbWFrZSAtQyB0b29scwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAveGVuL3Rvb2xzJwpbIC1kIGZpZ2xldCBdICYmIGdtYWtlIC1DIGZpZ2xl
dApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL3Rv
b2xzL2ZpZ2xldCcKZ21ha2VbNV06IGBmaWdsZXQnIGlzIHVwIHRvIGRhdGUuCmdtYWtlWzVd
OiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9maWdsZXQn
CmdtYWtlIHN5bWJvbHMKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi90b29scycKZ21ha2VbNV06IGBzeW1ib2xzJyBpcyB1cCB0byBkYXRlLgpn
bWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMn
CmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29s
cycKIF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICBfX18gIAogXCBcLyAvX19f
IF8gX18gICB8IHx8IHwgIHxfX18gXCAgLyBfIFwgCiAgXCAgLy8gXyBcICdfIFwgIHwgfHwg
fF8gICBfXykgfHwgfCB8IHwKICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy8gfCB8
X3wgfAogL18vXF9cX19ffF98IHxffCAgICB8X3woXylfX19fXyhfKV9fXy8gCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKZ21ha2VbM106IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuJwpbIC1lIGluY2x1ZGUvYXNtIF0gfHwgbG4g
LXNmIGFzbS14ODYgaW5jbHVkZS9hc20KZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9S
dWxlcy5tayAtQyBpbmNsdWRlCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZScKbWtkaXIgLXAgY29tcGF0CmdyZXAgLXYgJ0RFRklO
RV9YRU5fR1VFU1RfSEFORExFKGxvbmcpJyBwdWJsaWMvY2FsbGJhY2suaCB8IFwKL3Vzci9w
a2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWls
ZC1zb3VyY2UucHkgPmNvbXBhdC9jYWxsYmFjay5jLm5ldwptdiAtZiBjb21wYXQvY2FsbGJh
Y2suYy5uZXcgY29tcGF0L2NhbGxiYWNrLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNvbXBh
dC5oIC1tMzIgLW8gY29tcGF0L2NhbGxiYWNrLmkgY29tcGF0L2NhbGxiYWNrLmMKc2V0IC1l
OyBpZD1fJChlY2hvIGNvbXBhdC9jYWxsYmFjay5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6
dXBwZXI6XV9fXycpOyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L2NhbGxiYWNrLmgu
bmV3OyBcCmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNvbXBhdC9jYWxsYmFjay5oLm5ldzsgXApl
Y2hvICIjaW5jbHVkZSA8eGVuL2NvbXBhdC5oPiIgPj5jb21wYXQvY2FsbGJhY2suaC5uZXc7
IFwKZWNobyAiI2luY2x1ZGUgPHB1YmxpYy9jYWxsYmFjay5oPiIgPj5jb21wYXQvY2FsbGJh
Y2suaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKDQpIiA+PmNvbXBhdC9jYWxsYmFjay5o
Lm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0L2NhbGxiYWNrLmkgfCBcCi91c3Iv
cGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVp
bGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNvbXBhdC9jYWxsYmFjay5oLm5ldzsgXAplY2hvICIj
cHJhZ21hIHBhY2soKSIgPj5jb21wYXQvY2FsbGJhY2suaC5uZXc7IFwKZWNobyAiI2VuZGlm
IC8qICRpZCAqLyIgPj5jb21wYXQvY2FsbGJhY2suaC5uZXcKbXYgLWYgY29tcGF0L2NhbGxi
YWNrLmgubmV3IGNvbXBhdC9jYWxsYmFjay5oCm1rZGlyIC1wIGNvbXBhdApncmVwIC12ICdE
RUZJTkVfWEVOX0dVRVNUX0hBTkRMRShsb25nKScgcHVibGljL2VsZm5vdGUuaCB8IFwKL3Vz
ci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1i
dWlsZC1zb3VyY2UucHkgPmNvbXBhdC9lbGZub3RlLmMubmV3Cm12IC1mIGNvbXBhdC9lbGZu
b3RlLmMubmV3IGNvbXBhdC9lbGZub3RlLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNvbXBh
dC5oIC1tMzIgLW8gY29tcGF0L2VsZm5vdGUuaSBjb21wYXQvZWxmbm90ZS5jCnNldCAtZTsg
aWQ9XyQoZWNobyBjb21wYXQvZWxmbm90ZS5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBw
ZXI6XV9fXycpOyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L2VsZm5vdGUuaC5uZXc7
IFwKZWNobyAiI2RlZmluZSAkaWQiID4+Y29tcGF0L2VsZm5vdGUuaC5uZXc7IFwKZWNobyAi
I2luY2x1ZGUgPHhlbi9jb21wYXQuaD4iID4+Y29tcGF0L2VsZm5vdGUuaC5uZXc7IFwKZWNo
byAiI2luY2x1ZGUgPHB1YmxpYy9lbGZub3RlLmg+IiA+PmNvbXBhdC9lbGZub3RlLmgubmV3
OyBcCmVjaG8gIiNwcmFnbWEgcGFjayg0KSIgPj5jb21wYXQvZWxmbm90ZS5oLm5ldzsgXApn
cmVwIC12ICdeIyBbMC05XScgY29tcGF0L2VsZm5vdGUuaSB8IFwKL3Vzci9wa2cvYmluL3B5
dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1oZWFkZXIu
cHkgfCB1bmlxID4+Y29tcGF0L2VsZm5vdGUuaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNr
KCkiID4+Y29tcGF0L2VsZm5vdGUuaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8qICRpZCAqLyIg
Pj5jb21wYXQvZWxmbm90ZS5oLm5ldwptdiAtZiBjb21wYXQvZWxmbm90ZS5oLm5ldyBjb21w
YXQvZWxmbm90ZS5oCm1rZGlyIC1wIGNvbXBhdApncmVwIC12ICdERUZJTkVfWEVOX0dVRVNU
X0hBTkRMRShsb25nKScgcHVibGljL2V2ZW50X2NoYW5uZWwuaCB8IFwKL3Vzci9wa2cvYmlu
L3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1zb3Vy
Y2UucHkgPmNvbXBhdC9ldmVudF9jaGFubmVsLmMubmV3Cm12IC1mIGNvbXBhdC9ldmVudF9j
aGFubmVsLmMubmV3IGNvbXBhdC9ldmVudF9jaGFubmVsLmMKZ2NjIC1FIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMv
eGVuLWNvbXBhdC5oIC1tMzIgLW8gY29tcGF0L2V2ZW50X2NoYW5uZWwuaSBjb21wYXQvZXZl
bnRfY2hhbm5lbC5jCnNldCAtZTsgaWQ9XyQoZWNobyBjb21wYXQvZXZlbnRfY2hhbm5lbC5o
IHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9fXycpOyBcCmVjaG8gIiNpZm5kZWYg
JGlkIiA+Y29tcGF0L2V2ZW50X2NoYW5uZWwuaC5uZXc7IFwKZWNobyAiI2RlZmluZSAkaWQi
ID4+Y29tcGF0L2V2ZW50X2NoYW5uZWwuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9j
b21wYXQuaD4iID4+Y29tcGF0L2V2ZW50X2NoYW5uZWwuaC5uZXc7IFwKZWNobyAiI2luY2x1
ZGUgPHB1YmxpYy9ldmVudF9jaGFubmVsLmg+IiA+PmNvbXBhdC9ldmVudF9jaGFubmVsLmgu
bmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjayg0KSIgPj5jb21wYXQvZXZlbnRfY2hhbm5lbC5o
Lm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0L2V2ZW50X2NoYW5uZWwuaSB8IFwK
L3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBh
dC1idWlsZC1oZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0L2V2ZW50X2NoYW5uZWwuaC5uZXc7
IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L2V2ZW50X2NoYW5uZWwuaC5uZXc7
IFwKZWNobyAiI2VuZGlmIC8qICRpZCAqLyIgPj5jb21wYXQvZXZlbnRfY2hhbm5lbC5oLm5l
dwptdiAtZiBjb21wYXQvZXZlbnRfY2hhbm5lbC5oLm5ldyBjb21wYXQvZXZlbnRfY2hhbm5l
bC5oCm1rZGlyIC1wIGNvbXBhdApncmVwIC12ICdERUZJTkVfWEVOX0dVRVNUX0hBTkRMRShs
b25nKScgcHVibGljL2ZlYXR1cmVzLmggfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jv
b3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtc291cmNlLnB5ID5jb21wYXQv
ZmVhdHVyZXMuYy5uZXcKbXYgLWYgY29tcGF0L2ZlYXR1cmVzLmMubmV3IGNvbXBhdC9mZWF0
dXJlcy5jCmdjYyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1m
bG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0
ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAt
ZyAtRF9fWEVOX18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgLWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1vIGNvbXBhdC9m
ZWF0dXJlcy5pIGNvbXBhdC9mZWF0dXJlcy5jCnNldCAtZTsgaWQ9XyQoZWNobyBjb21wYXQv
ZmVhdHVyZXMuaCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hv
ICIjaWZuZGVmICRpZCIgPmNvbXBhdC9mZWF0dXJlcy5oLm5ldzsgXAplY2hvICIjZGVmaW5l
ICRpZCIgPj5jb21wYXQvZmVhdHVyZXMuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9j
b21wYXQuaD4iID4+Y29tcGF0L2ZlYXR1cmVzLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxw
dWJsaWMvZmVhdHVyZXMuaD4iID4+Y29tcGF0L2ZlYXR1cmVzLmgubmV3OyBcCmVjaG8gIiNw
cmFnbWEgcGFjayg0KSIgPj5jb21wYXQvZmVhdHVyZXMuaC5uZXc7IFwKZ3JlcCAtdiAnXiMg
WzAtOV0nIGNvbXBhdC9mZWF0dXJlcy5pIHwgXAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9y
b290L3hlbi00LjIuMC94ZW4vdG9vbHMvY29tcGF0LWJ1aWxkLWhlYWRlci5weSB8IHVuaXEg
Pj5jb21wYXQvZmVhdHVyZXMuaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29t
cGF0L2ZlYXR1cmVzLmgubmV3OyBcCmVjaG8gIiNlbmRpZiAvKiAkaWQgKi8iID4+Y29tcGF0
L2ZlYXR1cmVzLmgubmV3Cm12IC1mIGNvbXBhdC9mZWF0dXJlcy5oLm5ldyBjb21wYXQvZmVh
dHVyZXMuaApta2RpciAtcCBjb21wYXQKZ3JlcCAtdiAnREVGSU5FX1hFTl9HVUVTVF9IQU5E
TEUobG9uZyknIHB1YmxpYy9ncmFudF90YWJsZS5oIHwgXAovdXNyL3BrZy9iaW4vcHl0aG9u
Mi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvY29tcGF0LWJ1aWxkLXNvdXJjZS5weSA+
Y29tcGF0L2dyYW50X3RhYmxlLmMubmV3Cm12IC1mIGNvbXBhdC9ncmFudF90YWJsZS5jLm5l
dyBjb21wYXQvZ3JhbnRfdGFibGUuYwpnY2MgLUUgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRFQlVHIC1mbm8t
YnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5j
bHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
ZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5v
LWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJ
QlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1pbmNsdWRlIHB1YmxpYy94ZW4tY29tcGF0Lmgg
LW0zMiAtbyBjb21wYXQvZ3JhbnRfdGFibGUuaSBjb21wYXQvZ3JhbnRfdGFibGUuYwpzZXQg
LWU7IGlkPV8kKGVjaG8gY29tcGF0L2dyYW50X3RhYmxlLmggfCB0ciAnWzpsb3dlcjpdLS8u
JyAnWzp1cHBlcjpdX19fJyk7IFwKZWNobyAiI2lmbmRlZiAkaWQiID5jb21wYXQvZ3JhbnRf
dGFibGUuaC5uZXc7IFwKZWNobyAiI2RlZmluZSAkaWQiID4+Y29tcGF0L2dyYW50X3RhYmxl
LmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29tcGF0Lmg+IiA+PmNvbXBhdC9ncmFu
dF90YWJsZS5oLm5ldzsgXAplY2hvICIjaW5jbHVkZSA8cHVibGljL2dyYW50X3RhYmxlLmg+
IiA+PmNvbXBhdC9ncmFudF90YWJsZS5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soNCki
ID4+Y29tcGF0L2dyYW50X3RhYmxlLmgubmV3OyBcCmdyZXAgLXYgJ14jIFswLTldJyBjb21w
YXQvZ3JhbnRfdGFibGUuaSB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4t
NC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1oZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0
L2dyYW50X3RhYmxlLmgubmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjaygpIiA+PmNvbXBhdC9n
cmFudF90YWJsZS5oLm5ldzsgXAplY2hvICIjZW5kaWYgLyogJGlkICovIiA+PmNvbXBhdC9n
cmFudF90YWJsZS5oLm5ldwptdiAtZiBjb21wYXQvZ3JhbnRfdGFibGUuaC5uZXcgY29tcGF0
L2dyYW50X3RhYmxlLmgKbWtkaXIgLXAgY29tcGF0CmdyZXAgLXYgJ0RFRklORV9YRU5fR1VF
U1RfSEFORExFKGxvbmcpJyBwdWJsaWMva2V4ZWMuaCB8IFwKL3Vzci9wa2cvYmluL3B5dGhv
bjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1zb3VyY2UucHkg
PmNvbXBhdC9rZXhlYy5jLm5ldwptdiAtZiBjb21wYXQva2V4ZWMuYy5uZXcgY29tcGF0L2tl
eGVjLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21t
b24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25v
LXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNvbXBhdC5oIC1tMzIgLW8gY29tcGF0L2tl
eGVjLmkgY29tcGF0L2tleGVjLmMKc2V0IC1lOyBpZD1fJChlY2hvIGNvbXBhdC9rZXhlYy5o
IHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9fXycpOyBcCmVjaG8gIiNpZm5kZWYg
JGlkIiA+Y29tcGF0L2tleGVjLmgubmV3OyBcCmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNvbXBh
dC9rZXhlYy5oLm5ldzsgXAplY2hvICIjaW5jbHVkZSA8eGVuL2NvbXBhdC5oPiIgPj5jb21w
YXQva2V4ZWMuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHB1YmxpYy9rZXhlYy5oPiIgPj5j
b21wYXQva2V4ZWMuaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKDQpIiA+PmNvbXBhdC9r
ZXhlYy5oLm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0L2tleGVjLmkgfCBcCi91
c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQt
YnVpbGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNvbXBhdC9rZXhlYy5oLm5ldzsgXAplY2hvICIj
cHJhZ21hIHBhY2soKSIgPj5jb21wYXQva2V4ZWMuaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8q
ICRpZCAqLyIgPj5jb21wYXQva2V4ZWMuaC5uZXcKbXYgLWYgY29tcGF0L2tleGVjLmgubmV3
IGNvbXBhdC9rZXhlYy5oCm1rZGlyIC1wIGNvbXBhdApncmVwIC12ICdERUZJTkVfWEVOX0dV
RVNUX0hBTkRMRShsb25nKScgcHVibGljL21lbW9yeS5oIHwgXAovdXNyL3BrZy9iaW4vcHl0
aG9uMi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvY29tcGF0LWJ1aWxkLXNvdXJjZS5w
eSA+Y29tcGF0L21lbW9yeS5jLm5ldwptdiAtZiBjb21wYXQvbWVtb3J5LmMubmV3IGNvbXBh
dC9tZW1vcnkuYwpnY2MgLUUgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50IC1pbmNsdWRlIHB1YmxpYy94ZW4tY29tcGF0LmggLW0zMiAtbyBjb21w
YXQvbWVtb3J5LmkgY29tcGF0L21lbW9yeS5jCnNldCAtZTsgaWQ9XyQoZWNobyBjb21wYXQv
bWVtb3J5LmggfCB0ciAnWzpsb3dlcjpdLS8uJyAnWzp1cHBlcjpdX19fJyk7IFwKZWNobyAi
I2lmbmRlZiAkaWQiID5jb21wYXQvbWVtb3J5LmgubmV3OyBcCmVjaG8gIiNkZWZpbmUgJGlk
IiA+PmNvbXBhdC9tZW1vcnkuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9jb21wYXQu
aD4iID4+Y29tcGF0L21lbW9yeS5oLm5ldzsgXAplY2hvICIjaW5jbHVkZSA8cHVibGljL21l
bW9yeS5oPiIgPj5jb21wYXQvbWVtb3J5LmgubmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjayg0
KSIgPj5jb21wYXQvbWVtb3J5LmgubmV3OyBcCmdyZXAgLXYgJ14jIFswLTldJyBjb21wYXQv
bWVtb3J5LmkgfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hl
bi90b29scy9jb21wYXQtYnVpbGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNvbXBhdC9tZW1vcnku
aC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L21lbW9yeS5oLm5ldzsg
XAplY2hvICIjZW5kaWYgLyogJGlkICovIiA+PmNvbXBhdC9tZW1vcnkuaC5uZXcKbXYgLWYg
Y29tcGF0L21lbW9yeS5oLm5ldyBjb21wYXQvbWVtb3J5LmgKbWtkaXIgLXAgY29tcGF0Cmdy
ZXAgLXYgJ0RFRklORV9YRU5fR1VFU1RfSEFORExFKGxvbmcpJyBwdWJsaWMvbm1pLmggfCBc
Ci91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21w
YXQtYnVpbGQtc291cmNlLnB5ID5jb21wYXQvbm1pLmMubmV3Cm12IC1mIGNvbXBhdC9ubWku
Yy5uZXcgY29tcGF0L25taS5jCmdjYyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19Q
QVNTVEhST1VHSCAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMy
IC1vIGNvbXBhdC9ubWkuaSBjb21wYXQvbm1pLmMKc2V0IC1lOyBpZD1fJChlY2hvIGNvbXBh
dC9ubWkuaCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hvICIj
aWZuZGVmICRpZCIgPmNvbXBhdC9ubWkuaC5uZXc7IFwKZWNobyAiI2RlZmluZSAkaWQiID4+
Y29tcGF0L25taS5oLm5ldzsgXAplY2hvICIjaW5jbHVkZSA8eGVuL2NvbXBhdC5oPiIgPj5j
b21wYXQvbm1pLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxwdWJsaWMvbm1pLmg+IiA+PmNv
bXBhdC9ubWkuaC5uZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKDQpIiA+PmNvbXBhdC9ubWku
aC5uZXc7IFwKZ3JlcCAtdiAnXiMgWzAtOV0nIGNvbXBhdC9ubWkuaSB8IFwKL3Vzci9wa2cv
YmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rvb2xzL2NvbXBhdC1idWlsZC1o
ZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0L25taS5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBh
Y2soKSIgPj5jb21wYXQvbm1pLmgubmV3OyBcCmVjaG8gIiNlbmRpZiAvKiAkaWQgKi8iID4+
Y29tcGF0L25taS5oLm5ldwptdiAtZiBjb21wYXQvbm1pLmgubmV3IGNvbXBhdC9ubWkuaApt
a2RpciAtcCBjb21wYXQKZ3JlcCAtdiAnREVGSU5FX1hFTl9HVUVTVF9IQU5ETEUobG9uZykn
IHB1YmxpYy9waHlzZGV2LmggfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVu
LTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtc291cmNlLnB5ID5jb21wYXQvcGh5c2Rl
di5jLm5ldwptdiAtZiBjb21wYXQvcGh5c2Rldi5jLm5ldyBjb21wYXQvcGh5c2Rldi5jCmdj
YyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1vIGNvbXBhdC9waHlzZGV2Lmkg
Y29tcGF0L3BoeXNkZXYuYwpzZXQgLWU7IGlkPV8kKGVjaG8gY29tcGF0L3BoeXNkZXYuaCB8
IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hvICIjaWZuZGVmICRp
ZCIgPmNvbXBhdC9waHlzZGV2LmgubmV3OyBcCmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNvbXBh
dC9waHlzZGV2LmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29tcGF0Lmg+IiA+PmNv
bXBhdC9waHlzZGV2LmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxwdWJsaWMvcGh5c2Rldi5o
PiIgPj5jb21wYXQvcGh5c2Rldi5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soNCkiID4+
Y29tcGF0L3BoeXNkZXYuaC5uZXc7IFwKZ3JlcCAtdiAnXiMgWzAtOV0nIGNvbXBhdC9waHlz
ZGV2LmkgfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90
b29scy9jb21wYXQtYnVpbGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNvbXBhdC9waHlzZGV2Lmgu
bmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjaygpIiA+PmNvbXBhdC9waHlzZGV2LmgubmV3OyBc
CmVjaG8gIiNlbmRpZiAvKiAkaWQgKi8iID4+Y29tcGF0L3BoeXNkZXYuaC5uZXcKbXYgLWYg
Y29tcGF0L3BoeXNkZXYuaC5uZXcgY29tcGF0L3BoeXNkZXYuaApta2RpciAtcCBjb21wYXQK
Z3JlcCAtdiAnREVGSU5FX1hFTl9HVUVTVF9IQU5ETEUobG9uZyknIHB1YmxpYy9wbGF0Zm9y
bS5oIHwgXAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9v
bHMvY29tcGF0LWJ1aWxkLXNvdXJjZS5weSA+Y29tcGF0L3BsYXRmb3JtLmMubmV3Cm12IC1m
IGNvbXBhdC9wbGF0Zm9ybS5jLm5ldyBjb21wYXQvcGxhdGZvcm0uYwpnY2MgLUUgLU8yIC1m
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90
ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAt
bW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hB
U19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1pbmNsdWRlIHB1
YmxpYy94ZW4tY29tcGF0LmggLW0zMiAtbyBjb21wYXQvcGxhdGZvcm0uaSBjb21wYXQvcGxh
dGZvcm0uYwpzZXQgLWU7IGlkPV8kKGVjaG8gY29tcGF0L3BsYXRmb3JtLmggfCB0ciAnWzps
b3dlcjpdLS8uJyAnWzp1cHBlcjpdX19fJyk7IFwKZWNobyAiI2lmbmRlZiAkaWQiID5jb21w
YXQvcGxhdGZvcm0uaC5uZXc7IFwKZWNobyAiI2RlZmluZSAkaWQiID4+Y29tcGF0L3BsYXRm
b3JtLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29tcGF0Lmg+IiA+PmNvbXBhdC9w
bGF0Zm9ybS5oLm5ldzsgXAplY2hvICIjaW5jbHVkZSA8cHVibGljL3BsYXRmb3JtLmg+IiA+
PmNvbXBhdC9wbGF0Zm9ybS5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soNCkiID4+Y29t
cGF0L3BsYXRmb3JtLmgubmV3OyBcCmdyZXAgLXYgJ14jIFswLTldJyBjb21wYXQvcGxhdGZv
cm0uaSB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rv
b2xzL2NvbXBhdC1idWlsZC1oZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0L3BsYXRmb3JtLmgu
bmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjaygpIiA+PmNvbXBhdC9wbGF0Zm9ybS5oLm5ldzsg
XAplY2hvICIjZW5kaWYgLyogJGlkICovIiA+PmNvbXBhdC9wbGF0Zm9ybS5oLm5ldwptdiAt
ZiBjb21wYXQvcGxhdGZvcm0uaC5uZXcgY29tcGF0L3BsYXRmb3JtLmgKbWtkaXIgLXAgY29t
cGF0CmdyZXAgLXYgJ0RFRklORV9YRU5fR1VFU1RfSEFORExFKGxvbmcpJyBwdWJsaWMvc2No
ZWQuaCB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rv
b2xzL2NvbXBhdC1idWlsZC1zb3VyY2UucHkgPmNvbXBhdC9zY2hlZC5jLm5ldwptdiAtZiBj
b21wYXQvc2NoZWQuYy5uZXcgY29tcGF0L3NjaGVkLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5E
RUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRo
cHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVu
LWNvbXBhdC5oIC1tMzIgLW8gY29tcGF0L3NjaGVkLmkgY29tcGF0L3NjaGVkLmMKc2V0IC1l
OyBpZD1fJChlY2hvIGNvbXBhdC9zY2hlZC5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBw
ZXI6XV9fXycpOyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L3NjaGVkLmgubmV3OyBc
CmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNvbXBhdC9zY2hlZC5oLm5ldzsgXAplY2hvICIjaW5j
bHVkZSA8eGVuL2NvbXBhdC5oPiIgPj5jb21wYXQvc2NoZWQuaC5uZXc7IFwKZWNobyAiI2lu
Y2x1ZGUgPHB1YmxpYy9zY2hlZC5oPiIgPj5jb21wYXQvc2NoZWQuaC5uZXc7IFwKZWNobyAi
I3ByYWdtYSBwYWNrKDQpIiA+PmNvbXBhdC9zY2hlZC5oLm5ldzsgXApncmVwIC12ICdeIyBb
MC05XScgY29tcGF0L3NjaGVkLmkgfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNv
bXBhdC9zY2hlZC5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soKSIgPj5jb21wYXQvc2No
ZWQuaC5uZXc7IFwKZWNobyAiI2VuZGlmIC8qICRpZCAqLyIgPj5jb21wYXQvc2NoZWQuaC5u
ZXcKbXYgLWYgY29tcGF0L3NjaGVkLmgubmV3IGNvbXBhdC9zY2hlZC5oCm1rZGlyIC1wIGNv
bXBhdApncmVwIC12ICdERUZJTkVfWEVOX0dVRVNUX0hBTkRMRShsb25nKScgcHVibGljL3Rt
ZW0uaCB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVuL3Rv
b2xzL2NvbXBhdC1idWlsZC1zb3VyY2UucHkgPmNvbXBhdC90bWVtLmMubmV3Cm12IC1mIGNv
bXBhdC90bWVtLmMubmV3IGNvbXBhdC90bWVtLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNv
bXBhdC5oIC1tMzIgLW8gY29tcGF0L3RtZW0uaSBjb21wYXQvdG1lbS5jCnNldCAtZTsgaWQ9
XyQoZWNobyBjb21wYXQvdG1lbS5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9f
XycpOyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L3RtZW0uaC5uZXc7IFwKZWNobyAi
I2RlZmluZSAkaWQiID4+Y29tcGF0L3RtZW0uaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhl
bi9jb21wYXQuaD4iID4+Y29tcGF0L3RtZW0uaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHB1
YmxpYy90bWVtLmg+IiA+PmNvbXBhdC90bWVtLmgubmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFj
ayg0KSIgPj5jb21wYXQvdG1lbS5oLm5ldzsgXApncmVwIC12ICdeIyBbMC05XScgY29tcGF0
L3RtZW0uaSB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVu
L3Rvb2xzL2NvbXBhdC1idWlsZC1oZWFkZXIucHkgfCB1bmlxID4+Y29tcGF0L3RtZW0uaC5u
ZXc7IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L3RtZW0uaC5uZXc7IFwKZWNo
byAiI2VuZGlmIC8qICRpZCAqLyIgPj5jb21wYXQvdG1lbS5oLm5ldwptdiAtZiBjb21wYXQv
dG1lbS5oLm5ldyBjb21wYXQvdG1lbS5oCm1rZGlyIC1wIGNvbXBhdApncmVwIC12ICdERUZJ
TkVfWEVOX0dVRVNUX0hBTkRMRShsb25nKScgcHVibGljL3RyYWNlLmggfCBcCi91c3IvcGtn
L2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQt
c291cmNlLnB5ID5jb21wYXQvdHJhY2UuYy5uZXcKbXYgLWYgY29tcGF0L3RyYWNlLmMubmV3
IGNvbXBhdC90cmFjZS5jCmdjYyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGlu
IC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1X
ZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgLWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1v
IGNvbXBhdC90cmFjZS5pIGNvbXBhdC90cmFjZS5jCnNldCAtZTsgaWQ9XyQoZWNobyBjb21w
YXQvdHJhY2UuaCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hv
ICIjaWZuZGVmICRpZCIgPmNvbXBhdC90cmFjZS5oLm5ldzsgXAplY2hvICIjZGVmaW5lICRp
ZCIgPj5jb21wYXQvdHJhY2UuaC5uZXc7IFwKZWNobyAiI2luY2x1ZGUgPHhlbi9jb21wYXQu
aD4iID4+Y29tcGF0L3RyYWNlLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxwdWJsaWMvdHJh
Y2UuaD4iID4+Y29tcGF0L3RyYWNlLmgubmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjayg0KSIg
Pj5jb21wYXQvdHJhY2UuaC5uZXc7IFwKZ3JlcCAtdiAnXiMgWzAtOV0nIGNvbXBhdC90cmFj
ZS5pIHwgXAovdXNyL3BrZy9iaW4vcHl0aG9uMi43IC9yb290L3hlbi00LjIuMC94ZW4vdG9v
bHMvY29tcGF0LWJ1aWxkLWhlYWRlci5weSB8IHVuaXEgPj5jb21wYXQvdHJhY2UuaC5uZXc7
IFwKZWNobyAiI3ByYWdtYSBwYWNrKCkiID4+Y29tcGF0L3RyYWNlLmgubmV3OyBcCmVjaG8g
IiNlbmRpZiAvKiAkaWQgKi8iID4+Y29tcGF0L3RyYWNlLmgubmV3Cm12IC1mIGNvbXBhdC90
cmFjZS5oLm5ldyBjb21wYXQvdHJhY2UuaApta2RpciAtcCBjb21wYXQKZ3JlcCAtdiAnREVG
SU5FX1hFTl9HVUVTVF9IQU5ETEUobG9uZyknIHB1YmxpYy92Y3B1LmggfCBcCi91c3IvcGtn
L2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQt
c291cmNlLnB5ID5jb21wYXQvdmNwdS5jLm5ldwptdiAtZiBjb21wYXQvdmNwdS5jLm5ldyBj
b21wYXQvdmNwdS5jCmdjYyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmlj
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1t
c29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0
ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9u
b3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0
ZGluYyAtZyAtRF9fWEVOX18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhS
T1VHSCAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1vIGNv
bXBhdC92Y3B1LmkgY29tcGF0L3ZjcHUuYwpzZXQgLWU7IGlkPV8kKGVjaG8gY29tcGF0L3Zj
cHUuaCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hvICIjaWZu
ZGVmICRpZCIgPmNvbXBhdC92Y3B1LmgubmV3OyBcCmVjaG8gIiNkZWZpbmUgJGlkIiA+PmNv
bXBhdC92Y3B1LmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29tcGF0Lmg+IiA+PmNv
bXBhdC92Y3B1LmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxwdWJsaWMvdmNwdS5oPiIgPj5j
b21wYXQvdmNwdS5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBhY2soNCkiID4+Y29tcGF0L3Zj
cHUuaC5uZXc7IFwKZ3JlcCAtdiAnXiMgWzAtOV0nIGNvbXBhdC92Y3B1LmkgfCBcCi91c3Iv
cGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVp
bGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNvbXBhdC92Y3B1LmgubmV3OyBcCmVjaG8gIiNwcmFn
bWEgcGFjaygpIiA+PmNvbXBhdC92Y3B1LmgubmV3OyBcCmVjaG8gIiNlbmRpZiAvKiAkaWQg
Ki8iID4+Y29tcGF0L3ZjcHUuaC5uZXcKbXYgLWYgY29tcGF0L3ZjcHUuaC5uZXcgY29tcGF0
L3ZjcHUuaApta2RpciAtcCBjb21wYXQKZ3JlcCAtdiAnREVGSU5FX1hFTl9HVUVTVF9IQU5E
TEUobG9uZyknIHB1YmxpYy92ZXJzaW9uLmggfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcg
L3Jvb3QveGVuLTQuMi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtc291cmNlLnB5ID5jb21w
YXQvdmVyc2lvbi5jLm5ldwptdiAtZiBjb21wYXQvdmVyc2lvbi5jLm5ldyBjb21wYXQvdmVy
c2lvbi5jCmdjYyAtRSAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1m
bG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0
ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAt
ZyAtRF9fWEVOX18gLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgLWluY2x1ZGUgcHVibGljL3hlbi1jb21wYXQuaCAtbTMyIC1vIGNvbXBhdC92
ZXJzaW9uLmkgY29tcGF0L3ZlcnNpb24uYwpzZXQgLWU7IGlkPV8kKGVjaG8gY29tcGF0L3Zl
cnNpb24uaCB8IHRyICdbOmxvd2VyOl0tLy4nICdbOnVwcGVyOl1fX18nKTsgXAplY2hvICIj
aWZuZGVmICRpZCIgPmNvbXBhdC92ZXJzaW9uLmgubmV3OyBcCmVjaG8gIiNkZWZpbmUgJGlk
IiA+PmNvbXBhdC92ZXJzaW9uLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDx4ZW4vY29tcGF0
Lmg+IiA+PmNvbXBhdC92ZXJzaW9uLmgubmV3OyBcCmVjaG8gIiNpbmNsdWRlIDxwdWJsaWMv
dmVyc2lvbi5oPiIgPj5jb21wYXQvdmVyc2lvbi5oLm5ldzsgXAplY2hvICIjcHJhZ21hIHBh
Y2soNCkiID4+Y29tcGF0L3ZlcnNpb24uaC5uZXc7IFwKZ3JlcCAtdiAnXiMgWzAtOV0nIGNv
bXBhdC92ZXJzaW9uLmkgfCBcCi91c3IvcGtnL2Jpbi9weXRob24yLjcgL3Jvb3QveGVuLTQu
Mi4wL3hlbi90b29scy9jb21wYXQtYnVpbGQtaGVhZGVyLnB5IHwgdW5pcSA+PmNvbXBhdC92
ZXJzaW9uLmgubmV3OyBcCmVjaG8gIiNwcmFnbWEgcGFjaygpIiA+PmNvbXBhdC92ZXJzaW9u
LmgubmV3OyBcCmVjaG8gIiNlbmRpZiAvKiAkaWQgKi8iID4+Y29tcGF0L3ZlcnNpb24uaC5u
ZXcKbXYgLWYgY29tcGF0L3ZlcnNpb24uaC5uZXcgY29tcGF0L3ZlcnNpb24uaApta2RpciAt
cCBjb21wYXQKZ3JlcCAtdiAnREVGSU5FX1hFTl9HVUVTVF9IQU5ETEUobG9uZyknIHB1Ymxp
Yy94ZW4uaCB8IFwKL3Vzci9wa2cvYmluL3B5dGhvbjIuNyAvcm9vdC94ZW4tNC4yLjAveGVu
L3Rvb2xzL2NvbXBhdC1idWlsZC1zb3VyY2UucHkgPmNvbXBhdC94ZW4uYy5uZXcKbXYgLWYg
Y29tcGF0L3hlbi5jLm5ldyBjb21wYXQveGVuLmMKZ2NjIC1FIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtaW5jbHVkZSBwdWJsaWMveGVuLWNv
bXBhdC5oIC1tMzIgLW8gY29tcGF0L3hlbi5pIGNvbXBhdC94ZW4uYwpzZXQgLWU7IGlkPV8k
KGVjaG8gY29tcGF0L3hlbi5oIHwgdHIgJ1s6bG93ZXI6XS0vLicgJ1s6dXBwZXI6XV9fXycp
OyBcCmVjaG8gIiNpZm5kZWYgJGlkIiA+Y29tcGF0L3hlbi5oLm5ldzsgXAplY2hvICIjZGVm
ine $id" >>compat/xen.h.new; \
echo "#include <xen/compat.h>" >>compat/xen.h.new; \
echo "#include <public/xen.h>" >>compat/xen.h.new; \
echo "#pragma pack(4)" >>compat/xen.h.new; \
grep -v '^# [0-9]' compat/xen.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/xen.h.new; \
echo "#pragma pack()" >>compat/xen.h.new; \
echo "#endif /* $id */" >>compat/xen.h.new
mv -f compat/xen.h.new compat/xen.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/xencomm.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/xencomm.c.new
mv -f compat/xencomm.c.new compat/xencomm.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/xencomm.i compat/xencomm.c
set -e; id=_$(echo compat/xencomm.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/xencomm.h.new; \
echo "#define $id" >>compat/xencomm.h.new; \
echo "#include <xen/compat.h>" >>compat/xencomm.h.new; \
echo "#include <public/xencomm.h>" >>compat/xencomm.h.new; \
echo "#pragma pack(4)" >>compat/xencomm.h.new; \
grep -v '^# [0-9]' compat/xencomm.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/xencomm.h.new; \
echo "#pragma pack()" >>compat/xencomm.h.new; \
echo "#endif /* $id */" >>compat/xencomm.h.new
mv -f compat/xencomm.h.new compat/xencomm.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/xenoprof.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/xenoprof.c.new
mv -f compat/xenoprof.c.new compat/xenoprof.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/xenoprof.i compat/xenoprof.c
set -e; id=_$(echo compat/xenoprof.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/xenoprof.h.new; \
echo "#define $id" >>compat/xenoprof.h.new; \
echo "#include <xen/compat.h>" >>compat/xenoprof.h.new; \
echo "#include <public/xenoprof.h>" >>compat/xenoprof.h.new; \
echo "#pragma pack(4)" >>compat/xenoprof.h.new; \
grep -v '^# [0-9]' compat/xenoprof.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/xenoprof.h.new; \
echo "#pragma pack()" >>compat/xenoprof.h.new; \
echo "#endif /* $id */" >>compat/xenoprof.h.new
mv -f compat/xenoprof.h.new compat/xenoprof.h
mkdir -p compat/arch-x86
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/arch-x86/xen-mca.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/arch-x86/xen-mca.c.new
mv -f compat/arch-x86/xen-mca.c.new compat/arch-x86/xen-mca.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/arch-x86/xen-mca.i compat/arch-x86/xen-mca.c
set -e; id=_$(echo compat/arch-x86/xen-mca.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/arch-x86/xen-mca.h.new; \
echo "#define $id" >>compat/arch-x86/xen-mca.h.new; \
echo "#include <xen/compat.h>" >>compat/arch-x86/xen-mca.h.new; \
 \
echo "#pragma pack(4)" >>compat/arch-x86/xen-mca.h.new; \
grep -v '^# [0-9]' compat/arch-x86/xen-mca.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/arch-x86/xen-mca.h.new; \
echo "#pragma pack()" >>compat/arch-x86/xen-mca.h.new; \
echo "#endif /* $id */" >>compat/arch-x86/xen-mca.h.new
mv -f compat/arch-x86/xen-mca.h.new compat/arch-x86/xen-mca.h
mkdir -p compat/arch-x86
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/arch-x86/xen.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/arch-x86/xen.c.new
mv -f compat/arch-x86/xen.c.new compat/arch-x86/xen.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/arch-x86/xen.i compat/arch-x86/xen.c
set -e; id=_$(echo compat/arch-x86/xen.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/arch-x86/xen.h.new; \
echo "#define $id" >>compat/arch-x86/xen.h.new; \
echo "#include <xen/compat.h>" >>compat/arch-x86/xen.h.new; \
 \
echo "#pragma pack(4)" >>compat/arch-x86/xen.h.new; \
grep -v '^# [0-9]' compat/arch-x86/xen.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/arch-x86/xen.h.new; \
echo "#pragma pack()" >>compat/arch-x86/xen.h.new; \
echo "#endif /* $id */" >>compat/arch-x86/xen.h.new
mv -f compat/arch-x86/xen.h.new compat/arch-x86/xen.h
mkdir -p compat/arch-x86
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/arch-x86/xen-x86_32.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/arch-x86/xen-x86_32.c.new
mv -f compat/arch-x86/xen-x86_32.c.new compat/arch-x86/xen-x86_32.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/arch-x86/xen-x86_32.i compat/arch-x86/xen-x86_32.c
set -e; id=_$(echo compat/arch-x86/xen-x86_32.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/arch-x86/xen-x86_32.h.new; \
echo "#define $id" >>compat/arch-x86/xen-x86_32.h.new; \
echo "#include <xen/compat.h>" >>compat/arch-x86/xen-x86_32.h.new; \
 \
echo "#pragma pack(4)" >>compat/arch-x86/xen-x86_32.h.new; \
grep -v '^# [0-9]' compat/arch-x86/xen-x86_32.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/arch-x86/xen-x86_32.h.new; \
echo "#pragma pack()" >>compat/arch-x86/xen-x86_32.h.new; \
echo "#endif /* $id */" >>compat/arch-x86/xen-x86_32.h.new
mv -f compat/arch-x86/xen-x86_32.h.new compat/arch-x86/xen-x86_32.h
mkdir -p compat
grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' public/arch-x86_32.h | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-source.py >compat/arch-x86_32.c.new
mv -f compat/arch-x86_32.c.new compat/arch-x86_32.c
gcc -E -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -include public/xen-compat.h -m32 -o compat/arch-x86_32.i compat/arch-x86_32.c
set -e; id=_$(echo compat/arch-x86_32.h | tr '[:lower:]-/.' '[:upper:]___'); \
echo "#ifndef $id" >compat/arch-x86_32.h.new; \
echo "#define $id" >>compat/arch-x86_32.h.new; \
echo "#include <xen/compat.h>" >>compat/arch-x86_32.h.new; \
 \
echo "#pragma pack(4)" >>compat/arch-x86_32.h.new; \
grep -v '^# [0-9]' compat/arch-x86_32.i | \
/usr/pkg/bin/python2.7 /root/xen-4.2.0/xen/tools/compat-build-header.py | uniq >>compat/arch-x86_32.h.new; \
echo "#pragma pack()" >>compat/arch-x86_32.h.new; \
echo "#endif /* $id */" >>compat/arch-x86_32.h.new
mv -f compat/arch-x86_32.h.new compat/arch-x86_32.h
export PYTHON=/usr/pkg/bin/python2.7; \
grep -v '^[	 ]*#' xlat.lst | \
while read what name hdr; do \
	/bin/sh /root/xen-4.2.0/xen/tools/get-fields.sh "$what" compat_$name $(echo compat/$hdr | sed 's,@arch@,x86_32,g') || exit $?; \
done >compat/xlat.h.new
mv -f compat/xlat.h.new compat/xlat.h
for i in public/trace.h public/elfnote.h public/tmem.h public/platform.h public/physdev.h public/xen-compat.h public/grant_table.h public/callback.h public/sched.h public/memory.h public/features.h public/xen.h public/dom0_ops.h public/mem_event.h public/version.h public/event_channel.h public/xenoprof.h public/xencomm.h public/nmi.h public/kexec.h public/vcpu.h public/io/xenbus.h public/io/libxenvchan.h public/io/tpmif.h public/io/pciif.h public/io/usbif.h public/io/netif.h public/io/fbif.h public/io/fsif.h public/io/blkif.h public/io/console.h public/io/ring.h public/io/protocols.h public/io/kbdif.h public/io/xs_wire.h public/io/vscsiif.h public/hvm/params.h public/hvm/hvm_info_table.h public/hvm/ioreq.h public/hvm/hvm_op.h public/hvm/e820.h; do gcc -ansi -include stdint.h -Wall -W -Werror -S -o /dev/null -xc $i || exit 1; echo $i; done >headers.chk.new
mv headers.chk.new headers.chk
rm compat/xen.c compat/kexec.i compat/arch-x86_32.c compat/arch-x86/xen-x86_32.c compat/memory.c compat/sched.c compat/vcpu.c compat/xen.i compat/physdev.i compat/tmem.i compat/trace.i compat/features.i compat/callback.c compat/xencomm.i compat/arch-x86/xen.i compat/elfnote.c compat/arch-x86/xen-mca.i compat/version.i compat/event_channel.i compat/platform.i compat/kexec.c compat/tmem.c compat/nmi.i compat/elfnote.i compat/physdev.c compat/vcpu.i compat/trace.c compat/features.c compat/event_channel.c compat/grant_table.i compat/xencomm.c compat/arch-x86/xen.c compat/arch-x86/xen-mca.c compat/version.c compat/arch-x86_32.i compat/platform.c compat/memory.i compat/sched.i compat/nmi.c compat/callback.i compat/xenoprof.c compat/xenoprof.i compat/grant_table.c compat/arch-x86/xen-x86_32.i
gmake[3]: Leaving directory `/root/xen-4.2.0/xen/include'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C arch/x86 asm-offsets.s
gmake[3]: Entering directory `/root/xen-4.2.0/xen/arch/x86'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .asm-offsets.s.d -S -o asm-offsets.s x86_64/asm-offsets.c
gmake[3]: Leaving directory `/root/xen-4.2.0/xen/arch/x86'
gmake -f /root/xen-4.2.0/xen/Rules.mk include/asm-x86/asm-offsets.h
gmake[3]: Entering directory `/root/xen-4.2.0/xen'
gmake[3]: Leaving directory `/root/xen-4.2.0/xen'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C arch/x86 /root/xen-4.2.0/xen/xen
gmake[3]: Entering directory `/root/xen-4.2.0/xen/arch/x86'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/arch/x86/boot built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/arch/x86/boot'
gmake -f build32.mk reloc.S
gmake[5]: Entering directory `/root/xen-4.2.0/xen/arch/x86/boot'
gcc -O2 -fomit-frame-pointer -m32 -march=i686 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -fno-stack-protector -fno-exceptions -Werror -fno-builtin -msoft-float  -c -fpic reloc.c -o reloc.o
ld -melf_i386 -N -Ttext 0 -o reloc.lnk reloc.o
objcopy -O binary reloc.lnk reloc.bin
(od -v -t x reloc.bin | tr -s ' ' | awk 'NR > 1 {print s} {s=$0}' | \
sed 's/ /,0x/g' | sed 's/,0x$//' | sed 's/^[0-9]*,/ .long /') >reloc.S
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/boot'
gcc -D__ASSEMBLY__ -include /root/xen-4.2.0/xen/include/xen/config.h -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .head.o.d -c head.S -o head.o
ld    -melf_x86_64  -r -o built_in.o head.o
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/boot'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/arch/x86/efi built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/arch/x86/efi'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .stub.o.d -fshort-wchar -c stub.c -o stub.o
ld    -melf_x86_64  -r -o built_in.o stub.o
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/efi'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/common built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/common'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .bitmap.o.d -c bitmap.c -o bitmap.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .core_parking.o.d -c core_parking.c -o core_parking.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .cpu.o.d -c cpu.c -o cpu.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .cpupool.o.d -c cpupool.c -o cpupool.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .domctl.o.d -c domctl.c -o domctl.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .domain.o.d -c domain.c -o domain.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .event_channel.o.d -c event_channel.c -o event_channel.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .grant_table.o.d -c grant_table.c -o grant_table.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .irq.o.d -c irq.c -o irq.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .kernel.o.d -c kernel.c -o kernel.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .keyhandler.o.d -c keyhandler.c -o keyhandler.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .kexec.o.d -c kexec.c -o kexec.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .lib.o.d -c lib.c -o lib.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .memory.o.d -c memory.c -o memory.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .multicall.o.d -c multicall.c -o multicall.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-co
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLm5vdGlmaWVyLm8uZCAtYyBub3RpZmllci5jIC1vIG5vdGlmaWVyLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBhZ2VfYWxs
b2Muby5kIC1jIHBhZ2VfYWxsb2MuYyAtbyBwYWdlX2FsbG9jLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnByZWVtcHQuby5kIC1jIHBy
ZWVtcHQuYyAtbyBwcmVlbXB0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnJhbmdlc2V0Lm8uZCAtYyByYW5nZXNldC5jIC1vIHJhbmdl
c2V0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLnNjaGVkX2NyZWRpdC5vLmQgLWMgc2NoZWRfY3JlZGl0LmMgLW8gc2NoZWRfY3JlZGl0
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNjaGVkX2NyZWRpdDIuby5kIC1jIHNjaGVkX2NyZWRpdDIuYyAtbyBzY2hlZF9jcmVkaXQy
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNjaGVkX3NlZGYuby5kIC1jIHNjaGVkX3NlZGYuYyAtbyBzY2hlZF9zZWRmLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNjaGVkX2Fy
aW5jNjUzLm8uZCAtYyBzY2hlZF9hcmluYzY1My5jIC1vIHNjaGVkX2FyaW5jNjUzLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNjaGVk
dWxlLm8uZCAtYyBzY2hlZHVsZS5jIC1vIHNjaGVkdWxlLm8KZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNodXRkb3duLm8uZCAtYyBzaHV0
ZG93bi5jIC1vIHNodXRkb3duLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnNvZnRpcnEuby5kIC1jIHNvZnRpcnEuYyAtbyBzb2Z0aXJx
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNvcnQuby5kIC1jIHNvcnQuYyAtbyBzb3J0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNwaW5sb2NrLm8uZCAtYyBzcGlubG9jay5j
IC1vIHNwaW5sb2NrLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnN0b3BfbWFjaGluZS5vLmQgLWMgc3RvcF9tYWNoaW5lLmMgLW8gc3Rv
cF9tYWNoaW5lLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnN0cmluZy5vLmQgLWMgc3RyaW5nLmMgLW8gc3RyaW5nLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN5bWJvbHMuby5k
IC1jIHN5bWJvbHMuYyAtbyBzeW1ib2xzLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN5c2N0bC5vLmQgLWMgc3lzY3RsLmMgLW8gc3lz
Y3RsLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLnRhc2tsZXQuby5kIC1jIHRhc2tsZXQuYyAtbyB0YXNrbGV0Lm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRpbWUuby5kIC1jIHRp
bWUuYyAtbyB0aW1lLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnRpbWVyLm8uZCAtYyB0aW1lci5jIC1vIHRpbWVyLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRyYWNlLm8uZCAt
YyB0cmFjZS5jIC1vIHRyYWNlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnZlcnNpb24uby5kIC1jIHZlcnNpb24uYyAtbyB2ZXJzaW9u
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnZzcHJpbnRmLm8uZCAtYyB2c3ByaW50Zi5jIC1vIHZzcHJpbnRmLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLndhaXQuby5kIC1jIHdh
aXQuYyAtbyB3YWl0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnhtYWxsb2NfdGxzZi5vLmQgLWMgeG1hbGxvY190bHNmLmMgLW8geG1h
bGxvY190bHNmLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnJjdXBkYXRlLm8uZCAtYyByY3VwZGF0ZS5jIC1vIHJjdXBkYXRlLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRtZW0u
by5kIC1jIHRtZW0uYyAtbyB0bWVtLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8t
YnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5j
bHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0
aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZu
by1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRS
SUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNf
UEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRtZW1feGVuLm8uZCAtYyB0bWVtX3hlbi5jIC1vIHRt
ZW1feGVuLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLnJhZGl4LXRyZWUuby5kIC1jIHJhZGl4LXRyZWUuYyAtbyByYWRpeC10cmVlLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnJi
dHJlZS5vLmQgLWMgcmJ0cmVlLmMgLW8gcmJ0cmVlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmx6by5vLmQgLWMgbHpvLmMgLW8gbHpv
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lnhlbm9wcm9mLm8uZCAtYyB4ZW5vcHJvZi5jIC1vIHhlbm9wcm9mLm8KZ21ha2UgLWYgL3Jv
b3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBjb21wYXQgYnVpbHRfaW4ubwpnbWFrZVs1
XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9jb21w
YXQnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5kb21haW4uby5kIC1jIGRvbWFpbi5jIC1vIGRvbWFpbi5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5rZXJuZWwuby5kIC1jIGtlcm5l
bC5jIC1vIGtlcm5lbC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC5tZW1vcnkuby5kIC1jIG1lbW9yeS5jIC1vIG1lbW9yeS5vCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tdWx0aWNh
bGwuby5kIC1jIG11bHRpY2FsbC5jIC1vIG11bHRpY2FsbC5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC54bGF0Lm8uZCAtYyB4bGF0LmMg
LW8geGxhdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC50bWVtX3hlbi5vLmQgLWMgdG1lbV94ZW4uYyAtbyB0bWVtX3hlbi5vCmxkICAg
IC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBkb21haW4ubyBrZXJuZWwubyBtZW1v
cnkubyBtdWx0aWNhbGwubyB4bGF0Lm8gdG1lbV94ZW4ubwpnbWFrZVs1XTogTGVhdmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vY29tbW9uL2NvbXBhdCcKZ21ha2UgLWYg
L3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBodm0gYnVpbHRfaW4ubwpnbWFrZVs1
XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9odm0n
CmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdy
ZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50
ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAt
Zm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAt
bW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10
YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9f
WEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcu
aCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5z
YXZlLm8uZCAtYyBzYXZlLmMgLW8gc2F2ZS5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8g
YnVpbHRfaW4ubyBzYXZlLm8KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAveGVuL2NvbW1vbi9odm0nCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4v
UnVsZXMubWsgLUMgbGliZWxmIGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRpcmVj
dG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24vbGliZWxmJwpnY2MgLU8yIC1mb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xz
IC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBl
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90
ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAt
bW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hB
U19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRl
IC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAt
REhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliZWxmLXRvb2xzLm8u
ZCAtYyBsaWJlbGYtdG9vbHMuYyAtbyBsaWJlbGYtdG9vbHMubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliZWxmLWxvYWRlci5vLmQg
LWMgbGliZWxmLWxvYWRlci5jIC1vIGxpYmVsZi1sb2FkZXIubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliZWxmLWRvbWluZm8uby5k
IC1jIGxpYmVsZi1kb21pbmZvLmMgLW8gbGliZWxmLWRvbWluZm8ubwpsZCAgICAtbWVsZl94
ODZfNjQgIC1yIC1vIGxpYmVsZi10ZW1wLm8gbGliZWxmLXRvb2xzLm8gbGliZWxmLWxvYWRl
ci5vIGxpYmVsZi1kb21pbmZvLm8Kb2JqY29weSAtLXJlbmFtZS1zZWN0aW9uIC50ZXh0PS5p
bml0LnRleHQgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YT0uaW5pdC5kYXRhIGxpYmVsZi10ZW1w
Lm8gbGliZWxmLm8KbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGxpYmVs
Zi5vCmdtYWtlWzVdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9j
b21tb24vbGliZWxmJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuZGVjb21wcmVzcy5vLmQgLURJTklUX1NFQ1RJT05TX09OTFkgLWMgZGVj
b21wcmVzcy5jIC1vIGRlY29tcHJlc3MubwpvYmpkdW1wIC1oIGRlY29tcHJlc3MubyB8IHNl
ZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiBy
ZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0
YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJy
b3I6IHNpemUgb2YgZGVjb21wcmVzcy5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0
ICQoZXhwciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tz
LDAwKiwwLGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5k
Cm9iamNvcHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFt
ZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUt
c2VjdGlvbiAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNl
Y3Rpb24gLnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0
aW9uIC5yb2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlv
biAuZGF0YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwu
bG9jYWw9LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwu
cm89LmluaXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9j
YWw9LmluaXQuZGF0YS5yZWwucm8ubG9jYWwgZGVjb21wcmVzcy5vIGRlY29tcHJlc3MuaW5p
dC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5idW56aXAyLm8uZCAtRElOSVRfU0VDVElPTlNfT05MWSAtYyBidW56aXAyLmMgLW8gYnVu
emlwMi5vCm9iamR1bXAgLWggYnVuemlwMi5vIHwgc2VkIC1uICcvWzAtOV0ve3MsMDAqLDAs
ZztwfScgfCB3aGlsZSByZWFkIGlkeCBuYW1lIHN6IHJlc3Q7IGRvIFwKCWNhc2UgIiRuYW1l
IiBpbiBcCgkudGV4dHwudGV4dC4qfC5kYXRhfC5kYXRhLip8LmJzcykgXAoJCXRlc3QgJHN6
ICE9IDAgfHwgY29udGludWU7IFwKCQllY2hvICJFcnJvcjogc2l6ZSBvZiBidW56aXAyLm86
JG5hbWUgaXMgMHgkc3oiID4mMjsgXAoJCWV4aXQgJChleHByICRpZHggKyAxKTs7IFwKCWVz
YWM7IFwKZG9uZQpzZWQ6IDE6ICIvWzAtOV0ve3MsMDAqLDAsZztwfSI6IGV4dHJhIGNoYXJh
Y3RlcnMgYXQgdGhlIGVuZCBvZiBwIGNvbW1hbmQKb2JqY29weSAtLXJlbmFtZS1zZWN0aW9u
IC5yb2RhdGE9LmluaXQucm9kYXRhIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjE9
LmluaXQucm9kYXRhLnN0cjEuMSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4yPS5p
bml0LnJvZGF0YS5zdHIxLjIgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuND0uaW5p
dC5yb2RhdGEuc3RyMS40IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjg9LmluaXQu
cm9kYXRhLnN0cjEuOCAtLXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbD0uaW5pdC5kYXRhLnJl
bCAtLXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbC5sb2NhbD0uaW5pdC5kYXRhLnJlbC5sb2Nh
bCAtLXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbC5ybz0uaW5pdC5kYXRhLnJlbC5ybyAtLXJl
bmFtZS1zZWN0aW9uIC5kYXRhLnJlbC5yby5sb2NhbD0uaW5pdC5kYXRhLnJlbC5yby5sb2Nh
bCBidW56aXAyLm8gYnVuemlwMi5pbml0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnVueHouby5kIC1ESU5JVF9TRUNUSU9OU19PTkxZ
IC1jIHVueHouYyAtbyB1bnh6Lm8Kb2JqZHVtcCAtaCB1bnh6Lm8gfCBzZWQgLW4gJy9bMC05
XS97cywwMCosMCxnO3B9JyB8IHdoaWxlIHJlYWQgaWR4IG5hbWUgc3ogcmVzdDsgZG8gXAoJ
Y2FzZSAiJG5hbWUiIGluIFwKCS50ZXh0fC50ZXh0Lip8LmRhdGF8LmRhdGEuKnwuYnNzKSBc
CgkJdGVzdCAkc3ogIT0gMCB8fCBjb250aW51ZTsgXAoJCWVjaG8gIkVycm9yOiBzaXplIG9m
IHVueHoubzokbmFtZSBpcyAweCRzeiIgPiYyOyBcCgkJZXhpdCAkKGV4cHIgJGlkeCArIDEp
OzsgXAoJZXNhYzsgXApkb25lCnNlZDogMTogIi9bMC05XS97cywwMCosMCxnO3B9IjogZXh0
cmEgY2hhcmFjdGVycyBhdCB0aGUgZW5kIG9mIHAgY29tbWFuZApvYmpjb3B5IC0tcmVuYW1l
LXNlY3Rpb24gLnJvZGF0YT0uaW5pdC5yb2RhdGEgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRh
LnN0cjEuMT0uaW5pdC5yb2RhdGEuc3RyMS4xIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5z
dHIxLjI9LmluaXQucm9kYXRhLnN0cjEuMiAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3Ry
MS40PS5pbml0LnJvZGF0YS5zdHIxLjQgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEu
OD0uaW5pdC5yb2RhdGEuc3RyMS44IC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsPS5pbml0
LmRhdGEucmVsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLmxvY2FsPS5pbml0LmRhdGEu
cmVsLmxvY2FsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvPS5pbml0LmRhdGEucmVs
LnJvIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvLmxvY2FsPS5pbml0LmRhdGEucmVs
LnJvLmxvY2FsIHVueHoubyB1bnh6LmluaXQubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudW5sem1hLm8uZCAtRElOSVRfU0VDVElPTlNf
T05MWSAtYyB1bmx6bWEuYyAtbyB1bmx6bWEubwpvYmpkdW1wIC1oIHVubHptYS5vIHwgc2Vk
IC1uICcvWzAtOV0ve3MsMDAqLDAsZztwfScgfCB3aGlsZSByZWFkIGlkeCBuYW1lIHN6IHJl
c3Q7IGRvIFwKCWNhc2UgIiRuYW1lIiBpbiBcCgkudGV4dHwudGV4dC4qfC5kYXRhfC5kYXRh
Lip8LmJzcykgXAoJCXRlc3QgJHN6ICE9IDAgfHwgY29udGludWU7IFwKCQllY2hvICJFcnJv
cjogc2l6ZSBvZiB1bmx6bWEubzokbmFtZSBpcyAweCRzeiIgPiYyOyBcCgkJZXhpdCAkKGV4
cHIgJGlkeCArIDEpOzsgXAoJZXNhYzsgXApkb25lCnNlZDogMTogIi9bMC05XS97cywwMCos
MCxnO3B9IjogZXh0cmEgY2hhcmFjdGVycyBhdCB0aGUgZW5kIG9mIHAgY29tbWFuZApvYmpj
b3B5IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YT0uaW5pdC5yb2RhdGEgLS1yZW5hbWUtc2Vj
dGlvbiAucm9kYXRhLnN0cjEuMT0uaW5pdC5yb2RhdGEuc3RyMS4xIC0tcmVuYW1lLXNlY3Rp
b24gLnJvZGF0YS5zdHIxLjI9LmluaXQucm9kYXRhLnN0cjEuMiAtLXJlbmFtZS1zZWN0aW9u
IC5yb2RhdGEuc3RyMS40PS5pbml0LnJvZGF0YS5zdHIxLjQgLS1yZW5hbWUtc2VjdGlvbiAu
cm9kYXRhLnN0cjEuOD0uaW5pdC5yb2RhdGEuc3RyMS44IC0tcmVuYW1lLXNlY3Rpb24gLmRh
dGEucmVsPS5pbml0LmRhdGEucmVsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLmxvY2Fs
PS5pbml0LmRhdGEucmVsLmxvY2FsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvPS5p
bml0LmRhdGEucmVsLnJvIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvLmxvY2FsPS5p
bml0LmRhdGEucmVsLnJvLmxvY2FsIHVubHptYS5vIHVubHptYS5pbml0Lm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnVubHpvLm8uZCAt
RElOSVRfU0VDVElPTlNfT05MWSAtYyB1bmx6by5jIC1vIHVubHpvLm8Kb2JqZHVtcCAtaCB1
bmx6by5vIHwgc2VkIC1uICcvWzAtOV0ve3MsMDAqLDAsZztwfScgfCB3aGlsZSByZWFkIGlk
eCBuYW1lIHN6IHJlc3Q7IGRvIFwKCWNhc2UgIiRuYW1lIiBpbiBcCgkudGV4dHwudGV4dC4q
fC5kYXRhfC5kYXRhLip8LmJzcykgXAoJCXRlc3QgJHN6ICE9IDAgfHwgY29udGludWU7IFwK
CQllY2hvICJFcnJvcjogc2l6ZSBvZiB1bmx6by5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwK
CQlleGl0ICQoZXhwciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1sw
LTldL3tzLDAwKiwwLGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBj
b21tYW5kCm9iamNvcHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAt
LXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1y
ZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVu
YW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFt
ZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUt
c2VjdGlvbiAuZGF0YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0
YS5yZWwubG9jYWw9LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0
YS5yZWwucm89LmluaXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwu
cm8ubG9jYWw9LmluaXQuZGF0YS5yZWwucm8ubG9jYWwgdW5sem8ubyB1bmx6by5pbml0Lm8K
bGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGJpdG1hcC5vIGNvcmVfcGFy
a2luZy5vIGNwdS5vIGNwdXBvb2wubyBkb21jdGwubyBkb21haW4ubyBldmVudF9jaGFubmVs
Lm8gZ3JhbnRfdGFibGUubyBpcnEubyBrZXJuZWwubyBrZXloYW5kbGVyLm8ga2V4ZWMubyBs
aWIubyBtZW1vcnkubyBtdWx0aWNhbGwubyBub3RpZmllci5vIHBhZ2VfYWxsb2MubyBwcmVl
bXB0Lm8gcmFuZ2VzZXQubyBzY2hlZF9jcmVkaXQubyBzY2hlZF9jcmVkaXQyLm8gc2NoZWRf
c2VkZi5vIHNjaGVkX2FyaW5jNjUzLm8gc2NoZWR1bGUubyBzaHV0ZG93bi5vIHNvZnRpcnEu
byBzb3J0Lm8gc3BpbmxvY2subyBzdG9wX21hY2hpbmUubyBzdHJpbmcubyBzeW1ib2xzLm8g
c3lzY3RsLm8gdGFza2xldC5vIHRpbWUubyB0aW1lci5vIHRyYWNlLm8gdmVyc2lvbi5vIHZz
cHJpbnRmLm8gd2FpdC5vIHhtYWxsb2NfdGxzZi5vIHJjdXBkYXRlLm8gdG1lbS5vIHRtZW1f
eGVuLm8gcmFkaXgtdHJlZS5vIHJidHJlZS5vIGx6by5vIHhlbm9wcm9mLm8gY29tcGF0L2J1
aWx0X2luLm8gaHZtL2J1aWx0X2luLm8gbGliZWxmL2J1aWx0X2luLm8gZGVjb21wcmVzcy5p
bml0Lm8gYnVuemlwMi5pbml0Lm8gdW54ei5pbml0Lm8gdW5sem1hLmluaXQubyB1bmx6by5p
bml0Lm8KZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVu
L2NvbW1vbicKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyAvcm9v
dC94ZW4tNC4yLjAveGVuL2RyaXZlcnMgYnVpbHRfaW4ubwpnbWFrZVs0XTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMnCmdtYWtlIC1mIC9yb290
L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgY2hhciBidWlsdF9pbi5vCmdtYWtlWzVdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9jaGFyJwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuY29u
c29sZS5vLmQgLWMgY29uc29sZS5jIC1vIGNvbnNvbGUubwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubnMxNjU1MC5vLmQgLWMgbnMxNjU1
MC5jIC1vIG5zMTY1NTAubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGlu
IC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1X
ZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2Vu
ZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVs
dCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1X
bmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5j
aHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAt
bm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhS
T1VHSCAtTU1EIC1NRiAuc2VyaWFsLm8uZCAtYyBzZXJpYWwuYyAtbyBzZXJpYWwubwpsZCAg
ICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gY29uc29sZS5vIG5zMTY1NTAubyBz
ZXJpYWwubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94
ZW4vZHJpdmVycy9jaGFyJwpnbWFrZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1r
IC1DIGNwdWZyZXEgYnVpbHRfaW4ubwpnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvY3B1ZnJlcScKZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNwdWZyZXEuby5kIC1jIGNwdWZy
ZXEuYyAtbyBjcHVmcmVxLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRp
biAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAt
V2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdl
bmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1
bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAt
V25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3lu
Y2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUg
LW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RI
Uk9VR0ggLU1NRCAtTUYgLmNwdWZyZXFfb25kZW1hbmQuby5kIC1jIGNwdWZyZXFfb25kZW1h
bmQuYyAtbyBjcHVmcmVxX29uZGVtYW5kLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1m
bm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXgg
aW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhj
ZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMg
LWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9B
VFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURI
QVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNwdWZyZXFfbWlzY19nb3Zlcm5vcnMuby5kIC1j
IGNwdWZyZXFfbWlzY19nb3Zlcm5vcnMuYyAtbyBjcHVmcmVxX21pc2NfZ292ZXJub3JzLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnV0
aWxpdHkuby5kIC1jIHV0aWxpdHkuYyAtbyB1dGlsaXR5Lm8KbGQgICAgLW1lbGZfeDg2XzY0
ICAtciAtbyBidWlsdF9pbi5vIGNwdWZyZXEubyBjcHVmcmVxX29uZGVtYW5kLm8gY3B1ZnJl
cV9taXNjX2dvdmVybm9ycy5vIHV0aWxpdHkubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9jcHVmcmVxJwpnbWFrZSAtZiAvcm9v
dC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIHBjaSBidWlsdF9pbi5vCmdtYWtlWzVdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9wY2knCmdj
YyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1
bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXIt
YXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wY2ku
by5kIC1jIHBjaS5jIC1vIHBjaS5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRf
aW4ubyBwY2kubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC94ZW4vZHJpdmVycy9wY2knCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMu
bWsgLUMgcGFzc3Rocm91Z2ggYnVpbHRfaW4ubwpnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gnCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdS5vLmQg
LWMgaW9tbXUuYyAtbyBpb21tdS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5pby5vLmQgLWMgaW8uYyAtbyBpby5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wY2kuby5kIC1jIHBj
aS5jIC1vIHBjaS5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMg
dnRkIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZCcKZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmlvbW11Lm8uZCAtYyBpb21tdS5j
IC1vIGlvbW11Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLmRtYXIuby5kIC1jIGRtYXIuYyAtbyBkbWFyLm8KZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnV0aWxzLm8uZCAtYyB1dGls
cy5jIC1vIHV0aWxzLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnFpbnZhbC5vLmQgLWMgcWludmFsLmMgLW8gcWludmFsLm8KZ2NjIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFu
dC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0
aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVk
LXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAt
REdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFT
X0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmludHJlbWFw
Lm8uZCAtYyBpbnRyZW1hcC5jIC1vIGludHJlbWFwLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnF1aXJrcy5vLmQgLWMgcXVpcmtzLmMg
LW8gcXVpcmtzLm8KZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyB4
ODYgYnVpbHRfaW4ubwpnbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4NicKZ2NjIC1PMiAtZm9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAt
aXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVj
dG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1u
by1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNf
VklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURI
QVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnZ0ZC5vLmQgLWMgdnRkLmMg
LW8gdnRkLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLmF0cy5vLmQgLWMgYXRzLmMgLW8gYXRzLm8KbGQgICAgLW1lbGZfeDg2XzY0ICAt
ciAtbyBidWlsdF9pbi5vIHZ0ZC5vIGF0cy5vCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9y
eSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC94ODYnCmxk
ICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBpb21tdS5vIGRtYXIubyB1dGls
cy5vIHFpbnZhbC5vIGludHJlbWFwLm8gcXVpcmtzLm8geDg2L2J1aWx0X2luLm8KZ21ha2Vb
Nl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvdnRkJwpnbWFrZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1D
IGFtZCBidWlsdF9pbi5vCmdtYWtlWzZdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQnCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9pbml0Lm8uZCAtYyBp
b21tdV9pbml0LmMgLW8gaW9tbXVfaW5pdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9tYXAuby5kIC1jIGlvbW11X21hcC5j
IC1vIGlvbW11X21hcC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC5wY2lfYW1kX2lvbW11Lm8uZCAtYyBwY2lfYW1kX2lvbW11LmMgLW8g
cGNpX2FtZF9pb21tdS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC5pb21tdV9pbnRyLm8uZCAtYyBpb21tdV9pbnRyLmMgLW8gaW9tbXVf
aW50ci5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21t
b24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25v
LXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1m
bG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0
ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAt
ZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9j
b25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQg
LU1GIC5pb21tdV9jbWQuby5kIC1jIGlvbW11X2NtZC5jIC1vIGlvbW11X2NtZC5vCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9n
dWVzdC5vLmQgLWMgaW9tbXVfZ3Vlc3QuYyAtbyBpb21tdV9ndWVzdC5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pb21tdV9kZXRlY3Qu
by5kIC1ESU5JVF9TRUNUSU9OU19PTkxZIC1jIGlvbW11X2RldGVjdC5jIC1vIGlvbW11X2Rl
dGVjdC5vCm9iamR1bXAgLWggaW9tbXVfZGV0ZWN0Lm8gfCBzZWQgLW4gJy9bMC05XS97cyww
MCosMCxnO3B9JyB8IHdoaWxlIHJlYWQgaWR4IG5hbWUgc3ogcmVzdDsgZG8gXAoJY2FzZSAi
JG5hbWUiIGluIFwKCS50ZXh0fC50ZXh0Lip8LmRhdGF8LmRhdGEuKnwuYnNzKSBcCgkJdGVz
dCAkc3ogIT0gMCB8fCBjb250aW51ZTsgXAoJCWVjaG8gIkVycm9yOiBzaXplIG9mIGlvbW11
X2RldGVjdC5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQoZXhwciAkaWR4ICsg
MSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAwKiwwLGc7cH0iOiBl
eHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9iamNvcHkgLS1yZW5h
bWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1zZWN0aW9uIC5yb2Rh
dGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRh
LnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5z
dHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3Ry
MS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWw9Lmlu
aXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9jYWw9LmluaXQuZGF0
YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89LmluaXQuZGF0YS5y
ZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9LmluaXQuZGF0YS5y
ZWwucm8ubG9jYWwgaW9tbXVfZGV0ZWN0Lm8gaW9tbXVfZGV0ZWN0LmluaXQubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuaW9tbXVfYWNw
aS5vLmQgLURJTklUX1NFQ1RJT05TX09OTFkgLWMgaW9tbXVfYWNwaS5jIC1vIGlvbW11X2Fj
cGkubwpvYmpkdW1wIC1oIGlvbW11X2FjcGkubyB8IHNlZCAtbiAnL1swLTldL3tzLDAwKiww
LGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiByZXN0OyBkbyBcCgljYXNlICIkbmFt
ZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0YS4qfC5ic3MpIFwKCQl0ZXN0ICRz
eiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJyb3I6IHNpemUgb2YgaW9tbXVfYWNw
aS5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQoZXhwciAkaWR4ICsgMSk7OyBc
Cgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAwKiwwLGc7cH0iOiBleHRyYSBj
aGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9iamNvcHkgLS1yZW5hbWUtc2Vj
dGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3Ry
MS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEu
Mj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjQ9
LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS44PS5p
bml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWw9LmluaXQuZGF0
YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9jYWw9LmluaXQuZGF0YS5yZWwu
bG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89LmluaXQuZGF0YS5yZWwucm8g
LS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9LmluaXQuZGF0YS5yZWwucm8u
bG9jYWwgaW9tbXVfYWNwaS5vIGlvbW11X2FjcGkuaW5pdC5vCmxkICAgIC1tZWxmX3g4Nl82
NCAgLXIgLW8gYnVpbHRfaW4ubyBpb21tdV9pbml0Lm8gaW9tbXVfbWFwLm8gcGNpX2FtZF9p
b21tdS5vIGlvbW11X2ludHIubyBpb21tdV9jbWQubyBpb21tdV9ndWVzdC5vIGlvbW11X2Rl
dGVjdC5pbml0Lm8gaW9tbXVfYWNwaS5pbml0Lm8KZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kJwpnbWFr
ZSAtZiAvcm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIHg4NiBidWlsdF9pbi5vCmdt
YWtlWzZdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC94ODYnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0
aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUg
LVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NU
SFJPVUdIIC1NTUQgLU1GIC5hdHMuby5kIC1jIGF0cy5jIC1vIGF0cy5vCmxkICAgIC1tZWxm
X3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBhdHMubwpnbWFrZVs2XTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC94ODYnCmxk
ICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBpb21tdS5vIGlvLm8gcGNpLm8g
dnRkL2J1aWx0X2luLm8gYW1kL2J1aWx0X2luLm8geDg2L2J1aWx0X2luLm8KZ21ha2VbNV06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gnCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgYWNwaSBi
dWlsdF9pbi5vCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC94ZW4vZHJpdmVycy9hY3BpJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
Z2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVm
YXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25z
IC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFz
eW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVU
RSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTU1EIC1NRiAudGFibGVzLm8uZCAtYyB0YWJsZXMuYyAtbyB0YWJsZXMubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubnVt
YS5vLmQgLWMgbnVtYS5jIC1vIG51bWEubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAub3NsLm8uZCAtYyBvc2wuYyAtbyBvc2wubwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5k
YW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFy
aXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1z
dGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1y
ZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVz
IC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9f
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURI
QVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucG1zdGF0
Lm8uZCAtYyBwbXN0YXQuYyAtbyBwbXN0YXQubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuaHdyZWdzLm8uZCAtYyBod3JlZ3MuYyAtbyBo
d3JlZ3MubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAucmVib290Lm8uZCAtYyByZWJvb3QuYyAtbyByZWJvb3QubwpnbWFrZSAtZiAvcm9v
dC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIHRhYmxlcyBidWlsdF9pbi5vCmdtYWtlWzZd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9hY3Bp
L3RhYmxlcycKZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLnRidXRpbHMuby5kIC1jIHRidXRpbHMuYyAtbyB0YnV0aWxzLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRiZmFkdC5vLmQg
LURJTklUX1NFQ1RJT05TX09OTFkgLWMgdGJmYWR0LmMgLW8gdGJmYWR0Lm8Kb2JqZHVtcCAt
aCB0YmZhZHQubyB8IHNlZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVh
ZCBpZHggbmFtZSBzeiByZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRl
eHQuKnwuZGF0YXwuZGF0YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVl
OyBcCgkJZWNobyAiRXJyb3I6IHNpemUgb2YgdGJmYWR0Lm86JG5hbWUgaXMgMHgkc3oiID4m
MjsgXAoJCWV4aXQgJChleHByICRpZHggKyAxKTs7IFwKCWVzYWM7IFwKZG9uZQpzZWQ6IDE6
ICIvWzAtOV0ve3MsMDAqLDAsZztwfSI6IGV4dHJhIGNoYXJhY3RlcnMgYXQgdGhlIGVuZCBv
ZiBwIGNvbW1hbmQKb2JqY29weSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGE9LmluaXQucm9k
YXRhIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjE9LmluaXQucm9kYXRhLnN0cjEu
MSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4yPS5pbml0LnJvZGF0YS5zdHIxLjIg
LS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuND0uaW5pdC5yb2RhdGEuc3RyMS40IC0t
cmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjg9LmluaXQucm9kYXRhLnN0cjEuOCAtLXJl
bmFtZS1zZWN0aW9uIC5kYXRhLnJlbD0uaW5pdC5kYXRhLnJlbCAtLXJlbmFtZS1zZWN0aW9u
IC5kYXRhLnJlbC5sb2NhbD0uaW5pdC5kYXRhLnJlbC5sb2NhbCAtLXJlbmFtZS1zZWN0aW9u
IC5kYXRhLnJlbC5ybz0uaW5pdC5kYXRhLnJlbC5ybyAtLXJlbmFtZS1zZWN0aW9uIC5kYXRh
LnJlbC5yby5sb2NhbD0uaW5pdC5kYXRhLnJlbC5yby5sb2NhbCB0YmZhZHQubyB0YmZhZHQu
aW5pdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21t
b24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25v
LXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1J
/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbinstal.o.d -DINIT_SECTIONS_ONLY -c tbinstal.c -o tbinstal.o
objdump -h tbinstal.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbinstal.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbinstal.o tbinstal.init.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbxface.o.d -DINIT_SECTIONS_ONLY -c tbxface.c -o tbxface.o
objdump -h tbxface.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbxface.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbxface.o tbxface.init.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .tbxfroot.o.d -DINIT_SECTIONS_ONLY -c tbxfroot.c -o tbxfroot.o
objdump -h tbxfroot.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of tbxfroot.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local tbxfroot.o tbxfroot.init.o
ld    -melf_x86_64  -r -o built_in.o tbutils.o tbfadt.init.o tbinstal.init.o tbxface.init.o tbxfroot.init.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi/tables'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C utilities built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/drivers/acpi/utilities'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .utglobal.o.d -c utglobal.c -o utglobal.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .utmisc.o.d -DINIT_SECTIONS_ONLY -c utmisc.c -o utmisc.o
objdump -h utmisc.o | sed -n '/[0-9]/{s,00*,0,g;p}' | while read idx name sz rest; do \
	case "$name" in \
	.text|.text.*|.data|.data.*|.bss) \
		test $sz != 0 || continue; \
		echo "Error: size of utmisc.o:$name is 0x$sz" >&2; \
		exit $(expr $idx + 1);; \
	esac; \
done
sed: 1: "/[0-9]/{s,00*,0,g;p}": extra characters at the end of p command
objcopy --rename-section .rodata=.init.rodata --rename-section .rodata.str1.1=.init.rodata.str1.1 --rename-section .rodata.str1.2=.init.rodata.str1.2 --rename-section .rodata.str1.4=.init.rodata.str1.4 --rename-section .rodata.str1.8=.init.rodata.str1.8 --rename-section .data.rel=.init.data.rel --rename-section .data.rel.local=.init.data.rel.local --rename-section .data.rel.ro=.init.data.rel.ro --rename-section .data.rel.ro.local=.init.data.rel.ro.local utmisc.o utmisc.init.o
ld    -melf_x86_64  -r -o built_in.o utglobal.o utmisc.init.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi/utilities'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C apei built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/drivers/acpi/apei'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .erst.o.d -c erst.c -o erst.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .apei-base.o.d -c apei-base.c -o apei-base.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .apei-io.o.d -c apei-io.c -o apei-io.o
ld    -melf_x86_64  -r -o built_in.o erst.o apei-base.o apei-io.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi/apei'
ld    -melf_x86_64  -r -o built_in.o tables.o numa.o osl.o pmstat.o hwregs.o reboot.o tables/built_in.o utilities/built_in.o apei/built_in.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/drivers/acpi'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C video built_in.o
gmake[5]: Entering directory `/root/xen-4.2.0/xen/drivers/video'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .vga.o.d -c vga.c -o vga.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .font_8x14.o.d -c font_8x14.c -o font_8x14.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .font_8x16.o.d -c font_8x16.c -o font_8x16.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .font_8x8.o.d -c font_8x8.c -o font_8x8.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .vesa.o.d -c vesa.c -o vesa.o
ld    -melf_x86_64  -r -o built_in.o vga.o font_8x14.o font_8x16.o font_8x8.o vesa.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/drivers/video'
ld    -melf_x86_64  -r -o built_in.o char/built_in.o cpufreq/built_in.o pci/built_in.o passthrough/built_in.o acpi/built_in.o video/built_in.o
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/drivers'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/xsm built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/xsm'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .xsm_core.o.d -c xsm_core.c -o xsm_core.o
ld    -melf_x86_64  -r -o built_in.o xsm_core.o
gmake[4]: Leaving directory `/root/xen-4.2.0/xen/xsm'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C /root/xen-4.2.0/xen/arch/x86 built_in.o
gmake[4]: Entering directory `/root/xen-4.2.0/xen/arch/x86'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .apic.o.d -c apic.c -o apic.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .bitops.o.d -c bitops.c -o bitops.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .compat.o.d -c compat.c -o compat.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .debug.o.d -c debug.c -o debug.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .delay.o.d -c delay.c -o delay.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .domctl.o.d -c domctl.c -o domctl.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .domain.o.d -c domain.c -o domain.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .e820.o.d -c e820.c -o e820.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .extable.o.d -c extable.c -o extable.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .flushtlb.o.d -c flushtlb.c -o flushtlb.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .platform_hypercall.o.d -c platform_hypercall.c -o platform_hypercall.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .i387.o.d -c i387.c -o i387.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .i8259.o.d -c i8259.c -o i8259.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .io_apic.o.d -c io_apic.c -o io_apic.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .msi.o.d -c msi.c -o msi.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .ioport_emulate.o.d -c ioport_emulate.c -o ioport_emulate.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .irq.o.d -c irq.c -o irq.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF 
Lm1pY3JvY29kZV9hbWQuby5kIC1jIG1pY3JvY29kZV9hbWQuYyAtbyBtaWNyb2NvZGVfYW1k
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lm1pY3JvY29kZV9pbnRlbC5vLmQgLWMgbWljcm9jb2RlX2ludGVsLmMgLW8gbWljcm9jb2Rl
X2ludGVsLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLm1pY3JvY29kZS5vLmQgLWMgbWljcm9jb2RlLmMgLW8gbWljcm9jb2RlLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1tLm8u
ZCAtYyBtbS5jIC1vIG1tLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRp
biAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAt
V2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdl
bmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1
bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAt
V25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3lu
Y2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUg
LW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RI
Uk9VR0ggLU1NRCAtTUYgLm1wcGFyc2Uuby5kIC1jIG1wcGFyc2UuYyAtbyBtcHBhcnNlLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm5t
aS5vLmQgLWMgbm1pLmMgLW8gbm1pLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8t
YnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5j
bHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9t
YWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0
aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZu
by1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRS
SUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNf
UEFTU1RIUk9VR0ggLU1NRCAtTUYgLm51bWEuby5kIC1jIG51bWEuYyAtbyBudW1hLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBjaS5v
LmQgLWMgcGNpLmMgLW8gcGNpLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnBlcmNwdS5vLmQgLWMgcGVyY3B1LmMgLW8gcGVyY3B1Lm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBo
eXNkZXYuby5kIC1jIHBoeXNkZXYuYyAtbyBwaHlzZGV2Lm8KZ2NjIC1PMiAtZm9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
TkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdp
dGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9y
IC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1z
c2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklT
SUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNf
R0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNldHVwLm8uZCAtYyBzZXR1cC5j
IC1vIHNldHVwLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5v
LWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9y
IC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnNodXRkb3duLm8uZCAtYyBzaHV0ZG93bi5jIC1vIHNodXRkb3duLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnNtcC5v
LmQgLWMgc21wLmMgLW8gc21wLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnNtcGJvb3Quby5kIC1jIHNtcGJvb3QuYyAtbyBzbXBib290
Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAt
V3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9p
bnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
LnNyYXQuby5kIC1jIHNyYXQuYyAtbyBzcmF0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnN0cmluZy5vLmQgLWMgc3RyaW5nLmMgLW8g
c3RyaW5nLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLnN5c2N0bC5vLmQgLWMgc3lzY3RsLmMgLW8gc3lzY3RsLm8KZ2NjIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNs
cyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlw
ZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRpbWUuby5kIC1jIHRp
bWUuYyAtbyB0aW1lLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLnRyYWNlLm8uZCAtYyB0cmFjZS5jIC1vIHRyYWNlLm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRyYXBzLm8uZCAt
YyB0cmFwcy5jIC1vIHRyYXBzLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVp
bHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVk
ZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNo
LWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRl
ZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9u
cyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1h
c3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJV
VEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFT
U1RIUk9VR0ggLU1NRCAtTUYgLnVzZXJjb3B5Lm8uZCAtYyB1c2VyY29weS5jIC1vIHVzZXJj
b3B5Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLng4Nl9lbXVsYXRlLm8uZCAtYyB4ODZfZW11bGF0ZS5jIC1vIHg4Nl9lbXVsYXRlLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLm1h
Y2hpbmVfa2V4ZWMuby5kIC1jIG1hY2hpbmVfa2V4ZWMuYyAtbyBtYWNoaW5lX2tleGVjLm8K
Z2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3Jl
ZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRl
ci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1t
bm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRh
YmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19Y
RU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5o
IC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNy
YXNoLm8uZCAtYyBjcmFzaC5jIC1vIGNyYXNoLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnRib290Lm8uZCAtYyB0Ym9vdC5jIC1vIHRi
b290Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLmhwZXQuby5kIC1jIGhwZXQuYyAtbyBocGV0Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
YXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2Ug
LWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJ
TElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RC
U1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnhzdGF0ZS5vLmQgLWMgeHN0YXRlLmMg
LW8geHN0YXRlLm8KZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyBh
Y3BpIGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni9hY3BpJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAubGliLm8uZCAtYyBsaWIuYyAtbyBsaWIubwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5k
YW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFy
aXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1z
dGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1y
ZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVz
IC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9f
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURI
QVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucG93ZXIu
by5kIC1jIHBvd2VyLmMgLW8gcG93ZXIubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAuc3VzcGVuZC5vLmQgLWMgc3VzcGVuZC5jIC1vIHN1
c3BlbmQubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAuY3B1X2lkbGUuby5kIC1jIGNwdV9pZGxlLmMgLW8gY3B1X2lkbGUubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuY3B1aWRsZV9t
ZW51Lm8uZCAtYyBjcHVpZGxlX21lbnUuYyAtbyBjcHVpZGxlX21lbnUubwpnbWFrZSAtZiAv
cm9vdC94ZW4tNC4yLjAveGVuL1J1bGVzLm1rIC1DIGNwdWZyZXEgYnVpbHRfaW4ubwpnbWFr
ZVs2XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2
L2FjcGkvY3B1ZnJlcScKZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAt
Zm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vy
cm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLmNwdWZyZXEuby5kIC1jIGNwdWZyZXEuYyAtbyBjcHVmcmVxLm8KZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnBvd2Vy
bm93Lm8uZCAtYyBwb3dlcm5vdy5jIC1vIHBvd2Vybm93Lm8KbGQgICAgLW1lbGZfeDg2XzY0
ICAtciAtbyBidWlsdF9pbi5vIGNwdWZyZXEubyBwb3dlcm5vdy5vCmdtYWtlWzZdOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9hY3BpL2NwdWZy
ZXEnCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5ib290Lm8uZCAtRElOSVRfU0VDVElPTlNfT05MWSAtYyBib290LmMgLW8gYm9vdC5vCm9i
amR1bXAgLWggYm9vdC5vIHwgc2VkIC1uICcvWzAtOV0ve3MsMDAqLDAsZztwfScgfCB3aGls
ZSByZWFkIGlkeCBuYW1lIHN6IHJlc3Q7IGRvIFwKCWNhc2UgIiRuYW1lIiBpbiBcCgkudGV4
dHwudGV4dC4qfC5kYXRhfC5kYXRhLip8LmJzcykgXAoJCXRlc3QgJHN6ICE9IDAgfHwgY29u
dGludWU7IFwKCQllY2hvICJFcnJvcjogc2l6ZSBvZiBib290Lm86JG5hbWUgaXMgMHgkc3oi
ID4mMjsgXAoJCWV4aXQgJChleHByICRpZHggKyAxKTs7IFwKCWVzYWM7IFwKZG9uZQpzZWQ6
IDE6ICIvWzAtOV0ve3MsMDAqLDAsZztwfSI6IGV4dHJhIGNoYXJhY3RlcnMgYXQgdGhlIGVu
ZCBvZiBwIGNvbW1hbmQKb2JqY29weSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGE9LmluaXQu
cm9kYXRhIC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjE9LmluaXQucm9kYXRhLnN0
cjEuMSAtLXJlbmFtZS1zZWN0aW9uIC5yb2RhdGEuc3RyMS4yPS5pbml0LnJvZGF0YS5zdHIx
LjIgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhLnN0cjEuND0uaW5pdC5yb2RhdGEuc3RyMS40
IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YS5zdHIxLjg9LmluaXQucm9kYXRhLnN0cjEuOCAt
LXJlbmFtZS1zZWN0aW9uIC5kYXRhLnJlbD0uaW5pdC5kYXRhLnJlbCAtLXJlbmFtZS1zZWN0
aW9uIC5kYXRhLnJlbC5sb2NhbD0uaW5pdC5kYXRhLnJlbC5sb2NhbCAtLXJlbmFtZS1zZWN0
aW9uIC5kYXRhLnJlbC5ybz0uaW5pdC5kYXRhLnJlbC5ybyAtLXJlbmFtZS1zZWN0aW9uIC5k
YXRhLnJlbC5yby5sb2NhbD0uaW5pdC5kYXRhLnJlbC5yby5sb2NhbCBib290Lm8gYm9vdC5p
bml0Lm8KZ2NjIC1EX19BU1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0
IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5z
IC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5k
LXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1E
X19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYg
Lndha2V1cF9wcm90Lm8uZCAtYyB3YWtldXBfcHJvdC5TIC1vIHdha2V1cF9wcm90Lm8KbGQg
ICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5vIGxpYi5vIHBvd2VyLm8gc3VzcGVu
ZC5vIGNwdV9pZGxlLm8gY3B1aWRsZV9tZW51Lm8gY3B1ZnJlcS9idWlsdF9pbi5vIGJvb3Qu
aW5pdC5vIHdha2V1cF9wcm90Lm8KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2FjcGknCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgY3B1IGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVyaW5nIGRp
cmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9jcHUnCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5hbWQuby5kIC1jIGFt
ZC5jIC1vIGFtZC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZu
by1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJv
ciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmlj
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1t
c29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0
ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9u
b3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0
ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdI
IC1NTUQgLU1GIC5jb21tb24uby5kIC1jIGNvbW1vbi5jIC1vIGNvbW1vbi5vCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pbnRlbC5vLmQg
LWMgaW50ZWwuYyAtbyBpbnRlbC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5pbnRlbF9jYWNoZWluZm8uby5kIC1jIGludGVsX2NhY2hl
aW5mby5jIC1vIGludGVsX2NhY2hlaW5mby5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94
ZW4vUnVsZXMubWsgLUMgbWNoZWNrIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRp
cmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9jcHUvbWNoZWNrJwpnY2Mg
LU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5k
YW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFy
aXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1z
dGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1y
ZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVz
IC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9f
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURI
QVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYW1kX25v
bmZhdGFsLm8uZCAtYyBhbWRfbm9uZmF0YWwuYyAtbyBhbWRfbm9uZmF0YWwubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuazcuby5kIC1j
IGs3LmMgLW8gazcubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuYW1kX2s4Lm8uZCAtYyBhbWRfazguYyAtbyBhbWRfazgubwpnY2MgLU8y
IC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFj
ay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQt
em9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1E
R0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNf
QUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYW1kX2YxMC5v
LmQgLWMgYW1kX2YxMC5jIC1vIGFtZF9mMTAubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubWN0ZWxlbS5vLmQgLWMgbWN0ZWxlbS5jIC1v
IG1jdGVsZW0ubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8t
Y29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3Ig
LVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
ICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94
ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TU1EIC1NRiAubWNlLm8uZCAtYyBtY2UuYyAtbyBtY2UubwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAubWNlLWFwZWkuby5kIC1jIG1jZS1h
cGVpLmMgLW8gbWNlLWFwZWkubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
Z2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVm
YXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25z
IC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFz
eW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVU
RSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTU1EIC1NRiAubWNlX2ludGVsLm8uZCAtYyBtY2VfaW50ZWwuYyAtbyBtY2Vf
aW50ZWwubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAubWNlX2FtZF9xdWlya3Muby5kIC1jIG1jZV9hbWRfcXVpcmtzLmMgLW8gbWNlX2Ft
ZF9xdWlya3MubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8t
Y29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3Ig
LVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
ICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94
ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TU1EIC1NRiAubm9uLWZhdGFsLm8uZCAtYyBub24tZmF0YWwuYyAtbyBub24tZmF0YWwubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudm1j
ZS5vLmQgLWMgdm1jZS5jIC1vIHZtY2UubwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1
aWx0X2luLm8gYW1kX25vbmZhdGFsLm8gazcubyBhbWRfazgubyBhbWRfZjEwLm8gbWN0ZWxl
bS5vIG1jZS5vIG1jZS1hcGVpLm8gbWNlX2ludGVsLm8gbWNlX2FtZF9xdWlya3MubyBub24t
ZmF0YWwubyB2bWNlLm8KZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAveGVuL2FyY2gveDg2L2NwdS9tY2hlY2snCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgbXRyciBidWlsdF9pbi5vCmdtYWtlWzZdOiBFbnRlcmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYvY3B1L210cnInCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5nZW5lcmlj
Lm8uZCAtYyBnZW5lcmljLmMgLW8gZ2VuZXJpYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tYWluLm8uZCAtYyBtYWluLmMgLW8gbWFp
bi5vCmxkICAgIC1tZWxmX3g4Nl82NCAgLXIgLW8gYnVpbHRfaW4ubyBnZW5lcmljLm8gbWFp
bi5vCmdtYWtlWzZdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9h
cmNoL3g4Ni9jcHUvbXRycicKbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBidWlsdF9pbi5v
IGFtZC5vIGNvbW1vbi5vIGludGVsLm8gaW50ZWxfY2FjaGVpbmZvLm8gbWNoZWNrL2J1aWx0
X2luLm8gbXRyci9idWlsdF9pbi5vCmdtYWtlWzVdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9jcHUnCmdtYWtlIC1mIC9yb290L3hlbi00LjIu
MC94ZW4vUnVsZXMubWsgLUMgZ2VuYXBpYyBidWlsdF9pbi5vCmdtYWtlWzVdOiBFbnRlcmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYvZ2VuYXBpYycKZ2Nj
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVu
ZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1h
cml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8t
c3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8t
cmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxl
cyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5f
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1E
SEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmJpZ3Nt
cC5vLmQgLWMgYmlnc21wLmMgLW8gYmlnc21wLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLngyYXBpYy5vLmQgLWMgeDJhcGljLmMgLW8g
eDJhcGljLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNv
bW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1X
bm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0
LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1l
eHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMt
dW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5j
IC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1N
RCAtTUYgLmRlZmF1bHQuby5kIC1jIGRlZmF1bHQuYyAtbyBkZWZhdWx0Lm8KZ2NjIC1PMiAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1k
ZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAt
cGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpv
bmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdD
Q19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FD
UEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmRlbGl2ZXJ5Lm8u
ZCAtYyBkZWxpdmVyeS5jIC1vIGRlbGl2ZXJ5Lm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVH
IC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVm
aXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLnByb2JlLm8uZCAtYyBwcm9iZS5jIC1vIHBy
b2JlLm8KZ2NjIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZs
b2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRl
cm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53
aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1n
IC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2Nv
bmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAt
TUYgLnN1bW1pdC5vLmQgLWMgc3VtbWl0LmMgLW8gc3VtbWl0Lm8KbGQgICAgLW1lbGZfeDg2
XzY0ICAtciAtbyBidWlsdF9pbi5vIGJpZ3NtcC5vIHgyYXBpYy5vIGRlZmF1bHQubyBkZWxp
dmVyeS5vIHByb2JlLm8gc3VtbWl0Lm8KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2dlbmFwaWMnCmdtYWtlIC1mIC9yb290L3hl
bi00LjIuMC94ZW4vUnVsZXMubWsgLUMgaHZtIGJ1aWx0X2luLm8KZ21ha2VbNV06IEVudGVy
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9odm0nCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5hc2lkLm8u
ZCAtYyBhc2lkLmMgLW8gYXNpZC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5lbXVsYXRlLm8uZCAtYyBlbXVsYXRlLmMgLW8gZW11bGF0
ZS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24g
LVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBv
aW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC5ocGV0Lm8uZCAtYyBocGV0LmMgLW8gaHBldC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5odm0uby5kIC1jIGh2bS5jIC1vIGh2bS5v
CmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdy
ZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50
ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAt
Zm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAt
bW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10
YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9f
WEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcu
aCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5p
ODI1NC5vLmQgLWMgaTgyNTQuYyAtbyBpODI1NC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5v
LWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1m
cGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJ
VFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNY
IC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pbnRlcmNlcHQuby5kIC1jIGludGVyY2Vw
dC5jIC1vIGludGVyY2VwdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0
aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUg
LVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1n
ZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZh
dWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMg
LVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRF
IC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NU
SFJPVUdIIC1NTUQgLU1GIC5pby5vLmQgLWMgaW8uYyAtbyBpby5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5pcnEuby5kIC1jIGlycS5j
IC1vIGlycS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC5tdHJyLm8uZCAtYyBtdHJyLmMgLW8gbXRyci5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5uZXN0ZWRodm0uby5kIC1jIG5l
c3RlZGh2bS5jIC1vIG5lc3RlZGh2bS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5v
LWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGlu
Y2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4y
LjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2Vw
dGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1m
bm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRU
UklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFT
X1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wbXRpbWVyLm8uZCAtYyBwbXRpbWVyLmMgLW8gcG10
aW1lci5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21t
b24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25v
LXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1m
bG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0
ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVu
d2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAt
ZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9j
b25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQg
LU1GIC5xdWlya3Muby5kIC1jIHF1aXJrcy5jIC1vIHF1aXJrcy5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5ydGMuby5kIC1jIHJ0Yy5j
IC1vIHJ0Yy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1j
b21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAt
V25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUg
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29m
dC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQt
ZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3Vz
LXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGlu
YyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hl
bi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1N
TUQgLU1GIC5zYXZlLm8uZCAtYyBzYXZlLmMgLW8gc2F2ZS5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5zdGR2Z2Euby5kIC1jIHN0ZHZn
YS5jIC1vIHN0ZHZnYS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC52aW9hcGljLm8uZCAtYyB2aW9hcGljLmMgLW8gdmlvYXBpYy5vCmdj
YyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1
bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXIt
YXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52aXJp
ZGlhbi5vLmQgLWMgdmlyaWRpYW4uYyAtbyB2aXJpZGlhbi5vCmdjYyAtTzIgLWZvbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3
aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkv
cm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3Rv
ciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8t
c3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJ
U0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFT
X0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52bGFwaWMuby5kIC1jIHZsYXBp
Yy5jIC1vIHZsYXBpYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC52bXNpLm8uZCAtYyB2bXNpLmMgLW8gdm1zaS5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52cGljLm8uZCAtYyB2
cGljLmMgLW8gdnBpYy5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5l
cmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0
IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVdu
ZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNo
cm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1u
b3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJP
VUdIIC1NTUQgLU1GIC52cHQuby5kIC1jIHZwdC5jIC1vIHZwdC5vCmdjYyAtTzIgLWZvbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMg
LWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC52cG11Lm8uZCAtYyB2cG11
LmMgLW8gdnBtdS5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMg
c3ZtIGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni9odm0vc3ZtJwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcg
LWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZp
eCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1l
eGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBp
YyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZ
X0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAt
REhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYXNpZC5vLmQgLWMgYXNpZC5jIC1vIGFzaWQu
bwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1X
cmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2lu
dGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQg
LWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMg
LW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQt
dGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURf
X1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmln
LmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAu
ZW11bGF0ZS5vLmQgLWMgZW11bGF0ZS5jIC1vIGVtdWxhdGUubwpnY2MgLU8yIC1mb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1p
d2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1J
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0
b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5v
LXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19W
SVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhB
U19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuaW50ci5vLmQgLWMgaW50ci5j
IC1vIGludHIubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8t
Y29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3Ig
LVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
ICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNv
ZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVk
LWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91
cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRp
bmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94
ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAt
TU1EIC1NRiAubmVzdGVkc3ZtLm8uZCAtYyBuZXN0ZWRzdm0uYyAtbyBuZXN0ZWRzdm0ubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuc3Zt
Lm8uZCAtYyBzdm0uYyAtbyBzdm0ubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1i
dWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNs
dWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
ZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5v
LWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJ
QlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19Q
QVNTVEhST1VHSCAtTU1EIC1NRiAuc3ZtZGVidWcuby5kIC1jIHN2bWRlYnVnLmMgLW8gc3Zt
ZGVidWcubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29t
bW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVdu
by1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAt
SS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQt
ZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4
dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11
bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMg
LWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4v
Y29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1E
IC1NRiAudm1jYi5vLmQgLWMgdm1jYi5jIC1vIHZtY2IubwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudnBtdS5vLmQgLWMgdnBtdS5jIC1v
IHZwbXUubwpnY2MgLURfX0FTU0VNQkxZX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRFQlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1v
biAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhwcmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8t
cG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxv
YXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVy
bnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndp
bmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcg
LURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29u
ZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1N
RiAuZW50cnkuby5kIC1jIGVudHJ5LlMgLW8gZW50cnkubwpsZCAgICAtbWVsZl94ODZfNjQg
IC1yIC1vIGJ1aWx0X2luLm8gYXNpZC5vIGVtdWxhdGUubyBpbnRyLm8gbmVzdGVkc3ZtLm8g
c3ZtLm8gc3ZtZGVidWcubyB2bWNiLm8gdnBtdS5vIGVudHJ5Lm8KZ21ha2VbNl06IExlYXZp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2h2bS9zdm0nCmdt
YWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgdm14IGJ1aWx0X2luLm8K
Z21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4Ni9odm0vdm14JwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAuaW50ci5vLmQgLWMgaW50ci5jIC1vIGludHIubwpnY2MgLU8yIC1mb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xz
IC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBl
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5j
bHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90
ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAt
bW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hB
U19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRl
IC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAt
REhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucmVhbG1vZGUuby5kIC1j
IHJlYWxtb2RlLmMgLW8gcmVhbG1vZGUubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAudm1jcy5vLmQgLWMgdm1jcy5jIC1vIHZtY3Mubwpn
Y2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZu
by1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1u
by1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFi
bGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hF
Tl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgg
LURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAudm14
Lm8uZCAtYyB2bXguYyAtbyB2bXgubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1i
dWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNs
dWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
ZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5v
LWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJ
QlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94
ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19Q
QVNTVEhST1VHSCAtTU1EIC1NRiAudnBtdV9jb3JlMi5vLmQgLWMgdnBtdV9jb3JlMi5jIC1v
IHZwbXVfY29yZTIubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWlsdGluIC1m
bm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJy
b3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZ2VuZXJp
YyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVmYXVsdCAt
bXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1XbmVz
dGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFzeW5jaHJv
bm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVURSAtbm9z
dGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNTVEhST1VH
SCAtTU1EIC1NRiAudnZteC5vLmQgLWMgdnZteC5jIC1vIHZ2bXgubwpnY2MgLURfX0FTU0VN
QkxZX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcu
h -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -Wall -Wstrict-prototypes -Wdeclaration-after-statement -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .entry.o.d -c entry.S -o entry.o
ld    -melf_x86_64  -r -o built_in.o intr.o realmode.o vmcs.o vmx.o vpmu_core2.o vvmx.o entry.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/hvm/vmx'
ld    -melf_x86_64  -r -o built_in.o asid.o emulate.o hpet.o hvm.o i8254.o intercept.o io.o irq.o mtrr.o nestedhvm.o pmtimer.o quirks.o rtc.o save.o stdvga.o vioapic.o viridian.o vlapic.o vmsi.o vpic.o vpt.o vpmu.o svm/built_in.o vmx/built_in.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/hvm'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C mm built_in.o
gmake[5]: Entering directory `/root/xen-4.2.0/xen/arch/x86/mm'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .paging.o.d -c paging.c -o paging.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .p2m.o.d -c p2m.c -o p2m.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .p2m-pt.o.d -c p2m-pt.c -o p2m-pt.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .p2m-ept.o.d -c p2m-ept.c -o p2m-ept.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .p2m-pod.o.d -c p2m-pod.c -o p2m-pod.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_walk_2.o.d -DGUEST_PAGING_LEVELS=2 -c guest_walk.c -o guest_walk_2.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_walk_3.o.d -DGUEST_PAGING_LEVELS=3 -c guest_walk.c -o guest_walk_3.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_walk_4.o.d -DGUEST_PAGING_LEVELS=4 -c guest_walk.c -o guest_walk_4.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mem_event.o.d -c mem_event.c -o mem_event.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mem_paging.o.d -c mem_paging.c -o mem_paging.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mem_sharing.o.d -c mem_sharing.c -o mem_sharing.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mem_access.o.d -c mem_access.c -o mem_access.o
gmake -f /root/xen-4.2.0/xen/Rules.mk -C shadow built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/arch/x86/mm/shadow'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .common.o.d -c common.c -o common.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_2.o.d -DGUEST_PAGING_LEVELS=2 -c multi.c -o guest_2.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_3.o.d -DGUEST_PAGING_LEVELS=3 -c multi.c -o guest_3.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_4.o.d -DGUEST_PAGING_LEVELS=4 -c multi.c -o guest_4.o
ld    -melf_x86_64  -r -o built_in.o common.o guest_2.o guest_3.o guest_4.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/mm/shadow'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C hap built_in.o
gmake[6]: Entering directory `/root/xen-4.2.0/xen/arch/x86/mm/hap'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .hap.o.d -c hap.c -o hap.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_walk_2level.o.d -DGUEST_PAGING_LEVELS=2 -c guest_walk.c -o guest_walk_2level.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_walk_3level.o.d -DGUEST_PAGING_LEVELS=3 -c guest_walk.c -o guest_walk_3level.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .guest_walk_4level.o.d -DGUEST_PAGING_LEVELS=4 -c guest_walk.c -o guest_walk_4level.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .nested_hap.o.d -c nested_hap.c -o nested_hap.o
ld    -melf_x86_64  -r -o built_in.o hap.o guest_walk_2level.o guest_walk_3level.o guest_walk_4level.o nested_hap.o
gmake[6]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/mm/hap'
ld    -melf_x86_64  -r -o built_in.o paging.o p2m.o p2m-pt.o p2m-ept.o p2m-pod.o guest_walk_2.o guest_walk_3.o guest_walk_4.o mem_event.o mem_paging.o mem_sharing.o mem_access.o shadow/built_in.o hap/built_in.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/mm'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C oprofile built_in.o
gmake[5]: Entering directory `/root/xen-4.2.0/xen/arch/x86/oprofile'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .xenoprof.o.d -c xenoprof.c -o xenoprof.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .nmi_int.o.d -c nmi_int.c -o nmi_int.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .op_model_p4.o.d -c op_model_p4.c -o op_model_p4.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .op_model_ppro.o.d -c op_model_ppro.c -o op_model_ppro.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .op_model_athlon.o.d -c op_model_athlon.c -o op_model_athlon.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .backtrace.o.d -c backtrace.c -o backtrace.o
ld    -melf_x86_64  -r -o built_in.o xenoprof.o nmi_int.o op_model_p4.o op_model_ppro.o op_model_athlon.o backtrace.o
gmake[5]: Leaving directory `/root/xen-4.2.0/xen/arch/x86/oprofile'
gmake -f /root/xen-4.2.0/xen/Rules.mk -C x86_64 built_in.o
gmake[5]: Entering directory `/root/xen-4.2.0/xen/arch/x86/x86_64'
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mm.o.d -c mm.c -o mm.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .traps.o.d -c traps.c -o traps.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .machine_kexec.o.d -c machine_kexec.c -o machine_kexec.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .pci.o.d -c pci.c -o pci.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .acpi_mmcfg.o.d -c acpi_mmcfg.c -o acpi_mmcfg.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mmconf-fam10h.o.d -c mmconf-fam10h.c -o mmconf-fam10h.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE -nostdinc -g -D__XEN__ -include /root/xen-4.2.0/xen/include/xen/config.h -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -MMD -MF .mmconfig_64.o.d -c mmconfig_64.c -o mmconfig_64.o
gcc -O2 -fomit-frame-pointer -m64 -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -DNDEBUG -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/root/xen-4.2.0/xen/include  -I/root/xen-4.2.0/xen/include/asm-x86/mach-generic -I/root/xen-4.2.0/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5tbWNvbmZpZy1zaGFyZWQuby5kIC1jIG1tY29u
ZmlnLXNoYXJlZC5jIC1vIG1tY29uZmlnLXNoYXJlZC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5E
RUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRo
cHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9v
dC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jb21wYXQuby5kIC1jIGNvbXBhdC5j
IC1vIGNvbXBhdC5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZu
by1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJv
ciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmlj
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1t
c29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0
ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9u
b3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0
ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdI
IC1NTUQgLU1GIC5kb21haW4uby5kIC1jIGRvbWFpbi5jIC1vIGRvbWFpbi5vCmdjYyAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQt
ZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGgg
LXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5waHlzZGV2Lm8u
ZCAtYyBwaHlzZGV2LmMgLW8gcGh5c2Rldi5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5wbGF0Zm9ybV9oeXBlcmNhbGwuby5kIC1jIHBs
YXRmb3JtX2h5cGVyY2FsbC5jIC1vIHBsYXRmb3JtX2h5cGVyY2FsbC5vCmdjYyAtTzIgLWZv
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXBy
b3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25l
IC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0Nf
SEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJ
IC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jcHVfaWRsZS5vLmQg
LWMgY3B1X2lkbGUuYyAtbyBjcHVfaWRsZS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAt
Zm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4
IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14
ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYv
bWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGlj
IC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlf
QVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1E
SEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jcHVmcmVxLm8uZCAtYyBjcHVmcmVxLmMgLW8g
Y3B1ZnJlcS5vCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMubWsgLUMgY29t
cGF0IGJ1aWx0X2luLm8KZ21ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3hlbi9hcmNoL3g4Ni94ODZfNjQvY29tcGF0JwpnY2MgLURfX0FTU0VNQkxZX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtTzIg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1ETkRF
QlVHIC1mbm8tYnVpbHRpbiAtZm5vLWNvbW1vbiAtV3JlZHVuZGFudC1kZWNscyAtaXdpdGhw
cmVmaXggaW5jbHVkZSAtV2Vycm9yIC1Xbm8tcG9pbnRlci1hcml0aCAtcGlwZSAtSS9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20t
eDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZu
by1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAt
ZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklM
SVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJT
WCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAuZW50cnkuby5kIC1jIGVudHJ5LlMgLW8g
ZW50cnkubwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gZW50cnkubwpn
bWFrZVs2XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94
ODYveDg2XzY0L2NvbXBhdCcKZ2NjIC1EX19BU1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hl
bi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4g
LWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdl
cnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVy
aWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQg
LW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25l
c3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hy
b25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5v
c3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9V
R0ggLU1NRCAtTUYgLmVudHJ5Lm8uZCAtYyBlbnRyeS5TIC1vIGVudHJ5Lm8KZ2NjIC1EX19B
U1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29u
ZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVj
bHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBp
cGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJv
dGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUg
LW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19I
QVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVk
ZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkg
LURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmdwcl9zd2l0Y2guby5k
IC1jIGdwcl9zd2l0Y2guUyAtbyBncHJfc3dpdGNoLm8KZ2NjIC1EX19BU1NFTUJMWV9fIC1p
bmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1m
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJV
RyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJl
Zml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94
ZW4tNC4yLjAveGVuL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tYWNoLWdlbmVyaWMgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tYWNoLWRlZmF1bHQgLW1zb2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8t
ZXhjZXB0aW9ucyAtV25lc3RlZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZw
aWMgLWZuby1hc3luY2hyb25vdXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElU
WV9BVFRSSUJVVEUgLW5vc3RkaW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4t
NC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1gg
LURIQVNfUEFTU1RIUk9VR0ggLU1NRCAtTUYgLmNvbXBhdF9rZXhlYy5vLmQgLWMgY29tcGF0
X2tleGVjLlMgLW8gY29tcGF0X2tleGVjLm8KbGQgICAgLW1lbGZfeDg2XzY0ICAtciAtbyBi
dWlsdF9pbi5vIG1tLm8gdHJhcHMubyBtYWNoaW5lX2tleGVjLm8gcGNpLm8gYWNwaV9tbWNm
Zy5vIG1tY29uZi1mYW0xMGgubyBtbWNvbmZpZ182NC5vIG1tY29uZmlnLXNoYXJlZC5vIGNv
bXBhdC5vIGRvbWFpbi5vIHBoeXNkZXYubyBwbGF0Zm9ybV9oeXBlcmNhbGwubyBjcHVfaWRs
ZS5vIGNwdWZyZXEubyBjb21wYXQvYnVpbHRfaW4ubyBlbnRyeS5vIGdwcl9zd2l0Y2gubyBj
b21wYXRfa2V4ZWMubwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC94ZW4vYXJjaC94ODYveDg2XzY0JwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZu
by1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBp
bmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2
L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21h
Y2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAt
Zm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FU
VFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIu
MC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhB
U19QQVNTVEhST1VHSCAtTU1EIC1NRiAuYnppbWFnZS5vLmQgLURJTklUX1NFQ1RJT05TX09O
TFkgLWMgYnppbWFnZS5jIC1vIGJ6aW1hZ2UubwpvYmpkdW1wIC1oIGJ6aW1hZ2UubyB8IHNl
ZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiBy
ZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0
YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJy
b3I6IHNpemUgb2YgYnppbWFnZS5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQo
ZXhwciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAw
KiwwLGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9i
amNvcHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1z
ZWN0aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2Vj
dGlvbiAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rp
b24gLnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9u
IC5yb2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAu
ZGF0YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9j
YWw9LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89
LmluaXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9
LmluaXQuZGF0YS5yZWwucm8ubG9jYWwgYnppbWFnZS5vIGJ6aW1hZ2UuaW5pdC5vCmdjYyAt
RF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVu
L2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50
LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRo
IC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16
b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURH
Q0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19B
Q1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jbGVhcl9wYWdl
Lm8uZCAtYyBjbGVhcl9wYWdlLlMgLW8gY2xlYXJfcGFnZS5vCmdjYyAtRF9fQVNTRU1CTFlf
XyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZpZy5oIC1P
MiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRl
L2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2Fz
bS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAt
Zm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3Nl
IC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lC
SUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dE
QlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5jb3B5X3BhZ2Uuby5kIC1jIGNvcHlf
cGFnZS5TIC1vIGNvcHlfcGFnZS5vCmdjYyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1
aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1
ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAv
eGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFj
aC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1k
ZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8t
YXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklC
VVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BB
U1NUSFJPVUdIIC1NTUQgLU1GIC5kbWlfc2Nhbi5vLmQgLURJTklUX1NFQ1RJT05TX09OTFkg
LWMgZG1pX3NjYW4uYyAtbyBkbWlfc2Nhbi5vCm9iamR1bXAgLWggZG1pX3NjYW4ubyB8IHNl
ZCAtbiAnL1swLTldL3tzLDAwKiwwLGc7cH0nIHwgd2hpbGUgcmVhZCBpZHggbmFtZSBzeiBy
ZXN0OyBkbyBcCgljYXNlICIkbmFtZSIgaW4gXAoJLnRleHR8LnRleHQuKnwuZGF0YXwuZGF0
YS4qfC5ic3MpIFwKCQl0ZXN0ICRzeiAhPSAwIHx8IGNvbnRpbnVlOyBcCgkJZWNobyAiRXJy
b3I6IHNpemUgb2YgZG1pX3NjYW4ubzokbmFtZSBpcyAweCRzeiIgPiYyOyBcCgkJZXhpdCAk
KGV4cHIgJGlkeCArIDEpOzsgXAoJZXNhYzsgXApkb25lCnNlZDogMTogIi9bMC05XS97cyww
MCosMCxnO3B9IjogZXh0cmEgY2hhcmFjdGVycyBhdCB0aGUgZW5kIG9mIHAgY29tbWFuZApv
Ympjb3B5IC0tcmVuYW1lLXNlY3Rpb24gLnJvZGF0YT0uaW5pdC5yb2RhdGEgLS1yZW5hbWUt
c2VjdGlvbiAucm9kYXRhLnN0cjEuMT0uaW5pdC5yb2RhdGEuc3RyMS4xIC0tcmVuYW1lLXNl
Y3Rpb24gLnJvZGF0YS5zdHIxLjI9LmluaXQucm9kYXRhLnN0cjEuMiAtLXJlbmFtZS1zZWN0
aW9uIC5yb2RhdGEuc3RyMS40PS5pbml0LnJvZGF0YS5zdHIxLjQgLS1yZW5hbWUtc2VjdGlv
biAucm9kYXRhLnN0cjEuOD0uaW5pdC5yb2RhdGEuc3RyMS44IC0tcmVuYW1lLXNlY3Rpb24g
LmRhdGEucmVsPS5pbml0LmRhdGEucmVsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLmxv
Y2FsPS5pbml0LmRhdGEucmVsLmxvY2FsIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJv
PS5pbml0LmRhdGEucmVsLnJvIC0tcmVuYW1lLXNlY3Rpb24gLmRhdGEucmVsLnJvLmxvY2Fs
PS5pbml0LmRhdGEucmVsLnJvLmxvY2FsIGRtaV9zY2FuLm8gZG1pX3NjYW4uaW5pdC5vCmdj
YyAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1
bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXIt
YXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5kb21h
aW5fYnVpbGQuby5kIC1ESU5JVF9TRUNUSU9OU19PTkxZIC1jIGRvbWFpbl9idWlsZC5jIC1v
IGRvbWFpbl9idWlsZC5vCm9iamR1bXAgLWggZG9tYWluX2J1aWxkLm8gfCBzZWQgLW4gJy9b
MC05XS97cywwMCosMCxnO3B9JyB8IHdoaWxlIHJlYWQgaWR4IG5hbWUgc3ogcmVzdDsgZG8g
XAoJY2FzZSAiJG5hbWUiIGluIFwKCS50ZXh0fC50ZXh0Lip8LmRhdGF8LmRhdGEuKnwuYnNz
KSBcCgkJdGVzdCAkc3ogIT0gMCB8fCBjb250aW51ZTsgXAoJCWVjaG8gIkVycm9yOiBzaXpl
IG9mIGRvbWFpbl9idWlsZC5vOiRuYW1lIGlzIDB4JHN6IiA+JjI7IFwKCQlleGl0ICQoZXhw
ciAkaWR4ICsgMSk7OyBcCgllc2FjOyBcCmRvbmUKc2VkOiAxOiAiL1swLTldL3tzLDAwKiww
LGc7cH0iOiBleHRyYSBjaGFyYWN0ZXJzIGF0IHRoZSBlbmQgb2YgcCBjb21tYW5kCm9iamNv
cHkgLS1yZW5hbWUtc2VjdGlvbiAucm9kYXRhPS5pbml0LnJvZGF0YSAtLXJlbmFtZS1zZWN0
aW9uIC5yb2RhdGEuc3RyMS4xPS5pbml0LnJvZGF0YS5zdHIxLjEgLS1yZW5hbWUtc2VjdGlv
biAucm9kYXRhLnN0cjEuMj0uaW5pdC5yb2RhdGEuc3RyMS4yIC0tcmVuYW1lLXNlY3Rpb24g
LnJvZGF0YS5zdHIxLjQ9LmluaXQucm9kYXRhLnN0cjEuNCAtLXJlbmFtZS1zZWN0aW9uIC5y
b2RhdGEuc3RyMS44PS5pbml0LnJvZGF0YS5zdHIxLjggLS1yZW5hbWUtc2VjdGlvbiAuZGF0
YS5yZWw9LmluaXQuZGF0YS5yZWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwubG9jYWw9
LmluaXQuZGF0YS5yZWwubG9jYWwgLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm89Lmlu
aXQuZGF0YS5yZWwucm8gLS1yZW5hbWUtc2VjdGlvbiAuZGF0YS5yZWwucm8ubG9jYWw9Lmlu
aXQuZGF0YS5yZWwucm8ubG9jYWwgZG9tYWluX2J1aWxkLm8gZG9tYWluX2J1aWxkLmluaXQu
bwpsZCAgICAtbWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gYXBpYy5vIGJpdG9wcy5v
IGNvbXBhdC5vIGRlYnVnLm8gZGVsYXkubyBkb21jdGwubyBkb21haW4ubyBlODIwLm8gZXh0
YWJsZS5vIGZsdXNodGxiLm8gcGxhdGZvcm1faHlwZXJjYWxsLm8gaTM4Ny5vIGk4MjU5Lm8g
aW9fYXBpYy5vIG1zaS5vIGlvcG9ydF9lbXVsYXRlLm8gaXJxLm8gbWljcm9jb2RlX2FtZC5v
IG1pY3JvY29kZV9pbnRlbC5vIG1pY3JvY29kZS5vIG1tLm8gbXBwYXJzZS5vIG5taS5vIG51
bWEubyBwY2kubyBwZXJjcHUubyBwaHlzZGV2Lm8gc2V0dXAubyBzaHV0ZG93bi5vIHNtcC5v
IHNtcGJvb3QubyBzcmF0Lm8gc3RyaW5nLm8gc3lzY3RsLm8gdGltZS5vIHRyYWNlLm8gdHJh
cHMubyB1c2VyY29weS5vIHg4Nl9lbXVsYXRlLm8gbWFjaGluZV9rZXhlYy5vIGNyYXNoLm8g
dGJvb3QubyBocGV0Lm8geHN0YXRlLm8gYWNwaS9idWlsdF9pbi5vIGNwdS9idWlsdF9pbi5v
IGdlbmFwaWMvYnVpbHRfaW4ubyBodm0vYnVpbHRfaW4ubyBtbS9idWlsdF9pbi5vIG9wcm9m
aWxlL2J1aWx0X2luLm8geDg2XzY0L2J1aWx0X2luLm8gYnppbWFnZS5pbml0Lm8gY2xlYXJf
cGFnZS5vIGNvcHlfcGFnZS5vIGRtaV9zY2FuLmluaXQubyBkb21haW5fYnVpbGQuaW5pdC5v
CmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4NicKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAtQyAvcm9vdC94
ZW4tNC4yLjAveGVuL2NyeXB0byBidWlsdF9pbi5vCmdtYWtlWzRdOiBFbnRlcmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vY3J5cHRvJwpnY2MgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURO
REVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0
aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVk
ZS9hc20teDg2L21hY2gtZ2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9h
c20teDg2L21hY2gtZGVmYXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3Ig
LWZuby1leGNlcHRpb25zIC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNz
ZSAtZnBpYyAtZm5vLWFzeW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJ
QklMSVRZX0FUVFJJQlVURSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290
L3hlbi00LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19H
REJTWCAtREhBU19QQVNTVEhST1VHSCAtTU1EIC1NRiAucmlqbmRhZWwuby5kIC1jIHJpam5k
YWVsLmMgLW8gcmlqbmRhZWwubwpnY2MgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLUROREVCVUcgLWZuby1idWls
dGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRl
IC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hl
bi9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gt
Z2VuZXJpYyAtSS9yb290L3hlbi00LjIuMC94ZW4vaW5jbHVkZS9hc20teDg2L21hY2gtZGVm
YXVsdCAtbXNvZnQtZmxvYXQgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25z
IC1XbmVzdGVkLWV4dGVybnMgLW1uby1yZWQtem9uZSAtbW5vLXNzZSAtZnBpYyAtZm5vLWFz
eW5jaHJvbm91cy11bndpbmQtdGFibGVzIC1ER0NDX0hBU19WSVNJQklMSVRZX0FUVFJJQlVU
RSAtbm9zdGRpbmMgLWcgLURfX1hFTl9fIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC94ZW4v
aW5jbHVkZS94ZW4vY29uZmlnLmggLURIQVNfQUNQSSAtREhBU19HREJTWCAtREhBU19QQVNT
VEhST1VHSCAtTU1EIC1NRiAudm1hYy5vLmQgLWMgdm1hYy5jIC1vIHZtYWMubwpsZCAgICAt
bWVsZl94ODZfNjQgIC1yIC1vIGJ1aWx0X2luLm8gcmlqbmRhZWwubyB2bWFjLm8KZ21ha2Vb
NF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAveGVuL2NyeXB0bycKbGQg
ICAgLW1lbGZfeDg2XzY0ICAtciAtbyBwcmVsaW5rLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9h
cmNoL3g4Ni9ib290L2J1aWx0X2luLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9l
ZmkvYnVpbHRfaW4ubyAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9idWlsdF9pbi5vIC9y
b290L3hlbi00LjIuMC94ZW4vZHJpdmVycy9idWlsdF9pbi5vIC9yb290L3hlbi00LjIuMC94
ZW4veHNtL2J1aWx0X2luLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNoL3g4Ni9idWlsdF9p
bi5vIC9yb290L3hlbi00LjIuMC94ZW4vY3J5cHRvL2J1aWx0X2luLm8KZ2NjIC1QIC1FIC1V
aTM4NiAtRF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC54ZW4u
bGRzLmQgLW8geGVuLmxkcyB4ZW4ubGRzLlMKc2VkIC1lICdzL3hlblwubGRzXC5vOi94ZW5c
LmxkczovZycgPC54ZW4ubGRzLmQgPi54ZW4ubGRzLmQubmV3Cm12IC1mIC54ZW4ubGRzLmQu
bmV3IC54ZW4ubGRzLmQKZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hlbi9SdWxlcy5tayAt
QyAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbiBzeW1ib2xzLWR1bW15Lm8KZ21ha2VbNF06
IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24nCmdjYyAt
TzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1z
dGF0ZW1lbnQgICAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZuby1jb21tb24gLVdyZWR1bmRh
bnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJvciAtV25vLXBvaW50ZXItYXJp
dGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4w
L3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJl
ZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMg
LURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18g
LWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhB
U19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5zeW1ib2xz
LWR1bW15Lm8uZCAtYyBzeW1ib2xzLWR1bW15LmMgLW8gc3ltYm9scy1kdW1teS5vCmdtYWtl
WzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9jb21tb24nCmxk
ICAgIC1tZWxmX3g4Nl82NCAgLVQgeGVuLmxkcyAtTiBwcmVsaW5rLm8gXAogICAgL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9jb21tb24vc3ltYm9scy1kdW1teS5vIC1vIC9yb290L3hlbi00LjIu
MC94ZW4vLnhlbi1zeW1zLjAKbm0gLW4gL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMu
MCB8IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9scyA+L3Jvb3QveGVuLTQuMi4w
L3hlbi8ueGVuLXN5bXMuMC5TCmdtYWtlIC1mIC9yb290L3hlbi00LjIuMC94ZW4vUnVsZXMu
bWsgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMC5vCmdtYWtlWzRdOiBFbnRlcmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmdjYyAtRF9fQVNT
RU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUveGVuL2NvbmZp
Zy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVkdW5kYW50LWRlY2xz
IC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVyLWFyaXRoIC1waXBl
IC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9p
bmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNs
dWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5vLXN0YWNrLXByb3Rl
Y3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5vLXJlZC16b25lIC1t
bm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJsZXMgLURHQ0NfSEFT
X1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVOX18gLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAtREhBU19BQ1BJIC1E
SEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC4ueGVuLXN5bXMuMC5vLmQg
LWMgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMC5TIC1vIC9yb290L3hlbi00LjIu
MC94ZW4vLnhlbi1zeW1zLjAubwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmxkICAgIC1tZWxmX3g4Nl82NCAgLVQgeGVuLmxk
cyAtTiBwcmVsaW5rLm8gXAogICAgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMC5v
IC1vIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi1zeW1zLjEKbm0gLW4gL3Jvb3QveGVuLTQu
Mi4wL3hlbi8ueGVuLXN5bXMuMSB8IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9s
cyA+L3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMS5TCmdtYWtlIC1mIC9yb290L3hl
bi00LjIuMC94ZW4vUnVsZXMubWsgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMS5v
CmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJj
aC94ODYnCmdjYyAtRF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVu
L2luY2x1ZGUveGVuL2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9u
IC1XcmVkdW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1w
b2ludGVyLWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3Qv
eGVuLTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9h
dCAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJu
cyAtbW5vLXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2lu
ZC10YWJsZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAt
RF9fWEVOX18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25m
aWcuaCAtREhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1G
IC4ueGVuLXN5bXMuMS5vLmQgLWMgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuMS5T
IC1vIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi1zeW1zLjEubwpnbWFrZVs0XTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94ODYnCmxkICAgIC1tZWxm
X3g4Nl82NCAgLVQgeGVuLmxkcyAtTiBwcmVsaW5rLm8gXAogICAgL3Jvb3QveGVuLTQuMi4w
L3hlbi8ueGVuLXN5bXMuMS5vIC1vIC9yb290L3hlbi00LjIuMC94ZW4veGVuLXN5bXMKcm0g
LWYgL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLXN5bXMuWzAtOV0qCjogbGQgICAgLW1lbGZf
eDg2XzY0ICAtciAtbyBwcmVsaW5rLWVmaS5vIC9yb290L3hlbi00LjIuMC94ZW4vYXJjaC94
ODYvYm9vdC9idWlsdF9pbi5vIC9yb290L3hlbi00LjIuMC94ZW4vY29tbW9uL2J1aWx0X2lu
Lm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9kcml2ZXJzL2J1aWx0X2luLm8gL3Jvb3QveGVuLTQu
Mi4wL3hlbi94c20vYnVpbHRfaW4ubyAvcm9vdC94ZW4tNC4yLjAveGVuL2FyY2gveDg2L2J1
aWx0X2luLm8gL3Jvb3QveGVuLTQuMi4wL3hlbi9jcnlwdG8vYnVpbHRfaW4ubyBlZmkvYm9v
dC5pbml0Lm8gZWZpL3J1bnRpbWUubyBlZmkvY29tcGF0Lm8KZ2NjIC1QIC1FIC1VaTM4NiAt
REVGSSAtRF9fQVNTRU1CTFlfXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUveGVuL2NvbmZpZy5oIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLUROREVCVUcgLWZuby1idWlsdGluIC1mbm8tY29tbW9uIC1XcmVk
dW5kYW50LWRlY2xzIC1pd2l0aHByZWZpeCBpbmNsdWRlIC1XZXJyb3IgLVduby1wb2ludGVy
LWFyaXRoIC1waXBlIC1JL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1nZW5lcmljIC1JL3Jvb3QveGVuLTQu
Mi4wL3hlbi9pbmNsdWRlL2FzbS14ODYvbWFjaC1kZWZhdWx0IC1tc29mdC1mbG9hdCAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLVduZXN0ZWQtZXh0ZXJucyAtbW5v
LXJlZC16b25lIC1tbm8tc3NlIC1mcGljIC1mbm8tYXN5bmNocm9ub3VzLXVud2luZC10YWJs
ZXMgLURHQ0NfSEFTX1ZJU0lCSUxJVFlfQVRUUklCVVRFIC1ub3N0ZGluYyAtZyAtRF9fWEVO
X18gLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3hlbi9pbmNsdWRlL3hlbi9jb25maWcuaCAt
REhBU19BQ1BJIC1ESEFTX0dEQlNYIC1ESEFTX1BBU1NUSFJPVUdIIC1NTUQgLU1GIC5lZmku
bGRzLmQgLW8gZWZpLmxkcyB4ZW4ubGRzLlMKc2VkIC1lICdzL2VmaVwubGRzXC5vOi9lZmlc
LmxkczovZycgPC5lZmkubGRzLmQgPi5lZmkubGRzLmQubmV3Cm12IC1mIC5lZmkubGRzLmQu
bmV3IC5lZmkubGRzLmQKZ2NjIC1EX19BU1NFTUJMWV9fIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC94ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmggLU8yIC1mb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtRE5ERUJVRyAtZm5vLWJ1aWx0aW4gLWZu
by1jb21tb24gLVdyZWR1bmRhbnQtZGVjbHMgLWl3aXRocHJlZml4IGluY2x1ZGUgLVdlcnJv
ciAtV25vLXBvaW50ZXItYXJpdGggLXBpcGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWdlbmVyaWMg
LUkvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUvYXNtLXg4Ni9tYWNoLWRlZmF1bHQgLW1z
b2Z0LWZsb2F0IC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtV25lc3Rl
ZC1leHRlcm5zIC1tbm8tcmVkLXpvbmUgLW1uby1zc2UgLWZwaWMgLWZuby1hc3luY2hyb25v
dXMtdW53aW5kLXRhYmxlcyAtREdDQ19IQVNfVklTSUJJTElUWV9BVFRSSUJVVEUgLW5vc3Rk
aW5jIC1nIC1EX19YRU5fXyAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAveGVuL2luY2x1ZGUv
eGVuL2NvbmZpZy5oIC1ESEFTX0FDUEkgLURIQVNfR0RCU1ggLURIQVNfUEFTU1RIUk9VR0gg
LU1NRCAtTUYgLnJlbG9jcy1kdW1teS5vLmQgLWMgZWZpL3JlbG9jcy1kdW1teS5TIC1vIGVm
aS9yZWxvY3MtZHVtbXkubwpnY2MgLVdhbGwgLVdlcnJvciAtV3N0cmljdC1wcm90b3R5cGVz
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtZm5vLXN0cmljdC1hbGlhc2luZyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtZyAtbyBlZmkvbWtyZWxvYyBlZmkvbWtyZWxvYy5j
CjogbGQgLW1pMzg2cGVwIC0tc3Vic3lzdGVtPTEwIC0taW1hZ2UtYmFzZT0weGZmZmY4MmM0
ODAwMDAwMDAgLS1zdGFjaz0wLDAgLS1oZWFwPTAsMCAtLXN0cmlwLWRlYnVnIC0tc2VjdGlv
bi1hbGlnbm1lbnQ9MHgyMDAwMDAgLS1maWxlLWFsaWdubWVudD0weDIwIC0tbWFqb3ItaW1h
Z2UtdmVyc2lvbj00IC0tbWlub3ItaW1hZ2UtdmVyc2lvbj0yIC0tbWFqb3Itb3MtdmVyc2lv
bj0yIC0tbWlub3Itb3MtdmVyc2lvbj0wIC0tbWFqb3Itc3Vic3lzdGVtLXZlcnNpb249MiAt
LW1pbm9yLXN1YnN5c3RlbS12ZXJzaW9uPTAgLVQgZWZpLmxkcyAtTiBwcmVsaW5rLWVmaS5v
IGVmaS9yZWxvY3MtZHVtbXkubyAvcm9vdC94ZW4tNC4yLjAveGVuL2NvbW1vbi9zeW1ib2xz
LWR1bW15Lm8gLW8gL3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4weGZmZmY4MmM0ODAw
MDAwMDAuMCAmJiAgOiBsZCAtbWkzODZwZXAgLS1zdWJzeXN0ZW09MTAgLS1pbWFnZS1iYXNl
PTB4ZmZmZjgyYzRjMDAwMDAwMCAtLXN0YWNrPTAsMCAtLWhlYXA9MCwwIC0tc3RyaXAtZGVi
dWcgLS1zZWN0aW9uLWFsaWdubWVudD0weDIwMDAwMCAtLWZpbGUtYWxpZ25tZW50PTB4MjAg
LS1tYWpvci1pbWFnZS12ZXJzaW9uPTQgLS1taW5vci1pbWFnZS12ZXJzaW9uPTIgLS1tYWpv
ci1vcy12ZXJzaW9uPTIgLS1taW5vci1vcy12ZXJzaW9uPTAgLS1tYWpvci1zdWJzeXN0ZW0t
dmVyc2lvbj0yIC0tbWlub3Itc3Vic3lzdGVtLXZlcnNpb249MCAtVCBlZmkubGRzIC1OIHBy
ZWxpbmstZWZpLm8gZWZpL3JlbG9jcy1kdW1teS5vIC9yb290L3hlbi00LjIuMC94ZW4vY29t
bW9uL3N5bWJvbHMtZHVtbXkubyAtbyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjB4
ZmZmZjgyYzRjMDAwMDAwMC4wICYmIDoKOiBlZmkvbWtyZWxvYyAvcm9vdC94ZW4tNC4yLjAv
eGVuLy54ZW4uZWZpLjB4ZmZmZjgyYzQ4MDAwMDAwMC4wIC9yb290L3hlbi00LjIuMC94ZW4v
Lnhlbi5lZmkuMHhmZmZmODJjNGMwMDAwMDAwLjAgPi9yb290L3hlbi00LjIuMC94ZW4vLnhl
bi5lZmkuMHIuUwo6IG5tIC1uIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmkuMHhmZmZm
ODJjNDgwMDAwMDAwLjAgfCA6IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9scyA+
L3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4wcy5TCjogZ21ha2UgLWYgL3Jvb3QveGVu
LTQuMi4wL3hlbi9SdWxlcy5tayAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjByLm8g
L3Jvb3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4wcy5vCjogbGQgLW1pMzg2cGVwIC0tc3Vi
c3lzdGVtPTEwIC0taW1hZ2UtYmFzZT0weGZmZmY4MmM0ODAwMDAwMDAgLS1zdGFjaz0wLDAg
LS1oZWFwPTAsMCAtLXN0cmlwLWRlYnVnIC0tc2VjdGlvbi1hbGlnbm1lbnQ9MHgyMDAwMDAg
LS1maWxlLWFsaWdubWVudD0weDIwIC0tbWFqb3ItaW1hZ2UtdmVyc2lvbj00IC0tbWlub3It
aW1hZ2UtdmVyc2lvbj0yIC0tbWFqb3Itb3MtdmVyc2lvbj0yIC0tbWlub3Itb3MtdmVyc2lv
bj0wIC0tbWFqb3Itc3Vic3lzdGVtLXZlcnNpb249MiAtLW1pbm9yLXN1YnN5c3RlbS12ZXJz
aW9uPTAgLVQgZWZpLmxkcyAtTiBwcmVsaW5rLWVmaS5vIC9yb290L3hlbi00LjIuMC94ZW4v
Lnhlbi5lZmkuMHIubyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjBzLm8gLW8gL3Jv
b3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4weGZmZmY4MmM0ODAwMDAwMDAuMSAmJiAgOiBs
ZCAtbWkzODZwZXAgLS1zdWJzeXN0ZW09MTAgLS1pbWFnZS1iYXNlPTB4ZmZmZjgyYzRjMDAw
MDAwMCAtLXN0YWNrPTAsMCAtLWhlYXA9MCwwIC0tc3RyaXAtZGVidWcgLS1zZWN0aW9uLWFs
aWdubWVudD0weDIwMDAwMCAtLWZpbGUtYWxpZ25tZW50PTB4MjAgLS1tYWpvci1pbWFnZS12
ZXJzaW9uPTQgLS1taW5vci1pbWFnZS12ZXJzaW9uPTIgLS1tYWpvci1vcy12ZXJzaW9uPTIg
LS1taW5vci1vcy12ZXJzaW9uPTAgLS1tYWpvci1zdWJzeXN0ZW0tdmVyc2lvbj0yIC0tbWlu
b3Itc3Vic3lzdGVtLXZlcnNpb249MCAtVCBlZmkubGRzIC1OIHByZWxpbmstZWZpLm8gL3Jv
b3QveGVuLTQuMi4wL3hlbi8ueGVuLmVmaS4wci5vIC9yb290L3hlbi00LjIuMC94ZW4vLnhl
bi5lZmkuMHMubyAtbyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjB4ZmZmZjgyYzRj
MDAwMDAwMC4xICYmIDoKOiBlZmkvbWtyZWxvYyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4u
ZWZpLjB4ZmZmZjgyYzQ4MDAwMDAwMC4xIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmku
MHhmZmZmODJjNGMwMDAwMDAwLjEgPi9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmkuMXIu
Uwo6IG5tIC1uIC9yb290L3hlbi00LjIuMC94ZW4vLnhlbi5lZmkuMHhmZmZmODJjNDgwMDAw
MDAwLjEgfCA6IC9yb290L3hlbi00LjIuMC94ZW4vdG9vbHMvc3ltYm9scyA+L3Jvb3QveGVu
LTQuMi4wL3hlbi8ueGVuLmVmaS4xcy5TCjogZ21ha2UgLWYgL3Jvb3QveGVuLTQuMi4wL3hl
bi9SdWxlcy5tayAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjFyLm8gL3Jvb3QveGVu
LTQuMi4wL3hlbi8ueGVuLmVmaS4xcy5vCjogbGQgLW1pMzg2cGVwIC0tc3Vic3lzdGVtPTEw
IC0taW1hZ2UtYmFzZT0weGZmZmY4MmM0ODAwMDAwMDAgLS1zdGFjaz0wLDAgLS1oZWFwPTAs
MCAtLXN0cmlwLWRlYnVnIC0tc2VjdGlvbi1hbGlnbm1lbnQ9MHgyMDAwMDAgLS1maWxlLWFs
aWdubWVudD0weDIwIC0tbWFqb3ItaW1hZ2UtdmVyc2lvbj00IC0tbWlub3ItaW1hZ2UtdmVy
c2lvbj0yIC0tbWFqb3Itb3MtdmVyc2lvbj0yIC0tbWlub3Itb3MtdmVyc2lvbj0wIC0tbWFq
b3Itc3Vic3lzdGVtLXZlcnNpb249MiAtLW1pbm9yLXN1YnN5c3RlbS12ZXJzaW9uPTAgLVQg
ZWZpLmxkcyAtTiBwcmVsaW5rLWVmaS5vIFwKICAgICAgICAgICAgICAgIC9yb290L3hlbi00
LjIuMC94ZW4vLnhlbi5lZmkuMXIubyAvcm9vdC94ZW4tNC4yLjAveGVuLy54ZW4uZWZpLjFz
Lm8gLW8gL3Jvb3QveGVuLTQuMi4wL3hlbi94ZW4uZWZpCmlmIDogZmFsc2U7IHRoZW4gcm0g
LWYgL3Jvb3QveGVuLTQuMi4wL3hlbi94ZW4uZWZpOyBlY2hvICdFRkkgc3VwcG9ydCBkaXNh
YmxlZCc7IGZpCkVGSSBzdXBwb3J0IGRpc2FibGVkCnJtIC1mIC9yb290L3hlbi00LjIuMC94
ZW4vLnhlbi5lZmkuWzAtOV0qCmdjYyAtV2FsbCAtV2Vycm9yIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1vIGJvb3QvbWtlbGYzMiBib290L21rZWxmMzIu
YwouL2Jvb3QvbWtlbGYzMiAvcm9vdC94ZW4tNC4yLjAveGVuL3hlbi1zeW1zIC9yb290L3hl
bi00LjIuMC94ZW4veGVuIDB4MTAwMDAwIFwKYG5tIC1uciAvcm9vdC94ZW4tNC4yLjAveGVu
L3hlbi1zeW1zIHwgaGVhZCAtbiAxIHwgc2VkIC1lICdzL15cKFteIF0qXCkuKi8weFwxLydg
CmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbi9hcmNo
L3g4NicKZ3ppcCAtZiAtOSA8IC9yb290L3hlbi00LjIuMC94ZW4veGVuID4gL3Jvb3QveGVu
LTQuMi4wL3hlbi94ZW4uZ3oubmV3Cm12IC9yb290L3hlbi00LjIuMC94ZW4veGVuLmd6Lm5l
dyAvcm9vdC94ZW4tNC4yLjAveGVuL3hlbi5negpbIC1kIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvYm9vdCBdIHx8IGluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvYm9vdAppbnN0YWxsIC1tMDY0NCAtcCAvcm9vdC94ZW4tNC4yLjAv
eGVuL3hlbi5neiAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL2Jvb3QveGVuLTQuMi4w
Lmd6CmxuIC1mIC1zIHhlbi00LjIuMC5neiAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L2Jvb3QveGVuLTQuMi5negpsbiAtZiAtcyB4ZW4tNC4yLjAuZ3ogL3Jvb3QveGVuLTQuMi4w
L2Rpc3QvaW5zdGFsbC9ib290L3hlbi00Lmd6CmxuIC1mIC1zIHhlbi00LjIuMC5neiAvcm9v
dC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL2Jvb3QveGVuLmd6Cmluc3RhbGwgLW0wNjQ0IC1w
IC9yb290L3hlbi00LjIuMC94ZW4veGVuLXN5bXMgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5z
dGFsbC9ib290L3hlbi1zeW1zLTQuMi4wCmlmIFsgLXIgL3Jvb3QveGVuLTQuMi4wL3hlbi94
ZW4uZWZpIC1hIC1uICcvdXNyL2xpYjY0L2VmaScgXTsgdGhlbiBcCglbIC1kIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaSBdIHx8IGluc3RhbGwgLWQgLW0w
NzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaTsgXAoJ
aW5zdGFsbCAtbTA2NDQgLXAgL3Jvb3QveGVuLTQuMi4wL3hlbi94ZW4uZWZpIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaS94ZW4tNC4yLjAuZWZpOyBcCgls
biAtc2YgeGVuLTQuMi4wLmVmaSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci9s
aWI2NC9lZmkveGVuLTQuMi5lZmk7IFwKCWxuIC1zZiB4ZW4tNC4yLjAuZWZpIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0L2VmaS94ZW4tNC5lZmk7IFwKCWxuIC1z
ZiB4ZW4tNC4yLjAuZWZpIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xpYjY0
L2VmaS94ZW4uZWZpOyBcCglpZiBbIC1uICcvYm9vdC9lZmknIC1hIC1uICcnIF07IHRoZW4g
XAoJCWluc3RhbGwgLW0wNjQ0IC1wIC9yb290L3hlbi00LjIuMC94ZW4veGVuLmVmaSAvcm9v
dC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL2Jvb3QvZWZpL2VmaS8veGVuLTQuMi4wLmVmaTsg
XAoJZWxpZiBbICIvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsIiA9ICJkaXN0L2luc3Rh
bGwiIF07IHRoZW4gXAoJCWVjaG8gJ0VGSSBpbnN0YWxsYXRpb24gb25seSBwYXJ0aWFsbHkg
ZG9uZSAoRUZJX1ZFTkRPUiBub3Qgc2V0KScgPiYyOyBcCglmaTsgXApmaQpnbWFrZVsyXTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC94ZW4nCmdtYWtlWzFdOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3hlbicKZG9tMCMgZ21ha2UgdG9vbHMK
Z21ha2UgLUMgdG9vbHMgcWVtdS14ZW4tdHJhZGl0aW9uYWwtZGlyLWZpbmQKZ21ha2VbMV06
IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpzZXQgLWV4OyBc
CmlmIHRlc3QgLWQgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3FlbXUteGVuLXRy
YWRpdGlvbmFsOyB0aGVuIFwKCW1rZGlyIC1wIHFlbXUteGVuLXRyYWRpdGlvbmFsLWRpcjsg
XAplbHNlIFwKCWV4cG9ydCBHSVQ9Z2l0OyBcCgkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4v
c2NyaXB0cy9naXQtY2hlY2tvdXQuc2ggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xz
L3FlbXUteGVuLXRyYWRpdGlvbmFsIHhlbi00LjIuMCBxZW11LXhlbi10cmFkaXRpb25hbC1k
aXI7IFwKZmkKKyB0ZXN0IC1kICcvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMvcWVt
dS14ZW4tdHJhZGl0aW9uYWwnCisgbWtkaXIgLXAgcWVtdS14ZW4tdHJhZGl0aW9uYWwtZGly
CmdtYWtlWzFdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpn
bWFrZSAtQyB0b29scyBxZW11LXhlbi1kaXItZmluZApnbWFrZVsxXTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmlmIHRlc3QgLWQgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzLy4uL3Rvb2xzL3FlbXUteGVuIDsgdGhlbiBcCglta2RpciAtcCBxZW11LXhl
bi1kaXI7IFwKZWxzZSBcCglleHBvcnQgR0lUPWdpdDsgXAoJL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzLy4uL3NjcmlwdHMvZ2l0LWNoZWNrb3V0LnNoIC9yb290L3hlbi00LjIuMC90b29scy8u
Li90b29scy9xZW11LXhlbiBxZW11LXhlbi00LjIuMCBxZW11LXhlbi1kaXIgOyBcCmZpCmdt
YWtlWzFdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFr
ZSAtQyB0b29scyBpbnN0YWxsCmdtYWtlWzFdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBpbmNsdWRlIGluc3RhbGwKZ21ha2VbM106IEVu
dGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUnCmdtYWtl
IC1DIHhlbi1mb3JlaWduCmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduJwpweXRob24yLjcgbWtoZWFkZXIu
cHkgeDg2XzMyIHg4Nl8zMi5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1m
b3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLXg4Ni94ZW4teDg2XzMy
LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUveGVuLWZvcmVpZ24vLi4vLi4vLi4v
eGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2L3hlbi5oIC9yb290L3hlbi00LjIuMC90b29s
cy9pbmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy94ZW4u
aApweXRob24yLjcgbWtoZWFkZXIucHkgeDg2XzY0IHg4Nl82NC5oIC9yb290L3hlbi00LjIu
MC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1Ymxp
Yy9hcmNoLXg4Ni94ZW4teDg2XzY0LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUv
eGVuLWZvcmVpZ24vLi4vLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2L3hlbi5o
IC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hl
bi9pbmNsdWRlL3B1YmxpYy94ZW4uaApweXRob24yLjcgbWtoZWFkZXIucHkgaWE2NCBpYTY0
LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUveGVuLWZvcmVpZ24vLi4vLi4vLi4v
eGVuL2luY2x1ZGUvcHVibGljL2FyY2gtaWE2NC5oIC9yb290L3hlbi00LjIuMC90b29scy9p
bmNsdWRlL3hlbi1mb3JlaWduLy4uLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy94ZW4uaApw
eXRob24yLjcgbWtjaGVja2VyLnB5IGNoZWNrZXIuYyB4ODZfMzIgeDg2XzY0IGlhNjQKZ2Nj
IC1XYWxsIC1XZXJyb3IgLVdzdHJpY3QtcHJvdG90eXBlcyAtTzIgLWZvbWl0LWZyYW1lLXBv
aW50ZXIgLWZuby1zdHJpY3QtYWxpYXNpbmcgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLW8gY2hlY2tlciBjaGVja2VyLmMKLi9jaGVja2VyID4gdG1wLnNpemUKZGlmZiAtdSBy
ZWZlcmVuY2Uuc2l6ZSB0bXAuc2l6ZQpybSB0bXAuc2l6ZQpnbWFrZVs0XTogTGVhdmluZyBk
aXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlL3hlbi1mb3JlaWduJwpt
a2RpciAtcCB4ZW4vbGliZWxmCmxuIC1zZiAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVk
ZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMvQ09QWUlORyB4ZW4KbG4gLXNmIC9yb290L3hl
bi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9kb21jdGwu
aCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJs
aWMvdHJhY2UuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5j
bHVkZS9wdWJsaWMvZWxmbm90ZS5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4u
Ly4uL3hlbi9pbmNsdWRlL3B1YmxpYy90bWVtLmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2lu
Y2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL3BsYXRmb3JtLmggL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2XzY0
LmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVi
bGljL3BoeXNkZXYuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4v
aW5jbHVkZS9wdWJsaWMveGVuLWNvbXBhdC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNs
dWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9ncmFudF90YWJsZS5oIC9yb290L3hlbi00
LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9jYWxsYmFjay5o
IC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1Ymxp
Yy9zY2hlZC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNs
dWRlL3B1YmxpYy9tZW1vcnkuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8u
Li94ZW4vaW5jbHVkZS9wdWJsaWMvZmVhdHVyZXMuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMveGVuLmggL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2XzMyLmgg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGlj
L2RvbTBfb3BzLmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2lu
Y2x1ZGUvcHVibGljL21lbV9ldmVudC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRl
Ly4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy92ZXJzaW9uLmggL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGljL2V2ZW50X2NoYW5uZWwuaCAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMv
eGVub3Byb2YuaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5j
bHVkZS9wdWJsaWMveGVuY29tbS5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4u
Ly4uL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLWFybS5oIC9yb290L3hlbi00LjIuMC90b29s
cy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9ubWkuaCAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC1pYTY0Lmgg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1ZGUvcHVibGlj
L2tleGVjLmggL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1
ZGUvcHVibGljL3N5c2N0bC5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4u
L3hlbi9pbmNsdWRlL3B1YmxpYy92Y3B1LmggeGVuCmxuIC1zZiAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC1pYTY0IC9yb290
L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNo
LXg4NiAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4vaW5jbHVkZS9w
dWJsaWMvaHZtIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hlbi9pbmNs
dWRlL3B1YmxpYy9pbyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaW5jbHVkZS8uLi8uLi94ZW4v
aW5jbHVkZS9wdWJsaWMveHNtIHhlbgpsbiAtc2YgLi4veGVuLXN5cy9OZXRCU0QgeGVuL3N5
cwpsbiAtc2YgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4veGVuL2luY2x1
ZGUveGVuL2xpYmVsZi5oIC9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3hl
bi9pbmNsdWRlL3hlbi9lbGZzdHJ1Y3RzLmggeGVuL2xpYmVsZi8KbG4gLXMgLi4veGVuLWZv
cmVpZ24geGVuL2ZvcmVpZ24KdG91Y2ggeGVuLy5kaXIKL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3Qv
eGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vYXJjaC1pYTY0
Ci9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2luY2x1ZGUveGVuL2FyY2gtaWE2NC9odm0KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1
ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vYXJjaC14ODYKL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAt
bTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVk
ZS94ZW4vYXJjaC14ODYvaHZtCi9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL2ZvcmVpZ24KL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vaHZt
Ci9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2luY2x1ZGUveGVuL2lvCi9yb290L3hlbi00LjIuMC90b29scy9pbmNsdWRlLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL3N5cwovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aW5jbHVkZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbi94c20KL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2
NDQgLXAgeGVuL0NPUFlJTkcgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVu
NDIvaW5jbHVkZS94ZW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9v
bHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuLyouaCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aW5jbHVkZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW4vYXJjaC1p
YTY0LyouaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRl
L3hlbi9hcmNoLWlhNjQKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9v
bHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuL2FyY2gtaWE2NC9odm0vKi5oIC9yb290
L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL2FyY2gtaWE2
NC9odm0KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3Mt
aW5zdGFsbCAtbTA2NDQgLXAgeGVuL2FyY2gteDg2LyouaCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbi9hcmNoLXg4Ngovcm9vdC94ZW4tNC4y
LjAvdG9vbHMvaW5jbHVkZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4
ZW4vYXJjaC14ODYvaHZtLyouaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94
ZW40Mi9pbmNsdWRlL3hlbi9hcmNoLXg4Ni9odm0KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2lu
Y2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuL2ZvcmVpZ24v
Ki5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVu
L2ZvcmVpZ24KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuL2h2bS8qLmggL3Jvb3QveGVuLTQuMi4wL2Rpc3Qv
aW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4vaHZtCi9yb290L3hlbi00LjIuMC90b29s
cy9pbmNsdWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbi9pby8q
LmggL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvaW5jbHVkZS94ZW4v
aW8KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2luY2x1ZGUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5z
dGFsbCAtbTA2NDQgLXAgeGVuL3N5cy8qLmggL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFs
bC91c3IveGVuNDIvaW5jbHVkZS94ZW4vc3lzCi9yb290L3hlbi00LjIuMC90b29scy9pbmNs
dWRlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbi94c20vKi5oIC9y
b290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuL3hzbQpn
bWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9pbmNs
dWRlJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29s
cycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
JwpnbWFrZSAtQyBsaWJ4YyBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3Rvcnkg
YC9yb290L3hlbi00LjIuMC90b29scy9saWJ4YycKZ21ha2UgbGlicwpnbWFrZVs0XTogRW50
ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMnCmdjYyAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jb3JlLm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYg
LVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfY29yZS5v
IHhjX2NvcmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jb3JlX3g4Ni5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdt
aXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2NvcmVfeDg2Lm8geGNfY29y
ZV94ODYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jcHVwb29sLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfY3B1cG9vbC5vIHhjX2NwdXBvb2wu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54Y19kb21haW4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90
b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9p
bmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19kb21haW4ubyB4Y19kb21haW4uYyAgLUkvdXNy
L3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1N
TUQgLU1GIC54Y19ldnRjaG4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUku
Li8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1J
LiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1w
dGhyZWFkICAtYyAtbyB4Y19ldnRjaG4ubyB4Y19ldnRjaG4uYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Y19nbnR0YWIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4v
Y29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAt
YyAtbyB4Y19nbnR0YWIubyB4Y19nbnR0YWIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19taXNjLm8u
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJl
bGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfbWlz
Yy5vIHhjX21pc2MuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19mbGFzay5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdt
aXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2ZsYXNrLm8geGNfZmxhc2su
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54Y19waHlzZGV2Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJv
dG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfcGh5c2Rldi5vIHhjX3BoeXNkZXYuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54Y19wcml2YXRlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBl
cyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtcHRocmVhZCAgLWMgLW8geGNfcHJpdmF0ZS5vIHhjX3ByaXZhdGUuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQg
LU1GIC54Y19zZWRmLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVh
ZCAgLWMgLW8geGNfc2VkZi5vIHhjX3NlZGYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jc2NoZWQu
by5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xp
YmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19j
c2NoZWQubyB4Y19jc2NoZWQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8t
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jc2NoZWQyLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdl
cnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfY3NjaGVkMi5v
IHhjX2NzY2hlZDIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19hcmluYzY1My5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3Ig
LVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2FyaW5jNjUzLm8geGNf
YXJpbmM2NTMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y190YnVmLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfdGJ1Zi5vIHhjX3RidWYuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54Y19wbS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4u
Ly4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUku
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0
aHJlYWQgIC1jIC1vIHhjX3BtLm8geGNfcG0uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19jcHVfaG90
cGx1Zy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21t
b24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1v
IHhjX2NwdV9ob3RwbHVnLm8geGNfY3B1X2hvdHBsdWcuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19y
ZXN1bWUuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29t
bW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAt
byB4Y19yZXN1bWUubyB4Y19yZXN1bWUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y190bWVtLm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYg
LVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfdG1lbS5v
IHhjX3RtZW0uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19tZW1fZXZlbnQuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1X
bWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8u
Li8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19tZW1fZXZlbnQubyB4Y19t
ZW1fZXZlbnQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19tZW1fcGFnaW5nLm8uZCAtZm5vLW9wdGlt
aXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAt
V21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMv
Li4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfbWVtX3BhZ2luZy5vIHhj
X21lbV9wYWdpbmcuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19tZW1fYWNjZXNzLm8uZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfbWVtX2FjY2Vzcy5v
IHhjX21lbV9hY2Nlc3MuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19tZW1zaHIuby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9y
IC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19tZW1zaHIubyB4Y19t
ZW1zaHIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19oY2FsbF9idWYuby5kIC1mbm8tb3B0aW1pemUt
c2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlz
c2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8u
Li90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19oY2FsbF9idWYubyB4Y19oY2Fs
bF9idWYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19mb3JlaWduX21lbW9yeS5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3Ig
LVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2ZvcmVpZ25fbWVtb3J5
Lm8geGNfZm9yZWlnbl9tZW1vcnkuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54dGxfY29yZS5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1X
ZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHh0bF9jb3JlLm8g
eHRsX2NvcmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54dGxfbG9nZ2VyX3N0ZGlvLm8uZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geHRsX2xvZ2dlcl9zdGRp
by5vIHh0bF9sb2dnZXJfc3RkaW8uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19wYWdldGFiLm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYg
LVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfcGFnZXRh
Yi5vIHhjX3BhZ2V0YWIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19uZXRic2Quby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9y
IC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19uZXRic2QubyB4Y19u
ZXRic2QuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmFyIHJjIGxpYnhlbmN0cmwuYSB4Y19jb3Jl
Lm8geGNfY29yZV94ODYubyB4Y19jcHVwb29sLm8geGNfZG9tYWluLm8geGNfZXZ0Y2huLm8g
eGNfZ250dGFiLm8geGNfbWlzYy5vIHhjX2ZsYXNrLm8geGNfcGh5c2Rldi5vIHhjX3ByaXZh
dGUubyB4Y19zZWRmLm8geGNfY3NjaGVkLm8geGNfY3NjaGVkMi5vIHhjX2FyaW5jNjUzLm8g
eGNfdGJ1Zi5vIHhjX3BtLm8geGNfY3B1X2hvdHBsdWcubyB4Y19yZXN1bWUubyB4Y190bWVt
Lm8geGNfbWVtX2V2ZW50Lm8geGNfbWVtX3BhZ2luZy5vIHhjX21lbV9hY2Nlc3MubyB4Y19t
ZW1zaHIubyB4Y19oY2FsbF9idWYubyB4Y19mb3JlaWduX21lbW9yeS5vIHh0bF9jb3JlLm8g
eHRsX2xvZ2dlcl9zdGRpby5vIHhjX3BhZ2V0YWIubyB4Y19uZXRic2QubwpnY2MgIC1EUElD
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfY29yZS5v
cGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24v
bGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1j
IC1vIHhjX2NvcmUub3BpYyB4Y19jb3JlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1E
UElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfY29y
ZV94ODYub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4v
Y29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAt
ZlBJQyAtYyAtbyB4Y19jb3JlX3g4Ni5vcGljIHhjX2NvcmVfeDg2LmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1n
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAt
TU1EIC1NRiAueGNfY3B1cG9vbC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2NwdXBvb2wub3BpYyB4Y19jcHVwb29sLmMg
IC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZG9tYWluLm9waWMuZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfZG9tYWluLm9waWMgeGNf
ZG9tYWluLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZXZ0Y2huLm9waWMuZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfZXZ0Y2hu
Lm9waWMgeGNfZXZ0Y2huLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZ250dGFiLm9waWMu
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJl
bGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8g
eGNfZ250dGFiLm9waWMgeGNfZ250dGFiLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1E
UElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfbWlz
Yy5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21t
b24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElD
IC1jIC1vIHhjX21pc2Mub3BpYyB4Y19taXNjLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2Mg
IC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNf
Zmxhc2sub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4v
Y29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAt
ZlBJQyAtYyAtbyB4Y19mbGFzay5vcGljIHhjX2ZsYXNrLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAueGNfcGh5c2Rldi5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4u
Ly4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUku
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0
aHJlYWQgIC1mUElDIC1jIC1vIHhjX3BoeXNkZXYub3BpYyB4Y19waHlzZGV2LmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAueGNfcHJpdmF0ZS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXBy
b3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX3ByaXZhdGUub3BpYyB4Y19wcml2
YXRlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfc2VkZi5vcGljLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdt
aXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX3NlZGYub3BpYyB4
Y19zZWRmLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfY3NjaGVkLm9waWMuZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfY3NjaGVk
Lm9waWMgeGNfY3NjaGVkLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfY3NjaGVkMi5vcGlj
LmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGli
ZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1v
IHhjX2NzY2hlZDIub3BpYyB4Y19jc2NoZWQyLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2Mg
IC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNf
YXJpbmM2NTMub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94
ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFk
ICAtZlBJQyAtYyAtbyB4Y19hcmluYzY1My5vcGljIHhjX2FyaW5jNjUzLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAueGNfdGJ1Zi5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX3RidWYub3BpYyB4Y190YnVmLmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAueGNfcG0ub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5
cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNs
dWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19wbS5vcGljIHhjX3BtLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAueGNfY3B1X2hvdHBsdWcub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1w
cm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29s
cy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19jcHVfaG90cGx1Zy5vcGljIHhj
X2NwdV9ob3RwbHVnLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfcmVzdW1lLm9waWMuZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYg
LVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNf
cmVzdW1lLm9waWMgeGNfcmVzdW1lLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElD
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfdG1lbS5v
cGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24v
bGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1j
IC1vIHhjX3RtZW0ub3BpYyB4Y190bWVtLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1E
UElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfbWVt
X2V2ZW50Lm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVu
L2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAg
LWZQSUMgLWMgLW8geGNfbWVtX2V2ZW50Lm9waWMgeGNfbWVtX2V2ZW50LmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAueGNfbWVtX3BhZ2luZy5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXBy
b3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX21lbV9wYWdpbmcub3BpYyB4Y19t
ZW1fcGFnaW5nLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfbWVtX2FjY2Vzcy5vcGljLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxm
IC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhj
X21lbV9hY2Nlc3Mub3BpYyB4Y19tZW1fYWNjZXNzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
eGNfbWVtc2hyLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVh
ZCAgLWZQSUMgLWMgLW8geGNfbWVtc2hyLm9waWMgeGNfbWVtc2hyLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1n
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAt
TU1EIC1NRiAueGNfaGNhbGxfYnVmLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2Fs
bHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90
eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfaGNhbGxfYnVmLm9waWMgeGNfaGNhbGxf
YnVmLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZm9yZWlnbl9tZW1vcnkub3BpYy5kIC1m
bm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAt
V2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19m
b3JlaWduX21lbW9yeS5vcGljIHhjX2ZvcmVpZ25fbWVtb3J5LmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAueHRsX2NvcmUub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUku
Li8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1J
LiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1w
dGhyZWFkICAtZlBJQyAtYyAtbyB4dGxfY29yZS5vcGljIHh0bF9jb3JlLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAueHRsX2xvZ2dlcl9zdGRpby5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJs
aW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5n
LXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHh0bF9sb2dnZXJfc3RkaW8ub3Bp
YyB4dGxfbG9nZ2VyX3N0ZGlvLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfcGFnZXRhYi5v
cGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24v
bGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1j
IC1vIHhjX3BhZ2V0YWIub3BpYyB4Y19wYWdldGFiLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
eGNfbmV0YnNkLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVh
ZCAgLWZQSUMgLWMgLW8geGNfbmV0YnNkLm9waWMgeGNfbmV0YnNkLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgLXB0aHJlYWQgLVdsLC1zb25hbWUgLVdsLGxpYnhlbmN0cmwuc28u
NC4yIC1zaGFyZWQgLW8gbGlieGVuY3RybC5zby40LjIuMCB4Y19jb3JlLm9waWMgeGNfY29y
ZV94ODYub3BpYyB4Y19jcHVwb29sLm9waWMgeGNfZG9tYWluLm9waWMgeGNfZXZ0Y2huLm9w
aWMgeGNfZ250dGFiLm9waWMgeGNfbWlzYy5vcGljIHhjX2ZsYXNrLm9waWMgeGNfcGh5c2Rl
di5vcGljIHhjX3ByaXZhdGUub3BpYyB4Y19zZWRmLm9waWMgeGNfY3NjaGVkLm9waWMgeGNf
Y3NjaGVkMi5vcGljIHhjX2FyaW5jNjUzLm9waWMgeGNfdGJ1Zi5vcGljIHhjX3BtLm9waWMg
eGNfY3B1X2hvdHBsdWcub3BpYyB4Y19yZXN1bWUub3BpYyB4Y190bWVtLm9waWMgeGNfbWVt
X2V2ZW50Lm9waWMgeGNfbWVtX3BhZ2luZy5vcGljIHhjX21lbV9hY2Nlc3Mub3BpYyB4Y19t
ZW1zaHIub3BpYyB4Y19oY2FsbF9idWYub3BpYyB4Y19mb3JlaWduX21lbW9yeS5vcGljIHh0
bF9jb3JlLm9waWMgeHRsX2xvZ2dlcl9zdGRpby5vcGljIHhjX3BhZ2V0YWIub3BpYyB4Y19u
ZXRic2Qub3BpYyAgICAtTC91c3IvcGtnL2xpYgpsbiAtc2YgbGlieGVuY3RybC5zby40LjIu
MCBsaWJ4ZW5jdHJsLnNvLjQuMgpsbiAtc2YgbGlieGVuY3RybC5zby40LjIgbGlieGVuY3Ry
bC5zbwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
eGdfcHJpdmF0ZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hl
bi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQg
IC1jIC1vIHhnX3ByaXZhdGUubyB4Z19wcml2YXRlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfc3Vz
cGVuZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21t
b24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1v
IHhjX3N1c3BlbmQubyB4Y19zdXNwZW5kLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZG9tYWluX3Jl
c3RvcmUuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29t
bW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAt
byB4Y19kb21haW5fcmVzdG9yZS5vIHhjX2RvbWFpbl9yZXN0b3JlLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAueGNfZG9tYWluX3NhdmUuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUku
Li8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1J
LiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1w
dGhyZWFkICAtYyAtbyB4Y19kb21haW5fc2F2ZS5vIHhjX2RvbWFpbl9zYXZlLmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1n
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAt
TU1EIC1NRiAueGNfb2ZmbGluZV9wYWdlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2Fs
bHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90
eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtcHRocmVhZCAgLWMgLW8geGNfb2ZmbGluZV9wYWdlLm8geGNfb2ZmbGluZV9wYWdl
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5f
VE9PTFNfXyAtTU1EIC1NRiAueGNfY29tcHJlc3Npb24uby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2lu
Zy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90
b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19jb21wcmVzc2lvbi5vIHhjX2NvbXBy
ZXNzaW9uLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGliZWxmLXRvb2xzLm8uZCAtZm5vLW9wdGltaXpl
LXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21p
c3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8gbGliZWxmLXRvb2xzLm8gLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYvbGliZWxmLXRvb2xzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGliZWxm
LWxvYWRlci5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9j
b21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1j
IC1vIGxpYmVsZi1sb2FkZXIubyAuLi8uLi94ZW4vY29tbW9uL2xpYmVsZi9saWJlbGYtbG9h
ZGVyLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAubGliZWxmLWRvbWluZm8uby5kIC1mbm8tb3B0aW1pemUt
c2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlz
c2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8u
Li90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyBsaWJlbGYtZG9taW5mby5vIC4uLy4u
L3hlbi9jb21tb24vbGliZWxmL2xpYmVsZi1kb21pbmZvLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGli
ZWxmLXJlbG9jYXRlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVh
ZCAgLWMgLW8gbGliZWxmLXJlbG9jYXRlLm8gLi4vLi4veGVuL2NvbW1vbi9saWJlbGYvbGli
ZWxmLXJlbG9jYXRlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZG9tX2NvcmUuby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9y
IC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19kb21fY29yZS5vIHhj
X2RvbV9jb3JlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJh
bWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZG9tX2Jvb3Quby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1X
bWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8u
Li8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19kb21fYm9vdC5vIHhjX2Rv
bV9ib290LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZG9tX2VsZmxvYWRlci5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3Ig
LVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1jIC1vIHhjX2RvbV9lbGZsb2FkZXIu
byB4Y19kb21fZWxmbG9hZGVyLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGNfZG9tX2J6aW1hZ2Vsb2Fk
ZXIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9u
L2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkIC1ESEFWRV9C
WkxJQiAtbGJ6MiAtREhBVkVfTFpNQSAtbGx6bWEgIC1jIC1vIHhjX2RvbV9iemltYWdlbG9h
ZGVyLm8geGNfZG9tX2J6aW1hZ2Vsb2FkZXIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21fYmlu
bG9hZGVyLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2Nv
bW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMg
LW8geGNfZG9tX2JpbmxvYWRlci5vIHhjX2RvbV9iaW5sb2FkZXIuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC54Y19kb21fY29tcGF0X2xpbnV4Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBl
cyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtcHRocmVhZCAgLWMgLW8geGNfZG9tX2NvbXBhdF9saW51eC5vIHhjX2RvbV9jb21wYXRf
bGludXguYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21feDg2Lm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfZG9tX3g4Ni5vIHhjX2RvbV94ODYu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54Y19jcHVpZF94ODYuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1w
cm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29s
cy9pbmNsdWRlIC1wdGhyZWFkICAtYyAtbyB4Y19jcHVpZF94ODYubyB4Y19jcHVpZF94ODYu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC54Y19odm1fYnVpbGRfeDg2Lm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWMgLW8geGNfaHZtX2J1aWxkX3g4Ni5vIHhjX2h2
bV9idWlsZF94ODYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmFyIHJjIGxpYnhlbmd1ZXN0LmEg
eGdfcHJpdmF0ZS5vIHhjX3N1c3BlbmQubyB4Y19kb21haW5fcmVzdG9yZS5vIHhjX2RvbWFp
bl9zYXZlLm8geGNfb2ZmbGluZV9wYWdlLm8geGNfY29tcHJlc3Npb24ubyBsaWJlbGYtdG9v
bHMubyBsaWJlbGYtbG9hZGVyLm8gbGliZWxmLWRvbWluZm8ubyBsaWJlbGYtcmVsb2NhdGUu
byB4Y19kb21fY29yZS5vIHhjX2RvbV9ib290Lm8geGNfZG9tX2VsZmxvYWRlci5vIHhjX2Rv
bV9iemltYWdlbG9hZGVyLm8geGNfZG9tX2JpbmxvYWRlci5vIHhjX2RvbV9jb21wYXRfbGlu
dXgubyB4Y19kb21feDg2Lm8geGNfY3B1aWRfeDg2Lm8geGNfaHZtX2J1aWxkX3g4Ni5vCmdj
YyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Z19wcml2YXRlLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4v
eGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVh
ZCAgLWZQSUMgLWMgLW8geGdfcHJpdmF0ZS5vcGljIHhnX3ByaXZhdGUuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54Y19zdXNwZW5kLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2Fs
bHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJvdG90
eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfc3VzcGVuZC5vcGljIHhjX3N1c3BlbmQu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21haW5fcmVzdG9yZS5vcGljLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJy
b3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2RvbWFp
bl9yZXN0b3JlLm9waWMgeGNfZG9tYWluX3Jlc3RvcmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC54Y19kb21haW5fc2F2ZS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAt
SS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMg
LUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2RvbWFpbl9zYXZlLm9waWMgeGNfZG9tYWluX3Nh
dmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19vZmZsaW5lX3BhZ2Uub3BpYy5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vy
cm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19vZmZs
aW5lX3BhZ2Uub3BpYyB4Y19vZmZsaW5lX3BhZ2UuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdj
YyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Y19jb21wcmVzc2lvbi5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4u
Ly4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUku
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0
aHJlYWQgIC1mUElDIC1jIC1vIHhjX2NvbXByZXNzaW9uLm9waWMgeGNfY29tcHJlc3Npb24u
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJlbGYtdG9vbHMub3BpYy5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1X
bWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8u
Li8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyBsaWJlbGYtdG9vbHMu
b3BpYyAuLi8uLi94ZW4vY29tbW9uL2xpYmVsZi9saWJlbGYtdG9vbHMuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC5saWJlbGYtbG9hZGVyLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJv
dG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8gbGliZWxmLWxvYWRlci5vcGljIC4uLy4u
L3hlbi9jb21tb24vbGliZWxmL2xpYmVsZi1sb2FkZXIuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC5saWJlbGYtZG9taW5mby5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAt
SS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMg
LUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LXB0aHJlYWQgIC1mUElDIC1jIC1vIGxpYmVsZi1kb21pbmZvLm9waWMgLi4vLi4veGVuL2Nv
bW1vbi9saWJlbGYvbGliZWxmLWRvbWluZm8uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
LURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJl
bGYtcmVsb2NhdGUub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8u
Li94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhy
ZWFkICAtZlBJQyAtYyAtbyBsaWJlbGYtcmVsb2NhdGUub3BpYyAuLi8uLi94ZW4vY29tbW9u
L2xpYmVsZi9saWJlbGYtcmVsb2NhdGUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQ
SUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21f
Y29yZS5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4uL3hlbi9j
b21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJlYWQgIC1m
UElDIC1jIC1vIHhjX2RvbV9jb3JlLm9waWMgeGNfZG9tX2NvcmUuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcg
LWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1N
TUQgLU1GIC54Y19kb21fYm9vdC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLXB0aHJlYWQgIC1mUElDIC1jIC1vIHhjX2RvbV9ib290Lm9waWMgeGNfZG9tX2Jvb3Qu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1w
b2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAt
V3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURf
X1hFTl9UT09MU19fIC1NTUQgLU1GIC54Y19kb21fZWxmbG9hZGVyLm9waWMuZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGMvLi4vLi4vdG9vbHMvaW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGNfZG9tX2Vs
ZmxvYWRlci5vcGljIHhjX2RvbV9lbGZsb2FkZXIuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdj
YyAgLURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54
Y19kb21fYnppbWFnZWxvYWRlci5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtSS4uLy4uL3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLXB0aHJlYWQgLURIQVZFX0JaTElCIC1sYnoyIC1ESEFWRV9MWk1BIC1sbHptYSAgLWZQ
SUMgLWMgLW8geGNfZG9tX2J6aW1hZ2Vsb2FkZXIub3BpYyB4Y19kb21fYnppbWFnZWxvYWRl
ci5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhjX2RvbV9iaW5sb2FkZXIub3BpYy5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vy
cm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19kb21f
YmlubG9hZGVyLm9waWMgeGNfZG9tX2JpbmxvYWRlci5jICAtSS91c3IvcGtnL2luY2x1ZGUK
Z2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LnhjX2RvbV9jb21wYXRfbGludXgub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5
cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNs
dWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19kb21fY29tcGF0X2xpbnV4Lm9waWMgeGNf
ZG9tX2NvbXBhdF9saW51eC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEg
LWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhjX2RvbV94ODYub3Bp
Yy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xp
YmVsZiAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4Yy8uLi8uLi90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAt
byB4Y19kb21feDg2Lm9waWMgeGNfZG9tX3g4Ni5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2Nj
ICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhj
X2NwdWlkX3g4Ni5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtSS4uLy4u
L3hlbi9jb21tb24vbGliZWxmIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLXB0aHJl
YWQgIC1mUElDIC1jIC1vIHhjX2NwdWlkX3g4Ni5vcGljIHhjX2NwdWlkX3g4Ni5jICAtSS91
c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLnhjX2h2bV9idWlsZF94ODYub3BpYy5kIC1mbm8tb3B0aW1pemUt
c2libGluZy1jYWxscyAgLUkuLi8uLi94ZW4vY29tbW9uL2xpYmVsZiAtV2Vycm9yIC1XbWlz
c2luZy1wcm90b3R5cGVzIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8u
Li90b29scy9pbmNsdWRlIC1wdGhyZWFkICAtZlBJQyAtYyAtbyB4Y19odm1fYnVpbGRfeDg2
Lm9waWMgeGNfaHZtX2J1aWxkX3g4Ni5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1X
bCwtc29uYW1lIC1XbCxsaWJ4ZW5ndWVzdC5zby40LjIgLXNoYXJlZCAtbyBsaWJ4ZW5ndWVz
dC5zby40LjIuMCB4Z19wcml2YXRlLm9waWMgeGNfc3VzcGVuZC5vcGljIHhjX2RvbWFpbl9y
ZXN0b3JlLm9waWMgeGNfZG9tYWluX3NhdmUub3BpYyB4Y19vZmZsaW5lX3BhZ2Uub3BpYyB4
Y19jb21wcmVzc2lvbi5vcGljIGxpYmVsZi10b29scy5vcGljIGxpYmVsZi1sb2FkZXIub3Bp
YyBsaWJlbGYtZG9taW5mby5vcGljIGxpYmVsZi1yZWxvY2F0ZS5vcGljIHhjX2RvbV9jb3Jl
Lm9waWMgeGNfZG9tX2Jvb3Qub3BpYyB4Y19kb21fZWxmbG9hZGVyLm9waWMgeGNfZG9tX2J6
aW1hZ2Vsb2FkZXIub3BpYyB4Y19kb21fYmlubG9hZGVyLm9waWMgeGNfZG9tX2NvbXBhdF9s
aW51eC5vcGljIHhjX2RvbV94ODYub3BpYyB4Y19jcHVpZF94ODYub3BpYyB4Y19odm1fYnVp
bGRfeDg2Lm9waWMgLURIQVZFX0JaTElCIC1sYnoyIC1ESEFWRV9MWk1BIC1sbHptYSAtbHog
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0
cmwuc28gICAtTC91c3IvcGtnL2xpYgpsbiAtc2YgbGlieGVuZ3Vlc3Quc28uNC4yLjAgbGli
eGVuZ3Vlc3Quc28uNC4yCmxuIC1zZiBsaWJ4ZW5ndWVzdC5zby40LjIgbGlieGVuZ3Vlc3Qu
c28KZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnhlbmN0cmxfb3NkZXBfRU5PU1lTLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1JLi4vLi4veGVuL2NvbW1vbi9saWJlbGYgLVdlcnJvciAtV21pc3NpbmctcHJv
dG90eXBlcyAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtcHRocmVhZCAgLWZQSUMgLWMgLW8geGVuY3RybF9vc2RlcF9FTk9TWVMub3Bp
YyB4ZW5jdHJsX29zZGVwX0VOT1NZUy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjIC1nICAg
IC1zaGFyZWQgLW8geGVuY3RybF9vc2RlcF9FTk9TWVMuc28geGVuY3RybF9vc2RlcF9FTk9T
WVMub3BpYyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvbGlieGMv
bGlieGVuY3RybC5zbyAgLUwvdXNyL3BrZy9saWIKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMnCi9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4Yy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWIKL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290
L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUKL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGxp
YnhlbmN0cmwuc28uNC4yLjAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVu
NDIvbGliCi9yb290L3hlbi00LjIuMC90b29scy9saWJ4Yy8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1tMDY0NCAtcCBsaWJ4ZW5jdHJsLmEgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5z
dGFsbC91c3IveGVuNDIvbGliCmxuIC1zZiBsaWJ4ZW5jdHJsLnNvLjQuMi4wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9saWJ4ZW5jdHJsLnNvLjQuMgps
biAtc2YgbGlieGVuY3RybC5zby40LjIgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91
c3IveGVuNDIvbGliL2xpYnhlbmN0cmwuc28KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhj
Ly4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbmN0cmwuaCB4ZW5jdHJs
b3NkZXAuaCB4ZW50b29sbG9nLmggL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3Iv
eGVuNDIvaW5jbHVkZQovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMv
Y3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgbGlieGVuZ3Vlc3Quc28uNC4yLjAgL3Jvb3QveGVu
LTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCi9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4Yy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBsaWJ4ZW5ndWVz
dC5hIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYgpsbiAtc2Yg
bGlieGVuZ3Vlc3Quc28uNC4yLjAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3Iv
eGVuNDIvbGliL2xpYnhlbmd1ZXN0LnNvLjQuMgpsbiAtc2YgbGlieGVuZ3Vlc3Quc28uNC4y
IC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9saWJ4ZW5ndWVz
dC5zbwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGMvLi4vLi4vdG9vbHMvY3Jvc3MtaW5z
dGFsbCAtbTA2NDQgLXAgeGVuZ3Vlc3QuaCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9pbmNsdWRlCmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhjJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9y
b290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBmbGFzayBpbnN0YWxsCmdtYWtlWzNdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9mbGFzaycKZ21ha2Vb
NF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrJwpn
bWFrZSAtQyB1dGlscyBpbnN0YWxsCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9y
b290L3hlbi00LjIuMC90b29scy9mbGFzay91dGlscycKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxvYWRwb2xpY3kuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdhbGwgLWcgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90
b29scy9mbGFzay91dGlscy8uLi8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy9mbGFzay91dGlscy8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBsb2Fk
cG9saWN5Lm8gbG9hZHBvbGljeS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGxvYWRw
b2xpY3kubyAgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4uLy4uL3Rv
b2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gLW8gZmxhc2stbG9hZHBvbGljeQpnY2MgIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuc2V0ZW5mb3JjZS5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2FsbCAtZyAtV2Vycm9yIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1jIC1vIHNldGVuZm9yY2UubyBzZXRlbmZvcmNlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgICAgc2V0ZW5mb3JjZS5vICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMv
Li4vLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAtbyBmbGFzay1zZXRlbmZvcmNl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5nZXRl
bmZvcmNlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XYWxsIC1nIC1XZXJy
b3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMv
bGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9v
bHMvaW5jbHVkZSAgLWMgLW8gZ2V0ZW5mb3JjZS5vIGdldGVuZm9yY2UuYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgICBnZXRlbmZvcmNlLm8gIC9yb290L3hlbi00LjIuMC90b29scy9m
bGFzay91dGlscy8uLi8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC1vIGZsYXNr
LWdldGVuZm9yY2UKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1N
RCAtTUYgLmxhYmVsLXBjaS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Fs
bCAtZyAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4uLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxzLy4u
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIGxhYmVsLXBjaS5vIGxhYmVsLXBjaS5jICAt
SS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGxhYmVsLXBjaS5vICAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAt
byBmbGFzay1sYWJlbC1wY2kKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xT
X18gLU1NRCAtTUYgLmdldC1ib29sLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XYWxsIC1nIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMv
Li4vLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRp
bHMvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gZ2V0LWJvb2wubyBnZXQtYm9vbC5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGdldC1ib29sLm8gIC9yb290L3hlbi00LjIu
MC90b29scy9mbGFzay91dGlscy8uLi8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNv
IC1vIGZsYXNrLWdldC1ib29sCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09M
U19fIC1NTUQgLU1GIC5zZXQtYm9vbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtV2FsbCAtZyAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0aWxz
Ly4uLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrL3V0
aWxzLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHNldC1ib29sLm8gc2V0LWJvb2wu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICBzZXQtYm9vbC5vICAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvZmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5z
byAtbyBmbGFzay1zZXQtYm9vbAovcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMv
Li4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
Zmxhc2svdXRpbHMvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgZmxh
c2stbG9hZHBvbGljeSBmbGFzay1zZXRlbmZvcmNlIGZsYXNrLWdldGVuZm9yY2UgZmxhc2st
bGFiZWwtcGNpIGZsYXNrLWdldC1ib29sIGZsYXNrLXNldC1ib29sIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmxhc2svdXRpbHMnCmdtYWtlWzRdOiBMZWF2
aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2ZsYXNrJwpnbWFrZVszXTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9mbGFzaycKZ21ha2Vb
Ml06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMg
eGVuc3RvcmUgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUnCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0
cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hF
Tl9UT09MU19fIC1NTUQgLU1GIC54ZW5zdG9yZV9jbGllbnQuby5kIC1mbm8tb3B0aW1pemUt
c2libGluZy1jYWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVu
c3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3Rv
cmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8geGVuc3RvcmVfY2xpZW50Lm8geGVuc3Rv
cmVfY2xpZW50LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueHMub3BpYy5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVu
c3RvcmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAtRFVTRV9QVEhSRUFEICAtZlBJQyAtYyAtbyB4
cy5vcGljIHhzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueHNfbGliLm9waWMuZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1mUElDIC1jIC1vIHhzX2xpYi5vcGlj
IHhzX2xpYi5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1XbCwtc29uYW1lIC1XbCxs
aWJ4ZW5zdG9yZS5zby4zLjAgLXNoYXJlZCAtbyBsaWJ4ZW5zdG9yZS5zby4zLjAuMSB4cy5v
cGljIHhzX2xpYi5vcGljICAtbHB0aHJlYWQgIC1ML3Vzci9wa2cvbGliCmxuIC1zZiBsaWJ4
ZW5zdG9yZS5zby4zLjAuMSBsaWJ4ZW5zdG9yZS5zby4zLjAKbG4gLXNmIGxpYnhlbnN0b3Jl
LnNvLjMuMCBsaWJ4ZW5zdG9yZS5zbwpnY2MgICAgeGVuc3RvcmVfY2xpZW50Lm8gL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0
b3JlLnNvICAtbyB4ZW5zdG9yZSAgLUwvdXNyL3BrZy9saWIKbG4gLWYgeGVuc3RvcmUgeGVu
c3RvcmUtZXhpc3RzCmxuIC1mIHhlbnN0b3JlIHhlbnN0b3JlLWxpc3QKbG4gLWYgeGVuc3Rv
cmUgeGVuc3RvcmUtcmVhZApsbiAtZiB4ZW5zdG9yZSB4ZW5zdG9yZS1ybQpsbiAtZiB4ZW5z
dG9yZSB4ZW5zdG9yZS1jaG1vZApsbiAtZiB4ZW5zdG9yZSB4ZW5zdG9yZS13cml0ZQpsbiAt
ZiB4ZW5zdG9yZSB4ZW5zdG9yZS1scwpsbiAtZiB4ZW5zdG9yZSB4ZW5zdG9yZS13YXRjaApn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVuc3Rv
cmVfY29udHJvbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1J
LiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRlICAt
YyAtbyB4ZW5zdG9yZV9jb250cm9sLm8geGVuc3RvcmVfY29udHJvbC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAgIHhlbnN0b3JlX2NvbnRyb2wubyAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGVuc3RvcmUvLi4vLi4vdG9vbHMveGVuc3RvcmUvbGlieGVuc3RvcmUuc28gIC1vIHhl
bnN0b3JlLWNvbnRyb2wgIC1ML3Vzci9wa2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54cy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8u
Li8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8u
Li90b29scy9pbmNsdWRlICAtYyAtbyB4cy5vIHhzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueHNfbGli
Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkuIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHhzX2xp
Yi5vIHhzX2xpYi5jICAtSS91c3IvcGtnL2luY2x1ZGUKYXIgcmNzIGxpYnhlbnN0b3JlLmEg
eHMubyB4c19saWIubwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1n
IC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAt
TU1EIC1NRiAueHNfdGRiX2R1bXAuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAg
LVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9v
bHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMv
aW5jbHVkZSAgLWMgLW8geHNfdGRiX2R1bXAubyB4c190ZGJfZHVtcC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnV0aWxzLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUku
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhjIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1j
IC1vIHV0aWxzLm8gdXRpbHMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8t
b21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1n
bnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC50ZGIuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVu
c3RvcmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gdGRiLm8gdGRiLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAudGFsbG9jLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3Ig
LUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2xpYnhj
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1jIC1vIHRhbGxvYy5vIHRhbGxvYy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIHhz
X3RkYl9kdW1wLm8gdXRpbHMubyB0ZGIubyB0YWxsb2MubyAtbyB4c190ZGJfZHVtcCAgLUwv
dXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1N
RCAtTUYgLnhlbnN0b3JlZF9jb3JlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLUkuIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgIC1jIC1vIHhlbnN0b3JlZF9jb3JlLm8geGVuc3RvcmVkX2NvcmUuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54ZW5zdG9yZWRfd2F0Y2guby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLVdlcnJvciAtSS4gLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4v
Li4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4v
dG9vbHMvaW5jbHVkZSAgLWMgLW8geGVuc3RvcmVkX3dhdGNoLm8geGVuc3RvcmVkX3dhdGNo
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5f
VE9PTFNfXyAtTU1EIC1NRiAueGVuc3RvcmVkX2RvbWFpbi5vLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5z
dG9yZS8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9y
ZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4ZW5zdG9yZWRfZG9tYWluLm8geGVuc3Rv
cmVkX2RvbWFpbi5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbnN0b3JlZF90cmFuc2FjdGlvbi5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JLiAtSS9yb290L3hlbi00
LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4ZW5zdG9yZWRf
dHJhbnNhY3Rpb24ubyB4ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5o
YXNodGFibGUuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS4g
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvbGlieGMgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMg
LW8gaGFzaHRhYmxlLm8gaGFzaHRhYmxlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVuc3RvcmVkX25l
dGJzZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JLiAtSS9y
b290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4
ZW5zdG9yZWRfbmV0YnNkLm8geGVuc3RvcmVkX25ldGJzZC5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhl
bnN0b3JlZF9wb3NpeC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9y
IC1JLiAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9saWJ4
YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9pbmNsdWRl
ICAtYyAtbyB4ZW5zdG9yZWRfcG9zaXgubyB4ZW5zdG9yZWRfcG9zaXguYyAgLUkvdXNyL3Br
Zy9pbmNsdWRlCmdjYyAgICB4ZW5zdG9yZWRfY29yZS5vIHhlbnN0b3JlZF93YXRjaC5vIHhl
bnN0b3JlZF9kb21haW4ubyB4ZW5zdG9yZWRfdHJhbnNhY3Rpb24ubyB4c19saWIubyB0YWxs
b2MubyB1dGlscy5vIHRkYi5vIGhhc2h0YWJsZS5vIHhlbnN0b3JlZF9uZXRic2QubyB4ZW5z
dG9yZWRfcG9zaXgubyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9v
bHMvbGlieGMvbGlieGVuY3RybC5zbyAgLW8geGVuc3RvcmVkICAtTC91c3IvcGtnL2xpYgov
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFs
bCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIv
YmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94
ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9j
cm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9pbmNsdWRlCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdG9yZS8uLi8u
Li90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbnN0b3JlLWNvbXBhdAovcm9vdC94ZW4t
NC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3
NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC92YXIvcnVuL3hlbnN0b3JlZAov
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFs
bCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC92YXIvbGliL3hl
bnN0b3JlZAovcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA3NTUgLXAgeGVuc3RvcmVkIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL3NiaW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIHhlbnN0b3JlLWNvbnRyb2wgL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmluCi9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4
ZW5zdG9yZSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9iaW4Kc2V0
IC1lIDsgZm9yIGMgaW4geGVuc3RvcmUtZXhpc3RzIHhlbnN0b3JlLWxpc3QgeGVuc3RvcmUt
cmVhZCB4ZW5zdG9yZS1ybSB4ZW5zdG9yZS1jaG1vZCB4ZW5zdG9yZS13cml0ZSB4ZW5zdG9y
ZS1scyB4ZW5zdG9yZS13YXRjaCA7IGRvIFwKCWxuIC1mIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2Jpbi94ZW5zdG9yZSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
bnN0YWxsL3Vzci94ZW40Mi9iaW4vJHtjfSA7IFwKZG9uZQovcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGVuc3RvcmUvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCi9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCBs
aWJ4ZW5zdG9yZS5zby4zLjAuMSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94
ZW40Mi9saWIKbG4gLXNmIGxpYnhlbnN0b3JlLnNvLjMuMC4xIC9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi9saWJ4ZW5zdG9yZS5zby4zLjAKbG4gLXNmIGxp
YnhlbnN0b3JlLnNvLjMuMCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40
Mi9saWIvbGlieGVuc3RvcmUuc28KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIGxpYnhlbnN0b3JlLmEgL3Jvb3Qv
eGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliCi9yb290L3hlbi00LjIuMC90
b29scy94ZW5zdG9yZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW5z
dG9yZS5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUK
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLW0wNjQ0IC1wIHhlbnN0b3JlX2xpYi5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3Rh
bGwvdXNyL3hlbjQyL2luY2x1ZGUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4u
Ly4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIGNvbXBhdC94cy5oIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveGVuc3RvcmUtY29tcGF0
L3hzLmgKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0b3JlLy4uLy4uL3Rvb2xzL2Nyb3Nz
LWluc3RhbGwgLW0wNjQ0IC1wIGNvbXBhdC94c19saWIuaCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9pbmNsdWRlL3hlbnN0b3JlLWNvbXBhdC94c19saWIuaAps
biAtc2YgeGVuc3RvcmUtY29tcGF0L3hzLmggIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3Rh
bGwvdXNyL3hlbjQyL2luY2x1ZGUveHMuaApsbiAtc2YgeGVuc3RvcmUtY29tcGF0L3hzX2xp
Yi5oIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUveHNf
bGliLmgKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGVuc3RvcmUnCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMnCmdtYWtlIC1DIG1pc2MgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYycKZ2NjICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbnBlcmYuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNj
Ly4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5z
dG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8geGVucGVy
Zi5vIHhlbnBlcmYuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW5wZXJmIHhl
bnBlcmYubyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4Yy9s
aWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVucG0uby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8geGVucG0ubyB4ZW5wbS5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIC1vIHhlbnBtIHhlbnBtLm8gL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAgLUwv
dXNyL3BrZy9saWIKZ2NjIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAueGVuLXRtZW0tbGlzdC1wYXJzZS5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hl
bi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgICAgIHhlbi10bWVtLWxpc3QtcGFyc2UuYyAg
IC1vIHhlbi10bWVtLWxpc3QtcGFyc2UKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVO
X1RPT0xTX18gLU1NRCAtTUYgLmd0cmFjZXZpZXcuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8gZ3RyYWNldmlldy5vIGd0
cmFjZXZpZXcuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyBndHJhY2V2aWV3IGd0
cmFjZXZpZXcubyAtbGN1cnNlcyAgLUwvdXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmd0cmFjZXN0YXQuby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9t
aXNjLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94
ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scyAgLWMgLW8gZ3Ry
YWNlc3RhdC5vIGd0cmFjZXN0YXQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyBn
dHJhY2VzdGF0IGd0cmFjZXN0YXQubyAgLUwvdXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbmxvY2twcm9mLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9v
bHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9p
bmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMgIC1jIC1v
IHhlbmxvY2twcm9mLm8geGVubG9ja3Byb2YuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
ICAtbyB4ZW5sb2NrcHJvZiB4ZW5sb2NrcHJvZi5vIC9yb290L3hlbi00LjIuMC90b29scy9t
aXNjLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gIC1ML3Vzci9wa2cvbGliCmdj
YyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW53YXRj
aGRvZ2Quby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290
L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90
b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bWlzYy8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNj
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scyAgLWMgLW8geGVud2F0Y2hkb2dkLm8geGVud2F0Y2hkb2dkLmMgIC1JL3Vzci9w
a2cvaW5jbHVkZQpnY2MgICAgLW8geGVud2F0Y2hkb2dkIHhlbndhdGNoZG9nZC5vIC9yb290
L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
IC1ML3Vzci9wa2cvbGliCmdjYyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
ZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18g
LU1NRCAtTUYgLnhlbi1kZXRlY3QuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1X
ZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL21pc2MvLi4vLi4vdG9vbHMgICAgICB4ZW4tZGV0ZWN0LmMgICAtbyB4ZW4tZGV0ZWN0
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW4t
aHZtY3R4Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00
LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L21pc2MvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlz
Yy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4v
Li4vdG9vbHMgIC1jIC1vIHhlbi1odm1jdHgubyB4ZW4taHZtY3R4LmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgLW8geGVuLWh2bWN0eCB4ZW4taHZtY3R4Lm8gL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAgLUwvdXNy
L3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLnhlbi1odm1jcmFzaC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vy
cm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00
LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9taXNjLy4uLy4uL3Rvb2xzICAtYyAtbyB4ZW4taHZtY3Jhc2gubyB4ZW4taHZtY3Jhc2gu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW4taHZtY3Jhc2ggeGVuLWh2bWNy
YXNoLm8gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGli
eGVuY3RybC5zbyAgLUwvdXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9f
WEVOX1RPT0xTX18gLU1NRCAtTUYgLnhlbi1sb3dtZW1kLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMveGVuc3RvcmUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9vbHMgIC1jIC1vIHhlbi1sb3dtZW1k
Lm8geGVuLWxvd21lbWQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW4tbG93
bWVtZCB4ZW4tbG93bWVtZC5vIC9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rv
b2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4v
Li4vdG9vbHMveGVuc3RvcmUvbGlieGVuc3RvcmUuc28gIC1ML3Vzci9wa2cvbGliCmdjYyAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
bmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW4taHB0b29s
Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90
b29scy9taXNjLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bWlzYy8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2Mv
Li4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8u
Li90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL21pc2MvLi4vLi4vdG9v
bHMgIC1jIC1vIHhlbi1ocHRvb2wubyB4ZW4taHB0b29sLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgICAgLW8geGVuLWhwdG9vbCB4ZW4taHB0b29sLm8gL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL21pc2MvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5ndWVzdC5zbyAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy94ZW5zdG9yZS9saWJ4ZW5zdG9yZS5z
byAgLUwvdXNyL3BrZy9saWIKc2V0IC1lOyBmb3IgZCBpbiA7IGRvIGdtYWtlIC1DICRkOyBk
b25lCi9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9jcm9zcy1pbnN0
YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40
Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL2Nyb3NzLWlu
c3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hl
bjQyL2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9weXRob24v
aW5zdGFsbC13cmFwICIvdXNyL3BrZy9iaW4vcHl0aG9uMi43IiAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbWlzYy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW5jb25z
IHhlbi1kZXRlY3QgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmlu
Ci9yb290L3hlbi00LjIuMC90b29scy9taXNjLy4uLy4uL3Rvb2xzL3B5dGhvbi9pbnN0YWxs
LXdyYXAgIi91c3IvcGtnL2Jpbi9weXRob24yLjciIC9yb290L3hlbi00LjIuMC90b29scy9t
aXNjLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIHhtIHhlbi1idWd0b29s
IHhlbi1weXRob24tcGF0aCB4ZW5kIHhlbnBlcmYgeHN2aWV3IHhlbnBtIHhlbi10bWVtLWxp
c3QtcGFyc2UgZ3RyYWNldmlldyBndHJhY2VzdGF0IHhlbmxvY2twcm9mIHhlbndhdGNoZG9n
ZCB4ZW4tcmluZ3dhdGNoIHhlbi1odm1jdHggeGVuLWh2bWNyYXNoIHhlbi1sb3dtZW1kIHhl
bi1ocHRvb2wgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbgov
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYy8uLi8uLi90b29scy9weXRob24vaW5zdGFsbC13
cmFwICIvdXNyL3BrZy9iaW4vcHl0aG9uMi43IiAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlz
Yy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW5wdm5ldGJvb3QgL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmluCnNldCAtZTsgZm9yIGQg
aW4gOyBkbyBnbWFrZSAtQyAkZCBpbnN0YWxsLXJlY3Vyc2U7IGRvbmUKZ21ha2VbM106IExl
YXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbWlzYycKZ21ha2VbMl06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMgZXhh
bXBsZXMgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvZXhhbXBsZXMnClsgLWQgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFs
bC91c3IveGVuNDIvZXRjL3hlbiBdIHx8IFwKCS9yb290L3hlbi00LjIuMC90b29scy9leGFt
cGxlcy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4t
NC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9ldGMveGVuCnNldCAtZTsgZm9yIGkgaW4g
UkVBRE1FIFJFQURNRS5pbmNvbXBhdGliaWxpdGllczsgXAogICAgZG8gWyAtZSAvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9ldGMveGVuLyRpIF0gfHwgXAogICAg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2V4YW1wbGVzLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLW0wNjQ0IC1wICRpIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2V0Yy94ZW47IFwKZG9uZQpbIC1kIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNy
L3hlbjQyL2V0Yy94ZW4gXSB8fCBcCgkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZXhhbXBsZXMv
Li4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL3Jvb3QveGVuLTQuMi4w
L2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbgpbIC1kIC9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL3hlbjQyL2V0Yy94ZW4vYXV0byBdIHx8IFwKCS9yb290L3hlbi00
LjIuMC90b29scy9leGFtcGxlcy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1
NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9ldGMveGVuL2F1
dG8Kc2V0IC1lOyBmb3IgaSBpbiB4ZW5kLWNvbmZpZy5zeHAgeG0tY29uZmlnLnhtbCB4bWV4
YW1wbGUxICB4bWV4YW1wbGUyIHhtZXhhbXBsZTMgeG1leGFtcGxlLmh2bSB4bWV4YW1wbGUu
aHZtLXN0dWJkb20geG1leGFtcGxlLnB2LWdydWIgeG1leGFtcGxlLm5iZCB4bWV4YW1wbGUu
dnRpIHhsZXhhbXBsZS5odm0geGxleGFtcGxlLnB2bGludXggeGVuZC1wY2ktcXVpcmtzLnN4
cCB4ZW5kLXBjaS1wZXJtaXNzaXZlLnN4cCB4bC5jb25mIGNwdXBvb2w7IFwKICAgIGRvIFsg
LWUgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbi8kaSBd
IHx8IFwKICAgIC9yb290L3hlbi00LjIuMC90b29scy9leGFtcGxlcy8uLi8uLi90b29scy9j
cm9zcy1pbnN0YWxsIC1tMDY0NCAtcCAkaSAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9ldGMveGVuOyBcCmRvbmUKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZXhhbXBsZXMnCmdtYWtlWzJdOiBMZWF2aW5nIGRp
cmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlIC1DIGhvdHBsdWcgaW5zdGFs
bApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aG90cGx1ZycKZ21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2hvdHBsdWcnCmdtYWtlIC1DIGNvbW1vbiBpbnN0YWxsCmdtYWtlWzVdOiBFbnRl
cmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ob3RwbHVnL2NvbW1vbicK
cm0gLWYgImhvdHBsdWdwYXRoLnNoIi50bXA7ICBlY2hvICJTQklORElSPVwiL3Vzci94ZW40
Mi9zYmluXCIiID4+ImhvdHBsdWdwYXRoLnNoIi50bXA7ICBlY2hvICJCSU5ESVI9XCIvdXNy
L3hlbjQyL2JpblwiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAiTElCRVhFQz1c
Ii91c3IveGVuNDIvbGliZXhlY1wiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAi
TElCRElSPVwiL3Vzci94ZW40Mi9saWJcIiIgPj4iaG90cGx1Z3BhdGguc2giLnRtcDsgIGVj
aG8gIlNIQVJFRElSPVwiL3Vzci94ZW40Mi9zaGFyZVwiIiA+PiJob3RwbHVncGF0aC5zaCIu
dG1wOyAgZWNobyAiUFJJVkFURV9CSU5ESVI9XCIvdXNyL3hlbjQyL2JpblwiIiA+PiJob3Rw
bHVncGF0aC5zaCIudG1wOyAgZWNobyAiWEVORklSTVdBUkVESVI9XCIvdXNyL3hlbjQyL2xp
Yi94ZW4vYm9vdFwiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAiWEVOX0NPTkZJ
R19ESVI9XCIvdXNyL3hlbjQyL2V0Yy94ZW5cIiIgPj4iaG90cGx1Z3BhdGguc2giLnRtcDsg
IGVjaG8gIlhFTl9TQ1JJUFRfRElSPVwiL3Vzci94ZW40Mi9ldGMveGVuL3NjcmlwdHNcIiIg
Pj4iaG90cGx1Z3BhdGguc2giLnRtcDsgIGVjaG8gIlhFTl9MT0NLX0RJUj1cIi91c3IveGVu
NDIvdmFyL2xpYlwiIiA+PiJob3RwbHVncGF0aC5zaCIudG1wOyAgZWNobyAiWEVOX1JVTl9E
SVI9XCIvdXNyL3hlbjQyL3Zhci9ydW4veGVuXCIiID4+ImhvdHBsdWdwYXRoLnNoIi50bXA7
ICBlY2hvICJYRU5fUEFHSU5HX0RJUj1cIi91c3IveGVuNDIvdmFyL2xpYi94ZW4veGVucGFn
aW5nXCIiID4+ImhvdHBsdWdwYXRoLnNoIi50bXA7IAlpZiAhIGNtcCAtcyAiaG90cGx1Z3Bh
dGguc2giLnRtcCAiaG90cGx1Z3BhdGguc2giOyB0aGVuIG12IC1mICJob3RwbHVncGF0aC5z
aCIudG1wICJob3RwbHVncGF0aC5zaCI7IGVsc2Ugcm0gLWYgImhvdHBsdWdwYXRoLnNoIi50
bXA7IGZpClsgLWQgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRj
L3hlbi9zY3JpcHRzIF0gfHwgXAoJL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2hvdHBsdWcvY29t
bW9uLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2V0Yy94ZW4vc2NyaXB0cwpzZXQgLWU7
IGZvciBpIGluICJob3RwbHVncGF0aC5zaCI7IFwKICAgZG8gXAogICAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvaG90cGx1Zy9jb21tb24vLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAt
bTA3NTUgLXAgJGkgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRj
L3hlbi9zY3JpcHRzOyBcCmRvbmUKc2V0IC1lOyBmb3IgaSBpbiA7IFwKICAgZG8gXAogICAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9jb21tb24vLi4vLi4vLi4vdG9vbHMvY3Jv
c3MtaW5zdGFsbCAtbTA2NDQgLXAgJGkgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91
c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRzOyBcCmRvbmUKZ21ha2VbNV06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9jb21tb24nCmdtYWtlWzRd
OiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2hvdHBsdWcnCmdt
YWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ob3Rw
bHVnJwpnbWFrZSAtQyBOZXRCU0QgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9OZXRCU0QnCi9yb290L3hlbi00
LjIuMC90b29scy9ob3RwbHVnL05ldEJTRC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxs
IC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9l
dGMveGVuL3NjcmlwdHMKc2V0IC1lOyBmb3IgaSBpbiAgYmxvY2sgdmlmLWJyaWRnZSB2aWYt
aXA7IFwKICAgZG8gXAogICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9OZXRCU0Qv
Li4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgJGkgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRzOyBcCmRvbmUKc2V0
IC1lOyBmb3IgaSBpbiA7IFwKICAgZG8gXAogICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90
cGx1Zy9OZXRCU0QvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgJGkg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRz
OyBcCmRvbmUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2hvdHBsdWcvTmV0QlNELy4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2V0Yy9yYy5kCnNldCAtZTsgZm9yIGkgaW4gcmMuZC94ZW5j
b21tb25zIHJjLmQveGVuZCByYy5kL3hlbmRvbWFpbnMgcmMuZC94ZW4td2F0Y2hkb2c7IFwK
ICAgZG8gXAogICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvaG90cGx1Zy9OZXRCU0QvLi4vLi4v
Li4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgJGkgL3Jvb3QveGVuLTQuMi4wL2Rp
c3QvaW5zdGFsbC91c3IveGVuNDIvZXRjL3JjLmQ7IFwKZG9uZQovcm9vdC94ZW4tNC4yLjAv
dG9vbHMvaG90cGx1Zy9OZXRCU0QvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2
NDQgLXAgLi4vY29tbW9uL2hvdHBsdWdwYXRoLnNoIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2V0Yy9yYy5kL3hlbi1ob3RwbHVncGF0aC5zaApnbWFrZVs1XTog
TGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ob3RwbHVnL05ldEJT
RCcKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
aG90cGx1ZycKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvaG90cGx1ZycKZ21ha2VbMl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scycKZ21ha2UgLUMgeGVudHJhY2UgaW5zdGFsbApnbWFrZVszXTogRW50ZXJp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVudHJhY2UnCmdjYyAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW50cmFjZS5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL3hlbnRyYWNlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbnRyYWNlLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHhlbnRyYWNlLm8geGVu
dHJhY2UuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtbyB4ZW50cmFjZSB4ZW50cmFj
ZS5vIC9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9saWJ4Yy9s
aWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuc2V0c2l6ZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJs
aW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHNldHNpemUubyBzZXRzaXplLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgLW8geGVudHJhY2Vfc2V0c2l6ZSBzZXRzaXplLm8gL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
IC1ML3Vzci9wa2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54ZW5jdHguby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdl
cnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9saWJ4
YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9pbmNsdWRl
ICAtYyAtbyB4ZW5jdHgubyB4ZW5jdHguYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAt
byB4ZW5jdHggeGVuY3R4Lm8gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4u
L3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gIC1ML3Vzci9wa2cvbGliCi9yb290L3hlbi00
LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1
NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9iaW4KWyAteiAi
eGVuY3R4IiBdIHx8IC9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29s
cy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0
YWxsL3Vzci94ZW40Mi9iaW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNlLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL3NoYXJlL21hbi9tYW4xCi9yb290L3hlbi00LjIuMC90b29s
cy94ZW50cmFjZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9v
dC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zaGFyZS9tYW4vbWFuOAovcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVudHJhY2UvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAt
bTA3NTUgLXAgeGVudHJhY2UgeGVudHJhY2Vfc2V0c2l6ZSAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9iaW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRyYWNl
Ly4uLy4uL3Rvb2xzL3B5dGhvbi9pbnN0YWxsLXdyYXAgIi91c3IvcGtnL2Jpbi9weXRob24y
LjciIC9yb290L3hlbi00LjIuMC90b29scy94ZW50cmFjZS8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1tMDc1NSAtcCB4ZW50cmFjZV9mb3JtYXQgL3Jvb3QveGVuLTQuMi4wL2Rpc3Qv
aW5zdGFsbC91c3IveGVuNDIvYmluClsgLXogInhlbmN0eCIgXSB8fCAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMveGVudHJhY2UvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAg
eGVuY3R4IC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2Jpbgovcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVudHJhY2UvLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAt
bTA2NDQgLXAgeGVudHJhY2VfZm9ybWF0LjEgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFs
bC91c3IveGVuNDIvc2hhcmUvbWFuL21hbjEKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnRy
YWNlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhlbnRyYWNlLjggL3Jv
b3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUvbWFuL21hbjgKZ21h
a2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVudHJh
Y2UnCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
JwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMn
CmdtYWtlIC1DIHhjdXRpbHMgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscycKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnhjX3Jlc3RvcmUuby5kIC1mbm8tb3B0aW1p
emUtc2libGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0
aWxzLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRpbHMv
Li4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRpbHMvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtYyAtbyB4Y19yZXN0b3JlLm8geGNfcmVzdG9yZS5jICAtSS91c3Iv
cGtnL2luY2x1ZGUKZ2NjICAgIHhjX3Jlc3RvcmUubyAtbyB4Y19yZXN0b3JlIC9yb290L3hl
bi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRpbHMvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVu
Z3Vlc3Quc28gIC1ML3Vzci9wa2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0
cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hF
Tl9UT09MU19fIC1NTUQgLU1GIC54Y19zYXZlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90
b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9s
aWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy94ZW5zdG9y
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LWMgLW8geGNfc2F2ZS5vIHhjX3NhdmUuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICB4
Y19zYXZlLm8gLW8geGNfc2F2ZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8u
Li90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC9yb290L3hlbi00LjIuMC90b29scy94Y3V0
aWxzLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmd1ZXN0LnNvIC9yb290L3hlbi00LjIuMC90
b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAtTC91
c3IvcGtnL2xpYgpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAucmVhZG5vdGVzLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJy
b3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLWMgLW8gcmVh
ZG5vdGVzLm8gcmVhZG5vdGVzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgcmVhZG5v
dGVzLm8gLW8gcmVhZG5vdGVzIC9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4u
L3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hjdXRp
bHMvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuZ3Vlc3Quc28gIC1ML3Vzci9wa2cvbGliCmdj
YyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5sc2V2dGNo
bi5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL3hjdXRpbHMvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMveGN1dGlscy8uLi8uLi90b29scy9pbmNsdWRlIC1jIC1vIGxzZXZ0Y2huLm8g
bHNldnRjaG4uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICBsc2V2dGNobi5vIC1vIGxz
ZXZ0Y2huIC9yb290L3hlbi00LjIuMC90b29scy94Y3V0aWxzLy4uLy4uL3Rvb2xzL2xpYnhj
L2xpYnhlbmN0cmwuc28gIC1ML3Vzci9wa2cvbGliCi9yb290L3hlbi00LjIuMC90b29scy94
Y3V0aWxzLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9v
bHMveGN1dGlscy8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4Y19yZXN0
b3JlIHhjX3NhdmUgcmVhZG5vdGVzIGxzZXZ0Y2huIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2JpbgpnbWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scy94Y3V0aWxzJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3Rvcnkg
YC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBmaXJtd2FyZSBpbnN0YWxsCmdtYWtl
WzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2Fy
ZScKR0lUPWdpdCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvLi4vLi4vc2NyaXB0
cy9naXQtY2hlY2tvdXQuc2ggZ2l0Oi8veGVuYml0cy54ZW4ub3JnL3NlYWJpb3MuZ2l0IHJl
bC0xLjYuMy4yIHNlYWJpb3MtZGlyCkNsb25pbmcgaW50byAnc2VhYmlvcy1kaXItcmVtb3Rl
LnRtcCcuLi4KcmVtb3RlOiBDb3VudGluZyBvYmplY3RzOiA2NDkwLCBkb25lLhtbSwpyZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgMCUgKDEvMTM5MSkgICAbW0sNcmVtb3RlOiBD
b21wcmVzc2luZyBvYmplY3RzOiAgIDElICgxNC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAgMiUgKDI4LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Np
bmcgb2JqZWN0czogICAzJSAoNDIvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBv
YmplY3RzOiAgIDQlICg1Ni8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICAgNSUgKDcwLzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Npbmcgb2JqZWN0czog
ICA2JSAoODQvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDcl
ICg5OC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgOCUgKDEx
Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAgOSUgKDEyNi8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMCUgKDE0MC8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMSUgKDE1NC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMiUgKDE2Ny8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMyUgKDE4MS8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNCUgKDE5NS8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNSUgKDIwOS8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNiUgKDIyMy8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICAxNyUgKDIzNy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAxOCUgKDI1MS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICAxOSUgKDI2NS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICAyMCUgKDI3OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICAyMSUgKDI5My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICAyMiUgKDMwNy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICAyMyUgKDMyMC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAy
NCUgKDMzNC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNSUg
KDM0OC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNiUgKDM2
Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNyUgKDM3Ni8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyOCUgKDM5MC8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAyOSUgKDQwNC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMCUgKDQxOC8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMSUgKDQzMi8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMiUgKDQ0Ni8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMyUgKDQ2MC8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICAzNCUgKDQ3My8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICAzNSUgKDQ4Ny8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAzNiUgKDUwMS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICAzNyUgKDUxNS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICAzOCUgKDUyOS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICAzOSUgKDU0My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICA0MCUgKDU1Ny8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICA0MSUgKDU3MS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0
MiUgKDU4NS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MyUg
KDU5OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NCUgKDYx
My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NSUgKDYyNi8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NiUgKDY0MC8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NyUgKDY1NC8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0OCUgKDY2OC8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA0OSUgKDY4Mi8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MCUgKDY5Ni8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MSUgKDcxMC8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MiUgKDcyNC8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICA1MyUgKDczOC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA1NCUgKDc1Mi8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNz
aW5nIG9iamVjdHM6ICA1NSUgKDc2Ni8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICA1NiUgKDc3OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9i
amVjdHM6ICA1NyUgKDc5My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVj
dHM6ICA1OCUgKDgwNy8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6
ICA1OSUgKDgyMS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2
MCUgKDgzNS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MSUg
KDg0OS8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MiUgKDg2
My8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MyUgKDg3Ny8x
MzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NCUgKDg5MS8xMzkx
KSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NSUgKDkwNS8xMzkxKSAg
IBtbSw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NiUgKDkxOS8xMzkxKSAgIBtb
Sw1yZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NyUgKDkzMi8xMzkxKSAgIBtbSw1y
ZW1vdGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2OCUgKDk0Ni8xMzkxKSAgIBtbSw1yZW1v
dGU6IENvbXByZXNzaW5nIG9iamVjdHM6ICA2OSUgKDk2MC8xMzkxKSAgIBtbSw1yZW1vdGU6
IENvbXByZXNzaW5nIG9iamVjdHM6ICA3MCUgKDk3NC8xMzkxKSAgIBtbSw1yZW1vdGU6IENv
bXByZXNzaW5nIG9iamVjdHM6ICA3MSUgKDk4OC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA3MiUgKDEwMDIvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVz
c2luZyBvYmplY3RzOiAgNzMlICgxMDE2LzEzOTEpICAgG1tLDXJlbW90ZTogQ29tcHJlc3Np
bmcgb2JqZWN0czogIDc0JSAoMTAzMC8xMzkxKSAgIBtbSw1yZW1vdGU6IENvbXByZXNzaW5n
IG9iamVjdHM6ICA3NSUgKDEwNDQvMTM5MSkgICAbW0sNcmVtb3RlOiBDb21wcmVzc2luZyBv
remote: Compressing objects: 100% (1391/1391), done.
remote: Total 6490 (delta 5147), reused 6420 (delta 5095)
Receiving objects: 100% (6490/6490), 1.61 MiB | 326 KiB/s, done.
Resolving deltas: 100% (5147/5147), done.
Switched to a new branch 'dummy'
cp seabios-config seabios-dir/.config;
gmake PYTHON=python2.7 subdirs-all
gmake[4]: Entering directory `/root/xen-4.2.0/tools/firmware'
gmake[5]: Entering directory `/root/xen-4.2.0/tools/firmware'
gmake -C seabios-dir all
gmake[6]: Entering directory `/root/xen-4.2.0/tools/firmware/seabios-dir-remote'
  Build Kconfig config file
  Compiling whole program out/ccode.16.s
In file included from src/ioport.h:81:0,
                 from src/farptr.h:9,
                 from src/output.c:9:
src/types.h:127:0: warning: "__section" redefined
/usr/include/sys/cdefs.h:320:0: note: this is the location of the previous definition
src/types.h:130:0: warning: "__aligned" redefined
/usr/include/sys/cdefs.h:319:0: note: this is the location of the previous definition
  Compiling to assembler out/asm-offsets.s
  Generating offset file out/asm-offsets.h
  Compiling (16bit) out/code16.o
  Compiling whole program out/ccode32flat.o
In file included from src/ioport.h:81:0,
                 from src/farptr.h:9,
                 from src/output.c:9:
src/types.h:127:0: warning: "__section" redefined
/usr/include/sys/cdefs.h:320:0: note: this is the location of the previous definition
src/types.h:130:0: warning: "__aligned" redefined
/usr/include/sys/cdefs.h:319:0: note: this is the location of the previous definition
  Compiling whole program out/code32seg.o
In file included from src/ioport.h:81:0,
                 from src/farptr.h:9,
                 from src/output.c:9:
src/types.h:127:0: warning: "__section" redefined
/usr/include/sys/cdefs.h:320:0: note: this is the location of the previous definition
src/types.h:130:0: warning: "__aligned" redefined
/usr/include/sys/cdefs.h:319:0: note: this is the location of the previous definition
  Building ld scripts (version "1.6.3.2-20121204_132527-dom0.lippux.de")
Fixed space: 0xe05b-0x10000  total: 8101  slack: 5  Percent slack: 0.1%
16bit size:           40912
32bit segmented size: 1580
32bit flat size:      13636
32bit flat init size: 53232
  Linking out/rom16.o
  Stripping out/rom16.strip.o
  Linking out/rom32seg.o
  Stripping out/rom32seg.strip.o
  Linking out/rom.o
  Prepping out/bios.bin
Total size: 111852  Fixed: 56132  Free: 19220 (used 85.3% of 128KiB rom)
gmake[6]: Leaving directory `/root/xen-4.2.0/tools/firmware/seabios-dir-remote'
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/firmware'
gmake[5]: Entering directory `/root/xen-4.2.0/tools/firmware'
gmake -C rombios all
gmake[6]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios'
gmake[7]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios'
gmake -C 32bit all
gmake[8]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios/32bit'
gmake[9]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios/32bit'
gmake -C tcgbios all
gmake[10]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios/32bit/tcgbios'
gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .tcgbios.o.d -fno-optimize-sibling-calls -mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions -fno-builtin -msoft-float -I/root/xen-4.2.0/tools/firmware/rombios/32bit/tcgbios/../../../../../tools/include -I.. -I../..  -c -o tcgbios.o tcgbios.c  -I/usr/pkg/include
gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .tpm_drivers.o.d -fno-optimize-sibling-calls -mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions -fno-builtin -msoft-float -I/root/xen-4.2.0/tools/firmware/rombios/32bit/tcgbios/../../../../../tools/include -I.. -I../..  -c -o tpm_drivers.o tpm_drivers.c  -I/usr/pkg/include
ld -melf_i386 -r tcgbios.o tpm_drivers.o -o tcgbiosext.o
gmake[10]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios/32bit/tcgbios'
gmake[9]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios/32bit'
gmake 32bitbios_flat.h
gmake[9]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios/32bit'
gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .32bitbios.o.d -fno-optimize-sibling-calls -mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions -fno-builtin -msoft-float -I/root/xen-4.2.0/tools/firmware/rombios/32bit/../../../../tools/include -I..  -c -o 32bitbios.o 32bitbios.c  -I/usr/pkg/include
gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .util.o.d -fno-optimize-sibling-calls -mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions -fno-builtin -msoft-float -I/root/xen-4.2.0/tools/firmware/rombios/32bit/../../../../tools/include -I..  -c -o util.o util.c  -I/usr/pkg/include
gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .pmm.o.d -fno-optimize-sibling-calls -mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions -fno-builtin -msoft-float -I/root/xen-4.2.0/tools/firmware/rombios/32bit/../../../../tools/include -I..  -c -o pmm.o pmm.c  -I/usr/pkg/include
ld -melf_i386 -s -r 32bitbios.o tcgbios/tcgbiosext.o util.o pmm.o -o 32bitbios_all.o
sh mkhex highbios_array 32bitbios_all.o > 32bitbios_flat.h
gmake[9]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios/32bit'
gmake[8]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios/32bit'
gmake[7]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios'
gmake BIOS-bochs-latest
gmake[7]: Entering directory `/root/xen-4.2.0/tools/firmware/rombios'
gcc -o biossums biossums.c
gcc -DBX_SMP_PROCESSORS=1 -E -P rombios.c > _rombios_.c
bcc -o rombios.s -C-c -D__i86__ -0 -S _rombios_.c
sed -e 's/^\.text//' -e 's/^\.data//' rombios.s > _rombios_.s
as86 _rombios_.s -b tmp.bin -u- -w- -g -0 -j -O -l rombios.txt
perl makesym.perl < rombios.txt > rombios.sym
mv tmp.bin BIOS-bochs-latest
./biossums BIOS-bochs-latest

PCI-Bios header at: 0xB5B0
Current checksum:     0x58
Calculated checksum:  0x58

$PIR header at:     0xB900
Current checksum:     0x37
Calculated checksum:  0x27
  Setting checksum.

Bios checksum at:   0xFFFF
Current checksum:     0x00
Calculated checksum:  0xAA  Setting checksum.
rm -f _rombios_.s
gmake[7]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios'
gmake[6]: Leaving directory `/root/xen-4.2.0/tools/firmware/rombios'
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/firmware'
gmake[5]: Entering directory `/root/xen-4.2.0/tools/firmware'
gmake -C vgabios all
gmake[6]: Entering directory `/root/xen-4.2.0/tools/firmware/vgabios'
gcc -o biossums biossums.c
gcc -o vbetables-gen vbetables-gen.c
./vbetables-gen > vbetables.h
gcc -E -P vgabios.c  -DVBE "-DVGABIOS_DATE=\"`date '+%d %b %Y'`\"" > _vgabios_.c
bcc -o vgabios.s -C-c -D__i86__ -S -0 _vgabios_.c
sed -e 's/^\.text//' -e 's/^\.data//' vgabios.s > _vgabios_.s
as86 _vgabios_.s -b vgabios.bin -u -w- -g -0 -j -O -l vgabios.txt
rm -f _vgabios_.s _vgabios_.c vgabios.s
cp vgabios.bin VGABIOS-lgpl-latest.bin
./biossums VGABIOS-lgpl-latest.bin

Bios checksum at:   0x9DFF
Current checksum:     0x00
Calculated checksum:  0xEC  Setting checksum.
ls -l VGABIOS-lgpl-latest.bin
-rw-r--r--  1 root  wheel  40448 Dec  4 13:25 VGABIOS-lgpl-latest.bin
gcc -E -P vgabios.c  -DVBE -DDEBUG "-DVGABIOS_DATE=\"`date '+%d %b %Y'`\"" > _vgabios-debug_.c
bcc -o vgabios-debug.s -C-c -D__i86__ -S -0 _vgabios-debug_.c
sed -e 's/^\.text//' -e 's/^\.data//' vgabios-debug.s > _vgabios-debug_.s
as86 _vgabios-debug_.s -b vgabios.debug.bin -u -w- -g -0 -j -O -l vgabios.debug.txt
rm -f _vgabios-debug_.s _vgabios-debug_.c vgabios-debug.s
cp vgabios.debug.bin VGABIOS-lgpl-latest.debug.bin
./biossums VGABIOS-lgpl-latest.debug.bin

Bios checksum at:   0xA1FF
Current checksum:     0x00
Calculated checksum:  0x58  Setting checksum.
ls -l VGABIOS-lgpl-latest.debug.bin
-rw-r--r--  1 root  wheel  41472 Dec  4 13:25 VGABIOS-lgpl-latest.debug.bin
gcc -E -P vgabios.c  -DCIRRUS -DPCIBIOS "-DVGABIOS_DATE=\"`date '+%d %b %Y'`\"" > _vgabios-cirrus_.c
bcc -o vgabios-cirrus.s -C-c -D__i86__ -S -0 _vgabios-cirrus_.c
sed -e 's/^\.text//' -e 's/^\.data//' vgabios-cirrus.s > _vgabios-cirrus_.s
as86 _vgabios-cirrus_.s -b vgabios-cirrus.bin -u -w- -g -0 -j -O -l vgabios-cirrus.txt
rm -f _vgabios-cirrus_.s _vgabios-cirrus_.c vgabios-cirrus.s
cp vgabios-cirrus.bin VGABIOS-lgpl-latest.cirrus.bin
./biossums VGABIOS-lgpl-latest.cirrus.bin

Bios checksum at:   0x8BFF
Current checksum:     0x00
Calculated checksum:  0xF0  Setting checksum.
ls -l VGABIOS-lgpl-latest.cirrus.bin
-rw-r--r--  1 root  wheel  35840 Dec  4 13:25 VGABIOS-lgpl-latest.cirrus.bin
gcc -E -P vgabios.c  -DCIRRUS -DCIRRUS_DEBUG -DPCIBIOS "-DVGABIOS_DATE=\"`date '+%d %b %Y'`\"" > _vgabios-cirrus-debug_.c
bcc -o vgabios-cirrus-debug.s -C-c -D__i86__ -S -0 _vgabios-cirrus-debug_.c
sed -e 's/^\.text//' -e 's/^\.data//' vgabios-cirrus-debug.s > _vgabios-cirrus-debug_.s
as86 _vgabios-cirrus-debug_.s -b vgabios-cirrus.debug.bin -u -w- -g -0 -j -O -l vgabios-cirrus.debug.txt
rm -f _vgabios-cirrus-debug_.s _vgabios-cirrus-debug_.c vgabios-cirrus-debug.s
cp vgabios-cirrus.debug.bin VGABIOS-lgpl-latest.cirrus.debug.bin
./biossums VGABIOS-lgpl-latest.cirrus.debug.bin

Bios checksum at:   0x8BFF
Current checksum:     0x00
Calculated checksum:  0x68  Setting checksum.
ls -l VGABIOS-lgpl-latest.cirrus.debug.bin
-rw-r--r--  1 root  wheel  35840 Dec  4 13:25 VGABIOS-lgpl-latest.cirrus.debug.bin
gmake[6]: Leaving directory `/root/xen-4.2.0/tools/firmware/vgabios'
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/firmware'
gmake[5]: Entering directory `/root/xen-4.2.0/tools/firmware'
gmake -C etherboot all
gmake[6]: Entering directory `/root/xen-4.2.0/tools/firmware/etherboot'
if ! wget -O _ipxe.tar.gz http://xenbits.xen.org/xen-extfiles/ipxe-git-9a93db3f0947484e30e753bbd61a10b17336e20e.tar.gz; then \
	git clone git://git.ipxe.org/ipxe.git ipxe.git; \
	(cd ipxe.git && git archive --format=tar --prefix=ipxe/ \
	9a93db3f0947484e30e753bbd61a10b17336e20e | gzip >../_ipxe.tar.gz); \
	rm -rf ipxe.git; \
fi
wget: not found
Cloning into 'ipxe.git'...
remote: Counting objects: 37849, done.
remote: Compressing objects: 100% (13276/13276), done.
Receiving 
b2JqZWN0czogICAwJSAoMS8zNzg0OSkgICANUmVjZWl2aW5nIG9iamVjdHM6ICAgMSUgKDM3
OS8zNzg0OSkgICANUmVjZWl2aW5nIG9iamVjdHM6ICAgMiUgKDc1Ny8zNzg0OSkgICANUmVj
ZWl2aW5nIG9iamVjdHM6ICAgMyUgKDExMzYvMzc4NDkpICAgDVJlY2VpdmluZyBvYmplY3Rz
OiAgIDQlICgxNTE0LzM3ODQ5KSAgIA1SZWNlaXZpbmcgb2JqZWN0czogICA1JSAoMTg5My8z
Nzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
IDYlICgyMjcxLzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2aW5n
IG9iamVjdHM6ICAgNyUgKDI2NTAvMzc4NDkpLCA1NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogICA4JSAoMzAyOC8zNzg0OSksIDU1Ni4wMCBLaUIgfCAx
LjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgIDklICgzNDA3LzM3ODQ5KSwgNTU2
LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAxMCUgKDM3ODUv
Mzc4NDkpLCA1NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czog
IDExJSAoNDE2NC8zNzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgMTIlICg0NTQyLzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3Mg
ICANUmVjZWl2aW5nIG9iamVjdHM6ICAxMyUgKDQ5MjEvMzc4NDkpLCA1NTYuMDAgS2lCIHwg
MS4wNiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDE0JSAoNTI5OS8zNzg0OSksIDU1
Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMTUlICg1Njc4
LzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICAxNiUgKDYwNTYvMzc4NDkpLCA1NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAgIA1SZWNlaXZp
bmcgb2JqZWN0czogIDE3JSAoNjQzNS8zNzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMTglICg2ODEzLzM3ODQ5KSwgNTU2LjAwIEtpQiB8
IDEuMDYgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAxOSUgKDcxOTIvMzc4NDkpLCA1
NTYuMDAgS2lCIHwgMS4wNiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDIwJSAoNzU3
MC8zNzg0OSksIDU1Ni4wMCBLaUIgfCAxLjA2IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3Rz
OiAgMjElICg3OTQ5LzM3ODQ5KSwgNTU2LjAwIEtpQiB8IDEuMDYgTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICAyMiUgKDgzMjcvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3Mg
ICANUmVjZWl2aW5nIG9iamVjdHM6ICAyMiUgKDg0MDQvMzc4NDkpLCAxLjMyIE1pQiB8IDEu
MzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAyMyUgKDg3MDYvMzc4NDkpLCAxLjMy
IE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAyNCUgKDkwODQvMzc4
NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAyNSUg
KDk0NjMvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVj
dHM6ICAyNiUgKDk4NDEvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICAyNyUgKDEwMjIwLzM3ODQ5KSwgMS4zMiBNaUIgfCAxLjMwIE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMjglICgxMDU5OC8zNzg0OSksIDEuMzIgTWlCIHwg
MS4zMCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDI5JSAoMTA5NzcvMzc4NDkpLCAx
LjMyIE1pQiB8IDEuMzAgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzMCUgKDExMzU1
LzM3ODQ5KSwgMS4zMiBNaUIgfCAxLjMwIE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
MzElICgxMTczNC8zNzg0OSksIDEuMzIgTWlCIHwgMS4zMCBNaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDMyJSAoMTIxMTIvMzc4NDkpLCAxLjMyIE1pQiB8IDEuMzAgTWlCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICAzMyUgKDEyNDkxLzM3ODQ5KSwgMi4xMCBNaUIgfCAxLjM5
IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMzQlICgxMjg2OS8zNzg0OSksIDIuMTAg
TWlCIHwgMS4zOSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDM1JSAoMTMyNDgvMzc4
NDkpLCAyLjEwIE1pQiB8IDEuMzkgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzNiUg
KDEzNjI2LzM3ODQ5KSwgMi4xMCBNaUIgfCAxLjM5IE1pQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgMzclICgxNDAwNS8zNzg0OSksIDIuMTAgTWlCIHwgMS4zOSBNaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDM4JSAoMTQzODMvMzc4NDkpLCAyLjEwIE1pQiB8IDEuMzkgTWlC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICAzOSUgKDE0NzYyLzM3ODQ5KSwgMi4xMCBNaUIg
fCAxLjM5IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgMzklICgxNDgxMy8zNzg0OSks
IDMuMjggTWlCIHwgMS42MiBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQwJSAoMTUx
NDAvMzc4NDkpLCAzLjI4IE1pQiB8IDEuNjIgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA0MSUgKDE1NTE5LzM3ODQ5KSwgMy4yOCBNaUIgfCAxLjYyIE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgNDIlICgxNTg5Ny8zNzg0OSksIDQuNDUgTWlCIHwgMS42NCBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDQzJSAoMTYyNzYvMzc4NDkpLCA0LjQ1IE1pQiB8IDEu
NjQgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA0NCUgKDE2NjU0LzM3ODQ5KSwgNC40
NSBNaUIgfCAxLjY0IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNDUlICgxNzAzMy8z
Nzg0OSksIDQuNDUgTWlCIHwgMS42NCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQ2
JSAoMTc0MTEvMzc4NDkpLCA0LjQ1IE1pQiB8IDEuNjQgTWlCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA0NyUgKDE3NzkwLzM3ODQ5KSwgNC40NSBNaUIgfCAxLjY0IE1pQi9zICAgDVJl
Y2VpdmluZyBvYmplY3RzOiAgNDclICgxNzg2Mi8zNzg0OSksIDQuNDUgTWlCIHwgMS42NCBN
aUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDQ4JSAoMTgxNjgvMzc4NDkpLCA1LjQxIE1p
QiB8IDEuNjggTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA0OSUgKDE4NTQ3LzM3ODQ5
KSwgNS40MSBNaUIgfCAxLjY4IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTAlICgx
ODkyNS8zNzg0OSksIDUuNDEgTWlCIHwgMS42OCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDUxJSAoMTkzMDMvMzc4NDkpLCA1LjQxIE1pQiB8IDEuNjggTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICA1MiUgKDE5NjgyLzM3ODQ5KSwgNS40MSBNaUIgfCAxLjY4IE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTMlICgyMDA2MC8zNzg0OSksIDUuNDEgTWlCIHwg
MS42OCBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDU0JSAoMjA0MzkvMzc4NDkpLCA1
LjQxIE1pQiB8IDEuNjggTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA1NSUgKDIwODE3
LzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
NTYlICgyMTE5Ni8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDU3JSAoMjE1NzQvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICA1OCUgKDIxOTUzLzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1
IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNTklICgyMjMzMS8zNzg0OSksIDYuNTQg
TWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDYwJSAoMjI3MTAvMzc4
NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2MSUg
KDIzMDg4LzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgNjIlICgyMzQ2Ny8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDYzJSAoMjM4NDUvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2MyUgKDI0MTczLzM3ODQ5KSwgNi41NCBNaUIg
fCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNjQlICgyNDIyNC8zNzg0OSks
IDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDY1JSAoMjQ2
MDIvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA2NiUgKDI0OTgxLzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgNjclICgyNTM1OS8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDY4JSAoMjU3MzgvMzc4NDkpLCA2LjU0IE1pQiB8IDEu
NzUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA2OSUgKDI2MTE2LzM3ODQ5KSwgNi41
NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzAlICgyNjQ5NS8z
Nzg0OSksIDYuNTQgTWlCIHwgMS43NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDcx
JSAoMjY4NzMvMzc4NDkpLCA2LjU0IE1pQiB8IDEuNzUgTWlCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA3MiUgKDI3MjUyLzM3ODQ5KSwgNi41NCBNaUIgfCAxLjc1IE1pQi9zICAgDVJl
Y2VpdmluZyBvYmplY3RzOiAgNzMlICgyNzYzMC8zNzg0OSksIDYuNTQgTWlCIHwgMS43NSBN
aUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDc0JSAoMjgwMDkvMzc4NDkpLCA3LjgyIE1p
QiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA3NSUgKDI4Mzg3LzM3ODQ5
KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzYlICgy
ODc2Ni8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0
czogIDc3JSAoMjkxNDQvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2
aW5nIG9iamVjdHM6ICA3OCUgKDI5NTIzLzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9z
ICAgDVJlY2VpdmluZyBvYmplY3RzOiAgNzklICgyOTkwMS8zNzg0OSksIDcuODIgTWlCIHwg
MS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDgwJSAoMzAyODAvMzc4NDkpLCA3
LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA4MSUgKDMwNjU4
LzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAg
ODIlICgzMTAzNy8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcg
b2JqZWN0czogIDgzJSAoMzE0MTUvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICAN
UmVjZWl2aW5nIG9iamVjdHM6ICA4NCUgKDMxNzk0LzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1
IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgODUlICgzMjE3Mi8zNzg0OSksIDcuODIg
TWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDg2JSAoMzI1NTEvMzc4
NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA4NyUg
KDMyOTI5LzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmpl
Y3RzOiAgODglICgzMzMwOC8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNl
aXZpbmcgb2JqZWN0czogIDg5JSAoMzM2ODYvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlC
L3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA5MCUgKDM0MDY1LzM3ODQ5KSwgNy44MiBNaUIg
fCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgOTElICgzNDQ0My8zNzg0OSks
IDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDkyJSAoMzQ4
MjIvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6
ICA5MyUgKDM1MjAwLzM3ODQ5KSwgNy44MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2Vpdmlu
ZyBvYmplY3RzOiAgOTQlICgzNTU3OS8zNzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAg
IA1SZWNlaXZpbmcgb2JqZWN0czogIDk1JSAoMzU5NTcvMzc4NDkpLCA3LjgyIE1pQiB8IDEu
ODUgTWlCL3MgICANUmVjZWl2aW5nIG9iamVjdHM6ICA5NiUgKDM2MzM2LzM3ODQ5KSwgNy44
MiBNaUIgfCAxLjg1IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAgOTclICgzNjcxNC8z
Nzg0OSksIDcuODIgTWlCIHwgMS44NSBNaUIvcyAgIA1SZWNlaXZpbmcgb2JqZWN0czogIDk4
JSAoMzcwOTMvMzc4NDkpLCA3LjgyIE1pQiB8IDEuODUgTWlCL3MgICANUmVjZWl2aW5nIG9i
amVjdHM6ICA5OSUgKDM3NDcxLzM3ODQ5KSwgOS4xOCBNaUIgfCAxLjk0IE1pQi9zICAgDXJl
bW90ZTogVG90YWwgMzc4NDkgKGRlbHRhIDI4MTM5KSwgcmV1c2VkIDMxMTk3IChkZWx0YSAy
MzAyMSkbW0sKUmVjZWl2aW5nIG9iamVjdHM6IDEwMCUgKDM3ODQ5LzM3ODQ5KSwgOS4xOCBN
aUIgfCAxLjk0IE1pQi9zICAgDVJlY2VpdmluZyBvYmplY3RzOiAxMDAlICgzNzg0OS8zNzg0
OSksIDkuMjYgTWlCIHwgMi4wNCBNaUIvcywgZG9uZS4KUmVzb2x2aW5nIGRlbHRhczogICAw
JSAoMC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogICAxJSAoMzI0LzI4MTM5KSAgIA1S
ZXNvbHZpbmcgZGVsdGFzOiAgIDIlICg2MjQvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6
ICAgMyUgKDExMTAvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNCUgKDEyMTgvMjgx
MzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNSUgKDE0NDMvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAgNiUgKDE3NTYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAgNyUg
KDIwMDMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxMCUgKDI4ODkvMjgxMzkpICAg
DVJlc29sdmluZyBkZWx0YXM6ICAxMSUgKDMxMzQvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICAxMiUgKDM0MDcvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxMyUgKDM2ODUv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxNCUgKDM5NjAvMjgxMzkpICAgDVJlc29s
dmluZyBkZWx0YXM6ICAxNSUgKDQyMzgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAx
NiUgKDQ2MTkvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAxNyUgKDQ5NTMvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICAxOCUgKDUwNzEvMjgxMzkpICAgDVJlc29sdmluZyBk
ZWx0YXM6ICAxOSUgKDUzNDgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyMCUgKDU2
ODIvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyMSUgKDU5NjAvMjgxMzkpICAgDVJl
c29sdmluZyBkZWx0YXM6ICAyMiUgKDYyNzkvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6
ICAyMyUgKDY0NzMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyNCUgKDY3NjYvMjgx
MzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyNSUgKDcwOTUvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAyNyUgKDc4MDUvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyOCUg
KDc5NTgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAyOSUgKDgxNjcvMjgxMzkpICAg
DVJlc29sdmluZyBkZWx0YXM6ICAzMCUgKDg0NjMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICAzMSUgKDg3MzAvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAzMiUgKDkwMTYv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAzNCUgKDk3NTUvMjgxMzkpICAgDVJlc29s
dmluZyBkZWx0YXM6ICAzNSUgKDk4NTYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICAz
NiUgKDEwMTY0LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgMzclICgxMDQzMS8yODEz
OSkgICANUmVzb2x2aW5nIGRlbHRhczogIDM4JSAoMTA2OTgvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICAzOSUgKDExMTkxLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNDAl
ICgxMTMwNC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDQxJSAoMTE1NTUvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA0MiUgKDExODM4LzI4MTM5KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgNDMlICgxMjEwNi8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDQ0JSAo
MTIzOTcvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0NSUgKDEyNjk3LzI4MTM5KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgNDYlICgxMjk4Mi8yODEzOSkgICANUmVzb2x2aW5nIGRl
bHRhczogIDQ3JSAoMTMyMjYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA0OCUgKDEz
NTUxLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNDklICgxMzgwNC8yODEzOSkgICAN
UmVzb2x2aW5nIGRlbHRhczogIDUwJSAoMTQwODgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICA1MSUgKDE0MzcwLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNTIlICgxNDYz
Ny8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDUzJSAoMTQ5NjEvMjgxMzkpICAgDVJl
c29sdmluZyBkZWx0YXM6ICA1NCUgKDE1MjI4LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFz
OiAgNTUlICgxNTQ5MC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDU2JSAoMTU3NjUv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA1NyUgKDE2MDYwLzI4MTM5KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgNTglICgxNjMyMi8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczog
IDU5JSAoMTY2NDkvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA2MCUgKDE2OTMxLzI4
MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjElICgxNzE2Ni8yODEzOSkgICANUmVzb2x2
aW5nIGRlbHRhczogIDYyJSAoMTc0NjgvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA2
MyUgKDE3NzM0LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjQlICgxODAxNy8yODEz
OSkgICANUmVzb2x2aW5nIGRlbHRhczogIDY1JSAoMTgyOTQvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICA2NiUgKDE4NTkwLzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNjcl
ICgxODg4NS8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDY4JSAoMTkxNTEvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA2OSUgKDE5NDIwLzI4MTM5KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgNzAlICgxOTcyMC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDcxJSAo
MjAwMDUvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA3MiUgKDIwMjY4LzI4MTM5KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAgNzMlICgyMDU1NS8yODEzOSkgICANUmVzb2x2aW5nIGRl
bHRhczogIDc0JSAoMjA4NjMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA3NSUgKDIx
MTA3LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNzYlICgyMTQ0OC8yODEzOSkgICAN
UmVzb2x2aW5nIGRlbHRhczogIDc3JSAoMjE2NzAvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0
YXM6ICA3OCUgKDIxOTY3LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgNzklICgyMjI0
NS8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDgwJSAoMjI1MTMvMjgxMzkpICAgDVJl
c29sdmluZyBkZWx0YXM6ICA4MSUgKDIyNzk1LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFz
OiAgODIlICgyMzA4MS8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDgzJSAoMjMzNzcv
MjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4NCUgKDIzNjQ0LzI4MTM5KSAgIA1SZXNv
bHZpbmcgZGVsdGFzOiAgODUlICgyMzkzNy8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczog
IDg2JSAoMjQyMDMvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA4NyUgKDI0NTI3LzI4
MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgODglICgyNDgwOS8yODEzOSkgICANUmVzb2x2
aW5nIGRlbHRhczogIDg5JSAoMjUxMTYvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA5
MCUgKDI1MzI3LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgOTElICgyNTYwOS8yODEz
OSkgICANUmVzb2x2aW5nIGRlbHRhczogIDkyJSAoMjU4OTEvMjgxMzkpICAgDVJlc29sdmlu
ZyBkZWx0YXM6ICA5MyUgKDI2MTg2LzI4MTM5KSAgIA1SZXNvbHZpbmcgZGVsdGFzOiAgOTQl
ICgyNjQ1OC8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDk1JSAoMjY3NDQvMjgxMzkp
ICAgDVJlc29sdmluZyBkZWx0YXM6ICA5NiUgKDI3MDM5LzI4MTM5KSAgIA1SZXNvbHZpbmcg
ZGVsdGFzOiAgOTclICgyNzMwNi8yODEzOSkgICANUmVzb2x2aW5nIGRlbHRhczogIDk4JSAo
Mjc1NzcvMjgxMzkpICAgDVJlc29sdmluZyBkZWx0YXM6ICA5OSUgKDI3ODcyLzI4MTM5KSAg
IA1SZXNvbHZpbmcgZGVsdGFzOiAxMDAlICgyODEzOS8yODEzOSkgICANUmVzb2x2aW5nIGRl
bHRhczogMTAwJSAoMjgxMzkvMjgxMzkpLCBkb25lLgpDaGVja2luZyBvdXQgZmlsZXM6ICAy
NyUgKDM0Mi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAyOCUgKDM0My8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICAyOSUgKDM1NS8xMjIyKSAgIA1DaGVja2luZyBvdXQg
ZmlsZXM6ICAzMCUgKDM2Ny8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzMSUgKDM3
OS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzMiUgKDM5Mi8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICAzMyUgKDQwNC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6
ICAzNCUgKDQxNi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzNSUgKDQyOC8xMjIy
KSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzNiUgKDQ0MC8xMjIyKSAgIA1DaGVja2luZyBv
dXQgZmlsZXM6ICAzNyUgKDQ1My8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzOCUg
KDQ2NS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICAzOSUgKDQ3Ny8xMjIyKSAgIA1D
aGVja2luZyBvdXQgZmlsZXM6ICA0MCUgKDQ4OS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA0MSUgKDUwMi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0MiUgKDUxNC8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0MyUgKDUyNi8xMjIyKSAgIA1DaGVja2lu
ZyBvdXQgZmlsZXM6ICA0NCUgKDUzOC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0
NSUgKDU1MC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0NiUgKDU2My8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICA0NyUgKDU3NS8xMjIyKSAgIA1DaGVja2luZyBvdXQg
ZmlsZXM6ICA0OCUgKDU4Ny8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA0OSUgKDU5
OS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1MCUgKDYxMS8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICA1MSUgKDYyNC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6
ICA1MiUgKDYzNi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1MyUgKDY0OC8xMjIy
KSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1NCUgKDY2MC8xMjIyKSAgIA1DaGVja2luZyBv
dXQgZmlsZXM6ICA1NSUgKDY3My8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1NiUg
KDY4NS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1NyUgKDY5Ny8xMjIyKSAgIA1D
aGVja2luZyBvdXQgZmlsZXM6ICA1OCUgKDcwOS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA1OCUgKDcxNi8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA1OSUgKDcyMS8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2MCUgKDczNC8xMjIyKSAgIA1DaGVja2lu
ZyBvdXQgZmlsZXM6ICA2MSUgKDc0Ni8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2
MiUgKDc1OC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2MyUgKDc3MC8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICA2NCUgKDc4My8xMjIyKSAgIA1DaGVja2luZyBvdXQg
ZmlsZXM6ICA2NSUgKDc5NS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2NiUgKDgw
Ny8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA2NyUgKDgxOS8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICA2OCUgKDgzMS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6
ICA2OSUgKDg0NC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3MCUgKDg1Ni8xMjIy
KSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3MSUgKDg2OC8xMjIyKSAgIA1DaGVja2luZyBv
dXQgZmlsZXM6ICA3MiUgKDg4MC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3MyUg
KDg5My8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3NCUgKDkwNS8xMjIyKSAgIA1D
aGVja2luZyBvdXQgZmlsZXM6ICA3NSUgKDkxNy8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA3NiUgKDkyOS8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3NyUgKDk0MS8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA3OCUgKDk1NC8xMjIyKSAgIA1DaGVja2lu
ZyBvdXQgZmlsZXM6ICA3OSUgKDk2Ni8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4
MCUgKDk3OC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4MSUgKDk5MC8xMjIyKSAg
IA1DaGVja2luZyBvdXQgZmlsZXM6ICA4MiUgKDEwMDMvMTIyMikgICANQ2hlY2tpbmcgb3V0
IGZpbGVzOiAgODMlICgxMDE1LzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczogIDg0JSAo
MTAyNy8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4NSUgKDEwMzkvMTIyMikgICAN
Q2hlY2tpbmcgb3V0IGZpbGVzOiAgODYlICgxMDUxLzEyMjIpICAgDUNoZWNraW5nIG91dCBm
aWxlczogIDg3JSAoMTA2NC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA4OCUgKDEw
NzYvMTIyMikgICANQ2hlY2tpbmcgb3V0IGZpbGVzOiAgODklICgxMDg4LzEyMjIpICAgDUNo
ZWNraW5nIG91dCBmaWxlczogIDkwJSAoMTEwMC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmls
ZXM6ICA5MSUgKDExMTMvMTIyMikgICANQ2hlY2tpbmcgb3V0IGZpbGVzOiAgOTIlICgxMTI1
LzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczogIDkzJSAoMTEzNy8xMjIyKSAgIA1DaGVj
a2luZyBvdXQgZmlsZXM6ICA5NCUgKDExNDkvMTIyMikgICANQ2hlY2tpbmcgb3V0IGZpbGVz
OiAgOTUlICgxMTYxLzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczogIDk2JSAoMTE3NC8x
MjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA5NyUgKDExODYvMTIyMikgICANQ2hlY2tp
bmcgb3V0IGZpbGVzOiAgOTclICgxMTk3LzEyMjIpICAgDUNoZWNraW5nIG91dCBmaWxlczog
IDk4JSAoMTE5OC8xMjIyKSAgIA1DaGVja2luZyBvdXQgZmlsZXM6ICA5OSUgKDEyMTAvMTIy
MikgICANQ2hlY2tpbmcgb3V0IGZpbGVzOiAxMDAlICgxMjIyLzEyMjIpICAgDUNoZWNraW5n
IG91dCBmaWxlczogMTAwJSAoMTIyMi8xMjIyKSwgZG9uZS4KbXYgX2lweGUudGFyLmd6IGlw
eGUudGFyLmd6CnJtIC1yZiBpcHhlCmd6aXAgLWRjIGlweGUudGFyLmd6IHwgdGFyIHhmIC0K
Zm9yIGkgaW4gJChjYXQgcGF0Y2hlcy9zZXJpZXMpIDsgZG8gICAgICAgICAgICAgICAgIFwK
ICAgIHBhdGNoIC1kIGlweGUgLXAxIC0tcXVpZXQgPHBhdGNoZXMvJGkgfHwgZXhpdCAxIDsg
XApkb25lCmNhdCBDb25maWcgPj5pcHhlL3NyYy9hcmNoL2kzODYvTWFrZWZpbGUKZ21ha2Ug
LUMgaXB4ZS9zcmMgYmluL3J0bDgxMzkucm9tCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9ldGhlcmJvb3QvaXB4ZS9zcmMn
CnJtIC1mICBiaW4vKi4qICBiaW4vZXJyb3JzCSBiaW4vTklDCSAuL3V0aWwvbnJ2MmIgLi91
dGlsL3piaW4gLi91dGlsL2VsZjJlZmkzMiAuL3V0aWwvZWxmMmVmaTY0IC4vdXRpbC9lZmly
b20gLi91dGlsL2ljY2ZpeCAuL3V0aWwvZWluZm8gVEFHUyBiaW4vc3ltdGFiCiAgW01FRElB
UlVMRVNdIGV4ZQogIFtNRURJQVJVTEVTXSByYXcKICBbTUVESUFSVUxFU10gaGQKICBbTUVE
SUFSVUxFU10gbmJpCiAgW01FRElBUlVMRVNdIGRzawogIFtNRURJQVJVTEVTXSBsa3JuCiAg
W01FRElBUlVMRVNdIGtra3B4ZQogIFtNRURJQVJVTEVTXSBra3B4ZQogIFtNRURJQVJVTEVT
XSBrcHhlCiAgW01FRElBUlVMRVNdIHB4ZQogIFtNRURJQVJVTEVTXSBtcm9tCiAgW01FRElB
UlVMRVNdIHJvbQogIFtSVUxFU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlpc3IuUwog
IFtSVUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb20zMl93cmFwcGVyLlMK
ICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9lbnRyeS5TCiAgW1JVTEVT
XSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2U4MjBtYW5nbGVyLlMKICBbUlVMRVNdIGFy
Y2gvaTM4Ni9wcmVmaXgvbWJyLlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgvcHhlcHJl
Zml4LlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgvcm9tcHJlZml4LlMKICBbUlVMRVNd
IGFyY2gvaTM4Ni9wcmVmaXgvZXhlcHJlZml4LlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVm
aXgvaGRwcmVmaXguUwogIFtSVUxFU10gYXJjaC9pMzg2L3ByZWZpeC91c2JkaXNrLlMKICBb
UlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgva2trcHhlcHJlZml4LlMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9wcmVmaXgva3B4ZXByZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L25i
aXByZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L251bGxwcmVmaXguUwogIFtS
VUxFU10gYXJjaC9pMzg2L3ByZWZpeC9ib290cGFydC5TCiAgW1JVTEVTXSBhcmNoL2kzODYv
cHJlZml4L3VuZGlsb2FkZXIuUwogIFtSVUxFU10gYXJjaC9pMzg2L3ByZWZpeC9ra3B4ZXBy
ZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L3VubnJ2MmIxNi5TCiAgW1JVTEVT
XSBhcmNoL2kzODYvcHJlZml4L2xrcm5wcmVmaXguUwogIFtSVUxFU10gYXJjaC9pMzg2L3By
ZWZpeC91bm5ydjJiLlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9wcmVmaXgvbXJvbXByZWZpeC5T
CiAgW1JVTEVTXSBhcmNoL2kzODYvcHJlZml4L2Rza3ByZWZpeC5TCiAgW1JVTEVTXSBhcmNo
L2kzODYvcHJlZml4L2xpYnByZWZpeC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvdHJhbnNpdGlv
bnMvbGlicm0uUwogIFtSVUxFU10gYXJjaC9pMzg2L3RyYW5zaXRpb25zL2xpYmEyMC5TCiAg
W1JVTEVTXSBhcmNoL2kzODYvdHJhbnNpdGlvbnMvbGlicG0uUwogIFtSVUxFU10gYXJjaC9p
Mzg2L3RyYW5zaXRpb25zL2xpYmtpci5TCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS9zdGFj
azE2LlMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL3N0YWNrLlMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9jb3JlL3NldGptcC5TCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS9nZGJpZHQuUwog
IFtSVUxFU10gYXJjaC9pMzg2L2NvcmUvcGF0Y2hfY2YuUwogIFtSVUxFU10gYXJjaC9pMzg2
L2NvcmUvdmlydGFkZHIuUwogIFtSVUxFU10gdGVzdHMvZ2Ric3R1Yl90ZXN0LlMKICBbUlVM
RVNdIGFyY2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpcm9tLmMKICBbUlVMRVNdIGFyY2gvaTM4
Ni9kcml2ZXJzL25ldC91bmRpbmV0LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9kcml2ZXJzL25l
dC91bmRpLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpb25seS5jCiAg
W1JVTEVTXSBhcmNoL2kzODYvZHJpdmVycy9uZXQvdW5kaWxvYWQuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlwcmVsb2FkLmMKICBbUlVMRVNdIGFyY2gveDg2L3By
ZWZpeC9lZmlkcnZwcmVmaXguYwogIFtSVUxFU10gYXJjaC94ODYvcHJlZml4L2VmaXByZWZp
eC5jCiAgW1JVTEVTXSBhcmNoL3g4Ni9pbnRlcmZhY2UvZWZpL2VmaXg4Nl9uYXAuYwogIFtS
VUxFU10gYXJjaC94ODYvY29yZS94ODZfc3RyaW5nLmMKICBbUlVMRVNdIGFyY2gveDg2L2Nv
cmUvcGNpZGlyZWN0LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9oY2kvY29tbWFuZHMvcmVib290
X2NtZC5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaGNpL2NvbW1hbmRzL3B4ZV9jbWQuYwogIFtS
VUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb21ib290X3Jlc29sdi5jCiAg
W1JVTEVTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbTMyX2NhbGwuYwogIFtS
VUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb21ib290X2NhbGwuYwogIFtS
VUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGVwYXJlbnQvcHhlcGFyZW50X2RoY3AuYwog
IFtSVUxFU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGVwYXJlbnQvcHhlcGFyZW50LmMKICBb
UlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV91ZHAuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX3VuZGkuYwogIFtSVUxFU10gYXJjaC9pMzg2L2lu
dGVyZmFjZS9weGUvcHhlX2xvYWRlci5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW50ZXJmYWNl
L3B4ZS9weGVfZXhpdF9ob29rLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhl
L3B4ZV9wcmVib290LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV90
ZnRwLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9maWxlLmMKICBb
UlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9jYWxsLmMKICBbUlVMRVNdIGFy
Y2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3Nfc21iaW9zLmMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9pbnRlcmZhY2UvcGNiaW9zL21lbXRvcF91bWFsbG9jLmMKICBbUlVMRVNdIGFyY2gv
aTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3NpbnQuYwogIFtSVUxFU10gYXJjaC9pMzg2L2lu
dGVyZmFjZS9wY2Jpb3MvYmlvc190aW1lci5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW50ZXJm
YWNlL3BjYmlvcy9wY2liaW9zLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNi
aW9zL2ludDEzLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9zL2Jpb3Nf
bmFwLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbWFnZS9jb21ib290LmMKICBbUlVMRVNdIGFy
Y2gvaTM4Ni9pbWFnZS9lbGZib290LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbWFnZS9ib290
c2VjdG9yLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9pbWFnZS9tdWx0aWJvb3QuYwogIFtSVUxF
U10gYXJjaC9pMzg2L2ltYWdlL3B4ZV9pbWFnZS5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW1h
Z2UvYnppbWFnZS5jCiAgW1JVTEVTXSBhcmNoL2kzODYvaW1hZ2UvbmJpLmMKICBbUlVMRVNd
IGFyY2gvaTM4Ni9pbWFnZS9jb20zMi5jCiAgW1JVTEVTXSBhcmNoL2kzODYvZmlybXdhcmUv
cGNiaW9zL3BucGJpb3MuYwogIFtSVUxFU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9i
aW9zX2NvbnNvbGUuYwogIFtSVUxFU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9mYWtl
ZTgyMC5jCiAgW1JVTEVTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2Jhc2VtZW0uYwog
IFtSVUxFU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9tZW1tYXAuYwogIFtSVUxFU10g
YXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9oaWRlbWVtLmMKICBbUlVMRVNdIGFyY2gvaTM4
Ni90cmFuc2l0aW9ucy9saWJybV9tZ210LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL2R1
bXByZWdzLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL251bGx0cmFwLmMKICBbUlVMRVNd
IGFyY2gvaTM4Ni9jb3JlL3JlbG9jYXRlLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL3g4
Nl9pby5jCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS90aW1lcjIuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2NvcmUvcnVudGltZS5jCiAgW1JVTEVTXSBhcmNoL2kzODYvY29yZS9waWM4MjU5
LmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL2NwdS5jCiAgW1JVTEVTXSBhcmNoL2kzODYv
Y29yZS9nZGJtYWNoLmMKICBbUlVMRVNdIGFyY2gvaTM4Ni9jb3JlL3ZpZGVvX3N1YnIuYwog
IFtSVUxFU10gYXJjaC9pMzg2L2NvcmUvYmFzZW1lbV9wYWNrZXQuYwogIFtSVUxFU10gYXJj
aC9pMzg2L2NvcmUvcmR0c2NfdGltZXIuYwogIFtSVUxFU10gY29uZmlnL2NvbmZpZ19yb21w
cmVmaXguYwogIFtSVUxFU10gY29uZmlnL2NvbmZpZy5jCiAgW1JVTEVTXSBjb25maWcvY29u
ZmlnX2ZjLmMKICBbUlVMRVNdIGNvbmZpZy9jb25maWdfZXRoZXJuZXQuYwogIFtSVUxFU10g
Y29uZmlnL2NvbmZpZ19uZXQ4MDIxMS5jCiAgW1JVTEVTXSBjb25maWcvY29uZmlnX2luZmlu
aWJhbmQuYwogIFtSVUxFU10gdXNyL2F1dG9ib290LmMKICBbUlVMRVNdIHVzci9pZm1nbXQu
YwogIFtSVUxFU10gdXNyL2ZjbWdtdC5jCiAgW1JVTEVTXSB1c3IvZGhjcG1nbXQuYwogIFtS
VUxFU10gdXNyL3B4ZW1lbnUuYwogIFtSVUxFU10gdXNyL2ltZ21nbXQuYwogIFtSVUxFU10g
dXNyL2xvdGVzdC5jCiAgW1JVTEVTXSB1c3IvaXdtZ210LmMKICBbUlVMRVNdIHVzci9yb3V0
ZS5jCiAgW1JVTEVTXSB1c3IvcHJvbXB0LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFw
X3JvLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2l0LmMKICBbUlVMRVNdIGhjaS9r
ZXltYXAva2V5bWFwX3NnLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2VzLmMKICBb
UlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2h1LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5
bWFwX2JnLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX25sLmMKICBbUlVMRVNdIGhj
aS9rZXltYXAva2V5bWFwX2N6LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2RlLmMK
ICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2ZpLmMKICBbUlVMRVNdIGhjaS9rZXltYXAv
a2V5bWFwX21rLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3VrLmMKICBbUlVMRVNd
IGhjaS9rZXltYXAva2V5bWFwX3BsLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2F6
LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2ZyLmMKICBbUlVMRVNdIGhjaS9rZXlt
YXAva2V5bWFwX2J5LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX210LmMKICBbUlVM
RVNdIGhjaS9rZXltYXAva2V5bWFwX3dvLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFw
X3VhLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2x0LmMKICBbUlVMRVNdIGhjaS9r
ZXltYXAva2V5bWFwX3NyLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2FsLmMKICBb
UlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3J1LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5
bWFwX2NmLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX25vLmMKICBbUlVMRVNdIGhj
aS9rZXltYXAva2V5bWFwX2V0LmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3RoLmMK
ICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3VzLmMKICBbUlVMRVNdIGhjaS9rZXltYXAv
a2V5bWFwX2lsLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX2dyLmMKICBbUlVMRVNd
IGhjaS9rZXltYXAva2V5bWFwX2RrLmMKICBbUlVMRVNdIGhjaS9rZXltYXAva2V5bWFwX3B0
LmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy93aWRnZXRzL2VkaXRib3guYwogIFtSVUxFU10g
aGNpL211Y3Vyc2VzL2tiLmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy9jb2xvdXIuYwogIFtS
VUxFU10gaGNpL211Y3Vyc2VzL3Nsay5jCiAgW1JVTEVTXSBoY2kvbXVjdXJzZXMvcHJpbnQu
YwogIFtSVUxFU10gaGNpL211Y3Vyc2VzL3dpbmRvd3MuYwogIFtSVUxFU10gaGNpL211Y3Vy
c2VzL211Y3Vyc2VzLmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy93aW5pbml0LmMKICBbUlVM
RVNdIGhjaS9tdWN1cnNlcy9wcmludF9uYWR2LmMKICBbUlVMRVNdIGhjaS9tdWN1cnNlcy9h
bnNpX3NjcmVlbi5jCiAgW1JVTEVTXSBoY2kvbXVjdXJzZXMvd2luYXR0cnMuYwogIFtSVUxF
U10gaGNpL211Y3Vyc2VzL2VkZ2luZy5jCiAgW1JVTEVTXSBoY2kvbXVjdXJzZXMvY2xlYXIu
YwogIFtSVUxFU10gaGNpL211Y3Vyc2VzL2FsZXJ0LmMKICBbUlVMRVNdIGhjaS90dWkvc2V0
dGluZ3NfdWkuYwogIFtSVUxFU10gaGNpL3R1aS9sb2dpbl91aS5jCiAgW1JVTEVTXSBoY2kv
Y29tbWFuZHMvdmxhbl9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1hbmRzL2l3bWdtdF9jbWQu
YwogIFtSVUxFU10gaGNpL2NvbW1hbmRzL2xvdGVzdF9jbWQuYwogIFtSVUxFU10gaGNpL2Nv
bW1hbmRzL2ZjbWdtdF9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1hbmRzL2ltYWdlX2NtZC5j
CiAgW1JVTEVTXSBoY2kvY29tbWFuZHMvZGlnZXN0X2NtZC5jCiAgW1JVTEVTXSBoY2kvY29t
bWFuZHMvcm91dGVfY21kLmMKICBbUlVMRVNdIGhjaS9jb21tYW5kcy9kaGNwX2NtZC5jCiAg
W1JVTEVTXSBoY2kvY29tbWFuZHMvdGltZV9jbWQuYwogIFtSVUxFU10gaGNpL2NvbW1hbmRz
L2F1dG9ib290X2NtZC5jCiAgW1JVTEVTXSBoY2kvY29tbWFuZHMvZ2Ric3R1Yl9jbWQuYwog
  [base64-encoded attachment body, truncated at both ends of this chunk; decoded,
  it is a verbose iPXE build log listing "[RULES] <source file>" entries for the
  hci/, crypto/, tests/, interface/, drivers/, image/, net/, core/ and libgcc/
  trees, followed by "[DEPS] <source file>" entries for the arch/i386, arch/x86,
  config/, usr/ and hci/ trees]
U10gY29yZS9zdHJ0b3VsbC5jCiAgW0RFUFNdIGNvcmUvc2V0dGluZ3MuYwogIFtERVBTXSBj
b3JlL21haW4uYwogIFtERVBTXSBjb3JlL2Rvd25sb2FkZXIuYwogIFtERVBTXSBjb3JlL2h3
LmMKICBbREVQU10gY29yZS9iaXRvcHMuYwogIFtERVBTXSBjb3JlL3ZzcHJpbnRmLmMKICBb
REVQU10gY29yZS9udWxsX25hcC5jCiAgW0RFUFNdIGNvcmUveGZlci5jCiAgW0RFUFNdIGNv
cmUvcGNfa2JkLmMKICBbREVQU10gY29yZS9wb3NpeF9pby5jCiAgW0RFUFNdIGNvcmUvZ2Ri
dWRwLmMKICBbREVQU10gY29yZS9jb25zb2xlLmMKICBbREVQU10gY29yZS9vcGVuLmMKICBb
REVQU10gY29yZS9zZXJpYWwuYwogIFtERVBTXSBjb3JlL2FjcGkuYwogIFtERVBTXSBjb3Jl
L3VyaS5jCiAgW0RFUFNdIGNvcmUvYmxvY2tkZXYuYwogIFtERVBTXSBjb3JlL2NwaW8uYwog
IFtERVBTXSBjb3JlL3RpbWVyLmMKICBbREVQU10gY29yZS9taXNjLmMKICBbREVQU10gY29y
ZS9jd3VyaS5jCiAgW0RFUFNdIGNvcmUvaTgyMzY1LmMKICBbREVQU10gY29yZS9lcnJuby5j
CiAgW0RFUFNdIGNvcmUvam9iLmMKICBbREVQU10gY29yZS9wcm9jZXNzLmMKICBbREVQU10g
Y29yZS9nZGJzZXJpYWwuYwogIFtERVBTXSBjb3JlL2RlYnVnLmMKICBbREVQU10gY29yZS9m
bnJlYy5jCiAgW0RFUFNdIGNvcmUvbWFsbG9jLmMKICBbREVQU10gY29yZS9hbnNpZXNjLmMK
ICBbREVQU10gY29yZS9kZXZpY2UuYwogIFtERVBTXSBjb3JlL2Jhc2U2NC5jCiAgW0RFUFNd
IGNvcmUvYml0bWFwLmMKICBbREVQU10gY29yZS9leGVjLmMKICBbREVQU10gY29yZS9tb25v
am9iLmMKICBbREVQU10gY29yZS9udWxsX3NhbmJvb3QuYwogIFtERVBTXSBjb3JlL3N0cmlu
Z2V4dHJhLmMKICBbREVQU10gY29yZS9yYW5kb20uYwogIFtERVBTXSBjb3JlL3BhcnNlb3B0
LmMKICBbREVQU10gY29yZS9yZXNvbHYuYwogIFtERVBTXSBjb3JlL2lvYnVmLmMKICBbREVQ
U10gY29yZS9pbWFnZS5jCiAgW0RFUFNdIGNvcmUvc3RyaW5nLmMKICBbREVQU10gY29yZS9i
YXNlMTYuYwogIFtERVBTXSBjb3JlL2Fzc2VydC5jCiAgW0RFUFNdIGNvcmUvcmVmY250LmMK
ICBbREVQU10gY29yZS91dWlkLmMKICBbREVQU10gY29yZS9zZXJpYWxfY29uc29sZS5jCiAg
W0RFUFNdIGNvcmUvcGNtY2lhLmMKICBbREVQU10gbGliZ2NjL19fdW1vZGRpMy5jCiAgW0RF
UFNdIGxpYmdjYy9fX3VkaXZkaTMuYwogIFtERVBTXSBsaWJnY2MvX19tb2RkaTMuYwogIFtE
RVBTXSBsaWJnY2MvbWVtY3B5LmMKICBbREVQU10gbGliZ2NjL2ljYy5jCiAgW0RFUFNdIGxp
YmdjYy9fX2RpdmRpMy5jCiAgW0RFUFNdIGxpYmdjYy9fX3VkaXZtb2RkaTQuYwpnbWFrZVs3
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9l
dGhlcmJvb3QvaXB4ZS9zcmMnCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9ldGhlcmJvb3QvaXB4ZS9zcmMnCiAgW0RFUFNd
IGFyY2gvaTM4Ni9wcmVmaXgvcm9tcHJlZml4LlMKICBbREVQU10gYXJjaC9pMzg2L3ByZWZp
eC9tcm9tcHJlZml4LlMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlyb20u
YwogIFtERVBTXSBhcmNoL2kzODYvZHJpdmVycy9uZXQvdW5kaW5ldC5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9kcml2ZXJzL25ldC91bmRpLmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMv
bmV0L3VuZGlvbmx5LmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlsb2Fk
LmMKICBbREVQU10gYXJjaC9pMzg2L2RyaXZlcnMvbmV0L3VuZGlwcmVsb2FkLmMKICBbREVQ
U10gYXJjaC94ODYvaW50ZXJmYWNlL2VmaS9lZml4ODZfbmFwLmMKICBbREVQU10gYXJjaC94
ODYvY29yZS9wY2lkaXJlY3QuYwogIFtERVBTXSBhcmNoL2kzODYvaGNpL2NvbW1hbmRzL3Jl
Ym9vdF9jbWQuYwogIFtERVBTXSBhcmNoL2kzODYvaGNpL2NvbW1hbmRzL3B4ZV9jbWQuYwog
IFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3N5c2xpbnV4L2NvbWJvb3RfcmVzb2x2LmMK
ICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb20zMl9jYWxsLmMKICBb
REVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9zeXNsaW51eC9jb21ib290X2NhbGwuYwogIFtE
RVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZXBhcmVudC9weGVwYXJlbnRfZGhjcC5jCiAg
W0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlcGFyZW50L3B4ZXBhcmVudC5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV91ZHAuYwogIFtERVBTXSBhcmNoL2kz
ODYvaW50ZXJmYWNlL3B4ZS9weGVfdW5kaS5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZh
Y2UvcHhlL3B4ZV9sb2FkZXIuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZS9w
eGVfZXhpdF9ob29rLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX3By
ZWJvb3QuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3B4ZS9weGVfdGZ0cC5jCiAg
W0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcHhlL3B4ZV9maWxlLmMKICBbREVQU10gYXJj
aC9pMzg2L2ludGVyZmFjZS9weGUvcHhlX2NhbGwuYwogIFtERVBTXSBhcmNoL2kzODYvaW50
ZXJmYWNlL3BjYmlvcy9iaW9zX3NtYmlvcy5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZh
Y2UvcGNiaW9zL21lbXRvcF91bWFsbG9jLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFj
ZS9wY2Jpb3MvYmlvc2ludC5jCiAgW0RFUFNdIGFyY2gvaTM4Ni9pbnRlcmZhY2UvcGNiaW9z
L2Jpb3NfdGltZXIuYwogIFtERVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3BjYmlvcy9wY2li
aW9zLmMKICBbREVQU10gYXJjaC9pMzg2L2ludGVyZmFjZS9wY2Jpb3MvaW50MTMuYwogIFtE
RVBTXSBhcmNoL2kzODYvaW50ZXJmYWNlL3BjYmlvcy9iaW9zX25hcC5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9pbWFnZS9jb21ib290LmMKICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2VsZmJv
b3QuYwogIFtERVBTXSBhcmNoL2kzODYvaW1hZ2UvYm9vdHNlY3Rvci5jCiAgW0RFUFNdIGFy
Y2gvaTM4Ni9pbWFnZS9tdWx0aWJvb3QuYwogIFtERVBTXSBhcmNoL2kzODYvaW1hZ2UvcHhl
X2ltYWdlLmMKICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2J6aW1hZ2UuYwogIFtERVBTXSBh
cmNoL2kzODYvaW1hZ2UvbmJpLmMKICBbREVQU10gYXJjaC9pMzg2L2ltYWdlL2NvbTMyLmMK
ICBbREVQU10gYXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9wbnBiaW9zLmMKICBbREVQU10g
YXJjaC9pMzg2L2Zpcm13YXJlL3BjYmlvcy9iaW9zX2NvbnNvbGUuYwogIFtERVBTXSBhcmNo
L2kzODYvZmlybXdhcmUvcGNiaW9zL2Zha2VlODIwLmMKICBbREVQU10gYXJjaC9pMzg2L2Zp
cm13YXJlL3BjYmlvcy9iYXNlbWVtLmMKICBbREVQU10gYXJjaC9pMzg2L2Zpcm13YXJlL3Bj
Ymlvcy9tZW1tYXAuYwogIFtERVBTXSBhcmNoL2kzODYvZmlybXdhcmUvcGNiaW9zL2hpZGVt
ZW0uYwogIFtERVBTXSBhcmNoL2kzODYvdHJhbnNpdGlvbnMvbGlicm1fbWdtdC5jCiAgW0RF
UFNdIGFyY2gvaTM4Ni9jb3JlL2R1bXByZWdzLmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUv
cmVsb2NhdGUuYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS94ODZfaW8uYwogIFtERVBTXSBh
cmNoL2kzODYvY29yZS90aW1lcjIuYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS9ydW50aW1l
LmMKICBbREVQU10gYXJjaC9pMzg2L2NvcmUvcGljODI1OS5jCiAgW0RFUFNdIGFyY2gvaTM4
Ni9jb3JlL2dkYm1hY2guYwogIFtERVBTXSBhcmNoL2kzODYvY29yZS92aWRlb19zdWJyLmMK
ICBbREVQU10gYXJjaC9pMzg2L2NvcmUvYmFzZW1lbV9wYWNrZXQuYwogIFtERVBTXSBhcmNo
L2kzODYvY29yZS9yZHRzY190aW1lci5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdfcm9tcHJl
Zml4LmMKICBbREVQU10gY29uZmlnL2NvbmZpZy5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdf
ZmMuYwogIFtERVBTXSBjb25maWcvY29uZmlnX2V0aGVybmV0LmMKICBbREVQU10gY29uZmln
L2NvbmZpZ19uZXQ4MDIxMS5jCiAgW0RFUFNdIGNvbmZpZy9jb25maWdfaW5maW5pYmFuZC5j
CiAgW0RFUFNdIHVzci9hdXRvYm9vdC5jCiAgW0RFUFNdIHVzci9pZm1nbXQuYwogIFtERVBT
XSB1c3IvZGhjcG1nbXQuYwogIFtERVBTXSB1c3IvcHhlbWVudS5jCiAgW0RFUFNdIHVzci9p
bWdtZ210LmMKICBbREVQU10gdXNyL3Byb21wdC5jCiAgW0RFUFNdIGhjaS9tdWN1cnNlcy9r
Yi5jCiAgW0RFUFNdIGhjaS90dWkvc2V0dGluZ3NfdWkuYwogIFtERVBTXSBoY2kvY29tbWFu
ZHMvaW1hZ2VfY21kLmMKICBbREVQU10gaGNpL2NvbW1hbmRzL2RpZ2VzdF9jbWQuYwogIFtE
RVBTXSBoY2kvY29tbWFuZHMvdGltZV9jbWQuYwogIFtERVBTXSBoY2kvY29tbWFuZHMvc2Fu
Ym9vdF9jbWQuYwogIFtERVBTXSB0ZXN0cy91bWFsbG9jX3Rlc3QuYwogIFtERVBTXSB0ZXN0
cy9ib2ZtX3Rlc3QuYwogIFtERVBTXSBpbnRlcmZhY2UvYm9mbS9ib2ZtLmMKICBbREVQU10g
aW50ZXJmYWNlL3NtYmlvcy9zbWJpb3MuYwogIFtERVBTXSBpbnRlcmZhY2Uvc21iaW9zL3Nt
Ymlvc19zZXR0aW5ncy5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZpX3NucC5jCiAgW0RF
UFNdIGludGVyZmFjZS9lZmkvZWZpX3BjaS5jCiAgW0RFUFNdIGludGVyZmFjZS9lZmkvZWZp
X2JvZm0uYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV91bWFsbG9jLmMKICBbREVQU10g
aW50ZXJmYWNlL2VmaS9lZmlfdGltZXIuYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV9z
bWJpb3MuYwogIFtERVBTXSBpbnRlcmZhY2UvZWZpL2VmaV9kcml2ZXIuYwogIFtERVBTXSBp
bnRlcmZhY2UvZWZpL2VmaV91YWNjZXNzLmMKICBbREVQU10gaW50ZXJmYWNlL2VmaS9lZmlf
aW8uYwogIFtERVBTXSBkcml2ZXJzL2luZmluaWJhbmQvbGluZGEuYwogIFtERVBTXSBkcml2
ZXJzL2luZmluaWJhbmQvaGVybW9uLmMKICBbREVQU10gZHJpdmVycy9pbmZpbmliYW5kL2Fy
YmVsLmMKICBbREVQU10gZHJpdmVycy9pbmZpbmliYW5kL3FpYjczMjIuYwogIFtERVBTXSBk
cml2ZXJzL2JpdGJhc2gvc3BpX2JpdC5jCiAgW0RFUFNdIGRyaXZlcnMvYml0YmFzaC9pMmNf
Yml0LmMKICBbREVQU10gZHJpdmVycy9udnMvc3BpLmMKICBbREVQU10gZHJpdmVycy9udnMv
bnZzdnBkLmMKICBbREVQU10gZHJpdmVycy9udnMvdGhyZWV3aXJlLmMKICBbREVQU10gZHJp
dmVycy9ibG9jay9pYmZ0LmMKICBbREVQU10gZHJpdmVycy9ibG9jay9hdGEuYwogIFtERVBT
XSBkcml2ZXJzL2Jsb2NrL3NycC5jCiAgW0RFUFNdIGRyaXZlcnMvYmxvY2svc2NzaS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2VmaS9zbnBuZXQuYwogIFtERVBTXSBkcml2ZXJzL25ldC92
eGdlL3Z4Z2VfdHJhZmZpYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3Z4Z2UvdnhnZS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L3Z4Z2UvdnhnZV9jb25maWcuYwogIFtERVBTXSBkcml2ZXJz
L25ldC92eGdlL3Z4Z2VfbWFpbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9h
dGg5a19pbml0LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAw
M19tYWMuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAzX2Nh
bGliLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2VlcHJvbV85Mjg3
LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrLmMKICBbREVQU10gZHJp
dmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2NvbW1vbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0
L2F0aC9hdGg5ay9hdGg5a19hcjkwMDJfaHcuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgv
YXRoOWsvYXRoOWtfY2FsaWIuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRo
OWtfZWVwcm9tXzRrLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2Vl
cHJvbV9kZWYuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfbWFjLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX2FyOTAwM19lZXByb20uYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX21hYy5jCiAgW0RF
UFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjkwMDJfY2FsaWIuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfYXI5MDAyX3BoeS5jCiAgW0RFUFNdIGRy
aXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a194bWl0LmMKICBbREVQU10gZHJpdmVycy9uZXQv
YXRoL2F0aDlrL2F0aDlrX2FyNTAwOF9waHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgv
YXRoOWsvYXRoOWtfYXI5MDAzX3BoeS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5
ay9hdGg5a19hbmkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfbWFp
bi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg5ay9hdGg5a19hcjkwMDNfaHcuYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoOWsvYXRoOWtfZWVwcm9tLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvYXRoL2F0aDlrL2F0aDlrX3JlY3YuYwogIFtERVBTXSBkcml2ZXJzL25l
dC9hdGgvYXRoOWsvYXRoOWtfaHcuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsv
YXRoNWtfcmVzZXQuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWsuYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfYXR0YWNoLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX3Jma2lsbC5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2F0aC9hdGg1ay9hdGg1a19ncGlvLmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0
aDVrL2F0aDVrX3BoeS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19p
bml0dmFscy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19kbWEuYwog
IFtERVBTXSBkcml2ZXJzL25ldC9hdGgvYXRoNWsvYXRoNWtfcGN1LmMKICBbREVQU10gZHJp
dmVycy9uZXQvYXRoL2F0aDVrL2F0aDVrX2Rlc2MuYwogIFtERVBTXSBkcml2ZXJzL25ldC9h
dGgvYXRoNWsvYXRoNWtfcWN1LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aDVrL2F0
aDVrX2VlcHJvbS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2F0aC9hdGg1ay9hdGg1a19jYXBz
LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aF9ody5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2F0aC9hdGhfa2V5LmMKICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aF9tYWluLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvYXRoL2F0aF9yZWdkLmMKICBbREVQU10gZHJpdmVycy9u
ZXQvcnRsODE4eC9ydGw4MTgwX2dyZjUxMDEuYwogIFtERVBTXSBkcml2ZXJzL25ldC9ydGw4
MTh4L3J0bDgxODBfbWF4MjgyMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0bDgxOHgvcnRs
ODE4NS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0bDgxOHgvcnRsODE4eC5jCiAgW0RFUFNd
IGRyaXZlcnMvbmV0L3J0bDgxOHgvcnRsODE4MC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3J0
bDgxOHgvcnRsODE4NV9ydGw4MjI1LmMKICBbREVQU10gZHJpdmVycy9uZXQvcnRsODE4eC9y
dGw4MTgwX3NhMjQwMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3BoYW50b20vcGhhbnRvbS5j
CiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYnZmL2lnYnZmX21haW4uYwogIFtERVBTXSBkcml2
ZXJzL25ldC9pZ2J2Zi9pZ2J2Zl92Zi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYnZmL2ln
YnZmX21ieC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfODI1NzUuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9pZ2IvaWdiX21hYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9p
Z2JfcGh5LmMKICBbREVQU10gZHJpdmVycy9uZXQvaWdiL2lnYl9tYWluLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvaWdiL2lnYl9udm0uYwogIFtERVBTXSBkcml2ZXJzL25ldC9pZ2IvaWdi
X2FwaS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2lnYi9pZ2JfbWFuYWdlLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV84MDAwM2VzMmxhbi5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L2UxMDAwZS9lMTAwMGVfaWNoOGxhbi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2Ux
MDAwZS9lMTAwMGVfODI1NzEuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMGUvZTEwMDBl
X21hYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwZS9lMTAwMGVfcGh5LmMKICBbREVQ
U10gZHJpdmVycy9uZXQvZTEwMDBlL2UxMDAwZV9udm0uYwogIFtERVBTXSBkcml2ZXJzL25l
dC9lMTAwMGUvZTEwMDBlX21haW4uYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAw
MF84MjU0Mi5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2UxMDAwL2UxMDAwXzgyNTQwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfYXBpLmMKICBbREVQU10gZHJpdmVycy9u
ZXQvZTEwMDAvZTEwMDBfODI1NDMuYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAw
MF9udm0uYwogIFtERVBTXSBkcml2ZXJzL25ldC9lMTAwMC9lMTAwMF9tYWMuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9lMTAwMC9lMTAwMF9waHkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9l
MTAwMC9lMTAwMF9tYWluLmMKICBbREVQU10gZHJpdmVycy9uZXQvZTEwMDAvZTEwMDBfODI1
NDEuYwogIFtERVBTXSBkcml2ZXJzL25ldC9hbWQ4MTExZS5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2ptZS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ByaXNtMl9wY2kuYwogIFtERVBTXSBk
cml2ZXJzL25ldC8zYzU5NS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3ZpYS1yaGluZS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L3c4OWM4NDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9jczg5
eDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC9uZTJrX2lzYS5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2lwb2liLmMKICBbREVQU10gZHJpdmVycy9uZXQvc2t5Mi5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L2F0bDFlLmMKICBbREVQU10gZHJpdmVycy9uZXQvbGVnYWN5LmMKICBbREVQU10g
ZHJpdmVycy9uZXQvZWVwcm8xMDAuYwogIFtERVBTXSBkcml2ZXJzL25ldC8zYzUxNS5jCiAg
W0RFUFNdIGRyaXZlcnMvbmV0L2JueDIuYwogIFtERVBTXSBkcml2ZXJzL25ldC9kbWZlLmMK
ICBbREVQU10gZHJpdmVycy9uZXQvbnM4MzkwLmMKICBbREVQU10gZHJpdmVycy9uZXQvbnM4
MzgyMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3BjbmV0MzIuYwogIFtERVBTXSBkcml2ZXJz
L25ldC8zYzUwOS1laXNhLmMKICBbREVQU10gZHJpdmVycy9uZXQvdGczLmMKICBbREVQU10g
ZHJpdmVycy9uZXQvM2M1eDkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9zbWM5MDAwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvdmlydGlvLW5ldC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L2V0
aGVyZmFicmljLmMKICBbREVQU10gZHJpdmVycy9uZXQvc2tnZS5jCiAgW0RFUFNdIGRyaXZl
cnMvbmV0L3NpczE5MC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L25hdHNlbWkuYwogIFtERVBT
XSBkcml2ZXJzL25ldC9iNDQuYwogIFtERVBTXSBkcml2ZXJzL25ldC9mb3JjZWRldGguYwog
IFtERVBTXSBkcml2ZXJzL25ldC9wcmlzbTJfcGx4LmMKICBbREVQU10gZHJpdmVycy9uZXQv
c3VuZGFuY2UuYwogIFtERVBTXSBkcml2ZXJzL25ldC9ydGw4MTM5LmMKICBbREVQU10gZHJp
dmVycy9uZXQvZXBpYzEwMC5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0LzNjOTB4LmMKICBbREVQ
U10gZHJpdmVycy9uZXQvZGF2aWNvbS5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0LzNjNTA5LmMK
ICBbREVQU10gZHJpdmVycy9uZXQvM2M1MjkuYwogIFtERVBTXSBkcml2ZXJzL25ldC9tdGQ4
MHguYwogIFtERVBTXSBkcml2ZXJzL25ldC9teXJpMTBnZS5jCiAgW0RFUFNdIGRyaXZlcnMv
bmV0L2VlcHJvLmMKICBbREVQU10gZHJpdmVycy9uZXQvdmlhLXZlbG9jaXR5LmMKICBbREVQ
U10gZHJpdmVycy9uZXQvcG5pYy5jCiAgW0RFUFNdIGRyaXZlcnMvbmV0L3R1bGlwLmMKICBb
REVQU10gZHJpdmVycy9uZXQvc2lzOTAwLmMKICBbREVQU10gZHJpdmVycy9uZXQvcjgxNjku
YwogIFtERVBTXSBkcml2ZXJzL25ldC90bGFuLmMKICBbREVQU10gZHJpdmVycy9idXMvcGNp
LmMKICBbREVQU10gZHJpdmVycy9idXMvaXNhcG5wLmMKICBbREVQU10gZHJpdmVycy9idXMv
dmlydGlvLXJpbmcuYwogIFtERVBTXSBkcml2ZXJzL2J1cy92aXJ0aW8tcGNpLmMKICBbREVQ
U10gZHJpdmVycy9idXMvaXNhLmMKICBbREVQU10gZHJpdmVycy9idXMvcGNpZXh0cmEuYwog
IFtERVBTXSBkcml2ZXJzL2J1cy9wY2liYWNrdXAuYwogIFtERVBTXSBkcml2ZXJzL2J1cy9w
Y2l2cGQuYwogIFtERVBTXSBkcml2ZXJzL2J1cy9tY2EuYwogIFtERVBTXSBkcml2ZXJzL2J1
cy9laXNhLmMKICBbREVQU10gaW1hZ2Uvc2NyaXB0LmMKICBbREVQU10gaW1hZ2UvZWxmLmMK
ICBbREVQU10gaW1hZ2UvZWZpX2ltYWdlLmMKICBbREVQU10gaW1hZ2Uvc2VnbWVudC5jCiAg
W0RFUFNdIGltYWdlL2VtYmVkZGVkLmMKICBbREVQU10gbmV0LzgwMjExL25ldDgwMjExLmMK
ICBbREVQU10gbmV0L2luZmluaWJhbmQvaWJfbWkuYwogIFtERVBTXSBuZXQvaW5maW5pYmFu
ZC9pYl9zbWMuYwogIFtERVBTXSBuZXQvaW5maW5pYmFuZC9pYl9zbWEuYwogIFtERVBTXSBu
ZXQvaW5maW5pYmFuZC9pYl9zcnAuYwogIFtERVBTXSBuZXQvdWRwL2RoY3AuYwogIFtERVBT
XSBuZXQvdWRwL2Rucy5jCiAgW0RFUFNdIG5ldC91ZHAvc2xhbS5jCiAgW0RFUFNdIG5ldC91
ZHAvdGZ0cC5jCiAgW0RFUFNdIG5ldC91ZHAvc3lzbG9nLmMKICBbREVQU10gbmV0L3RjcC9o
dHRwcy5jCiAgW0RFUFNdIG5ldC90Y3AvaXNjc2kuYwogIFtERVBTXSBuZXQvdGNwL2Z0cC5j
CiAgW0RFUFNdIG5ldC90Y3AvaHR0cC5jCiAgW0RFUFNdIG5ldC9mYWtlZGhjcC5jCiAgW0RF
UFNdIG5ldC9uZXRkZXZfc2V0dGluZ3MuYwogIFtERVBTXSBuZXQvZmNwLmMKICBbREVQU10g
bmV0L2Zjb2UuYwogIFtERVBTXSBuZXQvdGNwLmMKICBbREVQU10gbmV0L2FvZS5jCiAgW0RF
UFNdIG5ldC92bGFuLmMKICBbREVQU10gbmV0L2luZmluaWJhbmQuYwogIFtERVBTXSBuZXQv
aXB2NC5jCiAgW0RFUFNdIG5ldC9kaGNwcGt0LmMKICBbREVQU10gbmV0L2NhY2hlZGhjcC5j
CiAgW0RFUFNdIG5ldC9uZXRkZXZpY2UuYwogIFtERVBTXSBuZXQvcmV0cnkuYwogIFtERVBT
XSBuZXQvZGhjcG9wdHMuYwogIFtERVBTXSBuZXQvZmMuYwogIFtERVBTXSBjb3JlL252by5j
CiAgW0RFUFNdIGNvcmUvZ2V0a2V5LmMKICBbREVQU10gY29yZS9zZXR0aW5ncy5jCiAgW0RF
UFNdIGNvcmUvbWFpbi5jCiAgW0RFUFNdIGNvcmUvZG93bmxvYWRlci5jCiAgW0RFUFNdIGNv
cmUvbnVsbF9uYXAuYwogIFtERVBTXSBjb3JlL3BjX2tiZC5jCiAgW0RFUFNdIGNvcmUvcG9z
aXhfaW8uYwogIFtERVBTXSBjb3JlL2dkYnVkcC5jCiAgW0RFUFNdIGNvcmUvY29uc29sZS5j
CiAgW0RFUFNdIGNvcmUvc2VyaWFsLmMKICBbREVQU10gY29yZS9ibG9ja2Rldi5jCiAgW0RF
UFNdIGNvcmUvdGltZXIuYwogIFtERVBTXSBjb3JlL21pc2MuYwogIFtERVBTXSBjb3JlL2Rl
YnVnLmMKICBbREVQU10gY29yZS9mbnJlYy5jCiAgW0RFUFNdIGNvcmUvbWFsbG9jLmMKICBb
REVQU10gY29yZS9leGVjLmMKICBbREVQU10gY29yZS9tb25vam9iLmMKICBbREVQU10gY29y
ZS9udWxsX3NhbmJvb3QuYwogIFtERVBTXSBjb3JlL3JhbmRvbS5jCiAgW0RFUFNdIGNvcmUv
cGFyc2VvcHQuYwogIFtERVBTXSBjb3JlL2ltYWdlLmMKZ21ha2VbN106IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRoZXJib290L2lweGUv
c3JjJwpnbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvZmlybXdhcmUvZXRoZXJib290L2lweGUvc3JjJwogIFtCVUlMRF0gYmluL19fdWRpdm1v
ZGRpNC5vCiAgW0JVSUxEXSBiaW4vX19kaXZkaTMubwogIFtCVUlMRF0gYmluL2ljYy5vCiAg
W0JVSUxEXSBiaW4vbWVtY3B5Lm8KICBbQlVJTERdIGJpbi9fX21vZGRpMy5vCiAgW0JVSUxE
XSBiaW4vX191ZGl2ZGkzLm8KICBbQlVJTERdIGJpbi9fX3Vtb2RkaTMubwogIFtCVUlMRF0g
YmluL3BjbWNpYS5vCiAgW0JVSUxEXSBiaW4vc2VyaWFsX2NvbnNvbGUubwogIFtCVUlMRF0g
YmluL3V1aWQubwogIFtCVUlMRF0gYmluL3JlZmNudC5vCiAgW0JVSUxEXSBiaW4vYXNzZXJ0
Lm8KICBbQlVJTERdIGJpbi9iYXNlMTYubwogIFtCVUlMRF0gYmluL3N0cmluZy5vCiAgW0JV
SUxEXSBiaW4vaW1hZ2UubwogIFtCVUlMRF0gYmluL2lvYnVmLm8KICBbQlVJTERdIGJpbi9y
ZXNvbHYubwogIFtCVUlMRF0gYmluL3BhcnNlb3B0Lm8KICBbQlVJTERdIGJpbi9yYW5kb20u
bwogIFtCVUlMRF0gYmluL3N0cmluZ2V4dHJhLm8KICBbQlVJTERdIGJpbi9udWxsX3NhbmJv
b3QubwogIFtCVUlMRF0gYmluL21vbm9qb2IubwogIFtCVUlMRF0gYmluL2V4ZWMubwogIFtC
VUlMRF0gYmluL2JpdG1hcC5vCiAgW0JVSUxEXSBiaW4vYmFzZTY0Lm8KICBbQlVJTERdIGJp
bi9kZXZpY2UubwogIFtCVUlMRF0gYmluL2Fuc2llc2MubwogIFtCVUlMRF0gYmluL21hbGxv
Yy5vCiAgW0JVSUxEXSBiaW4vZm5yZWMubwogIFtCVUlMRF0gYmluL2RlYnVnLm8KICBbQlVJ
TERdIGJpbi9nZGJzZXJpYWwubwogIFtCVUlMRF0gYmluL3Byb2Nlc3MubwogIFtCVUlMRF0g
YmluL2pvYi5vCiAgW0JVSUxEXSBiaW4vZXJybm8ubwogIFtCVUlMRF0gYmluL2k4MjM2NS5v
CiAgW0JVSUxEXSBiaW4vY3d1cmkubwogIFtCVUlMRF0gYmluL21pc2MubwogIFtCVUlMRF0g
YmluL3RpbWVyLm8KICBbQlVJTERdIGJpbi9jcGlvLm8KICBbQlVJTERdIGJpbi9ibG9ja2Rl
di5vCiAgW0JVSUxEXSBiaW4vdXJpLm8KICBbQlVJTERdIGJpbi9hY3BpLm8KICBbQlVJTERd
IGJpbi9zZXJpYWwubwogIFtCVUlMRF0gYmluL29wZW4ubwogIFtCVUlMRF0gYmluL2NvbnNv
bGUubwogIFtCVUlMRF0gYmluL2dkYnVkcC5vCiAgW0JVSUxEXSBiaW4vcG9zaXhfaW8ubwog
IFtCVUlMRF0gYmluL3BjX2tiZC5vCiAgW0JVSUxEXSBiaW4veGZlci5vCiAgW0JVSUxEXSBi
aW4vbnVsbF9uYXAubwogIFtCVUlMRF0gYmluL3ZzcHJpbnRmLm8KICBbQlVJTERdIGJpbi9i
aXRvcHMubwogIFtCVUlMRF0gYmluL2h3Lm8KICBbQlVJTERdIGJpbi9kb3dubG9hZGVyLm8K
ICBbQlVJTERdIGJpbi9tYWluLm8KICBbQlVJTERdIGJpbi9zZXR0aW5ncy5vCiAgW0JVSUxE
XSBiaW4vc3RydG91bGwubwogIFtCVUlMRF0gYmluL2luaXQubwogIFtCVUlMRF0gYmluL2Vk
ZC5vCiAgW0JVSUxEXSBiaW4vbGluZWJ1Zi5vCiAgW0JVSUxEXSBiaW4vZ2Ric3R1Yi5vCiAg
W0JVSUxEXSBiaW4vYXNwcmludGYubwogIFtCVUlMRF0gYmluL2dldGtleS5vCiAgW0JVSUxE
XSBiaW4vZ2V0b3B0Lm8KICBbQlVJTERdIGJpbi9idGV4dC5vCiAgW0JVSUxEXSBiaW4vaW50
ZXJmYWNlLm8KICBbQlVJTERdIGJpbi9kZWJ1Z19tZDUubwogIFtCVUlMRF0gYmluL252by5v
CiAgW0JVSUxEXSBiaW4vYmFzZW5hbWUubwogIFtCVUlMRF0gYmluL2N0eXBlLm8KICBbQlVJ
TERdIGJpbi9mYy5vCiAgW0JVSUxEXSBiaW4vZGhjcG9wdHMubwogIFtCVUlMRF0gYmluL3Vk
cC5vCiAgW0JVSUxEXSBiaW4vaWNtcC5vCiAgW0JVSUxEXSBiaW4vcmV0cnkubwogIFtCVUlM
RF0gYmluL25ldGRldmljZS5vCiAgW0JVSUxEXSBiaW4vY2FjaGVkaGNwLm8KICBbQlVJTERd
IGJpbi9kaGNwcGt0Lm8KICBbQlVJTERdIGJpbi9uZHAubwogIFtCVUlMRF0gYmluL3Rscy5v
CiAgW0JVSUxEXSBiaW4vZXRoX3Nsb3cubwogIFtCVUlMRF0gYmluL2lwdjQubwogIFtCVUlM
RF0gYmluL2luZmluaWJhbmQubwogIFtCVUlMRF0gYmluL251bGxuZXQubwogIFtCVUlMRF0g
YmluL3ZsYW4ubwogIFtCVUlMRF0gYmluL3JhcnAubwogIFtCVUlMRF0gYmluL2FvZS5vCiAg
W0JVSUxEXSBiaW4vaXB2Ni5vCiAgW0JVSUxEXSBiaW4vdGNwaXAubwogIFtCVUlMRF0gYmlu
L2ZjZWxzLm8KICBbQlVJTERdIGJpbi9ldGhlcm5ldC5vCiAgW0JVSUxEXSBiaW4vYXJwLm8K
ICBbQlVJTERdIGJpbi9taWkubwogIFtCVUlMRF0gYmluL3RjcC5vCiAgW0JVSUxEXSBiaW4v
aW9icGFkLm8KICBbQlVJTERdIGJpbi9mY29lLm8KICBbQlVJTERdIGJpbi9mY3AubwogIFtC
VUlMRF0gYmluL25ldGRldl9zZXR0aW5ncy5vCiAgW0JVSUxEXSBiaW4vaWNtcHY2Lm8KICBb
QlVJTERdIGJpbi9mYWtlZGhjcC5vCiAgW0JVSUxEXSBiaW4vZmNucy5vCiAgW0JVSUxEXSBi
aW4vZWFwb2wubwogIFtCVUlMRF0gYmluL2h0dHAubwogIFtCVUlMRF0gYmluL2Z0cC5vCiAg
W0JVSUxEXSBiaW4vaXNjc2kubwogIFtCVUlMRF0gYmluL2h0dHBzLm8KICBbQlVJTERdIGJp
bi9zeXNsb2cubwogIFtCVUlMRF0gYmluL3RmdHAubwogIFtCVUlMRF0gYmluL3NsYW0ubwog
IFtCVUlMRF0gYmluL2Rucy5vCiAgW0JVSUxEXSBiaW4vZGhjcC5vCiAgW0JVSUxEXSBiaW4v
aWJfbWNhc3QubwogIFtCVUlMRF0gYmluL2liX3NycC5vCiAgW0JVSUxEXSBiaW4vaWJfY21y
Yy5vCiAgW0JVSUxEXSBiaW4vaWJfc21hLm8KICBbQlVJTERdIGJpbi9pYl9wYXRocmVjLm8K
ICBbQlVJTERdIGJpbi9pYl9zbWMubwogIFtCVUlMRF0gYmluL2liX3BhY2tldC5vCiAgW0JV
SUxEXSBiaW4vaWJfY20ubwogIFtCVUlMRF0gYmluL2liX21pLm8KICBbQlVJTERdIGJpbi93
cGFfdGtpcC5vCiAgW0JVSUxEXSBiaW4vd3BhX3Bzay5vCiAgW0JVSUxEXSBiaW4vd2VwLm8K
ICBbQlVJTERdIGJpbi9zZWM4MDIxMS5vCiAgW0JVSUxEXSBiaW4vbmV0ODAyMTEubwogIFtC
VUlMRF0gYmluL3dwYV9jY21wLm8KICBbQlVJTERdIGJpbi93cGEubwogIFtCVUlMRF0gYmlu
L3JjODAyMTEubwogIFtCVUlMRF0gYmluL2VtYmVkZGVkLm8KICBbQlVJTERdIGJpbi9zZWdt
ZW50Lm8KICBbQlVJTERdIGJpbi9lZmlfaW1hZ2UubwogIFtCVUlMRF0gYmluL2VsZi5vCiAg
W0JVSUxEXSBiaW4vc2NyaXB0Lm8KICBbQlVJTERdIGJpbi9laXNhLm8KICBbQlVJTERdIGJp
bi9tY2EubwogIFtCVUlMRF0gYmluL3BjaXZwZC5vCiAgW0JVSUxEXSBiaW4vcGNpYmFja3Vw
Lm8KICBbQlVJTERdIGJpbi9wY2lleHRyYS5vCiAgW0JVSUxEXSBiaW4vaXNhX2lkcy5vCiAg
W0JVSUxEXSBiaW4vaXNhLm8KICBbQlVJTERdIGJpbi92aXJ0aW8tcGNpLm8KICBbQlVJTERd
IGJpbi92aXJ0aW8tcmluZy5vCiAgW0JVSUxEXSBiaW4vaXNhcG5wLm8KICBbQlVJTERdIGJp
bi9wY2kubwogIFtCVUlMRF0gYmluLzNjNTAzLm8KICBbQlVJTERdIGJpbi90bGFuLm8KICBb
QlVJTERdIGJpbi9yODE2OS5vCiAgW0JVSUxEXSBiaW4vc2lzOTAwLm8KICBbQlVJTERdIGJp
bi90dWxpcC5vCiAgW0JVSUxEXSBiaW4vcG5pYy5vCiAgW0JVSUxEXSBiaW4vdmlhLXZlbG9j
aXR5Lm8KICBbQlVJTERdIGJpbi9uZS5vCiAgW0JVSUxEXSBiaW4vZWVwcm8ubwogIFtCVUlM
RF0gYmluL215cmkxMGdlLm8KICBbQlVJTERdIGJpbi9tdGQ4MHgubwogIFtCVUlMRF0gYmlu
LzNjNTI5Lm8KICBbQlVJTERdIGJpbi8zYzUwOS5vCiAgW0JVSUxEXSBiaW4vZGF2aWNvbS5v
CiAgW0JVSUxEXSBiaW4vM2M5MHgubwogIFtCVUlMRF0gYmluL2VwaWMxMDAubwogIFtCVUlM
RF0gYmluL3J0bDgxMzkubwogIFtCVUlMRF0gYmluL3N1bmRhbmNlLm8KICBbQlVJTERdIGJp
bi9wcmlzbTJfcGx4Lm8KICBbQlVJTERdIGJpbi9mb3JjZWRldGgubwogIFtCVUlMRF0gYmlu
L2I0NC5vCiAgW0JVSUxEXSBiaW4vbmF0c2VtaS5vCiAgW0JVSUxEXSBiaW4vc2lzMTkwLm8K
ICBbQlVJTERdIGJpbi9za2dlLm8KICBbQlVJTERdIGJpbi93ZC5vCiAgW0JVSUxEXSBiaW4v
ZXRoZXJmYWJyaWMubwogIFtCVUlMRF0gYmluL3ZpcnRpby1uZXQubwogIFtCVUlMRF0gYmlu
L3NtYzkwMDAubwogIFtCVUlMRF0gYmluLzNjNXg5Lm8KICBbQlVJTERdIGJpbi90ZzMubwog
IFtCVUlMRF0gYmluLzNjNTA5LWVpc2EubwogIFtCVUlMRF0gYmluL3BjbmV0MzIubwogIFtC
VUlMRF0gYmluL25zODM4MjAubwogIFtCVUlMRF0gYmluL25zODM5MC5vCiAgW0JVSUxEXSBi
aW4vZG1mZS5vCiAgW0JVSUxEXSBiaW4vYm54Mi5vCiAgW0JVSUxEXSBiaW4vM2M1MTUubwog
IFtCVUlMRF0gYmluL2VlcHJvMTAwLm8KICBbQlVJTERdIGJpbi9sZWdhY3kubwogIFtCVUlM
RF0gYmluL2F0bDFlLm8KICBbQlVJTERdIGJpbi9za3kyLm8KICBbQlVJTERdIGJpbi9pcG9p
Yi5vCiAgW0JVSUxEXSBiaW4vbmUya19pc2EubwogIFtCVUlMRF0gYmluL2NzODl4MC5vCiAg
W0JVSUxEXSBiaW4vdzg5Yzg0MC5vCiAgW0JVSUxEXSBiaW4vdmlhLXJoaW5lLm8KICBbQlVJ
TERdIGJpbi8zYzU5NS5vCiAgW0JVSUxEXSBiaW4vcHJpc20yX3BjaS5vCiAgW0JVSUxEXSBi
aW4vam1lLm8KICBbQlVJTERdIGJpbi9hbWQ4MTExZS5vCiAgW0JVSUxEXSBiaW4vZGVwY2Eu
bwogIFtCVUlMRF0gYmluL2UxMDAwXzgyNTQxLm8KICBbQlVJTERdIGJpbi9lMTAwMF9tYWlu
Lm8KICBbQlVJTERdIGJpbi9lMTAwMC5vCiAgW0JVSUxEXSBiaW4vZTEwMDBfcGh5Lm8KICBb
QlVJTERdIGJpbi9lMTAwMF9tYWMubwogIFtCVUlMRF0gYmluL2UxMDAwX252bS5vCiAgW0JV
SUxEXSBiaW4vZTEwMDBfODI1NDMubwogIFtCVUlMRF0gYmluL2UxMDAwX21hbmFnZS5vCiAg
W0JVSUxEXSBiaW4vZTEwMDBfYXBpLm8KICBbQlVJTERdIGJpbi9lMTAwMF84MjU0MC5vCiAg
W0JVSUxEXSBiaW4vZTEwMDBfODI1NDIubwogIFtCVUlMRF0gYmluL2UxMDAwZV9tYWluLm8K
ICBbQlVJTERdIGJpbi9lMTAwMGVfbnZtLm8KICBbQlVJTERdIGJpbi9lMTAwMGVfcGh5Lm8K
ICBbQlVJTERdIGJpbi9lMTAwMGVfbWFjLm8KICBbQlVJTERdIGJpbi9lMTAwMGUubwogIFtC
VUlMRF0gYmluL2UxMDAwZV84MjU3MS5vCiAgW0JVSUxEXSBiaW4vZTEwMDBlX21hbmFnZS5v
CiAgW0JVSUxEXSBiaW4vZTEwMDBlX2ljaDhsYW4ubwogIFtCVUlMRF0gYmluL2UxMDAwZV84
MDAwM2VzMmxhbi5vCiAgW0JVSUxEXSBiaW4vaWdiX21hbmFnZS5vCiAgW0JVSUxEXSBiaW4v
aWdiX2FwaS5vCiAgW0JVSUxEXSBiaW4vaWdiX252bS5vCiAgW0JVSUxEXSBiaW4vaWdiX21h
aW4ubwogIFtCVUlMRF0gYmluL2lnYl9waHkubwogIFtCVUlMRF0gYmluL2lnYl9tYWMubwog
IFtCVUlMRF0gYmluL2lnYi5vCiAgW0JVSUxEXSBiaW4vaWdiXzgyNTc1Lm8KICBbQlVJTERd
IGJpbi9pZ2J2Zl9tYngubwogIFtCVUlMRF0gYmluL2lnYnZmX3ZmLm8KICBbQlVJTERdIGJp
bi9pZ2J2Zl9tYWluLm8KICBbQlVJTERdIGJpbi9waGFudG9tLm8KICBbQlVJTERdIGJpbi9y
dGw4MTgwX3NhMjQwMC5vCiAgW0JVSUxEXSBiaW4vcnRsODE4NV9ydGw4MjI1Lm8KICBbQlVJ
TERdIGJpbi9ydGw4MTgwLm8KICBbQlVJTERdIGJpbi9ydGw4MTh4Lm8KICBbQlVJTERdIGJp
bi9ydGw4MTg1Lm8KICBbQlVJTERdIGJpbi9ydGw4MTgwX21heDI4MjAubwogIFtCVUlMRF0g
YmluL3J0bDgxODBfZ3JmNTEwMS5vCiAgW0JVSUxEXSBiaW4vYXRoX3JlZ2QubwogIFtCVUlM
RF0gYmluL2F0aF9tYWluLm8KICBbQlVJTERdIGJpbi9hdGhfa2V5Lm8KICBbQlVJTERdIGJp
bi9hdGhfaHcubwogIFtCVUlMRF0gYmluL2F0aDVrX2NhcHMubwogIFtCVUlMRF0gYmluL2F0
aDVrX2VlcHJvbS5vCiAgW0JVSUxEXSBiaW4vYXRoNWtfcWN1Lm8KICBbQlVJTERdIGJpbi9h
dGg1a19kZXNjLm8KICBbQlVJTERdIGJpbi9hdGg1a19wY3UubwogIFtCVUlMRF0gYmluL2F0
aDVrX2RtYS5vCiAgW0JVSUxEXSBiaW4vYXRoNWtfaW5pdHZhbHMubwogIFtCVUlMRF0gYmlu
L2F0aDVrX3BoeS5vCiAgW0JVSUxEXSBiaW4vYXRoNWtfZ3Bpby5vCiAgW0JVSUxEXSBiaW4v
YXRoNWtfcmZraWxsLm8KICBbQlVJTERdIGJpbi9hdGg1a19hdHRhY2gubwogIFtCVUlMRF0g
YmluL2F0aDVrLm8KICBbQlVJTERdIGJpbi9hdGg1a19yZXNldC5vCiAgW0JVSUxEXSBiaW4v
YXRoOWtfaHcubwogIFtCVUlMRF0gYmluL2F0aDlrX3JlY3YubwogIFtCVUlMRF0gYmluL2F0
aDlrX2VlcHJvbS5vCiAgW0JVSUxEXSBiaW4vYXRoOWtfYXI5MDAzX2h3Lm8KICBbQlVJTERd
IGJpbi9hdGg5a19tYWluLm8KICBbQlVJTERdIGJpbi9hdGg5a19hbmkubwogIFtCVUlMRF0g
YmluL2F0aDlrX2FyOTAwM19waHkubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyNTAwOF9waHku
bwogIFtCVUlMRF0gYmluL2F0aDlrX3htaXQubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyOTAw
Ml9waHkubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyOTAwMl9jYWxpYi5vCiAgW0JVSUxEXSBi
aW4vYXRoOWtfYXI5MDAyX21hYy5vCiAgW0JVSUxEXSBiaW4vYXRoOWtfYXI5MDAzX2VlcHJv
bS5vCiAgW0JVSUxEXSBiaW4vYXRoOWtfbWFjLm8KICBbQlVJTERdIGJpbi9hdGg5a19lZXBy
b21fZGVmLm8KICBbQlVJTERdIGJpbi9hdGg5a19lZXByb21fNGsubwogIFtCVUlMRF0gYmlu
L2F0aDlrX2NhbGliLm8KICBbQlVJTERdIGJpbi9hdGg5a19hcjkwMDJfaHcubwogIFtCVUlM
RF0gYmluL2F0aDlrX2NvbW1vbi5vCiAgW0JVSUxEXSBiaW4vYXRoOWsubwogIFtCVUlMRF0g
YmluL2F0aDlrX2VlcHJvbV85Mjg3Lm8KICBbQlVJTERdIGJpbi9hdGg5a19hcjkwMDNfY2Fs
aWIubwogIFtCVUlMRF0gYmluL2F0aDlrX2FyOTAwM19tYWMubwogIFtCVUlMRF0gYmluL2F0
aDlrX2luaXQubwogIFtCVUlMRF0gYmluL3Z4Z2VfbWFpbi5vCiAgW0JVSUxEXSBiaW4vdnhn
ZV9jb25maWcubwogIFtCVUlMRF0gYmluL3Z4Z2UubwogIFtCVUlMRF0gYmluL3Z4Z2VfdHJh
ZmZpYy5vCiAgW0JVSUxEXSBiaW4vc25wb25seS5vCiAgW0JVSUxEXSBiaW4vc25wbmV0Lm8K
ICBbQlVJTERdIGJpbi9zY3NpLm8KICBbQlVJTERdIGJpbi9zcnAubwogIFtCVUlMRF0gYmlu
L2F0YS5vCiAgW0JVSUxEXSBiaW4vaWJmdC5vCiAgW0JVSUxEXSBiaW4vbnZzLm8KICBbQlVJ
TERdIGJpbi90aHJlZXdpcmUubwogIFtCVUlMRF0gYmluL252c3ZwZC5vCiAgW0JVSUxEXSBi
aW4vc3BpLm8KICBbQlVJTERdIGJpbi9pMmNfYml0Lm8KICBbQlVJTERdIGJpbi9zcGlfYml0
Lm8KICBbQlVJTERdIGJpbi9iaXRiYXNoLm8KICBbQlVJTERdIGJpbi9saW5kYV9mdy5vCiAg
W0JVSUxEXSBiaW4vcWliNzMyMi5vCiAgW0JVSUxEXSBiaW4vYXJiZWwubwogIFtCVUlMRF0g
YmluL2hlcm1vbi5vCiAgW0JVSUxEXSBiaW4vbGluZGEubwogIFtCVUlMRF0gYmluL2VmaV9p
by5vCiAgW0JVSUxEXSBiaW4vZWZpX3VhY2Nlc3MubwogIFtCVUlMRF0gYmluL2VmaV9pbml0
Lm8KICBbQlVJTERdIGJpbi9lZmlfZHJpdmVyLm8KICBbQlVJTERdIGJpbi9lZmlfc21iaW9z
Lm8KICBbQlVJTERdIGJpbi9lZmlfdGltZXIubwogIFtCVUlMRF0gYmluL2VmaV9zdHJpbmdz
Lm8KICBbQlVJTERdIGJpbi9lZmlfdW1hbGxvYy5vCiAgW0JVSUxEXSBiaW4vZWZpX2JvZm0u
bwogIFtCVUlMRF0gYmluL2VmaV9zdHJlcnJvci5vCiAgW0JVSUxEXSBiaW4vZWZpX3BjaS5v
CiAgW0JVSUxEXSBiaW4vZWZpX3NucC5vCiAgW0JVSUxEXSBiaW4vZWZpX2NvbnNvbGUubwog
IFtCVUlMRF0gYmluL3NtYmlvc19zZXR0aW5ncy5vCiAgW0JVSUxEXSBiaW4vc21iaW9zLm8K
ICBbQlVJTERdIGJpbi9ib2ZtLm8KICBbQlVJTERdIGJpbi9tZW1jcHlfdGVzdC5vCiAgW0JV
SUxEXSBiaW4vbGlzdF90ZXN0Lm8KICBbQlVJTERdIGJpbi90ZXN0Lm8KICBbQlVJTERdIGJp
bi91cmlfdGVzdC5vCiAgW0JVSUxEXSBiaW4vYm9mbV90ZXN0Lm8KICBbQlVJTERdIGJpbi91
bWFsbG9jX3Rlc3QubwogIFtCVUlMRF0gYmluL2xpbmVidWZfdGVzdC5vCiAgW0JVSUxEXSBi
aW4vY2hhcC5vCiAgW0JVSUxEXSBiaW4vbWQ1Lm8KICBbQlVJTERdIGJpbi94NTA5Lm8KICBb
QlVJTERdIGJpbi9zaGExZXh0cmEubwogIFtCVUlMRF0gYmluL2FyYzQubwogIFtCVUlMRF0g
YmluL2NyeXB0b19udWxsLm8KICBbQlVJTERdIGJpbi9jcmFuZG9tLm8KICBbQlVJTERdIGJp
bi9jcmMzMi5vCiAgW0JVSUxEXSBiaW4vaG1hYy5vCiAgW0JVSUxEXSBiaW4vYXNuMS5vCiAg
W0JVSUxEXSBiaW4vYXh0bHNfYWVzLm8KICBbQlVJTERdIGJpbi9hZXNfd3JhcC5vCiAgW0JV
SUxEXSBiaW4vYXh0bHNfc2hhMS5vCiAgW0JVSUxEXSBiaW4vY2JjLm8KICBbQlVJTERdIGJp
bi9hZXMubwogIFtCVUlMRF0gYmluL2JpZ2ludC5vCiAgW0JVSUxEXSBiaW4vcnNhLm8KICBb
QlVJTERdIGJpbi9zaGExLm8KICBbQlVJTERdIGJpbi9saW51eF9hcmdzLm8KICBbQlVJTERd
IGJpbi9zaGVsbC5vCiAgW0JVSUxEXSBiaW4vc3RyZXJyb3IubwogIFtCVUlMRF0gYmluL3Jl
YWRsaW5lLm8KICBbQlVJTERdIGJpbi9lZGl0c3RyaW5nLm8KICBbQlVJTERdIGJpbi93aXJl
bGVzc19lcnJvcnMubwogIFtCVUlMRF0gYmluL252b19jbWQubwogIFtCVUlMRF0gYmluL2Nv
bmZpZ19jbWQubwogIFtCVUlMRF0gYmluL2xvZ2luX2NtZC5vCiAgW0JVSUxEXSBiaW4vc2Fu
Ym9vdF9jbWQubwogIFtCVUlMRF0gYmluL2lmbWdtdF9jbWQubwogIFtCVUlMRF0gYmluL2dk
YnN0dWJfY21kLm8KICBbQlVJTERdIGJpbi9hdXRvYm9vdF9jbWQubwogIFtCVUlMRF0gYmlu
L3RpbWVfY21kLm8KICBbQlVJTERdIGJpbi9kaGNwX2NtZC5vCiAgW0JVSUxEXSBiaW4vcm91
dGVfY21kLm8KICBbQlVJTERdIGJpbi9kaWdlc3RfY21kLm8KICBbQlVJTERdIGJpbi9pbWFn
ZV9jbWQubwogIFtCVUlMRF0gYmluL2ZjbWdtdF9jbWQubwogIFtCVUlMRF0gYmluL2xvdGVz
dF9jbWQubwogIFtCVUlMRF0gYmluL2l3bWdtdF9jbWQubwogIFtCVUlMRF0gYmluL3ZsYW5f
Y21kLm8KICBbQlVJTERdIGJpbi9sb2dpbl91aS5vCiAgW0JVSUxEXSBiaW4vc2V0dGluZ3Nf
dWkubwogIFtCVUlMRF0gYmluL2FsZXJ0Lm8KICBbQlVJTERdIGJpbi9jbGVhci5vCiAgW0JV
SUxEXSBiaW4vZWRnaW5nLm8KICBbQlVJTERdIGJpbi93aW5hdHRycy5vCiAgW0JVSUxEXSBi
aW4vYW5zaV9zY3JlZW4ubwogIFtCVUlMRF0gYmluL3ByaW50X25hZHYubwogIFtCVUlMRF0g
YmluL3dpbmluaXQubwogIFtCVUlMRF0gYmluL211Y3Vyc2VzLm8KICBbQlVJTERdIGJpbi93
aW5kb3dzLm8KICBbQlVJTERdIGJpbi9wcmludC5vCiAgW0JVSUxEXSBiaW4vc2xrLm8KICBb
QlVJTERdIGJpbi9jb2xvdXIubwogIFtCVUlMRF0gYmluL2tiLm8KICBbQlVJTERdIGJpbi9l
ZGl0Ym94Lm8KICBbQlVJTERdIGJpbi9rZXltYXBfcHQubwogIFtCVUlMRF0gYmluL2tleW1h
cF9kay5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX2dyLm8KICBbQlVJTERdIGJpbi9rZXltYXBf
aWwubwogIFtCVUlMRF0gYmluL2tleW1hcF91cy5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX3Ro
Lm8KICBbQlVJTERdIGJpbi9rZXltYXBfZXQubwogIFtCVUlMRF0gYmluL2tleW1hcF9uby5v
CiAgW0JVSUxEXSBiaW4va2V5bWFwX2NmLm8KICBbQlVJTERdIGJpbi9rZXltYXBfcnUubwog
IFtCVUlMRF0gYmluL2tleW1hcF9hbC5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX3NyLm8KICBb
QlVJTERdIGJpbi9rZXltYXBfbHQubwogIFtCVUlMRF0gYmluL2tleW1hcF91YS5vCiAgW0JV
SUxEXSBiaW4va2V5bWFwX3dvLm8KICBbQlVJTERdIGJpbi9rZXltYXBfbXQubwogIFtCVUlM
RF0gYmluL2tleW1hcF9ieS5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX2ZyLm8KICBbQlVJTERd
IGJpbi9rZXltYXBfYXoubwogIFtCVUlMRF0gYmluL2tleW1hcF9wbC5vCiAgW0JVSUxEXSBi
aW4va2V5bWFwX3VrLm8KICBbQlVJTERdIGJpbi9rZXltYXBfbWsubwogIFtCVUlMRF0gYmlu
L2tleW1hcF9maS5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX2RlLm8KICBbQlVJTERdIGJpbi9r
ZXltYXBfY3oubwogIFtCVUlMRF0gYmluL2tleW1hcF9ubC5vCiAgW0JVSUxEXSBiaW4va2V5
bWFwX2JnLm8KICBbQlVJTERdIGJpbi9rZXltYXBfaHUubwogIFtCVUlMRF0gYmluL2tleW1h
cF9lcy5vCiAgW0JVSUxEXSBiaW4va2V5bWFwX3NnLm8KICBbQlVJTERdIGJpbi9rZXltYXBf
aXQubwogIFtCVUlMRF0gYmluL2tleW1hcF9yby5vCiAgW0JVSUxEXSBiaW4vcHJvbXB0Lm8K
ICBbQlVJTERdIGJpbi9yb3V0ZS5vCiAgW0JVSUxEXSBiaW4vaXdtZ210Lm8KICBbQlVJTERd
IGJpbi9sb3Rlc3QubwogIFtCVUlMRF0gYmluL2ltZ21nbXQubwogIFtCVUlMRF0gYmluL3B4
ZW1lbnUubwogIFtCVUlMRF0gYmluL2RoY3BtZ210Lm8KICBbQlVJTERdIGJpbi9mY21nbXQu
bwogIFtCVUlMRF0gYmluL2lmbWdtdC5vCiAgW0JVSUxEXSBiaW4vYXV0b2Jvb3QubwogIFtC
VUlMRF0gYmluL2NvbmZpZ19pbmZpbmliYW5kLm8KICBbQlVJTERdIGJpbi9jb25maWdfbmV0
ODAyMTEubwogIFtCVUlMRF0gYmluL2NvbmZpZ19ldGhlcm5ldC5vCiAgW0JVSUxEXSBiaW4v
Y29uZmlnX2ZjLm8KICBbQlVJTERdIGJpbi9jb25maWcubwogIFtCVUlMRF0gYmluL2NvbmZp
Z19yb21wcmVmaXgubwogIFtCVUlMRF0gYmluL3JkdHNjX3RpbWVyLm8KICBbQlVJTERdIGJp
bi9iYXNlbWVtX3BhY2tldC5vCiAgW0JVSUxEXSBiaW4vdmlkZW9fc3Vici5vCiAgW0JVSUxE
XSBiaW4vZ2RibWFjaC5vCiAgW0JVSUxEXSBiaW4vY3B1Lm8KICBbQlVJTERdIGJpbi9waWM4
MjU5Lm8KICBbQlVJTERdIGJpbi9ydW50aW1lLm8KICBbQlVJTERdIGJpbi90aW1lcjIubwog
IFtCVUlMRF0gYmluL3g4Nl9pby5vCiAgW0JVSUxEXSBiaW4vcmVsb2NhdGUubwogIFtCVUlM
RF0gYmluL251bGx0cmFwLm8KICBbQlVJTERdIGJpbi9kdW1wcmVncy5vCiAgW0JVSUxEXSBi
aW4vbGlicm1fbWdtdC5vCiAgW0JVSUxEXSBiaW4vaGlkZW1lbS5vCiAgW0JVSUxEXSBiaW4v
bWVtbWFwLm8KICBbQlVJTERdIGJpbi9iYXNlbWVtLm8KICBbQlVJTERdIGJpbi9mYWtlZTgy
MC5vCiAgW0JVSUxEXSBiaW4vYmlvc19jb25zb2xlLm8KICBbQlVJTERdIGJpbi9wbnBiaW9z
Lm8KICBbQlVJTERdIGJpbi9jb20zMi5vCiAgW0JVSUxEXSBiaW4vbmJpLm8KICBbQlVJTERd
IGJpbi9iemltYWdlLm8KICBbQlVJTERdIGJpbi9weGVfaW1hZ2UubwogIFtCVUlMRF0gYmlu
L211bHRpYm9vdC5vCiAgW0JVSUxEXSBiaW4vYm9vdHNlY3Rvci5vCiAgW0JVSUxEXSBiaW4v
ZWxmYm9vdC5vCiAgW0JVSUxEXSBiaW4vY29tYm9vdC5vCiAgW0JVSUxEXSBiaW4vYmlvc19u
YXAubwogIFtCVUlMRF0gYmluL2ludDEzLm8KICBbQlVJTERdIGJpbi9wY2liaW9zLm8KICBb
QlVJTERdIGJpbi9iaW9zX3RpbWVyLm8KICBbQlVJTERdIGJpbi9iaW9zaW50Lm8KICBbQlVJ
TERdIGJpbi9tZW10b3BfdW1hbGxvYy5vCiAgW0JVSUxEXSBiaW4vYmlvc19zbWJpb3Mubwog
IFtCVUlMRF0gYmluL3B4ZV9jYWxsLm8KICBbQlVJTERdIGJpbi9weGVfZmlsZS5vCiAgW0JV
SUxEXSBiaW4vcHhlX3RmdHAubwogIFtCVUlMRF0gYmluL3B4ZV9wcmVib290Lm8KICBbQlVJ
TERdIGJpbi9weGVfZXhpdF9ob29rLm8KICBbQlVJTERdIGJpbi9weGVfbG9hZGVyLm8KICBb
QlVJTERdIGJpbi9weGVfdW5kaS5vCiAgW0JVSUxEXSBiaW4vcHhlX3VkcC5vCiAgW0JVSUxE
XSBiaW4vcHhlcGFyZW50Lm8KICBbQlVJTERdIGJpbi9weGVwYXJlbnRfZGhjcC5vCiAgW0JV
SUxEXSBiaW4vY29tYm9vdF9jYWxsLm8KICBbQlVJTERdIGJpbi9jb20zMl9jYWxsLm8KICBb
QlVJTERdIGJpbi9jb21ib290X3Jlc29sdi5vCiAgW0JVSUxEXSBiaW4vcHhlX2NtZC5vCiAg
W0JVSUxEXSBiaW4vcmVib290X2NtZC5vCiAgW0JVSUxEXSBiaW4vcGNpZGlyZWN0Lm8KICBb
QlVJTERdIGJpbi94ODZfc3RyaW5nLm8KICBbQlVJTERdIGJpbi9lZml4ODZfbmFwLm8KICBb
QlVJTERdIGJpbi9lZmlwcmVmaXgubwogIFtCVUlMRF0gYmluL2VmaWRydnByZWZpeC5vCiAg
W0JVSUxEXSBiaW4vdW5kaXByZWxvYWQubwogIFtCVUlMRF0gYmluL3VuZGlsb2FkLm8KICBb
QlVJTERdIGJpbi91bmRpb25seS5vCiAgW0JVSUxEXSBiaW4vdW5kaS5vCiAgW0JVSUxEXSBi
aW4vdW5kaW5ldC5vCiAgW0JVSUxEXSBiaW4vdW5kaXJvbS5vCiAgW0JVSUxEXSBiaW4vZ2Ri
c3R1Yl90ZXN0Lm8KICBbQlVJTERdIGJpbi92aXJ0YWRkci5vCiAgW0JVSUxEXSBiaW4vcGF0
Y2hfY2YubwogIFtCVUlMRF0gYmluL2dkYmlkdC5vCiAgW0JVSUxEXSBiaW4vc2V0am1wLm8K
ICBbQlVJTERdIGJpbi9zdGFjay5vCiAgW0JVSUxEXSBiaW4vc3RhY2sxNi5vCiAgW0JVSUxE
XSBiaW4vbGlia2lyLm8KICBbQlVJTERdIGJpbi9saWJwbS5vCiAgW0JVSUxEXSBiaW4vbGli
YTIwLm8KICBbQlVJTERdIGJpbi9saWJybS5vCiAgW0JVSUxEXSBiaW4vbGlicHJlZml4Lm8K
ICBbQlVJTERdIGJpbi9kc2twcmVmaXgubwogIFtCVUlMRF0gYmluL21yb21wcmVmaXgubwog
IFtCVUlMRF0gYmluL3VubnJ2MmIubwogIFtCVUlMRF0gYmluL2xrcm5wcmVmaXgubwogIFtC
VUlMRF0gYmluL3VubnJ2MmIxNi5vCiAgW0JVSUxEXSBiaW4va2tweGVwcmVmaXgubwogIFtC
VUlMRF0gYmluL3VuZGlsb2FkZXIubwogIFtCVUlMRF0gYmluL2Jvb3RwYXJ0Lm8KICBbQlVJ
TERdIGJpbi9udWxscHJlZml4Lm8KICBbQlVJTERdIGJpbi9uYmlwcmVmaXgubwogIFtCVUlM
RF0gYmluL2tweGVwcmVmaXgubwogIFtCVUlMRF0gYmluL2tra3B4ZXByZWZpeC5vCiAgW0JV
SUxEXSBiaW4vdXNiZGlzay5vCiAgW0JVSUxEXSBiaW4vaGRwcmVmaXgubwogIFtCVUlMRF0g
YmluL2V4ZXByZWZpeC5vCiAgW0JVSUxEXSBiaW4vcm9tcHJlZml4Lm8KICBbQlVJTERdIGJp
bi9weGVwcmVmaXgubwogIFtCVUlMRF0gYmluL21ici5vCiAgW0JVSUxEXSBiaW4vZTgyMG1h
bmdsZXIubwogIFtCVUlMRF0gYmluL3B4ZV9lbnRyeS5vCiAgW0JVSUxEXSBiaW4vY29tMzJf
d3JhcHBlci5vCiAgW0JVSUxEXSBiaW4vdW5kaWlzci5vCiAgW0FSXSBiaW4vYmxpYi5hCmFy
OiBjcmVhdGluZyBiaW4vYmxpYi5hCiAgW0hPU1RDQ10gdXRpbC96YmluCiAgW0xEXSBiaW4v
cnRsODEzOS5yb20udG1wCiAgW0JJTl0gYmluL3J0bDgxMzkucm9tLmJpbgogIFtaSU5GT10g
YmluL3J0bDgxMzkucm9tLnppbmZvCiAgW1pCSU5dIGJpbi9ydGw4MTM5LnJvbS56YmluCiAg
W0ZJTklTSF0gYmluL3J0bDgxMzkucm9tCnJtIGJpbi9ydGw4MTM5LnJvbS56YmluIGJpbi9y
dGw4MTM5LnJvbS5iaW4gYmluL3J0bDgxMzkucm9tLnppbmZvCmdtYWtlWzddOiBMZWF2aW5n
IGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2V0aGVyYm9vdC9p
cHhlL3NyYycKZ21ha2UgLUMgaXB4ZS9zcmMgYmluLzgwODYxMDBlLnJvbQpnbWFrZVs3XTog
RW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRo
ZXJib290L2lweGUvc3JjJwogIFtMRF0gYmluLzgwODYxMDBlLnJvbS50bXAKICBbQklOXSBi
aW4vODA4NjEwMGUucm9tLmJpbgogIFtaSU5GT10gYmluLzgwODYxMDBlLnJvbS56aW5mbwog
IFtaQklOXSBiaW4vODA4NjEwMGUucm9tLnpiaW4KICBbRklOSVNIXSBiaW4vODA4NjEwMGUu
cm9tCnJtIGJpbi84MDg2MTAwZS5yb20uemJpbiBiaW4vODA4NjEwMGUucm9tLmJpbiBiaW4v
ODA4NjEwMGUucm9tLnppbmZvCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2V0aGVyYm9vdC9pcHhlL3NyYycKZ21ha2VbNl06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvZXRo
ZXJib290JwpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy9maXJtd2FyZScKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2Zpcm13YXJlJwpnbWFrZSAtQyBodm1sb2FkZXIgYWxsCmdtYWtlWzZd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9o
dm1sb2FkZXInCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIu
MC90b29scy9maXJtd2FyZS9odm1sb2FkZXInCmdtYWtlIC1DIGFjcGkgYWxsCmdtYWtlWzhd
OiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9o
dm1sb2FkZXIvYWNwaScKaWFzbCAtdnMgLXAgc3NkdF9zMyAtdGMgc3NkdF9zMy5hc2wKQVNM
IElucHV0OiAgc3NkdF9zMy5hc2wgLSAzNCBsaW5lcywgMTA2NyBieXRlcywgMSBrZXl3b3Jk
cwpBTUwgT3V0cHV0OiBzc2R0X3MzLmFtbCAtIDQ5IGJ5dGVzLCAxIG5hbWVkIG9iamVjdHMs
IDAgZXhlY3V0YWJsZSBvcGNvZGVzCgpDb21waWxhdGlvbiBjb21wbGV0ZS4gMCBFcnJvcnMs
IDAgV2FybmluZ3MsIDAgUmVtYXJrcywgNCBPcHRpbWl6YXRpb25zCnNlZCAtZSAncy9BbWxD
b2RlL3NzZHRfczMvZycgc3NkdF9zMy5oZXggPnNzZHRfczMuaApybSAtZiBzc2R0X3MzLmhl
eCBzc2R0X3MzLmFtbAppYXNsIC12cyAtcCBzc2R0X3M0IC10YyBzc2R0X3M0LmFzbApBU0wg
SW5wdXQ6ICBzc2R0X3M0LmFzbCAtIDM0IGxpbmVzLCAxMDY3IGJ5dGVzLCAxIGtleXdvcmRz
CkFNTCBPdXRwdXQ6IHNzZHRfczQuYW1sIC0gNDkgYnl0ZXMsIDEgbmFtZWQgb2JqZWN0cywg
MCBleGVjdXRhYmxlIG9wY29kZXMKCkNvbXBpbGF0aW9uIGNvbXBsZXRlLiAwIEVycm9ycywg
MCBXYXJuaW5ncywgMCBSZW1hcmtzLCA0IE9wdGltaXphdGlvbnMKc2VkIC1lICdzL0FtbENv
ZGUvc3NkdF9zNC9nJyBzc2R0X3M0LmhleCA+c3NkdF9zNC5oCnJtIC1mIHNzZHRfczQuaGV4
IHNzZHRfczQuYW1sCmlhc2wgLXZzIC1wIHNzZHRfcG0gLXRjIHNzZHRfcG0uYXNsCkFTTCBJ
bnB1dDogIHNzZHRfcG0uYXNsIC0gNDI1IGxpbmVzLCAxMjc1NCBieXRlcywgMTkyIGtleXdv
cmRzCkFNTCBPdXRwdXQ6IHNzZHRfcG0uYW1sIC0gMTQ5NCBieXRlcywgNjQgbmFtZWQgb2Jq
ZWN0cywgMTI4IGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29tcGlsYXRpb24gY29tcGxldGUuIDAg
RXJyb3JzLCAwIFdhcm5pbmdzLCAwIFJlbWFya3MsIDMxIE9wdGltaXphdGlvbnMKc2VkIC1l
ICdzL0FtbENvZGUvc3NkdF9wbS9nJyBzc2R0X3BtLmhleCA+c3NkdF9wbS5oCnJtIC1mIHNz
ZHRfcG0uaGV4IHNzZHRfcG0uYW1sCmlhc2wgLXZzIC1wIHNzZHRfdHBtIC10YyBzc2R0X3Rw
bS5hc2wKQVNMIElucHV0OiAgc3NkdF90cG0uYXNsIC0gMzMgbGluZXMsIDEwNDYgYnl0ZXMs
IDMga2V5d29yZHMKQU1MIE91dHB1dDogc3NkdF90cG0uYW1sIC0gNzYgYnl0ZXMsIDMgbmFt
ZWQgb2JqZWN0cywgMCBleGVjdXRhYmxlIG9wY29kZXMKCkNvbXBpbGF0aW9uIGNvbXBsZXRl
LiAwIEVycm9ycywgMCBXYXJuaW5ncywgMCBSZW1hcmtzLCAwIE9wdGltaXphdGlvbnMKc2Vk
IC1lICdzL0FtbENvZGUvc3NkdF90cG0vZycgc3NkdF90cG0uaGV4ID5zc2R0X3RwbS5oCnJt
IC1mIHNzZHRfdHBtLmhleCBzc2R0X3RwbS5hbWwKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFt
ZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5idWlsZC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJv
ciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1t
c29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIv
YWNwaS8uLi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBidWlsZC5vIGJ1aWxkLmMg
IC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgLVdhbGwgLVdlcnJvciAtV3N0cmljdC1wcm90b3R5
cGVzIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtZm5vLXN0cmljdC1hbGlhc2luZyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJt
d2FyZS9odm1sb2FkZXIvYWNwaS8uLi8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1vIG1rX2Rz
ZHQgbWtfZHNkdC5jCmF3ayAnTlIgPiAxIHtwcmludCBzfSB7cz0kMH0nIGRzZHQuYXNsID4g
ZHNkdF9hbnljcHUuYXNsCi4vbWtfZHNkdCAtLW1heGNwdSBhbnkgID4+IGRzZHRfYW55Y3B1
LmFzbAppYXNsIC12cyAtcCBkc2R0X2FueWNwdSAtdGMgZHNkdF9hbnljcHUuYXNsCmRzZHRf
YW55Y3B1LmFzbCAgIDUyODM6ICAgICAgICAgICAgIFJldHVybiAoIFxfU0IuUFJTQygpICkK
V2FybmluZyAgMTEyOCAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBe
IFJlc2VydmVkIG1ldGhvZCBzaG91bGQgbm90IHJldHVybiBhIHZhbHVlIChfTDAyKQoKQVNM
IElucHV0OiAgZHNkdF9hbnljcHUuYXNsIC0gMTA5MzYgbGluZXMsIDM4NjYxOCBieXRlcywg
Nzk1OSBrZXl3b3JkcwpBTUwgT3V0cHV0OiBkc2R0X2FueWNwdS5hbWwgLSA3MDQyMSBieXRl
cywgMjQ1NiBuYW1lZCBvYmplY3RzLCA1NTAzIGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29tcGls
YXRpb24gY29tcGxldGUuIDAgRXJyb3JzLCAxIFdhcm5pbmdzLCAwIFJlbWFya3MsIDI2MTQg
T3B0aW1pemF0aW9ucwpzZWQgLWUgJ3MvQW1sQ29kZS9kc2R0X2FueWNwdS9nJyBkc2R0X2Fu
eWNwdS5oZXggPmRzZHRfYW55Y3B1LmMKZWNobyAiaW50IGRzZHRfYW55Y3B1X2xlbj1zaXpl
b2YoZHNkdF9hbnljcHUpOyIgPj5kc2R0X2FueWNwdS5jCnJtIC1mIGRzZHRfYW55Y3B1LmFt
bCBkc2R0X2FueWNwdS5oZXgKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
MzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5kc2R0X2FueWNwdS5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1m
bG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvYWNwaS8u
Li8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBkc2R0X2FueWNwdS5vIGRzZHRfYW55
Y3B1LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQphd2sgJ05SID4gMSB7cHJpbnQgc30ge3M9JDB9
JyBkc2R0LmFzbCA+IGRzZHRfMTVjcHUuYXNsCi4vbWtfZHNkdCAtLW1heGNwdSAxNSAgPj4g
ZHNkdF8xNWNwdS5hc2wKaWFzbCAtdnMgLXAgZHNkdF8xNWNwdSAtdGMgZHNkdF8xNWNwdS5h
c2wKZHNkdF8xNWNwdS5hc2wgICAgOTg5OiAgICAgICAgICAgICBSZXR1cm4gKCBcX1NCLlBS
U0MoKSApCldhcm5pbmcgIDExMjggLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBeIFJlc2VydmVkIG1ldGhvZCBzaG91bGQgbm90IHJldHVybiBhIHZhbHVlIChfTDAy
KQoKQVNMIElucHV0OiAgZHNkdF8xNWNwdS5hc2wgLSA2NjQyIGxpbmVzLCAyNDQ2MTEgYnl0
ZXMsIDQ3Njcga2V5d29yZHMKQU1MIE91dHB1dDogZHNkdF8xNWNwdS5hbWwgLSA0ODExOCBi
eXRlcywgMTU1MiBuYW1lZCBvYmplY3RzLCAzMjE1IGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29t
cGlsYXRpb24gY29tcGxldGUuIDAgRXJyb3JzLCAxIFdhcm5pbmdzLCAwIFJlbWFya3MsIDEw
NDYgT3B0aW1pemF0aW9ucwpzZWQgLWUgJ3MvQW1sQ29kZS9kc2R0XzE1Y3B1L2cnIGRzZHRf
MTVjcHUuaGV4ID5kc2R0XzE1Y3B1LmMKZWNobyAiaW50IGRzZHRfMTVjcHVfbGVuPXNpemVv
Zihkc2R0XzE1Y3B1KTsiID4+ZHNkdF8xNWNwdS5jCnJtIC1mIGRzZHRfMTVjcHUuYW1sIGRz
ZHRfMTVjcHUuaGV4CmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1t
YXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAuZHNkdF8xNWNwdS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJs
aW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNr
LXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAt
SS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvYWNwaS8uLi8uLi8u
Li8uLi90b29scy9pbmNsdWRlICAtYyAtbyBkc2R0XzE1Y3B1Lm8gZHNkdF8xNWNwdS5jICAt
SS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
MzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2Fs
bCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAg
LURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5zdGF0aWNfdGFibGVzLm8uZCAtZm5vLW9wdGlt
aXplLXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0
LWZsb2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci9hY3Bp
Ly4uLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHN0YXRpY190YWJsZXMubyBzdGF0
aWNfdGFibGVzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQphd2sgJ05SID4gMSB7cHJpbnQgc30g
e3M9JDB9JyBkc2R0LmFzbCA+IGRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbAouL21rX2RzZHQg
LS1kbS12ZXJzaW9uIHFlbXUteGVuID4+IGRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbAppYXNs
IC12cyAtcCBkc2R0X2FueWNwdV9xZW11X3hlbiAtdGMgZHNkdF9hbnljcHVfcWVtdV94ZW4u
YXNsCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDUyODM6ICAgICAgICAgICAgIFJldHVy
biAoIFxfU0IuUFJTQygpICkKV2FybmluZyAgMTEyOCAtICAgICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0wwMikKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU2NzY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU2ODQ6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU2OTI6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3MDA6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3MDg6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3MTY6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3MjQ6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3MzI6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3NDA6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3NDg6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3NTY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3NjQ6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3NzI6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3ODA6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU3ODg6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU3OTY6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4MDQ6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4MTI6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4MjA6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4Mjg6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4MzY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4NDQ6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4NTI6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4NjA6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4Njg6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4NzY6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU4ODQ6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU4OTI6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU5MDA6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAgIDU5MDg6ICAgICAgICAgICAg
ICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5nICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qg
c2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBeICAoX0VKMCkKCmRzZHRfYW55Y3B1X3FlbXVf
eGVuLmFzbCAgIDU5MTY6ICAgICAgICAgICAgICAgICBSZXR1cm4gKCAweDAgKQpXYXJuaW5n
ICAxMTI4IC0gICBSZXNlcnZlZCBtZXRob2Qgc2hvdWxkIG5vdCByZXR1cm4gYSB2YWx1ZSBe
ICAoX0VKMCkKCkFTTCBJbnB1dDogIGRzZHRfYW55Y3B1X3FlbXVfeGVuLmFzbCAtIDYxMjIg
bGluZXMsIDIwMzM0OSBieXRlcywgNDMyNSBrZXl3b3JkcwpBTUwgT3V0cHV0OiBkc2R0X2Fu
eWNwdV9xZW11X3hlbi5hbWwgLSAzNDEzMyBieXRlcywgMTMwMCBuYW1lZCBvYmplY3RzLCAz
MDI1IGV4ZWN1dGFibGUgb3Bjb2RlcwoKQ29tcGlsYXRpb24gY29tcGxldGUuIDAgRXJyb3Jz
LCAzMiBXYXJuaW5ncywgMCBSZW1hcmtzLCAyNTg2IE9wdGltaXphdGlvbnMKc2VkIC1lICdz
L0FtbENvZGUvZHNkdF9hbnljcHVfcWVtdV94ZW4vZycgZHNkdF9hbnljcHVfcWVtdV94ZW4u
aGV4ID5kc2R0X2FueWNwdV9xZW11X3hlbi5jCmVjaG8gImludCBkc2R0X2FueWNwdV9xZW11
X3hlbl9sZW49c2l6ZW9mKGRzZHRfYW55Y3B1X3FlbXVfeGVuKTsiID4+ZHNkdF9hbnljcHVf
cWVtdV94ZW4uYwpybSAtZiBkc2R0X2FueWNwdV9xZW11X3hlbi5hbWwgZHNkdF9hbnljcHVf
cWVtdV94ZW4uaGV4CmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1t
YXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAuZHNkdF9hbnljcHVfcWVtdV94ZW4uby5kIC1mbm8tb3B0
aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJlY3Qtc2VnLXJlZnMgIC1XZXJyb3Ig
LWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1mbm8tYnVpbHRpbiAtbXNv
ZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyL2Fj
cGkvLi4vLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gZHNkdF9hbnljcHVfcWVtdV94
ZW4ubyBkc2R0X2FueWNwdV9xZW11X3hlbi5jICAtSS91c3IvcGtnL2luY2x1ZGUKYXIgcmMg
YWNwaS5hIGJ1aWxkLm8gZHNkdF9hbnljcHUubyBkc2R0XzE1Y3B1Lm8gc3RhdGljX3RhYmxl
cy5vIGRzZHRfYW55Y3B1X3FlbXVfeGVuLm8KZ21ha2VbOF06IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyL2FjcGknCmdtYWtl
WzddOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJl
L2h2bWxvYWRlcicKZ21ha2UgaHZtbG9hZGVyCmdtYWtlWzddOiBFbnRlcmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXInCmVjaG8gIi8q
IEF1dG9nZW5lcmF0ZWQgZmlsZS4gRE8gTk9UIEVESVQgKi8iID4gcm9tcy5pbmMubmV3CmVj
aG8gIiNpZmRlZiBST01fSU5DTFVERV9ST01CSU9TIiA+PiByb21zLmluYy5uZXcKc2ggLi9t
a2hleCByb21iaW9zIC4uL3JvbWJpb3MvQklPUy1ib2Nocy1sYXRlc3QgPj4gcm9tcy5pbmMu
bmV3CmVjaG8gIiNlbmRpZiIgPj4gcm9tcy5pbmMubmV3CmVjaG8gIiNpZmRlZiBST01fSU5D
TFVERV9TRUFCSU9TIiA+PiByb21zLmluYy5uZXcKc2ggLi9ta2hleCBzZWFiaW9zIC4uL3Nl
YWJpb3MtZGlyL291dC9iaW9zLmJpbiA+PiByb21zLmluYy5uZXcKZWNobyAiI2VuZGlmIiA+
PiByb21zLmluYy5uZXcKZWNobyAiI2lmZGVmIFJPTV9JTkNMVURFX1ZHQUJJT1MiID4+IHJv
bXMuaW5jLm5ldwpzaCAuL21raGV4IHZnYWJpb3Nfc3RkdmdhIC4uL3ZnYWJpb3MvVkdBQklP
Uy1sZ3BsLWxhdGVzdC5iaW4gPj4gcm9tcy5pbmMubmV3CmVjaG8gIiNlbmRpZiIgPj4gcm9t
cy5pbmMubmV3CmVjaG8gIiNpZmRlZiBST01fSU5DTFVERV9WR0FCSU9TIiA+PiByb21zLmlu
Yy5uZXcKc2ggLi9ta2hleCB2Z2FiaW9zX2NpcnJ1c3ZnYSAuLi92Z2FiaW9zL1ZHQUJJT1Mt
bGdwbC1sYXRlc3QuY2lycnVzLmJpbiA+PiByb21zLmluYy5uZXcKZWNobyAiI2VuZGlmIiA+
PiByb21zLmluYy5uZXcKZWNobyAiI2lmZGVmIFJPTV9JTkNMVURFX0VUSEVSQk9PVCIgPj4g
cm9tcy5pbmMubmV3CnNoIC4vbWtoZXggZXRoZXJib290IC4uL2V0aGVyYm9vdC9pcHhlL3Ny
Yy9iaW4vcnRsODEzOS5yb20gLi4vZXRoZXJib290L2lweGUvc3JjL2Jpbi84MDg2MTAwZS5y
b20gPj4gcm9tcy5pbmMubmV3CmVjaG8gIiNlbmRpZiIgPj4gcm9tcy5pbmMubmV3Cm12IHJv
bXMuaW5jLm5ldyByb21zLmluYwpnY2MgICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmh2bWxvYWRlci5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5v
LXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1m
bG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvLi4vLi4v
Li4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1j
IC1vIGh2bWxvYWRlci5vIGh2bWxvYWRlci5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC5tcF90YWJsZXMuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1k
aXJlY3Qtc2VnLXJlZnMgIC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNl
cHRpb25zIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvZmlybXdhcmUvaHZtbG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVf
Uk9NQklPUyAtREVOQUJMRV9TRUFCSU9TICAtYyAtbyBtcF90YWJsZXMubyBtcF90YWJsZXMu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudXRpbC5vLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0
YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9h
dCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2FkZXIvLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1jIC1v
IHV0aWwubyB1dGlsLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnNtYmlvcy5vLmQg
LWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAg
LVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWls
dGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1s
b2FkZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxF
X1NFQUJJT1MgLURfX1NNQklPU19EQVRFX189IlwiMTIvMDQvMjAxMlwiIiAgLWMgLW8gc21i
aW9zLm8gc21iaW9zLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0
LWZyYW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnNtcC5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdl
cnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGlu
IC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2Fk
ZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NF
QUJJT1MgIC1jIC1vIHNtcC5vIHNtcC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8x
IC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5j
YWNoZWF0dHIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJl
Y3Qtc2VnLXJlZnMgIC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVfUk9N
QklPUyAtREVOQUJMRV9TRUFCSU9TICAtYyAtbyBjYWNoZWF0dHIubyBjYWNoZWF0dHIuYyAg
LUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdh
bGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAg
IC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVuYnVzLm8uZCAtZm5vLW9wdGltaXplLXNp
YmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3Rh
Y2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8uLi8uLi90
b29scy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAgLWMgLW8g
eGVuYnVzLm8geGVuYnVzLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlv
bi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmU4MjAuby5k
IC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJlY3Qtc2VnLXJlZnMg
IC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRpb25zIC1mbm8tYnVp
bHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZt
bG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVfUk9NQklPUyAtREVOQUJM
RV9TRUFCSU9TICAtYyAtbyBlODIwLm8gZTgyMC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2Nj
ICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZu
by1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVz
IC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQg
LU1GIC5wY2kuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAtbW5vLXRscy1kaXJl
Y3Qtc2VnLXJlZnMgIC1XZXJyb3IgLWZuby1zdGFjay1wcm90ZWN0b3IgLWZuby1leGNlcHRp
b25zIC1mbm8tYnVpbHRpbiAtbXNvZnQtZmxvYXQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLURFTkFCTEVfUk9N
QklPUyAtREVOQUJMRV9TRUFCSU9TICAtYyAtbyBwY2kubyBwY2kuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1tYXJjaD1p
Njg2IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAucGlyLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgLW1u
by10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3RhY2stcHJvdGVjdG9yIC1m
bm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0IC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1E
RU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAgLWMgLW8gcGlyLm8gcGlyLmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW0z
MiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmN0eXBlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1mbm8tc3RhY2st
cHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0LWZsb2F0IC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8uLi8uLi90b29s
cy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAgLWMgLW8gY3R5
cGUubyBjdHlwZS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcg
LXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC50ZXN0cy5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVjdC1zZWctcmVmcyAgLVdl
cnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWZuby1idWlsdGlu
IC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS9odm1sb2Fk
ZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01CSU9TIC1ERU5BQkxFX1NF
QUJJT1MgIC1jIC1vIHRlc3RzLm8gdGVzdHMuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAg
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAub3B0aW9ucm9tcy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxz
LWRpcmVjdC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4
Y2VwdGlvbnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90
b29scy9maXJtd2FyZS9odm1sb2FkZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJM
RV9ST01CSU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1jIC1vIG9wdGlvbnJvbXMubyBvcHRpb25y
b21zLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW0zMiAtbWFyY2g9aTY4NiAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLjMyYml0Ymlvc19zdXBwb3J0Lm8u
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZz
ICAtV2Vycm9yIC1mbm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1
aWx0aW4gLW1zb2Z0LWZsb2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2
bWxvYWRlci8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFC
TEVfU0VBQklPUyAgLWMgLW8gMzJiaXRiaW9zX3N1cHBvcnQubyAzMmJpdGJpb3Nfc3VwcG9y
dC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tMzIgLW1hcmNoPWk2ODYgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5yb21iaW9zLm8uZCAtZm5vLW9wdGlt
aXplLXNpYmxpbmctY2FsbHMgLW1uby10bHMtZGlyZWN0LXNlZy1yZWZzICAtV2Vycm9yIC1m
bm8tc3RhY2stcHJvdGVjdG9yIC1mbm8tZXhjZXB0aW9ucyAtZm5vLWJ1aWx0aW4gLW1zb2Z0
LWZsb2F0IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci8uLi8u
Li8uLi90b29scy9pbmNsdWRlIC1ERU5BQkxFX1JPTUJJT1MgLURFTkFCTEVfU0VBQklPUyAg
LWMgLW8gcm9tYmlvcy5vIHJvbWJpb3MuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTMyIC1tYXJjaD1pNjg2IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
c2VhYmlvcy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzIC1tbm8tdGxzLWRpcmVj
dC1zZWctcmVmcyAgLVdlcnJvciAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlv
bnMgLWZuby1idWlsdGluIC1tc29mdC1mbG9hdCAtSS9yb290L3hlbi00LjIuMC90b29scy9m
aXJtd2FyZS9odm1sb2FkZXIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtREVOQUJMRV9ST01C
SU9TIC1ERU5BQkxFX1NFQUJJT1MgIC1jIC1vIHNlYWJpb3MubyBzZWFiaW9zLmMgIC1JL3Vz
ci9wa2cvaW5jbHVkZQpsZCAtbWVsZl9pMzg2IC1OIC1UdGV4dCAweDEwMDAwMCAtbyBodm1s
b2FkZXIudG1wIGh2bWxvYWRlci5vIG1wX3RhYmxlcy5vIHV0aWwubyBzbWJpb3MubyBzbXAu
byBjYWNoZWF0dHIubyB4ZW5idXMubyBlODIwLm8gcGNpLm8gcGlyLm8gY3R5cGUubyB0ZXN0
cy5vIG9wdGlvbnJvbXMubyAzMmJpdGJpb3Nfc3VwcG9ydC5vIHJvbWJpb3MubyBzZWFiaW9z
Lm8gYWNwaS9hY3BpLmEKb2JqY29weSBodm1sb2FkZXIudG1wIGh2bWxvYWRlcgpybSAtZiBo
dm1sb2FkZXIudG1wCmdtYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlcicKZ21ha2VbNl06IExlYXZpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyJwpnbWFrZVs1
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZScK
Z21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvZmly
bXdhcmUnClsgLWQgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGli
L3hlbi9ib290IF0gfHwgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Zpcm13YXJlLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL3hlbjQyL2xpYi94ZW4vYm9vdApbICEgLWUgaHZtbG9hZGVyL2h2bWxvYWRl
ciBdIHx8IC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZS8uLi8uLi90b29scy9jcm9z
cy1pbnN0YWxsIC1tMDY0NCAtcCBodm1sb2FkZXIvaHZtbG9hZGVyIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYi94ZW4vYm9vdApnbWFrZVszXTogTGVhdmlu
ZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9maXJtd2FyZScKZ21ha2VbMl06
IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBF
bnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMgY29u
c29sZSBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scy9jb25zb2xlJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAudXRpbHMuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAg
LVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL3hlbnN0b3Jl
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVkZSAg
LWMgLW8gZGFlbW9uL3V0aWxzLm8gZGFlbW9uL3V0aWxzLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubWFp
bi5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvY29uc29sZS8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvY29uc29sZS8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBkYWVtb24vbWFpbi5v
IGRhZW1vbi9tYWluLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuaW8uby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rv
b2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9v
bHMvaW5jbHVkZSAgLWMgLW8gZGFlbW9uL2lvLm8gZGFlbW9uL2lvLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgICAgZGFlbW9uL3V0aWxzLm8gZGFlbW9uL21haW4ubyBkYWVtb24vaW8u
byAtbyB4ZW5jb25zb2xlZCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvY29uc29sZS8uLi8uLi90
b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xl
Ly4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAtbHV0aWwgLWxydCAgLUwv
dXNyL3BrZy9saWIKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAt
Zm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1N
RCAtTUYgLm1haW4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
SS9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290
L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2NvbnNvbGUvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8gY2xp
ZW50L21haW4ubyBjbGllbnQvbWFpbi5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIGNs
aWVudC9tYWluLm8gLW8geGVuY29uc29sZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvY29uc29s
ZS8uLi8uLi90b29scy9saWJ4Yy9saWJ4ZW5jdHJsLnNvIC9yb290L3hlbi00LjIuMC90b29s
cy9jb25zb2xlLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAgIC1ML3Vz
ci9wa2cvbGliCi9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwv
L3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlLy4uLy4uL3Rv
b2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIHhlbmNvbnNvbGVkIC9yb290L3hlbi00LjIu
MC9kaXN0L2luc3RhbGwvL3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy9j
b25zb2xlLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hl
bi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2Jpbgovcm9vdC94ZW4tNC4yLjAvdG9v
bHMvY29uc29sZS8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW5jb25z
b2xlIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2JpbgpnbWFrZVsz
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9jb25zb2xlJwpn
bWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21h
a2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFr
ZSAtQyB4ZW5tb24gaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVubW9uJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAuc2V0bWFzay5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90
b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMv
aW5jbHVkZSAgLWMgLW8gc2V0bWFzay5vIHNldG1hc2suYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgICBzZXRtYXNrLm8gLW8geGVudHJhY2Vfc2V0bWFzayAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMveGVubW9uLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28gIC1ML3Vzci9w
a2cvbGliCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC54ZW5iYWtlZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWMgLW8geGVu
YmFrZWQubyB4ZW5iYWtlZC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAgIHhlbmJha2Vk
Lm8gLW8geGVuYmFrZWQgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90b29s
cy9saWJ4Yy9saWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgovcm9vdC94ZW4tNC4yLjAv
dG9vbHMveGVubW9uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9y
b290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL3hlbm1vbi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4
ZW5iYWtlZCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zYmluL3hl
bmJha2VkCi9yb290L3hlbi00LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMvY3Jvc3Mt
aW5zdGFsbCAtbTA3NTUgLXAgeGVudHJhY2Vfc2V0bWFzayAgL3Jvb3QveGVuLTQuMi4wL2Rp
c3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbi94ZW50cmFjZV9zZXRtYXNrCi9yb290L3hlbi00
LjIuMC90b29scy94ZW5tb24vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAg
eGVubW9uLnB5ICAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zYmlu
L3hlbm1vbi5weQovcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVubW9uLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwv
dXNyL3hlbjQyL3NoYXJlL2RvYy94ZW4KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbm1vbi8u
Li8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBSRUFETUUgL3Jvb3QveGVuLTQu
Mi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUvZG9jL3hlbi9SRUFETUUueGVubW9u
CmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bm1vbicKZ21ha2VbMl06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMnCmdtYWtlWzJdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29s
cycKZ21ha2UgLUMgeGVuc3RhdCBpbnN0YWxsCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3Rv
cnkgYC9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0JwpnbWFrZVs0XTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2UgLUMgbGli
eGVuc3RhdCBpbnN0YWxsCmdtYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy94ZW5zdGF0L2xpYnhlbnN0YXQnCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW5zdGF0Lm8uZCAtZm5vLW9wdGltaXpl
LXNpYmxpbmctY2FsbHMgIC1mUElDIC1Jc3JjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bnN0YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy94ZW5zdGF0L2xpYnhlbnN0YXQvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L2xpYnhlbnN0YXQvLi4vLi4vLi4vdG9vbHMv
eGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9saWJ4ZW5zdGF0Ly4u
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9s
aWJ4ZW5zdGF0Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIHNyYy94ZW5zdGF0Lm8g
c3JjL3hlbnN0YXQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC54ZW5zdGF0X25ldGJzZC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtZlBJQyAtSXNyYyAtSS9yb290L3hlbi00LjIuMC90
b29scy94ZW5zdGF0L2xpYnhlbnN0YXQvLi4vLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9saWJ4ZW5zdGF0Ly4uLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC9saWJ4ZW5zdGF0Ly4uLy4uLy4u
L3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVu
c3RhdC8uLi8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bnN0YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBzcmMveGVu
c3RhdF9uZXRic2QubyBzcmMveGVuc3RhdF9uZXRic2QuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CnNyYy94ZW5zdGF0X25ldGJzZC5jOjc5OjEyOiB3YXJuaW5nOiAncmVhZF9hdHRyaWJ1dGVz
X3ZiZCcgZGVmaW5lZCBidXQgbm90IHVzZWQKYXIgcmMgc3JjL2xpYnhlbnN0YXQuYSBzcmMv
eGVuc3RhdC5vIHNyYy94ZW5zdGF0X25ldGJzZC5vCnJhbmxpYiBzcmMvbGlieGVuc3RhdC5h
CmdjYyAgICAtV2wsLXNvbmFtZSAtV2wsbGlieGVuc3RhdC5zby4wIC1zaGFyZWQgLW8gc3Jj
L2xpYnhlbnN0YXQuc28uMC4wIFwKICAgIHNyYy94ZW5zdGF0Lm8gc3JjL3hlbnN0YXRfbmV0
YnNkLm8gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC8uLi8uLi8u
Li90b29scy94ZW5zdG9yZS9saWJ4ZW5zdG9yZS5zbyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RhdC9saWJ4ZW5zdGF0Ly4uLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhlbmN0cmwuc28g
IC1ML3Vzci9wa2cvbGliCmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAuMCBzcmMvbGlieGVuc3Rh
dC5zby4wCmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAgc3JjL2xpYnhlbnN0YXQuc28KL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9jcm9z
cy1pbnN0YWxsIC1tMDY0NCAtcCBzcmMveGVuc3RhdC5oIC9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1ZGUKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0
YXQvbGlieGVuc3RhdC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCBz
cmMvbGlieGVuc3RhdC5hIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQy
L2xpYi9saWJ4ZW5zdGF0LmEKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQvbGlieGVu
c3RhdC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCBzcmMvbGlieGVu
c3RhdC5zby4wLjAgL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGli
CmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAuMCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxs
L3Vzci94ZW40Mi9saWIvbGlieGVuc3RhdC5zby4wCmxuIC1zZiBsaWJ4ZW5zdGF0LnNvLjAg
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvbGliL2xpYnhlbnN0YXQu
c28KZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVuc3RhdC9saWJ4ZW5zdGF0JwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290
L3hlbi00LjIuMC90b29scy94ZW5zdGF0JwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2UgLUMgeGVudG9wIGluc3Rh
bGwKZ21ha2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L3hlbnN0YXQveGVudG9wJwpnY2MgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC54ZW50b3AuZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1ER0ND
X1BSSU5URiAtV2FsbCAtV2Vycm9yIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQv
eGVudG9wLy4uLy4uLy4uL3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC9zcmMgLURIT1NUX05l
dEJTRCAtSS9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L3hlbnRvcC8uLi8uLi8uLi90
b29scyAgICAgIHhlbnRvcC5jICAtV2wsLXJwYXRoLWxpbms9L3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3hlbnN0YXQveGVudG9wLy4uLy4uLy4uL3Rvb2xzL2xpYnhjIC1XbCwtcnBhdGgtbGlu
az0vcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdC94ZW50b3AvLi4vLi4vLi4vdG9vbHMv
eGVuc3RvcmUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQveGVudG9wLy4uLy4uLy4u
L3Rvb2xzL3hlbnN0YXQvbGlieGVuc3RhdC9zcmMvbGlieGVuc3RhdC5zbyAtbGN1cnNlcyAg
LW8geGVudG9wCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L3hlbnRvcC8uLi8uLi8u
Li90b29scy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlz
dC9pbnN0YWxsL3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0
L3hlbnRvcC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCB4ZW50b3Ag
L3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2Jpbi94ZW50b3AKL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL3hlbnN0YXQveGVudG9wLy4uLy4uLy4uL3Rvb2xzL2Nyb3Nz
LWluc3RhbGwgLWQgLW0wNzU1IC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNy
L3hlbjQyL3NoYXJlL21hbi9tYW4xCi9yb290L3hlbi00LjIuMC90b29scy94ZW5zdGF0L3hl
bnRvcC8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW50b3AuMSAv
cm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zaGFyZS9tYW4vbWFuMS94
ZW50b3AuMQpnbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy94ZW5zdGF0L3hlbnRvcCcKZ21ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVuc3RhdCcKZ21ha2VbMl06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMnCmdtYWtlWzJdOiBFbnRlcmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2UgLUMgbGliYWlvIGluc3RhbGwK
Z21ha2VbM106IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YmFpbycKZ21ha2VbNF06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYmFpby9zcmMnCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4g
LWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlvX3F1ZXVlX2luaXQu
b2wgaW9fcXVldWVfaW5pdC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAt
SS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlvX3F1ZXVlX3Jl
bGVhc2Uub2wgaW9fcXVldWVfcmVsZWFzZS5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxl
cyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlv
X3F1ZXVlX3dhaXQub2wgaW9fcXVldWVfd2FpdC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRm
aWxlcyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1v
IGlvX3F1ZXVlX3J1bi5vbCBpb19xdWV1ZV9ydW4uYwpnY2MgLW5vc3RkbGliIC1ub3N0YXJ0
ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAt
byBpb19nZXRldmVudHMub2wgaW9fZ2V0ZXZlbnRzLmMKZ2NjIC1ub3N0ZGxpYiAtbm9zdGFy
dGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9taXQtZnJhbWUtcG9pbnRlciAtTzIgLWZQSUMgLWMg
LW8gaW9fc3VibWl0Lm9sIGlvX3N1Ym1pdC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxl
cyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlv
X2NhbmNlbC5vbCBpb19jYW5jZWwuYwpnY2MgLW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdh
bGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAtbyBpb19zZXR1
cC5vbCBpb19zZXR1cC5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4g
LWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIGlvX2Rlc3Ryb3kub2wg
aW9fZGVzdHJveS5jCmdjYyAtbm9zdGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4gLWcg
LWZvbWl0LWZyYW1lLXBvaW50ZXIgLU8yIC1mUElDIC1jIC1vIHJhd19zeXNjYWxsLm9sIHJh
d19zeXNjYWxsLmMKZ2NjIC1ub3N0ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAt
Zm9taXQtZnJhbWUtcG9pbnRlciAtTzIgLWZQSUMgLWMgLW8gY29tcGF0LTBfMS5vbCBjb21w
YXQtMF8xLmMKcm0gLWYgbGliYWlvLmEKYXIgciBsaWJhaW8uYSBpb19xdWV1ZV9pbml0Lm9s
IGlvX3F1ZXVlX3JlbGVhc2Uub2wgaW9fcXVldWVfd2FpdC5vbCBpb19xdWV1ZV9ydW4ub2wg
aW9fZ2V0ZXZlbnRzLm9sIGlvX3N1Ym1pdC5vbCBpb19jYW5jZWwub2wgaW9fc2V0dXAub2wg
aW9fZGVzdHJveS5vbCByYXdfc3lzY2FsbC5vbCBjb21wYXQtMF8xLm9sCmFyOiBjcmVhdGlu
ZyBsaWJhaW8uYQpyYW5saWIgbGliYWlvLmEKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0
YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAt
YyAtbyBpb19xdWV1ZV9pbml0Lm9zIGlvX3F1ZXVlX2luaXQuYwpnY2MgLXNoYXJlZCAtbm9z
dGRsaWIgLW5vc3RhcnRmaWxlcyAtV2FsbCAtSS4gLWcgLWZvbWl0LWZyYW1lLXBvaW50ZXIg
LU8yIC1mUElDIC1jIC1vIGlvX3F1ZXVlX3JlbGVhc2Uub3MgaW9fcXVldWVfcmVsZWFzZS5j
CmdjYyAtc2hhcmVkIC1ub3N0ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtTzIgLWZQSUMgLWMgLW8gaW9fcXVldWVfd2FpdC5vcyBpb19x
dWV1ZV93YWl0LmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwg
LUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAtbyBpb19xdWV1ZV9y
dW4ub3MgaW9fcXVldWVfcnVuLmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0YXJ0Zmls
ZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtYyAtbyBp
b19nZXRldmVudHMub3MgaW9fZ2V0ZXZlbnRzLmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGliIC1u
b3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJ
QyAtYyAtbyBpb19zdWJtaXQub3MgaW9fc3VibWl0LmMKZ2NjIC1zaGFyZWQgLW5vc3RkbGli
IC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1PMiAt
ZlBJQyAtYyAtbyBpb19jYW5jZWwub3MgaW9fY2FuY2VsLmMKZ2NjIC1zaGFyZWQgLW5vc3Rk
bGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2ludGVyIC1P
MiAtZlBJQyAtYyAtbyBpb19zZXR1cC5vcyBpb19zZXR1cC5jCmdjYyAtc2hhcmVkIC1ub3N0
ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9taXQtZnJhbWUtcG9pbnRlciAt
TzIgLWZQSUMgLWMgLW8gaW9fZGVzdHJveS5vcyBpb19kZXN0cm95LmMKZ2NjIC1zaGFyZWQg
LW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1PMiAtZlBJQyAtYyAtbyByYXdfc3lzY2FsbC5vcyByYXdfc3lzY2FsbC5jCmdjYyAt
c2hhcmVkIC1ub3N0ZGxpYiAtbm9zdGFydGZpbGVzIC1XYWxsIC1JLiAtZyAtZm9taXQtZnJh
bWUtcG9pbnRlciAtTzIgLWZQSUMgLWMgLW8gY29tcGF0LTBfMS5vcyBjb21wYXQtMF8xLmMK
Z2NjIC1zaGFyZWQgLW5vc3RkbGliIC1ub3N0YXJ0ZmlsZXMgLVdhbGwgLUkuIC1nIC1mb21p
dC1mcmFtZS1wb2ludGVyIC1PMiAtZlBJQyAtV2wsLS12ZXJzaW9uLXNjcmlwdD1saWJhaW8u
bWFwIC1XbCwtc29uYW1lPWxpYmFpby5zby4xIC1vIGxpYmFpby5zby4xLjAuMSBpb19xdWV1
ZV9pbml0Lm9zIGlvX3F1ZXVlX3JlbGVhc2Uub3MgaW9fcXVldWVfd2FpdC5vcyBpb19xdWV1
ZV9ydW4ub3MgaW9fZ2V0ZXZlbnRzLm9zIGlvX3N1Ym1pdC5vcyBpb19jYW5jZWwub3MgaW9f
c2V0dXAub3MgaW9fZGVzdHJveS5vcyByYXdfc3lzY2FsbC5vcyBjb21wYXQtMF8xLm9zIApn
bWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJh
aW8vc3JjJwpnbWFrZVszXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scy9saWJhaW8nCmdtYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzJwpnbWFrZVsyXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMnCmdtYWtlIC1DIGJsa3RhcDIgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMicKZ21ha2VbNF06IEVu
dGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDInCmdtYWtl
IC1DIGluY2x1ZGUgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi9pbmNsdWRlJwovcm9vdC94ZW4tNC4yLjAvdG9v
bHMvYmxrdGFwMi9pbmNsdWRlLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0w
NzU1IC1wIC1wIC9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2luY2x1
ZGUKZ21ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
YmxrdGFwMi9pbmNsdWRlJwpnbWFrZVs0XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hl
bi00LjIuMC90b29scy9ibGt0YXAyJwpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMicKZ21ha2UgLUMgbHZtIGluc3RhbGwKZ21h
a2VbNV06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3Rh
cDIvbHZtJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAubHZtLXV0aWwuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLXVudXNlZCAtSS4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAgLWMgLW8gbHZt
LXV0aWwubyBsdm0tdXRpbC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ21ha2VbNV06IExlYXZp
bmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi9sdm0nCmdtYWtl
WzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2Jsa3RhcDIn
CmdtYWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9i
bGt0YXAyJwpnbWFrZSAtQyB2aGQgaW5zdGFsbApnbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0
b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvYmxrdGFwMi92aGQnCmdtYWtlWzZdOiBFbnRl
cmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9ibGt0YXAyL3ZoZCcKZ21h
a2UgLUMgbGliIGFsbApnbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvYmxrdGFwMi92aGQvbGliJwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlidmhkLm8uZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9T
T1VSQ0UgLWZQSUMgLWcgIC1jIC1vIGxpYnZoZC5vIGxpYnZoZC5jICAtSS91c3IvcGtnL2lu
Y2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LmxpYnZoZC1qb3VybmFsLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJy
b3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9TT1VSQ0UgLWZQSUMgLWcg
IC1jIC1vIGxpYnZoZC1qb3VybmFsLm8gbGlidmhkLWpvdXJuYWwuYyAgLUkvdXNyL3BrZy9p
bmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1X
ZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1G
IC52aGQtdXRpbC1jb2FsZXNjZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAt
V2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElD
IC1nICAtYyAtbyB2aGQtdXRpbC1jb2FsZXNjZS5vIHZoZC11dGlsLWNvYWxlc2NlLmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAudmhkLXV0aWwtY3JlYXRlLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmct
Y2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9TT1VS
Q0UgLWZQSUMgLWcgIC1jIC1vIHZoZC11dGlsLWNyZWF0ZS5vIHZoZC11dGlsLWNyZWF0ZS5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLWZpbGwuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NP
VVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtZmlsbC5vIHZoZC11dGlsLWZpbGwuYyAg
LUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1t
NjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1w
cm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09M
U19fIC1NTUQgLU1GIC52aGQtdXRpbC1tb2RpZnkuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NP
VVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtbW9kaWZ5Lm8gdmhkLXV0aWwtbW9kaWZ5
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRl
ciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJp
Y3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5f
VE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtcXVlcnkuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05V
X1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtcXVlcnkubyB2aGQtdXRpbC1xdWVy
eS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50
ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3Ry
aWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVO
X1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLXJlYWQuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05V
X1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtcmVhZC5vIHZoZC11dGlsLXJlYWQu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1yZXBhaXIuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05V
X1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtcmVwYWlyLm8gdmhkLXV0aWwtcmVw
YWlyLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtcmVzaXplLm8uZCAtZm5vLW9wdGltaXpl
LXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1E
X0dOVV9TT1VSQ0UgLWZQSUMgLWcgIC1jIC1vIHZoZC11dGlsLXJlc2l6ZS5vIHZoZC11dGls
LXJlc2l6ZS5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLXJldmVydC5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVk
ZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAtbyB2aGQtdXRpbC1yZXZlcnQubyB2aGQt
dXRpbC1yZXZlcnQuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1zZXQtZmllbGQuby5kIC1m
bm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNlZCAtSS4uLy4u
L2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhkLXV0aWwtc2V0LWZp
ZWxkLm8gdmhkLXV0aWwtc2V0LWZpZWxkLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1P
MSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5n
IC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudmhkLXV0aWwtc25h
cHNob3Quby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVu
dXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8gdmhk
LXV0aWwtc25hcHNob3QubyB2aGQtdXRpbC1zbmFwc2hvdC5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnZo
ZC11dGlsLXNjYW4uby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMg
LW8gdmhkLXV0aWwtc2Nhbi5vIHZoZC11dGlsLXNjYW4uYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQt
dXRpbC1jaGVjay5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAt
byB2aGQtdXRpbC1jaGVjay5vIHZoZC11dGlsLWNoZWNrLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAudmhk
LXV0aWwtdXVpZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAt
byB2aGQtdXRpbC11dWlkLm8gdmhkLXV0aWwtdXVpZC5jICAtSS91c3IvcGtnL2luY2x1ZGUK
Z2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1h
bGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLnJlbGF0
aXZlLXBhdGguby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25v
LXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWMgLW8g
cmVsYXRpdmUtcGF0aC5vIHJlbGF0aXZlLXBhdGguYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdj
YyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5hdG9taWNp
by5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2Vk
IC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtYyAtbyBhdG9taWNp
by5vIGF0b21pY2lvLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQphciByYyBsaWJ2aGQuYSBsaWJ2
aGQubyBsaWJ2aGQtam91cm5hbC5vIHZoZC11dGlsLWNvYWxlc2NlLm8gdmhkLXV0aWwtY3Jl
YXRlLm8gdmhkLXV0aWwtZmlsbC5vIHZoZC11dGlsLW1vZGlmeS5vIHZoZC11dGlsLXF1ZXJ5
Lm8gdmhkLXV0aWwtcmVhZC5vIHZoZC11dGlsLXJlcGFpci5vIHZoZC11dGlsLXJlc2l6ZS5v
IHZoZC11dGlsLXJldmVydC5vIHZoZC11dGlsLXNldC1maWVsZC5vIHZoZC11dGlsLXNuYXBz
aG90Lm8gdmhkLXV0aWwtc2Nhbi5vIHZoZC11dGlsLWNoZWNrLm8gdmhkLXV0aWwtdXVpZC5v
IHJlbGF0aXZlLXBhdGgubyBhdG9taWNpby5vIC4uLy4uL2x2bS9sdm0tdXRpbC5vCmdjYyAg
LURQSUMgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ2
aGQub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVu
dXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQSUMgLWMg
LW8gbGlidmhkLm9waWMgbGlidmhkLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1EUElD
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlidmhkLWpv
dXJuYWwub3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25v
LXVudXNlZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQSUMg
LWMgLW8gbGlidmhkLWpvdXJuYWwub3BpYyBsaWJ2aGQtam91cm5hbC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAt
ZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18g
LU1NRCAtTUYgLnZoZC11dGlsLWNvYWxlc2NlLm9waWMuZCAtZm5vLW9wdGltaXplLXNpYmxp
bmctY2FsbHMgIC1XZXJyb3IgLVduby11bnVzZWQgLUkuLi8uLi9pbmNsdWRlIC1EX0dOVV9T
T1VSQ0UgLWZQSUMgLWcgIC1mUElDIC1jIC1vIHZoZC11dGlsLWNvYWxlc2NlLm9waWMgdmhk
LXV0aWwtY29hbGVzY2UuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC52aGQtdXRpbC1jcmVhdGUu
b3BpYy5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLXVudXNl
ZCAtSS4uLy4uL2luY2x1ZGUgLURfR05VX1NPVVJDRSAtZlBJQyAtZyAgLWZQSUMgLWMgLW8g
dmhkLXV0aWwtY3JlYXRlLm9waWMgdmhkLXV0aWwtY3JlYXRlLmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1EUElDIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1m
bm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBl
cyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1E
IC1NRiAudmhkLXV0aWwtZmlsbC5vcGljLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVkZSAtRF9HTlVfU09VUkNFIC1m
UElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1maWxsLm9waWMgdmhkLXV0aWwtZmlsbC5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtRFBJQyAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9f
WEVOX1RPT0xTX18gLU1NRCAtTUYgLnZoZC11dGlsLW1vZGlmeS5vcGljLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tdW51c2VkIC1JLi4vLi4vaW5jbHVk
ZSAtRF9HTlVfU09VUkNFIC1mUElDIC1nICAtZlBJQyAtYyAtbyB2aGQtdXRpbC1tb2RpZnku
b3BpYyB2aGQtdXRpbC1tb2RpZnkuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLURQSUMg
LU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNp
ng -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-query.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-query.opic vhd-util-query.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-read.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-read.opic vhd-util-read.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-repair.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-repair.opic vhd-util-repair.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-resize.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-resize.opic vhd-util-resize.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-revert.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-revert.opic vhd-util-revert.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-set-field.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-set-field.opic vhd-util-set-field.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-snapshot.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-snapshot.opic vhd-util-snapshot.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-scan.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-scan.opic vhd-util-scan.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-check.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-check.opic vhd-util-check.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util-uuid.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o vhd-util-uuid.opic vhd-util-uuid.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .relative-path.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o relative-path.opic relative-path.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .atomicio.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o atomicio.opic atomicio.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .lvm-util.opic.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../../include -D_GNU_SOURCE -fPIC -g  -fPIC -c -o ../../lvm/lvm-util.opic ../../lvm/lvm-util.c  -I/usr/pkg/include
gcc -Wl,-soname,libvhd.so.1.0 -shared \
	   -o libvhd.so.1.0.0 libvhd.opic libvhd-journal.opic vhd-util-coalesce.opic vhd-util-create.opic vhd-util-fill.opic vhd-util-modify.opic vhd-util-query.opic vhd-util-read.opic vhd-util-repair.opic vhd-util-resize.opic vhd-util-revert.opic vhd-util-set-field.opic vhd-util-snapshot.opic vhd-util-scan.opic vhd-util-check.opic vhd-util-uuid.opic relative-path.opic atomicio.opic ../../lvm/lvm-util.opic 
ln -sf libvhd.so.1.0.0 libvhd.so.1.0
ln -sf libvhd.so.1.0 libvhd.so
gmake[7]: Leaving directory `/root/xen-4.2.0/tools/blktap2/vhd/lib'
gmake[6]: Leaving directory `/root/xen-4.2.0/tools/blktap2/vhd'
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-util.o.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../include -D_GNU_SOURCE -fPIC  -c -o vhd-util.o vhd-util.c  -I/usr/pkg/include
gcc    -o vhd-util vhd-util.o -Llib -lvhd
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .vhd-update.o.d -fno-optimize-sibling-calls  -Werror -Wno-unused -I../include -D_GNU_SOURCE -fPIC  -c -o vhd-update.o vhd-update.c  -I/usr/pkg/include
gcc    -o vhd-update vhd-update.o -Llib -lvhd
gmake subdirs-install
gmake[6]: Entering directory `/root/xen-4.2.0/tools/blktap2/vhd'
gmake[7]: Entering directory `/root/xen-4.2.0/tools/blktap2/vhd'
gmake -C lib install
gmake[8]: Entering directory `/root/xen-4.2.0/tools/blktap2/vhd/lib'
/root/xen-4.2.0/tools/blktap2/vhd/lib/../../../../tools/cross-install -d -m0755 -p -p /root/xen-4.2.0/dist/install/usr/xen42/lib
/root/xen-4.2.0/tools/blktap2/vhd/lib/../../../../tools/cross-install -m0755 -p libvhd.a /root/xen-4.2.0/dist/install/usr/xen42/lib
/root/xen-4.2.0/tools/blktap2/vhd/lib/../../../../tools/cross-install -m0755 -p libvhd.so.1.0.0 /root/xen-4.2.0/dist/install/usr/xen42/lib
ln -sf libvhd.so.1.0.0 /root/xen-4.2.0/dist/install/usr/xen42/lib/libvhd.so.1.0
ln -sf libvhd.so.1.0 /root/xen-4.2.0/dist/install/usr/xen42/lib/libvhd.so
gmake[8]: Leaving directory `/root/xen-4.2.0/tools/blktap2/vhd/lib'
gmake[7]: Leaving directory `/root/xen-4.2.0/tools/blktap2/vhd'
gmake[6]: Leaving directory `/root/xen-4.2.0/tools/blktap2/vhd'
/root/xen-4.2.0/tools/blktap2/vhd/../../../tools/cross-install -d -m0755 -p -p /root/xen-4.2.0/dist/install/usr/xen42/sbin
/root/xen-4.2.0/tools/blktap2/vhd/../../../tools/cross-install -m0755 -p vhd-util vhd-update /root/xen-4.2.0/dist/install/usr/xen42/sbin
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/blktap2/vhd'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/blktap2'
gmake[3]: Leaving directory `/root/xen-4.2.0/tools/blktap2'
gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
gmake[2]: Entering directory `/root/xen-4.2.0/tools'
gmake -C xenbackendd install
gmake[3]: Entering directory `/root/xen-4.2.0/tools/xenbackendd'
gcc -DXEN_SCRIPT_DIR="\"/usr/xen42/etc/xen/scripts\"" -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .xenbackendd.o.d -fno-optimize-sibling-calls  -Werror -I/root/xen-4.2.0/tools/xenbackendd/../../tools/xenstore -I/root/xen-4.2.0/tools/xenbackendd/../../tools/include  -c -o xenbackendd.o xenbackendd.c  -I/usr/pkg/include
gcc    xenbackendd.o -o xenbackendd /root/xen-4.2.0/tools/xenbackendd/../../tools/xenstore/libxenstore.so  -L/usr/pkg/lib
/root/xen-4.2.0/tools/xenbackendd/../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/sbin
/root/xen-4.2.0/tools/xenbackendd/../../tools/cross-install -m0755 -p xenbackendd /root/xen-4.2.0/dist/install/usr/xen42/sbin
gmake[3]: Leaving directory `/root/xen-4.2.0/tools/xenbackendd'
gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
gmake[2]: Entering directory `/root/xen-4.2.0/tools'
gmake -C libfsimage install
gmake[3]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C common install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/common'
Makefile:35: warning: overriding recipe for target `clean'
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/libfsimage/Rules.mk:25: warning: ignoring old recipe for target `clean'
Makefile:35: warning: overriding recipe for target `distclean'
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/libfsimage/Rules.mk:25: warning: ignoring old recipe for target `distclean'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsimage.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/common/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE -pthread  -fPIC -c -o fsimage.opic fsimage.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsimage_plugin.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/common/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE -pthread  -fPIC -c -o fsimage_plugin.opic fsimage_plugin.c  -I/usr/pkg/include
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsimage_grub.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/common/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE -pthread  -fPIC -c -o fsimage_grub.opic fsimage_grub.c  -I/usr/pkg/include
gcc  -pthread -Wl,-soname -Wl,libfsimage.so.1.0 -shared -o libfsimage.so.1.0.0 fsimage.opic fsimage_plugin.opic fsimage_grub.opic 
ln -sf libfsimage.so.1.0.0 libfsimage.so.1.0
ln -sf libfsimage.so.1.0 libfsimage.so
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/include
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/cross-install -m0755 -p libfsimage.so.1.0.0 /root/xen-4.2.0/dist/install/usr/xen42/lib
ln -sf libfsimage.so.1.0.0 /root/xen-4.2.0/dist/install/usr/xen42/lib/libfsimage.so.1.0
ln -sf libfsimage.so.1.0 /root/xen-4.2.0/dist/install/usr/xen42/lib/libfsimage.so
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/cross-install -m0644 -p fsimage.h /root/xen-4.2.0/dist/install/usr/xen42/include
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/cross-install -m0644 -p fsimage_plugin.h /root/xen-4.2.0/dist/install/usr/xen42/include
/root/xen-4.2.0/tools/libfsimage/common/../../../tools/cross-install -m0644 -p fsimage_grub.h /root/xen-4.2.0/dist/install/usr/xen42/include
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/common'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C ufs install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/ufs'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_ufs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/ufs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_ufs.opic fsys_ufs.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_ufs.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/ufs/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/ufs
/root/xen-4.2.0/tools/libfsimage/ufs/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/ufs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/ufs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C reiserfs install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/reiserfs'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_reiserfs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/reiserfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_reiserfs.opic fsys_reiserfs.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_reiserfs.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/reiserfs/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/reiserfs
/root/xen-4.2.0/tools/libfsimage/reiserfs/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/reiserfs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/reiserfs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C iso9660 install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/iso9660'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_iso9660.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/iso9660/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_iso9660.opic fsys_iso9660.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_iso9660.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/iso9660/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/iso9660
/root/xen-4.2.0/tools/libfsimage/iso9660/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/iso9660
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/iso9660'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C fat install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/fat'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_fat.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/fat/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_fat.opic fsys_fat.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_fat.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/fat/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/fat
/root/xen-4.2.0/tools/libfsimage/fat/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/fat
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/fat'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C zfs install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/zfs'
gcc  -DPIC -DFSYS_ZFS -DFSIMAGE -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/zfs -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .zfs_lzjb.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o zfs_lzjb.opic zfs_lzjb.c  -I/usr/pkg/include
gcc  -DPIC -DFSYS_ZFS -DFSIMAGE -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/zfs -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .zfs_sha256.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o zfs_sha256.opic zfs_sha256.c  -I/usr/pkg/include
gcc  -DPIC -DFSYS_ZFS -DFSIMAGE -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/zfs -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .zfs_fletcher.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o zfs_fletcher.opic zfs_fletcher.c  -I/usr/pkg/include
gcc  -DPIC -DFSYS_ZFS -DFSIMAGE -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/zfs -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsi_zfs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsi_zfs.opic fsi_zfs.c  -I/usr/pkg/include
gcc  -DPIC -DFSYS_ZFS -DFSIMAGE -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/zfs -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_zfs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_zfs.opic fsys_zfs.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so zfs_lzjb.opic zfs_sha256.opic zfs_fletcher.opic fsi_zfs.opic fsys_zfs.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/zfs
/root/xen-4.2.0/tools/libfsimage/zfs/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/zfs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/zfs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C xfs install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/xfs'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_xfs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/xfs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_xfs.opic fsys_xfs.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_xfs.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/xfs/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/xfs
/root/xen-4.2.0/tools/libfsimage/xfs/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/xfs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/xfs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[4]: Entering directory `/root/xen-4.2.0/tools/libfsimage'
gmake -C ext2fs install
gmake[5]: Entering directory `/root/xen-4.2.0/tools/libfsimage/ext2fs'
gcc  -DPIC -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -D__XEN_TOOLS__ -MMD -MF .fsys_ext2fs.opic.d -fno-optimize-sibling-calls  -Wno-unknown-pragmas -I/root/xen-4.2.0/tools/libfsimage/ext2fs/../../../tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"/usr/xen42/lib/fs\" -Werror -D_GNU_SOURCE  -fPIC -c -o fsys_ext2fs.opic fsys_ext2fs.c  -I/usr/pkg/include
gcc    -L../common/ -shared -o fsimage.so fsys_ext2fs.opic -lfsimage   -L/usr/pkg/lib
/root/xen-4.2.0/tools/libfsimage/ext2fs/../../../tools/cross-install -d -m0755 -p /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/ext2fs
/root/xen-4.2.0/tools/libfsimage/ext2fs/../../../tools/cross-install -m0755 -p fsimage.so /root/xen-4.2.0/dist/install/usr/xen42/lib/fs/ext2fs
gmake[5]: Leaving directory `/root/xen-4.2.0/tools/libfsimage/ext2fs'
gmake[4]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libfsimage'
gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
gmake[2]: Entering directory `/root/xen-4.2.0/tools'
set -ex; \
if test -d /root/xen-4.2.0/tools/../tools/qemu-xen-traditional; then \
	mkdir -p qemu-xen-traditional-dir; \
else \
	export GIT=git; \
	/root/xen-4.2.0/tools/../scripts/git-checkout.sh /root/xen-4.2.0/tools/../tools/qemu-xen-traditional xen-4.2.0 qemu-xen-traditional-dir; \
fi
+ test -d '/root/xen-4.2.0/tools/../tools/qemu-xen-traditional'
+ mkdir -p qemu-xen-traditional-dir
set -e; \
	    export PREFIX="/usr/xen42"; export XEN_SCRIPT_DIR="/usr/xen42/etc/xen/scripts"; export XEN_ROOT="/root/xen-4.2.0/tools/.."; \
	cd qemu-xen-traditional-dir; \
	/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-setup \
	--extra-cflags="" \
	; \
	gmake install
sdl-config: not found
sdl-config: not found
Install prefix    /usr/xen42
BIOS directory    /usr/xen42/share/qemu
binary directory  /usr/xen42/bin
Manual directory  /usr/xen42/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path       /root/xen-4.2.0/tools/qemu-xen-traditional
C compiler        gcc
Host C compiler   gcc
ARCH_CFLAGS       -m64
make              gmake
install           install
host CPU          x86_64
host big endian   no
target list       i386-softmmu x86_64-softmmu arm-softmmu cris-softmmu m68k-softmmu mips-softmmu mipsel-softmmu mips64-softmmu mips64el-softmmu ppc-softmmu ppcemb-softmmu ppc64-softmmu sh4-softmmu sh4eb-softmmu sparc-softmmu sparc64-bsd-user 
gprof enabled     no
sparse enabled    no
profiler          no
static build      no
-Werror enabled   no
SDL support       no
OpenGL support    
curses support    no
mingw32 support   no
Audio drivers     oss
Extra audio cards ac97 es1370 sb16
Mixer emulation   no
VNC TLS support   no
kqemu support     no
brlapi support    no
Documentation     no
NPTL support      no
vde support       no
AIO support       yes
Install blobs     yes
KVM support       no - (linux/kvm.h: No such file or directory)
fdt support       no
The error log from compiling the libSDL test is: 
/tmp/qemu-conf--10839-.c:1:17: fatal error: SDL.h: No such file or directory
compilation terminated.
qemu successfuly configured for Xen qemu-dm build
-msse2: not found
gmake[3]: Entering directory `/root/xen-4.2.0/tools/qemu-xen-traditional-dir'
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:61: === pciutils-dev package not found - missing /usr/include/pci
/root/xen-4.2.0/tools/../tools/qemu-xen-traditional/xen-hooks.mak:62: === PCI passthrough capability has been disabled
  CC    qemu-img.o
  CC    qemu-tool.o
  CC    osdep.o
  CC    cutils.o
  CC    qemu-malloc.o
  CC    block-cow.o
  CC    block-qcow.o
  CC    aes.o
  CC    block-vmdk.o
  CC    block-cloop.o
  CC    block-dmg.o
  CC    block-bochs.o
  CC    block-vpc.o
  CC    block-vvfat.o
  CC    block-qcow2.o
  CC    block-parallels.o
  CC    block-nbd.o
  CC    nbd.o
  CC    block.o
  CC    aio.o
  CC    posix-aio-compat.o
  CC    block-raw-posix.o
  LINK  qemu-img-xen
/usr/lib/libc.so: warning: multiple common of `environ'
/usr/lib/crt0.o: warning: previous common is here
  CC    readline.o
  CC    console.o
  CC    irq.o
  CC    i2c.o
  CC    smbus.o
  CC    smbus_eeprom.o
  CC    max7310.o
  CC    max111x.o
  CC    wm8750.o
In file included from /root/xen-4.2.0/tools/qemu-xen-traditional/hw/wm8750.c:12:0:
/root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.h:153:10: warning: redundant redeclaration of 'popcount'
/usr/include/strings.h:57:14: note: previous declaration of 'popcount' was here
  CC    ssd0303.o
  CC    ssd0323.o
  CC    ads7846.o
  CC    stellaris_input.o
  CC    twl92230.o
  CC    tmp105.o
  CC    lm832x.o
  CC    scsi-disk.o
  CC    cdrom.o
  CC    scsi-generic.o
  CC    usb.o
  CC    usb-hub.o
  CC    usb-bsd.o
  CC    usb-hid.o
  CC    usb-msd.o
  CC    usb-wacom.o
  CC    usb-serial.o
  CC    usb-net.o
  CC    sd.o
  CC    ssi-sd.o
  CC    bt.o
  CC    bt-host.o
  CC    bt-vhci.o
  CC    bt-l2cap.o
  CC    bt-sdp.o
  CC    bt-hci.o
  CC    bt-hid.o
  CC    usb-bt.o
  CC    buffered_file.o
  CC    migration.o
  CC    migration-tcp.o
  CC    net.o
In file included from /root/xen-4.2.0/tools/qemu-xen-traditional/net.c:30:0:
/root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.h:153:10: warning: redundant redeclaration of 'popcount'
/usr/include/strings.h:57:14: note: previous declaration of 'popcount' was here
  CC    qemu-sockets.o
  CC    qemu-char.o
  CC    net-checksum.o
  CC    savevm.o
In file included from /root/xen-4.2.0/tools/qemu-xen-traditional/savevm.c:32:0:
/root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.h:153:10: warning: redundant redeclaration of 'popcount'
/usr/include/strings.h:57:14: note: previous declaration of 'popcount' was here
  CC    cache-utils.o
  CC    migration-exec.o
  CC    audio/audio.o
In file included from /root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.c:25:0:
/root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.h:153:10: warning: redundant redeclaration of 'popcount'
/usr/include/strings.h:57:14: note: previous declaration of 'popcount' was here
  CC    audio/noaudio.o
In file included from /root/xen-4.2.0/tools/qemu-xen-traditional/audio/noaudio.c:25:0:
/root/xen-4.2.0/tools/qemu-xen-traditional/audio/audio.h:153:10: warning: redundant redeclaration of 'popcount'
/usr/include/strings.h:5
NzoxNDogbm90ZTogcHJldmlvdXMgZGVjbGFyYXRpb24gb2YgJ3BvcGNvdW50JyB3YXMgaGVy
ZQogIENDICAgIGF1ZGlvL3dhdmF1ZGlvLm8KSW4gZmlsZSBpbmNsdWRlZCBmcm9tIC9yb290
L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC9hdWRpby93YXZhdWRpby5j
OjI2OjA6Ci9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC9hdWRp
by9hdWRpby5oOjE1MzoxMDogd2FybmluZzogcmVkdW5kYW50IHJlZGVjbGFyYXRpb24gb2Yg
J3BvcGNvdW50JwovdXNyL2luY2x1ZGUvc3RyaW5ncy5oOjU3OjE0OiBub3RlOiBwcmV2aW91
cyBkZWNsYXJhdGlvbiBvZiAncG9wY291bnQnIHdhcyBoZXJlCiAgQ0MgICAgYXVkaW8vbWl4
ZW5nLm8KSW4gZmlsZSBpbmNsdWRlZCBmcm9tIC9yb290L3hlbi00LjIuMC90b29scy9xZW11
LXhlbi10cmFkaXRpb25hbC9hdWRpby9taXhlbmcuYzoyNjowOgovcm9vdC94ZW4tNC4yLjAv
dG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvYXVkaW8vYXVkaW8uaDoxNTM6MTA6IHdhcm5p
bmc6IHJlZHVuZGFudCByZWRlY2xhcmF0aW9uIG9mICdwb3Bjb3VudCcKL3Vzci9pbmNsdWRl
L3N0cmluZ3MuaDo1NzoxNDogbm90ZTogcHJldmlvdXMgZGVjbGFyYXRpb24gb2YgJ3BvcGNv
dW50JyB3YXMgaGVyZQogIENDICAgIGF1ZGlvL29zc2F1ZGlvLm8KSW4gZmlsZSBpbmNsdWRl
ZCBmcm9tIC9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC9hdWRp
by9vc3NhdWRpby5jOjM0OjA6Ci9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFk
aXRpb25hbC9hdWRpby9hdWRpby5oOjE1MzoxMDogd2FybmluZzogcmVkdW5kYW50IHJlZGVj
bGFyYXRpb24gb2YgJ3BvcGNvdW50JwovdXNyL2luY2x1ZGUvc3RyaW5ncy5oOjU3OjE0OiBu
b3RlOiBwcmV2aW91cyBkZWNsYXJhdGlvbiBvZiAncG9wY291bnQnIHdhcyBoZXJlCiAgQ0Mg
ICAgYXVkaW8vd2F2Y2FwdHVyZS5vCkluIGZpbGUgaW5jbHVkZWQgZnJvbSAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvYXVkaW8vd2F2Y2FwdHVyZS5jOjM6
MDoKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsL2F1ZGlvL2F1
ZGlvLmg6MTUzOjEwOiB3YXJuaW5nOiByZWR1bmRhbnQgcmVkZWNsYXJhdGlvbiBvZiAncG9w
Y291bnQnCi91c3IvaW5jbHVkZS9zdHJpbmdzLmg6NTc6MTQ6IG5vdGU6IHByZXZpb3VzIGRl
Y2xhcmF0aW9uIG9mICdwb3Bjb3VudCcgd2FzIGhlcmUKICBDQyAgICB2bmMubwogIENDICAg
IGQzZGVzLm8KICBBUiAgICBsaWJxZW11X2NvbW1vbi5hCi1tc3NlMjogbm90IGZvdW5kCmdt
YWtlWzRdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9xZW11
LXhlbi10cmFkaXRpb25hbC1kaXIvaTM4Ni1kbScKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4u
L3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsL3hlbi1ob29rcy5tYWs6NjE6ID09PSBwY2l1
dGlscy1kZXYgcGFja2FnZSBub3QgZm91bmQgLSBtaXNzaW5nIC91c3IvaW5jbHVkZS9wY2kK
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsL3hl
bi1ob29rcy5tYWs6NjI6ID09PSBQQ0kgcGFzc3Rocm91Z2ggY2FwYWJpbGl0eSBoYXMgYmVl
biBkaXNhYmxlZAovcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMvcWVtdS14ZW4tdHJh
ZGl0aW9uYWwveGVuLWhvb2tzLm1hazo2MTogPT09IHBjaXV0aWxzLWRldiBwYWNrYWdlIG5v
dCBmb3VuZCAtIG1pc3NpbmcgL3Vzci9pbmNsdWRlL3BjaQovcm9vdC94ZW4tNC4yLjAvdG9v
bHMvLi4vdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwveGVuLWhvb2tzLm1hazo2MjogPT09
IFBDSSBwYXNzdGhyb3VnaCBjYXBhYmlsaXR5IGhhcyBiZWVuIGRpc2FibGVkCiAgQ0MgICAg
aTM4Ni1kbS92bC5vCkluIGZpbGUgaW5jbHVkZWQgZnJvbSAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvdmwuYzo0MTowOgovcm9vdC94ZW4tNC4yLjAvdG9v
bHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvYXVkaW8vYXVkaW8uaDoxNTM6MTA6IHdhcm5pbmc6
IHJlZHVuZGFudCByZWRlY2xhcmF0aW9uIG9mICdwb3Bjb3VudCcKL3Vzci9pbmNsdWRlL3N0
cmluZ3MuaDo1NzoxNDogbm90ZTogcHJldmlvdXMgZGVjbGFyYXRpb24gb2YgJ3BvcGNvdW50
JyB3YXMgaGVyZQogIENDICAgIGkzODYtZG0vb3NkZXAubwogIENDICAgIGkzODYtZG0vbW9u
aXRvci5vCkluIGZpbGUgaW5jbHVkZWQgZnJvbSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVt
dS14ZW4tdHJhZGl0aW9uYWwvbW9uaXRvci5jOjM1OjA6Ci9yb290L3hlbi00LjIuMC90b29s
cy9xZW11LXhlbi10cmFkaXRpb25hbC9hdWRpby9hdWRpby5oOjE1MzoxMDogd2FybmluZzog
cmVkdW5kYW50IHJlZGVjbGFyYXRpb24gb2YgJ3BvcGNvdW50JwovdXNyL2luY2x1ZGUvc3Ry
aW5ncy5oOjU3OjE0OiBub3RlOiBwcmV2aW91cyBkZWNsYXJhdGlvbiBvZiAncG9wY291bnQn
IHdhcyBoZXJlCiAgQ0MgICAgaTM4Ni1kbS9wY2kubwogIENDICAgIGkzODYtZG0vbG9hZGVy
Lm8KICBDQyAgICBpMzg2LWRtL2lzYV9tbWlvLm8KICBDQyAgICBpMzg2LWRtL21hY2hpbmUu
bwogIENDICAgIGkzODYtZG0vZG1hLWhlbHBlcnMubwogIENDICAgIGkzODYtZG0vdmlydGlv
Lm8KICBDQyAgICBpMzg2LWRtL3ZpcnRpby1ibGsubwogIENDICAgIGkzODYtZG0vdmlydGlv
LW5ldC5vCiAgQ0MgICAgaTM4Ni1kbS92aXJ0aW8tY29uc29sZS5vCiAgQ0MgICAgaTM4Ni1k
bS9md19jZmcubwogIENDICAgIGkzODYtZG0vcG9zaXgtYWlvLWNvbXBhdC5vCiAgQ0MgICAg
aTM4Ni1kbS9ibG9jay1yYXctcG9zaXgubwogIENDICAgIGkzODYtZG0vbHNpNTNjODk1YS5v
CiAgQ0MgICAgaTM4Ni1kbS9lc3AubwogIENDICAgIGkzODYtZG0vdXNiLW9oY2kubwogIEND
ICAgIGkzODYtZG0vZWVwcm9tOTN4eC5vCiAgQ0MgICAgaTM4Ni1kbS9lZXBybzEwMC5vCi9y
b290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC9ody9lZXBybzEwMC5j
OiBJbiBmdW5jdGlvbiAnZWVwcm8xMDBfcmVhZDQnOgovcm9vdC94ZW4tNC4yLjAvdG9vbHMv
cWVtdS14ZW4tdHJhZGl0aW9uYWwvaHcvZWVwcm8xMDAuYzoxMjA3OjE0OiB3YXJuaW5nOiAn
dmFsJyBtYXkgYmUgdXNlZCB1bmluaXRpYWxpemVkIGluIHRoaXMgZnVuY3Rpb24KL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsL2h3L2VlcHJvMTAwLmM6IElu
IGZ1bmN0aW9uICdlZXBybzEwMF9yZWFkMic6Ci9yb290L3hlbi00LjIuMC90b29scy9xZW11
LXhlbi10cmFkaXRpb25hbC9ody9lZXBybzEwMC5jOjExODQ6MTQ6IHdhcm5pbmc6ICd2YWwn
IG1heSBiZSB1c2VkIHVuaW5pdGlhbGl6ZWQgaW4gdGhpcyBmdW5jdGlvbgovcm9vdC94ZW4t
NC4yLjAvdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvaHcvZWVwcm8xMDAuYzogSW4gZnVu
Y3Rpb24gJ2VlcHJvMTAwX3JlYWQxJzoKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVu
LXRyYWRpdGlvbmFsL2h3L2VlcHJvMTAwLmM6MTEzOToxMzogd2FybmluZzogJ3ZhbCcgbWF5
IGJlIHVzZWQgdW5pbml0aWFsaXplZCBpbiB0aGlzIGZ1bmN0aW9uCiAgQ0MgICAgaTM4Ni1k
bS9uZTIwMDAubwogIENDICAgIGkzODYtZG0vcGNuZXQubwogIENDICAgIGkzODYtZG0vcnRs
ODEzOS5vCiAgQ0MgICAgaTM4Ni1kbS9lMTAwMC5vCiAgQ0MgICAgaTM4Ni1kbS9tc21vdXNl
Lm8KICBDQyAgICBpMzg2LWRtL3NiMTYubwpJbiBmaWxlIGluY2x1ZGVkIGZyb20gL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsL2h3L3NiMTYuYzoyNjowOgov
cm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvYXVkaW8vYXVkaW8u
aDoxNTM6MTA6IHdhcm5pbmc6IHJlZHVuZGFudCByZWRlY2xhcmF0aW9uIG9mICdwb3Bjb3Vu
dCcKL3Vzci9pbmNsdWRlL3N0cmluZ3MuaDo1NzoxNDogbm90ZTogcHJldmlvdXMgZGVjbGFy
YXRpb24gb2YgJ3BvcGNvdW50JyB3YXMgaGVyZQogIENDICAgIGkzODYtZG0vZXMxMzcwLm8K
SW4gZmlsZSBpbmNsdWRlZCBmcm9tIC9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10
cmFkaXRpb25hbC9ody9lczEzNzAuYzozMTowOgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVt
dS14ZW4tdHJhZGl0aW9uYWwvYXVkaW8vYXVkaW8uaDoxNTM6MTA6IHdhcm5pbmc6IHJlZHVu
ZGFudCByZWRlY2xhcmF0aW9uIG9mICdwb3Bjb3VudCcKL3Vzci9pbmNsdWRlL3N0cmluZ3Mu
aDo1NzoxNDogbm90ZTogcHJldmlvdXMgZGVjbGFyYXRpb24gb2YgJ3BvcGNvdW50JyB3YXMg
aGVyZQogIENDICAgIGkzODYtZG0vYWM5Ny5vCkluIGZpbGUgaW5jbHVkZWQgZnJvbSAvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwvaHcvYWM5Ny5jOjE5OjA6
Ci9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC9hdWRpby9hdWRp
by5oOjE1MzoxMDogd2FybmluZzogcmVkdW5kYW50IHJlZGVjbGFyYXRpb24gb2YgJ3BvcGNv
dW50JwovdXNyL2luY2x1ZGUvc3RyaW5ncy5oOjU3OjE0OiBub3RlOiBwcmV2aW91cyBkZWNs
YXJhdGlvbiBvZiAncG9wY291bnQnIHdhcyBoZXJlCiAgQ0MgICAgaTM4Ni1kbS9wY3Nway5v
CkluIGZpbGUgaW5jbHVkZWQgZnJvbSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4t
dHJhZGl0aW9uYWwvaHcvcGNzcGsuYzoyODowOgovcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVt
dS14ZW4tdHJhZGl0aW9uYWwvYXVkaW8vYXVkaW8uaDoxNTM6MTA6IHdhcm5pbmc6IHJlZHVu
ZGFudCByZWRlY2xhcmF0aW9uIG9mICdwb3Bjb3VudCcKL3Vzci9pbmNsdWRlL3N0cmluZ3Mu
aDo1NzoxNDogbm90ZTogcHJldmlvdXMgZGVjbGFyYXRpb24gb2YgJ3BvcGNvdW50JyB3YXMg
aGVyZQogIENDICAgIGkzODYtZG0vaWRlLm8KICBDQyAgICBpMzg2LWRtL3Bja2JkLm8KICBD
QyAgICBpMzg2LWRtL3BzMi5vCiAgQ0MgICAgaTM4Ni1kbS92Z2EubwogIENDICAgIGkzODYt
ZG0vZG1hLm8KICBDQyAgICBpMzg2LWRtL2ZkYy5vCiAgQ0MgICAgaTM4Ni1kbS9tYzE0Njgx
OHJ0Yy5vCiAgQ0MgICAgaTM4Ni1kbS9zZXJpYWwubwogIENDICAgIGkzODYtZG0vaTgyNTku
bwogIENDICAgIGkzODYtZG0vaTgyNTQubwogIENDICAgIGkzODYtZG0vcGMubwpJbiBmaWxl
IGluY2x1ZGVkIGZyb20gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlv
bmFsL2h3L3BjLmM6MzA6MDoKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRp
dGlvbmFsL2F1ZGlvL2F1ZGlvLmg6MTUzOjEwOiB3YXJuaW5nOiByZWR1bmRhbnQgcmVkZWNs
YXJhdGlvbiBvZiAncG9wY291bnQnCi91c3IvaW5jbHVkZS9zdHJpbmdzLmg6NTc6MTQ6IG5v
dGU6IHByZXZpb3VzIGRlY2xhcmF0aW9uIG9mICdwb3Bjb3VudCcgd2FzIGhlcmUKICBDQyAg
ICBpMzg2LWRtL2NpcnJ1c192Z2EubwogIENDICAgIGkzODYtZG0vcGFyYWxsZWwubwogIEND
ICAgIGkzODYtZG0vcGlpeF9wY2kubwogIENDICAgIGkzODYtZG0vdXNiLXVoY2kubwogIEND
ICAgIGkzODYtZG0vaHBldC5vCiAgQ0MgICAgaTM4Ni1kbS9kZXZpY2UtaG90cGx1Zy5vCiAg
Q0MgICAgaTM4Ni1kbS9wY2ktaG90cGx1Zy5vCiAgQ0MgICAgaTM4Ni1kbS9waWl4NGFjcGku
bwogIENDICAgIGkzODYtZG0veGVuc3RvcmUubwogIENDICAgIGkzODYtZG0veGVuX3BsYXRm
b3JtLm8KICBDQyAgICBpMzg2LWRtL3hlbl9tYWNoaW5lX2Z2Lm8KICBDQyAgICBpMzg2LWRt
L3hlbl9tYWNoaW5lX3B2Lm8KICBDQyAgICBpMzg2LWRtL3hlbl9iYWNrZW5kLm8KICBDQyAg
ICBpMzg2LWRtL3hlbmZiLm8KICBDQyAgICBpMzg2LWRtL3hlbl9jb25zb2xlLm8KICBDQyAg
ICBpMzg2LWRtL3hlbl9kaXNrLm8KICBDQyAgICBpMzg2LWRtL2V4ZWMtZG0ubwogIENDICAg
IGkzODYtZG0vcGNpX2VtdWxhdGlvbi5vCiAgQ0MgICAgaTM4Ni1kbS9oZWxwZXIyLm8KICBD
QyAgICBpMzg2LWRtL2JhdHRlcnlfbWdtdC5vCiAgQ0MgICAgaTM4Ni1kbS9rcWVtdS5vCiAg
Q0MgICAgaTM4Ni1kbS9pMzg2LWRpcy5vCiAgQVIgICAgaTM4Ni1kbS9saWJxZW11LmEKICBM
SU5LICBpMzg2LWRtL3FlbXUtZG0KL3Vzci9saWIvbGliYy5zbzogd2FybmluZzogbXVsdGlw
bGUgY29tbW9uIG9mIGBlbnZpcm9uJwovdXNyL2xpYi9jcnQwLm86IHdhcm5pbmc6IHByZXZp
b3VzIGNvbW1vbiBpcyBoZXJlCmdtYWtlWzRdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsLWRpci9pMzg2LWRtJwpta2Rp
ciAtcCAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmluIgovcm9v
dC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbSA3NTUgLXMgcWVt
dS1pbWcteGVuICAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvYmlu
Igpta2RpciAtcCAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hh
cmUveGVuL3FlbXUiCnNldCAtZTsgZm9yIHggaW4gYmlvcy5iaW4gdmdhYmlvcy5iaW4gdmdh
Ymlvcy1jaXJydXMuYmluIHBwY19yb20uYmluIHZpZGVvLnggb3BlbmJpb3Mtc3BhcmMzMiBv
cGVuYmlvcy1zcGFyYzY0IG9wZW5iaW9zLXBwYyBweGUtbmUya19wY2kuYmluIHB4ZS1ydGw4
MTM5LmJpbiBweGUtcGNuZXQuYmluIHB4ZS1lMTAwMC5iaW4gYmFtYm9vLmR0YjsgZG8gXAoJ
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0gNjQ0IC9y
b290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC9wYy1iaW9zLyR4ICIv
cm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9zaGFyZS94ZW4vcWVtdSI7
IFwKZG9uZQpta2RpciAtcCAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVu
NDIvc2hhcmUveGVuL3FlbXUva2V5bWFwcyIKc2V0IC1lOyBmb3IgeCBpbiBkYSAgICAgZW4t
Z2IgIGV0ICBmciAgICAgZnItY2ggIGlzICBsdCAgbW9kaWZpZXJzICBubyAgcHQtYnIgIHN2
IGFyICAgICAgZGUgICAgIGVuLXVzICBmaSAgZnItYmUgIGhyICAgICBpdCAgbHYgIG5sICAg
ICAgICAgcGwgIHJ1ICAgICB0aCBjb21tb24gIGRlLWNoICBlcyAgICAgZm8gIGZyLWNhICBo
dSAgICAgamEgIG1rICBubC1iZSAgICAgIHB0ICBzbCAgICAgdHI7IGRvIFwKCS9yb290L3hl
bi00LjIuMC90b29scy8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tIDY0NCAvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwva2V5bWFwcy8keCAiL3Jvb3QveGVu
LTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUveGVuL3FlbXUva2V5bWFwcyI7
IFwKZG9uZQpmb3IgZCBpbiBpMzg2LWRtOyBkbyBcCmdtYWtlIC1DICRkIGluc3RhbGwgfHwg
ZXhpdCAxIDsgXAogICAgICAgIGRvbmUKLW1zc2UyOiBub3QgZm91bmQKZ21ha2VbNF06IEVu
dGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLXRyYWRp
dGlvbmFsLWRpci9pMzg2LWRtJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMvcWVt
dS14ZW4tdHJhZGl0aW9uYWwveGVuLWhvb2tzLm1hazo2MTogPT09IHBjaXV0aWxzLWRldiBw
YWNrYWdlIG5vdCBmb3VuZCAtIG1pc3NpbmcgL3Vzci9pbmNsdWRlL3BjaQovcm9vdC94ZW4t
NC4yLjAvdG9vbHMvLi4vdG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwveGVuLWhvb2tzLm1h
azo2MjogPT09IFBDSSBwYXNzdGhyb3VnaCBjYXBhYmlsaXR5IGhhcyBiZWVuIGRpc2FibGVk
Ci9yb290L3hlbi00LjIuMC90b29scy8uLi90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC94
ZW4taG9va3MubWFrOjYxOiA9PT0gcGNpdXRpbHMtZGV2IHBhY2thZ2Ugbm90IGZvdW5kIC0g
bWlzc2luZyAvdXNyL2luY2x1ZGUvcGNpCi9yb290L3hlbi00LjIuMC90b29scy8uLi90b29s
cy9xZW11LXhlbi10cmFkaXRpb25hbC94ZW4taG9va3MubWFrOjYyOiA9PT0gUENJIHBhc3N0
aHJvdWdoIGNhcGFiaWxpdHkgaGFzIGJlZW4gZGlzYWJsZWQKL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wICIvcm9vdC94ZW4tNC4y
LjAvZGlzdC9pbnN0YWxsLy91c3IveGVuNDIvbGliZXhlYyIKL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wICIvcm9vdC94ZW4tNC4y
LjAvZGlzdC9pbnN0YWxsLy91c3IveGVuNDIvZXRjL3hlbi9zY3JpcHRzIgovcm9vdC94ZW4t
NC4yLjAvdG9vbHMvLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsL2kzODYtZG0vcWVt
dS1pZnVwLU5ldEJTRCAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC8vdXNyL3hlbjQy
L2V0Yy94ZW4vc2NyaXB0cy9xZW11LWlmdXAiCi9yb290L3hlbi00LjIuMC90b29scy8uLi90
b29scy9jcm9zcy1pbnN0YWxsIC1tIDc1NSAtcyBxZW11LWRtICIvcm9vdC94ZW4tNC4yLjAv
ZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWJleGVjIgpnbWFrZVs0XTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC1kaXIv
aTM4Ni1kbScKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvcWVtdS14ZW4tdHJhZGl0aW9uYWwtZGlyJwpnbWFrZVsyXTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVj
dG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwppZiB0ZXN0IC1kIC9yb290L3hlbi00LjIu
MC90b29scy8uLi90b29scy9xZW11LXhlbiA7IHRoZW4gXAoJbWtkaXIgLXAgcWVtdS14ZW4t
ZGlyOyBcCmVsc2UgXAoJZXhwb3J0IEdJVD1naXQ7IFwKCS9yb290L3hlbi00LjIuMC90b29s
cy8uLi9zY3JpcHRzL2dpdC1jaGVja291dC5zaCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4v
dG9vbHMvcWVtdS14ZW4gcWVtdS14ZW4tNC4yLjAgcWVtdS14ZW4tZGlyIDsgXApmaQppZiB0
ZXN0IC1kIC9yb290L3hlbi00LjIuMC90b29scy8uLi90b29scy9xZW11LXhlbiA7IHRoZW4g
XAoJc291cmNlPS9yb290L3hlbi00LjIuMC90b29scy8uLi90b29scy9xZW11LXhlbjsgXApl
bHNlIFwKCXNvdXJjZT0uOyBcCmZpOyBcCmNkIHFlbXUteGVuLWRpcjsgXAokc291cmNlL2Nv
bmZpZ3VyZSAtLWVuYWJsZS14ZW4gLS10YXJnZXQtbGlzdD1pMzg2LXNvZnRtbXUgXAoJLS1z
b3VyY2UtcGF0aD0kc291cmNlIFwKCS0tZXh0cmEtY2ZsYWdzPSItSS9yb290L3hlbi00LjIu
MC90b29scy8uLi90b29scy9pbmNsdWRlIFwKCS1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4u
L3Rvb2xzL2xpYnhjIFwKCS1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3hlbnN0
b3JlIFwKCS1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3hlbnN0b3JlL2NvbXBh
dCBcCgkiIFwKCS0tZXh0cmEtbGRmbGFncz0iLUwvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4v
dG9vbHMvbGlieGMgXAoJLUwvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMveGVuc3Rv
cmUiIFwKCS0tYmluZGlyPS91c3IveGVuNDIvbGliZXhlYyBcCgktLWRhdGFkaXI9L3Vzci94
ZW40Mi9zaGFyZS9xZW11LXhlbiBcCgktLWRpc2FibGUta3ZtIFwKCS0tcHl0aG9uPXB5dGhv
bjIuNyBcCgk7IFwKZ21ha2UgYWxsCkluc3RhbGwgcHJlZml4ICAgIC91c3IvbG9jYWwKQklP
UyBkaXJlY3RvcnkgICAgL3Vzci94ZW40Mi9zaGFyZS9xZW11LXhlbgpiaW5hcnkgZGlyZWN0
b3J5ICAvdXNyL3hlbjQyL2xpYmV4ZWMKbGlicmFyeSBkaXJlY3RvcnkgL3Vzci9sb2NhbC9s
aWIKaW5jbHVkZSBkaXJlY3RvcnkgL3Vzci9sb2NhbC9pbmNsdWRlCmNvbmZpZyBkaXJlY3Rv
cnkgIC91c3IvbG9jYWwvZXRjCk1hbnVhbCBkaXJlY3RvcnkgIC91c3IvbG9jYWwvc2hhcmUv
bWFuCkVMRiBpbnRlcnAgcHJlZml4IC91c3IvZ25lbXVsL3FlbXUtJU0KU291cmNlIHBhdGgg
ICAgICAgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuCkMgY29tcGlsZXIgICAgICAg
IGdjYwpIb3N0IEMgY29tcGlsZXIgICBnY2MKQ0ZMQUdTICAgICAgICAgICAgLU8yIC1nIApR
RU1VX0NGTEFHUyAgICAgICAtbTY0IC1EX0ZPUlRJRllfU09VUkNFPTIgLURfR05VX1NPVVJD
RSAtRF9GSUxFX09GRlNFVF9CSVRTPTY0IC1EX0xBUkdFRklMRV9TT1VSQ0UgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV3JlZHVuZGFudC1kZWNscyAtV2FsbCAtV3VuZGVmIC1Xd3JpdGUtc3Ry
aW5ncyAtV21pc3NpbmctcHJvdG90eXBlcyAtZm5vLXN0cmljdC1hbGlhc2luZyAtSS9yb290
L3hlbi00LjIuMC90b29scy8uLi90b29scy9pbmNsdWRlIAktSS9yb290L3hlbi00LjIuMC90
b29scy8uLi90b29scy9saWJ4YyAJLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMv
eGVuc3RvcmUgCS1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzLy4uL3Rvb2xzL3hlbnN0b3JlL2Nv
bXBhdCAJICAtZnN0YWNrLXByb3RlY3Rvci1hbGwgLVdlbmRpZi1sYWJlbHMgLVdtaXNzaW5n
LWluY2x1ZGUtZGlycyAtV2VtcHR5LWJvZHkgLVduZXN0ZWQtZXh0ZXJucyAtV2Zvcm1hdC1z
ZWN1cml0eSAtV2Zvcm1hdC15MmsgLVdpbml0LXNlbGYgLVdpZ25vcmVkLXF1YWxpZmllcnMg
LVdvbGQtc3R5bGUtZGVjbGFyYXRpb24gLVdvbGQtc3R5bGUtZGVmaW5pdGlvbiAtV3R5cGUt
bGltaXRzCkxERkxBR1MgICAgICAgICAgIC1XbCwtLXdhcm4tY29tbW9uIC1tNjQgLWcgLUwv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvLi4vdG9vbHMvbGlieGMgCS1ML3Jvb3QveGVuLTQuMi4w
L3Rvb2xzLy4uL3Rvb2xzL3hlbnN0b3JlIAptYWtlICAgICAgICAgICAgICBnbWFrZQppbnN0
YWxsICAgICAgICAgICBpbnN0YWxsCnB5dGhvbiAgICAgICAgICAgIHB5dGhvbjIuNwpzbWJk
ICAgICAgICAgICAgICAvdXNyL3NiaW4vc21iZApob3N0IENQVSAgICAgICAgICB4ODZfNjQK
aG9zdCBiaWcgZW5kaWFuICAgbm8KdGFyZ2V0IGxpc3QgICAgICAgaTM4Ni1zb2Z0bW11CnRj
ZyBkZWJ1ZyBlbmFibGVkIG5vCk1vbiBkZWJ1ZyBlbmFibGVkIG5vCmdwcm9mIGVuYWJsZWQg
ICAgIG5vCnNwYXJzZSBlbmFibGVkICAgIG5vCnN0cmlwIGJpbmFyaWVzICAgIHllcwpwcm9m
aWxlciAgICAgICAgICBubwpzdGF0aWMgYnVpbGQgICAgICBubwotV2Vycm9yIGVuYWJsZWQg
ICBubwpTREwgc3VwcG9ydCAgICAgICBubwpjdXJzZXMgc3VwcG9ydCAgICBubwpjdXJsIHN1
cHBvcnQgICAgICB5ZXMKY2hlY2sgc3VwcG9ydCAgICAgbm8KbWluZ3czMiBzdXBwb3J0ICAg
bm8KQXVkaW8gZHJpdmVycyAgICAgb3NzCkV4dHJhIGF1ZGlvIGNhcmRzIGFjOTcgZXMxMzcw
IHNiMTYgaGRhCkJsb2NrIHdoaXRlbGlzdCAgIApNaXhlciBlbXVsYXRpb24gICBubwpWTkMg
c3VwcG9ydCAgICAgICB5ZXMKVk5DIFRMUyBzdXBwb3J0ICAgbm8KVk5DIFNBU0wgc3VwcG9y
dCAgbm8KVk5DIEpQRUcgc3VwcG9ydCAgbm8KVk5DIFBORyBzdXBwb3J0ICAgbm8KVk5DIHRo
cmVhZCAgICAgICAgbm8KeGVuIHN1cHBvcnQgICAgICAgeWVzCmJybGFwaSBzdXBwb3J0ICAg
IG5vCmJsdWV6ICBzdXBwb3J0ICAgIG5vCkRvY3VtZW50YXRpb24gICAgIHllcwpOUFRMIHN1
cHBvcnQgICAgICBubwpHVUVTVF9CQVNFICAgICAgICB5ZXMKUElFICAgICAgICAgICAgICAg
bm8KdmRlIHN1cHBvcnQgICAgICAgbm8KTGludXggQUlPIHN1cHBvcnQgbm8KQVRUUi9YQVRU
UiBzdXBwb3J0IHllcwpJbnN0YWxsIGJsb2JzICAgICB5ZXMKS1ZNIHN1cHBvcnQgICAgICAg
bm8KVENHIGludGVycHJldGVyICAgbm8KZmR0IHN1cHBvcnQgICAgICAgbm8KcHJlYWR2IHN1
cHBvcnQgICAgeWVzCmZkYXRhc3luYyAgICAgICAgIHllcwptYWR2aXNlICAgICAgICAgICB5
ZXMKcG9zaXhfbWFkdmlzZSAgICAgeWVzCnV1aWQgc3VwcG9ydCAgICAgIG5vCnZob3N0LW5l
dCBzdXBwb3J0IG5vClRyYWNlIGJhY2tlbmQgICAgIG5vcApUcmFjZSBvdXRwdXQgZmlsZSB0
cmFjZS08cGlkPgpzcGljZSBzdXBwb3J0ICAgICBubwpyYmQgc3VwcG9ydCAgICAgICBubwp4
ZnNjdGwgc3VwcG9ydCAgICBubwpuc3MgdXNlZCAgICAgICAgICBubwp1c2IgbmV0IHJlZGly
ICAgICBubwpPcGVuR0wgc3VwcG9ydCAgICBubwpsaWJpc2NzaSBzdXBwb3J0ICBubwpidWls
ZCBndWVzdCBhZ2VudCB5ZXMKZ21ha2VbM106IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLWRpcicKICBHRU4gICBpMzg2LXNvZnRtbXUvY29u
ZmlnLWRldmljZXMubWFrCiAgR0VOICAgY29uZmlnLWFsbC1kZXZpY2VzLm1hawpnbWFrZVsz
XTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhlbi1k
aXInCmdtYWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29s
cy9xZW11LXhlbi1kaXInCiAgR0VOICAgcWVtdS1vcHRpb25zLnRleGkKICBHRU4gICBxZW11
LW1vbml0b3IudGV4aQogIEdFTiAgIHFlbXUtaW1nLWNtZHMudGV4aQogIEdFTiAgIHFlbXUt
ZG9jLmh0bWwKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuL3FlbXUtZG9jLnRleGk6
Nzogd2FybmluZzogdW5yZWNvZ25pemVkIGVuY29kaW5nIG5hbWUgYFVURi04Jy4KICBHRU4g
ICBxZW11LXRlY2guaHRtbAovcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4vcWVtdS10
ZWNoLnRleGk6Nzogd2FybmluZzogdW5yZWNvZ25pemVkIGVuY29kaW5nIG5hbWUgYFVURi04
Jy4KICBHRU4gICBxZW11LjEKICBHRU4gICBxZW11LWltZy4xCiAgR0VOICAgcWVtdS1uYmQu
OAogIEdFTiAgIFFNUC9xbXAtY29tbWFuZHMudHh0CiAgR0VOICAgY29uZmlnLWhvc3QuaAog
IEdFTiAgIHRyYWNlLmgKICBHRU4gICBxZW11LW9wdGlvbnMuZGVmCiAgR0VOICAgcW1wLWNv
bW1hbmRzLmgKICBHRU4gICBxYXBpLXR5cGVzLmgKICBHRU4gICBxYXBpLXZpc2l0LmgKICBH
RU4gICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tZGlyL3FhcGktZ2VuZXJhdGVk
L3FnYS1xYXBpLXR5cGVzLmgKICBHRU4gICAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14
ZW4tZGlyL3FhcGktZ2VuZXJhdGVkL3FnYS1xYXBpLXZpc2l0LmgKICBHRU4gICAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tZGlyL3FhcGktZ2VuZXJhdGVkL3FnYS1xbXAtY29t
bWFuZHMuaAogIENDICAgIHFlbXUtZ2EubwogIENDICAgIHFnYS9ndWVzdC1hZ2VudC1jb21t
YW5kcy5vCiAgQ0MgICAgcWdhL2d1ZXN0LWFnZW50LWNvbW1hbmQtc3RhdGUubwogIENDICAg
IHFlbXUtc29ja2V0cy5vCiAgQ0MgICAgbW9kdWxlLm8KICBDQyAgICBxZW11LW9wdGlvbi5v
CiAgQ0MgICAgb3NsaWItcG9zaXgubwogIENDICAgIHFhcGkvcWFwaS12aXNpdC1jb3JlLm8K
ICBDQyAgICBxYXBpL3FtcC1pbnB1dC12aXNpdG9yLm8KICBDQyAgICBxYXBpL3FtcC1vdXRw
dXQtdmlzaXRvci5vCiAgQ0MgICAgcWFwaS9xYXBpLWRlYWxsb2MtdmlzaXRvci5vCiAgQ0Mg
ICAgcWFwaS9xbXAtcmVnaXN0cnkubwogIENDICAgIHFhcGkvcW1wLWRpc3BhdGNoLm8KICBD
QyAgICBxZW11LXRvb2wubwogIENDICAgIG9zZGVwLm8KICBDQyAgICBxZW11LXRocmVhZC1w
b3NpeC5vCiAgR0VOICAgdHJhY2UuYwogIENDICAgIHRyYWNlLm8KICBDQyAgICB0cmFjZS9k
ZWZhdWx0Lm8KICBDQyAgICB0cmFjZS9jb250cm9sLm8KICBDQyAgICBxZW11LXRpbWVyLWNv
bW1vbi5vCiAgQ0MgICAgY3V0aWxzLm8KICBDQyAgICBxaW50Lm8KICBDQyAgICBxc3RyaW5n
Lm8KICBDQyAgICBxZGljdC5vCiAgQ0MgICAgcWxpc3QubwogIENDICAgIHFmbG9hdC5vCiAg
Q0MgICAgcWJvb2wubwogIENDICAgIHFqc29uLm8KICBDQyAgICBqc29uLWxleGVyLm8KICBD
QyAgICBqc29uLXN0cmVhbWVyLm8KICBDQyAgICBqc29uLXBhcnNlci5vCiAgQ0MgICAgcWVy
cm9yLm8KICBDQyAgICBlcnJvci5vCiAgQ0MgICAgcWVtdS1lcnJvci5vCiAgQ0MgICAgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLWRpci9xYXBpLWdlbmVyYXRlZC9xZ2EtcWFw
aS10eXBlcy5vCiAgQ0MgICAgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuLWRpci9x
YXBpLWdlbmVyYXRlZC9xZ2EtcWFwaS12aXNpdC5vCiAgQ0MgICAgL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL3FlbXUteGVuLWRpci9xYXBpLWdlbmVyYXRlZC9xZ2EtcW1wLW1hcnNoYWwubwog
IExJTksgIHFlbXUtZ2EKL3Vzci9saWIvbGliYy5zbzogd2FybmluZzogbXVsdGlwbGUgY29t
bW9uIG9mIGBlbnZpcm9uJwovdXNyL2xpYi9jcnQwLm86IHdhcm5pbmc6IHByZXZpb3VzIGNv
bW1vbiBpcyBoZXJlCiAgQ0MgICAgcWVtdS1uYmQubwogIENDICAgIGNhY2hlLXV0aWxzLm8K
ICBDQyAgICBhc3luYy5vCiAgQ0MgICAgbmJkLm8KICBDQyAgICBibG9jay5vCiAgQ0MgICAg
YWlvLm8KICBDQyAgICBhZXMubwogIENDICAgIHFlbXUtY29uZmlnLm8KICBDQyAgICBxZW11
LXByb2dyZXNzLm8KICBDQyAgICBxZW11LWNvcm91dGluZS5vCiAgQ0MgICAgcWVtdS1jb3Jv
dXRpbmUtbG9jay5vCiAgQ0MgICAgY29yb3V0aW5lLXVjb250ZXh0Lm8KICBDQyAgICBwb3Np
eC1haW8tY29tcGF0Lm8KICBDQyAgICBibG9jay9yYXcubwogIENDICAgIGJsb2NrL2Nvdy5v
CiAgQ0MgICAgYmxvY2svcWNvdy5vCiAgQ0MgICAgYmxvY2svdmRpLm8KICBDQyAgICBibG9j
ay92bWRrLm8KICBDQyAgICBibG9jay9jbG9vcC5vCiAgQ0MgICAgYmxvY2svZG1nLm8KICBD
QyAgICBibG9jay9ib2Nocy5vCiAgQ0MgICAgYmxvY2svdnBjLm8KICBDQyAgICBibG9jay92
dmZhdC5vCiAgQ0MgICAgYmxvY2svcWNvdzIubwogIENDICAgIGJsb2NrL3Fjb3cyLXJlZmNv
dW50Lm8KICBDQyAgICBibG9jay9xY293Mi1jbHVzdGVyLm8KICBDQyAgICBibG9jay9xY293
Mi1zbmFwc2hvdC5vCiAgQ0MgICAgYmxvY2svcWNvdzItY2FjaGUubwogIENDICAgIGJsb2Nr
L3FlZC5vCiAgQ0MgICAgYmxvY2svcWVkLWdlbmNiLm8KICBDQyAgICBibG9jay9xZWQtbDIt
Y2FjaGUubwogIENDICAgIGJsb2NrL3FlZC10YWJsZS5vCiAgQ0MgICAgYmxvY2svcWVkLWNs
dXN0ZXIubwogIENDICAgIGJsb2NrL3FlZC1jaGVjay5vCiAgQ0MgICAgYmxvY2svcGFyYWxs
ZWxzLm8KICBDQyAgICBibG9jay9uYmQubwogIENDICAgIGJsb2NrL2Jsa2RlYnVnLm8KICBD
QyAgICBibG9jay9zaGVlcGRvZy5vCiAgQ0MgICAgYmxvY2svYmxrdmVyaWZ5Lm8KICBDQyAg
ICBibG9jay9yYXctcG9zaXgubwogIENDICAgIGJsb2NrL2N1cmwubwogIExJTksgIHFlbXUt
bmJkCi91c3IvbGliL2xpYmMuc286IHdhcm5pbmc6IG11bHRpcGxlIGNvbW1vbiBvZiBgZW52
aXJvbicKL3Vzci9saWIvY3J0MC5vOiB3YXJuaW5nOiBwcmV2aW91cyBjb21tb24gaXMgaGVy
ZQogIEdFTiAgIHFlbXUtaW1nLWNtZHMuaAogIENDICAgIHFlbXUtaW1nLm8KICBMSU5LICBx
ZW11LWltZwovdXNyL2xpYi9saWJjLnNvOiB3YXJuaW5nOiBtdWx0aXBsZSBjb21tb24gb2Yg
YGVudmlyb24nCi91c3IvbGliL2NydDAubzogd2FybmluZzogcHJldmlvdXMgY29tbW9uIGlz
IGhlcmUKICBDQyAgICBxZW11LWlvLm8KICBDQyAgICBjbWQubwogIExJTksgIHFlbXUtaW8K
L3Vzci9saWIvbGliYy5zbzogd2FybmluZzogbXVsdGlwbGUgY29tbW9uIG9mIGBlbnZpcm9u
JwovdXNyL2xpYi9jcnQwLm86IHdhcm5pbmc6IHByZXZpb3VzIGNvbW1vbiBpcyBoZXJlCiAg
Q0MgICAgbGliaHc2NC92bC5vCiAgQ0MgICAgbGliaHc2NC9sb2FkZXIubwogIENDICAgIGxp
Ymh3NjQvdmlydGlvLWNvbnNvbGUubwogIENDICAgIGxpYmh3NjQvdXNiLWxpYmh3Lm8KICBD
QyAgICBsaWJodzY0L3ZpcnRpby1wY2kubwogIENDICAgIGxpYmh3NjQvZndfY2ZnLm8KICBD
QyAgICBsaWJodzY0L3BjaS5vCiAgQ0MgICAgbGliaHc2NC9wY2lfYnJpZGdlLm8KICBDQyAg
ICBsaWJodzY0L21zaXgubwogIENDICAgIGxpYmh3NjQvbXNpLm8KICBDQyAgICBsaWJodzY0
L3BjaV9ob3N0Lm8KICBDQyAgICBsaWJodzY0L3BjaWVfaG9zdC5vCiAgQ0MgICAgbGliaHc2
NC9pb2gzNDIwLm8KICBDQyAgICBsaWJodzY0L3hpbzMxMzBfdXBzdHJlYW0ubwogIENDICAg
IGxpYmh3NjQveGlvMzEzMF9kb3duc3RyZWFtLm8KICBDQyAgICBsaWJodzY0L3dhdGNoZG9n
Lm8KICBDQyAgICBsaWJodzY0L3NlcmlhbC5vCiAgQ0MgICAgbGliaHc2NC9wYXJhbGxlbC5v
CiAgQ0MgICAgbGliaHc2NC9pODI1NC5vCiAgQ0MgICAgbGliaHc2NC9wY3Nway5vCiAgQ0Mg
ICAgbGliaHc2NC9wY2tiZC5vCiAgQ0MgICAgbGliaHc2NC91c2ItdWhjaS5vCiAgQ0MgICAg
bGliaHc2NC91c2Itb2hjaS5vCiAgQ0MgICAgbGliaHc2NC91c2ItZWhjaS5vCiAgQ0MgICAg
bGliaHc2NC9mZGMubwogIENDICAgIGxpYmh3NjQvYWNwaS5vCiAgQ0MgICAgbGliaHc2NC9h
Y3BpX3BpaXg0Lm8KICBDQyAgICBsaWJodzY0L3BtX3NtYnVzLm8KICBDQyAgICBsaWJodzY0
L2FwbS5vCiAgQ0MgICAgbGliaHc2NC9kbWEubwogIENDICAgIGxpYmh3NjQvaHBldC5vCiAg
Q0MgICAgbGliaHc2NC9hcHBsZXNtYy5vCiAgQ0MgICAgbGliaHc2NC91c2ItY2NpZC5vCiAg
Q0MgICAgbGliaHc2NC9jY2lkLWNhcmQtcGFzc3RocnUubwogIENDICAgIGxpYmh3NjQvaTgy
NTkubwogIENDICAgIGxpYmh3NjQvd2R0X2k2MzAwZXNiLm8KICBDQyAgICBsaWJodzY0L3Bj
aWUubwogIENDICAgIGxpYmh3NjQvcGNpZV9hZXIubwogIENDICAgIGxpYmh3NjQvcGNpZV9w
b3J0Lm8KICBDQyAgICBsaWJodzY0L25lMjAwMC5vCiAgQ0MgICAgbGliaHc2NC9lZXBybzEw
MC5vCiAgQ0MgICAgbGliaHc2NC9wY25ldC1wY2kubwogIENDICAgIGxpYmh3NjQvcGNuZXQu
bwogIENDICAgIGxpYmh3NjQvZTEwMDAubwogIENDICAgIGxpYmh3NjQvcnRsODEzOS5vCiAg
Q0MgICAgbGliaHc2NC9uZTIwMDAtaXNhLm8KICBDQyAgICBsaWJodzY0L2lkZS9jb3JlLm8K
ICBDQyAgICBsaWJodzY0L2lkZS9hdGFwaS5vCiAgQ0MgICAgbGliaHc2NC9pZGUvcWRldi5v
CiAgQ0MgICAgbGliaHc2NC9pZGUvcGNpLm8KICBDQyAgICBsaWJodzY0L2lkZS9pc2Eubwog
IENDICAgIGxpYmh3NjQvaWRlL3BpaXgubwogIENDICAgIGxpYmh3NjQvaWRlL2FoY2kubwog
IENDICAgIGxpYmh3NjQvaWRlL2ljaC5vCiAgQ0MgICAgbGliaHc2NC9sc2k1M2M4OTVhLm8K
ICBDQyAgICBsaWJodzY0L2RtYS1oZWxwZXJzLm8KICBDQyAgICBsaWJodzY0L3N5c2J1cy5v
CiAgQ0MgICAgbGliaHc2NC9pc2EtYnVzLm8KICBDQyAgICBsaWJodzY0L3FkZXYtYWRkci5v
CiAgQ0MgICAgbGliaHc2NC92Z2EtcGNpLm8KICBDQyAgICBsaWJodzY0L3ZnYS1pc2Eubwog
IENDICAgIGxpYmh3NjQvdm13YXJlX3ZnYS5vCiAgQ0MgICAgbGliaHc2NC92bW1vdXNlLm8K
ICBDQyAgICBsaWJodzY0L3NiMTYubwogIENDICAgIGxpYmh3NjQvZXMxMzcwLm8KICBDQyAg
ICBsaWJodzY0L2FjOTcubwogIENDICAgIGxpYmh3NjQvaW50ZWwtaGRhLm8KICBDQyAgICBs
aWJodzY0L2hkYS1hdWRpby5vCiAgQ0MgICAgYmxvY2tkZXYubwogIENDICAgIG5ldC5vCiAg
Q0MgICAgbmV0L3F1ZXVlLm8KICBDQyAgICBuZXQvY2hlY2tzdW0ubwogIENDICAgIG5ldC91
dGlsLm8KICBDQyAgICBuZXQvc29ja2V0Lm8KICBDQyAgICBuZXQvZHVtcC5vCiAgQ0MgICAg
bmV0L3RhcC5vCiAgQ0MgICAgbmV0L3RhcC1ic2QubwogIENDICAgIG5ldC9zbGlycC5vCiAg
Q0MgICAgcmVhZGxpbmUubwogIENDICAgIGNvbnNvbGUubwogIENDICAgIGN1cnNvci5vCiAg
Q0MgICAgb3MtcG9zaXgubwogIENDICAgIHRjZy1ydW50aW1lLm8KICBDQyAgICBob3N0LXV0
aWxzLm8KICBDQyAgICBtYWluLWxvb3AubwogIENDICAgIGlycS5vCiAgQ0MgICAgaW5wdXQu
bwogIENDICAgIGkyYy5vCiAgQ0MgICAgc21idXMubwogIENDICAgIHNtYnVzX2VlcHJvbS5v
CiAgQ0MgICAgZWVwcm9tOTN4eC5vCiAgQ0MgICAgc2NzaS1kaXNrLm8KICBDQyAgICBjZHJv
bS5vCiAgQ0MgICAgc2NzaS1nZW5lcmljLm8KICBDQyAgICBzY3NpLWJ1cy5vCiAgQ0MgICAg
aGlkLm8KICBDQyAgICB1c2IubwogIENDICAgIHVzYi1odWIubwogIENDICAgIHVzYi1ic2Qu
bwogIENDICAgIHVzYi1oaWQubwogIENDICAgIHVzYi1tc2QubwogIENDICAgIHVzYi13YWNv
bS5vCiAgQ0MgICAgdXNiLXNlcmlhbC5vCiAgQ0MgICAgdXNiLW5ldC5vCiAgQ0MgICAgdXNi
LWJ1cy5vCiAgQ0MgICAgdXNiLWRlc2MubwogIENDICAgIGJ0Lm8KICBDQyAgICBidC1ob3N0
Lm8KICBDQyAgICBidC12aGNpLm8KICBDQyAgICBidC1sMmNhcC5vCiAgQ0MgICAgYnQtc2Rw
Lm8KICBDQyAgICBidC1oY2kubwogIENDICAgIGJ0LWhpZC5vCiAgQ0MgICAgdXNiLWJ0Lm8K
ICBDQyAgICBidC1oY2ktY3NyLm8KICBDQyAgICBidWZmZXJlZF9maWxlLm8KICBDQyAgICBt
aWdyYXRpb24ubwogIENDICAgIG1pZ3JhdGlvbi10Y3AubwogIENDICAgIHFlbXUtY2hhci5v
CiAgQ0MgICAgc2F2ZXZtLm8KICBDQyAgICBtc21vdXNlLm8KICBDQyAgICBwczIubwogIEND
ICAgIHFkZXYubwogIENDICAgIHFkZXYtcHJvcGVydGllcy5vCiAgQ0MgICAgYmxvY2stbWln
cmF0aW9uLm8KICBDQyAgICBpb2hhbmRsZXIubwogIENDICAgIHBmbGliLm8KICBDQyAgICBi
aXRtYXAubwogIENDICAgIGJpdG9wcy5vCiAgQ0MgICAgbWlncmF0aW9uLWV4ZWMubwogIEND
ICAgIG1pZ3JhdGlvbi11bml4Lm8KICBDQyAgICBtaWdyYXRpb24tZmQubwogIENDICAgIGF1
ZGlvL2F1ZGlvLm8KICBDQyAgICBhdWRpby9ub2F1ZGlvLm8KICBDQyAgICBhdWRpby93YXZh
dWRpby5vCiAgQ0MgICAgYXVkaW8vbWl4ZW5nLm8KICBDQyAgICBhdWRpby9vc3NhdWRpby5v
CiAgQ0MgICAgYXVkaW8vd2F2Y2FwdHVyZS5vCiAgQ0MgICAgdWkva2V5bWFwcy5vCiAgQ0Mg
ICAgdWkvdm5jLm8KICBDQyAgICB1aS9kM2Rlcy5vCiAgQ0MgICAgdWkvdm5jLWVuYy16bGli
Lm8KICBDQyAgICB1aS92bmMtZW5jLWhleHRpbGUubwogIENDICAgIHVpL3ZuYy1lbmMtdGln
aHQubwogIENDICAgIHVpL3ZuYy1wYWxldHRlLm8KICBDQyAgICB1aS92bmMtZW5jLXpybGUu
bwogIENDICAgIHVpL3ZuYy1qb2JzLXN5bmMubwogIENDICAgIGlvdi5vCiAgQ0MgICAgYWNs
Lm8KICBDQyAgICBjb21wYXRmZC5vCiAgQ0MgICAgbm90aWZ5Lm8KICBDQyAgICBldmVudF9u
b3RpZmllci5vCiAgQ0MgICAgcWVtdS10aW1lci5vCiAgQ0MgICAgc2xpcnAvY2tzdW0ubwog
IENDICAgIHNsaXJwL2lmLm8KICBDQyAgICBzbGlycC9pcF9pY21wLm8KICBDQyAgICBzbGly
cC9pcF9pbnB1dC5vCiAgQ0MgICAgc2xpcnAvaXBfb3V0cHV0Lm8KICBDQyAgICBzbGlycC9z
bGlycC5vCiAgQ0MgICAgc2xpcnAvbWJ1Zi5vCiAgQ0MgICAgc2xpcnAvbWlzYy5vCiAgQ0Mg
ICAgc2xpcnAvc2J1Zi5vCiAgQ0MgICAgc2xpcnAvc29ja2V0Lm8KICBDQyAgICBzbGlycC90
Y3BfaW5wdXQubwogIENDICAgIHNsaXJwL3RjcF9vdXRwdXQubwogIENDICAgIHNsaXJwL3Rj
cF9zdWJyLm8KICBDQyAgICBzbGlycC90Y3BfdGltZXIubwogIENDICAgIHNsaXJwL3VkcC5v
CiAgQ0MgICAgc2xpcnAvYm9vdHAubwogIENDICAgIHNsaXJwL3RmdHAubwogIENDICAgIHNs
aXJwL2FycF90YWJsZS5vCiAgQ0MgICAgeGVuX2JhY2tlbmQubwogIENDICAgIHhlbl9kZXZj
b25maWcubwogIENDICAgIHhlbl9jb25zb2xlLm8KICBDQyAgICB4ZW5mYi5vCiAgQ0MgICAg
eGVuX2Rpc2subwogIENDICAgIHhlbl9uaWMubwogIENDICAgIHFtcC1tYXJzaGFsLm8KICBD
QyAgICBxYXBpLXZpc2l0Lm8KICBDQyAgICBxYXBpLXR5cGVzLm8KICBDQyAgICBxbXAubwog
IENDICAgIGhtcC5vCiAgQ0MgICAgbGliZGlzL2kzODYtZGlzLm8KICBHRU4gICBjb25maWct
dGFyZ2V0LmgKICBDQyAgICBpMzg2LXNvZnRtbXUvYXJjaF9pbml0Lm8KICBDQyAgICBpMzg2
LXNvZnRtbXUvY3B1cy5vCiAgR0VOICAgaTM4Ni1zb2Z0bW11L2htcC1jb21tYW5kcy5oCiAg
R0VOICAgaTM4Ni1zb2Z0bW11L3FtcC1jb21tYW5kcy1vbGQuaAogIENDICAgIGkzODYtc29m
dG1tdS9tb25pdG9yLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvbWFjaGluZS5vCiAgQ0MgICAg
aTM4Ni1zb2Z0bW11L2dkYnN0dWIubwogIENDICAgIGkzODYtc29mdG1tdS9iYWxsb29uLm8K
ICBDQyAgICBpMzg2LXNvZnRtbXUvaW9wb3J0Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdmly
dGlvLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdmlydGlvLWJsay5vCiAgQ0MgICAgaTM4Ni1z
b2Z0bW11L3ZpcnRpby1iYWxsb29uLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdmlydGlvLW5l
dC5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L3ZpcnRpby1zZXJpYWwtYnVzLm8KICBDQyAgICBp
Mzg2LXNvZnRtbXUvdmhvc3RfbmV0Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUva3ZtLXN0dWIu
bwogIENDICAgIGkzODYtc29mdG1tdS9tZW1vcnkubwogIENDICAgIGkzODYtc29mdG1tdS94
ZW4tYWxsLm8KL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3FlbXUteGVuL3hlbi1hbGwuYzogSW4g
ZnVuY3Rpb24gJ3hlbl9zeW5jX2RpcnR5X2JpdG1hcCc6Ci9yb290L3hlbi00LjIuMC90b29s
cy9xZW11LXhlbi94ZW4tYWxsLmM6NDc5OjEzOiB3YXJuaW5nOiBpbXBsaWNpdCBkZWNsYXJh
dGlvbiBvZiBmdW5jdGlvbiAnZmZzbCcKICBDQyAgICBpMzg2LXNvZnRtbXUveGVuX21hY2hp
bmVfcHYubwogIENDICAgIGkzODYtc29mdG1tdS94ZW5fZG9tYWluYnVpbGQubwogIENDICAg
IGkzODYtc29mdG1tdS94ZW4tbWFwY2FjaGUubwogIENDICAgIGkzODYtc29mdG1tdS9leGVj
Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdHJhbnNsYXRlLWFsbC5vCiAgQ0MgICAgaTM4Ni1z
b2Z0bW11L2NwdS1leGVjLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvdHJhbnNsYXRlLm8KICBD
QyAgICBpMzg2LXNvZnRtbXUvdGNnL3RjZy5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L3RjZy9v
cHRpbWl6ZS5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L2ZwdS9zb2Z0ZmxvYXQubwogIENDICAg
IGkzODYtc29mdG1tdS9vcF9oZWxwZXIubwogIENDICAgIGkzODYtc29mdG1tdS9oZWxwZXIu
bwogIENDICAgIGkzODYtc29mdG1tdS9jcHVpZC5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L2Rp
c2FzLm8KICBDQyAgICBpMzg2LXNvZnRtbXUveGVuX3BsYXRmb3JtLm8KICBDQyAgICBpMzg2
LXNvZnRtbXUvdmdhLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvbWMxNDY4MThydGMubwogIEND
ICAgIGkzODYtc29mdG1tdS9wYy5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L2NpcnJ1c192Z2Eu
bwogIENDICAgIGkzODYtc29mdG1tdS9zZ2EubwogIENDICAgIGkzODYtc29mdG1tdS9hcGlj
Lm8KICBDQyAgICBpMzg2LXNvZnRtbXUvaW9hcGljLm8KICBDQyAgICBpMzg2LXNvZnRtbXUv
cGlpeF9wY2kubwogIENDICAgIGkzODYtc29mdG1tdS92bXBvcnQubwogIENDICAgIGkzODYt
c29mdG1tdS9kZXZpY2UtaG90cGx1Zy5vCiAgQ0MgICAgaTM4Ni1zb2Z0bW11L3BjaS1ob3Rw
bHVnLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvc21iaW9zLm8KICBDQyAgICBpMzg2LXNvZnRt
bXUvd2R0X2liNzAwLm8KICBDQyAgICBpMzg2LXNvZnRtbXUvZGVidWdjb24ubwogIENDICAg
IGkzODYtc29mdG1tdS9tdWx0aWJvb3QubwogIENDICAgIGkzODYtc29mdG1tdS9wY19waWl4
Lm8KICBMSU5LICBpMzg2LXNvZnRtbXUvcWVtdS1zeXN0ZW0taTM4NgovdXNyL2xpYi9saWJj
LnNvOiB3YXJuaW5nOiBtdWx0aXBsZSBjb21tb24gb2YgYGVudmlyb24nCi91c3IvbGliL2Ny
dDAubzogd2FybmluZzogcHJldmlvdXMgY29tbW9uIGlzIGhlcmUKICBBUyAgICBvcHRpb25y
b20vbXVsdGlib290Lm8KICBCdWlsZGluZyBvcHRpb25yb20vbXVsdGlib290LmltZwogIEJ1
aWxkaW5nIG9wdGlvbnJvbS9tdWx0aWJvb3QucmF3CiAgU2lnbmluZyBvcHRpb25yb20vbXVs
dGlib290LmJpbgogIEFTICAgIG9wdGlvbnJvbS9saW51eGJvb3QubwogIEJ1aWxkaW5nIG9w
dGlvbnJvbS9saW51eGJvb3QuaW1nCiAgQnVpbGRpbmcgb3B0aW9ucm9tL2xpbnV4Ym9vdC5y
YXcKICBTaWduaW5nIG9wdGlvbnJvbS9saW51eGJvb3QuYmluCnJtIG11bHRpYm9vdC5vIGxp
bnV4Ym9vdC5yYXcgbGludXhib290LmltZyBtdWx0aWJvb3QucmF3IG11bHRpYm9vdC5pbWcg
bGludXhib290Lm8KZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvcWVtdS14ZW4tZGlyJwpjZCBxZW11LXhlbi1kaXI7IFwKZ21ha2UgaW5zdGFs
bApnbWFrZVszXTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
cWVtdS14ZW4tZGlyJwppbnN0YWxsIC1kIC1tIDA3NTUgIi9yb290L3hlbi00LjIuMC9kaXN0
L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL2RvYy9xZW11IgppbnN0YWxsIC1jIC1tIDA2NDQg
cWVtdS1kb2MuaHRtbCAgcWVtdS10ZWNoLmh0bWwgIi9yb290L3hlbi00LjIuMC9kaXN0L2lu
c3RhbGwvdXNyL2xvY2FsL3NoYXJlL2RvYy9xZW11IgppbnN0YWxsIC1kIC1tIDA3NTUgIi9y
b290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL21hbi9tYW4xIgpp
bnN0YWxsIC1jIC1tIDA2NDQgcWVtdS4xIHFlbXUtaW1nLjEgIi9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL21hbi9tYW4xIgppbnN0YWxsIC1kIC1tIDA3
NTUgIi9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNyL2xvY2FsL3NoYXJlL21hbi9t
YW44IgppbnN0YWxsIC1jIC1tIDA2NDQgcWVtdS1uYmQuOCAiL3Jvb3QveGVuLTQuMi4wL2Rp
c3QvaW5zdGFsbC91c3IvbG9jYWwvc2hhcmUvbWFuL21hbjgiCmluc3RhbGwgLWQgLW0gMDc1
NSAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IvbG9jYWwvZXRjL3FlbXUiCmlu
c3RhbGwgLWMgLW0gMDY0NCAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4vc3lzY29u
Zmlncy90YXJnZXQvdGFyZ2V0LXg4Nl82NC5jb25mICIvcm9vdC94ZW4tNC4yLjAvZGlzdC9p
bnN0YWxsL3Vzci9sb2NhbC9ldGMvcWVtdSIKaW5zdGFsbCAtZCAtbSAwNzU1ICIvcm9vdC94
ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9saWJleGVjIgppbnN0YWxsIC1jIC1t
IDA3NTUgIHFlbXUtZ2EgcWVtdS1uYmQgcWVtdS1pbWcgcWVtdS1pbyAgIi9yb290L3hlbi00
LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL2xpYmV4ZWMiCmluc3RhbGwgLWQgLW0gMDc1
NSAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3IveGVuNDIvc2hhcmUvcWVtdS14
ZW4iCnNldCAtZTsgZm9yIHggaW4gYmlvcy5iaW4gc2dhYmlvcy5iaW4gdmdhYmlvcy5iaW4g
dmdhYmlvcy1jaXJydXMuYmluIHZnYWJpb3Mtc3RkdmdhLmJpbiB2Z2FiaW9zLXZtd2FyZS5i
aW4gdmdhYmlvcy1xeGwuYmluIHBwY19yb20uYmluIG9wZW5iaW9zLXNwYXJjMzIgb3BlbmJp
b3Mtc3BhcmM2NCBvcGVuYmlvcy1wcGMgcHhlLWUxMDAwLnJvbSBweGUtZWVwcm8xMDAucm9t
IHB4ZS1uZTJrX3BjaS5yb20gcHhlLXBjbmV0LnJvbSBweGUtcnRsODEzOS5yb20gcHhlLXZp
cnRpby5yb20gYmFtYm9vLmR0YiBwZXRhbG9naXgtczNhZHNwMTgwMC5kdGIgcGV0YWxvZ2l4
LW1sNjA1LmR0YiBtcGM4NTQ0ZHMuZHRiIG11bHRpYm9vdC5iaW4gbGludXhib290LmJpbiBz
MzkwLXppcGwucm9tIHNwYXByLXJ0YXMuYmluIHNsb2YuYmluIHBhbGNvZGUtY2xpcHBlcjsg
ZG8gXAoJaW5zdGFsbCAtYyAtbSAwNjQ0IC9yb290L3hlbi00LjIuMC90b29scy9xZW11LXhl
bi9wYy1iaW9zLyR4ICIvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0YWxsL3Vzci94ZW40Mi9z
aGFyZS9xZW11LXhlbiI7IFwKZG9uZQppbnN0YWxsIC1kIC1tIDA3NTUgIi9yb290L3hlbi00
LjIuMC9kaXN0L2luc3RhbGwvdXNyL3hlbjQyL3NoYXJlL3FlbXUteGVuL2tleW1hcHMiCnNl
dCAtZTsgZm9yIHggaW4gZGEgICAgIGVuLWdiICBldCAgZnIgICAgIGZyLWNoICBpcyAgbHQg
IG1vZGlmaWVycyAgbm8gIHB0LWJyICBzdiBhciAgICAgIGRlICAgICBlbi11cyAgZmkgIGZy
LWJlICBociAgICAgaXQgIGx2ICBubCAgICAgICAgIHBsICBydSAgICAgdGggY29tbW9uICBk
ZS1jaCAgZXMgICAgIGZvICBmci1jYSAgaHUgICAgIGphICBtayAgbmwtYmUgICAgICBwdCAg
c2wgICAgIHRyOyBkbyBcCglpbnN0YWxsIC1jIC1tIDA2NDQgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL3FlbXUteGVuL3BjLWJpb3Mva2V5bWFwcy8keCAiL3Jvb3QveGVuLTQuMi4wL2Rpc3Qv
aW5zdGFsbC91c3IveGVuNDIvc2hhcmUvcWVtdS14ZW4va2V5bWFwcyI7IFwKZG9uZQpmb3Ig
ZCBpbiBpMzg2LXNvZnRtbXU7IGRvIFwKZ21ha2UgLUMgJGQgaW5zdGFsbCB8fCBleGl0IDEg
OyBcCiAgICAgICAgZG9uZQpnbWFrZVs0XTogRW50ZXJpbmcgZGlyZWN0b3J5IGAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tZGlyL2kzODYtc29mdG1tdScKaW5zdGFsbCAtbSA3
NTUgcWVtdS1zeXN0ZW0taTM4NiAiL3Jvb3QveGVuLTQuMi4wL2Rpc3QvaW5zdGFsbC91c3Iv
eGVuNDIvbGliZXhlYyIKc3RyaXAgIi9yb290L3hlbi00LjIuMC9kaXN0L2luc3RhbGwvdXNy
L3hlbjQyL2xpYmV4ZWMvcWVtdS1zeXN0ZW0taTM4NiIKZ21ha2VbNF06IExlYXZpbmcgZGly
ZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvcWVtdS14ZW4tZGlyL2kzODYtc29mdG1t
dScKZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
cWVtdS14ZW4tZGlyJwpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00
LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzJwpnbWFrZSAtQyB4ZW5wbWQgaW5zdGFsbApnbWFrZVszXTogRW50ZXJpbmcg
ZGlyZWN0b3J5IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVucG1kJwpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGVucG1kLm8uZCAtZm5vLW9w
dGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
eGVucG1kLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hl
bnBtZC8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyB4ZW5wbWQubyB4ZW5wbWQuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgICB4ZW5wbWQubyAtbyB4ZW5wbWQgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL3hlbnBtZC8uLi8uLi90b29scy94ZW5zdG9yZS9saWJ4ZW5zdG9yZS5zbyAg
LUwvdXNyL3BrZy9saWIKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL3hlbnBtZC8uLi8uLi90b29s
cy9jcm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvcm9vdC94ZW4tNC4yLjAvZGlzdC9pbnN0
YWxsL3Vzci94ZW40Mi9zYmluCi9yb290L3hlbi00LjIuMC90b29scy94ZW5wbWQvLi4vLi4v
dG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA3NTUgLXAgeGVucG1kIC9yb290L3hlbi00LjIuMC9k
aXN0L2luc3RhbGwvdXNyL3hlbjQyL3NiaW4KZ21ha2VbM106IExlYXZpbmcgZGlyZWN0b3J5
IGAvcm9vdC94ZW4tNC4yLjAvdG9vbHMveGVucG1kJwpnbWFrZVsyXTogTGVhdmluZyBkaXJl
Y3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scycKZ21ha2VbMl06IEVudGVyaW5nIGRpcmVj
dG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZSAtQyBsaWJ4bCBpbnN0YWxsCmdt
YWtlWzNdOiBFbnRlcmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bCcKL3Vzci9wa2cvYmluL3BlcmwgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUveGVuLWV4dGVybmFsL2JzZC1zeXMtcXVldWUtaC1zZWRkZXJ5IC9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlL3hlbi1leHRl
cm5hbC9ic2Qtc3lzLXF1ZXVlLmggLS1wcmVmaXg9bGlieGwgPl9saWJ4bF9saXN0LmgubmV3
CmlmICEgY21wIC1zIF9saWJ4bF9saXN0LmgubmV3IF9saWJ4bF9saXN0Lmg7IHRoZW4gbXYg
LWYgX2xpYnhsX2xpc3QuaC5uZXcgX2xpYnhsX2xpc3QuaDsgZWxzZSBybSAtZiBfbGlieGxf
bGlzdC5oLm5ldzsgZmkKcm0gLWYgX3BhdGhzLmgudG1wLnRtcDsgIGVjaG8gIlNCSU5ESVI9
XCIvdXNyL3hlbjQyL3NiaW5cIiIgPj5fcGF0aHMuaC50bXAudG1wOyAgZWNobyAiQklORElS
PVwiL3Vzci94ZW40Mi9iaW5cIiIgPj5fcGF0aHMuaC50bXAudG1wOyAgZWNobyAiTElCRVhF
Qz1cIi91c3IveGVuNDIvbGliZXhlY1wiIiA+Pl9wYXRocy5oLnRtcC50bXA7ICBlY2hvICJM
SUJESVI9XCIvdXNyL3hlbjQyL2xpYlwiIiA+Pl9wYXRocy5oLnRtcC50bXA7ICBlY2hvICJT
SEFSRURJUj1cIi91c3IveGVuNDIvc2hhcmVcIiIgPj5fcGF0aHMuaC50bXAudG1wOyAgZWNo
byAiUFJJVkFURV9CSU5ESVI9XCIvdXNyL3hlbjQyL2JpblwiIiA+Pl9wYXRocy5oLnRtcC50
bXA7ICBlY2hvICJYRU5GSVJNV0FSRURJUj1cIi91c3IveGVuNDIvbGliL3hlbi9ib290XCIi
ID4+X3BhdGhzLmgudG1wLnRtcDsgIGVjaG8gIlhFTl9DT05GSUdfRElSPVwiL3Vzci94ZW40
Mi9ldGMveGVuXCIiID4+X3BhdGhzLmgudG1wLnRtcDsgIGVjaG8gIlhFTl9TQ1JJUFRfRElS
PVwiL3Vzci94ZW40Mi9ldGMveGVuL3NjcmlwdHNcIiIgPj5fcGF0aHMuaC50bXAudG1wOyAg
ZWNobyAiWEVOX0xPQ0tfRElSPVwiL3Vzci94ZW40Mi92YXIvbGliXCIiID4+X3BhdGhzLmgu
dG1wLnRtcDsgIGVjaG8gIlhFTl9SVU5fRElSPVwiL3Vzci94ZW40Mi92YXIvcnVuL3hlblwi
IiA+Pl9wYXRocy5oLnRtcC50bXA7ICBlY2hvICJYRU5fUEFHSU5HX0RJUj1cIi91c3IveGVu
NDIvdmFyL2xpYi94ZW4veGVucGFnaW5nXCIiID4+X3BhdGhzLmgudG1wLnRtcDsgCWlmICEg
Y21wIC1zIF9wYXRocy5oLnRtcC50bXAgX3BhdGhzLmgudG1wOyB0aGVuIG12IC1mIF9wYXRo
cy5oLnRtcC50bXAgX3BhdGhzLmgudG1wOyBlbHNlIHJtIC1mIF9wYXRocy5oLnRtcC50bXA7
IGZpCnNlZCAtZSAicy9cKFtePV0qXCk9XCguKlwpLyNkZWZpbmUgXDEgXDIvZyIgX3BhdGhz
LmgudG1wID5fcGF0aHMuaC4yLnRtcApybSAtZiBfcGF0aHMuaC50bXAKaWYgISBjbXAgLXMg
X3BhdGhzLmguMi50bXAgX3BhdGhzLmg7IHRoZW4gbXYgLWYgX3BhdGhzLmguMi50bXAgX3Bh
dGhzLmg7IGVsc2Ugcm0gLWYgX3BhdGhzLmguMi50bXA7IGZpCi91c3IvcGtnL2Jpbi9wZXJs
IC13IGxpYnhsX3NhdmVfbXNnc19nZW4ucGwgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0Lmgg
Pl9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5oLm5ldwppZiAhIGNtcCAtcyBfbGlieGxfc2F2
ZV9tc2dzX2NhbGxvdXQuaC5uZXcgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0Lmg7IHRoZW4g
bXYgLWYgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmgubmV3IF9saWJ4bF9zYXZlX21zZ3Nf
Y2FsbG91dC5oOyBlbHNlIHJtIC1mIF9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5oLm5ldzsg
ZmkKL3Vzci9wa2cvYmluL3BlcmwgLXcgbGlieGxfc2F2ZV9tc2dzX2dlbi5wbCBfbGlieGxf
c2F2ZV9tc2dzX2hlbHBlci5oID5fbGlieGxfc2F2ZV9tc2dzX2hlbHBlci5oLm5ldwppZiAh
IGNtcCAtcyBfbGlieGxfc2F2ZV9tc2dzX2hlbHBlci5oLm5ldyBfbGlieGxfc2F2ZV9tc2dz
X2hlbHBlci5oOyB0aGVuIG12IC1mIF9saWJ4bF9zYXZlX21zZ3NfaGVscGVyLmgubmV3IF9s
aWJ4bF9zYXZlX21zZ3NfaGVscGVyLmg7IGVsc2Ugcm0gLWYgX2xpYnhsX3NhdmVfbXNnc19o
ZWxwZXIuaC5uZXc7IGZpCnB5dGhvbjIuNyBnZW50eXBlcy5weSBsaWJ4bF90eXBlcy5pZGwg
X19saWJ4bF90eXBlcy5oIF9fbGlieGxfdHlwZXNfanNvbi5oIF9fbGlieGxfdHlwZXMuYwpQ
YXJzaW5nIGxpYnhsX3R5cGVzLmlkbApvdXRwdXR0aW5nIGxpYnhsIHR5cGUgZGVmaW5pdGlv
bnMgdG8gX19saWJ4bF90eXBlcy5oCm91dHB1dHRpbmcgbGlieGwgSlNPTiBkZWZpbml0aW9u
cyB0byBfX2xpYnhsX3R5cGVzX2pzb24uaApvdXRwdXR0aW5nIGxpYnhsIHR5cGUgaW1wbGVt
ZW50YXRpb25zIHRvIF9fbGlieGxfdHlwZXMuYwppZiAhIGNtcCAtcyBfX2xpYnhsX3R5cGVz
LmggX2xpYnhsX3R5cGVzLmg7IHRoZW4gbXYgLWYgX19saWJ4bF90eXBlcy5oIF9saWJ4bF90
eXBlcy5oOyBlbHNlIHJtIC1mIF9fbGlieGxfdHlwZXMuaDsgZmkKaWYgISBjbXAgLXMgX19s
aWJ4bF90eXBlc19qc29uLmggX2xpYnhsX3R5cGVzX2pzb24uaDsgdGhlbiBtdiAtZiBfX2xp
YnhsX3R5cGVzX2pzb24uaCBfbGlieGxfdHlwZXNfanNvbi5oOyBlbHNlIHJtIC1mIF9fbGli
eGxfdHlwZXNfanNvbi5oOyBmaQppZiAhIGNtcCAtcyBfX2xpYnhsX3R5cGVzLmMgX2xpYnhs
X3R5cGVzLmM7IHRoZW4gbXYgLWYgX19saWJ4bF90eXBlcy5jIF9saWJ4bF90eXBlcy5jOyBl
bHNlIHJtIC1mIF9fbGlieGxfdHlwZXMuYzsgZmkKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxs
IC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAt
RF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLl9saWJ4bC5hcGktZm9yLWNoZWNrLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9pbmNsdWRlIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9jb25maWcuaCAgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
aW5jbHVkZSAgIC1jIC1FIGxpYnhsLmggIC1JL3Vzci9wa2cvaW5jbHVkZSBcCgktRExJQlhM
X0VYVEVSTkFMX0NBTExFUlNfT05MWT1MSUJYTF9FWFRFUk5BTF9DQUxMRVJTX09OTFkgXAoJ
Pl9saWJ4bC5hcGktZm9yLWNoZWNrLm5ldwptdiAtZiBfbGlieGwuYXBpLWZvci1jaGVjay5u
ZXcgX2xpYnhsLmFwaS1mb3ItY2hlY2sKL3Vzci9wa2cvYmluL3BlcmwgY2hlY2stbGlieGwt
YXBpLXJ1bGVzIF9saWJ4bC5hcGktZm9yLWNoZWNrCnRvdWNoIGxpYnhsLmFwaS1vawpnY2Mg
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGwuby5kIC1m
bm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxl
bmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMvY29uZmlnLmggICAtYyAtbyB4bC5vIHhsLmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
eGxfY21kaW1wbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMg
LXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGli
eGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgIC1jIC1vIHhsX2NtZGlt
cGwubyB4bF9jbWRpbXBsLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAueGxfY21kdGFibGUuby5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0
aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvaW5jbHVkZSAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvY29uZmlnLmggICAtYyAtbyB4bF9jbWR0YWJsZS5vIHhsX2NtZHRhYmxlLmMg
IC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAt
bTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAueGxfc3hwLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9u
cyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFs
IC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9pbmNsdWRlICAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9s
aWJ4bCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLWluY2x1ZGUg
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAgLWMg
LW8geGxfc3hwLm8geGxfc3hwLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2NmZ195Lm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1s
ZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAgLWMgLW8gbGlieGx1X2Nm
Z195Lm8gbGlieGx1X2NmZ195LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2NmZ19sLm8uZCAt
Zm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1s
ZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAgLWMgLW8gbGlieGx1X2Nm
Z19sLm8gbGlieGx1X2NmZ19sLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5v
LW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9
Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2NmZy5vLmQgLWZu
by1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVu
Z3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV9jZmcu
byBsaWJ4bHVfY2ZnLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2Rpc2tfbC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV9kaXNrX2wu
byBsaWJ4bHVfZGlza19sLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9t
aXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251
OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X2Rpc2suby5kIC1mbm8t
b3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0
aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgICAtYyAtbyBsaWJ4bHVfZGlzay5v
IGxpYnhsdV9kaXNrLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X3ZpZi5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1X
bWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV92aWYubyBsaWJ4
bHVfdmlmLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGx1X3BjaS5vLmQgLWZuby1vcHRpbWl6ZS1z
aWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2lu
Zy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3Jt
YXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIGxpYnhsdV9wY2kubyBsaWJ4bHVfcGNp
LmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgLXB0aHJlYWQgLVdsLC1zb25hbWUgLVds
LGxpYnhsdXRpbC5zby4xLjAgLXNoYXJlZCAtbyBsaWJ4bHV0aWwuc28uMS4wLjAgbGlieGx1
X2NmZ195Lm8gbGlieGx1X2NmZ19sLm8gbGlieGx1X2NmZy5vIGxpYnhsdV9kaXNrX2wubyBs
aWJ4bHVfZGlzay5vIGxpYnhsdV92aWYubyBsaWJ4bHVfcGNpLm8gICAtTC91c3IvcGtnL2xp
YgpsbiAtc2YgbGlieGx1dGlsLnNvLjEuMC4wIGxpYnhsdXRpbC5zby4xLjAKbG4gLXNmIGxp
YnhsdXRpbC5zby4xLjAgbGlieGx1dGlsLnNvCnB5dGhvbjIuNyBnZW50eXBlcy5weSBsaWJ4
bF90eXBlc19pbnRlcm5hbC5pZGwgX19saWJ4bF90eXBlc19pbnRlcm5hbC5oIF9fbGlieGxf
dHlwZXNfaW50ZXJuYWxfanNvbi5oIF9fbGlieGxfdHlwZXNfaW50ZXJuYWwuYwpQYXJzaW5n
IGxpYnhsX3R5cGVzX2ludGVybmFsLmlkbApvdXRwdXR0aW5nIGxpYnhsIHR5cGUgZGVmaW5p
dGlvbnMgdG8gX19saWJ4bF90eXBlc19pbnRlcm5hbC5oCm91dHB1dHRpbmcgbGlieGwgSlNP
TiBkZWZpbml0aW9ucyB0byBfX2xpYnhsX3R5cGVzX2ludGVybmFsX2pzb24uaApvdXRwdXR0
aW5nIGxpYnhsIHR5cGUgaW1wbGVtZW50YXRpb25zIHRvIF9fbGlieGxfdHlwZXNfaW50ZXJu
YWwuYwppZiAhIGNtcCAtcyBfX2xpYnhsX3R5cGVzX2ludGVybmFsLmggX2xpYnhsX3R5cGVz
X2ludGVybmFsLmg7IHRoZW4gbXYgLWYgX19saWJ4bF90eXBlc19pbnRlcm5hbC5oIF9saWJ4
bF90eXBlc19pbnRlcm5hbC5oOyBlbHNlIHJtIC1mIF9fbGlieGxfdHlwZXNfaW50ZXJuYWwu
aDsgZmkKaWYgISBjbXAgLXMgX19saWJ4bF90eXBlc19pbnRlcm5hbF9qc29uLmggX2xpYnhs
X3R5cGVzX2ludGVybmFsX2pzb24uaDsgdGhlbiBtdiAtZiBfX2xpYnhsX3R5cGVzX2ludGVy
bmFsX2pzb24uaCBfbGlieGxfdHlwZXNfaW50ZXJuYWxfanNvbi5oOyBlbHNlIHJtIC1mIF9f
bGlieGxfdHlwZXNfaW50ZXJuYWxfanNvbi5oOyBmaQppZiAhIGNtcCAtcyBfX2xpYnhsX3R5
cGVzX2ludGVybmFsLmMgX2xpYnhsX3R5cGVzX2ludGVybmFsLmM7IHRoZW4gbXYgLWYgX19s
aWJ4bF90eXBlc19pbnRlcm5hbC5jIF9saWJ4bF90eXBlc19pbnRlcm5hbC5jOyBlbHNlIHJt
IC1mIF9fbGlieGxfdHlwZXNfaW50ZXJuYWwuYzsgZmkKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmZsZXhhcnJheS5vLmQgLWZuby1vcHRpbWl6
ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlz
c2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdm
b3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xz
L2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9jb25maWcuaCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBm
bGV4YXJyYXkubyBmbGV4YXJyYXkuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1m
bm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXIt
c3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bC5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGwubyBsaWJ4bC5jICAtSS91c3IvcGtn
L2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5v
LXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMg
LVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAt
TUYgLmxpYnhsX2NyZWF0ZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vy
cm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVdu
by1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4g
LWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8g
bGlieGxfY3JlYXRlLm8gbGlieGxfY3JlYXRlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2Mg
IC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGxfZG0u
by5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16
ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1vIGxpYnhsX2RtLm8gbGlieGxf
ZG0uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2lu
dGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0
cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hF
Tl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF9wY2kuby5kIC1mbm8tb3B0aW1pemUtc2libGlu
Zy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVj
bGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5v
bmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
aW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5z
dG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRl
ICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29u
ZmlnLmggIC1jIC1vIGxpYnhsX3BjaS5vIGxpYnhsX3BjaS5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxp
YnhsX2RvbS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
Zm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0
aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfZG9t
Lm8gbGlieGxfZG9tLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQt
ZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkg
LVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVu
dCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGxfZXhlYy5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1X
bWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfZXhlYy5vIGxpYnhsX2V4ZWMuYyAgLUkv
dXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQg
LWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90
b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19f
IC1NTUQgLU1GIC5saWJ4bF94c2hlbHAuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRp
b25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVy
YWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
bGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmgg
IC1jIC1vIGxpYnhsX3hzaGVscC5vIGxpYnhsX3hzaGVscC5jICAtSS91c3IvcGtnL2luY2x1
ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmlj
dC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxp
YnhsX2RldmljZS5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1X
bm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNs
YXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMg
LXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGli
eGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxf
ZGV2aWNlLm8gbGlieGxfZGV2aWNlLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAt
Zm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGlieGxfaW50ZXJuYWwu
by5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16
ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0
ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3Qv
eGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1vIGxpYnhsX2ludGVybmFsLm8g
bGlieGxfaW50ZXJuYWwuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF91dGlscy5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfdXRpbHMubyBsaWJ4bF91dGlscy5j
ICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIg
LW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0
LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RP
T0xTX18gLU1NRCAtTUYgLmxpYnhsX3V1aWQuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1j
YWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFy
YXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxp
dGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5j
bHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9y
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAt
aW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmln
LmggIC1jIC1vIGxpYnhsX3V1aWQubyBsaWJ4bF91dWlkLmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGli
eGxfanNvbi5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
Zm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0
aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfanNv
bi5vIGxpYnhsX2pzb24uYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21p
dC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVt
ZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF9hb3V0aWxzLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5n
dGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBsaWJ4bF9hb3V0aWxzLm8gbGlieGxfYW91
dGlscy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBv
aW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9f
WEVOX1RPT0xTX18gLU1NRCAtTUYgLmxpYnhsX251bWEuby5kIC1mbm8tb3B0aW1pemUtc2li
bGluZy1jYWxscyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3Npbmct
ZGVjbGFyYXRpb25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0
LW5vbmxpdGVyYWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94
ZW5zdG9yZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNs
dWRlICAtaW5jbHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
Y29uZmlnLmggIC1jIC1vIGxpYnhsX251bWEubyBsaWJ4bF9udW1hLmMgIC1JL3Vzci9wa2cv
aW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8t
c3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAt
V2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1N
RiAubGlieGxfc2F2ZV9jYWxsb3V0Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMg
IC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9u
cyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFs
IC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1
ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAt
YyAtbyBsaWJ4bF9zYXZlX2NhbGxvdXQubyBsaWJ4bF9zYXZlX2NhbGxvdXQuYyAgLUkvdXNy
L3BrZy9pbmNsdWRlCi91c3IvcGtnL2Jpbi9wZXJsIC13IGxpYnhsX3NhdmVfbXNnc19nZW4u
cGwgX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmMgPl9saWJ4bF9zYXZlX21zZ3NfY2FsbG91
dC5jLm5ldwppZiAhIGNtcCAtcyBfbGlieGxfc2F2ZV9tc2dzX2NhbGxvdXQuYy5uZXcgX2xp
YnhsX3NhdmVfbXNnc19jYWxsb3V0LmM7IHRoZW4gbXYgLWYgX2xpYnhsX3NhdmVfbXNnc19j
YWxsb3V0LmMubmV3IF9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5jOyBlbHNlIHJtIC1mIF9s
aWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5jLm5ldzsgZmkKZ2NjICAtTzEgLWZuby1vbWl0LWZy
YW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1X
YWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
ICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLl9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5v
LmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXpl
cm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90
b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gX2xpYnhsX3NhdmVfbXNnc19j
YWxsb3V0Lm8gX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmMgIC1JL3Vzci9wa2cvaW5jbHVk
ZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAubGli
eGxfcW1wLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1m
b3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRo
cmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBsaWJ4bF9xbXAu
byBsaWJ4bF9xbXAuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1m
cmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAt
V2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4bF9ldmVudC5vLmQgLWZuby1vcHRp
bWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1X
bWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4y
LjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9s
aWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8u
Li90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfZXZlbnQubyBsaWJ4bF9ldmVudC5jICAt
SS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02
NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXBy
b3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xT
X18gLU1NRCAtTUYgLmxpYnhsX2Zvcmsuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxs
cyAgLVdlcnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRp
b25zIC1Xbm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVy
YWwgLUkuIC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
bGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVk
ZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5j
bHVkZSAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmgg
IC1jIC1vIGxpYnhsX2ZvcmsubyBsaWJ4bF9mb3JrLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpn
Y2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAub3NkZXBz
Lm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQt
emVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAt
SS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVu
LTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9v
bHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBvc2RlcHMubyBvc2RlcHMu
YyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVy
IC1tNjQgLWcgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmlj
dC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9U
T09MU19fIC1NTUQgLU1GIC5saWJ4bF9wYXRocy5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNs
YXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9u
bGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9p
bmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0
b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25m
aWcuaCAgLWMgLW8gbGlieGxfcGF0aHMubyBsaWJ4bF9wYXRocy5jICAtSS91c3IvcGtnL2lu
Y2x1ZGUKZ2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0
cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdk
ZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYg
LmxpYnhsX2Jvb3Rsb2FkZXIuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdl
cnJvciAtV25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1X
bm8tZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUku
IC1mUElDIC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2lu
Y2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAv
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1v
IGxpYnhsX2Jvb3Rsb2FkZXIubyBsaWJ4bF9ib290bG9hZGVyLmMgIC1JL3Vzci9wa2cvaW5j
bHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0IC1nIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2Rl
Y2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNfXyAtTU1EIC1NRiAu
bGlieGxfbm9ibGt0YXAyLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJy
b3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25v
LWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAt
ZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNs
dWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1J
L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBs
aWJ4bF9ub2Jsa3RhcDIubyBsaWJ4bF9ub2Jsa3RhcDIuYyAgLUkvdXNyL3BrZy9pbmNsdWRl
CmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFy
YXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5saWJ4
bF9jcHVpZC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8t
Zm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJh
dGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0
aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfY3B1
aWQubyBsaWJ4bF9jcHVpZC5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEgLWZuby1v
bWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdu
dTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxpYnhsX3g4Ni5vLmQgLWZuby1v
cHRpbWl6ZS1zaWJsaW5nLWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3Ro
IC1XbWlzc2luZy1kZWNsYXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1l
bnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29s
cy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4
bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4u
Ly4uL3Rvb2xzL3hlbnN0b3JlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1pbmNsdWRlIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9jb25maWcuaCAgLWMgLW8gbGlieGxfeDg2Lm8gbGlieGxfeDg2LmMgIC1J
L3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9pbnRlciAtbTY0
IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19YRU5fVE9PTFNf
XyAtTU1EIC1NRiAubGlieGxfbmV0YnNkLm8uZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2Fs
bHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0
aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRl
cmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8u
Li8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xz
L2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWlu
Y2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5o
ICAtYyAtbyBsaWJ4bF9uZXRic2QubyBsaWJ4bF9uZXRic2QuYyAgLUkvdXNyL3BrZy9pbmNs
dWRlCmdjYyAgLU8xIC1mbm8tb21pdC1mcmFtZS1wb2ludGVyIC1tNjQgLWcgLWZuby1zdHJp
Y3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50ICAgLURfX1hFTl9UT09MU19fIC1NTUQgLU1GIC5f
bGlieGxfdHlwZXMuby5kIC1mbm8tb3B0aW1pemUtc2libGluZy1jYWxscyAgLVdlcnJvciAt
V25vLWZvcm1hdC16ZXJvLWxlbmd0aCAtV21pc3NpbmctZGVjbGFyYXRpb25zIC1Xbm8tZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1XZm9ybWF0LW5vbmxpdGVyYWwgLUkuIC1mUElD
IC1wdGhyZWFkIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xp
YnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
LUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9v
dC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9yZSAtSS9yb290L3hlbi00
LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlICAtaW5jbHVkZSAvcm9vdC94
ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvY29uZmlnLmggIC1jIC1vIF9saWJ4
bF90eXBlcy5vIF9saWJ4bF90eXBlcy5jICAtSS91c3IvcGtnL2luY2x1ZGUKZ2NjICAtTzEg
LWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAt
c3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgICAtRF9fWEVOX1RPT0xTX18gLU1NRCAtTUYgLmxpYnhsX2ZsYXNrLm8u
ZCAtZm5vLW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVy
by1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVy
LXN0YXRlbWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xz
L2xpYnhsLy4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBsaWJ4bF9mbGFzay5vIGxpYnhs
X2ZsYXNrLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUt
cG9pbnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwg
LVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1E
X19YRU5fVE9PTFNfXyAtTU1EIC1NRiAuX2xpYnhsX3R5cGVzX2ludGVybmFsLm8uZCAtZm5v
LW9wdGltaXplLXNpYmxpbmctY2FsbHMgIC1XZXJyb3IgLVduby1mb3JtYXQtemVyby1sZW5n
dGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucyAtV25vLWRlY2xhcmF0aW9uLWFmdGVyLXN0YXRl
bWVudCAtV2Zvcm1hdC1ub25saXRlcmFsIC1JLiAtZlBJQyAtcHRocmVhZCAtSS9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1JL3Jvb3QveGVuLTQuMi4wL3Rv
b2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjIC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMveGVuc3RvcmUgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAgLWluY2x1ZGUgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2NvbmZpZy5oICAtYyAtbyBfbGlieGxfdHlwZXNfaW50ZXJuYWwubyBf
bGlieGxfdHlwZXNfaW50ZXJuYWwuYyAgLUkvdXNyL3BrZy9pbmNsdWRlCmdjYyAgICAtcHRo
cmVhZCAtV2wsLXNvbmFtZSAtV2wsbGlieGVubGlnaHQuc28uMi4wIC1zaGFyZWQgLW8gbGli
eGVubGlnaHQuc28uMi4wLjAgZmxleGFycmF5Lm8gbGlieGwubyBsaWJ4bF9jcmVhdGUubyBs
aWJ4bF9kbS5vIGxpYnhsX3BjaS5vIGxpYnhsX2RvbS5vIGxpYnhsX2V4ZWMubyBsaWJ4bF94
c2hlbHAubyBsaWJ4bF9kZXZpY2UubyBsaWJ4bF9pbnRlcm5hbC5vIGxpYnhsX3V0aWxzLm8g
bGlieGxfdXVpZC5vIGxpYnhsX2pzb24ubyBsaWJ4bF9hb3V0aWxzLm8gbGlieGxfbnVtYS5v
IGxpYnhsX3NhdmVfY2FsbG91dC5vIF9saWJ4bF9zYXZlX21zZ3NfY2FsbG91dC5vIGxpYnhs
X3FtcC5vIGxpYnhsX2V2ZW50Lm8gbGlieGxfZm9yay5vIG9zZGVwcy5vIGxpYnhsX3BhdGhz
Lm8gbGlieGxfYm9vdGxvYWRlci5vIGxpYnhsX25vYmxrdGFwMi5vIGxpYnhsX2NwdWlkLm8g
bGlieGxfeDg2Lm8gbGlieGxfbmV0YnNkLm8gX2xpYnhsX3R5cGVzLm8gbGlieGxfZmxhc2su
byBfbGlieGxfdHlwZXNfaW50ZXJuYWwubyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwv
Li4vLi4vdG9vbHMvbGlieGMvbGlieGVuY3RybC5zbyAvcm9vdC94ZW4tNC4yLjAvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvbGlieGMvbGlieGVuZ3Vlc3Quc28gL3Jvb3QveGVuLTQuMi4w
L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL3hlbnN0b3JlL2xpYnhlbnN0b3JlLnNvICAtbHV0
aWwgICAtbHlhamwgIC1ML3Vzci9wa2cvbGliCmxuIC1zZiBsaWJ4ZW5saWdodC5zby4yLjAu
MCBsaWJ4ZW5saWdodC5zby4yLjAKbG4gLXNmIGxpYnhlbmxpZ2h0LnNvLjIuMCBsaWJ4ZW5s
aWdodC5zbwpnY2MgICAgLXB0aHJlYWQgLW8geGwgeGwubyB4bF9jbWRpbXBsLm8geGxfY21k
dGFibGUubyB4bF9zeHAubyBsaWJ4bHV0aWwuc28gL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xp
YnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xpYnhlbmxpZ2h0LnNvIC1XbCwtcnBhdGgtbGluaz0v
cm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLVdsLC1ycGF0
aC1saW5rPS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy94ZW5zdG9y
ZSAgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhjL2xpYnhl
bmN0cmwuc28gLWx5YWpsICAtTC91c3IvcGtnL2xpYgpweXRob24yLjcgZ2VudGVzdC5weSBs
aWJ4bF90eXBlcy5pZGwgdGVzdGlkbC5jLm5ldwpQYXJzaW5nIGxpYnhsX3R5cGVzLmlkbApt
diB0ZXN0aWRsLmMubmV3IHRlc3RpZGwuYwpnY2MgIC1PMSAtZm5vLW9taXQtZnJhbWUtcG9p
bnRlciAtbTY0IC1nIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdz
dHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAgIC1EX19Y
RU5fVE9PTFNfXyAtTU1EIC1NRiAudGVzdGlkbC5vLmQgLWZuby1vcHRpbWl6ZS1zaWJsaW5n
LWNhbGxzICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNs
YXJhdGlvbnMgLVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9u
bGl0ZXJhbCAtSS4gLWZQSUMgLXB0aHJlYWQgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvaW5jbHVkZSAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90
b29scy9saWJ4bCAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9s
aWJ4YyAtSS9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRl
IC1JL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1j
IC1vIHRlc3RpZGwubyB0ZXN0aWRsLmMgIC1JL3Vzci9wa2cvaW5jbHVkZQpnY2MgICAgLXB0
aHJlYWQgLW8gdGVzdGlkbCB0ZXN0aWRsLm8gbGlieGx1dGlsLnNvIC9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5zbyAtV2wsLXJw
YXRoLWxpbms9L3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhj
IC1XbCwtcnBhdGgtbGluaz0vcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9v
bHMveGVuc3RvcmUgIC9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9s
aWJ4Yy9saWJ4ZW5jdHJsLnNvICAtTC91c3IvcGtnL2xpYgpsZDogd2FybmluZzogbGlieWFq
bC5zby4yLCBuZWVkZWQgYnkgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rv
b2xzL2xpYnhsL2xpYnhlbmxpZ2h0LnNvLCBub3QgZm91bmQgKHRyeSB1c2luZyAtcnBhdGgg
b3IgLXJwYXRoLWxpbmspCi9yb290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29s
cy9saWJ4bC9saWJ4ZW5saWdodC5zbzogdW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9w
YXJzZScKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xp
YnhlbmxpZ2h0LnNvOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2NvbXBsZXRlX3Bh
cnNlJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwvbGli
eGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfZ2VuX251bGwnCi9y
b290L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdo
dC5zbzogdW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9nZW5fYXJyYXlfb3BlbicKL3Jv
b3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xpYnhlbmxpZ2h0
LnNvOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2dlbl9zdHJpbmcnCi9yb290L3hl
bi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5zbzog
dW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9nZW5fbWFwX2Nsb3NlJwovcm9vdC94ZW4t
NC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVu
ZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfZ2VuX2dldF9idWYnCi9yb290L3hlbi00LjIu
MC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5zbzogdW5kZWZp
bmVkIHJlZmVyZW5jZSB0byBgeWFqbF9mcmVlJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGli
eGwvLi4vLi4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVu
Y2UgdG8gYHlhamxfZ2VuX2FsbG9jJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8g
YHlhamxfZ2VuX2FycmF5X2Nsb3NlJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4v
Li4vdG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8g
YHlhamxfZ2VuX21hcF9vcGVuJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4v
dG9vbHMvbGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlh
amxfZ2V0X2Vycm9yJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMv
bGlieGwvbGlieGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfZnJl
ZV9lcnJvcicKL3Jvb3QveGVuLTQuMi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhs
L2xpYnhlbmxpZ2h0LnNvOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2dlbl9pbnRl
Z2VyJwovcm9vdC94ZW4tNC4yLjAvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvbGlieGwvbGli
eGVubGlnaHQuc286IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHlhamxfYWxsb2MnCi9yb290
L3hlbi00LjIuMC90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4bC9saWJ4ZW5saWdodC5z
bzogdW5kZWZpbmVkIHJlZmVyZW5jZSB0byBgeWFqbF9nZW5fZnJlZScKL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2xpYnhsL2xpYnhlbmxpZ2h0LnNvOiB1bmRl
ZmluZWQgcmVmZXJlbmNlIHRvIGB5YWpsX2dlbl9ib29sJwpnbWFrZVszXTogKioqIFt0ZXN0
aWRsXSBFcnJvciAxCmdtYWtlWzNdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQu
Mi4wL3Rvb2xzL2xpYnhsJwpnbWFrZVsyXTogKioqIFtzdWJkaXItaW5zdGFsbC1saWJ4bF0g
RXJyb3IgMgpnbWFrZVsyXTogTGVhdmluZyBkaXJlY3RvcnkgYC9yb290L3hlbi00LjIuMC90
b29scycKZ21ha2VbMV06ICoqKiBbc3ViZGlycy1pbnN0YWxsXSBFcnJvciAyCmdtYWtlWzFd
OiBMZWF2aW5nIGRpcmVjdG9yeSBgL3Jvb3QveGVuLTQuMi4wL3Rvb2xzJwpnbWFrZTogKioq
IFtpbnN0YWxsLXRvb2xzXSBFcnJvciAyCmRvbTAjIGV4aXQKClNjcmlwdCBkb25lIG9uIFR1
ZSBEZWMgIDQgMTM6MzA6MTQgMjAxMgo=
--------------070505000000060005050803
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------070505000000060005050803--


From xen-devel-bounces@lists.xen.org Tue Dec 04 14:11:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftD6-0003vh-4l; Tue, 04 Dec 2012 14:10:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TftD4-0003v5-RU
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:10:54 +0000
Received: from [85.158.138.51:36923] by server-3.bemta-3.messagelabs.com id
	4B/1C-31566-D640EB05; Tue, 04 Dec 2012 14:10:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354630253!27375733!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19307 invoked from network); 4 Dec 2012 14:10:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 14:10:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 13:45:26 +0000
Message-Id: <50BE0C7F02000078000ADC8D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 13:45:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <dunlapg@umich.edu>
References: <1354552154.18784.9.camel@iceland> <50BCF4F9.8010601@citrix.com>
	<CAFLBxZak4Bj2L9LCWMpp8Te95nj+SF-SDSYoMTXR926KHfx4QQ@mail.gmail.com>
In-Reply-To: <CAFLBxZak4Bj2L9LCWMpp8Te95nj+SF-SDSYoMTXR926KHfx4QQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Wei Liu <Wei.Liu2@citrix.com>, David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 12:29, George Dunlap <dunlapg@umich.edu> wrote:
> On Mon, Dec 3, 2012 at 6:52 PM, David Vrabel <david.vrabel@citrix.com> wrote:
> 
>> On 03/12/12 16:29, Wei Liu wrote:
>> > Hi all
>> >
>> > There has been discussion on extending number of event channels back in
>> > September [0].
>>
>> It seems that the decision has been made to go for this N-level
>> approach.  Were any other methods considered?
>>
>> Would a per-VCPU ring of pending events work?  The ABI will be easier to
>> extend in the future for more event channels.  The guest side code will
>> be simpler.  It will be easier to fairly service the events as they will
>> be processed in the order they were raised.
>>
> 
> Not having looked in depth at either solution, it does seem like having
> something like this, rather than an ever-growing array of bits that needs
> to be examined, would be simpler and more scalable.

Surely more scalable, but simpler?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:11:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftD6-0003vh-4l; Tue, 04 Dec 2012 14:10:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TftD4-0003v5-RU
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:10:54 +0000
Received: from [85.158.138.51:36923] by server-3.bemta-3.messagelabs.com id
	4B/1C-31566-D640EB05; Tue, 04 Dec 2012 14:10:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354630253!27375733!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19307 invoked from network); 4 Dec 2012 14:10:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 14:10:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 13:45:26 +0000
Message-Id: <50BE0C7F02000078000ADC8D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 13:45:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <dunlapg@umich.edu>
References: <1354552154.18784.9.camel@iceland> <50BCF4F9.8010601@citrix.com>
	<CAFLBxZak4Bj2L9LCWMpp8Te95nj+SF-SDSYoMTXR926KHfx4QQ@mail.gmail.com>
In-Reply-To: <CAFLBxZak4Bj2L9LCWMpp8Te95nj+SF-SDSYoMTXR926KHfx4QQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Wei Liu <Wei.Liu2@citrix.com>, David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 12:29, George Dunlap <dunlapg@umich.edu> wrote:
> On Mon, Dec 3, 2012 at 6:52 PM, David Vrabel <david.vrabel@citrix.com>wrote:
> 
>> On 03/12/12 16:29, Wei Liu wrote:
>> > Hi all
>> >
>> > There has been discussion on extending number of event channels back in
>> > September [0].
>>
>> It seems that the decision has been made to go for this N-level
>> approach.  Were any other methods considered?
>>
>> Would a per-VCPU ring of pending events work?  The ABI will be easier to
>> extend in the future for more event channels.  The guest side code will
>> be simpler.  It will be easier to fairly service the events as they will
>> be processed in the order they were raised.
>>
> 
> Not having looked in depth at either solution, it does seem like having
> something like this, rather than an ever-growing array of bits that needs
> to be examined, would be simpler and more scalable.

Surely more scalable, but simpler?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:18:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftKD-0004SO-35; Tue, 04 Dec 2012 14:18:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1TftKB-0004SH-SQ
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:18:15 +0000
Received: from [85.158.139.211:23255] by server-16.bemta-5.messagelabs.com id
	05/EA-21311-6260EB05; Tue, 04 Dec 2012 14:18:14 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1354630679!14793113!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27422 invoked from network); 4 Dec 2012 14:18:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:18:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46532441"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	04 Dec 2012 14:17:59 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Tue, 4 Dec 2012
	09:17:59 -0500
Message-ID: <50BE0615.1090307@citrix.com>
Date: Tue, 4 Dec 2012 14:17:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1354622199-27504-12-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1354622199-27504-12-git-send-email-ian.campbell@citrix.com>
X-Originating-IP: [10.80.2.76]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 12/15] xen: arm: initialise dom_{xen, io,
	cow}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 11:56, Ian Campbell wrote:
> +void __init arch_init_memory(void)
> +{
> +    /*
> +     * Initialise our DOMID_XEN domain.
> +     * Any Xen-heap pages that we will allow to be mapped will have
> +     * their domain field set to dom_xen.
> +     */
> +    dom_xen = domain_create(DOMID_XEN, DOMCRF_dummy, 0);
> +    BUG_ON(IS_ERR(dom_xen));
> +
> +    /*
> +     * Initialise our DOMID_IO domain.
> +     * This domain owns I/O pages that are within the range of the page_info
> +     * array. Mappings occur at the priv of the caller.
> +     */
> +    dom_io = domain_create(DOMID_IO, DOMCRF_dummy, 0);
> +    BUG_ON(IS_ERR(dom_io));
> +
> +    /*
> +     * Initialise our COW domain.
> +     * This domain owns sharable pages.
> +     */
> +    dom_cow = domain_create(DOMID_COW, DOMCRF_dummy, 0);
> +    BUG_ON(IS_ERR(dom_cow));
> +}

This looks like a cut and paste from the x86 code.  Should it be
refactored into a common function?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:19:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftL5-0004WD-Hf; Tue, 04 Dec 2012 14:19:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TftL4-0004Vy-2O
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:19:10 +0000
Received: from [193.109.254.147:51177] by server-9.bemta-14.messagelabs.com id
	1D/FF-30773-D560EB05; Tue, 04 Dec 2012 14:19:09 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354630748!8930375!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29285 invoked from network); 4 Dec 2012 14:19:08 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-13.tower-27.messagelabs.com with SMTP;
	4 Dec 2012 14:19:08 -0000
X-TM-IMSS-Message-ID: <47e388c800040b6a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 47e388c800040b6a ;
	Tue, 4 Dec 2012 09:17:46 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qB4EIvoF014059; 
	Tue, 4 Dec 2012 09:18:58 -0500
Message-ID: <50BE0651.1060502@tycho.nsa.gov>
Date: Tue, 04 Dec 2012 09:18:57 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: D Sundstrom <sunds@peapod.net>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
	<CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
In-Reply-To: <CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/03/2012 07:49 PM, D Sundstrom wrote:
> The issue seems to be that my version of Xen (XenClient XT) must not support
> balloon drivers. Any call to the memory_op hypercall to change the
> reservation terminates my guest with extreme prejudice.
> 
> I'll take that one up with Citrix.  However, can someone explain why
> mapping a grant needs to manipulate the balloon reservation?
> 
> Specifically, in the 3.7-RC7 linux kernel tree, the file
> drivers/xen/balloon.c:
> 
> At line 512 it tries to get a page out of the balloon.  This returns null
> (no page).
> If page.... at line 513 evaluates to false
> At line 518 the else block calls decrease_reservation().
> 
> 
> Thanks
> David
> 

The gntdev driver needs a GFN for the mapped page (this is a hard requirement
for HVM, and also makes PV in-kernel mapping of the page easier iirc), and this
GFN must be unused by the guest (no associated MFN - otherwise it may end up
leaking the MFN until the domain is shut down). Since ballooned-out pages satisfy
these requirements, the gntdev code uses the balloon pool instead of breaking
the GFN/MFN association itself or trying to use the pages beyond the last valid
GFN.

> 
> On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:
> 
>> On 11/12/2012 08:15 AM, D Sundstrom wrote:
>>> Thank you Pablo.
>>>
>>> It makes no difference if I run both the src-add and map from the same
>>> domain or from different DomU domains.
>>> Whichever DomU I run the map function in crashes immediately.
>>
>> Mapping your own grants (which is what the test run you showed did) might
>> cause problems - although it's a bug that needs to be fixed, if so. You
>> may want to try using the vchan-node2 tool (tools/libvchan) for testing
>> and as an example user.
>>
>>> You mention Dom0.  I just want to be clear that I'd like to share
>>> between two DomU domains.  Have you gotten this to work?
>>
>> That was the goal of gntalloc/libvchan - it should work (and has for me).
>>
>>> I also tried the userspace APIs provided by Xen such as
>>> xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
>>> the same driver IOCTLs, so this isn't a surprise.
>>>
>>> I'll need to see if I can get some debug info from the DomU kernel to
>>> make progress.
>>
>> You might want to try booting your domU with console=hvc0 and look at
>> xl console - that will usually give you useful backtraces. Without that,
>> it's rather difficult to tell what the problem is.
>>
>>> If I can get this to work, are there any restrictions on sharing large
>>> amounts of memory?  Say 160Mb?  Or are grant tables intended for a
>>> small number of pages?
>>
>> There are restrictions within the modules (default is 1024 4K pages), and
>> in Xen itself for the number of grant table and maptrack pages - but I
>> think those can be adjusted via a boot parameter. The grant tables aren't
currently intended to share large amounts of memory, so you may run into
>> some inefficiencies when doing the map/unmap. If you're using an IOMMU for
>> one of the domUs, this may end up being especially costly.
>>
>>> Thanks,
>>> David
>>>
>>>
>>>
>>> On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <pllopis@arcos.inf.uc3m.es>
>> wrote:
>>>>
>>>> It is not clear from your output from which domain you are running
>>>> each command. It looks like you are trying to issue a grant and map it
>>>> from within the same domain. That's probably the reason it crashes.
>>>> You are supposed to run this tool from both domains, running the calls
>>>> which interface with gntalloc from one domain, and the calls which
>>>> interface with gntdev from the other domain.
>>>> In any case, the domid you have to specify in the map must be the
>>>> domid of the domain which issued the grant. In other words, when
>>>> creating a grant, the domid which is granted access is specified. When
>>>> mapping a grant, the domid which issued the grant is specified. (i.e.
>>>> If you did "src-add 8" from dom0 you should run map 0 1372 from domU
>>>> 8)
>>>>
>>>>>
>>>


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:21:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:21:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftN8-0004g3-2D; Tue, 04 Dec 2012 14:21:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TftN6-0004fv-VH
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:21:17 +0000
Received: from [85.158.143.35:43994] by server-3.bemta-4.messagelabs.com id
	49/6A-06841-CD60EB05; Tue, 04 Dec 2012 14:21:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1354630874!4362651!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30769 invoked from network); 4 Dec 2012 14:21:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:21:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149084"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:21:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:21:14 +0000
Message-ID: <1354630873.15296.1.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 4 Dec 2012 14:21:13 +0000
In-Reply-To: <50BE0615.1090307@citrix.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1354622199-27504-12-git-send-email-ian.campbell@citrix.com>
	<50BE0615.1090307@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 12/15] xen: arm: initialise dom_{xen, io,
	cow}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 14:17 +0000, David Vrabel wrote:
> On 04/12/12 11:56, Ian Campbell wrote:
> > +void __init arch_init_memory(void)
> > +{
> > +    /*
> > +     * Initialise our DOMID_XEN domain.
> > +     * Any Xen-heap pages that we will allow to be mapped will have
> > +     * their domain field set to dom_xen.
> > +     */
> > +    dom_xen = domain_create(DOMID_XEN, DOMCRF_dummy, 0);
> > +    BUG_ON(IS_ERR(dom_xen));
> > +
> > +    /*
> > +     * Initialise our DOMID_IO domain.
> > +     * This domain owns I/O pages that are within the range of the page_info
> > +     * array. Mappings occur at the priv of the caller.
> > +     */
> > +    dom_io = domain_create(DOMID_IO, DOMCRF_dummy, 0);
> > +    BUG_ON(IS_ERR(dom_io));
> > +
> > +    /*
> > +     * Initialise our COW domain.
> > +     * This domain owns sharable pages.
> > +     */
> > +    dom_cow = domain_create(DOMID_COW, DOMCRF_dummy, 0);
> > +    BUG_ON(IS_ERR(dom_cow));
> > +}
> 
> This looks like a cut and paste from the x86 code.  Should it be
> refactored into a common function?

Yes, I suppose it should. I guess this is the counterpoint to my
observation that there is x86 stuff in generic code ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:22:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:22:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftNu-0004m6-Gf; Tue, 04 Dec 2012 14:22:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TftNt-0004lp-Mn
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:22:05 +0000
Received: from [85.158.138.51:17713] by server-16.bemta-3.messagelabs.com id
	AE/62-07461-C070EB05; Tue, 04 Dec 2012 14:22:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354630921!27377688!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6500 invoked from network); 4 Dec 2012 14:22:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:22:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149110"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:22:01 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:22:01 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Tue, 4 Dec 2012 15:21:52 +0100
Message-ID: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, xen-devel@lists.xen.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] xen-blkback: implement safe iterator for
	the list of persistent grants
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the foreach_grant iterator to a safe version that allows freeing
the element while iterating. Also move the free code in
free_persistent_gnts to prevent freeing the element before the rb_next
call.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: xen-devel@lists.xen.org
---
 drivers/block/xen-blkback/blkback.c |   18 +++++++++++-------
 1 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 74374fb..5ac841f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -161,10 +161,12 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 static void make_response(struct xen_blkif *blkif, u64 id,
 			  unsigned short op, int st);
 
-#define foreach_grant(pos, rbtree, node) \
-	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node); \
+#define foreach_grant_safe(pos, n, rbtree, node) \
+	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
+	     (n) = rb_next(&(pos)->node); \
 	     &(pos)->node != NULL; \
-	     (pos) = container_of(rb_next(&(pos)->node), typeof(*(pos)), node))
+	     (pos) = container_of(n, typeof(*(pos)), node), \
+	     (n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL)
 
 
 static void add_persistent_gnt(struct rb_root *root,
@@ -217,10 +219,11 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 	struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct persistent_gnt *persistent_gnt;
+	struct rb_node *n;
 	int ret = 0;
 	int segs_to_unmap = 0;
 
-	foreach_grant(persistent_gnt, root, node) {
+	foreach_grant_safe(persistent_gnt, n, root, node) {
 		BUG_ON(persistent_gnt->handle ==
 			BLKBACK_INVALID_HANDLE);
 		gnttab_set_unmap_op(&unmap[segs_to_unmap],
@@ -230,9 +233,6 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 			persistent_gnt->handle);
 
 		pages[segs_to_unmap] = persistent_gnt->page;
-		rb_erase(&persistent_gnt->node, root);
-		kfree(persistent_gnt);
-		num--;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
@@ -241,6 +241,10 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 			BUG_ON(ret);
 			segs_to_unmap = 0;
 		}
+
+		rb_erase(&persistent_gnt->node, root);
+		kfree(persistent_gnt);
+		num--;
 	}
 	BUG_ON(num != 0);
 }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:22:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftO3-0004o6-VD; Tue, 04 Dec 2012 14:22:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TftO2-0004nd-1d
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:22:14 +0000
Received: from [85.158.138.51:9345] by server-14.bemta-3.messagelabs.com id
	D8/19-31424-5170EB05; Tue, 04 Dec 2012 14:22:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354630921!27377688!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6611 invoked from network); 4 Dec 2012 14:22:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:22:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149111"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:22:01 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:22:01 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Tue, 4 Dec 2012 15:21:53 +0100
Message-ID: <1354630913-17287-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
References: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, xen-devel@lists.xen.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
	llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Implement a safe version of llist_for_each_entry, and use it in
blkif_free. Previously grants were freed while iterating the list,
which led to dereferences when trying to fetch the next item.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: xen-devel@lists.xen.org
---
 drivers/block/xen-blkfront.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 96e9b00..df21b05 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -143,6 +143,13 @@ static DEFINE_SPINLOCK(minor_lock);
 
 #define DEV_NAME	"xvd"	/* name in /dev */
 
+#define llist_for_each_entry_safe(pos, n, node, member)		\
+	for ((pos) = llist_entry((node), typeof(*(pos)), member),	\
+	     (n) = (pos)->member.next;					\
+	     &(pos)->member != NULL;					\
+	     (pos) = llist_entry(n, typeof(*(pos)), member),		\
+	     (n) = (&(pos)->member != NULL) ? (pos)->member.next : NULL)
+
 static int get_id_from_freelist(struct blkfront_info *info)
 {
 	unsigned long free = info->shadow_free;
@@ -792,6 +799,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 {
 	struct llist_node *all_gnts;
 	struct grant *persistent_gnt;
+	struct llist_node *n;
 
 	/* Prevent new requests being issued until we fix things up. */
 	spin_lock_irq(&info->io_lock);
@@ -804,7 +812,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	/* Remove all persistent grants */
 	if (info->persistent_gnts_c) {
 		all_gnts = llist_del_all(&info->persistent_gnts);
-		llist_for_each_entry(persistent_gnt, all_gnts, node) {
+		llist_for_each_entry_safe(persistent_gnt, n, all_gnts, node) {
 			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
 			__free_page(pfn_to_page(persistent_gnt->pfn));
 			kfree(persistent_gnt);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:22:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:22:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftOC-0004qB-DI; Tue, 04 Dec 2012 14:22:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <s.munaut@whatever-company.com>) id 1TftOA-0004pX-4z
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:22:22 +0000
Received: from [85.158.139.83:16113] by server-7.bemta-5.messagelabs.com id
	74/78-23096-D170EB05; Tue, 04 Dec 2012 14:22:21 +0000
X-Env-Sender: s.munaut@whatever-company.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354630938!28357825!1
X-Originating-IP: [209.85.219.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18657 invoked from network); 4 Dec 2012 14:22:20 -0000
Received: from mail-oa0-f45.google.com (HELO mail-oa0-f45.google.com)
	(209.85.219.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:22:20 -0000
Received: by mail-oa0-f45.google.com with SMTP id i18so4728079oag.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 06:21:19 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=QoC58GuQvbXt6eeAXnZsqVtfjdUMRX5hYAfAwfFsdaM=;
	b=HEs7c6kMca8Owo5MBN5xjFudbQhE15Rq/62361z8XJH8qJAqwn/EwfrtJjQtbf6Hxn
	EZ3NrO3WpnX4fTKTCqXAhq7GX146VimBp4dKD0j2LmnhOwN5gt3/3vlhkWAMx7iEztPS
	4GyTmS5CiQo7TeCOyLA1F0zSQ0yHi/35Au84NP4YZo4bJeSKpfLiUpkgaUXa1DNNOqR+
	PjLOk9wMxuc1K0aK7Disz+6wQlviBssZiZc3VxgF4D4Lxe1/hPe9faq/JGG5FQmBVSuw
	FyDHLv8IONEI5IhDAwijBEtqsnEB6wjtGErMXfrIqYrZt6bjhYy+ldhFi9Oz1bwFIx7g
	E8jw==
MIME-Version: 1.0
Received: by 10.60.31.234 with SMTP id d10mr11797408oei.123.1354630879394;
	Tue, 04 Dec 2012 06:21:19 -0800 (PST)
Received: by 10.76.28.234 with HTTP; Tue, 4 Dec 2012 06:21:19 -0800 (PST)
In-Reply-To: <CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
References: <CAF6-1L51R5yMBNO1wQe7YWoeGoM_bFJcAb0XN0cVg6cm7+Kzaw@mail.gmail.com>
	<1352363172.12977.87.camel@hastur.hellion.org.uk>
	<509D22E6.2090005@citrix.com> <50A27A37.2090506@citrix.com>
	<CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
Date: Tue, 4 Dec 2012 15:21:19 +0100
Message-ID: <CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
From: Sylvain Munaut <s.munaut@whatever-company.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
X-Gm-Message-State: ALoCoQl679luxGyVk/1zur5GUCSMio4rG+Kq5W2ci2v/EPlsEdIPWb9mUDhHY7uEmudkzNRGH7jL
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Custom block script started twice for root block
 but only stopped once
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A bit more info:

I tried using PV-GRUB and this doesn't happen there, so it's only with PyGRUB.

AFAICT it happens in tools/python/xen/xend/XendDomainInfo.py, in the
_configureBootloader method.

_shouldMount will return true, so "dom0.create_vbd(vbd, disk)" will be
executed to give the host access to the device by running the block
script before the actual boot.


[2012-11-26 13:40:44 3469] INFO (XendDomainInfo:3270) Mounting
192.168.2.201 name=rbd;key=rbd rbd-vm test on /dev/xvdp.
[2012-11-26 13:40:44 3469] DEBUG (DevController:95) DevController:
writing {'backend-id': '0', 'virtual-device': '51952', 'device-type':
'disk', 'state': '1', 'backend':
'/local/domain/0/backend/vbd/0/51952'} to
/local/domain/0/device/vbd/51952.
[2012-11-26 13:40:44 3469] DEBUG (DevController:97) DevController:
writing {'domain': 'Domain-0', 'frontend':
'/local/domain/0/device/vbd/51952', 'uuid':
'1845bb27-97b5-ce21-348d-ade6e99829e9', 'bootable': '0', 'dev':
'/dev/xvdp', 'state': '1', 'params': '192.168.2.201 name=rbd;key=rbd
rbd-vm test', 'mode': 'r', 'online': '1', 'frontend-id': '0', 'type':
'rbd'} to /local/domain/0/backend/vbd/0/51952.
[2012-11-26 13:40:44 3469] DEBUG (DevController:144) Waiting for 51952.
[2012-11-26 13:40:44 3469] DEBUG (DevController:628)
hotplugStatusCallback
/local/domain/0/backend/vbd/0/51952/hotplug-status.
[2012-11-26 13:40:45 3469] DEBUG (DevController:628)
hotplugStatusCallback
/local/domain/0/backend/vbd/0/51952/hotplug-status.
[2012-11-26 13:40:45 3469] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-11-26 13:40:45 3469] DEBUG (DevController:144) Waiting for 51952.
[2012-11-26 13:40:45 3469] DEBUG (DevController:628)
hotplugStatusCallback
/local/domain/0/backend/vbd/0/51952/hotplug-status.
[2012-11-26 13:40:45 3469] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-11-26 13:40:45 4455] DEBUG (XendBootloader:113) Launching
bootloader as ['/usr/lib/xen-4.1/bin/pygrub', '--args=console=hvc0
xencons=hvc0', '--output=/var/run/xend/boot/xenbl.7585', '-q',
'/dev/xvdp'].
[2012-11-26 13:40:48 3469] INFO (XendDomainInfo:3289) Unmounting
/dev/xvdp from /dev/xvdp.
[2012-11-26 13:40:48 3469] DEBUG (XendDomainInfo:1276)
XendDomainInfo.destroyDevice: deviceClass = vbd, device = /dev/xvdp

Now in theory the destroyDevice call should have cleared up any
resources, but it seems it didn't ...
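Stated schematically, the sequence above can be sketched as follows (a
hypothetical Python simulation, not the actual xend code; the
BlockScript class and the device names are invented for illustration):

```python
# Simulate attach/detach events seen by a custom block script.
class BlockScript:
    def __init__(self):
        self.active = 0          # number of live attachments

    def add(self, dev):          # block script "add" invocation
        self.active += 1

    def remove(self, dev):       # block script "remove" invocation
        self.active -= 1

script = BlockScript()

# 1. _configureBootloader: _shouldMount is true, so create_vbd attaches
#    the root disk to dom0 and the block script runs a first time.
script.add("/dev/xvdp")

# 2. pygrub reads the kernel/initrd from /dev/xvdp ...

# 3. destroyDevice is called for the dom0 VBD, but per the log above
#    the block script teardown never runs, so this attachment leaks
#    (a correct teardown would do: script.remove("/dev/xvdp")).

# 4. The guest's own VBD triggers the script a second time, and is
#    stopped exactly once at shutdown.
script.add("/dev/xvda")
script.remove("/dev/xvda")

print(script.active)  # 1 -> the bootloader attachment was never released
```

This matches the thread subject: the script is started twice but
stopped only once, leaving one attachment behind.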


Cheers,

    Sylvain

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:22:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:22:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftOC-0004qB-DI; Tue, 04 Dec 2012 14:22:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <s.munaut@whatever-company.com>) id 1TftOA-0004pX-4z
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:22:22 +0000
Received: from [85.158.139.83:16113] by server-7.bemta-5.messagelabs.com id
	74/78-23096-D170EB05; Tue, 04 Dec 2012 14:22:21 +0000
X-Env-Sender: s.munaut@whatever-company.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354630938!28357825!1
X-Originating-IP: [209.85.219.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18657 invoked from network); 4 Dec 2012 14:22:20 -0000
Received: from mail-oa0-f45.google.com (HELO mail-oa0-f45.google.com)
	(209.85.219.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:22:20 -0000
Received: by mail-oa0-f45.google.com with SMTP id i18so4728079oag.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 06:21:19 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=QoC58GuQvbXt6eeAXnZsqVtfjdUMRX5hYAfAwfFsdaM=;
	b=HEs7c6kMca8Owo5MBN5xjFudbQhE15Rq/62361z8XJH8qJAqwn/EwfrtJjQtbf6Hxn
	EZ3NrO3WpnX4fTKTCqXAhq7GX146VimBp4dKD0j2LmnhOwN5gt3/3vlhkWAMx7iEztPS
	4GyTmS5CiQo7TeCOyLA1F0zSQ0yHi/35Au84NP4YZo4bJeSKpfLiUpkgaUXa1DNNOqR+
	PjLOk9wMxuc1K0aK7Disz+6wQlviBssZiZc3VxgF4D4Lxe1/hPe9faq/JGG5FQmBVSuw
	FyDHLv8IONEI5IhDAwijBEtqsnEB6wjtGErMXfrIqYrZt6bjhYy+ldhFi9Oz1bwFIx7g
	E8jw==
MIME-Version: 1.0
Received: by 10.60.31.234 with SMTP id d10mr11797408oei.123.1354630879394;
	Tue, 04 Dec 2012 06:21:19 -0800 (PST)
Received: by 10.76.28.234 with HTTP; Tue, 4 Dec 2012 06:21:19 -0800 (PST)
In-Reply-To: <CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
References: <CAF6-1L51R5yMBNO1wQe7YWoeGoM_bFJcAb0XN0cVg6cm7+Kzaw@mail.gmail.com>
	<1352363172.12977.87.camel@hastur.hellion.org.uk>
	<509D22E6.2090005@citrix.com> <50A27A37.2090506@citrix.com>
	<CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
Date: Tue, 4 Dec 2012 15:21:19 +0100
Message-ID: <CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
From: Sylvain Munaut <s.munaut@whatever-company.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
X-Gm-Message-State: ALoCoQl679luxGyVk/1zur5GUCSMio4rG+Kq5W2ci2v/EPlsEdIPWb9mUDhHY7uEmudkzNRGH7jL
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Custom block script started twice for root block
 but only stopped once
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A bit more info:

I tried using PV-GRUB and this doesn't happen there, so it's only with PyGRUB.

AFAICT it happens in tools/python/xen/xend/XendDomainInfo.py, in the
_configureBootloader method.

_shouldMount will return true and so "dom0.create_vbd(vbd, disk)"
will be executed to give the host access to the device, by running the
block script before the actual boot.


[2012-11-26 13:40:44 3469] INFO (XendDomainInfo:3270) Mounting
192.168.2.201 name=rbd;key=rbd rbd-vm test on /dev/xvdp.
[2012-11-26 13:40:44 3469] DEBUG (DevController:95) DevController:
writing {'backend-id': '0', 'virtual-device': '51952', 'device-type':
'disk', 'state': '1', 'backend':
'/local/domain/0/backend/vbd/0/51952'} to
/local/domain/0/device/vbd/51952.
[2012-11-26 13:40:44 3469] DEBUG (DevController:97) DevController:
writing {'domain': 'Domain-0', 'frontend':
'/local/domain/0/device/vbd/51952', 'uuid':
'1845bb27-97b5-ce21-348d-ade6e99829e9', 'bootable': '0', 'dev':
'/dev/xvdp', 'state': '1', 'params': '192.168.2.201 name=rbd;key=rbd
rbd-vm test', 'mode': 'r', 'online': '1', 'frontend-id': '0', 'type':
'rbd'} to /local/domain/0/backend/vbd/0/51952.
[2012-11-26 13:40:44 3469] DEBUG (DevController:144) Waiting for 51952.
[2012-11-26 13:40:44 3469] DEBUG (DevController:628)
hotplugStatusCallback
/local/domain/0/backend/vbd/0/51952/hotplug-status.
[2012-11-26 13:40:45 3469] DEBUG (DevController:628)
hotplugStatusCallback
/local/domain/0/backend/vbd/0/51952/hotplug-status.
[2012-11-26 13:40:45 3469] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-11-26 13:40:45 3469] DEBUG (DevController:144) Waiting for 51952.
[2012-11-26 13:40:45 3469] DEBUG (DevController:628)
hotplugStatusCallback
/local/domain/0/backend/vbd/0/51952/hotplug-status.
[2012-11-26 13:40:45 3469] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-11-26 13:40:45 4455] DEBUG (XendBootloader:113) Launching
bootloader as ['/usr/lib/xen-4.1/bin/pygrub', '--args=console=hvc0
xencons=hvc0', '--output=/var/run/xend/boot/xenbl.7585', '-q',
'/dev/xvdp'].
[2012-11-26 13:40:48 3469] INFO (XendDomainInfo:3289) Unmounting
/dev/xvdp from /dev/xvdp.
[2012-11-26 13:40:48 3469] DEBUG (XendDomainInfo:1276)
XendDomainInfo.destroyDevice: deviceClass = vbd, device = /dev/xvdp

Now in theory the destroyDevice call should have cleared up any
resources, but it seems it didn't ...


Cheers,

    Sylvain

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:35:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftaJ-0005il-Vk; Tue, 04 Dec 2012 14:34:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1TftaI-0005ig-9D
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 14:34:54 +0000
Received: from [85.158.138.51:2646] by server-4.bemta-3.messagelabs.com id
	A8/83-30023-D0A0EB05; Tue, 04 Dec 2012 14:34:53 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354631692!27500219!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29705 invoked from network); 4 Dec 2012 14:34:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:34:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149450"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:34:51 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.161]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Tue, 4 Dec 2012
	14:34:51 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 4 Dec 2012 14:34:51 +0000
Thread-Topic: [Xen-devel] [PATCH 9 of 9 RFC] blktap3: Introduce makefile
	that builds xenio-required tap-ctl functionality
Thread-Index: Ac3OQwwEwJ2pY8lSQQuUMPx3CBVF+AD6V/nA
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE01412EDDF462@LONPMAILBOX01.citrite.net>
References: <patchbomb.1353090340@makatos-desktop>
	<99866fac1445feaf3c39.1353090349@makatos-desktop>
	<1354201560.6269.6.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354201560.6269.6.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 9 of 9 RFC] blktap3: Introduce makefile that
 builds xenio-required tap-ctl functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This should come from tools/configure and config/Tools.mk etc rather
> than being hard coded here.
> 
> By default it should likely be under /usr/lib/xen (aka LIBEXEC_DIR).
> Check out buildmakevars2file in tools/Rules.mk and the use of it in
> tools/libxl/Makefile -- you probably want to do something similar.

Ok.

> >  clean:
> > -	rm -f $(OBJS) $(PICS) $(DEPS) $(IBIN) $(LIB_STATIC) $(LIB_SHARED)
> 
> Don't you want CTL_OBJS here instead?

Correct.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:36:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftbF-0005lw-HN; Tue, 04 Dec 2012 14:35:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TftbE-0005lo-16
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 14:35:52 +0000
Received: from [85.158.143.35:54755] by server-2.bemta-4.messagelabs.com id
	C7/2E-28922-74A0EB05; Tue, 04 Dec 2012 14:35:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354631745!12570925!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14041 invoked from network); 4 Dec 2012 14:35:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:35:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149473"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:35:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:35:45 +0000
Message-ID: <1354631744.15296.8.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Dec 2012 14:35:44 +0000
In-Reply-To: <bc2a7645dc2be4a01f2b.1354295635@elijah>
References: <bc2a7645dc2be4a01f2b.1354295635@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] libxl: Make an internal function explicitly
 check existence of expected paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-11-30 at 17:13 +0000, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354294821 0
> # Node ID bc2a7645dc2be4a01f2b5ee30d7453cc3d7339aa
> # Parent  bd041b7426fe10a730994edd98708ff98ae1cb74
> libxl: Make an internal function explicitly check existence of expected paths
> 
> libxl__device_disk_from_xs_be() was failing without error for some
> missing xenstore nodes in a backend, while assuming (without checking)
> that other nodes were valid, causing a crash when another internal
> error wrote these nodes in the wrong place.
> 
> Make this function consistent by:
> * Checking the existence of all nodes before using
> * Choosing a default only when the node is not written in device_disk_add()
> * Failing with log msg if any node written by device_disk_add() is not present
> * Returning an error on failure
> 
> Also make the callers of the function pay attention to the error and
> behave appropriately.

If libxl__device_disk_from_xs_be returns an error then someone needs to
cleanup the partial allocations in the disk (pdev_path) probably by
calling libxl_device_disk_dispose.

It's probably easiest to do this in libxl__device_disk_from_xs_be on
error rather than modifying all the callers?

Also libxl__append_disk_list_of_type updates *ndisks early, so if you
abort half way through initialising the elements of the disks array
using libxl__device_disk_from_xs_be then the caller will try and free
some stuff which hasn't been initialised. I think the code needs to
remember ndisks-in-array separately from
ndisks-which-I-have-initialised, with the latter becoming the returned
*ndisks.
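The two-counter idea can be sketched like this (a hypothetical Python
analogue rather than libxl C; read_one, the exception type, and the
dict-based "dispose" are invented for illustration):

```python
def read_disks(backend_names, read_one):
    """Fill an array of disks, tracking how many entries have actually
    been initialised so the error path disposes only those entries."""
    n_alloc = len(backend_names)          # array size (today's *ndisks)
    disks = [None] * n_alloc
    n_init = 0                            # entries fully initialised so far
    try:
        for name in backend_names:
            disks[n_init] = read_one(name)   # may fail part-way through
            n_init += 1
    except ValueError:
        for i in range(n_init):           # dispose only initialised entries
            disks[i].clear()
        raise
    return disks[:n_init]                 # the caller's *ndisks = n_init
```

On the error path nothing beyond n_init is touched, which is exactly
what avoids freeing uninitialised array slots in the caller.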

> v2:
>  * Remove "Internal error", as the failure will most likely look internal
>  * Use LOG(ERROR...) macros for incrased prettiness

More crass?

> @@ -2186,21 +2187,36 @@ static void libxl__device_disk_from_xs_b
>      } else {
>          disk->pdev_path = tmp;
>      }
> -    libxl_string_to_backend(ctx,
> -                        libxl__xs_read(gc, XBT_NULL,
> -                                       libxl__sprintf(gc, "%s/type", be_path)),
> -                        &(disk->backend));
> +
> +    
> +    tmp = libxl__xs_read(gc, XBT_NULL,
> +                         libxl__sprintf(gc, "%s/type", be_path));
> +    if (!tmp) {
> +        LOG(ERROR, "Missing xenstore node %s/type", be_path);
> +        return ERROR_FAIL;
> +    }

I've just remembered about libxl__xs_read_checked which effectively
implements the error reporting for you.

Oh, but it accepts ENOENT, so not quite what you need -- nevermind!
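The check the patch open-codes could be factored into a small helper
along these lines (a hypothetical sketch with an invented name; unlike
libxl__xs_read_checked it treats absence of the node as a hard error):

```python
def xs_read_required(xs_read, path):
    """Read a xenstore node, failing loudly if it is absent
    (absence is reported as None by the underlying read)."""
    val = xs_read(path)
    if val is None:
        raise ValueError("Missing xenstore node %s" % path)
    return val

# Example against a toy xenstore backed by a dict:
store = {"/local/domain/0/backend/vbd/0/51952/type": "rbd"}
print(xs_read_required(store.get,
                       "/local/domain/0/backend/vbd/0/51952/type"))  # rbd
```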

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:37:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftcU-0005sS-2z; Tue, 04 Dec 2012 14:37:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dietmar.hahn@ts.fujitsu.com>) id 1TftcT-0005sK-13
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:37:09 +0000
Received: from [85.158.139.211:8862] by server-14.bemta-5.messagelabs.com id
	64/00-21768-49A0EB05; Tue, 04 Dec 2012 14:37:08 +0000
X-Env-Sender: dietmar.hahn@ts.fujitsu.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354631825!19028156!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDE4OTYyMQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1203 invoked from network); 4 Dec 2012 14:37:05 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 14:37:05 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:From:To:Cc:Subject:Date:Message-ID:
	User-Agent:In-Reply-To:References:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	b=R5NCw/5Rq2L7hZTPi+/bW4seubN92ZLBDgxDW8gfOd+SSeH1/0t3LIb9
	GpT2OzB3iV22P/9LBa76A14wr4PrvtKEPP9Rjyy9MIOvO6Unb2h8zvpJS
	Fb8Xfnu4ZZ+VjjWU2yRPFCwTV3cCrXMdlNOLkGJzaYgo2njMVxl/7K+sj
	guzdk78T348kpGE6xJlXRfXvvXcSgZ1QBBxfGgPpfcRuCG7z52CHWEe2G
	NGLWoykn4c3vNT41Nya//uGd1vlMR;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1354631826; x=1386167826;
	h=from:to:cc:subject:date:message-id:in-reply-to:
	references:mime-version:content-transfer-encoding;
	bh=L0MfHWIsHuUNYpt7orlmogR1GVnvO8LFDBwt3MEt4aw=;
	b=VKFoBlBa4EJCUY6H0tP6P8ZU5lb7iulX0/i2/7rE9Lo4H5wfpNgGXfk2
	4sz9or0ARFsuqbSARNyBHR/GGGgl5lpgOTX7EZIHJ8oa5cLQ4M9FUjMmN
	i0tHeyaIaIVEBmtrwOeCM3B+wAqU8Pp/G/OX0WODdyb8LL5Y68KzxbTzG
	NNk+oOpOC9nw20kmGLiqdCbi9QdHyC0Gwf0QwNIBBWX8Cxu4QyiAVcLZM
	pD6ZZaxoObpifZxyQTOqgk4s6dseX;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.84,215,1355094000"; d="scan'208";a="110368917"
Received: from abgdgate30u.abg.fsc.net ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 04 Dec 2012 15:36:06 +0100
X-IronPort-AV: E=Sophos;i="4.84,215,1355094000"; d="scan'208";a="150780786"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate30u.abg.fsc.net with SMTP; 04 Dec 2012 15:36:06 +0100
Received: from amur.localnet (amur.mch.fsc.net [10.172.102.56])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id 0858E771367;
	Tue,  4 Dec 2012 15:36:06 +0100 (CET)
From: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 04 Dec 2012 15:36:05 +0100
Message-ID: <2239473.lTia567Yms@amur>
User-Agent: KMail/4.7.2 (Linux/3.1.10-1.16-xen; KDE/4.7.2; x86_64; ; )
In-Reply-To: <50B4EE7102000078000ABC70@nat28.tlf.novell.com>
References: <1449364.BXBucXMpuY@amur> <1883792.BQDJ6uZviQ@amur>
	<50B4EE7102000078000ABC70@nat28.tlf.novell.com>
MIME-Version: 1.0
Cc: Jun Nakajima <jun.nakajima@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] oprofile: Add X7542 and E7-8837 to the list
	of supported cpus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am Dienstag 27 November 2012, 15:46:41 schrieb Jan Beulich:
> >>> On 27.11.12 at 15:21, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
> > Am Dienstag 27 November 2012, 13:19:34 schrieb Jan Beulich:
> >> >>> On 27.11.12 at 14:04, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> >>>> On 26.11.12 at 13:52, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
> >> >> Add intel cpus X7542 and E7-8837 to the list of supported cpus.
> >> >> 
> >> >> Thanks.
> >> >> Dietmar.
> >> >> 
> >> >> Signed-off-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> >> >> 
> >> >> diff -r 0049de3827bc -r 6fb0129600cd xen/arch/x86/oprofile/nmi_int.c
> >> >> --- a/xen/arch/x86/oprofile/nmi_int.c   Fri Nov 23 11:06:15 2012 +0000
> >> >> +++ b/xen/arch/x86/oprofile/nmi_int.c   Mon Nov 26 13:36:00 2012 +0100
> >> >> @@ -366,6 +366,8 @@ static int __init ppro_init(char ** cpu_
> >> >>                 ppro_has_global_ctrl = 1;
> >> >>                 break;
> >> >>         case 26:
> >> >> +       case 46:
> >> >> +       case 47:
> >> >>                 arch_perfmon_setup_counters();
> >> >>                 *cpu_type = "i386/core_i7";
> >> >>                 ppro_has_global_ctrl = 1;
> >> > 
> >> > Actually, and apart from the patch being white space damaged,
> >> > after a closer look I think this is wrong - these newer CPUs
> >> > shouldn't get be handled here, but instead should be covered by
> >> > arch_perfmon_init(). Are you observing X86_FEATURE_ARCH_PERFMON
> >> > not getting set on these CPUs by init_intel()?
> >> 
> >> I.e. the below would be the patch I'd expect when merely
> >> taking the SDM as reference (with the "todo remove?" ones
> >> also fully removed of course).
> > 
> > Yes, looks much cleaner.
> 
> Question is - does it work for you? And if it does, why would it not
> have worked without any change? After all, the patch in this form,
> apart from the setting of ppro_has_global_ctrl in
> arch_perfmon_init(), only removes code.

After one week of vacation I can't reproduce the problem.
I only remember that oprofile didn't work on my machine and that the problem
went away after the fix. But now I have looked deeper and see that something
else must have been wrong which I can't reproduce. Sorry, I didn't think it
through enough! Please forget my patch, and thank you for looking at this more
closely than I did!

Dietmar.

> Jan
> 
> 
-- 
Company details: http://ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
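The X86_FEATURE_ARCH_PERFMON flag Jan refers to is derived from CPUID leaf 0xA. As a rough sketch (hypothetical helper names operating on a given EAX value, not Xen's actual init_intel()/arch_perfmon_init() code), the decode of that leaf looks like:

```c
#include <stdint.h>

/* Sketch (hypothetical names, not Xen's init_intel()): Intel's
 * architectural performance monitoring is advertised in CPUID leaf 0xA.
 * EAX bits 7:0 give the version ID and bits 15:8 the number of
 * general-purpose counters; a non-zero version ID is what lets newer
 * models (e.g. 46/47) be handled generically by arch_perfmon_init()
 * instead of needing their own case in ppro_init(). */
static int arch_perfmon_supported(uint32_t cpuid_0xa_eax)
{
    return (cpuid_0xa_eax & 0xff) != 0;        /* version ID > 0 */
}

static unsigned int arch_perfmon_num_counters(uint32_t cpuid_0xa_eax)
{
    return (cpuid_0xa_eax >> 8) & 0xff;        /* GP counters per core */
}
```

A CPU reporting, say, EAX=0x07300403 in leaf 0xA has version ID 3 with 4 general-purpose counters, so it needs no model-specific case.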

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:46:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftlO-0006Ax-JG; Tue, 04 Dec 2012 14:46:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TftlN-0006As-Kr
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:46:21 +0000
Received: from [193.109.254.147:53427] by server-5.bemta-14.messagelabs.com id
	4D/40-10257-DBC0EB05; Tue, 04 Dec 2012 14:46:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354632336!8903818!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17177 invoked from network); 4 Dec 2012 14:45:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:45:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149745"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:45:36 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:45:36 +0000
Message-ID: <1354632334.15296.13.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 4 Dec 2012 14:45:34 +0000
In-Reply-To: <1354210308-23251-2-git-send-email-roger.pau@citrix.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<1354210308-23251-2-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/3] libxc: add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-11-29 at 17:31 +0000, Roger Pau Monne wrote:
> Add OS specific handlers for NetBSD gntdev. The main difference is
> that NetBSD passes the VA where the grant should be set inside the
> IOCTL_GNTDEV_MAP_GRANT_REF ioctl, instead of using mmap (this is due
> to OS constraints).
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Is anyone from the NetBSD camp likely to review this code?

Otherwise I'm minded to just throw it in, given that it looks OK to
me ;-)

> +static int netbsd_gnttab_set_max_grants(xc_gnttab *xch, xc_osdep_handle h,
> +                                        uint32_t count)
> +{
> +    /* NetBSD doesn't implement this feature */
> +    return 0;
> +}

Is -ENOSYS or "errno=ENOSYS; return -1" more appropriate then?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
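The errno-setting alternative Ian asks about would look roughly like this (a sketch with stand-in typedefs for the libxc types, not the actual patch):

```c
#include <errno.h>
#include <stdint.h>

/* Stand-ins for the real libxc types, for illustration only. */
typedef struct xc_gnttab xc_gnttab;
typedef int xc_osdep_handle;

/* Sketch of the "errno=ENOSYS; return -1" variant: callers can then
 * distinguish "feature absent" from "feature succeeded". */
static int netbsd_gnttab_set_max_grants(xc_gnttab *xch, xc_osdep_handle h,
                                        uint32_t count)
{
    errno = ENOSYS;   /* NetBSD doesn't implement this ioctl */
    return -1;
}
```

The trade-off (as the thread goes on to discuss) is that callers which treat any non-zero return as fatal would then need to special-case ENOSYS.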

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:53:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:53:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftsC-0006a5-Gx; Tue, 04 Dec 2012 14:53:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TftsA-0006Zz-IP
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 14:53:22 +0000
Received: from [85.158.139.83:51619] by server-13.bemta-5.messagelabs.com id
	AA/99-27809-16E0EB05; Tue, 04 Dec 2012 14:53:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1354632801!20996443!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8017 invoked from network); 4 Dec 2012 14:53:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:53:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16149957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:53:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:53:21 +0000
Message-ID: <1354632799.15296.19.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 4 Dec 2012 14:53:19 +0000
In-Reply-To: <alpine.DEB.2.02.1211301236230.5310@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211301236230.5310@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v2] arm: add a few checks to gic_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-11-30 at 12:38 +0000, Stefano Stabellini wrote:
> Check for:
> - uninitialized GIC interface addresses;
> - non-page aligned GIC interface addresses.
> 
> Panic in both cases.
> Also remove the code from GICH and GICC to handle non-page aligned
> interfaces.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I tried to apply but I got rejects. I thought this came before "xen: get
GIC addresses from DT", have I got it backwards?

> @@ -308,6 +306,23 @@ static void __cpuinit gic_hyp_disable(void)
>  /* Set up the GIC */
>  void __init gic_init(void)
>  {
> +    if ( !early_info.gic.gic_dist_addr ||
> +         !early_info.gic.gic_cpu_addr ||
> +         !early_info.gic.gic_hyp_addr ||
> +         !early_info.gic.gic_vcpu_addr )
> +        panic("the physical address of one of the GIC interfaces is missing:\n"
> +              "        gic_dist_addr=%"PRIpaddr"\n"
> +              "        gic_cpu_addr=%"PRIpaddr"\n"
> +              "        gic_hyp_addr=%"PRIpaddr"\n"
> +              "        gic_vcpu_addr=%"PRIpaddr"\n",
> +              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
> +              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);

Might be worth printing these out unconditionally as part of a message
about initialising the GIC (which could include version, number of
interrupts etc too).

Useful in its own right but better than duplicating them in the next
panic message too.

> +    if ( (early_info.gic.gic_dist_addr & ~PAGE_MASK) ||
> +         (early_info.gic.gic_cpu_addr & ~PAGE_MASK) ||
> +         (early_info.gic.gic_hyp_addr & ~PAGE_MASK) ||
> +         (early_info.gic.gic_vcpu_addr & ~PAGE_MASK) )
> +        panic("error: GIC interfaces not page aligned.\n");

"error: " seems a bit redundant in a panic message ;-)

> +
>      gic.dbase = early_info.gic.gic_dist_addr;
>      gic.cbase = early_info.gic.gic_cpu_addr;
>      gic.hbase = early_info.gic.gic_hyp_addr;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
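The alignment test in the quoted hunk uses the usual `addr & ~PAGE_MASK` idiom; in isolation (with a locally defined 4K PAGE_MASK standing in for Xen's page.h definitions) it behaves like:

```c
#include <stdint.h>

/* Local 4K definitions for illustration; Xen's page.h provides the
 * real PAGE_SHIFT/PAGE_MASK. */
#define PAGE_SHIFT 12
#define PAGE_MASK  (~((UINT64_C(1) << PAGE_SHIFT) - 1))

/* addr & ~PAGE_MASK extracts the low 12 bits: any non-zero remainder
 * means the address is not page aligned, which is what gic_init()
 * panics on above. */
static int page_aligned(uint64_t addr)
{
    return (addr & ~PAGE_MASK) == 0;
}
```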

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:57:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:57:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftwK-0006j6-9s; Tue, 04 Dec 2012 14:57:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TftwH-0006j0-Qq
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:57:38 +0000
Received: from [85.158.138.51:12157] by server-9.bemta-3.messagelabs.com id
	57/83-02388-25F0EB05; Tue, 04 Dec 2012 14:57:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354633041!27453619!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26007 invoked from network); 4 Dec 2012 14:57:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:57:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150038"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:57:20 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:57:20 +0000
Message-ID: <50BE0F4F.4000400@citrix.com>
Date: Tue, 4 Dec 2012 15:57:19 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<1354210308-23251-2-git-send-email-roger.pau@citrix.com>
	<1354632334.15296.13.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354632334.15296.13.camel@zakaz.uk.xensource.com>
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/3] libxc: add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 15:45, Ian Campbell wrote:
> On Thu, 2012-11-29 at 17:31 +0000, Roger Pau Monne wrote:
>> Add OS specific handlers for NetBSD gntdev. The main difference is
>> that NetBSD passes the VA where the grant should be set inside the
>> IOCTL_GNTDEV_MAP_GRANT_REF ioctl, instead of using mmap (this is due
>> to OS constraints).
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Is anyone from the NetBSD camp likely to review this code?
> 
> Otherwise I'm minded to just throw it in, given that it looks OK to
> me ;-)
> 
>> +static int netbsd_gnttab_set_max_grants(xc_gnttab *xch, xc_osdep_handle h,
>> +                                        uint32_t count)
>> +{
>> +    /* NetBSD doesn't implement this feature */
>> +    return 0;
>> +}
> 
> Is -ENOSYS or "errno=ENOSYS; return -1" more appropriate then?

Thanks for the review; I've just copied Linux behaviour and returned 0
when the ioctl is not implemented. I'm afraid that if I return some kind
of error, legacy applications (which expect this call to work) may abort
the whole process (this is from
xc_linux_osdep.c:linux_gnttab_set_max_grants).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
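Roger's worry is that callers treat a failing set_max_grants as fatal; a caller that tolerates the errno-based variant would have to special-case ENOSYS, roughly like this (hypothetical function names, not actual libxc code):

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical stub standing in for an OS backend that lacks the
 * ioctl and reports that via errno, as Ian suggested. */
static int set_max_grants_stub(uint32_t count)
{
    errno = ENOSYS;
    return -1;
}

/* A caller that treats "not implemented" as success, so legacy
 * applications don't abort, while still failing on real errors. */
static int gnttab_configure(uint32_t count)
{
    if ( set_max_grants_stub(count) < 0 && errno != ENOSYS )
        return -1;    /* genuine failure */
    return 0;         /* ok, or feature simply absent */
}
```

Returning 0 from the backend, as the patch does, avoids having to audit every existing caller for this pattern, which is the trade-off Roger is pointing at.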

From xen-devel-bounces@lists.xen.org Tue Dec 04 14:57:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 14:57:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TftwK-0006j6-9s; Tue, 04 Dec 2012 14:57:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TftwH-0006j0-Qq
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 14:57:38 +0000
Received: from [85.158.138.51:12157] by server-9.bemta-3.messagelabs.com id
	57/83-02388-25F0EB05; Tue, 04 Dec 2012 14:57:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354633041!27453619!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26007 invoked from network); 4 Dec 2012 14:57:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 14:57:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150038"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 14:57:20 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	14:57:20 +0000
Message-ID: <50BE0F4F.4000400@citrix.com>
Date: Tue, 4 Dec 2012 15:57:19 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<1354210308-23251-2-git-send-email-roger.pau@citrix.com>
	<1354632334.15296.13.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354632334.15296.13.camel@zakaz.uk.xensource.com>
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/3] libxc: add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 15:45, Ian Campbell wrote:
> On Thu, 2012-11-29 at 17:31 +0000, Roger Pau Monne wrote:
>> Add OS specific handlers for NetBSD gntdev. The main difference is
>> that NetBSD passes the VA where the grant should be set inside the
>> IOCTL_GNTDEV_MAP_GRANT_REF ioctl, instead of using mmap (this is due
>> to OS constraints).
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Is anyone from the NetBSD camp likely to review this code?
> 
> Otherwise I'm minded to just throw it in, given that it looks OK to
> me ;-)
> 
>> +static int netbsd_gnttab_set_max_grants(xc_gnttab *xch, xc_osdep_handle h,
>> +                                        uint32_t count)
>> +{
>> +    /* NetBSD doesn't implement this feature */
>> +    return 0;
>> +}
> 
> Is -ENOSYS or "errno=ENOSYS; return -1" more appropriate then?

Thanks for the review, I've just copied Linux behaviour and returned 0
when the ioctl is not implemented. I'm afraid that if I return some kind
of error, legacy applications (which expect this call to work) may abort
the whole process (this is from
xc_linux_osdep.c:linux_gnttab_set_max_grants).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
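[Editorial note: the two error conventions being weighed in this thread can be sketched side by side. The function names and the bare `unsigned int` argument below are hypothetical stand-ins, not the real libxc osdep signatures.]

```c
#include <errno.h>

/* Behaviour taken in the patch (copied from Linux): silently succeed
 * when the ioctl is not implemented, so legacy callers that treat a
 * non-zero return as fatal keep running. */
static int set_max_grants_quiet(unsigned int count)
{
    (void)count;        /* feature missing: pretend success */
    return 0;
}

/* Reviewer's alternative: fail with ENOSYS so callers can detect
 * that the feature is actually missing. */
static int set_max_grants_strict(unsigned int count)
{
    (void)count;
    errno = ENOSYS;
    return -1;
}
```

The trade-off is the one described above: the strict variant is more informative, but any existing application that expects the call to succeed would abort on it.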

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:00:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tftyn-0006tn-Ry; Tue, 04 Dec 2012 15:00:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tftym-0006th-MA
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 15:00:12 +0000
Received: from [85.158.139.211:25807] by server-14.bemta-5.messagelabs.com id
	52/FD-21768-BFF0EB05; Tue, 04 Dec 2012 15:00:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354633211!19063483!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10268 invoked from network); 4 Dec 2012 15:00:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:00:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150110"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:00:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:00:10 +0000
Message-ID: <1354633209.15296.25.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 4 Dec 2012 15:00:09 +0000
In-Reply-To: <alpine.DEB.2.02.1211301426120.5310@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211301426120.5310@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v2] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-11-30 at 14:28 +0000, Stefano Stabellini wrote:
> Get the address of the GIC distributor, cpu, virtual and virtual cpu
> interfaces registers from device tree.
> 
> Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
> and friends because we are using them from mode_switch.S, that is
> executed before device tree has been parsed. But at least mode_switch.S
> is known to contain vexpress specific code anyway.
> 
> 
> Changes in v2:
> - remove 2 superfluous lines from process_gic_node;
> - introduce device_tree_get_reg_ranges;
> - add a check for uninitialized GIC interface addresses;
> - add a check for non-page aligned GIC interface addresses;
> - remove the code to deal with non-page aligned addresses from GICC and
> GICH.
> 
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 0c6fab9..2b29e7e 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -26,6 +26,7 @@
>  #include <xen/errno.h>
>  #include <xen/softirq.h>
>  #include <xen/list.h>
> +#include <xen/device_tree.h>
>  #include <asm/p2m.h>
>  #include <asm/domain.h>
>  
> @@ -33,10 +34,8 @@
>  
>  /* Access to the GIC Distributor registers through the fixmap */
>  #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
> -#define GICC ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICC1)  \
> -                                     + (GIC_CR_OFFSET & 0xfff)))
> -#define GICH ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICH)  \
> -                                     + (GIC_HR_OFFSET & 0xfff)))
> +#define GICC ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICC1)) 
> +#define GICH ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICH))
>  static void gic_restore_pending_irqs(struct vcpu *v);
>  
>  /* Global state */
> @@ -44,6 +43,7 @@ static struct {
>      paddr_t dbase;       /* Address of distributor registers */
>      paddr_t cbase;       /* Address of CPU interface registers */
>      paddr_t hbase;       /* Address of virtual interface registers */
> +    paddr_t vbase;       /* Address of virtual cpu interface registers */
>      unsigned int lines;
>      unsigned int cpus;
>      spinlock_t lock;
> @@ -306,10 +306,27 @@ static void __cpuinit gic_hyp_disable(void)
>  /* Set up the GIC */
>  void __init gic_init(void)
>  {
> -    /* XXX FIXME get this from devicetree */
> -    gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET;
> -    gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET;
> -    gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET;
> +    if ( !early_info.gic.gic_dist_addr ||
> +         !early_info.gic.gic_cpu_addr ||
> +         !early_info.gic.gic_hyp_addr ||
> +         !early_info.gic.gic_vcpu_addr )
> +        panic("the physical address of one of the GIC interfaces is missing:\n"
> +              "        gic_dist_addr=%"PRIpaddr"\n"
> +              "        gic_cpu_addr=%"PRIpaddr"\n"
> +              "        gic_hyp_addr=%"PRIpaddr"\n"
> +              "        gic_vcpu_addr=%"PRIpaddr"\n",
> +              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
> +              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);
> +    if ( (early_info.gic.gic_dist_addr & ~PAGE_MASK) ||
> +         (early_info.gic.gic_cpu_addr & ~PAGE_MASK) ||
> +         (early_info.gic.gic_hyp_addr & ~PAGE_MASK) ||
> +         (early_info.gic.gic_vcpu_addr & ~PAGE_MASK) )
> +        panic("error: GIC interfaces not page aligned.\n");

Oh, maybe I should have skipped "arm: add few checks to gic_init" and
come straight here instead?
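[Editorial note: the alignment test in the hunk quoted above relies on a standard identity; a minimal sketch, assuming a hypothetical 4KiB page size in the style of (but not taken from) the Xen headers:]

```c
#include <stdint.h>

/* PAGE_MASK clears the in-page offset bits, so ANDing an address
 * with ~PAGE_MASK leaves exactly that offset: a non-zero result
 * means the address is not page aligned. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (UINT64_C(1) << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

static int page_aligned(uint64_t addr)
{
    return (addr & ~PAGE_MASK) == 0;
}
```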

>  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index da0af77..8d5b6b0 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -54,6 +54,33 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
>      return !strncmp(prop, match, len);
>  }
>  
> +bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
> +{
> +    int len, l;
> +    const void *prop;
> +
> +    prop = fdt_getprop(fdt, node, "compatible", &len);
> +    if ( prop == NULL )
> +        return 0;
> +
> +    while ( len > 0 ) {
> +        if ( !strncmp(prop, match, strlen(match)) )
> +            return 1;

This will decide that match="foo-bar" is compatible with a node which
contains compatible = "foo-bar-baz". Is that deliberate? I thought the
DT way would be to have compatible = "foo-bar-baz", "foo-bar" ?

Perhaps this is just due to cut-n-paste from device_tree_node_matches
(where it makes sense, I think)?

I now wonder the same thing about device_tree_type_matches too...
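[Editorial note: the concern can be shown concretely. The quoted `strncmp(prop, match, strlen(match))` is a prefix match, so "foo-bar" matches a node whose only compatible entry is "foo-bar-baz"; comparing whole NUL-separated entries avoids that. A sketch with a hypothetical helper, not the actual Xen function:]

```c
#include <string.h>

/* Walk the NUL-separated strings of a `compatible` property value
 * and require an exact, whole-string match per entry. */
static int compatible_exact(const char *prop, int len, const char *match)
{
    while ( len > 0 )
    {
        if ( !strcmp(prop, match) )   /* exact match only */
            return 1;
        int l = strlen(prop) + 1;     /* skip entry plus its NUL */
        prop += l;
        len -= l;
    }
    return 0;
}
```

With this, a board that wants the fallback to match must list it explicitly, e.g. compatible = "foo-bar-baz", "foo-bar", which is the usual DT convention.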

> +        l = strlen(prop) + 1;
> +        prop += l;
> +        len -= l;
> +    }
> +
> +    return 0;
> +}
> +
> +static void device_tree_get_reg_ranges(const struct fdt_property *prop,

"nr" in the name somewhere? I thought at first this was somehow
returning the ranges themselves.

> +        u32 address_cells, u32 size_cells, int *ranges)
> +{
> +    u32 reg_cells = address_cells + size_cells;
> +    *ranges = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
> +}
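[Editorial note: the helper above boils down to one piece of arithmetic; a standalone sketch, assuming 4-byte FDT cells and a length already converted from big-endian as `fdt32_to_cpu` would do:]

```c
#include <stdint.h>

/* A `reg` property holds (address_cells + size_cells) 32-bit cells
 * per range, so the number of ranges is the property's byte length
 * divided by that stride. */
static int nr_reg_ranges(uint32_t prop_len,
                         uint32_t address_cells, uint32_t size_cells)
{
    uint32_t reg_cells = address_cells + size_cells;
    return prop_len / (reg_cells * (uint32_t)sizeof(uint32_t));
}
```

For example, with #address-cells = 2 and #size-cells = 2 (64-bit addresses and sizes), a 64-byte `reg` property describes 4 ranges.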
> +
>  static void __init get_val(const u32 **cell, u32 cells, u64 *val)
>  {
>      *val = 0;
> @@ -209,7 +236,6 @@ static void __init process_memory_node(const void *fdt, int node,
>                                         u32 address_cells, u32 size_cells)
>  {
>      const struct fdt_property *prop;
> -    size_t reg_cells;
>      int i;
>      int banks;
>      const u32 *cell;
> @@ -230,8 +256,7 @@ static void __init process_memory_node(const void *fdt, int node,
>      }
>  
>      cell = (const u32 *)prop->data;
> -    reg_cells = address_cells + size_cells;
> -    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
> +    device_tree_get_reg_ranges(prop, address_cells, size_cells, &banks);
>  
>      for ( i = 0; i < banks && early_info.mem.nr_banks < NR_MEM_BANKS; i++ )
>      {
> @@ -270,6 +295,46 @@ static void __init process_cpu_node(const void *fdt, int node,
>      cpumask_set_cpu(start, &cpu_possible_map);
>  }
>  
> +static void __init process_gic_node(const void *fdt, int node,
> +                                    const char *name,
> +                                    u32 address_cells, u32 size_cells)
> +{
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +    paddr_t start, size;
> +    int interfaces;
> +
> +    if ( address_cells < 1 || size_cells < 1 )
> +    {
> +        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
> +                     name);
> +        return;
> +    }

Is this the sort of check which the generic DT walker helper thing ought
to include?

> +
> +    prop = fdt_get_property(fdt, node, "reg", NULL);
> +    if ( !prop )
> +    {
> +        early_printk("fdt: node `%s': missing `reg' property\n", name);
> +        return;
> +    }
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg_ranges(prop, address_cells, size_cells, &interfaces);
> +    if ( interfaces < 4 )
> +    {
> +        early_printk("fdt: node `%s': invalid `reg' property\n", name);

A more specific message would be "not enough ranges in ..."

> +        return;
> +    }
> +    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +    early_info.gic.gic_dist_addr = start;
> +    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +    early_info.gic.gic_cpu_addr = start;
> +    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +    early_info.gic.gic_hyp_addr = start;
> +    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +    early_info.gic.gic_vcpu_addr = start;
> +}
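[Editorial note: each `device_tree_get_reg` call above consumes one big-endian (address, size) pair and advances the cursor. A byte-level sketch of that cell reading; this is a hypothetical helper, not the real implementation, which works on u32 pointers via `fdt32_to_cpu`:]

```c
#include <stdint.h>

/* Read an n-cell big-endian value from a raw FDT property buffer,
 * high cell first, and advance the cursor past what was read. */
static uint64_t get_cells(const unsigned char **p, unsigned int cells)
{
    uint64_t val = 0;
    while ( cells-- )
    {
        /* each cell is a big-endian u32 */
        val = (val << 32)
            | ((uint64_t)(*p)[0] << 24) | ((uint64_t)(*p)[1] << 16)
            | ((uint64_t)(*p)[2] << 8)  |  (uint64_t)(*p)[3];
        *p += 4;
    }
    return val;
}
```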
> +
>  static int __init early_scan_node(const void *fdt,
>                                    int node, const char *name, int depth,
>                                    u32 address_cells, u32 size_cells,
> @@ -279,6 +344,8 @@ static int __init early_scan_node(const void *fdt,
>          process_memory_node(fdt, node, name, address_cells, size_cells);
>      else if ( device_tree_type_matches(fdt, node, "cpu") )
>          process_cpu_node(fdt, node, name, address_cells, size_cells);
> +    else if ( device_tree_node_compatible(fdt, node, "arm,cortex-a15-gic") )
> +        process_gic_node(fdt, node, name, address_cells, size_cells);
>  
>      return 0;
>  }
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 4d010c0..a0e3a97 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -26,8 +26,16 @@ struct dt_mem_info {
>      struct membank bank[NR_MEM_BANKS];
>  };
>  
> +struct dt_gic_info {
> +    paddr_t gic_dist_addr;
> +    paddr_t gic_cpu_addr;
> +    paddr_t gic_hyp_addr;
> +    paddr_t gic_vcpu_addr;
> +};
> +
>  struct dt_early_info {
>      struct dt_mem_info mem;
> +    struct dt_gic_info gic;
>  };
>  
>  typedef int (*device_tree_node_func)(const void *fdt,



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:13:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuBk-0007iH-2U; Tue, 04 Dec 2012 15:13:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1TfuBi-0007iC-JE
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:13:34 +0000
Received: from [85.158.139.83:51790] by server-14.bemta-5.messagelabs.com id
	7D/84-21768-D131EB05; Tue, 04 Dec 2012 15:13:33 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354634011!25645410!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30279 invoked from network); 4 Dec 2012 15:13:32 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:13:32 -0000
Received: by mail-gh0-f173.google.com with SMTP id 16so652805ghy.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 07:13:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=bvWc7q2/O+7NJuG12AHsASfaNWesHEE6cD6FgONjxDY=;
	b=AnEG3Jticeanw9633qfzVMsmHPzC9cMhmWnAVttyczkgANXgs3iwoaQr/WHHD8NsH2
	NPT+vzmeJVNP8sogVsXXUOlg1Z3JclkYTT6aomVZo0HKh+3LtAo7/hb4NS1KE/uPFQaL
	NXBqXuUcH7AUppkF/rQy24qjZ/cxzp1gJ4Ta/ilDX4gABnZmE4X5oTgYcdJGfZ54NK+B
	keIm1NdhvR4jMrambXUNozI7oOMZmKi5sp9kf3EnmqzbMEKPpw4vDeLhuAxV7O94gaWM
	SGBFnZ4l3dyHOmSnvXPR7AaOQba/e3LtIj7he3itmda9CjBpOWZWEpnD/ej3dz5OQS6j
	721A==
Received: by 10.236.119.37 with SMTP id m25mr15007236yhh.95.1354634011490;
	Tue, 04 Dec 2012 07:13:31 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id e7sm1064650ang.8.2012.12.04.07.13.30
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 04 Dec 2012 07:13:31 -0800 (PST)
Date: Tue, 4 Dec 2012 10:13:28 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121204151326.GB15404@phenom.dumpdata.com>
References: <50B498E502000078000AB8B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A4834564403398F24@SHSMSX101.ccr.corp.intel.com>
	<50B7243602000078000AC61A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033994C8@SHSMSX101.ccr.corp.intel.com>
	<50B7389202000078000AC669@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339B365@SHSMSX101.ccr.corp.intel.com>
	<50BC68FE02000078000AD2B1@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339E774@SHSMSX101.ccr.corp.intel.com>
	<50BDBE7902000078000AD8E1@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BDBE7902000078000AD8E1@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATS and dependent features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 08:12:25AM +0000, Jan Beulich wrote:
> >>> On 04.12.12 at 02:29, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> 
> > 
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Monday, December 03, 2012 3:55 PM
> >> To: Zhang, Xiantao
> >> Cc: xen-devel
> >> Subject: RE: ATS and dependent features
> >> 
> >> >>> On 30.11.12 at 13:29, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> >> wrote:
> >> 
> >> >
> >> >> -----Original Message-----
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: Thursday, November 29, 2012 5:28 PM
> >> >> To: Zhang, Xiantao
> >> >> Cc: xen-devel
> >> >> Subject: RE: ATS and dependent features
> >> >>
> >> >> >>> On 29.11.12 at 10:19, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> >> >> wrote:
> >> >>
> >> >> >
> >> >> >> -----Original Message-----
> >> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> >> Sent: Thursday, November 29, 2012 4:01 PM
> >> >> >> To: Zhang, Xiantao
> >> >> >> Cc: xen-devel
> >> >> >> Subject: RE: ATS and dependent features
> >> >> >>
> >> >> >> >>> On 29.11.12 at 02:07, "Zhang, Xiantao"
> >> >> >> >>> <xiantao.zhang@intel.com>
> >> >> >> wrote:
> >> >> >> > ATS should be a host feature controlled by iommu, and I don't
> >> >> >> > think
> >> >> >> > dom0 can control  it from Xen's architecture.
> >> >> >>
> >> >> >> "Can" or "should"? Because from all I can tell it currently clearly does.
> >> >> >
> >> >> > I mean Xen shouldn't  allow these capabilities can be detected by
> >> >> > dom0.  If it does, we need to fix it.
> >> >>
> >> >> It sort of hides it - all callers sit in the kernel's IOMMU code, and
> >> >> IOMMU detection is being prevented. So it looks like the code is
> >> >> simply dead when running on top of Xen.
> >> >
> >> > I'm curious why dom0's !Xen kernel option for these features can solve
> >> > the issue you met.
> >> 
> >> It doesn't "solve" the problem in that sense: As said, the code in question
> >> only has callers in IOMMU code, which itself is dependent on !XEN in our
> >> kernels (just to make clear - I'm talking about forward ported kernels here,
> >> not pv-ops ones). So upstream probably just has to live with that code being
> >> dead (at the moment, when run on top of Xen) and take the risk of there
> >> appearing a caller elsewhere.
> >> In our kernels, by making these options also dependent upon !XEN, we can
> >> then actually detect (and actively deal with) an eventual new caller
> >> elsewhere in the code, thus eliminating any risk of bad interaction between
> >> Dom0 and Xen.
> > 
> > I think !Xen you are talking is a compile option, so this kernel can only 
> > used for dom0 and can't run on native with these features enabled ?  If don't 
> > need to keep the kernel running on native hardware, I think it is fine. 
> 
> Yes, as said - this is for our forward ported kernel. Whether (and if
> so how) the pv-ops one can add a similar safeguard I can't tell (and
> doubt).

Right, in the upstream kernel that is not going to work. But I am a bit
confused - this code (pci_enable_ats) looks to be called only from the
Intel and AMD IOMMU code. Aren't those blacklisted (so the DMAR ACPI table
is overwritten and the AMD IOMMU PCI device is withheld), so the calls aren't
going to be executed?

> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:19:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuHH-00082R-F7; Tue, 04 Dec 2012 15:19:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TfuHF-00081x-S6
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:19:18 +0000
Received: from [85.158.139.83:41682] by server-3.bemta-5.messagelabs.com id
	25/B7-18736-5741EB05; Tue, 04 Dec 2012 15:19:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354634330!24378178!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24465 invoked from network); 4 Dec 2012 15:18:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 15:18:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Dec 2012 15:21:00 +0000
Message-Id: <50BE226702000078000ADD8A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 04 Dec 2012 15:18:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad@kernel.org>
References: <50B498E502000078000AB8B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A4834564403398F24@SHSMSX101.ccr.corp.intel.com>
	<50B7243602000078000AC61A@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033994C8@SHSMSX101.ccr.corp.intel.com>
	<50B7389202000078000AC669@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339B365@SHSMSX101.ccr.corp.intel.com>
	<50BC68FE02000078000AD2B1@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A483456440339E774@SHSMSX101.ccr.corp.intel.com>
	<50BDBE7902000078000AD8E1@nat28.tlf.novell.com>
	<20121204151326.GB15404@phenom.dumpdata.com>
In-Reply-To: <20121204151326.GB15404@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATS and dependent features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 16:13, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> On Tue, Dec 04, 2012 at 08:12:25AM +0000, Jan Beulich wrote:
>> >>> On 04.12.12 at 02:29, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
>> 
>> > 
>> >> -----Original Message-----
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: Monday, December 03, 2012 3:55 PM
>> >> To: Zhang, Xiantao
>> >> Cc: xen-devel
>> >> Subject: RE: ATS and dependent features
>> >> 
>> >> >>> On 30.11.12 at 13:29, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> >> wrote:
>> >> 
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >> Sent: Thursday, November 29, 2012 5:28 PM
>> >> >> To: Zhang, Xiantao
>> >> >> Cc: xen-devel
>> >> >> Subject: RE: ATS and dependent features
>> >> >>
>> >> >> >>> On 29.11.12 at 10:19, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> >> >> wrote:
>> >> >>
>> >> >> >
>> >> >> >> -----Original Message-----
>> >> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >> >> Sent: Thursday, November 29, 2012 4:01 PM
>> >> >> >> To: Zhang, Xiantao
>> >> >> >> Cc: xen-devel
>> >> >> >> Subject: RE: ATS and dependent features
>> >> >> >>
>> >> >> >> >>> On 29.11.12 at 02:07, "Zhang, Xiantao"
>> >> >> >> >>> <xiantao.zhang@intel.com>
>> >> >> >> wrote:
>> >> >> >> > ATS should be a host feature controlled by iommu, and I don't
>> >> >> >> > think
>> >> >> >> > dom0 can control  it from Xen's architecture.
>> >> >> >>
>> >> >> >> "Can" or "should"? Because from all I can tell it currently clearly does.
>> >> >> >
>> >> >> > I mean Xen shouldn't  allow these capabilities can be detected by
>> >> >> > dom0.  If it does, we need to fix it.
>> >> >>
>> >> >> It sort of hides it - all callers sit in the kernel's IOMMU code, and
>> >> >> IOMMU detection is being prevented. So it looks like the code is
>> >> >> simply dead when running on top of Xen.
>> >> >
>> >> > I'm curious why dom0's !Xen kernel option for these features can solve
>> >> > the issue you met.
>> >> 
>> >> It doesn't "solve" the problem in that sense: As said, the code in question
>> >> only has callers in IOMMU code, which itself is dependent on !XEN in our
>> >> kernels (just to make clear - I'm talking about forward ported kernels here,
>> >> not pv-ops ones). So upstream probably just has to live with that code being
>> >> dead (at the moment, when run on top of Xen) and take the risk of there
>> >> appearing a caller elsewhere.
>> >> In our kernels, by making these options also dependent upon !XEN, we can
>> >> then actually detect (and actively deal with) an eventual new caller
>> >> elsewhere in the code, thus eliminating any risk of bad interaction between
>> >> Dom0 and Xen.
>> > 
>> > I think !Xen you are talking is a compile option, so this kernel can only 
>> > used for dom0 and can't run on native with these features enabled ?  If don't 
>> > need to keep the kernel running on native hardware, I think it is fine. 
>> 
>> Yes, as said - this is for our forward ported kernel. Whether (and if
>> so how) the pv-ops one can add a similar safeguard I can't tell (and
>> doubt).
> 
> Right, in the upstream kernel that is not going to work. But I am a bit
> confused - this code (pci_enable_ats) looks to be called only from the
> intel and amd iommu code. Aren't those blacklisted (so DMAR MADT
> is overwritten and AMD PCI device is witheld) so the calls aren't going
> to be executed?

That's right. But the code being there means you wouldn't notice
(at build time) if some other caller appeared (which would need
fixing).
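
The safeguard described here amounts to a Kconfig-level dependency. The fragment
below is a hypothetical sketch only - the option names, prompts, and existing
dependencies are illustrative, and this is not the actual SUSE patch:

```kconfig
# Hypothetical sketch: making the IOMMU drivers (the sole callers of
# pci_enable_ats()) depend on !XEN.  A Xen-enabled build then drops them,
# so any new pci_enable_ats() caller elsewhere surfaces as a build/link
# failure instead of silently enabling ATS behind the hypervisor's back.
config INTEL_IOMMU
	bool "Support for Intel IOMMU using DMA Remapping Devices"
	depends on PCI_MSI && ACPI && !XEN

config AMD_IOMMU
	bool "AMD IOMMU support"
	depends on X86_64 && PCI && ACPI && !XEN
```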

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:21:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuIu-0008J1-Ut; Tue, 04 Dec 2012 15:21:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TfuIu-0008It-5R
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:21:00 +0000
Received: from [85.158.137.99:16014] by server-14.bemta-3.messagelabs.com id
	24/E7-31424-BD41EB05; Tue, 04 Dec 2012 15:20:59 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354634428!12518205!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26982 invoked from network); 4 Dec 2012 15:20:30 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:20:30 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so3524531iac.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 07:20:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=fM8XtjpAvp5phUfIF/leKf4hC8tVQhm9/qW5fWGgGSc=;
	b=Nm02G1seM/vkqzkDNPj5P25wInLBs4LJTfoMo64v1Ic/Oxs0e+zj5m07ocUoesU4tw
	3zZ0I44Oj7/obmWnvGBzuTMa19O3hbcMDmQWXK6a3EK9+1MsqsS6lNqjnYcG6dJFOJlA
	PAkIrDHyUKZXpznWCbrOPfyqjfFuEqfCKSlXBL0NduG1NbHaiIj5BGefAvy49aFUuvsA
	Sz1krPAIBsBw9hlXTguRSNUU7tFe8nacFN6loLOwvEUNe8aakkOHW7H/6JN0jDlSFEkP
	QTG9RxXXHrsULlbPiGfx4RJ1hAp0RmfBfQlapK59af36cpKtl5yHfERSps79a5UEZEgk
	LtGQ==
MIME-Version: 1.0
Received: by 10.50.46.129 with SMTP id v1mr3059896igm.42.1354634428566; Tue,
	04 Dec 2012 07:20:28 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Tue, 4 Dec 2012 07:20:28 -0800 (PST)
In-Reply-To: <20121204110105.GX8912@reaktio.net>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
Date: Tue, 4 Dec 2012 23:20:28 +0800
X-Google-Sender-Auth: jjo6MEHMWSaetfDUsFK2j-YS7vY
Message-ID: <CAKhsbWYoCfWAOsBsxbpScYRHD1h5o-WOK2Hd3yj0XhC5eGvC3Q@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8011718497973607544=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8011718497973607544==
Content-Type: multipart/alternative; boundary=14dae9340bfbea4e3b04d00867c0

--14dae9340bfbea4e3b04d00867c0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 4, 2012 at 7:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
> > >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
> > >>  I had a quick look, and it doesn't look that hard to backport that patch.
> > >
> > > Thanks, Mat.
> > > I'm glad to report that the patch does fix my problem.
> > >
> > > And yes, it is really easy to port since the code did not change across the
> > > two releases.
> > > The only change would be the line numbers (3841 vs 3803) and one extra
> > > comment before this line:
> > >
> > >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> > >
> > > I'm not sure if you are going to release another maintenance version that
> > > includes this patch,
> > > but I'll report this to the Debian maintainer since it's about to freeze for
> > > the v7.0 release and v4.2.0 will not make it.
> >
> > Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
> > out?
> >
>
> It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
> so I'd say it should be a candidate for Xen 4.1.4.
>
>
Hi, it seems that the patch has a side effect on pure HVM guests.
For an OpenELEC 2.0 guest, which is based on Linux 3.2.x with PVOPS disabled, I
see the following symptom in qemu-dm-xxx.log:

pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
support??
pt_pci_write_config: Internal error: Invalid write emulation return
value[-1]. I/O emulator exit.

The guest dies immediately after this log output, so I have no way to check the
guest kernel log.
Without the patch, this guest can boot with no obvious errors in the log, even
though the VGA passthrough does not quite work.
I'll check the code to see what these log messages mean...

Thanks,
Timothy


> -- Pasi
>
> > Jan
> >
> > > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson
> > > <mats.petersson@citrix.com> wrote:
> > >
> > >> On 03/12/12 13:19, Mats Petersson wrote:
> > >>
> > >>> On 03/12/12 13:14, G.R. wrote:
> > >>>
> > >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
> > >>>> <mats.petersson@citrix.com> wrote:
> > >>>>
> > >>>>      On 03/12/12 03:47, G.R. wrote:
> > >>>>
> > >>>>          Hi developers,
> > >>>>          I met some domU issues and the log suggests a missing
> > >>>>          interrupt.
> > >>>>          Details from here:
> > >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
> > >>>>          In summary, this is the suspicious log:
> > >>>>
> > >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
> > >>>>
> > >>>>          I've checked the code in question and found that mode 3 is
> > >>>>          a 'reserved_1' mode.
> > >>>>          I want to trace down the source of this mode setting to
> > >>>>          root-cause the issue.
> > >>>>          But I'm not a Xen developer, and am even a newbie as a Xen
> > >>>>          user.
> > >>>>          Could anybody give me instructions about how to enable
> > >>>>          detailed debug log?
> > >>>>          It would be even better if I could get advice about
> > >>>>          experiments to perform / switches to try out, etc.
> > >>>>
> > >>>>          My SW config:
> > >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> > >>>>          domU: Debian wheezy 3.2.x stock kernel.
> > >>>>
> > >>>>          Thanks,
> > >>>>          Timothy
> > >>>>
> > >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
> > >>>>      What are the exact messages in the DomU?
> > >>>>
> > >>>>
> > >>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
> > >>>> But this is actually a PVHVM guest since the Debian stock kernel has
> > >>>> PVOP enabled.
> > >>>> And when I tried another PVOP-disabled Linux distro (OpenELEC v2.0),
> > >>>> I did not see any such MSI-related error message.
> > >>>> Actually, with that domU I do not see anything obviously wrong in the
> > >>>> log, but I also see nothing on the panel (the panel receives no
> > >>>> signal and goes into power-saving) :-(
> > >>>>
> > >>>>
> > >>>> Back to the issue I was reporting, the domU log looks like this:
> > >>>>
> > >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
> > >>>> [drm:i915_hangcheck_ring_idle]
> > >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
> > >>>> 3354], missed IRQ?
> > >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
> > >>>> [drm:i915_hangcheck_ring_idle]
> > >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on
> > >>>> 11297, at 11297], missed IRQ?
> > >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
> > >>>> timeout, switching to polling mode: last cmd=0x000f0000
> > >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
> > >>>> codec, disabling MSI: last cmd=0x002f0600
> > >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
> > >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
> > >>>>
> > >>>>
> > >>>> Thanks,
> > >>>> Timothy
> > >>>>
> > >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
> > >>> fixes this. I'm not fully clued up on what the policy for backporting
> > >>> fixes is, and I haven't looked at the complexity of the fix itself,
> > >>> but either updating to 4.2.0 or a (personal) backport sounds like the
> > >>> right solution here.
> > >>>
> > >> I had a quick look, and it doesn't look that hard to backport that
> > >> patch.
> > >>
> > >> --
> > >> Mats
> > >>
> > >>
> > >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my
> response
> > >>> to your original email.
> > >>>
> > >>> --
> > >>> Mats
> > >>>
> > >>> _______________________________________________
> > >>> Xen-devel mailing list
> > >>> Xen-devel@lists.xen.org
> > >>> http://lists.xen.org/xen-devel
> > >>>
> > >>>
> > >>>
>

--14dae9340bfbea4e3b04d00867c0--


--===============8011718497973607544==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8011718497973607544==--


the fix itself, but<br>
&gt; &gt;&gt;&gt; either updating to the 4.2.0 or a (personal) backport sou=
nds like the<br>
&gt; &gt;&gt;&gt; right solution here.<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt; I had a quick look, and it doesn&#39;t look that hard to back=
port that patch.<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; --<br>
&gt; &gt;&gt; Mats<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt;&gt; Unfortunately, I hadn&#39;t seen Jan&#39;s reply by the t=
ime I wrote my response<br>
&gt; &gt;&gt;&gt; to your original email.<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt; --<br>
&gt; &gt;&gt;&gt; Mats<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt; ______________________________**_________________<br>
&gt; &gt;&gt;&gt; Xen-devel mailing list<br>
&gt; &gt;&gt;&gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@list=
s.xen.org</a><br>
&gt; &gt;&gt;&gt; <a href=3D"http://lists.xen.org/xen-devel" target=3D"_bla=
nk">http://lists.xen.org/xen-devel</a><br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; ______________________________**_________________<br>
&gt; &gt;&gt; Xen-devel mailing list<br>
&gt; &gt;&gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xe=
n.org</a><br>
&gt; &gt;&gt; <a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">=
http://lists.xen.org/xen-devel</a><br>
&gt; &gt;&gt;<br>
&gt;<br>
&gt;<br>
&gt;<br>
</div></div>&gt; _______________________________________________<br>
<div class="im">&gt; Xen-devel mailing list<br>
&gt; <a href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
&gt; <a href="http://lists.xen.org/xen-devel" target="_blank">http://lists.xen.org/xen-devel</a><br>
<br>
</div>_______________________________________________<br>
<div class=""><div class="h5">Xen-devel mailing list<br>
<a href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
<a href="http://lists.xen.org/xen-devel" target="_blank">http://lists.xen.org/xen-devel</a><br>
</div></div></blockquote></div><br></div>

--14dae9340bfbea4e3b04d00867c0--


--===============8011718497973607544==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8011718497973607544==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 15:22:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuJw-000075-LB; Tue, 04 Dec 2012 15:22:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfuJv-00006Z-7z
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 15:22:03 +0000
Received: from [85.158.143.99:31662] by server-1.bemta-4.messagelabs.com id
	C0/2B-27934-A151EB05; Tue, 04 Dec 2012 15:22:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354634504!27432894!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11966 invoked from network); 4 Dec 2012 15:21:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:21:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150714"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:21:43 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:21:43 +0000
Message-ID: <1354634502.15296.33.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Dec 2012 15:21:42 +0000
In-Reply-To: <722da032ac90c0e1a78b.1354292656@elijah>
References: <patchbomb.1354292655@elijah>
	<722da032ac90c0e1a78b.1354292656@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 1 of 4 v2] libxl: Combine device model arg
 build functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-11-30 at 16:24 +0000, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354287957 0
> # Node ID 722da032ac90c0e1a78b1154fa588bf295d1f009
> # Parent  ae6fb202b233af815466055d9f1a635802a50855
> libxl: Combine device model arg build functions
> 
> qemu-traditional and qemu-upstream have some differences in the way
> the arguments need to be passed.  At the moment, this is dealt with by
> having two entirely separate functions, libxl__build_device_model_args_new
> and libxl__build_device_model_args_old.
> 
> However, at least 80% of these are the same; this means that fixes or
> additions to one may not make it into the other.  Furthermore, there
> are some unaccountable differences in implementation.

FWIW:
 1 file changed, 168 insertions(+), 260 deletions(-)

But that doesn't really show how much of the code in the new function is
shared and how much is per-qemu-version. Casting my eye over the code
with the patch applied (i.e. an unscientific estimate), it seems like
most of it is under an "if (dm_new)" of some sort.

My main concern is that qemu-xen-traditional is frozen and so this code
shouldn't be changing, but by bundling it with the qemu-xen code it may
be subject to churn as new stuff gets added to the new qemu and plumbed
through.

A related concern is that some of the interfaces we use (e.g. "-hda")
are deprecated in favour of more expressive forms (-drive, -device, et
al.); depending on how things go upstream, we may want to switch to
using them.

Perhaps a useful compromise would be to pull the common stuff into a
common function but call out to version-specific functions for the bits
which differ.

Perhaps factoring into functional areas might help too? e.g.
make_{disk,net,foo}_args, then the decision to combine or not can be
made on a per functional area basis?
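The factoring suggested above can be sketched roughly as follows. This is an illustrative sketch only, not the actual libxl code: the names (`build_dm_args`, `make_disk_args`, `DM_NEW`) are hypothetical, and real libxl builds its argument list with flexarrays rather than a fixed-size array.

```c
/* Hypothetical sketch of the suggested split: a common builder plus a
 * small per-functional-area helper which dispatches on the device-model
 * version, so only the helper knows about both qemu flavours. */

enum dm_version { DM_TRADITIONAL, DM_NEW };

struct args {
    const char *v[32];
    int n;
};

static void append(struct args *a, const char *s)
{
    a->v[a->n++] = s;
}

/* Per-functional-area helper: only the disk argument syntax differs,
 * so only this function needs a per-version branch. */
static void make_disk_args(struct args *a, enum dm_version ver,
                           const char *disk_spec)
{
    append(a, ver == DM_NEW ? "-drive" : "-hda");
    append(a, disk_spec);
}

/* Common builder: the shared majority of the arguments lives here
 * exactly once; version-specific bits are delegated to helpers. */
static void build_dm_args(struct args *a, enum dm_version ver,
                          const char *disk_spec)
{
    append(a, "qemu");              /* stands in for the common args */
    make_disk_args(a, ver, disk_spec);
}
```

With this shape, the decision to combine or keep separate can indeed be made per functional area: a helper whose two versions share nothing simply contains one big if/else.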

> @@ -523,6 +450,9 @@ static char ** libxl__build_device_model
>          abort();
>      }
> 
> +    if (!dm_new)
> +        goto finish;
> +
>      ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
>      flexarray_append(dm_args, "-m");
>      flexarray_append(dm_args, libxl__sprintf(gc, "%"PRId64, ram_size));
> @@ -585,33 +515,11 @@ static char ** libxl__build_device_model
>              flexarray_append(dm_args, drive);
>          }
>      }
> +finish:

This feels especially unsatisfying...
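For what it's worth, the jump could also be expressed as an explicit guard rather than a `goto`; a minimal sketch (hypothetical names, with simple counters standing in for the actual argument-building calls):

```c
#include <stdbool.h>

/* Sketch: the new-qemu-only section sits inside an explicit conditional
 * instead of being jumped over with `goto finish`, so it reads as one
 * scoped unit.  Counters stand in for the flexarray_append() calls. */
static int build_args(bool dm_new)
{
    int nargs = 1;              /* common head, e.g. the emulator path */
    if (dm_new) {
        nargs += 2;             /* e.g. "-m" plus the RAM size string */
    }
    nargs += 1;                 /* common tail: the old "finish" part  */
    return nargs;
}
```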



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:26:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuO2-0000fx-Bu; Tue, 04 Dec 2012 15:26:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <s.munaut@whatever-company.com>) id 1TfuO0-0000fo-EE
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:26:16 +0000
Received: from [85.158.143.35:3487] by server-3.bemta-4.messagelabs.com id
	2F/7F-06841-7161EB05; Tue, 04 Dec 2012 15:26:15 +0000
X-Env-Sender: s.munaut@whatever-company.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354634634!10559743!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15808 invoked from network); 4 Dec 2012 15:23:57 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:23:57 -0000
Received: by mail-ob0-f173.google.com with SMTP id xn12so4601683obc.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 07:23:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=wAa61vukzhVT63qJTwsavFE0VN5xI8pP3QtkYlIZITQ=;
	b=gfckpKLmLgMecGmxReJ+O5l13GCUIi+m4f0x7h8xx/JAocZwk6Z3LjCVYUjQY7ZWMS
	y0pWOGLzfw5vXyqr+VqnX1g5cWwVCuVzSzdUZ2VoVT+v6RygzOsIvUHujAKojNS2JngO
	x/mlmv7EXuDjeuqe4s3GvyO/GVZvPjllVlCbYG2CU0HD12EznKb3w/2mRLBjlFMInvbz
	WNN9p2UHKH0SkNR4KRen9QiTlDx99RQSmMDvAttddPICIn2z+wARjAy94oBxQsT+faFz
	Z8QoFTQilloPWZ8BM88ZF4LfICKh3gWnlEL9gkyp1QQ4O9b1tJn0kVmP9tkU7TTxY5QM
	eYcQ==
MIME-Version: 1.0
Received: by 10.60.27.36 with SMTP id q4mr12008027oeg.111.1354634633940; Tue,
	04 Dec 2012 07:23:53 -0800 (PST)
Received: by 10.76.28.234 with HTTP; Tue, 4 Dec 2012 07:23:53 -0800 (PST)
In-Reply-To: <CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
References: <CAF6-1L51R5yMBNO1wQe7YWoeGoM_bFJcAb0XN0cVg6cm7+Kzaw@mail.gmail.com>
	<1352363172.12977.87.camel@hastur.hellion.org.uk>
	<509D22E6.2090005@citrix.com> <50A27A37.2090506@citrix.com>
	<CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
	<CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
Date: Tue, 4 Dec 2012 16:23:53 +0100
Message-ID: <CAF6-1L7NyZgM4AhuT7fwz1+wL7gFqTufcEhsufA3SUHE4XVj3Q@mail.gmail.com>
From: Sylvain Munaut <s.munaut@whatever-company.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
X-Gm-Message-State: ALoCoQmg9QGDQzfWKw17kXQw6B/fqlSseVPUO1075KpTzLNV6e4bcViAfJi56UgCMr53WPOC66LJ
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Custom block script started twice for root block
 but only stopped once
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Actually I managed to trace it a bit further, but here I'm kind of
stuck due to my lack of knowledge of how it should work.

Basically it does try to destroy the device, and the hotplug script
'/etc/xen/scripts/block' is called with the "remove" command and the
right device path. However, by the time it's called, the corresponding
subtree in xenstore seems to have been wiped already, so the hotplug
script fails, not knowing what to do ... and it does nothing.

So whoever triggers the hotplug remove action should leave the
xenstore tree in place until the hotplug script has run.
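The ordering constraint described above can be sketched abstractly (every name here is an illustrative stub, not the real libxl or xenstore interface): the remove script must run while the backend's xenstore subtree still exists, and only afterwards may the subtree be deleted.

```c
#include <stdbool.h>

/* Illustrative stubs: the hotplug script reads its parameters from the
 * device's xenstore subtree, so teardown must run it before the wipe. */
static bool subtree_present = true;
static bool script_found_params = false;

static void run_hotplug_script(const char *cmd)
{
    (void)cmd;                          /* e.g. "remove" */
    script_found_params = subtree_present;
}

static void xenstore_rm_subtree(void)
{
    subtree_present = false;
}

/* Teardown in the order the report calls for. */
static void remove_device(void)
{
    run_hotplug_script("remove");       /* subtree still intact here  */
    xenstore_rm_subtree();              /* wipe it only afterwards    */
}
```

Reversing the two calls in `remove_device` reproduces the failure mode described: the script runs against an empty tree and can do nothing.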

Cheers,

    Sylvain

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuOx-0000l5-QL; Tue, 04 Dec 2012 15:27:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfuOw-0000kv-Po
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:27:14 +0000
Received: from [193.109.254.147:46382] by server-16.bemta-14.messagelabs.com
	id 0C/77-09215-2561EB05; Tue, 04 Dec 2012 15:27:14 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354634788!6244765!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7169 invoked from network); 4 Dec 2012 15:26:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:26:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150832"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:26:20 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:26:20 +0000
Message-ID: <50BE161B.8000603@citrix.com>
Date: Tue, 4 Dec 2012 16:26:19 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/11/12 14:19, Roger Pau Monne wrote:
> This is a basic (and experimental) gntdev implementation for NetBSD.
> 
> The gnt device allows usermode applications to map grant references in
> userspace. It is mainly used by Qemu to implement a Xen backend (that
> runs in userspace).
> 
> Due to the fact that qemu-upstream is not yet functional in NetBSD,
> the only way to try this gntdev is to use the old qemu
> (qemu-traditional).
> 
> Performance is not that bad (given that we are using qemu-traditional
> and running a backend in userspace): the throughput of write
> operations is 64.7 MB/s, while in the Dom0 it is 104.6 MB/s. Regarding
> read operations, the throughput inside the DomU is 76.0 MB/s, while on
> the Dom0 it is 108.8 MB/s.

Independently of what we end up doing as the default for handling raw
file disks, could someone review this code?

It's the first time I've written a device, so someone with more
experience should review it.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuOx-0000l5-QL; Tue, 04 Dec 2012 15:27:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfuOw-0000kv-Po
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:27:14 +0000
Received: from [193.109.254.147:46382] by server-16.bemta-14.messagelabs.com
	id 0C/77-09215-2561EB05; Tue, 04 Dec 2012 15:27:14 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354634788!6244765!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7169 invoked from network); 4 Dec 2012 15:26:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:26:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150832"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:26:20 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:26:20 +0000
Message-ID: <50BE161B.8000603@citrix.com>
Date: Tue, 4 Dec 2012 16:26:19 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/11/12 14:19, Roger Pau Monne wrote:
> This is a basic (and experimental) gntdev implementation for NetBSD.
> 
> The gnt device allows usermode applications to map grant references in
> userspace. It is mainly used by Qemu to implement a Xen backend (that
> runs in userspace).
> 
> Because qemu-upstream is not yet functional in NetBSD, the only way
> to try this gntdev is to use the old qemu (qemu-traditional).
> 
> Performance is not that bad (given that we are using qemu-traditional
> and running the backend in userspace): write throughput inside the
> DomU is 64.7 MB/s versus 104.6 MB/s in the Dom0, and read throughput
> inside the DomU is 76.0 MB/s versus 108.8 MB/s in the Dom0.
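The grant-mapping idea described above can be illustrated with a self-contained
toy model (all names here are invented for illustration; this is not the NetBSD
gntdev interface): the granting domain publishes a page under a small integer
grant reference, and a userspace backend in another domain maps that reference
to get shared access to the page.

```python
class GrantTable:
    """Toy grant table: gref -> (granting domid, shared page)."""

    def __init__(self):
        self._grants = {}
        self._next_gref = 0

    def grant_access(self, domid, page):
        # Granting domain publishes a page and gets back a grant reference.
        gref = self._next_gref
        self._next_gref += 1
        self._grants[gref] = (domid, page)
        return gref

    def map_grant_ref(self, gref):
        # A userspace backend (what a gntdev would serve) maps the page.
        if gref not in self._grants:
            raise KeyError("bad grant reference")
        return self._grants[gref][1]

gt = GrantTable()
ref = gt.grant_access(domid=1, page=bytearray(b"frontend request"))
page = gt.map_grant_ref(ref)
page[:8] = b"response"   # both sides see the same underlying page
print(bytes(page))
```

In the real implementation the mapping happens at page-table level via
hypercalls, so both domains share physical memory rather than a Python object.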

Independently of what we end up doing as the default for handling raw
file disks, could someone review this code?

It's the first time I've written a device driver, so someone with more
experience should review it.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:29:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuQe-0000vD-Am; Tue, 04 Dec 2012 15:29:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfuQc-0000uz-Nr
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:28:58 +0000
Received: from [85.158.137.99:54842] by server-16.bemta-3.messagelabs.com id
	35/14-07461-5B61EB05; Tue, 04 Dec 2012 15:28:53 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354634931!17853918!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8152 invoked from network); 4 Dec 2012 15:28:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:28:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150910"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:28:51 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:28:51 +0000
Message-ID: <50BE16B2.2030306@citrix.com>
Date: Tue, 4 Dec 2012 16:28:50 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Sylvain Munaut <s.munaut@whatever-company.com>
References: <CAF6-1L51R5yMBNO1wQe7YWoeGoM_bFJcAb0XN0cVg6cm7+Kzaw@mail.gmail.com>
	<1352363172.12977.87.camel@hastur.hellion.org.uk>
	<509D22E6.2090005@citrix.com>	<50A27A37.2090506@citrix.com>
	<CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
	<CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
In-Reply-To: <CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Custom block script started twice for root block
 but only stopped once
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 15:21, Sylvain Munaut wrote:
> A bit more info:
> 
> I tried using PV-GRUB and this doesn't happen there, so it's only with PyGRUB.
> 
> AFAICT it happens in  tools/python/xen/xend/XendDomainInfo.py in the
> _configureBootloader method.
> 
> _shouldMount will return true, and so "dom0.create_vbd(vbd, disk)"
> will be executed to give the host access to the device by running
> the block script before the actual boot.
> 
> 
> [2012-11-26 13:40:44 3469] INFO (XendDomainInfo:3270) Mounting
> 192.168.2.201 name=rbd;key=rbd rbd-vm test on /dev/xvdp.
> [2012-11-26 13:40:44 3469] DEBUG (DevController:95) DevController:
> writing {'backend-id': '0', 'virtual-device': '51952', 'device-type':
> 'disk', 'state': '1', 'backend':
> '/local/domain/0/backend/vbd/0/51952'} to
> /local/domain/0/device/vbd/51952.
> [2012-11-26 13:40:44 3469] DEBUG (DevController:97) DevController:
> writing {'domain': 'Domain-0', 'frontend':
> '/local/domain/0/device/vbd/51952', 'uuid':
> '1845bb27-97b5-ce21-348d-ade6e99829e9', 'bootable': '0', 'dev':
> '/dev/xvdp', 'state': '1', 'params': '192.168.2.201 name=rbd;key=rbd
> rbd-vm test', 'mode': 'r', 'online': '1', 'frontend-id': '0', 'type':
> 'rbd'} to /local/domain/0/backend/vbd/0/51952.
> [2012-11-26 13:40:44 3469] DEBUG (DevController:144) Waiting for 51952.
> [2012-11-26 13:40:44 3469] DEBUG (DevController:628)
> hotplugStatusCallback
> /local/domain/0/backend/vbd/0/51952/hotplug-status.
> [2012-11-26 13:40:45 3469] DEBUG (DevController:628)
> hotplugStatusCallback
> /local/domain/0/backend/vbd/0/51952/hotplug-status.
> [2012-11-26 13:40:45 3469] DEBUG (DevController:642) hotplugStatusCallback 1.
> [2012-11-26 13:40:45 3469] DEBUG (DevController:144) Waiting for 51952.
> [2012-11-26 13:40:45 3469] DEBUG (DevController:628)
> hotplugStatusCallback
> /local/domain/0/backend/vbd/0/51952/hotplug-status.
> [2012-11-26 13:40:45 3469] DEBUG (DevController:642) hotplugStatusCallback 1.
> [2012-11-26 13:40:45 4455] DEBUG (XendBootloader:113) Launching
> bootloader as ['/usr/lib/xen-4.1/bin/pygrub', '--args=console=hvc0
> xencons=hvc0', '--output=/var/run/xend/boot/xenbl.7585', '-q',
> '/dev/xvdp'].
> [2012-11-26 13:40:48 3469] INFO (XendDomainInfo:3289) Unmounting
> /dev/xvdp from /dev/xvdp.
> [2012-11-26 13:40:48 3469] DEBUG (XendDomainInfo:1276)
> XendDomainInfo.destroyDevice: deviceClass = vbd, device = /dev/xvdp
> 
> Now in theory the destroyDevice call should have cleared up any
> resources, but it seems it didn't ...
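The sequence in the log can be condensed into a toy sketch (the bodies are
invented; only the method names come from the log and XendDomainInfo.py):
dom0 creates a VBD so the block script runs "add", pygrub reads the device,
and the teardown is expected to trigger the matching "remove".

```python
class FakeDom0:
    """Stand-in for the dom0 helper; records block-script invocations."""

    def __init__(self):
        self.events = []

    def create_vbd(self, vbd, disk):
        self.events.append(("block-script", "add"))
        return "/dev/xvdp"

    def destroy_device(self, device_class, dev):
        self.events.append(("block-script", "remove"))

def configure_bootloader(dom0, vbd, disk, run_bootloader):
    dev = dom0.create_vbd(vbd, disk)       # block script runs "add"
    try:
        return run_bootloader(dev)         # pygrub reads the mounted device
    finally:
        dom0.destroy_device("vbd", dev)    # expected to run "remove"

dom0 = FakeDom0()
result = configure_bootloader(dom0, vbd={}, disk="rbd-vm test",
                              run_bootloader=lambda dev: "boot config from " + dev)
print(dom0.events)   # every "add" should be balanced by a "remove"
```

The report above is that, with PyGRUB, the final "remove" step runs but does
not actually clean up, so the "add" is left unbalanced.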

Could you give xl in Xen 4.2 a try?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:31:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuT9-00019e-Tv; Tue, 04 Dec 2012 15:31:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfuT8-00019D-KL
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 15:31:34 +0000
Received: from [85.158.143.99:3344] by server-1.bemta-4.messagelabs.com id
	A9/98-27934-5571EB05; Tue, 04 Dec 2012 15:31:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354635086!28003978!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23343 invoked from network); 4 Dec 2012 15:31:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:31:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16150965"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:30:54 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:30:54 +0000
Message-ID: <1354635052.15296.38.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Dec 2012 15:30:52 +0000
In-Reply-To: <af47b01d2dbe0e916e10.1354292658@elijah>
References: <patchbomb.1354292655@elijah>
	<af47b01d2dbe0e916e10.1354292658@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 3 of 4 v2] libxl: Allow multiple USB devices
 on HVM domain creation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-11-30 at 16:24 +0000, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354291988 0
> # Node ID af47b01d2dbe0e916e1014a264845c48d6ef8108
> # Parent  fa37bd276212a01e3c898b54a7f2385454c406a7
> libxl: Allow multiple USB devices on HVM domain creation
> 
> This patch allows an HVM domain to be created with multiple USB
> devices.
> 
> Since the previous interface only allowed the passing of a single
> device, this requires us to add a new element to the hvm struct of
> libxl_domain_build_info -- usbdevice_list.  For API compatibility, the
> old element, usbdevice, remains.
> 
> If hvm.usbdevice_list is set, each device listed will cause an extra
> "-usbdevice [foo]" to be appended to the qemu command line.
> 
> Callers may set either hvm.usbdevice or hvm.usbdevice_list, but not
> both; libxl will throw an error if both are set.
> 
> In order to allow users of libxl to write software compatible with
> older versions of libxl, also define LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST.
> If this is defined, callers may use either hvm.usbdevice or
> hvm.usbdevice_list; otherwise, only hvm.usbdevice will be available.
> 
> v2:
>  - Throw an error if both usbdevice and usbdevice_list are set
>  - Update and clarify definition based on feedback
>  - Previous patches mean this works for both traditional and upstream

Is this a requirement? Did this work with qemu-xen-trad with xend?

Unless it did I'm not especially in favour of adding new features to
qemu-xen-trad just because it looks easy to do so.

> Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -266,6 +266,22 @@
>  #endif
>  #endif
>  
> +/* 
> + * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
> + * 
> + * If this is defined, then the libxl_domain_build_info structure will
> + * contain hvm.usbdevice_list, a libxl_string_list type that contains
> + * a list of USB devices to specify on the qemu command-line.
> + *
> + * If it is set, callers may use either hvm.usbdevice or
> + * hvm.usbdevice_list, but not both; if both are set, libxl will
> + * throw an error.
> + *
> + * If this is not defined, callers can only use hvm.usbdevice.  Note
> + * that this means only one device can be added at domain build time.
> + */
> +#define LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST 1
> +
>  /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
>   * called from within libxl itself. Callers outside libxl, who
>   * do not #include libxl_internal.h, are fine. */
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -322,11 +322,28 @@ static char ** libxl__build_device_model
>  
>          }
>  
> -        if (libxl_defbool_val(b_info->u.hvm.usb) || b_info->u.hvm.usbdevice) {
> +        if (libxl_defbool_val(b_info->u.hvm.usb)
> +            || b_info->u.hvm.usbdevice
> +            || b_info->u.hvm.usbdevice_list) {
> +            if ( b_info->u.hvm.usbdevice && b_info->u.hvm.usbdevice_list )
> +            {
> +                LOG(ERROR, "%s: Both usbdevice and usbdevice_list set",
> +                    __func__);
> +                return NULL;
> +            }
>              flexarray_append(dm_args, "-usb");
>              if (b_info->u.hvm.usbdevice) {
>                  flexarray_vappend(dm_args,
>                                    "-usbdevice", b_info->u.hvm.usbdevice, NULL);
> +            } else if (b_info->u.hvm.usbdevice_list) {
> +                char **p;
> +                for (p = b_info->u.hvm.usbdevice_list;
> +                     *p;
> +                     p++) {

Is the line too long if you unfold this? It doesn't look to me like it
should be.

> +                    flexarray_vappend(dm_args,
> +                                      "-usbdevice",
> +                                      *p, NULL);
> +                }
>              }
>          }
>          if (b_info->u.hvm.soundhw) {
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -326,6 +326,7 @@ libxl_domain_build_info = Struct("domain
>                                         ("usbdevice",        string),
>                                         ("soundhw",          string),
>                                         ("xen_platform_pci", libxl_defbool),
> +                                       ("usbdevice_list",   libxl_string_list),

Is there any chance we might want something more structured than a
string in the interface in the future? e.g. libxl_device_usb?
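For comparison, the behaviour the patch gives libxl__build_device_model_args
can be restated in a few lines of Python (a sketch only; the authoritative
logic is the C hunk quoted above):

```python
def build_usb_args(usb, usbdevice=None, usbdevice_list=None):
    """Mirror the -usb/-usbdevice argument building added by the patch."""
    if usbdevice and usbdevice_list:
        # Setting both fields is a caller error, matching the LOG(ERROR) path.
        raise ValueError("Both usbdevice and usbdevice_list set")
    args = []
    if usb or usbdevice or usbdevice_list:
        args.append("-usb")
        if usbdevice:
            args += ["-usbdevice", usbdevice]
        elif usbdevice_list:
            for dev in usbdevice_list:
                args += ["-usbdevice", dev]
    return args

print(build_usb_args(False, usbdevice_list=["tablet", "mouse"]))
```

Note the mutual-exclusion check: setting both fields is rejected outright
rather than having one silently win.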

>                                         ])),
>                   ("pv", Struct(None, [("kernel", string),
>                                        ("slack_memkb", MemKB),
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:35:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuWR-0001Tz-PE; Tue, 04 Dec 2012 15:34:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfuWQ-0001Tq-TT
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:34:59 +0000
Received: from [85.158.139.211:54691] by server-7.bemta-5.messagelabs.com id
	BD/65-23096-2281EB05; Tue, 04 Dec 2012 15:34:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1354635292!18176898!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:35:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuWR-0001Tz-PE; Tue, 04 Dec 2012 15:34:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfuWQ-0001Tq-TT
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:34:59 +0000
Received: from [85.158.139.211:54691] by server-7.bemta-5.messagelabs.com id
	BD/65-23096-2281EB05; Tue, 04 Dec 2012 15:34:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1354635292!18176898!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14348 invoked from network); 4 Dec 2012 15:34:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:34:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151101"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:34:52 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:34:52 +0000
Message-ID: <50BE181B.2040705@citrix.com>
Date: Tue, 4 Dec 2012 16:34:51 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Sylvain Munaut <s.munaut@whatever-company.com>
References: <CAF6-1L51R5yMBNO1wQe7YWoeGoM_bFJcAb0XN0cVg6cm7+Kzaw@mail.gmail.com>
	<1352363172.12977.87.camel@hastur.hellion.org.uk>
	<509D22E6.2090005@citrix.com>	<50A27A37.2090506@citrix.com>
	<CAF6-1L7q-m00CVTMPx4ySWuSBqqwnfP+-=Wz92Nh1C6_2RQ82A@mail.gmail.com>
	<CAF6-1L7pD3Ha=gkSE984seKW+tUCfmvX15uR_w3cd4CGtv5yXQ@mail.gmail.com>
	<CAF6-1L7NyZgM4AhuT7fwz1+wL7gFqTufcEhsufA3SUHE4XVj3Q@mail.gmail.com>
In-Reply-To: <CAF6-1L7NyZgM4AhuT7fwz1+wL7gFqTufcEhsufA3SUHE4XVj3Q@mail.gmail.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Custom block script started twice for root block
 but only stopped once
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 16:23, Sylvain Munaut wrote:
> Actually I managed to trace it a bit further but here I'm kind of
> stuck due to my lack of knowledge of "how it should work".
> 
> Basically it does try to destruct the device and the hotplug script
> '/etc/xen/scripts/block' is called with the command remove and the
> right device path. However by the time it's called, the corresponding
> subtree in the xenstore has been wiped it seems and so the hotplug
> script fails, not knowing what to do ... and it does nothing.
> 
> So whoever triggers the hotplug remove action should leave the xen
> store tree in place until the hotplug script has run.

This is one of the reasons we no longer launch hotplug scripts from udev
in xl: it is difficult to synchronize with the udev-spawned process, and
this kind of race can happen easily. I guess the same can happen in
xend. I think the best way to fix this would be to switch to Xen 4.2 and
the xl toolstack.
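The race Sylvain describes can be sketched in miniature. This is a
hypothetical toy model, not the real xenstore API or the real block
script: the point is only that the "remove" handler needs the stored
device parameters, so wiping them before the script runs leaves it with
nothing to act on.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in for the xenstore subtree: one "node" entry that the
 * toolstack wipes on teardown.  Not the real xenstore interface. */
static char store_node[64] = "/dev/loop0";

static const char *xs_read_node(void)
{
    return store_node[0] ? store_node : NULL;
}

/* Models the hotplug script's "remove" action: it must look up the
 * device parameters before it can tear anything down. */
static int hotplug_remove(void)
{
    const char *node = xs_read_node();
    if (!node) {
        /* The failure mode from the thread: params already gone. */
        fprintf(stderr, "block remove: params already gone, doing nothing\n");
        return 1;
    }
    printf("detaching %s\n", node);
    return 0;
}

/* Models the bug: the toolstack deletes the subtree before the
 * hotplug script has had a chance to run. */
static void toolstack_teardown_too_early(void)
{
    memset(store_node, 0, sizeof(store_node));
}
```

Run in the right order, `hotplug_remove()` succeeds; if the teardown
happens first, the same call can only fail, which is why the xenstore
tree needs to outlive the script.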


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:52:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfumh-0002A7-6A; Tue, 04 Dec 2012 15:51:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfumg-00029s-FY
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:51:46 +0000
Received: from [85.158.139.83:10071] by server-6.bemta-5.messagelabs.com id
	2E/F7-19321-11C1EB05; Tue, 04 Dec 2012 15:51:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354636304!27682243!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14422 invoked from network); 4 Dec 2012 15:51:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:51:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151586"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:50:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:50:56 +0000
Message-ID: <1354636255.15296.41.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Tue, 4 Dec 2012 15:50:55 +0000
In-Reply-To: <20121203164524.GB32690@ocelot.phlegethon.org>
References: <1354548558-10039-1-git-send-email-ian.campbell@citrix.com>
	<20121203164524.GB32690@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: Use $(OBJCOPY) not bare objcopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-03 at 16:45 +0000, Tim Deegan wrote:
> At 15:29 +0000 on 03 Dec (1354548558), Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Reported-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Acked-by: Tim Deegan <tim@xen.org>

Applied, ta.

> 
> > ---
> >  xen/arch/arm/Makefile |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> > index fd92b72..4c61b04 100644
> > --- a/xen/arch/arm/Makefile
> > +++ b/xen/arch/arm/Makefile
> > @@ -46,7 +46,7 @@ $(TARGET): $(TARGET)-syms $(TARGET).bin
> >  
> >  #
> >  $(TARGET).bin: $(TARGET)-syms
> > -	objcopy -O binary -S $< $@
> > +	$(OBJCOPY) -O binary -S $< $@
> >  
> >  #$(TARGET): $(TARGET)-syms $(efi-y) boot/mkelf32
> >  #	./boot/mkelf32 $(TARGET)-syms $(TARGET) 0x100000 \
> > -- 
> > 1.7.9.1
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
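For background (my inference, not stated in the thread): `$(OBJCOPY)`
matters because cross builds derive the binutils from a toolchain
prefix, so a bare `objcopy` would invoke the host tool instead of the
target one. The variable names below follow the usual CROSS_COMPILE
convention rather than Xen's exact Config.mk contents:

```
# Hypothetical illustration, not Xen's actual build configuration.
CROSS_COMPILE ?= arm-linux-gnueabihf-
OBJCOPY       := $(CROSS_COMPILE)objcopy

%.bin: %-syms
	$(OBJCOPY) -O binary -S $< $@
```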



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:52:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfuml-0002BH-HZ; Tue, 04 Dec 2012 15:51:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfumk-0002Ar-QR
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 15:51:51 +0000
Received: from [85.158.143.35:45633] by server-2.bemta-4.messagelabs.com id
	69/EC-28922-61C1EB05; Tue, 04 Dec 2012 15:51:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1354636309!16074437!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3850 invoked from network); 4 Dec 2012 15:51:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:51:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151584"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:50:54 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:50:54 +0000
Message-ID: <1354636252.15296.40.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Dec 2012 15:50:52 +0000
In-Reply-To: <043d69be04c91eaead15.1354292659@elijah>
References: <patchbomb.1354292655@elijah>
	<043d69be04c91eaead15.1354292659@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 4 of 4 v2] xl: Accept a list for usbdevice
 in config file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-11-30 at 16:24 +0000, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354291988 0
> # Node ID 043d69be04c91eaead159e8e74999c59eb68f560
> # Parent  af47b01d2dbe0e916e1014a264845c48d6ef8108
> xl: Accept a list for usbdevice in config file
> 
> Allow the "usbdevice" key to accept a list of USB devices, and pass
> them in using the new usbdevice_list domain build element.
> 
> For backwards compatibility, still accept singleton values.
> 
> Also update the xl.cfg manpage, adding information about how to pass
> through host devices.
> 
> v2:
>  - Add some verbiage to make it clear that "usb" is for emulated devices
>  - Reference qemu manual for more usbdevice options
> 
> Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1107,17 +1107,27 @@ device.
>  
>  =item B<usb=BOOLEAN>
>  
> -Enables or disables a USB bus in the guest.
> +Enables or disables an emulated USB bus in the guest.
>  
> -=item B<usbdevice=DEVICE>
> +=item B<usbdevice=[ "DEVICE", "DEVICE", ...]>
>  
> -Adds B<DEVICE> to the USB bus. The USB bus must also be enabled using
> -B<usb=1>. The most common use for this option is B<usbdevice=tablet>
> -which adds pointer device using absolute coordinates. Such devices
> -function better than relative coordinate devices (such as a standard
> -mouse) since many methods of exporting guest graphics (such as VNC)
> -work better in this mode. Note that this is independent of the actual
> -pointer device you are using on the host/client side.
> +Adds B<DEVICE>s to the emulated USB bus. The USB bus must also be
> +enabled using B<usb=1>. The most common use for this option is
> +B<usbdevice=['tablet']> which adds pointer device using absolute
> +coordinates. Such devices function better than relative coordinate
> +devices (such as a standard mouse) since many methods of exporting
> +guest graphics (such as VNC) work better in this mode. Note that this
> +is independent of the actual pointer device you are using on the
> +host/client side.
> +
> +Host devices can also be passed through in this way, by specifying
> +host:USBID, where USBID is of the form xxxx:yyyy.  The USBID can 
> +typically be found by using lsusb or usb-devices.
> +
> +The form usbdevice=DEVICE is also accepted for backwards compatibility.
> +
> +More valid options can be found in the "usbdevice" section of the qemu
> +documentation.
>  
>  =back
>  
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1486,8 +1486,23 @@ skip_vfb:
>          xlu_cfg_replace_string (config, "serial", &b_info->u.hvm.serial, 0);
>          xlu_cfg_replace_string (config, "boot", &b_info->u.hvm.boot, 0);
>          xlu_cfg_get_defbool(config, "usb", &b_info->u.hvm.usb, 0);
> -        xlu_cfg_replace_string (config, "usbdevice",
> -                                &b_info->u.hvm.usbdevice, 0);
> +        switch (xlu_cfg_get_list_as_string_list(config, "usbdevice",
> +                                                &b_info->u.hvm.usbdevice_list,
> +                                                1))
> +        {
> +
> +        case 0: break; /* Success */
> +        case ESRCH: break; /* Option not present */
> +        case EINVAL:
> +            /* If it's not a valid list, try reading it as an atom, falling through to
> +             * an error if it fails */
> +            if (!xlu_cfg_replace_string(config, "usbdevice", &b_info->u.hvm.usbdevice, 0)) 
> +                break;
> +            /* FALLTHRU */
> +        default:
> +            fprintf(stderr,"xl: Unable to parse usbdevice.\n");
> +            exit(-ERROR_FAIL);
> +        }
>          xlu_cfg_replace_string (config, "soundhw", &b_info->u.hvm.soundhw, 0);
>          xlu_cfg_get_defbool(config, "xen_platform_pci",
>                              &b_info->u.hvm.xen_platform_pci, 0);
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
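A config fragment exercising the list form described in the patch above
(the host USB ID here is made up for illustration; real IDs come from
lsusb or usb-devices):

```
usb=1
usbdevice=[ 'tablet', 'host:046d:c016' ]
```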



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:52:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfumh-0002AE-Ik; Tue, 04 Dec 2012 15:51:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfumg-00029u-Lc
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:51:46 +0000
Received: from [85.158.139.83:10094] by server-16.bemta-5.messagelabs.com id
	9A/87-21311-11C1EB05; Tue, 04 Dec 2012 15:51:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354636304!27682243!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14446 invoked from network); 4 Dec 2012 15:51:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:51:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151587"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:50:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:50:59 +0000
Message-ID: <1354636257.15296.42.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 4 Dec 2012 15:50:57 +0000
In-Reply-To: <1354210308-23251-3-git-send-email-roger.pau@citrix.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<1354210308-23251-3-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/3] libxl: fix wrong comment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-11-29 at 17:31 +0000, Roger Pau Monne wrote:
> The comment in function libxl__try_phy_backend is wrong, 1 is returned
> if the backend should be handled as "phy", while 0 is returned if not.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked + applied, thanks.

> ---
>  tools/libxl/libxl_internal.h |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index cba3616..0b38e3e 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -1041,7 +1041,7 @@ static inline void libxl__domaindeathcheck_stop(libxl__gc *gc,
>   * type of file using the PHY backend
>   * st_mode: mode_t of the file, as returned by stat function
>   *
> - * Returns 0 on success, and < 0 on error.
> + * Returns 1 on success, and 0 if not suitable for phy backend.
>   */
>  _hidden int libxl__try_phy_backend(mode_t st_mode);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:52:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfumh-0002AE-Ik; Tue, 04 Dec 2012 15:51:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfumg-00029u-Lc
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:51:46 +0000
Received: from [85.158.139.83:10094] by server-16.bemta-5.messagelabs.com id
	9A/87-21311-11C1EB05; Tue, 04 Dec 2012 15:51:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354636304!27682243!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14446 invoked from network); 4 Dec 2012 15:51:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:51:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151587"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:50:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:50:59 +0000
Message-ID: <1354636257.15296.42.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 4 Dec 2012 15:50:57 +0000
In-Reply-To: <1354210308-23251-3-git-send-email-roger.pau@citrix.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<1354210308-23251-3-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/3] libxl: fix wrong comment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVGh1LCAyMDEyLTExLTI5IGF0IDE3OjMxICswMDAwLCBSb2dlciBQYXUgTW9ubmUgd3JvdGU6
Cj4gVGhlIGNvbW1lbnQgaW4gZnVuY3Rpb24gbGlieGxfX3RyeV9waHlfYmFja2VuZCBpcyB3cm9u
ZywgMSBpcyByZXR1cm5lZAo+IGlmIHRoZSBiYWNrZW5kIHNob3VsZCBiZSBoYW5kbGVkIGFzICJw
aHkiLCB3aGlsZSAwIGlzIHJldHVybmVkIGlmIG5vdC4KPiAKPiBTaWduZWQtb2ZmLWJ5OiBSb2dl
ciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KCkFja2VkICsgYXBwbGllZCwgdGhh
bmtzLgoKPiAtLS0KPiAgdG9vbHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaCB8ICAgIDIgKy0KPiAg
MSBmaWxlcyBjaGFuZ2VkLCAxIGluc2VydGlvbnMoKyksIDEgZGVsZXRpb25zKC0pCj4gCj4gZGlm
ZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmggYi90b29scy9saWJ4bC9saWJ4
bF9pbnRlcm5hbC5oCj4gaW5kZXggY2JhMzYxNi4uMGIzOGUzZSAxMDA2NDQKPiAtLS0gYS90b29s
cy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCj4gKysrIGIvdG9vbHMvbGlieGwvbGlieGxfaW50ZXJu
YWwuaAo+IEBAIC0xMDQxLDcgKzEwNDEsNyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgbGlieGxfX2Rv
bWFpbmRlYXRoY2hlY2tfc3RvcChsaWJ4bF9fZ2MgKmdjLAo+ICAgKiB0eXBlIG9mIGZpbGUgdXNp
bmcgdGhlIFBIWSBiYWNrZW5kCj4gICAqIHN0X21vZGU6IG1vZGVfdCBvZiB0aGUgZmlsZSwgYXMg
cmV0dXJuZWQgYnkgc3RhdCBmdW5jdGlvbgo+ICAgKgo+IC0gKiBSZXR1cm5zIDAgb24gc3VjY2Vz
cywgYW5kIDwgMCBvbiBlcnJvci4KPiArICogUmV0dXJucyAxIG9uIHN1Y2Nlc3MsIGFuZCAwIGlm
IG5vdCBzdWl0YWJsZSBmb3IgcGh5IGJhY2tlbmQuCj4gICAqLwo+ICBfaGlkZGVuIGludCBsaWJ4
bF9fdHJ5X3BoeV9iYWNrZW5kKG1vZGVfdCBzdF9tb2RlKTsKPiAgCgoKCl9fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QK
WGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:52:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfuml-0002B3-43; Tue, 04 Dec 2012 15:51:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfumj-0002AX-Kb
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:51:49 +0000
Received: from [85.158.139.83:4487] by server-11.bemta-5.messagelabs.com id
	0B/3C-03409-41C1EB05; Tue, 04 Dec 2012 15:51:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354636304!27682243!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14606 invoked from network); 4 Dec 2012 15:51:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:51:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151588"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:51:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	15:51:02 +0000
Message-ID: <1354636260.15296.43.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 4 Dec 2012 15:51:00 +0000
In-Reply-To: <1354202142-22695-1-git-send-email-roger.pau@citrix.com>
References: <1354202142-22695-1-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] docs: expand persistent grants protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVGh1LCAyMDEyLTExLTI5IGF0IDE1OjE1ICswMDAwLCBSb2dlciBQYXUgTW9ubmUgd3JvdGU6
Cj4gU2lnbmVkLW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
CgpBY2tlZCArIGFwcGxpZWQsIHRoYW5rcy4KCj4gLS0tCj4gIHhlbi9pbmNsdWRlL3B1YmxpYy9p
by9ibGtpZi5oIHwgICAzNyArKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrLS0tCj4g
IDEgZmlsZXMgY2hhbmdlZCwgMzQgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKPiAKPiBk
aWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvcHVibGljL2lvL2Jsa2lmLmggYi94ZW4vaW5jbHVkZS9w
dWJsaWMvaW8vYmxraWYuaAo+IGluZGV4IDhkZjU4NjYuLjFmMGZiZDYgMTAwNjQ0Cj4gLS0tIGEv
eGVuL2luY2x1ZGUvcHVibGljL2lvL2Jsa2lmLmgKPiArKysgYi94ZW4vaW5jbHVkZS9wdWJsaWMv
aW8vYmxraWYuaAo+IEBAIC0xMzcsNyArMTM3LDIyIEBACj4gICAqICAgICAgY2FuIG1hcCBwZXJz
aXN0ZW50bHkgZGVwZW5kcyBvbiB0aGUgaW1wbGVtZW50YXRpb24sIGJ1dCBpZGVhbGx5IGl0Cj4g
ICAqICAgICAgc2hvdWxkIGJlIFJJTkdfU0laRSAqIEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVR
VUVTVC4gVXNpbmcgdGhpcwo+ICAgKiAgICAgIGZlYXR1cmUgdGhlIGJhY2tlbmQgZG9lc24ndCBu
ZWVkIHRvIHVubWFwIGVhY2ggZ3JhbnQsIHByZXZlbnRpbmcKPiAtICogICAgICBjb3N0bHkgVExC
IGZsdXNoZXMuCj4gKyAqICAgICAgY29zdGx5IFRMQiBmbHVzaGVzLiBUaGUgYmFja2VuZCBkcml2
ZXIgc2hvdWxkIG9ubHkgbWFwIGdyYW50cwo+ICsgKiAgICAgIHBlcnNpc3RlbnRseSBpZiB0aGUg
ZnJvbnRlbmQgc3VwcG9ydHMgaXQuIElmIGEgYmFja2VuZCBkcml2ZXIgY2hvb3Nlcwo+ICsgKiAg
ICAgIHRvIHVzZSB0aGUgcGVyc2lzdGVudCBwcm90b2NvbCB3aGVuIHRoZSBmcm9udGVuZCBkb2Vz
bid0IHN1cHBvcnQgaXQsCj4gKyAqICAgICAgaXQgd2lsbCBwcm9iYWJseSBoaXQgdGhlIG1heGlt
dW0gbnVtYmVyIG9mIHBlcnNpc3RlbnRseSBtYXBwZWQgZ3JhbnRzCj4gKyAqICAgICAgKGR1ZSB0
byB0aGUgZmFjdCB0aGF0IHRoZSBmcm9udGVuZCB3b24ndCBiZSByZXVzaW5nIHRoZSBzYW1lIGdy
YW50cyksCj4gKyAqICAgICAgYW5kIGZhbGwgYmFjayB0byBub24tcGVyc2lzdGVudCBtb2RlLiBC
YWNrZW5kIGltcGxlbWVudGF0aW9ucyBtYXkKPiArICogICAgICBzaHJpbmsgb3IgZXhwYW5kIHRo
ZSBudW1iZXIgb2YgcGVyc2lzdGVudGx5IG1hcHBlZCBncmFudHMgd2l0aG91dAo+ICsgKiAgICAg
IG5vdGlmeWluZyB0aGUgZnJvbnRlbmQgZGVwZW5kaW5nIG9uIG1lbW9yeSBjb25zdHJhaW50cyAo
dGhpcyBtaWdodAo+ICsgKiAgICAgIGNhdXNlIGEgcGVyZm9ybWFuY2UgZGVncmFkYXRpb24pLgo+
ICsgKgo+ICsgKiAgICAgIElmIGEgYmFja2VuZCBkcml2ZXIgd2FudHMgdG8gbGltaXQgdGhlIG1h
eGltdW0gbnVtYmVyIG9mIHBlcnNpc3RlbnRseQo+ICsgKiAgICAgIG1hcHBlZCBncmFudHMgdG8g
YSB2YWx1ZSBsZXNzIHRoYW4gUklOR19TSVpFICoKPiArICogICAgICBCTEtJRl9NQVhfU0VHTUVO
VFNfUEVSX1JFUVVFU1QgYSBMUlUgc3RyYXRlZ3kgc2hvdWxkIGJlIHVzZWQgdG8KPiArICogICAg
ICBkaXNjYXJkIHRoZSBncmFudHMgdGhhdCBhcmUgbGVzcyBjb21tb25seSB1c2VkLiBVc2luZyBh
IExSVSBpbiB0aGUKPiArICogICAgICBiYWNrZW5kIGRyaXZlciBwYWlyZWQgd2l0aCBhIExJRk8g
cXVldWUgaW4gdGhlIGZyb250ZW5kIHdpbGwKPiArICogICAgICBhbGxvdyB1cyB0byBoYXZlIGJl
dHRlciBwZXJmb3JtYW5jZSBpbiB0aGlzIHNjZW5hcmlvLgo+ICAgKgo+ICAgKi0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tIFJlcXVlc3QgVHJhbnNwb3J0IFBhcmFtZXRlcnMgLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tCj4gICAqCj4gQEAgLTI1OCwxMSArMjczLDIzIEBACj4gICAqIGZlYXR1cmUtcGVy
c2lzdGVudAo+ICAgKiAgICAgIFZhbHVlczogICAgICAgICAwLzEgKGJvb2xlYW4pCj4gICAqICAg
ICAgRGVmYXVsdCBWYWx1ZTogIDAKPiAtICogICAgICBOb3RlczogNywgOAo+ICsgKiAgICAgIE5v
dGVzOiA3LCA4LCA5Cj4gICAqCj4gICAqICAgICAgQSB2YWx1ZSBvZiAiMSIgaW5kaWNhdGVzIHRo
YXQgdGhlIGZyb250ZW5kIHdpbGwgcmV1c2UgdGhlIHNhbWUgZ3JhbnRzCj4gICAqICAgICAgZm9y
IGFsbCB0cmFuc2FjdGlvbnMsIGFsbG93aW5nIHRoZSBiYWNrZW5kIHRvIG1hcCB0aGVtIHdpdGgg
d3JpdGUKPiAtICogICAgICBhY2Nlc3MgKGV2ZW4gd2hlbiBpdCBzaG91bGQgYmUgcmVhZC1vbmx5
KS4KPiArICogICAgICBhY2Nlc3MgKGV2ZW4gd2hlbiBpdCBzaG91bGQgYmUgcmVhZC1vbmx5KS4g
SWYgdGhlIGZyb250ZW5kIGhpdHMgdGhlCj4gKyAqICAgICAgbWF4aW11bSBudW1iZXIgb2YgYWxs
b3dlZCBwZXJzaXN0ZW50bHkgbWFwcGVkIGdyYW50cywgaXQgY2FuIGZhbGxiYWNrCj4gKyAqICAg
ICAgdG8gbm9uIHBlcnNpc3RlbnQgbW9kZS4gVGhpcyB3aWxsIGNhdXNlIGEgcGVyZm9ybWFuY2Ug
ZGVncmFkYXRpb24sCj4gKyAqICAgICAgc2luY2UgdGhlIHRoZSBiYWNrZW5kIGRyaXZlciB3aWxs
IHN0aWxsIHRyeSB0byBtYXAgdGhvc2UgZ3JhbnRzCj4gKyAqICAgICAgcGVyc2lzdGVudGx5LiBT
aW5jZSB0aGUgcGVyc2lzdGVudCBncmFudHMgcHJvdG9jb2wgaXMgY29tcGF0aWJsZSB3aXRoCj4g
KyAqICAgICAgdGhlIHByZXZpb3VzIHByb3RvY29sLCBhIGZyb250ZW5kIGRyaXZlciBjYW4gY2hv
b3NlIHRvIHdvcmsgaW4KPiArICogICAgICBwZXJzaXN0ZW50IG1vZGUgZXZlbiB3aGVuIHRoZSBi
YWNrZW5kIGRvZXNuJ3Qgc3VwcG9ydCBpdC4KPiArICoKPiArICogICAgICBJdCBpcyByZWNvbW1l
bmRlZCB0aGF0IHRoZSBmcm9udGVuZCBkcml2ZXIgc3RvcmVzIHRoZSBwZXJzaXN0ZW50bHkKPiAr
ICogICAgICBtYXBwZWQgZ3JhbnRzIGluIGEgTElGTyBxdWV1ZSwgc28gYSBzdWJzZXQgb2YgYWxs
IHBlcnNpc3RlbnRseSBtYXBwZWQKPiArICogICAgICBncmFudHMgZ2V0cyB1c2VkIGNvbW1vbmx5
LiBUaGlzIGlzIGRvbmUgaW4gY2FzZSB0aGUgYmFja2VuZCBkcml2ZXIKPiArICogICAgICBkZWNp
ZGVzIHRvIGxpbWl0IHRoZSBtYXhpbXVtIG51bWJlciBvZiBwZXJzaXN0ZW50bHkgbWFwcGVkIGdy
YW50cwo+ICsgKiAgICAgIHRvIGEgdmFsdWUgbGVzcyB0aGFuIFJJTkdfU0laRSAqIEJMS0lGX01B
WF9TRUdNRU5UU19QRVJfUkVRVUVTVC4KPiAgICoKPiAgICotLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tIFZpcnR1YWwgRGV2aWNlIFByb3BlcnRpZXMgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQo+
ICAgKgo+IEBAIC0zMDgsNiArMzM1LDEwIEBACj4gICAqICg4KSBUaGUgZnJvbnRlbmQgZHJpdmVy
IGhhcyB0byBhbGxvdyB0aGUgYmFja2VuZCBkcml2ZXIgdG8gbWFwIGFsbCBncmFudHMKPiAgICog
ICAgIHdpdGggd3JpdGUgYWNjZXNzLCBldmVuIHdoZW4gdGhleSBzaG91bGQgYmUgbWFwcGVkIHJl
YWQtb25seSwgc2luY2UKPiAgICogICAgIGZ1cnRoZXIgcmVxdWVzdHMgbWF5IHJldXNlIHRoZXNl
IGdyYW50cyBhbmQgcmVxdWlyZSB3cml0ZSBwZXJtaXNzaW9ucy4KPiArICogKDkpIExpbnV4IGlt
cGxlbWVudGF0aW9uIGRvZXNuJ3QgaGF2ZSBhIGxpbWl0IG9uIHRoZSBtYXhpbXVtIG51bWJlciBv
Zgo+ICsgKiAgICAgZ3JhbnRzIHRoYXQgY2FuIGJlIHBlcnNpc3RlbnRseSBtYXBwZWQgaW4gdGhl
IGZyb250ZW5kIGRyaXZlciwgYnV0Cj4gKyAqICAgICBkdWUgdG8gdGhlIGZyb250ZW50IGRyaXZl
ciBpbXBsZW1lbnRhdGlvbiBpdCBzaG91bGQgbmV2ZXIgYmUgYmlnZ2VyCj4gKyAqICAgICB0aGFu
IFJJTkdfU0laRSAqIEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVC4KPiAgICovCj4gIAo+
ICAvKgoKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpY
ZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0
cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:57:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfus5-0002xf-0a; Tue, 04 Dec 2012 15:57:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1Tfus3-0002xO-Fv
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:57:19 +0000
Received: from [85.158.137.99:2105] by server-11.bemta-3.messagelabs.com id
	BA/85-19361-E5D1EB05; Tue, 04 Dec 2012 15:57:18 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354636635!17577467!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32556 invoked from network); 4 Dec 2012 15:57:17 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-2.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 15:57:17 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 49d3_0321_6a3d17bc_493f_4239_8635_f58fde0ce5f9;
	Tue, 04 Dec 2012 10:57:08 -0500
Message-ID: <50BE1D4C.5020707@jhuapl.edu>
Date: Tue, 04 Dec 2012 10:57:00 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354276631.6269.137.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354276631.6269.137.camel@zakaz.uk.xensource.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 7/7] Add a real top level configure script
 that calls the others
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2737677908018149253=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============2737677908018149253==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms030704050405020205010800"

This is a cryptographically signed message in MIME format.

--------------ms030704050405020205010800
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11/30/2012 06:57 AM, Ian Campbell wrote:
> On Thu, 2012-11-29 at 17:35 +0000, Matthew Fioravante wrote:
>> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
>> ---
>>   autogen.sh         |    1 +
>>   configure          | 3013 ++++++++++++++++++++++++++++++++++++++++++=
+++++++++-
> Pretty impressive for a minimal configure.ac ;-)
>
>>   configure.ac       |   14 +
>>   tools/configure    |  601 ++++++-----
>>   tools/configure.ac |    2 +-
>>   5 files changed, 3341 insertions(+), 290 deletions(-)
>>   create mode 100644 configure.ac
>
>> diff --git a/configure.ac b/configure.ac
>> new file mode 100644
>> index 0000000..3a6339c
>> --- /dev/null
>> +++ b/configure.ac
>> @@ -0,0 +1,14 @@
>> +#                                               -*- Autoconf -*-
>> +# Process this file with autoconf to produce a configure script.
>> +
>> +AC_PREREQ([2.67])
>> +AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
>> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>> +AC_CONFIG_SRCDIR([./tools/libxl/libxl.c])
>> +AC_CONFIG_FILES([./config/Tools.mk])
> If I understand correctly this causes this script to consider
> config/Tools.mk as an output file, which it isn't for this script (that=

> is tools/configure's job).
Yeah thats a typo, will be removed
>
> If this is needed then why is config/Stubdom.mk similarly listed?
>
>> +AC_PREFIX_DEFAULT([/usr])
>> +AC_CONFIG_AUX_DIR([.])
> Looking at
> http://www.gnu.org/software/automake/manual/html_node/Optional.html I
> wonder if we shouldn't also change the sub-configures to use .. with
> this macro, such that they pickup the toplevel config.{sub,guess} etc
> which this implies instead of having their own copy?
>
> Since the default is ., .. or ../.. do we need this at all?
I'll test this, we have multiple copies of install-sh now so removing=20
this means we can probably go back to just 1.
>
>> +
>> +AC_CONFIG_SUBDIRS([tools stubdom])
>> +
>> +AC_OUTPUT()
> Is this mandatory? This configure script shouldn't have any output othe=
r
> than the call to the sub-configures.
The configure script does nothing without AC_OUTPUT. It doesn't call the =

other scripts so I think we need this.
>
> [...]
>> diff --git a/tools/configure.ac b/tools/configure.ac
>> index 971e3e9..9924852 100644
>> --- a/tools/configure.ac
>> +++ b/tools/configure.ac
>> @@ -2,7 +2,7 @@
>>   # Process this file with autoconf to produce a configure script.
>>
>>   AC_PREREQ([2.67])
>> -AC_INIT([Xen Hypervisor], m4_esyscmd([../version.sh ../xen/Makefile])=
,
>> +AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Make=
file]),
>>       [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>>   AC_CONFIG_SRCDIR([libxl/libxl.c])
>>   AC_CONFIG_FILES([../config/Tools.mk])
>> --
>> 1.7.10.4
>>
>



--------------ms030704050405020205010800
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE1NTcwMFowIwYJKoZIhvcNAQkEMRYEFEga+KQuAaqfpQdM
aHmJx0AMk9J5MGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYBroCjAo5/pNMCHw4wZhOR1d/fiGBZXJ0oa
3wBEg82Lr9/d95sW5fZaF0IaVtc75MsNxBsBC6wG9iAFrPi61y/vrcXzxpqRx4e7/aAt1kj0
yvqAi0ZQtgQn9t3lQClHaLq1L6BDrL7B7BYAiQA6CmxgGHiC28X/NBg1zn+Jh/tDywAAAAAA
AA==
--------------ms030704050405020205010800--


--===============2737677908018149253==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2737677908018149253==--


List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2737677908018149253=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============2737677908018149253==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms030704050405020205010800"

This is a cryptographically signed message in MIME format.

--------------ms030704050405020205010800
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11/30/2012 06:57 AM, Ian Campbell wrote:
> On Thu, 2012-11-29 at 17:35 +0000, Matthew Fioravante wrote:
>> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
>> ---
>>   autogen.sh         |    1 +
>>   configure          | 3013 +++++++++++++++++++++++++++++++++++++++++++++++++++-
> Pretty impressive for a minimal configure.ac ;-)
>
>>   configure.ac       |   14 +
>>   tools/configure    |  601 ++++++-----
>>   tools/configure.ac |    2 +-
>>   5 files changed, 3341 insertions(+), 290 deletions(-)
>>   create mode 100644 configure.ac
>
>> diff --git a/configure.ac b/configure.ac
>> new file mode 100644
>> index 0000000..3a6339c
>> --- /dev/null
>> +++ b/configure.ac
>> @@ -0,0 +1,14 @@
>> +#                                               -*- Autoconf -*-
>> +# Process this file with autoconf to produce a configure script.
>> +
>> +AC_PREREQ([2.67])
>> +AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
>> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>> +AC_CONFIG_SRCDIR([./tools/libxl/libxl.c])
>> +AC_CONFIG_FILES([./config/Tools.mk])
> If I understand correctly this causes this script to consider
> config/Tools.mk as an output file, which it isn't for this script (that
> is tools/configure's job).
Yeah, that's a typo; it will be removed.
>
> If this is needed then why is config/Stubdom.mk similarly listed?
>
>> +AC_PREFIX_DEFAULT([/usr])
>> +AC_CONFIG_AUX_DIR([.])
> Looking at
> http://www.gnu.org/software/automake/manual/html_node/Optional.html I
> wonder if we shouldn't also change the sub-configures to use .. with
> this macro, such that they pickup the toplevel config.{sub,guess} etc
> which this implies instead of having their own copy?
>
> Since the default is ., .. or ../.. do we need this at all?
I'll test this; we have multiple copies of install-sh now, so removing
this means we can probably go back to just one.
>
>> +
>> +AC_CONFIG_SUBDIRS([tools stubdom])
>> +
>> +AC_OUTPUT()
> Is this mandatory? This configure script shouldn't have any output other
> than the call to the sub-configures.
The configure script does nothing without AC_OUTPUT; it doesn't call the
other scripts, so I think we need this.
>
> [...]
>> diff --git a/tools/configure.ac b/tools/configure.ac
>> index 971e3e9..9924852 100644
>> --- a/tools/configure.ac
>> +++ b/tools/configure.ac
>> @@ -2,7 +2,7 @@
>>   # Process this file with autoconf to produce a configure script.
>>
>>   AC_PREREQ([2.67])
>> -AC_INIT([Xen Hypervisor], m4_esyscmd([../version.sh ../xen/Makefile]),
>> +AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Makefile]),
>>       [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>>   AC_CONFIG_SRCDIR([libxl/libxl.c])
>>   AC_CONFIG_FILES([../config/Tools.mk])
>> --
>> 1.7.10.4
>>
>



--------------ms030704050405020205010800
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE1NTcwMFowIwYJKoZIhvcNAQkEMRYEFEga+KQuAaqfpQdM
aHmJx0AMk9J5MGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYBroCjAo5/pNMCHw4wZhOR1d/fiGBZXJ0oa
3wBEg82Lr9/d95sW5fZaF0IaVtc75MsNxBsBC6wG9iAFrPi61y/vrcXzxpqRx4e7/aAt1kj0
yvqAi0ZQtgQn9t3lQClHaLq1L6BDrL7B7BYAiQA6CmxgGHiC28X/NBg1zn+Jh/tDywAAAAAA
AA==
--------------ms030704050405020205010800--


--===============2737677908018149253==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2737677908018149253==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 15:58:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:58:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfutE-00033h-GB; Tue, 04 Dec 2012 15:58:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfutC-00033c-1x
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:58:30 +0000
Received: from [85.158.138.51:29704] by server-4.bemta-3.messagelabs.com id
	09/30-30023-5AD1EB05; Tue, 04 Dec 2012 15:58:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354636705!27465140!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13032 invoked from network); 4 Dec 2012 15:58:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:58:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216347851"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 15:57:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 10:57:57 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tfusf-0000fS-Ch;
	Tue, 04 Dec 2012 15:57:57 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 15:57:57 +0000
Message-ID: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Eventually we will have arm64 as well.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 Config.mk                                    |    4 +++-
 config/{arm.mk => arm32.mk}                  |    0
 xen/Rules.mk                                 |    2 +-
 xen/arch/arm/Makefile                        |    9 +++------
 xen/arch/arm/Rules.mk                        |   13 ++++++++-----
 xen/arch/arm/arm32/Makefile                  |    5 +++++
 xen/arch/arm/{ => arm32}/asm-offsets.c       |    0
 xen/arch/arm/{ => arm32}/entry.S             |    0
 xen/arch/arm/{ => arm32}/head.S              |    0
 xen/arch/arm/{ => arm32}/lib/Makefile        |    0
 xen/arch/arm/{ => arm32}/lib/assembler.h     |    0
 xen/arch/arm/{ => arm32}/lib/bitops.h        |    0
 xen/arch/arm/{ => arm32}/lib/changebit.S     |    0
 xen/arch/arm/{ => arm32}/lib/clearbit.S      |    0
 xen/arch/arm/{ => arm32}/lib/copy_template.S |    0
 xen/arch/arm/{ => arm32}/lib/div64.S         |    0
 xen/arch/arm/{ => arm32}/lib/findbit.S       |    0
 xen/arch/arm/{ => arm32}/lib/lib1funcs.S     |    0
 xen/arch/arm/{ => arm32}/lib/lshrdi3.S       |    0
 xen/arch/arm/{ => arm32}/lib/memcpy.S        |    0
 xen/arch/arm/{ => arm32}/lib/memmove.S       |    0
 xen/arch/arm/{ => arm32}/lib/memset.S        |    0
 xen/arch/arm/{ => arm32}/lib/memzero.S       |    0
 xen/arch/arm/{ => arm32}/lib/setbit.S        |    0
 xen/arch/arm/{ => arm32}/lib/testchangebit.S |    0
 xen/arch/arm/{ => arm32}/lib/testclearbit.S  |    0
 xen/arch/arm/{ => arm32}/lib/testsetbit.S    |    0
 xen/arch/arm/{ => arm32}/mode_switch.S       |    2 +-
 xen/arch/arm/{ => arm32}/proc-ca15.S         |    0
 xen/arch/arm/domain.c                        |    2 +-
 xen/arch/arm/domain_build.c                  |    2 +-
 xen/arch/arm/gic.c                           |    2 +-
 xen/arch/arm/irq.c                           |    2 +-
 xen/arch/arm/p2m.c                           |    2 +-
 xen/arch/arm/setup.c                         |    2 +-
 xen/arch/arm/smpboot.c                       |    2 +-
 xen/arch/arm/traps.c                         |    2 +-
 xen/arch/arm/vgic.c                          |    2 +-
 xen/arch/arm/vtimer.c                        |    2 +-
 xen/{arch/arm => include/asm-arm}/gic.h      |    6 ++----
 40 files changed, 33 insertions(+), 28 deletions(-)
 rename config/{arm.mk => arm32.mk} (100%)
 create mode 100644 xen/arch/arm/arm32/Makefile
 rename xen/arch/arm/{ => arm32}/asm-offsets.c (100%)
 rename xen/arch/arm/{ => arm32}/entry.S (100%)
 rename xen/arch/arm/{ => arm32}/head.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/Makefile (100%)
 rename xen/arch/arm/{ => arm32}/lib/assembler.h (100%)
 rename xen/arch/arm/{ => arm32}/lib/bitops.h (100%)
 rename xen/arch/arm/{ => arm32}/lib/changebit.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/clearbit.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/copy_template.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/div64.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/findbit.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/lib1funcs.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/lshrdi3.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/memcpy.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/memmove.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/memset.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/memzero.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/setbit.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/testchangebit.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/testclearbit.S (100%)
 rename xen/arch/arm/{ => arm32}/lib/testsetbit.S (100%)
 rename xen/arch/arm/{ => arm32}/mode_switch.S (99%)
 rename xen/arch/arm/{ => arm32}/proc-ca15.S (100%)
 rename xen/{arch/arm => include/asm-arm}/gic.h (98%)

diff --git a/Config.mk b/Config.mk
index d99b9a1..8e35886 100644
--- a/Config.mk
+++ b/Config.mk
@@ -14,7 +14,9 @@ debug ?= y
 debug_symbols ?= $(debug)
 
 XEN_COMPILE_ARCH    ?= $(shell uname -m | sed -e s/i.86/x86_32/ \
-                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ -e s/arm.*/arm/)
+                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ \
+                         -e s/armv7.*/arm32/)
+
 XEN_TARGET_ARCH     ?= $(XEN_COMPILE_ARCH)
 XEN_OS              ?= $(shell uname -s)
 
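For reference, the arch mapping above can be exercised on its own. The helper below is a hypothetical stand-in for the sed pipeline that Config.mk feeds `uname -m` through after this patch; note that `armv7.*` now maps to the arm32 subarch, while other machine strings pass through unchanged:

```shell
# Hypothetical helper mirroring the patched XEN_COMPILE_ARCH sed pipeline.
map_arch() {
    echo "$1" | sed -e 's/i.86/x86_32/' \
                    -e 's/i86pc/x86_32/' -e 's/amd64/x86_64/' \
                    -e 's/armv7.*/arm32/'
}

map_arch armv7l    # -> arm32
map_arch i686      # -> x86_32
map_arch aarch64   # -> aarch64 (arm64 is not mapped yet)
```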
diff --git a/config/arm.mk b/config/arm32.mk
similarity index 100%
rename from config/arm.mk
rename to config/arm32.mk
diff --git a/xen/Rules.mk b/xen/Rules.mk
index f7cb8b2..c2db449 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -28,7 +28,7 @@ endif
 # Set ARCH/SUBARCH appropriately.
 override TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 override TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                              sed -e 's/x86.*/x86/')
+                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
 
 TARGET := $(BASEDIR)/xen
 
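The TARGET_ARCH derivation above folds both subarches back to plain `arm`. A standalone sketch (the `to_arch` name is hypothetical; note that the `\|` alternation inside a basic-regular-expression group is a GNU sed extension):

```shell
# Hypothetical stand-in for the patched TARGET_ARCH sed expression.
# The \| alternation inside a BRE group is a GNU sed extension.
to_arch() {
    echo "$1" | sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g'
}

to_arch arm32    # -> arm
to_arch arm64    # -> arm
to_arch x86_64   # -> x86
```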
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index fd92b72..1b33767 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -1,8 +1,7 @@
-subdir-y += lib
+subdir-$(arm32) += arm32
 
 obj-y += dummy.o
 obj-y += early_printk.o
-obj-y += entry.o
 obj-y += domain.o
 obj-y += domctl.o
 obj-y += sysctl.o
@@ -12,8 +11,6 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
-obj-y += mode_switch.o
-obj-y += proc-ca15.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
@@ -36,7 +33,7 @@ obj-y += dtb.o
 AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
 endif
 
-ALL_OBJS := head.o $(ALL_OBJS)
+ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
 
 $(TARGET): $(TARGET)-syms $(TARGET).bin
 	# XXX: VE model loads by VMA so instead of
@@ -81,7 +78,7 @@ $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
 	    $(@D)/.$(@F).1.o -o $@
 	rm -f $(@D)/.$(@F).[0-9]*
 
-asm-offsets.s: asm-offsets.c
+asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
 	$(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $<
 
 xen.lds: xen.lds.S
diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..f83bfee 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -12,16 +12,19 @@ CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
 CFLAGS += -I$(BASEDIR)/include
 
-# Prevent floating-point variables from creeping into Xen.
-CFLAGS += -msoft-float
-
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
 
 arm := y
 
+ifeq ($(TARGET_SUBARCH),arm32)
+# Prevent floating-point variables from creeping into Xen.
+CFLAGS += -msoft-float
+CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
+arm32 := y
+arm64 := n
+endif
+
 ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
 CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
 endif
-
-CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
new file mode 100644
index 0000000..20931fa
--- /dev/null
+++ b/xen/arch/arm/arm32/Makefile
@@ -0,0 +1,5 @@
+subdir-y += lib
+
+obj-y += entry.o
+obj-y += mode_switch.o
+obj-y += proc-ca15.o
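The `subdir-$(arm32)` line in xen/arch/arm/Makefile relies on Rules.mk setting `arm32 := y`, so the variable reference expands to `subdir-y`; when the subarch flag is unset it expands to a never-used `subdir-` list instead. A minimal, self-contained illustration of that expansion trick (throwaway scratch Makefile, not part of the tree):

```shell
# Demonstrate the subdir-$(flag) expansion trick with a scratch Makefile.
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
arm32 := y
arm64 :=
subdir-y += lib
subdir-$(arm32) += arm32
subdir-$(arm64) += arm64
all: ; @echo $(subdir-y)
EOF
make -s -C "$workdir" all   # prints: lib arm32
```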
diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
similarity index 100%
rename from xen/arch/arm/asm-offsets.c
rename to xen/arch/arm/arm32/asm-offsets.c
diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/arm32/entry.S
similarity index 100%
rename from xen/arch/arm/entry.S
rename to xen/arch/arm/arm32/entry.S
diff --git a/xen/arch/arm/head.S b/xen/arch/arm/arm32/head.S
similarity index 100%
rename from xen/arch/arm/head.S
rename to xen/arch/arm/arm32/head.S
diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/arm32/lib/Makefile
similarity index 100%
rename from xen/arch/arm/lib/Makefile
rename to xen/arch/arm/arm32/lib/Makefile
diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/arm32/lib/assembler.h
similarity index 100%
rename from xen/arch/arm/lib/assembler.h
rename to xen/arch/arm/arm32/lib/assembler.h
diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/arm32/lib/bitops.h
similarity index 100%
rename from xen/arch/arm/lib/bitops.h
rename to xen/arch/arm/arm32/lib/bitops.h
diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/arm32/lib/changebit.S
similarity index 100%
rename from xen/arch/arm/lib/changebit.S
rename to xen/arch/arm/arm32/lib/changebit.S
diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/arm32/lib/clearbit.S
similarity index 100%
rename from xen/arch/arm/lib/clearbit.S
rename to xen/arch/arm/arm32/lib/clearbit.S
diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/arm32/lib/copy_template.S
similarity index 100%
rename from xen/arch/arm/lib/copy_template.S
rename to xen/arch/arm/arm32/lib/copy_template.S
diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/arm32/lib/div64.S
similarity index 100%
rename from xen/arch/arm/lib/div64.S
rename to xen/arch/arm/arm32/lib/div64.S
diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/arm32/lib/findbit.S
similarity index 100%
rename from xen/arch/arm/lib/findbit.S
rename to xen/arch/arm/arm32/lib/findbit.S
diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/arm32/lib/lib1funcs.S
similarity index 100%
rename from xen/arch/arm/lib/lib1funcs.S
rename to xen/arch/arm/arm32/lib/lib1funcs.S
diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/arm32/lib/lshrdi3.S
similarity index 100%
rename from xen/arch/arm/lib/lshrdi3.S
rename to xen/arch/arm/arm32/lib/lshrdi3.S
diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/arm32/lib/memcpy.S
similarity index 100%
rename from xen/arch/arm/lib/memcpy.S
rename to xen/arch/arm/arm32/lib/memcpy.S
diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/arm32/lib/memmove.S
similarity index 100%
rename from xen/arch/arm/lib/memmove.S
rename to xen/arch/arm/arm32/lib/memmove.S
diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/arm32/lib/memset.S
similarity index 100%
rename from xen/arch/arm/lib/memset.S
rename to xen/arch/arm/arm32/lib/memset.S
diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/arm32/lib/memzero.S
similarity index 100%
rename from xen/arch/arm/lib/memzero.S
rename to xen/arch/arm/arm32/lib/memzero.S
diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/arm32/lib/setbit.S
similarity index 100%
rename from xen/arch/arm/lib/setbit.S
rename to xen/arch/arm/arm32/lib/setbit.S
diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/arm32/lib/testchangebit.S
similarity index 100%
rename from xen/arch/arm/lib/testchangebit.S
rename to xen/arch/arm/arm32/lib/testchangebit.S
diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/arm32/lib/testclearbit.S
similarity index 100%
rename from xen/arch/arm/lib/testclearbit.S
rename to xen/arch/arm/arm32/lib/testclearbit.S
diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/arm32/lib/testsetbit.S
similarity index 100%
rename from xen/arch/arm/lib/testsetbit.S
rename to xen/arch/arm/arm32/lib/testsetbit.S
diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/arm32/mode_switch.S
similarity index 99%
rename from xen/arch/arm/mode_switch.S
rename to xen/arch/arm/arm32/mode_switch.S
index 7c3b357..d550c33 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/arm32/mode_switch.S
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/platform_vexpress.h>
 #include <asm/asm_defns.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 
 /* XXX: Versatile Express specific code */
diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/arm32/proc-ca15.S
similarity index 100%
rename from xen/arch/arm/proc-ca15.S
rename to xen/arch/arm/arm32/proc-ca15.S
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c5292c7..0875045 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -12,7 +12,7 @@
 #include <asm/p2m.h>
 #include <asm/irq.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 #include "vtimer.h"
 #include "vpl011.h"
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a9e7f43..aac92b3 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -11,7 +11,7 @@
 #include <xen/libfdt/libfdt.h>
 #include <xen/guest_access.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 #include "kernel.h"
 
 static unsigned int __initdata opt_dom0_max_vcpus;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0c6fab9..41824c9 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -29,7 +29,7 @@
 #include <asm/p2m.h>
 #include <asm/domain.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 
 /* Access to the GIC Distributor registers through the fixmap */
 #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 72e83e6..c141d81 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -25,7 +25,7 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 
 static void enable_none(struct irq_desc *irq) { }
 static unsigned int startup_none(struct irq_desc *irq) { return 0; }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7ae4515..852f0d8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -4,7 +4,7 @@
 #include <xen/errno.h>
 #include <xen/domain_page.h>
 #include <asm/flushtlb.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..8f85ae6 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -39,7 +39,7 @@
 #include <asm/setup.h>
 #include <asm/vfp.h>
 #include <asm/early_printk.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 static __used void init_done(void)
 {
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 6555ac6..7b6ffa0 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -29,7 +29,7 @@
 #include <xen/timer.h>
 #include <xen/irq.h>
 #include <asm/vfp.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 cpumask_t cpu_online_map;
 EXPORT_SYMBOL(cpu_online_map);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 19e2081..d01ff6d 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -35,7 +35,7 @@
 
 #include "io.h"
 #include "vtimer.h"
-#include "gic.h"
+#include <asm/gic.h>
 
 /* The base of the stack must always be double-word aligned, which means
  * that both the kernel half of struct cpu_user_regs (which is pushed in
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3f7e757..7d1a5ad 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -27,7 +27,7 @@
 #include <asm/current.h>
 
 #include "io.h"
-#include "gic.h"
+#include <asm/gic.h>
 
 #define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000
 
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 490b021..1c45f4a 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -21,7 +21,7 @@
 #include <xen/lib.h>
 #include <xen/timer.h>
 #include <xen/sched.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 extern s_time_t ticks_to_ns(uint64_t ticks);
 extern uint64_t ns_to_ticks(s_time_t ns);
diff --git a/xen/arch/arm/gic.h b/xen/include/asm-arm/gic.h
similarity index 98%
rename from xen/arch/arm/gic.h
rename to xen/include/asm-arm/gic.h
index 1bf1b02..bf30fbd 100644
--- a/xen/arch/arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -1,6 +1,4 @@
 /*
- * xen/arch/arm/gic.h
- *
  * ARM Generic Interrupt Controller support
  *
  * Tim Deegan <tim@xen.org>
@@ -17,8 +15,8 @@
  * GNU General Public License for more details.
  */
 
-#ifndef __ARCH_ARM_GIC_H__
-#define __ARCH_ARM_GIC_H__
+#ifndef __ASM_ARM_GIC_H__
+#define __ASM_ARM_GIC_H__
 
 #define GICD_CTLR       (0x000/4)
 #define GICD_TYPER      (0x004/4)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 XEN_OS              ?= $(shell uname -s)
 
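[Editor's aside, not part of the patch: the effect of the updated XEN_COMPILE_ARCH mapping can be checked from a shell. The inputs below are example `uname -m` strings; note that only armv7* values are rewritten to arm32, while other strings fall through unchanged.]

```shell
# Standalone sketch of the new XEN_COMPILE_ARCH sed mapping from Config.mk.
arch_map() {
    echo "$1" | sed -e 's/i.86/x86_32/' -e 's/i86pc/x86_32/' \
                    -e 's/amd64/x86_64/' -e 's/armv7.*/arm32/'
}
arch_map i686      # -> x86_32
arch_map amd64     # -> x86_64
arch_map armv7l    # -> arm32
arch_map x86_64    # -> x86_64 (no pattern matches; passes through)
```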
diff --git a/config/arm.mk b/config/arm32.mk
similarity index 100%
rename from config/arm.mk
rename to config/arm32.mk
diff --git a/xen/Rules.mk b/xen/Rules.mk
index f7cb8b2..c2db449 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -28,7 +28,7 @@ endif
 # Set ARCH/SUBARCH appropriately.
 override TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 override TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                              sed -e 's/x86.*/x86/')
+                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
 
 TARGET := $(BASEDIR)/xen
 
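[Editor's aside, not part of the patch: the TARGET_ARCH derivation above can likewise be exercised from a shell. The `\|` alternation inside a basic regex is a GNU sed extension, so this sketch assumes GNU sed.]

```shell
# Standalone sketch of the TARGET_ARCH derivation from xen/Rules.mk:
# both arm32 and arm64 collapse to the common "arm" architecture.
target_arch() {
    echo "$1" | sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g'
}
target_arch x86_64   # -> x86
target_arch arm32    # -> arm
target_arch arm64    # -> arm
```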
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index fd92b72..1b33767 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -1,8 +1,7 @@
-subdir-y += lib
+subdir-$(arm32) += arm32
 
 obj-y += dummy.o
 obj-y += early_printk.o
-obj-y += entry.o
 obj-y += domain.o
 obj-y += domctl.o
 obj-y += sysctl.o
@@ -12,8 +11,6 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
-obj-y += mode_switch.o
-obj-y += proc-ca15.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
@@ -36,7 +33,7 @@ obj-y += dtb.o
 AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
 endif
 
-ALL_OBJS := head.o $(ALL_OBJS)
+ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
 
 $(TARGET): $(TARGET)-syms $(TARGET).bin
 	# XXX: VE model loads by VMA so instead of
@@ -81,7 +78,7 @@ $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
 	    $(@D)/.$(@F).1.o -o $@
 	rm -f $(@D)/.$(@F).[0-9]*
 
-asm-offsets.s: asm-offsets.c
+asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
 	$(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $<
 
 xen.lds: xen.lds.S
diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..f83bfee 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -12,16 +12,19 @@ CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
 CFLAGS += -I$(BASEDIR)/include
 
-# Prevent floating-point variables from creeping into Xen.
-CFLAGS += -msoft-float
-
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
 
 arm := y
 
+ifeq ($(TARGET_SUBARCH),arm32)
+# Prevent floating-point variables from creeping into Xen.
+CFLAGS += -msoft-float
+CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
+arm32 := y
+arm64 := n
+endif
+
 ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
 CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
 endif
-
-CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
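[Editor's aside, not part of the patch: the `arm32 := y` / `arm64 := n` settings above are what make the `subdir-$(arm32) += arm32` line in the arch Makefile work. A throwaway Makefile (written to a hypothetical /tmp path, mimicking only the kbuild-style list mechanics, not the real xen build) shows the expansion: with arm32 set to y the assignment lands on subdir-y, while the arm64 assignment lands on the unused subdir-n list.]

```shell
# Build a minimal Makefile demonstrating computed-variable selection.
{
  printf 'arm32 := y\narm64 := n\n'
  printf 'subdir-$(arm32) += arm32\nsubdir-$(arm64) += arm64\n'
  printf 'all:\n\t@echo $(subdir-y)\n'
} > /tmp/subarch-demo.mk
make -s -f /tmp/subarch-demo.mk   # prints: arm32
```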
diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
new file mode 100644
index 0000000..20931fa
--- /dev/null
+++ b/xen/arch/arm/arm32/Makefile
@@ -0,0 +1,5 @@
+subdir-y += lib
+
+obj-y += entry.o
+obj-y += mode_switch.o
+obj-y += proc-ca15.o
diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
similarity index 100%
rename from xen/arch/arm/asm-offsets.c
rename to xen/arch/arm/arm32/asm-offsets.c
diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/arm32/entry.S
similarity index 100%
rename from xen/arch/arm/entry.S
rename to xen/arch/arm/arm32/entry.S
diff --git a/xen/arch/arm/head.S b/xen/arch/arm/arm32/head.S
similarity index 100%
rename from xen/arch/arm/head.S
rename to xen/arch/arm/arm32/head.S
diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/arm32/lib/Makefile
similarity index 100%
rename from xen/arch/arm/lib/Makefile
rename to xen/arch/arm/arm32/lib/Makefile
diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/arm32/lib/assembler.h
similarity index 100%
rename from xen/arch/arm/lib/assembler.h
rename to xen/arch/arm/arm32/lib/assembler.h
diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/arm32/lib/bitops.h
similarity index 100%
rename from xen/arch/arm/lib/bitops.h
rename to xen/arch/arm/arm32/lib/bitops.h
diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/arm32/lib/changebit.S
similarity index 100%
rename from xen/arch/arm/lib/changebit.S
rename to xen/arch/arm/arm32/lib/changebit.S
diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/arm32/lib/clearbit.S
similarity index 100%
rename from xen/arch/arm/lib/clearbit.S
rename to xen/arch/arm/arm32/lib/clearbit.S
diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/arm32/lib/copy_template.S
similarity index 100%
rename from xen/arch/arm/lib/copy_template.S
rename to xen/arch/arm/arm32/lib/copy_template.S
diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/arm32/lib/div64.S
similarity index 100%
rename from xen/arch/arm/lib/div64.S
rename to xen/arch/arm/arm32/lib/div64.S
diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/arm32/lib/findbit.S
similarity index 100%
rename from xen/arch/arm/lib/findbit.S
rename to xen/arch/arm/arm32/lib/findbit.S
diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/arm32/lib/lib1funcs.S
similarity index 100%
rename from xen/arch/arm/lib/lib1funcs.S
rename to xen/arch/arm/arm32/lib/lib1funcs.S
diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/arm32/lib/lshrdi3.S
similarity index 100%
rename from xen/arch/arm/lib/lshrdi3.S
rename to xen/arch/arm/arm32/lib/lshrdi3.S
diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/arm32/lib/memcpy.S
similarity index 100%
rename from xen/arch/arm/lib/memcpy.S
rename to xen/arch/arm/arm32/lib/memcpy.S
diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/arm32/lib/memmove.S
similarity index 100%
rename from xen/arch/arm/lib/memmove.S
rename to xen/arch/arm/arm32/lib/memmove.S
diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/arm32/lib/memset.S
similarity index 100%
rename from xen/arch/arm/lib/memset.S
rename to xen/arch/arm/arm32/lib/memset.S
diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/arm32/lib/memzero.S
similarity index 100%
rename from xen/arch/arm/lib/memzero.S
rename to xen/arch/arm/arm32/lib/memzero.S
diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/arm32/lib/setbit.S
similarity index 100%
rename from xen/arch/arm/lib/setbit.S
rename to xen/arch/arm/arm32/lib/setbit.S
diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/arm32/lib/testchangebit.S
similarity index 100%
rename from xen/arch/arm/lib/testchangebit.S
rename to xen/arch/arm/arm32/lib/testchangebit.S
diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/arm32/lib/testclearbit.S
similarity index 100%
rename from xen/arch/arm/lib/testclearbit.S
rename to xen/arch/arm/arm32/lib/testclearbit.S
diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/arm32/lib/testsetbit.S
similarity index 100%
rename from xen/arch/arm/lib/testsetbit.S
rename to xen/arch/arm/arm32/lib/testsetbit.S
diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/arm32/mode_switch.S
similarity index 99%
rename from xen/arch/arm/mode_switch.S
rename to xen/arch/arm/arm32/mode_switch.S
index 7c3b357..d550c33 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/arm32/mode_switch.S
@@ -21,7 +21,7 @@
 #include <asm/page.h>
 #include <asm/platform_vexpress.h>
 #include <asm/asm_defns.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 
 /* XXX: Versatile Express specific code */
diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/arm32/proc-ca15.S
similarity index 100%
rename from xen/arch/arm/proc-ca15.S
rename to xen/arch/arm/arm32/proc-ca15.S
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c5292c7..0875045 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -12,7 +12,7 @@
 #include <asm/p2m.h>
 #include <asm/irq.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 #include "vtimer.h"
 #include "vpl011.h"
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a9e7f43..aac92b3 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -11,7 +11,7 @@
 #include <xen/libfdt/libfdt.h>
 #include <xen/guest_access.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 #include "kernel.h"
 
 static unsigned int __initdata opt_dom0_max_vcpus;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0c6fab9..41824c9 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -29,7 +29,7 @@
 #include <asm/p2m.h>
 #include <asm/domain.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 
 /* Access to the GIC Distributor registers through the fixmap */
 #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 72e83e6..c141d81 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -25,7 +25,7 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 
-#include "gic.h"
+#include <asm/gic.h>
 
 static void enable_none(struct irq_desc *irq) { }
 static unsigned int startup_none(struct irq_desc *irq) { return 0; }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7ae4515..852f0d8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -4,7 +4,7 @@
 #include <xen/errno.h>
 #include <xen/domain_page.h>
 #include <asm/flushtlb.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..8f85ae6 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -39,7 +39,7 @@
 #include <asm/setup.h>
 #include <asm/vfp.h>
 #include <asm/early_printk.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 static __used void init_done(void)
 {
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 6555ac6..7b6ffa0 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -29,7 +29,7 @@
 #include <xen/timer.h>
 #include <xen/irq.h>
 #include <asm/vfp.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 cpumask_t cpu_online_map;
 EXPORT_SYMBOL(cpu_online_map);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 19e2081..d01ff6d 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -35,7 +35,7 @@
 
 #include "io.h"
 #include "vtimer.h"
-#include "gic.h"
+#include <asm/gic.h>
 
 /* The base of the stack must always be double-word aligned, which means
  * that both the kernel half of struct cpu_user_regs (which is pushed in
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3f7e757..7d1a5ad 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -27,7 +27,7 @@
 #include <asm/current.h>
 
 #include "io.h"
-#include "gic.h"
+#include <asm/gic.h>
 
 #define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000
 
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 490b021..1c45f4a 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -21,7 +21,7 @@
 #include <xen/lib.h>
 #include <xen/timer.h>
 #include <xen/sched.h>
-#include "gic.h"
+#include <asm/gic.h>
 
 extern s_time_t ticks_to_ns(uint64_t ticks);
 extern uint64_t ns_to_ticks(s_time_t ns);
diff --git a/xen/arch/arm/gic.h b/xen/include/asm-arm/gic.h
similarity index 98%
rename from xen/arch/arm/gic.h
rename to xen/include/asm-arm/gic.h
index 1bf1b02..bf30fbd 100644
--- a/xen/arch/arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -1,6 +1,4 @@
 /*
- * xen/arch/arm/gic.h
- *
  * ARM Generic Interrupt Controller support
  *
  * Tim Deegan <tim@xen.org>
@@ -17,8 +15,8 @@
  * GNU General Public License for more details.
  */
 
-#ifndef __ARCH_ARM_GIC_H__
-#define __ARCH_ARM_GIC_H__
+#ifndef __ASM_ARM_GIC_H__
+#define __ASM_ARM_GIC_H__
 
 #define GICD_CTLR       (0x000/4)
 #define GICD_TYPER      (0x004/4)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 15:58:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 15:58:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfutI-00034W-27; Tue, 04 Dec 2012 15:58:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TfutG-000342-8D
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 15:58:34 +0000
Received: from [85.158.138.51:30058] by server-12.bemta-3.messagelabs.com id
	64/A7-22757-9AD1EB05; Tue, 04 Dec 2012 15:58:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354636705!27465140!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13104 invoked from network); 4 Dec 2012 15:58:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 15:58:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216347893"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	04 Dec 2012 15:58:16 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Tue, 4 Dec 2012
	10:58:16 -0500
Message-ID: <50BE1D97.1000509@citrix.com>
Date: Tue, 4 Dec 2012 15:58:15 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <1354202142-22695-1-git-send-email-roger.pau@citrix.com>
	<1354636260.15296.43.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354636260.15296.43.camel@zakaz.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] [PATCH v2] docs: expand persistent grants protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDQvMTIvMTIgMTU6NTEsIElhbiBDYW1wYmVsbCB3cm90ZToKPiBPbiBUaHUsIDIwMTItMTEt
MjkgYXQgMTU6MTUgKzAwMDAsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPj4gU2lnbmVkLW9mZi1i
eTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+Cj4gQWNrZWQgKyBhcHBs
aWVkLCB0aGFua3MuCj4KPj4gLS0tCj4+ICAgeGVuL2luY2x1ZGUvcHVibGljL2lvL2Jsa2lmLmgg
fCAgIDM3ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0KPj4gICAxIGZpbGVz
IGNoYW5nZWQsIDM0IGluc2VydGlvbnMoKyksIDMgZGVsZXRpb25zKC0pCj4+Cj4+IGRpZmYgLS1n
aXQgYS94ZW4vaW5jbHVkZS9wdWJsaWMvaW8vYmxraWYuaCBiL3hlbi9pbmNsdWRlL3B1YmxpYy9p
by9ibGtpZi5oCj4+IGluZGV4IDhkZjU4NjYuLjFmMGZiZDYgMTAwNjQ0Cj4+IC0tLSBhL3hlbi9p
bmNsdWRlL3B1YmxpYy9pby9ibGtpZi5oCj4+ICsrKyBiL3hlbi9pbmNsdWRlL3B1YmxpYy9pby9i
bGtpZi5oCj4+IEBAIC0xMzcsNyArMTM3LDIyIEBACj4+ICAgICogICAgICBjYW4gbWFwIHBlcnNp
c3RlbnRseSBkZXBlbmRzIG9uIHRoZSBpbXBsZW1lbnRhdGlvbiwgYnV0IGlkZWFsbHkgaXQKPj4g
ICAgKiAgICAgIHNob3VsZCBiZSBSSU5HX1NJWkUgKiBCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JF
UVVFU1QuIFVzaW5nIHRoaXMKPj4gICAgKiAgICAgIGZlYXR1cmUgdGhlIGJhY2tlbmQgZG9lc24n
dCBuZWVkIHRvIHVubWFwIGVhY2ggZ3JhbnQsIHByZXZlbnRpbmcKPj4gLSAqICAgICAgY29zdGx5
IFRMQiBmbHVzaGVzLgo+PiArICogICAgICBjb3N0bHkgVExCIGZsdXNoZXMuIFRoZSBiYWNrZW5k
IGRyaXZlciBzaG91bGQgb25seSBtYXAgZ3JhbnRzCj4+ICsgKiAgICAgIHBlcnNpc3RlbnRseSBp
ZiB0aGUgZnJvbnRlbmQgc3VwcG9ydHMgaXQuIElmIGEgYmFja2VuZCBkcml2ZXIgY2hvb3Nlcwo+
PiArICogICAgICB0byB1c2UgdGhlIHBlcnNpc3RlbnQgcHJvdG9jb2wgd2hlbiB0aGUgZnJvbnRl
bmQgZG9lc24ndCBzdXBwb3J0IGl0LAo+PiArICogICAgICBpdCB3aWxsIHByb2JhYmx5IGhpdCB0
aGUgbWF4aW11bSBudW1iZXIgb2YgcGVyc2lzdGVudGx5IG1hcHBlZCBncmFudHMKPj4gKyAqICAg
ICAgKGR1ZSB0byB0aGUgZmFjdCB0aGF0IHRoZSBmcm9udGVuZCB3b24ndCBiZSByZXVzaW5nIHRo
ZSBzYW1lIGdyYW50cyksCj4+ICsgKiAgICAgIGFuZCBmYWxsIGJhY2sgdG8gbm9uLXBlcnNpc3Rl
bnQgbW9kZS4gQmFja2VuZCBpbXBsZW1lbnRhdGlvbnMgbWF5Cj4+ICsgKiAgICAgIHNocmluayBv
ciBleHBhbmQgdGhlIG51bWJlciBvZiBwZXJzaXN0ZW50bHkgbWFwcGVkIGdyYW50cyB3aXRob3V0
Cj4+ICsgKiAgICAgIG5vdGlmeWluZyB0aGUgZnJvbnRlbmQgZGVwZW5kaW5nIG9uIG1lbW9yeSBj
b25zdHJhaW50cyAodGhpcyBtaWdodAo+PiArICogICAgICBjYXVzZSBhIHBlcmZvcm1hbmNlIGRl
Z3JhZGF0aW9uKS4KPj4gKyAqCj4+ICsgKiAgICAgIElmIGEgYmFja2VuZCBkcml2ZXIgd2FudHMg
dG8gbGltaXQgdGhlIG1heGltdW0gbnVtYmVyIG9mIHBlcnNpc3RlbnRseQo+PiArICogICAgICBt
YXBwZWQgZ3JhbnRzIHRvIGEgdmFsdWUgbGVzcyB0aGFuIFJJTkdfU0laRSAqCj4+ICsgKiAgICAg
IEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVCBhIExSVSBzdHJhdGVneSBzaG91bGQgYmUg
dXNlZCB0bwo+PiArICogICAgICBkaXNjYXJkIHRoZSBncmFudHMgdGhhdCBhcmUgbGVzcyBjb21t
b25seSB1c2VkLiBVc2luZyBhIExSVSBpbiB0aGUKPj4gKyAqICAgICAgYmFja2VuZCBkcml2ZXIg
cGFpcmVkIHdpdGggYSBMSUZPIHF1ZXVlIGluIHRoZSBmcm9udGVuZCB3aWxsCj4+ICsgKiAgICAg
IGFsbG93IHVzIHRvIGhhdmUgYmV0dGVyIHBlcmZvcm1hbmNlIGluIHRoaXMgc2NlbmFyaW8uCj4+
ICAgICoKPj4gICAgKi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tIFJlcXVlc3QgVHJhbnNwb3J0IFBh
cmFtZXRlcnMgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCj4+ICAgICoKPj4gQEAgLTI1OCwxMSAr
MjczLDIzIEBACj4+ICAgICogZmVhdHVyZS1wZXJzaXN0ZW50Cj4+ICAgICogICAgICBWYWx1ZXM6
ICAgICAgICAgMC8xIChib29sZWFuKQo+PiAgICAqICAgICAgRGVmYXVsdCBWYWx1ZTogIDAKPj4g
LSAqICAgICAgTm90ZXM6IDcsIDgKPj4gKyAqICAgICAgTm90ZXM6IDcsIDgsIDkKPj4gICAgKgo+
PiAgICAqICAgICAgQSB2YWx1ZSBvZiAiMSIgaW5kaWNhdGVzIHRoYXQgdGhlIGZyb250ZW5kIHdp
bGwgcmV1c2UgdGhlIHNhbWUgZ3JhbnRzCj4+ICAgICogICAgICBmb3IgYWxsIHRyYW5zYWN0aW9u
cywgYWxsb3dpbmcgdGhlIGJhY2tlbmQgdG8gbWFwIHRoZW0gd2l0aCB3cml0ZQo+PiAtICogICAg
ICBhY2Nlc3MgKGV2ZW4gd2hlbiBpdCBzaG91bGQgYmUgcmVhZC1vbmx5KS4KPj4gKyAqICAgICAg
YWNjZXNzIChldmVuIHdoZW4gaXQgc2hvdWxkIGJlIHJlYWQtb25seSkuIElmIHRoZSBmcm9udGVu
ZCBoaXRzIHRoZQo+PiArICogICAgICBtYXhpbXVtIG51bWJlciBvZiBhbGxvd2VkIHBlcnNpc3Rl
bnRseSBtYXBwZWQgZ3JhbnRzLCBpdCBjYW4gZmFsbGJhY2sKPj4gKyAqICAgICAgdG8gbm9uIHBl
cnNpc3RlbnQgbW9kZS4gVGhpcyB3aWxsIGNhdXNlIGEgcGVyZm9ybWFuY2UgZGVncmFkYXRpb24s
Cj4+ICsgKiAgICAgIHNpbmNlIHRoZSB0aGUgYmFja2VuZCBkcml2ZXIgd2lsbCBzdGlsbCB0cnkg
dG8gbWFwIHRob3NlIGdyYW50cwo+PiArICogICAgICBwZXJzaXN0ZW50bHkuIFNpbmNlIHRoZSBw
ZXJzaXN0ZW50IGdyYW50cyBwcm90b2NvbCBpcyBjb21wYXRpYmxlIHdpdGgKPj4gKyAqICAgICAg
dGhlIHByZXZpb3VzIHByb3RvY29sLCBhIGZyb250ZW5kIGRyaXZlciBjYW4gY2hvb3NlIHRvIHdv
cmsgaW4KPj4gKyAqICAgICAgcGVyc2lzdGVudCBtb2RlIGV2ZW4gd2hlbiB0aGUgYmFja2VuZCBk
b2Vzbid0IHN1cHBvcnQgaXQuCj4+ICsgKgo+PiArICogICAgICBJdCBpcyByZWNvbW1lbmRlZCB0
aGF0IHRoZSBmcm9udGVuZCBkcml2ZXIgc3RvcmVzIHRoZSBwZXJzaXN0ZW50bHkKPj4gKyAqICAg
ICAgbWFwcGVkIGdyYW50cyBpbiBhIExJRk8gcXVldWUsIHNvIGEgc3Vic2V0IG9mIGFsbCBwZXJz
aXN0ZW50bHkgbWFwcGVkCj4+ICsgKiAgICAgIGdyYW50cyBnZXRzIHVzZWQgY29tbW9ubHkuIFRo
aXMgaXMgZG9uZSBpbiBjYXNlIHRoZSBiYWNrZW5kIGRyaXZlcgo+PiArICogICAgICBkZWNpZGVz
IHRvIGxpbWl0IHRoZSBtYXhpbXVtIG51bWJlciBvZiBwZXJzaXN0ZW50bHkgbWFwcGVkIGdyYW50
cwo+PiArICogICAgICB0byBhIHZhbHVlIGxlc3MgdGhhbiBSSU5HX1NJWkUgKiBCTEtJRl9NQVhf
U0VHTUVOVFNfUEVSX1JFUVVFU1QuCj4+ICAgICoKPj4gICAgKi0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0gVmlydHVhbCBEZXZpY2UgUHJvcGVydGllcyAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
Cj4+ICAgICoKPj4gQEAgLTMwOCw2ICszMzUsMTAgQEAKPj4gICAgKiAoOCkgVGhlIGZyb250ZW5k
IGRyaXZlciBoYXMgdG8gYWxsb3cgdGhlIGJhY2tlbmQgZHJpdmVyIHRvIG1hcCBhbGwgZ3JhbnRz
Cj4+ICAgICogICAgIHdpdGggd3JpdGUgYWNjZXNzLCBldmVuIHdoZW4gdGhleSBzaG91bGQgYmUg
bWFwcGVkIHJlYWQtb25seSwgc2luY2UKPj4gICAgKiAgICAgZnVydGhlciByZXF1ZXN0cyBtYXkg
cmV1c2UgdGhlc2UgZ3JhbnRzIGFuZCByZXF1aXJlIHdyaXRlIHBlcm1pc3Npb25zLgo+PiArICog
KDkpIExpbnV4IGltcGxlbWVudGF0aW9uIGRvZXNuJ3QgaGF2ZSBhIGxpbWl0IG9uIHRoZSBtYXhp
bXVtIG51bWJlciBvZgo+PiArICogICAgIGdyYW50cyB0aGF0IGNhbiBiZSBwZXJzaXN0ZW50bHkg
bWFwcGVkIGluIHRoZSBmcm9udGVuZCBkcml2ZXIsIGJ1dAo+PiArICogICAgIGR1ZSB0byB0aGUg
ZnJvbnRlbnQgZHJpdmVyIGltcGxlbWVudGF0aW9uIGl0IHNob3VsZCBuZXZlciBiZSBiaWdnZXIK
ZnJvbnRlbnQgLT4gZnJvbnRlbmQ/CgotLQpNYXRzCj4+ICsgKiAgICAgdGhhbiBSSU5HX1NJWkUg
KiBCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JFUVVFU1QuCj4+ICAgICovCj4+ICAgCj4+ICAgLyoK
Pgo+Cj4gX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KPiBY
ZW4tZGV2ZWwgbWFpbGluZyBsaXN0Cj4gWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKPiBodHRwOi8v
bGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54
ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Tue Dec 04 16:02:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 16:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfuxK-0003ra-Q0; Tue, 04 Dec 2012 16:02:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfuxJ-0003rE-B3
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 16:02:45 +0000
Received: from [85.158.138.51:56991] by server-4.bemta-3.messagelabs.com id
	A3/97-30023-4AE1EB05; Tue, 04 Dec 2012 16:02:44 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1354636963!27491686!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6077 invoked from network); 4 Dec 2012 16:02:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 16:02:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16151994"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 16:01:58 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	16:01:58 +0000
Message-ID: <50BE1E75.2050303@citrix.com>
Date: Tue, 4 Dec 2012 17:01:57 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
In-Reply-To: <1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 7/7] Add a real top level configure script
 that calls the others
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/11/12 18:35, Matthew Fioravante wrote:
> --- /dev/null
> +++ b/configure.ac
> @@ -0,0 +1,14 @@
> +#                                               -*- Autoconf -*-
> +# Process this file with autoconf to produce a configure script.
> +
> +AC_PREREQ([2.67])
> +AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
> +AC_CONFIG_SRCDIR([./tools/libxl/libxl.c])
> +AC_CONFIG_FILES([./config/Tools.mk])

Why is config/Tools.mk included here, but not config/Stubdom.mk?
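
A sketch of what the AC_CONFIG_FILES line would presumably look like with both
generated makefiles listed, assuming config/Stubdom.mk is the intended second
output (the filename is inferred from the question above, not taken from the
patch):

```
# Sketch only: register both generated makefiles with the top-level configure
AC_CONFIG_FILES([./config/Tools.mk ./config/Stubdom.mk])
```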

> +AC_PREFIX_DEFAULT([/usr])
> +AC_CONFIG_AUX_DIR([.])
> +
> +AC_CONFIG_SUBDIRS([tools stubdom])

NetBSD is not able to build stubdoms, I guess running ./tools/configure
will still produce all the necessary configure foo to compile the tools
correctly?

> +
> +AC_OUTPUT()


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 16:05:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 16:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfv0D-00044m-DS; Tue, 04 Dec 2012 16:05:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1Tfv0C-00044e-CS
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 16:05:44 +0000
Received: from [85.158.143.99:8193] by server-3.bemta-4.messagelabs.com id
	77/58-06841-75F1EB05; Tue, 04 Dec 2012 16:05:43 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354637139!20991040!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22691 invoked from network); 4 Dec 2012 16:05:41 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 16:05:41 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 69e5_01ad_b6c6c163_ced9_451a_a60e_8a0dcba1cacd;
	Tue, 04 Dec 2012 11:05:26 -0500
Message-ID: <50BE1F3F.2070306@jhuapl.edu>
Date: Tue, 04 Dec 2012 11:05:19 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
	<50BE1E75.2050303@citrix.com>
In-Reply-To: <50BE1E75.2050303@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 7/7] Add a real top level configure script
 that calls the others
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2629517028782313021=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============2629517028782313021==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms060700060403010002080507"

This is a cryptographically signed message in MIME format.

--------------ms060700060403010002080507
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12/04/2012 11:01 AM, Roger Pau Monné wrote:
> On 29/11/12 18:35, Matthew Fioravante wrote:
>> --- /dev/null
>> +++ b/configure.ac
>> @@ -0,0 +1,14 @@
>> +#                                               -*- Autoconf -*-
>> +# Process this file with autoconf to produce a configure script.
>> +
>> +AC_PREREQ([2.67])
>> +AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
>> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>> +AC_CONFIG_SRCDIR([./tools/libxl/libxl.c])
>> +AC_CONFIG_FILES([./config/Tools.mk])
> Why is config/Tools.mk included here, but not config/Stubdom.mk?
It's a typo
>
>> +AC_PREFIX_DEFAULT([/usr])
>> +AC_CONFIG_AUX_DIR([.])
>> +
>> +AC_CONFIG_SUBDIRS([tools stubdom])
> NetBSD is not able to build stubdoms, I guess running ./tools/configure
> will still produce all the necessary configure foo to compile the tools
> correctly?
Yes you can just run the tools one directly to avoid stubdoms.
>> +
>> +AC_OUTPUT()
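
The workaround Matthew describes -- configuring just the tools and skipping
stubdoms -- amounts to something like the following; the directory layout is
the one quoted above, and the --prefix flag is purely illustrative:

```
# On a platform without stubdom support (e.g. NetBSD),
# run the tools' own configure instead of the top-level one:
cd tools
./configure --prefix=/usr   # illustrative flags
make
```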



--------------ms060700060403010002080507
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE2MDUxOVowIwYJKoZIhvcNAQkEMRYEFHl1Aqc5l7lPiHmH
dyAOC2LU7JevMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYBvQ10d0+o4XcEU9zrmeSzuNePzZFmEpxiu
2zcUuw1DCFP4WyJhGTKjlFq+w4xG1GGLNBxz2OOl29MXy2bETCDWASW5kW/L/Tc5i75zhu6T
9OUcA1kBbSclg5sQuAHJ+yxwaRTA5apsCoaTkjFdgotDAubb3gktE4TeMIVPwvPhlgAAAAAA
AA==
--------------ms060700060403010002080507--


--===============2629517028782313021==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2629517028782313021==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 16:14:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 16:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfv8v-0004ag-Jp; Tue, 04 Dec 2012 16:14:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfv8u-0004ab-AI
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 16:14:44 +0000
Received: from [85.158.143.99:26305] by server-2.bemta-4.messagelabs.com id
	47/BC-28922-3712EB05; Tue, 04 Dec 2012 16:14:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354637682!22778915!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10226 invoked from network); 4 Dec 2012 16:14:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 16:14:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16152398"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 16:13:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	16:13:04 +0000
Message-ID: <1354637583.15296.52.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 4 Dec 2012 16:13:03 +0000
In-Reply-To: <50BE1E75.2050303@citrix.com>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
	<50BE1E75.2050303@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 7/7] Add a real top level configure script
 that calls the others
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 16:01 +0000, Roger Pau Monne wrote:
> 
> > +AC_PREFIX_DEFAULT([/usr])
> > +AC_CONFIG_AUX_DIR([.])
> > +
> > +AC_CONFIG_SUBDIRS([tools stubdom])
> 
> NetBSD is not able to build stubdoms, I guess
> running ./tools/configure
> will still produce all the necessary configure foo to compile the
> tools correctly? 

Does the build system today avoid building stubdoms on BSD if you do
"make world" or "make dist" at the top level? Or do you always do make
tools-dist etc explicitly?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 16:23:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 16:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfvGq-0004l3-JG; Tue, 04 Dec 2012 16:22:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TfvGo-0004kx-QI
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 16:22:54 +0000
Received: from [85.158.139.211:48938] by server-12.bemta-5.messagelabs.com id
	F0/F7-02886-E532EB05; Tue, 04 Dec 2012 16:22:54 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1354638173!18608386!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15988 invoked from network); 4 Dec 2012 16:22:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 16:22:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16152651"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 16:22:53 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	16:22:53 +0000
Message-ID: <50BE235C.10205@citrix.com>
Date: Tue, 4 Dec 2012 17:22:52 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
	<50BE1E75.2050303@citrix.com>
	<1354637583.15296.52.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354637583.15296.52.camel@zakaz.uk.xensource.com>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 7/7] Add a real top level configure script
 that calls the others
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 17:13, Ian Campbell wrote:
> On Tue, 2012-12-04 at 16:01 +0000, Roger Pau Monne wrote:
>>
>>> +AC_PREFIX_DEFAULT([/usr])
>>> +AC_CONFIG_AUX_DIR([.])
>>> +
>>> +AC_CONFIG_SUBDIRS([tools stubdom])
>>
>> NetBSD is not able to build stubdoms, I guess
>> running ./tools/configure
>> will still produce all the necessary configure foo to compile the
>> tools correctly? 
> 
> Does the build system today avoid building stubdoms on BSD if you do
> "make world" or "make dist" at the top level? Or do you always do make
> tools-dist etc explicitly?

Yes, on NetBSD you have to build the tools explicitly; doing a make
world tries to build stubdoms, and fails.
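For reference, the explicit tools-only build described above looks roughly like the following; the target names are assumptions based on the Xen top-level Makefile of that era, not something stated in this thread:

```shell
# Hypothetical NetBSD build sketch: configure and build only the tools,
# avoiding "make world" (which would also attempt stubdoms and fail).
./configure
gmake dist-tools
gmake install-tools
```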


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 17:14:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfw4B-0007bm-8p; Tue, 04 Dec 2012 17:13:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1Tfw49-0007be-Ml
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:13:53 +0000
Received: from [193.109.254.147:38750] by server-1.bemta-14.messagelabs.com id
	1B/4C-25314-15F2EB05; Tue, 04 Dec 2012 17:13:53 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-11.tower-27.messagelabs.com!1354641230!2536142!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17394 invoked from network); 4 Dec 2012 17:13:52 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 17:13:52 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 22d7_02c7_d32a3273_f7cf_4db4_81d4_275d4b27bd2e;
	Tue, 04 Dec 2012 12:13:46 -0500
Message-ID: <50BE2F40.5080309@jhuapl.edu>
Date: Tue, 04 Dec 2012 12:13:36 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-8-git-send-email-matthew.fioravante@jhuapl.edu>
	<50BE1E75.2050303@citrix.com>
	<1354637583.15296.52.camel@zakaz.uk.xensource.com>
	<50BE235C.10205@citrix.com>
In-Reply-To: <50BE235C.10205@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 7/7] Add a real top level configure script
 that calls the others
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6483271655546460769=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============6483271655546460769==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms030305050007070208020408"

This is a cryptographically signed message in MIME format.

--------------ms030305050007070208020408
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12/04/2012 11:22 AM, Roger Pau Monn=C3=A9 wrote:
> On 04/12/12 17:13, Ian Campbell wrote:
>> On Tue, 2012-12-04 at 16:01 +0000, Roger Pau Monne wrote:
>>>> +AC_PREFIX_DEFAULT([/usr])
>>>> +AC_CONFIG_AUX_DIR([.])
>>>> +
>>>> +AC_CONFIG_SUBDIRS([tools stubdom])
>>> NetBSD is not able to build stubdoms, I guess
>>> running ./tools/configure
>>> will still produce all the necessary configure foo to compile the
>>> tools correctly?
>> Does the build system today avoid building stubdoms on BSD if you do
>> "make world" or "make dist" at the top level? Or do you always do make=

>> tools-dist etc explicitly?
> Yes, in NetBSD you have to build the tools explicitly, doing a make
> world tries to build stubdoms, and fails.
>
I have a potential solution to this problem coming in my next patch.


--------------ms030305050007070208020408
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE3MTMzNlowIwYJKoZIhvcNAQkEMRYEFAV2XtOy7f08Ph9/
KMZQ22N2ACifMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYCTV4WO4oqKi5qrdBeMbjTeLccxsNMzDS0r
j8n9DjODf+rykUUz+tS9C+NNA+FiM9oFPuZSTuhVp/0FldKMq7I6xLf204UNr8fXFq4ZqmUH
9FQ2xjL/xp4SyH1Sh8YDRjG4vv+bhdLZGdF/85jragwiaE5UazeffKf4ofRmoVgecQAAAAAA
AA==
--------------ms030305050007070208020408--


--===============6483271655546460769==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6483271655546460769==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 17:42:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwVG-00005s-1B; Tue, 04 Dec 2012 17:41:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TfwVE-00005n-Aj
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:41:52 +0000
Received: from [85.158.143.35:51739] by server-2.bemta-4.messagelabs.com id
	77/40-28922-FD53EB05; Tue, 04 Dec 2012 17:41:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354642909!12595621!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13748 invoked from network); 4 Dec 2012 17:41:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 17:41:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216367552"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 17:41:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 12:41:44 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TfwV6-0002P2-0V;
	Tue, 04 Dec 2012 17:41:44 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Tue, 4 Dec 2012 17:41:43 +0000
Message-ID: <1354642903-4545-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH RFC] tools: install under /usr/local by default.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the de facto (or FHS mandated?) standard location for software
built from source, in order to avoid clashing with packaged software
which is installed under /usr/bin etc.

I think there is benefit in having Xen's install behave more like the
majority of other OSS software out there.

The major downside here is in the transition from 4.2 to 4.3, where
people who have built from source will inevitably discover breakage:
because 4.3 no longer overwrites stuff in /usr like it used to, they
pick up old stale bits from /usr instead of new stuff from /usr/local.

Packages will use ./configure --prefix=/usr or whatever helper macro
their package manager gives them. I have confirmed that doing this
results in the same list of installed files as before this patch was
applied.
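
For illustration, a packaging recipe that keeps the old /usr layout would look something like this (a hedged sketch; the exact flag set depends on the distro, which typically wraps it in a helper macro such as rpm's %configure):

```shell
# Sketch of a packager restoring the pre-patch default prefix;
# --prefix is the only default this patch changes.
./configure --prefix=/usr
make
make install DESTDIR="$PKGROOT"   # $PKGROOT: hypothetical staging dir
```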

Note that this does not currently affect docs or stubdoms, so they
still end up under /usr. There are proposals to use configure there
too, at which point I would propose a similar patch and these would
also move as expected (depending on the sequencing I may end up
folding that into this patch).

The hypervisor remains in /boot/ and there is no intention to move it.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/configure    |    2 --
 tools/configure.ac |    1 -
 2 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/tools/configure b/tools/configure
index f9d1925..463f79d 100755
--- a/tools/configure
+++ b/tools/configure
@@ -558,7 +558,6 @@ PACKAGE_BUGREPORT='xen-devel@lists.xen.org'
 PACKAGE_URL='http://www.xen.org/'
 
 ac_unique_file="libxl/libxl.c"
-ac_default_prefix=/usr
 # Factoring default headers for most tests.
 ac_includes_default="\
 #include <stdio.h>
@@ -2145,7 +2144,6 @@ ac_config_files="$ac_config_files ../config/Tools.mk"
 
 ac_config_headers="$ac_config_headers config.h"
 
-
 ac_aux_dir=
 for ac_dir in . "$srcdir"/.; do
   if test -f "$ac_dir/install-sh"; then
diff --git a/tools/configure.ac b/tools/configure.ac
index 586313d..ccb2ae4 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -7,7 +7,6 @@ AC_INIT([Xen Hypervisor], m4_esyscmd([../version.sh ../xen/Makefile]),
 AC_CONFIG_SRCDIR([libxl/libxl.c])
 AC_CONFIG_FILES([../config/Tools.mk])
 AC_CONFIG_HEADERS([config.h])
-AC_PREFIX_DEFAULT([/usr])
 AC_CONFIG_AUX_DIR([.])
 
 # Check if CFLAGS, LDFLAGS, LIBS, CPPFLAGS or CPP is set and print a warning
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 17:43:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:43:22 +0000
From xen-devel-bounces@lists.xen.org Tue Dec 04 17:43:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwWM-0000Ct-LG; Tue, 04 Dec 2012 17:43:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Huang2@amd.com>) id 1TfwWJ-0000Br-UV
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:43:00 +0000
Received: from [85.158.139.83:44236] by server-7.bemta-5.messagelabs.com id
	B4/4D-23096-3263EB05; Tue, 04 Dec 2012 17:42:59 +0000
X-Env-Sender: Wei.Huang2@amd.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354642976!27838372!1
X-Originating-IP: [216.32.180.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7163 invoked from network); 4 Dec 2012 17:42:58 -0000
Received: from co1ehsobe003.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.186)
	by server-13.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	4 Dec 2012 17:42:58 -0000
Received: from mail193-co1-R.bigfish.com (10.243.78.237) by
	CO1EHSOBE009.bigfish.com (10.243.66.72) with Microsoft SMTP Server id
	14.1.225.23; Tue, 4 Dec 2012 17:42:56 +0000
Received: from mail193-co1 (localhost [127.0.0.1])	by
	mail193-co1-R.bigfish.com (Postfix) with ESMTP id EA284203C3;
	Tue,  4 Dec 2012 17:42:55 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(zz98dI9371I542I1432I4015Izz1de0h1202h1d1ah1d2ahzz8275bhz2dh668h839h944hd25hf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h14ddh1504h1537h153bh15d0h162dh1631h1155h)
Received: from mail193-co1 (localhost.localdomain [127.0.0.1]) by mail193-co1
	(MessageSwitch) id 1354642973612851_29947;
	Tue,  4 Dec 2012 17:42:53 +0000 (UTC)
Received: from CO1EHSMHS006.bigfish.com (unknown [10.243.78.243])	by
	mail193-co1.bigfish.com (Postfix) with ESMTP id 888DD300049;
	Tue,  4 Dec 2012 17:42:53 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS006.bigfish.com (10.243.66.16) with Microsoft SMTP Server id
	14.1.225.23; Tue, 4 Dec 2012 17:42:38 +0000
X-WSS-ID: 0MEIP6Y-01-0AF-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 20EE4102808C;	Tue,  4 Dec 2012 11:42:33 -0600 (CST)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Tue, 4 Dec 2012 11:42:43 -0600
Received: from SAUSEXDAG04.amd.com ([fe80::9143:6575:e649:e862]) by
	sausexdag03.amd.com ([fe80::85b5:3838:d8b4:20ba%19]) with mapi id
	14.02.0318.004; Tue, 4 Dec 2012 11:42:33 -0600
From: "Huang2, Wei" <Wei.Huang2@amd.com>
To: Jan Beulich <JBeulich@suse.com>, Wei Wang <weiwang.dd@gmail.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Thread-Topic: Ping: [PATCH] IOMMU/ATS: fix maximum queue depth calculation
Thread-Index: AQHN0geGCWmCfBacKUegsGml5UF2oZgI5bTg
Date: Tue, 4 Dec 2012 17:42:33 +0000
Message-ID: <4400B41FB768044EA720935D0808176C219932E3@sausexdag04.amd.com>
References: <50BDD9E702000078000ADA57@nat28.tlf.novell.com>
In-Reply-To: <50BDD9E702000078000ADA57@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.236.48.197]
MIME-Version: 1.0
X-OriginatorOrg: amd.com
Cc: "Ostrovsky, Boris" <Boris.Ostrovsky@amd.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Ping: [PATCH] IOMMU/ATS: fix maximum queue depth
	calculation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry for being late. Here it is.

Acked-by: Wei Huang <wei.huang2@amd.com>

Thanks,
-Wei
-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Tuesday, December 04, 2012 4:09 AM
To: Huang2, Wei; Wei Wang; xiantao.zhang@intel.com
Cc: Ostrovsky, Boris; xen-devel
Subject: Ping: [PATCH] IOMMU/ATS: fix maximum queue depth calculation

Anyone? (I'd really want an ack from both Intel - who originally contributed the ATS code - and AMD - due to the adjustment of their later re-arrangements.)

Jan

>>> On 28.11.12 at 12:32, Jan Beulich wrote:
> The capabilities register field is a 5-bit value, and the 5 bits all 
> being zero actually means 32 entries.
> 
> Under the assumption that amd_iommu_flush_iotlb() really just tried to 
> correct for the miscalculation above when adding 32 to the value, that 
> adjustment is also being removed.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/drivers/passthrough/amd/iommu_cmd.c
> +++ b/xen/drivers/passthrough/amd/iommu_cmd.c
> @@ -321,7 +321,7 @@ void amd_iommu_flush_iotlb(struct pci_de
>  
>      req_id = get_dma_requestor_id(iommu->seg, bdf);
>      queueid = req_id;
> -    maxpend = (ats_pdev->ats_queue_depth + 32) & 0xff;
> +    maxpend = ats_pdev->ats_queue_depth & 0xff;
>  
>      /* send INVALIDATE_IOTLB_PAGES command */
>      spin_lock_irqsave(&iommu->lock, flags);
> --- a/xen/drivers/passthrough/ats.h
> +++ b/xen/drivers/passthrough/ats.h
> @@ -28,7 +28,7 @@ struct pci_ats_dev {
>  
>  #define ATS_REG_CAP    4
>  #define ATS_REG_CTL    6
> -#define ATS_QUEUE_DEPTH_MASK     0xF
> +#define ATS_QUEUE_DEPTH_MASK     0x1f
>  #define ATS_ENABLE               (1<<15)
>  
>  extern struct list_head ats_devices;
> --- a/xen/drivers/passthrough/x86/ats.c
> +++ b/xen/drivers/passthrough/x86/ats.c
> @@ -94,6 +94,8 @@ int enable_ats_device(int seg, int bus, 
>          value = pci_conf_read16(seg, bus, PCI_SLOT(devfn),
>                                  PCI_FUNC(devfn), pos + ATS_REG_CAP);
>          pdev->ats_queue_depth = value & ATS_QUEUE_DEPTH_MASK;
> +        if ( !pdev->ats_queue_depth )
> +            pdev->ats_queue_depth = ATS_QUEUE_DEPTH_MASK + 1;
>          list_add(&pdev->list, &ats_devices);
>      }
>  
> 
> 
> 






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 17:43:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:43:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwWx-0000K9-3Y; Tue, 04 Dec 2012 17:43:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwWv-0000Jw-Se
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:43:38 +0000
Received: from [85.158.143.99:62773] by server-1.bemta-4.messagelabs.com id
	28/1B-27934-9463EB05; Tue, 04 Dec 2012 17:43:37 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354643010!21654500!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26863 invoked from network); 4 Dec 2012 17:43:35 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 17:43:35 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 50ad_061f_2a0e3cb4_047b_4e2a_9198_7bf56ef18651;
	Tue, 04 Dec 2012 12:43:09 -0500
Message-ID: <50BE3625.8010503@jhuapl.edu>
Date: Tue, 04 Dec 2012 12:43:01 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-3-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1354286955-23900-3-git-send-email-dgdegra@tycho.nsa.gov>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/9] stubdom/vtpm: Support locality field
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8521078541022201897=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============8521078541022201897==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms000602010104040900050706"

This is a cryptographically signed message in MIME format.

--------------ms000602010104040900050706
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Looks good, will need to test.

Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>

On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
> The vTPM protocol now contains a field allowing the locality of a
> command to be specified; pass this to the TPM when processing a packet.
> While the locality is not currently checked for validity, a binding
> between locality and some distinguishing feature of the client domain
> (such as the XSM label) will need to be defined in order to properly
> support a multi-client vTPM.
>
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>   stubdom/Makefile            |  1 +
>   stubdom/vtpm-locality.patch | 50 +++++++++++++++++++++++++++++++++++++++++++++++++
>   stubdom/vtpm/vtpm.c         |  2 +-
>   3 files changed, 52 insertions(+), 1 deletion(-)
>   create mode 100644 stubdom/vtpm-locality.patch
>
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index fcc608e..683bc51 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -207,6 +207,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>   	tar xzf $<
>   	mv tpm_emulator-$(TPMEMU_VERSION) $@
>   	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
> +	patch -d $@ -p1 < vtpm-locality.patch
>   	mkdir $@/build
>   	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>   	touch $@
> diff --git a/stubdom/vtpm-locality.patch b/stubdom/vtpm-locality.patch
> new file mode 100644
> index 0000000..8ab7dea
> --- /dev/null
> +++ b/stubdom/vtpm-locality.patch
> @@ -0,0 +1,50 @@
> +diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
> +index 60bbb90..f8f7f0f 100644
> +--- a/tpm/tpm_capability.c
> ++++ b/tpm/tpm_capability.c
> +@@ -949,6 +949,8 @@ static TPM_RESULT set_vendor(UINT32 subCap, BYTE *setValue,
> +                              UINT32 setValueSize, BOOL ownerAuth,
> +                              BOOL deactivated, BOOL disabled)
> + {
> ++  if (tpmData.stany.flags.localityModifier != 8)
> ++    return TPM_BAD_PARAMETER;
> +   /* set the capability area with the specified data, on failure
> +      deactivate the TPM */
> +   switch (subCap) {
> +diff --git a/tpm/tpm_cmd_handler.c b/tpm/tpm_cmd_handler.c
> +index 288d1ce..9e1cfb4 100644
> +--- a/tpm/tpm_cmd_handler.c
> ++++ b/tpm/tpm_cmd_handler.c
> +@@ -4132,7 +4132,7 @@ void tpm_emulator_shutdown()
> +   tpm_extern_release();
> + }
> +
> +-int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size)
> ++int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size, int locality)
> + {
> +   TPM_REQUEST req;
> +   TPM_RESPONSE rsp;
> +@@ -4140,7 +4140,9 @@ int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint3
> +   UINT32 len;
> +   BOOL free_out;
> +
> +-  debug("tpm_handle_command()");
> ++  debug("tpm_handle_command(%d)", locality);
> ++  if (locality != -1)
> ++    tpmData.stany.flags.localityModifier = locality;
> +
> +   /* we need the whole packet at once, otherwise unmarshalling will fail */
> +   if (tpm_unmarshal_TPM_REQUEST((uint8_t**)&in, &in_size, &req) != 0) {
> +diff --git a/tpm/tpm_emulator.h b/tpm/tpm_emulator.h
> +index eed749e..4c228bd 100644
> +--- a/tpm/tpm_emulator.h
> ++++ b/tpm/tpm_emulator.h
> +@@ -59,7 +59,7 @@ void tpm_emulator_shutdown(void);
> +  * its usage. In case of an error, all internally allocated memory
> +  * is released and the the state of out and out_size is unspecified.
> +  */
> +-int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size);
> ++int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size, int locality);
> +
> + #endif /* _TPM_EMULATOR_H_ */
> +
> diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
> index 71aef78..eb7912f 100644
> --- a/stubdom/vtpm/vtpm.c
> +++ b/stubdom/vtpm/vtpm.c
> @@ -183,7 +183,7 @@ static void main_loop(void) {
>            }
>            /* If not disabled, do the command */
>            else {
> -            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
> +            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len, tpmcmd->locality)) != 0) {
>                  error("tpm_handle_command() failed");
>                  create_error_response(tpmcmd, TPM_FAIL);
>               }



--------------ms000602010104040900050706
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE3NDMwMVowIwYJKoZIhvcNAQkEMRYEFJcLrPOWN4ShdHGX
I5E+c8VXreN0MGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYCYScz1hk8O+Fa9IYWF+1nyeAWhqIzuNG0y
RQ7LEapuurYMyFxkWQ6ptRJ/lt99rQtfJL59PIca2r7fJQrQyGooKLEDJV7zidHq0+qituhf
jPUdyYAgjm1sQHzZnIZkttxtHLJOftQJgpZFgTPtaG2oJdujJn30B5/1FvDmIk4KTwAAAAAA
AA==
--------------ms000602010104040900050706--


--===============8521078541022201897==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8521078541022201897==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 17:45:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:45:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwYZ-0000it-K0; Tue, 04 Dec 2012 17:45:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwYY-0000ig-FV
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:45:18 +0000
Received: from [85.158.139.211:38915] by server-6.bemta-5.messagelabs.com id
	FB/43-19321-DA63EB05; Tue, 04 Dec 2012 17:45:17 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-13.tower-206.messagelabs.com!1354643113!14827906!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9576 invoked from network); 4 Dec 2012 17:45:15 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 17:45:15 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 69e4_0aaa_34690c78_40e3_4848_8d73_f72f3de8c3d8;
	Tue, 04 Dec 2012 12:44:53 -0500
Message-ID: <50BE368F.5060605@jhuapl.edu>
Date: Tue, 04 Dec 2012 12:44:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-8-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1354286955-23900-8-git-send-email-dgdegra@tycho.nsa.gov>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 7/9] stubdom/grub: send kernel measurements
	to vTPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7380186979353865438=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============7380186979353865438==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms000208000406030507090206"

This is a cryptographically signed message in MIME format.

--------------ms000208000406030507090206
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable


Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>

On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
> This allows a domU with an arbitrary kernel and initrd to take advantage
> of the static root of trust provided by a vTPM.
>
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>   stubdom/Makefile        |  2 +-
>   stubdom/grub/Makefile   |  1 +
>   stubdom/grub/kexec.c    | 54 +++++++++++++++++++++++++++++++++++++++++++++++++
>   stubdom/grub/minios.cfg |  1 +
>   4 files changed, 57 insertions(+), 1 deletion(-)
>
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index 4744b79..790b547 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -399,7 +399,7 @@ grub-upstream: grub-$(GRUB_VERSION).tar.gz
>   	done
>  
>   .PHONY: grub
> -grub: grub-upstream $(CROSS_ROOT)
> +grub: cross-polarssl grub-upstream $(CROSS_ROOT)
>   	mkdir -p grub-$(XEN_TARGET_ARCH)
>   	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
>  
> diff --git a/stubdom/grub/Makefile b/stubdom/grub/Makefile
> index d6e3a1e..6bd2c4c 100644
> --- a/stubdom/grub/Makefile
> +++ b/stubdom/grub/Makefile
> @@ -60,6 +60,7 @@ NETBOOT_SOURCES:=$(addprefix netboot/,$(NETBOOT_SOURCES))
>   $(BOOT): DEF_CPPFLAGS+=-D__ASSEMBLY__
>  
>   PV_GRUB_SOURCES = kexec.c mini-os.c
> +PV_GRUB_SOURCES += ../polarssl-$(XEN_TARGET_ARCH)/library/sha1.o
>  
>   SOURCES = $(NETBOOT_SOURCES) $(STAGE2_SOURCES) $(PV_GRUB_SOURCES)
>  
> diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c
> index b21c91a..cef357e 100644
> --- a/stubdom/grub/kexec.c
> +++ b/stubdom/grub/kexec.c
> @@ -28,7 +28,9 @@
>   #include <blkfront.h>
>   #include <netfront.h>
>   #include <fbfront.h>
> +#include <tpmfront.h>
>   #include <shared.h>
> +#include <byteswap.h>
>  
>   #include "mini-os.h"
>  
> @@ -54,6 +56,22 @@ static unsigned long allocated;
>   int pin_table(xc_interface *xc_handle, unsigned int type, unsigned long mfn,
>                 domid_t dom);
>  
> +#define TPM_TAG_RQU_COMMAND 0xC1
> +#define TPM_ORD_Extend 20
> +
> +struct pcr_extend_cmd {
> +	uint16_t tag;
> +	uint32_t size;
> +	uint32_t ord;
> +
> +	uint32_t pcr;
> +	unsigned char hash[20];
> +} __attribute__((packed));
> +
> +/* Not imported from polarssl's header since the prototype unhelpfully defines
> + * the input as unsigned char, which causes pointer type mismatches */
> +void sha1(const void *input, size_t ilen, unsigned char output[20]);
> +
>   /* We need mfn to appear as target_pfn, so exchange with the MFN there */
>   static void do_exchange(struct xc_dom_image *dom, xen_pfn_t target_pfn, xen_pfn_t source_mfn)
>   {
> @@ -117,6 +135,40 @@ int kexec_allocate(struct xc_dom_image *dom, xen_vaddr_t up_to)
>       return 0;
>   }
>  
> +static void tpm_hash2pcr(struct xc_dom_image *dom, char *cmdline)
> +{
> +	struct tpmfront_dev* tpm = init_tpmfront(NULL);
> +	uint8_t *resp;
> +	size_t resplen = 0;
> +	struct pcr_extend_cmd cmd;
> +
> +	/* If all guests have access to a vTPM, it may be useful to replace this
> +	 * with ASSERT(tpm) to prevent configuration errors from allowing a guest
> +	 * to boot without a TPM (or with a TPM that has not been sent any
> +	 * measurements, which could allow forging the measurements).
> +	 */
> +	if (!tpm)
> +		return;
> +
> +	cmd.tag = bswap_16(TPM_TAG_RQU_COMMAND);
> +	cmd.size = bswap_32(sizeof(cmd));
> +	cmd.ord = bswap_32(TPM_ORD_Extend);
> +	cmd.pcr = bswap_32(4); // PCR #4 for kernel
> +	sha1(dom->kernel_blob, dom->kernel_size, cmd.hash);
> +
> +	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
> +
> +	cmd.pcr = bswap_32(5); // PCR #5 for cmdline
> +	sha1(cmdline, strlen(cmdline), cmd.hash);
> +	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
> +
> +	cmd.pcr = bswap_32(5); // PCR #5 for initrd
> +	sha1(dom->ramdisk_blob, dom->ramdisk_size, cmd.hash);
> +	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
> +
> +	shutdown_tpmfront(tpm);
> +}
> +
>   void kexec(void *kernel, long kernel_size, void *module, long module_size, char *cmdline, unsigned long flags)
>   {
>       struct xc_dom_image *dom;
> @@ -151,6 +203,8 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
>       dom->console_evtchn = start_info.console.domU.evtchn;
>       dom->xenstore_evtchn = start_info.store_evtchn;
>  
> +    tpm_hash2pcr(dom, cmdline);
> +
>       if ( (rc = xc_dom_boot_xen_init(dom, xc_handle, domid)) != 0 ) {
>           grub_printf("xc_dom_boot_xen_init returned %d\n", rc);
>           errnum = ERR_BOOT_FAILURE;
> diff --git a/stubdom/grub/minios.cfg b/stubdom/grub/minios.cfg
> index 40cfa68..8df4909 100644
> --- a/stubdom/grub/minios.cfg
> +++ b/stubdom/grub/minios.cfg
> @@ -1,2 +1,3 @@
>   CONFIG_START_NETWORK=n
>   CONFIG_SPARSE_BSS=n
> +CONFIG_TPMFRONT=y



--------------ms000208000406030507090206
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE3NDQ0N1owIwYJKoZIhvcNAQkEMRYEFBY85HZQSueIcxRR
ijy7T1JdlwjiMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYAZxdxdOpcnr31E9pDF6ZD45xVglIekHz5U
vGCIT/Xh32uGYbFpVkRABMBbjHow/54DZ1ZADe04zKxsPeNfAAgJEf8VtXcRy/vT0DIEhNyn
Z0PhGAq2cYkQ6gOJwHbKWnUzBqJbfPAhkMb/XlJGKusdpkETu3s93rJfYgR3vHhEJQAAAAAA
AA==
--------------ms000208000406030507090206--


--===============7380186979353865438==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7380186979353865438==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 17:48:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwbB-0000wc-7v; Tue, 04 Dec 2012 17:48:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1Tfwb8-0000wQ-Qf
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:47:59 +0000
Received: from [85.158.138.51:41755] by server-1.bemta-3.messagelabs.com id
	15/DD-12169-D473EB05; Tue, 04 Dec 2012 17:47:57 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354643274!19362947!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19967 invoked from network); 4 Dec 2012 17:47:56 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 17:47:56 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6442_03d6_e3ffdbc4_49a5_43c9_9d67_f59f1df41214;
	Tue, 04 Dec 2012 12:47:35 -0500
Message-ID: <50BE3730.2030509@jhuapl.edu>
Date: Tue, 04 Dec 2012 12:47:28 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-10-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1354286955-23900-10-git-send-email-dgdegra@tycho.nsa.gov>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] stubdom/vtpm: Add PCR pass-through to
	hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0168533734265684402=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============0168533734265684402==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms090009020705090302010700"

This is a cryptographically signed message in MIME format.

--------------ms090009020705090302010700
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

So this always maps a fixed set of PCRs? Is there any use case where the
user might want the PCR mappings to be configurable? Might they want to
disable this feature to disallow hardware PCR access in the VM?

Also, can you update the docs/misc/vtpm.txt documentation with a note
about this and the grub feature?

On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
> This allows the hardware TPM's PCRs to be accessed from a vTPM for
> debugging and as a simple alternative to a deep quote in situations
> where the integrity of the vTPM's own TCB is not in question.
>
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>   stubdom/Makefile                   |  1 +
>   stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
>   stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
>   3 files changed, 112 insertions(+)
>   create mode 100644 stubdom/vtpm-pcr-passthrough.patch
>
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index 790b547..03ec07e 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -210,6 +210,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>   	patch -d $@ -p1 < vtpm-locality.patch
>   	patch -d $@ -p1 < vtpm-bufsize.patch
>   	patch -d $@ -p1 < vtpm-locality5-pcrs.patch
> +	patch -d $@ -p1 < vtpm-pcr-passthrough.patch
>   	mkdir $@/build
>   	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>   	touch $@
> diff --git a/stubdom/vtpm-pcr-passthrough.patch b/stubdom/vtpm-pcr-passthrough.patch
> new file mode 100644
> index 0000000..4e898a5
> --- /dev/null
> +++ b/stubdom/vtpm-pcr-passthrough.patch
> @@ -0,0 +1,73 @@
> +diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
> +index f8f7f0f..885af52 100644
> +--- a/tpm/tpm_capability.c
> ++++ b/tpm/tpm_capability.c
> +@@ -72,7 +72,7 @@ static TPM_RESULT cap_property(UINT32 subCapSize, BYTE *subCap,
> +   switch (property) {
> +     case TPM_CAP_PROP_PCR:
> +       debug("[TPM_CAP_PROP_PCR]");
> +-      return return_UINT32(respSize, resp, TPM_NUM_PCR);
> ++      return return_UINT32(respSize, resp, TPM_NUM_PCR_V);
> +
> +     case TPM_CAP_PROP_DIR:
> +       debug("[TPM_CAP_PROP_DIR]");
> +diff --git a/tpm/tpm_emulator_extern.h b/tpm/tpm_emulator_extern.h
> +index 36a32dd..77ed595 100644
> +--- a/tpm/tpm_emulator_extern.h
> ++++ b/tpm/tpm_emulator_extern.h
> +@@ -56,6 +56,7 @@ void (*tpm_free)(/*const*/ void *ptr);
> + /* random numbers */
> +
> + void (*tpm_get_extern_random_bytes)(void *buf, size_t nbytes);
> ++void tpm_get_extern_pcr(int index, void *buf);
> +
> + /* usec since last call */
> +
> +diff --git a/tpm/tpm_integrity.c b/tpm/tpm_integrity.c
> +index 66ece83..f3c4196 100644
> +--- a/tpm/tpm_integrity.c
> ++++ b/tpm/tpm_integrity.c
> +@@ -56,8 +56,11 @@ TPM_RESULT TPM_Extend(TPM_PCRINDEX pcrNum, TPM_DIGEST *inDigest,
> + TPM_RESULT TPM_PCRRead(TPM_PCRINDEX pcrIndex, TPM_PCRVALUE *outDigest)
> + {
> +   info("TPM_PCRRead()");
> +-  if (pcrIndex >= TPM_NUM_PCR) return TPM_BADINDEX;
> +-  memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
> ++  if (pcrIndex >= TPM_NUM_PCR_V) return TPM_BADINDEX;
> ++  if (pcrIndex >= TPM_NUM_PCR)
> ++	tpm_get_extern_pcr(pcrIndex - TPM_NUM_PCR, outDigest);
> ++  else
> ++    memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
> +   return TPM_SUCCESS;
> + }
> +
> +@@ -138,12 +141,15 @@ TPM_RESULT tpm_compute_pcr_digest(TPM_PCR_SELECTION *pcrSelection,
> +   BYTE *buf, *ptr;
> +   info("tpm_compute_pcr_digest()");
> +   /* create PCR composite */
> +-  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR
> ++  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR_V
> +       || pcrSelection->sizeOfSelect == 0) return TPM_INVALID_PCR_INFO;
> +   for (i = 0, j = 0; i < pcrSelection->sizeOfSelect * 8; i++) {
> +     /* is PCR number i selected ? */
> +     if (pcrSelection->pcrSelect[i >> 3] & (1 << (i & 7))) {
> +-      memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
> ++      if (i >= TPM_NUM_PCR)
> ++        tpm_get_extern_pcr(i - TPM_NUM_PCR, &comp.pcrValue[j++]);
> ++      else
> ++        memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
> +     }
> +   }
> +   memcpy(&comp.select, pcrSelection, sizeof(TPM_PCR_SELECTION));
> +diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
> +index 08cef1e..8c97fc5 100644
> +--- a/tpm/tpm_structures.h
> ++++ b/tpm/tpm_structures.h
> +@@ -677,6 +677,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
> +  * Number of PCRs of the TPM (must be a multiple of eight)
> +  */
> + #define TPM_NUM_PCR 32
> ++#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)
> +
> + /*
> +  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
> index 7eae98b..ed058fb 100644
> --- a/stubdom/vtpm/vtpm_cmd.c
> +++ b/stubdom/vtpm/vtpm_cmd.c
> @@ -134,6 +134,44 @@ egress:
>  
>   }
>  
> +extern struct tpmfront_dev* tpmfront_dev;
> +void tpm_get_extern_pcr(int index, void *buf) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Ask the real tpm for the PCR value */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm command */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, index));
> +
> +   /* Send cmd, wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead()");
> +
> +   //Get the PCR value out
> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, buf, 20), ERR_MALFORMED);
> +
> +   goto egress;
> +abort_egress:
> +   memset(buf, 0x20, 20);
> +egress:
> +   free(cmdbuf);
> +}
> +
>   TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
>   {
>      TPM_RESULT status = TPM_SUCCESS;


--------------ms090009020705090302010700
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE3NDcyOFowIwYJKoZIhvcNAQkEMRYEFOZHczjv9pScTvxB
MNFgsUq8ucbWMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYB6dRzn2+ZzqPlvvB2LNG6ngmbXgzS2ceFJ
oLPLtsD2nlFICePaeZ8/wGnMfjx/cycHT8ZylZnvmwmXtuvPgr9+copMCLJTHmRVENitNwDm
23aDQmloi23BeFz8Mfz57oSPpUMU7WsKg5Rbd1WxAJ7lu2WtY/gcZTJmf4P0dILJGwAAAAAA
AA==
--------------ms090009020705090302010700--


--===============0168533734265684402==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0168533734265684402==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 17:48:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwbB-0000wc-7v; Tue, 04 Dec 2012 17:48:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1Tfwb8-0000wQ-Qf
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:47:59 +0000
Received: from [85.158.138.51:41755] by server-1.bemta-3.messagelabs.com id
	15/DD-12169-D473EB05; Tue, 04 Dec 2012 17:47:57 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354643274!19362947!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19967 invoked from network); 4 Dec 2012 17:47:56 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 17:47:56 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6442_03d6_e3ffdbc4_49a5_43c9_9d67_f59f1df41214;
	Tue, 04 Dec 2012 12:47:35 -0500
Message-ID: <50BE3730.2030509@jhuapl.edu>
Date: Tue, 04 Dec 2012 12:47:28 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-10-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1354286955-23900-10-git-send-email-dgdegra@tycho.nsa.gov>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] stubdom/vtpm: Add PCR pass-through to
	hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0168533734265684402=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


So this maps a fixed set of PCRs always? Is there any use case where the
user might want the PCR mappings to be configurable? Might they want to
disable this feature to disallow hardware PCR access in the VM?

Also, can you update the docs/misc/vtpm.txt documentation with a note
about this and the grub feature?

On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
> This allows the hardware TPM's PCRs to be accessed from a vTPM for
> debugging and as a simple alternative to a deep quote in situations
> where the integrity of the vTPM's own TCB is not in question.
>
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>   stubdom/Makefile                   |  1 +
>   stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
>   stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
>   3 files changed, 112 insertions(+)
>   create mode 100644 stubdom/vtpm-pcr-passthrough.patch
>
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index 790b547..03ec07e 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -210,6 +210,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>   	patch -d $@ -p1 < vtpm-locality.patch
>   	patch -d $@ -p1 < vtpm-bufsize.patch
>   	patch -d $@ -p1 < vtpm-locality5-pcrs.patch
> +	patch -d $@ -p1 < vtpm-pcr-passthrough.patch
>   	mkdir $@/build
>   	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>   	touch $@
> diff --git a/stubdom/vtpm-pcr-passthrough.patch b/stubdom/vtpm-pcr-passthrough.patch
> new file mode 100644
> index 0000000..4e898a5
> --- /dev/null
> +++ b/stubdom/vtpm-pcr-passthrough.patch
> @@ -0,0 +1,73 @@
> +diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
> +index f8f7f0f..885af52 100644
> +--- a/tpm/tpm_capability.c
> ++++ b/tpm/tpm_capability.c
> +@@ -72,7 +72,7 @@ static TPM_RESULT cap_property(UINT32 subCapSize, BYTE *subCap,
> +   switch (property) {
> +     case TPM_CAP_PROP_PCR:
> +       debug("[TPM_CAP_PROP_PCR]");
> +-      return return_UINT32(respSize, resp, TPM_NUM_PCR);
> ++      return return_UINT32(respSize, resp, TPM_NUM_PCR_V);
> +
> +     case TPM_CAP_PROP_DIR:
> +       debug("[TPM_CAP_PROP_DIR]");
> +diff --git a/tpm/tpm_emulator_extern.h b/tpm/tpm_emulator_extern.h
> +index 36a32dd..77ed595 100644
> +--- a/tpm/tpm_emulator_extern.h
> ++++ b/tpm/tpm_emulator_extern.h
> +@@ -56,6 +56,7 @@ void (*tpm_free)(/*const*/ void *ptr);
> + /* random numbers */
> +
> + void (*tpm_get_extern_random_bytes)(void *buf, size_t nbytes);
> ++void tpm_get_extern_pcr(int index, void *buf);
> +
> + /* usec since last call */
> +
> +diff --git a/tpm/tpm_integrity.c b/tpm/tpm_integrity.c
> +index 66ece83..f3c4196 100644
> +--- a/tpm/tpm_integrity.c
> ++++ b/tpm/tpm_integrity.c
> +@@ -56,8 +56,11 @@ TPM_RESULT TPM_Extend(TPM_PCRINDEX pcrNum, TPM_DIGEST *inDigest,
> + TPM_RESULT TPM_PCRRead(TPM_PCRINDEX pcrIndex, TPM_PCRVALUE *outDigest)
> + {
> +   info("TPM_PCRRead()");
> +-  if (pcrIndex >= TPM_NUM_PCR) return TPM_BADINDEX;
> +-  memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
> ++  if (pcrIndex >= TPM_NUM_PCR_V) return TPM_BADINDEX;
> ++  if (pcrIndex >= TPM_NUM_PCR)
> ++	tpm_get_extern_pcr(pcrIndex - TPM_NUM_PCR, outDigest);
> ++  else
> ++    memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
> +   return TPM_SUCCESS;
> + }
> +
> +@@ -138,12 +141,15 @@ TPM_RESULT tpm_compute_pcr_digest(TPM_PCR_SELECTION *pcrSelection,
> +   BYTE *buf, *ptr;
> +   info("tpm_compute_pcr_digest()");
> +   /* create PCR composite */
> +-  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR
> ++  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR_V
> +       || pcrSelection->sizeOfSelect == 0) return TPM_INVALID_PCR_INFO;
> +   for (i = 0, j = 0; i < pcrSelection->sizeOfSelect * 8; i++) {
> +     /* is PCR number i selected ? */
> +     if (pcrSelection->pcrSelect[i >> 3] & (1 << (i & 7))) {
> +-      memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
> ++      if (i >= TPM_NUM_PCR)
> ++        tpm_get_extern_pcr(i - TPM_NUM_PCR, &comp.pcrValue[j++]);
> ++      else
> ++        memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
> +     }
> +   }
> +   memcpy(&comp.select, pcrSelection, sizeof(TPM_PCR_SELECTION));
> +diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
> +index 08cef1e..8c97fc5 100644
> +--- a/tpm/tpm_structures.h
> ++++ b/tpm/tpm_structures.h
> +@@ -677,6 +677,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
> +  * Number of PCRs of the TPM (must be a multiple of eight)
> +  */
> + #define TPM_NUM_PCR 32
> ++#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)
> +
> + /*
> +  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
> index 7eae98b..ed058fb 100644
> --- a/stubdom/vtpm/vtpm_cmd.c
> +++ b/stubdom/vtpm/vtpm_cmd.c
> @@ -134,6 +134,44 @@ egress:
>
>   }
>
> +extern struct tpmfront_dev* tpmfront_dev;
> +void tpm_get_extern_pcr(int index, void *buf) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Ask the real tpm for the PCR value */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm command */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, index));
> +
> +   /* Send cmd, wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead()");
> +
> +   //Get the PCR value out
> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, buf, 20), ERR_MALFORMED);
> +
> +   goto egress;
> +abort_egress:
> +   memset(buf, 0x20, 20);
> +egress:
> +   free(cmdbuf);
> +}
> +
>   TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
>   {
>      TPM_RESULT status = TPM_SUCCESS;





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 04 17:49:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfwc5-00012C-FK; Tue, 04 Dec 2012 17:48:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sunds@peapod.net>) id 1Tfwc3-00011x-M9
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:48:56 +0000
Received: from [85.158.143.35:11198] by server-3.bemta-4.messagelabs.com id
	DA/73-06841-7873EB05; Tue, 04 Dec 2012 17:48:55 +0000
X-Env-Sender: sunds@peapod.net
X-Msg-Ref: server-5.tower-21.messagelabs.com!1354643331!4390735!1
X-Originating-IP: [209.85.212.41]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9966 invoked from network); 4 Dec 2012 17:48:52 -0000
Received: from mail-vb0-f41.google.com (HELO mail-vb0-f41.google.com)
	(209.85.212.41)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 17:48:52 -0000
Received: by mail-vb0-f41.google.com with SMTP id l22so3007110vbn.28
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 09:48:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=5fROxoeKc7gF/4RvW79WWTgD/3oprZAZ4/IIGbYQJA0=;
	b=LH4IwW4Kyq7ULMwacsnz5A6ZfLYPWC7HjFCfCtboEymn7Po6B0Ir4Fdc1RhLl/MuiH
	A8FvOtT1J3g3MOmUUHvl9tgkCIR7RYVRaEScOKUny0vau5seCVbsc22p5CXRLfrEi5Fl
	F2EBuDuDwvm8DEybuNaFiIT2UY310HLkxAqO08zIBlYqZdDJCmEY222WyQS3wVsxVvF+
	s2/0V5cTH7+Nxr7BsvyP4GAkMt2vNOWiMd/jEQCZHHCgGV/M+/nqUACUKfvI3mGqW4pg
	0h1Xp9A+Dxd5+qOdfnlyhFYVffi5N7oao/EitosRVCp0HtBucazAboS1bzdP5n2xC6Uu
	xPOA==
MIME-Version: 1.0
Received: by 10.220.106.147 with SMTP id x19mr1525151vco.37.1354643330972;
	Tue, 04 Dec 2012 09:48:50 -0800 (PST)
Received: by 10.58.88.72 with HTTP; Tue, 4 Dec 2012 09:48:50 -0800 (PST)
In-Reply-To: <50BE0651.1060502@tycho.nsa.gov>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
	<CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
	<50BE0651.1060502@tycho.nsa.gov>
Date: Tue, 4 Dec 2012 11:48:50 -0600
Message-ID: <CAP4+OA05Nd981TDNrtLr_axKna8gntd1srJJrJrUSQmJ1hotFQ@mail.gmail.com>
From: D Sundstrom <sunds@peapod.net>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
X-Gm-Message-State: ALoCoQl4GkDBdjwIYwSPGAOCkX0OcPLaGrU74xqVbP/ITKtLeFDGj4Evfni4v4WY4q0nl/IxDr7i
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0258163357177378733=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thanks Daniel and Pablo.

Pablo, please keep me advised if you solve any issues regarding granting
large amounts of memory. I have the same requirement.

Daniel, thanks for the explanation. Indeed, if I just allocate memory off
the heap everything works, but I'm "leaking" that memory.

I'll need an answer from Citrix as to why XenClient fails for the memory op
hypercall.

Is the intent of "decrease reservation" to pull more memory into the DomU?
I didn't quite understand the logic in this driver if it fails to find
memory already in the balloon list.

-David


On Tue, Dec 4, 2012 at 8:18 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:

> On 12/03/2012 07:49 PM, D Sundstrom wrote:
> > The issue seems to be my version of Xen (XenClient XT) must not support
> > balloon drivers. Any call to the memory_op hypercall to change the
> > reservation terminates my guest with extreme prejudice.
> >
> > I'll take that one up with Citrix.  However, can someone explain why
> > mapping a grant needs to manipulate the balloon reservation?
> >
> > Specifically, in the 3.7-RC7 linux kernel tree, the file
> > drivers/xen/balloon.c:
> >
> > At line 512 it tries to get a page out of the balloon.  This returns null
> > (no page).
> > If page.... at line 513 evaluates to false
> > At line 518 the else block calls decrease_reservation().
> >
> >
> > Thanks
> > David
> >
>
> The gntdev driver needs a GFN for the mapped page (this is a hard
> requirement
> for HVM, and also makes PV in-kernel mapping of the page easier iirc), and
> this
> GFN must be unused by the guest (no associated MFN - otherwise it may end
> up
> leaking the MFN until the domain is shutdown). Since ballooned out pages
> satisfy
> these requirements, the gntdev code uses the balloon pool instead of
> breaking
> the GFN/MFN association itself or trying to use the pages beyond the last
> valid
> GFN.
>
> >
> > On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov
> >wrote:
> >
> >> On 11/12/2012 08:15 AM, D Sundstrom wrote:
> >>> Thank you Pablo.
> >>>
> >>> It makes no difference if I run both the src-add and map from the same
> >>> domain or from different DomU domains.
> >>> Whichever DomU I run the map function in crashes immediately.
> >>
> >> Mapping your own grants (which is what the test run you showed did)
> might
> >> cause problems - although it's a bug that needs to be fixed, if so. You
> >> may want to try using the vchan-node2 tool (tools/libvchan) for testing
> >> and as an example user.
> >>
> >>> You mention Dom0.  I just want to be clear that I'd like to share
> >>> between two DomU domains.  Have you gotten this to work?
> >>
> >> That was the goal of gntalloc/libvchan - it should work (and has for
> me).
> >>
> >>> I also tried the userspace APIs provided by Xen such as
> >>> xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
> >>> the same driver IOCTLs, so this isn't a surprise.
> >>>
> >>> I'll need to see if I can get some debug info from the DomU kernel to
> >>> make progress.
> >>
> >> You might want to try booting your domU with console=hvc0 and look at
> >> xl console - that will usually give you useful backtraces. Without that,
> >> it's rather difficult to tell what the problem is.
> >>
> >>> If I can get this to work, are there any restrictions on sharing large
> >>> amounts of memory?  Say 160Mb?  Or are grant tables intended for a
> >>> small number of pages?
> >>
> >> There are restrictions within the modules (default is 1024 4K pages),
> and
> >> in Xen itself for the number of grant table and maptrack pages - but I
> >> think those can be adjusted via a boot parameter. The grant tables
> aren't
> >> currently intended to share large amounts of memory, so you may run in
> to
> >> some inefficiencies when doing the map/unmap. If you're using an IOMMU
> for
> >> one of the domUs, this may end up being especially costly.
> >>
> >>> Thanks,
> >>> David
> >>>
> >>>
> >>>
> >>> On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <
> pllopis@arcos.inf.uc3m.es>
> >> wrote:
> >>>>
> >>>> It is not clear from your output from which domain you are running
> >>>> each command. It looks like you are trying to issue a grant and map it
> >>>> from within the same domain. That's probably the reason it crashes.
> >>>> You are supposed to run this tool from both domains, running the calls
> >>>> which interface with gntalloc from one domain, and the calls which
> >>>> interface with gntdev from the other domain.
> >>>> In any case, the domid you have to specify in the map must be the
> >>>> domid of the domain which issued the grant. In other words, when
> >>>> creating a grant, the domid which is granted access is specified. When
> >>>> mapping a grant, the domid which issued the grant is specified. (i.e.
> >>>> If you did "src-add 8" from dom0 you should run map 0 1372 from domU
> >>>> 8)
> >>>>
> >>>>>
> >>>
>
>
> --
> Daniel De Graaf
> National Security Agency
>





From xen-devel-bounces@lists.xen.org Tue Dec 04 17:49:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfwc5-00012C-FK; Tue, 04 Dec 2012 17:48:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sunds@peapod.net>) id 1Tfwc3-00011x-M9
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:48:56 +0000
Received: from [85.158.143.35:11198] by server-3.bemta-4.messagelabs.com id
	DA/73-06841-7873EB05; Tue, 04 Dec 2012 17:48:55 +0000
X-Env-Sender: sunds@peapod.net
X-Msg-Ref: server-5.tower-21.messagelabs.com!1354643331!4390735!1
X-Originating-IP: [209.85.212.41]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9966 invoked from network); 4 Dec 2012 17:48:52 -0000
Received: from mail-vb0-f41.google.com (HELO mail-vb0-f41.google.com)
	(209.85.212.41)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 17:48:52 -0000
Received: by mail-vb0-f41.google.com with SMTP id l22so3007110vbn.28
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 09:48:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=5fROxoeKc7gF/4RvW79WWTgD/3oprZAZ4/IIGbYQJA0=;
	b=LH4IwW4Kyq7ULMwacsnz5A6ZfLYPWC7HjFCfCtboEymn7Po6B0Ir4Fdc1RhLl/MuiH
	A8FvOtT1J3g3MOmUUHvl9tgkCIR7RYVRaEScOKUny0vau5seCVbsc22p5CXRLfrEi5Fl
	F2EBuDuDwvm8DEybuNaFiIT2UY310HLkxAqO08zIBlYqZdDJCmEY222WyQS3wVsxVvF+
	s2/0V5cTH7+Nxr7BsvyP4GAkMt2vNOWiMd/jEQCZHHCgGV/M+/nqUACUKfvI3mGqW4pg
	0h1Xp9A+Dxd5+qOdfnlyhFYVffi5N7oao/EitosRVCp0HtBucazAboS1bzdP5n2xC6Uu
	xPOA==
MIME-Version: 1.0
Received: by 10.220.106.147 with SMTP id x19mr1525151vco.37.1354643330972;
	Tue, 04 Dec 2012 09:48:50 -0800 (PST)
Received: by 10.58.88.72 with HTTP; Tue, 4 Dec 2012 09:48:50 -0800 (PST)
In-Reply-To: <50BE0651.1060502@tycho.nsa.gov>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
	<CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
	<50BE0651.1060502@tycho.nsa.gov>
Date: Tue, 4 Dec 2012 11:48:50 -0600
Message-ID: <CAP4+OA05Nd981TDNrtLr_axKna8gntd1srJJrJrUSQmJ1hotFQ@mail.gmail.com>
From: D Sundstrom <sunds@peapod.net>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
X-Gm-Message-State: ALoCoQl4GkDBdjwIYwSPGAOCkX0OcPLaGrU74xqVbP/ITKtLeFDGj4Evfni4v4WY4q0nl/IxDr7i
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0258163357177378733=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0258163357177378733==
Content-Type: multipart/alternative; boundary=f46d042fdea28a3de904d00a7adb

--f46d042fdea28a3de904d00a7adb
Content-Type: text/plain; charset=ISO-8859-1

Thanks Daniel and Pablo.

Pablo please keep me advised if you solve any issues regarding granting
large amounts of memory.  I have the same requirement.

Daniel, thanks for the explanation.  Indeed, if I just allocate memory off
the heap everything works, but I'm "leaking" that memory.

I'll need an answer from Citrix as to why XenClient fails for the memory op
hypercall.

Is the intent of "decrease reservation" to pull more memory into the DomU?
 I didn't quite understand the logic in this driver if it fails to find
memory already in the balloon list.

-David


On Tue, Dec 4, 2012 at 8:18 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:

> On 12/03/2012 07:49 PM, D Sundstrom wrote:
> > The issue seems to be my version of Xen (XenClient XT) must not support
> > balloon drivers. Any call to the memory_op hypercall to change the
> > reservation terminates my guest with extreme prejudice.
> >
> > I'll take that one up with Citrix.  However, can someone explain why
> > mapping a grant needs to manipulate the balloon reservation?
> >
> > Specifically, in the 3.7-RC7 linux kernel tree, the file
> > drivers/xen/balloon.c:
> >
> > At line 512 it tries to get a page out of the balloon.  This returns null
> > (no page).
> > If page.... at line 513 evaluates to false
> > At line 518 the else block calls decrease_reservation().
> >
> >
> > Thanks
> > David
> >
>
> The gntdev driver needs a GFN for the mapped page (this is a hard
> requirement
> for HVM, and also makes PV in-kernel mapping of the page easier iirc), and
> this
> GFN must be unused by the guest (no associated MFN - otherwise it may end
> up
> leaking the MFN until the domain is shutdown). Since ballooned out pages
> satisfy
> these requirements, the gntdev code uses the balloon pool instead of
> breaking
> the GFN/MFN association itself or trying to use the pages beyond the last
> valid
> GFN.
>
> >
> > On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov
> >wrote:
> >
> >> On 11/12/2012 08:15 AM, D Sundstrom wrote:
> >>> Thank you Pablo.
> >>>
> >>> It makes no difference if I run both the src-add and map from the same
> >>> domain or from different DomU domains.
> >>> Whichever DomU I run the map function in crashes immediately.
> >>
> >> Mapping your own grants (which is what the test run you showed did)
> might
> >> cause problems - although it's a bug that needs to be fixed, if so. You
> >> may want to try using the vchan-node2 tool (tools/libvchan) for testing
> >> and as an example user.
> >>
> >>> You mention Dom0.  I just want to be clear that I'd like to share
> >>> between two DomU domains.  Have you gotten this to work?
> >>
> >> That was the goal of gntalloc/libvchan - it should work (and has for
> me).
> >>
> >>> I also tried the userspace APIs provided by Xen such as
> >>> xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
> >>> the same driver IOCTLs, so this isn't a surprise.
> >>>
> >>> I'll need to see if I can get some debug info from the DomU kernel to
> >>> make progress.
> >>
> >> You might want to try booting your domU with console=hvc0 and look at
> >> xl console - that will usually give you useful backtraces. Without that,
> >> it's rather difficult to tell what the problem is.
> >>
> >>> If I can get this to work, are there any restrictions on sharing large
> >>> amounts of memory?  Say 160Mb?  Or are grant tables intended for a
> >>> small number of pages?
> >>
> >> There are restrictions within the modules (default is 1024 4K pages),
> and
> >> in Xen itself for the number of grant table and maptrack pages - but I
> >> think those can be adjusted via a boot parameter. The grant tables
> aren't
> >> currently intended to share large amounts of memory, so you may run in
> to
> >> some inefficiencies when doing the map/unmap. If you're using an IOMMU
> for
> >> one of the domUs, this may end up being especially costly.
> >>
> >>> Thanks,
> >>> David
> >>>
> >>>
> >>>
> >>> On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <
> pllopis@arcos.inf.uc3m.es>
> >> wrote:
> >>>>
> >>>> It is not clear from your output from which domain you are running
> >>>> each command. It looks like you are trying to issue a grant and map it
> >>>> from within the same domain. That's probably the reason it crashes.
> >>>> You are supposed to run this tool from both domains, running the calls
> >>>> which interface with gntalloc from one domain, and the calls which
> >>>> interface with gntdev from the other domain.
> >>>> In any case, the domid you have to specify in the map must be the
> >>>> domid of the domain which issued the grant. In other words, when
> >>>> creating a grant, the domid which is granted access is specified. When
> >>>> mapping a grant, the domid which issued the grant is specified. (i.e.
> >>>> If you did "src-add 8" from dom0 you should run map 0 1372 from domU
> >>>> 8)
> >>>>
> >>>>>
> >>>
>
>
> --
> Daniel De Graaf
> National Security Agency
>

--f46d042fdea28a3de904d00a7adb--


--===============0258163357177378733==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0258163357177378733==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 17:49:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfwce-00017U-UZ; Tue, 04 Dec 2012 17:49:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tfwce-00017C-0M
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:49:32 +0000
Received: from [193.109.254.147:31164] by server-3.bemta-14.messagelabs.com id
	8C/7C-01317-BA73EB05; Tue, 04 Dec 2012 17:49:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1354643370!4197248!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26858 invoked from network); 4 Dec 2012 17:49:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 17:49:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155205"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 17:49:30 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	17:49:30 +0000
Message-ID: <1354643369.15296.92.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Tue, 4 Dec 2012 17:49:29 +0000
In-Reply-To: <1354210534-31052-7-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-7-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 6/7] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-11-29 at 17:35 +0000, Matthew Fioravante wrote:
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  stubdom/configure              | 4370 ++++++++++++++++++++++++++++++++++++++++
>  stubdom/configure.ac           |   54 +
>  stubdom/install.sh             |    1 +

Does this (and the next patch) require stuff to be added to .hgignore
and .gitignore?

Ian.
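
If so, the entries would presumably just be the autoconf by-products (an illustrative guess; the checked-in configure script itself would not be ignored):

```
# stubdom/.gitignore / .hgignore additions (hypothetical)
autom4te.cache/
config.log
config.status
```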



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 17:55:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfwhy-0001ew-Lu; Tue, 04 Dec 2012 17:55:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tfwhx-0001er-3m
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:55:01 +0000
Received: from [85.158.139.211:31223] by server-13.bemta-5.messagelabs.com id
	60/8E-27809-4F83EB05; Tue, 04 Dec 2012 17:55:00 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354643698!18254207!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21517 invoked from network); 4 Dec 2012 17:54:59 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-15.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 17:54:59 -0000
X-TM-IMSS-Message-ID: <e5b7f5040007856a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id e5b7f5040007856a ;
	Tue, 4 Dec 2012 12:54:49 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qB4HspkG028825; 
	Tue, 4 Dec 2012 12:54:51 -0500
Message-ID: <50BE38EB.8090109@tycho.nsa.gov>
Date: Tue, 04 Dec 2012 12:54:51 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: D Sundstrom <sunds@peapod.net>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
	<CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
	<50BE0651.1060502@tycho.nsa.gov>
	<CAP4+OA05Nd981TDNrtLr_axKna8gntd1srJJrJrUSQmJ1hotFQ@mail.gmail.com>
In-Reply-To: <CAP4+OA05Nd981TDNrtLr_axKna8gntd1srJJrJrUSQmJ1hotFQ@mail.gmail.com>
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/04/2012 12:48 PM, D Sundstrom wrote:
> Thanks Daniel and Pablo.
> 
> Pablo please keep me advised if you solve any issues regarding granting
> large amounts of memory.  I have the same requirement.
> 
> Daniel, thanks for the explanation.  Indeed, if I just allocate memory off
> the heap everything works, but I'm "leaking" that memory.
> 
> I'll need an answer from Citrix as to why XenClient fails for the memory op
> hypercall.
> 
> Is the intent of "decrease reservation" to pull more memory into the DomU?
>  I didn't quite understand the logic in this driver if it fails to find
> memory already in the balloon list.
> 
> -David

The decrease reservation actually removes memory from the DomU (it lowers
usage), creating a free GFN. When the grant is later unmapped, that GFN is
passed to increase reservation so that it becomes normally usable again. The
overhead of these two extra hypercalls is avoided when ballooned pages are
already available, which is why they aren't done at the same time as the
grant map/unmap.
 
> 
> On Tue, Dec 4, 2012 at 8:18 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:
> 
>> On 12/03/2012 07:49 PM, D Sundstrom wrote:
>>> The issue seems to be my version of Xen (XenClient XT) must not support
>>> balloon drivers. Any call to the memory_op hypercall to change the
>>> reservation terminates my guest with extreme prejudice.
>>>
>>> I'll take that one up with Citrix.  However, can someone explain why
>>> mapping a grant needs to manipulate the balloon reservation?
>>>
>>> Specifically, in the 3.7-RC7 linux kernel tree, the file
>>> drivers/xen/balloon.c:
>>>
>>> At line 512 it tries to get a page out of the balloon.  This returns null
>>> (no page).
>>> If page.... at line 513 evaluates to false
>>> At line 518 the else block calls decrease_reservation().
>>>
>>>
>>> Thanks
>>> David
>>>
>>
>> The gntdev driver needs a GFN for the mapped page (this is a hard
>> requirement
>> for HVM, and also makes PV in-kernel mapping of the page easier iirc), and
>> this
>> GFN must be unused by the guest (no associated MFN - otherwise it may end
>> up
>> leaking the MFN until the domain is shutdown). Since ballooned out pages
>> satisfy
>> these requirements, the gntdev code uses the balloon pool instead of
>> breaking
>> the GFN/MFN association itself or trying to use the pages beyond the last
>> valid
>> GFN.
>>
>>>
>>> On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov
>>> wrote:
>>>
>>>> On 11/12/2012 08:15 AM, D Sundstrom wrote:
>>>>> Thank you Pablo.
>>>>>
>>>>> It makes no difference if I run both the src-add and map from the same
>>>>> domain or from different DomU domains.
>>>>> Whichever DomU I run the map function in crashes immediately.
>>>>
>>>> Mapping your own grants (which is what the test run you showed did)
>> might
>>>> cause problems - although it's a bug that needs to be fixed, if so. You
>>>> may want to try using the vchan-node2 tool (tools/libvchan) for testing
>>>> and as an example user.
>>>>
>>>>> You mention Dom0.  I just want to be clear that I'd like to share
>>>>> between two DomU domains.  Have you gotten this to work?
>>>>
>>>> That was the goal of gntalloc/libvchan - it should work (and has for
>> me).
>>>>
>>>>> I also tried the userspace APIs provided by Xen such as
>>>>> xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
>>>>> the same driver IOCTLs, so this isn't a surprise.
>>>>>
>>>>> I'll need to see if I can get some debug info from the DomU kernel to
>>>>> make progress.
>>>>
>>>> You might want to try booting your domU with console=hvc0 and look at
>>>> xl console - that will usually give you useful backtraces. Without that,
>>>> it's rather difficult to tell what the problem is.
>>>>
>>>>> If I can get this to work, are there any restrictions on sharing large
>>>>> amounts of memory?  Say 160Mb?  Or are grant tables intended for a
>>>>> small number of pages?
>>>>
>>>> There are restrictions within the modules (default is 1024 4K pages),
>> and
>>>> in Xen itself for the number of grant table and maptrack pages - but I
>>>> think those can be adjusted via a boot parameter. The grant tables
>> aren't
>>>> currently intended to share large amounts of memory, so you may run in
>> to
>>>> some inefficiencies when doing the map/unmap. If you're using an IOMMU
>> for
>>>> one of the domUs, this may end up being especially costly.
>>>>
>>>>> Thanks,
>>>>> David
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <
>> pllopis@arcos.inf.uc3m.es>
>>>> wrote:
>>>>>>
>>>>>> It is not clear from your output from which domain you are running
>>>>>> each command. It looks like you are trying to issue a grant and map it
>>>>>> from within the same domain. That's probably the reason it crashes.
>>>>>> You are supposed to run this tool from both domains, running the calls
>>>>>> which interface with gntalloc from one domain, and the calls which
>>>>>> interface with gntdev from the other domain.
>>>>>> In any case, the domid you have to specify in the map must be the
>>>>>> domid of the domain which issued the grant. In other words, when
>>>>>> creating a grant, the domid which is granted access is specified. When
>>>>>> mapping a grant, the domid which issued the grant is specified. (E.g.
>>>>>> if you did "src-add 8" from dom0 you should run "map 0 1372" from
>>>>>> domU 8.)
>>>>>>
>>>>>>>
>>>>>
>>
>>
>> --
>> Daniel De Graaf
>> National Security Agency
>>
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 17:55:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 17:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfwhy-0001ew-Lu; Tue, 04 Dec 2012 17:55:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tfwhx-0001er-3m
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 17:55:01 +0000
Received: from [85.158.139.211:31223] by server-13.bemta-5.messagelabs.com id
	60/8E-27809-4F83EB05; Tue, 04 Dec 2012 17:55:00 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354643698!18254207!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21517 invoked from network); 4 Dec 2012 17:54:59 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-15.tower-206.messagelabs.com with SMTP;
	4 Dec 2012 17:54:59 -0000
X-TM-IMSS-Message-ID: <e5b7f5040007856a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id e5b7f5040007856a ;
	Tue, 4 Dec 2012 12:54:49 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qB4HspkG028825; 
	Tue, 4 Dec 2012 12:54:51 -0500
Message-ID: <50BE38EB.8090109@tycho.nsa.gov>
Date: Tue, 04 Dec 2012 12:54:51 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: D Sundstrom <sunds@peapod.net>
References: <CAP4+OA3-CsKxJizFpx-V48cCFKJc5KXNQzzY4oQ6pQ5pu0RFMg@mail.gmail.com>
	<CAL08nMGDeaQO6HAJypK+pzmsmg8wcz2=WnLWwX==R5MRBCyoPA@mail.gmail.com>
	<CAP4+OA064YvLQyPahGWoyxPhWv+v5LLCjBUOUg09QF7XKJ1tZg@mail.gmail.com>
	<50A27423.6040104@tycho.nsa.gov>
	<CAP4+OA0KLDk-5p=S1sSq9ugrLF4siL_iQ7YNT1nCGO4mwG5=zA@mail.gmail.com>
	<50BE0651.1060502@tycho.nsa.gov>
	<CAP4+OA05Nd981TDNrtLr_axKna8gntd1srJJrJrUSQmJ1hotFQ@mail.gmail.com>
In-Reply-To: <CAP4+OA05Nd981TDNrtLr_axKna8gntd1srJJrJrUSQmJ1hotFQ@mail.gmail.com>
Cc: Pablo Llopis <pllopis@arcos.inf.uc3m.es>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Immediate kernel panic using gntdev device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/04/2012 12:48 PM, D Sundstrom wrote:
> Thanks Daniel and Pablo.
> 
> Pablo please keep me advised if you solve any issues regarding granting
> large amounts of memory.  I have the same requirement.
> 
> Daniel, thanks for the explanation.  Indeed, if I just allocate memory off
> the heap everything works, but I'm "leaking" that memory.
> 
> I'll need an answer from Citrix as to why XenClient fails for the memory op
> hypercall.
> 
> Is the intent of "decrease reservation" to pull more memory into the DomU?
>  I didn't quite understand the logic in this driver if it fails to find
> memory already in the balloon list.
> 
> -David

The decrease-reservation call actually removes memory from the DomU (lowering its
usage) and creates a free GFN. When the grant is later unmapped, the GFN is passed
to increase reservation so that it's usable normally again. The overhead of these
two extra hypercalls is avoided if ballooned pages are already available, which is
why they aren't done at the same time as the grant map/unmap.
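The fallback described above can be sketched as a toy model in plain shell (this is illustrative only, not the kernel code; the pool contents and function name are made up):

```shell
#!/bin/sh
# Toy model: take a page from the balloon pool if one is free, and only
# fall back to an (emulated) decrease-reservation hypercall when it is empty.
BALLOON_POOL="gfn1 gfn2"

alloc_gfn() {
    # Split the pool into positional parameters (word-splitting is intended).
    set -- $BALLOON_POOL
    if [ "$#" -gt 0 ]; then
        gfn=$1
        shift
        BALLOON_POOL="$*"
        echo "reused ballooned page $gfn"
    else
        echo "decrease_reservation: creating a fresh free GFN"
    fi
}

alloc_gfn   # reused ballooned page gfn1
alloc_gfn   # reused ballooned page gfn2
alloc_gfn   # decrease_reservation: creating a fresh free GFN
```

The first two calls are the cheap path (a ballooned page already exists); only the third has to pay for the extra hypercall.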
 
> 
> On Tue, Dec 4, 2012 at 8:18 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov>wrote:
> 
>> On 12/03/2012 07:49 PM, D Sundstrom wrote:
>>> The issue seems to be my version of Xen (XenClient XT) must not support
>>> balloon drivers. Any call to the memory_op hypercall to change the
>>> reservation terminates my guest with extreme prejudice.
>>>
>>> I'll take that one up with Citrix.  However, can someone explain why
>>> mapping a grant needs to manipulate the balloon reservation?
>>>
>>> Specifically, in the 3.7-RC7 linux kernel tree, the file
>>> drivers/xen/balloon.c:
>>>
>>> At line 512 it tries to get a page out of the balloon.  This returns null
>>> (no page).
>>> If page.... at line 513 evaluates to false
>>> At line 518 the else block calls decrease_reservation().
>>>
>>>
>>> Thanks
>>> David
>>>
>>
>> The gntdev driver needs a GFN for the mapped page (this is a hard requirement
>> for HVM, and also makes PV in-kernel mapping of the page easier iirc), and this
>> GFN must be unused by the guest (no associated MFN - otherwise it may end up
>> leaking the MFN until the domain is shutdown). Since ballooned out pages satisfy
>> these requirements, the gntdev code uses the balloon pool instead of breaking
>> the GFN/MFN association itself or trying to use the pages beyond the last valid
>> GFN.
>>
>>>
>>> On Tue, Nov 13, 2012 at 10:24 AM, Daniel De Graaf <dgdegra@tycho.nsa.gov
>>> wrote:
>>>
>>>> On 11/12/2012 08:15 AM, D Sundstrom wrote:
>>>>> Thank you Pablo.
>>>>>
>>>>> It makes no difference if I run both the src-add and map from the same
>>>>> domain or from different DomU domains.
>>>>> Whichever DomU I run the map function in crashes immediately.
>>>>
>>>> Mapping your own grants (which is what the test run you showed did) might
>>>> cause problems - although it's a bug that needs to be fixed, if so. You
>>>> may want to try using the vchan-node2 tool (tools/libvchan) for testing
>>>> and as an example user.
>>>>
>>>>> You mention Dom0.  I just want to be clear that I'd like to share
>>>>> between two DomU domains.  Have you gotten this to work?
>>>>
>>>> That was the goal of gntalloc/libvchan - it should work (and has for me).
>>>>
>>>>> I also tried the userspace APIs provided by Xen such as
>>>>> xc_gnttab_map_grant_ref() and these also crash.  Of course, these use
>>>>> the same driver IOCTLs, so this isn't a surprise.
>>>>>
>>>>> I'll need to see if I can get some debug info from the DomU kernel to
>>>>> make progress.
>>>>
>>>> You might want to try booting your domU with console=hvc0 and look at
>>>> xl console - that will usually give you useful backtraces. Without that,
>>>> it's rather difficult to tell what the problem is.
>>>>
>>>>> If I can get this to work, are there any restrictions on sharing large
>>>>> amounts of memory?  Say 160Mb?  Or are grant tables intended for a
>>>>> small number of pages?
>>>>
>>>> There are restrictions within the modules (default is 1024 4K pages), and
>>>> in Xen itself for the number of grant table and maptrack pages - but I
>>>> think those can be adjusted via a boot parameter. The grant tables aren't
>>>> currently intended to share large amounts of memory, so you may run into
>>>> some inefficiencies when doing the map/unmap. If you're using an IOMMU for
>>>> one of the domUs, this may end up being especially costly.
>>>>
>>>>> Thanks,
>>>>> David
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 12, 2012 at 5:36 AM, Pablo Llopis <pllopis@arcos.inf.uc3m.es> wrote:
>>>>>>
>>>>>> It is not clear from your output from which domain you are running
>>>>>> each command. It looks like you are trying to issue a grant and map it
>>>>>> from within the same domain. That's probably the reason it crashes.
>>>>>> You are supposed to run this tool from both domains, running the calls
>>>>>> which interface with gntalloc from one domain, and the calls which
>>>>>> interface with gntdev from the other domain.
>>>>>> In any case, the domid you have to specify in the map must be the
>>>>>> domid of the domain which issued the grant. In other words, when
>>>>>> creating a grant, the domid which is granted access is specified. When
>>>>>> mapping a grant, the domid which issued the grant is specified. (E.g.
>>>>>> if you did "src-add 8" from dom0 you should run "map 0 1372" from
>>>>>> domU 8.)
>>>>>>
>>>>>>>
>>>>>
>>
>>
>> --
>> Daniel De Graaf
>> National Security Agency
>>
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwN-00027T-OM; Tue, 04 Dec 2012 18:09:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwL-00026y-CV
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:53 +0000
Received: from [85.158.137.99:62078] by server-16.bemta-3.messagelabs.com id
	68/C3-07461-07C3EB05; Tue, 04 Dec 2012 18:09:52 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354644590!12844071!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26955 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-3.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f5e_ff34b262_d049_45d0_85d1_1d60bccb7b14;
	Tue, 04 Dec 2012 13:09:48 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:30 -0500
Message-Id: <1354644571-3202-8-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 8/8] Add conditional build of subsystems to
	configure.ac
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The toplevel Makefile still works without running configure,
and will build everything by default.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 Makefile              |   10 ++++++++--
 config/Toplevel.mk.in |    1 +
 configure.ac          |   11 ++++++++++-
 m4/subsystem.m4       |   32 ++++++++++++++++++++++++++++++++
 4 files changed, 51 insertions(+), 3 deletions(-)
 create mode 100644 config/Toplevel.mk.in
 create mode 100644 m4/subsystem.m4

diff --git a/Makefile b/Makefile
index a6ed8be..aa3c7bd 100644
--- a/Makefile
+++ b/Makefile
@@ -6,6 +6,11 @@
 .PHONY: all
 all: dist
 
+-include config/Toplevel.mk
+SUBSYSTEMS?=xen kernels tools stubdom docs
+TARGS_DIST=$(patsubst %, dist-%, $(SUBSYSTEMS))
+TARGS_INSTALL=$(patsubst %, install-%, $(SUBSYSTEMS))
+
 export XEN_ROOT=$(CURDIR)
 include Config.mk
 
@@ -15,7 +20,7 @@ include buildconfigs/Rules.mk
 
 # build and install everything into the standard system directories
 .PHONY: install
-install: install-xen install-kernels install-tools install-stubdom install-docs
+install: $(TARGS_INSTALL)
 
 .PHONY: build
 build: kernels
@@ -37,7 +42,7 @@ test:
 # build and install everything into local dist directory
 .PHONY: dist
 dist: DESTDIR=$(DISTDIR)/install
-dist: dist-xen dist-kernels dist-tools dist-stubdom dist-docs dist-misc
+dist: $(TARGS_DIST) dist-misc
 
 dist-misc:
 	$(INSTALL_DIR) $(DISTDIR)/
@@ -151,6 +156,7 @@ endif
 # clean, but blow away kernel build tree plus tarballs
 .PHONY: distclean
 distclean:
+	-rm config/Toplevel.mk
 	$(MAKE) -C xen distclean
 	$(MAKE) -C tools distclean
 	$(MAKE) -C stubdom distclean
diff --git a/config/Toplevel.mk.in b/config/Toplevel.mk.in
new file mode 100644
index 0000000..4db7eaf
--- /dev/null
+++ b/config/Toplevel.mk.in
@@ -0,0 +1 @@
+SUBSYSTEMS               := @SUBSYSTEMS@
diff --git a/configure.ac b/configure.ac
index 5dacb46..fcbc4ae 100644
--- a/configure.ac
+++ b/configure.ac
@@ -6,7 +6,16 @@ AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
     [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
 AC_CONFIG_SRCDIR([./xen/common/kernel.c])
 AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_FILES([./config/Toplevel.mk])
 
-AC_CONFIG_SUBDIRS([tools stubdom])
+m4_include([m4/features.m4])
+m4_include([m4/subsystem.m4])
+
+AX_SUBSYSTEM_DEFAULT_ENABLE([xen])
+AX_SUBSYSTEM_DEFAULT_ENABLE([kernels])
+AX_SUBSYSTEM_DEFAULT_ENABLE([tools])
+AX_SUBSYSTEM_DEFAULT_ENABLE([stubdom])
+AX_SUBSYSTEM_DEFAULT_ENABLE([docs])
+AX_SUBSYSTEM_FINISH
 
 AC_OUTPUT()
diff --git a/m4/subsystem.m4 b/m4/subsystem.m4
new file mode 100644
index 0000000..d3eb8c9
--- /dev/null
+++ b/m4/subsystem.m4
@@ -0,0 +1,32 @@
+AC_DEFUN([AX_SUBSYSTEM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Disable build and install of $1]),[
+$1=n
+],[
+$1=y
+SUBSYSTEMS="$SUBSYSTEMS $1"
+AS_IF([test -e "$1/configure"], [
+AC_CONFIG_SUBDIRS([$1])
+])
+])
+AC_SUBST($1)
+])
+
+AC_DEFUN([AX_SUBSYSTEM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Enable build and install of $1]),[
+$1=y
+SUBSYSTEMS="$SUBSYSTEMS $1"
+AS_IF([test -e "$1/configure"], [
+AC_CONFIG_SUBDIRS([$1])
+])
+],[
+$1=n
+])
+AC_SUBST($1)
+])
+
+
+AC_DEFUN([AX_SUBSYSTEM_FINISH], [
+AC_SUBST(SUBSYSTEMS)
+])
-- 
1.7.10.4
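The target generation this patch introduces can be emulated in plain shell to see what make's patsubst computes (the SUBSYSTEMS values below are examples, not the full default list):

```shell
#!/bin/sh
# Emulate: TARGS_DIST=$(patsubst %, dist-%, $(SUBSYSTEMS)) and the
# corresponding install-% list, for an example subset of subsystems.
SUBSYSTEMS="xen kernels tools"

TARGS_DIST=$(for s in $SUBSYSTEMS; do printf 'dist-%s ' "$s"; done)
TARGS_INSTALL=$(for s in $SUBSYSTEMS; do printf 'install-%s ' "$s"; done)

echo $TARGS_DIST      # dist-xen dist-kernels dist-tools
echo $TARGS_INSTALL   # install-xen install-kernels install-tools
```

With `config/Toplevel.mk` absent, `SUBSYSTEMS?=` keeps the old behaviour; after `./configure --disable-docs` (say), docs simply drops out of both lists, so `make dist` and `make install` skip it.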


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwO-00027i-9N; Tue, 04 Dec 2012 18:09:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwM-000273-4k
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:54 +0000
Received: from [85.158.138.51:30011] by server-3.bemta-3.messagelabs.com id
	0D/2B-31566-17C3EB05; Tue, 04 Dec 2012 18:09:53 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-174.messagelabs.com!1354644590!18549161!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9016 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f18_aad8eedf_e7b3_45dc_912b_f159c12f3073;
	Tue, 04 Dec 2012 13:09:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:25 -0500
Message-Id: <1354644571-3202-3-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 3/8] vtpm/vtpmmgr and required libs to
	stubdom/Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add 3 new libraries to stubdom:
libgmp
polarssl
Berlios TPM Emulator 0.7.4

Also adds the makefile structure for vtpm-stubdom and vtpmmgrdom

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/Makefile           |  138 +++++++++++++++++++++++++++++++++++++++++++-
 stubdom/polarssl.patch     |   64 ++++++++++++++++++++
 stubdom/tpmemu-0.7.4.patch |   12 ++++
 3 files changed, 211 insertions(+), 3 deletions(-)
 create mode 100644 stubdom/polarssl.patch
 create mode 100644 stubdom/tpmemu-0.7.4.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 50ba360..fc70d88 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -31,6 +31,18 @@ GRUB_VERSION=0.97
 OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
 OCAML_VERSION=3.11.0
 
+GMP_VERSION=4.3.2
+GMP_URL?=$(XEN_EXTFILES_URL)
+#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
+
+POLARSSL_VERSION=1.1.4
+POLARSSL_URL?=$(XEN_EXTFILES_URL)
+#POLARSSL_URL?=http://polarssl.org/code/releases
+
+TPMEMU_VERSION=0.7.4
+TPMEMU_URL?=$(XEN_EXTFILES_URL)
+#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
+
 WGET=wget -c
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
@@ -74,12 +86,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore
+TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
+build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
 else
 build: genpath
 endif
@@ -176,6 +188,76 @@ lwip-$(XEN_TARGET_ARCH): lwip-$(LWIP_VERSION).tar.gz
 	touch $@
 
 #############
+# cross-gmp
+#############
+gmp-$(GMP_VERSION).tar.bz2:
+	$(WGET) $(GMP_URL)/$@
+
+.PHONY: cross-gmp
+ifeq ($(XEN_TARGET_ARCH), x86_32)
+   GMPEXT=ABI=32
+endif
+gmp-$(XEN_TARGET_ARCH): gmp-$(GMP_VERSION).tar.bz2 $(NEWLIB_STAMPFILE)
+	tar xjf $<
+	mv gmp-$(GMP_VERSION) $@
+	#patch -d $@ -p0 < gmp.patch
+	cd $@; CPPFLAGS="-isystem $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include $(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" CC=$(CC) $(GMPEXT) ./configure --disable-shared --enable-static --disable-fft --without-readline --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf
+	sed -i 's/#define HAVE_OBSTACK_VPRINTF 1/\/\/#define HAVE_OBSTACK_VPRINTF 1/' $@/config.h
+	touch $@
+
+GMP_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libgmp.a
+cross-gmp: $(GMP_STAMPFILE)
+$(GMP_STAMPFILE): gmp-$(XEN_TARGET_ARCH)
+	( cd $< && \
+	  $(MAKE) && \
+	  $(MAKE) install )
+
+#############
+# cross-polarssl
+#############
+polarssl-$(POLARSSL_VERSION)-gpl.tgz:
+	$(WGET) $(POLARSSL_URL)/$@
+
+polarssl-$(XEN_TARGET_ARCH): polarssl-$(POLARSSL_VERSION)-gpl.tgz
+	tar xzf $<
+	mv polarssl-$(POLARSSL_VERSION) $@
+	patch -d $@ -p1 < polarssl.patch
+	touch $@
+
+POLARSSL_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libpolarssl.a
+cross-polarssl: $(POLARSSL_STAMPFILE)
+$(POLARSSL_STAMPFILE): polarssl-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE) lwip-$(XEN_TARGET_ARCH)
+	 ( cd $</library && \
+	   make CC="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -I $(realpath $(MINI_OS)/include)" && \
+	   mkdir -p $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include && \
+	   cp -r ../include/* $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include && \
+	   mkdir -p $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib && \
+	   $(INSTALL_DATA) libpolarssl.a $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib/ )
+
+#############
+# cross-tpmemu
+#############
+tpm_emulator-$(TPMEMU_VERSION).tar.gz:
+	$(WGET) $(TPMEMU_URL)/$@
+
+tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
+	tar xzf $<
+	mv tpm_emulator-$(TPMEMU_VERSION) $@
+	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
+	mkdir $@/build
+	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	touch $@
+
+TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
+$(TPMEMU_STAMPFILE): tpm_emulator-$(XEN_TARGET_ARCH) $(GMP_STAMPFILE)
+	( cd $</build && make VERBOSE=1 tpm_crypto tpm  )
+	cp $</build/crypto/libtpm_crypto.a $(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm_crypto.a
+	cp $</build/tpm/libtpm.a $(TPMEMU_STAMPFILE)
+
+.PHONY: cross-tpmemu
+cross-tpmemu: $(TPMEMU_STAMPFILE)
+
+#############
 # Cross-ocaml
 #############
 
@@ -319,6 +401,24 @@ c: $(CROSS_ROOT)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) 
 
 ######
+# VTPM
+######
+
+.PHONY: vtpm
+vtpm: cross-polarssl cross-tpmemu
+	make -C $(MINI_OS) links
+	XEN_TARGET_ARCH="$(XEN_TARGET_ARCH)" CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) -C $@
+
+######
+# VTPMMGR
+######
+
+.PHONY: vtpmmgr
+vtpmmgr: cross-polarssl
+	make -C $(MINI_OS) links
+	XEN_TARGET_ARCH="$(XEN_TARGET_ARCH)" CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) -C $@
+
+######
 # Grub
 ######
 
@@ -362,6 +462,14 @@ caml-stubdom: mini-os-$(XEN_TARGET_ARCH)-caml lwip-$(XEN_TARGET_ARCH) libxc cros
 c-stubdom: mini-os-$(XEN_TARGET_ARCH)-c lwip-$(XEN_TARGET_ARCH) libxc c
 	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
 
+.PHONY: vtpm-stubdom
+vtpm-stubdom: mini-os-$(XEN_TARGET_ARCH)-vtpm vtpm
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/vtpm/minios.cfg" $(MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS="$(CURDIR)/vtpm/vtpm.a" APP_LDLIBS="-ltpm -ltpm_crypto -lgmp"
+
+.PHONY: vtpmmgrdom
+vtpmmgrdom: mini-os-$(XEN_TARGET_ARCH)-vtpmmgr vtpmmgr
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/vtpmmgr/minios.cfg" $(MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS="$(CURDIR)/vtpmmgr/vtpmmgr.a" APP_LDLIBS="-lm"
+
 .PHONY: pv-grub
 pv-grub: mini-os-$(XEN_TARGET_ARCH)-grub libxc grub
 	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
@@ -375,7 +483,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore
+install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
 else
 install: genpath
 endif
@@ -399,6 +507,14 @@ install-xenstore: xenstore-stubdom
 	$(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
 
+install-vtpm: vtpm-stubdom
+	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
+	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpm/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpm-stubdom.gz"
+
+install-vtpmmgr: vtpmmgrdom
+	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
+	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpmmgr/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpmmgrdom.gz"
+
 #######
 # clean
 #######
@@ -411,8 +527,12 @@ clean:
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-caml
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-grub
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-xenstore
+	rm -fr mini-os-$(XEN_TARGET_ARCH)-vtpm
+	rm -fr mini-os-$(XEN_TARGET_ARCH)-vtpmmgr
 	$(MAKE) DESTDIR= -C caml clean
 	$(MAKE) DESTDIR= -C c clean
+	$(MAKE) -C vtpm clean
+	$(MAKE) -C vtpmmgr clean
 	rm -fr grub-$(XEN_TARGET_ARCH)
 	rm -f $(STUBDOMPATH)
 	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH) clean
@@ -426,6 +546,10 @@ crossclean: clean
 	rm -fr newlib-$(XEN_TARGET_ARCH)
 	rm -fr zlib-$(XEN_TARGET_ARCH) pciutils-$(XEN_TARGET_ARCH)
 	rm -fr libxc-$(XEN_TARGET_ARCH) ioemu
+	rm -fr gmp-$(XEN_TARGET_ARCH)
+	rm -fr polarssl-$(XEN_TARGET_ARCH)
+	rm -fr openssl-$(XEN_TARGET_ARCH)
+	rm -fr tpm_emulator-$(XEN_TARGET_ARCH)
 	rm -f mk-headers-$(XEN_TARGET_ARCH)
 	rm -fr ocaml-$(XEN_TARGET_ARCH)
 	rm -fr include
@@ -434,6 +558,10 @@ crossclean: clean
 .PHONY: patchclean
 patchclean: crossclean
 	rm -fr newlib-$(NEWLIB_VERSION)
+	rm -fr gmp-$(XEN_TARGET_ARCH)
+	rm -fr polarssl-$(XEN_TARGET_ARCH)
+	rm -fr openssl-$(XEN_TARGET_ARCH)
+	rm -fr tpm_emulator-$(XEN_TARGET_ARCH)
 	rm -fr lwip-$(XEN_TARGET_ARCH)
 	rm -fr grub-upstream
 
@@ -442,10 +570,14 @@ patchclean: crossclean
 downloadclean: patchclean
 	rm -f newlib-$(NEWLIB_VERSION).tar.gz
 	rm -f zlib-$(ZLIB_VERSION).tar.gz
+	rm -f gmp-$(GMP_VERSION).tar.bz2
+	rm -f tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	rm -f pciutils-$(LIBPCI_VERSION).tar.bz2
 	rm -f grub-$(GRUB_VERSION).tar.gz
 	rm -f lwip-$(LWIP_VERSION).tar.gz
 	rm -f ocaml-$(OCAML_VERSION).tar.gz
+	rm -f polarssl-$(POLARSSL_VERSION)-gpl.tgz
+	rm -f openssl-$(POLARSSL_VERSION)-gpl.tgz
 
 .PHONY: distclean
 distclean: downloadclean
diff --git a/stubdom/polarssl.patch b/stubdom/polarssl.patch
new file mode 100644
index 0000000..d387d4e
--- /dev/null
+++ b/stubdom/polarssl.patch
@@ -0,0 +1,64 @@
+diff -Naur polarssl-1.1.4/include/polarssl/config.h polarssl-x86_64/include/polarssl/config.h
+--- polarssl-1.1.4/include/polarssl/config.h	2011-12-22 05:06:27.000000000 -0500
++++ polarssl-x86_64/include/polarssl/config.h	2012-10-30 17:18:07.567001000 -0400
+@@ -164,8 +164,8 @@
+  * application.
+  *
+  * Uncomment this macro to prevent loading of default entropy functions.
+-#define POLARSSL_NO_DEFAULT_ENTROPY_SOURCES
+  */
++#define POLARSSL_NO_DEFAULT_ENTROPY_SOURCES
+
+ /**
+  * \def POLARSSL_NO_PLATFORM_ENTROPY
+@@ -175,8 +175,8 @@
+  * standards like the /dev/urandom or Windows CryptoAPI.
+  *
+  * Uncomment this macro to disable the built-in platform entropy functions.
+-#define POLARSSL_NO_PLATFORM_ENTROPY
+  */
++#define POLARSSL_NO_PLATFORM_ENTROPY
+
+ /**
+  * \def POLARSSL_PKCS1_V21
+@@ -426,8 +426,8 @@
+  * Requires: POLARSSL_TIMING_C
+  *
+  * This module enables the HAVEGE random number generator.
+- */
+ #define POLARSSL_HAVEGE_C
++ */
+
+ /**
+  * \def POLARSSL_MD_C
+@@ -490,7 +490,7 @@
+  *
+  * This module provides TCP/IP networking routines.
+  */
+-#define POLARSSL_NET_C
++//#define POLARSSL_NET_C
+
+ /**
+  * \def POLARSSL_PADLOCK_C
+@@ -644,8 +644,8 @@
+  * Caller:  library/havege.c
+  *
+  * This module is used by the HAVEGE random number generator.
+- */
+ #define POLARSSL_TIMING_C
++ */
+
+ /**
+  * \def POLARSSL_VERSION_C
+diff -Naur polarssl-1.1.4/library/bignum.c polarssl-x86_64/library/bignum.c
+--- polarssl-1.1.4/library/bignum.c	2012-04-29 16:15:55.000000000 -0400
++++ polarssl-x86_64/library/bignum.c	2012-10-30 17:21:52.135000999 -0400
+@@ -1101,7 +1101,7 @@
+             Z.p[i - t - 1] = ~0;
+         else
+         {
+-#if defined(POLARSSL_HAVE_LONGLONG)
++#if 0 //defined(POLARSSL_HAVE_LONGLONG)
+             t_udbl r;
+
+             r  = (t_udbl) X.p[i] << biL;
diff --git a/stubdom/tpmemu-0.7.4.patch b/stubdom/tpmemu-0.7.4.patch
new file mode 100644
index 0000000..b84eff1
--- /dev/null
+++ b/stubdom/tpmemu-0.7.4.patch
@@ -0,0 +1,12 @@
+diff -Naur tpm_emulator-x86_64-back/tpm/tpm_emulator_extern.c tpm_emulator-x86_64/tpm/tpm_emulator_extern.c
+--- tpm_emulator-x86_64-back/tpm/tpm_emulator_extern.c	2012-04-27 10:55:46.581963398 -0400
++++ tpm_emulator-x86_64/tpm/tpm_emulator_extern.c	2012-04-27 10:56:02.193034152 -0400
+@@ -249,7 +249,7 @@
+ #else /* TPM_NO_EXTERN */
+
+ int (*tpm_extern_init)(void)                                      = NULL;
+-int (*tpm_extern_release)(void)                                   = NULL;
++void (*tpm_extern_release)(void)                                   = NULL;
+ void* (*tpm_malloc)(size_t size)                                  = NULL;
+ void (*tpm_free)(/*const*/ void *ptr)                             = NULL;
+ void (*tpm_log)(int priority, const char *fmt, ...)               = NULL;
-- 
1.7.10.4
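The cross-library rules above all rely on the same stamp-file idiom: the installed `.a` acts as the stamp, and a library is rebuilt only when its stamp is missing. A minimal sketch of that idiom (the directory and library name are placeholders, not the real build):

```shell
#!/bin/sh
# Stamp-file idiom: (re)build only when the stamp file does not exist.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

build_lib() {
    stamp=$1
    if [ -e "$stamp" ]; then
        echo "up to date: $stamp"
    else
        echo "building $stamp"
        : > "$stamp"    # stands in for "make && make install"
    fi
}

build_lib libgmp.a    # building libgmp.a
build_lib libgmp.a    # up to date: libgmp.a
```

In the real Makefile the same effect comes from make's prerequisite checking on e.g. `$(GMP_STAMPFILE)`, which is why `crossclean` removes the unpacked source trees to force a rebuild.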


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwP-00027z-5j; Tue, 04 Dec 2012 18:09:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwN-00027H-9g
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:55 +0000
Received: from [85.158.139.83:36827] by server-10.bemta-5.messagelabs.com id
	71/41-09257-27C3EB05; Tue, 04 Dec 2012 18:09:54 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354644590!28397939!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1509 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f50_949f38e3_95df_47ba_a482_37bda28b572e;
	Tue, 04 Dec 2012 13:09:48 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:29 -0500
Message-Id: <1354644571-3202-7-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 7/8] Add a top level configure script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch.
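For readers following along, "rerun autoconf" means regenerating both the new top-level configure script and the existing tools/ one, which is exactly what autogen.sh does after this patch. A hedged sketch of that step (the scratch directory and guard are only there so the snippet is a safe no-op outside a Xen source tree with autoconf >= 2.67 installed):

```shell
# Regenerate the configure scripts after this patch (sketch; mirrors autogen.sh).
cd "$(mktemp -d)"   # scratch dir: makes the guard below deterministic here
if [ -f configure.ac ] && command -v autoconf >/dev/null 2>&1; then
    autoconf                              # top-level configure from configure.ac
    (cd tools && autoconf && autoheader)  # tools/ configure plus config.h.in
    REGEN=done
else
    REGEN=skipped                         # not in a source tree, or no autoconf
fi
echo "regeneration: $REGEN"
```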

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                         |    1 +
 tools/config.guess => config.guess |    0
 tools/config.sub => config.sub     |    0
 configure.ac                       |   12 ++++++++++++
 tools/configure.ac                 |    4 ++--
 tools/install.sh                   |    1 -
 6 files changed, 15 insertions(+), 3 deletions(-)
 rename tools/config.guess => config.guess (100%)
 rename tools/config.sub => config.sub (100%)
 create mode 100644 configure.ac
 mode change 100755 => 100644 install.sh
 delete mode 100644 tools/install.sh

diff --git a/autogen.sh b/autogen.sh
index ada482c..1456d94 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -1,4 +1,5 @@
 #!/bin/sh -e
+autoconf
 cd tools
 autoconf
 autoheader
diff --git a/tools/config.guess b/config.guess
similarity index 100%
rename from tools/config.guess
rename to config.guess
diff --git a/tools/config.sub b/config.sub
similarity index 100%
rename from tools/config.sub
rename to config.sub
diff --git a/configure.ac b/configure.ac
new file mode 100644
index 0000000..5dacb46
--- /dev/null
+++ b/configure.ac
@@ -0,0 +1,12 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([./xen/common/kernel.c])
+AC_PREFIX_DEFAULT([/usr])
+
+AC_CONFIG_SUBDIRS([tools stubdom])
+
+AC_OUTPUT()
diff --git a/install.sh b/install.sh
old mode 100755
new mode 100644
diff --git a/tools/configure.ac b/tools/configure.ac
index 971e3e9..2bd71b6 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -2,13 +2,13 @@
 # Process this file with autoconf to produce a configure script.
 
 AC_PREREQ([2.67])
-AC_INIT([Xen Hypervisor], m4_esyscmd([../version.sh ../xen/Makefile]),
+AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Makefile]),
     [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
 AC_CONFIG_SRCDIR([libxl/libxl.c])
 AC_CONFIG_FILES([../config/Tools.mk])
 AC_CONFIG_HEADERS([config.h])
 AC_PREFIX_DEFAULT([/usr])
-AC_CONFIG_AUX_DIR([.])
+AC_CONFIG_AUX_DIR([../])
 
 # Check if CFLAGS, LDFLAGS, LIBS, CPPFLAGS or CPP is set and print a warning
 
diff --git a/tools/install.sh b/tools/install.sh
deleted file mode 100644
index 3f44f99..0000000
--- a/tools/install.sh
+++ /dev/null
@@ -1 +0,0 @@
-../install.sh
\ No newline at end of file
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwN-00027T-OM; Tue, 04 Dec 2012 18:09:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwL-00026y-CV
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:53 +0000
Received: from [85.158.137.99:62078] by server-16.bemta-3.messagelabs.com id
	68/C3-07461-07C3EB05; Tue, 04 Dec 2012 18:09:52 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354644590!12844071!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26955 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-3.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f5e_ff34b262_d049_45d0_85d1_1d60bccb7b14;
	Tue, 04 Dec 2012 13:09:48 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:30 -0500
Message-Id: <1354644571-3202-8-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 8/8] Add conditional build of subsystems to
	configure.ac
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The top-level Makefile still works without running configure
and will build everything by default.
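The mechanism behind this is the SUBSYSTEMS list: configure substitutes it into config/Toplevel.mk, the Makefile falls back to the full list when that file is absent, and the dist-/install- target names are derived from it with patsubst. A plain-shell rendering of that expansion (illustrative only; in the tree the real work is done by make):

```shell
# Default subsystem list, as hard-coded by the Makefile fallback
# (SUBSYSTEMS?=xen kernels tools stubdom docs).
SUBSYSTEMS="xen kernels tools stubdom docs"

# Equivalent of $(patsubst %, dist-%, $(SUBSYSTEMS)) in GNU make.
TARGS_DIST=""
for s in $SUBSYSTEMS; do
    TARGS_DIST="${TARGS_DIST:+$TARGS_DIST }dist-$s"
done

echo "$TARGS_DIST"   # dist-xen dist-kernels dist-tools dist-stubdom dist-docs
```

Passing --disable-docs (say) to the new configure simply drops "docs" from the substituted list, so the dist-docs and install-docs targets are never requested.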

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 Makefile              |   10 ++++++++--
 config/Toplevel.mk.in |    1 +
 configure.ac          |   11 ++++++++++-
 m4/subsystem.m4       |   32 ++++++++++++++++++++++++++++++++
 4 files changed, 51 insertions(+), 3 deletions(-)
 create mode 100644 config/Toplevel.mk.in
 create mode 100644 m4/subsystem.m4

diff --git a/Makefile b/Makefile
index a6ed8be..aa3c7bd 100644
--- a/Makefile
+++ b/Makefile
@@ -6,6 +6,11 @@
 .PHONY: all
 all: dist
 
+-include config/Toplevel.mk
+SUBSYSTEMS?=xen kernels tools stubdom docs
+TARGS_DIST=$(patsubst %, dist-%, $(SUBSYSTEMS))
+TARGS_INSTALL=$(patsubst %, install-%, $(SUBSYSTEMS))
+
 export XEN_ROOT=$(CURDIR)
 include Config.mk
 
@@ -15,7 +20,7 @@ include buildconfigs/Rules.mk
 
 # build and install everything into the standard system directories
 .PHONY: install
-install: install-xen install-kernels install-tools install-stubdom install-docs
+install: $(TARGS_INSTALL)
 
 .PHONY: build
 build: kernels
@@ -37,7 +42,7 @@ test:
 # build and install everything into local dist directory
 .PHONY: dist
 dist: DESTDIR=$(DISTDIR)/install
-dist: dist-xen dist-kernels dist-tools dist-stubdom dist-docs dist-misc
+dist: $(TARGS_DIST) dist-misc
 
 dist-misc:
 	$(INSTALL_DIR) $(DISTDIR)/
@@ -151,6 +156,7 @@ endif
 # clean, but blow away kernel build tree plus tarballs
 .PHONY: distclean
 distclean:
+	-rm config/Toplevel.mk
 	$(MAKE) -C xen distclean
 	$(MAKE) -C tools distclean
 	$(MAKE) -C stubdom distclean
diff --git a/config/Toplevel.mk.in b/config/Toplevel.mk.in
new file mode 100644
index 0000000..4db7eaf
--- /dev/null
+++ b/config/Toplevel.mk.in
@@ -0,0 +1 @@
+SUBSYSTEMS               := @SUBSYSTEMS@
diff --git a/configure.ac b/configure.ac
index 5dacb46..fcbc4ae 100644
--- a/configure.ac
+++ b/configure.ac
@@ -6,7 +6,16 @@ AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
     [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
 AC_CONFIG_SRCDIR([./xen/common/kernel.c])
 AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_FILES([./config/Toplevel.mk])
 
-AC_CONFIG_SUBDIRS([tools stubdom])
+m4_include([m4/features.m4])
+m4_include([m4/subsystem.m4])
+
+AX_SUBSYSTEM_DEFAULT_ENABLE([xen])
+AX_SUBSYSTEM_DEFAULT_ENABLE([kernels])
+AX_SUBSYSTEM_DEFAULT_ENABLE([tools])
+AX_SUBSYSTEM_DEFAULT_ENABLE([stubdom])
+AX_SUBSYSTEM_DEFAULT_ENABLE([docs])
+AX_SUBSYSTEM_FINISH
 
 AC_OUTPUT()
diff --git a/m4/subsystem.m4 b/m4/subsystem.m4
new file mode 100644
index 0000000..d3eb8c9
--- /dev/null
+++ b/m4/subsystem.m4
@@ -0,0 +1,32 @@
+AC_DEFUN([AX_SUBSYSTEM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Disable build and install of $1]),[
+$1=n
+],[
+$1=y
+SUBSYSTEMS="$SUBSYSTEMS $1"
+AS_IF([test -e "$1/configure"], [
+AC_CONFIG_SUBDIRS([$1])
+])
+])
+AC_SUBST($1)
+])
+
+AC_DEFUN([AX_SUBSYSTEM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Enable build and install of $1]),[
+$1=y
+SUBSYSTEMS="$SUBSYSTEMS $1"
+AS_IF([test -e "$1/configure"], [
+AC_CONFIG_SUBDIRS([$1])
+])
+],[
+$1=n
+])
+AC_SUBST($1)
+])
+
+
+AC_DEFUN([AX_SUBSYSTEM_FINISH], [
+AC_SUBST(SUBSYSTEMS)
+])
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwO-00027i-9N; Tue, 04 Dec 2012 18:09:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwM-000273-4k
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:54 +0000
Received: from [85.158.138.51:30011] by server-3.bemta-3.messagelabs.com id
	0D/2B-31566-17C3EB05; Tue, 04 Dec 2012 18:09:53 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-174.messagelabs.com!1354644590!18549161!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9016 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f18_aad8eedf_e7b3_45dc_912b_f159c12f3073;
	Tue, 04 Dec 2012 13:09:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:25 -0500
Message-Id: <1354644571-3202-3-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 3/8] Add vtpm/vtpmmgr and required libs to
	stubdom/Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add three new libraries to stubdom:
libgmp
PolarSSL
Berlios TPM Emulator 0.7.4

Also add the makefile structure for vtpm-stubdom and vtpmmgrdom.
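As a quick orientation to the diff below, each third-party tarball name is composed from a version variable, and the download URL defaults to $(XEN_EXTFILES_URL) with the upstream mirrors left commented out. A sketch of that naming scheme (the URL value here is a placeholder, not taken from this patch; the Makefile gets it from Config.mk):

```shell
# How stubdom/Makefile names the new third-party tarballs (sketch).
XEN_EXTFILES_URL="http://example.org/xen-extfiles"   # placeholder value
GMP_VERSION=4.3.2
POLARSSL_VERSION=1.1.4
TPMEMU_VERSION=0.7.4

GMP_TARBALL="gmp-$GMP_VERSION.tar.bz2"
POLARSSL_TARBALL="polarssl-$POLARSSL_VERSION-gpl.tgz"
TPMEMU_TARBALL="tpm_emulator-$TPMEMU_VERSION.tar.gz"

# The wget targets fetch $URL/$TARBALL, e.g.:
echo "$XEN_EXTFILES_URL/$GMP_TARBALL"
```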

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/Makefile           |  138 +++++++++++++++++++++++++++++++++++++++++++-
 stubdom/polarssl.patch     |   64 ++++++++++++++++++++
 stubdom/tpmemu-0.7.4.patch |   12 ++++
 3 files changed, 211 insertions(+), 3 deletions(-)
 create mode 100644 stubdom/polarssl.patch
 create mode 100644 stubdom/tpmemu-0.7.4.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 50ba360..fc70d88 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -31,6 +31,18 @@ GRUB_VERSION=0.97
 OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
 OCAML_VERSION=3.11.0
 
+GMP_VERSION=4.3.2
+GMP_URL?=$(XEN_EXTFILES_URL)
+#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
+
+POLARSSL_VERSION=1.1.4
+POLARSSL_URL?=$(XEN_EXTFILES_URL)
+#POLARSSL_URL?=http://polarssl.org/code/releases
+
+TPMEMU_VERSION=0.7.4
+TPMEMU_URL?=$(XEN_EXTFILES_URL)
+#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
+
 WGET=wget -c
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
@@ -74,12 +86,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore
+TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
+build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
 else
 build: genpath
 endif
@@ -176,6 +188,76 @@ lwip-$(XEN_TARGET_ARCH): lwip-$(LWIP_VERSION).tar.gz
 	touch $@
 
 #############
+# cross-gmp
+#############
+gmp-$(GMP_VERSION).tar.bz2:
+	$(WGET) $(GMP_URL)/$@
+
+.PHONY: cross-gmp
+ifeq ($(XEN_TARGET_ARCH), x86_32)
+   GMPEXT=ABI=32
+endif
+gmp-$(XEN_TARGET_ARCH): gmp-$(GMP_VERSION).tar.bz2 $(NEWLIB_STAMPFILE)
+	tar xjf $<
+	mv gmp-$(GMP_VERSION) $@
+	#patch -d $@ -p0 < gmp.patch
+	cd $@; CPPFLAGS="-isystem $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include $(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" CC=$(CC) $(GMPEXT) ./configure --disable-shared --enable-static --disable-fft --without-readline --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf
+	sed -i 's/#define HAVE_OBSTACK_VPRINTF 1/\/\/#define HAVE_OBSTACK_VPRINTF 1/' $@/config.h
+	touch $@
+
+GMP_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libgmp.a
+cross-gmp: $(GMP_STAMPFILE)
+$(GMP_STAMPFILE): gmp-$(XEN_TARGET_ARCH)
+	( cd $< && \
+	  $(MAKE) && \
+	  $(MAKE) install )
+
+#############
+# cross-polarssl
+#############
+polarssl-$(POLARSSL_VERSION)-gpl.tgz:
+	$(WGET) $(POLARSSL_URL)/$@
+
+polarssl-$(XEN_TARGET_ARCH): polarssl-$(POLARSSL_VERSION)-gpl.tgz
+	tar xzf $<
+	mv polarssl-$(POLARSSL_VERSION) $@
+	patch -d $@ -p1 < polarssl.patch
+	touch $@
+
+POLARSSL_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libpolarssl.a
+cross-polarssl: $(POLARSSL_STAMPFILE)
+$(POLARSSL_STAMPFILE): polarssl-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE) lwip-$(XEN_TARGET_ARCH)
+	 ( cd $</library && \
+	   make CC="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -I $(realpath $(MINI_OS)/include)" && \
+	   mkdir -p $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include && \
+	   cp -r ../include/* $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include && \
+	   mkdir -p $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib && \
+	   $(INSTALL_DATA) libpolarssl.a $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib/ )
+
+#############
+# cross-tpmemu
+#############
+tpm_emulator-$(TPMEMU_VERSION).tar.gz:
+	$(WGET) $(TPMEMU_URL)/$@
+
+tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
+	tar xzf $<
+	mv tpm_emulator-$(TPMEMU_VERSION) $@
+	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
+	mkdir $@/build
+	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	touch $@
+
+TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
+$(TPMEMU_STAMPFILE): tpm_emulator-$(XEN_TARGET_ARCH) $(GMP_STAMPFILE)
+	( cd $</build && make VERBOSE=1 tpm_crypto tpm  )
+	cp $</build/crypto/libtpm_crypto.a $(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm_crypto.a
+	cp $</build/tpm/libtpm.a $(TPMEMU_STAMPFILE)
+
+.PHONY: cross-tpmemu
+cross-tpmemu: $(TPMEMU_STAMPFILE)
+
+#############
 # Cross-ocaml
 #############
 
@@ -319,6 +401,24 @@ c: $(CROSS_ROOT)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) 
 
 ######
+# VTPM
+######
+
+.PHONY: vtpm
+vtpm: cross-polarssl cross-tpmemu
+	make -C $(MINI_OS) links
+	XEN_TARGET_ARCH="$(XEN_TARGET_ARCH)" CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) -C $@
+
+######
+# VTPMMGR
+######
+
+.PHONY: vtpmmgr
+vtpmmgr: cross-polarssl
+	make -C $(MINI_OS) links
+	XEN_TARGET_ARCH="$(XEN_TARGET_ARCH)" CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) -C $@
+
+######
 # Grub
 ######
 
@@ -362,6 +462,14 @@ caml-stubdom: mini-os-$(XEN_TARGET_ARCH)-caml lwip-$(XEN_TARGET_ARCH) libxc cros
 c-stubdom: mini-os-$(XEN_TARGET_ARCH)-c lwip-$(XEN_TARGET_ARCH) libxc c
 	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
 
+.PHONY: vtpm-stubdom
+vtpm-stubdom: mini-os-$(XEN_TARGET_ARCH)-vtpm vtpm
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/vtpm/minios.cfg" $(MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS="$(CURDIR)/vtpm/vtpm.a" APP_LDLIBS="-ltpm -ltpm_crypto -lgmp"
+
+.PHONY: vtpmmgrdom
+vtpmmgrdom: mini-os-$(XEN_TARGET_ARCH)-vtpmmgr vtpmmgr
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/vtpmmgr/minios.cfg" $(MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS="$(CURDIR)/vtpmmgr/vtpmmgr.a" APP_LDLIBS="-lm"
+
 .PHONY: pv-grub
 pv-grub: mini-os-$(XEN_TARGET_ARCH)-grub libxc grub
 	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
@@ -375,7 +483,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore
+install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
 else
 install: genpath
 endif
@@ -399,6 +507,14 @@ install-xenstore: xenstore-stubdom
 	$(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
 
+install-vtpm: vtpm-stubdom
+	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
+	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpm/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpm-stubdom.gz"
+
+install-vtpmmgr: vtpm-stubdom
+	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
+	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpmmgr/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpmmgrdom.gz"
+
 #######
 # clean
 #######
@@ -411,8 +527,12 @@ clean:
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-caml
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-grub
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-xenstore
+	rm -fr mini-os-$(XEN_TARGET_ARCH)-vtpm
+	rm -fr mini-os-$(XEN_TARGET_ARCH)-vtpmmgr
 	$(MAKE) DESTDIR= -C caml clean
 	$(MAKE) DESTDIR= -C c clean
+	$(MAKE) -C vtpm clean
+	$(MAKE) -C vtpmmgr clean
 	rm -fr grub-$(XEN_TARGET_ARCH)
 	rm -f $(STUBDOMPATH)
 	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH) clean
@@ -426,6 +546,10 @@ crossclean: clean
 	rm -fr newlib-$(XEN_TARGET_ARCH)
 	rm -fr zlib-$(XEN_TARGET_ARCH) pciutils-$(XEN_TARGET_ARCH)
 	rm -fr libxc-$(XEN_TARGET_ARCH) ioemu
+	rm -fr gmp-$(XEN_TARGET_ARCH)
+	rm -fr polarssl-$(XEN_TARGET_ARCH)
+	rm -fr openssl-$(XEN_TARGET_ARCH)
+	rm -fr tpm_emulator-$(XEN_TARGET_ARCH)
 	rm -f mk-headers-$(XEN_TARGET_ARCH)
 	rm -fr ocaml-$(XEN_TARGET_ARCH)
 	rm -fr include
@@ -434,6 +558,10 @@ crossclean: clean
 .PHONY: patchclean
 patchclean: crossclean
 	rm -fr newlib-$(NEWLIB_VERSION)
+	rm -fr gmp-$(XEN_TARGET_ARCH)
+	rm -fr polarssl-$(XEN_TARGET_ARCH)
+	rm -fr openssl-$(XEN_TARGET_ARCH)
+	rm -fr tpm_emulator-$(XEN_TARGET_ARCH)
 	rm -fr lwip-$(XEN_TARGET_ARCH)
 	rm -fr grub-upstream
 
@@ -442,10 +570,14 @@ patchclean: crossclean
 downloadclean: patchclean
 	rm -f newlib-$(NEWLIB_VERSION).tar.gz
 	rm -f zlib-$(ZLIB_VERSION).tar.gz
+	rm -f gmp-$(GMP_VERSION).tar.gz
+	rm -f tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	rm -f pciutils-$(LIBPCI_VERSION).tar.bz2
 	rm -f grub-$(GRUB_VERSION).tar.gz
 	rm -f lwip-$(LWIP_VERSION).tar.gz
 	rm -f ocaml-$(OCAML_VERSION).tar.gz
+	rm -f polarssl-$(POLARSSL_VERSION)-gpl.tgz
+	rm -f openssl-$(POLARSSL_VERSION)-gpl.tgz
 
 .PHONY: distclean
 distclean: downloadclean
diff --git a/stubdom/polarssl.patch b/stubdom/polarssl.patch
new file mode 100644
index 0000000..d387d4e
--- /dev/null
+++ b/stubdom/polarssl.patch
@@ -0,0 +1,64 @@
+diff -Naur polarssl-1.1.4/include/polarssl/config.h polarssl-x86_64/include/polarssl/config.h
+--- polarssl-1.1.4/include/polarssl/config.h	2011-12-22 05:06:27.000000000 -0500
++++ polarssl-x86_64/include/polarssl/config.h	2012-10-30 17:18:07.567001000 -0400
+@@ -164,8 +164,8 @@
+  * application.
+  *
+  * Uncomment this macro to prevent loading of default entropy functions.
+-#define POLARSSL_NO_DEFAULT_ENTROPY_SOURCES
+  */
++#define POLARSSL_NO_DEFAULT_ENTROPY_SOURCES
+
+ /**
+  * \def POLARSSL_NO_PLATFORM_ENTROPY
+@@ -175,8 +175,8 @@
+  * standards like the /dev/urandom or Windows CryptoAPI.
+  *
+  * Uncomment this macro to disable the built-in platform entropy functions.
+-#define POLARSSL_NO_PLATFORM_ENTROPY
+  */
++#define POLARSSL_NO_PLATFORM_ENTROPY
+
+ /**
+  * \def POLARSSL_PKCS1_V21
+@@ -426,8 +426,8 @@
+  * Requires: POLARSSL_TIMING_C
+  *
+  * This module enables the HAVEGE random number generator.
+- */
+ #define POLARSSL_HAVEGE_C
++ */
+
+ /**
+  * \def POLARSSL_MD_C
+@@ -490,7 +490,7 @@
+  *
+  * This module provides TCP/IP networking routines.
+  */
+-#define POLARSSL_NET_C
++//#define POLARSSL_NET_C
+
+ /**
+  * \def POLARSSL_PADLOCK_C
+@@ -644,8 +644,8 @@
+  * Caller:  library/havege.c
+  *
+  * This module is used by the HAVEGE random number generator.
+- */
+ #define POLARSSL_TIMING_C
++ */
+
+ /**
+  * \def POLARSSL_VERSION_C
+diff -Naur polarssl-1.1.4/library/bignum.c polarssl-x86_64/library/bignum.c
+--- polarssl-1.1.4/library/bignum.c	2012-04-29 16:15:55.000000000 -0400
++++ polarssl-x86_64/library/bignum.c	2012-10-30 17:21:52.135000999 -0400
+@@ -1101,7 +1101,7 @@
+             Z.p[i - t - 1] = ~0;
+         else
+         {
+-#if defined(POLARSSL_HAVE_LONGLONG)
++#if 0 //defined(POLARSSL_HAVE_LONGLONG)
+             t_udbl r;
+
+             r  = (t_udbl) X.p[i] << biL;
diff --git a/stubdom/tpmemu-0.7.4.patch b/stubdom/tpmemu-0.7.4.patch
new file mode 100644
index 0000000..b84eff1
--- /dev/null
+++ b/stubdom/tpmemu-0.7.4.patch
@@ -0,0 +1,12 @@
+diff -Naur tpm_emulator-x86_64-back/tpm/tpm_emulator_extern.c tpm_emulator-x86_64/tpm/tpm_emulator_extern.c
+--- tpm_emulator-x86_64-back/tpm/tpm_emulator_extern.c	2012-04-27 10:55:46.581963398 -0400
++++ tpm_emulator-x86_64/tpm/tpm_emulator_extern.c	2012-04-27 10:56:02.193034152 -0400
+@@ -249,7 +249,7 @@
+ #else /* TPM_NO_EXTERN */
+
+ int (*tpm_extern_init)(void)                                      = NULL;
+-int (*tpm_extern_release)(void)                                   = NULL;
++void (*tpm_extern_release)(void)                                   = NULL;
+ void* (*tpm_malloc)(size_t size)                                  = NULL;
+ void (*tpm_free)(/*const*/ void *ptr)                             = NULL;
+ void (*tpm_log)(int priority, const char *fmt, ...)               = NULL;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwN-00027I-AZ; Tue, 04 Dec 2012 18:09:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwL-00026x-6h
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:53 +0000
Received: from [85.158.139.83:36813] by server-8.bemta-5.messagelabs.com id
	8E/1F-06050-07C3EB05; Tue, 04 Dec 2012 18:09:52 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354644590!27841939!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28448 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f34_a823575e_fb4f_4887_a0e3_2d557c48b89f;
	Tue, 04 Dec 2012 13:09:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:27 -0500
Message-Id: <1354644571-3202-5-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 5/8] Add cmake dependency to README
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 README |    1 +
 1 file changed, 1 insertion(+)

diff --git a/README b/README
index 88300df..2f7e04c 100644
--- a/README
+++ b/README
@@ -61,6 +61,7 @@ provided by your OS distributor:
     * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
     * ACPI ASL compiler (iasl)
     * markdown
+    * cmake (if building vtpm stub domains)
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwO-00027r-Oy; Tue, 04 Dec 2012 18:09:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwM-000278-9I
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:54 +0000
Received: from [85.158.143.99:28590] by server-2.bemta-4.messagelabs.com id
	DC/0A-28922-17C3EB05; Tue, 04 Dec 2012 18:09:53 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354644589!16846195!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_23,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28591 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f26_3d4183fa_355b_4005_9436_58d12b30627d;
	Tue, 04 Dec 2012 13:09:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:26 -0500
Message-Id: <1354644571-3202-4-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 4/8] Add vtpm documentation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See the files included in this patch for details

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 docs/misc/vtpm.txt     |  357 ++++++++++++++++++++++++++++++++++--------------
 stubdom/vtpm/README    |   75 ++++++++++
 stubdom/vtpmmgr/README |   75 ++++++++++
 3 files changed, 401 insertions(+), 106 deletions(-)
 create mode 100644 stubdom/vtpm/README
 create mode 100644 stubdom/vtpmmgr/README

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index ad37fe8..fc6029a 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,152 +1,297 @@
-Copyright: IBM Corporation (C), Intel Corporation
-29 June 2006
-Authors: Stefan Berger <stefanb@us.ibm.com> (IBM), 
-         Employees of Intel Corp
-
-This document gives a short introduction to the virtual TPM support
-in XEN and goes as far as connecting a user domain to a virtual TPM
-instance and doing a short test to verify success. It is assumed
-that the user is fairly familiar with compiling and installing XEN
-and Linux on a machine. 
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the virtual Trusted Platform Module (vTPM) subsystem
+for Xen. The reader is assumed to have familiarity with building and installing
+Xen, Linux, and a basic understanding of the TPM and vTPM concepts.
+
+------------------------------
+INTRODUCTION
+------------------------------
+The goal of this work is to provide a TPM functionality to a virtual guest
+operating system (a DomU).  This allows programs to interact with a TPM in a
+virtual system the same way they interact with a TPM on the physical system.
+Each guest gets its own unique, emulated, software TPM.  However, each of the
+vTPM's secrets (Keys, NVRAM, etc) are managed by a vTPM Manager domain, which
+seals the secrets to the Physical TPM.  Thus, the vTPM subsystem extends the
+chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
+major component of vTPM is implemented as a separate domain, providing secure
+separation guaranteed by the hypervisor. The vTPM domains are implemented in
+mini-os to reduce memory and processor overhead.
+
+This mini-os vTPM subsystem was built on top of the previous vTPM
+work done by IBM and Intel corporation.
  
-Production Prerequisites: An x86-based machine machine with a
-Linux-supported TPM on the motherboard (NSC, Atmel, Infineon, TPM V1.2).
-Development Prerequisites: An emulator for TESTING ONLY is provided
-
+------------------------------
+DESIGN OVERVIEW
+------------------------------
+
+The architecture of vTPM is described below:
+
++------------------+
+|    Linux DomU    | ...
+|       |  ^       |
+|       v  |       |
+|   xen-tpmfront   |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|  vtpm-stubdom    | ...
+|       |  ^       |
+|       v  |       |
+| mini-os/tpmfront |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|   vtpmmgrdom     |
+|       |  ^       |
+|       v  |       |
+| mini-os/tpm_tis  |
++------------------+
+        |  ^
+        v  |
++------------------+
+|   Hardware TPM   |
++------------------+
+ * Linux DomU: The Linux based guest that wants to use a vTPM. There may be
+               more than one of these.
+
+ * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This driver
+                    provides vTPM access to a para-virtualized Linux based DomU.
+
+ * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend driver
+                    connects to this backend driver to facilitate
+                    communications between the Linux DomU and its vTPM. This
+                    driver is also used by vtpmmgrdom to communicate with
+                    vtpm-stubdom.
+
+ * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is a
+                 one-to-one mapping between running vtpm-stubdom instances and
+                 logical vtpms on the system. The vTPM Platform Configuration
+                 Registers (PCRs) are all initialized to zero.
+
+ * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
+                     vtpm-stubdom uses this driver to communicate with
+                     vtpmmgrdom. This driver could also be used separately to
+                     implement a mini-os domain that wishes to use a vTPM of
+                     its own.
+
+ * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
+               There is only one vTPM manager and it should be running during
+               the entire lifetime of the machine.  This domain regulates
+               access to the physical TPM on the system and secures the
+               persistent state of each vTPM.
+
+ * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification (TIS)
+                    driver. This driver is used by vtpmmgrdom to talk directly to
+                    the hardware TPM. Communication is facilitated by mapping
+                    hardware memory pages into vtpmmgrdom.
+
+ * Hardware TPM: The physical TPM that is soldered onto the motherboard.
+
+------------------------------
+INSTALLATION
+------------------------------
+
+Prerequisites:
+--------------
+You must have an x86 machine with a TPM on the motherboard.
+The only software requirement for compiling vTPM is cmake.
+You must use libxl to manage domains with vTPMs. 'xm' is
+deprecated and does not support vTPM.
 
 Compiling the XEN tree:
 -----------------------
 
-Compile the XEN tree as usual after the following lines set in the
-linux-2.6.??-xen/.config file:
+Compile and install the XEN tree as usual. Be sure to build and install
+the stubdom tree.
+
+Compiling the LINUX dom0 kernel:
+--------------------------------
 
-CONFIG_XEN_TPMDEV_BACKEND=m
+The Linux dom0 kernel has no special prerequisites.
 
-CONFIG_TCG_TPM=m
-CONFIG_TCG_TIS=m      (supported after 2.6.17-rc4)
-CONFIG_TCG_NSC=m
-CONFIG_TCG_ATMEL=m
-CONFIG_TCG_INFINEON=m
-CONFIG_TCG_XEN=m
-<possible other TPM drivers supported by Linux>
+Compiling the LINUX domU kernel:
+--------------------------------
 
-If the frontend driver needs to be compiled into the user domain
-kernel, then the following two lines should be changed.
+The domU kernel used by domains with vtpms must
+include the xen-tpmfront.ko driver. It can be built
+directly into the kernel or as a module.
 
 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y
 
+------------------------------
+VTPM MANAGER SETUP
+------------------------------
+
+Manager disk image setup:
+-------------------------
+
+The vTPM Manager requires a disk image to store its
+encrypted data. The image does not require a filesystem
+and can live anywhere on the host disk. The image does not need
+to be large. 8 to 16 MB should be sufficient.
+
+# dd if=/dev/zero of=/var/vtpmmgrdom.img bs=16M count=1
+
+Manager config file:
+--------------------
+
+The vTPM Manager domain (vtpmmgrdom) must be started like
+any other Xen virtual machine and requires a config file.
+The manager requires a disk image for storage and permission
+to access the hardware memory pages for the TPM. An
+example configuration looks like the following.
+
+kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
+memory=16
+disk=["file:/var/vtpmmgrdom.img,hda,w"]
+name="vtpmmgrdom"
+iomem=["fed40,5"]
+
+The iomem line tells xl to allow access to the TPM
+IO memory pages, which are 5 pages that start at
+0xfed40000.
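The page-frame value in the iomem line can be derived from the physical address with shell arithmetic. This is just a sanity check of the arithmetic, not part of any Xen tool:

```shell
# The iomem= option takes a page frame number: the physical address
# shifted right by 12 bits (4 KiB pages).  0xfed40000 >> 12 == 0xfed40,
# which with a length of 5 pages gives the iomem line shown above.
pfn=$(printf '%x' $(( 0xfed40000 >> 12 )))
echo "iomem=[\"$pfn,5\"]"
# prints: iomem=["fed40,5"]
```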
+
+Starting and stopping the manager:
+----------------------------------
+
+The vTPM manager should be started at boot; you may wish to
+create an init script to do this.
+
+# xl create -c vtpmmgrdom.cfg
+
+Once initialization is complete you should see the following:
+INFO[VTPM]: Waiting for commands from vTPM's:
+
+To shut down the manager you must destroy it. To avoid data corruption,
+only destroy the manager when you see the above "Waiting for commands"
+message. This ensures the disk is in a consistent state.
+
+# xl destroy vtpmmgrdom
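An init-style wrapper around the start and destroy steps above might look like the following sketch. The config path is an example, and XL defaults to "echo" so the sketch is a dry run; a real system would use XL=xl:

```shell
#!/bin/sh
# Hypothetical init-style wrapper for the vTPM manager domain.
# XL=echo makes this a dry run; set XL=xl to actually manage domains.
XL="${XL:-echo}"
MGR_CFG="${MGR_CFG:-/etc/xen/vtpmmgrdom.cfg}"

start_manager() {
    "$XL" create "$MGR_CFG"
}

stop_manager() {
    # Only destroy once the manager has printed its "Waiting for
    # commands" message, so that the disk is in a consistent state.
    "$XL" destroy vtpmmgrdom
}

case "${1:-start}" in
    start) start_manager ;;
    stop)  stop_manager ;;
esac
```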
+
+------------------------------
+VTPM AND LINUX PVM SETUP
+------------------------------
 
-You must also enable the virtual TPM to be built:
+In the following examples we will assume we have a Linux
+guest named "domu" with its associated configuration
+located at /home/user/domu. Its vTPM will be named
+domu-vtpm.
 
-In Config.mk in the Xen root directory set the line
+vTPM disk image setup:
+----------------------
 
-VTPM_TOOLS ?= y
+The vTPM requires a disk image to store its persistent
+data. The image does not require a filesystem. The image
+does not need to be large. 8 MB should be sufficient.
 
-and in
+# dd if=/dev/zero of=/home/user/domu/vtpm.img bs=8M count=1
 
-tools/vtpm/Rules.mk set the line
+vTPM config file:
+-----------------
 
-BUILD_EMULATOR = y
+The vTPM domain requires a configuration file like
+any other domain. The vTPM requires a disk image for
+storage and a TPM frontend driver to communicate
+with the manager. An example configuration is given:
 
-Now build the Xen sources from Xen's root directory:
+kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
+memory=8
+disk=["file:/home/user/domu/vtpm.img,hda,w"]
+name="domu-vtpm"
+vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
 
-make install
+The vtpm= line sets up the tpm frontend driver. The backend must be set
+to vtpmmgrdom. You are required to generate a uuid for this vtpm.
+You can use the uuidgen unix program or some other method to create a
+uuid. The uuid uniquely identifies this vtpm to the manager.
 
+If you wish to clear the vTPM data you can either recreate the
+disk image or change the uuid.
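For example, a fresh uuid and the matching config line can be produced like this (a sketch; the fallback path assumes a Linux host where /proc/sys/kernel/random/uuid exists):

```shell
#!/bin/sh
# Generate a uuid for a new vtpm and print the matching config line.
# Uses uuidgen when available, otherwise the Linux kernel's uuid node.
new_vtpm_uuid() {
    if command -v uuidgen >/dev/null 2>&1; then
        uuidgen
    else
        cat /proc/sys/kernel/random/uuid
    fi
}

uuid=$(new_vtpm_uuid)
echo "vtpm=[\"backend=vtpmmgrdom,uuid=$uuid\"]"
```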
 
-Also build the initial RAM disk if necessary.
+Linux Guest config file:
+------------------------
 
-Reboot the machine with the created Xen kernel.
+The Linux guest config file needs to be modified to include
+the Linux tpmfront driver. Add the following line:
 
-Note: If you do not want any TPM-related code compiled into your
-kernel or built as module then comment all the above lines like
-this example:
-# CONFIG_TCG_TPM is not set
+vtpm=["backend=domu-vtpm"]
 
+Currently only paravirtualized guests are supported.
 
-Modifying VM Configuration files:
----------------------------------
+Launching and shut down:
+------------------------
 
-VM configuration files need to be adapted to make a TPM instance
-available to a user domain. The following VM configuration file is
-an example of how a user domain can be configured to have a TPM
-available. It works similar to making a network interface
-available to a domain.
+To launch a Linux guest with a vTPM we first have to start the vTPM domain.
 
-kernel = "/boot/vmlinuz-2.6.x"
-ramdisk = "/xen/initrd_domU/U1_ramdisk.img"
-memory = 32
-name = "TPMUserDomain0"
-vtpm = ['instance=1,backend=0']
-root = "/dev/ram0 console=tty ro"
-vif = ['backend=0']
+# xl create -c /home/user/domu/vtpm.cfg
 
-In the above configuration file the line 'vtpm = ...' provides
-information about the domain where the virtual TPM is running and
-where the TPM backend has been compiled into - this has to be 
-domain 0  at the moment - and which TPM instance the user domain
-is supposed to talk to. Note that each running VM must use a 
-different instance and that using instance 0 is NOT allowed. The
-instance parameter is taken as the desired instance number, but
-the actual instance number that is assigned to the virtual machine
-can be different. This is the case if for example that particular
-instance is already used by another virtual machine. The association
-of which TPM instance number is used by which virtual machine is
-kept in the file /var/vtpm/vtpm.db. Associations are maintained by
-a xend-internal vTPM UUID and vTPM instance number.
+After initialization is complete, you should see the following:
+Info: Waiting for frontend domain to connect..
 
-Note: If you do not want TPM functionality for your user domain simply
-leave out the 'vtpm' line in the configuration file.
+Next, launch the Linux guest
 
+# xl create -c /home/user/domu/domu.cfg
 
-Running the TPM:
-----------------
+If xen-tpmfront was compiled as a module, be sure to load it
+in the guest.
 
-To run the vTPM, the device /dev/vtpm must be available.
-Verify that 'ls -l /dev/vtpm' shows the following output:
+# modprobe xen-tpmfront
 
-crw-------  1 root root 10, 225 Aug 11 06:58 /dev/vtpm
+After the Linux domain boots and the xen-tpmfront driver is loaded,
+you should see the following on the vtpm console:
 
-If it is not available, run the following command as 'root'.
-mknod /dev/vtpm c 10 225
+Info: VTPM attached to Frontend X/Y
 
-Make sure that the vTPM is running in domain 0. To do this run the
-following:
+If you have trousers and tpm_tools installed on the guest, you can test the
+vtpm.
 
-modprobe tpmbk
+On guest:
+# tcsd (if tcsd is not running already)
+# tpm_version
 
-/usr/bin/vtpm_managerd
+The version command should return the following:
+  TPM 1.2 Version Info:
+  Chip Version:        1.2.0.7
+  Spec Level:          2
+  Errata Revision:     1
+  TPM Vendor ID:       ETHZ
+  TPM Version:         01010000
+  Manufacturer Info:   4554485a
 
-Start a user domain using the 'xm create' command. Once you are in the
-shell of the user domain, you should be able to do the following as
-user 'root':
+You should also see the command being sent to the vtpm console as well
+as the vtpm saving its state. You should see the vtpm key being
+encrypted and stored on the vtpmmgrdom console.
 
-Insert the TPM frontend into the kernel if it has been compiled as a
-kernel module.
+To shut down the guest and its vtpm, you just have to shut down the guest
+normally. As soon as the guest vm disconnects, the vtpm will shut itself
+down automatically.
 
-> modprobe tpm_xenu
+On guest:
+# shutdown -h now
 
-Check the status of the TPM
+You may wish to write a script to start your vtpm and guest together.
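Such a combined launcher might look like the sketch below. The paths and names follow the running example; XL defaults to a dry-run "echo" and would be xl on a real system:

```shell
#!/bin/sh
# Hypothetical combined launcher: start the vtpm domain first, since
# the guest's tpmfront connects to it at boot, then start the guest.
# XL=echo makes this a dry run; set XL=xl for real use.
XL="${XL:-echo}"
VTPM_CFG="${VTPM_CFG:-/home/user/domu/vtpm.cfg}"
GUEST_CFG="${GUEST_CFG:-/home/user/domu/domu.cfg}"

start_with_vtpm() {
    "$XL" create "$VTPM_CFG" || return 1
    "$XL" create "$GUEST_CFG"
    # No explicit vtpm shutdown step is needed: the vtpm exits on its
    # own once the guest disconnects.
}

start_with_vtpm
```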
 
-> cd /sys/devices/xen/vtpm-0
-> ls
-[...]  cancel  caps   pcrs    pubek   [...]
-> cat pcrs
-PCR-00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-01: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-02: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-03: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-04: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-05: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-06: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-07: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-08: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-[...]
+------------------------------
+MORE INFORMATION
+------------------------------
 
-At this point the user domain has been successfully connected to its
-virtual TPM instance.
+See stubdom/vtpmmgr/README for more details about how
+the manager domain works, how to use it, and its command line
+parameters.
 
-For further information please read the documentation in 
-tools/vtpm_manager/README and tools/vtpm/README
+See stubdom/vtpm/README for more specifics about how vtpm-stubdom
+operates and the command line options it accepts.
 
-Stefan Berger and Employees of the Intel Corp
diff --git a/stubdom/vtpm/README b/stubdom/vtpm/README
new file mode 100644
index 0000000..11bdacb
--- /dev/null
+++ b/stubdom/vtpm/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpm-stubdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpm-stubdom is a mini-OS domain that emulates a TPM for the guest OS to
+use. It is a small wrapper around the Berlios TPM emulator
+version 0.7.4. Commands are passed from the Linux guest via the
+mini-os TPM backend driver. vTPM data is encrypted and stored via a disk image
+provided to the virtual machine. The key used to encrypt the data along
+with a hash of the vTPM's data is sent to the vTPM manager for secure storage
+and later retrieval.  The vTPM domain communicates with the manager using a
+mini-os tpm front/back device pair.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+loglevel=<LOG>: Controls the amount of logging printed to the console.
+	The possible values for <LOG> are:
+	 error
+	 info (default)
+	 debug
+
+clear: Start the Berlios emulator in "clear" mode. (default)
+
+save: Start the Berlios emulator in "save" mode.
+
+deactivated: Start the Berlios emulator in "deactivated" mode.
+	See the Berlios TPM emulator documentation for details
+	about the startup mode. For all normal use, always use clear
+	which is the default. You should not need to specify any of these.
+
+maintcmds=<1|0>: Enable or disable the TPM maintenance commands.
+	These commands are used by tpm manufacturers and thus
+	open a security hole. They are disabled by default.
+
+hwinitpcrs=<PCRSPEC>: Initialize the virtual Platform Configuration Registers
+	(PCRs) with PCR values from the hardware TPM. Each pcr specified by
+	<PCRSPEC> will be initialized once at startup with the value of that
+	same PCR in the hardware TPM. By default all PCRs are zero initialized.
+	Valid values of <PCRSPEC> are:
+	 all: copy all pcrs
+	 none: copy no pcrs (default)
+	 <N>: copy pcr n
+	 <X-Y>: copy pcrs x to y (inclusive)
+
+	These can also be combined by comma separation, for example:
+	 hwinitpcrs=5,12-16
+	will copy pcrs 5, 12, 13, 14, 15, and 16.
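To illustrate the <PCRSPEC> grammar, the sketch below expands a spec into the individual PCR indices it names. It is not the stubdom's actual parser, and it assumes the TPM 1.2 complement of 24 PCRs (0-23) for "all":

```shell
#!/bin/sh
# Illustrative expansion of a <PCRSPEC> ("all", "none", "N", "X-Y",
# combinable with commas) into one PCR index per line.
expand_pcrspec() {
    spec="$1"
    case "$spec" in
        all)  spec="0-23" ;;
        none) return 0 ;;
    esac
    # Split on commas, then expand X-Y ranges inclusively.
    for part in $(printf '%s' "$spec" | tr ',' ' '); do
        case "$part" in
            *-*)
                i=${part%-*}
                hi=${part#*-}
                while [ "$i" -le "$hi" ]; do
                    echo "$i"
                    i=$((i + 1))
                done
                ;;
            *) echo "$part" ;;
        esac
    done
}

expand_pcrspec "5,12-16"
# prints 5, then 12 through 16, one index per line
```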
+
+------------------------------
+REFERENCES
+------------------------------
+
+Berlios TPM Emulator:
+http://tpm-emulator.berlios.de/
diff --git a/stubdom/vtpmmgr/README b/stubdom/vtpmmgr/README
new file mode 100644
index 0000000..09f3958
--- /dev/null
+++ b/stubdom/vtpmmgr/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpmmgrdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpmmgrdom implements a vTPM manager which has two major functions:
+
+ - Securely store encryption keys for each of the vTPMs
+ - Regulate access to the hardware TPM for the entire system
+
+The manager accepts commands from the vtpm-stubdom domains via the mini-os
+TPM backend driver. The vTPM manager communicates directly with hardware TPM
+using the mini-os tpm_tis driver.
+
+
+When the manager starts for the first time it will check if the TPM
+has an owner. If the TPM is unowned, it will attempt to take ownership
+with the supplied owner_auth (see below) and then create a TPM
+storage key which will be used to secure vTPM key data. Currently the
+manager only binds vTPM keys to the disk. In the future support
+for sealing to PCRs should be added.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+owner_auth=<AUTHSPEC>: Set the owner auth of the TPM. The default
+	is the well known owner auth of all ones.
+
+srk_auth=<AUTHSPEC>: Set the SRK auth for the TPM. The default is
+	the well known srk auth of all zeroes.
+	The possible values of <AUTHSPEC> are:
+	 well-known: Use the well known auth (default)
+	 random: Randomly generate an auth
+	 hash: <HASH>: Use the given 40-character ASCII hex string
+	 text: <STR>: Use the SHA-1 hash of <STR>.
+
+tpmdriver=<DRIVER>: Which driver to use to talk to the hardware TPM.
+	Don't change this unless you know what you're doing.
+	The possible values of <DRIVER> are:
+	 tpm_tis: Use the tpm_tis driver to talk directly to the TPM.
+		The domain must have access to TPM IO memory.  (default)
+	 tpmfront: Use tpmfront to talk to the TPM. The domain must have
+		a tpmfront device setup to talk to another domain
+		which provides access to the TPM.
+
+The following options only apply to the tpm_tis driver:
+
+tpmiomem=<ADDR>: The base address of the hardware memory pages of the
+	TPM (default 0xfed40000).
+
+tpmirq=<IRQ>: The irq of the hardware TPM if using interrupts. A value of
+	"probe" can be set to probe for the irq. A value of 0
+	disables interrupts and uses polling (default 0).
+
+tpmlocality=<LOC>: Attempt to use locality <LOC> of the hardware TPM.
+	(default 0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

index ad37fe8..fc6029a 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,152 +1,297 @@
-Copyright: IBM Corporation (C), Intel Corporation
-29 June 2006
-Authors: Stefan Berger <stefanb@us.ibm.com> (IBM), 
-         Employees of Intel Corp
-
-This document gives a short introduction to the virtual TPM support
-in XEN and goes as far as connecting a user domain to a virtual TPM
-instance and doing a short test to verify success. It is assumed
-that the user is fairly familiar with compiling and installing XEN
-and Linux on a machine. 
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the virtual Trusted Platform Module (vTPM) subsystem
+for Xen. The reader is assumed to be familiar with building and installing
+Xen and Linux, and to have a basic understanding of TPM and vTPM concepts.
+
+------------------------------
+INTRODUCTION
+------------------------------
+The goal of this work is to provide TPM functionality to a virtual guest
+operating system (a DomU).  This allows programs to interact with a TPM in a
+virtual system the same way they interact with a TPM on the physical system.
+Each guest gets its own unique, emulated, software TPM. However, each
+vTPM's secrets (keys, NVRAM, etc.) are managed by a vTPM Manager domain, which
+seals the secrets to the physical TPM. Thus, the vTPM subsystem extends the
+chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
+major component of vTPM is implemented as a separate domain, providing secure
+separation guaranteed by the hypervisor. The vTPM domains are implemented in
+mini-os to reduce memory and processor overhead.
+
+This mini-os vTPM subsystem was built on top of the previous vTPM
+work done by IBM and Intel Corporation.
  
-Production Prerequisites: An x86-based machine machine with a
-Linux-supported TPM on the motherboard (NSC, Atmel, Infineon, TPM V1.2).
-Development Prerequisites: An emulator for TESTING ONLY is provided
-
+------------------------------
+DESIGN OVERVIEW
+------------------------------
+
+The architecture of vTPM is described below:
+
++------------------+
+|    Linux DomU    | ...
+|       |  ^       |
+|       v  |       |
+|   xen-tpmfront   |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|  vtpm-stubdom    | ...
+|       |  ^       |
+|       v  |       |
+| mini-os/tpmfront |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|   vtpmmgrdom     |
+|       |  ^       |
+|       v  |       |
+| mini-os/tpm_tis  |
++------------------+
+        |  ^
+        v  |
++------------------+
+|   Hardware TPM   |
++------------------+
+ * Linux DomU: The Linux based guest that wants to use a vTPM. There may be
+               more than one of these.
+
+ * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This driver
+                    provides vTPM access to a para-virtualized Linux based DomU.
+
+ * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend driver
+                    connects to this backend driver to facilitate
+                    communications between the Linux DomU and its vTPM. This
+                    driver is also used by vtpmmgrdom to communicate with
+                    vtpm-stubdom.
+
+ * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is a
+                 one to one mapping between running vtpm-stubdom instances and
+                 logical vtpms on the system. The vTPM Platform Configuration
+                 Registers (PCRs) are all initialized to zero.
+
+ * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
+                     vtpm-stubdom uses this driver to communicate with
+                     vtpmmgrdom. This driver could also be used separately to
+                     implement a mini-os domain that wishes to use a vTPM of
+                     its own.
+
+ * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
+               There is only one vTPM manager and it should be running during
+               the entire lifetime of the machine.  This domain regulates
+               access to the physical TPM on the system and secures the
+               persistent state of each vTPM.
+
+ * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification (TIS)
+                    driver. This driver is used by vtpmmgrdom to talk directly
+                    to the hardware TPM. Communication is facilitated by mapping
+                    hardware memory pages into vtpmmgrdom.
+
+ * Hardware TPM: The physical TPM that is soldered onto the motherboard.
+
+------------------------------
+INSTALLATION
+------------------------------
+
+Prerequisites:
+--------------
+You must have an x86 machine with a TPM on the motherboard.
+The only software requirement for compiling vTPM is cmake.
+You must use libxl to manage domains with vTPMs. 'xm' is
+deprecated and does not support vTPM.
 
 Compiling the XEN tree:
 -----------------------
 
-Compile the XEN tree as usual after the following lines set in the
-linux-2.6.??-xen/.config file:
+Compile and install the XEN tree as usual. Be sure to build and install
+the stubdom tree.
+
+Compiling the LINUX dom0 kernel:
+--------------------------------
 
-CONFIG_XEN_TPMDEV_BACKEND=m
+The Linux dom0 kernel has no special prerequisites.
 
-CONFIG_TCG_TPM=m
-CONFIG_TCG_TIS=m      (supported after 2.6.17-rc4)
-CONFIG_TCG_NSC=m
-CONFIG_TCG_ATMEL=m
-CONFIG_TCG_INFINEON=m
-CONFIG_TCG_XEN=m
-<possible other TPM drivers supported by Linux>
+Compiling the LINUX domU kernel:
+--------------------------------
 
-If the frontend driver needs to be compiled into the user domain
-kernel, then the following two lines should be changed.
+The domU kernel used by domains with vTPMs must
+include the xen-tpmfront.ko driver. It can be built
+directly into the kernel or as a module.
 
 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y
 
+------------------------------
+VTPM MANAGER SETUP
+------------------------------
+
+Manager disk image setup:
+-------------------------
+
+The vTPM Manager requires a disk image to store its
+encrypted data. The image does not require a filesystem
+and can live anywhere on the host disk. The image does not need
+to be large. 8 to 16 MB should be sufficient.
+
+# dd if=/dev/zero of=/var/vtpmmgrdom.img bs=16M count=1
+
+Manager config file:
+--------------------
+
+The vTPM Manager domain (vtpmmgrdom) must be started like
+any other Xen virtual machine and requires a config file.
+The manager requires a disk image for storage and permission
+to access the hardware memory pages for the TPM. An
+example configuration looks like the following.
+
+kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
+memory=16
+disk=["file:/var/vtpmmgrdom.img,hda,w"]
+name="vtpmmgrdom"
+iomem=["fed40,5"]
+
+The iomem line tells xl to allow access to the TPM
+IO memory pages, which are 5 pages that start at
+0xfed40000.
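The number before the comma in the iomem list is a page frame number, i.e.
the physical address shifted right by 12 bits (4 KiB pages). As a quick
sanity check of that arithmetic:

```shell
# Derive the iomem= entry for the TPM TIS region at 0xfed40000:
# page frame number = physical address >> 12 (4 KiB pages).
printf 'iomem=["%x,5"]\n' $(( 0xfed40000 >> 12 ))
# prints: iomem=["fed40,5"]
```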
+
+Starting and stopping the manager:
+----------------------------------
+
+The vTPM manager should be started at boot; you may wish to
+create an init script to do this.
+
+# xl create -c vtpmmgrdom.cfg
+
+Once initialization is complete you should see the following:
+INFO[VTPM]: Waiting for commands from vTPM's:
+
+To shut down the manager you must destroy it. To avoid data corruption,
+only destroy the manager when you see the above "Waiting for commands"
+message. This ensures the disk is in a consistent state.
+
+# xl destroy vtpmmgrdom
+
+------------------------------
+VTPM AND LINUX PVM SETUP
+------------------------------
 
-You must also enable the virtual TPM to be built:
+In the following examples we will assume we have a Linux
+guest named "domu" with its associated configuration
+located at /home/user/domu. Its vTPM will be named
+domu-vtpm.
 
-In Config.mk in the Xen root directory set the line
+vTPM disk image setup:
+----------------------
 
-VTPM_TOOLS ?= y
+The vTPM requires a disk image to store its persistent
+data. The image does not require a filesystem. The image
+does not need to be large. 8 MB should be sufficient.
 
-and in
+# dd if=/dev/zero of=/home/user/domu/vtpm.img bs=8M count=1
 
-tools/vtpm/Rules.mk set the line
+vTPM config file:
+-----------------
 
-BUILD_EMULATOR = y
+The vTPM domain requires a configuration file like
+any other domain. The vTPM requires a disk image for
+storage and a TPM frontend driver to communicate
+with the manager. An example configuration is given:
 
-Now build the Xen sources from Xen's root directory:
+kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
+memory=8
+disk=["file:/home/user/domu/vtpm.img,hda,w"]
+name="domu-vtpm"
+vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
 
-make install
+The vtpm= line sets up the tpm frontend driver. The backend must be set
+to vtpmmgrdom. You are required to generate a uuid for this vtpm.
+You can use the uuidgen unix program or some other method to create a
+uuid. The uuid uniquely identifies this vtpm to the manager.
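For example, one way to generate a uuid and print the matching config line
(uuidgen ships with util-linux; the /proc path is a Linux fallback if it is
not installed):

```shell
# Generate a fresh uuid for the new vtpm and emit the vtpm= config line.
VTPM_UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "vtpm=[\"backend=vtpmmgrdom,uuid=${VTPM_UUID}\"]"
```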
 
+If you wish to clear the vTPM data you can either recreate the
+disk image or change the uuid.
 
-Also build the initial RAM disk if necessary.
+Linux Guest config file:
+------------------------
 
-Reboot the machine with the created Xen kernel.
+The Linux guest config file needs to be modified to include
+the Linux tpmfront driver. Add the following line:
 
-Note: If you do not want any TPM-related code compiled into your
-kernel or built as module then comment all the above lines like
-this example:
-# CONFIG_TCG_TPM is not set
+vtpm=["backend=domu-vtpm"]
 
+Currently only paravirtualized guests are supported.
 
-Modifying VM Configuration files:
----------------------------------
+Launching and shut down:
+------------------------
 
-VM configuration files need to be adapted to make a TPM instance
-available to a user domain. The following VM configuration file is
-an example of how a user domain can be configured to have a TPM
-available. It works similar to making a network interface
-available to a domain.
+To launch a Linux guest with a vTPM we first have to start the vTPM domain.
 
-kernel = "/boot/vmlinuz-2.6.x"
-ramdisk = "/xen/initrd_domU/U1_ramdisk.img"
-memory = 32
-name = "TPMUserDomain0"
-vtpm = ['instance=1,backend=0']
-root = "/dev/ram0 console=tty ro"
-vif = ['backend=0']
+# xl create -c /home/user/domu/vtpm.cfg
 
-In the above configuration file the line 'vtpm = ...' provides
-information about the domain where the virtual TPM is running and
-where the TPM backend has been compiled into - this has to be 
-domain 0  at the moment - and which TPM instance the user domain
-is supposed to talk to. Note that each running VM must use a 
-different instance and that using instance 0 is NOT allowed. The
-instance parameter is taken as the desired instance number, but
-the actual instance number that is assigned to the virtual machine
-can be different. This is the case if for example that particular
-instance is already used by another virtual machine. The association
-of which TPM instance number is used by which virtual machine is
-kept in the file /var/vtpm/vtpm.db. Associations are maintained by
-a xend-internal vTPM UUID and vTPM instance number.
+After initialization is complete, you should see the following:
+Info: Waiting for frontend domain to connect..
 
-Note: If you do not want TPM functionality for your user domain simply
-leave out the 'vtpm' line in the configuration file.
+Next, launch the Linux guest:
 
+# xl create -c /home/user/domu/domu.cfg
 
-Running the TPM:
-----------------
+If xen-tpmfront was compiled as a module, be sure to load it
+in the guest.
 
-To run the vTPM, the device /dev/vtpm must be available.
-Verify that 'ls -l /dev/vtpm' shows the following output:
+# modprobe xen-tpmfront
 
-crw-------  1 root root 10, 225 Aug 11 06:58 /dev/vtpm
+After the Linux domain boots and the xen-tpmfront driver is loaded,
+you should see the following on the vtpm console:
 
-If it is not available, run the following command as 'root'.
-mknod /dev/vtpm c 10 225
+Info: VTPM attached to Frontend X/Y
 
-Make sure that the vTPM is running in domain 0. To do this run the
-following:
+If you have trousers and tpm_tools installed on the guest, you can test the
+vtpm.
 
-modprobe tpmbk
+On guest:
+# tcsd (if tcsd is not running already)
+# tpm_version
 
-/usr/bin/vtpm_managerd
+The version command should return the following:
+  TPM 1.2 Version Info:
+  Chip Version:        1.2.0.7
+  Spec Level:          2
+  Errata Revision:     1
+  TPM Vendor ID:       ETHZ
+  TPM Version:         01010000
+  Manufacturer Info:   4554485a
 
-Start a user domain using the 'xm create' command. Once you are in the
-shell of the user domain, you should be able to do the following as
-user 'root':
+You should also see the command being sent to the vtpm console and
+the vtpm saving its state. You should see the vtpm key being
+encrypted and stored on the vtpmmgrdom console.
 
-Insert the TPM frontend into the kernel if it has been compiled as a
-kernel module.
+To shut down the guest and its vtpm, simply shut down the guest
+normally. As soon as the guest vm disconnects, the vtpm will shut itself
+down automatically.
 
-> modprobe tpm_xenu
+On guest:
+# shutdown -h now
 
-Check the status of the TPM
+You may wish to write a script to start your vtpm and guest together.
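A minimal sketch of such a script, reusing the example paths from this
section (the sleep is only a crude placeholder for waiting until the vTPM
has attached to the manager; adapt names and paths to your setup):

```shell
#!/bin/sh
# Sketch: start the vTPM domain first, then the guest that uses it.
set -e
xl create /home/user/domu/vtpm.cfg   # the vTPM must be up first
sleep 5                              # crude wait for the vTPM to attach
xl create /home/user/domu/domu.cfg   # now start the Linux guest
```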
 
-> cd /sys/devices/xen/vtpm-0
-> ls
-[...]  cancel  caps   pcrs    pubek   [...]
-> cat pcrs
-PCR-00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-01: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-02: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-03: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-04: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-05: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-06: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-07: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-08: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-[...]
+------------------------------
+MORE INFORMATION
+------------------------------
 
-At this point the user domain has been successfully connected to its
-virtual TPM instance.
+See stubdom/vtpmmgr/README for more details about how
+the manager domain works, how to use it, and its command line
+parameters.
 
-For further information please read the documentation in 
-tools/vtpm_manager/README and tools/vtpm/README
+See stubdom/vtpm/README for more specifics about how vtpm-stubdom
+operates and the command line options it accepts.
 
-Stefan Berger and Employees of the Intel Corp
diff --git a/stubdom/vtpm/README b/stubdom/vtpm/README
new file mode 100644
index 0000000..11bdacb
--- /dev/null
+++ b/stubdom/vtpm/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpm-stubdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpm-stubdom is a mini-OS domain that emulates a TPM for the guest OS to
+use. It is a small wrapper around the Berlios TPM emulator
+version 0.7.4. Commands are passed from the Linux guest via the
+mini-os TPM backend driver. vTPM data is encrypted and stored on a disk image
+provided to the virtual machine. The key used to encrypt the data along
+with a hash of the vTPM's data is sent to the vTPM manager for secure storage
+and later retrieval.  The vTPM domain communicates with the manager using a
+mini-os tpm front/back device pair.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+loglevel=<LOG>: Controls the amount of logging printed to the console.
+	The possible values for <LOG> are:
+	 error
+	 info (default)
+	 debug
+
+clear: Start the Berlios emulator in "clear" mode. (default)
+
+save: Start the Berlios emulator in "save" mode.
+
+deactivated: Start the Berlios emulator in "deactivated" mode.
+	See the Berlios TPM emulator documentation for details
+	about these startup modes. For normal use, always use "clear",
+	which is the default; you should not need to specify any of these.
+
+maintcmds=<1|0>: Enable or disable the TPM maintenance commands.
+	These commands are used by tpm manufacturers and thus
+	open a security hole. They are disabled by default.
+
+hwinitpcr=<PCRSPEC>: Initialize the virtual Platform Configuration Registers
+	(PCRs) with PCR values from the hardware TPM. Each pcr specified by
+	<PCRSPEC> will be initialized with the value of that same PCR in the
+	hardware TPM once at startup. By default all PCRs are zero initialized.
+	Valid values of <PCRSPEC> are:
+	 all: copy all pcrs
+	 none: copy no pcrs (default)
+	 <N>: copy pcr n
+	 <X-Y>: copy pcrs x to y (inclusive)
+
+	These can also be combined using commas, for example:
+	 hwinitpcr=5,12-16
+	will copy pcrs 5, 12, 13, 14, 15, and 16.
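Putting these options together, a vtpm-stubdom config file might carry a
line such as the following (values purely illustrative):

```
extra="loglevel=debug hwinitpcr=0-7"
```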
+
+------------------------------
+REFERENCES
+------------------------------
+
+Berlios TPM Emulator:
+http://tpm-emulator.berlios.de/
diff --git a/stubdom/vtpmmgr/README b/stubdom/vtpmmgr/README
new file mode 100644
index 0000000..09f3958
--- /dev/null
+++ b/stubdom/vtpmmgr/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpmmgrdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpmmgrdom implements the vTPM manager, which has two major functions:
+
+ - Securely store encryption keys for each of the vTPMs
+ - Regulate access to the hardware TPM for the entire system
+
+The manager accepts commands from the vtpm-stubdom domains via the mini-os
+TPM backend driver. The vTPM manager communicates directly with the
+hardware TPM using the mini-os tpm_tis driver.
+
+
+When the manager starts for the first time it will check if the TPM
+has an owner. If the TPM is unowned, it will attempt to take ownership
+with the supplied owner_auth (see below) and then create a TPM
+storage key which will be used to secure vTPM key data. Currently the
+manager only binds vTPM keys to the disk. In the future support
+for sealing to PCRs should be added.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+owner_auth=<AUTHSPEC>: Set the owner auth of the TPM. The default
+	is the well known owner auth of all ones.
+
+srk_auth=<AUTHSPEC>: Set the SRK auth for the TPM. The default is
+	the well known srk auth of all zeroes.
+	The possible values of <AUTHSPEC> are:
+	 well-known: Use the well known auth (default)
+	 random: Randomly generate an auth
+	 hash: <HASH>: Use the given 40 character ASCII hex string
+	 text: <STR>: Use sha1 hash of <STR>.
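For the hash: form, the 40-character string is an ASCII sha1 digest of the
kind the text: form derives; for example (sha1sum is from coreutils, and the
passphrase is purely illustrative):

```shell
# Produce a 40-character ASCII hex sha1 digest from a passphrase.
printf '%s' "my vtpm passphrase" | sha1sum | awk '{ print $1 }'
```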
+
+tpmdriver=<DRIVER>: Which driver to use to talk to the hardware TPM.
+	Don't change this unless you know what you're doing.
+	The possible values of <DRIVER> are:
+	 tpm_tis: Use the tpm_tis driver to talk directly to the TPM.
+		The domain must have access to TPM IO memory.  (default)
+	 tpmfront: Use tpmfront to talk to the TPM. The domain must have
+		a tpmfront device set up to talk to another domain
+		which provides access to the TPM.
+
+The following options only apply to the tpm_tis driver:
+
+tpmiomem=<ADDR>: The base address of the hardware memory pages of the
+	TPM (default 0xfed40000).
+
+tpmirq=<IRQ>: The irq of the hardware TPM if using interrupts. A value of
+	"probe" can be set to probe for the irq. A value of 0
+	disables interrupts and uses polling (default 0).
+
+tpmlocality=<LOC>: Attempt to use locality <LOC> of the hardware TPM.
+	(default 0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwR-00028o-PN; Tue, 04 Dec 2012 18:09:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwP-00027v-C2
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:57 +0000
Received: from [193.109.254.147:30596] by server-5.bemta-14.messagelabs.com id
	91/60-10257-47C3EB05; Tue, 04 Dec 2012 18:09:56 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354644590!8953407!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 885 invoked from network); 4 Dec 2012 18:09:54 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:54 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f42_b6ac197d_3adf_4758_a421_c3789af76646;
	Tue, 04 Dec 2012 13:09:48 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:28 -0500
Message-Id: <1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                             |    2 ++
 config/Stubdom.mk.in                   |   44 ++++++++++++++++++++++++
 {tools/m4 => m4}/curses.m4             |    0
 {tools/m4 => m4}/extfs.m4              |    0
 {tools/m4 => m4}/features.m4           |    0
 {tools/m4 => m4}/fetcher.m4            |    0
 {tools/m4 => m4}/ocaml.m4              |    0
 m4/path_or_fail.m4                     |   13 ++++++++
 {tools/m4 => m4}/pkg.m4                |    0
 {tools/m4 => m4}/pthread.m4            |    0
 {tools/m4 => m4}/ptyfuncs.m4           |    0
 {tools/m4 => m4}/python_devel.m4       |    0
 {tools/m4 => m4}/python_version.m4     |    0
 {tools/m4 => m4}/savevar.m4            |    0
 {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
 m4/stubdom.m4                          |   57 ++++++++++++++++++++++++++++++++
 {tools/m4 => m4}/uuid.m4               |    0
 stubdom/Makefile                       |   53 ++++++-----------------------
 stubdom/configure.ac                   |   53 +++++++++++++++++++++++++++++
 tools/configure.ac                     |   28 ++++++++--------
 tools/m4/path_or_fail.m4               |    6 ----
 21 files changed, 194 insertions(+), 62 deletions(-)
 create mode 100644 config/Stubdom.mk.in
 rename {tools/m4 => m4}/curses.m4 (100%)
 rename {tools/m4 => m4}/extfs.m4 (100%)
 rename {tools/m4 => m4}/features.m4 (100%)
 rename {tools/m4 => m4}/fetcher.m4 (100%)
 rename {tools/m4 => m4}/ocaml.m4 (100%)
 create mode 100644 m4/path_or_fail.m4
 rename {tools/m4 => m4}/pkg.m4 (100%)
 rename {tools/m4 => m4}/pthread.m4 (100%)
 rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
 rename {tools/m4 => m4}/python_devel.m4 (100%)
 rename {tools/m4 => m4}/python_version.m4 (100%)
 rename {tools/m4 => m4}/savevar.m4 (100%)
 rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
 create mode 100644 m4/stubdom.m4
 rename {tools/m4 => m4}/uuid.m4 (100%)
 create mode 100644 stubdom/configure.ac
 delete mode 100644 tools/m4/path_or_fail.m4

diff --git a/autogen.sh b/autogen.sh
index 58a71ce..ada482c 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -2,3 +2,5 @@
 cd tools
 autoconf
 autoheader
+cd ../stubdom
+autoconf
diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
new file mode 100644
index 0000000..d2456a9
--- /dev/null
+++ b/config/Stubdom.mk.in
@@ -0,0 +1,44 @@
+# Prefix and install folder
+prefix              := @prefix@
+PREFIX              := $(prefix)
+exec_prefix         := @exec_prefix@
+libdir              := @libdir@
+LIBDIR              := $(libdir)
+
+# Path Programs
+CMAKE               := @CMAKE@
+WGET                := @WGET@ -c
+
+# A debug build of stubdom? //FIXME: Someone make this do something
+debug               := @debug@
+
+STUBDOM_TARGETS     := @STUBDOM_TARGETS@
+STUBDOM_BUILD       := @STUBDOM_BUILD@
+STUBDOM_INSTALL     := @STUBDOM_INSTALL@
+
+ZLIB_VERSION        := @ZLIB_VERSION@
+ZLIB_URL            := @ZLIB_URL@
+
+LIBPCI_VERSION      := @LIBPCI_VERSION@
+LIBPCI_URL          := @LIBPCI_URL@
+
+NEWLIB_VERSION      := @NEWLIB_VERSION@
+NEWLIB_URL          := @NEWLIB_URL@
+
+LWIP_VERSION        := @LWIP_VERSION@
+LWIP_URL            := @LWIP_URL@
+
+GRUB_VERSION        := @GRUB_VERSION@
+GRUB_URL            := @GRUB_URL@
+
+OCAML_VERSION       := @OCAML_VERSION@
+OCAML_URL           := @OCAML_URL@
+
+GMP_VERSION         := @GMP_VERSION@
+GMP_URL             := @GMP_URL@
+
+POLARSSL_VERSION    := @POLARSSL_VERSION@
+POLARSSL_URL        := @POLARSSL_URL@
+
+TPMEMU_VERSION      := @TPMEMU_VERSION@
+TPMEMU_URL          := @TPMEMU_URL@
diff --git a/tools/m4/curses.m4 b/m4/curses.m4
similarity index 100%
rename from tools/m4/curses.m4
rename to m4/curses.m4
diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
similarity index 100%
rename from tools/m4/extfs.m4
rename to m4/extfs.m4
diff --git a/tools/m4/features.m4 b/m4/features.m4
similarity index 100%
rename from tools/m4/features.m4
rename to m4/features.m4
diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
similarity index 100%
rename from tools/m4/fetcher.m4
rename to m4/fetcher.m4
diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
similarity index 100%
rename from tools/m4/ocaml.m4
rename to m4/ocaml.m4
diff --git a/m4/path_or_fail.m4 b/m4/path_or_fail.m4
new file mode 100644
index 0000000..1fdb90d
--- /dev/null
+++ b/m4/path_or_fail.m4
@@ -0,0 +1,13 @@
+AC_DEFUN([AX_PATH_PROG_OR_FAIL],
+[AC_PATH_PROG([$1], [$2], [no])
+if test x"${$1}" == x"no"
+then
+    AC_MSG_ERROR([Unable to find $2, please install $2])
+fi])
+
+AC_DEFUN([AX_PATH_PROG_OR_FAIL_ARG],[
+AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
+$2="/$3-disabled-in-configure-script"
+AC_SUBST($2)
+])
+])
diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
similarity index 100%
rename from tools/m4/pkg.m4
rename to m4/pkg.m4
diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
similarity index 100%
rename from tools/m4/pthread.m4
rename to m4/pthread.m4
diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
similarity index 100%
rename from tools/m4/ptyfuncs.m4
rename to m4/ptyfuncs.m4
diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
similarity index 100%
rename from tools/m4/python_devel.m4
rename to m4/python_devel.m4
diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
similarity index 100%
rename from tools/m4/python_version.m4
rename to m4/python_version.m4
diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
similarity index 100%
rename from tools/m4/savevar.m4
rename to m4/savevar.m4
diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
similarity index 100%
rename from tools/m4/set_cflags_ldflags.m4
rename to m4/set_cflags_ldflags.m4
diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
new file mode 100644
index 0000000..82f92dc
--- /dev/null
+++ b/m4/stubdom.m4
@@ -0,0 +1,57 @@
+AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
+$2=n
+],[
+$2=y
+STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
+STUBDOM_BUILD="$STUBDOM_BUILD $1"
+STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
+])
+AC_SUBST($2)
+])
+
+AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
+$2=y
+STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
+STUBDOM_BUILD="$STUBDOM_BUILD $1"
+STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
+],[
+$2=n
+])
+AC_SUBST($2)
+])
+
+AC_DEFUN([AX_STUBDOM_FINISH], [
+AC_SUBST(STUBDOM_TARGETS)
+AC_SUBST(STUBDOM_BUILD)
+AC_SUBST(STUBDOM_INSTALL)
+echo "Will build the following stub domains:"
+for x in $STUBDOM_BUILD; do
+	echo "  $x"
+done
+])
+
+AC_DEFUN([AX_STUBDOM_LIB], [
+AC_ARG_VAR([$1_URL], [Download url for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	AS_IF([test "x$extfiles" = "xy"],
+		[$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
+		[$1_URL="$4"])
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
+
+AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
+AC_ARG_VAR([$1_URL], [Download url for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	$1_URL="$4"
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
similarity index 100%
rename from tools/m4/uuid.m4
rename to m4/uuid.m4
diff --git a/stubdom/Makefile b/stubdom/Makefile
index fc70d88..c5bb3cb 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -6,44 +6,7 @@ export XEN_OS=MiniOS
 export stubdom=y
 export debug=y
 include $(XEN_ROOT)/Config.mk
-
-#ZLIB_URL?=http://www.zlib.net
-ZLIB_URL=$(XEN_EXTFILES_URL)
-ZLIB_VERSION=1.2.3
-
-#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
-LIBPCI_URL?=$(XEN_EXTFILES_URL)
-LIBPCI_VERSION=2.2.9
-
-#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
-NEWLIB_URL?=$(XEN_EXTFILES_URL)
-NEWLIB_VERSION=1.16.0
-
-#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
-LWIP_URL?=$(XEN_EXTFILES_URL)
-LWIP_VERSION=1.3.0
-
-#GRUB_URL?=http://alpha.gnu.org/gnu/grub
-GRUB_URL?=$(XEN_EXTFILES_URL)
-GRUB_VERSION=0.97
-
-#OCAML_URL?=$(XEN_EXTFILES_URL)
-OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
-OCAML_VERSION=3.11.0
-
-GMP_VERSION=4.3.2
-GMP_URL?=$(XEN_EXTFILES_URL)
-#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
-
-POLARSSL_VERSION=1.1.4
-POLARSSL_URL?=$(XEN_EXTFILES_URL)
-#POLARSSL_URL?=http://polarssl.org/code/releases
-
-TPMEMU_VERSION=0.7.4
-TPMEMU_URL?=$(XEN_EXTFILES_URL)
-#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
-
-WGET=wget -c
+-include $(XEN_ROOT)/config/Stubdom.mk
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
 ifeq ($(XEN_TARGET_ARCH),x86_32)
@@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
+TARGETS=$(STUBDOM_TARGETS)
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
+build: genpath $(STUBDOM_BUILD)
 else
 build: genpath
 endif
@@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	mkdir $@/build
-	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
 
 TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
@@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
+install: genpath install-readme $(STUBDOM_INSTALL)
 else
 install: genpath
 endif
@@ -581,3 +544,9 @@ downloadclean: patchclean
 
 .PHONY: distclean
 distclean: downloadclean
+	-rm ../config/Stubdom.mk
+
+ifeq (,$(findstring clean,$(MAKECMDGOALS)))
+$(XEN_ROOT)/config/Stubdom.mk:
+	$(error You have to run ./configure before building or installing stubdom)
+endif
diff --git a/stubdom/configure.ac b/stubdom/configure.ac
new file mode 100644
index 0000000..b3a307a
--- /dev/null
+++ b/stubdom/configure.ac
@@ -0,0 +1,53 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
+AC_CONFIG_FILES([../config/Stubdom.mk])
+AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_AUX_DIR([../])
+
+# M4 Macro includes
+m4_include([../m4/stubdom.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+
+# Enable/disable stub domains
+AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
+AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
+AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
+AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
+AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
+AX_STUBDOM_DEFAULT_ENABLE([vtpm-stubdom], [vtpm])
+AX_STUBDOM_DEFAULT_ENABLE([vtpmmgrdom], [vtpmmgr])
+
+AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
+AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
+
+AC_ARG_VAR([CMAKE], [Path to the cmake program])
+AC_ARG_VAR([WGET], [Path to wget program])
+
+# Checks for programs.
+AC_PROG_CC
+AC_PROG_MAKE_SET
+AC_PROG_INSTALL
+AX_PATH_PROG_OR_FAIL([WGET], [wget])
+
+# Checks for programs that depend on an argument
+AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])
+
+# Stubdom libraries version and url setup
+AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
+AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
+AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
+AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
+AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
+AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
+AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
+AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
+AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
+
+AX_STUBDOM_FINISH
+AC_OUTPUT()
diff --git a/tools/configure.ac b/tools/configure.ac
index 586313d..971e3e9 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
 AC_CANONICAL_HOST
 
 # M4 Macro includes
-m4_include([m4/savevar.m4])
-m4_include([m4/features.m4])
-m4_include([m4/path_or_fail.m4])
-m4_include([m4/python_version.m4])
-m4_include([m4/python_devel.m4])
-m4_include([m4/ocaml.m4])
-m4_include([m4/set_cflags_ldflags.m4])
-m4_include([m4/uuid.m4])
-m4_include([m4/pkg.m4])
-m4_include([m4/curses.m4])
-m4_include([m4/pthread.m4])
-m4_include([m4/ptyfuncs.m4])
-m4_include([m4/extfs.m4])
-m4_include([m4/fetcher.m4])
+m4_include([../m4/savevar.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+m4_include([../m4/python_version.m4])
+m4_include([../m4/python_devel.m4])
+m4_include([../m4/ocaml.m4])
+m4_include([../m4/set_cflags_ldflags.m4])
+m4_include([../m4/uuid.m4])
+m4_include([../m4/pkg.m4])
+m4_include([../m4/curses.m4])
+m4_include([../m4/pthread.m4])
+m4_include([../m4/ptyfuncs.m4])
+m4_include([../m4/extfs.m4])
+m4_include([../m4/fetcher.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
diff --git a/tools/m4/path_or_fail.m4 b/tools/m4/path_or_fail.m4
deleted file mode 100644
index ece8cd4..0000000
--- a/tools/m4/path_or_fail.m4
+++ /dev/null
@@ -1,6 +0,0 @@
-AC_DEFUN([AX_PATH_PROG_OR_FAIL],
-[AC_PATH_PROG([$1], [$2], [no])
-if test x"${$1}" == x"no" 
-then
-    AC_MSG_ERROR([Unable to find $2, please install $2])
-fi])
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
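[Editorial note: the `AX_STUBDOM_DEFAULT_ENABLE`/`AX_STUBDOM_DEFAULT_DISABLE` macros in m4/stubdom.m4 above accumulate three parallel lists that the stubdom Makefile later consumes as `TARGETS`, the `build` prerequisites, and the `install` prerequisites. The following is a hypothetical plain-shell sketch of that bookkeeping, not part of the patch; the function name is invented, only the variable names and the list format mirror the m4.]

```shell
#!/bin/sh
# Sketch of the list bookkeeping done by AX_STUBDOM_DEFAULT_ENABLE when a
# stub domain remains enabled: append the Makefile target name to
# STUBDOM_TARGETS, the configure-time name to STUBDOM_BUILD, and an
# "install-" prefixed target to STUBDOM_INSTALL.
stubdom_default_enable() {
    name=$1     # configure-time name, e.g. ioemu-stubdom ($1 in the m4)
    target=$2   # Makefile target name, e.g. ioemu ($2 in the m4)
    STUBDOM_TARGETS="$STUBDOM_TARGETS $target"
    STUBDOM_BUILD="$STUBDOM_BUILD $name"
    STUBDOM_INSTALL="$STUBDOM_INSTALL install-$target"
}

STUBDOM_TARGETS= STUBDOM_BUILD= STUBDOM_INSTALL=
stubdom_default_enable ioemu-stubdom ioemu
stubdom_default_enable vtpm-stubdom vtpm

# Mirrors the summary printed by AX_STUBDOM_FINISH at the end of configure.
echo "Will build the following stub domains:"
for x in $STUBDOM_BUILD; do
    echo "  $x"
done
```

Note that, as in the m4, each append leaves a leading space on the list; the Makefile consumes the lists word-by-word, so this is harmless.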

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:10:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwwR-00028o-PN; Tue, 04 Dec 2012 18:09:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwwP-00027v-C2
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:09:57 +0000
Received: from [193.109.254.147:30596] by server-5.bemta-14.messagelabs.com id
	91/60-10257-47C3EB05; Tue, 04 Dec 2012 18:09:56 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354644590!8953407!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 885 invoked from network); 4 Dec 2012 18:09:54 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:54 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f42_b6ac197d_3adf_4758_a421_c3789af76646;
	Tue, 04 Dec 2012 13:09:48 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:28 -0500
Message-Id: <1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                             |    2 ++
 config/Stubdom.mk.in                   |   44 ++++++++++++++++++++++++
 {tools/m4 => m4}/curses.m4             |    0
 {tools/m4 => m4}/extfs.m4              |    0
 {tools/m4 => m4}/features.m4           |    0
 {tools/m4 => m4}/fetcher.m4            |    0
 {tools/m4 => m4}/ocaml.m4              |    0
 m4/path_or_fail.m4                     |   13 ++++++++
 {tools/m4 => m4}/pkg.m4                |    0
 {tools/m4 => m4}/pthread.m4            |    0
 {tools/m4 => m4}/ptyfuncs.m4           |    0
 {tools/m4 => m4}/python_devel.m4       |    0
 {tools/m4 => m4}/python_version.m4     |    0
 {tools/m4 => m4}/savevar.m4            |    0
 {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
 m4/stubdom.m4                          |   57 ++++++++++++++++++++++++++++++++
 {tools/m4 => m4}/uuid.m4               |    0
 stubdom/Makefile                       |   53 ++++++-----------------------
 stubdom/configure.ac                   |   53 +++++++++++++++++++++++++++++
 tools/configure.ac                     |   28 ++++++++--------
 tools/m4/path_or_fail.m4               |    6 ----
 21 files changed, 194 insertions(+), 62 deletions(-)
 create mode 100644 config/Stubdom.mk.in
 rename {tools/m4 => m4}/curses.m4 (100%)
 rename {tools/m4 => m4}/extfs.m4 (100%)
 rename {tools/m4 => m4}/features.m4 (100%)
 rename {tools/m4 => m4}/fetcher.m4 (100%)
 rename {tools/m4 => m4}/ocaml.m4 (100%)
 create mode 100644 m4/path_or_fail.m4
 rename {tools/m4 => m4}/pkg.m4 (100%)
 rename {tools/m4 => m4}/pthread.m4 (100%)
 rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
 rename {tools/m4 => m4}/python_devel.m4 (100%)
 rename {tools/m4 => m4}/python_version.m4 (100%)
 rename {tools/m4 => m4}/savevar.m4 (100%)
 rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
 create mode 100644 m4/stubdom.m4
 rename {tools/m4 => m4}/uuid.m4 (100%)
 create mode 100644 stubdom/configure.ac
 delete mode 100644 tools/m4/path_or_fail.m4

diff --git a/autogen.sh b/autogen.sh
index 58a71ce..ada482c 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -2,3 +2,5 @@
 cd tools
 autoconf
 autoheader
+cd ../stubdom
+autoconf
diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
new file mode 100644
index 0000000..d2456a9
--- /dev/null
+++ b/config/Stubdom.mk.in
@@ -0,0 +1,44 @@
+# Prefix and install folder
+prefix              := @prefix@
+PREFIX              := $(prefix)
+exec_prefix         := @exec_prefix@
+libdir              := @libdir@
+LIBDIR              := $(libdir)
+
+# Path Programs
+CMAKE               := @CMAKE@
+WGET                := @WGET@ -c
+
+# A debug build of stubdom? //FIXME: Someone make this do something
+debug               := @debug@
+
+STUBDOM_TARGETS     := @STUBDOM_TARGETS@
+STUBDOM_BUILD       := @STUBDOM_BUILD@
+STUBDOM_INSTALL     := @STUBDOM_INSTALL@
+
+ZLIB_VERSION        := @ZLIB_VERSION@
+ZLIB_URL            := @ZLIB_URL@
+
+LIBPCI_VERSION      := @LIBPCI_VERSION@
+LIBPCI_URL          := @LIBPCI_URL@
+
+NEWLIB_VERSION      := @NEWLIB_VERSION@
+NEWLIB_URL          := @NEWLIB_URL@
+
+LWIP_VERSION        := @LWIP_VERSION@
+LWIP_URL            := @LWIP_URL@
+
+GRUB_VERSION        := @GRUB_VERSION@
+GRUB_URL            := @GRUB_URL@
+
+OCAML_VERSION       := @OCAML_VERSION@
+OCAML_URL           := @OCAML_URL@
+
+GMP_VERSION         := @GMP_VERSION@
+GMP_URL             := @GMP_URL@
+
+POLARSSL_VERSION    := @POLARSSL_VERSION@
+POLARSSL_URL        := @POLARSSL_URL@
+
+TPMEMU_VERSION      := @TPMEMU_VERSION@
+TPMEMU_URL          := @TPMEMU_URL@
diff --git a/tools/m4/curses.m4 b/m4/curses.m4
similarity index 100%
rename from tools/m4/curses.m4
rename to m4/curses.m4
diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
similarity index 100%
rename from tools/m4/extfs.m4
rename to m4/extfs.m4
diff --git a/tools/m4/features.m4 b/m4/features.m4
similarity index 100%
rename from tools/m4/features.m4
rename to m4/features.m4
diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
similarity index 100%
rename from tools/m4/fetcher.m4
rename to m4/fetcher.m4
diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
similarity index 100%
rename from tools/m4/ocaml.m4
rename to m4/ocaml.m4
diff --git a/m4/path_or_fail.m4 b/m4/path_or_fail.m4
new file mode 100644
index 0000000..1fdb90d
--- /dev/null
+++ b/m4/path_or_fail.m4
@@ -0,0 +1,13 @@
+AC_DEFUN([AX_PATH_PROG_OR_FAIL],
+[AC_PATH_PROG([$1], [$2], [no])
+if test x"${$1}" == x"no"
+then
+    AC_MSG_ERROR([Unable to find $2, please install $2])
+fi])
+
+AC_DEFUN([AX_PATH_PROG_OR_FAIL_ARG],[
+AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
+$2="/$3-disabled-in-configure-script"
+AC_SUBST($2)
+])
+])
diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
similarity index 100%
rename from tools/m4/pkg.m4
rename to m4/pkg.m4
diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
similarity index 100%
rename from tools/m4/pthread.m4
rename to m4/pthread.m4
diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
similarity index 100%
rename from tools/m4/ptyfuncs.m4
rename to m4/ptyfuncs.m4
diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
similarity index 100%
rename from tools/m4/python_devel.m4
rename to m4/python_devel.m4
diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
similarity index 100%
rename from tools/m4/python_version.m4
rename to m4/python_version.m4
diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
similarity index 100%
rename from tools/m4/savevar.m4
rename to m4/savevar.m4
diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
similarity index 100%
rename from tools/m4/set_cflags_ldflags.m4
rename to m4/set_cflags_ldflags.m4
diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
new file mode 100644
index 0000000..82f92dc
--- /dev/null
+++ b/m4/stubdom.m4
@@ -0,0 +1,57 @@
+AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
+$2=n
+],[
+$2=y
+STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
+STUBDOM_BUILD="$STUBDOM_BUILD $1"
+STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
+])
+AC_SUBST($2)
+])
+
+AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
+$2=y
+STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
+STUBDOM_BUILD="$STUBDOM_BUILD $1"
+STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
+],[
+$2=n
+])
+AC_SUBST($2)
+])
+
+AC_DEFUN([AX_STUBDOM_FINISH], [
+AC_SUBST(STUBDOM_TARGETS)
+AC_SUBST(STUBDOM_BUILD)
+AC_SUBST(STUBDOM_INSTALL)
+echo "Will build the following stub domains:"
+for x in $STUBDOM_BUILD; do
+	echo "  $x"
+done
+])
+
+AC_DEFUN([AX_STUBDOM_LIB], [
+AC_ARG_VAR([$1_URL], [Download url for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	AS_IF([test "x$extfiles" = "xy"],
+		[$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
+		[$1_URL="$4"])
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
+
+AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
+AC_ARG_VAR([$1_URL], [Download url for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	$1_URL="$4"
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
similarity index 100%
rename from tools/m4/uuid.m4
rename to m4/uuid.m4
diff --git a/stubdom/Makefile b/stubdom/Makefile
index fc70d88..c5bb3cb 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -6,44 +6,7 @@ export XEN_OS=MiniOS
 export stubdom=y
 export debug=y
 include $(XEN_ROOT)/Config.mk
-
-#ZLIB_URL?=http://www.zlib.net
-ZLIB_URL=$(XEN_EXTFILES_URL)
-ZLIB_VERSION=1.2.3
-
-#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
-LIBPCI_URL?=$(XEN_EXTFILES_URL)
-LIBPCI_VERSION=2.2.9
-
-#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
-NEWLIB_URL?=$(XEN_EXTFILES_URL)
-NEWLIB_VERSION=1.16.0
-
-#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
-LWIP_URL?=$(XEN_EXTFILES_URL)
-LWIP_VERSION=1.3.0
-
-#GRUB_URL?=http://alpha.gnu.org/gnu/grub
-GRUB_URL?=$(XEN_EXTFILES_URL)
-GRUB_VERSION=0.97
-
-#OCAML_URL?=$(XEN_EXTFILES_URL)
-OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
-OCAML_VERSION=3.11.0
-
-GMP_VERSION=4.3.2
-GMP_URL?=$(XEN_EXTFILES_URL)
-#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
-
-POLARSSL_VERSION=1.1.4
-POLARSSL_URL?=$(XEN_EXTFILES_URL)
-#POLARSSL_URL?=http://polarssl.org/code/releases
-
-TPMEMU_VERSION=0.7.4
-TPMEMU_URL?=$(XEN_EXTFILES_URL)
-#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
-
-WGET=wget -c
+-include $(XEN_ROOT)/config/Stubdom.mk
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
 ifeq ($(XEN_TARGET_ARCH),x86_32)
@@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
+TARGETS=$(STUBDOM_TARGETS)
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
+build: genpath $(STUBDOM_BUILD)
 else
 build: genpath
 endif
@@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	mkdir $@/build
-	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
 
 TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
@@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
+install: genpath install-readme $(STUBDOM_INSTALL)
 else
 install: genpath
 endif
@@ -581,3 +544,9 @@ downloadclean: patchclean
 
 .PHONY: distclean
 distclean: downloadclean
+	-rm ../config/Stubdom.mk
+
+ifeq (,$(findstring clean,$(MAKECMDGOALS)))
+$(XEN_ROOT)/config/Stubdom.mk:
+	$(error You have to run ./configure before building or installing stubdom)
+endif
diff --git a/stubdom/configure.ac b/stubdom/configure.ac
new file mode 100644
index 0000000..b3a307a
--- /dev/null
+++ b/stubdom/configure.ac
@@ -0,0 +1,53 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
+AC_CONFIG_FILES([../config/Stubdom.mk])
+AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_AUX_DIR([../])
+
+# M4 Macro includes
+m4_include([../m4/stubdom.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+
+# Enable/disable stub domains
+AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
+AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
+AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
+AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
+AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
+AX_STUBDOM_DEFAULT_ENABLE([vtpm-stubdom], [vtpm])
+AX_STUBDOM_DEFAULT_ENABLE([vtpmmgrdom], [vtpmmgr])
+
+AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
+AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
+
+AC_ARG_VAR([CMAKE], [Path to the cmake program])
+AC_ARG_VAR([WGET], [Path to wget program])
+
+# Checks for programs.
+AC_PROG_CC
+AC_PROG_MAKE_SET
+AC_PROG_INSTALL
+AX_PATH_PROG_OR_FAIL([WGET], [wget])
+
+# Checks for programs that depend on an argument
+AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])
+
+# Stubdom libraries version and url setup
+AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
+AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
+AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
+AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
+AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
+AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
+AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
+AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
+AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
+
+AX_STUBDOM_FINISH
+AC_OUTPUT()
diff --git a/tools/configure.ac b/tools/configure.ac
index 586313d..971e3e9 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
 AC_CANONICAL_HOST
 
 # M4 Macro includes
-m4_include([m4/savevar.m4])
-m4_include([m4/features.m4])
-m4_include([m4/path_or_fail.m4])
-m4_include([m4/python_version.m4])
-m4_include([m4/python_devel.m4])
-m4_include([m4/ocaml.m4])
-m4_include([m4/set_cflags_ldflags.m4])
-m4_include([m4/uuid.m4])
-m4_include([m4/pkg.m4])
-m4_include([m4/curses.m4])
-m4_include([m4/pthread.m4])
-m4_include([m4/ptyfuncs.m4])
-m4_include([m4/extfs.m4])
-m4_include([m4/fetcher.m4])
+m4_include([../m4/savevar.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+m4_include([../m4/python_version.m4])
+m4_include([../m4/python_devel.m4])
+m4_include([../m4/ocaml.m4])
+m4_include([../m4/set_cflags_ldflags.m4])
+m4_include([../m4/uuid.m4])
+m4_include([../m4/pkg.m4])
+m4_include([../m4/curses.m4])
+m4_include([../m4/pthread.m4])
+m4_include([../m4/ptyfuncs.m4])
+m4_include([../m4/extfs.m4])
+m4_include([../m4/fetcher.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
diff --git a/tools/m4/path_or_fail.m4 b/tools/m4/path_or_fail.m4
deleted file mode 100644
index ece8cd4..0000000
--- a/tools/m4/path_or_fail.m4
+++ /dev/null
@@ -1,6 +0,0 @@
-AC_DEFUN([AX_PATH_PROG_OR_FAIL],
-[AC_PATH_PROG([$1], [$2], [no])
-if test x"${$1}" == x"no" 
-then
-    AC_MSG_ERROR([Unable to find $2, please install $2])
-fi])
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
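[Editorial note: the patch above replaces the hard-coded URL/version variables in stubdom/Makefile with a generated config/Stubdom.mk, produced by `AC_CONFIG_FILES` from config/Stubdom.mk.in. The sketch below illustrates the underlying `@VAR@` substitution mechanism with `sed`; autoconf's actual implementation differs, and the filenames here are throwaway copies, not the patch's files.]

```shell
#!/bin/sh
# Illustrate how a .mk.in template becomes a .mk file: every @VAR@
# placeholder is replaced with the value configure computed.
workdir=$(mktemp -d)
cd "$workdir"

# A two-line excerpt modeled on config/Stubdom.mk.in.
cat > Stubdom.mk.in <<'EOF'
ZLIB_VERSION := @ZLIB_VERSION@
ZLIB_URL     := @ZLIB_URL@
EOF

# Values that configure would have determined (here, the defaults from
# AX_STUBDOM_LIB in the patch).
ZLIB_VERSION=1.2.3
ZLIB_URL=http://www.zlib.net

sed -e "s|@ZLIB_VERSION@|$ZLIB_VERSION|" \
    -e "s|@ZLIB_URL@|$ZLIB_URL|" Stubdom.mk.in > Stubdom.mk

cat Stubdom.mk
```

The generated Stubdom.mk is then pulled into stubdom/Makefile via `-include $(XEN_ROOT)/config/Stubdom.mk`; the `-` prefix keeps a clean checkout from failing, and the explicit `$(error ...)` rule guarded by `MAKECMDGOALS` tells the user to run ./configure instead.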

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:11:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwxS-0002VU-CD; Tue, 04 Dec 2012 18:11:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwxP-0002Uc-Vl
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:11:00 +0000
Received: from [85.158.143.35:13636] by server-2.bemta-4.messagelabs.com id
	84/FA-28922-3BC3EB05; Tue, 04 Dec 2012 18:10:59 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354644649!11634202!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14565 invoked from network); 4 Dec 2012 18:10:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:10:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f0a_223fc024_5262_4c64_bcf0_7ea37957be51;
	Tue, 04 Dec 2012 13:09:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:24 -0500
Message-Id: <1354644571-3202-2-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 2/8] add stubdom/vtpmmgr code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for vtpmmgrdom. The Makefile changes
follow in the next patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpmmgr/Makefile           |   32 ++
 stubdom/vtpmmgr/init.c             |  553 +++++++++++++++++++++
 stubdom/vtpmmgr/log.c              |  151 ++++++
 stubdom/vtpmmgr/log.h              |   85 ++++
 stubdom/vtpmmgr/marshal.h          |  528 ++++++++++++++++++++
 stubdom/vtpmmgr/minios.cfg         |   14 +
 stubdom/vtpmmgr/tcg.h              |  707 +++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.c              |  938 ++++++++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.h              |  218 +++++++++
 stubdom/vtpmmgr/tpmrsa.c           |  175 +++++++
 stubdom/vtpmmgr/tpmrsa.h           |   67 +++
 stubdom/vtpmmgr/uuid.h             |   50 ++
 stubdom/vtpmmgr/vtpm_cmd_handler.c |  152 ++++++
 stubdom/vtpmmgr/vtpm_manager.h     |   64 +++
 stubdom/vtpmmgr/vtpm_storage.c     |  794 ++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/vtpm_storage.h     |   68 +++
 stubdom/vtpmmgr/vtpmmgr.c          |   93 ++++
 stubdom/vtpmmgr/vtpmmgr.h          |   77 +++
 18 files changed, 4766 insertions(+)
 create mode 100644 stubdom/vtpmmgr/Makefile
 create mode 100644 stubdom/vtpmmgr/init.c
 create mode 100644 stubdom/vtpmmgr/log.c
 create mode 100644 stubdom/vtpmmgr/log.h
 create mode 100644 stubdom/vtpmmgr/marshal.h
 create mode 100644 stubdom/vtpmmgr/minios.cfg
 create mode 100644 stubdom/vtpmmgr/tcg.h
 create mode 100644 stubdom/vtpmmgr/tpm.c
 create mode 100644 stubdom/vtpmmgr/tpm.h
 create mode 100644 stubdom/vtpmmgr/tpmrsa.c
 create mode 100644 stubdom/vtpmmgr/tpmrsa.h
 create mode 100644 stubdom/vtpmmgr/uuid.h
 create mode 100644 stubdom/vtpmmgr/vtpm_cmd_handler.c
 create mode 100644 stubdom/vtpmmgr/vtpm_manager.h
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.c
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.h
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.c
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.h

diff --git a/stubdom/vtpmmgr/Makefile b/stubdom/vtpmmgr/Makefile
new file mode 100644
index 0000000..88c83c3
--- /dev/null
+++ b/stubdom/vtpmmgr/Makefile
@@ -0,0 +1,32 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o bignum.o sha4.o havege.o timing.o entropy_poll.o
+
+TARGET=vtpmmgr.a
+OBJS=vtpmmgr.o vtpm_cmd_handler.o vtpm_storage.o init.o tpmrsa.o tpm.o log.o
+
+CFLAGS+=-Werror -Iutil -Icrypto -Itcs
+CFLAGS+=-Wno-declaration-after-statement -Wno-unused-label
+
+build: $(TARGET)
+$(TARGET): $(OBJS)
+	ar -rcs $@ $^ $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+clean:
+	rm -f $(TARGET) $(OBJS)
+
+distclean: clean
+
+.PHONY: clean distclean
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
new file mode 100644
index 0000000..a158020
--- /dev/null
+++ b/stubdom/vtpmmgr/init.c
@@ -0,0 +1,553 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include <stdint.h>
+#include <stdlib.h>
+
+#include <xen/xen.h>
+#include <mini-os/tpmback.h>
+#include <mini-os/tpmfront.h>
+#include <mini-os/tpm_tis.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <polarssl/sha1.h>
+
+#include "log.h"
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+#include "tpm.h"
+#include "marshal.h"
+
+struct Opts {
+   enum {
+      TPMDRV_TPM_TIS,
+      TPMDRV_TPMFRONT,
+   } tpmdriver;
+   unsigned long tpmiomem;
+   unsigned int tpmirq;
+   unsigned int tpmlocality;
+   int gen_owner_auth;
+};
+
+// --------------------------- Well Known Auths --------------------------
+const TPM_AUTHDATA WELLKNOWN_SRK_AUTH = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+const TPM_AUTHDATA WELLKNOWN_OWNER_AUTH = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+
+struct vtpm_globals vtpm_globals = {
+   .tpm_fd = -1,
+   .storage_key = TPM_KEY_INIT,
+   .storage_key_handle = 0,
+   .oiap = { .AuthHandle = 0 }
+};
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = TPM_GetRandom(&sz, data);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+static TPM_RESULT check_tpm_version(void) {
+   TPM_RESULT status;
+   UINT32 rsize;
+   BYTE* res = NULL;
+   TPM_CAP_VERSION_INFO vinfo;
+
+   TPMTRYRETURN(TPM_GetCapability(
+            TPM_CAP_VERSION_VAL,
+            0,
+            NULL,
+            &rsize,
+            &res));
+   if(rsize < 4) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid size returned by GetCapability!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   unpack_TPM_CAP_VERSION_INFO(res, &vinfo, UNPACK_ALIAS);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Hardware TPM:\n");
+   vtpmloginfo(VTPM_LOG_VTPM, " version: %hhd %hhd %hhd %hhd\n",
+         vinfo.version.major, vinfo.version.minor, vinfo.version.revMajor, vinfo.version.revMinor);
+   vtpmloginfo(VTPM_LOG_VTPM, " specLevel: %hd\n", vinfo.specLevel);
+   vtpmloginfo(VTPM_LOG_VTPM, " errataRev: %hhd\n", vinfo.errataRev);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorID: %c%c%c%c\n",
+         vinfo.tpmVendorID[0], vinfo.tpmVendorID[1],
+         vinfo.tpmVendorID[2], vinfo.tpmVendorID[3]);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecificSize: %hd\n", vinfo.vendorSpecificSize);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecific: ");
+   for(int i = 0; i < vinfo.vendorSpecificSize; ++i) {
+      vtpmloginfomore(VTPM_LOG_VTPM, "%02hhx", vinfo.vendorSpecific[i]);
+   }
+   vtpmloginfomore(VTPM_LOG_VTPM, "\n");
+
+abort_egress:
+   free(res);
+   return status;
+}
+
+static TPM_RESULT flush_tpm(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   const TPM_RESOURCE_TYPE reslist[] = { TPM_RT_KEY, TPM_RT_AUTH, TPM_RT_TRANS, TPM_RT_COUNTER, TPM_RT_DAA_TPM, TPM_RT_CONTEXT };
+   BYTE* keylist = NULL;
+   UINT32 keylistSize;
+   BYTE* ptr;
+
+   //Iterate through each resource type and flush all handles
+   for(int i = 0; i < sizeof(reslist) / sizeof(TPM_RESOURCE_TYPE); ++i) {
+      TPM_RESOURCE_TYPE beres = cpu_to_be32(reslist[i]);
+      UINT16 size;
+      TPMTRYRETURN(TPM_GetCapability(
+               TPM_CAP_HANDLE,
+               sizeof(TPM_RESOURCE_TYPE),
+               (BYTE*)(&beres),
+               &keylistSize,
+               &keylist));
+
+      ptr = keylist;
+      ptr = unpack_UINT16(ptr, &size);
+
+      //Flush each handle
+      if(size) {
+         vtpmloginfo(VTPM_LOG_VTPM, "Flushing %u handle(s) of type %lu\n", size, (unsigned long) reslist[i]);
+         for(int j = 0; j < size; ++j) {
+            TPM_HANDLE h;
+            ptr = unpack_TPM_HANDLE(ptr, &h);
+            TPMTRYRETURN(TPM_FlushSpecific(h, reslist[i]));
+         }
+      }
+
+      free(keylist);
+      keylist = NULL;
+   }
+
+   goto egress;
+abort_egress:
+   free(keylist);
+egress:
+   return status;
+}
+
+
+static TPM_RESULT try_take_ownership(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_PUBKEY pubEK = TPM_PUBKEY_INIT;
+
+   // If we can read PubEK then there is no owner and we should take it.
+   status = TPM_ReadPubek(&pubEK);
+
+   switch(status) {
+      case TPM_DISABLED_CMD:
+         //Cannot read the EK, so the TPM has an owner
+         vtpmloginfo(VTPM_LOG_VTPM, "Failed to read EK, so the TPM already has an owner. Creating keys off the existing SRK.\n");
+         status = TPM_SUCCESS;
+         break;
+      case TPM_NO_ENDORSEMENT:
+         {
+            //If there's no EK, we have to create one
+            TPM_KEY_PARMS keyInfo = {
+               .algorithmID = TPM_ALG_RSA,
+               .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+               .sigScheme = TPM_SS_NONE,
+               .parmSize = 12,
+               .parms.rsa = {
+                  .keyLength = RSA_KEY_SIZE,
+                  .numPrimes = 2,
+                  .exponentSize = 0,
+                  .exponent = NULL,
+               },
+            };
+            TPMTRYRETURN(TPM_CreateEndorsementKeyPair(&keyInfo, &pubEK));
+         }
+         //fall through to take ownership
+      case TPM_SUCCESS:
+         {
+            //Construct the Srk
+            TPM_KEY srk = {
+               .ver = TPM_STRUCT_VER_1_1,
+               .keyUsage = TPM_KEY_STORAGE,
+               .keyFlags = 0x00,
+               .authDataUsage = TPM_AUTH_ALWAYS,
+               .algorithmParms = {
+                  .algorithmID = TPM_ALG_RSA,
+                  .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+                  .sigScheme =  TPM_SS_NONE,
+                  .parmSize = 12,
+                  .parms.rsa = {
+                     .keyLength = RSA_KEY_SIZE,
+                     .numPrimes = 2,
+                     .exponentSize = 0,
+                     .exponent = NULL,
+                  },
+               },
+               .PCRInfoSize = 0,
+               .pubKey = {
+                  .keyLength = 0,
+                  .key = NULL,
+               },
+               .encDataSize = 0,
+            };
+
+            TPMTRYRETURN(TPM_TakeOwnership(
+                     &pubEK,
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+                     &srk,
+                     NULL,
+                     &vtpm_globals.oiap));
+
+            TPMTRYRETURN(TPM_DisablePubekRead(
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     &vtpm_globals.oiap));
+         }
+         break;
+      default:
+         break;
+   }
+abort_egress:
+   free_TPM_PUBKEY(&pubEK);
+   return status;
+}
+
+static void init_storage_key(TPM_KEY* key) {
+   key->ver.major = 1;
+   key->ver.minor = 1;
+   key->ver.revMajor = 0;
+   key->ver.revMinor = 0;
+
+   key->keyUsage = TPM_KEY_BIND;
+   key->keyFlags = 0;
+   key->authDataUsage = TPM_AUTH_ALWAYS;
+
+   TPM_KEY_PARMS* p = &key->algorithmParms;
+   p->algorithmID = TPM_ALG_RSA;
+   p->encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1;
+   p->sigScheme = TPM_SS_NONE;
+   p->parmSize = 12;
+
+   TPM_RSA_KEY_PARMS* r = &p->parms.rsa;
+   r->keyLength = RSA_KEY_SIZE;
+   r->numPrimes = 2;
+   r->exponentSize = 0;
+   r->exponent = NULL;
+
+   key->PCRInfoSize = 0;
+   key->encDataSize = 0;
+   key->encData = NULL;
+}
+
+static int parse_auth_string(char* authstr, BYTE* target, const TPM_AUTHDATA wellknown, int allowrandom) {
+   int rc;
+   /* well known owner auth */
+   if(!strcmp(authstr, "well-known")) {
+      memcpy(target, wellknown, sizeof(TPM_AUTHDATA));
+   }
+   /* Create a randomly generated owner auth */
+   else if(allowrandom && !strcmp(authstr, "random")) {
+      return 1;
+   }
+   /* owner auth is a raw hash */
+   else if(!strncmp(authstr, "hash:", 5)) {
+      authstr += 5;
+      if((rc = strlen(authstr)) != 40) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth hex string `%s' must be exactly 40 characters (20 bytes) long, length=%d\n", authstr, rc);
+         return -1;
+      }
+      for(int j = 0; j < 20; ++j) {
+         if(sscanf(authstr, "%2hhx", target + j) != 1) {
+            vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth string `%s' is not a valid hex string\n", authstr);
+            return -1;
+         }
+         authstr += 2;
+      }
+   }
+   /* owner auth is a string that will be hashed */
+   else if(!strncmp(authstr, "text:", 5)) {
+      authstr += 5;
+      sha1((const unsigned char*)authstr, strlen(authstr), target);
+   }
+   else {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid auth string %s\n", authstr);
+      return -1;
+   }
+
+   return 0;
+}
+
+int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
+{
+   int rc;
+   int i;
+
+   //Set defaults
+   memcpy(vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, sizeof(TPM_AUTHDATA));
+   memcpy(vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, sizeof(TPM_AUTHDATA));
+
+   for(i = 1; i < argc; ++i) {
+      if(!strncmp(argv[i], "owner_auth:", 11)) {
+         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, 1)) < 0) {
+            goto err_invalid;
+         }
+         if(rc == 1) {
+            opts->gen_owner_auth = 1;
+         }
+      }
+      else if(!strncmp(argv[i], "srk_auth:", 9)) {
+         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, 0)) != 0) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmdriver=", 10)) {
+         if(!strcmp(argv[i] + 10, "tpm_tis")) {
+            opts->tpmdriver = TPMDRV_TPM_TIS;
+         } else if(!strcmp(argv[i] + 10, "tpmfront")) {
+            opts->tpmdriver = TPMDRV_TPMFRONT;
+         } else {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmiomem=",9)) {
+         if(sscanf(argv[i] + 9, "0x%lX", &opts->tpmiomem) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmirq=",7)) {
+         if(!strcmp(argv[i] + 7, "probe")) {
+            opts->tpmirq = TPM_PROBE_IRQ;
+         } else if( sscanf(argv[i] + 7, "%u", &opts->tpmirq) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmlocality=",12)) {
+         if(sscanf(argv[i] + 12, "%u", &opts->tpmlocality) != 1 || opts->tpmlocality > 4) {
+            goto err_invalid;
+         }
+      }
+   }
+
+   switch(opts->tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpm_tis driver\n");
+         break;
+      case TPMDRV_TPMFRONT:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpmfront driver\n");
+         break;
+   }
+
+   return 0;
+err_invalid:
+   vtpmlogerror(VTPM_LOG_VTPM, "Invalid Option %s\n", argv[i]);
+   return -1;
+}
+
+
+
+static TPM_RESULT vtpmmgr_create(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_AUTH_SESSION osap = TPM_AUTH_SESSION_INIT;
+   TPM_AUTHDATA sharedsecret;
+
+   // Take ownership if TPM is unowned
+   TPMTRYRETURN(try_take_ownership());
+
+   // Generate storage key's auth
+   memset(&vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   TPMTRYRETURN( TPM_OSAP(
+            TPM_ET_KEYHANDLE,
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &sharedsecret,
+            &osap) );
+
+   init_storage_key(&vtpm_globals.storage_key);
+
+   //initialize the storage key
+   TPMTRYRETURN( TPM_CreateWrapKey(
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&sharedsecret,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.storage_key,
+            &osap) );
+
+   //Load Storage Key
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*) &vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   //Make sure the TPM has committed its changes
+   TPMTRYRETURN( TPM_SaveState() );
+
+   //Create new disk image
+   TPMTRYRETURN(vtpm_storage_new_header());
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Finished initializing new VTPM manager\n");
+   goto egress;
+abort_egress:
+egress:
+
+   //End the OSAP session
+   if(osap.AuthHandle) {
+      TPM_TerminateHandle(osap.AuthHandle);
+   }
+
+   return status;
+}
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   /* Default commandline options */
+   struct Opts opts = {
+      .tpmdriver = TPMDRV_TPM_TIS,
+      .tpmiomem = TPM_BASEADDR,
+      .tpmirq = 0,
+      .tpmlocality = 0,
+      .gen_owner_auth = 0,
+   };
+
+   if(parse_cmdline_opts(argc, argv, &opts) != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Command line parsing failed! exiting..\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   //Setup storage system
+   if(vtpm_storage_init() != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize storage subsystem!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   //Setup tpmback device
+   init_tpmback();
+
+   //Setup tpm access
+   switch(opts.tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         {
+            struct tpm_chip* tpm;
+            if((tpm = init_tpm_tis(opts.tpmiomem, TPM_TIS_LOCL_INT_TO_FLAG(opts.tpmlocality), opts.tpmirq)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpm_tis device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpm_tis_open(tpm);
+            tpm_tis_request_locality(tpm, opts.tpmlocality);
+         }
+         break;
+      case TPMDRV_TPMFRONT:
+         {
+            struct tpmfront_dev* tpmfront_dev;
+            if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpmfront device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpmfront_open(tpmfront_dev);
+         }
+         break;
+   }
+
+   //Get the version of the tpm
+   TPMTRYRETURN(check_tpm_version());
+
+   // Blow away all stale handles left in the tpm
+   if(flush_tpm() != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Failed to flush stale TPM handles, continuing anyway..\n");
+   }
+
+   /* Initialize the rng */
+   entropy_init(&vtpm_globals.entropy);
+   entropy_add_source(&vtpm_globals.entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&vtpm_globals.entropy);
+   ctr_drbg_init(&vtpm_globals.ctr_drbg, entropy_func, &vtpm_globals.entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &vtpm_globals.ctr_drbg, CTR_DRBG_PR_OFF );
+
+   // Generate Auth for Owner
+   if(opts.gen_owner_auth) {
+      vtpmmgr_rand(vtpm_globals.owner_auth, sizeof(TPM_AUTHDATA));
+   }
+
+   // Create OIAP session for service's authorized commands
+   TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+
+   /* Load the Manager data, if it fails create a new manager */
+   if (vtpm_storage_load_header() != TPM_SUCCESS) {
+      /* If the OIAP session was closed by an error, create a new one */
+      if(vtpm_globals.oiap.AuthHandle == 0) {
+         TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Failed to read manager file. Assuming first time initialization.\n");
+      TPMTRYRETURN( vtpmmgr_create() );
+   }
+
+   goto egress;
+abort_egress:
+   vtpmmgr_shutdown();
+egress:
+   return status;
+}
+
+void vtpmmgr_shutdown(void)
+{
+   /* Cleanup resources */
+   free_TPM_KEY(&vtpm_globals.storage_key);
+
+   /* Cleanup TPM resources */
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
+
+   /* Close tpmback */
+   shutdown_tpmback();
+
+   /* Close the storage system and blkfront */
+   vtpm_storage_shutdown();
+
+   /* Close tpmfront/tpm_tis */
+   close(vtpm_globals.tpm_fd);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
+}
diff --git a/stubdom/vtpmmgr/log.c b/stubdom/vtpmmgr/log.c
new file mode 100644
index 0000000..a82c913
--- /dev/null
+++ b/stubdom/vtpmmgr/log.c
@@ -0,0 +1,151 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+
+#include "tcg.h"
+
+char *module_names[] = { "",
+                         "TPM",
+                         "TPM",
+                         "VTPM",
+                         "VTPM",
+                         "TXDATA",
+                       };
+// Helper code for the consts, eg. to produce messages for error codes.
+
+typedef struct error_code_entry_t {
+  TPM_RESULT code;
+  char * code_name;
+  char * msg;
+} error_code_entry_t;
+
+static const error_code_entry_t error_msgs [] = {
+  { TPM_SUCCESS, "TPM_SUCCESS", "Successful completion of the operation" },
+  { TPM_AUTHFAIL, "TPM_AUTHFAIL", "Authentication failed" },
+  { TPM_BADINDEX, "TPM_BADINDEX", "The index to a PCR, DIR or other register is incorrect" },
+  { TPM_BAD_PARAMETER, "TPM_BAD_PARAMETER", "One or more parameter is bad" },
+  { TPM_AUDITFAILURE, "TPM_AUDITFAILURE", "An operation completed successfully but the auditing of that operation failed." },
+  { TPM_CLEAR_DISABLED, "TPM_CLEAR_DISABLED", "The clear disable flag is set and all clear operations now require physical access" },
+  { TPM_DEACTIVATED, "TPM_DEACTIVATED", "The TPM is deactivated" },
+  { TPM_DISABLED, "TPM_DISABLED", "The TPM is disabled" },
+  { TPM_DISABLED_CMD, "TPM_DISABLED_CMD", "The target command has been disabled" },
+  { TPM_FAIL, "TPM_FAIL", "The operation failed" },
+  { TPM_BAD_ORDINAL, "TPM_BAD_ORDINAL", "The ordinal was unknown or inconsistent" },
+  { TPM_INSTALL_DISABLED, "TPM_INSTALL_DISABLED", "The ability to install an owner is disabled" },
+  { TPM_INVALID_KEYHANDLE, "TPM_INVALID_KEYHANDLE", "The key handle presented was invalid" },
+  { TPM_KEYNOTFOUND, "TPM_KEYNOTFOUND", "The target key was not found" },
+  { TPM_INAPPROPRIATE_ENC, "TPM_INAPPROPRIATE_ENC", "Unacceptable encryption scheme" },
+  { TPM_MIGRATEFAIL, "TPM_MIGRATEFAIL", "Migration authorization failed" },
+  { TPM_INVALID_PCR_INFO, "TPM_INVALID_PCR_INFO", "PCR information could not be interpreted" },
+  { TPM_NOSPACE, "TPM_NOSPACE", "No room to load key." },
+  { TPM_NOSRK, "TPM_NOSRK", "There is no SRK set" },
+  { TPM_NOTSEALED_BLOB, "TPM_NOTSEALED_BLOB", "An encrypted blob is invalid or was not created by this TPM" },
+  { TPM_OWNER_SET, "TPM_OWNER_SET", "There is already an Owner" },
+  { TPM_RESOURCES, "TPM_RESOURCES", "The TPM has insufficient internal resources to perform the requested action." },
+  { TPM_SHORTRANDOM, "TPM_SHORTRANDOM", "A random string was too short" },
+  { TPM_SIZE, "TPM_SIZE", "The TPM does not have the space to perform the operation." },
+  { TPM_WRONGPCRVAL, "TPM_WRONGPCRVAL", "The named PCR value does not match the current PCR value." },
+  { TPM_BAD_PARAM_SIZE, "TPM_BAD_PARAM_SIZE", "The paramSize argument to the command has the incorrect value" },
+  { TPM_SHA_THREAD, "TPM_SHA_THREAD", "There is no existing SHA-1 thread." },
+  { TPM_SHA_ERROR, "TPM_SHA_ERROR", "The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error." },
+  { TPM_FAILEDSELFTEST, "TPM_FAILEDSELFTEST", "Self-test has failed and the TPM has shutdown." },
+  { TPM_AUTH2FAIL, "TPM_AUTH2FAIL", "The authorization for the second key in a 2 key function failed authorization" },
+  { TPM_BADTAG, "TPM_BADTAG", "The tag value sent for a command is invalid" },
+  { TPM_IOERROR, "TPM_IOERROR", "An IO error occurred transmitting information to the TPM" },
+  { TPM_ENCRYPT_ERROR, "TPM_ENCRYPT_ERROR", "The encryption process had a problem." },
+  { TPM_DECRYPT_ERROR, "TPM_DECRYPT_ERROR", "The decryption process did not complete." },
+  { TPM_INVALID_AUTHHANDLE, "TPM_INVALID_AUTHHANDLE", "An invalid handle was used." },
+  { TPM_NO_ENDORSEMENT, "TPM_NO_ENDORSEMENT", "The TPM does not have an EK installed" },
+  { TPM_INVALID_KEYUSAGE, "TPM_INVALID_KEYUSAGE", "The usage of a key is not allowed" },
+  { TPM_WRONG_ENTITYTYPE, "TPM_WRONG_ENTITYTYPE", "The submitted entity type is not allowed" },
+  { TPM_INVALID_POSTINIT, "TPM_INVALID_POSTINIT", "The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup" },
+  { TPM_INAPPROPRIATE_SIG, "TPM_INAPPROPRIATE_SIG", "Signed data cannot include additional DER information" },
+  { TPM_BAD_KEY_PROPERTY, "TPM_BAD_KEY_PROPERTY", "The key properties in TPM_KEY_PARMs are not supported by this TPM" },
+
+  { TPM_BAD_MIGRATION, "TPM_BAD_MIGRATION", "The migration properties of this key are incorrect." },
+  { TPM_BAD_SCHEME, "TPM_BAD_SCHEME", "The signature or encryption scheme for this key is incorrect or not permitted in this situation." },
+  { TPM_BAD_DATASIZE, "TPM_BAD_DATASIZE", "The size of the data (or blob) parameter is bad or inconsistent with the referenced key" },
+  { TPM_BAD_MODE, "TPM_BAD_MODE", "A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob." },
+  { TPM_BAD_PRESENCE, "TPM_BAD_PRESENCE", "Either the physicalPresence or physicalPresenceLock bits have the wrong value" },
+  { TPM_BAD_VERSION, "TPM_BAD_VERSION", "The TPM cannot perform this version of the capability" },
+  { TPM_NO_WRAP_TRANSPORT, "TPM_NO_WRAP_TRANSPORT", "The TPM does not allow for wrapped transport sessions" },
+  { TPM_AUDITFAIL_UNSUCCESSFUL, "TPM_AUDITFAIL_UNSUCCESSFUL", "TPM audit construction failed and the underlying command was returning a failure code also" },
+  { TPM_AUDITFAIL_SUCCESSFUL, "TPM_AUDITFAIL_SUCCESSFUL", "TPM audit construction failed and the underlying command was returning success" },
+  { TPM_NOTRESETABLE, "TPM_NOTRESETABLE", "Attempt to reset a PCR register that does not have the resettable attribute" },
+  { TPM_NOTLOCAL, "TPM_NOTLOCAL", "Attempt to reset a PCR register that requires locality and locality modifier not part of command transport" },
+  { TPM_BAD_TYPE, "TPM_BAD_TYPE", "Make identity blob not properly typed" },
+  { TPM_INVALID_RESOURCE, "TPM_INVALID_RESOURCE", "When saving context identified resource type does not match actual resource" },
+  { TPM_NOTFIPS, "TPM_NOTFIPS", "The TPM is attempting to execute a command only available when in FIPS mode" },
+  { TPM_INVALID_FAMILY, "TPM_INVALID_FAMILY", "The command is attempting to use an invalid family ID" },
+  { TPM_NO_NV_PERMISSION, "TPM_NO_NV_PERMISSION", "The permission to manipulate the NV storage is not available" },
+  { TPM_REQUIRES_SIGN, "TPM_REQUIRES_SIGN", "The operation requires a signed command" },
+  { TPM_KEY_NOTSUPPORTED, "TPM_KEY_NOTSUPPORTED", "Wrong operation to load an NV key" },
+  { TPM_AUTH_CONFLICT, "TPM_AUTH_CONFLICT", "NV_LoadKey blob requires both owner and blob authorization" },
+  { TPM_AREA_LOCKED, "TPM_AREA_LOCKED", "The NV area is locked and not writable" },
+  { TPM_BAD_LOCALITY, "TPM_BAD_LOCALITY", "The locality is incorrect for the attempted operation" },
+  { TPM_READ_ONLY, "TPM_READ_ONLY", "The NV area is read only and can't be written to" },
+  { TPM_PER_NOWRITE, "TPM_PER_NOWRITE", "There is no protection on the write to the NV area" },
+  { TPM_FAMILYCOUNT, "TPM_FAMILYCOUNT", "The family count value does not match" },
+  { TPM_WRITE_LOCKED, "TPM_WRITE_LOCKED", "The NV area has already been written to" },
+  { TPM_BAD_ATTRIBUTES, "TPM_BAD_ATTRIBUTES", "The NV area attributes conflict" },
+  { TPM_INVALID_STRUCTURE, "TPM_INVALID_STRUCTURE", "The structure tag and version are invalid or inconsistent" },
+  { TPM_KEY_OWNER_CONTROL, "TPM_KEY_OWNER_CONTROL", "The key is under control of the TPM Owner and can only be evicted by the TPM Owner." },
+  { TPM_BAD_COUNTER, "TPM_BAD_COUNTER", "The counter handle is incorrect" },
+  { TPM_NOT_FULLWRITE, "TPM_NOT_FULLWRITE", "The write is not a complete write of the area" },
+  { TPM_CONTEXT_GAP, "TPM_CONTEXT_GAP", "The gap between saved context counts is too large" },
+  { TPM_MAXNVWRITES, "TPM_MAXNVWRITES", "The maximum number of NV writes without an owner has been exceeded" },
+  { TPM_NOOPERATOR, "TPM_NOOPERATOR", "No operator authorization value is set" },
+  { TPM_RESOURCEMISSING, "TPM_RESOURCEMISSING", "The resource pointed to by context is not loaded" },
+  { TPM_DELEGATE_LOCK, "TPM_DELEGATE_LOCK", "The delegate administration is locked" },
+  { TPM_DELEGATE_FAMILY, "TPM_DELEGATE_FAMILY", "Attempt to manage a family other than the delegated family" },
+  { TPM_DELEGATE_ADMIN, "TPM_DELEGATE_ADMIN", "Delegation table management not enabled" },
+  { TPM_TRANSPORT_EXCLUSIVE, "TPM_TRANSPORT_EXCLUSIVE", "There was a command executed outside of an exclusive transport session" },
+};
+
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code) {
+  // just do a linear scan for now
+  unsigned i;
+  for (i = 0; i < sizeof(error_msgs)/sizeof(error_msgs[0]); i++)
+    if (code == error_msgs[i].code)
+      return error_msgs[i].code_name;
+
+  return "Unknown Error Code";
+}
diff --git a/stubdom/vtpmmgr/log.h b/stubdom/vtpmmgr/log.h
new file mode 100644
index 0000000..5c7abf5
--- /dev/null
+++ b/stubdom/vtpmmgr/log.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __VTPM_LOG_H__
+#define __VTPM_LOG_H__
+
+#include <stdint.h>             // for uint32_t
+#include <stddef.h>             // for pointer NULL
+#include <stdio.h>
+#include "tcg.h"
+
+// =========================== LOGGING ==============================
+
+// the logging module numbers
+#define VTPM_LOG_TPM         1
+#define VTPM_LOG_TPM_DEEP    2
+#define VTPM_LOG_VTPM        3
+#define VTPM_LOG_VTPM_DEEP   4
+#define VTPM_LOG_TXDATA      5
+
+extern char *module_names[];
+
+// Default to standard logging
+#ifndef LOGGING_MODULES
+#define LOGGING_MODULES (BITMASK(VTPM_LOG_VTPM)|BITMASK(VTPM_LOG_TPM))
+#endif
+
+// bit-access macros
+#define BITMASK(idx)      ( 1U << (idx) )
+#define GETBIT(num,idx)   ( ((num) & BITMASK(idx)) >> (idx) )
+#define SETBIT(num,idx)   (num) |= BITMASK(idx)
+#define CLEARBIT(num,idx) (num) &= ( ~ BITMASK(idx) )
+
+#define vtpmloginfo(module, fmt, args...) \
+  if (GETBIT (LOGGING_MODULES, module) == 1) {				\
+    fprintf (stdout, "INFO[%s]: " fmt, module_names[module], ##args); \
+  }
+
+#define vtpmloginfomore(module, fmt, args...) \
+  if (GETBIT (LOGGING_MODULES, module) == 1) {			      \
+    fprintf (stdout, fmt,##args);				      \
+  }
+
+#define vtpmlogerror(module, fmt, args...) \
+  fprintf (stderr, "ERROR[%s]: " fmt, module_names[module], ##args);
+
+//typedef UINT32 tpm_size_t;
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code);
+
+#endif // __VTPM_LOG_H__
diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
new file mode 100644
index 0000000..77d32f0
--- /dev/null
+++ b/stubdom/vtpmmgr/marshal.h
@@ -0,0 +1,528 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef MARSHAL_H
+#define MARSHAL_H
+
+#include <stdlib.h>
+#include <mini-os/byteorder.h>
+#include <mini-os/endian.h>
+#include "tcg.h"
+
+typedef enum UnpackPtr {
+   UNPACK_ALIAS,
+   UNPACK_ALLOC
+} UnpackPtr;
+
+inline BYTE* pack_BYTE(BYTE* ptr, BYTE t) {
+   ptr[0] = t;
+   return ++ptr;
+}
+
+inline BYTE* unpack_BYTE(BYTE* ptr, BYTE* t) {
+   t[0] = ptr[0];
+   return ++ptr;
+}
+
+#define pack_BOOL(p, t) pack_BYTE(p, t)
+#define unpack_BOOL(p, t) unpack_BYTE(p, t)
+
+inline BYTE* pack_UINT16(BYTE* ptr, UINT16 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[0] = b[1];
+   ptr[1] = b[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* unpack_UINT16(BYTE* ptr, UINT16* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[1];
+   b[1] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* pack_UINT32(BYTE* ptr, UINT32 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[3] = b[0];
+   ptr[2] = b[1];
+   ptr[1] = b[2];
+   ptr[0] = b[3];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+   ptr[2] = b[2];
+   ptr[3] = b[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+inline BYTE* unpack_UINT32(BYTE* ptr, UINT32* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[3];
+   b[1] = ptr[2];
+   b[2] = ptr[1];
+   b[3] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+   b[2] = ptr[2];
+   b[3] = ptr[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+#define pack_TPM_RESULT(p, t) pack_UINT32(p, t)
+#define pack_TPM_PCRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_DIRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TPM_AUTHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HASHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HMACHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENCHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_KEY_HANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENTITYHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_RESOURCE_TYPE(p, t) pack_UINT32(p, t)
+#define pack_TPM_COMMAND_CODE(p, t) pack_UINT32(p, t)
+#define pack_TPM_PROTOCOL_ID(p, t) pack_UINT16(p, t)
+#define pack_TPM_AUTH_DATA_USAGE(p, t) pack_BYTE(p, t)
+#define pack_TPM_ENTITY_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_ALGORITHM_ID(p, t) pack_UINT32(p, t)
+#define pack_TPM_KEY_USAGE(p, t) pack_UINT16(p, t)
+#define pack_TPM_STARTUP_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_CAPABILITY_AREA(p, t) pack_UINT32(p, t)
+#define pack_TPM_ENC_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_SIG_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_MIGRATE_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_PHYSICAL_PRESENCE(p, t) pack_UINT16(p, t)
+#define pack_TPM_KEY_FLAGS(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_RESULT(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PCRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_DIRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_AUTHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HASHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HMACHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENCHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_KEY_HANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENTITYHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_RESOURCE_TYPE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_COMMAND_CODE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PROTOCOL_ID(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_AUTH_DATA_USAGE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_ENTITY_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_ALGORITHM_ID(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_KEY_USAGE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STARTUP_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_CAPABILITY_AREA(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_ENC_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_SIG_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_MIGRATE_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_PHYSICAL_PRESENCE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_KEY_FLAGS(p, t) unpack_UINT32(p, t)
+
+#define pack_TPM_AUTH_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TCS_CONTEXT_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TCS_KEY_HANDLE(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_AUTH_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TCS_CONTEXT_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TCS_KEY_HANDLE(p, t) unpack_UINT32(p, t)
+
+inline BYTE* pack_BUFFER(BYTE* ptr, const BYTE* buf, UINT32 size) {
+   memcpy(ptr, buf, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_BUFFER(BYTE* ptr, BYTE* buf, UINT32 size) {
+   memcpy(buf, ptr, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALIAS(BYTE* ptr, BYTE** buf, UINT32 size) {
+   *buf = ptr;
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALLOC(BYTE* ptr, BYTE** buf, UINT32 size) {
+   if(size) {
+      *buf = malloc(size);
+      memcpy(*buf, ptr, size);
+   } else {
+      *buf = NULL;
+   }
+   return ptr + size;
+}
+
+inline BYTE* unpack_PTR(BYTE* ptr, BYTE** buf, UINT32 size, UnpackPtr alloc) {
+   if(alloc == UNPACK_ALLOC) {
+      return unpack_ALLOC(ptr, buf, size);
+   } else {
+      return unpack_ALIAS(ptr, buf, size);
+   }
+}
+
+inline BYTE* pack_TPM_AUTHDATA(BYTE* ptr, const TPM_AUTHDATA* d) {
+   return pack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_AUTHDATA(BYTE* ptr, TPM_AUTHDATA* d) {
+   return unpack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_SECRET(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_ENCAUTH(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_PAYLOAD_TYPE(p, t) pack_BYTE(p, t)
+#define pack_TPM_TAG(p, t) pack_UINT16(p, t)
+#define pack_TPM_STRUCTURE_TAG(p, t) pack_UINT16(p, t)
+
+#define unpack_TPM_SECRET(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_ENCAUTH(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_PAYLOAD_TYPE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_TAG(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STRUCTURE_TAG(p, t) unpack_UINT16(p, t)
+
+inline BYTE* pack_TPM_VERSION(BYTE* ptr, const TPM_VERSION* t) {
+   ptr[0] = t->major;
+   ptr[1] = t->minor;
+   ptr[2] = t->revMajor;
+   ptr[3] = t->revMinor;
+   return ptr + 4;
+}
+
+inline BYTE* unpack_TPM_VERSION(BYTE* ptr, TPM_VERSION* t) {
+   t->major = ptr[0];
+   t->minor = ptr[1];
+   t->revMajor = ptr[2];
+   t->revMinor = ptr[3];
+   return ptr + 4;
+}
+
+inline BYTE* pack_TPM_CAP_VERSION_INFO(BYTE* ptr, const TPM_CAP_VERSION_INFO* v) {
+   ptr = pack_TPM_STRUCTURE_TAG(ptr, v->tag);
+   ptr = pack_TPM_VERSION(ptr, &v->version);
+   ptr = pack_UINT16(ptr, v->specLevel);
+   ptr = pack_BYTE(ptr, v->errataRev);
+   ptr = pack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = pack_UINT16(ptr, v->vendorSpecificSize);
+   ptr = pack_BUFFER(ptr, v->vendorSpecific, v->vendorSpecificSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_CAP_VERSION_INFO(BYTE* ptr, TPM_CAP_VERSION_INFO* v, UnpackPtr alloc) {
+   ptr = unpack_TPM_STRUCTURE_TAG(ptr, &v->tag);
+   ptr = unpack_TPM_VERSION(ptr, &v->version);
+   ptr = unpack_UINT16(ptr, &v->specLevel);
+   ptr = unpack_BYTE(ptr, &v->errataRev);
+   ptr = unpack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = unpack_UINT16(ptr, &v->vendorSpecificSize);
+   ptr = unpack_PTR(ptr, &v->vendorSpecific, v->vendorSpecificSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_DIGEST(BYTE* ptr, const TPM_DIGEST* d) {
+   return pack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_DIGEST(BYTE* ptr, TPM_DIGEST* d) {
+   return unpack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_PCRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_PCRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_COMPOSITE_HASH(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_COMPOSITE_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_DIRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_DIRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_HMAC(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_HMAC(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_CHOSENID_HASH(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_CHOSENID_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+inline BYTE* pack_TPM_NONCE(BYTE* ptr, const TPM_NONCE* n) {
+   return pack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_NONCE(BYTE* ptr, TPM_NONCE* n) {
+   return unpack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* pack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, const TPM_SYMMETRIC_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->blockSize);
+   ptr = pack_UINT32(ptr, k->ivSize);
+   return pack_BUFFER(ptr, k->IV, k->ivSize);
+}
+
+inline BYTE* unpack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, TPM_SYMMETRIC_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->blockSize);
+   ptr = unpack_UINT32(ptr, &k->ivSize);
+   return unpack_PTR(ptr, &k->IV, k->ivSize, alloc);
+}
+
+inline BYTE* pack_TPM_RSA_KEY_PARMS(BYTE* ptr, const TPM_RSA_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->numPrimes);
+   ptr = pack_UINT32(ptr, k->exponentSize);
+   return pack_BUFFER(ptr, k->exponent, k->exponentSize);
+}
+
+inline BYTE* unpack_TPM_RSA_KEY_PARMS(BYTE* ptr, TPM_RSA_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->numPrimes);
+   ptr = unpack_UINT32(ptr, &k->exponentSize);
+   return unpack_PTR(ptr, &k->exponent, k->exponentSize, alloc);
+}
+
+inline BYTE* pack_TPM_KEY_PARMS(BYTE* ptr, const TPM_KEY_PARMS* k) {
+   ptr = pack_TPM_ALGORITHM_ID(ptr, k->algorithmID);
+   ptr = pack_TPM_ENC_SCHEME(ptr, k->encScheme);
+   ptr = pack_TPM_SIG_SCHEME(ptr, k->sigScheme);
+   ptr = pack_UINT32(ptr, k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return pack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return pack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_KEY_PARMS(BYTE* ptr, TPM_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_ALGORITHM_ID(ptr, &k->algorithmID);
+   ptr = unpack_TPM_ENC_SCHEME(ptr, &k->encScheme);
+   ptr = unpack_TPM_SIG_SCHEME(ptr, &k->sigScheme);
+   ptr = unpack_UINT32(ptr, &k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return unpack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa, alloc);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return unpack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym, alloc);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* pack_TPM_STORE_PUBKEY(BYTE* ptr, const TPM_STORE_PUBKEY* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_BUFFER(ptr, k->key, k->keyLength);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORE_PUBKEY(BYTE* ptr, TPM_STORE_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_PTR(ptr, &k->key, k->keyLength, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PUBKEY(BYTE* ptr, const TPM_PUBKEY* k) {
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   return pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+}
+
+inline BYTE* unpack_TPM_PUBKEY(BYTE* ptr, TPM_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   return unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+}
+
+inline BYTE* pack_TPM_PCR_SELECTION(BYTE* ptr, const TPM_PCR_SELECTION* p) {
+   ptr = pack_UINT16(ptr, p->sizeOfSelect);
+   ptr = pack_BUFFER(ptr, p->pcrSelect, p->sizeOfSelect);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_SELECTION(BYTE* ptr, TPM_PCR_SELECTION* p, UnpackPtr alloc) {
+   ptr = unpack_UINT16(ptr, &p->sizeOfSelect);
+   ptr = unpack_PTR(ptr, &p->pcrSelect, p->sizeOfSelect, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_INFO(BYTE* ptr, const TPM_PCR_INFO* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->pcrSelection);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_INFO(BYTE* ptr, TPM_PCR_INFO* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->pcrSelection, alloc);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_COMPOSITE(BYTE* ptr, const TPM_PCR_COMPOSITE* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->select);
+   ptr = pack_UINT32(ptr, p->valueSize);
+   ptr = pack_BUFFER(ptr, (const BYTE*)p->pcrValue, p->valueSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_COMPOSITE(BYTE* ptr, TPM_PCR_COMPOSITE* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->select, alloc);
+   ptr = unpack_UINT32(ptr, &p->valueSize);
+   ptr = unpack_PTR(ptr, (BYTE**)&p->pcrValue, p->valueSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_KEY(BYTE* ptr, const TPM_KEY* k) {
+   ptr = pack_TPM_VERSION(ptr, &k->ver);
+   ptr = pack_TPM_KEY_USAGE(ptr, k->keyUsage);
+   ptr = pack_TPM_KEY_FLAGS(ptr, k->keyFlags);
+   ptr = pack_TPM_AUTH_DATA_USAGE(ptr, k->authDataUsage);
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   ptr = pack_UINT32(ptr, k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &k->PCRInfo);
+   }
+   ptr = pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+   ptr = pack_UINT32(ptr, k->encDataSize);
+   return pack_BUFFER(ptr, k->encData, k->encDataSize);
+}
+
+inline BYTE* unpack_TPM_KEY(BYTE* ptr, TPM_KEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &k->ver);
+   ptr = unpack_TPM_KEY_USAGE(ptr, &k->keyUsage);
+   ptr = unpack_TPM_KEY_FLAGS(ptr, &k->keyFlags);
+   ptr = unpack_TPM_AUTH_DATA_USAGE(ptr, &k->authDataUsage);
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   ptr = unpack_UINT32(ptr, &k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &k->PCRInfo, alloc);
+   }
+   ptr = unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+   ptr = unpack_UINT32(ptr, &k->encDataSize);
+   return unpack_PTR(ptr, &k->encData, k->encDataSize, alloc);
+}
+
+inline BYTE* pack_TPM_BOUND_DATA(BYTE* ptr, const TPM_BOUND_DATA* b, UINT32 payloadSize) {
+   ptr = pack_TPM_VERSION(ptr, &b->ver);
+   ptr = pack_TPM_PAYLOAD_TYPE(ptr, b->payload);
+   return pack_BUFFER(ptr, b->payloadData, payloadSize);
+}
+
+inline BYTE* unpack_TPM_BOUND_DATA(BYTE* ptr, TPM_BOUND_DATA* b, UINT32 payloadSize, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &b->ver);
+   ptr = unpack_TPM_PAYLOAD_TYPE(ptr, &b->payload);
+   return unpack_PTR(ptr, &b->payloadData, payloadSize, alloc);
+}
+
+inline BYTE* pack_TPM_STORED_DATA(BYTE* ptr, const TPM_STORED_DATA* d) {
+   ptr = pack_TPM_VERSION(ptr, &d->ver);
+   ptr = pack_UINT32(ptr, d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &d->sealInfo);
+   }
+   ptr = pack_UINT32(ptr, d->encDataSize);
+   ptr = pack_BUFFER(ptr, d->encData, d->encDataSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORED_DATA(BYTE* ptr, TPM_STORED_DATA* d, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &d->ver);
+   ptr = unpack_UINT32(ptr, &d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &d->sealInfo, alloc);
+   }
+   ptr = unpack_UINT32(ptr, &d->encDataSize);
+   ptr = unpack_PTR(ptr, &d->encData, d->encDataSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_AUTH_SESSION(BYTE* ptr, const TPM_AUTH_SESSION* auth) {
+   ptr = pack_TPM_AUTH_HANDLE(ptr, auth->AuthHandle);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+   ptr = pack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_AUTH_SESSION(BYTE* ptr, TPM_AUTH_SESSION* auth) {
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = unpack_BOOL(ptr, &auth->fContinueAuthSession);
+   ptr = unpack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG tag,
+      UINT32 size,
+      TPM_COMMAND_CODE ord) {
+   ptr = pack_UINT16(ptr, tag);
+   ptr = pack_UINT32(ptr, size);
+   return pack_UINT32(ptr, ord);
+}
+
+inline BYTE* unpack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG* tag,
+      UINT32* size,
+      TPM_COMMAND_CODE* ord) {
+   ptr = unpack_UINT16(ptr, tag);
+   ptr = unpack_UINT32(ptr, size);
+   ptr = unpack_UINT32(ptr, ord);
+   return ptr;
+}
+
+#define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
+#define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
+
+#endif
diff --git a/stubdom/vtpmmgr/minios.cfg b/stubdom/vtpmmgr/minios.cfg
new file mode 100644
index 0000000..3fb383d
--- /dev/null
+++ b/stubdom/vtpmmgr/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=y
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpmmgr/tcg.h b/stubdom/vtpmmgr/tcg.h
new file mode 100644
index 0000000..7687eae
--- /dev/null
+++ b/stubdom/vtpmmgr/tcg.h
@@ -0,0 +1,707 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005 Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TCG_H__
+#define __TCG_H__
+
+#include <stdlib.h>
+#include <stdint.h>
+
+// **************************** CONSTANTS *********************************
+
+// BOOL values
+#define TRUE 0x01
+#define FALSE 0x00
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+//
+// TPM_COMMAND_CODE values
+#define TPM_PROTECTED_ORDINAL 0x00000000UL
+#define TPM_UNPROTECTED_ORDINAL 0x80000000UL
+#define TPM_CONNECTION_ORDINAL 0x40000000UL
+#define TPM_VENDOR_ORDINAL 0x20000000UL
+
+#define TPM_ORD_OIAP                     (10UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OSAP                     (11UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuth               (12UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TakeOwnership            (13UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymStart      (14UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymFinish     (15UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthOwner          (16UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Extend                   (20UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PcrRead                  (21UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Quote                    (22UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Seal                     (23UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Unseal                   (24UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirWriteAuth             (25UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirRead                  (26UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_UnBind                   (30UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateWrapKey            (31UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKey                  (32UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetPubKey                (33UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EvictKey                 (34UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMigrationBlob      (40UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReWrapKey                (41UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ConvertMigrationBlob     (42UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_AuthorizeMigrationKey    (43UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMaintenanceArchive (44UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadMaintenanceArchive   (45UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_KillMaintenanceFeature   (46UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadManuMaintPub         (47UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadManuMaintPub         (48UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifyKey               (50UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Sign                     (60UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetRandom                (70UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_StirRandom               (71UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestFull             (80UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestStartup          (81UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifySelfTest          (82UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ContinueSelfTest         (83UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTestResult            (84UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Reset                    (90UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerClear               (91UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableOwnerClear        (92UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ForceClear               (93UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableForceClear        (94UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilitySigned      (100UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapability            (101UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilityOwner       (102UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerSetDisable          (110UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalEnable           (111UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalDisable          (112UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOwnerInstall          (113UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalSetDeactivated   (114UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetTempDeactivated       (115UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateEndorsementKeyPair (120UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MakeIdentity             (121UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ActivateIdentity         (122UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadPubek                (124UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerReadPubek           (125UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisablePubekRead         (126UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEvent            (130UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEventSigned      (131UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetOrdinalAuditStatus    (140UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOrdinalAuditStatus    (141UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Terminate_Handle         (150UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Init                     (151UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveState                (152UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Startup                  (153UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetRedirection           (154UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Start                (160UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Update               (161UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Complete             (162UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1CompleteExtend       (163UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FieldUpgrade             (170UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveKeyContext           (180UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKeyContext           (181UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveAuthContext          (182UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadAuthContext          (183UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveContext                      (184UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadContext                      (185UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FlushSpecific                    (186UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PCR_Reset                        (200UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_DefineSpace                   (204UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValue                    (205UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValueAuth                (206UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValue                     (207UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValueAuth                 (208UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_UpdateVerification      (209UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_Manage                  (210UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateKeyDelegation     (212UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateOwnerDelegation   (213UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_VerifyDelegation        (214UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_LoadOwnerDelegation     (216UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadAuth                (217UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadTable               (219UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateCounter                    (220UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_IncrementCounter                 (221UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadCounter                      (222UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounter                   (223UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounterOwner              (224UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EstablishTransport               (230UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ExecuteTransport                 (231UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseTransportSigned           (232UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTicks                         (241UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TickStampBlob                    (242UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MAX                              (256UL + TPM_PROTECTED_ORDINAL)
+
+#define TSC_ORD_PhysicalPresence         (10UL + TPM_CONNECTION_ORDINAL)
+
+
+
+//
+// TPM_RESULT values
+//
+// just put in the whole table from spec 1.2
+
+#define TPM_BASE   0x0 // The start of TPM return codes
+#define TPM_VENDOR_ERROR 0x00000400 // Mask to indicate that the error code is vendor specific for vendor specific commands
+#define TPM_NON_FATAL  0x00000800 // Mask to indicate that the error code is a non-fatal failure.
+
+#define TPM_SUCCESS   TPM_BASE // Successful completion of the operation
+#define TPM_AUTHFAIL            (TPM_BASE + 1)  // Authentication failed
+#define TPM_BADINDEX            (TPM_BASE + 2)  // The index to a PCR, DIR or other register is incorrect
+#define TPM_BAD_PARAMETER       (TPM_BASE + 3)  // One or more parameter is bad
+#define TPM_AUDITFAILURE        (TPM_BASE + 4)  // An operation completed successfully but the auditing of that operation failed.
+#define TPM_CLEAR_DISABLED      (TPM_BASE + 5)  // The clear disable flag is set and all clear operations now require physical access
+#define TPM_DEACTIVATED         (TPM_BASE + 6)  // The TPM is deactivated
+#define TPM_DISABLED            (TPM_BASE + 7)  // The TPM is disabled
+#define TPM_DISABLED_CMD        (TPM_BASE + 8)  // The target command has been disabled
+#define TPM_FAIL                (TPM_BASE + 9)  // The operation failed
+#define TPM_BAD_ORDINAL         (TPM_BASE + 10) // The ordinal was unknown or inconsistent
+#define TPM_INSTALL_DISABLED    (TPM_BASE + 11) // The ability to install an owner is disabled
+#define TPM_INVALID_KEYHANDLE   (TPM_BASE + 12) // The key handle presented was invalid
+#define TPM_KEYNOTFOUND         (TPM_BASE + 13) // The target key was not found
+#define TPM_INAPPROPRIATE_ENC   (TPM_BASE + 14) // Unacceptable encryption scheme
+#define TPM_MIGRATEFAIL         (TPM_BASE + 15) // Migration authorization failed
+#define TPM_INVALID_PCR_INFO    (TPM_BASE + 16) // PCR information could not be interpreted
+#define TPM_NOSPACE             (TPM_BASE + 17) // No room to load key.
+#define TPM_NOSRK               (TPM_BASE + 18) // There is no SRK set
+#define TPM_NOTSEALED_BLOB      (TPM_BASE + 19) // An encrypted blob is invalid or was not created by this TPM
+#define TPM_OWNER_SET           (TPM_BASE + 20) // There is already an Owner
+#define TPM_RESOURCES           (TPM_BASE + 21) // The TPM has insufficient internal resources to perform the requested action.
+#define TPM_SHORTRANDOM         (TPM_BASE + 22) // A random string was too short
+#define TPM_SIZE                (TPM_BASE + 23) // The TPM does not have the space to perform the operation.
+#define TPM_WRONGPCRVAL         (TPM_BASE + 24) // The named PCR value does not match the current PCR value.
+#define TPM_BAD_PARAM_SIZE      (TPM_BASE + 25) // The paramSize argument to the command has the incorrect value
+#define TPM_SHA_THREAD          (TPM_BASE + 26) // There is no existing SHA-1 thread.
+#define TPM_SHA_ERROR           (TPM_BASE + 27) // The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error.
+#define TPM_FAILEDSELFTEST      (TPM_BASE + 28) // Self-test has failed and the TPM has shutdown.
+#define TPM_AUTH2FAIL           (TPM_BASE + 29) // The authorization for the second key in a 2 key function failed authorization
+#define TPM_BADTAG              (TPM_BASE + 30) // The tag value sent for a command is invalid
+#define TPM_IOERROR             (TPM_BASE + 31) // An IO error occurred transmitting information to the TPM
+#define TPM_ENCRYPT_ERROR       (TPM_BASE + 32) // The encryption process had a problem.
+#define TPM_DECRYPT_ERROR       (TPM_BASE + 33) // The decryption process did not complete.
+#define TPM_INVALID_AUTHHANDLE  (TPM_BASE + 34) // An invalid handle was used.
+#define TPM_NO_ENDORSEMENT      (TPM_BASE + 35) // The TPM does not have an EK installed
+#define TPM_INVALID_KEYUSAGE    (TPM_BASE + 36) // The usage of a key is not allowed
+#define TPM_WRONG_ENTITYTYPE    (TPM_BASE + 37) // The submitted entity type is not allowed
+#define TPM_INVALID_POSTINIT    (TPM_BASE + 38) // The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup
+#define TPM_INAPPROPRIATE_SIG   (TPM_BASE + 39) // Signed data cannot include additional DER information
+#define TPM_BAD_KEY_PROPERTY    (TPM_BASE + 40) // The key properties in TPM_KEY_PARMs are not supported by this TPM
+
+#define TPM_BAD_MIGRATION          (TPM_BASE + 41) // The migration properties of this key are incorrect.
+#define TPM_BAD_SCHEME             (TPM_BASE + 42) // The signature or encryption scheme for this key is incorrect or not permitted in this situation.
+#define TPM_BAD_DATASIZE           (TPM_BASE + 43) // The size of the data (or blob) parameter is bad or inconsistent with the referenced key
+#define TPM_BAD_MODE               (TPM_BASE + 44) // A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob.
+#define TPM_BAD_PRESENCE           (TPM_BASE + 45) // Either the physicalPresence or physicalPresenceLock bits have the wrong value
+#define TPM_BAD_VERSION            (TPM_BASE + 46) // The TPM cannot perform this version of the capability
+#define TPM_NO_WRAP_TRANSPORT      (TPM_BASE + 47) // The TPM does not allow for wrapped transport sessions
+#define TPM_AUDITFAIL_UNSUCCESSFUL (TPM_BASE + 48) // TPM audit construction failed and the underlying command was also returning a failure code
+#define TPM_AUDITFAIL_SUCCESSFUL   (TPM_BASE + 49) // TPM audit construction failed and the underlying command was returning success
+#define TPM_NOTRESETABLE           (TPM_BASE + 50) // Attempt to reset a PCR register that does not have the resettable attribute
+#define TPM_NOTLOCAL               (TPM_BASE + 51) // Attempt to reset a PCR register that requires locality and the locality modifier is not part of the command transport
+#define TPM_BAD_TYPE               (TPM_BASE + 52) // Make identity blob not properly typed
+#define TPM_INVALID_RESOURCE       (TPM_BASE + 53) // When saving context, the identified resource type does not match the actual resource
+#define TPM_NOTFIPS                (TPM_BASE + 54) // The TPM is attempting to execute a command only available when in FIPS mode
+#define TPM_INVALID_FAMILY         (TPM_BASE + 55) // The command is attempting to use an invalid family ID
+#define TPM_NO_NV_PERMISSION       (TPM_BASE + 56) // The permission to manipulate the NV storage is not available
+#define TPM_REQUIRES_SIGN          (TPM_BASE + 57) // The operation requires a signed command
+#define TPM_KEY_NOTSUPPORTED       (TPM_BASE + 58) // Wrong operation to load an NV key
+#define TPM_AUTH_CONFLICT          (TPM_BASE + 59) // NV_LoadKey blob requires both owner and blob authorization
+#define TPM_AREA_LOCKED            (TPM_BASE + 60) // The NV area is locked and not writable
+#define TPM_BAD_LOCALITY           (TPM_BASE + 61) // The locality is incorrect for the attempted operation
+#define TPM_READ_ONLY              (TPM_BASE + 62) // The NV area is read only and can't be written to
+#define TPM_PER_NOWRITE            (TPM_BASE + 63) // There is no protection on the write to the NV area
+#define TPM_FAMILYCOUNT            (TPM_BASE + 64) // The family count value does not match
+#define TPM_WRITE_LOCKED           (TPM_BASE + 65) // The NV area has already been written to
+#define TPM_BAD_ATTRIBUTES         (TPM_BASE + 66) // The NV area attributes conflict
+#define TPM_INVALID_STRUCTURE      (TPM_BASE + 67) // The structure tag and version are invalid or inconsistent
+#define TPM_KEY_OWNER_CONTROL      (TPM_BASE + 68) // The key is under control of the TPM Owner and can only be evicted by the TPM Owner.
+#define TPM_BAD_COUNTER            (TPM_BASE + 69) // The counter handle is incorrect
+#define TPM_NOT_FULLWRITE          (TPM_BASE + 70) // The write is not a complete write of the area
+#define TPM_CONTEXT_GAP            (TPM_BASE + 71) // The gap between saved context counts is too large
+#define TPM_MAXNVWRITES            (TPM_BASE + 72) // The maximum number of NV writes without an owner has been exceeded
+#define TPM_NOOPERATOR             (TPM_BASE + 73) // No operator authorization value is set
+#define TPM_RESOURCEMISSING        (TPM_BASE + 74) // The resource pointed to by context is not loaded
+#define TPM_DELEGATE_LOCK          (TPM_BASE + 75) // The delegate administration is locked
+#define TPM_DELEGATE_FAMILY        (TPM_BASE + 76) // Attempt to manage a family other than the delegated family
+#define TPM_DELEGATE_ADMIN         (TPM_BASE + 77) // Delegation table management not enabled
+#define TPM_TRANSPORT_EXCLUSIVE    (TPM_BASE + 78) // A command was executed outside of an exclusive transport session
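The TPM_VENDOR_ERROR and TPM_NON_FATAL masks combine with the base codes above rather than replacing them, so a caller can recover the underlying code by clearing the flag bits. A minimal sketch of that decoding (the helper names are illustrative, not part of this header):

```c
#include <stdint.h>

#define TPM_VENDOR_ERROR 0x00000400  /* vendor-specific error flag */
#define TPM_NON_FATAL    0x00000800  /* non-fatal failure flag */

/* Clear the flag bits to recover the base return code. */
static uint32_t tpm_base_code(uint32_t result)
{
    return result & ~(uint32_t)(TPM_VENDOR_ERROR | TPM_NON_FATAL);
}

/* A non-fatal failure may be retried by the caller. */
static int tpm_is_non_fatal(uint32_t result)
{
    return (result & TPM_NON_FATAL) != 0;
}
```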
+
+// TPM_STARTUP_TYPE values
+#define TPM_ST_CLEAR 0x0001
+#define TPM_ST_STATE 0x0002
+#define TPM_ST_DEACTIVATED 0x0003
+
+// TPM_TAG values
+#define TPM_TAG_RQU_COMMAND 0x00c1
+#define TPM_TAG_RQU_AUTH1_COMMAND 0x00c2
+#define TPM_TAG_RQU_AUTH2_COMMAND 0x00c3
+#define TPM_TAG_RSP_COMMAND 0x00c4
+#define TPM_TAG_RSP_AUTH1_COMMAND 0x00c5
+#define TPM_TAG_RSP_AUTH2_COMMAND 0x00c6
+
+// TPM_PAYLOAD_TYPE values
+#define TPM_PT_ASYM 0x01
+#define TPM_PT_BIND 0x02
+#define TPM_PT_MIGRATE 0x03
+#define TPM_PT_MAINT 0x04
+#define TPM_PT_SEAL 0x05
+
+// TPM_ENTITY_TYPE values
+#define TPM_ET_KEYHANDLE 0x0001
+#define TPM_ET_OWNER 0x0002
+#define TPM_ET_DATA 0x0003
+#define TPM_ET_SRK 0x0004
+#define TPM_ET_KEY 0x0005
+
+// TPM_RESOURCE_TYPE values
+#define TPM_RT_KEY      0x00000001
+#define TPM_RT_AUTH     0x00000002
+#define TPM_RT_HASH     0x00000003
+#define TPM_RT_TRANS    0x00000004
+#define TPM_RT_CONTEXT  0x00000005
+#define TPM_RT_COUNTER  0x00000006
+#define TPM_RT_DELEGATE 0x00000007
+#define TPM_RT_DAA_TPM  0x00000008
+#define TPM_RT_DAA_V0   0x00000009
+#define TPM_RT_DAA_V1   0x0000000A
+
+// TPM_PROTOCOL_ID values
+#define TPM_PID_OIAP 0x0001
+#define TPM_PID_OSAP 0x0002
+#define TPM_PID_ADIP 0x0003
+#define TPM_PID_ADCP 0x0004
+#define TPM_PID_OWNER 0x0005
+
+// TPM_ALGORITHM_ID values
+#define TPM_ALG_RSA 0x00000001
+#define TPM_ALG_SHA 0x00000004
+#define TPM_ALG_HMAC 0x00000005
+#define TPM_ALG_AES128 0x00000006
+#define TPM_ALG_MFG1 0x00000007
+#define TPM_ALG_AES192 0x00000008
+#define TPM_ALG_AES256 0x00000009
+#define TPM_ALG_XOR 0x0000000A
+
+// TPM_ENC_SCHEME values
+#define TPM_ES_NONE 0x0001
+#define TPM_ES_RSAESPKCSv15 0x0002
+#define TPM_ES_RSAESOAEP_SHA1_MGF1 0x0003
+
+// TPM_SIG_SCHEME values
+#define TPM_SS_NONE 0x0001
+#define TPM_SS_RSASSAPKCS1v15_SHA1 0x0002
+#define TPM_SS_RSASSAPKCS1v15_DER 0x0003
+
+/*
+ * TPM_CAPABILITY_AREA Values for TPM_GetCapability ([TPM_Part2], Section 21.1)
+ */
+#define TPM_CAP_ORD                     0x00000001
+#define TPM_CAP_ALG                     0x00000002
+#define TPM_CAP_PID                     0x00000003
+#define TPM_CAP_FLAG                    0x00000004
+#define TPM_CAP_PROPERTY                0x00000005
+#define TPM_CAP_VERSION                 0x00000006
+#define TPM_CAP_KEY_HANDLE              0x00000007
+#define TPM_CAP_CHECK_LOADED            0x00000008
+#define TPM_CAP_SYM_MODE                0x00000009
+#define TPM_CAP_KEY_STATUS              0x0000000C
+#define TPM_CAP_NV_LIST                 0x0000000D
+#define TPM_CAP_MFR                     0x00000010
+#define TPM_CAP_NV_INDEX                0x00000011
+#define TPM_CAP_TRANS_ALG               0x00000012
+#define TPM_CAP_HANDLE                  0x00000014
+#define TPM_CAP_TRANS_ES                0x00000015
+#define TPM_CAP_AUTH_ENCRYPT            0x00000017
+#define TPM_CAP_SELECT_SIZE             0x00000018
+#define TPM_CAP_DA_LOGIC                0x00000019
+#define TPM_CAP_VERSION_VAL             0x0000001A
+
+/* subCap definitions ([TPM_Part2], Section 21.2) */
+#define TPM_CAP_PROP_PCR                0x00000101
+#define TPM_CAP_PROP_DIR                0x00000102
+#define TPM_CAP_PROP_MANUFACTURER       0x00000103
+#define TPM_CAP_PROP_KEYS               0x00000104
+#define TPM_CAP_PROP_MIN_COUNTER        0x00000107
+#define TPM_CAP_FLAG_PERMANENT          0x00000108
+#define TPM_CAP_FLAG_VOLATILE           0x00000109
+#define TPM_CAP_PROP_AUTHSESS           0x0000010A
+#define TPM_CAP_PROP_TRANSESS           0x0000010B
+#define TPM_CAP_PROP_COUNTERS           0x0000010C
+#define TPM_CAP_PROP_MAX_AUTHSESS       0x0000010D
+#define TPM_CAP_PROP_MAX_TRANSESS       0x0000010E
+#define TPM_CAP_PROP_MAX_COUNTERS       0x0000010F
+#define TPM_CAP_PROP_MAX_KEYS           0x00000110
+#define TPM_CAP_PROP_OWNER              0x00000111
+#define TPM_CAP_PROP_CONTEXT            0x00000112
+#define TPM_CAP_PROP_MAX_CONTEXT        0x00000113
+#define TPM_CAP_PROP_FAMILYROWS         0x00000114
+#define TPM_CAP_PROP_TIS_TIMEOUT        0x00000115
+#define TPM_CAP_PROP_STARTUP_EFFECT     0x00000116
+#define TPM_CAP_PROP_DELEGATE_ROW       0x00000117
+#define TPM_CAP_PROP_MAX_DAASESS        0x00000119
+#define TPM_CAP_PROP_DAASESS            0x0000011A
+#define TPM_CAP_PROP_CONTEXT_DIST       0x0000011B
+#define TPM_CAP_PROP_DAA_INTERRUPT      0x0000011C
+#define TPM_CAP_PROP_SESSIONS           0x0000011D
+#define TPM_CAP_PROP_MAX_SESSIONS       0x0000011E
+#define TPM_CAP_PROP_CMK_RESTRICTION    0x0000011F
+#define TPM_CAP_PROP_DURATION           0x00000120
+#define TPM_CAP_PROP_ACTIVE_COUNTER     0x00000122
+#define TPM_CAP_PROP_MAX_NV_AVAILABLE   0x00000123
+#define TPM_CAP_PROP_INPUT_BUFFER       0x00000124
+
+// TPM_KEY_USAGE values
+#define TPM_KEY_EK 0x0000
+#define TPM_KEY_SIGNING 0x0010
+#define TPM_KEY_STORAGE 0x0011
+#define TPM_KEY_IDENTITY 0x0012
+#define TPM_KEY_AUTHCHANGE 0x0013
+#define TPM_KEY_BIND 0x0014
+#define TPM_KEY_LEGACY 0x0015
+
+// TPM_AUTH_DATA_USAGE values
+#define TPM_AUTH_NEVER 0x00
+#define TPM_AUTH_ALWAYS 0x01
+
+// Key Handle of owner and srk
+#define TPM_OWNER_KEYHANDLE 0x40000001
+#define TPM_SRK_KEYHANDLE 0x40000000
+
+
+
+// *************************** TYPEDEFS *********************************
+typedef unsigned char BYTE;
+typedef unsigned char BOOL;
+typedef uint16_t UINT16;
+typedef uint32_t UINT32;
+typedef uint64_t UINT64;
+
+typedef UINT32 TPM_RESULT;
+typedef UINT32 TPM_PCRINDEX;
+typedef UINT32 TPM_DIRINDEX;
+typedef UINT32 TPM_HANDLE;
+typedef TPM_HANDLE TPM_AUTHHANDLE;
+typedef TPM_HANDLE TCPA_HASHHANDLE;
+typedef TPM_HANDLE TCPA_HMACHANDLE;
+typedef TPM_HANDLE TCPA_ENCHANDLE;
+typedef TPM_HANDLE TPM_KEY_HANDLE;
+typedef TPM_HANDLE TCPA_ENTITYHANDLE;
+typedef UINT32 TPM_RESOURCE_TYPE;
+typedef UINT32 TPM_COMMAND_CODE;
+typedef UINT16 TPM_PROTOCOL_ID;
+typedef BYTE TPM_AUTH_DATA_USAGE;
+typedef UINT16 TPM_ENTITY_TYPE;
+typedef UINT32 TPM_ALGORITHM_ID;
+typedef UINT16 TPM_KEY_USAGE;
+typedef UINT16 TPM_STARTUP_TYPE;
+typedef UINT32 TPM_CAPABILITY_AREA;
+typedef UINT16 TPM_ENC_SCHEME;
+typedef UINT16 TPM_SIG_SCHEME;
+typedef UINT16 TPM_MIGRATE_SCHEME;
+typedef UINT16 TPM_PHYSICAL_PRESENCE;
+typedef UINT32 TPM_KEY_FLAGS;
+
+#define TPM_DIGEST_SIZE 20  // Size of a SHA-1 digest; fixed by the TPM 1.2 spec, do not change
+typedef BYTE TPM_AUTHDATA[TPM_DIGEST_SIZE];
+typedef TPM_AUTHDATA TPM_SECRET;
+typedef TPM_AUTHDATA TPM_ENCAUTH;
+typedef BYTE TPM_PAYLOAD_TYPE;
+typedef UINT16 TPM_TAG;
+typedef UINT16 TPM_STRUCTURE_TAG;
+
+// Data Types of the TCS
+typedef UINT32 TCS_AUTHHANDLE;  // Handle addressing an authorization session
+typedef UINT32 TCS_CONTEXT_HANDLE; // Basic context handle
+typedef UINT32 TCS_KEY_HANDLE;  // Basic key handle
+
+// ************************* STRUCTURES **********************************
+
+typedef struct TPM_VERSION {
+  BYTE major;
+  BYTE minor;
+  BYTE revMajor;
+  BYTE revMinor;
+} TPM_VERSION;
+
+static const TPM_VERSION TPM_STRUCT_VER_1_1 = { 1,1,0,0 };
+
+typedef struct TPM_CAP_VERSION_INFO {
+   TPM_STRUCTURE_TAG tag;
+   TPM_VERSION version;
+   UINT16 specLevel;
+   BYTE errataRev;
+   BYTE tpmVendorID[4];
+   UINT16 vendorSpecificSize;
+   BYTE* vendorSpecific;
+} TPM_CAP_VERSION_INFO;
+
+inline void free_TPM_CAP_VERSION_INFO(TPM_CAP_VERSION_INFO* v) {
+   free(v->vendorSpecific);
+   v->vendorSpecific = NULL;
+}
+
+typedef struct TPM_DIGEST {
+  BYTE digest[TPM_DIGEST_SIZE];
+} TPM_DIGEST;
+
+typedef TPM_DIGEST TPM_PCRVALUE;
+typedef TPM_DIGEST TPM_COMPOSITE_HASH;
+typedef TPM_DIGEST TPM_DIRVALUE;
+typedef TPM_DIGEST TPM_HMAC;
+typedef TPM_DIGEST TPM_CHOSENID_HASH;
+
+typedef struct TPM_NONCE {
+  BYTE nonce[TPM_DIGEST_SIZE];
+} TPM_NONCE;
+
+typedef struct TPM_SYMMETRIC_KEY_PARMS {
+   UINT32 keyLength;
+   UINT32 blockSize;
+   UINT32 ivSize;
+   BYTE* IV;
+} TPM_SYMMETRIC_KEY_PARMS;
+
+inline void free_TPM_SYMMETRIC_KEY_PARMS(TPM_SYMMETRIC_KEY_PARMS* p) {
+   free(p->IV);
+   p->IV = NULL;
+}
+
+#define TPM_SYMMETRIC_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+typedef struct TPM_RSA_KEY_PARMS {
+  UINT32 keyLength;
+  UINT32 numPrimes;
+  UINT32 exponentSize;
+  BYTE* exponent;
+} TPM_RSA_KEY_PARMS;
+
+#define TPM_RSA_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+inline void free_TPM_RSA_KEY_PARMS(TPM_RSA_KEY_PARMS* p) {
+   free(p->exponent);
+   p->exponent = NULL;
+}
+
+typedef struct TPM_KEY_PARMS {
+  TPM_ALGORITHM_ID algorithmID;
+  TPM_ENC_SCHEME encScheme;
+  TPM_SIG_SCHEME sigScheme;
+  UINT32 parmSize;
+  union {
+     TPM_SYMMETRIC_KEY_PARMS sym;
+     TPM_RSA_KEY_PARMS rsa;
+  } parms;
+} TPM_KEY_PARMS;
+
+#define TPM_KEY_PARMS_INIT { 0, 0, 0, 0 }
+
+inline void free_TPM_KEY_PARMS(TPM_KEY_PARMS* p) {
+   if(p->parmSize) {
+      switch(p->algorithmID) {
+         case TPM_ALG_RSA:
+            free_TPM_RSA_KEY_PARMS(&p->parms.rsa);
+            break;
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            free_TPM_SYMMETRIC_KEY_PARMS(&p->parms.sym);
+            break;
+      }
+   }
+}
+
+typedef struct TPM_STORE_PUBKEY {
+  UINT32 keyLength;
+  BYTE* key;
+} TPM_STORE_PUBKEY;
+
+#define TPM_STORE_PUBKEY_INIT { 0, NULL }
+
+inline void free_TPM_STORE_PUBKEY(TPM_STORE_PUBKEY* p) {
+   free(p->key);
+   p->key = NULL;
+}
+
+typedef struct TPM_PUBKEY {
+  TPM_KEY_PARMS algorithmParms;
+  TPM_STORE_PUBKEY pubKey;
+} TPM_PUBKEY;
+
+#define TPM_PUBKEY_INIT { TPM_KEY_PARMS_INIT, TPM_STORE_PUBKEY_INIT }
+
+inline void free_TPM_PUBKEY(TPM_PUBKEY* k) {
+   free_TPM_KEY_PARMS(&k->algorithmParms);
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+}
+
+typedef struct TPM_PCR_SELECTION {
+   UINT16 sizeOfSelect;
+   BYTE* pcrSelect;
+} TPM_PCR_SELECTION;
+
+#define TPM_PCR_SELECTION_INIT { 0, NULL }
+
+inline void free_TPM_PCR_SELECTION(TPM_PCR_SELECTION* p) {
+   free(p->pcrSelect);
+   p->pcrSelect = NULL;
+}
+
+typedef struct TPM_PCR_INFO {
+   TPM_PCR_SELECTION pcrSelection;
+   TPM_COMPOSITE_HASH digestAtRelease;
+   TPM_COMPOSITE_HASH digestAtCreation;
+} TPM_PCR_INFO;
+
+#define TPM_PCR_INFO_INIT { TPM_PCR_SELECTION_INIT }
+
+inline void free_TPM_PCR_INFO(TPM_PCR_INFO* p) {
+   free_TPM_PCR_SELECTION(&p->pcrSelection);
+}
+
+typedef struct TPM_PCR_COMPOSITE {
+  TPM_PCR_SELECTION select;
+  UINT32 valueSize;
+  TPM_PCRVALUE* pcrValue;
+} TPM_PCR_COMPOSITE;
+
+#define TPM_PCR_COMPOSITE_INIT { TPM_PCR_SELECTION_INIT, 0, NULL }
+
+inline void free_TPM_PCR_COMPOSITE(TPM_PCR_COMPOSITE* p) {
+   free_TPM_PCR_SELECTION(&p->select);
+   free(p->pcrValue);
+   p->pcrValue = NULL;
+}
+
+typedef struct TPM_KEY {
+  TPM_VERSION         ver;
+  TPM_KEY_USAGE       keyUsage;
+  TPM_KEY_FLAGS       keyFlags;
+  TPM_AUTH_DATA_USAGE authDataUsage;
+  TPM_KEY_PARMS       algorithmParms;
+  UINT32              PCRInfoSize;
+  TPM_PCR_INFO        PCRInfo;
+  TPM_STORE_PUBKEY    pubKey;
+  UINT32              encDataSize;
+  BYTE*               encData;
+} TPM_KEY;
+
+#define TPM_KEY_INIT { .algorithmParms = TPM_KEY_PARMS_INIT,\
+   .PCRInfoSize = 0, .PCRInfo = TPM_PCR_INFO_INIT, \
+   .pubKey = TPM_STORE_PUBKEY_INIT, \
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_KEY(TPM_KEY* k) {
+   if(k->PCRInfoSize) {
+      free_TPM_PCR_INFO(&k->PCRInfo);
+   }
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+   free(k->encData);
+   k->encData = NULL;
+}
+
+typedef struct TPM_BOUND_DATA {
+  TPM_VERSION ver;
+  TPM_PAYLOAD_TYPE payload;
+  BYTE* payloadData;
+} TPM_BOUND_DATA;
+
+#define TPM_BOUND_DATA_INIT { .payloadData = NULL }
+
+inline void free_TPM_BOUND_DATA(TPM_BOUND_DATA* d) {
+   free(d->payloadData);
+   d->payloadData = NULL;
+}
+
+typedef struct TPM_STORED_DATA {
+  TPM_VERSION ver;
+  UINT32 sealInfoSize;
+  TPM_PCR_INFO sealInfo;
+  UINT32 encDataSize;
+  BYTE* encData;
+} TPM_STORED_DATA;
+
+#define TPM_STORED_DATA_INIT { .sealInfoSize = 0, .sealInfo = TPM_PCR_INFO_INIT,\
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_STORED_DATA(TPM_STORED_DATA* d) {
+   if(d->sealInfoSize) {
+      free_TPM_PCR_INFO(&d->sealInfo);
+   }
+   free(d->encData);
+   d->encData = NULL;
+}
+
+typedef struct TPM_AUTH_SESSION {
+  TPM_AUTHHANDLE  AuthHandle;
+  TPM_NONCE   NonceOdd;   // system
+  TPM_NONCE   NonceEven;   // TPM
+  BOOL   fContinueAuthSession;
+  TPM_AUTHDATA  HMAC;
+} TPM_AUTH_SESSION;
+
+#define TPM_AUTH_SESSION_INIT { .AuthHandle = 0, .fContinueAuthSession = FALSE }
+
+// ---------------------- Functions for checking TPM_RESULTs -----------------
+
+#include <stdio.h>
+
+// FIXME: Review use of these and delete unneeded ones.
+
+// These macros depend heavily on local structure:
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+#define ERRORDIE(s) do { status = s; \
+                         fprintf (stderr, "*** ERRORDIE in %s at %s: %i\n", __func__, __FILE__, __LINE__); \
+                         goto abort_egress; } \
+                    while (0)
+
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+// Try command c, evaluating it exactly once. If it fails, set status to s and goto abort_egress.
+#define TPMTRY(s,c) do { status = c; \
+                         if (status != TPM_SUCCESS) { \
+                            status = s; \
+                            printf("ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                            goto abort_egress; \
+                         } \
+                    } while (0)
+
+// Try command c. If it fails, print an error message, set status to the actual return code, and goto abort_egress.
+#define TPMTRYRETURN(c) do { status = c; \
+                             if (status != TPM_SUCCESS) { \
+                               fprintf(stderr, "ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                               goto abort_egress; \
+                             } \
+                        } while(0)
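Both macros assume the enclosing function declares a local `TPM_RESULT status` and ends with an `abort_egress` cleanup label. A hypothetical caller, with a stand-in `do_step()` in place of a real TPM command and a simplified macro body (no `tpm_get_error_name()`), looks like this:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TPM_RESULT;
#define TPM_SUCCESS 0
#define TPM_FAIL    9

/* Same shape as TPMTRYRETURN above, minus the error-name lookup. */
#define TPMTRYRETURN(c) do { status = c; \
        if (status != TPM_SUCCESS) { \
            fprintf(stderr, "ERROR in %s: code 0x%x\n", __func__, status); \
            goto abort_egress; \
        } \
    } while (0)

/* Stand-in for any call returning a TPM_RESULT. */
static TPM_RESULT do_step(int fail)
{
    return fail ? TPM_FAIL : TPM_SUCCESS;
}

TPM_RESULT example(int fail)
{
    TPM_RESULT status = TPM_SUCCESS;
    TPMTRYRETURN(do_step(fail));
    /* ... further steps would go here ... */
abort_egress:
    /* cleanup common to success and failure paths */
    return status;
}
```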
+
+
+#endif //__TCPA_H__
diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
new file mode 100644
index 0000000..123a27c
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.c
@@ -0,0 +1,938 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdio.h>
+#include <string.h>
+#include <malloc.h>
+#include <unistd.h>
+#include <errno.h>
+
+#include <polarssl/sha1.h>
+
+#include "tcg.h"
+#include "tpm.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpmrsa.h"
+#include "vtpmmgr.h"
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+#define TPM_BEGIN(TAG, ORD) \
+   const TPM_TAG intag = TAG;\
+TPM_TAG tag = intag;\
+UINT32 paramSize;\
+const TPM_COMMAND_CODE ordinal = ORD;\
+TPM_RESULT status = TPM_SUCCESS;\
+BYTE in_buf[TCPA_MAX_BUFFER_LENGTH];\
+BYTE out_buf[TCPA_MAX_BUFFER_LENGTH];\
+UINT32 out_len = sizeof(out_buf);\
+BYTE* ptr = in_buf;\
+/*Print a log message */\
+vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);\
+/* Pack the header*/\
+ptr = pack_TPM_TAG(ptr, tag);\
+ptr += sizeof(UINT32);\
+ptr = pack_TPM_COMMAND_CODE(ptr, ordinal)
+
+#define TPM_AUTH_BEGIN() \
+   sha1_context sha1_ctx;\
+BYTE* authbase = ptr - sizeof(TPM_COMMAND_CODE);\
+TPM_DIGEST paramDigest;\
+sha1_starts(&sha1_ctx)
+
+#define TPM_AUTH1_GEN(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_AUTH2_GEN(HMACkey, auth) do {\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_TRANSMIT() do {\
+   /* Pack the command size */\
+   paramSize = ptr - in_buf;\
+   pack_UINT32(in_buf + sizeof(TPM_TAG), paramSize);\
+   if((status = TPM_TransmitData(in_buf, paramSize, out_buf, &out_len)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_VERIFY_BEGIN() do {\
+   UINT32 buf[2] = { cpu_to_be32(status), cpu_to_be32(ordinal) };\
+   sha1_starts(&sha1_ctx);\
+   sha1_update(&sha1_ctx, (unsigned char*)buf, sizeof(buf));\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH1_VERIFY(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH2_VERIFY(HMACkey, auth) do {\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+
+
+#define TPM_UNPACK_VERIFY() do { \
+   ptr = out_buf;\
+   ptr = unpack_TPM_RSP_HEADER(ptr, \
+         &(tag), &(paramSize), &(status));\
+   if((status) != TPM_SUCCESS || (tag) != (intag + 3)) { /* response tag = request tag + 3 */ \
+      vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_HASH() do {\
+   sha1_update(&sha1_ctx, authbase, ptr - authbase);\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_SKIP() do {\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_ERR_CHECK(auth) do {\
+   if(status != TPM_SUCCESS || auth->fContinueAuthSession == FALSE) {\
+      vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM\n", auth->AuthHandle);\
+      auth->AuthHandle = 0;\
+   }\
+} while(0)
+
+static void xorEncrypt(const TPM_SECRET* sharedSecret,
+      TPM_NONCE* nonce,
+      const TPM_AUTHDATA* inAuth0,
+      TPM_ENCAUTH outAuth0,
+      const TPM_AUTHDATA* inAuth1,
+      TPM_ENCAUTH outAuth1) {
+   BYTE XORbuffer[sizeof(TPM_SECRET) + sizeof(TPM_NONCE)];
+   BYTE XORkey[TPM_DIGEST_SIZE];
+   BYTE* ptr = XORbuffer;
+   ptr = pack_TPM_SECRET(ptr, sharedSecret);
+   ptr = pack_TPM_NONCE(ptr, nonce);
+
+   sha1(XORbuffer, ptr - XORbuffer, XORkey);
+
+   if(inAuth0) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth0[i] = XORkey[i] ^ (*inAuth0)[i];
+      }
+   }
+   if(inAuth1) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth1[i] = XORkey[i] ^ (*inAuth1)[i];
+      }
+   }
+
+}
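xorEncrypt() is its own inverse: the pad is SHA1(sharedSecret || nonce), and XORing with the same pad twice recovers the plaintext. A toy sketch of that property, with a fixed pad standing in for the SHA-1 derivation:

```c
#include <stddef.h>

/* XOR a buffer against a pad; applying the same pad twice is a no-op. */
static void xor_with_pad(const unsigned char *pad, const unsigned char *in,
                         unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        out[i] = in[i] ^ pad[i];
}
```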
+
+static void generateAuth(const TPM_DIGEST* paramDigest,
+      const TPM_SECRET* HMACkey,
+      TPM_AUTH_SESSION *auth)
+{
+   // Generate a new NonceOdd
+   vtpmmgr_rand((BYTE*)auth->NonceOdd.nonce, sizeof(TPM_NONCE));
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac((BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         auth->HMAC);
+}
+
+static TPM_RESULT verifyAuth(const TPM_DIGEST* paramDigest,
+      /*[IN]*/ const TPM_SECRET *HMACkey,
+      /*[IN,OUT]*/ TPM_AUTH_SESSION *auth)
+{
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   TPM_AUTHDATA hm;
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac( (BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         hm);
+
+   // Compare correct HMAC with provided one.
+   if (memcmp(hm, auth->HMAC, sizeof(TPM_DIGEST)) == 0) { // 0 indicates equality
+      return TPM_SUCCESS;
+   } else {
+      vtpmlogerror(VTPM_LOG_TPM, "Auth Session verification failed!\n");
+      return TPM_AUTHFAIL;
+   }
+}
+
+
+
+// ------------------------------------------------------------------
+// Authorization Commands
+// ------------------------------------------------------------------
+
+TPM_RESULT TPM_OIAP(TPM_AUTH_SESSION*   auth)  // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OIAP);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = TRUE;
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OIAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_OSAP(TPM_ENTITY_TYPE  entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth)
+{
+   BYTE* nonceOddOSAP;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OSAP);
+
+   ptr = pack_TPM_ENTITY_TYPE(ptr, entityType);
+   ptr = pack_UINT32(ptr, entityValue);
+
+   // nonceOddOSAP
+   nonceOddOSAP = ptr;
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   //Calculate session secret
+   sha1_context ctx;
+   sha1_hmac_starts(&ctx, *usageAuth, TPM_DIGEST_SIZE);
+   sha1_hmac_update(&ctx, ptr, TPM_DIGEST_SIZE); //ptr = nonceEvenOSAP
+   sha1_hmac_update(&ctx, nonceOddOSAP, TPM_DIGEST_SIZE);
+   sha1_hmac_finish(&ctx, *sharedSecret);
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = FALSE;
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OSAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth)   // in, out
+{
+   int keyAlloced = 0;
+   tpmrsa_context ek_rsa = TPMRSA_CTX_INIT;
+
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_TakeOwnership);
+   TPM_AUTH_BEGIN();
+
+   tpmrsa_set_pubkey(&ek_rsa,
+         pubEK->pubKey.key, pubEK->pubKey.keyLength,
+         pubEK->algorithmParms.parms.rsa.exponent,
+         pubEK->algorithmParms.parms.rsa.exponentSize);
+
+   /* Pack the protocol ID */
+   ptr = pack_UINT16(ptr, TPM_PID_OWNER);
+
+   /* Pack the encrypted owner auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) ownerAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the encrypted srk auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) srkAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the Srk key */
+   ptr = pack_TPM_KEY(ptr, inSrk);
+
+   /* Hash everything up to here */
+   TPM_AUTH_HASH();
+
+   /* Generate the authorization */
+   TPM_AUTH1_GEN(ownerAuth, auth);
+
+   /* Send the command to the TPM */
+   TPM_TRANSMIT();
+   /* Unpack and validate the header */
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   if(outSrk != NULL) {
+      /* If the caller wants a copy of the SRK, return it */
+      keyAlloced = 1;
+      ptr = unpack_TPM_KEY(ptr, outSrk, UNPACK_ALLOC);
+   } else {
+      /* otherwise just parse past it */
+      TPM_KEY temp;
+      ptr = unpack_TPM_KEY(ptr, &temp, UNPACK_ALIAS);
+   }
+
+   /* Hash the output key */
+   TPM_AUTH_HASH();
+
+   /* Verify authorization */
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(outSrk);
+   }
+egress:
+   tpmrsa_free(&ek_rsa);
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_DisablePubekRead);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(ownerAuth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_TerminateHandle(TPM_AUTHHANDLE  handle)  // in
+{
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Terminate_Handle);
+
+   ptr = pack_TPM_AUTHHANDLE(ptr, handle);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM_TerminateHandle\n", handle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Extend( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST  inDigest, // in
+      TPM_PCRVALUE*  outDigest) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Extend);
+
+   ptr = pack_TPM_PCRINDEX(ptr, pcrNum);
+   ptr = pack_TPM_DIGEST(ptr, &inDigest);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_TPM_PCRVALUE(ptr, outDigest);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Seal(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealedDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      )
+{
+   int dataAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_Seal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   xorEncrypt(osapSharedSecret, &pubAuth->NonceEven,
+         sealedDataAuth, ptr,
+         NULL, NULL);
+   ptr += sizeof(TPM_ENCAUTH);
+
+   ptr = pack_UINT32(ptr, pcrInfoSize);
+   ptr = pack_TPM_PCR_INFO(ptr, pcrInfo);
+
+   ptr = pack_UINT32(ptr, inDataSize);
+   ptr = pack_BUFFER(ptr, inData, inDataSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pubAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_TPM_STORED_DATA(ptr, sealedData, UNPACK_ALLOC);
+   dataAlloced = 1;
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pubAuth);
+
+   goto egress;
+abort_egress:
+   if(dataAlloced) {
+      free_TPM_STORED_DATA(sealedData);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pubAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Unseal(
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH2_COMMAND, TPM_ORD_Unseal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_STORED_DATA(ptr, sealedData);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(key_usage_auth, keyAuth);
+   TPM_AUTH2_GEN(data_usage_auth, dataAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, outSize);
+   ptr = unpack_ALLOC(ptr, out, *outSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(key_usage_auth, keyAuth);
+   TPM_AUTH2_VERIFY(data_usage_auth, dataAuth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(keyAuth);
+   TPM_AUTH_ERR_CHECK(dataAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key,
+      const BYTE* in,
+      UINT32 ilen,
+      BYTE* out)
+{
+   TPM_RESULT status;
+   tpmrsa_context rsa = TPMRSA_CTX_INIT;
+   TPM_BOUND_DATA boundData;
+   uint8_t plain[TCPA_MAX_BUFFER_LENGTH];
+   BYTE* ptr = plain;
+
+   vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);
+
+   tpmrsa_set_pubkey(&rsa,
+         key->pubKey.key, key->pubKey.keyLength,
+         key->algorithmParms.parms.rsa.exponent,
+         key->algorithmParms.parms.rsa.exponentSize);
+
+   // Fill boundData's accessory information
+   boundData.ver = TPM_STRUCT_VER_1_1;
+   boundData.payload = TPM_PT_BIND;
+   boundData.payloadData = (BYTE*)in;
+
+   // Marshal the bound data object
+   ptr = pack_TPM_BOUND_DATA(ptr, &boundData, ilen);
+
+   // Encrypt the data
+   TPMTRYRETURN(tpmrsa_pub_encrypt_oaep(&rsa,
+            ctr_drbg_random, &vtpm_globals.ctr_drbg,
+            ptr - plain,
+            plain,
+            out));
+
+abort_egress:
+   tpmrsa_free(&rsa);
+   return status;
+
+}
+
+TPM_RESULT TPM_UnBind(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //
+      UINT32* olen, //
+      BYTE*    out, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_UnBind);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_UINT32(ptr, ilen);
+   ptr = pack_BUFFER(ptr, in, ilen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, olen);
+   if(*olen > ilen) {
+      vtpmlogerror(VTPM_LOG_TPM, "Output length > input length!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+   ptr = unpack_BUFFER(ptr, out, *olen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_CreateWrapKey(
+      TPM_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in, out
+      TPM_AUTH_SESSION*   pAuth)    // in, out
+{
+   int keyAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_CreateWrapKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hWrappingKey);
+
+   TPM_AUTH_SKIP();
+
+   //Encrypted auths
+   xorEncrypt(osapSharedSecret, &pAuth->NonceEven,
+         dataUsageAuth, ptr,
+         dataMigrationAuth, ptr + sizeof(TPM_ENCAUTH));
+   ptr += sizeof(TPM_ENCAUTH) * 2;
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   keyAlloced = 1;
+   ptr = unpack_TPM_KEY(ptr, key, UNPACK_ALLOC);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pAuth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(key);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pAuth);
+   return status;
+}
+
+TPM_RESULT TPM_LoadKey(
+      TPM_KEY_HANDLE  parentHandle, //
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_LoadKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, keyHandle);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key Handle: 0x%x opened by TPM_LoadKey\n", *keyHandle);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_EvictKey( TPM_KEY_HANDLE  hKey)  // in
+{
+   if(hKey == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_EvictKey);
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hKey);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key handle: 0x%x closed by TPM_EvictKey\n", hKey);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle,
+      TPM_RESOURCE_TYPE rt) {
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_FlushSpecific);
+
+   ptr = pack_TPM_HANDLE(ptr, handle);
+   ptr = pack_TPM_RESOURCE_TYPE(ptr, rt);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetRandom( UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetRandom);
+
+   // check input params
+   if (bytesRequested == NULL || randomBytes == NULL){
+      return TPM_BAD_PARAMETER;
+   }
+
+   ptr = pack_UINT32(ptr, *bytesRequested);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, bytesRequested);
+   ptr = unpack_BUFFER(ptr, randomBytes, *bytesRequested);
+
+abort_egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_ReadPubek(
+      TPM_PUBKEY* pubEK //out
+      )
+{
+   BYTE* antiReplay = NULL;
+   BYTE* kptr = NULL;
+   BYTE digest[TPM_DIGEST_SIZE];
+   sha1_context ctx;
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_ReadPubek);
+
+   //antiReplay nonce
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   antiReplay = ptr;
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   //unpack and allocate the key
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   //Verify the checksum
+   sha1_starts(&ctx);
+   sha1_update(&ctx, kptr, ptr - kptr);
+   sha1_update(&ctx, antiReplay, TPM_DIGEST_SIZE);
+   sha1_finish(&ctx, digest);
+
+   //ptr points to the checksum computed by TPM
+   if(memcmp(digest, ptr, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_TPM, "TPM_ReadPubek: Checksum returned by TPM was invalid!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr != NULL) { //If we unpacked the pubEK, we have to free it
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_SaveState(void)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_SaveState);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetCapability);
+
+   ptr = pack_TPM_CAPABILITY_AREA(ptr, capArea);
+   ptr = pack_UINT32(ptr, subCapSize);
+   ptr = pack_BUFFER(ptr, subCap, subCapSize);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, respSize);
+   ptr = unpack_ALLOC(ptr, resp, *respSize);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK)
+{
+   BYTE* kptr = NULL;
+   sha1_context ctx;
+   TPM_DIGEST checksum;
+   TPM_DIGEST hash;
+   TPM_NONCE antiReplay;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_CreateEndorsementKeyPair);
+
+   //Make anti replay nonce
+   vtpmmgr_rand(antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   ptr = pack_TPM_NONCE(ptr, &antiReplay);
+   ptr = pack_TPM_KEY_PARMS(ptr, keyInfo);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   sha1_starts(&ctx);
+
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   /* Hash the pub key blob */
+   sha1_update(&ctx, kptr, ptr - kptr);
+   ptr = unpack_TPM_DIGEST(ptr, &checksum);
+
+   sha1_update(&ctx, antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   sha1_finish(&ctx, hash.digest);
+   if(memcmp(checksum.digest, hash.digest, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "TPM_CreateEndorsementKeyPair: Checksum verification failed!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr) {
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   UINT32 i;
+   vtpmloginfo(VTPM_LOG_TXDATA, "Sending buffer = 0x");
+   for(i = 0 ; i < insize ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", in[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   ssize_t size = 0;
+
+   // send the request
+   size = write (vtpm_globals.tpm_fd, in, insize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "write() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+   else if ((UINT32) size < insize) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "Wrote %d instead of %d bytes!\n", (int) size, insize);
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   // read the response
+   size = read (vtpm_globals.tpm_fd, out, *outsize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "read() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   vtpmloginfo(VTPM_LOG_TXDATA, "Receiving buffer = 0x");
+   for(i = 0 ; i < (UINT32) size ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", out[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   *outsize = size;
+   goto egress;
+
+abort_egress:
+egress:
+   return status;
+}
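TPM_TransmitData logs each buffer with a `"%2.2x "` loop before and after the I/O. The formatting those loops produce can be sketched standalone (the `hex_dump` helper is hypothetical, not part of this patch):

```c
#include <stdio.h>

/* Hypothetical helper mirroring the "%2.2x " logging loops in
 * TPM_TransmitData: two hex digits per byte, space separated.
 * `out` must have room for 3*len + 1 bytes. */
static void hex_dump(char *out, const unsigned char *buf, unsigned len)
{
    unsigned i;
    for (i = 0; i < len; i++)
        sprintf(out + 3 * i, "%2.2x ", (unsigned)buf[i]);
    out[3 * len] = '\0';
}
```

A logger can then emit the whole line in one call instead of one call per byte.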
diff --git a/stubdom/vtpmmgr/tpm.h b/stubdom/vtpmmgr/tpm.h
new file mode 100644
index 0000000..304e145
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.h
@@ -0,0 +1,218 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005/2006, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TPM_H__
+#define __TPM_H__
+
+#include "tcg.h"
+
+// ------------------------------------------------------------------
+// Exposed API
+// ------------------------------------------------------------------
+
+// TPM v1.1B Command Set
+
+// Authorization
+TPM_RESULT TPM_OIAP(
+      TPM_AUTH_SESSION*   auth //out
+      );
+
+TPM_RESULT TPM_OSAP (
+      TPM_ENTITY_TYPE entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth);
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth   // in, out
+      );
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth
+      );
+
+TPM_RESULT TPM_TerminateHandle ( TPM_AUTHHANDLE  handle  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific ( TPM_HANDLE  handle,  // in
+      TPM_RESOURCE_TYPE resourceType //in
+      );
+
+// TPM Mandatory
+TPM_RESULT TPM_Extend ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST   inDigest, // in
+      TPM_PCRVALUE*   outDigest // out
+      );
+
+TPM_RESULT TPM_PcrRead ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_PCRVALUE*  outDigest // out
+      );
+
+TPM_RESULT TPM_Quote ( TCS_KEY_HANDLE  keyHandle,  // in
+      TPM_NONCE   antiReplay,  // in
+      UINT32*    PcrDataSize, // in, out
+      BYTE**    PcrData,  // in, out
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_Seal(
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      );
+
+TPM_RESULT TPM_Unseal (
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirWriteAuth ( TPM_DIRINDEX  dirIndex,  // in
+      TPM_DIRVALUE  newContents, // in
+      TPM_AUTH_SESSION*   ownerAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirRead ( TPM_DIRINDEX  dirIndex, // in
+      TPM_DIRVALUE*  dirValue // out
+      );
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key, //in
+      const BYTE* in, //in
+      UINT32 ilen, //in
+      BYTE* out //out, must be at least cipher block size
+      );
+
+TPM_RESULT TPM_UnBind (
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //
+      UINT32*   outDataSize, // out
+      BYTE*    outData, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      );
+
+TPM_RESULT TPM_CreateWrapKey (
+      TCS_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in
+      TPM_AUTH_SESSION*   pAuth    // in, out
+      );
+
+TPM_RESULT TPM_LoadKey (
+      TPM_KEY_HANDLE  parentHandle, //
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth
+      );
+
+TPM_RESULT TPM_GetPubKey (  TCS_KEY_HANDLE  hKey,   // in
+      TPM_AUTH_SESSION*   pAuth,   // in, out
+      UINT32*    pcPubKeySize, // out
+      BYTE**    prgbPubKey  // out
+      );
+
+TPM_RESULT TPM_EvictKey ( TCS_KEY_HANDLE  hKey  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle, //in
+      TPM_RESOURCE_TYPE rt //in
+      );
+
+TPM_RESULT TPM_Sign ( TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    areaToSignSize, // in
+      BYTE*    areaToSign,  // in
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_GetRandom (  UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes  // out
+      );
+
+TPM_RESULT TPM_StirRandom (  UINT32    inDataSize, // in
+      BYTE*    inData  // in
+      );
+
+TPM_RESULT TPM_ReadPubek (
+      TPM_PUBKEY* pubEK //out
+      );
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp);
+
+TPM_RESULT TPM_SaveState(void);
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK);
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize);
+
+#endif /* __TPM_H__ */
diff --git a/stubdom/vtpmmgr/tpmrsa.c b/stubdom/vtpmmgr/tpmrsa.c
new file mode 100644
index 0000000..56094e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.c
@@ -0,0 +1,175 @@
+/*
+ *  The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2011, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+/*
+ *  RSA was designed by Ron Rivest, Adi Shamir and Len Adleman.
+ *
+ *  http://theory.lcs.mit.edu/~rivest/rsapaper.pdf
+ *  http://www.cacr.math.uwaterloo.ca/hac/about/chap8.pdf
+ */
+
+#include "tcg.h"
+#include "polarssl/sha1.h"
+
+#include <stdlib.h>
+#include <stdio.h>
+
+#include "tpmrsa.h"
+
+#define HASH_LEN 20
+
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen) {
+
+   tpmrsa_free(ctx);
+
+   if(explen == 0) { // Default e = 2^16 + 1
+      mpi_lset(&ctx->E, 65537);
+   } else {
+      mpi_read_binary(&ctx->E, exponent, explen);
+   }
+   mpi_read_binary(&ctx->N, key, keylen);
+
+   ctx->len = ( mpi_msb(&ctx->N) + 7) >> 3;
+}
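The `ctx->len` computation rounds the modulus bit count reported by `mpi_msb` up to whole bytes. That rounding can be isolated as a small sketch (the `bits_to_bytes` name is ours, not the patch's):

```c
#include <stddef.h>

/* The same ceiling-divide tpmrsa_set_pubkey applies to mpi_msb():
 * round a bit count up to the number of bytes needed to hold it. */
static size_t bits_to_bytes(size_t bits)
{
    return (bits + 7) >> 3;
}
```

So a 2048-bit EK modulus yields `ctx->len == 256`, even if the top byte has leading zero bits.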
+
+static TPM_RESULT tpmrsa_public( tpmrsa_context *ctx,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   size_t olen;
+   mpi T;
+
+   mpi_init( &T );
+
+   MPI_CHK( mpi_read_binary( &T, input, ctx->len ) );
+
+   if( mpi_cmp_mpi( &T, &ctx->N ) >= 0 )
+   {
+      mpi_free( &T );
+      return TPM_ENCRYPT_ERROR;
+   }
+
+   olen = ctx->len;
+   MPI_CHK( mpi_exp_mod( &T, &T, &ctx->E, &ctx->N, &ctx->RN ) );
+   MPI_CHK( mpi_write_binary( &T, output, olen ) );
+
+cleanup:
+
+   mpi_free( &T );
+
+   if( ret != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   return TPM_SUCCESS;
+}
+
+static void mgf_mask( unsigned char *dst, int dlen, unsigned char *src, int slen)
+{
+   unsigned char mask[HASH_LEN];
+   unsigned char counter[4] = {0, 0, 0, 0};
+   int i;
+   sha1_context mctx;
+
+   //We always hash the src with the counter, so save the partial hash
+   sha1_starts(&mctx);
+   sha1_update(&mctx, src, slen);
+
+   // Generate and apply dbMask
+   while(dlen > 0) {
+      //Copy the sha1 context
+      sha1_context ctx = mctx;
+
+      //compute hash for input || counter
+      sha1_update(&ctx, counter, sizeof(counter));
+      sha1_finish(&ctx, mask);
+
+      //Apply the mask
+      for(i = 0; i < (dlen < HASH_LEN ? dlen : HASH_LEN); ++i) {
+         *(dst++) ^= mask[i];
+      }
+
+      //Increment counter
+      ++counter[3];
+
+      dlen -= HASH_LEN;
+   }
+}
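Because `mgf_mask` only XORs a counter-mode hash stream into `dst`, applying it twice with the same source restores the original data — which is exactly why OAEP decoding can reuse the same routine. A self-contained sketch of that property, using a toy mixing function as a stand-in for the SHA-1 the real code uses:

```c
#include <stddef.h>
#include <stdint.h>

#define TOY_HASH_LEN 4   /* stand-in for SHA-1's 20-byte output */

/* Toy stand-in for sha1(src || counter): FNV-1a style mixing.  The
 * real mgf_mask() uses PolarSSL's sha1_* routines here. */
static void toy_hash(const uint8_t *src, size_t slen,
                     const uint8_t counter[4], uint8_t out[TOY_HASH_LEN])
{
    uint32_t h = 0x811c9dc5u;
    size_t i;
    for (i = 0; i < slen; i++)
        h = (h ^ src[i]) * 16777619u;
    for (i = 0; i < 4; i++)
        h = (h ^ counter[i]) * 16777619u;
    for (i = 0; i < TOY_HASH_LEN; i++)
        out[i] = (uint8_t)(h >> (8 * i));
}

/* Same shape as mgf_mask() above: XOR hash(src || counter) over dst
 * in counter mode.  XOR makes the operation its own inverse. */
static void toy_mgf_mask(uint8_t *dst, int dlen, const uint8_t *src, int slen)
{
    uint8_t counter[4] = {0, 0, 0, 0};
    while (dlen > 0) {
        uint8_t mask[TOY_HASH_LEN];
        int i, n = dlen < TOY_HASH_LEN ? dlen : TOY_HASH_LEN;
        toy_hash(src, (size_t)slen, counter, mask);
        for (i = 0; i < n; i++)
            *(dst++) ^= mask[i];
        ++counter[3];
        dlen -= n;
    }
}
```

Note that, like the real routine, only the last counter byte is incremented, which is fine for the short masks OAEP needs.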
+
+/*
+ * Add the message padding, then do an RSA operation
+ */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   int olen;
+   unsigned char* seed = output + 1;
+   unsigned char* db = output + HASH_LEN + 1;
+
+   olen = ctx->len - 1;
+
+   if( f_rng == NULL )
+      return TPM_ENCRYPT_ERROR;
+
+   if( ilen > olen - 2 * HASH_LEN - 1)
+      return TPM_ENCRYPT_ERROR;
+
+   output[0] = 0;
+
+   //Encoding parameter p
+   sha1((unsigned char*)"TCPA", 4, db);
+
+   //PS
+   memset(db + HASH_LEN, 0,
+         olen - ilen - 2 * HASH_LEN - 1);
+
+   //constant 1 byte
+   db[olen - ilen - HASH_LEN - 1] = 0x01;
+
+   //input string
+   memcpy(db + olen - ilen - HASH_LEN,
+         input, ilen);
+
+   //Generate random seed
+   if( ( ret = f_rng( p_rng, seed, HASH_LEN ) ) != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   // maskedDB: Apply dbMask to DB
+   mgf_mask( db, olen - HASH_LEN, seed, HASH_LEN);
+
+   // maskedSeed: Apply seedMask to seed
+   mgf_mask( seed, HASH_LEN, db, olen - HASH_LEN);
+
+   // Do the crypto op
+   return tpmrsa_public(ctx, output, output);
+}
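The padding routine above lays the encoded message out as EM = 0x00 || maskedSeed || maskedDB, where DB = pHash("TCPA") || PS || 0x01 || M. The size bookkeeping implied by its length check and separator index can be sketched as follows (the helper names are ours, for illustration):

```c
#define HASH_LEN 20  /* SHA-1 digest size, as in tpmrsa.c */

/* Largest message OAEP can carry in a modulus of `modbytes` bytes,
 * per the "ilen > olen - 2 * HASH_LEN - 1" bound above. */
static int oaep_max_ilen(int modbytes)
{
    int olen = modbytes - 1;          /* leading 0x00 byte */
    return olen - 2 * HASH_LEN - 1;   /* pHash, seed, 0x01 marker */
}

/* Index of the 0x01 separator within DB for a message of ilen bytes,
 * matching the db[olen - ilen - HASH_LEN - 1] assignment above. */
static int oaep_sep_offset(int modbytes, int ilen)
{
    int olen = modbytes - 1;
    return olen - ilen - HASH_LEN - 1;
}
```

For a 2048-bit EK this gives 214 bytes of capacity, comfortably above the 20-byte TPM_SECRET values TPM_TakeOwnership encrypts.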
diff --git a/stubdom/vtpmmgr/tpmrsa.h b/stubdom/vtpmmgr/tpmrsa.h
new file mode 100644
index 0000000..59579e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.h
@@ -0,0 +1,67 @@
+/**
+ * \file rsa.h
+ *
+ * \brief The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2010, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+#ifndef TPMRSA_H
+#define TPMRSA_H
+
+#include "tcg.h"
+#include <polarssl/bignum.h>
+
+/* tpm software key */
+typedef struct
+{
+    size_t len;                 /*!<  size(N) in chars  */
+
+    mpi N;                      /*!<  public modulus    */
+    mpi E;                      /*!<  public exponent   */
+
+    mpi RN;                     /*!<  cached R^2 mod N  */
+}
+tpmrsa_context;
+
+#define TPMRSA_CTX_INIT { 0, {0, 0, NULL}, {0, 0, NULL}, {0, 0, NULL}}
+
+/* Setup the rsa context using tpm public key data */
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen);
+
+/* Do rsa public crypto */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output );
+
+/* free tpmrsa key */
+static inline void tpmrsa_free( tpmrsa_context *ctx ) {
+   mpi_free( &ctx->RN ); mpi_free( &ctx->E  ); mpi_free( &ctx->N  );
+}
+
+#endif /* TPMRSA_H */
diff --git a/stubdom/vtpmmgr/uuid.h b/stubdom/vtpmmgr/uuid.h
new file mode 100644
index 0000000..4737645
--- /dev/null
+++ b/stubdom/vtpmmgr/uuid.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_UUID_H
+#define VTPMMGR_UUID_H
+
+#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
+#define UUID_FMTLEN ((2*16)+4) /* 16 hex bytes plus 4 hyphens */
+#define UUID_BYTES(uuid) uuid[0], uuid[1], uuid[2], uuid[3], \
+                                uuid[4], uuid[5], uuid[6], uuid[7], \
+                                uuid[8], uuid[9], uuid[10], uuid[11], \
+                                uuid[12], uuid[13], uuid[14], uuid[15]
+
+
+typedef uint8_t uuid_t[16];
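The format macros expand a raw `uuid_t` into the sixteen printf arguments needed by `UUID_FMT`. A minimal sketch of their intended use (the `uuid_to_str` helper is hypothetical; the macros and typedef are copied from this header so the sketch stands alone):

```c
#include <stdint.h>
#include <stdio.h>

/* Copies of the header's definitions so this compiles standalone. */
#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
#define UUID_FMTLEN ((2*16)+4)
#define UUID_BYTES(uuid) uuid[0], uuid[1], uuid[2], uuid[3], \
                         uuid[4], uuid[5], uuid[6], uuid[7], \
                         uuid[8], uuid[9], uuid[10], uuid[11], \
                         uuid[12], uuid[13], uuid[14], uuid[15]

typedef uint8_t uuid_t[16];

/* Hypothetical helper: render a uuid_t as the canonical 36-char string. */
static void uuid_to_str(char out[UUID_FMTLEN + 1], const uuid_t u)
{
    snprintf(out, UUID_FMTLEN + 1, UUID_FMT, UUID_BYTES(u));
}
```

The same macro pair works directly in a log call, e.g. `vtpmloginfo(..., UUID_FMT "\n", UUID_BYTES(uuid))`.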
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
new file mode 100644
index 0000000..f82a2a9
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -0,0 +1,152 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <inttypes.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "marshal.h"
+#include "log.h"
+#include "vtpm_storage.h"
+#include "vtpmmgr.h"
+#include "tpm.h"
+#include "tcg.h"
+
+static TPM_RESULT vtpmmgr_SaveHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+
+   if(tpmcmd->req_len != VTPM_COMMAND_HEADER_SIZE + HASHKEYSZ) {
+      vtpmlogerror(VTPM_LOG_VTPM, "VTPM_ORD_SAVEHASHKEY bad hashkey length!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Do the command */
+   TPMTRYRETURN(vtpm_storage_save_hashkey(uuid, tpmcmd->req + VTPM_COMMAND_HEADER_SIZE));
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, VTPM_COMMAND_HEADER_SIZE, status);
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   return status;
+}
+
+static TPM_RESULT vtpmmgr_LoadHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   TPMTRYRETURN(vtpm_storage_load_hashkey(uuid, tpmcmd->resp + VTPM_COMMAND_HEADER_SIZE));
+
+   tpmcmd->resp_len += HASHKEYSZ;
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, tpmcmd->resp_len, status);
+
+   return status;
+}
+
+
+TPM_RESULT vtpmmgr_handle_cmd(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_TAG tag;
+   UINT32 size;
+   TPM_COMMAND_CODE ord;
+
+   unpack_TPM_RQU_HEADER(tpmcmd->req,
+         &tag, &size, &ord);
+
+   /* Handle the command now */
+   switch(tag) {
+      case VTPM_TAG_REQ:
+         //This is a vTPM command
+         switch(ord) {
+            case VTPM_ORD_SAVEHASHKEY:
+               return vtpmmgr_SaveHashKey(uuid, tpmcmd);
+            case VTPM_ORD_LOADHASHKEY:
+               return vtpmmgr_LoadHashKey(uuid, tpmcmd);
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "Invalid vTPM Ordinal %" PRIu32 "\n", ord);
+               status = TPM_BAD_ORDINAL;
+         }
+         break;
+      case TPM_TAG_RQU_COMMAND:
+      case TPM_TAG_RQU_AUTH1_COMMAND:
+      case TPM_TAG_RQU_AUTH2_COMMAND:
+         //This is a TPM passthrough command
+         switch(ord) {
+            case TPM_ORD_GetRandom:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
+               break;
+            case TPM_ORD_PcrRead:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
+               break;
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "TPM Disallowed Passthrough ord=%" PRIu32 "\n", ord);
+               status = TPM_DISABLED_CMD;
+               goto abort_egress;
+         }
+
+         size = TCPA_MAX_BUFFER_LENGTH;
+         TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len, tpmcmd->resp, &size));
+         tpmcmd->resp_len = size;
+
+         unpack_TPM_RESULT(tpmcmd->resp + sizeof(TPM_TAG) + sizeof(UINT32), &status);
+         return status;
+
+      default:
+         vtpmlogerror(VTPM_LOG_VTPM, "Invalid tag=%" PRIu16 "\n", tag);
+         status = TPM_BADTAG;
+   }
+
+abort_egress:
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         tag + 3, tpmcmd->resp_len, status); /* request tag + 3 == matching response tag */
+
+   return status;
+}
diff --git a/stubdom/vtpmmgr/vtpm_manager.h b/stubdom/vtpmmgr/vtpm_manager.h
new file mode 100644
index 0000000..a2bbcca
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_manager.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_MANAGER_H
+#define VTPM_MANAGER_H
+
+#define VTPM_TAG_REQ 0x01c1
+#define VTPM_TAG_RSP 0x01c4
+#define COMMAND_BUFFER_SIZE 4096
+
+// Header size
+#define VTPM_COMMAND_HEADER_SIZE (2 + 4 + 4)
+
+//************************ Command Codes ****************************
+#define VTPM_ORD_BASE       0x0000
+#define VTPM_PRIV_MASK      0x01000000 // Privileged VTPM Command
+#define VTPM_PRIV_BASE      (VTPM_ORD_BASE | VTPM_PRIV_MASK)
+
+// Non-privileged VTPM Commands (from DMIs)
+#define VTPM_ORD_SAVEHASHKEY      (VTPM_ORD_BASE + 1) // DMI stores its hash and symmetric key for persistent storage
+#define VTPM_ORD_LOADHASHKEY      (VTPM_ORD_BASE + 2) // DMI requests its stored hash and symmetric key
+
+//************************ Return Codes ****************************
+#define VTPM_SUCCESS               0
+#define VTPM_FAIL                  1
+#define VTPM_UNSUPPORTED           2
+#define VTPM_FORBIDDEN             3
+#define VTPM_RESTORE_CONTEXT_FAILED    4
+#define VTPM_INVALID_REQUEST       5
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.c b/stubdom/vtpmmgr/vtpm_storage.c
new file mode 100644
index 0000000..abb0dba
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.c
@@ -0,0 +1,794 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+/***************************************************************
+ * DISK IMAGE LAYOUT
+ * *************************************************************
+ * All data is stored in BIG ENDIAN format
+ * *************************************************************
+ * Section 1: Header
+ *
+ * 10 bytes 	 id			ID String "VTPMMGRDOM"
+ * uint32_t	 version	        Disk Image version number (current == 1)
+ * uint32_t      storage_key_len	Length of the storage Key
+ * TPM_KEY       storage_key		Marshalled TPM_KEY structure (See TPM spec v2)
+ * RSA_BLOCK     aes_crypto             Encrypted aes key data (RSA_CIPHER_SIZE bytes), bound by the storage_key
+ *  BYTE[32] aes_key                    Aes key for encrypting the uuid table
+ *  uint32_t cipher_sz                  Encrypted size of the uuid table
+ *
+ * *************************************************************
+ * Section 2: Uuid Table
+ *
+ * This table is encrypted by the aes_key in the header. The cipher text size is just
+ * large enough to hold all of the entries plus required padding.
+ *
+ * Each entry is as follows
+ * BYTE[16] uuid                       Uuid of a vtpm that is stored on this disk
+ * uint32_t offset                     Disk offset where the vtpm data is stored
+ *
+ * *************************************************************
+ * Section 3: Vtpm Table
+ *
+ * The rest of the disk stores vtpms. Each vtpm is an RSA_BLOCK encrypted
+ * by the storage key. Each vtpm must exist on an RSA_BLOCK aligned boundary,
+ * starting at the first RSA_BLOCK aligned offset after the uuid table.
+ * As the uuid table grows, vtpms may be relocated.
+ *
+ * RSA_BLOCK     vtpm_crypto          Vtpm data encrypted by storage_key
+ *   BYTE[20]    hash                 Sha1 hash of vtpm encrypted data
+ *   BYTE[16]    vtpm_aes_key         Encryption key for vtpm data
+ *
+ * *************************************************************
+ */
+#define DISKVERS 1
+#define IDSTR "VTPMMGRDOM"
+#define IDSTRLEN 10
+#define AES_BLOCK_SIZE 16
+#define AES_KEY_BITS 256
+#define AES_KEY_SIZE (AES_KEY_BITS/8)
+#define BUF_SIZE 4096
+
+#define UUID_TBL_ENT_SIZE (sizeof(uuid_t) + sizeof(uint32_t))
+
+#define HEADERSZ (10 + 4 + 4)
+
+#define TRY_READ(buf, size, msg) do {\
+   int rc; \
+   if((rc = read(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "read() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#define TRY_WRITE(buf, size, msg) do {\
+   int rc; \
+   if((rc = write(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "write() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <mini-os/byteorder.h>
+#include <polarssl/aes.h>
+
+#include "vtpm_manager.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpm.h"
+#include "uuid.h"
+
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+
+#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )
+#define MIN(a,b) ( ((a) < (b)) ? (a) : (b) )
+
+/* blkfront device objects */
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+struct Vtpm {
+   uuid_t uuid;
+   int offset;
+};
+struct Storage {
+   int aes_offset;
+   int uuid_offset;
+   int end_offset;
+
+   int num_vtpms;
+   int num_vtpms_alloced;
+   struct Vtpm* vtpms;
+};
+
+/* Global storage data */
+static struct Storage g_store = {
+   .vtpms = NULL,
+};
+
+static int get_offset(void) {
+   return lseek(blkfront_fd, 0, SEEK_CUR);
+}
+
+static void reset_store(void) {
+   g_store.aes_offset = 0;
+   g_store.uuid_offset = 0;
+   g_store.end_offset = 0;
+
+   g_store.num_vtpms = 0;
+   g_store.num_vtpms_alloced = 0;
+   free(g_store.vtpms);
+   g_store.vtpms = NULL;
+}
+
+static int vtpm_get_index(const uuid_t uuid) {
+   int st = 0;
+   int ed = g_store.num_vtpms-1;
+   while(st <= ed) {
+      int mid = ((unsigned int)st + (unsigned int)ed) >> 1; //avoid overflow
+      int c = memcmp(uuid, &g_store.vtpms[mid].uuid, sizeof(uuid_t));
+      if(c == 0) {
+         return mid;
+      } else if(c > 0) {
+         st = mid + 1;
+      } else {
+         ed = mid - 1;
+      }
+   }
+   return -(st + 1);
+}
+
+static void vtpm_add(const uuid_t uuid, int offset, int index) {
+   /* Realloc more space if needed */
+   if(g_store.num_vtpms >= g_store.num_vtpms_alloced) {
+      g_store.num_vtpms_alloced += 16;
+      g_store.vtpms = realloc(
+            g_store.vtpms,
+            sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+   }
+
+   /* Move everybody after the new guy */
+   for(int i = g_store.num_vtpms; i > index; --i) {
+      g_store.vtpms[i] = g_store.vtpms[i-1];
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Registered vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+
+   /* Finally add new one */
+   memcpy(g_store.vtpms[index].uuid, uuid, sizeof(uuid_t));
+   g_store.vtpms[index].offset = offset;
+   ++g_store.num_vtpms;
+}
+
+#if 0
+static void vtpm_remove(int index) {
+   for(int i = index; i < g_store.num_vtpms - 1; ++i) {
+      g_store.vtpms[i] = g_store.vtpms[i+1];
+   }
+   --g_store.num_vtpms;
+}
+#endif
+
+static int pack_uuid_table(uint8_t* table, int size, int* nvtpms) {
+   uint8_t* ptr = table;
+   while(*nvtpms < g_store.num_vtpms && size >= 0)
+   {
+      /* Pack the uuid */
+      memcpy(ptr, (uint8_t*)g_store.vtpms[*nvtpms].uuid, sizeof(uuid_t));
+      ptr+= sizeof(uuid_t);
+
+      /* Pack the offset */
+      ptr = pack_UINT32(ptr, g_store.vtpms[*nvtpms].offset);
+
+      ++*nvtpms;
+      size -= UUID_TBL_ENT_SIZE;
+   }
+   return ptr - table;
+}
+
+/* Extract the uuids */
+static int extract_uuid_table(uint8_t* table, int size) {
+   uint8_t* ptr = table;
+   for(; size >= (int)UUID_TBL_ENT_SIZE; size -= UUID_TBL_ENT_SIZE) {
+      int index;
+      uint32_t v32;
+
+      /* uuid_t is just an array of bytes, so we can use the pointer directly */
+      uint8_t* uuid = ptr;
+      ptr += sizeof(uuid_t);
+
+      /* Get the offset of the key */
+      ptr = unpack_UINT32(ptr, &v32);
+
+      /* Insert the new vtpm in sorted order */
+      if((index = vtpm_get_index(uuid)) >= 0) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Vtpm (" UUID_FMT ") exists multiple times! ignoring...\n", UUID_BYTES(uuid));
+         continue;
+      }
+      index = -index -1;
+
+      vtpm_add(uuid, v32, index);
+
+   }
+   return ptr - table;
+}
+
+static void vtpm_decrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* cipher,
+      uint8_t* plain,
+      int cipher_sz,
+      int* overlap)
+{
+   int bytes_ext;
+   /* Decrypt */
+   aes_crypt_cbc(aes, AES_DECRYPT,
+         cipher_sz,
+         iv, cipher, plain + *overlap);
+
+   /* Extract */
+   bytes_ext = extract_uuid_table(plain, cipher_sz + *overlap);
+
+   /* Copy left overs to the beginning */
+   *overlap = cipher_sz + *overlap - bytes_ext;
+   memcpy(plain, plain + bytes_ext, *overlap);
+}
+
+static int vtpm_encrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* plain,
+      uint8_t* cipher,
+      int block_sz,
+      int* overlap,
+      int* num_vtpms)
+{
+   int bytes_to_crypt;
+   int bytes_packed;
+
+   /* Pack the uuid table */
+   bytes_packed = *overlap + pack_uuid_table(plain + *overlap, block_sz - *overlap, num_vtpms);
+   bytes_to_crypt = MIN(bytes_packed, block_sz);
+
+   /* Add padding if we aren't on a multiple of the block size */
+   if(bytes_to_crypt & (AES_BLOCK_SIZE-1)) {
+      int oldsz = bytes_to_crypt;
+      //add padding
+      bytes_to_crypt += AES_BLOCK_SIZE - (bytes_to_crypt & (AES_BLOCK_SIZE-1));
+      //fill padding with random bytes
+      vtpmmgr_rand(plain + oldsz, bytes_to_crypt - oldsz);
+      *overlap = 0;
+   } else {
+      *overlap = bytes_packed - bytes_to_crypt;
+   }
+
+   /* Encrypt this chunk */
+   aes_crypt_cbc(aes, AES_ENCRYPT,
+            bytes_to_crypt,
+            iv, plain, cipher);
+
+   /* Copy the left over partials to the beginning */
+   memcpy(plain, plain + bytes_to_crypt, *overlap);
+
+   return bytes_to_crypt;
+}
+
+static TPM_RESULT vtpm_storage_new_vtpm(const uuid_t uuid, int index) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t plain[BUF_SIZE + AES_BLOCK_SIZE];
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr;
+   int cipher_sz;
+   aes_context aes;
+
+   /* Add new vtpm to the table */
+   vtpm_add(uuid, g_store.end_offset, index);
+   g_store.end_offset += RSA_CIPHER_SIZE;
+
+   /* Compute the new end location of the encrypted uuid table */
+   cipher_sz = AES_BLOCK_SIZE; //IV
+   cipher_sz += g_store.num_vtpms * UUID_TBL_ENT_SIZE; //uuid table
+   cipher_sz += (AES_BLOCK_SIZE - (cipher_sz & (AES_BLOCK_SIZE -1))) & (AES_BLOCK_SIZE-1); //aes padding
+
+   /* Does this overlap any key data? If so they need to be relocated */
+   int uuid_end = (g_store.uuid_offset + cipher_sz + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      if(g_store.vtpms[i].offset < uuid_end) {
+
+         vtpmloginfo(VTPM_LOG_VTPM, "Relocating vtpm data\n");
+
+         //Read the hashkey cipher text
+         lseek(blkfront_fd, g_store.vtpms[i].offset, SEEK_SET);
+         TRY_READ(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Write the cipher text to new offset
+         lseek(blkfront_fd, g_store.end_offset, SEEK_SET);
+         TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Save new offset
+         g_store.vtpms[i].offset = g_store.end_offset;
+         g_store.end_offset += RSA_CIPHER_SIZE;
+      }
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Generating a new symmetric key\n");
+
+   /* Generate an aes key */
+   TPMTRYRETURN(vtpmmgr_rand(plain, AES_KEY_SIZE));
+   aes_setkey_enc(&aes, plain, AES_KEY_BITS);
+   ptr = plain + AES_KEY_SIZE;
+
+   /* Pack the crypted size */
+   ptr = pack_UINT32(ptr, cipher_sz);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding encrypted key\n");
+
+   /* Seal the key and size */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+            plain,
+            ptr - plain,
+            buf));
+
+   /* Write the sealed key to disk */
+   lseek(blkfront_fd, g_store.aes_offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm aes key");
+
+   /* ENCRYPT AND WRITE UUID TABLE */
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Encrypting the uuid table\n");
+
+   int num_vtpms = 0;
+   int overlap = 0;
+   int bytes_crypted;
+   uint8_t iv[AES_BLOCK_SIZE];
+
+   /* Generate the iv for the first block */
+   TPMTRYRETURN(vtpmmgr_rand(iv, AES_BLOCK_SIZE));
+
+   /* Copy the iv to the cipher text buffer to be written to disk */
+   memcpy(buf, iv, AES_BLOCK_SIZE);
+   ptr = buf + AES_BLOCK_SIZE;
+
+   /* Encrypt the first block of the uuid table */
+   bytes_crypted = vtpm_encrypt_block(&aes,
+         iv, //iv
+         plain, //plaintext
+         ptr, //cipher text
+         BUF_SIZE - AES_BLOCK_SIZE,
+         &overlap,
+         &num_vtpms);
+
+   /* Write the iv followed by the crypted table*/
+   TRY_WRITE(buf, bytes_crypted + AES_BLOCK_SIZE, "vtpm uuid table");
+
+   /* Decrement the number of bytes encrypted */
+   cipher_sz -= bytes_crypted + AES_BLOCK_SIZE;
+
+   /* If there are more vtpms, encrypt and write them block by block */
+   while(cipher_sz > 0) {
+      /* Encrypt the next block of the uuid table */
+      bytes_crypted = vtpm_encrypt_block(&aes,
+               iv,
+               plain,
+               buf,
+               BUF_SIZE,
+               &overlap,
+               &num_vtpms);
+
+      /* Write the cipher text to disk */
+      TRY_WRITE(buf, bytes_crypted, "vtpm uuid table");
+
+      cipher_sz -= bytes_crypted;
+   }
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+/**************************************
+ * PUBLIC FUNCTIONS
+ * ***********************************/
+
+int vtpm_storage_init(void) {
+   struct blkfront_info info;
+   if((blkdev = init_blkfront(NULL, &info)) == NULL) {
+      return -1;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) < 0) {
+      return -1;
+   }
+   return 0;
+}
+
+void vtpm_storage_shutdown(void) {
+   reset_store();
+   close(blkfront_fd);
+}
+
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t cipher[RSA_CIPHER_SIZE];
+   uint8_t clear[RSA_CIPHER_SIZE];
+   UINT32 clear_size;
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      vtpmlogerror(VTPM_LOG_VTPM, "LoadKey failure: Unrecognized uuid! " UUID_FMT "\n", UUID_BYTES(uuid));
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Read the table entry */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_READ(cipher, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   /* Decrypt the table entry */
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            cipher,
+            &clear_size,
+            clear,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   if(clear_size < HASHKEYSZ) {
+      vtpmloginfo(VTPM_LOG_VTPM, "Decrypted Hash key size (%" PRIu32 ") was too small!\n", clear_size);
+      status = TPM_RESOURCES;
+      goto abort_egress;
+   }
+
+   memcpy(hashkey, clear, HASHKEYSZ);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loaded hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t buf[RSA_CIPHER_SIZE];
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      /* Create a new vtpm */
+      TPMTRYRETURN( vtpm_storage_new_vtpm(uuid, index) );
+   }
+
+   /* Encrypt the hash and key */
+   TPMTRYRETURN( TPM_Bind(&vtpm_globals.storage_key,
+            hashkey,
+            HASHKEYSZ,
+            buf));
+
+   /* Write to disk */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to save key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_new_header(void)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t buf[BUF_SIZE];
+   uint8_t keybuf[AES_KEY_SIZE + sizeof(uint32_t)];
+   uint8_t* ptr = buf;
+   uint8_t* sptr;
+
+   /* Clear everything first */
+   reset_store();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Creating new disk image header\n");
+
+   /*Copy the ID string */
+   memcpy(ptr, IDSTR, IDSTRLEN);
+   ptr += IDSTRLEN;
+
+   /*Copy the version */
+   ptr = pack_UINT32(ptr, DISKVERS);
+
+   /*Save the location of the key size */
+   sptr = ptr;
+   ptr += sizeof(UINT32);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saving root storage key..\n");
+
+   /* Copy the storage key */
+   ptr = pack_TPM_KEY(ptr, &vtpm_globals.storage_key);
+
+   /* Now save the size */
+   pack_UINT32(sptr, ptr - (sptr + 4));
+
+   /* Create a fake aes key and set cipher text size to 0 */
+   memset(keybuf, 0, sizeof(keybuf));
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding uuid table symmetric key..\n");
+
+   /* Save the location of the aes key */
+   g_store.aes_offset = ptr - buf;
+
+   /* Store the fake aes key and vtpm count */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+         keybuf,
+         sizeof(keybuf),
+         ptr));
+   ptr+= RSA_CIPHER_SIZE;
+
+   /* Write the header to disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_WRITE(buf, ptr-buf, "vtpm header");
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Save the end offset */
+   g_store.end_offset = (g_store.uuid_offset + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved new manager disk header.\n");
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+TPM_RESULT vtpm_storage_load_header(void)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint32_t v32;
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr = buf;
+   aes_context aes;
+
+   /* Clear everything first */
+   reset_store();
+
+   /* Read the header from disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_READ(buf, IDSTRLEN + sizeof(UINT32) + sizeof(UINT32), "vtpm header");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loading disk image header\n");
+
+   /* Verify the ID string */
+   if(memcmp(ptr, IDSTR, IDSTRLEN)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid ID string in disk image!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+   ptr+=IDSTRLEN;
+
+   /* Unpack the version */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Verify the version */
+   if(v32 != DISKVERS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unsupported disk image version number %" PRIu32 "\n", v32);
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   /* Size of the storage key */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Sanity check */
+   if(v32 > BUF_SIZE) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Size of storage key (%" PRIu32 ") is too large!\n", v32);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* read the storage key */
+   TRY_READ(buf, v32, "storage pub key");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unpacking storage key\n");
+
+   /* unpack the storage key */
+   ptr = unpack_TPM_KEY(buf, &vtpm_globals.storage_key, UNPACK_ALLOC);
+
+   /* Load Storage Key into the TPM */
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   /* Initialize the storage key auth */
+   memset(vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   /* Store the offset of the aes key */
+   g_store.aes_offset = get_offset();
+
+   /* Read the rsa cipher text for the aes key */
+   TRY_READ(buf, RSA_CIPHER_SIZE, "aes key");
+   ptr = buf + RSA_CIPHER_SIZE;
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unbinding uuid table symmetric key\n");
+
+   /* Decrypt the aes key protecting the uuid table */
+   UINT32 datalen;
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            buf,
+            &datalen,
+            ptr,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   /* Validate the length of the output buffer */
+   if(datalen < AES_KEY_SIZE + sizeof(UINT32)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unbound AES key size (%" PRIu32 ") was too small! expected (%lu)\n", datalen, (unsigned long)(AES_KEY_SIZE + sizeof(UINT32)));
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Extract the aes key */
+   aes_setkey_dec(&aes, ptr, AES_KEY_BITS);
+   ptr+= AES_KEY_SIZE;
+
+   /* Extract the ciphertext size */
+   ptr = unpack_UINT32(ptr, &v32);
+   int cipher_size = v32;
+
+   /* Sanity check */
+   if(cipher_size & (AES_BLOCK_SIZE-1)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Cipher text size (%" PRIu32 ") is not a multiple of the aes block size! (%d)\n", v32, AES_BLOCK_SIZE);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Only decrypt the table if there are vtpms to decrypt */
+   if(cipher_size > 0) {
+      int rbytes;
+      int overlap = 0;
+      uint8_t plain[BUF_SIZE + AES_BLOCK_SIZE];
+      uint8_t iv[AES_BLOCK_SIZE];
+
+      vtpmloginfo(VTPM_LOG_VTPM, "Decrypting uuid table\n");
+
+      /* Pre allocate the vtpm array */
+      g_store.num_vtpms_alloced = cipher_size / UUID_TBL_ENT_SIZE;
+      g_store.vtpms = malloc(sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+
+      /* Read the iv and the first chunk of cipher text */
+      rbytes = MIN(cipher_size, BUF_SIZE);
+      TRY_READ(buf, rbytes, "vtpm uuid table");
+      cipher_size -= rbytes;
+
+      /* Copy the iv */
+      memcpy(iv, buf, AES_BLOCK_SIZE);
+      ptr = buf + AES_BLOCK_SIZE;
+
+      /* Remove the iv from the number of bytes to decrypt */
+      rbytes -= AES_BLOCK_SIZE;
+
+      /* Decrypt and extract vtpms */
+      vtpm_decrypt_block(&aes,
+            iv, ptr, plain,
+            rbytes, &overlap);
+
+      /* Read the rest of the table if there is more */
+      while(cipher_size > 0) {
+         /* Read next chunk of cipher text */
+         rbytes = MIN(cipher_size, BUF_SIZE);
+         TRY_READ(buf, rbytes, "vtpm uuid table");
+         cipher_size -= rbytes;
+
+         /* Decrypt a block of text */
+         vtpm_decrypt_block(&aes,
+               iv, buf, plain,
+               rbytes, &overlap);
+
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Loaded %d vtpms!\n", g_store.num_vtpms);
+   }
+
+   /* The end of the key table, new vtpms go here */
+   int uuid_end = (get_offset() + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   g_store.end_offset = uuid_end;
+
+   /* Compute the end offset while validating vtpms*/
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      /* offset must not collide with previous data */
+      if(g_store.vtpms[i].offset < uuid_end) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset (%d) is before end of uuid table (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, uuid_end);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* offset must be at a multiple of cipher size */
+      if(g_store.vtpms[i].offset & (RSA_CIPHER_SIZE-1)) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset(%d) is not at a multiple of the rsa cipher text size (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, RSA_CIPHER_SIZE);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* Save the last offset */
+      if(g_store.vtpms[i].offset >= g_store.end_offset) {
+         g_store.end_offset = g_store.vtpms[i].offset + RSA_CIPHER_SIZE;
+      }
+   }
+
+   goto egress;
+abort_egress:
+   //An error occurred somewhere
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load manager data!\n");
+
+   //Clear the data store
+   reset_store();
+
+   //Reset the storage key structure
+   free_TPM_KEY(&vtpm_globals.storage_key);
+   {
+      TPM_KEY key = TPM_KEY_INIT;
+      vtpm_globals.storage_key = key;
+   }
+
+   //Reset the storage key handle
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   vtpm_globals.storage_key_handle = 0;
+egress:
+   return status;
+}
+
+#if 0
+/* For testing disk IO */
+void add_fake_vtpms(int num) {
+   for(int i = 0; i < num; ++i) {
+      uint32_t ind = cpu_to_be32(i);
+
+      uuid_t uuid;
+      memset(uuid, 0, sizeof(uuid_t));
+      memcpy(uuid, &ind, sizeof(ind));
+      int index = vtpm_get_index(uuid);
+      index = -index-1;
+
+      vtpm_storage_new_vtpm(uuid, index);
+   }
+}
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.h b/stubdom/vtpmmgr/vtpm_storage.h
new file mode 100644
index 0000000..a5a5fd7
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_STORAGE_H
+#define VTPM_STORAGE_H
+
+#include "uuid.h"
+
+#define VTPM_NVMKEY_SIZE 32
+#define HASHKEYSZ (sizeof(TPM_DIGEST) + VTPM_NVMKEY_SIZE)
+
+/* Initialize the storage system and its virtual disk */
+int vtpm_storage_init(void);
+
+/* Shutdown the storage system and its virtual disk */
+void vtpm_storage_shutdown(void);
+
+/* Loads the SHA-1 hash and 256-bit AES key from disk and stores
+ * them packed together in hashkey. The buffer must be at least
+ * HASHKEYSZ bytes and is owned by the caller.
+ */
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* hashkey must contain a SHA-1 hash followed by a 256-bit AES key.
+ * Encrypts and stores the hash and key to disk */
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* Load the vtpm manager data - call this on startup */
+TPM_RESULT vtpm_storage_load_header(void);
+
+/* Creates and saves a fresh vtpm manager header - call this on first-time initialization */
+TPM_RESULT vtpm_storage_new_header(void);
+
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
new file mode 100644
index 0000000..563f4e8
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdint.h>
+#include <mini-os/tpmback.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include "log.h"
+
+#include "vtpmmgr.h"
+#include "tcg.h"
+
+
+void main_loop(void) {
+   tpmcmd_t* tpmcmd;
+   uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
+
+   while(1) {
+      /* Wait for requests from a vtpm */
+      vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPMs:\n");
+      if((tpmcmd = tpmback_req_any()) == NULL) {
+         vtpmlogerror(VTPM_LOG_VTPM, "NULL tpmcmd\n");
+         continue;
+      }
+
+      tpmcmd->resp = respbuf;
+
+      /* Process the command */
+      vtpmmgr_handle_cmd(tpmcmd->uuid, tpmcmd);
+
+      /* Send response */
+      tpmback_resp(tpmcmd);
+   }
+}
+
+int main(int argc, char** argv)
+{
+   int rc = 0;
+   sleep(2);
+   vtpmloginfo(VTPM_LOG_VTPM, "Starting vTPM manager domain\n");
+
+   /* Initialize the vtpm manager */
+   if(vtpmmgr_init(argc, argv) != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize vtpmmgr domain!\n");
+      rc = -1;
+      goto exit;
+   }
+
+   main_loop();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "vTPM Manager shutting down...\n");
+
+   vtpmmgr_shutdown();
+
+exit:
+   return rc;
+
+}
diff --git a/stubdom/vtpmmgr/vtpmmgr.h b/stubdom/vtpmmgr/vtpmmgr.h
new file mode 100644
index 0000000..50a1992
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_H
+#define VTPMMGR_H
+
+#include <mini-os/tpmback.h>
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "uuid.h"
+#include "tcg.h"
+#include "vtpm_manager.h"
+
+#define RSA_KEY_SIZE 0x0800
+#define RSA_CIPHER_SIZE (RSA_KEY_SIZE / 8)
+
+struct vtpm_globals {
+   int tpm_fd;
+   TPM_KEY             storage_key;
+   TPM_HANDLE          storage_key_handle;       // Key used by persistent store
+   TPM_AUTH_SESSION    oiap;                // OIAP session for storageKey
+   TPM_AUTHDATA        storage_key_usage_auth;
+
+   TPM_AUTHDATA        owner_auth;
+   TPM_AUTHDATA        srk_auth;
+
+   entropy_context     entropy;
+   ctr_drbg_context    ctr_drbg;
+};
+
+// --------------------------- Global Values --------------------------
+extern struct vtpm_globals vtpm_globals;   // Key info and DMI states
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv);
+void vtpmmgr_shutdown(void);
+
+TPM_RESULT vtpmmgr_handle_cmd(const uuid_t uuid, tpmcmd_t* tpmcmd);
+
+static inline TPM_RESULT vtpmmgr_rand(unsigned char* bytes, size_t num_bytes) {
+   return ctr_drbg_random(&vtpm_globals.ctr_drbg, bytes, num_bytes) == 0 ? TPM_SUCCESS : TPM_FAIL;
+}
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

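The vtpm_storage interface in the patch above exchanges the SHA-1 hash and the 256-bit AES key as one flat HASHKEYSZ buffer. A minimal standalone sketch of that layout (the constants below are local stand-ins, assuming TPM_DIGEST is a 20-byte SHA-1 digest; this is not the patch's code):

```c
#include <stdint.h>
#include <string.h>

/* Stand-ins for the vtpmmgr definitions: a TPM_DIGEST is a 20-byte
 * SHA-1 digest and VTPM_NVMKEY_SIZE is 32 (a 256-bit AES key). */
#define SHA1_DIGEST_SIZE 20
#define VTPM_NVMKEY_SIZE 32
#define HASHKEYSZ (SHA1_DIGEST_SIZE + VTPM_NVMKEY_SIZE)

/* Pack a digest and key into the flat buffer layout: digest first,
 * then the key, with no padding in between. */
static void pack_hashkey(uint8_t out[HASHKEYSZ],
                         const uint8_t digest[SHA1_DIGEST_SIZE],
                         const uint8_t key[VTPM_NVMKEY_SIZE])
{
   memcpy(out, digest, SHA1_DIGEST_SIZE);
   memcpy(out + SHA1_DIGEST_SIZE, key, VTPM_NVMKEY_SIZE);
}

/* Split a flat buffer back into its digest and key halves. */
static void unpack_hashkey(const uint8_t in[HASHKEYSZ],
                           uint8_t digest[SHA1_DIGEST_SIZE],
                           uint8_t key[VTPM_NVMKEY_SIZE])
{
   memcpy(digest, in, SHA1_DIGEST_SIZE);
   memcpy(key, in + SHA1_DIGEST_SIZE, VTPM_NVMKEY_SIZE);
}
```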
From xen-devel-bounces@lists.xen.org Tue Dec 04 18:11:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwxS-0002VU-CD; Tue, 04 Dec 2012 18:11:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwxP-0002Uc-Vl
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:11:00 +0000
Received: from [85.158.143.35:13636] by server-2.bemta-4.messagelabs.com id
	84/FA-28922-3BC3EB05; Tue, 04 Dec 2012 18:10:59 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354644649!11634202!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14565 invoked from network); 4 Dec 2012 18:10:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:10:51 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0f0a_223fc024_5262_4c64_bcf0_7ea37957be51;
	Tue, 04 Dec 2012 13:09:47 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:24 -0500
Message-Id: <1354644571-3202-2-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 2/8] add stubdom/vtpmmgr code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for the vtpmmgr stub domain. The Makefile
changes follow in the next patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
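(Note, not part of the commit: the init code in this patch accepts owner/SRK auth values as 40-character hex strings. As a standalone illustration of that kind of parsing, with a hypothetical helper that is not the patch's function, the sscanf field width matters: a bare `%hhX` would greedily consume as many hex digits as it can, so `%2hhX` is needed to read exactly one byte per conversion.)

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Parse a 40-character hex string into a 20-byte (SHA-1-sized) auth
 * blob. "%2hhX" limits each conversion to two hex digits, i.e. exactly
 * one output byte. Returns 0 on success, -1 on malformed input. */
static int parse_auth_hex(const char* str, uint8_t out[20])
{
   if (strlen(str) != 40)
      return -1;
   for (int i = 0; i < 20; ++i) {
      if (sscanf(str + 2 * i, "%2hhX", &out[i]) != 1)
         return -1;
   }
   return 0;
}
```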
 stubdom/vtpmmgr/Makefile           |   32 ++
 stubdom/vtpmmgr/init.c             |  553 +++++++++++++++++++++
 stubdom/vtpmmgr/log.c              |  151 ++++++
 stubdom/vtpmmgr/log.h              |   85 ++++
 stubdom/vtpmmgr/marshal.h          |  528 ++++++++++++++++++++
 stubdom/vtpmmgr/minios.cfg         |   14 +
 stubdom/vtpmmgr/tcg.h              |  707 +++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.c              |  938 ++++++++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.h              |  218 +++++++++
 stubdom/vtpmmgr/tpmrsa.c           |  175 +++++++
 stubdom/vtpmmgr/tpmrsa.h           |   67 +++
 stubdom/vtpmmgr/uuid.h             |   50 ++
 stubdom/vtpmmgr/vtpm_cmd_handler.c |  152 ++++++
 stubdom/vtpmmgr/vtpm_manager.h     |   64 +++
 stubdom/vtpmmgr/vtpm_storage.c     |  794 ++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/vtpm_storage.h     |   68 +++
 stubdom/vtpmmgr/vtpmmgr.c          |   93 ++++
 stubdom/vtpmmgr/vtpmmgr.h          |   77 +++
 18 files changed, 4766 insertions(+)
 create mode 100644 stubdom/vtpmmgr/Makefile
 create mode 100644 stubdom/vtpmmgr/init.c
 create mode 100644 stubdom/vtpmmgr/log.c
 create mode 100644 stubdom/vtpmmgr/log.h
 create mode 100644 stubdom/vtpmmgr/marshal.h
 create mode 100644 stubdom/vtpmmgr/minios.cfg
 create mode 100644 stubdom/vtpmmgr/tcg.h
 create mode 100644 stubdom/vtpmmgr/tpm.c
 create mode 100644 stubdom/vtpmmgr/tpm.h
 create mode 100644 stubdom/vtpmmgr/tpmrsa.c
 create mode 100644 stubdom/vtpmmgr/tpmrsa.h
 create mode 100644 stubdom/vtpmmgr/uuid.h
 create mode 100644 stubdom/vtpmmgr/vtpm_cmd_handler.c
 create mode 100644 stubdom/vtpmmgr/vtpm_manager.h
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.c
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.h
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.c
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.h

diff --git a/stubdom/vtpmmgr/Makefile b/stubdom/vtpmmgr/Makefile
new file mode 100644
index 0000000..88c83c3
--- /dev/null
+++ b/stubdom/vtpmmgr/Makefile
@@ -0,0 +1,32 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o bignum.o sha4.o havege.o timing.o entropy_poll.o
+
+TARGET=vtpmmgr.a
+OBJS=vtpmmgr.o vtpm_cmd_handler.o vtpm_storage.o init.o tpmrsa.o tpm.o log.o
+
+CFLAGS+=-Werror -Iutil -Icrypto -Itcs
+CFLAGS+=-Wno-declaration-after-statement -Wno-unused-label
+
+build: $(TARGET)
+$(TARGET): $(OBJS)
+	ar -rcs $@ $^ $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+clean:
+	rm -f $(TARGET) $(OBJS)
+
+distclean: clean
+
+.PHONY: clean distclean
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
new file mode 100644
index 0000000..a158020
--- /dev/null
+++ b/stubdom/vtpmmgr/init.c
@@ -0,0 +1,553 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include <stdint.h>
+#include <stdlib.h>
+
+#include <xen/xen.h>
+#include <mini-os/tpmback.h>
+#include <mini-os/tpmfront.h>
+#include <mini-os/tpm_tis.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <polarssl/sha1.h>
+
+#include "log.h"
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+#include "tpm.h"
+#include "marshal.h"
+
+struct Opts {
+   enum {
+      TPMDRV_TPM_TIS,
+      TPMDRV_TPMFRONT,
+   } tpmdriver;
+   unsigned long tpmiomem;
+   unsigned int tpmirq;
+   unsigned int tpmlocality;
+   int gen_owner_auth;
+};
+
+// --------------------------- Well Known Auths --------------------------
+const TPM_AUTHDATA WELLKNOWN_SRK_AUTH = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+const TPM_AUTHDATA WELLKNOWN_OWNER_AUTH = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+
+struct vtpm_globals vtpm_globals = {
+   .tpm_fd = -1,
+   .storage_key = TPM_KEY_INIT,
+   .storage_key_handle = 0,
+   .oiap = { .AuthHandle = 0 }
+};
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = TPM_GetRandom(&sz, data);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+static TPM_RESULT check_tpm_version(void) {
+   TPM_RESULT status;
+   UINT32 rsize;
+   BYTE* res = NULL;
+   TPM_CAP_VERSION_INFO vinfo;
+
+   TPMTRYRETURN(TPM_GetCapability(
+            TPM_CAP_VERSION_VAL,
+            0,
+            NULL,
+            &rsize,
+            &res));
+   if(rsize < 4) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid size returned by GetCapability!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   unpack_TPM_CAP_VERSION_INFO(res, &vinfo, UNPACK_ALIAS);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Hardware TPM:\n");
+   vtpmloginfo(VTPM_LOG_VTPM, " version: %hhd %hhd %hhd %hhd\n",
+         vinfo.version.major, vinfo.version.minor, vinfo.version.revMajor, vinfo.version.revMinor);
+   vtpmloginfo(VTPM_LOG_VTPM, " specLevel: %hd\n", vinfo.specLevel);
+   vtpmloginfo(VTPM_LOG_VTPM, " errataRev: %hhd\n", vinfo.errataRev);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorID: %c%c%c%c\n",
+         vinfo.tpmVendorID[0], vinfo.tpmVendorID[1],
+         vinfo.tpmVendorID[2], vinfo.tpmVendorID[3]);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecificSize: %hd\n", vinfo.vendorSpecificSize);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecific: ");
+   for(int i = 0; i < vinfo.vendorSpecificSize; ++i) {
+      vtpmloginfomore(VTPM_LOG_VTPM, "%02hhx", vinfo.vendorSpecific[i]);
+   }
+   vtpmloginfomore(VTPM_LOG_VTPM, "\n");
+
+abort_egress:
+   free(res);
+   return status;
+}
+
+static TPM_RESULT flush_tpm(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   const TPM_RESOURCE_TYPE reslist[] = { TPM_RT_KEY, TPM_RT_AUTH, TPM_RT_TRANS, TPM_RT_COUNTER, TPM_RT_DAA_TPM, TPM_RT_CONTEXT };
+   BYTE* keylist = NULL;
+   UINT32 keylistSize;
+   BYTE* ptr;
+
+   //Iterate through each resource type and flush all handles
+   for(int i = 0; i < sizeof(reslist) / sizeof(TPM_RESOURCE_TYPE); ++i) {
+      TPM_RESOURCE_TYPE beres = cpu_to_be32(reslist[i]);
+      UINT16 size;
+      TPMTRYRETURN(TPM_GetCapability(
+               TPM_CAP_HANDLE,
+               sizeof(TPM_RESOURCE_TYPE),
+               (BYTE*)(&beres),
+               &keylistSize,
+               &keylist));
+
+      ptr = keylist;
+      ptr = unpack_UINT16(ptr, &size);
+
+      //Flush each handle
+      if(size) {
+         vtpmloginfo(VTPM_LOG_VTPM, "Flushing %u handle(s) of type %lu\n", size, (unsigned long) reslist[i]);
+         for(int j = 0; j < size; ++j) {
+            TPM_HANDLE h;
+            ptr = unpack_TPM_HANDLE(ptr, &h);
+            TPMTRYRETURN(TPM_FlushSpecific(h, reslist[i]));
+         }
+      }
+
+      free(keylist);
+      keylist = NULL;
+   }
+
+   goto egress;
+abort_egress:
+   free(keylist);
+egress:
+   return status;
+}
+
+
+static TPM_RESULT try_take_ownership(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_PUBKEY pubEK = TPM_PUBKEY_INIT;
+
+   // If we can read PubEK then there is no owner and we should take it.
+   status = TPM_ReadPubek(&pubEK);
+
+   switch(status) {
+      case TPM_DISABLED_CMD:
+         //Cannot read the EK? The TPM has an owner.
+         vtpmloginfo(VTPM_LOG_VTPM, "Failed to read the EK, so the TPM already has an owner. Creating keys off the existing SRK.\n");
+         status = TPM_SUCCESS;
+         break;
+      case TPM_NO_ENDORSEMENT:
+         {
+            //If there's no EK, we have to create one
+            TPM_KEY_PARMS keyInfo = {
+               .algorithmID = TPM_ALG_RSA,
+               .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+               .sigScheme = TPM_SS_NONE,
+               .parmSize = 12,
+               .parms.rsa = {
+                  .keyLength = RSA_KEY_SIZE,
+                  .numPrimes = 2,
+                  .exponentSize = 0,
+                  .exponent = NULL,
+               },
+            };
+            TPMTRYRETURN(TPM_CreateEndorsementKeyPair(&keyInfo, &pubEK));
+         }
+         //fall through to take ownership
+      case TPM_SUCCESS:
+         {
+            //Construct the Srk
+            TPM_KEY srk = {
+               .ver = TPM_STRUCT_VER_1_1,
+               .keyUsage = TPM_KEY_STORAGE,
+               .keyFlags = 0x00,
+               .authDataUsage = TPM_AUTH_ALWAYS,
+               .algorithmParms = {
+                  .algorithmID = TPM_ALG_RSA,
+                  .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+                  .sigScheme =  TPM_SS_NONE,
+                  .parmSize = 12,
+                  .parms.rsa = {
+                     .keyLength = RSA_KEY_SIZE,
+                     .numPrimes = 2,
+                     .exponentSize = 0,
+                     .exponent = NULL,
+                  },
+               },
+               .PCRInfoSize = 0,
+               .pubKey = {
+                  .keyLength = 0,
+                  .key = NULL,
+               },
+               .encDataSize = 0,
+            };
+
+            TPMTRYRETURN(TPM_TakeOwnership(
+                     &pubEK,
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+                     &srk,
+                     NULL,
+                     &vtpm_globals.oiap));
+
+            TPMTRYRETURN(TPM_DisablePubekRead(
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     &vtpm_globals.oiap));
+         }
+         break;
+      default:
+         break;
+   }
+abort_egress:
+   free_TPM_PUBKEY(&pubEK);
+   return status;
+}
+
+static void init_storage_key(TPM_KEY* key) {
+   key->ver.major = 1;
+   key->ver.minor = 1;
+   key->ver.revMajor = 0;
+   key->ver.revMinor = 0;
+
+   key->keyUsage = TPM_KEY_BIND;
+   key->keyFlags = 0;
+   key->authDataUsage = TPM_AUTH_ALWAYS;
+
+   TPM_KEY_PARMS* p = &key->algorithmParms;
+   p->algorithmID = TPM_ALG_RSA;
+   p->encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1;
+   p->sigScheme = TPM_SS_NONE;
+   p->parmSize = 12;
+
+   TPM_RSA_KEY_PARMS* r = &p->parms.rsa;
+   r->keyLength = RSA_KEY_SIZE;
+   r->numPrimes = 2;
+   r->exponentSize = 0;
+   r->exponent = NULL;
+
+   key->PCRInfoSize = 0;
+   key->encDataSize = 0;
+   key->encData = NULL;
+}
+
+static int parse_auth_string(char* authstr, BYTE* target, const TPM_AUTHDATA wellknown, int allowrandom) {
+   int rc;
+   /* well known owner auth */
+   if(!strcmp(authstr, "well-known")) {
+      memcpy(target, wellknown, sizeof(TPM_AUTHDATA));
+   }
+   /* Create a randomly generated owner auth */
+   else if(allowrandom && !strcmp(authstr, "random")) {
+      return 1;
+   }
+   /* owner auth is a raw hash */
+   else if(!strncmp(authstr, "hash:", 5)) {
+      authstr += 5;
+      if((rc = strlen(authstr)) != 40) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth hex string `%s' must be exactly 40 characters (20 bytes) long, length=%d\n", authstr, rc);
+         return -1;
+      }
+      for(int j = 0; j < 20; ++j) {
+         if(sscanf(authstr, "%2hhX", target + j) != 1) {
+            vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth string `%s' is not a valid hex string\n", authstr);
+            return -1;
+         }
+         authstr += 2;
+      }
+   }
+   /* owner auth is a string that will be hashed */
+   else if(!strncmp(authstr, "text:", 5)) {
+      authstr += 5;
+      sha1((const unsigned char*)authstr, strlen(authstr), target);
+   }
+   else {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid auth string %s\n", authstr);
+      return -1;
+   }
+
+   return 0;
+}
+
+int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
+{
+   int rc;
+   int i;
+
+   //Set defaults
+   memcpy(vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, sizeof(TPM_AUTHDATA));
+   memcpy(vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, sizeof(TPM_AUTHDATA));
+
+   for(i = 1; i < argc; ++i) {
+      if(!strncmp(argv[i], "owner_auth:", 11)) {
+         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, 1)) < 0) {
+            goto err_invalid;
+         }
+         if(rc == 1) {
+            opts->gen_owner_auth = 1;
+         }
+      }
+      else if(!strncmp(argv[i], "srk_auth:", 9)) {
+         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, 0)) != 0) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmdriver=", 10)) {
+         if(!strcmp(argv[i] + 10, "tpm_tis")) {
+            opts->tpmdriver = TPMDRV_TPM_TIS;
+         } else if(!strcmp(argv[i] + 10, "tpmfront")) {
+            opts->tpmdriver = TPMDRV_TPMFRONT;
+         } else {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmiomem=",9)) {
+         if(sscanf(argv[i] + 9, "0x%lX", &opts->tpmiomem) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmirq=",7)) {
+         if(!strcmp(argv[i] + 7, "probe")) {
+            opts->tpmirq = TPM_PROBE_IRQ;
+         } else if( sscanf(argv[i] + 7, "%u", &opts->tpmirq) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmlocality=",12)) {
+         if(sscanf(argv[i] + 12, "%u", &opts->tpmlocality) != 1 || opts->tpmlocality > 4) {
+            goto err_invalid;
+         }
+      }
+   }
+
+   switch(opts->tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpm_tis driver\n");
+         break;
+      case TPMDRV_TPMFRONT:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpmfront driver\n");
+         break;
+   }
+
+   return 0;
+err_invalid:
+   vtpmlogerror(VTPM_LOG_VTPM, "Invalid Option %s\n", argv[i]);
+   return -1;
+}
+
+
+
+static TPM_RESULT vtpmmgr_create(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_AUTH_SESSION osap = TPM_AUTH_SESSION_INIT;
+   TPM_AUTHDATA sharedsecret;
+
+   // Take ownership if TPM is unowned
+   TPMTRYRETURN(try_take_ownership());
+
+   // Generate storage key's auth
+   memset(&vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   TPMTRYRETURN( TPM_OSAP(
+            TPM_ET_KEYHANDLE,
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &sharedsecret,
+            &osap) );
+
+   init_storage_key(&vtpm_globals.storage_key);
+
+   //initialize the storage key
+   TPMTRYRETURN( TPM_CreateWrapKey(
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&sharedsecret,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.storage_key,
+            &osap) );
+
+   //Load Storage Key
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*) &vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   //Make sure the TPM has committed changes
+   TPMTRYRETURN( TPM_SaveState() );
+
+   //Create new disk image
+   TPMTRYRETURN(vtpm_storage_new_header());
+
+   goto egress;
+abort_egress:
+egress:
+   vtpmloginfo(VTPM_LOG_VTPM, "Finished initializing new vTPM manager\n");
+
+   //End the OSAP session
+   if(osap.AuthHandle) {
+      TPM_TerminateHandle(osap.AuthHandle);
+   }
+
+   return status;
+}
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   /* Default commandline options */
+   struct Opts opts = {
+      .tpmdriver = TPMDRV_TPM_TIS,
+      .tpmiomem = TPM_BASEADDR,
+      .tpmirq = 0,
+      .tpmlocality = 0,
+      .gen_owner_auth = 0,
+   };
+
+   if(parse_cmdline_opts(argc, argv, &opts) != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Command line parsing failed! exiting..\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   //Setup storage system
+   if(vtpm_storage_init() != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize storage subsystem!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   //Setup tpmback device
+   init_tpmback();
+
+   //Setup tpm access
+   switch(opts.tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         {
+            struct tpm_chip* tpm;
+            if((tpm = init_tpm_tis(opts.tpmiomem, TPM_TIS_LOCL_INT_TO_FLAG(opts.tpmlocality), opts.tpmirq)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpm_tis device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpm_tis_open(tpm);
+            tpm_tis_request_locality(tpm, opts.tpmlocality);
+         }
+         break;
+      case TPMDRV_TPMFRONT:
+         {
+            struct tpmfront_dev* tpmfront_dev;
+            if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpmfront device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpmfront_open(tpmfront_dev);
+         }
+         break;
+   }
+
+   //Get the version of the tpm
+   TPMTRYRETURN(check_tpm_version());
+
+   // Blow away all stale handles left in the tpm
+   if(flush_tpm() != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "flush_tpm() failed, continuing anyway..\n");
+   }
+
+   /* Initialize the rng */
+   entropy_init(&vtpm_globals.entropy);
+   entropy_add_source(&vtpm_globals.entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&vtpm_globals.entropy);
+   ctr_drbg_init(&vtpm_globals.ctr_drbg, entropy_func, &vtpm_globals.entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &vtpm_globals.ctr_drbg, CTR_DRBG_PR_OFF );
+
+   // Generate Auth for Owner
+   if(opts.gen_owner_auth) {
+      vtpmmgr_rand(vtpm_globals.owner_auth, sizeof(TPM_AUTHDATA));
+   }
+
+   // Create OIAP session for service's authorized commands
+   TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+
+   /* Load the Manager data, if it fails create a new manager */
+   if (vtpm_storage_load_header() != TPM_SUCCESS) {
+      /* If the OIAP session was closed by an error, create a new one */
+      if(vtpm_globals.oiap.AuthHandle == 0) {
+         TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Failed to read manager file. Assuming first time initialization.\n");
+      TPMTRYRETURN( vtpmmgr_create() );
+   }
+
+   goto egress;
+abort_egress:
+   vtpmmgr_shutdown();
+egress:
+   return status;
+}
+
+void vtpmmgr_shutdown(void)
+{
+   /* Cleanup resources */
+   free_TPM_KEY(&vtpm_globals.storage_key);
+
+   /* Cleanup TPM resources */
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
+
+   /* Close tpmback */
+   shutdown_tpmback();
+
+   /* Close the storage system and blkfront */
+   vtpm_storage_shutdown();
+
+   /* Close tpmfront/tpm_tis */
+   close(vtpm_globals.tpm_fd);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
+}
diff --git a/stubdom/vtpmmgr/log.c b/stubdom/vtpmmgr/log.c
new file mode 100644
index 0000000..a82c913
--- /dev/null
+++ b/stubdom/vtpmmgr/log.c
@@ -0,0 +1,151 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+
+#include "tcg.h"
+
+char *module_names[] = { "",
+                         "TPM",
+                         "TPM",
+                         "VTPM",
+                         "VTPM",
+                         "TXDATA",
+                       };
+// Helper code for the consts, eg. to produce messages for error codes.
+
+typedef struct error_code_entry_t {
+  TPM_RESULT code;
+  char * code_name;
+  char * msg;
+} error_code_entry_t;
+
+static const error_code_entry_t error_msgs [] = {
+  { TPM_SUCCESS, "TPM_SUCCESS", "Successful completion of the operation" },
+  { TPM_AUTHFAIL, "TPM_AUTHFAIL", "Authentication failed" },
+  { TPM_BADINDEX, "TPM_BADINDEX", "The index to a PCR, DIR or other register is incorrect" },
+  { TPM_BAD_PARAMETER, "TPM_BAD_PARAMETER", "One or more parameters are bad" },
+  { TPM_AUDITFAILURE, "TPM_AUDITFAILURE", "An operation completed successfully but the auditing of that operation failed." },
+  { TPM_CLEAR_DISABLED, "TPM_CLEAR_DISABLED", "The clear disable flag is set and all clear operations now require physical access" },
+  { TPM_DEACTIVATED, "TPM_DEACTIVATED", "The TPM is deactivated" },
+  { TPM_DISABLED, "TPM_DISABLED", "The TPM is disabled" },
+  { TPM_DISABLED_CMD, "TPM_DISABLED_CMD", "The target command has been disabled" },
+  { TPM_FAIL, "TPM_FAIL", "The operation failed" },
+  { TPM_BAD_ORDINAL, "TPM_BAD_ORDINAL", "The ordinal was unknown or inconsistent" },
+  { TPM_INSTALL_DISABLED, "TPM_INSTALL_DISABLED", "The ability to install an owner is disabled" },
+  { TPM_INVALID_KEYHANDLE, "TPM_INVALID_KEYHANDLE", "The key handle presented was invalid" },
+  { TPM_KEYNOTFOUND, "TPM_KEYNOTFOUND", "The target key was not found" },
+  { TPM_INAPPROPRIATE_ENC, "TPM_INAPPROPRIATE_ENC", "Unacceptable encryption scheme" },
+  { TPM_MIGRATEFAIL, "TPM_MIGRATEFAIL", "Migration authorization failed" },
+  { TPM_INVALID_PCR_INFO, "TPM_INVALID_PCR_INFO", "PCR information could not be interpreted" },
+  { TPM_NOSPACE, "TPM_NOSPACE", "No room to load key." },
+  { TPM_NOSRK, "TPM_NOSRK", "There is no SRK set" },
+  { TPM_NOTSEALED_BLOB, "TPM_NOTSEALED_BLOB", "An encrypted blob is invalid or was not created by this TPM" },
+  { TPM_OWNER_SET, "TPM_OWNER_SET", "There is already an Owner" },
+  { TPM_RESOURCES, "TPM_RESOURCES", "The TPM has insufficient internal resources to perform the requested action." },
+  { TPM_SHORTRANDOM, "TPM_SHORTRANDOM", "A random string was too short" },
+  { TPM_SIZE, "TPM_SIZE", "The TPM does not have the space to perform the operation." },
+  { TPM_WRONGPCRVAL, "TPM_WRONGPCRVAL", "The named PCR value does not match the current PCR value." },
+  { TPM_BAD_PARAM_SIZE, "TPM_BAD_PARAM_SIZE", "The paramSize argument to the command has the incorrect value" },
+  { TPM_SHA_THREAD, "TPM_SHA_THREAD", "There is no existing SHA-1 thread." },
+  { TPM_SHA_ERROR, "TPM_SHA_ERROR", "The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error." },
+  { TPM_FAILEDSELFTEST, "TPM_FAILEDSELFTEST", "Self-test has failed and the TPM has shutdown." },
+  { TPM_AUTH2FAIL, "TPM_AUTH2FAIL", "The authorization for the second key in a 2 key function failed authorization" },
+  { TPM_BADTAG, "TPM_BADTAG", "The tag value sent for a command is invalid" },
+  { TPM_IOERROR, "TPM_IOERROR", "An IO error occurred transmitting information to the TPM" },
+  { TPM_ENCRYPT_ERROR, "TPM_ENCRYPT_ERROR", "The encryption process had a problem." },
+  { TPM_DECRYPT_ERROR, "TPM_DECRYPT_ERROR", "The decryption process did not complete." },
+  { TPM_INVALID_AUTHHANDLE, "TPM_INVALID_AUTHHANDLE", "An invalid handle was used." },
+  { TPM_NO_ENDORSEMENT, "TPM_NO_ENDORSEMENT", "The TPM does not have an EK installed" },
+  { TPM_INVALID_KEYUSAGE, "TPM_INVALID_KEYUSAGE", "The usage of a key is not allowed" },
+  { TPM_WRONG_ENTITYTYPE, "TPM_WRONG_ENTITYTYPE", "The submitted entity type is not allowed" },
+  { TPM_INVALID_POSTINIT, "TPM_INVALID_POSTINIT", "The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup" },
+  { TPM_INAPPROPRIATE_SIG, "TPM_INAPPROPRIATE_SIG", "Signed data cannot include additional DER information" },
+  { TPM_BAD_KEY_PROPERTY, "TPM_BAD_KEY_PROPERTY", "The key properties in TPM_KEY_PARMs are not supported by this TPM" },
+
+  { TPM_BAD_MIGRATION, "TPM_BAD_MIGRATION", "The migration properties of this key are incorrect." },
+  { TPM_BAD_SCHEME, "TPM_BAD_SCHEME", "The signature or encryption scheme for this key is incorrect or not permitted in this situation." },
+  { TPM_BAD_DATASIZE, "TPM_BAD_DATASIZE", "The size of the data (or blob) parameter is bad or inconsistent with the referenced key" },
+  { TPM_BAD_MODE, "TPM_BAD_MODE", "A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob." },
+  { TPM_BAD_PRESENCE, "TPM_BAD_PRESENCE", "Either the physicalPresence or physicalPresenceLock bits have the wrong value" },
+  { TPM_BAD_VERSION, "TPM_BAD_VERSION", "The TPM cannot perform this version of the capability" },
+  { TPM_NO_WRAP_TRANSPORT, "TPM_NO_WRAP_TRANSPORT", "The TPM does not allow for wrapped transport sessions" },
+  { TPM_AUDITFAIL_UNSUCCESSFUL, "TPM_AUDITFAIL_UNSUCCESSFUL", "TPM audit construction failed and the underlying command was returning a failure code also" },
+  { TPM_AUDITFAIL_SUCCESSFUL, "TPM_AUDITFAIL_SUCCESSFUL", "TPM audit construction failed and the underlying command was returning success" },
+  { TPM_NOTRESETABLE, "TPM_NOTRESETABLE", "Attempt to reset a PCR register that does not have the resettable attribute" },
+  { TPM_NOTLOCAL, "TPM_NOTLOCAL", "Attempt to reset a PCR register that requires locality and locality modifier not part of command transport" },
+  { TPM_BAD_TYPE, "TPM_BAD_TYPE", "Make identity blob not properly typed" },
+  { TPM_INVALID_RESOURCE, "TPM_INVALID_RESOURCE", "When saving context identified resource type does not match actual resource" },
+  { TPM_NOTFIPS, "TPM_NOTFIPS", "The TPM is attempting to execute a command only available when in FIPS mode" },
+  { TPM_INVALID_FAMILY, "TPM_INVALID_FAMILY", "The command is attempting to use an invalid family ID" },
+  { TPM_NO_NV_PERMISSION, "TPM_NO_NV_PERMISSION", "The permission to manipulate the NV storage is not available" },
+  { TPM_REQUIRES_SIGN, "TPM_REQUIRES_SIGN", "The operation requires a signed command" },
+  { TPM_KEY_NOTSUPPORTED, "TPM_KEY_NOTSUPPORTED", "Wrong operation to load an NV key" },
+  { TPM_AUTH_CONFLICT, "TPM_AUTH_CONFLICT", "NV_LoadKey blob requires both owner and blob authorization" },
+  { TPM_AREA_LOCKED, "TPM_AREA_LOCKED", "The NV area is locked and not writable" },
+  { TPM_BAD_LOCALITY, "TPM_BAD_LOCALITY", "The locality is incorrect for the attempted operation" },
+  { TPM_READ_ONLY, "TPM_READ_ONLY", "The NV area is read only and can't be written to" },
+  { TPM_PER_NOWRITE, "TPM_PER_NOWRITE", "There is no protection on the write to the NV area" },
+  { TPM_FAMILYCOUNT, "TPM_FAMILYCOUNT", "The family count value does not match" },
+  { TPM_WRITE_LOCKED, "TPM_WRITE_LOCKED", "The NV area has already been written to" },
+  { TPM_BAD_ATTRIBUTES, "TPM_BAD_ATTRIBUTES", "The NV area attributes conflict" },
+  { TPM_INVALID_STRUCTURE, "TPM_INVALID_STRUCTURE", "The structure tag and version are invalid or inconsistent" },
+  { TPM_KEY_OWNER_CONTROL, "TPM_KEY_OWNER_CONTROL", "The key is under control of the TPM Owner and can only be evicted by the TPM Owner." },
+  { TPM_BAD_COUNTER, "TPM_BAD_COUNTER", "The counter handle is incorrect" },
+  { TPM_NOT_FULLWRITE, "TPM_NOT_FULLWRITE", "The write is not a complete write of the area" },
+  { TPM_CONTEXT_GAP, "TPM_CONTEXT_GAP", "The gap between saved context counts is too large" },
+  { TPM_MAXNVWRITES, "TPM_MAXNVWRITES", "The maximum number of NV writes without an owner has been exceeded" },
+  { TPM_NOOPERATOR, "TPM_NOOPERATOR", "No operator authorization value is set" },
+  { TPM_RESOURCEMISSING, "TPM_RESOURCEMISSING", "The resource pointed to by context is not loaded" },
+  { TPM_DELEGATE_LOCK, "TPM_DELEGATE_LOCK", "The delegate administration is locked" },
+  { TPM_DELEGATE_FAMILY, "TPM_DELEGATE_FAMILY", "Attempt to manage a family other than the delegated family" },
+  { TPM_DELEGATE_ADMIN, "TPM_DELEGATE_ADMIN", "Delegation table management not enabled" },
+  { TPM_TRANSPORT_EXCLUSIVE, "TPM_TRANSPORT_EXCLUSIVE", "There was a command executed outside of an exclusive transport session" },
+};
+
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code) {
+  // just do a linear scan for now
+  unsigned i;
+  for (i = 0; i < sizeof(error_msgs)/sizeof(error_msgs[0]); i++)
+    if (code == error_msgs[i].code)
+      return error_msgs[i].code_name;
+
+  return "Unknown Error Code";
+}
diff --git a/stubdom/vtpmmgr/log.h b/stubdom/vtpmmgr/log.h
new file mode 100644
index 0000000..5c7abf5
--- /dev/null
+++ b/stubdom/vtpmmgr/log.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __VTPM_LOG_H__
+#define __VTPM_LOG_H__
+
+#include <stdint.h>             // for uint32_t
+#include <stddef.h>             // for pointer NULL
+#include <stdio.h>
+#include "tcg.h"
+
+// =========================== LOGGING ==============================
+
+// the logging module numbers
+#define VTPM_LOG_TPM         1
+#define VTPM_LOG_TPM_DEEP    2
+#define VTPM_LOG_VTPM        3
+#define VTPM_LOG_VTPM_DEEP   4
+#define VTPM_LOG_TXDATA      5
+
+extern char *module_names[];
+
+// Default to standard logging
+#ifndef LOGGING_MODULES
+#define LOGGING_MODULES (BITMASK(VTPM_LOG_VTPM)|BITMASK(VTPM_LOG_TPM))
+#endif
+
+// bit-access macros
+#define BITMASK(idx)      ( 1U << (idx) )
+#define GETBIT(num,idx)   ( ((num) & BITMASK(idx)) >> (idx) )
+#define SETBIT(num,idx)   (num) |= BITMASK(idx)
+#define CLEARBIT(num,idx) (num) &= ( ~ BITMASK(idx) )
+
+#define vtpmloginfo(module, fmt, args...) \
+  if (GETBIT (LOGGING_MODULES, module) == 1) {				\
+    fprintf (stdout, "INFO[%s]: " fmt, module_names[module], ##args); \
+  }
+
+#define vtpmloginfomore(module, fmt, args...) \
+  if (GETBIT (LOGGING_MODULES, module) == 1) {			      \
+    fprintf (stdout, fmt,##args);				      \
+  }
+
+#define vtpmlogerror(module, fmt, args...) \
+  fprintf (stderr, "ERROR[%s]: " fmt, module_names[module], ##args);
+
+//typedef UINT32 tpm_size_t;
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code);
+
+#endif // __VTPM_LOG_H__
diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
new file mode 100644
index 0000000..77d32f0
--- /dev/null
+++ b/stubdom/vtpmmgr/marshal.h
@@ -0,0 +1,528 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef MARSHAL_H
+#define MARSHAL_H
+
+#include <stdlib.h>
+#include <mini-os/byteorder.h>
+#include <mini-os/endian.h>
+#include "tcg.h"
+
+typedef enum UnpackPtr {
+   UNPACK_ALIAS,
+   UNPACK_ALLOC
+} UnpackPtr;
+
+inline BYTE* pack_BYTE(BYTE* ptr, BYTE t) {
+   ptr[0] = t;
+   return ++ptr;
+}
+
+inline BYTE* unpack_BYTE(BYTE* ptr, BYTE* t) {
+   t[0] = ptr[0];
+   return ++ptr;
+}
+
+#define pack_BOOL(p, t) pack_BYTE(p, t)
+#define unpack_BOOL(p, t) unpack_BYTE(p, t)
+
+inline BYTE* pack_UINT16(BYTE* ptr, UINT16 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[0] = b[1];
+   ptr[1] = b[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* unpack_UINT16(BYTE* ptr, UINT16* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[1];
+   b[1] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* pack_UINT32(BYTE* ptr, UINT32 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[3] = b[0];
+   ptr[2] = b[1];
+   ptr[1] = b[2];
+   ptr[0] = b[3];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+   ptr[2] = b[2];
+   ptr[3] = b[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+inline BYTE* unpack_UINT32(BYTE* ptr, UINT32* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[3];
+   b[1] = ptr[2];
+   b[2] = ptr[1];
+   b[3] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+   b[2] = ptr[2];
+   b[3] = ptr[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+#define pack_TPM_RESULT(p, t) pack_UINT32(p, t)
+#define pack_TPM_PCRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_DIRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TPM_AUTHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HASHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HMACHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENCHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_KEY_HANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENTITYHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_RESOURCE_TYPE(p, t) pack_UINT32(p, t)
+#define pack_TPM_COMMAND_CODE(p, t) pack_UINT32(p, t)
+#define pack_TPM_PROTOCOL_ID(p, t) pack_UINT16(p, t)
+#define pack_TPM_AUTH_DATA_USAGE(p, t) pack_BYTE(p, t)
+#define pack_TPM_ENTITY_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_ALGORITHM_ID(p, t) pack_UINT32(p, t)
+#define pack_TPM_KEY_USAGE(p, t) pack_UINT16(p, t)
+#define pack_TPM_STARTUP_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_CAPABILITY_AREA(p, t) pack_UINT32(p, t)
+#define pack_TPM_ENC_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_SIG_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_MIGRATE_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_PHYSICAL_PRESENCE(p, t) pack_UINT16(p, t)
+#define pack_TPM_KEY_FLAGS(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_RESULT(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PCRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_DIRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_AUTHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HASHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HMACHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENCHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_KEY_HANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENTITYHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_RESOURCE_TYPE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_COMMAND_CODE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PROTOCOL_ID(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_AUTH_DATA_USAGE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_ENTITY_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_ALGORITHM_ID(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_KEY_USAGE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STARTUP_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_CAPABILITY_AREA(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_ENC_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_SIG_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_MIGRATE_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_PHYSICAL_PRESENCE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_KEY_FLAGS(p, t) unpack_UINT32(p, t)
+
+#define pack_TPM_AUTH_HANDLE(p, t) pack_UINT32(p, t);
+#define pack_TCS_CONTEXT_HANDLE(p, t) pack_UINT32(p, t);
+#define pack_TCS_KEY_HANDLE(p, t) pack_UINT32(p, t);
+
+#define unpack_TPM_AUTH_HANDLE(p, t) unpack_UINT32(p, t);
+#define unpack_TCS_CONTEXT_HANDLE(p, t) unpack_UINT32(p, t);
+#define unpack_TCS_KEY_HANDLE(p, t) unpack_UINT32(p, t);
+
+inline BYTE* pack_BUFFER(BYTE* ptr, const BYTE* buf, UINT32 size) {
+   memcpy(ptr, buf, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_BUFFER(BYTE* ptr, BYTE* buf, UINT32 size) {
+   memcpy(buf, ptr, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALIAS(BYTE* ptr, BYTE** buf, UINT32 size) {
+   *buf = ptr;
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALLOC(BYTE* ptr, BYTE** buf, UINT32 size) {
+   if(size) {
+      *buf = malloc(size);
+      memcpy(*buf, ptr, size);
+   } else {
+      *buf = NULL;
+   }
+   return ptr + size;
+}
+
+inline BYTE* unpack_PTR(BYTE* ptr, BYTE** buf, UINT32 size, UnpackPtr alloc) {
+   if(alloc == UNPACK_ALLOC) {
+      return unpack_ALLOC(ptr, buf, size);
+   } else {
+      return unpack_ALIAS(ptr, buf, size);
+   }
+}
+
+inline BYTE* pack_TPM_AUTHDATA(BYTE* ptr, const TPM_AUTHDATA* d) {
+   return pack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_AUTHDATA(BYTE* ptr, TPM_AUTHDATA* d) {
+   return unpack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_SECRET(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_ENCAUTH(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_PAYLOAD_TYPE(p, t) pack_BYTE(p, t)
+#define pack_TPM_TAG(p, t) pack_UINT16(p, t)
+#define pack_TPM_STRUCTURE_TAG(p, t) pack_UINT16(p, t)
+
+#define unpack_TPM_SECRET(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_ENCAUTH(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_PAYLOAD_TYPE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_TAG(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STRUCTURE_TAG(p, t) unpack_UINT16(p, t)
+
+inline BYTE* pack_TPM_VERSION(BYTE* ptr, const TPM_VERSION* t) {
+   ptr[0] = t->major;
+   ptr[1] = t->minor;
+   ptr[2] = t->revMajor;
+   ptr[3] = t->revMinor;
+   return ptr + 4;
+}
+
+inline BYTE* unpack_TPM_VERSION(BYTE* ptr, TPM_VERSION* t) {
+   t->major = ptr[0];
+   t->minor = ptr[1];
+   t->revMajor = ptr[2];
+   t->revMinor = ptr[3];
+   return ptr + 4;
+}
+
+inline BYTE* pack_TPM_CAP_VERSION_INFO(BYTE* ptr, const TPM_CAP_VERSION_INFO* v) {
+   ptr = pack_TPM_STRUCTURE_TAG(ptr, v->tag);
+   ptr = pack_TPM_VERSION(ptr, &v->version);
+   ptr = pack_UINT16(ptr, v->specLevel);
+   ptr = pack_BYTE(ptr, v->errataRev);
+   ptr = pack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = pack_UINT16(ptr, v->vendorSpecificSize);
+   ptr = pack_BUFFER(ptr, v->vendorSpecific, v->vendorSpecificSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_CAP_VERSION_INFO(BYTE* ptr, TPM_CAP_VERSION_INFO* v, UnpackPtr alloc) {
+   ptr = unpack_TPM_STRUCTURE_TAG(ptr, &v->tag);
+   ptr = unpack_TPM_VERSION(ptr, &v->version);
+   ptr = unpack_UINT16(ptr, &v->specLevel);
+   ptr = unpack_BYTE(ptr, &v->errataRev);
+   ptr = unpack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = unpack_UINT16(ptr, &v->vendorSpecificSize);
+   ptr = unpack_PTR(ptr, &v->vendorSpecific, v->vendorSpecificSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_DIGEST(BYTE* ptr, const TPM_DIGEST* d) {
+   return pack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_DIGEST(BYTE* ptr, TPM_DIGEST* d) {
+   return unpack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_PCRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d);
+#define unpack_TPM_PCRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d);
+
+#define pack_TPM_COMPOSITE_HASH(ptr, d) pack_TPM_DIGEST(ptr, d);
+#define unpack_TPM_COMPOSITE_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d);
+
+#define pack_TPM_DIRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d);
+#define unpack_TPM_DIRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d);
+
+#define pack_TPM_HMAC(ptr, d) pack_TPM_DIGEST(ptr, d);
+#define unpack_TPM_HMAC(ptr, d) unpack_TPM_DIGEST(ptr, d);
+
+#define pack_TPM_CHOSENID_HASH(ptr, d) pack_TPM_DIGEST(ptr, d);
+#define unpack_TPM_CHOSENID_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d);
+
+inline BYTE* pack_TPM_NONCE(BYTE* ptr, const TPM_NONCE* n) {
+   return pack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_NONCE(BYTE* ptr, TPM_NONCE* n) {
+   return unpack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* pack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, const TPM_SYMMETRIC_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->blockSize);
+   ptr = pack_UINT32(ptr, k->ivSize);
+   return pack_BUFFER(ptr, k->IV, k->ivSize);
+}
+
+inline BYTE* unpack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, TPM_SYMMETRIC_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->blockSize);
+   ptr = unpack_UINT32(ptr, &k->ivSize);
+   return unpack_PTR(ptr, &k->IV, k->ivSize, alloc);
+}
+
+inline BYTE* pack_TPM_RSA_KEY_PARMS(BYTE* ptr, const TPM_RSA_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->numPrimes);
+   ptr = pack_UINT32(ptr, k->exponentSize);
+   return pack_BUFFER(ptr, k->exponent, k->exponentSize);
+}
+
+inline BYTE* unpack_TPM_RSA_KEY_PARMS(BYTE* ptr, TPM_RSA_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->numPrimes);
+   ptr = unpack_UINT32(ptr, &k->exponentSize);
+   return unpack_PTR(ptr, &k->exponent, k->exponentSize, alloc);
+}
+
+inline BYTE* pack_TPM_KEY_PARMS(BYTE* ptr, const TPM_KEY_PARMS* k) {
+   ptr = pack_TPM_ALGORITHM_ID(ptr, k->algorithmID);
+   ptr = pack_TPM_ENC_SCHEME(ptr, k->encScheme);
+   ptr = pack_TPM_SIG_SCHEME(ptr, k->sigScheme);
+   ptr = pack_UINT32(ptr, k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return pack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return pack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_KEY_PARMS(BYTE* ptr, TPM_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_ALGORITHM_ID(ptr, &k->algorithmID);
+   ptr = unpack_TPM_ENC_SCHEME(ptr, &k->encScheme);
+   ptr = unpack_TPM_SIG_SCHEME(ptr, &k->sigScheme);
+   ptr = unpack_UINT32(ptr, &k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return unpack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa, alloc);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return unpack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym, alloc);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* pack_TPM_STORE_PUBKEY(BYTE* ptr, const TPM_STORE_PUBKEY* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_BUFFER(ptr, k->key, k->keyLength);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORE_PUBKEY(BYTE* ptr, TPM_STORE_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_PTR(ptr, &k->key, k->keyLength, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PUBKEY(BYTE* ptr, const TPM_PUBKEY* k) {
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   return pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+}
+
+inline BYTE* unpack_TPM_PUBKEY(BYTE* ptr, TPM_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   return unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+}
+
+inline BYTE* pack_TPM_PCR_SELECTION(BYTE* ptr, const TPM_PCR_SELECTION* p) {
+   ptr = pack_UINT16(ptr, p->sizeOfSelect);
+   ptr = pack_BUFFER(ptr, p->pcrSelect, p->sizeOfSelect);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_SELECTION(BYTE* ptr, TPM_PCR_SELECTION* p, UnpackPtr alloc) {
+   ptr = unpack_UINT16(ptr, &p->sizeOfSelect);
+   ptr = unpack_PTR(ptr, &p->pcrSelect, p->sizeOfSelect, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_INFO(BYTE* ptr, const TPM_PCR_INFO* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->pcrSelection);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_INFO(BYTE* ptr, TPM_PCR_INFO* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->pcrSelection, alloc);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_COMPOSITE(BYTE* ptr, const TPM_PCR_COMPOSITE* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->select);
+   ptr = pack_UINT32(ptr, p->valueSize);
+   ptr = pack_BUFFER(ptr, (const BYTE*)p->pcrValue, p->valueSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_COMPOSITE(BYTE* ptr, TPM_PCR_COMPOSITE* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->select, alloc);
+   ptr = unpack_UINT32(ptr, &p->valueSize);
+   ptr = unpack_PTR(ptr, (BYTE**)&p->pcrValue, p->valueSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_KEY(BYTE* ptr, const TPM_KEY* k) {
+   ptr = pack_TPM_VERSION(ptr, &k->ver);
+   ptr = pack_TPM_KEY_USAGE(ptr, k->keyUsage);
+   ptr = pack_TPM_KEY_FLAGS(ptr, k->keyFlags);
+   ptr = pack_TPM_AUTH_DATA_USAGE(ptr, k->authDataUsage);
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   ptr = pack_UINT32(ptr, k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &k->PCRInfo);
+   }
+   ptr = pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+   ptr = pack_UINT32(ptr, k->encDataSize);
+   return pack_BUFFER(ptr, k->encData, k->encDataSize);
+}
+
+inline BYTE* unpack_TPM_KEY(BYTE* ptr, TPM_KEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &k->ver);
+   ptr = unpack_TPM_KEY_USAGE(ptr, &k->keyUsage);
+   ptr = unpack_TPM_KEY_FLAGS(ptr, &k->keyFlags);
+   ptr = unpack_TPM_AUTH_DATA_USAGE(ptr, &k->authDataUsage);
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   ptr = unpack_UINT32(ptr, &k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &k->PCRInfo, alloc);
+   }
+   ptr = unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+   ptr = unpack_UINT32(ptr, &k->encDataSize);
+   return unpack_PTR(ptr, &k->encData, k->encDataSize, alloc);
+}
+
+inline BYTE* pack_TPM_BOUND_DATA(BYTE* ptr, const TPM_BOUND_DATA* b, UINT32 payloadSize) {
+   ptr = pack_TPM_VERSION(ptr, &b->ver);
+   ptr = pack_TPM_PAYLOAD_TYPE(ptr, b->payload);
+   return pack_BUFFER(ptr, b->payloadData, payloadSize);
+}
+
+inline BYTE* unpack_TPM_BOUND_DATA(BYTE* ptr, TPM_BOUND_DATA* b, UINT32 payloadSize, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &b->ver);
+   ptr = unpack_TPM_PAYLOAD_TYPE(ptr, &b->payload);
+   return unpack_PTR(ptr, &b->payloadData, payloadSize, alloc);
+}
+
+inline BYTE* pack_TPM_STORED_DATA(BYTE* ptr, const TPM_STORED_DATA* d) {
+   ptr = pack_TPM_VERSION(ptr, &d->ver);
+   ptr = pack_UINT32(ptr, d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &d->sealInfo);
+   }
+   ptr = pack_UINT32(ptr, d->encDataSize);
+   ptr = pack_BUFFER(ptr, d->encData, d->encDataSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORED_DATA(BYTE* ptr, TPM_STORED_DATA* d, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &d->ver);
+   ptr = unpack_UINT32(ptr, &d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &d->sealInfo, alloc);
+   }
+   ptr = unpack_UINT32(ptr, &d->encDataSize);
+   ptr = unpack_PTR(ptr, &d->encData, d->encDataSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_AUTH_SESSION(BYTE* ptr, const TPM_AUTH_SESSION* auth) {
+   ptr = pack_TPM_AUTH_HANDLE(ptr, auth->AuthHandle);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+   ptr = pack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_AUTH_SESSION(BYTE* ptr, TPM_AUTH_SESSION* auth) {
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = unpack_BOOL(ptr, &auth->fContinueAuthSession);
+   ptr = unpack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG tag,
+      UINT32 size,
+      TPM_COMMAND_CODE ord) {
+   ptr = pack_UINT16(ptr, tag);
+   ptr = pack_UINT32(ptr, size);
+   return pack_UINT32(ptr, ord);
+}
+
+inline BYTE* unpack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG* tag,
+      UINT32* size,
+      TPM_COMMAND_CODE* ord) {
+   ptr = unpack_UINT16(ptr, tag);
+   ptr = unpack_UINT32(ptr, size);
+   ptr = unpack_UINT32(ptr, ord);
+   return ptr;
+}
+
+#define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
+#define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
+
+#endif
diff --git a/stubdom/vtpmmgr/minios.cfg b/stubdom/vtpmmgr/minios.cfg
new file mode 100644
index 0000000..3fb383d
--- /dev/null
+++ b/stubdom/vtpmmgr/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=y
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpmmgr/tcg.h b/stubdom/vtpmmgr/tcg.h
new file mode 100644
index 0000000..7687eae
--- /dev/null
+++ b/stubdom/vtpmmgr/tcg.h
@@ -0,0 +1,707 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005 Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TCG_H__
+#define __TCG_H__
+
+#include <stdlib.h>
+#include <stdint.h>
+
+// **************************** CONSTANTS *********************************
+
+// BOOL values
+#define TRUE 0x01
+#define FALSE 0x00
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+//
+// TPM_COMMAND_CODE values
+#define TPM_PROTECTED_ORDINAL 0x00000000UL
+#define TPM_UNPROTECTED_ORDINAL 0x80000000UL
+#define TPM_CONNECTION_ORDINAL 0x40000000UL
+#define TPM_VENDOR_ORDINAL 0x20000000UL
+
+#define TPM_ORD_OIAP                     (10UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OSAP                     (11UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuth               (12UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TakeOwnership            (13UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymStart      (14UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymFinish     (15UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthOwner          (16UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Extend                   (20UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PcrRead                  (21UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Quote                    (22UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Seal                     (23UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Unseal                   (24UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirWriteAuth             (25UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirRead                  (26UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_UnBind                   (30UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateWrapKey            (31UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKey                  (32UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetPubKey                (33UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EvictKey                 (34UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMigrationBlob      (40UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReWrapKey                (41UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ConvertMigrationBlob     (42UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_AuthorizeMigrationKey    (43UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMaintenanceArchive (44UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadMaintenanceArchive   (45UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_KillMaintenanceFeature   (46UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadManuMaintPub         (47UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadManuMaintPub         (48UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifyKey               (50UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Sign                     (60UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetRandom                (70UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_StirRandom               (71UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestFull             (80UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestStartup          (81UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifySelfTest          (82UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ContinueSelfTest         (83UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTestResult            (84UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Reset                    (90UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerClear               (91UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableOwnerClear        (92UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ForceClear               (93UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableForceClear        (94UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilitySigned      (100UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapability            (101UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilityOwner       (102UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerSetDisable          (110UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalEnable           (111UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalDisable          (112UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOwnerInstall          (113UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalSetDeactivated   (114UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetTempDeactivated       (115UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateEndorsementKeyPair (120UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MakeIdentity             (121UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ActivateIdentity         (122UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadPubek                (124UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerReadPubek           (125UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisablePubekRead         (126UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEvent            (130UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEventSigned      (131UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetOrdinalAuditStatus    (140UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOrdinalAuditStatus    (141UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Terminate_Handle         (150UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Init                     (151UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveState                (152UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Startup                  (153UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetRedirection           (154UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Start                (160UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Update               (161UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Complete             (162UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1CompleteExtend       (163UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FieldUpgrade             (170UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveKeyContext           (180UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKeyContext           (181UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveAuthContext          (182UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadAuthContext          (183UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveContext                      (184UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadContext                      (185UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FlushSpecific                    (186UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PCR_Reset                        (200UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_DefineSpace                   (204UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValue                    (205UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValueAuth                (206UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValue                     (207UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValueAuth                 (208UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_UpdateVerification      (209UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_Manage                  (210UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateKeyDelegation     (212UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateOwnerDelegation   (213UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_VerifyDelegation        (214UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_LoadOwnerDelegation     (216UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadAuth                (217UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadTable               (219UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateCounter                    (220UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_IncrementCounter                 (221UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadCounter                      (222UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounter                   (223UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounterOwner              (224UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EstablishTransport               (230UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ExecuteTransport                 (231UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseTransportSigned           (232UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTicks                         (241UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TickStampBlob                    (242UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MAX                              (256UL + TPM_PROTECTED_ORDINAL)
+
+#define TSC_ORD_PhysicalPresence         (10UL + TPM_CONNECTION_ORDINAL)
+
+
+
+//
+// TPM_RESULT values
+//
+// just put in the whole table from spec 1.2
+
+#define TPM_BASE   0x0 // The start of TPM return codes
+#define TPM_VENDOR_ERROR 0x00000400 // Mask to indicate that the error code is vendor specific for vendor specific commands
+#define TPM_NON_FATAL  0x00000800 // Mask to indicate that the error code is a non-fatal failure.
+
+#define TPM_SUCCESS   TPM_BASE // Successful completion of the operation
+#define TPM_AUTHFAIL      TPM_BASE + 1 // Authentication failed
+#define TPM_BADINDEX      TPM_BASE + 2 // The index to a PCR, DIR or other register is incorrect
+#define TPM_BAD_PARAMETER     TPM_BASE + 3 // One or more parameter is bad
+#define TPM_AUDITFAILURE     TPM_BASE + 4 // An operation completed successfully but the auditing of that operation failed.
+#define TPM_CLEAR_DISABLED     TPM_BASE + 5 // The clear disable flag is set and all clear operations now require physical access
+#define TPM_DEACTIVATED     TPM_BASE + 6 // The TPM is deactivated
+#define TPM_DISABLED      TPM_BASE + 7 // The TPM is disabled
+#define TPM_DISABLED_CMD     TPM_BASE + 8 // The target command has been disabled
+#define TPM_FAIL       TPM_BASE + 9 // The operation failed
+#define TPM_BAD_ORDINAL     TPM_BASE + 10 // The ordinal was unknown or inconsistent
+#define TPM_INSTALL_DISABLED   TPM_BASE + 11 // The ability to install an owner is disabled
+#define TPM_INVALID_KEYHANDLE  TPM_BASE + 12 // The key handle presented was invalid
+#define TPM_KEYNOTFOUND     TPM_BASE + 13 // The target key was not found
+#define TPM_INAPPROPRIATE_ENC  TPM_BASE + 14 // Unacceptable encryption scheme
+#define TPM_MIGRATEFAIL     TPM_BASE + 15 // Migration authorization failed
+#define TPM_INVALID_PCR_INFO   TPM_BASE + 16 // PCR information could not be interpreted
+#define TPM_NOSPACE      TPM_BASE + 17 // No room to load key.
+#define TPM_NOSRK       TPM_BASE + 18 // There is no SRK set
+#define TPM_NOTSEALED_BLOB     TPM_BASE + 19 // An encrypted blob is invalid or was not created by this TPM
+#define TPM_OWNER_SET      TPM_BASE + 20 // There is already an Owner
+#define TPM_RESOURCES      TPM_BASE + 21 // The TPM has insufficient internal resources to perform the requested action.
+#define TPM_SHORTRANDOM     TPM_BASE + 22 // A random string was too short
+#define TPM_SIZE       TPM_BASE + 23 // The TPM does not have the space to perform the operation.
+#define TPM_WRONGPCRVAL     TPM_BASE + 24 // The named PCR value does not match the current PCR value.
+#define TPM_BAD_PARAM_SIZE     TPM_BASE + 25 // The paramSize argument to the command has the incorrect value
+#define TPM_SHA_THREAD      TPM_BASE + 26 // There is no existing SHA-1 thread.
+#define TPM_SHA_ERROR      TPM_BASE + 27 // The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error.
+#define TPM_FAILEDSELFTEST     TPM_BASE + 28 // Self-test has failed and the TPM has shutdown.
+#define TPM_AUTH2FAIL      TPM_BASE + 29 // The authorization for the second key in a 2 key function failed authorization
+#define TPM_BADTAG       TPM_BASE + 30 // The tag value sent for a command is invalid
+#define TPM_IOERROR      TPM_BASE + 31 // An IO error occurred transmitting information to the TPM
+#define TPM_ENCRYPT_ERROR     TPM_BASE + 32 // The encryption process had a problem.
+#define TPM_DECRYPT_ERROR     TPM_BASE + 33 // The decryption process did not complete.
+#define TPM_INVALID_AUTHHANDLE TPM_BASE + 34 // An invalid handle was used.
+#define TPM_NO_ENDORSEMENT     TPM_BASE + 35 // The TPM does not have an EK installed
+#define TPM_INVALID_KEYUSAGE   TPM_BASE + 36 // The usage of a key is not allowed
+#define TPM_WRONG_ENTITYTYPE   TPM_BASE + 37 // The submitted entity type is not allowed
+#define TPM_INVALID_POSTINIT   TPM_BASE + 38 // The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup
+#define TPM_INAPPROPRIATE_SIG  TPM_BASE + 39 // Signed data cannot include additional DER information
+#define TPM_BAD_KEY_PROPERTY   TPM_BASE + 40 // The key properties in TPM_KEY_PARMs are not supported by this TPM
+
+#define TPM_BAD_MIGRATION      TPM_BASE + 41 // The migration properties of this key are incorrect.
+#define TPM_BAD_SCHEME       TPM_BASE + 42 // The signature or encryption scheme for this key is incorrect or not permitted in this situation.
+#define TPM_BAD_DATASIZE      TPM_BASE + 43 // The size of the data (or blob) parameter is bad or inconsistent with the referenced key
+#define TPM_BAD_MODE       TPM_BASE + 44 // A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob.
+#define TPM_BAD_PRESENCE      TPM_BASE + 45 // Either the physicalPresence or physicalPresenceLock bits have the wrong value
+#define TPM_BAD_VERSION      TPM_BASE + 46 // The TPM cannot perform this version of the capability
+#define TPM_NO_WRAP_TRANSPORT     TPM_BASE + 47 // The TPM does not allow for wrapped transport sessions
+#define TPM_AUDITFAIL_UNSUCCESSFUL TPM_BASE + 48 // TPM audit construction failed and the underlying command was returning a failure code also
+#define TPM_AUDITFAIL_SUCCESSFUL   TPM_BASE + 49 // TPM audit construction failed and the underlying command was returning success
+#define TPM_NOTRESETABLE      TPM_BASE + 50 // Attempt to reset a PCR register that does not have the resettable attribute
+#define TPM_NOTLOCAL       TPM_BASE + 51 // Attempt to reset a PCR register that requires locality and locality modifier not part of command transport
+#define TPM_BAD_TYPE       TPM_BASE + 52 // Make identity blob not properly typed
+#define TPM_INVALID_RESOURCE     TPM_BASE + 53 // When saving context identified resource type does not match actual resource
+#define TPM_NOTFIPS       TPM_BASE + 54 // The TPM is attempting to execute a command only available when in FIPS mode
+#define TPM_INVALID_FAMILY      TPM_BASE + 55 // The command is attempting to use an invalid family ID
+#define TPM_NO_NV_PERMISSION     TPM_BASE + 56 // The permission to manipulate the NV storage is not available
+#define TPM_REQUIRES_SIGN      TPM_BASE + 57 // The operation requires a signed command
+#define TPM_KEY_NOTSUPPORTED     TPM_BASE + 58 // Wrong operation to load an NV key
+#define TPM_AUTH_CONFLICT      TPM_BASE + 59 // NV_LoadKey blob requires both owner and blob authorization
+#define TPM_AREA_LOCKED      TPM_BASE + 60 // The NV area is locked and not writable
+#define TPM_BAD_LOCALITY      TPM_BASE + 61 // The locality is incorrect for the attempted operation
+#define TPM_READ_ONLY       TPM_BASE + 62 // The NV area is read only and can't be written to
+#define TPM_PER_NOWRITE      TPM_BASE + 63 // There is no protection on the write to the NV area
+#define TPM_FAMILYCOUNT      TPM_BASE + 64 // The family count value does not match
+#define TPM_WRITE_LOCKED      TPM_BASE + 65 // The NV area has already been written to
+#define TPM_BAD_ATTRIBUTES      TPM_BASE + 66 // The NV area attributes conflict
+#define TPM_INVALID_STRUCTURE     TPM_BASE + 67 // The structure tag and version are invalid or inconsistent
+#define TPM_KEY_OWNER_CONTROL     TPM_BASE + 68 // The key is under control of the TPM Owner and can only be evicted by the TPM Owner.
+#define TPM_BAD_COUNTER      TPM_BASE + 69 // The counter handle is incorrect
+#define TPM_NOT_FULLWRITE      TPM_BASE + 70 // The write is not a complete write of the area
+#define TPM_CONTEXT_GAP      TPM_BASE + 71 // The gap between saved context counts is too large
+#define TPM_MAXNVWRITES      TPM_BASE + 72 // The maximum number of NV writes without an owner has been exceeded
+#define TPM_NOOPERATOR       TPM_BASE + 73 // No operator authorization value is set
+#define TPM_RESOURCEMISSING     TPM_BASE + 74 // The resource pointed to by context is not loaded
+#define TPM_DELEGATE_LOCK      TPM_BASE + 75 // The delegate administration is locked
+#define TPM_DELEGATE_FAMILY     TPM_BASE + 76 // Attempt to manage a family other than the delegated family
+#define TPM_DELEGATE_ADMIN      TPM_BASE + 77 // Delegation table management not enabled
+#define TPM_TRANSPORT_EXCLUSIVE    TPM_BASE + 78 // There was a command executed outside of an exclusive transport session
+
+// TPM_STARTUP_TYPE values
+#define TPM_ST_CLEAR 0x0001
+#define TPM_ST_STATE 0x0002
+#define TPM_ST_DEACTIVATED 0x0003
+
+// TPM_TAG values
+#define TPM_TAG_RQU_COMMAND 0x00c1
+#define TPM_TAG_RQU_AUTH1_COMMAND 0x00c2
+#define TPM_TAG_RQU_AUTH2_COMMAND 0x00c3
+#define TPM_TAG_RSP_COMMAND 0x00c4
+#define TPM_TAG_RSP_AUTH1_COMMAND 0x00c5
+#define TPM_TAG_RSP_AUTH2_COMMAND 0x00c6
+
+// TPM_PAYLOAD_TYPE values
+#define TPM_PT_ASYM 0x01
+#define TPM_PT_BIND 0x02
+#define TPM_PT_MIGRATE 0x03
+#define TPM_PT_MAINT 0x04
+#define TPM_PT_SEAL 0x05
+
+// TPM_ENTITY_TYPE values
+#define TPM_ET_KEYHANDLE 0x0001
+#define TPM_ET_OWNER 0x0002
+#define TPM_ET_DATA 0x0003
+#define TPM_ET_SRK 0x0004
+#define TPM_ET_KEY 0x0005
+
+/// TPM_ResourceTypes
+#define TPM_RT_KEY      0x00000001
+#define TPM_RT_AUTH     0x00000002
+#define TPM_RT_HASH     0x00000003
+#define TPM_RT_TRANS    0x00000004
+#define TPM_RT_CONTEXT  0x00000005
+#define TPM_RT_COUNTER  0x00000006
+#define TPM_RT_DELEGATE 0x00000007
+#define TPM_RT_DAA_TPM  0x00000008
+#define TPM_RT_DAA_V0   0x00000009
+#define TPM_RT_DAA_V1   0x0000000A
+
+
+
+// TPM_PROTOCOL_ID values
+#define TPM_PID_OIAP 0x0001
+#define TPM_PID_OSAP 0x0002
+#define TPM_PID_ADIP 0x0003
+#define TPM_PID_ADCP 0x0004
+#define TPM_PID_OWNER 0x0005
+
+// TPM_ALGORITHM_ID values
+#define TPM_ALG_RSA 0x00000001
+#define TPM_ALG_SHA 0x00000004
+#define TPM_ALG_HMAC 0x00000005
+#define TPM_ALG_AES128 0x00000006
+#define TPM_ALG_MFG1 0x00000007
+#define TPM_ALG_AES192 0x00000008
+#define TPM_ALG_AES256 0x00000009
+#define TPM_ALG_XOR 0x0000000A
+
+// TPM_ENC_SCHEME values
+#define TPM_ES_NONE 0x0001
+#define TPM_ES_RSAESPKCSv15 0x0002
+#define TPM_ES_RSAESOAEP_SHA1_MGF1 0x0003
+
+// TPM_SIG_SCHEME values
+#define TPM_SS_NONE 0x0001
+#define TPM_SS_RSASSAPKCS1v15_SHA1 0x0002
+#define TPM_SS_RSASSAPKCS1v15_DER 0x0003
+
+/*
+ * TPM_CAPABILITY_AREA Values for TPM_GetCapability ([TPM_Part2], Section 21.1)
+ */
+#define TPM_CAP_ORD                     0x00000001
+#define TPM_CAP_ALG                     0x00000002
+#define TPM_CAP_PID                     0x00000003
+#define TPM_CAP_FLAG                    0x00000004
+#define TPM_CAP_PROPERTY                0x00000005
+#define TPM_CAP_VERSION                 0x00000006
+#define TPM_CAP_KEY_HANDLE              0x00000007
+#define TPM_CAP_CHECK_LOADED            0x00000008
+#define TPM_CAP_SYM_MODE                0x00000009
+#define TPM_CAP_KEY_STATUS              0x0000000C
+#define TPM_CAP_NV_LIST                 0x0000000D
+#define TPM_CAP_MFR                     0x00000010
+#define TPM_CAP_NV_INDEX                0x00000011
+#define TPM_CAP_TRANS_ALG               0x00000012
+#define TPM_CAP_HANDLE                  0x00000014
+#define TPM_CAP_TRANS_ES                0x00000015
+#define TPM_CAP_AUTH_ENCRYPT            0x00000017
+#define TPM_CAP_SELECT_SIZE             0x00000018
+#define TPM_CAP_DA_LOGIC                0x00000019
+#define TPM_CAP_VERSION_VAL             0x0000001A
+
+/* subCap definitions ([TPM_Part2], Section 21.2) */
+#define TPM_CAP_PROP_PCR                0x00000101
+#define TPM_CAP_PROP_DIR                0x00000102
+#define TPM_CAP_PROP_MANUFACTURER       0x00000103
+#define TPM_CAP_PROP_KEYS               0x00000104
+#define TPM_CAP_PROP_MIN_COUNTER        0x00000107
+#define TPM_CAP_FLAG_PERMANENT          0x00000108
+#define TPM_CAP_FLAG_VOLATILE           0x00000109
+#define TPM_CAP_PROP_AUTHSESS           0x0000010A
+#define TPM_CAP_PROP_TRANSESS           0x0000010B
+#define TPM_CAP_PROP_COUNTERS           0x0000010C
+#define TPM_CAP_PROP_MAX_AUTHSESS       0x0000010D
+#define TPM_CAP_PROP_MAX_TRANSESS       0x0000010E
+#define TPM_CAP_PROP_MAX_COUNTERS       0x0000010F
+#define TPM_CAP_PROP_MAX_KEYS           0x00000110
+#define TPM_CAP_PROP_OWNER              0x00000111
+#define TPM_CAP_PROP_CONTEXT            0x00000112
+#define TPM_CAP_PROP_MAX_CONTEXT        0x00000113
+#define TPM_CAP_PROP_FAMILYROWS         0x00000114
+#define TPM_CAP_PROP_TIS_TIMEOUT        0x00000115
+#define TPM_CAP_PROP_STARTUP_EFFECT     0x00000116
+#define TPM_CAP_PROP_DELEGATE_ROW       0x00000117
+#define TPM_CAP_PROP_MAX_DAASESS        0x00000119
+#define TPM_CAP_PROP_DAASESS            0x0000011A
+#define TPM_CAP_PROP_CONTEXT_DIST       0x0000011B
+#define TPM_CAP_PROP_DAA_INTERRUPT      0x0000011C
+#define TPM_CAP_PROP_SESSIONS           0x0000011D
+#define TPM_CAP_PROP_MAX_SESSIONS       0x0000011E
+#define TPM_CAP_PROP_CMK_RESTRICTION    0x0000011F
+#define TPM_CAP_PROP_DURATION           0x00000120
+#define TPM_CAP_PROP_ACTIVE_COUNTER     0x00000122
+#define TPM_CAP_PROP_MAX_NV_AVAILABLE   0x00000123
+#define TPM_CAP_PROP_INPUT_BUFFER       0x00000124
+
+// TPM_KEY_USAGE values
+#define TPM_KEY_EK 0x0000
+#define TPM_KEY_SIGNING 0x0010
+#define TPM_KEY_STORAGE 0x0011
+#define TPM_KEY_IDENTITY 0x0012
+#define TPM_KEY_AUTHCHANGE 0x0013
+#define TPM_KEY_BIND 0x0014
+#define TPM_KEY_LEGACY 0x0015
+
+// TPM_AUTH_DATA_USAGE values
+#define TPM_AUTH_NEVER 0x00
+#define TPM_AUTH_ALWAYS 0x01
+
+// Key Handle of owner and srk
+#define TPM_OWNER_KEYHANDLE 0x40000001
+#define TPM_SRK_KEYHANDLE 0x40000000
+
+
+
+// *************************** TYPEDEFS *********************************
+typedef unsigned char BYTE;
+typedef unsigned char BOOL;
+typedef uint16_t UINT16;
+typedef uint32_t UINT32;
+typedef uint64_t UINT64;
+
+typedef UINT32 TPM_RESULT;
+typedef UINT32 TPM_PCRINDEX;
+typedef UINT32 TPM_DIRINDEX;
+typedef UINT32 TPM_HANDLE;
+typedef TPM_HANDLE TPM_AUTHHANDLE;
+typedef TPM_HANDLE TCPA_HASHHANDLE;
+typedef TPM_HANDLE TCPA_HMACHANDLE;
+typedef TPM_HANDLE TCPA_ENCHANDLE;
+typedef TPM_HANDLE TPM_KEY_HANDLE;
+typedef TPM_HANDLE TCPA_ENTITYHANDLE;
+typedef UINT32 TPM_RESOURCE_TYPE;
+typedef UINT32 TPM_COMMAND_CODE;
+typedef UINT16 TPM_PROTOCOL_ID;
+typedef BYTE TPM_AUTH_DATA_USAGE;
+typedef UINT16 TPM_ENTITY_TYPE;
+typedef UINT32 TPM_ALGORITHM_ID;
+typedef UINT16 TPM_KEY_USAGE;
+typedef UINT16 TPM_STARTUP_TYPE;
+typedef UINT32 TPM_CAPABILITY_AREA;
+typedef UINT16 TPM_ENC_SCHEME;
+typedef UINT16 TPM_SIG_SCHEME;
+typedef UINT16 TPM_MIGRATE_SCHEME;
+typedef UINT16 TPM_PHYSICAL_PRESENCE;
+typedef UINT32 TPM_KEY_FLAGS;
+
+#define TPM_DIGEST_SIZE 20  // Don't change this
+typedef BYTE TPM_AUTHDATA[TPM_DIGEST_SIZE];
+typedef TPM_AUTHDATA TPM_SECRET;
+typedef TPM_AUTHDATA TPM_ENCAUTH;
+typedef BYTE TPM_PAYLOAD_TYPE;
+typedef UINT16 TPM_TAG;
+typedef UINT16 TPM_STRUCTURE_TAG;
+
+// Data Types of the TCS
+typedef UINT32 TCS_AUTHHANDLE;  // Handle addressing an authorization session
+typedef UINT32 TCS_CONTEXT_HANDLE; // Basic context handle
+typedef UINT32 TCS_KEY_HANDLE;  // Basic key handle
+
+// ************************* STRUCTURES **********************************
+
+typedef struct TPM_VERSION {
+  BYTE major;
+  BYTE minor;
+  BYTE revMajor;
+  BYTE revMinor;
+} TPM_VERSION;
+
+static const TPM_VERSION TPM_STRUCT_VER_1_1 = { 1,1,0,0 };
+
+typedef struct TPM_CAP_VERSION_INFO {
+   TPM_STRUCTURE_TAG tag;
+   TPM_VERSION version;
+   UINT16 specLevel;
+   BYTE errataRev;
+   BYTE tpmVendorID[4];
+   UINT16 vendorSpecificSize;
+   BYTE* vendorSpecific;
+} TPM_CAP_VERSION_INFO;
+
+inline void free_TPM_CAP_VERSION_INFO(TPM_CAP_VERSION_INFO* v) {
+   free(v->vendorSpecific);
+   v->vendorSpecific = NULL;
+}
+
+typedef struct TPM_DIGEST {
+  BYTE digest[TPM_DIGEST_SIZE];
+} TPM_DIGEST;
+
+typedef TPM_DIGEST TPM_PCRVALUE;
+typedef TPM_DIGEST TPM_COMPOSITE_HASH;
+typedef TPM_DIGEST TPM_DIRVALUE;
+typedef TPM_DIGEST TPM_HMAC;
+typedef TPM_DIGEST TPM_CHOSENID_HASH;
+
+typedef struct TPM_NONCE {
+  BYTE nonce[TPM_DIGEST_SIZE];
+} TPM_NONCE;
+
+typedef struct TPM_SYMMETRIC_KEY_PARMS {
+   UINT32 keyLength;
+   UINT32 blockSize;
+   UINT32 ivSize;
+   BYTE* IV;
+} TPM_SYMMETRIC_KEY_PARMS;
+
+inline void free_TPM_SYMMETRIC_KEY_PARMS(TPM_SYMMETRIC_KEY_PARMS* p) {
+   free(p->IV);
+   p->IV = NULL;
+}
+
+#define TPM_SYMMETRIC_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+typedef struct TPM_RSA_KEY_PARMS {
+  UINT32 keyLength;
+  UINT32 numPrimes;
+  UINT32 exponentSize;
+  BYTE* exponent;
+} TPM_RSA_KEY_PARMS;
+
+#define TPM_RSA_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+inline void free_TPM_RSA_KEY_PARMS(TPM_RSA_KEY_PARMS* p) {
+   free(p->exponent);
+   p->exponent = NULL;
+}
+
+typedef struct TPM_KEY_PARMS {
+  TPM_ALGORITHM_ID algorithmID;
+  TPM_ENC_SCHEME encScheme;
+  TPM_SIG_SCHEME sigScheme;
+  UINT32 parmSize;
+  union {
+     TPM_SYMMETRIC_KEY_PARMS sym;
+     TPM_RSA_KEY_PARMS rsa;
+  } parms;
+} TPM_KEY_PARMS;
+
+#define TPM_KEY_PARMS_INIT { 0, 0, 0, 0 }
+
+inline void free_TPM_KEY_PARMS(TPM_KEY_PARMS* p) {
+   if(p->parmSize) {
+      switch(p->algorithmID) {
+         case TPM_ALG_RSA:
+            free_TPM_RSA_KEY_PARMS(&p->parms.rsa);
+            break;
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            free_TPM_SYMMETRIC_KEY_PARMS(&p->parms.sym);
+            break;
+      }
+   }
+}
+
+typedef struct TPM_STORE_PUBKEY {
+  UINT32 keyLength;
+  BYTE* key;
+} TPM_STORE_PUBKEY;
+
+#define TPM_STORE_PUBKEY_INIT { 0, NULL }
+
+inline void free_TPM_STORE_PUBKEY(TPM_STORE_PUBKEY* p) {
+   free(p->key);
+   p->key = NULL;
+}
+
+typedef struct TPM_PUBKEY {
+  TPM_KEY_PARMS algorithmParms;
+  TPM_STORE_PUBKEY pubKey;
+} TPM_PUBKEY;
+
+#define TPM_PUBKEY_INIT { TPM_KEY_PARMS_INIT, TPM_STORE_PUBKEY_INIT }
+
+inline void free_TPM_PUBKEY(TPM_PUBKEY* k) {
+   free_TPM_KEY_PARMS(&k->algorithmParms);
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+}
+
+typedef struct TPM_PCR_SELECTION {
+   UINT16 sizeOfSelect;
+   BYTE* pcrSelect;
+} TPM_PCR_SELECTION;
+
+#define TPM_PCR_SELECTION_INIT { 0, NULL }
+
+inline void free_TPM_PCR_SELECTION(TPM_PCR_SELECTION* p) {
+   free(p->pcrSelect);
+   p->pcrSelect = NULL;
+}
+
+typedef struct TPM_PCR_INFO {
+   TPM_PCR_SELECTION pcrSelection;
+   TPM_COMPOSITE_HASH digestAtRelease;
+   TPM_COMPOSITE_HASH digestAtCreation;
+} TPM_PCR_INFO;
+
+#define TPM_PCR_INFO_INIT { TPM_PCR_SELECTION_INIT }
+
+inline void free_TPM_PCR_INFO(TPM_PCR_INFO* p) {
+   free_TPM_PCR_SELECTION(&p->pcrSelection);
+}
+
+typedef struct TPM_PCR_COMPOSITE {
+  TPM_PCR_SELECTION select;
+  UINT32 valueSize;
+  TPM_PCRVALUE* pcrValue;
+} TPM_PCR_COMPOSITE;
+
+#define TPM_PCR_COMPOSITE_INIT { TPM_PCR_SELECTION_INIT, 0, NULL }
+
+inline void free_TPM_PCR_COMPOSITE(TPM_PCR_COMPOSITE* p) {
+   free_TPM_PCR_SELECTION(&p->select);
+   free(p->pcrValue);
+   p->pcrValue = NULL;
+}
+
+typedef struct TPM_KEY {
+  TPM_VERSION         ver;
+  TPM_KEY_USAGE       keyUsage;
+  TPM_KEY_FLAGS       keyFlags;
+  TPM_AUTH_DATA_USAGE authDataUsage;
+  TPM_KEY_PARMS       algorithmParms;
+  UINT32              PCRInfoSize;
+  TPM_PCR_INFO        PCRInfo;
+  TPM_STORE_PUBKEY    pubKey;
+  UINT32              encDataSize;
+  BYTE*               encData;
+} TPM_KEY;
+
+#define TPM_KEY_INIT { .algorithmParms = TPM_KEY_PARMS_INIT,\
+   .PCRInfoSize = 0, .PCRInfo = TPM_PCR_INFO_INIT, \
+   .pubKey = TPM_STORE_PUBKEY_INIT, \
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_KEY(TPM_KEY* k) {
+   if(k->PCRInfoSize) {
+      free_TPM_PCR_INFO(&k->PCRInfo);
+   }
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+   free(k->encData);
+   k->encData = NULL;
+}
+
+typedef struct TPM_BOUND_DATA {
+  TPM_VERSION ver;
+  TPM_PAYLOAD_TYPE payload;
+  BYTE* payloadData;
+} TPM_BOUND_DATA;
+
+#define TPM_BOUND_DATA_INIT { .payloadData = NULL }
+
+inline void free_TPM_BOUND_DATA(TPM_BOUND_DATA* d) {
+   free(d->payloadData);
+   d->payloadData = NULL;
+}
+
+typedef struct TPM_STORED_DATA {
+  TPM_VERSION ver;
+  UINT32 sealInfoSize;
+  TPM_PCR_INFO sealInfo;
+  UINT32 encDataSize;
+  BYTE* encData;
+} TPM_STORED_DATA;
+
+#define TPM_STORED_DATA_INIT { .sealInfoSize = 0, .sealInfo = TPM_PCR_INFO_INIT,\
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_STORED_DATA(TPM_STORED_DATA* d) {
+   if(d->sealInfoSize) {
+      free_TPM_PCR_INFO(&d->sealInfo);
+   }
+   free(d->encData);
+   d->encData = NULL;
+}
+
+typedef struct TPM_AUTH_SESSION {
+  TPM_AUTHHANDLE  AuthHandle;
+  TPM_NONCE   NonceOdd;   // system
+  TPM_NONCE   NonceEven;   // TPM
+  BOOL   fContinueAuthSession;
+  TPM_AUTHDATA  HMAC;
+} TPM_AUTH_SESSION;
+
+#define TPM_AUTH_SESSION_INIT { .AuthHandle = 0, .fContinueAuthSession = FALSE }
+
+// ---------------------- Functions for checking TPM_RESULTs -----------------
+
+#include <stdio.h>
+
+// FIXME: Review use of these and delete unneeded ones.
+
+// These macros depend on the enclosing function providing local context:
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+#define ERRORDIE(s) do { status = s; \
+                         fprintf (stderr, "*** ERRORDIE in %s at %s: %i\n", __func__, __FILE__, __LINE__); \
+                         goto abort_egress; } \
+                    while (0)
+
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+// Try command c. If it fails, set status to s and goto abort_egress.
+// c is evaluated exactly once; the do/while avoids dangling-else surprises.
+#define TPMTRY(s,c) do { \
+                       TPM_RESULT tpmtry_rc = (c); \
+                       if (tpmtry_rc != TPM_SUCCESS) { \
+                          status = s; \
+                          printf("ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                          goto abort_egress; \
+                       } else { \
+                          status = tpmtry_rc; \
+                       } \
+                    } while (0)
+
+// Try command c. If it fails, print an error message, set status to the
+// actual return code, and goto abort_egress.
+#define TPMTRYRETURN(c) do { status = c; \
+                             if (status != TPM_SUCCESS) { \
+                               fprintf(stderr, "ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                               goto abort_egress; \
+                             } \
+                        } while(0)
+
+
+#endif //__TCPA_H__
diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
new file mode 100644
index 0000000..123a27c
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.c
@@ -0,0 +1,938 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdio.h>
+#include <string.h>
+#include <malloc.h>
+#include <unistd.h>
+#include <errno.h>
+
+#include <polarssl/sha1.h>
+
+#include "tcg.h"
+#include "tpm.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpmrsa.h"
+#include "vtpmmgr.h"
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+#define TPM_BEGIN(TAG, ORD) \
+   const TPM_TAG intag = TAG;\
+TPM_TAG tag = intag;\
+UINT32 paramSize;\
+const TPM_COMMAND_CODE ordinal = ORD;\
+TPM_RESULT status = TPM_SUCCESS;\
+BYTE in_buf[TCPA_MAX_BUFFER_LENGTH];\
+BYTE out_buf[TCPA_MAX_BUFFER_LENGTH];\
+UINT32 out_len = sizeof(out_buf);\
+BYTE* ptr = in_buf;\
+/*Print a log message */\
+vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);\
+/* Pack the header*/\
+ptr = pack_TPM_TAG(ptr, tag);\
+ptr += sizeof(UINT32);\
+ptr = pack_TPM_COMMAND_CODE(ptr, ordinal)\
+
+#define TPM_AUTH_BEGIN() \
+   sha1_context sha1_ctx;\
+BYTE* authbase = ptr - sizeof(TPM_COMMAND_CODE);\
+TPM_DIGEST paramDigest;\
+sha1_starts(&sha1_ctx)
+
+#define TPM_AUTH1_GEN(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_AUTH2_GEN(HMACkey, auth) do {\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_TRANSMIT() do {\
+   /* Pack the command size */\
+   paramSize = ptr - in_buf;\
+   pack_UINT32(in_buf + sizeof(TPM_TAG), paramSize);\
+   if((status = TPM_TransmitData(in_buf, paramSize, out_buf, &out_len)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_VERIFY_BEGIN() do {\
+   UINT32 buf[2] = { cpu_to_be32(status), cpu_to_be32(ordinal) };\
+   sha1_starts(&sha1_ctx);\
+   sha1_update(&sha1_ctx, (unsigned char*)buf, sizeof(buf));\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH1_VERIFY(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH2_VERIFY(HMACkey, auth) do {\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+
+
+#define TPM_UNPACK_VERIFY() do { \
+   ptr = out_buf;\
+   ptr = unpack_TPM_RSP_HEADER(ptr, \
+         &(tag), &(paramSize), &(status));\
+   if((status) != TPM_SUCCESS || (tag) != (intag + 3)) { /* response tag = request tag + 3 */ \
+      vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_HASH() do {\
+   sha1_update(&sha1_ctx, authbase, ptr - authbase);\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_SKIP() do {\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_ERR_CHECK(auth) do {\
+   if(status != TPM_SUCCESS || auth->fContinueAuthSession == FALSE) {\
+      vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM\n", auth->AuthHandle);\
+      auth->AuthHandle = 0;\
+   }\
+} while(0)
+
+static void xorEncrypt(const TPM_SECRET* sharedSecret,
+      TPM_NONCE* nonce,
+      const TPM_AUTHDATA* inAuth0,
+      TPM_ENCAUTH outAuth0,
+      const TPM_AUTHDATA* inAuth1,
+      TPM_ENCAUTH outAuth1) {
+   BYTE XORbuffer[sizeof(TPM_SECRET) + sizeof(TPM_NONCE)];
+   BYTE XORkey[TPM_DIGEST_SIZE];
+   BYTE* ptr = XORbuffer;
+   ptr = pack_TPM_SECRET(ptr, sharedSecret);
+   ptr = pack_TPM_NONCE(ptr, nonce);
+
+   sha1(XORbuffer, ptr - XORbuffer, XORkey);
+
+   if(inAuth0) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth0[i] = XORkey[i] ^ (*inAuth0)[i];
+      }
+   }
+   if(inAuth1) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth1[i] = XORkey[i] ^ (*inAuth1)[i];
+      }
+   }
+
+}
+
+static void generateAuth(const TPM_DIGEST* paramDigest,
+      const TPM_SECRET* HMACkey,
+      TPM_AUTH_SESSION *auth)
+{
+   //Generate new OddNonce
+   vtpmmgr_rand((BYTE*)auth->NonceOdd.nonce, sizeof(TPM_NONCE));
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac((BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         auth->HMAC);
+}
+
+static TPM_RESULT verifyAuth(const TPM_DIGEST* paramDigest,
+      /*[IN]*/ const TPM_SECRET *HMACkey,
+      /*[IN,OUT]*/ TPM_AUTH_SESSION *auth)
+{
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   TPM_AUTHDATA hm;
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac( (BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         hm);
+
+   // Compare correct HMAC with provided one.
+   if (memcmp(hm, auth->HMAC, sizeof(TPM_DIGEST)) == 0) { // 0 indicates equality
+      return TPM_SUCCESS;
+   } else {
+      vtpmlogerror(VTPM_LOG_TPM, "Auth Session verification failed!\n");
+      return TPM_AUTHFAIL;
+   }
+}
+
+
+
+// ------------------------------------------------------------------
+// Authorization Commands
+// ------------------------------------------------------------------
+
+TPM_RESULT TPM_OIAP(TPM_AUTH_SESSION*   auth)  // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OIAP);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = TRUE;
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OIAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_OSAP(TPM_ENTITY_TYPE  entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth)
+{
+   BYTE* nonceOddOSAP;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OSAP);
+
+   ptr = pack_TPM_ENTITY_TYPE(ptr, entityType);
+   ptr = pack_UINT32(ptr, entityValue);
+
+   //nonce Odd OSAP
+   nonceOddOSAP = ptr;
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   //Calculate session secret
+   sha1_context ctx;
+   sha1_hmac_starts(&ctx, *usageAuth, TPM_DIGEST_SIZE);
+   sha1_hmac_update(&ctx, ptr, TPM_DIGEST_SIZE); //ptr = nonceEvenOSAP
+   sha1_hmac_update(&ctx, nonceOddOSAP, TPM_DIGEST_SIZE);
+   sha1_hmac_finish(&ctx, *sharedSecret);
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = FALSE;
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OSAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth)   // in, out
+{
+   int keyAlloced = 0;
+   tpmrsa_context ek_rsa = TPMRSA_CTX_INIT;
+
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_TakeOwnership);
+   TPM_AUTH_BEGIN();
+
+   tpmrsa_set_pubkey(&ek_rsa,
+         pubEK->pubKey.key, pubEK->pubKey.keyLength,
+         pubEK->algorithmParms.parms.rsa.exponent,
+         pubEK->algorithmParms.parms.rsa.exponentSize);
+
+   /* Pack the protocol ID */
+   ptr = pack_UINT16(ptr, TPM_PID_OWNER);
+
+   /* Pack the encrypted owner auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) ownerAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the encrypted srk auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) srkAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the Srk key */
+   ptr = pack_TPM_KEY(ptr, inSrk);
+
+   /* Hash everything up to here */
+   TPM_AUTH_HASH();
+
+   /* Generate the authorization */
+   TPM_AUTH1_GEN(ownerAuth, auth);
+
+   /* Send the command to the tpm*/
+   TPM_TRANSMIT();
+   /* Unpack and validate the header */
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   if(outSrk != NULL) {
+      /* If the user wants a copy of the srk we give it to them */
+      keyAlloced = 1;
+      ptr = unpack_TPM_KEY(ptr, outSrk, UNPACK_ALLOC);
+   } else {
+      /*otherwise just parse past it */
+      TPM_KEY temp;
+      ptr = unpack_TPM_KEY(ptr, &temp, UNPACK_ALIAS);
+   }
+
+   /* Hash the output key */
+   TPM_AUTH_HASH();
+
+   /* Verify authorization */
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(outSrk);
+   }
+egress:
+   tpmrsa_free(&ek_rsa);
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_DisablePubekRead);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(ownerAuth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_TerminateHandle(TPM_AUTHHANDLE  handle)  // in
+{
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Terminate_Handle);
+
+   ptr = pack_TPM_AUTHHANDLE(ptr, handle);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM_TerminateHandle\n", handle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Extend( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST  inDigest, // in
+      TPM_PCRVALUE*  outDigest) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Extend);
+
+   ptr = pack_TPM_PCRINDEX(ptr, pcrNum);
+   ptr = pack_TPM_DIGEST(ptr, &inDigest);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_TPM_PCRVALUE(ptr, outDigest);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Seal(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealedDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      )
+{
+   int dataAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_Seal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   xorEncrypt(osapSharedSecret, &pubAuth->NonceEven,
+         sealedDataAuth, ptr,
+         NULL, NULL);
+   ptr += sizeof(TPM_ENCAUTH);
+
+   ptr = pack_UINT32(ptr, pcrInfoSize);
+   ptr = pack_TPM_PCR_INFO(ptr, pcrInfo);
+
+   ptr = pack_UINT32(ptr, inDataSize);
+   ptr = pack_BUFFER(ptr, inData, inDataSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pubAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_TPM_STORED_DATA(ptr, sealedData, UNPACK_ALLOC);
+   dataAlloced = 1;
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pubAuth);
+
+   goto egress;
+abort_egress:
+   if(dataAlloced) {
+      free_TPM_STORED_DATA(sealedData);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pubAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Unseal(
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH2_COMMAND, TPM_ORD_Unseal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_STORED_DATA(ptr, sealedData);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(key_usage_auth, keyAuth);
+   TPM_AUTH2_GEN(data_usage_auth, dataAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, outSize);
+   ptr = unpack_ALLOC(ptr, out, *outSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(key_usage_auth, keyAuth);
+   TPM_AUTH2_VERIFY(data_usage_auth, dataAuth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(keyAuth);
+   TPM_AUTH_ERR_CHECK(dataAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key,
+      const BYTE* in,
+      UINT32 ilen,
+      BYTE* out)
+{
+   TPM_RESULT status;
+   tpmrsa_context rsa = TPMRSA_CTX_INIT;
+   TPM_BOUND_DATA boundData;
+   uint8_t plain[TCPA_MAX_BUFFER_LENGTH];
+   BYTE* ptr = plain;
+
+   vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);
+
+   tpmrsa_set_pubkey(&rsa,
+         key->pubKey.key, key->pubKey.keyLength,
+         key->algorithmParms.parms.rsa.exponent,
+         key->algorithmParms.parms.rsa.exponentSize);
+
+   // Fill boundData's accessory information
+   boundData.ver = TPM_STRUCT_VER_1_1;
+   boundData.payload = TPM_PT_BIND;
+   boundData.payloadData = (BYTE*)in;
+
+   //marshall the bound data object
+   ptr = pack_TPM_BOUND_DATA(ptr, &boundData, ilen);
+
+   // Encrypt the data
+   TPMTRYRETURN(tpmrsa_pub_encrypt_oaep(&rsa,
+            ctr_drbg_random, &vtpm_globals.ctr_drbg,
+            ptr - plain,
+            plain,
+            out));
+
+abort_egress:
+   tpmrsa_free(&rsa);
+   return status;
+
+}
+
+TPM_RESULT TPM_UnBind(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //
+      UINT32* olen, //
+      BYTE*    out, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_UnBind);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_UINT32(ptr, ilen);
+   ptr = pack_BUFFER(ptr, in, ilen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, olen);
+   if(*olen > ilen) {
+      vtpmlogerror(VTPM_LOG_TPM, "Output length > input length!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+   ptr = unpack_BUFFER(ptr, out, *olen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_CreateWrapKey(
+      TPM_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in, out
+      TPM_AUTH_SESSION*   pAuth)    // in, out
+{
+   int keyAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_CreateWrapKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hWrappingKey);
+
+   TPM_AUTH_SKIP();
+
+   //Encrypted auths
+   xorEncrypt(osapSharedSecret, &pAuth->NonceEven,
+         dataUsageAuth, ptr,
+         dataMigrationAuth, ptr + sizeof(TPM_ENCAUTH));
+   ptr += sizeof(TPM_ENCAUTH) * 2;
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   keyAlloced = 1;
+   ptr = unpack_TPM_KEY(ptr, key, UNPACK_ALLOC);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pAuth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(key);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pAuth);
+   return status;
+}
+
+TPM_RESULT TPM_LoadKey(
+      TPM_KEY_HANDLE  parentHandle, //
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_LoadKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, keyHandle);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key Handle: 0x%x opened by TPM_LoadKey\n", *keyHandle);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_EvictKey( TPM_KEY_HANDLE  hKey)  // in
+{
+   if(hKey == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_EvictKey);
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hKey);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key handle: 0x%x closed by TPM_EvictKey\n", hKey);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle,
+      TPM_RESOURCE_TYPE rt) {
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_FlushSpecific);
+
+   ptr = pack_TPM_HANDLE(ptr, handle);
+   ptr = pack_TPM_RESOURCE_TYPE(ptr, rt);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetRandom( UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetRandom);
+
+   // check input params
+   if (bytesRequested == NULL || randomBytes == NULL){
+      return TPM_BAD_PARAMETER;
+   }
+
+   ptr = pack_UINT32(ptr, *bytesRequested);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, bytesRequested);
+   ptr = unpack_BUFFER(ptr, randomBytes, *bytesRequested);
+
+abort_egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_ReadPubek(
+      TPM_PUBKEY* pubEK //out
+      )
+{
+   BYTE* antiReplay = NULL;
+   BYTE* kptr = NULL;
+   BYTE digest[TPM_DIGEST_SIZE];
+   sha1_context ctx;
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_ReadPubek);
+
+   //antiReplay nonce
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   antiReplay = ptr;
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   //unpack and allocate the key
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   //Verify the checksum
+   sha1_starts(&ctx);
+   sha1_update(&ctx, kptr, ptr - kptr);
+   sha1_update(&ctx, antiReplay, TPM_DIGEST_SIZE);
+   sha1_finish(&ctx, digest);
+
+   //ptr points to the checksum computed by TPM
+   if(memcmp(digest, ptr, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_TPM, "TPM_ReadPubek: Checksum returned by TPM was invalid!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr != NULL) { //If we unpacked the pubEK, we have to free it
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_SaveState(void)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_SaveState);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetCapability);
+
+   ptr = pack_TPM_CAPABILITY_AREA(ptr, capArea);
+   ptr = pack_UINT32(ptr, subCapSize);
+   ptr = pack_BUFFER(ptr, subCap, subCapSize);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, respSize);
+   ptr = unpack_ALLOC(ptr, resp, *respSize);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK)
+{
+   BYTE* kptr = NULL;
+   sha1_context ctx;
+   TPM_DIGEST checksum;
+   TPM_DIGEST hash;
+   TPM_NONCE antiReplay;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_CreateEndorsementKeyPair);
+
+   //Make anti replay nonce
+   vtpmmgr_rand(antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   ptr = pack_TPM_NONCE(ptr, &antiReplay);
+   ptr = pack_TPM_KEY_PARMS(ptr, keyInfo);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   sha1_starts(&ctx);
+
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   /* Hash the pub key blob */
+   sha1_update(&ctx, kptr, ptr - kptr);
+   ptr = unpack_TPM_DIGEST(ptr, &checksum);
+
+   sha1_update(&ctx, antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   sha1_finish(&ctx, hash.digest);
+   if(memcmp(checksum.digest, hash.digest, TPM_DIGEST_SIZE)) {
+      vtpmloginfo(VTPM_LOG_VTPM, "TPM_CreateEndorsementKeyPair: Checksum verification failed!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr) {
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   UINT32 i;
+   vtpmloginfo(VTPM_LOG_TXDATA, "Sending buffer = 0x");
+   for(i = 0 ; i < insize ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", in[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   ssize_t size = 0;
+
+   // send the request
+   size = write (vtpm_globals.tpm_fd, in, insize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "write() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+   else if ((UINT32) size < insize) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "Wrote %d instead of %d bytes!\n", (int) size, insize);
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   // read the response
+   size = read (vtpm_globals.tpm_fd, out, *outsize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "read() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   vtpmloginfo(VTPM_LOG_TXDATA, "Receiving buffer = 0x");
+   for(i = 0 ; i < size ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", out[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   *outsize = size;
+   goto egress;
+
+abort_egress:
+egress:
+   return status;
+}
diff --git a/stubdom/vtpmmgr/tpm.h b/stubdom/vtpmmgr/tpm.h
new file mode 100644
index 0000000..304e145
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.h
@@ -0,0 +1,218 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005/2006, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TPM_H__
+#define __TPM_H__
+
+#include "tcg.h"
+
+// ------------------------------------------------------------------
+// Exposed API
+// ------------------------------------------------------------------
+
+// TPM v1.1B Command Set
+
+// Authorization
+TPM_RESULT TPM_OIAP(
+      TPM_AUTH_SESSION*   auth //out
+      );
+
+TPM_RESULT TPM_OSAP (
+      TPM_ENTITY_TYPE entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth);
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth   // in, out
+      );
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth
+      );
+
+TPM_RESULT TPM_TerminateHandle ( TPM_AUTHHANDLE  handle  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific ( TPM_HANDLE  handle,  // in
+      TPM_RESOURCE_TYPE resourceType //in
+      );
+
+// TPM Mandatory
+TPM_RESULT TPM_Extend ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST   inDigest, // in
+      TPM_PCRVALUE*   outDigest // out
+      );
+
+TPM_RESULT TPM_PcrRead ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_PCRVALUE*  outDigest // out
+      );
+
+TPM_RESULT TPM_Quote ( TCS_KEY_HANDLE  keyHandle,  // in
+      TPM_NONCE   antiReplay,  // in
+      UINT32*    PcrDataSize, // in, out
+      BYTE**    PcrData,  // in, out
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_Seal(
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      );
+
+TPM_RESULT TPM_Unseal (
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirWriteAuth ( TPM_DIRINDEX  dirIndex,  // in
+      TPM_DIRVALUE  newContents, // in
+      TPM_AUTH_SESSION*   ownerAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirRead ( TPM_DIRINDEX  dirIndex, // in
+      TPM_DIRVALUE*  dirValue // out
+      );
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key, //in
+      const BYTE* in, //in
+      UINT32 ilen, //in
+      BYTE* out //out, must be at least cipher block size
+      );
+
+TPM_RESULT TPM_UnBind (
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //in
+      UINT32*   outDataSize, // out
+      BYTE*    outData, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      );
+
+TPM_RESULT TPM_CreateWrapKey (
+      TCS_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in
+      TPM_AUTH_SESSION*   pAuth    // in, out
+      );
+
+TPM_RESULT TPM_LoadKey (
+      TPM_KEY_HANDLE  parentHandle, //in
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth
+      );
+
+TPM_RESULT TPM_GetPubKey (  TCS_KEY_HANDLE  hKey,   // in
+      TPM_AUTH_SESSION*   pAuth,   // in, out
+      UINT32*    pcPubKeySize, // out
+      BYTE**    prgbPubKey  // out
+      );
+
+TPM_RESULT TPM_EvictKey ( TCS_KEY_HANDLE  hKey  // in
+      );
+
+TPM_RESULT TPM_Sign ( TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    areaToSignSize, // in
+      BYTE*    areaToSign,  // in
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_GetRandom (  UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes  // out
+      );
+
+TPM_RESULT TPM_StirRandom (  UINT32    inDataSize, // in
+      BYTE*    inData  // in
+      );
+
+TPM_RESULT TPM_ReadPubek (
+      TPM_PUBKEY* pubEK //out
+      );
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp);
+
+TPM_RESULT TPM_SaveState(void);
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK);
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize);
+
+#endif //TPM_H
diff --git a/stubdom/vtpmmgr/tpmrsa.c b/stubdom/vtpmmgr/tpmrsa.c
new file mode 100644
index 0000000..56094e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.c
@@ -0,0 +1,175 @@
+/*
+ *  The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2011, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+/*
+ *  RSA was designed by Ron Rivest, Adi Shamir and Len Adleman.
+ *
+ *  http://theory.lcs.mit.edu/~rivest/rsapaper.pdf
+ *  http://www.cacr.math.uwaterloo.ca/hac/about/chap8.pdf
+ */
+
+#include "tcg.h"
+#include "polarssl/sha1.h"
+
+#include <stdlib.h>
+#include <stdio.h>
+
+#include "tpmrsa.h"
+
+#define HASH_LEN 20
+
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen) {
+
+   tpmrsa_free(ctx);
+
+   if(explen == 0) { //Default e= 2^16+1
+      mpi_lset(&ctx->E, 65537);
+   } else {
+      mpi_read_binary(&ctx->E, exponent, explen);
+   }
+   mpi_read_binary(&ctx->N, key, keylen);
+
+   ctx->len = ( mpi_msb(&ctx->N) + 7) >> 3;
+}
+
+static TPM_RESULT tpmrsa_public( tpmrsa_context *ctx,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   size_t olen;
+   mpi T;
+
+   mpi_init( &T );
+
+   MPI_CHK( mpi_read_binary( &T, input, ctx->len ) );
+
+   if( mpi_cmp_mpi( &T, &ctx->N ) >= 0 )
+   {
+      mpi_free( &T );
+      return TPM_ENCRYPT_ERROR;
+   }
+
+   olen = ctx->len;
+   MPI_CHK( mpi_exp_mod( &T, &T, &ctx->E, &ctx->N, &ctx->RN ) );
+   MPI_CHK( mpi_write_binary( &T, output, olen ) );
+
+cleanup:
+
+   mpi_free( &T );
+
+   if( ret != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   return TPM_SUCCESS;
+}
+
+static void mgf_mask( unsigned char *dst, int dlen, unsigned char *src, int slen)
+{
+   unsigned char mask[HASH_LEN];
+   unsigned char counter[4] = {0, 0, 0, 0};
+   int i;
+   sha1_context mctx;
+
+   //We always hash the src with the counter, so save the partial hash
+   sha1_starts(&mctx);
+   sha1_update(&mctx, src, slen);
+
+   // Generate and apply dbMask
+   while(dlen > 0) {
+      //Copy the sha1 context
+      sha1_context ctx = mctx;
+
+      //compute hash for input || counter
+      sha1_update(&ctx, counter, sizeof(counter));
+      sha1_finish(&ctx, mask);
+
+      //Apply the mask
+      for(i = 0; i < (dlen < HASH_LEN ? dlen : HASH_LEN); ++i) {
+         *(dst++) ^= mask[i];
+      }
+
+      //Increment counter
+      ++counter[3];
+
+      dlen -= HASH_LEN;
+   }
+}
+
+/*
+ * Add the message padding, then do an RSA operation
+ */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   int olen;
+   unsigned char* seed = output + 1;
+   unsigned char* db = output + HASH_LEN +1;
+
+   olen = ctx->len-1;
+
+   if( f_rng == NULL )
+      return TPM_ENCRYPT_ERROR;
+
+   if( ilen > olen - 2 * HASH_LEN - 1)
+      return TPM_ENCRYPT_ERROR;
+
+   output[0] = 0;
+
+   //Encoding parameter p
+   sha1((unsigned char*)"TCPA", 4, db);
+
+   //PS
+   memset(db + HASH_LEN, 0,
+         olen - ilen - 2 * HASH_LEN - 1);
+
+   //constant 1 byte
+   db[olen - ilen - HASH_LEN -1] = 0x01;
+
+   //input string
+   memcpy(db + olen - ilen - HASH_LEN,
+         input, ilen);
+
+   //Generate random seed
+   if( ( ret = f_rng( p_rng, seed, HASH_LEN ) ) != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   // maskedDB: Apply dbMask to DB
+   mgf_mask( db, olen - HASH_LEN, seed, HASH_LEN);
+
+   // maskedSeed: Apply seedMask to seed
+   mgf_mask( seed, HASH_LEN, db, olen - HASH_LEN);
+
+   // Do the crypto op
+   return tpmrsa_public(ctx, output, output);
+}
diff --git a/stubdom/vtpmmgr/tpmrsa.h b/stubdom/vtpmmgr/tpmrsa.h
new file mode 100644
index 0000000..59579e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.h
@@ -0,0 +1,67 @@
+/**
+ * \file rsa.h
+ *
+ * \brief The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2010, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+#ifndef TPMRSA_H
+#define TPMRSA_H
+
+#include "tcg.h"
+#include <polarssl/bignum.h>
+
+/* tpm software key */
+typedef struct
+{
+    size_t len;                 /*!<  size(N) in chars  */
+
+    mpi N;                      /*!<  public modulus    */
+    mpi E;                      /*!<  public exponent   */
+
+    mpi RN;                     /*!<  cached R^2 mod N  */
+}
+tpmrsa_context;
+
+#define TPMRSA_CTX_INIT { 0, {0, 0, NULL}, {0, 0, NULL}, {0, 0, NULL}}
+
+/* Setup the rsa context using tpm public key data */
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen);
+
+/* Do rsa public crypto */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output );
+
+/* free tpmrsa key */
+static inline void tpmrsa_free( tpmrsa_context *ctx ) {
+   mpi_free( &ctx->RN ); mpi_free( &ctx->E  ); mpi_free( &ctx->N  );
+}
+
+#endif /* TPMRSA_H */
diff --git a/stubdom/vtpmmgr/uuid.h b/stubdom/vtpmmgr/uuid.h
new file mode 100644
index 0000000..4737645
--- /dev/null
+++ b/stubdom/vtpmmgr/uuid.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_UUID_H
+#define VTPMMGR_UUID_H
+
+#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
+#define UUID_FMTLEN ((2*16)+4) /* 16 hex bytes plus 4 hyphens */
+#define UUID_BYTES(uuid) uuid[0], uuid[1], uuid[2], uuid[3], \
+                                uuid[4], uuid[5], uuid[6], uuid[7], \
+                                uuid[8], uuid[9], uuid[10], uuid[11], \
+                                uuid[12], uuid[13], uuid[14], uuid[15]
+
+
+typedef uint8_t uuid_t[16];
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
new file mode 100644
index 0000000..f82a2a9
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -0,0 +1,152 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <inttypes.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "marshal.h"
+#include "log.h"
+#include "vtpm_storage.h"
+#include "vtpmmgr.h"
+#include "tpm.h"
+#include "tcg.h"
+
+static TPM_RESULT vtpmmgr_SaveHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+
+   if(tpmcmd->req_len != VTPM_COMMAND_HEADER_SIZE + HASHKEYSZ) {
+      vtpmlogerror(VTPM_LOG_VTPM, "VTPM_ORD_SAVEHASHKEY hashkey has wrong length!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Do the command */
+   TPMTRYRETURN(vtpm_storage_save_hashkey(uuid, tpmcmd->req + VTPM_COMMAND_HEADER_SIZE));
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, VTPM_COMMAND_HEADER_SIZE, status);
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   return status;
+}
+
+static TPM_RESULT vtpmmgr_LoadHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   TPMTRYRETURN(vtpm_storage_load_hashkey(uuid, tpmcmd->resp + VTPM_COMMAND_HEADER_SIZE));
+
+   tpmcmd->resp_len += HASHKEYSZ;
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, tpmcmd->resp_len, status);
+
+   return status;
+}
+
+
+TPM_RESULT vtpmmgr_handle_cmd(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_TAG tag;
+   UINT32 size;
+   TPM_COMMAND_CODE ord;
+
+   unpack_TPM_RQU_HEADER(tpmcmd->req,
+         &tag, &size, &ord);
+
+   /* Handle the command now */
+   switch(tag) {
+      case VTPM_TAG_REQ:
+         //This is a vTPM command
+         switch(ord) {
+            case VTPM_ORD_SAVEHASHKEY:
+               return vtpmmgr_SaveHashKey(uuid, tpmcmd);
+            case VTPM_ORD_LOADHASHKEY:
+               return vtpmmgr_LoadHashKey(uuid, tpmcmd);
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "Invalid vTPM Ordinal %" PRIu32 "\n", ord);
+               status = TPM_BAD_ORDINAL;
+         }
+         break;
+      case TPM_TAG_RQU_COMMAND:
+      case TPM_TAG_RQU_AUTH1_COMMAND:
+      case TPM_TAG_RQU_AUTH2_COMMAND:
+         //This is a TPM passthrough command
+         switch(ord) {
+            case TPM_ORD_GetRandom:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
+               break;
+            case TPM_ORD_PcrRead:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
+               break;
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "TPM Disallowed Passthrough ord=%" PRIu32 "\n", ord);
+               status = TPM_DISABLED_CMD;
+               goto abort_egress;
+         }
+
+         size = TCPA_MAX_BUFFER_LENGTH;
+         TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len, tpmcmd->resp, &size));
+         tpmcmd->resp_len = size;
+
+         unpack_TPM_RESULT(tpmcmd->resp + sizeof(TPM_TAG) + sizeof(UINT32), &status);
+         return status;
+
+         break;
+      default:
+         vtpmlogerror(VTPM_LOG_VTPM, "Invalid tag=%" PRIu16 "\n", tag);
+         status = TPM_BADTAG;
+   }
+
+abort_egress:
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         tag + 3, tpmcmd->resp_len, status); /* response tag = request tag + 3 */
+
+   return status;
+}
diff --git a/stubdom/vtpmmgr/vtpm_manager.h b/stubdom/vtpmmgr/vtpm_manager.h
new file mode 100644
index 0000000..a2bbcca
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_manager.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_MANAGER_H
+#define VTPM_MANAGER_H
+
+#define VTPM_TAG_REQ 0x01c1
+#define VTPM_TAG_RSP 0x01c4
+#define COMMAND_BUFFER_SIZE 4096
+
+// Header size
+#define VTPM_COMMAND_HEADER_SIZE ( 2 + 4 + 4)
+
+//************************ Command Codes ****************************
+#define VTPM_ORD_BASE       0x0000
+#define VTPM_PRIV_MASK      0x01000000 // Privileged VTPM Command
+#define VTPM_PRIV_BASE      (VTPM_ORD_BASE | VTPM_PRIV_MASK)
+
+// Non-privileged VTPM Commands (from DMIs)
+#define VTPM_ORD_SAVEHASHKEY      (VTPM_ORD_BASE + 1) // DMI requests encryption key for persistent storage
+#define VTPM_ORD_LOADHASHKEY      (VTPM_ORD_BASE + 2) // DMI requests symkey to be regenerated
+
+//************************ Return Codes ****************************
+#define VTPM_SUCCESS               0
+#define VTPM_FAIL                  1
+#define VTPM_UNSUPPORTED           2
+#define VTPM_FORBIDDEN             3
+#define VTPM_RESTORE_CONTEXT_FAILED    4
+#define VTPM_INVALID_REQUEST       5
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.c b/stubdom/vtpmmgr/vtpm_storage.c
new file mode 100644
index 0000000..abb0dba
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.c
@@ -0,0 +1,794 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+/***************************************************************
+ * DISK IMAGE LAYOUT
+ * *************************************************************
+ * All data is stored in BIG ENDIAN format
+ * *************************************************************
+ * Section 1: Header
+ *
+ * 10 bytes 	 id			ID String "VTPMMGRDOM"
+ * uint32_t	 version	        Disk Image version number (current == 1)
+ * uint32_t      storage_key_len	Length of the storage Key
+ * TPM_KEY       storage_key		Marshalled TPM_KEY structure (see TPM spec, Part 2: Structures)
+ * RSA_BLOCK     aes_crypto             Encrypted aes key data (RSA_CIPHER_SIZE bytes), bound by the storage_key
+ *  BYTE[32] aes_key                    Aes key for encrypting the uuid table
+ *  uint32_t cipher_sz                  Encrypted size of the uuid table
+ *
+ * *************************************************************
+ * Section 2: Uuid Table
+ *
+ * This table is encrypted by the aes_key in the header. The cipher text size is just
+ * large enough to hold all of the entries plus required padding.
+ *
+ * Each entry is as follows
+ * BYTE[16] uuid                       Uuid of a vtpm that is stored on this disk
+ * uint32_t offset                     Disk offset where the vtpm data is stored
+ *
+ * *************************************************************
+ * Section 3: Vtpm Table
+ *
+ * The rest of the disk stores vtpms. Each vtpm is an RSA_BLOCK encrypted
+ * by the storage key. Each vtpm must exist on an RSA_BLOCK aligned boundary,
+ * starting at the first RSA_BLOCK aligned offset after the uuid table.
+ * As the uuid table grows, vtpms may be relocated.
+ *
+ * RSA_BLOCK     vtpm_crypto          Vtpm data encrypted by storage_key
+ *   BYTE[20]    hash                 Sha1 hash of vtpm encrypted data
+ *   BYTE[16]    vtpm_aes_key         Encryption key for vtpm data
+ *
+  *************************************************************
+ */
+#define DISKVERS 1
+#define IDSTR "VTPMMGRDOM"
+#define IDSTRLEN 10
+#define AES_BLOCK_SIZE 16
+#define AES_KEY_BITS 256
+#define AES_KEY_SIZE (AES_KEY_BITS/8)
+#define BUF_SIZE 4096
+
+#define UUID_TBL_ENT_SIZE (sizeof(uuid_t) + sizeof(uint32_t))
+
+#define HEADERSZ (10 + 4 + 4)
+
+#define TRY_READ(buf, size, msg) do {\
+   int rc; \
+   if((rc = read(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "read() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#define TRY_WRITE(buf, size, msg) do {\
+   int rc; \
+   if((rc = write(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "write() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <mini-os/byteorder.h>
+#include <polarssl/aes.h>
+
+#include "vtpm_manager.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpm.h"
+#include "uuid.h"
+
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+
+#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )
+#define MIN(a,b) ( ((a) < (b)) ? (a) : (b) )
+
+/* blkfront device objects */
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+struct Vtpm {
+   uuid_t uuid;
+   int offset;
+};
+struct Storage {
+   int aes_offset;
+   int uuid_offset;
+   int end_offset;
+
+   int num_vtpms;
+   int num_vtpms_alloced;
+   struct Vtpm* vtpms;
+};
+
+/* Global storage data */
+static struct Storage g_store = {
+   .vtpms = NULL,
+};
+
+static int get_offset(void) {
+   return lseek(blkfront_fd, 0, SEEK_CUR);
+}
+
+static void reset_store(void) {
+   g_store.aes_offset = 0;
+   g_store.uuid_offset = 0;
+   g_store.end_offset = 0;
+
+   g_store.num_vtpms = 0;
+   g_store.num_vtpms_alloced = 0;
+   free(g_store.vtpms);
+   g_store.vtpms = NULL;
+}
+
+static int vtpm_get_index(const uuid_t uuid) {
+   int st = 0;
+   int ed = g_store.num_vtpms-1;
+   while(st <= ed) {
+      int mid = ((unsigned int)st + (unsigned int)ed) >> 1; //avoid overflow
+      int c = memcmp(uuid, &g_store.vtpms[mid].uuid, sizeof(uuid_t));
+      if(c == 0) {
+         return mid;
+      } else if(c > 0) {
+         st = mid + 1;
+      } else {
+         ed = mid - 1;
+      }
+   }
+   return -(st + 1);
+}
+
+static void vtpm_add(const uuid_t uuid, int offset, int index) {
+   /* Realloc more space if needed */
+   if(g_store.num_vtpms >= g_store.num_vtpms_alloced) {
+      g_store.num_vtpms_alloced += 16;
+      g_store.vtpms = realloc(
+            g_store.vtpms,
+            sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+   }
+
+   /* Move everybody after the new guy */
+   for(int i = g_store.num_vtpms; i > index; --i) {
+      g_store.vtpms[i] = g_store.vtpms[i-1];
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Registered vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+
+   /* Finally add new one */
+   memcpy(g_store.vtpms[index].uuid, uuid, sizeof(uuid_t));
+   g_store.vtpms[index].offset = offset;
+   ++g_store.num_vtpms;
+}
+
+#if 0
+static void vtpm_remove(int index) {
+   for(int i = index; i < g_store.num_vtpms - 1; ++i) {
+      g_store.vtpms[i] = g_store.vtpms[i+1];
+   }
+   --g_store.num_vtpms;
+}
+#endif
+
+static int pack_uuid_table(uint8_t* table, int size, int* nvtpms) {
+   uint8_t* ptr = table;
+   while(*nvtpms < g_store.num_vtpms && size >= 0)
+   {
+      /* Pack the uuid */
+      memcpy(ptr, (uint8_t*)g_store.vtpms[*nvtpms].uuid, sizeof(uuid_t));
+      ptr+= sizeof(uuid_t);
+
+      /* Pack the offset */
+      ptr = pack_UINT32(ptr, g_store.vtpms[*nvtpms].offset);
+
+      ++*nvtpms;
+      size -= UUID_TBL_ENT_SIZE;
+   }
+   return ptr - table;
+}
+
+/* Extract the uuids */
+static int extract_uuid_table(uint8_t* table, int size) {
+   uint8_t* ptr = table;
+   for(;size >= UUID_TBL_ENT_SIZE; size -= UUID_TBL_ENT_SIZE) {
+      int index;
+      uint32_t v32;
+
+      /*uuid_t is just an array of bytes, so we can do a direct cast here */
+      uint8_t* uuid = ptr;
+      ptr += sizeof(uuid_t);
+
+      /* Get the offset of the key */
+      ptr = unpack_UINT32(ptr, &v32);
+
+      /* Insert the new vtpm in sorted order */
+      if((index = vtpm_get_index(uuid)) >= 0) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Vtpm (" UUID_FMT ") exists multiple times! ignoring...\n", UUID_BYTES(uuid));
+         continue;
+      }
+      index = -index -1;
+
+      vtpm_add(uuid, v32, index);
+
+   }
+   return ptr - table;
+}
+
+static void vtpm_decrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* cipher,
+      uint8_t* plain,
+      int cipher_sz,
+      int* overlap)
+{
+   int bytes_ext;
+   /* Decrypt */
+   aes_crypt_cbc(aes, AES_DECRYPT,
+         cipher_sz,
+         iv, cipher, plain + *overlap);
+
+   /* Extract */
+   bytes_ext = extract_uuid_table(plain, cipher_sz + *overlap);
+
+   /* Copy left overs to the beginning */
+   *overlap = cipher_sz + *overlap - bytes_ext;
+   memcpy(plain, plain + bytes_ext, *overlap);
+}
+
+static int vtpm_encrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* plain,
+      uint8_t* cipher,
+      int block_sz,
+      int* overlap,
+      int* num_vtpms)
+{
+   int bytes_to_crypt;
+   int bytes_packed;
+
+   /* Pack the uuid table */
+   bytes_packed = *overlap + pack_uuid_table(plain + *overlap, block_sz - *overlap, num_vtpms);
+   bytes_to_crypt = MIN(bytes_packed, block_sz);
+
+   /* Add padding if we aren't on a multiple of the block size */
+   if(bytes_to_crypt & (AES_BLOCK_SIZE-1)) {
+      int oldsz = bytes_to_crypt;
+      //add padding
+      bytes_to_crypt += AES_BLOCK_SIZE - (bytes_to_crypt & (AES_BLOCK_SIZE-1));
+      //fill padding with random bytes
+      vtpmmgr_rand(plain + oldsz, bytes_to_crypt - oldsz);
+      *overlap = 0;
+   } else {
+      *overlap = bytes_packed - bytes_to_crypt;
+   }
+
+   /* Encrypt this chunk */
+   aes_crypt_cbc(aes, AES_ENCRYPT,
+            bytes_to_crypt,
+            iv, plain, cipher);
+
+   /* Copy the left over partials to the beginning */
+   memcpy(plain, plain + bytes_to_crypt, *overlap);
+
+   return bytes_to_crypt;
+}
+
+static TPM_RESULT vtpm_storage_new_vtpm(const uuid_t uuid, int index) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t plain[BUF_SIZE + AES_BLOCK_SIZE];
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr;
+   int cipher_sz;
+   aes_context aes;
+
+   /* Add new vtpm to the table */
+   vtpm_add(uuid, g_store.end_offset, index);
+   g_store.end_offset += RSA_CIPHER_SIZE;
+
+   /* Compute the new end location of the encrypted uuid table */
+   cipher_sz = AES_BLOCK_SIZE; //IV
+   cipher_sz += g_store.num_vtpms * UUID_TBL_ENT_SIZE; //uuid table
+   cipher_sz += (AES_BLOCK_SIZE - (cipher_sz & (AES_BLOCK_SIZE -1))) & (AES_BLOCK_SIZE-1); //aes padding
+
+   /* Does this overlap any key data? If so they need to be relocated */
+   int uuid_end = (g_store.uuid_offset + cipher_sz + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      if(g_store.vtpms[i].offset < uuid_end) {
+
+         vtpmloginfo(VTPM_LOG_VTPM, "Relocating vtpm data\n");
+
+         //Read the hashkey cipher text
+         lseek(blkfront_fd, g_store.vtpms[i].offset, SEEK_SET);
+         TRY_READ(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Write the cipher text to new offset
+         lseek(blkfront_fd, g_store.end_offset, SEEK_SET);
+         TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Save new offset
+         g_store.vtpms[i].offset = g_store.end_offset;
+         g_store.end_offset += RSA_CIPHER_SIZE;
+      }
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Generating a new symmetric key\n");
+
+   /* Generate an aes key */
+   TPMTRYRETURN(vtpmmgr_rand(plain, AES_KEY_SIZE));
+   aes_setkey_enc(&aes, plain, AES_KEY_BITS);
+   ptr = plain + AES_KEY_SIZE;
+
+   /* Pack the crypted size */
+   ptr = pack_UINT32(ptr, cipher_sz);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding encrypted key\n");
+
+   /* Seal the key and size */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+            plain,
+            ptr - plain,
+            buf));
+
+   /* Write the sealed key to disk */
+   lseek(blkfront_fd, g_store.aes_offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm aes key");
+
+   /* ENCRYPT AND WRITE UUID TABLE */
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Encrypting the uuid table\n");
+
+   int num_vtpms = 0;
+   int overlap = 0;
+   int bytes_crypted;
+   uint8_t iv[AES_BLOCK_SIZE];
+
+   /* Generate the iv for the first block */
+   TPMTRYRETURN(vtpmmgr_rand(iv, AES_BLOCK_SIZE));
+
+   /* Copy the iv to the cipher text buffer to be written to disk */
+   memcpy(buf, iv, AES_BLOCK_SIZE);
+   ptr = buf + AES_BLOCK_SIZE;
+
+   /* Encrypt the first block of the uuid table */
+   bytes_crypted = vtpm_encrypt_block(&aes,
+         iv, //iv
+         plain, //plaintext
+         ptr, //cipher text
+         BUF_SIZE - AES_BLOCK_SIZE,
+         &overlap,
+         &num_vtpms);
+
+   /* Write the iv followed by the crypted table*/
+   TRY_WRITE(buf, bytes_crypted + AES_BLOCK_SIZE, "vtpm uuid table");
+
+   /* Decrement the number of bytes encrypted */
+   cipher_sz -= bytes_crypted + AES_BLOCK_SIZE;
+
+   /* If there are more vtpms, encrypt and write them block by block */
+   while(cipher_sz > 0) {
+      /* Encrypt the next block of the uuid table */
+      bytes_crypted = vtpm_encrypt_block(&aes,
+               iv,
+               plain,
+               buf,
+               BUF_SIZE,
+               &overlap,
+               &num_vtpms);
+
+      /* Write the cipher text to disk */
+      TRY_WRITE(buf, bytes_crypted, "vtpm uuid table");
+
+      cipher_sz -= bytes_crypted;
+   }
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+/**************************************
+ * PUBLIC FUNCTIONS
+ **************************************/
+
+int vtpm_storage_init(void) {
+   struct blkfront_info info;
+   if((blkdev = init_blkfront(NULL, &info)) == NULL) {
+      return -1;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) < 0) {
+      return -1;
+   }
+   return 0;
+}
+
+void vtpm_storage_shutdown(void) {
+   reset_store();
+   close(blkfront_fd);
+}
+
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t cipher[RSA_CIPHER_SIZE];
+   uint8_t clear[RSA_CIPHER_SIZE];
+   UINT32 clear_size;
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      vtpmlogerror(VTPM_LOG_VTPM, "LoadKey failure: Unrecognized uuid! " UUID_FMT "\n", UUID_BYTES(uuid));
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Read the table entry */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_READ(cipher, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   /* Decrypt the table entry */
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            cipher,
+            &clear_size,
+            clear,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   if(clear_size < HASHKEYSZ) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Decrypted Hash key size (%" PRIu32 ") was too small!\n", clear_size);
+      status = TPM_RESOURCES;
+      goto abort_egress;
+   }
+
+   memcpy(hashkey, clear, HASHKEYSZ);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loaded hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t buf[RSA_CIPHER_SIZE];
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      /* Create a new vtpm */
+      TPMTRYRETURN( vtpm_storage_new_vtpm(uuid, index) );
+   }
+
+   /* Encrypt the hash and key */
+   TPMTRYRETURN( TPM_Bind(&vtpm_globals.storage_key,
+            hashkey,
+            HASHKEYSZ,
+            buf));
+
+   /* Write to disk */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to save key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_new_header()
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t buf[BUF_SIZE];
+   uint8_t keybuf[AES_KEY_SIZE + sizeof(uint32_t)];
+   uint8_t* ptr = buf;
+   uint8_t* sptr;
+
+   /* Clear everything first */
+   reset_store();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Creating new disk image header\n");
+
+   /*Copy the ID string */
+   memcpy(ptr, IDSTR, IDSTRLEN);
+   ptr += IDSTRLEN;
+
+   /*Copy the version */
+   ptr = pack_UINT32(ptr, DISKVERS);
+
+   /*Save the location of the key size */
+   sptr = ptr;
+   ptr += sizeof(UINT32);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saving root storage key..\n");
+
+   /* Copy the storage key */
+   ptr = pack_TPM_KEY(ptr, &vtpm_globals.storage_key);
+
+   /* Now save the size */
+   pack_UINT32(sptr, ptr - (sptr + 4));
+
+   /* Create a fake aes key and set cipher text size to 0 */
+   memset(keybuf, 0, sizeof(keybuf));
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding uuid table symmetric key..\n");
+
+   /* Save the location of the aes key */
+   g_store.aes_offset = ptr - buf;
+
+   /* Store the fake aes key and vtpm count */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+         keybuf,
+         sizeof(keybuf),
+         ptr));
+   ptr+= RSA_CIPHER_SIZE;
+
+   /* Write the header to disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_WRITE(buf, ptr-buf, "vtpm header");
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Save the end offset */
+   g_store.end_offset = (g_store.uuid_offset + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved new manager disk header.\n");
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+TPM_RESULT vtpm_storage_load_header(void)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint32_t v32;
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr = buf;
+   aes_context aes;
+
+   /* Clear everything first */
+   reset_store();
+
+   /* Read the header from disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_READ(buf, IDSTRLEN + sizeof(UINT32) + sizeof(UINT32), "vtpm header");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loading disk image header\n");
+
+   /* Verify the ID string */
+   if(memcmp(ptr, IDSTR, IDSTRLEN)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid ID string in disk image!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+   ptr+=IDSTRLEN;
+
+   /* Unpack the version */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Verify the version */
+   if(v32 != DISKVERS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unsupported disk image version number %" PRIu32 "\n", v32);
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   /* Size of the storage key */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Sanity check */
+   if(v32 > BUF_SIZE) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Size of storage key (%" PRIu32 ") is too large!\n", v32);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* read the storage key */
+   TRY_READ(buf, v32, "storage pub key");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unpacking storage key\n");
+
+   /* unpack the storage key */
+   ptr = unpack_TPM_KEY(buf, &vtpm_globals.storage_key, UNPACK_ALLOC);
+
+   /* Load Storage Key into the TPM */
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   /* Initialize the storage key auth */
+   memset(vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   /* Store the offset of the aes key */
+   g_store.aes_offset = get_offset();
+
+   /* Read the rsa cipher text for the aes key */
+   TRY_READ(buf, RSA_CIPHER_SIZE, "aes key");
+   ptr = buf + RSA_CIPHER_SIZE;
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unbinding uuid table symmetric key\n");
+
+   /* Decrypt the aes key protecting the uuid table */
+   UINT32 datalen;
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            buf,
+            &datalen,
+            ptr,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   /* Validate the length of the output buffer */
+   if(datalen < AES_KEY_SIZE + sizeof(UINT32)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unbound AES key size (%" PRIu32 ") was too small! expected (%zu)\n", datalen, AES_KEY_SIZE + sizeof(UINT32));
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Extract the aes key */
+   aes_setkey_dec(&aes, ptr, AES_KEY_BITS);
+   ptr+= AES_KEY_SIZE;
+
+   /* Extract the ciphertext size */
+   ptr = unpack_UINT32(ptr, &v32);
+   int cipher_size = v32;
+
+   /* Sanity check */
+   if(cipher_size & (AES_BLOCK_SIZE-1)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Cipher text size (%" PRIu32 ") is not a multiple of the aes block size! (%d)\n", v32, AES_BLOCK_SIZE);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Only decrypt the table if there are vtpms to decrypt */
+   if(cipher_size > 0) {
+      int rbytes;
+      int overlap = 0;
+      uint8_t plain[BUF_SIZE + AES_BLOCK_SIZE];
+      uint8_t iv[AES_BLOCK_SIZE];
+
+      vtpmloginfo(VTPM_LOG_VTPM, "Decrypting uuid table\n");
+
+      /* Pre allocate the vtpm array */
+      g_store.num_vtpms_alloced = cipher_size / UUID_TBL_ENT_SIZE;
+      g_store.vtpms = malloc(sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+
+      /* Read the iv and the first chunk of cipher text */
+      rbytes = MIN(cipher_size, BUF_SIZE);
+      TRY_READ(buf, rbytes, "vtpm uuid table");
+      cipher_size -= rbytes;
+
+      /* Copy the iv */
+      memcpy(iv, buf, AES_BLOCK_SIZE);
+      ptr = buf + AES_BLOCK_SIZE;
+
+      /* Remove the iv from the number of bytes to decrypt */
+      rbytes -= AES_BLOCK_SIZE;
+
+      /* Decrypt and extract vtpms */
+      vtpm_decrypt_block(&aes,
+            iv, ptr, plain,
+            rbytes, &overlap);
+
+      /* Read the rest of the table if there is more */
+      while(cipher_size > 0) {
+         /* Read next chunk of cipher text */
+         rbytes = MIN(cipher_size, BUF_SIZE);
+         TRY_READ(buf, rbytes, "vtpm uuid table");
+         cipher_size -= rbytes;
+
+         /* Decrypt a block of text */
+         vtpm_decrypt_block(&aes,
+               iv, buf, plain,
+               rbytes, &overlap);
+
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Loaded %d vtpms!\n", g_store.num_vtpms);
+   }
+
+   /* The end of the key table, new vtpms go here */
+   int uuid_end = (get_offset() + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   g_store.end_offset = uuid_end;
+
+   /* Compute the end offset while validating vtpms*/
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      /* offset must not collide with previous data */
+      if(g_store.vtpms[i].offset < uuid_end) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset (%d) is before end of uuid table (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, uuid_end);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* offset must be at a multiple of cipher size */
+      if(g_store.vtpms[i].offset & (RSA_CIPHER_SIZE-1)) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset(%d) is not at a multiple of the rsa cipher text size (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, RSA_CIPHER_SIZE);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* Save the last offset */
+      if(g_store.vtpms[i].offset >= g_store.end_offset) {
+         g_store.end_offset = g_store.vtpms[i].offset + RSA_CIPHER_SIZE;
+      }
+   }
+
+   goto egress;
+abort_egress:
+   //An error occurred somewhere
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load manager data!\n");
+
+   //Clear the data store
+   reset_store();
+
+   //Reset the storage key structure
+   free_TPM_KEY(&vtpm_globals.storage_key);
+   {
+      TPM_KEY key = TPM_KEY_INIT;
+      vtpm_globals.storage_key = key;
+   }
+
+   //Reset the storage key handle
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   vtpm_globals.storage_key_handle = 0;
+egress:
+   return status;
+}
+
+#if 0
+/* For testing disk IO */
+void add_fake_vtpms(int num) {
+   for(int i = 0; i < num; ++i) {
+      uint32_t ind = cpu_to_be32(i);
+
+      uuid_t uuid;
+      memset(uuid, 0, sizeof(uuid_t));
+      memcpy(uuid, &ind, sizeof(ind));
+      int index = vtpm_get_index(uuid);
+      index = -index-1;
+
+      vtpm_storage_new_vtpm(uuid, index);
+   }
+}
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.h b/stubdom/vtpmmgr/vtpm_storage.h
new file mode 100644
index 0000000..a5a5fd7
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_STORAGE_H
+#define VTPM_STORAGE_H
+
+#include "uuid.h"
+
+#define VTPM_NVMKEY_SIZE 32
+#define HASHKEYSZ (sizeof(TPM_DIGEST) + VTPM_NVMKEY_SIZE)
+
+/* Initialize the storage system and its virtual disk */
+int vtpm_storage_init(void);
+
+/* Shutdown the storage system and its virtual disk */
+void vtpm_storage_shutdown(void);
+
+/* Loads the SHA1 hash and 256 bit AES key for a vtpm from disk and
+ * stores them packed together in hashkey, which must hold at least
+ * HASHKEYSZ bytes.
+ */
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* hashkey must contain a SHA1 hash followed by a 256 bit AES key.
+ * Encrypts and stores the hash and key to disk */
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* Load the vtpm manager data - call this on startup */
+TPM_RESULT vtpm_storage_load_header(void);
+
+/* Creates and saves a fresh vtpm manager disk header - call this when provisioning a new disk image */
+TPM_RESULT vtpm_storage_new_header(void);
+
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
new file mode 100644
index 0000000..563f4e8
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdint.h>
+#include <mini-os/tpmback.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include "log.h"
+
+#include "vtpmmgr.h"
+#include "tcg.h"
+
+
+void main_loop(void) {
+   tpmcmd_t* tpmcmd;
+   uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
+
+   while(1) {
+      /* Wait for requests from a vtpm */
+      vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPMs:\n");
+      if((tpmcmd = tpmback_req_any()) == NULL) {
+         vtpmlogerror(VTPM_LOG_VTPM, "NULL tpmcmd\n");
+         continue;
+      }
+
+      tpmcmd->resp = respbuf;
+
+      /* Process the command */
+      vtpmmgr_handle_cmd(tpmcmd->uuid, tpmcmd);
+
+      /* Send response */
+      tpmback_resp(tpmcmd);
+   }
+}
+
+int main(int argc, char** argv)
+{
+   int rc = 0;
+   sleep(2);
+   vtpmloginfo(VTPM_LOG_VTPM, "Starting vTPM manager domain\n");
+
+   /* Initialize the vtpm manager */
+   if(vtpmmgr_init(argc, argv) != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize vtpmmgr domain!\n");
+      rc = -1;
+      goto exit;
+   }
+
+   main_loop();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "vTPM Manager shutting down...\n");
+
+   vtpmmgr_shutdown();
+
+exit:
+   return rc;
+
+}
diff --git a/stubdom/vtpmmgr/vtpmmgr.h b/stubdom/vtpmmgr/vtpmmgr.h
new file mode 100644
index 0000000..50a1992
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_H
+#define VTPMMGR_H
+
+#include <mini-os/tpmback.h>
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "uuid.h"
+#include "tcg.h"
+#include "vtpm_manager.h"
+
+#define RSA_KEY_SIZE 0x0800
+#define RSA_CIPHER_SIZE (RSA_KEY_SIZE / 8)
+
+struct vtpm_globals {
+   int tpm_fd;
+   TPM_KEY             storage_key;
+   TPM_HANDLE          storage_key_handle;       // Key used by persistent store
+   TPM_AUTH_SESSION    oiap;                // OIAP session for storageKey
+   TPM_AUTHDATA        storage_key_usage_auth;
+
+   TPM_AUTHDATA        owner_auth;
+   TPM_AUTHDATA        srk_auth;
+
+   entropy_context     entropy;
+   ctr_drbg_context    ctr_drbg;
+};
+
+// --------------------------- Global Values --------------------------
+extern struct vtpm_globals vtpm_globals;   // Key info and DMI states
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv);
+void vtpmmgr_shutdown(void);
+
+TPM_RESULT vtpmmgr_handle_cmd(const uuid_t uuid, tpmcmd_t* tpmcmd);
+
+static inline TPM_RESULT vtpmmgr_rand(unsigned char* bytes, size_t num_bytes) {
+   return ctr_drbg_random(&vtpm_globals.ctr_drbg, bytes, num_bytes) == 0 ? TPM_SUCCESS : TPM_FAIL;
+}
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:11:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwxW-0002XD-6C; Tue, 04 Dec 2012 18:11:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwxU-0002Uc-LA
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:11:05 +0000
Received: from [85.158.143.35:13798] by server-2.bemta-4.messagelabs.com id
	37/0B-28922-8BC3EB05; Tue, 04 Dec 2012 18:11:04 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354644589!14047766!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19798 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (unknown [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0efc_20e9ab0d_0261_41bc_8dd5_077671fcd724;
	Tue, 04 Dec 2012 13:09:46 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:23 -0500
Message-Id: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for vtpm-stubdom to the stubdom
hierarchy. Makefile changes are in a later patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpm/Makefile    |   37 +++++
 stubdom/vtpm/minios.cfg  |   14 ++
 stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm.h      |   36 +++++
 stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm_cmd.h  |   31 ++++
 stubdom/vtpm/vtpm_pcrs.c |   43 +++++
 stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
 stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpmblk.h   |   31 ++++
 10 files changed, 1212 insertions(+)
 create mode 100644 stubdom/vtpm/Makefile
 create mode 100644 stubdom/vtpm/minios.cfg
 create mode 100644 stubdom/vtpm/vtpm.c
 create mode 100644 stubdom/vtpm/vtpm.h
 create mode 100644 stubdom/vtpm/vtpm_cmd.c
 create mode 100644 stubdom/vtpm/vtpm_cmd.h
 create mode 100644 stubdom/vtpm/vtpm_pcrs.c
 create mode 100644 stubdom/vtpm/vtpm_pcrs.h
 create mode 100644 stubdom/vtpm/vtpmblk.c
 create mode 100644 stubdom/vtpm/vtpmblk.h

diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
new file mode 100644
index 0000000..686c0ea
--- /dev/null
+++ b/stubdom/vtpm/Makefile
@@ -0,0 +1,37 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
+
+TARGET=vtpm.a
+OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
+
+
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
+
+$(TARGET): $(OBJS)
+	ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+$(OBJS): vtpm_manager.h
+
+vtpm_manager.h:
+	ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
+
+clean:
+	-rm $(TARGET) $(OBJS) vtpm_manager.h
+
+.PHONY: clean
diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
new file mode 100644
index 0000000..31652ee
--- /dev/null
+++ b/stubdom/vtpm/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=n
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
new file mode 100644
index 0000000..71aef78
--- /dev/null
+++ b/stubdom/vtpm/vtpm.c
@@ -0,0 +1,404 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <syslog.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <xen/xen.h>
+#include <tpmback.h>
+#include <tpmfront.h>
+
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "tpm/tpm_emulator_extern.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm.h"
+#include "vtpm_cmd.h"
+#include "vtpm_pcrs.h"
+#include "vtpmblk.h"
+
+#define TPM_LOG_INFO LOG_INFO
+#define TPM_LOG_ERROR LOG_ERR
+#define TPM_LOG_DEBUG LOG_DEBUG
+
+/* Global commandline options - default values */
+struct Opt_args opt_args = {
+   .startup = ST_CLEAR,
+   .loglevel = TPM_LOG_INFO,
+   .hwinitpcrs = VTPM_PCRNONE,
+   .tpmconf = 0,
+   .enable_maint_cmds = false,
+};
+
+static uint32_t badords[32];
+static unsigned int n_badords = 0;
+
+entropy_context entropy;
+ctr_drbg_context ctr_drbg;
+
+struct tpmfront_dev* tpmfront_dev;
+
+void vtpm_get_extern_random_bytes(void *buf, size_t nbytes)
+{
+   ctr_drbg_random(&ctr_drbg, buf, nbytes);
+}
+
+int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
+   return read_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_write_to_file(uint8_t *data, size_t data_length) {
+   return write_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_extern_init_fake(void) {
+   return 0;
+}
+
+void vtpm_extern_release_fake(void) {
+}
+
+
+void vtpm_log(int priority, const char *fmt, ...)
+{
+   if(opt_args.loglevel >= priority) {
+      va_list v;
+      va_start(v, fmt);
+      vprintf(fmt, v);
+      va_end(v);
+   }
+}
+
+static uint64_t vtpm_get_ticks(void)
+{
+  static uint64_t old_t = 0;
+  uint64_t new_t, res_t;
+  struct timeval tv;
+  gettimeofday(&tv, NULL);
+  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
+  res_t = (old_t > 0) ? new_t - old_t : 0;
+  old_t = new_t;
+  return res_t;
+}
+
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+int init_random(void) {
+   /* Initialize the rng */
+   entropy_init(&entropy);
+   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&entropy);
+   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
+
+   return 0;
+}
+
+int check_ordinal(tpmcmd_t* tpmcmd) {
+   TPM_COMMAND_CODE ord;
+   UINT32 len = 4;
+   BYTE* ptr;
+   unsigned int i;
+
+   if(tpmcmd->req_len < 10) {
+      return true;
+   }
+
+   ptr = tpmcmd->req + 6;
+   tpm_unmarshal_UINT32(&ptr, &len, &ord);
+
+   for(i = 0; i < n_badords; ++i) {
+      if(ord == badords[i]) {
+         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
+         return false;
+      }
+   }
+   return true;
+}
+
+static void main_loop(void) {
+   tpmcmd_t* tpmcmd = NULL;
+   domid_t domid;		/* Domid of frontend */
+   unsigned int handle;	/* handle of frontend */
+   int res = -1;
+
+   info("VTPM Initializing\n");
+
+   /* Set required tpm config args */
+   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
+   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
+
+   /* Initialize the emulator */
+   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
+
+   /* Initialize any requested PCRs with hardware TPM values */
+   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
+      error("Failed to initialize PCRs with hardware TPM values");
+      goto abort_postpcrs;
+   }
+
+   /* Wait for the frontend domain to connect */
+   info("Waiting for frontend domain to connect..");
+   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
+      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
+   } else {
+      error("Unable to attach to a frontend");
+      goto abort_postpcrs;
+   }
+
+   tpmcmd = tpmback_req(domid, handle);
+   while(tpmcmd) {
+      /* Handle the request */
+      if(tpmcmd->req_len) {
+	 tpmcmd->resp = NULL;
+	 tpmcmd->resp_len = 0;
+
+         /* First check for disabled ordinals */
+         if(!check_ordinal(tpmcmd)) {
+            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
+         }
+         /* If not disabled, do the command */
+         else {
+            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
+               error("tpm_handle_command() failed");
+               create_error_response(tpmcmd, TPM_FAIL);
+            }
+         }
+      }
+
+      /* Send the response */
+      tpmback_resp(tpmcmd);
+
+      /* Wait for the next request */
+      tpmcmd = tpmback_req(domid, handle);
+
+   }
+
+abort_postpcrs:
+   info("VTPM Shutting down\n");
+
+   tpm_emulator_shutdown();
+}
+
+int parse_cmd_line(int argc, char** argv)
+{
+   char sval[25];
+   char* logstr = NULL;
+   /* Parse the command strings */
+   for(unsigned int i = 1; i < argc; ++i) {
+      if (sscanf(argv[i], "loglevel=%24s", sval) == 1){
+	 if (!strcmp(sval, "debug")) {
+	    opt_args.loglevel = TPM_LOG_DEBUG;
+	    logstr = "debug";
+	 }
+	 else if (!strcmp(sval, "info")) {
+	    logstr = "info";
+	    opt_args.loglevel = TPM_LOG_INFO;
+	 }
+	 else if (!strcmp(sval, "error")) {
+	    logstr = "error";
+	    opt_args.loglevel = TPM_LOG_ERROR;
+	 }
+      }
+      else if (!strcmp(argv[i], "clear")) {
+	 opt_args.startup = ST_CLEAR;
+      }
+      else if (!strcmp(argv[i], "save")) {
+	 opt_args.startup = ST_SAVE;
+      }
+      else if (!strcmp(argv[i], "deactivated")) {
+	 opt_args.startup = ST_DEACTIVATED;
+      }
+      else if (!strncmp(argv[i], "maintcmds=", 10)) {
+         if(!strcmp(argv[i] + 10, "1")) {
+            opt_args.enable_maint_cmds = true;
+         } else if(!strcmp(argv[i] + 10, "0")) {
+            opt_args.enable_maint_cmds = false;
+         }
+      }
+      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
+         char *pch = argv[i] + 10;
+         unsigned int v1, v2;
+         pch = strtok(pch, ",");
+         while(pch != NULL) {
+            if(!strcmp(pch, "all")) {
+               //Set all
+               opt_args.hwinitpcrs = VTPM_PCRALL;
+            } else if(!strcmp(pch, "none")) {
+               //Set none
+               opt_args.hwinitpcrs = VTPM_PCRNONE;
+            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
+               //Set range; checked before the single index case because
+               //sscanf with plain "%u" would also match "17-19" and
+               //silently drop the range
+               if(v2 < v1) {
+                  unsigned tp = v1;
+                  v1 = v2;
+                  v2 = tp;
+               }
+               if(v2 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v2);
+                  return -1;
+               }
+               for(unsigned int i = v1; i <= v2; ++i) {
+                  opt_args.hwinitpcrs |= (1 << i);
+               }
+            } else if(sscanf(pch, "%u", &v1) == 1) {
+               //Set one
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               opt_args.hwinitpcrs |= (1 << v1);
+            } else {
+               error("hwinitpcr error: Invalid PCR specification : %s", pch);
+               return -1;
+            }
+            pch = strtok(NULL, ",");
+         }
+      }
+      else {
+	 error("Invalid command line option `%s'", argv[i]);
+      }
+
+   }
+
+   /* Check Errors and print results */
+   switch(opt_args.startup) {
+      case ST_CLEAR:
+	 info("Startup mode is `clear'");
+	 break;
+      case ST_SAVE:
+	 info("Startup mode is `save'");
+	 break;
+      case ST_DEACTIVATED:
+	 info("Startup mode is `deactivated'");
+	 break;
+      default:
+	 error("Invalid startup mode %d", opt_args.startup);
+	 return -1;
+   }
+
+   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
+   {
+      char pcrstr[1024];
+      char* ptr = pcrstr;
+
+      pcrstr[0] = '\0';
+      info("The following PCRs will be initialized with values from the hardware TPM:");
+      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+         if(opt_args.hwinitpcrs & (1 << i)) {
+            ptr += sprintf(ptr, "%u, ", i);
+         }
+      }
+      /* get rid of the last comma if any numbers were printed */
+      if(ptr != pcrstr) {
+         *(ptr - 2) = '\0';
+      }
+
+      info("\t%s", pcrstr);
+   } else {
+      info("All PCRs initialized to default values");
+   }
+
+   if(!opt_args.enable_maint_cmds) {
+      info("TPM Maintenance Commands disabled");
+      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
+      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
+      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
+   } else {
+      info("TPM Maintenance Commands enabled");
+   }
+
+   info("Log level set to %s", logstr ? logstr : "default");
+
+   return 0;
+}
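A standalone sketch (outside the patch) of the `hwinitpcr=` bitmask building that `parse_cmd_line()` does for specs like "0,2,17-19", "all", or "none". `parse_pcr_spec` and `NUM_PCR` are illustrative stand-ins, not names from the patch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NUM_PCR 24   /* stand-in for TPM_NUM_PCR */

/* Turn a spec like "0,2,17-19" (or "all"/"none") into a PCR bitmask.
 * The range check runs before the single-index check because sscanf
 * with plain "%u" would also accept "17-19" and drop the range part. */
static int parse_pcr_spec(char *spec, unsigned long *mask) {
   unsigned v1, v2;
   *mask = 0;
   for (char *tok = strtok(spec, ","); tok; tok = strtok(NULL, ",")) {
      if (!strcmp(tok, "all")) {
         *mask = (1UL << NUM_PCR) - 1;
      } else if (!strcmp(tok, "none")) {
         *mask = 0;
      } else if (sscanf(tok, "%u-%u", &v1, &v2) == 2) {
         if (v1 > v2) { unsigned t = v1; v1 = v2; v2 = t; }
         if (v2 >= NUM_PCR)
            return -1;          /* range runs past the last PCR */
         for (unsigned i = v1; i <= v2; ++i)
            *mask |= 1UL << i;
      } else if (sscanf(tok, "%u", &v1) == 1) {
         if (v1 >= NUM_PCR)
            return -1;          /* single index out of range */
         *mask |= 1UL << v1;
      } else {
         return -1;             /* unparseable token */
      }
   }
   return 0;
}
```

Note that `strtok` modifies its argument, so the spec string must be writable, as it is when taken from `argv`.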
+
+void cleanup_opt_args(void) {
+}
+
+int main(int argc, char **argv)
+{
+   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
+   sleep(2);
+
+   /* Setup extern function pointers */
+   tpm_extern_init = vtpm_extern_init_fake;
+   tpm_extern_release = vtpm_extern_release_fake;
+   tpm_malloc = malloc;
+   tpm_free = free;
+   tpm_log = vtpm_log;
+   tpm_get_ticks = vtpm_get_ticks;
+   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
+   tpm_write_to_storage = vtpm_write_to_file;
+   tpm_read_from_storage = vtpm_read_from_file;
+
+   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
+   if(parse_cmd_line(argc, argv)) {
+      error("Error parsing commandline\n");
+      return -1;
+   }
+
+   /* Initialize devices */
+   init_tpmback();
+   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+      error("Unable to initialize tpmfront device");
+      goto abort_posttpmfront;
+   }
+
+   /* Seed the RNG with entropy from hardware TPM */
+   if(init_random()) {
+      error("Unable to initialize RNG");
+      goto abort_postrng;
+   }
+
+   /* Initialize blkfront device */
+   if(init_vtpmblk(tpmfront_dev)) {
+      error("Unable to initialize Blkfront persistent storage");
+      goto abort_postvtpmblk;
+   }
+
+   /* Run main loop */
+   main_loop();
+
+   /* Shutdown blkfront */
+   shutdown_vtpmblk();
+abort_postvtpmblk:
+abort_postrng:
+
+   /* Close devices */
+   shutdown_tpmfront(tpmfront_dev);
+abort_posttpmfront:
+   shutdown_tpmback();
+
+   cleanup_opt_args();
+
+   return 0;
+}
diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
new file mode 100644
index 0000000..5919e44
--- /dev/null
+++ b/stubdom/vtpm/vtpm.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_H
+#define VTPM_H
+
+#include <stdbool.h>
+
+/* For testing */
+#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
+#define VERS_CMD_LEN 22
+
+/* Global commandline options */
+struct Opt_args {
+   enum StartUp {
+      ST_CLEAR = 1,
+      ST_SAVE = 2,
+      ST_DEACTIVATED = 3
+   } startup;
+   unsigned long hwinitpcrs;
+   int loglevel;
+   uint32_t tpmconf;
+   bool enable_maint_cmds;
+};
+extern struct Opt_args opt_args;
+
+#endif
diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
new file mode 100644
index 0000000..7eae98b
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <types.h>
+#include <xen/xen.h>
+#include <mm.h>
+#include <gnttab.h>
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_manager.h"
+#include "vtpm_cmd.h"
+#include <tpmback.h>
+
+#define TRYFAILGOTO(C) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      goto abort_egress; \
+   }
+#define TRYFAILGOTOMSG(C, msg) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      error(msg); \
+      goto abort_egress; \
+   }
+#define CHECKSTATUSGOTO(ret, fname) \
+   if((ret) != TPM_SUCCESS) { \
+      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
+      status = ret; \
+      goto abort_egress; \
+   }
+
+#define ERR_MALFORMED "Malformed response from backend"
+#define ERR_TPMFRONT "Error sending command through frontend device"
+
+struct shpage {
+   void* page;
+   grant_ref_t grantref;
+};
+
+typedef struct shpage shpage_t;
+
+static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord)
+{
+   return *bptr == NULL ||
+	 tpm_marshal_UINT16(bptr, len, tag) ||
+	 tpm_marshal_UINT32(bptr, len, size) ||
+	 tpm_marshal_UINT32(bptr, len, ord);
+}
+
+static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord)
+{
+   return *bptr == NULL ||
+	 tpm_unmarshal_UINT16(bptr, len, tag) ||
+	 tpm_unmarshal_UINT32(bptr, len, size) ||
+	 tpm_unmarshal_UINT32(bptr, len, ord);
+}
+
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode)
+{
+   TPM_TAG tag;
+   UINT32 len = tpmcmd->req_len;
+   uint8_t* respptr;
+   uint8_t* cmdptr = tpmcmd->req;
+
+   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
+      switch (tag) {
+         case TPM_TAG_RQU_COMMAND:
+            tag = TPM_TAG_RSP_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH1_COMMAND:
+            tag = TPM_TAG_RSP_AUTH1_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH2_COMMAND:
+            tag = TPM_TAG_RSP_AUTH2_COMMAND;
+            break;
+      }
+   } else {
+      tag = TPM_TAG_RSP_COMMAND;
+   }
+
+   tpmcmd->resp_len = len = 10;
+   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
+
+   return pack_header(&respptr, &len, tag, len, errorcode);
+}
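For illustration (outside the patch), the 10-byte error response that `create_error_response()` builds is just a bare header: tag (2 bytes), paramSize (4), returnCode (4), all big-endian. `build_error_resp`, `put16`, and `put32` below are hypothetical stand-ins for `pack_header` and the `tpm_marshal_*` routines:

```c
#include <assert.h>
#include <stdint.h>

/* Big-endian writes, standing in for tpm_marshal_UINT16/UINT32 */
static void put16(uint8_t *p, uint16_t v) {
   p[0] = (uint8_t)(v >> 8); p[1] = (uint8_t)v;
}
static void put32(uint8_t *p, uint32_t v) {
   p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
   p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
}

/* A header-only error response: paramSize is 10 because there is no
 * payload beyond the header itself */
static void build_error_resp(uint8_t out[10], uint16_t tag, uint32_t errcode) {
   put16(out, tag);
   put32(out + 2, 10);
   put32(out + 6, errcode);
}
```

The test below assumes the TPM 1.2 values 0x00C4 for TPM_TAG_RSP_COMMAND and 10 for TPM_BAD_ORDINAL.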
+
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Ask the real tpm for random bytes for the seed */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm command */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
+
+   /* Send cmd, wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
+      ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
+
+   // Get the number of random bytes in the response
+   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
+   *numbytes = size;
+
+   //Get the random bytes out; the tpm may give us fewer bytes than we want
+   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
+
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+
+   /* Send the command to vtpm_manager */
+   info("Requesting Encryption key from backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
+
+   /* Get the size of the key */
+   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
+
+   /* Copy the key bits */
+   *data = malloc(*data_length);
+   memcpy(*data, bptr, *data_length);
+
+   goto egress;
+abort_egress:
+   error("VTPM_LoadHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   memcpy(bptr, data, data_length);
+   bptr += data_length;
+
+   /* Send the command to vtpm_manager */
+   info("Sending encryption key to backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
+
+   goto egress;
+abort_egress:
+   error("VTPM_SaveHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t *cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Just send a TPM_PCRRead Command to the HW tpm */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm cmd */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
+
+   /*Send Cmd wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
+
+   //Get the ptr value
+   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
new file mode 100644
index 0000000..b0bfa22
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef MANAGER_H
+#define MANAGER_H
+
+#include <tpmfront.h>
+#include <tpmback.h>
+#include "tpm/tpm_structures.h"
+
+/* Create a command response error header */
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
+/* Request random bytes from hardware tpm, returns 0 on success */
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
+/* Retrieve 256 bit AES encryption key from manager */
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
+/* Manager securely saves our 256 bit AES encryption key */
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
+/* Send a TPM_PCRRead command through the manager to the hw tpm */
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
+
+#endif
diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
new file mode 100644
index 0000000..22a6cef
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include "vtpm_pcrs.h"
+#include "vtpm_cmd.h"
+#include "tpm/tpm_data.h"
+
+#define PCR_VALUE      tpmData.permanent.data.pcrValue
+
+static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
+   if(pcrIndex >= TPM_NUM_PCR) {
+      return TPM_BADINDEX;
+   }
+   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
+   return TPM_SUCCESS;
+}
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs)
+{
+   TPM_RESULT rc = TPM_SUCCESS;
+   uint8_t digest[sizeof(TPM_PCRVALUE)];
+
+   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+      if(pcrs & (1 << i)) {
+         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
+            error("TPM_PCRRead failed with error : %d", rc);
+            return rc;
+         }
+         write_pcr_direct(i, digest);
+      }
+   }
+
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
new file mode 100644
index 0000000..11835f9
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_PCRS_H
+#define VTPM_PCRS_H
+
+#include "tpm/tpm_structures.h"
+
+#define VTPM_PCR0 (1 << 0)
+#define VTPM_PCR1 (1 << 1)
+#define VTPM_PCR2 (1 << 2)
+#define VTPM_PCR3 (1 << 3)
+#define VTPM_PCR4 (1 << 4)
+#define VTPM_PCR5 (1 << 5)
+#define VTPM_PCR6 (1 << 6)
+#define VTPM_PCR7 (1 << 7)
+#define VTPM_PCR8 (1 << 8)
+#define VTPM_PCR9 (1 << 9)
+#define VTPM_PCR10 (1 << 10)
+#define VTPM_PCR11 (1 << 11)
+#define VTPM_PCR12 (1 << 12)
+#define VTPM_PCR13 (1 << 13)
+#define VTPM_PCR14 (1 << 14)
+#define VTPM_PCR15 (1 << 15)
+#define VTPM_PCR16 (1 << 16)
+#define VTPM_PCR17 (1 << 17)
+#define VTPM_PCR18 (1 << 18)
+#define VTPM_PCR19 (1 << 19)
+#define VTPM_PCR20 (1 << 20)
+#define VTPM_PCR21 (1 << 21)
+#define VTPM_PCR22 (1 << 22)
+#define VTPM_PCR23 (1 << 23)
+
+#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
+#define VTPM_PCRNONE 0
+
+#define VTPM_NUMPCRS 24
+
+struct tpmfront_dev;
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
+
+
+#endif
diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
new file mode 100644
index 0000000..b343bd8
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <mini-os/byteorder.h>
+#include "vtpmblk.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_cmd.h"
+#include "polarssl/aes.h"
+#include "polarssl/sha1.h"
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <fcntl.h>
+
+/*Encryption key and block sizes */
+#define BLKSZ 16
+
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
+{
+   struct blkfront_info blkinfo;
+   info("Initializing persistent NVM storage\n");
+
+   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
+      error("BLKIO: ERROR Unable to initialize blkfront");
+      return -1;
+   }
+   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
+      error("BLKIO: ERROR block device is read only!");
+      goto error;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
+      error("Unable to open blkfront file descriptor!");
+      goto error;
+   }
+
+   return 0;
+error:
+   shutdown_blkfront(blkdev);
+   blkdev = NULL;
+   return -1;
+}
+
+void shutdown_vtpmblk(void)
+{
+   close(blkfront_fd);
+   blkfront_fd = -1;
+   blkdev = NULL;
+}
+
+int write_vtpmblk_raw(uint8_t *data, size_t data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+   debug("Begin Write data=%p len=%zu", data, data_length);
+
+   lenbuf = cpu_to_be32((uint32_t)data_length);
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("write(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
+      error("write(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Wrote %zu bytes to NVM persistent storage", data_length);
+
+   return 0;
+}
+
+int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("read(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   *data_length = (size_t) be32_to_cpu(lenbuf);
+   if(*data_length == 0) {
+      error("read 0 data_length for NVM");
+      return -1;
+   }
+
+   *data = tpm_malloc(*data_length);
+   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
+      error("read(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Read %zu bytes from NVM persistent storage", *data_length);
+   return 0;
+}
+
+int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   aes_context aes_ctx;
+   UINT32 temp;
+   int mod;
+
+   uint8_t* clbuf = NULL;
+
+   uint8_t* ivptr;
+   int ivlen;
+
+   uint8_t* cptr;	//Cipher block pointer
+   int clen;	//Cipher block length
+
+   /*Create a new 256 bit encryption key */
+   if(symkey == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
+
+   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
+   temp = sizeof(UINT32);
+   ivlen = BLKSZ - temp;
+   tpm_get_extern_random_bytes(iv, ivlen);
+   ivptr = iv + ivlen;
+   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
+
+   /*The clear text needs to be padded out to a multiple of BLKSZ */
+   mod = clear_len % BLKSZ;
+   clen = mod ? clear_len + BLKSZ - mod : clear_len;
+   clbuf = malloc(clen);
+   if (clbuf == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   memcpy(clbuf, clear, clear_len);
+   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
+   if(clen - clear_len) {
+      memset(clbuf + clear_len, 0, clen - clear_len);
+   }
+
+   /* Setup the ciphertext buffer */
+   *cipher_len = BLKSZ + clen;		/*iv + ciphertext */
+   cptr = *cipher = malloc(*cipher_len);
+   if (*cipher == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Copy the IV to cipher text blob*/
+   memcpy(cptr, iv, BLKSZ);
+   cptr += BLKSZ;
+
+   /* Setup encryption */
+   aes_setkey_enc(&aes_ctx, symkey, 256);
+
+   /* Do encryption now */
+   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
+
+   goto egress;
+abort_egress:
+egress:
+   free(clbuf);
+   return rc;
+}
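As a standalone sketch (outside the patch) of the sizing arithmetic in `encrypt_vtpmblk()`: the on-disk blob is a 16-byte IV (12 random bytes plus the 4-byte big-endian clear-text length) followed by the clear text padded to a 16-byte multiple and AES-256-CBC encrypted. `padded_len` and `blob_len` are illustrative helper names:

```c
#include <assert.h>
#include <stddef.h>

#define BLKSZ 16   /* AES block size, as in vtpmblk.c */

/* Clear text rounded up to the next multiple of BLKSZ for CBC */
static size_t padded_len(size_t clear_len) {
   size_t mod = clear_len % BLKSZ;
   return mod ? clear_len + BLKSZ - mod : clear_len;
}

/* Total blob size: IV block plus padded ciphertext (this is the
 * payload handed to write_vtpmblk_raw, which prepends its own
 * 4-byte length field on disk) */
static size_t blob_len(size_t clear_len) {
   return BLKSZ + padded_len(clear_len);
}
```

Stashing the clear length inside the IV is what lets `decrypt_vtpmblk()` recover the exact unpadded size without a separate field in the blob.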
+int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   uint8_t* ivptr;
+   UINT32 u32, temp;
+   aes_context aes_ctx;
+
+   uint8_t* cptr = cipher;	//cipher block pointer
+   int clen = cipher_len;	//cipher block length
+
+   /* Pull out the initialization vector */
+   memcpy(iv, cipher, BLKSZ);
+   cptr += BLKSZ;
+   clen -= BLKSZ;
+
+   /* Setup the clear text buffer */
+   if((*clear = malloc(clen)) == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Get the length of clear text from last 4 bytes of iv */
+   temp = sizeof(UINT32);
+   ivptr = iv + BLKSZ - temp;
+   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
+   *clear_len = u32;
+
+   /* Setup decryption */
+   aes_setkey_dec(&aes_ctx, symkey, 256);
+
+   /* Do decryption now */
+   if ((clen % BLKSZ) != 0) {
+      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
+      rc = -1;
+      goto abort_egress;
+   }
+   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
+
+   goto egress;
+abort_egress:
+egress:
+   return rc;
+}
+
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   uint8_t hashkey[HASHKEYSZ];
+   uint8_t* symkey = hashkey + HASHSZ;
+
+   /* Encrypt the data */
+   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
+      goto abort_egress;
+   }
+   /* Write to disk */
+   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
+      goto abort_egress;
+   }
+   /* Get sha1 hash of data */
+   sha1(cipher, cipher_len, hashkey);
+
+   /* Send hash and key to manager */
+   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   return rc;
+}
+
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   size_t keysize;
+   uint8_t* hashkey = NULL;
+   uint8_t hash[HASHSZ];
+   uint8_t* symkey;
+
+   /* Retreive the hash and the key from the manager */
+   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   if(keysize != HASHKEYSZ) {
+      error("Manager returned a hashkey of invalid size! expected %d, actual %zu", HASHKEYSZ, keysize);
+      rc = -1;
+      goto abort_egress;
+   }
+   symkey = hashkey + HASHSZ;
+
+   /* Read from disk now */
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
+      goto abort_egress;
+   }
+
+   /* Compute the hash of the cipher text and compare */
+   sha1(cipher, cipher_len, hash);
+   if(memcmp(hash, hashkey, HASHSZ)) {
+      int i;
+      error("NVM Storage Checksum failed!");
+      printf("Expected: ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hashkey[i]);
+      }
+      printf("\n");
+      printf("Actual:   ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hash[i]);
+      }
+      printf("\n");
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Decrypt the blob */
+   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   free(hashkey);
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
new file mode 100644
index 0000000..282ce6a
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef NVM_H
+#define NVM_H
+#include <mini-os/types.h>
+#include <xen/xen.h>
+#include <tpmfront.h>
+
+#define NVMKEYSZ 32
+#define HASHSZ 20
+#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
+void shutdown_vtpmblk(void);
+
+/* Encrypts and writes data to blk device */
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
+/* Reads, Decrypts, and returns data from blk device */
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:11:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfwxW-0002XD-6C; Tue, 04 Dec 2012 18:11:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfwxU-0002Uc-LA
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:11:05 +0000
Received: from [85.158.143.35:13798] by server-2.bemta-4.messagelabs.com id
	37/0B-28922-8BC3EB05; Tue, 04 Dec 2012 18:11:04 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354644589!14047766!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19798 invoked from network); 4 Dec 2012 18:09:51 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:09:51 -0000
Received: from anonelbe.jhuapl.edu (unknown [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 347d_0efc_20e9ab0d_0261_41bc_8dd5_077671fcd724;
	Tue, 04 Dec 2012 13:09:46 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Campbell@citrix.com, Ian.Jackson@eu.citrix.com
Date: Tue,  4 Dec 2012 13:09:23 -0500
Message-Id: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v6 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for vtpm-stubdom to the stubdom
hierarchy. Makefile changes are in a later patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpm/Makefile    |   37 +++++
 stubdom/vtpm/minios.cfg  |   14 ++
 stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm.h      |   36 +++++
 stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm_cmd.h  |   31 ++++
 stubdom/vtpm/vtpm_pcrs.c |   43 +++++
 stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
 stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpmblk.h   |   31 ++++
 10 files changed, 1212 insertions(+)
 create mode 100644 stubdom/vtpm/Makefile
 create mode 100644 stubdom/vtpm/minios.cfg
 create mode 100644 stubdom/vtpm/vtpm.c
 create mode 100644 stubdom/vtpm/vtpm.h
 create mode 100644 stubdom/vtpm/vtpm_cmd.c
 create mode 100644 stubdom/vtpm/vtpm_cmd.h
 create mode 100644 stubdom/vtpm/vtpm_pcrs.c
 create mode 100644 stubdom/vtpm/vtpm_pcrs.h
 create mode 100644 stubdom/vtpm/vtpmblk.c
 create mode 100644 stubdom/vtpm/vtpmblk.h

diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
new file mode 100644
index 0000000..686c0ea
--- /dev/null
+++ b/stubdom/vtpm/Makefile
@@ -0,0 +1,37 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
+
+TARGET=vtpm.a
+OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
+
+
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
+
+$(TARGET): $(OBJS)
+	ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+$(OBJS): vtpm_manager.h
+
+vtpm_manager.h:
+	ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
+
+clean:
+	-rm $(TARGET) $(OBJS) vtpm_manager.h
+
+.PHONY: clean
diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
new file mode 100644
index 0000000..31652ee
--- /dev/null
+++ b/stubdom/vtpm/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=n
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
new file mode 100644
index 0000000..71aef78
--- /dev/null
+++ b/stubdom/vtpm/vtpm.c
@@ -0,0 +1,404 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <syslog.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <xen/xen.h>
+#include <tpmback.h>
+#include <tpmfront.h>
+
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "tpm/tpm_emulator_extern.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm.h"
+#include "vtpm_cmd.h"
+#include "vtpm_pcrs.h"
+#include "vtpmblk.h"
+
+#define TPM_LOG_INFO LOG_INFO
+#define TPM_LOG_ERROR LOG_ERR
+#define TPM_LOG_DEBUG LOG_DEBUG
+
+/* Global commandline options - default values */
+struct Opt_args opt_args = {
+   .startup = ST_CLEAR,
+   .loglevel = TPM_LOG_INFO,
+   .hwinitpcrs = VTPM_PCRNONE,
+   .tpmconf = 0,
+   .enable_maint_cmds = false,
+};
+
+static uint32_t badords[32];
+static unsigned int n_badords = 0;
+
+entropy_context entropy;
+ctr_drbg_context ctr_drbg;
+
+struct tpmfront_dev* tpmfront_dev;
+
+void vtpm_get_extern_random_bytes(void *buf, size_t nbytes)
+{
+   ctr_drbg_random(&ctr_drbg, buf, nbytes);
+}
+
+int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
+   return read_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_write_to_file(uint8_t *data, size_t data_length) {
+   return write_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_extern_init_fake(void) {
+   return 0;
+}
+
+void vtpm_extern_release_fake(void) {
+}
+
+
+void vtpm_log(int priority, const char *fmt, ...)
+{
+   if(opt_args.loglevel >= priority) {
+      va_list v;
+      va_start(v, fmt);
+      vprintf(fmt, v);
+      va_end(v);
+   }
+}
+
+static uint64_t vtpm_get_ticks(void)
+{
+  static uint64_t old_t = 0;
+  uint64_t new_t, res_t;
+  struct timeval tv;
+  gettimeofday(&tv, NULL);
+  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
+  res_t = (old_t > 0) ? new_t - old_t : 0;
+  old_t = new_t;
+  return res_t;
+}
+
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+int init_random(void) {
+   /* Initialize the rng */
+   entropy_init(&entropy);
+   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&entropy);
+   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
+
+   return 0;
+}
+
+int check_ordinal(tpmcmd_t* tpmcmd) {
+   TPM_COMMAND_CODE ord;
+   UINT32 len = 4;
+   BYTE* ptr;
+   unsigned int i;
+
+   if(tpmcmd->req_len < 10) {
+      return true;
+   }
+
+   ptr = tpmcmd->req + 6;
+   tpm_unmarshal_UINT32(&ptr, &len, &ord);
+
+   for(i = 0; i < n_badords; ++i) {
+      if(ord == badords[i]) {
+         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
+         return false;
+      }
+   }
+   return true;
+}
+
+static void main_loop(void) {
+   tpmcmd_t* tpmcmd = NULL;
+   domid_t domid;		/* Domid of frontend */
+   unsigned int handle;	/* handle of frontend */
+   int res = -1;
+
+   info("VTPM Initializing\n");
+
+   /* Set required tpm config args */
+   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
+   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
+
+   /* Initialize the emulator */
+   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
+
+   /* Initialize any requested PCRs with hardware TPM values */
+   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
+      error("Failed to initialize PCRs with hardware TPM values");
+      goto abort_postpcrs;
+   }
+
+   /* Wait for the frontend domain to connect */
+   info("Waiting for frontend domain to connect..");
+   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
+      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
+   } else {
+      error("Unable to attach to a frontend");
+   }
+
+   tpmcmd = tpmback_req(domid, handle);
+   while(tpmcmd) {
+      /* Handle the request */
+      if(tpmcmd->req_len) {
+	 tpmcmd->resp = NULL;
+	 tpmcmd->resp_len = 0;
+
+         /* First check for disabled ordinals */
+         if(!check_ordinal(tpmcmd)) {
+            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
+         }
+         /* If not disabled, do the command */
+         else {
+            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
+               error("tpm_handle_command() failed");
+               create_error_response(tpmcmd, TPM_FAIL);
+            }
+         }
+      }
+
+      /* Send the response */
+      tpmback_resp(tpmcmd);
+
+      /* Wait for the next request */
+      tpmcmd = tpmback_req(domid, handle);
+
+   }
+
+abort_postpcrs:
+   info("VTPM Shutting down\n");
+
+   tpm_emulator_shutdown();
+}
+
+int parse_cmd_line(int argc, char** argv)
+{
+   char sval[26];
+   char* logstr = NULL;
+   /* Parse the command strings */
+   for(unsigned int i = 1; i < argc; ++i) {
+      if (sscanf(argv[i], "loglevel=%25s", sval) == 1){
+	 if (!strcmp(sval, "debug")) {
+	    opt_args.loglevel = TPM_LOG_DEBUG;
+	    logstr = "debug";
+	 }
+	 else if (!strcmp(sval, "info")) {
+	    logstr = "info";
+	    opt_args.loglevel = TPM_LOG_INFO;
+	 }
+	 else if (!strcmp(sval, "error")) {
+	    logstr = "error";
+	    opt_args.loglevel = TPM_LOG_ERROR;
+	 }
+      }
+      else if (!strcmp(argv[i], "clear")) {
+	 opt_args.startup = ST_CLEAR;
+      }
+      else if (!strcmp(argv[i], "save")) {
+	 opt_args.startup = ST_SAVE;
+      }
+      else if (!strcmp(argv[i], "deactivated")) {
+	 opt_args.startup = ST_DEACTIVATED;
+      }
+      else if (!strncmp(argv[i], "maintcmds=", 10)) {
+         if(!strcmp(argv[i] + 10, "1")) {
+            opt_args.enable_maint_cmds = true;
+         } else if(!strcmp(argv[i] + 10, "0")) {
+            opt_args.enable_maint_cmds = false;
+         }
+      }
+      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
+         char *pch = argv[i] + 10;
+         unsigned int v1, v2;
+         pch = strtok(pch, ",");
+         while(pch != NULL) {
+            if(!strcmp(pch, "all")) {
+               //Set all
+               opt_args.hwinitpcrs = VTPM_PCRALL;
+            } else if(!strcmp(pch, "none")) {
+               //Set none
+               opt_args.hwinitpcrs = VTPM_PCRNONE;
+            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
+               //Set range; must be tried before the single-index case
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               if(v2 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v2);
+                  return -1;
+               }
+               if(v2 < v1) {
+                  unsigned tp = v1;
+                  v1 = v2;
+                  v2 = tp;
+               }
+               for(unsigned int j = v1; j <= v2; ++j) {
+                  opt_args.hwinitpcrs |= (1 << j);
+               }
+            } else if(sscanf(pch, "%u", &v1) == 1) {
+               //Set one
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               opt_args.hwinitpcrs |= (1 << v1);
+            } else {
+               error("hwinitpcr error: Invalid PCR specification : %s", pch);
+               return -1;
+            }
+            pch = strtok(NULL, ",");
+         }
+      }
+      else {
+	 error("Invalid command line option `%s'", argv[i]);
+      }
+
+   }
+
+   /* Check Errors and print results */
+   switch(opt_args.startup) {
+      case ST_CLEAR:
+	 info("Startup mode is `clear'");
+	 break;
+      case ST_SAVE:
+	 info("Startup mode is `save'");
+	 break;
+      case ST_DEACTIVATED:
+	 info("Startup mode is `deactivated'");
+	 break;
+      default:
+	 error("Invalid startup mode %d", opt_args.startup);
+	 return -1;
+   }
+
+   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
+   {
+      char pcrstr[1024];
+      char* ptr = pcrstr;
+
+      pcrstr[0] = '\0';
+      info("The following PCRs will be initialized with values from the hardware TPM:");
+      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+         if(opt_args.hwinitpcrs & (1 << i)) {
+            ptr += sprintf(ptr, "%u, ", i);
+         }
+      }
+      /* strip the trailing ", " (at least one PCR bit is set here) */
+      *(ptr - 2) = '\0';
+
+      info("\t%s", pcrstr);
+   } else {
+      info("All PCRs initialized to default values");
+   }
+
+   if(!opt_args.enable_maint_cmds) {
+      info("TPM Maintenance Commands disabled");
+      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
+      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
+      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
+   } else {
+      info("TPM Maintenance Commands enabled");
+   }
+
+   info("Log level set to %s", logstr);
+
+   return 0;
+}
+
+void cleanup_opt_args(void) {
+}
+
+int main(int argc, char **argv)
+{
+   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
+   sleep(2);
+
+   /* Setup extern function pointers */
+   tpm_extern_init = vtpm_extern_init_fake;
+   tpm_extern_release = vtpm_extern_release_fake;
+   tpm_malloc = malloc;
+   tpm_free = free;
+   tpm_log = vtpm_log;
+   tpm_get_ticks = vtpm_get_ticks;
+   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
+   tpm_write_to_storage = vtpm_write_to_file;
+   tpm_read_from_storage = vtpm_read_from_file;
+
+   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
+   if(parse_cmd_line(argc, argv)) {
+      error("Error parsing commandline\n");
+      return -1;
+   }
+
+   /* Initialize devices */
+   init_tpmback();
+   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+      error("Unable to initialize tpmfront device");
+      goto abort_posttpmfront;
+   }
+
+   /* Seed the RNG with entropy from hardware TPM */
+   if(init_random()) {
+      error("Unable to initialize RNG");
+      goto abort_postrng;
+   }
+
+   /* Initialize blkfront device */
+   if(init_vtpmblk(tpmfront_dev)) {
+      error("Unable to initialize Blkfront persistent storage");
+      goto abort_postvtpmblk;
+   }
+
+   /* Run main loop */
+   main_loop();
+
+   /* Shutdown blkfront */
+   shutdown_vtpmblk();
+abort_postvtpmblk:
+abort_postrng:
+
+   /* Close devices */
+   shutdown_tpmfront(tpmfront_dev);
+abort_posttpmfront:
+   shutdown_tpmback();
+
+   cleanup_opt_args();
+
+   return 0;
+}
diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
new file mode 100644
index 0000000..5919e44
--- /dev/null
+++ b/stubdom/vtpm/vtpm.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_H
+#define VTPM_H
+
+#include <stdbool.h>
+
+/* For testing */
+#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
+#define VERS_CMD_LEN 22
+
+/* Global commandline options */
+struct Opt_args {
+   enum StartUp {
+      ST_CLEAR = 1,
+      ST_SAVE = 2,
+      ST_DEACTIVATED = 3
+   } startup;
+   unsigned long hwinitpcrs;
+   int loglevel;
+   uint32_t tpmconf;
+   bool enable_maint_cmds;
+};
+extern struct Opt_args opt_args;
+
+#endif
diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
new file mode 100644
index 0000000..7eae98b
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <types.h>
+#include <xen/xen.h>
+#include <mm.h>
+#include <gnttab.h>
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_manager.h"
+#include "vtpm_cmd.h"
+#include <tpmback.h>
+
+#define TRYFAILGOTO(C) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      goto abort_egress; \
+   }
+#define TRYFAILGOTOMSG(C, msg) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      error(msg); \
+      goto abort_egress; \
+   }
+#define CHECKSTATUSGOTO(ret, fname) \
+   if((ret) != TPM_SUCCESS) { \
+      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
+      status = ord; \
+      goto abort_egress; \
+   }
+
+#define ERR_MALFORMED "Malformed response from backend"
+#define ERR_TPMFRONT "Error sending command through frontend device"
+
+struct shpage {
+   void* page;
+   grant_ref_t grantref;
+};
+
+typedef struct shpage shpage_t;
+
+static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord)
+{
+   return *bptr == NULL ||
+	 tpm_marshal_UINT16(bptr, len, tag) ||
+	 tpm_marshal_UINT32(bptr, len, size) ||
+	 tpm_marshal_UINT32(bptr, len, ord);
+}
+
+static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord)
+{
+   return *bptr == NULL ||
+	 tpm_unmarshal_UINT16(bptr, len, tag) ||
+	 tpm_unmarshal_UINT32(bptr, len, size) ||
+	 tpm_unmarshal_UINT32(bptr, len, ord);
+}
+
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode)
+{
+   TPM_TAG tag;
+   UINT32 len = tpmcmd->req_len;
+   uint8_t* respptr;
+   uint8_t* cmdptr = tpmcmd->req;
+
+   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
+      switch (tag) {
+         case TPM_TAG_RQU_COMMAND:
+            tag = TPM_TAG_RSP_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH1_COMMAND:
+            tag = TPM_TAG_RSP_AUTH1_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH2_COMMAND:
+            tag = TPM_TAG_RSP_AUTH2_COMMAND;
+            break;
+      }
+   } else {
+      tag = TPM_TAG_RSP_COMMAND;
+   }
+
+   tpmcmd->resp_len = len = 10;
+   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
+
+   return pack_header(&respptr, &len, tag, len, errorcode);
+}
+
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Ask the real tpm for random bytes for the seed */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm command */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
+
+   /* Send cmd, wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
+      ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
+
+   // Get the number of random bytes in the response
+   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
+   *numbytes = size;
+
+   //Get the random bytes out; the TPM may give us fewer bytes than we want
+   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
+
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+
+   /* Send the command to vtpm_manager */
+   info("Requesting Encryption key from backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
+
+   /* Get the size of the key */
+   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
+
+   /* Copy the key bits */
+   *data = malloc(*data_length);
+   memcpy(*data, bptr, *data_length);
+
+   goto egress;
+abort_egress:
+   error("VTPM_LoadHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   memcpy(bptr, data, data_length);
+   bptr += data_length;
+
+   /* Send the command to vtpm_manager */
+   info("Sending encryption key to backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
+
+   goto egress;
+abort_egress:
+   error("VTPM_SaveHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t *cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Just send a TPM_PCRRead Command to the HW tpm */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm cmd */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
+
+   /*Send Cmd wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
+
+   //Get the ptr value
+   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
new file mode 100644
index 0000000..b0bfa22
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef MANAGER_H
+#define MANAGER_H
+
+#include <tpmfront.h>
+#include <tpmback.h>
+#include "tpm/tpm_structures.h"
+
+/* Create a command response error header */
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
+/* Request random bytes from hardware tpm, returns 0 on success */
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
+/* Retrieve 256 bit AES encryption key from manager */
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
+/* Manager securely saves our 256 bit AES encryption key */
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
+/* Send a TPM_PCRRead command through the manager to the hw tpm */
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
+
+#endif
diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
new file mode 100644
index 0000000..22a6cef
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include "vtpm_pcrs.h"
+#include "vtpm_cmd.h"
+#include "tpm/tpm_data.h"
+
+#define PCR_VALUE      tpmData.permanent.data.pcrValue
+
+static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
+   if(pcrIndex >= TPM_NUM_PCR) {
+      return TPM_BADINDEX;
+   }
+   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
+   return TPM_SUCCESS;
+}
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs)
+{
+   TPM_RESULT rc = TPM_SUCCESS;
+   uint8_t digest[sizeof(TPM_PCRVALUE)];
+
+   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+      if(pcrs & 1 << i) {
+         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
+            error("TPM_PCRRead failed with error : %d", rc);
+            return rc;
+         }
+         write_pcr_direct(i, digest);
+      }
+   }
+
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
new file mode 100644
index 0000000..11835f9
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_PCRS_H
+#define VTPM_PCRS_H
+
+#include "tpm/tpm_structures.h"
+
+#define VTPM_PCR0 (1 << 0)
+#define VTPM_PCR1 (1 << 1)
+#define VTPM_PCR2 (1 << 2)
+#define VTPM_PCR3 (1 << 3)
+#define VTPM_PCR4 (1 << 4)
+#define VTPM_PCR5 (1 << 5)
+#define VTPM_PCR6 (1 << 6)
+#define VTPM_PCR7 (1 << 7)
+#define VTPM_PCR8 (1 << 8)
+#define VTPM_PCR9 (1 << 9)
+#define VTPM_PCR10 (1 << 10)
+#define VTPM_PCR11 (1 << 11)
+#define VTPM_PCR12 (1 << 12)
+#define VTPM_PCR13 (1 << 13)
+#define VTPM_PCR14 (1 << 14)
+#define VTPM_PCR15 (1 << 15)
+#define VTPM_PCR16 (1 << 16)
+#define VTPM_PCR17 (1 << 17)
+#define VTPM_PCR18 (1 << 18)
+#define VTPM_PCR19 (1 << 19)
+#define VTPM_PCR20 (1 << 20)
+#define VTPM_PCR21 (1 << 21)
+#define VTPM_PCR22 (1 << 22)
+#define VTPM_PCR23 (1 << 23)
+
+#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
+#define VTPM_PCRNONE 0
+
+#define VTPM_NUMPCRS 24
+
+struct tpmfront_dev;
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
+
+
+#endif
diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
new file mode 100644
index 0000000..b343bd8
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <mini-os/byteorder.h>
+#include "vtpmblk.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_cmd.h"
+#include "polarssl/aes.h"
+#include "polarssl/sha1.h"
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <fcntl.h>
+
+/*Encryption key and block sizes */
+#define BLKSZ 16
+
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
+{
+   struct blkfront_info blkinfo;
+   info("Initializing persistent NVM storage\n");
+
+   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
+      error("BLKIO: ERROR Unable to initialize blkfront");
+      return -1;
+   }
+   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
+      error("BLKIO: ERROR block device is read only!");
+      goto error;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
+      error("Unable to open blkfront file descriptor!");
+      goto error;
+   }
+
+   return 0;
+error:
+   shutdown_blkfront(blkdev);
+   blkdev = NULL;
+   return -1;
+}
+
+void shutdown_vtpmblk(void)
+{
+   close(blkfront_fd);
+   blkfront_fd = -1;
+   blkdev = NULL;
+}
+
+int write_vtpmblk_raw(uint8_t *data, size_t data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+   debug("Begin Write data=%p len=%u", data, (unsigned int) data_length);
+
+   lenbuf = cpu_to_be32((uint32_t)data_length);
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("write(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
+      error("write(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Wrote %u bytes to NVM persistent storage", data_length);
+
+   return 0;
+}
+
+int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("read(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   *data_length = (size_t) be32_to_cpu(lenbuf);
+   if(*data_length == 0) {
+      error("read 0 data_length for NVM");
+      return -1;
+   }
+
+   *data = tpm_malloc(*data_length);
+   if(*data == NULL) {
+      error("tpm_malloc(%u) failed for NVM read", *data_length);
+      return -1;
+   }
+   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
+      error("read(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Read %u bytes from NVM persistent storage", *data_length);
+   return 0;
+}
+
+int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   aes_context aes_ctx;
+   UINT32 temp;
+   int mod;
+
+   uint8_t* clbuf = NULL;
+
+   uint8_t* ivptr;
+   int ivlen;
+
+   uint8_t* cptr;	//Cipher block pointer
+   int clen;	//Cipher block length
+
+   /*Create a new 256 bit encryption key */
+   if(symkey == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
+
+   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
+   temp = sizeof(UINT32);
+   ivlen = BLKSZ - temp;
+   tpm_get_extern_random_bytes(iv, ivlen);
+   ivptr = iv + ivlen;
+   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
+
+   /*The clear text needs to be padded out to a multiple of BLKSZ */
+   mod = clear_len % BLKSZ;
+   clen = mod ? clear_len + BLKSZ - mod : clear_len;
+   clbuf = malloc(clen);
+   if (clbuf == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   memcpy(clbuf, clear, clear_len);
+   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
+   if(clen - clear_len) {
+      memset(clbuf + clear_len, 0, clen - clear_len);
+   }
+
+   /* Setup the ciphertext buffer */
+   *cipher_len = BLKSZ + clen;		/*iv + ciphertext */
+   cptr = *cipher = malloc(*cipher_len);
+   if (*cipher == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Copy the IV to cipher text blob*/
+   memcpy(cptr, iv, BLKSZ);
+   cptr += BLKSZ;
+
+   /* Setup encryption */
+   aes_setkey_enc(&aes_ctx, symkey, 256);
+
+   /* Do encryption now */
+   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
+
+   goto egress;
+abort_egress:
+egress:
+   free(clbuf);
+   return rc;
+}
+int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   uint8_t* ivptr;
+   UINT32 u32, temp;
+   aes_context aes_ctx;
+
+   uint8_t* cptr = cipher;	//cipher block pointer
+   int clen = cipher_len;	//cipher block length
+
+   /* Pull out the initialization vector */
+   memcpy(iv, cipher, BLKSZ);
+   cptr += BLKSZ;
+   clen -= BLKSZ;
+
+   /* Setup the clear text buffer */
+   if((*clear = malloc(clen)) == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Get the length of clear text from last 4 bytes of iv */
+   temp = sizeof(UINT32);
+   ivptr = iv + BLKSZ - temp;
+   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
+   if(u32 > clen) {
+      error("Decryption Error: clear text length %u exceeds cipher text length %d", u32, clen);
+      rc = -1;
+      goto abort_egress;
+   }
+   *clear_len = u32;
+
+   /* Setup decryption */
+   aes_setkey_dec(&aes_ctx, symkey, 256);
+
+   /* Do decryption now */
+   if ((clen % BLKSZ) != 0) {
+      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
+      rc = -1;
+      goto abort_egress;
+   }
+   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
+
+   goto egress;
+abort_egress:
+egress:
+   return rc;
+}
+
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   uint8_t hashkey[HASHKEYSZ];
+   uint8_t* symkey = hashkey + HASHSZ;
+
+   /* Encrypt the data */
+   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
+      goto abort_egress;
+   }
+   /* Write to disk */
+   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
+      goto abort_egress;
+   }
+   /* Get sha1 hash of data */
+   sha1(cipher, cipher_len, hashkey);
+
+   /* Send hash and key to manager */
+   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   return rc;
+}
+
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   size_t keysize;
+   uint8_t* hashkey = NULL;
+   uint8_t hash[HASHSZ];
+   uint8_t* symkey;
+
+   /* Retrieve the hash and the key from the manager */
+   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   if(keysize != HASHKEYSZ) {
+      error("Manager returned a hashkey of invalid size! expected %d, actual %d", HASHKEYSZ, keysize);
+      rc = -1;
+      goto abort_egress;
+   }
+   symkey = hashkey + HASHSZ;
+
+   /* Read from disk now */
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
+      goto abort_egress;
+   }
+
+   /* Compute the hash of the cipher text and compare */
+   sha1(cipher, cipher_len, hash);
+   if(memcmp(hash, hashkey, HASHSZ)) {
+      int i;
+      error("NVM Storage Checksum failed!");
+      printf("Expected: ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hashkey[i]);
+      }
+      printf("\n");
+      printf("Actual:   ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hash[i]);
+      }
+      printf("\n");
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Decrypt the blob */
+   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   free(hashkey);
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
new file mode 100644
index 0000000..282ce6a
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef NVM_H
+#define NVM_H
+#include <mini-os/types.h>
+#include <xen/xen.h>
+#include <tpmfront.h>
+
+#define NVMKEYSZ 32
+#define HASHSZ 20
+#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
+void shutdown_vtpmblk(void);
+
+/* Encrypts and writes data to blk device */
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
+/* Reads, Decrypts, and returns data from blk device */
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:18:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx4V-0003ZO-QN; Tue, 04 Dec 2012 18:18:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tfx4U-0003ZJ-9e
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:18:18 +0000
Received: from [85.158.143.35:34021] by server-3.bemta-4.messagelabs.com id
	C3/EC-06841-96E3EB05; Tue, 04 Dec 2012 18:18:17 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1354645095!11723903!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26973 invoked from network); 4 Dec 2012 18:18:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:18:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="216375517"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:16:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:16:38 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tfx2r-0002sA-QM;
	Tue, 04 Dec 2012 18:16:37 +0000
MIME-Version: 1.0
X-Mercurial-Node: b15d3ae525af5b46915f6820e0a8940d2ccd7382
Message-ID: <b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Tue, 4 Dec 2012 18:16:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and because future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour so it is safe in all circumstances.

This patch adds three new low-level routines:
 * trap_nop
     This is a no-op handler which irets immediately
 * nmi_crash
     This is a special NMI handler for use during a kexec crash
 * enable_nmis
     This function enables NMIs by executing an iret-to-self

As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable interrupt stack tables.
    Disabling ISTs will prevent stack corruption from future NMIs and
    MCEs. As Xen is going to crash, we are not concerned with the
    security race condition with 'sysret'; any guest which managed to
    benefit from the race condition will only be able to execute a handful
    of instructions before it will be killed anyway, and a guest is
    still unable to trigger the race condition in the first place.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it from getting
    stuck in NMI context, which would cause a hang instead of a crash.
    The non-crashing pcpus all get the nmi_crash handler, which is
    designed never to return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we re-enter midway through, the whole
    operation is attempted again, in preference to leaving it
    incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand-craft a self_nmi()
    replacement that is safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a no-op.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is touched, the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,97 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( crashing_cpu != smp_processor_id() );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions.)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
+     * queue up another NMI, to force us back into this loop if we exit.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i;
+    int cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash, which
+     * invokes do_nmi_crash() (above), causing them to save state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     *
+     * Furthermore, disable stack tables for NMIs and MCEs to prevent
+     * race conditions resulting in corrupt stack frames.  As Xen is
+     * about to die, we no longer care about the security-related race
+     * condition with 'sysret' which ISTs are designed to solve.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
+            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
+
+            if ( i == cpu )
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+            else
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -87,6 +87,17 @@ void machine_kexec(xen_kexec_image_t *im
      */
     local_irq_disable();
 
+    /* Now that regular interrupts are disabled, we need to reduce the
+     * impact of interrupts not disabled by 'cli'.  NMIs have already
+     * been dealt with by machine_crash_shutdown().
+     *
+     * Set the MCE handler on all pcpus to be a no-op. */
+    set_intr_gate(TRAP_machine_check, &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,42 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* No op trap handler.  Required for kexec path. */
+ENTRY(trap_nop)
+        iretq
+
+/* Enable NMIs.  No special register assumptions, and all preserved. */
+ENTRY(enable_nmis)
+        push %rax
+        movq %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r 1c69c938f641 -r b15d3ae525af xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:19:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx5L-0003cX-9G; Tue, 04 Dec 2012 18:19:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682edb154=Matthew.Fioravante@jhuapl.edu>)
	id 1Tfx5K-0003cM-1O
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:19:10 +0000
Received: from [85.158.143.99:54693] by server-2.bemta-4.messagelabs.com id
	82/80-28922-D9E3EB05; Tue, 04 Dec 2012 18:19:09 +0000
X-Env-Sender: prvs=0682edb154=Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354645147!21658196!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14484 invoked from network); 4 Dec 2012 18:19:08 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 18:19:08 -0000
Received: from aplexcas2.dom1.jhuapl.edu (unknown [128.244.198.91]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 0906_1344_2543d5a7_70c2_4644_9067_f7e98d486868;
	Tue, 04 Dec 2012 13:19:02 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Tue, 4 Dec 2012
	13:19:03 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 4 Dec 2012 13:19:02 -0500
Thread-Topic: [VTPM v5 6/7] Add autoconf to stubdom
Thread-Index: Ac3SR7i+outmjSOwQ3u9ThdIkD3LoQAA/9XA
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDB489AB@aplesstripe.dom1.jhuapl.edu>
References: <1354210534-31052-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354210534-31052-7-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354643369.15296.92.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354643369.15296.92.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v5 6/7] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yes all the usual crap generated by configure.

Also, config/Stubdom.mk and config/Toplevel.mk (see new patches)

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com] 
Sent: Tuesday, December 04, 2012 12:49 PM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [VTPM v5 6/7] Add autoconf to stubdom

On Thu, 2012-11-29 at 17:35 +0000, Matthew Fioravante wrote:
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  stubdom/configure              | 4370 ++++++++++++++++++++++++++++++++++++++++
>  stubdom/configure.ac           |   54 +
>  stubdom/install.sh             |    1 +

Does this (and the next patch) require stuff to be added to .hgignore and .gitignore?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:20:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:20:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx6e-0003l4-RP; Tue, 04 Dec 2012 18:20:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tfx6d-0003kf-90
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:20:31 +0000
Received: from [85.158.143.99:59200] by server-1.bemta-4.messagelabs.com id
	46/B9-27934-EEE3EB05; Tue, 04 Dec 2012 18:20:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354645227!21658335!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19899 invoked from network); 4 Dec 2012 18:20:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:20:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46574898"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:16:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:16:38 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tfx2r-0002sA-Pr;
	Tue, 04 Dec 2012 18:16:37 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Tue, 4 Dec 2012 18:16:07 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 4 RFC] Kexec path alteration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I discovered a triple fault bug in the crash kernel while working on the
wider reentrant NMI/MCE issues discussed over the past few weeks on the
list.

As this is quite a substantial change, I present it here as an RFC ahead
of the longer series for reentrant behaviour (which I am still working
on).

I present patches 2 through 4 simply as debugging aids for the bug fixed
in patch 1.  I am primarily concerned with RFC feedback on patch 1,
although comments on patches 2 and 3 will not go amiss, as they are
expected to be in the longer reentrant series.

Patch 4 is a debugkey '1' which deliberately creates reentrant NMIs and
stack corruption.  It is not intended for actually committing into the
codebase.

I am at a loss to explain why the triple fault is actually occurring; I
failed to get anything from early printk from the crash kernel, but did
not attempt to debug the extra gubbins which kexec puts in the blob.
The fault, however, is completely deterministic, and can be verified by
commenting out the call to enable_nmis() in machine_kexec().

My setup is xen-unstable, a 2.6.32-classic based dom0, and kdump kernels
with kexec-tools 2.0.2.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:20:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx6f-0003lG-7E; Tue, 04 Dec 2012 18:20:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tfx6e-0003ko-1C
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:20:32 +0000
Received: from [85.158.143.99:59235] by server-3.bemta-4.messagelabs.com id
	0C/3E-06841-FEE3EB05; Tue, 04 Dec 2012 18:20:31 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354645227!21658335!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20060 invoked from network); 4 Dec 2012 18:20:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:20:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46574899"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:16:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:16:38 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tfx2r-0002sA-Qv;
	Tue, 04 Dec 2012 18:16:37 +0000
MIME-Version: 1.0
X-Mercurial-Node: 48a60a407e15d71abecaf14e9232973b6aea6dcb
Message-ID: <48a60a407e15d71abeca.1354644969@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Tue, 4 Dec 2012 18:16:09 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 4 RFC] xen/panic: Introduce
	panic_in_progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The panic() function will re-enable NMIs.  In addition, because of its
private spinlock, panic() is not safe to re-enter on the same pcpu from
an NMI or MCE.

We introduce a panic_in_progress flag and is_panic_in_progress() helper,
to be used in subsequent patches.  (And also fix a whitespace error.)

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r b15d3ae525af -r 48a60a407e15 xen/drivers/char/console.c
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -65,6 +65,8 @@ static int __read_mostly sercon_handle =
 
 static DEFINE_SPINLOCK(console_lock);
 
+static atomic_t panic_in_progress = ATOMIC_INIT(0);
+
 /*
  * To control the amount of printing, thresholds are added.
  * These thresholds correspond to the XENLOG logging levels.
@@ -980,13 +982,22 @@ static int __init debugtrace_init(void)
  * **************************************************************
  */
 
+/* Is a panic() in progress? Used to prevent reentrancy of panic() from
+ * NMIs/MCEs, as there is potential to deadlock from those contexts. */
+long is_panic_in_progress(void)
+{
+    return atomic_read(&panic_in_progress);
+}
+
 void panic(const char *fmt, ...)
 {
     va_list args;
     unsigned long flags;
     static DEFINE_SPINLOCK(lock);
     static char buf[128];
-    
+
+    atomic_set(&panic_in_progress, 1);
+
     debugtrace_dump();
 
     /* Protects buf[] and ensure multi-line message prints atomically. */
diff -r b15d3ae525af -r 48a60a407e15 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -78,6 +78,7 @@ extern void debugtrace_printk(const char
 #define _p(_x) ((void *)(unsigned long)(_x))
 extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
+extern long is_panic_in_progress(void);
 extern void panic(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:20:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx6g-0003lU-K4; Tue, 04 Dec 2012 18:20:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tfx6e-0003ko-G9
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:20:32 +0000
Received: from [85.158.143.99:4961] by server-3.bemta-4.messagelabs.com id
	0E/3E-06841-0FE3EB05; Tue, 04 Dec 2012 18:20:32 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354645227!21658335!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20137 invoked from network); 4 Dec 2012 18:20:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:20:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46574900"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:16:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:16:38 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tfx2r-0002sA-RW;
	Tue, 04 Dec 2012 18:16:37 +0000
MIME-Version: 1.0
X-Mercurial-Node: f6ad86b61d5afd86026c7f1a8330ad29ce74ea6f
Message-ID: <f6ad86b61d5afd86026c.1354644970@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Tue, 4 Dec 2012 18:16:10 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 3 of 4 RFC] x86/nmi: Prevent reentrant execution
 of the C nmi handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The (old) function do_nmi() is not reentrant-safe.  Rename it to
_do_nmi() and present a new do_nmi() which adds reentrancy guards.

If a reentrant NMI has been detected, then it is highly likely that the
outer NMI exception frame has been corrupted, meaning we cannot return
to the original context.  In this case, we deliberately panic() rather
than falling into an infinite loop.

panic(), however, is not safe to re-enter from an NMI context, as an NMI
(or MCE) can interrupt it inside its critical section, at which point a
new call to panic() will deadlock.  As a result, we bail out early if a
panic() is already in progress, as Xen is about to die anyway.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--
I am fairly sure this is safe with the current kexec_crash functionality,
which involves holding all non-crashing pcpus in an NMI loop.  In the
case of reentrant NMIs while a panic is in progress, we will repeatedly
bail out early in an infinite loop of NMIs, which has the same intended
effect of simply keeping all non-crashing CPUs out of the way while the
main crash occurs.
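
The guard pattern described above can be sketched in plain C (a simplified,
single-CPU stand-in using GCC/Clang atomic builtins; `nmi_in_progress`,
`inner_handler` and the return-code convention are illustrative only, not the
actual Xen code, which uses a per-CPU flag and test_and_set_bool()):

```c
#include <stdbool.h>

/* Stand-in for the per-CPU nmi_in_progress flag (single CPU here). */
static bool nmi_in_progress;

/* Stand-in for the original, non-reentrant handler (_do_nmi()). */
static int inner_calls;
static void inner_handler(void) { inner_calls++; }

/*
 * Returns 0 on normal completion, -1 if reentry was detected (where
 * the real code would force the console lock and panic()).
 */
static int guarded_handler(void)
{
    /* Atomically test-and-set the guard: only one entry may proceed. */
    if (__atomic_test_and_set(&nmi_in_progress, __ATOMIC_ACQUIRE))
        return -1;              /* reentrant invocation detected */

    inner_handler();

    /* Release the guard once the non-reentrant work is done. */
    __atomic_clear(&nmi_in_progress, __ATOMIC_RELEASE);
    return 0;
}
```

A second invocation arriving while the flag is set takes the error path
instead of re-running the handler over a possibly corrupted exception frame.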

diff -r 48a60a407e15 -r f6ad86b61d5a xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -88,6 +88,7 @@ static char __read_mostly opt_nmi[10] = 
 string_param("nmi", opt_nmi);
 
 DEFINE_PER_CPU(u64, efer);
+static DEFINE_PER_CPU(bool_t, nmi_in_progress) = 0;
 
 DEFINE_PER_CPU_READ_MOSTLY(u32, ler_msr);
 
@@ -3182,7 +3183,8 @@ static int dummy_nmi_callback(struct cpu
  
 static nmi_callback_t nmi_callback = dummy_nmi_callback;
 
-void do_nmi(struct cpu_user_regs *regs)
+/* This function should never be called directly.  Use do_nmi() instead. */
+static void _do_nmi(struct cpu_user_regs *regs)
 {
     unsigned int cpu = smp_processor_id();
     unsigned char reason;
@@ -3208,6 +3210,44 @@ void do_nmi(struct cpu_user_regs *regs)
     }
 }
 
+/* This function is NOT SAFE to call from C code in general.
+ * Use with extreme care! */
+void do_nmi(struct cpu_user_regs *regs)
+{
+    bool_t * in_progress = &this_cpu(nmi_in_progress);
+
+    if ( is_panic_in_progress() )
+    {
+        /* A panic is already in progress.  It may have reenabled NMIs,
+         * or we are simply unlucky enough to receive one right now.  Either
+         * way, bail early, as Xen is about to die.
+         *
+         * TODO: Ideally we should exit without executing an iret, to
+         * leave NMIs disabled, but that option is not currently
+         * available to us.
+         */
+        return;
+    }
+
+    if ( test_and_set_bool(*in_progress) )
+    {
+        /* Crash in an obvious manner, as opposed to falling into an
+         * infinite loop because our exception frame corrupted the
+         * exception frame of the previous NMI.
+         *
+         * TODO: This check does not cover all possible cases of corrupt
+         * exception frames, but it is substantially better than
+         * nothing.
+         */
+        console_force_unlock();
+        show_execution_state(regs);
+        panic("Reentrant NMI detected\n");
+    }
+
+    _do_nmi(regs);
+    *in_progress = 0;
+}
+
 void set_nmi_callback(nmi_callback_t callback)
 {
     nmi_callback = callback;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:20:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx6i-0003ly-2X; Tue, 04 Dec 2012 18:20:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tfx6g-0003ko-VH
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:20:35 +0000
Received: from [85.158.143.99:59342] by server-3.bemta-4.messagelabs.com id
	06/4E-06841-2FE3EB05; Tue, 04 Dec 2012 18:20:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354645230!22796470!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18105 invoked from network); 4 Dec 2012 18:20:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:20:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="46574901"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:16:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:16:38 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tfx2r-0002sA-S6;
	Tue, 04 Dec 2012 18:16:37 +0000
MIME-Version: 1.0
X-Mercurial-Node: 303d94fa720c9cda22f0b9874a897e12d3e86ae8
Message-ID: <303d94fa720c9cda22f0.1354644971@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Tue, 4 Dec 2012 18:16:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 4 of 4 RFC] xen/nmi: DO NOT APPLY: debug key to
 deliberately invoke a reentrant NMI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

FOR TESTING PURPOSES ONLY.  Use debug key '1'

diff -r f6ad86b61d5a -r 303d94fa720c xen/arch/x86/nmi.c
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -39,6 +39,7 @@ static unsigned int nmi_perfctr_msr;	/* 
 static unsigned int nmi_p4_cccr_val;
 static DEFINE_PER_CPU(struct timer, nmi_timer);
 static DEFINE_PER_CPU(unsigned int, nmi_timer_ticks);
+static DEFINE_PER_CPU(unsigned int, reent_state);
 
 /* opt_watchdog: If true, run a watchdog NMI on each processor. */
 bool_t __initdata opt_watchdog = 0;
@@ -532,10 +533,106 @@ static struct keyhandler nmi_stats_keyha
     .desc = "NMI statistics"
 };
 
+void do_nmi_reent_fault(struct cpu_user_regs * regs)
+{
+    printk("In fault handler from NMI - iret from this is bad\n");
+    mdelay(10);
+}
+
+void do_nmi_reentrant_handler(void)
+{
+    volatile unsigned int * state_ptr = &(this_cpu(reent_state));
+    unsigned int state;
+
+    state = *state_ptr;
+
+    switch ( state )
+    {
+    case 0: // Not a reentrant NMI request
+        return;
+
+    case 1: // Outer NMI call - let's try to set up a reentrant case
+        printk("In outer reentrant case\n");
+
+        // Signal inner state
+        *state_ptr = state = 2;
+        // Queue up another NMI which can't currently be delivered
+        self_nmi();
+        /* printk("Queued up supplementary NMI\n"); */
+        /* mdelay(10); */
+        /* // iret from exception handler should re-enable NMIs */
+        run_in_exception_handler(do_nmi_reent_fault);
+        /* mdelay(10); */
+
+        return;
+
+    case 2:
+        printk("In inner reentrant case - stuff will probably break quickly\n");
+        *state_ptr = state = 0;
+    }
+}
+
+static void nmi_reentrant(unsigned char key)
+{
+    volatile unsigned int * state_ptr = &(this_cpu(reent_state));
+    unsigned int state;
+    s_time_t deadline;
+
+    printk("*** Trying to force reentrant NMI ***\n");
+
+    if ( *state_ptr != 0 )
+    {
+        printk("State not ready - waiting for up to 2 seconds\n");
+        deadline = (NOW() + SECONDS(2));
+        do
+        {
+            cpu_relax();
+            state = *state_ptr;
+        } while ( NOW() < deadline && state != 0 );
+
+        if ( state != 0 )
+        {
+            printk("Forcibly resetting state\n");
+            *state_ptr = 0;
+        }
+        else
+            printk("Waiting seemed to work\n");
+    }
+
+    // Set up trap for NMI handler
+    *state_ptr = 1;
+    deadline = (NOW() + SECONDS(1));
+    printk("Set up trigger for NMI handler and waiting for magic\n");
+    self_nmi();
+
+    do
+    {
+        cpu_relax();
+        state = *state_ptr;
+    } while ( NOW() < deadline && state != 0 );
+
+    if ( state != 0 )
+    {
+        printk("Forcibly resetting state\n");
+        *state_ptr = 0;
+    }
+    else
+        printk("Apparently AOK\n");
+
+    printk("*** Done reentrant test ***\n");
+}
+
+static struct keyhandler nmi_reentrant_keyhandler = {
+    .diagnostic = 0,
+    .u.fn = nmi_reentrant,
+    .desc = "Reentrant NMI test"
+};
+
 static __init int register_nmi_trigger(void)
 {
     register_keyhandler('N', &nmi_trigger_keyhandler);
     register_keyhandler('n', &nmi_stats_keyhandler);
+    register_keyhandler('1', &nmi_reentrant_keyhandler);
     return 0;
 }
 __initcall(register_nmi_trigger);
diff -r f6ad86b61d5a -r 303d94fa720c xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3183,6 +3183,7 @@ static int dummy_nmi_callback(struct cpu
  
 static nmi_callback_t nmi_callback = dummy_nmi_callback;
 
+extern void do_nmi_reentrant_handler(void);
 /* This function should never be called directly.  Use do_nmi() instead. */
 static void _do_nmi(struct cpu_user_regs *regs)
 {
@@ -3191,6 +3192,8 @@ static void _do_nmi(struct cpu_user_regs
 
     ++nmi_count(cpu);
 
+    do_nmi_reentrant_handler();
+
     if ( nmi_callback(regs, cpu) )
         return;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7E-0003zo-HX; Tue, 04 Dec 2012 18:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7B-0003y0-SA
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25615] by server-13.bemta-5.messagelabs.com id
	E3/D1-27809-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21985 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155747"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:03 +0000
MIME-Version: 1.0
X-Mercurial-Node: 161c88709114c0e8210175ec06395bc4db3cb02e
Message-ID: <161c88709114c0e82101.1354645180@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:40 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 2 of 9 RFC v2] blktap3/libblktapctl: Introduce
 tapdisk message types and structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces function prototypes and structures that are implemented
by libblktapctl (the library that allows libxl, tap-ctl, and the tapback daemon
to manage a running tapdisk process). This file is based on the existing
blktap2 file, with some changes taken from the blktap2 code living on GitHub
(the STATS message and support for mirroring).

tapdisk_message_name is now neater: it uses a lookup table instead of a big
switch statement.

blktap3 introduces the following messages:
	- DISK_INFO: used by xenio to communicate to blkfront via XenStore the
	  number of sectors and the sector size so that it can create the virtual
	  block device.
	- XENBLKIF_CONNECT/DISCONNECT: used by the xenio daemon to instruct a
	  running tapdisk process to connect to or disconnect from the ring. The
	  tapdisk_message_blkif structure is used to convey such messages.

The ATTACH message has been removed since it is now superseded by
XENBLKIF_CONNECT (this probably means that DETACH must be removed as well in
favour of XENBLKIF_DISCONNECT). However, it would probably be nicer to keep the
ATTACH/DETACH identifiers in order to minimize interface changes.
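
The lookup-table rewrite of tapdisk_message_name mentioned above can be
sketched as follows (a simplified stand-in with made-up IDs, not the actual
blktap3 header; C99 designated initializers leave any gaps NULL, so a
bounds-plus-NULL check keeps unknown IDs from reading past the table):

```c
#include <stddef.h>

/* Simplified stand-in for enum tapdisk_message_id (IDs start at 1). */
enum msg_id {
    MSG_ERROR = 1,
    MSG_PID,
    MSG_OPEN,
    MSG_MAX = MSG_OPEN,
};

/* Designated initializers: slot 0 and any unnamed IDs stay NULL. */
static const char *const msg_names[MSG_MAX + 1] = {
    [MSG_ERROR] = "error",
    [MSG_PID]   = "pid",
    [MSG_OPEN]  = "open",
};

static const char *msg_name(enum msg_id id)
{
    if ((unsigned int)id > MSG_MAX || msg_names[id] == NULL)
        return "unknown";
    return msg_names[id];
}
```

Compared with a big switch, adding a message type is a one-line change to the
table, and out-of-range or unregistered IDs all fall through to "unknown".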

diff --git a/tools/blktap2/include/tapdisk-message.h b/tools/blktap3/include/tapdisk-message.h
copy from tools/blktap2/include/tapdisk-message.h
copy to tools/blktap3/include/tapdisk-message.h
--- a/tools/blktap2/include/tapdisk-message.h
+++ b/tools/blktap3/include/tapdisk-message.h
@@ -36,29 +36,35 @@
 #define TAPDISK_MESSAGE_MAX_MINORS \
 	((TAPDISK_MESSAGE_MAX_PATH_LENGTH / sizeof(int)) - 1)
 
-#define TAPDISK_MESSAGE_FLAG_SHARED      0x01
-#define TAPDISK_MESSAGE_FLAG_RDONLY      0x02
-#define TAPDISK_MESSAGE_FLAG_ADD_CACHE   0x04
-#define TAPDISK_MESSAGE_FLAG_VHD_INDEX   0x08
-#define TAPDISK_MESSAGE_FLAG_LOG_DIRTY   0x10
+#define TAPDISK_MESSAGE_FLAG_SHARED      0x001
+#define TAPDISK_MESSAGE_FLAG_RDONLY      0x002
+#define TAPDISK_MESSAGE_FLAG_ADD_CACHE   0x004
+#define TAPDISK_MESSAGE_FLAG_VHD_INDEX   0x008
+#define TAPDISK_MESSAGE_FLAG_LOG_DIRTY   0x010
+#define TAPDISK_MESSAGE_FLAG_ADD_LCACHE  0x020
+#define TAPDISK_MESSAGE_FLAG_REUSE_PRT   0x040
+#define TAPDISK_MESSAGE_FLAG_SECONDARY   0x080
+#define TAPDISK_MESSAGE_FLAG_STANDBY     0x100
 
 typedef struct tapdisk_message           tapdisk_message_t;
-typedef uint8_t                          tapdisk_message_flag_t;
+typedef uint32_t                         tapdisk_message_flag_t;
 typedef struct tapdisk_message_image     tapdisk_message_image_t;
 typedef struct tapdisk_message_params    tapdisk_message_params_t;
 typedef struct tapdisk_message_string    tapdisk_message_string_t;
 typedef struct tapdisk_message_response  tapdisk_message_response_t;
 typedef struct tapdisk_message_minors    tapdisk_message_minors_t;
 typedef struct tapdisk_message_list      tapdisk_message_list_t;
+typedef struct tapdisk_message_stat      tapdisk_message_stat_t;
+typedef struct tapdisk_message_blkif     tapdisk_message_blkif_t;
 
 struct tapdisk_message_params {
 	tapdisk_message_flag_t           flags;
 
-	uint8_t                          storage;
 	uint32_t                         devnum;
 	uint32_t                         domid;
-	uint16_t                         path_len;
 	char                             path[TAPDISK_MESSAGE_MAX_PATH_LENGTH];
+	uint32_t                         prt_devnum;
+	char                             secondary[TAPDISK_MESSAGE_MAX_PATH_LENGTH];
 };
 
 struct tapdisk_message_image {
@@ -88,6 +94,55 @@ struct tapdisk_message_list {
 	char                             path[TAPDISK_MESSAGE_MAX_PATH_LENGTH];
 };
 
+struct tapdisk_message_stat {
+	uint16_t type;
+	uint16_t cookie;
+	size_t length;
+};
+
+/**
+ * Tapdisk message containing all the necessary information required for the
+ * tapdisk to connect to a guest's blkfront.
+ */
+struct tapdisk_message_blkif {
+	/**
+	 * The domain ID of the guest to connect to.
+	 */
+	uint32_t domid;
+
+	/**
+	 * The device ID of the virtual block device.
+	 */
+	uint32_t devid;
+
+	/**
+	 * Grant references for the shared ring.
+	 * TODO Why 8 specifically?
+	 */
+	uint32_t gref[8];
+
+	/**
+	 * Number of pages in the ring, expressed as a page order.
+	 */
+	uint32_t order;
+
+	/**
+	 * Protocol to use: native, 32 bit, or 64 bit.
+	 * TODO Is this used for supporting a 32 bit guest on a 64 bit hypervisor?
+	 */
+	uint32_t proto;
+
+	/**
+	 * TODO Page pool? Can be NULL.
+	 */
+	char pool[TAPDISK_MESSAGE_STRING_LENGTH];
+
+	/**
+	 * The event channel port.
+	 */
+	uint32_t port;
+};
+
 struct tapdisk_message {
 	uint16_t                         type;
 	uint16_t                         cookie;
@@ -100,10 +155,15 @@ struct tapdisk_message {
 		tapdisk_message_minors_t minors;
 		tapdisk_message_response_t response;
 		tapdisk_message_list_t   list;
+		tapdisk_message_stat_t   info;
+		tapdisk_message_blkif_t  blkif;
 	} u;
 };
 
 enum tapdisk_message_id {
+	/*
+	 * TODO Why start from 1 and not from 0?
+	 */
 	TAPDISK_MESSAGE_ERROR = 1,
 	TAPDISK_MESSAGE_RUNTIME_ERROR,
 	TAPDISK_MESSAGE_PID,
@@ -120,84 +180,70 @@ enum tapdisk_message_id {
 	TAPDISK_MESSAGE_CLOSE_RSP,
 	TAPDISK_MESSAGE_DETACH,
 	TAPDISK_MESSAGE_DETACH_RSP,
-	TAPDISK_MESSAGE_LIST_MINORS,
-	TAPDISK_MESSAGE_LIST_MINORS_RSP,
+	TAPDISK_MESSAGE_LIST_MINORS,		/* TODO still valid? */
+	TAPDISK_MESSAGE_LIST_MINORS_RSP,	/* TODO still valid? */
 	TAPDISK_MESSAGE_LIST,
 	TAPDISK_MESSAGE_LIST_RSP,
+	TAPDISK_MESSAGE_STATS,
+	TAPDISK_MESSAGE_STATS_RSP,
 	TAPDISK_MESSAGE_FORCE_SHUTDOWN,
+	TAPDISK_MESSAGE_DISK_INFO,
+	TAPDISK_MESSAGE_DISK_INFO_RSP,
+	TAPDISK_MESSAGE_XENBLKIF_CONNECT,
+	TAPDISK_MESSAGE_XENBLKIF_CONNECT_RSP,
+	TAPDISK_MESSAGE_XENBLKIF_DISCONNECT,
+	TAPDISK_MESSAGE_XENBLKIF_DISCONNECT_RSP,
 	TAPDISK_MESSAGE_EXIT,
 };
 
-static inline char *
-tapdisk_message_name(enum tapdisk_message_id id)
-{
-	switch (id) {
-	case TAPDISK_MESSAGE_ERROR:
-		return "error";
+#define TAPDISK_MESSAGE_MAX TAPDISK_MESSAGE_EXIT
 
-	case TAPDISK_MESSAGE_PID:
-		return "pid";
+/**
+ * Retrieves a message's human-readable representation.
+ *
+ * @param id the message ID to translate
+ * @return the name of the message 
+ */
+static inline char const *
+tapdisk_message_name(const enum tapdisk_message_id id) {
+	static char const *msg_names[(TAPDISK_MESSAGE_MAX + 1)] = {
+		[TAPDISK_MESSAGE_ERROR] = "error",
+		[TAPDISK_MESSAGE_RUNTIME_ERROR] = "runtime error",
+		[TAPDISK_MESSAGE_PID] = "pid",
+		[TAPDISK_MESSAGE_PID_RSP] = "pid response",
+		[TAPDISK_MESSAGE_OPEN] = "open",
+		[TAPDISK_MESSAGE_OPEN_RSP] = "open response",
+		[TAPDISK_MESSAGE_PAUSE] = "pause",
+		[TAPDISK_MESSAGE_PAUSE_RSP] = "pause response",
+		[TAPDISK_MESSAGE_RESUME] = "resume",
+		[TAPDISK_MESSAGE_RESUME_RSP] = "resume response",
+		[TAPDISK_MESSAGE_CLOSE] = "close",
+		[TAPDISK_MESSAGE_FORCE_SHUTDOWN] = "force shutdown",
+		[TAPDISK_MESSAGE_CLOSE_RSP] = "close response",
+		[TAPDISK_MESSAGE_ATTACH] = "attach",
+		[TAPDISK_MESSAGE_ATTACH_RSP] = "attach response",
+		[TAPDISK_MESSAGE_DETACH] = "detach",
+		[TAPDISK_MESSAGE_DETACH_RSP] = "detach response",
+		[TAPDISK_MESSAGE_LIST_MINORS] = "list minors",
+		[TAPDISK_MESSAGE_LIST_MINORS_RSP] = "list minors response",
+		[TAPDISK_MESSAGE_LIST] = "list",
+		[TAPDISK_MESSAGE_LIST_RSP] = "list response",
+		[TAPDISK_MESSAGE_STATS] = "stats",
+		[TAPDISK_MESSAGE_STATS_RSP] = "stats response",
+		[TAPDISK_MESSAGE_DISK_INFO] = "disk info",
+		[TAPDISK_MESSAGE_DISK_INFO_RSP] = "disk info response",
+		[TAPDISK_MESSAGE_XENBLKIF_CONNECT] = "blkif connect",
+		[TAPDISK_MESSAGE_XENBLKIF_CONNECT_RSP] = "blkif connect response",
+		[TAPDISK_MESSAGE_XENBLKIF_DISCONNECT] = "blkif disconnect",
+		[TAPDISK_MESSAGE_XENBLKIF_DISCONNECT_RSP]
+			= "blkif disconnect response",
+		[TAPDISK_MESSAGE_EXIT] = "exit"
+	};
 
-	case TAPDISK_MESSAGE_PID_RSP:
-		return "pid response";
-
-	case TAPDISK_MESSAGE_OPEN:
-		return "open";
-
-	case TAPDISK_MESSAGE_OPEN_RSP:
-		return "open response";
-
-	case TAPDISK_MESSAGE_PAUSE:
-		return "pause";
-
-	case TAPDISK_MESSAGE_PAUSE_RSP:
-		return "pause response";
-
-	case TAPDISK_MESSAGE_RESUME:
-		return "resume";
-
-	case TAPDISK_MESSAGE_RESUME_RSP:
-		return "resume response";
-
-	case TAPDISK_MESSAGE_CLOSE:
-		return "close";
-
-	case TAPDISK_MESSAGE_FORCE_SHUTDOWN:
-		return "force shutdown";
-
-	case TAPDISK_MESSAGE_CLOSE_RSP:
-		return "close response";
-
-	case TAPDISK_MESSAGE_ATTACH:
-		return "attach";
-
-	case TAPDISK_MESSAGE_ATTACH_RSP:
-		return "attach response";
-
-	case TAPDISK_MESSAGE_DETACH:
-		return "detach";
-
-	case TAPDISK_MESSAGE_DETACH_RSP:
-		return "detach response";
-
-	case TAPDISK_MESSAGE_LIST_MINORS:
-		return "list minors";
-
-	case TAPDISK_MESSAGE_LIST_MINORS_RSP:
-		return "list minors response";
-
-	case TAPDISK_MESSAGE_LIST:
-		return "list";
-
-	case TAPDISK_MESSAGE_LIST_RSP:
-		return "list response";
-
-	case TAPDISK_MESSAGE_EXIT:
-		return "exit";
-
-	default:
+	if (id < 1 || id > TAPDISK_MESSAGE_MAX) {
 		return "unknown";
 	}
+	return msg_names[id];
 }
 
-#endif
+#endif /* _TAPDISK_MESSAGE_H_ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7E-0003zo-HX; Tue, 04 Dec 2012 18:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7B-0003y0-SA
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25615] by server-13.bemta-5.messagelabs.com id
	E3/D1-27809-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21985 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155747"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:03 +0000
MIME-Version: 1.0
X-Mercurial-Node: 161c88709114c0e8210175ec06395bc4db3cb02e
Message-ID: <161c88709114c0e82101.1354645180@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:40 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 2 of 9 RFC v2] blktap3/libblktapctl: Introduce
 tapdisk message types and structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces function prototypes and structures that are implemented
by libblktapctl (the library that allows libxl, tap-ctl, and the tapback daemon
to manage a running tapdisk process). This file is based on the existing
blktap2 file, with some changes taken from the blktap2 tree on GitHub (the STATS
message, support for mirroring).

tapdisk_message_name is now neater: it uses a lookup table instead of a big
switch statement.
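The lookup-table rewrite can be illustrated in isolation. This is a minimal standalone sketch of the same pattern — the enum, names, and msg_name() below are a toy subset, not the actual tapdisk identifiers:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy subset illustrating a designated-initializer lookup table
 * replacing a switch, as the patch does for tapdisk_message_name(). */
enum msg_id { MSG_ERROR = 1, MSG_PID, MSG_PID_RSP, MSG_MAX = MSG_PID_RSP };

static const char *msg_name(enum msg_id id)
{
	static const char *names[MSG_MAX + 1] = {
		[MSG_ERROR]   = "error",
		[MSG_PID]     = "pid",
		[MSG_PID_RSP] = "pid response",
	};

	/* IDs start at 1, so index 0, out-of-range IDs, and any enum
	 * gap (a NULL slot) all map to "unknown". */
	if (id < 1 || id > MSG_MAX || names[id] == NULL)
		return "unknown";
	return names[id];
}
```

Since designated initializers zero-fill unnamed slots, the NULL check keeps the function safe even if an ID is later added to the enum but not to the table.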

blktap3 introduces the following messages:
	- DISK_INFO: used by xenio to communicate the number of sectors and the
	  sector size to blkfront via XenStore, so that blkfront can create the
	  virtual block device.
	- XENBLKIF_CONNECT/DISCONNECT: used by the xenio daemon to instruct a
	  running tapdisk process to connect to or disconnect from the ring. The
	  tapdisk_message_blkif structure is used to convey such messages.

The ATTACH message has been removed since it is now superseded by
XENBLKIF_CONNECT (this probably means that DETACH must be removed as well in
favour of XENBLKIF_DISCONNECT). However, it would probably be nicer to keep the
ATTACH/DETACH identifiers in order to minimize interface changes.
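As an illustration of how a connect request might be assembled from the blkif fields in this patch, here is a hedged sketch with abridged stand-in definitions — the member names follow the patch, but the TAPDISK_MESSAGE_STRING_LENGTH value, the message-type constant, and the fill_connect() helper are hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Abridged stand-ins for tapdisk-message.h (illustration only; the
 * real header defines more members, and these constants are assumed). */
#define TAPDISK_MESSAGE_STRING_LENGTH 256	/* assumed value */
#define TAPDISK_MESSAGE_XENBLKIF_CONNECT 23	/* placeholder value */

struct tapdisk_message_blkif {
	uint32_t domid;		/* guest domain to connect to */
	uint32_t devid;		/* virtual block device ID */
	uint32_t gref[8];	/* grant references for the shared ring */
	uint32_t order;		/* ring size as a page order */
	uint32_t proto;		/* native / 32-bit / 64-bit ABI */
	char     pool[TAPDISK_MESSAGE_STRING_LENGTH];
	uint32_t port;		/* event channel port */
};

struct tapdisk_message {
	uint16_t type;
	uint16_t cookie;
	union {
		struct tapdisk_message_blkif blkif;
	} u;
};

/* One grant reference per ring page: a ring of order N needs 2^N
 * entries, so the caller must keep order <= 3 (gref[] has 8 slots). */
static void fill_connect(struct tapdisk_message *m, uint32_t domid,
			 uint32_t devid, const uint32_t *grefs,
			 uint32_t order, uint32_t port)
{
	uint32_t i, pages = 1u << order;

	memset(m, 0, sizeof(*m));
	m->type = TAPDISK_MESSAGE_XENBLKIF_CONNECT;
	m->u.blkif.domid = domid;
	m->u.blkif.devid = devid;
	m->u.blkif.order = order;
	m->u.blkif.port  = port;
	for (i = 0; i < pages; i++)
		m->u.blkif.gref[i] = grefs[i];
}
```

The fixed gref[8] array suggests a maximum ring order of 3, which may answer the "Why 8 specifically?" TODO in the header.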

diff --git a/tools/blktap2/include/tapdisk-message.h b/tools/blktap3/include/tapdisk-message.h
copy from tools/blktap2/include/tapdisk-message.h
copy to tools/blktap3/include/tapdisk-message.h
--- a/tools/blktap2/include/tapdisk-message.h
+++ b/tools/blktap3/include/tapdisk-message.h
@@ -36,29 +36,35 @@
 #define TAPDISK_MESSAGE_MAX_MINORS \
 	((TAPDISK_MESSAGE_MAX_PATH_LENGTH / sizeof(int)) - 1)
 
-#define TAPDISK_MESSAGE_FLAG_SHARED      0x01
-#define TAPDISK_MESSAGE_FLAG_RDONLY      0x02
-#define TAPDISK_MESSAGE_FLAG_ADD_CACHE   0x04
-#define TAPDISK_MESSAGE_FLAG_VHD_INDEX   0x08
-#define TAPDISK_MESSAGE_FLAG_LOG_DIRTY   0x10
+#define TAPDISK_MESSAGE_FLAG_SHARED      0x001
+#define TAPDISK_MESSAGE_FLAG_RDONLY      0x002
+#define TAPDISK_MESSAGE_FLAG_ADD_CACHE   0x004
+#define TAPDISK_MESSAGE_FLAG_VHD_INDEX   0x008
+#define TAPDISK_MESSAGE_FLAG_LOG_DIRTY   0x010
+#define TAPDISK_MESSAGE_FLAG_ADD_LCACHE  0x020
+#define TAPDISK_MESSAGE_FLAG_REUSE_PRT   0x040
+#define TAPDISK_MESSAGE_FLAG_SECONDARY   0x080
+#define TAPDISK_MESSAGE_FLAG_STANDBY     0x100
 
 typedef struct tapdisk_message           tapdisk_message_t;
-typedef uint8_t                          tapdisk_message_flag_t;
+typedef uint32_t                         tapdisk_message_flag_t;
 typedef struct tapdisk_message_image     tapdisk_message_image_t;
 typedef struct tapdisk_message_params    tapdisk_message_params_t;
 typedef struct tapdisk_message_string    tapdisk_message_string_t;
 typedef struct tapdisk_message_response  tapdisk_message_response_t;
 typedef struct tapdisk_message_minors    tapdisk_message_minors_t;
 typedef struct tapdisk_message_list      tapdisk_message_list_t;
+typedef struct tapdisk_message_stat      tapdisk_message_stat_t;
+typedef struct tapdisk_message_blkif     tapdisk_message_blkif_t;
 
 struct tapdisk_message_params {
 	tapdisk_message_flag_t           flags;
 
-	uint8_t                          storage;
 	uint32_t                         devnum;
 	uint32_t                         domid;
-	uint16_t                         path_len;
 	char                             path[TAPDISK_MESSAGE_MAX_PATH_LENGTH];
+	uint32_t                         prt_devnum;
+	char                             secondary[TAPDISK_MESSAGE_MAX_PATH_LENGTH];
 };
 
 struct tapdisk_message_image {
@@ -88,6 +94,55 @@ struct tapdisk_message_list {
 	char                             path[TAPDISK_MESSAGE_MAX_PATH_LENGTH];
 };
 
+struct tapdisk_message_stat {
+	uint16_t type;
+	uint16_t cookie;
+	size_t length;
+};
+
+/**
+ * Tapdisk message containing all the necessary information required for the
+ * tapdisk to connect to a guest's blkfront.
+ */
+struct tapdisk_message_blkif {
+	/**
+	 * The domain ID of the guest to connect to.
+	 */
+	uint32_t domid;
+
+	/**
+	 * The device ID of the virtual block device.
+	 */
+	uint32_t devid;
+
+	/**
+	 * Grant references for the shared ring.
+	 * TODO Why 8 specifically?
+	 */
+	uint32_t gref[8];
+
+	/**
+	 * Number of pages in the ring, expressed as a page order.
+	 */
+	uint32_t order;
+
+	/**
+	 * Protocol to use: native, 32 bit, or 64 bit.
+	 * TODO Is this used for supporting a 32 bit guest on a 64 bit hypervisor?
+	 */
+	uint32_t proto;
+
+	/**
+	 * TODO Page pool? Can be NULL.
+	 */
+	char pool[TAPDISK_MESSAGE_STRING_LENGTH];
+
+	/**
+	 * The event channel port.
+	 */
+	uint32_t port;
+};
+
 struct tapdisk_message {
 	uint16_t                         type;
 	uint16_t                         cookie;
@@ -100,10 +155,15 @@ struct tapdisk_message {
 		tapdisk_message_minors_t minors;
 		tapdisk_message_response_t response;
 		tapdisk_message_list_t   list;
+		tapdisk_message_stat_t   info;
+		tapdisk_message_blkif_t  blkif;
 	} u;
 };
 
 enum tapdisk_message_id {
+	/*
+	 * TODO Why start from 1 and not from 0?
+	 */
 	TAPDISK_MESSAGE_ERROR = 1,
 	TAPDISK_MESSAGE_RUNTIME_ERROR,
 	TAPDISK_MESSAGE_PID,
@@ -120,84 +180,70 @@ enum tapdisk_message_id {
 	TAPDISK_MESSAGE_CLOSE_RSP,
 	TAPDISK_MESSAGE_DETACH,
 	TAPDISK_MESSAGE_DETACH_RSP,
-	TAPDISK_MESSAGE_LIST_MINORS,
-	TAPDISK_MESSAGE_LIST_MINORS_RSP,
+	TAPDISK_MESSAGE_LIST_MINORS,		/* TODO still valid? */
+	TAPDISK_MESSAGE_LIST_MINORS_RSP,	/* TODO still valid? */
 	TAPDISK_MESSAGE_LIST,
 	TAPDISK_MESSAGE_LIST_RSP,
+	TAPDISK_MESSAGE_STATS,
+	TAPDISK_MESSAGE_STATS_RSP,
 	TAPDISK_MESSAGE_FORCE_SHUTDOWN,
+	TAPDISK_MESSAGE_DISK_INFO,
+	TAPDISK_MESSAGE_DISK_INFO_RSP,
+	TAPDISK_MESSAGE_XENBLKIF_CONNECT,
+	TAPDISK_MESSAGE_XENBLKIF_CONNECT_RSP,
+	TAPDISK_MESSAGE_XENBLKIF_DISCONNECT,
+	TAPDISK_MESSAGE_XENBLKIF_DISCONNECT_RSP,
 	TAPDISK_MESSAGE_EXIT,
 };
 
-static inline char *
-tapdisk_message_name(enum tapdisk_message_id id)
-{
-	switch (id) {
-	case TAPDISK_MESSAGE_ERROR:
-		return "error";
+#define TAPDISK_MESSAGE_MAX TAPDISK_MESSAGE_EXIT
 
-	case TAPDISK_MESSAGE_PID:
-		return "pid";
+/**
+ * Retrieves a message's human-readable representation.
+ *
+ * @param id the message ID to translate
+ * @return the name of the message 
+ */
+static inline char const *
+tapdisk_message_name(const enum tapdisk_message_id id) {
+	static char const *msg_names[(TAPDISK_MESSAGE_MAX + 1)] = {
+		[TAPDISK_MESSAGE_ERROR] = "error",
+		[TAPDISK_MESSAGE_RUNTIME_ERROR] = "runtime error",
+		[TAPDISK_MESSAGE_PID] = "pid",
+		[TAPDISK_MESSAGE_PID_RSP] = "pid response",
+		[TAPDISK_MESSAGE_OPEN] = "open",
+		[TAPDISK_MESSAGE_OPEN_RSP] = "open response",
+		[TAPDISK_MESSAGE_PAUSE] = "pause",
+		[TAPDISK_MESSAGE_PAUSE_RSP] = "pause response",
+		[TAPDISK_MESSAGE_RESUME] = "resume",
+		[TAPDISK_MESSAGE_RESUME_RSP] = "resume response",
+		[TAPDISK_MESSAGE_CLOSE] = "close",
+		[TAPDISK_MESSAGE_FORCE_SHUTDOWN] = "force shutdown",
+		[TAPDISK_MESSAGE_CLOSE_RSP] = "close response",
+		[TAPDISK_MESSAGE_ATTACH] = "attach",
+		[TAPDISK_MESSAGE_ATTACH_RSP] = "attach response",
+		[TAPDISK_MESSAGE_DETACH] = "detach",
+		[TAPDISK_MESSAGE_DETACH_RSP] = "detach response",
+		[TAPDISK_MESSAGE_LIST_MINORS] = "list minors",
+		[TAPDISK_MESSAGE_LIST_MINORS_RSP] = "list minors response",
+		[TAPDISK_MESSAGE_LIST] = "list",
+		[TAPDISK_MESSAGE_LIST_RSP] = "list response",
+		[TAPDISK_MESSAGE_STATS] = "stats",
+		[TAPDISK_MESSAGE_STATS_RSP] = "stats response",
+		[TAPDISK_MESSAGE_DISK_INFO] = "disk info",
+		[TAPDISK_MESSAGE_DISK_INFO_RSP] = "disk info response",
+		[TAPDISK_MESSAGE_XENBLKIF_CONNECT] = "blkif connect",
+		[TAPDISK_MESSAGE_XENBLKIF_CONNECT_RSP] = "blkif connect response",
+		[TAPDISK_MESSAGE_XENBLKIF_DISCONNECT] = "blkif disconnect",
+		[TAPDISK_MESSAGE_XENBLKIF_DISCONNECT_RSP]
+			= "blkif disconnect response",
+		[TAPDISK_MESSAGE_EXIT] = "exit"
+	};
 
-	case TAPDISK_MESSAGE_PID_RSP:
-		return "pid response";
-
-	case TAPDISK_MESSAGE_OPEN:
-		return "open";
-
-	case TAPDISK_MESSAGE_OPEN_RSP:
-		return "open response";
-
-	case TAPDISK_MESSAGE_PAUSE:
-		return "pause";
-
-	case TAPDISK_MESSAGE_PAUSE_RSP:
-		return "pause response";
-
-	case TAPDISK_MESSAGE_RESUME:
-		return "resume";
-
-	case TAPDISK_MESSAGE_RESUME_RSP:
-		return "resume response";
-
-	case TAPDISK_MESSAGE_CLOSE:
-		return "close";
-
-	case TAPDISK_MESSAGE_FORCE_SHUTDOWN:
-		return "force shutdown";
-
-	case TAPDISK_MESSAGE_CLOSE_RSP:
-		return "close response";
-
-	case TAPDISK_MESSAGE_ATTACH:
-		return "attach";
-
-	case TAPDISK_MESSAGE_ATTACH_RSP:
-		return "attach response";
-
-	case TAPDISK_MESSAGE_DETACH:
-		return "detach";
-
-	case TAPDISK_MESSAGE_DETACH_RSP:
-		return "detach response";
-
-	case TAPDISK_MESSAGE_LIST_MINORS:
-		return "list minors";
-
-	case TAPDISK_MESSAGE_LIST_MINORS_RSP:
-		return "list minors response";
-
-	case TAPDISK_MESSAGE_LIST:
-		return "list";
-
-	case TAPDISK_MESSAGE_LIST_RSP:
-		return "list response";
-
-	case TAPDISK_MESSAGE_EXIT:
-		return "exit";
-
-	default:
+	if (id < 1 || id > TAPDISK_MESSAGE_MAX) {
 		return "unknown";
 	}
+	return msg_names[id];
 }
 
-#endif
+#endif /* _TAPDISK_MESSAGE_H_ */


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7G-00041o-Rh; Tue, 04 Dec 2012 18:21:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7D-0003xs-8G
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:07 +0000
Received: from [85.158.139.211:51129] by server-8.bemta-5.messagelabs.com id
	D7/CB-06050-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354645265!19057626!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8056 invoked from network); 4 Dec 2012 18:21:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155755"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:05 +0000
MIME-Version: 1.0
X-Mercurial-Node: 147ac3eaca336a9c52b065740edf35861914af90
Message-ID: <147ac3eaca336a9c52b0.1354645186@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:46 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 8 of 9 RFC v2] blktap3/libblktapctl: Introduce
 tapdisk spawn functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch imports control/tap-ctl-spawn.c from the existing blktap2
implementation, with most changes taken from the blktap2 tree on GitHub.
The function tap_ctl_spawn spawns a new tapdisk process to serve a new
virtual block device.
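The binary-selection order in the patched __tap_ctl_spawn (environment variables first, then fixed install paths) can be sketched as a small helper. pick_tapdisk() and the fallback path literal are hypothetical; the real code exec()s TAPDISK_EXECDIR and then TAPDISK_BUILDDIR on ENOENT rather than returning a string:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the lookup order: $TAPDISK first, then $TAPDISK2 (both
 * resolved via PATH by execlp() in the real code), else a fixed
 * install location (placeholder for TAPDISK_EXECDIR). */
static const char *pick_tapdisk(void)
{
	const char *tapdisk = getenv("TAPDISK");

	if (!tapdisk)
		tapdisk = getenv("TAPDISK2");
	if (tapdisk)
		return tapdisk;
	return "/usr/sbin/tapdisk";	/* placeholder path */
}
```

Falling back to a fixed path instead of a bare execlp() on a default name means a missing binary now reports errno (via exit(errno)) rather than a generic failure.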

diff --git a/tools/blktap2/control/tap-ctl-spawn.c b/tools/blktap3/control/tap-ctl-spawn.c
copy from tools/blktap2/control/tap-ctl-spawn.c
copy to tools/blktap3/control/tap-ctl-spawn.c
--- a/tools/blktap2/control/tap-ctl-spawn.c
+++ b/tools/blktap3/control/tap-ctl-spawn.c
@@ -31,15 +31,17 @@
 #include <unistd.h>
 #include <stdlib.h>
 #include <string.h>
+#include <signal.h>
 #include <sys/wait.h>
 
 #include "tap-ctl.h"
-#include "blktap2.h"
+#include "blktap3.h"
+#include "_paths.h"
 
 static pid_t
 __tap_ctl_spawn(int *readfd)
 {
-	int err, child, channel[2];
+	int child, channel[2];
 	char *tapdisk;
 
 	if (pipe(channel)) {
@@ -71,14 +73,21 @@ static pid_t
 	close(channel[0]);
 	close(channel[1]);
 
-	tapdisk = getenv("TAPDISK2");
+	tapdisk = getenv("TAPDISK");
 	if (!tapdisk)
-		tapdisk = "tapdisk2";
+		tapdisk = getenv("TAPDISK2");
 
-	execlp(tapdisk, tapdisk, NULL);
+	if (tapdisk) {
+		execlp(tapdisk, tapdisk, NULL);
+		exit(errno);
+	}
 
-	EPRINTF("exec failed\n");
-	exit(1);
+	execl(TAPDISK_EXECDIR "/" TAPDISK_EXEC, TAPDISK_EXEC, NULL);
+
+	if (errno == ENOENT)
+		execl(TAPDISK_BUILDDIR "/" TAPDISK_EXEC, TAPDISK_EXEC, NULL);
+
+	exit(errno);
 }
 
 pid_t
@@ -90,7 +99,7 @@ tap_ctl_get_pid(const int id)
 	memset(&message, 0, sizeof(message));
 	message.type = TAPDISK_MESSAGE_PID;
 
-	err = tap_ctl_connect_send_and_receive(id, &message, 2);
+	err = tap_ctl_connect_send_and_receive(id, &message, NULL);
 	if (err)
 		return err;
 
@@ -119,6 +128,12 @@ tap_ctl_wait(pid_t child)
 	if (WIFSIGNALED(status)) {
 		int signo = WTERMSIG(status);
 		EPRINTF("tapdisk2[%d] killed by signal %d\n", child, signo);
+		if (signo == SIGUSR1)
+			/* NB. there's a race between tapdisk's
+			 * sigaction init and xen-bugtool shooting
+			 * debug signals. If killed by something as
+			 * innocuous as USR1, then retry. */
+			return -EAGAIN;
 		return -EINTR;
 	}
 
@@ -139,8 +154,8 @@ tap_ctl_get_child_id(int readfd)
 	}
 
 	errno = 0;
-	if (fscanf(f, BLKTAP2_CONTROL_DIR"/"
-		   BLKTAP2_CONTROL_SOCKET"%d", &id) != 1) {
+	if (fscanf(f, BLKTAP3_CONTROL_DIR "/"
+			   BLKTAP3_CONTROL_SOCKET "%d", &id) != 1) {
 		errno = (errno ? : EINVAL);
 		EPRINTF("parsing id failed: %d\n", errno);
 		id = -1;
@@ -158,13 +173,17 @@ tap_ctl_spawn(void)
 
 	readfd = -1;
 
+  again:
 	child = __tap_ctl_spawn(&readfd);
 	if (child < 0)
 		return child;
 
 	err = tap_ctl_wait(child);
-	if (err)
+	if (err) {
+		if (err == -EAGAIN)
+			goto again;
 		return err;
+	}
 
 	id = tap_ctl_get_child_id(readfd);
 	if (id < 0)


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7F-00040j-NV; Tue, 04 Dec 2012 18:21:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003xs-Mr
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25682] by server-8.bemta-5.messagelabs.com id
	76/CB-06050-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!7
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22058 invoked from network); 4 Dec 2012 18:21:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155753"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:05 +0000
MIME-Version: 1.0
X-Mercurial-Node: a1e748149395af688f3a8f68f8969a9962d26cdf
Message-ID: <a1e748149395af688f3a.1354645185@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:45 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 7 of 9 RFC v2] blktap3/libblktapctl: Introduce
 tapdisk message exchange functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces control/tap-ctl-ipc.c, which implements the
functionality for talking to a tapdisk process. The file is imported from
the existing blktap2 implementation, with most changes taken from the
blktap2 tree on GitHub.
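The generalized read loop introduced here as tap_ctl_read_raw() follows a standard select()-until-complete pattern. A minimal standalone sketch — read_exact() is a hypothetical name, and the (char *) cast stands in for the patch's arithmetic on a void pointer, which relies on a GCC extension:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Loop select()+read() until exactly `size` bytes arrive. As in the
 * original, one struct timeval is shared across iterations, so the
 * total wait can slightly exceed the nominal timeout. */
static int read_exact(int fd, void *buf, size_t size, struct timeval *timeout)
{
	size_t offset = 0;
	fd_set readfds;
	int ret;

	while (offset < size) {
		FD_ZERO(&readfds);
		FD_SET(fd, &readfds);

		ret = select(fd + 1, &readfds, NULL, NULL, timeout);
		if (ret == -1)
			break;		/* select error */
		else if (FD_ISSET(fd, &readfds)) {
			ret = read(fd, (char *)buf + offset, size - offset);
			if (ret <= 0)
				break;	/* EOF or read error */
			offset += ret;
		} else
			break;		/* timeout expired */
	}

	return offset == size ? 0 : -EIO;
}
```

Splitting the raw loop out lets tap_ctl_read_message() become a thin wrapper that reads sizeof(tapdisk_message_t) bytes, and also fixes the old code's scaled pointer arithmetic on a tapdisk_message_t pointer.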

diff --git a/tools/blktap2/control/tap-ctl-ipc.c b/tools/blktap3/control/tap-ctl-ipc.c
copy from tools/blktap2/control/tap-ctl-ipc.c
copy to tools/blktap3/control/tap-ctl-ipc.c
--- a/tools/blktap2/control/tap-ctl-ipc.c
+++ b/tools/blktap3/control/tap-ctl-ipc.c
@@ -30,44 +30,33 @@
 #include <unistd.h>
 #include <stdlib.h>
 #include <string.h>
+#include <fcntl.h>
 #include <sys/un.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/socket.h>
 
 #include "tap-ctl.h"
-#include "blktap2.h"
+#include "blktap3.h"
 
 int tap_ctl_debug = 0;
 
 int
-tap_ctl_read_message(int fd, tapdisk_message_t *message, int timeout)
+tap_ctl_read_raw(int fd, void *buf, size_t size, struct timeval *timeout)
 {
 	fd_set readfds;
-	int ret, len, offset;
-	struct timeval tv, *t;
+	size_t offset = 0;
+	int ret;
 
-	t      = NULL;
-	offset = 0;
-	len    = sizeof(tapdisk_message_t);
-
-	if (timeout) {
-		tv.tv_sec  = timeout;
-		tv.tv_usec = 0;
-		t = &tv;
-	}
-
-	memset(message, 0, sizeof(tapdisk_message_t));
-
-	while (offset < len) {
+	while (offset < size) {
 		FD_ZERO(&readfds);
 		FD_SET(fd, &readfds);
 
-		ret = select(fd + 1, &readfds, NULL, NULL, t);
+		ret = select(fd + 1, &readfds, NULL, NULL, timeout);
 		if (ret == -1)
 			break;
 		else if (FD_ISSET(fd, &readfds)) {
-			ret = read(fd, message + offset, len - offset);
+			ret = read(fd, buf + offset, size - offset);
 			if (ret <= 0)
 				break;
 			offset += ret;
@@ -75,34 +64,24 @@ tap_ctl_read_message(int fd, tapdisk_mes
 			break;
 	}
 
-	if (offset != len) {
-		EPRINTF("failure reading message\n");
+	if (offset != size) {
+		EPRINTF("failure reading data %zd/%zd\n", offset, size);
 		return -EIO;
 	}
 
-	DBG("received '%s' message (uuid = %u)\n",
-	    tapdisk_message_name(message->type), message->cookie);
-
 	return 0;
 }
 
 int
-tap_ctl_write_message(int fd, tapdisk_message_t *message, int timeout)
+tap_ctl_write_message(int fd, tapdisk_message_t * message,
+					  struct timeval *timeout)
 {
 	fd_set writefds;
 	int ret, len, offset;
-	struct timeval tv, *t;
 
-	t      = NULL;
 	offset = 0;
 	len    = sizeof(tapdisk_message_t);
 
-	if (timeout) {
-		tv.tv_sec  = timeout;
-		tv.tv_usec = 0;
-		t = &tv;
-	}
-
 	DBG("sending '%s' message (uuid = %u)\n",
 	    tapdisk_message_name(message->type), message->cookie);
 
@@ -113,7 +92,7 @@ tap_ctl_write_message(int fd, tapdisk_me
 		/* we don't bother reinitializing tv. at worst, it will wait a
 		 * bit more time than expected. */
 
-		ret = select(fd + 1, NULL, &writefds, NULL, t);
+		ret = select(fd + 1, NULL, &writefds, NULL, timeout);
 		if (ret == -1)
 			break;
 		else if (FD_ISSET(fd, &writefds)) {
@@ -134,7 +113,8 @@ tap_ctl_write_message(int fd, tapdisk_me
 }
 
 int
-tap_ctl_send_and_receive(int sfd, tapdisk_message_t *message, int timeout)
+tap_ctl_send_and_receive(int sfd, tapdisk_message_t * message,
+						 struct timeval *timeout)
 {
 	int err;
 
@@ -161,7 +141,7 @@ tap_ctl_socket_name(int id)
 	char *name;
 
 	if (asprintf(&name, "%s/%s%d",
-		     BLKTAP2_CONTROL_DIR, BLKTAP2_CONTROL_SOCKET, id) == -1)
+				 BLKTAP3_CONTROL_DIR, BLKTAP3_CONTROL_SOCKET, id) == -1)
 		return NULL;
 
 	return name;
@@ -216,13 +196,15 @@ tap_ctl_connect_id(int id, int *sfd)
 	}
 
 	err = tap_ctl_connect(name, sfd);
+
 	free(name);
 
 	return err;
 }
 
 int
-tap_ctl_connect_send_and_receive(int id, tapdisk_message_t *message, int timeout)
+tap_ctl_connect_send_and_receive(int id, tapdisk_message_t * message,
+								 struct timeval *timeout)
 {
 	int err, sfd;
 
@@ -235,3 +217,20 @@ tap_ctl_connect_send_and_receive(int id,
 	close(sfd);
 	return err;
 }
+
+int
+tap_ctl_read_message(int fd, tapdisk_message_t * message,
+					 struct timeval *timeout)
+{
+	size_t size = sizeof(tapdisk_message_t);
+	int err;
+
+	err = tap_ctl_read_raw(fd, message, size, timeout);
+	if (err)
+		return err;
+
+	DBG("received '%s' message (uuid = %u)\n",
+		tapdisk_message_name(message->type), message->cookie);
+
+	return 0;
+}

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7E-0003zV-5Q; Tue, 04 Dec 2012 18:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7B-0003xt-Mn
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:05 +0000
Received: from [85.158.139.83:25600] by server-9.bemta-5.messagelabs.com id
	E1/52-29295-01F3EB05; Tue, 04 Dec 2012 18:21:04 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21983 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155746"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:03 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:03 +0000
MIME-Version: 1.0
X-Mercurial-Node: a7fd551a4644d380f50540c75665286176b30526
Message-ID: <a7fd551a4644d380f505.1354645179@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:39 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 1 of 9 RFC v2] blktap3: Introduce blktap3 headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces basic blktap3 header files.

diff --git a/tools/blktap3/include/blktap3.h b/tools/blktap3/include/blktap3.h
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/include/blktap3.h
@@ -0,0 +1,47 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ *
+ * Commonly used headers and definitions.
+ */
+
+#ifndef __BLKTAP_3_H__
+#define __BLKTAP_3_H__
+
+#include "compiler.h"
+
+/* TODO remove from other files */
+#include <xen-external/bsd-sys-queue.h>
+
+#define BLKTAP3_CONTROL_NAME        "blktap-control"
+#define BLKTAP3_CONTROL_DIR         "/var/run/"BLKTAP3_CONTROL_NAME
+#define BLKTAP3_CONTROL_SOCKET      "ctl"
+
+#define BLKTAP3_ENOSPC_SIGNAL_FILE  "/var/run/tapdisk3-enospc"
+
+/*
+ * TODO They may have to change due to macro namespacing.
+ */
+#define TAILQ_MOVE_HEAD(node, src, dst, entry)	\
+	TAILQ_REMOVE(src, node, entry);				\
+	TAILQ_INSERT_HEAD(dst, node, entry);
+
+#define TAILQ_MOVE_TAIL(node, src, dst, entry)	\
+	TAILQ_REMOVE(src, node, entry);				\
+	TAILQ_INSERT_TAIL(dst, node, entry);
+
+#endif /* __BLKTAP_3_H__ */
diff --git a/tools/blktap3/include/compiler.h b/tools/blktap3/include/compiler.h
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/include/compiler.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ * 
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ * 
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301
+ * USA
+ */
+
+#ifndef __COMPILER_H__
+#define __COMPILER_H__
+
+#define likely(_cond)	__builtin_expect(!!(_cond), 1)
+#define unlikely(_cond)	__builtin_expect(!!(_cond), 0)
+
+/*
+ * FIXME taken from list.h, do we need to mention anything about the license?
+ */
+#define containerof(_ptr, _type, _memb) \
+	((_type*)((void*)(_ptr) - offsetof(_type, _memb)))
+
+#define __printf(a, b)	__attribute__((format(printf, a, b)))
+#define __scanf(_f, _a) __attribute__((format (scanf, _f, _a)))
+
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif /* ARRAY_SIZE */
+
+#define UNUSED_PARAMETER(x) \
+    (void)(x);
+
+#endif /* __COMPILER_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7F-00040j-NV; Tue, 04 Dec 2012 18:21:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003xs-Mr
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25682] by server-8.bemta-5.messagelabs.com id
	76/CB-06050-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!7
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22058 invoked from network); 4 Dec 2012 18:21:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155753"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:05 +0000
MIME-Version: 1.0
X-Mercurial-Node: a1e748149395af688f3a8f68f8969a9962d26cdf
Message-ID: <a1e748149395af688f3a.1354645185@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:45 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 7 of 9 RFC v2] blktap3/libblktapctl: Introduce
 tapdisk message exchange functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the file control/tap-ctl-ipc.c, which implements the
functionality for talking to a tapdisk process. The file is imported from the
existing blktap2 implementation, with most changes coming from the blktap2
tree that lives on GitHub.

diff --git a/tools/blktap2/control/tap-ctl-ipc.c b/tools/blktap3/control/tap-ctl-ipc.c
copy from tools/blktap2/control/tap-ctl-ipc.c
copy to tools/blktap3/control/tap-ctl-ipc.c
--- a/tools/blktap2/control/tap-ctl-ipc.c
+++ b/tools/blktap3/control/tap-ctl-ipc.c
@@ -30,44 +30,33 @@
 #include <unistd.h>
 #include <stdlib.h>
 #include <string.h>
+#include <fcntl.h>
 #include <sys/un.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/socket.h>
 
 #include "tap-ctl.h"
-#include "blktap2.h"
+#include "blktap3.h"
 
 int tap_ctl_debug = 0;
 
 int
-tap_ctl_read_message(int fd, tapdisk_message_t *message, int timeout)
+tap_ctl_read_raw(int fd, void *buf, size_t size, struct timeval *timeout)
 {
 	fd_set readfds;
-	int ret, len, offset;
-	struct timeval tv, *t;
+	size_t offset = 0;
+	int ret;
 
-	t      = NULL;
-	offset = 0;
-	len    = sizeof(tapdisk_message_t);
-
-	if (timeout) {
-		tv.tv_sec  = timeout;
-		tv.tv_usec = 0;
-		t = &tv;
-	}
-
-	memset(message, 0, sizeof(tapdisk_message_t));
-
-	while (offset < len) {
+	while (offset < size) {
 		FD_ZERO(&readfds);
 		FD_SET(fd, &readfds);
 
-		ret = select(fd + 1, &readfds, NULL, NULL, t);
+		ret = select(fd + 1, &readfds, NULL, NULL, timeout);
 		if (ret == -1)
 			break;
 		else if (FD_ISSET(fd, &readfds)) {
-			ret = read(fd, message + offset, len - offset);
+			ret = read(fd, buf + offset, size - offset);
 			if (ret <= 0)
 				break;
 			offset += ret;
@@ -75,34 +64,24 @@ tap_ctl_read_message(int fd, tapdisk_mes
 			break;
 	}
 
-	if (offset != len) {
-		EPRINTF("failure reading message\n");
+	if (offset != size) {
+		EPRINTF("failure reading data %zd/%zd\n", offset, size);
 		return -EIO;
 	}
 
-	DBG("received '%s' message (uuid = %u)\n",
-	    tapdisk_message_name(message->type), message->cookie);
-
 	return 0;
 }
 
 int
-tap_ctl_write_message(int fd, tapdisk_message_t *message, int timeout)
+tap_ctl_write_message(int fd, tapdisk_message_t * message,
+					  struct timeval *timeout)
 {
 	fd_set writefds;
 	int ret, len, offset;
-	struct timeval tv, *t;
 
-	t      = NULL;
 	offset = 0;
 	len    = sizeof(tapdisk_message_t);
 
-	if (timeout) {
-		tv.tv_sec  = timeout;
-		tv.tv_usec = 0;
-		t = &tv;
-	}
-
 	DBG("sending '%s' message (uuid = %u)\n",
 	    tapdisk_message_name(message->type), message->cookie);
 
@@ -113,7 +92,7 @@ tap_ctl_write_message(int fd, tapdisk_me
 		/* we don't bother reinitializing tv. at worst, it will wait a
 		 * bit more time than expected. */
 
-		ret = select(fd + 1, NULL, &writefds, NULL, t);
+		ret = select(fd + 1, NULL, &writefds, NULL, timeout);
 		if (ret == -1)
 			break;
 		else if (FD_ISSET(fd, &writefds)) {
@@ -134,7 +113,8 @@ tap_ctl_write_message(int fd, tapdisk_me
 }
 
 int
-tap_ctl_send_and_receive(int sfd, tapdisk_message_t *message, int timeout)
+tap_ctl_send_and_receive(int sfd, tapdisk_message_t * message,
+						 struct timeval *timeout)
 {
 	int err;
 
@@ -161,7 +141,7 @@ tap_ctl_socket_name(int id)
 	char *name;
 
 	if (asprintf(&name, "%s/%s%d",
-		     BLKTAP2_CONTROL_DIR, BLKTAP2_CONTROL_SOCKET, id) == -1)
+				 BLKTAP3_CONTROL_DIR, BLKTAP3_CONTROL_SOCKET, id) == -1)
 		return NULL;
 
 	return name;
@@ -216,13 +196,15 @@ tap_ctl_connect_id(int id, int *sfd)
 	}
 
 	err = tap_ctl_connect(name, sfd);
+
 	free(name);
 
 	return err;
 }
 
 int
-tap_ctl_connect_send_and_receive(int id, tapdisk_message_t *message, int timeout)
+tap_ctl_connect_send_and_receive(int id, tapdisk_message_t * message,
+								 struct timeval *timeout)
 {
 	int err, sfd;
 
@@ -235,3 +217,20 @@ tap_ctl_connect_send_and_receive(int id,
 	close(sfd);
 	return err;
 }
+
+int
+tap_ctl_read_message(int fd, tapdisk_message_t * message,
+					 struct timeval *timeout)
+{
+	size_t size = sizeof(tapdisk_message_t);
+	int err;
+
+	err = tap_ctl_read_raw(fd, message, size, timeout);
+	if (err)
+		return err;
+
+	DBG("received '%s' message (uuid = %u)\n",
+		tapdisk_message_name(message->type), message->cookie);
+
+	return 0;
+}

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7E-000404-VD; Tue, 04 Dec 2012 18:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003xs-5A
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:44191] by server-8.bemta-5.messagelabs.com id
	12/CB-06050-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!4
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22012 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155748"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: e6f891ddab4b19fc25cdff3c6ae8337dfb706662
Message-ID: <e6f891ddab4b19fc25cd.1354645181@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:41 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 3 of 9 RFC v2] blktap3/libblktapctl: Introduce
 the tapdisk control header
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the header file where tapdisk control-related structures
and functions are declared. The file is based on the existing blktap2
implementation, with most changes coming from the blktap2 tree that lives on
GitHub. Linux lists are replaced by BSD tail queues. A few functions are
partly documented and most are not documented at all; this will be addressed
by a future patch.

diff --git a/tools/blktap2/control/tap-ctl.h b/tools/blktap3/control/tap-ctl.h
copy from tools/blktap2/control/tap-ctl.h
copy to tools/blktap3/control/tap-ctl.h
--- a/tools/blktap2/control/tap-ctl.h
+++ b/tools/blktap3/control/tap-ctl.h
@@ -30,72 +30,180 @@
 
 #include <syslog.h>
 #include <errno.h>
+#include <sys/time.h>
 #include <tapdisk-message.h>
+#include "blktap3.h"
 
+/*
+ * TODO These are private, move to an internal header.
+ */
 extern int tap_ctl_debug;
 
 #ifdef TAPCTL
-#define DBG(_f, _a...)				\
-	do {					\
+#define DBG(_f, _a...)			\
+	do {						\
 		if (tap_ctl_debug)		\
 			printf(_f, ##_a);	\
 	} while (0)
 
 #define DPRINTF(_f, _a...) syslog(LOG_INFO, _f, ##_a)
 #define EPRINTF(_f, _a...) syslog(LOG_ERR, "tap-err:%s: " _f, __func__, ##_a)
-#define  PERROR(_f, _a...) syslog(LOG_ERR, "tap-err:%s: " _f ": %s", __func__, ##_a, \
-				  strerror(errno))
+#define PERROR(_f, _a...) syslog(LOG_ERR, "tap-err:%s: " _f ": %s", \
+        __func__, ##_a, strerror(errno))
 #endif
 
-void tap_ctl_version(int *major, int *minor);
-int tap_ctl_kernel_version(int *major, int *minor);
+/**
+ * Contains information about a tapdisk process.
+ */
+typedef struct tap_list {
 
-int tap_ctl_check_blktap(const char **message);
-int tap_ctl_check_version(const char **message);
-int tap_ctl_check(const char **message);
+    /**
+     * The process ID.
+     */
+    pid_t pid;
 
+    /**
+     * TODO
+     */
+    int minor;
+
+    /**
+     * State of the VBD, specified in drivers/tapdisk-vbd.h.
+     */
+    int state;
+
+    /**
+     * TODO
+     */
+    char *type;
+
+    /**
+     * /path/to/file
+     */
+    char *path;
+
+    /**
+     * for linked lists
+     */
+    TAILQ_ENTRY(tap_list) entry;
+} tap_list_t;
+
+TAILQ_HEAD(tqh_tap_list, tap_list);
+
+/**
+ * Iterate over a list of struct tap_list elements.
+ */
+#define tap_list_for_each_entry(_pos, _head) \
+    TAILQ_FOREACH(_pos, _head, entry)
+
+/**
+ * Iterate over a list of struct tap_list elements allowing deletions without
+ * having to restart the iteration.
+ */
+#define tap_list_for_each_entry_safe(_pos, _n, _head) \
+    TAILQ_FOREACH_SAFE(_pos, _head, entry, _n)
+
+/**
+ * Connects to a tapdisk.
+ *
+ * @param path the path of the control socket (e.g.
+ * /var/run/blktap-control/ctl/<pid>)
+ * @param socket output parameter that receives the connection
+ * @returns 0 on success, an error code otherwise
+ */
 int tap_ctl_connect(const char *path, int *socket);
-int tap_ctl_connect_id(int id, int *socket);
-int tap_ctl_read_message(int fd, tapdisk_message_t *message, int timeout);
-int tap_ctl_write_message(int fd, tapdisk_message_t *message, int timeout);
-int tap_ctl_send_and_receive(int fd, tapdisk_message_t *message, int timeout);
-int tap_ctl_connect_send_and_receive(int id,
-				     tapdisk_message_t *message, int timeout);
+
+/**
+ * Connects to a tapdisk.
+ *
+ * @param id the process ID of the tapdisk to connect to
+ * @param socket output parameter that receives the connection
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_connect_id(const int id, int *socket);
+
+/**
+ * Reads from the tapdisk connection to the buffer.
+ *
+ * @param fd the file descriptor of the socket to read from
+ * @param buf buffer that receives the output
+ * @param sz size, in bytes, of the buffer
+ * @param timeout (optional) specifies the maximum time to wait for reading
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_read_raw(const int fd, void *buf, const size_t sz,
+        struct timeval *timeout);
+
+int tap_ctl_read_message(int fd, tapdisk_message_t * message,
+        struct timeval *timeout);
+
+int tap_ctl_write_message(int fd, tapdisk_message_t * message,
+        struct timeval *timeout);
+
+int tap_ctl_send_and_receive(int fd, tapdisk_message_t * message,
+        struct timeval *timeout);
+
+int tap_ctl_connect_send_and_receive(int id, tapdisk_message_t * message,
+        struct timeval *timeout);
+
 char *tap_ctl_socket_name(int id);
 
-typedef struct {
-	int         id;
-	pid_t       pid;
-	int         minor;
-	int         state;
-	char       *type;
-	char       *path;
-} tap_list_t;
+int tap_ctl_list_pid(pid_t pid, struct tqh_tap_list *list);
 
-int tap_ctl_get_driver_id(const char *handle);
+/**
+ * Retrieves a list of all tapdisks.
+ *
+ * @param list output parameter that receives the list of tapdisks
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_list(struct tqh_tap_list *list);
 
-int tap_ctl_list(tap_list_t ***list);
-void tap_ctl_free_list(tap_list_t **list);
-int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
+/**
+ * Deallocates a list of struct tap_list.
+ *
+ * @param list the tapdisk information structure to deallocate.
+ */
+void tap_ctl_list_free(struct tqh_tap_list *list);
 
-int tap_ctl_allocate(int *minor, char **devname);
-int tap_ctl_free(const int minor);
+/**
+ * Creates a tapdisk process.
+ *
+ * TODO document parameters
+ * @param params
+ * @param flags
+ * @param prt_minor
+ * @param secondary
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_create(const char *params, int flags, int prt_minor,
+        char *secondary);
 
-int tap_ctl_create(const char *params, char **devname);
-int tap_ctl_destroy(const int id, const int minor);
+int tap_ctl_destroy(const int id, const int minor, int force,
+        struct timeval *timeout);
 
+/*
+ * TODO The following functions are not currently used by anything else
+ * other than the tapdisk itself. Move to a private header?
+ */
 int tap_ctl_spawn(void);
 pid_t tap_ctl_get_pid(const int id);
 
 int tap_ctl_attach(const int id, const int minor);
 int tap_ctl_detach(const int id, const int minor);
 
-int tap_ctl_open(const int id, const int minor, const char *params);
-int tap_ctl_close(const int id, const int minor, const int force);
+int tap_ctl_open(const int id, const int minor, const char *params,
+        int flags, const int prt_minor, const char *secondary);
 
-int tap_ctl_pause(const int id, const int minor);
+int tap_ctl_close(const int id, const int minor, const int force,
+        struct timeval *timeout);
+
+int tap_ctl_pause(const int id, const int minor,
+        struct timeval *timeout);
+
 int tap_ctl_unpause(const int id, const int minor, const char *params);
 
-int tap_ctl_blk_major(void);
+ssize_t tap_ctl_stats(pid_t pid, int minor, char *buf, size_t size);
 
-#endif
+int tap_ctl_stats_fwrite(pid_t pid, int minor, FILE * out);
+
+#endif /* __TAP_CTL_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7G-00041Q-FK; Tue, 04 Dec 2012 18:21:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003yY-T2
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:07 +0000
Received: from [85.158.143.99:7984] by server-1.bemta-4.messagelabs.com id
	0A/2A-27934-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354645265!22796529!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18982 invoked from network); 4 Dec 2012 18:21:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155752"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: 56108441d0c306084c1d91b48bbdb5d875d68732
Message-ID: <56108441d0c306084c1d.1354645184@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:44 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 6 of 9 RFC v2] blktap3/libblktapctl: Introduce
 block device control information retrieval functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the function tap_ctl_info. It is used by the tapback
daemon to retrieve the size of the block device, as well as the sector size, in
order to communicate them to blkfront so that it can create the virtual block
device.

diff --git a/tools/blktap3/control/tap-ctl-info.c b/tools/blktap3/control/tap-ctl-info.c
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-info.c
@@ -0,0 +1,47 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "tap-ctl.h"
+#include "tap-ctl-info.h"
+
+int tap_ctl_info(pid_t pid, int minor, unsigned long long *sectors,
+        unsigned int *sector_size, unsigned int *info)
+{
+    tapdisk_message_t message;
+    int err;
+
+    memset(&message, 0, sizeof(message));
+    message.type = TAPDISK_MESSAGE_DISK_INFO;
+    message.cookie = minor;
+
+    err = tap_ctl_connect_send_and_receive(pid, &message, NULL);
+    if (err)
+        return err;
+
+    if (message.type != TAPDISK_MESSAGE_DISK_INFO_RSP)
+        return -EINVAL;
+
+    *sectors = message.u.image.sectors;
+    *sector_size = message.u.image.sector_size;
+    *info = message.u.image.info;
+    return err;
+}
diff --git a/tools/blktap3/control/tap-ctl-info.h b/tools/blktap3/control/tap-ctl-info.h
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-info.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef __TAP_CTL_INFO_H__
+#define __TAP_CTL_INFO_H__
+
+/**
+ * Retrieves virtual disk information from a tapdisk.
+ *
+ * @pid the process ID of the tapdisk process
+ * @minor the minor device number
+ * @sectors output parameter that receives the number of sectors
+ * @sector_size output parameter that receives the size of the sector
+ * @info TODO ?
+ * 
+ */
+int tap_ctl_info(pid_t pid, int minor, unsigned long long *sectors,
+        unsigned int *sector_size, unsigned int *info);
+
+#endif /* __TAP_CTL_INFO_H__ */


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7D-0003zI-OU; Tue, 04 Dec 2012 18:21:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7B-0003xs-Kd
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:05 +0000
Received: from [85.158.139.83:25591] by server-8.bemta-5.messagelabs.com id
	BF/BB-06050-01F3EB05; Tue, 04 Dec 2012 18:21:04 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21978 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155745"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:03 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:03 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:38 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 0 of 9 RFC v2] blktap3: Introduce a small subset
 of blktap3 files
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

blktap3 is a disk back-end driver. It is based on blktap2 but does not require
the blktap/blkback kernel modules as it allows tapdisk to talk directly to
blkfront. This primarily simplifies maintenance, and _may_ lead to performance
improvements. blktap3 is based on a blktap2 fork maintained mostly by Citrix
(hosted on GitHub), so the changes from that fork are also imported, in
addition to the blktap3-specific ones.

I've organised my upstream effort as follows:
1. Upstream the smallest possible subset of blktap3 that will allow guest VMs
   to use RAW images backed by blktap3. This will enable early testing of the
   bits introduced by blktap3.
2. Upstream the remainder of blktap3, most notably the back-end drivers (e.g.
   VHD).
3. Import bug fixes from the blktap2 fork on GitHub.
4. Import new features and optimisations from the blktap2 fork on GitHub,
   e.g. the mirroring plug-in.

blktap3 is made of the following components:
1. blkfront (not a blktap3 component and already upstream): a virtual
   block device driver in the guest VM that receives block I/O requests and
   forwards them to tapdisk via the shared ring.
2. tapdisk: a user space process that receives block I/O requests from
   blkfront, translates them to whatever the current backing file format is
   (e.g. RAW, VHD, qcow), and performs the actual I/O. Apart from block
   I/O requests, the tapdisk also allows basic management of each virtual
   block device, e.g. a device may be temporarily paused. tapdisk listens on
   a loopback socket for such commands. The tap-ctl utility (explained later)
   can be used for managing the tapdisk.
3. libtapback: a user space library that implements the functionality required
   to access the shared ring. It is used by tapdisk to obtain the block I/O
   requests forwarded by blkfront, and to produce the corresponding responses.
   This is the very "heart" of blktap3; its architecture will be thoroughly
   explained by the patch series that introduces it.
4. tapback: a user space daemon that acts as the back-end of each virtual
   block device: it monitors XenStore for the block front-end's state changes,
   creates/destroys the shared ring, and instructs the tapdisk to connect
   to/disconnect from the shared ring. It also communicates to the block
   front-end required parameters (e.g. block device size in sectors) via
   XenStore.
5. libblktapctl: a user space library where the tapdisk management functions
   are implemented.
6. tap-ctl: a user space utility, built on libblktapctl, that allows
   management of the tapdisk.

The tapdisk is spawned/destroyed by libxl when a domain is created/destroyed,
in the exact same way as in blktap2. libxl uses libblktapctl for this.

This patch series introduces a small subset of files required by tapback (the
tapback daemon is introduced by the next patch series):
- basic blktap3 header files
- a rudimentary implementation of libblktapctl. Only the bits required by
  tapback to manage the tapdisk are introduced; the rest of this library will
  be introduced by later patches.

---
Changed since v1:
* In all patches the patch message has been improved.
* Patches 1, 5, and 6 use GPLv2.
* Patch 0: Basic explanation of blktap3's fundamental components.
* Patch 9: Improved tools/blktap3/control/Makefile by moving hard coded
  paths to config/StdGNU.mk.


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7G-000410-3f; Tue, 04 Dec 2012 18:21:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003yD-HQ
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25645] by server-3.bemta-5.messagelabs.com id
	BB/2D-18736-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!5
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22019 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155750"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: db722120283629dbd51bd92ba0a6ad91c11ce1f2
Message-ID: <db722120283629dbd51b.1354645182@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:42 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 4 of 9 RFC v2] blktap3/libblktapctl: Introduce
 listing running tapdisks functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces tap-ctl-list.c, the file implementing the tapdisk
listing functionality. It is based on the existing blktap2 file, with most
changes coming from the blktap2 fork on GitHub. I have not examined those
changes in detail, but they appear to be minor.

Function tap_ctl_list needs to be amended because minors are not retrieved
from sysfs (there's no sysfs blktap directory any more).

diff --git a/tools/blktap2/control/tap-ctl-list.c b/tools/blktap3/control/tap-ctl-list.c
copy from tools/blktap2/control/tap-ctl-list.c
copy to tools/blktap3/control/tap-ctl-list.c
--- a/tools/blktap2/control/tap-ctl-list.c
+++ b/tools/blktap3/control/tap-ctl-list.c
@@ -34,23 +34,47 @@
 #include <glob.h>
 
 #include "tap-ctl.h"
-#include "blktap2.h"
-#include "list.h"
+#include "blktap3.h"
+
+/**
+ * Allocates and initializes a tap_list_t.
+ */
+static tap_list_t *
+_tap_list_alloc(void)
+{
+	const size_t sz = sizeof(tap_list_t);
+	tap_list_t *tl;
+
+	tl = malloc(sz);
+	if (!tl)
+		return NULL;
+
+	tl->pid = -1;
+	tl->minor = -1;
+	tl->state = -1;
+	tl->type = NULL;
+	tl->path = NULL;
+
+	return tl;
+}
 
 static void
-free_list(tap_list_t *entry)
+_tap_list_free(tap_list_t * tl, struct tqh_tap_list *list)
 {
-	if (entry->type) {
-		free(entry->type);
-		entry->type = NULL;
+	if (tl->type) {
+		free(tl->type);
+		tl->type = NULL;
 	}
 
-	if (entry->path) {
-		free(entry->path);
-		entry->path = NULL;
+	if (tl->path) {
+		free(tl->path);
+		tl->path = NULL;
 	}
 
-	free(entry);
+	if (list)
+		TAILQ_REMOVE(list, tl, entry);
+
+	free(tl);
 }
 
 int
@@ -66,7 +90,7 @@ int
 	len = ptr - params;
 
 	*type = strndup(params, len);
-	*path =  strdup(params + len + 1);
+	*path = strdup(params + len + 1);
 
 	if (!*type || !*path) {
 		free(*type);
@@ -81,102 +105,26 @@ int
 	return 0;
 }
 
-static int
-init_list(tap_list_t *entry,
-	  int tap_id, pid_t tap_pid, int vbd_minor, int vbd_state,
-	  const char *params)
+void
+tap_ctl_list_free(struct tqh_tap_list *list)
 {
-	int err = 0;
+	tap_list_t *tl, *n;
 
-	entry->id     = tap_id;
-	entry->pid    = tap_pid;
-	entry->minor  = vbd_minor;
-	entry->state  = vbd_state;
-
-	if (params)
-		err = _parse_params(params, &entry->type, &entry->path);
-
-	return err;
-}
-
-void
-tap_ctl_free_list(tap_list_t **list)
-{
-	tap_list_t **_entry;
-
-	for (_entry = list; *_entry != NULL; ++_entry)
-		free_list(*_entry);
-
-	free(list);
-}
-
-static tap_list_t**
-tap_ctl_alloc_list(int n)
-{
-	tap_list_t **list, *entry;
-	size_t size;
-	int i;
-
-	size = sizeof(tap_list_t*) * (n+1);
-	list = malloc(size);
-	if (!list)
-		goto fail;
-
-	memset(list, 0, size);
-
-	for (i = 0; i < n; ++i) {
-		tap_list_t *entry;
-
-		entry = malloc(sizeof(tap_list_t));
-		if (!entry)
-			goto fail;
-
-		memset(entry, 0, sizeof(tap_list_t));
-
-		list[i] = entry;
-	}
-
-	return list;
-
-fail:
-	if (list)
-		tap_ctl_free_list(list);
-
-	return NULL;
-}
-
-static int
-tap_ctl_list_length(const tap_list_t **list)
-{
-	const tap_list_t **_entry;
-	int n;
-
-	n = 0;
-	for (_entry = list; *_entry != NULL; ++_entry)
-		n++;
-
-	return n;
-}
-
-static int
-_tap_minor_cmp(const void *a, const void *b)
-{
-	return *(int*)a - *(int*)b;
+	tap_list_for_each_entry_safe(tl, n, list)
+		_tap_list_free(tl, list);
 }
 
 int
-_tap_ctl_find_minors(int **_minorv)
+_tap_ctl_find_tapdisks(struct tqh_tap_list *list)
 {
-	glob_t glbuf = { 0 };
+	glob_t glbuf = { 0 };
 	const char *pattern, *format;
-	int *minorv = NULL, n_minors = 0;
-	int err, i;
+	int err, i, n_taps = 0;
 
-	pattern = BLKTAP2_SYSFS_DIR"/blktap*";
-	format  = BLKTAP2_SYSFS_DIR"/blktap%d";
+	pattern = BLKTAP3_CONTROL_DIR "/" BLKTAP3_CONTROL_SOCKET "*";
+	format = BLKTAP3_CONTROL_DIR "/" BLKTAP3_CONTROL_SOCKET "%d";
 
-	n_minors = 0;
-	minorv   = NULL;
+	TAILQ_INIT(list);
 
 	err = glob(pattern, 0, NULL, &glbuf);
 	switch (err) {
@@ -186,337 +134,231 @@ int
 	case GLOB_ABORTED:
 	case GLOB_NOSPACE:
 		err = -errno;
-		EPRINTF("%s: glob failed, err %d", pattern, err);
-		goto fail;
-	}
-
-	minorv = malloc(sizeof(int) * glbuf.gl_pathc);
-	if (!minorv) {
-		err = -errno;
+		EPRINTF("%s: glob failed: %s", pattern, strerror(err));
 		goto fail;
 	}
 
 	for (i = 0; i < glbuf.gl_pathc; ++i) {
+		tap_list_t *tl;
 		int n;
 
-		n = sscanf(glbuf.gl_pathv[i], format, &minorv[n_minors]);
+		tl = _tap_list_alloc();
+		if (!tl) {
+			err = -ENOMEM;
+			goto fail;
+		}
+
+		n = sscanf(glbuf.gl_pathv[i], format, &tl->pid);
 		if (n != 1)
-			continue;
+			goto skip;
 
-		n_minors++;
+		tl->pid = tap_ctl_get_pid(tl->pid);
+		if (tl->pid < 0)
+			goto skip;
+
+		TAILQ_INSERT_TAIL(list, tl, entry);
+		n_taps++;
+		continue;
+
+	  skip:
+		_tap_list_free(tl, NULL);
 	}
 
-	qsort(minorv, n_minors, sizeof(int), _tap_minor_cmp);
-
-done:
-	*_minorv = minorv;
+  done:
 	err = 0;
-
-out:
-	if (glbuf.gl_pathv)
-		globfree(&glbuf);
-
-	return err ? : n_minors;
-
-fail:
-	if (minorv)
-		free(minorv);
-
-	goto out;
-}
-
-struct tapdisk {
-	int    id;
-	pid_t  pid;
-	struct list_head list;
-};
-
-static int
-_tap_tapdisk_cmp(const void *a, const void *b)
-{
-	return ((struct tapdisk*)a)->id - ((struct tapdisk*)b)->id;
-}
-
-int
-_tap_ctl_find_tapdisks(struct tapdisk **_tapv)
-{
-	glob_t glbuf = { 0 };
-	const char *pattern, *format;
-	struct tapdisk *tapv = NULL;
-	int err, i, n_taps = 0;
-
-	pattern = BLKTAP2_CONTROL_DIR"/"BLKTAP2_CONTROL_SOCKET"*";
-	format  = BLKTAP2_CONTROL_DIR"/"BLKTAP2_CONTROL_SOCKET"%d";
-
-	n_taps = 0;
-	tapv   = NULL;
-
-	err = glob(pattern, 0, NULL, &glbuf);
-	switch (err) {
-	case GLOB_NOMATCH:
-		goto done;
-
-	case GLOB_ABORTED:
-	case GLOB_NOSPACE:
-		err = -errno;
-		EPRINTF("%s: glob failed, err %d", pattern, err);
-		goto fail;
-	}
-
-	tapv = malloc(sizeof(struct tapdisk) * glbuf.gl_pathc);
-	if (!tapv) {
-		err = -errno;
-		goto fail;
-	}
-
-	for (i = 0; i < glbuf.gl_pathc; ++i) {
-		struct tapdisk *tap;
-		int n;
-
-		tap = &tapv[n_taps];
-
-		err = sscanf(glbuf.gl_pathv[i], format, &tap->id);
-		if (err != 1)
-			continue;
-
-		tap->pid = tap_ctl_get_pid(tap->id);
-		if (tap->pid < 0)
-			continue;
-
-		n_taps++;
-	}
-
-	qsort(tapv, n_taps, sizeof(struct tapdisk), _tap_tapdisk_cmp);
-
-	for (i = 0; i < n_taps; ++i)
-		INIT_LIST_HEAD(&tapv[i].list);
-
-done:
-	*_tapv = tapv;
-	err = 0;
-
-out:
+  out:
 	if (glbuf.gl_pathv)
 		globfree(&glbuf);
 
 	return err ? : n_taps;
 
-fail:
-	if (tapv)
-		free(tapv);
-
+  fail:
+	tap_ctl_list_free(list);
 	goto out;
 }
 
-struct tapdisk_list {
-	int  minor;
-	int  state;
-	char *params;
-	struct list_head entry;
-};
-
-int
-_tap_ctl_list_tapdisk(int id, struct list_head *_list)
+/**
+ * Retrieves all the VBDs a tapdisk is serving.
+ *
+ * @param pid the process ID of the tapdisk whose VBDs should be retrieved
+ * @param list output parameter that receives the list of VBD
+ * @returns 0 on success, an error code otherwise
+ */
+static int
+_tap_ctl_list_tapdisk(pid_t pid, struct tqh_tap_list *list)
 {
+	struct timeval timeout = { .tv_sec = 10, .tv_usec = 0 };
 	tapdisk_message_t message;
-	struct list_head list;
-	struct tapdisk_list *tl, *next;
+	tap_list_t *tl;
 	int err, sfd;
 
-	err = tap_ctl_connect_id(id, &sfd);
+	err = tap_ctl_connect_id(pid, &sfd);
 	if (err)
 		return err;
 
 	memset(&message, 0, sizeof(message));
-	message.type   = TAPDISK_MESSAGE_LIST;
+	message.type = TAPDISK_MESSAGE_LIST;
 	message.cookie = -1;
 
-	err = tap_ctl_write_message(sfd, &message, 2);
+	err = tap_ctl_write_message(sfd, &message, &timeout);
 	if (err)
 		return err;
 
-	INIT_LIST_HEAD(&list);
+	TAILQ_INIT(list);
+
 	do {
-		err = tap_ctl_read_message(sfd, &message, 2);
+		err = tap_ctl_read_message(sfd, &message, &timeout);
 		if (err) {
 			err = -EPROTO;
-			break;
+			goto fail;
 		}
 
 		if (message.u.list.count == 0)
 			break;
 
-		tl = malloc(sizeof(struct tapdisk_list));
+		tl = _tap_list_alloc();
 		if (!tl) {
 			err = -ENOMEM;
-			break;
+			goto fail;
 		}
 
-		tl->minor  = message.u.list.minor;
-		tl->state  = message.u.list.state;
+		tl->pid = pid;
+		tl->minor = message.u.list.minor;
+		tl->state = message.u.list.state;
+
 		if (message.u.list.path[0] != 0) {
-			tl->params = strndup(message.u.list.path,
-					     sizeof(message.u.list.path));
-			if (!tl->params) {
-				err = -errno;
-				break;
+			err = _parse_params(message.u.list.path, &tl->type, &tl->path);
+			if (err) {
+				_tap_list_free(tl, NULL);
+				goto fail;
 			}
-		} else
-			tl->params = NULL;
+		}
 
-		list_add(&tl->entry, &list);
+		TAILQ_INSERT_HEAD(list, tl, entry);
 	} while (1);
 
-	if (err)
-		list_for_each_entry_safe(tl, next, &list, entry) {
-			list_del(&tl->entry);
-			free(tl->params);
-			free(tl);
-		}
+	err = 0;
+  out:
+	close(sfd);
+	return err;
 
-	close(sfd);
-	list_splice(&list, _list);
-	return err;
-}
-
-void
-_tap_ctl_free_tapdisks(struct tapdisk *tapv, int n_taps)
-{
-	struct tapdisk *tap;
-
-	for (tap = tapv; tap < &tapv[n_taps]; ++tap) {
-		struct tapdisk_list *tl, *next;
-
-		list_for_each_entry_safe(tl, next, &tap->list, entry) {
-			free(tl->params);
-			free(tl);
-		}
-	}
-
-	free(tapv);
+  fail:
+	tap_ctl_list_free(list);
+	goto out;
 }
 
 int
-_tap_list_join3(int n_minors, int *minorv, int n_taps, struct tapdisk *tapv,
-		tap_list_t ***_list)
+tap_ctl_list(struct tqh_tap_list *list)
 {
-	tap_list_t **list, **_entry, *entry;
-	int i, _m, err;
+	struct tqh_tap_list minors, tapdisks, vbds;
+	tap_list_t *t, *next_t, *v, *next_v, *m, *next_m;
+	int err;
 
-	list = tap_ctl_alloc_list(n_minors + n_taps);
-	if (!list) {
-		err = -ENOMEM;
+	/*
+	 * Find all minors, find all tapdisks, then list all minors
+	 * they attached to. Output is a 3-way outer join.
+	 */
+	TAILQ_INIT(&minors);
+
+	/*
+	 * TODO There's no blktap sysfs entry anymore, get rid of minors and
+	 * rationalise the rest of this function.
+	 */
+#if 0
+	err = _tap_ctl_find_minors(&minors);
+	if (err < 0) {
+		EPRINTF("error finding minors: %s\n", strerror(err));
+		goto fail;
+	}
+#endif
+
+	err = _tap_ctl_find_tapdisks(&tapdisks);
+	if (err < 0) {
+		EPRINTF("error finding tapdisks: %s\n", strerror(err));
 		goto fail;
 	}
 
-	_entry = list;
+	TAILQ_INIT(list);
 
-	for (i = 0; i < n_taps; ++i) {
-		struct tapdisk *tap = &tapv[i];
-		struct tapdisk_list *tl;
+	tap_list_for_each_entry_safe(t, next_t, &tapdisks) {
 
-		/* orphaned tapdisk */
-		if (list_empty(&tap->list)) {
-			err = init_list(*_entry++, tap->id, tap->pid, -1, -1, NULL);
-			if (err)
-				goto fail;
+		err = _tap_ctl_list_tapdisk(t->pid, &vbds);
+
+		/*
+		 * TODO Don't just swallow the error, print a warning, at least.
+		 */
+		if (err || TAILQ_EMPTY(&vbds)) {
+			TAILQ_MOVE_TAIL(t, &tapdisks, list, entry);
 			continue;
 		}
 
-		list_for_each_entry(tl, &tap->list, entry) {
+		tap_list_for_each_entry_safe(v, next_v, &vbds) {
 
-			err = init_list(*_entry++,
-					tap->id, tap->pid,
-					tl->minor, tl->state, tl->params);
-			if (err)
-				goto fail;
+			tap_list_for_each_entry_safe(m, next_m, &minors)
+				if (m->minor == v->minor) {
+					_tap_list_free(m, &minors);
+					break;
+				}
 
-			if (tl->minor >= 0) {
-				/* clear minor */
-				for (_m = 0; _m < n_minors; ++_m) {
-					if (minorv[_m] == tl->minor) {
-						minorv[_m] = -1;
-						break;
-					}
-				}
-			}
+			TAILQ_MOVE_TAIL(v, &vbds, list, entry);
 		}
+
+		_tap_list_free(t, &tapdisks);
 	}
 
 	/* orphaned minors */
-	for (_m = 0; _m < n_minors; ++_m) {
-		int minor = minorv[_m];
-		if (minor >= 0) {
-			err = init_list(*_entry++, -1, -1, minor, -1, NULL);
-			if (err)
-				goto fail;
-		}
-	}
-
-	/* free extraneous list entries */
-	for (; *_entry != NULL; ++entry) {
-		free_list(*_entry);
-		*_entry = NULL;
-	}
-
-	*_list = list;
+	TAILQ_CONCAT(list, &minors, entry);
 
 	return 0;
 
 fail:
-	if (list)
-		tap_ctl_free_list(list);
+	tap_ctl_list_free(list);
+
+	tap_ctl_list_free(&vbds);
+	tap_ctl_list_free(&tapdisks);
+	tap_ctl_list_free(&minors);
 
 	return err;
 }
 
 int
-tap_ctl_list(tap_list_t ***list)
+tap_ctl_list_pid(pid_t pid, struct tqh_tap_list *list)
 {
-	int n_taps, n_minors, err, *minorv;
-	struct tapdisk *tapv, *tap;
+	tap_list_t *t;
+	int err;
 
-	n_taps   = -1;
-	n_minors = -1;
+	t = _tap_list_alloc();
+	if (!t)
+		return -ENOMEM;
 
-	err = n_minors = _tap_ctl_find_minors(&minorv);
-	if (err < 0)
-		goto out;
-
-	err = n_taps = _tap_ctl_find_tapdisks(&tapv);
-	if (err < 0)
-		goto out;
-
-	for (tap = tapv; tap < &tapv[n_taps]; ++tap) {
-		err = _tap_ctl_list_tapdisk(tap->id, &tap->list);
-		if (err)
-			goto out;
+	t->pid = tap_ctl_get_pid(pid);
+	if (t->pid < 0) {
+		_tap_list_free(t, NULL);
+		return 0;
 	}
 
-	err = _tap_list_join3(n_minors, minorv, n_taps, tapv, list);
+	err = _tap_ctl_list_tapdisk(t->pid, list);
 
-out:
-	if (n_taps > 0)
-		_tap_ctl_free_tapdisks(tapv, n_taps);
+	if (err || TAILQ_EMPTY(list))
+		TAILQ_INSERT_TAIL(list, t, entry);
 
-	if (n_minors > 0)
-		free(minorv);
-
-	return err;
+	return 0;
 }
 
 int
-tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
+tap_ctl_find_minor(const char *type, const char *path)
 {
-	tap_list_t **list, **_entry;
-	int ret = -ENOENT, err;
+	struct tqh_tap_list list = TAILQ_HEAD_INITIALIZER(list);
+	tap_list_t *entry;
+	int minor, err;
 
 	err = tap_ctl_list(&list);
 	if (err)
 		return err;
 
-	for (_entry = list; *_entry != NULL; ++_entry) {
-		tap_list_t *entry  = *_entry;
+	minor = -1;
+
+	tap_list_for_each_entry(entry, &list) {
 
 		if (type && (!entry->type || strcmp(entry->type, type)))
 			continue;
@@ -524,13 +366,11 @@ tap_ctl_find(const char *type, const cha
 		if (path && (!entry->path || strcmp(entry->path, path)))
 			continue;
 
-		*tap = *entry;
-		tap->type = tap->path = NULL;
-		ret = 0;
+		minor = entry->minor;
 		break;
 	}
 
-	tap_ctl_free_list(list);
+	tap_ctl_list_free(&list);
 
-	return ret;
+	return minor >= 0 ? minor : -ENOENT;
 }


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7F-00040L-Bp; Tue, 04 Dec 2012 18:21:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003yJ-J2
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25650] by server-6.bemta-5.messagelabs.com id
	8C/FE-19321-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!6
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22036 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155751"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: 59bb0802d11ee95e4a08e7a72ff49ecfa98cfc30
Message-ID: <59bb0802d11ee95e4a08.1354645183@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:43 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 5 of 9 RFC v2] blktap3/libblktapctl: Introduce
 functionality used by tapback to instruct tapdisk to connect to the sring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the functions tap_ctl_connect_xenblkif and
tap_ctl_disconnect_xenblkif, which are used by the tapback daemon to instruct a
running tapdisk process to connect to/disconnect from the shared ring.
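
The gref-packing convention is the key contract here: the caller must supply an
array of 1 << order grant references. Below is a minimal, self-contained sketch
of that copy loop; the struct is a hypothetical stand-in for the relevant
tapdisk_message_t fields, not the real message layout.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins; the real definitions live in the tapdisk headers. */
typedef unsigned int grant_ref_t;

struct blkif_req {
	grant_ref_t gref[8];	/* 1 << order entries are valid */
	int order;
};

/* Mirrors the copy loop in tap_ctl_connect_xenblkif: the caller passes a
 * gref array holding exactly 1 << order grant references. */
static void fill_grefs(struct blkif_req *req, const grant_ref_t *grefs,
		       int order)
{
	int i;

	memset(req, 0, sizeof(*req));
	for (i = 0; i < 1 << order; i++)
		req->gref[i] = grefs[i];
	req->order = order;
}
```

With order = 2, exactly four references are copied and the rest of the array
stays zeroed.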

diff --git a/tools/blktap3/control/tap-ctl-xen.c b/tools/blktap3/control/tap-ctl-xen.c
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-xen.c
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "tap-ctl.h"
+#include "tap-ctl-xen.h"
+
+int tap_ctl_connect_xenblkif(pid_t pid, int minor, domid_t domid,
+        int devid, const grant_ref_t * grefs, int order,
+        const evtchn_port_t port, int proto, const char *pool)
+{
+    tapdisk_message_t message;
+    int i, err;
+
+    memset(&message, 0, sizeof(message));
+    message.type = TAPDISK_MESSAGE_XENBLKIF_CONNECT;
+    message.cookie = minor;
+
+    message.u.blkif.domid = domid;
+    message.u.blkif.devid = devid;
+    for (i = 0; i < 1 << order; i++)
+        message.u.blkif.gref[i] = grefs[i];
+    message.u.blkif.order = order;
+    message.u.blkif.port = port;
+    message.u.blkif.proto = proto;
+    if (pool)
+        strncpy(message.u.blkif.pool, pool, sizeof(message.u.blkif.pool) - 1);
+    else
+        message.u.blkif.pool[0] = 0;
+
+    err = tap_ctl_connect_send_and_receive(pid, &message, NULL);
+    if (err)
+        return err;
+
+    if (message.type == TAPDISK_MESSAGE_XENBLKIF_CONNECT_RSP)
+        err = -message.u.response.error;
+    else
+        err = -EINVAL;
+
+    return err;
+}
+
+int tap_ctl_disconnect_xenblkif(pid_t pid, int minor, domid_t domid,
+        int devid, struct timeval *timeout)
+{
+    tapdisk_message_t message;
+    int err;
+
+    memset(&message, 0, sizeof(message));
+    message.type = TAPDISK_MESSAGE_XENBLKIF_DISCONNECT;
+    message.cookie = minor;
+    message.u.blkif.domid = domid;
+    message.u.blkif.devid = devid;
+
+    err = tap_ctl_connect_send_and_receive(pid, &message, timeout);
+    if (err)
+        return err;
+
+    if (message.type == TAPDISK_MESSAGE_XENBLKIF_DISCONNECT_RSP)
+        err = -message.u.response.error;
+    else
+        err = -EINVAL;
+
+    return err;
+}
diff --git a/tools/blktap3/control/tap-ctl-xen.h b/tools/blktap3/control/tap-ctl-xen.h
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-xen.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef __TAP_CTL_XEN_H__
+#define __TAP_CTL_XEN_H__
+
+#include <xen/xen.h>
+#include <xen/grant_table.h>
+#include <xen/event_channel.h>
+
+/**
+ * Instructs a tapdisk to connect to the shared ring.
+ *
+ * TODO further explain parameters
+ *
+ * @param pid the process ID of the tapdisk that should connect to the shared
+ * ring
+ * @param minor the minor number of the virtual block device
+ * @param domid the domain ID of the guest VM
+ * @param devid the device ID
+ * @param grefs the grant references
+ * @param order number of grant references, expressed as a power of two: the
+ * grefs array must contain 1 << order entries
+ * @param port event channel port
+ * @param proto the protocol: native (XENIO_BLKIF_PROTO_NATIVE),
+ * x86 (XENIO_BLKIF_PROTO_X86_32), or x64 (XENIO_BLKIF_PROTO_X86_64)
+ * @param pool TODO page pool?
+ * @returns 0 on success, a negative error code otherwise
+ */
+int tap_ctl_connect_xenblkif(pid_t pid, int minor, domid_t domid, int devid,
+        const grant_ref_t * grefs, int order, evtchn_port_t port, int proto,
+        const char *pool);
+
+/**
+ * Instructs a tapdisk to disconnect from the shared ring.
+ *
+ * @param pid the process ID of the tapdisk that should disconnect
+ * @param minor the minor number of the virtual block device
+ * @param domid the ID of the guest VM
+ * @param devid the device ID of the virtual block device
+ * @param timeout maximum time to wait for the tapdisk to respond; if NULL,
+ * the function waits indefinitely
+ * @returns 0 on success, a negative error code otherwise
+ */
+int tap_ctl_disconnect_xenblkif(pid_t pid, int minor, domid_t domid,
+        int devid, struct timeval *timeout);
+
+#endif /* __TAP_CTL_XEN_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7G-000410-3f; Tue, 04 Dec 2012 18:21:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003yD-HQ
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25645] by server-3.bemta-5.messagelabs.com id
	BB/2D-18736-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!5
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22019 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155750"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: db722120283629dbd51bd92ba0a6ad91c11ce1f2
Message-ID: <db722120283629dbd51b.1354645182@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:42 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 4 of 9 RFC v2] blktap3/libblktapctl: Introduce
 listing running tapdisks functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces tap-ctl-list.c, the file where the functionality for
listing running tapdisks is implemented. It is based on the existing blktap2
file, with most changes coming from the blktap2 fork living in github. I have
not examined those changes in detail, but they appear to be minor.

Function tap_ctl_list needs to be amended because minors are no longer
retrieved from sysfs (there is no blktap sysfs directory any more).
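
The rewrite replaces blktap2's NULL-terminated entry arrays with BSD TAILQ
lists from <sys/queue.h>. Here is a self-contained sketch of the
insert/iterate/free pattern the new code relies on; the names are illustrative
stand-ins, not the actual blktap3 types.

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/queue.h>

/* Illustrative element type; blktap3's tap_list_t also carries pid, state,
 * type and path. */
struct node {
	int minor;
	TAILQ_ENTRY(node) entry;	/* linkage, like tap_list_t's 'entry' */
};

TAILQ_HEAD(node_head, node);

/* Append an element, as TAILQ_INSERT_TAIL does in _tap_ctl_find_tapdisks. */
static struct node *push(struct node_head *head, int minor)
{
	struct node *n = calloc(1, sizeof(*n));

	if (!n)
		return NULL;
	n->minor = minor;
	TAILQ_INSERT_TAIL(head, n, entry);
	return n;
}

/* Remove and free every element; the next pointer is captured before the
 * node is freed, which is the "safe" iteration tap_ctl_list_free needs. */
static int drain(struct node_head *head)
{
	struct node *n, *next;
	int count = 0;

	for (n = TAILQ_FIRST(head); n != NULL; n = next) {
		next = TAILQ_NEXT(n, entry);
		TAILQ_REMOVE(head, n, entry);
		free(n);
		count++;
	}
	return count;
}
```

The same head/entry macros back tap_list_for_each_entry and
tap_list_for_each_entry_safe in the patched file.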

diff --git a/tools/blktap2/control/tap-ctl-list.c b/tools/blktap3/control/tap-ctl-list.c
copy from tools/blktap2/control/tap-ctl-list.c
copy to tools/blktap3/control/tap-ctl-list.c
--- a/tools/blktap2/control/tap-ctl-list.c
+++ b/tools/blktap3/control/tap-ctl-list.c
@@ -34,23 +34,47 @@
 #include <glob.h>
 
 #include "tap-ctl.h"
-#include "blktap2.h"
-#include "list.h"
+#include "blktap3.h"
+
+/**
+ * Allocates and initializes a tap_list_t.
+ */
+static tap_list_t *
+_tap_list_alloc(void)
+{
+	const size_t sz = sizeof(tap_list_t);
+	tap_list_t *tl;
+
+	tl = malloc(sz);
+	if (!tl)
+		return NULL;
+
+	tl->pid = -1;
+	tl->minor = -1;
+	tl->state = -1;
+	tl->type = NULL;
+	tl->path = NULL;
+
+	return tl;
+}
 
 static void
-free_list(tap_list_t *entry)
+_tap_list_free(tap_list_t * tl, struct tqh_tap_list *list)
 {
-	if (entry->type) {
-		free(entry->type);
-		entry->type = NULL;
+	if (tl->type) {
+		free(tl->type);
+		tl->type = NULL;
 	}
 
-	if (entry->path) {
-		free(entry->path);
-		entry->path = NULL;
+	if (tl->path) {
+		free(tl->path);
+		tl->path = NULL;
 	}
 
-	free(entry);
+	if (list)
+		TAILQ_REMOVE(list, tl, entry);
+
+	free(tl);
 }
 
 int
@@ -66,7 +90,7 @@ int
 	len = ptr - params;
 
 	*type = strndup(params, len);
-	*path =  strdup(params + len + 1);
+	*path = strdup(params + len + 1);
 
 	if (!*type || !*path) {
 		free(*type);
@@ -81,102 +105,26 @@ int
 	return 0;
 }
 
-static int
-init_list(tap_list_t *entry,
-	  int tap_id, pid_t tap_pid, int vbd_minor, int vbd_state,
-	  const char *params)
+void
+tap_ctl_list_free(struct tqh_tap_list *list)
 {
-	int err = 0;
+	tap_list_t *tl, *n;
 
-	entry->id     = tap_id;
-	entry->pid    = tap_pid;
-	entry->minor  = vbd_minor;
-	entry->state  = vbd_state;
-
-	if (params)
-		err = _parse_params(params, &entry->type, &entry->path);
-
-	return err;
-}
-
-void
-tap_ctl_free_list(tap_list_t **list)
-{
-	tap_list_t **_entry;
-
-	for (_entry = list; *_entry != NULL; ++_entry)
-		free_list(*_entry);
-
-	free(list);
-}
-
-static tap_list_t**
-tap_ctl_alloc_list(int n)
-{
-	tap_list_t **list, *entry;
-	size_t size;
-	int i;
-
-	size = sizeof(tap_list_t*) * (n+1);
-	list = malloc(size);
-	if (!list)
-		goto fail;
-
-	memset(list, 0, size);
-
-	for (i = 0; i < n; ++i) {
-		tap_list_t *entry;
-
-		entry = malloc(sizeof(tap_list_t));
-		if (!entry)
-			goto fail;
-
-		memset(entry, 0, sizeof(tap_list_t));
-
-		list[i] = entry;
-	}
-
-	return list;
-
-fail:
-	if (list)
-		tap_ctl_free_list(list);
-
-	return NULL;
-}
-
-static int
-tap_ctl_list_length(const tap_list_t **list)
-{
-	const tap_list_t **_entry;
-	int n;
-
-	n = 0;
-	for (_entry = list; *_entry != NULL; ++_entry)
-		n++;
-
-	return n;
-}
-
-static int
-_tap_minor_cmp(const void *a, const void *b)
-{
-	return *(int*)a - *(int*)b;
+	tap_list_for_each_entry_safe(tl, n, list)
+		_tap_list_free(tl, list);
 }
 
 int
-_tap_ctl_find_minors(int **_minorv)
+_tap_ctl_find_tapdisks(struct tqh_tap_list *list)
 {
-	glob_t glbuf = { 0 };
+	glob_t glbuf = { 0 };
 	const char *pattern, *format;
-	int *minorv = NULL, n_minors = 0;
-	int err, i;
+	int err, i, n_taps = 0;
 
-	pattern = BLKTAP2_SYSFS_DIR"/blktap*";
-	format  = BLKTAP2_SYSFS_DIR"/blktap%d";
+	pattern = BLKTAP3_CONTROL_DIR "/" BLKTAP3_CONTROL_SOCKET "*";
+	format = BLKTAP3_CONTROL_DIR "/" BLKTAP3_CONTROL_SOCKET "%d";
 
-	n_minors = 0;
-	minorv   = NULL;
+	TAILQ_INIT(list);
 
 	err = glob(pattern, 0, NULL, &glbuf);
 	switch (err) {
@@ -186,337 +134,231 @@ int
 	case GLOB_ABORTED:
 	case GLOB_NOSPACE:
 		err = -errno;
-		EPRINTF("%s: glob failed, err %d", pattern, err);
-		goto fail;
-	}
-
-	minorv = malloc(sizeof(int) * glbuf.gl_pathc);
-	if (!minorv) {
-		err = -errno;
+		EPRINTF("%s: glob failed: %s", pattern, strerror(-err));
 		goto fail;
 	}
 
 	for (i = 0; i < glbuf.gl_pathc; ++i) {
+		tap_list_t *tl;
 		int n;
 
-		n = sscanf(glbuf.gl_pathv[i], format, &minorv[n_minors]);
+		tl = _tap_list_alloc();
+		if (!tl) {
+			err = -ENOMEM;
+			goto fail;
+		}
+
+		n = sscanf(glbuf.gl_pathv[i], format, &tl->pid);
 		if (n != 1)
-			continue;
+			goto skip;
 
-		n_minors++;
+		tl->pid = tap_ctl_get_pid(tl->pid);
+		if (tl->pid < 0)
+			goto skip;
+
+		TAILQ_INSERT_TAIL(list, tl, entry);
+		n_taps++;
+		continue;
+
+	  skip:
+		_tap_list_free(tl, NULL);
 	}
 
-	qsort(minorv, n_minors, sizeof(int), _tap_minor_cmp);
-
-done:
-	*_minorv = minorv;
+  done:
 	err = 0;
-
-out:
-	if (glbuf.gl_pathv)
-		globfree(&glbuf);
-
-	return err ? : n_minors;
-
-fail:
-	if (minorv)
-		free(minorv);
-
-	goto out;
-}
-
-struct tapdisk {
-	int    id;
-	pid_t  pid;
-	struct list_head list;
-};
-
-static int
-_tap_tapdisk_cmp(const void *a, const void *b)
-{
-	return ((struct tapdisk*)a)->id - ((struct tapdisk*)b)->id;
-}
-
-int
-_tap_ctl_find_tapdisks(struct tapdisk **_tapv)
-{
-	glob_t glbuf = { 0 };
-	const char *pattern, *format;
-	struct tapdisk *tapv = NULL;
-	int err, i, n_taps = 0;
-
-	pattern = BLKTAP2_CONTROL_DIR"/"BLKTAP2_CONTROL_SOCKET"*";
-	format  = BLKTAP2_CONTROL_DIR"/"BLKTAP2_CONTROL_SOCKET"%d";
-
-	n_taps = 0;
-	tapv   = NULL;
-
-	err = glob(pattern, 0, NULL, &glbuf);
-	switch (err) {
-	case GLOB_NOMATCH:
-		goto done;
-
-	case GLOB_ABORTED:
-	case GLOB_NOSPACE:
-		err = -errno;
-		EPRINTF("%s: glob failed, err %d", pattern, err);
-		goto fail;
-	}
-
-	tapv = malloc(sizeof(struct tapdisk) * glbuf.gl_pathc);
-	if (!tapv) {
-		err = -errno;
-		goto fail;
-	}
-
-	for (i = 0; i < glbuf.gl_pathc; ++i) {
-		struct tapdisk *tap;
-		int n;
-
-		tap = &tapv[n_taps];
-
-		err = sscanf(glbuf.gl_pathv[i], format, &tap->id);
-		if (err != 1)
-			continue;
-
-		tap->pid = tap_ctl_get_pid(tap->id);
-		if (tap->pid < 0)
-			continue;
-
-		n_taps++;
-	}
-
-	qsort(tapv, n_taps, sizeof(struct tapdisk), _tap_tapdisk_cmp);
-
-	for (i = 0; i < n_taps; ++i)
-		INIT_LIST_HEAD(&tapv[i].list);
-
-done:
-	*_tapv = tapv;
-	err = 0;
-
-out:
+  out:
 	if (glbuf.gl_pathv)
 		globfree(&glbuf);
 
 	return err ? : n_taps;
 
-fail:
-	if (tapv)
-		free(tapv);
-
+  fail:
+	tap_ctl_list_free(list);
 	goto out;
 }
 
-struct tapdisk_list {
-	int  minor;
-	int  state;
-	char *params;
-	struct list_head entry;
-};
-
-int
-_tap_ctl_list_tapdisk(int id, struct list_head *_list)
+/**
+ * Retrieves all the VBDs a tapdisk is serving.
+ *
+ * @param pid the process ID of the tapdisk whose VBDs should be retrieved
+ * @param list output parameter that receives the list of VBDs
+ * @returns 0 on success, a negative error code otherwise
+ */
+static int
+_tap_ctl_list_tapdisk(pid_t pid, struct tqh_tap_list *list)
 {
+	struct timeval timeout = { .tv_sec = 10, .tv_usec = 0 };
 	tapdisk_message_t message;
-	struct list_head list;
-	struct tapdisk_list *tl, *next;
+	tap_list_t *tl;
 	int err, sfd;
 
-	err = tap_ctl_connect_id(id, &sfd);
+	err = tap_ctl_connect_id(pid, &sfd);
 	if (err)
 		return err;
 
 	memset(&message, 0, sizeof(message));
-	message.type   = TAPDISK_MESSAGE_LIST;
+	message.type = TAPDISK_MESSAGE_LIST;
 	message.cookie = -1;
 
-	err = tap_ctl_write_message(sfd, &message, 2);
+	err = tap_ctl_write_message(sfd, &message, &timeout);
 	if (err)
 		return err;
 
-	INIT_LIST_HEAD(&list);
+	TAILQ_INIT(list);
+
 	do {
-		err = tap_ctl_read_message(sfd, &message, 2);
+		err = tap_ctl_read_message(sfd, &message, &timeout);
 		if (err) {
 			err = -EPROTO;
-			break;
+			goto fail;
 		}
 
 		if (message.u.list.count == 0)
 			break;
 
-		tl = malloc(sizeof(struct tapdisk_list));
+		tl = _tap_list_alloc();
 		if (!tl) {
 			err = -ENOMEM;
-			break;
+			goto fail;
 		}
 
-		tl->minor  = message.u.list.minor;
-		tl->state  = message.u.list.state;
+		tl->pid = pid;
+		tl->minor = message.u.list.minor;
+		tl->state = message.u.list.state;
+
 		if (message.u.list.path[0] != 0) {
-			tl->params = strndup(message.u.list.path,
-					     sizeof(message.u.list.path));
-			if (!tl->params) {
-				err = -errno;
-				break;
+			err = _parse_params(message.u.list.path, &tl->type, &tl->path);
+			if (err) {
+				_tap_list_free(tl, NULL);
+				goto fail;
 			}
-		} else
-			tl->params = NULL;
+		}
 
-		list_add(&tl->entry, &list);
+		TAILQ_INSERT_HEAD(list, tl, entry);
 	} while (1);
 
-	if (err)
-		list_for_each_entry_safe(tl, next, &list, entry) {
-			list_del(&tl->entry);
-			free(tl->params);
-			free(tl);
-		}
+	err = 0;
+  out:
+	close(sfd);
+	return err;
 
-	close(sfd);
-	list_splice(&list, _list);
-	return err;
-}
-
-void
-_tap_ctl_free_tapdisks(struct tapdisk *tapv, int n_taps)
-{
-	struct tapdisk *tap;
-
-	for (tap = tapv; tap < &tapv[n_taps]; ++tap) {
-		struct tapdisk_list *tl, *next;
-
-		list_for_each_entry_safe(tl, next, &tap->list, entry) {
-			free(tl->params);
-			free(tl);
-		}
-	}
-
-	free(tapv);
+  fail:
+	tap_ctl_list_free(list);
+	goto out;
 }
 
 int
-_tap_list_join3(int n_minors, int *minorv, int n_taps, struct tapdisk *tapv,
-		tap_list_t ***_list)
+tap_ctl_list(struct tqh_tap_list *list)
 {
-	tap_list_t **list, **_entry, *entry;
-	int i, _m, err;
+	struct tqh_tap_list minors, tapdisks, vbds;
+	tap_list_t *t, *next_t, *v, *next_v, *m, *next_m;
+	int err;
 
-	list = tap_ctl_alloc_list(n_minors + n_taps);
-	if (!list) {
-		err = -ENOMEM;
+	/*
+	 * Find all minors, find all tapdisks, then list all minors
+	 * they are attached to. Output is a 3-way outer join.
+	 */
+	/*
+	 * Initialise every list up front so that the fail path can free them
+	 * safely regardless of where the failure occurred.
+	 */
+	TAILQ_INIT(&minors);
+	TAILQ_INIT(&tapdisks);
+	TAILQ_INIT(&vbds);
+	TAILQ_INIT(list);
+
+	/*
+	 * TODO There's no blktap sysfs entry anymore, get rid of minors and
+	 * rationalise the rest of this function.
+	 */
+#if 0
+	err = _tap_ctl_find_minors(&minors);
+	if (err < 0) {
+		EPRINTF("error finding minors: %s\n", strerror(-err));
+		goto fail;
+	}
+#endif
+
+	err = _tap_ctl_find_tapdisks(&tapdisks);
+	if (err < 0) {
+		EPRINTF("error finding tapdisks: %s\n", strerror(-err));
 		goto fail;
 	}
 
-	_entry = list;
+	TAILQ_INIT(list);
 
-	for (i = 0; i < n_taps; ++i) {
-		struct tapdisk *tap = &tapv[i];
-		struct tapdisk_list *tl;
+	tap_list_for_each_entry_safe(t, next_t, &tapdisks) {
 
-		/* orphaned tapdisk */
-		if (list_empty(&tap->list)) {
-			err = init_list(*_entry++, tap->id, tap->pid, -1, -1, NULL);
-			if (err)
-				goto fail;
+		err = _tap_ctl_list_tapdisk(t->pid, &vbds);
+
+		/*
+		 * TODO Don't just swallow the error, print a warning, at least.
+		 */
+		if (err || TAILQ_EMPTY(&vbds)) {
+			TAILQ_MOVE_TAIL(t, &tapdisks, list, entry);
 			continue;
 		}
 
-		list_for_each_entry(tl, &tap->list, entry) {
+		tap_list_for_each_entry_safe(v, next_v, &vbds) {
 
-			err = init_list(*_entry++,
-					tap->id, tap->pid,
-					tl->minor, tl->state, tl->params);
-			if (err)
-				goto fail;
+			tap_list_for_each_entry_safe(m, next_m, &minors)
+				if (m->minor == v->minor) {
+					_tap_list_free(m, &minors);
+					break;
+				}
 
-			if (tl->minor >= 0) {
-				/* clear minor */
-				for (_m = 0; _m < n_minors; ++_m) {
-					if (minorv[_m] == tl->minor) {
-						minorv[_m] = -1;
-						break;
-					}
-				}
-			}
+			TAILQ_MOVE_TAIL(v, &vbds, list, entry);
 		}
+
+		_tap_list_free(t, &tapdisks);
 	}
 
 	/* orphaned minors */
-	for (_m = 0; _m < n_minors; ++_m) {
-		int minor = minorv[_m];
-		if (minor >= 0) {
-			err = init_list(*_entry++, -1, -1, minor, -1, NULL);
-			if (err)
-				goto fail;
-		}
-	}
-
-	/* free extraneous list entries */
-	for (; *_entry != NULL; ++entry) {
-		free_list(*_entry);
-		*_entry = NULL;
-	}
-
-	*_list = list;
+	TAILQ_CONCAT(list, &minors, entry);
 
 	return 0;
 
 fail:
-	if (list)
-		tap_ctl_free_list(list);
+	tap_ctl_list_free(list);
+
+	tap_ctl_list_free(&vbds);
+	tap_ctl_list_free(&tapdisks);
+	tap_ctl_list_free(&minors);
 
 	return err;
 }
 
 int
-tap_ctl_list(tap_list_t ***list)
+tap_ctl_list_pid(pid_t pid, struct tqh_tap_list *list)
 {
-	int n_taps, n_minors, err, *minorv;
-	struct tapdisk *tapv, *tap;
+	tap_list_t *t;
+	int err;
 
-	n_taps   = -1;
-	n_minors = -1;
+	t = _tap_list_alloc();
+	if (!t)
+		return -ENOMEM;
 
-	err = n_minors = _tap_ctl_find_minors(&minorv);
-	if (err < 0)
-		goto out;
-
-	err = n_taps = _tap_ctl_find_tapdisks(&tapv);
-	if (err < 0)
-		goto out;
-
-	for (tap = tapv; tap < &tapv[n_taps]; ++tap) {
-		err = _tap_ctl_list_tapdisk(tap->id, &tap->list);
-		if (err)
-			goto out;
+	t->pid = tap_ctl_get_pid(pid);
+	if (t->pid < 0) {
+		_tap_list_free(t, NULL);
+		return 0;
 	}
 
-	err = _tap_list_join3(n_minors, minorv, n_taps, tapv, list);
+	err = _tap_ctl_list_tapdisk(t->pid, list);
 
-out:
-	if (n_taps > 0)
-		_tap_ctl_free_tapdisks(tapv, n_taps);
+	if (err || TAILQ_EMPTY(list))
+		TAILQ_INSERT_TAIL(list, t, entry);
+	else
+		_tap_list_free(t, NULL);
 
-	if (n_minors > 0)
-		free(minorv);
-
-	return err;
+	return 0;
 }
 
 int
-tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
+tap_ctl_find_minor(const char *type, const char *path)
 {
-	tap_list_t **list, **_entry;
-	int ret = -ENOENT, err;
+	struct tqh_tap_list list = TAILQ_HEAD_INITIALIZER(list);
+	tap_list_t *entry;
+	int minor, err;
 
 	err = tap_ctl_list(&list);
 	if (err)
 		return err;
 
-	for (_entry = list; *_entry != NULL; ++_entry) {
-		tap_list_t *entry  = *_entry;
+	minor = -1;
+
+	tap_list_for_each_entry(entry, &list) {
 
 		if (type && (!entry->type || strcmp(entry->type, type)))
 			continue;
@@ -524,13 +366,11 @@ tap_ctl_find(const char *type, const cha
 		if (path && (!entry->path || strcmp(entry->path, path)))
 			continue;
 
-		*tap = *entry;
-		tap->type = tap->path = NULL;
-		ret = 0;
+		minor = entry->minor;
 		break;
 	}
 
-	tap_ctl_free_list(list);
+	tap_ctl_list_free(&list);
 
-	return ret;
+	return minor >= 0 ? minor : -ENOENT;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7D-0003zI-OU; Tue, 04 Dec 2012 18:21:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7B-0003xs-Kd
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:05 +0000
Received: from [85.158.139.83:25591] by server-8.bemta-5.messagelabs.com id
	BF/BB-06050-01F3EB05; Tue, 04 Dec 2012 18:21:04 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21978 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155745"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:03 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:03 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:38 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 0 of 9 RFC v2] blktap3: Introduce a small subset
 of blktap3 files
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

blktap3 is a disk back-end driver. It is based on blktap2 but does not require
the blktap/blkback kernel modules, as it allows tapdisk to talk directly to
blkfront. This primarily simplifies maintenance, and _may_ lead to performance
improvements. blktap3 is based on a blktap2 fork maintained mostly by Citrix
(hosted on github), so changes from that fork are imported as well, in
addition to the blktap3-specific ones.

I've organised my upstream effort as follows:
1. Upstream the smallest possible subset of blktap3 that will allow guest VMs
   to use RAW images backed by blktap3. This will enable early testing on the
   bits introduced by blktap3.
2. Upstream the remainder of blktap3, most notably the back-end drivers, e.g.
   VHD.
3. Import bug fixes from blktap2 living in github.
4. Import new features and optimisations from blktap2 living in github, e.g.
   the mirroring plug-in.

blktap3 is made of the following components:
1. blkfront (not a blktap3 component and already upstream): a virtual
   block device driver in the guest VM that receives block I/O requests and
   forwards them to tapdisk via the shared ring.
2. tapdisk: a user space process that receives block I/O requests from
   blkfront, translates them to whatever the current backing file format is
   (e.g. RAW, VHD, qcow), and performs the actual I/O. Apart from block
   I/O requests, tapdisk also allows basic management of each virtual
   block device, e.g. a device may be temporarily paused. tapdisk listens on
   a loopback socket for such commands. The tap-ctl utility (explained later)
   can be used to manage the tapdisk.
3. libtapback: a user space library that implements the functionality required
   to access the shared ring. It is used by tapdisk to obtain the block I/O
   requests forwarded by blkfront, and to produce the corresponding responses.
   This is the very "heart" of blktap3; its architecture will be thoroughly
   explained by the patch series that introduces it.
4. tapback: a user space daemon that acts as the back-end of each virtual
   block device: it monitors XenStore for the block front-end's state changes,
   creates/destroys the shared ring, and instructs the tapdisk to connect
   to/disconnect from the shared ring. It also communicates to the block
   front-end required parameters (e.g. block device size in sectors) via
   XenStore.
5. libblktapctl: a user space library where the tapdisk management functions
   are implemented.
6. tap-ctl: a user space utility that allows management of the tapdisk; it
   uses libblktapctl.
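
As a rough illustration of how libblktapctl discovers running tapdisks, here
is a self-contained sketch of the glob-plus-sscanf pattern used by the listing
code to find per-tapdisk control sockets. The /tmp paths and names below are
made up for the example; the real pattern is built from BLKTAP3_CONTROL_DIR
and BLKTAP3_CONTROL_SOCKET.

```c
#include <assert.h>
#include <glob.h>
#include <stdio.h>

/* Scan 'pattern' (e.g. "/some/dir/ctl*") and parse the numeric suffix of
 * each match with 'format' (e.g. "/some/dir/ctl%d"), as
 * _tap_ctl_find_tapdisks does. Stores up to 'max' ids in 'ids' and returns
 * how many were found, or -1 on a glob error. */
static int scan_ids(const char *pattern, const char *format, int *ids, int max)
{
	glob_t glbuf;
	size_t i;
	int n = 0, err;

	err = glob(pattern, 0, NULL, &glbuf);
	if (err == GLOB_NOMATCH) {
		globfree(&glbuf);
		return 0;
	}
	if (err)
		return -1;

	for (i = 0; i < glbuf.gl_pathc && n < max; i++) {
		int id;

		/* Entries without a numeric suffix fail the parse and are
		 * skipped, just like stray files in the control directory. */
		if (sscanf(glbuf.gl_pathv[i], format, &id) == 1)
			ids[n++] = id;
	}

	globfree(&glbuf);
	return n;
}
```

In blktap3 the parsed number is the tapdisk's pid, which is then validated
with tap_ctl_get_pid before the entry is kept.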

The tapdisk is spawned/destroyed by libxl when a domain is created/destroyed,
in the exact same way as in blktap2. libxl uses libblktapctl for this.

This patch series introduces a small subset of files required by tapback (the
tapback daemon is introduced by the next patch series):
- basic blktap3 header files
- a rudimentary implementation of libblktapctl. Only the bits required by
  tapback to manage the tapdisk are introduced; the rest of the library will
  be introduced by later patches.

---
Changed since v1:
* In all patches the patch message has been improved.
* Patches 1, 5, and 6 use GPLv2.
* Patch 0: Basic explanation of blktap3's fundamental components.
* Patch 9: Improved tools/blktap3/control/Makefile by moving hard coded
  paths to config/StdGNU.mk.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7F-00040L-Bp; Tue, 04 Dec 2012 18:21:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003yJ-J2
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:25650] by server-6.bemta-5.messagelabs.com id
	8C/FE-19321-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!6
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22036 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155751"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: 59bb0802d11ee95e4a08e7a72ff49ecfa98cfc30
Message-ID: <59bb0802d11ee95e4a08.1354645183@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:43 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 5 of 9 RFC v2] blktap3/libblktapctl: Introduce
 functionality used by tapback to instruct tapdisk to connect to the sring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the functions tap_ctl_connect_xenblkif and
tap_ctl_disconnect_xenblkif, which are used by the tapback daemon to instruct a
running tapdisk process to connect to or disconnect from the shared ring.

diff --git a/tools/blktap3/control/tap-ctl-xen.c b/tools/blktap3/control/tap-ctl-xen.c
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-xen.c
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "tap-ctl.h"
+#include "tap-ctl-xen.h"
+
+int tap_ctl_connect_xenblkif(pid_t pid, int minor, domid_t domid,
+        int devid, const grant_ref_t * grefs, int order,
+        const evtchn_port_t port, int proto, const char *pool)
+{
+    tapdisk_message_t message;
+    int i, err;
+
+    memset(&message, 0, sizeof(message));
+    message.type = TAPDISK_MESSAGE_XENBLKIF_CONNECT;
+    message.cookie = minor;
+
+    message.u.blkif.domid = domid;
+    message.u.blkif.devid = devid;
+    for (i = 0; i < 1 << order; i++)
+        message.u.blkif.gref[i] = grefs[i];
+    message.u.blkif.order = order;
+    message.u.blkif.port = port;
+    message.u.blkif.proto = proto;
+    if (pool)
+        snprintf(message.u.blkif.pool, sizeof(message.u.blkif.pool), "%s", pool);
+    else
+        message.u.blkif.pool[0] = 0;
+
+    err = tap_ctl_connect_send_and_receive(pid, &message, NULL);
+    if (err)
+        return err;
+
+    if (message.type == TAPDISK_MESSAGE_XENBLKIF_CONNECT_RSP)
+        err = -message.u.response.error;
+    else
+        err = -EINVAL;
+
+    return err;
+}
+
+int tap_ctl_disconnect_xenblkif(pid_t pid, int minor, domid_t domid,
+        int devid, struct timeval *timeout)
+{
+    tapdisk_message_t message;
+    int err;
+
+    memset(&message, 0, sizeof(message));
+    message.type = TAPDISK_MESSAGE_XENBLKIF_DISCONNECT;
+    message.cookie = minor;
+    message.u.blkif.domid = domid;
+    message.u.blkif.devid = devid;
+
+    err = tap_ctl_connect_send_and_receive(pid, &message, timeout);
+    if (message.type == TAPDISK_MESSAGE_XENBLKIF_DISCONNECT_RSP)
+        err = -message.u.response.error;
+    else
+        err = -EINVAL;
+
+    return err;
+}
diff --git a/tools/blktap3/control/tap-ctl-xen.h b/tools/blktap3/control/tap-ctl-xen.h
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-xen.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef __TAP_CTL_XEN_H__
+#define __TAP_CTL_XEN_H__
+
+#include <xen/xen.h>
+#include <xen/grant_table.h>
+#include <xen/event_channel.h>
+
+/**
+ * Instructs a tapdisk to connect to the shared ring.
+ *
+ * TODO further explain parameters
+ *
+ * @param pid the process ID of the tapdisk that should connect to the shared
+ * ring
+ * @param minor the minor number of the virtual block device
+ * @param domid the domain ID of the guest VM
+ * @param devid the device ID
+ * @param grefs the grant references
+ * @param order log2 of the number of grant references (1 << order in total)
+ * @param port event channel port
+ * @param proto the protocol: native (XENIO_BLKIF_PROTO_NATIVE),
+ * x86 (XENIO_BLKIF_PROTO_X86_32), or x64 (XENIO_BLKIF_PROTO_X86_64)
+ * @param pool TODO page pool?
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_connect_xenblkif(pid_t pid, int minor, domid_t domid, int devid,
+        const grant_ref_t * grefs, int order, evtchn_port_t port, int proto,
+        const char *pool);
+
+/**
+ * Instructs a tapdisk to disconnect from the shared ring.
+ *
+ * @param pid the process ID of the tapdisk that should disconnect
+ * @param minor the minor number of the virtual block device
+ * @param domid the ID of the guest VM
+ * @param devid the device ID of the virtual block device
+ * @param timeout maximum time to wait; if NULL, the function waits indefinitely
+ */
+int tap_ctl_disconnect_xenblkif(pid_t pid, int minor, domid_t domid,
+        int devid, struct timeval *timeout);
+
+#endif /* __TAP_CTL_XEN_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7G-00041Q-FK; Tue, 04 Dec 2012 18:21:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003yY-T2
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:07 +0000
Received: from [85.158.143.99:7984] by server-1.bemta-4.messagelabs.com id
	0A/2A-27934-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354645265!22796529!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18982 invoked from network); 4 Dec 2012 18:21:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155752"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: 56108441d0c306084c1d91b48bbdb5d875d68732
Message-ID: <56108441d0c306084c1d.1354645184@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:44 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 6 of 9 RFC v2] blktap3/libblktapctl: Introduce
 block device control information retrieval functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the function tap_ctl_info. It is used by the tapback
daemon to retrieve the size of the block device and its sector size, in order
to communicate them to blkfront so that it can create the virtual block device.

diff --git a/tools/blktap3/control/tap-ctl-info.c b/tools/blktap3/control/tap-ctl-info.c
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-info.c
@@ -0,0 +1,47 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "tap-ctl.h"
+#include "tap-ctl-info.h"
+
+int tap_ctl_info(pid_t pid, int minor, unsigned long long *sectors,
+        unsigned int *sector_size, unsigned int *info)
+{
+    tapdisk_message_t message;
+    int err;
+
+    memset(&message, 0, sizeof(message));
+    message.type = TAPDISK_MESSAGE_DISK_INFO;
+    message.cookie = minor;
+
+    err = tap_ctl_connect_send_and_receive(pid, &message, NULL);
+    if (err)
+        return err;
+
+    if (message.type != TAPDISK_MESSAGE_DISK_INFO_RSP)
+        return -EINVAL;
+
+    *sectors = message.u.image.sectors;
+    *sector_size = message.u.image.sector_size;
+    *info = message.u.image.info;
+    return err;
+}
diff --git a/tools/blktap3/control/tap-ctl-info.h b/tools/blktap3/control/tap-ctl-info.h
new file mode 100644
--- /dev/null
+++ b/tools/blktap3/control/tap-ctl-info.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012      Citrix Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * 
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef __TAP_CTL_INFO_H__
+#define __TAP_CTL_INFO_H__
+
+/**
+ * Retrieves virtual disk information from a tapdisk.
+ *
+ * @param pid the process ID of the tapdisk process
+ * @param minor the minor device number
+ * @param sectors output parameter that receives the number of sectors
+ * @param sector_size output parameter that receives the sector size in bytes
+ * @info TODO ?
+ * 
+ */
+int tap_ctl_info(pid_t pid, int minor, unsigned long long *sectors,
+        unsigned int *sector_size, unsigned int *info);
+
+#endif /* __TAP_CTL_INFO_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7E-000404-VD; Tue, 04 Dec 2012 18:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7C-0003xs-5A
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:06 +0000
Received: from [85.158.139.83:44191] by server-8.bemta-5.messagelabs.com id
	12/CB-06050-11F3EB05; Tue, 04 Dec 2012 18:21:05 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354645263!28348593!4
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22012 invoked from network); 4 Dec 2012 18:21:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155748"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:04 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:04 +0000
MIME-Version: 1.0
X-Mercurial-Node: e6f891ddab4b19fc25cdff3c6ae8337dfb706662
Message-ID: <e6f891ddab4b19fc25cd.1354645181@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:41 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 3 of 9 RFC v2] blktap3/libblktapctl: Introduce
 the tapdisk control header
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the header file where tapdisk control-related structures
and functions are declared. The file is based on the existing blktap2 header,
with most changes coming from the blktap2 version living on GitHub. Linux lists
are replaced by BSD tail queues. A few functions are partly documented and most
are not documented at all; this will be addressed by a future patch.

diff --git a/tools/blktap2/control/tap-ctl.h b/tools/blktap3/control/tap-ctl.h
copy from tools/blktap2/control/tap-ctl.h
copy to tools/blktap3/control/tap-ctl.h
--- a/tools/blktap2/control/tap-ctl.h
+++ b/tools/blktap3/control/tap-ctl.h
@@ -30,72 +30,180 @@
 
 #include <syslog.h>
 #include <errno.h>
+#include <sys/time.h>
 #include <tapdisk-message.h>
+#include "blktap3.h"
 
+/*
+ * TODO These are private, move to an internal header.
+ */
 extern int tap_ctl_debug;
 
 #ifdef TAPCTL
-#define DBG(_f, _a...)				\
-	do {					\
+#define DBG(_f, _a...)			\
+	do {						\
 		if (tap_ctl_debug)		\
 			printf(_f, ##_a);	\
 	} while (0)
 
 #define DPRINTF(_f, _a...) syslog(LOG_INFO, _f, ##_a)
 #define EPRINTF(_f, _a...) syslog(LOG_ERR, "tap-err:%s: " _f, __func__, ##_a)
-#define  PERROR(_f, _a...) syslog(LOG_ERR, "tap-err:%s: " _f ": %s", __func__, ##_a, \
-				  strerror(errno))
+#define PERROR(_f, _a...) syslog(LOG_ERR, "tap-err:%s: " _f ": %s", \
+        __func__, ##_a, strerror(errno))
 #endif
 
-void tap_ctl_version(int *major, int *minor);
-int tap_ctl_kernel_version(int *major, int *minor);
+/**
+ * Contains information about a tapdisk process.
+ */
+typedef struct tap_list {
 
-int tap_ctl_check_blktap(const char **message);
-int tap_ctl_check_version(const char **message);
-int tap_ctl_check(const char **message);
+    /**
+     * The process ID.
+     */
+    pid_t pid;
 
+    /**
+     * TODO
+     */
+    int minor;
+
+    /**
+     * State of the VBD, specified in drivers/tapdisk-vbd.h.
+     */
+    int state;
+
+    /**
+     * TODO
+     */
+    char *type;
+
+    /**
+     * /path/to/file
+     */
+    char *path;
+
+    /**
+     * for linked lists
+     */
+    TAILQ_ENTRY(tap_list) entry;
+} tap_list_t;
+
+TAILQ_HEAD(tqh_tap_list, tap_list);
+
+/**
+ * Iterate over a list of struct tap_list elements.
+ */
+#define tap_list_for_each_entry(_pos, _head) \
+    TAILQ_FOREACH(_pos, _head, entry)
+
+/**
+ * Iterate over a list of struct tap_list elements allowing deletions without
+ * having to restart the iteration.
+ */
+#define tap_list_for_each_entry_safe(_pos, _n, _head) \
+    TAILQ_FOREACH_SAFE(_pos, _head, entry, _n)
+
+/**
+ * Connects to a tapdisk.
+ *
+ * @param path path of the control socket (e.g.
+ * /var/run/blktap-control/ctl/<pid>)
+ * @param socket output parameter that receives the connection
+ * @returns 0 on success, an error code otherwise
+ */
 int tap_ctl_connect(const char *path, int *socket);
-int tap_ctl_connect_id(int id, int *socket);
-int tap_ctl_read_message(int fd, tapdisk_message_t *message, int timeout);
-int tap_ctl_write_message(int fd, tapdisk_message_t *message, int timeout);
-int tap_ctl_send_and_receive(int fd, tapdisk_message_t *message, int timeout);
-int tap_ctl_connect_send_and_receive(int id,
-				     tapdisk_message_t *message, int timeout);
+
+/**
+ * Connects to a tapdisk.
+ *
+ * @param id the process ID of the tapdisk to connect to
+ * @param socket output parameter that receives the connection
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_connect_id(const int id, int *socket);
+
+/**
+ * Reads from the tapdisk connection to the buffer.
+ *
+ * @param fd the file descriptor of the socket to read from
+ * @param buf buffer that receives the output
+ * @param sz size, in bytes, of the buffer
+ * @param timeout (optional) specifies the maximum time to wait for reading
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_read_raw(const int fd, void *buf, const size_t sz,
+        struct timeval *timeout);
+
+int tap_ctl_read_message(int fd, tapdisk_message_t * message,
+        struct timeval *timeout);
+
+int tap_ctl_write_message(int fd, tapdisk_message_t * message,
+        struct timeval *timeout);
+
+int tap_ctl_send_and_receive(int fd, tapdisk_message_t * message,
+        struct timeval *timeout);
+
+int tap_ctl_connect_send_and_receive(int id, tapdisk_message_t * message,
+        struct timeval *timeout);
+
 char *tap_ctl_socket_name(int id);
 
-typedef struct {
-	int         id;
-	pid_t       pid;
-	int         minor;
-	int         state;
-	char       *type;
-	char       *path;
-} tap_list_t;
+int tap_ctl_list_pid(pid_t pid, struct tqh_tap_list *list);
 
-int tap_ctl_get_driver_id(const char *handle);
+/**
+ * Retrieves a list of all tapdisks.
+ *
+ * @param list output parameter that receives the list of tapdisks
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_list(struct tqh_tap_list *list);
 
-int tap_ctl_list(tap_list_t ***list);
-void tap_ctl_free_list(tap_list_t **list);
-int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
+/**
+ * Deallocates a list of struct tap_list.
+ *
+ * @param list the tapdisk information structure to deallocate.
+ */
+void tap_ctl_list_free(struct tqh_tap_list *list);
 
-int tap_ctl_allocate(int *minor, char **devname);
-int tap_ctl_free(const int minor);
+/**
+ * Creates a tapdisk process.
+ *
+ * TODO document parameters
+ * @param params
+ * @param flags
+ * @param prt_minor
+ * @param secondary
+ * @returns 0 on success, an error code otherwise
+ */
+int tap_ctl_create(const char *params, int flags, int prt_minor,
+        char *secondary);
 
-int tap_ctl_create(const char *params, char **devname);
-int tap_ctl_destroy(const int id, const int minor);
+int tap_ctl_destroy(const int id, const int minor, int force,
+        struct timeval *timeout);
 
+/*
+ * TODO The following functions are not currently used by anything else
+ * other than the tapdisk itself. Move to a private header?
+ */
 int tap_ctl_spawn(void);
 pid_t tap_ctl_get_pid(const int id);
 
 int tap_ctl_attach(const int id, const int minor);
 int tap_ctl_detach(const int id, const int minor);
 
-int tap_ctl_open(const int id, const int minor, const char *params);
-int tap_ctl_close(const int id, const int minor, const int force);
+int tap_ctl_open(const int id, const int minor, const char *params,
+        int flags, const int prt_minor, const char *secondary);
 
-int tap_ctl_pause(const int id, const int minor);
+int tap_ctl_close(const int id, const int minor, const int force,
+        struct timeval *timeout);
+
+int tap_ctl_pause(const int id, const int minor, struct timeval 
+        *timeout);
+
 int tap_ctl_unpause(const int id, const int minor, const char *params);
 
-int tap_ctl_blk_major(void);
+ssize_t tap_ctl_stats(pid_t pid, int minor, char *buf, size_t size); 
 
-#endif
+int tap_ctl_stats_fwrite(pid_t pid, int minor, FILE * out);
+
+#endif /* __TAP_CTL_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7H-00042x-KL; Tue, 04 Dec 2012 18:21:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7D-0003yk-Ew
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:07 +0000
Received: from [85.158.139.211:29659] by server-12.bemta-5.messagelabs.com id
	9C/B7-02886-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354645265!19057626!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8067 invoked from network); 4 Dec 2012 18:21:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155756"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:05 +0000
MIME-Version: 1.0
X-Mercurial-Node: fbf47a5d98ffe0e4430fa59c50d26ef71fd89e92
Message-ID: <fbf47a5d98ffe0e4430f.1354645187@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:47 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 9 of 9 RFC v2] blktap3/libblktapctl: Introduce
 makefile that builds tapback-required libblktapctl functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch imports control/Makefile from the existing blktap2 implementation,
building only the libblktapctl functionality required by the tapback daemon.
The rest of the binaries/functionality will be introduced by a later patch.

The TAPDISK_EXEC, TAPDISK_EXECDIR, and TAPDISK_BUILDDIR defines are used by
tap-ctl-spawn, which needs to know where the tapdisk binary is located in order
to spawn a tapdisk process. The variables TAPDISK_EXEC and TAPDISK_EXECDIR are
declared in config/StdGNU.mk and end up as #define's via buildmakevars2file,
just as libxl does.

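The _paths.h rule converts each VAR="value" line that buildmakevars2file emits
into a C #define using sed. A stand-alone illustration of that substitution
(the variable values here are made up):

```shell
# buildmakevars2file writes lines of the form VAR="value" ...
printf 'TAPDISK_EXEC="tapdisk"\nTAPDISK_EXECDIR="/usr/lib/xen/bin"\n' > _paths.h.tmp
# ... which sed rewrites into #define lines for _paths.h.
sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp
rm -f _paths.h.tmp
```

This prints:

    #define TAPDISK_EXEC "tapdisk"
    #define TAPDISK_EXECDIR "/usr/lib/xen/bin"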
diff --git a/Config.mk b/Config.mk
--- a/Config.mk
+++ b/Config.mk
@@ -147,7 +147,7 @@ define buildmakevars2file-closure
 	$(foreach var,                                                      \
 	          SBINDIR BINDIR LIBEXEC LIBDIR SHAREDIR PRIVATE_BINDIR     \
 	          XENFIRMWAREDIR XEN_CONFIG_DIR XEN_SCRIPT_DIR XEN_LOCK_DIR \
-	          XEN_RUN_DIR XEN_PAGING_DIR,                               \
+	          XEN_RUN_DIR XEN_PAGING_DIR TAPDISK_EXEC TAPDISK_EXECDIR,  \
 	          echo "$(var)=\"$($(var))\"" >>$(1).tmp;)        \
 	$(call move-if-changed,$(1).tmp,$(1))
 endef
diff --git a/config/StdGNU.mk b/config/StdGNU.mk
--- a/config/StdGNU.mk
+++ b/config/StdGNU.mk
@@ -77,3 +77,6 @@ ifeq ($(lto),y)
 CFLAGS += -flto
 LDFLAGS-$(clang) += -plugin LLVMgold.so
 endif
+
+TAPDISK_EXEC = "tapdisk"
+TAPDISK_EXECDIR = $(LIBEXEC)
diff --git a/tools/blktap2/control/Makefile b/tools/blktap3/control/Makefile
copy from tools/blktap2/control/Makefile
copy to tools/blktap3/control/Makefile
--- a/tools/blktap2/control/Makefile
+++ b/tools/blktap3/control/Makefile
@@ -6,40 +6,45 @@ MINOR              = 0
 LIBNAME            = libblktapctl
 LIBSONAME          = $(LIBNAME).so.$(MAJOR)
 
-IBIN               = tap-ctl
+genpath-target = $(call buildmakevars2file,_paths.h.tmp)
+$(eval $(genpath-target))
 
-CFLAGS            += -Werror
-CFLAGS            += -Wno-unused
-CFLAGS            += -I../include -I../drivers
-CFLAGS            += $(CFLAGS_xeninclude)
-CFLAGS            += $(CFLAGS_libxenctrl)
-CFLAGS            += -D_GNU_SOURCE
-CFLAGS            += -DTAPCTL
+_paths.h: genpath
+	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
+	rm -f $@.tmp
+	$(call move-if-changed,$@.2.tmp,$@)
 
-CTL_OBJS  := tap-ctl-ipc.o
+override CFLAGS += \
+	-I../include \
+	-DTAPDISK_BUILDDIR='"../drivers"' \
+	$(CFLAGS_xeninclude) \
+	$(CFLAGS_libxenctrl) \
+	-D_GNU_SOURCE \
+	-DTAPCTL \
+    -Wall \
+    -Wextra \
+    -Werror
+
+# FIXME These warnings cause trouble, so disable them for now.
+override CFLAGS += \
+    -Wno-type-limits \
+    -Wno-missing-field-initializers \
+    -Wno-sign-compare
+
 CTL_OBJS  += tap-ctl-list.o
-CTL_OBJS  += tap-ctl-allocate.o
-CTL_OBJS  += tap-ctl-free.o
-CTL_OBJS  += tap-ctl-create.o
-CTL_OBJS  += tap-ctl-destroy.o
+CTL_OBJS  += tap-ctl-info.o
+CTL_OBJS  += tap-ctl-xen.o
+CTL_OBJS  += tap-ctl-ipc.o
 CTL_OBJS  += tap-ctl-spawn.o
-CTL_OBJS  += tap-ctl-attach.o
-CTL_OBJS  += tap-ctl-detach.o
-CTL_OBJS  += tap-ctl-open.o
-CTL_OBJS  += tap-ctl-close.o
-CTL_OBJS  += tap-ctl-pause.o
-CTL_OBJS  += tap-ctl-unpause.o
-CTL_OBJS  += tap-ctl-major.o
-CTL_OBJS  += tap-ctl-check.o
+
+tap-ctl-spawn.o: _paths.h
 
 CTL_PICS  = $(patsubst %.o,%.opic,$(CTL_OBJS))
 
-OBJS = $(CTL_OBJS) tap-ctl.o
 PICS = $(CTL_PICS)
 
 LIB_STATIC = $(LIBNAME).a
 LIB_SHARED = $(LIBSONAME).$(MINOR)
-IBIN = tap-ctl
 
 all: build
 
@@ -51,27 +56,24 @@ build: $(IBIN) $(LIB_STATIC) $(LIB_SHARE
 $(LIBSONAME): $(LIB_SHARED)
 	ln -sf $< $@
 
-tap-ctl: tap-ctl.o $(LIBNAME).so
-	$(CC) $(LDFLAGS) -o $@ $^
-
 $(LIB_STATIC): $(CTL_OBJS)
 	$(AR) r $@ $^
 
 $(LIB_SHARED): $(CTL_PICS)
 	$(CC) $(LDFLAGS) -fPIC  -Wl,$(SONAME_LDFLAG) -Wl,$(LIBSONAME) $(SHLIB_LDFLAGS) -rdynamic $^ -o $@
 
-install: $(IBIN) $(LIB_STATIC) $(LIB_SHARED)
+install: $(LIB_STATIC) $(LIB_SHARED)
 	$(INSTALL_DIR) -p $(DESTDIR)$(SBINDIR)
-	$(INSTALL_PROG) $(IBIN) $(DESTDIR)$(SBINDIR)
 	$(INSTALL_DATA) $(LIB_STATIC) $(DESTDIR)$(LIBDIR)
 	$(INSTALL_PROG) $(LIB_SHARED) $(DESTDIR)$(LIBDIR)
 	ln -sf $(LIBSONAME) $(DESTDIR)$(LIBDIR)/$(LIBNAME).so
 	ln -sf $(LIB_SHARED) $(DESTDIR)$(LIBDIR)/$(LIBSONAME)
 
 clean:
-	rm -f $(OBJS) $(PICS) $(DEPS) $(IBIN) $(LIB_STATIC) $(LIB_SHARED)
+	rm -f $(CTL_OBJS) $(PICS) $(DEPS) $(LIB_STATIC) $(LIB_SHARED)
 	rm -f $(LIBNAME).so $(LIBSONAME)
 	rm -f *~
+	rm -f _paths.h
 
 .PHONY: all build clean install
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx7H-00042x-KL; Tue, 04 Dec 2012 18:21:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Tfx7D-0003yk-Ew
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:21:07 +0000
Received: from [85.158.139.211:29659] by server-12.bemta-5.messagelabs.com id
	9C/B7-02886-21F3EB05; Tue, 04 Dec 2012 18:21:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354645265!19057626!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8067 invoked from network); 4 Dec 2012 18:21:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:21:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,215,1355097600"; d="scan'208";a="16155756"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:21:05 +0000
Received: from [127.0.1.1] (10.80.3.216) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Tue, 4 Dec 2012
	18:21:05 +0000
MIME-Version: 1.0
X-Mercurial-Node: fbf47a5d98ffe0e4430fa59c50d26ef71fd89e92
Message-ID: <fbf47a5d98ffe0e4430f.1354645187@makatos-desktop>
In-Reply-To: <patchbomb.1354645178@makatos-desktop>
References: <patchbomb.1354645178@makatos-desktop>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 4 Dec 2012 18:19:47 +0000
From: Thanos Makatos <thanos.makatos@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: thanos.makatos@citrix.com
Subject: [Xen-devel] [PATCH 9 of 9 RFC v2] blktap3/libblktapctl: Introduce
 makefile that builds tapback-required libblktapctl functionality
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch imports control/Makefile from the existing blktap2 implementation,
building only the libblktapctl functionality required by the tapback daemon.
The rest of the binaries/functionality will be introduced by a later patch.

The defines TAPDISK_EXEC, TAPDISK_EXECDIR, and TAPDISK_BUILDDIR are used by
tap-ctl-spawn, which needs to know where the tapdisk binary is located in order
to spawn a tapdisk process. The variables TAPDISK_EXEC and TAPDISK_EXECDIR are
declared in config/StdGNU.mk and end up as #define's via buildmakevars2file,
just as libxl does.

diff --git a/Config.mk b/Config.mk
--- a/Config.mk
+++ b/Config.mk
@@ -147,7 +147,7 @@ define buildmakevars2file-closure
 	$(foreach var,                                                      \
 	          SBINDIR BINDIR LIBEXEC LIBDIR SHAREDIR PRIVATE_BINDIR     \
 	          XENFIRMWAREDIR XEN_CONFIG_DIR XEN_SCRIPT_DIR XEN_LOCK_DIR \
-	          XEN_RUN_DIR XEN_PAGING_DIR,                               \
+	          XEN_RUN_DIR XEN_PAGING_DIR TAPDISK_EXEC TAPDISK_EXECDIR,  \
 	          echo "$(var)=\"$($(var))\"" >>$(1).tmp;)        \
 	$(call move-if-changed,$(1).tmp,$(1))
 endef
diff --git a/config/StdGNU.mk b/config/StdGNU.mk
--- a/config/StdGNU.mk
+++ b/config/StdGNU.mk
@@ -77,3 +77,6 @@ ifeq ($(lto),y)
 CFLAGS += -flto
 LDFLAGS-$(clang) += -plugin LLVMgold.so
 endif
+
+TAPDISK_EXEC = "tapdisk"
+TAPDISK_EXECDIR = $(LIBEXEC)
diff --git a/tools/blktap2/control/Makefile b/tools/blktap3/control/Makefile
copy from tools/blktap2/control/Makefile
copy to tools/blktap3/control/Makefile
--- a/tools/blktap2/control/Makefile
+++ b/tools/blktap3/control/Makefile
@@ -6,40 +6,45 @@ MINOR              = 0
 LIBNAME            = libblktapctl
 LIBSONAME          = $(LIBNAME).so.$(MAJOR)
 
-IBIN               = tap-ctl
+genpath-target = $(call buildmakevars2file,_paths.h.tmp)
+$(eval $(genpath-target))
 
-CFLAGS            += -Werror
-CFLAGS            += -Wno-unused
-CFLAGS            += -I../include -I../drivers
-CFLAGS            += $(CFLAGS_xeninclude)
-CFLAGS            += $(CFLAGS_libxenctrl)
-CFLAGS            += -D_GNU_SOURCE
-CFLAGS            += -DTAPCTL
+_paths.h: genpath
+	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
+	rm -f $@.tmp
+	$(call move-if-changed,$@.2.tmp,$@)
 
-CTL_OBJS  := tap-ctl-ipc.o
+override CFLAGS += \
+	-I../include \
+	-DTAPDISK_BUILDDIR='"../drivers"' \
+	$(CFLAGS_xeninclude) \
+	$(CFLAGS_libxenctrl) \
+	-D_GNU_SOURCE \
+	-DTAPCTL \
+    -Wall \
+    -Wextra \
+    -Werror
+
+# FIXME: these warnings cause trouble
+override CFLAGS += \
+    -Wno-type-limits \
+    -Wno-missing-field-initializers \
+    -Wno-sign-compare
+
 CTL_OBJS  += tap-ctl-list.o
-CTL_OBJS  += tap-ctl-allocate.o
-CTL_OBJS  += tap-ctl-free.o
-CTL_OBJS  += tap-ctl-create.o
-CTL_OBJS  += tap-ctl-destroy.o
+CTL_OBJS  += tap-ctl-info.o
+CTL_OBJS  += tap-ctl-xen.o
+CTL_OBJS  += tap-ctl-ipc.o
 CTL_OBJS  += tap-ctl-spawn.o
-CTL_OBJS  += tap-ctl-attach.o
-CTL_OBJS  += tap-ctl-detach.o
-CTL_OBJS  += tap-ctl-open.o
-CTL_OBJS  += tap-ctl-close.o
-CTL_OBJS  += tap-ctl-pause.o
-CTL_OBJS  += tap-ctl-unpause.o
-CTL_OBJS  += tap-ctl-major.o
-CTL_OBJS  += tap-ctl-check.o
+
+tap-ctl-spawn.o: _paths.h
 
 CTL_PICS  = $(patsubst %.o,%.opic,$(CTL_OBJS))
 
-OBJS = $(CTL_OBJS) tap-ctl.o
 PICS = $(CTL_PICS)
 
 LIB_STATIC = $(LIBNAME).a
 LIB_SHARED = $(LIBSONAME).$(MINOR)
-IBIN = tap-ctl
 
 all: build
 
@@ -51,27 +56,24 @@ build: $(IBIN) $(LIB_STATIC) $(LIB_SHARE
 $(LIBSONAME): $(LIB_SHARED)
 	ln -sf $< $@
 
-tap-ctl: tap-ctl.o $(LIBNAME).so
-	$(CC) $(LDFLAGS) -o $@ $^
-
 $(LIB_STATIC): $(CTL_OBJS)
 	$(AR) r $@ $^
 
 $(LIB_SHARED): $(CTL_PICS)
 	$(CC) $(LDFLAGS) -fPIC  -Wl,$(SONAME_LDFLAG) -Wl,$(LIBSONAME) $(SHLIB_LDFLAGS) -rdynamic $^ -o $@
 
-install: $(IBIN) $(LIB_STATIC) $(LIB_SHARED)
+install: $(LIB_STATIC) $(LIB_SHARED)
 	$(INSTALL_DIR) -p $(DESTDIR)$(SBINDIR)
-	$(INSTALL_PROG) $(IBIN) $(DESTDIR)$(SBINDIR)
 	$(INSTALL_DATA) $(LIB_STATIC) $(DESTDIR)$(LIBDIR)
 	$(INSTALL_PROG) $(LIB_SHARED) $(DESTDIR)$(LIBDIR)
 	ln -sf $(LIBSONAME) $(DESTDIR)$(LIBDIR)/$(LIBNAME).so
 	ln -sf $(LIB_SHARED) $(DESTDIR)$(LIBDIR)/$(LIBSONAME)
 
 clean:
-	rm -f $(OBJS) $(PICS) $(DEPS) $(IBIN) $(LIB_STATIC) $(LIB_SHARED)
+	rm -f $(CTL_OBJS) $(PICS) $(DEPS) $(LIB_STATIC) $(LIB_SHARED)
 	rm -f $(LIBNAME).so $(LIBSONAME)
 	rm -f *~
+	rm -f _paths.h
 
 .PHONY: all build clean install
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:22:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfx8B-00050l-5o; Tue, 04 Dec 2012 18:22:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Tfx89-0004z9-Nf
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:22:05 +0000
Received: from [85.158.138.51:8671] by server-2.bemta-3.messagelabs.com id
	E6/C8-04744-C4F3EB05; Tue, 04 Dec 2012 18:22:04 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-6.tower-174.messagelabs.com!1354645324!19421344!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA0ODQzMTI=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA0ODQzMTI=\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27199 invoked from network); 4 Dec 2012 18:22:04 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:22:04 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFHjy0MEysz
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-091-208.pools.arcor-ip.net [88.65.91.208])
	by smtp.strato.de (jored mo18) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 505758oB4IGaR8 ;
	Tue, 4 Dec 2012 19:21:52 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id BD4831884C; Tue,  4 Dec 2012 19:21:51 +0100 (CET)
Date: Tue, 4 Dec 2012 19:21:51 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121204182151.GA25878@aepfle.de>
References: <1354563168-6609-1-git-send-email-olaf@aepfle.de>
	<50BDDEBF02000078000ADAAC@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BDDEBF02000078000ADAAC@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5558 (2012-10-16)
Cc: xen-devel@lists.xen.org, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, Jan Beulich wrote:

> This looks necessary but insufficient - there's nothing really
> preventing backend_changed() from being called more than once
> for a given device (it is simply the handler of a xenbus watch). Hence
> I think either that function needs to be guarded against multiple
> execution (e.g. by removing the watch from that function itself,
> if that's permitted by xenbus), or to properly deal with the
> effects this has (including but probably not limited to the leaking
> of be->mode).

If another watch really does trigger after the kfree(be) in
xen_blkbk_remove(), wouldn't backend_changed() access stale memory?
So if that can really happen in practice, shouldn't backend_watch be
a separate allocation instead of being contained within backend_info?

Looking at unregister_xenbus_watch, it removes the watch from the
list, so that process_msg will not see it anymore.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:27:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:27:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfxD0-0006Nr-LV; Tue, 04 Dec 2012 18:27:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TfxCz-0006Nj-Fz
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:27:05 +0000
Received: from [85.158.139.83:19984] by server-5.bemta-5.messagelabs.com id
	F6/B7-11353-8704EB05; Tue, 04 Dec 2012 18:27:04 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354645605!28293198!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10495 invoked from network); 4 Dec 2012 18:26:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:26:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,216,1355097600"; d="scan'208";a="216377178"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:25:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:25:03 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TfxB0-0002zY-Sg	for
	xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:25:02 +0000
Message-ID: <50BE3FFE.3040709@citrix.com>
Date: Tue, 4 Dec 2012 18:25:02 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
In-Reply-To: <b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
X-Enigmail-Version: 1.4.6
Subject: Re: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 04/12/12 18:16, Andrew Cooper wrote:
> Experimentally, certain crash kernels will triple fault very early after
> starting if started with NMIs disabled.  This was discovered when
> experimenting with a debug keyhandler which deliberately created a
> reentrant NMI, causing stack corruption.
>
> Because of this discovered bug, and that the future changes to the NMI
> handling will make the kexec path more fragile, take the time now to
> bullet-proof the kexec behaviour to be safe in all circumstances.
>
> This patch adds three new low level routines:
>  * trap_nop
>      This is a no op handler which irets immediately
>  * nmi_crash
>      This is a special NMI handler for using during a kexec crash
>  * enable_nmis
>      This function enables NMIs by executing an iret-to-self
>
> As a result, the new behaviour of the kexec_crash path is:
>
> nmi_shootdown_cpus() will:
>
>  * Disable interrupt stack tables.
>     Disabling ISTs will prevent stack corruption from future NMIs and
>     MCEs. As Xen is going to crash, we are not concerned with the
>     security race condition with 'sysret'; any guest which managed to
>     benefit from the race condition will only be able to execute a handful
>     of instructions before it will be killed anyway, and a guest is
>     still unable to trigger the race condition in the first place.
>
>  * Swap the NMI trap handlers.
>     The crashing pcpu gets the nop handler, to prevent it getting stuck in
>     an NMI context, causing a hang instead of a crash.  The non-crashing
>     pcpus all get the crash_nmi handler which is designed never to
>     return.
>
> do_nmi_crash() will:
>
>  * Save the crash notes and shut the pcpu down.
>     There is now an extra per-cpu variable to prevent us from executing
>     this multiple times.  In the case where we reenter midway through,
>     attempt the whole operation again in preference to not completing
>     it in the first place.
>
>  * Set up another NMI at the LAPIC.
>     Even when the LAPIC has been disabled, the ID and command registers
>     are still usable.  As a result, we can deliberately queue up a new
>     NMI to re-interrupt us later if NMIs get unlatched.  Because of the
>     call to __stop_this_cpu(), we have to hand craft self_nmi() to be
>     safe from General Protection Faults.
>
>  * Fall into infinite loop.
>
> machine_kexec() will:
>
>   * Swap the MCE handlers to be a nop.
>      We cannot prevent MCEs from being delivered when we pass off to the
>      crash kernel, and the less Xen context is being touched the better.
>
>   * Explicitly enable NMIs.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,97 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( crashing_cpu != smp_processor_id() );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;

And I have already found a bug.  The
"atomic_dec(&waiting_for_crash_ipi);" from below should appear here.

Without it, we just wait the full second in nmi_shootdown_cpus() and
continue anyway.

~Andrew

> +    }
> +
> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
> +     * queue up another NMI, to force us back into this loop if we exit.
>       */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>  
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>  
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> +        break;
>  
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>  
>      for ( ; ; )
>          halt();
> -
> -    return 1;
>  }
>  
>  static void nmi_shootdown_cpus(void)
>  {
>      unsigned long msecs;
> +    int i;
> +    int cpu = smp_processor_id();
>  
>      local_irq_disable();
>  
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>      local_irq_count(crashing_cpu) = 0;
>  
>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get crash_nmi which
> +     * invokes do_nmi_crash (above), which causes them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     *
> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
> +     * race conditions resulting in corrupt stack frames.  As Xen is
> +     * about to die, we no longer care about the security-related race
> +     * condition with 'sysret' which ISTs are designed to solve.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
> +
> +            if ( i == cpu )
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +            else
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
> +        }
> +
>      /* Ensure the new callback function is set before sending out the NMI. */
>      wmb();
>  
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -87,6 +87,17 @@ void machine_kexec(xen_kexec_image_t *im
>       */
>      local_irq_disable();
>  
> +    /* Now regular interrupts are disabled, we need to reduce the impact
> +     * of interrupts not disabled by 'cli'.  NMIs have already been
> +     * dealt with by machine_crash_shutdown().
> +     *
> +     * Set all pcpu MCE handler to be a noop. */
> +    set_intr_gate(TRAP_machine_check, &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>      /*
>       * compat_machine_kexec() returns to idle pagetables, which requires us
>       * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,42 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* No op trap handler.  Required for kexec path. */
> +ENTRY(trap_nop)
> +        iretq
> +
> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        push %rax
> +        movq %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
> diff -r 1c69c938f641 -r b15d3ae525af xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>  DECLARE_TRAP_HANDLER(divide_error);
>  DECLARE_TRAP_HANDLER(debug);
>  DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>  DECLARE_TRAP_HANDLER(int3);
>  DECLARE_TRAP_HANDLER(overflow);
>  DECLARE_TRAP_HANDLER(bounds);
> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>  #undef DECLARE_TRAP_HANDLER
>  
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>  void syscall_enter(void);
>  void sysenter_entry(void);
>  void sysenter_eflags_saved(void);

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:27:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:27:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfxD0-0006Nr-LV; Tue, 04 Dec 2012 18:27:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TfxCz-0006Nj-Fz
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:27:05 +0000
Received: from [85.158.139.83:19984] by server-5.bemta-5.messagelabs.com id
	F6/B7-11353-8704EB05; Tue, 04 Dec 2012 18:27:04 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354645605!28293198!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10495 invoked from network); 4 Dec 2012 18:26:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:26:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,216,1355097600"; d="scan'208";a="216377178"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 18:25:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 13:25:03 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TfxB0-0002zY-Sg	for
	xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:25:02 +0000
Message-ID: <50BE3FFE.3040709@citrix.com>
Date: Tue, 4 Dec 2012 18:25:02 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
In-Reply-To: <b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
X-Enigmail-Version: 1.4.6
Subject: Re: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 04/12/12 18:16, Andrew Cooper wrote:
> Experimentally, certain crash kernels will triple fault very early after
> starting if started with NMIs disabled.  This was discovered when
> experimenting with a debug keyhandler which deliberately created a
> reentrant NMI, causing stack corruption.
>
> Because of this discovered bug, and that the future changes to the NMI
> handling will make the kexec path more fragile, take the time now to
> bullet-proof the kexec behaviour to be safe in all circumstances.
>
> This patch adds three new low level routines:
>  * trap_nop
>      This is a no op handler which irets immediately
>  * nmi_crash
>      This is a special NMI handler for using during a kexec crash
>  * enable_nmis
>      This function enables NMIs by executing an iret-to-self
>
> As a result, the new behaviour of the kexec_crash path is:
>
> nmi_shootdown_cpus() will:
>
>  * Disable interrupt stack tables.
>     Disabling ISTs will prevent stack corruption from future NMIs and
>     MCEs. As Xen is going to crash, we are not concerned with the
>     security race condition with 'sysret'; any guest which managed to
>     benefit from the race condition will only be able execute a handful
>     of instructions before it will be killed anyway, and a guest is
>     still unable to trigger the race condition in the first place.
>
>  * Swap the NMI trap handlers.
>     The crashing pcpu gets the nop handler, to prevent it getting stuck in
>     an NMI context, causing a hang instead of a crash.  The non-crashing
>     pcpus all get the nmi_crash handler, which is designed never to
>     return.
>
> do_nmi_crash() will:
>
>  * Save the crash notes and shut the pcpu down.
>     There is now an extra per-cpu variable to prevent us from executing
>     this multiple times.  In the case where we reenter midway through,
>     attempt the whole operation again in preference to not completing
>     it in the first place.
>
>  * Set up another NMI at the LAPIC.
>     Even when the LAPIC has been disabled, the ID and command registers
>     are still usable.  As a result, we can deliberately queue up a new
>     NMI to re-interrupt us later if NMIs get unlatched.  Because of the
>     call to __stop_this_cpu(), we have to hand craft self_nmi() to be
>     safe from General Protection Faults.
>
>  * Fall into infinite loop.
>
> machine_kexec() will:
>
>   * Swap the MCE handlers to be a nop.
>      We cannot prevent MCEs from being delivered when we pass off to the
>      crash kernel, and the less Xen context is touched, the better.
>
>   * Explicitly enable NMIs.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,97 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( crashing_cpu != smp_processor_id() );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;

And I have already found a bug.  The
"atomic_dec(&waiting_for_crash_ipi);" from below should appear here.

Without it, we just wait the full second in nmi_shootdown_cpus() and
continue anyway.

~Andrew

> +    }
> +
> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
> +     * queue up another NMI, to force us back into this loop if we exit.
>       */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>  
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>  
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> +        break;
>  
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>  
>      for ( ; ; )
>          halt();
> -
> -    return 1;
>  }
>  
>  static void nmi_shootdown_cpus(void)
>  {
>      unsigned long msecs;
> +    int i;
> +    int cpu = smp_processor_id();
>  
>      local_irq_disable();
>  
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>      local_irq_count(crashing_cpu) = 0;
>  
>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash, which
> +     * invokes do_nmi_crash (above), causing them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     *
> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
> +     * race conditions resulting in corrupt stack frames.  As Xen is
> +     * about to die, we no longer care about the security-related race
> +     * condition with 'sysret' which ISTs are designed to solve.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
> +
> +            if ( i == cpu )
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +            else
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
> +        }
> +
>      /* Ensure the new callback function is set before sending out the NMI. */
>      wmb();
>  
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -87,6 +87,17 @@ void machine_kexec(xen_kexec_image_t *im
>       */
>      local_irq_disable();
>  
> +    /* Now that regular interrupts are disabled, we need to reduce the impact
> +     * of interrupts not disabled by 'cli'.  NMIs have already been
> +     * dealt with by machine_crash_shutdown().
> +     *
> +     * Set all pcpu MCE handlers to be a noop. */
> +    set_intr_gate(TRAP_machine_check, &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>      /*
>       * compat_machine_kexec() returns to idle pagetables, which requires us
>       * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,42 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* No op trap handler.  Required for kexec path. */
> +ENTRY(trap_nop)
> +        iretq
> +
> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        push %rax
> +        movq %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
> diff -r 1c69c938f641 -r b15d3ae525af xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>  DECLARE_TRAP_HANDLER(divide_error);
>  DECLARE_TRAP_HANDLER(debug);
>  DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>  DECLARE_TRAP_HANDLER(int3);
>  DECLARE_TRAP_HANDLER(overflow);
>  DECLARE_TRAP_HANDLER(bounds);
> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>  #undef DECLARE_TRAP_HANDLER
>  
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>  void syscall_enter(void);
>  void sysenter_entry(void);
>  void sysenter_eflags_saved(void);

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:43:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfxSx-0006tN-LN; Tue, 04 Dec 2012 18:43:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TfxSv-0006tI-TI
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:43:34 +0000
Received: from [85.158.137.99:48638] by server-14.bemta-3.messagelabs.com id
	9E/F6-31424-5544EB05; Tue, 04 Dec 2012 18:43:33 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354646611!12410662!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32327 invoked from network); 4 Dec 2012 18:43:31 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-7.tower-217.messagelabs.com with SMTP;
	4 Dec 2012 18:43:31 -0000
X-TM-IMSS-Message-ID: <48d570ca00045e65@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 48d570ca00045e65 ;
	Tue, 4 Dec 2012 13:41:59 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qB4IhEXc032616; 
	Tue, 4 Dec 2012 13:43:15 -0500
Message-ID: <50BE4442.6070902@tycho.nsa.gov>
Date: Tue, 04 Dec 2012 13:43:14 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-10-git-send-email-dgdegra@tycho.nsa.gov>
	<50BE3730.2030509@jhuapl.edu>
In-Reply-To: <50BE3730.2030509@jhuapl.edu>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] stubdom/vtpm: Add PCR pass-through to
	hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/04/2012 12:47 PM, Matthew Fioravante wrote:
> So this maps a fixed set of PCRs always? 

Yes; and making them not the last 24 (or last N) PCRs requires a more
complex patch, since the PCR-modification code also needs to be changed.

> Is there any use case where the user might want the PCR mappings to be configurable? Might they want to disable this feature to disallow Hardware PCR access in the vm?

I think all of these configuration options will be needed for this to be 
truly useful in an upstream vTPM implementation; however, they require 
more significant changes to the TPM emulator itself, since values that
are currently #defines need to be changed to be configurable at TPM
creation. Adding more configuration will also eventually run past the
limits of what is reasonable to place on the vTPM's command line, so it
may be useful to consider how to define this type of configuration so 
that it still ends up in the vTPM's measurement when created by a 
measuring domain builder (hash of a config struct on the command line).

Things that I think could be configurable:
  * Localities and what domains can use them (I have a command-line 
    based patch for this).
  * PCRs beyond the defined minimum (currently 24): what localities have 
    extend/clear access to them, or what HW-PCR they are mapped to
  * Maximum sizes of TPM items: NV area, counters, sessions, etc
  
TPM version 2.0 may address some of these issues in the specification, 
so it may be reasonable to require compile-time configuration for a 1.2
TPM, and allow the owner to define the rest at runtime in a 2.0 TPM.

> Also can you update the docs/misc/vtpm.txt documentation with a note about this and the grub feature?

Sure.
 
> On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
>> This allows the hardware TPM's PCRs to be accessed from a vTPM for
>> debugging and as a simple alternative to a deep quote in situations
>> where the integrity of the vTPM's own TCB is not in question.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> ---
>>   stubdom/Makefile                   |  1 +
>>   stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
>>   stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
>>   3 files changed, 112 insertions(+)
>>   create mode 100644 stubdom/vtpm-pcr-passthrough.patch
>>
>> diff --git a/stubdom/Makefile b/stubdom/Makefile
>> index 790b547..03ec07e 100644
>> --- a/stubdom/Makefile
>> +++ b/stubdom/Makefile
>> @@ -210,6 +210,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>>       patch -d $@ -p1 < vtpm-locality.patch
>>       patch -d $@ -p1 < vtpm-bufsize.patch
>>       patch -d $@ -p1 < vtpm-locality5-pcrs.patch
>> +    patch -d $@ -p1 < vtpm-pcr-passthrough.patch
>>       mkdir $@/build
>>       cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>>       touch $@
>> diff --git a/stubdom/vtpm-pcr-passthrough.patch b/stubdom/vtpm-pcr-passthrough.patch
>> new file mode 100644
>> index 0000000..4e898a5
>> --- /dev/null
>> +++ b/stubdom/vtpm-pcr-passthrough.patch
>> @@ -0,0 +1,73 @@
>> +diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
>> +index f8f7f0f..885af52 100644
>> +--- a/tpm/tpm_capability.c
>> ++++ b/tpm/tpm_capability.c
>> +@@ -72,7 +72,7 @@ static TPM_RESULT cap_property(UINT32 subCapSize, BYTE *subCap,
>> +   switch (property) {
>> +     case TPM_CAP_PROP_PCR:
>> +       debug("[TPM_CAP_PROP_PCR]");
>> +-      return return_UINT32(respSize, resp, TPM_NUM_PCR);
>> ++      return return_UINT32(respSize, resp, TPM_NUM_PCR_V);
>> +
>> +     case TPM_CAP_PROP_DIR:
>> +       debug("[TPM_CAP_PROP_DIR]");
>> +diff --git a/tpm/tpm_emulator_extern.h b/tpm/tpm_emulator_extern.h
>> +index 36a32dd..77ed595 100644
>> +--- a/tpm/tpm_emulator_extern.h
>> ++++ b/tpm/tpm_emulator_extern.h
>> +@@ -56,6 +56,7 @@ void (*tpm_free)(/*const*/ void *ptr);
>> + /* random numbers */
>> +
>> + void (*tpm_get_extern_random_bytes)(void *buf, size_t nbytes);
>> ++void tpm_get_extern_pcr(int index, void *buf);
>> +
>> + /* usec since last call */
>> +
>> +diff --git a/tpm/tpm_integrity.c b/tpm/tpm_integrity.c
>> +index 66ece83..f3c4196 100644
>> +--- a/tpm/tpm_integrity.c
>> ++++ b/tpm/tpm_integrity.c
>> +@@ -56,8 +56,11 @@ TPM_RESULT TPM_Extend(TPM_PCRINDEX pcrNum, TPM_DIGEST *inDigest,
>> + TPM_RESULT TPM_PCRRead(TPM_PCRINDEX pcrIndex, TPM_PCRVALUE *outDigest)
>> + {
>> +   info("TPM_PCRRead()");
>> +-  if (pcrIndex >= TPM_NUM_PCR) return TPM_BADINDEX;
>> +-  memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
>> ++  if (pcrIndex >= TPM_NUM_PCR_V) return TPM_BADINDEX;
>> ++  if (pcrIndex >= TPM_NUM_PCR)
>> ++    tpm_get_extern_pcr(pcrIndex - TPM_NUM_PCR, outDigest);
>> ++  else
>> ++    memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
>> +   return TPM_SUCCESS;
>> + }
>> +
>> +@@ -138,12 +141,15 @@ TPM_RESULT tpm_compute_pcr_digest(TPM_PCR_SELECTION *pcrSelection,
>> +   BYTE *buf, *ptr;
>> +   info("tpm_compute_pcr_digest()");
>> +   /* create PCR composite */
>> +-  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR
>> ++  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR_V
>> +       || pcrSelection->sizeOfSelect == 0) return TPM_INVALID_PCR_INFO;
>> +   for (i = 0, j = 0; i < pcrSelection->sizeOfSelect * 8; i++) {
>> +     /* is PCR number i selected ? */
>> +     if (pcrSelection->pcrSelect[i >> 3] & (1 << (i & 7))) {
>> +-      memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
>> ++      if (i >= TPM_NUM_PCR)
>> ++        tpm_get_extern_pcr(i - TPM_NUM_PCR, &comp.pcrValue[j++]);
>> ++      else
>> ++        memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
>> +     }
>> +   }
>> +   memcpy(&comp.select, pcrSelection, sizeof(TPM_PCR_SELECTION));
>> +diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
>> +index 08cef1e..8c97fc5 100644
>> +--- a/tpm/tpm_structures.h
>> ++++ b/tpm/tpm_structures.h
>> +@@ -677,6 +677,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
>> +  * Number of PCRs of the TPM (must be a multiple of eight)
>> +  */
>> + #define TPM_NUM_PCR 32
>> ++#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)
>> +
>> + /*
>> +  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
>> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
>> index 7eae98b..ed058fb 100644
>> --- a/stubdom/vtpm/vtpm_cmd.c
>> +++ b/stubdom/vtpm/vtpm_cmd.c
>> @@ -134,6 +134,44 @@ egress:
>>     }
>>   +extern struct tpmfront_dev* tpmfront_dev;
>> +void tpm_get_extern_pcr(int index, void *buf) {
>> +   TPM_RESULT status = TPM_SUCCESS;
>> +   uint8_t* cmdbuf, *resp, *bptr;
>> +   size_t resplen = 0;
>> +   UINT32 len;
>> +
>> +   /*Ask the real tpm for the PCR value */
>> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
>> +   UINT32 size;
>> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
>> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
>> +
>> +   /*Create the raw tpm command */
>> +   bptr = cmdbuf = malloc(size);
>> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
>> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, index));
>> +
>> +   /* Send cmd, wait for response */
>> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
>> +      ERR_TPMFRONT);
>> +
>> +   bptr = resp; len = resplen;
>> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
>> +
>> +   //Check return status of command
>> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead()");
>> +
>> +   //Get the PCR value out
>> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, buf, 20), ERR_MALFORMED);
>> +
>> +   goto egress;
>> +abort_egress:
>> +   memset(buf, 0x20, 20);
>> +egress:
>> +   free(cmdbuf);
>> +}
>> +
>>   TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
>>   {
>>      TPM_RESULT status = TPM_SUCCESS;
> 
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 18:54:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfxdC-0007Jf-1w; Tue, 04 Dec 2012 18:54:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfxdA-0007JG-Gl
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:54:08 +0000
Received: from [85.158.138.51:29192] by server-7.bemta-3.messagelabs.com id
	E8/42-01713-FC64EB05; Tue, 04 Dec 2012 18:54:07 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354647243!19514675!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16355 invoked from network); 4 Dec 2012 18:54:04 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Dec 2012 18:54:04 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 1071_0439_210d0318_6d4e_4d85_91e5_0e0428f8f0dd;
	Tue, 04 Dec 2012 13:53:25 -0500
Message-ID: <50BE469D.1020106@jhuapl.edu>
Date: Tue, 04 Dec 2012 13:53:17 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354286955-23900-10-git-send-email-dgdegra@tycho.nsa.gov>
	<50BE3730.2030509@jhuapl.edu> <50BE4442.6070902@tycho.nsa.gov>
In-Reply-To: <50BE4442.6070902@tycho.nsa.gov>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] stubdom/vtpm: Add PCR pass-through to
	hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5134661836852435559=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============5134661836852435559==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms060509000209010304090602"

This is a cryptographically signed message in MIME format.

--------------ms060509000209010304090602
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Sounds good.

Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>

On 12/04/2012 01:43 PM, Daniel De Graaf wrote:
> On 12/04/2012 12:47 PM, Matthew Fioravante wrote:
>> So this maps a fixed set of PCRs always?
> Yes; and making them not the last 24 (or last N) PCRs requires a more
> complex patch, since the PCR-modification code also needs to be changed.
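[Editor's note: for reference, the index split the patch implements can be sketched as below. This is a minimal standalone sketch; the enum and helper names are illustrative, not the emulator's actual symbols, but the constants and comparisons follow the quoted patch.]

```c
#include <assert.h>

/* Mirrors the patch's #defines: 32 emulated PCRs, plus the hardware
 * TPM's 24 PCRs appended as virtual indices 32..55. */
#define TPM_NUM_PCR   32
#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)

enum pcr_backend { PCR_EMULATED, PCR_HARDWARE, PCR_BADINDEX };

/* Decide which backend serves a vTPM PCR index and, for pass-through
 * indices, which hardware PCR it maps to (read via tpm_get_extern_pcr()
 * in the patch; emulated indices read PCR_VALUE[index]). */
static enum pcr_backend classify_pcr(int index, int *hw_index)
{
    if (index >= TPM_NUM_PCR_V)
        return PCR_BADINDEX;              /* out of range: TPM_BADINDEX */
    if (index >= TPM_NUM_PCR) {
        *hw_index = index - TPM_NUM_PCR;  /* hardware pass-through */
        return PCR_HARDWARE;
    }
    return PCR_EMULATED;                  /* emulated PCR */
}
```

So with the patch applied, virtual PCRs 0-31 stay fully emulated and virtual PCRs 32-55 read through to hardware PCRs 0-23.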
>
>> Is there any use case where the user might want the PCR mappings to be
>> configurable? Might they want to disable this feature to disallow
>> hardware PCR access in the VM?
> I think all of these configuration options will be needed for this to be
> truly useful in an upstream vTPM implementation; however, they require
> more significant changes to the TPM emulator itself, since values that
> are currently #defines need to be changed to be configurable at TPM
> creation. Adding more configuration will also eventually run past the
> limits of what is reasonable to place on the vTPM's command line, so it
> may be useful to consider how to define this type of configuration so
> that it still ends up in the vTPM's measurement when created by a
> measuring domain builder (hash of a config struct on the command line).
>
> Things that I think could be configurable:
>    * Localities and what domains can use them (I have a command-line
>      based patch for this).
>    * PCRs beyond the defined minimum (currently 24): what localities have
>      extend/clear access to them, or what HW-PCR they are mapped to
>    * Maximum sizes of TPM items: NV area, counters, sessions, etc
>
> TPM version 2.0 may address some of these issues in the specification,
> so it may be reasonable to require compile-time configuration for a 1.2
> TPM, and allow the owner to define the rest at runtime in a 2.0 TPM.
>
>> Also can you update the docs/misc/vtpm.txt documentation with a note
>> about this and the grub feature?
> Sure.
>
>> On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
>>> This allows the hardware TPM's PCRs to be accessed from a vTPM for
>>> debugging and as a simple alternative to a deep quote in situations
>>> where the integrity of the vTPM's own TCB is not in question.
>>>
>>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>> ---
>>>    stubdom/Makefile                   |  1 +
>>>    stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
>>>    stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
>>>    3 files changed, 112 insertions(+)
>>>    create mode 100644 stubdom/vtpm-pcr-passthrough.patch
>>>
>>> diff --git a/stubdom/Makefile b/stubdom/Makefile
>>> index 790b547..03ec07e 100644
>>> --- a/stubdom/Makefile
>>> +++ b/stubdom/Makefile
>>> @@ -210,6 +210,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>>>        patch -d $@ -p1 < vtpm-locality.patch
>>>        patch -d $@ -p1 < vtpm-bufsize.patch
>>>        patch -d $@ -p1 < vtpm-locality5-pcrs.patch
>>> +    patch -d $@ -p1 < vtpm-pcr-passthrough.patch
>>>        mkdir $@/build
>>>        cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>>>        touch $@
>>> diff --git a/stubdom/vtpm-pcr-passthrough.patch b/stubdom/vtpm-pcr-pa=
ssthrough.patch
>>> new file mode 100644
>>> index 0000000..4e898a5
>>> --- /dev/null
>>> +++ b/stubdom/vtpm-pcr-passthrough.patch
>>> @@ -0,0 +1,73 @@
>>> +diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
>>> +index f8f7f0f..885af52 100644
>>> +--- a/tpm/tpm_capability.c
>>> ++++ b/tpm/tpm_capability.c
>>> +@@ -72,7 +72,7 @@ static TPM_RESULT cap_property(UINT32 subCapSize, BYTE *subCap,
>>> +   switch (property) {
>>> +     case TPM_CAP_PROP_PCR:
>>> +       debug("[TPM_CAP_PROP_PCR]");
>>> +-      return return_UINT32(respSize, resp, TPM_NUM_PCR);
>>> ++      return return_UINT32(respSize, resp, TPM_NUM_PCR_V);
>>> +
>>> +     case TPM_CAP_PROP_DIR:
>>> +       debug("[TPM_CAP_PROP_DIR]");
>>> +diff --git a/tpm/tpm_emulator_extern.h b/tpm/tpm_emulator_extern.h
>>> +index 36a32dd..77ed595 100644
>>> +--- a/tpm/tpm_emulator_extern.h
>>> ++++ b/tpm/tpm_emulator_extern.h
>>> +@@ -56,6 +56,7 @@ void (*tpm_free)(/*const*/ void *ptr);
>>> + /* random numbers */
>>> +
>>> + void (*tpm_get_extern_random_bytes)(void *buf, size_t nbytes);
>>> ++void tpm_get_extern_pcr(int index, void *buf);
>>> +
>>> + /* usec since last call */
>>> +
>>> +diff --git a/tpm/tpm_integrity.c b/tpm/tpm_integrity.c
>>> +index 66ece83..f3c4196 100644
>>> +--- a/tpm/tpm_integrity.c
>>> ++++ b/tpm/tpm_integrity.c
>>> +@@ -56,8 +56,11 @@ TPM_RESULT TPM_Extend(TPM_PCRINDEX pcrNum, TPM_DIGEST *inDigest,
>>> + TPM_RESULT TPM_PCRRead(TPM_PCRINDEX pcrIndex, TPM_PCRVALUE *outDigest)
>>> + {
>>> +   info("TPM_PCRRead()");
>>> +-  if (pcrIndex >= TPM_NUM_PCR) return TPM_BADINDEX;
>>> +-  memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
>>> ++  if (pcrIndex >= TPM_NUM_PCR_V) return TPM_BADINDEX;
>>> ++  if (pcrIndex >= TPM_NUM_PCR)
>>> ++    tpm_get_extern_pcr(pcrIndex - TPM_NUM_PCR, outDigest);
>>> ++  else
>>> ++    memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
>>> +   return TPM_SUCCESS;
>>> + }
>>> +
>>> +@@ -138,12 +141,15 @@ TPM_RESULT tpm_compute_pcr_digest(TPM_PCR_SELECTION *pcrSelection,
>>> +   BYTE *buf, *ptr;
>>> +   info("tpm_compute_pcr_digest()");
>>> +   /* create PCR composite */
>>> +-  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR
>>> ++  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR_V
>>> +       || pcrSelection->sizeOfSelect == 0) return TPM_INVALID_PCR_INFO;
>>> +   for (i = 0, j = 0; i < pcrSelection->sizeOfSelect * 8; i++) {
>>> +     /* is PCR number i selected ? */
>>> +     if (pcrSelection->pcrSelect[i >> 3] & (1 << (i & 7))) {
>>> +-      memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
>>> ++      if (i >= TPM_NUM_PCR)
>>> ++        tpm_get_extern_pcr(i - TPM_NUM_PCR, &comp.pcrValue[j++]);
>>> ++      else
>>> ++        memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
>>> +     }
>>> +   }
>>> +   memcpy(&comp.select, pcrSelection, sizeof(TPM_PCR_SELECTION));
>>> +diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
>>> +index 08cef1e..8c97fc5 100644
>>> +--- a/tpm/tpm_structures.h
>>> ++++ b/tpm/tpm_structures.h
>>> +@@ -677,6 +677,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
>>> +  * Number of PCRs of the TPM (must be a multiple of eight)
>>> +  */
>>> + #define TPM_NUM_PCR 32
>>> ++#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)
>>> +
>>> + /*
>>> +  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
>>> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
>>> index 7eae98b..ed058fb 100644
>>> --- a/stubdom/vtpm/vtpm_cmd.c
>>> +++ b/stubdom/vtpm/vtpm_cmd.c
>>> @@ -134,6 +134,44 @@ egress:
>>>      }
>>>    +extern struct tpmfront_dev* tpmfront_dev;
>>> +void tpm_get_extern_pcr(int index, void *buf) {
>>> +   TPM_RESULT status = TPM_SUCCESS;
>>> +   uint8_t* cmdbuf, *resp, *bptr;
>>> +   size_t resplen = 0;
>>> +   UINT32 len;
>>> +
>>> +   /*Ask the real tpm for the PCR value */
>>> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
>>> +   UINT32 size;
>>> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
>>> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
>>> +
>>> +   /*Create the raw tpm command */
>>> +   bptr = cmdbuf = malloc(size);
>>> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
>>> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, index));
>>> +
>>> +   /* Send cmd, wait for response */
>>> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
>>> +      ERR_TPMFRONT);
>>> +
>>> +   bptr = resp; len = resplen;
>>> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
>>> +
>>> +   //Check return status of command
>>> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead()");
>>> +
>>> +   //Get the PCR value out
>>> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, buf, 20), ERR_MALFORMED);
>>> +
>>> +   goto egress;
>>> +abort_egress:
>>> +   memset(buf, 0x20, 20);
>>> +egress:
>>> +   free(cmdbuf);
>>> +}
>>> +
>>>    TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
>>>    {
>>>       TPM_RESULT status = TPM_SUCCESS;
>>
>
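[Editor's note: the vtpm_cmd.c helper quoted above builds a raw TPM 1.2 TPM_ORD_PCRRead request. For readers unfamiliar with the wire format, a minimal big-endian framing sketch follows. The tag and ordinal constants are per the TPM 1.2 spec; the put16/put32 helpers are illustrative stand-ins, not the stubdom's actual pack_header()/tpm_marshal_UINT32().]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TPM_TAG_RQU_COMMAND 0x00C1u     /* TPM 1.2 request tag */
#define TPM_ORD_PCRRead     0x00000015u /* PCRRead ordinal */

/* TPM wire format is big-endian ("network order"). */
static void put16(uint8_t **p, uint16_t v)
{
    (*p)[0] = (uint8_t)(v >> 8); (*p)[1] = (uint8_t)v; *p += 2;
}

static void put32(uint8_t **p, uint32_t v)
{
    (*p)[0] = (uint8_t)(v >> 24); (*p)[1] = (uint8_t)(v >> 16);
    (*p)[2] = (uint8_t)(v >> 8);  (*p)[3] = (uint8_t)v; *p += 4;
}

/* Build a TPM_ORD_PCRRead request for one PCR index; returns bytes written.
 * Layout: tag(2) | paramSize(4) | ordinal(4) | pcrIndex(4) = 14 bytes. */
static size_t build_pcrread(uint8_t buf[14], uint32_t pcr_index)
{
    uint8_t *p = buf;
    put16(&p, TPM_TAG_RQU_COMMAND);
    put32(&p, 14);            /* total paramSize, matching len/size above */
    put32(&p, TPM_ORD_PCRRead);
    put32(&p, pcr_index);
    return (size_t)(p - buf);
}
```

The 20-byte value unmarshalled afterwards is the SHA-1-sized PCR digest in the response body.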



--------------ms060509000209010304090602
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE4NTMxN1owIwYJKoZIhvcNAQkEMRYEFOB6/eJ/PhbHNa5t
82NAKtdeNsC0MGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYAMX7xxtTt4BTwXER+A17cFlJAQTbqcbAAZ
5vOhL0Eu9+3RS9eBYumF39uT2FnQ99YaZ9/Scl9Bm4W4HkqzpB/lSJjM55oUloshMV5Z2ckp
b1f+4anknYIxSs8fi+W2ZSTnywL0M8pa9elfPTO8BvK7VJNET96YlbB/79mqvppj/wAAAAAA
AA==
--------------ms060509000209010304090602--


--===============5134661836852435559==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5134661836852435559==--



From xen-devel-bounces@lists.xen.org Tue Dec 04 18:55:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfxdu-0007Oa-8M; Tue, 04 Dec 2012 18:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Tfxds-0007OK-7L
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:54:52 +0000
Received: from [85.158.143.35:14264] by server-2.bemta-4.messagelabs.com id
	DA/16-28922-BF64EB05; Tue, 04 Dec 2012 18:54:51 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1354647288!14405065!1
X-Originating-IP: [209.85.212.48]
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1576 invoked from network); 4 Dec 2012 18:54:49 -0000
Received: from mail-vb0-f48.google.com (HELO mail-vb0-f48.google.com)
	(209.85.212.48)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:54:49 -0000
From xen-devel-bounces@lists.xen.org Tue Dec 04 18:55:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfxdu-0007Oa-8M; Tue, 04 Dec 2012 18:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Tfxds-0007OK-7L
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:54:52 +0000
Received: from [85.158.143.35:14264] by server-2.bemta-4.messagelabs.com id
	DA/16-28922-BF64EB05; Tue, 04 Dec 2012 18:54:51 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1354647288!14405065!1
X-Originating-IP: [209.85.212.48]
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1576 invoked from network); 4 Dec 2012 18:54:49 -0000
Received: from mail-vb0-f48.google.com (HELO mail-vb0-f48.google.com)
	(209.85.212.48)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:54:49 -0000
Received: by mail-vb0-f48.google.com with SMTP id fc21so3475197vbb.7
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Dec 2012 10:53:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=JRTjbkyj2dAnNvzkFgwPBGmB/WavcgY2eZLCkxCiwUY=;
	b=oMvUNWqaZ0L+oqLHe1jEDjk2HTBLL+8ajN2YElCUPpjiwZdcDJACvoUklrZpTkKZfw
	yDwEVZ5bDbwTKMeiyL59QOs3r7jdqveYJTLzn+fO84ifr5beoNtcKYG8B4jczB3ym9Rt
	2FtKz0uv+1IWrx3/cw5BoTRd/csfbxDnilz5LJzMPB+ScciYenVLuWCpQxTF9eZH2I51
	HRB2KytQezdtsnOu5azALs4IsOu0I7aMtOT9b8MGLOUWzoIZ2ZhFVVJvpPNXFW+OgPpj
	K6vdUL9DdP5KUfLufHPLEcSfD9Tdt0S21TDzOQudIBG3amrcf5l9RAjXVaL4xyjAMaX3
	KNkw==
MIME-Version: 1.0
Received: by 10.52.92.144 with SMTP id cm16mr10858423vdb.36.1354647228968;
	Tue, 04 Dec 2012 10:53:48 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Tue, 4 Dec 2012 10:53:48 -0800 (PST)
In-Reply-To: <7265520b0188740d3dfa.1354552499@Solace>
References: <patchbomb.1354552497@Solace>
	<7265520b0188740d3dfa.1354552499@Solace>
Date: Tue, 4 Dec 2012 18:53:48 +0000
X-Google-Sender-Auth: 7C04IqKKELmG4mQjpukBPMeFONE
Message-ID: <CAFLBxZaor6aBA6AzAGrGhVBCG+UMS+6G_Q0SKQCgw-hmWuDLxA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Dario Faggioli <raistlin@linux.it>
Cc: xen-devel <xen-devel@lists.xensource.com>, Keir Fraser <keir@xen.org>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 3] xen: tracing: introduce
 per-scheduler trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2386283722053980556=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2386283722053980556==
Content-Type: multipart/alternative; boundary=20cf307cff96e0ef6f04d00b62c7

--20cf307cff96e0ef6f04d00b62c7
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Dec 3, 2012 at 4:34 PM, Dario Faggioli <raistlin@linux.it> wrote:

> So that it becomes possible to create specific scheduling events
> within each scheduler without worrying about overlaps,
> and also without giving up the ability to recognise them
> unambiguously. The latter is deemed useful, since we can have
> more than one scheduler running at the same time, thanks to
> cpupools.
>
> The event ID is 12 bits, and this change uses the upper 3 of them
> for the 'scheduler ID'. This means we're limited to 8 schedulers
> and to 512 scheduler specific tracing events. Both seem reasonable
> limitations as of now.
>
> This also converts the existing credit2 tracing (the only scheduler
> generating tracing events up to now) to the new system.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>




>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -29,18 +29,24 @@
>  #define d2printk(x...)
>  //#define d2printk printk
>
> -#define TRC_CSCHED2_TICK        TRC_SCHED_CLASS + 1
> -#define TRC_CSCHED2_RUNQ_POS    TRC_SCHED_CLASS + 2
> -#define TRC_CSCHED2_CREDIT_BURN TRC_SCHED_CLASS + 3
> -#define TRC_CSCHED2_CREDIT_ADD  TRC_SCHED_CLASS + 4
> -#define TRC_CSCHED2_TICKLE_CHECK TRC_SCHED_CLASS + 5
> -#define TRC_CSCHED2_TICKLE       TRC_SCHED_CLASS + 6
> -#define TRC_CSCHED2_CREDIT_RESET TRC_SCHED_CLASS + 7
> -#define TRC_CSCHED2_SCHED_TASKLET TRC_SCHED_CLASS + 8
> -#define TRC_CSCHED2_UPDATE_LOAD   TRC_SCHED_CLASS + 9
> -#define TRC_CSCHED2_RUNQ_ASSIGN   TRC_SCHED_CLASS + 10
> -#define TRC_CSCHED2_UPDATE_VCPU_LOAD   TRC_SCHED_CLASS + 11
> -#define TRC_CSCHED2_UPDATE_RUNQ_LOAD   TRC_SCHED_CLASS + 12
> +/*
> + * Credit2 tracing events ("only" 512 available!). Check
> + * include/public/trace.h for more details.
> + */
> +#define TRC_CSCHED2_EVENT(_e)        ((TRC_SCHED_CLASS|TRC_MASK_CSCHED2) + _e)
>

I think I would make this generic, and put it in trace.h -- maybe something
like this?  (Haven't run this through a compiler)

#define TRC_SCHED_CLASS_EVT(_c, _e) \
  ((TRC_SCHED_CLASS|(TRC_SCHED_##_c<<TRC_SCHED_MASK_SHIFT))+(_e&TRC_SCHED_CLASS_MASK))

Then these would look like:
#define TRC_CSCHED2_TICK TRC_SCHED_CLASS_EVT(CSCHED2,1)


> +
> +#define TRC_CSCHED2_TICK             (TRC_CSCHED2_EVENT(1))
> +#define TRC_CSCHED2_RUNQ_POS         (TRC_CSCHED2_EVENT(2))
> +#define TRC_CSCHED2_CREDIT_BURN      (TRC_CSCHED2_EVENT(3))
> +#define TRC_CSCHED2_CREDIT_ADD       (TRC_CSCHED2_EVENT(4))
> +#define TRC_CSCHED2_TICKLE_CHECK     (TRC_CSCHED2_EVENT(5))
> +#define TRC_CSCHED2_TICKLE           (TRC_CSCHED2_EVENT(6))
> +#define TRC_CSCHED2_CREDIT_RESET     (TRC_CSCHED2_EVENT(7))
> +#define TRC_CSCHED2_SCHED_TASKLET    (TRC_CSCHED2_EVENT(8))
> +#define TRC_CSCHED2_UPDATE_LOAD      (TRC_CSCHED2_EVENT(9))
> +#define TRC_CSCHED2_RUNQ_ASSIGN      (TRC_CSCHED2_EVENT(10))
> +#define TRC_CSCHED2_UPDATE_VCPU_LOAD (TRC_CSCHED2_EVENT(11))
> +#define TRC_CSCHED2_UPDATE_RUNQ_LOAD (TRC_CSCHED2_EVENT(12))
>
>  /*
>   * WARNING: This is still in an experimental phase.  Status and work can
> be found at the
> diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
> --- a/xen/include/public/trace.h
> +++ b/xen/include/public/trace.h
> @@ -57,6 +57,26 @@
>  #define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
>  #define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
>
> +/*
> + * Per-scheduler masks, to identify scheduler specific events.
> + *
> + * The highest 3 bits of the last 12 bits of the above TRC_SCHED_*
> + * tracing classes are reserved for encoding what scheduler is producing
> + * the event. The last 9 bits are where the scheduler-specific event is
> + * encoded.
> + *
> + * This means we can have at most 8 tracing scheduling masks (which
> + * means at most 8 schedulers generating tracing events) and, in each
> + * scheduler, up to 512 different events.
> + */
> +#define TRC_SCHED_ID_BITS    3
> +#define TRC_SCHED_MASK_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
> +
> +#define TRC_MASK_CSCHED      (0 << TRC_SCHED_MASK_SHIFT)
> +#define TRC_MASK_CSCHED2     (1 << TRC_SCHED_MASK_SHIFT)
> +#define TRC_MASK_SEDF        (2 << TRC_SCHED_MASK_SHIFT)
> +#define TRC_MASK_ARINC653    (3 << TRC_SCHED_MASK_SHIFT)
>

I don't think "mask" is right here -- these aren't masks, they're numerical
values. :-)  If we use something like the #define above, then we can do:

#define TRC_SCHED_CSCHED 0
#define TRC_SCHED_CSCHED2
/*...*/

 -George

--20cf307cff96e0ef6f04d00b62c7--


--===============2386283722053980556==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2386283722053980556==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:55:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfxe9-0007Ro-RJ; Tue, 04 Dec 2012 18:55:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Tfxe8-0007RK-7W
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 18:55:08 +0000
Received: from [85.158.143.35:14791] by server-2.bemta-4.messagelabs.com id
	D6/56-28922-B074EB05; Tue, 04 Dec 2012 18:55:07 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1354647288!14405065!2
X-Originating-IP: [209.85.212.48]
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2397 invoked from network); 4 Dec 2012 18:55:06 -0000
Received: from mail-vb0-f48.google.com (HELO mail-vb0-f48.google.com)
	(209.85.212.48)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 18:55:06 -0000
Received: by mail-vb0-f48.google.com with SMTP id fc21so3475197vbb.7
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Dec 2012 10:55:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=EscpbdjeuATa85tQvqdx5Z84BDBeQaToFX9FYxPTkt4=;
	b=y2fn8CzHxSwOLh/mxh1AI6PuCVm6rCJMOl7f8ytL31iNIYmr4bBYECKYyyrjXRjSPW
	8ej8HuZvfRzbP/JWggn/KzgwfGMGzd/KdhIZFpv7sGl+xZGM4jAVDhutXV4is+vwgGFT
	h/nt/mC4VsbpWdqQtFTY1seJ3Fo2NDyKMGO4LUdkIIDDl0CgM2TMog+zi6YERd1FX6ig
	XtqszxArHWnlxR6xRNEoTPUiSgHC9chPC1AR0qaee81yGO1SDquLwkNgYb7x6z70d2rM
	6a3rS5IQy5QCWr+uGJdho3/JYjsWIwkI7a3SoQmem6V0uFW5iaF6YhaYmuog0BBGkp+i
	HdYw==
MIME-Version: 1.0
Received: by 10.52.68.242 with SMTP id z18mr11013723vdt.0.1354647305843; Tue,
	04 Dec 2012 10:55:05 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Tue, 4 Dec 2012 10:55:05 -0800 (PST)
In-Reply-To: <CAFLBxZaor6aBA6AzAGrGhVBCG+UMS+6G_Q0SKQCgw-hmWuDLxA@mail.gmail.com>
References: <patchbomb.1354552497@Solace>
	<7265520b0188740d3dfa.1354552499@Solace>
	<CAFLBxZaor6aBA6AzAGrGhVBCG+UMS+6G_Q0SKQCgw-hmWuDLxA@mail.gmail.com>
Date: Tue, 4 Dec 2012 18:55:05 +0000
X-Google-Sender-Auth: WuAuRhGyTa5kpbYsU1SuYHeA-QM
Message-ID: <CAFLBxZY3ftUVw47Nmo7LDHZGEBaNNnzX7RrX1nes3qEmFQgRhQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Dario Faggioli <raistlin@linux.it>
Cc: xen-devel <xen-devel@lists.xensource.com>, Keir Fraser <keir@xen.org>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 3] xen: tracing: introduce
 per-scheduler trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4267046545172791679=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4267046545172791679==
Content-Type: multipart/alternative; boundary=20cf3079bb9c75f5e404d00b675e

--20cf3079bb9c75f5e404d00b675e
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 4, 2012 at 6:53 PM, George Dunlap
<George.Dunlap@eu.citrix.com>wrote:

>
> #define TRC_SCHED_CSCHED 0
> #define TRC_SCHED_CSCHED2
> /*...*/
>

No idea what random key combination I pushed to make gmail just send that
e-mail... anyway, that should be a "1" after CSCHED2.

Thoughts?

 -George

--20cf3079bb9c75f5e404d00b675e--


--===============4267046545172791679==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4267046545172791679==--


List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4267046545172791679=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4267046545172791679==
Content-Type: multipart/alternative; boundary=20cf3079bb9c75f5e404d00b675e

--20cf3079bb9c75f5e404d00b675e
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 4, 2012 at 6:53 PM, George Dunlap
<George.Dunlap@eu.citrix.com>wrote:

>
> #define TRC_SCHED_CSCHED 0
> #define TRC_SCHED_CSCHED2
> /*...*/
>

No idea what random key combination I pushed that made gmail send that
e-mail prematurely... anyway, there should be a "1" after CSCHED2.

Thoughts?

 -George

--20cf3079bb9c75f5e404d00b675e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 4, 2012 at 6:53 PM, George Dunlap <span dir=3D"ltr">&lt;<a href=
=3D"mailto:George.Dunlap@eu.citrix.com" target=3D"_blank">George.Dunlap@eu.=
citrix.com</a>&gt;</span> wrote:<br><div class=3D"gmail_extra"><div class=
=3D"gmail_quote">
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><br><div class=3D"gmail_extra"><div class=3D=
"gmail_quote"><div>#define TRC_SCHED_CSCHED 0<br>#define TRC_SCHED_CSCHED2<=
br>/*...*/<span class=3D"HOEnZb"><font color=3D"#888888"><br>
</font></span></div></div></div></blockquote><div><br>No idea what random k=
ey combination I pushed to make gmail just sent that e-mail... anyway, that=
 should be a &quot;1&quot; after CSCHED2.<br><br>Thoughts?<br></div></div>
<br>=A0-George<br></div>

--20cf3079bb9c75f5e404d00b675e--


--===============4267046545172791679==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4267046545172791679==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 18:56:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 18:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfxfG-0007o7-NX; Tue, 04 Dec 2012 18:56:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0682a7ec59=matthew.fioravante@jhuapl.edu>)
	id 1TfxfF-0007nR-In
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 18:56:17 +0000
Received: from [193.109.254.147:14444] by server-15.bemta-14.messagelabs.com
	id F9/7F-12105-0574EB05; Tue, 04 Dec 2012 18:56:16 +0000
X-Env-Sender: prvs=0682a7ec59=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354647372!9447921!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5186 invoked from network); 4 Dec 2012 18:56:14 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Dec 2012 18:56:14 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 3b3f_16a9_1149d40d_8889_454b_8a9a_2f554e1a297d;
	Tue, 04 Dec 2012 13:55:55 -0500
Message-ID: <50BE4734.7070108@jhuapl.edu>
Date: Tue, 04 Dec 2012 13:55:48 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/9] vTPM new ABI, extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2309029627151822504=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============2309029627151822504==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms040000030702030201060507"

This is a cryptographically signed message in MIME format.

--------------ms040000030702030201060507
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Have you given any thought to the vtpm auto-shutdown semantics? I'd like
to preserve that if at all possible. Is the only conflicting patch there
patch 8?

On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
> This patch queue goes on top of Matthew Fioravante's [VTPM v5 0/7]
> series. While some of the patches have been posted before, all have
> been cleaned up a bit.
>
> [PATCH 1/9] stubdom: Change vTPM shared page ABI
> 	* Removed unneeded reconfiguration pieces
> 	* Removed feature-protocol-v2 xenstore key references
>
> [PATCH 2/9] stubdom/vtpm: Support locality field
> 	* Add distinct patch file instead of patching a patch
> 	* Comment on future use of the locality field
>
> [PATCH 3/9] stubdom/vtpm: correct the buffer size returned by
> 	* New patch
>
> [PATCH 4/9] stubdom/vtpm: Add locality-5 PCRs
> 	* New patch
>
> [PATCH 5/9] stubdom/vtpm: Allow reopen of closed devices
> 	* This used to use Reconfigure, but has been changed to use
> 	  the Closed states similar to blkback
>
> [PATCH 6/9] stubdom/vtpm: make state save operation atomic
> 	* Avoid hardcoded maximum saved state size
> 	* Better debug/error messages
>
> [PATCH 7/9] stubdom/grub: send kernel measurements to vTPM
> 	* Use PolarSSL SHA1 function
> 	* Use byteswap.h functions
>
> [PATCH 8/9] stubdom/vtpm: support multiple backends
> 	* Split into its own patch so it can be excluded if
> 	  automatic vTPM shutdown is required
>
> [PATCH 9/9] stubdom/vtpm: Add PCR pass-through to hardware TPM
> 	* New patch, RFC; an alternative to hwinitpcrs



--------------ms040000030702030201060507
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIwNDE4NTU0OFowIwYJKoZIhvcNAQkEMRYEFEXmvDROLZ3e/t6L
Ct5DW/E+H1X0MGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYABakSRGwYHDAT9n97kFquNkzevnOcEUPCu
ZRCEaLKSpNdun8QEBiBy2o3x/dQ6anis5y1+Tgh3z0JDk+LL0NKn4WVqGoPqlOcXSFkarxQo
tXH5IDYoqDTIRf+P6Wt/COIuKbEMArO+a2PuZoSnjPfPB0fC2SGpNkOX/DIN+Y9/BQAAAAAA
AA==
--------------ms040000030702030201060507--


--===============2309029627151822504==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2309029627151822504==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 19:00:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 19:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfxjd-0000Dh-Kq; Tue, 04 Dec 2012 19:00:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tfxjc-0000DS-Bc
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 19:00:48 +0000
Received: from [193.109.254.147:30708] by server-2.bemta-14.messagelabs.com id
	13/FD-20829-F584EB05; Tue, 04 Dec 2012 19:00:47 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354647645!9448221!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12606 invoked from network); 4 Dec 2012 19:00:45 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-3.tower-27.messagelabs.com with SMTP;
	4 Dec 2012 19:00:45 -0000
X-TM-IMSS-Message-ID: <e5f40b53000796d1@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id e5f40b53000796d1 ;
	Tue, 4 Dec 2012 14:00:27 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qB4J0T59001367; 
	Tue, 4 Dec 2012 14:00:29 -0500
Message-ID: <50BE484D.2060208@tycho.nsa.gov>
Date: Tue, 04 Dec 2012 14:00:29 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
References: <1354286955-23900-1-git-send-email-dgdegra@tycho.nsa.gov>
	<50BE4734.7070108@jhuapl.edu>
In-Reply-To: <50BE4734.7070108@jhuapl.edu>
Cc: "samuel.thibault@ens-lyon.org" <samuel.thibault@ens-lyon.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/9] vTPM new ABI, extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/04/2012 01:55 PM, Matthew Fioravante wrote:
> Have you given any thought to the vtpm auto-shutdown semantics? I'd like to preserve that if at all possible. Is the only conflicting patch there patch 8?
> 

#8 should be the only patch that conflicts, although I have not tested
that the shutdown works as expected after #5. I think the shutdown key
in Xenstore is the best solution for this issue (and don't really have
a strong preference for weak function vs a waitqueue).

> On 11/30/2012 09:49 AM, Daniel De Graaf wrote:
>> This patch queue goes on top of Matthew Fioravante's [VTPM v5 0/7]
>> series. While some of the patches have been posted before, all have
>> been cleaned up a bit.
>>
>> [PATCH 1/9] stubdom: Change vTPM shared page ABI
>>     * Removed unneeded reconfiguration pieces
>>     * Removed feature-protocol-v2 xenstore key references
>>
>> [PATCH 2/9] stubdom/vtpm: Support locality field
>>     * Add distinct patch file instead of patching a patch
>>     * Comment on future use of the locality field
>>
>> [PATCH 3/9] stubdom/vtpm: correct the buffer size returned by
>>     * New patch
>>
>> [PATCH 4/9] stubdom/vtpm: Add locality-5 PCRs
>>     * New patch
>>
>> [PATCH 5/9] stubdom/vtpm: Allow reopen of closed devices
>>     * This used to use Reconfigure, but has been changed to use
>>       the Closed states similar to blkback
>>
>> [PATCH 6/9] stubdom/vtpm: make state save operation atomic
>>     * Avoid hardcoded maximum saved state size
>>     * Better debug/error messages
>>
>> [PATCH 7/9] stubdom/grub: send kernel measurements to vTPM
>>     * Use PolarSSL SHA1 function
>>     * Use byteswap.h functions
>>
>> [PATCH 8/9] stubdom/vtpm: support multiple backends
>>     * Split into its own patch so it can be excluded if
>>       automatic vTPM shutdown is required
>>
>> [PATCH 9/9] stubdom/vtpm: Add PCR pass-through to hardware TPM
>>     * New patch, RFC; an alternative to hwinitpcrs
> 
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 19:11:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 19:11:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tfxu1-0000vK-Ep; Tue, 04 Dec 2012 19:11:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tfxu0-0000vD-Lz
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 19:11:32 +0000
Received: from [85.158.139.83:47597] by server-10.bemta-5.messagelabs.com id
	14/C7-09257-3EA4EB05; Tue, 04 Dec 2012 19:11:31 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354648289!21149085!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODkzNDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1566 invoked from network); 4 Dec 2012 19:11:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 19:11:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,217,1355097600"; d="scan'208";a="216384639"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 19:10:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 14:10:59 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TfxtT-0003dN-6R;
	Tue, 04 Dec 2012 19:10:59 +0000
Message-ID: <50BE4AC2.9010507@eu.citrix.com>
Date: Tue, 4 Dec 2012 19:10:58 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>
In-Reply-To: <bf4b06eda19a03a6aa0b.1354552500@Solace>
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 16:35, Dario Faggioli wrote:
> +            /* Avoid TRACE_* to avoid a lot of useless !tb_init_done checks */
> +            for_each_cpu(cpu, &mask)
> +            {
> +                struct {
> +                    unsigned cpu:8;
> +                } d;
> +                d.cpu = cpu;
> +                trace_var(TRC_CSCHED_TICKLE, 0,
> +                          sizeof(d),
> +                          (unsigned char*)&d);

Why not just TRC_1D()?  The tracing infrastructure can only set the size 
at a granularity of 32-bit words anyway, and at this point cpu is 
"unsigned int", which will be a single word.

Other than that, everything looks good.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 19:34:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 19:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfyGK-0001oC-LP; Tue, 04 Dec 2012 19:34:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lukas@laukamp.me>)
	id 1TfyGI-0001o1-Tj; Tue, 04 Dec 2012 19:34:35 +0000
Received: from [193.109.254.147:34509] by server-9.bemta-14.messagelabs.com id
	F2/85-30773-9405EB05; Tue, 04 Dec 2012 19:34:33 +0000
X-Env-Sender: lukas@laukamp.me
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354649670!9213307!1
X-Originating-IP: [5.9.218.243]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24921 invoked from network); 4 Dec 2012 19:34:30 -0000
Received: from mailer0.lippux.de (HELO mailer0.lippux.de) (5.9.218.243)
	by server-16.tower-27.messagelabs.com with SMTP;
	4 Dec 2012 19:34:30 -0000
Received: from localhost (localhost [127.0.0.1])
	by mailer0.lippux.de (Postfix) with ESMTP id 4B6212C216;
	Tue,  4 Dec 2012 20:34:44 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mailer1.lippux.de
Received: from mailer0.lippux.de ([127.0.0.1])
	by localhost (mailer0.lippux.de [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id GBIef6hycC6j; Tue,  4 Dec 2012 20:34:43 +0100 (CET)
Received: from [127.0.0.1] (ashlynn.lippux.de [5.9.218.242])
	by mailer0.lippux.de (Postfix) with ESMTPSA id 1C8ED2C212;
	Tue,  4 Dec 2012 20:34:41 +0100 (CET)
Message-ID: <50BE503B.8010901@laukamp.me>
Date: Tue, 04 Dec 2012 20:34:19 +0100
From: Lukas Laukamp <lukas@laukamp.me>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:16.0) Gecko/20121026 Thunderbird/16.0.2
MIME-Version: 1.0
To: xen-users <xen-users@lists.xen.org>
References: <50BE46C0.8020406@laukamp.me>
In-Reply-To: <50BE46C0.8020406@laukamp.me>
X-Forwarded-Message-Id: <50BE46C0.8020406@laukamp.me>
Cc: port-xen@NetBSD.org, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Fwd: Compilation of Xen 4.2 Utils
 breaks on NetBSD 6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2920205431551750612=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============2920205431551750612==
Content-Type: multipart/alternative;
 boundary="------------070901050605000809060504"

This is a multi-part message in MIME format.
--------------070901050605000809060504
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit




-------- Original Message --------
Subject: 	Re: [Xen-users] Fwd: Compilation of Xen 4.2 Utils breaks on
NetBSD 6
Date: 	Tue, 04 Dec 2012 19:53:52 +0100
From: 	Lukas Laukamp <lukas@laukamp.me>
To: 	Roger Pau Monné <roger.pau@citrix.com>
CC: 	xen-users@lists.xen.org <xen-users@lists.xen.org>



On 04.12.2012 17:30, Roger Pau Monné wrote:
> On 04/12/12 15:43, Lukas Laukamp wrote:
>> On 04.12.2012 15:10, Roger Pau Monné wrote:
>>> On 04/12/12 14:45, Lukas Laukamp wrote:
>>>> Hello all,
>>>>
>>>> because there are still problems building Xen 4.2 on NetBSD (there was
>>>> also another thread on the port-xen list), I am forwarding this message
>>>> to get a solution for the problem. The complete output of my build is
>>>> in a log file in the attachment.
>>>>
>>>> I used these commands for compilation:
>>>>
>>>> ./configure PYTHON=/usr/pkg/bin/python2.7 APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib --prefix=/usr/xen42
>>>> gmake PYTHON=/usr/pkg/bin/python2.7 xen
>>>> gmake tools
>>>>
>>>> I took the commands from this wiki article: http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD
>>>>
>>>> The build error appears in the tools target in libxl.
>>>>
>>>> This is the last mail from port-xen list related to this theme:
>>>>
>>>> On 30/11/12 21:16, Mike Bowie wrote:
>>>>
>>>>> On 11/30/12 12:13 PM, Jeff Rizzo wrote:
>>>>>> Anyone up for creating a pkgsrc package for xen 4.2?  There's clearly a
>>>>>> lot to be done, and my pkgsrc-fu is not all that great.
>>>>> I could be up for that... might not be until next week, but if the build
>>>>> steps all work out, I should be able to cobble something together into
>>>>> pkgsrc/wip. (Which would motivate me to get a box onto 4.2 also...
>>>>> double win.)
>>>> I would definitely help; this will probably require some Makefile
>>>> changes, which I think should be submitted upstream.
>>>>
>>>> Is the problem solvable without big changes to the build system, so that 4.2 can run on a NetBSD 6 box? Or is it impossible to compile the 4.2 toolstack on NetBSD without big changes?
>>>>
>>>>
>>>>
>>>> -------- Original Message --------
>>>> Subject: 	Compilation of Xen 4.2 Utils breaks on NetBSD 6
>>>> Date: 	Mon, 3 Dec 2012 17:19:16 +0000
>>>> From: 	Miguel Clara <miguelmclara@gmail.com>
>>>> To: 	port-xen@netbsd.org, lukas@laukamp.me
>>>>
>>>>
>>>>
>>>> Lukas Laukamp <lukas@laukamp.me> writes:
>>>>
>>>>> Hey all,
>>>>>
>>>>> I am trying to compile Xen 4.2 on NetBSD 6. The hypervisor itself compiled
>>>>> fine, but the compilation of the utils breaks with this error:
>>>>>
>>>>> In file included from xl_cmdimpl.c:40:0:
>>>>> libxl_json.h:18:27: fatal error: yajl/yajl_gen.h: No such file or
>>>> directory
>>>>> compilation terminated.
>>>>> gmake[3]: *** [xl_cmdimpl.o] Error 1
>>>>> gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libxl'
>>>>> gmake[2]: *** [subdir-install-libxl] Error 2
>>>>> gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
>>>>> gmake[1]: *** [subdirs-install] Error 2
>>>>> gmake[1]: Leaving directory `/root/xen-4.2.0/tools'
>>>>> gmake: *** [install-tools] Error 2
>>>>> testdom0#
>>>>>
>>>>> I passed the needed options to the configure script so that it searches
>>>>> in /usr/pkg/include/ and /usr/pkg/lib and so on. The file which is
>>>>> reported as missing actually exists in /usr/pkg/include/yajl/, so I don't
>>>>> understand why the file could not be found.
>>>>>
>>>>> I hope someone can help me.
>>>>>
>>>>> Best Regards
>>>>>
>>>>>
>>>> I'm trying to build following the guide at:
>>>> http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD
>>>>
>>>> All works fine until I try to build "tools"
>>>>
>>>> gmake[3]: Entering directory `/home/xen/xen-4.2.0/tools/libxl'
>>>> rm -f _paths.h.tmp.tmp; echo "SBINDIR=\"/usr/pkg/sbin\""
>>>>>> _paths.h.tmp.tmp; echo "BINDIR=\"/usr/pkg/bin\"">>_paths.h.tmp.tmp;
>>>> echo "LIBEXEC=\"/usr/pkg/l
>>>> ibexec\"">>_paths.h.tmp.tmp; echo "LIBDIR=\"/usr/pkg/lib\""
>>>>>> _paths.h.tmp.tmp; echo "SHAREDIR=\"/usr/pkg/share\""
>>>>>> _paths.h.tmp.tmp; echo "PRIVATE_BIND
>>>> IR=\"/usr/pkg/bin\"">>_paths.h.tmp.tmp; echo
>>>> "XENFIRMWAREDIR=\"/usr/pkg/lib/xen/boot\"">>_paths.h.tmp.tmp; echo
>>>> "XEN_CONFIG_DIR=\"/usr/pkg/etc/xen\"">>_
>>>> paths.h.tmp.tmp; echo "XEN_SCRIPT_DIR=\"/usr/pkg/etc/xen/scripts\""
>>>>>> _paths.h.tmp.tmp; echo "XEN_LOCK_DIR=\"/usr/pkg/var/lib\""
>>>>>> _paths.h.tmp.tmp; echo
>>>> "XEN_RUN_DIR=\"/usr/pkg/var/run/xen\"">>_paths.h.tmp.tmp; echo
>>>> "XEN_PAGING_DIR=\"/usr/pkg/var/lib/xen/xenpaging\"">>_paths.h.tmp.tmp;
>>>> if ! cmp -s _path
>>>> s.h.tmp.tmp _paths.h.tmp; then mv -f _paths.h.tmp.tmp _paths.h.tmp; else
>>>> rm -f _paths.h.tmp.tmp; fi
>>>> sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp>_paths.h.2.tmp
>>>> rm -f _paths.h.tmp
>>>> if ! cmp -s _paths.h.2.tmp _paths.h; then mv -f _paths.h.2.tmp _paths.h;
>>>> else rm -f _paths.h.2.tmp; fi
>>>> gcc -pthread -o testidl testidl.o libxlutil.so
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so
>>>> -Wl,-rpath-link=/home/miguelc
>>>> /xen-data/xen-4.2.0/tools/libxl/../../tools/libxc
>>>> -Wl,-rpath-link=/home/xen/xen-4.2.0/tools/libxl/../../tools/xenstore
>>>> /home/xen/x
>>>> en-4.2.0/tools/libxl/../../tools/libxc/libxenctrl.so -L/usr/pkg/lib
>>>> ld: warning: libyajl.so.2, needed by
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so, not
>>>> found (try using -rpath or -rpath-lin
>>>> k)
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_parse'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_complete_parse'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_null'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_array_open'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_string'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_map_close'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_get_buf'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_free'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_alloc'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_array_close'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_map_open'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_get_error'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_free_error'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_integer'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_alloc'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_free'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_bool'
>>>> gmake[3]: *** [testidl] Error 1
>>>> gmake[3]: Leaving directory `/home/xen/xen-4.2.0/tools/libxl'
>>>> gmake[2]: *** [subdir-install-libxl] Error 2
>>>> gmake[2]: Leaving directory `/home/xen/xen-4.2.0/tools'
>>>> gmake[1]: *** [subdirs-install] Error 2
>>>> gmake[1]: Leaving directory `/home/xen/xen-4.2.0/tools'
>>>> gmake: *** [install-tools] Error 2
>>>>
>>>>
>>>> I'm using yajl version 2... could this be the problem? Is there any patch?
>>> yajl 2 should be supported. Since I guess you installed yajl from
>>> pkgsrc, could you try setting LD_LIBRARY_PATH=/usr/pkg/lib before compiling?
>>>
>>> See the following message from Riz:
>>> http://mail-index.netbsd.org/port-xen/2012/11/30/msg007740.html
>>>
>>> Indeed this should be looked at and fixed.
>>>
>>>
>>> _______________________________________________
>>> Xen-users mailing list
>>> Xen-users@lists.xen.org
>>> http://lists.xen.org/xen-users
>> Hello,
>>
>> when I pass LD_LIBRARY_PATH=/usr/pkg/lib to gmake while compiling the
>> tools target, libxl gets compiled. But later it breaks when building
>> the filesystem structure for the tools-install target because it can't
>> find pygrub. The complete output of the build process is in the
>> attachment.
> Could you remove the dist folder and try again? AFAIK it works for me
> without problems.
>

Hello,

I deleted the dist folder and now everything compiles fine. I have
installed Xen in /usr/xen42/ and added those directories to the PATH
variable. But now, for example, xl can't find libxlutil, although the
library exists in /usr/xen42/lib. What is this problem related to? Was
xl compiled wrong, or is something else wrong?
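One likely workaround (a sketch, not from the thread: it assumes xl was built without an rpath pointing at the install tree, with the /usr/xen42 and /usr/pkg paths taken from the messages above):

```shell
# Make the private toolstack libraries visible to the runtime linker,
# so that xl can locate libxlutil (and yajl) at run time.
export PATH="/usr/xen42/bin:/usr/xen42/sbin:$PATH"
export LD_LIBRARY_PATH="/usr/xen42/lib:/usr/pkg/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Baking the library path in at link time (e.g. via -Wl,-rpath) would be the more permanent fix.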

Best Regards





--------------070901050605000809060504
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 8bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    <div class="moz-forward-container"><br>
      <br>
      -------- Original-Nachricht --------
      <table class="moz-email-headers-table" border="0" cellpadding="0"
        cellspacing="0">
        <tbody>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">Betreff:
            </th>
            <td>Re: [Xen-users] Fwd: Compilation of Xen 4.2 Utils breaks
              on NetBSD 6</td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">Datum: </th>
            <td>Tue, 04 Dec 2012 19:53:52 +0100</td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">Von: </th>
            <td>Lukas Laukamp <a class="moz-txt-link-rfc2396E" href="mailto:lukas@laukamp.me">&lt;lukas@laukamp.me&gt;</a></td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">An: </th>
            <td>Roger Pau MonnÃ© <a class="moz-txt-link-rfc2396E" href="mailto:roger.pau@citrix.com">&lt;roger.pau@citrix.com&gt;</a></td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">Kopie
              (CC): </th>
            <td><a class="moz-txt-link-abbreviated" href="mailto:xen-users@lists.xen.org">xen-users@lists.xen.org</a> <a class="moz-txt-link-rfc2396E" href="mailto:xen-users@lists.xen.org">&lt;xen-users@lists.xen.org&gt;</a></td>
          </tr>
        </tbody>
      </table>
      <br>
      <br>
      <pre>Am 04.12.2012 17:30, schrieb Roger Pau MonnÃ©:
&gt; On 04/12/12 15:43, Lukas Laukamp wrote:
&gt;&gt; Am 04.12.2012 15:10, schrieb Roger Pau MonnÃ©:
&gt;&gt;&gt; On 04/12/12 14:45, Lukas Laukamp wrote:
&gt;&gt;&gt;&gt; Hello all,
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; because there are still problems to build Xen 4.2 on NetBSD (there was
&gt;&gt;&gt;&gt; also another thread on the port-xen list) I forward this message to get
&gt;&gt;&gt;&gt; a solution for the problem. The complete output of my build is in a log
&gt;&gt;&gt;&gt; file in the attachment.
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I used this commands for compilation:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; ./configure PYTHON=/usr/pkg/bin/python2.7 APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib --prefix=/usr/xen42
&gt;&gt;&gt;&gt; gmake PYTHON=/usr/pkg/bin/python2.7 xen
&gt;&gt;&gt;&gt; gmake tools
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I took the commans from this wiki article: <a class="moz-txt-link-freetext" href="http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD">http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD</a>
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; The build error appears in the tools target in libxl.
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; This is the last mail from port-xen list related to this theme:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; On 30/11/12 21:16, Mike Bowie wrote:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; On 11/30/12 12:13 PM, Jeff Rizzo wrote:
&gt;&gt;&gt;&gt;&gt;&gt; Anyone up for creating a pkgsrc package for xen 4.2?  There's clearly a
&gt;&gt;&gt;&gt;&gt;&gt; lot to be done, and my pkgsrc-fu is not all that great.
&gt;&gt;&gt;&gt;&gt; I could be up for that... might not be until next week, but if the build
&gt;&gt;&gt;&gt;&gt; steps all work out, I should be able to cobble something together into
&gt;&gt;&gt;&gt;&gt; pkgsrc/wip. (Which would motivate me to get a box onto 4.2 also...
&gt;&gt;&gt;&gt;&gt; double win.)
&gt;&gt;&gt;&gt; I would definetely help, this will probably require some Makefile
&gt;&gt;&gt;&gt; changes, which I think should be submitted upstream.
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; Is the problem solvable without big changes in the build system to get 4.2 running on a NetBSD 6 box? Or isn't it able to compile th toolstack on NetBSD for 4.2 without big changes?
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; -------- Original-Nachricht --------
&gt;&gt;&gt;&gt; Betreff: 	Compilation of Xen 4.2 Utils breaks on NetBSD 6
&gt;&gt;&gt;&gt; Datum: 	Mon, 3 Dec 2012 17:19:16 +0000
&gt;&gt;&gt;&gt; Von: 	Miguel Clara<a class="moz-txt-link-rfc2396E" href="mailto:miguelmclara@gmail.com">&lt;miguelmclara@gmail.com&gt;</a>
&gt;&gt;&gt;&gt; An: 	<a class="moz-txt-link-abbreviated" href="mailto:port-xen@netbsd.org">port-xen@netbsd.org</a>, <a class="moz-txt-link-abbreviated" href="mailto:lukas@laukamp.me">lukas@laukamp.me</a>
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; Lukas Laukamp&lt;lukas&lt;at&gt;  laukamp.me<a class="moz-txt-link-rfc2396E" href="http://laukamp.me">&lt;http://laukamp.me&gt;</a>&gt;  writes:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; Hey all,
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; I trying to compile Xen 4.2 on NetBSD 6. The hypervisor it self compiled
&gt;&gt;&gt;&gt;&gt; fine but the compilation of the utils breaks with this error:
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; In file included from xl_cmdimpl.c:40:0:
&gt;&gt;&gt;&gt;&gt; libxl_json.h:18:27: fatal error: yajl/yajl_gen.h: No such file or
&gt;&gt;&gt;&gt; directory
&gt;&gt;&gt;&gt;&gt; compilation terminated.
&gt;&gt;&gt;&gt;&gt; gmake[3]: *** [xl_cmdimpl.o] Error 1
&gt;&gt;&gt;&gt;&gt; gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libxl'
&gt;&gt;&gt;&gt;&gt; gmake[2]: *** [subdir-install-libxl] Error 2
&gt;&gt;&gt;&gt;&gt; gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
&gt;&gt;&gt;&gt;&gt; gmake[1]: *** [subdirs-install] Error 2
&gt;&gt;&gt;&gt;&gt; gmake[1]: Leaving directory `/root/xen-4.2.0/tools'
&gt;&gt;&gt;&gt;&gt; gmake: *** [install-tools] Error 2
&gt;&gt;&gt;&gt;&gt; testdom0#
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; I passed the needed options to the configure script so that it searches
&gt;&gt;&gt;&gt;&gt; in /usr/pkg/include/ and /usr/pkg/lib and so on. The file which is
&gt;&gt;&gt;&gt;&gt; declaired to don't exist, exists in /usr/pkg/include/yajl/ so I don't
&gt;&gt;&gt;&gt;&gt; understand why the file could not be found.
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; Hope that someone could help me.
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; Best Regards
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I'm trying to build following the guide at:
&gt;&gt;&gt;&gt; <a class="moz-txt-link-freetext" href="http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD">http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD</a>
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; All works fine until I try to build "tools"
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; gmake[3]: Entering directory `/home/xen/xen-4.2.0/tools/libxl' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; rm -f _paths.h.tmp.tmp; echo "SBINDIR=\"/usr/pkg/sbin\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "BINDIR=\"/usr/pkg/bin\""&gt;&gt;_paths.h.tmp.tmp;
&gt;&gt;&gt;&gt; echo "LIBEXEC=\"/usr/pkg/lâ”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; ibexec\""&gt;&gt;_paths.h.tmp.tmp; echo "LIBDIR=\"/usr/pkg/lib\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "SHAREDIR=\"/usr/pkg/share\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "PRIVATE_BINDâ”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; IR=\"/usr/pkg/bin\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XENFIRMWAREDIR=\"/usr/pkg/lib/xen/boot\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XEN_CONFIG_DIR=\"/usr/pkg/etc/xen\""&gt;&gt;_â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; paths.h.tmp.tmp; echo "XEN_SCRIPT_DIR=\"/usr/pkg/etc/xen/scripts\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "XEN_LOCK_DIR=\"/usr/pkg/var/lib\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; "XEN_RUN_DIR=\"/usr/pkg/var/run/xen\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XEN_PAGING_DIR=\"/usr/pkg/var/lib/xen/xenpaging\""&gt;&gt;_paths.h.tmp.tmp;
&gt;&gt;&gt;&gt; if ! cmp -s _pathâ”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; s.h.tmp.tmp _paths.h.tmp; then mv -f _paths.h.tmp.tmp _paths.h.tmp; else
&gt;&gt;&gt;&gt; rm -f _paths.h.tmp.tmp; fi â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp&gt;_paths.h.2.tmp
&gt;&gt;&gt;&gt; â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; rm -f _paths.h.tmp â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; if ! cmp -s _paths.h.2.tmp _paths.h; then mv -f _paths.h.2.tmp _paths.h;
&gt;&gt;&gt;&gt; else rm -f _paths.h.2.tmp; fi â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gcc -pthread -o testidl testidl.o libxlutil.so
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so
&gt;&gt;&gt;&gt; -Wl,-rpath-link=/home/miguelcâ”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /xen-data/xen-4.2.0/tools/libxl/../../tools/libxc
&gt;&gt;&gt;&gt; -Wl,-rpath-link=/home/xen/xen-4.2.0/tools/libxl/../../tools/xenstore
&gt;&gt;&gt;&gt; /home/xen/xâ”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; en-4.2.0/tools/libxl/../../tools/libxc/libxenctrl.so -L/usr/pkg/lib
&gt;&gt;&gt;&gt; â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; ld: warning: libyajl.so.2, needed by
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so, not
&gt;&gt;&gt;&gt; found (try using -rpath or -rpath-linâ”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; k) â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_parse' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_complete_parse' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_null' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_array_open' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_string' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_map_close' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_get_buf' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_free' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_alloc' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_array_close' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_map_open' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_get_error' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_free_error' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_integer' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_alloc' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_free' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_bool' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake[3]: *** [testidl] Error 1 â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake[3]: Leaving directory `/home/xen/xen-4.2.0/tools/libxl' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake[2]: *** [subdir-install-libxl] Error 2 â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake[2]: Leaving directory `/home/xen/xen-4.2.0/tools' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake[1]: *** [subdirs-install] Error 2 â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake[1]: Leaving directory `/home/xen/xen-4.2.0/tools' â”‚Â·Â·Â·Â·Â·Â·Â·Â·Â·Â·
&gt;&gt;&gt;&gt; gmake: *** [install-tools] Error 2
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I'm using yajl version 2....  could this be the problem? Is there any patch?
&gt;&gt;&gt; yajl 2 should be supported. Since I guess you installed yajl from
&gt;&gt;&gt; pkgsrc, could you try setting LD_LIBRARY_PATH=/usr/pkg/lib before compiling?
&gt;&gt;&gt;
&gt;&gt;&gt; See the following message from Riz:
&gt;&gt;&gt; <a class="moz-txt-link-freetext" href="http://mail-index.netbsd.org/port-xen/2012/11/30/msg007740.html">http://mail-index.netbsd.org/port-xen/2012/11/30/msg007740.html</a>
&gt;&gt;&gt;
&gt;&gt;&gt; Indeed this should be looked at and fixed.
&gt;&gt;&gt;
&gt;&gt;&gt;
&gt;&gt;&gt; _______________________________________________
&gt;&gt;&gt; Xen-users mailing list
&gt;&gt;&gt; <a class="moz-txt-link-abbreviated" href="mailto:Xen-users@lists.xen.org">Xen-users@lists.xen.org</a>
&gt;&gt;&gt; <a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-users">http://lists.xen.org/xen-users</a>
&gt;&gt; Hello,
&gt;&gt;
&gt;&gt; When I pass LD_LIBRARY_PATH=/usr/pkg/lib to gmake while compiling the
&gt;&gt; tools target, libxl gets compiled. But later the build breaks while
&gt;&gt; creating the filesystem structure for the tools-install target because
&gt;&gt; it can't find pygrub. The complete output of the build process is in
&gt;&gt; the attachment.
&gt; Could you remove the dist folder and try again? AFAIK it works for me
&gt; without problems.
&gt;

Hello,

I deleted the dist folder and now everything compiles fine. I have 
installed Xen in /usr/xen42/ and added this directory to the PATH 
variable. Now, for example, xl can't find libxlutil, although the 
library exists in /usr/xen42/lib. What is this problem related to? Was 
xl compiled incorrectly, or is something else wrong?

Best Regards
</pre>
      <br>
      <br>
    </div>
    <br>
  </body>
</html>

--------------070901050605000809060504--


--===============2920205431551750612==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2920205431551750612==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 19:34:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 19:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfyGK-0001oC-LP; Tue, 04 Dec 2012 19:34:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lukas@laukamp.me>)
	id 1TfyGI-0001o1-Tj; Tue, 04 Dec 2012 19:34:35 +0000
Received: from [193.109.254.147:34509] by server-9.bemta-14.messagelabs.com id
	F2/85-30773-9405EB05; Tue, 04 Dec 2012 19:34:33 +0000
X-Env-Sender: lukas@laukamp.me
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354649670!9213307!1
X-Originating-IP: [5.9.218.243]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24921 invoked from network); 4 Dec 2012 19:34:30 -0000
Received: from mailer0.lippux.de (HELO mailer0.lippux.de) (5.9.218.243)
	by server-16.tower-27.messagelabs.com with SMTP;
	4 Dec 2012 19:34:30 -0000
Received: from localhost (localhost [127.0.0.1])
	by mailer0.lippux.de (Postfix) with ESMTP id 4B6212C216;
	Tue,  4 Dec 2012 20:34:44 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mailer1.lippux.de
Received: from mailer0.lippux.de ([127.0.0.1])
	by localhost (mailer0.lippux.de [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id GBIef6hycC6j; Tue,  4 Dec 2012 20:34:43 +0100 (CET)
Received: from [127.0.0.1] (ashlynn.lippux.de [5.9.218.242])
	by mailer0.lippux.de (Postfix) with ESMTPSA id 1C8ED2C212;
	Tue,  4 Dec 2012 20:34:41 +0100 (CET)
Message-ID: <50BE503B.8010901@laukamp.me>
Date: Tue, 04 Dec 2012 20:34:19 +0100
From: Lukas Laukamp <lukas@laukamp.me>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:16.0) Gecko/20121026 Thunderbird/16.0.2
MIME-Version: 1.0
To: xen-users <xen-users@lists.xen.org>
References: <50BE46C0.8020406@laukamp.me>
In-Reply-To: <50BE46C0.8020406@laukamp.me>
X-Forwarded-Message-Id: <50BE46C0.8020406@laukamp.me>
Cc: port-xen@NetBSD.org, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Fwd: Compilation of Xen 4.2 Utils
 breaks on NetBSD 6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2920205431551750612=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============2920205431551750612==
Content-Type: multipart/alternative;
 boundary="------------070901050605000809060504"

This is a multi-part message in MIME format.
--------------070901050605000809060504
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit




-------- Original Message --------
Subject: 	Re: [Xen-users] Fwd: Compilation of Xen 4.2 Utils breaks on 
NetBSD 6
Date: 	Tue, 04 Dec 2012 19:53:52 +0100
From: 	Lukas Laukamp <lukas@laukamp.me>
To: 	Roger Pau Monné <roger.pau@citrix.com>
CC: 	xen-users@lists.xen.org <xen-users@lists.xen.org>



On 04.12.2012 17:30, Roger Pau Monné wrote:
> On 04/12/12 15:43, Lukas Laukamp wrote:
>> On 04.12.2012 15:10, Roger Pau Monné wrote:
>>> On 04/12/12 14:45, Lukas Laukamp wrote:
>>>> Hello all,
>>>>
>>>> because there are still problems building Xen 4.2 on NetBSD (there was
>>>> also another thread on the port-xen list), I am forwarding this message to get
>>>> a solution for the problem. The complete output of my build is in a log
>>>> file in the attachment.
>>>>
>>>> I used these commands for compilation:
>>>>
>>>> ./configure PYTHON=/usr/pkg/bin/python2.7 APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib --prefix=/usr/xen42
>>>> gmake PYTHON=/usr/pkg/bin/python2.7 xen
>>>> gmake tools
>>>>
>>>> I took the commands from this wiki article: http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD
>>>>
>>>> The build error appears in the tools target in libxl.
>>>>
>>>> This is the last mail from port-xen list related to this theme:
>>>>
>>>> On 30/11/12 21:16, Mike Bowie wrote:
>>>>
>>>>> On 11/30/12 12:13 PM, Jeff Rizzo wrote:
>>>>>> Anyone up for creating a pkgsrc package for xen 4.2?  There's clearly a
>>>>>> lot to be done, and my pkgsrc-fu is not all that great.
>>>>> I could be up for that... might not be until next week, but if the build
>>>>> steps all work out, I should be able to cobble something together into
>>>>> pkgsrc/wip. (Which would motivate me to get a box onto 4.2 also...
>>>>> double win.)
>>>> I would definitely help; this will probably require some Makefile
>>>> changes, which I think should be submitted upstream.
>>>>
>>>> Is the problem solvable without big changes to the build system, so that 4.2 runs on a NetBSD 6 box? Or is it impossible to compile the 4.2 toolstack on NetBSD without big changes?
>>>>
>>>>
>>>>
>>>> -------- Original Message --------
>>>> Subject: 	Compilation of Xen 4.2 Utils breaks on NetBSD 6
>>>> Date: 	Mon, 3 Dec 2012 17:19:16 +0000
>>>> From: 	Miguel Clara<miguelmclara@gmail.com>
>>>> To: 	port-xen@netbsd.org, lukas@laukamp.me
>>>>
>>>>
>>>>
>>>> Lukas Laukamp<lukas<at>  laukamp.me<http://laukamp.me>>  writes:
>>>>
>>>>> Hey all,
>>>>>
>>>>> I am trying to compile Xen 4.2 on NetBSD 6. The hypervisor itself compiled
>>>>> fine but the compilation of the utils breaks with this error:
>>>>>
>>>>> In file included from xl_cmdimpl.c:40:0:
>>>>> libxl_json.h:18:27: fatal error: yajl/yajl_gen.h: No such file or
>>>> directory
>>>>> compilation terminated.
>>>>> gmake[3]: *** [xl_cmdimpl.o] Error 1
>>>>> gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libxl'
>>>>> gmake[2]: *** [subdir-install-libxl] Error 2
>>>>> gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
>>>>> gmake[1]: *** [subdirs-install] Error 2
>>>>> gmake[1]: Leaving directory `/root/xen-4.2.0/tools'
>>>>> gmake: *** [install-tools] Error 2
>>>>> testdom0#
>>>>>
>>>>> I passed the needed options to the configure script so that it searches
>>>>> in /usr/pkg/include/ and /usr/pkg/lib and so on. The file which is
>>>>> reported as missing actually exists in /usr/pkg/include/yajl/, so I
>>>>> don't understand why it could not be found.
>>>>>
>>>>> Hope that someone could help me.
>>>>>
>>>>> Best Regards
>>>>>
>>>>>
>>>> I'm trying to build following the guide at:
>>>> http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD
>>>>
>>>> All works fine until I try to build "tools"
>>>>
>>>> gmake[3]: Entering directory `/home/xen/xen-4.2.0/tools/libxl'
>>>> rm -f _paths.h.tmp.tmp; echo "SBINDIR=\"/usr/pkg/sbin\""
>>>>>> _paths.h.tmp.tmp; echo "BINDIR=\"/usr/pkg/bin\"">>_paths.h.tmp.tmp;
>>>> echo "LIBEXEC=\"/usr/pkg/libexec\"">>_paths.h.tmp.tmp; echo "LIBDIR=\"/usr/pkg/lib\""
>>>>>> _paths.h.tmp.tmp; echo "SHAREDIR=\"/usr/pkg/share\""
>>>>>> _paths.h.tmp.tmp; echo "PRIVATE_BINDIR=\"/usr/pkg/bin\"">>_paths.h.tmp.tmp; echo
>>>> "XENFIRMWAREDIR=\"/usr/pkg/lib/xen/boot\"">>_paths.h.tmp.tmp; echo
>>>> "XEN_CONFIG_DIR=\"/usr/pkg/etc/xen\"">>_paths.h.tmp.tmp; echo
>>>> "XEN_SCRIPT_DIR=\"/usr/pkg/etc/xen/scripts\""
>>>>>> _paths.h.tmp.tmp; echo "XEN_LOCK_DIR=\"/usr/pkg/var/lib\""
>>>>>> _paths.h.tmp.tmp; echo
>>>> "XEN_RUN_DIR=\"/usr/pkg/var/run/xen\"">>_paths.h.tmp.tmp; echo
>>>> "XEN_PAGING_DIR=\"/usr/pkg/var/lib/xen/xenpaging\"">>_paths.h.tmp.tmp;
>>>> if ! cmp -s _paths.h.tmp.tmp _paths.h.tmp; then mv -f _paths.h.tmp.tmp _paths.h.tmp; else
>>>> rm -f _paths.h.tmp.tmp; fi
>>>> sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp>_paths.h.2.tmp
>>>> rm -f _paths.h.tmp
>>>> if ! cmp -s _paths.h.2.tmp _paths.h; then mv -f _paths.h.2.tmp _paths.h;
>>>> else rm -f _paths.h.2.tmp; fi
>>>> gcc -pthread -o testidl testidl.o libxlutil.so
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so
>>>> -Wl,-rpath-link=/home/miguelc/xen-data/xen-4.2.0/tools/libxl/../../tools/libxc
>>>> -Wl,-rpath-link=/home/xen/xen-4.2.0/tools/libxl/../../tools/xenstore
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxc/libxenctrl.so -L/usr/pkg/lib
>>>> ld: warning: libyajl.so.2, needed by
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so, not
>>>> found (try using -rpath or -rpath-link)
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_parse'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_complete_parse'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_null'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_array_open'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_string'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_map_close'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_get_buf'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_free'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_alloc'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_array_close'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_map_open'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_get_error'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_free_error'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_integer'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_alloc'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_free'
>>>> /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
>>>> undefined reference to `yajl_gen_bool'
>>>> gmake[3]: *** [testidl] Error 1
>>>> gmake[3]: Leaving directory `/home/xen/xen-4.2.0/tools/libxl'
>>>> gmake[2]: *** [subdir-install-libxl] Error 2
>>>> gmake[2]: Leaving directory `/home/xen/xen-4.2.0/tools'
>>>> gmake[1]: *** [subdirs-install] Error 2
>>>> gmake[1]: Leaving directory `/home/xen/xen-4.2.0/tools'
>>>> gmake: *** [install-tools] Error 2
>>>>
>>>>
>>>> I'm using yajl version 2....  could this be the problem? Is there any patch?
>>> yajl 2 should be supported. Since I guess you installed yajl from
>>> pkgsrc, could you try setting LD_LIBRARY_PATH=/usr/pkg/lib before compiling?
>>>
>>> See the following message from Riz:
>>> http://mail-index.netbsd.org/port-xen/2012/11/30/msg007740.html
>>>
>>> Indeed this should be looked at and fixed.
>>>
>>>
>>> _______________________________________________
>>> Xen-users mailing list
>>> Xen-users@lists.xen.org
>>> http://lists.xen.org/xen-users
>> Hello,
>>
>> When I pass LD_LIBRARY_PATH=/usr/pkg/lib to gmake while compiling the
>> tools target, libxl gets compiled. But later the build breaks while
>> creating the filesystem structure for the tools-install target because
>> it can't find pygrub. The complete output of the build process is in
>> the attachment.
> Could you remove the dist folder and try again? AFAIK it works for me
> without problems.
>

Hello,

I deleted the dist folder and now everything compiles fine. I have
installed Xen in /usr/xen42/ and added this directory to the PATH
variable. Now, for example, xl can't find libxlutil, although the
library exists in /usr/xen42/lib. What is this problem related to? Was
xl compiled incorrectly, or is something else wrong?

Best Regards
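
[Editorial note] For anyone landing on this thread with the same libyajl
link failure, the workaround discussed above can be condensed into the
following sketch. The /usr/pkg and /usr/xen42 paths are the ones used in
this thread; treat this as an untested outline, not an official recipe:

```shell
# Build sketch based on the commands and the LD_LIBRARY_PATH workaround
# discussed in this thread (NetBSD 6, yajl installed from pkgsrc).
# All paths are assumptions taken from the messages above.
export LD_LIBRARY_PATH=/usr/pkg/lib   # let the link editor/loader find libyajl.so.2
./configure PYTHON=/usr/pkg/bin/python2.7 \
    APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib \
    --prefix=/usr/xen42
gmake PYTHON=/usr/pkg/bin/python2.7 xen
gmake tools
```

The runtime problem reported above (xl not finding libxlutil under
/usr/xen42/lib) is the same mechanism at run time: exporting
LD_LIBRARY_PATH=/usr/xen42/lib before running xl, or building with an
embedded run path (e.g. adding -Wl,-R/usr/pkg/lib to LDFLAGS), should be
analogous workarounds.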




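[Editorial note] As a side note on the `_paths.h` generation visible in
the quoted build log: the libxl Makefile turns `VAR="value"` lines into C
`#define` lines with a single sed rewrite. A standalone demo of that
transform (the file names and the two sample variables are invented for
the example):

```shell
# Demonstrates the VAR="value" -> #define rewrite the libxl Makefile
# uses when generating _paths.h, as seen in the build log above.
tmpdir=$(mktemp -d)
printf 'SBINDIR="/usr/pkg/sbin"\nLIBDIR="/usr/pkg/lib"\n' > "$tmpdir/_paths.h.tmp"
# [^=]* captures the variable name up to the first '=', .* the value.
sed -e 's/\([^=]*\)=\(.*\)/#define \1 \2/g' "$tmpdir/_paths.h.tmp" > "$tmpdir/_paths.h"
cat "$tmpdir/_paths.h"
# prints:
# #define SBINDIR "/usr/pkg/sbin"
# #define LIBDIR "/usr/pkg/lib"
rm -r "$tmpdir"
```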

--------------070901050605000809060504
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 8bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    <div class="moz-forward-container"><br>
      <br>
      -------- Original Message --------
      <table class="moz-email-headers-table" border="0" cellpadding="0"
        cellspacing="0">
        <tbody>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">Subject:
            </th>
            <td>Re: [Xen-users] Fwd: Compilation of Xen 4.2 Utils breaks
              on NetBSD 6</td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">Date: </th>
            <td>Tue, 04 Dec 2012 19:53:52 +0100</td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">From: </th>
            <td>Lukas Laukamp <a class="moz-txt-link-rfc2396E" href="mailto:lukas@laukamp.me">&lt;lukas@laukamp.me&gt;</a></td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">To: </th>
            <td>Roger Pau Monné <a class="moz-txt-link-rfc2396E" href="mailto:roger.pau@citrix.com">&lt;roger.pau@citrix.com&gt;</a></td>
          </tr>
          <tr>
            <th align="RIGHT" nowrap="nowrap" valign="BASELINE">CC: </th>
            <td><a class="moz-txt-link-abbreviated" href="mailto:xen-users@lists.xen.org">xen-users@lists.xen.org</a> <a class="moz-txt-link-rfc2396E" href="mailto:xen-users@lists.xen.org">&lt;xen-users@lists.xen.org&gt;</a></td>
          </tr>
        </tbody>
      </table>
      <br>
      <br>
      <pre>On 04.12.2012 17:30, Roger Pau Monné wrote:
&gt; On 04/12/12 15:43, Lukas Laukamp wrote:
&gt;&gt; On 04.12.2012 15:10, Roger Pau Monné wrote:
&gt;&gt;&gt; On 04/12/12 14:45, Lukas Laukamp wrote:
&gt;&gt;&gt;&gt; Hello all,
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; because there are still problems building Xen 4.2 on NetBSD (there was
&gt;&gt;&gt;&gt; also another thread on the port-xen list), I am forwarding this message to get
&gt;&gt;&gt;&gt; a solution for the problem. The complete output of my build is in a log
&gt;&gt;&gt;&gt; file in the attachment.
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I used these commands for compilation:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; ./configure PYTHON=/usr/pkg/bin/python2.7 APPEND_INCLUDES=/usr/pkg/include APPEND_LIB=/usr/pkg/lib --prefix=/usr/xen42
&gt;&gt;&gt;&gt; gmake PYTHON=/usr/pkg/bin/python2.7 xen
&gt;&gt;&gt;&gt; gmake tools
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I took the commands from this wiki article: <a class="moz-txt-link-freetext" href="http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD">http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD</a>
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; The build error appears in the tools target in libxl.
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; This is the last mail from port-xen list related to this theme:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; On 30/11/12 21:16, Mike Bowie wrote:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; On 11/30/12 12:13 PM, Jeff Rizzo wrote:
&gt;&gt;&gt;&gt;&gt;&gt; Anyone up for creating a pkgsrc package for xen 4.2?  There's clearly a
&gt;&gt;&gt;&gt;&gt;&gt; lot to be done, and my pkgsrc-fu is not all that great.
&gt;&gt;&gt;&gt;&gt; I could be up for that... might not be until next week, but if the build
&gt;&gt;&gt;&gt;&gt; steps all work out, I should be able to cobble something together into
&gt;&gt;&gt;&gt;&gt; pkgsrc/wip. (Which would motivate me to get a box onto 4.2 also...
&gt;&gt;&gt;&gt;&gt; double win.)
&gt;&gt;&gt;&gt; I would definitely help; this will probably require some Makefile
&gt;&gt;&gt;&gt; changes, which I think should be submitted upstream.
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; Is the problem solvable without big changes to the build system, so that 4.2 runs on a NetBSD 6 box? Or is it impossible to compile the 4.2 toolstack on NetBSD without big changes?
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; -------- Original Message --------
&gt;&gt;&gt;&gt; Subject: 	Compilation of Xen 4.2 Utils breaks on NetBSD 6
&gt;&gt;&gt;&gt; Date: 	Mon, 3 Dec 2012 17:19:16 +0000
&gt;&gt;&gt;&gt; From: 	Miguel Clara<a class="moz-txt-link-rfc2396E" href="mailto:miguelmclara@gmail.com">&lt;miguelmclara@gmail.com&gt;</a>
&gt;&gt;&gt;&gt; To: 	<a class="moz-txt-link-abbreviated" href="mailto:port-xen@netbsd.org">port-xen@netbsd.org</a>, <a class="moz-txt-link-abbreviated" href="mailto:lukas@laukamp.me">lukas@laukamp.me</a>
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; Lukas Laukamp&lt;lukas&lt;at&gt;  laukamp.me<a class="moz-txt-link-rfc2396E" href="http://laukamp.me">&lt;http://laukamp.me&gt;</a>&gt;  writes:
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; Hey all,
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; I am trying to compile Xen 4.2 on NetBSD 6. The hypervisor itself compiled
&gt;&gt;&gt;&gt;&gt; fine but the compilation of the utils breaks with this error:
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; In file included from xl_cmdimpl.c:40:0:
&gt;&gt;&gt;&gt;&gt; libxl_json.h:18:27: fatal error: yajl/yajl_gen.h: No such file or
&gt;&gt;&gt;&gt; directory
&gt;&gt;&gt;&gt;&gt; compilation terminated.
&gt;&gt;&gt;&gt;&gt; gmake[3]: *** [xl_cmdimpl.o] Error 1
&gt;&gt;&gt;&gt;&gt; gmake[3]: Leaving directory `/root/xen-4.2.0/tools/libxl'
&gt;&gt;&gt;&gt;&gt; gmake[2]: *** [subdir-install-libxl] Error 2
&gt;&gt;&gt;&gt;&gt; gmake[2]: Leaving directory `/root/xen-4.2.0/tools'
&gt;&gt;&gt;&gt;&gt; gmake[1]: *** [subdirs-install] Error 2
&gt;&gt;&gt;&gt;&gt; gmake[1]: Leaving directory `/root/xen-4.2.0/tools'
&gt;&gt;&gt;&gt;&gt; gmake: *** [install-tools] Error 2
&gt;&gt;&gt;&gt;&gt; testdom0#
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; I passed the needed options to the configure script so that it searches
&gt;&gt;&gt;&gt;&gt; in /usr/pkg/include/ and /usr/pkg/lib and so on. The file which is
&gt;&gt;&gt;&gt;&gt; reported as missing actually exists in /usr/pkg/include/yajl/, so I
&gt;&gt;&gt;&gt;&gt; don't understand why it could not be found.
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; Hope that someone could help me.
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt; Best Regards
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I'm trying to build following the guide at:
&gt;&gt;&gt;&gt; <a class="moz-txt-link-freetext" href="http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD">http://wiki.xen.org/wiki/Compiling_Xen_From_Source_on_NetBSD</a>
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; All works fine until I try to build "tools"
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; gmake[3]: Entering directory `/home/xen/xen-4.2.0/tools/libxl'
&gt;&gt;&gt;&gt; rm -f _paths.h.tmp.tmp; echo "SBINDIR=\"/usr/pkg/sbin\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "BINDIR=\"/usr/pkg/bin\""&gt;&gt;_paths.h.tmp.tmp;
&gt;&gt;&gt;&gt; echo "LIBEXEC=\"/usr/pkg/libexec\""&gt;&gt;_paths.h.tmp.tmp; echo "LIBDIR=\"/usr/pkg/lib\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "SHAREDIR=\"/usr/pkg/share\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "PRIVATE_BINDIR=\"/usr/pkg/bin\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XENFIRMWAREDIR=\"/usr/pkg/lib/xen/boot\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XEN_CONFIG_DIR=\"/usr/pkg/etc/xen\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XEN_SCRIPT_DIR=\"/usr/pkg/etc/xen/scripts\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo "XEN_LOCK_DIR=\"/usr/pkg/var/lib\""
&gt;&gt;&gt;&gt;&gt;&gt; _paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XEN_RUN_DIR=\"/usr/pkg/var/run/xen\""&gt;&gt;_paths.h.tmp.tmp; echo
&gt;&gt;&gt;&gt; "XEN_PAGING_DIR=\"/usr/pkg/var/lib/xen/xenpaging\""&gt;&gt;_paths.h.tmp.tmp;
&gt;&gt;&gt;&gt; if ! cmp -s _paths.h.tmp.tmp _paths.h.tmp; then mv -f _paths.h.tmp.tmp _paths.h.tmp; else
&gt;&gt;&gt;&gt; rm -f _paths.h.tmp.tmp; fi
&gt;&gt;&gt;&gt; sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp&gt;_paths.h.2.tmp
&gt;&gt;&gt;&gt; rm -f _paths.h.tmp
&gt;&gt;&gt;&gt; if ! cmp -s _paths.h.2.tmp _paths.h; then mv -f _paths.h.2.tmp _paths.h;
&gt;&gt;&gt;&gt; else rm -f _paths.h.2.tmp; fi
&gt;&gt;&gt;&gt; gcc -pthread -o testidl testidl.o libxlutil.so
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so
&gt;&gt;&gt;&gt; -Wl,-rpath-link=/home/miguelc/xen-data/xen-4.2.0/tools/libxl/../../tools/libxc
&gt;&gt;&gt;&gt; -Wl,-rpath-link=/home/xen/xen-4.2.0/tools/libxl/../../tools/xenstore
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxc/libxenctrl.so -L/usr/pkg/lib
&gt;&gt;&gt;&gt; ld: warning: libyajl.so.2, needed by
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so, not
&gt;&gt;&gt;&gt; found (try using -rpath or -rpath-link)
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_parse'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_complete_parse'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_null'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_array_open'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_string'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_map_close'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_get_buf'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_free'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_alloc'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_array_close'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_map_open'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_get_error'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_free_error'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_integer'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_alloc'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_free'
&gt;&gt;&gt;&gt; /home/xen/xen-4.2.0/tools/libxl/../../tools/libxl/libxenlight.so:
&gt;&gt;&gt;&gt; undefined reference to `yajl_gen_bool'
&gt;&gt;&gt;&gt; gmake[3]: *** [testidl] Error 1
&gt;&gt;&gt;&gt; gmake[3]: Leaving directory `/home/xen/xen-4.2.0/tools/libxl'
&gt;&gt;&gt;&gt; gmake[2]: *** [subdir-install-libxl] Error 2
&gt;&gt;&gt;&gt; gmake[2]: Leaving directory `/home/xen/xen-4.2.0/tools'
&gt;&gt;&gt;&gt; gmake[1]: *** [subdirs-install] Error 2
&gt;&gt;&gt;&gt; gmake[1]: Leaving directory `/home/xen/xen-4.2.0/tools'
&gt;&gt;&gt;&gt; gmake: *** [install-tools] Error 2
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt;
&gt;&gt;&gt;&gt; I'm using yajl version 2....  could this be the problem? Is there any patch?
&gt;&gt;&gt; yajl 2 should be supported. Since I guess you installed yajl from
&gt;&gt;&gt; pkgsrc, could you try setting LD_LIBRARY_PATH=/usr/pkg/lib before compiling?
&gt;&gt;&gt;
&gt;&gt;&gt; See the following message from Riz:
&gt;&gt;&gt; <a class="moz-txt-link-freetext" href="http://mail-index.netbsd.org/port-xen/2012/11/30/msg007740.html">http://mail-index.netbsd.org/port-xen/2012/11/30/msg007740.html</a>
&gt;&gt;&gt;
&gt;&gt;&gt; Indeed this should be looked at and fixed.
&gt;&gt;&gt;
&gt;&gt;&gt;
&gt;&gt;&gt; _______________________________________________
&gt;&gt;&gt; Xen-users mailing list
&gt;&gt;&gt; <a class="moz-txt-link-abbreviated" href="mailto:Xen-users@lists.xen.org">Xen-users@lists.xen.org</a>
&gt;&gt;&gt; <a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-users">http://lists.xen.org/xen-users</a>
&gt;&gt; Hello,
&gt;&gt;
&gt;&gt; When I pass LD_LIBRARY_PATH=/usr/pkg/lib to gmake while compiling the
&gt;&gt; tools target, libxl gets compiled. But later the build breaks while
&gt;&gt; creating the filesystem structure for the tools-install target because
&gt;&gt; it can't find pygrub. The complete output of the build process is in
&gt;&gt; the attachment.
&gt; Could you remove the dist folder and try again? AFAIK it works for me
&gt; without problems.
&gt;

Hello,

I deleted the dist folder and now everything compiles fine. I have 
installed Xen in /usr/xen42/ and added this directory to the PATH 
variable. Now, for example, xl can't find libxlutil, although the 
library exists in /usr/xen42/lib. What is this problem related to? Was 
xl compiled incorrectly, or is something else wrong?

Best Regards
</pre>
      <br>
      <br>
    </div>
    <br>
  </body>
</html>

--------------070901050605000809060504--


--===============2920205431551750612==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2920205431551750612==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 20:05:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 20:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfyjN-0003Jq-Fi; Tue, 04 Dec 2012 20:04:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TfyjM-0003Jl-It
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 20:04:36 +0000
Received: from [85.158.143.35:35779] by server-2.bemta-4.messagelabs.com id
	B9/2D-28922-3575EB05; Tue, 04 Dec 2012 20:04:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1354651468!16099573!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2287 invoked from network); 4 Dec 2012 20:04:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 20:04:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,217,1355097600"; d="scan'208";a="46592169"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 20:04:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 4 Dec 2012 15:04:02 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tfyio-0004MJ-JR;
	Tue, 04 Dec 2012 20:04:02 +0000
Message-ID: <50BE5732.2050801@citrix.com>
Date: Tue, 4 Dec 2012 20:04:02 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.4.6
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] Audit of NMI and MCE paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have just started auditing the NMI path and found that the oprofile
code calls into a fair amount of common code.

So far, down the first leg of the call graph, I have found several
ASSERT()s, a BUG() and many {rd,wr}msr()s.  Given that these are common
code, and sensible in their places, removing them for the sake of being
on the NMI path seems silly.

As an alternative, I suggest that we make ASSERT()s, BUG()s and WARN()s
NMI/MCE safe, from a printk spinlock point of view.

Either we can modify the macros to do a console_force_unlock(), which is
fine for BUG() and ASSERT(), but problematic for WARN() (and deferring
the printing to a tasklet won't work if we want a stack trace).
Alternatively, we could change the console lock to be a recursive lock,
at which point it is safe from the deadlock point of view.  Are there
any performance concerns with changing to a recursive lock?
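[To make the recursive-lock idea concrete, here is a minimal single-CPU sketch; the names console_lock, console_lock_recursive() and the stand-in smp_processor_id() are illustrative assumptions, not Xen's actual internals, and real code would use atomic compare-and-swap rather than a plain spin:]

```c
#include <assert.h>

/* Hypothetical recursive console lock: an owner field plus a depth
 * counter lets the same CPU re-acquire the lock from NMI/MCE context
 * without deadlocking against its own earlier acquisition. */
typedef struct {
    volatile int owner;   /* CPU currently holding the lock, -1 if free */
    unsigned int depth;   /* recursion depth for the owning CPU */
} rec_console_lock_t;

static rec_console_lock_t console_lock = { .owner = -1, .depth = 0 };

/* Stand-in for Xen's smp_processor_id(); single-CPU for this sketch. */
static int smp_processor_id(void) { return 0; }

static void console_lock_recursive(void)
{
    int cpu = smp_processor_id();

    if (console_lock.owner == cpu) {
        console_lock.depth++;          /* re-entry from NMI/MCE: just nest */
        return;
    }
    /* Real code would spin with an atomic compare-and-swap here. */
    while (console_lock.owner != -1)
        ;
    console_lock.owner = cpu;
    console_lock.depth = 1;
}

static void console_unlock_recursive(void)
{
    if (--console_lock.depth == 0)
        console_lock.owner = -1;       /* fully released */
}
```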

As for spinlocks themselves, as far as I can reason, recursive locks are
safe to use, as are per-CPU spinlocks which are used exclusively in the
NMI handler or the MCE handler (but not both), given the proviso that we
have C-level re-entrancy protection for do_{nmi,mce}().
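[The C-level re-entrancy protection mentioned above could be sketched as a per-CPU guard flag; all names here are illustrative assumptions and the handler body is elided:]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-CPU re-entrancy guard for do_nmi()/do_mce(): if an
 * NMI arrives while a previous one is still being handled on this CPU,
 * the nested invocation is dropped instead of re-entering common code. */
#define NR_CPUS 4

static volatile bool in_nmi[NR_CPUS];
static unsigned int nmi_handled, nmi_dropped;

static void do_nmi(int cpu)
{
    if (in_nmi[cpu]) {
        nmi_dropped++;        /* nested NMI: refuse to re-enter */
        return;
    }
    in_nmi[cpu] = true;
    nmi_handled++;            /* ...real handler body would run here... */
    in_nmi[cpu] = false;
}
```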

For the {rd,wr}msr()s, we can assume that the Xen code is good and is
not going to fault on access to the MSR, but we certainly can't guarantee
this.


As a result, I do not think it is practical or indeed sensible to remove
all possibility of faults from the NMI path (and MCE to a lesser
extent).  Would it however be acceptable to change the console lock to a
recursive lock, and rely on the Linux-inspired extended solution which
will correctly deal with some nested cases, and panic verbosely in all
other cases?

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 20:08:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 20:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfymN-0003Qd-4I; Tue, 04 Dec 2012 20:07:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tls@panix.com>) id 1TfymL-0003QV-MT
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 20:07:41 +0000
Received: from [85.158.138.51:16803] by server-4.bemta-3.messagelabs.com id
	86/63-30023-C085EB05; Tue, 04 Dec 2012 20:07:40 +0000
X-Env-Sender: tls@panix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1354651660!27520891!1
X-Originating-IP: [166.84.1.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTY2Ljg0LjEuODkgPT4gMjg1NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26115 invoked from network); 4 Dec 2012 20:07:40 -0000
Received: from mailbackend.panix.com (HELO mailbackend.panix.com) (166.84.1.89)
	by server-9.tower-174.messagelabs.com with SMTP;
	4 Dec 2012 20:07:40 -0000
Received: from panix5.panix.com (panix5.panix.com [166.84.1.5])
	by mailbackend.panix.com (Postfix) with ESMTP id B97D52ECC3;
	Tue,  4 Dec 2012 15:07:39 -0500 (EST)
Received: by panix5.panix.com (Postfix, from userid 415)
	id A8E5424241; Tue,  4 Dec 2012 15:07:39 -0500 (EST)
Date: Tue, 4 Dec 2012 15:07:39 -0500
From: Thor Lancelot Simon <tls@panix.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20121204200739.GA23149@panix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BE161B.8000603@citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
> 
> Independently of what we end up doing as default for handling raw file
> disks, could someone review this code?
> 
> It's the first time I've done a device, so someone with more experience
> should review it.

I am not sure I entirely follow what this code's doing, but it seems to
me it may allow arbitrary physical pages to be exposed to userspace
processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.

Is that a correct understanding of one of its effects?  If so, there's
a problem, since not being able to do precisely that is one important
assumption of the 4.4BSD security model.

Thor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 20:36:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 20:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TfzDi-0004Gf-Ij; Tue, 04 Dec 2012 20:35:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1TfzDg-0004Ga-UU
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 20:35:57 +0000
Received: from [85.158.139.211:2929] by server-11.bemta-5.messagelabs.com id
	BA/33-03409-CAE5EB05; Tue, 04 Dec 2012 20:35:56 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1354653354!18634972!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17544 invoked from network); 4 Dec 2012 20:35:55 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 20:35:55 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so7800709iej.32
	for <xen-devel@lists.xen.org>; Tue, 04 Dec 2012 12:35:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=d63RzaQrmVZCanafuZEBTxA6SmLvDQqbsWKtKScqpkk=;
	b=EtDqhGwe++krySZ/RmXVXhRvzB29ZMAq6rzYKj5Vc82ZVcacvZ1GnU/orrwnSVmeNY
	UgWicXmiLgJFpF8n+Xo9WRLmmzSd88xjB57clDqmbcIxvMHBlvm1nEYMNVemsOlCfieB
	PTIFe1yE26CTl1dVhNDOATYUNYxtAsH9nJzY2FK9t16tajWkzk1blmNzXRDrfg0JM8hL
	wPwHHSxlucNvDn1Mexr08588o2Zbu6lDVrbMaz4M/U1NP5nLA329AdkD8QW5rrUyAWCu
	VYOc1CJbtA7CoIP4Q5ZIl0edJFK1ICOuwxLf6+Tni5WRnF325ifd6fRyQMD6NhIF7eHv
	/nJQ==
MIME-Version: 1.0
Received: by 10.50.183.167 with SMTP id en7mr3994445igc.49.1354653353948; Tue,
	04 Dec 2012 12:35:53 -0800 (PST)
Received: by 10.231.93.74 with HTTP; Tue, 4 Dec 2012 12:35:53 -0800 (PST)
Date: Tue, 4 Dec 2012 15:35:53 -0500
X-Google-Sender-Auth: VBtPf48bAkxmOXGukz6GLfpmO3g
Message-ID: <CAOvdn6VYCFcfXS67SutP4Mi131Jva1kj53cz+aLLCxzcXkEghA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] libxl: stable-4.2 git tree dirty after a build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1925245233724927249=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1925245233724927249==
Content-Type: multipart/alternative; boundary=14dae9340ca3f4b7b504d00ccf6d

--14dae9340ca3f4b7b504d00ccf6d
Content-Type: text/plain; charset=ISO-8859-1

It appears that something about the 4.2.1-rc1 tree is leaving around
generated files in a modified state after a build:

# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working
directory)
#
# modified:   tools/libxl/libxlu_cfg_y.c
# modified:   tools/libxl/libxlu_cfg_y.h

Unstable does not leave these files modified, as far as I can see, though
a simple diff of the two libxl directories does not make it immediately
obvious what might be modifying these files.


Does anyone have any pointers on a changeset to cherry-pick?

--14dae9340ca3f4b7b504d00ccf6d--


--===============1925245233724927249==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1925245233724927249==--


From xen-devel-bounces@lists.xen.org Tue Dec 04 23:06:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 23:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg1Yh-0001QD-3n; Tue, 04 Dec 2012 23:05:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tg1Yf-0001Q6-KS
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 23:05:45 +0000
Received: from [85.158.139.211:54279] by server-14.bemta-5.messagelabs.com id
	70/1F-21768-8C18EB05; Tue, 04 Dec 2012 23:05:44 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-206.messagelabs.com!1354662343!18979868!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4399 invoked from network); 4 Dec 2012 23:05:44 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-12.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	4 Dec 2012 23:05:44 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:61149 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tg1cH-0004ZW-HT; Wed, 05 Dec 2012 00:09:29 +0100
Date: Wed, 5 Dec 2012 00:05:39 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <28369388.20121205000539@eikelenboom.it>
To: =?utf-8?Q?Pasi_K=C3=A4rkk=C3=A4inen?= <pasik@iki.fi>
In-Reply-To: <20121204110105.GX8912@reaktio.net>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, December 4, 2012, 12:01:05 PM, you wrote:

> On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>> >>  I had a quick look, and it doesn't look that hard to backport that patch.
>> > 
>> > Thanks, Mat.
> > > I'm glad to report that the patch does fix my problem.
> > > 
> > > And yes, it is really easy to port, since the code did not change across the
> > > two releases.
> > > The only change would be line numbers (3841 vs 3803) and one extra comment
> > > before this line:
>> > 
>> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> > 
> > > I'm not sure if you are going to release another maintenance version that
> > > includes this patch,
> > > but I'll report this to the Debian maintainer since it's about to freeze for
> > > the v7.0 release and v4.2.0 will not make it.
>> 
>> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> out?
>> 

> It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
> so I'd say it should be a candidate for Xen 4.1.4.

Just tried switching the device model to "qemu-xen", seems this one isn't upstream either.
(XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3

Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
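[For reference, the "delivery mode" that vmsi.c is rejecting is a 3-bit field in the MSI data register, and value 3 is a reserved encoding; a small sketch of how it is extracted (macro and function names here are illustrative, not Xen's actual ones):]

```c
#include <assert.h>
#include <stdint.h>

/* The MSI delivery mode lives in bits 8-10 of the MSI data register:
 * 0 = fixed, 1 = lowest priority, 2 = SMI, 3 = reserved, 4 = NMI,
 * 5 = INIT, 6 = reserved, 7 = ExtINT. */
#define MSI_DATA_DELIVERY_SHIFT 8
#define MSI_DATA_DELIVERY_MASK  0x7u

static unsigned int msi_delivery_mode(uint32_t msi_data)
{
    return (msi_data >> MSI_DATA_DELIVERY_SHIFT) & MSI_DATA_DELIVERY_MASK;
}
```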

--
Sander

> -- Pasi

>> Jan
>> 
>> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
>> > <mats.petersson@citrix.com> wrote:
>> > 
>> >> On 03/12/12 13:19, Mats Petersson wrote:
>> >>
>> >>> On 03/12/12 13:14, G.R. wrote:
>> >>>
>> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>> >>>> <mats.petersson@citrix.com>>
>> >>>> wrote:
>> >>>>
>> >>>>      On 03/12/12 03:47, G.R. wrote:
>> >>>>
>> >>>>          Hi developers,
>> >>>>          I met some domU issues and the log suggests missing interrupt.
>> >>>>          Details from here:
>> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>> >>>>          In summary, this is the suspicious log:
>> >>>>
>> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>> >>>>
>> >>>>          I've checked the code in question and found that mode 3 is a
>> >>>>          'reserved_1' mode.
>> >>>>          I want to trace down the source of this mode setting to
>> >>>>          root-cause the issue.
>> >>>>          But I'm not a Xen developer, and even a newbie as a Xen user.
>> >>>>          Could anybody give me instructions about how to enable
>> >>>>          detailed debug log?
>> >>>>          It would be even better if I could get advice about experiments
>> >>>>          to perform / switches to try out, etc.
>> >>>>
>> >>>>          My SW config:
>> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>> >>>>          domU: Debian wheezy 3.2.x stock kernel.
>> >>>>
>> >>>>          Thanks,
>> >>>>          Timothy
>> >>>>
>> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>> >>>>      What are the exact messages in the DomU?
>> >>>>
>> >>>>
>> >>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>> >>>> But this is actually a PVHVM guest since the Debian stock kernel has PVOP
>> >>>> enabled.
>> >>>> And when I tried another PVOP-disabled Linux distro (openelec v2.0), I
>> >>>> did not see such MSI-related error messages.
>> >>>> Actually, with that domU I do not see anything obviously wrong in the
>> >>>> log, but I also see nothing on the panel (the panel receives no signal
>> >>>> and goes power-saving) :-(
>> >>>>
>> >>>>
>> >>>> Back to the issue I was reporting, the domU log looks like this:
>> >>>>
>> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>> >>>> [drm:i915_hangcheck_ring_idle]
>> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>> >>>> 3354], missed IRQ?
>> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>> >>>> [drm:i915_hangcheck_ring_idle]
>> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>> >>>> 11297], missed IRQ?
>> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>> >>>> timeout, switching to polling mode: last cmd=0x000f0000
>> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>> >>>> codec, disabling MSI: last cmd=0x002f0600
>> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>> >>>>
>> >>>>
>> >>>> Thanks,
>> >>>> Timothy
>> >>>>
>> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>> >>> addresses this. I'm not fully clued up on what the policy for backporting
>> >>> fixes is, and I haven't looked at the complexity of the fix itself, but
>> >>> either updating to 4.2.0 or a (personal) backport sounds like the
>> >>> right solution here.
>> >>>
>> >> I had a quick look, and it doesn't look that hard to backport that patch.
>> >>
>> >> --
>> >> Mats
>> >>
>> >>
>> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>> >>> to your original email.
>> >>>
>> >>> --
>> >>> Mats
>> >>>
>> >>> _______________________________________________
>> >>> Xen-devel mailing list
>> >>> Xen-devel@lists.xen.org 
>> >>> http://lists.xen.org/xen-devel 
>> >>>
>> >>>
>> >>>
>> >>
>> >>
>> 
>> 
>> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 23:23:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 23:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg1pH-0001uk-Nk; Tue, 04 Dec 2012 23:22:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tg1pF-0001tL-SX
	for xen-devel@lists.xensource.com; Tue, 04 Dec 2012 23:22:54 +0000
Received: from [85.158.137.99:38630] by server-16.bemta-3.messagelabs.com id
	0A/FF-07461-DC58EB05; Tue, 04 Dec 2012 23:22:53 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354663372!17278141!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDY5MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11570 invoked from network); 4 Dec 2012 23:22:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 23:22:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,217,1355097600"; d="scan'208";a="16159200"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Dec 2012 23:22:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Tue, 4 Dec 2012 23:22:51 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tg1pD-00079G-Ae;
	Tue, 04 Dec 2012 23:22:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tg1pD-0005UK-1Z;
	Tue, 04 Dec 2012 23:22:51 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14561-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Dec 2012 23:22:51 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14561: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14561 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14561/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  8 guest-saverestore   fail REGR. vs. 14559

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14559
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14559

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  eb2394c97d57
baseline version:
 xen                  29247e44df47

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26228:eb2394c97d57
tag:         tip
user:        George Dunlap <george.dunlap@eu.citrix.com>
date:        Tue Dec 04 15:50:20 2012 +0000
    
    xl: Check for duplicate vncdisplay options, and return an error
    
    If the user has set a vnc display number both in vnclisten (with
    "xxxx:yy"), and with vncdisplay, throw an error.
    
    Update man pages to match.
    
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26227:36d5d8ee5643
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Tue Dec 04 15:50:19 2012 +0000
    
    xen: arm: Use $(OBJCOPY) not bare objcopy
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Reported-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26226:2ddd5138eca7
user:        Roger Pau Monne <roger.pau@citrix.com>
date:        Tue Dec 04 15:50:18 2012 +0000
    
    libxl: fix wrong comment
    
    The comment in function libxl__try_phy_backend is wrong, 1 is returned
    if the backend should be handled as "phy", while 0 is returned if not.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26225:49692c28f6d9
user:        Roger Pau Monne <roger.pau@citrix.com>
date:        Tue Dec 04 15:50:18 2012 +0000
    
    docs: expand persistent grants protocol
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26224:29247e44df47
user:        Daniel De Graaf <dgdegra@tycho.nsa.gov>
date:        Fri Nov 30 21:51:17 2012 +0000
    
    mini-os: shutdown_thread depends on xenbus
    
    This fixes the build of the xenstore stub domain, which should never
    be shut down and so does not need this feature.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26228:eb2394c97d57
tag:         tip
user:        George Dunlap <george.dunlap@eu.citrix.com>
date:        Tue Dec 04 15:50:20 2012 +0000
    
    xl: Check for duplicate vncdisplay options, and return an error
    
    If the user has set a vnc display number both in vnclisten (with
    "xxxx:yy"), and with vncdisplay, throw an error.
    
    Update man pages to match.
    
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26227:36d5d8ee5643
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Tue Dec 04 15:50:19 2012 +0000
    
    xen: arm: Use $(OBJCOPY) not bare objcopy
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Reported-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26226:2ddd5138eca7
user:        Roger Pau Monne <roger.pau@citrix.com>
date:        Tue Dec 04 15:50:18 2012 +0000
    
    libxl: fix wrong comment
    
    The comment in function libxl__try_phy_backend is wrong, 1 is returned
    if the backend should be handled as "phy", while 0 is returned if not.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26225:49692c28f6d9
user:        Roger Pau Monne <roger.pau@citrix.com>
date:        Tue Dec 04 15:50:18 2012 +0000
    
    docs: expand persistent grants protocol
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26224:29247e44df47
user:        Daniel De Graaf <dgdegra@tycho.nsa.gov>
date:        Fri Nov 30 21:51:17 2012 +0000
    
    mini-os: shutdown_thread depends on xenbus
    
    This fixes the build of the xenstore stub domain, which should never
    be shut down and so does not need this feature.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 04 23:26:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Dec 2012 23:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg1sp-0002AB-II; Tue, 04 Dec 2012 23:26:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tg1so-0002A4-Fw
	for xen-devel@lists.xen.org; Tue, 04 Dec 2012 23:26:34 +0000
Received: from [85.158.143.99:5313] by server-2.bemta-4.messagelabs.com id
	3A/50-28922-9A68EB05; Tue, 04 Dec 2012 23:26:33 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354663591!27093220!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzM5OTk1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4816 invoked from network); 4 Dec 2012 23:26:32 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Dec 2012 23:26:32 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354663592; x=1386199592;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=y5qqaru9SBhyXq3s8PfVf9b/+DHkNEqHLSmLZzYMLd4=;
	b=ou/Kd7bOYttoGVyVcBxbaD1OX997frdL+iPB8ha2p4/HM/oV/pZ6nJUt
	JfDag+NzNCjfERHoP8QeHrOR1Wd/RCcWQhQM81SHv8itK/Xwmzw0DCEXv
	uB+38D4WCzGyQTp1R9swqjEXfECHZnFrkY7TZqDPJQjKozPJjL3V3woww w=;
X-IronPort-AV: E=McAfee;i="5400,1158,6916"; a="1072165566"
Received: from smtp-in-1105.vdc.amazon.com ([10.140.9.24])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 04 Dec 2012 23:25:26 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-1105.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB4NPOLo022760
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 4 Dec 2012 23:25:25 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9003.ant.amazon.com (10.185.137.132) with Microsoft SMTP
	Server id 14.2.247.3; Tue, 4 Dec 2012 15:23:08 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Tue, 04 Dec 2012 15:23:08 -0800
Date: Tue, 4 Dec 2012 15:23:08 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, "Palagummi, Siva"
	<Siva.Palagummi@ca.com>
Message-ID: <20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1346314031.27277.20.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 30, 2012 at 09:07:11AM +0100, Ian Campbell wrote:
> On Wed, 2012-08-29 at 13:21 +0100, Palagummi, Siva wrote:
> > This patch contains the modifications that are discussed in thread
> > http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html

[...]

> > Instead of using max_required_rx_slots, I used the count that we
> > already have in hand to verify if we have enough room in the batch
> > queue for next skb. Please let me know if that is not appropriate.
> > Things worked fine in my environment. Under heavy load now we seems to
> > be consuming most of the slots in the queue and no BUG_ON :-)
> 
> > From: Siva Palagummi <Siva.Palagummi@ca.com>
> > 
> > count variable in xen_netbk_rx_action need to be incremented
> > correctly to take into account of extra slots required when skb_headlen is 
> > greater than PAGE_SIZE when larger MTU values are used. Without this change
> > BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)) is causing netback thread 
> > to exit.
> > 
> > The fix is to stash the counting already done in xen_netbk_count_skb_slots
> > in skb_cb_overlay and use it directly in xen_netbk_rx_action.

You don't need to stash the estimated value to use it in
xen_netbk_rx_action() - you have the actual number of slots consumed
in hand from the return value of netbk_gop_skb(). See below.

> > Also improved the checking for filling the batch queue. 
> > 
> > Also merged a change from a patch created for xen_netbk_count_skb_slots 
> > function as per thread 
> > http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html
> > 
> > The problem is seen with linux 3.2.2 kernel on Intel 10Gbps network
> > 
> > 
> > Signed-off-by: Siva Palagummi <Siva.Palagummi@ca.com>
> > ---
> > diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> > +++ b/drivers/net/xen-netback/netback.c	2012-08-28 17:31:22.000000000 -0400
> > @@ -80,6 +80,11 @@ union page_ext {
> >  	void *mapping;
> >  };
> >  
> > +struct skb_cb_overlay {
> > +	int meta_slots_used;
> > +	int count;
> > +};

We don't actually need a separate variable for the estimate. We could
rename meta_slots_used to meta_slots. It could hold the estimate
before netbk_gop_skb() is called and the actual number of slots
consumed after. That might be confusing, though, so maybe it's better
off left as two variables.

> >  struct xen_netbk {
> >  	wait_queue_head_t wq;
> >  	struct task_struct *task;
> > @@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
> >  {
> >  	unsigned int count;
> >  	int i, copy_off;
> > +	struct skb_cb_overlay *sco;
> >  
> > -	count = DIV_ROUND_UP(
> > -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> > +	count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
> This hunk appears to be upstream already (see
> e26b203ede31fffd52571a5ba607a26c79dc5c0d). Which tree are you working
> against? You should either base patches on Linus' branch or on the
> net-next branch.
> 
> Other than this the patch looks good, thanks.

I don't think that this patch completely addresses the problems with
calculating the number of slots required when large MTUs are used.

For example: suppose netback is handling an skb with a large linear
data area, say 8157 bytes, that begins 64 bytes from the start of a
page. Assume that GSO is disabled and there are no frags.
xen_netbk_count_skb_slots() will calculate that two slots are needed:

    count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

but netbk_gop_frag_copy() will actually use three. Let's walk through
the loop:

        data = skb->data;
        while (data < skb_tail_pointer(skb)) {
                unsigned int offset = offset_in_page(data);
                unsigned int len = PAGE_SIZE - offset;

                if (data + len > skb_tail_pointer(skb))
                        len = skb_tail_pointer(skb) - data;

                netbk_gop_frag_copy(vif, skb, npo,
                                    virt_to_page(data), len, offset, &head);
                data += len;
        }

The first pass will call netbk_gop_frag_copy() with len=4032,
offset=64, and head=1. After the call, head will be set to 0. Inside
netbk_gop_frag_copy(), start_new_rx_buffer() will be called with
offset=0, size=4032, head=1. We'll return false due to the checks for
"offset" and "!head".

The second pass will call netbk_gop_frag_copy() with len=4096,
offset=0, head=0. start_new_rx_buffer() will get called with
offset=4032, bytes=4096, head=0. We'll return true here, which we
shouldn't, since it just leads to wasted buffer space for the last
bit.

The third pass will call netbk_gop_frag_copy() with len=29 and
offset=0. start_new_rx_buffer() will be called with offset=4096,
bytes=29, head=0. We'll start a new buffer for the last bit.

So you can see that we underestimate the number of buffers / meta
slots required to handle an skb with a large linear area, which is
common with large MTU sizes. This can lead to problems later on.

I'm not as familiar with the new compound page frag handling code, but
I can imagine that the same problem could exist there. But since the
calculation logic in xen_netbk_count_skb_slots() directly models the
code setting up the copy operation, at least it will be estimated
properly.

> >  
> >  	copy_off = skb_headlen(skb) % PAGE_SIZE;
> >  
> > @@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
> >  			size -= bytes;
> >  		}
> >  	}
> > +	sco = (struct skb_cb_overlay *)skb->cb;
> > +	sco->count = count;
> >  	return count;
> >  }
> >  
> > @@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
> >  	}
> >  }
> >  
> > -struct skb_cb_overlay {
> > -	int meta_slots_used;
> > -};
> >  
> >  static void xen_netbk_rx_action(struct xen_netbk *netbk)
> >  {
> > @@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
> >  		sco = (struct skb_cb_overlay *)skb->cb;
> >  		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
> >  
> > -		count += nr_frags + 1;
> > +		count += sco->count;

Why increment count by the /estimated/ count instead of the actual
number of slots used? We have the number of slots in the line just
above, in sco->meta_slots_used.

> >  		__skb_queue_tail(&rxq, skb);
> >  
> > +		skb = skb_peek(&netbk->rx_queue);
> > +		if (skb == NULL)
> > +			break;
> > +		sco = (struct skb_cb_overlay *)skb->cb;
> >  		/* Filled the batch queue? */
> > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> >  			break;
> >  	}
> >  

This change I like.

We're working on a patch to improve the buffer efficiency and the
miscalculation problem. Siva, I'd be happy to re-base and re-submit
this patch (with minor adjustments) as part of that work, unless you
want to handle that.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 01:27:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 01:27:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg3lr-00017m-GO; Wed, 05 Dec 2012 01:27:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tg3lq-00017P-8r
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 01:27:30 +0000
Received: from [85.158.143.99:24156] by server-2.bemta-4.messagelabs.com id
	C3/49-28922-103AEB05; Wed, 05 Dec 2012 01:27:29 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354670843!21042374!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTY2OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18939 invoked from network); 5 Dec 2012 01:27:23 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 01:27:23 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 04 Dec 2012 17:27:22 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,218,1355126400"; d="scan'208";a="257381292"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga001.fm.intel.com with ESMTP; 04 Dec 2012 17:27:16 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:27:15 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:27:15 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Wed, 5 Dec 2012 09:26:49 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 06/10] nested vmx: enable IA32E mode while
	do VM entry
Thread-Index: AQHN0gacwLzkZlrSrEGkhVRiJkUMHJgJagsw
Date: Wed, 5 Dec 2012 01:26:49 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB4A23@SHSMSX102.ccr.corp.intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-7-git-send-email-dongxiao.xu@intel.com>
	<50BDD87402000078000ADA38@nat28.tlf.novell.com>
In-Reply-To: <50BDD87402000078000ADA38@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 06/10] nested vmx: enable IA32E mode while
 do VM entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 04, 2012 6:03 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 06/10] nested vmx: enable IA32E mode while
> do VM entry
> 
> >>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> 
> How did things work without this, or if it worked, what does this fix?
> 
> Jan

Current Xen doesn't check the VM_ENTRY_IA32E_MODE bit in the related
capability MSR, but directly enables this bit in the VMCS if the guest
supports long mode. Therefore Xen-on-Xen doesn't have this problem.

However, other VMMs may query this bit in the MSR and then set the
related VMCS fields accordingly.

Thanks,
Dongxiao

> 
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > ---
> >  xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index 0ac78af..1304636 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1388,7 +1388,8 @@ int nvmx_msr_read_intercept(unsigned int msr,
> > u64
> > *msr_content)
> >              tmp = 0x11fb;
> >          data = VM_ENTRY_LOAD_GUEST_PAT |
> >                 VM_ENTRY_LOAD_GUEST_EFER |
> > -               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
> > +               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
> > +               VM_ENTRY_IA32E_MODE;
> >          data = ((data | tmp) << 32) | tmp;
> >          break;
> >
> > --
> > 1.7.1
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg3lr-00017t-Rd; Wed, 05 Dec 2012 01:27:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tg3lq-00017R-EU
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 01:27:30 +0000
Received: from [85.158.143.99:24157] by server-1.bemta-4.messagelabs.com id
	7D/21-27934-103AEB05; Wed, 05 Dec 2012 01:27:29 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354670843!21042374!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTY2OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18942 invoked from network); 5 Dec 2012 01:27:24 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 01:27:24 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 04 Dec 2012 17:27:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,218,1355126400"; d="scan'208";a="257381322"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga001.fm.intel.com with ESMTP; 04 Dec 2012 17:27:22 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:27:22 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Wed, 5 Dec 2012 09:27:20 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 05/10] nested vmx: fix DR access VM exit
Thread-Index: AQHN0gaCbm8V1zNvbEy6Yby5h4nAEZgJay+w
Date: Wed, 5 Dec 2012 01:27:19 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB4A29@SHSMSX102.ccr.corp.intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-6-git-send-email-dongxiao.xu@intel.com>
	<50BDD82B02000078000ADA35@nat28.tlf.novell.com>
In-Reply-To: <50BDD82B02000078000ADA35@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 05/10] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 04, 2012 6:02 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 05/10] nested vmx: fix DR access VM exit
> 
> >>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > For DR register, we use lazy restore mechanism when access it.
> > Therefore when receiving such VM exit, L0 should be responsible to
> > switch to the right DR values, then inject to L1 hypervisor.
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > ---
> >  xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index cf3797c..0ac78af 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1654,7 +1654,8 @@ int nvmx_n2_vmexit_handler(struct
> cpu_user_regs *regs,
> >      case EXIT_REASON_DR_ACCESS:
> >          ctrl = __n2_exec_control(v);
> >          if ( ctrl & CPU_BASED_MOV_DR_EXITING )
> > -            nvcpu->nv_vmexit_pending = 1;
> > +            if ( v->arch.hvm_vcpu.flag_dr_dirty )
> > +                nvcpu->nv_vmexit_pending = 1;
> 
> Personally I'd prefer if you combined the two if-s.

Will merge them in the updated pull request.
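Jan's suggestion amounts to folding the nested condition into one test; a self-contained sketch of the combined form (the helper name is made up, the macro value follows the SDM bit position):

```c
#include <assert.h>

/* Bit 23 of the processor-based VM-execution controls:
 * "MOV-DR exiting" (per the SDM). */
#define CPU_BASED_MOV_DR_EXITING (1u << 23)

/* Hypothetical helper: the two if-s merged into one -- reflect a
 * MOV-DR exit to L1 only when L1 asked for DR exiting AND this
 * vCPU's debug registers are actually dirty (lazy-restore case). */
static int dr_vmexit_pending(unsigned int ctrl, int flag_dr_dirty)
{
    return (ctrl & CPU_BASED_MOV_DR_EXITING) && flag_dr_dirty;
}
```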

Thanks,
Dongxiao

> 
> Jan
> 
> >          break;
> >      case EXIT_REASON_INVLPG:
> >          ctrl = __n2_exec_control(v);
> > --
> > 1.7.1
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 01:35:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 01:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg3ta-0001Z1-Qr; Wed, 05 Dec 2012 01:35:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tg3tY-0001Yo-TG
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 01:35:29 +0000
Received: from [193.109.254.147:14203] by server-6.bemta-14.messagelabs.com id
	21/B1-02788-DD4AEB05; Wed, 05 Dec 2012 01:35:25 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354671323!3743875!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNDM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9856 invoked from network); 5 Dec 2012 01:35:24 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 01:35:24 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 04 Dec 2012 17:34:36 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,218,1355126400"; d="scan'208";a="229051744"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 04 Dec 2012 17:35:21 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:35:20 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:35:20 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Wed, 5 Dec 2012 09:35:17 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
Thread-Index: AQHN0gYq/qY+wdhJk0iim2W1LFp3I5gJbVLA
Date: Wed, 5 Dec 2012 01:35:17 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB4A7E@SHSMSX102.ccr.corp.intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
	<50BDD79802000078000ADA32@nat28.tlf.novell.com>
In-Reply-To: <50BDD79802000078000ADA32@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 04, 2012 6:00 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
> IA32_VMX_BASIC_MSR to guest VMM
> 
> >>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> 
> I'm sorry, but exposing something that doesn't even have a name sound very
> awkward to me. Please adjust existing code using the literal number in a
> prerequisite patch, and then use the added constant here too.
> 

Thanks for the comments.
I will refine this patch in the next pull request.
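The prerequisite patch Jan asks for could introduce a named constant for the bare `(1ULL << 55)` literal; a sketch (the constant's name here is hypothetical and may differ in the final patch, but bit 55 of IA32_VMX_BASIC advertising the IA32_VMX_TRUE_*_CTLS MSRs is per the SDM):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical named constant for bit 55 of IA32_VMX_BASIC, which
 * advertises support for the IA32_VMX_TRUE_{PINBASED,PROCBASED,
 * EXIT,ENTRY}_CTLS MSRs. */
#define VMX_BASIC_TRUE_CTLS (1ULL << 55)

/* A guest VMM would probe for the TRUE_*_CTLS MSRs like this
 * before reading them. */
static int has_true_ctls_msrs(uint64_t vmx_basic)
{
    return (vmx_basic & VMX_BASIC_TRUE_CTLS) != 0;
}
```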

Thanks,
Dongxiao

> Jan
> 
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > ---
> >  xen/arch/x86/hvm/vmx/vvmx.c |   37
> +++++++++++++++++++++++++------------
> >  1 files changed, 25 insertions(+), 12 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index 719bfce..cf91c7c 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs
> *regs)
> >   */
> >  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)  {
> > -    u64 data = 0, tmp;
> > +    u64 data = 0, tmp = 0;
> >      int r = 1;
> >
> >      if ( !nestedhvm_enabled(current->domain) ) @@ -1311,18 +1311,20
> > @@ int nvmx_msr_read_intercept(unsigned int msr, u64
> > *msr_content)
> >      switch (msr) {
> >      case MSR_IA32_VMX_BASIC:
> >          data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 |
> > -               ((u64)MTRR_TYPE_WRBACK) << 50;
> > +               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
> >          break;
> >      case MSR_IA32_VMX_PINBASED_CTLS:
> > +    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> >          /* 1-seetings */
> >          data = PIN_BASED_EXT_INTR_MASK |
> >                 PIN_BASED_NMI_EXITING |
> >                 PIN_BASED_PREEMPT_TIMER;
> > -        data <<= 32;
> > -	/* 0-settings */
> > -        data |= 0;
> > +        /* Consult SDM for default1 setting */
> > +        tmp = ( (1<<1) | (1<<2) | (1<<4) );
> > +        data = ((data | tmp) << 32) | (tmp);
> >          break;
> >      case MSR_IA32_VMX_PROCBASED_CTLS:
> > +    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> >          /* 1-seetings */
> >          data = CPU_BASED_HLT_EXITING |
> >                 CPU_BASED_VIRTUAL_INTR_PENDING | @@ -1342,10
> +1344,14
> > @@ int nvmx_msr_read_intercept(unsigned int msr, u64
> > *msr_content)
> >                 CPU_BASED_VIRTUAL_NMI_PENDING |
> >                 CPU_BASED_ACTIVATE_MSR_BITMAP |
> >                 CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
> > -        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
> > -        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
> > +        /* Consult SDM for default1 setting */
> > +        if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
> > +            tmp = 0x401e172;
> > +        else if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
> > +            tmp = 0x4006172;
> >          /* 0-settings */
> >          data = ((data | tmp) << 32) | (tmp);
> > +
> >          break;
> >      case MSR_IA32_VMX_PROCBASED_CTLS2:
> >          /* 1-seetings */
> > @@ -1355,9 +1361,12 @@ int nvmx_msr_read_intercept(unsigned int msr,
> > u64
> > *msr_content)
> >          data = (data << 32) | tmp;
> >          break;
> >      case MSR_IA32_VMX_EXIT_CTLS:
> > -        /* 1-seetings */
> > -        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
> > -        tmp = 0x36dff;
> > +    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> > +        /* Consult SDM for default1 setting */
> > +        if ( msr == MSR_IA32_VMX_EXIT_CTLS )
> > +            tmp = 0x36dff;
> > +        else if ( msr == MSR_IA32_VMX_TRUE_EXIT_CTLS )
> > +            tmp = 0x36dfb;
> >          data = VM_EXIT_ACK_INTR_ON_EXIT |
> >                 VM_EXIT_IA32E_MODE |
> >                 VM_EXIT_SAVE_PREEMPT_TIMER | @@ -1370,8
> +1379,12 @@
> > int nvmx_msr_read_intercept(unsigned int msr, u64
> > *msr_content)
> >          data = ((data | tmp) << 32) | tmp;
> >          break;
> >      case MSR_IA32_VMX_ENTRY_CTLS:
> > -        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
> > -        tmp = 0x11ff;
> > +    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> > +        /* Consult SDM for default1 setting */
> > +        if ( msr == MSR_IA32_VMX_ENTRY_CTLS )
> > +            tmp = 0x11ff;
> > +        else if ( msr == MSR_IA32_VMX_TRUE_ENTRY_CTLS )
> > +            tmp = 0x11fb;
> >          data = VM_ENTRY_LOAD_GUEST_PAT |
> >                 VM_ENTRY_LOAD_GUEST_EFER |
> >                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
> > --
> > 1.7.1
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 01:35:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 01:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg3ta-0001Z1-Qr; Wed, 05 Dec 2012 01:35:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tg3tY-0001Yo-TG
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 01:35:29 +0000
Received: from [193.109.254.147:14203] by server-6.bemta-14.messagelabs.com id
	21/B1-02788-DD4AEB05; Wed, 05 Dec 2012 01:35:25 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354671323!3743875!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNDM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9856 invoked from network); 5 Dec 2012 01:35:24 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 01:35:24 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 04 Dec 2012 17:34:36 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,218,1355126400"; d="scan'208";a="229051744"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 04 Dec 2012 17:35:21 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:35:20 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:35:20 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Wed, 5 Dec 2012 09:35:17 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
Thread-Index: AQHN0gYq/qY+wdhJk0iim2W1LFp3I5gJbVLA
Date: Wed, 5 Dec 2012 01:35:17 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB4A7E@SHSMSX102.ccr.corp.intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<1354600410-3390-3-git-send-email-dongxiao.xu@intel.com>
	<50BDD79802000078000ADA32@nat28.tlf.novell.com>
In-Reply-To: <50BDD79802000078000ADA32@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 04, 2012 6:00 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 02/10] nested vmx: expose bit 55 of
> IA32_VMX_BASIC_MSR to guest VMM
> 
> >>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> 
> I'm sorry, but exposing something that doesn't even have a name sound very
> awkward to me. Please adjust existing code using the literal number in a
> prerequisite patch, and then use the added constant here too.
> 

Thanks for the comments.
I will refine this patch in next pull request.

Thanks,
Dongxiao

> Jan
> 
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > ---
> >  xen/arch/x86/hvm/vmx/vvmx.c |   37
> +++++++++++++++++++++++++------------
> >  1 files changed, 25 insertions(+), 12 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index 719bfce..cf91c7c 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs
> *regs)
> >   */
> >  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)  {
> > -    u64 data = 0, tmp;
> > +    u64 data = 0, tmp = 0;
> >      int r = 1;
> >
> >      if ( !nestedhvm_enabled(current->domain) ) @@ -1311,18 +1311,20
> > @@ int nvmx_msr_read_intercept(unsigned int msr, u64
> > *msr_content)
> >      switch (msr) {
> >      case MSR_IA32_VMX_BASIC:
> >          data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 |
> > -               ((u64)MTRR_TYPE_WRBACK) << 50;
> > +               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
> >          break;
> >      case MSR_IA32_VMX_PINBASED_CTLS:
> > +    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> >          /* 1-seetings */
> >          data = PIN_BASED_EXT_INTR_MASK |
> >                 PIN_BASED_NMI_EXITING |
> >                 PIN_BASED_PREEMPT_TIMER;
> > -        data <<= 32;
> > -	/* 0-settings */
> > -        data |= 0;
> > +        /* Consult SDM for default1 setting */
> > +        tmp = ( (1<<1) | (1<<2) | (1<<4) );
> > +        data = ((data | tmp) << 32) | (tmp);
> >          break;
> >      case MSR_IA32_VMX_PROCBASED_CTLS:
> > +    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> >          /* 1-seetings */
> >          data = CPU_BASED_HLT_EXITING |
> >                 CPU_BASED_VIRTUAL_INTR_PENDING | @@ -1342,10
> +1344,14
> > @@ int nvmx_msr_read_intercept(unsigned int msr, u64
> > *msr_content)
> >                 CPU_BASED_VIRTUAL_NMI_PENDING |
> >                 CPU_BASED_ACTIVATE_MSR_BITMAP |
> >                 CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
> > -        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
> > -        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
> > +        /* Consult SDM for default1 setting */
> > +        if ( msr == MSR_IA32_VMX_PROCBASED_CTLS )
> > +            tmp = 0x401e172;
> > +        else if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
> > +            tmp = 0x4006172;
> >          /* 0-settings */
> >          data = ((data | tmp) << 32) | (tmp);
> > +
> >          break;
> >      case MSR_IA32_VMX_PROCBASED_CTLS2:
> >          /* 1-settings */
> > @@ -1355,9 +1361,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
> >          data = (data << 32) | tmp;
> >          break;
> >      case MSR_IA32_VMX_EXIT_CTLS:
> > -        /* 1-seetings */
> > -        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
> > -        tmp = 0x36dff;
> > +    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> > +        /* Consult SDM for default1 setting */
> > +        if ( msr == MSR_IA32_VMX_EXIT_CTLS )
> > +            tmp = 0x36dff;
> > +        else if ( msr == MSR_IA32_VMX_TRUE_EXIT_CTLS )
> > +            tmp = 0x36dfb;
> >          data = VM_EXIT_ACK_INTR_ON_EXIT |
> >                 VM_EXIT_IA32E_MODE |
> >                 VM_EXIT_SAVE_PREEMPT_TIMER |
> > @@ -1370,8 +1379,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
> >          data = ((data | tmp) << 32) | tmp;
> >          break;
> >      case MSR_IA32_VMX_ENTRY_CTLS:
> > -        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
> > -        tmp = 0x11ff;
> > +    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> > +        /* Consult SDM for default1 setting */
> > +        if ( msr == MSR_IA32_VMX_ENTRY_CTLS )
> > +            tmp = 0x11ff;
> > +        else if ( msr == MSR_IA32_VMX_TRUE_ENTRY_CTLS )
> > +            tmp = 0x11fb;
> >          data = VM_ENTRY_LOAD_GUEST_PAT |
> >                 VM_ENTRY_LOAD_GUEST_EFER |
> >                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
> > --
> > 1.7.1
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 01:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 01:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg3vw-0001hJ-Gj; Wed, 05 Dec 2012 01:37:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tg3vv-0001hB-Og
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 01:37:55 +0000
Received: from [85.158.138.51:25922] by server-15.bemta-3.messagelabs.com id
	86/10-23779-E65AEB05; Wed, 05 Dec 2012 01:37:50 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1354671469!18584873!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM1OTQ3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28005 invoked from network); 5 Dec 2012 01:37:49 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-7.tower-174.messagelabs.com with SMTP;
	5 Dec 2012 01:37:49 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 04 Dec 2012 17:37:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,218,1355126400"; d="scan'208";a="226676423"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by azsmga001.ch.intel.com with ESMTP; 04 Dec 2012 17:37:47 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:37:47 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 4 Dec 2012 17:37:47 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Wed, 5 Dec 2012 09:37:44 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 00/10] nested vmx: bug fixes and feature
	enabling
Thread-Index: AQHN0gby4mBwzCsQUkSB+xv6tY/+iJgJbeeg
Date: Wed, 5 Dec 2012 01:37:44 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB4A96@SHSMSX102.ccr.corp.intel.com>
References: <1354600410-3390-1-git-send-email-dongxiao.xu@intel.com>
	<50BDD90102000078000ADA54@nat28.tlf.novell.com>
In-Reply-To: <50BDD90102000078000ADA54@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/10] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, December 04, 2012 6:06 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 00/10] nested vmx: bug fixes and feature
> enabling
> 
> >>> On 04.12.12 at 06:53, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > This series of patches contain some bug fixes and feature enabling for
> > nested vmx, please help to review and pull.
> 
> As with the previous series, we would want this series to be ack-ed by one of
> the formally listed maintainers.
> 
> Beyond that I'd appreciate if you could indicate which of the bug fixes you sent
> would be candidates for 4.2.x.

Sure. I will describe the bugs/issues in the commit messages in the next pull request.

Thanks,
Dongxiao

> 
> Jan
> 
> > Dongxiao Xu (10):
> >   nested vmx: emulate MSR bitmaps
> >   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
> >   nested vmx: fix rflags status in virtual vmexit
> >   nested vmx: fix handling of RDTSC
> >   nested vmx: fix DR access VM exit
> >   nested vmx: enable IA32E mode while do VM entry
> >   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
> >   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
> >   nested vmx: fix interrupt delivery to L2 guest
> >   nested vmx: check host ability when intercept MSR read
> >
> >  xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
> >  xen/arch/x86/hvm/vmx/vmcs.c        |   28 ++++++++
> >  xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
> >  xen/arch/x86/hvm/vmx/vvmx.c        |  128 ++++++++++++++++++++++++++++++------
> >  xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
> >  xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
> >  xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
> >  7 files changed, 148 insertions(+), 25 deletions(-)
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 03:56:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 03:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg65k-0004S4-3M; Wed, 05 Dec 2012 03:56:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tg65j-0004Rw-4r
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 03:56:11 +0000
Received: from [85.158.143.99:12013] by server-2.bemta-4.messagelabs.com id
	40/77-28922-AD5CEB05; Wed, 05 Dec 2012 03:56:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354679769!22838004!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDcyMjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28589 invoked from network); 5 Dec 2012 03:56:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 03:56:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,219,1355097600"; d="scan'208";a="16160899"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 03:56:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 03:56:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tg65g-00004t-RI;
	Wed, 05 Dec 2012 03:56:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tg65g-00078q-Cy;
	Wed, 05 Dec 2012 03:56:08 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14562-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 03:56:08 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14562: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14562 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 14484

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14484
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14484

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  a8a9e1c126ea
baseline version:
 xen                  d89986111f0c

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23421:a8a9e1c126ea
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
changeset:   23420:cadc212c8ef3
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:01 2012 +0000
    
    xen: fix error handling of guest_physmap_mark_populate_on_demand()
    
    The only user of the "out" label bypasses a necessary unlock, thus
    enabling the caller to lock up Xen.
    
    Also, the function was never meant to be called by a guest for itself,
    so rather than inspecting the code paths in depth for potential other
    problems this might cause, and adjusting e.g. the non-guest printk()
    in the above error path, just disallow the guest access to it.
    
    Finally, the printk() (considering its potential of spamming the log,
    the more that it's not using XENLOG_GUEST), is being converted to
    P2M_DEBUG(), as debugging is what it apparently was added for in the
    first place.
    
    This is XSA-30 / CVE-2012-5514.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
changeset:   23419:f81286b3be32
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:49:56 2012 +0000
    
    xen: add missing guest address range checks to XENMEM_exchange handlers
    
    Ever since its existence (3.0.3 iirc) the handler for this has been
    using non address range checking guest memory accessors (i.e.
    the ones prefixed with two underscores) without first range
    checking the accessed space (via guest_handle_okay()), allowing
    a guest to access and overwrite hypervisor memory.
    
    This is XSA-29 / CVE-2012-5513.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
changeset:   23418:e7c8ffa11596
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:49:53 2012 +0000
    
    x86/HVM: range check xen_hvm_set_mem_access.hvmmem_access before use
    
    Otherwise an out of bounds array access can happen if changing the
    default access is being requested, which - if it doesn't crash Xen -
    would subsequently allow reading arbitrary memory through
    HVMOP_get_mem_access (again, unless that operation crashes Xen).
    
    This is XSA-28 / CVE-2012-5512.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
changeset:   23417:53ef1f35a0f8
user:        Tim Deegan <tim@xen.org>
date:        Tue Dec 04 18:49:49 2012 +0000
    
    hvm: Limit the size of large HVM op batches
    
    Doing large p2m updates for HVMOP_track_dirty_vram without preemption
    ties up the physical processor. Integrating preemption into the p2m
    updates is hard so simply limit to 1GB which is sufficient for a 15000
    * 15000 * 32bpp framebuffer.
    
    For HVMOP_modified_memory and HVMOP_set_mem_type preemptible add the
    necessary machinery to handle preemption.
    
    This is CVE-2012-5511 / XSA-27.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    x86/paging: Don't allocate user-controlled amounts of stack memory.
    
    This is XSA-27 / CVE-2012-5511.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    v2: Provide definition of GB to fix x86-32 compile.
    
    Signed-off-by: Jan Beulich <JBeulich@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23416:7172203aec98
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:49:42 2012 +0000
    
    gnttab: fix releasing of memory upon switches between versions
    
    gnttab_unpopulate_status_frames() incompletely freed the pages
    previously used as status frame in that they did not get removed from
    the domain's xenpage_list, thus causing subsequent list corruption
    when those pages did get allocated again for the same or another purpose.
    
    Similarly, grant_table_create() and gnttab_grow_table() both improperly
    clean up in the event of an error - pages already shared with the guest
    can't be freed by just passing them to free_xenheap_page(). Fix this by
    sharing the pages only after all allocations succeeded.
    
    This is CVE-2012-5510 / XSA-26.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
changeset:   23415:d89986111f0c
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Nov 27 13:28:36 2012 +0100
    
    x86/time: fix scale_delta() inline assembly
    
    The way it was coded, it clobbered %rdx without telling the compiler.
    This generally didn't cause any problems except when there are two back
    to back invocations (as in plt_overflow()), as in that case the
    compiler may validly assume that it can re-use for the second instance
    the value loaded into %rdx before the first one.
    
    Once at it, also properly relax the second operand of "mul" (there's no
    need for it to be in %rdx, or a register at all), and switch away from
    using explicit register names in the instruction operands.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26188:16bf7f3069a7
    xen-unstable date: Mon Nov 26 16:20:39 UTC 2012
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Wed, 05 Dec 2012 03:56:08 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14562-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 03:56:08 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14562: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14562 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 14484

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14484
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14484

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  a8a9e1c126ea
baseline version:
 xen                  d89986111f0c

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23421:a8a9e1c126ea
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
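
The shape of such a range check can be sketched in plain C (illustrative only, not the hypervisor's actual code; the MAX_ORDER value is an assumed stand-in):

```c
/* A guest-specified extent order drives loops of 1 << order iterations,
 * so it must be bounded before use. Per the commit message the
 * architectural bound would be PADDR_BITS - PAGE_SHIFT; 20 here is an
 * arbitrary illustrative constant. */
#define MAX_ORDER 20u

static int extent_order_ok(unsigned int order)
{
    return order <= MAX_ORDER;
}
```

Rejecting the order up front avoids both the near-unbounded loop and a request left half-done partway through.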
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
changeset:   23420:cadc212c8ef3
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:01 2012 +0000
    
    xen: fix error handling of guest_physmap_mark_populate_on_demand()
    
    The only user of the "out" label bypasses a necessary unlock, thus
    enabling the caller to lock up Xen.
    
    Also, the function was never meant to be called by a guest for itself,
    so rather than inspecting the code paths in depth for potential other
    problems this might cause, and adjusting e.g. the non-guest printk()
    in the above error path, just disallow guest access to it.
    
    Finally, the printk() (considering its potential for spamming the log,
    all the more so since it's not using XENLOG_GUEST) is being converted
    to P2M_DEBUG(), as debugging is apparently what it was added for in
    the first place.
    
    This is XSA-30 / CVE-2012-5514.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
changeset:   23419:f81286b3be32
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:49:56 2012 +0000
    
    xen: add missing guest address range checks to XENMEM_exchange handlers
    
    Ever since its introduction (3.0.3, iirc) the handler for this has
    been using guest memory accessors that do no address range checking
    (i.e. the ones prefixed with two underscores) without first range
    checking the accessed space (via guest_handle_okay()), allowing a
    guest to access and overwrite hypervisor memory.
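
The validation the fix adds amounts to an overflow-safe range check before any unchecked access; a stand-in sketch (range_ok() is a made-up helper, not guest_handle_okay() itself):

```c
#include <stdint.h>

/* Verify that [start, start + count) neither wraps around 2^64 nor
 * extends past the accessible limit. Only after this may unchecked
 * ("__"-prefixed) accessors be used safely. */
static int range_ok(uint64_t start, uint64_t count, uint64_t limit)
{
    if (count == 0)
        return 1;
    if (start + count < start)   /* addition wrapped: reject */
        return 0;
    return start + count <= limit;
}
```

The wrap check matters: without it, a guest could pick start/count so the sum overflows and the naive `start + count <= limit` test passes.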
    
    This is XSA-29 / CVE-2012-5513.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
changeset:   23418:e7c8ffa11596
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:49:53 2012 +0000
    
    x86/HVM: range check xen_hvm_set_mem_access.hvmmem_access before use
    
    Otherwise an out of bounds array access can happen if changing the
    default access is being requested, which - if it doesn't crash Xen -
    would subsequently allow reading arbitrary memory through
    HVMOP_get_mem_access (again, unless that operation crashes Xen).
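
The shape of the fix is a bounds check on a request-supplied value before it indexes a fixed table; a hedged sketch (names and values are illustrative, not Xen's actual HVM code):

```c
/* A caller-chosen access type selects an entry in a fixed-size table,
 * so it must be checked against the table size before the array access. */
enum mem_access { ACCESS_N, ACCESS_R, ACCESS_W, ACCESS_RW, ACCESS_MAX };

static const int access_table[ACCESS_MAX] = { 0, 1, 2, 3 };

static int lookup_access(unsigned int requested)
{
    if (requested >= ACCESS_MAX)   /* range check before indexing */
        return -1;                 /* would be -EINVAL in the hypervisor */
    return access_table[requested];
}
```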
    
    This is XSA-28 / CVE-2012-5512.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
changeset:   23417:53ef1f35a0f8
user:        Tim Deegan <tim@xen.org>
date:        Tue Dec 04 18:49:49 2012 +0000
    
    hvm: Limit the size of large HVM op batches
    
    Doing large p2m updates for HVMOP_track_dirty_vram without preemption
    ties up the physical processor. Integrating preemption into the p2m
    updates is hard so simply limit to 1GB which is sufficient for a 15000
    * 15000 * 32bpp framebuffer.
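
The arithmetic behind the quoted bound checks out: at 32bpp a pixel is 4 bytes, so a 15000 x 15000 framebuffer needs 900,000,000 bytes, under 1GB. A small sketch (fb_bytes() is a made-up helper for the calculation only):

```c
#include <stdint.h>

#define GB (1024ULL * 1024 * 1024)

/* Bytes needed for a w x h framebuffer at the given bits per pixel. */
static uint64_t fb_bytes(uint64_t w, uint64_t h, uint64_t bpp)
{
    return w * h * (bpp / 8);
}
```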
    
    For HVMOP_modified_memory and HVMOP_set_mem_type, add the necessary
    machinery to handle preemption.
    
    This is CVE-2012-5511 / XSA-27.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    x86/paging: Don't allocate user-controlled amounts of stack memory.
    
    This is XSA-27 / CVE-2012-5511.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    v2: Provide definition of GB to fix x86-32 compile.
    
    Signed-off-by: Jan Beulich <JBeulich@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23416:7172203aec98
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:49:42 2012 +0000
    
    gnttab: fix releasing of memory upon switches between versions
    
    gnttab_unpopulate_status_frames() incompletely freed the pages
    previously used as status frames, in that they did not get removed
    from the domain's xenpage_list, thus causing subsequent list
    corruption when those pages got allocated again for the same or
    another purpose.
    
    Similarly, grant_table_create() and gnttab_grow_table() both improperly
    clean up in the event of an error - pages already shared with the guest
    can't be freed by just passing them to free_xenheap_page(). Fix this by
    sharing the pages only after all allocations succeeded.
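
The "allocate everything first, publish last" error-handling pattern the fix adopts can be sketched in plain C (all names are illustrative, not Xen's gnttab code; malloc stands in for page allocation and the caller-visible array stands in for sharing with the guest):

```c
#include <stdlib.h>

#define MAX_FRAMES 16

/* Acquire every resource into a private array first; only once all
 * allocations have succeeded are they copied into the caller-visible
 * ("published") array. On failure, cleanup only ever frees pages that
 * were never published, which is the property the fix restores. */
static int alloc_all_then_publish(void *published[], size_t n)
{
    void *tmp[MAX_FRAMES];
    size_t i;

    if (n > MAX_FRAMES)
        return -1;

    for (i = 0; i < n; i++) {
        tmp[i] = malloc(64);
        if (tmp[i] == NULL) {
            while (i--)
                free(tmp[i]);   /* safe: nothing was published yet */
            return -1;
        }
    }

    for (i = 0; i < n; i++)
        published[i] = tmp[i];  /* publish only after full success */
    return 0;
}
```

Freeing an already-shared page on the error path is exactly what free_xenheap_page() cannot safely do, hence deferring the publish step.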
    
    This is CVE-2012-5510 / XSA-26.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
changeset:   23415:d89986111f0c
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Nov 27 13:28:36 2012 +0100
    
    x86/time: fix scale_delta() inline assembly
    
    The way it was coded, it clobbered %rdx without telling the compiler.
    This generally didn't cause any problems, except when there were two
    back-to-back invocations (as in plt_overflow()): in that case the
    compiler may validly assume that it can re-use, for the second
    instance, the value loaded into %rdx before the first one.
    
    While at it, also properly relax the second operand of "mul" (there's
    no need for it to be in %rdx, or in a register at all), and switch
    away from using explicit register names in the instruction operands.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26188:16bf7f3069a7
    xen-unstable date: Mon Nov 26 16:20:39 UTC 2012
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 06:03:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 06:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg84B-0007ls-Ff; Wed, 05 Dec 2012 06:02:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tg849-0007ln-UC
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 06:02:42 +0000
Received: from [85.158.143.35:34373] by server-2.bemta-4.messagelabs.com id
	60/95-30861-183EEB05; Wed, 05 Dec 2012 06:02:41 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1354687358!11766854!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDEwNjU0Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16910 invoked from network); 5 Dec 2012 06:02:40 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 06:02:40 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354687360; x=1386223360;
	h=mime-version:content-transfer-encoding:subject:
	message-id:date:from:to:cc;
	bh=v6zzVpntDVubAOwlH7OdS5GJRruf9swfBb0uKz3hlJ4=;
	b=LwQq6xmOZdhwMrkp2H9Hvd3Pn8gA0u+JDprks0GSu+mtXs+FOSHlAraU
	tj8nwudPAC8OD9kre/kiPOvvzdxCwgflBxRjn6AukTdPcx/Tf+6T+veKg
	u/3sto+GyKvUnBno2rOPHSbWUaylkG+Zm49tY7sxmJL7lCej6EwK7ySJc A=;
X-IronPort-AV: E=McAfee;i="5400,1158,6916"; a="408649719"
Received: from smtp-in-9001.sea19.amazon.com ([10.186.144.32])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 05 Dec 2012 06:02:37 +0000
Received: from [127.0.0.1] (vpn-10-50-44-196.sea3.amazon.com [10.50.44.196])
	by smtp-in-9001.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB562aJH004407; Wed, 5 Dec 2012 06:02:37 GMT
MIME-Version: 1.0
X-Mercurial-Node: 2614dd8be3a01247230c51a526615fe92c372f3f
Message-Id: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
User-Agent: Mercurial-patchbomb/2.2.3
Date: Wed, 05 Dec 2012 06:02:07 +0000
From: Matt Wilson <msw@amazon.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be re-pinned
 when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
but still have the flexibility to change the configuration later.
There's no logic that keys off of domain->is_pinned outside of
sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
is_pinned_vcpu() macro to only check for a single CPU set in the
cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
boots.

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
--- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
+++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
@@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
 
 > Default: `false`
 
-Pin dom0 vcpus to their respective pcpus
+Initially pin dom0 vcpus to their respective pcpus
 
 ### e820-mtrr-clip
 > `= <boolean>`
diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
--- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
+++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
@@ -45,10 +45,6 @@
 /* xen_processor_pmbits: xen control Cx, Px, ... */
 unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
 
-/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
-bool_t opt_dom0_vcpus_pin;
-boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
-
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
 DEFINE_SPINLOCK(domlist_update_lock);
 DEFINE_RCU_READ_LOCK(domlist_read_lock);
@@ -235,7 +231,6 @@ struct domain *domain_create(
 
     if ( domid == 0 )
     {
-        d->is_pinned = opt_dom0_vcpus_pin;
         d->disable_migrate = 1;
     }
 
diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
--- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
+++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
@@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
  * */
 int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
 integer_param("sched_ratelimit_us", sched_ratelimit_us);
+
+/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
+bool_t opt_dom0_vcpus_pin;
+boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
+
 /* Various timer handlers. */
 static void s_timer_fn(void *unused);
 static void vcpu_periodic_timer_fn(void *data);
@@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
      * domain-0 VCPUs, are pinned onto their respective physical CPUs.
      */
     v->processor = processor;
-    if ( is_idle_domain(d) || d->is_pinned )
+
+    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
         cpumask_copy(v->cpu_affinity, cpumask_of(processor));
     else
         cpumask_setall(v->cpu_affinity);
@@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
     cpumask_t online_affinity;
     cpumask_t *online;
 
-    if ( v->domain->is_pinned )
-        return -EINVAL;
     online = VCPU2ONLINE(v);
     cpumask_and(&online_affinity, affinity, online);
     if ( cpumask_empty(&online_affinity) )
diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
+++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
@@ -292,8 +292,6 @@ struct domain
     enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
     /* Domain is paused by controller software? */
     bool_t           is_paused_by_controller;
-    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
-    bool_t           is_pinned;
 
     /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
 #if MAX_VIRT_CPUS <= BITS_PER_LONG
@@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
 
 #define is_hvm_domain(d) ((d)->is_hvm)
 #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
-#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
-                           cpumask_weight((v)->cpu_affinity) == 1)
+#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
 #ifdef HAS_PASSTHROUGH
 #define need_iommu(d)    ((d)->need_iommu)
 #else

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 06:03:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 06:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg84B-0007ls-Ff; Wed, 05 Dec 2012 06:02:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tg849-0007ln-UC
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 06:02:42 +0000
Received: from [85.158.143.35:34373] by server-2.bemta-4.messagelabs.com id
	60/95-30861-183EEB05; Wed, 05 Dec 2012 06:02:41 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1354687358!11766854!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDEwNjU0Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16910 invoked from network); 5 Dec 2012 06:02:40 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 06:02:40 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354687360; x=1386223360;
	h=mime-version:content-transfer-encoding:subject:
	message-id:date:from:to:cc;
	bh=v6zzVpntDVubAOwlH7OdS5GJRruf9swfBb0uKz3hlJ4=;
	b=LwQq6xmOZdhwMrkp2H9Hvd3Pn8gA0u+JDprks0GSu+mtXs+FOSHlAraU
	tj8nwudPAC8OD9kre/kiPOvvzdxCwgflBxRjn6AukTdPcx/Tf+6T+veKg
	u/3sto+GyKvUnBno2rOPHSbWUaylkG+Zm49tY7sxmJL7lCej6EwK7ySJc A=;
X-IronPort-AV: E=McAfee;i="5400,1158,6916"; a="408649719"
Received: from smtp-in-9001.sea19.amazon.com ([10.186.144.32])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 05 Dec 2012 06:02:37 +0000
Received: from [127.0.0.1] (vpn-10-50-44-196.sea3.amazon.com [10.50.44.196])
	by smtp-in-9001.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB562aJH004407; Wed, 5 Dec 2012 06:02:37 GMT
MIME-Version: 1.0
X-Mercurial-Node: 2614dd8be3a01247230c51a526615fe92c372f3f
Message-Id: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
User-Agent: Mercurial-patchbomb/2.2.3
Date: Wed, 05 Dec 2012 06:02:07 +0000
From: Matt Wilson <msw@amazon.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be re-pinned
 when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
but still have the flexibility to change the configuration later.
There's no logic that keys off of domain->is_pinned outside of
sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
is_pinned_vcpu() macro to only check for a single CPU set in the
cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
boots.

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
--- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
+++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
@@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
 
 > Default: `false`
 
-Pin dom0 vcpus to their respective pcpus
+Initially pin dom0 vcpus to their respective pcpus
 
 ### e820-mtrr-clip
 > `= <boolean>`
diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
--- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
+++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
@@ -45,10 +45,6 @@
 /* xen_processor_pmbits: xen control Cx, Px, ... */
 unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
 
-/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
-bool_t opt_dom0_vcpus_pin;
-boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
-
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
 DEFINE_SPINLOCK(domlist_update_lock);
 DEFINE_RCU_READ_LOCK(domlist_read_lock);
@@ -235,7 +231,6 @@ struct domain *domain_create(
 
     if ( domid == 0 )
     {
-        d->is_pinned = opt_dom0_vcpus_pin;
         d->disable_migrate = 1;
     }
 
diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
--- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
+++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
@@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
  * */
 int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
 integer_param("sched_ratelimit_us", sched_ratelimit_us);
+
+/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
+bool_t opt_dom0_vcpus_pin;
+boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
+
 /* Various timer handlers. */
 static void s_timer_fn(void *unused);
 static void vcpu_periodic_timer_fn(void *data);
@@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
      * domain-0 VCPUs, are pinned onto their respective physical CPUs.
      */
     v->processor = processor;
-    if ( is_idle_domain(d) || d->is_pinned )
+
+    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
         cpumask_copy(v->cpu_affinity, cpumask_of(processor));
     else
         cpumask_setall(v->cpu_affinity);
@@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
     cpumask_t online_affinity;
     cpumask_t *online;
 
-    if ( v->domain->is_pinned )
-        return -EINVAL;
     online = VCPU2ONLINE(v);
     cpumask_and(&online_affinity, affinity, online);
     if ( cpumask_empty(&online_affinity) )
diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
+++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
@@ -292,8 +292,6 @@ struct domain
     enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
     /* Domain is paused by controller software? */
     bool_t           is_paused_by_controller;
-    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
-    bool_t           is_pinned;
 
     /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
 #if MAX_VIRT_CPUS <= BITS_PER_LONG
@@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
 
 #define is_hvm_domain(d) ((d)->is_hvm)
 #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
-#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
-                           cpumask_weight((v)->cpu_affinity) == 1)
+#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
 #ifdef HAS_PASSTHROUGH
 #define need_iommu(d)    ((d)->need_iommu)
 #else
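The revised is_pinned_vcpu() above treats a vCPU as pinned exactly when its
affinity mask contains a single CPU. A minimal user-space illustration of that
predicate (not the hypervisor code), with a popcount loop standing in for
cpumask_weight() and a plain 64-bit integer standing in for a cpumask:

```c
#include <stdint.h>

/* User-space stand-in for cpumask_weight(): count the set bits in a mask. */
static int mask_weight(uint64_t mask)
{
    int n = 0;
    while (mask) {
        mask &= mask - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}

/* Mirrors the revised is_pinned_vcpu(): pinned iff exactly one CPU allowed. */
static int is_pinned(uint64_t affinity)
{
    return mask_weight(affinity) == 1;
}
```

With this, pinning is purely a property of the affinity mask, which is why the
patch can drop the separate is_pinned flag from struct domain.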

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 06:51:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 06:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg8pE-0000M7-0B; Wed, 05 Dec 2012 06:51:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liyz@pku.edu.cn>) id 1Tg8MK-0008JK-1a
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 06:21:28 +0000
Received: from [85.158.139.211:39846] by server-14.bemta-5.messagelabs.com id
	F4/35-21768-7E7EEB05; Wed, 05 Dec 2012 06:21:27 +0000
X-Env-Sender: liyz@pku.edu.cn
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354688484!19127718!1
X-Originating-IP: [162.105.129.21]
X-SpamReason: No, hits=1.4 required=7.0 tests=FROM_EXCESS_BASE64,
	SUBJECT_EXCESS_BASE64
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25913 invoked from network); 5 Dec 2012 06:21:25 -0000
Received: from mx1.pku.edu.cn (HELO mail.pku.edu.cn) (162.105.129.21)
	by server-11.tower-206.messagelabs.com with SMTP;
	5 Dec 2012 06:21:25 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by mail.pku.edu.cn (tmailer) with ESMTP id 1C83F2804E
	for <xen-devel@lists.xen.org>; Wed,  5 Dec 2012 14:21:22 +0800 (CST)
X-Spam-Flag: NO
X-Spam-Score: -427.52
X-Spam-Level: 
X-Spam-Status: No, score=-427.52 tagged_above=-1000 required=20
	tests=[ALL_TRUSTED=-1.8, AWL=-117.080, BAYES_00=-10.396,
	CN_BODY_1043=0.3, FROM_EXCESS_BASE64=1.456, USER_IN_WHITELIST=-300]
	autolearn=no
Received: from mail.pku.edu.cn ([127.0.0.1])
	by localhost (bj-mail05.pku.edu.cn [127.0.0.1]) (theinterface-new,
	port 10024) with ESMTP id 667jOQ0PuTp0 for <xen-devel@lists.xen.org>;
	Wed,  5 Dec 2012 14:21:21 +0800 (CST)
Received: from bj-mail05.pku.edu.cn (bj-mail05.pku.edu.cn [162.105.129.125])
	by mail.pku.edu.cn (tmailer) with ESMTP id 5AEB02804D
	for <xen-devel@lists.xen.org>; Wed,  5 Dec 2012 14:21:21 +0800 (CST)
Date: Wed, 5 Dec 2012 14:21:20 +0800 (CST)
From: Yanzhang Li <liyz@pku.edu.cn>
To: xen-devel@lists.xen.org
Message-ID: <1141524074.6.1354688480672.JavaMail.root@bj-mail05.pku.edu.cn>
MIME-Version: 1.0
X-Originating-IP: [162.105.129.93]
X-Mailman-Approved-At: Wed, 05 Dec 2012 06:51:18 +0000
Subject: [Xen-devel] Suggestion: Improve hypercall Interface to get real
	return value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello everyone,

I am a user of Xen and I have encountered a problem when trying to get the
return value from a hypercall in the tools. I found that when a program in the
tools called a hypercall and it failed, the program would get the return value
-1 instead of the real return value given by the hypervisor.

The reason is that the hypercall interface of the tools uses ioctl() to issue
the hypercall, and ioctl() only returns -1 when a failure occurs. The related
code is in xen-4.2.0/tools/libxc/xc_linux_osdep.c:
    static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
    {
        int fd = (int)h;
        return ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
    }

However, I don't think this is a good idea, because some tools may wish to use
the real return value. For example, in xen-4.2.0/tools/memshr/interface.c, the
function memshr_vbd_issue_ro_request switches on the variable ret, which comes
from the return value of the hypercall do_memory_op:
    switch(ret)
    {
        case XENMEM_SHARING_OP_S_HANDLE_INVALID:
            ……
            break;
        case XENMEM_SHARING_OP_C_HANDLE_INVALID:
            ……
            break;
        default:
            break;
    }

So I think if we could modify the interface a little to return the real error
number, it would benefit many tools developers, myself included. My suggested
modification is simple:
    static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
    {
        int fd = (int)h;
        int ret = ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
        if (ret < 0)
            return -errno;
        return ret;
    }

Do you think this would be a good modification?
Also, I am curious why the original design didn't do that. Is it a bug, or is
it designed that way intentionally?
Any suggestions and comments will be highly appreciated.

Thanks!
Best Regards,
Yanzhang Li

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
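The -errno convention proposed in the message above can be sketched in
standalone user-space C; here fake_ioctl_fail() is a hypothetical stand-in for
an ioctl(fd, IOCTL_PRIVCMD_HYPERCALL, ...) call that fails and sets errno, the
way the real privcmd ioctl does:

```c
#include <errno.h>

/* Hypothetical stand-in for a failing ioctl(): returns -1 and sets errno,
 * which is all the caller of the original wrapper ever gets to see. */
static int fake_ioctl_fail(void)
{
    errno = ENOSYS;
    return -1;
}

/* The suggested pattern: convert ioctl()'s -1/errno convention into a
 * negative errno return value that callers can switch on directly. */
static int do_hypercall(void)
{
    int ret = fake_ioctl_fail();
    if (ret < 0)
        return -errno;
    return ret;
}
```

A caller can then distinguish, say, -ENOSYS from -EINVAL instead of seeing a
bare -1, which is exactly what code like memshr_vbd_issue_ro_request needs.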

From xen-devel-bounces@lists.xen.org Wed Dec 05 08:01:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 08:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg9uj-0002Om-Do; Wed, 05 Dec 2012 08:01:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tg9uh-0002OG-Rn
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 08:01:04 +0000
Received: from [85.158.139.83:61030] by server-6.bemta-5.messagelabs.com id
	1C/A6-19321-E3FFEB05; Wed, 05 Dec 2012 08:01:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1354694461!24154925!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxOTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24550 invoked from network); 5 Dec 2012 08:01:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 08:01:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 08:01:00 +0000
Message-Id: <50BF0D4902000078000AE0D0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 08:00:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <1354563168-6609-1-git-send-email-olaf@aepfle.de>
	<50BDDEBF02000078000ADAAC@nat28.tlf.novell.com>
	<20121204182151.GA25878@aepfle.de>
In-Reply-To: <20121204182151.GA25878@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 19:21, Olaf Hering <olaf@aepfle.de> wrote:
> On Tue, Dec 04, Jan Beulich wrote:
> 
>> This looks necessary but insufficient - there's nothing really
>> preventing backend_changed() from being called more than once
>> for a given device (it is simply the handler of a xenbus watch). Hence
>> I think either that function needs to be guarded against multiple
>> execution (e.g. by removing the watch from that function itself,
>> if that's permitted by xenbus), or to properly deal with the
>> effects this has (including but probably not limited to the leaking
>> of be->mode).
> 
> If another watch really does trigger after the kfree(be) in
> xen_blkbk_remove(), wouldn't backend_changed() access stale memory?
> So if that can really happen in practice, shouldn't the backend_watch be
> a separate allocation instead of being contained within backend_info?
> 
> Looking at unregister_xenbus_watch(), it removes the watch from the
> list, so that process_msg() will not see it anymore.

That's not the scenario I was talking about: I'm concerned about
multiple calls to backend_changed() similarly leaking "mode" (and
possibly causing other bad things to happen) while the device is still
alive - after all, it overwrites "mode" without checking what is
already there.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 08:03:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 08:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg9wY-0002i0-4O; Wed, 05 Dec 2012 08:02:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tg9wW-0002hl-9X
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 08:02:56 +0000
Received: from [193.109.254.147:10485] by server-8.bemta-14.messagelabs.com id
	F9/C6-05026-FAFFEB05; Wed, 05 Dec 2012 08:02:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354694572!6317999!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDcyMjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31753 invoked from network); 5 Dec 2012 08:02:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 08:02:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,220,1355097600"; d="scan'208";a="16163842"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 08:02:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 08:02:52 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tg9wR-0001Si-S8;
	Wed, 05 Dec 2012 08:02:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tg9wR-0003zv-Qa;
	Wed, 05 Dec 2012 08:02:51 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14563-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 08:02:51 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14563: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14563 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14563/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14501
 test-amd64-amd64-xl-sedf     10 guest-saverestore         fail REGR. vs. 14501

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  b306bce61341
baseline version:
 xen                  b3dafd42268a

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Ronny Hegewald <Ronny.Hegewald@online.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=b306bce61341
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing b306bce61341
+ branch=xen-4.2-testing
+ revision=b306bce61341
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r b306bce61341 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 9 changes to 8 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 08:03:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 08:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tg9wY-0002i0-4O; Wed, 05 Dec 2012 08:02:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tg9wW-0002hl-9X
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 08:02:56 +0000
Received: from [193.109.254.147:10485] by server-8.bemta-14.messagelabs.com id
	F9/C6-05026-FAFFEB05; Wed, 05 Dec 2012 08:02:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354694572!6317999!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDcyMjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31753 invoked from network); 5 Dec 2012 08:02:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 08:02:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,220,1355097600"; d="scan'208";a="16163842"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 08:02:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 08:02:52 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tg9wR-0001Si-S8;
	Wed, 05 Dec 2012 08:02:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tg9wR-0003zv-Qa;
	Wed, 05 Dec 2012 08:02:51 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14563-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 08:02:51 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14563: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14563 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14563/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14501
 test-amd64-amd64-xl-sedf     10 guest-saverestore         fail REGR. vs. 14501

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  b306bce61341
baseline version:
 xen                  b3dafd42268a

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Ronny Hegewald <Ronny.Hegewald@online.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=b306bce61341
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing b306bce61341
+ branch=xen-4.2-testing
+ revision=b306bce61341
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r b306bce61341 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 9 changes to 8 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 09:17:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 09:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgB6f-00046x-JG; Wed, 05 Dec 2012 09:17:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgB6d-00046s-Qk
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 09:17:28 +0000
Received: from [85.158.143.99:2646] by server-3.bemta-4.messagelabs.com id
	9C/E9-06841-7211FB05; Wed, 05 Dec 2012 09:17:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354699046!27233620!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32503 invoked from network); 5 Dec 2012 09:17:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 09:17:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 09:17:20 +0000
Message-Id: <50BF1F2F02000078000AE0E3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 09:17:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
In-Reply-To: <b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 19:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,97 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( crashing_cpu != smp_processor_id() );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;
> +    }
> +
> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
> +     * queue up another NMI, to force us back into this loop if we exit.
>       */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>  
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>  
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));

As in the description you talk about the LAPIC being (possibly)
disabled here - did you check that this would not #GP in that
case?

> +        break;
>  
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>  
>      for ( ; ; )
>          halt();
> -
> -    return 1;
>  }

>...

>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* No-op trap handler.  Required for kexec path. */
> +ENTRY(trap_nop)

I'd prefer you to re-use an existing IRETQ (e.g. the one you add
below) here - this is not performance critical, and hence there's
no need for it to be aligned.

Jan

> +        iretq
> +
> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        push %rax
> +        movq %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
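
[Editorial note on the quoted enable_nmis, our reading rather than part of the patch: hardware latches out further NMIs from the moment one is delivered until the next IRET executes, so a routine that wants to re-enable NMIs without returning must execute an IRET "to itself". It fabricates the five-slot frame IRETQ expects and returns to its own next instruction, in pseudocode:]

```
save rax                       ; scratch register
rax := rsp                     ; capture RSP before the pushes below

push 0                         ; SS  (a null selector is tolerated here
                               ;      in long mode at CPL 0)
push rax                       ; RSP (restore the current stack)
push rflags                    ; RFLAGS
push __HYPERVISOR_CS           ; CS
push address-of(next)          ; RIP = the instruction after IRETQ

iretq                          ; pops the frame; clearing the hardware
                               ; NMI latch is the side effect we want
next:
restore rax
return
```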



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 09:22:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 09:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBB2-0004M1-8L; Wed, 05 Dec 2012 09:22:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgBB1-0004Lv-EX
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 09:21:59 +0000
Received: from [85.158.143.35:10432] by server-3.bemta-4.messagelabs.com id
	3E/21-06841-6321FB05; Wed, 05 Dec 2012 09:21:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1354699288!5271297!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20642 invoked from network); 5 Dec 2012 09:21:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 09:21:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 09:21:27 +0000
Message-Id: <50BF202502000078000AE0E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 09:21:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<f6ad86b61d5afd86026c.1354644970@andrewcoop.uk.xensource.com>
In-Reply-To: <f6ad86b61d5afd86026c.1354644970@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3 of 4 RFC] x86/nmi: Prevent reentrant
 execution of the C nmi handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 19:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> The (old) function do_nmi() is not safe to reenter.  Rename it to
> _do_nmi() and introduce a new do_nmi() which adds reentrancy guards.
> 
> If a reentrant NMI has been detected, then it is highly likely that the
> outer NMI exception frame has been corrupted, meaning we cannot return
> to the original context.  In this case, we panic() in an obvious
> manner rather than falling into an infinite loop.
> 
> panic() however is not safe to reenter from an NMI context, as an NMI
> (or MCE) can interrupt it inside its critical section, at which point a
> new call to panic() will deadlock.  As a result, we bail early if a
> panic() is already in progress, as Xen is about to die anyway.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> --
> I am fairly sure this is safe with the current kexec_crash functionality
> which involves holding all non-crashing pcpus in an NMI loop.  In the
> case of reentrant NMIs and panic_in_progress, we will repeatedly bail
> early in an infinite loop of NMIs, which has the same intended effect of
> simply causing all non-crashing CPUs to stay out of the way while the
> main crash occurs.
> 
> diff -r 48a60a407e15 -r f6ad86b61d5a xen/arch/x86/traps.c
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -88,6 +88,7 @@ static char __read_mostly opt_nmi[10] = 
>  string_param("nmi", opt_nmi);
>  
>  DEFINE_PER_CPU(u64, efer);
> +static DEFINE_PER_CPU(bool_t, nmi_in_progress) = 0;
>  
>  DEFINE_PER_CPU_READ_MOSTLY(u32, ler_msr);
>  
> @@ -3182,7 +3183,8 @@ static int dummy_nmi_callback(struct cpu
>   
>  static nmi_callback_t nmi_callback = dummy_nmi_callback;
>  
> -void do_nmi(struct cpu_user_regs *regs)
> +/* This function should never be called directly.  Use do_nmi() instead. */
> +static void _do_nmi(struct cpu_user_regs *regs)
>  {
>      unsigned int cpu = smp_processor_id();
>      unsigned char reason;
> @@ -3208,6 +3210,44 @@ void do_nmi(struct cpu_user_regs *regs)
>      }
>  }
>  
> +/* This function is NOT SAFE to call from C code in general.
> + * Use with extreme care! */
> +void do_nmi(struct cpu_user_regs *regs)
> +{
> +    bool_t * in_progress = &this_cpu(nmi_in_progress);
> +
> +    if ( is_panic_in_progress() )
> +    {
> +        /* A panic is already in progress.  It may have reenabled NMIs,
> +         * or we are simply unlucky enough to receive one right now.  Either
> +         * way, bail early, as Xen is about to die.
> +         *
> +         * TODO: Ideally we should exit without executing an iret, to
> +         * leave NMIs disabled, but that option is not currently
> +         * available to us.

You could easily provide the ground work for this here by having
the function return a bool_t (even if not immediately consumed by
the caller in this same patch).

Jan

> +         */
> +        return;
> +    }
> +
> +    if ( test_and_set_bool(*in_progress) )
> +    {
> +        /* Crash in an obvious manner, as opposed to falling into an
> +         * infinite loop because our exception frame corrupted the
> +         * exception frame of the previous NMI.
> +         *
> +         * TODO: This check does not cover all possible cases of corrupt
> +         * exception frames, but it is substantially better than
> +         * nothing.
> +         */
> +        console_force_unlock();
> +        show_execution_state(regs);
> +        panic("Reentrant NMI detected\n");
> +    }
> +
> +    _do_nmi(regs);
> +    *in_progress = 0;
> +}
> +
>  void set_nmi_callback(nmi_callback_t callback)
>  {
>      nmi_callback = callback;
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 09:31:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 09:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBKG-0004fL-AP; Wed, 05 Dec 2012 09:31:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TgBKE-0004fG-OV
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 09:31:30 +0000
Received: from [85.158.139.83:38422] by server-16.bemta-5.messagelabs.com id
	2A/5A-21311-1741FB05; Wed, 05 Dec 2012 09:31:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354699863!28503442!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2581 invoked from network); 5 Dec 2012 09:31:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 09:31:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16165909"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 09:31:03 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	09:31:03 +0000
Message-ID: <50BF1456.3010607@citrix.com>
Date: Wed, 5 Dec 2012 10:31:02 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Thor Lancelot Simon <tls@panix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com> <20121204200739.GA23149@panix.com>
In-Reply-To: <20121204200739.GA23149@panix.com>
Cc: "port-xen@netbsd.org" <port-xen@netbsd.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 21:07, Thor Lancelot Simon wrote:
> On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monn? wrote:
>>
>> Independently of what we end up doing as default for handling raw file
>> disks, could someone review this code?
>>
>> It's the first time I've done a device, so someone with more experience
>> should review it.
> 
> I am not sure I entirely follow what this code's doing, but it seems to
> me it may allow arbitrary physical pages to be exposed to userspace
> processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.

This device allows a userspace process to map memory pages shared by
another domain, provided that the other domain has granted the
necessary permissions to the domain wishing to access these pages.

This device does not allow sharing pages from the current domain with a
remote domain; it only allows the mapping of pages shared by another
domain, and they are not "arbitrary": to map a memory page you need an
abstract reference to it (a grant reference), which is provided by the
remote domain.

> Is that a correct understanding of one of its effects?  If so, there's
> a problem, since not being able to do precisely that is one important
> assumption of the 4.4BSD security model.
> 
> Thor
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:02:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBnu-0005Ge-Ty; Wed, 05 Dec 2012 10:02:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TgBnt-0005GW-My
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:02:09 +0000
Received: from [85.158.139.211:47555] by server-5.bemta-5.messagelabs.com id
	0D/4B-11353-0AB1FB05; Wed, 05 Dec 2012 10:02:08 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354701726!19158217!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0NzE3NzY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0NzE3NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3735 invoked from network); 5 Dec 2012 10:02:06 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 10:02:06 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2s7sEVU=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-093-073.pools.arcor-ip.net [84.57.93.73])
	by smtp.strato.de (josoe mo31) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id e03644oB58hSD5 ;
	Wed, 5 Dec 2012 11:01:43 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 8141D1884C; Wed,  5 Dec 2012 11:01:42 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Wed,  5 Dec 2012 11:01:37 +0100
Message-Id: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.0.1
Cc: Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	jbeulich@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
	multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

backend_changed might be called multiple times, which will leak
be->mode.  Free the previous value before storing the current mode value.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

!! Not compile tested !!

 drivers/block/xen-blkback/xenbus.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index a6585a4..8650b19 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -502,7 +502,7 @@ static void backend_changed(struct xenbus_watch *watch,
 		= container_of(watch, struct backend_info, backend_watch);
 	struct xenbus_device *dev = be->dev;
 	int cdrom = 0;
-	char *device_type;
+	char *mode, *device_type;
 
 	DPRINTK("");
 
@@ -528,13 +528,15 @@ static void backend_changed(struct xenbus_watch *watch,
 		return;
 	}
 
-	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
-	if (IS_ERR(be->mode)) {
-		err = PTR_ERR(be->mode);
+	mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
+	kfree(be->mode);
+	if (IS_ERR(mode)) {
+		err = PTR_ERR(mode);
 		be->mode = NULL;
 		xenbus_dev_fatal(dev, err, "reading mode");
 		return;
 	}
+	be->mode = mode;
 
 	device_type = xenbus_read(XBT_NIL, dev->otherend, "device-type", NULL);
 	if (!IS_ERR(device_type)) {
@@ -555,7 +557,7 @@ static void backend_changed(struct xenbus_watch *watch,
 		be->minor = minor;
 
 		err = xen_vbd_create(be->blkif, handle, major, minor,
-				 (NULL == strchr(be->mode, 'w')), cdrom);
+				 (NULL == strchr(mode, 'w')), cdrom);
 		if (err) {
 			be->major = 0;
 			be->minor = 0;
-- 
1.8.0.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:04:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBqD-0005Q4-Ln; Wed, 05 Dec 2012 10:04:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgBqC-0005Px-AT
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:04:32 +0000
Received: from [85.158.143.35:6363] by server-3.bemta-4.messagelabs.com id
	EC/A5-06841-F2C1FB05; Wed, 05 Dec 2012 10:04:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354701852!11703113!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2520 invoked from network); 5 Dec 2012 10:04:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
From xen-devel-bounces@lists.xen.org Wed Dec 05 10:04:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBqD-0005Q4-Ln; Wed, 05 Dec 2012 10:04:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgBqC-0005Px-AT
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:04:32 +0000
Received: from [85.158.143.35:6363] by server-3.bemta-4.messagelabs.com id
	EC/A5-06841-F2C1FB05; Wed, 05 Dec 2012 10:04:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354701852!11703113!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2520 invoked from network); 5 Dec 2012 10:04:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:04:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16166829"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:04:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	10:04:13 +0000
Message-ID: <1354701851.15296.120.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 10:04:11 +0000
In-Reply-To: <1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:
> +# Enable/disable stub domains
> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
> +AX_STUBDOM_DEFAULT_ENABLE([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_DEFAULT_ENABLE([vtpmmgrdom], [vtpmmgr])
> +
> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
> +
> +AC_ARG_VAR([CMAKE], [Path to the cmake program])

Weren't you going to make vtpm* conditional on the presence of cmake? Or
is there some magic here which I don't understand?
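(For concreteness, a hedged sketch of what such a guard could look like in configure.ac. The AC_PATH_PROG/AS_IF usage is standard autoconf; treating AX_STUBDOM_DEFAULT_DISABLE as the fallback for the local AX_STUBDOM_* macros is an assumption for illustration only:)

```m4
dnl Illustrative only: gate the vtpm stub domains on cmake being present.
AC_PATH_PROG([CMAKE], [cmake])
AS_IF([test -n "$CMAKE"],
      [AX_STUBDOM_DEFAULT_ENABLE([vtpm-stubdom], [vtpm])
       AX_STUBDOM_DEFAULT_ENABLE([vtpmmgrdom], [vtpmmgr])],
      [AX_STUBDOM_DEFAULT_DISABLE([vtpm-stubdom], [vtpm])
       AX_STUBDOM_DEFAULT_DISABLE([vtpmmgrdom], [vtpmmgr])])
```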

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:05:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBrA-0005Ud-4C; Wed, 05 Dec 2012 10:05:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgBr8-0005UT-IK
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:05:30 +0000
Received: from [85.158.143.99:62906] by server-3.bemta-4.messagelabs.com id
	2A/B7-06841-96C1FB05; Wed, 05 Dec 2012 10:05:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1354701902!18367629!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24937 invoked from network); 5 Dec 2012 10:05:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:05:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216452081"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:05:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 05:05:01 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgBqf-00088Q-2F;
	Wed, 05 Dec 2012 10:05:01 +0000
Message-ID: <50BF1C4D.6050609@citrix.com>
Date: Wed, 5 Dec 2012 10:05:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
	<50BF1F2F02000078000AE0E3@nat28.tlf.novell.com>
In-Reply-To: <50BF1F2F02000078000AE0E3@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.6
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 09:17, Jan Beulich wrote:
>>>> On 04.12.12 at 19:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/crash.c
>> +++ b/xen/arch/x86/crash.c
>> @@ -32,41 +32,97 @@
>>  
>>  static atomic_t waiting_for_crash_ipi;
>>  static unsigned int crashing_cpu;
>> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>>  
>> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
>> +/* This becomes the NMI handler for non-crashing CPUs. */
>> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>>  {
>> -    /* Don't do anything if this handler is invoked on crashing cpu.
>> -     * Otherwise, system will completely hang. Crashing cpu can get
>> -     * an NMI if system was initially booted with nmi_watchdog parameter.
>> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
>> +    ASSERT( crashing_cpu != smp_processor_id() );
>> +
>> +    /* Save crash information and shut down CPU.  Attempt only once. */
>> +    if ( ! this_cpu(crash_save_done) )
>> +    {
>> +        kexec_crash_save_cpu();
>> +        __stop_this_cpu();
>> +
>> +        this_cpu(crash_save_done) = 1;
>> +    }
>> +
>> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
>> +     * back to its boot state, so we are unable to rely on the regular
>> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
>> +     * (The likely scenario is that we have reverted from x2apic mode to
>> +     * xapic, at which point #GPFs will occur if we use the apic_*
>> +     * functions)
>> +     *
>> +     * The ICR and APIC ID of the LAPIC are still valid even during
>> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
>> +     * queue up another NMI, to force us back into this loop if we exit.
>>       */
>> -    if ( cpu == crashing_cpu )
>> -        return 1;
>> -    local_irq_disable();
>> +    switch ( current_local_apic_mode() )
>> +    {
>> +        u32 apic_id;
>>  
>> -    kexec_crash_save_cpu();
>> +    case APIC_MODE_X2APIC:
>> +        apic_id = apic_rdmsr(APIC_ID);
>>  
>> -    __stop_this_cpu();
>> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> As in the description you talk about the LAPIC being (possibly)
> disabled here - did you check that this would not #GP in that
> case?

Yes - we are switching on current_local_apic_mode(), which reads the
APICBASE MSR to work out which mode we are in and returns an enum
apic_mode.

With this information, all the apic_*msr accesses are in the x2apic
case, and all the apic_mem_* accesses are in the xapic case.

If the apic_mode is disabled, then we skip trying to set up the next NMI.

The patch is sadly rather less legible than I would have hoped, but the
post-patch code is far more logical to read.

>> +        break;
>>  
>> -    atomic_dec(&waiting_for_crash_ipi);
>> +    case APIC_MODE_XAPIC:
>> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
>> +
>> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
>> +            cpu_relax();
>> +
>> +        apic_mem_write(APIC_ICR2, apic_id << 24);
>> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
>> +        break;
>> +
>> +    default:
>> +        break;
>> +    }
>>  
>>      for ( ; ; )
>>          halt();
>> -
>> -    return 1;
>>  }
>> ...
>>  ENTRY(machine_check)
>>          pushq $0
>>          movl  $TRAP_machine_check,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +/* No op trap handler.  Required for kexec path. */
>> +ENTRY(trap_nop)
> I'd prefer you to re-use an existing IRETQ (e.g. the one you add
> below) here - this is not performance critical, and hence there's
> no need for it to be aligned.
>
> Jan

Certainly.

~Andrew

>
>> +        iretq
>> +
>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>> +ENTRY(enable_nmis)
>> +        push %rax
>> +        movq %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
>> +
>> +        iretq /* Disable the hardware NMI latch */
>> +1:
>> +        popq %rax
>> +        retq
>> +
>>  .section .rodata, "a", @progbits
>>  
>>  ENTRY(exception_table)

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:07:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgBsR-0005bb-KR; Wed, 05 Dec 2012 10:06:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgBsQ-0005bV-Ae
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:06:50 +0000
Received: from [85.158.139.83:54675] by server-16.bemta-5.messagelabs.com id
	8D/BB-21311-9BC1FB05; Wed, 05 Dec 2012 10:06:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354701946!24599781!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14750 invoked from network); 5 Dec 2012 10:05:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:05:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16166893"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:05:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	10:05:45 +0000
Message-ID: <1354701943.15296.122.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 5 Dec 2012 10:05:43 +0000
In-Reply-To: <1c8fa0e2583b409ca7d4.1354031680@cosworth.uk.xensource.com>
References: <1c8fa0e2583b409ca7d4.1354031680@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Lars Kurth <lars.kurth.xen@gmail.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: Reference stable maintenance
 policy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-11-27 at 15:54 +0000, Ian Campbell wrote:
> # HG changeset patch
> # User Ian Campbell <ijc@hellion.org.uk>
> # Date 1354031669 0
> # Node ID 1c8fa0e2583b409ca7d45cefa9eaff0127c28ee9
> # Parent  ae6fb202b233af815466055d9f1a635802a50855
> MAINTAINERS: Reference stable maintenance policy
> 
> I also couldn't resist fixing a typo and adding a reference to
> http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
> well.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Any objections/acks to this patch? Shall I commit it?

> 
> diff -r ae6fb202b233 -r 1c8fa0e2583b MAINTAINERS
> --- a/MAINTAINERS	Tue Nov 20 08:58:31 2012 +0100
> +++ b/MAINTAINERS	Tue Nov 27 15:54:29 2012 +0000
> @@ -1,5 +1,6 @@
>  
>  	List of maintainers and how to submit changes
> +	=============================================
>  
>  Please try to follow the guidelines below.  This will make things
>  easier on the maintainers.  Not all of these guidelines matter for every
> @@ -15,7 +16,11 @@ 3.	Make a patch available to the relevan
>  	'diff -u' to make the patch easy to merge. Be prepared to get your
>  	changes sent back with seemingly silly requests about formatting
>  	and variable names.  These aren't as silly as they seem. One
> -	job the maintainersdo is to keep things looking the same.
> +	job the maintainers do is to keep things looking the same.
> +
> +	PLEASE see http://wiki.xen.org/wiki/Submitting_Xen_Patches for
> +	hints on how to submit a patch to xen-unstable in a suitable
> +	form.
>  
>  	PLEASE try to include any credit lines you want added with the
>  	patch. It avoids people being missed off by mistake and makes
> @@ -34,6 +39,23 @@ 4.	Make sure you have the right to send 
>  
>  5.	Happy hacking.
>  
> +
> +	Stable Release Maintenance
> +	==========================
> +
> +The policy for inclusion in a Xen stable release is different to that
> +for inclusion in xen-unstable.
> +
> +Please see http://wiki.xen.org/wiki/Xen_Maintenance_Releases for more
> +information.
> +
> +Remember to copy the appropriate stable branch maintainer who will be
> +listed in this section of the MAINTAINERS file in the appropriate
> +branch.
> +
> +	Unstable Subsystem Maintainers
> +	==============================
> +
>  Descriptions of section entries:
>  
>  	M: Mail patches to: FullName <address@domain>
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:13:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgByq-00061E-FO; Wed, 05 Dec 2012 10:13:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Manuel.Bouyer@lip6.fr>) id 1TgByp-000616-5p
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:13:27 +0000
Received: from [85.158.139.211:41003] by server-3.bemta-5.messagelabs.com id
	DE/CC-18736-64E1FB05; Wed, 05 Dec 2012 10:13:26 +0000
X-Env-Sender: Manuel.Bouyer@lip6.fr
X-Msg-Ref: server-4.tower-206.messagelabs.com!1354702405!19161428!1
X-Originating-IP: [132.227.60.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14313 invoked from network); 5 Dec 2012 10:13:25 -0000
Received: from isis.lip6.fr (HELO isis.lip6.fr) (132.227.60.2)
	by server-4.tower-206.messagelabs.com with SMTP;
	5 Dec 2012 10:13:25 -0000
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
	by isis.lip6.fr (8.14.5/lip6) with ESMTP id qB5ADMXg028804
	; Wed, 5 Dec 2012 11:13:22 +0100 (CET)
X-pt: isis.lip6.fr
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
	by asim.lip6.fr (8.14.5/8.14.4) with ESMTP id qB5ADMKV005147;
	Wed, 5 Dec 2012 11:13:22 +0100 (MET)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
	id 2F30934907; Wed,  5 Dec 2012 11:13:22 +0100 (MET)
Date: Wed, 5 Dec 2012 11:13:22 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20121205101322.GA16579@asim.lip6.fr>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BE161B.8000603@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7
	(isis.lip6.fr [132.227.60.2]);
	Wed, 05 Dec 2012 11:13:22 +0100 (CET)
X-Scanned-By: MIMEDefang 2.73 on 132.227.60.2
Cc: "port-xen@netbsd.org" <port-xen@NetBSD.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
> Independently of what we end up doing as default for handling raw file
> disks, could someone review this code?

As this interacts with UVM, I think it would be better to post this to
tech-kern@.
I'm not sure UVM experts are listening here.

-- 

Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:15:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgC0Q-00066U-Vm; Wed, 05 Dec 2012 10:15:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgC0P-00066M-Vr
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:15:06 +0000
Received: from [85.158.139.211:3873] by server-4.bemta-5.messagelabs.com id
	AF/7F-15011-9AE1FB05; Wed, 05 Dec 2012 10:15:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1354702501!19071302!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27012 invoked from network); 5 Dec 2012 10:15:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 10:15:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 10:15:01 +0000
Message-Id: <50BF2CB302000078000AE142@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 10:14:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1c8fa0e2583b409ca7d4.1354031680@cosworth.uk.xensource.com>
	<1354701943.15296.122.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354701943.15296.122.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: Reference stable maintenance
 policy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 11:05, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-11-27 at 15:54 +0000, Ian Campbell wrote:
>> # HG changeset patch
>> # User Ian Campbell <ijc@hellion.org.uk>
>> # Date 1354031669 0
>> # Node ID 1c8fa0e2583b409ca7d45cefa9eaff0127c28ee9
>> # Parent  ae6fb202b233af815466055d9f1a635802a50855
>> MAINTAINERS: Reference stable maintenance policy
>> 
>> I also couldn't resist fixing a typo and adding a reference to
>> http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
>> well.
>> 
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Any objections/acks to this patch? Shall I commit it?

Acked-by: Jan Beulich <jbeulich@suse.com>

(but formally you need Keir's ack anyway)

>> diff -r ae6fb202b233 -r 1c8fa0e2583b MAINTAINERS
>> --- a/MAINTAINERS	Tue Nov 20 08:58:31 2012 +0100
>> +++ b/MAINTAINERS	Tue Nov 27 15:54:29 2012 +0000
>> @@ -1,5 +1,6 @@
>>  
>>  	List of maintainers and how to submit changes
>> +	=============================================
>>  
>>  Please try to follow the guidelines below.  This will make things
>>  easier on the maintainers.  Not all of these guidelines matter for every
>> @@ -15,7 +16,11 @@ 3.	Make a patch available to the relevan
>>  	'diff -u' to make the patch easy to merge. Be prepared to get your
>>  	changes sent back with seemingly silly requests about formatting
>>  	and variable names.  These aren't as silly as they seem. One
>> -	job the maintainersdo is to keep things looking the same.
>> +	job the maintainers do is to keep things looking the same.
>> +
>> +	PLEASE see http://wiki.xen.org/wiki/Submitting_Xen_Patches for
>> +	hints on how to submit a patch to xen-unstable in a suitable
>> +	form.
>>  
>>  	PLEASE try to include any credit lines you want added with the
>>  	patch. It avoids people being missed off by mistake and makes
>> @@ -34,6 +39,23 @@ 4.	Make sure you have the right to send 
>>  
>>  5.	Happy hacking.
>>  
>> +
>> +	Stable Release Maintenance
>> +	==========================
>> +
>> +The policy for inclusion in a Xen stable release is different to that
>> +for inclusion in xen-unstable.
>> +
>> +Please see http://wiki.xen.org/wiki/Xen_Maintenance_Releases for more
>> +information.
>> +
>> +Remember to copy the appropriate stable branch maintainer who will be
>> +listed in this section of the MAINTAINERS file in the appropriate
>> +branch.
>> +
>> +	Unstable Subsystem Maintainers
>> +	==============================
>> +
>>  Descriptions of section entries:
>>  
>>  	M: Mail patches to: FullName <address@domain>
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:16:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgC1F-0006Ak-E6; Wed, 05 Dec 2012 10:15:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Manuel.Bouyer@lip6.fr>) id 1TgC1D-0006AU-Tp
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:15:56 +0000
Received: from [85.158.137.99:23304] by server-2.bemta-3.messagelabs.com id
	5D/DD-04744-BDE1FB05; Wed, 05 Dec 2012 10:15:55 +0000
X-Env-Sender: Manuel.Bouyer@lip6.fr
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354702554!14824684!1
X-Originating-IP: [132.227.60.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8389 invoked from network); 5 Dec 2012 10:15:54 -0000
Received: from isis.lip6.fr (HELO isis.lip6.fr) (132.227.60.2)
	by server-9.tower-217.messagelabs.com with SMTP;
	5 Dec 2012 10:15:54 -0000
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
	by isis.lip6.fr (8.14.5/lip6) with ESMTP id qB5AFqZj003611
	; Wed, 5 Dec 2012 11:15:52 +0100 (CET)
X-pt: isis.lip6.fr
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
	by asim.lip6.fr (8.14.5/8.14.4) with ESMTP id qB5AFpOK017141;
	Wed, 5 Dec 2012 11:15:51 +0100 (MET)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
	id DEABD34907; Wed,  5 Dec 2012 11:15:51 +0100 (MET)
Date: Wed, 5 Dec 2012 11:15:51 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Thor Lancelot Simon <tls@panix.com>
Message-ID: <20121205101551.GA2999@asim.lip6.fr>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com> <20121204200739.GA23149@panix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121204200739.GA23149@panix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7
	(isis.lip6.fr [132.227.60.2]);
	Wed, 05 Dec 2012 11:15:52 +0100 (CET)
X-Scanned-By: MIMEDefang 2.73 on 132.227.60.2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"port-xen@netbsd.org" <port-xen@NetBSD.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 03:07:39PM -0500, Thor Lancelot Simon wrote:
> On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
> > 
> > Independently of what we end up doing as default for handling raw file
> > disks, could someone review this code?
> > 
> > It's the first time I've done a device, so someone with more experience
> > should review it.
> 
> I am not sure I entirely follow what this code's doing, but it seems to
> me it may allow arbitrary physical pages to be exposed to userspace
> processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.
> 
> Is that a correct understanding of one of its effects?  If so, there's
> a problem, since not being able to do precisely that is one important
> assumption of the 4.4BSD security model.

If I read it properly, it only allows mapping pages that are part of a
grant. You provide the ioctl with a grant reference, and that is what
the driver uses to find the physical pages. So it should be limited to
pages that are referenced by a grant.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:22:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgC7b-0006aU-CO; Wed, 05 Dec 2012 10:22:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgC7Z-0006aP-5s
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:22:29 +0000
Received: from [85.158.139.83:27831] by server-16.bemta-5.messagelabs.com id
	0A/49-21311-4602FB05; Wed, 05 Dec 2012 10:22:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1354702905!24180667!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17566 invoked from network); 5 Dec 2012 10:21:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 10:21:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 10:21:31 +0000
Message-Id: <50BF2E3802000078000AE162@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 10:21:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,<konrad.wilk@oracle.com>
References: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
 multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 11:01, Olaf Hering <olaf@aepfle.de> wrote:
> backend_changed might be called multiple times, which will leak
> be->mode. Free the previous value before storing the current mode value.

As said before - this is one possible route to take. But did you
consider at all the alternative of preventing the function from
getting called more than once for a given device? As also said
before, I think that would avoid other bad effects, and hence
should be preferred (and would likely also result in a smaller
patch).

And _if_ this here is the route to go, I'd clearly want this and your
earlier patch folded into just one (dealing with both leaks in one
go).

Jan

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
> 
> !! Not compile tested !!
> 
>  drivers/block/xen-blkback/xenbus.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/xenbus.c 
> b/drivers/block/xen-blkback/xenbus.c
> index a6585a4..8650b19 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -502,7 +502,7 @@ static void backend_changed(struct xenbus_watch *watch,
>  		= container_of(watch, struct backend_info, backend_watch);
>  	struct xenbus_device *dev = be->dev;
>  	int cdrom = 0;
> -	char *device_type;
> +	char *mode, *device_type;
>  
>  	DPRINTK("");
>  
> @@ -528,13 +528,15 @@ static void backend_changed(struct xenbus_watch *watch,
>  		return;
>  	}
>  
> -	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
> -	if (IS_ERR(be->mode)) {
> -		err = PTR_ERR(be->mode);
> +	mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
> +	kfree(be->mode);
> +	if (IS_ERR(mode)) {
> +		err = PTR_ERR(mode);
>  		be->mode = NULL;
>  		xenbus_dev_fatal(dev, err, "reading mode");
>  		return;
>  	}
> +	be->mode = mode;
>  
>  	device_type = xenbus_read(XBT_NIL, dev->otherend, "device-type", NULL);
>  	if (!IS_ERR(device_type)) {
> @@ -555,7 +557,7 @@ static void backend_changed(struct xenbus_watch *watch,
>  		be->minor = minor;
>  
>  		err = xen_vbd_create(be->blkif, handle, major, minor,
> -				 (NULL == strchr(be->mode, 'w')), cdrom);
> +				 (NULL == strchr(mode, 'w')), cdrom);
>  		if (err) {
>  			be->major = 0;
>  			be->minor = 0;
> -- 
> 1.8.0.1




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:22:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgC7b-0006aU-CO; Wed, 05 Dec 2012 10:22:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgC7Z-0006aP-5s
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:22:29 +0000
Received: from [85.158.139.83:27831] by server-16.bemta-5.messagelabs.com id
	0A/49-21311-4602FB05; Wed, 05 Dec 2012 10:22:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1354702905!24180667!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17566 invoked from network); 5 Dec 2012 10:21:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 10:21:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 10:21:31 +0000
Message-Id: <50BF2E3802000078000AE162@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 10:21:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,<konrad.wilk@oracle.com>
References: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
 multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 11:01, Olaf Hering <olaf@aepfle.de> wrote:
> backend_changed might be called multiple times, which will leak
> be->mode. Free the previous value before storing the current mode value.

As said before, this is one possible route to take. But did you
consider at all the alternative of preventing the function from
getting called more than once for a given device? As also said
before, I don't think that would have other bad effects, and hence
it should be preferred (and would likely also result in a smaller
patch).

And _if_ this is the route to take, I'd clearly want this and your
earlier patch folded into just one (dealing with both leaks
in one go).

Jan

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
> 
> !! Not compile tested !!
> 
>  drivers/block/xen-blkback/xenbus.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/xenbus.c 
> b/drivers/block/xen-blkback/xenbus.c
> index a6585a4..8650b19 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -502,7 +502,7 @@ static void backend_changed(struct xenbus_watch *watch,
>  		= container_of(watch, struct backend_info, backend_watch);
>  	struct xenbus_device *dev = be->dev;
>  	int cdrom = 0;
> -	char *device_type;
> +	char *mode, *device_type;
>  
>  	DPRINTK("");
>  
> @@ -528,13 +528,15 @@ static void backend_changed(struct xenbus_watch *watch,
>  		return;
>  	}
>  
> -	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
> -	if (IS_ERR(be->mode)) {
> -		err = PTR_ERR(be->mode);
> +	mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
> +	kfree(be->mode);
> +	if (IS_ERR(mode)) {
> +		err = PTR_ERR(mode);
>  		be->mode = NULL;
>  		xenbus_dev_fatal(dev, err, "reading mode");
>  		return;
>  	}
> +	be->mode = mode;
>  
>  	device_type = xenbus_read(XBT_NIL, dev->otherend, "device-type", NULL);
>  	if (!IS_ERR(device_type)) {
> @@ -555,7 +557,7 @@ static void backend_changed(struct xenbus_watch *watch,
>  		be->minor = minor;
>  
>  		err = xen_vbd_create(be->blkif, handle, major, minor,
> -				 (NULL == strchr(be->mode, 'w')), cdrom);
> +				 (NULL == strchr(mode, 'w')), cdrom);
>  		if (err) {
>  			be->major = 0;
>  			be->minor = 0;
> -- 
> 1.8.0.1




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:26:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCBJ-0006j5-5g; Wed, 05 Dec 2012 10:26:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TgCBH-0006iz-Ry
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:26:20 +0000
Received: from [193.109.254.147:3389] by server-13.bemta-14.messagelabs.com id
	1C/41-11239-B412FB05; Wed, 05 Dec 2012 10:26:19 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1354702807!4270993!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19549 invoked from network); 5 Dec 2012 10:20:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:20:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16167354"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:20:07 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	10:20:07 +0000
Message-ID: <50BF1FD6.4010905@citrix.com>
Date: Wed, 5 Dec 2012 11:20:06 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com> <20121204200739.GA23149@panix.com>
	<20121205101551.GA2999@asim.lip6.fr>
In-Reply-To: <20121205101551.GA2999@asim.lip6.fr>
Cc: Thor Lancelot Simon <tls@panix.com>,
	"port-xen@netbsd.org" <port-xen@NetBSD.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 11:15, Manuel Bouyer wrote:
> On Tue, Dec 04, 2012 at 03:07:39PM -0500, Thor Lancelot Simon wrote:
>> On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
>>>
>>> Independently of what we end up doing as default for handling raw file
>>> disks, could someone review this code?
>>>
>>> It's the first time I've done a device, so someone with more experience
>>> should review it.
>>
>> I am not sure I entirely follow what this code's doing, but it seems to
>> me it may allow arbitrary physical pages to be exposed to userspace
>> processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.
>>
>> Is that a correct understanding of one of its effects?  If so, there's
>> a problem, since not being able to do precisely that is one important
>> assumption of the 4.4BSD security model.
> 
> If I read it properly, it only allows mapping pages that are part of a
> grant. You provide the ioctl with a grant reference, and this is what
> the driver uses to find the physical pages. So it should be limited to
> pages that are referenced by a grant.

Yes, it should be limited to granted pages; you are not able to map
arbitrary MFNs.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 05 10:27:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCC0-0006mZ-KK; Wed, 05 Dec 2012 10:27:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgCBy-0006mK-KH
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:27:02 +0000
Received: from [85.158.139.211:34553] by server-2.bemta-5.messagelabs.com id
	27/DA-04892-5712FB05; Wed, 05 Dec 2012 10:27:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1354703217!18708040!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32265 invoked from network); 5 Dec 2012 10:27:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 10:27:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 10:26:59 +0000
Message-Id: <50BF2F7E02000078000AE174@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 10:26:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <50BE5732.2050801@citrix.com>
In-Reply-To: <50BE5732.2050801@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Audit of NMI and MCE paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 21:04, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> As an alternative, I suggest that we make ASSERT()s, BUG()s and WARN()s
> NMI/MCE safe, from a printk spinlock point of view.
> 
> Either we can modify the macros to do a console_force_unlock(), which is
> fine for BUG() and ASSERT(), but problematic for WARN() (and deferring
> the printing to a tasklet won't work if we want a stack trace).
> Alternatively, we could change the console lock to be a recursive lock,
> at which point it is safe from the deadlock point of view.  Are there
> any performance concerns from changing to a recursive lock?

Not really, and the console lock isn't performance critical anyway.

> As for spinlocks themselves, as far as I can reason, recursive locks are
> safe to use, as are per-CPU spinlocks which are used exclusively in the
> NMI handler or the MCE handler (but not both), given the proviso that we
> have C-level reentrance protection for do_{nmi,mce}().
> 
> For the {rd,wr}msr()s, we can assume that the Xen code is good and is
> not going to fault on access to the MSR, but we certainly can't guarantee
> this.

{rd,wr}msr() are of no concern - if they fault it's exactly like a #PF
or #GP from a bad memory reference: a bug that will bring down the
hypervisor. Their _safe counterparts are what needs to be looked
for, as there the fault is being recovered from (and it's this recovery's
side effect of re-enabling NMIs that we don't want).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 05 10:27:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCCN-0006qP-28; Wed, 05 Dec 2012 10:27:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Manuel.Bouyer@lip6.fr>) id 1TgCCL-0006q7-E2
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:27:25 +0000
Received: from [85.158.143.35:42995] by server-3.bemta-4.messagelabs.com id
	19/ED-06841-C812FB05; Wed, 05 Dec 2012 10:27:24 +0000
X-Env-Sender: Manuel.Bouyer@lip6.fr
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354703059!16267643!1
X-Originating-IP: [132.227.60.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11162 invoked from network); 5 Dec 2012 10:24:20 -0000
Received: from isis.lip6.fr (HELO isis.lip6.fr) (132.227.60.2)
	by server-14.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 10:24:20 -0000
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
	by isis.lip6.fr (8.14.5/lip6) with ESMTP id qB5AOHmW021338
	; Wed, 5 Dec 2012 11:24:17 +0100 (CET)
X-pt: isis.lip6.fr
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
	by asim.lip6.fr (8.14.5/8.14.4) with ESMTP id qB5AOHiQ027200;
	Wed, 5 Dec 2012 11:24:17 +0100 (MET)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
	id 1738834B40; Wed,  5 Dec 2012 11:24:17 +0100 (MET)
Date: Wed, 5 Dec 2012 11:24:17 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20121205102417.GB2999@asim.lip6.fr>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7
	(isis.lip6.fr [132.227.60.2]);
	Wed, 05 Dec 2012 11:24:17 +0100 (CET)
X-Scanned-By: MIMEDefang 2.73 on 132.227.60.2
Cc: port-xen@NetBSD.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Nov 28, 2012 at 02:19:48PM +0100, Roger Pau Monne wrote:
> This is a basic (and experimental) gntdev implementation for NetBSD.
> 
> The gnt device allows usermode applications to map grant references in
> userspace. It is mainly used by Qemu to implement a Xen backend (that
> runs in userspace).
> 
> Due to the fact that qemu-upstream is not yet functional in NetBSD,
> the only way to try this gntdev is to use the old qemu
> (qemu-traditional).
> 
> Performance is not that bad (given that we are using qemu-traditional
> and running a backend in userspace), the throughput of write
> operations is 64.7 MB/s, while in the Dom0 it is 104.6 MB/s. Regarding
> read operations, the throughput inside the DomU is 76.0 MB/s, while on
> the Dom0 it is 108.8 MB/s.
> 
> Patches to libxc and libxl are also coming soon.
> 
> [...]
> +map_grant_ref(struct gntmap *map);

Please prefix all structs, global variables, and functions of the
driver with gnt.

> +		if (pmap_extract_ma(pmap_kernel(), k_va, &ma) == false) {
> +			debug("unable to extract kernel MA");
> +			return EFAULT;
> +		}

Can't you get the PA or MA directly from the grant reference?
Adding a kernel mapping only to run pmap_extract_ma() on it looks like an
inefficient way of doing it (this will require TLB shootdown IPIs).

Or maybe GNTTABOP_map_grant_ref could map directly into user space?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:27:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCCN-0006qP-28; Wed, 05 Dec 2012 10:27:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Manuel.Bouyer@lip6.fr>) id 1TgCCL-0006q7-E2
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:27:25 +0000
Received: from [85.158.143.35:42995] by server-3.bemta-4.messagelabs.com id
	19/ED-06841-C812FB05; Wed, 05 Dec 2012 10:27:24 +0000
X-Env-Sender: Manuel.Bouyer@lip6.fr
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354703059!16267643!1
X-Originating-IP: [132.227.60.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11162 invoked from network); 5 Dec 2012 10:24:20 -0000
Received: from isis.lip6.fr (HELO isis.lip6.fr) (132.227.60.2)
	by server-14.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 10:24:20 -0000
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
	by isis.lip6.fr (8.14.5/lip6) with ESMTP id qB5AOHmW021338
	; Wed, 5 Dec 2012 11:24:17 +0100 (CET)
X-pt: isis.lip6.fr
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
	by asim.lip6.fr (8.14.5/8.14.4) with ESMTP id qB5AOHiQ027200;
	Wed, 5 Dec 2012 11:24:17 +0100 (MET)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
	id 1738834B40; Wed,  5 Dec 2012 11:24:17 +0100 (MET)
Date: Wed, 5 Dec 2012 11:24:17 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20121205102417.GB2999@asim.lip6.fr>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7
	(isis.lip6.fr [132.227.60.2]);
	Wed, 05 Dec 2012 11:24:17 +0100 (CET)
X-Scanned-By: MIMEDefang 2.73 on 132.227.60.2
Cc: port-xen@NetBSD.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Nov 28, 2012 at 02:19:48PM +0100, Roger Pau Monne wrote:
> This is a basic (and experimental) gntdev implementation for NetBSD.
> 
> The gnt device allows usermode applications to map grant references in
> userspace. It is mainly used by Qemu to implement a Xen backend (that
> runs in userspace).
> 
> Due to the fact that qemu-upstream is not yet functional in NetBSD,
> the only way to try this gntdev is to use the old qemu
> (qemu-traditional).
> 
> Performance is not that bad (given that we are using qemu-traditional
> and running a backend in userspace), the throughput of write
> operations is 64.7 MB/s, while in the Dom0 it is 104.6 MB/s. Regarding
> read operations, the throughput inside the DomU is 76.0 MB/s, while on
> the Dom0 it is 108.8 MB/s.
> 
> Patches to libxc and libxl are also comming soon.
> 
> [...]
> +map_grant_ref(struct gntmap *map);

please, prefix all struct, global variables and functions of the
driver with gnt

> +		if (pmap_extract_ma(pmap_kernel(), k_va, &ma) == false) {
> +			debug("unable to extract kernel MA");
> +			return EFAULT;
> +		}

Can't you get the PA or MA directly from the grant reference?
Adding a kernel mapping only to run pmap_extract_ma() on it looks like an
inefficient way of doing it (this will require TLB shootdown IPIs).

Or maybe GNTTABOP_map_grant_ref could map directly into user space?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:45:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCTE-0007zE-KW; Wed, 05 Dec 2012 10:44:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgCTD-0007z9-0j
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:44:51 +0000
Received: from [85.158.139.211:62778] by server-15.bemta-5.messagelabs.com id
	9D/F6-26920-2A52FB05; Wed, 05 Dec 2012 10:44:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354704246!19152563!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6240 invoked from network); 5 Dec 2012 10:44:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:44:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46655586"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:44:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 05:44:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgCSS-0000GI-VI;
	Wed, 05 Dec 2012 10:44:04 +0000
Message-ID: <50BF2574.6080702@citrix.com>
Date: Wed, 5 Dec 2012 10:44:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Matt Wilson <msw@amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
In-Reply-To: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
X-Enigmail-Version: 1.4.6
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 06:02, Matt Wilson wrote:
> An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
> but still have the flexibility to change the configuration later.
> There's no logic that keys off of domain->is_pinned outside of
> sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
> is_pinned_vcpu() macro to only check for a single CPU set in the
> cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
> boots.

Sadly, this patch will break things.  There are certain callers of
is_pinned_vcpu() which rely on its value to gate access to certain
power-related MSRs, which is where the requirement never to permit
an update of the affinity mask comes from.

When I encountered this problem before, I considered implementing
dom0_vcpu_pin=dynamic (or a name to suit), which sets up an identity pin
at creation time but leaves is_pinned false.

~Andrew

>
> Signed-off-by: Matt Wilson <msw@amazon.com>
>
> diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
> --- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
> +++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
> @@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
>  
>  > Default: `false`
>  
> -Pin dom0 vcpus to their respective pcpus
> +Initially pin dom0 vcpus to their respective pcpus
>  
>  ### e820-mtrr-clip
>  > `= <boolean>`
> diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
> --- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
> +++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
> @@ -45,10 +45,6 @@
>  /* xen_processor_pmbits: xen control Cx, Px, ... */
>  unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
>  
> -/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
> -bool_t opt_dom0_vcpus_pin;
> -boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> -
>  /* Protect updates/reads (resp.) of domain_list and domain_hash. */
>  DEFINE_SPINLOCK(domlist_update_lock);
>  DEFINE_RCU_READ_LOCK(domlist_read_lock);
> @@ -235,7 +231,6 @@ struct domain *domain_create(
>  
>      if ( domid == 0 )
>      {
> -        d->is_pinned = opt_dom0_vcpus_pin;
>          d->disable_migrate = 1;
>      }
>  
> diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
> --- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
> +++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
> @@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
>   * */
>  int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
>  integer_param("sched_ratelimit_us", sched_ratelimit_us);
> +
> +/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
> +bool_t opt_dom0_vcpus_pin;
> +boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> +
>  /* Various timer handlers. */
>  static void s_timer_fn(void *unused);
>  static void vcpu_periodic_timer_fn(void *data);
> @@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
>       * domain-0 VCPUs, are pinned onto their respective physical CPUs.
>       */
>      v->processor = processor;
> -    if ( is_idle_domain(d) || d->is_pinned )
> +
> +    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
>          cpumask_copy(v->cpu_affinity, cpumask_of(processor));
>      else
>          cpumask_setall(v->cpu_affinity);
> @@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
>      cpumask_t online_affinity;
>      cpumask_t *online;
>  
> -    if ( v->domain->is_pinned )
> -        return -EINVAL;
>      online = VCPU2ONLINE(v);
>      cpumask_and(&online_affinity, affinity, online);
>      if ( cpumask_empty(&online_affinity) )
> diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
> +++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
> @@ -292,8 +292,6 @@ struct domain
>      enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
>      /* Domain is paused by controller software? */
>      bool_t           is_paused_by_controller;
> -    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
> -    bool_t           is_pinned;
>  
>      /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
>  #if MAX_VIRT_CPUS <= BITS_PER_LONG
> @@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
>  
>  #define is_hvm_domain(d) ((d)->is_hvm)
>  #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
> -#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> -                           cpumask_weight((v)->cpu_affinity) == 1)
> +#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
>  #ifdef HAS_PASSTHROUGH
>  #define need_iommu(d)    ((d)->need_iommu)
>  #else
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:48:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCW2-0008AJ-DB; Wed, 05 Dec 2012 10:47:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgCW1-0008A4-Ct
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:47:45 +0000
Received: from [85.158.138.51:62754] by server-10.bemta-3.messagelabs.com id
	5A/4D-19806-0562FB05; Wed, 05 Dec 2012 10:47:44 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354704462!19603985!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30820 invoked from network); 5 Dec 2012 10:47:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:47:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46655870"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:47:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 05:47:42 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgCVx-0000KF-KX;
	Wed, 05 Dec 2012 10:47:41 +0000
Message-ID: <50BF264D.9040804@citrix.com>
Date: Wed, 5 Dec 2012 10:47:41 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<f6ad86b61d5afd86026c.1354644970@andrewcoop.uk.xensource.com>
	<50BF202502000078000AE0E8@nat28.tlf.novell.com>
In-Reply-To: <50BF202502000078000AE0E8@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.6
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 4 RFC] x86/nmi: Prevent reentrant
 execution of the C nmi handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 09:21, Jan Beulich wrote:
>>>> On 04.12.12 at 19:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> The (old) function do_nmi() is not reentrant-safe.  Rename it to
>> _do_nmi() and introduce a new do_nmi() which guards against reentrancy.
>>
>> If a reentrant NMI has been detected, then it is highly likely that the
>> outer NMI exception frame has been corrupted, meaning we cannot return
>> to the original context.  In this case, we panic() in an obvious
>> manner rather than falling into an infinite loop.
>>
>> panic() however is not safe to reenter from an NMI context, as an NMI
>> (or MCE) can interrupt it inside its critical section, at which point a
>> new call to panic() will deadlock.  As a result, we bail early if a
>> panic() is already in progress, as Xen is about to die anyway.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> --
>> I am fairly sure this is safe with the current kexec_crash functionality
>> which involves holding all non-crashing pcpus in an NMI loop.  In the
>> case of reentrant NMIs and panic_in_progress, we will repeatedly bail
>> early in an infinite loop of NMIs, which has the same intended effect of
>> simply causing all non-crashing CPUs to stay out of the way while the
>> main crash occurs.
>>
>> diff -r 48a60a407e15 -r f6ad86b61d5a xen/arch/x86/traps.c
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -88,6 +88,7 @@ static char __read_mostly opt_nmi[10] = 
>>  string_param("nmi", opt_nmi);
>>  
>>  DEFINE_PER_CPU(u64, efer);
>> +static DEFINE_PER_CPU(bool_t, nmi_in_progress) = 0;
>>  
>>  DEFINE_PER_CPU_READ_MOSTLY(u32, ler_msr);
>>  
>> @@ -3182,7 +3183,8 @@ static int dummy_nmi_callback(struct cpu
>>   
>>  static nmi_callback_t nmi_callback = dummy_nmi_callback;
>>  
>> -void do_nmi(struct cpu_user_regs *regs)
>> +/* This function should never be called directly.  Use do_nmi() instead. */
>> +static void _do_nmi(struct cpu_user_regs *regs)
>>  {
>>      unsigned int cpu = smp_processor_id();
>>      unsigned char reason;
>> @@ -3208,6 +3210,44 @@ void do_nmi(struct cpu_user_regs *regs)
>>      }
>>  }
>>  
>> +/* This function is NOT SAFE to call from C code in general.
>> + * Use with extreme care! */
>> +void do_nmi(struct cpu_user_regs *regs)
>> +{
>> +    bool_t * in_progress = &this_cpu(nmi_in_progress);
>> +
>> +    if ( is_panic_in_progress() )
>> +    {
>> +        /* A panic is already in progress.  It may have reenabled NMIs,
>> +         * or we are simply unlucky to receive one right now.  Either
>> +         * way, bail early, as Xen is about to die.
>> +         *
>> +         * TODO: Ideally we should exit without executing an iret, to
>> +         * leave NMIs disabled, but that option is not currently
>> +         * available to us.
> You could easily provide the ground work for this here by having
> the function return a bool_t (even if not immediately consumed by
> the caller in this same patch).
>
> Jan

Will do.  I had considered a bool_t and was planning to integrate it
later in development.

~Andrew

>
>> +         */
>> +        return;
>> +    }
>> +
>> +    if ( test_and_set_bool(*in_progress) )
>> +    {
>> +        /* Crash in an obvious manner, as opposed to falling into an
>> +         * infinite loop because our exception frame corrupted the
>> +         * exception frame of the previous NMI.
>> +         *
>> +         * TODO: This check does not cover all possible cases of corrupt
>> +         * exception frames, but it is substantially better than
>> +         * nothing.
>> +         */
>> +        console_force_unlock();
>> +        show_execution_state(regs);
>> +        panic("Reentrant NMI detected\n");
>> +    }
>> +
>> +    _do_nmi(regs);
>> +    *in_progress = 0;
>> +}
>> +
>>  void set_nmi_callback(nmi_callback_t callback)
>>  {
>>      nmi_callback = callback;
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
>
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:58:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:58:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCfj-0008Sl-M1; Wed, 05 Dec 2012 10:57:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgCfi-0008Se-9Q
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:57:46 +0000
Received: from [85.158.143.35:55195] by server-3.bemta-4.messagelabs.com id
	40/61-06841-9A82FB05; Wed, 05 Dec 2012 10:57:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354704915!5211760!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7937 invoked from network); 5 Dec 2012 10:55:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:55:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16168427"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 10:55:15 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	10:55:15 +0000
Message-ID: <1354704913.15296.140.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ben Guthro <ben@guthro.net>
Date: Wed, 5 Dec 2012 10:55:13 +0000
In-Reply-To: <CAOvdn6VYCFcfXS67SutP4Mi131Jva1kj53cz+aLLCxzcXkEghA@mail.gmail.com>
References: <CAOvdn6VYCFcfXS67SutP4Mi131Jva1kj53cz+aLLCxzcXkEghA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl: stable-4.2 git tree dirty after a build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 20:35 +0000, Ben Guthro wrote:


> Unstable does not leave these files modified, as far as I can see -
> Though a simple diff of the 2 libxl dirs does not make it immediately
> obvious what might be modifying these files.

The makefile is supposed to not rerun flex/bison unless something has
actually changed in the source files.

I suspect this has something to do with the timestamps mercurial/git
produce. As far as I can tell both create files with the time of the
clone/update rather than the time of the commit, which commonly results
in the inputs and outputs having the same timestamp.

A common problem with keeping generated files in the VCS is that the
timestamps are not preserved and make gets confused into thinking
something needs updating when it doesn't. If you happen to have a
different version of the tool installed then this will likely result in
changes to the generated files.

A simple local workaround would be to touch the output files after
checkout.
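
For illustration (file names are made-up stand-ins for the flex/bison
outputs, and GNU touch is assumed for -d):

```shell
# Simulate a fresh checkout where input and output share one mtime,
# then apply the workaround: touch the generated file so it is newer.
touch -d '2012-12-01 00:00' parser.l parser.c   # same timestamp, as after a clone
touch parser.c                                  # the workaround
[ parser.c -nt parser.l ] && echo "make will leave parser.c alone"
```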

Solving it properly seems like it would be harder. One idea would be a
helper (e.g. a call_if_changed make macro) which stashes a checksum of
the input file somewhere (perhaps the last line of the output file) and
does
	if checksum($input) == tail -n 1 $output:
		touch $output
	else
		regenerate $input
		checksum($input) >> $output
	endif
(Needs to inject/strip suitable comment characters for the language used
in $output)

Nasty.
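
As a rough sketch, the helper could look like this as a shell function
(assuming '#' is a valid comment leader in the output language and
sha1sum is available; 'generate' is a placeholder for the real
flex/bison step):

```shell
#!/bin/sh
# Stash a checksum of the input on the last line of the output and only
# regenerate when it changes; otherwise just freshen the mtime so make
# stops considering the output stale.
generate() { cat "$1"; }    # placeholder for the real generator

call_if_changed() {
    input=$1; output=$2
    sum="# checksum: $(sha1sum "$input" | awk '{print $1}')"
    if [ -f "$output" ] && [ "$(tail -n 1 "$output")" = "$sum" ]; then
        touch "$output"                 # input unchanged: defeat the stale mtime
    else
        generate "$input" > "$output"   # input really changed: regenerate
        echo "$sum" >> "$output"        # and stash the new checksum
    fi
}
```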

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 10:58:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 10:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCfs-0008TL-3J; Wed, 05 Dec 2012 10:57:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TgCfp-0008T1-RO
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 10:57:54 +0000
Received: from [85.158.138.51:20863] by server-4.bemta-3.messagelabs.com id
	EE/B2-30023-1B82FB05; Wed, 05 Dec 2012 10:57:53 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354705071!27459752!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23019 invoked from network); 5 Dec 2012 10:57:52 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 10:57:52 -0000
Received: by mail-qa0-f52.google.com with SMTP id d13so1663720qak.11
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 02:57:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=gWnRIeSFf9ILNUWL1v1FEyoEUN/NojOnLr9WbqQoSqI=;
	b=oTU9EkNPXYSqmv933uqjHPvMuz2ToXaKq07wdbqku+1p2GjKK/ZSISjUy6thOI4ak9
	UTxvW2wQZULW9wfirBA5FONJHTX4ctW3TtQKrM9m1xLC+vgqK48dvFPvCyDH9Iji5jEv
	eY29EqNz2j7dLR2w2g6VyMsFzYJwuNVjhGe9F5hMoB8xKAMQl9aSp4BrZ8yONOvALTuU
	gEwrvBp+rWTJs4umCttrh4I3cJ2hiwBxvfv5QNuFv2eJCgYpRXWO6DVzjcBf04bJdTTq
	zhUoiGcmiKNDoOc0hT3/DiJXeTcfoPp1Syx7CTBokqzWEufdc22OcRZdi9S1PKGcQX9Z
	FFfQ==
MIME-Version: 1.0
Received: by 10.49.127.101 with SMTP id nf5mr9262504qeb.20.1354705070707; Wed,
	05 Dec 2012 02:57:50 -0800 (PST)
Received: by 10.49.51.169 with HTTP; Wed, 5 Dec 2012 02:57:50 -0800 (PST)
In-Reply-To: <1354642903-4545-1-git-send-email-ian.campbell@citrix.com>
References: <1354642903-4545-1-git-send-email-ian.campbell@citrix.com>
Date: Wed, 5 Dec 2012 10:57:50 +0000
X-Google-Sender-Auth: 3rpnSHGVihYZjTUEgDQ-9zmkabg
Message-ID: <CAFLBxZbOrvpkEQb9sexsrzoZTCH7y25EHvw_jf-7DgFwy5CwVQ@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Ian Campbell <ian.campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] tools: install under /usr/local by
	default.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9033688198885381396=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9033688198885381396==
Content-Type: multipart/alternative; boundary=047d7b6dc0bc83d5df04d018da8f

--047d7b6dc0bc83d5df04d018da8f
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 4, 2012 at 5:41 PM, Ian Campbell <ian.campbell@citrix.com> wrote:

> This is the de facto (or FHS mandated?) standard location for software
> built from source, in order to avoid clashing with packaged software
> which is installed under /usr/bin etc.
>

So if someone just does "./configure && make deb", the .deb will install
stuff in /usr/local.  Is that what we want?

 -George

--047d7b6dc0bc83d5df04d018da8f
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 4, 2012 at 5:41 PM, Ian Campbell <span dir=3D"ltr">&lt;<a href=
=3D"mailto:ian.campbell@citrix.com" target=3D"_blank">ian.campbell@citrix.c=
om</a>&gt;</span> wrote:<br><div class=3D"gmail_extra"><div class=3D"gmail_=
quote"><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-=
left:1px #ccc solid;padding-left:1ex">
This is the defacto (or FHS mandated?) standard location for software<br>
built from source, in order to avoid clashing with packaged software<br>
which is installed under /usr/bin etc.<br></blockquote><div><br>So if someo=
ne just does &quot;./configure &amp;&amp; make deb&quot;, the .deb will ins=
tall stuff in /usr/local.=A0 Is that what we want?<br><br>=A0-George<br></d=
iv>
</div></div>

--047d7b6dc0bc83d5df04d018da8f--


--===============9033688198885381396==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9033688198885381396==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 11:02:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCkK-0000Qy-0v; Wed, 05 Dec 2012 11:02:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgCkI-0000Qq-Re
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:02:30 +0000
Received: from [85.158.137.99:60504] by server-3.bemta-3.messagelabs.com id
	98/91-31566-4C92FB05; Wed, 05 Dec 2012 11:02:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354705347!17699784!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4922 invoked from network); 5 Dec 2012 11:02:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:02:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16168735"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 11:02:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	11:02:26 +0000
Message-ID: <1354705345.15296.144.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <dunlapg@umich.edu>
Date: Wed, 5 Dec 2012 11:02:25 +0000
In-Reply-To: <CAFLBxZbOrvpkEQb9sexsrzoZTCH7y25EHvw_jf-7DgFwy5CwVQ@mail.gmail.com>
References: <1354642903-4545-1-git-send-email-ian.campbell@citrix.com>
	<CAFLBxZbOrvpkEQb9sexsrzoZTCH7y25EHvw_jf-7DgFwy5CwVQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] tools: install under /usr/local by
 default.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 10:57 +0000, George Dunlap wrote:
> On Tue, Dec 4, 2012 at 5:41 PM, Ian Campbell <ian.campbell@citrix.com>
> wrote:
>         This is the de facto (or FHS mandated?) standard location for
>         software
>         built from source, in order to avoid clashing with packaged
>         software
>         which is installed under /usr/bin etc.
> 
> So if someone just does "./configure && make deb", the .deb will
> install stuff in /usr/local.  Is that what we want?

Given that the purpose of the deb target is not to produce a
policy-compliant deb but simply something which can be uninstalled
easily, I think it is acceptable, if a little unusual.

I see the .deb as a special case of the "local admin building from
source" case rather than a special case of the "Debian package" case.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:05:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:05:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCmZ-0000e4-JH; Wed, 05 Dec 2012 11:04:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgCmY-0000dw-Cd
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:04:50 +0000
Received: from [85.158.139.83:45323] by server-16.bemta-5.messagelabs.com id
	26/60-21311-15A2FB05; Wed, 05 Dec 2012 11:04:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354705441!21242425!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14198 invoked from network); 5 Dec 2012 11:04:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:04:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16168790"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 11:03:28 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	11:03:28 +0000
Message-ID: <1354705407.15296.145.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 11:03:27 +0000
In-Reply-To: <1354644571-3202-7-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-7-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] RFC: using autoconf for stubdom + docs subtrees (Was:
 Re: [VTPM v6 7/8] Add a top level configure script)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I just wanted to highlight this patch outside of the vtpm umbrella since
it is quite a significant change and I worry some people might not
notice it because they aren't following the vtpm stuff.

The proposal here is to add a configure.ac script for the stubdom
subtree and a small toplevel configure.ac which simply calls down to the
configure scripts in the stubdom and tools subtrees.

There are plans to also add docs/configure.ac and tie it in similarly.

There are no plans to change anything with respect to the xen subtree (i.e. the
hypervisor itself).
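
For reference, a toplevel configure.ac that only descends into subtrees
can be very small; something along these lines (package name, version
and subdir list are illustrative, not the actual patch):

```
AC_INIT([Xen Hypervisor], [4.3], [xen-devel@lists.xen.org])
AC_CONFIG_SUBDIRS([tools stubdom])
AC_OUTPUT
```

AC_CONFIG_SUBDIRS runs each listed directory's own configure script at
configure time, which is what lets the subtrees keep their independent
configure.ac files.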

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:05:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:05:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCmZ-0000e4-JH; Wed, 05 Dec 2012 11:04:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgCmY-0000dw-Cd
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:04:50 +0000
Received: from [85.158.139.83:45323] by server-16.bemta-5.messagelabs.com id
	26/60-21311-15A2FB05; Wed, 05 Dec 2012 11:04:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354705441!21242425!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14198 invoked from network); 5 Dec 2012 11:04:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:04:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16168790"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 11:03:28 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	11:03:28 +0000
Message-ID: <1354705407.15296.145.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 11:03:27 +0000
In-Reply-To: <1354644571-3202-7-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-7-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] RFC: using autoconf for stubdom + docs subtrees (Was:
 Re: [VTPM v6 7/8] Add a top level configure script)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I just wanted to highlight this patch outside of the vtpm umbrella since
it is quite a significant change and I worry some people might not
notice it because they aren't following the vtpm stuff.

The proposal here is to add a configure.ac script for the stubdom
subtree and a small toplevel configure.ac which simply calls down to the
configure scripts in the stubdom and tools subtrees.

There are plans to also add docs/configure.ac and tie it in similarly.

There are no plans to change anything wrt the xen subtree (i.e. the
hypervisor itself).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:07:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:07:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgCok-0000ro-4E; Wed, 05 Dec 2012 11:07:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TgCoj-0000re-B9
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:07:05 +0000
Received: from [85.158.143.35:14938] by server-3.bemta-4.messagelabs.com id
	48/32-06841-7DA2FB05; Wed, 05 Dec 2012 11:07:03 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354705532!11711328!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6873 invoked from network); 5 Dec 2012 11:05:34 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:05:34 -0000
Received: by mail-qa0-f45.google.com with SMTP id j15so1997960qaq.11
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 03:05:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=VhlknFO25ZdrFMIFTvQtKnbAonidbi+fILxIaE3Thz0=;
	b=uGtgt2hr/pt63JQs2Me4CjJjItdZTFazV+bOrkzY8KhrGLVNpb3ZwvZ8KysFEywat5
	M0qJPNA9OB/9NvQeX8KVrZ4RQqcSp/CXud9JVXZ2fZef6ySxBHe1Mv5URPd417EZrt4A
	9brt0kyBPn8fBHn8l1H2Rk5pDnwLvUbolfboJs8lr+/VrsbdzEDDCau8CGnbCehZXk9+
	2dmy6ZG+xHGRIx+7JV+oW+gqp195AMbamlEWc0itw5bf0ZIK4v/8o5pOtqZs+Swew1uf
	14VHIeeQ1/XB3GPuETdrYp7BwGYjQ1heOClL3lw5izP8m633tCv0tgfOg/LPFer1rVJH
	RUdw==
MIME-Version: 1.0
Received: by 10.49.127.101 with SMTP id nf5mr9308840qeb.20.1354705532471; Wed,
	05 Dec 2012 03:05:32 -0800 (PST)
Received: by 10.49.51.169 with HTTP; Wed, 5 Dec 2012 03:05:32 -0800 (PST)
In-Reply-To: <1141524074.6.1354688480672.JavaMail.root@bj-mail05.pku.edu.cn>
References: <1141524074.6.1354688480672.JavaMail.root@bj-mail05.pku.edu.cn>
Date: Wed, 5 Dec 2012 11:05:32 +0000
X-Google-Sender-Auth: iRsxVgPSaC5DWCqLX13Sz8NeqO8
Message-ID: <CAFLBxZa+Cq0Zzbg69Zr-o6fo4g+P+h5mb+3eCgFOWQoxx_J9sw@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Yanzhang Li <liyz@pku.edu.cn>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Suggestion: Improve hypercall Interface to get real
 return value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7347799775753987786=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7347799775753987786==
Content-Type: multipart/alternative; boundary=047d7b6dc0bc09c8e504d018f6b3

--047d7b6dc0bc09c8e504d018f6b3
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Dec 5, 2012 at 6:21 AM, Yanzhang Li <liyz@pku.edu.cn> wrote:

>
>
> Do you think this would be a good modification?
> Also, I am curious why the original design didn't do that. Is it a bug or
> is it designed that way intentionally?
> Any suggestions and comments will be highly appreciated.
>

The reason we just return -1 is because that is the standard practice for
Unix system libraries: to return -1 but set the error value in "errno".  I
couldn't tell you why Unix does this, but there's an advantage to following
standard interfaces, because it reduces the surprise factor, and reduces
the amount of information programmers need to keep in their head.

 -George

--047d7b6dc0bc09c8e504d018f6b3
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 5, 2012 at 6:21 AM, Yanzhang Li <span dir=3D"ltr">&lt;<a href=
=3D"mailto:liyz@pku.edu.cn" target=3D"_blank">liyz@pku.edu.cn</a>&gt;</span=
> wrote:<br><div class=3D"gmail_extra"><div class=3D"gmail_quote"><blockquo=
te class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc so=
lid;padding-left:1ex">
<br>
<br>
Do you think this would be a good modification?<br>
Also, I am curious why the original design didn&#39;t do that. Is it a bug =
or is it designed that way intentionally?<br>
Any suggestions and comments will be highly appreciated.<br></blockquote><d=
iv><br>The reason we just return -1 is because that is the standard practic=
e for Unix system libraries: to return -1 but set the error value in &quot;=
errno&quot;.=A0 I couldn&#39;t tell you why Unix does this, but there&#39;s=
 an advantage to following standard interfaces, because it reduces the surp=
rise factor, and reduces the amount of information programmers need to keep=
 in their head.<br>
</div></div><br>=A0-George<br></div>

--047d7b6dc0bc09c8e504d018f6b3--


--===============7347799775753987786==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7347799775753987786==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 11:22:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:22:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgD3Z-0001JH-Lf; Wed, 05 Dec 2012 11:22:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgD3Y-0001JC-6u
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 11:22:24 +0000
Received: from [85.158.138.51:8647] by server-11.bemta-3.messagelabs.com id
	40/36-19361-F6E2FB05; Wed, 05 Dec 2012 11:22:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354706539!27583151!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15306 invoked from network); 5 Dec 2012 11:22:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:22:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16169423"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 11:22:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	11:22:19 +0000
Message-ID: <1354706537.15296.149.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 5 Dec 2012 11:22:17 +0000
In-Reply-To: <fa37bd276212a01e3c89.1354292657@elijah>
References: <patchbomb.1354292655@elijah>
	<fa37bd276212a01e3c89.1354292657@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 2 of 4 v2] xl: Check for duplicate
 vncdisplay options, and return an error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(seems that I forgot to hit send on this yesterday) 

On Fri, 2012-11-30 at 16:24 +0000, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354291984 0
> # Node ID fa37bd276212a01e3c898b54a7f2385454c406a7
> # Parent  722da032ac90c0e1a78b1154fa588bf295d1f009
> xl: Check for duplicate vncdisplay options, and return an error
> 
> If the user has set a vnc display number both in vnclisten (with
> "xxxx:yy"), and with vncdisplay, throw an error.
> 
> Update man pages to match.
> 
> Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

Kind of nasty that we are stuck with both at the libxl interface, but oh
well, too late now...

Acked + applied, thanks.

> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -350,11 +350,17 @@ other VNC-related settings.  The default
>  
>  Specifies the IP address, and optionally VNC display number, to use.
>  
> +NB that if you specify the display number here, you should not use
> +vncdisplay.
> +
>  =item C<vncdisplay=DISPLAYNUM>
>  
>  Specifies the VNC display number to use.  The actual TCP port number
>  will be DISPLAYNUM+5900.
>  
> +NB that you should not use this option if you set the displaynum in the 
> +vnclisten string.
> +
>  =item C<vncunused=BOOLEAN>
>  
>  Requests that the VNC display setup search for a free TCP port to use.
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1257,6 +1257,7 @@ skip_nic:
>                      vfb->sdl.xauthority = strdup(p2 + 1);
>                  }
>              } while ((p = strtok(NULL, ",")) != NULL);
> +
>  skip_vfb:
>              free(buf2);
>              d_config->num_vfbs++;
> @@ -1490,6 +1491,16 @@ skip_vfb:
>          xlu_cfg_replace_string (config, "soundhw", &b_info->u.hvm.soundhw, 0);
>          xlu_cfg_get_defbool(config, "xen_platform_pci",
>                              &b_info->u.hvm.xen_platform_pci, 0);
> +
> +        if(b_info->u.hvm.vnc.listen
> +           && b_info->u.hvm.vnc.display
> +           && strchr(b_info->u.hvm.vnc.listen, ':') != NULL) {
> +            fprintf(stderr,
> +                    "ERROR: Display specified both in vnclisten"
> +                    " and vncdisplay!\n");
> +            exit (1);
> +            
> +        }
>      }
>  
>      xlu_cfg_destroy(config);
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:51:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDVZ-00028f-Vz; Wed, 05 Dec 2012 11:51:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgDVY-00028M-CO
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:51:20 +0000
Received: from [85.158.139.211:41969] by server-15.bemta-5.messagelabs.com id
	3F/6D-26920-7353FB05; Wed, 05 Dec 2012 11:51:19 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1354708277!18305423!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19300 invoked from network); 5 Dec 2012 11:51:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:51:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46660788"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 11:51:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 06:51:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgDVU-0001LH-BI;
	Wed, 05 Dec 2012 11:51:16 +0000
Date: Wed, 5 Dec 2012 11:51:13 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <28369388.20121205000539@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<28369388.20121205000539@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
> Tuesday, December 4, 2012, 12:01:05 PM, you wrote:
> 
> > On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
> >> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
> >> >>  I had a quick look, and it doesn't look that hard to backport that patch.
> >> > 
> >> > Thanks, Mat.
> >> > I'm glad to report that the patch do fix my problem.
> >> > 
> >> > And yes it is really easy to port since the code did not change across the
> >> > two releases.
> >> > The only change would be line numbers (3841 vs 3803) and one extra comments
> >> > before this line:
> >> > 
> >> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> >> > 
> >> > I'm not sure if you are going to release another maintenance version that
> >> > include this patch,
> >> > but I'll report this to the Debian maintainer since it's about to freeze for
> >> > v7.0 release and v4.2.0 will not make it.
> >> 
> >> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
> >> out?
> >> 
> 
> > It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
> > so I'd say it should be a candidate for Xen 4.1.4.
> 
> Just tried switching the device model to "qemu-xen", seems this one isn't upstream either.
> (XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3
> 
> Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
> 

The patch is supposed to fix a bug affecting msi_translate, but
upstream QEMU doesn't implement msi_translate at all! So in this
case the cause of the bug (and, as a consequence, the fix) must be a
different one.



> Sander
> 
> > -- Pasi
> 
> >> Jan
> >> 
> >> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
> >> > <mats.petersson@citrix.com> wrote:
> >> > 
> >> >> On 03/12/12 13:19, Mats Petersson wrote:
> >> >>
> >> >>> On 03/12/12 13:14, G.R. wrote:
> >> >>>
> >> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
> >> >>>> <mats.petersson@citrix.com>
> >> >>>> wrote:
> >> >>>>
> >> >>>>      On 03/12/12 03:47, G.R. wrote:
> >> >>>>
> >> >>>>          Hi developers,
> >> >>>>          I met some domU issues and the log suggests missing interrupt.
> >> >>>>          Details from here:
> >> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
> >> >>>>          In summary, this is the suspicious log:
> >> >>>>
> >> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
> >> >>>>
> >> >>>>          I've checked the code in question and found that mode 3 is a
> >> >>>>          'reserved_1' mode.
> >> >>>>          I want to trace down the source of this mode setting to
> >> >>>>          root-cause the issue.
> >> >>>>          But I'm not a Xen developer, and am even a newbie as a Xen
> >> >>>> user.
> >> >>>>          Could anybody give me instructions about how to enable
> >> >>>>          detailed debug log?
> >> >>>>          It could be better if I can get advice about experiments to
> >> >>>>          perform / switches to try out etc.
> >> >>>>
> >> >>>>          My SW config:
> >> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> >> >>>>          domU: Debian wheezy 3.2.x stock kernel.
> >> >>>>
> >> >>>>          Thanks,
> >> >>>>          Timothy
> >> >>>>
> >> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
> >> >>>>      What are the exact messages in the DomU?
> >> >>>>
> >> >>>>
> >> >>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
> >> >>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
> >> >>>> enabled.
> >> >>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
> >> >>>> did not see such msi related error message.
> >> >>>> Actually, with that domU I do not see anything obviously wrong in the
> >> >>>> log, but I also see nothing on the panel (the panel receives no signal
> >> >>>> and goes into power-saving) :-(
> >> >>>>
> >> >>>>
> >> >>>> Back to the issue I was reporting, the domU log looks like this:
> >> >>>>
> >> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
> >> >>>> [drm:i915_hangcheck_ring_idle]
> >> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
> >> >>>> 3354], missed IRQ?
> >> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
> >> >>>> [drm:i915_hangcheck_ring_idle]
> >> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
> >> >>>> 11297], missed IRQ?
> >> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
> >> >>>> timeout, switching to polling mode: last cmd=0x000f0000
> >> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
> >> >>>> codec, disabling MSI: last cmd=0x002f0600
> >> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
> >> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
> >> >>>>
> >> >>>>
> >> >>>> Thanks,
> >> >>>> Timothy
> >> >>>>
> >> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
> >> >>> fixes this. I'm not fully clued up on what the policy for backporting
> >> >>> fixes is, and I haven't looked at the complexity of the fix itself, but
> >> >>> either updating to 4.2.0 or a (personal) backport sounds like the
> >> >>> right solution here.
> >> >>>
> >> >> I had a quick look, and it doesn't look that hard to backport that patch.
> >> >>
> >> >> --
> >> >> Mats
> >> >>
> >> >>
> >> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
> >> >>> to your original email.
> >> >>>
> >> >>> --
> >> >>> Mats
> >> >>>
> >> >>> _______________________________________________
> >> >>> Xen-devel mailing list
> >> >>> Xen-devel@lists.xen.org 
> >> >>> http://lists.xen.org/xen-devel 
> >> >>>
> >> >>>
> >> >>>
> >> >>
> >> >> _______________________________________________
> >> >> Xen-devel mailing list
> >> >> Xen-devel@lists.xen.org 
> >> >> http://lists.xen.org/xen-devel 
> >> >>
> >> 
> >> 
> >> 
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> http://lists.xen.org/xen-devel
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDZB-0002SX-9b; Wed, 05 Dec 2012 11:55:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TgDZ9-0002SM-Eh
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:55:03 +0000
Received: from [193.109.254.147:32235] by server-2.bemta-14.messagelabs.com id
	32/E5-20829-6163FB05; Wed, 05 Dec 2012 11:55:02 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354708500!8765077!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10599 invoked from network); 5 Dec 2012 11:55:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 11:55:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216460503"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	05 Dec 2012 11:55:00 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1; Wed, 5 Dec 2012
	06:54:59 -0500
Message-ID: <50BF3612.9020606@citrix.com>
Date: Wed, 5 Dec 2012 11:54:58 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <1141524074.6.1354688480672.JavaMail.root@bj-mail05.pku.edu.cn>
	<CAFLBxZa+Cq0Zzbg69Zr-o6fo4g+P+h5mb+3eCgFOWQoxx_J9sw@mail.gmail.com>
In-Reply-To: <CAFLBxZa+Cq0Zzbg69Zr-o6fo4g+P+h5mb+3eCgFOWQoxx_J9sw@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] Suggestion: Improve hypercall Interface to get real
 return value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 11:05, George Dunlap wrote:
> On Wed, Dec 5, 2012 at 6:21 AM, Yanzhang Li <liyz@pku.edu.cn> wrote:
>
>
>
>     Do you think this would be a good modification?
>     Also, I am curious why the original design didn't do that. Is it a
>     bug or is it designed that way intentionally?
>     Any suggestions and comments will be highly appreciated.
>
>
> The reason we just return -1 is that it is the standard practice 
> for Unix system libraries: return -1 but set the error value in 
> "errno".  I couldn't tell you why Unix does this, but there's an 
> advantage to following standard interfaces: it reduces the 
> surprise factor and the amount of information programmers 
> need to keep in their heads.
I think returning -1 instead of "the error" allows for simpler code when 
you do something like this:
int func(void)
{
    FILE *f = fopen(...);
    if (!f) return -1;

    while (...)
    {
        /* fread()/fwrite() take the stream last and return a size_t
           item count, so check for a short read/write rather than
           for a negative value */
        if (fread(buf, size, nmemb, f) < nmemb)
        {
            fclose(f);
            return -1;
        }
        ...
        if (...)
        {
            if (fwrite(buf, size, nmemb, f) < nmemb)
            {
                fclose(f);
                return -1;
            }
        }
    }
    fclose(f);
    return 0;
}

Now, we don't need extra code to "remember the errno from the failing 
function". [And I'm assuming here that fclose isn't "interfering" with 
the errno - if you REALLY need to know for sure what the errno was at 
fread or fwrite, you still need to "remember errno".]

(Note that some functions do not return -1 for failure as in the above 
code but, for example, NULL, and some functions would not be able to 
return -errno, as that may well be a "valid" return value - so keeping 
the interfaces as alike as possible is a good idea.)
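
A minimal, self-contained illustration of the "remember errno" caveat above (the helper name copy_prefix and the paths in the checks are mine, not from the thread):

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical example: return -1 on failure and leave the detail in
 * errno, saving it across fclose(), which may overwrite it. */
int copy_prefix(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;                /* errno already set by fopen() */

    int saved_errno = 0;
    size_t n = fread(buf, 1, len, f);
    if (n < len && ferror(f))
        saved_errno = errno;      /* remember before fclose() runs */

    fclose(f);
    if (saved_errno) {
        errno = saved_errno;      /* restore for the caller */
        return -1;
    }
    return (int)n;                /* bytes actually read */
}
```

This follows the Unix convention George describes: the caller only tests for -1 and consults errno when it needs the detail.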

--
Mats
>
>  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:56:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:56:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDZx-0002Xk-28; Wed, 05 Dec 2012 11:55:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1TgDZv-0002XW-CM
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 11:55:51 +0000
Received: from [85.158.137.99:22579] by server-5.bemta-3.messagelabs.com id
	42/24-26311-6463FB05; Wed, 05 Dec 2012 11:55:50 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354708473!18032275!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17367 invoked from network); 5 Dec 2012 11:54:34 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-13.tower-217.messagelabs.com with SMTP;
	5 Dec 2012 11:54:34 -0000
Received: from [62.94.143.201] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 76759398; Wed, 05 Dec 2012 12:54:44 +0100
Message-ID: <1354708472.21632.21.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 05 Dec 2012 12:54:32 +0100
In-Reply-To: <50BE4AC2.9010507@eu.citrix.com>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>
	<50BE4AC2.9010507@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8566342031109079452=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8566342031109079452==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-WUZgaHDN8BN8NoE8N1ki"


--=-WUZgaHDN8BN8NoE8N1ki
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2012-12-04 at 19:10 +0000, George Dunlap wrote:
> On 03/12/12 16:35, Dario Faggioli wrote:
> > +            /* Avoid TRACE_* to avoid a lot of useless !tb_init_done checks */
> > +            for_each_cpu(cpu, &mask)
> > +            {
> > +                struct {
> > +                    unsigned cpu:8;
> > +                } d;
> > +                d.cpu = cpu;
> > +                trace_var(TRC_CSCHED_TICKLE, 0,
> > +                          sizeof(d),
> > +                          (unsigned char*)&d);
>
> Why not just TRC_1D()?
>
As I tried to explain in the comment, I just wanted to avoid checking
for !tb_init_done more than once, since this happens within a loop and,
at least potentially, there may be more CPUs to tickle (and thus more
calls to TRACE_1D). I take this comment of yours to mean you don't
think that is worthwhile, right? If so, I can definitely turn this into
a "standard" TRACE_1D() call.
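
The trade-off being discussed can be modeled outside Xen; in this sketch tb_init_done, trace_var_stub and both loops are purely illustrative stubs that count how often the flag is tested:

```c
#include <stdbool.h>

/* Stub tracing state; the names mirror the thread, the logic is
 * illustrative only. */
static bool tb_init_done = false;
static int trace_calls = 0;   /* trace records emitted */
static int flag_checks = 0;   /* times tb_init_done was read */

static void trace_var_stub(unsigned int event, unsigned int cpu)
{
    (void)event;
    (void)cpu;
    trace_calls++;
}

/* What a TRACE_1D-style macro would do: test the flag per iteration. */
static void tickle_trace_per_iter(const int *cpus, int n)
{
    for (int i = 0; i < n; i++) {
        flag_checks++;
        if (tb_init_done)
            trace_var_stub(0, (unsigned int)cpus[i]);
    }
}

/* The hoisted variant: test the flag once, outside the loop. */
static void tickle_trace_hoisted(const int *cpus, int n)
{
    flag_checks++;
    if (!tb_init_done)
        return;
    for (int i = 0; i < n; i++)
        trace_var_stub(0, (unsigned int)cpus[i]);
}
```

With tracing disabled, the per-iteration version reads the flag once per tickled CPU while the hoisted version reads it exactly once; whether that saving justifies bypassing TRACE_1D is the judgment call in question.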

> The tracing infrastructure can only set the size
> at a granularity of 32-bit words anyway, and at this point cpu is
> "unsigned int", which will be a single word.
>
I know that. I just followed suit from sched_credit2.c, but I agree it's
quite pointless for just one single field. Even if we decide to leave
the direct call to trace_var, I'll kill the dummy struct.

> Other than that, everything looks good.
>
Ok, thanks. :-)
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-WUZgaHDN8BN8NoE8N1ki
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlC/NfgACgkQk4XaBE3IOsSMRACfcIRIZQMBcotE1UUDvHOi7MCM
fMcAoIK6Wz9Ju+Dq+j39UOda/nNa5FER
=EPkL
-----END PGP SIGNATURE-----

--=-WUZgaHDN8BN8NoE8N1ki--



--===============8566342031109079452==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8566342031109079452==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 11:56:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:56:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDZx-0002Xk-28; Wed, 05 Dec 2012 11:55:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1TgDZv-0002XW-CM
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 11:55:51 +0000
Received: from [85.158.137.99:22579] by server-5.bemta-3.messagelabs.com id
	42/24-26311-6463FB05; Wed, 05 Dec 2012 11:55:50 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354708473!18032275!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17367 invoked from network); 5 Dec 2012 11:54:34 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-13.tower-217.messagelabs.com with SMTP;
	5 Dec 2012 11:54:34 -0000
Received: from [62.94.143.201] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 76759398; Wed, 05 Dec 2012 12:54:44 +0100
Message-ID: <1354708472.21632.21.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 05 Dec 2012 12:54:32 +0100
In-Reply-To: <50BE4AC2.9010507@eu.citrix.com>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>
	<50BE4AC2.9010507@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8566342031109079452=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8566342031109079452==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-WUZgaHDN8BN8NoE8N1ki"


--=-WUZgaHDN8BN8NoE8N1ki
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2012-12-04 at 19:10 +0000, George Dunlap wrote:
> On 03/12/12 16:35, Dario Faggioli wrote:
> > +            /* Avoid TRACE_* to avoid a lot of useless !tb_init_done checks */
> > +            for_each_cpu(cpu, &mask)
> > +            {
> > +                struct {
> > +                    unsigned cpu:8;
> > +                } d;
> > +                d.cpu = cpu;
> > +                trace_var(TRC_CSCHED_TICKLE, 0,
> > +                          sizeof(d),
> > +                          (unsigned char*)&d);
>
> Why not just TRACE_1D()?
>
As I tried to explain in the comment, I just wanted to avoid checking
!tb_init_done more than once, as this happens within a loop and, at
least potentially, there may be more CPUs to tickle (and thus more calls
to TRACE_1D). I take this comment of yours to mean you don't think that
is worthwhile, right? If so, I can definitely turn this into a
"standard" TRACE_1D() call.
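For what it's worth, the trade-off can be sketched with a toy model. Everything here is a hypothetical stand-in (stubbed trace_var(), an instrumented tb_init_done test), not the real Xen tracing code; it only counts how often the tb_init_done test runs in each form:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the Xen tracing machinery, instrumented
 * to count how many times the tb_init_done test is evaluated. */
static int tb_init_done = 0;
static int checks_done = 0;

static int check_tb(void) { checks_done++; return tb_init_done; }

static void trace_var(int event, int cycles, size_t extra, unsigned char *data)
{
    (void)event; (void)cycles; (void)extra; (void)data; /* no-op stub */
}

#define TRC_CSCHED_TICKLE 1

/* What a TRACE_1D-style macro roughly expands to: test tb_init_done
 * before every trace_var() call. */
#define TRACE_1D(e, d1)                                              \
    do {                                                             \
        unsigned int _d = (d1);                                      \
        if ( check_tb() )                                            \
            trace_var((e), 0, sizeof(_d), (unsigned char *)&_d);     \
    } while ( 0 )

/* Per-iteration form: tickling n CPUs means n tb_init_done checks. */
static int tickle_with_macro(int n)
{
    checks_done = 0;
    for ( int cpu = 0; cpu < n; cpu++ )
        TRACE_1D(TRC_CSCHED_TICKLE, cpu);
    return checks_done;
}

/* Hoisted form: test once, then call trace_var() directly in the loop. */
static int tickle_hoisted(int n)
{
    checks_done = 0;
    if ( check_tb() )
        for ( int cpu = 0; cpu < n; cpu++ )
        {
            unsigned int d = cpu;
            trace_var(TRC_CSCHED_TICKLE, 0, sizeof(d), (unsigned char *)&d);
        }
    return checks_done;
}
```

Whether one extra check per tickled CPU is worth avoiding is exactly the judgment call being made here.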

> The tracing infrastructure can only set the size
> at a granularity of 32-bit words anyway, and at this point cpu is
> "unsigned int", which will be a single word.
>
I know that. I just followed suit from sched_credit2.c, but I agree it's
quite pointless for a single field. Even if we decide to keep the direct
call to trace_var, I'll kill the dummy struct.

> Other than that, everything looks good.
>
Ok, thanks. :-)
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-WUZgaHDN8BN8NoE8N1ki
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlC/NfgACgkQk4XaBE3IOsSMRACfcIRIZQMBcotE1UUDvHOi7MCM
fMcAoIK6Wz9Ju+Dq+j39UOda/nNa5FER
=EPkL
-----END PGP SIGNATURE-----

--=-WUZgaHDN8BN8NoE8N1ki--



--===============8566342031109079452==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8566342031109079452==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 11:56:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDal-0002ec-NS; Wed, 05 Dec 2012 11:56:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1TgDaj-0002eR-UF
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 11:56:42 +0000
Received: from [85.158.138.51:52700] by server-16.bemta-3.messagelabs.com id
	43/3E-07461-9763FB05; Wed, 05 Dec 2012 11:56:41 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1354708595!23399160!1
X-Originating-IP: [74.125.149.246]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3795 invoked from network); 5 Dec 2012 11:56:38 -0000
Received: from na3sys009aog119.obsmtp.com (HELO na3sys009aog119.obsmtp.com)
	(74.125.149.246)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 11:56:38 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob119.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUL82c8RSZnuxbASxI+tdpUp3UNJr4MZg@postini.com;
	Wed, 05 Dec 2012 03:56:37 PST
Received: from INHYMS170.ca.com (155.35.35.44) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 5 Dec 2012 17:26:32 +0530
Received: from INHYMS111A.ca.com ([169.254.3.239]) by INHYMS170.ca.com
	([155.35.35.44]) with mapi id 14.01.0355.002;
	Wed, 5 Dec 2012 17:26:32 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Matt Wilson <msw@amazon.com>, Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
	properly when larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAgAd+rmAEvv3JQAAGcEEQA==
Date: Wed, 5 Dec 2012 11:56:32 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {BEC6B819-E9B0-423F-8636-BEAF70ABD305}
x-cr-hashedpuzzle: GPtR GmyV G14Q H9Gk KEPr MaUT M0YN N/eL RhWl SImS SWUV
	WMAA W//T XBmS Yc1f cggL; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAbQBzAHcAQABhAG0AYQB6AG8AbgAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {BEC6B819-E9B0-423F-8636-BEAF70ABD305};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Wed,
	05 Dec 2012 11:52:34 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDACAAVgAyAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt,

> -----Original Message-----
> From: Matt Wilson [mailto:msw@amazon.com]
> Sent: Wednesday, December 05, 2012 4:53 AM
> To: Ian Campbell; Palagummi, Siva
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Thu, Aug 30, 2012 at 09:07:11AM +0100, Ian Campbell wrote:
> > On Wed, 2012-08-29 at 13:21 +0100, Palagummi, Siva wrote:
> > > This patch contains the modifications that are discussed in thread
> > > http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html
> 
> [...]
> 
> > > Instead of using max_required_rx_slots, I used the count that we
> > > already have in hand to verify if we have enough room in the batch
> > > queue for next skb. Please let me know if that is not appropriate.
> > > Things worked fine in my environment. Under heavy load now we seems
> to
> > > be consuming most of the slots in the queue and no BUG_ON :-)
> >
> > > From: Siva Palagummi <Siva.Palagummi@ca.com>
> > >
> > > count variable in xen_netbk_rx_action need to be incremented
> > > correctly to take into account of extra slots required when
> skb_headlen is
> > > greater than PAGE_SIZE when larger MTU values are used. Without
> this change
> > > BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)) is causing netback
> thread
> > > to exit.
> > >
> > > The fix is to stash the counting already done in
> xen_netbk_count_skb_slots
> > > in skb_cb_overlay and use it directly in xen_netbk_rx_action.
> 
> You don't need to stash the estimated value to use it in
> xen_netbk_rx_action() - you have the actual number of slots consumed
> in hand from the return value of netbk_gop_skb(). See below.
> 
> > > Also improved the checking for filling the batch queue.
> > >
> > > Also merged a change from a patch created for
> xen_netbk_count_skb_slots
> > > function as per thread
> > > http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html
> > >
> > > The problem is seen with linux 3.2.2 kernel on Intel 10Gbps network
> > >
> > >
> > > Signed-off-by: Siva Palagummi <Siva.Palagummi@ca.com>
> > > ---
> > > diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-
> netback/netback.c
> > > --- a/drivers/net/xen-netback/netback.c	2012-01-25
> 19:39:32.000000000 -0500
> > > +++ b/drivers/net/xen-netback/netback.c	2012-08-28
> 17:31:22.000000000 -0400
> > > @@ -80,6 +80,11 @@ union page_ext {
> > >  	void *mapping;
> > >  };
> > >
> > > +struct skb_cb_overlay {
> > > +	int meta_slots_used;
> > > +	int count;
> > > +};
> 
> We don't actually need a separate variable for the estimate. We could
> rename meta_slots_used to meta_slots. It could hold the estimate
> before netbk_gop_skb() is called and the actual number of slots
> consumed after. That might be confusing, though, so maybe it's better
> off left as two variables.
> 
> > >  struct xen_netbk {
> > >  	wait_queue_head_t wq;
> > >  	struct task_struct *task;
> > > @@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
> > >  {
> > >  	unsigned int count;
> > >  	int i, copy_off;
> > > +	struct skb_cb_overlay *sco;
> > >
> > > -	count = DIV_ROUND_UP(
> > > -			offset_in_page(skb->data)+skb_headlen(skb),
> PAGE_SIZE);
> > > +	count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> >
> > This hunk appears to be upstream already (see
> > e26b203ede31fffd52571a5ba607a26c79dc5c0d). Which tree are you working
> > against? You should either base patches on Linus' branch or on the
> > net-next branch.
> >
> > Other than this the patch looks good, thanks.
> 
> I don't think that this patch completely addresses problems
> calculating the number of slots required when large MTUs are used.
> 
> For example: if netback is handling a skb with a large linear data
> portion, say 8157 bytes, that begins at 64 bytes from the start of the
> page. Assume that GSO is disabled and there are no frags.
> xen_netbk_count_skb_slots() will calculate that two slots are needed:
> 
>     count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
> but netbk_gop_frag_copy() will actually use three. Let's walk through
> the loop:
You are right. The above chunk, which is already upstream, is unfortunately incorrect for some cases. We also ran into issues in our environment about a week ago and tracked them down to this problem. The count will differ based on the head length because of the optimization that start_new_rx_buffer() tries to make for large buffers: a hole of size offset_in_page(skb->data) is left in the first page during the copy if the remaining buffer size is >= PAGE_SIZE. This subsequently affects the copy_off as well.


So xen_netbk_count_skb_slots() actually needs a fix to calculate the count correctly based on the head length, and also a fix to calculate the copy_off to which the data from fragments gets copied.

max_required_rx_slots may also require a fix to account for the additional slot that may be needed when mtu >= PAGE_SIZE; for the worst case, at least another +1. One thing that is still puzzling here is that max_required_rx_slots seems to assume the linear length in the head will never be greater than the MTU size, but that doesn't seem to be the case all the time. I wonder whether it needs a fix there, or special handling for when count_skb_slots exceeds max_required_rx_slots.
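To make the example Matt gives above (an 8157-byte linear buffer starting 64 bytes into a page) reproducible, here is a small standalone model of the copy loop. start_new_rx_buffer() below is reconstructed from the walkthrough, and the up-front first slot is a simplification, so treat this as a sketch of the 3.2-era logic rather than the actual netback source:

```c
#include <assert.h>

#define PAGE_SIZE          4096u
#define MAX_BUFFER_OFFSET  PAGE_SIZE
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Reconstructed model of start_new_rx_buffer(): start a fresh rx buffer
 * when the current one is full, or when a >= page-sized chunk would be
 * split across buffers (the "optimization" discussed above). */
static int start_new_rx_buffer(unsigned int offset, unsigned int size, int head)
{
    if (offset == MAX_BUFFER_OFFSET)
        return 1;
    if (offset + size > MAX_BUFFER_OFFSET &&
        size <= MAX_BUFFER_OFFSET && offset && !head)
        return 1;
    return 0;
}

/* Count the rx ring slots actually consumed for a linear buffer of
 * headlen bytes starting start_off bytes into a page. */
static unsigned int slots_used(unsigned int headlen, unsigned int start_off)
{
    unsigned int copy_off = 0, slots = 1; /* first slot grabbed up front */
    unsigned int data = start_off;        /* offset of skb->data in its page */
    int head = 1;

    while (headlen > 0) {
        /* one netbk_gop_frag_copy() call per page crossed by the head */
        unsigned int len = PAGE_SIZE - (data & (PAGE_SIZE - 1));
        if (len > headlen)
            len = headlen;

        unsigned int size = len;
        while (size > 0) {
            if (start_new_rx_buffer(copy_off, size, head)) {
                slots++;
                copy_off = 0;
            }
            unsigned int bytes = size;
            if (copy_off + bytes > MAX_BUFFER_OFFSET)
                bytes = MAX_BUFFER_OFFSET - copy_off;
            copy_off += bytes;
            size -= bytes;
            head = 0;
        }
        data += len;
        headlen -= len;
    }
    return slots;
}
```

Under this model the 8157-byte head at offset 64 consumes three slots, while DIV_ROUND_UP(skb_headlen, PAGE_SIZE) predicts two, which is the underestimate being described.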

> 
>         data = skb->data;
>         while (data < skb_tail_pointer(skb)) {
>                 unsigned int offset = offset_in_page(data);
>                 unsigned int len = PAGE_SIZE - offset;
> 
>                 if (data + len > skb_tail_pointer(skb))
>                         len = skb_tail_pointer(skb) - data;
> 
>                 netbk_gop_frag_copy(vif, skb, npo,
>                                     virt_to_page(data), len, offset,
> &head);
>                 data += len;
>         }
> 
> The first pass will call netbk_gop_frag_copy() with len=4032,
> offset=64, and head=1. After the call, head will be set to 0. Inside
> netbk_gop_frag_copy(), start_new_rx_buffer() will be called with
> offset=0, size=4032, head=1. We'll return false due to the checks for
> "offset" and "!head".
> 
> The second pass will call netbk_gop_frag_copy() with len=4096,
> offset=0, head=0. netbk_gop_frag_copy() will get called with
> offset=4032, bytes=4096, head=0. We'll return true here, which we
> shouldn't since it's just going to lead to buffer waste for the last
> bit.
> 
> The third pass will call with len=29 and offset=0.
> start_new_rx_buffer()
> will be called with offset=4096, bytes=29, head=0. We'll start a new
> buffer for the last bit.
> 
> So you can see that we underestimate the buffers / meta slots required
> to handle a skb with a large linear buffer, as we commonly have to
> handle with large MTU sizes. This can lead to problems later on.
> 
> I'm not as familiar with the new compound page frag handling code, but
> I can imagine that the same problem could exist there. But since the
> calculation logic in xen_netbk_count_skb_slots() directly models the
> code setting up the copy operation, at least it will be estimated
> properly.
> 
> > >
> > >  	copy_off = skb_headlen(skb) % PAGE_SIZE;
> > >
> > > @@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
> > >  			size -= bytes;
> > >  		}
> > >  	}
> > > +	sco = (struct skb_cb_overlay *)skb->cb;
> > > +	sco->count = count;
> > >  	return count;
> > >  }
> > >
> > > @@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
> > >  	}
> > >  }
> > >
> > > -struct skb_cb_overlay {
> > > -	int meta_slots_used;
> > > -};
> > >
> > >  static void xen_netbk_rx_action(struct xen_netbk *netbk)
> > >  {
> > > @@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
> > >  		sco = (struct skb_cb_overlay *)skb->cb;
> > >  		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
> > >
> > > -		count += nr_frags + 1;
> > > +		count += sco->count;
> 
> Why increment count by the /estimated/ count instead of the actual
> number of slots used? We have the number of slots in the line just
> above, in sco->meta_slots_used.
> 
Count actually refers to ring slots consumed rather than meta slots used; the two can be different.

> > >  		__skb_queue_tail(&rxq, skb);
> > >
> > > +		skb = skb_peek(&netbk->rx_queue);
> > > +		if (skb == NULL)
> > > +			break;
> > > +		sco = (struct skb_cb_overlay *)skb->cb;
> > >  		/* Filled the batch queue? */
> > > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> > >  			break;
> > >  	}
> > >
> 
> This change I like.
> 
> We're working on a patch to improve the buffer efficiency and the
> miscalculation problem. Siva, I'd be happy to re-base and re-submit
> this patch (with minor adjustments) as part of that work, unless you
> want to handle that.
> 
> Matt

Thanks!!  Please feel free to re-base and re-submit :-)

Siva


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 11:57:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 11:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDbP-0002l9-9D; Wed, 05 Dec 2012 11:57:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1TgDbN-0002kz-VT
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 11:57:22 +0000
Received: from [193.109.254.147:48638] by server-13.bemta-14.messagelabs.com
	id 2A/2A-11239-1A63FB05; Wed, 05 Dec 2012 11:57:21 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354708640!9016185!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14390 invoked from network); 5 Dec 2012 11:57:20 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-2.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 11:57:20 -0000
Received: from [62.94.143.201] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 76759516; Wed, 05 Dec 2012 12:57:30 +0100
Message-ID: <1354708633.21632.24.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Wed, 05 Dec 2012 12:57:13 +0100
In-Reply-To: <CAFLBxZaor6aBA6AzAGrGhVBCG+UMS+6G_Q0SKQCgw-hmWuDLxA@mail.gmail.com>
References: <patchbomb.1354552497@Solace>
	<7265520b0188740d3dfa.1354552499@Solace>
	<CAFLBxZaor6aBA6AzAGrGhVBCG+UMS+6G_Q0SKQCgw-hmWuDLxA@mail.gmail.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xensource.com>, Keir Fraser <keir@xen.org>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 3] xen: tracing: introduce
 per-scheduler trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8352038728249691919=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8352038728249691919==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-+mjKfYoTFEcOr3UP2GjC"


--=-+mjKfYoTFEcOr3UP2GjC
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2012-12-04 at 18:53 +0000, George Dunlap wrote:
>
>         +/*
>         + * Credit2 tracing events ("only" 512 available!). Check
>         + * include/public/trace.h for more details.
>         + */
>         +#define TRC_CSCHED2_EVENT(_e)        ((TRC_SCHED_CLASS|
>         TRC_MASK_CSCHED2) + _e)
>
> I think I would make this generic, and put it in trace.h
>
Sounds good. Will do.

> -- maybe something like this?  (Haven't run this through a compiler)
>
> #define TRC_SCHED_CLASS_EVT(_c, _e) \
>
> ((TRC_SCHED_CLASS|(TRC_SCHED_##_c<<TRC_SCHED_MASK_SHIFT))+(_e&TRC_SCHED_CLASS_MASK))
>
I'll try it and resend.
>         +#define TRC_SCHED_ID_BITS    3
>         +#define TRC_SCHED_MASK_SHIFT (TRC_SUBCLS_SHIFT -
>         TRC_SCHED_ID_BITS)
>         +
>         +#define TRC_MASK_CSCHED      (0 << TRC_SCHED_MASK_SHIFT)
>         +#define TRC_MASK_CSCHED2     (1 << TRC_SCHED_MASK_SHIFT)
>         +#define TRC_MASK_SEDF        (2 << TRC_SCHED_MASK_SHIFT)
>         +#define TRC_MASK_ARINC653    (3 << TRC_SCHED_MASK_SHIFT)
>
> I don't think "mask" is right here -- these aren't masks, they're
> numerical values. :-)  If we use something like the #define above,
> then we can do:
>
> #define TRC_SCHED_CSCHED 0
> #define TRC_SCHED_CSCHED2
> /*...*/
>
I agree, bad name. (re "mask"). :-)

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-+mjKfYoTFEcOr3UP2GjC
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlC/NpkACgkQk4XaBE3IOsTxzwCdFIHhcVEV5aXJKySTOHr2HI2W
YjgAmwSAoUxjsmFKN2pFNUKtuLDajCnm
=LoNY
-----END PGP SIGNATURE-----

--=-+mjKfYoTFEcOr3UP2GjC--



--===============8352038728249691919==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8352038728249691919==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 12:01:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:01:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDf7-0003IK-Sc; Wed, 05 Dec 2012 12:01:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgDf7-0003I2-5s
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:01:13 +0000
Received: from [85.158.138.51:35683] by server-13.bemta-3.messagelabs.com id
	4F/62-24887-8873FB05; Wed, 05 Dec 2012 12:01:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354708871!27471737!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29632 invoked from network); 5 Dec 2012 12:01:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:01:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16170536"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:01:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	12:01:11 +0000
Message-ID: <1354708869.15296.173.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <raistlin@linux.it>
Date: Wed, 5 Dec 2012 12:01:09 +0000
In-Reply-To: <1354708472.21632.21.camel@Abyss>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>	<50BE4AC2.9010507@eu.citrix.com>
	<1354708472.21632.21.camel@Abyss>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 11:54 +0000, Dario Faggioli wrote:
> On Tue, 2012-12-04 at 19:10 +0000, George Dunlap wrote: 
> > On 03/12/12 16:35, Dario Faggioli wrote:
> > > +            /* Avoid TRACE_* to avoid a lot of useless !tb_init_done checks */
> > > +            for_each_cpu(cpu, &mask)
> > > +            {
> > > +                struct {
> > > +                    unsigned cpu:8;
> > > +                } d;
> > > +                d.cpu = cpu;
> > > +                trace_var(TRC_CSCHED_TICKLE, 0,
> > > +                          sizeof(d),
> > > +                          (unsigned char*)&d);
> > 
> > Why not just TRC_1D()? 
> >
> As I tried to explain in the comment, I just wanted to avoid checking
> for !tb_init_done more than once, as this happens within a loop and, at
> least potentially, there may be more CPUs to tickle (and thus more calls
> to TRACE_1D).

If tb_init_done isn't marked volatile or anything like that isn't the
check hoisted out of the loop by the compiler?

> I take this comment of yours as you not thinking that is
> something worthwhile, right? If so, I can definitely turn this into a
> "standard" TRACE_1D() call.

Or maybe consider __TRACE_1D and friends which omit the check?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:02:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDgE-0003dZ-Q0; Wed, 05 Dec 2012 12:02:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgDgD-0003dL-EL
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:02:21 +0000
Received: from [85.158.139.211:50093] by server-4.bemta-5.messagelabs.com id
	8A/96-15011-CC73FB05; Wed, 05 Dec 2012 12:02:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1354708898!19181434!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1969 invoked from network); 5 Dec 2012 12:01:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:01:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16170569"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:01:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 12:01:37 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgDfV-0002kx-LP;
	Wed, 05 Dec 2012 12:01:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgDfV-00027G-Cc;
	Wed, 05 Dec 2012 12:01:37 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14565-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 12:01:37 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14565: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14565 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14565/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14559
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14559

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  bc624b00d6d6
baseline version:
 xen                  29247e44df47

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=bc624b00d6d6
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable bc624b00d6d6
+ branch=xen-unstable
+ revision=bc624b00d6d6
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r bc624b00d6d6 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 10 changesets with 12 changes to 11 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  bc624b00d6d6
baseline version:
 xen                  29247e44df47

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=bc624b00d6d6
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable bc624b00d6d6
+ branch=xen-unstable
+ revision=bc624b00d6d6
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r bc624b00d6d6 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 10 changesets with 12 changes to 11 files
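
The trace above shows osstest's locking idiom: cri-lock-repos checks whether the repos lock is already held and otherwise re-execs the whole script under `with-lock-ex -w`, so concurrent pushes to the same repos tree serialize. A minimal sketch of the same idiom, using flock(1) as a stand-in for with-lock-ex (the lockfile path and the command run under the lock are illustrative, not osstest's real ones):

```shell
#!/bin/sh
# Sketch of the lock-then-run pattern from the trace: take an exclusive
# lock on a lockfile and run the critical section (here, a placeholder
# for the hg push) while holding it. A second invocation against the
# same LOCKFILE would block until the first one releases the lock.
LOCKFILE=$(mktemp)

# flock -x runs the given command with the lock held.
RESULT=$(flock -x "$LOCKFILE" sh -c 'echo push-done')

rm -f "$LOCKFILE"
echo "$RESULT"
```

osstest's variant additionally exports a guard variable (OSSTEST_REPOS_LOCK_LOCKED) so that, after the `exec` under the lock, the re-run script can detect it already holds the lock and skip re-acquiring it.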

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:02:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDgM-0003em-8O; Wed, 05 Dec 2012 12:02:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TgDgK-0003e8-11
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:02:28 +0000
Received: from [85.158.139.83:33099] by server-5.bemta-5.messagelabs.com id
	BB/AB-11353-3D73FB05; Wed, 05 Dec 2012 12:02:27 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354708946!21254542!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3114 invoked from network); 5 Dec 2012 12:02:26 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-11.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 12:02:26 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:50822 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TgDjy-0003cr-3O; Wed, 05 Dec 2012 13:06:14 +0100
Date: Wed, 5 Dec 2012 13:02:22 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1662073313.20121205130222@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<28369388.20121205000539@eikelenboom.it>
	<alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, December 5, 2012, 12:51:13 PM, you wrote:

> On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
>> Tuesday, December 4, 2012, 12:01:05 PM, you wrote:
>> 
>> > On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> >> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>> >> >>  I had a quick look, and it doesn't look that hard to backport that patch.
>> >> > 
>> >> > Thanks, Mats.
>> >> > I'm glad to report that the patch does fix my problem.
>> >> > 
>> >> > And yes, it is really easy to port, since the code did not change across the
>> >> > two releases.
>> >> > The only changes would be the line numbers (3841 vs 3803) and one extra comment
>> >> > before this line:
>> >> > 
>> >> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> >> > 
>> >> > I'm not sure if you are going to release another maintenance version that
>> >> > includes this patch,
>> >> > but I'll report this to the Debian maintainer, since Debian is about to freeze for
>> >> > the v7.0 release and v4.2.0 will not make it.
>> >> 
>> >> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> >> out?
>> >> 
>> 
>> > It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
>> > so I'd say it should be a candidate for Xen 4.1.4.
>> 
>> Just tried switching the device model to "qemu-xen"; it seems this one isn't fixed upstream either.
>> (XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3
>> 
>> Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
>> 

> The patch is supposed to fix a bug affecting msi_translate but in
> upstream QEMU we haven't implemented msi_translate at all! So in this
> case the cause of the bug (and as a consequence the fix) must be a
> different one.

OK, I'll see if I can find out what is going on. I'll probably have to force msi_translate=0.

I noticed "qemu-xen" doesn't create a log file in /var/log/xen as "qemu-dm" does; is this expected?


>> Sander
>> 
>> > -- Pasi
>> 
>> >> Jan
>> >> 
>> >> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
>> >> > <mats.petersson@citrix.com>wrote:
>> >> > 
>> >> >> On 03/12/12 13:19, Mats Petersson wrote:
>> >> >>
>> >> >>> On 03/12/12 13:14, G.R. wrote:
>> >> >>>
>> >> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>> >> >>>> <mats.petersson@citrix.com> wrote:
>> >> >>>>
>> >> >>>>      On 03/12/12 03:47, G.R. wrote:
>> >> >>>>
>> >> >>>>          Hi developers,
>> >> >>>>          I met some domU issues and the log suggests missing interrupt.
>> >> >>>>          Details from here:
>> >> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>> >> >>>>          In summary, this is the suspicious log:
>> >> >>>>
>> >> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>> >> >>>>
>> >> >>>>          I've checked the code in question and found that mode 3 is a
>> >> >>>>          'reserved_1' mode.
>> >> >>>>          I want to trace down the source of this mode setting to
>> >> >>>>          root-cause the issue.
>> >> >>>>          But I'm not an xen developer, and am even a newbie as a xen
>> >> >>>> user.
>> >> >>>>          Could anybody give me instructions about how to enable
>> >> >>>>          detailed debug log?
>> >> >>>>          It could be better if I can get advice about experiments to
>> >> >>>>          perform / switches to try out etc.
>> >> >>>>
>> >> >>>>          My SW config:
>> >> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>> >> >>>>          domU: Debian wheezy 3.2.x stock kernel.
>> >> >>>>
>> >> >>>>          Thanks,
>> >> >>>>          Timothy
>> >> >>>>
>> >> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>> >> >>>>      What are the exact messages in the DomU?
>> >> >>>>
>> >> >>>>
>> >> >>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>> >> >>>> But this is actually a PVHVM guest, since the Debian stock kernel has PVOPS
>> >> >>>> enabled.
>> >> >>>> And when I tried another PVOPS-disabled Linux distro (OpenELEC v2.0), I
>> >> >>>> did not see such MSI-related error messages.
>> >> >>>> Actually, with that domU I do not see anything obviously wrong in the
>> >> >>>> log, but I also see nothing on the panel (the panel receives no signal and goes
>> >> >>>> power-saving) :-(
>> >> >>>>
>> >> >>>>
>> >> >>>> Back to the issue I was reporting, the domU log looks like this:
>> >> >>>>
>> >> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>> >> >>>> [drm:i915_hangcheck_ring_idle]
>> >> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>> >> >>>> 3354], missed IRQ?
>> >> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>> >> >>>> [drm:i915_hangcheck_ring_idle]
>> >> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>> >> >>>> 11297], missed IRQ?
>> >> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>> >> >>>> timeout, switching to polling mode: last cmd=0x000f0000
>> >> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>> >> >>>> codec, disabling MSI: last cmd=0x002f0600
>> >> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>> >> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>> >> >>>>
>> >> >>>>
>> >> >>>> Thanks,
>> >> >>>> Timothy
>> >> >>>>
>> >> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>> >> >>> fixes this. I'm not fully clued up on what the policy for backporting
>> >> >>> fixes is, and I haven't looked at the complexity of the fix itself, but
>> >> >>> either updating to 4.2.0 or a (personal) backport sounds like the
>> >> >>> right solution here.
>> >> >>>
>> >> >> I had a quick look, and it doesn't look that hard to backport that patch.
>> >> >>
>> >> >> --
>> >> >> Mats
>> >> >>
>> >> >>
>> >> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>> >> >>> to your original email.
>> >> >>>
>> >> >>> --
>> >> >>> Mats
>> >> >>>
>> >> >>> _______________________________________________
>> >> >>> Xen-devel mailing list
>> >> >>> Xen-devel@lists.xen.org 
>> >> >>> http://lists.xen.org/xen-devel 
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>
>> >> >> _______________________________________________
>> >> >> Xen-devel mailing list
>> >> >> Xen-devel@lists.xen.org 
>> >> >> http://lists.xen.org/xen-devel 
>> >> >>
>> >> 
>> >> 
>> >> 
>> >> _______________________________________________
>> >> Xen-devel mailing list
>> >> Xen-devel@lists.xen.org
>> >> http://lists.xen.org/xen-devel
>> 
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:07:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDlB-0004A2-IX; Wed, 05 Dec 2012 12:07:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgDl9-00049t-5B
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:07:27 +0000
Received: from [85.158.137.99:8742] by server-1.bemta-3.messagelabs.com id
	B9/1B-12169-EF83FB05; Wed, 05 Dec 2012 12:07:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354709232!17986156!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25671 invoked from network); 5 Dec 2012 12:07:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:07:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216461582"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:07:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 07:07:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgDks-0001ag-NJ;
	Wed, 05 Dec 2012 12:07:10 +0000
Date: Wed, 5 Dec 2012 12:07:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1662073313.20121205130222@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<28369388.20121205000539@eikelenboom.it>
	<alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
	<1662073313.20121205130222@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 5 Dec 2012, Sander Eikelenboom wrote:
> Wednesday, December 5, 2012, 12:51:13 PM, you wrote:
> 
> > On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
> >> Tuesday, December 4, 2012, 12:01:05 PM, you wrote:
> >> 
> >> > On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
> >> >> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
> >> >> >>  I had a quick look, and it doesn't look that hard to backport that patch.
> >> >> > 
> >> >> > Thanks, Mats.
> >> >> > I'm glad to report that the patch does fix my problem.
> >> >> > 
> >> >> > And yes, it is really easy to port since the code did not change across the
> >> >> > two releases.
> >> >> > The only change would be the line numbers (3841 vs 3803) and one extra comment
> >> >> > before this line:
> >> >> > 
> >> >> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> >> >> > 
> >> >> > I'm not sure if you are going to release another maintenance version that
> >> >> > includes this patch,
> >> >> > but I'll report this to the Debian maintainer since it's about to freeze for
> >> >> > the v7.0 release and v4.2.0 will not make it.
> >> >> 
> >> >> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
> >> >> out?
> >> >> 
> >> 
> >> > It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
> >> > so I'd say it should be a candidate for Xen 4.1.4.
> >> 
> >> I just tried switching the device model to "qemu-xen"; it seems this one isn't fixed upstream either.
> >> (XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3
> >> 
> >> Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
> >> 
> 
> > The patch is supposed to fix a bug affecting msi_translate but in
> > upstream QEMU we haven't implemented msi_translate at all! So in this
> > case the cause of the bug (and as a consequence the fix) must be a
> > different one.
> 
> OK, I will see if I can find out what is going on. I probably have to force msi_translate=0.
> 
> I noticed "qemu-xen" doesn't create a log file in /var/log/xen as "qemu-dm" does; is this expected?

No, it is not. You should get the usual /var/log/xen/qemu-dm-domainname.log file.
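
For reference, Sander's idea of forcing MSI-INTx translation off can be expressed in the guest configuration. This is a hypothetical fragment using the xm/xend-era per-device `msitranslate` flag; the exact option name and BDF are assumptions and may not apply to every toolstack/device-model combination:

```
# Hypothetical xm/xend-style guest config fragment: pass through the
# device at 00:1b.0 with MSI-INTx translation explicitly disabled.
pci = [ '00:1b.0,msitranslate=0' ]
```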


> >> Sander
> >> 
> >> > -- Pasi
> >> 
> >> >> Jan
> >> >> 
> >> >> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
> >> >> > <mats.petersson@citrix.com>wrote:
> >> >> > 
> >> >> >> On 03/12/12 13:19, Mats Petersson wrote:
> >> >> >>
> >> >> >>> On 03/12/12 13:14, G.R. wrote:
> >> >> >>>
> >> >> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
> >> >> >>>> <mats.petersson@citrix.com>
> >> >> >>>> wrote:
> >> >> >>>>
> >> >> >>>>      On 03/12/12 03:47, G.R. wrote:
> >> >> >>>>
> >> >> >>>>          Hi developers,
> >> >> >>>>          I met some domU issues and the log suggests missing interrupt.
> >> >> >>>>          Details from here:
> >> >> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
> >> >> >>>>          In summary, this is the suspicious log:
> >> >> >>>>
> >> >> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
> >> >> >>>>
> >> >> >>>>          I've checked the code in question and found that mode 3 is a
> >> >> >>>>          'reserved_1' mode.
> >> >> >>>>          I want to trace down the source of this mode setting to
> >> >> >>>>          root-cause the issue.
> >> >> >>>>          But I'm not an xen developer, and am even a newbie as a xen
> >> >> >>>> user.
> >> >> >>>>          Could anybody give me instructions about how to enable
> >> >> >>>>          detailed debug log?
> >> >> >>>>          It could be better if I can get advice about experiments to
> >> >> >>>>          perform / switches to try out etc.
> >> >> >>>>
> >> >> >>>>          My SW config:
> >> >> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
> >> >> >>>>          domU: Debian wheezy 3.2.x stock kernel.
> >> >> >>>>
> >> >> >>>>          Thanks,
> >> >> >>>>          Timothy
> >> >> >>>>
> >> >> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
> >> >> >>>>      What are the exact messages in the DomU?
> >> >> >>>>
> >> >> >>>>
> >> >> >>>> Yes, I'm doing PCI passthrough (the IGD, audio and USB controllers).
> >> >> >>>> But this is actually a PVHVM guest, since the Debian stock kernel has PVOP
> >> >> >>>> enabled.
> >> >> >>>> And when I tried another PVOP-disabled Linux distro (OpenELEC v2.0), I
> >> >> >>>> did not see such MSI-related error messages.
> >> >> >>>> Actually, with that domU I do not see anything obviously wrong in the
> >> >> >>>> log, but I also see nothing on the panel (the panel receives no signal
> >> >> >>>> and goes into power saving) :-(
> >> >> >>>>
> >> >> >>>>
> >> >> >>>> Back to the issue I was reporting, the domU log looks like this:
> >> >> >>>>
> >> >> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
> >> >> >>>> [drm:i915_hangcheck_ring_idle]
> >> >> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
> >> >> >>>> 3354], missed IRQ?
> >> >> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
> >> >> >>>> [drm:i915_hangcheck_ring_idle]
> >> >> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
> >> >> >>>> 11297], missed IRQ?
> >> >> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
> >> >> >>>> timeout, switching to polling mode: last cmd=0x000f0000
> >> >> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
> >> >> >>>> codec, disabling MSI: last cmd=0x002f0600
> >> >> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
> >> >> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
> >> >> >>>>
> >> >> >>>>
> >> >> >>>> Thanks,
> >> >> >>>> Timothy
> >> >> >>>>
> >> >> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan. I'm
> >> >> >>> not fully clued up on what the policy for backporting fixes is, and I
> >> >> >>> haven't looked at the complexity of the fix itself, but either updating
> >> >> >>> to 4.2.0 or a (personal) backport sounds like the right solution here.
> >> >> >>>
> >> >> >> I had a quick look, and it doesn't look that hard to backport that patch.
> >> >> >>
> >> >> >> --
> >> >> >> Mats
> >> >> >>
> >> >> >>
> >> >> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
> >> >> >>> to your original email.
> >> >> >>>
> >> >> >>> --
> >> >> >>> Mats
> >> >> >>>
> >> >> >>> _______________________________________________
> >> >> >>> Xen-devel mailing list
> >> >> >>> Xen-devel@lists.xen.org 
> >> >> >>> http://lists.xen.org/xen-devel 
> >> >> >>>
> >> >> >>>
> >> >> >>>
> >> >> >>
> >> >> >> _______________________________________________
> >> >> >> Xen-devel mailing list
> >> >> >> Xen-devel@lists.xen.org 
> >> >> >> http://lists.xen.org/xen-devel 
> >> >> >>
> >> >> 
> >> >> 
> >> >> 
> >> >> _______________________________________________
> >> >> Xen-devel mailing list
> >> >> Xen-devel@lists.xen.org
> >> >> http://lists.xen.org/xen-devel
> >> 
> >> 
> >> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:07:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDlN-0004BV-5Q; Wed, 05 Dec 2012 12:07:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgDlL-0004BF-Is
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:07:39 +0000
Received: from [85.158.138.51:35411] by server-3.bemta-3.messagelabs.com id
	B7/15-31566-A093FB05; Wed, 05 Dec 2012 12:07:38 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354709256!27521514!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27037 invoked from network); 5 Dec 2012 12:07:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:07:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216461609"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:07:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 07:07:35 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgDb5-0001QG-Kz;
	Wed, 05 Dec 2012 11:57:03 +0000
Message-ID: <50BF3530.8080603@eu.citrix.com>
Date: Wed, 5 Dec 2012 11:51:12 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>
	<50BE4AC2.9010507@eu.citrix.com> <1354708472.21632.21.camel@Abyss>
In-Reply-To: <1354708472.21632.21.camel@Abyss>
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 11:54, Dario Faggioli wrote:
> On Tue, 2012-12-04 at 19:10 +0000, George Dunlap wrote:
>> On 03/12/12 16:35, Dario Faggioli wrote:
>>> +            /* Avoid TRACE_* to avoid a lot of useless !tb_init_done checks */
>>> +            for_each_cpu(cpu, &mask)
>>> +            {
>>> +                struct {
>>> +                    unsigned cpu:8;
>>> +                } d;
>>> +                d.cpu = cpu;
>>> +                trace_var(TRC_CSCHED_TICKLE, 0,
>>> +                          sizeof(d),
>>> +                          (unsigned char*)&d);
>> Why not just TRACE_1D()?
>>
> As I tried to explain in the comment, I just wanted to avoid checking
> for !tb_init_done more than once, as this happens within a loop and, at
> least potentially, there may be more CPUs to tickle (and thus more calls
> to TRACE_1D). I take this comment of yours to mean that you don't think
> it is worthwhile, right? If so, I can definitely turn this into a
> "standard" TRACE_1D() call.

Oh right -- yeah, no sense in having a duplicate check on tb_init_done; 
but the struct is still pointless; just passing sizeof(cpu) and &cpu 
would be prettier (even if the compiler will probably optimize it to 
the same thing).

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:11:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDpA-0004bA-Bt; Wed, 05 Dec 2012 12:11:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TgDp7-0004Zy-UF
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:11:34 +0000
Received: from [85.158.139.211:32042] by server-11.bemta-5.messagelabs.com id
	28/C2-03409-5F93FB05; Wed, 05 Dec 2012 12:11:33 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354709491!19169516!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8557 invoked from network); 5 Dec 2012 12:11:31 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 12:11:31 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:50881 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TgDsm-0003fv-06; Wed, 05 Dec 2012 13:15:20 +0100
Date: Wed, 5 Dec 2012 13:11:28 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <182397271.20121205131128@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<28369388.20121205000539@eikelenboom.it>
	<alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
	<1662073313.20121205130222@eikelenboom.it>
	<alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, December 5, 2012, 1:07:07 PM, you wrote:

> On Wed, 5 Dec 2012, Sander Eikelenboom wrote:
>> Wednesday, December 5, 2012, 12:51:13 PM, you wrote:
>> 
>> > On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
>> >> Tuesday, December 4, 2012, 12:01:05 PM, you wrote:
>> >> 
>> >> > On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> >> >> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>> >> >> >>  I had a quick look, and it doesn't look that hard to backport that patch.
>> >> >> > 
>> >> >> > Thanks, Mats.
>> >> >> > I'm glad to report that the patch does fix my problem.
>> >> >> > 
>> >> >> > And yes, it is really easy to port since the code did not change across the
>> >> >> > two releases.
>> >> >> > The only change would be line numbers (3841 vs 3803) and one extra comment
>> >> >> > before this line:
>> >> >> > 
>> >> >> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> >> >> > 
>> >> >> > I'm not sure if you are going to release another maintenance version that
>> >> >> > includes this patch,
>> >> >> > but I'll report this to the Debian maintainer since it's about to freeze for
>> >> >> > the v7.0 release and v4.2.0 will not make it.
>> >> >> 
>> >> >> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> >> >> out?
>> >> >> 
>> >> 
>> >> > It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
>> >> > so I'd say it should be a candidate for Xen 4.1.4.
>> >> 
>> >> Just tried switching the device model to "qemu-xen"; it seems this fix isn't upstream either.
>> >> (XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3
>> >> 
>> >> Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
>> >> 
>> 
>> > The patch is supposed to fix a bug affecting msi_translate but in
>> > upstream QEMU we haven't implemented msi_translate at all! So in this
>> > case the cause of the bug (and as a consequence the fix) must be a
>> > different one.
>> 
>> OK, will see if I can find out what is going on. Probably have to force msi_translate=0.
>> 
>> I noticed "qemu-xen" doesn't make a log file in /var/log/xen as "qemu-dm" does; is this expected?
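The msi_translate override mentioned above can be sketched as a domU config fragment. This is a hedged sketch only: the per-device `msitranslate` flag follows the xm/xl PCI passthrough config syntax of the time, and the device BDF is purely illustrative.

```
# Illustrative only: pass the device through with MSI-INTx translation
# forced off for this guest (msitranslate=0); BDF is a placeholder.
pci = [ '00:1b.0,msitranslate=0' ]
```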

> No, it is not. You should get the usual /var/log/xen/qemu-dm-domainname.log file.

OK .. I was expecting a qemu-xen-domainname log file, so I will double check :-)

Thx !

>> >> Sander
>> >> 
>> >> > -- Pasi
>> >> 
>> >> >> Jan
>> >> >> 
>> >> >> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
>> >> >> > <mats.petersson@citrix.com> wrote:
>> >> >> > 
>> >> >> >> On 03/12/12 13:19, Mats Petersson wrote:
>> >> >> >>
>> >> >> >>> On 03/12/12 13:14, G.R. wrote:
>> >> >> >>>
>> >> >> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>> >> >> >>>> <mats.petersson@citrix.com>
>> >> >> >>>> wrote:
>> >> >> >>>>
>> >> >> >>>>      On 03/12/12 03:47, G.R. wrote:
>> >> >> >>>>
>> >> >> >>>>          Hi developers,
>> >> >> >>>>          I met some domU issues and the log suggests missing interrupt.
>> >> >> >>>>          Details from here:
>> >> >> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>> >> >> >>>>          In summary, this is the suspicious log:
>> >> >> >>>>
>> >> >> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>> >> >> >>>>
>> >> >> >>>>          I've checked the code in question and found that mode 3 is a
>> >> >> >>>>          'reserved_1' mode.
>> >> >> >>>>          I want to trace down the source of this mode setting to
>> >> >> >>>>          root-cause the issue.
>> >> >> >>>>          But I'm not a Xen developer, and am even a newbie as a Xen
>> >> >> >>>>          user.
>> >> >> >>>>          Could anybody give me instructions about how to enable
>> >> >> >>>>          detailed debug log?
>> >> >> >>>>          It could be better if I can get advice about experiments to
>> >> >> >>>>          perform / switches to try out etc.
>> >> >> >>>>
>> >> >> >>>>          My SW config:
>> >> >> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>> >> >> >>>>          domU: Debian wheezy 3.2.x stock kernel.
>> >> >> >>>>
>> >> >> >>>>          Thanks,
>> >> >> >>>>          Timothy
>> >> >> >>>>
>> >> >> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>> >> >> >>>>      What are the exact messages in the DomU?
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>> >> >> >>>> But this is actually a PVHVM guest since the Debian stock kernel has PVOP
>> >> >> >>>> enabled.
>> >> >> >>>> And when I tried another PVOP-disabled Linux distro (OpenELEC v2.0), I
>> >> >> >>>> did not see such MSI-related error messages.
>> >> >> >>>> Actually, with that domU I do not see anything obviously wrong in the
>> >> >> >>>> log, but I also see nothing on the panel (the panel receives no signal
>> >> >> >>>> and goes into power-saving) :-(
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Back to the issue I was reporting, the domU log looks like this:
>> >> >> >>>>
>> >> >> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>> >> >> >>>> [drm:i915_hangcheck_ring_idle]
>> >> >> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>> >> >> >>>> 3354], missed IRQ?
>> >> >> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>> >> >> >>>> [drm:i915_hangcheck_ring_idle]
>> >> >> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>> >> >> >>>> 11297], missed IRQ?
>> >> >> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>> >> >> >>>> timeout, switching to polling mode: last cmd=0x000f0000
>> >> >> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>> >> >> >>>> codec, disabling MSI: last cmd=0x002f0600
>> >> >> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>> >> >> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Thanks,
>> >> >> >>>> Timothy
>> >> >> >>>>
>> >> >> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>> >> >> >>> fixes this. I'm not fully clued up on what the policy for backporting
>> >> >> >>> fixes is, and I haven't looked at the complexity of the fix itself, but
>> >> >> >>> either updating to 4.2.0 or a (personal) backport sounds like the
>> >> >> >>> right solution here.
>> >> >> >>>
>> >> >> >> I had a quick look, and it doesn't look that hard to backport that patch.
>> >> >> >>
>> >> >> >> --
>> >> >> >> Mats
>> >> >> >>
>> >> >> >>
>> >> >> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>> >> >> >>> to your original email.
>> >> >> >>>
>> >> >> >>> --
>> >> >> >>> Mats
>> >> >> >>>
>> >> >> >>> _______________________________________________
>> >> >> >>> Xen-devel mailing list
>> >> >> >>> Xen-devel@lists.xen.org 
>> >> >> >>> http://lists.xen.org/xen-devel 
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>
>> >> >> >> _______________________________________________
>> >> >> >> Xen-devel mailing list
>> >> >> >> Xen-devel@lists.xen.org 
>> >> >> >> http://lists.xen.org/xen-devel 
>> >> >> >>
>> >> >> 
>> >> >> 
>> >> >> 
>> >> >> _______________________________________________
>> >> >> Xen-devel mailing list
>> >> >> Xen-devel@lists.xen.org
>> >> >> http://lists.xen.org/xen-devel
>> >> 
>> >> 
>> >> 
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:16:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:16:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDtX-000553-7R; Wed, 05 Dec 2012 12:16:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1TgDtW-00054x-1l
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:16:06 +0000
Received: from [85.158.138.51:15201] by server-7.bemta-3.messagelabs.com id
	0C/84-01713-50B3FB05; Wed, 05 Dec 2012 12:16:05 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354709761!27523135!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7762 invoked from network); 5 Dec 2012 12:16:02 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-8.tower-174.messagelabs.com with SMTP;
	5 Dec 2012 12:16:02 -0000
Received: from [62.94.143.201] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 76759989; Wed, 05 Dec 2012 13:16:11 +0100
Message-ID: <1354709754.21632.31.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 05 Dec 2012 13:15:54 +0100
In-Reply-To: <1354708869.15296.173.camel@zakaz.uk.xensource.com>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>	<50BE4AC2.9010507@eu.citrix.com>
	<1354708472.21632.21.camel@Abyss>
	<1354708869.15296.173.camel@zakaz.uk.xensource.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4656008493446154556=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============4656008493446154556==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-AmXBMiv0fP4S0W9fi3vp"


--=-AmXBMiv0fP4S0W9fi3vp
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-12-05 at 12:01 +0000, Ian Campbell wrote:
> > As I tried to explain in the comment, I just wanted to avoid checking
> > for !tb_init_done more than once, as this happens within a loop and, at
> > least potentially, there may be more CPUs to tickle (and thus more calls
> > to TRACE_1D).
> 
> If tb_init_done isn't marked volatile or anything like that, isn't the
> check hoisted out of the loop by the compiler?
> 
Good point. As they're all macros, yes, I think that is something very
likely to happen... Although I haven't checked the generated code, I'll
take a look. Thanks.

> > I take this comment of yours as you not thinking that is
> > something worthwhile, right? If so, I can definitely turn this into a
> > "standard" TRACE_1D() call.
> 
> Or maybe consider __TRACE_1D and friends which omit the check?
> 
Mmm... It may well be me, but my

$ grep __TRACE xen/* -R

does not show any results... What am I missing?

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-AmXBMiv0fP4S0W9fi3vp
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlC/OvoACgkQk4XaBE3IOsRJlgCfZl12ClbD6wpY8zMSJyaPJL5M
88IAn0fUSPtsY6rfdeA8/y1v2KCXj/wC
=Fsfd
-----END PGP SIGNATURE-----

--=-AmXBMiv0fP4S0W9fi3vp--



--===============4656008493446154556==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4656008493446154556==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 12:21:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDyD-0005FP-1o; Wed, 05 Dec 2012 12:20:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgDyB-0005El-Fu
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:20:55 +0000
Received: from [85.158.139.211:51277] by server-5.bemta-5.messagelabs.com id
	01/3F-11353-62C3FB05; Wed, 05 Dec 2012 12:20:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354710053!19164671!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8745 invoked from network); 5 Dec 2012 12:20:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:20:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16171083"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:20:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	12:20:49 +0000
Message-ID: <1354710048.15296.175.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <raistlin@linux.it>
Date: Wed, 5 Dec 2012 12:20:48 +0000
In-Reply-To: <1354709754.21632.31.camel@Abyss>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>	<50BE4AC2.9010507@eu.citrix.com>
	<1354708472.21632.21.camel@Abyss>
	<1354708869.15296.173.camel@zakaz.uk.xensource.com>
	<1354709754.21632.31.camel@Abyss>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 12:15 +0000, Dario Faggioli wrote:
> On Wed, 2012-12-05 at 12:01 +0000, Ian Campbell wrote: 
> > > As I tried to explain in the comment, I just wanted to avoid checking
> > > for !tb_init_done more than once, as this happens within a loop and, at
> > > least potentially, there may be more CPUs to tickle (and thus more calls
> > > to TRACE_1D).
> > 
> > If tb_init_done isn't marked volatile or anything like that, isn't the
> > check hoisted out of the loop by the compiler?
> > 
> Good point. As they're all macros, yes, I think that is something very
> likely to happen... Although, I haven't checked the generated code, I'll
> take a look. Thanks.
> 
> > > I take this comment of yours as you not thinking that is
> > > something worthwhile, right? If so, I can definitely turn this into a
> > > "standard" TRACE_1D() call.
> > 
> > Or maybe consider __TRACE_1D and friends which omit the check?
> > 
> Mmm... It may well be me, but my
> 
> $ grep __TRACE xen/* -R
> 
> does not show any results... What am I missing?

I meant to define + use those macros.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:21:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:21:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDyR-0005LW-G2; Wed, 05 Dec 2012 12:21:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Jonathan.Ludlam@eu.citrix.com>) id 1TgDyQ-0005Kw-RX
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:21:11 +0000
Received: from [85.158.139.211:53140] by server-9.bemta-5.messagelabs.com id
	DE/3D-29295-63C3FB05; Wed, 05 Dec 2012 12:21:10 +0000
X-Env-Sender: Jonathan.Ludlam@eu.citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1354710066!18311014!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4411 invoked from network); 5 Dec 2012 12:21:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:21:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46663783"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:21:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 07:21:05 -0500
Received: from [10.80.118.170] (helo=[127.0.1.1])	by ukmail1.uk.xensource.com
	with esmtp (Exim 4.69)	(envelope-from
	<jonathan.ludlam@eu.citrix.com>)	id
	1TgDyL-0001tP-1O; Wed, 05 Dec 2012 12:21:05 +0000
MIME-Version: 1.0
X-Mercurial-Node: 575e649ad4dc61a37118cccd4181645379492878
Message-ID: <575e649ad4dc61a37118.1354710064@fungus>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 5 Dec 2012 12:21:04 +0000
From: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: ijc@hellion.org.uk
Subject: [Xen-devel] [PATCH] Fix the build of the ocaml libraries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These were previously capturing the absolute path of the build, causing
linking against the libraries to fail unless you still have the build
directory around.

Signed-off-by: Jon Ludlam <jonathan.ludlam@eu.citrix.com>

diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/Makefile.rules
--- a/tools/ocaml/Makefile.rules
+++ b/tools/ocaml/Makefile.rules
@@ -58,14 +58,8 @@
 
 # define a library target <name>.cmxa and <name>.cma
 define OCAML_LIBRARY_template
- $(1).cmxa: lib$(1)_stubs.a $(foreach obj,$($(1)_OBJS),$(obj).cmx)
-	$(call mk-caml-lib-native,$$@, -cclib -l$(1)_stubs $(foreach lib,$(LIBS_$(1)),-cclib $(lib)), $(foreach obj,$($(1)_OBJS),$(obj).cmx))
- $(1).cma: $(foreach obj,$($(1)_OBJS),$(obj).cmo)
-	$(call mk-caml-lib-bytecode,$$@, -dllib dll$(1)_stubs.so -cclib -l$(1)_stubs, $$+)
- $(1)_stubs.a: $(foreach obj,$$($(1)_C_OBJS),$(obj).o)
-	$(call mk-caml-stubs,$$@, $$+)
- lib$(1)_stubs.a: $(foreach obj,$($(1)_C_OBJS),$(obj).o)
-	$(call mk-caml-lib-stubs,$$@, $$+)
+  $(1).cma: $(foreach obj,$($(1)_OBJS),$(obj).cmx $(obj).cmo) $(foreach obj,$($(1)_C_OBJS),$(obj).o)
+	$(OCAMLMKLIB) -o $1 -oc $(1)_stubs $(foreach obj,$($(1)_OBJS),$(obj).cmx $(obj).cmo) $(foreach obj,$($(1)_C_OBJS),$(obj).o) $(foreach lib, $(LIBS_$(1)_SYSTEM), -cclib $(lib)) $(foreach arg,$(LIBS_$(1)),-ldopt $(arg))
 endef
 
 define OCAML_NOC_LIBRARY_template
diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/libs/eventchn/Makefile
--- a/tools/ocaml/libs/eventchn/Makefile
+++ b/tools/ocaml/libs/eventchn/Makefile
@@ -9,6 +9,7 @@
 LIBS = xeneventchn.cma xeneventchn.cmxa
 
 LIBS_xeneventchn = $(LDLIBS_libxenctrl)
+LIBS_xeneventchn_SYSTEM = -lxenctrl
 
 all: $(INTF) $(LIBS) $(PROGRAMS)
 
diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/libs/xc/Makefile
--- a/tools/ocaml/libs/xc/Makefile
+++ b/tools/ocaml/libs/xc/Makefile
@@ -10,6 +10,7 @@
 LIBS = xenctrl.cma xenctrl.cmxa
 
 LIBS_xenctrl = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
+LIBS_xenctrl_SYSTEM = -lxenctrl -lxenguest
 
 xenctrl_OBJS = $(OBJS)
 xenctrl_C_OBJS = xenctrl_stubs
diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/xenstored/Makefile
--- a/tools/ocaml/xenstored/Makefile
+++ b/tools/ocaml/xenstored/Makefile
@@ -36,7 +36,9 @@
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/eventchn $(OCAML_TOPLEVEL)/libs/eventchn/xeneventchn.cmxa \
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xc $(OCAML_TOPLEVEL)/libs/xc/xenctrl.cmxa \
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xb $(OCAML_TOPLEVEL)/libs/xb/xenbus.cmxa \
-	-ccopt -L -ccopt $(XEN_ROOT)/tools/libxc
+	-ccopt -L -ccopt $(XEN_ROOT)/tools/libxc \
+	$(foreach obj, $(LDLIBS_libxenctrl), -ccopt $(obj)) \
+	$(foreach obj, $(LDLIBS_libxenguest), -ccopt $(obj)) 
 
 PROGRAMS = oxenstored
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:22:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgDzs-0005Yh-0d; Wed, 05 Dec 2012 12:22:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Jonathan.Ludlam@eu.citrix.com>) id 1TgDzo-0005YP-H0
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:22:38 +0000
Received: from [85.158.137.99:41155] by server-16.bemta-3.messagelabs.com id
	39/19-07461-B8C3FB05; Wed, 05 Dec 2012 12:22:35 +0000
X-Env-Sender: Jonathan.Ludlam@eu.citrix.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354710153!18037681!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20388 invoked from network); 5 Dec 2012 12:22:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:22:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46663929"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	05 Dec 2012 12:22:33 +0000
Received: from [10.80.118.170] (10.80.118.170) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Wed, 5 Dec 2012
	07:22:33 -0500
Message-ID: <50BF3C87.5090201@eu.citrix.com>
Date: Wed, 5 Dec 2012 12:22:31 +0000
From: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:15.0) Gecko/20120912 Thunderbird/15.0.1
MIME-Version: 1.0
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
References: <575e649ad4dc61a37118.1354710064@fungus>
In-Reply-To: <575e649ad4dc61a37118.1354710064@fungus>
X-Originating-IP: [10.80.118.170]
Subject: Re: [Xen-devel] [PATCH] Fix the build of the ocaml libraries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a problem that affects xen-4.2.0, so it may well be worth 
backporting.

Jon

On 05/12/12 12:21, Jon Ludlam wrote:
> These were previously capturing the absolute path of the build, causing
> linking against the libraries to fail unless you still have the build
> directory around.
>
> Signed-off-by: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
>
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/Makefile.rules
> --- a/tools/ocaml/Makefile.rules
> +++ b/tools/ocaml/Makefile.rules
> @@ -58,14 +58,8 @@
>   
>   # define a library target <name>.cmxa and <name>.cma
>   define OCAML_LIBRARY_template
> - $(1).cmxa: lib$(1)_stubs.a $(foreach obj,$($(1)_OBJS),$(obj).cmx)
> -	$(call mk-caml-lib-native,$$@, -cclib -l$(1)_stubs $(foreach lib,$(LIBS_$(1)),-cclib $(lib)), $(foreach obj,$($(1)_OBJS),$(obj).cmx))
> - $(1).cma: $(foreach obj,$($(1)_OBJS),$(obj).cmo)
> -	$(call mk-caml-lib-bytecode,$$@, -dllib dll$(1)_stubs.so -cclib -l$(1)_stubs, $$+)
> - $(1)_stubs.a: $(foreach obj,$$($(1)_C_OBJS),$(obj).o)
> -	$(call mk-caml-stubs,$$@, $$+)
> - lib$(1)_stubs.a: $(foreach obj,$($(1)_C_OBJS),$(obj).o)
> -	$(call mk-caml-lib-stubs,$$@, $$+)
> +  $(1).cma: $(foreach obj,$($(1)_OBJS),$(obj).cmx $(obj).cmo) $(foreach obj,$($(1)_C_OBJS),$(obj).o)
> +	$(OCAMLMKLIB) -o $1 -oc $(1)_stubs $(foreach obj,$($(1)_OBJS),$(obj).cmx $(obj).cmo) $(foreach obj,$($(1)_C_OBJS),$(obj).o) $(foreach lib, $(LIBS_$(1)_SYSTEM), -cclib $(lib)) $(foreach arg,$(LIBS_$(1)),-ldopt $(arg))
>   endef
>   
>   define OCAML_NOC_LIBRARY_template
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/libs/eventchn/Makefile
> --- a/tools/ocaml/libs/eventchn/Makefile
> +++ b/tools/ocaml/libs/eventchn/Makefile
> @@ -9,6 +9,7 @@
>   LIBS = xeneventchn.cma xeneventchn.cmxa
>   
>   LIBS_xeneventchn = $(LDLIBS_libxenctrl)
> +LIBS_xeneventchn_SYSTEM = -lxenctrl
>   
>   all: $(INTF) $(LIBS) $(PROGRAMS)
>   
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/libs/xc/Makefile
> --- a/tools/ocaml/libs/xc/Makefile
> +++ b/tools/ocaml/libs/xc/Makefile
> @@ -10,6 +10,7 @@
>   LIBS = xenctrl.cma xenctrl.cmxa
>   
>   LIBS_xenctrl = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
> +LIBS_xenctrl_SYSTEM = -lxenctrl -lxenguest
>   
>   xenctrl_OBJS = $(OBJS)
>   xenctrl_C_OBJS = xenctrl_stubs
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/xenstored/Makefile
> --- a/tools/ocaml/xenstored/Makefile
> +++ b/tools/ocaml/xenstored/Makefile
> @@ -36,7 +36,9 @@
>   	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/eventchn $(OCAML_TOPLEVEL)/libs/eventchn/xeneventchn.cmxa \
>   	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xc $(OCAML_TOPLEVEL)/libs/xc/xenctrl.cmxa \
>   	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xb $(OCAML_TOPLEVEL)/libs/xb/xenbus.cmxa \
> -	-ccopt -L -ccopt $(XEN_ROOT)/tools/libxc
> +	-ccopt -L -ccopt $(XEN_ROOT)/tools/libxc \
> +	$(foreach obj, $(LDLIBS_libxenctrl), -ccopt $(obj)) \
> +	$(foreach obj, $(LDLIBS_libxenguest), -ccopt $(obj))
>   
>   PROGRAMS = oxenstored
>   
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:24:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:24:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgE11-0005gw-Ls; Wed, 05 Dec 2012 12:23:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgE0z-0005gb-Ix
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:23:49 +0000
Received: from [85.158.139.211:41781] by server-12.bemta-5.messagelabs.com id
	81/0E-02886-4DC3FB05; Wed, 05 Dec 2012 12:23:48 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354710226!19172109!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2415 invoked from network); 5 Dec 2012 12:23:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:23:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216463396"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:22:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 07:22:46 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgDzv-0001v2-Al;
	Wed, 05 Dec 2012 12:22:43 +0000
Message-ID: <50BF3B34.5030501@eu.citrix.com>
Date: Wed, 5 Dec 2012 12:16:52 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <patchbomb.1354552497@Solace>
	<dde3de6d81a3014f1d13.1354552498@Solace>
In-Reply-To: <dde3de6d81a3014f1d13.1354552498@Solace>
Content-Type: multipart/mixed; boundary="------------000604080007020604090108"
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 3] xen: sched_credit,
	improve tickling of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------000604080007020604090108
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit

On 03/12/12 16:34, Dario Faggioli wrote:
> Right now, when a VCPU wakes up, we check whether it should preempt
> what is running on the PCPU, and whether or not the waking VCPU can
> be migrated (by tickling some idlers). However, this can result in
> suboptimal or even wrong behaviour, as explained here:
>
>   http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html
>
> This change, instead, when deciding which PCPUs to tickle upon VCPU
> wake-up, considers both what is likely to happen on the PCPU where
> the wakeup occurs and whether there are idle PCPUs on which the
> waking VCPU could run.
> In fact, if there are idlers where the new VCPU can run, we can
> avoid interrupting the running VCPU. OTOH, if there are no such
> PCPUs, preemption and migration are the way to go.
>
> This has been tested by running the following benchmarks inside 2,
> 6 and 10 VMs concurrently, on a shared host, each with 2 VCPUs and
> 960 MB of memory (host has 16 ways and 12 GB RAM).
>
> 1) All VMs had 'cpus="all"' in their config file.
>
> $ sysbench --test=cpu ... (time, lower is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 50.078467 +/- 1.6676162 | 49.704933 +/- 0.0277184 |
>   | 6   | 63.259472 +/- 0.1137586 | 62.227367 +/- 0.3880619 |
>   | 10  | 91.246797 +/- 0.1154008 | 91.174820 +/- 0.0928781 |
> $ sysbench --test=memory ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 485.56333 +/- 6.0527356 | 525.57833 +/- 25.085826 |
>   | 6   | 401.36278 +/- 1.9745916 | 421.96111 +/- 9.0364048 |
>   | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
> $ specjbb2005 ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 43150.63 +/- 1359.5616  | 42720.632 +/- 1937.4488 |
>   | 6   | 29274.29 +/- 1024.4042  | 29518.171 +/- 1014.5239 |
>   | 10  | 19061.28 +/- 512.88561  | 19050.141 +/- 458.77327 |
>
>
> 2) All VMs had their VCPUs statically pinned to the host's PCPUs.
>
> $ sysbench --test=cpu ... (time, lower is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
>   | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
>   | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
> $ sysbench --test=memory ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
>   | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
>   | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
> $ specjbb2005 ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 49591.057 +/- 952.93384 | 49610.98  +/- 1242.1675 |
>   | 6   | 33538.247 +/- 1089.2115 | 33682.222 +/- 1216.1078 |
>   | 10  | 21927.870 +/- 831.88742 | 21801.138 +/- 561.97068 |
>
>
> The numbers show that the change either has no or very limited impact
> (the specjbb2005 case) or, where it does have an impact, it is an
> actual improvement in performance, especially in the
> sysbench-memory case.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

So I think the principle is good, but with the resulting set of "if"
statements it's hard to figure out what's going on.

What do you think about re-arranging things, something like the attached?

In this particular version I got rid of the stats, because they require
if() statements that break up the flow.  If we really think they're
useful, maybe we could have a separate block somewhere for them?

We could actually do without idlers_empty entirely: if we just
remove the condition from the "else" block, the "right thing" will
happen; however, that means several unnecessary cpumask operations on a
busy system.

Thoughts?

  -George


--------------000604080007020604090108
Content-Type: text/plain; charset="UTF-8";
	name="xen_sched_credit_improve_tickling_of_idle_cpus"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="xen_sched_credit_improve_tickling_of_idle_cpus"

xen: sched_credit, improve tickling of idle CPUs

RFC: Re-organized ifs

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -249,54 +249,53 @@ static inline void
     struct csched_vcpu * const cur =
         CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
-    cpumask_t mask;
+    cpumask_t mask, idle_mask;
+    int idlers_empty;
 
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    /* If strictly higher priority than current VCPU, signal the CPU */
-    if ( new->pri > cur->pri )
+    idlers_empty = cpumask_empty(prv->idlers);
+    /*
+     * If the pcpu is idle, or there are no idlers and the new
+     * vcpu is a higher priority than the old vcpu, run it here.
+     *
+     * If there are idle cpus, first try to find one suitable to run
+     * "new", so we can avoid preempting cur.  If we cannot find a
+     * suitable idler on which to run "new", run it here, but try to
+     * find a suitable idler on which to run "cur" instead.
+     */
+    if ( cur->pri == CSCHED_PRI_IDLE
+         || (idlers_empty && new->pri > cur->pri) )
     {
-        if ( cur->pri == CSCHED_PRI_IDLE )
-            SCHED_STAT_CRANK(tickle_local_idler);
-        else if ( cur->pri == CSCHED_PRI_TS_OVER )
-            SCHED_STAT_CRANK(tickle_local_over);
-        else if ( cur->pri == CSCHED_PRI_TS_UNDER )
-            SCHED_STAT_CRANK(tickle_local_under);
-        else
-            SCHED_STAT_CRANK(tickle_local_other);
-
         cpumask_set_cpu(cpu, &mask);
     }
+    else if (!idlers_empty)
+    {
+        /* Check whether or not there are idlers that can run new */
+        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
 
-    /*
-     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
-     * let them know there is runnable work in the system...
-     */
-    if ( cur->pri > CSCHED_PRI_IDLE )
-    {
-        if ( cpumask_empty(prv->idlers) )
+        /* If there are no suitable idlers for new, and it's higher
+         * priority than cur, wake up the current cpu, but also
+         * look for idlers suitable for cur. */
+        if (cpumask_empty(&idle_mask) && new->pri > cur->pri)
         {
-            SCHED_STAT_CRANK(tickle_idlers_none);
+            cpumask_set_cpu(cpu, &mask);
+            cpumask_and(&idle_mask, prv->idlers, cur->vcpu->cpu_affinity);
         }
-        else
+
+        /* Which of the idlers shall we wake up? */
+        if ( !cpumask_empty(&idle_mask) )
         {
-            cpumask_t idle_mask;
-
-            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
-            if ( !cpumask_empty(&idle_mask) )
+            SCHED_STAT_CRANK(tickle_idlers_some);
+            if ( opt_tickle_one_idle )
             {
-                SCHED_STAT_CRANK(tickle_idlers_some);
-                if ( opt_tickle_one_idle )
-                {
-                    this_cpu(last_tickle_cpu) = 
-                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
-                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
-                }
-                else
-                    cpumask_or(&mask, &mask, &idle_mask);
+                this_cpu(last_tickle_cpu) = 
+                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
+                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
             }
-            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
+            else
+                cpumask_or(&mask, &mask, &idle_mask);
         }
     }
 

--------------000604080007020604090108
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------000604080007020604090108--



From xen-devel-bounces@lists.xen.org Wed Dec 05 12:32:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgE8k-00060v-Na; Wed, 05 Dec 2012 12:31:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgE8j-00060m-6A
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:31:49 +0000
Received: from [85.158.143.99:22024] by server-2.bemta-4.messagelabs.com id
	5F/BC-30861-4BE3FB05; Wed, 05 Dec 2012 12:31:48 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354710706!18530099!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30747 invoked from network); 5 Dec 2012 12:31:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:31:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216464106"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:31:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 07:31:46 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgE8f-000232-Hd;
	Wed, 05 Dec 2012 12:31:45 +0000
Message-ID: <50BF3D52.9050403@eu.citrix.com>
Date: Wed, 5 Dec 2012 12:25:54 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>	<50BE4AC2.9010507@eu.citrix.com>
	<1354708472.21632.21.camel@Abyss>
	<1354708869.15296.173.camel@zakaz.uk.xensource.com>
	<1354709754.21632.31.camel@Abyss>
	<1354710048.15296.175.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354710048.15296.175.camel@zakaz.uk.xensource.com>
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	xen-devel <xen-devel@lists.xensource.com>,
	Dario Faggioli <raistlin@linux.it>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 12:20, Ian Campbell wrote:
> On Wed, 2012-12-05 at 12:15 +0000, Dario Faggioli wrote:
>> On Wed, 2012-12-05 at 12:01 +0000, Ian Campbell wrote:
>>>> As I tried to explain in the comment, I just wanted to avoid checking
>>>> for !tb_init_done more than once, as this happens within a loop and, at
>>>> least potentially, there may be more CPUs to tickle (and thus more calls
>>>> to TRACE_1D).
>>> If tb_init_done isn't marked volatile or anything like that isn't the
>>> check hoisted out of the loop by the compiler?
>>>
>> Good point. As they're all macros, yes, I think that is something very
>> likely to happen... Although, I haven't checked the generated code, I'll
>> take a look. Thanks.
>>
>>>> I take this comment of yours as you not thinking that is
>>>> something worthwhile, right? If so, I can definitely turn this into a
>>>> "standard" TRACE_1D() call.
>>> Or maybe consider __TRACE_1D and friends which omit the check?
>>>
>> Mmm... It may well be me, but my
>>
>> $ grep __TRACE xen/* -R
>>
>> does not show any results... What am I missing?
> I meant to define + use those macros.

Well ATM there would be only one user -- and "trace_var(..., 
sizeof(cpu), &cpu);" is probably just as pretty as __TRACE_1D(..., cpu).

I wouldn't oppose such a patch, but I don't think it should be required 
until we want to use "__TRACE_(N>2)D".

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:32:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgE8k-00060v-Na; Wed, 05 Dec 2012 12:31:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgE8j-00060m-6A
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:31:49 +0000
Received: from [85.158.143.99:22024] by server-2.bemta-4.messagelabs.com id
	5F/BC-30861-4BE3FB05; Wed, 05 Dec 2012 12:31:48 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354710706!18530099!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30747 invoked from network); 5 Dec 2012 12:31:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:31:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216464106"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 12:31:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 07:31:46 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgE8f-000232-Hd;
	Wed, 05 Dec 2012 12:31:45 +0000
Message-ID: <50BF3D52.9050403@eu.citrix.com>
Date: Wed, 5 Dec 2012 12:25:54 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>	<50BE4AC2.9010507@eu.citrix.com>
	<1354708472.21632.21.camel@Abyss>
	<1354708869.15296.173.camel@zakaz.uk.xensource.com>
	<1354709754.21632.31.camel@Abyss>
	<1354710048.15296.175.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354710048.15296.175.camel@zakaz.uk.xensource.com>
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	xen-devel <xen-devel@lists.xensource.com>,
	Dario Faggioli <raistlin@linux.it>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 12:20, Ian Campbell wrote:
> On Wed, 2012-12-05 at 12:15 +0000, Dario Faggioli wrote:
>> On Wed, 2012-12-05 at 12:01 +0000, Ian Campbell wrote:
>>>> As I tried to explain in the comment, I just wanted to avoid checking
>>>> for !tb_init_done more than once, as this happens within a loop and, at
>>>> least potentially, there may be more CPUs to tickle (and thus more calls
>>>> to TRACE_1D).
>>> If tb_init_done isn't marked volatile or anything like that isn't the
>>> check hoisted out of the loop by the compiler?
>>>
>> Good point. As they're all macros, yes, I think that is something very
>> likely to happen... Although, I haven't checked the generated code, I'll
>> take a look. Thanks.
>>
>>>> I take this comment of yours as you not thinking that is
>>>> something worthwhile, right? If so, I can definitely turn this into a
>>>> "standard" TRACE_1D() call.
>>> Or maybe consider __TRACE_1D and friends which omit the check?
>>>
>> Mmm... It may well be me, but my
>>
>> $ grep __TRACE xen/* -R
>>
>> does not show any results... What am I missing?
> I meant to define + use those macros.

Well ATM there would be only one user -- and "trace_var(..., 
sizeof(cpu), &cpu);" is probably just as pretty as __TRACE_1D(..., cpu).

I wouldn't oppose such a patch, but I don't think it should be required 
until we want to use "__TRACE_(N>2)D".
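
(For anyone following along: a minimal, self-contained sketch of what such unchecked variants could look like, with trace_var() stubbed out to a counter. The __TRACE_1D macro is hypothetical -- it does not exist in Xen, which is the point of the thread -- and TRACE_1D here is only modelled on the real one.)

```c
#include <assert.h>
#include <stdint.h>

static int tb_init_done;     /* tracing-enabled flag, as in Xen */
static int records;          /* count of emitted records (test aid) */

/* Stand-in for Xen's trace_var(): here it just counts emissions. */
static void trace_var(uint32_t event, int cycles, int len, const void *data)
{
    (void)event; (void)cycles; (void)len; (void)data;
    records++;
}

/* Checked variant, modelled on Xen's TRACE_1D: tests the flag per call. */
#define TRACE_1D(ev, d1)                            \
    do {                                            \
        if ( tb_init_done )                         \
        {                                           \
            uint32_t _d = (d1);                     \
            trace_var((ev), 1, sizeof(_d), &_d);    \
        }                                           \
    } while ( 0 )

/* Hypothetical unchecked variant: the caller hoists the flag test. */
#define __TRACE_1D(ev, d1)                          \
    do {                                            \
        uint32_t _d = (d1);                         \
        trace_var((ev), 1, sizeof(_d), &_d);        \
    } while ( 0 )

static int run(void)
{
    records = 0;

    tb_init_done = 0;
    TRACE_1D(0x1, 42);              /* suppressed: flag is clear */
    if ( tb_init_done )             /* one hoisted check ...      */
        for ( int cpu = 0; cpu < 4; cpu++ )
            __TRACE_1D(0x1, cpu);   /* ... covers the whole loop  */

    tb_init_done = 1;
    if ( tb_init_done )
        for ( int cpu = 0; cpu < 4; cpu++ )
            __TRACE_1D(0x1, cpu);   /* emits four records */

    return records;
}
```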

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:39:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEFd-0006Eh-Se; Wed, 05 Dec 2012 12:38:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TgEFc-0006Ec-KO
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:38:56 +0000
Received: from [193.109.254.147:31312] by server-11.bemta-14.messagelabs.com
	id 1D/3D-29027-F504FB05; Wed, 05 Dec 2012 12:38:55 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354711115!6359067!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2641 invoked from network); 5 Dec 2012 12:38:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:38:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216464815"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	05 Dec 2012 12:38:35 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Wed, 5 Dec 2012
	07:38:35 -0500
Message-ID: <50BF404A.3000403@citrix.com>
Date: Wed, 5 Dec 2012 12:38:34 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <patchbomb.1354552497@Solace>
	<bf4b06eda19a03a6aa0b.1354552500@Solace>	<50BE4AC2.9010507@eu.citrix.com>
	<1354708472.21632.21.camel@Abyss>
	<1354708869.15296.173.camel@zakaz.uk.xensource.com>
	<1354709754.21632.31.camel@Abyss>
In-Reply-To: <1354709754.21632.31.camel@Abyss>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] [PATCH 3 of 3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 12:15, Dario Faggioli wrote:
> On Wed, 2012-12-05 at 12:01 +0000, Ian Campbell wrote:
>>> As I tried to explain in the comment, I just wanted to avoid checking
>>> for !tb_init_done more than once, as this happens within a loop and, at
>>> least potentially, there may be more CPUs to tickle (and thus more calls
>>> to TRACE_1D).
>> If tb_init_done isn't marked volatile or anything like that isn't the
>> check hoisted out of the loop by the compiler?
>>
> Good point. As they're all macros, yes, I think that is something very
> likely to happen... Although, I haven't checked the generated code, I'll
> take a look. Thanks.
But if there is a call to an opaque function in between -- any function 
whose body the compiler cannot see in the current compilation unit, or 
one where it cannot prove that the global variable is left unmodified 
(e.g. anything using pointers the compiler can't trace) -- then the 
compiler must NOT hoist the read of the global out of the loop; it has 
to re-read it after each such call. So it would be worth checking the 
compiled output.
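
(A minimal sketch of that point, with hypothetical stand-ins for tb_init_done and the tickle loop. When tickle_cpu() lives in another compilation unit the optimizer must assume it may write the flag, so the check in trace_one() cannot be hoisted; here the flag really does flip mid-loop.)

```c
#include <assert.h>

/* Globals standing in for Xen's tb_init_done and a trace buffer. */
static int tb_init_done;
static int traced;

/* Stand-in for the body of TRACE_1D: emit a record iff tracing is on. */
static void trace_one(int cpu)
{
    (void)cpu;
    if ( tb_init_done )     /* the per-iteration check under discussion */
        traced++;
}

/* Compiled separately, this is opaque to the caller's optimizer: it
 * might modify tb_init_done, so the flag must be re-read after every
 * call rather than being hoisted out of the loop. */
static void tickle_cpu(int cpu)
{
    if ( cpu == 1 )
        tb_init_done = 1;   /* tracing switched on mid-loop */
}

static int run(void)
{
    tb_init_done = 0;
    traced = 0;
    for ( int cpu = 0; cpu < 4; cpu++ )
    {
        tickle_cpu(cpu);    /* opaque call between the checks */
        trace_one(cpu);
    }
    return traced;          /* iterations 1..3 see the flag set */
}
```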

--
Mats
>
>>> I take this comment of yours as you not thinking that is
>>> something worthwhile, right? If so, I can definitely turn this into a
>>> "standard" TRACE_1D() call.
>> Or maybe consider __TRACE_1D and friends which omit the check?
>>
> Mmm... It may well be me, but my
>
> $ grep __TRACE xen/* -R
>
> does not show any results... What am I missing?
>
> Dario
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:43:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEKF-0006dB-MB; Wed, 05 Dec 2012 12:43:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgEKD-0006d4-TH
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:43:42 +0000
Received: from [85.158.139.83:59259] by server-12.bemta-5.messagelabs.com id
	0D/DC-02886-D714FB05; Wed, 05 Dec 2012 12:43:41 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354711418!28520668!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30791 invoked from network); 5 Dec 2012 12:43:38 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-3.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 12:43:38 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgEK6-0005yL-Qh; Wed, 05 Dec 2012 12:43:34 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgEJz-0007x3-Qq; Wed, 05 Dec 2012 12:43:34 +0000
Message-ID: <1354711402.15296.188.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Boris Ostrovsky <boris.ostrovsky@amd.com>
Date: Wed, 05 Dec 2012 12:43:22 +0000
In-Reply-To: <50B3FF8A.4050909@amd.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	debian-kernel <debian-kernel@lists.debian.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-11-26 at 23:47 +0000, Boris Ostrovsky wrote:
> 
> On 11/26/2012 09:58 AM, Boris Ostrovsky wrote:
> >
> >
> > On 11/26/2012 09:13 AM, Ian Campbell wrote:
> >> On Mon, 2012-11-26 at 13:44 +0000, Jan Beulich wrote:
> 
> >>>
> >>> The only other thing to check for is that you don't have any
> >>> artificial size restriction left in that code (I think patch files early
> >>> on were limited to 4k in size, and that got lifted during the last
> >>> couple of years).
> >>
> >> I can't find one by inspection, it uses the standard request_firmware
> >> interface and stashes the result in a valloc'd buffer, neither of which
> >> suffer from any 4K related limitations AFAIK.
> >>
> >> I'll try and track something more recent down to test but the worst
> >> downside of applying this patch seems to be that something which doesn't
> >> work still doesn't work.
> >
> > I submitted a fix for fam 16h to Linux right before the Thanksgiving
> > break in US and was planning to look at Xen as well. Give me a day or
> > two to test it.
> 
> It works fine, no issues with size (which is different from other families).

I've just tried this on a fam 15h and I get:

        (XEN) microcode: collect_cpu_info: patch_id=0x6000626
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: CPU0 found a matching microcode update with version 0x6000629 (current=0x6000626)
        (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
        
        (XEN) microcode: collect_cpu_info: patch_id=0x6000629
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: size 5260, block size 2592, offset 2660
        (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base id is 6012) 
        
        (XEN) microcode: collect_cpu_info: patch_id=0x6000626
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: CPU2 found a matching microcode update with version 0x6000629 (current=0x6000626)
        (XEN) microcode: CPU2 updated from revision 0x6000626 to 0x6000629
        
        (XEN) microcode: collect_cpu_info: patch_id=0x6000629
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: size 5260, block size 2592, offset 2660
        (XEN) microcode: CPU3 patch does not match (patch is 6101, cpu base id is 6012) 
        
        (XEN) microcode: collect_cpu_info: patch_id=0x6000626
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: CPU4 found a matching microcode update with version 0x6000629 (current=0x6000626)
        (XEN) microcode: CPU4 updated from revision 0x6000626 to 0x6000629
        
        (XEN) microcode: collect_cpu_info: patch_id=0x6000629
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: size 5260, block size 2592, offset 2660
        (XEN) microcode: CPU5 patch does not match (patch is 6101, cpu base id is 6012) 
        
        (XEN) microcode: collect_cpu_info: patch_id=0x6000626
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: CPU6 found a matching microcode update with version 0x6000629 (current=0x6000626)
        (XEN) microcode: CPU6 updated from revision 0x6000626 to 0x6000629
        
        ....

It seems like it is applying successfully on only the even numbered
cpus. Is this because the odd and even ones share some execution units
and therefore share microcode updates too? IOW update CPU0 also updates
CPU1 under the hood.

If so then we probably want to teach Xen about this, although at least
for now it would mean that the microcode is actually getting applied
despite the messages.

Ian.
-- 
Ian Campbell

I like your game but we have to change the rules.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:47:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgENK-0006lC-9x; Wed, 05 Dec 2012 12:46:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgENJ-0006l3-3B
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:46:53 +0000
Received: from [85.158.138.51:7021] by server-10.bemta-3.messagelabs.com id
	8B/0D-19806-C324FB05; Wed, 05 Dec 2012 12:46:52 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-10.tower-174.messagelabs.com!1354711611!23409279!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28082 invoked from network); 5 Dec 2012 12:46:51 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-10.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 12:46:51 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgENH-00060T-1A; Wed, 05 Dec 2012 12:46:51 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgENA-0007yI-J9; Wed, 05 Dec 2012 12:46:50 +0000
Message-ID: <1354711599.15296.191.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 05 Dec 2012 12:46:39 +0000
In-Reply-To: <50B3805A02000078000AB1B8@nat28.tlf.novell.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-11-26 at 13:44 +0000, Jan Beulich wrote:
> >>> On 26.11.12 at 14:21, Ian Campbell <ijc@hellion.org.uk> wrote:
> > Debian has decided to take Jeremy's microcode patch [0] as an interim
> > measure for their next release. (TL;DR -- Debian is shipping pvops Linux
> > 3.2 and Xen 4.1 in the next release. See http://bugs.debian.org/693053 
> > and https://lists.debian.org/debian-devel/2012/11/msg00141.html for some
> > more background).
> > 
> > However the patch is a bit old and predates the introduction of
> > separate firmware files for AMD family >= 15h. Looking at the SuSE
> > forward ported classic Xen patches it seems like the following patch is
> > all that is required. But it seems a little too simple to be true and I
> > don't have any such processors to test on.
> > 
> > Jan, can you recall if it really is that easy on the kernel side ;-)
> 
> While so far I didn't myself run anything on post-Fam10 systems
> either, it really ought to be that easy - the patch format didn't
> change, it's just that they decided to split the files by family to
> keep them manageable.
> 
> The only other thing to check for is that you don't have any
> artificial size restriction left in that code (I think patch files early
> on were limited to 4k in size, and that got lifted during the last
> couple of years).

I managed to find a machine and try this and it turns out that all that
was missing from the kernel side was:

        @@ -58,7 +58,7 @@
         
         static enum ucode_state xen_request_microcode_fw(int cpu, struct device *device)
         {
        -       char name[30];
        +       char name[36];
                struct cpuinfo_x86 *c = &cpu_data(cpu);
                const struct firmware *firmware;
                struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
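
(To illustrate why the extra six bytes matter: the family-specific AMD
firmware name, "amd-ucode/microcode_amd_fam15h.bin", is 34 characters, so
with the NUL it no longer fits in 30 bytes. A hedged sketch, using the
"%.2xh" pattern the upstream AMD microcode driver uses for its filename;
the helper below is only illustrative.)

```c
#include <stdio.h>
#include <string.h>

/* Build the AMD microcode firmware filename for a given CPU family,
 * mirroring the upstream pattern: families >= 0x15 get a per-family
 * file. Returns the resulting name length. */
static size_t name_len(unsigned int family)
{
    char name[36];  /* 30 was enough only for the family-less name */

    if ( family >= 0x15 )
        snprintf(name, sizeof(name),
                 "amd-ucode/microcode_amd_fam%.2xh.bin", family);
    else
        snprintf(name, sizeof(name), "amd-ucode/microcode_amd.bin");

    return strlen(name);
}
```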

> The hypervisor is really going to take care of all other aspects
> here.

There may be some other issue here (I replied to Boris about it) but it
does seem like the kernel side is now correct.

Ian.


-- 
Ian Campbell

Friction is a drag.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:47:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgENK-0006lC-9x; Wed, 05 Dec 2012 12:46:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgENJ-0006l3-3B
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:46:53 +0000
Received: from [85.158.138.51:7021] by server-10.bemta-3.messagelabs.com id
	8B/0D-19806-C324FB05; Wed, 05 Dec 2012 12:46:52 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-10.tower-174.messagelabs.com!1354711611!23409279!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28082 invoked from network); 5 Dec 2012 12:46:51 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-10.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 12:46:51 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgENH-00060T-1A; Wed, 05 Dec 2012 12:46:51 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgENA-0007yI-J9; Wed, 05 Dec 2012 12:46:50 +0000
Message-ID: <1354711599.15296.191.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 05 Dec 2012 12:46:39 +0000
In-Reply-To: <50B3805A02000078000AB1B8@nat28.tlf.novell.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-11-26 at 13:44 +0000, Jan Beulich wrote:
> >>> On 26.11.12 at 14:21, Ian Campbell <ijc@hellion.org.uk> wrote:
> > Debian has decided to take Jeremy's microcode patch [0] as an interim
> > measure for their next release. (TL;DR -- Debian is shipping pvops Linux
> > 3.2 and Xen 4.1 in the next release. See http://bugs.debian.org/693053 
> > and https://lists.debian.org/debian-devel/2012/11/msg00141.html for some
> > more background).
> > 
> > However the patch is a bit old and predates the introduction of
> > separate firmware files for AMD family >= 15h. Looking at the SuSE
> > forward ported classic Xen patches it seems like the following patch is
> > all that is required. But it seems a little too simple to be true and I
> > don't have any such processors to test on.
> > 
> > Jan, can you recall if it really is that easy on the kernel side ;-)
> 
> While so far I didn't myself run anything on post-Fam10 systems
> either, it really ought to be that easy - the patch format didn't
> change, it's just that they decided to split the files by family to
> keep them manageable.
> 
> The only other thing to check for is that you don't have any
> artificial size restriction left in that code (I think patch files early
> on were limited to 4k in size, and that got lifted during the last
> couple of years).

I managed to find a machine and try this, and it turns out that all that
was missing from the kernel side was:

        @@ -58,7 +58,7 @@
         
         static enum ucode_state xen_request_microcode_fw(int cpu, struct device *device)
         {
        -       char name[30];
        +       char name[36];
                struct cpuinfo_x86 *c = &cpu_data(cpu);
                const struct firmware *firmware;
                struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

> The hypervisor is really going to take care of all other aspects
> here.

There may be some other issue here (I replied to Boris about it), but it
does seem like the kernel side is now correct.

Ian.


-- 
Ian Campbell

Friction is a drag.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:52:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgESn-00076l-9Y; Wed, 05 Dec 2012 12:52:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgESm-00076e-18
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 12:52:32 +0000
Received: from [85.158.138.51:54176] by server-9.bemta-3.messagelabs.com id
	DD/EE-02388-F834FB05; Wed, 05 Dec 2012 12:52:31 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-10.tower-174.messagelabs.com!1354711949!23410429!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32074 invoked from network); 5 Dec 2012 12:52:29 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-10.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 12:52:29 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgESh-00063b-Nw; Wed, 05 Dec 2012 12:52:27 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgESa-0007zy-SK; Wed, 05 Dec 2012 12:52:27 +0000
Message-ID: <1354711935.15296.195.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
Date: Wed, 05 Dec 2012 12:52:15 +0000
In-Reply-To: <575e649ad4dc61a37118.1354710064@fungus>
References: <575e649ad4dc61a37118.1354710064@fungus>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] Fix the build of the ocaml libraries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 12:21 +0000, Jon Ludlam wrote:
> These were previously capturing the absolute path of the build, causing
> linking against the libraries to fail unless you still have the build
> directory around.

Good catch, thanks.

> Signed-off-by: Jon Ludlam <jonathan.ludlam@eu.citrix.com>
> 
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/Makefile.rules
> --- a/tools/ocaml/Makefile.rules
> +++ b/tools/ocaml/Makefile.rules
> @@ -58,14 +58,8 @@
>  
>  # define a library target <name>.cmxa and <name>.cma

This no longer produces a foo.cmxa; is that deliberate?

Likewise the foo_stubs.a and libfoo_stubs.a.

>  define OCAML_LIBRARY_template
> - $(1).cmxa: lib$(1)_stubs.a $(foreach obj,$($(1)_OBJS),$(obj).cmx)
> -	$(call mk-caml-lib-native,$$@, -cclib -l$(1)_stubs $(foreach lib,$(LIBS_$(1)),-cclib $(lib)), $(foreach obj,$($(1)_OBJS),$(obj).cmx))
> - $(1).cma: $(foreach obj,$($(1)_OBJS),$(obj).cmo)
> -	$(call mk-caml-lib-bytecode,$$@, -dllib dll$(1)_stubs.so -cclib -l$(1)_stubs, $$+)
> - $(1)_stubs.a: $(foreach obj,$$($(1)_C_OBJS),$(obj).o)
> -	$(call mk-caml-stubs,$$@, $$+)
> - lib$(1)_stubs.a: $(foreach obj,$($(1)_C_OBJS),$(obj).o)
> -	$(call mk-caml-lib-stubs,$$@, $$+)
> +  $(1).cma: $(foreach obj,$($(1)_OBJS),$(obj).cmx $(obj).cmo) $(foreach obj,$($(1)_C_OBJS),$(obj).o)
> +	$(OCAMLMKLIB) -o $1 -oc $(1)_stubs $(foreach obj,$($(1)_OBJS),$(obj).cmx $(obj).cmo) $(foreach obj,$($(1)_C_OBJS),$(obj).o) $(foreach lib, $(LIBS_$(1)_SYSTEM), -cclib $(lib)) $(foreach arg,$(LIBS_$(1)),-ldopt $(arg))

Can this change be made part of mk-caml-lib-bytecode?

I don't much like it, but the existing caml build stuff uses that
quiet-command machinery, so I think this new command will end up standing
out like a sore thumb. If you want to fix this by nuking all that stuff
then that is fine by me ;-)

>  endef
>  
>  define OCAML_NOC_LIBRARY_template
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/libs/eventchn/Makefile
> --- a/tools/ocaml/libs/eventchn/Makefile
> +++ b/tools/ocaml/libs/eventchn/Makefile
> @@ -9,6 +9,7 @@
>  LIBS = xeneventchn.cma xeneventchn.cmxa
>  
>  LIBS_xeneventchn = $(LDLIBS_libxenctrl)
> +LIBS_xeneventchn_SYSTEM = -lxenctrl
>  
>  all: $(INTF) $(LIBS) $(PROGRAMS)
>  
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/libs/xc/Makefile
> --- a/tools/ocaml/libs/xc/Makefile
> +++ b/tools/ocaml/libs/xc/Makefile
> @@ -10,6 +10,7 @@
>  LIBS = xenctrl.cma xenctrl.cmxa
>  
>  LIBS_xenctrl = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
> +LIBS_xenctrl_SYSTEM = -lxenctrl -lxenguest
>  
>  xenctrl_OBJS = $(OBJS)
>  xenctrl_C_OBJS = xenctrl_stubs
> diff -r 29247e44df47 -r 575e649ad4dc tools/ocaml/xenstored/Makefile
> --- a/tools/ocaml/xenstored/Makefile
> +++ b/tools/ocaml/xenstored/Makefile
> @@ -36,7 +36,9 @@
>  	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/eventchn $(OCAML_TOPLEVEL)/libs/eventchn/xeneventchn.cmxa \
>  	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xc $(OCAML_TOPLEVEL)/libs/xc/xenctrl.cmxa \
>  	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xb $(OCAML_TOPLEVEL)/libs/xb/xenbus.cmxa \
> -	-ccopt -L -ccopt $(XEN_ROOT)/tools/libxc
> +	-ccopt -L -ccopt $(XEN_ROOT)/tools/libxc \
> +	$(foreach obj, $(LDLIBS_libxenctrl), -ccopt $(obj)) \
> +	$(foreach obj, $(LDLIBS_libxenguest), -ccopt $(obj)) 
>  
>  PROGRAMS = oxenstored
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Ian Campbell



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 12:59:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEZ1-0007Ge-5z; Wed, 05 Dec 2012 12:58:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TgEYz-0007GZ-NM
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:58:58 +0000
Received: from [85.158.143.99:48850] by server-3.bemta-4.messagelabs.com id
	F9/7F-06841-0154FB05; Wed, 05 Dec 2012 12:58:56 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354712334!28145498!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3942 invoked from network); 5 Dec 2012 12:58:55 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:58:55 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so9156843iej.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 04:58:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=x8a4w1rIfL9eyiOW74xqLEv7aNu+gF0foWyMnzZQlCo=;
	b=Boe4Yx/pRbHJy7XXVO3hYePjRUWK6XFdnmX+5jKxLeI6E7Zj/RHfRorqzwMlmUr5bI
	7vIiQ6hrPt+Ar/W+qCyk4BFgYCZG1Mskcj6p/gtmEDC13+d5v0pQzEbBl+hYGlIR1EW4
	f7Xw7pRnNsVBHYWmHqlUUFr5U54kGmAmRjasPyEWWR6nr3770wREM1/q05/xwAzRShEY
	VzpfAAwIysCdHQz7GCEfly738GPNVJnCfRGkXYtWKSJxEayNHrOBQydRV9CfnFM3qjqW
	xuDYC3iI1XAZABLQWKd1QJoYP/iDTw+52gnW4bSbdaRgkp8zqSKuplzNpLuhTu87WXFr
	XZUA==
MIME-Version: 1.0
Received: by 10.42.179.8 with SMTP id bo8mr14288466icb.0.1354712333999; Wed,
	05 Dec 2012 04:58:53 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Wed, 5 Dec 2012 04:58:53 -0800 (PST)
In-Reply-To: <CAKhsbWYoCfWAOsBsxbpScYRHD1h5o-WOK2Hd3yj0XhC5eGvC3Q@mail.gmail.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<CAKhsbWYoCfWAOsBsxbpScYRHD1h5o-WOK2Hd3yj0XhC5eGvC3Q@mail.gmail.com>
Date: Wed, 5 Dec 2012 20:58:53 +0800
X-Google-Sender-Auth: td4G4qmoGAGtVb2Pxy9B_mtbZ8A
Message-ID: <CAKhsbWZsVhLg1-b72T8ojaJu8o7cJW3jRubku8ygfHbkzeyMJA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8044711677056468962=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8044711677056468962==
Content-Type: multipart/alternative; boundary=90e6ba6e8e2070dd2804d01a8bec

--90e6ba6e8e2070dd2804d01a8bec
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 4, 2012 at 11:20 PM, G.R. <firemeteor@users.sourceforge.net>wro=
te:

> On Tue, Dec 4, 2012 at 7:01 PM, Pasi K=E4rkk=E4inen <pasik@iki.fi> wrote:
>
>> On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> > >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net>
>> wrote:
>> > >>  I had a quick look, and it doesn't look that hard to backport that
>> patch.
>> > >
>> > > Thanks, Mat.
>> > > I'm glad to report that the patch does fix my problem.
>> > >
>> > > And yes, it is really easy to port since the code did not change
>> across the
>> > > two releases.
>> > > The only change would be line numbers (3841 vs 3803) and one extra
>> comment
>> > > before this line:
>> > >
>> > >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> > >
>> > > I'm not sure if you are going to release another maintenance version
>> that
>> > > includes this patch,
>> > > but I'll report this to the Debian maintainer since it's about to freeze
>> for
>> > > the v7.0 release and v4.2.0 will not make it.
>> >
>> > Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> > out?
>> >
>>
>> It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
>> so I'd say it should be a candidate for Xen 4.1.4.
>>
>>
> Hi, it seems that the patch has some side effect on pure HVM guests.
> For an openelec 2.0 guest, which is based on Linux 3.2.x with pvops disabled,
> I see this syndrome in qemu-dm-xxx.log:
>
> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
> support??
> pt_pci_write_config: Internal error: Invalid write emulation return
> value[-1]. I/O emulator exit.
>
> The guest dies immediately after this log, so I have no way to check the
> guest kernel log.
> Without the patch, this guest can boot without an obvious error log, even
> though the VGA passthrough does not quite work.
> I'll check the code to see what these logs mean...
>

I did some analysis and it really looks like a bug to me.
Since this is a patch back-ported from 4.2.0, I would like to ask whether
there is any follow-up patch that fixes this issue.
Please see my analysis below:

Here is part of the qemu-dm log, with debug logging enabled at compile time:

dm-command: hot insert pass-through pci dev
register_real_device: Assigning real physical device 00:1b.0 ...
pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No
such file or directory: 0x0:0x1b.0x0
pt_register_regions: IO region registered (size=3D0x00004000
base_addr=3D0xf7c10004)
pt_msi_setup: msi mapped with pirq 36
pci_intx: intx=3D1
register_real_device: Real physical device 00:1b.0 registered successfuly!
IRQ type =3D MSI-INTx
...
pt_pci_read_config: [00:06.0]: address=3D0000 val=3D0x0000*8086* len=3D2
pt_pci_read_config: [00:06.0]: address=3D0002 val=3D0x0000*1e20* len=3D2
...
*pt_pci_write_config: [00:06.0]: address=3D0068* val=3D0x00000000 len=3D4
...
pt_pci_write_config: [00:06.0]: address=3D0062 val=3D0x00000081 len=3D2
*pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation*
pci_intx: intx=3D1
*pt_msi_disable: Unmap msi with pirq 36*
pt_msgctrl_reg_write: setup msi for dev 30
pt_msi_setup: msi mapped with pirq 36
pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
pt_pci_read_config: [00:06.0]: address=3D0062 val=3D0x00000081 len=3D2
pt_pci_write_config: [00:06.0]: address=3D0062 val=3D0x00000081 len=3D2
pt_pci_write_config: [00:06.0]: address=3D0064 val=3D0xfee0300c len=3D4
*pt_pci_write_config: [00:06.0]: address=3D0068* val=3D0x00000000 len=3D4
pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
support??
pt_pci_write_config: Internal error: Invalid write emulation return
value[-1]. I/O emulator exit.


Here the device in question should be the audio controller, 00:1b.0 in the
host, which is 64-bit capable:
00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family
High Definition Audio Controller (rev 04)
    Capabilities: [60] MSI: Enable+ Count=3D1/1 Maskable- 64bit+
        Address: 00000000fee00378  Data: 0000

There is also an earlier, successful write to offset 0x68 above, while the
second write aborts the I/O emulator.

pt_disable_msi_translate is called after the pt_msgctrl_reg_write log above=
:
            PT_LOG("guest enabling MSI-X, disable MSI-INTx translation\n");
            pt_disable_msi_translate(ptdev);


The patch added a pt_msi_disable() call to this function, and the
pt_msi_disable function has these lines:
out:
    /* clear msi info */
    dev->msi->flags =3D 0;
    dev->msi->pirq =3D -1;
    dev->msi_trans_en =3D 0;

As a result, the flags are cleared -- this is new to the patch.
And I believe this change caused the failure in pt_msgaddr64_write():

3882     /* check whether the type is 64 bit or not */
3883     if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
3884     {
3885         /* exit I/O emulator */
3886         PT_LOG("Error: why comes to Upper Address without 64 bit
support??\n");
3887         return -1;
3888     }


I only see the flags being set up in the pt_msgctrl_reg_init() function. I
guess it is not called this time.

Thanks,
Timothy

--90e6ba6e8e2070dd2804d01a8bec--


--===============8044711677056468962==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8044711677056468962==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 12:59:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 12:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEZ1-0007Ge-5z; Wed, 05 Dec 2012 12:58:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TgEYz-0007GZ-NM
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 12:58:58 +0000
Received: from [85.158.143.99:48850] by server-3.bemta-4.messagelabs.com id
	F9/7F-06841-0154FB05; Wed, 05 Dec 2012 12:58:56 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354712334!28145498!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3942 invoked from network); 5 Dec 2012 12:58:55 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 12:58:55 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so9156843iej.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 04:58:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=x8a4w1rIfL9eyiOW74xqLEv7aNu+gF0foWyMnzZQlCo=;
	b=Boe4Yx/pRbHJy7XXVO3hYePjRUWK6XFdnmX+5jKxLeI6E7Zj/RHfRorqzwMlmUr5bI
	7vIiQ6hrPt+Ar/W+qCyk4BFgYCZG1Mskcj6p/gtmEDC13+d5v0pQzEbBl+hYGlIR1EW4
	f7Xw7pRnNsVBHYWmHqlUUFr5U54kGmAmRjasPyEWWR6nr3770wREM1/q05/xwAzRShEY
	VzpfAAwIysCdHQz7GCEfly738GPNVJnCfRGkXYtWKSJxEayNHrOBQydRV9CfnFM3qjqW
	xuDYC3iI1XAZABLQWKd1QJoYP/iDTw+52gnW4bSbdaRgkp8zqSKuplzNpLuhTu87WXFr
	XZUA==
MIME-Version: 1.0
Received: by 10.42.179.8 with SMTP id bo8mr14288466icb.0.1354712333999; Wed,
	05 Dec 2012 04:58:53 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Wed, 5 Dec 2012 04:58:53 -0800 (PST)
In-Reply-To: <CAKhsbWYoCfWAOsBsxbpScYRHD1h5o-WOK2Hd3yj0XhC5eGvC3Q@mail.gmail.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<CAKhsbWYoCfWAOsBsxbpScYRHD1h5o-WOK2Hd3yj0XhC5eGvC3Q@mail.gmail.com>
Date: Wed, 5 Dec 2012 20:58:53 +0800
X-Google-Sender-Auth: td4G4qmoGAGtVb2Pxy9B_mtbZ8A
Message-ID: <CAKhsbWZsVhLg1-b72T8ojaJu8o7cJW3jRubku8ygfHbkzeyMJA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8044711677056468962=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8044711677056468962==
Content-Type: multipart/alternative; boundary=90e6ba6e8e2070dd2804d01a8bec

--90e6ba6e8e2070dd2804d01a8bec
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On Tue, Dec 4, 2012 at 11:20 PM, G.R. <firemeteor@users.sourceforge.net> wrote:

>> On Tue, Dec 4, 2012 at 7:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>
>> On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> > >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net>
>> > >>> wrote:
>> > >>  I had a quick look, and it doesn't look that hard to backport
>> > >> that patch.
>> > >
>> > > Thanks, Mats.
>> > > I'm glad to report that the patch does fix my problem.
>> > >
>> > > And yes, it is really easy to port since the code did not change
>> > > across the two releases.
>> > > The only change is the line numbers (3841 vs 3803) and one extra
>> > > comment before this line:
>> > >
>> > >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> > >
>> > > I'm not sure if you are going to release another maintenance version
>> > > that includes this patch,
>> > > but I'll report this to the Debian maintainers since it's about to
>> > > freeze for the v7.0 release and v4.2.0 will not make it.
>> >
>> > Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> > out?
>> >
>>
>> It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
>> so I'd say it should be a candidate for Xen 4.1.4.
>>
>>
> Hi, it seems that the patch has a side effect on pure HVM guests.
> For an OpenELEC 2.0 guest, which is based on Linux 3.2.x with PVOPS
> disabled, I see the following symptom in qemu-dm-xxx.log:
>
> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
> support??
> pt_pci_write_config: Internal error: Invalid write emulation return
> value[-1]. I/O emulator exit.
>
> The guest dies immediately after this log, so I have no way to check the
> guest kernel log.
> Without the patch, this guest boots without any obvious error log, even
> though the VGA passthrough does not quite work.
> I'll check the code to see what these log lines mean...
>

I did some analysis and it really looks like a bug to me.
Since this patch was back-ported from 4.2.0, is there any follow-up patch
that would fix this issue?
Please see my analysis below:

Here is part of the qemu-dm log, with debug log enabled at compile time:

dm-command: hot insert pass-through pci dev
register_real_device: Assigning real physical device 00:1b.0 ...
pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No
such file or directory: 0x0:0x1b.0x0
pt_register_regions: IO region registered (size=0x00004000
base_addr=0xf7c10004)
pt_msi_setup: msi mapped with pirq 36
pci_intx: intx=1
register_real_device: Real physical device 00:1b.0 registered successfuly!
IRQ type = MSI-INTx
...
pt_pci_read_config: [00:06.0]: address=0000 val=0x0000*8086* len=2
pt_pci_read_config: [00:06.0]: address=0002 val=0x0000*1e20* len=2
...
*pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
...
pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
*pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation*
pci_intx: intx=1
*pt_msi_disable: Unmap msi with pirq 36*
pt_msgctrl_reg_write: setup msi for dev 30
pt_msi_setup: msi mapped with pirq 36
pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
pt_pci_read_config: [00:06.0]: address=0062 val=0x00000081 len=2
pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
pt_pci_write_config: [00:06.0]: address=0064 val=0xfee0300c len=4
*pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
support??
pt_pci_write_config: Internal error: Invalid write emulation return
value[-1]. I/O emulator exit.


The device in question here is the audio controller, 00:1b.0 in the host,
which is 64-bit capable:

00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family
High Definition Audio Controller (rev 04)
    Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Address: 00000000fee00378  Data: 0000

There is also a successful write to offset 0x68 earlier in the log, while
the second write to the same offset kills the I/O emulator.

pt_disable_msi_translate is called after the pt_msgctrl_reg_write log above:
            PT_LOG("guest enabling MSI-X, disable MSI-INTx translation\n");
            pt_disable_msi_translate(ptdev);


The patch added a pt_msi_disable() call to this function, and the
pt_msi_disable() function has these lines:
out:
    /* clear msi info */
    dev->msi->flags = 0;
    dev->msi->pirq = -1;
    dev->msi_trans_en = 0;

As a result, the flags are cleared -- this behaviour is new with the patch,
and I believe it caused the failure in pt_msgaddr64_reg_write():

    /* check whether the type is 64 bit or not */
    if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
    {
        /* exit I/O emulator */
        PT_LOG("Error: why comes to Upper Address without 64 bit support??\n");
        return -1;
    }


I only see the flags being set up in the pt_msgctrl_reg_init() function. I
guess it is not called again on this path.

Thanks,
Timothy

--90e6ba6e8e2070dd2804d01a8bec--


--===============8044711677056468962==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8044711677056468962==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 13:10:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:10:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEk7-0007cD-D2; Wed, 05 Dec 2012 13:10:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1TgEk5-0007c8-LC
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:10:25 +0000
Received: from [85.158.137.99:28503] by server-16.bemta-3.messagelabs.com id
	1C/DB-07461-0C74FB05; Wed, 05 Dec 2012 13:10:24 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354713021!18045774!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15666 invoked from network); 5 Dec 2012 13:10:23 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 13:10:23 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so9181923iej.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 05:10:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=1ha9wbArG/UtIeO5yktP2vLWOwcsA3esV0iegL2j9gk=;
	b=dCbSikRHEj497cilSP+IZVCUCA5ZIPOtit2Qx+iHaF9fRH7hWTXIFLyOsG+ZkCNSHN
	sv3uM2Gd9D+kzzB9fVYy2j0Z2eoxsXaf3CHU09vMtzivfERdSPDrghTDG4x8ZLEe7YtW
	bf48aR1XwCppdY5DcNNlQNPbvL/2ZJKX+RCnaxv1ReNhCRpYFwmQHg+yj6vABgzYWrCp
	BoymJeCdb0uIMTkJNhVv7gQZPLMezdxxzZFLCAGEgpteQGdY9XcdSQbopX+5kp8zj5lc
	lhM22krmqYGDmdAX1dP37DmHuMg3m4RaJ7k90XfaUrEpRNOrUqKMozxgtkrNPnMfOMQ9
	5dxg==
MIME-Version: 1.0
Received: by 10.50.236.104 with SMTP id ut8mr1764421igc.20.1354713021309; Wed,
	05 Dec 2012 05:10:21 -0800 (PST)
Received: by 10.231.93.74 with HTTP; Wed, 5 Dec 2012 05:10:21 -0800 (PST)
In-Reply-To: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
Date: Wed, 5 Dec 2012 08:10:21 -0500
X-Google-Sender-Auth: oQruvhJpnk7EMnx8nI4ZoIPcC1Q
Message-ID: <CAOvdn6U0rWX3Azu_rxbxJUv2d+-txCj3+pZeWOtcJmbJs8bDDg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Ian Campbell <ijc@hellion.org.uk>
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3364806421774197164=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3364806421774197164==
Content-Type: multipart/alternative; boundary=14dae93403436863ba04d01ab485

--14dae93403436863ba04d01ab485
Content-Type: text/plain; charset=ISO-8859-1

FWIW, there's a bug in this original implementation. See Konrad's "misc"
tree for the fix:
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=f6c958ff0d00ffbf1cdc8fcf2f2a82f06fbbb5f4

Here is the original thread where I submitted the fix:
http://markmail.org/message/i2dc4vbqrujkwhu7




On Mon, Nov 26, 2012 at 8:21 AM, Ian Campbell <ijc@hellion.org.uk> wrote:

> Debian has decided to take Jeremy's microcode patch [0] as an interim
> measure for their next release. (TL;DR -- Debian is shipping pvops Linux
> 3.2 and Xen 4.1 in the next release. See http://bugs.debian.org/693053
> and https://lists.debian.org/debian-devel/2012/11/msg00141.html for some
> more background).
>
> However the patch is a bit old and predates the introduction of
> separate firmware files for AMD family >= 15h. Looking at the SuSE
> forward ported classic Xen patches it seems like the following patch is
> all that is required. But it seems a little too simple to be true and I
> don't have any such processors to test on.
>
> Jan, can you recall if it really is that easy on the kernel side ;-)
>
> Ian.
>
> [0]
>
> http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=shortlog;h=refs/heads/upstream/microcode
>
> commit 109cf37876567ef346c0ecde8b473e7ad1e74e07
> Author: Ian Campbell <ijc@hellion.org.uk>
> Date:   Mon Nov 26 09:41:02 2012 +0000
>
>     microcode_xen: Add support for AMD family >= 15h
>
>     Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
>
> diff --git a/arch/x86/kernel/microcode_xen.c
> b/arch/x86/kernel/microcode_xen.c
> index 9d2a06b..2b8a78a 100644
> --- a/arch/x86/kernel/microcode_xen.c
> +++ b/arch/x86/kernel/microcode_xen.c
> @@ -74,7 +74,11 @@ static enum ucode_state xen_request_microcode_fw(int
> cpu, struct device *device)
>                 break;
>
>         case X86_VENDOR_AMD:
> -               snprintf(name, sizeof(name),
> "amd-ucode/microcode_amd.bin");
> +               /* Beginning with family 15h AMD uses family-specific
> firmware files. */
> +               if (c->x86 >= 0x15)
> +                       snprintf(name, sizeof(name),
> "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
> +               else
> +                       snprintf(name, sizeof(name),
> "amd-ucode/microcode_amd.bin");
>                 break;
>
>         default:
>
>
> --
> Ian Campbell
> Current Noise: Dew-Scented - Metal Militia
>
> Now KEN and BARBIE are PERMANENTLY ADDICTED to MIND-ALTERING DRUGS ...
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--14dae93403436863ba04d01ab485--


--===============3364806421774197164==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3364806421774197164==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 13:10:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:10:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEk7-0007cD-D2; Wed, 05 Dec 2012 13:10:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1TgEk5-0007c8-LC
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:10:25 +0000
Received: from [85.158.137.99:28503] by server-16.bemta-3.messagelabs.com id
	1C/DB-07461-0C74FB05; Wed, 05 Dec 2012 13:10:24 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354713021!18045774!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15666 invoked from network); 5 Dec 2012 13:10:23 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 13:10:23 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so9181923iej.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 05:10:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=1ha9wbArG/UtIeO5yktP2vLWOwcsA3esV0iegL2j9gk=;
	b=dCbSikRHEj497cilSP+IZVCUCA5ZIPOtit2Qx+iHaF9fRH7hWTXIFLyOsG+ZkCNSHN
	sv3uM2Gd9D+kzzB9fVYy2j0Z2eoxsXaf3CHU09vMtzivfERdSPDrghTDG4x8ZLEe7YtW
	bf48aR1XwCppdY5DcNNlQNPbvL/2ZJKX+RCnaxv1ReNhCRpYFwmQHg+yj6vABgzYWrCp
	BoymJeCdb0uIMTkJNhVv7gQZPLMezdxxzZFLCAGEgpteQGdY9XcdSQbopX+5kp8zj5lc
	lhM22krmqYGDmdAX1dP37DmHuMg3m4RaJ7k90XfaUrEpRNOrUqKMozxgtkrNPnMfOMQ9
	5dxg==
MIME-Version: 1.0
Received: by 10.50.236.104 with SMTP id ut8mr1764421igc.20.1354713021309; Wed,
	05 Dec 2012 05:10:21 -0800 (PST)
Received: by 10.231.93.74 with HTTP; Wed, 5 Dec 2012 05:10:21 -0800 (PST)
In-Reply-To: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
Date: Wed, 5 Dec 2012 08:10:21 -0500
X-Google-Sender-Auth: oQruvhJpnk7EMnx8nI4ZoIPcC1Q
Message-ID: <CAOvdn6U0rWX3Azu_rxbxJUv2d+-txCj3+pZeWOtcJmbJs8bDDg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Ian Campbell <ijc@hellion.org.uk>
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3364806421774197164=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3364806421774197164==
Content-Type: multipart/alternative; boundary=14dae93403436863ba04d01ab485

--14dae93403436863ba04d01ab485
Content-Type: text/plain; charset=ISO-8859-1

FWIW, there's a bug in this original implementation. See Konrad's "misc"
tree for the fix:
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=f6c958ff0d00ffbf1cdc8fcf2f2a82f06fbbb5f4

Here is the original thread where I submitted the fix:
http://markmail.org/message/i2dc4vbqrujkwhu7




On Mon, Nov 26, 2012 at 8:21 AM, Ian Campbell <ijc@hellion.org.uk> wrote:

> Debian has decided to take Jeremy's microcode patch [0] as an interim
> measure for their next release. (TL;DR -- Debian is shipping pvops Linux
> 3.2 and Xen 4.1 in the next release. See http://bugs.debian.org/693053
> and https://lists.debian.org/debian-devel/2012/11/msg00141.html for some
> more background).
>
> However the patch is a bit old and predates the introduction of
> separate firmware files for AMD family >= 15h. Looking at the SuSE
> forward-ported classic Xen patches it seems the following patch is
> all that is required. But it seems a little too simple to be true and I
> don't have any such processors to test on.
>
> Jan, can you recall if it really is that easy on the kernel side ;-)
>
> Ian.
>
> [0]
>
> http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=shortlog;h=refs/heads/upstream/microcode
>
> commit 109cf37876567ef346c0ecde8b473e7ad1e74e07
> Author: Ian Campbell <ijc@hellion.org.uk>
> Date:   Mon Nov 26 09:41:02 2012 +0000
>
>     microcode_xen: Add support for AMD family >= 15h
>
>     Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
>
> diff --git a/arch/x86/kernel/microcode_xen.c
> b/arch/x86/kernel/microcode_xen.c
> index 9d2a06b..2b8a78a 100644
> --- a/arch/x86/kernel/microcode_xen.c
> +++ b/arch/x86/kernel/microcode_xen.c
> @@ -74,7 +74,11 @@ static enum ucode_state xen_request_microcode_fw(int
> cpu, struct device *device)
>                 break;
>
>         case X86_VENDOR_AMD:
> -               snprintf(name, sizeof(name),
> "amd-ucode/microcode_amd.bin");
> +               /* Beginning with family 15h AMD uses family-specific
> firmware files. */
> +               if (c->x86 >= 0x15)
> +                       snprintf(name, sizeof(name),
> "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
> +               else
> +                       snprintf(name, sizeof(name),
> "amd-ucode/microcode_amd.bin");
>                 break;
>
>         default:
>
>
> --
> Ian Campbell
> Current Noise: Dew-Scented - Metal Militia
>
> Now KEN and BARBIE are PERMANENTLY ADDICTED to MIND-ALTERING DRUGS ...
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
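A minimal standalone sketch of the file-name selection in the quoted patch; the `amd_ucode_name()` helper is hypothetical, introduced here only to illustrate the logic:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper mirroring the quoted patch: from family 15h
 * onwards AMD ships per-family firmware files, so the requested
 * file name must include the family number. */
static void amd_ucode_name(char *name, size_t len, unsigned int family)
{
    if (family >= 0x15)
        snprintf(name, len, "amd-ucode/microcode_amd_fam%.2xh.bin", family);
    else
        snprintf(name, len, "amd-ucode/microcode_amd.bin");
}
```

For family 0x15 this yields "amd-ucode/microcode_amd_fam15h.bin"; older families keep the single shared firmware file.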

--14dae93403436863ba04d01ab485--


--===============3364806421774197164==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3364806421774197164==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElS-0007qz-1i; Wed, 05 Dec 2012 13:11:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElR-0007qq-EQ
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:11:49 +0000
Received: from [85.158.143.35:14709] by server-2.bemta-4.messagelabs.com id
	55/B4-30861-4184FB05; Wed, 05 Dec 2012 13:11:48 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1354713107!16186658!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTkxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15393 invoked from network); 5 Dec 2012 13:11:48 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 13:11:48 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 05:11:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="257618091"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 05:11:45 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:03 +0800
Message-Id: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 00/11] nested vmx: bug fixes and feature
	enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series of patches contains bug fixes and feature enabling for
nested vmx; please help review and pull.

The following patches do not affect Xen-on-Xen functionality, so there
is no particular need to backport them to 4.2.x (my own opinion).

Changes from v1 to v2:
 - Use literal names instead of hard-coded numbers to expose the default-1 settings in VMX-related MSRs.
 - For the TRUE VMX MSRs, use the same values as the normal VMX MSRs.
 - Fix a coding style issue.

Thanks,
Dongxiao

Dongxiao Xu (11):
  nested vmx: emulate MSR bitmaps
  nested vmx: use literal name instead of hard numbers
  nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
  nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
  nested vmx: fix interrupt delivery to L2 guest
  nested vmx: check host ability when intercept MSR read

 xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |  115 ++++++++++++++++++++++++++++++------
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |   10 +++
 7 files changed, 145 insertions(+), 24 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElU-0007rK-ET; Wed, 05 Dec 2012 13:11:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElS-0007r3-MT
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:11:50 +0000
Received: from [85.158.143.35:14751] by server-1.bemta-4.messagelabs.com id
	50/9D-27934-5184FB05; Wed, 05 Dec 2012 13:11:49 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1354713107!16186658!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTkxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15453 invoked from network); 5 Dec 2012 13:11:49 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 13:11:49 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 05:11:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="259358196"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 05 Dec 2012 05:11:48 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:05 +0800
Message-Id: <1354712534-31338-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 02/11] nested vmx: use literal name instead
	of hard numbers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the default-1 settings in the VMX MSRs, use literal names
instead of hard-coded numbers in the code.

Also fix the default-1 setting for the pin-based control MSR.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   15 ++++++---------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    9 +++++++++
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..4d0f26b 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1318,9 +1318,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        tmp = VMX_PINBASED_CTLS_DEFAULT1;
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
         /* 1-seetings */
@@ -1342,8 +1341,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
         break;
@@ -1356,8 +1354,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
         /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1367,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+        /* 1-seetings */
+        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 067fbe4..dce2cd8 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -36,6 +36,15 @@ struct nestedvmx {
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
 
+/* bit 1, 2, 4 must be 1 */
+#define VMX_PINBASED_CTLS_DEFAULT1	0x16
+/* bit 1, 4-6,8,13-16,26 must be 1 */
+#define VMX_PROCBASED_CTLS_DEFAULT1	0x401e172
+/* bit 0-8, 10,11,13,14,16,17 must be 1 */
+#define VMX_EXIT_CTLS_DEFAULT1		0x36dff
+/* bit 0-8, and 12 must be 1 */
+#define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
+
 /*
  * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
  */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
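The value composition repeated throughout the patch above, `((data | tmp) << 32) | tmp`, follows the VMX capability MSR layout: the low 32 bits report the allowed 0-settings (the default-1 bits read as 1 there), and the high 32 bits report the allowed 1-settings (the exposed features plus the default-1 bits). A minimal sketch, with a hypothetical `vmx_cap_msr()` helper:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: pack a VMX capability MSR value from the
 * feature bits to expose and the architectural default-1 bits.
 * Low half = default-1 bits; high half = features | default-1. */
static uint64_t vmx_cap_msr(uint32_t features, uint32_t default1)
{
    uint64_t data = features;
    uint64_t tmp  = default1;

    return ((data | tmp) << 32) | tmp;
}
```

With the pin-based default-1 mask 0x16 (bits 1, 2, 4), any exposed feature bits are OR-ed into the high half while the low half stays 0x16.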


From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEla-0007sV-7M; Wed, 05 Dec 2012 13:11:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElZ-0007sJ-J8
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:11:57 +0000
Received: from [85.158.143.99:6452] by server-1.bemta-4.messagelabs.com id
	1A/BD-27934-C184FB05; Wed, 05 Dec 2012 13:11:56 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354713114!27277508!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2MTk1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12266 invoked from network); 5 Dec 2012 13:11:55 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-13.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 13:11:55 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 05 Dec 2012 05:11:53 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="226912263"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 05 Dec 2012 05:11:52 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:08 +0800
Message-Id: <1354712534-31338-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 05/11] nested vmx: fix handling of RDTSC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If L0 is to handle the TSC access, then we need to update guest EIP by
calling update_guest_eip().

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 ++
 3 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3bb0d99..9fb9562 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1555,7 +1555,7 @@ static int get_instruction_length(void)
     return len;
 }
 
-static void update_guest_eip(void)
+void update_guest_eip(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned long x;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6e5c1d3..fd5bb92 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1614,6 +1614,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
             tsc += __get_vvmcs(nvcpu->nv_vvmcx, TSC_OFFSET);
             regs->eax = (uint32_t)tsc;
             regs->edx = (uint32_t)(tsc >> 32);
+            update_guest_eip();
 
             return 1;
         }
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c4c2fe8..aa5b080 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -399,6 +399,8 @@ void ept_p2m_init(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
+void update_guest_eip(void);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElX-0007ru-Qs; Wed, 05 Dec 2012 13:11:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElW-0007rY-Fx
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:11:54 +0000
Received: from [85.158.139.83:8863] by server-13.bemta-5.messagelabs.com id
	04/FB-27809-9184FB05; Wed, 05 Dec 2012 13:11:53 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1354713112!28517068!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3Nzg4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30242 invoked from network); 5 Dec 2012 13:11:53 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-12.tower-182.messagelabs.com with SMTP;
	5 Dec 2012 13:11:53 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 05 Dec 2012 05:11:52 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="229273750"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 05 Dec 2012 05:11:51 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:07 +0800
Message-Id: <1354712534-31338-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 04/11] nested vmx: fix rflags status in
	virtual vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As stated in the SDM, a VM exit clears all bits in RFLAGS to 0, except
bit 1, which is reserved and always reads as 1. Therefore
virtual_vmexit needs to follow the same logic.
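
The architectural behaviour being modelled is simple enough to state as
code. A minimal sketch (not Xen's code; the function name is
illustrative), assuming the SDM rule that VM exit loads RFLAGS with
only the reserved bit 1 set:

```c
#include <assert.h>
#include <stdint.h>

#define X86_EFLAGS_MBS (1ULL << 1)  /* bit 1: always-set reserved bit */

/* Per the SDM, a VM exit clears every RFLAGS bit (CF, PF, ZF, IF, ...)
 * and leaves only bit 1, so the value after a (virtual) VM exit is the
 * constant 0x2, regardless of what the guest's RFLAGS contained. */
static uint64_t rflags_after_vmexit(uint64_t guest_rflags)
{
    (void)guest_rflags;     /* the old value is irrelevant by spec */
    return X86_EFLAGS_MBS;  /* == 0x2 */
}
```

This is why the patch replaces the `__vmread(GUEST_RFLAGS)` with the
constant `0x2` rather than preserving any of the L2 guest's flags.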

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index a09fa97..6e5c1d3 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -991,7 +991,8 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
 
     regs->eip = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RIP);
     regs->esp = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RSP);
-    regs->eflags = __vmread(GUEST_RFLAGS);
+    /* VM exit clears all bits except bit 1 */
+    regs->eflags = 0x2;
 
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEle-0007tw-Dx; Wed, 05 Dec 2012 13:12:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElc-0007t1-5a
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:00 +0000
Received: from [85.158.143.99:56159] by server-3.bemta-4.messagelabs.com id
	44/F2-06841-F184FB05; Wed, 05 Dec 2012 13:11:59 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354713114!27277508!2
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2MTk1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12504 invoked from network); 5 Dec 2012 13:11:58 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-13.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 13:11:58 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 05 Dec 2012 05:11:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="176455139"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 05:11:57 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:11 +0800
Message-Id: <1354712534-31338-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 08/11] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, we need to sync
the APIC-access address from the virtual VMCS (vvmcs) into the shadow
VMCS when doing virtual_vmentry.
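
The sync boils down to a frame-number translation. A toy sketch of the
idea (the `toy_p2m` table and function name are hypothetical, not Xen's
p2m machinery): take the guest-physical APIC-access address L1 wrote
into its vvmcs, translate the frame through the p2m, and produce the
machine-physical address to write into the shadow VMCS.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Hypothetical gpfn -> mfn table standing in for the real p2m lookup
 * (which Xen performs via hvm_map_guest_frame_ro()/virt_to_mfn()). */
static const uint64_t toy_p2m[4] = { 7, 8, 9, 10 };

static uint64_t shadow_apic_access_addr(uint64_t vvmcs_apic_addr)
{
    uint64_t gpfn = vvmcs_apic_addr >> PAGE_SHIFT;  /* guest frame no. */
    uint64_t mfn  = toy_p2m[gpfn];                  /* translate */
    return mfn << PAGE_SHIFT;   /* value for the shadow APIC_ACCESS_ADDR */
}
```

For example, if gpfn 2 maps to mfn 9, guest address 0x2000 becomes
machine address 0x9000 in the shadow VMCS.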

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e4ce466..ae553bb 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1350,7 +1369,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1680,6 +1700,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEle-0007uJ-Qo; Wed, 05 Dec 2012 13:12:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tls@panix.com>) id 1TgElc-0007t1-On
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:00 +0000
Received: from [85.158.143.99:56213] by server-3.bemta-4.messagelabs.com id
	6A/F2-06841-0284FB05; Wed, 05 Dec 2012 13:12:00 +0000
X-Env-Sender: tls@panix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354713119!27569775!1
X-Originating-IP: [166.84.1.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTY2Ljg0LjEuODkgPT4gMjg3Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13990 invoked from network); 5 Dec 2012 13:12:00 -0000
Received: from mailbackend.panix.com (HELO mailbackend.panix.com) (166.84.1.89)
	by server-3.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 13:12:00 -0000
Received: from panix5.panix.com (panix5.panix.com [166.84.1.5])
	by mailbackend.panix.com (Postfix) with ESMTP id 908372E991;
	Wed,  5 Dec 2012 08:11:59 -0500 (EST)
Received: by panix5.panix.com (Postfix, from userid 415)
	id 6EA7224241; Wed,  5 Dec 2012 08:11:59 -0500 (EST)
Date: Wed, 5 Dec 2012 08:11:59 -0500
From: Thor Lancelot Simon <tls@panix.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20121205131159.GA29622@panix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com> <20121204200739.GA23149@panix.com>
	<20121205101551.GA2999@asim.lip6.fr> <50BF1FD6.4010905@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BF1FD6.4010905@citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"port-xen@netbsd.org" <port-xen@NetBSD.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 11:20:06AM +0100, Roger Pau Monné wrote:
> On 05/12/12 11:15, Manuel Bouyer wrote:
> > On Tue, Dec 04, 2012 at 03:07:39PM -0500, Thor Lancelot Simon wrote:
> >> On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
> >>>
> >>> Independently of what we end up doing as default for handling raw file
> >>> disks, could someone review this code?
> >>>
> >>> It's the first time I've done a device, so someone with more experience
> >>> should review it.
> >>
> >> I am not sure I entirely follow what this code's doing, but it seems to
> >> me it may allow arbitrary physical pages to be exposed to userspace
> >> processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.
> >>
> >> Is that a correct understanding of one of its effects?  If so, there's
> >> a problem, since not being able to do precisely that is one important
> >> assumption of the 4.4BSD security model.
> > 
> > If I read it properly, It allows only to map pages that are part of a
> > grant. You provide the ioctl a grant reference, and this is what
> > the driver uses to find the physical pages. So it should be limited to
> > pages that are referenced by a grant.
> 
> Yes, it should be limited to grant pages, you are not able to map
> arbitrary mfns.

So, can dom0 give away arbitrary physical pages to a domU which can
then hand them back as a "grant", or is there other protection
against that?  That was my concern.  I'm sorry I don't understand
some of the fundamental terminology very well.

-- 
 Thor Lancelot Simon	                                      tls@panix.com

   	It's very complicated.  It's very cumbersome.  There's a
   	lot of numbers involved with it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEle-0007uJ-Qo; Wed, 05 Dec 2012 13:12:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tls@panix.com>) id 1TgElc-0007t1-On
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:00 +0000
Received: from [85.158.143.99:56213] by server-3.bemta-4.messagelabs.com id
	6A/F2-06841-0284FB05; Wed, 05 Dec 2012 13:12:00 +0000
X-Env-Sender: tls@panix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354713119!27569775!1
X-Originating-IP: [166.84.1.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTY2Ljg0LjEuODkgPT4gMjg3Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13990 invoked from network); 5 Dec 2012 13:12:00 -0000
Received: from mailbackend.panix.com (HELO mailbackend.panix.com) (166.84.1.89)
	by server-3.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 13:12:00 -0000
Received: from panix5.panix.com (panix5.panix.com [166.84.1.5])
	by mailbackend.panix.com (Postfix) with ESMTP id 908372E991;
	Wed,  5 Dec 2012 08:11:59 -0500 (EST)
Received: by panix5.panix.com (Postfix, from userid 415)
	id 6EA7224241; Wed,  5 Dec 2012 08:11:59 -0500 (EST)
Date: Wed, 5 Dec 2012 08:11:59 -0500
From: Thor Lancelot Simon <tls@panix.com>
To: Roger Pau Monn? <roger.pau@citrix.com>
Message-ID: <20121205131159.GA29622@panix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com> <20121204200739.GA23149@panix.com>
	<20121205101551.GA2999@asim.lip6.fr> <50BF1FD6.4010905@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BF1FD6.4010905@citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"port-xen@netbsd.org" <port-xen@NetBSD.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 11:20:06AM +0100, Roger Pau Monné wrote:
> On 05/12/12 11:15, Manuel Bouyer wrote:
> > On Tue, Dec 04, 2012 at 03:07:39PM -0500, Thor Lancelot Simon wrote:
> >>> On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
> >>>
> >>> Independently of what we end up doing as default for handling raw file
> >>> disks, could someone review this code?
> >>>
> >>> It's the first time I've done a device, so someone with more experience
> >>> should review it.
> >>
> >> I am not sure I entirely follow what this code's doing, but it seems to
> >> me it may allow arbitrary physical pages to be exposed to userspace
> >> processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.
> >>
> >> Is that a correct understanding of one of its effects?  If so, there's
> >> a problem, since not being able to do precisely that is one important
> >> assumption of the 4.4BSD security model.
> > 
> > If I read it properly, it only allows mapping pages that are part of a
> > grant. You provide the ioctl with a grant reference, and that is what
> > the driver uses to find the physical pages. So it should be limited to
> > pages that are referenced by a grant.
> 
> Yes, it should be limited to grant pages; you are not able to map
> arbitrary mfns.

So, can dom0 give away arbitrary physical pages to a domU which can
then hand them back as a "grant", or is there other protection
against that?  That was my concern.  I'm sorry I don't understand
some of the fundamental terminology very well.

-- 
 Thor Lancelot Simon	                                      tls@panix.com

   	It's very complicated.  It's very cumbersome.  There's a
   	lot of numbers involved with it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElc-0007tK-Kx; Wed, 05 Dec 2012 13:12:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElb-0007sg-3w
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:11:59 +0000
Received: from [85.158.139.211:33282] by server-11.bemta-5.messagelabs.com id
	EB/B6-03409-E184FB05; Wed, 05 Dec 2012 13:11:58 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354713117!18377205!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MTg2OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11436 invoked from network); 5 Dec 2012 13:11:57 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-15.tower-206.messagelabs.com with SMTP;
	5 Dec 2012 13:11:57 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 05 Dec 2012 05:11:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="259358251"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 05 Dec 2012 05:11:55 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:10 +0800
Message-Id: <1354712534-31338-8-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 07/11] nested vmx: enable IA32E mode while do
	VM entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some VMMs may check the platform capability to judge whether long
mode guests are supported. Therefore we need to expose this bit to
the guest VMM.

Xen on Xen works fine with the current solution because Xen doesn't
check this capability but directly sets it in the VMCS if the guest
supports long mode.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index a5a8e3d..e4ce466 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1376,7 +1376,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
-               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
+               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
+               VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
         break;
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElg-0007vB-8B; Wed, 05 Dec 2012 13:12:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElf-0007t1-2J
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:03 +0000
Received: from [85.158.143.99:6775] by server-3.bemta-4.messagelabs.com id
	B6/03-06841-2284FB05; Wed, 05 Dec 2012 13:12:02 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1354713121!22873542!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3Nzg4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28670 invoked from network); 5 Dec 2012 13:12:02 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 13:12:02 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 05 Dec 2012 05:12:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="229273793"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 05 Dec 2012 05:12:00 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:13 +0800
Message-Id: <1354712534-31338-11-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 10/11] nested vmx: fix interrupt delivery to
	L2 guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When delivering an interrupt to an L2 guest, the L0 hypervisor needs to
check whether the L1 hypervisor wants to own the interrupt; if not, it
injects the interrupt directly into the L2 guest.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 3961bc7..ef8b925 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -163,7 +163,7 @@ enum hvm_intblk nvmx_intr_blocked(struct vcpu *v)
 
 static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 {
-    u32 exit_ctrl;
+    u32 ctrl;
 
     if ( nvmx_intr_blocked(v) != hvm_intblk_none )
     {
@@ -176,11 +176,14 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
         if ( intack.source == hvm_intsrc_pic ||
                  intack.source == hvm_intsrc_lapic )
         {
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, PIN_BASED_VM_EXEC_CONTROL);
+            if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
+                return 0;
+
             vmx_inject_extint(intack.vector);
 
-            exit_ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
-                            VM_EXIT_CONTROLS);
-            if ( exit_ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, VM_EXIT_CONTROLS);
+            if ( ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
             {
                 /* for now, duplicate the ack path in vmx_intr_assist */
                 hvm_vcpu_ack_pending_irq(v, intack);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElu-00087X-CI; Wed, 05 Dec 2012 13:12:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgEls-00086X-VR
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:17 +0000
Received: from [193.109.254.147:17965] by server-5.bemta-14.messagelabs.com id
	AA/EF-10257-0384FB05; Wed, 05 Dec 2012 13:12:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354713133!6363892!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MTg2OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15691 invoked from network); 5 Dec 2012 13:12:15 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 13:12:15 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 05 Dec 2012 05:12:15 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="257618194"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 05:11:58 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:12 +0800
Message-Id: <1354712534-31338-10-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 09/11] nested vmx: enable PAUSE and RDPMC
	exiting for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ae553bb..fba09cc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1362,6 +1362,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
+               CPU_BASED_PAUSE_EXITING |
+               CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElr-000864-U0; Wed, 05 Dec 2012 13:12:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElr-00085A-AW
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:15 +0000
Received: from [193.109.254.147:46755] by server-13.bemta-14.messagelabs.com
	id C3/D0-11239-E284FB05; Wed, 05 Dec 2012 13:12:14 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354713133!6363892!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MTg2OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15562 invoked from network); 5 Dec 2012 13:12:14 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 13:12:14 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 05 Dec 2012 05:12:13 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="257618121"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 05:11:54 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:09 +0800
Message-Id: <1354712534-31338-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 06/11] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the DR registers we use a lazy restore mechanism on access. Therefore,
when receiving such a VM exit, L0 is responsible for switching to the
right DR values before injecting the exit into the L1 hypervisor.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index fd5bb92..a5a8e3d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1641,7 +1641,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         break;
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_MOV_DR_EXITING )
+        if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
+            v->arch.hvm_vcpu.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElr-000864-U0; Wed, 05 Dec 2012 13:12:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElr-00085A-AW
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:15 +0000
Received: from [193.109.254.147:46755] by server-13.bemta-14.messagelabs.com
	id C3/D0-11239-E284FB05; Wed, 05 Dec 2012 13:12:14 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354713133!6363892!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MTg2OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15562 invoked from network); 5 Dec 2012 13:12:14 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 13:12:14 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 05 Dec 2012 05:12:13 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="257618121"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 05:11:54 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:09 +0800
Message-Id: <1354712534-31338-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 06/11] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For DR registers, we use a lazy restore mechanism on access. Therefore,
when receiving such a VM exit, L0 is responsible for switching to the
right DR values before injecting the exit into the L1 hypervisor.
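
The decision above can be sketched as a small predicate — a simplified
model only, with illustrative names (not Xen's actual structures):
forward the DR-access exit to L1 only when L1 enabled MOV-DR exiting in
its execution controls *and* the vCPU's lazily-restored debug registers
are actually live.

```c
#include <stdint.h>

/* Hypothetical sketch: should L0 inject a DR-access VM exit into L1?
 * Only if L1 asked for MOV-DR exiting and the guest's DR values are
 * currently loaded (dirty) under the lazy-restore scheme. */
#define CPU_BASED_MOV_DR_EXITING (1u << 23)  /* bit 23 per the VMX spec */

static int inject_dr_exit_to_l1(uint32_t n2_exec_ctrl, int flag_dr_dirty)
{
    return (n2_exec_ctrl & CPU_BASED_MOV_DR_EXITING) && flag_dr_dirty != 0;
}
```

If the DRs are not dirty, L0 can satisfy the access itself after loading
the correct values, with no exit visible to L1.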

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index fd5bb92..a5a8e3d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1641,7 +1641,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         break;
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_MOV_DR_EXITING )
+        if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
+            v->arch.hvm_vcpu.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElv-00088e-Pj; Wed, 05 Dec 2012 13:12:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElu-00087C-Df
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:18 +0000
Received: from [85.158.138.51:48367] by server-13.bemta-3.messagelabs.com id
	6F/4E-24887-1384FB05; Wed, 05 Dec 2012 13:12:17 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354713135!19487716!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19281 invoked from network); 5 Dec 2012 13:12:16 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-3.tower-174.messagelabs.com with SMTP;
	5 Dec 2012 13:12:16 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 05 Dec 2012 05:11:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="252333781"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 05 Dec 2012 05:11:49 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:06 +0800
Message-Id: <1354712534-31338-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 03/11] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4d0f26b..a09fa97 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1311,9 +1311,10 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50;
+               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
         /* 1-seetings */
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
@@ -1322,6 +1323,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
         /* 1-seetings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1353,6 +1355,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = (data << 32) | tmp;
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
+    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
         /* 1-seetings */
         tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
@@ -1367,6 +1370,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
+    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         /* 1-seetings */
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:12:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:12:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgElx-00089e-7a; Wed, 05 Dec 2012 13:12:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgElv-00088F-Ly
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:19 +0000
Received: from [193.109.254.147:18113] by server-1.bemta-14.messagelabs.com id
	01/8B-25314-2384FB05; Wed, 05 Dec 2012 13:12:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354713136!9050767!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM0OTkxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14180 invoked from network); 5 Dec 2012 13:12:17 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-13.tower-27.messagelabs.com with SMTP;
	5 Dec 2012 13:12:17 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 05:12:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="259358296"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 05 Dec 2012 05:12:01 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:14 +0800
Message-Id: <1354712534-31338-12-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 11/11] nested vmx: check host ability when
	intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the guest hypervisor tries to read an MSR value, we intercept this
access and return emulated values. Besides that, we also need to ensure
that those emulated values are compatible with the host's capabilities.
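
The clamping used repeatedly in this patch can be sketched as one helper
(names hypothetical): a VMX capability MSR carries allowed-0 settings in
its low 32 bits and allowed-1 settings in its high 32 bits, so the
emulated allowed-1 bits are ANDed with the host's (never advertise a
feature the host lacks) while the allowed-0 bits are ORed (never clear a
bit the host requires to be 1).

```c
#include <stdint.h>

/* Sketch of clamping an emulated VMX capability MSR to host ability:
 * high half (allowed-1) = data AND host, low half (allowed-0) = data OR
 * host. */
static uint64_t clamp_to_host(uint64_t data, uint64_t host_data)
{
    return ((data & host_data) & (~0ull << 32)) |
           ((data | host_data) & 0xffffffffull);
}
```

For example, an allowed-1 bit set in `data` but clear in `host_data` is
dropped from the high half, while a host-mandated allowed-0 bit is
forced on in the low half.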

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   18 ++++++++++++++----
 1 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index fba09cc..e65f963 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1319,19 +1319,20 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp = 0;
+    u64 data = 0, host_data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
         return 0;
 
+    rdmsrl(msr, host_data);
+
     /*
      * Remove unsupport features from n1 guest capability MSR
      */
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
-        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
+        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
@@ -1341,6 +1342,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                PIN_BASED_PREEMPT_TIMER;
         tmp = VMX_PINBASED_CTLS_DEFAULT1;
         data = ((data | tmp) << 32) | (tmp);
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
@@ -1368,6 +1371,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
@@ -1376,6 +1381,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
     case MSR_IA32_VMX_TRUE_EXIT_CTLS:
@@ -1391,6 +1398,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
 	/* 0-settings */
         data = ((data | tmp) << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
     case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
@@ -1401,8 +1410,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
                VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
-
     case IA32_FEATURE_CONTROL_MSR:
         data = IA32_FEATURE_CONTROL_MSR_LOCK | 
                IA32_FEATURE_CONTROL_MSR_ENABLE_VMXON_OUTSIDE_SMX;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:13:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:13:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEmZ-0000AV-Ox; Wed, 05 Dec 2012 13:12:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgEmX-00008j-ON
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:58 +0000
Received: from [85.158.139.83:21119] by server-3.bemta-5.messagelabs.com id
	F4/58-18736-8584FB05; Wed, 05 Dec 2012 13:12:56 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354713108!17255073!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3Nzg4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28574 invoked from network); 5 Dec 2012 13:11:49 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-182.messagelabs.com with SMTP;
	5 Dec 2012 13:11:49 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 05 Dec 2012 05:11:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="229273736"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 05 Dec 2012 05:11:46 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:04 +0800
Message-Id: <1354712534-31338-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 01/11] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To virtualize MSR bitmaps in nested VMX, the L0 hypervisor traps every MSR-access
VM exit from the L2 guest by disabling the MSR_BITMAP feature. When handling such
a VM exit, L0 checks whether the L1 hypervisor uses the MSR_BITMAP feature and
whether the corresponding bit is set to 1. If so, L0 injects the VM exit into the
L1 hypervisor; otherwise, L0 handles the VM exit itself.
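
The bitmap lookup this patch performs can be sketched as follows — a
simplified model with an illustrative name, following the MSR-bitmap
layout in the Intel SDM: within the 4KiB page, read-low lives at offset
0x000, read-high at 0x400, write-low at 0x800 and write-high at 0xc00,
each region covering 0x2000 MSRs, and a set bit means the access exits.

```c
#include <stdint.h>

/* Sketch: does L1's MSR bitmap request a VM exit for this access?
 * write != 0 selects the write halves of the bitmap. Out-of-range MSRs
 * always exit. */
static int msr_bitmap_wants_exit(const uint8_t bitmap[4096],
                                 uint32_t msr, int write)
{
    uint32_t base;

    if ( msr <= 0x1fff )
        base = write ? 0x800 : 0x000;
    else if ( msr >= 0xc0000000u && msr <= 0xc0001fffu )
    {
        msr &= 0x1fff;
        base = write ? 0xc00 : 0x400;
    }
    else
        return 1;

    return (bitmap[base + msr / 8] >> (msr % 8)) & 1;
}
```

This mirrors the decision in the patch: only when L1 enabled
CPU_BASED_ACTIVATE_MSR_BITMAP and the relevant bit is set does L0
forward the MSR-access exit to L1.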

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame (nvmx->msrbitmap); 
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:13:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:13:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgEmZ-0000AV-Ox; Wed, 05 Dec 2012 13:12:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgEmX-00008j-ON
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:12:58 +0000
Received: from [85.158.139.83:21119] by server-3.bemta-5.messagelabs.com id
	F4/58-18736-8584FB05; Wed, 05 Dec 2012 13:12:56 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354713108!17255073!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3Nzg4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28574 invoked from network); 5 Dec 2012 13:11:49 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-182.messagelabs.com with SMTP;
	5 Dec 2012 13:11:49 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 05 Dec 2012 05:11:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="229273736"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 05 Dec 2012 05:11:46 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 21:02:04 +0800
Message-Id: <1354712534-31338-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v2 01/11] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To emulate MSR bitmaps in nested VMX, the L0 hypervisor disables the MSR_BITMAP
feature for the L2 guest so that every MSR access from L2 traps to L0. When
handling such a VM exit, L0 checks whether the L1 hypervisor has enabled the
MSR_BITMAP feature and whether the corresponding bit in L1's bitmap is set to 1.
If so, L0 injects the VM exit into the L1 hypervisor; otherwise, L0 handles the
VM exit itself.
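For reference, the bitmap lookup this patch introduces can be modelled in
standalone C. This is a simplified userspace sketch (test_bit/set_bit here are
minimal stand-ins for illustration, not Xen's arch-optimised versions):

```c
#include <assert.h>
#include <string.h>

/* Userspace sketch of the lookup added as vmx_check_msr_bitmap().
 * Per the Intel SDM, the 4KiB MSR bitmap holds four 1KiB regions:
 * read-low at 0x000, read-high at 0x400, write-low at 0x800 and
 * write-high at 0xc00; the "high" regions cover MSRs
 * 0xc0000000-0xc0001fff. */

#define BYTES_PER_LONG sizeof(unsigned long)
#define BITS_PER_LONG  (BYTES_PER_LONG * 8)

static int test_bit(unsigned int nr, const unsigned long *addr)
{
    return (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

static void set_bit(unsigned int nr, unsigned long *addr)
{
    addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* access_type: read == 0, write == 1.  Returns nonzero when the access
 * should cause a VM exit; a NULL bitmap means "always exit". */
static int check_msr_bitmap(const unsigned long *msr_bitmap,
                            unsigned int msr, int access_type)
{
    if ( !msr_bitmap )
        return 1;

    if ( msr <= 0x1fff )
        return test_bit(msr, msr_bitmap +
                        (access_type ? 0x800 : 0x000) / BYTES_PER_LONG);

    if ( msr >= 0xc0000000 && msr <= 0xc0001fff )
        return test_bit(msr & 0x1fff, msr_bitmap +
                        (access_type ? 0xc00 : 0x400) / BYTES_PER_LONG);

    return 1; /* out-of-range MSRs always exit */
}
```

Note that the byte offsets (0x400, 0x800, 0xc00) are scaled to unsigned-long
units before the pointer arithmetic, matching the
`msr_bitmap + 0x800/BYTES_PER_LONG` expressions in the patch.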

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:40:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgFDK-0001ew-7I; Wed, 05 Dec 2012 13:40:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgFDI-0001ei-HN
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:40:36 +0000
Received: from [85.158.137.99:56239] by server-3.bemta-3.messagelabs.com id
	AD/89-31566-3DE4FB05; Wed, 05 Dec 2012 13:40:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354714832!12913688!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18900 invoked from network); 5 Dec 2012 13:40:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 13:40:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16173100"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 13:40:32 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	13:40:32 +0000
Message-ID: <1354714830.15296.200.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 13:40:30 +0000
In-Reply-To: <1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:
> Please rerun autoconf after committing this patch

This is still missing the .{git,hg}ignore updates?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 13:59:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 13:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgFVl-0002FY-31; Wed, 05 Dec 2012 13:59:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1TgFVk-0002FT-8y
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 13:59:40 +0000
Received: from [85.158.143.35:16832] by server-2.bemta-4.messagelabs.com id
	30/D9-30861-B435FB05; Wed, 05 Dec 2012 13:59:39 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354708872!14132730!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29998 invoked from network); 5 Dec 2012 12:01:12 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-15.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 12:01:12 -0000
Received: from [62.94.143.201] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 76759632; Wed, 05 Dec 2012 13:01:22 +0100
Message-ID: <1354708870.21632.27.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 05 Dec 2012 13:01:10 +0100
In-Reply-To: <1354642903-4545-1-git-send-email-ian.campbell@citrix.com>
References: <1354642903-4545-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH RFC] tools: install under /usr/local by
 default.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1484126004937567780=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============1484126004937567780==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-E6Hs6f05hei9zPtdxXey"


--=-E6Hs6f05hei9zPtdxXey
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2012-12-04 at 17:41 +0000, Ian Campbell wrote:
> This is the de facto (or FHS-mandated?) standard location for software
> built from source, in order to avoid clashing with packaged software
> which is installed under /usr/bin etc.
>
> I think there is benefit in having Xen's install behave more like the
> majority of other OSS software out there.
>
I think that too. I have very little knowledge of autotools and related
stuff, so I don't think I can provide a "technical ack", but I definitely
like the idea!

> The major downside here is in the transition from 4.2 to 4.3, where
> people who have built from source will inevitably discover breakage
> because 4.3 no longer overwrites stuff in /usr like it used to, so they
> pick up old stale bits from /usr instead of new stuff from /usr/local.
>
I think this is something that needs to be made clear in release notes,
migration instructions, etc., but, at the same time, I wouldn't consider
it a showstopper.

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-E6Hs6f05hei9zPtdxXey
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlC/N4YACgkQk4XaBE3IOsQoHgCeJE8NNMoMChXV/YHJJyXI+7nl
pe0AnjYvvlJR5yIKYtMi/DubKmBJFDdM
=keel
-----END PGP SIGNATURE-----

--=-E6Hs6f05hei9zPtdxXey--



--===============1484126004937567780==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1484126004937567780==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 14:01:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgFXQ-0002SH-PM; Wed, 05 Dec 2012 14:01:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgFXP-0002S6-GR
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:01:23 +0000
Received: from [85.158.139.211:30587] by server-13.bemta-5.messagelabs.com id
	7B/C1-27809-2B35FB05; Wed, 05 Dec 2012 14:01:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354716080!19203944!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30849 invoked from network); 5 Dec 2012 14:01:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 14:01:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16173647"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 14:01:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	14:01:19 +0000
Message-ID: <1354716078.15296.201.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@amd.com>
Date: Wed, 5 Dec 2012 14:01:18 +0000
In-Reply-To: <1354711402.15296.188.camel@zakaz.uk.xensource.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 12:43 +0000, Ian Campbell wrote:
> 
> It seems like it is applying successfully on only the even numbered
> cpus. Is this because the odd and even ones share some execution units
> and therefore share microcode updates too? IOW update CPU0 also
> updates CPU1 under the hood. 

I added some debug and it does seem like the odd CPUs have already been
updated when we get to them.
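[Editorial note: Ian's even/odd hypothesis matches the AMD family 15h topology, where two integer cores share one compute unit and its microcode. Under that assumption (purely illustrative; the pairing rule and the sibling() helper below are hypothetical, not from this thread), the compute-unit sibling of logical CPU N is N XOR 1, so updating the even CPU also covers its odd sibling:]

```shell
# Hedged sketch: assuming consecutive enumeration of fam15h compute-unit
# pairs (0,1), (2,3), ..., CPU N's sibling is N with the low bit flipped.
sibling() {
    echo $(( $1 ^ 1 ))
}

sibling 0   # CPU0's compute-unit sibling is CPU1
sibling 4   # CPU4's sibling is CPU5
```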

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:28:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgFwy-00039p-8X; Wed, 05 Dec 2012 14:27:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu>)
	id 1TgFwx-00039i-By
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:27:47 +0000
Received: from [85.158.143.99:9750] by server-3.bemta-4.messagelabs.com id
	10/7B-06841-2E95FB05; Wed, 05 Dec 2012 14:27:46 +0000
X-Env-Sender: prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354717664!22865967!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 427 invoked from network); 5 Dec 2012 14:27:46 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 14:27:46 -0000
Received: from aplexcas1.dom1.jhuapl.edu (aplexcas1.dom1.jhuapl.edu
	[128.244.198.90]) by piper.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 5f05_0602_3c5decab_2351_4c0b_97d5_6d5d79b66c97;
	Wed, 05 Dec 2012 09:27:41 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 5 Dec 2012
	09:25:21 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 5 Dec 2012 09:25:05 -0500
Thread-Topic: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
Thread-Index: Ac3S7hsyG1jgtExGQGWcdc4NOwgIcAABjbQC
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC203@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>,
	<1354714830.15296.200.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354714830.15296.200.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yes still missing.
________________________________________
From: Ian Campbell [Ian.Campbell@citrix.com]
Sent: Wednesday, December 05, 2012 8:40 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:
> Please rerun autoconf after committing this patch

This is still missing the .{git,hg}ignore updates?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:29:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgFyA-0003DS-Nn; Wed, 05 Dec 2012 14:29:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgFy9-0003DL-Fp
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:29:01 +0000
Received: from [85.158.139.83:5282] by server-6.bemta-5.messagelabs.com id
	C8/72-19321-C2A5FB05; Wed, 05 Dec 2012 14:29:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1354717740!21172345!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17560 invoked from network); 5 Dec 2012 14:29:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 14:29:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16174496"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 14:29:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	14:29:00 +0000
Message-ID: <1354717738.15296.203.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 14:28:58 +0000
In-Reply-To: <1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:

> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])

I'm getting:

make[1]: *** No rule to make target `install-caml', needed by `install'.  Stop.
make[1]: *** Waiting for unfinished jobs....

I think it's because currently the caml stubdom is built but not
installed: it appears in $(TARGETS) but not in the install: rule.
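[Editorial note: the failure mode can be sketched as a hypothetical Makefile fragment (target names assumed; this is not the actual stubdom Makefile): caml is listed among the build targets, so `install' demands install-caml, but no such rule exists.]

```make
# Hypothetical fragment mirroring the reported failure: caml is in
# TARGETS (so it gets built), but only some targets have install rules.
TARGETS := ioemu grub caml

install: $(patsubst %,install-%,$(TARGETS))

install-ioemu:
	$(INSTALL_PROG) ioemu-stubdom.gz $(DESTDIR)/usr/lib/xen/boot
install-grub:
	$(INSTALL_PROG) pv-grub.gz $(DESTDIR)/usr/lib/xen/boot
# no install-caml rule -> "No rule to make target `install-caml'"
```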

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:33:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgG2M-0003Xh-JA; Wed, 05 Dec 2012 14:33:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu>)
	id 1TgG2L-0003X7-B6
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:33:21 +0000
Received: from [85.158.143.35:60765] by server-1.bemta-4.messagelabs.com id
	2E/7B-27934-03B5FB05; Wed, 05 Dec 2012 14:33:20 +0000
X-Env-Sender: prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354717998!12705370!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18861 invoked from network); 5 Dec 2012 14:33:20 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 14:33:20 -0000
Received: from aplexcas1.dom1.jhuapl.edu (aplexcas1.dom1.jhuapl.edu
	[128.244.198.90]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 1a66_01bd_bdb74738_7907_494c_af68_a70bab43699d;
	Wed, 05 Dec 2012 09:33:10 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 5 Dec 2012
	09:32:31 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 5 Dec 2012 09:31:53 -0500
Thread-Topic: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
Thread-Index: Ac3S9N/nG6PazAuQSACi9slH1vvrzgAAGWvG
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC205@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>,
	<1354717738.15296.203.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354717738.15296.203.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I think the best way to fix this would be to add an install-caml: rule to the makefile that does nothing.
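[Editorial note: a minimal sketch of the no-op rule Matthew suggests (hypothetical; the actual stubdom Makefile layout may differ):]

```make
# Hypothetical no-op install rule: the caml stubdom is built but has
# nothing to install, so satisfy the install-% dependency with a stub.
.PHONY: install-caml
install-caml:
	@true
```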
________________________________________
From: Ian Campbell [Ian.Campbell@citrix.com]
Sent: Wednesday, December 05, 2012 9:28 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:

> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])

I'm getting:

make[1]: *** No rule to make target `install-caml', needed by `install'.  Stop.
make[1]: *** Waiting for unfinished jobs....

I think because currently the caml stubdom is built but not installed.
It appears in $(TARGETS) but not in the install: rule

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:38:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgG6e-0003ib-CD; Wed, 05 Dec 2012 14:37:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu>)
	id 1TgG6c-0003iT-AO
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:37:46 +0000
Received: from [85.158.139.83:55538] by server-14.bemta-5.messagelabs.com id
	51/D3-21768-93C5FB05; Wed, 05 Dec 2012 14:37:45 +0000
X-Env-Sender: prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354718259!28491089!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22383 invoked from network); 5 Dec 2012 14:37:41 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 14:37:41 -0000
Received: from aplexcas1.dom1.jhuapl.edu (aplexcas1.dom1.jhuapl.edu
	[128.244.198.90]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 0895_06bb_84d4abac_0be4_4e9c_8028_f2f47e61a7de;
	Wed, 05 Dec 2012 09:37:34 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 5 Dec 2012
	09:35:09 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 5 Dec 2012 09:32:44 -0500
Thread-Topic: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
Thread-Index: Ac3Sz+KIIzVG1tyiQvK+JXyCdkxz4wAJYEpg
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC206@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>,
	<1354701851.15296.120.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354701851.15296.120.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

AC_ARG_VAR just allows you to do CMAKE=foo ./configure. It also adds a message about the CMAKE variable when you do ./configure --help. I'm not convinced this half needs to be conditional.

The line that does the conditional check is the following:
AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])

AX_PATH_PROG_OR_FAIL_ARG is a new macro I added to path_or_fail.m4. It does the same thing as AX_PATH_PROG_OR_FAIL if the first argument (in this case "$vtpm") is equal to "y". If the argument is anything else it sets the variable (in this case CMAKE) to NAME_disabled_in_configure_script.

The result is that if someone tries to build vtpm after disabling it, they will get a rather loud and obvious error message: "cmake-disabled-in-configure-script: command not found".
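[Editorial note: as a hedged reconstruction of the behaviour Matthew describes (not the real macro from path_or_fail.m4; argument names assumed), AX_PATH_PROG_OR_FAIL_ARG could look like:]

```m4
dnl Hypothetical sketch: AX_PATH_PROG_OR_FAIL_ARG(enable-flag, VAR, prog).
dnl If the enable flag is "y", behave like AX_PATH_PROG_OR_FAIL; otherwise
dnl poison VAR so any later use of it fails loudly at build time.
AC_DEFUN([AX_PATH_PROG_OR_FAIL_ARG], [
    AS_IF([test "x$$1" = "xy"],
        [AX_PATH_PROG_OR_FAIL([$2], [$3])],
        [$2="$3-disabled-in-configure-script"])
])
```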
________________________________________
From: Ian Campbell [Ian.Campbell@citrix.com]
Sent: Wednesday, December 05, 2012 5:04 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:
> +# Enable/disable stub domains
> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
> +AX_STUBDOM_DEFAULT_ENABLE([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_DEFAULT_ENABLE([vtpmmgrdom], [vtpmmgr])
> +
> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
> +
> +AC_ARG_VAR([CMAKE], [Path to the cmake program])

Weren't you going to make vtpm* conditional on the presence of cmake? Or
is there some magic here which I don't understand?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:38:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgG6e-0003ib-CD; Wed, 05 Dec 2012 14:37:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu>)
	id 1TgG6c-0003iT-AO
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:37:46 +0000
Received: from [85.158.139.83:55538] by server-14.bemta-5.messagelabs.com id
	51/D3-21768-93C5FB05; Wed, 05 Dec 2012 14:37:45 +0000
X-Env-Sender: prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354718259!28491089!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22383 invoked from network); 5 Dec 2012 14:37:41 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 14:37:41 -0000
Received: from aplexcas1.dom1.jhuapl.edu (aplexcas1.dom1.jhuapl.edu
	[128.244.198.90]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 0895_06bb_84d4abac_0be4_4e9c_8028_f2f47e61a7de;
	Wed, 05 Dec 2012 09:37:34 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 5 Dec 2012
	09:35:09 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 5 Dec 2012 09:32:44 -0500
Thread-Topic: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
Thread-Index: Ac3Sz+KIIzVG1tyiQvK+JXyCdkxz4wAJYEpg
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC206@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>,
	<1354701851.15296.120.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354701851.15296.120.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

AC_ARG_VAR just allows you to do CMAKE=foo ./configure. It also adds a message about the CMAKE variable when you do ./configure --help. I'm not convinced this half needs to be conditional.

The line that does the conditional check is the following:
AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])

AX_PATH_PROG_OR_FAIL_ARG is a new macro I added to path_or_fail.m4. It does the same thing as AX_PATH_PROG_OR_FAIL if the first argument (in this case "$vtpm") is equal to "y". If the argument is anything else, it sets the variable (in this case CMAKE) to NAME_disabled_in_configure_script.

The result is that if someone tries to build vtpm after disabling it, they will get a rather loud and obvious error message: "cmake_disabled_in_configure_script: command not found".
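For illustration, the conditional behaviour described above can be sketched in plain shell (a hedged approximation of what the macro expands to; the real m4 body in path_or_fail.m4 is not reproduced here, and the function name below is made up):

```shell
# Rough shell equivalent of AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake]).
# This is an illustration only, not the actual macro expansion.
ax_path_prog_or_fail_arg() {
    arg_value=$1 var_name=$2 prog=$3
    if [ "$arg_value" = "y" ]; then
        # AX_PATH_PROG_OR_FAIL case: locate the program or fail configure.
        prog_path=$(command -v "$prog") || {
            echo "configure: error: cannot find $prog" >&2
            return 1
        }
        eval "$var_name=\$prog_path"
    else
        # Disabled case: poison the variable so any later use fails loudly.
        eval "$var_name=${prog}_disabled_in_configure_script"
    fi
}
```

With vtpm disabled, CMAKE ends up set to cmake_disabled_in_configure_script, which is what produces the "command not found" failure described above.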
________________________________________
From: Ian Campbell [Ian.Campbell@citrix.com]
Sent: Wednesday, December 05, 2012 5:04 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom

On Tue, 2012-12-04 at 18:09 +0000, Matthew Fioravante wrote:
> +# Enable/disable stub domains
> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
> +AX_STUBDOM_DEFAULT_ENABLE([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_DEFAULT_ENABLE([vtpmmgrdom], [vtpmmgr])
> +
> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
> +
> +AC_ARG_VAR([CMAKE], [Path to the cmake program])

Weren't you going to make vtpm* conditional on the presence of cmake? Or
is there some magic here which I don't understand?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:40:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:40:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgG8k-0003pc-Up; Wed, 05 Dec 2012 14:39:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgG8j-0003pW-WE
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:39:58 +0000
Received: from [85.158.137.99:42621] by server-11.bemta-3.messagelabs.com id
	9E/BA-19361-DBC5FB05; Wed, 05 Dec 2012 14:39:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354718396!12687595!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20854 invoked from network); 5 Dec 2012 14:39:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 14:39:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16174798"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 14:39:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	14:39:55 +0000
Message-ID: <1354718394.15296.209.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 14:39:54 +0000
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48BDDDC205@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
	,<1354717738.15296.203.camel@zakaz.uk.xensource.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC205@aplesstripe.dom1.jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 14:31 +0000, Fioravante, Matthew E. wrote:
> I think the best way to fix this would be to add an install-caml: rule that does nothing to the makefile

Agreed.
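A no-op rule of the kind suggested might look like the following (a sketch only: the rule name comes from the mail, but its exact placement in the stubdom makefile is an assumption):

```make
# Satisfy `make install-caml` even when the caml stubdom is not built.
install-caml:
	@: # deliberately do nothing
```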


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:41:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:41:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgG9h-0003vf-EC; Wed, 05 Dec 2012 14:40:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgG9g-0003vS-FR
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 14:40:56 +0000
Received: from [193.109.254.147:64070] by server-5.bemta-14.messagelabs.com id
	73/11-10257-7FC5FB05; Wed, 05 Dec 2012 14:40:55 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354718451!9536100!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12883 invoked from network); 5 Dec 2012 14:40:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 14:40:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216481114"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 14:40:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 09:40:49 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgG9Z-0004AD-Dd;
	Wed, 05 Dec 2012 14:40:49 +0000
Message-ID: <50BF5B92.6000702@eu.citrix.com>
Date: Wed, 5 Dec 2012 14:34:58 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <bc2a7645dc2be4a01f2b.1354295635@elijah>
	<1354631744.15296.8.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354631744.15296.8.camel@zakaz.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] libxl: Make an internal function explicitly
 check existence of expected paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/12/12 14:35, Ian Campbell wrote:
> On Fri, 2012-11-30 at 17:13 +0000, George Dunlap wrote:
>> # HG changeset patch
>> # User George Dunlap <george.dunlap@eu.citrix.com>
>> # Date 1354294821 0
>> # Node ID bc2a7645dc2be4a01f2b5ee30d7453cc3d7339aa
>> # Parent  bd041b7426fe10a730994edd98708ff98ae1cb74
>> libxl: Make an internal function explicitly check existence of expected paths
>>
>> libxl__device_disk_from_xs_be() was failing without error for some
>> missing xenstore nodes in a backend, while assuming (without checking)
>> that other nodes were valid, causing a crash when another internal
>> error wrote these nodes in the wrong place.
>>
>> Make this function consistent by:
>> * Checking the existence of all nodes before using
>> * Choosing a default only when the node is not written in device_disk_add()
>> * Failing with log msg if any node written by device_disk_add() is not present
>> * Returning an error on failure
>>
>> Also make the callers of the function pay attention to the error and
>> behave appropriately.
> If libxl__device_disk_from_xs_be returns an error then someone needs to
> cleanup the partial allocations in the disk (pdev_path) probably by
> calling libxl_device_disk_dispose.
>
> It's probably easiest to do this in libxl__device_disk_from_xs_be on
> error rather than modifying all the callers?

Well, there are only two callers, only one of which (it looks like) 
needs a clean-up.  It seems like better design to make each caller do 
its own clean-up.  Let me take a look at that.

  -George

>
> Also libxl__append_disk_list_of_type updates *ndisks early, so if you
> abort half way through initialising the elements of the disks array
> using libxl__device_disk_from_xs_be then the caller will try and free
> some stuff which hasn't been initialised. I think the code needs to
> remember ndisks-in-array separately from
> ndisks-which-I-have-initialised, with the latter becoming the returned
> *ndisks.
>
>> v2:
>>   * Remove "Internal error", as the failure will most likely look internal
>>   * Use LOG(ERROR...) macros for incrased prettiness
> More crass?
>
>> @@ -2186,21 +2187,36 @@ static void libxl__device_disk_from_xs_b
>>       } else {
>>           disk->pdev_path = tmp;
>>       }
>> -    libxl_string_to_backend(ctx,
>> -                        libxl__xs_read(gc, XBT_NULL,
>> -                                       libxl__sprintf(gc, "%s/type", be_path)),
>> -                        &(disk->backend));
>> +
>> +
>> +    tmp = libxl__xs_read(gc, XBT_NULL,
>> +                         libxl__sprintf(gc, "%s/type", be_path));
>> +    if (!tmp) {
>> +        LOG(ERROR, "Missing xenstore node %s/type", be_path);
>> +        return ERROR_FAIL;
>> +    }
> I've just remembered about libxl__xs_read_checked which effectively
> implements the error reporting for you.
>
> Oh, but it accepts ENOENT, so not quite what you need -- nevermind!
>
> Ian.
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:52:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:52:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgGKD-0004Ss-KE; Wed, 05 Dec 2012 14:51:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgGKB-0004Sh-MJ
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:51:47 +0000
Received: from [85.158.143.35:28228] by server-3.bemta-4.messagelabs.com id
	A0/A3-06841-38F5FB05; Wed, 05 Dec 2012 14:51:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354719044!16305055!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14933 invoked from network); 5 Dec 2012 14:50:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 14:50:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16175076"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 14:50:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	14:50:24 +0000
Message-ID: <1354719022.15296.215.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 14:50:22 +0000
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48BDDDC206@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
	,<1354701851.15296.120.camel@zakaz.uk.xensource.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC206@aplesstripe.dom1.jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 14:32 +0000, Fioravante, Matthew E. wrote:
> AC_ARG_VAR just allows you to do CMAKE=foo ./configure. It also adds a
> message about the CMAKE variable when you do ./configure --help. I'm
> not convinced this half needs to be conditional.

Agreed.

> The line that does the conditional check is the following:
> AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])
> 
> AX_PATH_PROG_OR_FAIL_ARG is a new macro I added to path_or_fail.m4. It
> does the same thing as AX_PATH_PROG_OR_FAIL if the first argument (in
> this case "$vtpm") is equal to "y". If the argument is anything else
> it sets the variable (in this case CMAKE) to
> NAME_disabled_in_configure_script.

I see.

What I was expecting is that the default would be to build the vtpm
stuff if cmake was installed and to silently not do so if cmake wasn't.
If --enable-vtpm is given then it should of course fail noisily if cmake
isn't present.

> The result is that if someone tries to build vtpm after disabling it,
> they will get a rather loud and obvious error message:
> "cmake_disabled_in_configure_script: command not found".

That sounds fine too.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> this case "$vtpm") is equal to "y". If the argument is anything else
> it sets the variable (in this case CMAKE) to
> NAME_disabled_in_configure_script.

I see.

What I was expecting is that the default would be to build the vtpm
stuff if cmake was installed and to silently not do so if cmake wasn't.
If --enable-vtpm is given then it should of course fail noisily if cmake
isn't present.
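
A minimal shell sketch of that sentinel mechanism (illustrative only: the
`vtpm` flag and the sentinel value follow the description quoted above,
everything else here is hypothetical, not the actual macro output):

```shell
# Sketch of the configure-time sentinel: when the feature is disabled,
# point the tool variable at a command that cannot exist, so any later
# attempt to run it fails loudly instead of silently doing nothing.
vtpm=n                                  # as if --disable-vtpm were given
if [ "$vtpm" = "y" ]; then
    # Behave like AX_PATH_PROG_OR_FAIL: require cmake or abort configure.
    CMAKE=$(command -v cmake) || { echo "Error: cmake is required for vtpm" >&2; exit 1; }
else
    # Anything else: install the sentinel instead of failing now.
    CMAKE=cmake_disabled_in_configure_script
fi
echo "CMAKE=$CMAKE"
```

Any later `$CMAKE ...` invocation then fails with an obvious "command not
found" rather than silently skipping the build.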

> The result is that if someone tries to build vtpm after disabling it
> they will get a rather loud and obvious error message
> cmake-disabled-in-configure-script: command not found.

That sounds fine too.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 14:55:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 14:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgGNk-0004ch-9Y; Wed, 05 Dec 2012 14:55:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgGNj-0004cc-7x
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 14:55:27 +0000
Received: from [85.158.137.99:57546] by server-16.bemta-3.messagelabs.com id
	7A/F0-07461-D506FB05; Wed, 05 Dec 2012 14:55:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354719313!12689748!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7229 invoked from network); 5 Dec 2012 14:55:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 14:55:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46684898"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 14:55:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 09:55:12 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgGNT-0004Oq-Mw;
	Wed, 05 Dec 2012 14:55:11 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Wed, 5 Dec 2012 14:55:11 +0000
Message-ID: <1354719311-14267-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: ian.jackson@citrix.com, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] gitignore: ignore xen-foreign/arm.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 .gitignore |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/.gitignore b/.gitignore
index d724e5e..46ce63a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -340,6 +340,7 @@ tools/include/xen-foreign/checker.c
 tools/include/xen-foreign/structs.pyc
 tools/include/xen-foreign/x86_32.h
 tools/include/xen-foreign/x86_64.h
+tools/include/xen-foreign/arm.h
 
 .git
 tools/misc/xen-hptool
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:01:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:01:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgGTG-0004vD-Fl; Wed, 05 Dec 2012 15:01:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgGTF-0004v1-At
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 15:01:09 +0000
Received: from [85.158.143.35:11107] by server-1.bemta-4.messagelabs.com id
	D6/C6-27934-4B16FB05; Wed, 05 Dec 2012 15:01:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354719649!12929257!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2025 invoked from network); 5 Dec 2012 15:00:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 15:00:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216484113"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 15:00:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 10:00:33 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgGSe-0004U3-Ld;
	Wed, 05 Dec 2012 15:00:32 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Wed, 5 Dec 2012 15:00:32 +0000
Message-ID: <1354719632-21000-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen: arm: fix long lines in entry.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/entry.S |   66 +++++++++++++++++++++++++-------------------------
 1 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index 2ff32a1..1f85e3c 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -11,22 +11,22 @@
 #define RESTORE_BANKED(mode) \
 	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
 
-#define SAVE_ALL											\
-	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */					\
-	push {r0-r12}; /* Save R0-R12 */								\
-													\
-	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */				\
-	str r11, [sp, #UREGS_pc];									\
-													\
-	str lr, [sp, #UREGS_lr];									\
-													\
-	add r11, sp, #UREGS_kernel_sizeof+4;								\
-	str r11, [sp, #UREGS_sp];									\
-													\
-	mrs r11, SPSR_hyp;										\
-	str r11, [sp, #UREGS_cpsr];									\
-	and r11, #PSR_MODE_MASK;									\
-	cmp r11, #PSR_MODE_HYP;										\
+#define SAVE_ALL							\
+	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
+	push {r0-r12}; /* Save R0-R12 */				\
+									\
+	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
+	str r11, [sp, #UREGS_pc];					\
+									\
+	str lr, [sp, #UREGS_lr];					\
+									\
+	add r11, sp, #UREGS_kernel_sizeof+4;				\
+	str r11, [sp, #UREGS_sp];					\
+									\
+	mrs r11, SPSR_hyp;						\
+	str r11, [sp, #UREGS_cpsr];					\
+	and r11, #PSR_MODE_MASK;					\
+	cmp r11, #PSR_MODE_HYP;						\
 	blne save_guest_regs
 
 save_guest_regs:
@@ -43,25 +43,25 @@ save_guest_regs:
 	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
 	mov pc, lr
 
-#define DEFINE_TRAP_ENTRY(trap)										\
-	ALIGN;												\
-trap_##trap:												\
-	SAVE_ALL;											\
-	cpsie i; 	/* local_irq_enable */								\
-	adr lr, return_from_trap;									\
-	mov r0, sp;											\
-	mov r11, sp;											\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
+#define DEFINE_TRAP_ENTRY(trap)						\
+	ALIGN;								\
+trap_##trap:								\
+	SAVE_ALL;							\
+	cpsie i; 	/* local_irq_enable */				\
+	adr lr, return_from_trap;					\
+	mov r0, sp;							\
+	mov r11, sp;							\
+	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
 	b do_trap_##trap
 
-#define DEFINE_TRAP_ENTRY_NOIRQ(trap)									\
-	ALIGN;												\
-trap_##trap:												\
-	SAVE_ALL;											\
-	adr lr, return_from_trap;									\
-	mov r0, sp;											\
-	mov r11, sp;											\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
+#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
+	ALIGN;								\
+trap_##trap:								\
+	SAVE_ALL;							\
+	adr lr, return_from_trap;					\
+	mov r0, sp;							\
+	mov r11, sp;							\
+	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
 	b do_trap_##trap
 
 .globl hyp_traps_vector
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:34:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgGzA-0005kT-Iz; Wed, 05 Dec 2012 15:34:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TgGz9-0005kO-3g
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 15:34:07 +0000
Received: from [85.158.138.51:9895] by server-15.bemta-3.messagelabs.com id
	6B/13-23779-E696FB05; Wed, 05 Dec 2012 15:34:06 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354721644!27511507!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26649 invoked from network); 5 Dec 2012 15:34:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 15:34:04 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:53204 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TgH2n-0004kb-AD; Wed, 05 Dec 2012 16:37:53 +0100
Date: Wed, 5 Dec 2012 16:34:01 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <32870747.20121205163401@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<28369388.20121205000539@eikelenboom.it>
	<alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
	<1662073313.20121205130222@eikelenboom.it>
	<alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, December 5, 2012, 1:07:07 PM, you wrote:

> On Wed, 5 Dec 2012, Sander Eikelenboom wrote:
>> Wednesday, December 5, 2012, 12:51:13 PM, you wrote:
>> 
>> > On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
>> >> Tuesday, December 4, 2012, 12:01:05 PM, you wrote:
>> >> 
>> >> > On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> >> >> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>> >> >> >>  I had a quick look, and it doesn't look that hard to backport that patch.
>> >> >> > 
>> >> >> > Thanks, Mat.
>> >> >> > I'm glad to report that the patch does fix my problem.
>> >> >> > 
>> >> >> > And yes, it is really easy to port since the code did not change across the
>> >> >> > two releases.
>> >> >> > The only change would be line numbers (3841 vs 3803) and one extra comment
>> >> >> > before this line:
>> >> >> > 
>> >> >> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> >> >> > 
>> >> >> > I'm not sure if you are going to release another maintenance version that
>> >> >> > includes this patch,
>> >> >> > but I'll report this to the Debian maintainer since it's about to freeze for
>> >> >> > the v7.0 release, and v4.2.0 will not make it.
>> >> >> 
>> >> >> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> >> >> out?
>> >> >> 
>> >> 
>> >> > It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
>> >> > so I'd say it should be a candidate for Xen 4.1.4.
>> >> 
>> >> Just tried switching the device model to "qemu-xen", seems this one isn't upstream either.
>> >> (XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3
>> >> 
>> >> Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
>> >> 
>> 
>> > The patch is supposed to fix a bug affecting msi_translate but in
>> > upstream QEMU we haven't implemented msi_translate at all! So in this
>> > case the cause of the bug (and as a consequence the fix) must be a
>> > different one.

Using msi_translate=0 didn't change a thing; still got "unsupported delivery mode" and a non-working passthrough device.
lspci in the HVM shows MSI-X enabled for the device, but it doesn't function properly.

00:05.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
        Subsystem: Micro-Star International Co., Ltd. Device 4257
        Physical Slot: 5
        Flags: bus master, fast devsel, latency 0, IRQ 36
        Memory at f3250000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
        Capabilities: [70] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [90] MSI-X: Enable+ Count=8 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] #1033
        Kernel driver in use: xhci_hcd


Disabling MSI completely for the HVM (by using pci=nomsi for the HVM kernel) makes the passthrough device work OK with normal interrupts.

Looking into qemu-xen-dir/hw I do see xen-msi.c, so there seems to be work done in that area?
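
For reference, the mode in the "Unsupported delivery mode 3" message is the
3-bit delivery mode field in bits 10:8 of the MSI message data register, and
mode 3 is a reserved encoding. A quick shell sketch (the data value below is
purely illustrative, not taken from this device):

```shell
# Decode the MSI delivery mode (bits 10:8 of the message data register,
# per the PCI MSI capability layout). Example value is illustrative only.
msi_data=0x0300                       # message data with delivery mode 3
mode=$(( (msi_data >> 8) & 0x7 ))     # extract the 3-bit delivery mode
case $mode in
    0) name=fixed ;;
    1) name=lowest-priority ;;
    2) name=SMI ;;
    3) name=reserved_1 ;;             # the encoding vmsi.c rejects here
    4) name=NMI ;;
    5) name=INIT ;;
    6) name=reserved_2 ;;
    7) name=ExtINT ;;
esac
echo "delivery mode $mode ($name)"
```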

>>
>> OK, will see if I can find out what is going on. Probably have to force msi_translate=0.
>> 
>> I noticed "qemu-xen" doesn't make a log file in /var/log/xen as "qemu-dm" does, is this expected ?

> No, it is not. You should get the usual /var/log/xen/qemu-dm-domainname.log file.

You were right, although it lacks a bit in verbosity I guess ... the only 2 lines I'm getting are:

char device redirected to /dev/pts/17
Issued domain 25 poweroff

Instead of the 99 lines of output when using qemu-xen-traditional.


>> >> Sander
>> >> 
>> >> > -- Pasi
>> >> 
>> >> >> Jan
>> >> >> 
>> >> >> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
>> >> >> > <mats.petersson@citrix.com>wrote:
>> >> >> > 
>> >> >> >> On 03/12/12 13:19, Mats Petersson wrote:
>> >> >> >>
>> >> >> >>> On 03/12/12 13:14, G.R. wrote:
>> >> >> >>>
>> >> >> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>> >> >> >>>> <mats.petersson@citrix.com 
>> >> >> > <mailto:mats.petersson@citrix.**com<mats.petersson@citrix.com>>>
>> >> >> >>>> wrote:
>> >> >> >>>>
>> >> >> >>>>      On 03/12/12 03:47, G.R. wrote:
>> >> >> >>>>
>> >> >> >>>>          Hi developers,
>> >> >> >>>>          I met some domU issues and the log suggests missing interrupt.
>> >> >> >>>>          Details from here:
>> >> >> >>>>          http://www.gossamer-threads.**com/lists/xen/users/263938#** 
>> >> >> >>>> 263938 <http://www.gossamer-threads.com/lists/xen/users/263938#263938>
>> >> >> >>>>          In summary, this is the suspicious log:
>> >> >> >>>>
>> >> >> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>> >> >> >>>>
>> >> >> >>>>          I've checked the code in question and found that mode 3 is a
>> >> >> >>>>          'reserved_1' mode.
>> >> >> >>>>          I want to trace down the source of this mode setting to
>> >> >> >>>>          root-cause the issue.
>> >> >> >>>>          But I'm not an xen developer, and am even a newbie as a xen
>> >> >> >>>> user.
>> >> >> >>>>          Could anybody give me instructions about how to enable
>> >> >> >>>>          detailed debug log?
>> >> >> >>>>          It could be better if I can get advice about experiments to
>> >> >> >>>>          perform / switches to try out etc.
>> >> >> >>>>
>> >> >> >>>>          My SW config:
>> >> >> >>>>          dom0: Debian wheezy (3.6 experimental kernel, xen 4.1.3-4)
>> >> >> >>>>          domU: Debian wheezy 3.2.x stock kernel.
>> >> >> >>>>
>> >> >> >>>>          Thanks,
>> >> >> >>>>          Timothy
>> >> >> >>>>
>> >> >> >>>>      Are you passing hardware (PCI Passthrough) to the HVM guest?
>> >> >> >>>>      What are the exact messages in the DomU?
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Yes, I'm doing PCI passthrough (the IGD, audio && USB controller).
>> >> >> >>>> But this is actually a PVHVM guest since debian stock kernel has PVOP
>> >> >> >>>> enabled.
>> >> >> >>>> And when I tried another PVOP disabled linux distro (openelec v2.0), I
>> >> >> >>>> did not see such msi related error message.
>> >> >> >>>> Actually, with that domU I do not see anything obvious wrong from the
>> >> >> >>>> log, but I also see nothing from panel (panel receive no signal and go
>> >> >> >>>> power-saving) :-(
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Back to the issue I was reporting, the domU log looks like this:
>> >> >> >>>>
>> >> >> >>>> Dec 2 21:52:44 debvm kernel: [ 1085.604071]
>> >> >> >>>> [drm:i915_hangcheck_ring_idle]
>> >> >> >>>> *ERROR* Hangcheck timer elapsed... blt ring idle [waiting on 3354, at
>> >> >> >>>> 3354], missed IRQ?
>> >> >> >>>> Dec 2 21:56:50 debvm kernel: [ 1332.076071]
>> >> >> >>>> [drm:i915_hangcheck_ring_idle]
>> >> >> >>>> *ERROR* Hangcheck timer elapsed... render ring idle [waiting on 11297, at
>> >> >> >>>> 11297], missed IRQ?
>> >> >> >>>> Dec 2 22:32:48 debvm kernel: [ 7.220073] hda-intel: azx_get_response
>> >> >> >>>> timeout, switching to polling mode: last cmd=0x000f0000
>> >> >> >>>> Dec 2 22:45:32 debvm kernel: [ 776.392084] hda-intel: No response from
>> >> >> >>>> codec, disabling MSI: last cmd=0x002f0600
>> >> >> >>>> Dec 2 22:45:33 debvm kernel: [ 777.400075] hda_intel: azx_get_response
>> >> >> >>>> timeout, switching to single_cmd mode: last cmd=0x002f0600
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Thanks,
>> >> >> >>>> Timothy
>> >> >> >>>>
>> >> >> >>> It does sound like there is a fix in 4.2.0, as indicated by Jan, that
>> >> >> >>> addresses this. I'm not fully clued up on what the policy for
>> >> >> >>> backporting fixes is, and I haven't looked at the complexity of the fix
>> >> >> >>> itself, but either updating to 4.2.0 or a (personal) backport sounds
>> >> >> >>> like the right solution here.
>> >> >> >>>
>> >> >> >> I had a quick look, and it doesn't look that hard to backport that patch.
>> >> >> >>
>> >> >> >> --
>> >> >> >> Mats
>> >> >> >>
>> >> >> >>
>> >> >> >>> Unfortunately, I hadn't seen Jan's reply by the time I wrote my response
>> >> >> >>> to your original email.
>> >> >> >>>
>> >> >> >>> --
>> >> >> >>> Mats
>> >> >> >>>
>> >> >> >>> _______________________________________________
>> >> >> >>> Xen-devel mailing list
>> >> >> >>> Xen-devel@lists.xen.org 
>> >> >> >>> http://lists.xen.org/xen-devel 
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>
>> >> >> >> _______________________________________________
>> >> >> >> Xen-devel mailing list
>> >> >> >> Xen-devel@lists.xen.org 
>> >> >> >> http://lists.xen.org/xen-devel 
>> >> >> >>
>> >> >> 
>> >> >> 
>> >> >> 
>> >> >> _______________________________________________
>> >> >> Xen-devel mailing list
>> >> >> Xen-devel@lists.xen.org
>> >> >> http://lists.xen.org/xen-devel
>> >> 
>> >> 
>> >> 
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:34:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgGzA-0005kT-Iz; Wed, 05 Dec 2012 15:34:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TgGz9-0005kO-3g
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 15:34:07 +0000
Received: from [85.158.138.51:9895] by server-15.bemta-3.messagelabs.com id
	6B/13-23779-E696FB05; Wed, 05 Dec 2012 15:34:06 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354721644!27511507!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26649 invoked from network); 5 Dec 2012 15:34:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 15:34:04 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:53204 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TgH2n-0004kb-AD; Wed, 05 Dec 2012 16:37:53 +0100
Date: Wed, 5 Dec 2012 16:34:01 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <32870747.20121205163401@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
References: <CAKhsbWZo6XPL81Cq8U7wFA-f1ZEiOHja8_Br4D3k2QGwMN=wag@mail.gmail.com>
	<50BC7BAC.8050107@citrix.com>
	<CAKhsbWYym9Xd5ZD=PSX5EzFr-pyeYeCXcZOcZBPQMFuWtP+rwQ@mail.gmail.com>
	<50BCA6F3.8060804@citrix.com> <50BCAFEF.7040300@citrix.com>
	<CAKhsbWbx+Lt+R9rbRJOQ5+PG1jcnkB+X-qRX3SeUrS_6VCr-Ww@mail.gmail.com>
	<50BDDB6E02000078000ADA98@nat28.tlf.novell.com>
	<20121204110105.GX8912@reaktio.net>
	<28369388.20121205000539@eikelenboom.it>
	<alpine.DEB.2.02.1212051149100.8801@kaball.uk.xensource.com>
	<1662073313.20121205130222@eikelenboom.it>
	<alpine.DEB.2.02.1212051206250.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] Issue about domU missing interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, December 5, 2012, 1:07:07 PM, you wrote:

> On Wed, 5 Dec 2012, Sander Eikelenboom wrote:
>> Wednesday, December 5, 2012, 12:51:13 PM, you wrote:
>> 
>> > On Tue, 4 Dec 2012, Sander Eikelenboom wrote:
>> >> Tuesday, December 4, 2012, 12:01:05 PM, you wrote:
>> >> 
>> >> > On Tue, Dec 04, 2012 at 10:15:58AM +0000, Jan Beulich wrote:
>> >> >> >>> On 04.12.12 at 04:07, "G.R." <firemeteor@users.sourceforge.net> wrote:
>> >> >> >>  I had a quick look, and it doesn't look that hard to backport that patch.
>> >> >> > 
>> >> >> > Thanks, Mat.
>> >> >> > I'm glad to report that the patch does fix my problem.
>> >> >> > 
>> >> >> > And yes, it is really easy to port since the code did not change across
>> >> >> > the two releases.
>> >> >> > The only change would be line numbers (3841 vs 3803) and one extra
>> >> >> > comment before this line:
>> >> >> > 
>> >> >> >  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
>> >> >> > 
>> >> >> > I'm not sure if you are going to release another maintenance version
>> >> >> > that includes this patch,
>> >> >> > but I'll report this to the Debian maintainer since it's about to freeze
>> >> >> > for the v7.0 release and v4.2.0 will not make it.
>> >> >> 
>> >> >> Any thoughts on backporting this to 4.1.x, now or after 4.1.4 is
>> >> >> out?
>> >> >> 
>> >> 
>> >> > It's already in Xen 4.2, it's confirmed to fix a bug in Xen 4.1,
>> >> > so I'd say it should be a candidate for Xen 4.1.4.
>> >> 
>> >> Just tried switching the device model to "qemu-xen"; it seems this fix isn't upstream either.
>> >> (XEN) [2012-12-04 22:49:25] vmsi.c:108:d32767 Unsupported delivery mode 3
>> >> 
>> >> Running xen-unstable as of today, with device-model "qemu-xen" for this hvm guest.
>> >> 
>> 
>> > The patch is supposed to fix a bug affecting msi_translate but in
>> > upstream QEMU we haven't implemented msi_translate at all! So in this
>> > case the cause of the bug (and as a consequence the fix) must be a
>> > different one.

Using msi_translate=0 didn't change a thing; I still got "unsupported delivery mode" and a non-working passthrough device.
lspci in the HVM shows MSI-X enabled for the device, but it doesn't function properly.
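For reference, per-device MSI-INTx translation is normally toggled from the guest configuration file; a minimal sketch in xm/xend per-device syntax (the BDF 00:05.0 is taken from the lspci listing below, and the exact option spelling may differ between toolstack versions):

```python
# Guest config fragment (xm/xend python syntax): pass the device
# through with MSI-INTx translation disabled for that device.
pci = ['00:05.0,msitranslate=0']
```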

00:05.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30 [XHCI])
        Subsystem: Micro-Star International Co., Ltd. Device 4257
        Physical Slot: 5
        Flags: bus master, fast devsel, latency 0, IRQ 36
        Memory at f3250000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [50] Power Management version 3
        Capabilities: [70] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [90] MSI-X: Enable+ Count=8 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] #1033
        Kernel driver in use: xhci_hcd


Disabling MSI completely for the HVM (by using pci=nomsi for the HVM kernel) makes the passthrough device work OK with normal interrupts.
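For reference, the MSI/MSI-X state described above can be re-checked from inside the domU with standard Linux tools; a minimal sketch (the device address 00:05.0 is taken from the lspci listing above, and the xhci driver name is specific to this particular device):

```shell
# Show the MSI/MSI-X capability state for the passed-through device;
# "MSI-X: Enable+" means the guest driver has enabled MSI-X on it.
lspci -v -s 00:05.0 | grep -E 'MSI(-X)?:'

# See which delivery mechanism the IRQs actually use: with pci=nomsi
# the device's lines show IO-APIC instead of PCI-MSI.
grep -E 'xhci|PCI-MSI' /proc/interrupts

# Verify that pci=nomsi really made it onto the guest kernel command line.
grep -o 'pci=nomsi' /proc/cmdline || echo 'pci=nomsi not set'
```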

Looking into qemu-xen-dir/hw I do see xen-msi.c, so there seems to be work done in that area?

>>
>> Ok, I will see if I can find out what is going on. Probably I'll have to force msi_translate=0.
>> 
>> I noticed "qemu-xen" doesn't make a log file in /var/log/xen as "qemu-dm" does; is this expected?

> No, it is not. You should get the usual /var/log/xen/qemu-dm-domainname.log file.

You were right, although it lacks a bit in verbosity I guess ... the only 2 lines I'm getting are:

char device redirected to /dev/pts/17
Issued domain 25 poweroff

Instead of the 99 lines of output when using qemu-xen-traditional.


>> >> Sander
>> >> 
>> >> > -- Pasi
>> >> 
>> >> >> Jan
>> >> >> 
>> >> >> > On Mon, Dec 3, 2012 at 9:58 PM, Mats Petersson 
>> >> >> > <mats.petersson@citrix.com> wrote:
>> >> >> > 
>> >> >> >> On 03/12/12 13:19, Mats Petersson wrote:
>> >> >> >>
>> >> >> >>> On 03/12/12 13:14, G.R. wrote:
>> >> >> >>>
>> >> >> >>>> On Mon, Dec 3, 2012 at 6:15 PM, Mats Petersson
>> >> >> >>>> <mats.petersson@citrix.com> wrote:
>> >> >> >>>>
>> >> >> >>>>      On 03/12/12 03:47, G.R. wrote:
>> >> >> >>>>
>> >> >> >>>>          Hi developers,
>> >> >> >>>>          I met some domU issues and the log suggests missing interrupt.
>> >> >> >>>>          Details from here:
>> >> >> >>>>          http://www.gossamer-threads.com/lists/xen/users/263938#263938
>> >> >> >>>>          In summary, this is the suspicious log:
>> >> >> >>>>
>> >> >> >>>>          (XEN) vmsi.c:122:d32767 Unsupported delivery mode 3
>> >> >> >>>>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:42:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:42:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgH6a-00062t-It; Wed, 05 Dec 2012 15:41:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgH6Z-00062m-1x
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 15:41:47 +0000
Received: from [85.158.143.99:17140] by server-3.bemta-4.messagelabs.com id
	1C/70-06841-A3B6FB05; Wed, 05 Dec 2012 15:41:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354722104!22880076!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21929 invoked from network); 5 Dec 2012 15:41:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 15:41:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="216492327"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 15:41:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 10:41:43 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgH6V-00054d-Ea;
	Wed, 05 Dec 2012 15:41:43 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Wed, 5 Dec 2012 15:41:43 +0000
Message-ID: <1354722103-22585-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] README: docs/pdf/user.pdf was deleted in
	24563:4271634e4c86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 README |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/README b/README
index 1938f66..f5d5530 100644
--- a/README
+++ b/README
@@ -25,11 +25,8 @@ live relocation of VMs. Ports to Linux, NetBSD, FreeBSD and Solaris
 are available from the community.
 
 This file contains some quick-start instructions to install Xen on
-your system. For full documentation, see the Xen User Manual. If this
-is a pre-built release then you can find the manual at:
- dist/install/usr/share/doc/xen/pdf/user.pdf
-If you have a source release, then 'make -C docs' will build the
-manual at docs/pdf/user.pdf.
+your system. For more information see http://www.xen.org/ and
+http://wiki.xen.org/
 
 Quick-Start Guide
 =================
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:57:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHLS-0006WB-L7; Wed, 05 Dec 2012 15:57:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgHLR-0006W3-1v
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 15:57:09 +0000
Received: from [193.109.254.147:35613] by server-5.bemta-14.messagelabs.com id
	D9/25-10257-4DE6FB05; Wed, 05 Dec 2012 15:57:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354723025!9326596!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31590 invoked from network); 5 Dec 2012 15:57:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 15:57:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16177231"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 15:57:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 15:57:04 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgHLM-0003yL-1v;
	Wed, 05 Dec 2012 15:57:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgHLL-0001kU-GV;
	Wed, 05 Dec 2012 15:57:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14566-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 15:57:03 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14566: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14566 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14566/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14562
 test-amd64-amd64-xl-sedf-pin 11 guest-localmigrate          fail pass in 14562
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 14562 pass in 14566

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14484
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14484

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  a8a9e1c126ea
baseline version:
 xen                  d89986111f0c

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=a8a9e1c126ea
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing a8a9e1c126ea
+ branch=xen-4.1-testing
+ revision=a8a9e1c126ea
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r a8a9e1c126ea ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 6 changesets with 9 changes to 7 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:57:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHLS-0006WB-L7; Wed, 05 Dec 2012 15:57:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgHLR-0006W3-1v
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 15:57:09 +0000
Received: from [193.109.254.147:35613] by server-5.bemta-14.messagelabs.com id
	D9/25-10257-4DE6FB05; Wed, 05 Dec 2012 15:57:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354723025!9326596!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31590 invoked from network); 5 Dec 2012 15:57:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 15:57:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16177231"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 15:57:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 15:57:04 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgHLM-0003yL-1v;
	Wed, 05 Dec 2012 15:57:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgHLL-0001kU-GV;
	Wed, 05 Dec 2012 15:57:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14566-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 15:57:03 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14566: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14566 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14566/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14562
 test-amd64-amd64-xl-sedf-pin 11 guest-localmigrate          fail pass in 14562
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 14562 pass in 14566

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14484
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14484

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  a8a9e1c126ea
baseline version:
 xen                  d89986111f0c

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=a8a9e1c126ea
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing a8a9e1c126ea
+ branch=xen-4.1-testing
+ revision=a8a9e1c126ea
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r a8a9e1c126ea ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 6 changesets with 9 changes to 7 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:58:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:58:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHMe-0006bq-AV; Wed, 05 Dec 2012 15:58:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgHMc-0006bg-H2
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 15:58:22 +0000
Received: from [85.158.138.51:60785] by server-14.bemta-3.messagelabs.com id
	46/2C-31424-D1F6FB05; Wed, 05 Dec 2012 15:58:21 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1354723100!23444266!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29884 invoked from network); 5 Dec 2012 15:58:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 15:58:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16177269"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 15:58:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 15:58:11 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TgHMQ-0003yh-Oi; Wed, 05 Dec 2012 15:58:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TgHMQ-0002sc-JZ;
	Wed, 05 Dec 2012 15:58:10 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20671.28434.406210.691273@mariner.uk.xensource.com>
Date: Wed, 5 Dec 2012 15:58:10 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50BCD24B.1010608@citrix.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as device model by default"):
> On 27/11/12 16:17, Stefano Stabellini wrote:
> > -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
> > +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
> 
> Is there any way we may keep qemu-traditional as the default for NetBSD?
> Upstream QEMU is not working on NetBSD, and I'm afraid it needs some
> heavy patching.

Right.  OK, that's something we need to take care of then.

> Could a helper function be added to libxl_{netbsd/linux}.c to decide
> which device model to use?

Yes, I think that would be fine.

Ian.
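
A minimal sketch of the OS-specific helper suggested above, with hypothetical names (the enum values mirror libxl's LIBXL_DEVICE_MODEL_VERSION_* constants, but the function, its signature, and the string-based dispatch are illustrative only, not the actual libxl_{netbsd,linux}.c API):

```c
#include <string.h>

/* Illustrative stand-ins for libxl's device model version constants. */
typedef enum {
    DEVICE_MODEL_QEMU_XEN_TRADITIONAL,
    DEVICE_MODEL_QEMU_XEN,
} device_model_version;

/* Hypothetical per-OS default: on NetBSD, where upstream QEMU does not
 * yet work, fall back to qemu-traditional; elsewhere prefer qemu-xen.
 * In libxl this would live in the per-OS files (libxl_netbsd.c,
 * libxl_linux.c), each compiled-in version returning its own default
 * rather than dispatching on a string at runtime. */
static device_model_version default_device_model(const char *host_os)
{
    if (strcmp(host_os, "NetBSD") == 0)
        return DEVICE_MODEL_QEMU_XEN_TRADITIONAL;
    return DEVICE_MODEL_QEMU_XEN;
}
```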

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:58:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHMh-0006cI-N4; Wed, 05 Dec 2012 15:58:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgHMg-0006c1-2D
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 15:58:26 +0000
Received: from [85.158.143.99:48032] by server-3.bemta-4.messagelabs.com id
	5E/87-06841-12F6FB05; Wed, 05 Dec 2012 15:58:25 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354723059!28180010!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2MTk1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25889 invoked from network); 5 Dec 2012 15:57:40 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-15.tower-216.messagelabs.com with SMTP;
	5 Dec 2012 15:57:40 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 05 Dec 2012 07:57:37 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,221,1355126400"; d="scan'208";a="176513519"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 07:57:36 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 07:57:36 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Wed, 5 Dec 2012 23:57:34 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with
	regard to migration
Thread-Index: AQHNzhioH+TZqbearkyCq5DrDs2IGJgCq1rwgAe3YyA=
Date: Wed, 5 Dec 2012 15:57:33 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Liu, Jinsong wrote:
> Ian Campbell wrote:
>> On Wed, 2012-11-28 at 14:37 +0000, Liu, Jinsong wrote:
>>> Ping?
>> 
>> Sorry I've been meaning to reply but didn't manage to yet. Also you
>> replied to V4 saying to ignore it, so I was half waiting for V5 but I
>> see this should actually be labelled V5 anyway.
>> 
>> I'm afraid I still don't fully grok the reason for the loop that
>> goes with:
>> +            /*
>> +             * At the last iter, count the number of broken pages after sending,
>> +             * and if there are more than before sending, do one or more iter
>> +             * to make sure the pages are marked broken on the receiving side.
>> +             */
>> 
>> Can we go through it one more time? Sorry.
> 
> Sure, and I very much appreciate your kind and patient review. Your
> comments helped/forced me to re-think it in more detail.
> 
>> 
>> Let me outline the sequence of events and you can point out where I'm
>> going wrong. I'm afraid this has turned out to be rather long, again
>> I'm sorry for that.
>> 
>> First we do some number of iterations with the guest live. If an MCE
>> occurs during this phase then the page will be marked dirty and we
>> will pick this up on the next iteration and resend the page with the
>> dirty type etc and all is fine. This all looks good to me, so we
>> don't need to worry about anything at this stage.
> 
> Hmm, it seems not so safe. If the page is good and it's in the dirty
> bitmap (obtained at step #2), but during that iteration it breaks after
> #4, the same issue occurs as in your analysis of the last iteration -->
> the migration process will read the broken page (except in this case I
> agree it's OK) --> so let's merge the analysis of 'migration process
> reads broken page' with the last-iteration B.I case.     
> 
>> 
>> Eventually we get to the last iteration, at which point we pause the
>> guest. From here on in the guest itself is not going to cause an MCE
>> (e.g. by touching its RAM) because it is not running but we must
>> still account for the possibility of an asynchronous MCE of some
>> sort e.g. triggered by the error detection mechanisms in the
>> hardware, cosmic rays and such like.
>> 
>> The final iteration proceeds roughly as follows.
>> 
>>      1. The domain is paused
>>      2. We scan the dirty bitmap and add dirty pages to the batch of
>>         pages to process (there may be several batches in the last
>>         iteration, we only need to concern ourselves with any one   
>> batch here). 
>>      3. We map all of the pages in the resulting batch with        
>> xc_map_foreign_bulk 
>>      4. We query the types of all the pages in the batch with       
>> xc_get_pfn_type_batch 
>>      5. We iterate over the batch, checking the type of each page, in
>>         some cases we do some incidental processing.
>>      6. We send the types of the pages in the batch over the wire.
>>      7. We iterate over the batch again, and send any normal page
>>         (not broken, xtab etc) over the wire. Actually we do this as
>>         runs of normal pages, but the key point is we avoid touching
>>         any special page (including ones marked as broken by #4)
>> 
>> Is this sequence of events accurate?
> 
> Yes, exactly.
> 
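For readers following along, the batch-handling steps #3-#7 quoted above can be sketched as a self-contained toy model. This is only an illustration of the ordering, not real libxc code; the actual calls are xc_map_foreign_bulk and xc_get_pfn_type_batch, and the type names here are hypothetical stand-ins for the XEN_DOMCTL_PFINFO_* codes:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical page types standing in for the real pfn-info codes. */
enum pfn_type { PFN_NORMAL, PFN_BROKEN, PFN_XTAB };

/* Stand-in for step #4 (xc_get_pfn_type_batch): snapshot the current
 * type of each pfn in the batch.  An MCE striking *after* this call is
 * invisible to the sender -- which is exactly the race under discussion. */
static void get_pfn_type_batch(const enum pfn_type *truth,
                               const size_t *batch, size_t n,
                               enum pfn_type *types)
{
    for ( size_t i = 0; i < n; i++ )
        types[i] = truth[batch[i]];
}

/* Steps #6-#7: the types go over the wire first, then only the data of
 * normal pages is read and sent; broken/special pages are never touched. */
static size_t send_batch(const enum pfn_type *types, size_t n)
{
    size_t sent = 0;
    for ( size_t i = 0; i < n; i++ )
        if ( types[i] == PFN_NORMAL )
            sent++;    /* a real sender would copy the mapped page here */
    return sent;
}
```

The key property the discussion turns on is that send_batch() trusts the snapshot taken at #4: a page that breaks between #4 and #7 is still read as if it were normal.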
>> 
>> Now lets consider the consequences of an MCE occurring at various
>> stages here.
>> 
>> Any MCE which happens before #4 is fine, we will pick that up in #4
>> and the following steps will do the right thing.
>> 
>> Note that I am assuming that the mapping step in #3 is safe even for
>> a broken page, so long as we don't actually try and use the mapping
>> (more on that later), is this true?
> 
> I think so. The MMU mapping in #3 is safe in itself.
> 
>> 
>> If an MCE occurs after #4 then the page will be marked as dirty in
>> the bitmap and Xen will internally mark it as broken, but we won't
>> see either of those with the current algorithm. There are two cases
>> to think about here AFAICT,
>>      A. The page was not already dirty at #2. In this case we know
>>         that the guest hasn't dirtied the page since the previous
>>         iteration and therefore the target has a good copy of this
>>         page from that time. The page isn't even in the batch we are
>>         processing, so we don't particularly care about the MCE here
>>         and can, from the PoV of migrating this guest, ignore it.
>>      B. The page was already dirty (but not broken, we handled that
>>         case above in "Any MCE which happens before #4...") at #2
>>         which means we do not have an up to date copy on the
>>         target. This has two subcases:
>>              I. The MCE occurs before (or during) #6 (sending the
>>                 page) and therefore we do not have a good up to date
>>                 copy of that data at either end.
>>             II. The MCE occurs after #6, in which case we already
>>                 have a good copy at the target end.
>> 
>> To fix B you have added an 8th step to the above:
>> 
>>         8. Query the types of the pages again, using
>>         xc_get_pfn_type_batch, and if there are more pages dirty now
>>         than we saw at #4 (actually #5 when we scanned the array, but
>>         that distinction doesn't matter) then a new MCE must have
>>         occurred. Go back to #2 and try again.
>> 
>> This won't do anything for A since the page wasn't in the batch to
>> start with and so neither #4 or #8 will look at its type, this is
>> good and proper. 
>> 
>> So now we consider the two subcases of B. Lets consider B.II first
>> since it seems to be the more obvious case.
>> 
>> In case B.II the target end already has a good copy of the data page,
>> there is no need to mark the page as broken on the far end, nor to
>> arrange for a vMCE to be injected. I don't know if/how we arrange for
>> vMCEs to be injected under these circumstances, however even if a
>> vMCE does get injected into the guest when it eventually gets
>> unpaused on the target then all that will happen is that it will
>> needlessly throw away a good page. However this is a rare corner
>> case which is not worth concerning ourselves with (it's largely
>> indistinguishable from case A). If the MCE had happened even a
>> single cycle earlier then this would have been a B.I event instead
>> of a B.II one. In any case there is no need to return to #2 and try
>> again, everything will be just fine if we complete the migration at
>> this point. 
>> 
>> In case B.I the MCE occurs before (or while) we send the page onto
>> the wire. We will therefore try to read from this page because we
>> haven't looked at the type since #4 and have no idea that it is now
>> broken. 
> 
> Right.
> 
>> Reading from the broken page will cause a fault, perhaps causing a
>> vMCE to be delivered to dom0, which causes the kernel to kill the
>> process doing the migration. Or maybe it kills dom0 or the host
>> entirely. 
> 
> Not exactly, I think. Reading the broken page will trigger a serious
> SRAR error. In that case the hypervisor will inject a vMCE into the
> guest that is migrating, not into dom0. The reason for this injection
> is that the guest is the best one to handle it, with sufficient
> clues/status/information (other components like the hypervisor/dom0
> are not appropriate). As for the xl migration process, after returning
> from the MCE context it will *again* read the broken page ... this
> will kill the system entirely --> so we definitely don't need to care
> about migration any more.       
> 
>> Either
>> way the idea of looping again is rather moot.
> 
> I agree with the conclusion, though I have a slightly different
> opinion about its reason. So let's remove step #8? 
> 
> Thanks,
> Jinsong

More simply, we can remove not only the step #8 last-iteration check, but also the action to 'mark the broken page in the dirty bitmap':

In any iteration there is one key point, #4. If a vMCE occurs before #4, the proper pfn_type/pfn_number will be transferred and the xl migration process will not access the broken page (if the guest accesses the broken page again it will be killed by the hypervisor). If a vMCE occurs after #4, the system will crash and there is no need to care about migration any more.

So we can go back to the original patch, which handled 'vMCE occurs before migration', and we don't need to add any specific code to handle 'vMCE occurs during migration', since the original patch in fact handles both cases (and is simpler). Thoughts?

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 15:59:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 15:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHNh-0006ld-6I; Wed, 05 Dec 2012 15:59:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TgHNg-0006lQ-D0
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 15:59:28 +0000
Received: from [193.109.254.147:22712] by server-8.bemta-14.messagelabs.com id
	A7/7F-05026-F5F6FB05; Wed, 05 Dec 2012 15:59:27 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354723162!1794883!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDEwNjc2Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12332 invoked from network); 5 Dec 2012 15:59:26 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 15:59:26 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354723166; x=1386259166;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=7HbmNIxvphKwzPokZHuvty/rKR/JqXESQv5ZN1r204M=;
	b=eN1yfgSrhnKmLKmbdtM2+LAl5BDBpR0kimDqP6j9MUaI2ETC5j48o/pL
	qfOaHAy3rQRru9m1MkPmjkj8jNHshbE0Xpy6jrYEVRXh2zrloZ7EtYfhA
	JpV8RDtxv+m6ivXVZMWNS/xzH+G0/vhXpHmtIAimNVIikGw8oqAl7TevZ I=;
X-IronPort-AV: E=McAfee;i="5400,1158,6916"; a="409015534"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 05 Dec 2012 15:59:17 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB5FxFTQ029823
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 5 Dec 2012 15:59:16 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 5 Dec 2012 07:59:08 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Wed, 05 Dec 2012 07:59:08 -0800
Date: Wed, 5 Dec 2012 07:59:08 -0800
From: Matt Wilson <msw@amazon.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BF2574.6080702@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
> On 05/12/12 06:02, Matt Wilson wrote:
> > An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
> > but still have the flexibility to change the configuration later.
> > There's no logic that keys off of domain->is_pinned outside of
> > sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
> > is_pinned_vcpu() macro to only check for a single CPU set in the
> > cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
> > boots.
> 
> Sadly this patch will break things.  There are certain callers of
> is_pinned_vcpu() which rely on the value to allow access to certain
> power related MSRs, which is where the requirement for never permitting
> an update of the affinity mask comes from.

If this is true, the existing is_pinned_vcpu() test is broken:

   #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
                              cpumask_weight((v)->cpu_affinity) == 1)

It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
the MSR traps will suddenly start working.
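Matt's point can be demonstrated with a toy model of the macro (plain bitmask instead of a real cpumask_t, popcount standing in for cpumask_weight(); the struct and function names here are illustrative, not Xen's):

```c
#include <assert.h>

/* Toy vCPU: the is_pinned domain flag plus an affinity bitmask. */
typedef struct { int is_pinned; unsigned long cpu_affinity; } vcpu_model;

static int weight(unsigned long mask)
{
    return __builtin_popcountl(mask);   /* stand-in for cpumask_weight() */
}

/* The macro as it exists today, with ||: true if EITHER the domain flag
 * is set OR the affinity mask names exactly one pCPU. */
static int is_pinned_or(const vcpu_model *v)
{
    return v->is_pinned || weight(v->cpu_affinity) == 1;
}

/* The form proposed in the patch: weight-only, no domain flag. */
static int is_pinned_weight_only(const vcpu_model *v)
{
    return weight(v->cpu_affinity) == 1;
}
```

Because of the ||, a dom0 vCPU pinned 1:1 after boot (is_pinned == 0, single-bit affinity) already satisfies the existing test, so the MSR path opens up regardless of the flag -- which is why an && would be needed for the flag to actually gate access.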

See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd

> When I encountered this problem before, I considered implementing
> dom0_vcpu_pin=dynamic (or name to suit) which sets up an identity pin at
> create time, but leaves is_pinned as false. 

I could implement that, but I want to make sure we're fixing a real
problem. It sounds like Keir thinks this can be relaxed.

Matt

> > Signed-off-by: Matt Wilson <msw@amazon.com>
> >
> > diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
> > --- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
> > @@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
> >  
> >  > Default: `false`
> >  
> > -Pin dom0 vcpus to their respective pcpus
> > +Initially pin dom0 vcpus to their respective pcpus
> >  
> >  ### e820-mtrr-clip
> >  > `= <boolean>`
> > diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
> > --- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
> > @@ -45,10 +45,6 @@
> >  /* xen_processor_pmbits: xen control Cx, Px, ... */
> >  unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
> >  
> > -/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
> > -bool_t opt_dom0_vcpus_pin;
> > -boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > -
> >  /* Protect updates/reads (resp.) of domain_list and domain_hash. */
> >  DEFINE_SPINLOCK(domlist_update_lock);
> >  DEFINE_RCU_READ_LOCK(domlist_read_lock);
> > @@ -235,7 +231,6 @@ struct domain *domain_create(
> >  
> >      if ( domid == 0 )
> >      {
> > -        d->is_pinned = opt_dom0_vcpus_pin;
> >          d->disable_migrate = 1;
> >      }
> >  
> > diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
> > --- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
> > @@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
> >   * */
> >  int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
> >  integer_param("sched_ratelimit_us", sched_ratelimit_us);
> > +
> > +/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
> > +bool_t opt_dom0_vcpus_pin;
> > +boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > +
> >  /* Various timer handlers. */
> >  static void s_timer_fn(void *unused);
> >  static void vcpu_periodic_timer_fn(void *data);
> > @@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
> >       * domain-0 VCPUs, are pinned onto their respective physical CPUs.
> >       */
> >      v->processor = processor;
> > -    if ( is_idle_domain(d) || d->is_pinned )
> > +
> > +    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
> >          cpumask_copy(v->cpu_affinity, cpumask_of(processor));
> >      else
> >          cpumask_setall(v->cpu_affinity);
> > @@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
> >      cpumask_t online_affinity;
> >      cpumask_t *online;
> >  
> > -    if ( v->domain->is_pinned )
> > -        return -EINVAL;
> >      online = VCPU2ONLINE(v);
> >      cpumask_and(&online_affinity, affinity, online);
> >      if ( cpumask_empty(&online_affinity) )
> > diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
> > --- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
> > @@ -292,8 +292,6 @@ struct domain
> >      enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
> >      /* Domain is paused by controller software? */
> >      bool_t           is_paused_by_controller;
> > -    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
> > -    bool_t           is_pinned;
> >  
> >      /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
> >  #if MAX_VIRT_CPUS <= BITS_PER_LONG
> > @@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
> >  
> >  #define is_hvm_domain(d) ((d)->is_hvm)
> >  #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
> > -#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> > -                           cpumask_weight((v)->cpu_affinity) == 1)
> > +#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
> >  #ifdef HAS_PASSTHROUGH
> >  #define need_iommu(d)    ((d)->need_iommu)
> >  #else
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> -- 
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> an update of the affinity mask comes from.

If this is true, the existing is_pinned_vcpu() test is broken:

   #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
                              cpumask_weight((v)->cpu_affinity) == 1)

It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
the MSR traps will suddenly start working.

See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd
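
As a purely illustrative sketch (toy names, not Xen's, and a plain
popcount standing in for cpumask_weight()), the "pinned iff the
affinity mask names exactly one pCPU" semantics being proposed looks
like this:

```c
#include <stdint.h>

/* Toy stand-in for Xen's cpumask handling: an 8-pCPU affinity mask.
 * A vCPU counts as "pinned" when exactly one bit is set, i.e. it can
 * only ever run on one pCPU -- regardless of whether that mask came
 * from dom0_vcpus_pin at boot or a later explicit re-pin. */
typedef uint8_t toy_cpumask_t;

static int toy_cpumask_weight(toy_cpumask_t mask)
{
    int w = 0;

    while ( mask )
    {
        w += mask & 1;
        mask >>= 1;
    }
    return w;
}

/* Proposed semantics: pinned iff the affinity mask names one pCPU. */
static int toy_is_pinned(toy_cpumask_t affinity)
{
    return toy_cpumask_weight(affinity) == 1;
}
```

Under this model a post-boot 1:1 pin is indistinguishable from a boot-time
one, which is exactly why the MSR traps would start firing either way.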

> When I encountered this problem before, I considered implementing
> dom0_vcpu_pin=dynamic (or name to suit) which sets up an identity pin at
> create time, but leaves is_pinned as false. 

I could implement that, but I want to make sure we're fixing a real
problem. It sounds like Keir thinks this can be relaxed.

Matt

> > Signed-off-by: Matt Wilson <msw@amazon.com>
> >
> > diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
> > --- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
> > @@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
> >  
> >  > Default: `false`
> >  
> > -Pin dom0 vcpus to their respective pcpus
> > +Initially pin dom0 vcpus to their respective pcpus
> >  
> >  ### e820-mtrr-clip
> >  > `= <boolean>`
> > diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
> > --- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
> > @@ -45,10 +45,6 @@
> >  /* xen_processor_pmbits: xen control Cx, Px, ... */
> >  unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
> >  
> > -/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
> > -bool_t opt_dom0_vcpus_pin;
> > -boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > -
> >  /* Protect updates/reads (resp.) of domain_list and domain_hash. */
> >  DEFINE_SPINLOCK(domlist_update_lock);
> >  DEFINE_RCU_READ_LOCK(domlist_read_lock);
> > @@ -235,7 +231,6 @@ struct domain *domain_create(
> >  
> >      if ( domid == 0 )
> >      {
> > -        d->is_pinned = opt_dom0_vcpus_pin;
> >          d->disable_migrate = 1;
> >      }
> >  
> > diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
> > --- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
> > @@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
> >   * */
> >  int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
> >  integer_param("sched_ratelimit_us", sched_ratelimit_us);
> > +
> > +/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
> > +bool_t opt_dom0_vcpus_pin;
> > +boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > +
> >  /* Various timer handlers. */
> >  static void s_timer_fn(void *unused);
> >  static void vcpu_periodic_timer_fn(void *data);
> > @@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
> >       * domain-0 VCPUs, are pinned onto their respective physical CPUs.
> >       */
> >      v->processor = processor;
> > -    if ( is_idle_domain(d) || d->is_pinned )
> > +
> > +    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
> >          cpumask_copy(v->cpu_affinity, cpumask_of(processor));
> >      else
> >          cpumask_setall(v->cpu_affinity);
> > @@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
> >      cpumask_t online_affinity;
> >      cpumask_t *online;
> >  
> > -    if ( v->domain->is_pinned )
> > -        return -EINVAL;
> >      online = VCPU2ONLINE(v);
> >      cpumask_and(&online_affinity, affinity, online);
> >      if ( cpumask_empty(&online_affinity) )
> > diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
> > --- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
> > +++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
> > @@ -292,8 +292,6 @@ struct domain
> >      enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
> >      /* Domain is paused by controller software? */
> >      bool_t           is_paused_by_controller;
> > -    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
> > -    bool_t           is_pinned;
> >  
> >      /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
> >  #if MAX_VIRT_CPUS <= BITS_PER_LONG
> > @@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
> >  
> >  #define is_hvm_domain(d) ((d)->is_hvm)
> >  #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
> > -#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> > -                           cpumask_weight((v)->cpu_affinity) == 1)
> > +#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
> >  #ifdef HAS_PASSTHROUGH
> >  #define need_iommu(d)    ((d)->need_iommu)
> >  #else
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> -- 
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:02:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:02:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHQ9-0007RZ-Pg; Wed, 05 Dec 2012 16:02:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgHQ8-0007RA-OY
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 16:02:00 +0000
Received: from [85.158.137.99:4396] by server-10.bemta-3.messagelabs.com id
	DE/24-19806-3FF6FB05; Wed, 05 Dec 2012 16:01:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354723315!18078982!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5483 invoked from network); 5 Dec 2012 16:01:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:01:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16177419"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 16:01:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	16:01:55 +0000
Message-ID: <1354723313.17165.11.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Date: Wed, 5 Dec 2012 16:01:53 +0000
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(please trim your quotes)

> More simply, we can remove not only the step #8 check for the last
> iteration, but also the action to 'mark the broken page in the dirty
> bitmap': in any iteration there is a key point, #4. If the vMCE
> occurs before #4, the proper pfn_type/pfn_number will be transferred
> and (xl) migration will not access the broken page (if the guest
> accesses the broken page again it will be killed by the hypervisor).
> If the vMCE occurs after #4, the system will crash and migration no
> longer matters.
> 
> So we can go back to the original patch, which handled 'vMCE occurs
> before migration', and we don't need to add any specific code to
> handle 'vMCE occurs during migration', since the original patch in
> fact handles both cases (and is simpler). Thoughts?

Do you not need that stuff to handle MCEs during the live phase?

Specifically an MCE occurring on a page which is not in the current
batch at all can be handled safely during the live phase.
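
To illustrate the bookkeeping in question with a toy model (all names
below are made up, not Xen's): re-marking a broken page in the dirty
bitmap is what forces a later live-phase pass to retransmit it, even
if the page was already sent in an earlier iteration.

```c
#include <stdint.h>
#include <string.h>

#define TOY_NR_PAGES 16

/* Toy model of the live-migration bookkeeping. dirty[] plays the role
 * of the log-dirty bitmap; broken[] records pages taken out by a vMCE.
 * Re-dirtying a broken page guarantees the receiver eventually sees
 * its "broken" type even if the page was transferred earlier. */
struct toy_migration {
    uint8_t dirty[TOY_NR_PAGES];
    uint8_t broken[TOY_NR_PAGES];
};

static void toy_migration_init(struct toy_migration *m)
{
    memset(m, 0, sizeof(*m));
    /* The first pass sends everything, so everything starts dirty. */
    memset(m->dirty, 1, sizeof(m->dirty));
}

/* Sender: transmit one page and mark it clean. */
static void toy_send_page(struct toy_migration *m, unsigned int pfn)
{
    m->dirty[pfn] = 0;
}

/* vMCE handler: record the page as broken AND re-dirty it, so a later
 * live-phase iteration retransmits it with the broken type. */
static void toy_vmce_hit(struct toy_migration *m, unsigned int pfn)
{
    m->broken[pfn] = 1;
    m->dirty[pfn] = 1;
}
```

Without the re-dirty step, a page hit by an MCE after it was sent (and
outside the current batch) would never be resent with its new type.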

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:05:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:05:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHTY-0007nv-5o; Wed, 05 Dec 2012 16:05:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TgHTW-0007nV-PA
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:05:31 +0000
Received: from [85.158.138.51:58519] by server-4.bemta-3.messagelabs.com id
	B3/B6-30023-5C07FB05; Wed, 05 Dec 2012 16:05:25 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354723507!27582471!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDE2MTA3Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3643 invoked from network); 5 Dec 2012 16:05:09 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:05:09 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354723509; x=1386259509;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=oQk590YZaU+DovTx/IOQAoPFqlRqooilmG99pAzy43U=;
	b=v2N2Y6Zv9wayknmunoujYktnCMoC1D5qUXe4e6NGY+iFBOXTvlZSQrb2
	d5r3URzBlx5fDryqK9EEVYvIvvE3PBqRlD4ebMELkvujJV0+FGJr9XBUf
	omTFDvdsVrW4rTeu9G5h2XMp8ii2ls47cWJufzI6Vj8qKoTxFibpjetyR k=;
X-IronPort-AV: E=McAfee;i="5400,1158,6916"; a="490475661"
Received: from smtp-in-0102.sea3.amazon.com ([10.224.19.46])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 05 Dec 2012 16:04:46 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-0102.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB5G4j8s032196
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 5 Dec 2012 16:04:45 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.118) by
	ex10-hub-31005.ant.amazon.com (10.185.176.12) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 5 Dec 2012 08:04:35 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Wed, 05 Dec 2012 08:04:35 -0800
Date: Wed, 5 Dec 2012 08:04:35 -0800
From: Matt Wilson <msw@amazon.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121205160435.GB32088@u109add4315675089e695.ant.amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 07:59:08AM -0800, Matt Wilson wrote:
> On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
> > On 05/12/12 06:02, Matt Wilson wrote:
> > > An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
> > > but still have the flexibility to change the configuration later.
> > > There's no logic that keys off of domain->is_pinned outside of
> > > sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
> > > is_pinned_vcpu() macro to only check for a single CPU set in the
> > > cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
> > > boots.
> > 
> > Sadly this patch will break things.  There are certain callers of
> > is_pinned_vcpu() which rely on the value to allow access to certain
> > power related MSRs, which is where the requirement for never permitting
> > an update of the affinity mask comes from.
> 
> If this is true, the existing is_pinned_vcpu() test is broken:
> 
>    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
>                               cpumask_weight((v)->cpu_affinity) == 1)
> 
> It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
> the MSR traps will suddenly start working.
> 
> See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd
> 
> > When I encountered this problem before, I considered implementing
> > dom0_vcpu_pin=dynamic (or name to suit) which sets up an identity pin at
> > create time, but leaves is_pinned as false. 
> 
> I could implement that, but I want to make sure we're fixing a real
> problem. It sounds like Keir thinks this can be relaxed.

Sorry, misread the commit. Jan made that change.

> Matt
> 
> > > Signed-off-by: Matt Wilson <msw@amazon.com>
> > >
> > > diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
> > > --- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
> > > +++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
> > > @@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
> > >  
> > >  > Default: `false`
> > >  
> > > -Pin dom0 vcpus to their respective pcpus
> > > +Initially pin dom0 vcpus to their respective pcpus
> > >  
> > >  ### e820-mtrr-clip
> > >  > `= <boolean>`
> > > diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
> > > --- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
> > > +++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
> > > @@ -45,10 +45,6 @@
> > >  /* xen_processor_pmbits: xen control Cx, Px, ... */
> > >  unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
> > >  
> > > -/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
> > > -bool_t opt_dom0_vcpus_pin;
> > > -boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > > -
> > >  /* Protect updates/reads (resp.) of domain_list and domain_hash. */
> > >  DEFINE_SPINLOCK(domlist_update_lock);
> > >  DEFINE_RCU_READ_LOCK(domlist_read_lock);
> > > @@ -235,7 +231,6 @@ struct domain *domain_create(
> > >  
> > >      if ( domid == 0 )
> > >      {
> > > -        d->is_pinned = opt_dom0_vcpus_pin;
> > >          d->disable_migrate = 1;
> > >      }
> > >  
> > > diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
> > > --- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
> > > +++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
> > > @@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
> > >   * */
> > >  int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
> > >  integer_param("sched_ratelimit_us", sched_ratelimit_us);
> > > +
> > > +/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
> > > +bool_t opt_dom0_vcpus_pin;
> > > +boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > > +
> > >  /* Various timer handlers. */
> > >  static void s_timer_fn(void *unused);
> > >  static void vcpu_periodic_timer_fn(void *data);
> > > @@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
> > >       * domain-0 VCPUs, are pinned onto their respective physical CPUs.
> > >       */
> > >      v->processor = processor;
> > > -    if ( is_idle_domain(d) || d->is_pinned )
> > > +
> > > +    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
> > >          cpumask_copy(v->cpu_affinity, cpumask_of(processor));
> > >      else
> > >          cpumask_setall(v->cpu_affinity);
> > > @@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
> > >      cpumask_t online_affinity;
> > >      cpumask_t *online;
> > >  
> > > -    if ( v->domain->is_pinned )
> > > -        return -EINVAL;
> > >      online = VCPU2ONLINE(v);
> > >      cpumask_and(&online_affinity, affinity, online);
> > >      if ( cpumask_empty(&online_affinity) )
> > > diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
> > > --- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
> > > +++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
> > > @@ -292,8 +292,6 @@ struct domain
> > >      enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
> > >      /* Domain is paused by controller software? */
> > >      bool_t           is_paused_by_controller;
> > > -    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
> > > -    bool_t           is_pinned;
> > >  
> > >      /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
> > >  #if MAX_VIRT_CPUS <= BITS_PER_LONG
> > > @@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
> > >  
> > >  #define is_hvm_domain(d) ((d)->is_hvm)
> > >  #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
> > > -#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> > > -                           cpumask_weight((v)->cpu_affinity) == 1)
> > > +#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
> > >  #ifdef HAS_PASSTHROUGH
> > >  #define need_iommu(d)    ((d)->need_iommu)
> > >  #else
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> > -- 
> > Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> > T: +44 (0)1223 225 900, http://www.citrix.com
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > > +
> > > +/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
> > > +bool_t opt_dom0_vcpus_pin;
> > > +boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
> > > +
> > >  /* Various timer handlers. */
> > >  static void s_timer_fn(void *unused);
> > >  static void vcpu_periodic_timer_fn(void *data);
> > > @@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
> > >       * domain-0 VCPUs, are pinned onto their respective physical CPUs.
> > >       */
> > >      v->processor = processor;
> > > -    if ( is_idle_domain(d) || d->is_pinned )
> > > +
> > > +    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
> > >          cpumask_copy(v->cpu_affinity, cpumask_of(processor));
> > >      else
> > >          cpumask_setall(v->cpu_affinity);
> > > @@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
> > >      cpumask_t online_affinity;
> > >      cpumask_t *online;
> > >  
> > > -    if ( v->domain->is_pinned )
> > > -        return -EINVAL;
> > >      online = VCPU2ONLINE(v);
> > >      cpumask_and(&online_affinity, affinity, online);
> > >      if ( cpumask_empty(&online_affinity) )
> > > diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
> > > --- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
> > > +++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
> > > @@ -292,8 +292,6 @@ struct domain
> > >      enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
> > >      /* Domain is paused by controller software? */
> > >      bool_t           is_paused_by_controller;
> > > -    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
> > > -    bool_t           is_pinned;
> > >  
> > >      /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
> > >  #if MAX_VIRT_CPUS <= BITS_PER_LONG
> > > @@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
> > >  
> > >  #define is_hvm_domain(d) ((d)->is_hvm)
> > >  #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
> > > -#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> > > -                           cpumask_weight((v)->cpu_affinity) == 1)
> > > +#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
> > >  #ifdef HAS_PASSTHROUGH
> > >  #define need_iommu(d)    ((d)->need_iommu)
> > >  #else
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> > -- 
> > Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> > T: +44 (0)1223 225 900, http://www.citrix.com
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:05:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:05:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHTW-0007nW-Jw; Wed, 05 Dec 2012 16:05:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgHTU-0007nM-VA
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 16:05:29 +0000
Received: from [85.158.143.99:29582] by server-2.bemta-4.messagelabs.com id
	23/1B-30861-7C07FB05; Wed, 05 Dec 2012 16:05:27 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1354723519!22905881!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14764 invoked from network); 5 Dec 2012 16:05:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:05:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46698228"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 16:05:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 11:05:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgHTK-0005SG-Hx;
	Wed, 05 Dec 2012 16:05:18 +0000
Message-ID: <50BF6F5F.1020209@eu.citrix.com>
Date: Wed, 5 Dec 2012 15:59:27 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: "Liu, Jinsong" <jinsong.liu@intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 15:57, Liu, Jinsong wrote:
> More simply, we can remove not only the step #8 last-iteration check, but also the action to 'mark broken page in the dirty bitmap':
> In any iteration there is a key point, #4. If a vMCE occurs before #4, the proper pfn_type/pfn_number is transferred and (xl) migration will not access the broken page (if the guest accesses the broken page again it will be killed by the hypervisor). If a vMCE occurs after #4, the system will crash and migration no longer matters.
>
> So we can go back to the original patch, which handled 'vMCE occurs before migration', and add no specific code for 'vMCE occurs during migration', since the original in fact handles both cases (and is simpler). Thoughts?

I thought part of the point of that was to have a consistent behavior 
from the presented virtual hardware -- i.e., if the guest OS receives a 
vMCE, and subsequently touches that page, it should get an SRAR?  Is 
that important or not?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:07:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHUs-0007yS-Ma; Wed, 05 Dec 2012 16:06:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgHUr-0007yD-Dy
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:06:53 +0000
Received: from [85.158.138.51:62288] by server-11.bemta-3.messagelabs.com id
	EB/C2-19361-C117FB05; Wed, 05 Dec 2012 16:06:52 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1354723609!18704783!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22535 invoked from network); 5 Dec 2012 16:06:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:06:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="46698479"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 16:06:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 11:06:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgHUd-0005T5-5a;
	Wed, 05 Dec 2012 16:06:39 +0000
Message-ID: <50BF710F.2070407@citrix.com>
Date: Wed, 5 Dec 2012 16:06:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Matt Wilson <msw@amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
X-Enigmail-Version: 1.4.6
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 15:59, Matt Wilson wrote:
> On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
>> On 05/12/12 06:02, Matt Wilson wrote:
>>> An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
>>> but still have the flexibility to change the configuration later.
>>> There's no logic that keys off of domain->is_pinned outside of
>>> sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
>>> is_pinned_vcpu() macro to only check for a single CPU set in the
>>> cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
>>> boots.
>> Sadly this patch will break things.  There are certain callers of
>> is_pinned_vcpu() which rely on the value to allow access to certain
>> power related MSRs, which is where the requirement for never permitting
>> an update of the affinity mask comes from.
> If this is true, the existing is_pinned_vcpu() test is broken:
>
>    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
>                               cpumask_weight((v)->cpu_affinity) == 1)
>
> It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
> the MSR traps will suddenly start working.
>
> See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd
>
>> When I encountered this problem before, I considered implementing
>> dom0_vcpu_pin=dynamic (or name to suit) which sets up an identity pin at
>> create time, but leaves is_pinned as false. 
> I could implement that, but I want to make sure we're fixing a real
> problem. It sounds like Keir thinks this can be relaxed.
>
> Matt

Hmm - the logic does indeed look broken.  When I was working on this
problem, it was an older tree without this changeset.

With the current logic, the vcpu will transparently move to a different
set of MSRs whenever the affinity is changed, which does not sound safe
for an unaware vcpu.  I did not pay much attention to the function of
the MSRs.

~Andrew

>
>>> Signed-off-by: Matt Wilson <msw@amazon.com>
>>>
>>> diff -r 29247e44df47 -r 2614dd8be3a0 docs/misc/xen-command-line.markdown
>>> --- a/docs/misc/xen-command-line.markdown	Fri Nov 30 21:51:17 2012 +0000
>>> +++ b/docs/misc/xen-command-line.markdown	Wed Dec 05 05:48:23 2012 +0000
>>> @@ -453,7 +453,7 @@ Practices](http://wiki.xen.org/wiki/Xen_
>>>  
>>>  > Default: `false`
>>>  
>>> -Pin dom0 vcpus to their respective pcpus
>>> +Initially pin dom0 vcpus to their respective pcpus
>>>  
>>>  ### e820-mtrr-clip
>>>  > `= <boolean>`
>>> diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/domain.c
>>> --- a/xen/common/domain.c	Fri Nov 30 21:51:17 2012 +0000
>>> +++ b/xen/common/domain.c	Wed Dec 05 05:48:23 2012 +0000
>>> @@ -45,10 +45,6 @@
>>>  /* xen_processor_pmbits: xen control Cx, Px, ... */
>>>  unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
>>>  
>>> -/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned. */
>>> -bool_t opt_dom0_vcpus_pin;
>>> -boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
>>> -
>>>  /* Protect updates/reads (resp.) of domain_list and domain_hash. */
>>>  DEFINE_SPINLOCK(domlist_update_lock);
>>>  DEFINE_RCU_READ_LOCK(domlist_read_lock);
>>> @@ -235,7 +231,6 @@ struct domain *domain_create(
>>>  
>>>      if ( domid == 0 )
>>>      {
>>> -        d->is_pinned = opt_dom0_vcpus_pin;
>>>          d->disable_migrate = 1;
>>>      }
>>>  
>>> diff -r 29247e44df47 -r 2614dd8be3a0 xen/common/schedule.c
>>> --- a/xen/common/schedule.c	Fri Nov 30 21:51:17 2012 +0000
>>> +++ b/xen/common/schedule.c	Wed Dec 05 05:48:23 2012 +0000
>>> @@ -52,6 +52,11 @@ boolean_param("sched_smt_power_savings",
>>>   * */
>>>  int sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
>>>  integer_param("sched_ratelimit_us", sched_ratelimit_us);
>>> +
>>> +/* opt_dom0_vcpus_pin: If true, dom0 VCPUs are pinned at boot. */
>>> +bool_t opt_dom0_vcpus_pin;
>>> +boolean_param("dom0_vcpus_pin", opt_dom0_vcpus_pin);
>>> +
>>>  /* Various timer handlers. */
>>>  static void s_timer_fn(void *unused);
>>>  static void vcpu_periodic_timer_fn(void *data);
>>> @@ -194,7 +199,8 @@ int sched_init_vcpu(struct vcpu *v, unsi
>>>       * domain-0 VCPUs, are pinned onto their respective physical CPUs.
>>>       */
>>>      v->processor = processor;
>>> -    if ( is_idle_domain(d) || d->is_pinned )
>>> +
>>> +    if ( is_idle_domain(d) || (d->domain_id == 0 && opt_dom0_vcpus_pin) )
>>>          cpumask_copy(v->cpu_affinity, cpumask_of(processor));
>>>      else
>>>          cpumask_setall(v->cpu_affinity);
>>> @@ -595,8 +601,6 @@ int vcpu_set_affinity(struct vcpu *v, co
>>>      cpumask_t online_affinity;
>>>      cpumask_t *online;
>>>  
>>> -    if ( v->domain->is_pinned )
>>> -        return -EINVAL;
>>>      online = VCPU2ONLINE(v);
>>>      cpumask_and(&online_affinity, affinity, online);
>>>      if ( cpumask_empty(&online_affinity) )
>>> diff -r 29247e44df47 -r 2614dd8be3a0 xen/include/xen/sched.h
>>> --- a/xen/include/xen/sched.h	Fri Nov 30 21:51:17 2012 +0000
>>> +++ b/xen/include/xen/sched.h	Wed Dec 05 05:48:23 2012 +0000
>>> @@ -292,8 +292,6 @@ struct domain
>>>      enum { DOMDYING_alive, DOMDYING_dying, DOMDYING_dead } is_dying;
>>>      /* Domain is paused by controller software? */
>>>      bool_t           is_paused_by_controller;
>>> -    /* Domain's VCPUs are pinned 1:1 to physical CPUs? */
>>> -    bool_t           is_pinned;
>>>  
>>>      /* Are any VCPUs polling event channels (SCHEDOP_poll)? */
>>>  #if MAX_VIRT_CPUS <= BITS_PER_LONG
>>> @@ -713,8 +711,7 @@ void watchdog_domain_destroy(struct doma
>>>  
>>>  #define is_hvm_domain(d) ((d)->is_hvm)
>>>  #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
>>> -#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
>>> -                           cpumask_weight((v)->cpu_affinity) == 1)
>>> +#define is_pinned_vcpu(v) (cpumask_weight((v)->cpu_affinity) == 1)
>>>  #ifdef HAS_PASSTHROUGH
>>>  #define need_iommu(d)    ((d)->need_iommu)
>>>  #else
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>> -- 
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:07:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHVO-000830-48; Wed, 05 Dec 2012 16:07:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TgHVL-00082c-Ux
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:07:24 +0000
Received: from [85.158.139.83:40126] by server-15.bemta-5.messagelabs.com id
	A7/75-26920-B317FB05; Wed, 05 Dec 2012 16:07:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354723589!28508163!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28429 invoked from network); 5 Dec 2012 16:06:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:06:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16177564"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 16:06:15 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	16:06:15 +0000
Message-ID: <50BF70F5.6020903@citrix.com>
Date: Wed, 5 Dec 2012 17:06:13 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Thor Lancelot Simon <tls@panix.com>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<50BE161B.8000603@citrix.com> <20121204200739.GA23149@panix.com>
	<20121205101551.GA2999@asim.lip6.fr> <50BF1FD6.4010905@citrix.com>
	<20121205131159.GA29622@panix.com>
In-Reply-To: <20121205131159.GA29622@panix.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"port-xen@netbsd.org" <port-xen@NetBSD.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 14:11, Thor Lancelot Simon wrote:
> On Wed, Dec 05, 2012 at 11:20:06AM +0100, Roger Pau Monné wrote:
>> On 05/12/12 11:15, Manuel Bouyer wrote:
>>> On Tue, Dec 04, 2012 at 03:07:39PM -0500, Thor Lancelot Simon wrote:
>>>> On Tue, Dec 04, 2012 at 04:26:19PM +0100, Roger Pau Monné wrote:
>>>>>
>>>>> Independently of what we end up doing as default for handling raw file
>>>>> disks, could someone review this code?
>>>>>
>>>>> It's the first time I've done a device, so someone with more experience
>>>>> should review it.
>>>>
>>>> I am not sure I entirely follow what this code's doing, but it seems to
>>>> me it may allow arbitrary physical pages to be exposed to userspace
>>>> processes in dom0 -- or in a domU, albeit only if dom0 userspace says so.
>>>>
>>>> Is that a correct understanding of one of its effects?  If so, there's
>>>> a problem, since not being able to do precisely that is one important
>>>> assumption of the 4.4BSD security model.
>>>
>>> If I read it properly, It allows only to map pages that are part of a
>>> grant. You provide the ioctl a grant reference, and this is what
>>> the driver uses to find the physical pages. So it should be limited to
>>> pages that are referenced by a grant.
>>
>> Yes, it should be limited to grant pages, you are not able to map
>> arbitrary mfns.
> 
> So, can dom0 give away arbitrary physical pages to a domU which can
> then hand them back as a "grant", or is there other protection
> against that?  That was my concern.  I'm sorry I don't understand
> some of the fundamental terminology very well.

All of this "grants" machinery is handled at the hypervisor level: the
hypervisor is the one that assigns and manages grant pages, so Dom0 is
only able to map the specific pages that the remote DomU has permitted it
to map, and all of this is done through HYPERVISOR_grant_table_op calls.

The code that does this is already inside the NetBSD kernel (because it
is used by the backends there). What this patch does is allow userspace
processes to also map these grants into their own virtual address space.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:15:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:15:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHdJ-00008h-4G; Wed, 05 Dec 2012 16:15:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TgHdH-00008c-TD
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:15:36 +0000
Received: from [85.158.137.99:28992] by server-2.bemta-3.messagelabs.com id
	C2/68-04744-7237FB05; Wed, 05 Dec 2012 16:15:35 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1354724129!18033555!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15844 invoked from network); 5 Dec 2012 16:15:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:15:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,221,1355097600"; d="scan'208";a="16177800"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 16:15:29 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	16:15:28 +0000
Message-ID: <50BF731F.8070101@citrix.com>
Date: Wed, 5 Dec 2012 17:15:27 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <1354108788-2344-1-git-send-email-roger.pau@citrix.com>
	<20121205102417.GB2999@asim.lip6.fr>
In-Reply-To: <20121205102417.GB2999@asim.lip6.fr>
Cc: "port-xen@NetBSD.org" <port-xen@NetBSD.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: add gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 11:24, Manuel Bouyer wrote:
> On Wed, Nov 28, 2012 at 02:19:48PM +0100, Roger Pau Monne wrote:
>> This is a basic (and experimental) gntdev implementation for NetBSD.
>>
>> The gnt device allows usermode applications to map grant references in
>> userspace. It is mainly used by Qemu to implement a Xen backend (that
>> runs in userspace).
>>
>> Due to the fact that qemu-upstream is not yet functional in NetBSD,
>> the only way to try this gntdev is to use the old qemu
>> (qemu-traditional).
>>
>> Performance is not that bad (given that we are using qemu-traditional
>> and running a backend in userspace), the throughput of write
>> operations is 64.7 MB/s, while in the Dom0 it is 104.6 MB/s. Regarding
>> read operations, the throughput inside the DomU is 76.0 MB/s, while on
>> the Dom0 it is 108.8 MB/s.
>>
>> Patches to libxc and libxl are also comming soon.
>>
>> [...]
>> +map_grant_ref(struct gntmap *map);
> 
> please, prefix all struct, global variables and functions of the
> driver with gnt
> 
>> +		if (pmap_extract_ma(pmap_kernel(), k_va, &ma) == false) {
>> +			debug("unable to extract kernel MA");
>> +			return EFAULT;
>> +		}
> 
> Can't you get the pa or ma directly from the grant reference ?
> Adding a kernel mapping only to run pmap_extract_ma() on it looks like an
> innefficient way of doing it (this will require TLB shootdown IPIs).

I think I can get the mfn as a result of the GNTTABOP_map_grant_ref
operation; according to
http://xenbits.xen.org/docs/unstable/misc/grant-tables.txt it should be
in the dev_bus_addr field, but I'm not sure how up to date that document
is (I'm afraid it's not really maintained).
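
As a rough illustration of where dev_bus_addr sits in that operation,
here is a hand-copied sketch of the gnttab_map_grant_ref layout from
Xen's public grant_table.h (no hypercall is actually made here, and the
example grant reference and domid are invented):

```c
#include <assert.h>
#include <stdint.h>

/* Hand-copied sketch of the GNTTABOP_map_grant_ref argument layout
 * (field names follow xen/include/public/grant_table.h); reproduced
 * here only for illustration. */
typedef struct gnttab_map_grant_ref {
    /* IN parameters. */
    uint64_t host_addr;    /* address at which to map the frame */
    uint32_t flags;        /* GNTMAP_* flags */
    uint32_t ref;          /* grant reference offered by the remote domU */
    uint16_t dom;          /* domid of the granting domain */
    /* OUT parameters, filled in by the hypervisor. */
    int16_t  status;       /* GNTST_okay (0) on success */
    uint32_t handle;       /* handle for a later GNTTABOP_unmap_grant_ref */
    uint64_t dev_bus_addr; /* machine address of the mapped frame */
} gnttab_map_grant_ref_t;

#define GNTMAP_host_map (1u << 1) /* bit position as in the public header */

/* Prepare a map request for a (made-up) grant ref; a real driver would
 * pass this to HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1)
 * and check op.status == 0 before trusting op.dev_bus_addr. */
static gnttab_map_grant_ref_t
gnt_make_map_op(uint32_t ref, uint16_t dom, uint64_t host_addr)
{
    gnttab_map_grant_ref_t op = { 0 };
    op.host_addr = host_addr;
    op.flags = GNTMAP_host_map;
    op.ref = ref;
    op.dom = dom;
    return op;
}
```

So if the OUT fields are filled in as that document describes, the
driver could read the machine address straight from dev_bus_addr instead
of calling pmap_extract_ma().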

I've only just found that document, but it looks like it is possible to
update the pfn -> mfn mappings to point to the right mfn, so we might be
able to use the regular mmap device handler to perform the mappings.

On a different note, does anyone know why fileops doesn't implement an
mmap handler?

> Or maybe GNTTABOP_map_grant_ref could map directly in user space ?

GNTTABOP_map_grant_ref has an option, GNTMAP_application_map, but
there's no documentation about how it works, and the Linux code is not
trivial to understand (at least for me).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:16:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHdv-0000Bi-I5; Wed, 05 Dec 2012 16:16:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgHdt-0000BY-Q4
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 16:16:13 +0000
Received: from [85.158.143.35:25564] by server-2.bemta-4.messagelabs.com id
	FB/69-30861-D437FB05; Wed, 05 Dec 2012 16:16:13 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354724171!10152055!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM2Nzgy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26560 invoked from network); 5 Dec 2012 16:16:12 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 16:16:12 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 08:16:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,223,1355126400"; d="scan'208";a="176523827"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 08:15:41 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 08:15:41 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 00:15:39 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with
	regard to migration
Thread-Index: AQHN0wHXH+TZqbearkyCq5DrDs2IGJgKX2iw
Date: Wed, 5 Dec 2012 16:15:38 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A1B7D@SHSMSX101.ccr.corp.intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
	<1354723313.17165.11.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354723313.17165.11.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> (please trim your quotes)
> 
>> More simply, we can remove not only step #8 for last iteration check,
>> but also the action to 'mark broken page to dirty bitmap':
>> In any iteration, there is a key point #4, if vmce occur before #4 it
>> will transfer proper pfn_type/pfn_number and (xl migration) will not
>> access broken page (if guest access the broken page again it will be
>> killed by hypervisor), if vmce occur after #4 system will crash and
>> no need care migration any more. 
>> 
>> So we can go back to the original patch which used to handle 'vmce
>> occur before migration' and entirely don't need add specific code to
>> handle 'vmce occur during migration', since it in fact has handled
>> both cases (and simple). Thoughts?
> 
> Do you not need that stuff to handle MCEs during the live phase?
> 
> Specifically an MCE occurring on a page which is not in the current
> batch at all can be handled safely during the live phase.
> 
> Ian.

No, we don't need to care about it. Whether the page the MCE occurred at is in the current batch or not is not important:
1. if xl will not transfer it in the future, that's fine;
2. if xl will transfer it in the future but it was not in the current batch, that's also fine --> it will be handled in its own batch
(the guest will be killed if it accesses the broken page again).
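
A toy model of the case analysis above (pure illustration, not the
actual migration code): whichever batch a page ends up in, the sender
samples the page's state when that batch is built, so a page broken by
an earlier vMCE is transferred as a broken pfn type rather than by
reading the poisoned frame:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model: for each page in a batch, look at its state at the
 * moment the batch is built and decide how it is sent. Broken pages are
 * sent as a broken pfn type, so the migration process never touches the
 * poisoned frame's contents. */
enum page_state { PAGE_OK, PAGE_BROKEN };
enum sent_as    { SENT_CONTENTS, SENT_BROKEN_TYPE };

static void
send_batch(const enum page_state *state, size_t npages, enum sent_as *out)
{
    for (size_t i = 0; i < npages; i++)
        out[i] = (state[i] == PAGE_BROKEN) ? SENT_BROKEN_TYPE
                                           : SENT_CONTENTS;
}
```

Under this model it does not matter which batch the broken page belongs
to: it is handled correctly whenever its batch is sent.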

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:16:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHdv-0000Bi-I5; Wed, 05 Dec 2012 16:16:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgHdt-0000BY-Q4
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 16:16:13 +0000
Received: from [85.158.143.35:25564] by server-2.bemta-4.messagelabs.com id
	FB/69-30861-D437FB05; Wed, 05 Dec 2012 16:16:13 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354724171!10152055!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM2Nzgy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26560 invoked from network); 5 Dec 2012 16:16:12 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 16:16:12 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 08:16:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,223,1355126400"; d="scan'208";a="176523827"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 08:15:41 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 08:15:41 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 00:15:39 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with
	regard to migration
Thread-Index: AQHN0wHXH+TZqbearkyCq5DrDs2IGJgKX2iw
Date: Wed, 5 Dec 2012 16:15:38 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A1B7D@SHSMSX101.ccr.corp.intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
	<1354723313.17165.11.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354723313.17165.11.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> (please trim your quotes)
> 
>> More simply, we can remove not only step #8 for last iteration check,
>> but also the action to 'mark broken page to dirty bitmap':
>> In any iteration, there is a key point #4, if vmce occur before #4 it
>> will transfer proper pfn_type/pfn_number and (xl migration) will not
>> access broken page (if guest access the broken page again it will be
>> killed by hypervisor), if vmce occur after #4 system will crash and
>> no need care migration any more. 
>> 
>> So we can go back to the original patch which used to handle 'vmce
>> occur before migration' and entirely don't need add specific code to
>> handle 'vmce occur during migration', since it in fact has handled
>> both cases (and simple). Thoughts?
> 
> Do you not need that stuff to handle MCEs during the live phase?
> 
> Specifically an MCE occurring on a page which is not in the current
> batch at all can be handled safely during the live phase.
> 
> Ian.

No, we don't need to care about it. Whether the page the MCE occurred on is in the current batch is not important:
1. if xl will not transfer it in the future, that's fine;
2. if xl will transfer it in the future but it was not in the current batch, that's also fine --> it will be handled in its own batch;
(the guest will be killed if it accesses the broken page again)

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:25:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHn3-0000df-Ox; Wed, 05 Dec 2012 16:25:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgHn2-0000da-DN
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:25:40 +0000
Received: from [85.158.143.35:64050] by server-1.bemta-4.messagelabs.com id
	48/3E-27934-3857FB05; Wed, 05 Dec 2012 16:25:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1354724738!13707005!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27056 invoked from network); 5 Dec 2012 16:25:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 16:25:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 16:25:37 +0000
Message-Id: <50BF839002000078000AE3DF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 16:25:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>, "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 16:59, Matt Wilson <msw@amazon.com> wrote:
> On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
>> On 05/12/12 06:02, Matt Wilson wrote:
>> > An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
>> > but still have the flexibility to change the configuration later.
>> > There's no logic that keys off of domain->is_pinned outside of
>> > sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
>> > is_pinned_vcpu() macro to only check for a single CPU set in the
>> > cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
>> > boots.
>> 
>> Sadly this patch will break things.  There are certain callers of
>> is_pinned_vcpu() which rely on the value to allow access to certain
>> power related MSRs, which is where the requirement for never permitting
>> an update of the affinity mask comes from.
> 
> If this is true, the existing is_pinned_vcpu() test is broken:
> 
>    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
>                               cpumask_weight((v)->cpu_affinity) == 1)
> 
> It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
> the MSR traps will suddenly start working.
> 
> See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd 

I don't see what's wrong here. Certain things merely require the
pCPU that a vCPU runs on to be stable, which is what the test
above is for.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:29:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHqe-0000ks-DP; Wed, 05 Dec 2012 16:29:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TgHqc-0000km-Tb
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 16:29:23 +0000
Received: from [85.158.139.211:7616] by server-11.bemta-5.messagelabs.com id
	87/FB-03409-2667FB05; Wed, 05 Dec 2012 16:29:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354724961!19254889!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19766 invoked from network); 5 Dec 2012 16:29:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 16:29:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,223,1355097600"; d="scan'208";a="16178379"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 16:29:21 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	16:29:21 +0000
Message-ID: <50BF765F.10906@citrix.com>
Date: Wed, 5 Dec 2012 17:29:19 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <alpine.DEB.2.02.1211271511520.5310@kaball.uk.xensource.com>
	<50BCD24B.1010608@citrix.com>
	<20671.28434.406210.691273@mariner.uk.xensource.com>
In-Reply-To: <20671.28434.406210.691273@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as
 device model by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/12 16:58, Ian Jackson wrote:
> Roger Pau Monne writes ("Re: [Xen-devel] [PATCH] libxl: use qemu-xen (upstream QEMU) as device model by default"):
>> On 27/11/12 16:17, Stefano Stabellini wrote:
>>> -                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
>>> +                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
>>
>> Is there anyway we may keep qemu-traditional as default for NetBSD?
>> Upstream Qemu is not working on NetBSD, and I'm afraid it needs some
>> heavy patching.
> 
> Right.  OK, that's something we need to take care of then.

I have a pending patch for NetBSD that fixes a problem with the privcmd
device, and the way NetBSD handles IOCTL_PRIVCMD_MMAPBATCH. It is here:
http://mail-index.netbsd.org/port-xen/2012/06/27/msg007464.html

With this patch at least we are able to launch Qemu-upstream without
crashing, but the next problem is with network interfaces. There's no
way in NetBSD to change the name of a cloned tap interface, and in libxl
we pass the desired name of the tap interface to be created to Qemu, and
then we launch hotplug scripts according to that name. This doesn't work
in NetBSD, but I see several possible solutions:

1. Implement interface renaming in NetBSD

2. Implement a QMP interface in Qemu to query information about network
devices, so we can get the actual name of the interface that Qemu has
created. There was a partial implementation of this as part of a Qemu
GSoC project, but it never got committed; see
http://wiki.qemu.org/Google_Summer_of_Code_2010/QMP#query-netdev

3. Create the tap interface in libxl and pass the open file
descriptor to Qemu with "-net tap,fd=XXX[,...]"

More suggestions?

>> Could a helper function be added to libxl_{netbsd/linux}.c to decide
>> which device model to use?
>
> Yes, I think that would be fine.
> 
> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:32:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHsa-0000rY-V4; Wed, 05 Dec 2012 16:31:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgHsZ-0000rL-Gg
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:31:23 +0000
Received: from [85.158.139.211:59361] by server-4.bemta-5.messagelabs.com id
	A6/66-15011-8D67FB05; Wed, 05 Dec 2012 16:31:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1354725078!19135449!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25321 invoked from network); 5 Dec 2012 16:31:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 16:31:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 16:31:17 +0000
Message-Id: <50BF84E302000078000AE400@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 16:31:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
	<1354712534-31338-4-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354712534-31338-4-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 03/11] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 14:02, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++--
>  1 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 4d0f26b..a09fa97 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp;
> +    u64 data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
> @@ -1311,9 +1311,10 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
>          data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50;
> +               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);

There's still this literal use of 55 here.

Jan

>          break;
>      case MSR_IA32_VMX_PINBASED_CTLS:
> +    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
>          /* 1-seetings */
>          data = PIN_BASED_EXT_INTR_MASK |
>                 PIN_BASED_NMI_EXITING |
> @@ -1322,6 +1323,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = ((data | tmp) << 32) | (tmp);
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS:
> +    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
>          /* 1-seetings */
>          data = CPU_BASED_HLT_EXITING |
>                 CPU_BASED_VIRTUAL_INTR_PENDING |
> @@ -1353,6 +1355,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = (data << 32) | tmp;
>          break;
>      case MSR_IA32_VMX_EXIT_CTLS:
> +    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
>          /* 1-seetings */
>          tmp = VMX_EXIT_CTLS_DEFAULT1;
>          data = VM_EXIT_ACK_INTR_ON_EXIT |
> @@ -1367,6 +1370,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          data = ((data | tmp) << 32) | tmp;
>          break;
>      case MSR_IA32_VMX_ENTRY_CTLS:
> +    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
>          /* 1-seetings */
>          tmp = VMX_ENTRY_CTLS_DEFAULT1;
>          data = VM_ENTRY_LOAD_GUEST_PAT |
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:33:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHua-00015T-Gf; Wed, 05 Dec 2012 16:33:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgHuY-00015M-TA
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:33:27 +0000
Received: from [85.158.139.211:27201] by server-9.bemta-5.messagelabs.com id
	8C/2D-29295-6577FB05; Wed, 05 Dec 2012 16:33:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1354725204!14982017!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3844 invoked from network); 5 Dec 2012 16:33:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 16:33:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 16:33:23 +0000
Message-Id: <50BF856202000078000AE403@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 16:33:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354712534-31338-1-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 00/11] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 14:02, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> This series of patches contain some bug fixes and feature enabling for
> nested vmx, please help to review and pull.
> 
> For the following patches, it doesn't have influence about Xen on Xen 
> functionality.
> No special need to backport to 4.2.x (My own opinion).

Xen on Xen isn't the only thing being cared about afaik.

>   nested vmx: emulate MSR bitmaps
>   nested vmx: use literal name instead of hard numbers
>   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
>   nested vmx: fix rflags status in virtual vmexit
>   nested vmx: fix handling of RDTSC
>   nested vmx: fix DR access VM exit

For the three bug fixes above, I would want to know whether they are
relevant for 4.2.

>   nested vmx: enable IA32E mode while do VM entry
>   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
>   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
>   nested vmx: fix interrupt delivery to L2 guest

And one more here.

Jan

>   nested vmx: check host ability when intercept MSR read




From xen-devel-bounces@lists.xen.org Wed Dec 05 16:37:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgHxv-0001JM-56; Wed, 05 Dec 2012 16:36:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgHxt-0001JC-Pj
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 16:36:54 +0000
Received: from [85.158.137.99:15380] by server-10.bemta-3.messagelabs.com id
	49/E4-19806-4287FB05; Wed, 05 Dec 2012 16:36:52 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1354725411!14911636!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3Nzg4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31215 invoked from network); 5 Dec 2012 16:36:52 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-12.tower-217.messagelabs.com with SMTP;
	5 Dec 2012 16:36:52 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 05 Dec 2012 08:36:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,223,1355126400"; d="scan'208";a="252430154"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 05 Dec 2012 08:36:50 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 08:36:49 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 08:36:49 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 00:36:48 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with
	regard to migration
Thread-Index: AQHN0wGAH+TZqbearkyCq5DrDs2IGJgKYhaA
Date: Wed, 5 Dec 2012 16:36:47 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A1BF8@SHSMSX101.ccr.corp.intel.com>
References: <d3378692eece3e552ba2.1353630328@ljsromley.bj.intel.com>
	<CAFLBxZZ_W5PMLYiqEwdrpqkjgiMN9NoN2f33ErLriOwNOD9GgA@mail.gmail.com>
	<DE8DF0795D48FD4CA783C40EC82923353996FF@SHSMSX101.ccr.corp.intel.com>
	<1354183356.25834.108.camel@zakaz.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC829233539B85D@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A1AE4@SHSMSX101.ccr.corp.intel.com>
	<50BF6F5F.1020209@eu.citrix.com>
In-Reply-To: <50BF6F5F.1020209@eu.citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote:
> On 05/12/12 15:57, Liu, Jinsong wrote:
>> More simply, we can remove not only step #8 for last iteration
>> check, but also the action to 'mark broken page to dirty bitmap': 
>> In any iteration, there is a key point #4, if vmce occur before #4
>> it will transfer proper pfn_type/pfn_number and (xl migration) will
>> not access broken page (if guest access the broken page again it
>> will be killed by hypervisor), if vmce occur after #4 system will
>> crash and no need care migration any more.    
>> 
>> So we can go back to the original patch which used to handle 'vmce
>> occur before migration' and entirely don't need add specific code to
>> handle 'vmce occur during migration', since it in fact has handled
>> both cases (and simple). Thoughts?   
> 
> I thought part of the point of that was to have a consistent behavior
> from the presented virtual hardware -- i.e., if the guest OS receives
> a vMCE, and subsequently touches that page, it should get an SRAR?  Is
> that important or not?
> 
>   -George

Currently the hypervisor will kill the guest in such a case (instead of injecting an SRAR into the guest). The reason is that not all hardware platforms support SRAR, so if the guest accesses the broken page, the hardware will trigger a more serious error: on some older platforms that do not support SRAR, it will crash the whole system. To prevent this bad situation, the hypervisor prefers to kill the guest.
(Under the MCA architecture there is no CPUID bit to distinguish whether the platform supports SRAR --> if MCi_STATUS reports a known type indicating an SRAR, the hardware supports SRAR; otherwise the hardware reports an unknown error --> the hypervisor then kills the system in this case.)

So from the guest's point of view it will not seem strange: if it received a vMCE (SRAO) and then accessed the page again, it was killed, and can assume it is running on a platform that does not support SRAR.

Thanks,
Jinsong

From xen-devel-bounces@lists.xen.org Wed Dec 05 16:46:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:46:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgI78-0001ez-7z; Wed, 05 Dec 2012 16:46:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu>)
	id 1TgI76-0001eu-JO
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:46:24 +0000
Received: from [85.158.143.99:58097] by server-3.bemta-4.messagelabs.com id
	3B/2A-06841-F5A7FB05; Wed, 05 Dec 2012 16:46:23 +0000
X-Env-Sender: prvs=0683a96e43=Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354725979!18574581!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10620 invoked from network); 5 Dec 2012 16:46:21 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 16:46:21 -0000
Received: from aplexcas1.dom1.jhuapl.edu (unknown [128.244.198.90]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 4bcf_0737_90ed6009_4df4_4202_afa4_1d3dd4bb6899;
	Wed, 05 Dec 2012 11:46:11 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 5 Dec 2012
	11:43:55 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 5 Dec 2012 11:43:55 -0500
Thread-Topic: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
Thread-Index: Ac3S9+lTKw/viIfbRM6/zbbQcBvsBwADqhs6
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC208@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
	,<1354701851.15296.120.camel@zakaz.uk.xensource.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC206@aplesstripe.dom1.jhuapl.edu>,
	<1354719022.15296.215.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354719022.15296.215.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So the consensus is that the vtpm components are optional? I'm not sure I like basing the decision to build vtpm or not on the presence of cmake. That can lead to plenty of subtle surprises and hard-to-find build bugs for people building the same version of xen on different machines which may or may not have cmake installed. I'd prefer to either enable or disable it by default and just keep the cmake check conditional on whether vtpm is enabled or not.

If we want vtpm (and its cmake dependency) to be optional, I can just make it default disabled. What do you all want? Default enabled or disabled?

________________________________________
From: Ian Campbell [Ian.Campbell@citrix.com]
Sent: Wednesday, December 05, 2012 9:50 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom

On Wed, 2012-12-05 at 14:32 +0000, Fioravante, Matthew E. wrote:
> AC_ARG_VAR just allows you to do CMAKE=foo ./configure. It also adds a
> message about the CMAKE variable when you do ./configure --help. I'm
> not convinced this half needs to be conditional.

Agreed.

> The line that does the conditional check is the following:
> AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])
>
> AX_PATH_PROG_OR_FAIL_ARG is a new macro I added to path_or_fail.m4. It
> does the same thing as AX_PATH_PROG_OR_FAIL if the first argument (in
> this case "$vtpm") is equal to "y". If the argument is anything else
> it sets the variable (in this case CMAKE) to
> NAME_disabled_in_configure_script.

I see.

What I was expecting is that the default would be to build the vtpm
stuff if cmake was installed and to silently not do so if cmake wasn't.
If --enable-vtpm is given then it should of course fail noisily if cmake
isn't present.

> The result is that if someone tries to build vtpm after disabling it
> they will get a rather loud and obvious error message
> cmake-disabled-in-configure-script: command not found.

That sounds fine too.




From xen-devel-bounces@lists.xen.org Wed Dec 05 16:48:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 16:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgI9C-0001k7-P1; Wed, 05 Dec 2012 16:48:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgI9A-0001jz-RH
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 16:48:33 +0000
Received: from [85.158.137.99:49286] by server-10.bemta-3.messagelabs.com id
	9F/57-19806-0EA7FB05; Wed, 05 Dec 2012 16:48:32 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354726108!18085673!1
X-Originating-IP: [65.55.88.11]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17067 invoked from network); 5 Dec 2012 16:48:30 -0000
Received: from tx2ehsobe001.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.11)
	by server-13.tower-217.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Dec 2012 16:48:30 -0000
Received: from mail185-tx2-R.bigfish.com (10.9.14.242) by
	TX2EHSOBE002.bigfish.com (10.9.40.22) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 16:48:28 +0000
Received: from mail185-tx2 (localhost [127.0.0.1])	by
	mail185-tx2-R.bigfish.com (Postfix) with ESMTP id 70B1D2A0255;
	Wed,  5 Dec 2012 16:48:28 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(z551bizbb2dI98dI9371I1432Izz1de0h1202h1d1ah1d2ahzzz2dh668h839h93fhd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1155h)
Received: from mail185-tx2 (localhost.localdomain [127.0.0.1]) by mail185-tx2
	(MessageSwitch) id 1354726106154193_28150;
	Wed,  5 Dec 2012 16:48:26 +0000 (UTC)
Received: from TX2EHSMHS029.bigfish.com (unknown [10.9.14.242])	by
	mail185-tx2.bigfish.com (Postfix) with ESMTP id 1E93A4060C;
	Wed,  5 Dec 2012 16:48:26 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	TX2EHSMHS029.bigfish.com (10.9.99.129) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 16:48:23 +0000
X-WSS-ID: 0MEKHCK-02-MJG-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 25F8AC80BF;	Wed,  5 Dec 2012 10:48:19 -0600 (CST)
Received: from SAUSEXDAG02.amd.com (163.181.55.2) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 5 Dec 2012 10:30:38 -0600
Received: from [10.234.222.132] (163.181.55.254) by sausexdag02.amd.com
	(163.181.55.2) with Microsoft SMTP Server id 14.2.318.4; Wed, 5 Dec 2012
	10:48:20 -0600
Message-ID: <50BF7AD3.9010407@amd.com>
Date: Wed, 5 Dec 2012 11:48:19 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121025 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <ijc@hellion.org.uk>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354711402.15296.188.camel@zakaz.uk.xensource.com>
X-OriginatorOrg: amd.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	debian-kernel <debian-kernel@lists.debian.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/05/2012 07:43 AM, Ian Campbell wrote:
> I've just tried this on a fam 15h and I get:
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: CPU0 found a matching microcode update with version 0x6000629 (current=0x6000626)
>          (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: size 5260, block size 2592, offset 2660
>          (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base id is 6012)
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: CPU2 found a matching microcode update with version 0x6000629 (current=0x6000626)
>          (XEN) microcode: CPU2 updated from revision 0x6000626 to 0x6000629
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: size 5260, block size 2592, offset 2660
>          (XEN) microcode: CPU3 patch does not match (patch is 6101, cpu base id is 6012)
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: CPU4 found a matching microcode update with version 0x6000629 (current=0x6000626)
>          (XEN) microcode: CPU4 updated from revision 0x6000626 to 0x6000629
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: size 5260, block size 2592, offset 2660
>          (XEN) microcode: CPU5 patch does not match (patch is 6101, cpu base id is 6012)
>
>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>          (XEN) microcode: size 5260, block size 2592, offset 60
>          (XEN) microcode: CPU6 found a matching microcode update with version 0x6000629 (current=0x6000626)
>          (XEN) microcode: CPU6 updated from revision 0x6000626 to 0x6000629
>
>          ....
>
> It seems like it is applying successfully on only the even numbered
> cpus. Is this because the odd and even ones share some execution units
> and therefore share microcode updates too? IOW update CPU0 also updates
> CPU1 under the hood.
>
> If so then we probably want to teach Xen about this, although at least
> for now though it would mean that the microcode is actually getting
> applied despite the messages.

On fam15h, cores are grouped in pairs into compute units (CUs), and cores
in a CU share a microcode engine. So yes, you are right --- when we apply a
patch to one core, the other one sees the update.

I believe at some point we thought about making the code smarter and
applying the patch only on one core in a CU, but then decided against it
because of some corner cases. For example, there are parts with
single-core CUs, and it is not out of the question that some BIOSes may not
enumerate them correctly. Yes, we could figure this all out in the code,
but we didn't feel that adding the complexity was worth it.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:02:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:02:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIMT-0002EA-1U; Wed, 05 Dec 2012 17:02:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgIMR-0002E1-Tr
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:02:16 +0000
Received: from [85.158.139.211:49785] by server-9.bemta-5.messagelabs.com id
	E4/C8-29295-71E7FB05; Wed, 05 Dec 2012 17:02:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354726931!19259675!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyMTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18381 invoked from network); 5 Dec 2012 17:02:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Dec 2012 17:02:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Dec 2012 17:02:11 +0000
Message-Id: <50BF8C2002000078000AE42D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 05 Dec 2012 17:02:08 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
In-Reply-To: <50BF7AD3.9010407@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <ijc@hellion.org.uk>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 17:48, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> On 12/05/2012 07:43 AM, Ian Campbell wrote:
>> I've just tried this on a fam 15h and I get:
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: CPU0 found a matching microcode update with 
> version 0x6000629 (current=0x6000626)
>>          (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: size 5260, block size 2592, offset 2660
>>          (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base 
> id is 6012)
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: CPU2 found a matching microcode update with 
> version 0x6000629 (current=0x6000626)
>>          (XEN) microcode: CPU2 updated from revision 0x6000626 to 0x6000629
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: size 5260, block size 2592, offset 2660
>>          (XEN) microcode: CPU3 patch does not match (patch is 6101, cpu base 
> id is 6012)
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: CPU4 found a matching microcode update with 
> version 0x6000629 (current=0x6000626)
>>          (XEN) microcode: CPU4 updated from revision 0x6000626 to 0x6000629
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: size 5260, block size 2592, offset 2660
>>          (XEN) microcode: CPU5 patch does not match (patch is 6101, cpu base 
> id is 6012)
>>
>>          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>          (XEN) microcode: size 5260, block size 2592, offset 60
>>          (XEN) microcode: CPU6 found a matching microcode update with 
> version 0x6000629 (current=0x6000626)
>>          (XEN) microcode: CPU6 updated from revision 0x6000626 to 0x6000629
>>
>>          ....
>>
>> It seems like it is applying successfully on only the even numbered
>> cpus. Is this because the odd and even ones share some execution units
>> and therefore share microcode updates too? IOW update CPU0 also updates
>> CPU1 under the hood.
>>
>> If so then we probably want to teach Xen about this, although at least
>> for now though it would mean that the microcode is actually getting
>> applied despite the messages.
> 
> On fam15h cores are grouped in pairs into compute units (CUs) and cores 
> in CUs share microcode engine. So yes, you are right --- when we apply a 
> patch to one core, the other one sees the update.
> 
> I believe at some point we thought about making code smarter and 
> applying patch only on one core in a CU but then decided against it 
> because of some corner cases, For example, there are parts with 
> single-core CUs and it is not out of question that some BIOSes may not 
> enumerate them correctly. Yes, we can figure this all out in the code 
> but we didn't feel that adding complexity was worth it.

But all of this shouldn't lead to equivalent ID mismatches, should
it? It ought to simply find nothing to update...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:03:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgINN-0002Ly-Fk; Wed, 05 Dec 2012 17:03:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgINL-0002Ll-Uo
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:03:12 +0000
Received: from [85.158.139.211:59113] by server-10.bemta-5.messagelabs.com id
	44/D8-09257-F4E7FB05; Wed, 05 Dec 2012 17:03:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354726990!19213273!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1676 invoked from network); 5 Dec 2012 17:03:10 -0000
From xen-devel-bounces@lists.xen.org Wed Dec 05 17:03:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgINN-0002Ly-Fk; Wed, 05 Dec 2012 17:03:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgINL-0002Ll-Uo
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:03:12 +0000
Received: from [85.158.139.211:59113] by server-10.bemta-5.messagelabs.com id
	44/D8-09257-F4E7FB05; Wed, 05 Dec 2012 17:03:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354726990!19213273!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1676 invoked from network); 5 Dec 2012 17:03:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 17:03:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,223,1355097600"; d="scan'208";a="16179218"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 17:03:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Wed, 5 Dec 2012
	17:03:10 +0000
Message-ID: <1354726988.17165.20.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Date: Wed, 5 Dec 2012 17:03:08 +0000
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48BDDDC208@aplesstripe.dom1.jhuapl.edu>
References: <1354644571-3202-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354644571-3202-6-git-send-email-matthew.fioravante@jhuapl.edu>
	,<1354701851.15296.120.camel@zakaz.uk.xensource.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC206@aplesstripe.dom1.jhuapl.edu>
	,<1354719022.15296.215.camel@zakaz.uk.xensource.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC208@aplesstripe.dom1.jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 16:43 +0000, Fioravante, Matthew E. wrote:
> So the consensus is that the vtpm components are optional? I'm not
> sure I like basing the decision to build vtpm or not on the presence
> of cmake. That can lead to plenty of subtle surprises and hard-to-find
> build bugs for people building the same version of Xen on different
> machines, which may or may not have cmake installed.

This happens with lots of things when autoconf is involved; it's
expected. (I mean in general, not just in Xen.)

If you don't want to be surprised you should add --enable-vtpm or
--disable-vtpm as appropriate.

>  I'd prefer to either enable or disable it by default and just keep
> the cmake check conditional on whether vtpm is enabled or not.
> 
> If we want vtpm (and its cmake dependency) to be optional, I can just
> make it default disabled. What do you all want? Default enabled or
> disabled?

Default to building if it can be built please.

Ian.

> 
> ________________________________________
> From: Ian Campbell [Ian.Campbell@citrix.com]
> Sent: Wednesday, December 05, 2012 9:50 AM
> To: Fioravante, Matthew E.
> Cc: xen-devel@lists.xen.org; Ian Jackson
> Subject: Re: [Xen-devel] [VTPM v6 6/8] Add autoconf to stubdom
> 
> On Wed, 2012-12-05 at 14:32 +0000, Fioravante, Matthew E. wrote:
> > AC_ARG_VAR just allows you to do CMAKE=foo ./configure. It also adds a
> > message about the CMAKE variable when you do ./configure --help. I'm
> > not convinced this half needs to be conditional.
> 
> Agreed.
> 
> > The line that does the conditional check is the following:
> > AX_PATH_PROG_OR_FAIL_ARG([vtpm], [CMAKE], [cmake])
> >
> > AX_PATH_PROG_OR_FAIL_ARG is a new macro I added to path_or_fail.m4. It
> > does the same thing as AX_PATH_PROG_OR_FAIL if the first argument (in
> > this case "$vtpm") is equal to "y". If the argument is anything else
> > it sets the variable (in this case CMAKE) to
> > NAME_disabled_in_configure_script.
> 
> I see.
> 
> What I was expecting is that the default would be to build the vtpm
> stuff if cmake was installed and to silently not do so if cmake wasn't.
> If --enable-vtpm is given then it should of course fail noisily if cmake
> isn't present.
> 
> > The result is that if someone tries to build vtpm after disabling it
> > they will get a rather loud and obvious error message
> > cmake-disabled-in-configure-script: command not found.
> 
> That sounds fine too.
> 
> 
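The conditional check described above can be sketched in plain shell (variable and message names here are illustrative; the real logic lives in the AX_PATH_PROG_OR_FAIL_ARG m4 macro, and the actual sentinel string may differ):

```shell
#!/bin/sh
# Sketch of the configure-time behaviour described above.
# $vtpm stands in for the --enable-vtpm/--disable-vtpm result;
# it defaults to "n" here purely for illustration.
vtpm="${vtpm:-n}"

if [ "$vtpm" = "y" ]; then
    # vtpm enabled: fail noisily if cmake is missing.
    CMAKE=$(command -v cmake) || { echo "configure: error: cmake not found" >&2; exit 1; }
else
    # vtpm disabled: point CMAKE at a deliberately nonexistent command,
    # so a later attempt to build vtpm fails loudly with
    # "command not found" instead of misbehaving quietly.
    CMAKE=cmake_disabled_in_configure_script
fi

echo "CMAKE=$CMAKE"
```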



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:06:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIQ8-0002dH-2c; Wed, 05 Dec 2012 17:06:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgIQ6-0002cv-TZ
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:06:03 +0000
Received: from [85.158.139.211:4987] by server-7.bemta-5.messagelabs.com id
	36/A9-23096-AFE7FB05; Wed, 05 Dec 2012 17:06:02 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-3.tower-206.messagelabs.com!1354727161!18360015!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_SEX
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10291 invoked from network); 5 Dec 2012 17:06:01 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-3.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 17:06:01 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgIQ4-0008KB-B7; Wed, 05 Dec 2012 17:06:00 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgIPx-0000VS-TE; Wed, 05 Dec 2012 17:05:59 +0000
Message-ID: <1354727148.17165.23.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Boris Ostrovsky <boris.ostrovsky@amd.com>
Date: Wed, 05 Dec 2012 17:05:48 +0000
In-Reply-To: <50BF7AD3.9010407@amd.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 16:48 +0000, Boris Ostrovsky wrote:
> On 12/05/2012 07:43 AM, Ian Campbell wrote:
> > I've just tried this on a fam 15h and I get:
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: CPU0 found a matching microcode update with version 0x6000629 (current=0x6000626)
> >          (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: size 5260, block size 2592, offset 2660
> >          (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base id is 6012)
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: CPU2 found a matching microcode update with version 0x6000629 (current=0x6000626)
> >          (XEN) microcode: CPU2 updated from revision 0x6000626 to 0x6000629
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: size 5260, block size 2592, offset 2660
> >          (XEN) microcode: CPU3 patch does not match (patch is 6101, cpu base id is 6012)
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: CPU4 found a matching microcode update with version 0x6000629 (current=0x6000626)
> >          (XEN) microcode: CPU4 updated from revision 0x6000626 to 0x6000629
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000629
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: size 5260, block size 2592, offset 2660
> >          (XEN) microcode: CPU5 patch does not match (patch is 6101, cpu base id is 6012)
> >
> >          (XEN) microcode: collect_cpu_info: patch_id=0x6000626
> >          (XEN) microcode: size 5260, block size 2592, offset 60
> >          (XEN) microcode: CPU6 found a matching microcode update with version 0x6000629 (current=0x6000626)
> >          (XEN) microcode: CPU6 updated from revision 0x6000626 to 0x6000629
> >
> >          ....
> >
> > It seems like it is applying successfully on only the even-numbered
> > cpus. Is this because the odd and even ones share some execution units
> > and therefore share microcode updates too? IOW update CPU0 also updates
> > CPU1 under the hood.
> >
> > If so then we probably want to teach Xen about this, although at least
> > for now though it would mean that the microcode is actually getting
> > applied despite the messages.
> 
> On fam15h, cores are grouped in pairs into compute units (CUs), and cores
> in a CU share the microcode engine. So yes, you are right --- when we apply a
> patch to one core, the other one sees the update.
> 
> I believe at some point we thought about making the code smarter and
> applying the patch only on one core in a CU, but then decided against it
> because of some corner cases. For example, there are parts with
> single-core CUs, and it is not out of the question that some BIOSes may not
> enumerate them correctly. Yes, we could figure this all out in the code,
> but we didn't feel that adding the complexity was worth it.

It looks to me like Linux silently skips updating the microcode on a
core if it detects that the core already has that version, which avoids
this issue without the possibility of missing a core out in a corner
case.

I looked at trying to apply the same logic to the Xen side of things but
it is different enough that I can't immediately see how.
microcode_fits() would seem to be the place to do it, but I'm not at all
sure what this equiv table stuff is all about.

Ian.

-- 
Ian Campbell

I've finally found the perfect girl,
I couldn't ask for more,
She's deaf and dumb and over-sexed,
And owns a liquor store.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:07:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIQr-0002nz-H7; Wed, 05 Dec 2012 17:06:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TgIQp-0002nm-QG
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:06:48 +0000
Received: from [193.109.254.147:40210] by server-4.bemta-14.messagelabs.com id
	8C/91-18856-72F7FB05; Wed, 05 Dec 2012 17:06:47 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354727201!3849768!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTQ0Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11791 invoked from network); 5 Dec 2012 17:06:43 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 17:06:43 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354727203; x=1386263203;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=DdtbqNJ+RKsH7rO/uCryPG0uPcuOOpyhRi6TXjMtTIU=;
	b=cOmNm9GNND+PGD4aOOjnvtOfvGQeuwx0IkYyAnZZZmojJEIPW9tUEKEa
	zLgMEYfwFOZa6ZmqDhJNRJZ57yo1rKWEksxQKCMVq21HR2c7Q3J6QOc4F
	eGGI/U6WjPPVDg5wvQYqenFbvRLYw4M+vRdch8m4ORFgeAt5wWFYlRMgE k=;
X-IronPort-AV: E=McAfee;i="5400,1158,6917"; a="343098894"
Received: from smtp-in-31001.sea31.amazon.com ([10.184.168.27])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 05 Dec 2012 17:06:27 +0000
Received: from ex10-hub-9001.ant.amazon.com (ex10-hub-9001.ant.amazon.com
	[10.185.137.58])
	by smtp-in-31001.sea31.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB5H6QC4024364
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 5 Dec 2012 17:06:27 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9001.ant.amazon.com (10.185.137.58) with Microsoft SMTP Server
	id 14.2.247.3; Wed, 5 Dec 2012 09:06:20 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Wed, 05 Dec 2012 09:06:20 -0800
Date: Wed, 5 Dec 2012 09:06:20 -0800
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
	<50BF839002000078000AE3DF@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BF839002000078000AE3DF@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 04:25:36PM +0000, Jan Beulich wrote:
> >>> On 05.12.12 at 16:59, Matt Wilson <msw@amazon.com> wrote:
> > On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
> >> On 05/12/12 06:02, Matt Wilson wrote:
> >> > An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
> >> > but still have the flexibility to change the configuration later.
> >> > There's no logic that keys off of domain->is_pinned outside of
> >> > sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
> >> > is_pinned_vcpu() macro to only check for a single CPU set in the
> >> > cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
> >> > boots.
> >> 
> >> Sadly this patch will break things.  There are certain callers of
> >> is_pinned_vcpu() that rely on the value to allow access to certain
> >> power-related MSRs, which is where the requirement for never permitting
> >> an update of the affinity mask comes from.
> > 
> > If this is true, the existing is_pinned_vcpu() test is broken:
> > 
> >    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> >                               cpumask_weight((v)->cpu_affinity) == 1)
> > 
> > It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
> > the MSR traps will suddenly start working.
> > 
> > See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd 
> 
> I don't see what's wrong here. Certain things merely require the
> pCPU that a vCPU runs on to be stable, which is what the test
> above is for.

Me either. That said, are you willing to Ack and commit my patch that
started this thread?

Thanks,

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
> >> On 05/12/12 06:02, Matt Wilson wrote:
> >> > An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
> >> > but still have the flexibility to change the configuration later.
> >> > There's no logic that keys off of domain->is_pinned outside of
> >> > sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
> >> > is_pinned_vcpu() macro to only check for a single CPU set in the
> >> > cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
> >> > boots.
> >> 
> >> Sadly this patch will break things.  There are certain callers of
> >> is_pinned_vcpu() which rely on the value to allow access to certain
> >> power-related MSRs, which is where the requirement for never permitting
> >> an update of the affinity mask comes from.
> > 
> > If this is true, the existing is_pinned_vcpu() test is broken:
> > 
> >    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> >                               cpumask_weight((v)->cpu_affinity) == 1)
> > 
> > It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
> > the MSR traps will suddenly start working.
> > 
> > See commit: http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd 
> 
> I don't see what's wrong here. Certain things merely require the
> pCPU that a vCPU runs on to be stable, which is what the test
> above is for.

Me either. That said, are you willing to Ack and commit my patch that
started this thread?

Thanks,

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:17:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIap-0003Lu-Lp; Wed, 05 Dec 2012 17:17:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TgIao-0003Ln-CY
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:17:06 +0000
Received: from [85.158.143.35:51854] by server-2.bemta-4.messagelabs.com id
	66/A6-30861-1918FB05; Wed, 05 Dec 2012 17:17:05 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354727816!12727376!1
X-Originating-IP: [209.85.216.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11533 invoked from network); 5 Dec 2012 17:16:57 -0000
Received: from mail-qc0-f173.google.com (HELO mail-qc0-f173.google.com)
	(209.85.216.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 17:16:57 -0000
Received: by mail-qc0-f173.google.com with SMTP id b12so3336118qca.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 09:16:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=YjC4xjH/RP88Ripx0DnnJCHRTDNqnEbgzuiEMwYaOMo=;
	b=nfZUNAeM7z9oZUQhj29+GI4hHNN+zFNq07GOHdLNxWcRI4REjhoFU+6Lu38pjpIP59
	3pqkWG957kgD5cWYDlWqoDXMUJbEWcyHqo1aGFszWHj+DoITbpGoXrN0PhgJ+bKqG09d
	a3y1p82F2StPI65asgqOdmCUwYOLplQE7NAjNen+kgWvhHrBAvYrg2AEy9oR3BhWKBfM
	NrdhKiH019YcfMgbZFEFEe4h/NXQpnKFhA9n6SOoCbTmZZWjv7c6aiwfTXJnEKLev1QC
	T/RMYEkOKFUSCMzZxiB62Bo6FV3p/jTNDIjWjpnU+UYcj4gYfHDlBad9DqCT52dvzskX
	kRQg==
MIME-Version: 1.0
Received: by 10.224.195.136 with SMTP id ec8mr28512239qab.98.1354727815988;
	Wed, 05 Dec 2012 09:16:55 -0800 (PST)
Received: by 10.49.51.169 with HTTP; Wed, 5 Dec 2012 09:16:55 -0800 (PST)
In-Reply-To: <20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
	<50BF839002000078000AE3DF@nat28.tlf.novell.com>
	<20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
Date: Wed, 5 Dec 2012 17:16:55 +0000
X-Google-Sender-Auth: tmWp-LD8pVbaJhCoexWnlA5euFg
Message-ID: <CAFLBxZYb4BnBi+mdGQ50Cf_voGhje4p_v85K+mjO+-qwX7W5FQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Matt Wilson <msw@amazon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5722801519365496969=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5722801519365496969==
Content-Type: multipart/alternative; boundary=20cf3005dcae3d46da04d01e2619

--20cf3005dcae3d46da04d01e2619
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Dec 5, 2012 at 5:06 PM, Matt Wilson <msw@amazon.com> wrote:

> Me either. That said, are you willing to Ack and commit my patch that
> started this thread?
>

FWIW, I'm OK with the scheduler side of things; can't comment on the MSR
issue:
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

--20cf3005dcae3d46da04d01e2619--


--===============5722801519365496969==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5722801519365496969==--


From xen-devel-bounces@lists.xen.org Wed Dec 05 17:28:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIlU-0003hq-Sv; Wed, 05 Dec 2012 17:28:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgIlT-0003hl-RD
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:28:08 +0000
Received: from [85.158.138.51:55984] by server-10.bemta-3.messagelabs.com id
	EE/93-19806-7248FB05; Wed, 05 Dec 2012 17:28:07 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354728484!27529266!1
X-Originating-IP: [216.32.181.183]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8771 invoked from network); 5 Dec 2012 17:28:05 -0000
Received: from ch1ehsobe003.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.183)
	by server-16.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Dec 2012 17:28:05 -0000
Received: from mail191-ch1-R.bigfish.com (10.43.68.235) by
	CH1EHSOBE013.bigfish.com (10.43.70.63) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 17:28:04 +0000
Received: from mail191-ch1 (localhost [127.0.0.1])	by
	mail191-ch1-R.bigfish.com (Postfix) with ESMTP id 5E3E1260191;
	Wed,  5 Dec 2012 17:28:04 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(z551bizbb2dI98dI9371I1432Izz1de0h1202h1d1ah1d2ahzz8275bhz2dh668h839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1155h)
Received: from mail191-ch1 (localhost.localdomain [127.0.0.1]) by mail191-ch1
	(MessageSwitch) id 1354728481712337_23851;
	Wed,  5 Dec 2012 17:28:01 +0000 (UTC)
Received: from CH1EHSMHS037.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.252])	by mail191-ch1.bigfish.com (Postfix) with ESMTP id
	AB90D801C9;	Wed,  5 Dec 2012 17:28:01 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CH1EHSMHS037.bigfish.com (10.43.69.246) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 17:27:59 +0000
X-WSS-ID: 0MEKJ6K-02-0PL-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2B156C80B7;	Wed,  5 Dec 2012 11:27:55 -0600 (CST)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 5 Dec 2012 11:10:14 -0600
Received: from [10.234.222.132] (163.181.55.254) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server id 14.2.318.4; Wed, 5 Dec 2012
	11:27:57 -0600
Message-ID: <50BF841C.6010906@amd.com>
Date: Wed, 5 Dec 2012 12:27:56 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121025 Thunderbird/16.0.2
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
In-Reply-To: <50BF8C2002000078000AE42D@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <ijc@hellion.org.uk>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 12/05/2012 12:02 PM, Jan Beulich wrote:
>>>> On 05.12.12 at 17:48, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> On 12/05/2012 07:43 AM, Ian Campbell wrote:
>>> I've just tried this on a fam 15h and I get:
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU0 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: size 5260, block size 2592, offset 2660
>>>           (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base
>> id is 6012)
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU2 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU2 updated from revision 0x6000626 to 0x6000629
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: size 5260, block size 2592, offset 2660
>>>           (XEN) microcode: CPU3 patch does not match (patch is 6101, cpu base
>> id is 6012)
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU4 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU4 updated from revision 0x6000626 to 0x6000629
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: size 5260, block size 2592, offset 2660
>>>           (XEN) microcode: CPU5 patch does not match (patch is 6101, cpu base
>> id is 6012)
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU6 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU6 updated from revision 0x6000626 to 0x6000629
>>>
>>>           ....
>>>
>>> It seems like it is applying successfully on only the even numbered
>>> cpus. Is this because the odd and even ones share some execution units
>>> and therefore share microcode updates too? IOW update CPU0 also updates
>>> CPU1 under the hood.
>>>
>>> If so then we probably want to teach Xen about this, although at least
>>> for now though it would mean that the microcode is actually getting
>>> applied despite the messages.
>>
>> On fam15h cores are grouped in pairs into compute units (CUs) and cores
>> in CUs share microcode engine. So yes, you are right --- when we apply a
>> patch to one core, the other one sees the update.
>>
>> I believe at some point we thought about making code smarter and
>> applying patch only on one core in a CU but then decided against it
>> because of some corner cases. For example, there are parts with
>> single-core CUs and it is not out of the question that some BIOSes may not
>> enumerate them correctly. Yes, we can figure this all out in the code
>> but we didn't feel that adding complexity was worth it.
>
> But all of this shouldn't lead to equivalent ID mismatches, should
> it? It ought to simply find nothing to update...


The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
contain more than one patch. The driver walks this file patch by 
patch and checks whether each one should be applied.

I think what happened in Ian's case was that the patch file contained 
two patches --- one for this processor (ID 6012) and another for a 
different processor (ID 6101). (Both are family 15h but different revs).

The driver applied the first patch on core 0. Then, on core 1, the code 
tried the first patch (at file offset 60) and noticed that it is already 
applied. So it continued to the next patch (at offset 2660) which is not 
meant for this processor, thus generating the "does not match" message.

So we have at least a problem in how the error is reported to the log -- 
it is confusing. I'll try to make it more understandable.

And maybe core 1 shouldn't go into the second patch in the first place 
because it already found a patch for this processor (but decided that it 
is not needed based on patch ID).


-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:28:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIlU-0003hq-Sv; Wed, 05 Dec 2012 17:28:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgIlT-0003hl-RD
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:28:08 +0000
Received: from [85.158.138.51:55984] by server-10.bemta-3.messagelabs.com id
	EE/93-19806-7248FB05; Wed, 05 Dec 2012 17:28:07 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354728484!27529266!1
X-Originating-IP: [216.32.181.183]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8771 invoked from network); 5 Dec 2012 17:28:05 -0000
Received: from ch1ehsobe003.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.183)
	by server-16.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Dec 2012 17:28:05 -0000
Received: from mail191-ch1-R.bigfish.com (10.43.68.235) by
	CH1EHSOBE013.bigfish.com (10.43.70.63) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 17:28:04 +0000
Received: from mail191-ch1 (localhost [127.0.0.1])	by
	mail191-ch1-R.bigfish.com (Postfix) with ESMTP id 5E3E1260191;
	Wed,  5 Dec 2012 17:28:04 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(z551bizbb2dI98dI9371I1432Izz1de0h1202h1d1ah1d2ahzz8275bhz2dh668h839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1155h)
Received: from mail191-ch1 (localhost.localdomain [127.0.0.1]) by mail191-ch1
	(MessageSwitch) id 1354728481712337_23851;
	Wed,  5 Dec 2012 17:28:01 +0000 (UTC)
Received: from CH1EHSMHS037.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.252])	by mail191-ch1.bigfish.com (Postfix) with ESMTP id
	AB90D801C9;	Wed,  5 Dec 2012 17:28:01 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CH1EHSMHS037.bigfish.com (10.43.69.246) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 17:27:59 +0000
X-WSS-ID: 0MEKJ6K-02-0PL-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2B156C80B7;	Wed,  5 Dec 2012 11:27:55 -0600 (CST)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 5 Dec 2012 11:10:14 -0600
Received: from [10.234.222.132] (163.181.55.254) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server id 14.2.318.4; Wed, 5 Dec 2012
	11:27:57 -0600
Message-ID: <50BF841C.6010906@amd.com>
Date: Wed, 5 Dec 2012 12:27:56 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121025 Thunderbird/16.0.2
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
In-Reply-To: <50BF8C2002000078000AE42D@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <ijc@hellion.org.uk>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 12/05/2012 12:02 PM, Jan Beulich wrote:
>>>> On 05.12.12 at 17:48, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> On 12/05/2012 07:43 AM, Ian Campbell wrote:
>>> I've just tried this on a fam 15h and I get:
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU0 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: size 5260, block size 2592, offset 2660
>>>           (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base
>> id is 6012)
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU2 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU2 updated from revision 0x6000626 to 0x6000629
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: size 5260, block size 2592, offset 2660
>>>           (XEN) microcode: CPU3 patch does not match (patch is 6101, cpu base
>> id is 6012)
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU4 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU4 updated from revision 0x6000626 to 0x6000629
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000629
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: size 5260, block size 2592, offset 2660
>>>           (XEN) microcode: CPU5 patch does not match (patch is 6101, cpu base
>> id is 6012)
>>>
>>>           (XEN) microcode: collect_cpu_info: patch_id=0x6000626
>>>           (XEN) microcode: size 5260, block size 2592, offset 60
>>>           (XEN) microcode: CPU6 found a matching microcode update with
>> version 0x6000629 (current=0x6000626)
>>>           (XEN) microcode: CPU6 updated from revision 0x6000626 to 0x6000629
>>>
>>>           ....
>>>
>>> It seems like it is applying successfully on only the even numbered
>>> cpus. Is this because the odd and even ones share some execution units
>>> and therefore share microcode updates too? IOW update CPU0 also updates
>>> CPU1 under the hood.
>>>
>>> If so then we probably want to teach Xen about this, although at least
>>> for now it would mean that the microcode is actually getting applied
>>> despite the messages.
>>
>> On fam15h, cores are grouped in pairs into compute units (CUs), and the
>> cores in a CU share a microcode engine. So yes, you are right --- when we
>> apply a patch to one core, the other one sees the update.
>>
>> I believe at some point we thought about making the code smarter and
>> applying the patch on only one core in a CU, but then decided against it
>> because of some corner cases. For example, there are parts with
>> single-core CUs, and it is not out of the question that some BIOSes may
>> not enumerate them correctly. Yes, we could figure all of this out in
>> the code, but we didn't feel the added complexity was worth it.
>
> But all of this shouldn't lead to equivalent ID mismatches, should
> it? It ought to simply find nothing to update...


The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
contain more than one patch. The driver goes over this file patch by 
patch and decides for each one whether to apply it.

I think what happened in Ian's case was that the patch file contained 
two patches --- one for this processor (ID 6012) and another for a 
different processor (ID 6101). (Both are family 15h but different revs).

The driver applied the first patch on core 0. Then, on core 1, the code 
tried the first patch (at file offset 60) and noticed that it was already 
applied. So it continued to the next patch (at offset 2660), which is not 
meant for this processor, thus generating the "does not match" message.

So we have at least a problem in how the error is reported to the log -- 
it is confusing. I'll try to make it more understandable.

And maybe core 1 shouldn't proceed to the second patch in the first 
place, because it has already found a patch for this processor (and 
decided, based on the patch ID, that it is not needed).


-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:33:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:33:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIqa-00041R-RY; Wed, 05 Dec 2012 17:33:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgIqZ-00041L-Nh
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:33:24 +0000
Received: from [85.158.138.51:29576] by server-12.bemta-3.messagelabs.com id
	D5/77-22757-E558FB05; Wed, 05 Dec 2012 17:33:18 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354728796!27575572!1
X-Originating-IP: [207.46.163.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15515 invoked from network); 5 Dec 2012 17:33:18 -0000
Received: from co9ehsobe002.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.25)
	by server-8.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Dec 2012 17:33:18 -0000
Received: from mail78-co9-R.bigfish.com (10.236.132.234) by
	CO9EHSOBE004.bigfish.com (10.236.130.67) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 17:33:15 +0000
Received: from mail78-co9 (localhost [127.0.0.1])	by mail78-co9-R.bigfish.com
	(Postfix) with ESMTP id C7B34A00E2;
	Wed,  5 Dec 2012 17:33:15 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(z551bizbb2dI98dI9371I1432Izz1de0h1202h1d1ah1d2ahzzz2dh668h839h93fhd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1155h)
Received: from mail78-co9 (localhost.localdomain [127.0.0.1]) by mail78-co9
	(MessageSwitch) id 1354728793422507_13798;
	Wed,  5 Dec 2012 17:33:13 +0000 (UTC)
Received: from CO9EHSMHS006.bigfish.com (unknown [10.236.132.240])	by
	mail78-co9.bigfish.com (Postfix) with ESMTP id 64DCC2E005A;
	Wed,  5 Dec 2012 17:33:13 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO9EHSMHS006.bigfish.com (10.236.130.16) with Microsoft SMTP Server id
	14.1.225.23; Wed, 5 Dec 2012 17:33:08 +0000
X-WSS-ID: 0MEKJF6-01-AT1-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 200791028095;	Wed,  5 Dec 2012 11:33:06 -0600 (CST)
Received: from SAUSEXDAG01.amd.com (163.181.55.1) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 5 Dec 2012 11:15:12 -0600
Received: from [10.234.222.132] (163.181.55.254) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server id 14.2.318.4; Wed, 5 Dec 2012
	11:33:06 -0600
Message-ID: <50BF8551.6000607@amd.com>
Date: Wed, 5 Dec 2012 12:33:05 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121025 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <ijc@hellion.org.uk>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<1354727148.17165.23.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354727148.17165.23.camel@zakaz.uk.xensource.com>
X-OriginatorOrg: amd.com
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 12/05/2012 12:05 PM, Ian Campbell wrote:
>
> I looked at trying to apply the same logic to the Xen side of things but
> it is different enough that I can't immediately see how.
> microcode_fits() would seem to be the place to do it, but I'm not at all
> sure what this equiv table stuff is all about.
>


Because more than one processor revision may require the same patch, we 
group processors into "equivalence classes". The mapping is stored in 
the patch file header.

The Equivalent Processor ID is verified by HW when the patch is being 
loaded.

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:35:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgIsP-00047f-Ch; Wed, 05 Dec 2012 17:35:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgIsN-00047V-BA
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 17:35:15 +0000
Received: from [85.158.139.211:57593] by server-4.bemta-5.messagelabs.com id
	95/1F-15011-2D58FB05; Wed, 05 Dec 2012 17:35:14 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354728913!19222850!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 669 invoked from network); 5 Dec 2012 17:35:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 17:35:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,223,1355097600"; d="scan'208";a="16179863"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 17:35:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 17:35:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgIsL-0004WC-A4;
	Wed, 05 Dec 2012 17:35:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgIsL-0005gY-0E;
	Wed, 05 Dec 2012 17:35:13 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14570-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 17:35:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14570: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14570 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14570/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win  8 guest-saverestore        fail REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check             fail   never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-win-vcpus1                             fail    
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   fail    
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:43:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJ0E-0004Qf-Dl; Wed, 05 Dec 2012 17:43:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TgJ0C-0004Qa-PA
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 17:43:21 +0000
Received: from [193.109.254.147:19777] by server-8.bemta-14.messagelabs.com id
	E8/23-05026-8B78FB05; Wed, 05 Dec 2012 17:43:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1354729397!2678890!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMDQ0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27616 invoked from network); 5 Dec 2012 17:43:19 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 17:43:19 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB5HhAad002546
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Dec 2012 17:43:11 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB5Hh9Pf008167
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 5 Dec 2012 17:43:10 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB5Hh9JA029113; Wed, 5 Dec 2012 11:43:09 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Dec 2012 09:43:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E70FF1C05A8; Wed,  5 Dec 2012 12:43:07 -0500 (EST)
Date: Wed, 5 Dec 2012 12:43:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Message-ID: <20121205174307.GC16072@phenom.dumpdata.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"lenb@kernel.org" <lenb@kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Nov 30, 2012 at 03:08:02AM +0000, Liu, Jinsong wrote:
> Konrad Rzeszutek Wilk wrote:
> > On Wed, Nov 21, 2012 at 11:45:04AM +0000, Liu, Jinsong wrote:
> >>> From 630c65690c878255ce71e7c1172338ed08709273 Mon Sep 17 00:00:00
> >>> 2001 
> >> From: Liu Jinsong <jinsong.liu@intel.com>
> >> Date: Tue, 20 Nov 2012 21:14:37 +0800
> >> Subject: [PATCH 1/2] Xen acpi memory hotplug driver
> >> 
> >> Xen acpi memory hotplug consists of 2 logic components:
> >> Xen acpi memory hotplug driver and Xen hypercall.
> >> 
> >> This patch implement Xen acpi memory hotplug driver. When running
> >> under xen platform, Xen driver will early occupy (so native driver
> > 
> > How will it 'early occupy'? Can you spell it out here please?
> 
> Sure, will add it like:
> 'When running under the xen platform, at boot the xen memory hotplug driver will occupy the device early via subsys_initcall (earlier than the native module_init), so the xen driver will take effect and the native driver will be blocked.'

OK.
> 
> > 
> >> will be blocked). When acpi memory notify OSPM, xen driver will take
> >> effect, adding related memory device and parsing memory information.
> >> 
> >> Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
> >> ---
> >>  drivers/xen/Kconfig               |   11 +
> >>  drivers/xen/Makefile              |    1 +
> >>  drivers/xen/xen-acpi-memhotplug.c |  383
> >>  +++++++++++++++++++++++++++++++++++++ 3 files changed, 395
> >>  insertions(+), 0 deletions(-) create mode 100644
> >> drivers/xen/xen-acpi-memhotplug.c 
> >> 
> >> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> >> index 126d8ce..abd0396 100644
> >> --- a/drivers/xen/Kconfig
> >> +++ b/drivers/xen/Kconfig
> >> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
> >>  	  Allow kernel fetching MCE error from Xen platform and
> >>  	  converting it into Linux mcelog format for mcelog tools
> >> 
> >> +config XEN_ACPI_MEMORY_HOTPLUG
> >> +	bool "Xen ACPI memory hotplug"
> > 
> > There should be a way to make this a module.
> 
> I have some concerns about making it a module:
> 1. The xen and native memhotplug drivers would both work as modules, but we need to load the xen driver early.
> 2. If possible, a xen stub driver might solve the load-sequence issue, but it may introduce other problems:
>   * if the xen driver loads and then unloads, the native driver may get a chance to load successfully;

The stub driver would still "occupy" the ACPI bus for the memory hotplug PnP, so
I think this would not be a problem.

>   * if the Xen driver loads --> unloads --> loads again, it will lose any hotplug notifications delivered during the unload period;

Sure. But I think we can do it with this driver? After all the function of 
it is to just tell the firmware to turn on/off sockets - and if we miss
one notification we won't take advantage of the power savings - but we
can do that later on.


>   * if the Xen driver loads --> unloads --> loads again, it will re-add all memory devices, but the handles for 'boot-time memory devices' and 'hotplugged memory devices' are different, and we have no way to distinguish these two kinds of device.

Wouldn't the stub driver hold onto that?

> 
> IMHO, making the Xen hotplug logic a module may involve unexpected results. Are there any obvious advantages to doing so? After all, we have provided the config choice to the user. Thoughts?

Yes, it becomes a module - which is what we want.

> 
> > 
> > 
> >> +	depends on XEN_DOM0 && X86_64 && ACPI
> >> +	default n
> >> +	help
> >> +	  This is Xen acpi memory hotplug.
> >                       ^^^^ -> ACPI
> > 
> >> +
> >> +	  Currently Xen only support acpi memory hot-add. If you want
> >                                      ^^^^-> ACPI
> > 
> >> +	  to hot-add memory at runtime (the hot-added memory cannot be
> >> +	  removed until machine stop), select Y here, otherwise select N.
> >> +
> >>  endmenu
> >> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> >> index 7435470..c339eb4 100644
> >> --- a/drivers/xen/Makefile
> >> +++ b/drivers/xen/Makefile
> >> @@ -30,6 +30,7 @@ obj-$(CONFIG_XEN_MCE_LOG)		+= mcelog.o
> >>  obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
> >>  obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
> >>  obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
> >> +obj-$(CONFIG_XEN_ACPI_MEMORY_HOTPLUG)	+= xen-acpi-memhotplug.o 
> >>  xen-evtchn-y				:= evtchn.o
> >>  xen-gntdev-y				:= gntdev.o
> >>  xen-gntalloc-y				:= gntalloc.o
> >> diff --git a/drivers/xen/xen-acpi-memhotplug.c b/drivers/xen/xen-acpi-memhotplug.c
> >> new file mode 100644
> >> index 0000000..f0c7990
> >> --- /dev/null
> >> +++ b/drivers/xen/xen-acpi-memhotplug.c
> >> @@ -0,0 +1,383 @@
> >> +/*
> >> + * Copyright (C) 2012 Intel Corporation
> >> + *    Author: Liu Jinsong <jinsong.liu@intel.com>
> >> + *    Author: Jiang Yunhong <yunhong.jiang@intel.com>
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License as published by
> >> + * the Free Software Foundation; either version 2 of the License, or (at
> >> + * your option) any later version.
> >> + *
> >> + * This program is distributed in the hope that it will be useful, but
> >> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> >> + * NON INFRINGEMENT.  See the GNU General Public License for more
> >> + * details.
> >> + */
> >> +
> >> +#include <linux/kernel.h>
> >> +#include <linux/init.h>
> >> +#include <linux/types.h>
> >> +#include <acpi/acpi_drivers.h>
> >> +
> >> +#define ACPI_MEMORY_DEVICE_CLASS		"memory"
> >> +#define ACPI_MEMORY_DEVICE_HID			"PNP0C80"
> >> +#define ACPI_MEMORY_DEVICE_NAME			"Hotplug Mem Device"
> > 
> > Weird tabs?
> > 
> 
> It was ported from the native driver and the tabs seem right? Will double check.
> 
> >> +
> >> +#undef PREFIX
> > 
> > Why the #undef ?
> >> +#define PREFIX "ACPI:memory_hp:"
> > 
> > 
> > Not "ACPI:memory_xen:" ?
> 
> OK, how about the more detailed "ACPI:xen_memory_hotplug:"?

Sure.
> 
> > 
> > 
> >> +
> >> +static int acpi_memory_device_add(struct acpi_device *device);
> >> +static int acpi_memory_device_remove(struct acpi_device *device, int type);
> >> +
> >> +static const struct acpi_device_id memory_device_ids[] = {
> >> +	{ACPI_MEMORY_DEVICE_HID, 0},
> >> +	{"", 0},
> >> +};
> >> +
> >> +static struct acpi_driver acpi_memory_device_driver = {
> >> +	.name = "acpi_memhotplug",
> > 
> > Not 'xen_acpi_memhotplug' ?
> 
> No, the driver name here (the same as the native driver's name) is used to block the native driver from loading.

Then you need a comment in the file explaining that.

> 
> > 
> >> +	.class = ACPI_MEMORY_DEVICE_CLASS,
> >> +	.ids = memory_device_ids,
> >> +	.ops = {
> >> +		.add = acpi_memory_device_add,
> >> +		.remove = acpi_memory_device_remove,
> > 
> > Just for sake of clarity I would prefix those with 'xen_'.
> 
> OK.
> 
> > 
> >> +		},
> >> +};
> >> +
> >> +struct acpi_memory_info {
> >> +	struct list_head list;
> >> +	u64 start_addr;		/* Memory Range start physical addr */
> >> +	u64 length;		/* Memory Range length */
> >> +	unsigned short caching;	/* memory cache attribute */
> >> +	unsigned short write_protect;	/* memory read/write attribute */
> > 
> > Can't the write_protect by a bit field like the 'enabled'? So
> > 	unsigned int write_protect:1;
> > ?
> 
> That seems not good: write_protect is copied from an ACPI buffer (byte 3) obtained from _CRS evaluation.

Ah, pls put a comment in there as well why that cannot be done.

> 
> >> +	unsigned int enabled:1;
> >> +};
> >> +
> >> +struct acpi_memory_device {
> >> +	struct acpi_device *device;
> >> +	struct list_head res_list;
> >> +};
> >> +
> >> +static int acpi_hotmem_initialized;
> > 
> > Just make it a bool and also use __read_mostly please.
> 
> OK.
> 
> > 
> >> +
> >> +
> >> +int xen_acpi_memory_enable_device(struct acpi_memory_device *mem_device)
> >> +{
> >> +	return 0;
> >> +}
> > 
> > Why even have this function if it does not do anything?
> 
> It's not a nop; it is implemented in patch 2/2.

Yup, saw it in the next patch.
> 
> > 
> >> +
> >> +static acpi_status
> >> +acpi_memory_get_resource(struct acpi_resource *resource, void *context)
> >> +{
> >> +	struct acpi_memory_device *mem_device = context;
> >> +	struct acpi_resource_address64 address64;
> >> +	struct acpi_memory_info *info, *new;
> >> +	acpi_status status;
> >> +
> >> +	status = acpi_resource_to_address64(resource, &address64);
> >> +	if (ACPI_FAILURE(status) ||
> >> +	    (address64.resource_type != ACPI_MEMORY_RANGE))
> >> +		return AE_OK;
> >> +
> >> +	list_for_each_entry(info, &mem_device->res_list, list) {
> >> +		/* Can we combine the resource range information? */
> > 
> > I don't know? Is this is a future TODO?
> 
> I'm also not quite sure; this comment was ported from the native side.

OK, pls find out. Perhaps this comment is stale.

> 
> > 
> >> +		if ((info->caching == address64.info.mem.caching) &&
> >> +		    (info->write_protect == address64.info.mem.write_protect) &&
> >> +		    (info->start_addr + info->length == address64.minimum)) {
> >> +			info->length += address64.address_length;
> >> +			return AE_OK;
> >> +		}
> >> +	}
> >> +
> >> +	new = kzalloc(sizeof(struct acpi_memory_info), GFP_KERNEL);
> >> +	if (!new)
> >> +		return AE_ERROR;
> >> +
> >> +	INIT_LIST_HEAD(&new->list);
> >> +	new->caching = address64.info.mem.caching;
> >> +	new->write_protect = address64.info.mem.write_protect;
> >> +	new->start_addr = address64.minimum;
> >> +	new->length = address64.address_length;
> >> +	list_add_tail(&new->list, &mem_device->res_list);
> >> +
> >> +	return AE_OK;
> >> +}
> >> +
> >> +static int
> >> +acpi_memory_get_device_resources(struct acpi_memory_device *mem_device)
> >> +{
> >> +	acpi_status status;
> >> +	struct acpi_memory_info *info, *n;
> >> +
> >> +	if (!list_empty(&mem_device->res_list))
> >> +		return 0;
> >> +
> >> +	status = acpi_walk_resources(mem_device->device->handle,
> >> +		METHOD_NAME__CRS, acpi_memory_get_resource, mem_device);
> >> +
> >> +	if (ACPI_FAILURE(status)) {
> >> +		list_for_each_entry_safe(info, n, &mem_device->res_list, list)
> >> +			kfree(info);
> >> +		INIT_LIST_HEAD(&mem_device->res_list);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +acpi_memory_get_device(acpi_handle handle,
> >> +		       struct acpi_memory_device **mem_device)
> >> +{
> >> +	acpi_status status;
> >> +	acpi_handle phandle;
> >> +	struct acpi_device *device = NULL;
> >> +	struct acpi_device *pdevice = NULL;
> >> +	int result;
> >> +
> >> +	if (!acpi_bus_get_device(handle, &device) && device)
> >> +		goto end;
> >> +
> >> +	status = acpi_get_parent(handle, &phandle);
> >> +	if (ACPI_FAILURE(status)) {
> >> +		pr_warn(PREFIX "Cannot find acpi parent\n");
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/* Get the parent device */
> >> +	result = acpi_bus_get_device(phandle, &pdevice);
> >> +	if (result) {
> >> +		pr_warn(PREFIX "Cannot get acpi bus device\n");
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Now add the notified device.  This creates the acpi_device
> >> +	 * and invokes .add function
> >> +	 */
> >> +	result = acpi_bus_add(&device, pdevice, handle, ACPI_BUS_TYPE_DEVICE);
> >> +	if (result) {
> >> +		pr_warn(PREFIX "Cannot add acpi bus\n");
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +end:
> >> +	*mem_device = acpi_driver_data(device);
> >> +	if (!(*mem_device)) {
> >> +		pr_err(PREFIX "Driver data not found\n");
> >> +		return -ENODEV;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int acpi_memory_check_device(struct acpi_memory_device *mem_device)
> >> +{
> >> +	unsigned long long current_status;
> >> +
> >> +	/* Get device present/absent information from the _STA */
> >> +	if (ACPI_FAILURE(acpi_evaluate_integer(mem_device->device->handle,
> >> +				"_STA", NULL, &current_status)))
> >> +		return -ENODEV;
> >> +	/*
> >> +	 * Check for device status. Device should be
> >> +	 * present/enabled/functioning.
> >> +	 */
> >> +	if (!((current_status & ACPI_STA_DEVICE_PRESENT)
> >> +	      && (current_status & ACPI_STA_DEVICE_ENABLED)
> >> +	      && (current_status & ACPI_STA_DEVICE_FUNCTIONING)))
> >> +		return -ENODEV;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int acpi_memory_disable_device(struct acpi_memory_device *mem_device)
> >> +{
> >> +	pr_warn(PREFIX "Xen does not support memory hotremove\n");
> >> +
> >> +	return -ENOSYS;
> >> +}
> >> +
> >> +static void acpi_memory_device_notify(acpi_handle handle, u32 event, void *data)
> >> +{
> >> +	struct acpi_memory_device *mem_device;
> >> +	struct acpi_device *device;
> >> +	u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; /* default */
> >> +
> >> +	switch (event) {
> >> +	case ACPI_NOTIFY_BUS_CHECK:
> >> +		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +			"\nReceived BUS CHECK notification for device\n"));
> >> +		/* Fall Through */
> >> +	case ACPI_NOTIFY_DEVICE_CHECK:
> >> +		if (event == ACPI_NOTIFY_DEVICE_CHECK)
> >> +			ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +			"\nReceived DEVICE CHECK notification for device\n"));
> >> +
> >> +		if (acpi_memory_get_device(handle, &mem_device)) {
> >> +			pr_err(PREFIX "Cannot find driver data\n");
> >> +			break;
> >> +		}
> >> +
> >> +		ost_code = ACPI_OST_SC_SUCCESS;
> >> +		break;
> >> +
> >> +	case ACPI_NOTIFY_EJECT_REQUEST:
> >> +		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +			"\nReceived EJECT REQUEST notification for device\n"));
> >> +
> >> +		if (acpi_bus_get_device(handle, &device)) {
> >> +			pr_err(PREFIX "Device doesn't exist\n");
> >> +			break;
> >> +		}
> >> +		mem_device = acpi_driver_data(device);
> >> +		if (!mem_device) {
> >> +			pr_err(PREFIX "Driver Data is NULL\n");
> >> +			break;
> >> +		}
> >> +
> >> +		/*
> >> +		 * TBD: implement acpi_memory_disable_device and invoke
> >> +		 * acpi_bus_remove if Xen supports hotremove in the future
> >> +		 */
> >> +		acpi_memory_disable_device(mem_device);
> >> +		break;
> >> +
> >> +	default:
> >> +		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +				  "Unsupported event [0x%x]\n", event));
> >> +		/* non-hotplug event; possibly handled by other handler */
> >> +		return;
> >> +	}
> >> +
> >> +	/* Inform firmware that the hotplug operation has completed */
> >> +	(void) acpi_evaluate_hotplug_ost(handle, event, ost_code, NULL);
> > 
> > 
> > Hm, even if we failed? Say for the ACPI_NOTIFY_EJECT_REQUEST ?
> 
> OK, let's remove the comment 'Inform firmware that the hotplug operation has completed'.
> For ACPI_NOTIFY_EJECT_REQUEST, it does in fact inform firmware of 'ACPI_OST_SC_NON_SPECIFIC_FAILURE'.
> 
> > 
> >> +	return;
> >> +}
> >> +
> >> +static int acpi_memory_device_add(struct acpi_device *device)
> >> +{
> >> +	int result;
> >> +	struct acpi_memory_device *mem_device = NULL;
> >> +
> >> +
> >> +	if (!device)
> >> +		return -EINVAL;
> >> +
> >> +	mem_device = kzalloc(sizeof(struct acpi_memory_device), GFP_KERNEL);
> >> +	if (!mem_device)
> >> +		return -ENOMEM;
> >> +
> >> +	INIT_LIST_HEAD(&mem_device->res_list);
> >> +	mem_device->device = device;
> >> +	sprintf(acpi_device_name(device), "%s", ACPI_MEMORY_DEVICE_NAME);
> >> +	sprintf(acpi_device_class(device), "%s", ACPI_MEMORY_DEVICE_CLASS);
> >> +	device->driver_data = mem_device;
> >> +
> >> +	/* Get the range from the _CRS */
> >> +	result = acpi_memory_get_device_resources(mem_device);
> >> +	if (result) {
> >> +		kfree(mem_device);
> >> +		return result;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Early boot code has recognized memory area by EFI/E820.
> >> +	 * If DSDT shows these memory devices on boot, hotplug is not
> >> +	 * necessary for them. So, it just returns until completion of
> >> +	 * this driver's start up.
> >> +	 */
> >> +	if (!acpi_hotmem_initialized)
> >> +		return 0;
> >> +
> >> +	if (!acpi_memory_check_device(mem_device))
> >> +		result = xen_acpi_memory_enable_device(mem_device);
> > 
> > This is a nop. Could you just do:
> > 		result = 0;
> > ?
> 
> It is implemented in patch 2/2.
> 
> Thanks,
> Jinsong
> 
> > 
> >> +
> >> +	return result;
> >> +}
> >> +
> >> +static int acpi_memory_device_remove(struct acpi_device *device, int type)
> >> +{
> >> +	struct acpi_memory_device *mem_device = NULL;
> >> +
> >> +	if (!device || !acpi_driver_data(device))
> >> +		return -EINVAL;
> >> +
> >> +	mem_device = acpi_driver_data(device);
> >> +	kfree(mem_device);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +/*
> >> + * Helper function to check for memory device
> >> + */
> >> +static acpi_status is_memory_device(acpi_handle handle)
> >> +{
> >> +	char *hardware_id;
> >> +	acpi_status status;
> >> +	struct acpi_device_info *info;
> >> +
> >> +	status = acpi_get_object_info(handle, &info);
> >> +	if (ACPI_FAILURE(status))
> >> +		return status;
> >> +
> >> +	if (!(info->valid & ACPI_VALID_HID)) {
> >> +		kfree(info);
> >> +		return AE_ERROR;
> >> +	}
> >> +
> >> +	hardware_id = info->hardware_id.string;
> >> +	if ((hardware_id == NULL) ||
> >> +	    (strcmp(hardware_id, ACPI_MEMORY_DEVICE_HID)))
> >> +		status = AE_ERROR;
> >> +
> >> +	kfree(info);
> >> +	return status;
> >> +}
> >> +
> >> +static acpi_status
> >> +acpi_memory_register_notify_handler(acpi_handle handle,
> >> +				    u32 level, void *ctxt, void **retv)
> >> +{
> >> +	acpi_status status;
> >> +
> >> +	status = is_memory_device(handle);
> >> +	if (ACPI_FAILURE(status))
> >> +		return AE_OK;	/* continue */
> >> +
> >> +	status = acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY,
> >> +					     acpi_memory_device_notify, NULL);
> >> +	/* continue */
> >> +	return AE_OK;
> >> +}
> >> +
> >> +static int __init xen_acpi_memory_device_init(void)
> >> +{
> >> +	int result;
> >> +	acpi_status status;
> >> +
> >> +	/* only dom0 is responsible for xen acpi memory hotplug */
> >> +	if (!xen_initial_domain())
> >> +		return -ENODEV;
> >> +
> >> +	result = acpi_bus_register_driver(&acpi_memory_device_driver);
> >> +	if (result < 0)
> >> +		return -ENODEV;
> >> +
> >> +	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
> >> +				     ACPI_UINT32_MAX,
> >> +				     acpi_memory_register_notify_handler, NULL,
> >> +				     NULL, NULL);
> >> +
> >> +	if (ACPI_FAILURE(status)) {
> >> +		pr_warn(PREFIX "walk_namespace failed\n");
> >> +		acpi_bus_unregister_driver(&acpi_memory_device_driver);
> >> +		return -ENODEV;
> >> +	}
> >> +
> >> +	acpi_hotmem_initialized = 1;
> >> +	return 0;
> >> +}
> >> +subsys_initcall(xen_acpi_memory_device_init);
> >> --
> >> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:43:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJ0E-0004Qf-Dl; Wed, 05 Dec 2012 17:43:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TgJ0C-0004Qa-PA
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 17:43:21 +0000
Received: from [193.109.254.147:19777] by server-8.bemta-14.messagelabs.com id
	E8/23-05026-8B78FB05; Wed, 05 Dec 2012 17:43:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1354729397!2678890!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMDQ0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27616 invoked from network); 5 Dec 2012 17:43:19 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 17:43:19 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB5HhAad002546
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Dec 2012 17:43:11 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB5Hh9Pf008167
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 5 Dec 2012 17:43:10 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB5Hh9JA029113; Wed, 5 Dec 2012 11:43:09 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Dec 2012 09:43:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E70FF1C05A8; Wed,  5 Dec 2012 12:43:07 -0500 (EST)
Date: Wed, 5 Dec 2012 12:43:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Message-ID: <20121205174307.GC16072@phenom.dumpdata.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"lenb@kernel.org" <lenb@kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Nov 30, 2012 at 03:08:02AM +0000, Liu, Jinsong wrote:
> Konrad Rzeszutek Wilk wrote:
> > On Wed, Nov 21, 2012 at 11:45:04AM +0000, Liu, Jinsong wrote:
> >>> From 630c65690c878255ce71e7c1172338ed08709273 Mon Sep 17 00:00:00
> >>> 2001 
> >> From: Liu Jinsong <jinsong.liu@intel.com>
> >> Date: Tue, 20 Nov 2012 21:14:37 +0800
> >> Subject: [PATCH 1/2] Xen acpi memory hotplug driver
> >> 
> >> Xen acpi memory hotplug consists of 2 logic components:
> >> Xen acpi memory hotplug driver and Xen hypercall.
> >> 
> >> This patch implement Xen acpi memory hotplug driver. When running
> >> under xen platform, Xen driver will early occupy (so native driver
> > 
> > How will it 'early occupy'? Can you spell it out here please?
> 
> Sure, will add it like
> 'When running under xen platform, at booting stage xen memory hotplug driver will early occupy via subsys_initcall (earlier than native module_init), so xen driver will take effect and native driver will be blocked'.

OK.
> 
> > 
> >> will be blocked). When acpi memory notify OSPM, xen driver will take
> >> effect, adding related memory device and parsing memory information.
> >> 
> >> Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
> >> ---
> >>  drivers/xen/Kconfig               |   11 +
> >>  drivers/xen/Makefile              |    1 +
> >>  drivers/xen/xen-acpi-memhotplug.c |  383
> >>  +++++++++++++++++++++++++++++++++++++ 3 files changed, 395
> >>  insertions(+), 0 deletions(-) create mode 100644
> >> drivers/xen/xen-acpi-memhotplug.c 
> >> 
> >> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> >> index 126d8ce..abd0396 100644
> >> --- a/drivers/xen/Kconfig
> >> +++ b/drivers/xen/Kconfig
> >> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
> >>  	  Allow kernel fetching MCE error from Xen platform and
> >>  	  converting it into Linux mcelog format for mcelog tools
> >> 
> >> +config XEN_ACPI_MEMORY_HOTPLUG
> >> +	bool "Xen ACPI memory hotplug"
> > 
> > There should be a way to make this a module.
> 
> I have some concerns to make it a module:
> 1. xen and native memhotplug driver both work as module, while we need early load xen driver.
> 2. if possible, a xen stub driver may solve load sequence issue, but it may involve other issues
>   * if xen driver load then unload, native driver may have chance to load successfully;

The stub driver would still "occupy" the ACPI bus for the memory hotplug PnP, so
I think this would not be a problem.

>   * if xen driver load --> unload --> load again, then it will lose hotplug notification during unload period;

Sure. But I think we can do it with this driver? After all the function of 
it is to just tell the firmware to turn on/off sockets - and if we miss
one notification we won't take advantage of the power savings - but we
can do that later on.


>   * if xen driver load --> unload --> load again, then it will re-add all memory devices, but the handle for 'booting memory device' and 'hotplug memory device' are different while we have no way to distinguish these 2 kind of devices.

Wouldn't the stub driver hold onto that?

> 
> IMHO I think to make xen hotplug logic as module may involves unexpected result. Is there any obvious advantages of doing so? after all we have provided config choice to user. Thoughts?

Yes, it becomes a module - which is what we want.

> 
> > 
> > 
> >> +	depends on XEN_DOM0 && X86_64 && ACPI
> >> +	default n
> >> +	help
> >> +	  This is Xen acpi memory hotplug.
> >                       ^^^^ -> ACPI
> > 
> >> +
> >> +	  Currently Xen only support acpi memory hot-add. If you want
> >                                      ^^^^-> ACPI
> > 
> >> +	  to hot-add memory at runtime (the hot-added memory cannot be
> >> +	  removed until machine stop), select Y here, otherwise select N. +
> >>  endmenu
> >> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> >> index 7435470..c339eb4 100644
> >> --- a/drivers/xen/Makefile
> >> +++ b/drivers/xen/Makefile
> >> @@ -30,6 +30,7 @@ obj-$(CONFIG_XEN_MCE_LOG)		+= mcelog.o
> >>  obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
> >>  obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
> >>  obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
> >> +obj-$(CONFIG_XEN_ACPI_MEMORY_HOTPLUG)	+= xen-acpi-memhotplug.o 
> >>  xen-evtchn-y				:= evtchn.o xen-gntdev-y				:= gntdev.o
> >>  xen-gntalloc-y				:= gntalloc.o
> >> diff --git a/drivers/xen/xen-acpi-memhotplug.c
> >> b/drivers/xen/xen-acpi-memhotplug.c new file mode 100644 index
> >> 0000000..f0c7990 --- /dev/null
> >> +++ b/drivers/xen/xen-acpi-memhotplug.c
> >> @@ -0,0 +1,383 @@
> >> +/*
> >> + * Copyright (C) 2012 Intel Corporation
> >> + *    Author: Liu Jinsong <jinsong.liu@intel.com>
> >> + *    Author: Jiang Yunhong <yunhong.jiang@intel.com> + *
> >> + * This program is free software; you can redistribute it and/or
> >> modify + * it under the terms of the GNU General Public License as
> >> published by + * the Free Software Foundation; either version 2 of
> >> the License, or (at + * your option) any later version.
> >> + *
> >> + * This program is distributed in the hope that it will be useful,
> >> but + * WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE
> >> or + * NON INFRINGEMENT.  See the GNU General Public License for
> >> more + * details. + */
> >> +
> >> +#include <linux/kernel.h>
> >> +#include <linux/init.h>
> >> +#include <linux/types.h>
> >> +#include <acpi/acpi_drivers.h>
> >> +
> >> +#define ACPI_MEMORY_DEVICE_CLASS		"memory"
> >> +#define ACPI_MEMORY_DEVICE_HID			"PNP0C80"
> >> +#define ACPI_MEMORY_DEVICE_NAME			"Hotplug Mem Device"
> > 
> > Weird tabs?
> > 
> 
> It ported from native and seems right tabs? will double check.
> 
> >> +
> >> +#undef PREFIX
> > 
> > Why the #undef ?
> >> +#define PREFIX "ACPI:memory_hp:"
> > 
> > 
> > Not "ACPI:memory_xen:" ?
> 
> OK, how about more detailed "ACPI:xen_memory_hotplug:"?

Sure.
> 
> > 
> > 
> >> +
> >> +static int acpi_memory_device_add(struct acpi_device *device);
> >> +static int acpi_memory_device_remove(struct acpi_device *device,
> >> int type); + +static const struct acpi_device_id memory_device_ids[]
> >> = { +	{ACPI_MEMORY_DEVICE_HID, 0},
> >> +	{"", 0},
> >> +};
> >> +
> >> +static struct acpi_driver acpi_memory_device_driver = {
> >> +	.name = "acpi_memhotplug",
> > 
> > Not 'xen_acpi_memhotplug' ?
> 
> No, here driver name (same as native driver name) used to block native driver loading.

Then you need a comment in the file explaining that.

> 
> > 
> >> +	.class = ACPI_MEMORY_DEVICE_CLASS,
> >> +	.ids = memory_device_ids,
> >> +	.ops = {
> >> +		.add = acpi_memory_device_add,
> >> +		.remove = acpi_memory_device_remove,
> > 
> > Just for sake of clarity I would prefix those with 'xen_'.
> 
> OK.
> 
> > 
> >> +		},
> >> +};
> >> +
> >> +struct acpi_memory_info {
> >> +	struct list_head list;
> >> +	u64 start_addr;		/* Memory Range start physical addr */
> >> +	u64 length;		/* Memory Range length */
> >> +	unsigned short caching;	/* memory cache attribute */
> >> +	unsigned short write_protect;	/* memory read/write attribute */
> > 
> > Can't the write_protect by a bit field like the 'enabled'? So
> > 	unsigned int write_protect:1;
> > ?
> 
> Seems not good, write_protect copied from an acpi buffer (byte3) getting from _CRS evaluation.

Ah, pls put a comment in there as well why that cannot be done.

> 
> >> +	unsigned int enabled:1;
> >> +};
> >> +
> >> +struct acpi_memory_device {
> >> +	struct acpi_device *device;
> >> +	struct list_head res_list;
> >> +};
> >> +
> >> +static int acpi_hotmem_initialized;
> > 
> > Just make it a bool and also use __read_mostly please.
> 
> OK.
> 
> > 
> >> +
> >> +
> >> +int xen_acpi_memory_enable_device(struct acpi_memory_device
> >> *mem_device) +{ +	return 0;
> >> +}
> > 
> > Why even have this function if it does not do anything?
> 
> Not a nop, it implemented at patch 2/2.

Yup, saw it in the next patch.
> 
> > 
> >> +
> >> +static acpi_status
> >> +acpi_memory_get_resource(struct acpi_resource *resource, void
> >> *context) +{ +	struct acpi_memory_device *mem_device = context;
> >> +	struct acpi_resource_address64 address64;
> >> +	struct acpi_memory_info *info, *new;
> >> +	acpi_status status;
> >> +
> >> +	status = acpi_resource_to_address64(resource, &address64); +	if
> >> (ACPI_FAILURE(status) || +	    (address64.resource_type !=
> >> ACPI_MEMORY_RANGE)) +		return AE_OK; +
> >> +	list_for_each_entry(info, &mem_device->res_list, list) {
> >> +		/* Can we combine the resource range information? */
> > 
> > I don't know? Is this is a future TODO?
> 
> I'm also not quite sure, this comments ported from native side.

OK, pls find out. Perhaps this comment is stale.

> 
> > 
> >> +		if ((info->caching == address64.info.mem.caching) &&
> >> +		    (info->write_protect == address64.info.mem.write_protect) &&
> >> +		    (info->start_addr + info->length == address64.minimum)) {
> >> +			info->length += address64.address_length;
> >> +			return AE_OK;
> >> +		}
> >> +	}
> >> +
> >> +	new = kzalloc(sizeof(struct acpi_memory_info), GFP_KERNEL); +	if
> >> (!new) +		return AE_ERROR;
> >> +
> >> +	INIT_LIST_HEAD(&new->list);
> >> +	new->caching = address64.info.mem.caching;
> >> +	new->write_protect = address64.info.mem.write_protect;
> >> +	new->start_addr = address64.minimum;
> >> +	new->length = address64.address_length;
> >> +	list_add_tail(&new->list, &mem_device->res_list); +
> >> +	return AE_OK;
> >> +}
> >> +
> >> +static int
> >> +acpi_memory_get_device_resources(struct acpi_memory_device
> >> *mem_device) +{ +	acpi_status status;
> >> +	struct acpi_memory_info *info, *n;
> >> +
> >> +	if (!list_empty(&mem_device->res_list))
> >> +		return 0;
> >> +
> >> +	status = acpi_walk_resources(mem_device->device->handle,
> >> +		METHOD_NAME__CRS, acpi_memory_get_resource, mem_device); +
> >> +	if (ACPI_FAILURE(status)) {
> >> +		list_for_each_entry_safe(info, n, &mem_device->res_list, list)
> >> +			kfree(info); +		INIT_LIST_HEAD(&mem_device->res_list);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +acpi_memory_get_device(acpi_handle handle,
> >> +		       struct acpi_memory_device **mem_device)
> >> +{
> >> +	acpi_status status;
> >> +	acpi_handle phandle;
> >> +	struct acpi_device *device = NULL;
> >> +	struct acpi_device *pdevice = NULL;
> >> +	int result;
> >> +
> >> +	if (!acpi_bus_get_device(handle, &device) && device) +		goto end;
> >> +
> >> +	status = acpi_get_parent(handle, &phandle);
> >> +	if (ACPI_FAILURE(status)) {
> >> +		pr_warn(PREFIX "Cannot find acpi parent\n");
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/* Get the parent device */
> >> +	result = acpi_bus_get_device(phandle, &pdevice);
> >> +	if (result) {
> >> +		pr_warn(PREFIX "Cannot get acpi bus device\n");
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Now add the notified device.  This creates the acpi_device
> >> +	 * and invokes .add function
> >> +	 */
> >> +	result = acpi_bus_add(&device, pdevice, handle,
> >> ACPI_BUS_TYPE_DEVICE); +	if (result) { +		pr_warn(PREFIX "Cannot add
> >> acpi bus\n"); +		return -EINVAL;
> >> +	}
> >> +
> >> +end:
> >> +	*mem_device = acpi_driver_data(device);
> >> +	if (!(*mem_device)) {
> >> +		pr_err(PREFIX "Driver data not found\n");
> >> +		return -ENODEV;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int acpi_memory_check_device(struct acpi_memory_device
> >> *mem_device) +{ +	unsigned long long current_status;
> >> +
> >> +	/* Get device present/absent information from the _STA */
> >> +	if (ACPI_FAILURE(acpi_evaluate_integer(mem_device->device->handle,
> >> +				"_STA", NULL, &current_status)))
> >> +		return -ENODEV;
> >> +	/*
> >> +	 * Check for device status. Device should be
> >> +	 * present/enabled/functioning.
> >> +	 */
> >> +	if (!((current_status & ACPI_STA_DEVICE_PRESENT)
> >> +	      && (current_status & ACPI_STA_DEVICE_ENABLED)
> >> +	      && (current_status & ACPI_STA_DEVICE_FUNCTIONING)))
> >> +		return -ENODEV; +
> >> +	return 0;
> >> +}
> >> +
> >> +static int acpi_memory_disable_device(struct acpi_memory_device *mem_device)
> >> +{
> >> +	pr_warn(PREFIX "Xen does not support memory hotremove\n");
> >> +
> >> +	return -ENOSYS;
> >> +}
> >> +
> >> +static void acpi_memory_device_notify(acpi_handle handle, u32 event,
> >> +				      void *data)
> >> +{
> >> +	struct acpi_memory_device *mem_device;
> >> +	struct acpi_device *device;
> >> +	u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; /* default */
> >> +
> >> +	switch (event) {
> >> +	case ACPI_NOTIFY_BUS_CHECK:
> >> +		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +			"\nReceived BUS CHECK notification for device\n")); +		/* Fall
> >> Through */ +	case ACPI_NOTIFY_DEVICE_CHECK:
> >> +		if (event == ACPI_NOTIFY_DEVICE_CHECK)
> >> +			ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +			"\nReceived DEVICE CHECK notification for device\n")); +
> >> +		if (acpi_memory_get_device(handle, &mem_device)) {
> >> +			pr_err(PREFIX "Cannot find driver data\n");
> >> +			break;
> >> +		}
> >> +
> >> +		ost_code = ACPI_OST_SC_SUCCESS;
> >> +		break;
> >> +
> >> +	case ACPI_NOTIFY_EJECT_REQUEST:
> >> +		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +			"\nReceived EJECT REQUEST notification for device\n")); +
> >> +		if (acpi_bus_get_device(handle, &device)) {
> >> +			pr_err(PREFIX "Device doesn't exist\n");
> >> +			break;
> >> +		}
> >> +		mem_device = acpi_driver_data(device);
> >> +		if (!mem_device) {
> >> +			pr_err(PREFIX "Driver Data is NULL\n");
> >> +			break;
> >> +		}
> >> +
> >> +		/*
> >> +		 * TBD: implement acpi_memory_disable_device and invoke
> >> +		 * acpi_bus_remove if Xen supports hotremove in the future
> >> +		 */
> >> +		acpi_memory_disable_device(mem_device);
> >> +		break;
> >> +
> >> +	default:
> >> +		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
> >> +				  "Unsupported event [0x%x]\n", event));
> >> +		/* non-hotplug event; possibly handled by other handler */
> >> +		return;
> >> +	}
> >> +
> >> +	/* Inform firmware that the hotplug operation has completed */
> >> +	(void) acpi_evaluate_hotplug_ost(handle, event, ost_code, NULL);
> > 
> > 
> > Hm, even if we failed? Say for the ACPI_NOTIFY_EJECT_REQUEST ?
> 
> OK, let's remove the comment 'Inform firmware that the hotplug operation has completed'.
> For ACPI_NOTIFY_EJECT_REQUEST, it does in fact inform firmware of 'ACPI_OST_SC_NON_SPECIFIC_FAILURE'.
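[The review point above is a common "default to failure" status-reporting pattern: the OST code starts out as a failure status and is only overwritten on paths that actually succeed, so an eject request that Xen cannot honour is reported to firmware as a failure. A minimal sketch of that pattern; the `OST_*` constants here are illustrative stand-ins, not the real ACPI_OST_* values:]

```c
#include <assert.h>

/* Illustrative stand-ins for the ACPI_OST_SC_* status codes. */
enum { OST_SUCCESS = 0, OST_NON_SPECIFIC_FAILURE = 1 };

/* Sketch of the notify handler's reporting logic: ost_code defaults
 * to failure and is updated only on the success path, so any early
 * break (missing device, unsupported eject) reports a failure. */
static int report_ost_code(int event_handled_ok)
{
	int ost_code = OST_NON_SPECIFIC_FAILURE;	/* default */

	if (event_handled_ok)
		ost_code = OST_SUCCESS;			/* success path only */

	return ost_code;				/* passed to _OST */
}
```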
> 
> > 
> >> +	return;
> >> +}
> >> +
> >> +static int acpi_memory_device_add(struct acpi_device *device)
> >> +{
> >> +	int result;
> >> +	struct acpi_memory_device *mem_device = NULL;
> >> +
> >> +
> >> +	if (!device)
> >> +		return -EINVAL;
> >> +
> >> +	mem_device = kzalloc(sizeof(struct acpi_memory_device),
> >> +			     GFP_KERNEL);
> >> +	if (!mem_device)
> >> +		return -ENOMEM;
> >> +
> >> +	INIT_LIST_HEAD(&mem_device->res_list);
> >> +	mem_device->device = device;
> >> +	sprintf(acpi_device_name(device), "%s", ACPI_MEMORY_DEVICE_NAME);
> >> +	sprintf(acpi_device_class(device), "%s", ACPI_MEMORY_DEVICE_CLASS);
> >> +	device->driver_data = mem_device;
> >> +
> >> +	/* Get the range from the _CRS */
> >> +	result = acpi_memory_get_device_resources(mem_device);
> >> +	if (result) {
> >> +		kfree(mem_device);
> >> +		return result;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Early boot code has recognized memory area by EFI/E820.
> >> +	 * If DSDT shows these memory devices on boot, hotplug is not
> >> +	 * necessary for them. So, it just returns until completion of
> >> +	 * this driver's start up.
> >> +	 */
> >> +	if (!acpi_hotmem_initialized)
> >> +		return 0;
> >> +
> >> +	if (!acpi_memory_check_device(mem_device))
> >> +		result = xen_acpi_memory_enable_device(mem_device);
> > 
> > This is a nop. Could you just do:
> > 		result = 0;
> > ?
> 
> It is implemented in patch 2/2.
> 
> Thanks,
> Jinsong
> 
> > 
> >> +
> >> +	return result;
> >> +}
> >> +
> >> +static int acpi_memory_device_remove(struct acpi_device *device,
> >> +				     int type)
> >> +{
> >> +	struct acpi_memory_device *mem_device = NULL;
> >> +
> >> +	if (!device || !acpi_driver_data(device))
> >> +		return -EINVAL;
> >> +
> >> +	mem_device = acpi_driver_data(device);
> >> +	kfree(mem_device);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +/*
> >> + * Helper function to check for memory device
> >> + */
> >> +static acpi_status is_memory_device(acpi_handle handle)
> >> +{
> >> +	char *hardware_id;
> >> +	acpi_status status;
> >> +	struct acpi_device_info *info;
> >> +
> >> +	status = acpi_get_object_info(handle, &info);
> >> +	if (ACPI_FAILURE(status))
> >> +		return status;
> >> +
> >> +	if (!(info->valid & ACPI_VALID_HID)) {
> >> +		kfree(info);
> >> +		return AE_ERROR;
> >> +	}
> >> +
> >> +	hardware_id = info->hardware_id.string;
> >> +	if ((hardware_id == NULL) ||
> >> +	    (strcmp(hardware_id, ACPI_MEMORY_DEVICE_HID)))
> >> +		status = AE_ERROR;
> >> +
> >> +	kfree(info);
> >> +	return status;
> >> +}
> >> +
> >> +static acpi_status
> >> +acpi_memory_register_notify_handler(acpi_handle handle,
> >> +				    u32 level, void *ctxt, void **retv)
> >> +{
> >> +	acpi_status status;
> >> +
> >> +	status = is_memory_device(handle);
> >> +	if (ACPI_FAILURE(status))
> >> +		return AE_OK;	/* continue */
> >> +
> >> +	status = acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY,
> >> +					     acpi_memory_device_notify, NULL);
> >> +	/* continue */
> >> +	return AE_OK;
> >> +}
> >> +
> >> +static int __init xen_acpi_memory_device_init(void)
> >> +{
> >> +	int result;
> >> +	acpi_status status;
> >> +
> >> +	/* only dom0 is responsible for xen acpi memory hotplug */
> >> +	if (!xen_initial_domain())
> >> +		return -ENODEV;
> >> +
> >> +	result = acpi_bus_register_driver(&acpi_memory_device_driver);
> >> +	if (result < 0)
> >> +		return -ENODEV;
> >> +
> >> +	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
> >> +				     ACPI_UINT32_MAX,
> >> +				     acpi_memory_register_notify_handler,
> >> +				     NULL, NULL, NULL);
> >> +
> >> +	if (ACPI_FAILURE(status)) {
> >> +		pr_warn(PREFIX "walk_namespace failed\n");
> >> +		acpi_bus_unregister_driver(&acpi_memory_device_driver);
> >> +		return -ENODEV;
> >> +	}
> >> +
> >> +	acpi_hotmem_initialized = 1;
> >> +	return 0;
> >> +}
> >> +subsys_initcall(xen_acpi_memory_device_init);
> >> --
> >> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 17:54:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 17:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJAJ-0004pX-O3; Wed, 05 Dec 2012 17:53:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgJAH-0004pQ-TW
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 17:53:46 +0000
Received: from [85.158.143.35:49878] by server-3.bemta-4.messagelabs.com id
	4B/1D-06841-92A8FB05; Wed, 05 Dec 2012 17:53:45 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354730022!12949897!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21348 invoked from network); 5 Dec 2012 17:53:42 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-12.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	5 Dec 2012 17:53:42 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgJA9-0000Hd-Mm; Wed, 05 Dec 2012 17:53:37 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgJA5-0000f0-VZ; Wed, 05 Dec 2012 17:53:37 +0000
Message-ID: <1354730007.17165.31.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Boris Ostrovsky <boris.ostrovsky@amd.com>
Date: Wed, 05 Dec 2012 17:53:27 +0000
In-Reply-To: <50BF841C.6010906@amd.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
	<50BF841C.6010906@amd.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	debian-kernel <debian-kernel@lists.debian.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 17:27 +0000, Boris Ostrovsky wrote:
> On 12/05/2012 12:02 PM, Jan Beulich wrote:
> > But all of this shouldn't lead to equivalent ID mismatches, should
> > it? It ought to simply find nothing to update...
> 
> 
> The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
> contain more than one patch. The driver goes over this file patch by 
> patch and tries to see whether to apply it.
> 
> I think what happened in Ian's case was that the patch file contained 
> two patches --- one for this processor (ID 6012) and another for a 
> different processor (ID 6101). (Both are family 15h but different revs).
> 
> The driver applied the first patch on core 0. Then, on core 1, the code 
> tried the first patch (at file offset 60) and noticed that it is already 
> applied. So it continued to the next patch (at offset 2660) which is not 
> meant for this processor, thus generating the "does not match" message.

I added some debugging and can confirm this is what happens:

        (XEN) microcode: collect_cpu_info: CPU0 patch_id=0x6000626
        (XEN) CPU0: current patch level 0x6000626
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) microcode: CPU0 found a matching microcode update with version 0x6000629 (current=0x6000626)
        (XEN) CPU0: apply_microcodeA: current patch level 0x6000626. Patch is 0x6000629
        (XEN) CPU0: apply_microcodeB: new patch level 0x6000629. Patch is 0x6000629
        (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
        
        (XEN) microcode: collect_cpu_info: CPU1 patch_id=0x6000629
        (XEN) CPU1: current patch level 0x6000629
        (XEN) microcode: size 5260, block size 2592, offset 60
        (XEN) CPU1: microcode_fits: older patch 0x6000629 <= 0x6000629, returning
        (XEN) microcode: size 5260, block size 2592, offset 2660
        (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base id is 6012) 
        
> So we have at least a problem in how the error is reported to the log -- 
> it is confusing. I'll try to make it more understandable.

FWIW it also results in an error from the hypercall overall as well as
the logging stuff.

> And maybe core 1 shouldn't go into the second patch in the first place 
> because it already found a patch for this processor (but decided that it 
> is not needed based on patch ID).
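[The iteration behaviour Boris suggests can be sketched as follows. This is a hypothetical, simplified model of the container-file walk described in the thread, not Xen's actual microcode code; `struct ucode_patch` and `select_patch_level` are invented for illustration. The key change: once a patch matching this processor's base ID is found, iteration stops, so a later patch for a different rev (6101 vs 6012 above) is never reported as a mismatch.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of one entry in the microcode container file. */
struct ucode_patch {
	uint32_t cpu_id;	/* processor base ID the patch targets */
	uint32_t patch_level;	/* patch revision */
};

/* Return the patch level the CPU should end up at.  Entries for other
 * processor revs are skipped silently; the first entry matching this
 * CPU ends the search, whether or not it is newer than what is
 * already applied. */
static uint32_t select_patch_level(const struct ucode_patch *p, int n,
				   uint32_t cpu_id, uint32_t current_level)
{
	for (int i = 0; i < n; i++) {
		if (p[i].cpu_id != cpu_id)
			continue;			/* other rev: skip */
		if (p[i].patch_level > current_level)
			return p[i].patch_level;	/* newer match: apply */
		return current_level;			/* already applied: stop */
	}
	return current_level;				/* no match in file */
}
```

[With a file holding patches for IDs 0x6012 and 0x6101, a 0x6012 core that already runs the first patch stops at that entry and never examines the 0x6101 one, matching the suggested fix.]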

-- 
Ian Campbell

* PerlGeek is really a space alien
* Knghtktty believes PerlGeek


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:02:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:02:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJIt-0005B9-RI; Wed, 05 Dec 2012 18:02:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgJIs-0005A9-2B
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:02:38 +0000
Received: from [85.158.138.51:53549] by server-2.bemta-3.messagelabs.com id
	00/C3-04744-D3C8FB05; Wed, 05 Dec 2012 18:02:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354730556!25846825!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25188 invoked from network); 5 Dec 2012 18:02:36 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:02:36 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so2303234wgb.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 10:02:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=MhZR0TBrZ+eCQu6O9R/vfOAaX4wy5B2Rab42gubzUxo=;
	b=NMnvuffU6bryaqOLuYqHqg6lb3tqLn1JvBOqxNfxOi3DOZNF9Ut1pzKAwzj90fgqeZ
	gyBYUE37bb40d9W5EvDebotAgoOWiRY6CSYMbvFgTR8iPVVZWaDqErVqp7l6/oRSqsMA
	/MUR5rgqjxiLo7l9YcF3N1LOuFb4T6cIPYrDr5bGlTc7m0LbKdvJtoRTYz8uNDy0wXkc
	t1rxbIrELnnxdIFTk1L9mydTOu6BOHkuEaNUZ+4TXx4cKuJ3FsrqUsddfnYJNJxioZan
	fD8T3vQ7qq7H5c2gV7xYZKohmPxWEWNd9NGRNzVBjChiErmiEs/DcevjmLuIGbNMeixH
	d1Ew==
Received: by 10.216.225.232 with SMTP id z82mr6556657wep.151.1354730556078;
	Wed, 05 Dec 2012 10:02:36 -0800 (PST)
Received: from [192.168.1.88]
	(host86-183-153-239.range86-183.btcentralplus.com. [86.183.153.239])
	by mx.google.com with ESMTPS id g2sm7421641wiy.0.2012.12.05.10.02.34
	(version=SSLv3 cipher=OTHER); Wed, 05 Dec 2012 10:02:34 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Wed, 05 Dec 2012 18:02:29 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <CCE53CB5.46647%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] MAINTAINERS: Reference stable maintenance
	policy
Thread-Index: Ac3TErCjYqZhmRiujUitgnqw4zYiFg==
In-Reply-To: <50BF2CB302000078000AE142@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Lars Kurth <lars.kurth.xen@gmail.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: Reference stable maintenance
 policy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/2012 10:14, "Jan Beulich" <JBeulich@suse.com> wrote:

>>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> 
>> Any objections/acks to this patch? Shall I commit it?
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> (but formally you need Keir's ack anyway)

Acked-by: Keir Fraser <keir@xen.org>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:03:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJJZ-0005FK-94; Wed, 05 Dec 2012 18:03:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgJJX-0005FB-Qx
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:03:20 +0000
Received: from [85.158.143.35:26617] by server-3.bemta-4.messagelabs.com id
	05/A4-06841-76C8FB05; Wed, 05 Dec 2012 18:03:19 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354730596!13228884!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM2Nzgy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10590 invoked from network); 5 Dec 2012 18:03:17 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-3.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 18:03:17 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 10:03:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,223,1355126400"; d="scan'208";a="176567653"
Received: from ljsromley.bj.intel.com ([10.240.192.102])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 10:03:09 -0800
MIME-Version: 1.0
X-Mercurial-Node: 75d16e26926c873480abe205f1b7fb006a0fdf3c
Message-Id: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
User-Agent: Mercurial-patchbomb/1.4
Date: Thu, 06 Dec 2012 09:55:07 +0800
From: Liu Jinsong <jinsong.liu@intel.com>
To: ian.campbell@citrix.com, george.dunlap@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, JBeulich@suse.com
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard to
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At the sender
  xc_domain_save has a key point: 'query the types of all the pages
  with xc_get_pfn_type_batch'
  1) if a broken page occurs before the key point, migration will be fine,
     since the proper pfn_type and pfn number will be transferred to the
     target, which then takes the appropriate action;
  2) if a broken page occurs after the key point, the whole system will
     crash and migration no longer matters;

At the target
  The target populates pages for the guest. For a broken page, we prefer
  to keep the type of the page for the sake of seamless migration, so the
  target sets its p2m type to p2m_ram_broken. If the guest accesses the
  broken page again, it will kill itself as expected.
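[The restore-side bookkeeping this patch adds can be sketched as a small filter over the received page types: bogus (XTAB), alloc-only (XALLOC) and now broken (BROKEN) pages carry no payload in the stream, so they are excluded from the count of pages that need data. The PFINFO_* values below are illustrative stand-ins for the real XEN_DOMCTL_PFINFO_* codes in Xen's public/domctl.h:]

```c
#include <assert.h>

/* Illustrative stand-ins for XEN_DOMCTL_PFINFO_* type codes. */
enum {
	PFINFO_NORMAL = 0,	/* page data follows in the stream */
	PFINFO_XTAB   = 1,	/* bogus/unmapped */
	PFINFO_XALLOC = 2,	/* allocate-only */
	PFINFO_BROKEN = 3,	/* broken by a machine-check event */
};

/* Count pages that actually need data read from the migration
 * stream; broken pages are counted out, exactly like bogus and
 * alloc-only ones, and are instead marked p2m_ram_broken. */
static int count_payload_pages(const int *pagetypes, int n)
{
	int count = 0;

	for (int i = 0; i < n; i++)
		if (pagetypes[i] != PFINFO_XTAB &&
		    pagetypes[i] != PFINFO_XALLOC &&
		    pagetypes[i] != PFINFO_BROKEN)
			count++;
	return count;
}
```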

Patch version history:
V5:
  - remove extra check at the last iteration
  - remove marking broken page to dirty bitmap
V4:
  - adjust variables and patch description based on feedback
V3:
  - handle pages broken at the last iteration
V2:
  - migrate continue when broken page occur during migration,
    via marking broken page to dirty bitmap
V1:
  - migration abort when broken page occur during migration
  - transfer pfn_type to target for broken page occur before migration

Suggested-by: George Dunlap <george.dunlap@eu.citrix.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>

diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xc_domain.c	Thu Dec 06 09:50:40 2012 +0800
@@ -283,6 +283,22 @@
     return ret;
 }
 
+/* set broken page p2m */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn)
+{
+    int ret;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_set_broken_page_p2m;
+    domctl.domain = (domid_t)domid;
+    domctl.u.set_broken_page_p2m.pfn = pfn;
+    ret = do_domctl(xch, &domctl);
+
+    return ret ? -1 : 0;
+}
+
 /* get info from hvm guest for save */
 int xc_domain_hvm_getcontext(xc_interface *xch,
                              uint32_t domid,
diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xc_domain_restore.c	Thu Dec 06 09:50:40 2012 +0800
@@ -1023,9 +1023,15 @@
 
     countpages = count;
     for (i = oldcount; i < buf->nr_pages; ++i)
-        if ((buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XTAB
-            ||(buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XALLOC)
+    {
+        unsigned long pagetype;
+
+        pagetype = buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;
+        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB ||
+             pagetype == XEN_DOMCTL_PFINFO_BROKEN ||
+             pagetype == XEN_DOMCTL_PFINFO_XALLOC )
             --countpages;
+    }
 
     if (!countpages)
         return count;
@@ -1267,6 +1273,17 @@
             /* a bogus/unmapped/allocate-only page: skip it */
             continue;
 
+        if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
+        {
+            if ( xc_set_broken_page_p2m(xch, dom, pfn) )
+            {
+                ERROR("Set p2m for broken page failed, "
+                      "dom=%d, pfn=%lx\n", dom, pfn);
+                goto err_mapped;
+            }
+            continue;
+        }
+
         if (pfn_err[i])
         {
             ERROR("unexpected PFN mapping failure pfn %lx map_mfn %lx p2m_mfn %lx",
diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xc_domain_save.c	Thu Dec 06 09:50:40 2012 +0800
@@ -1277,6 +1277,13 @@
                 if ( !hvm )
                     gmfn = pfn_to_mfn(gmfn);
 
+                if ( pfn_type[j] == XEN_DOMCTL_PFINFO_BROKEN )
+                {
+                    pfn_type[j] |= pfn_batch[j];
+                    ++run;
+                    continue;
+                }
+
                 if ( pfn_err[j] )
                 {
                     if ( pfn_type[j] == XEN_DOMCTL_PFINFO_XTAB )
@@ -1371,8 +1378,12 @@
                     }
                 }
 
-                /* skip pages that aren't present or are alloc-only */
+                /*
+                 * skip pages that aren't present,
+                 * or are broken, or are alloc-only
+                 */
                 if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
+                    || pagetype == XEN_DOMCTL_PFINFO_BROKEN
                     || pagetype == XEN_DOMCTL_PFINFO_XALLOC )
                     continue;
 
diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xenctrl.h	Thu Dec 06 09:50:40 2012 +0800
@@ -575,6 +575,17 @@
                           xc_domaininfo_t *info);
 
 /**
+ * This function set p2m for broken page
+ * &parm xch a handle to an open hypervisor interface
+ * @parm domid the domain id which broken page belong to
+ * @parm pfn the pfn number of the broken page
+ * @return 0 on success, -1 on failure
+ */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn);
+
+/**
  * This function returns information about the context of a hvm domain
  * @parm xch a handle to an open hypervisor interface
  * @parm domid the domain to get information from
diff -r 8b93ac0c93f3 -r 75d16e26926c xen/arch/x86/domctl.c
--- a/xen/arch/x86/domctl.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/xen/arch/x86/domctl.c	Thu Dec 06 09:50:40 2012 +0800
@@ -209,12 +209,18 @@
                 for ( j = 0; j < k; j++ )
                 {
                     unsigned long type = 0;
+                    p2m_type_t t;
 
-                    page = get_page_from_gfn(d, arr[j], NULL, P2M_ALLOC);
+                    page = get_page_from_gfn(d, arr[j], &t, P2M_ALLOC);
 
                     if ( unlikely(!page) ||
                          unlikely(is_xen_heap_page(page)) )
-                        type = XEN_DOMCTL_PFINFO_XTAB;
+                    {
+                        if ( p2m_is_broken(t) )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
+                        else
+                            type = XEN_DOMCTL_PFINFO_XTAB;
+                    }
                     else
                     {
                         switch( page->u.inuse.type_info & PGT_type_mask )
@@ -235,6 +241,9 @@
 
                         if ( page->u.inuse.type_info & PGT_pinned )
                             type |= XEN_DOMCTL_PFINFO_LPINTAB;
+
+                        if ( page->count_info & PGC_broken )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
                     }
 
                     if ( page )
@@ -1568,6 +1577,29 @@
     }
     break;
 
+    case XEN_DOMCTL_set_broken_page_p2m:
+    {
+        struct domain *d;
+
+        d = rcu_lock_domain_by_id(domctl->domain);
+        if ( d != NULL )
+        {
+            p2m_type_t pt;
+            unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
+            mfn_t mfn = get_gfn_query(d, pfn, &pt);
+
+            if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
+                         (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
+                ret = -EINVAL;
+
+            put_gfn(d, pfn);
+            rcu_unlock_domain(d);
+        }
+        else
+            ret = -ESRCH;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, u_domctl);
         break;
diff -r 8b93ac0c93f3 -r 75d16e26926c xen/include/public/domctl.h
--- a/xen/include/public/domctl.h	Tue Nov 13 11:19:17 2012 +0000
+++ b/xen/include/public/domctl.h	Thu Dec 06 09:50:40 2012 +0800
@@ -136,6 +136,7 @@
 #define XEN_DOMCTL_PFINFO_LPINTAB (0x1U<<31)
 #define XEN_DOMCTL_PFINFO_XTAB    (0xfU<<28) /* invalid page */
 #define XEN_DOMCTL_PFINFO_XALLOC  (0xeU<<28) /* allocate-only page */
+#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU<<28) /* broken page */
 #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
 
 struct xen_domctl_getpageframeinfo {
@@ -834,6 +835,12 @@
 typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
 
+struct xen_domctl_set_broken_page_p2m {
+    uint64_aligned_t pfn;
+};
+typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -899,6 +906,7 @@
 #define XEN_DOMCTL_set_access_required           64
 #define XEN_DOMCTL_audit_p2m                     65
 #define XEN_DOMCTL_set_virq_handler              66
+#define XEN_DOMCTL_set_broken_page_p2m           67
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -954,6 +962,7 @@
         struct xen_domctl_audit_p2m         audit_p2m;
         struct xen_domctl_set_virq_handler  set_virq_handler;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
+        struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:03:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJJZ-0005FK-94; Wed, 05 Dec 2012 18:03:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgJJX-0005FB-Qx
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:03:20 +0000
Received: from [85.158.143.35:26617] by server-3.bemta-4.messagelabs.com id
	05/A4-06841-76C8FB05; Wed, 05 Dec 2012 18:03:19 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354730596!13228884!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM2Nzgy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10590 invoked from network); 5 Dec 2012 18:03:17 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-3.tower-21.messagelabs.com with SMTP;
	5 Dec 2012 18:03:17 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 10:03:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,223,1355126400"; d="scan'208";a="176567653"
Received: from ljsromley.bj.intel.com ([10.240.192.102])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 10:03:09 -0800
MIME-Version: 1.0
X-Mercurial-Node: 75d16e26926c873480abe205f1b7fb006a0fdf3c
Message-Id: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
User-Agent: Mercurial-patchbomb/1.4
Date: Thu, 06 Dec 2012 09:55:07 +0800
From: Liu Jinsong <jinsong.liu@intel.com>
To: ian.campbell@citrix.com, george.dunlap@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, JBeulich@suse.com
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard to
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At the sender
  xc_domain_save has a key point: 'to query the types of all the pages
  with xc_get_pfn_type_batch'
  1) if a broken page occurs before the key point, migration works fine,
     since the proper pfn_type and pfn number are transferred to the
     target, which then takes appropriate action;
  2) if a broken page occurs after the key point, the whole system will
     crash, so there is no need to care about migration any more;

At the target
  The target populates pages for the guest. In the case of a broken page,
  we prefer to keep the type of the page for the sake of seamless migration.
  The target sets the p2m type to p2m_ram_broken for the broken page. If the
  guest accesses the broken page again, it will kill itself as expected.

Patch version history:
V5:
  - remove the extra check at the last iteration
  - remove marking of broken pages in the dirty bitmap
V4:
  - adjust variables and patch description based on feedback
V3:
  - handle pages broken at the last iteration
V2:
  - continue migration when a broken page occurs during migration,
    by marking the broken page in the dirty bitmap
V1:
  - abort migration when a broken page occurs during migration
  - transfer pfn_type to the target for broken pages occurring before migration

Suggested-by: George Dunlap <george.dunlap@eu.citrix.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>

diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xc_domain.c	Thu Dec 06 09:50:40 2012 +0800
@@ -283,6 +283,22 @@
     return ret;
 }
 
+/* set broken page p2m */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn)
+{
+    int ret;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_set_broken_page_p2m;
+    domctl.domain = (domid_t)domid;
+    domctl.u.set_broken_page_p2m.pfn = pfn;
+    ret = do_domctl(xch, &domctl);
+
+    return ret ? -1 : 0;
+}
+
 /* get info from hvm guest for save */
 int xc_domain_hvm_getcontext(xc_interface *xch,
                              uint32_t domid,
diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xc_domain_restore.c	Thu Dec 06 09:50:40 2012 +0800
@@ -1023,9 +1023,15 @@
 
     countpages = count;
     for (i = oldcount; i < buf->nr_pages; ++i)
-        if ((buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XTAB
-            ||(buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XALLOC)
+    {
+        unsigned long pagetype;
+
+        pagetype = buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;
+        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB ||
+             pagetype == XEN_DOMCTL_PFINFO_BROKEN ||
+             pagetype == XEN_DOMCTL_PFINFO_XALLOC )
             --countpages;
+    }
 
     if (!countpages)
         return count;
@@ -1267,6 +1273,17 @@
             /* a bogus/unmapped/allocate-only page: skip it */
             continue;
 
+        if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
+        {
+            if ( xc_set_broken_page_p2m(xch, dom, pfn) )
+            {
+                ERROR("Set p2m for broken page failed, "
+                      "dom=%d, pfn=%lx\n", dom, pfn);
+                goto err_mapped;
+            }
+            continue;
+        }
+
         if (pfn_err[i])
         {
             ERROR("unexpected PFN mapping failure pfn %lx map_mfn %lx p2m_mfn %lx",
diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xc_domain_save.c	Thu Dec 06 09:50:40 2012 +0800
@@ -1277,6 +1277,13 @@
                 if ( !hvm )
                     gmfn = pfn_to_mfn(gmfn);
 
+                if ( pfn_type[j] == XEN_DOMCTL_PFINFO_BROKEN )
+                {
+                    pfn_type[j] |= pfn_batch[j];
+                    ++run;
+                    continue;
+                }
+
                 if ( pfn_err[j] )
                 {
                     if ( pfn_type[j] == XEN_DOMCTL_PFINFO_XTAB )
@@ -1371,8 +1378,12 @@
                     }
                 }
 
-                /* skip pages that aren't present or are alloc-only */
+                /*
+                 * skip pages that aren't present,
+                 * or are broken, or are alloc-only
+                 */
                 if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
+                    || pagetype == XEN_DOMCTL_PFINFO_BROKEN
                     || pagetype == XEN_DOMCTL_PFINFO_XALLOC )
                     continue;
 
diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Nov 13 11:19:17 2012 +0000
+++ b/tools/libxc/xenctrl.h	Thu Dec 06 09:50:40 2012 +0800
@@ -575,6 +575,17 @@
                           xc_domaininfo_t *info);
 
 /**
+ * This function sets the p2m type for a broken page
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid the domain id to which the broken page belongs
+ * @parm pfn the pfn number of the broken page
+ * @return 0 on success, -1 on failure
+ */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn);
+
+/**
  * This function returns information about the context of a hvm domain
  * @parm xch a handle to an open hypervisor interface
  * @parm domid the domain to get information from
diff -r 8b93ac0c93f3 -r 75d16e26926c xen/arch/x86/domctl.c
--- a/xen/arch/x86/domctl.c	Tue Nov 13 11:19:17 2012 +0000
+++ b/xen/arch/x86/domctl.c	Thu Dec 06 09:50:40 2012 +0800
@@ -209,12 +209,18 @@
                 for ( j = 0; j < k; j++ )
                 {
                     unsigned long type = 0;
+                    p2m_type_t t;
 
-                    page = get_page_from_gfn(d, arr[j], NULL, P2M_ALLOC);
+                    page = get_page_from_gfn(d, arr[j], &t, P2M_ALLOC);
 
                     if ( unlikely(!page) ||
                          unlikely(is_xen_heap_page(page)) )
-                        type = XEN_DOMCTL_PFINFO_XTAB;
+                    {
+                        if ( p2m_is_broken(t) )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
+                        else
+                            type = XEN_DOMCTL_PFINFO_XTAB;
+                    }
                     else
                     {
                         switch( page->u.inuse.type_info & PGT_type_mask )
@@ -235,6 +241,9 @@
 
                         if ( page->u.inuse.type_info & PGT_pinned )
                             type |= XEN_DOMCTL_PFINFO_LPINTAB;
+
+                        if ( page->count_info & PGC_broken )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
                     }
 
                     if ( page )
@@ -1568,6 +1577,29 @@
     }
     break;
 
+    case XEN_DOMCTL_set_broken_page_p2m:
+    {
+        struct domain *d;
+
+        d = rcu_lock_domain_by_id(domctl->domain);
+        if ( d != NULL )
+        {
+            p2m_type_t pt;
+            unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
+            mfn_t mfn = get_gfn_query(d, pfn, &pt);
+
+            if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
+                         (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
+                ret = -EINVAL;
+
+            put_gfn(d, pfn);
+            rcu_unlock_domain(d);
+        }
+        else
+            ret = -ESRCH;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, u_domctl);
         break;
diff -r 8b93ac0c93f3 -r 75d16e26926c xen/include/public/domctl.h
--- a/xen/include/public/domctl.h	Tue Nov 13 11:19:17 2012 +0000
+++ b/xen/include/public/domctl.h	Thu Dec 06 09:50:40 2012 +0800
@@ -136,6 +136,7 @@
 #define XEN_DOMCTL_PFINFO_LPINTAB (0x1U<<31)
 #define XEN_DOMCTL_PFINFO_XTAB    (0xfU<<28) /* invalid page */
 #define XEN_DOMCTL_PFINFO_XALLOC  (0xeU<<28) /* allocate-only page */
+#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU<<28) /* broken page */
 #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
 
 struct xen_domctl_getpageframeinfo {
@@ -834,6 +835,12 @@
 typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
 
+struct xen_domctl_set_broken_page_p2m {
+    uint64_aligned_t pfn;
+};
+typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -899,6 +906,7 @@
 #define XEN_DOMCTL_set_access_required           64
 #define XEN_DOMCTL_audit_p2m                     65
 #define XEN_DOMCTL_set_virq_handler              66
+#define XEN_DOMCTL_set_broken_page_p2m           67
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -954,6 +962,7 @@
         struct xen_domctl_audit_p2m         audit_p2m;
         struct xen_domctl_set_virq_handler  set_virq_handler;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
+        struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:05:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJKw-0005MI-P3; Wed, 05 Dec 2012 18:04:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgJKv-0005M7-3M
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:04:45 +0000
Received: from [85.158.143.35:65230] by server-1.bemta-4.messagelabs.com id
	42/21-27934-CBC8FB05; Wed, 05 Dec 2012 18:04:44 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354730683!16328229!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26371 invoked from network); 5 Dec 2012 18:04:43 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:04:43 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so2304220wgb.32
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 10:04:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=CaimDQKILg0triRrlG17+ZPRL8x7NalKZMpdm/TPYMw=;
	b=XHXbN6felRxP3piDav1oEZR2v/cGj15WbPoC1WSGJz2Menzl2LvlE8tn7348Oi0uck
	LviMXy8ECUd7CX8XUOxH7iTjBuCBSxfmDDKKCGKLfXtAS9pvUJAGiJmyTNF8Pcwps0bS
	5fuTG4UFOB0EQyEbR2HDjGa11oWbZ14Ig0OuPpUIeHm6fdgTg+nVI5l3L3sKNxhcHCFq
	oANNjZr9+uDd1wQwfDWKc2OJWEcS+qJfHiipNxadm3Ox5AQuIjSPhGZXNllx9GXbgSoU
	bMoMILoTfG/u7CF2yXeOnP6JfacLMcZmJhbbxbMkbrm/sE3c2dmEqxwqjnw/QnlX0UML
	IRRQ==
Received: by 10.180.88.99 with SMTP id bf3mr4643563wib.22.1354730682916;
	Wed, 05 Dec 2012 10:04:42 -0800 (PST)
Received: from [192.168.1.88]
	(host86-183-153-239.range86-183.btcentralplus.com. [86.183.153.239])
	by mx.google.com with ESMTPS id hv4sm19979933wib.0.2012.12.05.10.04.40
	(version=SSLv3 cipher=OTHER); Wed, 05 Dec 2012 10:04:41 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Wed, 05 Dec 2012 18:04:38 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Ian Campbell <ian.campbell@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CCE53D36.46658%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] gitignore: ignore xen-foreign/arm.h
Thread-Index: Ac3TEv2HFI0G6L7oQEG34SOqwAoBbA==
In-Reply-To: <1354719311-14267-1-git-send-email-ian.campbell@citrix.com>
Mime-version: 1.0
Cc: ian.jackson@citrix.com
Subject: Re: [Xen-devel] [PATCH] gitignore: ignore xen-foreign/arm.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/2012 14:55, "Ian Campbell" <ian.campbell@citrix.com> wrote:

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>

> ---
>  .gitignore |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index d724e5e..46ce63a 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -340,6 +340,7 @@ tools/include/xen-foreign/checker.c
>  tools/include/xen-foreign/structs.pyc
>  tools/include/xen-foreign/x86_32.h
>  tools/include/xen-foreign/x86_64.h
> +tools/include/xen-foreign/arm.h
>  
>  .git
>  tools/misc/xen-hptool



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:05:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJLO-0005P6-6O; Wed, 05 Dec 2012 18:05:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgJLM-0005Or-H9
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:05:12 +0000
Received: from [85.158.137.99:29936] by server-2.bemta-3.messagelabs.com id
	F6/A6-04744-7DC8FB05; Wed, 05 Dec 2012 18:05:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354730710!13022578!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_23,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1750 invoked from network); 5 Dec 2012 18:05:11 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:05:11 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so1880396wib.14
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 10:05:10 -0800 (PST)
From xen-devel-bounces@lists.xen.org Wed Dec 05 18:05:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJLO-0005P6-6O; Wed, 05 Dec 2012 18:05:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgJLM-0005Or-H9
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:05:12 +0000
Received: from [85.158.137.99:29936] by server-2.bemta-3.messagelabs.com id
	F6/A6-04744-7DC8FB05; Wed, 05 Dec 2012 18:05:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354730710!13022578!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_23,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1750 invoked from network); 5 Dec 2012 18:05:11 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:05:11 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so1880396wib.14
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 10:05:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=S9xeF0XM4P0QFyjDQ2UZ71BDsh9u7boRBaToJ9teF4I=;
	b=eI8JfARhtIwKQpQfgxT2qAOOtbEoHNK2+j1ZmmCRFMKYkh+o3fQrW+hoI16960ZTq8
	epLf06B96l6HNO9AwkFXtKpvtR+7cUJWCO5CrcAxuh5G6+HTUIWx3jKV2mFgVZ/SC5MH
	ToMNVVTMY1A21weZmF54O9jEL7zXz1QgHCbDZniJfxE8bkk6ZkaoOEBaA6uAZai2248W
	9X9Ia6U5OMQ694bHD97CRDq347X+xBbBMTkTDtbMTliCITUFscUH/6NBU2TmENsRZkrV
	RcRMtM2ADHOU9O/FDBrs5Gn34MEz+pKE0d/0/zplwjuKuYMACGDio9JS+s4bqoGf681i
	Tt7w==
Received: by 10.216.227.137 with SMTP id d9mr6341068weq.205.1354730710503;
	Wed, 05 Dec 2012 10:05:10 -0800 (PST)
Received: from [192.168.1.88]
	(host86-183-153-239.range86-183.btcentralplus.com. [86.183.153.239])
	by mx.google.com with ESMTPS id cf6sm7431896wib.3.2012.12.05.10.05.08
	(version=SSLv3 cipher=OTHER); Wed, 05 Dec 2012 10:05:09 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Wed, 05 Dec 2012 18:05:06 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Ian Campbell <ian.campbell@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CCE53D52.46659%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] README: docs/pdf/user.pdf was deleted in
	24563:4271634e4c86
Thread-Index: Ac3TEw43rxfGVHqEbEaQayYimcWoAA==
In-Reply-To: <1354722103-22585-1-git-send-email-ian.campbell@citrix.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] README: docs/pdf/user.pdf was deleted in
 24563:4271634e4c86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/12/2012 15:41, "Ian Campbell" <ian.campbell@citrix.com> wrote:

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>

> ---
>  README |    7 ++-----
>  1 files changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/README b/README
> index 1938f66..f5d5530 100644
> --- a/README
> +++ b/README
> @@ -25,11 +25,8 @@ live relocation of VMs. Ports to Linux, NetBSD, FreeBSD and
> Solaris
>  are available from the community.
>  
>  This file contains some quick-start instructions to install Xen on
> -your system. For full documentation, see the Xen User Manual. If this
> -is a pre-built release then you can find the manual at:
> - dist/install/usr/share/doc/xen/pdf/user.pdf
> -If you have a source release, then 'make -C docs' will build the
> -manual at docs/pdf/user.pdf.
> +your system. For more information see http:/www.xen.org/ and
> +http://wiki.xen.org/
>  
>  Quick-Start Guide
>  =================



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:19:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJYz-0005xF-Kt; Wed, 05 Dec 2012 18:19:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJYy-0005x6-1R
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:19:16 +0000
Received: from [85.158.137.99:7788] by server-10.bemta-3.messagelabs.com id
	E6/6F-19806-3209FB05; Wed, 05 Dec 2012 18:19:15 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354731553!12958109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32281 invoked from network); 5 Dec 2012 18:19:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:19:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46721247"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJYu-0008WM-21;
	Wed, 05 Dec 2012 18:19:12 +0000
Date: Wed, 5 Dec 2012 18:19:08 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Tim Deegan <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 0/6] xen: ARM HDLCD video driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series introduces a very simple driver for the ARM HDLCD
controller, which means that we can finally have something on the screen
while Xen is booting on ARM :)

The driver is very limited at the moment: it is only capable of setting
a single resolution, 1280x1024. It also requires the OSC5 clock to be
set to 108MHz in the Versatile Express config file.

In order to reduce code duplication with x86, I tried to generalize the
existing vesa character rendering functions into an architecture-agnostic
framebuffer driver that can be used by both the vesa and hdlcd drivers.

I would very much appreciate it if you could take a close look at the
vesa changes: I don't have any x86 test machine that boots in vesa mode,
so I could not test them.



Stefano Stabellini (6):
      xen/arm: introduce map_phys_range
      xen: infrastructure to have cross-platform video drivers
      xen: introduce a generic framebuffer driver
      xen/vesa: use the new fb_* functions
      xen/device_tree: introduce find_compatible_node
      xen/arm: introduce a driver for the ARM HDLCD controller

 xen/arch/arm/Rules.mk         |    2 +
 xen/arch/arm/mm.c             |   23 +++++
 xen/arch/arm/setup.c          |    2 +-
 xen/arch/x86/Rules.mk         |    1 +
 xen/common/device_tree.c      |   47 +++++++++
 xen/drivers/Makefile          |    2 +-
 xen/drivers/char/console.c    |   12 +-
 xen/drivers/video/Makefile    |   12 ++-
 xen/drivers/video/arm_hdlcd.c |  165 ++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.c        |  209 +++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.h        |   44 +++++++++
 xen/drivers/video/vesa.c      |  179 +++++------------------------------
 xen/drivers/video/vga.c       |   12 +-
 xen/include/asm-arm/config.h  |    3 +
 xen/include/asm-arm/mm.h      |    3 +
 xen/include/asm-x86/config.h  |    1 +
 xen/include/xen/device_tree.h |    3 +
 xen/include/xen/vga.h         |    9 +--
 xen/include/xen/video.h       |   24 +++++
 19 files changed, 573 insertions(+), 180 deletions(-)



Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJZj-00060X-7c; Wed, 05 Dec 2012 18:20:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJZi-00060M-22
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:02 +0000
Received: from [85.158.138.51:19153] by server-14.bemta-3.messagelabs.com id
	3E/74-31424-1509FB05; Wed, 05 Dec 2012 18:20:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354731599!27628621!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22723 invoked from network); 5 Dec 2012 18:20:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46721342"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-4p;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:47 +0000
Message-ID: <1354731588-32579-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 5/6] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a find_compatible_node function that can be used by device
drivers to find the node corresponding to their device in the device
tree.

Also add the declaration of device_tree_node_compatible to
device_tree.h, which is currently missing.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/setup.c          |    2 +-
 xen/common/device_tree.c      |   47 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h |    3 ++
 3 files changed, 51 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5f4e318..d978938 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -187,7 +187,7 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     smp_clear_cpu_maps();
 
-    fdt = (void *)BOOT_MISC_VIRT_START
+    device_tree_flattened = fdt = (void *)BOOT_MISC_VIRT_START
         + (atag_paddr & ((1 << SECOND_SHIFT) - 1));
     fdt_size = device_tree_early_init(fdt);
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 8d5b6b0..ca0aba7 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -173,6 +173,53 @@ int device_tree_for_each_node(const void *fdt,
     return 0;
 }
 
+struct find_compat {
+    const char *compatible;
+    int found;
+    int node;
+    int depth;
+    u32 address_cells;
+    u32 size_cells;
+};
+
+static int _find_compatible_node(const void *fdt,
+                             int node, const char *name, int depth,
+                             u32 address_cells, u32 size_cells,
+                             void *data)
+{
+    struct find_compat *c = (struct find_compat *) data;
+    if ( device_tree_node_compatible(fdt, node, c->compatible) )
+    {
+        c->found = 1;
+        c->node = node;
+        c->depth = depth;
+        c->address_cells = address_cells;
+        c->size_cells = size_cells;
+    }
+    return 0;
+}
+ 
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells)
+{
+    int ret;
+    struct find_compat c;
+    c.compatible = compatible;
+    c.found = 0;
+
+    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
+    if ( !c.found )
+        return ret;
+    else
+    {
+        *node = c.node;
+        *depth = c.depth;
+        *address_cells = c.address_cells;
+        *size_cells = c.size_cells;
+        return 1;
+    }
+}
+
 /**
  * device_tree_bootargs - return the bootargs (the Xen command line)
  * @fdt flat device tree.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index a0e3a97..5a75f0e 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -54,6 +54,9 @@ void device_tree_set_reg(u32 **cell, u32 address_cells, u32 size_cells,
                          u64 start, u64 size);
 u32 device_tree_get_u32(const void *fdt, int node, const char *prop_name);
 bool_t device_tree_node_matches(const void *fdt, int node, const char *match);
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match);
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells);
 int device_tree_for_each_node(const void *fdt,
                               device_tree_node_func func, void *data);
 const char *device_tree_bootargs(const void *fdt);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJZo-00061I-IP; Wed, 05 Dec 2012 18:20:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJZk-00060c-1U
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:06 +0000
Received: from [85.158.138.51:33045] by server-16.bemta-3.messagelabs.com id
	AD/89-07461-3509FB05; Wed, 05 Dec 2012 18:20:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354731601!8949613!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2659 invoked from network); 5 Dec 2012 18:20:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46721345"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-4G;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:46 +0000
Message-ID: <1354731588-32579-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 4/6] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make use of the framebuffer functions previously introduced.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
 1 files changed, 26 insertions(+), 153 deletions(-)

diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index 759355f..4dbaecf 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -12,20 +12,15 @@
 #include <xen/vga.h>
 #include <asm/page.h>
 #include "font.h"
+#include "fb.h"
 
 #define vlfb_info    vga_console_info.u.vesa_lfb
-#define text_columns (vlfb_info.width / font->width)
-#define text_rows    (vlfb_info.height / font->height)
 
-static void vesa_redraw_puts(const char *s);
-static void vesa_scroll_puts(const char *s);
+static void lfb_flush(void);
 
-static unsigned char *lfb, *lbuf, *text_buf;
-static unsigned int *__initdata line_len;
+static unsigned char *lfb;
 static const struct font_desc *font;
 static bool_t vga_compat;
-static unsigned int pixel_on;
-static unsigned int xpos, ypos;
 
 static unsigned int vram_total;
 integer_param("vesa-ram", vram_total);
@@ -86,30 +81,27 @@ void __init vesa_early_init(void)
 
 void __init vesa_init(void)
 {
-    if ( !font )
-        goto fail;
-
-    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
-    if ( !lbuf )
-        goto fail;
+    struct fb_prop fbp;
 
-    text_buf = xzalloc_bytes(text_columns * text_rows);
-    if ( !text_buf )
-        goto fail;
+    if ( !font )
+        return;
 
-    line_len = xzalloc_array(unsigned int, text_columns);
-    if ( !line_len )
-        goto fail;
+    fbp.font = font;
+    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
+    fbp.bytes_per_line = vlfb_info.bytes_per_line;
+    fbp.width = vlfb_info.width;
+    fbp.height = vlfb_info.height;
+    fbp.flush = lfb_flush;
+    fbp.text_columns = vlfb_info.width / font->width;
+    fbp.text_rows = vlfb_info.height / font->height;
 
     if ( map_pages_to_xen(IOREMAP_VIRT_START,
                           vlfb_info.lfb_base >> PAGE_SHIFT,
                           vram_remap >> PAGE_SHIFT,
                           PAGE_HYPERVISOR_NOCACHE) )
-        goto fail;
-
-    lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
+        return;
 
-    video_puts = vesa_redraw_puts;
+    fbp.lfb = lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
 
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
@@ -132,7 +124,7 @@ void __init vesa_init(void)
     {
         /* Light grey in truecolor. */
         unsigned int grey = 0xaaaaaaaa;
-        pixel_on = 
+        fbp.pixel_on = 
             ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
             ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
             ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
@@ -140,15 +132,14 @@ void __init vesa_init(void)
     else
     {
         /* White(ish) in default pseudocolor palette. */
-        pixel_on = 7;
+        fbp.pixel_on = 7;
     }
 
-    return;
-
- fail:
-    xfree(lbuf);
-    xfree(text_buf);
-    xfree(line_len);
+    if ( fb_init(fbp) < 0 )
+        return;
+    if ( fb_alloc() < 0 )
+        return;
+    video_puts = fb_redraw_puts;
 }
 
 #include <asm/mtrr.h>
@@ -193,8 +184,8 @@ void __init vesa_endboot(bool_t keep)
 {
     if ( keep )
     {
-        xpos = 0;
-        video_puts = vesa_scroll_puts;
+        video_puts = fb_scroll_puts;
+        fb_cr();
     }
     else
     {
@@ -203,124 +194,6 @@ void __init vesa_endboot(bool_t keep)
             memset(lfb + i * vlfb_info.bytes_per_line, 0,
                    vlfb_info.width * bpp);
         lfb_flush();
+        fb_free();
     }
-
-    xfree(line_len);
-}
-
-/* Render one line of text to given linear framebuffer line. */
-static void vesa_show_line(
-    const unsigned char *text_line,
-    unsigned char *video_line,
-    unsigned int nr_chars,
-    unsigned int nr_cells)
-{
-    unsigned int i, j, b, bpp, pixel;
-
-    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
-
-    for ( i = 0; i < font->height; i++ )
-    {
-        unsigned char *ptr = lbuf;
-
-        for ( j = 0; j < nr_chars; j++ )
-        {
-            const unsigned char *bits = font->data;
-            bits += ((text_line[j] * font->height + i) *
-                     ((font->width + 7) >> 3));
-            for ( b = font->width; b--; )
-            {
-                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
-                memcpy(ptr, &pixel, bpp);
-                ptr += bpp;
-            }
-        }
-
-        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
-        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
-        video_line += vlfb_info.bytes_per_line;
-    }
-}
-
-/* Fast mode which redraws all modified parts of a 2D text buffer. */
-static void __init vesa_redraw_puts(const char *s)
-{
-    unsigned int i, min_redraw_y = ypos;
-    char c;
-
-    /* Paste characters into text buffer. */
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            if ( ++ypos >= text_rows )
-            {
-                min_redraw_y = 0;
-                ypos = text_rows - 1;
-                memmove(text_buf, text_buf + text_columns,
-                        ypos * text_columns);
-                memset(text_buf + ypos * text_columns, 0, xpos);
-            }
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++ + ypos * text_columns] = c;
-    }
-
-    /* Render modified section of text buffer to VESA linear framebuffer. */
-    for ( i = min_redraw_y; i <= ypos; i++ )
-    {
-        const unsigned char *line = text_buf + i * text_columns;
-        unsigned int width;
-
-        for ( width = text_columns; width; --width )
-            if ( line[width - 1] )
-                 break;
-        vesa_show_line(line,
-                       lfb + i * font->height * vlfb_info.bytes_per_line,
-                       width, max(line_len[i], width));
-        line_len[i] = width;
-    }
-
-    lfb_flush();
-}
-
-/* Slower line-based scroll mode which interacts better with dom0. */
-static void vesa_scroll_puts(const char *s)
-{
-    unsigned int i;
-    char c;
-
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            unsigned int bytes = (vlfb_info.width *
-                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
-            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
-            unsigned char *dst = lfb;
-            
-            /* New line: scroll all previous rows up one line. */
-            for ( i = font->height; i < vlfb_info.height; i++ )
-            {
-                memcpy(dst, src, bytes);
-                src += vlfb_info.bytes_per_line;
-                dst += vlfb_info.bytes_per_line;
-            }
-
-            /* Render new line. */
-            vesa_show_line(
-                text_buf,
-                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
-                xpos, text_columns);
-
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++] = c;
-    }
-
-    lfb_flush();
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJZo-00061U-VN; Wed, 05 Dec 2012 18:20:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJZj-00060V-3s
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:06 +0000
Received: from [85.158.138.51:19204] by server-1.bemta-3.messagelabs.com id
	6A/1E-12169-2509FB05; Wed, 05 Dec 2012 18:20:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354731599!27628621!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22741 invoked from network); 5 Dec 2012 18:20:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46721343"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-2T;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:45 +0000
Message-ID: <1354731588-32579-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 3/6] xen: introduce a generic framebuffer driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Abstract the functions that handle a linear framebuffer and print
characters to it out of vesa.c into a generic framebuffer driver.
The corresponding functions are removed from vesa.c in the next
patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/Makefile |    1 +
 xen/drivers/video/fb.c     |  209 ++++++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.h     |   44 +++++++++
 3 files changed, 254 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/fb.c
 create mode 100644 xen/drivers/video/fb.h

diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 2993c39..3b3eb43 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
 obj-$(HAS_VIDEO) += font_8x14.o
 obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
new file mode 100644
index 0000000..282f97e
--- /dev/null
+++ b/xen/drivers/video/fb.c
@@ -0,0 +1,209 @@
+/******************************************************************************
+ * fb.c
+ *
+ * linear frame buffer handling.
+ */
+
+#include <xen/config.h>
+#include <xen/kernel.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include "fb.h"
+#include "font.h"
+
+#define MAX_XRES 1900
+#define MAX_YRES 1200
+#define MAX_BPP 4
+#define MAX_FONT_W 8
+#define MAX_FONT_H 16
+static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
+static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
+static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) * \
+                          (MAX_YRES / MAX_FONT_H)];
+
+struct fb_status {
+    struct fb_prop fbp;
+
+    unsigned char *lbuf, *text_buf;
+    unsigned int *line_len;
+    unsigned int xpos, ypos;
+};
+static struct fb_status fb;
+
+static void fb_show_line(
+    const unsigned char *text_line,
+    unsigned char *video_line,
+    unsigned int nr_chars,
+    unsigned int nr_cells)
+{
+    unsigned int i, j, b, bpp, pixel;
+
+    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
+
+    for ( i = 0; i < fb.fbp.font->height; i++ )
+    {
+        unsigned char *ptr = fb.lbuf;
+
+        for ( j = 0; j < nr_chars; j++ )
+        {
+            const unsigned char *bits = fb.fbp.font->data;
+            bits += ((text_line[j] * fb.fbp.font->height + i) *
+                     ((fb.fbp.font->width + 7) >> 3));
+            for ( b = fb.fbp.font->width; b--; )
+            {
+                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
+                memcpy(ptr, &pixel, bpp);
+                ptr += bpp;
+            }
+        }
+
+        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
+        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
+        video_line += fb.fbp.bytes_per_line;
+    }
+}
+
+/* Fast mode which redraws all modified parts of a 2D text buffer. */
+void fb_redraw_puts(const char *s)
+{
+    unsigned int i, min_redraw_y = fb.ypos;
+    char c;
+
+    /* Paste characters into text buffer. */
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            if ( ++fb.ypos >= fb.fbp.text_rows )
+            {
+                min_redraw_y = 0;
+                fb.ypos = fb.fbp.text_rows - 1;
+                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
+                        fb.ypos * fb.fbp.text_columns);
+                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, fb.xpos);
+            }
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
+    }
+
+    /* Render modified section of text buffer to VESA linear framebuffer. */
+    for ( i = min_redraw_y; i <= fb.ypos; i++ )
+    {
+        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
+        unsigned int width;
+
+        for ( width = fb.fbp.text_columns; width; --width )
+            if ( line[width - 1] )
+                 break;
+        fb_show_line(line,
+                       fb.fbp.lfb + i * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                       width, max(fb.line_len[i], width));
+        fb.line_len[i] = width;
+    }
+
+    fb.fbp.flush();
+}
+
+/* Slower line-based scroll mode which interacts better with dom0. */
+void fb_scroll_puts(const char *s)
+{
+    unsigned int i;
+    char c;
+
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            unsigned int bytes = (fb.fbp.width *
+                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
+            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * fb.fbp.bytes_per_line;
+            unsigned char *dst = fb.fbp.lfb;
+
+            /* New line: scroll all previous rows up one line. */
+            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
+            {
+                memcpy(dst, src, bytes);
+                src += fb.fbp.bytes_per_line;
+                dst += fb.fbp.bytes_per_line;
+            }
+
+            /* Render new line. */
+            fb_show_line(
+                fb.text_buf,
+                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                fb.xpos, fb.fbp.text_columns);
+
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++] = c;
+    }
+
+    fb.fbp.flush();
+}
+
+void fb_cr(void)
+{
+    fb.xpos = 0;
+}
+
+int __init fb_init(struct fb_prop fbp)
+{
+    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
+    {
+        printk("Couldn't initialize a %ux%u framebuffer early.\n",
+                        fbp.width, fbp.height);
+        return -EINVAL;
+    }
+
+    fb.fbp = fbp;
+    fb.lbuf = lbuf;
+    fb.text_buf = text_buf;
+    fb.line_len = line_len;
+    return 0;
+}
+
+int __init fb_alloc(void)
+{
+    fb.lbuf = NULL;
+    fb.text_buf = NULL;
+    fb.line_len = NULL;
+
+    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
+    if ( !fb.lbuf )
+        goto fail;
+
+    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
+    if ( !fb.text_buf )
+        goto fail;
+
+    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
+    if ( !fb.line_len )
+        goto fail;
+
+    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
+    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
+    memcpy(fb.line_len, line_len, fb.fbp.text_columns * sizeof(unsigned int));
+
+    return 0;
+
+fail:
+    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
+                    "the framebuffer\n");
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+
+    return -ENOMEM;
+}
+
+void fb_free(void)
+{
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+}
diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
new file mode 100644
index 0000000..a432e19
--- /dev/null
+++ b/xen/drivers/video/fb.h
@@ -0,0 +1,44 @@
+/*
+ * xen/drivers/video/fb.h
+ *
+ * Cross-platform framebuffer library
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/init.h>
+
+struct fb_prop {
+    const struct font_desc *font;
+    unsigned char *lfb;
+    unsigned int pixel_on;
+    uint16_t width, height;
+    uint16_t bytes_per_line;
+    uint16_t bits_per_pixel;
+    void (*flush)(void);
+
+    unsigned int text_columns;
+    unsigned int text_rows;
+};
+
+void fb_redraw_puts(const char *s);
+void fb_scroll_puts(const char *s);
+void fb_cr(void);
+void fb_free(void);
+
+/* initialize the framebuffer, can be called early (before xmalloc is
+ * available) */
+int __init fb_init(struct fb_prop fbp);
+/* fb_alloc allocates internal structures using xmalloc */
+int __init fb_alloc(void);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: [Xen-devel] [PATCH 3/6] xen: introduce a generic framebuffer driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Abstract away from vesa.c the functions that handle a linear framebuffer
and print characters to it.
The corresponding functions will be removed from vesa.c in the next
patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
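[Editorial note, not part of the patch: the core of fb_show_line() is a
per-scanline glyph expansion in which each bit of a 1bpp font row (most
significant bit = leftmost pixel) becomes one BPP-byte pixel, either
pixel_on or black. A minimal standalone sketch of that step, with names
invented here for illustration rather than taken from the Xen tree:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in values: a 32bpp mode with white as "pixel on". */
#define BPP      4
#define PIXEL_ON 0xffffffu

/* Expand one row of a 1bpp font glyph (MSB = leftmost pixel) into a
 * linear run of BPP-byte pixels, as fb_show_line() does per scanline. */
static void expand_font_row(uint8_t bits, unsigned int font_width,
                            unsigned char *dst)
{
    unsigned int b;
    uint32_t pixel;

    for ( b = font_width; b--; )
    {
        pixel = (bits & (1u << b)) ? PIXEL_ON : 0;
        memcpy(dst, &pixel, BPP);
        dst += BPP;
    }
}
```

[For example, expand_font_row(0x81, 8, buf) lights only the leftmost
and rightmost of the eight output pixels.]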
 xen/drivers/video/Makefile |    1 +
 xen/drivers/video/fb.c     |  209 ++++++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.h     |   44 +++++++++
 3 files changed, 254 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/fb.c
 create mode 100644 xen/drivers/video/fb.h

diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 2993c39..3b3eb43 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
 obj-$(HAS_VIDEO) += font_8x14.o
 obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
new file mode 100644
index 0000000..282f97e
--- /dev/null
+++ b/xen/drivers/video/fb.c
@@ -0,0 +1,209 @@
+/******************************************************************************
+ * fb.c
+ *
+ * linear frame buffer handling.
+ */
+
+#include <xen/config.h>
+#include <xen/kernel.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include "fb.h"
+#include "font.h"
+
+#define MAX_XRES 1900
+#define MAX_YRES 1200
+#define MAX_BPP 4
+#define MAX_FONT_W 8
+#define MAX_FONT_H 16
+static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
+static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
+static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) * \
+                          (MAX_YRES / MAX_FONT_H)];
+
+struct fb_status {
+    struct fb_prop fbp;
+
+    unsigned char *lbuf, *text_buf;
+    unsigned int *line_len;
+    unsigned int xpos, ypos;
+};
+static struct fb_status fb;
+
+static void fb_show_line(
+    const unsigned char *text_line,
+    unsigned char *video_line,
+    unsigned int nr_chars,
+    unsigned int nr_cells)
+{
+    unsigned int i, j, b, bpp, pixel;
+
+    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
+
+    for ( i = 0; i < fb.fbp.font->height; i++ )
+    {
+        unsigned char *ptr = fb.lbuf;
+
+        for ( j = 0; j < nr_chars; j++ )
+        {
+            const unsigned char *bits = fb.fbp.font->data;
+            bits += ((text_line[j] * fb.fbp.font->height + i) *
+                     ((fb.fbp.font->width + 7) >> 3));
+            for ( b = fb.fbp.font->width; b--; )
+            {
+                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
+                memcpy(ptr, &pixel, bpp);
+                ptr += bpp;
+            }
+        }
+
+        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
+        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
+        video_line += fb.fbp.bytes_per_line;
+    }
+}
+
+/* Fast mode which redraws all modified parts of a 2D text buffer. */
+void fb_redraw_puts(const char *s)
+{
+    unsigned int i, min_redraw_y = fb.ypos;
+    char c;
+
+    /* Paste characters into text buffer. */
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            if ( ++fb.ypos >= fb.fbp.text_rows )
+            {
+                min_redraw_y = 0;
+                fb.ypos = fb.fbp.text_rows - 1;
+                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
+                        fb.ypos * fb.fbp.text_columns);
+                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, fb.xpos);
+            }
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
+    }
+
+    /* Render modified section of text buffer to VESA linear framebuffer. */
+    for ( i = min_redraw_y; i <= fb.ypos; i++ )
+    {
+        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
+        unsigned int width;
+
+        for ( width = fb.fbp.text_columns; width; --width )
+            if ( line[width - 1] )
+                 break;
+        fb_show_line(line,
+                       fb.fbp.lfb + i * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                       width, max(fb.line_len[i], width));
+        fb.line_len[i] = width;
+    }
+
+    fb.fbp.flush();
+}
+
+/* Slower line-based scroll mode which interacts better with dom0. */
+void fb_scroll_puts(const char *s)
+{
+    unsigned int i;
+    char c;
+
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            unsigned int bytes = (fb.fbp.width *
+                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
+            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * fb.fbp.bytes_per_line;
+            unsigned char *dst = fb.fbp.lfb;
+
+            /* New line: scroll all previous rows up one line. */
+            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
+            {
+                memcpy(dst, src, bytes);
+                src += fb.fbp.bytes_per_line;
+                dst += fb.fbp.bytes_per_line;
+            }
+
+            /* Render new line. */
+            fb_show_line(
+                fb.text_buf,
+                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                fb.xpos, fb.fbp.text_columns);
+
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++] = c;
+    }
+
+    fb.fbp.flush();
+}
+
+void fb_cr(void)
+{
+    fb.xpos = 0;
+}
+
+int __init fb_init(struct fb_prop fbp)
+{
+    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
+    {
+        printk("Couldn't initialize a %xx%x framebuffer early.\n",
+                        fbp.width, fbp.height);
+        return -EINVAL;
+    }
+
+    fb.fbp = fbp;
+    fb.lbuf = lbuf;
+    fb.text_buf = text_buf;
+    fb.line_len = line_len;
+    return 0;
+}
+
+int __init fb_alloc(void)
+{
+    fb.lbuf = NULL;
+    fb.text_buf = NULL;
+    fb.line_len = NULL;
+
+    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
+    if ( !fb.lbuf )
+        goto fail;
+
+    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
+    if ( !fb.text_buf )
+        goto fail;
+
+    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
+    if ( !fb.line_len )
+        goto fail;
+
+    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
+    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
+    memcpy(fb.line_len, line_len, fb.fbp.text_columns);
+
+    return 0;
+
+fail:
+    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
+                    "the framebuffer\n");
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+
+    return -ENOMEM;
+}
+
+void fb_free(void)
+{
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+}
diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
new file mode 100644
index 0000000..a432e19
--- /dev/null
+++ b/xen/drivers/video/fb.h
@@ -0,0 +1,44 @@
+/*
+ * xen/drivers/video/fb.h
+ *
+ * Cross-platform framebuffer library
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/init.h>
+
+struct fb_prop {
+    const struct font_desc *font;
+    unsigned char *lfb;
+    unsigned int pixel_on;
+    uint16_t width, height;
+    uint16_t bytes_per_line;
+    uint16_t bits_per_pixel;
+    void (*flush)(void);
+
+    unsigned int text_columns;
+    unsigned int text_rows;
+};
+
+void fb_redraw_puts(const char *s);
+void fb_scroll_puts(const char *s);
+void fb_cr(void);
+void fb_free(void);
+
+/* initialize the framebuffer, can be called early (before xmalloc is
+ * available) */
+int __init fb_init(struct fb_prop fbp);
+/* fb_alloc allocates internal structures using xmalloc */
+int __init fb_alloc(void);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJZo-000619-5z; Wed, 05 Dec 2012 18:20:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJZk-00060d-0U
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:06 +0000
Received: from [85.158.138.51:19238] by server-3.bemta-3.messagelabs.com id
	40/E9-31566-3509FB05; Wed, 05 Dec 2012 18:20:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354731599!27628621!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22894 invoked from network); 5 Dec 2012 18:20:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46721344"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-6d;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:48 +0000
Message-ID: <1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
	HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the moment the resolution is hardcoded to 1280x1024@60Hz.
The driver uses the generic framebuffer functions to print on the screen.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
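[Editorial note, not part of the patch: the register programming in
video_init() follows a simple pattern worth spelling out: each HDLCD
timing register takes its interval length minus one, and HDLCD_PF
encodes bytes per pixel minus one shifted into place. A minimal
standalone sketch under those assumptions; the helper names are
invented here for illustration:]

```c
#include <assert.h>
#include <stdint.h>

/* Mode parameters as hardcoded in the patch (1280x1024, 32bpp). */
#define BPP       4
#define XRES      1280
#define YRES      1024
#define VSYNC_LEN 6

/* Timing registers are programmed with "N - 1" for an N-pixel or
 * N-line interval. */
static uint32_t hdlcd_interval(uint32_t n)
{
    return n - 1;
}

/* The pixel-format register stores bytes-per-pixel minus one,
 * shifted left by 3, matching ((BPP - 1) << 3) in the patch. */
static uint32_t hdlcd_pixel_format(uint32_t bytes_per_pixel)
{
    return (bytes_per_pixel - 1) << 3;
}
```

[With the patch's constants this yields HDLCD_VDATA = 1023,
HDLCD_VSYNC = 5 and HDLCD_PF = 0x18.]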
 xen/arch/arm/Rules.mk         |    1 +
 xen/drivers/video/Makefile    |    1 +
 xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/config.h  |    3 +
 4 files changed, 170 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/arm_hdlcd.c

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index fa9f9c1..9580e6b 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -8,6 +8,7 @@
 
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
+HAS_ARM_HDLCD := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 3b3eb43..8a6f5da 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
 obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
+obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
new file mode 100644
index 0000000..68f588c
--- /dev/null
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -0,0 +1,165 @@
+/*
+ * xen/drivers/video/arm_hdlcd.c
+ *
+ * Driver for ARM HDLCD Controller
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/delay.h>
+#include <asm/types.h>
+#include <xen/config.h>
+#include <xen/device_tree.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include "font.h"
+#include "fb.h"
+
+#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
+
+#define HDLCD_INTMASK       (0x18/4)
+#define HDLCD_FBBASE        (0x100/4)
+#define HDLCD_LINELENGTH    (0x104/4)
+#define HDLCD_LINECOUNT     (0x108/4)
+#define HDLCD_LINEPITCH     (0x10C/4)
+#define HDLCD_BUS           (0x110/4)
+#define HDLCD_VSYNC         (0x200/4)
+#define HDLCD_VBACK         (0x204/4)
+#define HDLCD_VDATA         (0x208/4)
+#define HDLCD_VFRONT        (0x20C/4)
+#define HDLCD_HSYNC         (0x210/4)
+#define HDLCD_HBACK         (0x214/4)
+#define HDLCD_HDATA         (0x218/4)
+#define HDLCD_HFRONT        (0x21C/4)
+#define HDLCD_POLARITIES    (0x220/4)
+#define HDLCD_COMMAND       (0x230/4)
+#define HDLCD_PF            (0x240/4)
+#define HDLCD_RED           (0x244/4)
+#define HDLCD_GREEN         (0x248/4)
+#define HDLCD_BLUE          (0x24C/4)
+
+#define BPP             4
+#define XRES            1280
+#define YRES            1024
+#define refresh         60
+#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
+#define left_margin     80
+#define hback left_margin
+#define right_margin    48
+#define hfront right_margin
+#define upper_margin    21
+#define vback upper_margin
+#define lower_margin    3
+#define vfront lower_margin
+#define hsync_len       32
+#define vsync_len       6
+
+#define HDLCD_SIZE (XRES*YRES*BPP)
+
+static void vga_noop_puts(const char *s) {}
+void (*video_puts)(const char *) = vga_noop_puts;
+
+static void hdlcd_flush(void)
+{
+    dsb();
+}
+
+void __init video_init(void)
+{
+    int node, depth;
+    u32 address_cells, size_cells;
+    struct fb_prop fbp;
+    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
+    paddr_t hdlcd_start, hdlcd_size;
+    paddr_t framebuffer_start, framebuffer_size;
+    const struct fdt_property *prop;
+    const u32 *cell;
+
+    if ( find_compatible_node("arm,hdlcd", &node, &depth,
+                &address_cells, &size_cells) <= 0 )
+        return;
+
+    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &hdlcd_start, &hdlcd_size); 
+
+    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &framebuffer_start, &framebuffer_size); 
+
+    if ( !hdlcd_start || !framebuffer_start )
+        return;
+
+    printk("Initializing HDLCD driver\n");
+
+    map_phys_range(framebuffer_start,
+                    framebuffer_start + framebuffer_size,
+                    VRAM_VIRT_START, DEV_WC);
+    memset(lfb, 0x00, HDLCD_SIZE);
+
+    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
+    HDLCD[HDLCD_COMMAND] = 0;
+
+    HDLCD[HDLCD_LINELENGTH] = XRES * BPP;
+    HDLCD[HDLCD_LINECOUNT] = YRES - 1;
+    HDLCD[HDLCD_LINEPITCH] = XRES * BPP;
+    HDLCD[HDLCD_PF] = ((BPP - 1) << 3);
+    HDLCD[HDLCD_INTMASK] = 0;
+    HDLCD[HDLCD_FBBASE] = framebuffer_start;
+    HDLCD[HDLCD_BUS] = 0xf00|(1<<4);
+    HDLCD[HDLCD_VBACK] = upper_margin - 1;
+    HDLCD[HDLCD_VSYNC] = vsync_len - 1;
+    HDLCD[HDLCD_VDATA] = YRES - 1;
+    HDLCD[HDLCD_VFRONT] = lower_margin - 1;
+    HDLCD[HDLCD_HBACK] = left_margin - 1;
+    HDLCD[HDLCD_HSYNC] = hsync_len - 1;
+    HDLCD[HDLCD_HDATA] = XRES - 1;
+    HDLCD[HDLCD_HFRONT] = right_margin - 1;
+    HDLCD[HDLCD_POLARITIES] = (1<<2)|(1<<3);
+    HDLCD[HDLCD_RED] = (8<<8)|0;
+    HDLCD[HDLCD_GREEN] = (8<<8)|8;
+    HDLCD[HDLCD_BLUE] = (8<<8)|16;
+
+    HDLCD[HDLCD_COMMAND] = 1;
+    clear_fixmap(FIXMAP_MISC);
+
+    fbp.lfb = lfb;
+    fbp.font = &font_vga_8x16;
+    fbp.pixel_on = 0xffffff;
+    fbp.bits_per_pixel = BPP*8;
+    fbp.bytes_per_line = BPP*XRES;
+    fbp.width = XRES;
+    fbp.height = YRES;
+    fbp.flush = hdlcd_flush;
+    fbp.text_columns = XRES / 8;
+    fbp.text_rows = YRES / 16;
+    if ( fb_init(fbp) < 0 )
+            return;
+    video_puts = fb_scroll_puts;
+}
+
+void video_endboot(void)
+{
+    if ( video_puts != vga_noop_puts )
+        fb_alloc();
+}
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 2a05539..9727562 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 
 #define CONFIG_DOMAIN_PAGE 1
 
+#define CONFIG_VIDEO 1
+
 #define OPT_CONSOLE_STR "com1"
 
 #ifdef MAX_PHYS_CPUS
@@ -73,6 +75,7 @@
 #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
 #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
 #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
+#define VRAM_VIRT_START        mk_unsigned_long(0x10000000)
 #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
 #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJa9-00069t-Mf; Wed, 05 Dec 2012 18:20:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJa8-00069M-Js
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:28 +0000
Received: from [85.158.143.35:53664] by server-3.bemta-4.messagelabs.com id
	CE/31-06841-B609FB05; Wed, 05 Dec 2012 18:20:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354731622!16329655!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31993 invoked from network); 5 Dec 2012 18:20:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216518448"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-1O;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:43 +0000
Message-ID: <1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 1/6] xen/arm: introduce map_phys_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a function to map a physical memory range into virtual memory.
It will be used later to map the video RAM.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/mm.c        |   23 +++++++++++++++++++++++
 xen/include/asm-arm/mm.h |    3 +++
 2 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 68ee9da..418a414 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -376,6 +376,29 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
 }
 
+/* Map the physical memory range [start, end) at the virtual address
+ * virt_start in 2MB chunks. start and virt_start must be 2MB
+ * aligned.
+ */
+void map_phys_range(paddr_t start, paddr_t end,
+        unsigned long virt_start, unsigned attributes)
+{
+    ASSERT(!(start & ((1 << 21) - 1)));
+    ASSERT(!(virt_start & ((1 << 21) - 1)));
+
+    while ( start < end )
+    {
+        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
+        e.pt.ai = attributes;
+        write_pte(xen_second + second_table_offset(virt_start), e);
+        
+        start += (1<<21);
+        virt_start += (1<<21);
+    }
+
+    flush_xen_data_tlb();
+}
+
 enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
 static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
 {
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 3549c83..a11f20b 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -152,6 +152,9 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
 /* Remove a mapping from a fixmap entry */
 extern void clear_fixmap(unsigned map);
+/* Map a 2MB-aligned physical range into virtual memory. */
+extern void map_phys_range(paddr_t start, paddr_t end,
+		unsigned long virt_start, unsigned attributes);
 
 
 #define mfn_valid(mfn)        ({                                              \
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJaC-0006Ap-4J; Wed, 05 Dec 2012 18:20:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJaA-00069M-7Q
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:30 +0000
Received: from [85.158.143.35:55704] by server-3.bemta-4.messagelabs.com id
	F4/41-06841-D609FB05; Wed, 05 Dec 2012 18:20:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354731622!16329655!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32011 invoked from network); 5 Dec 2012 18:20:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216518449"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-1x;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:44 +0000
Message-ID: <1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2/6] xen: infrastructure to have cross-platform
	video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- introduce a new HAS_VIDEO config variable;
- build xen/drivers/video/font* if HAS_VIDEO;
- rename vga_puts to video_puts;
- rename vga_init to video_init;
- rename vga_endboot to video_endboot.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Rules.mk        |    1 +
 xen/arch/x86/Rules.mk        |    1 +
 xen/drivers/Makefile         |    2 +-
 xen/drivers/char/console.c   |   12 ++++++------
 xen/drivers/video/Makefile   |   10 +++++-----
 xen/drivers/video/vesa.c     |    4 ++--
 xen/drivers/video/vga.c      |   12 ++++++------
 xen/include/asm-x86/config.h |    1 +
 xen/include/xen/vga.h        |    9 +--------
 xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
 10 files changed, 48 insertions(+), 28 deletions(-)
 create mode 100644 xen/include/xen/video.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..fa9f9c1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -7,6 +7,7 @@
 #
 
 HAS_DEVICE_TREE := y
+HAS_VIDEO := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 963850f..0a9d68d 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -3,6 +3,7 @@
 
 HAS_ACPI := y
 HAS_VGA  := y
+HAS_VIDEO  := y
 HAS_CPUFREQ := y
 HAS_PCI := y
 HAS_PASSTHROUGH := y
diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
index 7239375..9c70f20 100644
--- a/xen/drivers/Makefile
+++ b/xen/drivers/Makefile
@@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
 subdir-$(HAS_PCI) += pci
 subdir-$(HAS_PASSTHROUGH) += passthrough
 subdir-$(HAS_ACPI) += acpi
-subdir-$(HAS_VGA) += video
+subdir-$(HAS_VIDEO) += video
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 9e1adb5..683271e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -21,7 +21,7 @@
 #include <xen/delay.h>
 #include <xen/guest_access.h>
 #include <xen/shutdown.h>
-#include <xen/vga.h>
+#include <xen/video.h>
 #include <xen/kexec.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
@@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
     buf[sofar] = '\0';
 
     sercon_puts(buf);
-    vga_puts(buf);
+    video_puts(buf);
 
     free_xenheap_pages(buf, order);
 }
@@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
         spin_lock_irq(&console_lock);
 
         sercon_puts(kbuf);
-        vga_puts(kbuf);
+        video_puts(kbuf);
 
         if ( opt_console_to_ring )
         {
@@ -464,7 +464,7 @@ static void __putstr(const char *str)
     ASSERT(spin_is_locked(&console_lock));
 
     sercon_puts(str);
-    vga_puts(str);
+    video_puts(str);
 
     if ( !console_locks_busted )
     {
@@ -592,7 +592,7 @@ void __init console_init_preirq(void)
         if ( *p == ',' )
             p++;
         if ( !strncmp(p, "vga", 3) )
-            vga_init();
+            video_init();
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
@@ -694,7 +694,7 @@ void __init console_endboot(void)
         printk("\n");
     }
 
-    vga_endboot();
+    video_endboot();
 
     /*
      * If user specifies so, we fool the switch routine to redirect input
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 6c3e5b4..2993c39 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -1,5 +1,5 @@
-obj-y := vga.o
-obj-$(CONFIG_X86) += font_8x14.o
-obj-$(CONFIG_X86) += font_8x16.o
-obj-$(CONFIG_X86) += font_8x8.o
-obj-$(CONFIG_X86) += vesa.o
+obj-$(HAS_VGA) := vga.o
+obj-$(HAS_VIDEO) += font_8x14.o
+obj-$(HAS_VIDEO) += font_8x16.o
+obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index 47cd3ed..759355f 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -109,7 +109,7 @@ void __init vesa_init(void)
 
     lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
 
-    vga_puts = vesa_redraw_puts;
+    video_puts = vesa_redraw_puts;
 
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
@@ -194,7 +194,7 @@ void __init vesa_endboot(bool_t keep)
     if ( keep )
     {
         xpos = 0;
-        vga_puts = vesa_scroll_puts;
+        video_puts = vesa_scroll_puts;
     }
     else
     {
diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
index a98bd00..40e5963 100644
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -21,7 +21,7 @@ static unsigned char *video;
 
 static void vga_text_puts(const char *s);
 static void vga_noop_puts(const char *s) {}
-void (*vga_puts)(const char *) = vga_noop_puts;
+void (*video_puts)(const char *) = vga_noop_puts;
 
 /*
  * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
@@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
 #define vesa_endboot(x)   ((void)0)
 #endif
 
-void __init vga_init(void)
+void __init video_init(void)
 {
     char *p;
 
@@ -85,7 +85,7 @@ void __init vga_init(void)
         columns = vga_console_info.u.text_mode_3.columns;
         lines   = vga_console_info.u.text_mode_3.rows;
         memset(video, 0, columns * lines * 2);
-        vga_puts = vga_text_puts;
+        video_puts = vga_text_puts;
         break;
     case XEN_VGATYPE_VESA_LFB:
     case XEN_VGATYPE_EFI_LFB:
@@ -97,16 +97,16 @@ void __init vga_init(void)
     }
 }
 
-void __init vga_endboot(void)
+void __init video_endboot(void)
 {
-    if ( vga_puts == vga_noop_puts )
+    if ( video_puts == vga_noop_puts )
         return;
 
     printk("Xen is %s VGA console.\n",
            vgacon_keep ? "keeping" : "relinquishing");
 
     if ( !vgacon_keep )
-        vga_puts = vga_noop_puts;
+        video_puts = vga_noop_puts;
     else
     {
         int bus, devfn;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index b69dbe6..2169627 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -38,6 +38,7 @@
 #define CONFIG_ACPI_CSTATE 1
 
 #define CONFIG_VGA 1
+#define CONFIG_VIDEO 1
 
 #define CONFIG_HOTPLUG 1
 #define CONFIG_HOTPLUG_CPU 1
diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
index cc690b9..f72b63d 100644
--- a/xen/include/xen/vga.h
+++ b/xen/include/xen/vga.h
@@ -9,17 +9,10 @@
 #ifndef _XEN_VGA_H
 #define _XEN_VGA_H
 
-#include <public/xen.h>
+#include <xen/video.h>
 
 #ifdef CONFIG_VGA
 extern struct xen_vga_console_info vga_console_info;
-void vga_init(void);
-void vga_endboot(void);
-extern void (*vga_puts)(const char *);
-#else
-#define vga_init()    ((void)0)
-#define vga_endboot() ((void)0)
-#define vga_puts(s)   ((void)0)
 #endif
 
 #endif /* _XEN_VGA_H */
diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
new file mode 100644
index 0000000..e9bc92e
--- /dev/null
+++ b/xen/include/xen/video.h
@@ -0,0 +1,24 @@
+/*
+ *  video.h
+ *
+ *  This file is subject to the terms and conditions of the GNU General Public
+ *  License.  See the file COPYING in the main directory of this archive
+ *  for more details.
+ */
+
+#ifndef _XEN_VIDEO_H
+#define _XEN_VIDEO_H
+
+#include <public/xen.h>
+
+#ifdef CONFIG_VIDEO
+void video_init(void);
+extern void (*video_puts)(const char *);
+void video_endboot(void);
+#else
+#define video_init()    ((void)0)
+#define video_puts(s)   ((void)0)
+#define video_endboot() ((void)0)
+#endif
+
+#endif /* _XEN_VIDEO_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:20:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJaC-0006Ap-4J; Wed, 05 Dec 2012 18:20:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJaA-00069M-7Q
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:20:30 +0000
Received: from [85.158.143.35:55704] by server-3.bemta-4.messagelabs.com id
	F4/41-06841-D609FB05; Wed, 05 Dec 2012 18:20:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354731622!16329655!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32011 invoked from network); 5 Dec 2012 18:20:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:20:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216518449"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:19:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:19:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJZZ-00005I-1x;
	Wed, 05 Dec 2012 18:19:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:19:44 +0000
Message-ID: <1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2/6] xen: infrastructure to have cross-platform
	video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- introduce a new HAS_VIDEO config variable;
- build xen/drivers/video/font* if HAS_VIDEO;
- rename vga_puts to video_puts;
- rename vga_init to video_init;
- rename vga_endboot to video_endboot.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Rules.mk        |    1 +
 xen/arch/x86/Rules.mk        |    1 +
 xen/drivers/Makefile         |    2 +-
 xen/drivers/char/console.c   |   12 ++++++------
 xen/drivers/video/Makefile   |   10 +++++-----
 xen/drivers/video/vesa.c     |    4 ++--
 xen/drivers/video/vga.c      |   12 ++++++------
 xen/include/asm-x86/config.h |    1 +
 xen/include/xen/vga.h        |    9 +--------
 xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
 10 files changed, 48 insertions(+), 28 deletions(-)
 create mode 100644 xen/include/xen/video.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..fa9f9c1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -7,6 +7,7 @@
 #
 
 HAS_DEVICE_TREE := y
+HAS_VIDEO := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 963850f..0a9d68d 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -3,6 +3,7 @@
 
 HAS_ACPI := y
 HAS_VGA  := y
+HAS_VIDEO  := y
 HAS_CPUFREQ := y
 HAS_PCI := y
 HAS_PASSTHROUGH := y
diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
index 7239375..9c70f20 100644
--- a/xen/drivers/Makefile
+++ b/xen/drivers/Makefile
@@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
 subdir-$(HAS_PCI) += pci
 subdir-$(HAS_PASSTHROUGH) += passthrough
 subdir-$(HAS_ACPI) += acpi
-subdir-$(HAS_VGA) += video
+subdir-$(HAS_VIDEO) += video
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 9e1adb5..683271e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -21,7 +21,7 @@
 #include <xen/delay.h>
 #include <xen/guest_access.h>
 #include <xen/shutdown.h>
-#include <xen/vga.h>
+#include <xen/video.h>
 #include <xen/kexec.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
@@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
     buf[sofar] = '\0';
 
     sercon_puts(buf);
-    vga_puts(buf);
+    video_puts(buf);
 
     free_xenheap_pages(buf, order);
 }
@@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
         spin_lock_irq(&console_lock);
 
         sercon_puts(kbuf);
-        vga_puts(kbuf);
+        video_puts(kbuf);
 
         if ( opt_console_to_ring )
         {
@@ -464,7 +464,7 @@ static void __putstr(const char *str)
     ASSERT(spin_is_locked(&console_lock));
 
     sercon_puts(str);
-    vga_puts(str);
+    video_puts(str);
 
     if ( !console_locks_busted )
     {
@@ -592,7 +592,7 @@ void __init console_init_preirq(void)
         if ( *p == ',' )
             p++;
         if ( !strncmp(p, "vga", 3) )
-            vga_init();
+            video_init();
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
@@ -694,7 +694,7 @@ void __init console_endboot(void)
         printk("\n");
     }
 
-    vga_endboot();
+    video_endboot();
 
     /*
      * If user specifies so, we fool the switch routine to redirect input
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 6c3e5b4..2993c39 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -1,5 +1,5 @@
-obj-y := vga.o
-obj-$(CONFIG_X86) += font_8x14.o
-obj-$(CONFIG_X86) += font_8x16.o
-obj-$(CONFIG_X86) += font_8x8.o
-obj-$(CONFIG_X86) += vesa.o
+obj-$(HAS_VGA) := vga.o
+obj-$(HAS_VIDEO) += font_8x14.o
+obj-$(HAS_VIDEO) += font_8x16.o
+obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index 47cd3ed..759355f 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -109,7 +109,7 @@ void __init vesa_init(void)
 
     lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
 
-    vga_puts = vesa_redraw_puts;
+    video_puts = vesa_redraw_puts;
 
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
@@ -194,7 +194,7 @@ void __init vesa_endboot(bool_t keep)
     if ( keep )
     {
         xpos = 0;
-        vga_puts = vesa_scroll_puts;
+        video_puts = vesa_scroll_puts;
     }
     else
     {
diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
index a98bd00..40e5963 100644
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -21,7 +21,7 @@ static unsigned char *video;
 
 static void vga_text_puts(const char *s);
 static void vga_noop_puts(const char *s) {}
-void (*vga_puts)(const char *) = vga_noop_puts;
+void (*video_puts)(const char *) = vga_noop_puts;
 
 /*
  * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
@@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
 #define vesa_endboot(x)   ((void)0)
 #endif
 
-void __init vga_init(void)
+void __init video_init(void)
 {
     char *p;
 
@@ -85,7 +85,7 @@ void __init vga_init(void)
         columns = vga_console_info.u.text_mode_3.columns;
         lines   = vga_console_info.u.text_mode_3.rows;
         memset(video, 0, columns * lines * 2);
-        vga_puts = vga_text_puts;
+        video_puts = vga_text_puts;
         break;
     case XEN_VGATYPE_VESA_LFB:
     case XEN_VGATYPE_EFI_LFB:
@@ -97,16 +97,16 @@ void __init vga_init(void)
     }
 }
 
-void __init vga_endboot(void)
+void __init video_endboot(void)
 {
-    if ( vga_puts == vga_noop_puts )
+    if ( video_puts == vga_noop_puts )
         return;
 
     printk("Xen is %s VGA console.\n",
            vgacon_keep ? "keeping" : "relinquishing");
 
     if ( !vgacon_keep )
-        vga_puts = vga_noop_puts;
+        video_puts = vga_noop_puts;
     else
     {
         int bus, devfn;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index b69dbe6..2169627 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -38,6 +38,7 @@
 #define CONFIG_ACPI_CSTATE 1
 
 #define CONFIG_VGA 1
+#define CONFIG_VIDEO 1
 
 #define CONFIG_HOTPLUG 1
 #define CONFIG_HOTPLUG_CPU 1
diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
index cc690b9..f72b63d 100644
--- a/xen/include/xen/vga.h
+++ b/xen/include/xen/vga.h
@@ -9,17 +9,10 @@
 #ifndef _XEN_VGA_H
 #define _XEN_VGA_H
 
-#include <public/xen.h>
+#include <xen/video.h>
 
 #ifdef CONFIG_VGA
 extern struct xen_vga_console_info vga_console_info;
-void vga_init(void);
-void vga_endboot(void);
-extern void (*vga_puts)(const char *);
-#else
-#define vga_init()    ((void)0)
-#define vga_endboot() ((void)0)
-#define vga_puts(s)   ((void)0)
 #endif
 
 #endif /* _XEN_VGA_H */
diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
new file mode 100644
index 0000000..e9bc92e
--- /dev/null
+++ b/xen/include/xen/video.h
@@ -0,0 +1,24 @@
+/*
+ *  video.h
+ *
+ *  This file is subject to the terms and conditions of the GNU General Public
+ *  License.  See the file COPYING in the main directory of this archive
+ *  for more details.
+ */
+
+#ifndef _XEN_VIDEO_H
+#define _XEN_VIDEO_H
+
+#include <public/xen.h>
+
+#ifdef CONFIG_VIDEO
+void video_init(void);
+extern void (*video_puts)(const char *);
+void video_endboot(void);
+#else
+#define video_init()    ((void)0)
+#define video_puts(s)   ((void)0)
+#define video_endboot() ((void)0)
+#endif
+
+#endif /* _XEN_VIDEO_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:26:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJg6-00078r-2f; Wed, 05 Dec 2012 18:26:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgJg5-00078l-Cr
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:26:37 +0000
Received: from [193.109.254.147:37596] by server-4.bemta-14.messagelabs.com id
	E8/48-18856-CD19FB05; Wed, 05 Dec 2012 18:26:36 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354731982!9342928!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21671 invoked from network); 5 Dec 2012 18:26:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:26:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46722468"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:26:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:26:21 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgJfp-0000AP-A4;
	Wed, 05 Dec 2012 18:26:21 +0000
Message-ID: <50BF91CD.8020103@citrix.com>
Date: Wed, 5 Dec 2012 18:26:21 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.4.6
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] Recursive locking in Xen (in reference to NMI/MCE path
	audit)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

While auditing the NMI/MCE paths, I have encountered some issues with
recursive locking in Xen, uncovered by the console_lock being used
intermittently as a regular lock and as a recursive lock.

The comment in spinlock.h is unclear as to whether mixing recursive and
non-recursive calls on the same spinlock is valid.  If such mixing is
genuinely not valid, then surely regular spinlocks and recursive
spinlocks should be separate types, to let the compiler work for us.

If mixing calls is valid, then there appear to be problems with nesting
recursive and regular calls, as either ordering of spin_lock() and
spin_lock_recursive() will deadlock.

As a result, I am wondering which of the above to fix?

There are very few users of recursive locks (domain lock, domain
page_alloc lock, mm (pod and paging) locks and console lock).  The
console and page_alloc locks appear to have mixed callers, while the
domain and mm locks appear to have strictly recursive callers.

It seems to me that either we need to make the two locks different
types, or use ASSERT()s to ensure we don't nest spin_lock() and
spin_lock_recursive() calls.

In addition to the above problems, I find myself needing to implement
spin_lock_recursive_irq{,save,restore}() variants.  The implementations
themselves are not too hard to do, but I did wonder whether we might
want to have extra ASSERT()s to remove potential deadlock scenarios from
the NMI/MCE paths.  The ASSERT()s would have to be along the lines of
"assert this is exclusively a recursive lock" or "assert this is a
per-cpu regular spinlock which is never referenced outside of this
specific NMI/MCE path".  The possible implementation of these
pseudo-asserts would differ depending on the outcome of the query.

Any other comments or queries?

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:34:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJo1-0007UU-1X; Wed, 05 Dec 2012 18:34:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgJnz-0007UP-VX
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:34:48 +0000
Received: from [85.158.139.83:17704] by server-12.bemta-5.messagelabs.com id
	78/7B-02886-7C39FB05; Wed, 05 Dec 2012 18:34:47 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354732485!24580292!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13243 invoked from network); 5 Dec 2012 18:34:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:34:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216520421"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:34:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:34:40 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24] helo=[127.0.1.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgJns-0000IT-2i;
	Wed, 05 Dec 2012 18:34:40 +0000
MIME-Version: 1.0
X-Mercurial-Node: 21504ec56304ada2f093c30b290ac33c28381ae1
Message-ID: <21504ec56304ada2f093.1354732128@elijah>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 5 Dec 2012 18:28:48 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: george.dunlap@eu.citrix.com
Subject: [Xen-devel] [PATCH] libxl: Make an internal function explicitly
 check existence of expected paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User George Dunlap <george.dunlap@eu.citrix.com>
# Date 1354732124 0
# Node ID 21504ec56304ada2f093c30b290ac33c28381ae1
# Parent  670b07e8d7382229639af0d1df30071e6c1ebb19
libxl: Make an internal function explicitly check existence of expected paths

libxl__device_disk_from_xs_be() was failing without error for some
missing xenstore nodes in a backend, while assuming (without checking)
that other nodes were valid, causing a crash when another internal
error wrote these nodes in the wrong place.

Make this function consistent by:
* Checking the existence of all nodes before using them
* Choosing a default only when the node is not written by device_disk_add()
* Failing with a log message if any node written by device_disk_add() is not present
* Returning an error on failure
* Disposing of the structure with libxl_device_disk_dispose() before returning an error

Also make the callers of the function pay attention to the error and
behave appropriately.  In the case of libxl__append_disk_list_of_type(),
this means only incrementing *ndisks as the disk structures are
successfully initialized.

v3:
 * Add a failure path in libxl__device_disk_from_xs_be() to free allocations
   from half-initialized disks
 * Modify libxl__append_disk_list_of_type to only update *ndisks as they are
   completed.  This will allow the only caller (libxl_device_disk_list()) to
   clean up the completed disks properly on an error.
v2:
 * Remove "Internal error", as the failure will most likely look internal
 * Use LOG(ERROR...) macros for increased prettiness

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2169,9 +2169,9 @@ void libxl__device_disk_add(libxl__egc *
     device_disk_add(egc, domid, disk, aodev, NULL, NULL);
 }
 
-static void libxl__device_disk_from_xs_be(libxl__gc *gc,
-                                          const char *be_path,
-                                          libxl_device_disk *disk)
+static int libxl__device_disk_from_xs_be(libxl__gc *gc,
+                                         const char *be_path,
+                                         libxl_device_disk *disk)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     unsigned int len;
@@ -2179,6 +2179,7 @@ static void libxl__device_disk_from_xs_b
 
     libxl_device_disk_init(disk);
 
+    /* "params" may not be present; but everything else must be. */
     tmp = xs_read(ctx->xsh, XBT_NULL,
                   libxl__sprintf(gc, "%s/params", be_path), &len);
     if (tmp && strchr(tmp, ':')) {
@@ -2187,21 +2188,36 @@ static void libxl__device_disk_from_xs_b
     } else {
         disk->pdev_path = tmp;
     }
-    libxl_string_to_backend(ctx,
-                        libxl__xs_read(gc, XBT_NULL,
-                                       libxl__sprintf(gc, "%s/type", be_path)),
-                        &(disk->backend));
+
+    
+    tmp = libxl__xs_read(gc, XBT_NULL,
+                         libxl__sprintf(gc, "%s/type", be_path));
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/type", be_path);
+        goto cleanup;
+    }
+    libxl_string_to_backend(ctx, tmp, &(disk->backend));
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          libxl__sprintf(gc, "%s/dev", be_path), &len);
+    if (!disk->vdev) {
+        LOG(ERROR, "Missing xenstore node %s/dev", be_path);
+        goto cleanup;
+    }
+
     tmp = libxl__xs_read(gc, XBT_NULL, libxl__sprintf
                          (gc, "%s/removable", be_path));
-
-    if (tmp)
-        disk->removable = atoi(tmp);
-    else
-        disk->removable = 0;
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/removable", be_path);
+        goto cleanup;
+    }
+    disk->removable = atoi(tmp);
 
     tmp = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/mode", be_path));
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/mode", be_path);
+        goto cleanup;
+    }
     if (!strcmp(tmp, "w"))
         disk->readwrite = 1;
     else
@@ -2209,9 +2225,18 @@ static void libxl__device_disk_from_xs_b
 
     tmp = libxl__xs_read(gc, XBT_NULL,
                          libxl__sprintf(gc, "%s/device-type", be_path));
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/device-type", be_path);
+        goto cleanup;
+    }
     disk->is_cdrom = !strcmp(tmp, "cdrom");
 
     disk->format = LIBXL_DISK_FORMAT_UNKNOWN;
+
+    return 0;
+cleanup:
+    libxl_device_disk_dispose(disk);
+    return ERROR_FAIL;
 }
 
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
@@ -2237,9 +2262,7 @@ int libxl_vdev_to_device_disk(libxl_ctx 
     if (!path)
         goto out;
 
-    libxl__device_disk_from_xs_be(gc, path, disk);
-
-    rc = 0;
+    rc = libxl__device_disk_from_xs_be(gc, path, disk);
 out:
     GC_FREE;
     return rc;
@@ -2256,6 +2279,8 @@ static int libxl__append_disk_list_of_ty
     char **dir = NULL;
     unsigned int n = 0;
     libxl_device_disk *pdisk = NULL, *pdisk_end = NULL;
+    int rc=0;
+    int initial_disks = *ndisks;
 
     be_path = libxl__sprintf(gc, "%s/backend/%s/%d",
                              libxl__xs_get_dompath(gc, 0), type, domid);
@@ -2266,17 +2291,19 @@ static int libxl__append_disk_list_of_ty
         if (tmp == NULL)
             return ERROR_NOMEM;
         *disks = tmp;
-        pdisk = *disks + *ndisks;
-        *ndisks += n;
-        pdisk_end = *disks + *ndisks;
+        pdisk = *disks + initial_disks;
+        pdisk_end = *disks + initial_disks + n;
         for (; pdisk < pdisk_end; pdisk++, dir++) {
             const char *p;
             p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
-            libxl__device_disk_from_xs_be(gc, p, pdisk);
+            if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
+                goto out;
             pdisk->backend_domid = 0;
+            *ndisks += 1;
         }
     }
-    return 0;
+out:
+    return rc;
 }
 
 libxl_device_disk *libxl_device_disk_list(libxl_ctx *ctx, uint32_t domid, int *num)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:35:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJoF-0007W2-EB; Wed, 05 Dec 2012 18:35:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgJoD-0007VP-Mo
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:35:02 +0000
Received: from [85.158.137.99:56418] by server-6.bemta-3.messagelabs.com id
	3A/08-28265-4D39FB05; Wed, 05 Dec 2012 18:35:00 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354732498!12588907!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14248 invoked from network); 5 Dec 2012 18:35:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:35:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46723821"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:34:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:34:58 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24] helo=[127.0.1.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgJo9-0000IZ-Lz;
	Wed, 05 Dec 2012 18:34:57 +0000
MIME-Version: 1.0
X-Mercurial-Node: 21504ec56304ada2f093c30b290ac33c28381ae1
Message-ID: <21504ec56304ada2f093.1354732146@elijah>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 5 Dec 2012 18:29:06 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: george.dunlap@eu.citrix.com
Subject: [Xen-devel] [PATCH v3] libxl: Make an internal function explicitly
 check existence of expected paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User George Dunlap <george.dunlap@eu.citrix.com>
# Date 1354732124 0
# Node ID 21504ec56304ada2f093c30b290ac33c28381ae1
# Parent  670b07e8d7382229639af0d1df30071e6c1ebb19
libxl: Make an internal function explicitly check existence of expected paths

libxl__device_disk_from_xs_be() was failing without error for some
missing xenstore nodes in a backend, while assuming (without checking)
that other nodes were valid, causing a crash when another internal
error wrote these nodes in the wrong place.

Make this function consistent by:
* Checking the existence of all nodes before using them
* Choosing a default only when the node is not written in device_disk_add()
* Failing with a log message if any node written by device_disk_add() is not present
* Returning an error on failure
* Disposing of the structure with libxl_device_disk_dispose() before returning

Also make the callers of the function pay attention to the error and
behave appropriately.  In the case of libxl__append_disk_list_of_type(),
this means only incrementing *ndisks as the disk structures are
successfully initialized.

v3:
 * Add a failure path in libxl__device_disk_from_xs_be() to free allocations
   from half-initialized disks
 * Modify libxl__append_disk_list_of_type() to update *ndisks only as disks are
   completed.  This allows the only caller (libxl_device_disk_list()) to
   clean up the completed disks properly on an error.
v2:
 * Remove "Internal error", as the failure will most likely look internal
 * Use LOG(ERROR...) macros for increased prettiness

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2169,9 +2169,9 @@ void libxl__device_disk_add(libxl__egc *
     device_disk_add(egc, domid, disk, aodev, NULL, NULL);
 }
 
-static void libxl__device_disk_from_xs_be(libxl__gc *gc,
-                                          const char *be_path,
-                                          libxl_device_disk *disk)
+static int libxl__device_disk_from_xs_be(libxl__gc *gc,
+                                         const char *be_path,
+                                         libxl_device_disk *disk)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     unsigned int len;
@@ -2179,6 +2179,7 @@ static void libxl__device_disk_from_xs_b
 
     libxl_device_disk_init(disk);
 
+    /* "params" may not be present; but everything else must be. */
     tmp = xs_read(ctx->xsh, XBT_NULL,
                   libxl__sprintf(gc, "%s/params", be_path), &len);
     if (tmp && strchr(tmp, ':')) {
@@ -2187,21 +2188,36 @@ static void libxl__device_disk_from_xs_b
     } else {
         disk->pdev_path = tmp;
     }
-    libxl_string_to_backend(ctx,
-                        libxl__xs_read(gc, XBT_NULL,
-                                       libxl__sprintf(gc, "%s/type", be_path)),
-                        &(disk->backend));
+
+    
+    tmp = libxl__xs_read(gc, XBT_NULL,
+                         libxl__sprintf(gc, "%s/type", be_path));
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/type", be_path);
+        goto cleanup;
+    }
+    libxl_string_to_backend(ctx, tmp, &(disk->backend));
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          libxl__sprintf(gc, "%s/dev", be_path), &len);
+    if (!disk->vdev) {
+        LOG(ERROR, "Missing xenstore node %s/dev", be_path);
+        goto cleanup;
+    }
+
     tmp = libxl__xs_read(gc, XBT_NULL, libxl__sprintf
                          (gc, "%s/removable", be_path));
-
-    if (tmp)
-        disk->removable = atoi(tmp);
-    else
-        disk->removable = 0;
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/removable", be_path);
+        goto cleanup;
+    }
+    disk->removable = atoi(tmp);
 
     tmp = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/mode", be_path));
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/mode", be_path);
+        goto cleanup;
+    }
     if (!strcmp(tmp, "w"))
         disk->readwrite = 1;
     else
@@ -2209,9 +2225,18 @@ static void libxl__device_disk_from_xs_b
 
     tmp = libxl__xs_read(gc, XBT_NULL,
                          libxl__sprintf(gc, "%s/device-type", be_path));
+    if (!tmp) {
+        LOG(ERROR, "Missing xenstore node %s/device-type", be_path);
+        goto cleanup;
+    }
     disk->is_cdrom = !strcmp(tmp, "cdrom");
 
     disk->format = LIBXL_DISK_FORMAT_UNKNOWN;
+
+    return 0;
+cleanup:
+    libxl_device_disk_dispose(disk);
+    return ERROR_FAIL;
 }
 
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
@@ -2237,9 +2262,7 @@ int libxl_vdev_to_device_disk(libxl_ctx 
     if (!path)
         goto out;
 
-    libxl__device_disk_from_xs_be(gc, path, disk);
-
-    rc = 0;
+    rc = libxl__device_disk_from_xs_be(gc, path, disk);
 out:
     GC_FREE;
     return rc;
@@ -2256,6 +2279,8 @@ static int libxl__append_disk_list_of_ty
     char **dir = NULL;
     unsigned int n = 0;
     libxl_device_disk *pdisk = NULL, *pdisk_end = NULL;
+    int rc=0;
+    int initial_disks = *ndisks;
 
     be_path = libxl__sprintf(gc, "%s/backend/%s/%d",
                              libxl__xs_get_dompath(gc, 0), type, domid);
@@ -2266,17 +2291,19 @@ static int libxl__append_disk_list_of_ty
         if (tmp == NULL)
             return ERROR_NOMEM;
         *disks = tmp;
-        pdisk = *disks + *ndisks;
-        *ndisks += n;
-        pdisk_end = *disks + *ndisks;
+        pdisk = *disks + initial_disks;
+        pdisk_end = *disks + initial_disks + n;
         for (; pdisk < pdisk_end; pdisk++, dir++) {
             const char *p;
             p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
-            libxl__device_disk_from_xs_be(gc, p, pdisk);
+            if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
+                goto out;
             pdisk->backend_domid = 0;
+            *ndisks += 1;
         }
     }
-    return 0;
+out:
+    return rc;
 }
 
 libxl_device_disk *libxl_device_disk_list(libxl_ctx *ctx, uint32_t domid, int *num)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:35:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJot-0007bX-1l; Wed, 05 Dec 2012 18:35:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TgJor-0007bE-Bj
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:35:41 +0000
Received: from [85.158.137.99:62518] by server-15.bemta-3.messagelabs.com id
	24/B7-23779-CF39FB05; Wed, 05 Dec 2012 18:35:40 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354732538!14912679!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28110 invoked from network); 5 Dec 2012 18:35:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:35:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46723885"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:35:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:35:37 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TgJon-0000JE-G1	for
	xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:35:37 +0000
Message-ID: <50BF929A.1070606@eu.citrix.com>
Date: Wed, 5 Dec 2012 18:29:46 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
References: <21504ec56304ada2f093.1354732128@elijah>
In-Reply-To: <21504ec56304ada2f093.1354732128@elijah>
Subject: Re: [Xen-devel] [PATCH] libxl: Make an internal function explicitly
 check existence of expected paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've sent another one with a "v3" to help make it clear what's going on...

  -George

On 05/12/12 18:28, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354732124 0
> # Node ID 21504ec56304ada2f093c30b290ac33c28381ae1
> # Parent  670b07e8d7382229639af0d1df30071e6c1ebb19
> libxl: Make an internal function explicitly check existence of expected paths
>
> libxl__device_disk_from_xs_be() was failing without error for some
> missing xenstore nodes in a backend, while assuming (without checking)
> that other nodes were valid, causing a crash when another internal
> error wrote these nodes in the wrong place.
>
> Make this function consistent by:
> * Checking the existence of all nodes before using them
> * Choosing a default only when the node is not written in device_disk_add()
> * Failing with a log message if any node written by device_disk_add() is not present
> * Returning an error on failure
> * Disposing of the structure with libxl_device_disk_dispose() before returning
>
> Also make the callers of the function pay attention to the error and
> behave appropriately.  In the case of libxl__append_disk_list_of_type(),
> this means only incrementing *ndisks as the disk structures are
> successfully initialized.
>
> v3:
>   * Add a failure path in libxl__device_disk_from_xs_be() to free allocations
>     from half-initialized disks
>   * Modify libxl__append_disk_list_of_type() to update *ndisks only as disks are
>     completed.  This allows the only caller (libxl_device_disk_list()) to
>     clean up the completed disks properly on an error.
> v2:
>   * Remove "Internal error", as the failure will most likely look internal
>   * Use LOG(ERROR...) macros for increased prettiness
>
> Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
>
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -2169,9 +2169,9 @@ void libxl__device_disk_add(libxl__egc *
>       device_disk_add(egc, domid, disk, aodev, NULL, NULL);
>   }
>   
> -static void libxl__device_disk_from_xs_be(libxl__gc *gc,
> -                                          const char *be_path,
> -                                          libxl_device_disk *disk)
> +static int libxl__device_disk_from_xs_be(libxl__gc *gc,
> +                                         const char *be_path,
> +                                         libxl_device_disk *disk)
>   {
>       libxl_ctx *ctx = libxl__gc_owner(gc);
>       unsigned int len;
> @@ -2179,6 +2179,7 @@ static void libxl__device_disk_from_xs_b
>   
>       libxl_device_disk_init(disk);
>   
> +    /* "params" may not be present; but everything else must be. */
>       tmp = xs_read(ctx->xsh, XBT_NULL,
>                     libxl__sprintf(gc, "%s/params", be_path), &len);
>       if (tmp && strchr(tmp, ':')) {
> @@ -2187,21 +2188,36 @@ static void libxl__device_disk_from_xs_b
>       } else {
>           disk->pdev_path = tmp;
>       }
> -    libxl_string_to_backend(ctx,
> -                        libxl__xs_read(gc, XBT_NULL,
> -                                       libxl__sprintf(gc, "%s/type", be_path)),
> -                        &(disk->backend));
> +
> +
> +    tmp = libxl__xs_read(gc, XBT_NULL,
> +                         libxl__sprintf(gc, "%s/type", be_path));
> +    if (!tmp) {
> +        LOG(ERROR, "Missing xenstore node %s/type", be_path);
> +        goto cleanup;
> +    }
> +    libxl_string_to_backend(ctx, tmp, &(disk->backend));
> +
>       disk->vdev = xs_read(ctx->xsh, XBT_NULL,
>                            libxl__sprintf(gc, "%s/dev", be_path), &len);
> +    if (!disk->vdev) {
> +        LOG(ERROR, "Missing xenstore node %s/dev", be_path);
> +        goto cleanup;
> +    }
> +
>       tmp = libxl__xs_read(gc, XBT_NULL, libxl__sprintf
>                            (gc, "%s/removable", be_path));
> -
> -    if (tmp)
> -        disk->removable = atoi(tmp);
> -    else
> -        disk->removable = 0;
> +    if (!tmp) {
> +        LOG(ERROR, "Missing xenstore node %s/removable", be_path);
> +        goto cleanup;
> +    }
> +    disk->removable = atoi(tmp);
>   
>       tmp = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/mode", be_path));
> +    if (!tmp) {
> +        LOG(ERROR, "Missing xenstore node %s/mode", be_path);
> +        goto cleanup;
> +    }
>       if (!strcmp(tmp, "w"))
>           disk->readwrite = 1;
>       else
> @@ -2209,9 +2225,18 @@ static void libxl__device_disk_from_xs_b
>   
>       tmp = libxl__xs_read(gc, XBT_NULL,
>                            libxl__sprintf(gc, "%s/device-type", be_path));
> +    if (!tmp) {
> +        LOG(ERROR, "Missing xenstore node %s/device-type", be_path);
> +        goto cleanup;
> +    }
>       disk->is_cdrom = !strcmp(tmp, "cdrom");
>   
>       disk->format = LIBXL_DISK_FORMAT_UNKNOWN;
> +
> +    return 0;
> +cleanup:
> +    libxl_device_disk_dispose(disk);
> +    return ERROR_FAIL;
>   }
>   
>   int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
> @@ -2237,9 +2262,7 @@ int libxl_vdev_to_device_disk(libxl_ctx
>       if (!path)
>           goto out;
>   
> -    libxl__device_disk_from_xs_be(gc, path, disk);
> -
> -    rc = 0;
> +    rc = libxl__device_disk_from_xs_be(gc, path, disk);
>   out:
>       GC_FREE;
>       return rc;
> @@ -2256,6 +2279,8 @@ static int libxl__append_disk_list_of_ty
>       char **dir = NULL;
>       unsigned int n = 0;
>       libxl_device_disk *pdisk = NULL, *pdisk_end = NULL;
> +    int rc=0;
> +    int initial_disks = *ndisks;
>   
>       be_path = libxl__sprintf(gc, "%s/backend/%s/%d",
>                                libxl__xs_get_dompath(gc, 0), type, domid);
> @@ -2266,17 +2291,19 @@ static int libxl__append_disk_list_of_ty
>           if (tmp == NULL)
>               return ERROR_NOMEM;
>           *disks = tmp;
> -        pdisk = *disks + *ndisks;
> -        *ndisks += n;
> -        pdisk_end = *disks + *ndisks;
> +        pdisk = *disks + initial_disks;
> +        pdisk_end = *disks + initial_disks + n;
>           for (; pdisk < pdisk_end; pdisk++, dir++) {
>               const char *p;
>               p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
> -            libxl__device_disk_from_xs_be(gc, p, pdisk);
> +            if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
> +                goto out;
>               pdisk->backend_domid = 0;
> +            *ndisks += 1;
>           }
>       }
> -    return 0;
> +out:
> +    return rc;
>   }
>   
>   libxl_device_disk *libxl_device_disk_list(libxl_ctx *ctx, uint32_t domid, int *num)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>   
>       tmp = libxl__xs_read(gc, XBT_NULL,
>                            libxl__sprintf(gc, "%s/device-type", be_path));
> +    if (!tmp) {
> +        LOG(ERROR, "Missing xenstore node %s/device-type", be_path);
> +        goto cleanup;
> +    }
>       disk->is_cdrom = !strcmp(tmp, "cdrom");
>   
>       disk->format = LIBXL_DISK_FORMAT_UNKNOWN;
> +
> +    return 0;
> +cleanup:
> +    libxl_device_disk_dispose(disk);
> +    return ERROR_FAIL;
>   }
>   
>   int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
> @@ -2237,9 +2262,7 @@ int libxl_vdev_to_device_disk(libxl_ctx
>       if (!path)
>           goto out;
>   
> -    libxl__device_disk_from_xs_be(gc, path, disk);
> -
> -    rc = 0;
> +    rc = libxl__device_disk_from_xs_be(gc, path, disk);
>   out:
>       GC_FREE;
>       return rc;
> @@ -2256,6 +2279,8 @@ static int libxl__append_disk_list_of_ty
>       char **dir = NULL;
>       unsigned int n = 0;
>       libxl_device_disk *pdisk = NULL, *pdisk_end = NULL;
> +    int rc=0;
> +    int initial_disks = *ndisks;
>   
>       be_path = libxl__sprintf(gc, "%s/backend/%s/%d",
>                                libxl__xs_get_dompath(gc, 0), type, domid);
> @@ -2266,17 +2291,19 @@ static int libxl__append_disk_list_of_ty
>           if (tmp == NULL)
>               return ERROR_NOMEM;
>           *disks = tmp;
> -        pdisk = *disks + *ndisks;
> -        *ndisks += n;
> -        pdisk_end = *disks + *ndisks;
> +        pdisk = *disks + initial_disks;
> +        pdisk_end = *disks + initial_disks + n;
>           for (; pdisk < pdisk_end; pdisk++, dir++) {
>               const char *p;
>               p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
> -            libxl__device_disk_from_xs_be(gc, p, pdisk);
> +            if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
> +                goto out;
>               pdisk->backend_domid = 0;
> +            *ndisks += 1;
>           }
>       }
> -    return 0;
> +out:
> +    return rc;
>   }
>   
>   libxl_device_disk *libxl_device_disk_list(libxl_ctx *ctx, uint32_t domid, int *num)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:38:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:38:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJr7-0007rj-P4; Wed, 05 Dec 2012 18:38:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJr6-0007rX-Ms
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:38:00 +0000
Received: from [85.158.139.83:31154] by server-1.bemta-5.messagelabs.com id
	B0/C1-09311-7849FB05; Wed, 05 Dec 2012 18:37:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354732677!28580677!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19678 invoked from network); 5 Dec 2012 18:37:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:37:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46724126"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:37:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:37:55 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJqw-0000Kr-Dx;
	Wed, 05 Dec 2012 18:37:50 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:37:46 +0000
Message-ID: <1354732666-3132-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] xen/arm: flush dcache after memcpy'ing the
	kernel image
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

After memcpy'ing the kernel into guest memory we need to flush the dcache
to make sure that the data actually reaches memory before we start
executing guest code with caches disabled.

This fixes a boot time bug on the Cortex A15 Versatile Express that
usually shows up as follows:

(XEN) Hypervisor Trap. HSR=0x80000006 EC=0x20 IL=0 Syndrome=6
(XEN) Unexpected Trap: Hypervisor

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/kernel.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index b4a823d..81818b1 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -53,6 +53,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 
         set_fixmap(FIXMAP_MISC, p, attrindx);
         memcpy(dst, src + s, l);
+        flush_xen_dcache_va_range(dst, l);
 
         paddr += l;
         dst += l;
@@ -82,6 +83,7 @@ static void kernel_zimage_load(struct kernel_info *info)
 
         set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED);
         memcpy(dst, src, PAGE_SIZE);
+        flush_xen_dcache_va_range(dst, PAGE_SIZE);
         clear_fixmap(FIXMAP_MISC);
 
         unmap_domain_page(dst);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJr9-0007ru-5m; Wed, 05 Dec 2012 18:38:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgJr6-0007rV-VJ
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:38:01 +0000
Received: from [85.158.139.83:27641] by server-7.bemta-5.messagelabs.com id
	A5/B1-23096-7849FB05; Wed, 05 Dec 2012 18:37:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354732677!28580677!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODcyMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19660 invoked from network); 5 Dec 2012 18:37:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:37:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="46724124"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:37:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:37:55 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgJqw-0000Kr-DC;
	Wed, 05 Dec 2012 18:37:50 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 18:37:45 +0000
Message-ID: <1354732666-3132-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] xen/arm: disable interrupts on
	return_to_hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At the moment it is possible to reach return_to_hypervisor with
interrupts enabled (it happens all the time when we are actually going
back to hypervisor mode, i.e. when we don't take the return_to_guest
path).

If that happens we risk losing the content of ELR_hyp: if we receive an
interrupt right after restoring ELR_hyp, once we come back we'll have a
different value in ELR_hyp and the original is lost.

In order to make the return_to_hypervisor path safe, we disable
interrupts before restoring any registers.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/entry.S |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index 2ff32a1..1d6ff32 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -108,6 +108,7 @@ ENTRY(return_to_guest)
 	RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
 	/* Fall thru */
 ENTRY(return_to_hypervisor)
+	cpsid i
 	ldr lr, [sp, #UREGS_lr]
 	ldr r11, [sp, #UREGS_pc]
 	msr ELR_hyp, r11
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:41:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:41:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJuX-0008GJ-RC; Wed, 05 Dec 2012 18:41:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TgJuW-0008G6-AE
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:41:32 +0000
Received: from [193.109.254.147:60076] by server-8.bemta-14.messagelabs.com id
	14/6E-05026-B559FB05; Wed, 05 Dec 2012 18:41:31 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354732888!6406270!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MTcwMzE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MTcwMzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17422 invoked from network); 5 Dec 2012 18:41:29 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 18:41:29 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2s7sEVU=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-093-073.pools.arcor-ip.net [84.57.93.73])
	by smtp.strato.de (joses mo6) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id z061d3oB5IDMEF ;
	Wed, 5 Dec 2012 19:41:28 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id D72BD1884C; Wed,  5 Dec 2012 19:41:27 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 19:41:20 +0100
Message-Id: <1354732880-20096-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.0.1
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] qemu-traditional: update configure check for
	-lrt changes in glibc 2.17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

configure uses clock_gettime to check whether -lrt is needed, and doesn't
check any other functions. With glibc 2.17 clock_gettime is part of libc,
so use timer_gettime instead, which is in -lrt in both old and new
versions of glibc.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 configure | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/configure b/configure
index 904e019..ace3c3e 100755
--- a/configure
+++ b/configure
@@ -1097,7 +1097,7 @@ fi
 cat > $TMPC <<EOF
 #include <signal.h>
 #include <time.h>
-int main(void) { clockid_t id; return clock_gettime(id, NULL); }
+int main(void) { struct itimerspec v; timer_t t; return timer_gettime (t, &v); }
 EOF
 
 rt=no
-- 
1.8.0.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:42:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgJv6-0008LV-8e; Wed, 05 Dec 2012 18:42:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TgJv3-0008KX-D9
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 18:42:06 +0000
Received: from [193.109.254.147:45882] by server-14.bemta-14.messagelabs.com
	id FD/47-14517-C759FB05; Wed, 05 Dec 2012 18:42:04 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354732923!9069508!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA0ODYyNTY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA0ODYyNTY=\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30838 invoked from network); 5 Dec 2012 18:42:04 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 18:42:04 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2s7sEVU=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-093-073.pools.arcor-ip.net [84.57.93.73])
	by smtp.strato.de (joses mo6) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id z061d3oB5IDMES ;
	Wed, 5 Dec 2012 19:42:03 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id D19BD1884C; Wed,  5 Dec 2012 19:42:02 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Wed,  5 Dec 2012 19:42:00 +0100
Message-Id: <1354732920-20334-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.0.1
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] qemu-traditional: do not strip binaries during
	make install
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 Makefile        | 2 +-
 Makefile.target | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Makefile b/Makefile
index 37c7066..594f0ef 100644
--- a/Makefile
+++ b/Makefile
@@ -243,7 +243,7 @@ endif
 install: all $(if $(BUILD_DOCS),install-doc)
 	mkdir -p "$(DESTDIR)$(bindir)"
 ifneq ($(TOOLS),)
-	$(INSTALL) -m 755 -s $(TOOLS) "$(DESTDIR)$(bindir)"
+	$(INSTALL) -m 755 $(TOOLS) "$(DESTDIR)$(bindir)"
 endif
 ifneq ($(BLOBS),)
 	mkdir -p "$(DESTDIR)$(datadir)"
diff --git a/Makefile.target b/Makefile.target
index 19bb0fd..1cf7f34 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -755,7 +755,7 @@ clean:
 
 install: all install-hook
 ifneq ($(PROGS),)
-	$(INSTALL) -m 755 -s $(PROGS) "$(DESTDIR)$(bindir)"
+	$(INSTALL) -m 755 $(PROGS) "$(DESTDIR)$(bindir)"
 endif
 
 # Include automatically generated dependency files
-- 
1.8.0.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 18:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 18:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgK7j-0000a6-BJ; Wed, 05 Dec 2012 18:55:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgK7i-0000Zt-IR
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 18:55:10 +0000
Received: from [193.109.254.147:61344] by server-5.bemta-14.messagelabs.com id
	5E/3F-10257-D889FB05; Wed, 05 Dec 2012 18:55:09 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354733707!1813025!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24014 invoked from network); 5 Dec 2012 18:55:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 18:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216522990"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 18:55:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 13:55:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgK7e-0000Zm-7h;
	Wed, 05 Dec 2012 18:55:06 +0000
Date: Wed, 5 Dec 2012 18:55:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354633209.15296.25.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212051846530.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211301426120.5310@kaball.uk.xensource.com>
	<1354633209.15296.25.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Dec 2012, Ian Campbell wrote:
> On Fri, 2012-11-30 at 14:28 +0000, Stefano Stabellini wrote:
> > Get the address of the GIC distributor, cpu, virtual and virtual cpu
> > interfaces registers from device tree.
> > 
> > Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
> > and friends because we are using them from mode_switch.S, that is
> > executed before device tree has been parsed. But at least mode_switch.S
> > is known to contain vexpress specific code anyway.
> > 
> > 
> > Changes in v2:
> > - remove 2 superflous lines from process_gic_node;
> > - introduce device_tree_get_reg_ranges;
> > - add a check for uninitialized GIC interface addresses;
> > - add a check for non-page aligned GIC interface addresses;
> > - remove the code to deal with non-page aligned addresses from GICC and
> > GICH.
> > 
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 0c6fab9..2b29e7e 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -26,6 +26,7 @@
> >  #include <xen/errno.h>
> >  #include <xen/softirq.h>
> >  #include <xen/list.h>
> > +#include <xen/device_tree.h>
> >  #include <asm/p2m.h>
> >  #include <asm/domain.h>
> >  
> > @@ -33,10 +34,8 @@
> >  
> >  /* Access to the GIC Distributor registers through the fixmap */
> >  #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
> > -#define GICC ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICC1)  \
> > -                                     + (GIC_CR_OFFSET & 0xfff)))
> > -#define GICH ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICH)  \
> > -                                     + (GIC_HR_OFFSET & 0xfff)))
> > +#define GICC ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICC1)) 
> > +#define GICH ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICH))
> >  static void gic_restore_pending_irqs(struct vcpu *v);
> >  
> >  /* Global state */
> > @@ -44,6 +43,7 @@ static struct {
> >      paddr_t dbase;       /* Address of distributor registers */
> >      paddr_t cbase;       /* Address of CPU interface registers */
> >      paddr_t hbase;       /* Address of virtual interface registers */
> > +    paddr_t vbase;       /* Address of virtual cpu interface registers */
> >      unsigned int lines;
> >      unsigned int cpus;
> >      spinlock_t lock;
> > @@ -306,10 +306,27 @@ static void __cpuinit gic_hyp_disable(void)
> >  /* Set up the GIC */
> >  void __init gic_init(void)
> >  {
> > -    /* XXX FIXME get this from devicetree */
> > -    gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET;
> > -    gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET;
> > -    gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET;
> > +    if ( !early_info.gic.gic_dist_addr ||
> > +         !early_info.gic.gic_cpu_addr ||
> > +         !early_info.gic.gic_hyp_addr ||
> > +         !early_info.gic.gic_vcpu_addr )
> > +        panic("the physical address of one of the GIC interfaces is missing:\n"
> > +              "        gic_dist_addr=%"PRIpaddr"\n"
> > +              "        gic_cpu_addr=%"PRIpaddr"\n"
> > +              "        gic_hyp_addr=%"PRIpaddr"\n"
> > +              "        gic_vcpu_addr=%"PRIpaddr"\n",
> > +              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
> > +              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);
> > +    if ( (early_info.gic.gic_dist_addr & ~PAGE_MASK) ||
> > +         (early_info.gic.gic_cpu_addr & ~PAGE_MASK) ||
> > +         (early_info.gic.gic_hyp_addr & ~PAGE_MASK) ||
> > +         (early_info.gic.gic_vcpu_addr & ~PAGE_MASK) )
> > +        panic("error: GIC interfaces not page aligned.\n");
> 
> Oh, maybe I should have skipped "arm: add few checks to gic_init" and
> come straight here instead?

Yep. I am going to address the other comments too.


> >  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index da0af77..8d5b6b0 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -54,6 +54,33 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
> >      return !strncmp(prop, match, len);
> >  }
> >  
> > +bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
> > +{
> > +    int len, l;
> > +    const void *prop;
> > +
> > +    prop = fdt_getprop(fdt, node, "compatible", &len);
> > +    if ( prop == NULL )
> > +        return 0;
> > +
> > +    while ( len > 0 ) {
> > +        if ( !strncmp(prop, match, strlen(match)) )
> > +            return 1;
> 
> This will decide that match="foo-bar" is compatible with a node which
> contains compatible = "foo-bar-baz". Is that deliberate? I thought the
> DT way would be to have compatible = "foo-bar-baz", "foo-bar" ?
> 
> Perhaps this is just due to cut-n-paste from device_tree_node_matches
> (where it makes sense, I think)?

You are right, I think that we should have a perfect match for the
compatible string.


> I now wonder the same thing about device_tree_type_matches too...

I think we should use strcmp in both cases.
I'll send another patch for that.


> > +        l = strlen(prop) + 1;
> > +        prop += l;
> > +        len -= l;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static void device_tree_get_reg_ranges(const struct fdt_property *prop,
> 
> "nr" in the name somewhere? I thought at first this was somehow
> returning the ranges themselves.

OK

> > +        u32 address_cells, u32 size_cells, int *ranges)
> > +{
> > +    u32 reg_cells = address_cells + size_cells;
> > +    *ranges = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
> > +}
> > +
> >  static void __init get_val(const u32 **cell, u32 cells, u64 *val)
> >  {
> >      *val = 0;
> > @@ -209,7 +236,6 @@ static void __init process_memory_node(const void *fdt, int node,
> >                                         u32 address_cells, u32 size_cells)
> >  {
> >      const struct fdt_property *prop;
> > -    size_t reg_cells;
> >      int i;
> >      int banks;
> >      const u32 *cell;
> > @@ -230,8 +256,7 @@ static void __init process_memory_node(const void *fdt, int node,
> >      }
> >  
> >      cell = (const u32 *)prop->data;
> > -    reg_cells = address_cells + size_cells;
> > -    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
> > +    device_tree_get_reg_ranges(prop, address_cells, size_cells, &banks);
> >  
> >      for ( i = 0; i < banks && early_info.mem.nr_banks < NR_MEM_BANKS; i++ )
> >      {
> > @@ -270,6 +295,46 @@ static void __init process_cpu_node(const void *fdt, int node,
> >      cpumask_set_cpu(start, &cpu_possible_map);
> >  }
> >  
> > +static void __init process_gic_node(const void *fdt, int node,
> > +                                    const char *name,
> > +                                    u32 address_cells, u32 size_cells)
> > +{
> > +    const struct fdt_property *prop;
> > +    const u32 *cell;
> > +    paddr_t start, size;
> > +    int interfaces;
> > +
> > +    if ( address_cells < 1 || size_cells < 1 )
> > +    {
> > +        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
> > +                     name);
> > +        return;
> > +    }
> 
> Is this the sort of check which the generic DT walker helper thing ought
> to include?

No, sometimes address_cells < 1 or size_cells < 1 are completely
acceptable, see process_cpu_node for example.


> > +
> > +    prop = fdt_get_property(fdt, node, "reg", NULL);
> > +    if ( !prop )
> > +    {
> > +        early_printk("fdt: node `%s': missing `reg' property\n", name);
> > +        return;
> > +    }
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg_ranges(prop, address_cells, size_cells, &interfaces);
> > +    if ( interfaces < 4 )
> > +    {
> > +        early_printk("fdt: node `%s': invalid `reg' property\n", name);
> 
> A more specific message would be "not enough ranges in ..."

OK

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 19:02:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 19:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgKEO-0001BB-NJ; Wed, 05 Dec 2012 19:02:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgKEN-0001Az-M5
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 19:02:03 +0000
Received: from [85.158.137.99:27093] by server-13.bemta-3.messagelabs.com id
	64/39-24887-62A9FB05; Wed, 05 Dec 2012 19:01:58 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354734117!12962166!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27933 invoked from network); 5 Dec 2012 19:01:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 19:01:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216523941"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 19:01:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 14:01:55 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgKEA-0000fy-H2;
	Wed, 05 Dec 2012 19:01:50 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 19:01:45 +0000
Message-ID: <1354734105-7007-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2/2] xen/arm: use strcmp in
	device_tree_type_matches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We want to match the exact string rather than the first subset.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/device_tree.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index b321d99..3d4a7a9 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -51,7 +51,7 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
     if ( prop == NULL )
         return 0;
 
-    return !strncmp(prop, match, len);
+    return !strcmp(prop, match);
 }
 
 bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 19:02:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 19:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgKEQ-0001BL-4N; Wed, 05 Dec 2012 19:02:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgKEO-0001Az-5Q
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 19:02:04 +0000
Received: from [85.158.137.99:30736] by server-13.bemta-3.messagelabs.com id
	EE/39-24887-72A9FB05; Wed, 05 Dec 2012 19:01:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354734117!12962166!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyODk2Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27965 invoked from network); 5 Dec 2012 19:01:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 19:01:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="216523942"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 19:01:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 5 Dec 2012 14:01:55 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgKEA-0000fy-GX;
	Wed, 05 Dec 2012 19:01:50 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Dec 2012 19:01:44 +0000
Message-ID: <1354734105-7007-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Get the addresses of the GIC distributor, CPU interface, virtual
interface and virtual CPU interface registers from the device tree.

Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
and friends because they are used from mode_switch.S, which is executed
before the device tree has been parsed. But at least mode_switch.S is
known to contain vexpress-specific code anyway.

Changes in v3:
- printk a message with the GIC interface addresses in gic_init;
- use strcmp in device_tree_node_compatible;
- rename device_tree_get_reg_ranges to device_tree_nr_reg_ranges;
- improve error message in process_gic_node.

Changes in v2:
- remove 2 superfluous lines from process_gic_node;
- introduce device_tree_get_reg_ranges;
- add a check for uninitialized GIC interface addresses;
- add a check for non-page aligned GIC interface addresses;
- remove the code to deal with non-page aligned addresses from GICC and
GICH.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c            |   40 ++++++++++++++++------
 xen/common/device_tree.c      |   73 +++++++++++++++++++++++++++++++++++++++--
 xen/include/xen/device_tree.h |    8 ++++
 3 files changed, 107 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0c6fab9..2e2e1a3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -26,6 +26,7 @@
 #include <xen/errno.h>
 #include <xen/softirq.h>
 #include <xen/list.h>
+#include <xen/device_tree.h>
 #include <asm/p2m.h>
 #include <asm/domain.h>
 
@@ -33,10 +34,8 @@
 
 /* Access to the GIC Distributor registers through the fixmap */
 #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
-#define GICC ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICC1)  \
-                                     + (GIC_CR_OFFSET & 0xfff)))
-#define GICH ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICH)  \
-                                     + (GIC_HR_OFFSET & 0xfff)))
+#define GICC ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICC1)) 
+#define GICH ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICH))
 static void gic_restore_pending_irqs(struct vcpu *v);
 
 /* Global state */
@@ -44,6 +43,7 @@ static struct {
     paddr_t dbase;       /* Address of distributor registers */
     paddr_t cbase;       /* Address of CPU interface registers */
     paddr_t hbase;       /* Address of virtual interface registers */
+    paddr_t vbase;       /* Address of virtual cpu interface registers */
     unsigned int lines;
     unsigned int cpus;
     spinlock_t lock;
@@ -306,10 +306,28 @@ static void __cpuinit gic_hyp_disable(void)
 /* Set up the GIC */
 void __init gic_init(void)
 {
-    /* XXX FIXME get this from devicetree */
-    gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET;
-    gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET;
-    gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET;
+	printk("GIC initialization:\n"
+              "        gic_dist_addr=%"PRIpaddr"\n"
+              "        gic_cpu_addr=%"PRIpaddr"\n"
+              "        gic_hyp_addr=%"PRIpaddr"\n"
+              "        gic_vcpu_addr=%"PRIpaddr"\n",
+              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
+              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);
+    if ( !early_info.gic.gic_dist_addr ||
+         !early_info.gic.gic_cpu_addr ||
+         !early_info.gic.gic_hyp_addr ||
+         !early_info.gic.gic_vcpu_addr )
+        panic("the physical address of one of the GIC interfaces is missing\n");
+    if ( (early_info.gic.gic_dist_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_cpu_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_hyp_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_vcpu_addr & ~PAGE_MASK) )
+        panic("GIC interfaces not page aligned.\n");
+
+    gic.dbase = early_info.gic.gic_dist_addr;
+    gic.cbase = early_info.gic.gic_cpu_addr;
+    gic.hbase = early_info.gic.gic_hyp_addr;
+    gic.vbase = early_info.gic.gic_vcpu_addr;
     set_fixmap(FIXMAP_GICD, gic.dbase >> PAGE_SHIFT, DEV_SHARED);
     BUILD_BUG_ON(FIXMAP_ADDR(FIXMAP_GICC1) !=
                  FIXMAP_ADDR(FIXMAP_GICC2)-PAGE_SIZE);
@@ -569,9 +587,9 @@ int gicv_setup(struct domain *d)
 {
     /* map the gic virtual cpu interface in the gic cpu interface region of
      * the guest */
-    return map_mmio_regions(d, GIC_BASE_ADDRESS + GIC_CR_OFFSET,
-                        GIC_BASE_ADDRESS + GIC_CR_OFFSET + (2 * PAGE_SIZE) - 1,
-                        GIC_BASE_ADDRESS + GIC_VR_OFFSET);
+    return map_mmio_regions(d, gic.cbase,
+                        gic.cbase + (2 * PAGE_SIZE) - 1,
+                        gic.vbase);
 }
 
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index da0af77..b321d99 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -54,6 +54,33 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
     return !strncmp(prop, match, len);
 }
 
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
+{
+    int len, l;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "compatible", &len);
+    if ( prop == NULL )
+        return 0;
+
+    while ( len > 0 ) {
+        if ( !strcmp(prop, match) )
+            return 1;
+        l = strlen(prop) + 1;
+        prop += l;
+        len -= l;
+    }
+
+    return 0;
+}
+
+static void device_tree_nr_reg_ranges(const struct fdt_property *prop,
+        u32 address_cells, u32 size_cells, int *ranges)
+{
+    u32 reg_cells = address_cells + size_cells;
+    *ranges = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+}
+
 static void __init get_val(const u32 **cell, u32 cells, u64 *val)
 {
     *val = 0;
@@ -209,7 +236,6 @@ static void __init process_memory_node(const void *fdt, int node,
                                        u32 address_cells, u32 size_cells)
 {
     const struct fdt_property *prop;
-    size_t reg_cells;
     int i;
     int banks;
     const u32 *cell;
@@ -230,8 +256,7 @@ static void __init process_memory_node(const void *fdt, int node,
     }
 
     cell = (const u32 *)prop->data;
-    reg_cells = address_cells + size_cells;
-    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+    device_tree_nr_reg_ranges(prop, address_cells, size_cells, &banks);
 
     for ( i = 0; i < banks && early_info.mem.nr_banks < NR_MEM_BANKS; i++ )
     {
@@ -270,6 +295,46 @@ static void __init process_cpu_node(const void *fdt, int node,
     cpumask_set_cpu(start, &cpu_possible_map);
 }
 
+static void __init process_gic_node(const void *fdt, int node,
+                                    const char *name,
+                                    u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const u32 *cell;
+    paddr_t start, size;
+    int interfaces;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+                     name);
+        return;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", NULL);
+    if ( !prop )
+    {
+        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        return;
+    }
+
+    cell = (const u32 *)prop->data;
+    device_tree_nr_reg_ranges(prop, address_cells, size_cells, &interfaces);
+    if ( interfaces < 4 )
+    {
+        early_printk("fdt: node `%s': not enough ranges\n", name);
+        return;
+    }
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_dist_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_cpu_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_hyp_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_vcpu_addr = start;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -279,6 +344,8 @@ static int __init early_scan_node(const void *fdt,
         process_memory_node(fdt, node, name, address_cells, size_cells);
     else if ( device_tree_type_matches(fdt, node, "cpu") )
         process_cpu_node(fdt, node, name, address_cells, size_cells);
+    else if ( device_tree_node_compatible(fdt, node, "arm,cortex-a15-gic") )
+        process_gic_node(fdt, node, name, address_cells, size_cells);
 
     return 0;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 4d010c0..a0e3a97 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -26,8 +26,16 @@ struct dt_mem_info {
     struct membank bank[NR_MEM_BANKS];
 };
 
+struct dt_gic_info {
+    paddr_t gic_dist_addr;
+    paddr_t gic_cpu_addr;
+    paddr_t gic_hyp_addr;
+    paddr_t gic_vcpu_addr;
+};
+
 struct dt_early_info {
     struct dt_mem_info mem;
+    struct dt_gic_info gic;
 };
 
 typedef int (*device_tree_node_func)(const void *fdt,
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 21:48:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 21:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgMop-0004RR-Li; Wed, 05 Dec 2012 21:47:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1TgMoo-0004RM-4X
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 21:47:50 +0000
Received: from [85.158.143.35:26038] by server-3.bemta-4.messagelabs.com id
	3E/2F-06841-501CFB05; Wed, 05 Dec 2012 21:47:49 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354744067!11783051!1
X-Originating-IP: [209.85.212.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31581 invoked from network); 5 Dec 2012 21:47:48 -0000
Received: from mail-vb0-f41.google.com (HELO mail-vb0-f41.google.com)
	(209.85.212.41)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 21:47:48 -0000
Received: by mail-vb0-f41.google.com with SMTP id l22so4777078vbn.28
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 13:47:47 -0800 (PST)
From xen-devel-bounces@lists.xen.org Wed Dec 05 21:48:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 21:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgMop-0004RR-Li; Wed, 05 Dec 2012 21:47:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1TgMoo-0004RM-4X
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 21:47:50 +0000
Received: from [85.158.143.35:26038] by server-3.bemta-4.messagelabs.com id
	3E/2F-06841-501CFB05; Wed, 05 Dec 2012 21:47:49 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354744067!11783051!1
X-Originating-IP: [209.85.212.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31581 invoked from network); 5 Dec 2012 21:47:48 -0000
Received: from mail-vb0-f41.google.com (HELO mail-vb0-f41.google.com)
	(209.85.212.41)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 21:47:48 -0000
Received: by mail-vb0-f41.google.com with SMTP id l22so4777078vbn.28
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 13:47:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=qqmr4ZaynKEQL95ySBRQNFxJMWgDG1nHLbCIa83FcCE=;
	b=ZRc+3058liWVEoySZyBcimveNdx40rOHDMzWT6JVXPPUarnFNYg1yfG/QTq9bC8sVT
	Ca19tJptzCfX7YwUCJUvq01vPraPCkTChSGJOJxDwzf8gCY4xae69cF3Gru1y2FMUvaG
	xW4NebTdDSxwbLQiGzNorJMiCM48AbdjiR3eeSTNXCjvJd59PzVaIFd8//41HTHoNr5A
	xLnQocPL/sDOOs1YwS1foZYZloNppOib+ztT8/RZMfE+qK0CAprmVoqyhnpIsCNS0v01
	ut+MgX+St5W6z3Nl1KXWtFnc3CMgKNaptaeMzmthrLjTSxuGxiAQpjV/MT8vsgH3sCxF
	jbAw==
Received: by 10.52.172.195 with SMTP id be3mr14694615vdc.54.1354744067126;
	Wed, 05 Dec 2012 13:47:47 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id b10sm2368115vdk.15.2012.12.05.13.47.45
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 05 Dec 2012 13:47:46 -0800 (PST)
Date: Wed, 5 Dec 2012 16:47:43 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Ian Campbell <ijc@hellion.org.uk>
Message-ID: <20121205214741.GA1150@phenom.dumpdata.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1354711599.15296.191.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354711599.15296.191.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	debian-kernel <debian-kernel@lists.debian.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 12:46:39PM +0000, Ian Campbell wrote:
> On Mon, 2012-11-26 at 13:44 +0000, Jan Beulich wrote:
> > >>> On 26.11.12 at 14:21, Ian Campbell <ijc@hellion.org.uk> wrote:
> > > Debian has decided to take Jeremy's microcode patch [0] as an interim
> > > measure for their next release. (TL;DR -- Debian is shipping pvops Linux
> > > 3.2 and Xen 4.1 in the next release. See http://bugs.debian.org/693053 
> > > and https://lists.debian.org/debian-devel/2012/11/msg00141.html for some
> > > more background).
> > > 
> > > However the patch is a bit old and predates the introduction of
> > > separate firmware files for AMD family >= 15h. Looking at the SuSE
> > > forward ported classic Xen patches it seems like the following patch is
> > > all that is required. But it seems a little too simple to be true and I
> > > don't have any such processors to test on.
> > > 
> > > Jan, can you recall if it really is that easy on the kernel side ;-)
> > 
> > While so far I didn't myself run anything on post-Fam10 systems
> > either, it really ought to be that easy - the patch format didn't
> > change, it's just that they decided to split the files by family to
> > keep them manageable.
> > 
> > The only other thing to check for is that you don't have any
> > artificial size restriction left in that code (I think patch files early
> > on were limited to 4k in size, and that got lifted during the last
> > couple of years).
> 
> I managed to find a machine and try this and it turns out that all that
> was missing from the kernel side was:
> 
>         @@ -58,7 +58,7 @@
>          
>          static enum ucode_state xen_request_microcode_fw(int cpu, struct device *device)
>          {
>         -       char name[30];
>         +       char name[36];
>                 struct cpuinfo_x86 *c = &cpu_data(cpu);
>                 const struct firmware *firmware;
>                 struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

Do you want to prep a patch that I can stick in my 'microcode' branch?
... that I will at some point try to upstream.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 05 22:06:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 22:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgN6u-00050M-In; Wed, 05 Dec 2012 22:06:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carsten@schiers.de>) id 1TgN6s-00050H-NG
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 22:06:31 +0000
Received: from [85.158.143.35:36387] by server-3.bemta-4.messagelabs.com id
	DD/66-06841-565CFB05; Wed, 05 Dec 2012 22:06:29 +0000
X-Env-Sender: carsten@schiers.de
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354745166!14168235!1
X-Originating-IP: [194.117.254.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=FROM_EXCESS_QP,
  HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16719 invoked from network); 5 Dec 2012 22:06:07 -0000
Received: from www.zeus06.de (HELO mail.zeus06.de) (194.117.254.36)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 22:06:07 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=mail.zeus06.de; h=subject
	:from:to:cc:date:mime-version:content-type:in-reply-to
	:references:message-id; s=beta; bh=aPNUA1Rcsj3PAYhy60rxp17nKBvst
	jd69WjORjC8LDY=; b=hFVrcSYgfdXDff6iAMwuDJWCd/vXAWKpoSg8rLjkR76Wb
	yMUIdK0FDApjQdpNQP/9nIN2A5FRmnP/ZVfgLePgxaRtpY3bGTUgdaA2GXJJ7bpU
	m7QZr3HoqrbtSJ5/Cup9tC73g5HQ67TN5wreP3Ruh7X9HrG05R35l7MeFFBYyI=
Received: (qmail 13059 invoked from network); 5 Dec 2012 23:06:05 +0100
Received: from unknown (HELO uhura.zz) (l3s6271p1@46.59.195.81)
	by mail.zeus06.de with ESMTPA; 5 Dec 2012 23:06:05 +0100
Received: from localhost (localhost [127.0.0.1])
	by uhura.zz (Postfix) with ESMTP id 68F002C519;
	Wed,  5 Dec 2012 23:06:05 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at schiers.de
Received: from uhura.zz ([127.0.0.1])
	by localhost (uhura.space.zz [127.0.0.1]) (amavisd-new, port 10024)
	with LMTP id dS1MvOJ4trkG; Wed,  5 Dec 2012 23:06:03 +0100 (CET)
Received: from uhura.space.zz (localhost [127.0.0.1])
	by uhura.zz (Postfix) with ESMTP id E88CF2C518;
	Wed,  5 Dec 2012 23:06:02 +0100 (CET)
From: =?windows-1252?Q?Carsten_Schiers?= <carsten@schiers.de>
To: =?windows-1252?Q?Konrad_Rzeszutek_Wilk?= <konrad@kernel.org>
Date: Wed, 5 Dec 2012 23:06:02 +0100
Mime-Version: 1.0
In-Reply-To: <20120918193401.GA7667@phenom.dumpdata.com>
References: <CAAnFQG9TZj=SxYK_jxqN=M6L40vL=AZu9EM2tbEffPhkL1s8FA@mail.gmail.com>
X-Priority: 3 (Normal)
X-Mailer: Zarafa 7.0.8-35178
Thread-Index: Ac3TNLQqf8cZVHaMQA+1Jh89MiYWNw==
Message-Id: <zarafa.50bfc54a.29c1.3cb9c7c6411698ee@uhura.space.zz>
Cc: =?windows-1252?Q?Xen_Devel_Mailing_list?= <xen-devel@lists.xen.org>
Subject: [Xen-devel] The neverending load increase....
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1517825123261143294=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format. Your mail reader does not
understand MIME message format.
--===============1517825123261143294==
Content-Type: multipart/alternative; 
 boundary="=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev"

--=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Hi Konrad,

Yes, me again, the guy with the double bounce buffering problem with the DVB
card when using more than 4GB of RAM and PVOPS.

If you are still interested: guess what happened between the 3rd and the 4th
on riker? Yes, I switched to PVOPS ;o).

So, I am currently on Xen 4.1 with a 3.6.7 PVOPS kernel for Dom0 and riker.
Before the increase, I was using an old-style kernel, no other changes in the
Domain.

Compared to the last time we checked on it, I switched mainboard, CPU
(AMD->Intel), and now have 16GB.

So, just in case we would like to check on it further, please feel free to
advise or patch a kernel, and I am more than happy to try it out.

I use the PVOPS kernel now because of some glitches when recording, and I
want to test whether they persist on PVOPS. I am going to switch to Xen 4.2
soon, because I want to modify the scheduler with the new parameters, as I
guess the Domain doesn't get enough attention; there are 3 PCI cards for it...

If this is not interesting, please simply let me know; it's not really an
issue, it's more out of curiosity.

BR,
Carsten.
[Attachment image001.png: "Xen Domain Utilization" munin graph,
https://data.pcip.de/munin/space.zz/data.space.zz/xen_new.html]
aCpaIJrG1zI1rfAYfPUgdvJbKngjrNEkBSREMWE83kDm98caNpu99+xZDLNn7xme94sXr2Gx
5tmfWXvN2s+sWeuzVadPnwYEQRDE/qmurm4ltwYEQRDECty6deuPP/4wjulPPPGEvGoQBEGQ
5rBnzx4AcJJbBoIgCGI1cExHEARxHHBMRxAEcRzwO1IEQRD7ICsri1MyevRoTgnm6YjDolar
5ZbQBJSgVgkaEBFGjx49evTokJAQEBrNCTimI/Ig3fDBj2yVY+F4hyiBhw8f6nQ6lUpVXV3t
7OzMr4Bjup0RFRX19ttvs0vmzp07a9as5sRUq9VqtdrFxcXLy+v555+PjY29ceNG82SaR6/X
01cWHE+ZQs5/+ZGbdCxTB7UsCIJYkZKSkh9++OHOnTtPPfXU8ePHu3fvzq+D8+l2xtq1a4OD
g/fs2fPKK68AwO7du0+cOHHkyJFmhiUD1p07dy5evLhz584hQ4YcPnzYx8fHCooRBLES5eXl
gYGBLi4uAODp6SlYB/N0O6N9+/YZGRlLliy5ePHixYsX33///f/5n/9p165dXV3dBx98oNVq
u3XrNmvWrDt37pD6arV669atWq3W3d19xIgR58+fFw/er1+/Dz74YOrUqYmJiaTw/v37ixYt
6tmzZ8+ePRctWnT//n0mcnp6ep8+fdzc3AYPHvzTTz/t3Lmzf//+5EAXLlwg1X7//fc333zT
x8fHy8vr9ddfr6ysZJ5ugUI+JA75qMGJzKnDVGMQUSgSVqRBmvNCAODjjz/29fX18fF55513
Hjx4AAAhISF79+5lKpSUlPj5+VVXVzMlTz311K+//koe79y5kzz49ddfn3rqKQAw1StMlTOc
Pn26V69emzdvbupLQCRl0KBBZEAXAcd0+8Pf33/lypVTpkyZMmVKUlIS2QO8adOm//znPwcP
HiwoKKitrV2xYgVT//Dhw99///2VK1defPFFzryNKcLCwpjc/8MPP/z1119/+OGHH3744Zdf
fvnoo4+YallZWZmZmSUlJZMnT/773//+3XffffPNN1euXBkzZgxzoClTpkREROh0ut9++83L
yysuLo5/OAsUMpBPGHq9nmZuRF/Pxo0bma+YBBWKhBVpkOa8EADIycn58ccff/7554sXLyYn
JwPAu+++m5SUVFdXRyokJSXNnj2b/a4eOXLksWPHAODq1asLFy68ffs2APz444+jRo0C071C
pLcAQFZW1t///ve1a9fOnj27qS8BkZQsHvw6KuLhhd4AdsewYcOcnZ2ZkTcwMHDXrl1+fn4A
UFFRERwcXFhYCABqtfr333/XaDQAcOfOnR49elRUVHBCqdVqzshVU1Oj0Whu3rwJAH379t2z
Z8+TTz4JAOfPn3/11VfPnTtHnnXx4sWuXbuSyBqNxuyB9Hr9oEGDfvvtN/ZBLVPIicD+L7+c
U+HIkSPLli3Lysrq2LEjjUJOWJEGMftCRFCr1SdPniRhf/3119DQUBJ2+PDhs2fPnjx58qVL
l8aOHXvq1Kn27dszzzp48ODu3bu3b9++bt26DRs2xMfHh4WFhYWFvfrqqy+99JKpXiHSW5KT
kz/66KMvv/wyMDCQXjxiSwwGQ3Z29ujRo7OystirX9AbwI7ZuXOnwWCora396quvSElJScmA
AQPIXEHPnj1LS0uZymSUAYD27dvfvXuXJn55ebmbmxvzuEePHuSxr6/vn3/+yVQjAzqJbOpA
p06dGjduXLdu3dRqtZeXV3l5Of9wZhW2bt2ameIg3Lt3z8nJkt7722+/zZ8//4svvmAGdBqF
bEQaxOwL4Uz7cGDCPv7440xYkqo/fPhw5cqV8+bNYw/oABAUFHTixAkA+Oqrrz799NOMjAwA
OHnyZFBQEJjuFSK9ZePGja+//joO6IoF1704IL/99ltCQkJGRsbnn3++bNkyMnPt7e1dWFjI
zC1UVVU15xA7duwIDg4mj7t27Xr58mXyuKioyNQ3M6aYNm3a66+/fu7cuVu3bhUXFz98+NAC
Pb6+vgUFBeySgoICb29v8lilUlHGuX79+uuvv/7pp5+yv/41pdBU2OY0CHOCBP/LhL18+TIT
dsyYMY888khcXNzx48dnzJjBeUqHDh169Oixb9++tm3bhoSE1NbW/utf/3r88cfJ0G+qV4j0
lqysrP37969fv57+RSE2g2bdC47pdsadO3emTp26du3aHj16PP744x9++OHUqVPv3r07c+bM
6OhonU734MGDwsLCadOmWRb87NmzixYt+vzzz5cuXUoKX3nllffee+/q1atXr1597733yHqb
JsV0cXFp3759SUlJTEyMBaoAIDIyMjo6+scffyRj0H/+85/o6OiIiAjyXzc3N51ORxMnNDT0
nXfeefbZZ2kUmgrbzAYRYfHixWVlZWVlZYsXLw4NDSWFKpXq3XffXb9+/aJFix555BH+s0aN
GrVkyZJXX32VeYFkMh0ATPUKkd7y2GOPZWVl7dixg0zoI4qCrHsZMGCAp6fnqFGj/P39+XVw
TLczFixYMHLkyHHjxpE/J0yYMGzYsPnz50dFRY0ZM+b111/38vJ66623mjrQkPXpvr6+s2bN
euSRR44dO8Zksu+9916vXr2GDRs2bNiw3r17v/vuu02KvGnTptjYWE9PzzFjxgwdOrRJz2WY
OXNmTEzM+++//+STTz755JNLly59++23mS8hFyxY8Le//Y1mT9DJkyejoqI4EyCmFJoK28wG
EWH48OFDhw595plnHn/88YULFzLlzs7OPXv2fP311wWf9be//a2iomLSpEkAMGnSpGvXrv3t
b38j/zLVK8R7i6en5/fff//ll1+uWbPGWi8NsQo0617wO1IEUTqTJ09+5ZVXrPiBALFTxP1e
yHekuOcIQZRLXV3djh07iouL//GPf8itBVEE7EFccC0jjukIolw6derk4+OTkZFh2SIfxMF4
5JFHHjx4QL5WuX//vuD3KzimI4hyQZMZhI2rq+vFixfJV6O///47s+CYDY7pCIIg9kHv3r0L
CwuPHj0KAK6urlqtll8Hx3QEQRD7oE2bNgMGDGD+5OwjJeAkHYIgiOOAeTqCIIh9ILjQhYOD
jOl//Oc/KpWqh6VbWhAEQZQPZ6bFYdcy1ty9ezghQeXsPO2771q1aSO3HARBENlwhPn0k9u2
Vf/5Z1Vp6cmtW+XWgiAIYiMEbzNt93n6X5cvn0hPJ49PbN0a8P/+X6d6uz4EQRBH4t69e7/+
+iu5FZebm1vv3r3btm3LqWPrPJ3vH71///5BgwZ5eHgEBQWRO7ZUVFRMmDDB09NzwoQJ5K4C
/BKG3A8+ePjgAXlce+9ezurVNnw1CIIgtqOgoKBDhw5BQUFBQUHt27fnGFATbD2m882j9+/f
TxwtZsyYMX36dACIjY3VarU6nS4gIIDcSIxfwjBx8+Z3CgurJ016p7Bw7t69EzdtEhdg9o4H
9NUoQ9XU3y6y+aFQFX0oVEUfClXRh7KiKgu4deuWr69v69atW7du3bNnz1u3bvHryD+f/vnn
nwcEBLRt23bYsGHkc0ROTk50dLSLi0tMTExOTo5giSD3fvrJ7OGuXbtGo4qmGmUoVEUfClXR
h0JV9KGUqcoCOnfuXFRUVFNTU1NTc+nSpc6dO/PryD+mE65duxYWFpaUlAQAlZWV7u7uAODu
7n7jxg3BEoaHlZX6bdvqbt2CvDz311+HvLyOHTuK/B44cKDZOh07dtTevWuVOqgKVaEqVMX5
3fbePf22bTSfHjj07dv39u3bR48ePXr06H//+9++ffsKVDp9+vTp06f1tgVYN/HS6/W5ubk+
Pj7p6enkT09PT51Op9frdTqdl5eXYAmHBQsWGAwG/ZEjBnNcunTJbB3KapShUBV9KFRFHwpV
0YdSoCopxtVt27Zt27ZNnntisO/Ivn379sTExC1btowYMYKUREVFubq6Ll68OCkp6ebNm59+
+im/hBMwLi4Ob7WFIIi9cPv2bQueJb7uhdwTQ551L+wHMTExZFkLKfnvf/+bkJBQUFDg7+9f
UFCQkJAAAPwSQW7n5poVUFRURKOTphplKFRFHwpV0YdCVfShlKnKAmjWvTjIveswT0cQxI6w
LE8/dOhQcHBwq1atAKC2tvbIkSPM/cRBrjxdUpR5NUZV9KFQFX0oVEUfSpmqLIBm3Qvm6QiC
ILbG4vn08+fP37x5EwBcXV3JKnDmv5inN7eaXecIqIo+FKqiD4Wq6ENZQNu2bQcMGDBy5MiR
I0cOGDCAbwwAmKcjCILYHsvydL65LtvGC/P05laz6xwBVdGHQlX0oVAVfSjLGN0YfgXM0xEE
QWyNxXm64DhOwDy9udXsOkdAVfShUBV9KFRFH0oibJ2nMy67zD5SfklFRUV4ePiJEycGDx68
ZcsWDw8PfgknLObpCILYEZbl6eLIk6czfi8iJU3y2mWjzKsxqqIPharoQ6Eq+lDKVCUR8vu9
8Ev8/f1zc3O9vLzKysqCg4N1Oh2/hBMQ83QEQewIC/L0o0ePuru7d+nSxc3NzdnZmV9BufPp
lnnt3s7NJT9Qf1kW/F1UVGS2DgAU7d5tlTqoClWhKlTF+V1744YFXrsDBgxo167d5cuXc3Jy
Tpw48ccff3AyYwLm6QiCILamOfPptbW1N2/evHHjxvXr14OCgphy5ebpI0aMSElJqa6uTklJ
CQ4OFiwRhFwGxcG5PAKqog+FquhDoSr6UBbTqlUrDw+PgIAA9oDOINu6F4Jer+eXXLt2bebM
mSdPnhw0aFB6erpGo+GXcMJino4giB3haOteGARLNBpNZmZmeXl5ZmYmGb75JYIo82qMquhD
oSr6UKiKPpQyVUkE7iNFEASxNVbJ0znbSpU7n24xyrwaoyr6UKiKPhSqog+lTFUSgXk6giCI
rUFfRiqUeTVGVfShUBV9KFRFH0qZqizArCkjYJ6OIAhie3A+nQplXo1RFX0oVEUfClXRh1Km
quajCP90y1wY0ZcRkQLVS7GG75fLrQJpiVh8P9Jff/21srISANzc3Hr37i3//Ugtc2FEX0b6
aqiKvprWvTVNKGwr+lCoij6UBRQUFHTo0CEoKCgoKKh9+/YFBQX8OvL7vdC4u6DfCyIFmKcj
cmFZnn7o0KHg4OBWrVoBQG1t7ZEjR0aNGsX8Vynz6TQujOjLiKqkUDWnU7UCVSmzrVCVFVVZ
5ssIAJ07dy4qKqqpqampqbl06VLnzp35dTBPR1oumKcjcmHxfPr58+dv3rwJAK6urgEBAfLP
p/OhcWFEX0b6aqiKvhrOp9NXQ1X01aSbT2/btu2AAQNGjhw5cuTIAQMGsAd0Bvl9GWlcGNGX
EZEC1ccqwzyD3CqQlojj7CO1zIURfRnpq6Eq+mpa0NKEwraiD4Wq6ENZBhnEcR8pggiAeToi
F5bl6f/7v//77LPP/vTTT0OGDAGA48ePjxw5kvmvUubTrYgyr8aoij4U5un0oVAVfShlqrKA
bt26HT9+PCAg4NSpUz///LO/vz+/DubpSAtGpQID5umIDDjOfY4kRZlXY1RFH8rWqrSYp9NW
Q1X01aTL07N48Otgno60YDBPR2QC83QqlHk1RlX0oTBPpw+FquhDKVOVRGCejrRgME9HZMKR
8/QjR44MGjSoa9eukyZNInteKyoqJkyY4OnpOWHChIqKCsESQZR5NUZV9KEwT6cPharoQylT
lUTIn6c/+eST69evHz58eG5u7oEDBzZu3BgVFeXq6rp48eLVq1ffunXrk08+4ZdwgmCejlgC
5umITDiOfzqfhw8fAoBKpQKAQ4cOAUBOTk50dLSLi0tMTExOTo5giSDKvBqjKvpQmKfTh0JV
9KGUqcoCaPzT5R/TU1JS3nvvvW7duh09epRcfyz22u04fLhZt0xfX18aR02P4mKr1EFVilZV
VaVEVcpsK1RlPVUWe+3eunXL19e3devWrVu37tmz561btwQqnT59+vTp03oF8OWXX/bq1Uuv
13t6eup0Or1er9PpvLy8BEs4LFiwwGAw6I8cMZjj0qVLZutQVqMMharoQ9lalVZLEwrbij4U
qqKsY9kgmZOTk5+ff/PmzZs3b+bl5eXk5LD/u23btm3btsk/nw4AdXV1Z8+enT59elQ9ZPY8
KSnp5s2bn376Kb+EEwHn0xFLwPl0e8NhLO8d2T9drVY/+uijYWFhM2bMiIiIAICEhISCggJ/
f/+CgoKEhATBEkHIRxtxcC6PgKoA59ObUg1V0VeTbj5dif7pEoF5OmIJmKfbGy08T1eif7qk
KPNqjKroQ2GeTh8KVdGHUqYqy0D/dAQxDebp9gbm6aNHjya/mT+Z/2Ke3txqdp0joCrAPL0p
1VAVfbWWvo/UKmCejlgC5un2hsPcmsoqfi+YpwNgjlAPqgLM05tSDVXRV8M83Qpgno5YAubp
9kYLz9PtY91LVlZWYGBgly5dAgMDiWL0ZWxONVRFXw3zdPpqClFlmK9EVZaFsoDRjRGsI3+e
3qNHj7S0tKCgoNzc3KioqD/++AN9GREbgXm63eEop8yR59M9PT2h3pfRy8sL0JexedVQFX01
zNPpq6Eq+mo2m08XTNXlH9M3btw4ffp0Nze3GTNmrF+/HtCXEVVZSZXqpVgzddCXEVXJocpi
X0aae0zL78vYs2fPr7/++saNG3v37n3iiSf06MvYvGqoiqkGo5eJ19ms3Wx7VdYK1UJVAShR
VdPrSDGWKsWX0dfX99NPPx0+fPiRI0dmz5596dIl9GVErILZPYdpqrQIQ4TN9CBWoGXPp9vH
upeNGzcuWrTIy8tr8eLFKSkpgL6MzauGquirPdQ+pAmFbUUfClXRh7KA0aNHjxgxQq1WOzs7
Dxw4UHA+Xf483Spgno7wwTzdAWnZefqDBw9Onjzp4eHh7u6en58/aNAgFxcX5r9KydOtiDKv
xqiKPhTm6fShUBV9KGWqsoATJ05oNBo/P7/OnTtrtdr8/Hx+HczTEYfF7J5DzNPtj5adp//+
++/ssfrKlSvdu3dn/jSTp+fn5z/33HOdOnUCgBkzZuzfv98CBTZGmVdjVEUfCvN0+lAtVpXq
pVgFqrIglAX8/vvv7IWMvwqthjSZpz/77LMrVqyYOHGiXq8vLS0dPXr0L7/8IpHQ5oN5OsIH
83QHRKVSjV7mABbqMvinX758+YUXXiCP27RpU11dbYECG6PMqzGqog9lXVUcbxA+mKfTV0NV
9NVsto9UEJNj+tChQ/fu3QsAf/7553vvvffiiy/aUJWFdBw+3GwdX19fmlA01ShDoSr6UDZW
5VzoTBMK24o+FKqiDyURJsf0zZs3Hz582NXV9ZlnnnF2dv7www9tKcsylHk1RlX0oXA+nT4U
qqIPpUxVFsDcjJT9Jwf5172o1Wrmsaur65UrVyoqKsLDw0+cODF48OAtW7Z4eHjwSzhBcD4d
EcDcGgmcT7c/Wvx8OqdEiftIGbOCf//732+99RYAxMbGarVanU4XEBAQFxcnWCKIMq/GqIo+
FObp9KFarqqQRCWqanooCyC26SEhIWAiSQd+ns7Omjno9XoJRDYwadKklJSUrl27+vv75+bm
enl5lZWVBQcH63Q6fgnnuZinIwJgnu54qFSqdeAAtzqy2D/94cOHFy9evHLlynPPPXf8+PFR
o0Yx/xLO00VMvyxWT8PPP//s6uratWtXaIbXLvkBUZfLoqIiGkfNot27rVIHVSlZleaZEgWq
UmZbkToTx8bIrurlcm+7aCuROhZ77ZaUlPzwww937tx56qmnjh8/zt5wxCD/fDph/PjxK1as
6NevHwBgno5YB8zTrY1ZCx3pFagAwAG2klqWp588ebJXr15sjxc2wnm62jQWKKDk2LFjtbW1
ZEAHgBEjRqSkpFRXV6ekpAQHBwuWCEIug+I45gxj00OhKsD59KZUQ1X01aSbT+eYdgmiiDx9
zJgxs2fPHjduHPnz2rVrM2fOPHny5KBBg9LT0zUaDb+EEwHzdEQAzNOtDebp1sIq9yPloJR1
LwBw8OBBZkAHAI1Gk5mZWV5enpmZSYZvfokgyrwaoyr6UJin04eSSxXbbkU5qsRRpiqJEMjT
1Wq1Xq/nT7ZI/TVpc8A8HREA83Rro/pYBdmyLg/HPN00JvN0MnbbeN2LVVDm1RhV0YfCPJ0+
VMtUlQapClRlWSiJUMR8evPBPB0RAPN0ayN7np6mSouASMzTBTEzn56dnf3000936tTJBute
rIUyr8aoij4U5un0oWRRZdbqEtuKPpREmMzT/fz8Pvzww/Hjxzs7U3nXyQvm6YgAmKdbHbnt
VjBPF8FMnn7v3r0XX3zRLgZ0BmVejVEVfSjM0+lDyaaKZbeiIFWiKFOVRJjM05OSkurq6hYs
WNC2bVs5hDUNzNMRATBPtzpy261gni6CmTw9ICBg7dq17u7uUs+n19XVrV+/vl+/fmTuHgAq
KiomTJjg6ek5YcKEiooKwRJBlHk1RlX0oTBPpw+FquhDKVOVRJjM0319fT/++OOxY8dKPf2y
[base64-encoded PNG image attachment omitted]
--=_alYL0jHM0BaW0zWL1lZTEVpaoTHnF+q00E-M0oWd1EQsEuU+--

--=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev--



--===============1517825123261143294==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1517825123261143294==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 22:06:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 22:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgN6u-00050M-In; Wed, 05 Dec 2012 22:06:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carsten@schiers.de>) id 1TgN6s-00050H-NG
	for xen-devel@lists.xen.org; Wed, 05 Dec 2012 22:06:31 +0000
Received: from [85.158.143.35:36387] by server-3.bemta-4.messagelabs.com id
	DD/66-06841-565CFB05; Wed, 05 Dec 2012 22:06:29 +0000
X-Env-Sender: carsten@schiers.de
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354745166!14168235!1
X-Originating-IP: [194.117.254.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=FROM_EXCESS_QP,
  HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16719 invoked from network); 5 Dec 2012 22:06:07 -0000
Received: from www.zeus06.de (HELO mail.zeus06.de) (194.117.254.36)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Dec 2012 22:06:07 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=mail.zeus06.de; h=subject
	:from:to:cc:date:mime-version:content-type:in-reply-to
	:references:message-id; s=beta; bh=aPNUA1Rcsj3PAYhy60rxp17nKBvst
	jd69WjORjC8LDY=; b=hFVrcSYgfdXDff6iAMwuDJWCd/vXAWKpoSg8rLjkR76Wb
	yMUIdK0FDApjQdpNQP/9nIN2A5FRmnP/ZVfgLePgxaRtpY3bGTUgdaA2GXJJ7bpU
	m7QZr3HoqrbtSJ5/Cup9tC73g5HQ67TN5wreP3Ruh7X9HrG05R35l7MeFFBYyI=
Received: (qmail 13059 invoked from network); 5 Dec 2012 23:06:05 +0100
Received: from unknown (HELO uhura.zz) (l3s6271p1@46.59.195.81)
	by mail.zeus06.de with ESMTPA; 5 Dec 2012 23:06:05 +0100
Received: from localhost (localhost [127.0.0.1])
	by uhura.zz (Postfix) with ESMTP id 68F002C519;
	Wed,  5 Dec 2012 23:06:05 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at schiers.de
Received: from uhura.zz ([127.0.0.1])
	by localhost (uhura.space.zz [127.0.0.1]) (amavisd-new, port 10024)
	with LMTP id dS1MvOJ4trkG; Wed,  5 Dec 2012 23:06:03 +0100 (CET)
Received: from uhura.space.zz (localhost [127.0.0.1])
	by uhura.zz (Postfix) with ESMTP id E88CF2C518;
	Wed,  5 Dec 2012 23:06:02 +0100 (CET)
From: Carsten Schiers <carsten@schiers.de>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Date: Wed, 5 Dec 2012 23:06:02 +0100
Mime-Version: 1.0
In-Reply-To: <20120918193401.GA7667@phenom.dumpdata.com>
References: <CAAnFQG9TZj=SxYK_jxqN=M6L40vL=AZu9EM2tbEffPhkL1s8FA@mail.gmail.com>
X-Priority: 3 (Normal)
X-Mailer: Zarafa 7.0.8-35178
Thread-Index: Ac3TNLQqf8cZVHaMQA+1Jh89MiYWNw==
Message-Id: <zarafa.50bfc54a.29c1.3cb9c7c6411698ee@uhura.space.zz>
Cc: Xen Devel Mailing list <xen-devel@lists.xen.org>
Subject: [Xen-devel] The neverending load increase....
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1517825123261143294=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format. Your mail reader does not
understand the MIME message format.
--===============1517825123261143294==
Content-Type: multipart/alternative; 
 boundary="=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev"

--=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Hi Konrad,=0D=0A=0D=0A=A0=0D=0AYes, me again: the guy with the double bo=
unce-buffering problem with the DVB card when using more than=20=0D=0A=0D=
=0A4GB of RAM and PVOPS.=0D=0A=0D=0A=A0=0D=0AIf you are still intereste=
d: guess what happened between the 3rd and the 4th on riker=3F Yes,=0D=0A=
=0D=0AI switched to PVOPS ;o).=0D=0A=0D=0A=A0=0D=0A=0D=0A=A0=0D=0ASo, I=
 am currently on Xen 4.1 with a 3.6.7 PVOPS kernel for Dom0 and riker.=
 Before the increase, I was using=0D=0A=0D=0Aan old-style kernel, no ot=
her changes in the Domain.=0D=0A=0D=0A=A0=0D=0ACompared to the last tim=
e we checked on it, I switched mainboard and CPU (AMD->Intel), and now=
 have 16GB.=20=0D=0A=0D=0A=A0=0D=0ASo, just in case we would like to c=
heck on it further, please feel free to advise or patch a kernel and I=
=20=0D=0A=0D=0Aam more than happy to try it out.=0D=0A=0D=0A=A0=0D=0AI=
 use the PVOPS kernel now because of some glitches when recording, and=
 I want to test whether they persist=0D=0A=0D=0Aon PVOPS. I am going to=
 switch to Xen 4.2 soon, because I want to modify the scheduler with t=
he new parameters,=0D=0A=0D=0Aas I guess the Domain doesn=92t get enou=
gh attention; there are 3 PCI cards for it=85=0D=0A=0D=0A=A0=0D=0AIf t=
his is not interesting, please simply let me know; it=92s not really a=
n issue, it=92s more out of curiosity.=0D=0A=0D=0A=A0=0D=0ABR,=0D=0A=0D=
=0ACarsten.=0D=0A=0D=0A
--=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev
Content-Type: multipart/related; 
 boundary="=_alYL0jHM0BaW0zWL1lZTEVpaoTHnF+q00E-M0oWd1EQsEuU+"

--=_alYL0jHM0BaW0zWL1lZTEVpaoTHnF+q00E-M0oWd1EQsEuU+
Content-Type: text/html; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-mi=
crosoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:wo=
rd" xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D=
"http://www.w3.org/TR/REC-html40"><head><META HTTP-EQUIV=3D"Content-Type"=
 CONTENT=3D"text/html; charset=3Dus-ascii"><meta name=3DGenerator content=
=3D"Microsoft Word 12 (filtered medium)"><!--[if !mso]><style>v\:* {behav=
ior:url(#default#VML);}=0D=0Ao\:* {behavior:url(#default#VML);}=0D=0Aw\:*=
 {behavior:url(#default#VML);}=0D=0A.shape {behavior:url(#default#VML);}=0D=
=0A</style><![endif]--><style><!--=0D=0A/* Font Definitions */=0D=0A@font=
-face=0D=0A=09{font-family:"Cambria Math";=0D=0A=09panose-1:2 4 5 3 5 4 6=
 3 2 4;}=0D=0A@font-face=0D=0A=09{font-family:Calibri;=0D=0A=09panose-1:2=
 15 5 2 2 2 4 3 2 4;}=0D=0A@font-face=0D=0A=09{font-family:Tahoma;=0D=0A=09=
panose-1:2 11 6 4 3 5 4 4 2 4;}=0D=0A@font-face=0D=0A=09{font-family:Cons=
olas;=0D=0A=09panose-1:2 11 6 9 2 2 4 3 2 4;}=0D=0A/* Style Definitions *=
/=0D=0Ap.MsoNormal, li.MsoNormal, div.MsoNormal=0D=0A=09{margin:0cm;=0D=0A=
=09margin-bottom:.0001pt;=0D=0A=09font-size:11.0pt;=0D=0A=09font-family:"=
Calibri","sans-serif";}=0D=0Aa:link, span.MsoHyperlink=0D=0A=09{mso-style=
-priority:99;=0D=0A=09color:blue;=0D=0A=09text-decoration:underline;}=0D=0A=
a:visited, span.MsoHyperlinkFollowed=0D=0A=09{mso-style-priority:99;=0D=0A=
=09color:purple;=0D=0A=09text-decoration:underline;}=0D=0Ap.MsoPlainText,=
 li.MsoPlainText, div.MsoPlainText=0D=0A=09{mso-style-priority:99;=0D=0A=09=
mso-style-link:"Nur Text Zchn";=0D=0A=09margin:0cm;=0D=0A=09margin-bottom=
:.0001pt;=0D=0A=09font-size:10.5pt;=0D=0A=09font-family:Consolas;}=0D=0Ap=
=2EMsoAcetate, li.MsoAcetate, div.MsoAcetate=0D=0A=09{mso-style-priority:=
99;=0D=0A=09mso-style-link:"Sprechblasentext Zchn";=0D=0A=09margin:0cm;=0D=
=0A=09margin-bottom:.0001pt;=0D=0A=09font-size:8.0pt;=0D=0A=09font-family=
:"Tahoma","sans-serif";}=0D=0Aspan.NurTextZchn=0D=0A=09{mso-style-name:"N=
ur Text Zchn";=0D=0A=09mso-style-priority:99;=0D=0A=09mso-style-link:"Nur=
 Text";=0D=0A=09font-family:Consolas;}=0D=0Aspan.SprechblasentextZchn=0D=0A=
=09{mso-style-name:"Sprechblasentext Zchn";=0D=0A=09mso-style-priority:99=
;=0D=0A=09mso-style-link:Sprechblasentext;=0D=0A=09font-family:"Tahoma","=
sans-serif";}=0D=0A.MsoChpDefault=0D=0A=09{mso-style-type:export-only;}=0D=
=0A@page WordSection1=0D=0A=09{size:612.0pt 792.0pt;=0D=0A=09margin:70.85=
pt 70.85pt 2.0cm 70.85pt;}=0D=0Adiv.WordSection1=0D=0A=09{page:WordSectio=
n1;}=0D=0A--></style><!--[if gte mso 9]><xml>=0D=0A<o:shapedefaults v:ext=
=3D"edit" spidmax=3D"2050" />=0D=0A</xml><![endif]--><!--[if gte mso 9]><=
xml>=0D=0A<o:shapelayout v:ext=3D"edit">=0D=0A<o:idmap v:ext=3D"edit" dat=
a=3D"1" />=0D=0A</o:shapelayout></xml><![endif]--></head><body lang=3DDE =
link=3Dblue vlink=3Dpurple><div class=3DWordSection1><p class=3DMsoPlainT=
ext>Hi Konrad,<o:p></o:p></p><p class=3DMsoPlainText><span style=3D'color=
:black'><o:p>&nbsp;</o:p></span></p><p class=3DMsoPlainText><span lang=3D=
EN-US style=3D'color:black'>Yes, me again, the guy with the double bounce =
buffering problem with the DVB card when using more than <o:p></o:p></spa=
n></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color:black'>4G=
B of RAM and PVOPS.<o:p></o:p></span></p><p class=3DMsoPlainText><span la=
ng=3DEN-US style=3D'color:black'><o:p>&nbsp;</o:p></span></p><p class=3DM=
soPlainText><span lang=3DEN-US style=3D'color:black'>If you are still int=
erested: guess what happened between the 3<sup>rd</sup> and the 4<sup>th<=
/sup> on riker=3F Yes,<o:p></o:p></span></p><p class=3DMsoPlainText><spa=
n lang=3DEN-US style=3D'color:black'>I switched to PVOPS ;o).<o:p></o:p><=
/span></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color:black=
'><o:p>&nbsp;</o:p></span></p><p class=3DMsoPlainText><a href=3D"https://=
data.pcip.de/munin/space.zz/data.space.zz/xen_new.html"><span style=3D'co=
lor:black;text-decoration:none'><img border=3D0 width=3D497 height=3D400 =
id=3D"Bild_x0020_1" src=3D"cid:image001.png@01CDD33D.15ECD200" alt=3D"Xen=
 Domain Utilization"></span></a><span lang=3DEN-US style=3D'color:black'>=
<o:p></o:p></span></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D=
'color:black'><o:p>&nbsp;</o:p></span></p><p class=3DMsoPlainText><span l=
ang=3DEN-US style=3D'color:black'>So, I am currently on Xen 4.1 with 3.6.=
7 PVOPS Kernel for Dom0 and riker. Before the increase, I was using<o:p><=
/o:p></span></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color=
:black'>an old-style kernel, no other changes in Domain.<o:p></o:p></span=
></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color:black'><o:=
p>&nbsp;</o:p></span></p><p class=3DMsoPlainText><span lang=3DEN-US style=
=3D'color:black'>Compared to last time we checked on it, I switched mainb=
oard, CPU (AMD-&gt;Intel), and now have 16GB. <o:p></o:p></span></p><p cl=
ass=3DMsoPlainText><span lang=3DEN-US style=3D'color:black'><o:p>&nbsp;</=
o:p></span></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color:=
black'>So just in case, we would like to check on it further, please feel=
 free to advise or patch a kernel and I <o:p></o:p></span></p><p class=3D=
MsoPlainText><span lang=3DEN-US style=3D'color:black'>am more than happy to=
 try it out.<o:p></o:p></span></p><p class=3DMsoPlainText><span lang=3DEN-=
US style=3D'color:black'><o:p>&nbsp;</o:p></span></p><p class=3DMsoPlainT=
ext><span lang=3DEN-US style=3D'color:black'>I use the PVOPS kernel now b=
ecause of some glitches when recording and I want to test whether they pe=
rsist<o:p></o:p></span></p><p class=3DMsoPlainText><span lang=3DEN-US sty=
le=3D'color:black'>on PVOPS. I am going to switch to Xen 4.2 soon, becaus=
e I want to modify the scheduler with the new parameters,<o:p></o:p></spa=
n></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color:black'>as=
 I guess the Domain doesn&#8217;t get enough attention; there are 3 PCI c=
ards for it&#8230;<o:p></o:p></span></p><p class=3DMsoPlainText><span lan=
g=3DEN-US style=3D'color:black'><o:p>&nbsp;</o:p></span></p><p class=3DMs=
oPlainText><span lang=3DEN-US style=3D'color:black'>If this is not intere=
sting, please simply let me know, it&#8217;s not really an issue, it&#821=
7;s more for curiosity.<o:p></o:p></span></p><p class=3DMsoPlainText><spa=
n lang=3DEN-US style=3D'color:black'><o:p>&nbsp;</o:p></span></p><p class=
=3DMsoPlainText><span lang=3DEN-US style=3D'color:black'>BR,<o:p></o:p></=
span></p><p class=3DMsoPlainText><span lang=3DEN-US style=3D'color:black'=
>Carsten.<o:p></o:p></span></p></div></body></html>
--=_alYL0jHM0BaW0zWL1lZTEVpaoTHnF+q00E-M0oWd1EQsEuU+
Content-Type: image/png
Content-Id: <image001.png@01CDD33D.15ECD200>
Content-Disposition: inline
Content-Transfer-Encoding: base64

iVBORw0KGgoAAAANSUhEUgAAAfEAAAGQCAIAAAAx+M7VAAAABmJLR0QA/wD/AP+gvaeTAAAg
AElEQVR4nOyde1xU1fr/nwHN66CBMEKIhoHJaJqolZmIR8O8/MxzTLopanJRwdQs9aRcxAtW
aCpaIJrG1zI1rfAYfPUgdvJbKngjrNEkBSREMWE83kDm98caNpu99+xZDLNn7xme94sXr2Gx
5tmfWXvN2s+sWeuzVadPnwYEQRDE/qmurm4ltwYEQRDECty6deuPP/4wjulPPPGEvGoQBEGQ
5rBnzx4AcJJbBoIgCGI1cExHEARxHHBMRxAEcRzwO1IEQRD7ICsri1MyevRoTgnm6YjDolar
5ZbQBJSgVgkaEBFGjx49evTokJAQEBrNCTimI/Ig3fDBj2yVY+F4hyiBhw8f6nQ6lUpVXV3t
7OzMr4Bjup0RFRX19ttvs0vmzp07a9as5sRUq9VqtdrFxcXLy+v555+PjY29ceNG82SaR6/X
01cWHE+ZQs5/+ZGbdCxTB7UsCIJYkZKSkh9++OHOnTtPPfXU8ePHu3fvzq+D8+l2xtq1a4OD
g/fs2fPKK68AwO7du0+cOHHkyJFmhiUD1p07dy5evLhz584hQ4YcPnzYx8fHCooRBLES5eXl
gYGBLi4uAODp6SlYB/N0O6N9+/YZGRlLliy5ePHixYsX33///f/5n/9p165dXV3dBx98oNVq
u3XrNmvWrDt37pD6arV669atWq3W3d19xIgR58+fFw/er1+/Dz74YOrUqYmJiaTw/v37ixYt
6tmzZ8+ePRctWnT//n0mcnp6ep8+fdzc3AYPHvzTTz/t3Lmzf//+5EAXLlwg1X7//fc333zT
x8fHy8vr9ddfr6ysZJ5ugUI+JA75qMGJzKnDVGMQUSgSVqRBmvNCAODjjz/29fX18fF55513
Hjx4AAAhISF79+5lKpSUlPj5+VVXVzMlTz311K+//koe79y5kzz49ddfn3rqKQAw1StMlTOc
Pn26V69emzdvbupLQCRl0KBBZEAXAcd0+8Pf33/lypVTpkyZMmVKUlIS2QO8adOm//znPwcP
HiwoKKitrV2xYgVT//Dhw99///2VK1defPFFzryNKcLCwpjc/8MPP/z1119/+OGHH3744Zdf
fvnoo4+YallZWZmZmSUlJZMnT/773//+3XffffPNN1euXBkzZgxzoClTpkREROh0ut9++83L
yysuLo5/OAsUMpBPGHq9nmZuRF/Pxo0bma+YBBWKhBVpkOa8EADIycn58ccff/7554sXLyYn
JwPAu+++m5SUVFdXRyokJSXNnj2b/a4eOXLksWPHAODq1asLFy68ffs2APz444+jRo0C071C
pLcAQFZW1t///ve1a9fOnj27qS8BkZQsHvw6KuLhhd4AdsewYcOcnZ2ZkTcwMHDXrl1+fn4A
UFFRERwcXFhYCABqtfr333/XaDQAcOfOnR49elRUVHBCqdVqzshVU1Oj0Whu3rwJAH379t2z
Z8+TTz4JAOfPn3/11VfPnTtHnnXx4sWuXbuSyBqNxuyB9Hr9oEGDfvvtN/ZBLVPIicD+L7+c
U+HIkSPLli3Lysrq2LEjjUJOWJEGMftCRFCr1SdPniRhf/3119DQUBJ2+PDhs2fPnjx58qVL
l8aOHXvq1Kn27dszzzp48ODu3bu3b9++bt26DRs2xMfHh4WFhYWFvfrqqy+99JKpXiHSW5KT
kz/66KMvv/wyMDCQXjxiSwwGQ3Z29ujRo7OystirX9AbwI7ZuXOnwWCora396quvSElJScmA
AQPIXEHPnj1LS0uZymSUAYD27dvfvXuXJn55ebmbmxvzuEePHuSxr6/vn3/+yVQjAzqJbOpA
p06dGjduXLdu3dRqtZeXV3l5Of9wZhW2bt2ameIg3Lt3z8nJkt7722+/zZ8//4svvmAGdBqF
bEQaxOwL4Uz7cGDCPv7440xYkqo/fPhw5cqV8+bNYw/oABAUFHTixAkA+Oqrrz799NOMjAwA
OHnyZFBQEJjuFSK9ZePGja+//joO6IoF1704IL/99ltCQkJGRsbnn3++bNkyMnPt7e1dWFjI
zC1UVVU15xA7duwIDg4mj7t27Xr58mXyuKioyNQ3M6aYNm3a66+/fu7cuVu3bhUXFz98+NAC
Pb6+vgUFBeySgoICb29v8lilUlHGuX79+uuvv/7pp5+yv/41pdBU2OY0CHOCBP/LhL18+TIT
dsyYMY888khcXNzx48dnzJjBeUqHDh169Oixb9++tm3bhoSE1NbW/utf/3r88cfJ0G+qV4j0
lqysrP37969fv57+RSE2g2bdC47pdsadO3emTp26du3aHj16PP744x9++OHUqVPv3r07c+bM
6OhonU734MGDwsLCadOmWRb87NmzixYt+vzzz5cuXUoKX3nllffee+/q1atXr1597733yHqb
JsV0cXFp3759SUlJTEyMBaoAIDIyMjo6+scffyRj0H/+85/o6OiIiAjyXzc3N51ORxMnNDT0
nXfeefbZZ2kUmgrbzAYRYfHixWVlZWVlZYsXLw4NDSWFKpXq3XffXb9+/aJFix555BH+s0aN
GrVkyZJXX32VeYFkMh0ATPUKkd7y2GOPZWVl7dixg0zoI4qCrHsZMGCAp6fnqFGj/P39+XVw
TLczFixYMHLkyHHjxpE/J0yYMGzYsPnz50dFRY0ZM+b111/38vJ66623mjrQkPXpvr6+s2bN
euSRR44dO8Zksu+9916vXr2GDRs2bNiw3r17v/vuu02KvGnTptjYWE9PzzFjxgwdOrRJz2WY
OXNmTEzM+++//+STTz755JNLly59++23mS8hFyxY8Le//Y1mT9DJkyejoqI4EyCmFJoK28wG
EWH48OFDhw595plnHn/88YULFzLlzs7OPXv2fP311wWf9be//a2iomLSpEkAMGnSpGvXrv3t
b38j/zLVK8R7i6en5/fff//ll1+uWbPGWi8NsQo0617wO1IEUTqTJ09+5ZVXrPiBALFTxP1e
yHekuOcIQZRLXV3djh07iouL//GPf8itBVEE7EFccC0jjukIolw6derk4+OTkZFh2SIfxMF4
5JFHHjx4QL5WuX//vuD3KzimI4hyQZMZhI2rq+vFixfJV6O///47s+CYDY7pCIIg9kHv3r0L
CwuPHj0KAK6urlqtll8Hx3QEQRD7oE2bNgMGDGD+5OwjJeAkHYIgiOOAeTqCIIh9ILjQhYOD
jOl//Oc/KpWqh6VbWhAEQZQPZ6bFYdcy1ty9ezghQeXsPO2771q1aSO3HARBENlwhPn0k9u2
Vf/5Z1Vp6cmtW+XWgiAIYiMEbzNt93n6X5cvn0hPJ49PbN0a8P/+X6d6uz4EQRBH4t69e7/+
+iu5FZebm1vv3r3btm3LqWPrPJ3vH71///5BgwZ5eHgEBQWRO7ZUVFRMmDDB09NzwoQJ5K4C
/BKG3A8+ePjgAXlce+9ezurVNnw1CIIgtqOgoKBDhw5BQUFBQUHt27fnGFATbD2m882j9+/f
TxwtZsyYMX36dACIjY3VarU6nS4gIIDcSIxfwjBx8+Z3CgurJ016p7Bw7t69EzdtEhdg9o4H
9NUoQ9XU3y6y+aFQFX0oVEUfClXRh7KiKgu4deuWr69v69atW7du3bNnz1u3bvHryD+f/vnn
nwcEBLRt23bYsGHkc0ROTk50dLSLi0tMTExOTo5giSD3fvrJ7OGuXbtGo4qmGmUoVEUfClXR
h0JV9KGUqcoCOnfuXFRUVFNTU1NTc+nSpc6dO/PryD+mE65duxYWFpaUlAQAlZWV7u7uAODu
7n7jxg3BEoaHlZX6bdvqbt2CvDz311+HvLyOHTuK/B44cKDZOh07dtTevWuVOqgKVaEqVMX5
3fbePf22bTSfHjj07dv39u3bR48ePXr06H//+9++ffsKVDp9+vTp06f1tgVYN/HS6/W5ubk+
Pj7p6enkT09PT51Op9frdTqdl5eXYAmHBQsWGAwG/ZEjBnNcunTJbB3KapShUBV9KFRFHwpV
0YdSoCopxtVt27Zt27ZNnntisO/Ivn379sTExC1btowYMYKUREVFubq6Ll68OCkp6ebNm59+
+im/hBMwLi4Ob7WFIIi9cPv2bQueJb7uhdwTQ551L+wHMTExZFkLKfnvf/+bkJBQUFDg7+9f
UFCQkJAAAPwSQW7n5poVUFRURKOTphplKFRFHwpV0YdCVfShlKnKAmjWvTjIveswT0cQxI6w
LE8/dOhQcHBwq1atAKC2tvbIkSPM/cRBrjxdUpR5NUZV9KFQFX0oVEUfSpmqLIBm3Qvm6QiC
ILbG4vn08+fP37x5EwBcXV3JKnDmv5inN7eaXecIqIo+FKqiD4Wq6ENZQNu2bQcMGDBy5MiR
I0cOGDCAbwwAmKcjCILYHsvydL65LtvGC/P05laz6xwBVdGHQlX0oVAVfSjLGN0YfgXM0xEE
QWyNxXm64DhOwDy9udXsOkdAVfShUBV9KFRFH0oibJ2nMy67zD5SfklFRUV4ePiJEycGDx68
ZcsWDw8PfgknLObpCILYEZbl6eLIk6czfi8iJU3y2mWjzKsxqqIPharoQ6Eq+lDKVCUR8vu9
8Ev8/f1zc3O9vLzKysqCg4N1Oh2/hBMQ83QEQewIC/L0o0ePuru7d+nSxc3NzdnZmV9BufPp
lnnt3s7NJT9Qf1kW/F1UVGS2DgAU7d5tlTqoClWhKlTF+V1744YFXrsDBgxo167d5cuXc3Jy
Tpw48ccff3AyYwLm6QiCILamOfPptbW1N2/evHHjxvXr14OCgphy5ebpI0aMSElJqa6uTklJ
CQ4OFiwRhFwGxcG5PAKqog+FquhDoSr6UBbTqlUrDw+PgIAA9oDOINu6F4Jer+eXXLt2bebM
mSdPnhw0aFB6erpGo+GXcMJino4giB3haOteGARLNBpNZmZmeXl5ZmYmGb75JYIo82qMquhD
oSr6UKiKPpQyVUkE7iNFEASxNVbJ0znbSpU7n24xyrwaoyr6UKiKPhSqog+lTFUSgXk6giCI
rUFfRiqUeTVGVfShUBV9KFRFH0qZqizArCkjYJ6OIAhie3A+nQplXo1RFX0oVEUfClXRh1Km
quajCP90y1wY0ZcRkQLVS7GG75fLrQJpiVh8P9Jff/21srISANzc3Hr37i3//Ugtc2FEX0b6
aqiKvprWvTVNKGwr+lCoij6UBRQUFHTo0CEoKCgoKKh9+/YFBQX8OvL7vdC4u6DfCyIFmKcj
cmFZnn7o0KHg4OBWrVoBQG1t7ZEjR0aNGsX8Vynz6TQujOjLiKqkUDWnU7UCVSmzrVCVFVVZ
5ssIAJ07dy4qKqqpqampqbl06VLnzp35dTBPR1oumKcjcmHxfPr58+dv3rwJAK6urgEBAfLP
p/OhcWFEX0b6aqiKvhrOp9NXQ1X01aSbT2/btu2AAQNGjhw5cuTIAQMGsAd0Bvl9GWlcGNGX
EZEC1ccqwzyD3CqQlojj7CO1zIURfRnpq6Eq+mpa0NKEwraiD4Wq6ENZBhnEcR8pggiAeToi
F5bl6f/7v//77LPP/vTTT0OGDAGA48ePjxw5kvmvUubTrYgyr8aoij4U5un0oVAVfShlqrKA
bt26HT9+PCAg4NSpUz///LO/vz+/DubpSAtGpQID5umIDDjOfY4kRZlXY1RFH8rWqrSYp9NW
Q1X01aTL07N48Otgno60YDBPR2QC83QqlHk1RlX0oTBPpw+FquhDKVOVRGCejrRgME9HZMKR
8/QjR44MGjSoa9eukyZNInteKyoqJkyY4OnpOWHChIqKCsESQZR5NUZV9KEwT6cPharoQylT
lUTIn6c/+eST69evHz58eG5u7oEDBzZu3BgVFeXq6rp48eLVq1ffunXrk08+4ZdwgmCejlgC
5umITDiOfzqfhw8fAoBKpQKAQ4cOAUBOTk50dLSLi0tMTExOTo5giSDKvBqjKvpQmKfTh0JV
9KGUqcoCaPzT5R/TU1JS3nvvvW7duh09epRcfyz22u04fLhZt0xfX18aR02P4mKr1EFVilZV
VaVEVcpsK1RlPVUWe+3eunXL19e3devWrVu37tmz561btwQqnT59+vTp03oF8OWXX/bq1Uuv
13t6eup0Or1er9PpvLy8BEs4LFiwwGAw6I8cMZjj0qVLZutQVqMMharoQ9lalVZLEwrbij4U
qqKsY9kgmZOTk5+ff/PmzZs3b+bl5eXk5LD/u23btm3btsk/nw4AdXV1Z8+enT59elQ9ZPY8
KSnp5s2bn376Kb+EEwHn0xFLwPl0e8NhLO8d2T9drVY/+uijYWFhM2bMiIiIAICEhISCggJ/
f/+CgoKEhATBEkHIRxtxcC6PgKoA59ObUg1V0VeTbj5dif7pEoF5OmIJmKfbGy08T1eif7qk
KPNqjKroQ2GeTh8KVdGHUqYqy0D/dAQxDebp9gbm6aNHjya/mT+Z/2Ke3txqdp0joCrAPL0p
1VAVfbWWvo/UKmCejlgC5un2hsPcmsoqfi+YpwNgjlAPqgLM05tSDVXRV8M83Qpgno5YAubp
9kYLz9PtY91LVlZWYGBgly5dAgMDiWL0ZWxONVRFXw3zdPpqClFlmK9EVZaFsoDRjRGsI3+e
3qNHj7S0tKCgoNzc3KioqD/++AN9GREbgXm63eEop8yR59M9PT2h3pfRy8sL0JexedVQFX01
zNPpq6Eq+mo2m08XTNXlH9M3btw4ffp0Nze3GTNmrF+/HtCXEVVZSZXqpVgzddCXEVXJocpi
X0aae0zL78vYs2fPr7/++saNG3v37n3iiSf06MvYvGqoiqkGo5eJ19ms3Wx7VdYK1UJVAShR
VdPrSDGWKsWX0dfX99NPPx0+fPiRI0dmz5596dIl9GVErILZPYdpqrQIQ4TN9CBWoGXPp9vH
upeNGzcuWrTIy8tr8eLFKSkpgL6MzauGquirPdQ+pAmFbUUfClXRh7KA0aNHjxgxQq1WOzs7
Dxw4UHA+Xf483Spgno7wwTzdAWnZefqDBw9Onjzp4eHh7u6en58/aNAgFxcX5r9KydOtiDKv
xqiKPhTm6fShUBV9KGWqsoATJ05oNBo/P7/OnTtrtdr8/Hx+HczTEYfF7J5DzNPtj5adp//+
++/ssfrKlSvdu3dn/jSTp+fn5z/33HOdOnUCgBkzZuzfv98CBTZGmVdjVEUfCvN0+lAtVpXq
pVgFqrIglAX8/vvv7IWMvwqthjSZpz/77LMrVqyYOHGiXq8vLS0dPXr0L7/8IpHQ5oN5OsIH
83QHRKVSjV7mABbqMvinX758+YUXXiCP27RpU11dbYECG6PMqzGqog9lXVUcbxA+mKfTV0NV
9NVsto9UEJNj+tChQ/fu3QsAf/7553vvvffiiy/aUJWFdBw+3GwdX19fmlA01ShDoSr6UDZW
5VzoTBMK24o+FKqiDyURJsf0zZs3Hz582NXV9ZlnnnF2dv7www9tKcsylHk1RlX0oXA+nT4U
qqIPpUxVFsDcjJT9Jwf5172o1Wrmsaur65UrVyoqKsLDw0+cODF48OAtW7Z4eHjwSzhBcD4d
EcDcGgmcT7c/Wvx8OqdEiftIGbOCf//732+99RYAxMbGarVanU4XEBAQFxcnWCKIMq/GqIo+
FObp9KFarqqQRCWqanooCyC26SEhIWAiSQd+ns7Omjno9XoJRDYwadKklJSUrl27+vv75+bm
enl5lZWVBQcH63Q6fgnnuZinIwJgnu54qFSqdeAAtzqy2D/94cOHFy9evHLlynPPPXf8+PFR
o0Yx/xLO00VMvyxWT8PPP//s6uratWtXaIbXLvkBUZfLoqIiGkfNot27rVIHVSlZleaZEgWq
UmZbkToTx8bIrurlcm+7aCuROhZ77ZaUlPzwww937tx56qmnjh8/zt5wxCD/fDph/PjxK1as
6NevHwBgno5YB8zTrY1ZCx3pFagAwAG2klqWp588ebJXr15sjxc2wnm62jQWKKDk2LFjtbW1
ZEAHgBEjRqSkpFRXV6ekpAQHBwuWCEIug+I45gxj00OhKsD59KZUQ1X01aSbT+eYdgmiiDx9
zJgxs2fPHjduHPnz2rVrM2fOPHny5KBBg9LT0zUaDb+EEwHzdEQAzNOtDebp1sIq9yPloJR1
LwBw8OBBZkAHAI1Gk5mZWV5enpmZSYZvfokgyrwaoyr6UJin04eSSxXbbkU5qsRRpiqJEMjT
1Wq1Xq/nT7ZI/TVpc8A8HREA83Rro/pYBdmyLg/HPN00JvN0MnbbeN2LVVDm1RhV0YfCPJ0+
VMtUlQapClRlWSiJUMR8evPBPB0RAPN0ayN7np6mSouASMzTBTEzn56dnf3000936tTJBute
rIUyr8aoij4U5un0oWRRZdbqEtuKPpREmMzT/fz8Pvzww/Hjxzs7U3nXyQvm6YgAmKdbHbnt
VjBPF8FMnn7v3r0XX3zRLgZ0BmVejVEVfSjM0+lDyaaKZbeiIFWiKFOVRJjM05OSkurq6hYs
WNC2bVs5hDUNzNMRATBPtzpy261gni6CmTw9ICBg7dq17u7uUs+n19XVrV+/vl+/fmTuHgAq
KiomTJjg6ek5YcKEiooKwRJBlHk1RlX0oTBPpw+FquhDKVOVRJjM0319fT/++OOxY8dKPf2y
adOmtLS0zz77rH///k5OTgAQFRXl6uq6ePHi1atX37p165NPPuGXcIJgno4IgHm61ZF7eTjm
6SKYydOdnZ1Hjhxpg/n0rVu3rlixYsCAAWRAB4CcnJzo6GgXF5eYmJicnBzBEgb0ZURVFh/x
GvoyNlWVt7d4TVuo8m65vow0yD+f3qVLl6lTp+7cudPT0/PDDz8cNWqUm5tbeXl569ata2pq
unbtWllZyS/hBME8HRHAXJ4+UJWWh3l6k8A83UrIkKevXLly9erVNphPd3V1DQkJKS4uXrNm
zZw5cwDAzc3t+vXrAHD9+vUuXboIlghCLoPi4FweQQmqVC/FcsxDrKsq39wb3xvn06mroSr6
avLOp5sc023mDTBy5EgAUKlUAECmXyz22lXm3cFRFX0o66rKV6WK1yktpJpabAltZa1QqIo+
lESYHNPNuvRai4SEhLS0tG7dur3//vvky8+EhISCggJ/f/+CgoKEhATBEkGUeTVGVfw6prz9
bKxqd/YGmlB4BulDoSr6UBJhcj69V69ex44dE5noUBQ4n25fEBtuMrJLtynR/LKWfBUE2v3M
rE3B+XQrIcN8ekxMTEJCQlVVldUPLB3KvBqjKvpQtlZVqaUJhW1FHwpV0YeSCJN5OvqnI9Jh
zNM/VgFIuCmx+Xm6/Lf1URqYp1sJGfJ09E+3VihURR8K83T6UKiKPpQyVUkE+qcjMqD6WGWY
Z5A66cM83fpgnm4lZMjT8/Pzn3vuuU6dOgHAjBkz9u/fb3UFVkeZV2NURR8K83T6UKiKPpQy
VUmEyTz92WefXbFixcSJE/V6fWlp6ejRo3/55Rc5FFKBebqdQXZ4Yp5udyggTw80RAYC5ukC
mMnTL1++/MILL5DHbdq0qa6utroCgpoFKUFfxuZUQ1X01TBPp6+mHFXsrWTKUWVBKIkwmadP
mjRp4sSJUVFRFy5c+Oc//+ns7Jyeni6FArVazfkCFn0ZHR9yu5ysRABl5+lk3h9hUECeDgAO
4KYpQ56+efPmw4cPu7q6PvPMM87Ozh9++KHVFTB4e3v7+Pi88cYbV69ehSb6MrJR5tUYVdGH
UlqeTm6/iW3FqWNqG7C8qkRQpiqJMDmme3h4fPbZZ1euXCkuLk5PT3/00UclUkDm6/Pz87t3
7x4ZGQkAlZWV7u7uAODu7n7jxg3BEga2127H4cPNumX6+vrSOGp6FBdbpQ6qMlXnZae/ALje
rbZWdbXKLtpKQaq8vRvOnUyqPL2v2kdbma4jqdcunD59+vTp0/zV6LanrKysQ4cOer3e09NT
p9Pp9XqdTufl5SVYwmHBggUGg0F/5IjBHJcuXTJbh7IaZShUJVAHAEYvMwAYAKRTlQqpZupk
a81EAbC6KmuFkkcVgPHcyaQqFVLZp1XRbSUuSQK2bdu2bds2gTz91KlTQ4cO9fDwGDp06Jkz
ZyS5kvCoqqpKTk7u168foC9j86qhKvpqvm6FNKGwrZg6aWDG6hLbij6URAiM6fPmzXvjjTcu
X778xhtvzJs3T2oFZMVLQEDAL7/8kpaWBujL2LxqdqNK6PbztlaF616oq6Eq+mqKW/fi5ub2
559/PvLII/fv3/fy8uLfVEiB4LoXO4MsnyAoeN2L2TsltTTINk7VOgldeswKAFz3YgKT614e
PHjwyCOPAECbNm0ePHhg9QNLhzKvxqiKPhTm6fShUBV9KGWqkgiBPN3Uber0Crbxwjydg9I3
QGKebp8Y7VZAniXqqpdiU7O8AfN0E5jM0019qWp1BVZHmVdjVEUfCvN0+lByqRL/mhTbij6U
RKAvo2OCeTpgni4BxulsmZwRMU8Xx8w+UntEmVdjVEUfyrqq0szWwTyduhqqoq+GeboVwDyd
A+bpADBQlZZnt3m6Ms8g5unWAvN0KpR5NUZV9KFwPp3UETFUkVEVTShURR9KIpQypq9atar5
XrvK3C0mj6qQRPFxAdsKcB9pU6opQhVvn5q8qky9xRS3j9T2nDp16rPPPmP+jI2N1Wq1Op0u
ICAgLi5OsEQQZV6NURV9KOuqysuLNFNHqXk6TSipValeimWPWU1SJZ5POF5bCdah+bAlBfKP
6ffu3Zs9e/bWrVuZEou9dlti5mJpqJamSvANJpinc8YySVUJHlf2tmpOKElVGeZDmrk6tldl
cSiJkH9Mj4uLe/PNN4cNG8aUWOy1S35A1OWyqKiIxlGzaPduq9SRS9XL5Q0ethPHxtheleql
WDN1iMtuY69dW7fV/z3DbiVGM8dL1gZtRc4ROa7xDPb7TDyaDdqK3Q6kjtHntvFZ46gir0Ww
1zVXlbc3sLx2VS/F0pzliWNjJGqrl53+4vTzRu9BVuuR3sW0iaReu/Kve0NrUIkAACAASURB
VHFxcTGwvkPX6/X+/v65ubleXl5lZWXBwcE6nY5fwgmC6144qD5WQfYysnBClhUUZg6qUgFA
vgEAQMJ7S7KWtRA9XFWNK7Cfaqwm8boXRhXnuKqPVQCyeaoQiCrOSaRZ98K8HOv3OpVqIKRG
1K97ET6nQnqsroR/yoDXYuzj8tvEkde9VFdXM/tUyW+LvXbJJVGcFjKXZ5hv/ELJ1KSeEtoq
X5XKvrek1Ko4TaF6KZaZTxef+lRCW/FRvipTrUo53cw5Iv/0WaaK5nAi2pq5cskGk+zyj+l8
LPbaVeasme1V0fQnSVXxBXDq5JtI8lrguhdTbWWYb+b9L3lbhSSy15lY1tuFv8ZgdVGR18gc
kV+HKek5ZzvUfxUhEqo5bcWJ3KiakF80o4rRKXg1kg4FjemMpYxGo8nMzCwvL8/MzNRoNIIl
gig/c2lmqCapiq+KV+x9IznpuWyqFLXupX6AKCoqiq+KV4qqJoYyqjIx2JkKxRmUyWOadfpa
99biccgDMpHNKeSrYv9LcCBWvRTbZyp3lTBn4Fa9FGtUJfpBWTrkn0+3Cjifzkb1UqwhKxEA
VKOXAQCEJJKZWZtNrKteimUOKgix9yNuUBLuCWRPl3+sguxlwJnuXJGoWrrMOKMdkkgqGMcj
8m2EtefTmVPAfavXD4LGRlOpAEA1epmcW0nJXt/GL59yPt2QlWjse1bFkJVI5tMjR5daPbgF
mHmZTI/iP/H75Y48n25FFJ25WCMUqqIPZUme3ni2QQpVnLyy4bjkK5CmhFLuGSRtaDpVF24E
C6qFJFKFCkkkS1CaezjBakIvUyAU6VqiH1+shUON6TifLlAnJJEZLGypin1Q1Uuximgr1puK
JOa+PxUyjxnBhvkN4sm8vxVVFV6v4UqqPyjzWxFtZWkoGlXcRmhGtcLrNTQD5Td1j1pZleAA
XV9I6gi+72yAQ43pys1crBSKVpWpXs4yDJBWFU8AZSj2vKd1VQm+wYpaaTl1ONUiVakgZZ5O
5s0517+GUKKjlWJ7e8PLMaGfPd0sgpmMOCRRoI4JmpynU3/IEOxXWvfWIu9Bs0qaiUON6fab
uUilynQHUmZb0eRTzdmFyF5MYpgPvrW2XvdSeL2GPZMeFx9P42HCfO9ns5VLFodiVBnmQ3yV
wKsDTnJtuos2JM6N66RBw2xV4YBY47Jd0ZmNZuXpjSOLpPPG8T0ksXBA/WkSnIaSeFh3qDFd
sZmLtULRVOsztVGPIZkgGRTYOYWkqvjJi0go9jhFk09Z8wzW5+kC2RbrjWfNPD1kHz8+B+YM
clRxhnXb9CvmcBb09rj4eME6DakuM+RxBuWQRAhJ1IbsExn6SeNoQegbEd6zyKZc7kH5qtiF
rFDMbg+2Ku7HEVZhI1UhiQAQXxUfXxVvm9mYVrY4iChff/31ypUrS0tLe/fuvXLlyqFDh1ZU
VISHh584cWLw4MFbtmzx8PDglwiGUnjm0vxQNNXYeYRhPgDEK0EVTR3VS7EAVsvTe87Zblhh
LlRtIYQUmlqcwGDNPB0KAf5u/CMkEbIELieUE7vKOYPs9VQcVYb5oFrHHT0LuRVYf7MzYlJR
6OIXXxVPOvYv8wuhfiQlceofNxrWv+laCl0bj9e8yIWsf7FVccZuoop91viv8ZWqV36JL2RX
YL8NDfMB5vFfk9WQP0/Pzs7+5ptvrly5MmfOnOnTpwP6Mjavmta9tXg6QDIv2duKc1tLosqK
eTp73tPkkmSSp5tOmYmzY3PainPoS+nmV8RTzhHLfgYJnBfIV2WchKknvir+UrqWFDLlZFqM
80OqMTVJryZnhPkEUKTVsg/E9HwSmfm6Qp/sDY2/vWB+kweMKnY1dkAig1El8hoN82HKnj3M
Y35NqVHK+vT79+9nZWUlJSX99NNP6PdiMUbHiSzuIEVW0TLLaSVf9cxa2sxZq05yOrLMmdBo
rXFIImNT01wJL8UaViSS9enMgn1gr9mfDxAPqk4Apt5pBoP5GyHRyGC/HJWKORfC9Tn/ZdqQ
rUv6RevsLQ7sw/HXp3OcTNhNLYmwPAgcmJoHZlyUlY7B4ODr09VqdZcuXaKjozds2AD25sto
1oVOOlWCnnBzOlUDNHY99PaG+hSYcftrqirBY4n755FnkeNGvzqPE4G465HfRm1Of+mTvTme
iBa3lbEd6v3zOO3w+R9zAaDo1jNMEsdvMUnO4DPPMMulBY7o7Q0hiewzaPRr7PeZ8VkhiYwX
oKS9vZFCVh1yvgqOeHNqMmftZae/hF8X63eRVmu2jrGthMqHsbwhTdXh/CY/kqpqUh0H92Uk
3L17NzMzc+XKlWfPnrWvPJ1tDiedA5xgcMESwQXpzFSjah1Ylgg37aXV5+nGJK5+Qx3zKhrl
6esimRlt8knCunk657OLsQXIn/Gmvm4AIClqfaZvwdGZx9w8nS2AR0J8fPzPD/mfKoyh5kNT
P2lxTpyg26Lwsxrn6eSJ5HagABA5upSzIZYxmMQ83TyOnacvWrSIpN5OTk4PHjwAZfsyst+r
SphP588Uc1cCNIYZ7m0/yw8mLI0aCEkkiUwzD0fikxUm/GOxL3ic9emmohHza+ZPtqkI85i4
hQiaSbHLi7QCs7Fs4uLjG9bGsDQ3zO3Wu4jQn0G2VEYSuxrxMOFXI4cjJfxZfk7NJnwjoqWy
2aGpRhmKql/ZXJVEyL/upU+fPkOGDKmuru7Tp8+OHTsAICEhYebMmf7+/oMGDUpPTxcsEcTq
KwEEcxzxUOw8iDzdxusTyEoADvxxxJaqDPNBNVpsVa9qtHGo6lhaCn0a/iX44YBjPCnwScX4
eYXVDiGJkCWknG59OlndzL88sEsYKz5xfAvNH7EQxOoYmwtAvSYH1uTQHBTMiRcvNKq6XgMA
aQAC7jwsYxOqHQYUjUBZjTJUx1Lz/jC2VyUR8ufpU6ZMuXDhQnl5+eHDhwcPHgyK92Xkr9gV
fMMwg4vVVYk5vYUkms8RQhLB2m0lNkBw9vsJLjKpX8NL8inBRJj54SzA5/yXKRf7vMI4IFLk
6WDBLkShYxmPqDWz0gbo1sY0VxV1NSYbEHmB7L1FmKfTh5II+cd0K9Kk3FNsZGySUznbQsSE
Ks5HVH5N5mMBNP6kzPmw3HPOdlNHYSrTZi50bdVwOKFhiFEFjcdWfs3CAbEiAxlZJRYXH9+x
tNTsLjtaXw7gLBAWoEl5ulVUNTnLk2hvpNlqvOMyddK4VRvtLfpmVIrZw2GeLinyz71YC7LE
Yv+/NopXKyoqYg/r7DuHNXzJ8/3yoqIi9qdpwUkYkrnEV8XHw0MRX82JY2PIVhr+5CNz0Eub
pjEjI6cmY/1KjkjeWuzvHhtNRMyHIq1WvEsZ5oMqO/Zlp78EBwV2g1zaNI11LJOf1hlVAtQP
DVrQMps1yNQBG2ZQuO3tbZhfyq/ARuxwLC6la+EF4XZocEBspfUVmehg5Z5mB1ABVczuFdbL
IWdHfEqdewb5E0chiZC9zEJVllbThuwrhELI4hrfN3otIYkvl3sDmBlAzXZR+mqUoW57e5sd
1m2vSiIcKk//pu5R8ewb6G4DpHopln2nkkbjLM/5gYxH7O+v+KrAdBbPTnXNEJLIeeMJ+mlQ
diZTw4GpOWKRYYjKPw8Kae7zQJNPmT9cSCIw7SBqA9KEPJ2/Sb3xgwYPE/4eRdbe96ZmeXxb
sUaqzGFUZWoLPlNtAKtvcza+1z+RXJLz8yIbClkwu+e/6arEjBjzdHuFZC7iwzp7AQaYHmqN
mQsn+27cj/nZTcMuYdazmplPGfc6jwbDfOgzpVG1uPj4+HXc+pQ5gm2yPGaP9aV0rW9hfPw6
gCySeJq4zxFFPqV1b104IBayl7E/wfAOmlhUIpqDAwBAn+e0v5iuY5gPkGfMPb8Bk6M5eaAF
bSE0DIumBmKrZHnxVfHxIfEvl3ubHUCNquolNezC5/Rk0BZyLki8r5SZT1p5eZFpA1MbKrNq
GuaTmWtpM+K8vEgY2LRQLSpPd6gxXdoZRgBgRu3sZeZD1b9tvjFnJ9JkVawrDX8qg3SmNEiN
EF3Da+W2Mj31ZPQELyyEepsOw3yBOVmC8Y0nOqXO8+XgVc6C+Kp438J4EP14AQCFFPcjNcwH
Mkip1jX4ihjmQ0J8fHyn+PiqePJBLSHeaPGRYJxHimdHSIiPr59fskKWFxcfHwcAUMpsOyBi
mAdAxv1O8cwiKL57CaOfBGRXUDVOFOqfUggAqjxQAaSa3vVug4xYBWAwV4cD5un2itS5J6cf
N9QheQ3Lkok90HBVCQ1/jT4WhDS+iVpW/bOyRMXXV7Zynk4yYlOEJEL2Mm3IvoYUlcDSTCCq
OEZ9ZGRnr40j+VSTppsFK8fFxxdpzefp2kotzSDLUUUe1A+s8aSQsfgQNCNk+5NYcY7YML+U
iR/X+AFAfBwrFNvkpPFT4skRDfPNNKkSMuI0SIXGmYoSVFkcSiIcakz/ZlSKWY89rikzewxl
yrOXCXzfxU6N1yUCQCF5rtDCZ/a7QrUupeFAvGjkz0L2f8FkotrgywysEbNx5fo8HUAkVQ9J
/Eb4H42mMhquWAACx6q/8NB0XmXmUzR5Oig1y0NV9KGUqUoi5P+OdP/+/YMGDfLw8AgKCjp2
7BgAVFRUTJgwwdPTc8KECRUVFYIlAhi/czeDyRtCNv76qMFMmf0VEydU/X4/fhLExqiqfgm2
8VuvxkckoUg5Ox/k5IZk5XKDmzM0qknqMGtjTc1vNFLFCs5vCm3IPu6/oNGf5JP7iv+uYAzt
+JoJylxHrK1UoipSJ9+cJYEy11yjKvpQEiG/38vUqVMXL17s6+v71VdfrVy58sKFC1FRUa6u
rosXL169evWtW7c++eQTfgknSFxcnIuLC3takAb+1KGpQuZf9sJASAUAYothdm5dCvINEKgy
+V9itEuuOhJ6d+QB82WaIKo8MIhWMBtBOmQ5a2ZR5QEApA6UR1sapEbmRQKYO2vKx7H9Xj7/
/POAgIC2bdsOGzasbdu2AJCTkxMdHe3i4hITE5OTkyNYwicuPv62tzfb+5jAJI/E45gklWzf
ZI5rM3k6493M+RcbJecIeXlm3nJSq8pXpZqtkxcfmRffSCfm6aQOx1xeFlVpkMqWoeTebhZl
qpII+cd0wrVr18LCwpKSksBir11v746lpe+kp9cvqAJ9srdhPryTng71J/Wd9PT3ly5l+16y
/8v+7VFVJVje1DpEldk6voWFND6fTVD1eUOJJ8ubVGZV9SVpAJ7eV1VlmRMvrZVOlSotU7yO
31XlnkHBs2ZjVWwNpM7L50z2KBuoMjozn7OT96DpOpJ67cLp06dPnz6tl5Xc3FwfH5/09HTy
p6enp06n0+v1Op3Oy8tLsITDggULDAB6b28DgPjPJa3WbB3KapSh5FEVD4Z4Y0kqpHIeSK0q
FVLZx+LXCYTUVEiFPIA8CduKE5z/o802FyrPdmeQabFLWi2/Afk/UqviaCB1yCkT0SZ1v+L0
GXnfg6bawXwog0GKgXTbtm3btm2TP0/fvn375MmTN27cGBoaSkos9tpV5rfb8qoy9Ske2woa
r3shDZVv4M452EYV+UaUHFeZbcWuI/L1O1ElPndksSr+cZXfVrZH/jE9JiaGLGtRq9Vqtfq/
//1vQkJCQUGBv79/QUFBQkICAPBLBCEfbcRpCXN5zJIJkbeWvG2VlxcpOC5Ip4qM1OwGyTc0
mk9PI1/BqVLT2GPHAWlVMUM5++sHxfYrmlBElfiaq5bwHgRzFzbpkH99ul6v55R06NAhMzOT
XUK8ds2GUubV2Paq0lSpgfXfPfL39ciliv2nCiBQqJqkqjhNka9KLcxjWqlhHY4tVeWrUgN5
C0iYOuJLXxTb282OZS3hPUgTSiLkz9OtiJIzF6uEol41YUQkXWppbSXYFEpY98Jfhy57W5mt
I7KqinwBDqJZqrVUpUGq7G3FfpnMY1z3YjWUeTVWjir2uKYcVWxsqSqNNZ8uPlcgnao03qJP
+lAynkHTGw8aVKWZHtat/KmUYoqjmW3FPgT7UxS/24h/I2Kb2RiHGtNbWu5pQR2mV8muKl8o
17OxKnaenhcfmZcXafydJ9Wq+U+0m60VSvYzKAhblalhnR3K4nSeDKmkPdnflOQbBD76NLWt
2BE4Cjlz5YKvkewwYEZ8wS/epUP++XQrgrmnYJ0Gc1QlqRJEUlXG/U3xDSWFboWN5qwPNPwe
CA03p7eiqq2FzrMAgPuZvYEIlmOPwJ0/WdjFGUwDCOTtKP7rl8J8MBaSl8m0RqAhknxwCTRE
5qvmBhoimedyZjlIozkXOjMNZaygAgBgf0uRBqmBJY3iREAks8+ZeXC4sOFwkarUPIhkD9zE
PSkNUhnPN6EkncCukxoBkZGNPoqlpgHk8Z5rRRxqTL+tSPc1G6v6RLs57xVjh86Lj4RxDf9i
D+7KbKtM79jxpdybSlusiu25mJcXaRyyAQAgDVLz8iL7VJLvHlI5/+Vgxbby1j4koshIJHi5
LdJq0yjGT2Wewdve3hz79EhVKufidLDPw9JCZ+bPRpMYzPCnSvXWPkyr/zOi8RjKPD6ofQiF
zoJzIBGsysO6xebDY6wnpqapjGFBBfkAaQDkcMyBBjZOq0k/IdXY4qE+VWcfjqlDnsWLIy0O
NfdiF5lLc0LRVNvauMNxhiqmS0mtKo2iDp8/Sx+TVBUHMp+eBtxW4mDFtmo8lgEc4DZUGqRS
Tuza4AymmavDR1BVWuMf/pgoCLetzNUxdVAA+KH0MfbLSROqwxqFTQYUOSL7iTTKpUP+MV1d
D1NioS+jPcwwNjMUTTVv7UPxCmRckFQVpx8PFF2fwJ67/M37qlSq6kfthmHxgPl1L6pxAFZt
K2/tQ7PzqkLtmVr/04D99nazXZS+GmWoYRT9yvaqJEL+MZ3samWXxMbGarVanU4XEBAQFxcn
WCKIzfb78euIOKPadi1HqmAekcYbZ5uqSmQkauZnGvbCjx/kyNMFYX9NasUz+M0rs/mJG/Ey
I1/PAsDhwrmkUDCdZOYEbNOvmPNuxU+lIsl1U6tRhqLpV7ZXJRHyj+l8LPNlBJtkLqb2+5Fy
wYGvmarY35iLrI1lEMgRhKZfmqpK5COksOdiHvd+NOLHIi+TJp+iPIP/e8nUMoaG39pKLTOS
cmE1mjX7VStjNeHjHgAAeKh9mBcfqRrH/ScHqXs7R6F4F22SKszTJUWJY3pzfBkldV9Lg1S+
U12mdyzUTxpI4YD4m/fVtPqj890NydHZJf/zzioAuH3Tm/ObdOvcbYlWUSXeVsO8r6rqNXPq
MCXEXe/lc8ZXkQbwm/fVH0ofs5aq+0H/YB6XdYtltwNz8TD6Mn7ObSvjb2ucQeb1GtuhgtVW
n3sDQO7cRM5xvavKVeMauQ8StezfNujtqnEwkeeAyJw1U9E6lpby1XJ+lxY6m60DAL5V5YLl
5OjidTi/fyh9TGpVTaojqS+j/PfEIKjVamYGxt/fPzc318vLq6ysLDg4WKfT8Us4T4+Li0te
u1bqlQBMPkJWv5E6ZLkSWfo2sP7rb/aW7maqYu5uMRBSd2s3sKuxb3zBlOxescG3lheKlfQN
HJiaB5FNVcVe22dWPLlzgmFg/e72PCgK0/71S2GgyliiYi3mCmQt/CBvP/G7ZFCewU+0m2fv
mE1unpAGqRH1ZgkD441rG/LiI/s8p/3lJ9Oh4o03zWjOGeTs7y9aoZ28dG4EQES88GscGJ/6
lvbh7B2zASBwoLHNOWswSKGkvZ25+wRz+wtShzlxzF0pyJuCWQV429t7eKmZ+0fyl45QVmM+
vjB9hjIU6VcSqbKgTp4hwpHvicHHZr6MIpPgpneCNTzRt7Aw38Bft2Bc28T82Ho1Dn9AF8Km
qg7A4cK5kSoxTxXC2m/MLGSkORwZZbaKrj0gswri9yNNa97MteDmF9/aQlP+ZQyM8vy8SJG9
KraaT6eqwyzBlnQ+nT8fhfPpfOQf05lFL8wDy3wZ0yCVTESIw0wL5hsgkrctG+rfycTdME1o
bSnZz50GqZ9oN0eyptE5u0jShGau+e6AbFWcOEzNvDzj9gdm1xypwEx3sjeqkRlbVVmD5Rnn
nUCexVZl6trGtnjMj/ASrlS/a46thKXf+OdB7UNo7CgryO1zTZsjHmg6FDOnyTmc8bvHAwDm
/F6Y4cyCmWtitZiv4m4fL2qlFV83CY1nY/lfbhuDGyz4RqQJOzb5B2V3BmjUS42QU0/zHsT5
dEmRf88R35eR78JI48uYBgClj/1pdgMus8XLuI2NW9+4TaDQWAGEAhr7cf3VmHyQHxhvombj
j6INm80EVRnXq7Fu1xkfyaxi3lro7MyqEHEg0rhppX5LXl58JNQCAEDEeP7YoUrLNESMN75q
djqlgny+dADSCMbNFBHj09IaVEVwqgGQHRlsKxDmEhhRn7nkm0vVOxabd9+GxvmiqTNYWgjA
GvRV48BwwPi7IZJont4Q33zqyVclrDCiVmDKhX3RDYwXyPLIZXjgwFTmQYQqVfiUmVbVcI74
O1RFm8GYahQCqAAanCyB04eNGZIiM2KJ8nRT73p583T5x3QrMsz7aloTZ80Ex5c0434/gT0I
eXmRA03P5bH/y/Cb0FweJyyNqrz4yJf3bE5jqYoAgU0rRa20PUcXAic9JwNZxHgmPuUMI3uT
nuBGPn41U6FKTdRhvzHIl4Q0oQSVsAu9tQ85ox67QVRlmQav8dpK415T8i8y3LMfQ1PaimY3
yshWWjL9AqxjscnLiyyq1PbkKE/LBADIG0+umIGWqhI8leLimcqC+yf5KG3mWlJVqnEQGG+y
juBoYAPkn3uxIlLPmuXlRapYMx6kDnERMb45DwDfAcqWc3mqssyexUkC5eOMCpk02jaq2G1F
Goo/h0OEkXZTTy3NM/HlYVNVffPK7AYZjWPm5UVCxHjVOG6ezrQSR3xz2orTGYzfdpieflEB
9PyadwYjxjPXYwbbnMH8xr1dEPa9wpuqSuR0mxWfH+FFhk6zR2xmW7EPwVQjhWwrOua9T+qI
uFdKikON6U2dNRPpT4IzYsaTdMB4Or3r1xEzp1w1DlRpmaq0zIZdJPGRTVZVH41zeVCNM1Yz
1YnJUKX9x2LxY5FXId0MYyPN5Pc42L1ig7GINzZB/fwSHDAuU6M/nOAZJAL6PKdl/myUoY9r
eLNpK7WccZxdjcGCtiLnjiQBbJj16aqyTFMr0NlnUKSLWtyvxKuZOq6pzsBZSp87NzE/wsvk
qn9eKMGLqOAR8xqPnvkRXhAxfnf2BhXjqcn64RyxqW3VaKSObziJeXmRu7M3EP9OFauC8bWQ
2an6OoxmjjCzWUszUcpaxmYSFxe39nYwRIw3vZKFB5ld5RUzk87GZGocwAHjWkDuhTctkz1C
kelaPtxDHGi0spBfomKe0ngqwDgXPK6hgooT34QA85IotHFUCVerTz85mhlVBhLHnE76k6gS
al5yWo2NQ90mzRfDESYYwQJVgpUtUGVsK8HTJ1SZeyzOZFH920TFqiaYmRqEPpSQuS9VWSZE
--=_alYL0jHM0BaW0zWL1lZTEVpaoTHnF+q00E-M0oWd1EQsEuU+--

--=_alYL6RncGWT5DHcvNegXUBlz9ldXvKGFVeljLqVIFIrq03Ev--



--===============1517825123261143294==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1517825123261143294==--



From xen-devel-bounces@lists.xen.org Wed Dec 05 23:52:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Dec 2012 23:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgOky-00071Y-Jp; Wed, 05 Dec 2012 23:52:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgOkx-00071T-2w
	for xen-devel@lists.xensource.com; Wed, 05 Dec 2012 23:51:59 +0000
Received: from [85.158.143.99:64896] by server-2.bemta-4.messagelabs.com id
	5D/3B-30861-E1EDFB05; Wed, 05 Dec 2012 23:51:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354751517!21204519!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDc5NTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16854 invoked from network); 5 Dec 2012 23:51:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Dec 2012 23:51:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,224,1355097600"; d="scan'208";a="16185333"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Dec 2012 23:51:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 5 Dec 2012 23:51:57 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgOkv-0006YK-AL;
	Wed, 05 Dec 2012 23:51:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgOkv-00056V-5m;
	Wed, 05 Dec 2012 23:51:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14573-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Dec 2012 23:51:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14573: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14573 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14573/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             1 hosts-allocate           running [st=running!]
 build-i386-pvops              1 hosts-allocate           running [st=running!]
 build-i386-oldkern            1 hosts-allocate           running [st=running!]
 build-amd64-oldkern           1 hosts-allocate           running [st=running!]
 build-amd64                   1 hosts-allocate           running [st=running!]
 build-i386                    1 hosts-allocate           running [st=running!]

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7N-0004QB-Oh; Thu, 06 Dec 2012 01:19:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7L-0004Pn-Sp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:12 +0000
Received: from [85.158.143.99:62134] by server-2.bemta-4.messagelabs.com id
	91/37-30861-F82FFB05; Thu, 06 Dec 2012 01:19:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354756745!21210601!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDE0Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18119 invoked from network); 6 Dec 2012 01:19:10 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 01:19:10 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 17:19:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="257912513"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 17:19:09 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:23 +0800
Message-Id: <1354756169-12370-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 05/11] nested vmx: fix handling of RDTSC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If L0 is to handle the TSC access itself, we need to advance the guest EIP by
calling update_guest_eip().

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 ++
 3 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3bb0d99..9fb9562 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1555,7 +1555,7 @@ static int get_instruction_length(void)
     return len;
 }
 
-static void update_guest_eip(void)
+void update_guest_eip(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned long x;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d8b7ce5..dab9551 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1614,6 +1614,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
             tsc += __get_vvmcs(nvcpu->nv_vvmcx, TSC_OFFSET);
             regs->eax = (uint32_t)tsc;
             regs->edx = (uint32_t)(tsc >> 32);
+            update_guest_eip();
 
             return 1;
         }
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c4c2fe8..aa5b080 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -399,6 +399,8 @@ void ept_p2m_init(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
+void update_guest_eip(void);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7I-0004PX-BQ; Thu, 06 Dec 2012 01:19:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7H-0004P6-EG
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:07 +0000
Received: from [85.158.143.99:46225] by server-1.bemta-4.messagelabs.com id
	70/BD-27934-B82FFB05; Thu, 06 Dec 2012 01:19:07 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354756745!21210601!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDE0Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17732 invoked from network); 6 Dec 2012 01:19:06 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 01:19:06 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 17:19:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="257912479"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 17:19:04 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:20 +0800
Message-Id: <1354756169-12370-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 02/11] nested vmx: use literal name instead
	of hard numbers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the default-1 settings in the VMX capability MSRs, use literal names
instead of hard-coded numbers in the code.

Also, fix the default-1 settings for the pin-based control MSR.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++----------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    9 +++++++++
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..eb10bbf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1318,9 +1318,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        tmp = VMX_PINBASED_CTLS_DEFAULT1;
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
         /* 1-seetings */
@@ -1342,8 +1341,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
         break;
@@ -1356,8 +1354,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
         /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1367,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+        /* 1-seetings */
+        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 067fbe4..dce2cd8 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -36,6 +36,15 @@ struct nestedvmx {
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
 
+/* bit 1, 2, 4 must be 1 */
+#define VMX_PINBASED_CTLS_DEFAULT1	0x16
+/* bit 1, 4-6,8,13-16,26 must be 1 */
+#define VMX_PROCBASED_CTLS_DEFAULT1	0x401e172
+/* bit 0-8, 10,11,13,14,16,17 must be 1 */
+#define VMX_EXIT_CTLS_DEFAULT1		0x36dff
+/* bit 0-8, and 12 must be 1 */
+#define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
+
 /*
  * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
  */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7H-0004PQ-Vr; Thu, 06 Dec 2012 01:19:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7G-0004P6-KR
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:06 +0000
Received: from [85.158.143.99:46203] by server-1.bemta-4.messagelabs.com id
	7E/AD-27934-982FFB05; Thu, 06 Dec 2012 01:19:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354756743!22934986!2
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2NDk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14137 invoked from network); 6 Dec 2012 01:19:04 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-2.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 01:19:04 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 05 Dec 2012 17:19:04 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="227222647"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 05 Dec 2012 17:19:02 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:19 +0800
Message-Id: <1354756169-12370-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 01/11] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To emulate MSR bitmaps in nested VMX, the L0 hypervisor traps every MSR-access
VM exit from the L2 guest by disabling the MSR_BITMAP feature. When handling
such a VM exit, L0 checks whether the L1 hypervisor uses the MSR_BITMAP
feature and whether the corresponding bit is set to 1. If so, L0 injects the
VM exit into the L1 hypervisor; otherwise, L0 handles the VM exit itself.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame (nvmx->msrbitmap); 
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7H-0004PF-JR; Thu, 06 Dec 2012 01:19:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7F-0004P0-DW
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:05 +0000
Received: from [85.158.143.99:62037] by server-3.bemta-4.messagelabs.com id
	A5/65-06841-882FFB05; Thu, 06 Dec 2012 01:19:04 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354756743!22934986!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2NDk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14096 invoked from network); 6 Dec 2012 01:19:04 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-2.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 01:19:04 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 05 Dec 2012 17:19:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="227222640"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 05 Dec 2012 17:19:01 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:18 +0800
Message-Id: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 00/11] nested vmx: bug fixes and feature
	enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series of patches contains some bug fixes and feature enabling for
nested vmx; please help to review and pull.

Changes from v2 to v3:
 - Changed a hard-coded number to a literal name while exposing bit 55 in the IA32_VMX_BASIC MSR.

The following 4 patches are suitable to backport to 4.2.x:
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: fix interrupt delivery to L2 guest

Thanks,
Dongxiao

Dongxiao Xu (11):
  nested vmx: emulate MSR bitmaps
  nested vmx: use literal name instead of hard numbers
  nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
  nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
  nested vmx: fix interrupt delivery to L2 guest
  nested vmx: check host ability when intercept MSR read

 xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |  115 ++++++++++++++++++++++++++++++------
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |   16 +++++
 7 files changed, 151 insertions(+), 24 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7I-0004PX-BQ; Thu, 06 Dec 2012 01:19:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7H-0004P6-EG
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:07 +0000
Received: from [85.158.143.99:46225] by server-1.bemta-4.messagelabs.com id
	70/BD-27934-B82FFB05; Thu, 06 Dec 2012 01:19:07 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354756745!21210601!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDE0Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17732 invoked from network); 6 Dec 2012 01:19:06 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 01:19:06 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 17:19:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="257912479"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 17:19:04 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:20 +0800
Message-Id: <1354756169-12370-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 02/11] nested vmx: use literal name instead
	of hard numbers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the default-1 settings in the VMX capability MSRs, use literal
names instead of hard-coded numbers in the code.

Besides, fix the default-1 settings for the pin-based control MSR.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++----------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    9 +++++++++
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..eb10bbf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1318,9 +1318,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        tmp = VMX_PINBASED_CTLS_DEFAULT1;
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
         /* 1-seetings */
@@ -1342,8 +1341,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
         break;
@@ -1356,8 +1354,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
         /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1367,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+        /* 1-seetings */
+        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 067fbe4..dce2cd8 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -36,6 +36,15 @@ struct nestedvmx {
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
 
+/* bit 1, 2, 4 must be 1 */
+#define VMX_PINBASED_CTLS_DEFAULT1	0x16
+/* bit 1, 4-6,8,13-16,26 must be 1 */
+#define VMX_PROCBASED_CTLS_DEFAULT1	0x401e172
+/* bit 0-8, 10,11,13,14,16,17 must be 1 */
+#define VMX_EXIT_CTLS_DEFAULT1		0x36dff
+/* bit 0-8, and 12 must be 1 */
+#define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
+
 /*
  * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
  */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7Q-0004Qj-6R; Thu, 06 Dec 2012 01:19:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7O-0004QL-Mk
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:14 +0000
Received: from [85.158.143.99:46365] by server-3.bemta-4.messagelabs.com id
	C0/75-06841-292FFB05; Thu, 06 Dec 2012 01:19:14 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354756745!21210601!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDE0Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18174 invoked from network); 6 Dec 2012 01:19:13 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 01:19:13 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 05 Dec 2012 17:19:13 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="257912537"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 05 Dec 2012 17:19:12 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:25 +0800
Message-Id: <1354756169-12370-8-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 07/11] nested vmx: enable IA32E mode while do
	VM entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some VMMs may check the platform capability to judge whether long-mode
guests are supported. Therefore we need to expose this bit to the
guest VMM.

Xen-on-Xen works fine with the current solution because Xen doesn't
check this capability but directly sets the bit in the VMCS if the
guest supports long mode.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 02a7052..dcdc83e 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1376,7 +1376,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
-               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
+               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
+               VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
         break;
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7S-0004Rk-PP; Thu, 06 Dec 2012 01:19:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7R-0004Qy-1m
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:17 +0000
Received: from [193.109.254.147:35581] by server-11.bemta-14.messagelabs.com
	id 9F/33-29027-492FFB05; Thu, 06 Dec 2012 01:19:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354756755!9134645!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjIxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29324 invoked from network); 6 Dec 2012 01:19:15 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-12.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 01:19:15 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 05 Dec 2012 17:19:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="259668735"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 05 Dec 2012 17:19:13 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:26 +0800
Message-Id: <1354756169-12370-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 08/11] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, we need to sync
the APIC-access address from the virtual VMCS (vvmcs) into the shadow
VMCS when doing a virtual VM entry.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dcdc83e..bcb113f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1350,7 +1369,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1680,6 +1700,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:19:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ7S-0004Rk-PP; Thu, 06 Dec 2012 01:19:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ7R-0004Qy-1m
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:19:17 +0000
Received: from [193.109.254.147:35581] by server-11.bemta-14.messagelabs.com
	id 9F/33-29027-492FFB05; Thu, 06 Dec 2012 01:19:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354756755!9134645!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjIxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29324 invoked from network); 6 Dec 2012 01:19:15 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-12.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 01:19:15 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 05 Dec 2012 17:19:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="259668735"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 05 Dec 2012 17:19:13 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:26 +0800
Message-Id: <1354756169-12370-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 08/11] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, we need to sync
the APIC-access address from the vvmcs into the shadow VMCS when doing
virtual_vmentry.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dcdc83e..bcb113f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1350,7 +1369,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1680,6 +1700,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:20:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ8Z-0004le-5z; Thu, 06 Dec 2012 01:20:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ8Y-0004km-8f
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:20:26 +0000
Received: from [85.158.139.211:25901] by server-5.bemta-5.messagelabs.com id
	72/F8-11353-9D2FFB05; Thu, 06 Dec 2012 01:20:25 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354756823!19274582!2
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3MDQw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8597 invoked from network); 6 Dec 2012 01:20:24 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 01:20:24 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 17:19:24 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="176740976"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 17:19:16 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:28 +0800
Message-Id: <1354756169-12370-11-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 10/11] nested vmx: fix interrupt delivery to
	L2 guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While delivering an interrupt into the L2 guest, the L0 hypervisor needs
to check whether the L1 hypervisor wants to own the interrupt; if not,
the interrupt is injected directly into the L2 guest.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 3961bc7..ef8b925 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -163,7 +163,7 @@ enum hvm_intblk nvmx_intr_blocked(struct vcpu *v)
 
 static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 {
-    u32 exit_ctrl;
+    u32 ctrl;
 
     if ( nvmx_intr_blocked(v) != hvm_intblk_none )
     {
@@ -176,11 +176,14 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
         if ( intack.source == hvm_intsrc_pic ||
                  intack.source == hvm_intsrc_lapic )
         {
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, PIN_BASED_VM_EXEC_CONTROL);
+            if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
+                return 0;
+
             vmx_inject_extint(intack.vector);
 
-            exit_ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
-                            VM_EXIT_CONTROLS);
-            if ( exit_ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, VM_EXIT_CONTROLS);
+            if ( ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
             {
                 /* for now, duplicate the ack path in vmx_intr_assist */
                 hvm_vcpu_ack_pending_irq(v, intack);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:20:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ8Y-0004l9-Ob; Thu, 06 Dec 2012 01:20:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ8X-0004kZ-IG
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:20:25 +0000
Received: from [85.158.139.211:37288] by server-10.bemta-5.messagelabs.com id
	93/D2-09257-8D2FFB05; Thu, 06 Dec 2012 01:20:24 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354756823!19274582!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3MDQw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8558 invoked from network); 6 Dec 2012 01:20:24 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 01:20:24 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 17:19:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="176740962"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 17:19:06 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:21 +0800
Message-Id: <1354756169-12370-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 03/11] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |    6 +++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    6 ++++++
 2 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index eb10bbf..ec5e8a7 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1311,9 +1311,10 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50;
+               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
         /* 1-settings */
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
@@ -1322,6 +1323,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
         /* 1-settings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1353,6 +1355,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = (data << 32) | tmp;
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
+    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
         /* 1-settings */
         tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
@@ -1367,6 +1370,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
+    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         /* 1-settings */
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..2caad9f 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -45,6 +45,12 @@ struct nestedvmx {
 /* bit 0-8, and 12 must be 1 */
 #define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
 
+/* 
+ * bit 55 of IA32_VMX_BASIC MSR, indicating whether any VMX controls that
+ * default to 1 may be cleared to 0.
+ */
+#define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
+
 /*
  * Encoding of VMX instructions based on Tables 24-11 & 24-12 of SDM 3B
  */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:20:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ8a-0004mX-Jd; Thu, 06 Dec 2012 01:20:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ8Y-0004kg-Sc
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:20:27 +0000
Received: from [85.158.137.99:40494] by server-13.bemta-3.messagelabs.com id
	D5/85-24887-4D2FFB05; Thu, 06 Dec 2012 01:20:20 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354756819!12752731!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTA3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9162 invoked from network); 6 Dec 2012 01:20:19 -0000
Received: from unknown (HELO mga09.intel.com) (134.134.136.24)
	by server-8.tower-217.messagelabs.com with SMTP;
	6 Dec 2012 01:20:19 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 05 Dec 2012 17:18:22 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="229577489"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 05 Dec 2012 17:19:07 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:22 +0800
Message-Id: <1354756169-12370-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 04/11] nested vmx: fix rflags status in
	virtual vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As stated in the SDM, all bits in RFLAGS (except the 1-reserved bits)
are cleared to 0 on VM exit. Therefore we need to follow this logic in
virtual_vmexit.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ec5e8a7..d8b7ce5 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -991,7 +991,8 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
 
     regs->eip = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RIP);
     regs->esp = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RSP);
-    regs->eflags = __vmread(GUEST_RFLAGS);
+    /* VM exit clears all bits except bit 1 */
+    regs->eflags = 0x2;
 
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:20:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ8b-0004nD-EL; Thu, 06 Dec 2012 01:20:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ8Z-0004l2-19
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:20:27 +0000
Received: from [85.158.139.211:25933] by server-12.bemta-5.messagelabs.com id
	C1/AD-02886-AD2FFB05; Thu, 06 Dec 2012 01:20:26 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354756823!19274582!3
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3MDQw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8630 invoked from network); 6 Dec 2012 01:20:25 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 01:20:25 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Dec 2012 17:19:24 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="176740979"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Dec 2012 17:19:18 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:29 +0800
Message-Id: <1354756169-12370-12-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 11/11] nested vmx: check host ability when
	intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the guest hypervisor tries to read an MSR value, we intercept this
access and return an emulated value. Besides that, we also need to
ensure that the emulated value is compatible with the host's capabilities.
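The clamping rule repeated throughout the patch can be sketched as a standalone helper. This is a sketch under the VMX convention that the high 32 bits of a capability MSR encode allowed-1 settings and the low 32 bits encode must-be-1 (default1) settings; the function name is hypothetical:

```c
#include <stdint.h>

/* Combine an emulated VMX capability MSR value with the host's value:
 * - high 32 bits (allowed-1 settings): intersect with AND, so the guest
 *   may only enable features the host can also enable;
 * - low 32 bits (must-be-1 settings): union with OR, so every bit that
 *   either side requires to be 1 stays required. */
static uint64_t clamp_to_host(uint64_t emulated, uint64_t host)
{
    uint64_t high = (emulated & host) & (~0ULL << 32);
    uint64_t low  = (emulated | host) & 0xffffffffULL;
    return high | low;
}
```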

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   18 ++++++++++++++----
 1 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 178adbc..e65f963 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1319,19 +1319,20 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp = 0;
+    u64 data = 0, host_data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
         return 0;
 
+    rdmsrl(msr, host_data);
+
     /*
      * Remove unsupport features from n1 guest capability MSR
      */
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
-        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
+        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
@@ -1341,6 +1342,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                PIN_BASED_PREEMPT_TIMER;
         tmp = VMX_PINBASED_CTLS_DEFAULT1;
         data = ((data | tmp) << 32) | (tmp);
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
@@ -1368,6 +1371,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
@@ -1376,6 +1381,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
     case MSR_IA32_VMX_TRUE_EXIT_CTLS:
@@ -1391,6 +1398,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
 	/* 0-settings */
         data = ((data | tmp) << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
     case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
@@ -1401,8 +1410,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
                VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
+        data = ((data & host_data) & (~0ul << 32)) |
+               ((data | host_data) & (~0u));
         break;
-
     case IA32_FEATURE_CONTROL_MSR:
         data = IA32_FEATURE_CONTROL_MSR_LOCK | 
                IA32_FEATURE_CONTROL_MSR_ENABLE_VMXON_OUTSIDE_SMX;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:20:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ8X-0004kc-BU; Thu, 06 Dec 2012 01:20:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ8V-0004kI-E5
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:20:23 +0000
Received: from [85.158.143.35:63317] by server-2.bemta-4.messagelabs.com id
	CC/77-30861-6D2FFB05; Thu, 06 Dec 2012 01:20:22 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354756820!12760966!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTA3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23295 invoked from network); 6 Dec 2012 01:20:21 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-8.tower-21.messagelabs.com with SMTP;
	6 Dec 2012 01:20:21 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 05 Dec 2012 17:18:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="252687180"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 05 Dec 2012 17:19:15 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:27 +0800
Message-Id: <1354756169-12370-10-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 09/11] nested vmx: enable PAUSE and RDPMC
	exiting for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index bcb113f..178adbc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1362,6 +1362,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
+               CPU_BASED_PAUSE_EXITING |
+               CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:20:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQ8b-0004mr-18; Thu, 06 Dec 2012 01:20:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgQ8Y-0004ky-Pr
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 01:20:26 +0000
Received: from [85.158.137.99:22408] by server-14.bemta-3.messagelabs.com id
	21/3E-31424-5D2FFB05; Thu, 06 Dec 2012 01:20:21 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354756819!12752731!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9205 invoked from network); 6 Dec 2012 01:20:20 -0000
Received: from unknown (HELO mga09.intel.com) (134.134.136.24)
	by server-8.tower-217.messagelabs.com with SMTP;
	6 Dec 2012 01:20:20 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 05 Dec 2012 17:18:25 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,224,1355126400"; d="scan'208";a="229577508"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 05 Dec 2012 17:19:10 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 09:09:24 +0800
Message-Id: <1354756169-12370-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v3 06/11] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the DR registers, we use a lazy restore mechanism on access.
Therefore, when receiving such a VM exit, L0 is responsible for
switching to the right DR values before injecting the exit into the
L1 hypervisor.
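The fixed condition can be sketched as a standalone predicate. The control-bit value matches bit 23 of the processor-based VM-execution controls; the function name is hypothetical:

```c
#include <stdbool.h>

#define CPU_BASED_MOV_DR_EXITING 0x00800000u

/* With lazy DR switching, forward a MOV-DR exit to L1 only when L1
 * asked for MOV-DR exiting AND this vCPU's debug registers are dirty
 * (i.e. the real DR state has been touched and must be switched first);
 * otherwise L0 handles the access itself. */
static bool should_forward_dr_exit(unsigned int ctrl, bool dr_dirty)
{
    return (ctrl & CPU_BASED_MOV_DR_EXITING) && dr_dirty;
}
```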

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dab9551..02a7052 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1641,7 +1641,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         break;
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_MOV_DR_EXITING )
+        if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
+            v->arch.hvm_vcpu.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 01:42:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 01:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgQTS-0006ZJ-Ig; Thu, 06 Dec 2012 01:42:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1TgQTQ-0006Z5-7B
	for Xen-devel@lists.xensource.com; Thu, 06 Dec 2012 01:42:00 +0000
Received: from [85.158.139.83:30203] by server-4.bemta-5.messagelabs.com id
	A8/D8-15011-7E7FFB05; Thu, 06 Dec 2012 01:41:59 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354758117!25888511!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gOTg5Njg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18030 invoked from network); 6 Dec 2012 01:41:58 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 01:41:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB61ftOT019030
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 01:41:55 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB61fsc2012301
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Dec 2012 01:41:55 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB61fsiC020346; Wed, 5 Dec 2012 19:41:54 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Dec 2012 17:41:54 -0800
Date: Wed, 5 Dec 2012 17:41:53 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>, Ian
	Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Message-ID: <20121205174153.76fa5dd1@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] page refcount on pages mapped by the toolstack..
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I observed something that doesn't seem right to me:

PV dom0 booting PV guest (say domid 1). (no PVH).

xl cr vm.cfg.pv

Take some mfn from domid 1. Its refcnt is 1, as expected. Now the lib
wants to map it via xen_remap_domain_mfn_range(). The call goes through
do_mmu_update(), and upon return the refcnt is 2, as expected.

Now, I noticed the refcnt doesn't go back to 1 after the guest is
created/booted. I'd have expected a process exit somewhere to have
brought the refcnt back down to 1 (which is what would happen in the
case of PVH dom0).

The guest is up, and I notice the refcnt is 2. I shut down the guest,
the refcnt goes to 0, and the page is freed via relinquish_memory()
called from domain_relinquish_resources(). I would have expected the
page to hang around with refcnt 1; what if the user process still has
it mapped?

What am I missing?

Thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 03:15:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 03:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgRvW-0000BH-D2; Thu, 06 Dec 2012 03:15:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TgRvU-0000BC-1n
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 03:15:04 +0000
Received: from [85.158.143.99:63837] by server-1.bemta-4.messagelabs.com id
	5A/C5-27934-7BD00C05; Thu, 06 Dec 2012 03:15:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354763701!27653590!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMDYwODc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11016 invoked from network); 6 Dec 2012 03:15:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 03:15:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB63EvJj004610
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 03:14:58 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB63EvvD000588
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Dec 2012 03:14:57 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB63EvYM000619; Wed, 5 Dec 2012 21:14:57 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Dec 2012 19:14:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BB1B21C05A8; Wed,  5 Dec 2012 22:14:55 -0500 (EST)
Date: Wed, 5 Dec 2012 22:14:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20121206031455.GA4408@phenom.dumpdata.com>
References: <1351097925-26221-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1351097925-26221-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] LVM Checksum error when using persistent grants
 (#linux-next + stable/for-jens-3.8)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey Roger,

I am seeing some weird behavior when using the #linux-next + stable/for-jens-3.8 tree.

Basically, I can run 'pvscan' on an xvd* disk and quite often I get checksum errors:

# pvscan /dev/xvdf
  PV /dev/xvdf2   VG VolGroup00        lvm2 [18.88 GiB / 0    free]
  PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
  PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
  PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
  PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
  Total: 5 [962.38 GiB] / in use: 5 [962.38 GiB] / in no VG: 0 [0   ]
# pvscan /dev/xvdf
  /dev/xvdf2: Checksum error
  Couldn't read volume group metadata.
  /dev/xvdf2: Checksum error
  Couldn't read volume group metadata.
  PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
  PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
  PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
  PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
  Total: 4 [943.50 GiB] / in use: 4 [943.50 GiB] / in no VG: 0 [0   ]

This is with an i386 dom0, a 64-bit Xen 4.1.3 hypervisor, and either a
64-bit or 32-bit PV or PVHVM guest.

Have you seen something like this?

Note that the other LV disks are over iSCSI and are working fine.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 03:42:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 03:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgSLk-0000lZ-Q2; Thu, 06 Dec 2012 03:42:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TgSLj-0000lU-71
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 03:42:11 +0000
Received: from [85.158.138.51:12638] by server-12.bemta-3.messagelabs.com id
	B2/07-22757-21410C05; Thu, 06 Dec 2012 03:42:10 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354765327!27620837!1
X-Originating-IP: [209.85.210.172]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31360 invoked from network); 6 Dec 2012 03:42:08 -0000
Received: from mail-ia0-f172.google.com (HELO mail-ia0-f172.google.com)
	(209.85.210.172)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 03:42:08 -0000
Received: by mail-ia0-f172.google.com with SMTP id z13so4854254iaz.31
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 19:42:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:cc:content-type;
	bh=51PL219pcPRAKVKg4dVJv6IEvF09VTOlvBP2V8omv74=;
	b=rDGxnn+bU0I/L//TSmkZhdzTkI3I6rYixc1jO7ZmqBBuIpj4u+DxWqTWqfqjArh3kR
	PeTgqVyUWel37EgJFNk5OUSM1nIwjSE9uEK3/8CPkfWdyZULsLvvfOeCTvJ12Xg4rc8z
	gzQJEsFaj+Uf2BRT4ri7oKcFZYnJpMClKnd3PYBgO10OUbUho1/DoC5rIYe/gNGmZaRP
	aLpmCfsTj3scUDxb1ecJl7n9sBshgcEUpR3sxRkvOYAUZjZDzi2A1OkUSGZbu1Yfu00s
	hTkdJUdjjXUYNHTkJiSWAqICFN18onX6RrHGd5Cq4GEStritRyNP92sqzICpN4KEZHDL
	QOtA==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr4466729igq.37.1354765326977; Wed,
	05 Dec 2012 19:42:06 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Wed, 5 Dec 2012 19:42:06 -0800 (PST)
Date: Thu, 6 Dec 2012 11:42:06 +0800
X-Google-Sender-Auth: rXqtXSJxQteje59xI3lwVM9b4jI
Message-ID: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3 with
 the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU missing
 interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2170980356570059032=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2170980356570059032==
Content-Type: multipart/alternative; boundary=14dae9341115118b1304d026e222

--14dae9341115118b1304d026e222
Content-Type: text/plain; charset=ISO-8859-1

Sorry, but I have to resend this in a separate thread for better visibility.
Background:
After backporting this patch to fix my PVHVM missing-interrupt issue in
4.1.3:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html
I found a side effect for pure HVM guests:
http://lists.xen.org/archives/html/xen-devel/2012-12/msg00208.html

I had a follow-up thread analyzing the issue, and I am posting it again here:

Hi, it seems that the patch has some side effect on pure HVM guests.
>> For an openelec 2.0 guest, which is based on Linux 3.2.x with PVOP disabled,
>> I see the following symptoms in qemu-dm-xxx.log:
>>
>> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
>> support??
>> pt_pci_write_config: Internal error: Invalid write emulation return
>> value[-1]. I/O emulator exit.
>>
>> The guest dies immediately after this log, so I have no way to check the
>> guest kernel log.
>> Without the patch, this guest can boot without an obvious error log, even
>> though the VGA passthrough does not quite work.
>> I'll check the code to see what these logs mean...
>>
>
> I did some analysis and it really looks like a bug to me.
> Since this is a patch back-ported from 4.2.0, I would like to ask whether
> there is any follow-up patch that would fix this issue?
> Please see my analysis below:
>
> Here is part of the qemu-dm log, with debug log enabled at compile time:
>
> dm-command: hot insert pass-through pci dev
> register_real_device: Assigning real physical device 00:1b.0 ...
> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No
> such file or directory: 0x0:0x1b.0x0
> pt_register_regions: IO region registered (size=0x00004000
> base_addr=0xf7c10004)
> pt_msi_setup: msi mapped with pirq 36
> pci_intx: intx=1
> register_real_device: Real physical device 00:1b.0 registered successfuly!
> IRQ type = MSI-INTx
> ...
> pt_pci_read_config: [00:06.0]: address=0000 val=0x0000*8086* len=2
> pt_pci_read_config: [00:06.0]: address=0002 val=0x0000*1e20* len=2
> ...
> *pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
> ...
> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
> *pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation*
> pci_intx: intx=1
> *pt_msi_disable: Unmap msi with pirq 36*
> pt_msgctrl_reg_write: setup msi for dev 30
> pt_msi_setup: msi mapped with pirq 36
> pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
> pt_pci_read_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_pci_write_config: [00:06.0]: address=0064 val=0xfee0300c len=4
> *pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
>
> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
> support??
> pt_pci_write_config: Internal error: Invalid write emulation return
> value[-1]. I/O emulator exit.
>
>
> Here the device in question should be the audio controller, 00:1b.0 in the
> host, which is 64-bit capable:
> 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset
> Family High Definition Audio Controller (rev 04)
>     Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Address: 00000000fee00378  Data: 0000
>
> And there is also a successful write to offset 0x68 above, while the
> second write fails the I/O emulator.
>
>
> The patch added a pt_msi_disable() call into the *pt_msgctrl_reg_write()* function,
> and the pt_msi_disable() function has these lines:
> out:
>     /* clear msi info */
>     dev->msi->flags = 0;
>     dev->msi->pirq = -1;
>     dev->msi_trans_en = 0;
>
> As a result, the flags are cleared -- this is new in the patch.
> And I believe this change caused the failure in pt_msgaddr64_reg_write():
>
> 3882     /* check whether the type is 64 bit or not */
> 3883     if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
> 3884     {
> 3885         /* exit I/O emulator */
> 3886         PT_LOG("Error: why comes to Upper Address without 64 bit
> support??\n");
> 3887         return -1;
> 3888     }
>
>
> I only see the flags set up in the pt_msgctrl_reg_init() function. I guess
> it is not called this time.
>
> Thanks,
> Timothy
>

--14dae9341115118b1304d026e222--


--===============2170980356570059032==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2170980356570059032==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 03:42:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 03:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgSLk-0000lZ-Q2; Thu, 06 Dec 2012 03:42:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TgSLj-0000lU-71
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 03:42:11 +0000
Received: from [85.158.138.51:12638] by server-12.bemta-3.messagelabs.com id
	B2/07-22757-21410C05; Thu, 06 Dec 2012 03:42:10 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354765327!27620837!1
X-Originating-IP: [209.85.210.172]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31360 invoked from network); 6 Dec 2012 03:42:08 -0000
Received: from mail-ia0-f172.google.com (HELO mail-ia0-f172.google.com)
	(209.85.210.172)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 03:42:08 -0000
Received: by mail-ia0-f172.google.com with SMTP id z13so4854254iaz.31
	for <xen-devel@lists.xen.org>; Wed, 05 Dec 2012 19:42:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:cc:content-type;
	bh=51PL219pcPRAKVKg4dVJv6IEvF09VTOlvBP2V8omv74=;
	b=rDGxnn+bU0I/L//TSmkZhdzTkI3I6rYixc1jO7ZmqBBuIpj4u+DxWqTWqfqjArh3kR
	PeTgqVyUWel37EgJFNk5OUSM1nIwjSE9uEK3/8CPkfWdyZULsLvvfOeCTvJ12Xg4rc8z
	gzQJEsFaj+Uf2BRT4ri7oKcFZYnJpMClKnd3PYBgO10OUbUho1/DoC5rIYe/gNGmZaRP
	aLpmCfsTj3scUDxb1ecJl7n9sBshgcEUpR3sxRkvOYAUZjZDzi2A1OkUSGZbu1Yfu00s
	hTkdJUdjjXUYNHTkJiSWAqICFN18onX6RrHGd5Cq4GEStritRyNP92sqzICpN4KEZHDL
	QOtA==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr4466729igq.37.1354765326977; Wed,
	05 Dec 2012 19:42:06 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Wed, 5 Dec 2012 19:42:06 -0800 (PST)
Date: Thu, 6 Dec 2012 11:42:06 +0800
X-Google-Sender-Auth: rXqtXSJxQteje59xI3lwVM9b4jI
Message-ID: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3 with
 the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU missing
 interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2170980356570059032=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2170980356570059032==
Content-Type: multipart/alternative; boundary=14dae9341115118b1304d026e222

--14dae9341115118b1304d026e222
Content-Type: text/plain; charset=ISO-8859-1

Sorry, but I have to resend this in a separate thread for better visibility.
Background:
After backporting this patch to fix my PVHVM missing-interrupt issue in 4.1.3:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html
I found a side effect for pure HVM guests:
http://lists.xen.org/archives/html/xen-devel/2012-12/msg00208.html

I had a follow-up thread analyzing the issue, and I am posting it again here:

Hi, it seems that the patch has some side effect on pure HVM guests.
>> For the openelec 2.0 guest, which is based on Linux 3.2.x with PVOP disabled,
>> I see the following symptom in qemu-dm-xxx.log:
>>
>> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
>> support??
>> pt_pci_write_config: Internal error: Invalid write emulation return
>> value[-1]. I/O emulator exit.
>>
>> The guest dies immediately after this log, so I have no way to check the
>> guest kernel log.
>> Without the patch, this guest boots with no obvious error log, even though
>> the VGA passthrough does not quite work.
>> I'll check the code to see what these logs mean...
>>
>
> I did some analysis and it really looks like a bug to me.
> Since this is a patch back-ported from 4.2.0, I would like to ask: is there
> any follow-up patch that fixes this issue?
> Please see my analysis below:
>
> Here is part of the qemu-dm log, with debug log enabled at compile time:
>
> dm-command: hot insert pass-through pci dev
> register_real_device: Assigning real physical device 00:1b.0 ...
> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No
> such file or directory: 0x0:0x1b.0x0
> pt_register_regions: IO region registered (size=0x00004000
> base_addr=0xf7c10004)
> pt_msi_setup: msi mapped with pirq 36
> pci_intx: intx=1
> register_real_device: Real physical device 00:1b.0 registered successfuly!
> IRQ type = MSI-INTx
> ...
> pt_pci_read_config: [00:06.0]: address=0000 val=0x0000*8086* len=2
> pt_pci_read_config: [00:06.0]: address=0002 val=0x0000*1e20* len=2
> ...
> *pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
> ...
> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
> *pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation*
> pci_intx: intx=1
> *pt_msi_disable: Unmap msi with pirq 36*
> pt_msgctrl_reg_write: setup msi for dev 30
> pt_msi_setup: msi mapped with pirq 36
> pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
> pt_pci_read_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_pci_write_config: [00:06.0]: address=0064 val=0xfee0300c len=4
> *pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
>
> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
> support??
> pt_pci_write_config: Internal error: Invalid write emulation return
> value[-1]. I/O emulator exit.
>
>
> Here the device in question is the audio controller, 00:1b.0 in the
> host, which is 64-bit capable:
> 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset
> Family High Definition Audio Controller (rev 04)
>     Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Address: 00000000fee00378  Data: 0000
>
> And there is also a successful write to offset 0x68 above, while the
> second write makes the I/O emulator exit.
>
>
> The patch added a pt_msi_disable() call into the *pt_msgctrl_reg_write()* function,
> and the pt_msi_disable() function has these lines:
> out:
>     /* clear msi info */
>     dev->msi->flags = 0;
>     dev->msi->pirq = -1;
>     dev->msi_trans_en = 0;
>
> As a result, the flags are cleared -- this is new with the patch,
> and I believe this change caused the failure in pt_msgaddr64_reg_write():
>
> 3882     /* check whether the type is 64 bit or not */
> 3883     if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
> 3884     {
> 3885         /* exit I/O emulator */
> 3886         PT_LOG("Error: why comes to Upper Address without 64 bit
> support??\n");
> 3887         return -1;
> 3888     }
>
>
> I only see the flags being set up in the pt_msgctrl_reg_init() function. I
> guess it is not called on this path.
>
> Thanks,
> Timothy
>

--14dae9341115118b1304d026e222--


--===============2170980356570059032==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2170980356570059032==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 04:29:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 04:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgT4h-0001ZX-Lt; Thu, 06 Dec 2012 04:28:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgT4f-0001ZS-PY
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 04:28:37 +0000
Received: from [85.158.138.51:44812] by server-2.bemta-3.messagelabs.com id
	BA/CC-04744-0FE10C05; Thu, 06 Dec 2012 04:28:32 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1354768111!25892138!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTA3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6741 invoked from network); 6 Dec 2012 04:28:32 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 04:28:32 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 05 Dec 2012 20:27:43 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,227,1355126400"; d="scan'208";a="252749691"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 05 Dec 2012 20:28:28 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 20:28:13 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 5 Dec 2012 20:28:12 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 12:27:36 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH V1 1/2] Xen acpi memory hotplug driver
Thread-Index: AQHN0xANUNtbxbdZSkG9MqBQJp2L85gLG13g
Date: Thu, 6 Dec 2012 04:27:36 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
	<20121205174307.GC16072@phenom.dumpdata.com>
In-Reply-To: <20121205174307.GC16072@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"lenb@kernel.org" <lenb@kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>>>> index 126d8ce..abd0396 100644
>>>> --- a/drivers/xen/Kconfig
>>>> +++ b/drivers/xen/Kconfig
>>>> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
>>>>  	  Allow kernel fetching MCE error from Xen platform and
>>>>  	  converting it into Linux mcelog format for mcelog tools
>>>> 
>>>> +config XEN_ACPI_MEMORY_HOTPLUG
>>>> +	bool "Xen ACPI memory hotplug"
>>> 
>>> There should be a way to make this a module.
>> 
>> I have some concerns about making it a module:
>> 1. the xen and native memhotplug drivers would both be modules, while we
>> need to load the xen driver early.
>> 2. a xen stub driver might solve the load-sequence issue, but it may
>>    introduce other issues: * if the xen driver loads and then unloads,
>> the native driver may get a chance to load;
> 
> The stub driver would still "occupy" the ACPI bus for the memory
> hotplug PnP, so I think this would not be a problem.
> 

I'm not quite clear what you mean here. Do you mean:
1. a xen_stub driver + a xen_memhotplug driver, where the xen_stub driver unloads and is entirely replaced by the xen_memhotplug driver, or
2. a xen_stub driver (w/ stub ops) + xen_memhotplug ops (not a driver), where the xen_stub driver keeps occupying the bus but its stub ops are later replaced by the xen_memhotplug ops?

With approach #1, there is a risk that the native driver may load (if the xen driver unloads).
With approach #2, the xen_memhotplug ops lose the chance to probe/add/bind existing memory devices (since that is done when the driver is registered).

>>   * if the xen driver loads --> unloads --> loads again, it will lose
>> hotplug notifications during the unload period;
> 
> Sure. But I think we can do it with this driver? After all, the
> function of it is just to tell the firmware to turn sockets on/off - and
> if we miss one notification, we won't take advantage of the power
> savings - but we can do that later on.
> 

It does not only inform the firmware:
the hotplug notify callback will invoke acpi_bus_add -> ... -> which implicitly invokes the drv->ops.add method to add the hot-added memory device.

> 
>>   * if the xen driver loads --> unloads --> loads again, it will
>> re-add all memory devices, but the handles for a 'booting memory
>> device' and a 'hotplug memory device' are different, and we have no
>> way to distinguish these two kinds of devices.
> 
> Wouldn't the stub driver hold onto that?
> 

Same question as comment #1. Do you mean it has a xen_stub driver (w/ stub ops) and a xen_memhotplug ops?

>> 
>> IMHO, making the xen hotplug logic a module may have unexpected
>> results. Are there any obvious advantages to doing so?
>> After all, we have provided the config choice to the user. Thoughts?
> 
> Yes, it becomes a module - which is what we want.
> 

What I meant here is that making it a module will bring some unexpected issues for xen hotplug.
We can give the user a 'bool' config choice and let them decide whether to build it in, but not a 'tristate' choice.
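For reference, the distinction being drawn maps onto Kconfig symbol types; a sketch based on the quoted patch (the tristate variant is hypothetical, shown only for contrast):

```kconfig
# 'bool' lets the user build the driver in (=y) or leave it out (=n):
config XEN_ACPI_MEMORY_HOTPLUG
	bool "Xen ACPI memory hotplug"

# A 'tristate' symbol would additionally allow =m (build as a module),
# which is the option being argued against here:
#config XEN_ACPI_MEMORY_HOTPLUG
#	tristate "Xen ACPI memory hotplug"
```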

Thanks
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 05:36:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 05:36:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgU7b-0002vR-Nd; Thu, 06 Dec 2012 05:35:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TgU7Z-0002vM-J1
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 05:35:41 +0000
Received: from [85.158.139.211:33973] by server-12.bemta-5.messagelabs.com id
	EA/31-02886-CAE20C05; Thu, 06 Dec 2012 05:35:40 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1354772136!19169666!1
X-Originating-IP: [72.21.198.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk4LjI1ID0+IDIwODMyNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16590 invoked from network); 6 Dec 2012 05:35:37 -0000
Received: from smtp-fw-4101.amazon.com (HELO smtp-fw-4101.amazon.com)
	(72.21.198.25)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 05:35:37 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354772137; x=1386308137;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=fyWefBtGnWeIcstiGMnoF1zgZelkjImspVBdOi31IDA=;
	b=dbnIn3w7kqj/0NfqaSeyCwLD2G37G20Sl4MaQ4mgJ1CWSoSYBAqw711l
	Z84Fm6RLaQEukd+fg6792WI6Ny7Csl7FgcRJSGL2uQxQAOdHVlJuBpUO7
	OEKMvFHBSKvrhhSl0ojLjgopEnLXtR3wecJ4nLzttI3kgpJXyEivOgPqO c=;
X-IronPort-AV: E=McAfee;i="5400,1158,6917"; a="847258056"
Received: from smtp-in-0102.sea3.amazon.com ([10.224.19.46])
	by smtp-border-fw-out-4101.iad4.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 06 Dec 2012 05:35:35 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-0102.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB65ZY34009672
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 6 Dec 2012 05:35:34 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 5 Dec 2012 21:35:25 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Wed, 05 Dec 2012 21:35:24 -0800
Date: Wed, 5 Dec 2012 21:35:24 -0800
From: Matt Wilson <msw@amazon.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Message-ID: <20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> Matt,
[...]
> You are right. The above chunk, which is already part of the upstream,
> is unfortunately incorrect for some cases. We also ran into issues
> in our environment around a week back and found this problem. The
> count will be different based on the head len because of the
> optimization that start_new_rx_buffer is trying to do for large
> buffers. A hole of size "offset_in_page" will be left in the first page
> during the copy if the remaining buffer size is >= PAGE_SIZE. This
> subsequently affects the copy_off as well.
>
> So xen_netbk_count_skb_slots actually needs a fix to calculate the
> count correctly based on head len. And also a fix to calculate the
> copy_off properly to which the data from fragments gets copied.

Can you explain more about the copy_off problem? I'm not seeing it.

> max_required_rx_slots may also need a fix to account for the
> additional slot that may be required when mtu >= PAGE_SIZE. For the
> worst-case scenario, at least another +1. One thing that is still
> puzzling here is that max_required_rx_slots seems to assume that the
> linear length in the head will never be greater than the mtu size. But
> that doesn't seem to be the case all the time. I wonder if it requires
> some kind of fix there or special handling when count_skb_slots
> exceeds max_required_rx_slots.

We should only be using the number of pages required to copy the
data. The fix shouldn't be to anticipate wasting ring space by
increasing the return value of max_required_rx_slots().

[...]

> > Why increment count by the /estimated/ count instead of the actual
> > number of slots used? We have the number of slots in the line just
> > above, in sco->meta_slots_used.
> > 
>
> Count actually refers to ring slots consumed rather than meta_slots_used.
> Count can be different from meta_slots_used.

Aah, indeed. This can end up being too pessimistic if you have lots of
frags that require multiple copy operations. I still think that it
would be better to calculate the actual number of ring slots consumed
by netbk_gop_skb() to avoid other bugs like the one you originally
fixed.

> > > >  		__skb_queue_tail(&rxq, skb);
> > > >
> > > > +		skb = skb_peek(&netbk->rx_queue);
> > > > +		if (skb == NULL)
> > > > +			break;
> > > > +		sco = (struct skb_cb_overlay *)skb->cb;
> > > >  		/* Filled the batch queue? */
> > > > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> > > >  			break;
> > > >  	}
> > > >
> > 
> > This change I like.
> > 
> > We're working on a patch to improve the buffer efficiency and the
> > miscalculation problem. Siva, I'd be happy to re-base and re-submit
> > this patch (with minor adjustments) as part of that work, unless you
> > want to handle that.
> > 
> > Matt
> 
> Thanks!!  Please feel free to re-base and re-submit :-)

OK, thanks!

Matt


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From: Matt Wilson <msw@amazon.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Message-ID: <20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> Matt,
[...]
> You are right. The above chunk, which is already part of upstream,
> is unfortunately incorrect in some cases. We also ran into issues
> in our environment around a week back and found this problem. The
> count will be different based on the head len because of the
> optimization that start_new_rx_buffer is trying to do for large
> buffers.  A hole of size "offset_in_page" will be left in the first
> page during the copy if the remaining buffer size is >= PAGE_SIZE.
> This subsequently affects the copy_off as well.
>
> So xen_netbk_count_skb_slots actually needs a fix to calculate the
> count correctly based on the head len, and also a fix to calculate
> the copy_off properly, to which the data from fragments gets copied.

Can you explain more about the copy_off problem? I'm not seeing it.

> max_required_rx_slots may also require a fix to account for the
> additional slot that may be required in case mtu >= PAGE_SIZE. For
> the worst-case scenario, at least another +1.  One thing that is
> still puzzling here is that max_required_rx_slots seems to assume
> that the linear length in the head will never be greater than the
> mtu size. But that doesn't seem to be the case all the time. I
> wonder if it requires some kind of fix there, or special handling
> when count_skb_slots exceeds max_required_rx_slots.

We should only be using the number of pages required to copy the
data. The fix shouldn't be to anticipate wasting ring space by
increasing the return value of max_required_rx_slots().

[...]

> > Why increment count by the /estimated/ count instead of the actual
> > number of slots used? We have the number of slots in the line just
> > above, in sco->meta_slots_used.
> > 
>
> Count actually refers to ring slots consumed rather than meta_slots
> used.  Count can be different from meta_slots_used.

Aah, indeed. This can end up being too pessimistic if you have lots of
frags that require multiple copy operations. I still think that it
would be better to calculate the actual number of ring slots consumed
by netbk_gop_skb() to avoid other bugs like the one you originally
fixed.

> > > >  		__skb_queue_tail(&rxq, skb);
> > > >
> > > > +		skb = skb_peek(&netbk->rx_queue);
> > > > +		if (skb == NULL)
> > > > +			break;
> > > > +		sco = (struct skb_cb_overlay *)skb->cb;
> > > >  		/* Filled the batch queue? */
> > > > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> > > >  			break;
> > > >  	}
> > > >
> > 
> > This change I like.
> > 
> > We're working on a patch to improve the buffer efficiency and the
> > miscalculation problem. Siva, I'd be happy to re-base and re-submit
> > this patch (with minor adjustments) as part of that work, unless you
> > want to handle that.
> > 
> > Matt
> 
> Thanks!!  Please feel free to re-base and re-submit :-)

OK, thanks!

Matt


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 08:35:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 08:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgWv3-00061P-7y; Thu, 06 Dec 2012 08:34:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TgWv1-00061K-Pu
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 08:34:56 +0000
Received: from [85.158.143.99:42339] by server-1.bemta-4.messagelabs.com id
	6A/2E-28401-EA850C05; Thu, 06 Dec 2012 08:34:54 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-7.tower-216.messagelabs.com!1354782894!24920013!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12869 invoked from network); 6 Dec 2012 08:34:54 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-7.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Dec 2012 08:34:54 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TgWuj-0004Iv-Uz; Thu, 06 Dec 2012 08:34:38 +0000
Received: from dagon.hellion.org.uk ([192.168.1.7])
	by hopkins.hellion.org.uk with esmtps (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TgWud-0007N4-Rf; Thu, 06 Dec 2012 08:34:37 +0000
Message-ID: <1354782871.28777.12.camel@dagon.hellion.org.uk>
From: Ian Campbell <ijc@hellion.org.uk>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Date: Thu, 06 Dec 2012 08:34:31 +0000
In-Reply-To: <20121205214741.GA1150@phenom.dumpdata.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1354711599.15296.191.camel@zakaz.uk.xensource.com>
	<20121205214741.GA1150@phenom.dumpdata.com>
Organization: 
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 192.168.1.7
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(trim quote please...)
On Wed, 2012-12-05 at 21:47 +0000, Konrad Rzeszutek Wilk wrote:
> Do you want to prep a patch that I can stick in my 'microcode' branch?
> .. That I will at some point try to upstream.

You might want to look back at the archives when Jeremy first tried to
upstream this work, it was a vehement "No" and the resulting thread was
not pretty.

Now that we have early loading via the hypervisor in 4.2 and Linux is
finally in the process of growing its own early microcode loading
solution I suspect the No would be even firmer.

It is on xenbits if you want it anyway:

git://xenbits.xen.org/people/ianc/linux-2.6.git debian/wheezy/microcode

About the only argument I can see for continuing to try upstreaming this
stuff is that in
http://www.gossamer-threads.com/lists/linux/kernel/1583630 Fenghua says:

        Note, however, that Linux users have gotten used to being able
        to install a microcode patch in the field without having a
        reboot; we support that model too.

i.e. this is an argument for keeping the previous scheme in parallel,
which I suppose is an argument for supporting the same under Xen (I
don't know if it's a good one, though).

Ian.

-- 
Ian Campbell


All the existing 2.0.x kernels are to buggy for 2.1.x to be the
main goal.
		-- Alan Cox


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:24:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXgS-0006pk-Oh; Thu, 06 Dec 2012 09:23:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgXgQ-0006pf-Gp
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:23:54 +0000
Received: from [193.109.254.147:51679] by server-1.bemta-14.messagelabs.com id
	9A/47-25314-92460C05; Thu, 06 Dec 2012 09:23:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1354785786!2745408!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26315 invoked from network); 6 Dec 2012 09:23:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:23:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16192862"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:20:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:20:58 +0000
Message-ID: <1354785657.17165.41.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:20:57 +0000
In-Reply-To: <1354732666-3132-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354732666-3132-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/arm: flush dcache after memcpy'ing the
	kernel image
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:37 +0000, Stefano Stabellini wrote:
> After memcpy'ing the kernel in guest memory we need to flush the dcache
> to make sure that the data actually reaches the memory before we start
> executing guest code with caches disabled.
> 
> This fixes a boot time bug on the Cortex A15 Versatile Express that
> usually shows up as follows:
> 
> (XEN) Hypervisor Trap. HSR=0x80000006 EC=0x20 IL=0 Syndrome=6
> (XEN) Unexpected Trap: Hypervisor

That's a symptom of a thousand different problems though, since it's
just a generic Instruction Abort from guest mode caused by a translation
fault at stage 2. 

Anyhow this won't apply on top of "arm: support for initial modules
(e.g. dom0) and DTB supplied in RAM", could you rebase on that please?

> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/kernel.c |    2 ++
>  1 files changed, 2 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index b4a823d..81818b1 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -53,6 +53,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
>  
>          set_fixmap(FIXMAP_MISC, p, attrindx);
>          memcpy(dst, src + s, l);
> +        flush_xen_dcache_va_range(dst, l);
>  
>          paddr += l;
>          dst += l;
> @@ -82,6 +83,7 @@ static void kernel_zimage_load(struct kernel_info *info)
>  
>          set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED);
>          memcpy(dst, src, PAGE_SIZE);
> +        flush_xen_dcache_va_range(dst, PAGE_SIZE);
>          clear_fixmap(FIXMAP_MISC);
>  
>          unmap_domain_page(dst);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:26:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:26:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXiA-0006tw-Aa; Thu, 06 Dec 2012 09:25:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgXi9-0006to-32
	for Xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:25:41 +0000
Received: from [85.158.139.83:37985] by server-2.bemta-5.messagelabs.com id
	20/0C-04892-49460C05; Thu, 06 Dec 2012 09:25:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354785935!17382992!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17286 invoked from network); 6 Dec 2012 09:25:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:25:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16192991"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:25:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:25:32 +0000
Message-ID: <1354785931.17165.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Thu, 6 Dec 2012 09:25:31 +0000
In-Reply-To: <20121205174153.76fa5dd1@mantra.us.oracle.com>
References: <20121205174153.76fa5dd1@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>, Jan
	Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] page refcount on pages mapped by the toolstack..
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 01:41 +0000, Mukesh Rathor wrote:
> Hi,
> 
> I observed something that doesn't seem right to me:
> 
> PV dom0 booting PV guest (say domid 1). (no PVH).
> 
> xl cr vm.cfg.pv
> 
> Take some mfn from domid 1. Its refcnt is 1 as expected.

Any arbitrary mfn or some particular mfn?

>  Now, the lib
> wants to map it via xen_remap_domain_mfn_range(). The call goes thru
> do_mmu_update(), and upon returning the refcnt is 2, as expected.
> 
> Now, I noticed the refcnt doesn't go back to 1 after the guest is 
> created/booted. I'd have expected the process exit somewhere to have
> resulted in the refcnt going down to 1 (which is what would happen in case
> of PVH dom0).

Which process exit? xl will daemonize and keep running in the
background. I'm not sure which pages it will keep mapped, might it
actually be xenstored or something similar which has the extra
reference?

Do privcmd mappings show up in /proc/<pid>/maps?

> The guest is up, I notice the refcnt is 2. I shutdown the guest, the
> refcnt goes to 0 and the page is freed via relinquish_memory() called
> from domain_relinquish_resources(). I would have expected the page
> to hang around with refcnt 1; what if the user process still has it mapped?
> 
> What am I missing?

Doesn't the user process exit when the domain shuts down, thereby
releasing the other mapping?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:26:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:26:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXiA-0006tw-Aa; Thu, 06 Dec 2012 09:25:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgXi9-0006to-32
	for Xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:25:41 +0000
Received: from [85.158.139.83:37985] by server-2.bemta-5.messagelabs.com id
	20/0C-04892-49460C05; Thu, 06 Dec 2012 09:25:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354785935!17382992!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17286 invoked from network); 6 Dec 2012 09:25:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:25:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16192991"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:25:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:25:32 +0000
Message-ID: <1354785931.17165.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Thu, 6 Dec 2012 09:25:31 +0000
In-Reply-To: <20121205174153.76fa5dd1@mantra.us.oracle.com>
References: <20121205174153.76fa5dd1@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>, Jan
	Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] page refcount on pages mapped by the toolstack..
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 01:41 +0000, Mukesh Rathor wrote:
> Hi,
> 
> I observed something that doesn't seem right to me:
> 
> PV dom0 booting PV guest (say domid 1). (no PVH).
> 
> xl cr vm.cfg.pv
> 
> Take some mfn from domid 1. Its refcnt is 1 as expected.

Any arbitrary mfn or some particular mfn?

>  Now, the lib
> wants to map it via xen_remap_domain_mfn_range(). The call goes thru
> do_mmu_update(), and upon returning the refcnt is 2, as expected.
> 
> Now, I noticed the refcnt doesn't go back to 1 after the guest is 
> created/booted. I'd have expected the process exit somewhere to have
> resulted in the refcnt going down to 1 (which is what would happen in case
> of PVH dom0).

Which process exit? xl will daemonize and keep running in the
background. I'm not sure which pages it will keep mapped, might it
actually be xenstored or something similar which has the extra
reference?

Do privcmd mappings show up in /proc/<pid>/maps?

> The guest is up, I notice the refcnt is 2. I shutdown the guest, the
> refcnt goes to 0 and the page is freed via relinquish_memory() called
> from domain_relinquish_resources(). I would have expected the page
> to hang with refcnt 1, what if the user process still has it mapped?
> 
> What am I missing?

Doesn't the user process exit when the domain shuts down, thereby
releasing the other mapping?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:29:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:29:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXl8-00073O-VF; Thu, 06 Dec 2012 09:28:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TgXl8-00073H-1r
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 09:28:46 +0000
Received: from [85.158.139.83:57888] by server-4.bemta-5.messagelabs.com id
	CF/DF-15011-D4560C05; Thu, 06 Dec 2012 09:28:45 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354786124!21393556!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MTgyMTU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MTgyMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13201 invoked from network); 6 Dec 2012 09:28:44 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 09:28:44 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2c7ofw==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-090-003.pools.arcor-ip.net [84.57.90.3])
	by smtp.strato.de (joses mo17) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id w0647doB696M64
	for <xen-devel@lists.xen.org>; Thu, 6 Dec 2012 10:28:44 +0100 (CET)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id ABC901839A
	for <xen-devel@lists.xen.org>; Thu,  6 Dec 2012 10:28:43 +0100 (CET)
MIME-Version: 1.0
X-Mercurial-Node: 5dab922f04f4790b70d43d062114bddcc1bb7de6
Message-Id: <5dab922f04f4790b70d4.1354786122@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 06 Dec 2012 10:28:42 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] tools/gdbsx: fix build failure with glibc-2.17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1354786108 -3600
# Node ID 5dab922f04f4790b70d43d062114bddcc1bb7de6
# Parent  670b07e8d7382229639af0d1df30071e6c1ebb19
tools/gdbsx: fix build failure with glibc-2.17

[ 1029s] xg_main.c: In function '_domctl_hcall':
[ 1029s] xg_main.c:181:52: error: 'ulong' undeclared (first use in this function)
[ 1029s] xg_main.c:181:52: note: each undeclared identifier is reported only once for each function it appears in
[ 1029s] xg_main.c: In function '_check_hyp':
[ 1029s] xg_main.c:221:52: error: 'ulong' undeclared (first use in this function)
[ 1029s] make[4]: *** [xg_main.o] Error 1

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 670b07e8d738 -r 5dab922f04f4 tools/debugger/gdbsx/xg/xg_main.c
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -34,6 +34,7 @@
  *  XGTRC(): generic trace utility
  */
 
+#include <sys/types.h>
 #include <stdio.h>
 #include <stddef.h>
 #include <stdarg.h>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:31:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXnO-0007F7-HB; Thu, 06 Dec 2012 09:31:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgXnM-0007EB-NW
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:31:05 +0000
Received: from [85.158.137.99:29384] by server-9.bemta-3.messagelabs.com id
	BF/EC-02388-3D560C05; Thu, 06 Dec 2012 09:30:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354786257!17294476!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5617 invoked from network); 6 Dec 2012 09:30:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:30:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193131"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:30:57 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:30:56 +0000
Message-ID: <1354786255.17165.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 6 Dec 2012 09:30:55 +0000
In-Reply-To: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
References: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard
	to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 01:55 +0000, Liu Jinsong wrote:
> At the sender
>   xc_domain_save has a key point: 'to query the types of all the pages
>   with xc_get_pfn_type_batch'
>   1) if a broken page occurs before the key point, migration will be fine,
>      since the proper pfn_type and pfn number will be transferred to the
>      target, which then takes the appropriate action;
>   2) if a broken page occurs after the key point, the whole system will
>      crash, and migration no longer matters;
> 
> At the target
>   The target will populate pages for the guest. For a broken page, we
>   prefer to keep the type of the page for the sake of seamless migration.
>   The target will set the p2m entry to p2m_ram_broken for the broken page.
>   If the guest accesses the broken page again, it will kill itself as
>   expected.
> 
> Patch version history:
> V5:
>   - remove extra check at the last iteration
>   - remove marking broken page to dirty bitmap

I'm not totally convinced that this second one isn't required, but
what's here is correct as far as it goes.

Acked-by: Ian Campbell <ian.campbell@citrix.com>

If someone from the hypervisor side wants to re-ack I'll apply (it
doesn't look to have changed all that much to me, but I don't want to
presume to take it).

> V4:
>   - adjust variables and patch description based on feedback
> V3:
>   - handle pages broken at the last iteration
> V2:
>   - continue migration when a broken page occurs during migration,
>     by marking the broken page in the dirty bitmap
> V1:
>   - abort migration when a broken page occurs during migration
>   - transfer pfn_type to the target for broken pages occurring before
>     migration
> 
> Suggested-by: George Dunlap <george.dunlap@eu.citrix.com>
> Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
> 
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain.c
> --- a/tools/libxc/xc_domain.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xc_domain.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -283,6 +283,22 @@
>      return ret;
>  }
>  
> +/* set broken page p2m */
> +int xc_set_broken_page_p2m(xc_interface *xch,
> +                           uint32_t domid,
> +                           unsigned long pfn)
> +{
> +    int ret;
> +    DECLARE_DOMCTL;
> +
> +    domctl.cmd = XEN_DOMCTL_set_broken_page_p2m;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.set_broken_page_p2m.pfn = pfn;
> +    ret = do_domctl(xch, &domctl);
> +
> +    return ret ? -1 : 0;
> +}
> +
>  /* get info from hvm guest for save */
>  int xc_domain_hvm_getcontext(xc_interface *xch,
>                               uint32_t domid,
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_restore.c
> --- a/tools/libxc/xc_domain_restore.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xc_domain_restore.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -1023,9 +1023,15 @@
>  
>      countpages = count;
>      for (i = oldcount; i < buf->nr_pages; ++i)
> -        if ((buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XTAB
> -            ||(buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XALLOC)
> +    {
> +        unsigned long pagetype;
> +
> +        pagetype = buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;
> +        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB ||
> +             pagetype == XEN_DOMCTL_PFINFO_BROKEN ||
> +             pagetype == XEN_DOMCTL_PFINFO_XALLOC )
>              --countpages;
> +    }
>  
>      if (!countpages)
>          return count;
> @@ -1267,6 +1273,17 @@
>              /* a bogus/unmapped/allocate-only page: skip it */
>              continue;
>  
> +        if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
> +        {
> +            if ( xc_set_broken_page_p2m(xch, dom, pfn) )
> +            {
> +                ERROR("Set p2m for broken page failed, "
> +                      "dom=%d, pfn=%lx\n", dom, pfn);
> +                goto err_mapped;
> +            }
> +            continue;
> +        }
> +
>          if (pfn_err[i])
>          {
>              ERROR("unexpected PFN mapping failure pfn %lx map_mfn %lx p2m_mfn %lx",
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_save.c
> --- a/tools/libxc/xc_domain_save.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xc_domain_save.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -1277,6 +1277,13 @@
>                  if ( !hvm )
>                      gmfn = pfn_to_mfn(gmfn);
>  
> +                if ( pfn_type[j] == XEN_DOMCTL_PFINFO_BROKEN )
> +                {
> +                    pfn_type[j] |= pfn_batch[j];
> +                    ++run;
> +                    continue;
> +                }
> +
>                  if ( pfn_err[j] )
>                  {
>                      if ( pfn_type[j] == XEN_DOMCTL_PFINFO_XTAB )
> @@ -1371,8 +1378,12 @@
>                      }
>                  }
>  
> -                /* skip pages that aren't present or are alloc-only */
> +                /*
> +                 * skip pages that aren't present,
> +                 * or are broken, or are alloc-only
> +                 */
>                  if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
> +                    || pagetype == XEN_DOMCTL_PFINFO_BROKEN
>                      || pagetype == XEN_DOMCTL_PFINFO_XALLOC )
>                      continue;
>  
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xenctrl.h	Thu Dec 06 09:50:40 2012 +0800
> @@ -575,6 +575,17 @@
>                            xc_domaininfo_t *info);
>  
>  /**
> + * This function set p2m for broken page
> + * &parm xch a handle to an open hypervisor interface
> + * @parm domid the domain id which broken page belong to
> + * @parm pfn the pfn number of the broken page
> + * @return 0 on success, -1 on failure
> + */
> +int xc_set_broken_page_p2m(xc_interface *xch,
> +                           uint32_t domid,
> +                           unsigned long pfn);
> +
> +/**
>   * This function returns information about the context of a hvm domain
>   * @parm xch a handle to an open hypervisor interface
>   * @parm domid the domain to get information from
> diff -r 8b93ac0c93f3 -r 75d16e26926c xen/arch/x86/domctl.c
> --- a/xen/arch/x86/domctl.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/xen/arch/x86/domctl.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -209,12 +209,18 @@
>                  for ( j = 0; j < k; j++ )
>                  {
>                      unsigned long type = 0;
> +                    p2m_type_t t;
>  
> -                    page = get_page_from_gfn(d, arr[j], NULL, P2M_ALLOC);
> +                    page = get_page_from_gfn(d, arr[j], &t, P2M_ALLOC);
>  
>                      if ( unlikely(!page) ||
>                           unlikely(is_xen_heap_page(page)) )
> -                        type = XEN_DOMCTL_PFINFO_XTAB;
> +                    {
> +                        if ( p2m_is_broken(t) )
> +                            type = XEN_DOMCTL_PFINFO_BROKEN;
> +                        else
> +                            type = XEN_DOMCTL_PFINFO_XTAB;
> +                    }
>                      else
>                      {
>                          switch( page->u.inuse.type_info & PGT_type_mask )
> @@ -235,6 +241,9 @@
>  
>                          if ( page->u.inuse.type_info & PGT_pinned )
>                              type |= XEN_DOMCTL_PFINFO_LPINTAB;
> +
> +                        if ( page->count_info & PGC_broken )
> +                            type = XEN_DOMCTL_PFINFO_BROKEN;
>                      }
>  
>                      if ( page )
> @@ -1568,6 +1577,29 @@
>      }
>      break;
>  
> +    case XEN_DOMCTL_set_broken_page_p2m:
> +    {
> +        struct domain *d;
> +
> +        d = rcu_lock_domain_by_id(domctl->domain);
> +        if ( d != NULL )
> +        {
> +            p2m_type_t pt;
> +            unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
> +            mfn_t mfn = get_gfn_query(d, pfn, &pt);
> +
> +            if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
> +                         (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
> +                ret = -EINVAL;
> +
> +            put_gfn(d, pfn);
> +            rcu_unlock_domain(d);
> +        }
> +        else
> +            ret = -ESRCH;
> +    }
> +    break;
> +
>      default:
>          ret = iommu_do_domctl(domctl, u_domctl);
>          break;
> diff -r 8b93ac0c93f3 -r 75d16e26926c xen/include/public/domctl.h
> --- a/xen/include/public/domctl.h	Tue Nov 13 11:19:17 2012 +0000
> +++ b/xen/include/public/domctl.h	Thu Dec 06 09:50:40 2012 +0800
> @@ -136,6 +136,7 @@
>  #define XEN_DOMCTL_PFINFO_LPINTAB (0x1U<<31)
>  #define XEN_DOMCTL_PFINFO_XTAB    (0xfU<<28) /* invalid page */
>  #define XEN_DOMCTL_PFINFO_XALLOC  (0xeU<<28) /* allocate-only page */
> +#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU<<28) /* broken page */
>  #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
>  
>  struct xen_domctl_getpageframeinfo {
> @@ -834,6 +835,12 @@
>  typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
>  
> +struct xen_domctl_set_broken_page_p2m {
> +    uint64_aligned_t pfn;
> +};
> +typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -899,6 +906,7 @@
>  #define XEN_DOMCTL_set_access_required           64
>  #define XEN_DOMCTL_audit_p2m                     65
>  #define XEN_DOMCTL_set_virq_handler              66
> +#define XEN_DOMCTL_set_broken_page_p2m           67
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -954,6 +962,7 @@
>          struct xen_domctl_audit_p2m         audit_p2m;
>          struct xen_domctl_set_virq_handler  set_virq_handler;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
> +        struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:31:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXnO-0007F7-HB; Thu, 06 Dec 2012 09:31:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgXnM-0007EB-NW
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:31:05 +0000
Received: from [85.158.137.99:29384] by server-9.bemta-3.messagelabs.com id
	BF/EC-02388-3D560C05; Thu, 06 Dec 2012 09:30:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354786257!17294476!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5617 invoked from network); 6 Dec 2012 09:30:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:30:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193131"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:30:57 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:30:56 +0000
Message-ID: <1354786255.17165.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 6 Dec 2012 09:30:55 +0000
In-Reply-To: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
References: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard
	to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 01:55 +0000, Liu Jinsong wrote:
> At the sender
>   xc_domain_save has a key point: 'to query the types of all the pages
>   with xc_get_pfn_type_batch'
>   1) if broken page occur before the key point, migration will be fine
>      since proper pfn_type and pfn number will be transferred to the
>      target and then take appropriate action;
>   2) if broken page occur after the key point, whole system will crash
>      and no need care migration any more;
> 
> At the target
>   Target will populates pages for guest. As for the case of broken page,
>   we prefer to keep the type of the page for the sake of seamless migration.
>   Target will set p2m as p2m_ram_broken for broken page. If guest access
>   the broken page again it will kill itself as expected.
> 
> Patch version history:
> V5:
>   - remove extra check at the last iteration
>   - remove marking broken page to dirty bitmap

I'm not totally convinced that this second one isn't required, but
what's here is correct as far as it goes.

Acked-by: Ian Campbell <ian.campbell@citrix.com>

If someone from the hypervisor side wants to re-ack I'll apply (it
doesn't look to have changed all that much to me, but I don't want to
presume to take it).

> V4:
>   - adjust variables and patch description based on feedback
> V3:
>   - handle pages broken at the last iteration
> V2:
>   - migrate continue when broken page occur during migration,
>     via marking broken page to dirty bitmap
> V1:
>   - migration abort when broken page occur during migration
>   - transfer pfn_type to target for broken page occur before migration
> 
> Suggested-by: George Dunlap <george.dunlap@eu.citrix.com>
> Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
> 
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain.c
> --- a/tools/libxc/xc_domain.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xc_domain.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -283,6 +283,22 @@
>      return ret;
>  }
>  
> +/* set broken page p2m */
> +int xc_set_broken_page_p2m(xc_interface *xch,
> +                           uint32_t domid,
> +                           unsigned long pfn)
> +{
> +    int ret;
> +    DECLARE_DOMCTL;
> +
> +    domctl.cmd = XEN_DOMCTL_set_broken_page_p2m;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.set_broken_page_p2m.pfn = pfn;
> +    ret = do_domctl(xch, &domctl);
> +
> +    return ret ? -1 : 0;
> +}
> +
>  /* get info from hvm guest for save */
>  int xc_domain_hvm_getcontext(xc_interface *xch,
>                               uint32_t domid,
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_restore.c
> --- a/tools/libxc/xc_domain_restore.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xc_domain_restore.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -1023,9 +1023,15 @@
>  
>      countpages = count;
>      for (i = oldcount; i < buf->nr_pages; ++i)
> -        if ((buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XTAB
> -            ||(buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XALLOC)
> +    {
> +        unsigned long pagetype;
> +
> +        pagetype = buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;
> +        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB ||
> +             pagetype == XEN_DOMCTL_PFINFO_BROKEN ||
> +             pagetype == XEN_DOMCTL_PFINFO_XALLOC )
>              --countpages;
> +    }
>  
>      if (!countpages)
>          return count;
> @@ -1267,6 +1273,17 @@
>              /* a bogus/unmapped/allocate-only page: skip it */
>              continue;
>  
> +        if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
> +        {
> +            if ( xc_set_broken_page_p2m(xch, dom, pfn) )
> +            {
> +                ERROR("Set p2m for broken page failed, "
> +                      "dom=%d, pfn=%lx\n", dom, pfn);
> +                goto err_mapped;
> +            }
> +            continue;
> +        }
> +
>          if (pfn_err[i])
>          {
>              ERROR("unexpected PFN mapping failure pfn %lx map_mfn %lx p2m_mfn %lx",
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xc_domain_save.c
> --- a/tools/libxc/xc_domain_save.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xc_domain_save.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -1277,6 +1277,13 @@
>                  if ( !hvm )
>                      gmfn = pfn_to_mfn(gmfn);
>  
> +                if ( pfn_type[j] == XEN_DOMCTL_PFINFO_BROKEN )
> +                {
> +                    pfn_type[j] |= pfn_batch[j];
> +                    ++run;
> +                    continue;
> +                }
> +
>                  if ( pfn_err[j] )
>                  {
>                      if ( pfn_type[j] == XEN_DOMCTL_PFINFO_XTAB )
> @@ -1371,8 +1378,12 @@
>                      }
>                  }
>  
> -                /* skip pages that aren't present or are alloc-only */
> +                /*
> +                 * skip pages that aren't present,
> +                 * or are broken, or are alloc-only
> +                 */
>                  if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
> +                    || pagetype == XEN_DOMCTL_PFINFO_BROKEN
>                      || pagetype == XEN_DOMCTL_PFINFO_XALLOC )
>                      continue;
>  
> diff -r 8b93ac0c93f3 -r 75d16e26926c tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h	Tue Nov 13 11:19:17 2012 +0000
> +++ b/tools/libxc/xenctrl.h	Thu Dec 06 09:50:40 2012 +0800
> @@ -575,6 +575,17 @@
>                            xc_domaininfo_t *info);
>  
>  /**
> + * This function sets the p2m type for a broken page
> + * @parm xch a handle to an open hypervisor interface
> + * @parm domid the domain id which the broken page belongs to
> + * @parm pfn the pfn number of the broken page
> + * @return 0 on success, -1 on failure
> + */
> +int xc_set_broken_page_p2m(xc_interface *xch,
> +                           uint32_t domid,
> +                           unsigned long pfn);
> +
> +/**
>   * This function returns information about the context of a hvm domain
>   * @parm xch a handle to an open hypervisor interface
>   * @parm domid the domain to get information from
> diff -r 8b93ac0c93f3 -r 75d16e26926c xen/arch/x86/domctl.c
> --- a/xen/arch/x86/domctl.c	Tue Nov 13 11:19:17 2012 +0000
> +++ b/xen/arch/x86/domctl.c	Thu Dec 06 09:50:40 2012 +0800
> @@ -209,12 +209,18 @@
>                  for ( j = 0; j < k; j++ )
>                  {
>                      unsigned long type = 0;
> +                    p2m_type_t t;
>  
> -                    page = get_page_from_gfn(d, arr[j], NULL, P2M_ALLOC);
> +                    page = get_page_from_gfn(d, arr[j], &t, P2M_ALLOC);
>  
>                      if ( unlikely(!page) ||
>                           unlikely(is_xen_heap_page(page)) )
> -                        type = XEN_DOMCTL_PFINFO_XTAB;
> +                    {
> +                        if ( p2m_is_broken(t) )
> +                            type = XEN_DOMCTL_PFINFO_BROKEN;
> +                        else
> +                            type = XEN_DOMCTL_PFINFO_XTAB;
> +                    }
>                      else
>                      {
>                          switch( page->u.inuse.type_info & PGT_type_mask )
> @@ -235,6 +241,9 @@
>  
>                          if ( page->u.inuse.type_info & PGT_pinned )
>                              type |= XEN_DOMCTL_PFINFO_LPINTAB;
> +
> +                        if ( page->count_info & PGC_broken )
> +                            type = XEN_DOMCTL_PFINFO_BROKEN;
>                      }
>  
>                      if ( page )
> @@ -1568,6 +1577,29 @@
>      }
>      break;
>  
> +    case XEN_DOMCTL_set_broken_page_p2m:
> +    {
> +        struct domain *d;
> +
> +        d = rcu_lock_domain_by_id(domctl->domain);
> +        if ( d != NULL )
> +        {
> +            p2m_type_t pt;
> +            unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
> +            mfn_t mfn = get_gfn_query(d, pfn, &pt);
> +
> +            if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
> +                         (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
> +                ret = -EINVAL;
> +
> +            put_gfn(d, pfn);
> +            rcu_unlock_domain(d);
> +        }
> +        else
> +            ret = -ESRCH;
> +    }
> +    break;
> +
>      default:
>          ret = iommu_do_domctl(domctl, u_domctl);
>          break;
> diff -r 8b93ac0c93f3 -r 75d16e26926c xen/include/public/domctl.h
> --- a/xen/include/public/domctl.h	Tue Nov 13 11:19:17 2012 +0000
> +++ b/xen/include/public/domctl.h	Thu Dec 06 09:50:40 2012 +0800
> @@ -136,6 +136,7 @@
>  #define XEN_DOMCTL_PFINFO_LPINTAB (0x1U<<31)
>  #define XEN_DOMCTL_PFINFO_XTAB    (0xfU<<28) /* invalid page */
>  #define XEN_DOMCTL_PFINFO_XALLOC  (0xeU<<28) /* allocate-only page */
> +#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU<<28) /* broken page */
>  #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
>  
>  struct xen_domctl_getpageframeinfo {
> @@ -834,6 +835,12 @@
>  typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
>  
> +struct xen_domctl_set_broken_page_p2m {
> +    uint64_aligned_t pfn;
> +};
> +typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -899,6 +906,7 @@
>  #define XEN_DOMCTL_set_access_required           64
>  #define XEN_DOMCTL_audit_p2m                     65
>  #define XEN_DOMCTL_set_virq_handler              66
> +#define XEN_DOMCTL_set_broken_page_p2m           67
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -954,6 +962,7 @@
>          struct xen_domctl_audit_p2m         audit_p2m;
>          struct xen_domctl_set_virq_handler  set_virq_handler;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
> +        struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:32:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:32:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgXoU-0007Ot-W3; Thu, 06 Dec 2012 09:32:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgXoU-0007Oh-18
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 09:32:14 +0000
Received: from [85.158.137.99:55254] by server-7.bemta-3.messagelabs.com id
	0D/5D-01713-D1660C05; Thu, 06 Dec 2012 09:32:13 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354786331!18175209!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12474 invoked from network); 6 Dec 2012 09:32:12 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 09:32:12 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgXoN-000Lb3-13; Thu, 06 Dec 2012 09:32:07 +0000
Date: Thu, 6 Dec 2012 09:32:06 +0000
From: Tim Deegan <tim@xen.org>
To: Robert Phillips <robert.phillips@citrix.com>
Message-ID: <20121206093206.GA82725@ocelot.phlegethon.org>
References: <50B7087D.20407@jp.fujitsu.com>
	<20121129154041.GD80627@ocelot.phlegethon.org>
	<048EAD622912254A9DEA24C1734613C18C87CC821E@FTLPMAILBOX02.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <048EAD622912254A9DEA24C1734613C18C87CC821E@FTLPMAILBOX02.citrite.net>
User-Agent: Mutt/1.4.2.1i
Cc: Kouya Shimura <kouya@jp.fujitsu.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/hap: fix race condition between
	ENABLE_LOGDIRTY and track_dirty_vram hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

At 12:59 -0500 on 03 Dec (1354539567), Robert Phillips wrote:
> > Robert, in your patch you do wrap this all in the paging_lock, but then
> > unlock to call various enable and disable routines.  Is there a version
> > of this race condition there, where some other CPU might call
> > LOG_DIRTY_ENABLE while you've temporarily dropped the lock?
> 
> My proposed patch does not modify the problematic locking code so, 
> unfortunately, it preserves the race condition that Kouya Shimura 
> has discovered.  
> 
> I question whether his proposed patch would be suitable for the 
> multiple frame buffer situation that my proposed patch addresses.
> It is possible that a guest might be updating its frame buffers when 
> live migration starts, and the same race would result.
> 
> I think the domain.arch.paging.log_dirty  function pointers are problematic.
> They are modified and executed without benefit of locking.
> 
> I am uncomfortable with adding another lock.
> 
> I will look at updating my patch to avoid the race and will (hopefully) 
> avoid adding another lock.

Thanks.  I think the paging_lock can probably cover everything we need
here.  These are toolstack operations and should be fairly rare, and HAP
can do most of its work without the paging_lock.

Also, in the next version can you please update this section:

+int hap_track_dirty_vram(struct domain *d,
+                         unsigned long begin_pfn,
+                         unsigned long nr,
+                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
+{
+    long rc = 0;
+    dv_dirty_vram_t *dirty_vram;
+       
+    paging_lock(d);
+    dirty_vram = d->arch.hvm_domain.dirty_vram;
+    if ( nr )
+    {
+        dv_range_t *range = NULL;
+        int size = ( nr + BITS_PER_LONG - 1 ) & ~( BITS_PER_LONG - 1 );
+        uint8_t dirty_bitmap[size];

not to allocate a guest-specified amount of stack memory.  This is one
of the things recently found and fixed in the existing code as XSA-27.
http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg/rev/53ef1f35a0f8

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:45:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgY19-0007ni-CA; Thu, 06 Dec 2012 09:45:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgY18-0007nd-1Z
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:45:18 +0000
Received: from [85.158.143.99:52692] by server-2.bemta-4.messagelabs.com id
	8E/AD-30861-D2960C05; Thu, 06 Dec 2012 09:45:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1354787115!21900462!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25175 invoked from network); 6 Dec 2012 09:45:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:45:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193581"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:45:15 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:45:15 +0000
Message-ID: <1354787114.17165.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:45:14 +0000
In-Reply-To: <1354734105-7007-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354734105-7007-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 19:01 +0000, Stefano Stabellini wrote:

> @@ -306,10 +306,28 @@ static void __cpuinit gic_hyp_disable(void)
>  /* Set up the GIC */
>  void __init gic_init(void)
>  {
> -    /* XXX FIXME get this from devicetree */
> -    gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET;
> -    gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET;
> -    gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET;
> +       printk("GIC initialization:\n"
> +              "        gic_dist_addr=%"PRIpaddr"\n"
> +              "        gic_cpu_addr=%"PRIpaddr"\n"
> +              "        gic_hyp_addr=%"PRIpaddr"\n"
> +              "        gic_vcpu_addr=%"PRIpaddr"\n",
> +              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
> +              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);

I think there's a hard tab in here somewhere.

> +static void device_tree_nr_reg_ranges(const struct fdt_property *prop,
> +        u32 address_cells, u32 size_cells, int *ranges)
> +{
> +    u32 reg_cells = address_cells + size_cells;
> +    *ranges = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));

Why not just return the value here?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:47:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgY3N-0007uh-2m; Thu, 06 Dec 2012 09:47:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgY3L-0007ub-JF
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:47:35 +0000
Received: from [85.158.139.83:26155] by server-3.bemta-5.messagelabs.com id
	29/8C-18736-6B960C05; Thu, 06 Dec 2012 09:47:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354787253!24658532!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14469 invoked from network); 6 Dec 2012 09:47:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:47:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193627"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:47:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:47:02 +0000
Message-ID: <1354787220.17165.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:47:00 +0000
In-Reply-To: <1354734105-7007-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354734105-7007-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/arm: use strcmp in
	device_tree_type_matches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 19:01 +0000, Stefano Stabellini wrote:
> We want to match the exact string rather than the first subset.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/common/device_tree.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index b321d99..3d4a7a9 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -51,7 +51,7 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
>      if ( prop == NULL )
>          return 0;
>  
> -    return !strncmp(prop, match, len);
> +    return !strcmp(prop, match);

fdt_getprop handled lenp == NULL so you can actually get rid of len
completely.


>  }
>  
>  bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:47:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgY3N-0007uh-2m; Thu, 06 Dec 2012 09:47:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgY3L-0007ub-JF
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:47:35 +0000
Received: from [85.158.139.83:26155] by server-3.bemta-5.messagelabs.com id
	29/8C-18736-6B960C05; Thu, 06 Dec 2012 09:47:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354787253!24658532!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14469 invoked from network); 6 Dec 2012 09:47:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:47:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193627"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:47:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:47:02 +0000
Message-ID: <1354787220.17165.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:47:00 +0000
In-Reply-To: <1354734105-7007-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354734105-7007-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/arm: use strcmp in
	device_tree_type_matches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 19:01 +0000, Stefano Stabellini wrote:
> We want to match the exact string rather than the first subset.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/common/device_tree.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index b321d99..3d4a7a9 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -51,7 +51,7 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
>      if ( prop == NULL )
>          return 0;
>  
> -    return !strncmp(prop, match, len);
> +    return !strcmp(prop, match);

fdt_getprop handles lenp == NULL, so you can actually get rid of len
completely.


>  }
>  
>  bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:51:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:51:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgY6y-00088O-NU; Thu, 06 Dec 2012 09:51:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgY6x-000883-PC
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 09:51:19 +0000
Received: from [85.158.143.99:25436] by server-3.bemta-4.messagelabs.com id
	0B/7F-18211-79A60C05; Thu, 06 Dec 2012 09:51:19 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354787477!27407847!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11006 invoked from network); 6 Dec 2012 09:51:17 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 09:51:17 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgY6u-000Lfb-36; Thu, 06 Dec 2012 09:51:16 +0000
Date: Thu, 6 Dec 2012 09:51:16 +0000
From: Tim Deegan <tim@xen.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121206095116.GB82725@ocelot.phlegethon.org>
References: <1354289830-24642-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354289830-24642-16-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354289830-24642-16-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.4.2.1i
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 15/23] arch/x86: use XSM hooks for
	get_pg_owner access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:37 -0500 on 30 Nov (1354271822), Daniel De Graaf wrote:
> There are three callers of get_pg_owner:
>  * do_mmuext_op, which does not have XSM hooks on all subfunctions
>  * do_mmu_update, which has hooks that are inefficient
>  * do_update_va_mapping_otherdomain, which has a simple XSM hook
> 
> In order to preserve return values for the do_mmuext_op hypercall, an
> additional XSM hook is required to check the operation even for those
> subfunctions that do not use the pg_owner field. This also covers the
> MMUEXT_UNPIN_TABLE operation which did not previously have an XSM hook.
> 
> The XSM hooks in do_mmu_update were capable of replacing the checks in
> get_pg_owner; however, the hooks are buried in the inner loop of the
> function - not very good for performance when XSM is enabled and these
> turn into indirect function calls. This patch removes the PTE from the
> hooks and replaces it with a bitfield describing what accesses are being
> requested. The XSM hook can then be called only when additional bits are
> set instead of once per iteration of the loop.
> 
> This patch results in a change in the FLASK permissions used for mapping
> an MMIO page: the target for the permission check on the memory mapping
> is no longer resolved to the device-specific type, and is instead either
> the domain's own type or domio_t (depending on if the domain uses
> DOMID_SELF or DOMID_IO in the map command). Device-specific access is
> still controlled via the "resource use" permission checked at domain
> creation (or device hotplug).
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Keir Fraser <keir@xen.org>

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:51:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgY77-0008AN-3s; Thu, 06 Dec 2012 09:51:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgY75-00089R-FQ
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 09:51:27 +0000
Received: from [85.158.139.211:52364] by server-16.bemta-5.messagelabs.com id
	69/B5-21311-E9A60C05; Thu, 06 Dec 2012 09:51:26 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354787485!19342014!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19744 invoked from network); 6 Dec 2012 09:51:25 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 09:51:25 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgY73-000Lfs-00; Thu, 06 Dec 2012 09:51:25 +0000
Date: Thu, 6 Dec 2012 09:51:24 +0000
From: Tim Deegan <tim@xen.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121206095124.GC82725@ocelot.phlegethon.org>
References: <1354289830-24642-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354289830-24642-19-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354289830-24642-19-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.4.2.1i
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 18/23] xen/arch/*: add struct domain
	parameter to arch_do_domctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:37 -0500 on 30 Nov (1354271825), Daniel De Graaf wrote:
> Since the arch-independent do_domctl function now RCU locks the domain
> specified by op->domain, pass the struct domain to the arch-specific
> domctl function and remove the duplicate per-subfunction locking.
> 
> This also removes two get_domain/put_domain call pairs (in
> XEN_DOMCTL_assign_device and XEN_DOMCTL_deassign_device), replacing them
> with RCU locking.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:52:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgY8E-0008Lg-JH; Thu, 06 Dec 2012 09:52:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgY8D-0008LU-AZ
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 09:52:37 +0000
Received: from [85.158.139.83:27751] by server-9.bemta-5.messagelabs.com id
	46/7E-29295-4EA60C05; Thu, 06 Dec 2012 09:52:36 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354787495!24659331!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2113 invoked from network); 6 Dec 2012 09:51:36 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 09:51:36 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgY7D-000LgA-6w; Thu, 06 Dec 2012 09:51:35 +0000
Date: Thu, 6 Dec 2012 09:51:35 +0000
From: Tim Deegan <tim@xen.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121206095135.GD82725@ocelot.phlegethon.org>
References: <1354289830-24642-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354289830-24642-24-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354289830-24642-24-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.4.2.1i
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 23/23] xen/xsm: Add xsm_default parameter to
	XSM hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:37 -0500 on 30 Nov (1354271830), Daniel De Graaf wrote:
> Include the default XSM hook action as the first argument of the hook to
> facilitate quick understanding of how the call site is expected to be
> used (dom0-only, arbitrary guest, or device model). This argument does
> not solely define how a given hook is interpreted, since any changes to
> the hook's default action need to be made identically to all callers of
> a hook (if there are multiple callers; most hooks only have one), and
> may also require changing the arguments of the hook.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Keir Fraser <keir@xen.org>

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYAw-00007B-6n; Thu, 06 Dec 2012 09:55:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYAu-00006t-AE
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:55:24 +0000
Received: from [85.158.139.83:41736] by server-14.bemta-5.messagelabs.com id
	9D/18-21768-B8B60C05; Thu, 06 Dec 2012 09:55:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354787722!28662958!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24881 invoked from network); 6 Dec 2012 09:55:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193976"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:55:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:55:22 +0000
Message-ID: <1354787720.17165.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:55:20 +0000
In-Reply-To: <1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/6] xen/arm: introduce map_phys_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> Introduce a function to map a physical memory range into virtual memory.
> It is going to be used later to map the videoram.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/mm.c        |   23 +++++++++++++++++++++++
>  xen/include/asm-arm/mm.h |    3 +++
>  2 files changed, 26 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 68ee9da..418a414 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -376,6 +376,29 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
>  }
>  
> +/* Map the physical memory range start -  end at the virtual address
> + * virt_start in 2MB chunks. start and virt_start have to be 2MB
> + * aligned.
> + */
> +void map_phys_range(paddr_t start, paddr_t end,
> +        unsigned long virt_start, unsigned attributes)
> +{
> +    ASSERT(!(start & ((1 << 21) - 1)));
> +    ASSERT(!(virt_start & ((1 << 21) - 1)));
> +
> +    while ( start < end )
> +    {
> +        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
> +        e.pt.ai = attributes;
> +        write_pte(xen_second + second_table_offset(virt_start), e);
> +        
> +        start += (1<<21);
> +        virt_start += (1<<21);
> +    }
> +
> +    flush_xen_data_tlb();

What does this flush? The ptes are flushed by the write_pte aren't they?
Should this be a range over virt_start + len?

> +}
> +
>  enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
>  static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>  {
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 3549c83..a11f20b 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -152,6 +152,9 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
>  extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
>  /* Remove a mapping from a fixmap entry */
>  extern void clear_fixmap(unsigned map);
> +/* map a 2MB aligned physical range in virtual memory. */
> +extern void map_phys_range(paddr_t start, paddr_t end,
> +		unsigned long virt_start, unsigned attributes);

Hard tabs I'm afraid.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYAw-00007B-6n; Thu, 06 Dec 2012 09:55:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYAu-00006t-AE
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:55:24 +0000
Received: from [85.158.139.83:41736] by server-14.bemta-5.messagelabs.com id
	9D/18-21768-B8B60C05; Thu, 06 Dec 2012 09:55:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354787722!28662958!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24881 invoked from network); 6 Dec 2012 09:55:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16193976"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:55:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:55:22 +0000
Message-ID: <1354787720.17165.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:55:20 +0000
In-Reply-To: <1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/6] xen/arm: introduce map_phys_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> Introduce a function to map a physical memory range into virtual memory.
> It is going to be used later to map the videoram.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/mm.c        |   23 +++++++++++++++++++++++
>  xen/include/asm-arm/mm.h |    3 +++
>  2 files changed, 26 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 68ee9da..418a414 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -376,6 +376,29 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
>  }
>  
> +/* Map the physical memory range start - end at the virtual address
> + * virt_start in 2MB chunks. start and virt_start have to be 2MB
> + * aligned.
> + */
> +void map_phys_range(paddr_t start, paddr_t end,
> +        unsigned long virt_start, unsigned attributes)
> +{
> +    ASSERT(!(start & ((1 << 21) - 1)));
> +    ASSERT(!(virt_start & ((1 << 21) - 1)));
> +
> +    while ( start < end )
> +    {
> +        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
> +        e.pt.ai = attributes;
> +        write_pte(xen_second + second_table_offset(virt_start), e);
> +        
> +        start += (1<<21);
> +        virt_start += (1<<21);
> +    }
> +
> +    flush_xen_data_tlb();

What does this flush? The PTEs are flushed by write_pte, aren't they?
Should this be a range flush over virt_start + len?

> +}
> +
>  enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
>  static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>  {
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 3549c83..a11f20b 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -152,6 +152,9 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
>  extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
>  /* Remove a mapping from a fixmap entry */
>  extern void clear_fixmap(unsigned map);
> +/* map a 2MB aligned physical range in virtual memory. */
> +extern void map_phys_range(paddr_t start, paddr_t end,
> +		unsigned long virt_start, unsigned attributes);

Hard tabs I'm afraid.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 09:57:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 09:57:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYD9-0000Iu-Oo; Thu, 06 Dec 2012 09:57:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYD8-0000Im-50
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 09:57:42 +0000
Received: from [85.158.139.83:60197] by server-1.bemta-5.messagelabs.com id
	04/FC-09311-51C60C05; Thu, 06 Dec 2012 09:57:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354787860!21398924!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17771 invoked from network); 6 Dec 2012 09:57:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 09:57:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194014"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 09:57:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	09:57:07 +0000
Message-ID: <1354787826.17165.60.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 09:57:06 +0000
In-Reply-To: <1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 2/6] xen: infrastructure to have
 cross-platform video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> - introduce a new HAS_VIDEO config variable;
> - build xen/drivers/video/font* if HAS_VIDEO;
> - rename vga_puts to video_puts;
> - rename vga_init to video_init;
> - rename vga_endboot to video_endboot.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

For the general concept + arm specific bits (such as they are..)

> ---
>  xen/arch/arm/Rules.mk        |    1 +
>  xen/arch/x86/Rules.mk        |    1 +
>  xen/drivers/Makefile         |    2 +-
>  xen/drivers/char/console.c   |   12 ++++++------
>  xen/drivers/video/Makefile   |   10 +++++-----
>  xen/drivers/video/vesa.c     |    4 ++--
>  xen/drivers/video/vga.c      |   12 ++++++------
>  xen/include/asm-x86/config.h |    1 +
>  xen/include/xen/vga.h        |    9 +--------
>  xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
>  10 files changed, 48 insertions(+), 28 deletions(-)
>  create mode 100644 xen/include/xen/video.h
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index a45c654..fa9f9c1 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -7,6 +7,7 @@
>  #
>  
>  HAS_DEVICE_TREE := y
> +HAS_VIDEO := y
>  
>  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
> index 963850f..0a9d68d 100644
> --- a/xen/arch/x86/Rules.mk
> +++ b/xen/arch/x86/Rules.mk
> @@ -3,6 +3,7 @@
>  
>  HAS_ACPI := y
>  HAS_VGA  := y
> +HAS_VIDEO  := y
>  HAS_CPUFREQ := y
>  HAS_PCI := y
>  HAS_PASSTHROUGH := y
> diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
> index 7239375..9c70f20 100644
> --- a/xen/drivers/Makefile
> +++ b/xen/drivers/Makefile
> @@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
>  subdir-$(HAS_PCI) += pci
>  subdir-$(HAS_PASSTHROUGH) += passthrough
>  subdir-$(HAS_ACPI) += acpi
> -subdir-$(HAS_VGA) += video
> +subdir-$(HAS_VIDEO) += video
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 9e1adb5..683271e 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -21,7 +21,7 @@
>  #include <xen/delay.h>
>  #include <xen/guest_access.h>
>  #include <xen/shutdown.h>
> -#include <xen/vga.h>
> +#include <xen/video.h>
>  #include <xen/kexec.h>
>  #include <asm/debugger.h>
>  #include <asm/div64.h>
> @@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
>      buf[sofar] = '\0';
>  
>      sercon_puts(buf);
> -    vga_puts(buf);
> +    video_puts(buf);
>  
>      free_xenheap_pages(buf, order);
>  }
> @@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
>          spin_lock_irq(&console_lock);
>  
>          sercon_puts(kbuf);
> -        vga_puts(kbuf);
> +        video_puts(kbuf);
>  
>          if ( opt_console_to_ring )
>          {
> @@ -464,7 +464,7 @@ static void __putstr(const char *str)
>      ASSERT(spin_is_locked(&console_lock));
>  
>      sercon_puts(str);
> -    vga_puts(str);
> +    video_puts(str);
>  
>      if ( !console_locks_busted )
>      {
> @@ -592,7 +592,7 @@ void __init console_init_preirq(void)
>          if ( *p == ',' )
>              p++;
>          if ( !strncmp(p, "vga", 3) )
> -            vga_init();
> +            video_init();
>          else if ( !strncmp(p, "none", 4) )
>              continue;
>          else if ( (sh = serial_parse_handle(p)) >= 0 )
> @@ -694,7 +694,7 @@ void __init console_endboot(void)
>          printk("\n");
>      }
>  
> -    vga_endboot();
> +    video_endboot();
>  
>      /*
>       * If user specifies so, we fool the switch routine to redirect input
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 6c3e5b4..2993c39 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -1,5 +1,5 @@
> -obj-y := vga.o
> -obj-$(CONFIG_X86) += font_8x14.o
> -obj-$(CONFIG_X86) += font_8x16.o
> -obj-$(CONFIG_X86) += font_8x8.o
> -obj-$(CONFIG_X86) += vesa.o
> +obj-$(HAS_VGA) := vga.o
> +obj-$(HAS_VIDEO) += font_8x14.o
> +obj-$(HAS_VIDEO) += font_8x16.o
> +obj-$(HAS_VIDEO) += font_8x8.o
> +obj-$(HAS_VGA) += vesa.o
> diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> index 47cd3ed..759355f 100644
> --- a/xen/drivers/video/vesa.c
> +++ b/xen/drivers/video/vesa.c
> @@ -109,7 +109,7 @@ void __init vesa_init(void)
>  
>      lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
>  
> -    vga_puts = vesa_redraw_puts;
> +    video_puts = vesa_redraw_puts;
>  
>      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
>             "using %uk, total %uk\n",
> @@ -194,7 +194,7 @@ void __init vesa_endboot(bool_t keep)
>      if ( keep )
>      {
>          xpos = 0;
> -        vga_puts = vesa_scroll_puts;
> +        video_puts = vesa_scroll_puts;
>      }
>      else
>      {
> diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
> index a98bd00..40e5963 100644
> --- a/xen/drivers/video/vga.c
> +++ b/xen/drivers/video/vga.c
> @@ -21,7 +21,7 @@ static unsigned char *video;
>  
>  static void vga_text_puts(const char *s);
>  static void vga_noop_puts(const char *s) {}
> -void (*vga_puts)(const char *) = vga_noop_puts;
> +void (*video_puts)(const char *) = vga_noop_puts;
>  
>  /*
>   * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
> @@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
>  #define vesa_endboot(x)   ((void)0)
>  #endif
>  
> -void __init vga_init(void)
> +void __init video_init(void)
>  {
>      char *p;
>  
> @@ -85,7 +85,7 @@ void __init vga_init(void)
>          columns = vga_console_info.u.text_mode_3.columns;
>          lines   = vga_console_info.u.text_mode_3.rows;
>          memset(video, 0, columns * lines * 2);
> -        vga_puts = vga_text_puts;
> +        video_puts = vga_text_puts;
>          break;
>      case XEN_VGATYPE_VESA_LFB:
>      case XEN_VGATYPE_EFI_LFB:
> @@ -97,16 +97,16 @@ void __init vga_init(void)
>      }
>  }
>  
> -void __init vga_endboot(void)
> +void __init video_endboot(void)
>  {
> -    if ( vga_puts == vga_noop_puts )
> +    if ( video_puts == vga_noop_puts )
>          return;
>  
>      printk("Xen is %s VGA console.\n",
>             vgacon_keep ? "keeping" : "relinquishing");
>  
>      if ( !vgacon_keep )
> -        vga_puts = vga_noop_puts;
> +        video_puts = vga_noop_puts;
>      else
>      {
>          int bus, devfn;
> diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> index b69dbe6..2169627 100644
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -38,6 +38,7 @@
>  #define CONFIG_ACPI_CSTATE 1
>  
>  #define CONFIG_VGA 1
> +#define CONFIG_VIDEO 1
>  
>  #define CONFIG_HOTPLUG 1
>  #define CONFIG_HOTPLUG_CPU 1
> diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
> index cc690b9..f72b63d 100644
> --- a/xen/include/xen/vga.h
> +++ b/xen/include/xen/vga.h
> @@ -9,17 +9,10 @@
>  #ifndef _XEN_VGA_H
>  #define _XEN_VGA_H
>  
> -#include <public/xen.h>
> +#include <xen/video.h>
>  
>  #ifdef CONFIG_VGA
>  extern struct xen_vga_console_info vga_console_info;
> -void vga_init(void);
> -void vga_endboot(void);
> -extern void (*vga_puts)(const char *);
> -#else
> -#define vga_init()    ((void)0)
> -#define vga_endboot() ((void)0)
> -#define vga_puts(s)   ((void)0)
>  #endif
>  
>  #endif /* _XEN_VGA_H */
> diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
> new file mode 100644
> index 0000000..e9bc92e
> --- /dev/null
> +++ b/xen/include/xen/video.h
> @@ -0,0 +1,24 @@
> +/*
> + *  video.h
> + *
> + *  This file is subject to the terms and conditions of the GNU General Public
> + *  License.  See the file COPYING in the main directory of this archive
> + *  for more details.
> + */
> +
> +#ifndef _XEN_VIDEO_H
> +#define _XEN_VIDEO_H
> +
> +#include <public/xen.h>
> +
> +#ifdef CONFIG_VIDEO
> +void video_init(void);
> +extern void (*video_puts)(const char *);
> +void video_endboot(void);
> +#else
> +#define video_init()    ((void)0)
> +#define video_puts(s)   ((void)0)
> +#define video_endboot() ((void)0)
> +#endif
> +
> +#endif /* _XEN_VIDEO_H */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:04:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYJt-0000gi-M0; Thu, 06 Dec 2012 10:04:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgYJr-0000gd-Sn
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:04:40 +0000
Received: from [193.109.254.147:6916] by server-1.bemta-14.messagelabs.com id
	F1/CB-25314-7BD60C05; Thu, 06 Dec 2012 10:04:39 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354788192!9631999!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8172 invoked from network); 6 Dec 2012 10:03:12 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:03:12 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgYIO-000Lj2-M8; Thu, 06 Dec 2012 10:03:08 +0000
Date: Thu, 6 Dec 2012 10:03:08 +0000
From: Tim Deegan <tim@xen.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20121206100308.GE82725@ocelot.phlegethon.org>
References: <1354552154.18784.9.camel@iceland> <50BCF4F9.8010601@citrix.com>
	<20121203205612.GA22913@iceland> <50BDDFF0.6050308@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BDDFF0.6050308@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: Wei Liu <Wei.Liu2@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Extending numbers of event channels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:35 +0000 on 04 Dec (1354620912), David Vrabel wrote:
> On 03/12/12 20:56, Wei Liu wrote:
> > On Mon, Dec 03, 2012 at 06:52:41PM +0000, David Vrabel wrote:
> >> On 03/12/12 16:29, Wei Liu wrote:
> >>> Hi all
> >>>
> >>> There has been discussion on extending number of event channels back in
> >>> September [0].
> >>
> >> It seems that the decision has been made to go for this N-level
> >> approach.  Were any other methods considered?
> >>
> >> Would a per-VCPU ring of pending events work?  The ABI will be easier to
> >> extend in the future for more event channels.  The guest side code will
> >> be simpler.  It will be easier to fairly service the events as they will
> >> be processed in the order they were raised.
> >>
> >> The complexity would be in ensuring that events were not lost due to
> >> lack of space in the ring.  This may make the ring prohibitively large
> >> or require complex or expensive tracking of pending events inside Xen.
> >>
> > 
> > If I understand correctly, one event will always be queued up for
> > processing in the ring model, will this be too overkill? What if event
> > generation is much faster than processing?
> > 
> > In the current implementation, one event channel can be raised multiple
> > times but it is only processed once.
> 
> There would have to be something in Xen to ensure an event was only
> added to the ring once.  i.e., if it's already in the ring, it doesn't
> get added.  This is the tricky bit and I can't immediately think of how
> this would work.

Well, Xen could keep a bitmap of which events it had inserted in the
ring, and then the guest could clear bits from the bitmap as it serviced
the events... :)

More seriously, though, I prefer the existing bitmap interface.  It
handles masking and merging naturally, and it lets the guest service
events in whatever order it chooses (not having to process 1 word per
event of potentially uninteresting events off the ring to get to the one
it wants).
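The merging behaviour being discussed (an event queued at most once, re-armed when the guest services it) can be sketched with a bitmap guarding a ring. This is a hypothetical illustration only; all names (ring_queued, evtchn_raise_model, evtchn_service_model) are invented here and do not correspond to any real Xen interface:

```c
#include <assert.h>
#include <stdint.h>

#define NR_EVENTS 64
static uint64_t ring_queued;                /* 1 bit per event channel */
static unsigned ring[NR_EVENTS];            /* ring of pending ports */
static unsigned ring_prod, ring_cons;

/* Hypervisor side: queue the port only if it is not already in the ring,
 * so repeated raises of the same channel merge into one entry. */
static void evtchn_raise_model(unsigned port)
{
    uint64_t bit = 1ULL << port;

    if (ring_queued & bit)
        return;                             /* merge: already pending */
    ring_queued |= bit;
    ring[ring_prod++ % NR_EVENTS] = port;
}

/* Guest side: pop one event and clear its "queued" bit so the channel
 * can be raised (and queued) again. Returns 0 if the ring is empty. */
static int evtchn_service_model(unsigned *port)
{
    if (ring_cons == ring_prod)
        return 0;                           /* ring empty */
    *port = ring[ring_cons++ % NR_EVENTS];
    ring_queued &= ~(1ULL << *port);
    return 1;
}
```

Events are delivered strictly in raise order, which illustrates Tim's point: unlike the bitmap-only interface, the guest cannot skip ahead to a channel it cares about without consuming the entries in front of it.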

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >> On 03/12/12 16:29, Wei Liu wrote:
> >>> Hi all
> >>>
> >>> There has been discussion on extending number of event channels back in
> >>> September [0].
> >>
> >> It seems that the decision has been made to go for this N-level
> >> approach.  Were any other methods considered?
> >>
> >> Would a per-VCPU ring of pending events work?  The ABI will be easier to
> >> extend in the future for more event channels.  The guest side code will
> >> be simpler.  It will be easier to fairly service the events as they will
> >> be processed in the order they were raised.
> >>
> >> The complexity would be in ensuring that events were not lost due to
> >> lack of space in the ring.  This may make the ring prohibitively large
> >> or require complex or expensive tracking of pending events inside Xen.
> >>
> > 
> > If I understand correctly, in the ring model one event will always be
> > queued up for processing.  Would this be overkill?  What if event
> > generation is much faster than processing?
> > 
> > In the current implementation, one event channel can be raised multiple
> > times but it is only processed once.
> 
> There would have to be something in Xen to ensure an event was only
> added to the ring once.  i.e., if it's already in the ring, it doesn't
> get added.  This is the tricky bit and I can't immediately think of how
> this would work.

Well, Xen could keep a bitmap of which events it had inserted in the
ring, and then the guest could clear bits from the bitmap as it serviced
the events... :)

More seriously, though, I prefer the existing bitmap interface.  It
handles masking and merging naturally, and it lets the guest service
events in whatever order it chooses, without having to pull one word
per potentially uninteresting event off the ring to get to the one it
wants.
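The bitmap-plus-ring merging scheme discussed above (Xen tracks which
events are already in the ring, the guest clears the bit as it services
each one) can be sketched as a toy model.  All names and sizes here are
illustrative assumptions, not real Xen interfaces:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_EVENTS 4096
#define RING_SIZE  256

/* Hypothetical per-VCPU event ring with an "in-ring" bitmap: the
 * producer only enqueues an event if its bit is clear, and the
 * consumer clears the bit when it services the event, so duplicate
 * raises of the same port are merged. */
struct evt_ring {
    uint32_t ring[RING_SIZE];
    unsigned int prod, cons;              /* free-running indices */
    unsigned long in_ring[MAX_EVENTS / (8 * sizeof(unsigned long))];
};

static int test_and_set(unsigned long *bm, unsigned int port)
{
    unsigned long mask = 1UL << (port % (8 * sizeof(unsigned long)));
    unsigned long *word = &bm[port / (8 * sizeof(unsigned long))];
    int was_set = !!(*word & mask);
    *word |= mask;
    return was_set;
}

/* "Xen" side: raise an event; duplicates are merged via the bitmap. */
static int evt_raise(struct evt_ring *r, uint32_t port)
{
    if (test_and_set(r->in_ring, port))
        return 0;                         /* already queued: merged */
    if (r->prod - r->cons == RING_SIZE)
        return -1;                        /* ring full */
    r->ring[r->prod++ % RING_SIZE] = port;
    return 1;
}

/* "Guest" side: service the next event and clear its in-ring bit. */
static int evt_consume(struct evt_ring *r, uint32_t *port)
{
    if (r->cons == r->prod)
        return 0;                         /* ring empty */
    *port = r->ring[r->cons++ % RING_SIZE];
    r->in_ring[*port / (8 * sizeof(unsigned long))] &=
        ~(1UL << (*port % (8 * sizeof(unsigned long))));
    return 1;
}
```

Note the model also illustrates Tim's objection: events come out strictly
in insertion order, so an uninteresting port must still be dequeued before
a later, more urgent one.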

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:09:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYO0-0000pu-GF; Thu, 06 Dec 2012 10:08:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgYNz-0000pp-Us
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:08:56 +0000
Received: from [193.109.254.147:38662] by server-10.bemta-14.messagelabs.com
	id 92/5F-31741-7BE60C05; Thu, 06 Dec 2012 10:08:55 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354788502!9411507!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25662 invoked from network); 6 Dec 2012 10:08:23 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:08:23 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgYNO-000LkD-9G; Thu, 06 Dec 2012 10:08:18 +0000
Date: Thu, 6 Dec 2012 10:08:18 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121206100818.GF82725@ocelot.phlegethon.org>
References: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
	<1354786255.17165.48.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354786255.17165.48.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: Liu Jinsong <jinsong.liu@intel.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard
	to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:30 +0000 on 06 Dec (1354786255), Ian Campbell wrote:
> On Thu, 2012-12-06 at 01:55 +0000, Liu Jinsong wrote:
> > At the sender
> >   xc_domain_save has a key point: 'to query the types of all the pages
> >   with xc_get_pfn_type_batch'
> >   1) if a broken page occurs before the key point, migration will be
> >      fine, since the proper pfn_type and pfn number will be transferred
> >      to the target, which can then take appropriate action;
> >   2) if a broken page occurs after the key point, the whole system will
> >      crash, and migration no longer matters;
> > 
> > At the target
> >   The target will populate pages for the guest.  In the case of a
> >   broken page, we prefer to keep the type of the page for the sake of
> >   seamless migration.  The target will set the p2m type to
> >   p2m_ram_broken for the broken page.  If the guest accesses the broken
> >   page again, it will be killed as expected.
> > 
> > Patch version history:
> > V5:
> >   - remove extra check at the last iteration
> >   - remove marking broken page to dirty bitmap
> 
> I'm not totally convinced that this second one isn't required, but
> what's here is correct as far as it goes.
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> If someone from the hypervisor side wants to re-ack I'll apply (it
> doesn't look to have changed all that much to me, but I don't want to
> presume to take it).

Acked-by: Tim Deegan <tim@xen.org>
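The target-side handling Liu describes above can be reduced to a small
decision sketch.  The types and the function name here are illustrative
stand-ins, not the real libxc/Xen interfaces:

```c
#include <assert.h>

/* Toy model: when restoring a guest, a page whose recorded sender-side
 * type is "broken" is not populated with fresh RAM; instead its p2m
 * entry is marked broken, so a later guest access kills the guest as
 * expected rather than reading stale or poisoned memory. */
enum pfn_type { PFN_RAM, PFN_BROKEN };
enum p2m_type { P2M_INVALID, P2M_RAM_RW, P2M_RAM_BROKEN };

static enum p2m_type restore_page(enum pfn_type sender_type)
{
    if (sender_type == PFN_BROKEN)
        return P2M_RAM_BROKEN;   /* keep the type across migration */
    return P2M_RAM_RW;           /* normal page: populate as RAM */
}
```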

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:09:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYON-0000sI-St; Thu, 06 Dec 2012 10:09:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYOM-0000s3-8L
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:09:18 +0000
Received: from [85.158.143.99:19806] by server-3.bemta-4.messagelabs.com id
	7C/2F-18211-DCE60C05; Thu, 06 Dec 2012 10:09:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354788557!27411747!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20355 invoked from network); 6 Dec 2012 10:09:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:09:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194528"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:08:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:08:07 +0000
Message-ID: <1354788485.17165.65.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@amd.com>
Date: Thu, 6 Dec 2012 10:08:05 +0000
In-Reply-To: <1354730007.17165.31.camel@zakaz.uk.xensource.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
	<50BF841C.6010906@amd.com>
	<1354730007.17165.31.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(Putting debian-kernel on bcc, since I don't imagine they are interested
in the details of this discussion; I'll re-raise the result with the
Debian Xen maintainer when we have one.)

On Wed, 2012-12-05 at 17:53 +0000, Ian Campbell wrote:
> On Wed, 2012-12-05 at 17:27 +0000, Boris Ostrovsky wrote:
> > On 12/05/2012 12:02 PM, Jan Beulich wrote:
> > > But all of this shouldn't lead to equivalent ID mismatches, should
> > > it? It ought to simply find nothing to update...
> > 
> > 
> > The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
> > contain more than one patch. The driver goes over this file patch by 
> > patch and tries to see whether to apply it.
> > 
> > I think what happened in Ian's case was that the patch file contained 
> > two patches --- one for this processor (ID 6012) and another for a 
> > different processor (ID 6101). (Both are family 15h but different revs).
> > 
> > The driver applied the first patch on core 0.  Then, on core 1, the
> > code tried the first patch (at file offset 60) and noticed that it was
> > already applied.  So it continued to the next patch (at offset 2660),
> > which is not meant for this processor, thus generating the "does not
> > match" message.

OOI what would have happened if the two patches were in the opposite
order? Would CPU0 have seen the ID 6101 patch first and aborted?

Ian.
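The patch-by-patch walk Boris describes can be modeled roughly as below.
This is a simplification, not the real AMD microcode driver: the
structure, the function, and the skip-on-mismatch behaviour are all
assumptions for illustration (in particular, whether the real driver
skips or aborts on a mismatching first patch is exactly Ian's open
question).  The IDs 0x6012 and 0x6101 are the two family-15h revisions
from the thread:

```c
#include <assert.h>

/* Illustrative model: the container file holds several patches; each
 * core walks them in file order, skipping patches whose equivalent
 * processor ID does not match and patches already applied. */
struct ucode_patch {
    unsigned int equiv_id;   /* equivalent processor ID */
    unsigned int level;      /* patch level */
};

static int apply_microcode(const struct ucode_patch *patches, int n,
                           unsigned int cpu_equiv_id,
                           unsigned int *cur_level)
{
    int applied = 0;
    for (int i = 0; i < n; i++) {
        if (patches[i].equiv_id != cpu_equiv_id)
            continue;        /* "does not match": keep walking */
        if (patches[i].level <= *cur_level)
            continue;        /* already applied on this core */
        *cur_level = patches[i].level;
        applied = 1;
    }
    return applied;
}
```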



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:14:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYTF-0001GM-MT; Thu, 06 Dec 2012 10:14:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYTE-0001GD-AO
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:14:20 +0000
Received: from [85.158.139.83:42659] by server-11.bemta-5.messagelabs.com id
	A2/A7-03409-BFF60C05; Thu, 06 Dec 2012 10:14:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354788842!17392594!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16865 invoked from network); 6 Dec 2012 10:14:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:14:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194738"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:14:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:14:01 +0000
Message-ID: <1354788840.17165.68.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 10:14:00 +0000
In-Reply-To: <1354731588-32579-5-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-5-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 5/6] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> Introduce a find_compatible_node function that can be used by device
> drivers to find the node corresponding to their device in the device
> tree.
> 
> Also add device_tree_node_compatible to device_tree.h, which is
> currently missing.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/setup.c          |    2 +-
>  xen/common/device_tree.c      |   47 +++++++++++++++++++++++++++++++++++++++++
>  xen/include/xen/device_tree.h |    3 ++
>  3 files changed, 51 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 5f4e318..d978938 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -187,7 +187,7 @@ void __init start_xen(unsigned long boot_phys_offset,
>  
>      smp_clear_cpu_maps();
>  
> -    fdt = (void *)BOOT_MISC_VIRT_START
> +    device_tree_flattened = fdt = (void *)BOOT_MISC_VIRT_START

This seems unrelated to the commit log?

Is this to avoid declaring an early variant? It seems like this could
mean we can drop the fdt variable from a bunch of the existing early_*
functions.

>          + (atag_paddr & ((1 << SECOND_SHIFT) - 1));
>      fdt_size = device_tree_early_init(fdt);
>  
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 8d5b6b0..ca0aba7 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -173,6 +173,53 @@ int device_tree_for_each_node(const void *fdt,
>      return 0;
>  }
>  
> +struct find_compat {
> +    const char *compatible;
> +    int found;
> +    int node;
> +    int depth;
> +    u32 address_cells;
> +    u32 size_cells;
> +};
> +
> +static int _find_compatible_node(const void *fdt,
> +                             int node, const char *name, int depth,
> +                             u32 address_cells, u32 size_cells,
> +                             void *data)
> +{
> +    struct find_compat *c = (struct find_compat *) data;

Do you want 
	if ( c->found ) 
		return ?

> +    if ( device_tree_node_compatible(fdt, node, c->compatible) )
> +    {
> +        c->found = 1;
> +        c->node = node;
> +        c->depth = depth;
> +        c->address_cells = address_cells;
> +        c->size_cells = size_cells;
> +    }
> +    return 0;
> +}
> + 
> +int find_compatible_node(const char *compatible, int *node, int *depth,
> +                u32 *address_cells, u32 *size_cells)
> +{
> +    int ret;
> +    struct find_compat c;
> +    c.compatible = compatible;
> +    c.found = 0;
> +
> +    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
> +    if ( !c.found )
> +        return ret;
> +    else
> +    {
> +        *node = c.node;
> +        *depth = c.depth;
> +        *address_cells = c.address_cells;
> +        *size_cells = c.size_cells;
> +        return 1;
> +    }
> +}
> +
>  /**
>   * device_tree_bootargs - return the bootargs (the Xen command line)
>   * @fdt flat device tree.
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index a0e3a97..5a75f0e 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -54,6 +54,9 @@ void device_tree_set_reg(u32 **cell, u32 address_cells, u32 size_cells,
>                           u64 start, u64 size);
>  u32 device_tree_get_u32(const void *fdt, int node, const char *prop_name);
>  bool_t device_tree_node_matches(const void *fdt, int node, const char *match);
> +bool_t device_tree_node_compatible(const void *fdt, int node, const char *match);
> +int find_compatible_node(const char *compatible, int *node, int *depth,
> +                u32 *address_cells, u32 *size_cells);
>  int device_tree_for_each_node(const void *fdt,
>                                device_tree_node_func func, void *data);
>  const char *device_tree_bootargs(const void *fdt);
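The early return Ian suggests in the `_find_compatible_node` callback can
be sketched against a toy traversal harness.  The flat list of
(node, compatible) pairs below stands in for the real FDT walk, and the
harness types are invented; only the callback shape mirrors the patch:

```c
#include <assert.h>
#include <string.h>

/* Simplified visitor signature standing in for device_tree_node_func. */
typedef int (*node_func)(int node, const char *compat, void *data);

struct fake_node { int id; const char *compat; };

/* Stand-in for device_tree_for_each_node: visit nodes in order. */
static int for_each_node(const struct fake_node *nodes, int n,
                         node_func fn, void *data)
{
    for (int i = 0; i < n; i++) {
        int ret = fn(nodes[i].id, nodes[i].compat, data);
        if (ret != 0)
            return ret;
    }
    return 0;
}

struct find_compat { const char *compatible; int found; int node; };

static int _find_compatible_node(int node, const char *compat, void *data)
{
    struct find_compat *c = data;
    if (c->found)
        return 0;                 /* Ian's suggestion: skip further work
                                     once the first match is recorded */
    if (!strcmp(compat, c->compatible)) {
        c->found = 1;
        c->node = node;
    }
    return 0;
}
```

With the early return, later matches can no longer overwrite the first
one found, and the rest of the walk does no string comparisons.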



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>      return 0;
>  }
>  
> +struct find_compat {
> +    const char *compatible;
> +    int found;
> +    int node;
> +    int depth;
> +    u32 address_cells;
> +    u32 size_cells;
> +};
> +
> +static int _find_compatible_node(const void *fdt,
> +                             int node, const char *name, int depth,
> +                             u32 address_cells, u32 size_cells,
> +                             void *data)
> +{
> +    struct find_compat *c = (struct find_compat *) data;

Do you want 
	if ( c->found ) 
		return ?

> +    if ( device_tree_node_compatible(fdt, node, c->compatible) )
> +    {
> +        c->found = 1;
> +        c->node = node;
> +        c->depth = depth;
> +        c->address_cells = address_cells;
> +        c->size_cells = size_cells;
> +    }
> +    return 0;
> +}
> + 
> +int find_compatible_node(const char *compatible, int *node, int *depth,
> +                u32 *address_cells, u32 *size_cells)
> +{
> +    int ret;
> +    struct find_compat c;
> +    c.compatible = compatible;
> +    c.found = 0;
> +
> +    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
> +    if ( !c.found )
> +        return ret;
> +    else
> +    {
> +        *node = c.node;
> +        *depth = c.depth;
> +        *address_cells = c.address_cells;
> +        *size_cells = c.size_cells;
> +        return 1;
> +    }
> +}
> +
>  /**
>   * device_tree_bootargs - return the bootargs (the Xen command line)
>   * @fdt flat device tree.
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index a0e3a97..5a75f0e 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -54,6 +54,9 @@ void device_tree_set_reg(u32 **cell, u32 address_cells, u32 size_cells,
>                           u64 start, u64 size);
>  u32 device_tree_get_u32(const void *fdt, int node, const char *prop_name);
>  bool_t device_tree_node_matches(const void *fdt, int node, const char *match);
> +bool_t device_tree_node_compatible(const void *fdt, int node, const char *match);
> +int find_compatible_node(const char *compatible, int *node, int *depth,
> +                u32 *address_cells, u32 *size_cells);
>  int device_tree_for_each_node(const void *fdt,
>                                device_tree_node_func func, void *data);
>  const char *device_tree_bootargs(const void *fdt);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:19:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYYM-0001Pi-GO; Thu, 06 Dec 2012 10:19:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYYK-0001Pc-Oy
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:19:36 +0000
Received: from [85.158.138.51:27714] by server-7.bemta-3.messagelabs.com id
	21/91-01713-73170C05; Thu, 06 Dec 2012 10:19:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354789174!19769813!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9521 invoked from network); 6 Dec 2012 10:19:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:19:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194906"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:19:34 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:19:34 +0000
Message-ID: <1354789172.17165.71.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Dec 2012 10:19:32 +0000
In-Reply-To: <21504ec56304ada2f093.1354732146@elijah>
References: <21504ec56304ada2f093.1354732146@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v3] libxl: Make an internal function
 explicitly check existence of expected paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:29 +0000, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1354732124 0
> # Node ID 21504ec56304ada2f093c30b290ac33c28381ae1
> # Parent  670b07e8d7382229639af0d1df30071e6c1ebb19
> libxl: Make an internal function explicitly check existence of expected paths
> 
> libxl__device_disk_from_xs_be() was failing without error for some
> missing xenstore nodes in a backend, while assuming (without checking)
> that other nodes were valid, causing a crash when another internal
> error wrote these nodes in the wrong place.
[...]
> Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked + applied, thanks.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYYZ-0001RD-TP; Thu, 06 Dec 2012 10:19:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYYZ-0001Qz-Ap
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:19:51 +0000
Received: from [85.158.143.99:25704] by server-2.bemta-4.messagelabs.com id
	E9/C4-30861-64170C05; Thu, 06 Dec 2012 10:19:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354789189!17097013!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21991 invoked from network); 6 Dec 2012 10:19:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:19:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194917"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:19:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:19:49 +0000
Message-ID: <1354789188.17165.73.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 10:19:48 +0000
In-Reply-To: <1354732666-3132-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354732666-3132-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/arm: disable interrupts on
	return_to_hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:37 +0000, Stefano Stabellini wrote:
> At the moment it is possible to reach return_to_hypervisor with
> interrupts enabled (it happens all the time when we are actually going
> back to hypervisor mode, when we don't take the return_to_guest path).
> 
> If that happens we risk losing the content of ELR_hyp: if we receive an
> interrupt right after restoring ELR_hyp, once we come back we'll have a
> different value in ELR_hyp and the original is lost.
> 
> In order to make the return_to_hypervisor path safe, we disable
> interrupts before restoring any registers.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:20:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYYz-0001VC-An; Thu, 06 Dec 2012 10:20:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYYx-0001Up-My
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:20:15 +0000
Received: from [85.158.139.211:12191] by server-9.bemta-5.messagelabs.com id
	72/FC-29295-E5170C05; Thu, 06 Dec 2012 10:20:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1354789213!19208145!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24056 invoked from network); 6 Dec 2012 10:20:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:20:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194929"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:20:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:20:13 +0000
Message-ID: <1354789211.17165.75.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Keir Fraser <keir.xen@gmail.com>
Date: Thu, 6 Dec 2012 10:20:11 +0000
In-Reply-To: <CCE53D36.46658%keir.xen@gmail.com>
References: <CCE53D36.46658%keir.xen@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] gitignore: ignore xen-foreign/arm.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:04 +0000, Keir Fraser wrote:
> On 05/12/2012 14:55, "Ian Campbell" <ian.campbell@citrix.com> wrote:
> 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Keir Fraser <keir@xen.org>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:20:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYZC-0001YG-Oh; Thu, 06 Dec 2012 10:20:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYZA-0001Xj-WE
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:20:29 +0000
Received: from [85.158.143.35:8309] by server-3.bemta-4.messagelabs.com id
	D4/61-18211-C6170C05; Thu, 06 Dec 2012 10:20:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354789202!13300347!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2836 invoked from network); 6 Dec 2012 10:20:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:20:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16194925"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:20:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:20:01 +0000
Message-ID: <1354789199.17165.74.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Keir Fraser <keir.xen@gmail.com>
Date: Thu, 6 Dec 2012 10:19:59 +0000
In-Reply-To: <CCE53D52.46659%keir.xen@gmail.com>
References: <CCE53D52.46659%keir.xen@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] README: docs/pdf/user.pdf was deleted in
 24563:4271634e4c86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:05 +0000, Keir Fraser wrote:
> On 05/12/2012 15:41, "Ian Campbell" <ian.campbell@citrix.com> wrote:
> 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Keir Fraser <keir@xen.org>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:27:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYg2-00028y-3J; Thu, 06 Dec 2012 10:27:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgYg1-00028t-72
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:27:33 +0000
Received: from [85.158.143.35:4469] by server-1.bemta-4.messagelabs.com id
	90/60-28401-41370C05; Thu, 06 Dec 2012 10:27:32 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354789635!16401111!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11229 invoked from network); 6 Dec 2012 10:27:15 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:27:15 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgYfi-000LoH-2u; Thu, 06 Dec 2012 10:27:14 +0000
Date: Thu, 6 Dec 2012 10:27:14 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121206102714.GG82725@ocelot.phlegethon.org>
References: <50BE5732.2050801@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BE5732.2050801@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Audit of NMI and MCE paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 20:04 +0000 on 04 Dec (1354651442), Andrew Cooper wrote:
> I have just started auditing the NMI path and found that the oprofile
> code calls into a fair amount of common code.
> 
> So far, down the first leg of the call graph, I have found several
> ASSERT()s, a BUG() and many {rd,wr}msr()s.  Given that these are common
> code, and sensible in their places, removing them for the sake of being
> on the NMI path seems silly.
> 
> As an alternative, I suggest that we make ASSERT()s, BUG()s and WARN()s
> NMI/MCE safe, from a printk spinlock point of view.

WARN()s would need to be removed, since they involve a non-fatal fault.

> Either we can modify the macros to do a console_force_unlock(), which is
> fine for BUG() and ASSERT(), but problematic for WARN() (and deferring
> the printing to a tasklet won't work if we want a stack trace).
> Alternatively, we could change the console lock to be a recursive lock,
> at which point it is safe from the deadlock point of view.

It's only safe if the console lock is the _only_ lock that can be taken
both in NMI/MCE context and in 'normal' IRQ context.  Otherwise
we'd end up with exactly the class of deadlocks we had before with
IRQ/non-IRQ.
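
[Editorial aside: the recursive-lock idea under discussion can be sketched
stand-alone. This is hypothetical illustration code, not Xen's actual
console lock: it shows why re-entry from an NMI that interrupted the lock
holder becomes safe, and also why Tim's caveat matters -- the recursion
only prevents self-deadlock on this one lock, not on any second lock
shared between NMI and normal context.]

```c
#include <assert.h>

/* Hypothetical recursion-aware lock (illustration only, not Xen code).
 * The owning CPU may re-acquire it, so a printk from an NMI that
 * interrupted another printk on the same CPU cannot self-deadlock. */
struct rec_lock {
    volatile int locked;
    int owner_cpu;   /* -1 when unowned */
    int depth;       /* recursion depth of the owner */
};

static void rec_lock_acquire(struct rec_lock *l, int cpu)
{
    /* The owner check is safe only because re-entry happens on the
     * owning CPU itself (the NMI interrupts the holder). */
    if ( l->locked && l->owner_cpu == cpu )
    {
        l->depth++;
        return;
    }
    while ( __sync_lock_test_and_set(&l->locked, 1) )
        ;  /* spin until the current owner releases */
    l->owner_cpu = cpu;
    l->depth = 1;
}

static void rec_lock_release(struct rec_lock *l)
{
    if ( --l->depth == 0 )
    {
        l->owner_cpu = -1;
        __sync_lock_release(&l->locked);
    }
}
```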

> For the {rd,wr}msr()s, we can assume that the Xen code is good and is
> not going to fault on access to the MSR, but we certainly can't guarantee
> this.

As Jan points out, it's *msr_safe() we need to worry about.

> As a result, I do not think it is practical or indeed sensible to remove
> all possibility of faults from the NMI path (and MCE to a lesser
> extent).

I'm not sure what the problem is -- the printk() locking issue is AFAICT
unrelated to the nested-NMI one, and will have to be fixed separately
from whatever we do for nested NMI.  So AFAICT we have to audit for
WARN()s and non-fatal printk()s in NMI/MCE code regardless.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:28:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYhA-0002DP-I1; Thu, 06 Dec 2012 10:28:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYh8-0002D8-Q9
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:28:43 +0000
Received: from [85.158.139.211:52215] by server-4.bemta-5.messagelabs.com id
	8C/05-15011-85370C05; Thu, 06 Dec 2012 10:28:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354789719!19350171!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28058 invoked from network); 6 Dec 2012 10:28:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:28:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16195221"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:28:39 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:28:39 +0000
Message-ID: <1354789717.17165.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 10:28:37 +0000
In-Reply-To: <1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
 HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> For the moment the resolution is hardcoded to 1280x1024@60.

Is there a longer term alternative? Something in the DTB perhaps.

There are also hardcoded dependencies on the vexpress (clock stuff)
which aren't mentioned here, only in the comment /* in Mhz, needs to be
set in the board config for OSC5 */. It would be good to have some
instructions somewhere on how to use this.

> Use the generic framebuffer functions to print on the screen.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/Rules.mk         |    1 +
>  xen/drivers/video/Makefile    |    1 +
>  xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/config.h  |    3 +
>  4 files changed, 170 insertions(+), 0 deletions(-)
>  create mode 100644 xen/drivers/video/arm_hdlcd.c
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index fa9f9c1..9580e6b 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -8,6 +8,7 @@
>  
>  HAS_DEVICE_TREE := y
>  HAS_VIDEO := y
> +HAS_ARM_HDLCD := y
>  
>  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 3b3eb43..8a6f5da 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
>  obj-$(HAS_VIDEO) += font_8x8.o
>  obj-$(HAS_VIDEO) += fb.o
>  obj-$(HAS_VGA) += vesa.o
> +obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
> diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
> new file mode 100644
> index 0000000..68f588c
> --- /dev/null
> +++ b/xen/drivers/video/arm_hdlcd.c
> @@ -0,0 +1,165 @@
> +/*
> + * xen/drivers/video/arm_hdlcd.c
> + *
> + * Driver for ARM HDLCD Controller
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + * Copyright (c) 2012 Citrix Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <asm/delay.h>
> +#include <asm/types.h>
> +#include <xen/config.h>
> +#include <xen/device_tree.h>
> +#include <xen/libfdt/libfdt.h>
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +#include "font.h"
> +#include "fb.h"
> +
> +#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
> +
> +#define HDLCD_INTMASK       (0x18/4)
> +#define HDLCD_FBBASE        (0x100/4)
> +#define HDLCD_LINELENGTH    (0x104/4)
> +#define HDLCD_LINECOUNT     (0x108/4)
> +#define HDLCD_LINEPITCH     (0x10C/4)
> +#define HDLCD_BUS           (0x110/4)
> +#define HDLCD_VSYNC         (0x200/4)
> +#define HDLCD_VBACK         (0x204/4)
> +#define HDLCD_VDATA         (0x208/4)
> +#define HDLCD_VFRONT        (0x20C/4)
> +#define HDLCD_HSYNC         (0x210/4)
> +#define HDLCD_HBACK         (0x214/4)
> +#define HDLCD_HDATA         (0x218/4)
> +#define HDLCD_HFRONT        (0x21C/4)
> +#define HDLCD_POLARITIES    (0x220/4)
> +#define HDLCD_COMMAND       (0x230/4)
> +#define HDLCD_PF            (0x240/4)
> +#define HDLCD_RED           (0x244/4)
> +#define HDLCD_GREEN         (0x248/4)
> +#define HDLCD_BLUE          (0x24C/4)
> +
> +#define BPP             4
> +#define XRES            1280
> +#define YRES            1024
> +#define refresh         60
> +#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
> +#define left_margin     80
> +#define hback left_margin
> +#define right_margin    48
> +#define hfront right_margin
> +#define upper_margin    21
> +#define vback upper_margin
> +#define lower_margin    3
> +#define vfront lower_margin
> +#define hsync_len       32
> +#define vsync_len       6
> +
> +#define HDLCD_SIZE (XRES*YRES*BPP)
> +
> +static void vga_noop_puts(const char *s) {}
> +void (*video_puts)(const char *) = vga_noop_puts;
> +
> +static void hdlcd_flush(void)
> +{
> +    dsb();
> +}
> +
> +void __init video_init(void)
> +{
> +    int node, depth;
> +    u32 address_cells, size_cells;
> +    struct fb_prop fbp;
> +    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
> +    paddr_t hdlcd_start, hdlcd_size;
> +    paddr_t framebuffer_start, framebuffer_size;
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +
> +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> +                &address_cells, &size_cells) <= 0 )
> +        return;
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &hdlcd_start, &hdlcd_size);

I wonder why we don't have a function to get the reg given a prop,
because we have this pattern everywhere AFAICT.
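
[Editorial aside: the helper suggested here might look like the sketch
below. It is stand-alone illustration code: the property table, the
`get_prop_reg` name and the host-order cells are all stubs; in Xen the
helper would wrap fdt_get_property() and device_tree_get_reg().]

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t paddr_t;
typedef uint32_t u32;

/* Stub property table standing in for the flattened device tree.
 * Cells are host-order here for simplicity (real FDT cells are
 * big-endian). */
struct fake_prop { const char *name; const u32 *data; };
static const u32 reg_cells[] = { 0x001f0000u, 0x00010000u };
static const struct fake_prop props[] = { { "reg", reg_cells } };

static const struct fake_prop *find_prop(const char *name)
{
    unsigned int i;
    for ( i = 0; i < sizeof(props) / sizeof(props[0]); i++ )
        if ( !strcmp(props[i].name, name) )
            return &props[i];
    return NULL;
}

/* The combined lookup + decode the repeated pattern calls out for:
 * fetch a named property and decode its (start, size) pair in one
 * call, returning an error if the property is absent. */
static int get_prop_reg(const char *name, u32 address_cells,
                        u32 size_cells, paddr_t *start, paddr_t *size)
{
    const struct fake_prop *p = find_prop(name);
    const u32 *cell;
    u32 i;

    if ( !p )
        return -ENOENT;
    cell = p->data;
    for ( *start = 0, i = 0; i < address_cells; i++ )
        *start = (*start << 32) | *cell++;
    for ( *size = 0, i = 0; i < size_cells; i++ )
        *size = (*size << 32) | *cell++;
    return 0;
}
```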

> +
> +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &framebuffer_start, &framebuffer_size); 
> +
> +    if ( !hdlcd_start || !framebuffer_start )
> +        return;
> +
> +    printk("Initializing HDLCD driver\n");
> +
> +    map_phys_range(framebuffer_start,
> +                    framebuffer_start + framebuffer_size,
> +                    VRAM_VIRT_START, DEV_WC);
> +    memset(lfb, 0x00, HDLCD_SIZE);
> +
> +    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
> +    HDLCD[HDLCD_COMMAND] = 0;
> +
> +    HDLCD[HDLCD_LINELENGTH] = XRES * BPP;
> +    HDLCD[HDLCD_LINECOUNT] = YRES - 1;
> +    HDLCD[HDLCD_LINEPITCH] = XRES * BPP;
> +    HDLCD[HDLCD_PF] = ((BPP - 1) << 3);
> +    HDLCD[HDLCD_INTMASK] = 0;
> +    HDLCD[HDLCD_FBBASE] = framebuffer_start;
> +    HDLCD[HDLCD_BUS] = 0xf00|(1<<4);
> +    HDLCD[HDLCD_VBACK] = upper_margin - 1;
> +    HDLCD[HDLCD_VSYNC] = vsync_len - 1;
> +    HDLCD[HDLCD_VDATA] = YRES - 1;
> +    HDLCD[HDLCD_VFRONT] = lower_margin - 1;
> +    HDLCD[HDLCD_HBACK] = left_margin - 1;
> +    HDLCD[HDLCD_HSYNC] = hsync_len - 1;
> +    HDLCD[HDLCD_HDATA] = XRES - 1;
> +    HDLCD[HDLCD_HFRONT] = right_margin - 1;
> +    HDLCD[HDLCD_POLARITIES] = (1<<2)|(1<<3);
> +    HDLCD[HDLCD_RED] = (8<<8)|0;
> +    HDLCD[HDLCD_GREEN] = (8<<8)|8;
> +    HDLCD[HDLCD_BLUE] = (8<<8)|16;
> +
> +    HDLCD[HDLCD_COMMAND] = 1;
> +    clear_fixmap(FIXMAP_MISC);
> +
> +    fbp.lfb = lfb;
> +    fbp.font = &font_vga_8x16;
> +    fbp.pixel_on = 0xffffff;
> +    fbp.bits_per_pixel = BPP*8;
> +    fbp.bytes_per_line = BPP*XRES;
> +    fbp.width = XRES;
> +    fbp.height = YRES;
> +    fbp.flush = hdlcd_flush;
> +    fbp.text_columns = XRES / 8;
> +    fbp.text_rows = YRES / 16;
> +    if ( fb_init(fbp) < 0 )
> +            return;
> +    video_puts = fb_scroll_puts;
> +}
> +
> +void video_endboot(void)
> +{
> +    if ( video_puts != vga_noop_puts )
> +        fb_alloc();
> +}

Can you stick the standard magic block here please.
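
[Editorial aside: the "standard magic block" requested here is,
presumably, the Emacs local-variables comment that Xen C source files
conventionally end with, along these lines:]

```c
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
```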

> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 2a05539..9727562 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -19,6 +19,8 @@
>  
>  #define CONFIG_DOMAIN_PAGE 1
>  
> +#define CONFIG_VIDEO 1
> +
>  #define OPT_CONSOLE_STR "com1"
>  
>  #ifdef MAX_PHYS_CPUS
> @@ -73,6 +75,7 @@
>  #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
>  #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
>  #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
> +#define VRAM_VIRT_START        mk_unsigned_long(0x10000000)
>  #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
>  #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:28:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYhA-0002DP-I1; Thu, 06 Dec 2012 10:28:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYh8-0002D8-Q9
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:28:43 +0000
Received: from [85.158.139.211:52215] by server-4.bemta-5.messagelabs.com id
	8C/05-15011-85370C05; Thu, 06 Dec 2012 10:28:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354789719!19350171!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28058 invoked from network); 6 Dec 2012 10:28:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:28:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16195221"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:28:39 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:28:39 +0000
Message-ID: <1354789717.17165.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Dec 2012 10:28:37 +0000
In-Reply-To: <1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
 HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> For the moment the resolution is hardcoded to 1280x1024@60.

Is there a longer term alternative? Something in the DTB perhaps.

Also there are hardcoded dependencies on the vexpress (clock stuff)
which aren't mentioned here, but just in /* in Mhz, needs to be set in
the board config for OSC5 */. Would be good to have some instruction on
how to use this stuff somewhere.

> Use the generic framebuffer functions to print on the screen.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/Rules.mk         |    1 +
>  xen/drivers/video/Makefile    |    1 +
>  xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/config.h  |    3 +
>  4 files changed, 170 insertions(+), 0 deletions(-)
>  create mode 100644 xen/drivers/video/arm_hdlcd.c
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index fa9f9c1..9580e6b 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -8,6 +8,7 @@
>  
>  HAS_DEVICE_TREE := y
>  HAS_VIDEO := y
> +HAS_ARM_HDLCD := y
>  
>  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 3b3eb43..8a6f5da 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
>  obj-$(HAS_VIDEO) += font_8x8.o
>  obj-$(HAS_VIDEO) += fb.o
>  obj-$(HAS_VGA) += vesa.o
> +obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
> diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
> new file mode 100644
> index 0000000..68f588c
> --- /dev/null
> +++ b/xen/drivers/video/arm_hdlcd.c
> @@ -0,0 +1,165 @@
> +/*
> + * xen/drivers/video/arm_hdlcd.c
> + *
> + * Driver for ARM HDLCD Controller
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + * Copyright (c) 2012 Citrix Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <asm/delay.h>
> +#include <asm/types.h>
> +#include <xen/config.h>
> +#include <xen/device_tree.h>
> +#include <xen/libfdt/libfdt.h>
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +#include "font.h"
> +#include "fb.h"
> +
> +#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
> +
> +#define HDLCD_INTMASK       (0x18/4)
> +#define HDLCD_FBBASE        (0x100/4)
> +#define HDLCD_LINELENGTH    (0x104/4)
> +#define HDLCD_LINECOUNT     (0x108/4)
> +#define HDLCD_LINEPITCH     (0x10C/4)
> +#define HDLCD_BUS           (0x110/4)
> +#define HDLCD_VSYNC         (0x200/4)
> +#define HDLCD_VBACK         (0x204/4)
> +#define HDLCD_VDATA         (0x208/4)
> +#define HDLCD_VFRONT        (0x20C/4)
> +#define HDLCD_HSYNC         (0x210/4)
> +#define HDLCD_HBACK         (0x214/4)
> +#define HDLCD_HDATA         (0x218/4)
> +#define HDLCD_HFRONT        (0x21C/4)
> +#define HDLCD_POLARITIES    (0x220/4)
> +#define HDLCD_COMMAND       (0x230/4)
> +#define HDLCD_PF            (0x240/4)
> +#define HDLCD_RED           (0x244/4)
> +#define HDLCD_GREEN         (0x248/4)
> +#define HDLCD_BLUE          (0x24C/4)
> +
> +#define BPP             4
> +#define XRES            1280
> +#define YRES            1024
> +#define refresh         60
> +#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
> +#define left_margin     80
> +#define hback left_margin
> +#define right_margin    48
> +#define hfront right_margin
> +#define upper_margin    21
> +#define vback upper_margin
> +#define lower_margin    3
> +#define vfront lower_margin
> +#define hsync_len       32
> +#define vsync_len       6
> +
> +#define HDLCD_SIZE (XRES*YRES*BPP)
> +
> +static void vga_noop_puts(const char *s) {}
> +void (*video_puts)(const char *) = vga_noop_puts;
> +
> +static void hdlcd_flush(void)
> +{
> +    dsb();
> +}
> +
> +void __init video_init(void)
> +{
> +    int node, depth;
> +    u32 address_cells, size_cells;
> +    struct fb_prop fbp;
> +    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
> +    paddr_t hdlcd_start, hdlcd_size;
> +    paddr_t framebuffer_start, framebuffer_size;
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +
> +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> +                &address_cells, &size_cells) <= 0 )
> +        return;
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &hdlcd_start, &hdlcd_size);

I wonder why we don't have a function to get the reg given a prop,
because we have this pattern everywhere AFAICT.

> +
> +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &framebuffer_start, &framebuffer_size); 
> +
> +    if ( !hdlcd_start || !framebuffer_start )
> +        return;
> +
> +    printk("Initializing HDLCD driver\n");
> +
> +    map_phys_range(framebuffer_start,
> +                    framebuffer_start + framebuffer_size,
> +                    VRAM_VIRT_START, DEV_WC);
> +    memset(lfb, 0x00, HDLCD_SIZE);
> +
> +    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
> +    HDLCD[HDLCD_COMMAND] = 0;
> +
> +    HDLCD[HDLCD_LINELENGTH] = XRES * BPP;
> +    HDLCD[HDLCD_LINECOUNT] = YRES - 1;
> +    HDLCD[HDLCD_LINEPITCH] = XRES * BPP;
> +    HDLCD[HDLCD_PF] = ((BPP - 1) << 3);
> +    HDLCD[HDLCD_INTMASK] = 0;
> +    HDLCD[HDLCD_FBBASE] = framebuffer_start;
> +    HDLCD[HDLCD_BUS] = 0xf00|(1<<4);
> +    HDLCD[HDLCD_VBACK] = upper_margin - 1;
> +    HDLCD[HDLCD_VSYNC] = vsync_len - 1;
> +    HDLCD[HDLCD_VDATA] = YRES - 1;
> +    HDLCD[HDLCD_VFRONT] = lower_margin - 1;
> +    HDLCD[HDLCD_HBACK] = left_margin - 1;
> +    HDLCD[HDLCD_HSYNC] = hsync_len - 1;
> +    HDLCD[HDLCD_HDATA] = XRES - 1;
> +    HDLCD[HDLCD_HFRONT] = right_margin - 1;
> +    HDLCD[HDLCD_POLARITIES] = (1<<2)|(1<<3);
> +    HDLCD[HDLCD_RED] = (8<<8)|0;
> +    HDLCD[HDLCD_GREEN] = (8<<8)|8;
> +    HDLCD[HDLCD_BLUE] = (8<<8)|16;
> +
> +    HDLCD[HDLCD_COMMAND] = 1;
> +    clear_fixmap(FIXMAP_MISC);
> +
> +    fbp.lfb = lfb;
> +    fbp.font = &font_vga_8x16;
> +    fbp.pixel_on = 0xffffff;
> +    fbp.bits_per_pixel = BPP*8;
> +    fbp.bytes_per_line = BPP*XRES;
> +    fbp.width = XRES;
> +    fbp.height = YRES;
> +    fbp.flush = hdlcd_flush;
> +    fbp.text_columns = XRES / 8;
> +    fbp.text_rows = YRES / 16;
> +    if ( fb_init(fbp) < 0 )
> +            return;
> +    video_puts = fb_scroll_puts;
> +}
> +
> +void video_endboot(void)
> +{
> +    if ( video_puts != vga_noop_puts )
> +        fb_alloc();
> +}

Can you stick the standard magic block here please.
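(For readers outside the project: the "standard magic block" presumably refers to the Emacs local-variables footer that Xen C files carry at the end; a typical instance, with style values assumed from the rest of the tree, looks like:)

```c
/*
 * Local variables:
 * mode: C
 * c-set-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
```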

> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 2a05539..9727562 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -19,6 +19,8 @@
>  
>  #define CONFIG_DOMAIN_PAGE 1
>  
> +#define CONFIG_VIDEO 1
> +
>  #define OPT_CONSOLE_STR "com1"
>  
>  #ifdef MAX_PHYS_CPUS
> @@ -73,6 +75,7 @@
>  #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
>  #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
>  #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
> +#define VRAM_VIRT_START        mk_unsigned_long(0x10000000)
>  #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
>  #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:37:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYpN-0002bZ-QO; Thu, 06 Dec 2012 10:37:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robert.phillips@citrix.com>) id 1TgYpN-0002bU-2H
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:37:13 +0000
Received: from [85.158.139.83:45013] by server-8.bemta-5.messagelabs.com id
	89/31-06050-85570C05; Thu, 06 Dec 2012 10:37:12 +0000
X-Env-Sender: robert.phillips@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354790228!28550581!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15462 invoked from network); 6 Dec 2012 10:37:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:37:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="216593225"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:37:08 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.209]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi;
	Thu, 6 Dec 2012 05:37:08 -0500
From: Robert Phillips <robert.phillips@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Thu, 6 Dec 2012 05:36:57 -0500
Thread-Topic: [Xen-devel] [PATCH] x86/hap: fix race condition between
	ENABLE_LOGDIRTY and track_dirty_vram hypercall
Thread-Index: Ac3TlJHI7yJyqrsNS/WDX6DfHsbm/wACL6sw
Message-ID: <048EAD622912254A9DEA24C1734613C18C87CC8634@FTLPMAILBOX02.citrite.net>
References: <50B7087D.20407@jp.fujitsu.com>
	<20121129154041.GD80627@ocelot.phlegethon.org>
	<048EAD622912254A9DEA24C1734613C18C87CC821E@FTLPMAILBOX02.citrite.net>
	<20121206093206.GA82725@ocelot.phlegethon.org>
In-Reply-To: <20121206093206.GA82725@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Kouya Shimura <kouya@jp.fujitsu.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/hap: fix race condition between
 ENABLE_LOGDIRTY and track_dirty_vram hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Tim,

> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, December 06, 2012 4:32 AM
> To: Robert Phillips
> Cc: Kouya Shimura; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH] x86/hap: fix race condition between
> ENABLE_LOGDIRTY and track_dirty_vram hypercall
> 
> Hi,
> 
> At 12:59 -0500 on 03 Dec (1354539567), Robert Phillips wrote:
> > > Robert, in your patch you do wrap this all in the paging_lock, but
> > > then unlock to call various enable and disable routines.  Is there a
> > > version of this race condition there, where some other CPU might
> > > call LOG_DIRTY_ENABLE while you've temporarily dropped the lock?
> >
> > My proposed patch does not modify the problematic locking code so,
> > unfortunately, it preserves the race condition that Kouya Shimura has
> > discovered.
> >
> > I question whether his proposed patch would be suitable for the
> > multiple frame buffer situation that my proposed patch addresses.
> > It is possible that a guest might be updating its frame buffers when
> > live migration starts, and the same race would result.
> >
> > I think the domain.arch.paging.log_dirty function pointers are problematic.
> > They are modified and executed without benefit of locking.
> >
> > I am uncomfortable with adding another lock.
> >
> > I will look at updating my patch to avoid the race and will
> > (hopefully) avoid adding another lock.
> 
> Thanks.  I think the paging_lock can probably cover everything we need here.
> These are toolstack operations and should be fairly rare, and HAP can do
> most of its work without the paging_lock.

I agree.  I am currently working on a fix; it should be ready in a short while.

> 
> Also, in the next version can you please update this section:
> 
> +int hap_track_dirty_vram(struct domain *d,
> +                         unsigned long begin_pfn,
> +                         unsigned long nr,
> +                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> +{
> +    long rc = 0;
> +    dv_dirty_vram_t *dirty_vram;
> +
> +    paging_lock(d);
> +    dirty_vram = d->arch.hvm_domain.dirty_vram;
> +    if ( nr )
> +    {
> +        dv_range_t *range = NULL;
> +        int size = ( nr + BITS_PER_LONG - 1 ) & ~( BITS_PER_LONG - 1 );
> +        uint8_t dirty_bitmap[size];
> 
> not to allocate a guest-specified amount of stack memory.  This is one of the
> things recently found and fixed in the existing code as XSA-27.
> http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg/rev/53ef1f35a0f8

I'll include this improvement too.
-- rsp
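(As background on the XSA-27 pattern Tim points at: the hazard is the `uint8_t dirty_bitmap[size]` VLA, whose size the guest controls. The usual fix is to stream the bitmap through a small fixed-size stack buffer instead. A minimal standalone sketch — names and the `copy_chunk()` stand-in for the real copy-to-guest primitive are hypothetical, this is not the actual patch:)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Fixed stack footprint, not guest-controlled. */
#define CHUNK_BYTES 128

/* Stand-in for the real copy-to-guest primitive; here it just memcpy()s. */
static void copy_chunk(uint8_t *dst, const uint8_t *src, size_t n)
{
    memcpy(dst, src, n);
}

/*
 * Copy `total_bytes` of dirty-bitmap state out to the guest buffer in
 * bounded chunks, instead of materialising a guest-sized VLA on the stack.
 */
static void copy_dirty_bitmap(uint8_t *guest_buf, const uint8_t *bitmap,
                              size_t total_bytes)
{
    uint8_t chunk[CHUNK_BYTES];          /* bounded, unlike uint8_t[size] */
    size_t done = 0;

    while ( done < total_bytes )
    {
        size_t n = total_bytes - done;
        if ( n > CHUNK_BYTES )
            n = CHUNK_BYTES;
        memcpy(chunk, bitmap + done, n); /* fill chunk from internal state */
        copy_chunk(guest_buf + done, chunk, n);
        done += n;
    }
}
```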

> 
> Cheers,
> 
> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:39:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:39:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYrF-0002gu-Dh; Thu, 06 Dec 2012 10:39:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgYrE-0002gp-Bs
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:39:08 +0000
Received: from [193.109.254.147:31815] by server-1.bemta-14.messagelabs.com id
	1B/7A-25314-BC570C05; Thu, 06 Dec 2012 10:39:07 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1354790344!1943319!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22085 invoked from network); 6 Dec 2012 10:39:04 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:39:04 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgYr9-000Lr1-3N; Thu, 06 Dec 2012 10:39:03 +0000
Date: Thu, 6 Dec 2012 10:39:03 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121206103903.GH82725@ocelot.phlegethon.org>
References: <50BF91CD.8020103@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BF91CD.8020103@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Recursive locking in Xen (in reference to NMI/MCE
	path audit)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:26 +0000 on 05 Dec (1354731981), Andrew Cooper wrote:
> Hello,
> 
> While auditing the NMI/MCE paths, I have encountered some issues with
> recursive locking in Xen, discovered by the misuse of the console_lock
> intermittently as a regular lock and as a recursive lock.
> 
> The comment in spinlock.h is unclear as to whether mixing recursive and
> non recursive calls on the same spinlock is valid.  If the calls are
> genuinely not valid, then surely regular spinlocks and recursive
> spinlocks should be separate types to let the compiler work for us.

Seems like a good idea.  

> If mixing calls is valid, then there appear to be problems with nesting
> recursive and regular calls, as either ordering of spin_lock and
> spin_lock_recursive will deadlock.

Yes.  But paths that know they will not need to recurse can safely use
the non-recursive lock op.  The shadow code used to do this sort of
thing (with a better failure mode) to explicitly catch recursive paths
that weren't intended. 

> As a result, I am wondering which of the above to fix?
> 
> There are very few users of recursive locks (domain lock, domain
> page_alloc lock, mm (pod and paging) locks and console lock).  The
> console and page_alloc locks appear to have mixed callers, while the
> domain and mm locks appear to have strictly recursive callers.

Please don't touch the mm locks!  Any code that uses them in NMI or MCE
handlers is in a bad way already. :)

> It seems to me that either we need to make the two locks different
> types, or use ASSERT()s to ensure we don't nest spin_lock() and
> spin_lock_recursive() calls.

I'd be happy with different types, unless there are cases where we care
about the speed of extra operations on the page_alloc or domain locks.

> In addition to the above problems, I find myself needing to implement
> spin_lock_recursive_irq{,save,restore}() variants.  The implementations
> themselves are not too hard to do, but I did wonder whether we might
> want to have extra ASSERT()s to remove potential deadlock scenarios from
> the NMI/MCE paths.   The ASSERT()s would have to be along the lines of
> "assert this is exclusively a recursive lock" or "assert this is a
> per-cpu regular spinlock which is never referenced outside of this
> specific NMI/MCE path".  The possible implementation of these
> pseudo-asserts would differ depending on the outcome of the query.

Have you looked at the existing lock debugging that Keir put in to do
exactly this for normal vs IRQ?  Can it be trivially extended?

Tim.
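(As background on the mixing hazard in this thread: a recursive lock works by recording an owner and a recursion count, and the plain lock path deadlocks when nested inside it precisely because it skips the owner check. A generic userspace sketch — pthreads stand in for Xen's raw spinlock here, this is not spinlock.c:)

```c
#include <pthread.h>

/*
 * Owner/count scheme of a recursive lock. A thread that already holds
 * the lock just bumps the count; anyone else blocks on the real lock.
 * A plain (non-recursive) acquisition on the same underlying lock would
 * block even for the current owner, which is the nesting deadlock.
 */
typedef struct {
    pthread_mutex_t mu;
    pthread_t owner;
    int owned;
    int count;
} rlock_t;

void rlock_acquire(rlock_t *l)
{
    if ( l->owned && pthread_equal(l->owner, pthread_self()) )
    {
        l->count++;              /* re-entry by the owner: just count it */
        return;
    }
    pthread_mutex_lock(&l->mu);  /* first entry: take the real lock */
    l->owner = pthread_self();
    l->owned = 1;
    l->count = 1;
}

void rlock_release(rlock_t *l)
{
    if ( --l->count == 0 )
    {
        l->owned = 0;
        pthread_mutex_unlock(&l->mu);
    }
}
```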

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:44:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYw8-00031G-5G; Thu, 06 Dec 2012 10:44:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgYw6-000317-19
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:44:10 +0000
Received: from [85.158.137.99:22468] by server-16.bemta-3.messagelabs.com id
	4E/F6-07461-8F670C05; Thu, 06 Dec 2012 10:44:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354790647!15004363!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16784 invoked from network); 6 Dec 2012 10:44:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:44:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 10:44:06 +0000
Message-Id: <50C0850402000078000AE77F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 10:44:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
	<50BF839002000078000AE3DF@nat28.tlf.novell.com>
	<20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 18:06, Matt Wilson <msw@amazon.com> wrote:
> On Wed, Dec 05, 2012 at 04:25:36PM +0000, Jan Beulich wrote:
>> >>> On 05.12.12 at 16:59, Matt Wilson <msw@amazon.com> wrote:
>> > On Wed, Dec 05, 2012 at 10:44:04AM +0000, Andrew Cooper wrote:
>> >> On 05/12/12 06:02, Matt Wilson wrote:
>> >> > An administrator may want Xen to pin dom0 vCPUs to pCPUs 1:1 at boot,
>> >> > but still have the flexibility to change the configuration later.
>> >> > There's no logic that keys off of domain->is_pinned outside of
>> >> > sched_init_vcpu() and vcpu_set_affinity(). By adjusting the
>> >> > is_pinned_vcpu() macro to only check for a single CPU set in the
>> >> > cpu_affinity mask, dom0 vCPUs can safely be re-pinned after the system
>> >> > boots.
>> >> 
>> >> Sadly this patch will break things.  There are certain callers of
>> >> is_pinned_vcpu() which rely on the value to allow access to certain
>> >> power related MSRs, which is where the requirement for never permitting
>> >> an update of the affinity mask comes from.
>> > 
>> > If this is true, the existing is_pinned_vcpu() test is broken:
>> > 
>> >    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
>> >                               cpumask_weight((v)->cpu_affinity) == 1)
>> > 
>> > It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
>> > the MSR traps will suddenly start working.
>> > 
>> > See commit: 
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd 
>> 
>> I don't see what's wrong here. Certain things merely require the
>> pCPU that a vCPU runs on to be stable, which is what the test
>> above is for.
> 
> Me either. That said, are you willing to Ack and commit my patch that
> started this thread?

In no case without Andrew's concerns addressed. Beyond that,
I'd be hesitant to ack it as I'm myself suspecting side effects that
we don't want and/or aren't aware of, and in no case could I
commit it without Keir's ack.

Jan
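(For reference, the weight-1 test the macro relies on: a vCPU counts as pinned, in the sense debated above, when exactly one bit is set in its affinity mask. A standalone illustration — Xen uses cpumask_weight() on a cpumask_t; a plain 64-bit mask and a popcount stand in here:)

```c
#include <stdint.h>

/* Kernighan popcount: clears the lowest set bit each iteration. */
static int mask_weight(uint64_t m)
{
    int n = 0;
    while ( m )
    {
        m &= m - 1;
        n++;
    }
    return n;
}

/* "Pinned" as in the thread: affinity mask names exactly one pCPU. */
static int is_pinned(uint64_t affinity)
{
    return mask_weight(affinity) == 1;
}
```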


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Dec 06 10:47:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgYzU-00038L-Pk; Thu, 06 Dec 2012 10:47:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgYzT-00038G-Cx
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:47:39 +0000
Received: from [85.158.137.99:12240] by server-16.bemta-3.messagelabs.com id
	21/2F-07461-AC770C05; Thu, 06 Dec 2012 10:47:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1354790857!15840706!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24793 invoked from network); 6 Dec 2012 10:47:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 10:47:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16195690"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 10:47:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	10:47:37 +0000
Message-ID: <1354790856.17165.78.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 6 Dec 2012 10:47:36 +0000
In-Reply-To: <20121206100818.GF82725@ocelot.phlegethon.org>
References: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
	<1354786255.17165.48.camel@zakaz.uk.xensource.com>
	<20121206100818.GF82725@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Liu Jinsong <jinsong.liu@intel.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 10:08 +0000, Tim Deegan wrote:
> At 09:30 +0000 on 06 Dec (1354786255), Ian Campbell wrote:
> > On Thu, 2012-12-06 at 01:55 +0000, Liu Jinsong wrote:
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > If someone from the hypervisor side wants to re-ack I'll apply (it
> > doesn't look to have changed all that much to me, but I don't want to
> > presume to take it).
> 
> Acked-by: Tim Deegan <tim@xen.org>

Applied, thanks Jinsong.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:50:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZ1i-0003Eb-C9; Thu, 06 Dec 2012 10:49:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZ1g-0003EQ-Mp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 10:49:56 +0000
Received: from [85.158.143.35:56304] by server-3.bemta-4.messagelabs.com id
	19/A3-18211-45870C05; Thu, 06 Dec 2012 10:49:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1354790995!4602228!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6820 invoked from network); 6 Dec 2012 10:49:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:49:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 10:49:54 +0000
Message-Id: <50C0866202000078000AE787@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 10:49:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <20121205174153.76fa5dd1@mantra.us.oracle.com>
In-Reply-To: <20121205174153.76fa5dd1@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: IanCampbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] page refcount on pages mapped by the toolstack..
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 02:41, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> Hi,
> 
> I observed something that doesn't seem right to me:
> 
> PV dom0 booting PV guest (say domid 1). (no PVH).
> 
> xl cr vm.cfg.pv
> 
> Take some mfn from domid 1. Its refcnt is 1 as expected. Now, the lib
> wants to map it via xen_remap_domain_mfn_range(). The call goes thru
> do_mmu_update(), and upon returning the refcnt is 2, as expected.
> 
> Now, I noticed the refcnt doesn't go back to 1 after the guest is 
> created/booted. I'd have expected the process exit somewhere to have
> resulted in the refcnt going down to 1 (which is what would happen in case
> of PVH dom0).
> 
> The guest is up, I notice the refcnt is 2. I shutdown the guest, the
> refcnt goes to 0 and the page is freed via relinquish_memory() called
> from domain_relinquish_resources(). I would have expected the page
> to hang with refcnt 1, what if the user process still has it mapped?
> 
> What am I missing?

Did you perhaps not monitor the changes to the refcnt closely
enough? It ought to be 2 when the guest is up (one reference for
the _PGC_allocated bit, and another for it to be mapped
somewhere in the guest). I.e. between Dom0 creating the guest
(and touching its memory) and the guest actually starting, there
could be further adjustments to the refcnt that simply sum up to
zero.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 10:54:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 10:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZ6K-0003aV-3M; Thu, 06 Dec 2012 10:54:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TgZ6I-0003aO-3n
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 10:54:42 +0000
Received: from [85.158.139.83:10990] by server-11.bemta-5.messagelabs.com id
	69/D3-03409-17970C05; Thu, 06 Dec 2012 10:54:41 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354791279!24781123!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2NzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29353 invoked from network); 6 Dec 2012 10:54:39 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-6.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 10:54:39 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 06 Dec 2012 02:54:37 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,229,1355126400"; d="scan'208";a="176916244"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 06 Dec 2012 02:54:37 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 6 Dec 2012 02:54:36 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 18:54:35 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with
	regard to migration
Thread-Index: AQHN058pcsHyB2wBT0Wa5y2EZLAG0pgLl7xg
Date: Thu, 6 Dec 2012 10:54:34 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A28C3@SHSMSX101.ccr.corp.intel.com>
References: <75d16e26926c873480ab.1354758907@ljsromley.bj.intel.com>
	<1354786255.17165.48.camel@zakaz.uk.xensource.com>
	<20121206100818.GF82725@ocelot.phlegethon.org>
	<1354790856.17165.78.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354790856.17165.78.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V5] X86/vMCE: handle broken page with regard
 to migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Thu, 2012-12-06 at 10:08 +0000, Tim Deegan wrote:
>> At 09:30 +0000 on 06 Dec (1354786255), Ian Campbell wrote:
>>> On Thu, 2012-12-06 at 01:55 +0000, Liu Jinsong wrote:
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>> 
>>> If someone from the hypervisor side wants to re-ack I'll apply (it
>>> doesn't look to have changed all that much to me, but I don't want
>>> to presume to take it).
>> 
>> Acked-by: Tim Deegan <tim@xen.org>
> 
> Applied, thanks Jinsong.
> 
> Ian.

Thank you all for your kind review and suggestions :-)

Cheers,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:02:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:02:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZE0-0003r0-8Q; Thu, 06 Dec 2012 11:02:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZDy-0003qv-VP
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:02:39 +0000
Received: from [85.158.138.51:49124] by server-11.bemta-3.messagelabs.com id
	59/E6-19361-E4B70C05; Thu, 06 Dec 2012 11:02:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354791580!27628882!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20828 invoked from network); 6 Dec 2012 10:59:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 10:59:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 10:59:39 +0000
Message-Id: <50C088AA02000078000AE7A8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 10:59:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ijc@hellion.org.uk>,
	"Konrad Rzeszutek Wilk" <konrad@kernel.org>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1354711599.15296.191.camel@zakaz.uk.xensource.com>
	<20121205214741.GA1150@phenom.dumpdata.com>
	<1354782871.28777.12.camel@dagon.hellion.org.uk>
In-Reply-To: <1354782871.28777.12.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	debian-kernel <debian-kernel@lists.debian.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 09:34, Ian Campbell <ijc@hellion.org.uk> wrote:
> (trim quote please...)
> On Wed, 2012-12-05 at 21:47 +0000, Konrad Rzeszutek Wilk wrote:
>> Do you want to prep a patch that I can stick in my 'microcode' branch?
>> .. That I will at some point try to upstream.
> 
> You might want to look back at the archives when Jeremy first tried to
> upstream this work, it was a vehement "No" and the resulting thread was
> not pretty.
> 
> Now that we have early loading via the hypervisor in 4.2 and Linux is
> finally in the process of growing its own early microcode loading
> solution I suspect the No would be even firmer.
> 
> It is on xenbits if you want it anyway:
> 
> git://xenbits.xen.org/people/ianc/linux-2.6.git debian/wheezy/microcode
> 
> About the only argument I can see for continuing to try upstreaming this
> stuff is that in
> http://www.gossamer-threads.com/lists/linux/kernel/1583630 Fenghua says:
> 
>         Note, however, that Linux users have gotten used to being able
>         to install a microcode patch in the field without having a
>         reboot; we support that model too.
> 
> i.e. this is an argument for keeping the previous scheme in parallel,
> which I suppose is an argument for supporting the same under Xen (I
> don't know if it's a good one, though).

Another counter argument would be that the kernel really is
only relaying things in the Xen case. Which means the user
mode tool could as well interface with Xen directly.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:07:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZID-00043f-Uk; Thu, 06 Dec 2012 11:07:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgZIC-00043Y-6D
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:07:00 +0000
Received: from [85.158.143.99:50200] by server-3.bemta-4.messagelabs.com id
	88/CF-18211-35C70C05; Thu, 06 Dec 2012 11:06:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1354792016!28167008!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3373 invoked from network); 6 Dec 2012 11:06:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 11:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="216595662"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 11:06:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 06:06:55 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgZI7-0007Bg-2c;
	Thu, 06 Dec 2012 11:06:55 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 11:06:54 +0000
Message-ID: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Ian Campbell <ian.campbell@citrix.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] docs: check for documentation generation tools
	in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SXQgaXMgc29tZXRpbWVzIGhhcmQgdG8gZGlzY292ZXIgYWxsIHRoZSBvcHRpb25hbCB0b29scyB0
aGF0IHNob3VsZCBiZQpvbiBhIHN5c3RlbSB0byBidWlsZCBhbGwgYXZhaWxhYmxlIFhlbiBkb2N1
bWVudGF0aW9uLiBCeSBjaGVja2luZyBmb3IKZG9jdW1lbnRhdGlvbiBnZW5lcmF0aW9uIHRvb2xz
IGF0IC4vY29uZmlndXJlIHRpbWUgYW5kIGRpc3BsYXlpbmcgYQp3YXJuaW5nLCBYZW4gcGFja2Fn
ZXJzIHdpbGwgbW9yZSBlYXNpbHkgbGVhcm4gYWJvdXQgbmV3IG9wdGlvbmFsIGJ1aWxkCmRlcGVu
ZGVuY2llcywgbGlrZSBtYXJrZG93biwgd2hlbiB0aGV5IGFyZSBpbnRyb2R1Y2VkLgoKQmFzZWQg
b24gYSBwYXRjaCBieSBNYXR0IFdpbHNvbi4gQ2hhbmdlZCB0byB1c2UgYSBzZXBhcmF0ZQpkb2Nz
L2NvbmZpZ3VyZSB3aGljaCBpcyBjYWxsZWQgZnJvbSB0aGUgdG9wLWxldmVsIGluIHRoZSBzYW1l
IG1hbm5lcgphcyBzdHViZG9tcy4KClJlcnVuIGF1dG9nZW4uc2ggYW5kICJnaXQgYWRkIGRvY3Mv
Y29uZmlndXJlIiBhZnRlciBhcHBseWluZyB0aGlzIHBhdGNoLgoKU2lnbmVkLW9mZi1ieTogTWF0
dCBXaWxzb24gPG1zd0BhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gQ2FtcGJlbGwgPGlh
bi5jYW1wYmVsbEBjaXRyaXguY29tPgpDYzogIkZpb3JhdmFudGUsIE1hdHRoZXcgRS4iIDxNYXR0
aGV3LkZpb3JhdmFudGVAamh1YXBsLmVkdT4KQ2M6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBh
dUBjaXRyaXguY29tPgotLS0KQXBwbGllcyBvbiB0b3Agb2YgTWF0dGhldydzICJBZGQgYXV0b2Nv
bmYgdG8gc3R1YmRvbSIgYW5kICJBZGQgYSB0b3AKbGV2ZWwgY29uZmlndXJlIHNjcmlwdCIuCi0t
LQogLmdpdGlnbm9yZSAgICAgICAgICAgIHwgICAgMSArCiAuaGdpZ25vcmUgICAgICAgICAgICAg
fCAgICAxICsKIFJFQURNRSAgICAgICAgICAgICAgICB8ICAgIDIgKy0KIGF1dG9nZW4uc2ggICAg
ICAgICAgICB8ICAgMTUgKysrKysrLS0tCiBjb25maWcvRG9jcy5tay5pbiAgICAgfCAgIDIwICsr
KysrKysrKysrCiBjb25maWd1cmUgICAgICAgICAgICAgfCAgICA0ICstCiBjb25maWd1cmUuYWMg
ICAgICAgICAgfCAgICAyICstCiBkb2NzL0RvY3MubWsgICAgICAgICAgfCAgIDEyIC0tLS0tLS0K
IGRvY3MvTWFrZWZpbGUgICAgICAgICB8ICAgODYgKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrLS0tLS0tLS0tLS0tLS0tCiBkb2NzL2NvbmZpZ3VyZS5hYyAgICAgfCAgIDI3ICsrKysr
KysrKysrKysrKwogZG9jcy9maWdzL01ha2VmaWxlICAgIHwgICAgMiArLQogZG9jcy94ZW4tYXBp
L01ha2VmaWxlIHwgICAgNyArKystCiBtNC9kb2NzX3Rvb2wubTQgICAgICAgfCAgIDE3ICsrKysr
KysrKysKIDEzIGZpbGVzIGNoYW5nZWQsIDE0NiBpbnNlcnRpb25zKCspLCA1MCBkZWxldGlvbnMo
LSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBjb25maWcvRG9jcy5tay5pbgogZGVsZXRlIG1vZGUgMTAw
NjQ0IGRvY3MvRG9jcy5tawogY3JlYXRlIG1vZGUgMTAwNjQ0IGRvY3MvY29uZmlndXJlLmFjCiBj
cmVhdGUgbW9kZSAxMDA2NDQgbTQvZG9jc190b29sLm00CgpkaWZmIC0tZ2l0IGEvLmdpdGlnbm9y
ZSBiLy5naXRpZ25vcmUKaW5kZXggNDZjZTYzYS4uYTRjZGQ2YyAxMDA2NDQKLS0tIGEvLmdpdGln
bm9yZQorKysgYi8uZ2l0aWdub3JlCkBAIC0xMjAsNiArMTIwLDcgQEAgY29uZmlnLnN0YXR1cwog
Y29uZmlnLmNhY2hlCiBjb25maWcvVG9vbHMubWsKIGNvbmZpZy9TdHViZG9tLm1rCitjb25maWcv
RG9jcy5tawogdG9vbHMvYmxrdGFwMi9kYWVtb24vYmxrdGFwY3RybAogdG9vbHMvYmxrdGFwMi9k
cml2ZXJzL2ltZzJxY293CiB0b29scy9ibGt0YXAyL2RyaXZlcnMvbG9jay11dGlsCmRpZmYgLS1n
aXQgYS8uaGdpZ25vcmUgYi8uaGdpZ25vcmUKaW5kZXggMDM5MmE1Ni4uZGEzYTdlNiAxMDA2NDQK
LS0tIGEvLmhnaWdub3JlCisrKyBiLy5oZ2lnbm9yZQpAQCAtMzEyLDYgKzMxMiw3IEBACiBedG9v
bHMvY29uZmlnXC5jYWNoZSQKIF5jb25maWcvVG9vbHNcLm1rJAogXmNvbmZpZy9TdHViZG9tXC5t
ayQKK15jb25maWcvRG9jc1wubWskCiBeeGVuL1wuYmFubmVyLiokCiBeeGVuL0JMT0ckCiBeeGVu
L1N5c3RlbS5tYXAkCmRpZmYgLS1naXQgYS9SRUFETUUgYi9SRUFETUUKaW5kZXggZjVkNTUzMC4u
ODg0MDFmNyAxMDA2NDQKLS0tIGEvUkVBRE1FCisrKyBiL1JFQURNRQpAQCAtNTcsNyArNTcsNiBA
QCBwcm92aWRlZCBieSB5b3VyIE9TIGRpc3RyaWJ1dG9yOgogICAgICogR05VIGdldHRleHQKICAg
ICAqIDE2LWJpdCB4ODYgYXNzZW1ibGVyLCBsb2FkZXIgYW5kIGNvbXBpbGVyIChkZXY4NiBycG0g
b3IgYmluODYgJiBiY2MgZGVicykKICAgICAqIEFDUEkgQVNMIGNvbXBpbGVyIChpYXNsKQotICAg
ICogbWFya2Rvd24KIAogSW4gYWRkaXRpb24gdG8gdGhlIGFib3ZlIHRoZXJlIGFyZSBhIG51bWJl
ciBvZiBvcHRpb25hbCBidWlsZAogcHJlcmVxdWlzaXRlcy4gT21pdHRpbmcgdGhlc2Ugd2lsbCBj
YXVzZSB0aGUgcmVsYXRlZCBmZWF0dXJlcyB0byBiZQpAQCAtNjUsNiArNjQsNyBAQCBkaXNhYmxl
ZCBhdCBjb21waWxlIHRpbWU6CiAgICAgKiBEZXZlbG9wbWVudCBpbnN0YWxsIG9mIE9jYW1sIChl
LmcuIG9jYW1sLW5veCBhbmQKICAgICAgIG9jYW1sLWZpbmRsaWIpLiBSZXF1aXJlZCB0byBidWls
ZCBvY2FtbCBjb21wb25lbnRzIHdoaWNoCiAgICAgICBpbmNsdWRlcyB0aGUgYWx0ZXJuYXRpdmUg
b2NhbWwgeGVuc3RvcmVkLgorICAgICogbWFya2Rvd24KIAogU2Vjb25kLCB5b3UgbmVlZCB0byBh
Y3F1aXJlIGEgc3VpdGFibGUga2VybmVsIGZvciB1c2UgaW4gZG9tYWluIDAuIElmCiBwb3NzaWJs
ZSB5b3Ugc2hvdWxkIHVzZSBhIGtlcm5lbCBwcm92aWRlZCBieSB5b3VyIE9TIGRpc3RyaWJ1dG9y
LiBJZgpkaWZmIC0tZ2l0IGEvYXV0b2dlbi5zaCBiL2F1dG9nZW4uc2gKaW5kZXggMTQ1NmQ5NC4u
YjVjOTY4OCAxMDA3NTUKLS0tIGEvYXV0b2dlbi5zaAorKysgYi9hdXRvZ2VuLnNoCkBAIC0xLDcg
KzEsMTIgQEAKICMhL2Jpbi9zaCAtZQogYXV0b2NvbmYKLWNkIHRvb2xzCi1hdXRvY29uZgotYXV0
b2hlYWRlcgotY2QgLi4vc3R1YmRvbQotYXV0b2NvbmYKKyggY2QgdG9vbHMKKyAgYXV0b2NvbmYK
KyAgYXV0b2hlYWRlcgorKQorKCBjZCBzdHViZG9tCisgIGF1dG9jb25mCispCisoIGNkIGRvY3MK
KyAgYXV0b2NvbmYKKykKZGlmZiAtLWdpdCBhL2NvbmZpZy9Eb2NzLm1rLmluIGIvY29uZmlnL0Rv
Y3MubWsuaW4KbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMC4uYjZhYjZmZQotLS0g
L2Rldi9udWxsCisrKyBiL2NvbmZpZy9Eb2NzLm1rLmluCkBAIC0wLDAgKzEsMjAgQEAKKyMgUHJl
Zml4IGFuZCBpbnN0YWxsIGZvbGRlcgorcHJlZml4ICAgICAgICAgICAgICA6PSBAcHJlZml4QAor
UFJFRklYICAgICAgICAgICAgICA6PSAkKHByZWZpeCkKK2V4ZWNfcHJlZml4ICAgICAgICAgOj0g
QGV4ZWNfcHJlZml4QAorbGliZGlyICAgICAgICAgICAgICA6PSBAbGliZGlyQAorTElCRElSICAg
ICAgICAgICAgICA6PSAkKGxpYmRpcikKKworIyBUb29scworUFMyUERGICAgICAgICAgICAgICA6
PSBAUFMyUERGQAorRFZJUFMgICAgICAgICAgICAgICA6PSBARFZJUFNACitMQVRFWCAgICAgICAg
ICAgICAgIDo9IEBMQVRFWEAKK0ZJRzJERVYgICAgICAgICAgICAgOj0gQEZJRzJERVZACitMQVRF
WDJIVE1MICAgICAgICAgIDo9IEBMQVRFWDJIVE1MQAorRE9YWUdFTiAgICAgICAgICAgICA6PSBA
RE9YWUdFTkAKK1BPRDJNQU4gICAgICAgICAgICAgOj0gQFBPRDJNQU5ACitQT0QySFRNTCAgICAg
ICAgICAgIDo9IEBQT0QySFRNTEAKK1BPRDJURVhUICAgICAgICAgICAgOj0gQFBPRDJURVhUQAor
RE9UICAgICAgICAgICAgICAgICA6PSBARE9UQAorTkVBVE8gICAgICAgICAgICAgICA6PSBATkVB
VE9ACitNQVJLRE9XTiAgICAgICAgICAgIDo9IEBNQVJLRE9XTkAKZGlmZiAtLWdpdCBhL2NvbmZp
Z3VyZSBiL2NvbmZpZ3VyZQppbmRleCA2NDk3MDhmLi5hMzA3ZjNhIDEwMDc1NQotLS0gYS9jb25m
aWd1cmUKKysrIGIvY29uZmlndXJlCkBAIC02MDYsNyArNjA2LDcgQEAgZW5hYmxlX29wdGlvbl9j
aGVja2luZwogICAgICAgYWNfcHJlY2lvdXNfdmFycz0nYnVpbGRfYWxpYXMKIGhvc3RfYWxpYXMK
IHRhcmdldF9hbGlhcycKLWFjX3N1YmRpcnNfYWxsPSd0b29scyBzdHViZG9tJworYWNfc3ViZGly
c19hbGw9J3Rvb2xzIGRvY3Mgc3R1YmRvbScKIAogIyBJbml0aWFsaXplIHNvbWUgdmFyaWFibGVz
IHNldCBieSBvcHRpb25zLgogYWNfaW5pdF9oZWxwPQpAQCAtMTY3NSw3ICsxNjc1LDcgQEAgYWNf
Y29uZmlndXJlPSIkU0hFTEwgJGFjX2F1eF9kaXIvY29uZmlndXJlIiAgIyBQbGVhc2UgZG9uJ3Qg
dXNlIHRoaXMgdmFyLgogCiAKIAotc3ViZGlycz0iJHN1YmRpcnMgdG9vbHMgc3R1YmRvbSIKK3N1
YmRpcnM9IiRzdWJkaXJzIHRvb2xzIGRvY3Mgc3R1YmRvbSIKIAogCiBjYXQgPmNvbmZjYWNoZSA8
PFxfQUNFT0YKZGlmZiAtLWdpdCBhL2NvbmZpZ3VyZS5hYyBiL2NvbmZpZ3VyZS5hYwppbmRleCAw
NDk3ZDk3Li42MzdiMzViIDEwMDY0NAotLS0gYS9jb25maWd1cmUuYWMKKysrIGIvY29uZmlndXJl
LmFjCkBAIC02LDYgKzYsNiBAQCBBQ19JTklUKFtYZW4gSHlwZXJ2aXNvcl0sIG00X2VzeXNjbWQo
Wy4vdmVyc2lvbi5zaCAuL3hlbi9NYWtlZmlsZV0pLAogICAgIFt4ZW4tZGV2ZWxAbGlzdHMueGVu
Lm9yZ10sIFt4ZW5dLCBbaHR0cDovL3d3dy54ZW4ub3JnL10pCiBBQ19DT05GSUdfU1JDRElSKFsu
L3hlbi9jb21tb24va2VybmVsLmNdKQogCi1BQ19DT05GSUdfU1VCRElSUyhbdG9vbHMgc3R1YmRv
bV0pCitBQ19DT05GSUdfU1VCRElSUyhbdG9vbHMgZG9jcyBzdHViZG9tXSkKIAogQUNfT1VUUFVU
KCkKZGlmZiAtLWdpdCBhL2RvY3MvRG9jcy5tayBiL2RvY3MvRG9jcy5tawpkZWxldGVkIGZpbGUg
bW9kZSAxMDA2NDQKaW5kZXggYWE2NTNkMy4uMDAwMDAwMAotLS0gYS9kb2NzL0RvY3MubWsKKysr
IC9kZXYvbnVsbApAQCAtMSwxMiArMCwwIEBACi1QUzJQREYJCTo9IHBzMnBkZgotRFZJUFMJCTo9
IGR2aXBzCi1MQVRFWAkJOj0gbGF0ZXgKLUZJRzJERVYJCTo9IGZpZzJkZXYKLUxBVEVYMkhUTUwJ
Oj0gbGF0ZXgyaHRtbAotRE9YWUdFTgkJOj0gZG94eWdlbgotUE9EMk1BTgkJOj0gcG9kMm1hbgot
UE9EMkhUTUwJOj0gcG9kMmh0bWwKLVBPRDJURVhUCTo9IHBvZDJ0ZXh0Ci1ET1QJCTo9IGRvdAot
TkVBVE8JCTo9IG5lYXRvCi1NQVJLRE9XTgk6PSBtYXJrZG93bgpkaWZmIC0tZ2l0IGEvZG9jcy9N
YWtlZmlsZSBiL2RvY3MvTWFrZWZpbGUKaW5kZXggMDNmMTQxYS4uMDM2MjFmNyAxMDA2NDQKLS0t
IGEvZG9jcy9NYWtlZmlsZQorKysgYi9kb2NzL01ha2VmaWxlCkBAIC0yLDcgKzIsNyBAQAogCiBY
RU5fUk9PVD0kKENVUkRJUikvLi4KIGluY2x1ZGUgJChYRU5fUk9PVCkvQ29uZmlnLm1rCi1pbmNs
dWRlICQoWEVOX1JPT1QpL2RvY3MvRG9jcy5taworaW5jbHVkZSAkKFhFTl9ST09UKS9jb25maWcv
RG9jcy5tawogCiBWRVJTSU9OCQk9IHhlbi11bnN0YWJsZQogCkBAIC0yNiwxMCArMjYsMTIgQEAg
YWxsOiBidWlsZAogCiAuUEhPTlk6IGJ1aWxkCiBidWlsZDogaHRtbCB0eHQgbWFuLXBhZ2VzIGZp
Z3MKLQlAaWYgd2hpY2ggJChET1QpIDE+L2Rldi9udWxsIDI+L2Rldi9udWxsIDsgdGhlbiAgICAg
ICAgICAgICAgXAotCSQoTUFLRSkgLUMgeGVuLWFwaSBidWlsZCA7IGVsc2UgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCi0gICAgICAgIGVjaG8gIkdyYXBodml6IChkb3QpIG5vdCBpbnN0
YWxsZWQ7IHNraXBwaW5nIHhlbi1hcGkuIiA7IGZpCitpZmRlZiBET1QKKwkkKE1BS0UpIC1DIHhl
bi1hcGkgYnVpbGQKIAlybSAtZiAqLmF1eCAqLmR2aSAqLmJibCAqLmJsZyAqLmdsbyAqLmlkeCAq
LmlsZyAqLmxvZyAqLmluZCAqLnRvYworZWxzZQorCUBlY2hvICJHcmFwaHZpeiAoZG90KSBub3Qg
aW5zdGFsbGVkOyBza2lwcGluZyB4ZW4tYXBpLiIKK2VuZGlmCiAKIC5QSE9OWTogZGV2LWRvY3MK
IGRldi1kb2NzOiBweXRob24tZGV2LWRvY3MKQEAgLTM5LDMwICs0MSwzNyBAQCBodG1sOiAkKERP
Q19IVE1MKSBodG1sL2luZGV4Lmh0bWwKIAogLlBIT05ZOiB0eHQKIHR4dDoKLQlAaWYgd2hpY2gg
JChQT0QyVEVYVCkgMT4vZGV2L251bGwgMj4vZGV2L251bGw7IHRoZW4gXAotCSQoTUFLRSkgJChE
T0NfVFhUKTsgZWxzZSAgICAgICAgICAgICAgXAotCWVjaG8gInBvZDJ0ZXh0IG5vdCBpbnN0YWxs
ZWQ7IHNraXBwaW5nIHRleHQgb3V0cHV0cy4iOyBmaQoraWZkZWYgUE9EMlRFWFQKKwkkKE1BS0Up
ICQoRE9DX1RYVCkKK2Vsc2UKKwlAZWNobyAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBp
bmcgdGV4dCBvdXRwdXRzLiIKK2VuZGlmCiAKIC5QSE9OWTogZmlncwogZmlnczoKLQlAc2V0IC1l
IDsgaWYgd2hpY2ggJChGSUcyREVWKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0J
c2V0IC14OyAkKE1BS0UpIC1DIGZpZ3MgOyBlbHNlICAgICAgICAgICAgICAgICAgIFwKLQllY2hv
ICJmaWcyZGV2ICh0cmFuc2ZpZykgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgZmlncy4iOyBmaQor
aWZkZWYgRklHMkRFVgorCXNldCAteDsgJChNQUtFKSAtQyBmaWdzCitlbHNlCisJQGVjaG8gImZp
ZzJkZXYgKHRyYW5zZmlnKSBub3QgaW5zdGFsbGVkOyBza2lwcGluZyBmaWdzLiIKK2VuZGlmCiAK
IC5QSE9OWTogcHl0aG9uLWRldi1kb2NzCiBweXRob24tZGV2LWRvY3M6Ci0JQG1rZGlyIC12IC1w
IGFwaS90b29scy9weXRob24KLQlAc2V0IC1lIDsgaWYgd2hpY2ggJChET1hZR0VOKSAxPi9kZXYv
bnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0gICAgICAgIGVjaG8gIlJ1bm5pbmcgZG94eWdlbiB0
byBnZW5lcmF0ZSBQeXRob24gdG9vbHMgQVBJcyAuLi4gIjsgXAotCSQoRE9YWUdFTikgRG94eWZp
bGU7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAotCSQoTUFLRSkgLUMg
YXBpL3Rvb2xzL3B5dGhvbi9sYXRleCA7IGVsc2UgICAgICAgICAgICAgICAgICAgXAotICAgICAg
ICBlY2hvICJEb3h5Z2VuIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIHB5dGhvbi1kZXYtZG9jcy4i
OyBmaQoraWZkZWYgRE9YWUdFTgorCUBlY2hvICJSdW5uaW5nIGRveHlnZW4gdG8gZ2VuZXJhdGUg
UHl0aG9uIHRvb2xzIEFQSXMgLi4uICIKKwlta2RpciAtdiAtcCBhcGkvdG9vbHMvcHl0aG9uCisJ
JChET1hZR0VOKSBEb3h5ZmlsZSAmJiAkKE1BS0UpIC1DIGFwaS90b29scy9weXRob24vbGF0ZXgK
K2Vsc2UKKwlAZWNobyAiRG94eWdlbiBub3QgaW5zdGFsbGVkOyBza2lwcGluZyBweXRob24tZGV2
LWRvY3MuIgorZW5kaWYKIAogLlBIT05ZOiBtYW4tcGFnZXMKIG1hbi1wYWdlczoKLQlAaWYgd2hp
Y2ggJChQT0QyTUFOKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0JJChNQUtFKSAk
KERPQ19NQU4xKSAkKERPQ19NQU41KTsgZWxzZSAgICAgICAgICAgICAgXAotCWVjaG8gInBvZDJt
YW4gbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgbWFuLXBhZ2VzLiI7IGZpCitpZmRlZiBQT0QyTUFO
CisJJChNQUtFKSAkKERPQ19NQU4xKSAkKERPQ19NQU41KQorZWxzZQorCUBlY2hvICJwb2QybWFu
IG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIG1hbi1wYWdlcy4iCitlbmRpZgogCiBtYW4xLyUuMTog
bWFuLyUucG9kLjEgTWFrZWZpbGUKIAkkKElOU1RBTExfRElSKSAkKEBEKQpAQCAtODcsNiArOTYs
NyBAQCBjbGVhbjoKIAogLlBIT05ZOiBkaXN0Y2xlYW4KIGRpc3RjbGVhbjogY2xlYW4KKwlybSAt
cmYgLi4vY29uZmlnL0RvY3MubWsgY29uZmlnLmxvZyBjb25maWcuc3RhdHVzIGF1dG9tNHRlLmNh
Y2hlCiAKIC5QSE9OWTogaW5zdGFsbAogaW5zdGFsbDogYWxsCkBAIC0xMDQsMzAgKzExNCw0MCBA
QCBodG1sL2luZGV4Lmh0bWw6ICQoRE9DX0hUTUwpIC4vZ2VuLWh0bWwtaW5kZXggSU5ERVgKIAlw
ZXJsIC13IC0tIC4vZ2VuLWh0bWwtaW5kZXggLWkgSU5ERVggaHRtbCAkKERPQ19IVE1MKQogCiBo
dG1sLyUuaHRtbDogJS5tYXJrZG93bgotCUAkKElOU1RBTExfRElSKSAkKEBEKQotCUBzZXQgLWUg
OyBpZiB3aGljaCAkKE1BUktET1dOKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0J
ZWNobyAiUnVubmluZyBtYXJrZG93biB0byBnZW5lcmF0ZSAkKi5odG1sIC4uLiAiOyBcCisJJChJ
TlNUQUxMX0RJUikgJChARCkKK2lmZGVmIE1BUktET1dOCisJQGVjaG8gIlJ1bm5pbmcgbWFya2Rv
d24gdG8gZ2VuZXJhdGUgJCouaHRtbCAuLi4gIgogCSQoTUFSS0RPV04pICQ8ID4gJEAudG1wIDsg
XAotCSQoY2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKSA7IGVsc2UgXAotCWVjaG8gIm1h
cmtkb3duIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQqLmh0bWwuIjsgZmkKKwkkKGNhbGwgbW92
ZS1pZi1jaGFuZ2VkLCRALnRtcCwkQCkKK2Vsc2UKKwlAZWNobyAibWFya2Rvd24gbm90IGluc3Rh
bGxlZDsgc2tpcHBpbmcgJCouaHRtbC4iCitlbmRpZgogCiBodG1sLyUudHh0OiAlLnR4dAotCUAk
KElOU1RBTExfRElSKSAkKEBEKQorCSQoSU5TVEFMTF9ESVIpICQoQEQpCiAJY3AgJDwgJEAKIAog
aHRtbC9tYW4vJS4xLmh0bWw6IG1hbi8lLnBvZC4xIE1ha2VmaWxlCiAJJChJTlNUQUxMX0RJUikg
JChARCkKK2lmZGVmIFBPRDJIVE1MCiAJJChQT0QySFRNTCkgLS1pbmZpbGU9JDwgLS1vdXRmaWxl
PSRALnRtcAogCSQoY2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKQorZWxzZQorCUBlY2hv
ICJwb2QyaHRtbCBub3QgaW5zdGFsbGVkOyBza2lwcGluZyAkPC4iCitlbmRpZgogCiBodG1sL21h
bi8lLjUuaHRtbDogbWFuLyUucG9kLjUgTWFrZWZpbGUKIAkkKElOU1RBTExfRElSKSAkKEBEKQor
aWZkZWYgUE9EMkhUTUwKIAkkKFBPRDJIVE1MKSAtLWluZmlsZT0kPCAtLW91dGZpbGU9JEAudG1w
CiAJJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEApCitlbHNlCisJQGVjaG8gInBvZDJo
dG1sIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQ8LiIKK2VuZGlmCiAKIGh0bWwvaHlwZXJjYWxs
L2luZGV4Lmh0bWw6IC4veGVuLWhlYWRlcnMKIAlybSAtcmYgJChARCkKLQlAJChJTlNUQUxMX0RJ
UikgJChARCkKKwkkKElOU1RBTExfRElSKSAkKEBEKQogCS4veGVuLWhlYWRlcnMgLU8gJChARCkg
XAogCQktVCAnYXJjaC14ODZfNjQgLSBYZW4gcHVibGljIGhlYWRlcnMnIFwKIAkJLVggYXJjaC1p
YTY0IC1YIGFyY2gteDg2XzMyIC1YIHhlbi14ODZfMzIgLVggYXJjaC1hcm0gXApAQCAtMTQ3LDEx
ICsxNjcsMjMgQEAgdHh0LyUudHh0OiAlLm1hcmtkb3duCiAKIHR4dC9tYW4vJS4xLnR4dDogbWFu
LyUucG9kLjEgTWFrZWZpbGUKIAkkKElOU1RBTExfRElSKSAkKEBEKQoraWZkZWYgUE9EMlRFWFQK
IAkkKFBPRDJURVhUKSAkPCAkQC50bXAKIAkkKGNhbGwgbW92ZS1pZi1jaGFuZ2VkLCRALnRtcCwk
QCkKK2Vsc2UKKwlAZWNobyAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJDwuIgor
ZW5kaWYKIAogdHh0L21hbi8lLjUudHh0OiBtYW4vJS5wb2QuNSBNYWtlZmlsZQogCSQoSU5TVEFM
TF9ESVIpICQoQEQpCitpZmRlZiBQT0QyVEVYVAogCSQoUE9EMlRFWFQpICQ8ICRALnRtcAogCSQo
Y2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKQotCitlbHNlCisJQGVjaG8gInBvZDJ0ZXh0
IG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQ8LiIKK2VuZGlmCisKK2lmZXEgKCwkKGZpbmRzdHJp
bmcgY2xlYW4sJChNQUtFQ01ER09BTFMpKSkKKyQoWEVOX1JPT1QpL2NvbmZpZy9Eb2NzLm1rOgor
CSQoZXJyb3IgWW91IGhhdmUgdG8gcnVuIC4vY29uZmlndXJlIGJlZm9yZSBidWlsZGluZyBkb2Nz
KQorZW5kaWYKZGlmZiAtLWdpdCBhL2RvY3MvY29uZmlndXJlLmFjIGIvZG9jcy9jb25maWd1cmUu
YWMKbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMC4uNDVkYzliOAotLS0gL2Rldi9u
dWxsCisrKyBiL2RvY3MvY29uZmlndXJlLmFjCkBAIC0wLDAgKzEsMjcgQEAKKyMgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIC0qLSBBdXRvY29uZiAtKi0KKyMg
UHJvY2VzcyB0aGlzIGZpbGUgd2l0aCBhdXRvY29uZiB0byBwcm9kdWNlIGEgY29uZmlndXJlIHNj
cmlwdC4KKworQUNfUFJFUkVRKFsyLjY3XSkKK0FDX0lOSVQoW1hlbiBIeXBlcnZpc29yIERvY3Vt
ZW50YXRpb25dLCBtNF9lc3lzY21kKFsuLi92ZXJzaW9uLnNoIC4uL3hlbi9NYWtlZmlsZV0pLAor
ICAgIFt4ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZ10sIFt4ZW5dLCBbaHR0cDovL3d3dy54ZW4ub3Jn
L10pCitBQ19DT05GSUdfU1JDRElSKFttaXNjL3hlbi1jb21tYW5kLWxpbmUubWFya2Rvd25dKQor
QUNfQ09ORklHX0ZJTEVTKFsuLi9jb25maWcvRG9jcy5ta10pCitBQ19DT05GSUdfQVVYX0RJUihb
Li4vXSkKKworIyBNNCBNYWNybyBpbmNsdWRlcworbTRfaW5jbHVkZShbLi4vbTQvZG9jc190b29s
Lm00XSkKKworQVhfRE9DU19UT09MX1BST0coW1BTMlBERl0sIFtwczJwZGZdKQorQVhfRE9DU19U
T09MX1BST0coW0RWSVBTXSwgW2R2aXBzXSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtMQVRFWF0sIFts
YXRleF0pCitBWF9ET0NTX1RPT0xfUFJPRyhbRklHMkRFVl0sIFtmaWcyZGV2XSkKK0FYX0RPQ1Nf
VE9PTF9QUk9HKFtMQVRFWDJIVE1MXSwgW2xhdGV4Mmh0bWxdKQorQVhfRE9DU19UT09MX1BST0co
W0RPWFlHRU5dLCBbZG94eWdlbl0pCitBWF9ET0NTX1RPT0xfUFJPRyhbUE9EMk1BTl0sIFtwb2Qy
bWFuXSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtQT0QySFRNTF0sIFtwb2QyaHRtbF0pCitBWF9ET0NT
X1RPT0xfUFJPRyhbUE9EMlRFWFRdLCBbcG9kMnRleHRdKQorQVhfRE9DU19UT09MX1BST0coW0RP
VF0sIFtkb3RdKQorQVhfRE9DU19UT09MX1BST0coW05FQVRPXSwgW25lYXRvXSkKK0FYX0RPQ1Nf
VE9PTF9QUk9HUyhbTUFSS0RPV05dLCBbbWFya2Rvd25dLCBbbWFya2Rvd24gbWFya2Rvd25fcHld
KQorCitBQ19PVVRQVVQoKQpkaWZmIC0tZ2l0IGEvZG9jcy9maWdzL01ha2VmaWxlIGIvZG9jcy9m
aWdzL01ha2VmaWxlCmluZGV4IDVlY2RhZTMuLmY3ODJkYzEgMTAwNjQ0Ci0tLSBhL2RvY3MvZmln
cy9NYWtlZmlsZQorKysgYi9kb2NzL2ZpZ3MvTWFrZWZpbGUKQEAgLTEsNyArMSw3IEBACiAKIFhF
Tl9ST09UPSQoQ1VSRElSKS8uLi8uLgogaW5jbHVkZSAkKFhFTl9ST09UKS9Db25maWcubWsKLWlu
Y2x1ZGUgJChYRU5fUk9PVCkvZG9jcy9Eb2NzLm1rCitpbmNsdWRlICQoWEVOX1JPT1QpL2NvbmZp
Zy9Eb2NzLm1rCiAKIFRBUkdFVFM9IG5ldHdvcmstYnJpZGdlLnBuZyBuZXR3b3JrLWJhc2ljLnBu
ZwogCmRpZmYgLS1naXQgYS9kb2NzL3hlbi1hcGkvTWFrZWZpbGUgYi9kb2NzL3hlbi1hcGkvTWFr
ZWZpbGUKaW5kZXggNzdhMDExNy4uYjJkYTY1MSAxMDA2NDQKLS0tIGEvZG9jcy94ZW4tYXBpL01h
a2VmaWxlCisrKyBiL2RvY3MveGVuLWFwaS9NYWtlZmlsZQpAQCAtMiw3ICsyLDcgQEAKIAogWEVO
X1JPT1Q9JChDVVJESVIpLy4uLy4uCiBpbmNsdWRlICQoWEVOX1JPT1QpL0NvbmZpZy5tawotaW5j
bHVkZSAkKFhFTl9ST09UKS9kb2NzL0RvY3MubWsKKy1pbmNsdWRlICQoWEVOX1JPT1QpL2NvbmZp
Zy9Eb2NzLm1rCiAKIAogVEVYIDo9ICQod2lsZGNhcmQgKi50ZXgpCkBAIC00MiwzICs0Miw4IEBA
IHhlbmFwaS1kYXRhbW9kZWwtZ3JhcGguZXBzOiB4ZW5hcGktZGF0YW1vZGVsLWdyYXBoLmRvdAog
LlBIT05ZOiBjbGVhbgogY2xlYW46CiAJcm0gLWYgKi5wZGYgKi5wcyAqLmR2aSAqLmF1eCAqLmxv
ZyAqLm91dCAkKEVQU0RPVCkKKworaWZlcSAoLCQoZmluZHN0cmluZyBjbGVhbiwkKE1BS0VDTURH
T0FMUykpKQorJChYRU5fUk9PVCkvY29uZmlnL0RvY3MubWs6CisJJChlcnJvciBZb3UgaGF2ZSB0
byBydW4gLi9jb25maWd1cmUgYmVmb3JlIGJ1aWxkaW5nIGRvY3MpCitlbmRpZgpkaWZmIC0tZ2l0
IGEvbTQvZG9jc190b29sLm00IGIvbTQvZG9jc190b29sLm00Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0
CmluZGV4IDAwMDAwMDAuLjNlODgxNGEKLS0tIC9kZXYvbnVsbAorKysgYi9tNC9kb2NzX3Rvb2wu
bTQKQEAgLTAsMCArMSwxNyBAQAorQUNfREVGVU4oW0FYX0RPQ1NfVE9PTF9QUk9HXSwgWworZG5s
CisgICAgQUNfQVJHX1ZBUihbJDFdLCBbUGF0aCB0byAkMiB0b29sXSkKKyAgICBBQ19QQVRIX1BS
T0coWyQxXSwgWyQyXSkKKyAgICBBU19JRihbISB0ZXN0IC14ICIkYWNfY3ZfcGF0aF8kMSJdLCBb
CisgICAgICAgIEFDX01TR19XQVJOKFskMiBpcyBub3QgYXZhaWxhYmxlIHNvIHNvbWUgZG9jdW1l
bnRhdGlvbiB3b24ndCBiZSBidWlsdF0pCisgICAgXSkKK10pCisKK0FDX0RFRlVOKFtBWF9ET0NT
X1RPT0xfUFJPR1NdLCBbCitkbmwKKyAgICBBQ19BUkdfVkFSKFskMV0sIFtQYXRoIHRvICQyIHRv
b2xdKQorICAgIEFDX1BBVEhfUFJPR1MoWyQxXSwgWyQzXSkKKyAgICBBU19JRihbISB0ZXN0IC14
ICIkYWNfY3ZfcGF0aF8kMSJdLCBbCisgICAgICAgIEFDX01TR19XQVJOKFskMiBpcyBub3QgYXZh
aWxhYmxlIHNvIHNvbWUgZG9jdW1lbnRhdGlvbiB3b24ndCBiZSBidWlsdF0pCisgICAgXSkKK10p
Ci0tIAoxLjcuMi41CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

L3hlbi9jb21tb24va2VybmVsLmNdKQogCi1BQ19DT05GSUdfU1VCRElSUyhbdG9vbHMgc3R1YmRv
bV0pCitBQ19DT05GSUdfU1VCRElSUyhbdG9vbHMgZG9jcyBzdHViZG9tXSkKIAogQUNfT1VUUFVU
KCkKZGlmZiAtLWdpdCBhL2RvY3MvRG9jcy5tayBiL2RvY3MvRG9jcy5tawpkZWxldGVkIGZpbGUg
bW9kZSAxMDA2NDQKaW5kZXggYWE2NTNkMy4uMDAwMDAwMAotLS0gYS9kb2NzL0RvY3MubWsKKysr
IC9kZXYvbnVsbApAQCAtMSwxMiArMCwwIEBACi1QUzJQREYJCTo9IHBzMnBkZgotRFZJUFMJCTo9
IGR2aXBzCi1MQVRFWAkJOj0gbGF0ZXgKLUZJRzJERVYJCTo9IGZpZzJkZXYKLUxBVEVYMkhUTUwJ
Oj0gbGF0ZXgyaHRtbAotRE9YWUdFTgkJOj0gZG94eWdlbgotUE9EMk1BTgkJOj0gcG9kMm1hbgot
UE9EMkhUTUwJOj0gcG9kMmh0bWwKLVBPRDJURVhUCTo9IHBvZDJ0ZXh0Ci1ET1QJCTo9IGRvdAot
TkVBVE8JCTo9IG5lYXRvCi1NQVJLRE9XTgk6PSBtYXJrZG93bgpkaWZmIC0tZ2l0IGEvZG9jcy9N
YWtlZmlsZSBiL2RvY3MvTWFrZWZpbGUKaW5kZXggMDNmMTQxYS4uMDM2MjFmNyAxMDA2NDQKLS0t
IGEvZG9jcy9NYWtlZmlsZQorKysgYi9kb2NzL01ha2VmaWxlCkBAIC0yLDcgKzIsNyBAQAogCiBY
RU5fUk9PVD0kKENVUkRJUikvLi4KIGluY2x1ZGUgJChYRU5fUk9PVCkvQ29uZmlnLm1rCi1pbmNs
dWRlICQoWEVOX1JPT1QpL2RvY3MvRG9jcy5taworaW5jbHVkZSAkKFhFTl9ST09UKS9jb25maWcv
RG9jcy5tawogCiBWRVJTSU9OCQk9IHhlbi11bnN0YWJsZQogCkBAIC0yNiwxMCArMjYsMTIgQEAg
YWxsOiBidWlsZAogCiAuUEhPTlk6IGJ1aWxkCiBidWlsZDogaHRtbCB0eHQgbWFuLXBhZ2VzIGZp
Z3MKLQlAaWYgd2hpY2ggJChET1QpIDE+L2Rldi9udWxsIDI+L2Rldi9udWxsIDsgdGhlbiAgICAg
ICAgICAgICAgXAotCSQoTUFLRSkgLUMgeGVuLWFwaSBidWlsZCA7IGVsc2UgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCi0gICAgICAgIGVjaG8gIkdyYXBodml6IChkb3QpIG5vdCBpbnN0
YWxsZWQ7IHNraXBwaW5nIHhlbi1hcGkuIiA7IGZpCitpZmRlZiBET1QKKwkkKE1BS0UpIC1DIHhl
bi1hcGkgYnVpbGQKIAlybSAtZiAqLmF1eCAqLmR2aSAqLmJibCAqLmJsZyAqLmdsbyAqLmlkeCAq
LmlsZyAqLmxvZyAqLmluZCAqLnRvYworZWxzZQorCUBlY2hvICJHcmFwaHZpeiAoZG90KSBub3Qg
aW5zdGFsbGVkOyBza2lwcGluZyB4ZW4tYXBpLiIKK2VuZGlmCiAKIC5QSE9OWTogZGV2LWRvY3MK
IGRldi1kb2NzOiBweXRob24tZGV2LWRvY3MKQEAgLTM5LDMwICs0MSwzNyBAQCBodG1sOiAkKERP
Q19IVE1MKSBodG1sL2luZGV4Lmh0bWwKIAogLlBIT05ZOiB0eHQKIHR4dDoKLQlAaWYgd2hpY2gg
JChQT0QyVEVYVCkgMT4vZGV2L251bGwgMj4vZGV2L251bGw7IHRoZW4gXAotCSQoTUFLRSkgJChE
T0NfVFhUKTsgZWxzZSAgICAgICAgICAgICAgXAotCWVjaG8gInBvZDJ0ZXh0IG5vdCBpbnN0YWxs
ZWQ7IHNraXBwaW5nIHRleHQgb3V0cHV0cy4iOyBmaQoraWZkZWYgUE9EMlRFWFQKKwkkKE1BS0Up
ICQoRE9DX1RYVCkKK2Vsc2UKKwlAZWNobyAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBp
bmcgdGV4dCBvdXRwdXRzLiIKK2VuZGlmCiAKIC5QSE9OWTogZmlncwogZmlnczoKLQlAc2V0IC1l
IDsgaWYgd2hpY2ggJChGSUcyREVWKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0J
c2V0IC14OyAkKE1BS0UpIC1DIGZpZ3MgOyBlbHNlICAgICAgICAgICAgICAgICAgIFwKLQllY2hv
ICJmaWcyZGV2ICh0cmFuc2ZpZykgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgZmlncy4iOyBmaQor
aWZkZWYgRklHMkRFVgorCXNldCAteDsgJChNQUtFKSAtQyBmaWdzCitlbHNlCisJQGVjaG8gImZp
ZzJkZXYgKHRyYW5zZmlnKSBub3QgaW5zdGFsbGVkOyBza2lwcGluZyBmaWdzLiIKK2VuZGlmCiAK
IC5QSE9OWTogcHl0aG9uLWRldi1kb2NzCiBweXRob24tZGV2LWRvY3M6Ci0JQG1rZGlyIC12IC1w
IGFwaS90b29scy9weXRob24KLQlAc2V0IC1lIDsgaWYgd2hpY2ggJChET1hZR0VOKSAxPi9kZXYv
bnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0gICAgICAgIGVjaG8gIlJ1bm5pbmcgZG94eWdlbiB0
byBnZW5lcmF0ZSBQeXRob24gdG9vbHMgQVBJcyAuLi4gIjsgXAotCSQoRE9YWUdFTikgRG94eWZp
bGU7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAotCSQoTUFLRSkgLUMg
YXBpL3Rvb2xzL3B5dGhvbi9sYXRleCA7IGVsc2UgICAgICAgICAgICAgICAgICAgXAotICAgICAg
ICBlY2hvICJEb3h5Z2VuIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIHB5dGhvbi1kZXYtZG9jcy4i
OyBmaQoraWZkZWYgRE9YWUdFTgorCUBlY2hvICJSdW5uaW5nIGRveHlnZW4gdG8gZ2VuZXJhdGUg
UHl0aG9uIHRvb2xzIEFQSXMgLi4uICIKKwlta2RpciAtdiAtcCBhcGkvdG9vbHMvcHl0aG9uCisJ
JChET1hZR0VOKSBEb3h5ZmlsZSAmJiAkKE1BS0UpIC1DIGFwaS90b29scy9weXRob24vbGF0ZXgK
K2Vsc2UKKwlAZWNobyAiRG94eWdlbiBub3QgaW5zdGFsbGVkOyBza2lwcGluZyBweXRob24tZGV2
LWRvY3MuIgorZW5kaWYKIAogLlBIT05ZOiBtYW4tcGFnZXMKIG1hbi1wYWdlczoKLQlAaWYgd2hp
Y2ggJChQT0QyTUFOKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0JJChNQUtFKSAk
KERPQ19NQU4xKSAkKERPQ19NQU41KTsgZWxzZSAgICAgICAgICAgICAgXAotCWVjaG8gInBvZDJt
YW4gbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgbWFuLXBhZ2VzLiI7IGZpCitpZmRlZiBQT0QyTUFO
CisJJChNQUtFKSAkKERPQ19NQU4xKSAkKERPQ19NQU41KQorZWxzZQorCUBlY2hvICJwb2QybWFu
IG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIG1hbi1wYWdlcy4iCitlbmRpZgogCiBtYW4xLyUuMTog
bWFuLyUucG9kLjEgTWFrZWZpbGUKIAkkKElOU1RBTExfRElSKSAkKEBEKQpAQCAtODcsNiArOTYs
NyBAQCBjbGVhbjoKIAogLlBIT05ZOiBkaXN0Y2xlYW4KIGRpc3RjbGVhbjogY2xlYW4KKwlybSAt
cmYgLi4vY29uZmlnL0RvY3MubWsgY29uZmlnLmxvZyBjb25maWcuc3RhdHVzIGF1dG9tNHRlLmNh
Y2hlCiAKIC5QSE9OWTogaW5zdGFsbAogaW5zdGFsbDogYWxsCkBAIC0xMDQsMzAgKzExNCw0MCBA
QCBodG1sL2luZGV4Lmh0bWw6ICQoRE9DX0hUTUwpIC4vZ2VuLWh0bWwtaW5kZXggSU5ERVgKIAlw
ZXJsIC13IC0tIC4vZ2VuLWh0bWwtaW5kZXggLWkgSU5ERVggaHRtbCAkKERPQ19IVE1MKQogCiBo
dG1sLyUuaHRtbDogJS5tYXJrZG93bgotCUAkKElOU1RBTExfRElSKSAkKEBEKQotCUBzZXQgLWUg
OyBpZiB3aGljaCAkKE1BUktET1dOKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0J
ZWNobyAiUnVubmluZyBtYXJrZG93biB0byBnZW5lcmF0ZSAkKi5odG1sIC4uLiAiOyBcCisJJChJ
TlNUQUxMX0RJUikgJChARCkKK2lmZGVmIE1BUktET1dOCisJQGVjaG8gIlJ1bm5pbmcgbWFya2Rv
d24gdG8gZ2VuZXJhdGUgJCouaHRtbCAuLi4gIgogCSQoTUFSS0RPV04pICQ8ID4gJEAudG1wIDsg
XAotCSQoY2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKSA7IGVsc2UgXAotCWVjaG8gIm1h
cmtkb3duIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQqLmh0bWwuIjsgZmkKKwkkKGNhbGwgbW92
ZS1pZi1jaGFuZ2VkLCRALnRtcCwkQCkKK2Vsc2UKKwlAZWNobyAibWFya2Rvd24gbm90IGluc3Rh
bGxlZDsgc2tpcHBpbmcgJCouaHRtbC4iCitlbmRpZgogCiBodG1sLyUudHh0OiAlLnR4dAotCUAk
KElOU1RBTExfRElSKSAkKEBEKQorCSQoSU5TVEFMTF9ESVIpICQoQEQpCiAJY3AgJDwgJEAKIAog
aHRtbC9tYW4vJS4xLmh0bWw6IG1hbi8lLnBvZC4xIE1ha2VmaWxlCiAJJChJTlNUQUxMX0RJUikg
JChARCkKK2lmZGVmIFBPRDJIVE1MCiAJJChQT0QySFRNTCkgLS1pbmZpbGU9JDwgLS1vdXRmaWxl
PSRALnRtcAogCSQoY2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKQorZWxzZQorCUBlY2hv
ICJwb2QyaHRtbCBub3QgaW5zdGFsbGVkOyBza2lwcGluZyAkPC4iCitlbmRpZgogCiBodG1sL21h
bi8lLjUuaHRtbDogbWFuLyUucG9kLjUgTWFrZWZpbGUKIAkkKElOU1RBTExfRElSKSAkKEBEKQor
aWZkZWYgUE9EMkhUTUwKIAkkKFBPRDJIVE1MKSAtLWluZmlsZT0kPCAtLW91dGZpbGU9JEAudG1w
CiAJJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEApCitlbHNlCisJQGVjaG8gInBvZDJo
dG1sIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQ8LiIKK2VuZGlmCiAKIGh0bWwvaHlwZXJjYWxs
L2luZGV4Lmh0bWw6IC4veGVuLWhlYWRlcnMKIAlybSAtcmYgJChARCkKLQlAJChJTlNUQUxMX0RJ
UikgJChARCkKKwkkKElOU1RBTExfRElSKSAkKEBEKQogCS4veGVuLWhlYWRlcnMgLU8gJChARCkg
XAogCQktVCAnYXJjaC14ODZfNjQgLSBYZW4gcHVibGljIGhlYWRlcnMnIFwKIAkJLVggYXJjaC1p
YTY0IC1YIGFyY2gteDg2XzMyIC1YIHhlbi14ODZfMzIgLVggYXJjaC1hcm0gXApAQCAtMTQ3LDEx
ICsxNjcsMjMgQEAgdHh0LyUudHh0OiAlLm1hcmtkb3duCiAKIHR4dC9tYW4vJS4xLnR4dDogbWFu
LyUucG9kLjEgTWFrZWZpbGUKIAkkKElOU1RBTExfRElSKSAkKEBEKQoraWZkZWYgUE9EMlRFWFQK
IAkkKFBPRDJURVhUKSAkPCAkQC50bXAKIAkkKGNhbGwgbW92ZS1pZi1jaGFuZ2VkLCRALnRtcCwk
QCkKK2Vsc2UKKwlAZWNobyAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJDwuIgor
ZW5kaWYKIAogdHh0L21hbi8lLjUudHh0OiBtYW4vJS5wb2QuNSBNYWtlZmlsZQogCSQoSU5TVEFM
TF9ESVIpICQoQEQpCitpZmRlZiBQT0QyVEVYVAogCSQoUE9EMlRFWFQpICQ8ICRALnRtcAogCSQo
Y2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKQotCitlbHNlCisJQGVjaG8gInBvZDJ0ZXh0
IG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQ8LiIKK2VuZGlmCisKK2lmZXEgKCwkKGZpbmRzdHJp
bmcgY2xlYW4sJChNQUtFQ01ER09BTFMpKSkKKyQoWEVOX1JPT1QpL2NvbmZpZy9Eb2NzLm1rOgor
CSQoZXJyb3IgWW91IGhhdmUgdG8gcnVuIC4vY29uZmlndXJlIGJlZm9yZSBidWlsZGluZyBkb2Nz
KQorZW5kaWYKZGlmZiAtLWdpdCBhL2RvY3MvY29uZmlndXJlLmFjIGIvZG9jcy9jb25maWd1cmUu
YWMKbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMC4uNDVkYzliOAotLS0gL2Rldi9u
dWxsCisrKyBiL2RvY3MvY29uZmlndXJlLmFjCkBAIC0wLDAgKzEsMjcgQEAKKyMgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIC0qLSBBdXRvY29uZiAtKi0KKyMg
UHJvY2VzcyB0aGlzIGZpbGUgd2l0aCBhdXRvY29uZiB0byBwcm9kdWNlIGEgY29uZmlndXJlIHNj
cmlwdC4KKworQUNfUFJFUkVRKFsyLjY3XSkKK0FDX0lOSVQoW1hlbiBIeXBlcnZpc29yIERvY3Vt
ZW50YXRpb25dLCBtNF9lc3lzY21kKFsuLi92ZXJzaW9uLnNoIC4uL3hlbi9NYWtlZmlsZV0pLAor
ICAgIFt4ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZ10sIFt4ZW5dLCBbaHR0cDovL3d3dy54ZW4ub3Jn
L10pCitBQ19DT05GSUdfU1JDRElSKFttaXNjL3hlbi1jb21tYW5kLWxpbmUubWFya2Rvd25dKQor
QUNfQ09ORklHX0ZJTEVTKFsuLi9jb25maWcvRG9jcy5ta10pCitBQ19DT05GSUdfQVVYX0RJUihb
Li4vXSkKKworIyBNNCBNYWNybyBpbmNsdWRlcworbTRfaW5jbHVkZShbLi4vbTQvZG9jc190b29s
Lm00XSkKKworQVhfRE9DU19UT09MX1BST0coW1BTMlBERl0sIFtwczJwZGZdKQorQVhfRE9DU19U
T09MX1BST0coW0RWSVBTXSwgW2R2aXBzXSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtMQVRFWF0sIFts
YXRleF0pCitBWF9ET0NTX1RPT0xfUFJPRyhbRklHMkRFVl0sIFtmaWcyZGV2XSkKK0FYX0RPQ1Nf
VE9PTF9QUk9HKFtMQVRFWDJIVE1MXSwgW2xhdGV4Mmh0bWxdKQorQVhfRE9DU19UT09MX1BST0co
W0RPWFlHRU5dLCBbZG94eWdlbl0pCitBWF9ET0NTX1RPT0xfUFJPRyhbUE9EMk1BTl0sIFtwb2Qy
bWFuXSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtQT0QySFRNTF0sIFtwb2QyaHRtbF0pCitBWF9ET0NT
X1RPT0xfUFJPRyhbUE9EMlRFWFRdLCBbcG9kMnRleHRdKQorQVhfRE9DU19UT09MX1BST0coW0RP
VF0sIFtkb3RdKQorQVhfRE9DU19UT09MX1BST0coW05FQVRPXSwgW25lYXRvXSkKK0FYX0RPQ1Nf
VE9PTF9QUk9HUyhbTUFSS0RPV05dLCBbbWFya2Rvd25dLCBbbWFya2Rvd24gbWFya2Rvd25fcHld
KQorCitBQ19PVVRQVVQoKQpkaWZmIC0tZ2l0IGEvZG9jcy9maWdzL01ha2VmaWxlIGIvZG9jcy9m
aWdzL01ha2VmaWxlCmluZGV4IDVlY2RhZTMuLmY3ODJkYzEgMTAwNjQ0Ci0tLSBhL2RvY3MvZmln
cy9NYWtlZmlsZQorKysgYi9kb2NzL2ZpZ3MvTWFrZWZpbGUKQEAgLTEsNyArMSw3IEBACiAKIFhF
Tl9ST09UPSQoQ1VSRElSKS8uLi8uLgogaW5jbHVkZSAkKFhFTl9ST09UKS9Db25maWcubWsKLWlu
Y2x1ZGUgJChYRU5fUk9PVCkvZG9jcy9Eb2NzLm1rCitpbmNsdWRlICQoWEVOX1JPT1QpL2NvbmZp
Zy9Eb2NzLm1rCiAKIFRBUkdFVFM9IG5ldHdvcmstYnJpZGdlLnBuZyBuZXR3b3JrLWJhc2ljLnBu
ZwogCmRpZmYgLS1naXQgYS9kb2NzL3hlbi1hcGkvTWFrZWZpbGUgYi9kb2NzL3hlbi1hcGkvTWFr
ZWZpbGUKaW5kZXggNzdhMDExNy4uYjJkYTY1MSAxMDA2NDQKLS0tIGEvZG9jcy94ZW4tYXBpL01h
a2VmaWxlCisrKyBiL2RvY3MveGVuLWFwaS9NYWtlZmlsZQpAQCAtMiw3ICsyLDcgQEAKIAogWEVO
X1JPT1Q9JChDVVJESVIpLy4uLy4uCiBpbmNsdWRlICQoWEVOX1JPT1QpL0NvbmZpZy5tawotaW5j
bHVkZSAkKFhFTl9ST09UKS9kb2NzL0RvY3MubWsKKy1pbmNsdWRlICQoWEVOX1JPT1QpL2NvbmZp
Zy9Eb2NzLm1rCiAKIAogVEVYIDo9ICQod2lsZGNhcmQgKi50ZXgpCkBAIC00MiwzICs0Miw4IEBA
IHhlbmFwaS1kYXRhbW9kZWwtZ3JhcGguZXBzOiB4ZW5hcGktZGF0YW1vZGVsLWdyYXBoLmRvdAog
LlBIT05ZOiBjbGVhbgogY2xlYW46CiAJcm0gLWYgKi5wZGYgKi5wcyAqLmR2aSAqLmF1eCAqLmxv
ZyAqLm91dCAkKEVQU0RPVCkKKworaWZlcSAoLCQoZmluZHN0cmluZyBjbGVhbiwkKE1BS0VDTURH
T0FMUykpKQorJChYRU5fUk9PVCkvY29uZmlnL0RvY3MubWs6CisJJChlcnJvciBZb3UgaGF2ZSB0
byBydW4gLi9jb25maWd1cmUgYmVmb3JlIGJ1aWxkaW5nIGRvY3MpCitlbmRpZgpkaWZmIC0tZ2l0
IGEvbTQvZG9jc190b29sLm00IGIvbTQvZG9jc190b29sLm00Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0
CmluZGV4IDAwMDAwMDAuLjNlODgxNGEKLS0tIC9kZXYvbnVsbAorKysgYi9tNC9kb2NzX3Rvb2wu
bTQKQEAgLTAsMCArMSwxNyBAQAorQUNfREVGVU4oW0FYX0RPQ1NfVE9PTF9QUk9HXSwgWworZG5s
CisgICAgQUNfQVJHX1ZBUihbJDFdLCBbUGF0aCB0byAkMiB0b29sXSkKKyAgICBBQ19QQVRIX1BS
T0coWyQxXSwgWyQyXSkKKyAgICBBU19JRihbISB0ZXN0IC14ICIkYWNfY3ZfcGF0aF8kMSJdLCBb
CisgICAgICAgIEFDX01TR19XQVJOKFskMiBpcyBub3QgYXZhaWxhYmxlIHNvIHNvbWUgZG9jdW1l
bnRhdGlvbiB3b24ndCBiZSBidWlsdF0pCisgICAgXSkKK10pCisKK0FDX0RFRlVOKFtBWF9ET0NT
X1RPT0xfUFJPR1NdLCBbCitkbmwKKyAgICBBQ19BUkdfVkFSKFskMV0sIFtQYXRoIHRvICQyIHRv
b2xdKQorICAgIEFDX1BBVEhfUFJPR1MoWyQxXSwgWyQzXSkKKyAgICBBU19JRihbISB0ZXN0IC14
ICIkYWNfY3ZfcGF0aF8kMSJdLCBbCisgICAgICAgIEFDX01TR19XQVJOKFskMiBpcyBub3QgYXZh
aWxhYmxlIHNvIHNvbWUgZG9jdW1lbnRhdGlvbiB3b24ndCBiZSBidWlsdF0pCisgICAgXSkKK10p
Ci0tIAoxLjcuMi41CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZOE-0004Oq-Ub; Thu, 06 Dec 2012 11:13:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZOD-0004Ol-KD
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:13:13 +0000
Received: from [85.158.139.83:60644] by server-13.bemta-5.messagelabs.com id
	F4/1A-27809-8CD70C05; Thu, 06 Dec 2012 11:13:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354792391!21416650!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13847 invoked from network); 6 Dec 2012 11:13:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:13:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:13:10 +0000
Message-Id: <50C08BD502000078000AE7BD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:13:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
	<50BF841C.6010906@amd.com>
	<1354730007.17165.31.camel@zakaz.uk.xensource.com>
	<1354788485.17165.65.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354788485.17165.65.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 11:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> (Putting debian-kernel to bcc, since I don't imagine they are interested
> in the details of this discussion, I'll reraise the result with the
> Debian Xen maintainer when we have one)
> 
> On Wed, 2012-12-05 at 17:53 +0000, Ian Campbell wrote:
>> On Wed, 2012-12-05 at 17:27 +0000, Boris Ostrovsky wrote:
>> > On 12/05/2012 12:02 PM, Jan Beulich wrote:
>> > > But all of this shouldn't lead to equivalent ID mismatches, should
>> > > it? It ought to simply find nothing to update...
>> > 
>> > 
>> > The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
>> > contain more than one patch. The driver goes over this file patch by 
>> > patch and tries to see whether to apply it.
>> > 
>> > I think what happened in Ian's case was that the patch file contained 
>> > two patches --- one for this processor (ID 6012) and another for a 
>> > different processor (ID 6101). (Both are family 15h but different revs).
>> > 
>> > The driver applied the first patch on core 0. Then, on core 1, the code 
>> > tried the first patch (at file offset 60) and noticed that it is already 
>> > applied. So it continued to the next patch (at offset 2660) which is not 
>> > meant for this processor, thus generating the "does not match" message.
> 
> OOI what would have happened if the two patches were in the opposite
> order? Would CPU0 have seen the ID 6101 patch first and aborted?

That would work well.

The problem is that cpu_request_microcode() returns the result
of its last call to microcode_fits(), no matter whether a prior one
already returned success (>= 0).

Something like

                                                   &offset)) == 0 )
     {
         error = microcode_fits(mc_amd, cpu);
-        if (error <= 0)
+        if (error < 0)
+            error = 0;
+        if (error == 0)
             continue;
 
         error = apply_microcode(cpu);

would apparently be needed. Or we could of course make
microcode_fits() return a bool_t in the first place.

But then again it would probably be nice to indeed return
failure from cpu_request_microcode() if _no_ suitable
microcode was found at all. Question is whether one blob
can contain more than one update for a given equivalent ID.
If not, bailing from the loop even if microcode_fits() returns
zero might be the right solution (and presumably the latter
then shouldn't return zero when no equivalent ID was
found).

But no matter what solution we pick, we need to review this
carefully in the context of the earlier regressions we had in
this area.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:25:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZZk-0004xP-TH; Thu, 06 Dec 2012 11:25:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZZj-0004xI-Sl
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:25:08 +0000
Received: from [85.158.137.99:46763] by server-16.bemta-3.messagelabs.com id
	A0/95-07461-E8080C05; Thu, 06 Dec 2012 11:25:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354793101!17317222!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18698 invoked from network); 6 Dec 2012 11:25:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:25:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:25:01 +0000
Message-Id: <50C08E9C02000078000AE7CC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:25:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/6] xen: infrastructure to have
 cross-platform video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 19:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> - introduce a new HAS_VIDEO config variable;
> - build xen/drivers/video/font* if HAS_VIDEO;
> - rename vga_puts to video_puts;
> - rename vga_init to video_init;
> - rename vga_endboot to video_endboot.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/arch/arm/Rules.mk        |    1 +
>  xen/arch/x86/Rules.mk        |    1 +
>  xen/drivers/Makefile         |    2 +-
>  xen/drivers/char/console.c   |   12 ++++++------
>  xen/drivers/video/Makefile   |   10 +++++-----
>  xen/drivers/video/vesa.c     |    4 ++--
>  xen/drivers/video/vga.c      |   12 ++++++------
>  xen/include/asm-x86/config.h |    1 +
>  xen/include/xen/vga.h        |    9 +--------
>  xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
>  10 files changed, 48 insertions(+), 28 deletions(-)
>  create mode 100644 xen/include/xen/video.h
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index a45c654..fa9f9c1 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -7,6 +7,7 @@
>  #
>  
>  HAS_DEVICE_TREE := y
> +HAS_VIDEO := y
>  
>  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
> index 963850f..0a9d68d 100644
> --- a/xen/arch/x86/Rules.mk
> +++ b/xen/arch/x86/Rules.mk
> @@ -3,6 +3,7 @@
>  
>  HAS_ACPI := y
>  HAS_VGA  := y
> +HAS_VIDEO  := y
>  HAS_CPUFREQ := y
>  HAS_PCI := y
>  HAS_PASSTHROUGH := y
> diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
> index 7239375..9c70f20 100644
> --- a/xen/drivers/Makefile
> +++ b/xen/drivers/Makefile
> @@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
>  subdir-$(HAS_PCI) += pci
>  subdir-$(HAS_PASSTHROUGH) += passthrough
>  subdir-$(HAS_ACPI) += acpi
> -subdir-$(HAS_VGA) += video
> +subdir-$(HAS_VIDEO) += video
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 9e1adb5..683271e 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -21,7 +21,7 @@
>  #include <xen/delay.h>
>  #include <xen/guest_access.h>
>  #include <xen/shutdown.h>
> -#include <xen/vga.h>
> +#include <xen/video.h>
>  #include <xen/kexec.h>
>  #include <asm/debugger.h>
>  #include <asm/div64.h>
> @@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
>      buf[sofar] = '\0';
>  
>      sercon_puts(buf);
> -    vga_puts(buf);
> +    video_puts(buf);
>  
>      free_xenheap_pages(buf, order);
>  }
> @@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) 
> buffer, int count)
>          spin_lock_irq(&console_lock);
>  
>          sercon_puts(kbuf);
> -        vga_puts(kbuf);
> +        video_puts(kbuf);
>  
>          if ( opt_console_to_ring )
>          {
> @@ -464,7 +464,7 @@ static void __putstr(const char *str)
>      ASSERT(spin_is_locked(&console_lock));
>  
>      sercon_puts(str);
> -    vga_puts(str);
> +    video_puts(str);
>  
>      if ( !console_locks_busted )
>      {
> @@ -592,7 +592,7 @@ void __init console_init_preirq(void)
>          if ( *p == ',' )
>              p++;
>          if ( !strncmp(p, "vga", 3) )
> -            vga_init();
> +            video_init();
>          else if ( !strncmp(p, "none", 4) )
>              continue;
>          else if ( (sh = serial_parse_handle(p)) >= 0 )
> @@ -694,7 +694,7 @@ void __init console_endboot(void)
>          printk("\n");
>      }
>  
> -    vga_endboot();
> +    video_endboot();
>  
>      /*
>       * If user specifies so, we fool the switch routine to redirect input
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 6c3e5b4..2993c39 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -1,5 +1,5 @@
> -obj-y := vga.o
> -obj-$(CONFIG_X86) += font_8x14.o
> -obj-$(CONFIG_X86) += font_8x16.o
> -obj-$(CONFIG_X86) += font_8x8.o
> -obj-$(CONFIG_X86) += vesa.o
> +obj-$(HAS_VGA) := vga.o
> +obj-$(HAS_VIDEO) += font_8x14.o
> +obj-$(HAS_VIDEO) += font_8x16.o
> +obj-$(HAS_VIDEO) += font_8x8.o
> +obj-$(HAS_VGA) += vesa.o
> diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> index 47cd3ed..759355f 100644
> --- a/xen/drivers/video/vesa.c
> +++ b/xen/drivers/video/vesa.c
> @@ -109,7 +109,7 @@ void __init vesa_init(void)
>  
>      lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
>  
> -    vga_puts = vesa_redraw_puts;
> +    video_puts = vesa_redraw_puts;
>  
>      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
>             "using %uk, total %uk\n",
> @@ -194,7 +194,7 @@ void __init vesa_endboot(bool_t keep)
>      if ( keep )
>      {
>          xpos = 0;
> -        vga_puts = vesa_scroll_puts;
> +        video_puts = vesa_scroll_puts;
>      }
>      else
>      {
> diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
> index a98bd00..40e5963 100644
> --- a/xen/drivers/video/vga.c
> +++ b/xen/drivers/video/vga.c
> @@ -21,7 +21,7 @@ static unsigned char *video;
>  
>  static void vga_text_puts(const char *s);
>  static void vga_noop_puts(const char *s) {}
> -void (*vga_puts)(const char *) = vga_noop_puts;
> +void (*video_puts)(const char *) = vga_noop_puts;
>  
>  /*
>   * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
> @@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
>  #define vesa_endboot(x)   ((void)0)
>  #endif
>  
> -void __init vga_init(void)
> +void __init video_init(void)
>  {
>      char *p;
>  
> @@ -85,7 +85,7 @@ void __init vga_init(void)
>          columns = vga_console_info.u.text_mode_3.columns;
>          lines   = vga_console_info.u.text_mode_3.rows;
>          memset(video, 0, columns * lines * 2);
> -        vga_puts = vga_text_puts;
> +        video_puts = vga_text_puts;
>          break;
>      case XEN_VGATYPE_VESA_LFB:
>      case XEN_VGATYPE_EFI_LFB:
> @@ -97,16 +97,16 @@ void __init vga_init(void)
>      }
>  }
>  
> -void __init vga_endboot(void)
> +void __init video_endboot(void)
>  {
> -    if ( vga_puts == vga_noop_puts )
> +    if ( video_puts == vga_noop_puts )
>          return;
>  
>      printk("Xen is %s VGA console.\n",
>             vgacon_keep ? "keeping" : "relinquishing");
>  
>      if ( !vgacon_keep )
> -        vga_puts = vga_noop_puts;
> +        video_puts = vga_noop_puts;
>      else
>      {
>          int bus, devfn;
> diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> index b69dbe6..2169627 100644
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -38,6 +38,7 @@
>  #define CONFIG_ACPI_CSTATE 1
>  
>  #define CONFIG_VGA 1
> +#define CONFIG_VIDEO 1
>  
>  #define CONFIG_HOTPLUG 1
>  #define CONFIG_HOTPLUG_CPU 1
> diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
> index cc690b9..f72b63d 100644
> --- a/xen/include/xen/vga.h
> +++ b/xen/include/xen/vga.h
> @@ -9,17 +9,10 @@
>  #ifndef _XEN_VGA_H
>  #define _XEN_VGA_H
>  
> -#include <public/xen.h>
> +#include <xen/video.h>
>  
>  #ifdef CONFIG_VGA
>  extern struct xen_vga_console_info vga_console_info;
> -void vga_init(void);
> -void vga_endboot(void);
> -extern void (*vga_puts)(const char *);
> -#else
> -#define vga_init()    ((void)0)
> -#define vga_endboot() ((void)0)
> -#define vga_puts(s)   ((void)0)
>  #endif
>  
>  #endif /* _XEN_VGA_H */
> diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
> new file mode 100644
> index 0000000..e9bc92e
> --- /dev/null
> +++ b/xen/include/xen/video.h
> @@ -0,0 +1,24 @@
> +/*
> + *  vga.h
> + *
> + *  This file is subject to the terms and conditions of the GNU General Public
> + *  License.  See the file COPYING in the main directory of this archive
> + *  for more details.
> + */
> +
> +#ifndef _XEN_VIDEO_H
> +#define _XEN_VIDEO_H
> +
> +#include <public/xen.h>
> +
> +#ifdef CONFIG_VIDEO
> +void video_init(void);
> +extern void (*video_puts)(const char *);
> +void video_endboot(void);
> +#else
> +#define video_init()    ((void)0)
> +#define video_puts(s)   ((void)0)
> +#define video_endboot() ((void)0)
> +#endif
> +
> +#endif /* _XEN_VIDEO_H */
> -- 
> 1.7.2.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:25:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZZk-0004xP-TH; Thu, 06 Dec 2012 11:25:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZZj-0004xI-Sl
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:25:08 +0000
Received: from [85.158.137.99:46763] by server-16.bemta-3.messagelabs.com id
	A0/95-07461-E8080C05; Thu, 06 Dec 2012 11:25:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354793101!17317222!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18698 invoked from network); 6 Dec 2012 11:25:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:25:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:25:01 +0000
Message-Id: <50C08E9C02000078000AE7CC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:25:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/6] xen: infrastructure to have
 cross-platform video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 19:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> - introduce a new HAS_VIDEO config variable;
> - build xen/drivers/video/font* if HAS_VIDEO;
> - rename vga_puts to video_puts;
> - rename vga_init to video_init;
> - rename vga_endboot to video_endboot.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
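
For context, the rename turns the VGA-specific hook into a generic one that any platform video driver can install. A minimal sketch of that indirection, assuming the post-rename names `video_puts`/`video_init` from the patch (the recording stub and demo wiring are illustrative, not Xen code):

```c
/* Sketch of the video_puts indirection this series introduces: a single
 * function pointer that boots as a no-op and is swapped in by whichever
 * video driver initialises. Only the names video_puts/video_init come
 * from the patch; everything else here is demo scaffolding. */
#include <string.h>

static char demo_out[64];                      /* captures "console" output */

static void noop_puts(const char *s) { (void)s; }

static void text_puts(const char *s)
{
    strncpy(demo_out, s, sizeof(demo_out) - 1);
    demo_out[sizeof(demo_out) - 1] = '\0';
}

/* Boot-time default: output is discarded until a driver registers. */
static void (*video_puts)(const char *) = noop_puts;

static void video_init(void)
{
    /* A real driver would probe hardware before installing itself. */
    video_puts = text_puts;
}
```

This is why console.c can call `video_puts()` unconditionally: before (or without) a driver, the call lands on the no-op stub.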

> ---
>  xen/arch/arm/Rules.mk        |    1 +
>  xen/arch/x86/Rules.mk        |    1 +
>  xen/drivers/Makefile         |    2 +-
>  xen/drivers/char/console.c   |   12 ++++++------
>  xen/drivers/video/Makefile   |   10 +++++-----
>  xen/drivers/video/vesa.c     |    4 ++--
>  xen/drivers/video/vga.c      |   12 ++++++------
>  xen/include/asm-x86/config.h |    1 +
>  xen/include/xen/vga.h        |    9 +--------
>  xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
>  10 files changed, 48 insertions(+), 28 deletions(-)
>  create mode 100644 xen/include/xen/video.h
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index a45c654..fa9f9c1 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -7,6 +7,7 @@
>  #
>  
>  HAS_DEVICE_TREE := y
> +HAS_VIDEO := y
>  
>  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
> index 963850f..0a9d68d 100644
> --- a/xen/arch/x86/Rules.mk
> +++ b/xen/arch/x86/Rules.mk
> @@ -3,6 +3,7 @@
>  
>  HAS_ACPI := y
>  HAS_VGA  := y
> +HAS_VIDEO  := y
>  HAS_CPUFREQ := y
>  HAS_PCI := y
>  HAS_PASSTHROUGH := y
> diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
> index 7239375..9c70f20 100644
> --- a/xen/drivers/Makefile
> +++ b/xen/drivers/Makefile
> @@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
>  subdir-$(HAS_PCI) += pci
>  subdir-$(HAS_PASSTHROUGH) += passthrough
>  subdir-$(HAS_ACPI) += acpi
> -subdir-$(HAS_VGA) += video
> +subdir-$(HAS_VIDEO) += video
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 9e1adb5..683271e 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -21,7 +21,7 @@
>  #include <xen/delay.h>
>  #include <xen/guest_access.h>
>  #include <xen/shutdown.h>
> -#include <xen/vga.h>
> +#include <xen/video.h>
>  #include <xen/kexec.h>
>  #include <asm/debugger.h>
>  #include <asm/div64.h>
> @@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
>      buf[sofar] = '\0';
>  
>      sercon_puts(buf);
> -    vga_puts(buf);
> +    video_puts(buf);
>  
>      free_xenheap_pages(buf, order);
>  }
> @@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
>          spin_lock_irq(&console_lock);
>  
>          sercon_puts(kbuf);
> -        vga_puts(kbuf);
> +        video_puts(kbuf);
>  
>          if ( opt_console_to_ring )
>          {
> @@ -464,7 +464,7 @@ static void __putstr(const char *str)
>      ASSERT(spin_is_locked(&console_lock));
>  
>      sercon_puts(str);
> -    vga_puts(str);
> +    video_puts(str);
>  
>      if ( !console_locks_busted )
>      {
> @@ -592,7 +592,7 @@ void __init console_init_preirq(void)
>          if ( *p == ',' )
>              p++;
>          if ( !strncmp(p, "vga", 3) )
> -            vga_init();
> +            video_init();
>          else if ( !strncmp(p, "none", 4) )
>              continue;
>          else if ( (sh = serial_parse_handle(p)) >= 0 )
> @@ -694,7 +694,7 @@ void __init console_endboot(void)
>          printk("\n");
>      }
>  
> -    vga_endboot();
> +    video_endboot();
>  
>      /*
>       * If user specifies so, we fool the switch routine to redirect input
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 6c3e5b4..2993c39 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -1,5 +1,5 @@
> -obj-y := vga.o
> -obj-$(CONFIG_X86) += font_8x14.o
> -obj-$(CONFIG_X86) += font_8x16.o
> -obj-$(CONFIG_X86) += font_8x8.o
> -obj-$(CONFIG_X86) += vesa.o
> +obj-$(HAS_VGA) := vga.o
> +obj-$(HAS_VIDEO) += font_8x14.o
> +obj-$(HAS_VIDEO) += font_8x16.o
> +obj-$(HAS_VIDEO) += font_8x8.o
> +obj-$(HAS_VGA) += vesa.o
> diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> index 47cd3ed..759355f 100644
> --- a/xen/drivers/video/vesa.c
> +++ b/xen/drivers/video/vesa.c
> @@ -109,7 +109,7 @@ void __init vesa_init(void)
>  
>      lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
>  
> -    vga_puts = vesa_redraw_puts;
> +    video_puts = vesa_redraw_puts;
>  
>      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
>             "using %uk, total %uk\n",
> @@ -194,7 +194,7 @@ void __init vesa_endboot(bool_t keep)
>      if ( keep )
>      {
>          xpos = 0;
> -        vga_puts = vesa_scroll_puts;
> +        video_puts = vesa_scroll_puts;
>      }
>      else
>      {
> diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
> index a98bd00..40e5963 100644
> --- a/xen/drivers/video/vga.c
> +++ b/xen/drivers/video/vga.c
> @@ -21,7 +21,7 @@ static unsigned char *video;
>  
>  static void vga_text_puts(const char *s);
>  static void vga_noop_puts(const char *s) {}
> -void (*vga_puts)(const char *) = vga_noop_puts;
> +void (*video_puts)(const char *) = vga_noop_puts;
>  
>  /*
>   * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
> @@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
>  #define vesa_endboot(x)   ((void)0)
>  #endif
>  
> -void __init vga_init(void)
> +void __init video_init(void)
>  {
>      char *p;
>  
> @@ -85,7 +85,7 @@ void __init vga_init(void)
>          columns = vga_console_info.u.text_mode_3.columns;
>          lines   = vga_console_info.u.text_mode_3.rows;
>          memset(video, 0, columns * lines * 2);
> -        vga_puts = vga_text_puts;
> +        video_puts = vga_text_puts;
>          break;
>      case XEN_VGATYPE_VESA_LFB:
>      case XEN_VGATYPE_EFI_LFB:
> @@ -97,16 +97,16 @@ void __init vga_init(void)
>      }
>  }
>  
> -void __init vga_endboot(void)
> +void __init video_endboot(void)
>  {
> -    if ( vga_puts == vga_noop_puts )
> +    if ( video_puts == vga_noop_puts )
>          return;
>  
>      printk("Xen is %s VGA console.\n",
>             vgacon_keep ? "keeping" : "relinquishing");
>  
>      if ( !vgacon_keep )
> -        vga_puts = vga_noop_puts;
> +        video_puts = vga_noop_puts;
>      else
>      {
>          int bus, devfn;
> diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> index b69dbe6..2169627 100644
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -38,6 +38,7 @@
>  #define CONFIG_ACPI_CSTATE 1
>  
>  #define CONFIG_VGA 1
> +#define CONFIG_VIDEO 1
>  
>  #define CONFIG_HOTPLUG 1
>  #define CONFIG_HOTPLUG_CPU 1
> diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
> index cc690b9..f72b63d 100644
> --- a/xen/include/xen/vga.h
> +++ b/xen/include/xen/vga.h
> @@ -9,17 +9,10 @@
>  #ifndef _XEN_VGA_H
>  #define _XEN_VGA_H
>  
> -#include <public/xen.h>
> +#include <xen/video.h>
>  
>  #ifdef CONFIG_VGA
>  extern struct xen_vga_console_info vga_console_info;
> -void vga_init(void);
> -void vga_endboot(void);
> -extern void (*vga_puts)(const char *);
> -#else
> -#define vga_init()    ((void)0)
> -#define vga_endboot() ((void)0)
> -#define vga_puts(s)   ((void)0)
>  #endif
>  
>  #endif /* _XEN_VGA_H */
> diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
> new file mode 100644
> index 0000000..e9bc92e
> --- /dev/null
> +++ b/xen/include/xen/video.h
> @@ -0,0 +1,24 @@
> +/*
> + *  vga.h
> + *
> + *  This file is subject to the terms and conditions of the GNU General Public
> + *  License.  See the file COPYING in the main directory of this archive
> + *  for more details.
> + */
> +
> +#ifndef _XEN_VIDEO_H
> +#define _XEN_VIDEO_H
> +
> +#include <public/xen.h>
> +
> +#ifdef CONFIG_VIDEO
> +void video_init(void);
> +extern void (*video_puts)(const char *);
> +void video_endboot(void);
> +#else
> +#define video_init()    ((void)0)
> +#define video_puts(s)   ((void)0)
> +#define video_endboot() ((void)0)
> +#endif
> +
> +#endif /* _XEN_VIDEO_H */
> -- 
> 1.7.2.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:29:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZdN-00056M-Id; Thu, 06 Dec 2012 11:28:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZdL-00056F-MG
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:28:52 +0000
Received: from [85.158.137.99:33726] by server-12.bemta-3.messagelabs.com id
	58/E0-22757-C6180C05; Thu, 06 Dec 2012 11:28:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354793323!12687979!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7676 invoked from network); 6 Dec 2012 11:28:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:28:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:28:43 +0000
Message-Id: <50C08F7A02000078000AE7D8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:28:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1354731588-32579-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/6] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.12 at 19:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> Make use of the framebuffer functions previously introduced.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Conceptually this and the prior patch look fine, but the one here
definitely is against a stale tree (namely lacking the merge with
26184:7b4449bdb980).

Jan
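
The conversion this patch performs replaces vesa.c's file-scope state with a property struct handed to the common fb layer. A sketch of that hand-off pattern, with field names following the quoted diff but `fb_init_demo` as a stand-in for Xen's actual `fb_init()`:

```c
/* The fb_prop hand-off the patch uses: vesa_init() fills a property
 * struct (geometry, font cell size, flush hook, ...) and passes it to
 * generic framebuffer code instead of keeping per-driver globals.
 * Field names follow the diff; the function is illustrative only. */
struct fb_prop_demo {
    unsigned int width, height;           /* mode geometry in pixels   */
    unsigned int font_w, font_h;          /* glyph cell size           */
    unsigned int text_columns, text_rows; /* derived text geometry     */
};

static int fb_init_demo(struct fb_prop_demo *p)
{
    if ( p->font_w == 0 || p->font_h == 0 )
        return -1;                        /* no font: console stays off */

    /* Same derivation vesa_init() performs before calling fb_init(). */
    p->text_columns = p->width / p->font_w;
    p->text_rows    = p->height / p->font_h;
    return 0;
}
```

For a 640x480 mode with an 8x16 font this yields an 80x30 text console, matching what the removed `text_columns`/`text_rows` macros computed in place.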

> ---
>  xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
>  1 files changed, 26 insertions(+), 153 deletions(-)
> 
> diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> index 759355f..4dbaecf 100644
> --- a/xen/drivers/video/vesa.c
> +++ b/xen/drivers/video/vesa.c
> @@ -12,20 +12,15 @@
>  #include <xen/vga.h>
>  #include <asm/page.h>
>  #include "font.h"
> +#include "fb.h"
>  
>  #define vlfb_info    vga_console_info.u.vesa_lfb
> -#define text_columns (vlfb_info.width / font->width)
> -#define text_rows    (vlfb_info.height / font->height)
>  
> -static void vesa_redraw_puts(const char *s);
> -static void vesa_scroll_puts(const char *s);
> +static void lfb_flush(void);
>  
> -static unsigned char *lfb, *lbuf, *text_buf;
> -static unsigned int *__initdata line_len;
> +static unsigned char *lfb;
>  static const struct font_desc *font;
>  static bool_t vga_compat;
> -static unsigned int pixel_on;
> -static unsigned int xpos, ypos;
>  
>  static unsigned int vram_total;
>  integer_param("vesa-ram", vram_total);
> @@ -86,30 +81,27 @@ void __init vesa_early_init(void)
>  
>  void __init vesa_init(void)
>  {
> -    if ( !font )
> -        goto fail;
> -
> -    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
> -    if ( !lbuf )
> -        goto fail;
> +    struct fb_prop fbp;
>  
> -    text_buf = xzalloc_bytes(text_columns * text_rows);
> -    if ( !text_buf )
> -        goto fail;
> +    if ( !font )
> +        return;
>  
> -    line_len = xzalloc_array(unsigned int, text_columns);
> -    if ( !line_len )
> -        goto fail;
> +    fbp.font = font;
> +    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
> +    fbp.bytes_per_line = vlfb_info.bytes_per_line;
> +    fbp.width = vlfb_info.width;
> +    fbp.height = vlfb_info.height;
> +    fbp.flush = lfb_flush;
> +    fbp.text_columns = vlfb_info.width / font->width;
> +    fbp.text_rows = vlfb_info.height / font->height;
>  
>      if ( map_pages_to_xen(IOREMAP_VIRT_START,
>                            vlfb_info.lfb_base >> PAGE_SHIFT,
>                            vram_remap >> PAGE_SHIFT,
>                            PAGE_HYPERVISOR_NOCACHE) )
> -        goto fail;
> -
> -    lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
> +        return;
>  
> -    video_puts = vesa_redraw_puts;
> +    fbp.lfb = lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
>  
>      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
>             "using %uk, total %uk\n",
> @@ -132,7 +124,7 @@ void __init vesa_init(void)
>      {
>          /* Light grey in truecolor. */
>          unsigned int grey = 0xaaaaaaaa;
> -        pixel_on = 
> +        fbp.pixel_on = 
>              ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
>              ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
>              ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
> @@ -140,15 +132,14 @@ void __init vesa_init(void)
>      else
>      {
>          /* White(ish) in default pseudocolor palette. */
> -        pixel_on = 7;
> +        fbp.pixel_on = 7;
>      }
>  
> -    return;
> -
> - fail:
> -    xfree(lbuf);
> -    xfree(text_buf);
> -    xfree(line_len);
> +    if ( fb_init(fbp) < 0 )
> +        return;
> +    if ( fb_alloc() < 0 )
> +        return;
> +    video_puts = fb_redraw_puts;
>  }
>  
>  #include <asm/mtrr.h>
> @@ -193,8 +184,8 @@ void __init vesa_endboot(bool_t keep)
>  {
>      if ( keep )
>      {
> -        xpos = 0;
> -        video_puts = vesa_scroll_puts;
> +        video_puts = fb_scroll_puts;
> +        fb_cr();
>      }
>      else
>      {
> @@ -203,124 +194,6 @@ void __init vesa_endboot(bool_t keep)
>              memset(lfb + i * vlfb_info.bytes_per_line, 0,
>                     vlfb_info.width * bpp);
>          lfb_flush();
> +        fb_free();
>      }
> -
> -    xfree(line_len);
> -}
> -
> -/* Render one line of text to given linear framebuffer line. */
> -static void vesa_show_line(
> -    const unsigned char *text_line,
> -    unsigned char *video_line,
> -    unsigned int nr_chars,
> -    unsigned int nr_cells)
> -{
> -    unsigned int i, j, b, bpp, pixel;
> -
> -    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
> -
> -    for ( i = 0; i < font->height; i++ )
> -    {
> -        unsigned char *ptr = lbuf;
> -
> -        for ( j = 0; j < nr_chars; j++ )
> -        {
> -            const unsigned char *bits = font->data;
> -            bits += ((text_line[j] * font->height + i) *
> -                     ((font->width + 7) >> 3));
> -            for ( b = font->width; b--; )
> -            {
> -                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
> -                memcpy(ptr, &pixel, bpp);
> -                ptr += bpp;
> -            }
> -        }
> -
> -        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
> -        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
> -        video_line += vlfb_info.bytes_per_line;
> -    }
> -}
> -
> -/* Fast mode which redraws all modified parts of a 2D text buffer. */
> -static void __init vesa_redraw_puts(const char *s)
> -{
> -    unsigned int i, min_redraw_y = ypos;
> -    char c;
> -
> -    /* Paste characters into text buffer. */
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            if ( ++ypos >= text_rows )
> -            {
> -                min_redraw_y = 0;
> -                ypos = text_rows - 1;
> -                memmove(text_buf, text_buf + text_columns,
> -                        ypos * text_columns);
> -                memset(text_buf + ypos * text_columns, 0, xpos);
> -            }
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++ + ypos * text_columns] = c;
> -    }
> -
> -    /* Render modified section of text buffer to VESA linear framebuffer. */
> -    for ( i = min_redraw_y; i <= ypos; i++ )
> -    {
> -        const unsigned char *line = text_buf + i * text_columns;
> -        unsigned int width;
> -
> -        for ( width = text_columns; width; --width )
> -            if ( line[width - 1] )
> -                 break;
> -        vesa_show_line(line,
> -                       lfb + i * font->height * vlfb_info.bytes_per_line,
> -                       width, max(line_len[i], width));
> -        line_len[i] = width;
> -    }
> -
> -    lfb_flush();
> -}
> -
> -/* Slower line-based scroll mode which interacts better with dom0. */
> -static void vesa_scroll_puts(const char *s)
> -{
> -    unsigned int i;
> -    char c;
> -
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            unsigned int bytes = (vlfb_info.width *
> -                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
> -            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
> -            unsigned char *dst = lfb;
> -            
> -            /* New line: scroll all previous rows up one line. */
> -            for ( i = font->height; i < vlfb_info.height; i++ )
> -            {
> -                memcpy(dst, src, bytes);
> -                src += vlfb_info.bytes_per_line;
> -                dst += vlfb_info.bytes_per_line;
> -            }
> -
> -            /* Render new line. */
> -            vesa_show_line(
> -                text_buf,
> -                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
> -                xpos, text_columns);
> -
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++] = c;
> -    }
> -
> -    lfb_flush();
>  }
> -- 
> 1.7.2.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> -static void __init vesa_redraw_puts(const char *s)
> -{
> -    unsigned int i, min_redraw_y = ypos;
> -    char c;
> -
> -    /* Paste characters into text buffer. */
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            if ( ++ypos >= text_rows )
> -            {
> -                min_redraw_y = 0;
> -                ypos = text_rows - 1;
> -                memmove(text_buf, text_buf + text_columns,
> -                        ypos * text_columns);
> -                memset(text_buf + ypos * text_columns, 0, xpos);
> -            }
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++ + ypos * text_columns] = c;
> -    }
> -
> -    /* Render modified section of text buffer to VESA linear framebuffer. */
> -    for ( i = min_redraw_y; i <= ypos; i++ )
> -    {
> -        const unsigned char *line = text_buf + i * text_columns;
> -        unsigned int width;
> -
> -        for ( width = text_columns; width; --width )
> -            if ( line[width - 1] )
> -                 break;
> -        vesa_show_line(line,
> -                       lfb + i * font->height * vlfb_info.bytes_per_line,
> -                       width, max(line_len[i], width));
> -        line_len[i] = width;
> -    }
> -
> -    lfb_flush();
> -}
> -
> -/* Slower line-based scroll mode which interacts better with dom0. */
> -static void vesa_scroll_puts(const char *s)
> -{
> -    unsigned int i;
> -    char c;
> -
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            unsigned int bytes = (vlfb_info.width *
> -                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
> -            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
> -            unsigned char *dst = lfb;
> -            
> -            /* New line: scroll all previous rows up one line. */
> -            for ( i = font->height; i < vlfb_info.height; i++ )
> -            {
> -                memcpy(dst, src, bytes);
> -                src += vlfb_info.bytes_per_line;
> -                dst += vlfb_info.bytes_per_line;
> -            }
> -
> -            /* Render new line. */
> -            vesa_show_line(
> -                text_buf,
> -                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
> -                xpos, text_columns);
> -
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++] = c;
> -    }
> -
> -    lfb_flush();
>  }
> -- 
> 1.7.2.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZkV-0005Of-Kk; Thu, 06 Dec 2012 11:36:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgZkU-0005OX-EL
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:36:14 +0000
Received: from [85.158.143.99:60645] by server-3.bemta-4.messagelabs.com id
	E1/9B-18211-D2380C05; Thu, 06 Dec 2012 11:36:13 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354793772!23000043!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14299 invoked from network); 6 Dec 2012 11:36:12 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:36:12 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgZkQ-000M0P-RR; Thu, 06 Dec 2012 11:36:10 +0000
Date: Thu, 6 Dec 2012 11:36:10 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121206113610.GI82725@ocelot.phlegethon.org>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
	handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:16 +0000 on 04 Dec (1354644968), Andrew Cooper wrote:
> do_nmi_crash() will:
> 
>  * Save the crash notes and shut the pcpu down.
>     There is now an extra per-cpu variable to prevent us from executing
>     this multiple times.  In the case where we reenter midway through,
>     attempt the whole operation again in preference to not completing
>     it in the first place.
> 
>  * Set up another NMI at the LAPIC.
>     Even when the LAPIC has been disabled, the ID and command registers
>     are still usable.  As a result, we can deliberately queue up a new
>     NMI to re-interrupt us later if NMIs get unlatched.

Can you please include this reasoning in the code itself, as well as in
the commit message?

> machine_kexec() will:
> 
>   * Swap the MCE handlers to be a nop.
>      We cannot prevent MCEs from being delivered when we pass off to the
>      crash kernel, and the less Xen context is being touched the better.
> 
>   * Explicitly enable NMIs.

Would it be cleaner to have this path explicitly set the IDT entry to
invoke the noop handler?  Or do we know this is always the case when we
reach this point?

I'm just wondering about the case where we get here with an NMI pending.
Presumably if we have some other source of NMIs active while we kexec,
the post-exec kernel will crash if it overwrites our handlers &c before
setting up its own IDT.  But if kexec()ing with NMIs _disabled_ also
fails then we haven't much choice. :|

Otherwise, this looks good to me.

Tim.

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,97 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( crashing_cpu != smp_processor_id() );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;
> +    }
> +
> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
> +     * queue up another NMI, to force us back into this loop if we exit.
>       */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>  
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>  
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> +        break;
>  
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>  
>      for ( ; ; )
>          halt();
> -
> -    return 1;
>  }
>  
>  static void nmi_shootdown_cpus(void)
>  {
>      unsigned long msecs;
> +    int i;
> +    int cpu = smp_processor_id();
>  
>      local_irq_disable();
>  
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>      local_irq_count(crashing_cpu) = 0;
>  
>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get crash_nmi which
> +     * invokes do_nmi_crash (above), which causes them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     *
> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
> +     * race conditions resulting in corrupt stack frames.  As Xen is
> +     * about to die, we no longer care about the security-related race
> +     * condition with 'sysret' which ISTs are designed to solve.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
> +
> +            if ( i == cpu )
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +            else
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
> +        }
> +
>      /* Ensure the new callback function is set before sending out the NMI. */
>      wmb();
>  
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -87,6 +87,17 @@ void machine_kexec(xen_kexec_image_t *im
>       */
>      local_irq_disable();
>  
> +    /* Now regular interrupts are disabled, we need to reduce the impact
> +     * of interrupts not disabled by 'cli'.  NMIs have already been
> +     * dealt with by machine_crash_shutdown().
> +     *
> +     * Set all pcpu MCE handlers to be a noop. */
> +    set_intr_gate(TRAP_machine_check, &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>      /*
>       * compat_machine_kexec() returns to idle pagetables, which requires us
>       * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,42 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* No op trap handler.  Required for kexec path. */
> +ENTRY(trap_nop)
> +        iretq
> +
> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        push %rax
> +        movq %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
> diff -r 1c69c938f641 -r b15d3ae525af xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>  DECLARE_TRAP_HANDLER(divide_error);
>  DECLARE_TRAP_HANDLER(debug);
>  DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>  DECLARE_TRAP_HANDLER(int3);
>  DECLARE_TRAP_HANDLER(overflow);
>  DECLARE_TRAP_HANDLER(bounds);
> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>  #undef DECLARE_TRAP_HANDLER
>  
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>  void syscall_enter(void);
>  void sysenter_entry(void);
>  void sysenter_eflags_saved(void);
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:38:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZmF-0005du-Bx; Thu, 06 Dec 2012 11:38:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgZmC-0005dh-Rp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:38:01 +0000
Received: from [193.109.254.147:4791] by server-2.bemta-14.messagelabs.com id
	07/4B-20829-89380C05; Thu, 06 Dec 2012 11:38:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354793829!9151080!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25177 invoked from network); 6 Dec 2012 11:37:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 11:37:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16197278"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 11:37:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	11:37:01 +0000
Message-ID: <1354793819.17165.83.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 6 Dec 2012 11:36:59 +0000

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:38:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZmF-0005du-Bx; Thu, 06 Dec 2012 11:38:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgZmC-0005dh-Rp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:38:01 +0000
Received: from [193.109.254.147:4791] by server-2.bemta-14.messagelabs.com id
	07/4B-20829-89380C05; Thu, 06 Dec 2012 11:38:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354793829!9151080!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25177 invoked from network); 6 Dec 2012 11:37:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 11:37:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="16197278"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 11:37:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	11:37:01 +0000
Message-ID: <1354793819.17165.83.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 6 Dec 2012 11:36:59 +0000
In-Reply-To: <50C08F7A02000078000AE7D8@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C08F7A02000078000AE7D8@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/6] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 11:28 +0000, Jan Beulich wrote:
> >>> On 05.12.12 at 19:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > Make use of the framebuffer functions previously introduced.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Conceptually this and the prior patch look fine, but the one here
> definitely is against a stale tree (namely lacking the merge with
> 26184:7b4449bdb980).

We need to be careful with the code motion then; from that PoV these
two patches would be better combined.

Also, this vmap/ioremap thing seems like something we should replicate
on ARM instead of the map_phys_range introduced in a previous patch
(even if it is just a name change). No reason to gratuitously differ on
these things.
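For reference, the fbp.pixel_on assignment in the truecolor branch of the patch quoted below packs a light-grey level into the mode's red/green/blue bit fields. A standalone sketch of that packing, using the same expression as the patch but with an assumed struct and illustrative 8:8:8 and 5:6:5 layouts (the names rgb_layout and pack_grey are hypothetical):

```c
#include <assert.h>

/* Illustrative stand-in for the *_size/*_pos fields of vlfb_info;
 * the layouts used in the examples are assumed, not from the patch. */
struct rgb_layout {
    unsigned int red_size, red_pos;
    unsigned int green_size, green_pos;
    unsigned int blue_size, blue_pos;
};

/* Same computation as the quoted truecolor branch: keep the top
 * 'size' bits of the 32-bit grey pattern for each channel, then
 * shift each channel into its position within the pixel. */
static unsigned int pack_grey(unsigned int grey, const struct rgb_layout *l)
{
    return ((grey >> (32 - l->red_size))   << l->red_pos)   |
           ((grey >> (32 - l->green_size)) << l->green_pos) |
           ((grey >> (32 - l->blue_size))  << l->blue_pos);
}
```

With grey = 0xaaaaaaaa, an 8:8:8 layout yields 0xaaaaaa and a 16-bit 5:6:5 layout yields 0xad55.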

Ian.

> 
> Jan
> 
> > ---
> >  xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
> >  1 files changed, 26 insertions(+), 153 deletions(-)
> > 
> > diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> > index 759355f..4dbaecf 100644
> > --- a/xen/drivers/video/vesa.c
> > +++ b/xen/drivers/video/vesa.c
> > @@ -12,20 +12,15 @@
> >  #include <xen/vga.h>
> >  #include <asm/page.h>
> >  #include "font.h"
> > +#include "fb.h"
> >  
> >  #define vlfb_info    vga_console_info.u.vesa_lfb
> > -#define text_columns (vlfb_info.width / font->width)
> > -#define text_rows    (vlfb_info.height / font->height)
> >  
> > -static void vesa_redraw_puts(const char *s);
> > -static void vesa_scroll_puts(const char *s);
> > +static void lfb_flush(void);
> >  
> > -static unsigned char *lfb, *lbuf, *text_buf;
> > -static unsigned int *__initdata line_len;
> > +static unsigned char *lfb;
> >  static const struct font_desc *font;
> >  static bool_t vga_compat;
> > -static unsigned int pixel_on;
> > -static unsigned int xpos, ypos;
> >  
> >  static unsigned int vram_total;
> >  integer_param("vesa-ram", vram_total);
> > @@ -86,30 +81,27 @@ void __init vesa_early_init(void)
> >  
> >  void __init vesa_init(void)
> >  {
> > -    if ( !font )
> > -        goto fail;
> > -
> > -    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
> > -    if ( !lbuf )
> > -        goto fail;
> > +    struct fb_prop fbp;
> >  
> > -    text_buf = xzalloc_bytes(text_columns * text_rows);
> > -    if ( !text_buf )
> > -        goto fail;
> > +    if ( !font )
> > +        return;
> >  
> > -    line_len = xzalloc_array(unsigned int, text_columns);
> > -    if ( !line_len )
> > -        goto fail;
> > +    fbp.font = font;
> > +    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
> > +    fbp.bytes_per_line = vlfb_info.bytes_per_line;
> > +    fbp.width = vlfb_info.width;
> > +    fbp.height = vlfb_info.height;
> > +    fbp.flush = lfb_flush;
> > +    fbp.text_columns = vlfb_info.width / font->width;
> > +    fbp.text_rows = vlfb_info.height / font->height;
> >  
> >      if ( map_pages_to_xen(IOREMAP_VIRT_START,
> >                            vlfb_info.lfb_base >> PAGE_SHIFT,
> >                            vram_remap >> PAGE_SHIFT,
> >                            PAGE_HYPERVISOR_NOCACHE) )
> > -        goto fail;
> > -
> > -    lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
> > +        return;
> >  
> > -    video_puts = vesa_redraw_puts;
> > +    fbp.lfb = lfb = memset((void *)IOREMAP_VIRT_START, 0, vram_remap);
> >  
> >      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
> >             "using %uk, total %uk\n",
> > @@ -132,7 +124,7 @@ void __init vesa_init(void)
> >      {
> >          /* Light grey in truecolor. */
> >          unsigned int grey = 0xaaaaaaaa;
> > -        pixel_on = 
> > +        fbp.pixel_on = 
> >              ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
> >              ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
> >              ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
> > @@ -140,15 +132,14 @@ void __init vesa_init(void)
> >      else
> >      {
> >          /* White(ish) in default pseudocolor palette. */
> > -        pixel_on = 7;
> > +        fbp.pixel_on = 7;
> >      }
> >  
> > -    return;
> > -
> > - fail:
> > -    xfree(lbuf);
> > -    xfree(text_buf);
> > -    xfree(line_len);
> > +    if ( fb_init(fbp) < 0 )
> > +        return;
> > +    if ( fb_alloc() < 0 )
> > +        return;
> > +    video_puts = fb_redraw_puts;
> >  }
> >  
> >  #include <asm/mtrr.h>
> > @@ -193,8 +184,8 @@ void __init vesa_endboot(bool_t keep)
> >  {
> >      if ( keep )
> >      {
> > -        xpos = 0;
> > -        video_puts = vesa_scroll_puts;
> > +        video_puts = fb_scroll_puts;
> > +        fb_cr();
> >      }
> >      else
> >      {
> > @@ -203,124 +194,6 @@ void __init vesa_endboot(bool_t keep)
> >              memset(lfb + i * vlfb_info.bytes_per_line, 0,
> >                     vlfb_info.width * bpp);
> >          lfb_flush();
> > +        fb_free();
> >      }
> > -
> > -    xfree(line_len);
> > -}
> > -
> > -/* Render one line of text to given linear framebuffer line. */
> > -static void vesa_show_line(
> > -    const unsigned char *text_line,
> > -    unsigned char *video_line,
> > -    unsigned int nr_chars,
> > -    unsigned int nr_cells)
> > -{
> > -    unsigned int i, j, b, bpp, pixel;
> > -
> > -    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
> > -
> > -    for ( i = 0; i < font->height; i++ )
> > -    {
> > -        unsigned char *ptr = lbuf;
> > -
> > -        for ( j = 0; j < nr_chars; j++ )
> > -        {
> > -            const unsigned char *bits = font->data;
> > -            bits += ((text_line[j] * font->height + i) *
> > -                     ((font->width + 7) >> 3));
> > -            for ( b = font->width; b--; )
> > -            {
> > -                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
> > -                memcpy(ptr, &pixel, bpp);
> > -                ptr += bpp;
> > -            }
> > -        }
> > -
> > -        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
> > -        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
> > -        video_line += vlfb_info.bytes_per_line;
> > -    }
> > -}
> > -
> > -/* Fast mode which redraws all modified parts of a 2D text buffer. */
> > -static void __init vesa_redraw_puts(const char *s)
> > -{
> > -    unsigned int i, min_redraw_y = ypos;
> > -    char c;
> > -
> > -    /* Paste characters into text buffer. */
> > -    while ( (c = *s++) != '\0' )
> > -    {
> > -        if ( (c == '\n') || (xpos >= text_columns) )
> > -        {
> > -            if ( ++ypos >= text_rows )
> > -            {
> > -                min_redraw_y = 0;
> > -                ypos = text_rows - 1;
> > -                memmove(text_buf, text_buf + text_columns,
> > -                        ypos * text_columns);
> > -                memset(text_buf + ypos * text_columns, 0, xpos);
> > -            }
> > -            xpos = 0;
> > -        }
> > -
> > -        if ( c != '\n' )
> > -            text_buf[xpos++ + ypos * text_columns] = c;
> > -    }
> > -
> > -    /* Render modified section of text buffer to VESA linear framebuffer. */
> > -    for ( i = min_redraw_y; i <= ypos; i++ )
> > -    {
> > -        const unsigned char *line = text_buf + i * text_columns;
> > -        unsigned int width;
> > -
> > -        for ( width = text_columns; width; --width )
> > -            if ( line[width - 1] )
> > -                 break;
> > -        vesa_show_line(line,
> > -                       lfb + i * font->height * vlfb_info.bytes_per_line,
> > -                       width, max(line_len[i], width));
> > -        line_len[i] = width;
> > -    }
> > -
> > -    lfb_flush();
> > -}
> > -
> > -/* Slower line-based scroll mode which interacts better with dom0. */
> > -static void vesa_scroll_puts(const char *s)
> > -{
> > -    unsigned int i;
> > -    char c;
> > -
> > -    while ( (c = *s++) != '\0' )
> > -    {
> > -        if ( (c == '\n') || (xpos >= text_columns) )
> > -        {
> > -            unsigned int bytes = (vlfb_info.width *
> > -                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
> > -            unsigned char *src = lfb + font->height * 
> > vlfb_info.bytes_per_line;
> > -            unsigned char *dst = lfb;
> > -            
> > -            /* New line: scroll all previous rows up one line. */
> > -            for ( i = font->height; i < vlfb_info.height; i++ )
> > -            {
> > -                memcpy(dst, src, bytes);
> > -                src += vlfb_info.bytes_per_line;
> > -                dst += vlfb_info.bytes_per_line;
> > -            }
> > -
> > -            /* Render new line. */
> > -            vesa_show_line(
> > -                text_buf,
> > -                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
> > -                xpos, text_columns);
> > -
> > -            xpos = 0;
> > -        }
> > -
> > -        if ( c != '\n' )
> > -            text_buf[xpos++] = c;
> > -    }
> > -
> > -    lfb_flush();
> >  }
> > -- 
> > 1.7.2.5
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:39:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZnR-0005kU-Uq; Thu, 06 Dec 2012 11:39:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgZnR-0005kL-4o
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:39:17 +0000
Received: from [85.158.143.35:32359] by server-2.bemta-4.messagelabs.com id
	90/AD-30861-4E380C05; Thu, 06 Dec 2012 11:39:16 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354793955!13033986!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16723 invoked from network); 6 Dec 2012 11:39:15 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:39:15 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgZnP-000M1P-3r; Thu, 06 Dec 2012 11:39:15 +0000
Date: Thu, 6 Dec 2012 11:39:15 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121206113915.GJ82725@ocelot.phlegethon.org>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<f6ad86b61d5afd86026c.1354644970@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <f6ad86b61d5afd86026c.1354644970@andrewcoop.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3 of 4 RFC] x86/nmi: Prevent reentrant
	execution of the C nmi handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:16 +0000 on 04 Dec (1354644970), Andrew Cooper wrote:
> +/* This function is NOT SAFE to call from C code in general.
> + * Use with extreme care! */
> +void do_nmi(struct cpu_user_regs *regs)
> +{
> +    bool_t * in_progress = &this_cpu(nmi_in_progress);
> +
> +    if ( is_panic_in_progress() )
> +    {
> +        /* A panic is already in progress.  It may have reenabled NMIs,
> +         * or we are simply unlucky enough to receive one right now.  Either
> +         * way, bail early, as Xen is about to die.
> +         *
> +         * TODO: Ideally we should exit without executing an iret, to
> +         * leave NMIs disabled, but that option is not currently
> +         * available to us.
> +         */
> +        return;
> +    }
> +
> +    if ( test_and_set_bool(*in_progress) )
> +    {
> +        /* Crash in an obvious manner, as opposed to falling into
> +         * an infinite loop because our exception frame corrupted the
> +         * exception frame of the previous NMI.
> +         *
> +         * TODO: This check does not cover all possible cases of corrupt
> +         * exception frames, but it is substantially better than
> +         * nothing.
> +         */
> +        console_force_unlock();
> +        show_execution_state(regs);
> +        panic("Reentrant NMI detected\n");
> +    }
> +
> +    _do_nmi(regs);

I think you need a barrier() above and below _do_nmi() in case the
compiler inlines _do_nmi() and then decides it can shuffle the writes to
in_progress around.
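A minimal sketch of the guarded handler with those barriers in place, assuming a compiler-only fence for barrier() and using a single flag with a plain check in place of Xen's per-CPU nmi_in_progress and atomic test_and_set_bool():

```c
#include <assert.h>

/* Compiler-only fence; stands in for Xen's barrier(). */
#define barrier() __asm__ __volatile__("" ::: "memory")

/* One plain flag instead of Xen's per-CPU variable, and a
 * non-atomic check instead of test_and_set_bool(): a sketch only. */
static volatile int nmi_in_progress;
static int nmi_count;

static void _do_nmi(void)
{
    nmi_count++;            /* the real handler dispatches nmi_callback */
}

static int do_nmi(void)
{
    if ( nmi_in_progress )
        return -1;          /* reentrant NMI: Xen would panic here */
    nmi_in_progress = 1;
    barrier();              /* keep _do_nmi() below the flag set ... */
    _do_nmi();
    barrier();              /* ... and above the flag clear */
    nmi_in_progress = 0;
    return 0;
}
```

Without the fences, an inlined _do_nmi() would let the compiler sink the flag set (or hoist the clear) across the handler body, defeating the guard.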

Otherwise, this looks fine, and you can add my ack if you like.

> +    *in_progress = 0;
> +}
> +
>  void set_nmi_callback(nmi_callback_t callback)
>  {
>      nmi_callback = callback;
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:39:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
From xen-devel-bounces@lists.xen.org Thu Dec 06 11:39:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:39:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZnY-0005lN-CX; Thu, 06 Dec 2012 11:39:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mst@redhat.com>) id 1TgZnW-0005l0-MW
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 11:39:22 +0000
Received: from [193.109.254.147:14004] by server-3.bemta-14.messagelabs.com id
	48/EB-01317-AE380C05; Thu, 06 Dec 2012 11:39:22 +0000
X-Env-Sender: mst@redhat.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354793886!3940525!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTY3NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22919 invoked from network); 6 Dec 2012 11:38:06 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 11:38:06 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qB6Bc2FV031806
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 06:38:02 -0500
Received: from redhat.com (vpn1-5-79.ams2.redhat.com [10.36.5.79])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with SMTP
	id qB6Bbxm3028409; Thu, 6 Dec 2012 06:38:00 -0500
Date: Thu, 6 Dec 2012 13:41:12 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: davej@redhat.com
Message-ID: <20121206114112.GA13963@redhat.com>
References: <20120430143835.GA10190@redhat.com>
	<20120905173335.GA15141@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120905173335.GA15141@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: xen-devel@lists.xensource.com, kvm@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, pv-drivers@vmware.com,
	virtualization@lists.linux-foundation.org, devel@linuxdriverproject.org
Subject: Re: [Xen-devel] [PATCHv2] x86info: dump kvm cpuid's
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Sep 05, 2012 at 08:33:35PM +0300, Michael S. Tsirkin wrote:
> On Mon, Apr 30, 2012 at 05:38:35PM +0300, Michael S. Tsirkin wrote:
> > The following makes 'x86info -r' dump the hypervisor leaf CPU IDs
> > (for KVM this is signature+features) when running in a VM.
> > 
> > On the guest we see the signature and the features:
> > eax in: 0x40000000, eax = 00000000 ebx = 4b4d564b ecx = 564b4d56 edx = 0000004d
> > eax in: 0x40000001, eax = 0100007b ebx = 00000000 ecx = 00000000 edx = 00000000
> > 
> > The hypervisor flag is checked to avoid output changes when not
> > running in a VM.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > 
> > Changes from v1:
> > 	Make it work on non-KVM hypervisors (only KVM was tested).
> > 	Avi Kivity said KVM will in the future report the
> > 	max HV leaf in eax. For now it reports eax = 0,
> > 	so add a workaround for that.
> 
> Ping.
> Davej, any comments?
> Would be nice to have this in.

Is this the right address?
Davej, do you maintain x86info?
Thanks,
MST

> 
> > ---
> > 
> > diff --git a/identify.c b/identify.c
> > index 33f35de..a4a3763 100644
> > --- a/identify.c
> > +++ b/identify.c
> > @@ -9,8 +9,8 @@
> >  
> >  void get_cpu_info_basics(struct cpudata *cpu)
> >  {
> > -	unsigned int maxi, maxei, vendor, address_bits;
> > -	unsigned int eax;
> > +	unsigned int maxi, maxei, maxhv, vendor, address_bits;
> > +	unsigned int eax, ebx, ecx;
> >  
> >  	cpuid(cpu->number, 0, &maxi, &vendor, NULL, NULL);
> >  	maxi &= 0xffff;		/* The high-order word is non-zero on some Cyrix CPUs */
> > @@ -19,7 +19,7 @@ void get_cpu_info_basics(struct cpudata *cpu)
> >  		return;
> >  
> >  	/* Everything that supports cpuid supports these. */
> > -	cpuid(cpu->number, 1, &eax, NULL, NULL, NULL);
> > +	cpuid(cpu->number, 1, &eax, &ebx, &ecx, NULL);
> >  	cpu->stepping = eax & 0xf;
> >  	cpu->model = (eax >> 4) & 0xf;
> >  	cpu->family = (eax >> 8) & 0xf;
> > @@ -29,6 +29,19 @@ void get_cpu_info_basics(struct cpudata *cpu)
> >  
> >  	cpuid(cpu->number, 0xC0000000, &maxei, NULL, NULL, NULL);
> >  	cpu->maxei2 = maxei;
> > +	if (ecx & 0x80000000) {
> > +		cpuid(cpu->number, 0x40000000, &maxhv, NULL, NULL, NULL);
> > +		/*
> > +		 * KVM up to linux 3.4 reports 0 as the max hypervisor leaf,
> > +		 * where it really means 0x40000001.
> > +		 * Most (all?) hypervisors have at least one CPUID besides
> > +		 * the vendor ID so assume that.
> > +		 */
> > +		cpu->maxhv = maxhv ? maxhv : 0x40000001;
> > +	} else {
> > +		/* Suppress hypervisor cpuid unless running on a hypervisor */
> > +		cpu->maxhv = 0;
> > +	}
> >  
> >  	cpuid(cpu->number, 0x80000008,&address_bits, NULL, NULL, NULL);
> >  	cpu->phyaddr_bits = address_bits & 0xFF;
> > diff --git a/x86info.c b/x86info.c
> > index 22c4734..80cae36 100644
> > --- a/x86info.c
> > +++ b/x86info.c
> > @@ -44,6 +44,10 @@ static void display_detailed_info(struct cpudata *cpu)
> >  
> >  		if (cpu->maxei2 >=0xC0000000)
> >  			dump_raw_cpuid(cpu->number, 0xC0000000, cpu->maxei2);
> > +
> > +		if (cpu->maxhv >= 0x40000000)
> > +			dump_raw_cpuid(cpu->number, 0x40000000, cpu->maxhv);
> > +
> >  	}
> >  
> >  	if (show_cacheinfo) {
> > diff --git a/x86info.h b/x86info.h
> > index 7d2a455..c4f5d81 100644
> > --- a/x86info.h
> > +++ b/x86info.h
> > @@ -84,7 +84,7 @@ struct cpudata {
> >  	unsigned int cachesize_trace;
> >  	unsigned int phyaddr_bits;
> >  	unsigned int viraddr_bits;
> > -	unsigned int cpuid_level, maxei, maxei2;
> > +	unsigned int cpuid_level, maxei, maxei2, maxhv;
> >  	char name[CPU_NAME_LEN];
> >  	enum connector connector;
> >  	unsigned int flags_ecx;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:40:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZoh-0005x6-RY; Thu, 06 Dec 2012 11:40:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZog-0005wn-Km
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:40:34 +0000
Received: from [85.158.143.99:24992] by server-3.bemta-4.messagelabs.com id
	22/F1-18211-23480C05; Thu, 06 Dec 2012 11:40:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1354794032!27429261!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4570 invoked from network); 6 Dec 2012 11:40:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:40:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:40:32 +0000
Message-Id: <50C0924002000078000AE7E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:40:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 00/11] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 02:09, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> This series of patches contains some bug fixes and feature enabling for
> nested vmx; please help to review and pull.
> 
> Changes from v2 to v3:
>  - Change a hard number to a literal name while exposing bit 55 in
> IA32_VMX_BASIC MSR.
> 
> The following 4 patches are suitable to backport to 4.2.x:
>   nested vmx: fix rflags status in virtual vmexit
>   nested vmx: fix handling of RDTSC
>   nested vmx: fix DR access VM exit
>   nested vmx: fix interrupt delivery to L2 guest

Thanks for confirming; how about ...

> Dongxiao Xu (11):
>   nested vmx: emulate MSR bitmaps
>   nested vmx: use literal name instead of hard numbers
>   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
>   nested vmx: fix rflags status in virtual vmexit
>   nested vmx: fix handling of RDTSC
>   nested vmx: fix DR access VM exit
>   nested vmx: enable IA32E mode while do VM entry

... this one?

Jan

>   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
>   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
>   nested vmx: fix interrupt delivery to L2 guest
>   nested vmx: check host ability when intercept MSR read



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:46:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZtq-0006Rk-MU; Thu, 06 Dec 2012 11:45:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZtq-0006Re-2O
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:45:54 +0000
Received: from [85.158.143.99:53888] by server-2.bemta-4.messagelabs.com id
	89/96-30861-17580C05; Thu, 06 Dec 2012 11:45:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1354794351!28173898!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29761 invoked from network); 6 Dec 2012 11:45:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:45:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:45:51 +0000
Message-Id: <50C0937D02000078000AE806@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:45:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
	<1354756169-12370-4-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354756169-12370-4-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 03/11] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 02:09, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -45,6 +45,12 @@ struct nestedvmx {
>  /* bit 0-8, and 12 must be 1 */
>  #define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
>  
> +/* 
> + * bit 55 of IA32_VMX_BASIC MSR, indicating whether any VMX controls that
> + * default to 1 may be cleared to 0.
> + */
> +#define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
> +
>  /*
>   * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
>   */

I assume this relates to

        /*
         * To use EPT we expect to be able to clear certain intercepts.
         * We check VMX_BASIC_MSR[55] to correctly handle default controls.
         */
        uint32_t must_be_one, must_be_zero, msr = MSR_IA32_VMX_PROCBASED_CTLS;
        if ( vmx_basic_msr_high & (1u << 23) )
            msr = MSR_IA32_VMX_TRUE_PROCBASED_CTLS;

in xen/arch/x86/hvm/vmx/vmcs.c. If so, please use the new
constant there too, to point out that connection. That may
require moving this to a different header file.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:49:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZxE-0006Z0-Ia; Thu, 06 Dec 2012 11:49:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZxC-0006Yu-S8
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:49:23 +0000
Received: from [85.158.139.83:34116] by server-6.bemta-5.messagelabs.com id
	07/8F-19321-14680C05; Thu, 06 Dec 2012 11:49:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354794561!17411810!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13639 invoked from network); 6 Dec 2012 11:49:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:49:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:49:20 +0000
Message-Id: <50C0944E02000078000AE810@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:49:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
	<1354756169-12370-12-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354756169-12370-12-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 11/11] nested vmx: check host ability
 when intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 02:09, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> When the guest hypervisor tries to read an MSR value, we intercept this
> behavior and return certain emulated values. Besides that, we also need
> to ensure that those emulated values are compatible with the host's
> capabilities.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   18 ++++++++++++++----
>  1 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 178adbc..e65f963 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1319,19 +1319,20 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp = 0;
> +    u64 data = 0, host_data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
>          return 0;
>  
> +    rdmsrl(msr, host_data);
> +
>      /*
>       * Remove unsupport features from n1 guest capability MSR
>       */
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
> -        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
> +        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
>          break;
>      case MSR_IA32_VMX_PINBASED_CTLS:
>      case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> @@ -1341,6 +1342,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 PIN_BASED_PREEMPT_TIMER;
>          tmp = VMX_PINBASED_CTLS_DEFAULT1;
>          data = ((data | tmp) << 32) | (tmp);
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));

Can this be macroized, please? And personally I'd prefer the
second part to be done via a cast to uint32_t rather than
and-ing with ~0u.

Jan

>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS:
>      case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> @@ -1368,6 +1371,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          tmp = VMX_PROCBASED_CTLS_DEFAULT1;
>          /* 0-settings */
>          data = ((data | tmp) << 32) | (tmp);
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS2:
>          /* 1-seetings */
> @@ -1376,6 +1381,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          /* 0-settings */
>          tmp = 0;
>          data = (data << 32) | tmp;
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
>      case MSR_IA32_VMX_EXIT_CTLS:
>      case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> @@ -1391,6 +1398,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
>  	/* 0-settings */
>          data = ((data | tmp) << 32) | tmp;
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
>      case MSR_IA32_VMX_ENTRY_CTLS:
>      case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> @@ -1401,8 +1410,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
>                 VM_ENTRY_IA32E_MODE;
>          data = ((data | tmp) << 32) | tmp;
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
> -
>      case IA32_FEATURE_CONTROL_MSR:
>          data = IA32_FEATURE_CONTROL_MSR_LOCK | 
>                 IA32_FEATURE_CONTROL_MSR_ENABLE_VMXON_OUTSIDE_SMX;
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:49:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgZxE-0006Z0-Ia; Thu, 06 Dec 2012 11:49:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgZxC-0006Yu-S8
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:49:23 +0000
Received: from [85.158.139.83:34116] by server-6.bemta-5.messagelabs.com id
	07/8F-19321-14680C05; Thu, 06 Dec 2012 11:49:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354794561!17411810!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13639 invoked from network); 6 Dec 2012 11:49:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:49:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 11:49:20 +0000
Message-Id: <50C0944E02000078000AE810@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 11:49:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
	<1354756169-12370-12-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354756169-12370-12-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 11/11] nested vmx: check host ability
 when intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 02:09, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> When the guest hypervisor tries to read a VMX capability MSR, we intercept
> the access and return emulated values. Besides that, we also need to ensure
> that those emulated values are compatible with the host's capabilities.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   18 ++++++++++++++----
>  1 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 178adbc..e65f963 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1319,19 +1319,20 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp = 0;
> +    u64 data = 0, host_data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
>          return 0;
>  
> +    rdmsrl(msr, host_data);
> +
>      /*
>       * Remove unsupported features from n1 guest capability MSR
>       */
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
> -        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
> +        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
>          break;
>      case MSR_IA32_VMX_PINBASED_CTLS:
>      case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> @@ -1341,6 +1342,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 PIN_BASED_PREEMPT_TIMER;
>          tmp = VMX_PINBASED_CTLS_DEFAULT1;
>          data = ((data | tmp) << 32) | (tmp);
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));

Can this be macroized, please? And personally I'd prefer the
second part to be done via a cast to uint32_t rather than
and-ing with ~0u.

Jan

>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS:
>      case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> @@ -1368,6 +1371,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          tmp = VMX_PROCBASED_CTLS_DEFAULT1;
>          /* 0-settings */
>          data = ((data | tmp) << 32) | (tmp);
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS2:
>          /* 1-settings */
> @@ -1376,6 +1381,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          /* 0-settings */
>          tmp = 0;
>          data = (data << 32) | tmp;
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
>      case MSR_IA32_VMX_EXIT_CTLS:
>      case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> @@ -1391,6 +1398,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
>  	/* 0-settings */
>          data = ((data | tmp) << 32) | tmp;
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
>      case MSR_IA32_VMX_ENTRY_CTLS:
>      case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> @@ -1401,8 +1410,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
>                 VM_ENTRY_IA32E_MODE;
>          data = ((data | tmp) << 32) | tmp;
> +        data = ((data & host_data) & (~0ul << 32)) |
> +               ((data | host_data) & (~0u));
>          break;
> -
>      case IA32_FEATURE_CONTROL_MSR:
>          data = IA32_FEATURE_CONTROL_MSR_LOCK | 
>                 IA32_FEATURE_CONTROL_MSR_ENABLE_VMXON_OUTSIDE_SMX;
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:57:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tga4h-0006yM-7Z; Thu, 06 Dec 2012 11:57:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tga4f-0006yH-SG
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:57:05 +0000
Received: from [85.158.143.99:54733] by server-1.bemta-4.messagelabs.com id
	33/28-28401-01880C05; Thu, 06 Dec 2012 11:57:04 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-216.messagelabs.com!1354795013!24956936!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9701 invoked from network); 6 Dec 2012 11:56:53 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 11:56:53 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tga4R-000M4U-Gg; Thu, 06 Dec 2012 11:56:51 +0000
Date: Thu, 6 Dec 2012 11:56:51 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121206115651.GK82725@ocelot.phlegethon.org>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
	<1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with
	cells > 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:10 +0000 on 03 Dec (1354554627), Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

> ---
> v3: early_panic instead of BUG_ON
> v2: drop unrelated white space fixup
> ---
>  xen/common/device_tree.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 9eb316f..5a0a1a6 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
>  {
>      *val = 0;
>  
> +    if ( cells > 2 )
> +        early_panic("dtb value contains > 2 cells\n");
> +
>      while ( cells-- )
>      {
>          *val <<= 32;
> -- 
> 1.7.9.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 11:59:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 11:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tga6U-00073I-ON; Thu, 06 Dec 2012 11:58:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tga6T-000738-Cp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 11:58:57 +0000
Received: from [85.158.143.35:6211] by server-3.bemta-4.messagelabs.com id
	03/5C-18211-08880C05; Thu, 06 Dec 2012 11:58:56 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354795134!5320091!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27181 invoked from network); 6 Dec 2012 11:58:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 11:58:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="216599125"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 11:58:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 06:58:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tga6P-000815-5B;
	Thu, 06 Dec 2012 11:58:53 +0000
Message-ID: <50C0887D.3030704@citrix.com>
Date: Thu, 6 Dec 2012 11:58:53 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <patchbomb.1354644967@andrewcoop.uk.xensource.com>
	<b15d3ae525af5b46915f.1354644968@andrewcoop.uk.xensource.com>
	<20121206113610.GI82725@ocelot.phlegethon.org>
In-Reply-To: <20121206113610.GI82725@ocelot.phlegethon.org>
X-Enigmail-Version: 1.4.6
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 4 RFC] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/12 11:36, Tim Deegan wrote:
> At 18:16 +0000 on 04 Dec (1354644968), Andrew Cooper wrote:
>> do_nmi_crash() will:
>>
>>  * Save the crash notes and shut the pcpu down.
>>     There is now an extra per-cpu variable to prevent us from executing
>>     this multiple times.  In the case where we reenter midway through,
>>     attempt the whole operation again in preference to not completing
>>     it in the first place.
>>
>>  * Set up another NMI at the LAPIC.
>>     Even when the LAPIC has been disabled, the ID and command registers
>>     are still usable.  As a result, we can deliberately queue up a new
>>     NMI to re-interrupt us later if NMIs get unlatched.
> Can you please include this reasoning in the code itself, as well as in
> the commit message?

There is a comment alluding to this (the final sentence of the block
beginning "Poor man's self_nmi()"), but I will reword it to make it clearer.

>
>> machine_kexec() will:
>>
>>   * Swap the MCE handlers to be a nop.
>>      We cannot prevent MCEs from being delivered when we pass off to the
>>      crash kernel, and the less Xen context is being touched the better.
>>
>>   * Explicitly enable NMIs.
> Would it be cleaner to have this path explicitly set the IDT entry to
> invoke the noop handler?  Or do we know this is always the case when we
> reach this point?

Early versions of this patch did change the NMI entry here, but that was
before I decided that nmi_shootdown_cpus() needed redoing.

As it currently stands, all pcpus other than us have the nmi_crash
handler, and we have the nop handler because the call graph looks like:

kexec_crash()
  ...
  machine_crash_shutdown()
    nmi_shootdown_cpus()
  machine_kexec()

I will however clarify this NMI setup with a comment at this location.

>
> I'm just wondering about the case where we get here with an NMI pending.
> Presumably if we have some other source of NMIs active while we kexec,
> the post-exec kernel will crash if it overwrites our handlers &c before
> setting up its own IDT.  But if kexec()ing with NMIs _disabled_ also
> fails then we haven't much choice. :|
>
> Otherwise, this looks good to me.
>
> Tim.

We do have another source of NMIs active; the watchdog disable routine
only tells the NMI handler to ignore watchdog NMIs, rather than disabling
the source of the NMIs themselves.  It is another item on my hitlist.

The other problem we have with the kexec mechanism is to do with
taking NMIs and MCEs between purgatory changing addressing schemes and
setting up its own IDT.  I believe that compat_machine_kexec() does this
in an unsafe way even before jumping to purgatory.  I was wondering
whether it was possible to monopolise some very low pages which could be
identity-mapped in any paging scheme, but I can't think of a nice way to
fix the move into real mode, at which point the loaded IDT is completely
invalid.  Another item for the hitlist.

~Andrew

>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/crash.c
>> --- a/xen/arch/x86/crash.c
>> +++ b/xen/arch/x86/crash.c
>> @@ -32,41 +32,97 @@
>>  
>>  static atomic_t waiting_for_crash_ipi;
>>  static unsigned int crashing_cpu;
>> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>>  
>> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
>> +/* This becomes the NMI handler for non-crashing CPUs. */
>> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>>  {
>> -    /* Don't do anything if this handler is invoked on crashing cpu.
>> -     * Otherwise, system will completely hang. Crashing cpu can get
>> -     * an NMI if system was initially booted with nmi_watchdog parameter.
>> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
>> +    ASSERT( crashing_cpu != smp_processor_id() );
>> +
>> +    /* Save crash information and shut down CPU.  Attempt only once. */
>> +    if ( ! this_cpu(crash_save_done) )
>> +    {
>> +        kexec_crash_save_cpu();
>> +        __stop_this_cpu();
>> +
>> +        this_cpu(crash_save_done) = 1;
>> +    }
>> +
>> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
>> +     * back to its boot state, so we are unable to rely on the regular
>> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
>> +     * (The likely scenario is that we have reverted from x2apic mode to
>> +     * xapic, at which point #GPFs will occur if we use the apic_*
>> +     * functions.)
>> +     *
>> +     * The ICR and APIC ID of the LAPIC are still valid even during
>> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
>> +     * queue up another NMI, to force us back into this loop if we exit.
>>       */
>> -    if ( cpu == crashing_cpu )
>> -        return 1;
>> -    local_irq_disable();
>> +    switch ( current_local_apic_mode() )
>> +    {
>> +        u32 apic_id;
>>  
>> -    kexec_crash_save_cpu();
>> +    case APIC_MODE_X2APIC:
>> +        apic_id = apic_rdmsr(APIC_ID);
>>  
>> -    __stop_this_cpu();
>> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
>> +        break;
>>  
>> -    atomic_dec(&waiting_for_crash_ipi);
>> +    case APIC_MODE_XAPIC:
>> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
>> +
>> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
>> +            cpu_relax();
>> +
>> +        apic_mem_write(APIC_ICR2, apic_id << 24);
>> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
>> +        break;
>> +
>> +    default:
>> +        break;
>> +    }
>>  
>>      for ( ; ; )
>>          halt();
>> -
>> -    return 1;
>>  }
>>  
>>  static void nmi_shootdown_cpus(void)
>>  {
>>      unsigned long msecs;
>> +    int i;
>> +    int cpu = smp_processor_id();
>>  
>>      local_irq_disable();
>>  
>> -    crashing_cpu = smp_processor_id();
>> +    crashing_cpu = cpu;
>>      local_irq_count(crashing_cpu) = 0;
>>  
>>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
>> -    /* Would it be better to replace the trap vector here? */
>> -    set_nmi_callback(crash_nmi_callback);
>> +
>> +    /* Change NMI trap handlers.  Non-crashing pcpus get crash_nmi which
>> +     * invokes do_nmi_crash (above), which causes them to write state and
>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>> +     * cause it to return to this function ASAP.
>> +     *
>> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
>> +     * race conditions resulting in corrupt stack frames.  As Xen is
>> +     * about to die, we no longer care about the security-related race
>> +     * condition with 'sysret' which ISTs are designed to solve.
>> +     */
>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>> +        if ( idt_tables[i] )
>> +        {
>> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
>> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
>> +
>> +            if ( i == cpu )
>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
>> +            else
>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
>> +        }
>> +
>>      /* Ensure the new callback function is set before sending out the NMI. */
>>      wmb();
>>  
>> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/machine_kexec.c
>> --- a/xen/arch/x86/machine_kexec.c
>> +++ b/xen/arch/x86/machine_kexec.c
>> @@ -87,6 +87,17 @@ void machine_kexec(xen_kexec_image_t *im
>>       */
>>      local_irq_disable();
>>  
>> +    /* Now regular interrupts are disabled, we need to reduce the impact
>> +     * of interrupts not disabled by 'cli'.  NMIs have already been
>> +     * dealt with by machine_crash_shutdown().
>> +     *
>> +     * Set all pcpu MCE handler to be a noop. */
>> +    set_intr_gate(TRAP_machine_check, &trap_nop);
>> +
>> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
>> +     * not like running with NMIs disabled. */
>> +    enable_nmis();
>> +
>>      /*
>>       * compat_machine_kexec() returns to idle pagetables, which requires us
>>       * to be running on a static GDT mapping (idle pagetables have no GDT
>> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/x86_64/entry.S
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -635,11 +635,42 @@ ENTRY(nmi)
>>          movl  $TRAP_nmi,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +ENTRY(nmi_crash)
>> +        cli
>> +        pushq $0
>> +        movl $TRAP_nmi,4(%rsp)
>> +        SAVE_ALL
>> +        movq %rsp,%rdi
>> +        callq do_nmi_crash /* Does not return */
>> +        ud2
>> +
>>  ENTRY(machine_check)
>>          pushq $0
>>          movl  $TRAP_machine_check,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +/* No op trap handler.  Required for kexec path. */
>> +ENTRY(trap_nop)
>> +        iretq
>> +
>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>> +ENTRY(enable_nmis)
>> +        push %rax
>> +        movq %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
>> +
>> +        iretq /* Disable the hardware NMI latch */
>> +1:
>> +        popq %rax
>> +        retq
>> +
>>  .section .rodata, "a", @progbits
>>  
>>  ENTRY(exception_table)
>> diff -r 1c69c938f641 -r b15d3ae525af xen/include/asm-x86/processor.h
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>>  DECLARE_TRAP_HANDLER(divide_error);
>>  DECLARE_TRAP_HANDLER(debug);
>>  DECLARE_TRAP_HANDLER(nmi);
>> +DECLARE_TRAP_HANDLER(nmi_crash);
>>  DECLARE_TRAP_HANDLER(int3);
>>  DECLARE_TRAP_HANDLER(overflow);
>>  DECLARE_TRAP_HANDLER(bounds);
>> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>>  #undef DECLARE_TRAP_HANDLER
>>  
>> +void trap_nop(void);
>> +void enable_nmis(void);
>> +
>>  void syscall_enter(void);
>>  void sysenter_entry(void);
>>  void sysenter_eflags_saved(void);
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>     this multiple times.  In the case where we reenter midway through,
>>     attempt the whole operation again in preference to not completing
>>     it in the first place.
>>
>>  * Set up another NMI at the LAPIC.
>>     Even when the LAPIC has been disabled, the ID and command registers
>>     are still usable.  As a result, we can deliberately queue up a new
>>     NMI to re-interrupt us later if NMIs get unlatched.
> Can you please include this reasoning in the code itself, as well as in
> the commit message?

There is a comment alluding to this: the final sentence in the block
beginning "Poor mans self_nmi()".  I will reword it to make it clearer.

>
>> machine_kexec() will:
>>
>>   * Swap the MCE handlers to be a nop.
>>      We cannot prevent MCEs from being delivered when we pass off to the
>>      crash kernel, and the less Xen context is being touched the better.
>>
>>   * Explicitly enable NMIs.
> Would it be cleaner to have this path explicitly set the IDT entry to
> invoke the noop handler?  Or do we know this is always the case when we
> reach this point?

Early versions of this patch did change the NMI entry here, but that was
before I decided that nmi_shootdown_cpus() needed redoing.

As it currently stands, all pcpus other than us have the nmi_crash
handler, and we have the nop handler because the call graph looks like:

kexec_crash()
  ...
  machine_crash_shutdown()
    nmi_shootdown_cpus()
  machine_kexec()

I will however clarify this NMI setup with a comment at this location.

>
> I'm just wondering about the case where we get here with an NMI pending.
> Presumably if we have some other source of NMIs active while we kexec,
> the post-exec kernel will crash if it overwrites our handlers &c before
> setting up its own IDT.  But if kexec()ing with NMIs _disabled_ also
> fails then we haven't much choice. :|
>
> Otherwise, this looks good to me.
>
> Tim.

We do have another source of NMIs active; the watchdog disable routine
only tells the NMI handler to ignore watchdog NMIs, rather than
disabling the source of the NMIs itself.  It is another item on my hitlist.

The other problem we have with the kexec mechanism is to do with
taking NMIs and MCEs between purgatory changing addressing schemes and
setting up its own IDT.  I believe that compat_machine_kexec() does this
in an unsafe way even before jumping to purgatory.  I was wondering
whether it was possible to monopolise some very low pages which could be
identity-mapped in any paging scheme, but I can't think of a nice way to
fix a move into real mode, at which point the loaded IDT is completely
invalid.  Another item for the hitlist.

~Andrew

>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/crash.c
>> --- a/xen/arch/x86/crash.c
>> +++ b/xen/arch/x86/crash.c
>> @@ -32,41 +32,97 @@
>>  
>>  static atomic_t waiting_for_crash_ipi;
>>  static unsigned int crashing_cpu;
>> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>>  
>> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
>> +/* This becomes the NMI handler for non-crashing CPUs. */
>> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>>  {
>> -    /* Don't do anything if this handler is invoked on crashing cpu.
>> -     * Otherwise, system will completely hang. Crashing cpu can get
>> -     * an NMI if system was initially booted with nmi_watchdog parameter.
>> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
>> +    ASSERT( crashing_cpu != smp_processor_id() );
>> +
>> +    /* Save crash information and shut down CPU.  Attempt only once. */
>> +    if ( ! this_cpu(crash_save_done) )
>> +    {
>> +        kexec_crash_save_cpu();
>> +        __stop_this_cpu();
>> +
>> +        this_cpu(crash_save_done) = 1;
>> +    }
>> +
>> +    /* Poor mans self_nmi().  __stop_this_cpu() has reverted the LAPIC
>> +     * back to its boot state, so we are unable to rely on the regular
>> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
>> +     * (The likely scenario is that we have reverted from x2apic mode to
>> +     * xapic, at which point #GPFs will occur if we use the apic_*
>> +     * functions)
>> +     *
>> +     * The ICR and APIC ID of the LAPIC are still valid even during
>> +     * software disable (Intel SDM Vol 3, 10.4.7.2), which allows us to
>> +     * queue up another NMI, to force us back into this loop if we exit.
>>       */
>> -    if ( cpu == crashing_cpu )
>> -        return 1;
>> -    local_irq_disable();
>> +    switch ( current_local_apic_mode() )
>> +    {
>> +        u32 apic_id;
>>  
>> -    kexec_crash_save_cpu();
>> +    case APIC_MODE_X2APIC:
>> +        apic_id = apic_rdmsr(APIC_ID);
>>  
>> -    __stop_this_cpu();
>> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
>> +        break;
>>  
>> -    atomic_dec(&waiting_for_crash_ipi);
>> +    case APIC_MODE_XAPIC:
>> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
>> +
>> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
>> +            cpu_relax();
>> +
>> +        apic_mem_write(APIC_ICR2, apic_id << 24);
>> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
>> +        break;
>> +
>> +    default:
>> +        break;
>> +    }
>>  
>>      for ( ; ; )
>>          halt();
>> -
>> -    return 1;
>>  }
>>  
>>  static void nmi_shootdown_cpus(void)
>>  {
>>      unsigned long msecs;
>> +    int i;
>> +    int cpu = smp_processor_id();
>>  
>>      local_irq_disable();
>>  
>> -    crashing_cpu = smp_processor_id();
>> +    crashing_cpu = cpu;
>>      local_irq_count(crashing_cpu) = 0;
>>  
>>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
>> -    /* Would it be better to replace the trap vector here? */
>> -    set_nmi_callback(crash_nmi_callback);
>> +
>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash, which
>> +     * invokes do_nmi_crash (above), which causes them to write state and
>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>> +     * cause it to return to this function ASAP.
>> +     *
>> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
>> +     * race conditions resulting in corrupt stack frames.  As Xen is
>> +     * about to die, we no longer care about the security-related race
>> +     * condition with 'sysret' which ISTs are designed to solve.
>> +     */
>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>> +        if ( idt_tables[i] )
>> +        {
>> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
>> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
>> +
>> +            if ( i == cpu )
>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
>> +            else
>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
>> +        }
>> +
>>      /* Ensure the new callback function is set before sending out the NMI. */
>>      wmb();
>>  
>> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/machine_kexec.c
>> --- a/xen/arch/x86/machine_kexec.c
>> +++ b/xen/arch/x86/machine_kexec.c
>> @@ -87,6 +87,17 @@ void machine_kexec(xen_kexec_image_t *im
>>       */
>>      local_irq_disable();
>>  
>> +    /* Now regular interrupts are disabled, we need to reduce the impact
>> +     * of interrupts not disabled by 'cli'.  NMIs have already been
>> +     * dealt with by machine_crash_shutdown().
>> +     *
>> +     * Set all pcpu MCE handler to be a noop. */
>> +    set_intr_gate(TRAP_machine_check, &trap_nop);
>> +
>> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
>> +     * not like running with NMIs disabled. */
>> +    enable_nmis();
>> +
>>      /*
>>       * compat_machine_kexec() returns to idle pagetables, which requires us
>>       * to be running on a static GDT mapping (idle pagetables have no GDT
>> diff -r 1c69c938f641 -r b15d3ae525af xen/arch/x86/x86_64/entry.S
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -635,11 +635,42 @@ ENTRY(nmi)
>>          movl  $TRAP_nmi,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +ENTRY(nmi_crash)
>> +        cli
>> +        pushq $0
>> +        movl $TRAP_nmi,4(%rsp)
>> +        SAVE_ALL
>> +        movq %rsp,%rdi
>> +        callq do_nmi_crash /* Does not return */
>> +        ud2
>> +
>>  ENTRY(machine_check)
>>          pushq $0
>>          movl  $TRAP_machine_check,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +/* No op trap handler.  Required for kexec path. */
>> +ENTRY(trap_nop)
>> +        iretq
>> +
>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>> +ENTRY(enable_nmis)
>> +        push %rax
>> +        movq %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
>> +
>> +        iretq /* Disable the hardware NMI latch */
>> +1:
>> +        popq %rax
>> +        retq
>> +
>>  .section .rodata, "a", @progbits
>>  
>>  ENTRY(exception_table)
>> diff -r 1c69c938f641 -r b15d3ae525af xen/include/asm-x86/processor.h
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>>  DECLARE_TRAP_HANDLER(divide_error);
>>  DECLARE_TRAP_HANDLER(debug);
>>  DECLARE_TRAP_HANDLER(nmi);
>> +DECLARE_TRAP_HANDLER(nmi_crash);
>>  DECLARE_TRAP_HANDLER(int3);
>>  DECLARE_TRAP_HANDLER(overflow);
>>  DECLARE_TRAP_HANDLER(bounds);
>> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>>  #undef DECLARE_TRAP_HANDLER
>>  
>> +void trap_nop(void);
>> +void enable_nmis(void);
>> +
>>  void syscall_enter(void);
>>  void sysenter_entry(void);
>>  void sysenter_eflags_saved(void);
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:03:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaAo-0007Z6-JR; Thu, 06 Dec 2012 12:03:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgaAm-0007Yt-Jl
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:03:24 +0000
Received: from [193.109.254.147:62502] by server-6.bemta-14.messagelabs.com id
	E4/09-02788-B8980C05; Thu, 06 Dec 2012 12:03:23 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354795402!8899347!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17421 invoked from network); 6 Dec 2012 12:03:23 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:03:23 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgaAk-000M68-Ex; Thu, 06 Dec 2012 12:03:22 +0000
Date: Thu, 6 Dec 2012 12:03:22 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121206120322.GL82725@ocelot.phlegethon.org>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
	<1354554631-17861-5-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354554631-17861-5-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 5/8] arm: load dom0 kernel from first boot
	module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:10 +0000 on 03 Dec (1354554628), Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

> ---
> v3: - correct limit check in try_zimage_prepare
>       copy zimage header to a local buffer to avoid issues with
>       crossing page boundaries.
>     - handle non page aligned source and destinations when loading
>     - use a BUFFERABLE mapping when loading kernel from RAM.
> ---
>  xen/arch/arm/kernel.c |   91 ++++++++++++++++++++++++++++++++++--------------
>  xen/arch/arm/kernel.h |   11 ++++++
>  2 files changed, 75 insertions(+), 27 deletions(-)
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:09:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaGu-0007wy-CU; Thu, 06 Dec 2012 12:09:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgaGt-0007ws-H4
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:09:43 +0000
Received: from [85.158.137.99:47559] by server-11.bemta-3.messagelabs.com id
	1F/21-19361-60B80C05; Thu, 06 Dec 2012 12:09:42 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354795754!18157295!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11989 invoked from network); 6 Dec 2012 12:09:14 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:09:14 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgaGP-000M72-UA; Thu, 06 Dec 2012 12:09:13 +0000
Date: Thu, 6 Dec 2012 12:09:13 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121206120913.GM82725@ocelot.phlegethon.org>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:56 +0000 on 04 Dec (1354622173), Ian Campbell wrote:
> This was a short term hack to get something linking quickly, but its
> usefulness has now passed.
> 
> This series replaces everything in here with proper functions. In many
> cases these are still just stubs.

For the arch/arm parts,

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:13:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaKG-00087T-0u; Thu, 06 Dec 2012 12:13:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1TgaKE-00087K-7I
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:13:10 +0000
Received: from [85.158.139.83:63854] by server-11.bemta-5.messagelabs.com id
	AA/5A-03409-5DB80C05; Thu, 06 Dec 2012 12:13:09 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354795986!27991991!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20597 invoked from network); 6 Dec 2012 12:13:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 12:13:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="46808435"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 12:13:06 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1; Thu, 6 Dec 2012
	07:13:06 -0500
Message-ID: <50C08BD1.50905@citrix.com>
Date: Thu, 6 Dec 2012 12:13:05 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
	<1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
X-Originating-IP: [10.80.2.76]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with
 cells	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 17:10, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v3: early_panic instead of BUG_ON
> v2: drop unrelated white space fixup
> ---
>  xen/common/device_tree.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 9eb316f..5a0a1a6 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
>  {
>      *val = 0;
>  
> +    if ( cells > 2 )
> +        early_panic("dtb value contains > 2 cells\n");
> +

This seems like overkill to me.  get_val() truncates the value down to
64 bits which is fine as no valid physical address is more than 64 bits.

I think the device tree parsing code should try to continue as much as
possible if it gets a DTB that looks a bit funny.
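[The truncation behaviour described above can be illustrated with a
minimal, hypothetical sketch.  `dtb_get_val` below is an invented name,
not the actual code in xen/common/device_tree.c, and it assumes
host-endian cells for brevity (real DTB cells are big-endian): each
extra 32-bit cell shifts the previously accumulated bits up, so with
more than two cells the high bits silently fall off the top of the u64.]

```c
#include <stdint.h>

/* Hypothetical sketch of a DTB cell accumulator.  Each 32-bit cell is
 * shifted into a 64-bit value; with more than two cells the earlier
 * (most significant) bits are silently discarded. */
uint64_t dtb_get_val(const uint32_t **cell, uint32_t cells)
{
    uint64_t val = 0;

    while ( cells-- )
    {
        val <<= 32;          /* for >2 cells, earlier bits fall off here */
        val |= *(*cell)++;   /* consume one cell (host-endian assumed)   */
    }
    return val;
}
```

[For example, the three cells { 1, 2, 3 } accumulate to 0x200000003: the
leading 1 has been shifted out, which is the silent truncation at issue.]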

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:13:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaKG-00087T-0u; Thu, 06 Dec 2012 12:13:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1TgaKE-00087K-7I
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:13:10 +0000
Received: from [85.158.139.83:63854] by server-11.bemta-5.messagelabs.com id
	AA/5A-03409-5DB80C05; Thu, 06 Dec 2012 12:13:09 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354795986!27991991!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20597 invoked from network); 6 Dec 2012 12:13:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 12:13:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,229,1355097600"; d="scan'208";a="46808435"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 12:13:06 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1; Thu, 6 Dec 2012
	07:13:06 -0500
Message-ID: <50C08BD1.50905@citrix.com>
Date: Thu, 6 Dec 2012 12:13:05 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
	<1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
X-Originating-IP: [10.80.2.76]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with
 cells	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/12/12 17:10, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v3: early_panic instead of BUG_ON
> v2: drop unrelated white space fixup
> ---
>  xen/common/device_tree.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 9eb316f..5a0a1a6 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
>  {
>      *val = 0;
>  
> +    if ( cells > 2 )
> +        early_panic("dtb value contains > 2 cells\n");
> +

This seems like overkill to me.  get_val() truncates the value down to
64 bits, which is fine as no valid physical address is wider than 64 bits.

I think the device tree parsing code should try to continue as much as
possible if it gets a DTB that looks a bit funny.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:16:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:16:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaNH-0008R1-LB; Thu, 06 Dec 2012 12:16:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgaNF-0008Qp-C6
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:16:17 +0000
Received: from [85.158.139.211:34569] by server-15.bemta-5.messagelabs.com id
	81/15-26920-09C80C05; Thu, 06 Dec 2012 12:16:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1354796157!19252776!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTQy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6004 invoked from network); 6 Dec 2012 12:15:58 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-5.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 12:15:58 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 06 Dec 2012 04:15:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,229,1355126400"; d="scan'208";a="229789102"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 06 Dec 2012 04:15:55 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 6 Dec 2012 04:15:55 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 20:15:52 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH v3 00/11] nested vmx: bug fixes and feature
	enabling
Thread-Index: AQHN06aNx71frX4Yy0uRQOtaNO5McJgLr2Ig
Date: Thu, 6 Dec 2012 12:15:52 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB922F@SHSMSX102.ccr.corp.intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
	<50C0924002000078000AE7E8@nat28.tlf.novell.com>
In-Reply-To: <50C0924002000078000AE7E8@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 00/11] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, December 06, 2012 7:41 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH v3 00/11] nested vmx: bug fixes and feature
> enabling
> 
> >>> On 06.12.12 at 02:09, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > This series of patches contain some bug fixes and feature enabling for
> > nested vmx, please help to review and pull.
> >
> > Changes from v2 to v3:
> >  - Change a hard number to literal name while exposing bit 55 in
> > IA32_VMX_BASIC MSR.
> >
> > The following 4 patches are suitable to backport to 4.2.x:
> >   nested vmx: fix rflags status in virtual vmexit
> >   nested vmx: fix handling of RDTSC
> >   nested vmx: fix DR access VM exit
> >   nested vmx: fix interrupt delivery to L2 guest
> 
> Thanks for confirming; how about ...
> 
> > Dongxiao Xu (11):
> >   nested vmx: emulate MSR bitmaps
> >   nested vmx: use literal name instead of hard numbers
> >   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
> >   nested vmx: fix rflags status in virtual vmexit
> >   nested vmx: fix handling of RDTSC
> >   nested vmx: fix DR access VM exit
> >   nested vmx: enable IA32E mode while do VM entry
> 
> ... this one?

Yes, this is also a bug fix and could be backported to 4.2.x.

Thanks,
Dongxiao

> 
> Jan
> 
> >   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
> >   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
> >   nested vmx: fix interrupt delivery to L2 guest
> >   nested vmx: check host ability when intercept MSR read
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:25:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaWA-0000Pl-R1; Thu, 06 Dec 2012 12:25:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgaW9-0000Pg-JJ
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:25:29 +0000
Received: from [85.158.143.35:27724] by server-1.bemta-4.messagelabs.com id
	81/E1-28401-8BE80C05; Thu, 06 Dec 2012 12:25:28 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1354796726!4612539!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDQ0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18056 invoked from network); 6 Dec 2012 12:25:27 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-5.tower-21.messagelabs.com with SMTP;
	6 Dec 2012 12:25:27 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 06 Dec 2012 04:25:25 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="258123532"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 06 Dec 2012 04:25:25 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 6 Dec 2012 04:25:25 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 6 Dec 2012 04:25:24 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 20:25:22 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH v3 03/11] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
Thread-Index: AQHN06dar/twxUIzGEaYVOPr5EPoxZgLsiMQ
Date: Thu, 6 Dec 2012 12:25:22 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB9267@SHSMSX102.ccr.corp.intel.com>
References: <1354756169-12370-1-git-send-email-dongxiao.xu@intel.com>
	<1354756169-12370-4-git-send-email-dongxiao.xu@intel.com>
	<50C0937D02000078000AE806@nat28.tlf.novell.com>
In-Reply-To: <50C0937D02000078000AE806@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 03/11] nested vmx: expose bit 55 of
 IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, December 06, 2012 7:46 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH v3 03/11] nested vmx: expose bit 55 of
> IA32_VMX_BASIC_MSR to guest VMM
> 
> >>> On 06.12.12 at 02:09, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> > @@ -45,6 +45,12 @@ struct nestedvmx {
> >  /* bit 0-8, and 12 must be 1 */
> >  #define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
> >
> > +/*
> > + * bit 55 of IA32_VMX_BASIC MSR, indicating whether any VMX controls
> > +that
> > + * default to 1 may be cleared to 0.
> > + */
> > +#define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
> > +
> >  /*
> >   * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
> >   */
> 
> I assume this relates to
> 
>         /*
>          * To use EPT we expect to be able to clear certain intercepts.
>          * We check VMX_BASIC_MSR[55] to correctly handle default
> controls.
>          */
>         uint32_t must_be_one, must_be_zero, msr =
> MSR_IA32_VMX_PROCBASED_CTLS;
>         if ( vmx_basic_msr_high & (1u << 23) )
>             msr = MSR_IA32_VMX_TRUE_PROCBASED_CTLS;
> 
> in xen/arch/x86/hvm/vmx/vmcs.c. If so, please use the new constant there too,
> to point out that connection. That may require moving this to a different
> header file.

Sure, will reflect this in the next pull request.

Thanks,
Dongxiao

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaiC-0000gS-8J; Thu, 06 Dec 2012 12:37:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgaiB-0000gN-6F
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:37:55 +0000
Received: from [85.158.143.99:14987] by server-3.bemta-4.messagelabs.com id
	08/94-18211-2A190C05; Thu, 06 Dec 2012 12:37:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354797474!28308465!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1915 invoked from network); 6 Dec 2012 12:37:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 12:37:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16198919"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 12:37:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 12:37:54 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tgai9-0002TL-Kk; Thu, 06 Dec 2012 12:37:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tgai9-0003lr-DM;
	Thu, 06 Dec 2012 12:37:53 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20672.37281.34306.903202@mariner.uk.xensource.com>
Date: Thu, 6 Dec 2012 12:37:53 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1352822539-56930-1-git-send-email-roger.pau@citrix.com>,
	<50BCD2F6.8060106@citrix.com>
References: <1352822539-56930-1-git-send-email-roger.pau@citrix.com>
	<20646.26443.690228.204535@mariner.uk.xensource.com>
	<50BCD2F6.8060106@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change
	[and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("[Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
> qemu-stubdom was stripping the prefix from the "params" xenstore
> key in xenstore_parse_domain_config, which was then saved stripped in
> a variable. In xenstore_process_event we compare the "param" from
> xenstore (not stripped) with the stripped "param" saved in the
> variable, which leads to a medium change (even if there isn't any),
> since we are comparing something like aio:/path/to/file with
> /path/to/file. This only happens one time, since
> xenstore_parse_domain_config is the only place where we strip the
> prefix. The result of this bug is the following:

Roger Pau Monne writes ("Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
> Yes, it's a can of worms indeed.
> 
> The non-stubdom path is not modified, and the code changes (1st block of
> the patch) are contained inside a #ifdef STUBDOM (which is not seen on
> the patch itself, because it's already there).

Aha!  Indeed.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

This needs to go into 4.2 surely ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaiC-0000gS-8J; Thu, 06 Dec 2012 12:37:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgaiB-0000gN-6F
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:37:55 +0000
Received: from [85.158.143.99:14987] by server-3.bemta-4.messagelabs.com id
	08/94-18211-2A190C05; Thu, 06 Dec 2012 12:37:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354797474!28308465!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1915 invoked from network); 6 Dec 2012 12:37:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 12:37:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16198919"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 12:37:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 12:37:54 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tgai9-0002TL-Kk; Thu, 06 Dec 2012 12:37:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tgai9-0003lr-DM;
	Thu, 06 Dec 2012 12:37:53 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20672.37281.34306.903202@mariner.uk.xensource.com>
Date: Thu, 6 Dec 2012 12:37:53 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1352822539-56930-1-git-send-email-roger.pau@citrix.com>,
	<50BCD2F6.8060106@citrix.com>
References: <1352822539-56930-1-git-send-email-roger.pau@citrix.com>
	<20646.26443.690228.204535@mariner.uk.xensource.com>
	<50BCD2F6.8060106@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change
	[and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("[Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
> qemu-stubdom was stripping the prefix from the "params" xenstore
> key in xenstore_parse_domain_config and saving the stripped value in
> a variable. In xenstore_process_event we compare the "params" value
> from xenstore (not stripped) with the stripped value saved in the
> variable, which triggers a medium change (even though there isn't
> one), since we are comparing something like aio:/path/to/file with
> /path/to/file. This only happens once, since
> xenstore_parse_domain_config is the only place where we strip the
> prefix. The result of this bug is the following:

Roger Pau Monne writes ("Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
> Yes, it's a can of worms indeed.
> 
> The non-stubdom path is not modified, and the code changes (1st block of
> the patch) are contained inside a #ifdef STUBDOM (which is not seen on
> the patch itself, because it's already there).

Aha!  Indeed.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

This needs to go into 4.2, surely?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:44:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaoI-00013N-34; Thu, 06 Dec 2012 12:44:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgaoG-00012w-9L
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 12:44:12 +0000
Received: from [193.109.254.147:63627] by server-7.bemta-14.messagelabs.com id
	41/94-02272-A1390C05; Thu, 06 Dec 2012 12:44:10 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354797846!1706954!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28884 invoked from network); 6 Dec 2012 12:44:06 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:44:06 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tgao8-000MCX-CN; Thu, 06 Dec 2012 12:44:04 +0000
Date: Thu, 6 Dec 2012 12:44:04 +0000
From: Tim Deegan <tim@xen.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121206124404.GN82725@ocelot.phlegethon.org>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH 1/6] xen/arm: introduce map_phys_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:19 +0000 on 05 Dec (1354731583), Stefano Stabellini wrote:
> Introduce a function to map a physical memory range into virtual memory.
> It is going to be used later to map the videoram.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/mm.c        |   23 +++++++++++++++++++++++
>  xen/include/asm-arm/mm.h |    3 +++
>  2 files changed, 26 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 68ee9da..418a414 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -376,6 +376,29 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
>  }
>  
> +/* Map the physical memory range start -  end at the virtual address
> + * virt_start in 2MB chunks. start and virt_start have to be 2MB
> + * aligned.
> + */
> +void map_phys_range(paddr_t start, paddr_t end,
> +        unsigned long virt_start, unsigned attributes)
> +{
> +    ASSERT(!(start & ((1 << 21) - 1)));
> +    ASSERT(!(virt_start & ((1 << 21) - 1)));

Please use SECOND_SHIFT rather than 21, and maybe even add a SECOND_MASK
&c to page.h rather than open-coding the <<s here.

Also, can you add some assertions that the VAs here are in some
well-defined region (with annotations in config.h)?

> +    while ( start < end )
> +    {
> +        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
> +        e.pt.ai = attributes;
> +        write_pte(xen_second + second_table_offset(virt_start), e);
> +        
> +        start += (1<<21);
> +        virt_start += (1<<21);

Maybe add SECOND_SIZE &c too?

> +    }
> +
> +    flush_xen_data_tlb();

What's this for?

Cheers,

Tim.

> +}
> +
>  enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
>  static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>  {
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 3549c83..a11f20b 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -152,6 +152,9 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
>  extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
>  /* Remove a mapping from a fixmap entry */
>  extern void clear_fixmap(unsigned map);
> +/* map a 2MB aligned physical range in virtual memory. */
> +extern void map_phys_range(paddr_t start, paddr_t end,
> +		unsigned long virt_start, unsigned attributes);
>  
>  
>  #define mfn_valid(mfn)        ({                                              \
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:46:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:46:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgaq6-0001BH-Jp; Thu, 06 Dec 2012 12:46:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tgaq5-0001B6-TN
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:46:06 +0000
Received: from [85.158.139.83:57336] by server-15.bemta-5.messagelabs.com id
	DD/2B-26920-A8390C05; Thu, 06 Dec 2012 12:46:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1354797960!24802934!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27063 invoked from network); 6 Dec 2012 12:46:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 12:46:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216603289"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 12:46:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 07:46:00 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tgapz-0000Hz-Gm;
	Thu, 06 Dec 2012 12:45:59 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 12:45:59 +0000
Message-ID: <1354797959-13705-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen: arm: mark early_panic as a noreturn
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise gcc complains about variables being used uninitialised at
points which are in fact never reached.

There aren't any instances of this in the tree right now; I noticed it
while developing another patch.
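A minimal example of the warning class this silences (hypothetical code, not from the tree): without the noreturn attribute, gcc cannot see that the panic path never returns, and warns that `v` may be used uninitialised.

```c
#include <stdlib.h>

/* With noreturn, gcc knows control never comes back from the panic
 * path, so the use of 'v' below is provably reached only when it has
 * been initialised. */
static void my_early_panic(const char *msg) __attribute__((noreturn));

static void my_early_panic(const char *msg)
{
    (void)msg;
    abort();
}

int classify(int have_value, int raw)
{
    int v;

    if ( have_value )
        v = raw * 2;
    else
        my_early_panic("no value\n");

    return v;  /* reachable only when have_value was set */
}
```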

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/early_printk.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index f45f21e..a770d4a 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -15,7 +15,7 @@
 #ifdef EARLY_UART_ADDRESS
 
 void early_printk(const char *fmt, ...);
-void early_panic(const char *fmt, ...);
+void early_panic(const char *fmt, ...) __attribute__((noreturn));
 
 #else
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:46:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgaqe-0001Ef-6I; Thu, 06 Dec 2012 12:46:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tgaqc-0001EP-W1
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 12:46:39 +0000
Received: from [85.158.139.211:19884] by server-16.bemta-5.messagelabs.com id
	2F/C8-21311-EA390C05; Thu, 06 Dec 2012 12:46:38 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354797997!19357791!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6730 invoked from network); 6 Dec 2012 12:46:37 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:46:37 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tgaqb-000MDQ-2R; Thu, 06 Dec 2012 12:46:37 +0000
Date: Thu, 6 Dec 2012 12:46:37 +0000
From: Tim Deegan <tim@xen.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121206124637.GO82725@ocelot.phlegethon.org>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354731588-32579-2-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH 2/6] xen: infrastructure to have
	cross-platform video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:19 +0000 on 05 Dec (1354731584), Stefano Stabellini wrote:
> diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
> new file mode 100644
> index 0000000..e9bc92e
> --- /dev/null
> +++ b/xen/include/xen/video.h
> @@ -0,0 +1,24 @@
> +/*
> + *  vga.h

video.h ? :)

> + *
> + *  This file is subject to the terms and conditions of the GNU General Public
> + *  License.  See the file COPYING in the main directory of this archive
> + *  for more details.
> + */
> +
> +#ifndef _XEN_VIDEO_H
> +#define _XEN_VIDEO_H
> +
> +#include <public/xen.h>
> +
> +#ifdef CONFIG_VIDEO
> +void video_init(void);
> +extern void (*video_puts)(const char *);
> +void video_endboot(void);
> +#else
> +#define video_init()    ((void)0)
> +#define video_puts(s)   ((void)0)
> +#define video_endboot() ((void)0)
> +#endif
> +
> +#endif /* _XEN_VIDEO_H */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:46:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgaqf-0001Ez-I8; Thu, 06 Dec 2012 12:46:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgaqd-0001ET-NJ
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:46:39 +0000
Received: from [85.158.137.99:5847] by server-4.bemta-3.messagelabs.com id
	E8/A4-30023-EA390C05; Thu, 06 Dec 2012 12:46:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354797951!18212273!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17792 invoked from network); 6 Dec 2012 12:45:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:45:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:45:51 +0000
Message-Id: <50C0A18F02000078000AE86A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:45:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0435996F.0__="
Subject: [Xen-devel] [PATCH] gnttab_usage_print() should be static
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=__Part0435996F.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... as not being used or declared anywhere else.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2802,7 +2802,7 @@ grant_table_destroy(
     d->grant_table = NULL;
 }
 
-void gnttab_usage_print(struct domain *rd)
+static void gnttab_usage_print(struct domain *rd)
 {
     int first = 1;
     grant_ref_t ref;




--=__Part0435996F.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0435996F.0__=--


properly handle MIME multipart messages.

--=__Part0435996F.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... as not being used or declared anywhere else.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2802,7 +2802,7 @@ grant_table_destroy(
     d->grant_table =3D NULL;
 }
=20
-void gnttab_usage_print(struct domain *rd)
+static void gnttab_usage_print(struct domain *rd)
 {
     int first =3D 1;
     grant_ref_t ref;




--=__Part0435996F.0__=
Content-Type: text/plain; name="gnttab-static.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="gnttab-static.patch"

gnttab_usage_print() should be static=0A=0A... as not being used or =
declared anywhere else.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=
=0A=0A--- a/xen/common/grant_table.c=0A+++ b/xen/common/grant_table.c=0A@@ =
-2802,7 +2802,7 @@ grant_table_destroy(=0A     d->grant_table =3D NULL;=0A =
}=0A =0A-void gnttab_usage_print(struct domain *rd)=0A+static void =
gnttab_usage_print(struct domain *rd)=0A {=0A     int first =3D 1;=0A     =
grant_ref_t ref;=0A
--=__Part0435996F.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0435996F.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 12:51:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:51:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgaum-0001eV-8U; Thu, 06 Dec 2012 12:50:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgauk-0001eI-K1
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:50:54 +0000
Received: from [85.158.143.35:36349] by server-1.bemta-4.messagelabs.com id
	AC/C3-28401-DA490C05; Thu, 06 Dec 2012 12:50:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1354798172!16321211!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5882 invoked from network); 6 Dec 2012 12:49:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:49:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:49:31 +0000
Message-Id: <50C0A26A02000078000AE886@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:49:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2617BB4A.0__="
Subject: [Xen-devel] [PATCH] tighten guest memory accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2617BB4A.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Failure should always be detected and handled.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -173,7 +173,9 @@ int compat_grant_table_op(unsigned int c
                         for ( i =3D 0; i < (_s_)->nr_frames; ++i ) \
                         { \
                             unsigned int frame =3D (_s_)->frame_list.p[i];=
 \
-                            (void)__copy_to_compat_offset((_d_)->frame_lis=
t, i, &frame, 1); \
+                            if ( __copy_to_compat_offset((_d_)->frame_list=
, \
+                                                         i, &frame, 1) ) =
\
+                                (_s_)->status =3D GNTST_bad_virt_addr; \
                         } \
                     } \
                 } while (0)
@@ -310,7 +312,9 @@ int compat_grant_table_op(unsigned int c
                         for ( i =3D 0; i < (_s_)->nr_frames; ++i ) \
                         { \
                             uint64_t frame =3D (_s_)->frame_list.p[i]; \
-                            (void)__copy_to_compat_offset((_d_)->frame_lis=
t, i, &frame, 1); \
+                            if ( __copy_to_compat_offset((_d_)->frame_list=
, \
+                                                         i, &frame, 1) ) =
\
+                                (_s_)->status =3D GNTST_bad_virt_addr; \
                         } \
                     } \
                 } while (0)
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -283,18 +283,25 @@ int compat_memory_op(unsigned int cmd, X
                 compat_pfn_t pfn =3D nat.xchg->out.extent_start.p[start_ex=
tent];
=20
                 BUG_ON(pfn !=3D nat.xchg->out.extent_start.p[start_extent]=
);
-                /* Note that we ignore errors accessing the output extent =
list. */
-                __copy_to_compat_offset(cmp.xchg.out.extent_start, =
start_extent, &pfn, 1);
+                if ( __copy_to_compat_offset(cmp.xchg.out.extent_start,
+                                             start_extent, &pfn, 1) )
+                {
+                    rc =3D -EFAULT;
+                    break;
+                }
             }
=20
             cmp.xchg.nr_exchanged =3D nat.xchg->nr_exchanged;
             if ( copy_field_to_guest(guest_handle_cast(compat, compat_memo=
ry_exchange_t),
                                      &cmp.xchg, nr_exchanged) )
+                rc =3D -EFAULT;
+
+            if ( rc < 0 )
             {
                 if ( split < 0 )
                     /* Cannot cancel the continuation... */
                     domain_crash(current->domain);
-                return -EFAULT;
+                return rc;
             }
             break;
         }
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1347,6 +1347,9 @@ gnttab_setup_table(
         goto out1;
     }
=20
+    if ( !guest_handle_okay(op.frame_list, op.nr_frames) )
+        return -EFAULT;
+
     d =3D gt_lock_target_domain_by_id(op.dom);
     if ( IS_ERR(d) )
     {
@@ -1384,7 +1387,8 @@ gnttab_setup_table(
         gmfn =3D gnttab_shared_gmfn(d, gt, i);
         /* Grant tables cannot be shared */
         BUG_ON(SHARED_M2P(gmfn));
-        (void)copy_to_guest_offset(op.frame_list, i, &gmfn, 1);
+        if ( __copy_to_guest_offset(op.frame_list, i, &gmfn, 1) )
+            op.status =3D GNTST_bad_virt_addr;
     }
=20
  out3:
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -445,8 +445,7 @@ static long memory_exchange(XEN_GUEST_HA
         }
=20
         /* Assign each output page to the domain. */
-        j =3D 0;
-        while ( (page =3D page_list_remove_head(&out_chunk_list)) )
+        for ( j =3D 0; (page =3D page_list_remove_head(&out_chunk_list)); =
++j )
         {
             if ( assign_pages(d, page, exch.out.extent_order,
                               MEMF_no_refcount) )
@@ -477,9 +476,12 @@ static long memory_exchange(XEN_GUEST_HA
                 goto dying;
             }
=20
-            /* Note that we ignore errors accessing the output extent =
list. */
-            (void)__copy_from_guest_offset(
-                &gpfn, exch.out.extent_start, (i<<out_chunk_order)+j, 1);
+            if ( __copy_from_guest_offset(&gpfn, exch.out.extent_start,
+                                          (i << out_chunk_order) + j, 1) =
)
+            {
+                rc =3D -EFAULT;
+                continue;
+            }
=20
             mfn =3D page_to_mfn(page);
             guest_physmap_add_page(d, gpfn, mfn, exch.out.extent_order);
@@ -488,10 +490,11 @@ static long memory_exchange(XEN_GUEST_HA
             {
                 for ( k =3D 0; k < (1UL << exch.out.extent_order); k++ )
                     set_gpfn_from_mfn(mfn + k, gpfn + k);
-                (void)__copy_to_guest_offset(
-                    exch.out.extent_start, (i<<out_chunk_order)+j, &mfn, =
1);
+                if ( __copy_to_guest_offset(exch.out.extent_start,
+                                            (i << out_chunk_order) + j,
+                                            &mfn, 1) )
+                    rc =3D -EFAULT;
             }
-            j++;
         }
         BUG_ON( !(d->is_dying) && (j !=3D (1UL << out_chunk_order)) );
     }



--=__Part2617BB4A.0__=
Content-Type: text/plain; name="guest-copy-checks.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="guest-copy-checks.patch"

tighten guest memory accesses=0A=0AFailure should always be detected and =
handled.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- =
a/xen/common/compat/grant_table.c=0A+++ b/xen/common/compat/grant_table.c=
=0A@@ -173,7 +173,9 @@ int compat_grant_table_op(unsigned int c=0A         =
                for ( i =3D 0; i < (_s_)->nr_frames; ++i ) \=0A            =
             { \=0A                             unsigned int frame =3D =
(_s_)->frame_list.p[i]; \=0A-                            (void)__copy_to_co=
mpat_offset((_d_)->frame_list, i, &frame, 1); \=0A+                        =
    if ( __copy_to_compat_offset((_d_)->frame_list, \=0A+                  =
                                       i, &frame, 1) ) \=0A+               =
                 (_s_)->status =3D GNTST_bad_virt_addr; \=0A               =
          } \=0A                     } \=0A                 } while =
(0)=0A@@ -310,7 +312,9 @@ int compat_grant_table_op(unsigned int c=0A      =
                   for ( i =3D 0; i < (_s_)->nr_frames; ++i ) \=0A         =
                { \=0A                             uint64_t frame =3D =
(_s_)->frame_list.p[i]; \=0A-                            (void)__copy_to_co=
mpat_offset((_d_)->frame_list, i, &frame, 1); \=0A+                        =
    if ( __copy_to_compat_offset((_d_)->frame_list, \=0A+                  =
                                       i, &frame, 1) ) \=0A+               =
                 (_s_)->status =3D GNTST_bad_virt_addr; \=0A               =
          } \=0A                     } \=0A                 } while =
(0)=0A--- a/xen/common/compat/memory.c=0A+++ b/xen/common/compat/memory.c=
=0A@@ -283,18 +283,25 @@ int compat_memory_op(unsigned int cmd, X=0A       =
          compat_pfn_t pfn =3D nat.xchg->out.extent_start.p[start_extent];=
=0A =0A                 BUG_ON(pfn !=3D nat.xchg->out.extent_start.p[start_=
extent]);=0A-                /* Note that we ignore errors accessing the =
output extent list. */=0A-                __copy_to_compat_offset(cmp.xchg.=
out.extent_start, start_extent, &pfn, 1);=0A+                if ( =
__copy_to_compat_offset(cmp.xchg.out.extent_start,=0A+                     =
                        start_extent, &pfn, 1) )=0A+                {=0A+  =
                  rc =3D -EFAULT;=0A+                    break;=0A+        =
        }=0A             }=0A =0A             cmp.xchg.nr_exchanged =3D =
nat.xchg->nr_exchanged;=0A             if ( copy_field_to_guest(guest_handl=
e_cast(compat, compat_memory_exchange_t),=0A                               =
       &cmp.xchg, nr_exchanged) )=0A+                rc =3D -EFAULT;=0A+=0A=
+            if ( rc < 0 )=0A             {=0A                 if ( split =
< 0 )=0A                     /* Cannot cancel the continuation... */=0A    =
                 domain_crash(current->domain);=0A-                return =
-EFAULT;=0A+                return rc;=0A             }=0A             =
break;=0A         }=0A--- a/xen/common/grant_table.c=0A+++ b/xen/common/gra=
nt_table.c=0A@@ -1347,6 +1347,9 @@ gnttab_setup_table(=0A         goto =
out1;=0A     }=0A =0A+    if ( !guest_handle_okay(op.frame_list, op.nr_fram=
es) )=0A+        return -EFAULT;=0A+=0A     d =3D gt_lock_target_domain_by_=
id(op.dom);=0A     if ( IS_ERR(d) )=0A     {=0A@@ -1384,7 +1387,8 @@ =
gnttab_setup_table(=0A         gmfn =3D gnttab_shared_gmfn(d, gt, i);=0A   =
      /* Grant tables cannot be shared */=0A         BUG_ON(SHARED_M2P(gmfn=
));=0A-        (void)copy_to_guest_offset(op.frame_list, i, &gmfn, 1);=0A+ =
       if ( __copy_to_guest_offset(op.frame_list, i, &gmfn, 1) )=0A+       =
     op.status =3D GNTST_bad_virt_addr;=0A     }=0A =0A  out3:=0A--- =
a/xen/common/memory.c=0A+++ b/xen/common/memory.c=0A@@ -445,8 +445,7 @@ =
static long memory_exchange(XEN_GUEST_HA=0A         }=0A =0A         /* =
Assign each output page to the domain. */=0A-        j =3D 0;=0A-        =
while ( (page =3D page_list_remove_head(&out_chunk_list)) )=0A+        for =
( j =3D 0; (page =3D page_list_remove_head(&out_chunk_list)); ++j )=0A     =
    {=0A             if ( assign_pages(d, page, exch.out.extent_order,=0A  =
                             MEMF_no_refcount) )=0A@@ -477,9 +476,12 @@ =
static long memory_exchange(XEN_GUEST_HA=0A                 goto dying;=0A =
            }=0A =0A-            /* Note that we ignore errors accessing =
the output extent list. */=0A-            (void)__copy_from_guest_offset(=
=0A-                &gpfn, exch.out.extent_start, (i<<out_chunk_order)+j, =
1);=0A+            if ( __copy_from_guest_offset(&gpfn, exch.out.extent_sta=
rt,=0A+                                          (i << out_chunk_order) + =
j, 1) )=0A+            {=0A+                rc =3D -EFAULT;=0A+            =
    continue;=0A+            }=0A =0A             mfn =3D page_to_mfn(page)=
;=0A             guest_physmap_add_page(d, gpfn, mfn, exch.out.extent_order=
);=0A@@ -488,10 +490,11 @@ static long memory_exchange(XEN_GUEST_HA=0A     =
        {=0A                 for ( k =3D 0; k < (1UL << exch.out.extent_ord=
er); k++ )=0A                     set_gpfn_from_mfn(mfn + k, gpfn + =
k);=0A-                (void)__copy_to_guest_offset(=0A-                   =
 exch.out.extent_start, (i<<out_chunk_order)+j, &mfn, 1);=0A+              =
  if ( __copy_to_guest_offset(exch.out.extent_start,=0A+                   =
                         (i << out_chunk_order) + j,=0A+                   =
                         &mfn, 1) )=0A+                    rc =3D =
-EFAULT;=0A             }=0A-            j++;=0A         }=0A         =
BUG_ON( !(d->is_dying) && (j !=3D (1UL << out_chunk_order)) );=0A     }=0A
--=__Part2617BB4A.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2617BB4A.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 12:53:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgaxG-0001xi-Rv; Thu, 06 Dec 2012 12:53:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TgaxE-0001xb-QH
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 12:53:29 +0000
Received: from [85.158.139.211:56006] by server-2.bemta-5.messagelabs.com id
	9F/29-04892-84590C05; Thu, 06 Dec 2012 12:53:28 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354798406!19321008!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2871 invoked from network); 6 Dec 2012 12:53:27 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:53:27 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TgaxC-000MES-5A; Thu, 06 Dec 2012 12:53:26 +0000
Date: Thu, 6 Dec 2012 12:53:26 +0000
From: Tim Deegan <tim@xen.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121206125326.GP82725@ocelot.phlegethon.org>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
	HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:19 +0000 on 05 Dec (1354731588), Stefano Stabellini wrote:
> For the moment the resolution is hardcoded to 1280x1024@60.
> Use the generic framebuffer functions to print on the screen.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/Rules.mk         |    1 +
>  xen/drivers/video/Makefile    |    1 +
>  xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/config.h  |    3 +
>  4 files changed, 170 insertions(+), 0 deletions(-)
>  create mode 100644 xen/drivers/video/arm_hdlcd.c
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index fa9f9c1..9580e6b 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -8,6 +8,7 @@
>  
>  HAS_DEVICE_TREE := y
>  HAS_VIDEO := y
> +HAS_ARM_HDLCD := y
>  
>  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 3b3eb43..8a6f5da 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
>  obj-$(HAS_VIDEO) += font_8x8.o
>  obj-$(HAS_VIDEO) += fb.o
>  obj-$(HAS_VGA) += vesa.o
> +obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
> diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
> new file mode 100644
> index 0000000..68f588c
> --- /dev/null
> +++ b/xen/drivers/video/arm_hdlcd.c
> @@ -0,0 +1,165 @@
> +/*
> + * xen/drivers/video/arm_hdlcd.c
> + *
> + * Driver for ARM HDLCD Controller
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + * Copyright (c) 2012 Citrix Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <asm/delay.h>
> +#include <asm/types.h>
> +#include <xen/config.h>
> +#include <xen/device_tree.h>
> +#include <xen/libfdt/libfdt.h>
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +#include "font.h"
> +#include "fb.h"
> +
> +#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
> +
> +#define HDLCD_INTMASK       (0x18/4)
> +#define HDLCD_FBBASE        (0x100/4)
> +#define HDLCD_LINELENGTH    (0x104/4)
> +#define HDLCD_LINECOUNT     (0x108/4)
> +#define HDLCD_LINEPITCH     (0x10C/4)
> +#define HDLCD_BUS           (0x110/4)
> +#define HDLCD_VSYNC         (0x200/4)
> +#define HDLCD_VBACK         (0x204/4)
> +#define HDLCD_VDATA         (0x208/4)
> +#define HDLCD_VFRONT        (0x20C/4)
> +#define HDLCD_HSYNC         (0x210/4)
> +#define HDLCD_HBACK         (0x214/4)
> +#define HDLCD_HDATA         (0x218/4)
> +#define HDLCD_HFRONT        (0x21C/4)
> +#define HDLCD_POLARITIES    (0x220/4)
> +#define HDLCD_COMMAND       (0x230/4)
> +#define HDLCD_PF            (0x240/4)
> +#define HDLCD_RED           (0x244/4)
> +#define HDLCD_GREEN         (0x248/4)
> +#define HDLCD_BLUE          (0x24C/4)
> +
> +#define BPP             4
> +#define XRES            1280
> +#define YRES            1024
> +#define refresh         60
> +#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
> +#define left_margin     80
> +#define hback left_margin
> +#define right_margin    48
> +#define hfront right_margin
> +#define upper_margin    21
> +#define vback upper_margin
> +#define lower_margin    3
> +#define vfront lower_margin
> +#define hsync_len       32
> +#define vsync_len       6
> +
> +#define HDLCD_SIZE (XRES*YRES*BPP)
> +
> +static void vga_noop_puts(const char *s) {}
> +void (*video_puts)(const char *) = vga_noop_puts;
> +
> +static void hdlcd_flush(void)
> +{
> +    dsb();
> +}
> +
> +void __init video_init(void)
> +{
> +    int node, depth;
> +    u32 address_cells, size_cells;
> +    struct fb_prop fbp;
> +    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
> +    paddr_t hdlcd_start, hdlcd_size;
> +    paddr_t framebuffer_start, framebuffer_size;
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +
> +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> +                &address_cells, &size_cells) <= 0 )
> +        return;
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &hdlcd_start, &hdlcd_size); 
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &framebuffer_start, &framebuffer_size); 
> +
> +    if ( !hdlcd_start || !framebuffer_start )
> +        return;
> +
> +    printk("Initializing HDLCD driver\n");
> +
> +    map_phys_range(framebuffer_start,
> +                    framebuffer_start + framebuffer_size,
> +                    VRAM_VIRT_START, DEV_WC);
> +    memset(lfb, 0x00, HDLCD_SIZE);

This needs some checks that framebuffer_size >= HDLCD_SIZE, and that
framebuffer_size <= XENHEAP_VIRT_START - VRAM_VIRT_START.

> +    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
> +    HDLCD[HDLCD_COMMAND] = 0;
> +
> +    HDLCD[HDLCD_LINELENGTH] = XRES * BPP;
> +    HDLCD[HDLCD_LINECOUNT] = YRES - 1;
> +    HDLCD[HDLCD_LINEPITCH] = XRES * BPP;
> +    HDLCD[HDLCD_PF] = ((BPP - 1) << 3);
> +    HDLCD[HDLCD_INTMASK] = 0;
> +    HDLCD[HDLCD_FBBASE] = framebuffer_start;
> +    HDLCD[HDLCD_BUS] = 0xf00|(1<<4);
> +    HDLCD[HDLCD_VBACK] = upper_margin - 1;
> +    HDLCD[HDLCD_VSYNC] = vsync_len - 1;
> +    HDLCD[HDLCD_VDATA] = YRES - 1;
> +    HDLCD[HDLCD_VFRONT] = lower_margin - 1;
> +    HDLCD[HDLCD_HBACK] = left_margin - 1;
> +    HDLCD[HDLCD_HSYNC] = hsync_len - 1;
> +    HDLCD[HDLCD_HDATA] = XRES - 1;
> +    HDLCD[HDLCD_HFRONT] = right_margin - 1;
> +    HDLCD[HDLCD_POLARITIES] = (1<<2)|(1<<3);
> +    HDLCD[HDLCD_RED] = (8<<8)|0;
> +    HDLCD[HDLCD_GREEN] = (8<<8)|8;
> +    HDLCD[HDLCD_BLUE] = (8<<8)|16;
> +
> +    HDLCD[HDLCD_COMMAND] = 1;
> +    clear_fixmap(FIXMAP_MISC);
> +
> +    fbp.lfb = lfb;
> +    fbp.font = &font_vga_8x16;
> +    fbp.pixel_on = 0xffffff;
> +    fbp.bits_per_pixel = BPP*8;
> +    fbp.bytes_per_line = BPP*XRES;
> +    fbp.width = XRES;
> +    fbp.height = YRES;
> +    fbp.flush = hdlcd_flush;
> +    fbp.text_columns = XRES / 8;
> +    fbp.text_rows = YRES / 16;
> +    if ( fb_init(fbp) < 0 )
> +            return;
> +    video_puts = fb_scroll_puts;
> +}
> +
> +void video_endboot(void)
> +{
> +    if ( video_puts != vga_noop_puts )
> +        fb_alloc();
> +}
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 2a05539..9727562 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -19,6 +19,8 @@
>  
>  #define CONFIG_DOMAIN_PAGE 1
>  
> +#define CONFIG_VIDEO 1
> +
>  #define OPT_CONSOLE_STR "com1"
>  
>  #ifdef MAX_PHYS_CPUS
> @@ -73,6 +75,7 @@
>  #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
>  #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
>  #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
> +#define VRAM_VIRT_START        mk_unsigned_long(0x10000000)

Please update the comments above to reflect this change.

Cheers,

Tim.

>  #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
>  #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
>  
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:55:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:55:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgaz2-000250-C1; Thu, 06 Dec 2012 12:55:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgaz0-00024t-U5
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:55:19 +0000
Received: from [85.158.138.51:62623] by server-9.bemta-3.messagelabs.com id
	58/2C-02388-6B590C05; Thu, 06 Dec 2012 12:55:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354798515!27426527!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6800 invoked from network); 6 Dec 2012 12:55:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:55:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:55:14 +0000
Message-Id: <50C0A3C302000078000AE896@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:55:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartCEFF53A3.0__="
Subject: [Xen-devel] [PATCH] x86/HVM: remove dead code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartCEFF53A3.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2460,9 +2460,7 @@ static enum hvm_copy_result __hvm_copy(
             return HVMCOPY_unhandleable;
         }
         if ( !page )
-        {
             return HVMCOPY_bad_gfn_to_mfn;
-        }
 
         p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
 
@@ -2560,11 +2558,7 @@ static enum hvm_copy_result __hvm_clear(
             return HVMCOPY_unhandleable;
         }
         if ( !page )
-        {
-            if ( page )
-                put_page(page);
             return HVMCOPY_bad_gfn_to_mfn;
-        }
 
         p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
 




--=__PartCEFF53A3.0__=
Content-Type: text/plain; name="x86-hvm-dead-code.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-hvm-dead-code.patch"

x86/HVM: remove dead code

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2460,9 +2460,7 @@ static enum hvm_copy_result __hvm_copy(
             return HVMCOPY_unhandleable;
         }
         if ( !page )
-        {
             return HVMCOPY_bad_gfn_to_mfn;
-        }
 
         p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
 
@@ -2560,11 +2558,7 @@ static enum hvm_copy_result __hvm_clear(
             return HVMCOPY_unhandleable;
         }
         if ( !page )
-        {
-            if ( page )
-                put_page(page);
             return HVMCOPY_bad_gfn_to_mfn;
-        }
 
         p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
 
--=__PartCEFF53A3.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartCEFF53A3.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 12:55:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgazH-000276-0N; Thu, 06 Dec 2012 12:55:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgazF-00026l-PY
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:55:33 +0000
Received: from [85.158.139.83:54893] by server-16.bemta-5.messagelabs.com id
	DC/CC-21311-5C590C05; Thu, 06 Dec 2012 12:55:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354798529!21435117!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10987 invoked from network); 6 Dec 2012 12:55:30 -0000
Received: from unknown (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:55:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:54:15 +0000
Message-Id: <50C0A38702000078000AE892@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:54:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0A3B9767.0__="
Subject: [Xen-devel] [PATCH] memop: adjust error checking in
	populate_physmap()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0A3B9767.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Checking that multi-page allocations are permitted is unnecessary for
PoD population operations. Instead, the (loop invariant) check added
for addressing XSA-31 can be moved here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -99,7 +99,8 @@ static void populate_physmap(struct memo
                                      a->nr_extents-1) )
         return;
=20
-    if ( !multipage_allocation_permitted(current->domain, a->extent_order)=
 )
+    if ( a->memflags & MEMF_populate_on_demand ? a->extent_order > =
MAX_ORDER :
+         !multipage_allocation_permitted(current->domain, a->extent_order)=
 )
         return;
=20
     for ( i =3D a->nr_done; i < a->nr_extents; i++ )
@@ -115,8 +116,7 @@ static void populate_physmap(struct memo
=20
         if ( a->memflags & MEMF_populate_on_demand )
         {
-            if ( a->extent_order > MAX_ORDER ||
-                 guest_physmap_mark_populate_on_demand(d, gpfn,
+            if ( guest_physmap_mark_populate_on_demand(d, gpfn,
                                                        a->extent_order) < =
0 )
                 goto out;
         }




--=__Part0A3B9767.0__=
Content-Type: text/plain; name="memop-populate-PoD-multipage.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="memop-populate-PoD-multipage.patch"

memop: adjust error checking in populate_physmap()=0A=0AChecking that =
multi-page allocations are permitted is unnecessary for=0APoD population =
operations. Instead, the (loop invariant) check added=0Afor addressing =
XSA-31 can be moved here.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.co=
m>=0A=0A--- a/xen/common/memory.c=0A+++ b/xen/common/memory.c=0A@@ -99,7 =
+99,8 @@ static void populate_physmap(struct memo=0A                       =
               a->nr_extents-1) )=0A         return;=0A =0A-    if ( =
!multipage_allocation_permitted(current->domain, a->extent_order) )=0A+    =
if ( a->memflags & MEMF_populate_on_demand ? a->extent_order > MAX_ORDER =
:=0A+         !multipage_allocation_permitted(current->domain, a->extent_or=
der) )=0A         return;=0A =0A     for ( i =3D a->nr_done; i < a->nr_exte=
nts; i++ )=0A@@ -115,8 +116,7 @@ static void populate_physmap(struct =
memo=0A =0A         if ( a->memflags & MEMF_populate_on_demand )=0A        =
 {=0A-            if ( a->extent_order > MAX_ORDER ||=0A-                 =
guest_physmap_mark_populate_on_demand(d, gpfn,=0A+            if ( =
guest_physmap_mark_populate_on_demand(d, gpfn,=0A                          =
                              a->extent_order) < 0 )=0A                 =
goto out;=0A         }=0A
--=__Part0A3B9767.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0A3B9767.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 12:56:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgazm-0002Cy-Er; Thu, 06 Dec 2012 12:56:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tgazl-0002CY-2W
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 12:56:05 +0000
Received: from [85.158.139.211:27001] by server-10.bemta-5.messagelabs.com id
	DA/05-09257-4E590C05; Thu, 06 Dec 2012 12:56:04 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354798561!19321485!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17783 invoked from network); 6 Dec 2012 12:56:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 12:56:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216603949"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 12:56:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 07:56:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tgazg-0000RK-Di;
	Thu, 06 Dec 2012 12:56:00 +0000
Date: Thu, 6 Dec 2012 12:55:56 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354788840.17165.68.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212061253590.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<1354788840.17165.68.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 5/6] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> > Introduce a find_compatible_node function that can be used by device
> > drivers to find the node corresponding to their device in the device
> > tree.
> > 
> > Also add device_tree_node_compatible to device_tree.h, that is currently
> > missing.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/setup.c          |    2 +-
> >  xen/common/device_tree.c      |   47 +++++++++++++++++++++++++++++++++++++++++
> >  xen/include/xen/device_tree.h |    3 ++
> >  3 files changed, 51 insertions(+), 1 deletions(-)
> > 
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> > index 5f4e318..d978938 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -187,7 +187,7 @@ void __init start_xen(unsigned long boot_phys_offset,
> >  
> >      smp_clear_cpu_maps();
> >  
> > -    fdt = (void *)BOOT_MISC_VIRT_START
> > +    device_tree_flattened = fdt = (void *)BOOT_MISC_VIRT_START
> 
> This seems unrelated to the commit log?
> 
> Is this to avoid declaring an early variant? It seems like this could
> mean we can drop the fdt variable from a bunch of the existing early_*
> functions.

Yes, it is needed to make device_tree_flattened available earlier than
setup_mm.
I'll get rid of fdt.


> >          + (atag_paddr & ((1 << SECOND_SHIFT) - 1));
> >      fdt_size = device_tree_early_init(fdt);
> >  
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 8d5b6b0..ca0aba7 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -173,6 +173,53 @@ int device_tree_for_each_node(const void *fdt,
> >      return 0;
> >  }
> >  
> > +struct find_compat {
> > +    const char *compatible;
> > +    int found;
> > +    int node;
> > +    int depth;
> > +    u32 address_cells;
> > +    u32 size_cells;
> > +};
> > +
> > +static int _find_compatible_node(const void *fdt,
> > +                             int node, const char *name, int depth,
> > +                             u32 address_cells, u32 size_cells,
> > +                             void *data)
> > +{
> > +    struct find_compat *c = (struct find_compat *) data;
> 
> Do you want 
> 	if ( c->found ) 
> 		return ?

Good idea


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 12:57:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb12-0002Pp-Uq; Thu, 06 Dec 2012 12:57:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgb10-0002PR-NW
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:57:22 +0000
Received: from [85.158.138.51:55246] by server-16.bemta-3.messagelabs.com id
	AF/F7-07461-D2690C05; Thu, 06 Dec 2012 12:57:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354798574!27426829!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14506 invoked from network); 6 Dec 2012 12:56:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:56:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:56:14 +0000
Message-Id: <50C0A3FE02000078000AE89A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:56:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part93A20EFE.0__="
Subject: [Xen-devel] [PATCH] x86/HVM: add missing assert to stdvga's
	mmio_move()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part93A20EFE.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... to match the IOREQ_READ path.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -519,6 +519,7 @@ static int mmio_move(struct hvm_hw_stdvg
                             put_page(dp);
                         return 0;
                     }
+                    ASSERT(!dp);
                     tmp =3D stdvga_mem_read(data, p->size);
                 }
                 stdvga_mem_write(addr, tmp, p->size);




--=__Part93A20EFE.0__=
Content-Type: text/plain; name="x86-hvm-stdvga-missing-assert.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-hvm-stdvga-missing-assert.patch"

x86/HVM: add missing assert to stdvga's mmio_move()=0A=0A... to match the =
IOREQ_READ path.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A-=
-- a/xen/arch/x86/hvm/stdvga.c=0A+++ b/xen/arch/x86/hvm/stdvga.c=0A@@ =
-519,6 +519,7 @@ static int mmio_move(struct hvm_hw_stdvg=0A               =
              put_page(dp);=0A                         return 0;=0A        =
             }=0A+                    ASSERT(!dp);=0A                     =
tmp =3D stdvga_mem_read(data, p->size);=0A                 }=0A            =
     stdvga_mem_write(addr, tmp, p->size);=0A
--=__Part93A20EFE.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part93A20EFE.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 12:59:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 12:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb2u-0002b9-FS; Thu, 06 Dec 2012 12:59:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgb2s-0002ax-Rd
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:59:19 +0000
Received: from [85.158.138.51:11502] by server-5.bemta-3.messagelabs.com id
	AB/77-26311-6A690C05; Thu, 06 Dec 2012 12:59:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354798757!27650168!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9612 invoked from network); 6 Dec 2012 12:59:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:59:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:59:16 +0000
Message-Id: <50C0A4B302000078000AE8BB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:59:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDDEC40B3.0__="
Subject: [Xen-devel] [PATCH] x86: properly fail mmuext ops when
 get_page_from_gfn() fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDDEC40B3.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

I noticed this inconsistency while analyzing the code for XSA-32.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2776,7 +2776,7 @@ long do_mmuext_op(
             page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
             if ( unlikely(!page) )
             {
-                rc = -EINVAL;
+                okay = 0;
                 break;
             }
 
@@ -2836,6 +2836,7 @@ long do_mmuext_op(
             page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
             if ( unlikely(!page) )
             {
+                okay = 0;
                 MEM_LOG("Mfn %lx bad domain", op.arg1.mfn);
                 break;
             }
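
[Editor's note: a hypothetical C sketch of the error-handling pattern this patch aligns with — a per-operation `okay` flag converted to -EINVAL in one place, rather than assigning the error code ad hoc at each failure site. Names and structure are simplified stand-ins, not Xen's actual do_mmuext_op() loop.]

```c
#include <assert.h>
#include <errno.h>

/* Sketch: each op records success in a per-op `okay` flag; a single
 * exit path turns a cleared flag into -EINVAL. The two failure sites
 * in the patch are made to use this one consistent path. */
static int process_ops(const int *ops, int nr)
{
    int rc = 0;

    for ( int i = 0; i < nr; i++ )
    {
        int okay = 1;

        if ( ops[i] < 0 )   /* stand-in for get_page_from_gfn() returning NULL */
            okay = 0;

        if ( !okay )
        {
            rc = -EINVAL;   /* one consistent failure conversion */
            break;
        }
    }
    return rc;
}
```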




--=__PartDDEC40B3.0__=
Content-Type: text/plain; name="x86-mmuext-errors.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-mmuext-errors.patch"

x86: properly fail mmuext ops when get_page_from_gfn() fails=0A=0AI =
noticed this inconsistency while analyzing the code for XSA-32.=0A=0ASigned=
-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/mm.c=0A+++=
 b/xen/arch/x86/mm.c=0A@@ -2776,7 +2776,7 @@ long do_mmuext_op(=0A         =
    page =3D get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);=0A =
            if ( unlikely(!page) )=0A             {=0A-                rc =
=3D -EINVAL;=0A+                okay =3D 0;=0A                 break;=0A   =
          }=0A =0A@@ -2836,6 +2836,7 @@ long do_mmuext_op(=0A             =
page =3D get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);=0A     =
        if ( unlikely(!page) )=0A             {=0A+                okay =
=3D 0;=0A                 MEM_LOG("Mfn %lx bad domain", op.arg1.mfn);=0A   =
              break;=0A             }=0A
--=__PartDDEC40B3.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartDDEC40B3.0__=--



From xen-devel-bounces@lists.xen.org Thu Dec 06 13:00:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb3X-0002gf-Sp; Thu, 06 Dec 2012 12:59:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgb3W-0002gD-41
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 12:59:58 +0000
Received: from [85.158.137.99:23720] by server-14.bemta-3.messagelabs.com id
	D8/F7-31424-DC690C05; Thu, 06 Dec 2012 12:59:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354798796!18164894!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14040 invoked from network); 6 Dec 2012 12:59:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 12:59:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 12:59:55 +0000
Message-Id: <50C0A4DA02000078000AE8BF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 12:59:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB48529DA.0__="
Subject: [Xen-devel] [PATCH] x86/p2m: drop redundant macro definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB48529DA.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Also, add log level indicator to P2M_ERROR(), and drop pointless
underscores from all related macros' parameter names.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -46,18 +46,6 @@ boolean_param("hap_1gb", opt_hap_1gb);
 bool_t __read_mostly opt_hap_2mb = 1;
 boolean_param("hap_2mb", opt_hap_2mb);
 
-/* Printouts */
-#define P2M_PRINTK(_f, _a...)                                \
-    debugtrace_printk("p2m: %s(): " _f, __func__, ##_a)
-#define P2M_ERROR(_f, _a...)                                 \
-    printk("pg error: %s(): " _f, __func__, ##_a)
-#if P2M_DEBUGGING
-#define P2M_DEBUG(_f, _a...)                                 \
-    debugtrace_printk("p2mdebug: %s(): " _f, __func__, ##_a)
-#else
-#define P2M_DEBUG(_f, _a...) do { (void)(_f); } while(0)
-#endif
-
 
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef mfn_to_page
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -607,15 +607,15 @@ extern void audit_p2m(struct domain *d,
 #endif /* P2M_AUDIT */
 
 /* Printouts */
-#define P2M_PRINTK(_f, _a...)                                \
-    debugtrace_printk("p2m: %s(): " _f, __func__, ##_a)
-#define P2M_ERROR(_f, _a...)                                 \
-    printk("pg error: %s(): " _f, __func__, ##_a)
+#define P2M_PRINTK(f, a...)                                \
+    debugtrace_printk("p2m: %s(): " f, __func__, ##a)
+#define P2M_ERROR(f, a...)                                 \
+    printk(XENLOG_G_ERR "pg error: %s(): " f, __func__, ##a)
 #if P2M_DEBUGGING
-#define P2M_DEBUG(_f, _a...)                                 \
-    debugtrace_printk("p2mdebug: %s(): " _f, __func__, ##_a)
+#define P2M_DEBUG(f, a...)                                 \
+    debugtrace_printk("p2mdebug: %s(): " f, __func__, ##a)
 #else
-#define P2M_DEBUG(_f, _a...) do { (void)(_f); } while(0)
+#define P2M_DEBUG(f, a...) do { (void)(f); } while(0)
 #endif
 
 /* Called by p2m code when demand-populating a PoD page */
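
[Editor's note: a standalone illustration of the GNU C named-variadic-macro pattern the patch keeps: `a...` collects the extra arguments and `, ##a` deletes the trailing comma when the list is empty. LOG() and buf are made-up names for this sketch, not Xen's macros.]

```c
#include <stdio.h>
#include <string.h>

static char buf[64];

/* `a...` names the variadic args; `##a` removes the preceding comma
 * when no extra arguments are passed (GNU C extension). */
#define LOG(f, a...) snprintf(buf, sizeof(buf), "%s(): " f, __func__, ##a)

static const char *with_args(void)
{
    LOG("count=%d", 3);   /* comma before the argument list is kept */
    return buf;
}

static const char *without_args(void)
{
    LOG("hello");         /* ##a swallows the dangling comma */
    return buf;
}
```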




--=__PartB48529DA.0__=
Content-Type: text/plain; name="x86-p2m-duplicates.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-p2m-duplicates.patch"

x86/p2m: drop redundant macro definitions=0A=0AAlso, add log level =
indicator to P2M_ERROR(), and drop pointless=0Aunderscores from all =
related macros' parameter names.=0A=0ASigned-off-by: Jan Beulich <jbeulich@=
suse.com>=0A=0A--- a/xen/arch/x86/mm/p2m.c=0A+++ b/xen/arch/x86/mm/p2m.c=0A=
@@ -46,18 +46,6 @@ boolean_param("hap_1gb", opt_hap_1gb);=0A bool_t =
__read_mostly opt_hap_2mb =3D 1;=0A boolean_param("hap_2mb", opt_hap_2mb);=
=0A =0A-/* Printouts */=0A-#define P2M_PRINTK(_f, _a...)                   =
             \=0A-    debugtrace_printk("p2m: %s(): " _f, __func__, =
##_a)=0A-#define P2M_ERROR(_f, _a...)                                 =
\=0A-    printk("pg error: %s(): " _f, __func__, ##_a)=0A-#if P2M_DEBUGGING=
=0A-#define P2M_DEBUG(_f, _a...)                                 \=0A-    =
debugtrace_printk("p2mdebug: %s(): " _f, __func__, ##_a)=0A-#else=0A-#defin=
e P2M_DEBUG(_f, _a...) do { (void)(_f); } while(0)=0A-#endif=0A-=0A =0A /* =
Override macros from asm/page.h to make them work with mfn_t */=0A #undef =
mfn_to_page=0A--- a/xen/include/asm-x86/p2m.h=0A+++ b/xen/include/asm-x86/p=
2m.h=0A@@ -607,15 +607,15 @@ extern void audit_p2m(struct domain *d,=0A =
#endif /* P2M_AUDIT */=0A =0A /* Printouts */=0A-#define P2M_PRINTK(_f, =
_a...)                                \=0A-    debugtrace_printk("p2m: =
%s(): " _f, __func__, ##_a)=0A-#define P2M_ERROR(_f, _a...)                =
                 \=0A-    printk("pg error: %s(): " _f, __func__, =
##_a)=0A+#define P2M_PRINTK(f, a...)                                \=0A+  =
  debugtrace_printk("p2m: %s(): " f, __func__, ##a)=0A+#define P2M_ERROR(f,=
 a...)                                 \=0A+    printk(XENLOG_G_ERR "pg =
error: %s(): " f, __func__, ##a)=0A #if P2M_DEBUGGING=0A-#define P2M_DEBUG(=
_f, _a...)                                 \=0A-    debugtrace_printk("p2md=
ebug: %s(): " _f, __func__, ##_a)=0A+#define P2M_DEBUG(f, a...)            =
                     \=0A+    debugtrace_printk("p2mdebug: %s(): " f, =
__func__, ##a)=0A #else=0A-#define P2M_DEBUG(_f, _a...) do { (void)(_f); } =
while(0)=0A+#define P2M_DEBUG(f, a...) do { (void)(f); } while(0)=0A =
#endif=0A =0A /* Called by p2m code when demand-populating a PoD page */=0A
--=__PartB48529DA.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB48529DA.0__=--



From xen-devel-bounces@lists.xen.org Thu Dec 06 13:00:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb4K-0002qc-FY; Thu, 06 Dec 2012 13:00:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tgb4I-0002qL-Ru
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:00:47 +0000
Received: from [193.109.254.147:27785] by server-13.bemta-14.messagelabs.com
	id E0/58-11239-EF690C05; Thu, 06 Dec 2012 13:00:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354798844!1897688!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26234 invoked from network); 6 Dec 2012 13:00:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:00:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16199395"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 13:00:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	13:00:44 +0000
Message-ID: <1354798843.17165.87.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Thu, 6 Dec 2012 13:00:43 +0000
In-Reply-To: <50C08BD1.50905@citrix.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
	<1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
	<50C08BD1.50905@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with
 cells	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 12:13 +0000, David Vrabel wrote:
> On 03/12/12 17:10, Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > v3: early_panic instead of BUG_ON
> > v2: drop unrelated white space fixup
> > ---
> >  xen/common/device_tree.c |    3 +++
> >  1 files changed, 3 insertions(+), 0 deletions(-)
> > 
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 9eb316f..5a0a1a6 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
> >  {
> >      *val = 0;
> >  
> > +    if ( cells > 2 )
> > +        early_panic("dtb value contains > 2 cells\n");
> > +
> 
> This seems like overkill to me.  get_val() truncates the value down to
> 64 bits which is fine as no valid physical address is more than 64 bits.

If something does end up giving us a >2 cell number (for example a 4
cell thing with an unhelpful endianness) then we will silently end up
doing something which is unlikely to be the right thing.

Ian.
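
[Editor's note: a simplified model of the behaviour under discussion, illustrative only — the real get_val() reads big-endian cells from the flattened tree; here the cells are assumed already host-endian. Successive 32-bit cells are folded into a u64, so with more than two cells the high cells are silently shifted out: the truncation David describes, and the silent wrong answer Ian is worried about.]

```c
#include <assert.h>
#include <stdint.h>

/* Fold `cells` consecutive 32-bit cells into one 64-bit value,
 * advancing the caller's cursor. For cells > 2 the leading cells
 * are discarded off the top of the u64. */
static uint64_t get_val_sketch(const uint32_t **cell, uint32_t cells)
{
    uint64_t val = 0;

    while ( cells-- )
    {
        val <<= 32;          /* for cells > 2 this drops the top cell */
        val |= *(*cell)++;
    }
    return val;
}
```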


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:00:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb4K-0002qc-FY; Thu, 06 Dec 2012 13:00:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tgb4I-0002qL-Ru
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:00:47 +0000
Received: from [193.109.254.147:27785] by server-13.bemta-14.messagelabs.com
	id E0/58-11239-EF690C05; Thu, 06 Dec 2012 13:00:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354798844!1897688!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26234 invoked from network); 6 Dec 2012 13:00:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:00:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16199395"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 13:00:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	13:00:44 +0000
Message-ID: <1354798843.17165.87.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Thu, 6 Dec 2012 13:00:43 +0000
In-Reply-To: <50C08BD1.50905@citrix.com>
References: <1354554611.2693.30.camel@zakaz.uk.xensource.com>
	<1354554631-17861-4-git-send-email-ian.campbell@citrix.com>
	<50C08BD1.50905@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/8] device-tree: get_val cannot cope with
 cells	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 12:13 +0000, David Vrabel wrote:
> On 03/12/12 17:10, Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > v3: early_panic instead of BUG_ON
> > v2: drop unrelated white space fixup
> > ---
> >  xen/common/device_tree.c |    3 +++
> >  1 files changed, 3 insertions(+), 0 deletions(-)
> > 
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index 9eb316f..5a0a1a6 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
> >  {
> >      *val = 0;
> >  
> > +    if ( cells > 2 )
> > +        early_panic("dtb value contains > 2 cells\n");
> > +
> 
> This seems like overkill to me.  get_val() truncates the value down to
> 64 bits which is fine as no valid physical address is more than 64 bits.

If something does end up giving us a >2 cell number (for example a 4
cell thing with an unhelpful endianness) then we will silently end up
doing something which is unlikely to be the right thing.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:04:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb7q-0003JL-9I; Thu, 06 Dec 2012 13:04:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgb7o-0003JD-BH
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:04:24 +0000
Received: from [85.158.139.83:60599] by server-13.bemta-5.messagelabs.com id
	86/06-27809-7D790C05; Thu, 06 Dec 2012 13:04:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354799061!17425801!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10378 invoked from network); 6 Dec 2012 13:04:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:04:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:03:08 +0000
Message-Id: <50C0A59C02000078000AE90C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:03:08 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartF3C26E9C.0__="
Subject: [Xen-devel] [PATCH] x86: mark certain items static
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartF3C26E9C.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

..., and at once constify the data items among here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4123,10 +4123,10 @@ long do_hvm_op(unsigned long op, XEN_GUE
         struct domain *d;
        
         /* Interface types to internal p2m types */
-        p2m_type_t memtype[] = {
-            p2m_ram_rw,        /* HVMMEM_ram_rw  */
-            p2m_ram_ro,        /* HVMMEM_ram_ro  */
-            p2m_mmio_dm        /* HVMMEM_mmio_dm */
+        static const p2m_type_t memtype[] = {
+            [HVMMEM_ram_rw]  = p2m_ram_rw,
+            [HVMMEM_ram_ro]  = p2m_ram_ro,
+            [HVMMEM_mmio_dm] = p2m_mmio_dm
         };
 
         if ( copy_from_guest(&a, arg, 1) )
--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -152,7 +152,7 @@ static int fetch(struct vcpu *v, u8 *buf
 }
 
 int __get_instruction_length_from_list(struct vcpu *v,
-        enum instruction_index *list, unsigned int list_count)
+        const enum instruction_index *list, unsigned int list_count)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     unsigned int i, j, inst_len = 0;
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1931,7 +1931,7 @@ static void svm_wbinvd_intercept(void)
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs)
 {
-    enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD };
+    static const enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD };
     int inst_len;
 
     inst_len = __get_instruction_length_from_list(
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2475,9 +2475,11 @@ void vmx_vmexit_handler(struct cpu_user_
         vmx_update_cpu_exec_control(v);
         break;
     case EXIT_REASON_TASK_SWITCH: {
-        const enum hvm_task_switch_reason reasons[] = {
-            TSW_call_or_int, TSW_iret, TSW_jmp, TSW_call_or_int };
+        static const enum hvm_task_switch_reason reasons[] = {
+            TSW_call_or_int, TSW_iret, TSW_jmp, TSW_call_or_int
+        };
         int32_t ecode = -1, source;
+
         exit_qualification = __vmread(EXIT_QUALIFICATION);
         source = (exit_qualification >> 30) & 3;
         /* Vectored event should fill in interrupt information. */
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -34,7 +34,7 @@
 /* Flags that are needed in a pagetable entry, with the sense of NX inverted */
 static uint32_t mandatory_flags(struct vcpu *v, uint32_t pfec) 
 {
-    static uint32_t flags[] = {
+    static const uint32_t flags[] = {
         /* I/F -  Usr Wr */
         /* 0   0   0   0 */ _PAGE_PRESENT, 
         /* 0   0   0   1 */ _PAGE_PRESENT|_PAGE_RW,
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -179,7 +179,7 @@ static void synchronize_tsc_slave(unsign
     }
 }
 
-void smp_callin(void)
+static void smp_callin(void)
 {
     unsigned int cpu = smp_processor_id();
     int i, rc;
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1151,7 +1151,7 @@ static void local_time_calibration(void)
  * The Linux original version of this function is
  * Copyright (c) 2006, Red Hat, Inc., Ingo Molnar
  */
-void check_tsc_warp(unsigned long tsc_khz, unsigned long *max_warp)
+static void check_tsc_warp(unsigned long tsc_khz, unsigned long *max_warp)
 {
 #define rdtsc_barrier() mb()
     static DEFINE_SPINLOCK(sync_lock);
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -45,7 +45,7 @@ static void _show_registers(
     const struct cpu_user_regs *regs, unsigned long crs[8],
     enum context context, const struct vcpu *v)
 {
-    const static char *context_names[] = {
+    static const char *const context_names[] = {
         [CTXT_hypervisor] = "hypervisor",
         [CTXT_pv_guest]   = "pv guest",
         [CTXT_hvm_guest]  = "hvm guest"
--- a/xen/include/asm-x86/hvm/svm/emulate.h
+++ b/xen/include/asm-x86/hvm/svm/emulate.h
@@ -45,7 +45,7 @@ enum instruction_index {
 struct vcpu;
 
 int __get_instruction_length_from_list(
-    struct vcpu *v, enum instruction_index *list, unsigned int list_count);
+    struct vcpu *, const enum instruction_index *, unsigned int list_count);
 
 static inline int __get_instruction_length(
     struct vcpu *v, enum instruction_index instr)



--=__PartF3C26E9C.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartF3C26E9C.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 13:05:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb8L-0003N6-Sx; Thu, 06 Dec 2012 13:04:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgb8J-0003Mk-Fb
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:04:55 +0000
Received: from [85.158.139.211:24282] by server-5.bemta-5.messagelabs.com id
	2D/EE-11353-6F790C05; Thu, 06 Dec 2012 13:04:54 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354799093!19381102!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26123 invoked from network); 6 Dec 2012 13:04:54 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:04:54 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so448906wib.14
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 05:04:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=DYgCARyBhnMaWTu/yQ0dIGiLICHVCtkJf1Liw+0zTqw=;
	b=wNSUSYsH2JTlyqwsb6BLi0bfBoqjkfWYRgnqBWPMpi6fMnBViMfJ1nDRIBeO0tJGdq
	kZaDiwdMSIRh9g03ptOwHZMHeb0ltZigfWJmOHz5eXeVLSaWupufl5ll4wXrJ166OLnZ
	eSOWNWxN3eeE2DA0k/s51slwDQ350lrYPqIDMRv2TyJVTt/zTx9P1WPrmqFem02GW7Qx
	z/T9wSH+NReekWuT3SIXg6GPPdGhtVgrXwo5+W7mFzaNdAYG9tTGxuIdsZrpyt5zc3Eb
	NL8o4kXWfxtO8HCtjJWgJf11TLpueqxRRDdqBxrHTrS0ypuzz5XDvBXLZz4nPnBsFNNo
	IOog==
Received: by 10.180.88.71 with SMTP id be7mr2525927wib.17.1354799093750;
	Thu, 06 Dec 2012 05:04:53 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id eo10sm10940156wib.9.2012.12.06.05.04.52
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 05:04:53 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 13:04:44 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE6486C.550B4%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] memop: adjust error checking in
	populate_physmap()
Thread-Index: Ac3TskKu3qixRkXh5USKZI257948Dw==
In-Reply-To: <50C0A38702000078000AE892@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] memop: adjust error checking in
 populate_physmap()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:54, "Jan Beulich" <JBeulich@suse.com> wrote:

> Checking that multi-page allocations are permitted is unnecessary for
> PoD population operations. Instead, the (loop invariant) check added
> for addressing XSA-31 can be moved here.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -99,7 +99,8 @@ static void populate_physmap(struct memo
>                                       a->nr_extents-1) )
>          return;
>  
> -    if ( !multipage_allocation_permitted(current->domain, a->extent_order) )
> +    if ( a->memflags & MEMF_populate_on_demand ? a->extent_order > MAX_ORDER :
> +         !multipage_allocation_permitted(current->domain, a->extent_order) )
>          return;
>  
>      for ( i = a->nr_done; i < a->nr_extents; i++ )
> @@ -115,8 +116,7 @@ static void populate_physmap(struct memo
>  
>          if ( a->memflags & MEMF_populate_on_demand )
>          {
> -            if ( a->extent_order > MAX_ORDER ||
> -                 guest_physmap_mark_populate_on_demand(d, gpfn,
> +            if ( guest_physmap_mark_populate_on_demand(d, gpfn,
>                                                         a->extent_order) < 0 )
>                  goto out;
>          }
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:54, "Jan Beulich" <JBeulich@suse.com> wrote:

> Checking that multi-page allocations are permitted is unnecessary for
> PoD population operations. Instead, the (loop invariant) check added
> for addressing XSA-31 can be moved here.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -99,7 +99,8 @@ static void populate_physmap(struct memo
>                                       a->nr_extents-1) )
>          return;
>  
> -    if ( !multipage_allocation_permitted(current->domain, a->extent_order) )
> +    if ( a->memflags & MEMF_populate_on_demand ? a->extent_order > MAX_ORDER :
> +         !multipage_allocation_permitted(current->domain, a->extent_order) )
>          return;
>  
>      for ( i = a->nr_done; i < a->nr_extents; i++ )
> @@ -115,8 +116,7 @@ static void populate_physmap(struct memo
>  
>          if ( a->memflags & MEMF_populate_on_demand )
>          {
> -            if ( a->extent_order > MAX_ORDER ||
> -                 guest_physmap_mark_populate_on_demand(d, gpfn,
> +            if ( guest_physmap_mark_populate_on_demand(d, gpfn,
>                                                         a->extent_order) < 0 )
>                  goto out;
>          }
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:05:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb8h-0003R0-9a; Thu, 06 Dec 2012 13:05:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgb8g-0003Qj-Cc
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:05:18 +0000
Received: from [85.158.139.211:57697] by server-8.bemta-5.messagelabs.com id
	73/53-06050-D0890C05; Thu, 06 Dec 2012 13:05:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1354799093!19381102!2
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28008 invoked from network); 6 Dec 2012 13:05:16 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:05:16 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so448906wib.14
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 05:05:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=YrvynjL8oxsvy08TlKb9CprhiDpX9L/mLWay5AvtrtQ=;
	b=IxpSEejb6M5Afzk6ukzwabMObelxWflOOJBjyxiWfVyXFKuGTkC2dh+JgrY98bmFbe
	lIBTZVo8AqBve3mHSYkQX3VSlxpm2i4ZeS258g1lIC7f9iiCgftFPyF+EQNDRlbpvfVX
	sq0vCwYCYeaCI29px/tsakmK1D99vqLDEEF8VNm63qw3UXaM035fp21rknGn33Qrd57T
	70heVOwLk+TtlMlhtt0BFkVdPSLDAQTobimqoB/oVqgNRsZtJ8heDXBGjlXRo2tc901d
	ZwcF6O49mb8N03dJAWUWv9HVHrtOqRN2FNv1j/4BvWFlZG8CKQw1Xnk1922t6fDiOead
	jVjw==
Received: by 10.180.99.5 with SMTP id em5mr2554350wib.8.1354799116803;
	Thu, 06 Dec 2012 05:05:16 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id b1sm10951151wix.11.2012.12.06.05.05.12
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 05:05:16 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 13:04:59 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE6487B.550B5%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/HVM: remove dead code
Thread-Index: Ac3Tskuf+4xgmAkKEkKdovw2fJ3hCw==
In-Reply-To: <50C0A3C302000078000AE896@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/HVM: remove dead code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:55, "Jan Beulich" <JBeulich@suse.com> wrote:

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2460,9 +2460,7 @@ static enum hvm_copy_result __hvm_copy(
>              return HVMCOPY_unhandleable;
>          }
>          if ( !page )
> -        {
>              return HVMCOPY_bad_gfn_to_mfn;
> -        }
>  
>          p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
>  
> @@ -2560,11 +2558,7 @@ static enum hvm_copy_result __hvm_clear(
>              return HVMCOPY_unhandleable;
>          }
>          if ( !page )
> -        {
> -            if ( page )
> -                put_page(page);
>              return HVMCOPY_bad_gfn_to_mfn;
> -        }
>  
>          p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
>  
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:05:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:05:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb98-0003Xk-NA; Thu, 06 Dec 2012 13:05:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgb96-0003XC-Rw
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:05:45 +0000
Received: from [85.158.138.51:61840] by server-5.bemta-3.messagelabs.com id
	44/B1-26311-82890C05; Thu, 06 Dec 2012 13:05:44 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1354799142!18838453!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30899 invoked from network); 6 Dec 2012 13:05:42 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:05:42 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so2670907wgb.32
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 05:05:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=mdWV/Y8ZWfwJChM7+P0nOizynuOb0KG1rvpSXDwCrL4=;
	b=Dv64aILG01cnQZaVqiZgXFlAPyb+WK1XnRBAGFZgg0h+6bfBltz7UrUI6BFMQcnQYl
	GXjworExbjzAieFzTnyarmp8UszuberuJJ/ABLYwMovaA5f0gw+YIpHqZmvjWwRj1NX2
	/CfLtXcdFTh00eTDuRPYO3IstWEDcA4ydiaK4sn4jGNDd3rdecpTlmkFk3tLrzCivFX6
	WlNOETmY8uN7VdEIrCVMkQg6OwS6fXP0NjFeesrj5hzJDKOK/lpYLtR7IgQxJ7u1zvkw
	VMY0ysQIiKKhtZaMPDPKr7yFPfb5aEwI1Xw9lF2akOeBljhYaEzOVhq+/9WlTXoHZJp1
	WbAg==
Received: by 10.180.87.225 with SMTP id bb1mr2526504wib.20.1354799142163;
	Thu, 06 Dec 2012 05:05:42 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id bd7sm10941857wib.8.2012.12.06.05.05.34
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 05:05:39 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 13:05:24 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE64894.550B6%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] tighten guest memory accesses
Thread-Index: Ac3TslqGP4kXYI6sOU237qn8exWewg==
In-Reply-To: <50C0A26A02000078000AE886@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] tighten guest memory accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:49, "Jan Beulich" <JBeulich@suse.com> wrote:

> Failure should always be detected and handled.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/compat/grant_table.c
> +++ b/xen/common/compat/grant_table.c
> @@ -173,7 +173,9 @@ int compat_grant_table_op(unsigned int c
>                          for ( i = 0; i < (_s_)->nr_frames; ++i ) \
>                          { \
>                              unsigned int frame = (_s_)->frame_list.p[i]; \
> -                            (void)__copy_to_compat_offset((_d_)->frame_list, i, &frame, 1); \
> +                            if ( __copy_to_compat_offset((_d_)->frame_list, \
> +                                                         i, &frame, 1) ) \
> +                                (_s_)->status = GNTST_bad_virt_addr; \
>                          } \
>                      } \
>                  } while (0)
> @@ -310,7 +312,9 @@ int compat_grant_table_op(unsigned int c
>                          for ( i = 0; i < (_s_)->nr_frames; ++i ) \
>                          { \
>                              uint64_t frame = (_s_)->frame_list.p[i]; \
> -                            (void)__copy_to_compat_offset((_d_)->frame_list, i, &frame, 1); \
> +                            if ( __copy_to_compat_offset((_d_)->frame_list, \
> +                                                         i, &frame, 1) ) \
> +                                (_s_)->status = GNTST_bad_virt_addr; \
>                          } \
>                      } \
>                  } while (0)
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -283,18 +283,25 @@ int compat_memory_op(unsigned int cmd, X
>                  compat_pfn_t pfn = nat.xchg->out.extent_start.p[start_extent];
>  
>                  BUG_ON(pfn != nat.xchg->out.extent_start.p[start_extent]);
> -                /* Note that we ignore errors accessing the output extent list. */
> -                __copy_to_compat_offset(cmp.xchg.out.extent_start, start_extent, &pfn, 1);
> +                if ( __copy_to_compat_offset(cmp.xchg.out.extent_start,
> +                                             start_extent, &pfn, 1) )
> +                {
> +                    rc = -EFAULT;
> +                    break;
> +                }
>              }
>  
>              cmp.xchg.nr_exchanged = nat.xchg->nr_exchanged;
>              if ( copy_field_to_guest(guest_handle_cast(compat, compat_memory_exchange_t),
>                                       &cmp.xchg, nr_exchanged) )
> +                rc = -EFAULT;
> +
> +            if ( rc < 0 )
>              {
>                  if ( split < 0 )
>                      /* Cannot cancel the continuation... */
>                      domain_crash(current->domain);
> -                return -EFAULT;
> +                return rc;
>              }
>              break;
>          }
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1347,6 +1347,9 @@ gnttab_setup_table(
>          goto out1;
>      }
>  
> +    if ( !guest_handle_okay(op.frame_list, op.nr_frames) )
> +        return -EFAULT;
> +
>      d = gt_lock_target_domain_by_id(op.dom);
>      if ( IS_ERR(d) )
>      {
> @@ -1384,7 +1387,8 @@ gnttab_setup_table(
>          gmfn = gnttab_shared_gmfn(d, gt, i);
>          /* Grant tables cannot be shared */
>          BUG_ON(SHARED_M2P(gmfn));
> -        (void)copy_to_guest_offset(op.frame_list, i, &gmfn, 1);
> +        if ( __copy_to_guest_offset(op.frame_list, i, &gmfn, 1) )
> +            op.status = GNTST_bad_virt_addr;
>      }
>  
>   out3:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -445,8 +445,7 @@ static long memory_exchange(XEN_GUEST_HA
>          }
>  
>          /* Assign each output page to the domain. */
> -        j = 0;
> -        while ( (page = page_list_remove_head(&out_chunk_list)) )
> +        for ( j = 0; (page = page_list_remove_head(&out_chunk_list)); ++j )
>          {
>              if ( assign_pages(d, page, exch.out.extent_order,
>                                MEMF_no_refcount) )
> @@ -477,9 +476,12 @@ static long memory_exchange(XEN_GUEST_HA
>                  goto dying;
>              }
>  
> -            /* Note that we ignore errors accessing the output extent list. */
> -            (void)__copy_from_guest_offset(
> -                &gpfn, exch.out.extent_start, (i<<out_chunk_order)+j, 1);
> +            if ( __copy_from_guest_offset(&gpfn, exch.out.extent_start,
> +                                          (i << out_chunk_order) + j, 1) )
> +            {
> +                rc = -EFAULT;
> +                continue;
> +            }
>  
>              mfn = page_to_mfn(page);
>              guest_physmap_add_page(d, gpfn, mfn, exch.out.extent_order);
> @@ -488,10 +490,11 @@ static long memory_exchange(XEN_GUEST_HA
>              {
>                  for ( k = 0; k < (1UL << exch.out.extent_order); k++ )
>                      set_gpfn_from_mfn(mfn + k, gpfn + k);
> -                (void)__copy_to_guest_offset(
> -                    exch.out.extent_start, (i<<out_chunk_order)+j, &mfn, 1);
> +                if ( __copy_to_guest_offset(exch.out.extent_start,
> +                                            (i << out_chunk_order) + j,
> +                                            &mfn, 1) )
> +                    rc = -EFAULT;
>              }
> -            j++;
>          }
>          BUG_ON( !(d->is_dying) && (j != (1UL << out_chunk_order)) );
>      }
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:05:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgb9B-0003YQ-46; Thu, 06 Dec 2012 13:05:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgb99-0003Y2-NL
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:05:47 +0000
Received: from [85.158.143.35:37221] by server-3.bemta-4.messagelabs.com id
	5C/DA-18211-B2890C05; Thu, 06 Dec 2012 13:05:47 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354799145!11861854!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12187 invoked from network); 6 Dec 2012 13:05:46 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:05:46 -0000
Received: by mail-wi0-f169.google.com with SMTP id hq12so455645wib.2
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 05:05:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=YHIbf5hppJL1lJbG7HnkYetezTQXKj4AO6oxqAt5YmI=;
	b=rCpzvC0RZkOb0zEkiGRw94JWI1gWp0XXY2YMrpSjEI3EGP60oaU1I+F11yvzEOuOUL
	/353m+mukLjmBIgmZKqY50/jQDqPf6tmbxmYNuiNqjRlAFBlWWLxoYAKfyD0gK+ZVRhr
	qn99L5R0FoNdu5wxEDx3KIhsiUkICJ4WWnTf1AcrOn3oSGUu2z/J6ZLfSBFl1SQOvDNE
	5pLD1IiUvjnCojiavsNkUSASxk4aICF57Y/Iwdq3MFiwbawz+S9v15ujSUU55JpC/67b
	A0DuP4GETI30+7RGnGIODjAk+BTY+otP0oJRZAa6qkEO48tCmIoe0XU/t+IJND3MW40/
	OMFA==
Received: by 10.216.85.194 with SMTP id u44mr626341wee.133.1354799145117;
	Thu, 06 Dec 2012 05:05:45 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id bd7sm10941857wib.8.2012.12.06.05.05.42
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 05:05:43 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 13:05:37 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE648A1.550B7%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] gnttab_usage_print() should be static
Thread-Index: Ac3TsmJFDGC+nWrYGke36ZAYOXgRKA==
In-Reply-To: <50C0A18F02000078000AE86A@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] gnttab_usage_print() should be static
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:45, "Jan Beulich" <JBeulich@suse.com> wrote:

> ... as not being used or declared anywhere else.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -2802,7 +2802,7 @@ grant_table_destroy(
>      d->grant_table = NULL;
>  }
>  
> -void gnttab_usage_print(struct domain *rd)
> +static void gnttab_usage_print(struct domain *rd)
>  {
>      int first = 1;
>      grant_ref_t ref;
> 
> 
> 
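The change above is a single keyword, but its effect is on linkage: `static` at file scope gives the function internal linkage, so the symbol is no longer exported and no external declaration is needed. A minimal standalone illustration (the function name below is made up, nothing from grant_table.c):

```c
#include <assert.h>

/* 'static' gives this function internal linkage: it is visible only
 * within this translation unit, the linker does not export the symbol,
 * and other files cannot (and need not) declare it. This is the same
 * property the patch gives gnttab_usage_print(). */
static int clamp_refs(int refs)
{
    return refs > 0 ? refs : 0;
}
```

Marking unshared functions `static` also lets the compiler warn when they become entirely unused, which is how dead code like this gets noticed.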
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:07:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbAT-0003sc-E7; Thu, 06 Dec 2012 13:07:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgbAR-0003s1-Th
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:07:08 +0000
Received: from [85.158.139.83:27808] by server-14.bemta-5.messagelabs.com id
	7B/0B-21768-B7890C05; Thu, 06 Dec 2012 13:07:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354799148!25974666!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18563 invoked from network); 6 Dec 2012 13:05:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:05:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:05:48 +0000
Message-Id: <50C0A63C02000078000AE910@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:05:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part5C6DC13C.0__="
Subject: [Xen-devel] [PATCH] x86/EFI: add code interfacing with the secure
 boot shim
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part5C6DC13C.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... to validate the kernel image (which is required to be in PE
format, as is e.g. the case for the Linux bzImage when built with
CONFIG_EFI_STUB).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -24,6 +24,18 @@
 #include <asm/msr.h>
 #include <asm/processor.h>
=20
+#define SHIM_LOCK_PROTOCOL_GUID \
+  { 0x605dab50, 0xe046, 0x4300, {0xab, 0xb6, 0x3d, 0xd8, 0x10, 0xdd, =
0x8b, 0x23} }
+
+typedef EFI_STATUS
+(/* _not_ EFIAPI */ *EFI_SHIM_LOCK_VERIFY) (
+    IN VOID *Buffer,
+    IN UINT32 Size);
+
+typedef struct {
+    EFI_SHIM_LOCK_VERIFY Verify;
+} EFI_SHIM_LOCK_PROTOCOL;
+
 extern char start[];
 extern u32 cpuid_ext_features;
=20
@@ -640,12 +652,14 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY
     static EFI_GUID __initdata gop_guid =3D EFI_GRAPHICS_OUTPUT_PROTOCOL_G=
UID;
     static EFI_GUID __initdata bio_guid =3D BLOCK_IO_PROTOCOL;
     static EFI_GUID __initdata devp_guid =3D DEVICE_PATH_PROTOCOL;
+    static EFI_GUID __initdata shim_lock_guid =3D SHIM_LOCK_PROTOCOL_GUID;=

     EFI_LOADED_IMAGE *loaded_image;
     EFI_STATUS status;
     unsigned int i, argc;
     CHAR16 **argv, *file_name, *cfg_file_name =3D NULL;
     UINTN cols, rows, depth, size, map_key, info_size, gop_mode =3D ~0;
     EFI_HANDLE *handles =3D NULL;
+    EFI_SHIM_LOCK_PROTOCOL *shim_lock;
     EFI_GRAPHICS_OUTPUT_PROTOCOL *gop =3D NULL;
     EFI_GRAPHICS_OUTPUT_MODE_INFORMATION *mode_info;
     EFI_FILE_HANDLE dir_handle;
@@ -835,6 +849,11 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY
     read_file(dir_handle, s2w(&name), &kernel);
     efi_bs->FreePool(name.w);
=20
+    if ( !EFI_ERROR(efi_bs->LocateProtocol(&shim_lock_guid, NULL,
+                    (void **)&shim_lock)) &&
+         shim_lock->Verify(kernel.ptr, kernel.size) !=3D EFI_SUCCESS )
+        blexit(L"Dom0 kernel image could not be verified\r\n");
+
     name.s =3D get_value(&cfg, section.s, "ramdisk");
     if ( name.s )
     {
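The locate-then-verify flow added by this hunk can be sketched with mocked EFI types. Everything below (`locate_shim()`, `kernel_ok()`, the status values, and the simplified `EFI_ERROR()` that ignores the real spec's high-bit convention) is an illustrative stand-in for `efi_bs->LocateProtocol()` and `shim_lock->Verify()`, not real EFI headers:

```c
#include <assert.h>
#include <stddef.h>

/* Mocked EFI status handling; real EFI_ERROR() tests the high bit. */
typedef unsigned long EFI_STATUS;
#define EFI_SUCCESS       0UL
#define EFI_NOT_FOUND     14UL
#define EFI_ACCESS_DENIED 15UL
#define EFI_ERROR(s)      ((s) != EFI_SUCCESS)

typedef EFI_STATUS (*EFI_SHIM_LOCK_VERIFY)(void *Buffer, unsigned int Size);

typedef struct {
    EFI_SHIM_LOCK_VERIFY Verify;
} EFI_SHIM_LOCK_PROTOCOL;

/* Stand-in for efi_bs->LocateProtocol(): hands back the protocol only
 * when a shim is "installed" (simulated by a non-NULL pointer). */
static EFI_STATUS locate_shim(EFI_SHIM_LOCK_PROTOCOL *installed,
                              EFI_SHIM_LOCK_PROTOCOL **out)
{
    if (installed == NULL)
        return EFI_NOT_FOUND;
    *out = installed;
    return EFI_SUCCESS;
}

/* The pattern from the patch: verification is enforced only when the
 * shim lock protocol is present; absence of the shim is not an error,
 * a failed Verify() is fatal (boot.c calls blexit()). */
static int kernel_ok(EFI_SHIM_LOCK_PROTOCOL *installed,
                     void *image, unsigned int size)
{
    EFI_SHIM_LOCK_PROTOCOL *shim_lock;

    if (!EFI_ERROR(locate_shim(installed, &shim_lock)) &&
        shim_lock->Verify(image, size) != EFI_SUCCESS)
        return 0;
    return 1;
}

static EFI_STATUS reject_all(void *Buffer, unsigned int Size)
{
    (void)Buffer; (void)Size;
    return EFI_ACCESS_DENIED;
}

static EFI_STATUS accept_all(void *Buffer, unsigned int Size)
{
    (void)Buffer; (void)Size;
    return EFI_SUCCESS;
}
```

Note the design choice the patch makes: when no shim is found, boot proceeds unverified, which is the right behavior on systems booting without secure boot.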




--=__Part5C6DC13C.0__=
Content-Type: text/plain; name="x86-EFI-secure-shim.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-EFI-secure-shim.patch"

x86/EFI: add code interfacing with the secure boot shim=0A=0A... to =
validate the kernel image (which is required to be in PE=0Aformat, as is =
e.g. the case for the Linux bzImage when built with=0ACONFIG_EFI_STUB).=0A=
=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/e=
fi/boot.c=0A+++ b/xen/arch/x86/efi/boot.c=0A@@ -24,6 +24,18 @@=0A #include =
<asm/msr.h>=0A #include <asm/processor.h>=0A =0A+#define SHIM_LOCK_PROTOCOL=
_GUID \=0A+  { 0x605dab50, 0xe046, 0x4300, {0xab, 0xb6, 0x3d, 0xd8, 0x10, =
0xdd, 0x8b, 0x23} }=0A+=0A+typedef EFI_STATUS=0A+(/* _not_ EFIAPI */ =
*EFI_SHIM_LOCK_VERIFY) (=0A+    IN VOID *Buffer,=0A+    IN UINT32 =
Size);=0A+=0A+typedef struct {=0A+    EFI_SHIM_LOCK_VERIFY Verify;=0A+} =
EFI_SHIM_LOCK_PROTOCOL;=0A+=0A extern char start[];=0A extern u32 =
cpuid_ext_features;=0A =0A@@ -640,12 +652,14 @@ efi_start(EFI_HANDLE =
ImageHandle, EFI_SY=0A     static EFI_GUID __initdata gop_guid =3D =
EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID;=0A     static EFI_GUID __initdata =
bio_guid =3D BLOCK_IO_PROTOCOL;=0A     static EFI_GUID __initdata =
devp_guid =3D DEVICE_PATH_PROTOCOL;=0A+    static EFI_GUID __initdata =
shim_lock_guid =3D SHIM_LOCK_PROTOCOL_GUID;=0A     EFI_LOADED_IMAGE =
*loaded_image;=0A     EFI_STATUS status;=0A     unsigned int i, argc;=0A   =
  CHAR16 **argv, *file_name, *cfg_file_name =3D NULL;=0A     UINTN cols, =
rows, depth, size, map_key, info_size, gop_mode =3D ~0;=0A     EFI_HANDLE =
*handles =3D NULL;=0A+    EFI_SHIM_LOCK_PROTOCOL *shim_lock;=0A     =
EFI_GRAPHICS_OUTPUT_PROTOCOL *gop =3D NULL;=0A     EFI_GRAPHICS_OUTPUT_MODE=
_INFORMATION *mode_info;=0A     EFI_FILE_HANDLE dir_handle;=0A@@ -835,6 =
+849,11 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY=0A     read_file(dir_ha=
ndle, s2w(&name), &kernel);=0A     efi_bs->FreePool(name.w);=0A =0A+    if =
( !EFI_ERROR(efi_bs->LocateProtocol(&shim_lock_guid, NULL,=0A+             =
       (void **)&shim_lock)) &&=0A+         shim_lock->Verify(kernel.ptr, =
kernel.size) !=3D EFI_SUCCESS )=0A+        blexit(L"Dom0 kernel image =
could not be verified\r\n");=0A+=0A     name.s =3D get_value(&cfg, =
section.s, "ramdisk");=0A     if ( name.s )=0A     {=0A
--=__Part5C6DC13C.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part5C6DC13C.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 13:09:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbCD-0004CP-VC; Thu, 06 Dec 2012 13:08:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgbCC-0004C9-IA
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:08:56 +0000
Received: from [85.158.143.99:25453] by server-1.bemta-4.messagelabs.com id
	2A/3B-28401-7E890C05; Thu, 06 Dec 2012 13:08:55 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354799333!23015149!1
X-Originating-IP: [216.32.180.184]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10158 invoked from network); 6 Dec 2012 13:08:55 -0000
Received: from co1ehsobe001.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.184)
	by server-2.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	6 Dec 2012 13:08:55 -0000
Received: from mail79-co1-R.bigfish.com (10.243.78.234) by
	CO1EHSOBE007.bigfish.com (10.243.66.70) with Microsoft SMTP Server id
	14.1.225.23; Thu, 6 Dec 2012 13:08:39 +0000
Received: from mail79-co1 (localhost [127.0.0.1])	by mail79-co1-R.bigfish.com
	(Postfix) with ESMTP id E5813B400D9;
	Thu,  6 Dec 2012 13:08:38 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(z551bizbb2dI98dI9371I936eI1432Izz1de0h1202h1d1ah1d2ahzz8275bhz2dh668h839h944hd25hf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h14ddh1504h1537h153bh162dh1631h1155h)
Received: from mail79-co1 (localhost.localdomain [127.0.0.1]) by mail79-co1
	(MessageSwitch) id 1354799316167054_26926;
	Thu,  6 Dec 2012 13:08:36 +0000 (UTC)
Received: from CO1EHSMHS031.bigfish.com (unknown [10.243.78.251])	by
	mail79-co1.bigfish.com (Postfix) with ESMTP id 26A8A38004D;
	Thu,  6 Dec 2012 13:08:36 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS031.bigfish.com (10.243.66.41) with Microsoft SMTP Server id
	14.1.225.23; Thu, 6 Dec 2012 13:08:35 +0000
X-WSS-ID: 0MEM1U7-01-G45-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 20EB71028094;	Thu,  6 Dec 2012 07:08:31 -0600 (CST)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 6 Dec 2012 07:08:34 -0600
Received: from linux-hezd.amd.com (163.181.55.254) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.2.318.4;
	Thu, 6 Dec 2012 07:08:31 -0600
Date: Thu, 6 Dec 2012 08:08:29 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121206130829.GA1206@linux-hezd.amd.com>
References: <1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
	<50BF841C.6010906@amd.com>
	<1354730007.17165.31.camel@zakaz.uk.xensource.com>
	<1354788485.17165.65.camel@zakaz.uk.xensource.com>
	<50C08BD502000078000AE7BD@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C08BD502000078000AE7BD@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 11:13:09AM +0000, Jan Beulich wrote:
> >>> On 06.12.12 at 11:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > (Putting debian-kernel to bcc, since I don't imagine they are interested
> > in the details of this discussion, I'll reraise the result with the
> > Debian Xen maintainer when we have one)
> > 
> > On Wed, 2012-12-05 at 17:53 +0000, Ian Campbell wrote:
> >> On Wed, 2012-12-05 at 17:27 +0000, Boris Ostrovsky wrote:
> >> > On 12/05/2012 12:02 PM, Jan Beulich wrote:
> >> > > But all of this shouldn't lead to equivalent ID mismatches, should
> >> > > it? It ought to simply find nothing to update...
> >> > 
> >> > 
> >> > The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
> >> > contain more than one patch. The driver goes over this file patch by 
> >> > patch and tries to see whether to apply it.
> >> > 
> >> > I think what happened in Ian's case was that the patch file contained 
> >> > two patches --- one for this processor (ID 6012) and another for a 
> >> > different processor (ID 6101). (Both are family 15h but different revs).
> >> > 
> >> > The driver applied the first patch on core 0. Then, on core 1, the code 
> >> > tried the first patch (at file offset 60) and noticed that it is already 
> >> > applied. So it continued to the next patch (at offset 2660) which is not 
> >> > meant for this processor, thus generating the "does not match" message.
> > 
> > OOI what would have happened if the two patches were in the opposite
> > order? Would CPU0 have seen the ID 6101 patch first and aborted?
> 
> That would work well.
> 
> The problem is that cpu_request_microcode() returns the result
> of its last call to microcode_fits(), no matter whether a prior one
> already returned success (>= 0).
> 
> Something like
> 
>                                                    &offset)) == 0 )
>      {
>          error = microcode_fits(mc_amd, cpu);
> -        if (error <= 0)
> +        if (error < 0)
> +            error = 0;
> +        if (error == 0)
>              continue;
>  
>          error = apply_microcode(cpu);
> 
> would apparently be needed. Or we could of course make
> microcode_fits() return a bool_t in the first place.
> 
> But then again it would probably be nice to indeed return
> failure from cpu_request_microcode() if _no_ suitable
> microcode was found at all. Question is whether one blob
> can contain more than one update for a given equivalent ID.
> If not, bailing from the loop even if microcode_fits() returns
> zero might be the right solution (and presumably the latter
> then shouldn't return zero when no equivalent ID was
> found).

I would argue that cpu_request_microcode() should not return an error if no
suitable microcode is available. In most cases the BIOS will already have the
right version of the microcode, so the driver not loading anything is really
the normal scenario. One could even say that being able to load the microcode
in the driver indicates a stale BIOS, and so *that* is the error. I am not
suggesting returning an error on that, but perhaps raising the log level from
KERN_INFO to KERN_WARNING.

As for whether a container can have more than one update --- typically no, but I would like to be able to support this.

> 
> But no matter what solution we pick, we need to review this
> carefully in the context of the earlier regressions we had in
> this area.


I will probably have a patch for review either later today or tomorrow --- I need to test more patch file configurations.

-boris
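Jan's suggestion upthread — don't let the loop's last microcode_fits()-style result clobber an earlier success — can be sketched as a loop that latches success. The helper below is hypothetical (the `fits[]` array stands in for per-patch `microcode_fits()` results), not the actual cpu_request_microcode():

```c
#include <assert.h>

/* Hypothetical sketch of the fix being discussed: remember whether ANY
 * patch in the container matched, instead of returning whatever the
 * last suitability check happened to say.
 * fits[] stands in for per-patch microcode_fits() results:
 *   1 = fits and would apply, 0 = not for this CPU, -1 = mismatch. */
static int request_microcode(const int *fits, int npatches)
{
    int applied = 0;

    for (int i = 0; i < npatches; ++i) {
        if (fits[i] <= 0)
            continue;          /* not suitable; keep scanning the container */
        applied = 1;           /* latch success across later iterations */
    }
    /* Per the discussion, "nothing suitable" is arguably not an error
     * (the BIOS microcode may already be current), so report whether
     * anything applied and let the caller decide how loudly to log. */
    return applied;
}
```

With the original structure, a container ordered (6101 patch, 6012 patch) would work while the reverse ordering reported failure; latching removes the order dependence.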



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Received: from CO1EHSMHS031.bigfish.com (unknown [10.243.78.251])	by
	mail79-co1.bigfish.com (Postfix) with ESMTP id 26A8A38004D;
	Thu,  6 Dec 2012 13:08:36 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS031.bigfish.com (10.243.66.41) with Microsoft SMTP Server id
	14.1.225.23; Thu, 6 Dec 2012 13:08:35 +0000
X-WSS-ID: 0MEM1U7-01-G45-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 20EB71028094;	Thu,  6 Dec 2012 07:08:31 -0600 (CST)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 6 Dec 2012 07:08:34 -0600
Received: from linux-hezd.amd.com (163.181.55.254) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.2.318.4;
	Thu, 6 Dec 2012 07:08:31 -0600
Date: Thu, 6 Dec 2012 08:08:29 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121206130829.GA1206@linux-hezd.amd.com>
References: <1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
	<50BF841C.6010906@amd.com>
	<1354730007.17165.31.camel@zakaz.uk.xensource.com>
	<1354788485.17165.65.camel@zakaz.uk.xensource.com>
	<50C08BD502000078000AE7BD@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C08BD502000078000AE7BD@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 11:13:09AM +0000, Jan Beulich wrote:
> >>> On 06.12.12 at 11:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > (Putting debian-kernel to bcc, since I don't imagine they are interested
> > in the details of this discussion, I'll reraise the result with the
> > Debian Xen maintainer when we have one)
> > 
> > On Wed, 2012-12-05 at 17:53 +0000, Ian Campbell wrote:
> >> On Wed, 2012-12-05 at 17:27 +0000, Boris Ostrovsky wrote:
> >> > On 12/05/2012 12:02 PM, Jan Beulich wrote:
> >> > > But all of this shouldn't lead to equivalent ID mismatches, should
> >> > > it? It ought to simply find nothing to update...
> >> > 
> >> > 
> >> > The patch file (/lib/firmware/amd-ucode/microcode_amd_fam15h.bin) may 
> >> > contain more than one patch. The driver goes over this file patch by 
> >> > patch and tries to see whether to apply it.
> >> > 
> >> > I think what happened in Ian's case was that the patch file contained 
> >> > two patches --- one for this processor (ID 6012) and another for a 
> >> > different processor (ID 6101). (Both are family 15h but different revs).
> >> > 
> >> > The driver applied the first patch on core 0. Then, on core 1, the code 
> >> > tried the first patch (at file offset 60) and noticed that it is already 
> >> > applied. So it continued to the next patch (at offset 2660) which is not 
> >> > meant for this processor, thus generating the "does not match" message.
> > 
> > OOI what would have happened if the two patches were in the opposite
> > order? Would CPU0 have seen the ID 6101 patch first and aborted?
> 
> That would work well.
> 
> The problem is that cpu_request_microcode() returns the result
> of its last call to microcode_fits(), no matter whether a prior one
> already returned success (>= 0).
> 
> Something like
> 
>                                                    &offset)) == 0 )
>      {
>          error = microcode_fits(mc_amd, cpu);
> -        if (error <= 0)
> +        if (error < 0)
> +            error = 0;
> +        if (error == 0)
>              continue;
>  
>          error = apply_microcode(cpu);
> 
> would apparently be needed. Or we could of course make
> microcode_fits() return a bool_t in the first place.
> 
> But then again it would probably be nice to indeed return
> failure from cpu_request_microcode() if _no_ suitable
> microcode was found at all. Question is whether one blob
> can contain more than one update for a given equivalent ID.
> If not, bailing from the loop even if microcode_fits() returns
> zero might be the right solution (and presumably the latter
> then shouldn't return zero when no equivalent ID was
> found).

I would argue that cpu_request_microcode() should not return an error if no suitable microcode is available. In most cases the BIOS will already have loaded the right version of the microcode, so the driver not loading anything is really a normal scenario. One could even say that being able to load microcode from the driver indicates a stale BIOS, and so *that* is the error. I am not suggesting we return an error in that case, but perhaps raise the log level from KERN_INFO to KERN_WARNING.

As for whether a container can have more than one update --- typically no, but I would like to be able to support this.
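For concreteness, Jan's option of tracking whether any update in the container was suitable could be modelled stand-alone like this. All names here are hypothetical stubs, not the real Xen functions, and whether the no-fit case should really be reported as an error is exactly the open question above:

```c
#include <assert.h>

/* Stand-alone model of the loop fix under discussion: scan every patch in
 * the container, remember whether any suitable update was applied, and only
 * report failure if nothing fit at all.  All names and return conventions
 * are illustrative stand-ins for the real Xen code. */

#define NR_PATCHES 3

/* 1 = patch fits this CPU, 0 = does not fit (not an error) */
static int microcode_fits_stub(int patch, int cpu)
{
    return patch == 1 && cpu == 0;   /* pretend only patch 1 matches CPU 0 */
}

static int apply_microcode_stub(int cpu)
{
    (void)cpu;
    return 0;                        /* pretend the update always applies */
}

static int cpu_request_microcode_stub(int cpu)
{
    int error = 0, applied = 0;

    for ( int patch = 0; patch < NR_PATCHES; patch++ )
    {
        if ( microcode_fits_stub(patch, cpu) <= 0 )
            continue;                /* not suitable: keep scanning */
        error = apply_microcode_stub(cpu);
        if ( error == 0 )
            applied = 1;
    }

    /* Fail only if no patch in the whole container was suitable. */
    if ( !applied && error == 0 )
        error = -1;
    return error;
}
```

With this shape, a later non-matching patch can no longer overwrite the success of an earlier one, since only the "applied" flag decides the final result.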

> 
> But no matter what solution we pick, we need to review this
> carefully in the context of the earlier regressions we had in
> this area.


I will probably have a patch for review either later today or tomorrow --- I need to test more patch file configurations.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:10:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbDY-0004S6-N4; Thu, 06 Dec 2012 13:10:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbDX-0004Rt-TY
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:20 +0000
Received: from [85.158.139.83:2559] by server-12.bemta-5.messagelabs.com id
	23/60-02886-B3990C05; Thu, 06 Dec 2012 13:10:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1354799405!24383671!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31049 invoked from network); 6 Dec 2012 13:10:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16199781"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 13:10:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Thu, 6 Dec 2012
	13:10:01 +0000
Message-ID: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:09:59 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH 00/09 V4] arm: support for initial modules (e.g.
 dom0) and DTB supplied in RAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(I appear to have labelled V3 as V2 when I sent it out, so this is V4)

The following series implements support for initial images and the DTB in
RAM, as opposed to in flash (dom0 kernel) or compiled into the
hypervisor (DTB). It arranges not to clobber these with either the
hypervisor text on relocation or the heaps, and frees them as appropriate.

Most of this is independent of the specific bootloader protocol which is
used to tell Xen where these modules actually are, but I have included a
simple PoC bootloader protocol based around device tree which is similar
to the protocol used by Linux to find its initrd
(where /chosen/linux,initrd-{start,end} indicate the physical address).

The PoC protocol is documented in docs/misc/arm/device-tree/booting.txt
which is added by this series.

The major change this time is to use /chosen/modules/module@<N> rather
than /chosen/module@<N>, plus using the compatible property to
differentiate which node is which (so <N> is now arbitrary).
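A purely illustrative sketch of the /chosen/modules/module@<N> layout described above might look like the fragment below. The node names, compatible strings, and reg values here are assumptions for illustration; the authoritative format is docs/misc/arm/device-tree/booting.txt added by this series.

```dts
/chosen {
	modules {
		module@0 {
			/* hypothetical: a dom0 kernel image placed in RAM */
			compatible = "xen,linux-zimage", "xen,multiboot-module";
			reg = <0x80008000 0x00400000>; /* start, size */
		};
		module@1 {
			/* hypothetical: a dom0 initrd; <N> is arbitrary, the
			 * compatible string identifies the module's role */
			compatible = "xen,linux-initrd", "xen,multiboot-module";
			reg = <0x81000000 0x00200000>;
		};
	};
};
```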

As ever the bootloader half is at:
http://xenbits.xen.org/gitweb/?p=people/ianc/boot-wrapper.git;a=summary

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbE8-0004aa-55; Thu, 06 Dec 2012 13:10:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbE7-0004Zg-2Q
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:55 +0000
Received: from [85.158.139.83:5654] by server-6.bemta-5.messagelabs.com id
	DC/65-19321-E5990C05; Thu, 06 Dec 2012 13:10:54 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354799449!21437966!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3Mjk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13083 invoked from network); 6 Dec 2012 13:10:50 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 13:10:50 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 06 Dec 2012 05:10:49 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="227474268"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 06 Dec 2012 05:10:47 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:01 +0800
Message-Id: <1354798871-5632-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 01/11] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For nested VMX MSR-bitmap support, the L0 hypervisor traps every MSR-access VM
exit from the L2 guest by disabling the MSR_BITMAP feature. When handling such
a VM exit, L0 checks whether the L1 hypervisor has the MSR_BITMAP feature
enabled and whether the corresponding bit in L1's bitmap is set to 1. If so,
L0 injects the VM exit into the L1 hypervisor; otherwise, L0 handles the VM
exit itself.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame (nvmx->msrbitmap); 
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1
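The quadrant layout consulted by vmx_check_msr_bitmap() in the patch above can be mirrored in a stand-alone sketch: the 4KiB bitmap holds read-low (offset 0x000, MSRs 0-0x1fff), read-high (0x400, MSRs 0xc0000000-0xc0001fff), write-low (0x800) and write-high (0xc00) regions. The test_bit stub and helper below are illustrative simplifications, not the Xen primitives:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BYTES_PER_LONG (sizeof(long))

/* Minimal stand-in for Xen's test_bit() arch primitive. */
static int test_bit(unsigned int nr, const unsigned long *addr)
{
    unsigned int bits = BYTES_PER_LONG * 8;
    return (addr[nr / bits] >> (nr % bits)) & 1;
}

/* Mirrors the quadrant lookup in the patch: a set bit means the MSR
 * access should cause a VM exit (here: be injected into L1). */
static int check_msr_bitmap(unsigned long *msr_bitmap, uint32_t msr, int write)
{
    if ( !msr_bitmap )
        return 1;                          /* no bitmap: intercept everything */

    if ( msr <= 0x1fff )
        return test_bit(msr,
                        msr_bitmap + (write ? 0x800 : 0x000) / BYTES_PER_LONG);

    if ( msr >= 0xc0000000 && msr <= 0xc0001fff )
    {
        msr &= 0x1fff;
        return test_bit(msr,
                        msr_bitmap + (write ? 0xc00 : 0x400) / BYTES_PER_LONG);
    }

    return 1;                              /* out-of-range MSRs always exit */
}
```

This makes the read/write asymmetry easy to exercise: setting only the read-low bit for an MSR intercepts reads of it while letting writes through.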


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
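[Archive note: the 4KB MSR bitmap page consulted by vmx_check_msr_bitmap() above is laid out as four 2KB regions: read-low at 0x000, read-high at 0x400, write-low at 0x800 and write-high at 0xc00, covering MSRs 0x0-0x1fff and 0xc0000000-0xc0001fff respectively. A minimal standalone sketch of the same lookup — the helper name and byte-wise indexing are illustrative, not the hypervisor's actual code:]

```c
#include <stdint.h>

/* Return nonzero if the given MSR access is intercepted, per the VMX
 * MSR-bitmap layout: four 2KB regions packed into one 4KB page. */
static int msr_bitmap_test(const uint8_t *bitmap, uint32_t msr, int is_write)
{
    unsigned int base;

    if ( msr <= 0x1fff )
        base = is_write ? 0x800 : 0x000;   /* low MSR range */
    else if ( msr >= 0xc0000000 && msr <= 0xc0001fff )
    {
        msr &= 0x1fff;
        base = is_write ? 0xc00 : 0x400;   /* high MSR range */
    }
    else
        return 1;                          /* out of range: always exits */

    return (bitmap[base + msr / 8] >> (msr % 8)) & 1;
}
```

[As in the patch, an MSR outside both ranges is reported as intercepted unconditionally.]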

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEA-0004bZ-DT; Thu, 06 Dec 2012 13:10:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbE8-0004a0-Kf
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:56 +0000
Received: from [85.158.139.83:5782] by server-5.bemta-5.messagelabs.com id
	B4/0A-11353-06990C05; Thu, 06 Dec 2012 13:10:56 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354799449!21437966!3
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3Mjk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13404 invoked from network); 6 Dec 2012 13:10:55 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 13:10:55 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 06 Dec 2012 05:10:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="176960660"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 06 Dec 2012 05:10:54 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:05 +0800
Message-Id: <1354798871-5632-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 05/11] nested vmx: fix handling of RDTSC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If L0 is to handle the TSC access itself, we also need to advance the
guest EIP by calling update_guest_eip().

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 ++
 3 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3bb0d99..9fb9562 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1555,7 +1555,7 @@ static int get_instruction_length(void)
     return len;
 }
 
-static void update_guest_eip(void)
+void update_guest_eip(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned long x;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d8b7ce5..dab9551 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1614,6 +1614,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
             tsc += __get_vvmcs(nvcpu->nv_vvmcx, TSC_OFFSET);
             regs->eax = (uint32_t)tsc;
             regs->edx = (uint32_t)(tsc >> 32);
+            update_guest_eip();
 
             return 1;
         }
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c4c2fe8..aa5b080 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -399,6 +399,8 @@ void ept_p2m_init(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
+void update_guest_eip(void);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
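[Archive note: the hunk above adds the missing update_guest_eip() call so that, after L0 emulates RDTSC for an L2 guest, execution resumes past the instruction instead of re-executing it. A minimal model of that path — struct and function names here are illustrative, not Xen's actual types:]

```c
#include <stdint.h>

struct regs { uint32_t eax, edx, eip; };

/* Model of L0 emulating RDTSC for an L2 guest: apply the L1-provided
 * TSC offset from the virtual VMCS, split the 64-bit result into
 * EDX:EAX as the real instruction does, then advance EIP past RDTSC
 * (the step the patch adds). */
static void emulate_rdtsc(struct regs *r, uint64_t host_tsc,
                          uint64_t l1_tsc_offset, uint32_t insn_len)
{
    uint64_t tsc = host_tsc + l1_tsc_offset;

    r->eax = (uint32_t)tsc;
    r->edx = (uint32_t)(tsc >> 32);
    r->eip += insn_len;
}
```

[Without the final EIP update the guest would spin on the same RDTSC forever.]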

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEB-0004c9-S5; Thu, 06 Dec 2012 13:10:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbE7-0004Zz-Eg
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:57 +0000
Received: from [85.158.139.83:5680] by server-13.bemta-5.messagelabs.com id
	1C/C5-27809-E5990C05; Thu, 06 Dec 2012 13:10:54 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354799449!21437966!2
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3Mjk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13203 invoked from network); 6 Dec 2012 13:10:53 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 13:10:53 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 06 Dec 2012 05:10:52 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="176960648"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 06 Dec 2012 05:10:49 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:02 +0800
Message-Id: <1354798871-5632-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 02/11] nested vmx: use literal name instead
	of hard numbers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the default-1 settings in the VMX capability MSRs, use named
constants instead of hard-coded numbers.

Also fix the default-1 settings for the pin-based control MSR.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++----------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    9 +++++++++
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..eb10bbf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1318,9 +1318,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        tmp = VMX_PINBASED_CTLS_DEFAULT1;
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
         /* 1-seetings */
@@ -1342,8 +1341,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
         break;
@@ -1356,8 +1354,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
         /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1367,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+        /* 1-settings */
+        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 067fbe4..dce2cd8 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -36,6 +36,15 @@ struct nestedvmx {
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
 
+/* bit 1, 2, 4 must be 1 */
+#define VMX_PINBASED_CTLS_DEFAULT1	0x16
+/* bit 1, 4-6,8,13-16,26 must be 1 */
+#define VMX_PROCBASED_CTLS_DEFAULT1	0x401e172
+/* bit 0-8, 10,11,13,14,16,17 must be 1 */
+#define VMX_EXIT_CTLS_DEFAULT1		0x36dff
+/* bit 0-8, and 12 must be 1 */
+#define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
+
 /*
  * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
  */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
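[Archive note: each VMX capability MSR packs two 32-bit halves: the low word reports the control bits that must be 1 (the "default1" settings the patch turns into named constants), and the high word reports the bits that may be 1. The `data = ((data | tmp) << 32) | tmp` idiom repeated in the patch composes exactly that. A small sketch with a hypothetical helper name:]

```c
#include <stdint.h>

/* Bits 1, 2 and 4 must be 1 in the pin-based controls (per the patch). */
#define VMX_PINBASED_CTLS_DEFAULT1 0x16ULL

/* Compose a VMX capability MSR value: low 32 bits are the must-be-1
 * (default1) controls; high 32 bits are the may-be-1 controls, which
 * always include the must-be-1 set. */
static uint64_t vmx_cap_msr(uint64_t allowed1, uint64_t default1)
{
    return ((allowed1 | default1) << 32) | default1;
}
```

[The pre-patch pin-based case shifted only the allowed-1 bits and OR'ed in 0, i.e. it reported no default-1 settings at all — the bug the commit message mentions.]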

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEA-0004bG-0m; Thu, 06 Dec 2012 13:10:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbE8-0004aZ-If
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:56 +0000
Received: from [85.158.143.99:57368] by server-1.bemta-4.messagelabs.com id
	55/2E-28401-F5990C05; Thu, 06 Dec 2012 13:10:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354799452!27352422!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8480 invoked from network); 6 Dec 2012 13:10:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605236"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:51 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-8j;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:42 +0000
Message-ID: <1354799451-16876-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/9] xen: arm: mark early_panic as a noreturn
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise gcc complains about variables being used uninitialised when
in fact the point of use can never be reached.

There aren't any instances of this in-tree right now; I noticed it
while developing another patch.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/early_printk.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index f45f21e..a770d4a 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -15,7 +15,7 @@
 #ifdef EARLY_UART_ADDRESS
 
 void early_printk(const char *fmt, ...);
-void early_panic(const char *fmt, ...);
+void early_panic(const char *fmt, ...) __attribute__((noreturn));
 
 #else
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
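[Archive note: the warning class this patch addresses is the classic maybe-uninitialised false positive across a function that never returns. A minimal illustration — the function names are made up for the example, only the attribute usage mirrors the patch:]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* With the noreturn attribute, gcc knows control never comes back from
 * fail(), so 'v' below cannot be used uninitialised on the error path. */
static void fail(const char *msg) __attribute__((noreturn));
static void fail(const char *msg)
{
    fprintf(stderr, "%s\n", msg);
    exit(1);
}

static int parse_or_die(const char *s)
{
    int v;

    if ( sscanf(s, "%d", &v) != 1 )
        fail("bad number");
    return v;   /* reachable only when v was assigned */
}
```

[Dropping the attribute reintroduces the "may be used uninitialised" warning at -Wall on compilers that cannot see through the call.]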

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEA-0004bG-0m; Thu, 06 Dec 2012 13:10:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbE8-0004aZ-If
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:56 +0000
Received: from [85.158.143.99:57368] by server-1.bemta-4.messagelabs.com id
	55/2E-28401-F5990C05; Thu, 06 Dec 2012 13:10:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354799452!27352422!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8480 invoked from network); 6 Dec 2012 13:10:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605236"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:51 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-8j;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:42 +0000
Message-ID: <1354799451-16876-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/9] xen: arm: mark early_panic as a noreturn
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise gcc complains about variables being used uninitialised when,
in fact, the point of use is never reached.

There aren't any instances of this in-tree right now; I noticed it
while developing another patch.
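
A minimal stand-alone illustration (hypothetical code, not part of this patch)
of the class of warning being silenced: once the panic helper is marked
noreturn, gcc can see that the "uninitialised" use is unreachable.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for early_panic(): without the noreturn
 * attribute, gcc cannot tell that control never returns from here. */
__attribute__((noreturn))
static void fail(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    exit(1);
}

static int lookup(int key)
{
    int value;                      /* only assigned on the success path */
    if (key < 0)
        fail("bad key %d\n", key);  /* never returns */
    else
        value = key * 2;
    /* Without the noreturn attribute, gcc -Wmaybe-uninitialized would
     * warn that 'value' may be used uninitialised here. */
    return value;
}
```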

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/early_printk.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index f45f21e..a770d4a 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -15,7 +15,7 @@
 #ifdef EARLY_UART_ADDRESS
 
 void early_printk(const char *fmt, ...);
-void early_panic(const char *fmt, ...);
+void early_panic(const char *fmt, ...) __attribute__((noreturn));
 
 #else
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbE9-0004b1-Hk; Thu, 06 Dec 2012 13:10:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbE7-0004a0-K8
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:55 +0000
Received: from [85.158.139.83:59121] by server-5.bemta-5.messagelabs.com id
	B6/F9-11353-E5990C05; Thu, 06 Dec 2012 13:10:54 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1354799452!21437976!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13212 invoked from network); 6 Dec 2012 13:10:53 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-11.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 13:10:53 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 05:10:52 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259897880"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 05:10:51 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:03 +0800
Message-Id: <1354798871-5632-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 03/11] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Also, use a symbolic name rather than hard-coded numbers for bit 55 of
IA32_VMX_BASIC_MSR.
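
As a sketch (hypothetical helper, not part of the patch): bit 55 of the
64-bit IA32_VMX_BASIC value lives in bit 23 of the MSR's high 32-bit half,
which is why the old code tested `(1u << 23)` and the new code shifts the
64-bit mask down by 32.

```c
#include <assert.h>
#include <stdint.h>

/* Bit 55 of IA32_VMX_BASIC: set when controls that default to 1 may be
 * cleared to 0, i.e. the TRUE capability MSRs are available. */
#define VMX_BASIC_DEFAULT1_ZERO (1ULL << 55)

/* Hypothetical helper: test bit 55 given only the high 32-bit word of
 * the MSR, as vmx_init_vmcs_config() does with vmx_basic_msr_high. */
static int default1_may_be_zero(uint32_t vmx_basic_msr_high)
{
    /* Shifting the 64-bit mask down by 32 yields the high-word mask. */
    return !!(vmx_basic_msr_high & (uint32_t)(VMX_BASIC_DEFAULT1_ZERO >> 32));
}
```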

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    6 +++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    6 ++++++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 205e705..9adc7a4 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -237,7 +237,7 @@ static int vmx_init_vmcs_config(void)
          * We check VMX_BASIC_MSR[55] to correctly handle default controls.
          */
         uint32_t must_be_one, must_be_zero, msr = MSR_IA32_VMX_PROCBASED_CTLS;
-        if ( vmx_basic_msr_high & (1u << 23) )
+        if ( vmx_basic_msr_high & (VMX_BASIC_DEFAULT1_ZERO >> 32) )
             msr = MSR_IA32_VMX_TRUE_PROCBASED_CTLS;
         rdmsr(msr, must_be_one, must_be_zero);
         if ( must_be_one & (CPU_BASED_INVLPG_EXITING |
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index eb10bbf..ec5e8a7 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1311,9 +1311,10 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50;
+               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
         /* 1-settings */
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
@@ -1322,6 +1323,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
         /* 1-settings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1353,6 +1355,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = (data << 32) | tmp;
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
+    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
         /* 1-settings */
         tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
@@ -1367,6 +1370,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
+    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         /* 1-settings */
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 14ac773..ef2c9c9 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -247,6 +247,12 @@ extern bool_t cpu_has_vmx_ins_outs_instr_info;
 #define VMX_INTR_SHADOW_SMI             0x00000004
 #define VMX_INTR_SHADOW_NMI             0x00000008
 
+/* 
+ * bit 55 of IA32_VMX_BASIC MSR, indicating whether any VMX controls that
+ * default to 1 may be cleared to 0.
+ */
+#define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
+
 /* VMCS field encodings. */
 enum vmcs_field {
     VIRTUAL_PROCESSOR_ID            = 0x00000000,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEC-0004ci-GC; Thu, 06 Dec 2012 13:11:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbE9-0004aZ-RG
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:58 +0000
Received: from [85.158.143.99:57505] by server-1.bemta-4.messagelabs.com id
	D3/3E-28401-16990C05; Thu, 06 Dec 2012 13:10:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354799452!27352422!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8748 invoked from network); 6 Dec 2012 13:10:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605237"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-Do;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:44 +0000
Message-ID: <1354799451-16876-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/9] arm: avoid placing Xen over any modules.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This will still fail if the modules push Xen out of the top 32M of RAM,
since Xen would then overlap with the domheap (or possibly the
xenheap). That case will be dealt with later.
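
The module-avoiding search in consider_modules() can be sketched as a
self-contained function (hypothetical, simplified types): when a candidate
region overlaps a module, recursively try the space above that module first,
preferring the higher placement, and fall back to the space below it.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct mod { paddr_t start, size; };

/* Hypothetical stand-alone version of the consider_modules() idea:
 * return the end of the highest 'align'-aligned region of at least
 * 'size' bytes inside [s, e) avoiding mods[first..nmods-1], or 0. */
static paddr_t fit_above_modules(paddr_t s, paddr_t e, paddr_t size,
                                 paddr_t align, const struct mod *mods,
                                 int first, int nmods)
{
    s = (s + align - 1) & ~(align - 1);   /* round start up */
    e = e & ~(align - 1);                 /* round end down */
    if (s > e || e - s < size)
        return 0;

    for (int i = first; i < nmods; i++) {
        paddr_t ms = mods[i].start, me = ms + mods[i].size;
        if (s < me && ms < e) {           /* region overlaps module i */
            paddr_t above = fit_above_modules(me, e, size, align,
                                             mods, i + 1, nmods);
            if (above)
                return above;             /* prefer the higher placement */
            return fit_above_modules(s, ms, size, align,
                                     mods, i + 1, nmods);
        }
    }
    return e;                             /* no conflicts: place at the top */
}

/* One demo module occupying 16M..32M, for illustration only. */
static const struct mod demo_mods[1] = { { 0x1000000, 0x1000000 } };
```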

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Xen is module 0, modules start at 1.
---
 xen/arch/arm/setup.c |   68 ++++++++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..a97455e 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -68,17 +68,55 @@ static void __init processor_id(void)
            READ_CP32(ID_ISAR3), READ_CP32(ID_ISAR4), READ_CP32(ID_ISAR5));
 }
 
+/*
+ * Returns the end address of the highest region in the range s..e
+ * with required size and alignment that does not conflict with the
+ * modules from first_mod to nr_modules.
+ *
+ * For non-recursive callers first_mod should normally be 1.
+ */
+static paddr_t __init consider_modules(paddr_t s, paddr_t e,
+                                       uint32_t size, paddr_t align,
+                                       int first_mod)
+{
+    const struct dt_module_info *mi = &early_info.modules;
+    int i;
+
+    s = (s+align-1) & ~(align-1);
+    e = e & ~(align-1);
+
+    if ( s > e ||  e - s < size )
+        return 0;
+
+    for ( i = first_mod; i <= mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( s < mod_e && mod_s < e )
+        {
+            mod_e = consider_modules(mod_e, e, size, align, i+1);
+            if ( mod_e )
+                return mod_e;
+
+            return consider_modules(s, mod_s, size, align, i+1);
+        }
+    }
+
+    return e;
+}
+
 /**
  * get_xen_paddr - get physical address to relocate Xen to
  *
- * Xen is relocated to the top of RAM and aligned to a XEN_PADDR_ALIGN
- * boundary.
+ * Xen is relocated to as near to the top of RAM as possible and
+ * aligned to a XEN_PADDR_ALIGN boundary.
  */
 static paddr_t __init get_xen_paddr(void)
 {
     struct dt_mem_info *mi = &early_info.mem;
     paddr_t min_size;
-    paddr_t paddr = 0, t;
+    paddr_t paddr = 0;
     int i;
 
     min_size = (_end - _start + (XEN_PADDR_ALIGN-1)) & ~(XEN_PADDR_ALIGN-1);
@@ -86,17 +124,33 @@ static paddr_t __init get_xen_paddr(void)
     /* Find the highest bank with enough space. */
     for ( i = 0; i < mi->nr_banks; i++ )
     {
-        if ( mi->bank[i].size >= min_size )
+        const struct membank *bank = &mi->bank[i];
+        paddr_t s, e;
+
+        if ( bank->size >= min_size )
         {
-            t = mi->bank[i].start + mi->bank[i].size - min_size;
-            if ( t > paddr )
-                paddr = t;
+            e = consider_modules(bank->start, bank->start + bank->size,
+                                 min_size, XEN_PADDR_ALIGN, 1);
+            if ( !e )
+                continue;
+
+            s = e - min_size;
+
+            if ( s > paddr )
+                paddr = s;
         }
     }
 
     if ( !paddr )
         early_panic("Not enough memory to relocate Xen\n");
 
+    early_printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+                 paddr, paddr + min_size);
+
+    /* Xen is module 0 */
+    early_info.modules.module[0].start = paddr;
+    early_info.modules.module[0].size = min_size;
+
     return paddr;
 }
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEC-0004cy-Rq; Thu, 06 Dec 2012 13:11:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEA-0004b3-1l
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:58 +0000
Received: from [85.158.143.99:37555] by server-2.bemta-4.messagelabs.com id
	D2/2B-30861-16990C05; Thu, 06 Dec 2012 13:10:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354799452!28314513!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7454 invoked from network); 6 Dec 2012 13:10:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46813329"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-Ge;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:45 +0000
Message-ID: <1354799451-16876-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/9] arm: avoid allocating the heaps over
	modules or xen itself.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
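
The xenheap sizing retry in this patch can be sketched standalone
(hypothetical names; in the real code the placement test is
consider_modules()): start from the largest permitted heap and halve it
until a placement succeeds or the 128 MiB floor is reached.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical placement callback: returns a region end address, or 0
 * if 'size' bytes cannot be placed (stands in for consider_modules()). */
typedef paddr_t (*place_fn)(paddr_t start, paddr_t end, paddr_t size);

/* Sketch of the sizing loop: returns the chosen xenheap size in bytes,
 * or 0 if even 128 MiB cannot be placed (the real code panics then). */
static paddr_t size_xenheap(paddr_t ram_start, paddr_t ram_end,
                            paddr_t max_size, place_fn place)
{
    paddr_t size = max_size;
    while (size >= (paddr_t)128 << 20) {
        if (place(ram_start, ram_end, size))
            return size;
        size >>= 1;                 /* halve and retry */
    }
    return 0;
}

/* Example placement policy, purely illustrative: a size fits only if it
 * is at most half of the available range. */
static paddr_t demo_place(paddr_t s, paddr_t e, paddr_t size)
{
    return (e - s >= 2 * size) ? e : 0;
}
```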
---
 xen/arch/arm/setup.c |   89 +++++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 78 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index a97455e..9f08daf 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -73,7 +73,8 @@ static void __init processor_id(void)
  * with required size and alignment that does not conflict with the
  * modules from first_mod to nr_modules.
  *
- * For non-recursive callers first_mod should normally be 1.
+ * For non-recursive callers first_mod should normally be 0 (all
+ * modules and Xen itself) or 1 (all modules but not Xen).
  */
 static paddr_t __init consider_modules(paddr_t s, paddr_t e,
                                        uint32_t size, paddr_t align,
@@ -106,6 +107,34 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     return e;
 }
 
+/*
+ * Return the end of the non-module region starting at s. In other
+ * words, return the start of the next module at or after s.
+ *
+ * Also returns the end of that module in *n.
+ */
+static paddr_t __init next_module(paddr_t s, paddr_t *n)
+{
+    struct dt_module_info *mi = &early_info.modules;
+    paddr_t lowest = ~(paddr_t)0;
+    int i;
+
+    for ( i = 0; i <= mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( mod_s < s )
+            continue;
+        if ( mod_s > lowest )
+            continue;
+        lowest = mod_s;
+        *n = mod_e;
+    }
+    return lowest;
+}
+
+
 /**
  * get_xen_paddr - get physical address to relocate Xen to
  *
@@ -159,6 +188,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     paddr_t ram_start;
     paddr_t ram_end;
     paddr_t ram_size;
+    paddr_t s, e;
     unsigned long ram_pages;
     unsigned long heap_pages, xenheap_pages, domheap_pages;
     unsigned long dtb_pages;
@@ -176,22 +206,37 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     ram_pages = ram_size >> PAGE_SHIFT;
 
     /*
-     * Calculate the sizes for the heaps using these constraints:
+     * Locate the xenheap using these constraints:
      *
-     *  - heaps must be 32 MiB aligned
-     *  - must not include Xen itself
-     *  - xen heap must be at most 1 GiB
+     *  - must be 32 MiB aligned
+     *  - must not include Xen itself or the boot modules
+     *  - must be at most 1 GiB
+     *  - must be at least 128 MiB
      *
-     * XXX: needs a platform with at least 1GiB of RAM or the dom
-     * heap will be empty and no domains can be created.
+     * We try to allocate the largest xenheap possible within these
+     * constraints.
      */
-    heap_pages = (ram_size >> PAGE_SHIFT) - (32 << (20 - PAGE_SHIFT));
+    heap_pages = (ram_size >> PAGE_SHIFT);
     xenheap_pages = min(1ul << (30 - PAGE_SHIFT), heap_pages);
+
+    do
+    {
+        e = consider_modules(ram_start, ram_end, xenheap_pages<<PAGE_SHIFT,
+                             32<<20, 0);
+        if ( e )
+            break;
+
+        xenheap_pages >>= 1;
+    } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
+
+    if ( ! e )
+        panic("Not enough space for xenheap\n");
+
     domheap_pages = heap_pages - xenheap_pages;
 
     printk("Xen heap: %lu pages  Dom heap: %lu pages\n", xenheap_pages, domheap_pages);
 
-    setup_xenheap_mappings(ram_start >> PAGE_SHIFT, xenheap_pages);
+    setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
     /*
      * Need a single mapped page for populating bootmem_region_list
@@ -215,8 +260,30 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     copy_from_paddr(device_tree_flattened, dtb_paddr, dtb_size, BUFFERABLE);
 
     /* Add non-xenheap memory */
-    init_boot_pages(pfn_to_paddr(xenheap_mfn_start + xenheap_pages),
-                    pfn_to_paddr(xenheap_mfn_start + xenheap_pages + domheap_pages));
+    s = ram_start;
+    while ( s < ram_end )
+    {
+        paddr_t n = ram_end;
+
+        e = next_module(s, &n);
+
+        if ( e == ~(paddr_t)0 )
+        {
+            e = n = ram_end;
+        }
+
+        /* Avoid the xenheap */
+        if ( s < ((xenheap_mfn_start+xenheap_pages) << PAGE_SHIFT)
+             && (xenheap_mfn_start << PAGE_SHIFT) < e )
+        {
+            e = pfn_to_paddr(xenheap_mfn_start);
+            n = pfn_to_paddr(xenheap_mfn_start+xenheap_pages);
+        }
+
+        init_boot_pages(s, e);
+
+        s = n;
+    }
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEC-0004cy-Rq; Thu, 06 Dec 2012 13:11:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEA-0004b3-1l
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:58 +0000
Received: from [85.158.143.99:37555] by server-2.bemta-4.messagelabs.com id
	D2/2B-30861-16990C05; Thu, 06 Dec 2012 13:10:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354799452!28314513!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7454 invoked from network); 6 Dec 2012 13:10:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46813329"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-Ge;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:45 +0000
Message-ID: <1354799451-16876-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/9] arm: avoid allocating the heaps over
	modules or xen itself.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/setup.c |   89 +++++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 78 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index a97455e..9f08daf 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -73,7 +73,8 @@ static void __init processor_id(void)
  * with required size and alignment that does not conflict with the
  * modules from first_mod to nr_modules.
  *
- * For non-recursive callers first_mod should normally be 1.
+ * For non-recursive callers first_mod should normally be 0 (all
+ * modules and Xen itself) or 1 (all modules but not Xen).
  */
 static paddr_t __init consider_modules(paddr_t s, paddr_t e,
                                        uint32_t size, paddr_t align,
@@ -106,6 +107,34 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     return e;
 }
 
+/*
+ * Return the end of the non-module region starting at s. In other
+ * words, return the start of the next module after s.
+ *
+ * Also returns the end of that module in *n.
+ */
+static paddr_t __init next_module(paddr_t s, paddr_t *n)
+{
+    struct dt_module_info *mi = &early_info.modules;
+    paddr_t lowest = ~(paddr_t)0;
+    int i;
+
+    for ( i = 0; i <= mi->nr_mods; i++ )
+    {
+        paddr_t mod_s = mi->module[i].start;
+        paddr_t mod_e = mod_s + mi->module[i].size;
+
+        if ( mod_s < s )
+            continue;
+        if ( mod_s > lowest )
+            continue;
+        lowest = mod_s;
+        *n = mod_e;
+    }
+    return lowest;
+}
+
+
 /**
  * get_xen_paddr - get physical address to relocate Xen to
  *
@@ -159,6 +188,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     paddr_t ram_start;
     paddr_t ram_end;
     paddr_t ram_size;
+    paddr_t s, e;
     unsigned long ram_pages;
     unsigned long heap_pages, xenheap_pages, domheap_pages;
     unsigned long dtb_pages;
@@ -176,22 +206,37 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     ram_pages = ram_size >> PAGE_SHIFT;
 
     /*
-     * Calculate the sizes for the heaps using these constraints:
+     * Locate the xenheap using these constraints:
      *
-     *  - heaps must be 32 MiB aligned
-     *  - must not include Xen itself
-     *  - xen heap must be at most 1 GiB
+     *  - must be 32 MiB aligned
+     *  - must not include Xen itself or the boot modules
+     *  - must be at most 1 GiB
+     *  - must be at least 128 MiB
      *
-     * XXX: needs a platform with at least 1GiB of RAM or the dom
-     * heap will be empty and no domains can be created.
+     * We try to allocate the largest xenheap possible within these
+     * constraints.
      */
-    heap_pages = (ram_size >> PAGE_SHIFT) - (32 << (20 - PAGE_SHIFT));
+    heap_pages = (ram_size >> PAGE_SHIFT);
     xenheap_pages = min(1ul << (30 - PAGE_SHIFT), heap_pages);
+
+    do
+    {
+        e = consider_modules(ram_start, ram_end, xenheap_pages<<PAGE_SHIFT,
+                             32<<20, 0);
+        if ( e )
+            break;
+
+        xenheap_pages >>= 1;
+    } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
+
+    if ( ! e )
+        panic("Not enough space for xenheap\n");
+
     domheap_pages = heap_pages - xenheap_pages;
 
     printk("Xen heap: %lu pages  Dom heap: %lu pages\n", xenheap_pages, domheap_pages);
 
-    setup_xenheap_mappings(ram_start >> PAGE_SHIFT, xenheap_pages);
+    setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
     /*
      * Need a single mapped page for populating bootmem_region_list
@@ -215,8 +260,30 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     copy_from_paddr(device_tree_flattened, dtb_paddr, dtb_size, BUFFERABLE);
 
     /* Add non-xenheap memory */
-    init_boot_pages(pfn_to_paddr(xenheap_mfn_start + xenheap_pages),
-                    pfn_to_paddr(xenheap_mfn_start + xenheap_pages + domheap_pages));
+    s = ram_start;
+    while ( s < ram_end )
+    {
+        paddr_t n = ram_end;
+
+        e = next_module(s, &n);
+
+        if ( e == ~(paddr_t)0 )
+        {
+            e = n = ram_end;
+        }
+
+        /* Avoid the xenheap */
+        if ( s < ((xenheap_mfn_start+xenheap_pages) << PAGE_SHIFT)
+             && (xenheap_mfn_start << PAGE_SHIFT) < e )
+        {
+            e = pfn_to_paddr(xenheap_mfn_start);
+            n = pfn_to_paddr(xenheap_mfn_start+xenheap_pages);
+        }
+
+        init_boot_pages(s, e);
+
+        s = n;
+    }
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbED-0004e3-PU; Thu, 06 Dec 2012 13:11:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEB-0004aZ-Ri
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:00 +0000
Received: from [85.158.143.99:61560] by server-1.bemta-4.messagelabs.com id
	1F/3E-28401-36990C05; Thu, 06 Dec 2012 13:10:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354799452!27352422!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9069 invoked from network); 6 Dec 2012 13:10:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605242"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-RE;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:50 +0000
Message-ID: <1354799451-16876-9-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 9/9] xen: strip /chosen/modules/module@<N>/*
	from dom0 device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These nodes are used by Xen itself to find the initial modules and are
of no use to dom0, so strip them from the device tree dom0 is given.

Only drop nodes which are also "xen,multiboot-module" compatible, in
case someone else has a similar naming idea.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v4 - /chosen/modules/module@N not /chosen/module@N
v3 - use a helper to filter out DT elements which are not for dom0.
     Better than an ad-hoc break in the middle of a loop.
---
 xen/arch/arm/domain_build.c |   40 ++++++++++++++++++++++++++++++++++++++--
 1 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7a964f7..27e02e4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -172,6 +172,40 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
     return prop;
 }
 
+/* Returns the next node in fdt (starting from offset) which should be
+ * passed through to dom0.
+ */
+static int fdt_next_dom0_node(const void *fdt, int node,
+                              int *depth_out,
+                              int parents[DEVICE_TREE_MAX_DEPTH])
+{
+    int depth = *depth_out;
+
+    while ( (node = fdt_next_node(fdt, node, &depth)) &&
+            node >= 0 && depth >= 0 )
+    {
+        if ( depth >= DEVICE_TREE_MAX_DEPTH )
+            break;
+
+        parents[depth] = node;
+
+        /* Skip /chosen/modules/module@<N>/ and all subnodes */
+        if ( depth >= 3 &&
+             device_tree_node_matches(fdt, parents[1], "chosen") &&
+             device_tree_node_matches(fdt, parents[2], "modules") &&
+             device_tree_node_matches(fdt, parents[3], "module") &&
+             fdt_node_check_compatible(fdt, parents[3],
+                                       "xen,multiboot-module" ) == 0 )
+            continue;
+
+        /* We've arrived at a node which dom0 is interested in. */
+        break;
+    }
+
+    *depth_out = depth;
+    return node;
+}
+
 static int write_nodes(struct domain *d, struct kernel_info *kinfo,
                        const void *fdt)
 {
@@ -179,11 +213,12 @@ static int write_nodes(struct domain *d, struct kernel_info *kinfo,
     int depth = 0, last_depth = -1;
     u32 address_cells[DEVICE_TREE_MAX_DEPTH];
     u32 size_cells[DEVICE_TREE_MAX_DEPTH];
+    int parents[DEVICE_TREE_MAX_DEPTH];
     int ret;
 
     for ( node = 0, depth = 0;
           node >= 0 && depth >= 0;
-          node = fdt_next_node(fdt, node, &depth) )
+          node = fdt_next_dom0_node(fdt, node, &depth, parents) )
     {
         const char *name;
 
@@ -191,7 +226,8 @@ static int write_nodes(struct domain *d, struct kernel_info *kinfo,
 
         if ( depth >= DEVICE_TREE_MAX_DEPTH )
         {
-            printk("warning: node `%s' is nested too deep\n", name);
+            printk("warning: node `%s' is nested too deep (%d)\n",
+                   name, depth);
             continue;
         }
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEF-0004g8-8g; Thu, 06 Dec 2012 13:11:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbED-0004a0-P1
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:01 +0000
Received: from [85.158.139.83:8176] by server-5.bemta-5.messagelabs.com id
	FE/2A-11353-56990C05; Thu, 06 Dec 2012 13:11:01 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354799454!28646466!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTQy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18545 invoked from network); 6 Dec 2012 13:10:54 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 13:10:54 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 06 Dec 2012 05:10:06 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252956573"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 05:10:52 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:04 +0800
Message-Id: <1354798871-5632-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 04/11] nested vmx: fix rflags status in
	virtual vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As stated in the SDM, on VM exit all bits in RFLAGS are cleared to 0,
except for bit 1, which is reserved and always set. Follow the same
behaviour in virtual_vmexit.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ec5e8a7..d8b7ce5 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -991,7 +991,8 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
 
     regs->eip = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RIP);
     regs->esp = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RSP);
-    regs->eflags = __vmread(GUEST_RFLAGS);
+    /* VM exit clears all bits except bit 1 */
+    regs->eflags = 0x2;
 
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEH-0004k5-6M; Thu, 06 Dec 2012 13:11:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEE-0004aZ-FF
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:02 +0000
Received: from [85.158.143.99:39822] by server-1.bemta-4.messagelabs.com id
	7F/4E-28401-66990C05; Thu, 06 Dec 2012 13:11:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354799452!28314513!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7600 invoked from network); 6 Dec 2012 13:10:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46813330"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:51 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-BT;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:43 +0000
Message-ID: <1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/9] xen: arm: parse modules from DT during
	early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The bootloader should populate /chosen/modules/module@<N>/ for each
module it wishes to pass to the hypervisor. The content of these nodes
is described in docs/misc/arm/device-tree/booting.txt

The hypervisor parses two types of module: Linux zImages and Linux
initrds. Currently we don't do anything with them.
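
For illustration, a bootloader following the layout described above might populate a fragment like the following (the addresses, sizes, and bootargs here are invented for the example):

```dts
/ {
	chosen {
		modules {
			module@1 {
				compatible = "xen,linux-zimage", "xen,multiboot-module";
				reg = <0x80008000 0x00400000>;	/* address, length */
				bootargs = "console=hvc0 root=/dev/xvda1";
			};
			module@2 {
				compatible = "xen,linux-initrd", "xen,multiboot-module";
				reg = <0x81000000 0x00200000>;
			};
		};
	};
};
```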

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v4: Use /chosen/modules/module@N
    Identify module type by compatible property not number.
v3: Use a reg = < > property for the module address/length.
v2: Reserve the zeroth module for Xen itself (not used yet)
    Use a more idiomatic DT layout
    Document said layout
---
 docs/misc/arm/device-tree/booting.txt |   25 ++++++++++
 xen/common/device_tree.c              |   86 +++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |   14 +++++
 3 files changed, 125 insertions(+), 0 deletions(-)
 create mode 100644 docs/misc/arm/device-tree/booting.txt

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
new file mode 100644
index 0000000..94cd3f1
--- /dev/null
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -0,0 +1,25 @@
+Xen is passed the dom0 kernel and initrd via a reference in the /chosen
+node of the device tree.
+
+Each node has the form /chosen/modules/module@<N> and contains the following
+properties:
+
+- compatible
+
+	Must be:
+
+		"xen,<type>", "xen,multiboot-module"
+
+	where <type> must be one of:
+
+	- "linux-zimage" -- the dom0 kernel
+	- "linux-initrd" -- the dom0 ramdisk
+
+- reg
+
+	Specifies the physical address of the module in RAM and the
+	length of the module.
+
+- bootargs (optional)
+
+	Command line associated with this module
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index da0af77..4bb640e 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -270,6 +270,90 @@ static void __init process_cpu_node(const void *fdt, int node,
     cpumask_set_cpu(start, &cpu_possible_map);
 }
 
+static int __init process_chosen_modules_node(const void *fdt, int node,
+                                              const char *name, int *depth,
+                                              u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const u32 *cell;
+    int nr, nr_modules = 0;
+    struct dt_mb_module *mod;
+    int len;
+
+    for ( *depth = 1;
+          *depth >= 1;
+          node = fdt_next_node(fdt, node, depth) )
+    {
+        name = fdt_get_name(fdt, node, NULL);
+        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
+
+            if ( fdt_node_check_compatible(fdt, node,
+                                           "xen,multiboot-module" ) != 0 )
+                early_panic("%s not a compatible module node\n", name);
+
+            if ( fdt_node_check_compatible(fdt, node,
+                                           "xen,linux-zimage") == 0 )
+                nr = 1;
+            else if ( fdt_node_check_compatible(fdt, node,
+                                                "xen,linux-initrd") == 0)
+                nr = 2;
+            else
+                early_panic("%s not a known xen multiboot module type\n", name);
+
+            if ( nr > nr_modules )
+                nr_modules = nr;
+
+            mod = &early_info.modules.module[nr];
+
+            prop = fdt_get_property(fdt, node, "reg", NULL);
+            if ( !prop )
+                early_panic("node %s missing `reg' property\n", name);
+
+            cell = (const u32 *)prop->data;
+            device_tree_get_reg(&cell, address_cells, size_cells,
+                                &mod->start, &mod->size);
+
+            prop = fdt_get_property(fdt, node, "bootargs", &len);
+            if ( prop )
+            {
+                if ( len > sizeof(mod->cmdline) )
+                    early_panic("module %d command line too long\n", nr);
+
+                safe_strcpy(mod->cmdline, prop->data);
+            }
+            else
+                mod->cmdline[0] = 0;
+        }
+    }
+
+    for ( nr = 1 ; nr < nr_modules ; nr++ )
+    {
+        mod = &early_info.modules.module[nr];
+        if ( !mod->start || !mod->size )
+            early_panic("module %d missing / invalid\n", nr);
+    }
+
+    early_info.modules.nr_mods = nr_modules;
+    return node;
+}
+
+static void __init process_chosen_node(const void *fdt, int node,
+                                       const char *name,
+                                       u32 address_cells, u32 size_cells)
+{
+    int depth;
+
+    for ( depth = 0;
+          depth >= 0;
+          node = fdt_next_node(fdt, node, &depth) )
+    {
+        name = fdt_get_name(fdt, node, NULL);
+        if ( depth == 1 && strcmp(name, "modules") == 0 )
+            node = process_chosen_modules_node(fdt, node, name, &depth,
+                                               address_cells, size_cells);
+    }
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -279,6 +363,8 @@ static int __init early_scan_node(const void *fdt,
         process_memory_node(fdt, node, name, address_cells, size_cells);
     else if ( device_tree_type_matches(fdt, node, "cpu") )
         process_cpu_node(fdt, node, name, address_cells, size_cells);
+    else if ( device_tree_node_matches(fdt, node, "chosen") )
+        process_chosen_node(fdt, node, name, address_cells, size_cells);
 
     return 0;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 4d010c0..c383677 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #define DEVICE_TREE_MAX_DEPTH 16
 
 #define NR_MEM_BANKS 8
+#define NR_MODULES 2
 
 struct membank {
     paddr_t start;
@@ -26,8 +27,21 @@ struct dt_mem_info {
     struct membank bank[NR_MEM_BANKS];
 };
 
+struct dt_mb_module {
+    paddr_t start;
+    paddr_t size;
+    char cmdline[1024];
+};
+
+struct dt_module_info {
+    int nr_mods;
+    /* Module 0 is Xen itself, followed by the provided modules-proper */
+    struct dt_mb_module module[NR_MODULES + 1];
+};
+
 struct dt_early_info {
     struct dt_mem_info mem;
+    struct dt_module_info modules;
 };
 
 typedef int (*device_tree_node_func)(const void *fdt,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbED-0004dX-Cd; Thu, 06 Dec 2012 13:11:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEB-0004b3-4r
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:10:59 +0000
Received: from [85.158.143.99:37623] by server-2.bemta-4.messagelabs.com id
	5C/2B-30861-26990C05; Thu, 06 Dec 2012 13:10:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354799452!27352422!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8847 invoked from network); 6 Dec 2012 13:10:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605239"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-Mz;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:48 +0000
Message-ID: <1354799451-16876-7-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 7/9] arm: discard boot modules after building
	domain 0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/domain_build.c |    3 +++
 xen/arch/arm/setup.c        |   16 ++++++++++++++++
 xen/include/asm-arm/setup.h |    2 ++
 3 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a9e7f43..e96ed10 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -10,6 +10,7 @@
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/guest_access.h>
+#include <asm/setup.h>
 
 #include "gic.h"
 #include "kernel.h"
@@ -308,6 +309,8 @@ int construct_dom0(struct domain *d)
     dtb_load(&kinfo);
     kernel_load(&kinfo);
 
+    discard_initial_modules();
+
     clear_bit(_VPF_down, &v->pause_flags);
 
     memset(regs, 0, sizeof(*regs));
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 9f08daf..1eb8f77 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -68,6 +68,22 @@ static void __init processor_id(void)
            READ_CP32(ID_ISAR3), READ_CP32(ID_ISAR4), READ_CP32(ID_ISAR5));
 }
 
+void __init discard_initial_modules(void)
+{
+    struct dt_module_info *mi = &early_info.modules;
+    int i;
+
+    for ( i = 1; i <= mi->nr_mods; i++ )
+    {
+        paddr_t s = mi->module[i].start;
+        paddr_t e = s + PAGE_ALIGN(mi->module[i].size);
+
+        init_domheap_pages(s, e);
+    }
+
+    mi->nr_mods = 0;
+}
+
 /*
  * Returns the end address of the highest region in the range s..e
  * with required size and alignment that does not conflict with the
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 8769f66..3267db0 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -9,6 +9,8 @@ void arch_get_xen_caps(xen_capabilities_info_t *info);
 
 int construct_dom0(struct domain *d);
 
+void discard_initial_modules(void);
+
 #endif
 /*
  * Local variables:
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEJ-0004pP-S9; Thu, 06 Dec 2012 13:11:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbEI-0004ld-EY
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:06 +0000
Received: from [193.109.254.147:13704] by server-12.bemta-14.messagelabs.com
	id 84/0B-00510-96990C05; Thu, 06 Dec 2012 13:11:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1354799449!9060310!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32397 invoked from network); 6 Dec 2012 13:10:50 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 13:10:50 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 05:10:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252956517"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 05:10:44 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:00 +0800
Message-Id: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 00/11] nested vmx: bug fixes and feature
	enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series of patches contains bug fixes and feature enablement for
nested VMX; please help review and pull.

Changes from v3 to v4:
 - Use a macro to combine MSR value with host capability.
 - Use the macro defined for bit 55 in IA32_VMX_BASIC MSR in vmcs.c.
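The "combine MSR value with host capability" change can be pictured with a minimal sketch. The macro and function names below are illustrative, not the actual Xen identifiers: the idea is that the VMX capability MSR value advertised to the L1 guest is the guest-visible default masked by what the host MSR actually reports.

```c
/* Illustrative sketch only: GEN_VMX_MSR_COMBINE and expose_vmx_msr are
 * hypothetical names, not the actual Xen macros.  The value exposed to
 * the L1 guest is the guest default masked by the capabilities the
 * host MSR actually reports, so the guest never sees a feature the
 * host cannot provide. */
#include <stdint.h>

#define GEN_VMX_MSR_COMBINE(guest_dflt, host_val) \
    ((uint64_t)(guest_dflt) & (uint64_t)(host_val))

uint64_t expose_vmx_msr(uint64_t guest_default, uint64_t host_msr)
{
    return GEN_VMX_MSR_COMBINE(guest_default, host_msr);
}
```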

Changes from v2 to v3:
 - Change a hard-coded number to a literal name when exposing bit 55 in the IA32_VMX_BASIC MSR.

Changes from v1 to v2:
 - Use literal names instead of hard-coded numbers to expose the default-1 settings in VMX-related MSRs.
 - For TRUE VMX MSRs, we use the same value as normal VMX MSRs.
 - Fix a coding style issue.

The following 5 patches are suitable for backporting to 4.2.x:
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: fix interrupt delivery to L2 guest

Thanks,
Dongxiao

Dongxiao Xu (11):
  nested vmx: emulate MSR bitmaps
  nested vmx: use literal name instead of hard numbers
  nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
  nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
  nested vmx: fix interrupt delivery to L2 guest
  nested vmx: check host ability when intercept MSR read

 xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
 xen/arch/x86/hvm/vmx/vmcs.c        |   30 +++++++++-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |  113 ++++++++++++++++++++++++++++++------
 xen/include/asm-x86/hvm/vmx/vmcs.h |    7 ++
 xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |   10 +++
 7 files changed, 151 insertions(+), 24 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEN-0004vX-BV; Thu, 06 Dec 2012 13:11:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEM-0004sM-5V
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:10 +0000
Received: from [85.158.139.211:19477] by server-7.bemta-5.messagelabs.com id
	1A/D3-23096-D6990C05; Thu, 06 Dec 2012 13:11:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1354799456!19263132!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9666 invoked from network); 6 Dec 2012 13:11:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:11:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605240"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-OS;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:49 +0000
Message-ID: <1354799451-16876-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 8/9] arm: use /chosen/module@1/bootargs for
	domain 0 command line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fall back to xen,dom0-bootargs if this isn't present.

Ideally this would use module1-args iff the kernel came from
module@1/{start,end} and the existing xen,dom0-bootargs if the kernel
came from flash, but this approach is simpler and has the same effect
in practice.
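The selection order described above boils down to a small helper; this is only a hedged sketch (pick_dom0_cmdline is a hypothetical name — in the patch the logic is inlined in write_properties()):

```c
/* Sketch of the dom0 command-line selection order this patch
 * implements: prefer the bootargs attached to module@1, fall back to
 * xen,dom0-bootargs.  pick_dom0_cmdline() is an illustrative helper,
 * not a function in the patch. */
#include <stddef.h>

static const char *pick_dom0_cmdline(const char *module1_args,
                                     const char *dom0_bootargs)
{
    /* A non-empty module@1 command line wins... */
    if ( module1_args != NULL && module1_args[0] != '\0' )
        return module1_args;
    /* ...otherwise use xen,dom0-bootargs; may be NULL, in which case
     * no bootargs property is written for dom0. */
    return dom0_bootargs;
}
```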

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: update for new DT layout
---
 xen/arch/arm/domain_build.c |   28 ++++++++++++++++++++++++----
 1 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e96ed10..7a964f7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -86,8 +86,13 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
                             int node, const char *name, int depth,
                             u32 address_cells, u32 size_cells)
 {
+    const char *bootargs = NULL;
     int prop;
 
+    if ( early_info.modules.nr_mods >= 1 &&
+         early_info.modules.module[1].cmdline[0] )
+        bootargs = &early_info.modules.module[1].cmdline[0];
+
     for ( prop = fdt_first_property_offset(fdt, node);
           prop >= 0;
           prop = fdt_next_property_offset(fdt, prop) )
@@ -104,15 +109,22 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
         prop_len  = fdt32_to_cpu(p->len);
 
         /*
-         * In chosen node: replace bootargs with value from
-         * xen,dom0-bootargs.
+         * In chosen node:
+         *
+         * * remember xen,dom0-bootargs if we don't already have
+         *   bootargs (from module #1, above).
+         * * remove bootargs and xen,dom0-bootargs.
          */
         if ( device_tree_node_matches(fdt, node, "chosen") )
         {
             if ( strcmp(prop_name, "bootargs") == 0 )
                 continue;
-            if ( strcmp(prop_name, "xen,dom0-bootargs") == 0 )
-                prop_name = "bootargs";
+            else if ( strcmp(prop_name, "xen,dom0-bootargs") == 0 )
+            {
+                if ( !bootargs )
+                    bootargs = prop_data;
+                continue;
+            }
         }
         /*
          * In a memory node: adjust reg property.
@@ -147,6 +159,14 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
         xfree(new_data);
     }
 
+    if ( device_tree_node_matches(fdt, node, "chosen") && bootargs )
+        fdt_property(kinfo->fdt, "bootargs", bootargs, strlen(bootargs));
+
+    /*
+     * XXX should populate /chosen/linux,initrd-{start,end} here if we
+     * have module[2]
+     */
+
     if ( prop == -FDT_ERR_NOTFOUND )
         return 0;
     return prop;
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEQ-00051O-OO; Thu, 06 Dec 2012 13:11:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbEO-0004ww-KW
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:12 +0000
Received: from [193.109.254.147:34805] by server-10.bemta-14.messagelabs.com
	id 6B/6C-31741-07990C05; Thu, 06 Dec 2012 13:11:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1354799456!9060338!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 671 invoked from network); 6 Dec 2012 13:10:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46813331"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-IJ;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:46 +0000
Message-ID: <1354799451-16876-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/9] device-tree: get_val cannot cope with cells
	> 2, add early_panic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v3: early_panic instead of BUG_ON
v2: drop unrelated white space fixup
---
 xen/common/device_tree.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 4bb640e..dc28c56 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -58,6 +58,9 @@ static void __init get_val(const u32 **cell, u32 cells, u64 *val)
 {
     *val = 0;
 
+    if ( cells > 2 )
+        early_panic("dtb value contains > 2 cells\n");
+
     while ( cells-- )
     {
         *val <<= 32;
-- 
1.7.9.1
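For reference, the patched get_val() behaviour can be sketched in standalone form. Simplified on two counts: the real Xen code byte-swaps each cell with fdt32_to_cpu(), and it calls early_panic() rather than exit().

```c
/* Standalone sketch of the patched get_val(): fold up to two 32-bit
 * device-tree cells into a u64, rejecting anything larger up front
 * (a u64 can hold at most two cells).  Simplified: the real code
 * byte-swaps with fdt32_to_cpu() and uses early_panic(). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void get_val(const uint32_t **cell, uint32_t cells, uint64_t *val)
{
    *val = 0;

    if ( cells > 2 )
    {
        fprintf(stderr, "dtb value contains > 2 cells\n");
        exit(1);
    }

    while ( cells-- )
    {
        *val <<= 32;        /* shift previous cell into the high half */
        *val |= **cell;     /* real code: fdt32_to_cpu(**cell) */
        (*cell)++;
    }
}
```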


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEW-00058A-5l; Thu, 06 Dec 2012 13:11:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbEV-00056R-Ap
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:19 +0000
Received: from [85.158.143.99:40787] by server-2.bemta-4.messagelabs.com id
	1D/9B-30861-67990C05; Thu, 06 Dec 2012 13:11:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354799457!17128376!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2NzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29654 invoked from network); 6 Dec 2012 13:10:58 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-16.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 13:10:58 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 06 Dec 2012 05:10:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="227474302"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 06 Dec 2012 05:10:55 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:06 +0800
Message-Id: <1354798871-5632-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 06/11] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the DR registers, we use a lazy restore mechanism on access. Therefore,
when receiving such a VM exit, L0 is responsible for switching to the
right DR values before injecting the exit into the L1 hypervisor.
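The new condition can be read as a small predicate; a hedged sketch with simplified types (in the patch the check sits in nvmx_n2_vmexit_handler() and reads v->arch.hvm_vcpu.flag_dr_dirty):

```c
/* Sketch of the forwarding decision: the DR-access exit is reflected
 * to L1 only when L1 enabled MOV-DR exiting *and* this vCPU's debug
 * registers are live (dirty), since DRs are restored lazily.  The
 * struct here is simplified; the constant mirrors bit 23 of the
 * processor-based execution controls (an assumption). */
#include <stdbool.h>

#define CPU_BASED_MOV_DR_EXITING 0x00800000U

struct vcpu { bool flag_dr_dirty; };

static bool dr_exit_pending_for_l1(unsigned int n2_exec_control,
                                   const struct vcpu *v)
{
    return (n2_exec_control & CPU_BASED_MOV_DR_EXITING) &&
           v->flag_dr_dirty;
}
```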

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dab9551..02a7052 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1641,7 +1641,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         break;
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_MOV_DR_EXITING )
+        if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
+            v->arch.hvm_vcpu.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEW-00058A-5l; Thu, 06 Dec 2012 13:11:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbEV-00056R-Ap
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:19 +0000
Received: from [85.158.143.99:40787] by server-2.bemta-4.messagelabs.com id
	1D/9B-30861-67990C05; Thu, 06 Dec 2012 13:11:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354799457!17128376!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM2NzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29654 invoked from network); 6 Dec 2012 13:10:58 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-16.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 13:10:58 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 06 Dec 2012 05:10:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="227474302"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by azsmga001.ch.intel.com with ESMTP; 06 Dec 2012 05:10:55 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:06 +0800
Message-Id: <1354798871-5632-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 06/11] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For DR registers, we use a lazy restore mechanism on access. Therefore,
when receiving such a VM exit, L0 is responsible for switching to the
right DR values before injecting the exit into the L1 hypervisor.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dab9551..02a7052 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1641,7 +1641,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         break;
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_MOV_DR_EXITING )
+        if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
+            v->arch.hvm_vcpu.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEi-0005Mk-JJ; Thu, 06 Dec 2012 13:11:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbEh-0005LB-Ki
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:31 +0000
Received: from [85.158.143.99:41544] by server-3.bemta-4.messagelabs.com id
	FC/62-18211-38990C05; Thu, 06 Dec 2012 13:11:31 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354799489!21293824!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDQ0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10331 invoked from network); 6 Dec 2012 13:11:30 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 13:11:30 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 06 Dec 2012 05:11:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259898128"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 05:11:07 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:10 +0800
Message-Id: <1354798871-5632-11-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 10/11] nested vmx: fix interrupt delivery to
	L2 guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While delivering an interrupt into the L2 guest, the L0 hypervisor needs
to check whether the L1 hypervisor wants to own the interrupt; if not,
L0 injects the interrupt directly into the L2 guest.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 3961bc7..ef8b925 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -163,7 +163,7 @@ enum hvm_intblk nvmx_intr_blocked(struct vcpu *v)
 
 static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 {
-    u32 exit_ctrl;
+    u32 ctrl;
 
     if ( nvmx_intr_blocked(v) != hvm_intblk_none )
     {
@@ -176,11 +176,14 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
         if ( intack.source == hvm_intsrc_pic ||
                  intack.source == hvm_intsrc_lapic )
         {
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, PIN_BASED_VM_EXEC_CONTROL);
+            if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
+                return 0;
+
             vmx_inject_extint(intack.vector);
 
-            exit_ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
-                            VM_EXIT_CONTROLS);
-            if ( exit_ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, VM_EXIT_CONTROLS);
+            if ( ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
             {
                 /* for now, duplicate the ack path in vmx_intr_assist */
                 hvm_vcpu_ack_pending_irq(v, intack);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbEo-0005Sg-0P; Thu, 06 Dec 2012 13:11:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbEm-0005QN-Hb
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:36 +0000
Received: from [85.158.137.99:59960] by server-3.bemta-3.messagelabs.com id
	7C/74-31566-78990C05; Thu, 06 Dec 2012 13:11:35 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354799490!18216979!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTQy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32158 invoked from network); 6 Dec 2012 13:11:34 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-217.messagelabs.com with SMTP;
	6 Dec 2012 13:11:34 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 06 Dec 2012 05:10:22 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252956693"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 05:11:08 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:11 +0800
Message-Id: <1354798871-5632-12-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 11/11] nested vmx: check host ability when
	intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the guest hypervisor tries to read an MSR value, we intercept this
access and return emulated values. Besides that, we also need to ensure
that those emulated values are compatible with the host's capabilities.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   16 +++++++++++++---
 1 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 178adbc..cacbee4 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1314,24 +1314,29 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+#define combine_host_cap(data, host_data) \
+    (((data & host_data) & (~0ul << 32)) | \
+    ((uint32_t)(data | host_data)))
+
 /*
  * Capability reporting
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp = 0;
+    u64 data = 0, host_data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
         return 0;
 
+    rdmsrl(msr, host_data);
+
     /*
      * Remove unsupport features from n1 guest capability MSR
      */
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
-        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
+        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
@@ -1341,6 +1346,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                PIN_BASED_PREEMPT_TIMER;
         tmp = VMX_PINBASED_CTLS_DEFAULT1;
         data = ((data | tmp) << 32) | (tmp);
+        data = combine_host_cap(data, host_data);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
@@ -1368,6 +1374,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
+        data = combine_host_cap(data, host_data);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
@@ -1376,6 +1383,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
+        data = combine_host_cap(data, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
     case MSR_IA32_VMX_TRUE_EXIT_CTLS:
@@ -1391,6 +1399,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
 	/* 0-settings */
         data = ((data | tmp) << 32) | tmp;
+        data = combine_host_cap(data, host_data);
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
     case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
@@ -1401,6 +1410,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
                VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
+        data = combine_host_cap(data, host_data);
         break;
 
     case IA32_FEATURE_CONTROL_MSR:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbF3-0005nt-Ei; Thu, 06 Dec 2012 13:11:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbF2-0005la-9C
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:52 +0000
Received: from [85.158.138.51:20511] by server-3.bemta-3.messagelabs.com id
	13/E4-31566-79990C05; Thu, 06 Dec 2012 13:11:51 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1354799470!19716448!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6803 invoked from network); 6 Dec 2012 13:11:11 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 13:11:11 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 05:11:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="258138821"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 06 Dec 2012 05:10:59 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:08 +0800
Message-Id: <1354798871-5632-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 08/11] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, we need to sync
the APIC-access address from the virtual VMCS (vvmcs) into the shadow
VMCS when performing a virtual vmentry.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dcdc83e..bcb113f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1350,7 +1369,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1680,6 +1700,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:11:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbF3-0005nt-Ei; Thu, 06 Dec 2012 13:11:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbF2-0005la-9C
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:11:52 +0000
Received: from [85.158.138.51:20511] by server-3.bemta-3.messagelabs.com id
	13/E4-31566-79990C05; Thu, 06 Dec 2012 13:11:51 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1354799470!19716448!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6803 invoked from network); 6 Dec 2012 13:11:11 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 13:11:11 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 05:11:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="258138821"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 06 Dec 2012 05:10:59 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:08 +0800
Message-Id: <1354798871-5632-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 08/11] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, we need to sync
the APIC-access address from the virtual VMCS (vvmcs) into the shadow
VMCS when doing virtual_vmentry.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dcdc83e..bcb113f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1350,7 +1369,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1680,6 +1700,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:12:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbFJ-000655-TZ; Thu, 06 Dec 2012 13:12:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgbFI-00063p-PT
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:12:09 +0000
Received: from [193.109.254.147:49757] by server-1.bemta-14.messagelabs.com id
	E1/50-25314-8A990C05; Thu, 06 Dec 2012 13:12:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1354799457!1964778!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7170 invoked from network); 6 Dec 2012 13:10:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 13:10:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216605241"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 13:10:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 08:10:52 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgbE3-0000fX-LE;
	Thu, 06 Dec 2012 13:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 6 Dec 2012 13:10:47 +0000
Message-ID: <1354799451-16876-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6/9] arm: load dom0 kernel from first boot module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v3: - correct limit check in try_zimage_prepare
    - copy the zimage header to a local buffer to avoid issues with
      crossing page boundaries.
    - handle non-page-aligned sources and destinations when loading
    - use a BUFFERABLE mapping when loading kernel from RAM.
---
 xen/arch/arm/kernel.c |   91 ++++++++++++++++++++++++++++++++++--------------
 xen/arch/arm/kernel.h |   11 ++++++
 2 files changed, 75 insertions(+), 27 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 2d56130..c9265d7 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -22,6 +22,7 @@
 #define ZIMAGE_MAGIC_OFFSET 0x24
 #define ZIMAGE_START_OFFSET 0x28
 #define ZIMAGE_END_OFFSET   0x2c
+#define ZIMAGE_HEADER_LEN   0x30
 
 #define ZIMAGE_MAGIC 0x016f2818
 
@@ -65,40 +66,42 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 static void kernel_zimage_load(struct kernel_info *info)
 {
     paddr_t load_addr = info->zimage.load_addr;
+    paddr_t paddr = info->zimage.kernel_addr;
+    paddr_t attr = info->load_attr;
     paddr_t len = info->zimage.len;
-    paddr_t flash = KERNEL_FLASH_ADDRESS;
-    void *src = (void *)FIXMAP_ADDR(FIXMAP_MISC);
     unsigned long offs;
 
-    printk("Loading %"PRIpaddr" byte zImage from flash %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr": [",
-           len, flash, load_addr, load_addr + len);
-    for ( offs = 0; offs < len; offs += PAGE_SIZE )
+    printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
+           paddr, load_addr, load_addr + len);
+    for ( offs = 0; offs < len; )
     {
-        paddr_t ma = gvirt_to_maddr(load_addr + offs);
+        paddr_t s, l, ma = gvirt_to_maddr(load_addr + offs);
         void *dst = map_domain_page(ma>>PAGE_SHIFT);
 
-        if ( ( offs % (1<<20) ) == 0 )
-            printk(".");
+        s = offs & ~PAGE_MASK;
+        l = min(PAGE_SIZE - s, len - offs);
 
-        set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED);
-        memcpy(dst, src, PAGE_SIZE);
-        clear_fixmap(FIXMAP_MISC);
+        copy_from_paddr(dst + s, paddr + offs, l, attr);
 
         unmap_domain_page(dst);
+        offs += l;
     }
-    printk("]\n");
 }
 
 /**
  * Check the image is a zImage and return the load address and length
  */
-static int kernel_try_zimage_prepare(struct kernel_info *info)
+static int kernel_try_zimage_prepare(struct kernel_info *info,
+                                     paddr_t addr, paddr_t size)
 {
-    uint32_t *zimage = (void *)FIXMAP_ADDR(FIXMAP_MISC);
+    uint32_t zimage[ZIMAGE_HEADER_LEN/4];
     uint32_t start, end;
     struct minimal_dtb_header dtb_hdr;
 
-    set_fixmap(FIXMAP_MISC, KERNEL_FLASH_ADDRESS >> PAGE_SHIFT, DEV_SHARED);
+    if ( size < ZIMAGE_HEADER_LEN )
+        return -EINVAL;
+
+    copy_from_paddr(zimage, addr, sizeof(zimage), DEV_SHARED);
 
     if (zimage[ZIMAGE_MAGIC_OFFSET/4] != ZIMAGE_MAGIC)
         return -EINVAL;
@@ -106,16 +109,26 @@ static int kernel_try_zimage_prepare(struct kernel_info *info)
     start = zimage[ZIMAGE_START_OFFSET/4];
     end = zimage[ZIMAGE_END_OFFSET/4];
 
-    clear_fixmap(FIXMAP_MISC);
+    if ( (end - start) > size )
+        return -EINVAL;
 
     /*
      * Check for an appended DTB.
      */
-    copy_from_paddr(&dtb_hdr, KERNEL_FLASH_ADDRESS + end - start, sizeof(dtb_hdr), DEV_SHARED);
-    if (be32_to_cpu(dtb_hdr.magic) == DTB_MAGIC) {
-        end += be32_to_cpu(dtb_hdr.total_size);
+    if ( addr + end - start + sizeof(dtb_hdr) <= size )
+    {
+        copy_from_paddr(&dtb_hdr, addr + end - start,
+                        sizeof(dtb_hdr), DEV_SHARED);
+        if (be32_to_cpu(dtb_hdr.magic) == DTB_MAGIC) {
+            end += be32_to_cpu(dtb_hdr.total_size);
+
+            if ( end > addr + size )
+                return -EINVAL;
+        }
     }
 
+    info->zimage.kernel_addr = addr;
+
     /*
      * If start is zero, the zImage is position independent -- load it
      * at 32k from start of RAM.
@@ -142,25 +155,26 @@ static void kernel_elf_load(struct kernel_info *info)
     free_xenheap_pages(info->kernel_img, info->kernel_order);
 }
 
-static int kernel_try_elf_prepare(struct kernel_info *info)
+static int kernel_try_elf_prepare(struct kernel_info *info,
+                                  paddr_t addr, paddr_t size)
 {
     int rc;
 
-    info->kernel_order = get_order_from_bytes(KERNEL_FLASH_SIZE);
+    info->kernel_order = get_order_from_bytes(size);
     info->kernel_img = alloc_xenheap_pages(info->kernel_order, 0);
     if ( info->kernel_img == NULL )
         panic("Cannot allocate temporary buffer for kernel.\n");
 
-    copy_from_paddr(info->kernel_img, KERNEL_FLASH_ADDRESS, KERNEL_FLASH_SIZE, DEV_SHARED);
+    copy_from_paddr(info->kernel_img, addr, size, info->load_attr);
 
-    if ( (rc = elf_init(&info->elf.elf, info->kernel_img, KERNEL_FLASH_SIZE )) != 0 )
-        return rc;
+    if ( (rc = elf_init(&info->elf.elf, info->kernel_img, size )) != 0 )
+        goto err;
 #ifdef VERBOSE
     elf_set_verbose(&info->elf.elf);
 #endif
     elf_parse_binary(&info->elf.elf);
     if ( (rc = elf_xen_parse(&info->elf.elf, &info->elf.parms)) != 0 )
-        return rc;
+        goto err;
 
     /*
      * TODO: can the ELF header be used to find the physical address
@@ -170,15 +184,38 @@ static int kernel_try_elf_prepare(struct kernel_info *info)
     info->load = kernel_elf_load;
 
     return 0;
+err:
+    free_xenheap_pages(info->kernel_img, info->kernel_order);
+    return rc;
 }
 
 int kernel_prepare(struct kernel_info *info)
 {
     int rc;
 
-    rc = kernel_try_zimage_prepare(info);
+    paddr_t start, size;
+
+    if ( early_info.modules.nr_mods > 1 )
+        panic("Cannot handle dom0 initrd yet\n");
+
+    if ( early_info.modules.nr_mods < 1 )
+    {
+        printk("No boot modules found, trying flash\n");
+        start = KERNEL_FLASH_ADDRESS;
+        size = KERNEL_FLASH_SIZE;
+        info->load_attr = DEV_SHARED;
+    }
+    else
+    {
+        printk("Loading kernel from boot module 1\n");
+        start = early_info.modules.module[1].start;
+        size = early_info.modules.module[1].size;
+        info->load_attr = BUFFERABLE;
+    }
+
+    rc = kernel_try_zimage_prepare(info, start, size);
     if (rc < 0)
-        rc = kernel_try_elf_prepare(info);
+        rc = kernel_try_elf_prepare(info, start, size);
 
     return rc;
 }
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index 4533568..49fe9da 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -22,6 +22,7 @@ struct kernel_info {
 
     union {
         struct {
+            paddr_t kernel_addr;
             paddr_t load_addr;
             paddr_t len;
         } zimage;
@@ -33,9 +34,19 @@ struct kernel_info {
     };
 
     void (*load)(struct kernel_info *info);
+    int load_attr;
 };
 
 int kernel_prepare(struct kernel_info *info);
 void kernel_load(struct kernel_info *info);
 
 #endif /* #ifdef __ARCH_ARM_KERNEL_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:12:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbFO-0006AW-Hu; Thu, 06 Dec 2012 13:12:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbFM-00067E-Ig
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:12:12 +0000
Received: from [85.158.143.35:27335] by server-3.bemta-4.messagelabs.com id
	1D/63-18211-BA990C05; Thu, 06 Dec 2012 13:12:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1354799489!5435152!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMTQy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25099 invoked from network); 6 Dec 2012 13:11:30 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-4.tower-21.messagelabs.com with SMTP;
	6 Dec 2012 13:11:30 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 06 Dec 2012 05:10:19 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252956672"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 05:11:05 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:09 +0800
Message-Id: <1354798871-5632-10-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 09/11] nested vmx: enable PAUSE and RDPMC
	exiting for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index bcb113f..178adbc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1362,6 +1362,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
+               CPU_BASED_PAUSE_EXITING |
+               CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:13:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbGU-0006tt-29; Thu, 06 Dec 2012 13:13:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbGT-0006tZ-Ex
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:13:21 +0000
Received: from [193.109.254.147:29576] by server-4.bemta-14.messagelabs.com id
	80/68-18856-0F990C05; Thu, 06 Dec 2012 13:13:20 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1354799481!4431149!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3Mjk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9879 invoked from network); 6 Dec 2012 13:11:22 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 13:11:22 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 06 Dec 2012 05:10:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="176960673"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 06 Dec 2012 05:10:57 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:01:07 +0800
Message-Id: <1354798871-5632-8-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v4 07/11] nested vmx: enable IA32E mode while do
	VM entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some VMMs may check the platform capabilities to judge whether
long-mode guests are supported, so we need to expose this bit to
the guest VMM.

Xen on Xen works fine with the current solution because Xen does
not check this capability; it directly sets the bit in the VMCS
if the guest supports long mode.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 02a7052..dcdc83e 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1376,7 +1376,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
-               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
+               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
+               VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
         break;
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:17:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbKO-0008DW-Bn; Thu, 06 Dec 2012 13:17:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgbKN-0008DB-DO
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:17:23 +0000
Received: from [85.158.139.83:31926] by server-12.bemta-5.messagelabs.com id
	A7/F0-02886-2EA90C05; Thu, 06 Dec 2012 13:17:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354799841!28721550!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24017 invoked from network); 6 Dec 2012 13:17:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:17:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:17:20 +0000
Message-Id: <50C0A8F002000078000AE99F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:17:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>
References: <1353939218.5830.34.camel@zakaz.uk.xensource.com>
	<50B383B2.9060503@amd.com> <50B3FF8A.4050909@amd.com>
	<1354711402.15296.188.camel@zakaz.uk.xensource.com>
	<50BF7AD3.9010407@amd.com>
	<50BF8C2002000078000AE42D@nat28.tlf.novell.com>
	<50BF841C.6010906@amd.com>
	<1354730007.17165.31.camel@zakaz.uk.xensource.com>
	<1354788485.17165.65.camel@zakaz.uk.xensource.com>
	<50C08BD502000078000AE7BD@nat28.tlf.novell.com>
	<20121206130829.GA1206@linux-hezd.amd.com>
In-Reply-To: <20121206130829.GA1206@linux-hezd.amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 14:08, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> As for whether a container can have more than one update --- typically no but I 
> would like to be able to support this.

In which case further changes would be necessary.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:18:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbLH-0008Nz-SA; Thu, 06 Dec 2012 13:18:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgbLH-0008Nm-18
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:18:19 +0000
Received: from [85.158.139.211:18985] by server-5.bemta-5.messagelabs.com id
	81/07-11353-A1B90C05; Thu, 06 Dec 2012 13:18:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1354799897!16788370!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21966 invoked from network); 6 Dec 2012 13:18:17 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-9.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 13:18:17 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 05:18:16 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259901476"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 05:18:15 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: konrad.wilk@oracle.com,
	xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 21:08:42 +0800
Message-Id: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Cc: linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory for
	map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While mapping sg buffers, we also need to check for DMA buffers
that cross a page boundary. If a guest DMA buffer crosses a page
boundary, Xen should exchange it for contiguous memory.

In addition, the original page contents need to be backed up and
copied back once the memory exchange is done.

This fixes issues where a device DMAs into static software buffers
that cross a page boundary and whose pages are not contiguous in
real hardware.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
---
 drivers/xen/swiotlb-xen.c |   47 ++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 46 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 58db6df..e8f0cfb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
 
+static bool
+check_continguous_region(unsigned long vstart, unsigned long order)
+{
+	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
+	unsigned long next_ma;
+	int i;
+
+	for (i = 1; i < (1 << order); i++) {
+		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
+		if (next_ma != prev_ma + PAGE_SIZE)
+			return false;
+		prev_ma = next_ma;
+	}
+	return true;
+}
+
 /*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above xen_swiotlb_map_page
@@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 
 	for_each_sg(sgl, sg, nelems, i) {
 		phys_addr_t paddr = sg_phys(sg);
-		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
+		unsigned long vstart, order;
+		dma_addr_t dev_addr;
+
+		/*
+		 * While mapping sg buffers, we also need to check for DMA
+		 * buffers that cross a page boundary; if a guest DMA buffer
+		 * does, Xen should exchange it for contiguous memory. The
+		 * original page contents are backed up beforehand and copied
+		 * back once the memory exchange is done.
+		 */
+		if (range_straddles_page_boundary(paddr, sg->length)) {
+			vstart = (unsigned long)__va(paddr & PAGE_MASK);
+			order = get_order(sg->length + (paddr & ~PAGE_MASK));
+			if (!check_continguous_region(vstart, order)) {
+				unsigned long buf;
+				buf = __get_free_pages(GFP_KERNEL, order);
+				memcpy((void *)buf, (void *)vstart,
+					PAGE_SIZE * (1 << order));
+				if (xen_create_contiguous_region(vstart, order,
+						fls64(paddr))) {
+					free_pages(buf, order);
+					return 0;
+				}
+				memcpy((void *)vstart, (void *)buf,
+					PAGE_SIZE * (1 << order));
+				free_pages(buf, order);
+			}
+		}
+
+		dev_addr = xen_phys_to_bus(paddr);
 
 		if (swiotlb_force ||
 		    !dma_capable(hwdev, dev_addr, sg->length) ||
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:30:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbWu-0000Xy-BU; Thu, 06 Dec 2012 13:30:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgbWs-0000Xs-Jg
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:30:18 +0000
Received: from [193.109.254.147:46503] by server-16.bemta-14.messagelabs.com
	id C5/F6-09215-9ED90C05; Thu, 06 Dec 2012 13:30:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354800616!9440912!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28842 invoked from network); 6 Dec 2012 13:30:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:30:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:30:16 +0000
Message-Id: <50C0ABF802000078000AE9C6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:30:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
	<1354798871-5632-12-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354798871-5632-12-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 11/11] nested vmx: check host ability
 when intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 14:01, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> When the guest hypervisor tries to read an MSR value, we intercept this
> behavior and return certain emulated values. Besides that, we also need
> to ensure that those emulated values are compatible with the host's
> capabilities.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   16 +++++++++++++---
>  1 files changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 178adbc..cacbee4 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1314,24 +1314,29 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>      return X86EMUL_OKAY;
>  }
>  
> +#define combine_host_cap(data, host_data) \
> +    (((data & host_data) & (~0ul << 32)) | \
> +    ((uint32_t)(data | host_data)))
> +
>  /*
>   * Capability reporting
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp = 0;
> +    u64 data = 0, host_data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
>          return 0;
>  
> +    rdmsrl(msr, host_data);
> +
>      /*
>       * Remove unsupport features from n1 guest capability MSR
>       */
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
> -        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
> +        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
>          break;
From xen-devel-bounces@lists.xen.org Thu Dec 06 13:30:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbWu-0000Xy-BU; Thu, 06 Dec 2012 13:30:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgbWs-0000Xs-Jg
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:30:18 +0000
Received: from [193.109.254.147:46503] by server-16.bemta-14.messagelabs.com
	id C5/F6-09215-9ED90C05; Thu, 06 Dec 2012 13:30:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354800616!9440912!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28842 invoked from network); 6 Dec 2012 13:30:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:30:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:30:16 +0000
Message-Id: <50C0ABF802000078000AE9C6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:30:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
	<1354798871-5632-12-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354798871-5632-12-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 11/11] nested vmx: check host ability
 when intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 14:01, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> When the guest hypervisor tries to read an MSR value, we intercept this
> access and return an emulated value. Besides that, we also need to
> ensure that those emulated values are compatible with the host's
> capabilities.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   16 +++++++++++++---
>  1 files changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 178adbc..cacbee4 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1314,24 +1314,29 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>      return X86EMUL_OKAY;
>  }
>  
> +#define combine_host_cap(data, host_data) \
> +    (((data & host_data) & (~0ul << 32)) | \
> +    ((uint32_t)(data | host_data)))
> +
>  /*
>   * Capability reporting
>   */
>  int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
>  {
> -    u64 data = 0, tmp = 0;
> +    u64 data = 0, host_data = 0, tmp = 0;
>      int r = 1;
>  
>      if ( !nestedhvm_enabled(current->domain) )
>          return 0;
>  
> +    rdmsrl(msr, host_data);
> +
>      /*
>       * Remove unsupport features from n1 guest capability MSR
>       */
>      switch (msr) {
>      case MSR_IA32_VMX_BASIC:
> -        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
> -               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
> +        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
>          break;
>      case MSR_IA32_VMX_PINBASED_CTLS:
>      case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> @@ -1341,6 +1346,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 PIN_BASED_PREEMPT_TIMER;
>          tmp = VMX_PINBASED_CTLS_DEFAULT1;
>          data = ((data | tmp) << 32) | (tmp);
> +        data = combine_host_cap(data, host_data);

Honestly, I had hoped/expected to get the folding in of "tmp" into
the macro-ization as well (with zero getting passed where needed).
Quite likely the code would become even more readable if you
removed the extra assignments to "tmp" and rather passed the
literals directly (where they have a single, fixed value).

Jan

>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS:
>      case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> @@ -1368,6 +1374,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          tmp = VMX_PROCBASED_CTLS_DEFAULT1;
>          /* 0-settings */
>          data = ((data | tmp) << 32) | (tmp);
> +        data = combine_host_cap(data, host_data);
>          break;
>      case MSR_IA32_VMX_PROCBASED_CTLS2:
>          /* 1-seetings */
> @@ -1376,6 +1383,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>          /* 0-settings */
>          tmp = 0;
>          data = (data << 32) | tmp;
> +        data = combine_host_cap(data, host_data);
>          break;
>      case MSR_IA32_VMX_EXIT_CTLS:
>      case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> @@ -1391,6 +1399,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
>  	/* 0-settings */
>          data = ((data | tmp) << 32) | tmp;
> +        data = combine_host_cap(data, host_data);
>          break;
>      case MSR_IA32_VMX_ENTRY_CTLS:
>      case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> @@ -1401,6 +1410,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 
> *msr_content)
>                 VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
>                 VM_ENTRY_IA32E_MODE;
>          data = ((data | tmp) << 32) | tmp;
> +        data = combine_host_cap(data, host_data);
>          break;
>  
>      case IA32_FEATURE_CONTROL_MSR:
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:32:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbYa-0000gw-QU; Thu, 06 Dec 2012 13:32:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgbYa-0000gn-5o
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:32:04 +0000
Received: from [193.109.254.147:56998] by server-15.bemta-14.messagelabs.com
	id EB/B3-12105-35E90C05; Thu, 06 Dec 2012 13:32:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354800689!9182260!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11846 invoked from network); 6 Dec 2012 13:31:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:31:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:31:28 +0000
Message-Id: <50C0AC3F02000078000AE9C9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:31:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: eddie.dong@intel.com, jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 00/11] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 14:01, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> This series of patches contains some bug fixes and feature enabling for
> nested vmx; please help to review and pull.
> 
> Changes from v3 to v4:
>  - Use a macro to combine MSR value with host capability.
>  - Use the macro defined for bit 55 in IA32_VMX_BASIC MSR in vmcs.c.

With the minor comment on the last patch addressed, feel free
to add my ack on the whole series.

Jan

> Changes from v2 to v3:
>  - Change a hard number to literal name while exposing bit 55 in 
> IA32_VMX_BASIC MSR.
> 
> Changes from v1 to v2:
>  - Use literal name instead of hard numbers to expose default 1 settings in 
> VMX related MSRs.
>  - For TRUE VMX MSRs, we use the same value as normal VMX MSRs.
>  - Fix a coding style issue.
> 
> The following 5 patches are suitable to backport to 4.2.x:
>   nested vmx: fix rflags status in virtual vmexit
>   nested vmx: fix handling of RDTSC
>   nested vmx: fix DR access VM exit
>   nested vmx: enable IA32E mode while do VM entry
>   nested vmx: fix interrupt delivery to L2 guest
> 
> Thanks,
> Dongxiao
> 
> Dongxiao Xu (11):
>   nested vmx: emulate MSR bitmaps
>   nested vmx: use literal name instead of hard numbers
>   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
>   nested vmx: fix rflags status in virtual vmexit
>   nested vmx: fix handling of RDTSC
>   nested vmx: fix DR access VM exit
>   nested vmx: enable IA32E mode while do VM entry
>   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
>   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
>   nested vmx: fix interrupt delivery to L2 guest
>   nested vmx: check host ability when intercept MSR read
> 
>  xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
>  xen/arch/x86/hvm/vmx/vmcs.c        |   30 +++++++++-
>  xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
>  xen/arch/x86/hvm/vmx/vvmx.c        |  113 ++++++++++++++++++++++++++++++------
>  xen/include/asm-x86/hvm/vmx/vmcs.h |    7 ++
>  xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
>  xen/include/asm-x86/hvm/vmx/vvmx.h |   10 +++
>  7 files changed, 151 insertions(+), 24 deletions(-)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 13:37:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 13:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgbeA-00015F-LR; Thu, 06 Dec 2012 13:37:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgbe9-00015A-Qp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 13:37:50 +0000
Received: from [85.158.138.51:26581] by server-1.bemta-3.messagelabs.com id
	1C/67-12169-8AF90C05; Thu, 06 Dec 2012 13:37:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354801063!27723290!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16221 invoked from network); 6 Dec 2012 13:37:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 13:37:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 13:37:41 +0000
Message-Id: <50C0ADB502000078000AE9DA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 13:37:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 14:08, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> While mapping sg buffers, we also need to check whether a DMA buffer
> crosses a page boundary. If a guest DMA buffer does cross a page
> boundary, Xen should exchange contiguous memory for it.
> 
> Besides, we need to back up the original page contents and copy them
> back after the memory exchange is done.
> 
> This fixes issues where a device DMAs into static software buffers
> that cross a page boundary and whose pages are not contiguous in real
> hardware.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> ---
>  drivers/xen/swiotlb-xen.c |   47 
> ++++++++++++++++++++++++++++++++++++++++++++-
>  1 files changed, 46 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 58db6df..e8f0cfb 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, 
> dma_addr_t dev_addr,
>  }
>  EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
>  
> +static bool
> +check_continguous_region(unsigned long vstart, unsigned long order)

check_continguous_region(unsigned long vstart, unsigned int order)

But - why do you need to do this check order based in the first
place? Checking the actual length of the buffer should suffice.

> +{
> +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> +	unsigned long next_ma;

phys_addr_t or some such for both of them.

> +	int i;

unsigned long

> +
> +	for (i = 1; i < (1 << order); i++) {

1UL

> +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> +		if (next_ma != prev_ma + PAGE_SIZE)
> +			return false;
> +		prev_ma = next_ma;
> +	}
> +	return true;
> +}
> +
>  /*
>   * Map a set of buffers described by scatterlist in streaming mode for DMA.
>   * This is the scatter-gather version of the above xen_swiotlb_map_page
> @@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct 
> scatterlist *sgl,
>  
>  	for_each_sg(sgl, sg, nelems, i) {
>  		phys_addr_t paddr = sg_phys(sg);
> -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> +		unsigned long vstart, order;
> +		dma_addr_t dev_addr;
> +
> +		/*
> +		 * While mapping sg buffers, checking to cross page DMA buffer
> +		 * is also needed. If the guest DMA buffer crosses page
> +		 * boundary, Xen should exchange contiguous memory for it.
> +		 * Besides, it is needed to backup the original page contents
> +		 * and copy it back after memory exchange is done.
> +		 */
> +		if (range_straddles_page_boundary(paddr, sg->length)) {
> +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> +			if (!check_continguous_region(vstart, order)) {
> +				unsigned long buf;
> +				buf = __get_free_pages(GFP_KERNEL, order);
> +				memcpy((void *)buf, (void *)vstart,
> +					PAGE_SIZE * (1 << order));
> +				if (xen_create_contiguous_region(vstart, order,
> +						fls64(paddr))) {
> +					free_pages(buf, order);
> +					return 0;
> +				}
> +				memcpy((void *)vstart, (void *)buf,
> +					PAGE_SIZE * (1 << order));
> +				free_pages(buf, order);
> +			}
> +		}
> +
> +		dev_addr = xen_phys_to_bus(paddr);
>  
>  		if (swiotlb_force ||
>  		    !dma_capable(hwdev, dev_addr, sg->length) ||

How about swiotlb_map_page() (for the compound page case)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:05:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgc4D-0002N9-9o; Thu, 06 Dec 2012 14:04:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1Tgc4C-0002N2-4x
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:04:44 +0000
Received: from [85.158.139.211:13299] by server-15.bemta-5.messagelabs.com id
	D2/25-26920-BF5A0C05; Thu, 06 Dec 2012 14:04:43 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354802682!19356506!1
X-Originating-IP: [209.85.214.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20323 invoked from network); 6 Dec 2012 14:04:42 -0000
Received: from mail-bk0-f45.google.com (HELO mail-bk0-f45.google.com)
	(209.85.214.45)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 14:04:42 -0000
Received: by mail-bk0-f45.google.com with SMTP id jk13so3015664bkc.32
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 06:04:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type;
	bh=xZY33r/PZiWK9UuEYuJyXq60Ah8TJ0ePlOU/URGIny4=;
	b=XxPwdu+onJgslo7VOmnBEAgwyZzDe7sEvZgnXJp5C4xsUUlwXjoCe6gnso8XC1+8Jw
	qHPy3sWP52WCVMWi+CkHBmNfzBUQ1qz+2DJf0yT0qcqzEFTqbTICSRwfbhm6ca+p/YPs
	58rfqFyk4BNR2SxjvmQuh1oQ+Ww3DZHIy2uzUHjkS3Cl/jWO7ZHxkDD19j1CcjXh4N5W
	cxVwfKq7R/5LiZ73XAsurZHG8EszJ/FX0GNcyNkuOpZPQCva7GkJO3rUwsh8RFhIGioI
	E8Lpucml1ckZwqwmTTutBbGDj6QlW+f+z3cYL8UStu7zc6wYi378obfZyrifmnGCDv7l
	JupA==
Received: by 10.204.148.145 with SMTP id p17mr645413bkv.136.1354802681977;
	Thu, 06 Dec 2012 06:04:41 -0800 (PST)
Received: from [172.16.26.11] (b0fb107a.bb.sky.com. [176.251.16.122])
	by mx.google.com with ESMTPS id o9sm6577835bko.15.2012.12.06.06.04.40
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 06:04:41 -0800 (PST)
Message-ID: <50C0A5F7.7050405@xen.org>
Date: Thu, 06 Dec 2012 14:04:39 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:16.0) Gecko/20121026 Thunderbird/16.0.2
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] 2013 Xen event plans: Call for Input/Action
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5669306761138255342=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5669306761138255342==
Content-Type: multipart/alternative;
 boundary="------------040106060507030406020800"

This is a multi-part message in MIME format.
--------------040106060507030406020800
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi everybody,

_*Next Xen Hackathon - vote on date/location needed*_
I have been working with a still-to-be-disclosed vendor on a Xen 
Hackathon to be hosted in spring. The following options are on the table:

  * Option 1: May 16/17, Dublin, Ireland (preferred by vendor)
  * Option 2: 2 days in February 11-14, Munich - note that FOSDEM is Feb
    2 & 3
  * Option 3: 2 days in March 18-22, Munich

Given that we had a Hackathon in Munich already and that I know that 
quite a few Xen devs (including me) are on vacation in the middle of 
March, my preference would be option 1. But I still wanted to put this 
to you to get a community view.

_*XenSummit EU, Oct 2013*_
I just signed the contracts to host XenSummit the week of Oct 21-25 at 
the Edinburgh International Conference Centre. The exact dates are still 
to be decided. That time coincides with the 10th birthday of Xen, 
depending on how one counts: the 1.0 release was on Oct 2nd 2003, and 
SOSP 2003, where Xen was presented, was held on October 21 2003.

_*XenSummit US or Asia, April-June 2013*_
Given that the EU summit is some time away, it would make sense to host 
a XenSummit in either the US or Asia in late spring or early summer. 
Time-frame-wise I was thinking of April to June, which should fit well 
with the Xen 4.3 release. The only problem is that there is no bigger 
event we could co-locate with. So I can only do this at a) significant 
cost to Xen.org, or b) if a vendor in the community who has space to host 
100-200 people and is based in the US or Asia would volunteer to provide 
their premises for a 2-day event. If you work for a vendor who can 
accommodate us and would be interested in hosting, please get in 
touch with me.

Best Regards
Lars




--------------040106060507030406020800
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Hi everybody,<br>
    <br>
    <u><b>Next Xen Hackathon</b></u><u><b> - vote on date/location
        needed</b></u><br>
    I have been working with a still to be disclosed vendor on a Xen
    Hackathon to be hosted in spring. The following options are on the
    table<br>
    <ul>
      <li>Option 1: May 16/17, Dublin, Ireland (preferred by vendor)<br>
      </li>
      <li>Option 2: 2 days in February 11-14, Munich - note that FOSDEM
        is Feb 2 &amp; 3<br>
      </li>
      <li>Option 3: 2 days in March 18-22, Munich</li>
    </ul>
    Given that we had a Hackathon in Munich already and that I know that
    quite a few Xen devs (including me) are on vacation in the middle of
    March, my preference would be option 1. But I still wanted to put
    this to you to get a community view.<br>
    <br>
    <u><b>XenSummit EU</b></u><u><b>, Oct 2013<br>
      </b></u>I just signed the contracts to host XenSummit the week of
    Oct 21-25 at the Edinburgh International Conference Centre. The
    exact dates are still to be decided. That time coincides with the
    10th birthday of Xen depending on how one counts: the 1.0 release
    was on Oct 2nd 2003, SOSP 2003 where Xen was presented was October
    21 2003. <br>
    <br>
    <u><b>XenSummit US or Asia, April-June 2013</b></u><br>
    Given that the EU summit is some time away, it would make sense to
    host a XenSummit in either the US or Asia in late spring or early
    summer. Time-frame wise I was thinking of April to June, which
    should fit well with the Xen 4.3 release. The only problem is that
    there is no bigger event we could co-locate with. So I can only do
    this at a) significant cost to Xen.org or b) if a vendor in the
    community who has space to host 100-200 people and is based in the
    US or Asia would volunteer to provide their premises for a 2 day
    event. If you work for a vendor who can accommodate us, and you would
    be interested in hosting, please get in touch with me.<br>
    <br>
    Best Regards<br>
    Lars<br>
    <br>
    <br>
    <br>
  </body>
</html>

--------------040106060507030406020800--


--===============5669306761138255342==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5669306761138255342==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:05:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:05:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgc4w-0002TY-PG; Thu, 06 Dec 2012 14:05:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgc4u-0002ST-HQ
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:05:28 +0000
Received: from [85.158.143.99:56876] by server-3.bemta-4.messagelabs.com id
	04/40-18211-726A0C05; Thu, 06 Dec 2012 14:05:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354802725!23025359!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13854 invoked from network); 6 Dec 2012 14:05:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:05:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:05:24 +0000
Message-Id: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:05:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 0/8] IOMMU: add phantom function support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While I'm unaware of devices making use of this functionality in
proper ways, the goal of this patch set is to leverage the enabling
of the specified behavior as a workaround for devices that behave
as if they made use of this functionality _without_ advertising it
in the PCIe capability structure.

While it would have been possible to leave the generic IOMMU
code untouched, and deal with the creation of the necessary
device context entries in the individual IOMMUs' implementations,
I felt that it was cleaner to have as much as possible of the
necessary abstraction in the generic layer.

The adjustments in particular imply that for the relevant
operations, (PCI-dev, devfn) tuples get passed, with the PCI
device referring to the real device and devfn representing
either the real device or the phantom function. Consequently,
for any operation intended to deal with the real device, the
devfn of the device itself must be used, whereas for anything
targeting the phantom function the passed in value is the
correct one to pass on.

1: IOMMU: adjust (re)assign operation parameters
2: IOMMU: adjust add/remove operation parameters
3: VT-d: adjust context map/unmap parameters
4: AMD IOMMU: adjust flush function parameters
5: IOMMU: consolidate pdev_type() and cache its result for a given device
6: IOMMU: add phantom function support
7: VT-d: relax source qualifier for MSI of phantom functions
8: IOMMU: add option to specify devices behaving like ones using phantom functions

The patch set has meanwhile been tested on the affected systems.

Signed-off-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:09:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgc8t-0002qm-35; Thu, 06 Dec 2012 14:09:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgc8r-0002qY-Fj
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:09:33 +0000
Received: from [85.158.143.99:32662] by server-2.bemta-4.messagelabs.com id
	93/7D-30861-C17A0C05; Thu, 06 Dec 2012 14:09:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354802971!27363090!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13101 invoked from network); 6 Dec 2012 14:09:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 14:09:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16201598"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 14:09:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 14:09:31 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgc8p-0002yR-36;
	Thu, 06 Dec 2012 14:09:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgc8o-0003Ma-BT;
	Thu, 06 Dec 2012 14:09:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14578-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 14:09:30 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14578: trouble: blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14578 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14578/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14481
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14481
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14481

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:09:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgc8t-0002qm-35; Thu, 06 Dec 2012 14:09:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgc8r-0002qY-Fj
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:09:33 +0000
Received: from [85.158.143.99:32662] by server-2.bemta-4.messagelabs.com id
	93/7D-30861-C17A0C05; Thu, 06 Dec 2012 14:09:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354802971!27363090!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13101 invoked from network); 6 Dec 2012 14:09:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 14:09:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16201598"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 14:09:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 14:09:31 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgc8p-0002yR-36;
	Thu, 06 Dec 2012 14:09:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgc8o-0003Ma-BT;
	Thu, 06 Dec 2012 14:09:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14578-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 14:09:30 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14578: trouble: blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14578 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14578/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14481
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14481
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14481

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:11:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcA8-00031E-OK; Thu, 06 Dec 2012 14:10:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcA6-000301-Jz
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:10:50 +0000
Received: from [193.109.254.147:38070] by server-14.bemta-14.messagelabs.com
	id 1E/AE-14517-967A0C05; Thu, 06 Dec 2012 14:10:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354803030!9207984!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20007 invoked from network); 6 Dec 2012 14:10:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:10:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:10:29 +0000
Message-Id: <50C0B56402000078000AEA33@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:10:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part1B2A8644.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 1/8] IOMMU: adjust (re)assign operation
	parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part1B2A8644.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -332,34 +332,31 @@ void amd_iommu_disable_domain_device(str
         disable_ats_device(iommu->seg, bus, devfn);
 }
 
-static int reassign_device( struct domain *source, struct domain *target,
-                            u16 seg, u8 bus, u8 devfn)
+static int reassign_device(struct domain *source, struct domain *target,
+                           u8 devfn, struct pci_dev *pdev)
 {
-    struct pci_dev *pdev;
     struct amd_iommu *iommu;
     int bdf;
     struct hvm_iommu *t = domain_hvm_iommu(target);
 
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev_by_domain(source, seg, bus, devfn);
-    if ( !pdev )
-        return -ENODEV;
-
-    bdf = PCI_BDF2(bus, devfn);
-    iommu = find_iommu_for_device(seg, bdf);
+    bdf = PCI_BDF2(pdev->bus, pdev->devfn);
+    iommu = find_iommu_for_device(pdev->seg, bdf);
     if ( !iommu )
     {
         AMD_IOMMU_DEBUG("Fail to find iommu."
                         " %04x:%02x:%x02.%x cannot be assigned to dom%d\n",
-                        seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                         target->domain_id);
         return -ENODEV;
     }
 
     amd_iommu_disable_domain_device(source, iommu, bdf);
 
-    list_move(&pdev->domain_list, &target->arch.pdev_list);
-    pdev->domain = target;
+    if ( devfn == pdev->devfn )
+    {
+        list_move(&pdev->domain_list, &target->arch.pdev_list);
+        pdev->domain = target;
+    }
 
     /* IO page tables might be destroyed after pci-detach the last device
      * In this case, we have to re-allocate root table for next pci-attach.*/
@@ -368,17 +365,18 @@ static int reassign_device( struct domai
 
     amd_iommu_setup_domain_device(target, iommu, bdf);
     AMD_IOMMU_DEBUG("Re-assign %04x:%02x:%02x.%u from dom%d to dom%d\n",
-                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                    pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                     source->domain_id, target->domain_id);
 
     return 0;
 }
 
-static int amd_iommu_assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+static int amd_iommu_assign_device(struct domain *d, u8 devfn,
+                                   struct pci_dev *pdev)
 {
-    struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(seg);
-    int bdf = PCI_BDF2(bus, devfn);
-    int req_id = get_dma_requestor_id(seg, bdf);
+    struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(pdev->seg);
+    int bdf = PCI_BDF2(pdev->bus, devfn);
+    int req_id = get_dma_requestor_id(pdev->seg, bdf);
 
     if ( ivrs_mappings[req_id].unity_map_enable )
     {
@@ -390,7 +388,7 @@ static int amd_iommu_assign_device(struc
             ivrs_mappings[req_id].read_permission);
     }
 
-    return reassign_device(dom0, d, seg, bus, devfn);
+    return reassign_device(dom0, d, devfn, pdev);
 }
 
 static void deallocate_next_page_table(struct page_info* pg, int level)
@@ -451,12 +449,6 @@ static void amd_iommu_domain_destroy(str
     amd_iommu_flush_all_pages(d);
 }
 
-static int amd_iommu_return_device(
-    struct domain *s, struct domain *t, u16 seg, u8 bus, u8 devfn)
-{
-    return reassign_device(s, t, seg, bus, devfn);
-}
-
 static int amd_iommu_add_device(struct pci_dev *pdev)
 {
     struct amd_iommu *iommu;
@@ -593,7 +585,7 @@ const struct iommu_ops amd_iommu_ops = {
     .teardown = amd_iommu_domain_destroy,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
-    .reassign_device = amd_iommu_return_device,
+    .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .update_ire_from_apic = amd_iommu_ioapic_update_ire,
     .update_ire_from_msi = amd_iommu_msi_msg_update_ire,
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -233,11 +233,16 @@ static int assign_device(struct domain *
         return -EXDEV;
 
     spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev(seg, bus, devfn);
-    if ( pdev )
-        pdev->fault.count = 0;
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    if ( !pdev )
+    {
+        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
+        goto done;
+    }
+
+    pdev->fault.count = 0;
 
-    if ( (rc = hd->platform_ops->assign_device(d, seg, bus, devfn)) )
+    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
         goto done;
 
     if ( has_arch_pdevs(d) && !need_iommu(d) )
@@ -368,18 +373,11 @@ int deassign_device(struct domain *d, u1
         return -EINVAL;
 
     ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev(seg, bus, devfn);
+    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
     if ( !pdev )
         return -ENODEV;
 
-    if ( pdev->domain != d )
-    {
-        dprintk(XENLOG_G_ERR,
-                "d%d: deassign a device not owned\n", d->domain_id);
-        return -EINVAL;
-    }
-
-    ret = hd->platform_ops->reassign_device(d, dom0, seg, bus, devfn);
+    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
     if ( ret )
     {
         dprintk(XENLOG_G_ERR,
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1658,17 +1658,10 @@ out:
 static int reassign_device_ownership(
     struct domain *source,
     struct domain *target,
-    u16 seg, u8 bus, u8 devfn)
+    u8 devfn, struct pci_dev *pdev)
 {
-    struct pci_dev *pdev;
     int ret;
 
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev_by_domain(source, seg, bus, devfn);
-
-    if (!pdev)
-        return -ENODEV;
-
     /*
      * Devices assigned to untrusted domains (here assumed to be any domU)
      * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
@@ -1677,16 +1670,19 @@ static int reassign_device_ownership(
     if ( (target != dom0) && !iommu_intremap )
         untrusted_msi = 1;
 
-    ret = domain_context_unmap(source, seg, bus, devfn);
+    ret = domain_context_unmap(source, pdev->seg, pdev->bus, devfn);
     if ( ret )
         return ret;
 
-    ret = domain_context_mapping(target, seg, bus, devfn);
+    ret = domain_context_mapping(target, pdev->seg, pdev->bus, devfn);
     if ( ret )
         return ret;
 
-    list_move(&pdev->domain_list, &target->arch.pdev_list);
-    pdev->domain = target;
+    if ( devfn == pdev->devfn )
+    {
+        list_move(&pdev->domain_list, &target->arch.pdev_list);
+        pdev->domain = target;
+    }
 
     return ret;
 }
@@ -2202,36 +2198,26 @@ int __init intel_vtd_setup(void)
 }
 
 static int intel_iommu_assign_device(
-    struct domain *d, u16 seg, u8 bus, u8 devfn)
+    struct domain *d, u8 devfn, struct pci_dev *pdev)
 {
     struct acpi_rmrr_unit *rmrr;
     int ret = 0, i;
-    struct pci_dev *pdev;
-    u16 bdf;
+    u16 bdf, seg;
+    u8 bus;
 
     if ( list_empty(&acpi_drhd_units) )
         return -ENODEV;
 
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev(seg, bus, devfn);
-    if (!pdev)
-        return -ENODEV;
-
-    if (pdev->domain != dom0)
-    {
-        dprintk(XENLOG_ERR VTDPREFIX,
-                "IOMMU: assign a assigned device\n");
-       return -EBUSY;
-    }
-
-    ret = reassign_device_ownership(dom0, d, seg, bus, devfn);
+    ret = reassign_device_ownership(dom0, d, devfn, pdev);
     if ( ret )
         goto done;
 
     /* FIXME: Because USB RMRR conflicts with guest bios region,
      * ignore USB RMRR temporarily.
      */
-    if ( is_usb_device(seg, bus, devfn) )
+    seg = pdev->seg;
+    bus = pdev->bus;
+    if ( is_usb_device(seg, bus, pdev->devfn) )
     {
         ret = 0;
         goto done;
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -97,13 +97,13 @@ struct iommu_ops {
     int (*add_device)(struct pci_dev *pdev);
     int (*enable_device)(struct pci_dev *pdev);
     int (*remove_device)(struct pci_dev *pdev);
-    int (*assign_device)(struct domain *d, u16 seg, u8 bus, u8 devfn);
+    int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
     int (*reassign_device)(struct domain *s, struct domain *t,
-			   u16 seg, u8 bus, u8 devfn);
+			   u8 devfn, struct pci_dev *);
     int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
     void (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);



gfn);=0A     int (*reassign_device)(struct domain *s, struct domain =
*t,=0A-			   u16 seg, u8 bus, u8 devfn);=0A+			=
   u8 devfn, struct pci_dev *);=0A     int (*get_device_group_id)(u16 seg, =
u8 bus, u8 devfn);=0A     void (*update_ire_from_apic)(unsigned int apic, =
unsigned int reg, unsigned int value);=0A     void (*update_ire_from_msi)(s=
truct msi_desc *msi_desc, struct msi_msg *msg);=0A
--=__Part1B2A8644.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part1B2A8644.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:11:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcA8-00031E-OK; Thu, 06 Dec 2012 14:10:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcA6-000301-Jz
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:10:50 +0000
Received: from [193.109.254.147:38070] by server-14.bemta-14.messagelabs.com
	id 1E/AE-14517-967A0C05; Thu, 06 Dec 2012 14:10:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354803030!9207984!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20007 invoked from network); 6 Dec 2012 14:10:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:10:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:10:29 +0000
Message-Id: <50C0B56402000078000AEA33@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:10:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part1B2A8644.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 1/8] IOMMU: adjust (re)assign operation
	parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part1B2A8644.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -332,34 +332,31 @@ void amd_iommu_disable_domain_device(str
         disable_ats_device(iommu->seg, bus, devfn);
 }
=20
-static int reassign_device( struct domain *source, struct domain *target,
-                            u16 seg, u8 bus, u8 devfn)
+static int reassign_device(struct domain *source, struct domain *target,
+                           u8 devfn, struct pci_dev *pdev)
 {
-    struct pci_dev *pdev;
     struct amd_iommu *iommu;
     int bdf;
     struct hvm_iommu *t =3D domain_hvm_iommu(target);
=20
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev =3D pci_get_pdev_by_domain(source, seg, bus, devfn);
-    if ( !pdev )
-        return -ENODEV;
-
-    bdf =3D PCI_BDF2(bus, devfn);
-    iommu =3D find_iommu_for_device(seg, bdf);
+    bdf =3D PCI_BDF2(pdev->bus, pdev->devfn);
+    iommu =3D find_iommu_for_device(pdev->seg, bdf);
     if ( !iommu )
     {
         AMD_IOMMU_DEBUG("Fail to find iommu."
                         " %04x:%02x:%x02.%x cannot be assigned to =
dom%d\n",
-                        seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(de=
vfn),
                         target->domain_id);
         return -ENODEV;
     }
=20
     amd_iommu_disable_domain_device(source, iommu, bdf);
=20
-    list_move(&pdev->domain_list, &target->arch.pdev_list);
-    pdev->domain =3D target;
+    if ( devfn =3D=3D pdev->devfn )
+    {
+        list_move(&pdev->domain_list, &target->arch.pdev_list);
+        pdev->domain =3D target;
+    }
=20
     /* IO page tables might be destroyed after pci-detach the last device
      * In this case, we have to re-allocate root table for next pci-attach=
.*/
@@ -368,17 +365,18 @@ static int reassign_device( struct domai
=20
     amd_iommu_setup_domain_device(target, iommu, bdf);
     AMD_IOMMU_DEBUG("Re-assign %04x:%02x:%02x.%u from dom%d to dom%d\n",
-                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                    pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn)=
,
                     source->domain_id, target->domain_id);
=20
     return 0;
 }
=20
-static int amd_iommu_assign_device(struct domain *d, u16 seg, u8 bus, u8 =
devfn)
+static int amd_iommu_assign_device(struct domain *d, u8 devfn,
+                                   struct pci_dev *pdev)
 {
-    struct ivrs_mappings *ivrs_mappings =3D get_ivrs_mappings(seg);
-    int bdf =3D PCI_BDF2(bus, devfn);
-    int req_id =3D get_dma_requestor_id(seg, bdf);
+    struct ivrs_mappings *ivrs_mappings =3D get_ivrs_mappings(pdev->seg);
+    int bdf =3D PCI_BDF2(pdev->bus, devfn);
+    int req_id =3D get_dma_requestor_id(pdev->seg, bdf);
=20
     if ( ivrs_mappings[req_id].unity_map_enable )
     {
@@ -390,7 +388,7 @@ static int amd_iommu_assign_device(struc
             ivrs_mappings[req_id].read_permission);
     }
=20
-    return reassign_device(dom0, d, seg, bus, devfn);
+    return reassign_device(dom0, d, devfn, pdev);
 }
=20
 static void deallocate_next_page_table(struct page_info* pg, int level)
@@ -451,12 +449,6 @@ static void amd_iommu_domain_destroy(str
     amd_iommu_flush_all_pages(d);
 }
=20
-static int amd_iommu_return_device(
-    struct domain *s, struct domain *t, u16 seg, u8 bus, u8 devfn)
-{
-    return reassign_device(s, t, seg, bus, devfn);
-}
-
 static int amd_iommu_add_device(struct pci_dev *pdev)
 {
     struct amd_iommu *iommu;
@@ -593,7 +585,7 @@ const struct iommu_ops amd_iommu_ops =3D {
     .teardown =3D amd_iommu_domain_destroy,
     .map_page =3D amd_iommu_map_page,
     .unmap_page =3D amd_iommu_unmap_page,
-    .reassign_device =3D amd_iommu_return_device,
+    .reassign_device =3D reassign_device,
     .get_device_group_id =3D amd_iommu_group_id,
     .update_ire_from_apic =3D amd_iommu_ioapic_update_ire,
     .update_ire_from_msi =3D amd_iommu_msi_msg_update_ire,
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -233,11 +233,16 @@ static int assign_device(struct domain *
         return -EXDEV;
=20
     spin_lock(&pcidevs_lock);
-    pdev =3D pci_get_pdev(seg, bus, devfn);
-    if ( pdev )
-        pdev->fault.count =3D 0;
+    pdev =3D pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    if ( !pdev )
+    {
+        rc =3D pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
+        goto done;
+    }
+
+    pdev->fault.count =3D 0;
=20
-    if ( (rc =3D hd->platform_ops->assign_device(d, seg, bus, devfn)) )
+    if ( (rc =3D hd->platform_ops->assign_device(d, devfn, pdev)) )
         goto done;
=20
     if ( has_arch_pdevs(d) && !need_iommu(d) )
@@ -368,18 +373,11 @@ int deassign_device(struct domain *d, u1
         return -EINVAL;
=20
     ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev =3D pci_get_pdev(seg, bus, devfn);
+    pdev =3D pci_get_pdev_by_domain(d, seg, bus, devfn);
     if ( !pdev )
         return -ENODEV;
=20
-    if ( pdev->domain !=3D d )
-    {
-        dprintk(XENLOG_G_ERR,
-                "d%d: deassign a device not owned\n", d->domain_id);
-        return -EINVAL;
-    }
-
-    ret =3D hd->platform_ops->reassign_device(d, dom0, seg, bus, devfn);
+    ret =3D hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
     if ( ret )
     {
         dprintk(XENLOG_G_ERR,
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1658,17 +1658,10 @@ out:
 static int reassign_device_ownership(
     struct domain *source,
     struct domain *target,
-    u16 seg, u8 bus, u8 devfn)
+    u8 devfn, struct pci_dev *pdev)
 {
-    struct pci_dev *pdev;
     int ret;
=20
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev =3D pci_get_pdev_by_domain(source, seg, bus, devfn);
-
-    if (!pdev)
-        return -ENODEV;
-
     /*
      * Devices assigned to untrusted domains (here assumed to be any =
domU)
      * can attempt to send arbitrary LAPIC/MSI messages. We are unprotecte=
d
@@ -1677,16 +1670,19 @@ static int reassign_device_ownership(
     if ( (target !=3D dom0) && !iommu_intremap )
         untrusted_msi =3D 1;
=20
-    ret =3D domain_context_unmap(source, seg, bus, devfn);
+    ret =3D domain_context_unmap(source, pdev->seg, pdev->bus, devfn);
     if ( ret )
         return ret;
=20
-    ret =3D domain_context_mapping(target, seg, bus, devfn);
+    ret =3D domain_context_mapping(target, pdev->seg, pdev->bus, devfn);
     if ( ret )
         return ret;
=20
-    list_move(&pdev->domain_list, &target->arch.pdev_list);
-    pdev->domain =3D target;
+    if ( devfn =3D=3D pdev->devfn )
+    {
+        list_move(&pdev->domain_list, &target->arch.pdev_list);
+        pdev->domain =3D target;
+    }
=20
     return ret;
 }
@@ -2202,36 +2198,26 @@ int __init intel_vtd_setup(void)
 }
=20
 static int intel_iommu_assign_device(
-    struct domain *d, u16 seg, u8 bus, u8 devfn)
+    struct domain *d, u8 devfn, struct pci_dev *pdev)
 {
     struct acpi_rmrr_unit *rmrr;
     int ret =3D 0, i;
-    struct pci_dev *pdev;
-    u16 bdf;
+    u16 bdf, seg;
+    u8 bus;
=20
     if ( list_empty(&acpi_drhd_units) )
         return -ENODEV;
=20
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev =3D pci_get_pdev(seg, bus, devfn);
-    if (!pdev)
-        return -ENODEV;
-
-    if (pdev->domain !=3D dom0)
-    {
-        dprintk(XENLOG_ERR VTDPREFIX,
-                "IOMMU: assign a assigned device\n");
-       return -EBUSY;
-    }
-
-    ret =3D reassign_device_ownership(dom0, d, seg, bus, devfn);
+    ret =3D reassign_device_ownership(dom0, d, devfn, pdev);
     if ( ret )
         goto done;
=20
     /* FIXME: Because USB RMRR conflicts with guest bios region,
      * ignore USB RMRR temporarily.
      */
-    if ( is_usb_device(seg, bus, devfn) )
+    seg =3D pdev->seg;
+    bus =3D pdev->bus;
+    if ( is_usb_device(seg, bus, pdev->devfn) )
     {
         ret =3D 0;
         goto done;
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -97,13 +97,13 @@ struct iommu_ops {
     int (*add_device)(struct pci_dev *pdev);
     int (*enable_device)(struct pci_dev *pdev);
     int (*remove_device)(struct pci_dev *pdev);
-    int (*assign_device)(struct domain *d, u16 seg, u8 bus, u8 devfn);
+    int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long =
mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
     int (*reassign_device)(struct domain *s, struct domain *t,
-			   u16 seg, u8 bus, u8 devfn);
+			   u8 devfn, struct pci_dev *);
     int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, =
unsigned int value);
     void (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg =
*msg);



--=__Part1B2A8644.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--=__Part1B2A8644.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDA-0003Pf-Jm; Thu, 06 Dec 2012 14:14:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcD9-0003PM-18
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:13:59 +0000
Received: from [193.109.254.147:59241] by server-16.bemta-14.messagelabs.com
	id B2/E0-09215-628A0C05; Thu, 06 Dec 2012 14:13:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354803236!9667634!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 718 invoked from network); 6 Dec 2012 14:13:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:13:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:13:55 +0000
Message-Id: <50C0B63302000078000AEA43@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:13:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part6352FE33.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 5/8] IOMMU/PCI: consolidate pdev_type() and
 cache its result for a given device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=__Part6352FE33.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Add an "unknown" device type as well as one for PCI-to-PCIe bridges
(the latter of which the IOMMU code, with or without this patch,
doesn't appear to handle properly).

Make sure we don't mistake a device whose config space we can't access
for a legacy PCI device (after all, we don't actually know how to deal
with such a device, and hence shouldn't try to).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -142,7 +142,7 @@ static struct pci_dev *alloc_pdev(struct
     spin_lock_init(&pdev->msix_table_lock);
=20
     /* update bus2bridge */
-    switch ( pdev_type(pseg->nr, bus, devfn) )
+    switch ( pdev->type =3D pdev_type(pseg->nr, bus, devfn) )
     {
         u8 sec_bus, sub_bus;
=20
@@ -182,7 +182,7 @@ static struct pci_dev *alloc_pdev(struct
 static void free_pdev(struct pci_seg *pseg, struct pci_dev *pdev)
 {
     /* update bus2bridge */
-    switch ( pdev_type(pseg->nr, pdev->bus, pdev->devfn) )
+    switch ( pdev->type )
     {
         u8 dev, func, sec_bus, sub_bus;
=20
@@ -200,6 +200,9 @@ static void free_pdev(struct pci_seg *ps
                 pseg->bus2bridge[sec_bus] =3D pseg->bus2bridge[pdev->bus];=

             spin_unlock(&pseg->bus2bridge_lock);
             break;
+
+        default:
+            break;
     }
=20
     list_del(&pdev->alldevs_list);
@@ -587,20 +590,30 @@ void pci_release_devices(struct domain *
=20
 #define PCI_CLASS_BRIDGE_PCI     0x0604
=20
-int pdev_type(u16 seg, u8 bus, u8 devfn)
+enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn)
 {
     u16 class_device, creg;
     u8 d =3D PCI_SLOT(devfn), f =3D PCI_FUNC(devfn);
     int pos =3D pci_find_cap_offset(seg, bus, d, f, PCI_CAP_ID_EXP);
=20
     class_device =3D pci_conf_read16(seg, bus, d, f, PCI_CLASS_DEVICE);
-    if ( class_device =3D=3D PCI_CLASS_BRIDGE_PCI )
+    switch ( class_device )
     {
+    case PCI_CLASS_BRIDGE_PCI:
         if ( !pos )
             return DEV_TYPE_LEGACY_PCI_BRIDGE;
         creg =3D pci_conf_read16(seg, bus, d, f, pos + PCI_EXP_FLAGS);
-        return ((creg & PCI_EXP_FLAGS_TYPE) >> 4) =3D=3D PCI_EXP_TYPE_PCI_=
BRIDGE ?
-            DEV_TYPE_PCIe2PCI_BRIDGE : DEV_TYPE_PCIe_BRIDGE;
+        switch ( (creg & PCI_EXP_FLAGS_TYPE) >> 4 )
+        {
+        case PCI_EXP_TYPE_PCI_BRIDGE:
+            return DEV_TYPE_PCIe2PCI_BRIDGE;
+        case PCI_EXP_TYPE_PCIE_BRIDGE:
+            return DEV_TYPE_PCI2PCIe_BRIDGE;
+        }
+        return DEV_TYPE_PCIe_BRIDGE;
+
+    case 0x0000: case 0xffff:
+        return DEV_TYPE_PCI_UNKNOWN;
     }
=20
     return pos ? DEV_TYPE_PCIe_ENDPOINT : DEV_TYPE_PCI;
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -430,7 +430,6 @@ void io_apic_write_remap_rte(
=20
 static void set_msi_source_id(struct pci_dev *pdev, struct iremap_entry =
*ire)
 {
-    int type;
     u16 seg;
     u8 bus, devfn, secbus;
     int ret;
@@ -441,8 +440,7 @@ static void set_msi_source_id(struct pci
     seg =3D pdev->seg;
     bus =3D pdev->bus;
     devfn =3D pdev->devfn;
-    type =3D pdev_type(seg, bus, devfn);
-    switch ( type )
+    switch ( pdev->type )
     {
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
@@ -474,7 +472,7 @@ static void set_msi_source_id(struct pci
     default:
         dprintk(XENLOG_WARNING VTDPREFIX,
                 "d%d: unknown(%u): %04x:%02x:%02x.%u\n",
-                pdev->domain->domain_id, type,
+                pdev->domain->domain_id, pdev->type,
                 seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
         break;
    }
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1419,7 +1419,6 @@ static int domain_context_mapping(
 {
     struct acpi_drhd_unit *drhd;
     int ret =3D 0;
-    u32 type;
     u8 seg =3D pdev->seg, bus =3D pdev->bus, secbus;
=20
     drhd =3D acpi_find_matched_drhd_unit(pdev);
@@ -1428,8 +1427,7 @@ static int domain_context_mapping(
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
-    type = pdev_type(seg, bus, devfn);
-    switch ( type )
+    switch ( pdev->type )
     {
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
@@ -1479,7 +1477,7 @@ static int domain_context_mapping(
 
     default:
         dprintk(XENLOG_ERR VTDPREFIX, "d%d:unknown(%u): %04x:%02x:%02x.%u\n",
-                domain->domain_id, type,
+                domain->domain_id, pdev->type,
                 seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
         ret = -EINVAL;
         break;
@@ -1551,7 +1549,6 @@ static int domain_context_unmap(
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int ret = 0;
-    u32 type;
     u8 seg = pdev->seg, bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
     int found = 0;
 
@@ -1560,8 +1557,7 @@ static int domain_context_unmap(
         return -ENODEV;
     iommu = drhd->iommu;
 
-    type = pdev_type(seg, bus, devfn);
-    switch ( type )
+    switch ( pdev->type )
     {
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
@@ -1608,7 +1604,7 @@ static int domain_context_unmap(
 
     default:
         dprintk(XENLOG_ERR VTDPREFIX, "d%d:unknown(%u): %04x:%02x:%02x.%u\n",
-                domain->domain_id, type,
+                domain->domain_id, pdev->type,
                 seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
         ret = -EINVAL;
         goto out;
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -62,6 +62,17 @@ struct pci_dev {
     const u16 seg;
     const u8 bus;
     const u8 devfn;
+
+    enum pdev_type {
+        DEV_TYPE_PCI_UNKNOWN,
+        DEV_TYPE_PCIe_ENDPOINT,
+        DEV_TYPE_PCIe_BRIDGE,       // PCIe root port, switch
+        DEV_TYPE_PCIe2PCI_BRIDGE,   // PCIe-to-PCI/PCIx bridge
+        DEV_TYPE_PCI2PCIe_BRIDGE,   // PCI/PCIx-to-PCIe bridge
+        DEV_TYPE_LEGACY_PCI_BRIDGE, // Legacy PCI bridge
+        DEV_TYPE_PCI,
+    } type;
+
     struct pci_dev_info info;
     struct arch_pci_dev arch;
     struct {
@@ -83,18 +94,10 @@ struct pci_dev {
 
 extern spinlock_t pcidevs_lock;
 
-enum {
-    DEV_TYPE_PCIe_ENDPOINT,
-    DEV_TYPE_PCIe_BRIDGE,       // PCIe root port, switch
-    DEV_TYPE_PCIe2PCI_BRIDGE,   // PCIe-to-PCI/PCIx bridge
-    DEV_TYPE_LEGACY_PCI_BRIDGE, // Legacy PCI bridge
-    DEV_TYPE_PCI,
-};
-
 bool_t pci_known_segment(u16 seg);
 int pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);
 int scan_pci_devices(void);
-int pdev_type(u16 seg, u8 bus, u8 devfn);
+enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn);
 int find_upstream_bridge(u16 seg, u8 *bus, u8 *devfn, u8 *secbus);
 struct pci_dev *pci_lock_pdev(int seg, int bus, int devfn);
 struct pci_dev *pci_lock_domain_pdev(
--- a/xen/include/xen/pci_regs.h
+++ b/xen/include/xen/pci_regs.h
@@ -371,6 +371,9 @@
 #define  PCI_EXP_TYPE_UPSTREAM	0x5	/* Upstream Port */
 #define  PCI_EXP_TYPE_DOWNSTREAM 0x6	/* Downstream Port */
 #define  PCI_EXP_TYPE_PCI_BRIDGE 0x7	/* PCI/PCI-X Bridge */
+#define  PCI_EXP_TYPE_PCIE_BRIDGE 0x8	/* PCI/PCI-X to PCIE Bridge */
+#define  PCI_EXP_TYPE_RC_END	0x9	/* Root Complex Integrated Endpoint */
+#define  PCI_EXP_TYPE_RC_EC	0xa	/* Root Complex Event Collector */
 #define PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
 #define PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
 #define PCI_EXP_DEVCAP		4	/* Device capabilities */



--=__Part6352FE33.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part6352FE33.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDA-0003Pf-Jm; Thu, 06 Dec 2012 14:14:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcD9-0003PM-18
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:13:59 +0000
Received: from [193.109.254.147:59241] by server-16.bemta-14.messagelabs.com
	id B2/E0-09215-628A0C05; Thu, 06 Dec 2012 14:13:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354803236!9667634!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 718 invoked from network); 6 Dec 2012 14:13:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:13:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:13:55 +0000
Message-Id: <50C0B63302000078000AEA43@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:13:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part6352FE33.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 5/8] IOMMU/PCI: consolidate pdev_type() and
 cache its result for a given device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part6352FE33.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Add an "unknown" device type as well as one for PCI-to-PCIe bridges
(the latter of which other IOMMU code, with or without this patch,
doesn't appear to handle properly).

Make sure we don't mistake a device whose config space we can't access
for a legacy PCI device (after all, we don't know how to deal with such
a device, and hence shouldn't try to).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -142,7 +142,7 @@ static struct pci_dev *alloc_pdev(struct
     spin_lock_init(&pdev->msix_table_lock);
 
     /* update bus2bridge */
-    switch ( pdev_type(pseg->nr, bus, devfn) )
+    switch ( pdev->type = pdev_type(pseg->nr, bus, devfn) )
     {
         u8 sec_bus, sub_bus;
 
@@ -182,7 +182,7 @@ static struct pci_dev *alloc_pdev(struct
 static void free_pdev(struct pci_seg *pseg, struct pci_dev *pdev)
 {
     /* update bus2bridge */
-    switch ( pdev_type(pseg->nr, pdev->bus, pdev->devfn) )
+    switch ( pdev->type )
     {
         u8 dev, func, sec_bus, sub_bus;
 
@@ -200,6 +200,9 @@ static void free_pdev(struct pci_seg *ps
                 pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
             spin_unlock(&pseg->bus2bridge_lock);
             break;
+
+        default:
+            break;
     }
 
     list_del(&pdev->alldevs_list);
@@ -587,20 +590,30 @@ void pci_release_devices(struct domain *
 
 #define PCI_CLASS_BRIDGE_PCI     0x0604
 
-int pdev_type(u16 seg, u8 bus, u8 devfn)
+enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn)
 {
     u16 class_device, creg;
     u8 d = PCI_SLOT(devfn), f = PCI_FUNC(devfn);
     int pos = pci_find_cap_offset(seg, bus, d, f, PCI_CAP_ID_EXP);
 
     class_device = pci_conf_read16(seg, bus, d, f, PCI_CLASS_DEVICE);
-    if ( class_device == PCI_CLASS_BRIDGE_PCI )
+    switch ( class_device )
     {
+    case PCI_CLASS_BRIDGE_PCI:
         if ( !pos )
             return DEV_TYPE_LEGACY_PCI_BRIDGE;
         creg = pci_conf_read16(seg, bus, d, f, pos + PCI_EXP_FLAGS);
-        return ((creg & PCI_EXP_FLAGS_TYPE) >> 4) == PCI_EXP_TYPE_PCI_BRIDGE ?
-            DEV_TYPE_PCIe2PCI_BRIDGE : DEV_TYPE_PCIe_BRIDGE;
+        switch ( (creg & PCI_EXP_FLAGS_TYPE) >> 4 )
+        {
+        case PCI_EXP_TYPE_PCI_BRIDGE:
+            return DEV_TYPE_PCIe2PCI_BRIDGE;
+        case PCI_EXP_TYPE_PCIE_BRIDGE:
+            return DEV_TYPE_PCI2PCIe_BRIDGE;
+        }
+        return DEV_TYPE_PCIe_BRIDGE;
+
+    case 0x0000: case 0xffff:
+        return DEV_TYPE_PCI_UNKNOWN;
     }
 
     return pos ? DEV_TYPE_PCIe_ENDPOINT : DEV_TYPE_PCI;
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -430,7 +430,6 @@ void io_apic_write_remap_rte(
 
 static void set_msi_source_id(struct pci_dev *pdev, struct iremap_entry *ire)
 {
-    int type;
     u16 seg;
     u8 bus, devfn, secbus;
     int ret;
@@ -441,8 +440,7 @@ static void set_msi_source_id(struct pci
     seg = pdev->seg;
     bus = pdev->bus;
     devfn = pdev->devfn;
-    type = pdev_type(seg, bus, devfn);
-    switch ( type )
+    switch ( pdev->type )
     {
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
@@ -474,7 +472,7 @@ static void set_msi_source_id(struct pci
     default:
         dprintk(XENLOG_WARNING VTDPREFIX,
                 "d%d: unknown(%u): %04x:%02x:%02x.%u\n",
-                pdev->domain->domain_id, type,
+                pdev->domain->domain_id, pdev->type,
                 seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
         break;
    }
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1419,7 +1419,6 @@ static int domain_context_mapping(
 {
     struct acpi_drhd_unit *drhd;
     int ret = 0;
-    u32 type;
     u8 seg = pdev->seg, bus = pdev->bus, secbus;
 
     drhd = acpi_find_matched_drhd_unit(pdev);
@@ -1428,8 +1427,7 @@ static int domain_context_mapping(
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
-    type = pdev_type(seg, bus, devfn);
-    switch ( type )
+    switch ( pdev->type )
     {
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
@@ -1479,7 +1477,7 @@ static int domain_context_mapping(
 
     default:
         dprintk(XENLOG_ERR VTDPREFIX, "d%d:unknown(%u): %04x:%02x:%02x.%u\n",
-                domain->domain_id, type,
+                domain->domain_id, pdev->type,
                 seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
         ret = -EINVAL;
         break;
@@ -1551,7 +1549,6 @@ static int domain_context_unmap(
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int ret = 0;
-    u32 type;
     u8 seg = pdev->seg, bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
     int found = 0;
 
@@ -1560,8 +1557,7 @@ static int domain_context_unmap(
         return -ENODEV;
     iommu = drhd->iommu;
 
-    type = pdev_type(seg, bus, devfn);
-    switch ( type )
+    switch ( pdev->type )
     {
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
@@ -1608,7 +1604,7 @@ static int domain_context_unmap(
 
     default:
         dprintk(XENLOG_ERR VTDPREFIX, "d%d:unknown(%u): %04x:%02x:%02x.%u\n",
-                domain->domain_id, type,
+                domain->domain_id, pdev->type,
                 seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
         ret = -EINVAL;
         goto out;
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -62,6 +62,17 @@ struct pci_dev {
     const u16 seg;
     const u8 bus;
     const u8 devfn;
+
+    enum pdev_type {
+        DEV_TYPE_PCI_UNKNOWN,
+        DEV_TYPE_PCIe_ENDPOINT,
+        DEV_TYPE_PCIe_BRIDGE,       // PCIe root port, switch
+        DEV_TYPE_PCIe2PCI_BRIDGE,   // PCIe-to-PCI/PCIx bridge
+        DEV_TYPE_PCI2PCIe_BRIDGE,   // PCI/PCIx-to-PCIe bridge
+        DEV_TYPE_LEGACY_PCI_BRIDGE, // Legacy PCI bridge
+        DEV_TYPE_PCI,
+    } type;
+
     struct pci_dev_info info;
     struct arch_pci_dev arch;
     struct {
@@ -83,18 +94,10 @@ struct pci_dev {
 
 extern spinlock_t pcidevs_lock;
 
-enum {
-    DEV_TYPE_PCIe_ENDPOINT,
-    DEV_TYPE_PCIe_BRIDGE,       // PCIe root port, switch
-    DEV_TYPE_PCIe2PCI_BRIDGE,   // PCIe-to-PCI/PCIx bridge
-    DEV_TYPE_LEGACY_PCI_BRIDGE, // Legacy PCI bridge
-    DEV_TYPE_PCI,
-};
-
 bool_t pci_known_segment(u16 seg);
 int pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);
 int scan_pci_devices(void);
-int pdev_type(u16 seg, u8 bus, u8 devfn);
+enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn);
 int find_upstream_bridge(u16 seg, u8 *bus, u8 *devfn, u8 *secbus);
 struct pci_dev *pci_lock_pdev(int seg, int bus, int devfn);
 struct pci_dev *pci_lock_domain_pdev(
--- a/xen/include/xen/pci_regs.h
+++ b/xen/include/xen/pci_regs.h
@@ -371,6 +371,9 @@
 #define  PCI_EXP_TYPE_UPSTREAM	0x5	/* Upstream Port */
 #define  PCI_EXP_TYPE_DOWNSTREAM 0x6	/* Downstream Port */
 #define  PCI_EXP_TYPE_PCI_BRIDGE 0x7	/* PCI/PCI-X Bridge */
+#define  PCI_EXP_TYPE_PCIE_BRIDGE 0x8	/* PCI/PCI-X to PCIE Bridge */
+#define  PCI_EXP_TYPE_RC_END	0x9	/* Root Complex Integrated Endpoint */
+#define  PCI_EXP_TYPE_RC_EC	0xa	/* Root Complex Event Collector */
 #define PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
 #define PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
 #define PCI_EXP_DEVCAP		4	/* Device capabilities */



--=__Part6352FE33.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part6352FE33.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDh-0003U2-7I; Thu, 06 Dec 2012 14:14:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcDf-0003Tp-Nt
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:14:32 +0000
Received: from [193.109.254.147:8486] by server-1.bemta-14.messagelabs.com id
	D7/CC-25314-648A0C05; Thu, 06 Dec 2012 14:14:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1354803109!9068994!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25061 invoked from network); 6 Dec 2012 14:11:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:11:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:11:48 +0000
Message-Id: <50C0B5B402000078000AEA3B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:11:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartEBDA76B4.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 3/8] VT-d: adjust context map/unmap parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartEBDA76B4.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -79,7 +79,7 @@ void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
 void unmap_vtd_domain_page(void *va);
 int domain_context_mapping_one(struct domain *domain, struct iommu *iommu,
-                               u8 bus, u8 devfn);
+                               u8 bus, u8 devfn, const struct pci_dev *);
 int domain_context_unmap_one(struct domain *domain, struct iommu *iommu,
                              u8 bus, u8 devfn);
 
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1275,7 +1275,7 @@ static void __init intel_iommu_dom0_init
 int domain_context_mapping_one(
     struct domain *domain,
     struct iommu *iommu,
-    u8 bus, u8 devfn)
+    u8 bus, u8 devfn, const struct pci_dev *pdev)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
     struct context_entry *context, *context_entries;
@@ -1292,11 +1292,9 @@ int domain_context_mapping_one(
     if ( context_present(*context) )
     {
         int res = 0;
-        struct pci_dev *pdev = NULL;
 
-        /* First try to get domain ownership from device structure.  If that's
+        /* Try to get domain ownership from device structure.  If that's
          * not available, try to read it from the context itself. */
-        pdev = pci_get_pdev(seg, bus, devfn);
         if ( pdev )
         {
             if ( pdev->domain != domain )
@@ -1417,13 +1415,12 @@ int domain_context_mapping_one(
 }
 
 static int domain_context_mapping(
-    struct domain *domain, u16 seg, u8 bus, u8 devfn)
+    struct domain *domain, u8 devfn, const struct pci_dev *pdev)
 {
     struct acpi_drhd_unit *drhd;
     int ret = 0;
     u32 type;
-    u8 secbus;
-    struct pci_dev *pdev = pci_get_pdev(seg, bus, devfn);
+    u8 seg = pdev->seg, bus = pdev->bus, secbus;
 
     drhd = acpi_find_matched_drhd_unit(pdev);
     if ( !drhd )
@@ -1444,8 +1441,9 @@ static int domain_context_mapping(
             dprintk(VTDPREFIX, "d%d:PCIe: map %04x:%02x:%02x.%u\n",
                     domain->domain_id, seg, bus,
                     PCI_SLOT(devfn), PCI_FUNC(devfn));
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn);
-        if ( !ret && ats_device(pdev, drhd) > 0 )
+        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
+                                         pdev);
+        if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
             enable_ats_device(seg, bus, devfn);
 
         break;
@@ -1456,14 +1454,16 @@ static int domain_context_mapping(
                     domain->domain_id, seg, bus,
                     PCI_SLOT(devfn), PCI_FUNC(devfn));
 
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn);
+        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
+                                         pdev);
         if ( ret )
             break;
 
         if ( find_upstream_bridge(seg, &bus, &devfn, &secbus) < 1 )
             break;
 
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn);
+        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
+                                         pci_get_pdev(seg, bus, devfn));
 
         /*
          * Devices behind PCIe-to-PCI/PCIx bridge may generate different
@@ -1472,7 +1472,8 @@ static int domain_context_mapping(
          */
         if ( !ret && pdev_type(seg, bus, devfn) == DEV_TYPE_PCIe2PCI_BRIDGE &&
              (secbus != pdev->bus || pdev->devfn != 0) )
-            ret = domain_context_mapping_one(domain, drhd->iommu, secbus, 0);
+            ret = domain_context_mapping_one(domain, drhd->iommu, secbus, 0,
+                                             pci_get_pdev(seg, secbus, 0));
 
         break;
 
@@ -1545,18 +1546,15 @@ int domain_context_unmap_one(
 }
 
 static int domain_context_unmap(
-    struct domain *domain, u16 seg, u8 bus, u8 devfn)
+    struct domain *domain, u8 devfn, const struct pci_dev *pdev)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int ret = 0;
     u32 type;
-    u8 tmp_bus, tmp_devfn, secbus;
-    struct pci_dev *pdev = pci_get_pdev(seg, bus, devfn);
+    u8 seg = pdev->seg, bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
     int found = 0;
 
-    BUG_ON(!pdev);
-
     drhd = acpi_find_matched_drhd_unit(pdev);
     if ( !drhd )
         return -ENODEV;
@@ -1576,7 +1574,7 @@ static int domain_context_unmap(
                     domain->domain_id, seg, bus,
                     PCI_SLOT(devfn), PCI_FUNC(devfn));
         ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        if ( !ret && ats_device(pdev, drhd) > 0 )
+        if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
             disable_ats_device(seg, bus, devfn);
 
         break;
@@ -1670,11 +1668,11 @@ static int reassign_device_ownership(
     if ( (target != dom0) && !iommu_intremap )
         untrusted_msi = 1;
 
-    ret = domain_context_unmap(source, pdev->seg, pdev->bus, devfn);
+    ret = domain_context_unmap(source, devfn, pdev);
     if ( ret )
         return ret;
 
-    ret = domain_context_mapping(target, pdev->seg, pdev->bus, devfn);
+    ret = domain_context_mapping(target, devfn, pdev);
     if ( ret )
         return ret;
 
@@ -1884,7 +1882,7 @@ static int intel_iommu_add_device(u8 dev
     if ( !pdev->domain )
         return -EINVAL;
 
-    ret = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, devfn);
+    ret = domain_context_mapping(pdev->domain, devfn, pdev);
     if ( ret )
     {
         dprintk(XENLOG_ERR VTDPREFIX, "d%d: context mapping failed\n",
@@ -1944,14 +1942,14 @@ static int intel_iommu_remove_device(u8
         }
     }
 
-    return domain_context_unmap(pdev->domain, pdev->seg, pdev->bus, devfn);
+    return domain_context_unmap(pdev->domain, devfn, pdev);
 }
 
 static int __init setup_dom0_device(u8 devfn, struct pci_dev *pdev)
 {
     int err;
 
-    err = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, devfn);
+    err = domain_context_mapping(pdev->domain, devfn, pdev);
     if ( !err && devfn == pdev->devfn )
         pci_vtd_quirk(pdev);
     return err;
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -288,7 +288,7 @@ static void map_me_phantom_function(stru
     /* map or unmap ME phantom function */
     if ( map )
         domain_context_mapping_one(domain, drhd->iommu, 0,
-                                   PCI_DEVFN(dev, 7));
+                                   PCI_DEVFN(dev, 7), NULL);
     else
         domain_context_unmap_one(domain, drhd->iommu, 0,
                                  PCI_DEVFN(dev, 7));



--=__PartEBDA76B4.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartEBDA76B4.3__=--


)=0A         pci_vtd_quirk(pdev);=0A     return err;=0A--- a/xen/drivers/pa=
ssthrough/vtd/quirks.c=0A+++ b/xen/drivers/passthrough/vtd/quirks.c=0A@@ =
-288,7 +288,7 @@ static void map_me_phantom_function(stru=0A     /* map or =
unmap ME phantom function */=0A     if ( map )=0A         domain_context_ma=
pping_one(domain, drhd->iommu, 0,=0A-                                   =
PCI_DEVFN(dev, 7));=0A+                                   PCI_DEVFN(dev, =
7), NULL);=0A     else=0A         domain_context_unmap_one(domain, =
drhd->iommu, 0,=0A                                  PCI_DEVFN(dev, 7));=0A
--=__PartEBDA76B4.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartEBDA76B4.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDr-0003W3-R8; Thu, 06 Dec 2012 14:14:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcDq-0003Vk-Ec
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:14:42 +0000
Received: from [193.109.254.147:9989] by server-8.bemta-14.messagelabs.com id
	11/A7-05026-158A0C05; Thu, 06 Dec 2012 14:14:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354803279!3961666!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8159 invoked from network); 6 Dec 2012 14:14:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:14:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:14:38 +0000
Message-Id: <50C0B65F02000078000AEA66@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:14:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0F3E925F.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 6/8] IOMMU: add phantom function support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0F3E925F.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Besides the device context entry generated for a device's base function,
context entries also need to be generated for all of its phantom
functions.

To distinguish the different use cases, a variant of pci_get_pdev() is
introduced that, even when passed a phantom function number, returns
the underlying actual device.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -339,7 +339,15 @@ static void amd_iommu_flush_all_iotlbs(s
         return;
 
     for_each_pdev( d, pdev )
-        amd_iommu_flush_iotlb(pdev->devfn, pdev, gaddr, order);
+    {
+        u8 devfn = pdev->devfn;
+
+        do {
+            amd_iommu_flush_iotlb(devfn, pdev, gaddr, order);
+            devfn += pdev->phantom_stride;
+        } while ( devfn != pdev->devfn &&
+                  PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
+    }
 }
 
 /* Flush iommu cache after p2m changes. */
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -667,7 +667,7 @@ void parse_ppr_log_entry(struct amd_iomm
     devfn = PCI_DEVFN2(device_id);
 
     spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev(iommu->seg, bus, devfn);
+    pdev = pci_get_real_pdev(iommu->seg, bus, devfn);
     spin_unlock(&pcidevs_lock);
 
     if ( pdev )
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -598,7 +598,6 @@ static int update_paging_mode(struct dom
         for_each_pdev( d, pdev )
         {
             bdf = PCI_BDF2(pdev->bus, pdev->devfn);
-            req_id = get_dma_requestor_id(pdev->seg, bdf);
             iommu = find_iommu_for_device(pdev->seg, bdf);
             if ( !iommu )
             {
@@ -607,16 +606,21 @@ static int update_paging_mode(struct dom
             }
 
             spin_lock_irqsave(&iommu->lock, flags);
-            device_entry = iommu->dev_table.buffer +
-                           (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
-
-            /* valid = 0 only works for dom0 passthrough mode */
-            amd_iommu_set_root_page_table((u32 *)device_entry,
-                                          page_to_maddr(hd->root_table),
-                                          hd->domain_id,
-                                          hd->paging_mode, 1);
-
-            amd_iommu_flush_device(iommu, req_id);
+            do {
+                req_id = get_dma_requestor_id(pdev->seg, bdf);
+                device_entry = iommu->dev_table.buffer +
+                               (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
+
+                /* valid = 0 only works for dom0 passthrough mode */
+                amd_iommu_set_root_page_table((u32 *)device_entry,
+                                              page_to_maddr(hd->root_table),
+                                              hd->domain_id,
+                                              hd->paging_mode, 1);
+
+                amd_iommu_flush_device(iommu, req_id);
+                bdf += pdev->phantom_stride;
+            } while ( PCI_DEVFN2(bdf) != pdev->devfn &&
+                      PCI_SLOT(bdf) == PCI_SLOT(pdev->devfn) );
             spin_unlock_irqrestore(&iommu->lock, flags);
         }
 
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -158,6 +158,8 @@ void __init iommu_dom0_init(struct domai
 int iommu_add_device(struct pci_dev *pdev)
 {
     struct hvm_iommu *hd;
+    int rc;
+    u8 devfn;
 
     if ( !pdev->domain )
         return -EINVAL;
@@ -168,7 +170,20 @@ int iommu_add_device(struct pci_dev *pde
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->add_device(pdev->devfn, pdev);
+    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
+    if ( rc || !pdev->phantom_stride )
+        return rc;
+
+    for ( devfn = pdev->devfn ; ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            return 0;
+        rc = hd->platform_ops->add_device(devfn, pdev);
+        if ( rc )
+            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+    }
 }
 
 int iommu_enable_device(struct pci_dev *pdev)
@@ -191,6 +206,8 @@ int iommu_enable_device(struct pci_dev *
 int iommu_remove_device(struct pci_dev *pdev)
 {
     struct hvm_iommu *hd;
+    u8 devfn;
+
     if ( !pdev->domain )
         return -EINVAL;
 
@@ -198,6 +215,22 @@ int iommu_remove_device(struct pci_dev *
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
+    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
+    {
+        int rc;
+
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->remove_device(devfn, pdev);
+        if ( !rc )
+            continue;
+
+        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
+               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        return rc;
+    }
+
     return hd->platform_ops->remove_device(pdev->devfn, pdev);
 }
 
@@ -245,6 +278,18 @@ static int assign_device(struct domain *
     if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
         goto done;
 
+    for ( ; pdev->phantom_stride; rc = 0 )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->assign_device(d, devfn, pdev);
+        if ( rc )
+            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
+                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   rc);
+    }
+
     if ( has_arch_pdevs(d) && !need_iommu(d) )
     {
         d->need_iommu = 1;
@@ -377,6 +422,21 @@ int deassign_device(struct domain *d, u1
     if ( !pdev )
         return -ENODEV;
 
+    while ( pdev->phantom_stride )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+        if ( !ret )
+            continue;
+
+        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
+               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        return ret;
+    }
+
+    devfn = pdev->devfn;
     ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
     if ( ret )
     {
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -144,6 +144,8 @@ static struct pci_dev *alloc_pdev(struct
     /* update bus2bridge */
     switch ( pdev->type = pdev_type(pseg->nr, bus, devfn) )
     {
+        int pos;
+        u16 cap;
         u8 sec_bus, sub_bus;
 
         case DEV_TYPE_PCIe_BRIDGE:
@@ -167,6 +169,20 @@ static struct pci_dev *alloc_pdev(struct
             break;
 
         case DEV_TYPE_PCIe_ENDPOINT:
+            pos = pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn),
+                                      PCI_FUNC(devfn), PCI_CAP_ID_EXP);
+            BUG_ON(!pos);
+            cap = pci_conf_read16(pseg->nr, bus, PCI_SLOT(devfn),
+                                  PCI_FUNC(devfn), pos + PCI_EXP_DEVCAP);
+            if ( cap & PCI_EXP_DEVCAP_PHANTOM )
+            {
+                pdev->phantom_stride = 8 >> MASK_EXTR(cap,
+                                                      PCI_EXP_DEVCAP_PHANTOM);
+                if ( PCI_FUNC(devfn) >= pdev->phantom_stride )
+                    pdev->phantom_stride = 0;
+            }
+            break;
+
         case DEV_TYPE_PCI:
             break;
 
@@ -290,6 +306,27 @@ struct pci_dev *pci_get_pdev(int seg, in
     return NULL;
 }
 
+struct pci_dev *pci_get_real_pdev(int seg, int bus, int devfn)
+{
+    struct pci_dev *pdev;
+    int stride;
+
+    if ( seg < 0 || bus < 0 || devfn < 0 )
+        return NULL;
+
+    for ( pdev = pci_get_pdev(seg, bus, devfn), stride = 4;
+          !pdev && stride; stride >>= 1 )
+    {
+        if ( !(devfn & (8 - stride)) )
+            continue;
+        pdev = pci_get_pdev(seg, bus, devfn & ~(8 - stride));
+        if ( pdev && stride != pdev->phantom_stride )
+            pdev = NULL;
+    }
+
+    return pdev;
+}
+
 struct pci_dev *pci_get_pdev_by_domain(
     struct domain *d, int seg, int bus, int devfn)
 {
@@ -488,8 +525,19 @@ int pci_add_device(u16 seg, u8 bus, u8 d
 
 out:
     spin_unlock(&pcidevs_lock);
-    printk(XENLOG_DEBUG "PCI add %s %04x:%02x:%02x.%u\n", pdev_type,
-           seg, bus, slot, func);
+    if ( !ret )
+    {
+        printk(XENLOG_DEBUG "PCI add %s %04x:%02x:%02x.%u\n", pdev_type,
+               seg, bus, slot, func);
+        while ( pdev->phantom_stride )
+        {
+            func += pdev->phantom_stride;
+            if ( PCI_SLOT(func) )
+                break;
+            printk(XENLOG_DEBUG "PCI phantom %04x:%02x:%02x.%u\n",
+                   seg, bus, slot, func);
+        }
+    }
     return ret;
 }
 
@@ -681,7 +729,7 @@ void pci_check_disable_device(u16 seg, u
     u16 cword;
 
     spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev(seg, bus, devfn);
+    pdev = pci_get_real_pdev(seg, bus, devfn);
     if ( pdev )
     {
         if ( now < pdev->fault.time ||
@@ -698,6 +746,7 @@ void pci_check_disable_device(u16 seg, u
 
     /* Tell the device to stop DMAing; we can't rely on the guest to
      * control it for us. */
+    devfn = pdev->devfn;
     cword = pci_conf_read16(seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                             PCI_COMMAND);
     pci_conf_write16(seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
@@ -759,6 +808,27 @@ struct setup_dom0 {
     int (*handler)(u8 devfn, struct pci_dev *);
 };
 
+static void setup_one_dom0_device(const struct setup_dom0 *ctxt,
+                                  struct pci_dev *pdev)
+{
+    u8 devfn = pdev->devfn;
+
+    do {
+        int err = ctxt->handler(devfn, pdev);
+
+        if ( err )
+        {
+            printk(XENLOG_ERR "setup %04x:%02x:%02x.%u for d%d failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   ctxt->d->domain_id, err);
+            if ( devfn == pdev->devfn )
+                return;
+        }
+        devfn += pdev->phantom_stride;
+    } while ( devfn != pdev->devfn &&
+              PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
+}
+
 static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
 {
     struct setup_dom0 *ctxt = arg;
@@ -777,12 +847,12 @@ static int __init _setup_dom0_pci_device
             {
                 pdev->domain = ctxt->d;
                 list_add(&pdev->domain_list, &ctxt->d->arch.pdev_list);
-                ctxt->handler(devfn, pdev);
+                setup_one_dom0_device(ctxt, pdev);
             }
             else if ( pdev->domain == dom_xen )
             {
                 pdev->domain = ctxt->d;
-                ctxt->handler(devfn, pdev);
+                setup_one_dom0_device(ctxt, pdev);
                 pdev->domain = dom_xen;
             }
             else if ( pdev->domain != ctxt->d )
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -58,6 +58,9 @@ do {
 
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + __must_be_array(x))
 
+#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
+#define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
+
 #define reserve_bootmem(_p,_l) ((void)0)
 
 struct domain;
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -63,6 +63,8 @@ struct pci_dev {
     const u8 bus;
     const u8 devfn;
 
+    u8 phantom_stride;
+
     enum pdev_type {
         DEV_TYPE_PCI_UNKNOWN,
         DEV_TYPE_PCIe_ENDPOINT,
@@ -114,6 +116,7 @@ int pci_ro_device(int seg, int bus, int
 void arch_pci_ro_device(int seg, int bdf);
 int pci_hide_device(int bus, int devfn);
 struct pci_dev *pci_get_pdev(int seg, int bus, int devfn);
+struct pci_dev *pci_get_real_pdev(int seg, int bus, int devfn);
 struct pci_dev *pci_get_pdev_by_domain(
     struct domain *, int seg, int bus, int devfn);
 void pci_check_disable_device(u16 seg, u8 bus, u8 devfn);



--=__Part0F3E925F.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0F3E925F.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDr-0003W3-R8; Thu, 06 Dec 2012 14:14:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcDq-0003Vk-Ec
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:14:42 +0000
Received: from [193.109.254.147:9989] by server-8.bemta-14.messagelabs.com id
	11/A7-05026-158A0C05; Thu, 06 Dec 2012 14:14:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354803279!3961666!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8159 invoked from network); 6 Dec 2012 14:14:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:14:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:14:38 +0000
Message-Id: <50C0B65F02000078000AEA66@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:14:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0F3E925F.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 6/8] IOMMU: add phantom function support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0F3E925F.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Besides the device context entries generated for a device's base
function, context entries also need to be generated for each of its
phantom functions.

To distinguish the different use cases, a variant of pci_get_pdev()
is introduced that returns the underlying actual device even when
passed a phantom function number.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -339,7 +339,15 @@ static void amd_iommu_flush_all_iotlbs(s
         return;
 
     for_each_pdev( d, pdev )
-        amd_iommu_flush_iotlb(pdev->devfn, pdev, gaddr, order);
+    {
+        u8 devfn = pdev->devfn;
+
+        do {
+            amd_iommu_flush_iotlb(devfn, pdev, gaddr, order);
+            devfn += pdev->phantom_stride;
+        } while ( devfn != pdev->devfn &&
+                  PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
+    }
 }
 
 /* Flush iommu cache after p2m changes. */
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -667,7 +667,7 @@ void parse_ppr_log_entry(struct amd_iomm
     devfn = PCI_DEVFN2(device_id);
 
     spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev(iommu->seg, bus, devfn);
+    pdev = pci_get_real_pdev(iommu->seg, bus, devfn);
     spin_unlock(&pcidevs_lock);
 
     if ( pdev )
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -598,7 +598,6 @@ static int update_paging_mode(struct dom
         for_each_pdev( d, pdev )
         {
             bdf = PCI_BDF2(pdev->bus, pdev->devfn);
-            req_id = get_dma_requestor_id(pdev->seg, bdf);
             iommu = find_iommu_for_device(pdev->seg, bdf);
             if ( !iommu )
             {
@@ -607,16 +606,21 @@ static int update_paging_mode(struct dom
             }
 
             spin_lock_irqsave(&iommu->lock, flags);
-            device_entry = iommu->dev_table.buffer +
-                           (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
-
-            /* valid = 0 only works for dom0 passthrough mode */
-            amd_iommu_set_root_page_table((u32 *)device_entry,
-                                          page_to_maddr(hd->root_table),
-                                          hd->domain_id,
-                                          hd->paging_mode, 1);
-
-            amd_iommu_flush_device(iommu, req_id);
+            do {
+                req_id = get_dma_requestor_id(pdev->seg, bdf);
+                device_entry = iommu->dev_table.buffer +
+                               (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
+
+                /* valid = 0 only works for dom0 passthrough mode */
+                amd_iommu_set_root_page_table((u32 *)device_entry,
+                                              page_to_maddr(hd->root_table),
+                                              hd->domain_id,
+                                              hd->paging_mode, 1);
+
+                amd_iommu_flush_device(iommu, req_id);
+                bdf += pdev->phantom_stride;
+            } while ( PCI_DEVFN2(bdf) != pdev->devfn &&
+                      PCI_SLOT(bdf) == PCI_SLOT(pdev->devfn) );
             spin_unlock_irqrestore(&iommu->lock, flags);
         }
 
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -158,6 +158,8 @@ void __init iommu_dom0_init(struct domai
 int iommu_add_device(struct pci_dev *pdev)
 {
     struct hvm_iommu *hd;
+    int rc;
+    u8 devfn;
 
     if ( !pdev->domain )
         return -EINVAL;
@@ -168,7 +170,20 @@ int iommu_add_device(struct pci_dev *pde
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->add_device(pdev->devfn, pdev);
+    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
+    if ( rc || !pdev->phantom_stride )
+        return rc;
+
+    for ( devfn = pdev->devfn ; ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            return 0;
+        rc = hd->platform_ops->add_device(devfn, pdev);
+        if ( rc )
+            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+    }
 }
 
 int iommu_enable_device(struct pci_dev *pdev)
@@ -191,6 +206,8 @@ int iommu_enable_device(struct pci_dev *
 int iommu_remove_device(struct pci_dev *pdev)
 {
     struct hvm_iommu *hd;
+    u8 devfn;
+
     if ( !pdev->domain )
         return -EINVAL;
 
@@ -198,6 +215,22 @@ int iommu_remove_device(struct pci_dev *
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
+    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
+    {
+        int rc;
+
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->remove_device(devfn, pdev);
+        if ( !rc )
+            continue;
+
+        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
+               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        return rc;
+    }
+
     return hd->platform_ops->remove_device(pdev->devfn, pdev);
 }
 
@@ -245,6 +278,18 @@ static int assign_device(struct domain *
     if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
         goto done;
 
+    for ( ; pdev->phantom_stride; rc = 0 )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->assign_device(d, devfn, pdev);
+        if ( rc )
+            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
+                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   rc);
+    }
+
     if ( has_arch_pdevs(d) && !need_iommu(d) )
     {
         d->need_iommu = 1;
@@ -377,6 +422,21 @@ int deassign_device(struct domain *d, u1
     if ( !pdev )
         return -ENODEV;
 
+    while ( pdev->phantom_stride )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+        if ( !ret )
+            continue;
+
+        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
+               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        return ret;
+    }
+
+    devfn = pdev->devfn;
     ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
     if ( ret )
     {
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -144,6 +144,8 @@ static struct pci_dev *alloc_pdev(struct
     /* update bus2bridge */
     switch ( pdev->type = pdev_type(pseg->nr, bus, devfn) )
     {
+        int pos;
+        u16 cap;
         u8 sec_bus, sub_bus;
 
         case DEV_TYPE_PCIe_BRIDGE:
@@ -167,6 +169,20 @@ static struct pci_dev *alloc_pdev(struct
             break;
 
         case DEV_TYPE_PCIe_ENDPOINT:
+            pos = pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn),
+                                      PCI_FUNC(devfn), PCI_CAP_ID_EXP);
+            BUG_ON(!pos);
+            cap = pci_conf_read16(pseg->nr, bus, PCI_SLOT(devfn),
+                                  PCI_FUNC(devfn), pos + PCI_EXP_DEVCAP);
+            if ( cap & PCI_EXP_DEVCAP_PHANTOM )
+            {
+                pdev->phantom_stride = 8 >> MASK_EXTR(cap,
+                                                      PCI_EXP_DEVCAP_PHANTOM);
+                if ( PCI_FUNC(devfn) >= pdev->phantom_stride )
+                    pdev->phantom_stride = 0;
+            }
+            break;
+
         case DEV_TYPE_PCI:
             break;
 
@@ -290,6 +306,27 @@ struct pci_dev *pci_get_pdev(int seg, in
     return NULL;
 }
 
+struct pci_dev *pci_get_real_pdev(int seg, int bus, int devfn)
+{
+    struct pci_dev *pdev;
+    int stride;
+
+    if ( seg < 0 || bus < 0 || devfn < 0 )
+        return NULL;
+
+    for ( pdev = pci_get_pdev(seg, bus, devfn), stride = 4;
+          !pdev && stride; stride >>= 1 )
+    {
+        if ( !(devfn & (8 - stride)) )
+            continue;
+        pdev = pci_get_pdev(seg, bus, devfn & ~(8 - stride));
+        if ( pdev && stride != pdev->phantom_stride )
+            pdev = NULL;
+    }
+
+    return pdev;
+}
+
 struct pci_dev *pci_get_pdev_by_domain(
     struct domain *d, int seg, int bus, int devfn)
 {
@@ -488,8 +525,19 @@ int pci_add_device(u16 seg, u8 bus, u8 d
 
 out:
     spin_unlock(&pcidevs_lock);
-    printk(XENLOG_DEBUG "PCI add %s %04x:%02x:%02x.%u\n", pdev_type,
-           seg, bus, slot, func);
+    if ( !ret )
+    {
+        printk(XENLOG_DEBUG "PCI add %s %04x:%02x:%02x.%u\n", pdev_type,
+               seg, bus, slot, func);
+        while ( pdev->phantom_stride )
+        {
+            func += pdev->phantom_stride;
+            if ( PCI_SLOT(func) )
+                break;
+            printk(XENLOG_DEBUG "PCI phantom %04x:%02x:%02x.%u\n",
+                   seg, bus, slot, func);
+        }
+    }
     return ret;
 }
 
@@ -681,7 +729,7 @@ void pci_check_disable_device(u16 seg, u
     u16 cword;
 
     spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev(seg, bus, devfn);
+    pdev = pci_get_real_pdev(seg, bus, devfn);
     if ( pdev )
     {
         if ( now < pdev->fault.time ||
@@ -698,6 +746,7 @@ void pci_check_disable_device(u16 seg, u
 
     /* Tell the device to stop DMAing; we can't rely on the guest to
      * control it for us. */
+    devfn = pdev->devfn;
     cword = pci_conf_read16(seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                             PCI_COMMAND);
     pci_conf_write16(seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
@@ -759,6 +808,27 @@ struct setup_dom0 {
     int (*handler)(u8 devfn, struct pci_dev *);
 };
 
+static void setup_one_dom0_device(const struct setup_dom0 *ctxt,
+                                  struct pci_dev *pdev)
+{
+    u8 devfn = pdev->devfn;
+
+    do {
+        int err = ctxt->handler(devfn, pdev);
+
+        if ( err )
+        {
+            printk(XENLOG_ERR "setup %04x:%02x:%02x.%u for d%d failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   ctxt->d->domain_id, err);
+            if ( devfn == pdev->devfn )
+                return;
+        }
+        devfn += pdev->phantom_stride;
+    } while ( devfn != pdev->devfn &&
+              PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn) );
+}
+
 static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
 {
     struct setup_dom0 *ctxt = arg;
@@ -777,12 +847,12 @@ static int __init _setup_dom0_pci_device
             {
                 pdev->domain = ctxt->d;
                 list_add(&pdev->domain_list, &ctxt->d->arch.pdev_list);
-                ctxt->handler(devfn, pdev);
+                setup_one_dom0_device(ctxt, pdev);
             }
             else if ( pdev->domain == dom_xen )
             {
                 pdev->domain = ctxt->d;
-                ctxt->handler(devfn, pdev);
+                setup_one_dom0_device(ctxt, pdev);
                 pdev->domain = dom_xen;
             }
             else if ( pdev->domain != ctxt->d )
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -58,6 +58,9 @@ do {
 
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + __must_be_array(x))
 
+#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
+#define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
+
 #define reserve_bootmem(_p,_l) ((void)0)
 
 struct domain;
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -63,6 +63,8 @@ struct pci_dev {
     const u8 bus;
     const u8 devfn;
 
+    u8 phantom_stride;
+
     enum pdev_type {
         DEV_TYPE_PCI_UNKNOWN,
         DEV_TYPE_PCIe_ENDPOINT,
@@ -114,6 +116,7 @@ int pci_ro_device(int seg, int bus, int 
 void arch_pci_ro_device(int seg, int bdf);
 int pci_hide_device(int bus, int devfn);
 struct pci_dev *pci_get_pdev(int seg, int bus, int devfn);
+struct pci_dev *pci_get_real_pdev(int seg, int bus, int devfn);
 struct pci_dev *pci_get_pdev_by_domain(
     struct domain *, int seg, int bus, int devfn);
 void pci_check_disable_device(u16 seg, u8 bus, u8 devfn);



--=__Part0F3E925F.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0F3E925F.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDt-0003WP-99; Thu, 06 Dec 2012 14:14:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcDr-0003Vp-69
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:14:43 +0000
Received: from [193.109.254.147:9234] by server-16.bemta-14.messagelabs.com id
	F9/E1-09215-258A0C05; Thu, 06 Dec 2012 14:14:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354803066!9171506!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26148 invoked from network); 6 Dec 2012 14:11:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:11:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:11:05 +0000
Message-Id: <50C0B58902000078000AEA37@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:11:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part3607AB69.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 2/8] IOMMU: adjust add/remove operation
	parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part3607AB69.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -83,14 +83,14 @@ static void disable_translation(u32 *dte
 }
 
 static void amd_iommu_setup_domain_device(
-    struct domain *domain, struct amd_iommu *iommu, int bdf)
+    struct domain *domain, struct amd_iommu *iommu,
+    u8 devfn, struct pci_dev *pdev)
 {
     void *dte;
     unsigned long flags;
     int req_id, valid = 1;
     int dte_i = 0;
-    u8 bus = PCI_BUS(bdf);
-    u8 devfn = PCI_DEVFN2(bdf);
+    u8 bus = pdev->bus;
 
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
 
@@ -103,7 +103,7 @@ static void amd_iommu_setup_domain_devic
         dte_i = 1;
 
     /* get device-table entry */
-    req_id = get_dma_requestor_id(iommu->seg, bdf);
+    req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
     dte = iommu->dev_table.buffer + (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
 
     spin_lock_irqsave(&iommu->lock, flags);
@@ -115,7 +115,7 @@ static void amd_iommu_setup_domain_devic
             (u32 *)dte, page_to_maddr(hd->root_table), hd->domain_id,
             hd->paging_mode, valid);
 
-        if ( pci_ats_device(iommu->seg, bus, devfn) &&
+        if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
             iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
             iommu_dte_set_iotlb((u32 *)dte, dte_i);
 
@@ -132,32 +132,31 @@ static void amd_iommu_setup_domain_devic
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
-    if ( pci_ats_device(iommu->seg, bus, devfn) &&
-         !pci_ats_enabled(iommu->seg, bus, devfn) )
+    if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
+         !pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
     {
-        struct pci_dev *pdev;
+        if ( devfn == pdev->devfn )
+            enable_ats_device(iommu->seg, bus, devfn);
 
-        enable_ats_device(iommu->seg, bus, devfn);
-
-        ASSERT(spin_is_locked(&pcidevs_lock));
-        pdev = pci_get_pdev(iommu->seg, bus, devfn);
-
-        ASSERT( pdev != NULL );
         amd_iommu_flush_iotlb(pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
     }
 }
 
-static void __init amd_iommu_setup_dom0_device(struct pci_dev *pdev)
+static int __init amd_iommu_setup_dom0_device(u8 devfn, struct pci_dev *pdev)
 {
     int bdf = PCI_BDF2(pdev->bus, pdev->devfn);
     struct amd_iommu *iommu = find_iommu_for_device(pdev->seg, bdf);
 
-    if ( likely(iommu != NULL) )
-        amd_iommu_setup_domain_device(pdev->domain, iommu, bdf);
-    else
+    if ( unlikely(!iommu) )
+    {
         AMD_IOMMU_DEBUG("No iommu for device %04x:%02x:%02x.%u\n",
                         pdev->seg, pdev->bus,
-                        PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+                        PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return -ENODEV;
+    }
+
+    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
+    return 0;
 }
 
 int __init amd_iov_detect(void)
@@ -295,16 +294,16 @@ static void __init amd_iommu_dom0_init(s
 }
 
 void amd_iommu_disable_domain_device(struct domain *domain,
-                                     struct amd_iommu *iommu, int bdf)
+                                     struct amd_iommu *iommu,
+                                     u8 devfn, struct pci_dev *pdev)
 {
     void *dte;
     unsigned long flags;
     int req_id;
-    u8 bus = PCI_BUS(bdf);
-    u8 devfn = PCI_DEVFN2(bdf);
+    u8 bus = pdev->bus;
 
     BUG_ON ( iommu->dev_table.buffer == NULL );
-    req_id = get_dma_requestor_id(iommu->seg, bdf);
+    req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
    dte = iommu->dev_table.buffer + (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
 
     spin_lock_irqsave(&iommu->lock, flags);
@@ -312,7 +311,7 @@ void amd_iommu_disable_domain_device(str
     {
         disable_translation((u32 *)dte);
 
-        if ( pci_ats_device(iommu->seg, bus, devfn) &&
+        if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
             iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
            iommu_dte_set_iotlb((u32 *)dte, 0);
 
@@ -327,7 +326,8 @@ void amd_iommu_disable_domain_device(str
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
-    if ( pci_ats_device(iommu->seg, bus, devfn) &&
+    if ( devfn == pdev->devfn &&
+         pci_ats_device(iommu->seg, bus, devfn) &&
         pci_ats_enabled(iommu->seg, bus, devfn) )
        disable_ats_device(iommu->seg, bus, devfn);
 }
@@ -350,7 +350,7 @@ static int reassign_device(struct domain
         return -ENODEV;
     }
 
-    amd_iommu_disable_domain_device(source, iommu, bdf);
+    amd_iommu_disable_domain_device(source, iommu, devfn, pdev);
 
     if ( devfn == pdev->devfn )
     {
@@ -363,7 +363,7 @@ static int reassign_device(struct domain
     if ( t->root_table == NULL )
         allocate_domain_resources(t);
 
-    amd_iommu_setup_domain_device(target, iommu, bdf);
+    amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
     AMD_IOMMU_DEBUG("Re-assign %04x:%02x:%02x.%u from dom%d to dom%d\n",
                     pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                     source->domain_id, target->domain_id);
@@ -449,7 +449,7 @@ static void amd_iommu_domain_destroy(str
     amd_iommu_flush_all_pages(d);
 }
 
-static int amd_iommu_add_device(struct pci_dev *pdev)
+static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
 {
     struct amd_iommu *iommu;
     u16 bdf;
@@ -462,16 +462,16 @@ static int amd_iommu_add_device(struct p
     {
         AMD_IOMMU_DEBUG("Fail to find iommu."
                        " %04x:%02x:%02x.%u cannot be assigned to dom%d\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                        PCI_FUNC(pdev->devfn), pdev->domain->domain_id);
+                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                        pdev->domain->domain_id);
         return -ENODEV;
     }
 
-    amd_iommu_setup_domain_device(pdev->domain, iommu, bdf);
+    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
     return 0;
 }
 
-static int amd_iommu_remove_device(struct pci_dev *pdev)
+static int amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
 {
     struct amd_iommu *iommu;
     u16 bdf;
@@ -484,12 +484,12 @@ static int amd_iommu_remove_device(struc
     {
         AMD_IOMMU_DEBUG("Fail to find iommu."
                        " %04x:%02x:%02x.%u cannot be removed from dom%d\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                        PCI_FUNC(pdev->devfn), pdev->domain->domain_id);
+                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                        pdev->domain->domain_id);
         return -ENODEV;
     }
 
-    amd_iommu_disable_domain_device(pdev->domain, iommu, bdf);
+    amd_iommu_disable_domain_device(pdev->domain, iommu, devfn, pdev);
     return 0;
 }
 
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -168,7 +168,7 @@ int iommu_add_device(struct pci_dev *pde
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->add_device(pdev);
+    return hd->platform_ops->add_device(pdev->devfn, pdev);
 }
 
 int iommu_enable_device(struct pci_dev *pdev)
@@ -198,7 +198,7 @@ int iommu_remove_device(struct pci_dev *
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->remove_device(pdev);
+    return hd->platform_ops->remove_device(pdev->devfn, pdev);
 }
 
 /*
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -743,7 +743,7 @@ int __init scan_pci_devices(void)
 
 struct setup_dom0 {
     struct domain *d;
-    void (*handler)(struct pci_dev *);
+    int (*handler)(u8 devfn, struct pci_dev *);
 };
 
 static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
@@ -764,12 +764,12 @@ static int __init _setup_dom0_pci_device
             {
                 pdev->domain = ctxt->d;
                 list_add(&pdev->domain_list, &ctxt->d->arch.pdev_list);
-                ctxt->handler(pdev);
+                ctxt->handler(devfn, pdev);
             }
             else if ( pdev->domain == dom_xen )
             {
                 pdev->domain = ctxt->d;
-                ctxt->handler(pdev);
+                ctxt->handler(devfn, pdev);
                 pdev->domain = dom_xen;
             }
             else if ( pdev->domain != ctxt->d )
@@ -783,7 +783,7 @@ static int __init _setup_dom0_pci_device
 }
 
 void __init setup_dom0_pci_devices(
-    struct domain *d, void (*handler)(struct pci_dev *))
+    struct domain *d, int (*handler)(u8 devfn, struct pci_dev *))
 {
     struct setup_dom0 ctxt = { .d = d, .handler = handler };
 
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -50,7 +50,7 @@ int nr_iommus;
 
 static struct tasklet vtd_fault_tasklet;
 
-static void setup_dom0_device(struct pci_dev *);
+static int setup_dom0_device(u8 devfn, struct pci_dev *);
 static void setup_dom0_rmrr(struct domain *d);
 
 static int domain_iommu_domid(struct domain *d,
@@ -1873,7 +1873,7 @@ static int rmrr_identity_mapping(struct
     return 0;
 }
 
-static int intel_iommu_add_device(struct pci_dev *pdev)
+static int intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
 {
     struct acpi_rmrr_unit *rmrr;
     u16 bdf;
@@ -1884,8 +1884,7 @@ static int intel_iommu_add_device(struct
     if ( !pdev->domain )
         return -EINVAL;
 
-    ret = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus,
-                                 pdev->devfn);
+    ret = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, devfn);
     if ( ret )
     {
         dprintk(XENLOG_ERR VTDPREFIX, "d%d: context mapping failed\n",
@@ -1897,7 +1896,7 @@ static int intel_iommu_add_device(struct
     {
         if ( rmrr->segment == pdev->seg &&
              PCI_BUS(bdf) == pdev->bus &&
-             PCI_DEVFN2(bdf) == pdev->devfn )
+             PCI_DEVFN2(bdf) == devfn )
         {
             ret = rmrr_identity_mapping(pdev->domain, rmrr);
             if ( ret )
@@ -1922,7 +1921,7 @@ static int intel_iommu_enable_device(str
     return ret >= 0 ? 0 : ret;
 }
 
-static int intel_iommu_remove_device(struct pci_dev *pdev)
+static int intel_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
 {
     struct acpi_rmrr_unit *rmrr;
     u16 bdf;
@@ -1940,19 +1939,22 @@ static int intel_iommu_remove_device(str
         {
             if ( rmrr->segment == pdev->seg &&
                  PCI_BUS(bdf) == pdev->bus &&
-                 PCI_DEVFN2(bdf) == pdev->devfn )
+                 PCI_DEVFN2(bdf) == devfn )
                 return 0;
         }
     }
 
-    return domain_context_unmap(pdev->domain, pdev->seg, pdev->bus,
-                                pdev->devfn);
+    return domain_context_unmap(pdev->domain, pdev->seg, pdev->bus, devfn);
 }
 
-static void __init setup_dom0_device(struct pci_dev *pdev)
+static int __init setup_dom0_device(u8 devfn, struct pci_dev *pdev)
 {
-    domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, pdev->devfn);
-    pci_vtd_quirk(pdev);
+    int err;
+
+    err = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, devfn);
+    if ( !err && devfn == pdev->devfn )
+        pci_vtd_quirk(pdev);
+    return err;
 }
 
 void clear_fault_bits(struct iommu *iommu)
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -94,9 +94,9 @@ struct msi_msg;
 struct iommu_ops {
     int (*init)(struct domain *d);
     void (*dom0_init)(struct domain *d);
-    int (*add_device)(struct pci_dev *pdev);
+    int (*add_device)(u8 devfn, struct pci_dev *);
     int (*enable_device)(struct pci_dev *pdev);
-    int (*remove_device)(struct pci_dev *pdev);
+    int (*remove_device)(u8 devfn, struct pci_dev *);
     int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -100,7 +100,8 @@ struct pci_dev *pci_lock_pdev(int seg, i
 struct pci_dev *pci_lock_domain_pdev(
     struct domain *, int seg, int bus, int devfn);
 
-void setup_dom0_pci_devices(struct domain *, void (*)(struct pci_dev *));
+void setup_dom0_pci_devices(struct domain *,
+                            int (*)(u8 devfn, struct pci_dev *));
 void pci_release_devices(struct domain *d);
 int pci_add_segment(u16 seg);
 const unsigned long *pci_get_ro_map(u16 seg);



--=__Part3607AB69.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part3607AB69.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:14:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcDt-0003WP-99; Thu, 06 Dec 2012 14:14:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcDr-0003Vp-69
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:14:43 +0000
Received: from [193.109.254.147:9234] by server-16.bemta-14.messagelabs.com id
	F9/E1-09215-258A0C05; Thu, 06 Dec 2012 14:14:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354803066!9171506!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26148 invoked from network); 6 Dec 2012 14:11:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:11:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:11:05 +0000
Message-Id: <50C0B58902000078000AEA37@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:11:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part3607AB69.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 2/8] IOMMU: adjust add/remove operation
	parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part3607AB69.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -83,14 +83,14 @@ static void disable_translation(u32 *dte
 }
 
 static void amd_iommu_setup_domain_device(
-    struct domain *domain, struct amd_iommu *iommu, int bdf)
+    struct domain *domain, struct amd_iommu *iommu,
+    u8 devfn, struct pci_dev *pdev)
 {
     void *dte;
     unsigned long flags;
     int req_id, valid = 1;
     int dte_i = 0;
-    u8 bus = PCI_BUS(bdf);
-    u8 devfn = PCI_DEVFN2(bdf);
+    u8 bus = pdev->bus;
 
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
 
@@ -103,7 +103,7 @@ static void amd_iommu_setup_domain_devic
         dte_i = 1;
 
     /* get device-table entry */
-    req_id = get_dma_requestor_id(iommu->seg, bdf);
+    req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
     dte = iommu->dev_table.buffer + (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
 
     spin_lock_irqsave(&iommu->lock, flags);
@@ -115,7 +115,7 @@ static void amd_iommu_setup_domain_devic
             (u32 *)dte, page_to_maddr(hd->root_table), hd->domain_id,
             hd->paging_mode, valid);
 
-        if ( pci_ats_device(iommu->seg, bus, devfn) &&
+        if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
             iommu_dte_set_iotlb((u32 *)dte, dte_i);
 
@@ -132,32 +132,31 @@ static void amd_iommu_setup_domain_devic
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
-    if ( pci_ats_device(iommu->seg, bus, devfn) &&
-         !pci_ats_enabled(iommu->seg, bus, devfn) )
+    if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
+         !pci_ats_enabled(iommu->seg, bus, pdev->devfn) )
     {
-        struct pci_dev *pdev;
+        if ( devfn == pdev->devfn )
+            enable_ats_device(iommu->seg, bus, devfn);
 
-        enable_ats_device(iommu->seg, bus, devfn);
-
-        ASSERT(spin_is_locked(&pcidevs_lock));
-        pdev = pci_get_pdev(iommu->seg, bus, devfn);
-
-        ASSERT( pdev != NULL );
         amd_iommu_flush_iotlb(pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
     }
 }
 
-static void __init amd_iommu_setup_dom0_device(struct pci_dev *pdev)
+static int __init amd_iommu_setup_dom0_device(u8 devfn, struct pci_dev *pdev)
 {
     int bdf = PCI_BDF2(pdev->bus, pdev->devfn);
     struct amd_iommu *iommu = find_iommu_for_device(pdev->seg, bdf);
 
-    if ( likely(iommu != NULL) )
-        amd_iommu_setup_domain_device(pdev->domain, iommu, bdf);
-    else
+    if ( unlikely(!iommu) )
+    {
         AMD_IOMMU_DEBUG("No iommu for device %04x:%02x:%02x.%u\n",
                         pdev->seg, pdev->bus,
-                        PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+                        PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return -ENODEV;
+    }
+
+    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
+    return 0;
 }
 
 int __init amd_iov_detect(void)
@@ -295,16 +294,16 @@ static void __init amd_iommu_dom0_init(s
 }
 
 void amd_iommu_disable_domain_device(struct domain *domain,
-                                     struct amd_iommu *iommu, int bdf)
+                                     struct amd_iommu *iommu,
+                                     u8 devfn, struct pci_dev *pdev)
 {
     void *dte;
     unsigned long flags;
     int req_id;
-    u8 bus = PCI_BUS(bdf);
-    u8 devfn = PCI_DEVFN2(bdf);
+    u8 bus = pdev->bus;
 
     BUG_ON ( iommu->dev_table.buffer == NULL );
-    req_id = get_dma_requestor_id(iommu->seg, bdf);
+    req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
     dte = iommu->dev_table.buffer + (req_id * IOMMU_DEV_TABLE_ENTRY_SIZE);
 
     spin_lock_irqsave(&iommu->lock, flags);
@@ -312,7 +311,7 @@ void amd_iommu_disable_domain_device(str
     {
         disable_translation((u32 *)dte);
 
-        if ( pci_ats_device(iommu->seg, bus, devfn) &&
+        if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
             iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
             iommu_dte_set_iotlb((u32 *)dte, 0);
 
@@ -327,7 +326,8 @@ void amd_iommu_disable_domain_device(str
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
-    if ( pci_ats_device(iommu->seg, bus, devfn) &&
+    if ( devfn == pdev->devfn &&
+         pci_ats_device(iommu->seg, bus, devfn) &&
          pci_ats_enabled(iommu->seg, bus, devfn) )
         disable_ats_device(iommu->seg, bus, devfn);
 }
@@ -350,7 +350,7 @@ static int reassign_device(struct domain
         return -ENODEV;
     }
 
-    amd_iommu_disable_domain_device(source, iommu, bdf);
+    amd_iommu_disable_domain_device(source, iommu, devfn, pdev);
 
     if ( devfn == pdev->devfn )
     {
@@ -363,7 +363,7 @@ static int reassign_device(struct domain
     if ( t->root_table == NULL )
         allocate_domain_resources(t);
 
-    amd_iommu_setup_domain_device(target, iommu, bdf);
+    amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
     AMD_IOMMU_DEBUG("Re-assign %04x:%02x:%02x.%u from dom%d to dom%d\n",
                     pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                     source->domain_id, target->domain_id);
@@ -449,7 +449,7 @@ static void amd_iommu_domain_destroy(str
     amd_iommu_flush_all_pages(d);
 }
 
-static int amd_iommu_add_device(struct pci_dev *pdev)
+static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
 {
     struct amd_iommu *iommu;
     u16 bdf;
@@ -462,16 +462,16 @@ static int amd_iommu_add_device(struct p
     {
         AMD_IOMMU_DEBUG("Fail to find iommu."
                         " %04x:%02x:%02x.%u cannot be assigned to dom%d\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                        PCI_FUNC(pdev->devfn), pdev->domain->domain_id);
+                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                        pdev->domain->domain_id);
         return -ENODEV;
     }
 
-    amd_iommu_setup_domain_device(pdev->domain, iommu, bdf);
+    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
     return 0;
 }
 
-static int amd_iommu_remove_device(struct pci_dev *pdev)
+static int amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
 {
     struct amd_iommu *iommu;
     u16 bdf;
@@ -484,12 +484,12 @@ static int amd_iommu_remove_device(struc
     {
         AMD_IOMMU_DEBUG("Fail to find iommu."
                         " %04x:%02x:%02x.%u cannot be removed from dom%d\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                        PCI_FUNC(pdev->devfn), pdev->domain->domain_id);
+                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                        pdev->domain->domain_id);
        return -ENODEV;
     }
 
-    amd_iommu_disable_domain_device(pdev->domain, iommu, bdf);
+    amd_iommu_disable_domain_device(pdev->domain, iommu, devfn, pdev);
     return 0;
 }
 
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -168,7 +168,7 @@ int iommu_add_device(struct pci_dev *pde
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->add_device(pdev);
+    return hd->platform_ops->add_device(pdev->devfn, pdev);
 }
 
 int iommu_enable_device(struct pci_dev *pdev)
@@ -198,7 +198,7 @@ int iommu_remove_device(struct pci_dev *
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    return hd->platform_ops->remove_device(pdev);
+    return hd->platform_ops->remove_device(pdev->devfn, pdev);
 }
 
 /*
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -743,7 +743,7 @@ int __init scan_pci_devices(void)
 
 struct setup_dom0 {
     struct domain *d;
-    void (*handler)(struct pci_dev *);
+    int (*handler)(u8 devfn, struct pci_dev *);
 };
 
 static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
@@ -764,12 +764,12 @@ static int __init _setup_dom0_pci_device
             {
                 pdev->domain = ctxt->d;
                 list_add(&pdev->domain_list, &ctxt->d->arch.pdev_list);
-                ctxt->handler(pdev);
+                ctxt->handler(devfn, pdev);
             }
             else if ( pdev->domain == dom_xen )
             {
                 pdev->domain = ctxt->d;
-                ctxt->handler(pdev);
+                ctxt->handler(devfn, pdev);
                 pdev->domain = dom_xen;
             }
             else if ( pdev->domain != ctxt->d )
@@ -783,7 +783,7 @@ static int __init _setup_dom0_pci_device
 }
 
 void __init setup_dom0_pci_devices(
-    struct domain *d, void (*handler)(struct pci_dev *))
+    struct domain *d, int (*handler)(u8 devfn, struct pci_dev *))
 {
     struct setup_dom0 ctxt = { .d = d, .handler = handler };
 
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -50,7 +50,7 @@ int nr_iommus;
 
 static struct tasklet vtd_fault_tasklet;
 
-static void setup_dom0_device(struct pci_dev *);
+static int setup_dom0_device(u8 devfn, struct pci_dev *);
 static void setup_dom0_rmrr(struct domain *d);
 
 static int domain_iommu_domid(struct domain *d,
@@ -1873,7 +1873,7 @@ static int rmrr_identity_mapping(struct
     return 0;
 }
 
-static int intel_iommu_add_device(struct pci_dev *pdev)
+static int intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
 {
     struct acpi_rmrr_unit *rmrr;
     u16 bdf;
@@ -1884,8 +1884,7 @@ static int intel_iommu_add_device(struct
     if ( !pdev->domain )
         return -EINVAL;
 
-    ret = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus,
-                                 pdev->devfn);
+    ret = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, devfn);
     if ( ret )
     {
         dprintk(XENLOG_ERR VTDPREFIX, "d%d: context mapping failed\n",
@@ -1897,7 +1896,7 @@ static int intel_iommu_add_device(struct
     {
         if ( rmrr->segment == pdev->seg &&
             PCI_BUS(bdf) == pdev->bus &&
-             PCI_DEVFN2(bdf) == pdev->devfn )
+             PCI_DEVFN2(bdf) == devfn )
         {
             ret = rmrr_identity_mapping(pdev->domain, rmrr);
             if ( ret )
@@ -1922,7 +1921,7 @@ static int intel_iommu_enable_device(str
     return ret >= 0 ? 0 : ret;
 }
 
-static int intel_iommu_remove_device(struct pci_dev *pdev)
+static int intel_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
 {
     struct acpi_rmrr_unit *rmrr;
     u16 bdf;
@@ -1940,19 +1939,22 @@ static int intel_iommu_remove_device(str
         {
             if ( rmrr->segment == pdev->seg &&
                  PCI_BUS(bdf) == pdev->bus &&
-                 PCI_DEVFN2(bdf) == pdev->devfn )
+                 PCI_DEVFN2(bdf) == devfn )
                 return 0;
        }
     }
 
-    return domain_context_unmap(pdev->domain, pdev->seg, pdev->bus,
-                                pdev->devfn);
+    return domain_context_unmap(pdev->domain, pdev->seg, pdev->bus, devfn);
 }
 
-static void __init setup_dom0_device(struct pci_dev *pdev)
+static int __init setup_dom0_device(u8 devfn, struct pci_dev *pdev)
 {
-    domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, pdev->devfn);
-    pci_vtd_quirk(pdev);
+    int err;
+
+    err = domain_context_mapping(pdev->domain, pdev->seg, pdev->bus, devfn);
+    if ( !err && devfn == pdev->devfn )
+        pci_vtd_quirk(pdev);
+    return err;
 }
 
 void clear_fault_bits(struct iommu *iommu)
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -94,9 +94,9 @@ struct msi_msg;
 struct iommu_ops {
     int (*init)(struct domain *d);
     void (*dom0_init)(struct domain *d);
-    int (*add_device)(struct pci_dev *pdev);
+    int (*add_device)(u8 devfn, struct pci_dev *);
     int (*enable_device)(struct pci_dev *pdev);
-    int (*remove_device)(struct pci_dev *pdev);
+    int (*remove_device)(u8 devfn, struct pci_dev *);
     int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -100,7 +100,8 @@ struct pci_dev *pci_lock_pdev(int seg, i
 struct pci_dev *pci_lock_domain_pdev(
     struct domain *, int seg, int bus, int devfn);
 
-void setup_dom0_pci_devices(struct domain *, void (*)(struct pci_dev *));
+void setup_dom0_pci_devices(struct domain *,
+                            int (*)(u8 devfn, struct pci_dev *));
 void pci_release_devices(struct domain *d);
 int pci_add_segment(u16 seg);
 const unsigned long *pci_get_ro_map(u16 seg);



--=__Part3607AB69.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part3607AB69.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:15:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcEE-0003e4-VK; Thu, 06 Dec 2012 14:15:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcEE-0003df-5A
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:15:06 +0000
Received: from [193.109.254.147:36564] by server-3.bemta-14.messagelabs.com id
	61/42-01317-968A0C05; Thu, 06 Dec 2012 14:15:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354803145!8914742!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20522 invoked from network); 6 Dec 2012 14:12:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:12:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:12:24 +0000
Message-Id: <50C0B5D702000078000AEA3F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:12:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part88B915D7.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 4/8] AMD IOMMU: adjust flush function parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part88B915D7.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -287,12 +287,12 @@ void invalidate_iommu_all(struct amd_iom
     send_iommu_command(iommu, cmd);
 }
 
-void amd_iommu_flush_iotlb(struct pci_dev *pdev,
+void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
                            uint64_t gaddr, unsigned int order)
 {
     unsigned long flags;
     struct amd_iommu *iommu;
-    unsigned int bdf, req_id, queueid, maxpend;
+    unsigned int req_id, queueid, maxpend;
     struct pci_ats_dev *ats_pdev;
 
     if ( !ats_enabled )
@@ -305,8 +305,8 @@ void amd_iommu_flush_iotlb(struct pci_de
     if ( !pci_ats_enabled(ats_pdev->seg, ats_pdev->bus, ats_pdev->devfn) )
         return;
 
-    bdf = PCI_BDF2(ats_pdev->bus, ats_pdev->devfn);
-    iommu = find_iommu_for_device(ats_pdev->seg, bdf);
+    iommu = find_iommu_for_device(ats_pdev->seg,
+                                  PCI_BDF2(ats_pdev->bus, ats_pdev->devfn));
 
     if ( !iommu )
     {
@@ -319,7 +319,7 @@ void amd_iommu_flush_iotlb(struct pci_de
     if ( !iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
         return;
 
-    req_id = get_dma_requestor_id(iommu->seg, bdf);
+    req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(ats_pdev->bus, devfn));
     queueid = req_id;
     maxpend = ats_pdev->ats_queue_depth & 0xff;
 
@@ -339,7 +339,7 @@ static void amd_iommu_flush_all_iotlbs(s
         return;
 
     for_each_pdev( d, pdev )
-        amd_iommu_flush_iotlb(pdev, gaddr, order);
+        amd_iommu_flush_iotlb(pdev->devfn, pdev, gaddr, order);
 }
 
 /* Flush iommu cache after p2m changes. */
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -138,7 +138,7 @@ static void amd_iommu_setup_domain_devic
         if ( devfn == pdev->devfn )
             enable_ats_device(iommu->seg, bus, devfn);
 
-        amd_iommu_flush_iotlb(pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
+        amd_iommu_flush_iotlb(devfn, pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
     }
 }
 
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -78,8 +78,8 @@ void iommu_dte_set_guest_cr3(u32 *dte, u
 void amd_iommu_flush_all_pages(struct domain *d);
 void amd_iommu_flush_pages(struct domain *d, unsigned long gfn,
                            unsigned int order);
-void amd_iommu_flush_iotlb(struct pci_dev *pdev, uint64_t gaddr,
-                           unsigned int order);
+void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
+                           uint64_t gaddr, unsigned int order);
 void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf);
 void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf);
 void amd_iommu_flush_all_caches(struct amd_iommu *iommu);




--=__Part88B915D7.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part88B915D7.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:15:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcEE-0003e4-VK; Thu, 06 Dec 2012 14:15:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcEE-0003df-5A
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:15:06 +0000
Received: from [193.109.254.147:36564] by server-3.bemta-14.messagelabs.com id
	61/42-01317-968A0C05; Thu, 06 Dec 2012 14:15:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354803145!8914742!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20522 invoked from network); 6 Dec 2012 14:12:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:12:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:12:24 +0000
Message-Id: <50C0B5D702000078000AEA3F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:12:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part88B915D7.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 4/8] AMD IOMMU: adjust flush function parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part88B915D7.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... to use a (struct pci_dev *, devfn) pair.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -287,12 +287,12 @@ void invalidate_iommu_all(struct amd_iom
     send_iommu_command(iommu, cmd);
 }
=20
-void amd_iommu_flush_iotlb(struct pci_dev *pdev,
+void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
                            uint64_t gaddr, unsigned int order)
 {
     unsigned long flags;
     struct amd_iommu *iommu;
-    unsigned int bdf, req_id, queueid, maxpend;
+    unsigned int req_id, queueid, maxpend;
     struct pci_ats_dev *ats_pdev;
=20
     if ( !ats_enabled )
@@ -305,8 +305,8 @@ void amd_iommu_flush_iotlb(struct pci_de
     if ( !pci_ats_enabled(ats_pdev->seg, ats_pdev->bus, ats_pdev->devfn) =
)
         return;
=20
-    bdf =3D PCI_BDF2(ats_pdev->bus, ats_pdev->devfn);
-    iommu =3D find_iommu_for_device(ats_pdev->seg, bdf);
+    iommu =3D find_iommu_for_device(ats_pdev->seg,
+                                  PCI_BDF2(ats_pdev->bus, ats_pdev->devfn)=
);
=20
     if ( !iommu )
     {
@@ -319,7 +319,7 @@ void amd_iommu_flush_iotlb(struct pci_de
     if ( !iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
         return;
=20
-    req_id =3D get_dma_requestor_id(iommu->seg, bdf);
+    req_id =3D get_dma_requestor_id(iommu->seg, PCI_BDF2(ats_pdev->bus, =
devfn));
     queueid =3D req_id;
     maxpend =3D ats_pdev->ats_queue_depth & 0xff;
=20
@@ -339,7 +339,7 @@ static void amd_iommu_flush_all_iotlbs(s
         return;
=20
     for_each_pdev( d, pdev )
-        amd_iommu_flush_iotlb(pdev, gaddr, order);
+        amd_iommu_flush_iotlb(pdev->devfn, pdev, gaddr, order);
 }
=20
 /* Flush iommu cache after p2m changes. */
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -138,7 +138,7 @@ static void amd_iommu_setup_domain_devic
         if ( devfn =3D=3D pdev->devfn )
             enable_ats_device(iommu->seg, bus, devfn);
=20
-        amd_iommu_flush_iotlb(pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
+        amd_iommu_flush_iotlb(devfn, pdev, INV_IOMMU_ALL_PAGES_ADDRESS, =
0);
     }
 }
=20
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -78,8 +78,8 @@ void iommu_dte_set_guest_cr3(u32 *dte, u
 void amd_iommu_flush_all_pages(struct domain *d);
 void amd_iommu_flush_pages(struct domain *d, unsigned long gfn,
                            unsigned int order);
-void amd_iommu_flush_iotlb(struct pci_dev *pdev, uint64_t gaddr,
-                           unsigned int order);
+void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
+                           uint64_t gaddr, unsigned int order);
 void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf);
 void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf);
 void amd_iommu_flush_all_caches(struct amd_iommu *iommu);
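
For reference, the requestor-ID composition used throughout this patch (`PCI_BDF2(bus, devfn)`) packs a bus number and devfn into a 16-bit BDF. A rough Python model of Xen's macros (this sketch is not part of the patch; the bit layout mirrors `PCI_BDF2`/`PCI_DEVFN` as defined in Xen's PCI headers):

```python
def pci_bdf2(bus: int, devfn: int) -> int:
    """Mirror of Xen's PCI_BDF2(b, df): 16-bit requestor ID, bus in the
    high byte, devfn in the low byte."""
    return ((bus & 0xff) << 8) | (devfn & 0xff)

def pci_devfn(slot: int, func: int) -> int:
    """Mirror of PCI_DEVFN(d, f): 5-bit slot number, 3-bit function."""
    return ((slot & 0x1f) << 3) | (func & 0x07)

# Example: device 00:02.1 -> devfn 0x11, BDF 0x0011
```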




--=__Part88B915D7.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part88B915D7.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:16:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:16:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcFI-0003zi-F2; Thu, 06 Dec 2012 14:16:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcFH-0003zD-Ka
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:16:11 +0000
Received: from [193.109.254.147:52364] by server-10.bemta-14.messagelabs.com
	id BB/12-31741-AA8A0C05; Thu, 06 Dec 2012 14:16:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354803317!9208779!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17455 invoked from network); 6 Dec 2012 14:15:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:15:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:15:16 +0000
Message-Id: <50C0B68402000078000AEA6A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:15:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part3405A964.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 7/8] VT-d: relax source qualifier for MSI of
 phantom functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part3405A964.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

With ordinary requests allowed to come from phantom functions, the
remapping tables ought to be set up to allow for MSI triggers to come
from other than the "real" device too.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: "Zhang, Xiantao" <xiantao.zhang@intel.com>

--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -442,13 +442,22 @@ static void set_msi_source_id(struct pci
     devfn =3D pdev->devfn;
     switch ( pdev->type )
     {
+        unsigned int sq;
+
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
     case DEV_TYPE_LEGACY_PCI_BRIDGE:
         break;
=20
     case DEV_TYPE_PCIe_ENDPOINT:
-        set_ire_sid(ire, SVT_VERIFY_SID_SQ, SQ_ALL_16, PCI_BDF2(bus, =
devfn));
+        switch ( pdev->phantom_stride )
+        {
+        case 1: sq =3D SQ_13_IGNORE_3; break;
+        case 2: sq =3D SQ_13_IGNORE_2; break;
+        case 4: sq =3D SQ_13_IGNORE_1; break;
+        default: sq =3D SQ_ALL_16; break;
+        }
+        set_ire_sid(ire, SVT_VERIFY_SID_SQ, sq, PCI_BDF2(bus, devfn));
         break;
=20
     case DEV_TYPE_PCI:




--=__Part3405A964.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part3405A964.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:19:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcIN-0004cs-4z; Thu, 06 Dec 2012 14:19:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcIL-0004cc-90
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:19:21 +0000
Received: from [193.109.254.147:10649] by server-9.bemta-14.messagelabs.com id
	B0/BF-30773-869A0C05; Thu, 06 Dec 2012 14:19:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1354803361!9574727!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21740 invoked from network); 6 Dec 2012 14:16:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:16:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:16:00 +0000
Message-Id: <50C0B6AF02000078000AEA6E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:15:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDFEE428F.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 8/8] IOMMU: add option to specify devices
 behaving like ones using phantom functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDFEE428F.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

At least certain Marvell SATA controllers are known to issue bus master
requests with a non-zero function as origin, despite themselves being
single function devices.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -698,6 +698,16 @@ Defaults to booting secondary processors
=20
 Default: `on`
=20
+### pci-phantom
+> `=3D[<seg>:]<bus>:<device>,<stride>`
+
+Mark a group of PCI devices as using phantom functions without actually
+advertising so, so the IOMMU can create translation contexts for them.
+
+All numbers specified must be hexadecimal ones.
+
+This option can be specified more than once (up to 8 times at present).
+
 ### ple\_gap
 > `=3D <integer>`
=20
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -121,6 +121,49 @@ const unsigned long *pci_get_ro_map(u16=20
     return pseg ? pseg->ro_map : NULL;
 }
=20
+static struct phantom_dev {
+    u16 seg;
+    u8 bus, slot, stride;
+} phantom_devs[8];
+static unsigned int nr_phantom_devs;
+
+static void __init parse_phantom_dev(char *str) {
+    const char *s =3D str;
+    struct phantom_dev phantom;
+
+    if ( !s || !*s || nr_phantom_devs >=3D ARRAY_SIZE(phantom_devs) )
+        return;
+
+    phantom.seg =3D simple_strtol(s, &s, 16);
+    if ( *s !=3D ':' )
+        return;
+
+    phantom.bus =3D simple_strtol(s + 1, &s, 16);
+    if ( *s =3D=3D ',' )
+    {
+        phantom.slot =3D phantom.bus;
+        phantom.bus =3D phantom.seg;
+        phantom.seg =3D 0;
+    }
+    else if ( *s =3D=3D ':' )
+        phantom.slot =3D simple_strtol(s + 1, &s, 16);
+    else
+        return;
+
+    if ( *s !=3D ',' )
+        return;
+    switch ( phantom.stride =3D simple_strtol(s + 1, &s, 0) )
+    {
+    case 1: case 2: case 4:
+        if ( *s )
+    default:
+            return;
+    }
+
+    phantom_devs[nr_phantom_devs++] =3D phantom;
+}
+custom_param("pci-phantom", parse_phantom_dev);
+
 static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
 {
     struct pci_dev *pdev;
@@ -181,6 +224,20 @@ static struct pci_dev *alloc_pdev(struct
                 if ( PCI_FUNC(devfn) >=3D pdev->phantom_stride )
                     pdev->phantom_stride =3D 0;
             }
+            else
+            {
+                unsigned int i;
+
+                for ( i =3D 0; i < nr_phantom_devs; ++i )
+                    if ( phantom_devs[i].seg =3D=3D pseg->nr &&
+                         phantom_devs[i].bus =3D=3D bus &&
+                         phantom_devs[i].slot =3D=3D PCI_SLOT(devfn) &&
+                         phantom_devs[i].stride > PCI_FUNC(devfn) )
+                    {
+                        pdev->phantom_stride =3D phantom_devs[i].stride;
+                        break;
+                    }
+            }
             break;
=20
         case DEV_TYPE_PCI:
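
The `parse_phantom_dev()` hunk above accepts `[<seg>:]<bus>:<device>,<stride>`, with the seg/bus/device numbers in hex and the stride restricted to 1, 2 or 4. A rough Python model of that parsing (a sketch, not the Xen code; Xen's `simple_strtol(..., 0)` for the stride corresponds to Python's `int(s, 0)`):

```python
def parse_phantom(arg):
    """Model of parse_phantom_dev(): return (seg, bus, slot, stride)
    on success, None on malformed input."""
    head, sep, stride_s = arg.rpartition(',')
    if not sep:
        return None                       # stride part is mandatory
    parts = head.split(':')
    try:
        if len(parts) == 2:               # <bus>:<device>, seg defaults to 0
            seg, bus, slot = 0, int(parts[0], 16), int(parts[1], 16)
        elif len(parts) == 3:             # <seg>:<bus>:<device>
            seg, bus, slot = (int(p, 16) for p in parts)
        else:
            return None
        stride = int(stride_s, 0)         # base 0: accepts 0x... prefixes
    except ValueError:
        return None
    return (seg, bus, slot, stride) if stride in (1, 2, 4) else None
```

For example, `pci-phantom=85:2,4` marks device `0000:85:02.*` as having phantom functions at stride 4 (functions 0 and 4).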




--=__PartDFEE428F.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartDFEE428F.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:19:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcIN-0004cs-4z; Thu, 06 Dec 2012 14:19:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgcIL-0004cc-90
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:19:21 +0000
Received: from [193.109.254.147:10649] by server-9.bemta-14.messagelabs.com id
	B0/BF-30773-869A0C05; Thu, 06 Dec 2012 14:19:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1354803361!9574727!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21740 invoked from network); 6 Dec 2012 14:16:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 14:16:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 14:16:00 +0000
Message-Id: <50C0B6AF02000078000AEA6E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 14:15:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
In-Reply-To: <50C0B43402000078000AEA20@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDFEE428F.3__="
Cc: Wei Huang <wei.huang2@amd.com>, Wei Wang <weiwang.dd@gmail.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH 8/8] IOMMU: add option to specify devices
 behaving like ones using phantom functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDFEE428F.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

At least certain Marvell SATA controllers are known to issue bus master
requests with a non-zero function as origin, despite themselves being
single function devices.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -698,6 +698,16 @@ Defaults to booting secondary processors
 
 Default: `on`
 
+### pci-phantom
+> `=[<seg>:]<bus>:<device>,<stride>`
+
+Mark a group of PCI devices as using phantom functions without actually
+advertising so, so the IOMMU can create translation contexts for them.
+
+All numbers specified must be hexadecimal ones.
+
+This option can be specified more than once (up to 8 times at present).
+
 ### ple\_gap
 > `= <integer>`
 
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -121,6 +121,49 @@ const unsigned long *pci_get_ro_map(u16 
     return pseg ? pseg->ro_map : NULL;
 }
 
+static struct phantom_dev {
+    u16 seg;
+    u8 bus, slot, stride;
+} phantom_devs[8];
+static unsigned int nr_phantom_devs;
+
+static void __init parse_phantom_dev(char *str) {
+    const char *s = str;
+    struct phantom_dev phantom;
+
+    if ( !s || !*s || nr_phantom_devs >= ARRAY_SIZE(phantom_devs) )
+        return;
+
+    phantom.seg = simple_strtol(s, &s, 16);
+    if ( *s != ':' )
+        return;
+
+    phantom.bus = simple_strtol(s + 1, &s, 16);
+    if ( *s == ',' )
+    {
+        phantom.slot = phantom.bus;
+        phantom.bus = phantom.seg;
+        phantom.seg = 0;
+    }
+    else if ( *s == ':' )
+        phantom.slot = simple_strtol(s + 1, &s, 16);
+    else
+        return;
+
+    if ( *s != ',' )
+        return;
+    switch ( phantom.stride = simple_strtol(s + 1, &s, 0) )
+    {
+    case 1: case 2: case 4:
+        if ( *s )
+    default:
+            return;
+    }
+
+    phantom_devs[nr_phantom_devs++] = phantom;
+}
+custom_param("pci-phantom", parse_phantom_dev);
+
 static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
 {
     struct pci_dev *pdev;
@@ -181,6 +224,20 @@ static struct pci_dev *alloc_pdev(struct
                 if ( PCI_FUNC(devfn) >= pdev->phantom_stride )
                     pdev->phantom_stride = 0;
             }
+            else
+            {
+                unsigned int i;
+
+                for ( i = 0; i < nr_phantom_devs; ++i )
+                    if ( phantom_devs[i].seg == pseg->nr &&
+                         phantom_devs[i].bus == bus &&
+                         phantom_devs[i].slot == PCI_SLOT(devfn) &&
+                         phantom_devs[i].stride > PCI_FUNC(devfn) )
+                    {
+                        pdev->phantom_stride = phantom_devs[i].stride;
+                        break;
+                    }
+            }
             break;
 
         case DEV_TYPE_PCI:
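To illustrate the `[<seg>:]<bus>:<device>,<stride>` format independently of the hypervisor, here is a hedged sketch of the same parsing logic as a standalone C function. It substitutes the standard `strtoul` for Xen's `simple_strtol`; the struct and function names are illustrative, not Xen's.

```c
#include <stdlib.h>
#include <stdint.h>

struct phantom_dev {
    uint16_t seg;
    uint8_t bus, slot, stride;
};

/* Parse "[<seg>:]<bus>:<device>,<stride>" (seg/bus/device in hex,
 * stride in any base strtoul accepts with base 0).  Returns 0 on
 * success, -1 on any syntax error or unsupported stride. */
static int parse_phantom(const char *s, struct phantom_dev *out)
{
    char *e;
    unsigned long v[3];  /* seg, bus, slot */

    v[0] = strtoul(s, &e, 16);
    if (*e != ':')
        return -1;
    v[1] = strtoul(e + 1, &e, 16);
    if (*e == ',') {
        /* Only <bus>:<device> given: shift values down, segment = 0. */
        v[2] = v[1];
        v[1] = v[0];
        v[0] = 0;
    } else if (*e == ':') {
        v[2] = strtoul(e + 1, &e, 16);
    } else {
        return -1;
    }
    if (*e != ',')
        return -1;
    out->stride = strtoul(e + 1, &e, 0);
    /* Mirrors the switch in the patch: only strides 1, 2, 4, and no
     * trailing characters, are accepted. */
    if (*e || (out->stride != 1 && out->stride != 2 && out->stride != 4))
        return -1;
    out->seg = v[0];
    out->bus = v[1];
    out->slot = v[2];
    return 0;
}
```

So `pci-phantom=03:1d,2` would mark device `0000:03:1d` as covering functions 0 and 1 in the matching loop above.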




--=__PartDFEE428F.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartDFEE428F.3__=--


From xen-devel-bounces@lists.xen.org Thu Dec 06 14:26:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPY-0005K4-SQ; Thu, 06 Dec 2012 14:26:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPX-0005Jg-Me
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:47 +0000
Received: from [85.158.138.51:43783] by server-16.bemta-3.messagelabs.com id
	50/18-07461-62BA0C05; Thu, 06 Dec 2012 14:26:46 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354804005!27832396!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28681 invoked from network); 6 Dec 2012 14:26:46 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-5.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 14:26:46 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 06:26:44 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252993531"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 06:26:43 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:16:58 +0800
Message-Id: <1354803427-18057-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 02/11] nested vmx: use literal name instead
	of hard numbers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the default-1 settings in the VMX capability MSRs, use symbolic
names instead of hard-coded numbers.

Additionally, fix the default-1 settings for the pin-based control MSR.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++----------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    9 +++++++++
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 719bfce..eb10bbf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1299,7 +1299,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp;
+    u64 data = 0, tmp = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
@@ -1318,9 +1318,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        data <<= 32;
-	/* 0-settings */
-        data |= 0;
+        tmp = VMX_PINBASED_CTLS_DEFAULT1;
+        data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
         /* 1-seetings */
@@ -1342,8 +1341,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
-        tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
+        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
         data = ((data | tmp) << 32) | (tmp);
         break;
@@ -1356,8 +1354,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
         /* 1-seetings */
-        /* bit 0-8, 10,11,13,14,16,17 must be 1 (refer G4 of SDM) */
-        tmp = 0x36dff;
+        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1370,8 +1367,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
-        /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
-        tmp = 0x11ff;
+        /* 1-seetings */
+        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 067fbe4..dce2cd8 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -36,6 +36,15 @@ struct nestedvmx {
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
 
+/* bit 1, 2, 4 must be 1 */
+#define VMX_PINBASED_CTLS_DEFAULT1	0x16
+/* bit 1, 4-6,8,13-16,26 must be 1 */
+#define VMX_PROCBASED_CTLS_DEFAULT1	0x401e172
+/* bit 0-8, 10,11,13,14,16,17 must be 1 */
+#define VMX_EXIT_CTLS_DEFAULT1		0x36dff
+/* bit 0-8, and 12 must be 1 */
+#define VMX_ENTRY_CTLS_DEFAULT1		0x11ff
+
 /*
  * Encode of VMX instructions base on Table 24-11 & 24-12 of SDM 3B
  */
-- 
1.7.1
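The `data = ((data | tmp) << 32) | tmp` pattern in this patch follows the layout of the VMX capability MSRs: the low 32 bits report controls that must be 1 (the default-1 class), the high 32 bits controls that may be 1. A hedged sketch, assuming the usual pin-based control bit positions from the Intel SDM (bit 0 external-interrupt exiting, bit 3 NMI exiting, bit 6 VMX-preemption timer); the helper name is illustrative:

```c
#include <stdint.h>

#define PIN_BASED_EXT_INTR_MASK    (1u << 0)
#define PIN_BASED_NMI_EXITING      (1u << 3)
#define PIN_BASED_PREEMPT_TIMER    (1u << 6)
/* bits 1, 2 and 4 must be 1 (default-1 class), per the patch */
#define VMX_PINBASED_CTLS_DEFAULT1 0x16u

/* Build a capability-MSR value: low half = controls that must be 1,
 * high half = controls that may be 1 (supported 1-settings plus the
 * default-1 bits, which are always allowed to be 1). */
static uint64_t vmx_caps(uint32_t allowed1, uint32_t default1)
{
    return ((uint64_t)(allowed1 | default1) << 32) | default1;
}
```

With this layout a guest can compute the permitted range of a control field as `low <= value` and `value & ~high == 0`, which is why folding `tmp` into both halves matters.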


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:26:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPY-0005Jp-BC; Thu, 06 Dec 2012 14:26:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPW-0005Jf-QI
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:47 +0000
Received: from [85.158.139.211:56234] by server-8.bemta-5.messagelabs.com id
	28/5B-06050-62BA0C05; Thu, 06 Dec 2012 14:26:46 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354804003!19338070!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDQ0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32739 invoked from network); 6 Dec 2012 14:26:44 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-10.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 14:26:44 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 06 Dec 2012 06:26:42 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259933984"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:41 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:16:57 +0800
Message-Id: <1354803427-18057-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 01/11] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To virtualize MSR bitmaps under nested VMX, the L0 hypervisor traps all
MSR-access VM exits from the L2 guest by disabling the MSR_BITMAP feature
for L2. When handling such a VM exit, L0 checks whether the L1 hypervisor
uses the MSR_BITMAP feature and whether the corresponding bit is set to 1.
If so, L0 injects the VM exit into the L1 hypervisor; otherwise, L0 is
responsible for handling the VM exit itself.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
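The 4KiB MSR bitmap consulted by this patch is split into four 1KiB quarters: read-low (MSRs 0-0x1fff) at offset 0x000, read-high (MSRs 0xc0000000-0xc0001fff) at 0x400, write-low at 0x800 and write-high at 0xc00, with a set bit meaning the access causes a VM exit. A hedged standalone sketch of the lookup the patch adds as `vmx_check_msr_bitmap` (the function name here is illustrative):

```c
#include <stdint.h>
#include <string.h>

/* Return 1 if the given MSR access would cause a VM exit according to
 * the 4KiB MSR bitmap; accesses to MSRs outside both covered ranges
 * always exit. */
static int msr_bitmap_would_exit(const uint8_t *bitmap, uint32_t msr, int write)
{
    uint32_t base;

    if (msr <= 0x1fff)
        base = write ? 0x800 : 0x000;        /* write-low / read-low */
    else if (msr >= 0xc0000000u && msr <= 0xc0001fffu)
        base = write ? 0xc00 : 0x400;        /* write-high / read-high */
    else
        return 1;

    msr &= 0x1fff;                           /* bit index within quarter */
    return (bitmap[base + msr / 8] >> (msr % 8)) & 1;
}
```

This mirrors why the patch moves EXIT_REASON_MSR_READ/WRITE out of the unconditional "inject to L1" list: with MSR_BITMAP active, only accesses whose bit is set in L1's bitmap need to be forwarded.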
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame (nvmx->msrbitmap); 
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:26:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPY-0005Jp-BC; Thu, 06 Dec 2012 14:26:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPW-0005Jf-QI
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:47 +0000
Received: from [85.158.139.211:56234] by server-8.bemta-5.messagelabs.com id
	28/5B-06050-62BA0C05; Thu, 06 Dec 2012 14:26:46 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354804003!19338070!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDQ0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32739 invoked from network); 6 Dec 2012 14:26:44 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-10.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 14:26:44 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 06 Dec 2012 06:26:42 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259933984"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:41 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:16:57 +0800
Message-Id: <1354803427-18057-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 01/11] nested vmx: emulate MSR bitmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To emulate MSR bitmaps for nested VMX, the L0 hypervisor keeps the
MSR_BITMAP feature disabled for L2, so it traps every MSR-access VM exit
from the L2 guest. When handling such an exit, L0 checks whether the L1
hypervisor has enabled MSR_BITMAP and, if so, whether the corresponding
bit in L1's bitmap is set: if the bit is set, L0 injects the VM exit
into the L1 hypervisor; otherwise, L0 handles the VM exit itself.
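
The lookup this patch performs can be sketched as a small standalone
model of the VMX MSR-bitmap check, mirroring the region offsets used by
the vmx_check_msr_bitmap() helper in the diff below (read-low at 0x000,
read-high at 0x400, write-low at 0x800, write-high at 0xc00). This is an
illustrative sketch, not the Xen implementation:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Model of the VMX MSR-bitmap lookup: one 4 KiB page split into four
 * 1 KiB regions -- read-low (0x000), read-high (0x400), write-low
 * (0x800), write-high (0xc00).  A set bit means "cause a VM exit". */
static int msr_bitmap_test(const uint8_t *bitmap, uint32_t msr, int write)
{
    uint32_t base;

    if (msr <= 0x1fff)
        base = write ? 0x800 : 0x000;            /* low MSR range */
    else if (msr >= 0xc0000000u && msr <= 0xc0001fffu) {
        msr &= 0x1fff;                           /* index within the range */
        base = write ? 0xc00 : 0x400;            /* high MSR range */
    } else
        return 1;                                /* out of range: always exit */

    return (bitmap[base + msr / 8] >> (msr % 8)) & 1;
}
```

With an all-zero bitmap no MSR access exits to L1; setting the write-low
bit for a given MSR makes only writes to it exit, matching the behaviour
the handler implements for L2 guests.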

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |   28 +++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0fbdd75..205e705 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -674,6 +674,34 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 }
 
 /*
+ * access_type: read == 0, write == 1
+ */
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+{
+    int ret = 1;
+    if ( !msr_bitmap )
+        return 1;
+
+    if ( msr <= 0x1fff )
+    {
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+    }
+    else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
+    {
+        msr &= 0x1fff;
+        if ( access_type == 0 )
+            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+        else if ( access_type == 1 )
+            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+    }
+    return ret;
+}
+
+
+/*
  * Switch VMCS between layer 1 & 2 guest
  */
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ed47780..719bfce 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -48,6 +48,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
     nvmx->iobitmap[1] = NULL;
+    nvmx->msrbitmap = NULL;
     return 0;
 out:
     return -ENOMEM;
@@ -561,6 +562,17 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
 }
 
+static void __map_msr_bitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    unsigned long gpa;
+
+    if ( nvmx->msrbitmap )
+        hvm_unmap_guest_frame (nvmx->msrbitmap); 
+    gpa = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, MSR_BITMAP);
+    nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT);
+}
+
 static void __map_io_bitmap(struct vcpu *v, u64 vmcs_reg)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -597,6 +609,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
             nvmx->iobitmap[i] = NULL;
         }
     }
+    if ( nvmx->msrbitmap ) {
+        hvm_unmap_guest_frame(nvmx->msrbitmap);
+        nvmx->msrbitmap = NULL;
+    }
 }
 
 u64 nvmx_get_tsc_offset(struct vcpu *v)
@@ -1153,6 +1169,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
         nvcpu->nv_vvmcx = hvm_map_guest_frame_rw(gpa >> PAGE_SHIFT);
         nvcpu->nv_vvmcxaddr = gpa;
         map_io_bitmap_all (v);
+        __map_msr_bitmap(v);
     }
 
     vmreturn(regs, VMSUCCEED);
@@ -1270,6 +1287,9 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
               vmcs_encoding == IO_BITMAP_B_HIGH )
         __map_io_bitmap (v, IO_BITMAP_B);
 
+    if ( vmcs_encoding == MSR_BITMAP || vmcs_encoding == MSR_BITMAP_HIGH )
+        __map_msr_bitmap(v);
+
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
 }
@@ -1320,6 +1340,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDTSC_EXITING |
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
+               CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         /* bit 1, 4-6,8,13-16,26 must be 1 (refer G4 of SDM) */
         tmp = ( (1<<26) | (0xf << 13) | 0x100 | (0x7 << 4) | 0x2);
@@ -1497,8 +1518,6 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_TRIPLE_FAULT:
     case EXIT_REASON_TASK_SWITCH:
     case EXIT_REASON_CPUID:
-    case EXIT_REASON_MSR_READ:
-    case EXIT_REASON_MSR_WRITE:
     case EXIT_REASON_VMCALL:
     case EXIT_REASON_VMCLEAR:
     case EXIT_REASON_VMLAUNCH:
@@ -1514,6 +1533,22 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_MSR_READ:
+    case EXIT_REASON_MSR_WRITE:
+    {
+        int status;
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        {
+            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
+                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
+            if ( status )
+                nvcpu->nv_vmexit_pending = 1;
+        }
+        else
+            nvcpu->nv_vmexit_pending = 1;
+        break;
+    }
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index cc92f69..14ac773 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -427,6 +427,7 @@ int vmx_add_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 
 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index b9137b8..067fbe4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -26,6 +26,7 @@
 struct nestedvmx {
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
+    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPd-0005Kp-9y; Thu, 06 Dec 2012 14:26:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPb-0005KT-Pq
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:52 +0000
Received: from [85.158.138.51:37714] by server-11.bemta-3.messagelabs.com id
	D8/DE-19361-B2BA0C05; Thu, 06 Dec 2012 14:26:51 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354804009!27732537!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15725 invoked from network); 6 Dec 2012 14:26:49 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 14:26:49 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 06:26:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259934028"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:47 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:01 +0800
Message-Id: <1354803427-18057-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 05/11] nested vmx: fix handling of RDTSC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If L0 is to handle the TSC access, then we need to update the guest EIP
by calling update_guest_eip().
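
For context, the RDTSC emulation path this patch fixes computes EDX:EAX
from the host TSC plus the L1-specified TSC_OFFSET before advancing the
guest EIP; the arithmetic can be sketched as (illustrative helper, not
Xen's code):

```c
#include <assert.h>
#include <stdint.h>

/* Split a 64-bit virtualized TSC value (host TSC plus the vVMCS
 * TSC_OFFSET, wrapping modulo 2^64) into the EDX:EAX register pair,
 * as the RDTSC exit handler does. */
static void rdtsc_result(uint64_t host_tsc, uint64_t tsc_offset,
                         uint32_t *eax, uint32_t *edx)
{
    uint64_t tsc = host_tsc + tsc_offset;
    *eax = (uint32_t)tsc;          /* low 32 bits */
    *edx = (uint32_t)(tsc >> 32);  /* high 32 bits */
}
```

Without the update_guest_eip() call added below, the guest would re-execute
the same RDTSC instruction after receiving this result.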

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 ++
 3 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3bb0d99..9fb9562 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1555,7 +1555,7 @@ static int get_instruction_length(void)
     return len;
 }
 
-static void update_guest_eip(void)
+void update_guest_eip(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned long x;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d8b7ce5..dab9551 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1614,6 +1614,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
             tsc += __get_vvmcs(nvcpu->nv_vvmcx, TSC_OFFSET);
             regs->eax = (uint32_t)tsc;
             regs->edx = (uint32_t)(tsc >> 32);
+            update_guest_eip();
 
             return 1;
         }
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c4c2fe8..aa5b080 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -399,6 +399,8 @@ void ept_p2m_init(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
+void update_guest_eip(void);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPf-0005LP-NO; Thu, 06 Dec 2012 14:26:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPd-0005Ko-Qz
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:53 +0000
Received: from [85.158.138.51:44236] by server-8.bemta-3.messagelabs.com id
	18/60-07786-D2BA0C05; Thu, 06 Dec 2012 14:26:53 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354804009!27732537!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15896 invoked from network); 6 Dec 2012 14:26:52 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 14:26:52 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 06:26:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259934036"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:50 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:03 +0800
Message-Id: <1354803427-18057-8-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 07/11] nested vmx: enable IA32E mode while do
	VM entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some VMMs check the platform capabilities to judge whether long-mode
guests are supported, so we need to expose this bit to the guest VMM.

Xen on Xen works fine with the current code because Xen does not check
this capability, but directly sets the bit in the VMCS if the guest
supports long mode.
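
VMX capability MSRs report allowed settings as two 32-bit halves: the
low word holds the controls that must be 1, the high word the controls
that may be 1 — which is what the `((data | tmp) << 32) | tmp` expression
in the hunk below builds. A sketch of that composition (the mask values
here are hypothetical, not Xen's actual definitions):

```c
#include <assert.h>
#include <stdint.h>

#define ENTRY_DEFAULT1   0x000011ffu   /* hypothetical must-be-one mask */
#define ENTRY_IA32E_MODE (1u << 9)     /* VM-entry control bit 9 */

/* Compose a VMX capability MSR value: low 32 bits = controls that must
 * be 1, high 32 bits = controls that may be 1 (which necessarily
 * include all must-be-one bits). */
static uint64_t entry_ctls_msr(uint32_t default1, uint32_t optional)
{
    return ((uint64_t)(optional | default1) << 32) | default1;
}
```

Adding VM_ENTRY_IA32E_MODE to the optional mask, as the patch does, sets
the corresponding bit only in the high (may-be-one) half, advertising
the control without forcing it on.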

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 02a7052..dcdc83e 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1376,7 +1376,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
-               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL;
+               VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
+               VM_ENTRY_IA32E_MODE;
         data = ((data | tmp) << 32) | tmp;
         break;
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPg-0005La-4R; Thu, 06 Dec 2012 14:26:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPe-0005L5-T0
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:55 +0000
Received: from [85.158.138.51:44336] by server-10.bemta-3.messagelabs.com id
	C7/38-19806-E2BA0C05; Thu, 06 Dec 2012 14:26:54 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354804009!27732537!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15943 invoked from network); 6 Dec 2012 14:26:53 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 14:26:53 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 06:26:53 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259934043"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:52 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:04 +0800
Message-Id: <1354803427-18057-9-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 08/11] nested vmx: enable "Virtualize APIC
	accesses" feature for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the "Virtualize APIC accesses" feature is enabled, we need to sync
the APIC-access address from the virtual VMCS (vvmcs) into the shadow
VMCS when performing a virtual VM entry (virtual_vmentry).

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   27 ++++++++++++++++++++++++++-
 1 files changed, 26 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dcdc83e..bcb113f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -554,6 +554,24 @@ void nvmx_update_exception_bitmap(struct vcpu *v, unsigned long value)
     set_shadow_control(v, EXCEPTION_BITMAP, value);
 }
 
+static void nvmx_update_apic_access_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 apic_gpfn, apic_mfn;
+    u32 ctrl;
+    void *apic_va;
+
+    ctrl = __n2_secondary_exec_control(v);
+    if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+    {
+        apic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
+        apic_va = hvm_map_guest_frame_ro(apic_gpfn);
+        apic_mfn = virt_to_mfn(apic_va);
+        __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(apic_va); 
+    }
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -761,6 +779,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_exit_control(v, vmx_vmexit_control);
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
+    nvmx_update_apic_access_address(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1350,7 +1369,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
-        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+        data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
         /* 0-settings */
         tmp = 0;
         data = (data << 32) | tmp;
@@ -1680,6 +1700,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
         break;
     }
+    case EXIT_REASON_APIC_ACCESS:
+        ctrl = __n2_secondary_exec_control(v);
+        if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPj-0005Mq-Hu; Thu, 06 Dec 2012 14:26:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPi-0005M0-F5
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:26:58 +0000
Received: from [85.158.138.51:44661] by server-6.bemta-3.messagelabs.com id
	A5/F4-28265-13BA0C05; Thu, 06 Dec 2012 14:26:57 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354804009!27732537!4
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5MjM1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16189 invoked from network); 6 Dec 2012 14:26:57 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-174.messagelabs.com with SMTP;
	6 Dec 2012 14:26:57 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Dec 2012 06:26:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259934066"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:55 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:06 +0800
Message-Id: <1354803427-18057-11-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 10/11] nested vmx: fix interrupt delivery to
	L2 guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When delivering an interrupt into the L2 guest, the L0 hypervisor needs
to check whether the L1 hypervisor wants to own the interrupt; if not,
it injects the interrupt directly into the L2 guest.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 3961bc7..ef8b925 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -163,7 +163,7 @@ enum hvm_intblk nvmx_intr_blocked(struct vcpu *v)
 
 static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 {
-    u32 exit_ctrl;
+    u32 ctrl;
 
     if ( nvmx_intr_blocked(v) != hvm_intblk_none )
     {
@@ -176,11 +176,14 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
         if ( intack.source == hvm_intsrc_pic ||
                  intack.source == hvm_intsrc_lapic )
         {
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, PIN_BASED_VM_EXEC_CONTROL);
+            if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
+                return 0;
+
             vmx_inject_extint(intack.vector);
 
-            exit_ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
-                            VM_EXIT_CONTROLS);
-            if ( exit_ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
+            ctrl = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, VM_EXIT_CONTROLS);
+            if ( ctrl & VM_EXIT_ACK_INTR_ON_EXIT )
             {
                 /* for now, duplicate the ack path in vmx_intr_assist */
                 hvm_vcpu_ack_pending_irq(v, intack);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcPq-0005RL-Vs; Thu, 06 Dec 2012 14:27:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPp-0005QM-L1
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:05 +0000
Received: from [193.109.254.147:31528] by server-16.bemta-14.messagelabs.com
	id C5/14-09215-83BA0C05; Thu, 06 Dec 2012 14:27:04 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1354804006!9575981!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDQ0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3848 invoked from network); 6 Dec 2012 14:26:46 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 14:26:46 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 06 Dec 2012 06:26:45 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="258168102"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 06 Dec 2012 06:26:44 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:16:59 +0800
Message-Id: <1354803427-18057-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 03/11] nested vmx: expose bit 55 of
	IA32_VMX_BASIC_MSR to guest VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Expose bit 55 of IA32_VMX_BASIC_MSR to the guest VMM. Additionally, use
a symbolic name instead of a hard-coded number for this bit.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    6 +++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |    6 ++++++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 205e705..9adc7a4 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -237,7 +237,7 @@ static int vmx_init_vmcs_config(void)
          * We check VMX_BASIC_MSR[55] to correctly handle default controls.
          */
         uint32_t must_be_one, must_be_zero, msr = MSR_IA32_VMX_PROCBASED_CTLS;
-        if ( vmx_basic_msr_high & (1u << 23) )
+        if ( vmx_basic_msr_high & (VMX_BASIC_DEFAULT1_ZERO >> 32) )
             msr = MSR_IA32_VMX_TRUE_PROCBASED_CTLS;
         rdmsr(msr, must_be_one, must_be_zero);
         if ( must_be_one & (CPU_BASED_INVLPG_EXITING |
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index eb10bbf..ec5e8a7 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1311,9 +1311,10 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50;
+               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
         /* 1-settings */
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
@@ -1322,6 +1323,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | (tmp);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
+    case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
         /* 1-settings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1353,6 +1355,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = (data << 32) | tmp;
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
+    case MSR_IA32_VMX_TRUE_EXIT_CTLS:
         /* 1-settings */
         tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
@@ -1367,6 +1370,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = ((data | tmp) << 32) | tmp;
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
+    case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         /* 1-settings */
         tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 14ac773..ef2c9c9 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -247,6 +247,12 @@ extern bool_t cpu_has_vmx_ins_outs_instr_info;
 #define VMX_INTR_SHADOW_SMI             0x00000004
 #define VMX_INTR_SHADOW_NMI             0x00000008
 
+/* 
+ * bit 55 of IA32_VMX_BASIC MSR, indicating whether any VMX controls that
+ * default to 1 may be cleared to 0.
+ */
+#define VMX_BASIC_DEFAULT1_ZERO		(1ULL << 55)
+
 /* VMCS field encodings. */
 enum vmcs_field {
     VIRTUAL_PROCESSOR_ID            = 0x00000000,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcQ0-0005V7-E5; Thu, 06 Dec 2012 14:27:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPy-0005U3-4N
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:14 +0000
Received: from [85.158.139.83:5163] by server-8.bemta-5.messagelabs.com id
	72/3C-06050-04BA0C05; Thu, 06 Dec 2012 14:27:12 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354804031!28017093!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28695 invoked from network); 6 Dec 2012 14:27:12 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 14:27:12 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 06:26:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252993552"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 06:26:46 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:00 +0800
Message-Id: <1354803427-18057-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 04/11] nested vmx: fix rflags status in
	virtual vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As stated in the SDM, a VM exit clears all bits in RFLAGS except the
reserved bits that are fixed to 1. Follow the same behaviour in
virtual_vmexit.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ec5e8a7..d8b7ce5 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -991,7 +991,8 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
 
     regs->eip = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RIP);
     regs->esp = __get_vvmcs(nvcpu->nv_vvmcx, HOST_RSP);
-    regs->eflags = __vmread(GUEST_RFLAGS);
+    /* VM exit clears all bits except bit 1 */
+    regs->eflags = 0x2;
 
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcQ0-0005VP-Qy; Thu, 06 Dec 2012 14:27:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPy-0005UD-8k
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:14 +0000
Received: from [85.158.139.83:48764] by server-9.bemta-5.messagelabs.com id
	C4/D4-29295-14BA0C05; Thu, 06 Dec 2012 14:27:13 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354804031!28017093!2
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28785 invoked from network); 6 Dec 2012 14:27:12 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 14:27:12 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 06:26:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252993578"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 06:26:49 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:02 +0800
Message-Id: <1354803427-18057-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 06/11] nested vmx: fix DR access VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For debug registers, we use a lazy restore mechanism on access.
Therefore, when receiving such a VM exit, L0 is responsible for switching
to the right DR values before injecting the exit into the L1 hypervisor.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dab9551..02a7052 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1641,7 +1641,8 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         break;
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_MOV_DR_EXITING )
+        if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
+            v->arch.hvm_vcpu.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcQ1-0005Vi-7w; Thu, 06 Dec 2012 14:27:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcPz-0005Ue-67
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:15 +0000
Received: from [85.158.139.83:48827] by server-15.bemta-5.messagelabs.com id
	8C/30-26920-24BA0C05; Thu, 06 Dec 2012 14:27:14 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354804031!28017093!3
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28896 invoked from network); 6 Dec 2012 14:27:13 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 14:27:13 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 06:26:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="252993599"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 06 Dec 2012 06:26:53 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:05 +0800
Message-Id: <1354803427-18057-10-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 09/11] nested vmx: enable PAUSE and RDPMC
	exiting for L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index bcb113f..178adbc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1362,6 +1362,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_MONITOR_TRAP_FLAG |
                CPU_BASED_VIRTUAL_NMI_PENDING |
                CPU_BASED_ACTIVATE_MSR_BITMAP |
+               CPU_BASED_PAUSE_EXITING |
+               CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         tmp = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 0-settings */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcQ4-0005Z0-QS; Thu, 06 Dec 2012 14:27:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcQ3-0005Wt-8G
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:19 +0000
Received: from [85.158.143.35:50047] by server-1.bemta-4.messagelabs.com id
	D9/8A-28401-64BA0C05; Thu, 06 Dec 2012 14:27:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354804033!13055631!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 360 invoked from network); 6 Dec 2012 14:27:13 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-12.tower-21.messagelabs.com with SMTP;
	6 Dec 2012 14:27:13 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 06:26:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="229837581"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 06 Dec 2012 06:26:56 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:07 +0800
Message-Id: <1354803427-18057-12-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 11/11] nested vmx: check host ability when
	intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the guest hypervisor reads a VMX capability MSR, we intercept the
access and return emulated values. Besides that, we must also ensure
that those emulated values are compatible with the host's capabilities.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   32 ++++++++++++++++----------------
 1 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 178adbc..b005816 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1314,24 +1314,32 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+#define __emul_value(enable1, default1) \
+    ((enable1 | default1) << 32 | (default1))
+
+#define gen_vmx_msr(enable1, default1, host_value) \
+    (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
+    ((uint32_t)(__emul_value(enable1, default1) | host_value)))
+
 /*
  * Capability reporting
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp = 0;
+    u64 data = 0, host_data = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
         return 0;
 
+    rdmsrl(msr, host_data);
+
     /*
      * Remove unsupport features from n1 guest capability MSR
      */
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
-        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
+        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
@@ -1339,8 +1347,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        tmp = VMX_PINBASED_CTLS_DEFAULT1;
-        data = ((data | tmp) << 32) | (tmp);
+        data = gen_vmx_msr(data, VMX_PINBASED_CTLS_DEFAULT1, host_data);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
@@ -1365,22 +1372,17 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_PAUSE_EXITING |
                CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
-        /* 0-settings */
-        data = ((data | tmp) << 32) | (tmp);
+        data = gen_vmx_msr(data, VMX_PROCBASED_CTLS_DEFAULT1, host_data);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
                SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-        /* 0-settings */
-        tmp = 0;
-        data = (data << 32) | tmp;
+        data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
     case MSR_IA32_VMX_TRUE_EXIT_CTLS:
         /* 1-seetings */
-        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1389,18 +1391,16 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_EXIT_SAVE_GUEST_EFER |
                VM_EXIT_LOAD_HOST_EFER |
                VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
-	/* 0-settings */
-        data = ((data | tmp) << 32) | tmp;
+        data = gen_vmx_msr(data, VMX_EXIT_CTLS_DEFAULT1, host_data);
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
     case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         /* 1-seetings */
-        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
                VM_ENTRY_IA32E_MODE;
-        data = ((data | tmp) << 32) | tmp;
+        data = gen_vmx_msr(data, VMX_ENTRY_CTLS_DEFAULT1, host_data);
         break;
 
     case IA32_FEATURE_CONTROL_MSR:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcQ4-0005Z0-QS; Thu, 06 Dec 2012 14:27:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcQ3-0005Wt-8G
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:19 +0000
Received: from [85.158.143.35:50047] by server-1.bemta-4.messagelabs.com id
	D9/8A-28401-64BA0C05; Thu, 06 Dec 2012 14:27:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354804033!13055631!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI3ODMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 360 invoked from network); 6 Dec 2012 14:27:13 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-12.tower-21.messagelabs.com with SMTP;
	6 Dec 2012 14:27:13 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 06 Dec 2012 06:26:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="229837581"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by orsmga001.jf.intel.com with ESMTP; 06 Dec 2012 06:26:56 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:17:07 +0800
Message-Id: <1354803427-18057-12-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
References: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 11/11] nested vmx: check host ability when
	intercept MSR read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When guest hypervisor tries to read MSR value, we intercept this behavior
and return certain emulated values. Besides that, we also need to ensure
that those emulated values must compatible with host ability.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   32 ++++++++++++++++----------------
 1 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 178adbc..b005816 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1314,24 +1314,32 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+#define __emul_value(enable1, default1) \
+    ((enable1 | default1) << 32 | (default1))
+
+#define gen_vmx_msr(enable1, default1, host_value) \
+    (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
+    ((uint32_t)(__emul_value(enable1, default1) | host_value)))
+
 /*
  * Capability reporting
  */
 int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
 {
-    u64 data = 0, tmp = 0;
+    u64 data = 0, host_data = 0;
     int r = 1;
 
     if ( !nestedhvm_enabled(current->domain) )
         return 0;
 
+    rdmsrl(msr, host_data);
+
     /*
     * Remove unsupported features from n1 guest capability MSR
      */
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
-        data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 | 
-               ((u64)MTRR_TYPE_WRBACK) << 50 | VMX_BASIC_DEFAULT1_ZERO;
+        data = (host_data & (~0ul << 32)) | VVMCS_REVISION;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
@@ -1339,8 +1347,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = PIN_BASED_EXT_INTR_MASK |
                PIN_BASED_NMI_EXITING |
                PIN_BASED_PREEMPT_TIMER;
-        tmp = VMX_PINBASED_CTLS_DEFAULT1;
-        data = ((data | tmp) << 32) | (tmp);
+        data = gen_vmx_msr(data, VMX_PINBASED_CTLS_DEFAULT1, host_data);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
@@ -1365,22 +1372,17 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_PAUSE_EXITING |
                CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        tmp = VMX_PROCBASED_CTLS_DEFAULT1;
-        /* 0-settings */
-        data = ((data | tmp) << 32) | (tmp);
+        data = gen_vmx_msr(data, VMX_PROCBASED_CTLS_DEFAULT1, host_data);
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
                SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-        /* 0-settings */
-        tmp = 0;
-        data = (data << 32) | tmp;
+        data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
     case MSR_IA32_VMX_TRUE_EXIT_CTLS:
         /* 1-settings */
-        tmp = VMX_EXIT_CTLS_DEFAULT1;
         data = VM_EXIT_ACK_INTR_ON_EXIT |
                VM_EXIT_IA32E_MODE |
                VM_EXIT_SAVE_PREEMPT_TIMER |
@@ -1389,18 +1391,16 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                VM_EXIT_SAVE_GUEST_EFER |
                VM_EXIT_LOAD_HOST_EFER |
                VM_EXIT_LOAD_PERF_GLOBAL_CTRL;
-	/* 0-settings */
-        data = ((data | tmp) << 32) | tmp;
+        data = gen_vmx_msr(data, VMX_EXIT_CTLS_DEFAULT1, host_data);
         break;
     case MSR_IA32_VMX_ENTRY_CTLS:
     case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         /* 1-settings */
-        tmp = VMX_ENTRY_CTLS_DEFAULT1;
         data = VM_ENTRY_LOAD_GUEST_PAT |
                VM_ENTRY_LOAD_GUEST_EFER |
                VM_ENTRY_LOAD_PERF_GLOBAL_CTRL |
                VM_ENTRY_IA32E_MODE;
-        data = ((data | tmp) << 32) | tmp;
+        data = gen_vmx_msr(data, VMX_ENTRY_CTLS_DEFAULT1, host_data);
         break;
 
     case IA32_FEATURE_CONTROL_MSR:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:27:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcQH-0005hI-7O; Thu, 06 Dec 2012 14:27:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcQG-0005ge-Is
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:27:32 +0000
Received: from [193.109.254.147:37108] by server-15.bemta-14.messagelabs.com
	id 00/92-12105-35BA0C05; Thu, 06 Dec 2012 14:27:31 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1354804001!9575969!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MDQ0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3593 invoked from network); 6 Dec 2012 14:26:42 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	6 Dec 2012 14:26:42 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 06 Dec 2012 06:26:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="259933968"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 06 Dec 2012 06:26:40 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu,  6 Dec 2012 22:16:56 +0800
Message-Id: <1354803427-18057-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Cc: eddie.dong@intel.com, jun.nakajima@intel.com
Subject: [Xen-devel] [PATCH v5 00/11] nested vmx: bug fixes and feature
	enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series of patches contains some bug fixes and feature enabling for
nested vmx; please help review and pull.

Changes from v4 to v5:
 - Combine host_data, enable features, and default1 settings into one macro when composing VMX MSRs for L1 VMM.

Changes from v3 to v4:
 - Use a macro to combine MSR value with host capability.
 - Use the macro defined for bit 55 in IA32_VMX_BASIC MSR in vmcs.c.

Changes from v2 to v3:
 - Change a hard number to literal name while exposing bit 55 in IA32_VMX_BASIC MSR.

Changes from v1 to v2:
 - Use literal name instead of hard numbers to expose default 1 settings in VMX related MSRs.
 - For TRUE VMX MSRs, we use the same value as normal VMX MSRs.
 - Fix a coding style issue.

The following 5 patches are suitable to backport to 4.2.x:
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: fix interrupt delivery to L2 guest

Thanks,
Dongxiao

Dongxiao Xu (11):
  nested vmx: emulate MSR bitmaps
  nested vmx: use literal name instead of hard numbers
  nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
  nested vmx: fix rflags status in virtual vmexit
  nested vmx: fix handling of RDTSC
  nested vmx: fix DR access VM exit
  nested vmx: enable IA32E mode while do VM entry
  nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
  nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
  nested vmx: fix interrupt delivery to L2 guest
  nested vmx: check host ability when intercept MSR read

 xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
 xen/arch/x86/hvm/vmx/vmcs.c        |   30 +++++++++-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |  119 ++++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vmcs.h |    7 ++
 xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |   10 +++
 7 files changed, 149 insertions(+), 32 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:30:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgcSe-0006be-PM; Thu, 06 Dec 2012 14:30:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TgcSd-0006bG-0C
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 14:29:59 +0000
Received: from [85.158.143.35:28082] by server-1.bemta-4.messagelabs.com id
	82/4E-28401-6EBA0C05; Thu, 06 Dec 2012 14:29:58 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1354804195!11871414!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM3Mjk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1415 invoked from network); 6 Dec 2012 14:29:56 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-7.tower-21.messagelabs.com with SMTP;
	6 Dec 2012 14:29:56 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 06 Dec 2012 06:29:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,230,1355126400"; d="scan'208";a="176989291"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by AZSMGA002.ch.intel.com with ESMTP; 06 Dec 2012 06:29:26 -0800
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 6 Dec 2012 06:29:09 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 6 Dec 2012 06:29:08 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Thu, 6 Dec 2012 22:29:06 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH v4 00/11] nested vmx: bug fixes and feature
	enabling
Thread-Index: AQHN07YDb6DMoFo9ukOWaITPX8iXmJgL1AgQ
Date: Thu, 6 Dec 2012 14:29:06 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEB9814@SHSMSX102.ccr.corp.intel.com>
References: <1354798871-5632-1-git-send-email-dongxiao.xu@intel.com>
	<50C0AC3F02000078000AE9C9@nat28.tlf.novell.com>
In-Reply-To: <50C0AC3F02000078000AE9C9@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 00/11] nested vmx: bug fixes and feature
 enabling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, December 06, 2012 9:31 PM
> To: Xu, Dongxiao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH v4 00/11] nested vmx: bug fixes and feature
> enabling
> 
> >>> On 06.12.12 at 14:01, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > This series of patches contain some bug fixes and feature enabling for
> > nested vmx, please help to review and pull.
> >
> > Changes from v3 to v4:
> >  - Use a macro to combine MSR value with host capability.
> >  - Use the macro defined for bit 55 in IA32_VMX_BASIC MSR in vmcs.c.
> 
> With the minor comment on the last patch addressed, feel free to add my ack
> on the whole series.

Thanks, I've sent out another version to address the comment.

Thanks,
Dongxiao

> 
> Jan
> 
> > Changes from v2 to v3:
> >  - Change a hard number to literal name while exposing bit 55 in
> > IA32_VMX_BASIC MSR.
> >
> > Changes from v1 to v2:
> >  - Use literal name instead of hard numbers to expose default 1
> > settings in VMX related MSRs.
> >  - For TRUE VMX MSRs, we use the same value as normal VMX MSRs.
> >  - Fix a coding style issue.
> >
> > The following 5 patches are suitable to backport to 4.2.x:
> >   nested vmx: fix rflags status in virtual vmexit
> >   nested vmx: fix handling of RDTSC
> >   nested vmx: fix DR access VM exit
> >   nested vmx: enable IA32E mode while do VM entry
> >   nested vmx: fix interrupt delivery to L2 guest
> >
> > Thanks,
> > Dongxiao
> >
> > Dongxiao Xu (11):
> >   nested vmx: emulate MSR bitmaps
> >   nested vmx: use literal name instead of hard numbers
> >   nested vmx: expose bit 55 of IA32_VMX_BASIC_MSR to guest VMM
> >   nested vmx: fix rflags status in virtual vmexit
> >   nested vmx: fix handling of RDTSC
> >   nested vmx: fix DR access VM exit
> >   nested vmx: enable IA32E mode while do VM entry
> >   nested vmx: enable "Virtualize APIC accesses" feature for L1 VMM
> >   nested vmx: enable PAUSE and RDPMC exiting for L1 VMM
> >   nested vmx: fix interrupt delivery to L2 guest
> >   nested vmx: check host ability when intercept MSR read
> >
> >  xen/arch/x86/hvm/vmx/intr.c        |   11 ++-
> >  xen/arch/x86/hvm/vmx/vmcs.c        |   30 +++++++++-
> >  xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
> >  xen/arch/x86/hvm/vmx/vvmx.c        |  113
> ++++++++++++++++++++++++++++++------
> >  xen/include/asm-x86/hvm/vmx/vmcs.h |    7 ++
> >  xen/include/asm-x86/hvm/vmx/vmx.h  |    2 +
> >  xen/include/asm-x86/hvm/vmx/vvmx.h |   10 +++
> >  7 files changed, 151 insertions(+), 24 deletions(-)
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:49:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgcll-0008Ju-CS; Thu, 06 Dec 2012 14:49:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davej@redhat.com>) id 1Tgclj-0008Jl-Sa
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:49:44 +0000
Received: from [85.158.139.83:59823] by server-5.bemta-5.messagelabs.com id
	29/80-11353-780B0C05; Thu, 06 Dec 2012 14:49:43 +0000
X-Env-Sender: davej@redhat.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354805318!17446080!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTY3NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17708 invoked from network); 6 Dec 2012 14:48:38 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 14:48:38 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qB6EmZ5L006815
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 09:48:35 -0500
Received: from gelk.kernelslacker.org (ovpn-113-42.phx2.redhat.com
	[10.3.113.42])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id qB6EmX2s032623
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Dec 2012 09:48:34 -0500
Received: from gelk.kernelslacker.org (localhost [127.0.0.1])
	by gelk.kernelslacker.org (8.14.5/8.14.5) with ESMTP id qB6EmWTX004701; 
	Thu, 6 Dec 2012 09:48:32 -0500
Received: (from davej@localhost)
	by gelk.kernelslacker.org (8.14.5/8.14.5/Submit) id qB6EmUXp004698;
	Thu, 6 Dec 2012 09:48:30 -0500
X-Authentication-Warning: gelk.kernelslacker.org: davej set sender to
	davej@redhat.com using -f
Date: Thu, 6 Dec 2012 09:48:30 -0500
From: Dave Jones <davej@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Message-ID: <20121206144830.GA2450@redhat.com>
References: <20120430143835.GA10190@redhat.com>
	<20120905173335.GA15141@redhat.com>
	<20121206114112.GA13963@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121206114112.GA13963@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: xen-devel@lists.xensource.com, kvm@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, pv-drivers@vmware.com,
	virtualization@lists.linux-foundation.org, devel@linuxdriverproject.org
Subject: Re: [Xen-devel] [PATCHv2] x86info: dump kvm cpuid's
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 01:41:12PM +0200, Michael S. Tsirkin wrote:
 > On Wed, Sep 05, 2012 at 08:33:35PM +0300, Michael S. Tsirkin wrote:
 > > On Mon, Apr 30, 2012 at 05:38:35PM +0300, Michael S. Tsirkin wrote:
 > > > The following makes 'x86info -r' dump hypervisor leaf cpu ids
 > > > (for kvm this is signature+features) when running in a vm.
 > > > 
 > > > On the guest we see the signature and the features:
 > > > eax in: 0x40000000, eax = 00000000 ebx = 4b4d564b ecx = 564b4d56 edx = 0000004d
 > > > eax in: 0x40000001, eax = 0100007b ebx = 00000000 ecx = 00000000 edx = 00000000
 > > > 
 > > > Hypervisor flag is checked to avoid output changes when not
 > > > running on a VM.
 > > > 
 > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 > > > 
 > > > Changes from v1:
 > > > 	Make work on non KVM hypervisors (only KVM was tested).
 > > > 	Avi Kivity said kvm will in the future report
 > > > 	max HV leaf in eax. For now it reports eax = 0
 > > >         so add a work around for that.
 > > 
 > > Ping.
 > > Davej, any comments?
 > > Would be nice to have this in.
 > 
 > Is this the right address?
 > Davej do you maintain x86info?
 > Thanks,
 > MST

It's effectively abandonware at this point, largely due to my own
lack of time. hwloc and similar tools seem to have taken its place.
If anyone is interested in taking over x86info, I'm happy to hand
over the reins to someone capable.

	Dave


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:49:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgcll-0008Ju-CS; Thu, 06 Dec 2012 14:49:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davej@redhat.com>) id 1Tgclj-0008Jl-Sa
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:49:44 +0000
Received: from [85.158.139.83:59823] by server-5.bemta-5.messagelabs.com id
	29/80-11353-780B0C05; Thu, 06 Dec 2012 14:49:43 +0000
X-Env-Sender: davej@redhat.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354805318!17446080!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTY3NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17708 invoked from network); 6 Dec 2012 14:48:38 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-182.messagelabs.com with SMTP;
	6 Dec 2012 14:48:38 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qB6EmZ5L006815
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 09:48:35 -0500
Received: from gelk.kernelslacker.org (ovpn-113-42.phx2.redhat.com
	[10.3.113.42])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id qB6EmX2s032623
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Dec 2012 09:48:34 -0500
Received: from gelk.kernelslacker.org (localhost [127.0.0.1])
	by gelk.kernelslacker.org (8.14.5/8.14.5) with ESMTP id qB6EmWTX004701; 
	Thu, 6 Dec 2012 09:48:32 -0500
Received: (from davej@localhost)
	by gelk.kernelslacker.org (8.14.5/8.14.5/Submit) id qB6EmUXp004698;
	Thu, 6 Dec 2012 09:48:30 -0500
X-Authentication-Warning: gelk.kernelslacker.org: davej set sender to
	davej@redhat.com using -f
Date: Thu, 6 Dec 2012 09:48:30 -0500
From: Dave Jones <davej@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Message-ID: <20121206144830.GA2450@redhat.com>
References: <20120430143835.GA10190@redhat.com>
	<20120905173335.GA15141@redhat.com>
	<20121206114112.GA13963@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121206114112.GA13963@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: xen-devel@lists.xensource.com, kvm@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, pv-drivers@vmware.com,
	virtualization@lists.linux-foundation.org, devel@linuxdriverproject.org
Subject: Re: [Xen-devel] [PATCHv2] x86info: dump kvm cpuid's
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 01:41:12PM +0200, Michael S. Tsirkin wrote:
 > On Wed, Sep 05, 2012 at 08:33:35PM +0300, Michael S. Tsirkin wrote:
 > > On Mon, Apr 30, 2012 at 05:38:35PM +0300, Michael S. Tsirkin wrote:
 > > > The following makes 'x86info -r' dump hypervisor leaf cpu ids
 > > > (for kvm this is signature+features) when running in a vm.
 > > > 
 > > > On the guest we see the signature and the features:
 > > > eax in: 0x40000000, eax = 00000000 ebx = 4b4d564b ecx = 564b4d56 edx = 0000004d
 > > > eax in: 0x40000001, eax = 0100007b ebx = 00000000 ecx = 00000000 edx = 00000000
 > > > 
 > > > Hypervisor flag is checked to avoid output changes when not
 > > > running on a VM.
 > > > 
 > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 > > > 
 > > > Changes from v1:
 > > > 	Make it work on non-KVM hypervisors (only KVM was tested).
 > > > 	Avi Kivity said KVM will in the future report the
 > > > 	max HV leaf in eax. For now it reports eax = 0,
 > > > 	so add a workaround for that.
 > > 
 > > Ping.
 > > Davej, any comments?
 > > Would be nice to have this in.
 > 
 > Is this the right address?
 > Davej do you maintain x86info?
 > Thanks,
 > MST
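
[Editor's note: the register values quoted above encode the hypervisor
vendor signature. The following Python sketch is illustrative only (it is
not part of the x86info patch) and shows how the ebx/ecx/edx bytes of
CPUID leaf 0x40000000 decode, plus the eax = 0 workaround described in
the changelog; the function names are hypothetical.]

```python
import struct

def decode_hypervisor_signature(ebx, ecx, edx):
    # CPUID leaf 0x40000000 returns the vendor signature as 12 ASCII
    # bytes packed little-endian into ebx, ecx, edx.
    return struct.pack("<III", ebx, ecx, edx).rstrip(b"\x00").decode("ascii")

def max_hypervisor_leaf(eax):
    # Workaround from the changelog above: older KVM reports eax = 0 at
    # leaf 0x40000000, so fall back to 0x40000001 (signature + features).
    return eax if eax >= 0x40000000 else 0x40000001

print(decode_hypervisor_signature(0x4b4d564b, 0x564b4d56, 0x0000004d))
# -> KVMKVMKVM
print(hex(max_hypervisor_leaf(0x00000000)))
# -> 0x40000001
```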

It's effectively abandonware at this point, largely due to my own
lack of time. hwloc and similar tools seem to have taken its place.
If anyone is interested in taking over x86info, I'm happy to hand
over the reins to someone capable.

	Dave


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgcr0-00007k-4P; Thu, 06 Dec 2012 14:55:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgcqy-00007c-RX
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:55:09 +0000
Received: from [193.109.254.147:4631] by server-15.bemta-14.messagelabs.com id
	7D/19-12105-CC1B0C05; Thu, 06 Dec 2012 14:55:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1354805707!1978206!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8515 invoked from network); 6 Dec 2012 14:55:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 14:55:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16202962"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 14:55:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 14:55:06 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgcqw-0003De-KI;
	Thu, 06 Dec 2012 14:55:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgcqw-0003D9-An;
	Thu, 06 Dec 2012 14:55:06 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14577-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 14:55:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14577: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14577 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14577/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:57:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgcsv-0000Hj-Re; Thu, 06 Dec 2012 14:57:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgcst-0000HW-Q4
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:57:08 +0000
Received: from [85.158.143.35:16347] by server-1.bemta-4.messagelabs.com id
	01/85-28401-342B0C05; Thu, 06 Dec 2012 14:57:07 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354805758!13337813!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27643 invoked from network); 6 Dec 2012 14:55:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 14:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16202988"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 14:55:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 14:55:58 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgcrm-0003Dm-6o;
	Thu, 06 Dec 2012 14:55:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgcrl-0000og-S1;
	Thu, 06 Dec 2012 14:55:58 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14576-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 14:55:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14576: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14576 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14576/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
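
    The error-handling pattern this fix describes can be sketched in
    simplified form. copy_to_guest_stub() and pdc_op_stub() below are
    illustrative stand-ins, not Xen's actual guest-access helpers or the
    XEN_PM_PDC handler; only the -EFAULT propagation is the point.

```c
#include <string.h>
#include <stddef.h>

#define EFAULT 14

/* Stand-in for a copy-to-guest helper: returns nonzero on failure.
 * A NULL destination models an inaccessible guest address. */
static int copy_to_guest_stub(void *guest, const void *src, size_t len)
{
    if (guest == NULL)
        return 1;            /* copy failed */
    memcpy(guest, src, len); /* copy succeeded */
    return 0;
}

/* Handler following the fixed pattern: return 0 on success, and
 * propagate -EFAULT when the copy-back to guest memory fails. */
int pdc_op_stub(void *guest_buf)
{
    unsigned int caps = 0x1; /* some computed result */

    if (copy_to_guest_stub(guest_buf, &caps, sizeof(caps)))
        return -EFAULT;
    return 0;
}
```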
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.

    While touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
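
    The invariant this fix enforces - cancel a continuation only when one
    was actually created - can be sketched with toy stand-ins. The ip
    adjustment below is illustrative bookkeeping, not Xen's real
    hypercall_create_continuation()/hypercall_cancel_continuation()
    implementation.

```c
#include <stdbool.h>

/* Toy vcpu: ip is the guest return address the helpers adjust. */
struct vcpu_stub {
    unsigned long ip;
};

static void create_continuation_stub(struct vcpu_stub *v)
{
    v->ip -= 2; /* rewind so the guest would re-execute the hypercall */
}

static void cancel_continuation_stub(struct vcpu_stub *v)
{
    v->ip += 2; /* undo the rewind */
}

/* Wrapper following the fixed pattern: remember whether a continuation
 * was created, and only cancel in that case.  Cancelling without a
 * matching create would corrupt v->ip. */
unsigned long compat_op_stub(struct vcpu_stub *v, bool needs_continuation)
{
    bool created = false;

    if (needs_continuation) {
        create_continuation_stub(v);
        created = true;
    }

    /* ... suppose the operation then completes without preemption ... */
    if (created)
        cancel_continuation_stub(v);

    return v->ip;
}
```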
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
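
    The restored contract can be sketched with a toy frame table;
    get_page_from_gfn_stub() below is an illustrative stand-in for the
    real get_page_from_gfn(), which must yield NULL for an invalid GFN
    rather than a bogus page pointer, even without translation.

```c
#include <stddef.h>

#define MAX_GFN 16UL

/* Toy frame table standing in for the real page array. */
static int frame_table_stub[MAX_GFN];

/* Out-of-range (invalid) GFNs must map to NULL, never to a
 * fabricated page pointer. */
int *get_page_from_gfn_stub(unsigned long gfn)
{
    if (gfn >= MAX_GFN)
        return NULL;
    return &frame_table_stub[gfn];
}
```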
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 14:57:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 14:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgcsv-0000Hj-Re; Thu, 06 Dec 2012 14:57:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgcst-0000HW-Q4
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 14:57:08 +0000
Received: from [85.158.143.35:16347] by server-1.bemta-4.messagelabs.com id
	01/85-28401-342B0C05; Thu, 06 Dec 2012 14:57:07 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354805758!13337813!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27643 invoked from network); 6 Dec 2012 14:55:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 14:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16202988"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 14:55:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 14:55:58 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgcrm-0003Dm-6o;
	Thu, 06 Dec 2012 14:55:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgcrl-0000og-S1;
	Thu, 06 Dec 2012 14:55:58 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14576-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 14:55:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14576: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14576 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14576/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.

    While touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgd4J-0001UL-Pv; Thu, 06 Dec 2012 15:08:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgd4I-0001UF-DH
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 15:08:54 +0000
Received: from [85.158.143.35:9607] by server-2.bemta-4.messagelabs.com id
	68/52-30861-505B0C05; Thu, 06 Dec 2012 15:08:53 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354806529!13061272!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20727 invoked from network); 6 Dec 2012 15:08:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:08:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16203509"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 15:08:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 15:08:49 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgd4D-0003Jc-7X;
	Thu, 06 Dec 2012 15:08:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgd4C-0007kz-Uf;
	Thu, 06 Dec 2012 15:08:49 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14579-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 15:08:49 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14579: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14579 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14579/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:18:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:18:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdDF-00024p-Qe; Thu, 06 Dec 2012 15:18:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgdDE-00024k-2J
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:18:08 +0000
Received: from [85.158.143.99:28010] by server-3.bemta-4.messagelabs.com id
	67/EC-18211-F27B0C05; Thu, 06 Dec 2012 15:18:07 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354807085!17151750!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25793 invoked from network); 6 Dec 2012 15:18:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:18:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46831843"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 15:18:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 10:18:04 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgdDA-0002a9-Ce;
	Thu, 06 Dec 2012 15:18:04 +0000
Date: Thu, 6 Dec 2012 15:18:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50C08F7A02000078000AE7D8@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212061517360.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C08F7A02000078000AE7D8@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim
	\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/6] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Jan Beulich wrote:
> >>> On 05.12.12 at 19:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > Make use of the framebuffer functions previously introduced.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Conceptually this and the prior patch look fine, but the one here
> definitely is against a stale tree (namely lacking the merge with
> 26184:7b4449bdb980).

Thanks for the pointer, I'll rebase

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:20:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdFC-0002AB-9r; Thu, 06 Dec 2012 15:20:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgdFA-0002A2-PN
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 15:20:09 +0000
Received: from [85.158.138.51:34445] by server-16.bemta-3.messagelabs.com id
	07/12-07461-3A7B0C05; Thu, 06 Dec 2012 15:20:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354807202!27770162!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12714 invoked from network); 6 Dec 2012 15:20:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:20:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16203813"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 15:20:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 15:20:02 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgdF4-0003Mw-7m;
	Thu, 06 Dec 2012 15:20:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgdF4-0005iW-6S;
	Thu, 06 Dec 2012 15:20:02 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14580-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 15:20:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14580: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14580 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14580/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
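    [The error-handling pattern this changeset describes can be sketched
    as follows. This is an illustrative fragment only, not the actual Xen
    source; copy_to_guest_relaxed() and do_pdc_op() are hypothetical
    stand-ins for Xen's guest-copy routines and the platform op handler.]

```c
/* Sketch of the pattern: return -EFAULT when copying the result back to
 * guest memory fails, using the "relaxed" copy variant because the same
 * virtual address range was already validated when copying *from* the
 * guest.  Names are hypothetical, not Xen's real API. */
#include <errno.h>
#include <string.h>

/* Stand-in for a relaxed guest-copy routine; like the real copy
 * functions it returns the number of bytes NOT copied (0 on success). */
static unsigned long copy_to_guest_relaxed(void *guest, const void *src,
                                           unsigned long len)
{
    memcpy(guest, src, len);   /* stand-in: always succeeds here */
    return 0;
}

static int do_pdc_op(void *guest_buf, const void *result, unsigned long len)
{
    /* A non-zero return means some bytes could not be written back. */
    if (copy_to_guest_relaxed(guest_buf, result, len))
        return -EFAULT;        /* the fix: propagate the fault to the caller */
    return 0;
}
```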
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
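    [The invariant this changeset restores can be sketched as follows: a
    continuation may only be cancelled if one was actually created,
    because cancellation rewinds the guest-mode return address on the
    assumption that creation previously advanced it. This is a minimal
    illustrative model, not Xen code; all names are hypothetical.]

```c
/* Minimal model of create/cancel pairing for hypercall continuations.
 * The flag stands in for the per-call state that records whether
 * hypercall_create_continuation() ran; names are hypothetical. */
#include <stdbool.h>

static bool continuation_created;

static void create_continuation(void)
{
    continuation_created = true;   /* models advancing the return address */
}

static void maybe_cancel_continuation(void)
{
    /* Guard added by the fix: without it, cancellation would "rewind"
     * a return address that was never advanced, corrupting guest state. */
    if (!continuation_created)
        return;
    continuation_created = false;  /* safe: undo only what was done */
}
```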
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:20:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdFC-0002AB-9r; Thu, 06 Dec 2012 15:20:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgdFA-0002A2-PN
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 15:20:09 +0000
Received: from [85.158.138.51:34445] by server-16.bemta-3.messagelabs.com id
	07/12-07461-3A7B0C05; Thu, 06 Dec 2012 15:20:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354807202!27770162!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12714 invoked from network); 6 Dec 2012 15:20:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:20:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16203813"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 15:20:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 15:20:02 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgdF4-0003Mw-7m;
	Thu, 06 Dec 2012 15:20:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgdF4-0005iW-6S;
	Thu, 06 Dec 2012 15:20:02 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14580-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 15:20:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14580: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14580 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14580/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
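    The convention fixed by the patch above can be sketched in a small,
    self-contained form. Note this uses mock helpers for illustration:
    mock_copy_to_guest() and put_result() are invented names mirroring the
    Xen convention (copy routines return the number of bytes NOT copied),
    not the actual hypervisor functions.

```c
#include <assert.h>
#include <string.h>

#define EFAULT 14  /* local stand-in; not pulling in kernel headers */

/* Mock copy-to-guest helper, mirroring the Xen convention of returning
 * the number of bytes NOT copied (0 on full success).  A NULL
 * destination simulates a faulting guest address. */
static unsigned long mock_copy_to_guest(void *dst, const void *src,
                                        unsigned long n)
{
    if (dst == NULL)
        return n;        /* nothing copied: fault */
    memcpy(dst, src, n);
    return 0;            /* all bytes copied */
}

/* The pattern the patch enforces: a partial or failed copy must be
 * surfaced to the caller as -EFAULT, not silently ignored. */
static int put_result(int *guest_buf, const int *val)
{
    if (mock_copy_to_guest(guest_buf, val, sizeof(*val)))
        return -EFAULT;
    return 0;
}
```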
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:39:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdXz-0003FT-Ef; Thu, 06 Dec 2012 15:39:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgdXy-0003FO-D3
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:39:34 +0000
Received: from [193.109.254.147:56842] by server-5.bemta-14.messagelabs.com id
	8D/67-10257-53CB0C05; Thu, 06 Dec 2012 15:39:33 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354808371!9182962!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12417 invoked from network); 6 Dec 2012 15:39:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:39:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46835654"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 15:39:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 10:39:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgdXu-0002rd-Gc;
	Thu, 06 Dec 2012 15:39:30 +0000
Date: Thu, 6 Dec 2012 15:39:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354793819.17165.83.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212061518020.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C08F7A02000078000AE7D8@nat28.tlf.novell.com>
	<1354793819.17165.83.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/6] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> On Thu, 2012-12-06 at 11:28 +0000, Jan Beulich wrote:
> > >>> On 05.12.12 at 19:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > wrote:
> > > Make use of the framebuffer functions previously introduced.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > Conceptually this and the prior patch look fine, but the one here
> > definitely is against a stale tree (namely lacking the merge with
> > 26184:7b4449bdb980).
> 
> Need to be careful with the code motion then, from that PoV these two
> patches would be better combined.

That is true. If you prefer I can merge the two patches.


> Also, this vmap/ioremap thing seems like something we should replicate
> on ARM instead of the map_phys_range introduced in a previous patch
> (even if it is just a name change). No reason to gratuitously differ on
> these things.

Of course we should have ioremap working on ARM, however in this case I
would rather not use ioremap, because in its current form it relies on
the xen domheap being already set up, while I would prefer to be able to
initialize the HDLCD controller earlier than that.

On the other hand we could have a much simpler implementation of ioremap
only based on map_phys_range rather than vmap. I wouldn't be against
that. We could call it early_ioremap to make it clear that it differs
from the x86 counterpart.
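
A minimal sketch of that early_ioremap idea follows. Every name and
constant here is hypothetical (the fixed virtual window, the page size
macro, and map_phys_range() itself, which is stubbed out), assuming a
mapping primitive that installs page-table entries without touching the
domheap:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical fixed virtual window for early mappings. */
#define EARLY_VMAP_START 0x200000UL
#define PAGE_SIZE_4K     4096UL

static uintptr_t early_vmap_cursor = EARLY_VMAP_START;

/* Stub standing in for an assumed map_phys_range(): in the hypervisor
 * this would install page-table entries mapping [pa, pa+len) at va. */
static void map_phys_range(uintptr_t va, uintptr_t pa, unsigned long len)
{
    (void)va; (void)pa; (void)len;
}

/* early_ioremap: bump-allocate virtual space from the fixed window and
 * map the page-aligned physical range into it.  Unlike a vmap-based
 * ioremap, this needs no allocator, so it can run before the domheap
 * is set up. */
static void *early_ioremap(uintptr_t pa, unsigned long len)
{
    unsigned long offs = pa & (PAGE_SIZE_4K - 1);
    unsigned long size = (offs + len + PAGE_SIZE_4K - 1)
                         & ~(PAGE_SIZE_4K - 1);
    uintptr_t va = early_vmap_cursor;

    early_vmap_cursor += size;
    map_phys_range(va, pa & ~(PAGE_SIZE_4K - 1), size);
    return (void *)(va + offs);
}
```

The returned pointer preserves the sub-page offset of the physical
address, and successive calls hand out consecutive page-aligned chunks
of the window.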

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:55:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdnY-00048I-DQ; Thu, 06 Dec 2012 15:55:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgdnW-000481-6q
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:55:38 +0000
Received: from [85.158.138.51:54116] by server-1.bemta-3.messagelabs.com id
	D0/2C-12169-6FFB0C05; Thu, 06 Dec 2012 15:55:34 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354809333!19682144!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5752 invoked from network); 6 Dec 2012 15:55:33 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:55:33 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so603163wib.14
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 07:55:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=/OAZy/lHynYdWDTLSJrhDFNjPRUashl1AlD7l1OfOHE=;
	b=CfC6qtUsjHJRZWsGucGzuhCZSdF2MD4cgOnxv4OGkmt+OCpLaEZvmFtUQuXFSrK6Ha
	tVscNK1nmEqFtHQdUIrHUvoUzspA0xs5/tF6Afx4n4/mRVvv9HzkFCoKcfQ2UsFB+brw
	Jj5PWbIdK64rBOO2+bPIeV3QnJcN6/Ztg26HMsa9VOe0Jd6UVU/eTLCXpZrysT6jDcRy
	++F/Bo0wCHTXd89FENRY7PdyVkMhcwlCEee5uHGpSyByRelnZ5a+iOgr2lhYL8hHqGKS
	dSkvR+3QuMwfsvUxnSBvSGHZJUlnrSm5PwgksSC3rHPRs2kwTX15u+xYFLB9AX9gbr4u
	MqkQ==
Received: by 10.180.7.199 with SMTP id l7mr3351032wia.15.1354809332819;
	Thu, 06 Dec 2012 07:55:32 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id s12sm23624264wik.11.2012.12.06.07.55.25
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 07:55:32 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 15:55:02 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE67056.551CE%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/p2m: drop redundant macro definitions
Thread-Index: Ac3Tyg0Vs7IBKYUqyk+Y1cYO1JX2/A==
In-Reply-To: <50C0A4DA02000078000AE8BF@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/p2m: drop redundant macro definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:59, "Jan Beulich" <JBeulich@suse.com> wrote:

> Also, add log level indicator to P2M_ERROR(), and drop pointless
> underscores from all related macros' parameter names.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -46,18 +46,6 @@ boolean_param("hap_1gb", opt_hap_1gb);
>  bool_t __read_mostly opt_hap_2mb = 1;
>  boolean_param("hap_2mb", opt_hap_2mb);
>  
> -/* Printouts */
> -#define P2M_PRINTK(_f, _a...)                                \
> -    debugtrace_printk("p2m: %s(): " _f, __func__, ##_a)
> -#define P2M_ERROR(_f, _a...)                                 \
> -    printk("pg error: %s(): " _f, __func__, ##_a)
> -#if P2M_DEBUGGING
> -#define P2M_DEBUG(_f, _a...)                                 \
> -    debugtrace_printk("p2mdebug: %s(): " _f, __func__, ##_a)
> -#else
> -#define P2M_DEBUG(_f, _a...) do { (void)(_f); } while(0)
> -#endif
> -
>  
>  /* Override macros from asm/page.h to make them work with mfn_t */
>  #undef mfn_to_page
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -607,15 +607,15 @@ extern void audit_p2m(struct domain *d,
>  #endif /* P2M_AUDIT */
>  
>  /* Printouts */
> -#define P2M_PRINTK(_f, _a...)                                \
> -    debugtrace_printk("p2m: %s(): " _f, __func__, ##_a)
> -#define P2M_ERROR(_f, _a...)                                 \
> -    printk("pg error: %s(): " _f, __func__, ##_a)
> +#define P2M_PRINTK(f, a...)                                \
> +    debugtrace_printk("p2m: %s(): " f, __func__, ##a)
> +#define P2M_ERROR(f, a...)                                 \
> +    printk(XENLOG_G_ERR "pg error: %s(): " f, __func__, ##a)
>  #if P2M_DEBUGGING
> -#define P2M_DEBUG(_f, _a...)                                 \
> -    debugtrace_printk("p2mdebug: %s(): " _f, __func__, ##_a)
> +#define P2M_DEBUG(f, a...)                                 \
> +    debugtrace_printk("p2mdebug: %s(): " f, __func__, ##a)
>  #else
> -#define P2M_DEBUG(_f, _a...) do { (void)(_f); } while(0)
> +#define P2M_DEBUG(f, a...) do { (void)(f); } while(0)
>  #endif
>  
>  /* Called by p2m code when demand-populating a PoD page */
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:55:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdna-00048Y-QP; Thu, 06 Dec 2012 15:55:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgdnZ-00048P-Pi
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:55:41 +0000
Received: from [193.109.254.147:52668] by server-9.bemta-14.messagelabs.com id
	FB/4A-30773-DFFB0C05; Thu, 06 Dec 2012 15:55:41 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1354809339!2799570!1
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19963 invoked from network); 6 Dec 2012 15:55:39 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:55:39 -0000
Received: by mail-wi0-f181.google.com with SMTP id hm9so510253wib.14
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 07:55:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Wdy+Ck2tBKGNQ0oKbTkR/Xo86lIcltJXNC55xy6Kzwg=;
	b=B83rJrI7TEBd64y8yiA5M0hPzabxIMvhypG2p299sFL/F7KYOIA2U2/7jfYFb0zKdF
	aqjwrLnefLi3gD9uvYF6FGcN4B6irADsI9Trfm4AIovO2DTTqvZGaVDxwqzOfQH8J9Yd
	sjQB/RpDnGoU7PlOH3JhcLMbaeiqDIYBsJFlwdp+8FX3v6lSFgsnp87sQv9hohGWtoqk
	g86Yq8s+QPllC8Ov75SDtFcZAypnNkIZrO/AFCCR/LyQiPn7etR/Ks8SYKSJE5edPXYR
	7wOAqBE8bte2rpK78oeWemPuZwIiBNXyt9rFNiGXCQKTUQVCTjOSZJN94Rtl0Yw//nhS
	N+Tw==
Received: by 10.180.98.67 with SMTP id eg3mr3373295wib.9.1354809339700;
	Thu, 06 Dec 2012 07:55:39 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id s12sm23624264wik.11.2012.12.06.07.55.35
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 07:55:39 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 15:55:34 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE67076.551D0%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/HVM: add missing assert to stdvga's
	mmio_move()
Thread-Index: Ac3TyiAo0VzMOoZu7k+NqYWkWYsQig==
In-Reply-To: <50C0A3FE02000078000AE89A@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/HVM: add missing assert to stdvga's
 mmio_move()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:56, "Jan Beulich" <JBeulich@suse.com> wrote:

> ... to match the IOREQ_READ path.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/hvm/stdvga.c
> +++ b/xen/arch/x86/hvm/stdvga.c
> @@ -519,6 +519,7 @@ static int mmio_move(struct hvm_hw_stdvg
>                              put_page(dp);
>                          return 0;
>                      }
> +                    ASSERT(!dp);
>                      tmp = stdvga_mem_read(data, p->size);
>                  }
>                  stdvga_mem_write(addr, tmp, p->size);
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:55:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdnY-00048B-1p; Thu, 06 Dec 2012 15:55:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgdnW-000482-3L
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:55:38 +0000
Received: from [85.158.139.83:4560] by server-12.bemta-5.messagelabs.com id
	EA/81-02886-9FFB0C05; Thu, 06 Dec 2012 15:55:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354809335!28734678!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14300 invoked from network); 6 Dec 2012 15:55:35 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:55:35 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so2744279wgb.32
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 07:55:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=8NlaKXezRcjahdS7t7HXaK3mq4beGUwXvXJJl5gnqQI=;
	b=CtiZPT+yarj7PAh+KCD74byIMykOpTusK6Gt6/wi18jVCDFUt8H/I02tsCjljiXtNE
	w7XFHlEMyCZeuCRAeRv7i5oRtOwgItYlFosbB+4XrajTyNd+ki6TRB4zHtO0VdP1z1vW
	KCMxj06x5U7DcBctfGEORAo4ZrQOCe/BnAoQh2cKN4u0LfWkK3LNrsoVap18NOWdWzfW
	OHTYEDRkeWVYYgm3TmLhpkwGRdcFEVMlVrIAklvOe5cbQf7Y1FgLoYmxHNPce4aS+bXJ
	XDPTEOcXWa6RELAgWFzQBq7+YZxdrADEj03YvXQT/8Rh4VJnlIUkv9RbkFHm9IUXPpBf
	xonQ==
Received: by 10.216.197.42 with SMTP id s42mr746485wen.166.1354809335082;
	Thu, 06 Dec 2012 07:55:35 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id s12sm23624264wik.11.2012.12.06.07.55.33
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 07:55:34 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 15:55:18 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE67066.551CF%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86: properly fail mmuext ops when
	get_page_from_gfn() fails
Thread-Index: Ac3TyhafaKAohMiMCEeOjnaLn+ePLg==
In-Reply-To: <50C0A4B302000078000AE8BB@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: properly fail mmuext ops when
 get_page_from_gfn() fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 12:59, "Jan Beulich" <JBeulich@suse.com> wrote:

> I noticed this inconsistency while analyzing the code for XSA-32.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2776,7 +2776,7 @@ long do_mmuext_op(
>              page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
>              if ( unlikely(!page) )
>              {
> -                rc = -EINVAL;
> +                okay = 0;
>                  break;
>              }
>  
> @@ -2836,6 +2836,7 @@ long do_mmuext_op(
>              page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
>              if ( unlikely(!page) )
>              {
> +                okay = 0;
>                  MEM_LOG("Mfn %lx bad domain", op.arg1.mfn);
>                  break;
>              }
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:56:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdno-0004Ah-6z; Thu, 06 Dec 2012 15:55:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgdnm-0004A8-Bl
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:55:54 +0000
Received: from [85.158.137.99:10989] by server-13.bemta-3.messagelabs.com id
	A1/E9-24887-900C0C05; Thu, 06 Dec 2012 15:55:53 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1354809323!15896053!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21072 invoked from network); 6 Dec 2012 15:55:24 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:55:24 -0000
Received: by mail-wi0-f169.google.com with SMTP id hq12so624758wib.2
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 07:55:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ukFZu1l3MYEhUCbuwmfIA+gpMih3UxVhFX88L0NkQZg=;
	b=VNXJ3c31IefeTn/a7RNrYC9dXFyOD0/VIwal/zoGIO/3ZYHXO0+1wHoyBU2i09IJl0
	y3T/+nGfnq5A5j8YaHVuzmDQwjzbL2UF4otMemwphavfkCf5wbv41SxyGu9L+1qKbqp7
	WWh2M9+5xfj0xkcBRa9fp0OB0kXCHF9Seox98xzOyPwGKoi02SL50k2NruuE4MmlNNOS
	JSLeMDUqg71XFP6gd+QilDy5sbEAnV58+ni7Iw+udf8QGYOUxzV9AGL8sZI+0HBJsbdj
	JS3KZqNkXqnSETnoJslW+8pUzX0c/XSj0DCMLXumgnasNeEWeLONZ6Gvd38B3oSgSf0T
	UyvQ==
Received: by 10.216.203.234 with SMTP id f84mr890864weo.101.1354809319235;
	Thu, 06 Dec 2012 07:55:19 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id s12sm23624264wik.11.2012.12.06.07.54.54
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 07:54:58 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 15:54:46 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE67046.551CD%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86: mark certain items static
Thread-Index: Ac3TygOMTv4zeIZf6UahNeCDZuAFrQ==
In-Reply-To: <50C0A59C02000078000AE90C@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: mark certain items static
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 13:03, "Jan Beulich" <JBeulich@suse.com> wrote:

> ..., and at once constify the data items among here.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4123,10 +4123,10 @@ long do_hvm_op(unsigned long op, XEN_GUE
>          struct domain *d;
>          
>          /* Interface types to internal p2m types */
> -        p2m_type_t memtype[] = {
> -            p2m_ram_rw,        /* HVMMEM_ram_rw  */
> -            p2m_ram_ro,        /* HVMMEM_ram_ro  */
> -            p2m_mmio_dm        /* HVMMEM_mmio_dm */
> +        static const p2m_type_t memtype[] = {
> +            [HVMMEM_ram_rw]  = p2m_ram_rw,
> +            [HVMMEM_ram_ro]  = p2m_ram_ro,
> +            [HVMMEM_mmio_dm] = p2m_mmio_dm
>          };
>  
>          if ( copy_from_guest(&a, arg, 1) )
> --- a/xen/arch/x86/hvm/svm/emulate.c
> +++ b/xen/arch/x86/hvm/svm/emulate.c
> @@ -152,7 +152,7 @@ static int fetch(struct vcpu *v, u8 *buf
>  }
>  
>  int __get_instruction_length_from_list(struct vcpu *v,
> -        enum instruction_index *list, unsigned int list_count)
> +        const enum instruction_index *list, unsigned int list_count)
>  {
>      struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
>      unsigned int i, j, inst_len = 0;
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1931,7 +1931,7 @@ static void svm_wbinvd_intercept(void)
>  
>  static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs)
>  {
> -    enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD };
> +    static const enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD
> };
>      int inst_len;
>  
>      inst_len = __get_instruction_length_from_list(
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2475,9 +2475,11 @@ void vmx_vmexit_handler(struct cpu_user_
>          vmx_update_cpu_exec_control(v);
>          break;
>      case EXIT_REASON_TASK_SWITCH: {
> -        const enum hvm_task_switch_reason reasons[] = {
> -            TSW_call_or_int, TSW_iret, TSW_jmp, TSW_call_or_int };
> +        static const enum hvm_task_switch_reason reasons[] = {
> +            TSW_call_or_int, TSW_iret, TSW_jmp, TSW_call_or_int
> +        };
>          int32_t ecode = -1, source;
> +
>          exit_qualification = __vmread(EXIT_QUALIFICATION);
>          source = (exit_qualification >> 30) & 3;
>          /* Vectored event should fill in interrupt information. */
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -34,7 +34,7 @@
>  /* Flags that are needed in a pagetable entry, with the sense of NX inverted
> */
>  static uint32_t mandatory_flags(struct vcpu *v, uint32_t pfec)
>  {
> -    static uint32_t flags[] = {
> +    static const uint32_t flags[] = {
>          /* I/F -  Usr Wr */
>          /* 0   0   0   0 */ _PAGE_PRESENT,
>          /* 0   0   0   1 */ _PAGE_PRESENT|_PAGE_RW,
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -179,7 +179,7 @@ static void synchronize_tsc_slave(unsign
>      }
>  }
>  
> -void smp_callin(void)
> +static void smp_callin(void)
>  {
>      unsigned int cpu = smp_processor_id();
>      int i, rc;
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1151,7 +1151,7 @@ static void local_time_calibration(void)
>   * The Linux original version of this function is
>   * Copyright (c) 2006, Red Hat, Inc., Ingo Molnar
>   */
> -void check_tsc_warp(unsigned long tsc_khz, unsigned long *max_warp)
> +static void check_tsc_warp(unsigned long tsc_khz, unsigned long *max_warp)
>  {
>  #define rdtsc_barrier() mb()
>      static DEFINE_SPINLOCK(sync_lock);
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -45,7 +45,7 @@ static void _show_registers(
>      const struct cpu_user_regs *regs, unsigned long crs[8],
>      enum context context, const struct vcpu *v)
>  {
> -    const static char *context_names[] = {
> +    static const char *const context_names[] = {
>          [CTXT_hypervisor] = "hypervisor",
>          [CTXT_pv_guest]   = "pv guest",
>          [CTXT_hvm_guest]  = "hvm guest"
> --- a/xen/include/asm-x86/hvm/svm/emulate.h
> +++ b/xen/include/asm-x86/hvm/svm/emulate.h
> @@ -45,7 +45,7 @@ enum instruction_index {
>  struct vcpu;
>  
>  int __get_instruction_length_from_list(
> -    struct vcpu *v, enum instruction_index *list, unsigned int list_count);
> +    struct vcpu *, const enum instruction_index *, unsigned int list_count);
>  
>  static inline int __get_instruction_length(
>      struct vcpu *v, enum instruction_index instr)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:56:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdno-0004Ah-6z; Thu, 06 Dec 2012 15:55:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgdnm-0004A8-Bl
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:55:54 +0000
Received: from [85.158.137.99:10989] by server-13.bemta-3.messagelabs.com id
	A1/E9-24887-900C0C05; Thu, 06 Dec 2012 15:55:53 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1354809323!15896053!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21072 invoked from network); 6 Dec 2012 15:55:24 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:55:24 -0000
Received: by mail-wi0-f169.google.com with SMTP id hq12so624758wib.2
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 07:55:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ukFZu1l3MYEhUCbuwmfIA+gpMih3UxVhFX88L0NkQZg=;
	b=VNXJ3c31IefeTn/a7RNrYC9dXFyOD0/VIwal/zoGIO/3ZYHXO0+1wHoyBU2i09IJl0
	y3T/+nGfnq5A5j8YaHVuzmDQwjzbL2UF4otMemwphavfkCf5wbv41SxyGu9L+1qKbqp7
	WWh2M9+5xfj0xkcBRa9fp0OB0kXCHF9Seox98xzOyPwGKoi02SL50k2NruuE4MmlNNOS
	JSLeMDUqg71XFP6gd+QilDy5sbEAnV58+ni7Iw+udf8QGYOUxzV9AGL8sZI+0HBJsbdj
	JS3KZqNkXqnSETnoJslW+8pUzX0c/XSj0DCMLXumgnasNeEWeLONZ6Gvd38B3oSgSf0T
	UyvQ==
Received: by 10.216.203.234 with SMTP id f84mr890864weo.101.1354809319235;
	Thu, 06 Dec 2012 07:55:19 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id s12sm23624264wik.11.2012.12.06.07.54.54
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 07:54:58 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 15:54:46 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE67046.551CD%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86: mark certain items static
Thread-Index: Ac3TygOMTv4zeIZf6UahNeCDZuAFrQ==
In-Reply-To: <50C0A59C02000078000AE90C@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: mark certain items static
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 13:03, "Jan Beulich" <JBeulich@suse.com> wrote:

> ..., and at once constify the data items among here.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4123,10 +4123,10 @@ long do_hvm_op(unsigned long op, XEN_GUE
>          struct domain *d;
>          
>          /* Interface types to internal p2m types */
> -        p2m_type_t memtype[] = {
> -            p2m_ram_rw,        /* HVMMEM_ram_rw  */
> -            p2m_ram_ro,        /* HVMMEM_ram_ro  */
> -            p2m_mmio_dm        /* HVMMEM_mmio_dm */
> +        static const p2m_type_t memtype[] = {
> +            [HVMMEM_ram_rw]  = p2m_ram_rw,
> +            [HVMMEM_ram_ro]  = p2m_ram_ro,
> +            [HVMMEM_mmio_dm] = p2m_mmio_dm
>          };
>  
>          if ( copy_from_guest(&a, arg, 1) )
> --- a/xen/arch/x86/hvm/svm/emulate.c
> +++ b/xen/arch/x86/hvm/svm/emulate.c
> @@ -152,7 +152,7 @@ static int fetch(struct vcpu *v, u8 *buf
>  }
>  
>  int __get_instruction_length_from_list(struct vcpu *v,
> -        enum instruction_index *list, unsigned int list_count)
> +        const enum instruction_index *list, unsigned int list_count)
>  {
>      struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
>      unsigned int i, j, inst_len = 0;
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1931,7 +1931,7 @@ static void svm_wbinvd_intercept(void)
>  
>  static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs)
>  {
> -    enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD };
> +    static const enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD
> };
>      int inst_len;
>  
>      inst_len = __get_instruction_length_from_list(
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2475,9 +2475,11 @@ void vmx_vmexit_handler(struct cpu_user_
>          vmx_update_cpu_exec_control(v);
>          break;
>      case EXIT_REASON_TASK_SWITCH: {
> -        const enum hvm_task_switch_reason reasons[] = {
> -            TSW_call_or_int, TSW_iret, TSW_jmp, TSW_call_or_int };
> +        static const enum hvm_task_switch_reason reasons[] = {
> +            TSW_call_or_int, TSW_iret, TSW_jmp, TSW_call_or_int
> +        };
>          int32_t ecode = -1, source;
> +
>          exit_qualification = __vmread(EXIT_QUALIFICATION);
>          source = (exit_qualification >> 30) & 3;
>          /* Vectored event should fill in interrupt information. */
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -34,7 +34,7 @@
>  /* Flags that are needed in a pagetable entry, with the sense of NX inverted
> */
>  static uint32_t mandatory_flags(struct vcpu *v, uint32_t pfec)
>  {
> -    static uint32_t flags[] = {
> +    static const uint32_t flags[] = {
>          /* I/F -  Usr Wr */
>          /* 0   0   0   0 */ _PAGE_PRESENT,
>          /* 0   0   0   1 */ _PAGE_PRESENT|_PAGE_RW,
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -179,7 +179,7 @@ static void synchronize_tsc_slave(unsign
>      }
>  }
>  
> -void smp_callin(void)
> +static void smp_callin(void)
>  {
>      unsigned int cpu = smp_processor_id();
>      int i, rc;
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1151,7 +1151,7 @@ static void local_time_calibration(void)
>   * The Linux original version of this function is
>   * Copyright (c) 2006, Red Hat, Inc., Ingo Molnar
>   */
> -void check_tsc_warp(unsigned long tsc_khz, unsigned long *max_warp)
> +static void check_tsc_warp(unsigned long tsc_khz, unsigned long *max_warp)
>  {
>  #define rdtsc_barrier() mb()
>      static DEFINE_SPINLOCK(sync_lock);
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -45,7 +45,7 @@ static void _show_registers(
>      const struct cpu_user_regs *regs, unsigned long crs[8],
>      enum context context, const struct vcpu *v)
>  {
> -    const static char *context_names[] = {
> +    static const char *const context_names[] = {
>          [CTXT_hypervisor] = "hypervisor",
>          [CTXT_pv_guest]   = "pv guest",
>          [CTXT_hvm_guest]  = "hvm guest"
> --- a/xen/include/asm-x86/hvm/svm/emulate.h
> +++ b/xen/include/asm-x86/hvm/svm/emulate.h
> @@ -45,7 +45,7 @@ enum instruction_index {
>  struct vcpu;
>  
>  int __get_instruction_length_from_list(
> -    struct vcpu *v, enum instruction_index *list, unsigned int list_count);
> +    struct vcpu *, const enum instruction_index *, unsigned int list_count);
>  
>  static inline int __get_instruction_length(
>      struct vcpu *v, enum instruction_index instr)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
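The point of the hunks above is that adding `static const` moves such lookup tables out of the function: a plain local array is rebuilt on the stack on every call, while a `static const` array lives once in .rodata. A minimal self-contained sketch of the difference (the enum values here are illustrative stand-ins, not the real SVM encodings):

```c
#include <stdint.h>

/* Illustrative stand-ins for the real instruction_index values. */
enum instruction_index { INSTR_INVD = 1, INSTR_WBINVD = 2 };

/* Without static const: the initializer is copied onto the stack
 * each time the function runs. */
static int sum_list_stack(void)
{
    enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD };
    return list[0] + list[1];
}

/* With static const: the data is emitted once into .rodata and the
 * function body shrinks to plain loads. */
static int sum_list_rodata(void)
{
    static const enum instruction_index list[] = { INSTR_INVD, INSTR_WBINVD };
    return list[0] + list[1];
}
```

Both variants compute the same result; the difference is purely in where the array data lives and how much per-call setup code the compiler must emit.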



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:56:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdo1-0004Ee-Ki; Thu, 06 Dec 2012 15:56:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgdnz-0004Do-5f
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:56:07 +0000
Received: from [85.158.143.99:49363] by server-1.bemta-4.messagelabs.com id
	4F/79-28401-610C0C05; Thu, 06 Dec 2012 15:56:06 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354809350!21323142!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18393 invoked from network); 6 Dec 2012 15:55:51 -0000
Received: from unknown (HELO mail-we0-f173.google.com) (74.125.82.173)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:55:51 -0000
Received: by mail-we0-f173.google.com with SMTP id z2so3140277wey.32
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 07:54:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=IUPFJVF2+bK09lYK3C4blBAdDsfz7IzK9bIRq6q+gX8=;
	b=IBykOMMNVaRZR2x75GLmnAMC+og7DINkkTC7KZeErZ/kwT0zH3d9A/Cmj+kG5h0apD
	ZOVHAWdW8Y1MnhmjBs8x605WE4PgZyoubrJKdSlsuJ6opmfbjX8Yq0oj0nKLFXerAbkT
	WczZ04iZbPwDYeR/HVbmmRXniFi1GhVKseQ9HN/nk+UQieRMpRvoPHFnxTjWSo2Efegy
	mOj5JBfQ02dkqUhODDomUBT1odq/ueMlUDhjwoC6XC69dUhglolSAXBmbtrX01w+eX4W
	FI05ZlpfV22uSAwvP3lS4oLuPBjoU03PQbjisyKUR8n6biMfFjYsIfIqnG17G8nkv/Hq
	G+8Q==
Received: by 10.216.28.14 with SMTP id f14mr804679wea.40.1354809281095;
	Thu, 06 Dec 2012 07:54:41 -0800 (PST)
Received: from [192.168.1.3] (host86-183-153-239.range86-183.btcentralplus.com.
	[86.183.153.239])
	by mx.google.com with ESMTPS id i2sm23603929wiw.3.2012.12.06.07.54.32
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 07:54:40 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 15:54:30 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE67036.551CC%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/EFI: add code interfacing with the
	secure boot shim
Thread-Index: Ac3TyfoCeq6oZ65a/0ip8OYQvBlWSw==
In-Reply-To: <50C0A63C02000078000AE910@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/EFI: add code interfacing with the
 secure boot shim
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/2012 13:05, "Jan Beulich" <JBeulich@suse.com> wrote:

> ... to validate the kernel image (which is required to be in PE
> format, as is e.g. the case for the Linux bzImage when built with
> CONFIG_EFI_STUB).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/efi/boot.c
> +++ b/xen/arch/x86/efi/boot.c
> @@ -24,6 +24,18 @@
>  #include <asm/msr.h>
>  #include <asm/processor.h>
>  
> +#define SHIM_LOCK_PROTOCOL_GUID \
> +  { 0x605dab50, 0xe046, 0x4300, {0xab, 0xb6, 0x3d, 0xd8, 0x10, 0xdd, 0x8b, 0x23} }
> +
> +typedef EFI_STATUS
> +(/* _not_ EFIAPI */ *EFI_SHIM_LOCK_VERIFY) (
> +    IN VOID *Buffer,
> +    IN UINT32 Size);
> +
> +typedef struct {
> +    EFI_SHIM_LOCK_VERIFY Verify;
> +} EFI_SHIM_LOCK_PROTOCOL;
> +
>  extern char start[];
>  extern u32 cpuid_ext_features;
>  
> @@ -640,12 +652,14 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY
>      static EFI_GUID __initdata gop_guid = EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID;
>      static EFI_GUID __initdata bio_guid = BLOCK_IO_PROTOCOL;
>      static EFI_GUID __initdata devp_guid = DEVICE_PATH_PROTOCOL;
> +    static EFI_GUID __initdata shim_lock_guid = SHIM_LOCK_PROTOCOL_GUID;
>      EFI_LOADED_IMAGE *loaded_image;
>      EFI_STATUS status;
>      unsigned int i, argc;
>      CHAR16 **argv, *file_name, *cfg_file_name = NULL;
>      UINTN cols, rows, depth, size, map_key, info_size, gop_mode = ~0;
>      EFI_HANDLE *handles = NULL;
> +    EFI_SHIM_LOCK_PROTOCOL *shim_lock;
>      EFI_GRAPHICS_OUTPUT_PROTOCOL *gop = NULL;
>      EFI_GRAPHICS_OUTPUT_MODE_INFORMATION *mode_info;
>      EFI_FILE_HANDLE dir_handle;
> @@ -835,6 +849,11 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY
>      read_file(dir_handle, s2w(&name), &kernel);
>      efi_bs->FreePool(name.w);
>  
> +    if ( !EFI_ERROR(efi_bs->LocateProtocol(&shim_lock_guid, NULL,
> +                    (void **)&shim_lock)) &&
> +         shim_lock->Verify(kernel.ptr, kernel.size) != EFI_SUCCESS )
> +        blexit(L"Dom0 kernel image could not be verified\r\n");
> +
>      name.s = get_value(&cfg, section.s, "ramdisk");
>      if ( name.s )
>      {
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
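The control flow the patch adds to efi_start() can be mimicked outside an EFI environment. The sketch below is a hypothetical, self-contained mock: the EFI_* names and the EFI_ERROR() macro are simplified stand-ins for the real efi.h definitions, and mock_locate_protocol() stands in for efi_bs->LocateProtocol(). It shows the pattern of only refusing to boot when the shim protocol is both present and rejects the image:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the EFI types used by the patch. */
typedef uintptr_t EFI_STATUS;
#define EFI_SUCCESS        0
#define EFI_ACCESS_DENIED  15          /* simplified; real value sets the high bit */
#define EFI_ERROR(s)       ((s) != EFI_SUCCESS)

typedef EFI_STATUS (*EFI_SHIM_LOCK_VERIFY)(void *Buffer, uint32_t Size);

typedef struct {
    EFI_SHIM_LOCK_VERIFY Verify;
} EFI_SHIM_LOCK_PROTOCOL;

/* Mock shim: "verifies" any non-empty buffer. */
static EFI_STATUS mock_verify(void *Buffer, uint32_t Size)
{
    return (Buffer != NULL && Size > 0) ? EFI_SUCCESS : EFI_ACCESS_DENIED;
}

static EFI_SHIM_LOCK_PROTOCOL mock_shim = { .Verify = mock_verify };

/* Mock LocateProtocol: pretend the shim_lock protocol is installed. */
static EFI_STATUS mock_locate_protocol(void **iface)
{
    *iface = &mock_shim;
    return EFI_SUCCESS;
}

/* Mirrors the check added to efi_start(): fail only when the protocol
 * is present AND verification fails; absence of the shim is not fatal. */
static int kernel_image_ok(void *ptr, uint32_t size)
{
    EFI_SHIM_LOCK_PROTOCOL *shim_lock;

    if ( !EFI_ERROR(mock_locate_protocol((void **)&shim_lock)) &&
         shim_lock->Verify(ptr, size) != EFI_SUCCESS )
        return 0; /* boot.c would blexit() here */
    return 1;
}
```

Note the asymmetry the real code also has: a firmware without the shim protocol installed boots unverified images, which is the intended behaviour when secure boot is not in use.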

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:57:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdop-0004Pg-9J; Thu, 06 Dec 2012 15:56:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tgdoo-0004PK-Bm
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:56:58 +0000
Received: from [85.158.137.99:33103] by server-8.bemta-3.messagelabs.com id
	69/68-07786-940C0C05; Thu, 06 Dec 2012 15:56:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354809414!17926644!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 391 invoked from network); 6 Dec 2012 15:56:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:56:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216628935"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 15:56:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 10:56:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tgdoj-00038e-Au;
	Thu, 06 Dec 2012 15:56:53 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 15:56:22 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 1] Kexec alteration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Fixing the root reentrant NMI/MCE issues is quickly turning into a
substantially larger problem than initially thought.

As a result, I have decided to submit the fixes in smaller chunks, in
the hope that they are reviewed and integrated separately.  The first
part is changes to the kexec path, from which I am now happy to drop the
RFC tag.

Please scrutinise carefully.


The vaguely decided plan going forward is:

 * Implement spin_{un,}lock_recursive_irq{save,restore}()
 * Audit users of mixed regular/recursive calls on the same spinlock
 * Swap recursive spinlocks to be a separate type
 * Fix the semantics of console_force_unlock
 * Continue audit of NMI and MCE paths, with console semantics fixed
 * {Panic,NMI,MCE}-in-progress flags & reentrancy protection
 * Ability to specify whether iret should be used on trap exit
 * Implement "MCEs never iret, NMIs always iret" policy

The plan for "things I think need to happen" is:
 * Extend spinlock debugging to know about NMI and MCE contexts
 * Implement strict ordering constraints for spinlock debugging

The "other things needing investigating" list includes:
 * Interaction of paging changes and IDTs while purgatory is changing
   processor modes
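The first bullet, spin_lock_recursive_irqsave(), did not exist in Xen at the time of writing. A minimal user-space sketch of the intended semantics follows; the names and the irqs_enabled flag are hypothetical stand-ins for Xen's spinlock and RFLAGS.IF state, and the spin-wait on a contended lock is elided:

```c
#include <stdint.h>

/* Hypothetical sketch: a recursive lock with "irqsave" semantics.
 * IRQ state is modelled as a simple flag rather than RFLAGS.IF. */
struct rspinlock {
    int owner;          /* -1 when unheld, else owning cpu id */
    unsigned int depth; /* recursion count */
};

static int irqs_enabled = 1;

static unsigned long local_irq_save_sketch(void)
{
    unsigned long flags = irqs_enabled;
    irqs_enabled = 0;
    return flags;
}

static void local_irq_restore_sketch(unsigned long flags)
{
    irqs_enabled = (int)flags;
}

static void spin_lock_recursive_irqsave(struct rspinlock *l, int cpu,
                                        unsigned long *flags)
{
    *flags = local_irq_save_sketch();
    if ( l->owner != cpu )
    {
        /* Real code would spin here until the lock becomes free. */
        l->owner = cpu;
    }
    l->depth++;
}

static void spin_unlock_recursive_irqrestore(struct rspinlock *l,
                                             unsigned long flags)
{
    if ( --l->depth == 0 )
        l->owner = -1;
    local_irq_restore_sketch(flags);
}
```

Because each lock call saves the IRQ state it saw, a nested lock/unlock pair restores "disabled" and only the outermost unlock re-enables interrupts, which is what makes the save/restore pairing safe under recursion.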

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:57:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdp4-0004UU-OH; Thu, 06 Dec 2012 15:57:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tgdp2-0004Th-RL
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:57:13 +0000
Received: from [85.158.137.99:37100] by server-11.bemta-3.messagelabs.com id
	37/CA-19361-840C0C05; Thu, 06 Dec 2012 15:56:56 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1354809414!18197981!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21052 invoked from network); 6 Dec 2012 15:56:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:56:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46838686"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 15:56:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 10:56:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tgdoj-00038e-BN;
	Thu, 06 Dec 2012 15:56:53 +0000
MIME-Version: 1.0
X-Mercurial-Node: 01158d25f3bfd50171a166652a5a30e4d249e5b2
Message-ID: <01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 15:56:23 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 1] x86/kexec: Change NMI and MCE handling
	on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and because future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour so it is safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
     This is a special NMI handler for use during a kexec crash.
 * enable_nmis
     This function enables NMIs by executing an iret-to-self, to
     disengage the hardware NMI latch.
 * trap_nop
     This is a no-op handler which irets immediately.  It is actually
     placed in the middle of enable_nmis to reuse its iret instruction,
     avoiding a lone aligned iret inflating the code size.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable interrupt stack tables.
    Disabling ISTs will prevent stack corruption from future NMIs and
    MCEs. As Xen is going to crash, we are not concerned with the
    security race condition with 'sysret'; any guest which managed to
    benefit from the race condition will only be able to execute a handful
    of instructions before it will be killed anyway, and a guest is
    still unable to trigger the race condition in the first place.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it from getting
    stuck in an NMI context and hanging instead of crashing.  The
    non-crashing pcpus all get the nmi_crash handler, which is designed
    never to return.
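The "disable interrupt stack tables" step corresponds to the `idt_tables[i][...].a &= ~(7UL << 32)` lines in the patch: in a 64-bit IDT gate descriptor the 3-bit IST index occupies bits 32-34 of the low quadword, and clearing it makes the CPU stay on the current stack when the vector fires. A self-contained sketch of that bit layout (helper names are illustrative; `gate_lo` plays the role of Xen's `.a` low quadword):

```c
#include <stdint.h>

/* Low quadword of a 64-bit IDT gate: bits 32-34 hold the IST index. */
#define GATE_IST_MASK   (7UL << 32)

static uint64_t gate_set_ist(uint64_t gate_lo, unsigned int ist)
{
    return (gate_lo & ~GATE_IST_MASK) | ((uint64_t)(ist & 7) << 32);
}

static unsigned int gate_get_ist(uint64_t gate_lo)
{
    return (uint64_t)(gate_lo >> 32) & 7;
}
```

An IST index of 0 means "no stack switch for same-privilege interrupts", which is precisely the state the shootdown code wants once reentrant NMIs/MCEs can no longer be allowed to reset the stack pointer mid-handler.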

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we reenter midway through, we attempt the
    whole operation again in preference to leaving it incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.
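In x2APIC mode the ICR is a single 64-bit MSR with the destination APIC ID in bits 32-63, which is why the patch's x2apic branch writes `APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32)` in one go. A sketch of composing that value (constants match the architectural encodings: NMI delivery mode is 100b in bits 8-10, physical destination mode is 0):

```c
#include <stdint.h>

#define APIC_DM_NMI         0x00400u  /* delivery mode: NMI (bits 8-10 = 100b) */
#define APIC_DEST_PHYSICAL  0x00000u  /* physical destination mode */

/* Compose the x2APIC ICR MSR value for an NMI aimed at apic_id. */
static uint64_t x2apic_self_nmi_icr(uint32_t apic_id)
{
    return APIC_DM_NMI | APIC_DEST_PHYSICAL | ((uint64_t)apic_id << 32);
}
```

In xAPIC mode the same information is split across two 32-bit MMIO registers (destination in ICR2 bits 24-31, command in ICR), which is why the patch's xAPIC branch needs two writes and a busy-wait.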

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,102 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( crashing_cpu != smp_processor_id() );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which causes them to write state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     *
+     * Furthermore, disable stack tables for NMIs and MCEs to prevent
+     * race conditions resulting in corrupt stack frames.  As Xen is
+     * about to die, we no longer care about the security-related race
+     * condition with 'sysret' which ISTs are designed to solve.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
+            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
+
+            if ( i == cpu )
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+            else
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -87,6 +87,22 @@ void machine_kexec(xen_kexec_image_t *im
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    set_intr_gate(TRAP_machine_check, &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions, and all preserved. */
+ENTRY(enable_nmis)
+        pushq %rax
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+/* No op trap handler.  Required for kexec crash path.
+ * It is not used in performance critical code, and saves having a single
+ * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
+ * explicit alignment. */
+.globl trap_nop;
+trap_nop:
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r bc624b00d6d6 -r 01158d25f3bf xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 15:57:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 15:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgdp4-0004UU-OH; Thu, 06 Dec 2012 15:57:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tgdp2-0004Th-RL
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 15:57:13 +0000
Received: from [85.158.137.99:37100] by server-11.bemta-3.messagelabs.com id
	37/CA-19361-840C0C05; Thu, 06 Dec 2012 15:56:56 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1354809414!18197981!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21052 invoked from network); 6 Dec 2012 15:56:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 15:56:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46838686"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 15:56:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 10:56:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tgdoj-00038e-BN;
	Thu, 06 Dec 2012 15:56:53 +0000
MIME-Version: 1.0
X-Mercurial-Node: 01158d25f3bfd50171a166652a5a30e4d249e5b2
Message-ID: <01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 15:56:23 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 1] x86/kexec: Change NMI and MCE handling
	on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this bug, and because future changes to the NMI handling
will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour so it is safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
     This is a special NMI handler for use during a kexec crash.
 * enable_nmis
     This function enables NMIs by executing an iret-to-self, to
     disengage the hardware NMI latch.
 * trap_nop
     This is a no op handler which irets immediately.  It actually sits
     in the middle of enable_nmis to reuse the iret instruction, without
     having a single lone aligned iret inflating the code size.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable interrupt stack tables.
    Disabling ISTs will prevent stack corruption from future NMIs and
    MCEs. As Xen is going to crash, we are not concerned with the
    security race condition with 'sysret'; any guest which managed to
    benefit from the race condition will only be able to execute a handful
    of instructions before it will be killed anyway, and a guest is
    still unable to trigger the race condition in the first place.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it getting stuck in
    an NMI context, causing a hang instead of crash.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  In the case where we reenter midway through,
    attempt the whole operation again in preference to not completing
    it in the first place.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,102 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( crashing_cpu != smp_processor_id() );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash, which
+     * invokes do_nmi_crash (above), causing them to save state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     *
+     * Furthermore, disable stack tables for NMIs and MCEs to prevent
+     * race conditions resulting in corrupt stack frames.  As Xen is
+     * about to die, we no longer care about the security-related race
+     * condition with 'sysret' which ISTs are designed to solve.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
+            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
+
+            if ( i == cpu )
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+            else
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -87,6 +87,22 @@ void machine_kexec(xen_kexec_image_t *im
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    set_intr_gate(TRAP_machine_check, &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions, and all preserved. */
+ENTRY(enable_nmis)
+        pushq %rax
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+/* No op trap handler.  Required for kexec crash path.
+ * It is not used in performance critical code, and saves having a single
+ * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
+ * explicit alignment. */
+.globl trap_nop;
+trap_nop:
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r bc624b00d6d6 -r 01158d25f3bf xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:02:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdtX-0005eS-JG; Thu, 06 Dec 2012 16:01:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgdtV-0005eI-TR
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:01:50 +0000
Received: from [85.158.138.51:25845] by server-12.bemta-3.messagelabs.com id
	E4/B7-22757-D61C0C05; Thu, 06 Dec 2012 16:01:49 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1354809707!18869550!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24635 invoked from network); 6 Dec 2012 16:01:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 16:01:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46839474"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 16:01:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 11:01:41 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgdtM-0003DB-V1	for
	xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:01:40 +0000
Message-ID: <50C0C164.9010303@citrix.com>
Date: Thu, 6 Dec 2012 16:01:40 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
X-Enigmail-Version: 1.4.6
Subject: Re: [Xen-devel] [PATCH 0 of 1] Kexec alteration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 06/12/12 15:56, Andrew Cooper wrote:
> Hello,
>
> Fixing the root reentrant NMI/MCE issues is quickly turning into a
> substantially larger problem than initially thought.
>
> As a result, I have decided to submit the fixes in smaller chunks, in
> the hope that they are reviewed and integrated separately.  The first
> part is changes to the kexec path, from which I am now happy to drop the
> RFC tag.
>
> Please scrutinise carefully.
>
>
> The vague decided plan going forwards is:
>
>  * Implement spin_{un,}lock_recursive_irq{save,restore}()
>  * Audit users of mixed regular/recursive calls on the same spinlock
>  * Swap recursive spinlocks to be a separate type
>  * Fix the semantics of console_force_unlock
>  * Fix ASSERT() and BUG() to be NMI/MCE safe wrt console
>  * Continue audit of NMI and MCE paths, with console semantics fixed
>  * {Panic,NMI,MCE}-in-progress flags & reentrancy protection
>  * Ability to specify whether iret should be used on trap exit
>  * Implement "MCEs never iret, NMIs always iret" policy
>
> The plan for "things I think need to happen" is:
>  * Extend spinlock debugging to know about NMI and MCE contexts
>  * Implement strict ordering constraints for spinlock debugging
>
> The "other things needing investigating" list includes:
>  * Interaction of paging changes and IDTs while purgatory is changing
>    processor modes
>
> ~Andrew

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:06:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdyD-0005zQ-Ar; Thu, 06 Dec 2012 16:06:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgdyB-0005zK-W5
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 16:06:40 +0000
Received: from [85.158.143.99:48298] by server-2.bemta-4.messagelabs.com id
	3F/73-30861-F82C0C05; Thu, 06 Dec 2012 16:06:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354809997!21324690!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 360 invoked from network); 6 Dec 2012 16:06:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 16:06:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="216630677"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 16:06:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 11:06:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tgdy7-0003HV-Ro;
	Thu, 06 Dec 2012 16:06:35 +0000
Date: Thu, 6 Dec 2012 16:06:31 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354787720.17165.59.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212061543230.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<1354787720.17165.59.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/6] xen/arm: introduce map_phys_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> > Introduce a function to map a physical memory into virtual memory.
> > It is going to be used later to map the videoram.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/mm.c        |   23 +++++++++++++++++++++++
> >  xen/include/asm-arm/mm.h |    3 +++
> >  2 files changed, 26 insertions(+), 0 deletions(-)
> > 
> > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > index 68ee9da..418a414 100644
> > --- a/xen/arch/arm/mm.c
> > +++ b/xen/arch/arm/mm.c
> > @@ -376,6 +376,29 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
> >      frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
> >  }
> >  
> > +/* Map the physical memory range start -  end at the virtual address
> > + * virt_start in 2MB chunks. start and virt_start have to be 2MB
> > + * aligned.
> > + */
> > +void map_phys_range(paddr_t start, paddr_t end,
> > +        unsigned long virt_start, unsigned attributes)
> > +{
> > +    ASSERT(!(start & ((1 << 21) - 1)));
> > +    ASSERT(!(virt_start & ((1 << 21) - 1)));
> > +
> > +    while ( start < end )
> > +    {
> > +        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
> > +        e.pt.ai = attributes;
> > +        write_pte(xen_second + second_table_offset(virt_start), e);
> > +        
> > +        start += (1<<21);
> > +        virt_start += (1<<21);
> > +    }
> > +
> > +    flush_xen_data_tlb();
> 
> What does this flush? The ptes are flushed by the write_pte aren't they?

write_pte doesn't flush the TLB at the moment.

> Should this be a range over virt_start + len?

Yes, ideally it would flush only the range virt_start to virt_start+size.
In practice I think I could remove the flush entirely, because we don't
have any previous mappings at virt_start, but I presume it is not a
good idea to rely on that?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:07:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgdyQ-00060n-PJ; Thu, 06 Dec 2012 16:06:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bjzhang@suse.com>) id 1TgdyO-00060C-Vp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:06:53 +0000
Received: from [85.158.139.211:32846] by server-6.bemta-5.messagelabs.com id
	1B/E0-19321-C92C0C05; Thu, 06 Dec 2012 16:06:52 +0000
X-Env-Sender: bjzhang@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354810009!18575285!1
X-Originating-IP: [137.65.250.214]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1867 invoked from network); 6 Dec 2012 16:06:50 -0000
Received: from soto.provo.novell.com (HELO soto.provo.novell.com)
	(137.65.250.214) by server-15.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 16:06:50 -0000
Received: from INET-RELAY2-MTA by soto.provo.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 09:06:49 -0700
Message-Id: <50C15D410200003000027D8E@soto.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 09:06:41 -0700
From: "Bamvor Jian Zhang" <bjzhang@suse.com>
To: <Ian.Jackson@eu.citrix.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
In-Reply-To: <20661.3989.258191.396175@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jim Fehlig <JFEHLIG@suse.com>, Ian.Campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: [Xen-devel] Reply: Re: [PATCH] fix race condition between
 libvirtd event handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SGksIElhbgoKVGhhbmtzIHlvdXIgY29tbWVudHMsIEkgZG8gbm90IHRoaW5rIGl0IGRlZXBseSBi
ZWZvcmUgaXQuCgpJIGFtIG5vdCBzdXJlIGFib3V0IHlvdXIgInJlbWFpbmluZyBwcm9ibGVtIi4g
SSBndWVzcyB0aGlzIHByb2JsZW0gc2hvdWxkIG5vdCBoYXBwZW4sIApCZWNhdXNlIHRoZSBkaWZm
ZXJlbnQgcmVnaXN0cmF0aW9uIHNob3VsZCByZWZlciB0byBkaWZmZXJlbnQgZXYobGl4bF9fZXZf
dGltZSwgbGl4bF9fZXZfZmQpLgoKQWJvdXQgdGhlIHJhY2UgY29uZGl0aW9uLCBJIHN0aWxsIG5v
dCB1bmRlcnN0YW5kIHdoeSBpdCBvY2N1cnJlZC4gTGlidmlydCBzaG91bGQgbm90IGdvdCB0aGUg
ZXZlbnQgd2hpbGUgbGlieGwgdGhvdWdodCBzdWNoIGV2ZW50IGlzIHVzZWxlc3MuIApDdXJyZW50
bHksIGZkIGlzIGRlcmVnaXN0ZXIgdHdpY2UsIHRoZSBmaXJzdCB0aW1lIGlzIGp1c3QgYWZ0ZXIg
dHJhbnNmZXIgZG9uZSBhbmQgc2Vjb25kIHRpbWUgaXMgCmR1cmluZyBjbGVhbnVwIGZ1bmN0aW9u
IHN1Y2ggYXMgYm9vdGxvYWRlciBjbGVhbnVwLiBJZiB3ZSBvbmx5IGRvIGl0IGluIHRoZSBjbGVh
bnVwIGZ1bmN0aW9ucywKdGhpcyByYWNlIHNob3VsZCBub3QgaGFwcGVuLgoKPj4+IElhbiBKYWNr
c29uIDxJYW4uSmFja3NvbkBldS5jaXRyaXguY29tPiAxMuW5tDEx5pyIMjjml6Ug5LiK5Y2IIDM6
MDggPj4+CkJhbXZvciBKaWFuIFpoYW5nIHdyaXRlcyAoIltQQVRDSF0gZml4IHJhY2UgY29uZGl0
aW9uIGJldHdlZW4gbGlidmlydGQgZXZlbnQgaGFuZGxpbmcgYW5kIGxpYnhsIGZkIGRlcmVnaXN0
ZXIiKToKPiB0aGUgcmFjZSBjb25kaXRpb24gbWF5IGJlIGVuY291bnRlZCBhdCB0aGUgZm9sbG93
aW5nIHNlbmFybzoKClRoYW5rcyBmb3IgdGhpcyByZXBvcnQuICBZb3UgYXJlIGNvcnJlY3QgdGhh
dCB0aGVyZSBpcyBhIHJhY2UgaGVyZS4KVW5mb3J0dW5hdGVseSBpdCdzIG1vcmUgY29tcGxpY2F0
ZWQgdGhhbiB5b3VyIGFuYWx5c2lzIHJldmVhbHMgYW5kIEkKdGhpbmsgeW91ciBwcm9wb3NlZCBm
aXggaXMgbm90IHN1ZmZpY2llbnQuCgpJIGhhdmUgd29ya2VkIHVwIGEgcGF0Y2ggLSBzZWUgYmVs
b3cgLSB3aGljaCBJIHRoaW5rIGZpeGVzIG1vc3Qgb2YgdGhlCnByb2JsZW0uICBQbGVhc2Ugc2Vl
IHRoZSBjb21taXQgbWVzc2FnZSBhbmQgdGhlIGJpZyBuZXcgY29tbWVudCBqdXN0CmFmdGVyIHRo
ZSBkZWZpbml0aW9uIG9mIE9TRVZFTlRfSE9PS19WT0lEIGZvciBkZXRhaWxzLgoKQnV0IEkgdGhp
bmsgdGhlcmUgbWF5IGJlIG9uZSBzZXJpb3VzIHByb2JsZW0gcmVtYWluaW5nLiAgSXQgc2VlbXMg
dG8KbWUgdGhhdCBlbnRyeSB0byBsaWJ4bF9fb3NldmVudF9vY2N1cnJlZF90aW1lb3V0IGNhbiBy
YWNlIHdpdGggdGhlCmhvb2sgdGltZW91dF9kZXJlZ2lzdGVyLiAgU3BlY2lmaWNhbGx5LCB0aGUg
QVBJIHByZXNlbnRlZCBieSBsaWJ4bApkb2VzIG5vdCBhbGxvdyBsaWJ4bCB0byB0ZWxsIHdoZXRo
ZXIgYSBjYWxsIHRvCmxpYnhsX19vc2V2ZW50X29jY3VycmVkX3RpbWVvdXQgaXMgb25lIHdoaWNo
IHdhcyBlbnRlcmVkIGJlZm9yZSBhIGNhbGwKdG8gbGlieGxfX2V2X3RpbWVfZGVyZWdpc3RlciBh
bmQgdGhlbmNlIHRpbWVvdXRfZGVyZWdpc3Rlci4KCkxldCB1cyBzdXBwb3NlIHRoYXQgdGhlIHBy
ZXZpb3VzIHRpbWVvdXQgcmVnaXN0cmF0aW9uIGlzIHNpbWlsYXIgdG8KdGhlIG5ldyBvbmUsIGJ1
dCBoYXMgYSBkaWZmZXJlbnQgZm9yX2FwcF9yZWcgdmFsdWUuCgpJZiB0aGUgY2FsbCB0byBsaWJ4
bF9fb3NldmVudF9vY2N1cnJlZF90aW1lb3V0IHdhcyBlbnRlcmVkIGEgbG9uZyB0aW1lCmFnbyB0
aGVuIGl0IHJlZmVycyB0byB0aGUgb2xkIHRpbWVvdXQgcmVnaXN0cmF0aW9uIGFuZCB0aGUgY3Vy
cmVudAp0aW1lb3V0IHJlZ2lzdHJhdGlvbiBpcyBub3QgYWZmZWN0ZWQsIGFuZCBtdXN0IHN0aWxs
IGJlIGRlcmVnaXN0ZXJlZCBpZgpuZWNlc3NhcnkuICBOb3QgZGVyZWdpc3RlcmluZyBpdCB3b3Vs
ZCBsZWFrIG1lbW9yeS4KCklmIHRoZSBjYWxsIHdhcyBlbnRlcmVkICJyZWNlbnRseSIgaXQgcmVm
ZXJzIHRvIHRoZSBjdXJyZW50IHRpbWVvdXQKcmVnaXN0cmF0aW9uLiAgVGhhdCBtZWFucyB0aGF0
IHRoZSBjdXJyZW50IHRpbWVvdXQgZm9yX2FwcF9yZWcgdmFsdWUKKHRoZSBsaWJ2aXJ0IGV2ZW50
IHJlcXVlc3QpIGhhcyBiZWVuIGZyZWVkIGFuZCBkZXJlZ2lzdGVyaW5nIGl0IHdvdWxkCmJlIGEg
bWVtb3J5IG1hbmFnZW1lbnQgZXJyb3IuCgpCYW12b3IsIGRvIHlvdSBhZ3JlZSB3aXRoIHRoaXMg
YW5hbHlzaXMgPyAgSWYgc28sIHdoYXQgY2hhbmdlIHRvIHRoZQpsaWJ4bCBBUEkgc3ludGF4IG9y
IHNlbWFudGljcyB3b3VsZCBmaXggdGhlIHByb2JsZW0gPwoKSSBzZWUgdGhhdCB0aGUgbm90aW9u
IHRoYXQgYSB0aW1lb3V0IGV2ZW50IHJlZ2lzdHJhdGlvbiBieSBsaWJ4bCB3aXRoCnRoZSBhcHBs
aWNhdGlvbiBpcyBhdXRvbWF0aWMgZGVyZWdpc3RlcmVkIGlzIG5vdCBleHByZXNzZWQgY2xlYXJs
eSBpbgpsaWJ4bC5oLiAgU28gcGVyaGFwcyBvbmUgZml4IHdvdWxkIGJlIHRvIGRlY3JlZSB0aGF0
IGxpYnhsIGlzIHN1cHBvc2VkCnRvIGNhbGwgdGltZW91dF9kZXJlZ2lzdGVyIGZyb20gd2l0aGlu
CmxpYnhsX19vc2V2ZW50X29jY3VycmVkX3RpbWVvdXQgaWYgaXQgd2Fzbid0IGNhbGxlZCBiZWZv
cmUuCgpUaGFua3MsCklhbi4KCgpGcm9tOiBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25AZXUuY2l0
cml4LmNvbT4KU3ViamVjdDogW1BBVENIXSBSRkM6IGxpYnhsOiBmaXggc3RhbGUgZmQvdGltZW91
dCBldmVudCBjYWxsYmFjayByYWNlCgpETyBOT1QgQVBQTFkKCkJlY2F1c2UgdGhlcmUgaXMgbm90
IG5lY2Vzc2FyaWx5IGFueSBsb2NrIGhlbGQgYXQgdGhlIHBvaW50IHRoZQphcHBsaWNhdGlvbiAo
ZWcsIGxpYnZpcnQpIGNhbGxzIGxpYnhsX29zZXZlbnRfb2NjdXJyZWRfdGltZW91dCBhbmQKLi4u
X2ZkLCBpbiBhIG11bHRpdGhyZWFkZWQgcHJvZ3JhbSB0aG9zZSBjYWxscyBtYXkgYmUgYXJiaXRy
YXJpbHkKZGVsYXllZCBpbiByZWxhdGlvbiB0byBvdGhlciBhY3Rpdml0aWVzIHdpdGhpbiB0aGUg
cHJvZ3JhbS4KCmxpYnhsIHRoZXJlZm9yZSBuZWVkcyB0byBiZSBwcmVwYXJlZCB0byByZWNlaXZl
IHZlcnkgb2xkIGV2ZW50CmNhbGxiYWNrcy4gIEFycmFuZ2UgZm9yIHRoaXMgdG8gYmUgdGhlIGNh
c2UuCgpEb2N1bWVudCB0aGUgcHJvYmxlbSBhbmQgdGhlIHNvbHV0aW9uIGluIGEgY29tbWVudCBp
biBsaWJ4bF9ldmVudC5jCmp1c3QgYmVmb3JlIHRoZSBkZWZpbml0aW9uIG9mIHN0cnVjdCBsaWJ4
bF9fb3NldmVudF9ob29rX25leHVzLgoKUmVwb3J0ZWQtYnk6IEJhbXZvciBKaWFuIFpoYW5nIDxi
anpoYW5nQHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25A
ZXUuY2l0cml4LmNvbT4KCi0tLQogdG9vbHMvbGlieGwvbGlieGxfZXZlbnQuYyAgICB8ICAxODgg
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tLS0KIHRvb2xzL2xpYnhsL2xp
YnhsX2ludGVybmFsLmggfCAgICA4ICsrLQogMiBmaWxlcyBjaGFuZ2VkLCAxNjYgaW5zZXJ0aW9u
cygrKSwgMzAgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwvbGlieGxfZXZl
bnQuYyBiL3Rvb2xzL2xpYnhsL2xpYnhsX2V2ZW50LmMKaW5kZXggNzJjYjcyMy4uN2M4NzE5ZCAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfZXZlbnQuYworKysgYi90b29scy9saWJ4bC9s
aWJ4bF9ldmVudC5jCkBAIC0zOCwyMyArMzgsMTIzIEBACiAgKiBUaGUgYXBwbGljYXRpb24ncyBy
ZWdpc3RyYXRpb24gaG9va3Mgc2hvdWxkIGJlIGNhbGxlZCBPTkxZIHZpYQogICogdGhlc2UgbWFj
cm9zLCB3aXRoIHRoZSBjdHggbG9ja2VkLiAgTGlrZXdpc2UgYWxsIHRoZSAib2NjdXJyZWQiCiAg
KiBlbnRyeXBvaW50cyBmcm9tIHRoZSBhcHBsaWNhdGlvbiBzaG91bGQgYXNzZXJ0KCFpbl9ob29r
KTsKKyAqCisgKiBEdXIrICogZXZhbHVhdGVkIC0gZXYtPm5leHVzIGlzIGd1YXJhbnRlZWQgdG8g
YmUgdmFsaWQgYW5kIHJlZmVyIHRvIHRoZQorICogbmV4dXMgd2hpY2ggaXMgYmVpbmcgdXNlZCBm
b3IgdGhpcyBldmVudCByZWdpc3RyYXRpb24uICBUaGUKKyAqIGFyZ3VtZW50cyBzaG91bGQgc3Bl
Y2lmeSBldi0+bmV4dXMgZm9yIHRoZSBmb3JfbGlieGwgYXJndW1lbnQgYW5kCisgKiBldi0+bmV4
dXMtPmZvcl9hcHBfcmVnIChvciBhIHBvaW50ZXIgdG8gaXQpIGZvciBmb3JfYXBwX3JlZy4KICAq
LwotI2RlZmluZSBPU0VWRU5UX0hPT0tfSU5URVJOKHJldHZhbCwgaG9va25hbWUsIC4uLikgZG8g
eyAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgaWYgKENUWC0+b3NldmVudF9ob29rcykgeyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKLSAgICAgICAg
Q1RYLT5vc2V2ZW50X2luX2hvb2srKzsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgXAotICAgICAgICByZXR2YWwgQ1RYLT5vc2V2ZW50X2hvb2tzLT5ob29rbmFt
ZShDVFgtPm9zZXZlbnRfdXNlciwgX19WQV9BUkdTX18pOyBcCi0gICAgICAgIENUWC0+b3NldmVu
dF9pbl9ob29rLS07ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAorI2RlZmluZSBPU0VWRU5UX0hPT0tfSU5URVJOKHJl
dHZhbCwgZmFpbGVkcCwgZXZraW5kLCBob29rb3AsIC4uLikgZG8geyAgXAorICAgIGlmIChDVFgt
Pm9zZXZlbnRfaG9va3MpIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBDVFgtPm9zZXZlbnRfaW5faG9vaysrOyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29rX25leGkg
Km5leGkgPSAmQ1RYLT5ob29rXyMjZXZraW5kIyNfbmV4aV9pZGxlOyBcCisgICAgICAgIG9zZXZl
bnRfaG9va19wcmVfIyNob29rb3AoZ2MsIGV2LCBuZXhpLCAmZXYtPm5leHVzKTsgICAgICAgICAg
ICBcCisgICAgICAgIHJldHZhbCBDVFgtPm9zZXZlbnRfaG9va3MtPmV2a2luZCMjXyMjaG9va29w
ICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICAoQ1RYLT5vc2V2ZW50X3VzZXIsIF9f
VkFfQVJHU19fKTsgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgIGlmICgoZmFp
bGVkcCkpICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CisgICAgICAgICAgICBvc2V2ZW50X2hvb2tfZmFpbGVkXyMjaG9va29wKGdjLCBldiwgbmV4aSwg
JmV2LT5uZXh1cyk7ICAgICBcCisgICAgICAgIENUWC0+b3NldmVudF9pbl9ob29rLS07ICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgfSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiB9
IHdoaWxlICgwKQogCi0jZGVmaW5lIE9TRVZFTlRfSE9PSyhob29rbmFtZSwgLi4uKSAoeyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKLSAgICBpbnQgb3NldmVudF9ob29r
X3JjID0gMDsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
XAotICAgIE9TRVZFTlRfSE9PS19JTlRFUk4ob3NldmVudF9ob29rX3JjID0gLCBob29rbmFtZSwg
X19WQV9BUkdTX18pOyAgICAgICAgICBcCi0gICAgb3NldmVudF9ob29rX3JjOyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyNkZWZpbmUg
T1NFVkVOVF9IT09LKGV2a2luZCwgaG9va29wLCAuLi4pICh7ICAgICAgICAgICAgICAgICAgIFwK
KyAgICBpbnQgb3NldmVudF9ob29rX3JjID0gMDsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgT1NFVkVOVF9IT09LX0lOVEVSTihvc2V2ZW50X2hvb2tfcmMgPSwgISFv
c2V2ZW50X2hvb2tfcmMsICAgXAorICAgICAgICAgICAgICAgICAgICAgICAgZXZraW5kLCBob29r
b3AsIF9fVkFfQVJHU19fKTsgICAgICAgICAgXAorICAgIG9zZXZlbnRfaG9va19yYzsgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKIH0pCiAKLSNkZWZpbmUgT1NF
VkVOVF9IT09LX1ZPSUQoaG9va25hbWUsIC4uLikgXAotICAgIE9TRVZFTlRfSE9PS19JTlRFUk4o
Lyogdm9pZCAqLywgaG9va25hbWUsIF9fVkFfQVJHU19fKQorI2RlZmluZSBPU0VWRU5UX0hPT0tf
Vk9JRChldmtpbmQsIGhvb2tvcCwgLi4uKSBcCisgICAgT1NFVkVOVF9IT09LX0lOVEVSTigvKiB2
b2lkICovLCAwLCBldmtpbmQsIGhvb2tvcCwgX19WQV9BUkdTX18pCisKKy8qCisgKiBUaGUgYXBw
bGljYXRpb24ncyBjYWxscyB0byBsaWJ4bF9vc2V2ZW50X29jY3VycmVkXy4uLiBtYXkgYmUKKyAq
IGluZGVmaW5pdGVseSBkZWxheWVkIHdpdGggcmVzcGVjdCB0byB0aGUgcmVzdCBvZiB0aGUgcHJv
Z3JhbSAoc2luY2UKKyAqIHRoZXkgYXJlIG5vdCBuZWNlc3NhcmlseSBjYWxsZWQgd2l0aCBhbnkg
bG9jayBoZWxkKS4gIFNvIHRoZQorICogZm9yX2xpYnhsIHZhbHVlIHdlIHJlY2VpdmUgbWF5IGJl
IChhbG1vc3QpIGFyYml0cmFyaWx5IG9sZC4gIEFsbCB3ZQorICoga25vdyBpcyB0aGF0IGl0IGNh
bWUgZnJvbSB0aGlzIGN0eC4KKyAqCisgKiBUaGVyZWZvcmUgd2UgbWF5IG5vdCBmcmVlIHRoZSBv
YmplY3QgcmVmZXJyZWQgdG8gYnkgYW55IGZvcl9saWJ4bAorICogdmFsdWUgdW50aWwgd2UgZnJl
ZSB0aGUgd2hvbGUgbGlieGxfY3R4LiAgQW5kIGlmIHdlIHJldXNlIGl0IHdlCisgKiBtdXN0IGJl
IGFibGUgdG8gdGVsbCB3aGVuIGFuIG9sZCB1c2UgdHVybnMgdXAsIGFuZCBkaXNjYXJkIHRoZQor
ICogc3RhbGUgZXZlbnQuCisgKgorICogVGh1cyB3ZSBjYW5ub3QgdXNlIHRoZSBldiBkaXJlY3Rs
eSBhcyB0aGUgZm9yX2xpYnhsIHZhbHVlIC0gd2UgbmVlZAorICogYSBsYXllciBvZiBpbmRpcmVj
dGlvbi4KKyAqCisgKiBXZSBkbyB0aGlzIGJ5IGtlZXBpbmcgYSBwb29sIG9mIGxpYnhsX19vc2V2
ZW50X2hvb2tfbmV4dXMgc3RydWN0cywKKyAqIGFuZCB1c2UgcG9pbnRlcnMgdG8gdGhlbSBhcyBm
b3JfbGlieGwgdmFsdWVzLiAgSW4gZmFjdCwgdGhlcmUgYXJlCisgKiB0d28gcG9vbHM6IG9uZSBm
b3IgZmRzIGFuZCBvbmUgZm9yIHRpbWVvdXRzLiAgVGhpcyBlbnN1cmVzIHRoYXQgd2UKKyAqIGRv
bid0IHJpc2sgYSB0eXBlIGVycm9yIHdoZW4gd2UgdXBjYXN0IG5leHVzLT5ldi4gIEluIGVhY2gg
bmV4dXMKKyAqIHRoZSBldiBpcyBlaXRoZXIgbnVsbCBvciBwb2ludHMgdG8gYSB2YWxpZCBsaWJ4
bF9fZXZfdGltZSBvcgorICogbGlieGxfX2V2X2ZkLCBhcyBhcHBsaWNhYmxlLgorICoKKyAqIFdl
IC9kby8gYWxsb3cgb3Vyc2VsdmVzIHRvIHJlYXNzb2NpYXRlIGFuIG9sZCBuZXh1cyB3aXRoIGEg
bmV3IGV2CisgKiBhcyBvdGhlcndpc2Ugd2Ugd291bGQgaGF2ZSB0byBsZWFrIG5leGkuICAoVGhp
cyByZWFzc29jaWF0aW9uCisgKiBtaWdodCwgb2YgY291cnNlLCBiZSBhbiBvbGQgZXYgYmVpbmcg
cmV1c2VkIGZvciBhIG5ldyBwdXJwb3NlIHNvCisgKiBzaW1wbHkgY29tcGFyaW5nIHRoZSBldiBw
b2ludGVyIGlzIG5vdCBzdWZmaWNpZW50LikgIFRodXMgdGhlCisgKiBsaWJ4bF9vc2V2ZW50X29j
Y3VycmVkIGZ1bmN0aW9ucyBuZWVkIHRvIGNoZWNrIHRoYXQgdGhlIGNvbmRpdGlvbgorICogYWxs
ZWdlZGx5IHNpZ25hbGxlZCBieSB0aGlzIGV2ZW50IGFjdHVhbGx5IGV4aXN0cy4KKyAqCisgKiBU
aGUgbmV4aSBhbmQgdGhlIGxpc3RzIGFyZSBhbGwgcHJvdGVjdGVkIGJ5IHRoZSBjdHggbG9jay4K
KyAqLworIAorc3RydWN0IGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4dXMgeworICAgIExJQlhMX1NM
SVNUX0VOVFJZKGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4dXMpIG5leHQ7Cit9OworCitzdGF0aWMg
dm9pZCBvc2V2ZW50X2hvb2tfcHJlX3JlZ2lzdGVyKGxpYnhsX19nYyAqZ2MsIHZvaWQgKmV2LAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29r
X25leGkgKm5leGlfaWRsZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfX29zZXZlbnRfaG9va19uZXh1cyAqKm5leHVzX3IpCit7CisgICAgbGlieGxfX29zZXZl
bnRfaG9va19uZXh1cyAqbmV4dXMgPSBMSUJYTF9TTElTVF9GSVJTVChuZXhpX2lkbGUpOworICAg
IGlmIChuZXh1cykgeworICAgICAgICBMSUJYTF9TTElTVF9SRU1PVkVfSEVBRChuZXhpX2lkbGUs
IG5leHQpOworICAgIH0gZWxzZSB7CisgICAgICAgIG5leHVzID0gbGlieGxfX3phbGxvYyhOT0dD
LCBzaXplb2YoKm5leHVzKSk7CisgICAgfQorICAgIG5leHVzLT5ldiA9IGV2OworICAgICpuZXh1
c19yID0gbmV4dXM7Cit9CitzdGF0aWMgdm9pZCBvc2V2ZW50X2hvb2tfcHJlX2RlcmVnaXN0ZXIo
bGlieGxfX2djICpnYywgdm9pZCAqZXYsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXhpICpuZXhpX2lkbGUsCisgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXh1cyAq
Km5leHVzKQoreworICAgICgqbmV4dXMpLT5ldiA9IDA7CisgICAgTElCWExfU0xJU1RfSU5TRVJU
X0hFQUQobmV4aV9pZGxlLCAqbmV4dXMsIG5leHQpOworfQorc3RhdGljIHZvaWQgb3NldmVudF9o
b29rX3ByZV9tb2RpZnkobGlieGxfX2djICpnYywgdm9pZCAqZXYsCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29rX25leGkgKm5leGlfaWRsZSwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19vc2V2ZW50X2hvb2tf
bmV4dXMgKipuZXh1cykKK3sKK30KKworc3RhdGljIHZvaWQgb3NldmVudF9ob29rX2ZhaWxlZF9y
ZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCB2b2lkICpldiwKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXhpICpuZXhpX2lkbGUsCisg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19vc2V2ZW50X2hv
b2tfbmV4dXMgKipuZXh1cykKK3sKKyAgICBvc2V2ZW50X2hvb2tfcHJlX2RlcmVnaXN0ZXIoZ2Ms
IGV2LCBuZXhpX2lkbGUsIG5leHVzKTsKK30KK3N0YXRpYyB2b2lkIG9zZXZlbnRfaG9va19mYWls
ZWRfZGVyZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCB2b2lkICpldiwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29rX25leGkgKm5leGlf
aWRsZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9f
b3NldmVudF9ob29rX25leHVzICoqbmV4dXMpCit7CisgICAgYWJvcnQoKTsKK30KK3N0YXRpYyB2
b2lkIG9zZXZlbnRfaG9va19mYWlsZWRfbW9kaWZ5KGxpYnhsX19nYyAqZ2MsIHZvaWQgKmV2LAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9v
a19uZXhpICpuZXhpX2lkbGUsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBsaWJ4bF9fb3NldmVudF9ob29rX25leHVzICoqbmV4dXMpCit7Cit9CisKK3N0YXRpYyB2b2lk
ICpvc2V2ZW50X2V2X2Zyb21faG9va19uZXh1cyhsaWJ4bF9jdHggKmN0eCwgdm9pZCAqZm9yX2xp
YnhsKQoreworICAgIGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4dXMgKm5leHVzID0gZm9yX2xpYnhs
OworICAgIHJldHVybiBuZXh1cy0+ZXY7Cit9CiAKIC8qCiAgKiBmZCBldmVudHMKQEAgLTcyLDcg
KzE3Miw4IEBAIGludCBsaWJ4bF9fZXZfZmRfcmVnaXN0ZXIobGlieGxfX2djICpnYywgbGlieGxf
X2V2X2ZkICpldiwKIAogICAgIERCRygiZXZfZmQ9JXAgcmVnaXN0ZXIgZmQ9JWQgZXZlbnRzPSV4
IiwgZXYsIGZkLCBldmVudHMpOwogCi0gICAgcmMgPSBPU0VWRU5UX0hPT0soZmRfcmVnaXN0ZXIs
IGZkLCAmZXYtPmZvcl9hcHBfcmVnLCBldmVudHMsIGV2KTsKKyAgICByYyA9IE9TRVZFTlRfSE9P
SyhmZCwgcmVnaXN0ZXIsIGZkLCAmZXYtPm5leHVzLT5mb3JfYXBwX3JlZywKKyAgICAgICAgICAg
ICAgICAgICAgICBldmVudHMsIGV2LT5uZXh1cyk7CiAgICAgaWYgKHJjKSBnb3RvIG91dDsKIAog
ICAgIGV2LT5mZCA9IGZkOwpAQCAtOTcsNyArMTk4LDcgQEAgaW50IGxpYnhsX19ldl9mZF9tb2Rp
ZnkobGlieGxfX2djICpnYywgbGlieGxfX2V2X2ZkICpldiwgc2hvcnQgZXZlbnRzKQogCiAgICAg
REJHKCJldl9mZD0lcCBtb2RpZnkgZmQ9JWQgZXZlbnRzPSV4IiwgZXYsIGV2LT5mZCwgZXZlbnRz
KTsKIAotICAgIHJjID0gT1NFVkVOVF9IT09LKGZkX21vZGlmeSwgZXYtPmZkLCAmZXYtPmZvcl9h
cHBfcmVnLCBldmVudHMpOworICAgIHJjID0gT1NFVkVOVF9IT09LKGZkLCBtb2RpZnksIGV2LT5m
ZCwgJmV2LT5uZXh1cy0+Zm9yX2FwcF9yZWcsIGV2ZW50cyk7CiAgICAgaWYgKHJjKSBnb3RvIG91
dDsKIAogICAgIGV2LT5ldmVudHMgPSBldmVudHM7CkBAIC0xMTksNyArMjIwLDcgQEAgdm9pZCBs
aWJ4bF9fZXZfZmRfZGVyZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9fZXZfZmQgKmV2KQog
CiAgICAgREJHKCJldl9mZD0lcCBkZXJlZ2lzdGVyIGZkPSVkIiwgZXYsIGV2LT5mZCk7CiAKLSAg
ICBPU0VWRU5UX0hPT0tfVk9JRChmZF9kZXJlZ2lzdGVyLCBldi0+ZmQsIGV2LT5mb3JfYXBwX3Jl
Zyk7CisgICAgT1NFVkVOVF9IT09LX1ZPSUQoZmQsIGRlcmVnaXN0ZXIsIGV2LT5mZCwgZXYtPm5l
eHVzLT5mb3JfYXBwX3JlZyk7CiAgICAgTElCWExfTElTVF9SRU1PVkUoZXYsIGVudHJ5KTsKICAg
ICBldi0+ZmQgPSAtMTsKIApAQCAtMTcxLDcgKzI3Miw4IEBAIHN0YXRpYyBpbnQgdGltZV9yZWdp
c3Rlcl9maW5pdGUobGlieGxfX2djICpnYywgbGlieGxfX2V2X3RpbWUgKmV2LAogewogICAgIGlu
dCByYzsKIAotICAgIHJjID0gT1NFVkVOVF9IT09LKHRpbWVvdXRfcmVnaXN0ZXIsICZldi0+Zm9y
X2FwcF9yZWcsIGFic29sdXRlLCBldik7CisgICAgcmMgPSBPU0VWRU5UX0hPT0sodGltZW91dCwg
cmVnaXN0ZXIsICZldi0+bmV4dXMtPmZvcl9hcHBfcmVnLAorICAgICAgICAgICAgICAgICAgICAg
IGFic29sdXRlLCBldi0+bmV4dXMpOwogICAgIGlmIChyYykgcmV0dXJuIHJjOwogCiAgICAgZXYt
PmluZmluaXRlID0gMDsKQEAgLTE4NCw3ICsyODYsNyBAQCBzdGF0aWMgaW50IHRpbWVfcmVnaXN0
ZXJfZmluaXRlKGxpYnhsX19nYyAqZ2MsIGxpYnhsX19ldl90aW1lICpldiwKIHN0YXRpYyB2b2lk
IHRpbWVfZGVyZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9fZXZfdGltZSAqZXYpCiB7CiAg
ICAgaWYgKCFldi0+aW5maW5pdGUpIHsKLSAgICAgICAgT1NFVkVOVF9IT09LX1ZPSUQodGltZW91
dF9kZXJlZ2lzdGVyLCBldi0+Zm9yX2FwcF9yZWcpOworICAgICAgICBPU0VWRU5UX0hPT0tfVk9J
RCh0aW1lb3V0LCBkZXJlZ2lzdGVyLCBldi0+bmV4dXMtPmZvcl9hcHBfcmVnKTsKICAgICAgICAg
TElCWExfVEFJTFFfUkVNT1ZFKCZDVFgtPmV0aW1lcywgZXYsIGVudHJ5KTsKICAgICB9CiB9CkBA
IC0yNzAsNyArMzcyLDcgQEAgaW50IGxpYnhsX19ldl90aW1lX21vZGlmeV9hYnMobGlieGxfX2dj
ICpnYywgbGlieGxfX2V2X3RpbWUgKmV2LAogICAgICAgICByYyA9IHRpbWVfcmVnaXN0ZXJfZmlu
aXRlKGdjLCBldiwgYWJzb2x1dGUpOwogICAgICAgICBpZiAocmMpIGdvdG8gb3V0OwogICAgIH0g
ZWxzZSB7Ci0gICAgICAgIHJjID0gT1NFVkVOVF9IT09LKHRpbWVvdXRfbW9kaWZ5LCAmZXYtPmZv
cl9hcHBfcmVnLCBhYnNvbHV0ZSk7CisgICAgICAgIHJjID0gT1NFVkVOVF9IT09LKCAgICAgICAg
IExJQlhMX1RBSUxRX1JFTU9WRSgmQ1RYLT5ldGltZXMsIGV2LCBlbnRyeSk7CkBAIC0xMDEwLDM1
ICsxMTEyLDY1IEBAIHZvaWQgbGlieGxfb3NldmVudF9yZWdpc3Rlcl9ob29rcyhsaWJ4bF9jdHgg
KmN0eCwKIAogCiB2b2lkIGxpYnhsX29zZXZlbnRfb2NjdXJyZWRfZmQobGlieGxfY3R4ICpjdHgs
IHZvaWQgKmZvcl9saWJ4bCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgZmQs
IHNob3J0IGV2ZW50cywgc2hvcnQgcmV2ZW50cykKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBpbnQgZmQsIHNob3J0IGV2ZW50c19pZ24sIHNob3J0IHJldmVudHNfaWduKQogewotICAg
IGxpYnhsX19ldl9mZCAqZXYgPSBmb3JfbGlieGw7Ci0KICAgICBFR0NfSU5JVChjdHgpOwogICAg
IENUWF9MT0NLOwogICAgIGFzc2VydCghQ1RYLT5vc2V2ZW50X2luX2hvb2spOwogCi0gICAgYXNz
ZXJ0KGZkID09IGV2LT5mZCk7Ci0gICAgcmV2ZW50cyAmPSBldi0+ZXZlbnRzOwotICAgIGlmIChy
ZXZlbnRzKQotICAgICAgICBldi0+ZnVuYyhlZ2MsIGV2LCBmZCwgZXYtPmV2ZW50cywgcmV2ZW50
cyk7CisgICAgbGlieGxfX2V2X2ZkICpldiA9IG9zZXZlbnRfZXZfZnJvbV9ob29rX25leHVzKGN0
eCwgZm9yX2xpYnhsKTsKKyAgICBpZiAoIWV2KSBnb3RvIG91dDsKKyAgICBpZiAoZXYtPmZkICE9
IGZkKSBnb3RvIG91dDsKKworICAgIHN0cnVjdCBwb2xsZmQgY2hlY2s7CisgICAgZm9yICg7Oykg
eworICAgICAgICBjaGVjay5mZCA9IGZkOworICAgICAgICBjaGVjay5ldmVudHMgPSBldi0+ZXZl
bnRzOworICAgICAgICBpbnQgciA9IHBvbGwoJmNoZWNrLCAxLCAwKTsKKyAgICAgICAgaWYgKCFy
KQorICAgICAgICAgICAgZ290byBvdXQ7CisgICAgICAgIGlmIChyPT0xKQorICAgICAgICAgICAg
YnJlYWs7CisgICAgICAgIGFzc2VydChyPDApOworICAgICAgICBpZiAoZXJybm8gIT0gRUlOVFIp
IHsKKyAgICAgICAgICAgIExJQlhMX19FVkVOVF9ESVNBU1RFUihlZ2MsICJmYWlsZWQgcG9sbCB0
byBjaGVjayBmb3IgZmQiLCBlcnJubywgMCk7CisgICAgICAgICAgICBnb3RvIG91dDsKKyAgICAg
ICAgfQorICAgIH0KIAorICAgIGlmIChjaGVjay5yZXZlbnRzKQorICAgICAgICBldi0+ZnVuYyhl
Z2MsIGV2LCBmZCwgZXYtPmV2ZW50cywgY2hlY2sucmV2ZW50cyk7CisKKyBvdXQ6CiAgICAgQ1RY
X1VOTE9DSzsKICAgICBFR0NfRlJFRTsKIH0KIAogdm9pZCBsaWJ4bF9vc2V2ZW50X29jY3VycmVk
X3RpbWVvdXQobGlieGxfY3R4ICpjdHgsIHZvaWQgKmZvcl9saWJ4bCkKIHsKLSAgICBsaWJ4bF9f
ZXZfdGltZSAqZXYgPSBmb3JfbGlieGw7Ci0KICAgICBFR0NfSU5JVChjdHgpOwogICAgIENUWF9M
T0NLOwogICAgIGFzc2VydCghQ1RYLT5vc2V2ZW50X2luX2hvb2spOwogCi0gICAgYXNzZXJ0KCFl
di0+aW5maW5pdGUpOworICAgIGxpYnhsX19ldl90aW1lICpldiA9IG9zZXZlbnRfZXZfZnJvbV9o
b29rX25leHVzKGN0eCwgZm9yX2xpYnhsKTsKKyAgICBpZiAoIWV2KSBnb3RvIG91dDsKKyAgICBp
ZiAoZXYtPmluZmluaXRlKSBnb3RvIG91dDsKKworICAgIHN0cnVjdCB0aW1ldmFsIG5vdzsKKyAg
ICBpbnQgciA9IGxpYnhsX19nZXR0aW1lb2ZkYXkoZ2MsICZub3cpOworICAgIGlmIChyKQorICAg
ICAgICAvKiBwcm9iYWJseSg/KSBzYWZlciB0byByaXNrIGEgcmFjZSBjYXVzaW5nIGEgdGltZSBl
dmVudCB0bworICAgICAgICAgKiBoYXBwZW4gZWFybHkgdGhhbiB0byBpZ25vcmUgdGhlIGV2ZW50
IGFuZCBjYWxsIERJU0FTVEVSICovCisgICAgICAgIGdvdG8gb3V0OworCisgICAgaWYgKHRpbWVy
Y21wKCZldi0+YWJzLCAmbm93LCA+KSkKKyAgICAgICAgLyogbG9zdCB0aGUgcmFjZSAtIHRoaXMg
aXMgYSBzdGFsZSB0aW1lciBldmVudCBjYWxsYmFjayAqLworICAgICAgICBnb3RvIG91dDsKKwog
ICAgIExJQlhMX1RBSUxRX1JFTU9WRSgmQ1RYLT5ldGltZXMsIGV2LCBlbnRyeSk7CiAgICAgZXYt
PmZ1bmMoZWdjLCBldiwgJmV2LT5hYnMpOwogCisgb3V0OgogICAgIENUWF9VTkxPQ0s7CiAgICAg
RUdDX0ZSRUU7CiB9CmRpZmYgLS1naXQgYS90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oIGIv
dG9vbHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaAppbmRleCBjYmEzNjE2Li42NDg0YmNiIDEwMDY0
NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCisrKyBiL3Rvb2xzL2xpYnhsL2xp
YnhsX2ludGVybmFsLmgKQEAgLTEzNiw2ICsxMzYsOCBAQCB0eXBlZGVmIHN0cnVjdCBsaWJ4bF9f
Z2MgbGlieGxfX2djOwogdHlwZWRlZiBzdHJ1Y3QgbGlieGxfX2VnYyBsaWJ4bF9fZWdjOwogdHlw
ZWRlZiBzdHJ1Y3QgbGlieGxfX2FvIGxpYnhsX19hbzsKIHR5cGVkZWYgc3RydWN0IGxpYnhsX19h
b3Bfb2NjdXJyZWQgbGlieGxfX2FvcF9vY2N1cnJlZDsKK3R5cGVkZWYgc3RydWN0IGxpYnhsX19v
c2V2ZW50X2hvb2tfbmV4dXMgbGlieGxfX29zZXZlbnRfaG9va19uZXh1czsKK3R5cGVkZWYgc3Ry
dWN0IGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4aSBsaWJ4bF9fb3NldmVudF9ob29rX25leGk7CiAK
IF9oaWRkZW4gdm9pZCBsaWJ4bF9fYWxsb2NfZmFpbGVkKGxpYnhsX2N0eCAqLCBjb25zdCBjaGFy
ICpmdW5jLAogICAgICAgICAgICAgICAgICAgICAgICAgIHNpemVfdCBubWVtYiwgc2l6ZV90IHNp
emUpIF9fYXR0cmlidXRlX18oKG5vcmV0dXJuKSk7CkBAIC0xNjMsNyArMTY1LDcgQEAgc3RydWN0
IGxpYnhsX19ldl9mZCB7CiAgICAgbGlieGxfX2V2X2ZkX2NhbGxiYWNrICpmdW5jOwogICAgIC8q
IHJlbWFpbmRlciBpcyBwcml2YXRlIGZvciBsaWJ4bF9fZXZfZmQuLi4gKi8KICAgICBMSUJYTF9M
SVNUX0VOVFJZKGxpYnhsX19ldl9mZCkgZW50cnk7Ci0gICAgdm9pZCAqZm9yX2FwcF9yZWc7Cisg
ICAgbGlieGxfX29zZXZlbnRfaG9va19uZXh1cyAqbmV4dXM7CiB9OwogCiAKQEAgLTE3OCw3ICsx
ODAsNyBAQCBzdHJ1Y3QgbGlieGxfX2V2X3RpbWUgewogICAgIGludCBpbmZpbml0ZTsgLyogbm90
IHJlZ2lzdGVyZWQgaW4gbGlzdCBvciB3aXRoIGFwcCBpZiBpbmZpbml0ZSAqLwogICAgIExJQlhM
X1RBSUxRX0VOVFJZKGxpYnhsX19ldl90aW1lKSBlbnRyeTsKICAgICBzdHJ1Y3QgdGltZXZhbCBh
YnM7Ci0gICAgdm9pZCAqZm9yX2FwcF9yZWc7CisgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXh1
cyAqbmV4dXM7CiB9OwogCiB0eXBlZGVmIHN0cnVjdCBsaWJ4bF9fZXZfeHN3YXRjaCBsaWJ4bF9f
ZXZfeHN3YXRjaDsKQEAgLTMyOSw2ICszMzEsOCBAQCBzdHJ1Y3QgbGlieGxfX2N0eCB7CiAgICAg
bGlieGxfX3BvbGxlciBwb2xsZXJfYXBwOyAvKiBsaWJ4bF9vc2V2ZW50X2JlZm9yZXBvbGwgYW5k
IF9hZnRlcnBvbGwgKi8KICAgICBMSUJYTF9MSVNUX0hFQUQoLCBsaWJ4bF9fcG9sbGVyKSBwb2xs
ZXJzX2V2ZW50LCBwb2xsZXJzX2lkbGU7CiAKKyAgICBMSUJYTF9TTElTVF9IRUFEKGxpYnhsX19v
c2V2ZW50X2hvb2tfbmV4aSwgbGlieGxfX29zZXZlbnRfaG9va19uZXh1cykKKyAgICAgICAgaG9v
a19mZF9uZXhpX2lkbGUsIGhvb2tfdGltZW91dF9uZXhpX2lkbGU7CiAgICAgTElCWExfTElTVF9I
RUFEKCwgbGlieGxfX2V2X2ZkKSBlZmRzOwogICAgIExJQlhMX1RBSUxRX0hFQUQoLCBsaWJ4bF9f
ZXZfdGltZSkgZXRpbWVzOwogCi0tIAp0ZzogKGJmZTcwMjUuLikgdC94ZW4veGwuZXZlbnRmaXgy
LmZkcmVnLmluZGlyZWN0IChkZXBlbmRzIG9uOiBiYXNlKQoKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

ZXZlbnQgd2hpbGUgbGlieGwgdGhvdWdodCBzdWNoIGV2ZW50IGlzIHVzZWxlc3MuIApDdXJyZW50
bHksIGZkIGlzIGRlcmVnaXN0ZXIgdHdpY2UsIHRoZSBmaXJzdCB0aW1lIGlzIGp1c3QgYWZ0ZXIg
dHJhbnNmZXIgZG9uZSBhbmQgc2Vjb25kIHRpbWUgaXMgCmR1cmluZyBjbGVhbnVwIGZ1bmN0aW9u
IHN1Y2ggYXMgYm9vdGxvYWRlciBjbGVhbnVwLiBJZiB3ZSBvbmx5IGRvIGl0IGluIHRoZSBjbGVh
bnVwIGZ1bmN0aW9ucywKdGhpcyByYWNlIHNob3VsZCBub3QgaGFwcGVuLgoKPj4+IElhbiBKYWNr
c29uIDxJYW4uSmFja3NvbkBldS5jaXRyaXguY29tPiAxMuW5tDEx5pyIMjjml6Ug5LiK5Y2IIDM6
MDggPj4+CkJhbXZvciBKaWFuIFpoYW5nIHdyaXRlcyAoIltQQVRDSF0gZml4IHJhY2UgY29uZGl0
aW9uIGJldHdlZW4gbGlidmlydGQgZXZlbnQgaGFuZGxpbmcgYW5kIGxpYnhsIGZkIGRlcmVnaXN0
ZXIiKToKPiB0aGUgcmFjZSBjb25kaXRpb24gbWF5IGJlIGVuY291bnRlZCBhdCB0aGUgZm9sbG93
aW5nIHNlbmFybzoKClRoYW5rcyBmb3IgdGhpcyByZXBvcnQuICBZb3UgYXJlIGNvcnJlY3QgdGhh
dCB0aGVyZSBpcyBhIHJhY2UgaGVyZS4KVW5mb3J0dW5hdGVseSBpdCdzIG1vcmUgY29tcGxpY2F0
ZWQgdGhhbiB5b3VyIGFuYWx5c2lzIHJldmVhbHMgYW5kIEkKdGhpbmsgeW91ciBwcm9wb3NlZCBm
aXggaXMgbm90IHN1ZmZpY2llbnQuCgpJIGhhdmUgd29ya2VkIHVwIGEgcGF0Y2ggLSBzZWUgYmVs
b3cgLSB3aGljaCBJIHRoaW5rIGZpeGVzIG1vc3Qgb2YgdGhlCnByb2JsZW0uICBQbGVhc2Ugc2Vl
IHRoZSBjb21taXQgbWVzc2FnZSBhbmQgdGhlIGJpZyBuZXcgY29tbWVudCBqdXN0CmFmdGVyIHRo
ZSBkZWZpbml0aW9uIG9mIE9TRVZFTlRfSE9PS19WT0lEIGZvciBkZXRhaWxzLgoKQnV0IEkgdGhp
bmsgdGhlcmUgbWF5IGJlIG9uZSBzZXJpb3VzIHByb2JsZW0gcmVtYWluaW5nLiAgSXQgc2VlbXMg
dG8KbWUgdGhhdCBlbnRyeSB0byBsaWJ4bF9fb3NldmVudF9vY2N1cnJlZF90aW1lb3V0IGNhbiBy
YWNlIHdpdGggdGhlCmhvb2sgdGltZW91dF9kZXJlZ2lzdGVyLiAgU3BlY2lmaWNhbGx5LCB0aGUg
QVBJIHByZXNlbnRlZCBieSBsaWJ4bApkb2VzIG5vdCBhbGxvdyBsaWJ4bCB0byB0ZWxsIHdoZXRo
ZXIgYSBjYWxsIHRvCmxpYnhsX19vc2V2ZW50X29jY3VycmVkX3RpbWVvdXQgaXMgb25lIHdoaWNo
IHdhcyBlbnRlcmVkIGJlZm9yZSBhIGNhbGwKdG8gbGlieGxfX2V2X3RpbWVfZGVyZWdpc3RlciBh
bmQgdGhlbmNlIHRpbWVvdXRfZGVyZWdpc3Rlci4KCkxldCB1cyBzdXBwb3NlIHRoYXQgdGhlIHBy
ZXZpb3VzIHRpbWVvdXQgcmVnaXN0cmF0aW9uIGlzIHNpbWlsYXIgdG8KdGhlIG5ldyBvbmUsIGJ1
dCBoYXMgYSBkaWZmZXJlbnQgZm9yX2FwcF9yZWcgdmFsdWUuCgpJZiB0aGUgY2FsbCB0byBsaWJ4
bF9fb3NldmVudF9vY2N1cnJlZF90aW1lb3V0IHdhcyBlbnRlcmVkIGEgbG9uZyB0aW1lCmFnbyB0
aGVuIGl0IHJlZmVycyB0byB0aGUgb2xkIHRpbWVvdXQgcmVnaXN0cmF0aW9uIGFuZCB0aGUgY3Vy
cmVudAp0aW1lb3V0IHJlZ2lzdHJhdGlvbiBpcyBub3QgYWZmZWN0ZWQsIGFuZCBtdXN0IHN0aWxs
IGJlIGRlcmVnaXN0ZXJlZCBpZgpuZWNlc3NhcnkuICBOb3QgZGVyZWdpc3RlcmluZyBpdCB3b3Vs
ZCBsZWFrIG1lbW9yeS4KCklmIHRoZSBjYWxsIHdhcyBlbnRlcmVkICJyZWNlbnRseSIgaXQgcmVm
ZXJzIHRvIHRoZSBjdXJyZW50IHRpbWVvdXQKcmVnaXN0cmF0aW9uLiAgVGhhdCBtZWFucyB0aGF0
IHRoZSBjdXJyZW50IHRpbWVvdXQgZm9yX2FwcF9yZWcgdmFsdWUKKHRoZSBsaWJ2aXJ0IGV2ZW50
IHJlcXVlc3QpIGhhcyBiZWVuIGZyZWVkIGFuZCBkZXJlZ2lzdGVyaW5nIGl0IHdvdWxkCmJlIGEg
bWVtb3J5IG1hbmFnZW1lbnQgZXJyb3IuCgpCYW12b3IsIGRvIHlvdSBhZ3JlZSB3aXRoIHRoaXMg
YW5hbHlzaXMgPyAgSWYgc28sIHdoYXQgY2hhbmdlIHRvIHRoZQpsaWJ4bCBBUEkgc3ludGF4IG9y
IHNlbWFudGljcyB3b3VsZCBmaXggdGhlIHByb2JsZW0gPwoKSSBzZWUgdGhhdCB0aGUgbm90aW9u
IHRoYXQgYSB0aW1lb3V0IGV2ZW50IHJlZ2lzdHJhdGlvbiBieSBsaWJ4bCB3aXRoCnRoZSBhcHBs
aWNhdGlvbiBpcyBhdXRvbWF0aWMgZGVyZWdpc3RlcmVkIGlzIG5vdCBleHByZXNzZWQgY2xlYXJs
eSBpbgpsaWJ4bC5oLiAgU28gcGVyaGFwcyBvbmUgZml4IHdvdWxkIGJlIHRvIGRlY3JlZSB0aGF0
IGxpYnhsIGlzIHN1cHBvc2VkCnRvIGNhbGwgdGltZW91dF9kZXJlZ2lzdGVyIGZyb20gd2l0aGlu
CmxpYnhsX19vc2V2ZW50X29jY3VycmVkX3RpbWVvdXQgaWYgaXQgd2Fzbid0IGNhbGxlZCBiZWZv
cmUuCgpUaGFua3MsCklhbi4KCgpGcm9tOiBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25AZXUuY2l0
cml4LmNvbT4KU3ViamVjdDogW1BBVENIXSBSRkM6IGxpYnhsOiBmaXggc3RhbGUgZmQvdGltZW91
dCBldmVudCBjYWxsYmFjayByYWNlCgpETyBOT1QgQVBQTFkKCkJlY2F1c2UgdGhlcmUgaXMgbm90
IG5lY2Vzc2FyaWx5IGFueSBsb2NrIGhlbGQgYXQgdGhlIHBvaW50IHRoZQphcHBsaWNhdGlvbiAo
ZWcsIGxpYnZpcnQpIGNhbGxzIGxpYnhsX29zZXZlbnRfb2NjdXJyZWRfdGltZW91dCBhbmQKLi4u
X2ZkLCBpbiBhIG11bHRpdGhyZWFkZWQgcHJvZ3JhbSB0aG9zZSBjYWxscyBtYXkgYmUgYXJiaXRy
YXJpbHkKZGVsYXllZCBpbiByZWxhdGlvbiB0byBvdGhlciBhY3Rpdml0aWVzIHdpdGhpbiB0aGUg
cHJvZ3JhbS4KCmxpYnhsIHRoZXJlZm9yZSBuZWVkcyB0byBiZSBwcmVwYXJlZCB0byByZWNlaXZl
IHZlcnkgb2xkIGV2ZW50CmNhbGxiYWNrcy4gIEFycmFuZ2UgZm9yIHRoaXMgdG8gYmUgdGhlIGNh
c2UuCgpEb2N1bWVudCB0aGUgcHJvYmxlbSBhbmQgdGhlIHNvbHV0aW9uIGluIGEgY29tbWVudCBp
biBsaWJ4bF9ldmVudC5jCmp1c3QgYmVmb3JlIHRoZSBkZWZpbml0aW9uIG9mIHN0cnVjdCBsaWJ4
bF9fb3NldmVudF9ob29rX25leHVzLgoKUmVwb3J0ZWQtYnk6IEJhbXZvciBKaWFuIFpoYW5nIDxi
anpoYW5nQHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25A
ZXUuY2l0cml4LmNvbT4KCi0tLQogdG9vbHMvbGlieGwvbGlieGxfZXZlbnQuYyAgICB8ICAxODgg
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tLS0KIHRvb2xzL2xpYnhsL2xp
YnhsX2ludGVybmFsLmggfCAgICA4ICsrLQogMiBmaWxlcyBjaGFuZ2VkLCAxNjYgaW5zZXJ0aW9u
cygrKSwgMzAgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwvbGlieGxfZXZl
bnQuYyBiL3Rvb2xzL2xpYnhsL2xpYnhsX2V2ZW50LmMKaW5kZXggNzJjYjcyMy4uN2M4NzE5ZCAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfZXZlbnQuYworKysgYi90b29scy9saWJ4bC9s
aWJ4bF9ldmVudC5jCkBAIC0zOCwyMyArMzgsMTIzIEBACiAgKiBUaGUgYXBwbGljYXRpb24ncyBy
ZWdpc3RyYXRpb24gaG9va3Mgc2hvdWxkIGJlIGNhbGxlZCBPTkxZIHZpYQogICogdGhlc2UgbWFj
cm9zLCB3aXRoIHRoZSBjdHggbG9ja2VkLiAgTGlrZXdpc2UgYWxsIHRoZSAib2NjdXJyZWQiCiAg
KiBlbnRyeXBvaW50cyBmcm9tIHRoZSBhcHBsaWNhdGlvbiBzaG91bGQgYXNzZXJ0KCFpbl9ob29r
KTsKKyAqCisgKiBEdXIrICogZXZhbHVhdGVkIC0gZXYtPm5leHVzIGlzIGd1YXJhbnRlZWQgdG8g
YmUgdmFsaWQgYW5kIHJlZmVyIHRvIHRoZQorICogbmV4dXMgd2hpY2ggaXMgYmVpbmcgdXNlZCBm
b3IgdGhpcyBldmVudCByZWdpc3RyYXRpb24uICBUaGUKKyAqIGFyZ3VtZW50cyBzaG91bGQgc3Bl
Y2lmeSBldi0+bmV4dXMgZm9yIHRoZSBmb3JfbGlieGwgYXJndW1lbnQgYW5kCisgKiBldi0+bmV4
dXMtPmZvcl9hcHBfcmVnIChvciBhIHBvaW50ZXIgdG8gaXQpIGZvciBmb3JfYXBwX3JlZy4KICAq
LwotI2RlZmluZSBPU0VWRU5UX0hPT0tfSU5URVJOKHJldHZhbCwgaG9va25hbWUsIC4uLikgZG8g
eyAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgaWYgKENUWC0+b3NldmVudF9ob29rcykgeyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKLSAgICAgICAg
Q1RYLT5vc2V2ZW50X2luX2hvb2srKzsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgXAotICAgICAgICByZXR2YWwgQ1RYLT5vc2V2ZW50X2hvb2tzLT5ob29rbmFt
ZShDVFgtPm9zZXZlbnRfdXNlciwgX19WQV9BUkdTX18pOyBcCi0gICAgICAgIENUWC0+b3NldmVu
dF9pbl9ob29rLS07ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IFwKLSAgICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAorI2RlZmluZSBPU0VWRU5UX0hPT0tfSU5URVJOKHJl
dHZhbCwgZmFpbGVkcCwgZXZraW5kLCBob29rb3AsIC4uLikgZG8geyAgXAorICAgIGlmIChDVFgt
Pm9zZXZlbnRfaG9va3MpIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgICAgICBDVFgtPm9zZXZlbnRfaW5faG9vaysrOyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29rX25leGkg
Km5leGkgPSAmQ1RYLT5ob29rXyMjZXZraW5kIyNfbmV4aV9pZGxlOyBcCisgICAgICAgIG9zZXZl
bnRfaG9va19wcmVfIyNob29rb3AoZ2MsIGV2LCBuZXhpLCAmZXYtPm5leHVzKTsgICAgICAgICAg
ICBcCisgICAgICAgIHJldHZhbCBDVFgtPm9zZXZlbnRfaG9va3MtPmV2a2luZCMjXyMjaG9va29w
ICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICAoQ1RYLT5vc2V2ZW50X3VzZXIsIF9f
VkFfQVJHU19fKTsgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgIGlmICgoZmFp
bGVkcCkpICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CisgICAgICAgICAgICBvc2V2ZW50X2hvb2tfZmFpbGVkXyMjaG9va29wKGdjLCBldiwgbmV4aSwg
JmV2LT5uZXh1cyk7ICAgICBcCisgICAgICAgIENUWC0+b3NldmVudF9pbl9ob29rLS07ICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgfSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiB9
IHdoaWxlICgwKQogCi0jZGVmaW5lIE9TRVZFTlRfSE9PSyhob29rbmFtZSwgLi4uKSAoeyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKLSAgICBpbnQgb3NldmVudF9ob29r
X3JjID0gMDsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
XAotICAgIE9TRVZFTlRfSE9PS19JTlRFUk4ob3NldmVudF9ob29rX3JjID0gLCBob29rbmFtZSwg
X19WQV9BUkdTX18pOyAgICAgICAgICBcCi0gICAgb3NldmVudF9ob29rX3JjOyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyNkZWZpbmUg
T1NFVkVOVF9IT09LKGV2a2luZCwgaG9va29wLCAuLi4pICh7ICAgICAgICAgICAgICAgICAgIFwK
KyAgICBpbnQgb3NldmVudF9ob29rX3JjID0gMDsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgT1NFVkVOVF9IT09LX0lOVEVSTihvc2V2ZW50X2hvb2tfcmMgPSwgISFv
c2V2ZW50X2hvb2tfcmMsICAgXAorICAgICAgICAgICAgICAgICAgICAgICAgZXZraW5kLCBob29r
b3AsIF9fVkFfQVJHU19fKTsgICAgICAgICAgXAorICAgIG9zZXZlbnRfaG9va19yYzsgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKIH0pCiAKLSNkZWZpbmUgT1NF
VkVOVF9IT09LX1ZPSUQoaG9va25hbWUsIC4uLikgXAotICAgIE9TRVZFTlRfSE9PS19JTlRFUk4o
Lyogdm9pZCAqLywgaG9va25hbWUsIF9fVkFfQVJHU19fKQorI2RlZmluZSBPU0VWRU5UX0hPT0tf
Vk9JRChldmtpbmQsIGhvb2tvcCwgLi4uKSBcCisgICAgT1NFVkVOVF9IT09LX0lOVEVSTigvKiB2
b2lkICovLCAwLCBldmtpbmQsIGhvb2tvcCwgX19WQV9BUkdTX18pCisKKy8qCisgKiBUaGUgYXBw
bGljYXRpb24ncyBjYWxscyB0byBsaWJ4bF9vc2V2ZW50X29jY3VycmVkXy4uLiBtYXkgYmUKKyAq
IGluZGVmaW5pdGVseSBkZWxheWVkIHdpdGggcmVzcGVjdCB0byB0aGUgcmVzdCBvZiB0aGUgcHJv
Z3JhbSAoc2luY2UKKyAqIHRoZXkgYXJlIG5vdCBuZWNlc3NhcmlseSBjYWxsZWQgd2l0aCBhbnkg
bG9jayBoZWxkKS4gIFNvIHRoZQorICogZm9yX2xpYnhsIHZhbHVlIHdlIHJlY2VpdmUgbWF5IGJl
IChhbG1vc3QpIGFyYml0cmFyaWx5IG9sZC4gIEFsbCB3ZQorICoga25vdyBpcyB0aGF0IGl0IGNh
bWUgZnJvbSB0aGlzIGN0eC4KKyAqCisgKiBUaGVyZWZvcmUgd2UgbWF5IG5vdCBmcmVlIHRoZSBv
YmplY3QgcmVmZXJyZWQgdG8gYnkgYW55IGZvcl9saWJ4bAorICogdmFsdWUgdW50aWwgd2UgZnJl
ZSB0aGUgd2hvbGUgbGlieGxfY3R4LiAgQW5kIGlmIHdlIHJldXNlIGl0IHdlCisgKiBtdXN0IGJl
IGFibGUgdG8gdGVsbCB3aGVuIGFuIG9sZCB1c2UgdHVybnMgdXAsIGFuZCBkaXNjYXJkIHRoZQor
ICogc3RhbGUgZXZlbnQuCisgKgorICogVGh1cyB3ZSBjYW5ub3QgdXNlIHRoZSBldiBkaXJlY3Rs
eSBhcyB0aGUgZm9yX2xpYnhsIHZhbHVlIC0gd2UgbmVlZAorICogYSBsYXllciBvZiBpbmRpcmVj
dGlvbi4KKyAqCisgKiBXZSBkbyB0aGlzIGJ5IGtlZXBpbmcgYSBwb29sIG9mIGxpYnhsX19vc2V2
ZW50X2hvb2tfbmV4dXMgc3RydWN0cywKKyAqIGFuZCB1c2UgcG9pbnRlcnMgdG8gdGhlbSBhcyBm
b3JfbGlieGwgdmFsdWVzLiAgSW4gZmFjdCwgdGhlcmUgYXJlCisgKiB0d28gcG9vbHM6IG9uZSBm
b3IgZmRzIGFuZCBvbmUgZm9yIHRpbWVvdXRzLiAgVGhpcyBlbnN1cmVzIHRoYXQgd2UKKyAqIGRv
bid0IHJpc2sgYSB0eXBlIGVycm9yIHdoZW4gd2UgdXBjYXN0IG5leHVzLT5ldi4gIEluIGVhY2gg
bmV4dXMKKyAqIHRoZSBldiBpcyBlaXRoZXIgbnVsbCBvciBwb2ludHMgdG8gYSB2YWxpZCBsaWJ4
bF9fZXZfdGltZSBvcgorICogbGlieGxfX2V2X2ZkLCBhcyBhcHBsaWNhYmxlLgorICoKKyAqIFdl
IC9kby8gYWxsb3cgb3Vyc2VsdmVzIHRvIHJlYXNzb2NpYXRlIGFuIG9sZCBuZXh1cyB3aXRoIGEg
bmV3IGV2CisgKiBhcyBvdGhlcndpc2Ugd2Ugd291bGQgaGF2ZSB0byBsZWFrIG5leGkuICAoVGhp
cyByZWFzc29jaWF0aW9uCisgKiBtaWdodCwgb2YgY291cnNlLCBiZSBhbiBvbGQgZXYgYmVpbmcg
cmV1c2VkIGZvciBhIG5ldyBwdXJwb3NlIHNvCisgKiBzaW1wbHkgY29tcGFyaW5nIHRoZSBldiBw
b2ludGVyIGlzIG5vdCBzdWZmaWNpZW50LikgIFRodXMgdGhlCisgKiBsaWJ4bF9vc2V2ZW50X29j
Y3VycmVkIGZ1bmN0aW9ucyBuZWVkIHRvIGNoZWNrIHRoYXQgdGhlIGNvbmRpdGlvbgorICogYWxs
ZWdlZGx5IHNpZ25hbGxlZCBieSB0aGlzIGV2ZW50IGFjdHVhbGx5IGV4aXN0cy4KKyAqCisgKiBU
aGUgbmV4aSBhbmQgdGhlIGxpc3RzIGFyZSBhbGwgcHJvdGVjdGVkIGJ5IHRoZSBjdHggbG9jay4K
KyAqLworIAorc3RydWN0IGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4dXMgeworICAgIExJQlhMX1NM
SVNUX0VOVFJZKGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4dXMpIG5leHQ7Cit9OworCitzdGF0aWMg
dm9pZCBvc2V2ZW50X2hvb2tfcHJlX3JlZ2lzdGVyKGxpYnhsX19nYyAqZ2MsIHZvaWQgKmV2LAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29r
X25leGkgKm5leGlfaWRsZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfX29zZXZlbnRfaG9va19uZXh1cyAqKm5leHVzX3IpCit7CisgICAgbGlieGxfX29zZXZl
bnRfaG9va19uZXh1cyAqbmV4dXMgPSBMSUJYTF9TTElTVF9GSVJTVChuZXhpX2lkbGUpOworICAg
IGlmIChuZXh1cykgeworICAgICAgICBMSUJYTF9TTElTVF9SRU1PVkVfSEVBRChuZXhpX2lkbGUs
IG5leHQpOworICAgIH0gZWxzZSB7CisgICAgICAgIG5leHVzID0gbGlieGxfX3phbGxvYyhOT0dD
LCBzaXplb2YoKm5leHVzKSk7CisgICAgfQorICAgIG5leHVzLT5ldiA9IGV2OworICAgICpuZXh1
c19yID0gbmV4dXM7Cit9CitzdGF0aWMgdm9pZCBvc2V2ZW50X2hvb2tfcHJlX2RlcmVnaXN0ZXIo
bGlieGxfX2djICpnYywgdm9pZCAqZXYsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXhpICpuZXhpX2lkbGUsCisgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXh1cyAq
Km5leHVzKQoreworICAgICgqbmV4dXMpLT5ldiA9IDA7CisgICAgTElCWExfU0xJU1RfSU5TRVJU
X0hFQUQobmV4aV9pZGxlLCAqbmV4dXMsIG5leHQpOworfQorc3RhdGljIHZvaWQgb3NldmVudF9o
b29rX3ByZV9tb2RpZnkobGlieGxfX2djICpnYywgdm9pZCAqZXYsCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29rX25leGkgKm5leGlfaWRsZSwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19vc2V2ZW50X2hvb2tf
bmV4dXMgKipuZXh1cykKK3sKK30KKworc3RhdGljIHZvaWQgb3NldmVudF9ob29rX2ZhaWxlZF9y
ZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCB2b2lkICpldiwKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXhpICpuZXhpX2lkbGUsCisg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19vc2V2ZW50X2hv
b2tfbmV4dXMgKipuZXh1cykKK3sKKyAgICBvc2V2ZW50X2hvb2tfcHJlX2RlcmVnaXN0ZXIoZ2Ms
IGV2LCBuZXhpX2lkbGUsIG5leHVzKTsKK30KK3N0YXRpYyB2b2lkIG9zZXZlbnRfaG9va19mYWls
ZWRfZGVyZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCB2b2lkICpldiwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fb3NldmVudF9ob29rX25leGkgKm5leGlf
aWRsZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9f
b3NldmVudF9ob29rX25leHVzICoqbmV4dXMpCit7CisgICAgYWJvcnQoKTsKK30KK3N0YXRpYyB2
b2lkIG9zZXZlbnRfaG9va19mYWlsZWRfbW9kaWZ5KGxpYnhsX19nYyAqZ2MsIHZvaWQgKmV2LAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX29zZXZlbnRfaG9v
a19uZXhpICpuZXhpX2lkbGUsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBsaWJ4bF9fb3NldmVudF9ob29rX25leHVzICoqbmV4dXMpCit7Cit9CisKK3N0YXRpYyB2b2lk
ICpvc2V2ZW50X2V2X2Zyb21faG9va19uZXh1cyhsaWJ4bF9jdHggKmN0eCwgdm9pZCAqZm9yX2xp
YnhsKQoreworICAgIGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4dXMgKm5leHVzID0gZm9yX2xpYnhs
OworICAgIHJldHVybiBuZXh1cy0+ZXY7Cit9CiAKIC8qCiAgKiBmZCBldmVudHMKQEAgLTcyLDcg
KzE3Miw4IEBAIGludCBsaWJ4bF9fZXZfZmRfcmVnaXN0ZXIobGlieGxfX2djICpnYywgbGlieGxf
X2V2X2ZkICpldiwKIAogICAgIERCRygiZXZfZmQ9JXAgcmVnaXN0ZXIgZmQ9JWQgZXZlbnRzPSV4
IiwgZXYsIGZkLCBldmVudHMpOwogCi0gICAgcmMgPSBPU0VWRU5UX0hPT0soZmRfcmVnaXN0ZXIs
IGZkLCAmZXYtPmZvcl9hcHBfcmVnLCBldmVudHMsIGV2KTsKKyAgICByYyA9IE9TRVZFTlRfSE9P
SyhmZCwgcmVnaXN0ZXIsIGZkLCAmZXYtPm5leHVzLT5mb3JfYXBwX3JlZywKKyAgICAgICAgICAg
ICAgICAgICAgICBldmVudHMsIGV2LT5uZXh1cyk7CiAgICAgaWYgKHJjKSBnb3RvIG91dDsKIAog
ICAgIGV2LT5mZCA9IGZkOwpAQCAtOTcsNyArMTk4LDcgQEAgaW50IGxpYnhsX19ldl9mZF9tb2Rp
ZnkobGlieGxfX2djICpnYywgbGlieGxfX2V2X2ZkICpldiwgc2hvcnQgZXZlbnRzKQogCiAgICAg
REJHKCJldl9mZD0lcCBtb2RpZnkgZmQ9JWQgZXZlbnRzPSV4IiwgZXYsIGV2LT5mZCwgZXZlbnRz
KTsKIAotICAgIHJjID0gT1NFVkVOVF9IT09LKGZkX21vZGlmeSwgZXYtPmZkLCAmZXYtPmZvcl9h
cHBfcmVnLCBldmVudHMpOworICAgIHJjID0gT1NFVkVOVF9IT09LKGZkLCBtb2RpZnksIGV2LT5m
ZCwgJmV2LT5uZXh1cy0+Zm9yX2FwcF9yZWcsIGV2ZW50cyk7CiAgICAgaWYgKHJjKSBnb3RvIG91
dDsKIAogICAgIGV2LT5ldmVudHMgPSBldmVudHM7CkBAIC0xMTksNyArMjIwLDcgQEAgdm9pZCBs
aWJ4bF9fZXZfZmRfZGVyZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9fZXZfZmQgKmV2KQog
CiAgICAgREJHKCJldl9mZD0lcCBkZXJlZ2lzdGVyIGZkPSVkIiwgZXYsIGV2LT5mZCk7CiAKLSAg
ICBPU0VWRU5UX0hPT0tfVk9JRChmZF9kZXJlZ2lzdGVyLCBldi0+ZmQsIGV2LT5mb3JfYXBwX3Jl
Zyk7CisgICAgT1NFVkVOVF9IT09LX1ZPSUQoZmQsIGRlcmVnaXN0ZXIsIGV2LT5mZCwgZXYtPm5l
eHVzLT5mb3JfYXBwX3JlZyk7CiAgICAgTElCWExfTElTVF9SRU1PVkUoZXYsIGVudHJ5KTsKICAg
ICBldi0+ZmQgPSAtMTsKIApAQCAtMTcxLDcgKzI3Miw4IEBAIHN0YXRpYyBpbnQgdGltZV9yZWdp
c3Rlcl9maW5pdGUobGlieGxfX2djICpnYywgbGlieGxfX2V2X3RpbWUgKmV2LAogewogICAgIGlu
dCByYzsKIAotICAgIHJjID0gT1NFVkVOVF9IT09LKHRpbWVvdXRfcmVnaXN0ZXIsICZldi0+Zm9y
X2FwcF9yZWcsIGFic29sdXRlLCBldik7CisgICAgcmMgPSBPU0VWRU5UX0hPT0sodGltZW91dCwg
cmVnaXN0ZXIsICZldi0+bmV4dXMtPmZvcl9hcHBfcmVnLAorICAgICAgICAgICAgICAgICAgICAg
IGFic29sdXRlLCBldi0+bmV4dXMpOwogICAgIGlmIChyYykgcmV0dXJuIHJjOwogCiAgICAgZXYt
PmluZmluaXRlID0gMDsKQEAgLTE4NCw3ICsyODYsNyBAQCBzdGF0aWMgaW50IHRpbWVfcmVnaXN0
ZXJfZmluaXRlKGxpYnhsX19nYyAqZ2MsIGxpYnhsX19ldl90aW1lICpldiwKIHN0YXRpYyB2b2lk
IHRpbWVfZGVyZWdpc3RlcihsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9fZXZfdGltZSAqZXYpCiB7CiAg
ICAgaWYgKCFldi0+aW5maW5pdGUpIHsKLSAgICAgICAgT1NFVkVOVF9IT09LX1ZPSUQodGltZW91
dF9kZXJlZ2lzdGVyLCBldi0+Zm9yX2FwcF9yZWcpOworICAgICAgICBPU0VWRU5UX0hPT0tfVk9J
RCh0aW1lb3V0LCBkZXJlZ2lzdGVyLCBldi0+bmV4dXMtPmZvcl9hcHBfcmVnKTsKICAgICAgICAg
TElCWExfVEFJTFFfUkVNT1ZFKCZDVFgtPmV0aW1lcywgZXYsIGVudHJ5KTsKICAgICB9CiB9CkBA
IC0yNzAsNyArMzcyLDcgQEAgaW50IGxpYnhsX19ldl90aW1lX21vZGlmeV9hYnMobGlieGxfX2dj
ICpnYywgbGlieGxfX2V2X3RpbWUgKmV2LAogICAgICAgICByYyA9IHRpbWVfcmVnaXN0ZXJfZmlu
aXRlKGdjLCBldiwgYWJzb2x1dGUpOwogICAgICAgICBpZiAocmMpIGdvdG8gb3V0OwogICAgIH0g
ZWxzZSB7Ci0gICAgICAgIHJjID0gT1NFVkVOVF9IT09LKHRpbWVvdXRfbW9kaWZ5LCAmZXYtPmZv
cl9hcHBfcmVnLCBhYnNvbHV0ZSk7CisgICAgICAgIHJjID0gT1NFVkVOVF9IT09LKCAgICAgICAg
IExJQlhMX1RBSUxRX1JFTU9WRSgmQ1RYLT5ldGltZXMsIGV2LCBlbnRyeSk7CkBAIC0xMDEwLDM1
ICsxMTEyLDY1IEBAIHZvaWQgbGlieGxfb3NldmVudF9yZWdpc3Rlcl9ob29rcyhsaWJ4bF9jdHgg
KmN0eCwKIAogCiB2b2lkIGxpYnhsX29zZXZlbnRfb2NjdXJyZWRfZmQobGlieGxfY3R4ICpjdHgs
IHZvaWQgKmZvcl9saWJ4bCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgZmQs
IHNob3J0IGV2ZW50cywgc2hvcnQgcmV2ZW50cykKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBpbnQgZmQsIHNob3J0IGV2ZW50c19pZ24sIHNob3J0IHJldmVudHNfaWduKQogewotICAg
IGxpYnhsX19ldl9mZCAqZXYgPSBmb3JfbGlieGw7Ci0KICAgICBFR0NfSU5JVChjdHgpOwogICAg
IENUWF9MT0NLOwogICAgIGFzc2VydCghQ1RYLT5vc2V2ZW50X2luX2hvb2spOwogCi0gICAgYXNz
ZXJ0KGZkID09IGV2LT5mZCk7Ci0gICAgcmV2ZW50cyAmPSBldi0+ZXZlbnRzOwotICAgIGlmIChy
ZXZlbnRzKQotICAgICAgICBldi0+ZnVuYyhlZ2MsIGV2LCBmZCwgZXYtPmV2ZW50cywgcmV2ZW50
cyk7CisgICAgbGlieGxfX2V2X2ZkICpldiA9IG9zZXZlbnRfZXZfZnJvbV9ob29rX25leHVzKGN0
eCwgZm9yX2xpYnhsKTsKKyAgICBpZiAoIWV2KSBnb3RvIG91dDsKKyAgICBpZiAoZXYtPmZkICE9
IGZkKSBnb3RvIG91dDsKKworICAgIHN0cnVjdCBwb2xsZmQgY2hlY2s7CisgICAgZm9yICg7Oykg
eworICAgICAgICBjaGVjay5mZCA9IGZkOworICAgICAgICBjaGVjay5ldmVudHMgPSBldi0+ZXZl
bnRzOworICAgICAgICBpbnQgciA9IHBvbGwoJmNoZWNrLCAxLCAwKTsKKyAgICAgICAgaWYgKCFy
KQorICAgICAgICAgICAgZ290byBvdXQ7CisgICAgICAgIGlmIChyPT0xKQorICAgICAgICAgICAg
YnJlYWs7CisgICAgICAgIGFzc2VydChyPDApOworICAgICAgICBpZiAoZXJybm8gIT0gRUlOVFIp
IHsKKyAgICAgICAgICAgIExJQlhMX19FVkVOVF9ESVNBU1RFUihlZ2MsICJmYWlsZWQgcG9sbCB0
byBjaGVjayBmb3IgZmQiLCBlcnJubywgMCk7CisgICAgICAgICAgICBnb3RvIG91dDsKKyAgICAg
ICAgfQorICAgIH0KIAorICAgIGlmIChjaGVjay5yZXZlbnRzKQorICAgICAgICBldi0+ZnVuYyhl
Z2MsIGV2LCBmZCwgZXYtPmV2ZW50cywgY2hlY2sucmV2ZW50cyk7CisKKyBvdXQ6CiAgICAgQ1RY
X1VOTE9DSzsKICAgICBFR0NfRlJFRTsKIH0KIAogdm9pZCBsaWJ4bF9vc2V2ZW50X29jY3VycmVk
X3RpbWVvdXQobGlieGxfY3R4ICpjdHgsIHZvaWQgKmZvcl9saWJ4bCkKIHsKLSAgICBsaWJ4bF9f
ZXZfdGltZSAqZXYgPSBmb3JfbGlieGw7Ci0KICAgICBFR0NfSU5JVChjdHgpOwogICAgIENUWF9M
T0NLOwogICAgIGFzc2VydCghQ1RYLT5vc2V2ZW50X2luX2hvb2spOwogCi0gICAgYXNzZXJ0KCFl
di0+aW5maW5pdGUpOworICAgIGxpYnhsX19ldl90aW1lICpldiA9IG9zZXZlbnRfZXZfZnJvbV9o
b29rX25leHVzKGN0eCwgZm9yX2xpYnhsKTsKKyAgICBpZiAoIWV2KSBnb3RvIG91dDsKKyAgICBp
ZiAoZXYtPmluZmluaXRlKSBnb3RvIG91dDsKKworICAgIHN0cnVjdCB0aW1ldmFsIG5vdzsKKyAg
ICBpbnQgciA9IGxpYnhsX19nZXR0aW1lb2ZkYXkoZ2MsICZub3cpOworICAgIGlmIChyKQorICAg
ICAgICAvKiBwcm9iYWJseSg/KSBzYWZlciB0byByaXNrIGEgcmFjZSBjYXVzaW5nIGEgdGltZSBl
dmVudCB0bworICAgICAgICAgKiBoYXBwZW4gZWFybHkgdGhhbiB0byBpZ25vcmUgdGhlIGV2ZW50
IGFuZCBjYWxsIERJU0FTVEVSICovCisgICAgICAgIGdvdG8gb3V0OworCisgICAgaWYgKHRpbWVy
Y21wKCZldi0+YWJzLCAmbm93LCA+KSkKKyAgICAgICAgLyogbG9zdCB0aGUgcmFjZSAtIHRoaXMg
aXMgYSBzdGFsZSB0aW1lciBldmVudCBjYWxsYmFjayAqLworICAgICAgICBnb3RvIG91dDsKKwog
ICAgIExJQlhMX1RBSUxRX1JFTU9WRSgmQ1RYLT5ldGltZXMsIGV2LCBlbnRyeSk7CiAgICAgZXYt
PmZ1bmMoZWdjLCBldiwgJmV2LT5hYnMpOwogCisgb3V0OgogICAgIENUWF9VTkxPQ0s7CiAgICAg
RUdDX0ZSRUU7CiB9CmRpZmYgLS1naXQgYS90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oIGIv
dG9vbHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaAppbmRleCBjYmEzNjE2Li42NDg0YmNiIDEwMDY0
NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCisrKyBiL3Rvb2xzL2xpYnhsL2xp
YnhsX2ludGVybmFsLmgKQEAgLTEzNiw2ICsxMzYsOCBAQCB0eXBlZGVmIHN0cnVjdCBsaWJ4bF9f
Z2MgbGlieGxfX2djOwogdHlwZWRlZiBzdHJ1Y3QgbGlieGxfX2VnYyBsaWJ4bF9fZWdjOwogdHlw
ZWRlZiBzdHJ1Y3QgbGlieGxfX2FvIGxpYnhsX19hbzsKIHR5cGVkZWYgc3RydWN0IGxpYnhsX19h
b3Bfb2NjdXJyZWQgbGlieGxfX2FvcF9vY2N1cnJlZDsKK3R5cGVkZWYgc3RydWN0IGxpYnhsX19v
c2V2ZW50X2hvb2tfbmV4dXMgbGlieGxfX29zZXZlbnRfaG9va19uZXh1czsKK3R5cGVkZWYgc3Ry
dWN0IGxpYnhsX19vc2V2ZW50X2hvb2tfbmV4aSBsaWJ4bF9fb3NldmVudF9ob29rX25leGk7CiAK
IF9oaWRkZW4gdm9pZCBsaWJ4bF9fYWxsb2NfZmFpbGVkKGxpYnhsX2N0eCAqLCBjb25zdCBjaGFy
ICpmdW5jLAogICAgICAgICAgICAgICAgICAgICAgICAgIHNpemVfdCBubWVtYiwgc2l6ZV90IHNp
emUpIF9fYXR0cmlidXRlX18oKG5vcmV0dXJuKSk7CkBAIC0xNjMsNyArMTY1LDcgQEAgc3RydWN0
IGxpYnhsX19ldl9mZCB7CiAgICAgbGlieGxfX2V2X2ZkX2NhbGxiYWNrICpmdW5jOwogICAgIC8q
IHJlbWFpbmRlciBpcyBwcml2YXRlIGZvciBsaWJ4bF9fZXZfZmQuLi4gKi8KICAgICBMSUJYTF9M
SVNUX0VOVFJZKGxpYnhsX19ldl9mZCkgZW50cnk7Ci0gICAgdm9pZCAqZm9yX2FwcF9yZWc7Cisg
ICAgbGlieGxfX29zZXZlbnRfaG9va19uZXh1cyAqbmV4dXM7CiB9OwogCiAKQEAgLTE3OCw3ICsx
ODAsNyBAQCBzdHJ1Y3QgbGlieGxfX2V2X3RpbWUgewogICAgIGludCBpbmZpbml0ZTsgLyogbm90
IHJlZ2lzdGVyZWQgaW4gbGlzdCBvciB3aXRoIGFwcCBpZiBpbmZpbml0ZSAqLwogICAgIExJQlhM
X1RBSUxRX0VOVFJZKGxpYnhsX19ldl90aW1lKSBlbnRyeTsKICAgICBzdHJ1Y3QgdGltZXZhbCBh
YnM7Ci0gICAgdm9pZCAqZm9yX2FwcF9yZWc7CisgICAgbGlieGxfX29zZXZlbnRfaG9va19uZXh1
cyAqbmV4dXM7CiB9OwogCiB0eXBlZGVmIHN0cnVjdCBsaWJ4bF9fZXZfeHN3YXRjaCBsaWJ4bF9f
ZXZfeHN3YXRjaDsKQEAgLTMyOSw2ICszMzEsOCBAQCBzdHJ1Y3QgbGlieGxfX2N0eCB7CiAgICAg
bGlieGxfX3BvbGxlciBwb2xsZXJfYXBwOyAvKiBsaWJ4bF9vc2V2ZW50X2JlZm9yZXBvbGwgYW5k
IF9hZnRlcnBvbGwgKi8KICAgICBMSUJYTF9MSVNUX0hFQUQoLCBsaWJ4bF9fcG9sbGVyKSBwb2xs
ZXJzX2V2ZW50LCBwb2xsZXJzX2lkbGU7CiAKKyAgICBMSUJYTF9TTElTVF9IRUFEKGxpYnhsX19v
c2V2ZW50X2hvb2tfbmV4aSwgbGlieGxfX29zZXZlbnRfaG9va19uZXh1cykKKyAgICAgICAgaG9v
a19mZF9uZXhpX2lkbGUsIGhvb2tfdGltZW91dF9uZXhpX2lkbGU7CiAgICAgTElCWExfTElTVF9I
RUFEKCwgbGlieGxfX2V2X2ZkKSBlZmRzOwogICAgIExJQlhMX1RBSUxRX0hFQUQoLCBsaWJ4bF9f
ZXZfdGltZSkgZXRpbWVzOwogCi0tIAp0ZzogKGJmZTcwMjUuLikgdC94ZW4veGwuZXZlbnRmaXgy
LmZkcmVnLmluZGlyZWN0IChkZXBlbmRzIG9uOiBiYXNlKQoKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:12:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tge3O-0006MB-Ok; Thu, 06 Dec 2012 16:12:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bjzhang@suse.com>) id 1Tge3N-0006M5-W0
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:12:02 +0000
Received: from [85.158.139.211:51807] by server-10.bemta-5.messagelabs.com id
	B5/B4-09257-1D3C0C05; Thu, 06 Dec 2012 16:12:01 +0000
X-Env-Sender: bjzhang@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354810319!19378578!1
X-Originating-IP: [137.65.250.214]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16047 invoked from network); 6 Dec 2012 16:12:00 -0000
Received: from soto.provo.novell.com (HELO soto.provo.novell.com)
	(137.65.250.214) by server-8.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 16:12:00 -0000
Received: from INET-RELAY2-MTA by soto.provo.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 09:11:58 -0700
Message-Id: <50C15E7A0200003000027D96@soto.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 09:11:54 -0700
From: "Bamvor Jian Zhang" <bjzhang@suse.com>
To: <Ian.Campbell@citrix.com>,<Ian.Jackson@eu.citrix.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354101923.25834.16.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jim Fehlig <JFEHLIG@suse.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] =?utf-8?b?562U5aSN77yaIFJlOiBbUEFUQ0hdIGZpeCByYWNl?=
 =?utf-8?q?_condition_between_libvirtd_event_handling_and_libxl_fd_deregis?=
 =?utf-8?q?ter?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ClRoZSBmaXJzdCBtZW50aG9sIGlzIGFscmVhZHkgcHJvdmlkZWQgYnkgbGlidmlydCB0aG91Z2gg
bGlidmlydCBtdXRleCB3aGljaCBpcyBhIHBlci1ldmVudCBsb2NrLgpBbmQgYmVjYXVzZSB0aGlz
IGxvY2sgcmVsZWFzZSBiZWZvcmUgbGlieGwgY2FsbGJhY2sgZW50ZXJpbmcsIGxpYnhsIGdvdCB0
aGUgY2hhbmNlIHRvIHJlbW92ZSAKaGFuZGxlcy4KCj4+PiBJYW4gQ2FtcGJlbGwgPElhbi5DYW1w
YmVsbEBjaXRyaXguY29tPiAxMuW5tDEx5pyIMjjml6Ug5LiL5Y2IIDE5OjI1ID4+PgpPbiBUdWUs
IDIwMTItMTEtMjcgYXQgMTk6MDggKzAwMDAsIElhbiBKYWNrc29uIHdyb3RlOgo+IHdoYXQgY2hh
bmdlIHRvIHRoZSBsaWJ4bCBBUEkgc3ludGF4IG9yIHNlbWFudGljcyB3b3VsZCBmaXggdGhlIHBy
b2JsZW0gPwoKTW9zdGx5IGp1c3QgdGhpbmtpbmcgb3V0IGxvdWQgaGVyZS4uLgoKQ2FuIHdlIHBy
b3ZpZGUsIG9yIChtb3JlIGxpa2VseSkgcmVxdWlyZSB0aGUgYXBwbGljYXRpb24gdG8gcHJvdmlk
ZSwgYQpsb2NrIChwZXJoYXBzIHBlci1ldmVudCBvciwgYWdhaW4gbW9yZSBsaWtlbHksIHBlci1l
dmVudC1sb29wKSB3aGljaAptdXN0IGJlIGhlbGQgd2hpbGUgcHJvY2Vzc2luZyBjYWxsYmFja3Mg
YW5kIGFsc28gd2hpbGUgZXZlbnRzIGFyZSBiZWluZwpyZWdpc3RlcmVkL3VucmVnaXN0ZXJlZCB3
aXRoIHRoZSBhcHBsaWNhdGlvbidzIGV2ZW50IGhhbmRsaW5nIHN1YnN5c3RlbT8KV2l0aCBzdWNo
IGEgbG9jayBpbiBwbGFjZSB0aGUgYXBwbGljYXRpb24gd291bGQgYmUgYWJsZSB0byBndWFyYW50
ZWUKdGhhdCBoYXZpbmcgcmV0dXJuZWQgZnJvbSB0aGUgZGVyZWdpc3RlciBob29rIGFueSBmdXJ0
aGVyIGV2ZW50cyB3b3VsZApiZSBzZWVuIGFzIHNwdXJpb3VzIGV2ZW50cyBieSBpdHMgb3duIGV2
ZW50IHByb2Nlc3NpbmcgbG9vcC4KClRoZSBvdGhlciBzY2hlbWUgd2hpY2ggc3ByaW5ncyB0byBt
aW5kIGlzIHRvIGRvIHJlZmVyZW5jZSBjb3VudGluZywgd2l0aAp0aGUgYXBwbGljYXRpb24gaG9s
ZGluZyBhIHJlZmVyZW5jZSB3aGVuZXZlciB0aGUgZXZlbnQgaXMgcHJlc2VudCBpbiBpdHMKZXZl
bnQgbG9vcCAoc3VjaCB0aGF0IHRoZXJlIGlzIGFueSBjaGFuY2Ugb2YgdGhlIGV2ZW50IGJlaW5n
IGdlbmVyYXRlZCkKYW5kIGxpYnhsIGhvbGRpbmcgYSByZWZlcmVuY2Ugd2hpbGUgaXQgY29uc2lk
ZXJzIHRoZSBldmVudCB0byBiZSBhY3RpdmUKKHdoaWNoIHJvdWdobHkgY29ycmVzcG9uZHMgdG8g
dGhlIGV4aXN0aW5nIHJlZ2lzdGVyL3VucmVnaXN0ZXIgaG9vawpjYWxscywgSSB0aGluaykuIGxp
YnhsIHdvdWxkIGRyb3AgaXRzIHJlZmVyZW5jZSBhcyBwYXJ0IG9mIGNhbGxpbmcgdGhlCmRlcmVn
aXN0ZXIgaG9vayB3aGlsZSB0aGUgYXBwbGljYXRpb24gd291bGQgaG9sZCBpdHMgcmVmZXJlbmNl
IHVudGlsIGl0CndhcyBjZXJ0YWluIHRoZSBldmVudCB3b3VsZCBubyBsb25nZXIgYmUgZ2VuZXJh
dGVkIGUuZy4gaW4gYSBwYXNzIGF0IHRoZQp0b3Agb2YgaXRzIGV2ZW50IGhhbmRsaW5nIGxvb3Ag
d2hlcmUgaXQgaW5jbHVkZXMgb3IgZXhjbHVkZXMgdGhlCnJlbGV2YW50IGZkIGZyb20gaXRzIHNl
bGVjdCgpL3BvbGwoKS4KCkxhc3QgaGFsZiBicmFpbmVkIGlkZWEgd291bGQgYmUgdG8gc3BsaXQg
dGhlIGRlcmVnaXN0cmF0aW9uIGludG8gdHdvLgpsaWJ4bCBjYWxscyB1cCB0byB0aGUgYXBwIHNh
eWluZyAicGxlYXNlIGRlcmVnaXN0ZXIiIGFuZCB0aGUgYXBwIGNhbGxzCmJhY2sgdG8gbGlieGwg
dG8gc2F5ICJJIGFtIG5vIGxvbmdlciB3YXRjaGluZyBmb3IgdGhpcyBldmVudCBhbmQKZ3VhcmFu
dGVlIHRoYXQgSSB3b24ndCBkZWxpdmVyIGl0IGFueSBtb3JlIi4gKFByZXN1bWFibHkgdGhpcyB3
b3VsZCBiZQppbXBsZW1lbnRlZCBieSB0aGUgYXBwbGljYXRpb24gdmlhIHNvbWUgY29tYmluYXRp
b24gb2YgdGhlIGFib3ZlKS4gVGhpcwpjb3VsZCBiZSBkb25lIGluIGEgc29tZXdoYXQgY29tcGF0
aWJsZSB3YXkgYnkgYWxsb3dpbmcgdGhlIGRlcmVnaXN0ZXIKaG9vayB0byByZXR1cm4gIlBFTkRJ
TkciLgoKSWFuLgoKCgoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0
cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:12:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tge3O-0006MB-Ok; Thu, 06 Dec 2012 16:12:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bjzhang@suse.com>) id 1Tge3N-0006M5-W0
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:12:02 +0000
Received: from [85.158.139.211:51807] by server-10.bemta-5.messagelabs.com id
	B5/B4-09257-1D3C0C05; Thu, 06 Dec 2012 16:12:01 +0000
X-Env-Sender: bjzhang@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354810319!19378578!1
X-Originating-IP: [137.65.250.214]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16047 invoked from network); 6 Dec 2012 16:12:00 -0000
Received: from soto.provo.novell.com (HELO soto.provo.novell.com)
	(137.65.250.214) by server-8.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 16:12:00 -0000
Received: from INET-RELAY2-MTA by soto.provo.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 09:11:58 -0700
Message-Id: <50C15E7A0200003000027D96@soto.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 09:11:54 -0700
From: "Bamvor Jian Zhang" <bjzhang@suse.com>
To: <Ian.Campbell@citrix.com>,<Ian.Jackson@eu.citrix.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354101923.25834.16.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jim Fehlig <JFEHLIG@suse.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] =?utf-8?b?562U5aSN77yaIFJlOiBbUEFUQ0hdIGZpeCByYWNl?=
 =?utf-8?q?_condition_between_libvirtd_event_handling_and_libxl_fd_deregis?=
 =?utf-8?q?ter?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The first method is already provided by libvirt through the libvirt
mutex, which is a per-event lock. And because this lock is released
before the libxl callback is entered, libxl gets the chance to remove
handles.

>>> Ian Campbell <Ian.Campbell@citrix.com> 28 Nov 2012 19:25 >>>
On Tue, 2012-11-27 at 19:08 +0000, Ian Jackson wrote:
> what change to the libxl API syntax or semantics would fix the problem ?

Mostly just thinking out loud here...

Can we provide, or (more likely) require the application to provide, a
lock (perhaps per-event or, again more likely, per-event-loop) which
must be held while processing callbacks and also while events are being
registered/unregistered with the application's event handling subsystem?
With such a lock in place the application would be able to guarantee
that having returned from the deregister hook any further events would
be seen as spurious events by its own event processing loop.

The other scheme which springs to mind is to do reference counting, with
the application holding a reference whenever the event is present in its
event loop (such that there is any chance of the event being generated)
and libxl holding a reference while it considers the event to be active
(which roughly corresponds to the existing register/unregister hook
calls, I think). libxl would drop its reference as part of calling the
deregister hook while the application would hold its reference until it
was certain the event would no longer be generated, e.g. in a pass at the
top of its event handling loop where it includes or excludes the
relevant fd from its select()/poll().

The last, half-brained idea would be to split the deregistration into
two. libxl calls up to the app saying "please deregister" and the app
calls back to libxl to say "I am no longer watching for this event and
guarantee that I won't deliver it any more". (Presumably this would be
implemented by the application via some combination of the above). This
could be done in a somewhat compatible way by allowing the deregister
hook to return "PENDING".

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
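[Editorial sketch] Ian's reference-counting proposal can be modelled in a few lines of C. This is a minimal single-threaded userspace sketch, not libxl's actual API: the names `ev_create`/`ev_put`, the `freed_flag` field, and both hook functions are invented for illustration. The point is the ordering guarantee: the event is only destroyed once both the application and libxl have dropped their references, so the deregister hook can never pull the event out from under the application's loop.

```c
#include <stdlib.h>

/* Hypothetical model of the scheme: the application holds one reference
 * while the fd is in its event loop, libxl holds another while it
 * considers the event active. */
struct event {
    int refcnt;         /* app ref + libxl ref */
    int fd;             /* fd watched by the app's select()/poll() */
    int *freed_flag;    /* set to 1 when the event is finally destroyed */
};

static struct event *ev_create(int fd, int *freed_flag)
{
    struct event *ev = malloc(sizeof(*ev));
    ev->refcnt = 2;     /* one reference for the app, one for libxl */
    ev->fd = fd;
    ev->freed_flag = freed_flag;
    return ev;
}

static void ev_put(struct event *ev)
{
    /* Whoever drops the last reference performs the free. */
    if (--ev->refcnt == 0) {
        *ev->freed_flag = 1;
        free(ev);
    }
}

/* libxl side: drops its reference as part of the deregister hook. */
static void libxl_deregister(struct event *ev)
{
    ev_put(ev);
}

/* App side: a pass at the top of its event loop notices the fd is no
 * longer wanted, removes it from its poll set, and drops its reference. */
static void app_event_loop_pass(struct event *ev)
{
    ev_put(ev);
}
```

A real implementation would of course need atomic reference counts (or the per-event-loop lock from the first proposal) to be safe against concurrent callers.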

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:23:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:23:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgeEL-0006zo-Kj; Thu, 06 Dec 2012 16:23:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TgeEK-0006zX-3y
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:23:20 +0000
Received: from [85.158.137.99:54070] by server-2.bemta-3.messagelabs.com id
	48/67-04744-576C0C05; Thu, 06 Dec 2012 16:23:17 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354810996!12874903!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0NzM2Nzk=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0NzM2Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1415 invoked from network); 6 Dec 2012 16:23:17 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-8.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 16:23:17 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2c7ofw==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-090-003.pools.arcor-ip.net [84.57.90.3])
	by smtp.strato.de (joses mo47) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id K02372oB6GGfez ;
	Thu, 6 Dec 2012 17:23:04 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 436A61884C; Thu,  6 Dec 2012 17:23:04 +0100 (CET)
Date: Thu, 6 Dec 2012 17:23:04 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121206162304.GA3989@aepfle.de>
References: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
	<50BF2E3802000078000AE162@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50BF2E3802000078000AE162@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5558 (2012-10-16)
Cc: xen-devel@lists.xen.org, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
 multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, Jan Beulich wrote:

> >>> On 05.12.12 at 11:01, Olaf Hering <olaf@aepfle.de> wrote:
> > backend_changed might be called multiple times, which will leak
> > be->mode. free the previous value before storing the current mode value.
> 
> As said before - this is one possible route to take. But did you
> consider at all the alternative of preventing the function from
> getting called more than once for a given device? As also said
> before, I think that would have other positive effects, and hence
> should be preferred (and would likely also result in a smaller
> patch).

Maybe it could be done like this, adding a flag to the backend device
and exiting early if it's called twice.

Olaf


diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index a6585a4..2822e73 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -28,6 +28,7 @@ struct backend_info {
 	unsigned		major;
 	unsigned		minor;
 	char			*mode;
+	unsigned		alive;
 };
 
 static struct kmem_cache *xen_blkif_cachep;
@@ -506,6 +507,9 @@ static void backend_changed(struct xenbus_watch *watch,
 
 	DPRINTK("");
 
+	if (be->alive)
+		return;
+
 	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
 			   &major, &minor);
 	if (XENBUS_EXIST_ERR(err)) {
@@ -548,8 +552,11 @@ static void backend_changed(struct xenbus_watch *watch,
 		char *p = strrchr(dev->otherend, '/') + 1;
 		long handle;
 		err = strict_strtoul(p, 0, &handle);
-		if (err)
+		if (err) {
+			kfree(be->mode);
+			be->mode = NULL;
 			return;
+		}
 
 		be->major = major;
 		be->minor = minor;
@@ -560,6 +567,8 @@ static void backend_changed(struct xenbus_watch *watch,
 			be->major = 0;
 			be->minor = 0;
 			xenbus_dev_fatal(dev, err, "creating vbd structure");
+			kfree(be->mode);
+			be->mode = NULL;
 			return;
 		}
 
@@ -569,10 +578,13 @@ static void backend_changed(struct xenbus_watch *watch,
 			be->major = 0;
 			be->minor = 0;
 			xenbus_dev_fatal(dev, err, "creating sysfs entries");
+			kfree(be->mode);
+			be->mode = NULL;
 			return;
 		}
 
 		/* We're potentially connected now */
+		be->alive = 1;
 		xen_update_blkif_status(be->blkif);
 	}
 }
-- 
1.8.0.1
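[Editorial sketch] The leak the patch addresses follows a common pattern: a callback that may run more than once stores a freshly allocated string into a long-lived structure without releasing the old one. A minimal userspace sketch of the free-before-overwrite variant of the fix, with invented names (`struct backend`, `set_mode`) standing in for `struct backend_info` and the `xenbus_read()` of `be->mode`:

```c
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for struct backend_info: only the leaky field. */
struct backend {
    char *mode;
};

/* Userspace analogue of the fix: free the previously stored value
 * before overwriting it, so a second call (as backend_changed can be
 * called a second time) does not leak the first allocation.
 * free(NULL) is a safe no-op, so the first call needs no special case. */
static void set_mode(struct backend *be, const char *mode)
{
    char *copy = malloc(strlen(mode) + 1);
    strcpy(copy, mode);
    free(be->mode);
    be->mode = copy;
}
```

In the kernel the same shape uses `kfree()`/`kstrdup()`; the patch above instead suppresses the second entry entirely with the `alive` flag, which makes the error paths responsible for resetting `be->mode` to NULL.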


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:36:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgeQg-0007yH-Vs; Thu, 06 Dec 2012 16:36:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgeQf-0007yC-W0
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 16:36:06 +0000
Received: from [85.158.137.99:30935] by server-5.bemta-3.messagelabs.com id
	58/13-26311-579C0C05; Thu, 06 Dec 2012 16:36:05 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354811763!17933424!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31653 invoked from network); 6 Dec 2012 16:36:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 16:36:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46845887"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 16:36:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 11:36:02 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgeQb-0003fd-RK;
	Thu, 06 Dec 2012 16:36:01 +0000
Date: Thu, 6 Dec 2012 16:35:57 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Tim Deegan <tim@xen.org>
In-Reply-To: <20121206125326.GP82725@ocelot.phlegethon.org>
Message-ID: <alpine.DEB.2.02.1212061632570.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
	<20121206125326.GP82725@ocelot.phlegethon.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
 HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Tim Deegan wrote:
> At 18:19 +0000 on 05 Dec (1354731588), Stefano Stabellini wrote:
> > For the moment the resolution is hardcoded to 1280x1024@60.
> > Use the generic framebuffer functions to print on the screen.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/Rules.mk         |    1 +
> >  xen/drivers/video/Makefile    |    1 +
> >  xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
> >  xen/include/asm-arm/config.h  |    3 +
> >  4 files changed, 170 insertions(+), 0 deletions(-)
> >  create mode 100644 xen/drivers/video/arm_hdlcd.c
> > 
> > diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> > index fa9f9c1..9580e6b 100644
> > --- a/xen/arch/arm/Rules.mk
> > +++ b/xen/arch/arm/Rules.mk
> > @@ -8,6 +8,7 @@
> >  
> >  HAS_DEVICE_TREE := y
> >  HAS_VIDEO := y
> > +HAS_ARM_HDLCD := y
> >  
> >  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
> >  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> > diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> > index 3b3eb43..8a6f5da 100644
> > --- a/xen/drivers/video/Makefile
> > +++ b/xen/drivers/video/Makefile
> > @@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
> >  obj-$(HAS_VIDEO) += font_8x8.o
> >  obj-$(HAS_VIDEO) += fb.o
> >  obj-$(HAS_VGA) += vesa.o
> > +obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
> > diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
> > new file mode 100644
> > index 0000000..68f588c
> > --- /dev/null
> > +++ b/xen/drivers/video/arm_hdlcd.c
> > @@ -0,0 +1,165 @@
> > +/*
> > + * xen/drivers/video/arm_hdlcd.c
> > + *
> > + * Driver for ARM HDLCD Controller
> > + *
> > + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > + * Copyright (c) 2012 Citrix Systems.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + */
> > +
> > +#include <asm/delay.h>
> > +#include <asm/types.h>
> > +#include <xen/config.h>
> > +#include <xen/device_tree.h>
> > +#include <xen/libfdt/libfdt.h>
> > +#include <xen/init.h>
> > +#include <xen/mm.h>
> > +#include "font.h"
> > +#include "fb.h"
> > +
> > +#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
> > +
> > +#define HDLCD_INTMASK       (0x18/4)
> > +#define HDLCD_FBBASE        (0x100/4)
> > +#define HDLCD_LINELENGTH    (0x104/4)
> > +#define HDLCD_LINECOUNT     (0x108/4)
> > +#define HDLCD_LINEPITCH     (0x10C/4)
> > +#define HDLCD_BUS           (0x110/4)
> > +#define HDLCD_VSYNC         (0x200/4)
> > +#define HDLCD_VBACK         (0x204/4)
> > +#define HDLCD_VDATA         (0x208/4)
> > +#define HDLCD_VFRONT        (0x20C/4)
> > +#define HDLCD_HSYNC         (0x210/4)
> > +#define HDLCD_HBACK         (0x214/4)
> > +#define HDLCD_HDATA         (0x218/4)
> > +#define HDLCD_HFRONT        (0x21C/4)
> > +#define HDLCD_POLARITIES    (0x220/4)
> > +#define HDLCD_COMMAND       (0x230/4)
> > +#define HDLCD_PF            (0x240/4)
> > +#define HDLCD_RED           (0x244/4)
> > +#define HDLCD_GREEN         (0x248/4)
> > +#define HDLCD_BLUE          (0x24C/4)
> > +
> > +#define BPP             4
> > +#define XRES            1280
> > +#define YRES            1024
> > +#define refresh         60
> > +#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
> > +#define left_margin     80
> > +#define hback left_margin
> > +#define right_margin    48
> > +#define hfront right_margin
> > +#define upper_margin    21
> > +#define vback upper_margin
> > +#define lower_margin    3
> > +#define vfront lower_margin
> > +#define hsync_len       32
> > +#define vsync_len       6
> > +
> > +#define HDLCD_SIZE (XRES*YRES*BPP)
> > +
> > +static void vga_noop_puts(const char *s) {}
> > +void (*video_puts)(const char *) = vga_noop_puts;
> > +
> > +static void hdlcd_flush(void)
> > +{
> > +    dsb();
> > +}
> > +
> > +void __init video_init(void)
> > +{
> > +    int node, depth;
> > +    u32 address_cells, size_cells;
> > +    struct fb_prop fbp;
> > +    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
> > +    paddr_t hdlcd_start, hdlcd_size;
> > +    paddr_t framebuffer_start, framebuffer_size;
> > +    const struct fdt_property *prop;
> > +    const u32 *cell;
> > +
> > +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> > +                &address_cells, &size_cells) <= 0 )
> > +        return;
> > +
> > +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> > +    if ( !prop )
> > +        return;
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells,
> > +            &hdlcd_start, &hdlcd_size); 
> > +
> > +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> > +    if ( !prop )
> > +        return;
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells,
> > +            &framebuffer_start, &framebuffer_size); 
> > +
> > +    if ( !hdlcd_start || !framebuffer_start )
> > +        return;
> > +
> > +    printk("Initializing HDLCD driver\n");
> > +
> > +    map_phys_range(framebuffer_start,
> > +                    framebuffer_start + framebuffer_size,
> > +                    VRAM_VIRT_START, DEV_WC);
> > +    memset(lfb, 0x00, HDLCD_SIZE);
> 
> This needs some checks that framebuffer_size >= HDLCD_SIZE, and that
> framebuffer_size <= XENHEAP_VIRT_START - VRAM_VIRT_START.

You are right, I'll add those checks.
I think I'll turn map_phys_range into early_ioremap and have
early_ioremap return NULL if the range passed as an argument is too
big.
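[Editorial sketch] The checks Tim asks for amount to two range comparisons before the `map_phys_range()`/`memset()`. A rough sketch, with placeholder constants standing in for the real Xen values and a hypothetical helper name (`fb_size_ok`), not the eventual patch:

```c
#include <stdint.h>

/* Placeholder values standing in for the real Xen constants. */
#define HDLCD_SIZE          (1280u * 1024u * 4u)   /* XRES*YRES*BPP */
#define VRAM_VIRT_START     0x40000000u
#define XENHEAP_VIRT_START  0x48000000u

/* Returns nonzero if a framebuffer of the given size can be safely
 * mapped at VRAM_VIRT_START and then cleared with
 * memset(lfb, 0, HDLCD_SIZE): it must be large enough to hold the
 * whole HDLCD frame, and small enough not to overlap the Xen heap. */
static int fb_size_ok(uint64_t framebuffer_size)
{
    if (framebuffer_size < HDLCD_SIZE)
        return 0;   /* memset would run past the mapped region */
    if (framebuffer_size > XENHEAP_VIRT_START - VRAM_VIRT_START)
        return 0;   /* mapping would collide with the Xen heap */
    return 1;
}
```

An `early_ioremap()`-style interface, as Stefano suggests, would fold the second comparison into the mapping function and return NULL on failure.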

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:36:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgeQg-0007yH-Vs; Thu, 06 Dec 2012 16:36:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TgeQf-0007yC-W0
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 16:36:06 +0000
Received: from [85.158.137.99:30935] by server-5.bemta-3.messagelabs.com id
	58/13-26311-579C0C05; Thu, 06 Dec 2012 16:36:05 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354811763!17933424!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31653 invoked from network); 6 Dec 2012 16:36:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 16:36:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46845887"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 16:36:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 11:36:02 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TgeQb-0003fd-RK;
	Thu, 06 Dec 2012 16:36:01 +0000
Date: Thu, 6 Dec 2012 16:35:57 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Tim Deegan <tim@xen.org>
In-Reply-To: <20121206125326.GP82725@ocelot.phlegethon.org>
Message-ID: <alpine.DEB.2.02.1212061632570.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
	<20121206125326.GP82725@ocelot.phlegethon.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
 HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Tim Deegan wrote:
> At 18:19 +0000 on 05 Dec (1354731588), Stefano Stabellini wrote:
> > For the moment the resolution is hardcoded to 1280x1024@60.
> > Use the generic framebuffer functions to print on the screen.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/Rules.mk         |    1 +
> >  xen/drivers/video/Makefile    |    1 +
> >  xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
> >  xen/include/asm-arm/config.h  |    3 +
> >  4 files changed, 170 insertions(+), 0 deletions(-)
> >  create mode 100644 xen/drivers/video/arm_hdlcd.c
> > 
> > diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> > index fa9f9c1..9580e6b 100644
> > --- a/xen/arch/arm/Rules.mk
> > +++ b/xen/arch/arm/Rules.mk
> > @@ -8,6 +8,7 @@
> >  
> >  HAS_DEVICE_TREE := y
> >  HAS_VIDEO := y
> > +HAS_ARM_HDLCD := y
> >  
> >  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
> >  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> > diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> > index 3b3eb43..8a6f5da 100644
> > --- a/xen/drivers/video/Makefile
> > +++ b/xen/drivers/video/Makefile
> > @@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
> >  obj-$(HAS_VIDEO) += font_8x8.o
> >  obj-$(HAS_VIDEO) += fb.o
> >  obj-$(HAS_VGA) += vesa.o
> > +obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
> > diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
> > new file mode 100644
> > index 0000000..68f588c
> > --- /dev/null
> > +++ b/xen/drivers/video/arm_hdlcd.c
> > @@ -0,0 +1,165 @@
> > +/*
> > + * xen/drivers/video/arm_hdlcd.c
> > + *
> > + * Driver for ARM HDLCD Controller
> > + *
> > + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > + * Copyright (c) 2012 Citrix Systems.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + */
> > +
> > +#include <asm/delay.h>
> > +#include <asm/types.h>
> > +#include <xen/config.h>
> > +#include <xen/device_tree.h>
> > +#include <xen/libfdt/libfdt.h>
> > +#include <xen/init.h>
> > +#include <xen/mm.h>
> > +#include "font.h"
> > +#include "fb.h"
> > +
> > +#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
> > +
> > +#define HDLCD_INTMASK       (0x18/4)
> > +#define HDLCD_FBBASE        (0x100/4)
> > +#define HDLCD_LINELENGTH    (0x104/4)
> > +#define HDLCD_LINECOUNT     (0x108/4)
> > +#define HDLCD_LINEPITCH     (0x10C/4)
> > +#define HDLCD_BUS           (0x110/4)
> > +#define HDLCD_VSYNC         (0x200/4)
> > +#define HDLCD_VBACK         (0x204/4)
> > +#define HDLCD_VDATA         (0x208/4)
> > +#define HDLCD_VFRONT        (0x20C/4)
> > +#define HDLCD_HSYNC         (0x210/4)
> > +#define HDLCD_HBACK         (0x214/4)
> > +#define HDLCD_HDATA         (0x218/4)
> > +#define HDLCD_HFRONT        (0x21C/4)
> > +#define HDLCD_POLARITIES    (0x220/4)
> > +#define HDLCD_COMMAND       (0x230/4)
> > +#define HDLCD_PF            (0x240/4)
> > +#define HDLCD_RED           (0x244/4)
> > +#define HDLCD_GREEN         (0x248/4)
> > +#define HDLCD_BLUE          (0x24C/4)
> > +
> > +#define BPP             4
> > +#define XRES            1280
> > +#define YRES            1024
> > +#define refresh         60
> > +#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
> > +#define left_margin     80
> > +#define hback left_margin
> > +#define right_margin    48
> > +#define hfront right_margin
> > +#define upper_margin    21
> > +#define vback upper_margin
> > +#define lower_margin    3
> > +#define vfront lower_margin
> > +#define hsync_len       32
> > +#define vsync_len       6
> > +
> > +#define HDLCD_SIZE (XRES*YRES*BPP)
> > +
> > +static void vga_noop_puts(const char *s) {}
> > +void (*video_puts)(const char *) = vga_noop_puts;
> > +
> > +static void hdlcd_flush(void)
> > +{
> > +    dsb();
> > +}
> > +
> > +void __init video_init(void)
> > +{
> > +    int node, depth;
> > +    u32 address_cells, size_cells;
> > +    struct fb_prop fbp;
> > +    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
> > +    paddr_t hdlcd_start, hdlcd_size;
> > +    paddr_t framebuffer_start, framebuffer_size;
> > +    const struct fdt_property *prop;
> > +    const u32 *cell;
> > +
> > +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> > +                &address_cells, &size_cells) <= 0 )
> > +        return;
> > +
> > +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> > +    if ( !prop )
> > +        return;
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells,
> > +            &hdlcd_start, &hdlcd_size); 
> > +
> > +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> > +    if ( !prop )
> > +        return;
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells,
> > +            &framebuffer_start, &framebuffer_size); 
> > +
> > +    if ( !hdlcd_start || !framebuffer_start )
> > +        return;
> > +
> > +    printk("Initializing HDLCD driver\n");
> > +
> > +    map_phys_range(framebuffer_start,
> > +                    framebuffer_start + framebuffer_size,
> > +                    VRAM_VIRT_START, DEV_WC);
> > +    memset(lfb, 0x00, HDLCD_SIZE);
> 
> This needs some checks that framebuffer_size >= HDLCD_SIZE, and that
> framebuffer_size <= XENHEAP_VIRT_START - VRAM_VIRT_START.

You are right, I'll add those checks.
I think I'll turn map_phys_range into early_ioremap and have
early_ioremap return NULL if the range passed as an argument is too
big.
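A minimal sketch of the checks being discussed, with made-up placeholder values (the real VRAM_VIRT_START / XENHEAP_VIRT_START come from Xen's memory layout; HDLCD_SIZE is XRES * YRES * BPP as in the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder constants for illustration only. */
#define VRAM_VIRT_START    0x40000000UL
#define XENHEAP_VIRT_START 0x41000000UL   /* a 16 MiB VRAM window, assumed */
#define HDLCD_SIZE         (1280UL * 1024 * 4)

/* An early_ioremap-style helper would refuse the mapping (return 0/NULL)
 * when the framebuffer is too small for the mode, or too large to fit
 * between VRAM_VIRT_START and XENHEAP_VIRT_START. */
static uintptr_t check_framebuffer(uint64_t fb_size)
{
    if (fb_size < HDLCD_SIZE)
        return 0;   /* not enough backing memory for XRES x YRES x BPP */
    if (fb_size > XENHEAP_VIRT_START - VRAM_VIRT_START)
        return 0;   /* mapping would overrun the reserved virtual range */
    return VRAM_VIRT_START;
}
```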

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 16:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 16:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgej0-0000LB-TX; Thu, 06 Dec 2012 16:55:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tgeiz-0000L6-9f
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 16:55:01 +0000
Received: from [85.158.143.99:62604] by server-3.bemta-4.messagelabs.com id
	DC/3F-18211-4EDC0C05; Thu, 06 Dec 2012 16:55:00 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354812899!17166765!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5497 invoked from network); 6 Dec 2012 16:54:59 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Dec 2012 16:54:59 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:56067 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tgemf-000843-NQ; Thu, 06 Dec 2012 17:58:49 +0100
Date: Thu, 6 Dec 2012 17:54:54 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <272767244.20121206175454@eikelenboom.it>
To: xen-devel <xen-devel@lists.xen.org>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Upstream qemu-xen,
	log verbosity and compile errors when enabling debug, filenaming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

Yesterday I tried building and using upstream qemu and seabios.
Config.mk:
QEMU_UPSTREAM_URL ?= git://git.qemu.org/qemu.git
QEMU_UPSTREAM_REVISION ?= master

SEABIOS_UPSTREAM_URL ?= git://git.qemu.org/seabios.git
SEABIOS_UPSTREAM_TAG ?= master

I'm happy to say that it works quite well, even with (secondary PCI) passthrough (using an ATI gfx adapter and Windows 7 as the guest OS) :-).

However, it seems to have an issue with a USB controller that tries to use MSI-X interrupts, which makes xl dmesg report:
(XEN) [2012-12-06 16:07:24] vmsi.c:108:d32767 Unsupported delivery mode 3
When "pci_msitranslate=0" is set the error still occurs, but the correct domain number is reported instead of 32767.


However, when trying to debug, I noticed that despite making a debug build (make debug=y && make debug=y install), qemu-dm-<guestname>.log stays almost empty.
It seems none of the debugging-related defines are set.

- Would it be appropriate to enable them all when making a debug build?
- Would it be wise to also have some more verbose logging when not running a debug build?
- If yes, what would be the nicest way to set the defines?
- Should the naming of the debug defines be made more consistent?


When enabling these debug defines by hand:

xen-all.c
#define DEBUG_XEN

xen-mapcache.c
#define MAPCACHE_DEBUG

hw/xen-host-pci-device.c
#define XEN_HOST_PCI_DEVICE_DEBUG

hw/xen_platform.c
#define DEBUG_PLATFORM

hw/xen_pt.h
#define XEN_PT_LOGGING_ENABLED
#define XEN_PT_DEBUG_PCI_CONFIG_ACCESS

I get a lot of compile errors caused by wrong argument types in the debug printfs.
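One common idiom for avoiding this kind of bit-rot (illustrative only, not what qemu currently does; the macro names are made up) is to keep the format string visible to the compiler even when debugging is off, so argument types are checked in every build:

```c
#include <stdio.h>

#define XEN_DEBUG 1   /* enabled here so the example prints; normally set per-build */

/* Illustrative names, not QEMU's actual macros: routing debug output
 * through one macro whose disabled branch still contains an (unreached)
 * fprintf keeps the compiler checking argument types against the format
 * string in non-debug builds too, so the printfs cannot silently rot. */
#ifdef XEN_DEBUG
#define XEN_DPRINTF(fmt, ...) fprintf(stderr, "xen: " fmt, ##__VA_ARGS__)
#else
#define XEN_DPRINTF(fmt, ...) \
    do { if (0) fprintf(stderr, "xen: " fmt, ##__VA_ARGS__); } while (0)
#endif
```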


Another thing that occurred to me is that the file naming doesn't seem to be very consistent:

xen-all.c
xen-mapcache.c
xen-mapcache.h
xen-stub.c
xen_apic.c
hw/xen_backend.c
hw/xen_backend.h
hw/xen_blkif.h
hw/xen_common.h
hw/xen_console.c
hw/xen_devconfig.c
hw/xen_disk.c
hw/xen_domainbuild.c
hw/xen_domainbuild.h
hw/xenfb.c
hw/xen.h
hw/xen-host-pci-device.c
hw/xen-host-pci-device.h
hw/xen_machine_pv.c
hw/xen_nic.c
hw/xen_platform.c
hw/xen_pt.c
hw/xen_pt_config_init.c
hw/xen_pt.h
hw/xen_pt_msi.c

Would it be worthwhile to make it:
- consistently all underscores or all hyphens?
- always the xen_ (or xen-, depending on the above) prefix?

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 17:03:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 17:03:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgeqg-0000mO-MV; Thu, 06 Dec 2012 17:02:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgeqe-0000mH-Ae
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 17:02:56 +0000
Received: from [85.158.143.99:12178] by server-2.bemta-4.messagelabs.com id
	04/A6-30861-FBFC0C05; Thu, 06 Dec 2012 17:02:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354813374!21332922!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29825 invoked from network); 6 Dec 2012 17:02:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 17:02:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Dec 2012 17:02:53 +0000
Message-Id: <50C0DDCA02000078000AEBA9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 06 Dec 2012 17:02:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
	<50BF2E3802000078000AE162@nat28.tlf.novell.com>
	<20121206162304.GA3989@aepfle.de>
In-Reply-To: <20121206162304.GA3989@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
 multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 17:23, Olaf Hering <olaf@aepfle.de> wrote:
> On Wed, Dec 05, Jan Beulich wrote:
> 
>> >>> On 05.12.12 at 11:01, Olaf Hering <olaf@aepfle.de> wrote:
>> > backend_changed might be called multiple times, which will leak
>> > be->mode. free the previous value before storing the current mode value.
>> 
>> As said before - this is one possible route to take. But did you
>> consider at all the alternative of preventing the function from
>> getting called more than once for a given device? As also said
>> before, I don't think that would have other bad effects, and hence it
>> should be preferred (and would likely also result in a smaller
>> patch).
> 
> Maybe it could be done like this: add a flag to the backend device
> and exit early if it's called twice.

Maybe, but it looks odd to me. But then again I had hoped Konrad
would have an opinion here...

Also I don't see why you need to free be->mode now on all error
paths - afaict it would still get freed when "be" gets freed (with
your earlier patch).

Jan

> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -28,6 +28,7 @@ struct backend_info {
>  	unsigned		major;
>  	unsigned		minor;
>  	char			*mode;
> +	unsigned		alive;
>  };
>  
>  static struct kmem_cache *xen_blkif_cachep;
> @@ -506,6 +507,9 @@ static void backend_changed(struct xenbus_watch *watch,
>  
>  	DPRINTK("");
>  
> +	if (be->alive)
> +		return;
> +
>  	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
>  			   &major, &minor);
>  	if (XENBUS_EXIST_ERR(err)) {
> @@ -548,8 +552,11 @@ static void backend_changed(struct xenbus_watch *watch,
>  		char *p = strrchr(dev->otherend, '/') + 1;
>  		long handle;
>  		err = strict_strtoul(p, 0, &handle);
> -		if (err)
> +		if (err) {
> +			kfree(be->mode);
> +			be->mode = NULL;
>  			return;
> +		}
>  
>  		be->major = major;
>  		be->minor = minor;
> @@ -560,6 +567,8 @@ static void backend_changed(struct xenbus_watch *watch,
>  			be->major = 0;
>  			be->minor = 0;
>  			xenbus_dev_fatal(dev, err, "creating vbd structure");
> +			kfree(be->mode);
> +			be->mode = NULL;
>  			return;
>  		}
>  
> @@ -569,10 +578,13 @@ static void backend_changed(struct xenbus_watch 
> *watch,
>  			be->major = 0;
>  			be->minor = 0;
>  			xenbus_dev_fatal(dev, err, "creating sysfs entries");
> +			kfree(be->mode);
> +			be->mode = NULL;
>  			return;
>  		}
>  
>  		/* We're potentially connected now */
> +		be->alive = 1;
>  		xen_update_blkif_status(be->blkif);
>  	}
>  }
> -- 
> 1.8.0.1




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 17:18:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 17:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgf5X-00015b-M5; Thu, 06 Dec 2012 17:18:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tgf5V-00015W-8s
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 17:18:17 +0000
Received: from [85.158.139.83:32359] by server-12.bemta-5.messagelabs.com id
	0A/C8-02886-853D0C05; Thu, 06 Dec 2012 17:18:16 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354814293!28764551!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7517 invoked from network); 6 Dec 2012 17:18:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 17:18:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46853054"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 17:17:50 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Thu, 6 Dec 2012
	12:17:49 -0500
Message-ID: <50C0D33C.1@citrix.com>
Date: Thu, 6 Dec 2012 17:17:48 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
	<01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
In-Reply-To: <01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] [PATCH 1 of 1] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/12 15:56, Andrew Cooper wrote:
> Experimentally, certain crash kernels will triple fault very early after
> starting if started with NMIs disabled.  This was discovered when
> experimenting with a debug keyhandler which deliberately created a
> reentrant NMI, causing stack corruption.
>
> Because of this discovered bug, and that the future changes to the NMI
> handling will make the kexec path more fragile, take the time now to
> bullet-proof the kexec behaviour to be safer in more circumstances.
>
> This patch adds three new low level routines:
>   * nmi_crash
>       This is a special NMI handler for using during a kexec crash.
>   * enable_nmis
>       This function enables NMIs by executing an iret-to-self, to
>       disengage the hardware NMI latch.
>   * trap_nop
>       This is a no op handler which irets immediately.  It is actually in
>       the middle of enable_nmis to reuse the iret instruction, without
> >       having a single lone aligned iret inflating the code size.
>
>
> As a result, the new behaviour of the kexec_crash path is:
>
> nmi_shootdown_cpus() will:
>
>   * Disable interrupt stack tables.
>      Disabling ISTs will prevent stack corruption from future NMIs and
>      MCEs. As Xen is going to crash, we are not concerned with the
>      security race condition with 'sysret'; any guest which managed to
> >      benefit from the race condition will only be able to execute a handful
>      of instructions before it will be killed anyway, and a guest is
>      still unable to trigger the race condition in the first place.
>
>   * Swap the NMI trap handlers.
>      The crashing pcpu gets the nop handler, to prevent it getting stuck in
> >      an NMI context, causing a hang instead of a crash.  The non-crashing
>      pcpus all get the nmi_crash handler which is designed never to
>      return.
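As a concrete illustration of the IST-disabling step (the descriptor layout is the architectural one from the Intel SDM; clear_ist is a made-up helper, not a Xen function): bits 32-34 of a 64-bit IDT gate's low qword select the Interrupt Stack Table entry, and the patch's `&= ~(7UL << 32)` forces them to zero so NMIs and MCEs use the normal stack-switching rules.

```c
#include <stdint.h>

/* Bits 32-34 of a 64-bit IDT gate descriptor's low qword hold the IST
 * index; masking them off (as the patch does with `&= ~(7UL << 32)`)
 * makes the CPU fall back to the legacy stack-switching rules instead
 * of switching to an Interrupt Stack Table stack. */
static uint64_t clear_ist(uint64_t gate_lo)
{
    return gate_lo & ~(7ULL << 32);
}
```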
>
> do_nmi_crash() will:
>
>   * Save the crash notes and shut the pcpu down.
>      There is now an extra per-cpu variable to prevent us from executing
>      this multiple times.  In the case where we reenter midway through,
>      attempt the whole operation again in preference to not completing
>      it in the first place.
>
>   * Set up another NMI at the LAPIC.
>      Even when the LAPIC has been disabled, the ID and command registers
>      are still usable.  As a result, we can deliberately queue up a new
>      NMI to re-interrupt us later if NMIs get unlatched.  Because of the
>      call to __stop_this_cpu(), we have to hand craft self_nmi() to be
>      safe from General Protection Faults.
>
>   * Fall into infinite loop.
>
> machine_kexec() will:
>
>    * Swap the MCE handlers to be a nop.
>       We cannot prevent MCEs from being delivered when we pass off to the
>       crash kernel, and the less Xen context is being touched the better.
>
>    * Explicitly enable NMIs.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> --
>
> Changes since v1:
>   * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
>     during the original refactoring.
>   * Fold trap_nop into the middle of enable_nmis to reuse the iret.
>   * Expand comments in areas as per Tim's suggestions.
>
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,102 @@
>
>   static atomic_t waiting_for_crash_ipi;
>   static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>   {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( crashing_cpu != smp_processor_id() );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;
> +
> +        atomic_dec(&waiting_for_crash_ipi);
> +    }
> +
> > +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
> +     * can deliberately queue up another NMI at the LAPIC which will not
> +     * be delivered as the hardware NMI latch is currently in effect.
> +     * This means that if NMIs become unlatched (e.g. following a
> +     * non-fatal MCE), the LAPIC will force us back here rather than
> +     * wandering back into regular Xen code.
>        */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> +        break;
>
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>
>       for ( ; ; )
>           halt();
> -
> -    return 1;
>   }
>
>   static void nmi_shootdown_cpus(void)
>   {
>       unsigned long msecs;
> +    int i, cpu = smp_processor_id();
>
>       local_irq_disable();
>
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>       local_irq_count(crashing_cpu) = 0;
>
>       atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
> > +     * invokes do_nmi_crash (above), which causes them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     *
> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
> +     * race conditions resulting in corrupt stack frames.  As Xen is
> +     * about to die, we no longer care about the security-related race
> +     * condition with 'sysret' which ISTs are designed to solve.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
> +
> +            if ( i == cpu )
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +            else
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
> +        }
> +
>       /* Ensure the new callback function is set before sending out the NMI. */
>       wmb();
>
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -87,6 +87,22 @@ void machine_kexec(xen_kexec_image_t *im
>        */
>       local_irq_disable();
>
> +    /* Now regular interrupts are disabled, we need to reduce the impact
> +     * of interrupts not disabled by 'cli'.
> +     *
> +     * The NMI handlers have already been set up nmi_shootdown_cpus().  All
> +     * pcpus other than us have the nmi_crash handler, while we have the nop
> +     * handler.
> +     *
> +     * The MCE handlers touch extensive areas of Xen code and data.  At this
> +     * point, there is nothing we can usefully do, so set the nop handler.
> +     */
> +    set_intr_gate(TRAP_machine_check, &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>       /*
>        * compat_machine_kexec() returns to idle pagetables, which requires us
>        * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,45 @@ ENTRY(nmi)
>           movl  $TRAP_nmi,4(%rsp)
>           jmp   handle_ist_exception
>
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>   ENTRY(machine_check)
>           pushq $0
>           movl  $TRAP_machine_check,4(%rsp)
>           jmp   handle_ist_exception
>
> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        pushq %rax
> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
I did see Jan's comment to add "trap_nop" to this section of code, and I 
do agree that saving 14 bytes by not having a single iret in it's own 
aligned block is a good thing, but how about:

+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
+/* No op trap handler.  Required for kexec crash path.
+ * It is not used in performance critical code, and saves having a single
+ * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
+ * explicit alignment. */
+.globl trap_nop;
+trap_nop:
+
+        iretq /* Disable the hardware NMI latch */


We still have the benefit of not wasting 14 bytes on aligning a single 
iretq, but the code to "do a iretq to enable nmi" is a bit cleaner.

--
Mats
> +
> +/* No op trap handler.  Required for kexec crash path.
> + * It is not used in performance critical code, and saves having a single
> + * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
> + * explicit alignment. */
> +.globl trap_nop;
> +trap_nop:
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>   .section .rodata, "a", @progbits
>
>   ENTRY(exception_table)
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>   DECLARE_TRAP_HANDLER(divide_error);
>   DECLARE_TRAP_HANDLER(debug);
>   DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>   DECLARE_TRAP_HANDLER(int3);
>   DECLARE_TRAP_HANDLER(overflow);
>   DECLARE_TRAP_HANDLER(bounds);
> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>   DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>   #undef DECLARE_TRAP_HANDLER
>
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>   void syscall_enter(void);
>   void sysenter_entry(void);
>   void sysenter_eflags_saved(void);
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 17:18:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 17:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgf5X-00015b-M5; Thu, 06 Dec 2012 17:18:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tgf5V-00015W-8s
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 17:18:17 +0000
Received: from [85.158.139.83:32359] by server-12.bemta-5.messagelabs.com id
	0A/C8-02886-853D0C05; Thu, 06 Dec 2012 17:18:16 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354814293!28764551!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7517 invoked from network); 6 Dec 2012 17:18:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 17:18:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="46853054"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 17:17:50 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Thu, 6 Dec 2012
	12:17:49 -0500
Message-ID: <50C0D33C.1@citrix.com>
Date: Thu, 6 Dec 2012 17:17:48 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
	<01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
In-Reply-To: <01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] [PATCH 1 of 1] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/12 15:56, Andrew Cooper wrote:
> Experimentally, certain crash kernels will triple fault very early after
> starting if started with NMIs disabled.  This was discovered when
> experimenting with a debug keyhandler which deliberately created a
> reentrant NMI, causing stack corruption.
>
> Because of this discovered bug, and that the future changes to the NMI
> handling will make the kexec path more fragile, take the time now to
> bullet-proof the kexec behaviour to be safer in more circumstances.
>
> This patch adds three new low level routines:
>   * nmi_crash
>       This is a special NMI handler for use during a kexec crash.
>   * enable_nmis
>       This function enables NMIs by executing an iret-to-self, to
>       disengage the hardware NMI latch.
>   * trap_nop
>       This is a no-op handler which irets immediately.  It actually sits
>       in the middle of enable_nmis to reuse the iret instruction, without
>       having a single lone aligned iret inflating the code size.
>
>
> As a result, the new behaviour of the kexec_crash path is:
>
> nmi_shootdown_cpus() will:
>
>   * Disable interrupt stack tables.
>      Disabling ISTs will prevent stack corruption from future NMIs and
>      MCEs. As Xen is going to crash, we are not concerned with the
>      security race condition with 'sysret'; any guest which managed to
>      benefit from the race condition will only be able to execute a handful
>      of instructions before it will be killed anyway, and a guest is
>      still unable to trigger the race condition in the first place.
>
>   * Swap the NMI trap handlers.
>      The crashing pcpu gets the nop handler, to prevent it getting stuck in
>      an NMI context, causing a hang instead of crash.  The non-crashing
>      pcpus all get the nmi_crash handler which is designed never to
>      return.
>
> do_nmi_crash() will:
>
>   * Save the crash notes and shut the pcpu down.
>      There is now an extra per-cpu variable to prevent us from executing
>      this multiple times.  In the case where we reenter midway through,
>      attempt the whole operation again in preference to not completing
>      it in the first place.
>
>   * Set up another NMI at the LAPIC.
>      Even when the LAPIC has been disabled, the ID and command registers
>      are still usable.  As a result, we can deliberately queue up a new
>      NMI to re-interrupt us later if NMIs get unlatched.  Because of the
>      call to __stop_this_cpu(), we have to hand craft self_nmi() to be
>      safe from General Protection Faults.
>
>   * Fall into infinite loop.
>
> machine_kexec() will:
>
>    * Swap the MCE handlers to be a nop.
>       We cannot prevent MCEs from being delivered when we pass off to the
>       crash kernel, and the less Xen context is being touched the better.
>
>    * Explicitly enable NMIs.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> --
>
> Changes since v1:
>   * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
>     during the original refactoring.
>   * Fold trap_nop into the middle of enable_nmis to reuse the iret.
>   * Expand comments in areas as per Tim's suggestions.
>
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,102 @@
>
>   static atomic_t waiting_for_crash_ipi;
>   static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>   {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( crashing_cpu != smp_processor_id() );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;
> +
> +        atomic_dec(&waiting_for_crash_ipi);
> +    }
> +
> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
> +     * can deliberately queue up another NMI at the LAPIC which will not
> +     * be delivered as the hardware NMI latch is currently in effect.
> +     * This means that if NMIs become unlatched (e.g. following a
> +     * non-fatal MCE), the LAPIC will force us back here rather than
> +     * wandering back into regular Xen code.
>        */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> +        break;
>
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>
>       for ( ; ; )
>           halt();
> -
> -    return 1;
>   }
>
>   static void nmi_shootdown_cpus(void)
>   {
>       unsigned long msecs;
> +    int i, cpu = smp_processor_id();
>
>       local_irq_disable();
>
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>       local_irq_count(crashing_cpu) = 0;
>
>       atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
> +     * invokes do_nmi_crash (above), which causes them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     *
> +     * Furthermore, disable stack tables for NMIs and MCEs to prevent
> +     * race conditions resulting in corrupt stack frames.  As Xen is
> +     * about to die, we no longer care about the security-related race
> +     * condition with 'sysret' which ISTs are designed to solve.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +            idt_tables[i][TRAP_nmi].a           &= ~(7UL << 32);
> +            idt_tables[i][TRAP_machine_check].a &= ~(7UL << 32);
> +
> +            if ( i == cpu )
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +            else
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
> +        }
> +
>       /* Ensure the new callback function is set before sending out the NMI. */
>       wmb();
>
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -87,6 +87,22 @@ void machine_kexec(xen_kexec_image_t *im
>        */
>       local_irq_disable();
>
> +    /* Now that regular interrupts are disabled, we need to reduce the
> +     * impact of interrupts not disabled by 'cli'.
> +     * of interrupts not disabled by 'cli'.
> +     *
> +     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
> +     * pcpus other than us have the nmi_crash handler, while we have the nop
> +     * handler.
> +     *
> +     * The MCE handlers touch extensive areas of Xen code and data.  At this
> +     * point, there is nothing we can usefully do, so set the nop handler.
> +     */
> +    set_intr_gate(TRAP_machine_check, &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>       /*
>        * compat_machine_kexec() returns to idle pagetables, which requires us
>        * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,45 @@ ENTRY(nmi)
>           movl  $TRAP_nmi,4(%rsp)
>           jmp   handle_ist_exception
>
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>   ENTRY(machine_check)
>           pushq $0
>           movl  $TRAP_machine_check,4(%rsp)
>           jmp   handle_ist_exception
>
> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        pushq %rax
> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
I did see Jan's comment to add "trap_nop" to this section of code, and I 
do agree that saving 14 bytes by not having a single iret in its own 
aligned block is a good thing, but how about:

+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
+/* No op trap handler.  Required for kexec crash path.
+ * It is not used in performance critical code, and saves having a single
+ * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
+ * explicit alignment. */
+.globl trap_nop;
+trap_nop:
+
+        iretq /* Disable the hardware NMI latch */


We still have the benefit of not wasting 14 bytes on aligning a single 
iretq, but the code to "do an iretq to enable NMIs" is a bit cleaner.

--
Mats
> +
> +/* No op trap handler.  Required for kexec crash path.
> + * It is not used in performance critical code, and saves having a single
> + * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
> + * explicit alignment. */
> +.globl trap_nop;
> +trap_nop:
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>   .section .rodata, "a", @progbits
>
>   ENTRY(exception_table)
> diff -r bc624b00d6d6 -r 01158d25f3bf xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -517,6 +517,7 @@ void do_ ## _name(struct cpu_user_regs *
>   DECLARE_TRAP_HANDLER(divide_error);
>   DECLARE_TRAP_HANDLER(debug);
>   DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>   DECLARE_TRAP_HANDLER(int3);
>   DECLARE_TRAP_HANDLER(overflow);
>   DECLARE_TRAP_HANDLER(bounds);
> @@ -535,6 +536,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>   DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>   #undef DECLARE_TRAP_HANDLER
>
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>   void syscall_enter(void);
>   void sysenter_entry(void);
>   void sysenter_eflags_saved(void);
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 17:24:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 17:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgfBO-0001rB-Lb; Thu, 06 Dec 2012 17:24:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1TgfBN-0001r5-Lw
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 17:24:21 +0000
Received: from [193.109.254.147:24754] by server-12.bemta-14.messagelabs.com
	id 59/81-00510-4C4D0C05; Thu, 06 Dec 2012 17:24:20 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354814658!9469879!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAwNzc2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1290 invoked from network); 6 Dec 2012 17:24:19 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 17:24:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB6HOEr4030055
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 17:24:15 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB6HOE9J012412
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Dec 2012 17:24:14 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB6HODnU016516; Thu, 6 Dec 2012 11:24:13 -0600
MIME-Version: 1.0
Message-ID: <cd4561a9-08d0-4fe5-bd56-7a0ff13d4501@default>
Date: Thu, 6 Dec 2012 09:24:13 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>
References: <8be77fb4-3393-4f49-99f6-2b6c9f89bc18@default>
In-Reply-To: <8be77fb4-3393-4f49-99f6-2b6c9f89bc18@default>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6665.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: centralize accounting for domain
	tot_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ping?

> -----Original Message-----
> From: Dan Magenheimer
> Sent: Wednesday, November 28, 2012 2:50 PM
> To: Keir Fraser; Jan Beulich
> Cc: xen-devel@lists.xen.org; Konrad Wilk; Zhigang Wang
> Subject: [PATCH] xen: centralize accounting for domain tot_pages
> 
> xen: centralize accounting for domain tot_pages
> 
> Provide and use a common function for all adjustments to a
> domain's tot_pages counter in anticipation of future and/or
> out-of-tree patches that must adjust related counters
> atomically.
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> 
>  arch/x86/mm.c             |    4 ++--
>  arch/x86/mm/mem_sharing.c |    4 ++--
>  common/grant_table.c      |    2 +-
>  common/memory.c           |    2 +-
>  common/page_alloc.c       |   10 ++++++++--
>  include/xen/mm.h          |    2 ++
>  6 files changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index ab94b02..3887ca6 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3842,7 +3842,7 @@ int donate_page(
>      {
>          if ( d->tot_pages >= d->max_pages )
>              goto fail;
> -        d->tot_pages++;
> +        domain_adjust_tot_pages(d, 1);
>      }
> 
>      page->count_info = PGC_allocated | 1;
> @@ -3892,7 +3892,7 @@ int steal_page(
>      } while ( (y = cmpxchg(&page->count_info, x, x | 1)) != x );
> 
>      /* Unlink from original owner. */
> -    if ( !(memflags & MEMF_no_refcount) && !--d->tot_pages )
> +    if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
>          drop_dom_ref = 1;
>      page_list_del(page, &d->page_list);
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 5103285..e91aac5 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -639,7 +639,7 @@ static int page_make_sharable(struct domain *d,
>      }
> 
>      page_set_owner(page, dom_cow);
> -    d->tot_pages--;
> +    domain_adjust_tot_pages(d, -1);
>      drop_dom_ref = (d->tot_pages == 0);
>      page_list_del(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> @@ -680,7 +680,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
>      ASSERT(page_get_owner(page) == dom_cow);
>      page_set_owner(page, d);
> 
> -    if ( d->tot_pages++ == 0 )
> +    if ( domain_adjust_tot_pages(d, 1) == 1 )
>          get_domain(d);
>      page_list_add_tail(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 7912769..ca8d861 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1656,7 +1656,7 @@ gnttab_transfer(
>          }
> 
>          /* Okay, add the page to 'e'. */
> -        if ( unlikely(e->tot_pages++ == 0) )
> +        if ( unlikely(domain_adjust_tot_pages(e, 1) == 1) )
>              get_knownalive_domain(e);
>          page_list_add_tail(page, &e->page_list);
>          page_set_owner(page, e);
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 83e2666..9842ea9 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -454,7 +454,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>                               (j * (1UL << exch.out.extent_order)));
> 
>                  spin_lock(&d->page_alloc_lock);
> -                d->tot_pages -= dec_count;
> +                domain_adjust_tot_pages(d, -dec_count);
>                  drop_dom_ref = (dec_count && !d->tot_pages);
>                  spin_unlock(&d->page_alloc_lock);
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 15ebc66..e273bb7 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -239,6 +239,12 @@ static long midsize_alloc_zone_pages;
> 
>  static DEFINE_SPINLOCK(heap_lock);
> 
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> +{
> +    ASSERT(spin_is_locked(&d->page_alloc_lock));
> +    return d->tot_pages += pages;
> +}
> +
>  static unsigned long init_node_heap(int node, unsigned long mfn,
>                                      unsigned long nr, bool_t *use_tail)
>  {
> @@ -1291,7 +1297,7 @@ int assign_pages(
>          if ( unlikely(d->tot_pages == 0) )
>              get_knownalive_domain(d);
> 
> -        d->tot_pages += 1 << order;
> +        domain_adjust_tot_pages(d, 1 << order);
>      }
> 
>      for ( i = 0; i < (1 << order); i++ )
> @@ -1375,7 +1381,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>              page_list_del2(&pg[i], &d->page_list, &d->arch.relmem_list);
>          }
> 
> -        d->tot_pages -= 1 << order;
> +        domain_adjust_tot_pages(d, -(1 << order));
>          drop_dom_ref = (d->tot_pages == 0);
> 
>          spin_unlock_recursive(&d->page_alloc_lock);
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 64a0cc1..00b1915 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -48,6 +48,8 @@ void free_xenheap_pages(void *v, unsigned int order);
>  #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
>  #define free_xenheap_page(v) (free_xenheap_pages(v,0))
> 
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
> +
>  /* Domain suballocator. These functions are *not* interrupt-safe.*/
>  void init_domheap_pages(paddr_t ps, paddr_t pe);
>  struct page_info *alloc_domheap_pages(

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  arch/x86/mm/mem_sharing.c |    4 ++--
>  common/grant_table.c      |    2 +-
>  common/memory.c           |    2 +-
>  common/page_alloc.c       |   10 ++++++++--
>  include/xen/mm.h          |    2 ++
>  6 files changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index ab94b02..3887ca6 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3842,7 +3842,7 @@ int donate_page(
>      {
>          if ( d->tot_pages >= d->max_pages )
>              goto fail;
> -        d->tot_pages++;
> +        domain_adjust_tot_pages(d, 1);
>      }
> 
>      page->count_info = PGC_allocated | 1;
> @@ -3892,7 +3892,7 @@ int steal_page(
>      } while ( (y = cmpxchg(&page->count_info, x, x | 1)) != x );
> 
>      /* Unlink from original owner. */
> -    if ( !(memflags & MEMF_no_refcount) && !--d->tot_pages )
> +    if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
>          drop_dom_ref = 1;
>      page_list_del(page, &d->page_list);
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 5103285..e91aac5 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -639,7 +639,7 @@ static int page_make_sharable(struct domain *d,
>      }
> 
>      page_set_owner(page, dom_cow);
> -    d->tot_pages--;
> +    domain_adjust_tot_pages(d, -1);
>      drop_dom_ref = (d->tot_pages == 0);
>      page_list_del(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> @@ -680,7 +680,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
>      ASSERT(page_get_owner(page) == dom_cow);
>      page_set_owner(page, d);
> 
> -    if ( d->tot_pages++ == 0 )
> +    if ( domain_adjust_tot_pages(d, 1) == 1 )
>          get_domain(d);
>      page_list_add_tail(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 7912769..ca8d861 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1656,7 +1656,7 @@ gnttab_transfer(
>          }
> 
>          /* Okay, add the page to 'e'. */
> -        if ( unlikely(e->tot_pages++ == 0) )
> +        if ( unlikely(domain_adjust_tot_pages(e, 1) == 1) )
>              get_knownalive_domain(e);
>          page_list_add_tail(page, &e->page_list);
>          page_set_owner(page, e);
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 83e2666..9842ea9 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -454,7 +454,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>                               (j * (1UL << exch.out.extent_order)));
> 
>                  spin_lock(&d->page_alloc_lock);
> -                d->tot_pages -= dec_count;
> +                domain_adjust_tot_pages(d, -dec_count);
>                  drop_dom_ref = (dec_count && !d->tot_pages);
>                  spin_unlock(&d->page_alloc_lock);
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 15ebc66..e273bb7 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -239,6 +239,12 @@ static long midsize_alloc_zone_pages;
> 
>  static DEFINE_SPINLOCK(heap_lock);
> 
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> +{
> +    ASSERT(spin_is_locked(&d->page_alloc_lock));
> +    return d->tot_pages += pages;
> +}
> +
>  static unsigned long init_node_heap(int node, unsigned long mfn,
>                                      unsigned long nr, bool_t *use_tail)
>  {
> @@ -1291,7 +1297,7 @@ int assign_pages(
>          if ( unlikely(d->tot_pages == 0) )
>              get_knownalive_domain(d);
> 
> -        d->tot_pages += 1 << order;
> +        domain_adjust_tot_pages(d, 1 << order);
>      }
> 
>      for ( i = 0; i < (1 << order); i++ )
> @@ -1375,7 +1381,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>              page_list_del2(&pg[i], &d->page_list, &d->arch.relmem_list);
>          }
> 
> -        d->tot_pages -= 1 << order;
> +        domain_adjust_tot_pages(d, -(1 << order));
>          drop_dom_ref = (d->tot_pages == 0);
> 
>          spin_unlock_recursive(&d->page_alloc_lock);
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 64a0cc1..00b1915 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -48,6 +48,8 @@ void free_xenheap_pages(void *v, unsigned int order);
>  #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
>  #define free_xenheap_page(v) (free_xenheap_pages(v,0))
> 
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
> +
>  /* Domain suballocator. These functions are *not* interrupt-safe.*/
>  void init_domheap_pages(paddr_t ps, paddr_t pe);
>  struct page_info *alloc_domheap_pages(

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 17:25:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 17:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgfBp-0001t6-2m; Thu, 06 Dec 2012 17:24:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1TgfBn-0001sv-T6
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 17:24:48 +0000
Received: from [85.158.143.99:41860] by server-3.bemta-4.messagelabs.com id
	43/30-18211-FD4D0C05; Thu, 06 Dec 2012 17:24:47 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354814684!17171046!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMDc5NjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28479 invoked from network); 6 Dec 2012 17:24:46 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 17:24:46 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB6HOhAk005320
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Dec 2012 17:24:44 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB6HOgJa008264
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Dec 2012 17:24:43 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB6HOgZC018351; Thu, 6 Dec 2012 11:24:42 -0600
MIME-Version: 1.0
Message-ID: <23b964b2-1d05-4f3a-9776-e708c10c1153@default>
Date: Thu, 6 Dec 2012 09:24:42 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Jan Beulich <JBeulich@suse.com>, Keir Fraser <keir@xen.org>
References: <36f4ae2f-4fbc-4b14-a084-7b336a052a7a@default>
In-Reply-To: <36f4ae2f-4fbc-4b14-a084-7b336a052a7a@default>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6665.5003 (x86)]
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: reserve next two XENMEM_ op numbers
 for future/out-of-tree use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ping?

> -----Original Message-----
> From: Dan Magenheimer
> Sent: Wednesday, November 28, 2012 3:03 PM
> To: Jan Beulich; Keir Fraser
> Cc: xen-devel@lists.xen.org; Konrad Wilk; Zhigang Wang
> Subject: [PATCH] xen: reserve next two XENMEM_ op numbers for future/out-of-tree use
> 
> xen: reserve next two XENMEM_ op numbers for future/out-of-tree use
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> 
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index f1ddbc0..3ee2902 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -421,6 +421,12 @@ struct xen_mem_sharing_op {
>  typedef struct xen_mem_sharing_op xen_mem_sharing_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
> 
> +/*
> + * Reserve ops for future/out-of-tree "claim" patches (Oracle)
> + */
> +#define XENMEM_claim_pages                  24
> +#define XENMEM_get_unclaimed_pages          25
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> 
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:01:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgflN-0003br-QX; Thu, 06 Dec 2012 18:01:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgflM-0003bK-Lm
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 18:01:32 +0000
Received: from [193.109.254.147:8082] by server-2.bemta-14.messagelabs.com id
	76/27-20829-B7DD0C05; Thu, 06 Dec 2012 18:01:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354816890!8940395!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6731 invoked from network); 6 Dec 2012 18:01:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:01:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16207764"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 18:01:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 18:01:30 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgflK-0004GZ-2I;
	Thu, 06 Dec 2012 18:01:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgflJ-0005oi-DU;
	Thu, 06 Dec 2012 18:01:29 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14582-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 18:01:29 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14582: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14582 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14582/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:12:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:12:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgfvR-000453-G6; Thu, 06 Dec 2012 18:11:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgfvQ-00044v-11
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 18:11:56 +0000
Received: from [85.158.137.99:40843] by server-9.bemta-3.messagelabs.com id
	D3/8A-02388-BEFD0C05; Thu, 06 Dec 2012 18:11:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1354817514!15915121!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6648 invoked from network); 6 Dec 2012 18:11:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:11:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16208213"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 18:11:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 18:11:53 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgfvN-0004KP-22;
	Thu, 06 Dec 2012 18:11:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgfvM-0003vy-Vd;
	Thu, 06 Dec 2012 18:11:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14581-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 18:11:52 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14581: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14581 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14581/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
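    The pattern behind the two fixes above (propagate -EFAULT, and use the
    relaxed copy for a range that was already validated) can be sketched in
    a few lines of C.  All names below are illustrative stand-ins, not
    Xen's real guest-access primitives:

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy guest address space. */
    #define GUEST_SPACE 256
    static unsigned char guest_mem[GUEST_SPACE];

    static int range_ok(unsigned long addr, unsigned long len)
    {
        return addr + len <= GUEST_SPACE;
    }

    /* Checking variant: validates the guest range before copying. */
    static int copy_from_guest(void *dst, unsigned long src, unsigned long len)
    {
        if (!range_ok(src, len))
            return -EFAULT;
        memcpy(dst, guest_mem + src, len);
        return 0;
    }

    /* "Relaxed" variant: skips validation; safe only because the caller
     * already validated this exact range via the checking copy above. */
    static void __copy_to_guest(unsigned long dst, const void *src,
                                unsigned long len)
    {
        memcpy(guest_mem + dst, src, len);
    }

    /* Toy platform op: read a value, modify it, write it back in place. */
    static int pdc_op(unsigned long gaddr)
    {
        unsigned int pdc;
        int rc = copy_from_guest(&pdc, gaddr, sizeof(pdc));
        if (rc)
            return rc;              /* the fix: propagate -EFAULT */
        pdc |= 1;                   /* ... operate on the value ... */
        __copy_to_guest(gaddr, &pdc, sizeof(pdc));
        return 0;
    }

    int main(void)
    {
        assert(pdc_op(0) == 0);
        assert(pdc_op(GUEST_SPACE) == -EFAULT);  /* out-of-range gaddr */
        puts("ok");
        return 0;
    }
    ```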
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    If no continuation was established, there must also be no attempt to
    cancel one: hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest-mode return address on the
    assumption that an earlier call to hypercall_create_continuation()
    took place.
    
    While touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
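    The guard this fix introduces - only cancel a continuation that was
    actually created - can be sketched as a toy state machine in C.  The
    names and the rewind-by-2 arithmetic are illustrative only; the real
    functions are hypercall_create_continuation() and
    hypercall_cancel_continuation():

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for Xen's continuation bookkeeping: creating
     * a continuation rewinds the guest return address so the hypercall is
     * re-executed; cancelling undoes that rewind. */
    static bool continuation_created;
    static int guest_rip;               /* toy "guest return address" */

    static void create_continuation(void)
    {
        continuation_created = true;
        guest_rip -= 2;                 /* rewind over the hypercall insn */
    }

    static void cancel_continuation(void)
    {
        /* Only valid after create_continuation(); cancelling
         * unconditionally (the bug) would corrupt guest_rip. */
        continuation_created = false;
        guest_rip += 2;
    }

    /* Toy compat wrapper: the operation may or may not get preempted. */
    static int compat_wrapper(bool preempted)
    {
        guest_rip = 100;
        continuation_created = false;
        if (preempted)
            create_continuation();
        /* ... the operation then turns out to be complete after all ... */
        if (continuation_created)       /* the fix: guard the cancellation */
            cancel_continuation();
        return guest_rip;
    }

    int main(void)
    {
        /* Either way the guest resumes at the original address. */
        printf("%d %d\n", compat_wrapper(true), compat_wrapper(false));
        return 0;
    }
    ```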
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
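    The requirement above - an invalid GFN must yield NULL even when no
    translation is in effect - can be sketched with a toy frame table in C.
    The bounds check and names are illustrative, not Xen's actual
    get_page_from_gfn() implementation:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Toy model: in the non-translated (PV) case the GFN is used directly
     * as a frame number, so it must still be range-checked before being
     * turned into a page pointer. */
    #define MAX_FRAMES 1024
    static int frame_table[MAX_FRAMES];

    static int *get_page_from_gfn(unsigned long gfn)
    {
        if (gfn >= MAX_FRAMES)      /* the fix: validate even untranslated */
            return NULL;
        return &frame_table[gfn];
    }

    int main(void)
    {
        assert(get_page_from_gfn(0) != NULL);
        assert(get_page_from_gfn(MAX_FRAMES) == NULL);  /* out of range */
        puts("ok");
        return 0;
    }
    ```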
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg30-0004vw-Cb; Thu, 06 Dec 2012 18:19:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2y-0004v4-Rw
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:45 +0000
Received: from [85.158.143.99:41683] by server-2.bemta-4.messagelabs.com id
	26/31-30861-0C1E0C05; Thu, 06 Dec 2012 18:19:44 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-16.tower-216.messagelabs.com!1354817980!17176997!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8613 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01aa_3e7eb2d7_82ef_4249_aa85_0de69392eacb;
	Thu, 06 Dec 2012 13:19:34 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:16 -0500
Message-Id: <1354817961-22196-3-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 3/8] Add vtpm/vtpmmgr and required libs to
	stubdom/Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add 3 new libraries to stubdom:
libgmp
polarssl
Berlios TPM Emulator 0.7.4

Also add the Makefile structure for vtpm-stubdom and vtpmmgrdom
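[Editor's illustration, not part of the original patch mail.] The cross-gmp rule below comments out HAVE_OBSTACK_VPRINTF in gmp's generated config.h so the newlib-based stubdom build does not reference obstack_vprintf. The sed edit can be sketched standalone (a throwaway temp file stands in for gmp's real config.h):

```shell
# Standalone sketch of the sed substitution used by the new cross-gmp rule;
# the temp file is a stand-in for gmp-$(XEN_TARGET_ARCH)/config.h.
tmp=$(mktemp)
printf '#define HAVE_OBSTACK_VPRINTF 1\n' > "$tmp"
sed -i 's/#define HAVE_OBSTACK_VPRINTF 1/\/\/#define HAVE_OBSTACK_VPRINTF 1/' "$tmp"
cat "$tmp"   # the define is now commented out with //
rm -f "$tmp"
```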

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/Makefile           |  138 +++++++++++++++++++++++++++++++++++++++++++-
 stubdom/polarssl.patch     |   64 ++++++++++++++++++++
 stubdom/tpmemu-0.7.4.patch |   12 ++++
 3 files changed, 211 insertions(+), 3 deletions(-)
 create mode 100644 stubdom/polarssl.patch
 create mode 100644 stubdom/tpmemu-0.7.4.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 50ba360..fc70d88 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -31,6 +31,18 @@ GRUB_VERSION=0.97
 OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
 OCAML_VERSION=3.11.0
 
+GMP_VERSION=4.3.2
+GMP_URL?=$(XEN_EXTFILES_URL)
+#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
+
+POLARSSL_VERSION=1.1.4
+POLARSSL_URL?=$(XEN_EXTFILES_URL)
+#POLARSSL_URL?=http://polarssl.org/code/releases
+
+TPMEMU_VERSION=0.7.4
+TPMEMU_URL?=$(XEN_EXTFILES_URL)
+#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
+
 WGET=wget -c
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
@@ -74,12 +86,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore
+TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
+build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
 else
 build: genpath
 endif
@@ -176,6 +188,76 @@ lwip-$(XEN_TARGET_ARCH): lwip-$(LWIP_VERSION).tar.gz
 	touch $@
 
 #############
+# cross-gmp
+#############
+gmp-$(GMP_VERSION).tar.bz2:
+	$(WGET) $(GMP_URL)/$@
+
+.PHONY: cross-gmp
+ifeq ($(XEN_TARGET_ARCH), x86_32)
+   GMPEXT=ABI=32
+endif
+gmp-$(XEN_TARGET_ARCH): gmp-$(GMP_VERSION).tar.bz2 $(NEWLIB_STAMPFILE)
+	tar xjf $<
+	mv gmp-$(GMP_VERSION) $@
+	#patch -d $@ -p0 < gmp.patch
+	cd $@; CPPFLAGS="-isystem $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include $(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" CC=$(CC) $(GMPEXT) ./configure --disable-shared --enable-static --disable-fft --without-readline --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf
+	sed -i 's/#define HAVE_OBSTACK_VPRINTF 1/\/\/#define HAVE_OBSTACK_VPRINTF 1/' $@/config.h
+	touch $@
+
+GMP_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libgmp.a
+cross-gmp: $(GMP_STAMPFILE)
+$(GMP_STAMPFILE): gmp-$(XEN_TARGET_ARCH)
+	( cd $< && \
+	  $(MAKE) && \
+	  $(MAKE) install )
+
+#############
+# cross-polarssl
+#############
+polarssl-$(POLARSSL_VERSION)-gpl.tgz:
+	$(WGET) $(POLARSSL_URL)/$@
+
+polarssl-$(XEN_TARGET_ARCH): polarssl-$(POLARSSL_VERSION)-gpl.tgz
+	tar xzf $<
+	mv polarssl-$(POLARSSL_VERSION) $@
+	patch -d $@ -p1 < polarssl.patch
+	touch $@
+
+POLARSSL_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libpolarssl.a
+cross-polarssl: $(POLARSSL_STAMPFILE)
+$(POLARSSL_STAMPFILE): polarssl-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE) lwip-$(XEN_TARGET_ARCH)
+	 ( cd $</library && \
+	   make CC="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -I $(realpath $(MINI_OS)/include)" && \
+	   mkdir -p $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include && \
+	   cp -r ../include/* $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include && \
+	   mkdir -p $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib && \
+	   $(INSTALL_DATA) libpolarssl.a $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib/ )
+
+#############
+# cross-tpmemu
+#############
+tpm_emulator-$(TPMEMU_VERSION).tar.gz:
+	$(WGET) $(TPMEMU_URL)/$@
+
+tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
+	tar xzf $<
+	mv tpm_emulator-$(TPMEMU_VERSION) $@
+	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
+	mkdir $@/build
+	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	touch $@
+
+TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
+$(TPMEMU_STAMPFILE): tpm_emulator-$(XEN_TARGET_ARCH) $(GMP_STAMPFILE)
+	( cd $</build && make VERBOSE=1 tpm_crypto tpm  )
+	cp $</build/crypto/libtpm_crypto.a $(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm_crypto.a
+	cp $</build/tpm/libtpm.a $(TPMEMU_STAMPFILE)
+
+.PHONY: cross-tpmemu
+cross-tpmemu: $(TPMEMU_STAMPFILE)
+
+#############
 # Cross-ocaml
 #############
 
@@ -319,6 +401,24 @@ c: $(CROSS_ROOT)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) 
 
 ######
+# VTPM
+######
+
+.PHONY: vtpm
+vtpm: cross-polarssl cross-tpmemu
+	make -C $(MINI_OS) links
+	XEN_TARGET_ARCH="$(XEN_TARGET_ARCH)" CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) -C $@
+
+######
+# VTPMMGR
+######
+
+.PHONY: vtpmmgr
+vtpmmgr: cross-polarssl
+	make -C $(MINI_OS) links
+	XEN_TARGET_ARCH="$(XEN_TARGET_ARCH)" CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) -C $@
+
+######
 # Grub
 ######
 
@@ -362,6 +462,14 @@ caml-stubdom: mini-os-$(XEN_TARGET_ARCH)-caml lwip-$(XEN_TARGET_ARCH) libxc cros
 c-stubdom: mini-os-$(XEN_TARGET_ARCH)-c lwip-$(XEN_TARGET_ARCH) libxc c
 	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
 
+.PHONY: vtpm-stubdom
+vtpm-stubdom: mini-os-$(XEN_TARGET_ARCH)-vtpm vtpm
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/vtpm/minios.cfg" $(MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS="$(CURDIR)/vtpm/vtpm.a" APP_LDLIBS="-ltpm -ltpm_crypto -lgmp"
+
+.PHONY: vtpmmgrdom
+vtpmmgrdom: mini-os-$(XEN_TARGET_ARCH)-vtpmmgr vtpmmgr
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/vtpmmgr/minios.cfg" $(MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS="$(CURDIR)/vtpmmgr/vtpmmgr.a" APP_LDLIBS="-lm"
+
 .PHONY: pv-grub
 pv-grub: mini-os-$(XEN_TARGET_ARCH)-grub libxc grub
 	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
@@ -375,7 +483,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore
+install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
 else
 install: genpath
 endif
@@ -399,6 +507,14 @@ install-xenstore: xenstore-stubdom
 	$(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
 
+install-vtpm: vtpm-stubdom
+	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
+	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpm/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpm-stubdom.gz"
+
+install-vtpmmgr: vtpmmgrdom
+	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
+	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpmmgr/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpmmgrdom.gz"
+
 #######
 # clean
 #######
@@ -411,8 +527,12 @@ clean:
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-caml
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-grub
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-xenstore
+	rm -fr mini-os-$(XEN_TARGET_ARCH)-vtpm
+	rm -fr mini-os-$(XEN_TARGET_ARCH)-vtpmmgr
 	$(MAKE) DESTDIR= -C caml clean
 	$(MAKE) DESTDIR= -C c clean
+	$(MAKE) -C vtpm clean
+	$(MAKE) -C vtpmmgr clean
 	rm -fr grub-$(XEN_TARGET_ARCH)
 	rm -f $(STUBDOMPATH)
 	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH) clean
@@ -426,6 +546,10 @@ crossclean: clean
 	rm -fr newlib-$(XEN_TARGET_ARCH)
 	rm -fr zlib-$(XEN_TARGET_ARCH) pciutils-$(XEN_TARGET_ARCH)
 	rm -fr libxc-$(XEN_TARGET_ARCH) ioemu
+	rm -fr gmp-$(XEN_TARGET_ARCH)
+	rm -fr polarssl-$(XEN_TARGET_ARCH)
+	rm -fr openssl-$(XEN_TARGET_ARCH)
+	rm -fr tpm_emulator-$(XEN_TARGET_ARCH)
 	rm -f mk-headers-$(XEN_TARGET_ARCH)
 	rm -fr ocaml-$(XEN_TARGET_ARCH)
 	rm -fr include
@@ -434,6 +558,10 @@ crossclean: clean
 .PHONY: patchclean
 patchclean: crossclean
 	rm -fr newlib-$(NEWLIB_VERSION)
+	rm -fr gmp-$(XEN_TARGET_ARCH)
+	rm -fr polarssl-$(XEN_TARGET_ARCH)
+	rm -fr openssl-$(XEN_TARGET_ARCH)
+	rm -fr tpm_emulator-$(XEN_TARGET_ARCH)
 	rm -fr lwip-$(XEN_TARGET_ARCH)
 	rm -fr grub-upstream
 
@@ -442,10 +570,14 @@ patchclean: crossclean
 downloadclean: patchclean
 	rm -f newlib-$(NEWLIB_VERSION).tar.gz
 	rm -f zlib-$(ZLIB_VERSION).tar.gz
+	rm -f gmp-$(GMP_VERSION).tar.bz2
+	rm -f tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	rm -f pciutils-$(LIBPCI_VERSION).tar.bz2
 	rm -f grub-$(GRUB_VERSION).tar.gz
 	rm -f lwip-$(LWIP_VERSION).tar.gz
 	rm -f ocaml-$(OCAML_VERSION).tar.gz
+	rm -f polarssl-$(POLARSSL_VERSION)-gpl.tgz
+	rm -f openssl-$(POLARSSL_VERSION)-gpl.tgz
 
 .PHONY: distclean
 distclean: downloadclean
diff --git a/stubdom/polarssl.patch b/stubdom/polarssl.patch
new file mode 100644
index 0000000..d387d4e
--- /dev/null
+++ b/stubdom/polarssl.patch
@@ -0,0 +1,64 @@
+diff -Naur polarssl-1.1.4/include/polarssl/config.h polarssl-x86_64/include/polarssl/config.h
+--- polarssl-1.1.4/include/polarssl/config.h	2011-12-22 05:06:27.000000000 -0500
++++ polarssl-x86_64/include/polarssl/config.h	2012-10-30 17:18:07.567001000 -0400
+@@ -164,8 +164,8 @@
+  * application.
+  *
+  * Uncomment this macro to prevent loading of default entropy functions.
+-#define POLARSSL_NO_DEFAULT_ENTROPY_SOURCES
+  */
++#define POLARSSL_NO_DEFAULT_ENTROPY_SOURCES
+
+ /**
+  * \def POLARSSL_NO_PLATFORM_ENTROPY
+@@ -175,8 +175,8 @@
+  * standards like the /dev/urandom or Windows CryptoAPI.
+  *
+  * Uncomment this macro to disable the built-in platform entropy functions.
+-#define POLARSSL_NO_PLATFORM_ENTROPY
+  */
++#define POLARSSL_NO_PLATFORM_ENTROPY
+
+ /**
+  * \def POLARSSL_PKCS1_V21
+@@ -426,8 +426,8 @@
+  * Requires: POLARSSL_TIMING_C
+  *
+  * This module enables the HAVEGE random number generator.
+- */
+ #define POLARSSL_HAVEGE_C
++ */
+
+ /**
+  * \def POLARSSL_MD_C
+@@ -490,7 +490,7 @@
+  *
+  * This module provides TCP/IP networking routines.
+  */
+-#define POLARSSL_NET_C
++//#define POLARSSL_NET_C
+
+ /**
+  * \def POLARSSL_PADLOCK_C
+@@ -644,8 +644,8 @@
+  * Caller:  library/havege.c
+  *
+  * This module is used by the HAVEGE random number generator.
+- */
+ #define POLARSSL_TIMING_C
++ */
+
+ /**
+  * \def POLARSSL_VERSION_C
+diff -Naur polarssl-1.1.4/library/bignum.c polarssl-x86_64/library/bignum.c
+--- polarssl-1.1.4/library/bignum.c	2012-04-29 16:15:55.000000000 -0400
++++ polarssl-x86_64/library/bignum.c	2012-10-30 17:21:52.135000999 -0400
+@@ -1101,7 +1101,7 @@
+             Z.p[i - t - 1] = ~0;
+         else
+         {
+-#if defined(POLARSSL_HAVE_LONGLONG)
++#if 0 //defined(POLARSSL_HAVE_LONGLONG)
+             t_udbl r;
+
+             r  = (t_udbl) X.p[i] << biL;
diff --git a/stubdom/tpmemu-0.7.4.patch b/stubdom/tpmemu-0.7.4.patch
new file mode 100644
index 0000000..b84eff1
--- /dev/null
+++ b/stubdom/tpmemu-0.7.4.patch
@@ -0,0 +1,12 @@
+diff -Naur tpm_emulator-x86_64-back/tpm/tpm_emulator_extern.c tpm_emulator-x86_64/tpm/tpm_emulator_extern.c
+--- tpm_emulator-x86_64-back/tpm/tpm_emulator_extern.c	2012-04-27 10:55:46.581963398 -0400
++++ tpm_emulator-x86_64/tpm/tpm_emulator_extern.c	2012-04-27 10:56:02.193034152 -0400
+@@ -249,7 +249,7 @@
+ #else /* TPM_NO_EXTERN */
+
+ int (*tpm_extern_init)(void)                                      = NULL;
+-int (*tpm_extern_release)(void)                                   = NULL;
++void (*tpm_extern_release)(void)                                   = NULL;
+ void* (*tpm_malloc)(size_t size)                                  = NULL;
+ void (*tpm_free)(/*const*/ void *ptr)                             = NULL;
+ void (*tpm_log)(int priority, const char *fmt, ...)               = NULL;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg2z-0004vV-KE; Thu, 06 Dec 2012 18:19:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2y-0004v2-CO
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:44 +0000
Received: from [85.158.139.211:33650] by server-11.bemta-5.messagelabs.com id
	2D/35-03409-FB1E0C05; Thu, 06 Dec 2012 18:19:43 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354817980!19409458!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26124 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01f0_53eea542_7350_4923_96cd_4c27df3340f1;
	Thu, 06 Dec 2012 13:19:35 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:21 -0500
Message-Id: <1354817961-22196-8-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 8/8] Add conditional build of subsystems to
	configure.ac
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The toplevel Makefile still works without running configure
and will build everything by default.
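[Editor's illustration, not part of the original patch mail.] The dist and install target lists are derived from SUBSYSTEMS via $(patsubst ...), which is what lets configure-selected subsystems narrow the build. The mechanism can be demonstrated with a self-contained throwaway Makefile (the subsystem names mirror the patch; the temp file path is a stand-in for the real toplevel Makefile):

```shell
# Demo of the $(patsubst %, dist-%, $(SUBSYSTEMS)) expansion: SUBSYSTEMS
# drives which dist-* targets the toplevel "dist" rule depends on.
demo=$(mktemp)
printf 'SUBSYSTEMS ?= xen kernels tools stubdom docs\nTARGS_DIST = $(patsubst %%, dist-%%, $(SUBSYSTEMS))\nshow:\n\t@echo $(TARGS_DIST)\n' > "$demo"
make -f "$demo" show                          # expands to five dist-* names
make -f "$demo" show SUBSYSTEMS="xen tools"   # command-line override narrows the list
rm -f "$demo"
```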

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 Makefile              |   10 ++++++++--
 config/Toplevel.mk.in |    1 +
 configure.ac          |   11 ++++++++++-
 m4/subsystem.m4       |   32 ++++++++++++++++++++++++++++++++
 4 files changed, 51 insertions(+), 3 deletions(-)
 create mode 100644 config/Toplevel.mk.in
 create mode 100644 m4/subsystem.m4

diff --git a/Makefile b/Makefile
index a6ed8be..aa3c7bd 100644
--- a/Makefile
+++ b/Makefile
@@ -6,6 +6,11 @@
 .PHONY: all
 all: dist
 
+-include config/Toplevel.mk
+SUBSYSTEMS?=xen kernels tools stubdom docs
+TARGS_DIST=$(patsubst %, dist-%, $(SUBSYSTEMS))
+TARGS_INSTALL=$(patsubst %, install-%, $(SUBSYSTEMS))
+
 export XEN_ROOT=$(CURDIR)
 include Config.mk
 
@@ -15,7 +20,7 @@ include buildconfigs/Rules.mk
 
 # build and install everything into the standard system directories
 .PHONY: install
-install: install-xen install-kernels install-tools install-stubdom install-docs
+install: $(TARGS_INSTALL)
 
 .PHONY: build
 build: kernels
@@ -37,7 +42,7 @@ test:
 # build and install everything into local dist directory
 .PHONY: dist
 dist: DESTDIR=$(DISTDIR)/install
-dist: dist-xen dist-kernels dist-tools dist-stubdom dist-docs dist-misc
+dist: $(TARGS_DIST) dist-misc
 
 dist-misc:
 	$(INSTALL_DIR) $(DISTDIR)/
@@ -151,6 +156,7 @@ endif
 # clean, but blow away kernel build tree plus tarballs
 .PHONY: distclean
 distclean:
+	-rm config/Toplevel.mk
 	$(MAKE) -C xen distclean
 	$(MAKE) -C tools distclean
 	$(MAKE) -C stubdom distclean
diff --git a/config/Toplevel.mk.in b/config/Toplevel.mk.in
new file mode 100644
index 0000000..4db7eaf
--- /dev/null
+++ b/config/Toplevel.mk.in
@@ -0,0 +1 @@
+SUBSYSTEMS               := @SUBSYSTEMS@
diff --git a/configure.ac b/configure.ac
index 5dacb46..fcbc4ae 100644
--- a/configure.ac
+++ b/configure.ac
@@ -6,7 +6,16 @@ AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
     [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
 AC_CONFIG_SRCDIR([./xen/common/kernel.c])
 AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_FILES([./config/Toplevel.mk])
 
-AC_CONFIG_SUBDIRS([tools stubdom])
+m4_include([m4/features.m4])
+m4_include([m4/subsystem.m4])
+
+AX_SUBSYSTEM_DEFAULT_ENABLE([xen])
+AX_SUBSYSTEM_DEFAULT_ENABLE([kernels])
+AX_SUBSYSTEM_DEFAULT_ENABLE([tools])
+AX_SUBSYSTEM_DEFAULT_ENABLE([stubdom])
+AX_SUBSYSTEM_DEFAULT_ENABLE([docs])
+AX_SUBSYSTEM_FINISH
 
 AC_OUTPUT()
diff --git a/m4/subsystem.m4 b/m4/subsystem.m4
new file mode 100644
index 0000000..d3eb8c9
--- /dev/null
+++ b/m4/subsystem.m4
@@ -0,0 +1,32 @@
+AC_DEFUN([AX_SUBSYSTEM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Disable build and install of $1]),[
+$1=n
+],[
+$1=y
+SUBSYSTEMS="$SUBSYSTEMS $1"
+AS_IF([test -e "$1/configure"], [
+AC_CONFIG_SUBDIRS([$1])
+])
+])
+AC_SUBST($1)
+])
+
+AC_DEFUN([AX_SUBSYSTEM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Enable build and install of $1]),[
+$1=y
+SUBSYSTEMS="$SUBSYSTEMS $1"
+AS_IF([test -e "$1/configure"], [
+AC_CONFIG_SUBDIRS([$1])
+])
+],[
+$1=n
+])
+AC_SUBST($1)
+])
+
+
+AC_DEFUN([AX_SUBSYSTEM_FINISH], [
+AC_SUBST(SUBSYSTEMS)
+])
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

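[Editorial note, not part of the patch: when configure has never been run, config/Toplevel.mk does not exist, the `-include` is silently skipped, and SUBSYSTEMS falls back to its `?=` default, from which the dist-*/install-* target lists are derived. A minimal POSIX-shell sketch of what the patsubst expansion produces — variable names mirror the Makefile, the loop itself is purely illustrative:]

```shell
# Mirrors the Makefile's ?= fallback and its patsubst expansion (sketch only)
SUBSYSTEMS="xen kernels tools stubdom docs"

TARGS_INSTALL=""
for s in $SUBSYSTEMS; do
    TARGS_INSTALL="$TARGS_INSTALL install-$s"   # $(patsubst %, install-%, ...)
done
TARGS_INSTALL=${TARGS_INSTALL# }                # drop leading space

echo "$TARGS_INSTALL"
# -> install-xen install-kernels install-tools install-stubdom install-docs
```

[Dropping a subsystem from SUBSYSTEMS — as configure does when a --disable-* option is given — removes the corresponding dist-*/install-* targets from the top-level dependencies.]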
From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg30-0004vk-0N; Thu, 06 Dec 2012 18:19:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2y-0004v7-Sp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:44 +0000
Received: from [193.109.254.147:55971] by server-14.bemta-14.messagelabs.com
	id 6E/EF-14517-0C1E0C05; Thu, 06 Dec 2012 18:19:44 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-5.tower-27.messagelabs.com!1354817981!6536870!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30540 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01c6_8da7cb57_bf34_46e6_ac22_2158df5693c2;
	Thu, 06 Dec 2012 13:19:35 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:18 -0500
Message-Id: <1354817961-22196-5-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 5/8] Add cmake dependency to README
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 README |    1 +
 1 file changed, 1 insertion(+)

diff --git a/README b/README
index 1938f66..6b9b510 100644
--- a/README
+++ b/README
@@ -61,6 +61,7 @@ provided by your OS distributor:
     * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
     * ACPI ASL compiler (iasl)
     * markdown
+    * cmake (if building vtpm stub domains)
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg2x-0004uk-1W; Thu, 06 Dec 2012 18:19:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2v-0004ud-5K
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:41 +0000
Received: from [85.158.139.211:33355] by server-7.bemta-5.messagelabs.com id
	CA/EE-23096-CB1E0C05; Thu, 06 Dec 2012 18:19:40 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354817975!19409430!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25728 invoked from network); 6 Dec 2012 18:19:37 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 18:19:37 -0000
Received: from anonelbe.jhuapl.edu (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_018e_8f22115b_d573_4c16_86c9_b8fdc7bc42c6;
	Thu, 06 Dec 2012 13:19:34 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:14 -0500
Message-Id: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for vtpm-stubdom to the stubdom
hierarchy. Makefile changes are in a later patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpm/Makefile    |   37 +++++
 stubdom/vtpm/minios.cfg  |   14 ++
 stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm.h      |   36 +++++
 stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm_cmd.h  |   31 ++++
 stubdom/vtpm/vtpm_pcrs.c |   43 +++++
 stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
 stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpmblk.h   |   31 ++++
 10 files changed, 1212 insertions(+)
 create mode 100644 stubdom/vtpm/Makefile
 create mode 100644 stubdom/vtpm/minios.cfg
 create mode 100644 stubdom/vtpm/vtpm.c
 create mode 100644 stubdom/vtpm/vtpm.h
 create mode 100644 stubdom/vtpm/vtpm_cmd.c
 create mode 100644 stubdom/vtpm/vtpm_cmd.h
 create mode 100644 stubdom/vtpm/vtpm_pcrs.c
 create mode 100644 stubdom/vtpm/vtpm_pcrs.h
 create mode 100644 stubdom/vtpm/vtpmblk.c
 create mode 100644 stubdom/vtpm/vtpmblk.h

diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
new file mode 100644
index 0000000..686c0ea
--- /dev/null
+++ b/stubdom/vtpm/Makefile
@@ -0,0 +1,37 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
+
+TARGET=vtpm.a
+OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
+
+
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
+
+$(TARGET): $(OBJS)
+	ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+$(OBJS): vtpm_manager.h
+
+vtpm_manager.h:
+	ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
+
+clean:
+	-rm $(TARGET) $(OBJS) vtpm_manager.h
+
+.PHONY: clean
diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
new file mode 100644
index 0000000..31652ee
--- /dev/null
+++ b/stubdom/vtpm/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=n
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
new file mode 100644
index 0000000..71aef78
--- /dev/null
+++ b/stubdom/vtpm/vtpm.c
@@ -0,0 +1,404 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <syslog.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <xen/xen.h>
+#include <tpmback.h>
+#include <tpmfront.h>
+
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "tpm/tpm_emulator_extern.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm.h"
+#include "vtpm_cmd.h"
+#include "vtpm_pcrs.h"
+#include "vtpmblk.h"
+
+#define TPM_LOG_INFO LOG_INFO
+#define TPM_LOG_ERROR LOG_ERR
+#define TPM_LOG_DEBUG LOG_DEBUG
+
+/* Global commandline options - default values */
+struct Opt_args opt_args = {
+   .startup = ST_CLEAR,
+   .loglevel = TPM_LOG_INFO,
+   .hwinitpcrs = VTPM_PCRNONE,
+   .tpmconf = 0,
+   .enable_maint_cmds = false,
+};
+
+static uint32_t badords[32];
+static unsigned int n_badords = 0;
+
+entropy_context entropy;
+ctr_drbg_context ctr_drbg;
+
+struct tpmfront_dev* tpmfront_dev;
+
+void vtpm_get_extern_random_bytes(void *buf, size_t nbytes)
+{
+   ctr_drbg_random(&ctr_drbg, buf, nbytes);
+}
+
+int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
+   return read_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_write_to_file(uint8_t *data, size_t data_length) {
+   return write_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_extern_init_fake(void) {
+   return 0;
+}
+
+void vtpm_extern_release_fake(void) {
+}
+
+
+void vtpm_log(int priority, const char *fmt, ...)
+{
+   if(opt_args.loglevel >= priority) {
+      va_list v;
+      va_start(v, fmt);
+      vprintf(fmt, v);
+      va_end(v);
+   }
+}
+
+static uint64_t vtpm_get_ticks(void)
+{
+  static uint64_t old_t = 0;
+  uint64_t new_t, res_t;
+  struct timeval tv;
+  gettimeofday(&tv, NULL);
+  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
+  res_t = (old_t > 0) ? new_t - old_t : 0;
+  old_t = new_t;
+  return res_t;
+}
+
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+int init_random(void) {
+   /* Initialize the rng */
+   entropy_init(&entropy);
+   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&entropy);
+   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
+
+   return 0;
+}
+
+int check_ordinal(tpmcmd_t* tpmcmd) {
+   TPM_COMMAND_CODE ord;
+   UINT32 len = 4;
+   BYTE* ptr;
+   unsigned int i;
+
+   if(tpmcmd->req_len < 10) {
+      return true;
+   }
+
+   ptr = tpmcmd->req + 6;
+   tpm_unmarshal_UINT32(&ptr, &len, &ord);
+
+   for(i = 0; i < n_badords; ++i) {
+      if(ord == badords[i]) {
+         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
+         return false;
+      }
+   }
+   return true;
+}
+
+static void main_loop(void) {
+   tpmcmd_t* tpmcmd = NULL;
+   domid_t domid;		/* Domid of frontend */
+   unsigned int handle;	/* handle of frontend */
+   int res = -1;
+
+   info("VTPM Initializing\n");
+
+   /* Set required tpm config args */
+   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
+   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
+
+   /* Initialize the emulator */
+   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
+
+   /* Initialize any requested PCRs with hardware TPM values */
+   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
+      error("Failed to initialize PCRs with hardware TPM values");
+      goto abort_postpcrs;
+   }
+
+   /* Wait for the frontend domain to connect */
+   info("Waiting for frontend domain to connect...");
+   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
+      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
+   } else {
+      error("Unable to attach to a frontend");
+   }
+
+   tpmcmd = tpmback_req(domid, handle);
+   while(tpmcmd) {
+      /* Handle the request */
+      if(tpmcmd->req_len) {
+	 tpmcmd->resp = NULL;
+	 tpmcmd->resp_len = 0;
+
+         /* First check for disabled ordinals */
+         if(!check_ordinal(tpmcmd)) {
+            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
+         }
+         /* If not disabled, do the command */
+         else {
+            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
+               error("tpm_handle_command() failed");
+               create_error_response(tpmcmd, TPM_FAIL);
+            }
+         }
+      }
+
+      /* Send the response */
+      tpmback_resp(tpmcmd);
+
+      /* Wait for the next request */
+      tpmcmd = tpmback_req(domid, handle);
+
+   }
+
+abort_postpcrs:
+   info("VTPM Shutting down\n");
+
+   tpm_emulator_shutdown();
+}
+
+int parse_cmd_line(int argc, char** argv)
+{
+   char sval[25];
+   char* logstr = NULL;
+   /* Parse the command strings */
+   for(unsigned int i = 1; i < argc; ++i) {
+      if (sscanf(argv[i], "loglevel=%25s", sval) == 1){
+	 if (!strcmp(sval, "debug")) {
+	    opt_args.loglevel = TPM_LOG_DEBUG;
+	    logstr = "debug";
+	 }
+	 else if (!strcmp(sval, "info")) {
+	    logstr = "info";
+	    opt_args.loglevel = TPM_LOG_INFO;
+	 }
+	 else if (!strcmp(sval, "error")) {
+	    logstr = "error";
+	    opt_args.loglevel = TPM_LOG_ERROR;
+	 }
+      }
+      else if (!strcmp(argv[i], "clear")) {
+	 opt_args.startup = ST_CLEAR;
+      }
+      else if (!strcmp(argv[i], "save")) {
+	 opt_args.startup = ST_SAVE;
+      }
+      else if (!strcmp(argv[i], "deactivated")) {
+	 opt_args.startup = ST_DEACTIVATED;
+      }
+      else if (!strncmp(argv[i], "maintcmds=", 10)) {
+         if(!strcmp(argv[i] + 10, "1")) {
+            opt_args.enable_maint_cmds = true;
+         } else if(!strcmp(argv[i] + 10, "0")) {
+            opt_args.enable_maint_cmds = false;
+         }
+      }
+      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
+         char *pch = argv[i] + 10;
+         unsigned int v1, v2;
+         pch = strtok(pch, ",");
+         while(pch != NULL) {
+            if(!strcmp(pch, "all")) {
+               //Set all
+               opt_args.hwinitpcrs = VTPM_PCRALL;
+            } else if(!strcmp(pch, "none")) {
+               //Set none
+               opt_args.hwinitpcrs = VTPM_PCRNONE;
+            } else if(sscanf(pch, "%u", &v1) == 1) {
+               //Set one
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               opt_args.hwinitpcrs |= (1 << v1);
+            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
+               //Set range
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               if(v2 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v2);
+                  return -1;
+               }
+               if(v2 < v1) {
+                  unsigned tp = v1;
+                  v1 = v2;
+                  v2 = tp;
+               }
+               for(unsigned int i = v1; i <= v2; ++i) {
+                  opt_args.hwinitpcrs |= (1 << i);
+               }
+            } else {
+               error("hwinitpcr error: Invalid PCR specification: %s", pch);
+               return -1;
+            }
+            pch = strtok(NULL, ",");
+         }
+      }
+      else {
+	 error("Invalid command line option `%s'", argv[i]);
+      }
+
+   }
+
+   /* Check Errors and print results */
+   switch(opt_args.startup) {
+      case ST_CLEAR:
+	 info("Startup mode is `clear'");
+	 break;
+      case ST_SAVE:
+	 info("Startup mode is `save'");
+	 break;
+      case ST_DEACTIVATED:
+	 info("Startup mode is `deactivated'");
+	 break;
+      default:
+	 error("Invalid startup mode %d", opt_args.startup);
+	 return -1;
+   }
+
+   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
+   {
+      char pcrstr[1024];
+      char* ptr = pcrstr;
+
+      pcrstr[0] = '\0';
+      info("The following PCRs will be initialized with values from the hardware TPM:");
+      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+         if(opt_args.hwinitpcrs & (1 << i)) {
+            ptr += sprintf(ptr, "%u, ", i);
+         }
+      }
+      /* get rid of the last comma if any numbers were printed */
+      *(ptr - 2) = '\0';
+
+      info("\t%s", pcrstr);
+   } else {
+      info("All PCRs initialized to default values");
+   }
+
+   if(!opt_args.enable_maint_cmds) {
+      info("TPM Maintenance Commands disabled");
+      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
+      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
+      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
+   } else {
+      info("TPM Maintenance Commands enabled");
+   }
+
+   info("Log level set to %s", logstr);
+
+   return 0;
+}
+
+void cleanup_opt_args(void) {
+}
+
+int main(int argc, char **argv)
+{
+   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
+   sleep(2);
+
+   /* Setup extern function pointers */
+   tpm_extern_init = vtpm_extern_init_fake;
+   tpm_extern_release = vtpm_extern_release_fake;
+   tpm_malloc = malloc;
+   tpm_free = free;
+   tpm_log = vtpm_log;
+   tpm_get_ticks = vtpm_get_ticks;
+   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
+   tpm_write_to_storage = vtpm_write_to_file;
+   tpm_read_from_storage = vtpm_read_from_file;
+
+   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
+   if(parse_cmd_line(argc, argv)) {
+      error("Error parsing command line");
+      return -1;
+   }
+
+   /* Initialize devices */
+   init_tpmback();
+   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+      error("Unable to initialize tpmfront device");
+      goto abort_posttpmfront;
+   }
+
+   /* Seed the RNG with entropy from hardware TPM */
+   if(init_random()) {
+      error("Unable to initialize RNG");
+      goto abort_postrng;
+   }
+
+   /* Initialize blkfront device */
+   if(init_vtpmblk(tpmfront_dev)) {
+      error("Unable to initialize Blkfront persistent storage");
+      goto abort_postvtpmblk;
+   }
+
+   /* Run main loop */
+   main_loop();
+
+   /* Shutdown blkfront */
+   shutdown_vtpmblk();
+abort_postvtpmblk:
+abort_postrng:
+
+   /* Close devices */
+   shutdown_tpmfront(tpmfront_dev);
+abort_posttpmfront:
+   shutdown_tpmback();
+
+   cleanup_opt_args();
+
+   return 0;
+}
diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
new file mode 100644
index 0000000..5919e44
--- /dev/null
+++ b/stubdom/vtpm/vtpm.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_H
+#define VTPM_H
+
+#include <stdbool.h>
+
+/* For testing */
+#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
+#define VERS_CMD_LEN 22
+
+/* Global commandline options */
+struct Opt_args {
+   enum StartUp {
+      ST_CLEAR = 1,
+      ST_SAVE = 2,
+      ST_DEACTIVATED = 3
+   } startup;
+   unsigned long hwinitpcrs;
+   int loglevel;
+   uint32_t tpmconf;
+   bool enable_maint_cmds;
+};
+extern struct Opt_args opt_args;
+
+#endif
diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
new file mode 100644
index 0000000..7eae98b
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <types.h>
+#include <xen/xen.h>
+#include <mm.h>
+#include <gnttab.h>
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_manager.h"
+#include "vtpm_cmd.h"
+#include <tpmback.h>
+
+#define TRYFAILGOTO(C) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      goto abort_egress; \
+   }
+#define TRYFAILGOTOMSG(C, msg) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      error(msg); \
+      goto abort_egress; \
+   }
+#define CHECKSTATUSGOTO(ret, fname) \
+   if((ret) != TPM_SUCCESS) { \
+      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
+      status = ret; \
+      goto abort_egress; \
+   }
+
+#define ERR_MALFORMED "Malformed response from backend"
+#define ERR_TPMFRONT "Error sending command through frontend device"
+
+struct shpage {
+   void* page;
+   grant_ref_t grantref;
+};
+
+typedef struct shpage shpage_t;
+
+static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord)
+{
+   return *bptr == NULL ||
+	 tpm_marshal_UINT16(bptr, len, tag) ||
+	 tpm_marshal_UINT32(bptr, len, size) ||
+	 tpm_marshal_UINT32(bptr, len, ord);
+}
+
+static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord)
+{
+   return *bptr == NULL ||
+	 tpm_unmarshal_UINT16(bptr, len, tag) ||
+	 tpm_unmarshal_UINT32(bptr, len, size) ||
+	 tpm_unmarshal_UINT32(bptr, len, ord);
+}
+
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode)
+{
+   TPM_TAG tag;
+   UINT32 len = tpmcmd->req_len;
+   uint8_t* respptr;
+   uint8_t* cmdptr = tpmcmd->req;
+
+   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
+      switch (tag) {
+         case TPM_TAG_RQU_COMMAND:
+            tag = TPM_TAG_RSP_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH1_COMMAND:
+            tag = TPM_TAG_RSP_AUTH1_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH2_COMMAND:
+            tag = TPM_TAG_RSP_AUTH2_COMMAND;
+            break;
+      }
+   } else {
+      tag = TPM_TAG_RSP_COMMAND;
+   }
+
+   tpmcmd->resp_len = len = 10;
+   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
+
+   return pack_header(&respptr, &len, tag, len, errorcode);
+}
+
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Ask the real tpm for random bytes for the seed */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm command */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
+
+   /* Send cmd, wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
+      ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
+
+   // Get the number of random bytes in the response
+   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
+   *numbytes = size;
+
+   //Get the random bytes out, the tpm may give us fewer bytes than we want
+   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
+
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+
+   /* Send the command to vtpm_manager */
+   info("Requesting Encryption key from backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
+
+   /* Get the size of the key */
+   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
+
+   /* Copy the key bits */
+   *data = malloc(*data_length);
+   memcpy(*data, bptr, *data_length);
+
+   goto egress;
+abort_egress:
+   error("VTPM_LoadHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   memcpy(bptr, data, data_length);
+   bptr += data_length;
+
+   /* Send the command to vtpm_manager */
+   info("Sending encryption key to backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
+
+   goto egress;
+abort_egress:
+   error("VTPM_SaveHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t *cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Just send a TPM_PCRRead Command to the HW tpm */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm cmd */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
+
+   /*Send Cmd wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
+
+   //Copy out the PCR digest value
+   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
new file mode 100644
index 0000000..b0bfa22
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef MANAGER_H
+#define MANAGER_H
+
+#include <tpmfront.h>
+#include <tpmback.h>
+#include "tpm/tpm_structures.h"
+
+/* Create a command response error header */
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
+/* Request random bytes from hardware tpm, returns 0 on success */
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
+/* Retrieve 256 bit AES encryption key from manager */
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
+/* Manager securely saves our 256 bit AES encryption key */
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
+/* Send a TPM_PCRRead command through the manager to the hw tpm */
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
+
+#endif
diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
new file mode 100644
index 0000000..22a6cef
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include "vtpm_pcrs.h"
+#include "vtpm_cmd.h"
+#include "tpm/tpm_data.h"
+
+#define PCR_VALUE      tpmData.permanent.data.pcrValue
+
+static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
+   if(pcrIndex >= TPM_NUM_PCR) {
+      return TPM_BADINDEX;
+   }
+   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
+   return TPM_SUCCESS;
+}
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs)
+{
+   TPM_RESULT rc = TPM_SUCCESS;
+   uint8_t digest[sizeof(TPM_PCRVALUE)];
+
+   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+      if(pcrs & (1 << i)) {
+         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
+            error("TPM_PCRRead failed with error: %d", rc);
+            return rc;
+         }
+         write_pcr_direct(i, digest);
+      }
+   }
+
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
new file mode 100644
index 0000000..11835f9
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_PCRS_H
+#define VTPM_PCRS_H
+
+#include "tpm/tpm_structures.h"
+
+#define VTPM_PCR0 (1 << 0)
+#define VTPM_PCR1 (1 << 1)
+#define VTPM_PCR2 (1 << 2)
+#define VTPM_PCR3 (1 << 3)
+#define VTPM_PCR4 (1 << 4)
+#define VTPM_PCR5 (1 << 5)
+#define VTPM_PCR6 (1 << 6)
+#define VTPM_PCR7 (1 << 7)
+#define VTPM_PCR8 (1 << 8)
+#define VTPM_PCR9 (1 << 9)
+#define VTPM_PCR10 (1 << 10)
+#define VTPM_PCR11 (1 << 11)
+#define VTPM_PCR12 (1 << 12)
+#define VTPM_PCR13 (1 << 13)
+#define VTPM_PCR14 (1 << 14)
+#define VTPM_PCR15 (1 << 15)
+#define VTPM_PCR16 (1 << 16)
+#define VTPM_PCR17 (1 << 17)
+#define VTPM_PCR18 (1 << 18)
+#define VTPM_PCR19 (1 << 19)
+#define VTPM_PCR20 (1 << 20)
+#define VTPM_PCR21 (1 << 21)
+#define VTPM_PCR22 (1 << 22)
+#define VTPM_PCR23 (1 << 23)
+
+#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
+#define VTPM_PCRNONE 0
+
+#define VTPM_NUMPCRS 24
+
+struct tpmfront_dev;
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
+
+
+#endif
diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
new file mode 100644
index 0000000..b343bd8
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <mini-os/byteorder.h>
+#include "vtpmblk.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_cmd.h"
+#include "polarssl/aes.h"
+#include "polarssl/sha1.h"
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <fcntl.h>
+
+/*Encryption key and block sizes */
+#define BLKSZ 16
+
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
+{
+   struct blkfront_info blkinfo;
+   info("Initializing persistent NVM storage\n");
+
+   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
+      error("BLKIO: ERROR Unable to initialize blkfront");
+      return -1;
+   }
+   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
+      error("BLKIO: ERROR block device is read only!");
+      goto error;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
+      error("Unable to open blkfront file descriptor!");
+      goto error;
+   }
+
+   return 0;
+error:
+   shutdown_blkfront(blkdev);
+   blkdev = NULL;
+   return -1;
+}
+
+void shutdown_vtpmblk(void)
+{
+   close(blkfront_fd);
+   blkfront_fd = -1;
+   blkdev = NULL;
+}
+
+int write_vtpmblk_raw(uint8_t *data, size_t data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+   debug("Begin Write data=%p len=%u", data, data_length);
+
+   lenbuf = cpu_to_be32((uint32_t)data_length);
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("write(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
+      error("write(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Wrote %u bytes to NVM persistent storage", data_length);
+
+   return 0;
+}
+
+int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("read(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   *data_length = (size_t) be32_to_cpu(lenbuf);
+   if(*data_length == 0) {
+      error("read 0 data_length for NVM");
+      return -1;
+   }
+
+   *data = tpm_malloc(*data_length);
+   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
+      error("read(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Read %u bytes from NVM persistent storage", *data_length);
+   return 0;
+}
+
+int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   aes_context aes_ctx;
+   UINT32 temp;
+   int mod;
+
+   uint8_t* clbuf = NULL;
+
+   uint8_t* ivptr;
+   int ivlen;
+
+   uint8_t* cptr;	//Cipher block pointer
+   int clen;	//Cipher block length
+
+   /*Create a new 256 bit encryption key */
+   if(symkey == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
+
+   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
+   temp = sizeof(UINT32);
+   ivlen = BLKSZ - temp;
+   tpm_get_extern_random_bytes(iv, ivlen);
+   ivptr = iv + ivlen;
+   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
+
+   /*The clear text needs to be padded out to a multiple of BLKSZ */
+   mod = clear_len % BLKSZ;
+   clen = mod ? clear_len + BLKSZ - mod : clear_len;
+   clbuf = malloc(clen);
+   if (clbuf == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   memcpy(clbuf, clear, clear_len);
+   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
+   if(clen - clear_len) {
+      memset(clbuf + clear_len, 0, clen - clear_len);
+   }
+
+   /* Setup the ciphertext buffer */
+   *cipher_len = BLKSZ + clen;		/*iv + ciphertext */
+   cptr = *cipher = malloc(*cipher_len);
+   if (*cipher == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Copy the IV to cipher text blob*/
+   memcpy(cptr, iv, BLKSZ);
+   cptr += BLKSZ;
+
+   /* Setup encryption */
+   aes_setkey_enc(&aes_ctx, symkey, 256);
+
+   /* Do encryption now */
+   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
+
+   goto egress;
+abort_egress:
+egress:
+   free(clbuf);
+   return rc;
+}
+int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   uint8_t* ivptr;
+   UINT32 u32, temp;
+   aes_context aes_ctx;
+
+   uint8_t* cptr = cipher;	//cipher block pointer
+   int clen = cipher_len;	//cipher block length
+
+   /* Pull out the initialization vector */
+   memcpy(iv, cipher, BLKSZ);
+   cptr += BLKSZ;
+   clen -= BLKSZ;
+
+   /* Setup the clear text buffer */
+   if((*clear = malloc(clen)) == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Get the length of clear text from last 4 bytes of iv */
+   temp = sizeof(UINT32);
+   ivptr = iv + BLKSZ - temp;
+   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
+   *clear_len = u32;
+
+   /* Setup decryption */
+   aes_setkey_dec(&aes_ctx, symkey, 256);
+
+   /* Do decryption now */
+   if ((clen % BLKSZ) != 0) {
+      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
+      rc = -1;
+      goto abort_egress;
+   }
+   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
+
+   goto egress;
+abort_egress:
+egress:
+   return rc;
+}
+
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   uint8_t hashkey[HASHKEYSZ];
+   uint8_t* symkey = hashkey + HASHSZ;
+
+   /* Encrypt the data */
+   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
+      goto abort_egress;
+   }
+   /* Write to disk */
+   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
+      goto abort_egress;
+   }
+   /* Get sha1 hash of data */
+   sha1(cipher, cipher_len, hashkey);
+
+   /* Send hash and key to manager */
+   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   return rc;
+}
+
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   size_t keysize;
+   uint8_t* hashkey = NULL;
+   uint8_t hash[HASHSZ];
+   uint8_t* symkey;
+
+   /* Retrieve the hash and the key from the manager */
+   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   if(keysize != HASHKEYSZ) {
+      error("Manager returned a hashkey of invalid size! expected %d, actual %d", HASHKEYSZ, keysize);
+      rc = -1;
+      goto abort_egress;
+   }
+   symkey = hashkey + HASHSZ;
+
+   /* Read from disk now */
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
+      goto abort_egress;
+   }
+
+   /* Compute the hash of the cipher text and compare */
+   sha1(cipher, cipher_len, hash);
+   if(memcmp(hash, hashkey, HASHSZ)) {
+      int i;
+      error("NVM Storage Checksum failed!");
+      printf("Expected: ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hashkey[i]);
+      }
+      printf("\n");
+      printf("Actual:   ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hash[i]);
+      }
+      printf("\n");
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Decrypt the blob */
+   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   free(hashkey);
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
new file mode 100644
index 0000000..282ce6a
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef NVM_H
+#define NVM_H
+#include <mini-os/types.h>
+#include <xen/xen.h>
+#include <tpmfront.h>
+
+#define NVMKEYSZ 32
+#define HASHSZ 20
+#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
+void shutdown_vtpmblk(void);
+
+/* Encrypts and writes data to blk device */
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
+/* Reads, Decrypts, and returns data from blk device */
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
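The on-disk NVM blob produced by encrypt_vtpmblk in the patch above has a simple layout: a 16-byte IV whose last four bytes carry the big-endian clear-text length, followed by the AES-CBC ciphertext padded up to a 16-byte multiple. The arithmetic can be sketched stand-alone as below; the helper names here are illustrative only and are not part of the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define BLKSZ 16  /* AES block size, as in vtpmblk.c */

/* Store the clear-text length into the last 4 bytes of the IV,
 * big-endian, mirroring the tpm_marshal_UINT32 call in encrypt_vtpmblk. */
static void pack_iv_len(uint8_t iv[BLKSZ], uint32_t clear_len)
{
    iv[BLKSZ - 4] = (uint8_t)(clear_len >> 24);
    iv[BLKSZ - 3] = (uint8_t)(clear_len >> 16);
    iv[BLKSZ - 2] = (uint8_t)(clear_len >> 8);
    iv[BLKSZ - 1] = (uint8_t)clear_len;
}

/* Recover the clear-text length from the IV tail, as decrypt_vtpmblk
 * does with tpm_unmarshal_UINT32. */
static uint32_t unpack_iv_len(const uint8_t iv[BLKSZ])
{
    return ((uint32_t)iv[BLKSZ - 4] << 24) |
           ((uint32_t)iv[BLKSZ - 3] << 16) |
           ((uint32_t)iv[BLKSZ - 2] << 8)  |
            (uint32_t)iv[BLKSZ - 1];
}

/* Round the clear-text length up to a multiple of BLKSZ, matching the
 * `mod ? clear_len + BLKSZ - mod : clear_len` expression in the patch. */
static size_t padded_len(size_t clear_len)
{
    size_t mod = clear_len % BLKSZ;
    return mod ? clear_len + BLKSZ - mod : clear_len;
}
```

With this layout the stored blob occupies BLKSZ + padded_len(clear_len) bytes, so for example a 20-byte secret takes 16 + 32 = 48 bytes on the block device, and the exact clear length survives the padding because it rides along in the IV.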

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg31-0004wv-Qf; Thu, 06 Dec 2012 18:19:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg31-0004w6-4P
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:47 +0000
Received: from [85.158.139.83:54098] by server-14.bemta-5.messagelabs.com id
	97/43-21768-2C1E0C05; Thu, 06 Dec 2012 18:19:46 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354817983!26025948!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13517 invoked from network); 6 Dec 2012 18:19:45 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 18:19:45 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01e2_a7b931be_77b9_4a4b_aebc_83edef72c4e9;
	Thu, 06 Dec 2012 13:19:35 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:20 -0500
Message-Id: <1354817961-22196-7-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 7/8] Add a top level configure script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                         |    1 +
 tools/config.guess => config.guess |    0
 tools/config.sub => config.sub     |    0
 configure.ac                       |   12 ++++++++++++
 tools/configure.ac                 |    4 ++--
 tools/install.sh                   |    1 -
 6 files changed, 15 insertions(+), 3 deletions(-)
 rename tools/config.guess => config.guess (100%)
 rename tools/config.sub => config.sub (100%)
 create mode 100644 configure.ac
 mode change 100755 => 100644 install.sh
 delete mode 100644 tools/install.sh

diff --git a/autogen.sh b/autogen.sh
index ada482c..1456d94 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -1,4 +1,5 @@
 #!/bin/sh -e
+autoconf
 cd tools
 autoconf
 autoheader
diff --git a/tools/config.guess b/config.guess
similarity index 100%
rename from tools/config.guess
rename to config.guess
diff --git a/tools/config.sub b/config.sub
similarity index 100%
rename from tools/config.sub
rename to config.sub
diff --git a/configure.ac b/configure.ac
new file mode 100644
index 0000000..5dacb46
--- /dev/null
+++ b/configure.ac
@@ -0,0 +1,12 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([./xen/common/kernel.c])
+AC_PREFIX_DEFAULT([/usr])
+
+AC_CONFIG_SUBDIRS([tools stubdom])
+
+AC_OUTPUT()
diff --git a/install.sh b/install.sh
old mode 100755
new mode 100644
diff --git a/tools/configure.ac b/tools/configure.ac
index 971e3e9..2bd71b6 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -2,13 +2,13 @@
 # Process this file with autoconf to produce a configure script.
 
 AC_PREREQ([2.67])
-AC_INIT([Xen Hypervisor], m4_esyscmd([../version.sh ../xen/Makefile]),
+AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Makefile]),
     [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
 AC_CONFIG_SRCDIR([libxl/libxl.c])
 AC_CONFIG_FILES([../config/Tools.mk])
 AC_CONFIG_HEADERS([config.h])
 AC_PREFIX_DEFAULT([/usr])
-AC_CONFIG_AUX_DIR([.])
+AC_CONFIG_AUX_DIR([../])
 
 # Check if CFLAGS, LDFLAGS, LIBS, CPPFLAGS or CPP is set and print a warning
 
diff --git a/tools/install.sh b/tools/install.sh
deleted file mode 100644
index 3f44f99..0000000
--- a/tools/install.sh
+++ /dev/null
@@ -1 +0,0 @@
-../install.sh
\ No newline at end of file
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg2x-0004uk-1W; Thu, 06 Dec 2012 18:19:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2v-0004ud-5K
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:41 +0000
Received: from [85.158.139.211:33355] by server-7.bemta-5.messagelabs.com id
	CA/EE-23096-CB1E0C05; Thu, 06 Dec 2012 18:19:40 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354817975!19409430!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25728 invoked from network); 6 Dec 2012 18:19:37 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 18:19:37 -0000
Received: from anonelbe.jhuapl.edu (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_018e_8f22115b_d573_4c16_86c9_b8fdc7bc42c6;
	Thu, 06 Dec 2012 13:19:34 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:14 -0500
Message-Id: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for vtpm-stubdom to the stubdom
hierarchy. The Makefile changes follow in a later patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpm/Makefile    |   37 +++++
 stubdom/vtpm/minios.cfg  |   14 ++
 stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm.h      |   36 +++++
 stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm_cmd.h  |   31 ++++
 stubdom/vtpm/vtpm_pcrs.c |   43 +++++
 stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
 stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpmblk.h   |   31 ++++
 10 files changed, 1212 insertions(+)
 create mode 100644 stubdom/vtpm/Makefile
 create mode 100644 stubdom/vtpm/minios.cfg
 create mode 100644 stubdom/vtpm/vtpm.c
 create mode 100644 stubdom/vtpm/vtpm.h
 create mode 100644 stubdom/vtpm/vtpm_cmd.c
 create mode 100644 stubdom/vtpm/vtpm_cmd.h
 create mode 100644 stubdom/vtpm/vtpm_pcrs.c
 create mode 100644 stubdom/vtpm/vtpm_pcrs.h
 create mode 100644 stubdom/vtpm/vtpmblk.c
 create mode 100644 stubdom/vtpm/vtpmblk.h

diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
new file mode 100644
index 0000000..686c0ea
--- /dev/null
+++ b/stubdom/vtpm/Makefile
@@ -0,0 +1,37 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
+
+TARGET=vtpm.a
+OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
+
+
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
+CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
+
+$(TARGET): $(OBJS)
+	ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+$(OBJS): vtpm_manager.h
+
+vtpm_manager.h:
+	ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
+
+clean:
+	-rm $(TARGET) $(OBJS) vtpm_manager.h
+
+.PHONY: clean
diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
new file mode 100644
index 0000000..31652ee
--- /dev/null
+++ b/stubdom/vtpm/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=n
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
new file mode 100644
index 0000000..71aef78
--- /dev/null
+++ b/stubdom/vtpm/vtpm.c
@@ -0,0 +1,404 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <syslog.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <xen/xen.h>
+#include <tpmback.h>
+#include <tpmfront.h>
+
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "tpm/tpm_emulator_extern.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm.h"
+#include "vtpm_cmd.h"
+#include "vtpm_pcrs.h"
+#include "vtpmblk.h"
+
+#define TPM_LOG_INFO LOG_INFO
+#define TPM_LOG_ERROR LOG_ERR
+#define TPM_LOG_DEBUG LOG_DEBUG
+
+/* Global commandline options - default values */
+struct Opt_args opt_args = {
+   .startup = ST_CLEAR,
+   .loglevel = TPM_LOG_INFO,
+   .hwinitpcrs = VTPM_PCRNONE,
+   .tpmconf = 0,
+   .enable_maint_cmds = false,
+};
+
+static uint32_t badords[32];
+static unsigned int n_badords = 0;
+
+entropy_context entropy;
+ctr_drbg_context ctr_drbg;
+
+struct tpmfront_dev* tpmfront_dev;
+
+void vtpm_get_extern_random_bytes(void *buf, size_t nbytes)
+{
+   ctr_drbg_random(&ctr_drbg, buf, nbytes);
+}
+
+int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
+   return read_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_write_to_file(uint8_t *data, size_t data_length) {
+   return write_vtpmblk(tpmfront_dev, data, data_length);
+}
+
+int vtpm_extern_init_fake(void) {
+   return 0;
+}
+
+void vtpm_extern_release_fake(void) {
+}
+
+
+void vtpm_log(int priority, const char *fmt, ...)
+{
+   if(opt_args.loglevel >= priority) {
+      va_list v;
+      va_start(v, fmt);
+      vprintf(fmt, v);
+      va_end(v);
+   }
+}
+
+static uint64_t vtpm_get_ticks(void)
+{
+  static uint64_t old_t = 0;
+  uint64_t new_t, res_t;
+  struct timeval tv;
+  gettimeofday(&tv, NULL);
+  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
+  res_t = (old_t > 0) ? new_t - old_t : 0;
+  old_t = new_t;
+  return res_t;
+}
+
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+int init_random(void) {
+   /* Initialize the rng */
+   entropy_init(&entropy);
+   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&entropy);
+   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
+
+   return 0;
+}
+
+int check_ordinal(tpmcmd_t* tpmcmd) {
+   TPM_COMMAND_CODE ord;
+   UINT32 len = 4;
+   BYTE* ptr;
+   unsigned int i;
+
+   if(tpmcmd->req_len < 10) {
+      return true;
+   }
+
+   ptr = tpmcmd->req + 6;
+   tpm_unmarshal_UINT32(&ptr, &len, &ord);
+
+   for(i = 0; i < n_badords; ++i) {
+      if(ord == badords[i]) {
+         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
+         return false;
+      }
+   }
+   return true;
+}
+
+static void main_loop(void) {
+   tpmcmd_t* tpmcmd = NULL;
+   domid_t domid;		/* Domid of frontend */
+   unsigned int handle;	/* handle of frontend */
+   int res = -1;
+
+   info("VTPM Initializing\n");
+
+   /* Set required tpm config args */
+   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
+   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
+   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
+
+   /* Initialize the emulator */
+   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
+
+   /* Initialize any requested PCRs with hardware TPM values */
+   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
+      error("Failed to initialize PCRs with hardware TPM values");
+      goto abort_postpcrs;
+   }
+
+   /* Wait for the frontend domain to connect */
+   info("Waiting for frontend domain to connect..");
+   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
+      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
+   } else {
+      error("Unable to attach to a frontend");
+   }
+
+   tpmcmd = tpmback_req(domid, handle);
+   while(tpmcmd) {
+      /* Handle the request */
+      if(tpmcmd->req_len) {
+	 tpmcmd->resp = NULL;
+	 tpmcmd->resp_len = 0;
+
+         /* First check for disabled ordinals */
+         if(!check_ordinal(tpmcmd)) {
+            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
+         }
+         /* If not disabled, do the command */
+         else {
+            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
+               error("tpm_handle_command() failed");
+               create_error_response(tpmcmd, TPM_FAIL);
+            }
+         }
+      }
+
+      /* Send the response */
+      tpmback_resp(tpmcmd);
+
+      /* Wait for the next request */
+      tpmcmd = tpmback_req(domid, handle);
+
+   }
+
+abort_postpcrs:
+   info("VTPM Shutting down\n");
+
+   tpm_emulator_shutdown();
+}
+
+int parse_cmd_line(int argc, char** argv)
+{
+   char sval[25];
+   char* logstr = "info";	/* matches the default loglevel */
+   /* Parse the command strings */
+   for(unsigned int i = 1; i < argc; ++i) {
+      if (sscanf(argv[i], "loglevel=%24s", sval) == 1) {
+	 if (!strcmp(sval, "debug")) {
+	    opt_args.loglevel = TPM_LOG_DEBUG;
+	    logstr = "debug";
+	 }
+	 else if (!strcmp(sval, "info")) {
+	    logstr = "info";
+	    opt_args.loglevel = TPM_LOG_INFO;
+	 }
+	 else if (!strcmp(sval, "error")) {
+	    logstr = "error";
+	    opt_args.loglevel = TPM_LOG_ERROR;
+	 }
+      }
+      else if (!strcmp(argv[i], "clear")) {
+	 opt_args.startup = ST_CLEAR;
+      }
+      else if (!strcmp(argv[i], "save")) {
+	 opt_args.startup = ST_SAVE;
+      }
+      else if (!strcmp(argv[i], "deactivated")) {
+	 opt_args.startup = ST_DEACTIVATED;
+      }
+      else if (!strncmp(argv[i], "maintcmds=", 10)) {
+         if(!strcmp(argv[i] + 10, "1")) {
+            opt_args.enable_maint_cmds = true;
+         } else if(!strcmp(argv[i] + 10, "0")) {
+            opt_args.enable_maint_cmds = false;
+         }
+      }
+      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
+         char *pch = argv[i] + 10;
+         unsigned int v1, v2;
+         pch = strtok(pch, ",");
+         while(pch != NULL) {
+            if(!strcmp(pch, "all")) {
+               //Set all
+               opt_args.hwinitpcrs = VTPM_PCRALL;
+            } else if(!strcmp(pch, "none")) {
+               //Set none
+               opt_args.hwinitpcrs = VTPM_PCRNONE;
+            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
+               //Set range - must be tested before the single index
+               //case below, since "%u" alone would also match "3-5"
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               if(v2 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v2);
+                  return -1;
+               }
+               if(v2 < v1) {
+                  unsigned tp = v1;
+                  v1 = v2;
+                  v2 = tp;
+               }
+               for(unsigned int i = v1; i <= v2; ++i) {
+                  opt_args.hwinitpcrs |= (1 << i);
+               }
+            } else if(sscanf(pch, "%u", &v1) == 1) {
+               //Set one
+               if(v1 >= TPM_NUM_PCR) {
+                  error("hwinitpcr error: Invalid PCR index %u", v1);
+                  return -1;
+               }
+               opt_args.hwinitpcrs |= (1 << v1);
+            } else {
+               error("hwinitpcr error: Invalid PCR specification: %s", pch);
+               return -1;
+            }
+            pch = strtok(NULL, ",");
+         }
+      }
+      else {
+	 error("Invalid command line option `%s'", argv[i]);
+      }
+
+   }
+
+   /* Check Errors and print results */
+   switch(opt_args.startup) {
+      case ST_CLEAR:
+	 info("Startup mode is `clear'");
+	 break;
+      case ST_SAVE:
+	 info("Startup mode is `save'");
+	 break;
+      case ST_DEACTIVATED:
+	 info("Startup mode is `deactivated'");
+	 break;
+      default:
+	 error("Invalid startup mode %d", opt_args.startup);
+	 return -1;
+   }
+
+   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
+   {
+      char pcrstr[1024];
+      char* ptr = pcrstr;
+
+      pcrstr[0] = '\0';
+      info("The following PCRs will be initialized with values from the hardware TPM:");
+      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+         if(opt_args.hwinitpcrs & (1 << i)) {
+            ptr += sprintf(ptr, "%u, ", i);
+         }
+      }
+      /* get rid of the last comma if any numbers were printed */
+      *(ptr - 2) = '\0';
+
+      info("\t%s", pcrstr);
+   } else {
+      info("All PCRs initialized to default values");
+   }
+
+   if(!opt_args.enable_maint_cmds) {
+      info("TPM Maintenance Commands disabled");
+      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
+      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
+      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
+      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
+   } else {
+      info("TPM Maintenance Commands enabled");
+   }
+
+   info("Log level set to %s", logstr);
+
+   return 0;
+}
+
+void cleanup_opt_args(void) {
+}
+
+int main(int argc, char **argv)
+{
+   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
+   sleep(2);
+
+   /* Setup extern function pointers */
+   tpm_extern_init = vtpm_extern_init_fake;
+   tpm_extern_release = vtpm_extern_release_fake;
+   tpm_malloc = malloc;
+   tpm_free = free;
+   tpm_log = vtpm_log;
+   tpm_get_ticks = vtpm_get_ticks;
+   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
+   tpm_write_to_storage = vtpm_write_to_file;
+   tpm_read_from_storage = vtpm_read_from_file;
+
+   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
+   if(parse_cmd_line(argc, argv)) {
+      error("Error parsing commandline\n");
+      return -1;
+   }
+
+   /* Initialize devices */
+   init_tpmback();
+   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+      error("Unable to initialize tpmfront device");
+      goto abort_posttpmfront;
+   }
+
+   /* Seed the RNG with entropy from hardware TPM */
+   if(init_random()) {
+      error("Unable to initialize RNG");
+      goto abort_postrng;
+   }
+
+   /* Initialize blkfront device */
+   if(init_vtpmblk(tpmfront_dev)) {
+      error("Unable to initialize Blkfront persistent storage");
+      goto abort_postvtpmblk;
+   }
+
+   /* Run main loop */
+   main_loop();
+
+   /* Shutdown blkfront */
+   shutdown_vtpmblk();
+abort_postvtpmblk:
+abort_postrng:
+
+   /* Close devices */
+   shutdown_tpmfront(tpmfront_dev);
+abort_posttpmfront:
+   shutdown_tpmback();
+
+   cleanup_opt_args();
+
+   return 0;
+}
diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
new file mode 100644
index 0000000..5919e44
--- /dev/null
+++ b/stubdom/vtpm/vtpm.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_H
+#define VTPM_H
+
+#include <stdbool.h>
+
+/* For testing */
+#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
+#define VERS_CMD_LEN 22
+
+/* Global commandline options */
+struct Opt_args {
+   enum StartUp {
+      ST_CLEAR = 1,
+      ST_SAVE = 2,
+      ST_DEACTIVATED = 3
+   } startup;
+   unsigned long hwinitpcrs;
+   int loglevel;
+   uint32_t tpmconf;
+   bool enable_maint_cmds;
+};
+extern struct Opt_args opt_args;
+
+#endif
diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
new file mode 100644
index 0000000..7eae98b
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <types.h>
+#include <xen/xen.h>
+#include <mm.h>
+#include <gnttab.h>
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_manager.h"
+#include "vtpm_cmd.h"
+#include <tpmback.h>
+
+#define TRYFAILGOTO(C) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      goto abort_egress; \
+   }
+#define TRYFAILGOTOMSG(C, msg) \
+   if((C)) { \
+      status = TPM_FAIL; \
+      error(msg); \
+      goto abort_egress; \
+   }
+#define CHECKSTATUSGOTO(ret, fname) \
+   if((ret) != TPM_SUCCESS) { \
+      error("%s failed with error code (%lu)", fname, (unsigned long) (ret)); \
+      status = (ret); \
+      goto abort_egress; \
+   }
+
+#define ERR_MALFORMED "Malformed response from backend"
+#define ERR_TPMFRONT "Error sending command through frontend device"
+
+struct shpage {
+   void* page;
+   grant_ref_t grantref;
+};
+
+typedef struct shpage shpage_t;
+
+static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord)
+{
+   return *bptr == NULL ||
+	 tpm_marshal_UINT16(bptr, len, tag) ||
+	 tpm_marshal_UINT32(bptr, len, size) ||
+	 tpm_marshal_UINT32(bptr, len, ord);
+}
+
+static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord)
+{
+   return *bptr == NULL ||
+	 tpm_unmarshal_UINT16(bptr, len, tag) ||
+	 tpm_unmarshal_UINT32(bptr, len, size) ||
+	 tpm_unmarshal_UINT32(bptr, len, ord);
+}
+
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode)
+{
+   TPM_TAG tag;
+   UINT32 len = tpmcmd->req_len;
+   uint8_t* respptr;
+   uint8_t* cmdptr = tpmcmd->req;
+
+   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
+      switch (tag) {
+         case TPM_TAG_RQU_COMMAND:
+            tag = TPM_TAG_RSP_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH1_COMMAND:
+            tag = TPM_TAG_RSP_AUTH1_COMMAND;
+            break;
+         case TPM_TAG_RQU_AUTH2_COMMAND:
+            tag = TPM_TAG_RSP_AUTH2_COMMAND;
+            break;
+      }
+   } else {
+      tag = TPM_TAG_RSP_COMMAND;
+   }
+
+   tpmcmd->resp_len = len = 10;
+   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
+
+   return pack_header(&respptr, &len, tag, len, errorcode);
+}
+
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Ask the real tpm for random bytes for the seed */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm command */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
+
+   /* Send cmd, wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
+      ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
+
+   // Get the number of random bytes in the response
+   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
+   *numbytes = size;
+
+   //Get the random bytes out; the tpm may give us fewer bytes than we want
+   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
+
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+
+   /* Send the command to vtpm_manager */
+   info("Requesting Encryption key from backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
+
+   /* Get the size of the key */
+   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
+
+   /* Copy the key bits */
+   *data = malloc(*data_length);
+   memcpy(*data, bptr, *data_length);
+
+   goto egress;
+abort_egress:
+   error("VTPM_LoadHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* bptr, *resp;
+   uint8_t* cmdbuf = NULL;
+   size_t resplen = 0;
+   UINT32 len;
+
+   TPM_TAG tag = VTPM_TAG_REQ;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
+
+   /*Create the command*/
+   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   memcpy(bptr, data, data_length);
+   bptr += data_length;
+
+   /* Send the command to vtpm_manager */
+   info("Sending encryption key to backend");
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   /* Unpack response header */
+   bptr = resp;
+   len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   /* Check return code */
+   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
+
+   goto egress;
+abort_egress:
+   error("VTPM_SaveHashKey failed");
+egress:
+   free(cmdbuf);
+   return status;
+}
+
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t *cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Just send a TPM_PCRRead Command to the HW tpm */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm cmd */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
+
+   /*Send Cmd wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
+
+   //Get the ptr value
+   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
+
+   goto egress;
+abort_egress:
+egress:
+   free(cmdbuf);
+   return status;
+
+}
diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
new file mode 100644
index 0000000..b0bfa22
--- /dev/null
+++ b/stubdom/vtpm/vtpm_cmd.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_CMD_H
+#define VTPM_CMD_H
+
+#include <tpmfront.h>
+#include <tpmback.h>
+#include "tpm/tpm_structures.h"
+
+/* Create a command response error header */
+int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
+/* Request random bytes from hardware tpm, returns 0 on success */
+TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
+/* Retrieve the 256-bit AES encryption key from the manager */
+TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
+/* Have the manager securely save our 256-bit AES encryption key */
+TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
+/* Send a TPM_PCRRead command through the manager to the hardware TPM */
+TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
+
+#endif
diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
new file mode 100644
index 0000000..22a6cef
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.c
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include "vtpm_pcrs.h"
+#include "vtpm_cmd.h"
+#include "tpm/tpm_data.h"
+
+#define PCR_VALUE      tpmData.permanent.data.pcrValue
+
+static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
+   if(pcrIndex >= TPM_NUM_PCR) {
+      return TPM_BADINDEX;
+   }
+   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
+   return TPM_SUCCESS;
+}
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs)
+{
+   TPM_RESULT rc = TPM_SUCCESS;
+   uint8_t digest[sizeof(TPM_PCRVALUE)];
+
+   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+      if(pcrs & 1 << i) {
+         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
+            error("TPM_PCRRead failed with error : %d", rc);
+            return rc;
+         }
+         write_pcr_direct(i, digest);
+      }
+   }
+
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
new file mode 100644
index 0000000..11835f9
--- /dev/null
+++ b/stubdom/vtpm/vtpm_pcrs.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef VTPM_PCRS_H
+#define VTPM_PCRS_H
+
+#include "tpm/tpm_structures.h"
+
+#define VTPM_PCR0 (1 << 0)
+#define VTPM_PCR1 (1 << 1)
+#define VTPM_PCR2 (1 << 2)
+#define VTPM_PCR3 (1 << 3)
+#define VTPM_PCR4 (1 << 4)
+#define VTPM_PCR5 (1 << 5)
+#define VTPM_PCR6 (1 << 6)
+#define VTPM_PCR7 (1 << 7)
+#define VTPM_PCR8 (1 << 8)
+#define VTPM_PCR9 (1 << 9)
+#define VTPM_PCR10 (1 << 10)
+#define VTPM_PCR11 (1 << 11)
+#define VTPM_PCR12 (1 << 12)
+#define VTPM_PCR13 (1 << 13)
+#define VTPM_PCR14 (1 << 14)
+#define VTPM_PCR15 (1 << 15)
+#define VTPM_PCR16 (1 << 16)
+#define VTPM_PCR17 (1 << 17)
+#define VTPM_PCR18 (1 << 18)
+#define VTPM_PCR19 (1 << 19)
+#define VTPM_PCR20 (1 << 20)
+#define VTPM_PCR21 (1 << 21)
+#define VTPM_PCR22 (1 << 22)
+#define VTPM_PCR23 (1 << 23)
+
+#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
+#define VTPM_PCRNONE 0
+
+#define VTPM_NUMPCRS 24
+
+struct tpmfront_dev;
+
+TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
+
+
+#endif
diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
new file mode 100644
index 0000000..b343bd8
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#include <mini-os/byteorder.h>
+#include "vtpmblk.h"
+#include "tpm/tpm_marshalling.h"
+#include "vtpm_cmd.h"
+#include "polarssl/aes.h"
+#include "polarssl/sha1.h"
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <fcntl.h>
+
+/*Encryption key and block sizes */
+#define BLKSZ 16
+
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
+{
+   struct blkfront_info blkinfo;
+   info("Initializing persistent NVM storage\n");
+
+   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
+      error("BLKIO: ERROR Unable to initialize blkfront");
+      return -1;
+   }
+   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
+      error("BLKIO: ERROR block device is read only!");
+      goto error;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
+      error("Unable to open blkfront file descriptor!");
+      goto error;
+   }
+
+   return 0;
+error:
+   shutdown_blkfront(blkdev);
+   blkdev = NULL;
+   return -1;
+}
+
+void shutdown_vtpmblk(void)
+{
+   close(blkfront_fd);
+   blkfront_fd = -1;
+   blkdev = NULL;
+}
+
+int write_vtpmblk_raw(uint8_t *data, size_t data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+   debug("Begin Write data=%p len=%zu", data, data_length);
+
+   lenbuf = cpu_to_be32((uint32_t)data_length);
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("write(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
+      error("write(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Wrote %zu bytes to NVM persistent storage", data_length);
+
+   return 0;
+}
+
+int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
+{
+   int rc;
+   uint32_t lenbuf;
+
+   lseek(blkfront_fd, 0, SEEK_SET);
+   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
+      error("read(length) failed! error was %s", strerror(errno));
+      return -1;
+   }
+   *data_length = (size_t) be32_to_cpu(lenbuf);
+   if(*data_length == 0) {
+      error("read 0 data_length for NVM");
+      return -1;
+   }
+
+   *data = tpm_malloc(*data_length);
+   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
+      error("read(data) failed! error was %s", strerror(errno));
+      return -1;
+   }
+
+   info("Read %zu bytes from NVM persistent storage", *data_length);
+   return 0;
+}
+
+int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   aes_context aes_ctx;
+   UINT32 temp;
+   int mod;
+
+   uint8_t* clbuf = NULL;
+
+   uint8_t* ivptr;
+   int ivlen;
+
+   uint8_t* cptr;	//Cipher block pointer
+   int clen;	//Cipher block length
+
+   /*Create a new 256 bit encryption key */
+   if(symkey == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
+
+   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
+   temp = sizeof(UINT32);
+   ivlen = BLKSZ - temp;
+   tpm_get_extern_random_bytes(iv, ivlen);
+   ivptr = iv + ivlen;
+   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
+
+   /*The clear text needs to be padded out to a multiple of BLKSZ */
+   mod = clear_len % BLKSZ;
+   clen = mod ? clear_len + BLKSZ - mod : clear_len;
+   clbuf = malloc(clen);
+   if (clbuf == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+   memcpy(clbuf, clear, clear_len);
+   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
+   if(clen - clear_len) {
+      memset(clbuf + clear_len, 0, clen - clear_len);
+   }
+
+   /* Setup the ciphertext buffer */
+   *cipher_len = BLKSZ + clen;		/*iv + ciphertext */
+   cptr = *cipher = malloc(*cipher_len);
+   if (*cipher == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Copy the IV to cipher text blob*/
+   memcpy(cptr, iv, BLKSZ);
+   cptr += BLKSZ;
+
+   /* Setup encryption */
+   aes_setkey_enc(&aes_ctx, symkey, 256);
+
+   /* Do encryption now */
+   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
+
+   goto egress;
+abort_egress:
+egress:
+   free(clbuf);
+   return rc;
+}
+int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
+{
+   int rc = 0;
+   uint8_t iv[BLKSZ];
+   uint8_t* ivptr;
+   UINT32 u32, temp;
+   aes_context aes_ctx;
+
+   uint8_t* cptr = cipher;	//cipher block pointer
+   int clen = cipher_len;	//cipher block length
+
+   /* Pull out the initialization vector */
+   memcpy(iv, cipher, BLKSZ);
+   cptr += BLKSZ;
+   clen -= BLKSZ;
+
+   /* Setup the clear text buffer */
+   if((*clear = malloc(clen)) == NULL) {
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Get the length of clear text from last 4 bytes of iv */
+   temp = sizeof(UINT32);
+   ivptr = iv + BLKSZ - temp;
+   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
+   *clear_len = u32;
+
+   /* Setup decryption */
+   aes_setkey_dec(&aes_ctx, symkey, 256);
+
+   /* Do decryption now */
+   if ((clen % BLKSZ) != 0) {
+      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
+      rc = -1;
+      goto abort_egress;
+   }
+   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
+
+   goto egress;
+abort_egress:
+egress:
+   return rc;
+}
+
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   uint8_t hashkey[HASHKEYSZ];
+   uint8_t* symkey = hashkey + HASHSZ;
+
+   /* Encrypt the data */
+   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
+      goto abort_egress;
+   }
+   /* Write to disk */
+   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
+      goto abort_egress;
+   }
+   /* Get sha1 hash of data */
+   sha1(cipher, cipher_len, hashkey);
+
+   /* Send hash and key to manager */
+   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   return rc;
+}
+
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
+   int rc;
+   uint8_t* cipher = NULL;
+   size_t cipher_len = 0;
+   size_t keysize;
+   uint8_t* hashkey = NULL;
+   uint8_t hash[HASHSZ];
+   uint8_t* symkey;
+
+   /* Retrieve the hash and the key from the manager */
+   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
+      goto abort_egress;
+   }
+   if(keysize != HASHKEYSZ) {
+      error("Manager returned a hashkey of invalid size! expected %d, actual %zu", HASHKEYSZ, keysize);
+      rc = -1;
+      goto abort_egress;
+   }
+   symkey = hashkey + HASHSZ;
+
+   /* Read from disk now */
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
+      goto abort_egress;
+   }
+
+   /* Compute the hash of the cipher text and compare */
+   sha1(cipher, cipher_len, hash);
+   if(memcmp(hash, hashkey, HASHSZ)) {
+      int i;
+      error("NVM Storage Checksum failed!");
+      printf("Expected: ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hashkey[i]);
+      }
+      printf("\n");
+      printf("Actual:   ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hash[i]);
+      }
+      printf("\n");
+      rc = -1;
+      goto abort_egress;
+   }
+
+   /* Decrypt the blob */
+   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
+      goto abort_egress;
+   }
+   goto egress;
+abort_egress:
+egress:
+   free(cipher);
+   free(hashkey);
+   return rc;
+}
diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
new file mode 100644
index 0000000..282ce6a
--- /dev/null
+++ b/stubdom/vtpm/vtpmblk.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+#ifndef NVM_H
+#define NVM_H
+#include <mini-os/types.h>
+#include <xen/xen.h>
+#include <tpmfront.h>
+
+#define NVMKEYSZ 32
+#define HASHSZ 20
+#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
+
+int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
+void shutdown_vtpmblk(void);
+
+/* Encrypts and writes data to blk device */
+int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
+/* Reads, decrypts, and returns data from blk device */
+int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg31-0004wv-Qf; Thu, 06 Dec 2012 18:19:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg31-0004w6-4P
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:47 +0000
Received: from [85.158.139.83:54098] by server-14.bemta-5.messagelabs.com id
	97/43-21768-2C1E0C05; Thu, 06 Dec 2012 18:19:46 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354817983!26025948!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13517 invoked from network); 6 Dec 2012 18:19:45 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 18:19:45 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01e2_a7b931be_77b9_4a4b_aebc_83edef72c4e9;
	Thu, 06 Dec 2012 13:19:35 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:20 -0500
Message-Id: <1354817961-22196-7-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 7/8] Add a top level configure script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                         |    1 +
 tools/config.guess => config.guess |    0
 tools/config.sub => config.sub     |    0
 configure.ac                       |   12 ++++++++++++
 tools/configure.ac                 |    4 ++--
 tools/install.sh                   |    1 -
 6 files changed, 15 insertions(+), 3 deletions(-)
 rename tools/config.guess => config.guess (100%)
 rename tools/config.sub => config.sub (100%)
 create mode 100644 configure.ac
 mode change 100755 => 100644 install.sh
 delete mode 100644 tools/install.sh

diff --git a/autogen.sh b/autogen.sh
index ada482c..1456d94 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -1,4 +1,5 @@
 #!/bin/sh -e
+autoconf
 cd tools
 autoconf
 autoheader
diff --git a/tools/config.guess b/config.guess
similarity index 100%
rename from tools/config.guess
rename to config.guess
diff --git a/tools/config.sub b/config.sub
similarity index 100%
rename from tools/config.sub
rename to config.sub
diff --git a/configure.ac b/configure.ac
new file mode 100644
index 0000000..5dacb46
--- /dev/null
+++ b/configure.ac
@@ -0,0 +1,12 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([./xen/common/kernel.c])
+AC_PREFIX_DEFAULT([/usr])
+
+AC_CONFIG_SUBDIRS([tools stubdom])
+
+AC_OUTPUT()
diff --git a/install.sh b/install.sh
old mode 100755
new mode 100644
diff --git a/tools/configure.ac b/tools/configure.ac
index 971e3e9..2bd71b6 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -2,13 +2,13 @@
 # Process this file with autoconf to produce a configure script.
 
 AC_PREREQ([2.67])
-AC_INIT([Xen Hypervisor], m4_esyscmd([../version.sh ../xen/Makefile]),
+AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Makefile]),
     [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
 AC_CONFIG_SRCDIR([libxl/libxl.c])
 AC_CONFIG_FILES([../config/Tools.mk])
 AC_CONFIG_HEADERS([config.h])
 AC_PREFIX_DEFAULT([/usr])
-AC_CONFIG_AUX_DIR([.])
+AC_CONFIG_AUX_DIR([../])
 
 # Check if CFLAGS, LDFLAGS, LIBS, CPPFLAGS or CPP is set and print a warning
 
diff --git a/tools/install.sh b/tools/install.sh
deleted file mode 100644
index 3f44f99..0000000
--- a/tools/install.sh
+++ /dev/null
@@ -1 +0,0 @@
-../install.sh
\ No newline at end of file
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg30-0004w9-P8; Thu, 06 Dec 2012 18:19:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2z-0004vG-8k
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:45 +0000
Received: from [85.158.137.99:23172] by server-13.bemta-3.messagelabs.com id
	E7/30-24887-0C1E0C05; Thu, 06 Dec 2012 18:19:44 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354817980!12753082!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18258 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01d4_19a4f8b2_0680_4a71_953b_6d367b1118f3;
	Thu, 06 Dec 2012 13:19:35 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:19 -0500
Message-Id: <1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                             |    2 +
 config/Stubdom.mk.in                   |   45 ++++++++++++++++
 {tools/m4 => m4}/curses.m4             |    0
 m4/depends.m4                          |   15 ++++++
 {tools/m4 => m4}/extfs.m4              |    0
 {tools/m4 => m4}/features.m4           |    0
 {tools/m4 => m4}/fetcher.m4            |    0
 {tools/m4 => m4}/ocaml.m4              |    0
 {tools/m4 => m4}/path_or_fail.m4       |    0
 {tools/m4 => m4}/pkg.m4                |    0
 {tools/m4 => m4}/pthread.m4            |    0
 {tools/m4 => m4}/ptyfuncs.m4           |    0
 {tools/m4 => m4}/python_devel.m4       |    0
 {tools/m4 => m4}/python_version.m4     |    0
 {tools/m4 => m4}/savevar.m4            |    0
 {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
 m4/stubdom.m4                          |   89 ++++++++++++++++++++++++++++++++
 {tools/m4 => m4}/uuid.m4               |    0
 stubdom/Makefile                       |   55 +++++---------------
 stubdom/configure.ac                   |   58 +++++++++++++++++++++
 tools/configure.ac                     |   28 +++++-----
 21 files changed, 236 insertions(+), 56 deletions(-)
 create mode 100644 config/Stubdom.mk.in
 rename {tools/m4 => m4}/curses.m4 (100%)
 create mode 100644 m4/depends.m4
 rename {tools/m4 => m4}/extfs.m4 (100%)
 rename {tools/m4 => m4}/features.m4 (100%)
 rename {tools/m4 => m4}/fetcher.m4 (100%)
 rename {tools/m4 => m4}/ocaml.m4 (100%)
 rename {tools/m4 => m4}/path_or_fail.m4 (100%)
 rename {tools/m4 => m4}/pkg.m4 (100%)
 rename {tools/m4 => m4}/pthread.m4 (100%)
 rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
 rename {tools/m4 => m4}/python_devel.m4 (100%)
 rename {tools/m4 => m4}/python_version.m4 (100%)
 rename {tools/m4 => m4}/savevar.m4 (100%)
 rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
 create mode 100644 m4/stubdom.m4
 rename {tools/m4 => m4}/uuid.m4 (100%)
 create mode 100644 stubdom/configure.ac

diff --git a/autogen.sh b/autogen.sh
index 58a71ce..ada482c 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -2,3 +2,5 @@
 cd tools
 autoconf
 autoheader
+cd ../stubdom
+autoconf
diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
new file mode 100644
index 0000000..432efd7
--- /dev/null
+++ b/config/Stubdom.mk.in
@@ -0,0 +1,45 @@
+# Prefix and install folder
+prefix              := @prefix@
+PREFIX              := $(prefix)
+exec_prefix         := @exec_prefix@
+libdir              := @libdir@
+LIBDIR              := $(libdir)
+
+# Path Programs
+CMAKE               := @CMAKE@
+WGET                := @WGET@ -c
+
+# A debug build of stubdom? FIXME: someone should make this do something
+debug               := @debug@
+vtpm                := @vtpm@
+
+STUBDOM_TARGETS     := @STUBDOM_TARGETS@
+STUBDOM_BUILD       := @STUBDOM_BUILD@
+STUBDOM_INSTALL     := @STUBDOM_INSTALL@
+
+ZLIB_VERSION        := @ZLIB_VERSION@
+ZLIB_URL            := @ZLIB_URL@
+
+LIBPCI_VERSION      := @LIBPCI_VERSION@
+LIBPCI_URL          := @LIBPCI_URL@
+
+NEWLIB_VERSION      := @NEWLIB_VERSION@
+NEWLIB_URL          := @NEWLIB_URL@
+
+LWIP_VERSION        := @LWIP_VERSION@
+LWIP_URL            := @LWIP_URL@
+
+GRUB_VERSION        := @GRUB_VERSION@
+GRUB_URL            := @GRUB_URL@
+
+OCAML_VERSION       := @OCAML_VERSION@
+OCAML_URL           := @OCAML_URL@
+
+GMP_VERSION         := @GMP_VERSION@
+GMP_URL             := @GMP_URL@
+
+POLARSSL_VERSION    := @POLARSSL_VERSION@
+POLARSSL_URL        := @POLARSSL_URL@
+
+TPMEMU_VERSION      := @TPMEMU_VERSION@
+TPMEMU_URL          := @TPMEMU_URL@
diff --git a/tools/m4/curses.m4 b/m4/curses.m4
similarity index 100%
rename from tools/m4/curses.m4
rename to m4/curses.m4
diff --git a/m4/depends.m4 b/m4/depends.m4
new file mode 100644
index 0000000..916e665
--- /dev/null
+++ b/m4/depends.m4
@@ -0,0 +1,15 @@
+
+AC_DEFUN([AX_DEPENDS_PATH_PROG], [
+AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
+AS_IF([test "x$$1" = "xn"], [
+$2="/$3-disabled-in-configure-script"
+], [
+AC_PATH_PROG([$2], [$3], [no])
+AS_IF([test x"${$2}" = "xno"], [
+$1=n
+$2="/$3-disabled-in-configure-script"
+])
+])
+])
+AC_SUBST($2)
+])
diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
similarity index 100%
rename from tools/m4/extfs.m4
rename to m4/extfs.m4
diff --git a/tools/m4/features.m4 b/m4/features.m4
similarity index 100%
rename from tools/m4/features.m4
rename to m4/features.m4
diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
similarity index 100%
rename from tools/m4/fetcher.m4
rename to m4/fetcher.m4
diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
similarity index 100%
rename from tools/m4/ocaml.m4
rename to m4/ocaml.m4
diff --git a/tools/m4/path_or_fail.m4 b/m4/path_or_fail.m4
similarity index 100%
rename from tools/m4/path_or_fail.m4
rename to m4/path_or_fail.m4
diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
similarity index 100%
rename from tools/m4/pkg.m4
rename to m4/pkg.m4
diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
similarity index 100%
rename from tools/m4/pthread.m4
rename to m4/pthread.m4
diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
similarity index 100%
rename from tools/m4/ptyfuncs.m4
rename to m4/ptyfuncs.m4
diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
similarity index 100%
rename from tools/m4/python_devel.m4
rename to m4/python_devel.m4
diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
similarity index 100%
rename from tools/m4/python_version.m4
rename to m4/python_version.m4
diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
similarity index 100%
rename from tools/m4/savevar.m4
rename to m4/savevar.m4
diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
similarity index 100%
rename from tools/m4/set_cflags_ldflags.m4
rename to m4/set_cflags_ldflags.m4
diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
new file mode 100644
index 0000000..0bf0d2c
--- /dev/null
+++ b/m4/stubdom.m4
@@ -0,0 +1,89 @@
+AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
+AX_STUBDOM_INTERNAL([$1], [$2])
+],[
+AX_ENABLE_STUBDOM([$1], [$2])
+])
+AC_SUBST([$2])
+])
+
+AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
+AX_STUBDOM_INTERNAL([$1], [$2])
+],[
+AX_DISABLE_STUBDOM([$1], [$2])
+])
+AC_SUBST([$2])
+])
+
+AC_DEFUN([AX_STUBDOM_CONDITIONAL], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Build and install $1]),[
+AX_STUBDOM_INTERNAL([$1], [$2])
+])
+])
+
+AC_DEFUN([AX_STUBDOM_CONDITIONAL_FINISH], [
+AS_IF([test "x$$2" = "xy" || test "x$$2" = "x"], [
+AX_ENABLE_STUBDOM([$1],[$2])
+],[
+AX_DISABLE_STUBDOM([$1],[$2])
+])
+AC_SUBST([$2])
+])
+
+AC_DEFUN([AX_ENABLE_STUBDOM], [
+$2=y
+STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
+STUBDOM_BUILD="$STUBDOM_BUILD $1"
+STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
+])
+
+AC_DEFUN([AX_DISABLE_STUBDOM], [
+$2=n
+])
+
+dnl Don't call this outside of this file
+AC_DEFUN([AX_STUBDOM_INTERNAL], [
+AS_IF([test "x$enableval" = "xyes"], [
+AX_ENABLE_STUBDOM([$1], [$2])
+],[
+AS_IF([test "x$enableval" = "xno"],[
+AX_DISABLE_STUBDOM([$1], [$2])
+])
+])
+])
+
+AC_DEFUN([AX_STUBDOM_FINISH], [
+AC_SUBST(STUBDOM_TARGETS)
+AC_SUBST(STUBDOM_BUILD)
+AC_SUBST(STUBDOM_INSTALL)
+echo "Will build the following stub domains:"
+for x in $STUBDOM_BUILD; do
+	echo "  $x"
+done
+])
+
+AC_DEFUN([AX_STUBDOM_LIB], [
+AC_ARG_VAR([$1_URL], [Download url for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	AS_IF([test "x$extfiles" = "xy"],
+		[$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
+		[$1_URL="$4"])
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
+
+AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
+AC_ARG_VAR([$1_URL], [Download url for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	$1_URL="$4"
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
similarity index 100%
rename from tools/m4/uuid.m4
rename to m4/uuid.m4
diff --git a/stubdom/Makefile b/stubdom/Makefile
index fc70d88..709b71e 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -6,44 +6,7 @@ export XEN_OS=MiniOS
 export stubdom=y
 export debug=y
 include $(XEN_ROOT)/Config.mk
-
-#ZLIB_URL?=http://www.zlib.net
-ZLIB_URL=$(XEN_EXTFILES_URL)
-ZLIB_VERSION=1.2.3
-
-#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
-LIBPCI_URL?=$(XEN_EXTFILES_URL)
-LIBPCI_VERSION=2.2.9
-
-#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
-NEWLIB_URL?=$(XEN_EXTFILES_URL)
-NEWLIB_VERSION=1.16.0
-
-#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
-LWIP_URL?=$(XEN_EXTFILES_URL)
-LWIP_VERSION=1.3.0
-
-#GRUB_URL?=http://alpha.gnu.org/gnu/grub
-GRUB_URL?=$(XEN_EXTFILES_URL)
-GRUB_VERSION=0.97
-
-#OCAML_URL?=$(XEN_EXTFILES_URL)
-OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
-OCAML_VERSION=3.11.0
-
-GMP_VERSION=4.3.2
-GMP_URL?=$(XEN_EXTFILES_URL)
-#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
-
-POLARSSL_VERSION=1.1.4
-POLARSSL_URL?=$(XEN_EXTFILES_URL)
-#POLARSSL_URL?=http://polarssl.org/code/releases
-
-TPMEMU_VERSION=0.7.4
-TPMEMU_URL?=$(XEN_EXTFILES_URL)
-#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
-
-WGET=wget -c
+-include $(XEN_ROOT)/config/Stubdom.mk
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
 ifeq ($(XEN_TARGET_ARCH),x86_32)
@@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
+TARGETS=$(STUBDOM_TARGETS)
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
+build: genpath $(STUBDOM_BUILD)
 else
 build: genpath
 endif
@@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	mkdir $@/build
-	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
 
 TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
@@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
+install: genpath install-readme $(STUBDOM_INSTALL)
 else
 install: genpath
 endif
@@ -503,6 +466,8 @@ install-grub: pv-grub
 	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-grub/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/pv-grub-$(XEN_TARGET_ARCH).gz"
 
+install-caml: caml-stubdom
+
 install-xenstore: xenstore-stubdom
 	$(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
@@ -581,3 +546,9 @@ downloadclean: patchclean
 
 .PHONY: distclean
 distclean: downloadclean
+	-rm ../config/Stubdom.mk
+
+ifeq (,$(findstring clean,$(MAKECMDGOALS)))
+$(XEN_ROOT)/config/Stubdom.mk:
+	$(error You have to run ./configure before building or installing stubdom)
+endif
diff --git a/stubdom/configure.ac b/stubdom/configure.ac
new file mode 100644
index 0000000..db44d4a
--- /dev/null
+++ b/stubdom/configure.ac
@@ -0,0 +1,58 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
+AC_CONFIG_FILES([../config/Stubdom.mk])
+AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_AUX_DIR([../])
+
+# M4 Macro includes
+m4_include([../m4/stubdom.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+m4_include([../m4/depends.m4])
+
+# Enable/disable stub domains
+AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
+AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
+AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
+AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
+AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
+AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
+AX_STUBDOM_CONDITIONAL([vtpmmgrdom], [vtpmmgr])
+
+AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
+AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
+
+AC_ARG_VAR([CMAKE], [Path to the cmake program])
+AC_ARG_VAR([WGET], [Path to wget program])
+
+# Checks for programs.
+AC_PROG_CC
+AC_PROG_MAKE_SET
+AC_PROG_INSTALL
+AX_PATH_PROG_OR_FAIL([WGET], [wget])
+
+# Checks for programs that depend on a feature
+AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake])
+
+# Stubdom libraries version and url setup
+AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
+AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
+AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
+AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
+AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
+AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
+AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
+AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
+AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
+
+# Conditionally enable these stubdoms based on the presence of dependencies
+AX_STUBDOM_CONDITIONAL_FINISH([vtpm-stubdom], [vtpm])
+AX_STUBDOM_CONDITIONAL_FINISH([vtpmmgrdom], [vtpmmgr])
+
+AX_STUBDOM_FINISH
+AC_OUTPUT()
diff --git a/tools/configure.ac b/tools/configure.ac
index 586313d..971e3e9 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
 AC_CANONICAL_HOST
 
 # M4 Macro includes
-m4_include([m4/savevar.m4])
-m4_include([m4/features.m4])
-m4_include([m4/path_or_fail.m4])
-m4_include([m4/python_version.m4])
-m4_include([m4/python_devel.m4])
-m4_include([m4/ocaml.m4])
-m4_include([m4/set_cflags_ldflags.m4])
-m4_include([m4/uuid.m4])
-m4_include([m4/pkg.m4])
-m4_include([m4/curses.m4])
-m4_include([m4/pthread.m4])
-m4_include([m4/ptyfuncs.m4])
-m4_include([m4/extfs.m4])
-m4_include([m4/fetcher.m4])
+m4_include([../m4/savevar.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+m4_include([../m4/python_version.m4])
+m4_include([../m4/python_devel.m4])
+m4_include([../m4/ocaml.m4])
+m4_include([../m4/set_cflags_ldflags.m4])
+m4_include([../m4/uuid.m4])
+m4_include([../m4/pkg.m4])
+m4_include([../m4/curses.m4])
+m4_include([../m4/pthread.m4])
+m4_include([../m4/ptyfuncs.m4])
+m4_include([../m4/extfs.m4])
+m4_include([../m4/fetcher.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg30-0004w9-P8; Thu, 06 Dec 2012 18:19:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2z-0004vG-8k
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:45 +0000
Received: from [85.158.137.99:23172] by server-13.bemta-3.messagelabs.com id
	E7/30-24887-0C1E0C05; Thu, 06 Dec 2012 18:19:44 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354817980!12753082!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18258 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01d4_19a4f8b2_0680_4a71_953b_6d367b1118f3;
	Thu, 06 Dec 2012 13:19:35 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:19 -0500
Message-Id: <1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please rerun autoconf after committing this patch

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 autogen.sh                             |    2 +
 config/Stubdom.mk.in                   |   45 ++++++++++++++++
 {tools/m4 => m4}/curses.m4             |    0
 m4/depends.m4                          |   15 ++++++
 {tools/m4 => m4}/extfs.m4              |    0
 {tools/m4 => m4}/features.m4           |    0
 {tools/m4 => m4}/fetcher.m4            |    0
 {tools/m4 => m4}/ocaml.m4              |    0
 {tools/m4 => m4}/path_or_fail.m4       |    0
 {tools/m4 => m4}/pkg.m4                |    0
 {tools/m4 => m4}/pthread.m4            |    0
 {tools/m4 => m4}/ptyfuncs.m4           |    0
 {tools/m4 => m4}/python_devel.m4       |    0
 {tools/m4 => m4}/python_version.m4     |    0
 {tools/m4 => m4}/savevar.m4            |    0
 {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
 m4/stubdom.m4                          |   89 ++++++++++++++++++++++++++++++++
 {tools/m4 => m4}/uuid.m4               |    0
 stubdom/Makefile                       |   55 +++++---------------
 stubdom/configure.ac                   |   58 +++++++++++++++++++++
 tools/configure.ac                     |   28 +++++-----
 21 files changed, 236 insertions(+), 56 deletions(-)
 create mode 100644 config/Stubdom.mk.in
 rename {tools/m4 => m4}/curses.m4 (100%)
 create mode 100644 m4/depends.m4
 rename {tools/m4 => m4}/extfs.m4 (100%)
 rename {tools/m4 => m4}/features.m4 (100%)
 rename {tools/m4 => m4}/fetcher.m4 (100%)
 rename {tools/m4 => m4}/ocaml.m4 (100%)
 rename {tools/m4 => m4}/path_or_fail.m4 (100%)
 rename {tools/m4 => m4}/pkg.m4 (100%)
 rename {tools/m4 => m4}/pthread.m4 (100%)
 rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
 rename {tools/m4 => m4}/python_devel.m4 (100%)
 rename {tools/m4 => m4}/python_version.m4 (100%)
 rename {tools/m4 => m4}/savevar.m4 (100%)
 rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
 create mode 100644 m4/stubdom.m4
 rename {tools/m4 => m4}/uuid.m4 (100%)
 create mode 100644 stubdom/configure.ac

diff --git a/autogen.sh b/autogen.sh
index 58a71ce..ada482c 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -2,3 +2,5 @@
 cd tools
 autoconf
 autoheader
+cd ../stubdom
+autoconf
diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
new file mode 100644
index 0000000..432efd7
--- /dev/null
+++ b/config/Stubdom.mk.in
@@ -0,0 +1,45 @@
+# Prefix and install folder
+prefix              := @prefix@
+PREFIX              := $(prefix)
+exec_prefix         := @exec_prefix@
+libdir              := @libdir@
+LIBDIR              := $(libdir)
+
+# Path Programs
+CMAKE               := @CMAKE@
+WGET                := @WGET@ -c
+
+# A debug build of stubdom? FIXME: someone make this do something
+debug               := @debug@
+vtpm                := @vtpm@
+
+STUBDOM_TARGETS     := @STUBDOM_TARGETS@
+STUBDOM_BUILD       := @STUBDOM_BUILD@
+STUBDOM_INSTALL     := @STUBDOM_INSTALL@
+
+ZLIB_VERSION        := @ZLIB_VERSION@
+ZLIB_URL            := @ZLIB_URL@
+
+LIBPCI_VERSION      := @LIBPCI_VERSION@
+LIBPCI_URL          := @LIBPCI_URL@
+
+NEWLIB_VERSION      := @NEWLIB_VERSION@
+NEWLIB_URL          := @NEWLIB_URL@
+
+LWIP_VERSION        := @LWIP_VERSION@
+LWIP_URL            := @LWIP_URL@
+
+GRUB_VERSION        := @GRUB_VERSION@
+GRUB_URL            := @GRUB_URL@
+
+OCAML_VERSION       := @OCAML_VERSION@
+OCAML_URL           := @OCAML_URL@
+
+GMP_VERSION         := @GMP_VERSION@
+GMP_URL             := @GMP_URL@
+
+POLARSSL_VERSION    := @POLARSSL_VERSION@
+POLARSSL_URL        := @POLARSSL_URL@
+
+TPMEMU_VERSION      := @TPMEMU_VERSION@
+TPMEMU_URL          := @TPMEMU_URL@
diff --git a/tools/m4/curses.m4 b/m4/curses.m4
similarity index 100%
rename from tools/m4/curses.m4
rename to m4/curses.m4
diff --git a/m4/depends.m4 b/m4/depends.m4
new file mode 100644
index 0000000..916e665
--- /dev/null
+++ b/m4/depends.m4
@@ -0,0 +1,15 @@
+
+AC_DEFUN([AX_DEPENDS_PATH_PROG], [
+AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
+AS_IF([test "x$$1" = "xn"], [
+$2="/$3-disabled-in-configure-script"
+], [
+AC_PATH_PROG([$2], [$3], [no])
+AS_IF([test x"${$2}" = "xno"], [
+$1=n
+$2="/$3-disabled-in-configure-script"
+])
+])
+])
+AC_SUBST($2)
+])
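
The three-way logic of AX_DEPENDS_PATH_PROG can be sketched in plain shell (a hand-expanded illustration of what AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake]) amounts to; `resolve_dep` and its simulated-lookup argument are illustrative, not part of the patch):

```shell
# resolve_dep FEATURE PROG FOUND
#   FEATURE: current feature setting: "y", "n", or "" (auto-detect)
#   PROG:    program to locate, e.g. cmake
#   FOUND:   simulated AC_PATH_PROG result ("" when the program is absent)
resolve_dep() {
  feature=$1; prog=$2; found=$3
  case $feature in
    y)  # feature forced on: the program is mandatory (AX_PATH_PROG_OR_FAIL)
        [ -n "$found" ] || { echo "error: $prog is required" >&2; return 1; }
        echo "$found" ;;
    n)  # feature forced off: substitute a poison path
        echo "/$prog-disabled-in-configure-script" ;;
    *)  # auto: fall back to the poison path when the program is missing
        if [ -n "$found" ]; then
          echo "$found"
        else
          echo "/$prog-disabled-in-configure-script"
        fi ;;
  esac
}
```

In the auto case the macro additionally flips the feature variable itself to `n`, which is what later lets the configure script disable a stubdom whose tool dependency is absent.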
diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
similarity index 100%
rename from tools/m4/extfs.m4
rename to m4/extfs.m4
diff --git a/tools/m4/features.m4 b/m4/features.m4
similarity index 100%
rename from tools/m4/features.m4
rename to m4/features.m4
diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
similarity index 100%
rename from tools/m4/fetcher.m4
rename to m4/fetcher.m4
diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
similarity index 100%
rename from tools/m4/ocaml.m4
rename to m4/ocaml.m4
diff --git a/tools/m4/path_or_fail.m4 b/m4/path_or_fail.m4
similarity index 100%
rename from tools/m4/path_or_fail.m4
rename to m4/path_or_fail.m4
diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
similarity index 100%
rename from tools/m4/pkg.m4
rename to m4/pkg.m4
diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
similarity index 100%
rename from tools/m4/pthread.m4
rename to m4/pthread.m4
diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
similarity index 100%
rename from tools/m4/ptyfuncs.m4
rename to m4/ptyfuncs.m4
diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
similarity index 100%
rename from tools/m4/python_devel.m4
rename to m4/python_devel.m4
diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
similarity index 100%
rename from tools/m4/python_version.m4
rename to m4/python_version.m4
diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
similarity index 100%
rename from tools/m4/savevar.m4
rename to m4/savevar.m4
diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
similarity index 100%
rename from tools/m4/set_cflags_ldflags.m4
rename to m4/set_cflags_ldflags.m4
diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
new file mode 100644
index 0000000..0bf0d2c
--- /dev/null
+++ b/m4/stubdom.m4
@@ -0,0 +1,89 @@
+AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
+AX_STUBDOM_INTERNAL([$1], [$2])
+],[
+AX_ENABLE_STUBDOM([$1], [$2])
+])
+AC_SUBST([$2])
+])
+
+AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
+AX_STUBDOM_INTERNAL([$1], [$2])
+],[
+AX_DISABLE_STUBDOM([$1], [$2])
+])
+AC_SUBST([$2])
+])
+
+AC_DEFUN([AX_STUBDOM_CONDITIONAL], [
+AC_ARG_ENABLE([$1],
+AS_HELP_STRING([--enable-$1], [Build and install $1]),[
+AX_STUBDOM_INTERNAL([$1], [$2])
+])
+])
+
+AC_DEFUN([AX_STUBDOM_CONDITIONAL_FINISH], [
+AS_IF([test "x$$2" = "xy" || test "x$$2" = "x"], [
+AX_ENABLE_STUBDOM([$1],[$2])
+],[
+AX_DISABLE_STUBDOM([$1],[$2])
+])
+AC_SUBST([$2])
+])
+
+AC_DEFUN([AX_ENABLE_STUBDOM], [
+$2=y
+STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
+STUBDOM_BUILD="$STUBDOM_BUILD $1"
+STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
+])
+
+AC_DEFUN([AX_DISABLE_STUBDOM], [
+$2=n
+])
+
+dnl Don't call this outside of this file
+AC_DEFUN([AX_STUBDOM_INTERNAL], [
+AS_IF([test "x$enableval" = "xyes"], [
+AX_ENABLE_STUBDOM([$1], [$2])
+],[
+AS_IF([test "x$enableval" = "xno"],[
+AX_DISABLE_STUBDOM([$1], [$2])
+])
+])
+])
+
+AC_DEFUN([AX_STUBDOM_FINISH], [
+AC_SUBST(STUBDOM_TARGETS)
+AC_SUBST(STUBDOM_BUILD)
+AC_SUBST(STUBDOM_INSTALL)
+echo "Will build the following stub domains:"
+for x in $STUBDOM_BUILD; do
+	echo "  $x"
+done
+])
+
+AC_DEFUN([AX_STUBDOM_LIB], [
+AC_ARG_VAR([$1_URL], [Download URL for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	AS_IF([test "x$extfiles" = "xy"],
+		[$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
+		[$1_URL="$4"])
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
+
+AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
+AC_ARG_VAR([$1_URL], [Download URL for $2])
+AS_IF([test "x$$1_URL" = "x"], [
+	$1_URL="$4"
+	])
+$1_VERSION="$3"
+AC_SUBST($1_URL)
+AC_SUBST($1_VERSION)
+])
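
The enable/disable helpers above accumulate three parallel lists that stubdom/Makefile later consumes as TARGETS, build prerequisites, and install targets. A shell sketch of the accumulation (`enable_stubdom` is an illustrative stand-in for AX_ENABLE_STUBDOM):

```shell
STUBDOM_TARGETS=""
STUBDOM_BUILD=""
STUBDOM_INSTALL=""

# enable_stubdom BUILDNAME SHORTNAME
# BUILDNAME is the mini-os build target (e.g. ioemu-stubdom),
# SHORTNAME the stubdom component name (e.g. ioemu).
enable_stubdom() {
  STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
  STUBDOM_BUILD="$STUBDOM_BUILD $1"
  STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
}

enable_stubdom ioemu-stubdom ioemu
enable_stubdom pv-grub grub
enable_stubdom xenstore-stubdom xenstore
```

After these three calls the Makefile sees the enabled components in TARGETS, `build` depends on `ioemu-stubdom pv-grub xenstore-stubdom`, and `install` pulls in `install-ioemu install-grub install-xenstore`.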
diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
similarity index 100%
rename from tools/m4/uuid.m4
rename to m4/uuid.m4
diff --git a/stubdom/Makefile b/stubdom/Makefile
index fc70d88..709b71e 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -6,44 +6,7 @@ export XEN_OS=MiniOS
 export stubdom=y
 export debug=y
 include $(XEN_ROOT)/Config.mk
-
-#ZLIB_URL?=http://www.zlib.net
-ZLIB_URL=$(XEN_EXTFILES_URL)
-ZLIB_VERSION=1.2.3
-
-#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
-LIBPCI_URL?=$(XEN_EXTFILES_URL)
-LIBPCI_VERSION=2.2.9
-
-#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
-NEWLIB_URL?=$(XEN_EXTFILES_URL)
-NEWLIB_VERSION=1.16.0
-
-#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
-LWIP_URL?=$(XEN_EXTFILES_URL)
-LWIP_VERSION=1.3.0
-
-#GRUB_URL?=http://alpha.gnu.org/gnu/grub
-GRUB_URL?=$(XEN_EXTFILES_URL)
-GRUB_VERSION=0.97
-
-#OCAML_URL?=$(XEN_EXTFILES_URL)
-OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
-OCAML_VERSION=3.11.0
-
-GMP_VERSION=4.3.2
-GMP_URL?=$(XEN_EXTFILES_URL)
-#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
-
-POLARSSL_VERSION=1.1.4
-POLARSSL_URL?=$(XEN_EXTFILES_URL)
-#POLARSSL_URL?=http://polarssl.org/code/releases
-
-TPMEMU_VERSION=0.7.4
-TPMEMU_URL?=$(XEN_EXTFILES_URL)
-#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
-
-WGET=wget -c
+-include $(XEN_ROOT)/config/Stubdom.mk
 
 GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
 ifeq ($(XEN_TARGET_ARCH),x86_32)
@@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
+TARGETS=$(STUBDOM_TARGETS)
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
+build: genpath $(STUBDOM_BUILD)
 else
 build: genpath
 endif
@@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	mkdir $@/build
-	cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
+	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
 
 TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
@@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
+install: genpath install-readme $(STUBDOM_INSTALL)
 else
 install: genpath
 endif
@@ -503,6 +466,8 @@ install-grub: pv-grub
 	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-grub/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/pv-grub-$(XEN_TARGET_ARCH).gz"
 
+install-caml: caml-stubdom
+
 install-xenstore: xenstore-stubdom
 	$(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
 	$(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
@@ -581,3 +546,9 @@ downloadclean: patchclean
 
 .PHONY: distclean
 distclean: downloadclean
+	-rm ../config/Stubdom.mk
+
+ifeq (,$(findstring clean,$(MAKECMDGOALS)))
+$(XEN_ROOT)/config/Stubdom.mk:
+	$(error You have to run ./configure before building or installing stubdom)
+endif
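
The configure guard added at the bottom of the Makefile follows a common GNU make pattern: `-include` the generated fragment, and give the fragment an $(error) rule that fires for any goal except *clean* ones. A minimal reproduction (file names shortened; not the patch itself):

```shell
tmpdir=$(mktemp -d)
# One-line rules ("target: ; recipe") avoid the need for literal tabs here.
cat > "$tmpdir/Makefile" <<'EOF'
-include Stubdom.mk
all: ; @echo built
clean: ; @echo cleaned
ifeq (,$(findstring clean,$(MAKECMDGOALS)))
Stubdom.mk: ; $(error You have to run ./configure before building or installing stubdom)
endif
EOF
```

`make all` now fails until Stubdom.mk exists (make tries to remake the missing included file, and expanding the recipe hits $(error)), while `make clean` still works because the guard rule is compiled out whenever a goal contains "clean".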
diff --git a/stubdom/configure.ac b/stubdom/configure.ac
new file mode 100644
index 0000000..db44d4a
--- /dev/null
+++ b/stubdom/configure.ac
@@ -0,0 +1,58 @@
+#                                               -*- Autoconf -*-
+# Process this file with autoconf to produce a configure script.
+
+AC_PREREQ([2.67])
+AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
+    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
+AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
+AC_CONFIG_FILES([../config/Stubdom.mk])
+AC_PREFIX_DEFAULT([/usr])
+AC_CONFIG_AUX_DIR([../])
+
+# M4 Macro includes
+m4_include([../m4/stubdom.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+m4_include([../m4/depends.m4])
+
+# Enable/disable stub domains
+AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
+AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
+AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
+AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
+AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
+AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
+AX_STUBDOM_CONDITIONAL([vtpmmgrdom], [vtpmmgr])
+
+AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
+AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
+
+AC_ARG_VAR([CMAKE], [Path to the cmake program])
+AC_ARG_VAR([WGET], [Path to wget program])
+
+# Checks for programs.
+AC_PROG_CC
+AC_PROG_MAKE_SET
+AC_PROG_INSTALL
+AX_PATH_PROG_OR_FAIL([WGET], [wget])
+
+# Checks for programs that depend on a feature
+AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake])
+
+# Stubdom libraries version and url setup
+AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
+AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
+AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
+AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
+AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
+AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
+AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
+AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
+AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
+
+# Conditionally enable these stubdoms based on the presence of their dependencies
+AX_STUBDOM_CONDITIONAL_FINISH([vtpm-stubdom], [vtpm])
+AX_STUBDOM_CONDITIONAL_FINISH([vtpmmgrdom], [vtpmmgr])
+
+AX_STUBDOM_FINISH
+AC_OUTPUT()
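
Each AX_STUBDOM_LIB call above resolves its download URL with the same precedence: an explicitly set <LIB>_URL wins, then the Xen extfiles mirror when --enable-extfiles is in effect (the default), then the upstream site given as the fourth argument. In shell terms (`default_url` and the mirror URL value are illustrative; the real value comes from XEN_EXTFILES_URL in Config.mk):

```shell
# default_url USER_URL EXTFILES UPSTREAM
#   USER_URL: value of e.g. ZLIB_URL from the environment ("" if unset)
#   EXTFILES: "y" when --enable-extfiles (the default), "n" otherwise
#   UPSTREAM: project download site, the macro's fourth argument
default_url() {
  user_url=$1; extfiles=$2; upstream=$3
  # Stand-in for $(XEN_EXTFILES_URL); illustrative value only.
  xen_extfiles_url="http://xenbits.xen.org/xen-extfiles"
  if [ -n "$user_url" ]; then
    echo "$user_url"
  elif [ "$extfiles" = "y" ]; then
    echo "$xen_extfiles_url"
  else
    echo "$upstream"
  fi
}
```

AX_STUBDOM_LIB_NOEXT (used for OCAML above) skips the extfiles branch and always defaults to the upstream URL.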
diff --git a/tools/configure.ac b/tools/configure.ac
index 586313d..971e3e9 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
 AC_CANONICAL_HOST
 
 # M4 Macro includes
-m4_include([m4/savevar.m4])
-m4_include([m4/features.m4])
-m4_include([m4/path_or_fail.m4])
-m4_include([m4/python_version.m4])
-m4_include([m4/python_devel.m4])
-m4_include([m4/ocaml.m4])
-m4_include([m4/set_cflags_ldflags.m4])
-m4_include([m4/uuid.m4])
-m4_include([m4/pkg.m4])
-m4_include([m4/curses.m4])
-m4_include([m4/pthread.m4])
-m4_include([m4/ptyfuncs.m4])
-m4_include([m4/extfs.m4])
-m4_include([m4/fetcher.m4])
+m4_include([../m4/savevar.m4])
+m4_include([../m4/features.m4])
+m4_include([../m4/path_or_fail.m4])
+m4_include([../m4/python_version.m4])
+m4_include([../m4/python_devel.m4])
+m4_include([../m4/ocaml.m4])
+m4_include([../m4/set_cflags_ldflags.m4])
+m4_include([../m4/uuid.m4])
+m4_include([../m4/pkg.m4])
+m4_include([../m4/curses.m4])
+m4_include([../m4/pthread.m4])
+m4_include([../m4/ptyfuncs.m4])
+m4_include([../m4/extfs.m4])
+m4_include([../m4/fetcher.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg37-0004z1-8t; Thu, 06 Dec 2012 18:19:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg35-0004yN-C7
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:52 +0000
Received: from [85.158.143.35:48695] by server-3.bemta-4.messagelabs.com id
	A1/D4-18211-6C1E0C05; Thu, 06 Dec 2012 18:19:50 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-4.tower-21.messagelabs.com!1354817980!5468848!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24121 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_019c_9eef5c0e_ea40_4137_a244_aca487265dd9;
	Thu, 06 Dec 2012 13:19:34 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:15 -0500
Message-Id: <1354817961-22196-2-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 2/8] add stubdom/vtpmmgr code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the code base for vtpmmgrdom. The Makefile changes
follow in the next patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpmmgr/Makefile           |   32 ++
 stubdom/vtpmmgr/init.c             |  553 +++++++++++++++++++++
 stubdom/vtpmmgr/log.c              |  151 ++++++
 stubdom/vtpmmgr/log.h              |   85 ++++
 stubdom/vtpmmgr/marshal.h          |  528 ++++++++++++++++++++
 stubdom/vtpmmgr/minios.cfg         |   14 +
 stubdom/vtpmmgr/tcg.h              |  707 +++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.c              |  938 ++++++++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.h              |  218 +++++++++
 stubdom/vtpmmgr/tpmrsa.c           |  175 +++++++
 stubdom/vtpmmgr/tpmrsa.h           |   67 +++
 stubdom/vtpmmgr/uuid.h             |   50 ++
 stubdom/vtpmmgr/vtpm_cmd_handler.c |  152 ++++++
 stubdom/vtpmmgr/vtpm_manager.h     |   64 +++
 stubdom/vtpmmgr/vtpm_storage.c     |  794 ++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/vtpm_storage.h     |   68 +++
 stubdom/vtpmmgr/vtpmmgr.c          |   93 ++++
 stubdom/vtpmmgr/vtpmmgr.h          |   77 +++
 18 files changed, 4766 insertions(+)
 create mode 100644 stubdom/vtpmmgr/Makefile
 create mode 100644 stubdom/vtpmmgr/init.c
 create mode 100644 stubdom/vtpmmgr/log.c
 create mode 100644 stubdom/vtpmmgr/log.h
 create mode 100644 stubdom/vtpmmgr/marshal.h
 create mode 100644 stubdom/vtpmmgr/minios.cfg
 create mode 100644 stubdom/vtpmmgr/tcg.h
 create mode 100644 stubdom/vtpmmgr/tpm.c
 create mode 100644 stubdom/vtpmmgr/tpm.h
 create mode 100644 stubdom/vtpmmgr/tpmrsa.c
 create mode 100644 stubdom/vtpmmgr/tpmrsa.h
 create mode 100644 stubdom/vtpmmgr/uuid.h
 create mode 100644 stubdom/vtpmmgr/vtpm_cmd_handler.c
 create mode 100644 stubdom/vtpmmgr/vtpm_manager.h
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.c
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.h
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.c
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.h

diff --git a/stubdom/vtpmmgr/Makefile b/stubdom/vtpmmgr/Makefile
new file mode 100644
index 0000000..88c83c3
--- /dev/null
+++ b/stubdom/vtpmmgr/Makefile
@@ -0,0 +1,32 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o bignum.o sha4.o havege.o timing.o entropy_poll.o
+
+TARGET=vtpmmgr.a
+OBJS=vtpmmgr.o vtpm_cmd_handler.o vtpm_storage.o init.o tpmrsa.o tpm.o log.o
+
+CFLAGS+=-Werror -Iutil -Icrypto -Itcs
+CFLAGS+=-Wno-declaration-after-statement -Wno-unused-label
+
+build: $(TARGET)
+$(TARGET): $(OBJS)
+	ar -rcs $@ $^ $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+clean:
+	rm -f $(TARGET) $(OBJS)
+
+distclean: clean
+
+.PHONY: clean distclean
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
new file mode 100644
index 0000000..a158020
--- /dev/null
+++ b/stubdom/vtpmmgr/init.c
@@ -0,0 +1,553 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include <stdint.h>
+#include <stdlib.h>
+
+#include <xen/xen.h>
+#include <mini-os/tpmback.h>
+#include <mini-os/tpmfront.h>
+#include <mini-os/tpm_tis.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <polarssl/sha1.h>
+
+#include "log.h"
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+#include "tpm.h"
+#include "marshal.h"
+
+struct Opts {
+   enum {
+      TPMDRV_TPM_TIS,
+      TPMDRV_TPMFRONT,
+   } tpmdriver;
+   unsigned long tpmiomem;
+   unsigned int tpmirq;
+   unsigned int tpmlocality;
+   int gen_owner_auth;
+};
+
+// --------------------------- Well Known Auths --------------------------
+const TPM_AUTHDATA WELLKNOWN_SRK_AUTH = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+const TPM_AUTHDATA WELLKNOWN_OWNER_AUTH = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+
+struct vtpm_globals vtpm_globals = {
+   .tpm_fd = -1,
+   .storage_key = TPM_KEY_INIT,
+   .storage_key_handle = 0,
+   .oiap = { .AuthHandle = 0 }
+};
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = TPM_GetRandom(&sz, data);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+static TPM_RESULT check_tpm_version(void) {
+   TPM_RESULT status;
+   UINT32 rsize;
+   BYTE* res = NULL;
+   TPM_CAP_VERSION_INFO vinfo;
+
+   TPMTRYRETURN(TPM_GetCapability(
+            TPM_CAP_VERSION_VAL,
+            0,
+            NULL,
+            &rsize,
+            &res));
+   if(rsize < 4) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid size returned by GetCapability!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   unpack_TPM_CAP_VERSION_INFO(res, &vinfo, UNPACK_ALIAS);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Hardware TPM:\n");
+   vtpmloginfo(VTPM_LOG_VTPM, " version: %hhd %hhd %hhd %hhd\n",
+         vinfo.version.major, vinfo.version.minor, vinfo.version.revMajor, vinfo.version.revMinor);
+   vtpmloginfo(VTPM_LOG_VTPM, " specLevel: %hd\n", vinfo.specLevel);
+   vtpmloginfo(VTPM_LOG_VTPM, " errataRev: %hhd\n", vinfo.errataRev);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorID: %c%c%c%c\n",
+         vinfo.tpmVendorID[0], vinfo.tpmVendorID[1],
+         vinfo.tpmVendorID[2], vinfo.tpmVendorID[3]);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecificSize: %hd\n", vinfo.vendorSpecificSize);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecific: ");
+   for(int i = 0; i < vinfo.vendorSpecificSize; ++i) {
+      vtpmloginfomore(VTPM_LOG_VTPM, "%02hhx", vinfo.vendorSpecific[i]);
+   }
+   vtpmloginfomore(VTPM_LOG_VTPM, "\n");
+
+abort_egress:
+   free(res);
+   return status;
+}
+
+static TPM_RESULT flush_tpm(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   const TPM_RESOURCE_TYPE reslist[] = { TPM_RT_KEY, TPM_RT_AUTH, TPM_RT_TRANS, TPM_RT_COUNTER, TPM_RT_DAA_TPM, TPM_RT_CONTEXT };
+   BYTE* keylist = NULL;
+   UINT32 keylistSize;
+   BYTE* ptr;
+
+   //Iterate through each resource type and flush all handles
+   for(int i = 0; i < sizeof(reslist) / sizeof(TPM_RESOURCE_TYPE); ++i) {
+      TPM_RESOURCE_TYPE beres = cpu_to_be32(reslist[i]);
+      UINT16 size;
+      TPMTRYRETURN(TPM_GetCapability(
+               TPM_CAP_HANDLE,
+               sizeof(TPM_RESOURCE_TYPE),
+               (BYTE*)(&beres),
+               &keylistSize,
+               &keylist));
+
+      ptr = keylist;
+      ptr = unpack_UINT16(ptr, &size);
+
+      //Flush each handle
+      if(size) {
+         vtpmloginfo(VTPM_LOG_VTPM, "Flushing %u handle(s) of type %lu\n", size, (unsigned long) reslist[i]);
+         for(int j = 0; j < size; ++j) {
+            TPM_HANDLE h;
+            ptr = unpack_TPM_HANDLE(ptr, &h);
+            TPMTRYRETURN(TPM_FlushSpecific(h, reslist[i]));
+         }
+      }
+
+      free(keylist);
+      keylist = NULL;
+   }
+
+   goto egress;
+abort_egress:
+   free(keylist);
+egress:
+   return status;
+}
+
+
+static TPM_RESULT try_take_ownership(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_PUBKEY pubEK = TPM_PUBKEY_INIT;
+
+   // If we can read PubEK then there is no owner and we should take it.
+   status = TPM_ReadPubek(&pubEK);
+
+   switch(status) {
+      case TPM_DISABLED_CMD:
+         //Cannot read the EK, so the TPM already has an owner
+         vtpmloginfo(VTPM_LOG_VTPM, "Failed to read EK, meaning the TPM has an owner. Creating keys off the existing SRK.\n");
+         status = TPM_SUCCESS;
+         break;
+      case TPM_NO_ENDORSEMENT:
+         {
+            //If there's no EK, we have to create one
+            TPM_KEY_PARMS keyInfo = {
+               .algorithmID = TPM_ALG_RSA,
+               .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+               .sigScheme = TPM_SS_NONE,
+               .parmSize = 12,
+               .parms.rsa = {
+                  .keyLength = RSA_KEY_SIZE,
+                  .numPrimes = 2,
+                  .exponentSize = 0,
+                  .exponent = NULL,
+               },
+            };
+            TPMTRYRETURN(TPM_CreateEndorsementKeyPair(&keyInfo, &pubEK));
+         }
+         //fall through to take ownership
+      case TPM_SUCCESS:
+         {
+            //Construct the Srk
+            TPM_KEY srk = {
+               .ver = TPM_STRUCT_VER_1_1,
+               .keyUsage = TPM_KEY_STORAGE,
+               .keyFlags = 0x00,
+               .authDataUsage = TPM_AUTH_ALWAYS,
+               .algorithmParms = {
+                  .algorithmID = TPM_ALG_RSA,
+                  .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+                  .sigScheme =  TPM_SS_NONE,
+                  .parmSize = 12,
+                  .parms.rsa = {
+                     .keyLength = RSA_KEY_SIZE,
+                     .numPrimes = 2,
+                     .exponentSize = 0,
+                     .exponent = NULL,
+                  },
+               },
+               .PCRInfoSize = 0,
+               .pubKey = {
+                  .keyLength = 0,
+                  .key = NULL,
+               },
+               .encDataSize = 0,
+            };
+
+            TPMTRYRETURN(TPM_TakeOwnership(
+                     &pubEK,
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+                     &srk,
+                     NULL,
+                     &vtpm_globals.oiap));
+
+            TPMTRYRETURN(TPM_DisablePubekRead(
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     &vtpm_globals.oiap));
+         }
+         break;
+      default:
+         break;
+   }
+abort_egress:
+   free_TPM_PUBKEY(&pubEK);
+   return status;
+}
+
+static void init_storage_key(TPM_KEY* key) {
+   key->ver.major = 1;
+   key->ver.minor = 1;
+   key->ver.revMajor = 0;
+   key->ver.revMinor = 0;
+
+   key->keyUsage = TPM_KEY_BIND;
+   key->keyFlags = 0;
+   key->authDataUsage = TPM_AUTH_ALWAYS;
+
+   TPM_KEY_PARMS* p = &key->algorithmParms;
+   p->algorithmID = TPM_ALG_RSA;
+   p->encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1;
+   p->sigScheme = TPM_SS_NONE;
+   p->parmSize = 12;
+
+   TPM_RSA_KEY_PARMS* r = &p->parms.rsa;
+   r->keyLength = RSA_KEY_SIZE;
+   r->numPrimes = 2;
+   r->exponentSize = 0;
+   r->exponent = NULL;
+
+   key->PCRInfoSize = 0;
+   key->encDataSize = 0;
+   key->encData = NULL;
+}
+
+static int parse_auth_string(char* authstr, BYTE* target, const TPM_AUTHDATA wellknown, int allowrandom) {
+   int rc;
+   /* well known owner auth */
+   if(!strcmp(authstr, "well-known")) {
+      memcpy(target, wellknown, sizeof(TPM_AUTHDATA));
+   }
+   /* Create a randomly generated owner auth */
+   else if(allowrandom && !strcmp(authstr, "random")) {
+      return 1;
+   }
+   /* auth is a raw SHA-1 hash supplied as a hex string */
+   else if(!strncmp(authstr, "hash:", 5)) {
+      authstr += 5;
+      if((rc = strlen(authstr)) != 40) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth hex string `%s' must be exactly 40 characters (20 bytes) long, length=%d\n", authstr, rc);
+         return -1;
+      }
+      for(int j = 0; j < 20; ++j) {
+         /* The field width is required: a bare %hhX would consume the whole string */
+         if(sscanf(authstr, "%2hhX", target + j) != 1) {
+            vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth string `%s' is not a valid hex string\n", authstr);
+            return -1;
+         }
+         authstr += 2;
+      }
+   }
+   /* owner auth is a string that will be hashed */
+   else if(!strncmp(authstr, "text:", 5)) {
+      authstr += 5;
+      sha1((const unsigned char*)authstr, strlen(authstr), target);
+   }
+   else {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid auth string %s\n", authstr);
+      return -1;
+   }
+
+   return 0;
+}
+
+int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
+{
+   int rc;
+   int i;
+
+   //Set defaults
+   memcpy(vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, sizeof(TPM_AUTHDATA));
+   memcpy(vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, sizeof(TPM_AUTHDATA));
+
+   for(i = 1; i < argc; ++i) {
+      if(!strncmp(argv[i], "owner_auth:", 11)) {
+         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, 1)) < 0) {
+            goto err_invalid;
+         }
+         if(rc == 1) {
+            opts->gen_owner_auth = 1;
+         }
+      }
+      else if(!strncmp(argv[i], "srk_auth:", 9)) {
+         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, 0)) != 0) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmdriver=", 10)) {
+         if(!strcmp(argv[i] + 10, "tpm_tis")) {
+            opts->tpmdriver = TPMDRV_TPM_TIS;
+         } else if(!strcmp(argv[i] + 10, "tpmfront")) {
+            opts->tpmdriver = TPMDRV_TPMFRONT;
+         } else {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmiomem=",9)) {
+         if(sscanf(argv[i] + 9, "0x%lX", &opts->tpmiomem) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmirq=",7)) {
+         if(!strcmp(argv[i] + 7, "probe")) {
+            opts->tpmirq = TPM_PROBE_IRQ;
+         } else if( sscanf(argv[i] + 7, "%u", &opts->tpmirq) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmlocality=",12)) {
+         if(sscanf(argv[i] + 12, "%u", &opts->tpmlocality) != 1 || opts->tpmlocality > 4) {
+            goto err_invalid;
+         }
+      }
+   }
+
+   switch(opts->tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpm_tis driver\n");
+         break;
+      case TPMDRV_TPMFRONT:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpmfront driver\n");
+         break;
+   }
+
+   return 0;
+err_invalid:
+   vtpmlogerror(VTPM_LOG_VTPM, "Invalid Option %s\n", argv[i]);
+   return -1;
+}
+
+
+
+static TPM_RESULT vtpmmgr_create(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_AUTH_SESSION osap = TPM_AUTH_SESSION_INIT;
+   TPM_AUTHDATA sharedsecret;
+
+   // Take ownership if TPM is unowned
+   TPMTRYRETURN(try_take_ownership());
+
+   // Generate storage key's auth
+   memset(&vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   TPMTRYRETURN( TPM_OSAP(
+            TPM_ET_KEYHANDLE,
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &sharedsecret,
+            &osap) );
+
+   init_storage_key(&vtpm_globals.storage_key);
+
+   //initialize the storage key
+   TPMTRYRETURN( TPM_CreateWrapKey(
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&sharedsecret,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.storage_key,
+            &osap) );
+
+   //Load Storage Key
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*) &vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   //Make sure the TPM has committed the changes
+   TPMTRYRETURN( TPM_SaveState() );
+
+   //Create new disk image
+   TPMTRYRETURN(vtpm_storage_new_header());
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Finished initializing new VTPM manager\n");
+   goto egress;
+abort_egress:
+egress:
+
+   //End the OSAP session
+   if(osap.AuthHandle) {
+      TPM_TerminateHandle(osap.AuthHandle);
+   }
+
+   return status;
+}
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   /* Default commandline options */
+   struct Opts opts = {
+      .tpmdriver = TPMDRV_TPM_TIS,
+      .tpmiomem = TPM_BASEADDR,
+      .tpmirq = 0,
+      .tpmlocality = 0,
+      .gen_owner_auth = 0,
+   };
+
+   if(parse_cmdline_opts(argc, argv, &opts) != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Command line parsing failed! exiting..\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   //Setup storage system
+   if(vtpm_storage_init() != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize storage subsystem!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   //Setup tpmback device
+   init_tpmback();
+
+   //Setup tpm access
+   switch(opts.tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         {
+            struct tpm_chip* tpm;
+            if((tpm = init_tpm_tis(opts.tpmiomem, TPM_TIS_LOCL_INT_TO_FLAG(opts.tpmlocality), opts.tpmirq)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpm_tis device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpm_tis_open(tpm);
+            tpm_tis_request_locality(tpm, opts.tpmlocality);
+         }
+         break;
+      case TPMDRV_TPMFRONT:
+         {
+            struct tpmfront_dev* tpmfront_dev;
+            if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpmfront device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpmfront_open(tpmfront_dev);
+         }
+         break;
+   }
+
+   //Get the version of the tpm
+   TPMTRYRETURN(check_tpm_version());
+
+   // Blow away all stale handles left in the tpm
+   if(flush_tpm() != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "VTPM_FlushResources failed, continuing anyway..\n");
+   }
+
+   /* Initialize the rng */
+   entropy_init(&vtpm_globals.entropy);
+   entropy_add_source(&vtpm_globals.entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&vtpm_globals.entropy);
+   ctr_drbg_init(&vtpm_globals.ctr_drbg, entropy_func, &vtpm_globals.entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &vtpm_globals.ctr_drbg, CTR_DRBG_PR_OFF );
+
+   // Generate Auth for Owner
+   if(opts.gen_owner_auth) {
+      vtpmmgr_rand(vtpm_globals.owner_auth, sizeof(TPM_AUTHDATA));
+   }
+
+   // Create OIAP session for service's authorized commands
+   TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+
+   /* Load the Manager data, if it fails create a new manager */
+   if (vtpm_storage_load_header() != TPM_SUCCESS) {
+      /* If the OIAP session was closed by an error, create a new one */
+      if(vtpm_globals.oiap.AuthHandle == 0) {
+         TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Failed to read manager file. Assuming first time initialization.\n");
+      TPMTRYRETURN( vtpmmgr_create() );
+   }
+
+   goto egress;
+abort_egress:
+   vtpmmgr_shutdown();
+egress:
+   return status;
+}
+
+void vtpmmgr_shutdown(void)
+{
+   /* Cleanup resources */
+   free_TPM_KEY(&vtpm_globals.storage_key);
+
+   /* Cleanup TPM resources */
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
+
+   /* Close tpmback */
+   shutdown_tpmback();
+
+   /* Close the storage system and blkfront */
+   vtpm_storage_shutdown();
+
+   /* Close tpmfront/tpm_tis */
+   close(vtpm_globals.tpm_fd);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
+}
diff --git a/stubdom/vtpmmgr/log.c b/stubdom/vtpmmgr/log.c
new file mode 100644
index 0000000..a82c913
--- /dev/null
+++ b/stubdom/vtpmmgr/log.c
@@ -0,0 +1,151 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+
+#include "tcg.h"
+
+/* Indexed by the VTPM_LOG_* module numbers; the duplicate entries
+ * cover the corresponding *_DEEP variants. */
+char *module_names[] = { "",
+                         "TPM",
+                         "TPM",
+                         "VTPM",
+                         "VTPM",
+                         "TXDATA",
+                       };
+// Helper code for the consts, e.g. to produce messages for error codes.
+
+typedef struct error_code_entry_t {
+  TPM_RESULT code;
+  char * code_name;
+  char * msg;
+} error_code_entry_t;
+
+static const error_code_entry_t error_msgs [] = {
+  { TPM_SUCCESS, "TPM_SUCCESS", "Successful completion of the operation" },
+  { TPM_AUTHFAIL, "TPM_AUTHFAIL", "Authentication failed" },
+  { TPM_BADINDEX, "TPM_BADINDEX", "The index to a PCR, DIR or other register is incorrect" },
+  { TPM_BAD_PARAMETER, "TPM_BAD_PARAMETER", "One or more parameter is bad" },
+  { TPM_AUDITFAILURE, "TPM_AUDITFAILURE", "An operation completed successfully but the auditing of that operation failed." },
+  { TPM_CLEAR_DISABLED, "TPM_CLEAR_DISABLED", "The clear disable flag is set and all clear operations now require physical access" },
+  { TPM_DEACTIVATED, "TPM_DEACTIVATED", "The TPM is deactivated" },
+  { TPM_DISABLED, "TPM_DISABLED", "The TPM is disabled" },
+  { TPM_DISABLED_CMD, "TPM_DISABLED_CMD", "The target command has been disabled" },
+  { TPM_FAIL, "TPM_FAIL", "The operation failed" },
+  { TPM_BAD_ORDINAL, "TPM_BAD_ORDINAL", "The ordinal was unknown or inconsistent" },
+  { TPM_INSTALL_DISABLED, "TPM_INSTALL_DISABLED", "The ability to install an owner is disabled" },
+  { TPM_INVALID_KEYHANDLE, "TPM_INVALID_KEYHANDLE", "The key handle presented was invalid" },
+  { TPM_KEYNOTFOUND, "TPM_KEYNOTFOUND", "The target key was not found" },
+  { TPM_INAPPROPRIATE_ENC, "TPM_INAPPROPRIATE_ENC", "Unacceptable encryption scheme" },
+  { TPM_MIGRATEFAIL, "TPM_MIGRATEFAIL", "Migration authorization failed" },
+  { TPM_INVALID_PCR_INFO, "TPM_INVALID_PCR_INFO", "PCR information could not be interpreted" },
+  { TPM_NOSPACE, "TPM_NOSPACE", "No room to load key." },
+  { TPM_NOSRK, "TPM_NOSRK", "There is no SRK set" },
+  { TPM_NOTSEALED_BLOB, "TPM_NOTSEALED_BLOB", "An encrypted blob is invalid or was not created by this TPM" },
+  { TPM_OWNER_SET, "TPM_OWNER_SET", "There is already an Owner" },
+  { TPM_RESOURCES, "TPM_RESOURCES", "The TPM has insufficient internal resources to perform the requested action." },
+  { TPM_SHORTRANDOM, "TPM_SHORTRANDOM", "A random string was too short" },
+  { TPM_SIZE, "TPM_SIZE", "The TPM does not have the space to perform the operation." },
+  { TPM_WRONGPCRVAL, "TPM_WRONGPCRVAL", "The named PCR value does not match the current PCR value." },
+  { TPM_BAD_PARAM_SIZE, "TPM_BAD_PARAM_SIZE", "The paramSize argument to the command has the incorrect value" },
+  { TPM_SHA_THREAD, "TPM_SHA_THREAD", "There is no existing SHA-1 thread." },
+  { TPM_SHA_ERROR, "TPM_SHA_ERROR", "The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error." },
+  { TPM_FAILEDSELFTEST, "TPM_FAILEDSELFTEST", "Self-test has failed and the TPM has shutdown." },
+  { TPM_AUTH2FAIL, "TPM_AUTH2FAIL", "The authorization for the second key in a 2 key function failed authorization" },
+  { TPM_BADTAG, "TPM_BADTAG", "The tag value sent for a command is invalid" },
+  { TPM_IOERROR, "TPM_IOERROR", "An IO error occurred transmitting information to the TPM" },
+  { TPM_ENCRYPT_ERROR, "TPM_ENCRYPT_ERROR", "The encryption process had a problem." },
+  { TPM_DECRYPT_ERROR, "TPM_DECRYPT_ERROR", "The decryption process did not complete." },
+  { TPM_INVALID_AUTHHANDLE, "TPM_INVALID_AUTHHANDLE", "An invalid handle was used." },
+  { TPM_NO_ENDORSEMENT, "TPM_NO_ENDORSEMENT", "The TPM does not have an EK installed" },
+  { TPM_INVALID_KEYUSAGE, "TPM_INVALID_KEYUSAGE", "The usage of a key is not allowed" },
+  { TPM_WRONG_ENTITYTYPE, "TPM_WRONG_ENTITYTYPE", "The submitted entity type is not allowed" },
+  { TPM_INVALID_POSTINIT, "TPM_INVALID_POSTINIT", "The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup" },
+  { TPM_INAPPROPRIATE_SIG, "TPM_INAPPROPRIATE_SIG", "Signed data cannot include additional DER information" },
+  { TPM_BAD_KEY_PROPERTY, "TPM_BAD_KEY_PROPERTY", "The key properties in TPM_KEY_PARMs are not supported by this TPM" },
+
+  { TPM_BAD_MIGRATION, "TPM_BAD_MIGRATION", "The migration properties of this key are incorrect." },
+  { TPM_BAD_SCHEME, "TPM_BAD_SCHEME", "The signature or encryption scheme for this key is incorrect or not permitted in this situation." },
+  { TPM_BAD_DATASIZE, "TPM_BAD_DATASIZE", "The size of the data (or blob) parameter is bad or inconsistent with the referenced key" },
+  { TPM_BAD_MODE, "TPM_BAD_MODE", "A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob." },
+  { TPM_BAD_PRESENCE, "TPM_BAD_PRESENCE", "Either the physicalPresence or physicalPresenceLock bits have the wrong value" },
+  { TPM_BAD_VERSION, "TPM_BAD_VERSION", "The TPM cannot perform this version of the capability" },
+  { TPM_NO_WRAP_TRANSPORT, "TPM_NO_WRAP_TRANSPORT", "The TPM does not allow for wrapped transport sessions" },
+  { TPM_AUDITFAIL_UNSUCCESSFUL, "TPM_AUDITFAIL_UNSUCCESSFUL", "TPM audit construction failed and the underlying command was returning a failure code also" },
+  { TPM_AUDITFAIL_SUCCESSFUL, "TPM_AUDITFAIL_SUCCESSFUL", "TPM audit construction failed and the underlying command was returning success" },
+  { TPM_NOTRESETABLE, "TPM_NOTRESETABLE", "Attempt to reset a PCR register that does not have the resettable attribute" },
+  { TPM_NOTLOCAL, "TPM_NOTLOCAL", "Attempt to reset a PCR register that requires locality and locality modifier not part of command transport" },
+  { TPM_BAD_TYPE, "TPM_BAD_TYPE", "Make identity blob not properly typed" },
+  { TPM_INVALID_RESOURCE, "TPM_INVALID_RESOURCE", "When saving context identified resource type does not match actual resource" },
+  { TPM_NOTFIPS, "TPM_NOTFIPS", "The TPM is attempting to execute a command only available when in FIPS mode" },
+  { TPM_INVALID_FAMILY, "TPM_INVALID_FAMILY", "The command is attempting to use an invalid family ID" },
+  { TPM_NO_NV_PERMISSION, "TPM_NO_NV_PERMISSION", "The permission to manipulate the NV storage is not available" },
+  { TPM_REQUIRES_SIGN, "TPM_REQUIRES_SIGN", "The operation requires a signed command" },
+  { TPM_KEY_NOTSUPPORTED, "TPM_KEY_NOTSUPPORTED", "Wrong operation to load an NV key" },
+  { TPM_AUTH_CONFLICT, "TPM_AUTH_CONFLICT", "NV_LoadKey blob requires both owner and blob authorization" },
+  { TPM_AREA_LOCKED, "TPM_AREA_LOCKED", "The NV area is locked and not writable" },
+  { TPM_BAD_LOCALITY, "TPM_BAD_LOCALITY", "The locality is incorrect for the attempted operation" },
+  { TPM_READ_ONLY, "TPM_READ_ONLY", "The NV area is read only and can't be written to" },
+  { TPM_PER_NOWRITE, "TPM_PER_NOWRITE", "There is no protection on the write to the NV area" },
+  { TPM_FAMILYCOUNT, "TPM_FAMILYCOUNT", "The family count value does not match" },
+  { TPM_WRITE_LOCKED, "TPM_WRITE_LOCKED", "The NV area has already been written to" },
+  { TPM_BAD_ATTRIBUTES, "TPM_BAD_ATTRIBUTES", "The NV area attributes conflict" },
+  { TPM_INVALID_STRUCTURE, "TPM_INVALID_STRUCTURE", "The structure tag and version are invalid or inconsistent" },
+  { TPM_KEY_OWNER_CONTROL, "TPM_KEY_OWNER_CONTROL", "The key is under control of the TPM Owner and can only be evicted by the TPM Owner." },
+  { TPM_BAD_COUNTER, "TPM_BAD_COUNTER", "The counter handle is incorrect" },
+  { TPM_NOT_FULLWRITE, "TPM_NOT_FULLWRITE", "The write is not a complete write of the area" },
+  { TPM_CONTEXT_GAP, "TPM_CONTEXT_GAP", "The gap between saved context counts is too large" },
+  { TPM_MAXNVWRITES, "TPM_MAXNVWRITES", "The maximum number of NV writes without an owner has been exceeded" },
+  { TPM_NOOPERATOR, "TPM_NOOPERATOR", "No operator authorization value is set" },
+  { TPM_RESOURCEMISSING, "TPM_RESOURCEMISSING", "The resource pointed to by context is not loaded" },
+  { TPM_DELEGATE_LOCK, "TPM_DELEGATE_LOCK", "The delegate administration is locked" },
+  { TPM_DELEGATE_FAMILY, "TPM_DELEGATE_FAMILY", "Attempt to manage a family other than the delegated family" },
+  { TPM_DELEGATE_ADMIN, "TPM_DELEGATE_ADMIN", "Delegation table management not enabled" },
+  { TPM_TRANSPORT_EXCLUSIVE, "TPM_TRANSPORT_EXCLUSIVE", "There was a command executed outside of an exclusive transport session" },
+};
+
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code) {
+  // just do a linear scan for now
+  unsigned i;
+  for (i = 0; i < sizeof(error_msgs)/sizeof(error_msgs[0]); i++)
+    if (code == error_msgs[i].code)
+      return error_msgs[i].code_name;
+
+  return "Unknown Error Code";
+}
diff --git a/stubdom/vtpmmgr/log.h b/stubdom/vtpmmgr/log.h
new file mode 100644
index 0000000..5c7abf5
--- /dev/null
+++ b/stubdom/vtpmmgr/log.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __VTPM_LOG_H__
+#define __VTPM_LOG_H__
+
+#include <stdint.h>             // for uint32_t
+#include <stddef.h>             // for pointer NULL
+#include <stdio.h>
+#include "tcg.h"
+
+// =========================== LOGGING ==============================
+
+// the logging module numbers
+#define VTPM_LOG_TPM         1
+#define VTPM_LOG_TPM_DEEP    2
+#define VTPM_LOG_VTPM        3
+#define VTPM_LOG_VTPM_DEEP   4
+#define VTPM_LOG_TXDATA      5
+
+extern char *module_names[];
+
+// Default to standard logging
+#ifndef LOGGING_MODULES
+#define LOGGING_MODULES (BITMASK(VTPM_LOG_VTPM)|BITMASK(VTPM_LOG_TPM))
+#endif
+
+// bit-access macros
+#define BITMASK(idx)      ( 1U << (idx) )
+#define GETBIT(num,idx)   ( ((num) & BITMASK(idx)) >> (idx) )
+#define SETBIT(num,idx)   (num) |= BITMASK(idx)
+#define CLEARBIT(num,idx) (num) &= ( ~ BITMASK(idx) )
+
+/* Wrapped in do { } while(0) so the macros expand safely inside
+ * unbraced if/else statements. */
+#define vtpmloginfo(module, fmt, args...) \
+  do { \
+    if (GETBIT (LOGGING_MODULES, module) == 1) \
+      fprintf (stdout, "INFO[%s]: " fmt, module_names[module], ##args); \
+  } while(0)
+
+#define vtpmloginfomore(module, fmt, args...) \
+  do { \
+    if (GETBIT (LOGGING_MODULES, module) == 1) \
+      fprintf (stdout, fmt, ##args); \
+  } while(0)
+
+#define vtpmlogerror(module, fmt, args...) \
+  fprintf (stderr, "ERROR[%s]: " fmt, module_names[module], ##args)
+
+//typedef UINT32 tpm_size_t;
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code);
+
+#endif // __VTPM_LOG_H__
diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
new file mode 100644
index 0000000..77d32f0
--- /dev/null
+++ b/stubdom/vtpmmgr/marshal.h
@@ -0,0 +1,528 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef MARSHAL_H
+#define MARSHAL_H
+
+#include <stdlib.h>
+#include <mini-os/byteorder.h>
+#include <mini-os/endian.h>
+#include "tcg.h"
+
+typedef enum UnpackPtr {
+   UNPACK_ALIAS,
+   UNPACK_ALLOC
+} UnpackPtr;
+
+inline BYTE* pack_BYTE(BYTE* ptr, BYTE t) {
+   ptr[0] = t;
+   return ++ptr;
+}
+
+inline BYTE* unpack_BYTE(BYTE* ptr, BYTE* t) {
+   t[0] = ptr[0];
+   return ++ptr;
+}
+
+#define pack_BOOL(p, t) pack_BYTE(p, t)
+#define unpack_BOOL(p, t) unpack_BYTE(p, t)
+
+inline BYTE* pack_UINT16(BYTE* ptr, UINT16 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[0] = b[1];
+   ptr[1] = b[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* unpack_UINT16(BYTE* ptr, UINT16* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[1];
+   b[1] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* pack_UINT32(BYTE* ptr, UINT32 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[3] = b[0];
+   ptr[2] = b[1];
+   ptr[1] = b[2];
+   ptr[0] = b[3];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+   ptr[2] = b[2];
+   ptr[3] = b[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+inline BYTE* unpack_UINT32(BYTE* ptr, UINT32* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[3];
+   b[1] = ptr[2];
+   b[2] = ptr[1];
+   b[3] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+   b[2] = ptr[2];
+   b[3] = ptr[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+#define pack_TPM_RESULT(p, t) pack_UINT32(p, t)
+#define pack_TPM_PCRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_DIRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TPM_AUTHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HASHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HMACHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENCHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_KEY_HANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENTITYHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_RESOURCE_TYPE(p, t) pack_UINT32(p, t)
+#define pack_TPM_COMMAND_CODE(p, t) pack_UINT32(p, t)
+#define pack_TPM_PROTOCOL_ID(p, t) pack_UINT16(p, t)
+#define pack_TPM_AUTH_DATA_USAGE(p, t) pack_BYTE(p, t)
+#define pack_TPM_ENTITY_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_ALGORITHM_ID(p, t) pack_UINT32(p, t)
+#define pack_TPM_KEY_USAGE(p, t) pack_UINT16(p, t)
+#define pack_TPM_STARTUP_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_CAPABILITY_AREA(p, t) pack_UINT32(p, t)
+#define pack_TPM_ENC_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_SIG_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_MIGRATE_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_PHYSICAL_PRESENCE(p, t) pack_UINT16(p, t)
+#define pack_TPM_KEY_FLAGS(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_RESULT(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PCRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_DIRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_AUTHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HASHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HMACHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENCHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_KEY_HANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENTITYHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_RESOURCE_TYPE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_COMMAND_CODE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PROTOCOL_ID(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_AUTH_DATA_USAGE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_ENTITY_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_ALGORITHM_ID(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_KEY_USAGE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STARTUP_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_CAPABILITY_AREA(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_ENC_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_SIG_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_MIGRATE_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_PHYSICAL_PRESENCE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_KEY_FLAGS(p, t) unpack_UINT32(p, t)
+
+#define pack_TPM_AUTH_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TCS_CONTEXT_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TCS_KEY_HANDLE(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_AUTH_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TCS_CONTEXT_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TCS_KEY_HANDLE(p, t) unpack_UINT32(p, t)
+
+inline BYTE* pack_BUFFER(BYTE* ptr, const BYTE* buf, UINT32 size) {
+   memcpy(ptr, buf, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_BUFFER(BYTE* ptr, BYTE* buf, UINT32 size) {
+   memcpy(buf, ptr, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALIAS(BYTE* ptr, BYTE** buf, UINT32 size) {
+   *buf = ptr;
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALLOC(BYTE* ptr, BYTE** buf, UINT32 size) {
+   if(size) {
+      *buf = malloc(size);
+      memcpy(*buf, ptr, size);
+   } else {
+      *buf = NULL;
+   }
+   return ptr + size;
+}
+
+inline BYTE* unpack_PTR(BYTE* ptr, BYTE** buf, UINT32 size, UnpackPtr alloc) {
+   if(alloc == UNPACK_ALLOC) {
+      return unpack_ALLOC(ptr, buf, size);
+   } else {
+      return unpack_ALIAS(ptr, buf, size);
+   }
+}
+
+inline BYTE* pack_TPM_AUTHDATA(BYTE* ptr, const TPM_AUTHDATA* d) {
+   return pack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_AUTHDATA(BYTE* ptr, TPM_AUTHDATA* d) {
+   return unpack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_SECRET(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_ENCAUTH(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_PAYLOAD_TYPE(p, t) pack_BYTE(p, t)
+#define pack_TPM_TAG(p, t) pack_UINT16(p, t)
+#define pack_TPM_STRUCTURE_TAG(p, t) pack_UINT16(p, t)
+
+#define unpack_TPM_SECRET(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_ENCAUTH(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_PAYLOAD_TYPE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_TAG(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STRUCTURE_TAG(p, t) unpack_UINT16(p, t)
+
+inline BYTE* pack_TPM_VERSION(BYTE* ptr, const TPM_VERSION* t) {
+   ptr[0] = t->major;
+   ptr[1] = t->minor;
+   ptr[2] = t->revMajor;
+   ptr[3] = t->revMinor;
+   return ptr + 4;
+}
+
+inline BYTE* unpack_TPM_VERSION(BYTE* ptr, TPM_VERSION* t) {
+   t->major = ptr[0];
+   t->minor = ptr[1];
+   t->revMajor = ptr[2];
+   t->revMinor = ptr[3];
+   return ptr + 4;
+}
+
+inline BYTE* pack_TPM_CAP_VERSION_INFO(BYTE* ptr, const TPM_CAP_VERSION_INFO* v) {
+   ptr = pack_TPM_STRUCTURE_TAG(ptr, v->tag);
+   ptr = pack_TPM_VERSION(ptr, &v->version);
+   ptr = pack_UINT16(ptr, v->specLevel);
+   ptr = pack_BYTE(ptr, v->errataRev);
+   ptr = pack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = pack_UINT16(ptr, v->vendorSpecificSize);
+   ptr = pack_BUFFER(ptr, v->vendorSpecific, v->vendorSpecificSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_CAP_VERSION_INFO(BYTE* ptr, TPM_CAP_VERSION_INFO* v, UnpackPtr alloc) {
+   ptr = unpack_TPM_STRUCTURE_TAG(ptr, &v->tag);
+   ptr = unpack_TPM_VERSION(ptr, &v->version);
+   ptr = unpack_UINT16(ptr, &v->specLevel);
+   ptr = unpack_BYTE(ptr, &v->errataRev);
+   ptr = unpack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = unpack_UINT16(ptr, &v->vendorSpecificSize);
+   ptr = unpack_PTR(ptr, &v->vendorSpecific, v->vendorSpecificSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_DIGEST(BYTE* ptr, const TPM_DIGEST* d) {
+   return pack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_DIGEST(BYTE* ptr, TPM_DIGEST* d) {
+   return unpack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_PCRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_PCRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_COMPOSITE_HASH(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_COMPOSITE_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_DIRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_DIRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_HMAC(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_HMAC(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_CHOSENID_HASH(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_CHOSENID_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+inline BYTE* pack_TPM_NONCE(BYTE* ptr, const TPM_NONCE* n) {
+   return pack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_NONCE(BYTE* ptr, TPM_NONCE* n) {
+   return unpack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* pack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, const TPM_SYMMETRIC_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->blockSize);
+   ptr = pack_UINT32(ptr, k->ivSize);
+   return pack_BUFFER(ptr, k->IV, k->ivSize);
+}
+
+inline BYTE* unpack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, TPM_SYMMETRIC_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->blockSize);
+   ptr = unpack_UINT32(ptr, &k->ivSize);
+   return unpack_PTR(ptr, &k->IV, k->ivSize, alloc);
+}
+
+inline BYTE* pack_TPM_RSA_KEY_PARMS(BYTE* ptr, const TPM_RSA_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->numPrimes);
+   ptr = pack_UINT32(ptr, k->exponentSize);
+   return pack_BUFFER(ptr, k->exponent, k->exponentSize);
+}
+
+inline BYTE* unpack_TPM_RSA_KEY_PARMS(BYTE* ptr, TPM_RSA_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->numPrimes);
+   ptr = unpack_UINT32(ptr, &k->exponentSize);
+   return unpack_PTR(ptr, &k->exponent, k->exponentSize, alloc);
+}
+
+inline BYTE* pack_TPM_KEY_PARMS(BYTE* ptr, const TPM_KEY_PARMS* k) {
+   ptr = pack_TPM_ALGORITHM_ID(ptr, k->algorithmID);
+   ptr = pack_TPM_ENC_SCHEME(ptr, k->encScheme);
+   ptr = pack_TPM_SIG_SCHEME(ptr, k->sigScheme);
+   ptr = pack_UINT32(ptr, k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return pack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return pack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_KEY_PARMS(BYTE* ptr, TPM_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_ALGORITHM_ID(ptr, &k->algorithmID);
+   ptr = unpack_TPM_ENC_SCHEME(ptr, &k->encScheme);
+   ptr = unpack_TPM_SIG_SCHEME(ptr, &k->sigScheme);
+   ptr = unpack_UINT32(ptr, &k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return unpack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa, alloc);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return unpack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym, alloc);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* pack_TPM_STORE_PUBKEY(BYTE* ptr, const TPM_STORE_PUBKEY* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_BUFFER(ptr, k->key, k->keyLength);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORE_PUBKEY(BYTE* ptr, TPM_STORE_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_PTR(ptr, &k->key, k->keyLength, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PUBKEY(BYTE* ptr, const TPM_PUBKEY* k) {
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   return pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+}
+
+inline BYTE* unpack_TPM_PUBKEY(BYTE* ptr, TPM_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   return unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+}
+
+inline BYTE* pack_TPM_PCR_SELECTION(BYTE* ptr, const TPM_PCR_SELECTION* p) {
+   ptr = pack_UINT16(ptr, p->sizeOfSelect);
+   ptr = pack_BUFFER(ptr, p->pcrSelect, p->sizeOfSelect);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_SELECTION(BYTE* ptr, TPM_PCR_SELECTION* p, UnpackPtr alloc) {
+   ptr = unpack_UINT16(ptr, &p->sizeOfSelect);
+   ptr = unpack_PTR(ptr, &p->pcrSelect, p->sizeOfSelect, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_INFO(BYTE* ptr, const TPM_PCR_INFO* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->pcrSelection);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_INFO(BYTE* ptr, TPM_PCR_INFO* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->pcrSelection, alloc);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_COMPOSITE(BYTE* ptr, const TPM_PCR_COMPOSITE* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->select);
+   ptr = pack_UINT32(ptr, p->valueSize);
+   ptr = pack_BUFFER(ptr, (const BYTE*)p->pcrValue, p->valueSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_COMPOSITE(BYTE* ptr, TPM_PCR_COMPOSITE* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->select, alloc);
+   ptr = unpack_UINT32(ptr, &p->valueSize);
+   ptr = unpack_PTR(ptr, (BYTE**)&p->pcrValue, p->valueSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_KEY(BYTE* ptr, const TPM_KEY* k) {
+   ptr = pack_TPM_VERSION(ptr, &k->ver);
+   ptr = pack_TPM_KEY_USAGE(ptr, k->keyUsage);
+   ptr = pack_TPM_KEY_FLAGS(ptr, k->keyFlags);
+   ptr = pack_TPM_AUTH_DATA_USAGE(ptr, k->authDataUsage);
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   ptr = pack_UINT32(ptr, k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &k->PCRInfo);
+   }
+   ptr = pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+   ptr = pack_UINT32(ptr, k->encDataSize);
+   return pack_BUFFER(ptr, k->encData, k->encDataSize);
+}
+
+inline BYTE* unpack_TPM_KEY(BYTE* ptr, TPM_KEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &k->ver);
+   ptr = unpack_TPM_KEY_USAGE(ptr, &k->keyUsage);
+   ptr = unpack_TPM_KEY_FLAGS(ptr, &k->keyFlags);
+   ptr = unpack_TPM_AUTH_DATA_USAGE(ptr, &k->authDataUsage);
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   ptr = unpack_UINT32(ptr, &k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &k->PCRInfo, alloc);
+   }
+   ptr = unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+   ptr = unpack_UINT32(ptr, &k->encDataSize);
+   return unpack_PTR(ptr, &k->encData, k->encDataSize, alloc);
+}
+
+inline BYTE* pack_TPM_BOUND_DATA(BYTE* ptr, const TPM_BOUND_DATA* b, UINT32 payloadSize) {
+   ptr = pack_TPM_VERSION(ptr, &b->ver);
+   ptr = pack_TPM_PAYLOAD_TYPE(ptr, b->payload);
+   return pack_BUFFER(ptr, b->payloadData, payloadSize);
+}
+
+inline BYTE* unpack_TPM_BOUND_DATA(BYTE* ptr, TPM_BOUND_DATA* b, UINT32 payloadSize, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &b->ver);
+   ptr = unpack_TPM_PAYLOAD_TYPE(ptr, &b->payload);
+   return unpack_PTR(ptr, &b->payloadData, payloadSize, alloc);
+}
+
+inline BYTE* pack_TPM_STORED_DATA(BYTE* ptr, const TPM_STORED_DATA* d) {
+   ptr = pack_TPM_VERSION(ptr, &d->ver);
+   ptr = pack_UINT32(ptr, d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &d->sealInfo);
+   }
+   ptr = pack_UINT32(ptr, d->encDataSize);
+   ptr = pack_BUFFER(ptr, d->encData, d->encDataSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORED_DATA(BYTE* ptr, TPM_STORED_DATA* d, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &d->ver);
+   ptr = unpack_UINT32(ptr, &d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &d->sealInfo, alloc);
+   }
+   ptr = unpack_UINT32(ptr, &d->encDataSize);
+   ptr = unpack_PTR(ptr, &d->encData, d->encDataSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_AUTH_SESSION(BYTE* ptr, const TPM_AUTH_SESSION* auth) {
+   ptr = pack_TPM_AUTH_HANDLE(ptr, auth->AuthHandle);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+   ptr = pack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_AUTH_SESSION(BYTE* ptr, TPM_AUTH_SESSION* auth) {
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = unpack_BOOL(ptr, &auth->fContinueAuthSession);
+   ptr = unpack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG tag,
+      UINT32 size,
+      TPM_COMMAND_CODE ord) {
+   ptr = pack_UINT16(ptr, tag);
+   ptr = pack_UINT32(ptr, size);
+   return pack_UINT32(ptr, ord);
+}
+
+inline BYTE* unpack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG* tag,
+      UINT32* size,
+      TPM_COMMAND_CODE* ord) {
+   ptr = unpack_UINT16(ptr, tag);
+   ptr = unpack_UINT32(ptr, size);
+   ptr = unpack_UINT32(ptr, ord);
+   return ptr;
+}
+
+#define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
+#define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
+
+#endif
diff --git a/stubdom/vtpmmgr/minios.cfg b/stubdom/vtpmmgr/minios.cfg
new file mode 100644
index 0000000..3fb383d
--- /dev/null
+++ b/stubdom/vtpmmgr/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=y
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpmmgr/tcg.h b/stubdom/vtpmmgr/tcg.h
new file mode 100644
index 0000000..7687eae
--- /dev/null
+++ b/stubdom/vtpmmgr/tcg.h
@@ -0,0 +1,707 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005 Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TCG_H__
+#define __TCG_H__
+
+#include <stdlib.h>
+#include <stdint.h>
+
+// **************************** CONSTANTS *********************************
+
+// BOOL values
+#define TRUE 0x01
+#define FALSE 0x00
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+//
+// TPM_COMMAND_CODE values
+#define TPM_PROTECTED_ORDINAL 0x00000000UL
+#define TPM_UNPROTECTED_ORDINAL 0x80000000UL
+#define TPM_CONNECTION_ORDINAL 0x40000000UL
+#define TPM_VENDOR_ORDINAL 0x20000000UL
+
+#define TPM_ORD_OIAP                     (10UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OSAP                     (11UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuth               (12UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TakeOwnership            (13UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymStart      (14UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymFinish     (15UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthOwner          (16UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Extend                   (20UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PcrRead                  (21UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Quote                    (22UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Seal                     (23UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Unseal                   (24UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirWriteAuth             (25UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirRead                  (26UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_UnBind                   (30UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateWrapKey            (31UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKey                  (32UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetPubKey                (33UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EvictKey                 (34UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMigrationBlob      (40UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReWrapKey                (41UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ConvertMigrationBlob     (42UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_AuthorizeMigrationKey    (43UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMaintenanceArchive (44UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadMaintenanceArchive   (45UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_KillMaintenanceFeature   (46UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadManuMaintPub         (47UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadManuMaintPub         (48UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifyKey               (50UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Sign                     (60UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetRandom                (70UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_StirRandom               (71UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestFull             (80UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestStartup          (81UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifySelfTest          (82UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ContinueSelfTest         (83UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTestResult            (84UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Reset                    (90UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerClear               (91UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableOwnerClear        (92UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ForceClear               (93UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableForceClear        (94UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilitySigned      (100UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapability            (101UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilityOwner       (102UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerSetDisable          (110UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalEnable           (111UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalDisable          (112UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOwnerInstall          (113UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalSetDeactivated   (114UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetTempDeactivated       (115UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateEndorsementKeyPair (120UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MakeIdentity             (121UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ActivateIdentity         (122UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadPubek                (124UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerReadPubek           (125UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisablePubekRead         (126UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEvent            (130UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEventSigned      (131UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetOrdinalAuditStatus    (140UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOrdinalAuditStatus    (141UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Terminate_Handle         (150UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Init                     (151UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveState                (152UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Startup                  (153UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetRedirection           (154UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Start                (160UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Update               (161UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Complete             (162UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1CompleteExtend       (163UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FieldUpgrade             (170UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveKeyContext           (180UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKeyContext           (181UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveAuthContext          (182UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadAuthContext          (183UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveContext                      (184UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadContext                      (185UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FlushSpecific                    (186UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PCR_Reset                        (200UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_DefineSpace                   (204UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValue                    (205UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValueAuth                (206UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValue                     (207UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValueAuth                 (208UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_UpdateVerification      (209UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_Manage                  (210UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateKeyDelegation     (212UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateOwnerDelegation   (213UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_VerifyDelegation        (214UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_LoadOwnerDelegation     (216UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadAuth                (217UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadTable               (219UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateCounter                    (220UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_IncrementCounter                 (221UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadCounter                      (222UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounter                   (223UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounterOwner              (224UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EstablishTransport               (230UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ExecuteTransport                 (231UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseTransportSigned           (232UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTicks                         (241UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TickStampBlob                    (242UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MAX                              (256UL + TPM_PROTECTED_ORDINAL)
+
+#define TSC_ORD_PhysicalPresence         (10UL + TPM_CONNECTION_ORDINAL)
+
+
+
+//
+// TPM_RESULT values
+//
+// The full return-code table from the TPM 1.2 specification.
+
+#define TPM_BASE   0x0 // The start of TPM return codes
+#define TPM_VENDOR_ERROR 0x00000400 // Mask to indicate that the error code is vendor specific for vendor specific commands
+#define TPM_NON_FATAL  0x00000800 // Mask to indicate that the error code is a non-fatal failure.
+
+#define TPM_SUCCESS   TPM_BASE // Successful completion of the operation
+#define TPM_AUTHFAIL      TPM_BASE + 1 // Authentication failed
+#define TPM_BADINDEX      TPM_BASE + 2 // The index to a PCR, DIR or other register is incorrect
+#define TPM_BAD_PARAMETER     TPM_BASE + 3 // One or more parameter is bad
+#define TPM_AUDITFAILURE     TPM_BASE + 4 // An operation completed successfully but the auditing of that operation failed.
+#define TPM_CLEAR_DISABLED     TPM_BASE + 5 // The clear disable flag is set and all clear operations now require physical access
+#define TPM_DEACTIVATED     TPM_BASE + 6 // The TPM is deactivated
+#define TPM_DISABLED      TPM_BASE + 7 // The TPM is disabled
+#define TPM_DISABLED_CMD     TPM_BASE + 8 // The target command has been disabled
+#define TPM_FAIL       TPM_BASE + 9 // The operation failed
+#define TPM_BAD_ORDINAL     TPM_BASE + 10 // The ordinal was unknown or inconsistent
+#define TPM_INSTALL_DISABLED   TPM_BASE + 11 // The ability to install an owner is disabled
+#define TPM_INVALID_KEYHANDLE  TPM_BASE + 12 // The key handle presented was invalid
+#define TPM_KEYNOTFOUND     TPM_BASE + 13 // The target key was not found
+#define TPM_INAPPROPRIATE_ENC  TPM_BASE + 14 // Unacceptable encryption scheme
+#define TPM_MIGRATEFAIL     TPM_BASE + 15 // Migration authorization failed
+#define TPM_INVALID_PCR_INFO   TPM_BASE + 16 // PCR information could not be interpreted
+#define TPM_NOSPACE      TPM_BASE + 17 // No room to load key.
+#define TPM_NOSRK       TPM_BASE + 18 // There is no SRK set
+#define TPM_NOTSEALED_BLOB     TPM_BASE + 19 // An encrypted blob is invalid or was not created by this TPM
+#define TPM_OWNER_SET      TPM_BASE + 20 // There is already an Owner
+#define TPM_RESOURCES      TPM_BASE + 21 // The TPM has insufficient internal resources to perform the requested action.
+#define TPM_SHORTRANDOM     TPM_BASE + 22 // A random string was too short
+#define TPM_SIZE       TPM_BASE + 23 // The TPM does not have the space to perform the operation.
+#define TPM_WRONGPCRVAL     TPM_BASE + 24 // The named PCR value does not match the current PCR value.
+#define TPM_BAD_PARAM_SIZE     TPM_BASE + 25 // The paramSize argument to the command has the incorrect value
+#define TPM_SHA_THREAD      TPM_BASE + 26 // There is no existing SHA-1 thread.
+#define TPM_SHA_ERROR      TPM_BASE + 27 // The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error.
+#define TPM_FAILEDSELFTEST     TPM_BASE + 28 // Self-test has failed and the TPM has shutdown.
+#define TPM_AUTH2FAIL      TPM_BASE + 29 // The authorization for the second key in a 2 key function failed authorization
+#define TPM_BADTAG       TPM_BASE + 30 // The tag value sent for a command is invalid
+#define TPM_IOERROR      TPM_BASE + 31 // An IO error occurred transmitting information to the TPM
+#define TPM_ENCRYPT_ERROR     TPM_BASE + 32 // The encryption process had a problem.
+#define TPM_DECRYPT_ERROR     TPM_BASE + 33 // The decryption process did not complete.
+#define TPM_INVALID_AUTHHANDLE TPM_BASE + 34 // An invalid handle was used.
+#define TPM_NO_ENDORSEMENT     TPM_BASE + 35 // The TPM does not have an EK installed
+#define TPM_INVALID_KEYUSAGE   TPM_BASE + 36 // The usage of a key is not allowed
+#define TPM_WRONG_ENTITYTYPE   TPM_BASE + 37 // The submitted entity type is not allowed
+#define TPM_INVALID_POSTINIT   TPM_BASE + 38 // The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup
+#define TPM_INAPPROPRIATE_SIG  TPM_BASE + 39 // Signed data cannot include additional DER information
+#define TPM_BAD_KEY_PROPERTY   TPM_BASE + 40 // The key properties in TPM_KEY_PARMs are not supported by this TPM
+
+#define TPM_BAD_MIGRATION      TPM_BASE + 41 // The migration properties of this key are incorrect.
+#define TPM_BAD_SCHEME       TPM_BASE + 42 // The signature or encryption scheme for this key is incorrect or not permitted in this situation.
+#define TPM_BAD_DATASIZE      TPM_BASE + 43 // The size of the data (or blob) parameter is bad or inconsistent with the referenced key
+#define TPM_BAD_MODE       TPM_BASE + 44 // A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob.
+#define TPM_BAD_PRESENCE      TPM_BASE + 45 // Either the physicalPresence or physicalPresenceLock bits have the wrong value
+#define TPM_BAD_VERSION      TPM_BASE + 46 // The TPM cannot perform this version of the capability
+#define TPM_NO_WRAP_TRANSPORT     TPM_BASE + 47 // The TPM does not allow for wrapped transport sessions
+#define TPM_AUDITFAIL_UNSUCCESSFUL TPM_BASE + 48 // TPM audit construction failed and the underlying command was returning a failure code also
+#define TPM_AUDITFAIL_SUCCESSFUL   TPM_BASE + 49 // TPM audit construction failed and the underlying command was returning success
+#define TPM_NOTRESETABLE      TPM_BASE + 50 // Attempt to reset a PCR register that does not have the resettable attribute
+#define TPM_NOTLOCAL       TPM_BASE + 51 // Attempt to reset a PCR register that requires locality and locality modifier not part of command transport
+#define TPM_BAD_TYPE       TPM_BASE + 52 // Make identity blob not properly typed
+#define TPM_INVALID_RESOURCE     TPM_BASE + 53 // When saving context identified resource type does not match actual resource
+#define TPM_NOTFIPS       TPM_BASE + 54 // The TPM is attempting to execute a command only available when in FIPS mode
+#define TPM_INVALID_FAMILY      TPM_BASE + 55 // The command is attempting to use an invalid family ID
+#define TPM_NO_NV_PERMISSION     TPM_BASE + 56 // The permission to manipulate the NV storage is not available
+#define TPM_REQUIRES_SIGN      TPM_BASE + 57 // The operation requires a signed command
+#define TPM_KEY_NOTSUPPORTED     TPM_BASE + 58 // Wrong operation to load an NV key
+#define TPM_AUTH_CONFLICT      TPM_BASE + 59 // NV_LoadKey blob requires both owner and blob authorization
+#define TPM_AREA_LOCKED      TPM_BASE + 60 // The NV area is locked and not writable
+#define TPM_BAD_LOCALITY      TPM_BASE + 61 // The locality is incorrect for the attempted operation
+#define TPM_READ_ONLY       TPM_BASE + 62 // The NV area is read only and can't be written to
+#define TPM_PER_NOWRITE      TPM_BASE + 63 // There is no protection on the write to the NV area
+#define TPM_FAMILYCOUNT      TPM_BASE + 64 // The family count value does not match
+#define TPM_WRITE_LOCKED      TPM_BASE + 65 // The NV area has already been written to
+#define TPM_BAD_ATTRIBUTES      TPM_BASE + 66 // The NV area attributes conflict
+#define TPM_INVALID_STRUCTURE     TPM_BASE + 67 // The structure tag and version are invalid or inconsistent
+#define TPM_KEY_OWNER_CONTROL     TPM_BASE + 68 // The key is under control of the TPM Owner and can only be evicted by the TPM Owner.
+#define TPM_BAD_COUNTER      TPM_BASE + 69 // The counter handle is incorrect
+#define TPM_NOT_FULLWRITE      TPM_BASE + 70 // The write is not a complete write of the area
+#define TPM_CONTEXT_GAP      TPM_BASE + 71 // The gap between saved context counts is too large
+#define TPM_MAXNVWRITES      TPM_BASE + 72 // The maximum number of NV writes without an owner has been exceeded
+#define TPM_NOOPERATOR       TPM_BASE + 73 // No operator authorization value is set
+#define TPM_RESOURCEMISSING     TPM_BASE + 74 // The resource pointed to by context is not loaded
+#define TPM_DELEGATE_LOCK      TPM_BASE + 75 // The delegate administration is locked
+#define TPM_DELEGATE_FAMILY     TPM_BASE + 76 // Attempt to manage a family other than the delegated family
+#define TPM_DELEGATE_ADMIN      TPM_BASE + 77 // Delegation table management not enabled
+#define TPM_TRANSPORT_EXCLUSIVE    TPM_BASE + 78 // There was a command executed outside of an exclusive transport session
+
+// TPM_STARTUP_TYPE values
+#define TPM_ST_CLEAR 0x0001
+#define TPM_ST_STATE 0x0002
+#define TPM_ST_DEACTIVATED 0x0003
+
+// TPM_TAG values
+#define TPM_TAG_RQU_COMMAND 0x00c1
+#define TPM_TAG_RQU_AUTH1_COMMAND 0x00c2
+#define TPM_TAG_RQU_AUTH2_COMMAND 0x00c3
+#define TPM_TAG_RSP_COMMAND 0x00c4
+#define TPM_TAG_RSP_AUTH1_COMMAND 0x00c5
+#define TPM_TAG_RSP_AUTH2_COMMAND 0x00c6
+
+// TPM_PAYLOAD_TYPE values
+#define TPM_PT_ASYM 0x01
+#define TPM_PT_BIND 0x02
+#define TPM_PT_MIGRATE 0x03
+#define TPM_PT_MAINT 0x04
+#define TPM_PT_SEAL 0x05
+
+// TPM_ENTITY_TYPE values
+#define TPM_ET_KEYHANDLE 0x0001
+#define TPM_ET_OWNER 0x0002
+#define TPM_ET_DATA 0x0003
+#define TPM_ET_SRK 0x0004
+#define TPM_ET_KEY 0x0005
+
+// TPM_RESOURCE_TYPE values
+#define TPM_RT_KEY      0x00000001
+#define TPM_RT_AUTH     0x00000002
+#define TPM_RT_HASH     0x00000003
+#define TPM_RT_TRANS    0x00000004
+#define TPM_RT_CONTEXT  0x00000005
+#define TPM_RT_COUNTER  0x00000006
+#define TPM_RT_DELEGATE 0x00000007
+#define TPM_RT_DAA_TPM  0x00000008
+#define TPM_RT_DAA_V0   0x00000009
+#define TPM_RT_DAA_V1   0x0000000A
+
+
+
+// TPM_PROTOCOL_ID values
+#define TPM_PID_OIAP 0x0001
+#define TPM_PID_OSAP 0x0002
+#define TPM_PID_ADIP 0x0003
+#define TPM_PID_ADCP 0x0004
+#define TPM_PID_OWNER 0x0005
+
+// TPM_ALGORITHM_ID values
+#define TPM_ALG_RSA 0x00000001
+#define TPM_ALG_SHA 0x00000004
+#define TPM_ALG_HMAC 0x00000005
+#define TPM_ALG_AES128 0x00000006
+#define TPM_ALG_MFG1 0x00000007
+#define TPM_ALG_AES192 0x00000008
+#define TPM_ALG_AES256 0x00000009
+#define TPM_ALG_XOR 0x0000000A
+
+// TPM_ENC_SCHEME values
+#define TPM_ES_NONE 0x0001
+#define TPM_ES_RSAESPKCSv15 0x0002
+#define TPM_ES_RSAESOAEP_SHA1_MGF1 0x0003
+
+// TPM_SIG_SCHEME values
+#define TPM_SS_NONE 0x0001
+#define TPM_SS_RSASSAPKCS1v15_SHA1 0x0002
+#define TPM_SS_RSASSAPKCS1v15_DER 0x0003
+
+/*
+ * TPM_CAPABILITY_AREA Values for TPM_GetCapability ([TPM_Part2], Section 21.1)
+ */
+#define TPM_CAP_ORD                     0x00000001
+#define TPM_CAP_ALG                     0x00000002
+#define TPM_CAP_PID                     0x00000003
+#define TPM_CAP_FLAG                    0x00000004
+#define TPM_CAP_PROPERTY                0x00000005
+#define TPM_CAP_VERSION                 0x00000006
+#define TPM_CAP_KEY_HANDLE              0x00000007
+#define TPM_CAP_CHECK_LOADED            0x00000008
+#define TPM_CAP_SYM_MODE                0x00000009
+#define TPM_CAP_KEY_STATUS              0x0000000C
+#define TPM_CAP_NV_LIST                 0x0000000D
+#define TPM_CAP_MFR                     0x00000010
+#define TPM_CAP_NV_INDEX                0x00000011
+#define TPM_CAP_TRANS_ALG               0x00000012
+#define TPM_CAP_HANDLE                  0x00000014
+#define TPM_CAP_TRANS_ES                0x00000015
+#define TPM_CAP_AUTH_ENCRYPT            0x00000017
+#define TPM_CAP_SELECT_SIZE             0x00000018
+#define TPM_CAP_DA_LOGIC                0x00000019
+#define TPM_CAP_VERSION_VAL             0x0000001A
+
+/* subCap definitions ([TPM_Part2], Section 21.2) */
+#define TPM_CAP_PROP_PCR                0x00000101
+#define TPM_CAP_PROP_DIR                0x00000102
+#define TPM_CAP_PROP_MANUFACTURER       0x00000103
+#define TPM_CAP_PROP_KEYS               0x00000104
+#define TPM_CAP_PROP_MIN_COUNTER        0x00000107
+#define TPM_CAP_FLAG_PERMANENT          0x00000108
+#define TPM_CAP_FLAG_VOLATILE           0x00000109
+#define TPM_CAP_PROP_AUTHSESS           0x0000010A
+#define TPM_CAP_PROP_TRANSESS           0x0000010B
+#define TPM_CAP_PROP_COUNTERS           0x0000010C
+#define TPM_CAP_PROP_MAX_AUTHSESS       0x0000010D
+#define TPM_CAP_PROP_MAX_TRANSESS       0x0000010E
+#define TPM_CAP_PROP_MAX_COUNTERS       0x0000010F
+#define TPM_CAP_PROP_MAX_KEYS           0x00000110
+#define TPM_CAP_PROP_OWNER              0x00000111
+#define TPM_CAP_PROP_CONTEXT            0x00000112
+#define TPM_CAP_PROP_MAX_CONTEXT        0x00000113
+#define TPM_CAP_PROP_FAMILYROWS         0x00000114
+#define TPM_CAP_PROP_TIS_TIMEOUT        0x00000115
+#define TPM_CAP_PROP_STARTUP_EFFECT     0x00000116
+#define TPM_CAP_PROP_DELEGATE_ROW       0x00000117
+#define TPM_CAP_PROP_MAX_DAASESS        0x00000119
+#define TPM_CAP_PROP_DAASESS            0x0000011A
+#define TPM_CAP_PROP_CONTEXT_DIST       0x0000011B
+#define TPM_CAP_PROP_DAA_INTERRUPT      0x0000011C
+#define TPM_CAP_PROP_SESSIONS           0x0000011D
+#define TPM_CAP_PROP_MAX_SESSIONS       0x0000011E
+#define TPM_CAP_PROP_CMK_RESTRICTION    0x0000011F
+#define TPM_CAP_PROP_DURATION           0x00000120
+#define TPM_CAP_PROP_ACTIVE_COUNTER     0x00000122
+#define TPM_CAP_PROP_MAX_NV_AVAILABLE   0x00000123
+#define TPM_CAP_PROP_INPUT_BUFFER       0x00000124
+
+// TPM_KEY_USAGE values
+#define TPM_KEY_EK 0x0000
+#define TPM_KEY_SIGNING 0x0010
+#define TPM_KEY_STORAGE 0x0011
+#define TPM_KEY_IDENTITY 0x0012
+#define TPM_KEY_AUTHCHANGE 0x0013
+#define TPM_KEY_BIND 0x0014
+#define TPM_KEY_LEGACY 0x0015
+
+// TPM_AUTH_DATA_USAGE values
+#define TPM_AUTH_NEVER 0x00
+#define TPM_AUTH_ALWAYS 0x01
+
+// Key Handle of owner and srk
+#define TPM_OWNER_KEYHANDLE 0x40000001
+#define TPM_SRK_KEYHANDLE 0x40000000
+
+// *************************** TYPEDEFS *********************************
+typedef unsigned char BYTE;
+typedef unsigned char BOOL;
+typedef uint16_t UINT16;
+typedef uint32_t UINT32;
+typedef uint64_t UINT64;
+
+typedef UINT32 TPM_RESULT;
+typedef UINT32 TPM_PCRINDEX;
+typedef UINT32 TPM_DIRINDEX;
+typedef UINT32 TPM_HANDLE;
+typedef TPM_HANDLE TPM_AUTHHANDLE;
+typedef TPM_HANDLE TCPA_HASHHANDLE;
+typedef TPM_HANDLE TCPA_HMACHANDLE;
+typedef TPM_HANDLE TCPA_ENCHANDLE;
+typedef TPM_HANDLE TPM_KEY_HANDLE;
+typedef TPM_HANDLE TCPA_ENTITYHANDLE;
+typedef UINT32 TPM_RESOURCE_TYPE;
+typedef UINT32 TPM_COMMAND_CODE;
+typedef UINT16 TPM_PROTOCOL_ID;
+typedef BYTE TPM_AUTH_DATA_USAGE;
+typedef UINT16 TPM_ENTITY_TYPE;
+typedef UINT32 TPM_ALGORITHM_ID;
+typedef UINT16 TPM_KEY_USAGE;
+typedef UINT16 TPM_STARTUP_TYPE;
+typedef UINT32 TPM_CAPABILITY_AREA;
+typedef UINT16 TPM_ENC_SCHEME;
+typedef UINT16 TPM_SIG_SCHEME;
+typedef UINT16 TPM_MIGRATE_SCHEME;
+typedef UINT16 TPM_PHYSICAL_PRESENCE;
+typedef UINT32 TPM_KEY_FLAGS;
+
+#define TPM_DIGEST_SIZE 20  // SHA-1 digest size; fixed by the TPM 1.2 spec
+typedef BYTE TPM_AUTHDATA[TPM_DIGEST_SIZE];
+typedef TPM_AUTHDATA TPM_SECRET;
+typedef TPM_AUTHDATA TPM_ENCAUTH;
+typedef BYTE TPM_PAYLOAD_TYPE;
+typedef UINT16 TPM_TAG;
+typedef UINT16 TPM_STRUCTURE_TAG;
+
+// Data Types of the TCS
+typedef UINT32 TCS_AUTHHANDLE;  // Handle addressing an authorization session
+typedef UINT32 TCS_CONTEXT_HANDLE; // Basic context handle
+typedef UINT32 TCS_KEY_HANDLE;  // Basic key handle
+
+// ************************* STRUCTURES **********************************
+
+typedef struct TPM_VERSION {
+  BYTE major;
+  BYTE minor;
+  BYTE revMajor;
+  BYTE revMinor;
+} TPM_VERSION;
+
+static const TPM_VERSION TPM_STRUCT_VER_1_1 = { 1,1,0,0 };
+
+typedef struct TPM_CAP_VERSION_INFO {
+   TPM_STRUCTURE_TAG tag;
+   TPM_VERSION version;
+   UINT16 specLevel;
+   BYTE errataRev;
+   BYTE tpmVendorID[4];
+   UINT16 vendorSpecificSize;
+   BYTE* vendorSpecific;
+} TPM_CAP_VERSION_INFO;
+
+inline void free_TPM_CAP_VERSION_INFO(TPM_CAP_VERSION_INFO* v) {
+   free(v->vendorSpecific);
+   v->vendorSpecific = NULL;
+}
+
+typedef struct TPM_DIGEST {
+  BYTE digest[TPM_DIGEST_SIZE];
+} TPM_DIGEST;
+
+typedef TPM_DIGEST TPM_PCRVALUE;
+typedef TPM_DIGEST TPM_COMPOSITE_HASH;
+typedef TPM_DIGEST TPM_DIRVALUE;
+typedef TPM_DIGEST TPM_HMAC;
+typedef TPM_DIGEST TPM_CHOSENID_HASH;
+
+typedef struct TPM_NONCE {
+  BYTE nonce[TPM_DIGEST_SIZE];
+} TPM_NONCE;
+
+typedef struct TPM_SYMMETRIC_KEY_PARMS {
+   UINT32 keyLength;
+   UINT32 blockSize;
+   UINT32 ivSize;
+   BYTE* IV;
+} TPM_SYMMETRIC_KEY_PARMS;
+
+inline void free_TPM_SYMMETRIC_KEY_PARMS(TPM_SYMMETRIC_KEY_PARMS* p) {
+   free(p->IV);
+   p->IV = NULL;
+}
+
+#define TPM_SYMMETRIC_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+typedef struct TPM_RSA_KEY_PARMS {
+  UINT32 keyLength;
+  UINT32 numPrimes;
+  UINT32 exponentSize;
+  BYTE* exponent;
+} TPM_RSA_KEY_PARMS;
+
+#define TPM_RSA_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+inline void free_TPM_RSA_KEY_PARMS(TPM_RSA_KEY_PARMS* p) {
+   free(p->exponent);
+   p->exponent = NULL;
+}
+
+typedef struct TPM_KEY_PARMS {
+  TPM_ALGORITHM_ID algorithmID;
+  TPM_ENC_SCHEME encScheme;
+  TPM_SIG_SCHEME sigScheme;
+  UINT32 parmSize;
+  union {
+     TPM_SYMMETRIC_KEY_PARMS sym;
+     TPM_RSA_KEY_PARMS rsa;
+  } parms;
+} TPM_KEY_PARMS;
+
+#define TPM_KEY_PARMS_INIT { 0, 0, 0, 0 }
+
+inline void free_TPM_KEY_PARMS(TPM_KEY_PARMS* p) {
+   if(p->parmSize) {
+      switch(p->algorithmID) {
+         case TPM_ALG_RSA:
+            free_TPM_RSA_KEY_PARMS(&p->parms.rsa);
+            break;
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            free_TPM_SYMMETRIC_KEY_PARMS(&p->parms.sym);
+            break;
+      }
+   }
+}
+
+typedef struct TPM_STORE_PUBKEY {
+  UINT32 keyLength;
+  BYTE* key;
+} TPM_STORE_PUBKEY;
+
+#define TPM_STORE_PUBKEY_INIT { 0, NULL }
+
+inline void free_TPM_STORE_PUBKEY(TPM_STORE_PUBKEY* p) {
+   free(p->key);
+   p->key = NULL;
+}
+
+typedef struct TPM_PUBKEY {
+  TPM_KEY_PARMS algorithmParms;
+  TPM_STORE_PUBKEY pubKey;
+} TPM_PUBKEY;
+
+#define TPM_PUBKEY_INIT { TPM_KEY_PARMS_INIT, TPM_STORE_PUBKEY_INIT }
+
+inline void free_TPM_PUBKEY(TPM_PUBKEY* k) {
+   free_TPM_KEY_PARMS(&k->algorithmParms);
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+}
+
+typedef struct TPM_PCR_SELECTION {
+   UINT16 sizeOfSelect;
+   BYTE* pcrSelect;
+} TPM_PCR_SELECTION;
+
+#define TPM_PCR_SELECTION_INIT { 0, NULL }
+
+inline void free_TPM_PCR_SELECTION(TPM_PCR_SELECTION* p) {
+   free(p->pcrSelect);
+   p->pcrSelect = NULL;
+}
+
+typedef struct TPM_PCR_INFO {
+   TPM_PCR_SELECTION pcrSelection;
+   TPM_COMPOSITE_HASH digestAtRelease;
+   TPM_COMPOSITE_HASH digestAtCreation;
+} TPM_PCR_INFO;
+
+#define TPM_PCR_INFO_INIT { TPM_PCR_SELECTION_INIT }
+
+inline void free_TPM_PCR_INFO(TPM_PCR_INFO* p) {
+   free_TPM_PCR_SELECTION(&p->pcrSelection);
+}
+
+typedef struct TPM_PCR_COMPOSITE {
+  TPM_PCR_SELECTION select;
+  UINT32 valueSize;
+  TPM_PCRVALUE* pcrValue;
+} TPM_PCR_COMPOSITE;
+
+#define TPM_PCR_COMPOSITE_INIT { TPM_PCR_SELECTION_INIT, 0, NULL }
+
+inline void free_TPM_PCR_COMPOSITE(TPM_PCR_COMPOSITE* p) {
+   free_TPM_PCR_SELECTION(&p->select);
+   free(p->pcrValue);
+   p->pcrValue = NULL;
+}
+
+typedef struct TPM_KEY {
+  TPM_VERSION         ver;
+  TPM_KEY_USAGE       keyUsage;
+  TPM_KEY_FLAGS       keyFlags;
+  TPM_AUTH_DATA_USAGE authDataUsage;
+  TPM_KEY_PARMS       algorithmParms;
+  UINT32              PCRInfoSize;
+  TPM_PCR_INFO        PCRInfo;
+  TPM_STORE_PUBKEY    pubKey;
+  UINT32              encDataSize;
+  BYTE*               encData;
+} TPM_KEY;
+
+#define TPM_KEY_INIT { .algorithmParms = TPM_KEY_PARMS_INIT,\
+   .PCRInfoSize = 0, .PCRInfo = TPM_PCR_INFO_INIT, \
+   .pubKey = TPM_STORE_PUBKEY_INIT, \
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_KEY(TPM_KEY* k) {
+   if(k->PCRInfoSize) {
+      free_TPM_PCR_INFO(&k->PCRInfo);
+   }
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+   free(k->encData);
+   k->encData = NULL;
+}
+
+typedef struct TPM_BOUND_DATA {
+  TPM_VERSION ver;
+  TPM_PAYLOAD_TYPE payload;
+  BYTE* payloadData;
+} TPM_BOUND_DATA;
+
+#define TPM_BOUND_DATA_INIT { .payloadData = NULL }
+
+inline void free_TPM_BOUND_DATA(TPM_BOUND_DATA* d) {
+   free(d->payloadData);
+   d->payloadData = NULL;
+}
+
+typedef struct TPM_STORED_DATA {
+  TPM_VERSION ver;
+  UINT32 sealInfoSize;
+  TPM_PCR_INFO sealInfo;
+  UINT32 encDataSize;
+  BYTE* encData;
+} TPM_STORED_DATA;
+
+#define TPM_STORED_DATA_INIT { .sealInfoSize = 0, .sealInfo = TPM_PCR_INFO_INIT,\
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_STORED_DATA(TPM_STORED_DATA* d) {
+   if(d->sealInfoSize) {
+      free_TPM_PCR_INFO(&d->sealInfo);
+   }
+   free(d->encData);
+   d->encData = NULL;
+}
+
+typedef struct TPM_AUTH_SESSION {
+  TPM_AUTHHANDLE  AuthHandle;
+  TPM_NONCE   NonceOdd;   // system
+  TPM_NONCE   NonceEven;   // TPM
+  BOOL   fContinueAuthSession;
+  TPM_AUTHDATA  HMAC;
+} TPM_AUTH_SESSION;
+
+#define TPM_AUTH_SESSION_INIT { .AuthHandle = 0, .fContinueAuthSession = FALSE }
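+
+// A hypothetical caller sketch: open a session with TPM_OIAP(), use it
+// in authorized commands, then terminate it if the TPM left it open:
+//
+//   TPM_AUTH_SESSION auth = TPM_AUTH_SESSION_INIT;
+//   if (TPM_OIAP(&auth) != TPM_SUCCESS) { /* handle error */ }
+//   /* ... authorized commands taking &auth ... */
+//   TPM_TerminateHandle(auth.AuthHandle);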
+
+// ---------------------- Functions for checking TPM_RESULTs -----------------
+
+#include <stdio.h>
+
+// FIXME: Review use of these and delete unneeded ones.
+
+// These macros depend on the enclosing function's local structure:
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+#define ERRORDIE(s) do { status = s; \
+                         fprintf(stderr, "*** ERRORDIE in %s at %s:%i\n", __func__, __FILE__, __LINE__); \
+                         goto abort_egress; } \
+                    while (0)
+
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+// Try command c (evaluated exactly once). If it fails, set status to s
+// and goto abort_egress.
+#define TPMTRY(s,c) do { \
+                       status = c; \
+                       if (status != TPM_SUCCESS) { \
+                          status = s; \
+                          fprintf(stderr, "ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                          goto abort_egress; \
+                       } \
+                    } while(0)
+
+// Try command c. On failure, print an error message, leave status set to
+// the actual return code, and goto abort_egress.
+#define TPMTRYRETURN(c) do { status = c; \
+                             if (status != TPM_SUCCESS) { \
+                               fprintf(stderr, "ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                               goto abort_egress; \
+                             } \
+                        } while(0)
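+
+// Illustrative (hypothetical) use of TPMTRYRETURN: the caller supplies a
+// local 'status' and an 'abort_egress' cleanup label:
+//
+//   TPM_RESULT status = TPM_SUCCESS;
+//   TPMTRYRETURN( TPM_SaveState() );
+//   /* ... more commands ... */
+// abort_egress:
+//   return status;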
+
+
+#endif //__TCPA_H__
diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
new file mode 100644
index 0000000..123a27c
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.c
@@ -0,0 +1,938 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdio.h>
+#include <string.h>
+#include <malloc.h>
+#include <unistd.h>
+#include <errno.h>
+
+#include <polarssl/sha1.h>
+
+#include "tcg.h"
+#include "tpm.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpmrsa.h"
+#include "vtpmmgr.h"
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+#define TPM_BEGIN(TAG, ORD) \
+   const TPM_TAG intag = TAG;\
+TPM_TAG tag = intag;\
+UINT32 paramSize;\
+const TPM_COMMAND_CODE ordinal = ORD;\
+TPM_RESULT status = TPM_SUCCESS;\
+BYTE in_buf[TCPA_MAX_BUFFER_LENGTH];\
+BYTE out_buf[TCPA_MAX_BUFFER_LENGTH];\
+UINT32 out_len = sizeof(out_buf);\
+BYTE* ptr = in_buf;\
+/*Print a log message */\
+vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);\
+/* Pack the header*/\
+ptr = pack_TPM_TAG(ptr, tag);\
+ptr += sizeof(UINT32); /* skip paramSize; filled in by TPM_TRANSMIT */\
+ptr = pack_TPM_COMMAND_CODE(ptr, ordinal)\
+
+#define TPM_AUTH_BEGIN() \
+   sha1_context sha1_ctx;\
+BYTE* authbase = ptr - sizeof(TPM_COMMAND_CODE);\
+TPM_DIGEST paramDigest;\
+sha1_starts(&sha1_ctx)
+
+#define TPM_AUTH1_GEN(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_AUTH2_GEN(HMACkey, auth) do {\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_TRANSMIT() do {\
+   /* Pack the command size */\
+   paramSize = ptr - in_buf;\
+   pack_UINT32(in_buf + sizeof(TPM_TAG), paramSize);\
+   if((status = TPM_TransmitData(in_buf, paramSize, out_buf, &out_len)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_VERIFY_BEGIN() do {\
+   UINT32 buf[2] = { cpu_to_be32(status), cpu_to_be32(ordinal) };\
+   sha1_starts(&sha1_ctx);\
+   sha1_update(&sha1_ctx, (unsigned char*)buf, sizeof(buf));\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH1_VERIFY(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH2_VERIFY(HMACkey, auth) do {\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+
+#define TPM_UNPACK_VERIFY() do { \
+   ptr = out_buf;\
+   ptr = unpack_TPM_RSP_HEADER(ptr, \
+         &(tag), &(paramSize), &(status));\
+   if((status) != TPM_SUCCESS || (tag) != (intag + 3)) { /* RSP tag == RQU tag + 3 */ \
+      vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_HASH() do {\
+   sha1_update(&sha1_ctx, authbase, ptr - authbase);\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_SKIP() do {\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_ERR_CHECK(auth) do {\
+   if(status != TPM_SUCCESS || auth->fContinueAuthSession == FALSE) {\
+      vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM\n", auth->AuthHandle);\
+      auth->AuthHandle = 0;\
+   }\
+} while(0)
+
+static void xorEncrypt(const TPM_SECRET* sharedSecret,
+      TPM_NONCE* nonce,
+      const TPM_AUTHDATA* inAuth0,
+      TPM_ENCAUTH outAuth0,
+      const TPM_AUTHDATA* inAuth1,
+      TPM_ENCAUTH outAuth1) {
+   BYTE XORbuffer[sizeof(TPM_SECRET) + sizeof(TPM_NONCE)];
+   BYTE XORkey[TPM_DIGEST_SIZE];
+   BYTE* ptr = XORbuffer;
+   ptr = pack_TPM_SECRET(ptr, sharedSecret);
+   ptr = pack_TPM_NONCE(ptr, nonce);
+
+   sha1(XORbuffer, ptr - XORbuffer, XORkey);
+
+   if(inAuth0) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth0[i] = XORkey[i] ^ (*inAuth0)[i];
+      }
+   }
+   if(inAuth1) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth1[i] = XORkey[i] ^ (*inAuth1)[i];
+      }
+   }
+
+}
+
+static void generateAuth(const TPM_DIGEST* paramDigest,
+      const TPM_SECRET* HMACkey,
+      TPM_AUTH_SESSION *auth)
+{
+   //Generate new OddNonce
+   vtpmmgr_rand((BYTE*)auth->NonceOdd.nonce, sizeof(TPM_NONCE));
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac((BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         auth->HMAC);
+}
+
+static TPM_RESULT verifyAuth(const TPM_DIGEST* paramDigest,
+      /*[IN]*/ const TPM_SECRET *HMACkey,
+      /*[IN,OUT]*/ TPM_AUTH_SESSION *auth)
+{
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   TPM_AUTHDATA hm;
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac( (BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         hm);
+
+   // Compare correct HMAC with provided one.
+   if (memcmp(hm, auth->HMAC, sizeof(TPM_DIGEST)) == 0) { // 0 indicates equality
+      return TPM_SUCCESS;
+   } else {
+      vtpmlogerror(VTPM_LOG_TPM, "Auth Session verification failed!\n");
+      return TPM_AUTHFAIL;
+   }
+}
+
+
+// ------------------------------------------------------------------
+// Authorization Commands
+// ------------------------------------------------------------------
+
+TPM_RESULT TPM_OIAP(TPM_AUTH_SESSION*   auth)  // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OIAP);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = TRUE;
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OIAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_OSAP(TPM_ENTITY_TYPE  entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth)
+{
+   BYTE* nonceOddOSAP;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OSAP);
+
+   ptr = pack_TPM_ENTITY_TYPE(ptr, entityType);
+   ptr = pack_UINT32(ptr, entityValue);
+
+   //nonce Odd OSAP
+   nonceOddOSAP = ptr;
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   //Calculate session secret
+   sha1_context ctx;
+   sha1_hmac_starts(&ctx, *usageAuth, TPM_DIGEST_SIZE);
+   sha1_hmac_update(&ctx, ptr, TPM_DIGEST_SIZE); //ptr = nonceEvenOSAP
+   sha1_hmac_update(&ctx, nonceOddOSAP, TPM_DIGEST_SIZE);
+   sha1_hmac_finish(&ctx, *sharedSecret);
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = FALSE;
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OSAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth)   // in, out
+{
+   int keyAlloced = 0;
+   tpmrsa_context ek_rsa = TPMRSA_CTX_INIT;
+
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_TakeOwnership);
+   TPM_AUTH_BEGIN();
+
+   tpmrsa_set_pubkey(&ek_rsa,
+         pubEK->pubKey.key, pubEK->pubKey.keyLength,
+         pubEK->algorithmParms.parms.rsa.exponent,
+         pubEK->algorithmParms.parms.rsa.exponentSize);
+
+   /* Pack the protocol ID */
+   ptr = pack_UINT16(ptr, TPM_PID_OWNER);
+
+   /* Pack the encrypted owner auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) ownerAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the encrypted srk auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) srkAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the Srk key */
+   ptr = pack_TPM_KEY(ptr, inSrk);
+
+   /* Hash everything up to here */
+   TPM_AUTH_HASH();
+
+   /* Generate the authorization */
+   TPM_AUTH1_GEN(ownerAuth, auth);
+
+   /* Send the command to the tpm*/
+   TPM_TRANSMIT();
+   /* Unpack and validate the header */
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   if(outSrk != NULL) {
+      /* If the user wants a copy of the srk we give it to them */
+      keyAlloced = 1;
+      ptr = unpack_TPM_KEY(ptr, outSrk, UNPACK_ALLOC);
+   } else {
+      /*otherwise just parse past it */
+      TPM_KEY temp;
+      ptr = unpack_TPM_KEY(ptr, &temp, UNPACK_ALIAS);
+   }
+
+   /* Hash the output key */
+   TPM_AUTH_HASH();
+
+   /* Verify authorization */
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(outSrk);
+   }
+egress:
+   tpmrsa_free(&ek_rsa);
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_DisablePubekRead);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(ownerAuth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_TerminateHandle(TPM_AUTHHANDLE  handle)  // in
+{
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Terminate_Handle);
+
+   ptr = pack_TPM_AUTHHANDLE(ptr, handle);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM_TerminateHandle\n", handle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Extend( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST  inDigest, // in
+      TPM_PCRVALUE*  outDigest) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Extend);
+
+   ptr = pack_TPM_PCRINDEX(ptr, pcrNum);
+   ptr = pack_TPM_DIGEST(ptr, &inDigest);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_TPM_PCRVALUE(ptr, outDigest);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Seal(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealedDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      )
+{
+   int dataAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_Seal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   xorEncrypt(osapSharedSecret, &pubAuth->NonceEven,
+         sealedDataAuth, ptr,
+         NULL, NULL);
+   ptr += sizeof(TPM_ENCAUTH);
+
+   ptr = pack_UINT32(ptr, pcrInfoSize);
+   ptr = pack_TPM_PCR_INFO(ptr, pcrInfo);
+
+   ptr = pack_UINT32(ptr, inDataSize);
+   ptr = pack_BUFFER(ptr, inData, inDataSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pubAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_TPM_STORED_DATA(ptr, sealedData, UNPACK_ALLOC);
+   dataAlloced = 1;
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pubAuth);
+
+   goto egress;
+abort_egress:
+   if(dataAlloced) {
+      free_TPM_STORED_DATA(sealedData);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pubAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Unseal(
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH2_COMMAND, TPM_ORD_Unseal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_STORED_DATA(ptr, sealedData);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(key_usage_auth, keyAuth);
+   TPM_AUTH2_GEN(data_usage_auth, dataAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, outSize);
+   ptr = unpack_ALLOC(ptr, out, *outSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(key_usage_auth, keyAuth);
+   TPM_AUTH2_VERIFY(data_usage_auth, dataAuth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(keyAuth);
+   TPM_AUTH_ERR_CHECK(dataAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key,
+      const BYTE* in,
+      UINT32 ilen,
+      BYTE* out)
+{
+   TPM_RESULT status;
+   tpmrsa_context rsa = TPMRSA_CTX_INIT;
+   TPM_BOUND_DATA boundData;
+   uint8_t plain[TCPA_MAX_BUFFER_LENGTH];
+   BYTE* ptr = plain;
+
+   vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);
+
+   tpmrsa_set_pubkey(&rsa,
+         key->pubKey.key, key->pubKey.keyLength,
+         key->algorithmParms.parms.rsa.exponent,
+         key->algorithmParms.parms.rsa.exponentSize);
+
+   // Fill in boundData's version and payload-type fields
+   boundData.ver = TPM_STRUCT_VER_1_1;
+   boundData.payload = TPM_PT_BIND;
+   boundData.payloadData = (BYTE*)in;
+
+   //marshall the bound data object
+   ptr = pack_TPM_BOUND_DATA(ptr, &boundData, ilen);
+
+   // Encrypt the data
+   TPMTRYRETURN(tpmrsa_pub_encrypt_oaep(&rsa,
+            ctr_drbg_random, &vtpm_globals.ctr_drbg,
+            ptr - plain,
+            plain,
+            out));
+
+abort_egress:
+   tpmrsa_free(&rsa);
+   return status;
+
+}
+
+TPM_RESULT TPM_UnBind(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //in
+      UINT32* olen, //out
+      BYTE*    out, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_UnBind);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_UINT32(ptr, ilen);
+   ptr = pack_BUFFER(ptr, in, ilen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, olen);
+   if(*olen > ilen) {
+      vtpmlogerror(VTPM_LOG_TPM, "Output length > input length!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+   ptr = unpack_BUFFER(ptr, out, *olen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_CreateWrapKey(
+      TPM_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in, out
+      TPM_AUTH_SESSION*   pAuth)    // in, out
+{
+   int keyAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_CreateWrapKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hWrappingKey);
+
+   TPM_AUTH_SKIP();
+
+   //Encrypted auths
+   xorEncrypt(osapSharedSecret, &pAuth->NonceEven,
+         dataUsageAuth, ptr,
+         dataMigrationAuth, ptr + sizeof(TPM_ENCAUTH));
+   ptr += sizeof(TPM_ENCAUTH) * 2;
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   keyAlloced = 1;
+   ptr = unpack_TPM_KEY(ptr, key, UNPACK_ALLOC);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pAuth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(key);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pAuth);
+   return status;
+}
+
+TPM_RESULT TPM_LoadKey(
+      TPM_KEY_HANDLE  parentHandle, // in
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_LoadKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, keyHandle);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key Handle: 0x%x opened by TPM_LoadKey\n", *keyHandle);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_EvictKey( TPM_KEY_HANDLE  hKey)  // in
+{
+   if(hKey == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_EvictKey);
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hKey);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key handle: 0x%x closed by TPM_EvictKey\n", hKey);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle,
+      TPM_RESOURCE_TYPE rt) {
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_FlushSpecific);
+
+   ptr = pack_TPM_HANDLE(ptr, handle);
+   ptr = pack_TPM_RESOURCE_TYPE(ptr, rt);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetRandom( UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetRandom);
+
+   // check input params
+   if (bytesRequested == NULL || randomBytes == NULL){
+      return TPM_BAD_PARAMETER;
+   }
+
+   ptr = pack_UINT32(ptr, *bytesRequested);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, bytesRequested);
+   ptr = unpack_BUFFER(ptr, randomBytes, *bytesRequested);
+
+abort_egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_ReadPubek(
+      TPM_PUBKEY* pubEK //out
+      )
+{
+   BYTE* antiReplay = NULL;
+   BYTE* kptr = NULL;
+   BYTE digest[TPM_DIGEST_SIZE];
+   sha1_context ctx;
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_ReadPubek);
+
+   //antiReplay nonce
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   antiReplay = ptr;
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   //unpack and allocate the key
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   //Verify the checksum
+   sha1_starts(&ctx);
+   sha1_update(&ctx, kptr, ptr - kptr);
+   sha1_update(&ctx, antiReplay, TPM_DIGEST_SIZE);
+   sha1_finish(&ctx, digest);
+
+   //ptr points to the checksum computed by TPM
+   if(memcmp(digest, ptr, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_TPM, "TPM_ReadPubek: Checksum returned by TPM was invalid!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr != NULL) { //If we unpacked the pubEK, we have to free it
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_SaveState(void)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_SaveState);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetCapability);
+
+   ptr = pack_TPM_CAPABILITY_AREA(ptr, capArea);
+   ptr = pack_UINT32(ptr, subCapSize);
+   ptr = pack_BUFFER(ptr, subCap, subCapSize);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, respSize);
+   ptr = unpack_ALLOC(ptr, resp, *respSize);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK)
+{
+   BYTE* kptr = NULL;
+   sha1_context ctx;
+   TPM_DIGEST checksum;
+   TPM_DIGEST hash;
+   TPM_NONCE antiReplay;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_CreateEndorsementKeyPair);
+
+   //Make anti replay nonce
+   vtpmmgr_rand(antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   ptr = pack_TPM_NONCE(ptr, &antiReplay);
+   ptr = pack_TPM_KEY_PARMS(ptr, keyInfo);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   sha1_starts(&ctx);
+
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   /* Hash the pub key blob */
+   sha1_update(&ctx, kptr, ptr - kptr);
+   ptr = unpack_TPM_DIGEST(ptr, &checksum);
+
+   sha1_update(&ctx, antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   sha1_finish(&ctx, hash.digest);
+   if(memcmp(checksum.digest, hash.digest, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "TPM_CreateEndorsementKeyPair: Checksum verification failed!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr) {
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   UINT32 i;
+   vtpmloginfo(VTPM_LOG_TXDATA, "Sending buffer = 0x");
+   for(i = 0 ; i < insize ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", in[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   ssize_t size = 0;
+
+   // send the request
+   size = write (vtpm_globals.tpm_fd, in, insize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "write() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+   else if ((UINT32) size < insize) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "Wrote %d instead of %d bytes!\n", (int) size, insize);
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   // read the response
+   size = read (vtpm_globals.tpm_fd, out, *outsize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "read() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   vtpmloginfo(VTPM_LOG_TXDATA, "Receiving buffer = 0x");
+   for(i = 0 ; i < size ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", out[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   *outsize = size;
+   goto egress;
+
+abort_egress:
+egress:
+   return status;
+}
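As background for readers following the `TPM_BEGIN`/`TPM_TRANSMIT` macros used throughout this file: every TPM 1.2 request begins with a big-endian header of tag (2 bytes), total length (4 bytes) and ordinal (4 bytes), followed by the packed parameters. A minimal sketch of that framing (Python, for illustration only; constants are from the TPM 1.2 spec):

```python
import struct

TPM_TAG_RQU_COMMAND = 0x00C1   # unauthenticated request tag
TPM_ORD_GetRandom = 0x00000046

def pack_request(tag: int, ordinal: int, body: bytes) -> bytes:
    # Big-endian header: 2-byte tag, 4-byte total size, 4-byte ordinal
    return struct.pack(">HII", tag, 2 + 4 + 4 + len(body), ordinal) + body

# TPM_GetRandom asking for 16 bytes: the body is a single big-endian UINT32
req = pack_request(TPM_TAG_RQU_COMMAND, TPM_ORD_GetRandom, struct.pack(">I", 16))
```

The `pack_*`/`unpack_*` helpers in marshal.h build exactly this kind of buffer, with the auth macros appending HMAC'd session data for AUTH1/AUTH2 commands.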
diff --git a/stubdom/vtpmmgr/tpm.h b/stubdom/vtpmmgr/tpm.h
new file mode 100644
index 0000000..304e145
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.h
@@ -0,0 +1,218 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005/2006, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TPM_H__
+#define __TPM_H__
+
+#include "tcg.h"
+
+// ------------------------------------------------------------------
+// Exposed API
+// ------------------------------------------------------------------
+
+// TPM v1.1B Command Set
+
+// Authorization
+TPM_RESULT TPM_OIAP(
+      TPM_AUTH_SESSION*   auth //out
+      );
+
+TPM_RESULT TPM_OSAP (
+      TPM_ENTITY_TYPE entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth);
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth   // in, out
+      );
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth
+      );
+
+TPM_RESULT TPM_TerminateHandle ( TPM_AUTHHANDLE  handle  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific ( TPM_HANDLE  handle,  // in
+      TPM_RESOURCE_TYPE resourceType //in
+      );
+
+// TPM Mandatory
+TPM_RESULT TPM_Extend ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST   inDigest, // in
+      TPM_PCRVALUE*   outDigest // out
+      );
+
+TPM_RESULT TPM_PcrRead ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_PCRVALUE*  outDigest // out
+      );
+
+TPM_RESULT TPM_Quote ( TCS_KEY_HANDLE  keyHandle,  // in
+      TPM_NONCE   antiReplay,  // in
+      UINT32*    PcrDataSize, // in, out
+      BYTE**    PcrData,  // in, out
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_Seal(
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      );
+
+TPM_RESULT TPM_Unseal (
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirWriteAuth ( TPM_DIRINDEX  dirIndex,  // in
+      TPM_DIRVALUE  newContents, // in
+      TPM_AUTH_SESSION*   ownerAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirRead ( TPM_DIRINDEX  dirIndex, // in
+      TPM_DIRVALUE*  dirValue // out
+      );
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key, //in
+      const BYTE* in, //in
+      UINT32 ilen, //in
+      BYTE* out //out, must be at least cipher block size
+      );
+
+TPM_RESULT TPM_UnBind (
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //in
+      UINT32*   outDataSize, // out
+      BYTE*    outData, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      );
+
+TPM_RESULT TPM_CreateWrapKey (
+      TCS_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in
+      TPM_AUTH_SESSION*   pAuth    // in, out
+      );
+
+TPM_RESULT TPM_LoadKey (
+      TPM_KEY_HANDLE  parentHandle, //in
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth
+      );
+
+TPM_RESULT TPM_GetPubKey (  TCS_KEY_HANDLE  hKey,   // in
+      TPM_AUTH_SESSION*   pAuth,   // in, out
+      UINT32*    pcPubKeySize, // out
+      BYTE**    prgbPubKey  // out
+      );
+
+TPM_RESULT TPM_EvictKey ( TCS_KEY_HANDLE  hKey  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle, //in
+      TPM_RESOURCE_TYPE rt //in
+      );
+
+TPM_RESULT TPM_Sign ( TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    areaToSignSize, // in
+      BYTE*    areaToSign,  // in
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_GetRandom (  UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes  // out
+      );
+
+TPM_RESULT TPM_StirRandom (  UINT32    inDataSize, // in
+      BYTE*    inData  // in
+      );
+
+TPM_RESULT TPM_ReadPubek (
+      TPM_PUBKEY* pubEK //out
+      );
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp);
+
+TPM_RESULT TPM_SaveState(void);
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK);
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize);
+
+#endif // __TPM_H__
diff --git a/stubdom/vtpmmgr/tpmrsa.c b/stubdom/vtpmmgr/tpmrsa.c
new file mode 100644
index 0000000..56094e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.c
@@ -0,0 +1,175 @@
+/*
+ *  The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2011, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+/*
+ *  RSA was designed by Ron Rivest, Adi Shamir and Len Adleman.
+ *
+ *  http://theory.lcs.mit.edu/~rivest/rsapaper.pdf
+ *  http://www.cacr.math.uwaterloo.ca/hac/about/chap8.pdf
+ */
+
+#include "tcg.h"
+#include "polarssl/sha1.h"
+
+#include <stdlib.h>
+#include <stdio.h>
+
+#include "tpmrsa.h"
+
+#define HASH_LEN 20
+
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen) {
+
+   tpmrsa_free(ctx);
+
+   if(explen == 0) { //Default e = 2^16 + 1 = 65537
+      mpi_lset(&ctx->E, 65537);
+   } else {
+      mpi_read_binary(&ctx->E, exponent, explen);
+   }
+   mpi_read_binary(&ctx->N, key, keylen);
+
+   ctx->len = ( mpi_msb(&ctx->N) + 7) >> 3;
+}
+
+static TPM_RESULT tpmrsa_public( tpmrsa_context *ctx,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   size_t olen;
+   mpi T;
+
+   mpi_init( &T );
+
+   MPI_CHK( mpi_read_binary( &T, input, ctx->len ) );
+
+   if( mpi_cmp_mpi( &T, &ctx->N ) >= 0 )
+   {
+      mpi_free( &T );
+      return TPM_ENCRYPT_ERROR;
+   }
+
+   olen = ctx->len;
+   MPI_CHK( mpi_exp_mod( &T, &T, &ctx->E, &ctx->N, &ctx->RN ) );
+   MPI_CHK( mpi_write_binary( &T, output, olen ) );
+
+cleanup:
+
+   mpi_free( &T );
+
+   if( ret != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   return TPM_SUCCESS;
+}
+
+static void mgf_mask( unsigned char *dst, int dlen, unsigned char *src, int slen)
+{
+   unsigned char mask[HASH_LEN];
+   unsigned char counter[4] = {0, 0, 0, 0};
+   int i;
+   sha1_context mctx;
+
+   //We always hash the src with the counter, so save the partial hash
+   sha1_starts(&mctx);
+   sha1_update(&mctx, src, slen);
+
+   // Generate and apply dbMask
+   while(dlen > 0) {
+      //Copy the sha1 context
+      sha1_context ctx = mctx;
+
+      //compute hash for input || counter
+      sha1_update(&ctx, counter, sizeof(counter));
+      sha1_finish(&ctx, mask);
+
+      //Apply the mask
+      for(i = 0; i < (dlen < HASH_LEN ? dlen : HASH_LEN); ++i) {
+         *(dst++) ^= mask[i];
+      }
+
+      //Increment counter
+      ++counter[3];
+
+      dlen -= HASH_LEN;
+   }
+}
+
+/*
+ * Add the message padding, then do an RSA operation
+ */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   int olen;
+   unsigned char* seed = output + 1;
+   unsigned char* db = output + HASH_LEN +1;
+
+   olen = ctx->len-1;
+
+   if( f_rng == NULL )
+      return TPM_ENCRYPT_ERROR;
+
+   if( ilen > olen - 2 * HASH_LEN - 1)
+      return TPM_ENCRYPT_ERROR;
+
+   output[0] = 0;
+
+   //Encoding parameter p
+   sha1((unsigned char*)"TCPA", 4, db);
+
+   //PS
+   memset(db + HASH_LEN, 0,
+         olen - ilen - 2 * HASH_LEN - 1);
+
+   //constant 1 byte
+   db[olen - ilen - HASH_LEN -1] = 0x01;
+
+   //input string
+   memcpy(db + olen - ilen - HASH_LEN,
+         input, ilen);
+
+   //Generate random seed
+   if( ( ret = f_rng( p_rng, seed, HASH_LEN ) ) != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   // maskedDB: Apply dbMask to DB
+   mgf_mask( db, olen - HASH_LEN, seed, HASH_LEN);
+
+   // maskedSeed: Apply seedMask to seed
+   mgf_mask( seed, HASH_LEN, db, olen - HASH_LEN);
+
+   // Do the crypto op
+   return tpmrsa_public(ctx, output, output);
+}
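For readers following the padding logic in tpmrsa_pub_encrypt_oaep() above, the same OAEP encoding (SHA-1, with the fixed "TCPA" encoding parameter) can be sketched in Python, together with the inverse operation used here only to sanity-check the layout. The 2048-bit (256-byte) modulus size is illustrative:

```python
import hashlib, os

H = 20  # SHA-1 digest size (HASH_LEN above)

def mgf1(seed: bytes, length: int) -> bytes:
    # MGF1 with SHA-1, as implemented by mgf_mask(): SHA1(seed || counter) blocks
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha1(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def oaep_encode(msg: bytes, k: int) -> bytes:
    # EM = 0x00 || maskedSeed || maskedDB, where k is the modulus size in bytes
    ps = b"\x00" * (k - len(msg) - 2 * H - 2)
    db = hashlib.sha1(b"TCPA").digest() + ps + b"\x01" + msg
    seed = os.urandom(H)
    masked_db = xor(db, mgf1(seed, len(db)))
    masked_seed = xor(seed, mgf1(masked_db, H))
    return b"\x00" + masked_seed + masked_db

def oaep_decode(em: bytes, k: int) -> bytes:
    # Inverse of the above, for verification only (the TPM does this internally)
    masked_seed, masked_db = em[1:1 + H], em[1 + H:]
    seed = xor(masked_seed, mgf1(masked_db, H))
    db = xor(masked_db, mgf1(seed, len(masked_db)))
    assert db[:H] == hashlib.sha1(b"TCPA").digest()
    rest = db[H:].lstrip(b"\x00")
    assert rest[0] == 0x01
    return rest[1:]
```

Note the masking order matters: dbMask is derived from the random seed, then seedMask from the already-masked DB, exactly as in the two mgf_mask() calls above.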
diff --git a/stubdom/vtpmmgr/tpmrsa.h b/stubdom/vtpmmgr/tpmrsa.h
new file mode 100644
index 0000000..59579e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.h
@@ -0,0 +1,67 @@
+/**
+ * \file rsa.h
+ *
+ * \brief The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2010, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+#ifndef TPMRSA_H
+#define TPMRSA_H
+
+#include "tcg.h"
+#include <polarssl/bignum.h>
+
+/* tpm software key */
+typedef struct
+{
+    size_t len;                 /*!<  size(N) in chars  */
+
+    mpi N;                      /*!<  public modulus    */
+    mpi E;                      /*!<  public exponent   */
+
+    mpi RN;                     /*!<  cached R^2 mod N  */
+}
+tpmrsa_context;
+
+#define TPMRSA_CTX_INIT { 0, {0, 0, NULL}, {0, 0, NULL}, {0, 0, NULL}}
+
+/* Setup the rsa context using tpm public key data */
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen);
+
+/* Do rsa public crypto */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output );
+
+/* free tpmrsa key */
+inline void tpmrsa_free( tpmrsa_context *ctx ) {
+   mpi_free( &ctx->RN ); mpi_free( &ctx->E  ); mpi_free( &ctx->N  );
+}
+
+#endif /* TPMRSA_H */
diff --git a/stubdom/vtpmmgr/uuid.h b/stubdom/vtpmmgr/uuid.h
new file mode 100644
index 0000000..4737645
--- /dev/null
+++ b/stubdom/vtpmmgr/uuid.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_UUID_H
+#define VTPMMGR_UUID_H
+
+#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
+#define UUID_FMTLEN ((2*16)+4) /* 16 hex bytes plus 4 hyphens */
+#define UUID_BYTES(uuid) uuid[0], uuid[1], uuid[2], uuid[3], \
+                                uuid[4], uuid[5], uuid[6], uuid[7], \
+                                uuid[8], uuid[9], uuid[10], uuid[11], \
+                                uuid[12], uuid[13], uuid[14], uuid[15]
+
+
+typedef uint8_t uuid_t[16];
+
+#endif
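The UUID_FMT/UUID_BYTES pair above expands a 16-byte array into the canonical 8-4-4-4-12 hex form. Equivalent logic, sketched in Python purely for illustration:

```python
def uuid_str(uuid: bytes) -> str:
    # Same grouping as UUID_FMT: bytes 0-3, 4-5, 6-7, 8-9, 10-15, lowercase hex
    assert len(uuid) == 16
    h = uuid.hex()
    return "-".join((h[0:8], h[8:12], h[12:16], h[16:20], h[20:32]))
```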
diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
new file mode 100644
index 0000000..f82a2a9
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -0,0 +1,152 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <inttypes.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "marshal.h"
+#include "log.h"
+#include "vtpm_storage.h"
+#include "vtpmmgr.h"
+#include "tpm.h"
+#include "tcg.h"
+
+static TPM_RESULT vtpmmgr_SaveHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+
+   if(tpmcmd->req_len != VTPM_COMMAND_HEADER_SIZE + HASHKEYSZ) {
+      vtpmlogerror(VTPM_LOG_VTPM, "VTPM_ORD_SAVEHASHKEY hashkey too short!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Do the command */
+   TPMTRYRETURN(vtpm_storage_save_hashkey(uuid, tpmcmd->req + VTPM_COMMAND_HEADER_SIZE));
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, VTPM_COMMAND_HEADER_SIZE, status);
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   return status;
+}
+
+static TPM_RESULT vtpmmgr_LoadHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   TPMTRYRETURN(vtpm_storage_load_hashkey(uuid, tpmcmd->resp + VTPM_COMMAND_HEADER_SIZE));
+
+   tpmcmd->resp_len += HASHKEYSZ;
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, tpmcmd->resp_len, status);
+
+   return status;
+}
+
+
+TPM_RESULT vtpmmgr_handle_cmd(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_TAG tag;
+   UINT32 size;
+   TPM_COMMAND_CODE ord;
+
+   unpack_TPM_RQU_HEADER(tpmcmd->req,
+         &tag, &size, &ord);
+
+   /* Handle the command now */
+   switch(tag) {
+      case VTPM_TAG_REQ:
+         //This is a vTPM command
+         switch(ord) {
+            case VTPM_ORD_SAVEHASHKEY:
+               return vtpmmgr_SaveHashKey(uuid, tpmcmd);
+            case VTPM_ORD_LOADHASHKEY:
+               return vtpmmgr_LoadHashKey(uuid, tpmcmd);
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "Invalid vTPM Ordinal %" PRIu32 "\n", ord);
+               status = TPM_BAD_ORDINAL;
+         }
+         break;
+      case TPM_TAG_RQU_COMMAND:
+      case TPM_TAG_RQU_AUTH1_COMMAND:
+      case TPM_TAG_RQU_AUTH2_COMMAND:
+         //This is a TPM passthrough command
+         switch(ord) {
+            case TPM_ORD_GetRandom:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
+               break;
+            case TPM_ORD_PcrRead:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
+               break;
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "TPM Disallowed Passthrough ord=%" PRIu32 "\n", ord);
+               status = TPM_DISABLED_CMD;
+               goto abort_egress;
+         }
+
+         size = TCPA_MAX_BUFFER_LENGTH;
+         TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len, tpmcmd->resp, &size));
+         tpmcmd->resp_len = size;
+
+         unpack_TPM_RESULT(tpmcmd->resp + sizeof(TPM_TAG) + sizeof(UINT32), &status);
+         return status;
+
+         break;
+      default:
+         vtpmlogerror(VTPM_LOG_VTPM, "Invalid tag=%" PRIu16 "\n", tag);
+         status = TPM_BADTAG;
+   }
+
+abort_egress:
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         tag + 3, tpmcmd->resp_len, status);
+
+   return status;
+}
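One subtlety in vtpmmgr_handle_cmd() above: the final pack_TPM_RSP_HEADER() derives the response tag as `tag + 3`, which maps each request tag onto its response counterpart (0x00C1 becomes 0x00C4, 0x01C1 becomes 0x01C4, and likewise for the AUTH1/AUTH2 tags). A small sketch of the response framing (Python; constants from the TPM 1.2 spec and vtpm_manager.h):

```python
import struct

TPM_TAG_RQU_COMMAND, TPM_TAG_RSP_COMMAND = 0x00C1, 0x00C4
VTPM_TAG_REQ, VTPM_TAG_RSP = 0x01C1, 0x01C4

def pack_rsp_header(tag: int, size: int, status: int) -> bytes:
    # Big-endian: 2-byte tag, 4-byte total size, 4-byte result code
    return struct.pack(">HII", tag, size, status)

# The request->response tag mapping used by the handler's `tag + 3`
assert TPM_TAG_RQU_COMMAND + 3 == TPM_TAG_RSP_COMMAND
assert VTPM_TAG_REQ + 3 == VTPM_TAG_RSP
```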
diff --git a/stubdom/vtpmmgr/vtpm_manager.h b/stubdom/vtpmmgr/vtpm_manager.h
new file mode 100644
index 0000000..a2bbcca
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_manager.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_MANAGER_H
+#define VTPM_MANAGER_H
+
+#define VTPM_TAG_REQ 0x01c1
+#define VTPM_TAG_RSP 0x01c4
+#define COMMAND_BUFFER_SIZE 4096
+
+// Header size
+#define VTPM_COMMAND_HEADER_SIZE ( 2 + 4 + 4)
+
+//************************ Command Codes ****************************
+#define VTPM_ORD_BASE       0x0000
+#define VTPM_PRIV_MASK      0x01000000 // Privileged VTPM Command
+#define VTPM_PRIV_BASE      (VTPM_ORD_BASE | VTPM_PRIV_MASK)
+
+// Non-privileged VTPM Commands (from DMIs)
+#define VTPM_ORD_SAVEHASHKEY      (VTPM_ORD_BASE + 1) // DMI requests encryption key for persistent storage
+#define VTPM_ORD_LOADHASHKEY      (VTPM_ORD_BASE + 2) // DMI requests symkey to be regenerated
+
+//************************ Return Codes ****************************
+#define VTPM_SUCCESS               0
+#define VTPM_FAIL                  1
+#define VTPM_UNSUPPORTED           2
+#define VTPM_FORBIDDEN             3
+#define VTPM_RESTORE_CONTEXT_FAILED    4
+#define VTPM_INVALID_REQUEST       5
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.c b/stubdom/vtpmmgr/vtpm_storage.c
new file mode 100644
index 0000000..abb0dba
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.c
@@ -0,0 +1,794 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+/***************************************************************
+ * DISK IMAGE LAYOUT
+ * *************************************************************
+ * All data is stored in BIG ENDIAN format
+ * *************************************************************
+ * Section 1: Header
+ *
+ * 10 bytes 	 id			ID String "VTPMMGRDOM"
+ * uint32_t	 version	        Disk Image version number (current == 1)
+ * uint32_t      storage_key_len	Length of the storage Key
+ * TPM_KEY       storage_key		Marshalled TPM_KEY structure (see the TPM 1.2 spec)
+ * RSA_BLOCK     aes_crypto             Encrypted AES key data (RSA_CIPHER_SIZE bytes), bound by the storage_key
+ *  BYTE[32] aes_key                    AES key for encrypting the uuid table
+ *  uint32_t cipher_sz                  Encrypted size of the uuid table
+ *
+ * *************************************************************
+ * Section 2: Uuid Table
+ *
+ * This table is encrypted by the aes_key in the header. The cipher text size is just
+ * large enough to hold all of the entries plus required padding.
+ *
+ * Each entry is as follows
+ * BYTE[16] uuid                       Uuid of a vtpm that is stored on this disk
+ * uint32_t offset                     Disk offset where the vtpm data is stored
+ *
+ * *************************************************************
+ * Section 3: Vtpm Table
+ *
+ * The rest of the disk stores vtpms. Each vtpm is an RSA_BLOCK encrypted
+ * by the storage key. Each vtpm must exist on an RSA_BLOCK aligned boundary,
+ * starting at the first RSA_BLOCK aligned offset after the uuid table.
+ * As the uuid table grows, vtpms may be relocated.
+ *
+ * RSA_BLOCK     vtpm_crypto          Vtpm data encrypted by storage_key
+ *   BYTE[20]    hash                 Sha1 hash of vtpm encrypted data
+ *   BYTE[16]    vtpm_aes_key         Encryption key for vtpm data
+ *
+ * *************************************************************
+ */
+#define DISKVERS 1
+#define IDSTR "VTPMMGRDOM"
+#define IDSTRLEN 10
+#define AES_BLOCK_SIZE 16
+#define AES_KEY_BITS 256
+#define AES_KEY_SIZE (AES_KEY_BITS/8)
+#define BUF_SIZE 4096
+
+#define UUID_TBL_ENT_SIZE (sizeof(uuid_t) + sizeof(uint32_t))
+
+#define HEADERSZ (10 + 4 + 4)
+
+#define TRY_READ(buf, size, msg) do {\
+   int rc; \
+   if((rc = read(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "read() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#define TRY_WRITE(buf, size, msg) do {\
+   int rc; \
+   if((rc = write(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "write() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <mini-os/byteorder.h>
+#include <polarssl/aes.h>
+
+#include "vtpm_manager.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpm.h"
+#include "uuid.h"
+
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+
+#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )
+#define MIN(a,b) ( ((a) < (b)) ? (a) : (b) )
+
+/* blkfront device objects */
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+struct Vtpm {
+   uuid_t uuid;
+   int offset;
+};
+struct Storage {
+   int aes_offset;
+   int uuid_offset;
+   int end_offset;
+
+   int num_vtpms;
+   int num_vtpms_alloced;
+   struct Vtpm* vtpms;
+};
+
+/* Global storage data */
+static struct Storage g_store = {
+   .vtpms = NULL,
+};
+
+static int get_offset(void) {
+   return lseek(blkfront_fd, 0, SEEK_CUR);
+}
+
+static void reset_store(void) {
+   g_store.aes_offset = 0;
+   g_store.uuid_offset = 0;
+   g_store.end_offset = 0;
+
+   g_store.num_vtpms = 0;
+   g_store.num_vtpms_alloced = 0;
+   free(g_store.vtpms);
+   g_store.vtpms = NULL;
+}
+
+static int vtpm_get_index(const uuid_t uuid) {
+   int st = 0;
+   int ed = g_store.num_vtpms-1;
+   while(st <= ed) {
+      int mid = ((unsigned int)st + (unsigned int)ed) >> 1; //avoid overflow
+      int c = memcmp(uuid, &g_store.vtpms[mid].uuid, sizeof(uuid_t));
+      if(c == 0) {
+         return mid;
+      } else if(c > 0) {
+         st = mid + 1;
+      } else {
+         ed = mid - 1;
+      }
+   }
+   return -(st + 1);
+}
+
+static void vtpm_add(const uuid_t uuid, int offset, int index) {
+   /* Realloc more space if needed */
+   if(g_store.num_vtpms >= g_store.num_vtpms_alloced) {
+      g_store.num_vtpms_alloced += 16;
+      g_store.vtpms = realloc(
+            g_store.vtpms,
+            sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+   }
+
+   /* Move everybody after the new guy */
+   for(int i = g_store.num_vtpms; i > index; --i) {
+      g_store.vtpms[i] = g_store.vtpms[i-1];
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Registered vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+
+   /* Finally add new one */
+   memcpy(g_store.vtpms[index].uuid, uuid, sizeof(uuid_t));
+   g_store.vtpms[index].offset = offset;
+   ++g_store.num_vtpms;
+}
+
+#if 0
+static void vtpm_remove(int index) {
+   for(int i = index; i < g_store.num_vtpms - 1; ++i) {
+      g_store.vtpms[i] = g_store.vtpms[i+1];
+   }
+   --g_store.num_vtpms;
+}
+#endif
+
+static int pack_uuid_table(uint8_t* table, int size, int* nvtpms) {
+   uint8_t* ptr = table;
+   while(*nvtpms < g_store.num_vtpms && size >= 0)
+   {
+      /* Pack the uuid */
+      memcpy(ptr, (uint8_t*)g_store.vtpms[*nvtpms].uuid, sizeof(uuid_t));
+      ptr+= sizeof(uuid_t);
+
+
+      /* Pack the offset */
+      ptr = pack_UINT32(ptr, g_store.vtpms[*nvtpms].offset);
+
+      ++*nvtpms;
+      size -= UUID_TBL_ENT_SIZE;
+   }
+   return ptr - table;
+}
+
+/* Extract the uuids */
+static int extract_uuid_table(uint8_t* table, int size) {
+   uint8_t* ptr = table;
+   for(;size >= UUID_TBL_ENT_SIZE; size -= UUID_TBL_ENT_SIZE) {
+      int index;
+      uint32_t v32;
+
+      /* uuid_t is just an array of bytes, so we can use the pointer directly */
+      uint8_t* uuid = ptr;
+      ptr += sizeof(uuid_t);
+
+      /* Get the offset of the key */
+      ptr = unpack_UINT32(ptr, &v32);
+
+      /* Insert the new vtpm in sorted order */
+      if((index = vtpm_get_index(uuid)) >= 0) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Vtpm (" UUID_FMT ") exists multiple times! ignoring...\n", UUID_BYTES(uuid));
+         continue;
+      }
+      index = -index -1;
+
+      vtpm_add(uuid, v32, index);
+
+   }
+   return ptr - table;
+}
+
+static void vtpm_decrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* cipher,
+      uint8_t* plain,
+      int cipher_sz,
+      int* overlap)
+{
+   int bytes_ext;
+   /* Decrypt */
+   aes_crypt_cbc(aes, AES_DECRYPT,
+         cipher_sz,
+         iv, cipher, plain + *overlap);
+
+   /* Extract */
+   bytes_ext = extract_uuid_table(plain, cipher_sz + *overlap);
+
+   /* Copy left overs to the beginning */
+   *overlap = cipher_sz + *overlap - bytes_ext;
+   memcpy(plain, plain + bytes_ext, *overlap);
+}
+
+static int vtpm_encrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* plain,
+      uint8_t* cipher,
+      int block_sz,
+      int* overlap,
+      int* num_vtpms)
+{
+   int bytes_to_crypt;
+   int bytes_packed;
+
+   /* Pack the uuid table */
+   bytes_packed = *overlap + pack_uuid_table(plain + *overlap, block_sz - *overlap, num_vtpms);
+   bytes_to_crypt = MIN(bytes_packed, block_sz);
+
+   /* Add padding if we aren't on a multiple of the block size */
+   if(bytes_to_crypt & (AES_BLOCK_SIZE-1)) {
+      int oldsz = bytes_to_crypt;
+      //add padding
+      bytes_to_crypt += AES_BLOCK_SIZE - (bytes_to_crypt & (AES_BLOCK_SIZE-1));
+      //fill padding with random bytes
+      vtpmmgr_rand(plain + oldsz, bytes_to_crypt - oldsz);
+      *overlap = 0;
+   } else {
+      *overlap = bytes_packed - bytes_to_crypt;
+   }
+
+   /* Encrypt this chunk */
+   aes_crypt_cbc(aes, AES_ENCRYPT,
+            bytes_to_crypt,
+            iv, plain, cipher);
+
+   /* Copy the left over partials to the beginning */
+   memcpy(plain, plain + bytes_to_crypt, *overlap);
+
+   return bytes_to_crypt;
+}
+
+static TPM_RESULT vtpm_storage_new_vtpm(const uuid_t uuid, int index) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t plain[BUF_SIZE + UUID_TBL_ENT_SIZE]; /* room for one partially packed table entry */
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr;
+   int cipher_sz;
+   aes_context aes;
+
+   /* Add new vtpm to the table */
+   vtpm_add(uuid, g_store.end_offset, index);
+   g_store.end_offset += RSA_CIPHER_SIZE;
+
+   /* Compute the new end location of the encrypted uuid table */
+   cipher_sz = AES_BLOCK_SIZE; //IV
+   cipher_sz += g_store.num_vtpms * UUID_TBL_ENT_SIZE; //uuid table
+   cipher_sz += (AES_BLOCK_SIZE - (cipher_sz & (AES_BLOCK_SIZE -1))) & (AES_BLOCK_SIZE-1); //aes padding
+
+   /* Does this overlap any key data? If so they need to be relocated */
+   int uuid_end = (g_store.uuid_offset + cipher_sz + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      if(g_store.vtpms[i].offset < uuid_end) {
+
+         vtpmloginfo(VTPM_LOG_VTPM, "Relocating vtpm data\n");
+
+         //Read the hashkey cipher text
+         lseek(blkfront_fd, g_store.vtpms[i].offset, SEEK_SET);
+         TRY_READ(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Write the cipher text to new offset
+         lseek(blkfront_fd, g_store.end_offset, SEEK_SET);
+         TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Save new offset
+         g_store.vtpms[i].offset = g_store.end_offset;
+         g_store.end_offset += RSA_CIPHER_SIZE;
+      }
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Generating a new symmetric key\n");
+
+   /* Generate an aes key */
+   TPMTRYRETURN(vtpmmgr_rand(plain, AES_KEY_SIZE));
+   aes_setkey_enc(&aes, plain, AES_KEY_BITS);
+   ptr = plain + AES_KEY_SIZE;
+
+   /* Pack the crypted size */
+   ptr = pack_UINT32(ptr, cipher_sz);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding encrypted key\n");
+
+   /* Seal the key and size */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+            plain,
+            ptr - plain,
+            buf));
+
+   /* Write the sealed key to disk */
+   lseek(blkfront_fd, g_store.aes_offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm aes key");
+
+   /* ENCRYPT AND WRITE UUID TABLE */
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Encrypting the uuid table\n");
+
+   int num_vtpms = 0;
+   int overlap = 0;
+   int bytes_crypted;
+   uint8_t iv[AES_BLOCK_SIZE];
+
+   /* Generate the iv for the first block */
+   TPMTRYRETURN(vtpmmgr_rand(iv, AES_BLOCK_SIZE));
+
+   /* Copy the iv to the cipher text buffer to be written to disk */
+   memcpy(buf, iv, AES_BLOCK_SIZE);
+   ptr = buf + AES_BLOCK_SIZE;
+
+   /* Encrypt the first block of the uuid table */
+   bytes_crypted = vtpm_encrypt_block(&aes,
+         iv, //iv
+         plain, //plaintext
+         ptr, //cipher text
+         BUF_SIZE - AES_BLOCK_SIZE,
+         &overlap,
+         &num_vtpms);
+
+   /* Write the iv followed by the crypted table*/
+   TRY_WRITE(buf, bytes_crypted + AES_BLOCK_SIZE, "vtpm uuid table");
+
+   /* Decrement the number of bytes encrypted */
+   cipher_sz -= bytes_crypted + AES_BLOCK_SIZE;
+
+   /* If there are more vtpms, encrypt and write them block by block */
+   while(cipher_sz > 0) {
+      /* Encrypt the next block of the uuid table */
+      bytes_crypted = vtpm_encrypt_block(&aes,
+               iv,
+               plain,
+               buf,
+               BUF_SIZE,
+               &overlap,
+               &num_vtpms);
+
+      /* Write the cipher text to disk */
+      TRY_WRITE(buf, bytes_crypted, "vtpm uuid table");
+
+      cipher_sz -= bytes_crypted;
+   }
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+/**************************************
+ * PUBLIC FUNCTIONS
+ * ***********************************/
+
+int vtpm_storage_init(void) {
+   struct blkfront_info info;
+   if((blkdev = init_blkfront(NULL, &info)) == NULL) {
+      return -1;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) < 0) {
+      return -1;
+   }
+   return 0;
+}
+
+void vtpm_storage_shutdown(void) {
+   reset_store();
+   close(blkfront_fd);
+}
+
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t cipher[RSA_CIPHER_SIZE];
+   uint8_t clear[RSA_CIPHER_SIZE];
+   UINT32 clear_size;
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      vtpmlogerror(VTPM_LOG_VTPM, "LoadKey failure: Unrecognized uuid! " UUID_FMT "\n", UUID_BYTES(uuid));
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Read the table entry */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_READ(cipher, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   /* Decrypt the table entry */
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            cipher,
+            &clear_size,
+            clear,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   if(clear_size < HASHKEYSZ) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Decrypted Hash key size (%" PRIu32 ") was too small!\n", clear_size);
+      status = TPM_RESOURCES;
+      goto abort_egress;
+   }
+
+   memcpy(hashkey, clear, HASHKEYSZ);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loaded hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t buf[RSA_CIPHER_SIZE];
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      /* Create a new vtpm */
+      TPMTRYRETURN( vtpm_storage_new_vtpm(uuid, index) );
+   }
+
+   /* Encrypt the hash and key */
+   TPMTRYRETURN( TPM_Bind(&vtpm_globals.storage_key,
+            hashkey,
+            HASHKEYSZ,
+            buf));
+
+   /* Write to disk */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to save key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_new_header(void)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t buf[BUF_SIZE];
+   uint8_t keybuf[AES_KEY_SIZE + sizeof(uint32_t)];
+   uint8_t* ptr = buf;
+   uint8_t* sptr;
+
+   /* Clear everything first */
+   reset_store();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Creating new disk image header\n");
+
+   /*Copy the ID string */
+   memcpy(ptr, IDSTR, IDSTRLEN);
+   ptr += IDSTRLEN;
+
+   /*Copy the version */
+   ptr = pack_UINT32(ptr, DISKVERS);
+
+   /*Save the location of the key size */
+   sptr = ptr;
+   ptr += sizeof(UINT32);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saving root storage key..\n");
+
+   /* Copy the storage key */
+   ptr = pack_TPM_KEY(ptr, &vtpm_globals.storage_key);
+
+   /* Now save the size */
+   pack_UINT32(sptr, ptr - (sptr + 4));
+
+   /* Create a fake aes key and set cipher text size to 0 */
+   memset(keybuf, 0, sizeof(keybuf));
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding uuid table symmetric key..\n");
+
+   /* Save the location of the aes key */
+   g_store.aes_offset = ptr - buf;
+
+   /* Store the fake aes key and vtpm count */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+         keybuf,
+         sizeof(keybuf),
+         ptr));
+   ptr+= RSA_CIPHER_SIZE;
+
+   /* Write the header to disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_WRITE(buf, ptr-buf, "vtpm header");
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Save the end offset */
+   g_store.end_offset = (g_store.uuid_offset + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved new manager disk header.\n");
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+TPM_RESULT vtpm_storage_load_header(void)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint32_t v32;
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr = buf;
+   aes_context aes;
+
+   /* Clear everything first */
+   reset_store();
+
+   /* Read the header from disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_READ(buf, IDSTRLEN + sizeof(UINT32) + sizeof(UINT32), "vtpm header");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loading disk image header\n");
+
+   /* Verify the ID string */
+   if(memcmp(ptr, IDSTR, IDSTRLEN)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid ID string in disk image!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+   ptr+=IDSTRLEN;
+
+   /* Unpack the version */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Verify the version */
+   if(v32 != DISKVERS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unsupported disk image version number %" PRIu32 "\n", v32);
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   /* Size of the storage key */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Sanity check */
+   if(v32 > BUF_SIZE) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Size of storage key (%" PRIu32 ") is too large!\n", v32);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* read the storage key */
+   TRY_READ(buf, v32, "storage pub key");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unpacking storage key\n");
+
+   /* unpack the storage key */
+   ptr = unpack_TPM_KEY(buf, &vtpm_globals.storage_key, UNPACK_ALLOC);
+
+   /* Load Storage Key into the TPM */
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   /* Initialize the storage key auth */
+   memset(vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   /* Store the offset of the aes key */
+   g_store.aes_offset = get_offset();
+
+   /* Read the rsa cipher text for the aes key */
+   TRY_READ(buf, RSA_CIPHER_SIZE, "aes key");
+   ptr = buf + RSA_CIPHER_SIZE;
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unbinding uuid table symmetric key\n");
+
+   /* Decrypt the aes key protecting the uuid table */
+   UINT32 datalen;
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            buf,
+            &datalen,
+            ptr,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   /* Validate the length of the output buffer */
+   if(datalen < AES_KEY_SIZE + sizeof(UINT32)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unbound AES key size (%d) was too small! expected (%d)\n", (int)datalen, (int)(AES_KEY_SIZE + sizeof(UINT32)));
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Extract the aes key */
+   aes_setkey_dec(&aes, ptr, AES_KEY_BITS);
+   ptr+= AES_KEY_SIZE;
+
+   /* Extract the ciphertext size */
+   ptr = unpack_UINT32(ptr, &v32);
+   int cipher_size = v32;
+
+   /* Sanity check */
+   if(cipher_size & (AES_BLOCK_SIZE-1)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Cipher text size (%" PRIu32 ") is not a multiple of the aes block size! (%d)\n", v32, AES_BLOCK_SIZE);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Only decrypt the table if there are vtpms to decrypt */
+   if(cipher_size > 0) {
+      int rbytes;
+      int overlap = 0;
+      uint8_t plain[BUF_SIZE + UUID_TBL_ENT_SIZE]; /* room for one partial table entry */
+      uint8_t iv[AES_BLOCK_SIZE];
+
+      vtpmloginfo(VTPM_LOG_VTPM, "Decrypting uuid table\n");
+
+      /* Pre allocate the vtpm array */
+      g_store.num_vtpms_alloced = cipher_size / UUID_TBL_ENT_SIZE;
+      g_store.vtpms = malloc(sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+
+      /* Read the iv and the first chunk of cipher text */
+      rbytes = MIN(cipher_size, BUF_SIZE);
+      TRY_READ(buf, rbytes, "vtpm uuid table\n");
+      cipher_size -= rbytes;
+
+      /* Copy the iv */
+      memcpy(iv, buf, AES_BLOCK_SIZE);
+      ptr = buf + AES_BLOCK_SIZE;
+
+      /* Remove the iv from the number of bytes to decrypt */
+      rbytes -= AES_BLOCK_SIZE;
+
+      /* Decrypt and extract vtpms */
+      vtpm_decrypt_block(&aes,
+            iv, ptr, plain,
+            rbytes, &overlap);
+
+      /* Read the rest of the table if there is more */
+      while(cipher_size > 0) {
+         /* Read next chunk of cipher text */
+         rbytes = MIN(cipher_size, BUF_SIZE);
+         TRY_READ(buf, rbytes, "vtpm uuid table");
+         cipher_size -= rbytes;
+
+         /* Decrypt a block of text */
+         vtpm_decrypt_block(&aes,
+               iv, buf, plain,
+               rbytes, &overlap);
+
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Loaded %d vtpms!\n", g_store.num_vtpms);
+   }
+
+   /* The end of the key table, new vtpms go here */
+   int uuid_end = (get_offset() + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   g_store.end_offset = uuid_end;
+
+   /* Compute the end offset while validating vtpms*/
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      /* offset must not collide with previous data */
+      if(g_store.vtpms[i].offset < uuid_end) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset (%d) is before end of uuid table (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, uuid_end);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* offset must be at a multiple of cipher size */
+      if(g_store.vtpms[i].offset & (RSA_CIPHER_SIZE-1)) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset(%d) is not at a multiple of the rsa cipher text size (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, RSA_CIPHER_SIZE);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* Save the last offset */
+      if(g_store.vtpms[i].offset >= g_store.end_offset) {
+         g_store.end_offset = g_store.vtpms[i].offset + RSA_CIPHER_SIZE;
+      }
+   }
+
+   goto egress;
+abort_egress:
+   //An error occurred somewhere
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load manager data!\n");
+
+   //Clear the data store
+   reset_store();
+
+   //Reset the storage key structure
+   free_TPM_KEY(&vtpm_globals.storage_key);
+   {
+      TPM_KEY key = TPM_KEY_INIT;
+      vtpm_globals.storage_key = key;
+   }
+
+   //Reset the storage key handle
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   vtpm_globals.storage_key_handle = 0;
+egress:
+   return status;
+}
+
+#if 0
+/* For testing disk IO */
+void add_fake_vtpms(int num) {
+   for(int i = 0; i < num; ++i) {
+      uint32_t ind = cpu_to_be32(i);
+
+      uuid_t uuid;
+      memset(uuid, 0, sizeof(uuid_t));
+      memcpy(uuid, &ind, sizeof(ind));
+      int index = vtpm_get_index(uuid);
+      index = -index-1;
+
+      vtpm_storage_new_vtpm(uuid, index);
+   }
+}
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.h b/stubdom/vtpmmgr/vtpm_storage.h
new file mode 100644
index 0000000..a5a5fd7
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_STORAGE_H
+#define VTPM_STORAGE_H
+
+#include "uuid.h"
+
+#define VTPM_NVMKEY_SIZE 32
+#define HASHKEYSZ (sizeof(TPM_DIGEST) + VTPM_NVMKEY_SIZE)
+
+/* Initialize the storage system and its virtual disk */
+int vtpm_storage_init(void);
+
+/* Shutdown the storage system and its virtual disk */
+void vtpm_storage_shutdown(void);
+
+/* Loads the Sha1 hash and 256 bit AES key for a vtpm from disk and
+ * stores them packed together in hashkey, which must be at least
+ * HASHKEYSZ bytes
+ */
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* hashkey must contain a sha1 hash followed by a 256 bit AES key.
+ * Encrypts and stores the hash and key to disk */
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* Load the vtpm manager data - call this on startup */
+TPM_RESULT vtpm_storage_load_header(void);
+
+/* Creates and writes a fresh vtpm manager disk header - call this when provisioning a new disk image */
+TPM_RESULT vtpm_storage_new_header(void);
+
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
new file mode 100644
index 0000000..563f4e8
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdint.h>
+#include <mini-os/tpmback.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include "log.h"
+
+#include "vtpmmgr.h"
+#include "tcg.h"
+
+
+void main_loop(void) {
+   tpmcmd_t* tpmcmd;
+   uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
+
+   while(1) {
+      /* Wait for requests from a vtpm */
+      vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPM's:\n");
+      if((tpmcmd = tpmback_req_any()) == NULL) {
+         vtpmlogerror(VTPM_LOG_VTPM, "NULL tpmcmd\n");
+         continue;
+      }
+
+      tpmcmd->resp = respbuf;
+
+      /* Process the command */
+      vtpmmgr_handle_cmd(tpmcmd->uuid, tpmcmd);
+
+      /* Send response */
+      tpmback_resp(tpmcmd);
+   }
+}
+
+int main(int argc, char** argv)
+{
+   int rc = 0;
+   sleep(2);
+   vtpmloginfo(VTPM_LOG_VTPM, "Starting vTPM manager domain\n");
+
+   /* Initialize the vtpm manager */
+   if(vtpmmgr_init(argc, argv) != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize vtpmmgr domain!\n");
+      rc = -1;
+      goto exit;
+   }
+
+   main_loop();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "vTPM Manager shutting down...\n");
+
+   vtpmmgr_shutdown();
+
+exit:
+   return rc;
+
+}
diff --git a/stubdom/vtpmmgr/vtpmmgr.h b/stubdom/vtpmmgr/vtpmmgr.h
new file mode 100644
index 0000000..50a1992
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_H
+#define VTPMMGR_H
+
+#include <mini-os/tpmback.h>
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "uuid.h"
+#include "tcg.h"
+#include "vtpm_manager.h"
+
+#define RSA_KEY_SIZE 0x0800
+#define RSA_CIPHER_SIZE (RSA_KEY_SIZE / 8)
+
+struct vtpm_globals {
+   int tpm_fd;
+   TPM_KEY             storage_key;
+   TPM_HANDLE          storage_key_handle;       // Key used by persistent store
+   TPM_AUTH_SESSION    oiap;                // OIAP session for storageKey
+   TPM_AUTHDATA        storage_key_usage_auth;
+
+   TPM_AUTHDATA        owner_auth;
+   TPM_AUTHDATA        srk_auth;
+
+   entropy_context     entropy;
+   ctr_drbg_context    ctr_drbg;
+};
+
+// --------------------------- Global Values --------------------------
+extern struct vtpm_globals vtpm_globals;   // Key info and DMI states
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv);
+void vtpmmgr_shutdown(void);
+
+TPM_RESULT vtpmmgr_handle_cmd(const uuid_t uuid, tpmcmd_t* tpmcmd);
+
+static inline TPM_RESULT vtpmmgr_rand(unsigned char* bytes, size_t num_bytes) {
+   return ctr_drbg_random(&vtpm_globals.ctr_drbg, bytes, num_bytes) == 0 ? TPM_SUCCESS : TPM_FAIL;
+}
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:19:59 2012
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:15 -0500
Message-Id: <1354817961-22196-2-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 2/8] add stubdom/vtpmmgr code

Add the code base for vtpmmgrdom. The Makefile changes
follow in the next patch.

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 stubdom/vtpmmgr/Makefile           |   32 ++
 stubdom/vtpmmgr/init.c             |  553 +++++++++++++++++++++
 stubdom/vtpmmgr/log.c              |  151 ++++++
 stubdom/vtpmmgr/log.h              |   85 ++++
 stubdom/vtpmmgr/marshal.h          |  528 ++++++++++++++++++++
 stubdom/vtpmmgr/minios.cfg         |   14 +
 stubdom/vtpmmgr/tcg.h              |  707 +++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.c              |  938 ++++++++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/tpm.h              |  218 +++++++++
 stubdom/vtpmmgr/tpmrsa.c           |  175 +++++++
 stubdom/vtpmmgr/tpmrsa.h           |   67 +++
 stubdom/vtpmmgr/uuid.h             |   50 ++
 stubdom/vtpmmgr/vtpm_cmd_handler.c |  152 ++++++
 stubdom/vtpmmgr/vtpm_manager.h     |   64 +++
 stubdom/vtpmmgr/vtpm_storage.c     |  794 ++++++++++++++++++++++++++++++
 stubdom/vtpmmgr/vtpm_storage.h     |   68 +++
 stubdom/vtpmmgr/vtpmmgr.c          |   93 ++++
 stubdom/vtpmmgr/vtpmmgr.h          |   77 +++
 18 files changed, 4766 insertions(+)
 create mode 100644 stubdom/vtpmmgr/Makefile
 create mode 100644 stubdom/vtpmmgr/init.c
 create mode 100644 stubdom/vtpmmgr/log.c
 create mode 100644 stubdom/vtpmmgr/log.h
 create mode 100644 stubdom/vtpmmgr/marshal.h
 create mode 100644 stubdom/vtpmmgr/minios.cfg
 create mode 100644 stubdom/vtpmmgr/tcg.h
 create mode 100644 stubdom/vtpmmgr/tpm.c
 create mode 100644 stubdom/vtpmmgr/tpm.h
 create mode 100644 stubdom/vtpmmgr/tpmrsa.c
 create mode 100644 stubdom/vtpmmgr/tpmrsa.h
 create mode 100644 stubdom/vtpmmgr/uuid.h
 create mode 100644 stubdom/vtpmmgr/vtpm_cmd_handler.c
 create mode 100644 stubdom/vtpmmgr/vtpm_manager.h
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.c
 create mode 100644 stubdom/vtpmmgr/vtpm_storage.h
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.c
 create mode 100644 stubdom/vtpmmgr/vtpmmgr.h

diff --git a/stubdom/vtpmmgr/Makefile b/stubdom/vtpmmgr/Makefile
new file mode 100644
index 0000000..88c83c3
--- /dev/null
+++ b/stubdom/vtpmmgr/Makefile
@@ -0,0 +1,32 @@
+# Copyright (c) 2010-2012 United States Government, as represented by
+# the Secretary of Defense.  All rights reserved.
+#
+# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+# SOFTWARE.
+#
+
+XEN_ROOT=../..
+
+PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
+PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o bignum.o sha4.o havege.o timing.o entropy_poll.o
+
+TARGET=vtpmmgr.a
+OBJS=vtpmmgr.o vtpm_cmd_handler.o vtpm_storage.o init.o tpmrsa.o tpm.o log.o
+
+CFLAGS+=-Werror -Iutil -Icrypto -Itcs
+CFLAGS+=-Wno-declaration-after-statement -Wno-unused-label
+
+build: $(TARGET)
+$(TARGET): $(OBJS)
+	ar -rcs $@ $^ $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
+
+clean:
+	rm -f $(TARGET) $(OBJS)
+
+distclean: clean
+
+.PHONY: clean distclean
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
new file mode 100644
index 0000000..a158020
--- /dev/null
+++ b/stubdom/vtpmmgr/init.c
@@ -0,0 +1,553 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include <stdint.h>
+#include <stdlib.h>
+
+#include <xen/xen.h>
+#include <mini-os/tpmback.h>
+#include <mini-os/tpmfront.h>
+#include <mini-os/tpm_tis.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include <polarssl/sha1.h>
+
+#include "log.h"
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+#include "tpm.h"
+#include "marshal.h"
+
+struct Opts {
+   enum {
+      TPMDRV_TPM_TIS,
+      TPMDRV_TPMFRONT,
+   } tpmdriver;
+   unsigned long tpmiomem;
+   unsigned int tpmirq;
+   unsigned int tpmlocality;
+   int gen_owner_auth;
+};
+
+// --------------------------- Well Known Auths --------------------------
+const TPM_AUTHDATA WELLKNOWN_SRK_AUTH = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+const TPM_AUTHDATA WELLKNOWN_OWNER_AUTH = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+
+struct vtpm_globals vtpm_globals = {
+   .tpm_fd = -1,
+   .storage_key = TPM_KEY_INIT,
+   .storage_key_handle = 0,
+   .oiap = { .AuthHandle = 0 }
+};
+
+static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
+   UINT32 sz = len;
+   TPM_RESULT rc = TPM_GetRandom(&sz, data);
+   *olen = sz;
+   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
+}
+
+static TPM_RESULT check_tpm_version(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   UINT32 rsize;
+   BYTE* res = NULL;
+   TPM_CAP_VERSION_INFO vinfo;
+
+   TPMTRYRETURN(TPM_GetCapability(
+            TPM_CAP_VERSION_VAL,
+            0,
+            NULL,
+            &rsize,
+            &res));
+   if(rsize < 4) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid size returned by GetCapability!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   unpack_TPM_CAP_VERSION_INFO(res, &vinfo, UNPACK_ALIAS);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Hardware TPM:\n");
+   vtpmloginfo(VTPM_LOG_VTPM, " version: %hhd %hhd %hhd %hhd\n",
+         vinfo.version.major, vinfo.version.minor, vinfo.version.revMajor, vinfo.version.revMinor);
+   vtpmloginfo(VTPM_LOG_VTPM, " specLevel: %hd\n", vinfo.specLevel);
+   vtpmloginfo(VTPM_LOG_VTPM, " errataRev: %hhd\n", vinfo.errataRev);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorID: %c%c%c%c\n",
+         vinfo.tpmVendorID[0], vinfo.tpmVendorID[1],
+         vinfo.tpmVendorID[2], vinfo.tpmVendorID[3]);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecificSize: %hd\n", vinfo.vendorSpecificSize);
+   vtpmloginfo(VTPM_LOG_VTPM, " vendorSpecific: ");
+   for(int i = 0; i < vinfo.vendorSpecificSize; ++i) {
+      vtpmloginfomore(VTPM_LOG_VTPM, "%02hhx", vinfo.vendorSpecific[i]);
+   }
+   vtpmloginfomore(VTPM_LOG_VTPM, "\n");
+
+abort_egress:
+   free(res);
+   return status;
+}
+
+static TPM_RESULT flush_tpm(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   const TPM_RESOURCE_TYPE reslist[] = { TPM_RT_KEY, TPM_RT_AUTH, TPM_RT_TRANS, TPM_RT_COUNTER, TPM_RT_DAA_TPM, TPM_RT_CONTEXT };
+   BYTE* keylist = NULL;
+   UINT32 keylistSize;
+   BYTE* ptr;
+
+   //Iterate through each resource type and flush all handles
+   for(int i = 0; i < sizeof(reslist) / sizeof(TPM_RESOURCE_TYPE); ++i) {
+      TPM_RESOURCE_TYPE beres = cpu_to_be32(reslist[i]);
+      UINT16 size;
+      TPMTRYRETURN(TPM_GetCapability(
+               TPM_CAP_HANDLE,
+               sizeof(TPM_RESOURCE_TYPE),
+               (BYTE*)(&beres),
+               &keylistSize,
+               &keylist));
+
+      ptr = keylist;
+      ptr = unpack_UINT16(ptr, &size);
+
+      //Flush each handle
+      if(size) {
+         vtpmloginfo(VTPM_LOG_VTPM, "Flushing %u handle(s) of type %lu\n", size, (unsigned long) reslist[i]);
+         for(int j = 0; j < size; ++j) {
+            TPM_HANDLE h;
+            ptr = unpack_TPM_HANDLE(ptr, &h);
+            TPMTRYRETURN(TPM_FlushSpecific(h, reslist[i]));
+         }
+      }
+
+      free(keylist);
+      keylist = NULL;
+   }
+
+   goto egress;
+abort_egress:
+   free(keylist);
+egress:
+   return status;
+}
+
+
+static TPM_RESULT try_take_ownership(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_PUBKEY pubEK = TPM_PUBKEY_INIT;
+
+   // If we can read PubEK then there is no owner and we should take it.
+   status = TPM_ReadPubek(&pubEK);
+
+   switch(status) {
+      case TPM_DISABLED_CMD:
+         //Cannot read ek? TPM has owner
+         vtpmloginfo(VTPM_LOG_VTPM, "Failed to read EK, meaning the TPM has an owner. Creating keys off the existing SRK.\n");
+         status = TPM_SUCCESS;
+         break;
+      case TPM_NO_ENDORSEMENT:
+         {
+            //If there's no EK, we have to create one
+            TPM_KEY_PARMS keyInfo = {
+               .algorithmID = TPM_ALG_RSA,
+               .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+               .sigScheme = TPM_SS_NONE,
+               .parmSize = 12,
+               .parms.rsa = {
+                  .keyLength = RSA_KEY_SIZE,
+                  .numPrimes = 2,
+                  .exponentSize = 0,
+                  .exponent = NULL,
+               },
+            };
+            TPMTRYRETURN(TPM_CreateEndorsementKeyPair(&keyInfo, &pubEK));
+         }
+         //fall through to take ownership
+      case TPM_SUCCESS:
+         {
+            //Construct the Srk
+            TPM_KEY srk = {
+               .ver = TPM_STRUCT_VER_1_1,
+               .keyUsage = TPM_KEY_STORAGE,
+               .keyFlags = 0x00,
+               .authDataUsage = TPM_AUTH_ALWAYS,
+               .algorithmParms = {
+                  .algorithmID = TPM_ALG_RSA,
+                  .encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1,
+                  .sigScheme =  TPM_SS_NONE,
+                  .parmSize = 12,
+                  .parms.rsa = {
+                     .keyLength = RSA_KEY_SIZE,
+                     .numPrimes = 2,
+                     .exponentSize = 0,
+                     .exponent = NULL,
+                  },
+               },
+               .PCRInfoSize = 0,
+               .pubKey = {
+                  .keyLength = 0,
+                  .key = NULL,
+               },
+               .encDataSize = 0,
+            };
+
+            TPMTRYRETURN(TPM_TakeOwnership(
+                     &pubEK,
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+                     &srk,
+                     NULL,
+                     &vtpm_globals.oiap));
+
+            TPMTRYRETURN(TPM_DisablePubekRead(
+                     (const TPM_AUTHDATA*)&vtpm_globals.owner_auth,
+                     &vtpm_globals.oiap));
+         }
+         break;
+      default:
+         break;
+   }
+abort_egress:
+   free_TPM_PUBKEY(&pubEK);
+   return status;
+}
+
+static void init_storage_key(TPM_KEY* key) {
+   key->ver.major = 1;
+   key->ver.minor = 1;
+   key->ver.revMajor = 0;
+   key->ver.revMinor = 0;
+
+   key->keyUsage = TPM_KEY_BIND;
+   key->keyFlags = 0;
+   key->authDataUsage = TPM_AUTH_ALWAYS;
+
+   TPM_KEY_PARMS* p = &key->algorithmParms;
+   p->algorithmID = TPM_ALG_RSA;
+   p->encScheme = TPM_ES_RSAESOAEP_SHA1_MGF1;
+   p->sigScheme = TPM_SS_NONE;
+   p->parmSize = 12;
+
+   TPM_RSA_KEY_PARMS* r = &p->parms.rsa;
+   r->keyLength = RSA_KEY_SIZE;
+   r->numPrimes = 2;
+   r->exponentSize = 0;
+   r->exponent = NULL;
+
+   key->PCRInfoSize = 0;
+   key->encDataSize = 0;
+   key->encData = NULL;
+}
+
+static int parse_auth_string(char* authstr, BYTE* target, const TPM_AUTHDATA wellknown, int allowrandom) {
+   int rc;
+   /* well known owner auth */
+   if(!strcmp(authstr, "well-known")) {
+      memcpy(target, wellknown, sizeof(TPM_AUTHDATA));
+   }
+   /* Create a randomly generated owner auth */
+   else if(allowrandom && !strcmp(authstr, "random")) {
+      return 1;
+   }
+   /* owner auth is a raw hash */
+   else if(!strncmp(authstr, "hash:", 5)) {
+      authstr += 5;
+      if((rc = strlen(authstr)) != 40) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth hex string `%s' must be exactly 40 characters (20 bytes) long, length=%d\n", authstr, rc);
+         return -1;
+      }
+      for(int j = 0; j < 20; ++j) {
+         if(sscanf(authstr, "%2hhx", target + j) != 1) {
+            vtpmlogerror(VTPM_LOG_VTPM, "Supplied auth string `%s' is not a valid hex string\n", authstr);
+            return -1;
+         }
+         authstr += 2;
+      }
+   }
+   /* owner auth is a string that will be hashed */
+   else if(!strncmp(authstr, "text:", 5)) {
+      authstr += 5;
+      sha1((const unsigned char*)authstr, strlen(authstr), target);
+   }
+   else {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid auth string %s\n", authstr);
+      return -1;
+   }
+
+   return 0;
+}
+
+int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
+{
+   int rc;
+   int i;
+
+   //Set defaults
+   memcpy(vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, sizeof(TPM_AUTHDATA));
+   memcpy(vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, sizeof(TPM_AUTHDATA));
+
+   for(i = 1; i < argc; ++i) {
+      if(!strncmp(argv[i], "owner_auth:", 11)) {
+         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth, WELLKNOWN_OWNER_AUTH, 1)) < 0) {
+            goto err_invalid;
+         }
+         if(rc == 1) {
+            opts->gen_owner_auth = 1;
+         }
+      }
+      else if(!strncmp(argv[i], "srk_auth:", 9)) {
+         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth, WELLKNOWN_SRK_AUTH, 0)) != 0) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmdriver=", 10)) {
+         if(!strcmp(argv[i] + 10, "tpm_tis")) {
+            opts->tpmdriver = TPMDRV_TPM_TIS;
+         } else if(!strcmp(argv[i] + 10, "tpmfront")) {
+            opts->tpmdriver = TPMDRV_TPMFRONT;
+         } else {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmiomem=",9)) {
+         if(sscanf(argv[i] + 9, "0x%lX", &opts->tpmiomem) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmirq=",7)) {
+         if(!strcmp(argv[i] + 7, "probe")) {
+            opts->tpmirq = TPM_PROBE_IRQ;
+         } else if( sscanf(argv[i] + 7, "%u", &opts->tpmirq) != 1) {
+            goto err_invalid;
+         }
+      }
+      else if(!strncmp(argv[i], "tpmlocality=",12)) {
+         if(sscanf(argv[i] + 12, "%u", &opts->tpmlocality) != 1 || opts->tpmlocality > 4) {
+            goto err_invalid;
+         }
+      }
+   }
+
+   switch(opts->tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpm_tis driver\n");
+         break;
+      case TPMDRV_TPMFRONT:
+         vtpmloginfo(VTPM_LOG_VTPM, "Option: Using tpmfront driver\n");
+         break;
+   }
+
+   return 0;
+err_invalid:
+   vtpmlogerror(VTPM_LOG_VTPM, "Invalid Option %s\n", argv[i]);
+   return -1;
+}
+
+
+
+static TPM_RESULT vtpmmgr_create(void) {
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_AUTH_SESSION osap = TPM_AUTH_SESSION_INIT;
+   TPM_AUTHDATA sharedsecret;
+
+   // Take ownership if TPM is unowned
+   TPMTRYRETURN(try_take_ownership());
+
+   // Generate storage key's auth
+   memset(&vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   TPMTRYRETURN( TPM_OSAP(
+            TPM_ET_KEYHANDLE,
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &sharedsecret,
+            &osap) );
+
+   init_storage_key(&vtpm_globals.storage_key);
+
+   //initialize the storage key
+   TPMTRYRETURN( TPM_CreateWrapKey(
+            TPM_SRK_KEYHANDLE,
+            (const TPM_AUTHDATA*)&sharedsecret,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.storage_key,
+            &osap) );
+
+   //Load Storage Key
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*) &vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   //Make sure TPM has committed changes
+   TPMTRYRETURN( TPM_SaveState() );
+
+   //Create new disk image
+   TPMTRYRETURN(vtpm_storage_new_header());
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Finished initializing new VTPM manager\n");
+   goto egress;
+abort_egress:
+egress:
+
+   //End the OSAP session
+   if(osap.AuthHandle) {
+      TPM_TerminateHandle(osap.AuthHandle);
+   }
+
+   return status;
+}
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   /* Default commandline options */
+   struct Opts opts = {
+      .tpmdriver = TPMDRV_TPM_TIS,
+      .tpmiomem = TPM_BASEADDR,
+      .tpmirq = 0,
+      .tpmlocality = 0,
+      .gen_owner_auth = 0,
+   };
+
+   if(parse_cmdline_opts(argc, argv, &opts) != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Command line parsing failed! exiting..\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   //Setup storage system
+   if(vtpm_storage_init() != 0) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize storage subsystem!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   //Setup tpmback device
+   init_tpmback();
+
+   //Setup tpm access
+   switch(opts.tpmdriver) {
+      case TPMDRV_TPM_TIS:
+         {
+            struct tpm_chip* tpm;
+            if((tpm = init_tpm_tis(opts.tpmiomem, TPM_TIS_LOCL_INT_TO_FLAG(opts.tpmlocality), opts.tpmirq)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpm_tis device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpm_tis_open(tpm);
+            tpm_tis_request_locality(tpm, opts.tpmlocality);
+         }
+         break;
+      case TPMDRV_TPMFRONT:
+         {
+            struct tpmfront_dev* tpmfront_dev;
+            if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
+               vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize tpmfront device\n");
+               status = TPM_IOERROR;
+               goto abort_egress;
+            }
+            vtpm_globals.tpm_fd = tpmfront_open(tpmfront_dev);
+         }
+         break;
+   }
+
+   //Get the version of the tpm
+   TPMTRYRETURN(check_tpm_version());
+
+   // Blow away all stale handles left in the tpm
+   if(flush_tpm() != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Flushing stale TPM handles failed, continuing anyway..\n");
+   }
+
+   /* Initialize the rng */
+   entropy_init(&vtpm_globals.entropy);
+   entropy_add_source(&vtpm_globals.entropy, tpm_entropy_source, NULL, 0);
+   entropy_gather(&vtpm_globals.entropy);
+   ctr_drbg_init(&vtpm_globals.ctr_drbg, entropy_func, &vtpm_globals.entropy, NULL, 0);
+   ctr_drbg_set_prediction_resistance( &vtpm_globals.ctr_drbg, CTR_DRBG_PR_OFF );
+
+   // Generate Auth for Owner
+   if(opts.gen_owner_auth) {
+      vtpmmgr_rand(vtpm_globals.owner_auth, sizeof(TPM_AUTHDATA));
+   }
+
+   // Create OIAP session for service's authorized commands
+   TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+
+   /* Load the Manager data, if it fails create a new manager */
+   if (vtpm_storage_load_header() != TPM_SUCCESS) {
+      /* If the OIAP session was closed by an error, create a new one */
+      if(vtpm_globals.oiap.AuthHandle == 0) {
+         TPMTRYRETURN( TPM_OIAP(&vtpm_globals.oiap) );
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Failed to read manager file. Assuming first time initialization.\n");
+      TPMTRYRETURN( vtpmmgr_create() );
+   }
+
+   goto egress;
+abort_egress:
+   vtpmmgr_shutdown();
+egress:
+   return status;
+}
+
+void vtpmmgr_shutdown(void)
+{
+   /* Cleanup resources */
+   free_TPM_KEY(&vtpm_globals.storage_key);
+
+   /* Cleanup TPM resources */
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
+
+   /* Close tpmback */
+   shutdown_tpmback();
+
+   /* Close the storage system and blkfront */
+   vtpm_storage_shutdown();
+
+   /* Close tpmfront/tpm_tis */
+   close(vtpm_globals.tpm_fd);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
+}
diff --git a/stubdom/vtpmmgr/log.c b/stubdom/vtpmmgr/log.c
new file mode 100644
index 0000000..a82c913
--- /dev/null
+++ b/stubdom/vtpmmgr/log.c
@@ -0,0 +1,151 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+
+#include "tcg.h"
+
+char *module_names[] = { "",
+                         "TPM",
+                         "TPM",
+                         "VTPM",
+                         "VTPM",
+                         "TXDATA",
+                       };
+// Helper code for the consts, e.g. to produce messages for error codes.
+
+typedef struct error_code_entry_t {
+  TPM_RESULT code;
+  char * code_name;
+  char * msg;
+} error_code_entry_t;
+
+static const error_code_entry_t error_msgs [] = {
+  { TPM_SUCCESS, "TPM_SUCCESS", "Successful completion of the operation" },
+  { TPM_AUTHFAIL, "TPM_AUTHFAIL", "Authentication failed" },
+  { TPM_BADINDEX, "TPM_BADINDEX", "The index to a PCR, DIR or other register is incorrect" },
+  { TPM_BAD_PARAMETER, "TPM_BAD_PARAMETER", "One or more parameter is bad" },
+  { TPM_AUDITFAILURE, "TPM_AUDITFAILURE", "An operation completed successfully but the auditing of that operation failed." },
+  { TPM_CLEAR_DISABLED, "TPM_CLEAR_DISABLED", "The clear disable flag is set and all clear operations now require physical access" },
+  { TPM_DEACTIVATED, "TPM_DEACTIVATED", "The TPM is deactivated" },
+  { TPM_DISABLED, "TPM_DISABLED", "The TPM is disabled" },
+  { TPM_DISABLED_CMD, "TPM_DISABLED_CMD", "The target command has been disabled" },
+  { TPM_FAIL, "TPM_FAIL", "The operation failed" },
+  { TPM_BAD_ORDINAL, "TPM_BAD_ORDINAL", "The ordinal was unknown or inconsistent" },
+  { TPM_INSTALL_DISABLED, "TPM_INSTALL_DISABLED", "The ability to install an owner is disabled" },
+  { TPM_INVALID_KEYHANDLE, "TPM_INVALID_KEYHANDLE", "The key handle presented was invalid" },
+  { TPM_KEYNOTFOUND, "TPM_KEYNOTFOUND", "The target key was not found" },
+  { TPM_INAPPROPRIATE_ENC, "TPM_INAPPROPRIATE_ENC", "Unacceptable encryption scheme" },
+  { TPM_MIGRATEFAIL, "TPM_MIGRATEFAIL", "Migration authorization failed" },
+  { TPM_INVALID_PCR_INFO, "TPM_INVALID_PCR_INFO", "PCR information could not be interpreted" },
+  { TPM_NOSPACE, "TPM_NOSPACE", "No room to load key." },
+  { TPM_NOSRK, "TPM_NOSRK", "There is no SRK set" },
+  { TPM_NOTSEALED_BLOB, "TPM_NOTSEALED_BLOB", "An encrypted blob is invalid or was not created by this TPM" },
+  { TPM_OWNER_SET, "TPM_OWNER_SET", "There is already an Owner" },
+  { TPM_RESOURCES, "TPM_RESOURCES", "The TPM has insufficient internal resources to perform the requested action." },
+  { TPM_SHORTRANDOM, "TPM_SHORTRANDOM", "A random string was too short" },
+  { TPM_SIZE, "TPM_SIZE", "The TPM does not have the space to perform the operation." },
+  { TPM_WRONGPCRVAL, "TPM_WRONGPCRVAL", "The named PCR value does not match the current PCR value." },
+  { TPM_BAD_PARAM_SIZE, "TPM_BAD_PARAM_SIZE", "The paramSize argument to the command has the incorrect value" },
+  { TPM_SHA_THREAD, "TPM_SHA_THREAD", "There is no existing SHA-1 thread." },
+  { TPM_SHA_ERROR, "TPM_SHA_ERROR", "The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error." },
+  { TPM_FAILEDSELFTEST, "TPM_FAILEDSELFTEST", "Self-test has failed and the TPM has shutdown." },
+  { TPM_AUTH2FAIL, "TPM_AUTH2FAIL", "The authorization for the second key in a 2 key function failed authorization" },
+  { TPM_BADTAG, "TPM_BADTAG", "The tag value sent to for a command is invalid" },
+  { TPM_IOERROR, "TPM_IOERROR", "An IO error occurred transmitting information to the TPM" },
+  { TPM_ENCRYPT_ERROR, "TPM_ENCRYPT_ERROR", "The encryption process had a problem." },
+  { TPM_DECRYPT_ERROR, "TPM_DECRYPT_ERROR", "The decryption process did not complete." },
+  { TPM_INVALID_AUTHHANDLE, "TPM_INVALID_AUTHHANDLE", "An invalid handle was used." },
+  { TPM_NO_ENDORSEMENT, "TPM_NO_ENDORSEMENT", "The TPM does not have an EK installed" },
+  { TPM_INVALID_KEYUSAGE, "TPM_INVALID_KEYUSAGE", "The usage of a key is not allowed" },
+  { TPM_WRONG_ENTITYTYPE, "TPM_WRONG_ENTITYTYPE", "The submitted entity type is not allowed" },
+  { TPM_INVALID_POSTINIT, "TPM_INVALID_POSTINIT", "The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup" },
+  { TPM_INAPPROPRIATE_SIG, "TPM_INAPPROPRIATE_SIG", "Signed data cannot include additional DER information" },
+  { TPM_BAD_KEY_PROPERTY, "TPM_BAD_KEY_PROPERTY", "The key properties in TPM_KEY_PARMs are not supported by this TPM" },
+
+  { TPM_BAD_MIGRATION, "TPM_BAD_MIGRATION", "The migration properties of this key are incorrect." },
+  { TPM_BAD_SCHEME, "TPM_BAD_SCHEME", "The signature or encryption scheme for this key is incorrect or not permitted in this situation." },
+  { TPM_BAD_DATASIZE, "TPM_BAD_DATASIZE", "The size of the data (or blob) parameter is bad or inconsistent with the referenced key" },
+  { TPM_BAD_MODE, "TPM_BAD_MODE", "A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence parameter for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob." },
+  { TPM_BAD_PRESENCE, "TPM_BAD_PRESENCE", "Either the physicalPresence or physicalPresenceLock bits have the wrong value" },
+  { TPM_BAD_VERSION, "TPM_BAD_VERSION", "The TPM cannot perform this version of the capability" },
+  { TPM_NO_WRAP_TRANSPORT, "TPM_NO_WRAP_TRANSPORT", "The TPM does not allow for wrapped transport sessions" },
+  { TPM_AUDITFAIL_UNSUCCESSFUL, "TPM_AUDITFAIL_UNSUCCESSFUL", "TPM audit construction failed and the underlying command was returning a failure code also" },
+  { TPM_AUDITFAIL_SUCCESSFUL, "TPM_AUDITFAIL_SUCCESSFUL", "TPM audit construction failed and the underlying command was returning success" },
+  { TPM_NOTRESETABLE, "TPM_NOTRESETABLE", "Attempt to reset a PCR register that does not have the resettable attribute" },
+  { TPM_NOTLOCAL, "TPM_NOTLOCAL", "Attempt to reset a PCR register that requires locality and locality modifier not part of command transport" },
+  { TPM_BAD_TYPE, "TPM_BAD_TYPE", "Make identity blob not properly typed" },
+  { TPM_INVALID_RESOURCE, "TPM_INVALID_RESOURCE", "When saving context identified resource type does not match actual resource" },
+  { TPM_NOTFIPS, "TPM_NOTFIPS", "The TPM is attempting to execute a command only available when in FIPS mode" },
+  { TPM_INVALID_FAMILY, "TPM_INVALID_FAMILY", "The command is attempting to use an invalid family ID" },
+  { TPM_NO_NV_PERMISSION, "TPM_NO_NV_PERMISSION", "The permission to manipulate the NV storage is not available" },
+  { TPM_REQUIRES_SIGN, "TPM_REQUIRES_SIGN", "The operation requires a signed command" },
+  { TPM_KEY_NOTSUPPORTED, "TPM_KEY_NOTSUPPORTED", "Wrong operation to load an NV key" },
+  { TPM_AUTH_CONFLICT, "TPM_AUTH_CONFLICT", "NV_LoadKey blob requires both owner and blob authorization" },
+  { TPM_AREA_LOCKED, "TPM_AREA_LOCKED", "The NV area is locked and not writable" },
+  { TPM_BAD_LOCALITY, "TPM_BAD_LOCALITY", "The locality is incorrect for the attempted operation" },
+  { TPM_READ_ONLY, "TPM_READ_ONLY", "The NV area is read only and can't be written to" },
+  { TPM_PER_NOWRITE, "TPM_PER_NOWRITE", "There is no protection on the write to the NV area" },
+  { TPM_FAMILYCOUNT, "TPM_FAMILYCOUNT", "The family count value does not match" },
+  { TPM_WRITE_LOCKED, "TPM_WRITE_LOCKED", "The NV area has already been written to" },
+  { TPM_BAD_ATTRIBUTES, "TPM_BAD_ATTRIBUTES", "The NV area attributes conflict" },
+  { TPM_INVALID_STRUCTURE, "TPM_INVALID_STRUCTURE", "The structure tag and version are invalid or inconsistent" },
+  { TPM_KEY_OWNER_CONTROL, "TPM_KEY_OWNER_CONTROL", "The key is under control of the TPM Owner and can only be evicted by the TPM Owner." },
+  { TPM_BAD_COUNTER, "TPM_BAD_COUNTER", "The counter handle is incorrect" },
+  { TPM_NOT_FULLWRITE, "TPM_NOT_FULLWRITE", "The write is not a complete write of the area" },
+  { TPM_CONTEXT_GAP, "TPM_CONTEXT_GAP", "The gap between saved context counts is too large" },
+  { TPM_MAXNVWRITES, "TPM_MAXNVWRITES", "The maximum number of NV writes without an owner has been exceeded" },
+  { TPM_NOOPERATOR, "TPM_NOOPERATOR", "No operator authorization value is set" },
+  { TPM_RESOURCEMISSING, "TPM_RESOURCEMISSING", "The resource pointed to by context is not loaded" },
+  { TPM_DELEGATE_LOCK, "TPM_DELEGATE_LOCK", "The delegate administration is locked" },
+  { TPM_DELEGATE_FAMILY, "TPM_DELEGATE_FAMILY", "Attempt to manage a family other than the delegated family" },
+  { TPM_DELEGATE_ADMIN, "TPM_DELEGATE_ADMIN", "Delegation table management not enabled" },
+  { TPM_TRANSPORT_EXCLUSIVE, "TPM_TRANSPORT_EXCLUSIVE", "There was a command executed outside of an exclusive transport session" },
+};
+
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code) {
+  // just do a linear scan for now
+  unsigned i;
+  for (i = 0; i < sizeof(error_msgs)/sizeof(error_msgs[0]); i++)
+    if (code == error_msgs[i].code)
+      return error_msgs[i].code_name;
+
+  return "Unknown Error Code";
+}
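For reference, the lookup pattern used by `tpm_get_error_name` can be exercised standalone. This is only a sketch: the two-entry table and its code values below are hypothetical stand-ins for the full `error_msgs` array defined in the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TPM_RESULT;

/* Hypothetical two-entry subset standing in for the full error table */
static const struct {
    TPM_RESULT code;
    const char *code_name;
} error_msgs[] = {
    { 0x01, "TPM_AUTHFAIL" },
    { 0x02, "TPM_BADINDEX" },
};

/* Linear scan, falling back to a fixed string for unknown codes */
static const char *tpm_get_error_name(TPM_RESULT code)
{
    unsigned i;
    for (i = 0; i < sizeof(error_msgs)/sizeof(error_msgs[0]); i++)
        if (code == error_msgs[i].code)
            return error_msgs[i].code_name;
    return "Unknown Error Code";
}
```

A linear scan is fine here: the table is small and the function is only called on error paths.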
diff --git a/stubdom/vtpmmgr/log.h b/stubdom/vtpmmgr/log.h
new file mode 100644
index 0000000..5c7abf5
--- /dev/null
+++ b/stubdom/vtpmmgr/log.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __VTPM_LOG_H__
+#define __VTPM_LOG_H__
+
+#include <stdint.h>             // for uint32_t
+#include <stddef.h>             // for pointer NULL
+#include <stdio.h>
+#include "tcg.h"
+
+// =========================== LOGGING ==============================
+
+// the logging module numbers
+#define VTPM_LOG_TPM         1
+#define VTPM_LOG_TPM_DEEP    2
+#define VTPM_LOG_VTPM        3
+#define VTPM_LOG_VTPM_DEEP   4
+#define VTPM_LOG_TXDATA      5
+
+extern char *module_names[];
+
+// Default to standard logging
+#ifndef LOGGING_MODULES
+#define LOGGING_MODULES (BITMASK(VTPM_LOG_VTPM)|BITMASK(VTPM_LOG_TPM))
+#endif
+
+// bit-access macros
+#define BITMASK(idx)      ( 1U << (idx) )
+#define GETBIT(num,idx)   ( ((num) & BITMASK(idx)) >> (idx) )
+#define SETBIT(num,idx)   ( (num) |= BITMASK(idx) )
+#define CLEARBIT(num,idx) ( (num) &= ~BITMASK(idx) )
+
+#define vtpmloginfo(module, fmt, args...) \
+  do { if (GETBIT (LOGGING_MODULES, module) == 1) {			\
+    fprintf (stdout, "INFO[%s]: " fmt, module_names[module], ##args); \
+  } } while (0)
+
+#define vtpmloginfomore(module, fmt, args...) \
+  do { if (GETBIT (LOGGING_MODULES, module) == 1) {		      \
+    fprintf (stdout, fmt,##args);				      \
+  } } while (0)
+
+#define vtpmlogerror(module, fmt, args...) \
+  fprintf (stderr, "ERROR[%s]: " fmt, module_names[module], ##args)
+
+//typedef UINT32 tpm_size_t;
+
+// helper function for the error codes:
+const char* tpm_get_error_name (TPM_RESULT code);
+
+#endif // __VTPM_LOG_H__
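The module mask is plain bit arithmetic: with the default `LOGGING_MODULES`, only the VTPM and TPM modules log, and `GETBIT` decides per call site. A minimal self-contained sketch of the macros above:

```c
#include <assert.h>

/* Bit-access macros as in log.h */
#define BITMASK(idx)      ( 1U << (idx) )
#define GETBIT(num,idx)   ( ((num) & BITMASK(idx)) >> (idx) )

/* Logging module numbers */
#define VTPM_LOG_TPM       1
#define VTPM_LOG_TPM_DEEP  2
#define VTPM_LOG_VTPM      3
#define VTPM_LOG_TXDATA    5

/* Default mask: VTPM and TPM modules enabled */
#define LOGGING_MODULES (BITMASK(VTPM_LOG_VTPM)|BITMASK(VTPM_LOG_TPM))
```

Rebuilding with `-DLOGGING_MODULES=...` on the compiler command line swaps in a different mask, since the header only defines the default when the macro is unset.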
diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
new file mode 100644
index 0000000..77d32f0
--- /dev/null
+++ b/stubdom/vtpmmgr/marshal.h
@@ -0,0 +1,528 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef MARSHAL_H
+#define MARSHAL_H
+
+#include <stdlib.h>
+#include <mini-os/byteorder.h>
+#include <mini-os/endian.h>
+#include "tcg.h"
+
+typedef enum UnpackPtr {
+   UNPACK_ALIAS,
+   UNPACK_ALLOC
+} UnpackPtr;
+
+inline BYTE* pack_BYTE(BYTE* ptr, BYTE t) {
+   ptr[0] = t;
+   return ++ptr;
+}
+
+inline BYTE* unpack_BYTE(BYTE* ptr, BYTE* t) {
+   t[0] = ptr[0];
+   return ++ptr;
+}
+
+#define pack_BOOL(p, t) pack_BYTE(p, t)
+#define unpack_BOOL(p, t) unpack_BYTE(p, t)
+
+inline BYTE* pack_UINT16(BYTE* ptr, UINT16 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[0] = b[1];
+   ptr[1] = b[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* unpack_UINT16(BYTE* ptr, UINT16* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[1];
+   b[1] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+#endif
+   return ptr + sizeof(UINT16);
+}
+
+inline BYTE* pack_UINT32(BYTE* ptr, UINT32 t) {
+   BYTE* b = (BYTE*)&t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   ptr[3] = b[0];
+   ptr[2] = b[1];
+   ptr[1] = b[2];
+   ptr[0] = b[3];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   ptr[0] = b[0];
+   ptr[1] = b[1];
+   ptr[2] = b[2];
+   ptr[3] = b[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
+
+inline BYTE* unpack_UINT32(BYTE* ptr, UINT32* t) {
+   BYTE* b = (BYTE*)t;
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+   b[0] = ptr[3];
+   b[1] = ptr[2];
+   b[2] = ptr[1];
+   b[3] = ptr[0];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+   b[0] = ptr[0];
+   b[1] = ptr[1];
+   b[2] = ptr[2];
+   b[3] = ptr[3];
+#endif
+   return ptr + sizeof(UINT32);
+}
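The `__BYTE_ORDER` ladder above just byte-swaps host integers into TPM wire order, which is big-endian. An equivalent, endian-agnostic sketch using shifts, shown only to illustrate the wire layout these helpers produce (the patch itself uses the byte-copy form):

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t  BYTE;
typedef uint32_t UINT32;

/* Write t to ptr in big-endian (TPM wire) order, return advanced ptr */
static BYTE* pack_UINT32(BYTE* ptr, UINT32 t)
{
    ptr[0] = (BYTE)(t >> 24);
    ptr[1] = (BYTE)(t >> 16);
    ptr[2] = (BYTE)(t >> 8);
    ptr[3] = (BYTE)t;
    return ptr + sizeof(UINT32);
}

/* Read a big-endian UINT32 from ptr, return advanced ptr */
static BYTE* unpack_UINT32(BYTE* ptr, UINT32* t)
{
    *t = ((UINT32)ptr[0] << 24) | ((UINT32)ptr[1] << 16) |
         ((UINT32)ptr[2] << 8)  |  (UINT32)ptr[3];
    return ptr + sizeof(UINT32);
}
```

Either form round-trips: pack then unpack yields the original value on both little- and big-endian hosts.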
+
+#define pack_TPM_RESULT(p, t) pack_UINT32(p, t)
+#define pack_TPM_PCRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_DIRINDEX(p, t) pack_UINT32(p, t)
+#define pack_TPM_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TPM_AUTHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HASHHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_HMACHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENCHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_KEY_HANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TCPA_ENTITYHANDLE(p, t) pack_TPM_HANDLE(p, t)
+#define pack_TPM_RESOURCE_TYPE(p, t) pack_UINT32(p, t)
+#define pack_TPM_COMMAND_CODE(p, t) pack_UINT32(p, t)
+#define pack_TPM_PROTOCOL_ID(p, t) pack_UINT16(p, t)
+#define pack_TPM_AUTH_DATA_USAGE(p, t) pack_BYTE(p, t)
+#define pack_TPM_ENTITY_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_ALGORITHM_ID(p, t) pack_UINT32(p, t)
+#define pack_TPM_KEY_USAGE(p, t) pack_UINT16(p, t)
+#define pack_TPM_STARTUP_TYPE(p, t) pack_UINT16(p, t)
+#define pack_TPM_CAPABILITY_AREA(p, t) pack_UINT32(p, t)
+#define pack_TPM_ENC_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_SIG_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_MIGRATE_SCHEME(p, t) pack_UINT16(p, t)
+#define pack_TPM_PHYSICAL_PRESENCE(p, t) pack_UINT16(p, t)
+#define pack_TPM_KEY_FLAGS(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_RESULT(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PCRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_DIRINDEX(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_AUTHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HASHHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_HMACHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENCHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_KEY_HANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TCPA_ENTITYHANDLE(p, t) unpack_TPM_HANDLE(p, t)
+#define unpack_TPM_RESOURCE_TYPE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_COMMAND_CODE(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_PROTOCOL_ID(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_AUTH_DATA_USAGE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_ENTITY_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_ALGORITHM_ID(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_KEY_USAGE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STARTUP_TYPE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_CAPABILITY_AREA(p, t) unpack_UINT32(p, t)
+#define unpack_TPM_ENC_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_SIG_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_MIGRATE_SCHEME(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_PHYSICAL_PRESENCE(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_KEY_FLAGS(p, t) unpack_UINT32(p, t)
+
+#define pack_TPM_AUTH_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TCS_CONTEXT_HANDLE(p, t) pack_UINT32(p, t)
+#define pack_TCS_KEY_HANDLE(p, t) pack_UINT32(p, t)
+
+#define unpack_TPM_AUTH_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TCS_CONTEXT_HANDLE(p, t) unpack_UINT32(p, t)
+#define unpack_TCS_KEY_HANDLE(p, t) unpack_UINT32(p, t)
+
+inline BYTE* pack_BUFFER(BYTE* ptr, const BYTE* buf, UINT32 size) {
+   memcpy(ptr, buf, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_BUFFER(BYTE* ptr, BYTE* buf, UINT32 size) {
+   memcpy(buf, ptr, size);
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALIAS(BYTE* ptr, BYTE** buf, UINT32 size) {
+   *buf = ptr;
+   return ptr + size;
+}
+
+inline BYTE* unpack_ALLOC(BYTE* ptr, BYTE** buf, UINT32 size) {
+   if(size) {
+      *buf = malloc(size);
+      memcpy(*buf, ptr, size);
+   } else {
+      *buf = NULL;
+   }
+   return ptr + size;
+}
+
+inline BYTE* unpack_PTR(BYTE* ptr, BYTE** buf, UINT32 size, UnpackPtr alloc) {
+   if(alloc == UNPACK_ALLOC) {
+      return unpack_ALLOC(ptr, buf, size);
+   } else {
+      return unpack_ALIAS(ptr, buf, size);
+   }
+}
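The two unpack strategies differ only in ownership: `UNPACK_ALIAS` borrows a pointer into the input buffer (valid only as long as the buffer is), while `UNPACK_ALLOC` copies into heap memory the caller must free. A self-contained sketch of the pair:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint8_t  BYTE;
typedef uint32_t UINT32;

/* Borrow: *buf points into the caller's input buffer */
static BYTE* unpack_ALIAS(BYTE* ptr, BYTE** buf, UINT32 size)
{
    *buf = ptr;
    return ptr + size;
}

/* Own: *buf is a fresh heap copy the caller must free() */
static BYTE* unpack_ALLOC(BYTE* ptr, BYTE** buf, UINT32 size)
{
    if (size) {
        *buf = malloc(size);
        memcpy(*buf, ptr, size);
    } else {
        *buf = NULL;
    }
    return ptr + size;
}
```

Aliasing avoids copies when the command buffer outlives the parsed structure; allocation is needed when the structure is kept after the buffer is reused.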
+
+inline BYTE* pack_TPM_AUTHDATA(BYTE* ptr, const TPM_AUTHDATA* d) {
+   return pack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_AUTHDATA(BYTE* ptr, TPM_AUTHDATA* d) {
+   return unpack_BUFFER(ptr, *d, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_SECRET(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_ENCAUTH(p, t) pack_TPM_AUTHDATA(p, t)
+#define pack_TPM_PAYLOAD_TYPE(p, t) pack_BYTE(p, t)
+#define pack_TPM_TAG(p, t) pack_UINT16(p, t)
+#define pack_TPM_STRUCTURE_TAG(p, t) pack_UINT16(p, t)
+
+#define unpack_TPM_SECRET(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_ENCAUTH(p, t) unpack_TPM_AUTHDATA(p, t)
+#define unpack_TPM_PAYLOAD_TYPE(p, t) unpack_BYTE(p, t)
+#define unpack_TPM_TAG(p, t) unpack_UINT16(p, t)
+#define unpack_TPM_STRUCTURE_TAG(p, t) unpack_UINT16(p, t)
+
+inline BYTE* pack_TPM_VERSION(BYTE* ptr, const TPM_VERSION* t) {
+   ptr[0] = t->major;
+   ptr[1] = t->minor;
+   ptr[2] = t->revMajor;
+   ptr[3] = t->revMinor;
+   return ptr + 4;
+}
+
+inline BYTE* unpack_TPM_VERSION(BYTE* ptr, TPM_VERSION* t) {
+   t->major = ptr[0];
+   t->minor = ptr[1];
+   t->revMajor = ptr[2];
+   t->revMinor = ptr[3];
+   return ptr + 4;
+}
+
+inline BYTE* pack_TPM_CAP_VERSION_INFO(BYTE* ptr, const TPM_CAP_VERSION_INFO* v) {
+   ptr = pack_TPM_STRUCTURE_TAG(ptr, v->tag);
+   ptr = pack_TPM_VERSION(ptr, &v->version);
+   ptr = pack_UINT16(ptr, v->specLevel);
+   ptr = pack_BYTE(ptr, v->errataRev);
+   ptr = pack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = pack_UINT16(ptr, v->vendorSpecificSize);
+   ptr = pack_BUFFER(ptr, v->vendorSpecific, v->vendorSpecificSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_CAP_VERSION_INFO(BYTE* ptr, TPM_CAP_VERSION_INFO* v, UnpackPtr alloc) {
+   ptr = unpack_TPM_STRUCTURE_TAG(ptr, &v->tag);
+   ptr = unpack_TPM_VERSION(ptr, &v->version);
+   ptr = unpack_UINT16(ptr, &v->specLevel);
+   ptr = unpack_BYTE(ptr, &v->errataRev);
+   ptr = unpack_BUFFER(ptr, v->tpmVendorID, sizeof(v->tpmVendorID));
+   ptr = unpack_UINT16(ptr, &v->vendorSpecificSize);
+   ptr = unpack_PTR(ptr, &v->vendorSpecific, v->vendorSpecificSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_DIGEST(BYTE* ptr, const TPM_DIGEST* d) {
+   return pack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_DIGEST(BYTE* ptr, TPM_DIGEST* d) {
+   return unpack_BUFFER(ptr, d->digest, TPM_DIGEST_SIZE);
+}
+
+#define pack_TPM_PCRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_PCRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_COMPOSITE_HASH(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_COMPOSITE_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_DIRVALUE(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_DIRVALUE(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_HMAC(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_HMAC(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+#define pack_TPM_CHOSENID_HASH(ptr, d) pack_TPM_DIGEST(ptr, d)
+#define unpack_TPM_CHOSENID_HASH(ptr, d) unpack_TPM_DIGEST(ptr, d)
+
+inline BYTE* pack_TPM_NONCE(BYTE* ptr, const TPM_NONCE* n) {
+   return pack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* unpack_TPM_NONCE(BYTE* ptr, TPM_NONCE* n) {
+   return unpack_BUFFER(ptr, n->nonce, TPM_DIGEST_SIZE);
+}
+
+inline BYTE* pack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, const TPM_SYMMETRIC_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->blockSize);
+   ptr = pack_UINT32(ptr, k->ivSize);
+   return pack_BUFFER(ptr, k->IV, k->ivSize);
+}
+
+inline BYTE* unpack_TPM_SYMMETRIC_KEY_PARMS(BYTE* ptr, TPM_SYMMETRIC_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->blockSize);
+   ptr = unpack_UINT32(ptr, &k->ivSize);
+   return unpack_PTR(ptr, &k->IV, k->ivSize, alloc);
+}
+
+inline BYTE* pack_TPM_RSA_KEY_PARMS(BYTE* ptr, const TPM_RSA_KEY_PARMS* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_UINT32(ptr, k->numPrimes);
+   ptr = pack_UINT32(ptr, k->exponentSize);
+   return pack_BUFFER(ptr, k->exponent, k->exponentSize);
+}
+
+inline BYTE* unpack_TPM_RSA_KEY_PARMS(BYTE* ptr, TPM_RSA_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_UINT32(ptr, &k->numPrimes);
+   ptr = unpack_UINT32(ptr, &k->exponentSize);
+   return unpack_PTR(ptr, &k->exponent, k->exponentSize, alloc);
+}
+
+inline BYTE* pack_TPM_KEY_PARMS(BYTE* ptr, const TPM_KEY_PARMS* k) {
+   ptr = pack_TPM_ALGORITHM_ID(ptr, k->algorithmID);
+   ptr = pack_TPM_ENC_SCHEME(ptr, k->encScheme);
+   ptr = pack_TPM_SIG_SCHEME(ptr, k->sigScheme);
+   ptr = pack_UINT32(ptr, k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return pack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return pack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_KEY_PARMS(BYTE* ptr, TPM_KEY_PARMS* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_ALGORITHM_ID(ptr, &k->algorithmID);
+   ptr = unpack_TPM_ENC_SCHEME(ptr, &k->encScheme);
+   ptr = unpack_TPM_SIG_SCHEME(ptr, &k->sigScheme);
+   ptr = unpack_UINT32(ptr, &k->parmSize);
+
+   if(k->parmSize) {
+      switch(k->algorithmID) {
+         case TPM_ALG_RSA:
+            return unpack_TPM_RSA_KEY_PARMS(ptr, &k->parms.rsa, alloc);
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            return unpack_TPM_SYMMETRIC_KEY_PARMS(ptr, &k->parms.sym, alloc);
+      }
+   }
+   return ptr;
+}
+
+inline BYTE* pack_TPM_STORE_PUBKEY(BYTE* ptr, const TPM_STORE_PUBKEY* k) {
+   ptr = pack_UINT32(ptr, k->keyLength);
+   ptr = pack_BUFFER(ptr, k->key, k->keyLength);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORE_PUBKEY(BYTE* ptr, TPM_STORE_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_UINT32(ptr, &k->keyLength);
+   ptr = unpack_PTR(ptr, &k->key, k->keyLength, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PUBKEY(BYTE* ptr, const TPM_PUBKEY* k) {
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   return pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+}
+
+inline BYTE* unpack_TPM_PUBKEY(BYTE* ptr, TPM_PUBKEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   return unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+}
+
+inline BYTE* pack_TPM_PCR_SELECTION(BYTE* ptr, const TPM_PCR_SELECTION* p) {
+   ptr = pack_UINT16(ptr, p->sizeOfSelect);
+   ptr = pack_BUFFER(ptr, p->pcrSelect, p->sizeOfSelect);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_SELECTION(BYTE* ptr, TPM_PCR_SELECTION* p, UnpackPtr alloc) {
+   ptr = unpack_UINT16(ptr, &p->sizeOfSelect);
+   ptr = unpack_PTR(ptr, &p->pcrSelect, p->sizeOfSelect, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_INFO(BYTE* ptr, const TPM_PCR_INFO* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->pcrSelection);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = pack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_INFO(BYTE* ptr, TPM_PCR_INFO* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->pcrSelection, alloc);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtRelease);
+   ptr = unpack_TPM_COMPOSITE_HASH(ptr, &p->digestAtCreation);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_PCR_COMPOSITE(BYTE* ptr, const TPM_PCR_COMPOSITE* p) {
+   ptr = pack_TPM_PCR_SELECTION(ptr, &p->select);
+   ptr = pack_UINT32(ptr, p->valueSize);
+   ptr = pack_BUFFER(ptr, (const BYTE*)p->pcrValue, p->valueSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_PCR_COMPOSITE(BYTE* ptr, TPM_PCR_COMPOSITE* p, UnpackPtr alloc) {
+   ptr = unpack_TPM_PCR_SELECTION(ptr, &p->select, alloc);
+   ptr = unpack_UINT32(ptr, &p->valueSize);
+   ptr = unpack_PTR(ptr, (BYTE**)&p->pcrValue, p->valueSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_KEY(BYTE* ptr, const TPM_KEY* k) {
+   ptr = pack_TPM_VERSION(ptr, &k->ver);
+   ptr = pack_TPM_KEY_USAGE(ptr, k->keyUsage);
+   ptr = pack_TPM_KEY_FLAGS(ptr, k->keyFlags);
+   ptr = pack_TPM_AUTH_DATA_USAGE(ptr, k->authDataUsage);
+   ptr = pack_TPM_KEY_PARMS(ptr, &k->algorithmParms);
+   ptr = pack_UINT32(ptr, k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &k->PCRInfo);
+   }
+   ptr = pack_TPM_STORE_PUBKEY(ptr, &k->pubKey);
+   ptr = pack_UINT32(ptr, k->encDataSize);
+   return pack_BUFFER(ptr, k->encData, k->encDataSize);
+}
+
+inline BYTE* unpack_TPM_KEY(BYTE* ptr, TPM_KEY* k, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &k->ver);
+   ptr = unpack_TPM_KEY_USAGE(ptr, &k->keyUsage);
+   ptr = unpack_TPM_KEY_FLAGS(ptr, &k->keyFlags);
+   ptr = unpack_TPM_AUTH_DATA_USAGE(ptr, &k->authDataUsage);
+   ptr = unpack_TPM_KEY_PARMS(ptr, &k->algorithmParms, alloc);
+   ptr = unpack_UINT32(ptr, &k->PCRInfoSize);
+   if(k->PCRInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &k->PCRInfo, alloc);
+   }
+   ptr = unpack_TPM_STORE_PUBKEY(ptr, &k->pubKey, alloc);
+   ptr = unpack_UINT32(ptr, &k->encDataSize);
+   return unpack_PTR(ptr, &k->encData, k->encDataSize, alloc);
+}
+
+inline BYTE* pack_TPM_BOUND_DATA(BYTE* ptr, const TPM_BOUND_DATA* b, UINT32 payloadSize) {
+   ptr = pack_TPM_VERSION(ptr, &b->ver);
+   ptr = pack_TPM_PAYLOAD_TYPE(ptr, b->payload);
+   return pack_BUFFER(ptr, b->payloadData, payloadSize);
+}
+
+inline BYTE* unpack_TPM_BOUND_DATA(BYTE* ptr, TPM_BOUND_DATA* b, UINT32 payloadSize, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &b->ver);
+   ptr = unpack_TPM_PAYLOAD_TYPE(ptr, &b->payload);
+   return unpack_PTR(ptr, &b->payloadData, payloadSize, alloc);
+}
+
+inline BYTE* pack_TPM_STORED_DATA(BYTE* ptr, const TPM_STORED_DATA* d) {
+   ptr = pack_TPM_VERSION(ptr, &d->ver);
+   ptr = pack_UINT32(ptr, d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = pack_TPM_PCR_INFO(ptr, &d->sealInfo);
+   }
+   ptr = pack_UINT32(ptr, d->encDataSize);
+   ptr = pack_BUFFER(ptr, d->encData, d->encDataSize);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_STORED_DATA(BYTE* ptr, TPM_STORED_DATA* d, UnpackPtr alloc) {
+   ptr = unpack_TPM_VERSION(ptr, &d->ver);
+   ptr = unpack_UINT32(ptr, &d->sealInfoSize);
+   if(d->sealInfoSize) {
+      ptr = unpack_TPM_PCR_INFO(ptr, &d->sealInfo, alloc);
+   }
+   ptr = unpack_UINT32(ptr, &d->encDataSize);
+   ptr = unpack_PTR(ptr, &d->encData, d->encDataSize, alloc);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_AUTH_SESSION(BYTE* ptr, const TPM_AUTH_SESSION* auth) {
+   ptr = pack_TPM_AUTH_HANDLE(ptr, auth->AuthHandle);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+   ptr = pack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* unpack_TPM_AUTH_SESSION(BYTE* ptr, TPM_AUTH_SESSION* auth) {
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = unpack_BOOL(ptr, &auth->fContinueAuthSession);
+   ptr = unpack_TPM_AUTHDATA(ptr, &auth->HMAC);
+   return ptr;
+}
+
+inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG tag,
+      UINT32 size,
+      TPM_COMMAND_CODE ord) {
+   ptr = pack_UINT16(ptr, tag);
+   ptr = pack_UINT32(ptr, size);
+   return pack_UINT32(ptr, ord);
+}
+
+inline BYTE* unpack_TPM_RQU_HEADER(BYTE* ptr,
+      TPM_TAG* tag,
+      UINT32* size,
+      TPM_COMMAND_CODE* ord) {
+   ptr = unpack_UINT16(ptr, tag);
+   ptr = unpack_UINT32(ptr, size);
+   ptr = unpack_UINT32(ptr, ord);
+   return ptr;
+}
+
+#define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
+#define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
+
+#endif
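Every TPM 1.2 request starts with the 10-byte tag/size/ordinal header produced by `pack_TPM_RQU_HEADER`. A standalone sketch of building such a header (shift-based packing stands in for the patch's helpers; 0x00C1 is the TPM 1.2 `TPM_TAG_RQU_COMMAND` value, and ordinal 10 corresponds to `TPM_ORD_OIAP` from tcg.h):

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t  BYTE;
typedef uint16_t UINT16;
typedef uint32_t UINT32;

static BYTE* pack_UINT16(BYTE* p, UINT16 v)
{
    p[0] = (BYTE)(v >> 8); p[1] = (BYTE)v;
    return p + 2;
}

static BYTE* pack_UINT32(BYTE* p, UINT32 v)
{
    p[0] = (BYTE)(v >> 24); p[1] = (BYTE)(v >> 16);
    p[2] = (BYTE)(v >> 8);  p[3] = (BYTE)v;
    return p + 4;
}

/* Mirrors pack_TPM_RQU_HEADER: tag, then total request size, then ordinal */
static BYTE* pack_TPM_RQU_HEADER(BYTE* p, UINT16 tag, UINT32 size, UINT32 ord)
{
    p = pack_UINT16(p, tag);
    p = pack_UINT32(p, size);
    return pack_UINT32(p, ord);
}
```

Note that `size` is the length of the whole request including this header, so a no-body command like OIAP carries size 10.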
diff --git a/stubdom/vtpmmgr/minios.cfg b/stubdom/vtpmmgr/minios.cfg
new file mode 100644
index 0000000..3fb383d
--- /dev/null
+++ b/stubdom/vtpmmgr/minios.cfg
@@ -0,0 +1,14 @@
+CONFIG_TPMFRONT=y
+CONFIG_TPM_TIS=y
+CONFIG_TPMBACK=y
+CONFIG_START_NETWORK=n
+CONFIG_TEST=n
+CONFIG_PCIFRONT=n
+CONFIG_BLKFRONT=y
+CONFIG_NETFRONT=n
+CONFIG_FBFRONT=n
+CONFIG_KBDFRONT=n
+CONFIG_CONSFRONT=n
+CONFIG_XENBUS=y
+CONFIG_LWIP=n
+CONFIG_XC=n
diff --git a/stubdom/vtpmmgr/tcg.h b/stubdom/vtpmmgr/tcg.h
new file mode 100644
index 0000000..7687eae
--- /dev/null
+++ b/stubdom/vtpmmgr/tcg.h
@@ -0,0 +1,707 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005 Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TCG_H__
+#define __TCG_H__
+
+#include <stdlib.h>
+#include <stdint.h>
+
+// **************************** CONSTANTS *********************************
+
+// BOOL values
+#define TRUE 0x01
+#define FALSE 0x00
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+//
+// TPM_COMMAND_CODE values
+#define TPM_PROTECTED_ORDINAL 0x00000000UL
+#define TPM_UNPROTECTED_ORDINAL 0x80000000UL
+#define TPM_CONNECTION_ORDINAL 0x40000000UL
+#define TPM_VENDOR_ORDINAL 0x20000000UL
+
+#define TPM_ORD_OIAP                     (10UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OSAP                     (11UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuth               (12UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TakeOwnership            (13UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymStart      (14UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthAsymFinish     (15UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ChangeAuthOwner          (16UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Extend                   (20UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PcrRead                  (21UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Quote                    (22UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Seal                     (23UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Unseal                   (24UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirWriteAuth             (25UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DirRead                  (26UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_UnBind                   (30UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateWrapKey            (31UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKey                  (32UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetPubKey                (33UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EvictKey                 (34UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMigrationBlob      (40UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReWrapKey                (41UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ConvertMigrationBlob     (42UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_AuthorizeMigrationKey    (43UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateMaintenanceArchive (44UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadMaintenanceArchive   (45UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_KillMaintenanceFeature   (46UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadManuMaintPub         (47UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadManuMaintPub         (48UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifyKey               (50UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Sign                     (60UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetRandom                (70UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_StirRandom               (71UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestFull             (80UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SelfTestStartup          (81UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CertifySelfTest          (82UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ContinueSelfTest         (83UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTestResult            (84UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Reset                    (90UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerClear               (91UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableOwnerClear        (92UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ForceClear               (93UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisableForceClear        (94UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilitySigned      (100UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapability            (101UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetCapabilityOwner       (102UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerSetDisable          (110UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalEnable           (111UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalDisable          (112UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOwnerInstall          (113UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PhysicalSetDeactivated   (114UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetTempDeactivated       (115UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateEndorsementKeyPair (120UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MakeIdentity             (121UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ActivateIdentity         (122UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadPubek                (124UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_OwnerReadPubek           (125UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_DisablePubekRead         (126UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEvent            (130UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetAuditEventSigned      (131UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetOrdinalAuditStatus    (140UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetOrdinalAuditStatus    (141UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Terminate_Handle         (150UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Init                     (151UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveState                (152UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Startup                  (153UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SetRedirection           (154UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Start                (160UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Update               (161UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1Complete             (162UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SHA1CompleteExtend       (163UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FieldUpgrade             (170UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveKeyContext           (180UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadKeyContext           (181UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveAuthContext          (182UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadAuthContext          (183UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_SaveContext                      (184UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_LoadContext                      (185UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_FlushSpecific                    (186UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_PCR_Reset                        (200UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_DefineSpace                   (204UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValue                    (205UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_WriteValueAuth                (206UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValue                     (207UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_NV_ReadValueAuth                 (208UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_UpdateVerification      (209UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_Manage                  (210UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateKeyDelegation     (212UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_CreateOwnerDelegation   (213UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_VerifyDelegation        (214UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_LoadOwnerDelegation     (216UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadAuth                (217UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_Delegate_ReadTable               (219UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_CreateCounter                    (220UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_IncrementCounter                 (221UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReadCounter                      (222UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounter                   (223UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseCounterOwner              (224UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_EstablishTransport               (230UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ExecuteTransport                 (231UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_ReleaseTransportSigned           (232UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_GetTicks                         (241UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_TickStampBlob                    (242UL + TPM_PROTECTED_ORDINAL)
+#define TPM_ORD_MAX                              (256UL + TPM_PROTECTED_ORDINAL)
+
+#define TSC_ORD_PhysicalPresence         (10UL + TPM_CONNECTION_ORDINAL)
+
+
+
+//
+// TPM_RESULT values
+//
+// The full table of return codes from the TPM 1.2 specification.
+
+#define TPM_BASE   0x0 // The start of TPM return codes
+#define TPM_VENDOR_ERROR 0x00000400 // Mask indicating a vendor-specific error code for vendor-specific commands
+#define TPM_NON_FATAL  0x00000800 // Mask indicating that the error code is a non-fatal failure.
+
+#define TPM_SUCCESS                TPM_BASE         // Successful completion of the operation
+#define TPM_AUTHFAIL               (TPM_BASE + 1)   // Authentication failed
+#define TPM_BADINDEX               (TPM_BASE + 2)   // The index to a PCR, DIR or other register is incorrect
+#define TPM_BAD_PARAMETER          (TPM_BASE + 3)   // One or more parameters are bad
+#define TPM_AUDITFAILURE           (TPM_BASE + 4)   // An operation completed successfully but the auditing of that operation failed.
+#define TPM_CLEAR_DISABLED         (TPM_BASE + 5)   // The clear disable flag is set and all clear operations now require physical access
+#define TPM_DEACTIVATED            (TPM_BASE + 6)   // The TPM is deactivated
+#define TPM_DISABLED               (TPM_BASE + 7)   // The TPM is disabled
+#define TPM_DISABLED_CMD           (TPM_BASE + 8)   // The target command has been disabled
+#define TPM_FAIL                   (TPM_BASE + 9)   // The operation failed
+#define TPM_BAD_ORDINAL            (TPM_BASE + 10)  // The ordinal was unknown or inconsistent
+#define TPM_INSTALL_DISABLED       (TPM_BASE + 11)  // The ability to install an owner is disabled
+#define TPM_INVALID_KEYHANDLE      (TPM_BASE + 12)  // The key handle presented was invalid
+#define TPM_KEYNOTFOUND            (TPM_BASE + 13)  // The target key was not found
+#define TPM_INAPPROPRIATE_ENC      (TPM_BASE + 14)  // Unacceptable encryption scheme
+#define TPM_MIGRATEFAIL            (TPM_BASE + 15)  // Migration authorization failed
+#define TPM_INVALID_PCR_INFO       (TPM_BASE + 16)  // PCR information could not be interpreted
+#define TPM_NOSPACE                (TPM_BASE + 17)  // No room to load key.
+#define TPM_NOSRK                  (TPM_BASE + 18)  // There is no SRK set
+#define TPM_NOTSEALED_BLOB         (TPM_BASE + 19)  // An encrypted blob is invalid or was not created by this TPM
+#define TPM_OWNER_SET              (TPM_BASE + 20)  // There is already an Owner
+#define TPM_RESOURCES              (TPM_BASE + 21)  // The TPM has insufficient internal resources to perform the requested action.
+#define TPM_SHORTRANDOM            (TPM_BASE + 22)  // A random string was too short
+#define TPM_SIZE                   (TPM_BASE + 23)  // The TPM does not have the space to perform the operation.
+#define TPM_WRONGPCRVAL            (TPM_BASE + 24)  // The named PCR value does not match the current PCR value.
+#define TPM_BAD_PARAM_SIZE         (TPM_BASE + 25)  // The paramSize argument to the command has the incorrect value
+#define TPM_SHA_THREAD             (TPM_BASE + 26)  // There is no existing SHA-1 thread.
+#define TPM_SHA_ERROR              (TPM_BASE + 27)  // The calculation is unable to proceed because the existing SHA-1 thread has already encountered an error.
+#define TPM_FAILEDSELFTEST         (TPM_BASE + 28)  // Self-test has failed and the TPM has shut down.
+#define TPM_AUTH2FAIL              (TPM_BASE + 29)  // The authorization for the second key in a two-key function failed
+#define TPM_BADTAG                 (TPM_BASE + 30)  // The tag value sent for a command is invalid
+#define TPM_IOERROR                (TPM_BASE + 31)  // An IO error occurred transmitting information to the TPM
+#define TPM_ENCRYPT_ERROR          (TPM_BASE + 32)  // The encryption process had a problem.
+#define TPM_DECRYPT_ERROR          (TPM_BASE + 33)  // The decryption process did not complete.
+#define TPM_INVALID_AUTHHANDLE     (TPM_BASE + 34)  // An invalid handle was used.
+#define TPM_NO_ENDORSEMENT         (TPM_BASE + 35)  // The TPM does not have an EK installed
+#define TPM_INVALID_KEYUSAGE       (TPM_BASE + 36)  // The usage of a key is not allowed
+#define TPM_WRONG_ENTITYTYPE       (TPM_BASE + 37)  // The submitted entity type is not allowed
+#define TPM_INVALID_POSTINIT       (TPM_BASE + 38)  // The command was received in the wrong sequence relative to TPM_Init and a subsequent TPM_Startup
+#define TPM_INAPPROPRIATE_SIG      (TPM_BASE + 39)  // Signed data cannot include additional DER information
+#define TPM_BAD_KEY_PROPERTY       (TPM_BASE + 40)  // The key properties in TPM_KEY_PARMs are not supported by this TPM
+
+#define TPM_BAD_MIGRATION          (TPM_BASE + 41)  // The migration properties of this key are incorrect.
+#define TPM_BAD_SCHEME             (TPM_BASE + 42)  // The signature or encryption scheme for this key is incorrect or not permitted in this situation.
+#define TPM_BAD_DATASIZE           (TPM_BASE + 43)  // The size of the data (or blob) parameter is bad or inconsistent with the referenced key
+#define TPM_BAD_MODE               (TPM_BASE + 44)  // A mode parameter is bad, such as capArea or subCapArea for TPM_GetCapability, physicalPresence for TPM_PhysicalPresence, or migrationType for TPM_CreateMigrationBlob.
+#define TPM_BAD_PRESENCE           (TPM_BASE + 45)  // Either the physicalPresence or physicalPresenceLock bits have the wrong value
+#define TPM_BAD_VERSION            (TPM_BASE + 46)  // The TPM cannot perform this version of the capability
+#define TPM_NO_WRAP_TRANSPORT      (TPM_BASE + 47)  // The TPM does not allow for wrapped transport sessions
+#define TPM_AUDITFAIL_UNSUCCESSFUL (TPM_BASE + 48)  // TPM audit construction failed and the underlying command was also returning a failure code
+#define TPM_AUDITFAIL_SUCCESSFUL   (TPM_BASE + 49)  // TPM audit construction failed and the underlying command was returning success
+#define TPM_NOTRESETABLE           (TPM_BASE + 50)  // Attempt to reset a PCR register that does not have the resettable attribute
+#define TPM_NOTLOCAL               (TPM_BASE + 51)  // Attempt to reset a PCR register that requires locality, and the locality modifier was not part of the command transport
+#define TPM_BAD_TYPE               (TPM_BASE + 52)  // Make identity blob not properly typed
+#define TPM_INVALID_RESOURCE       (TPM_BASE + 53)  // When saving context, the identified resource type does not match the actual resource
+#define TPM_NOTFIPS                (TPM_BASE + 54)  // The TPM is attempting to execute a command only available in FIPS mode
+#define TPM_INVALID_FAMILY         (TPM_BASE + 55)  // The command is attempting to use an invalid family ID
+#define TPM_NO_NV_PERMISSION       (TPM_BASE + 56)  // The permission to manipulate the NV storage is not available
+#define TPM_REQUIRES_SIGN          (TPM_BASE + 57)  // The operation requires a signed command
+#define TPM_KEY_NOTSUPPORTED       (TPM_BASE + 58)  // Wrong operation to load an NV key
+#define TPM_AUTH_CONFLICT          (TPM_BASE + 59)  // NV_LoadKey blob requires both owner and blob authorization
+#define TPM_AREA_LOCKED            (TPM_BASE + 60)  // The NV area is locked and not writable
+#define TPM_BAD_LOCALITY           (TPM_BASE + 61)  // The locality is incorrect for the attempted operation
+#define TPM_READ_ONLY              (TPM_BASE + 62)  // The NV area is read-only and cannot be written to
+#define TPM_PER_NOWRITE            (TPM_BASE + 63)  // There is no protection on the write to the NV area
+#define TPM_FAMILYCOUNT            (TPM_BASE + 64)  // The family count value does not match
+#define TPM_WRITE_LOCKED           (TPM_BASE + 65)  // The NV area has already been written to
+#define TPM_BAD_ATTRIBUTES         (TPM_BASE + 66)  // The NV area attributes conflict
+#define TPM_INVALID_STRUCTURE      (TPM_BASE + 67)  // The structure tag and version are invalid or inconsistent
+#define TPM_KEY_OWNER_CONTROL      (TPM_BASE + 68)  // The key is under control of the TPM Owner and can only be evicted by the TPM Owner.
+#define TPM_BAD_COUNTER            (TPM_BASE + 69)  // The counter handle is incorrect
+#define TPM_NOT_FULLWRITE          (TPM_BASE + 70)  // The write is not a complete write of the area
+#define TPM_CONTEXT_GAP            (TPM_BASE + 71)  // The gap between saved context counts is too large
+#define TPM_MAXNVWRITES            (TPM_BASE + 72)  // The maximum number of NV writes without an owner has been exceeded
+#define TPM_NOOPERATOR             (TPM_BASE + 73)  // No operator authorization value is set
+#define TPM_RESOURCEMISSING        (TPM_BASE + 74)  // The resource pointed to by context is not loaded
+#define TPM_DELEGATE_LOCK          (TPM_BASE + 75)  // The delegate administration is locked
+#define TPM_DELEGATE_FAMILY        (TPM_BASE + 76)  // Attempt to manage a family other than the delegated family
+#define TPM_DELEGATE_ADMIN         (TPM_BASE + 77)  // Delegation table management not enabled
+#define TPM_TRANSPORT_EXCLUSIVE    (TPM_BASE + 78)  // A command was executed outside of an exclusive transport session
+
+// TPM_STARTUP_TYPE values
+#define TPM_ST_CLEAR 0x0001
+#define TPM_ST_STATE 0x0002
+#define TPM_ST_DEACTIVATED 0x0003
+
+// TPM_TAG values
+#define TPM_TAG_RQU_COMMAND 0x00c1
+#define TPM_TAG_RQU_AUTH1_COMMAND 0x00c2
+#define TPM_TAG_RQU_AUTH2_COMMAND 0x00c3
+#define TPM_TAG_RSP_COMMAND 0x00c4
+#define TPM_TAG_RSP_AUTH1_COMMAND 0x00c5
+#define TPM_TAG_RSP_AUTH2_COMMAND 0x00c6
+
+// TPM_PAYLOAD_TYPE values
+#define TPM_PT_ASYM 0x01
+#define TPM_PT_BIND 0x02
+#define TPM_PT_MIGRATE 0x03
+#define TPM_PT_MAINT 0x04
+#define TPM_PT_SEAL 0x05
+
+// TPM_ENTITY_TYPE values
+#define TPM_ET_KEYHANDLE 0x0001
+#define TPM_ET_OWNER 0x0002
+#define TPM_ET_DATA 0x0003
+#define TPM_ET_SRK 0x0004
+#define TPM_ET_KEY 0x0005
+
+// TPM_RESOURCE_TYPE values
+#define TPM_RT_KEY      0x00000001
+#define TPM_RT_AUTH     0x00000002
+#define TPM_RT_HASH     0x00000003
+#define TPM_RT_TRANS    0x00000004
+#define TPM_RT_CONTEXT  0x00000005
+#define TPM_RT_COUNTER  0x00000006
+#define TPM_RT_DELEGATE 0x00000007
+#define TPM_RT_DAA_TPM  0x00000008
+#define TPM_RT_DAA_V0   0x00000009
+#define TPM_RT_DAA_V1   0x0000000A
+
+
+
+// TPM_PROTOCOL_ID values
+#define TPM_PID_OIAP 0x0001
+#define TPM_PID_OSAP 0x0002
+#define TPM_PID_ADIP 0x0003
+#define TPM_PID_ADCP 0x0004
+#define TPM_PID_OWNER 0x0005
+
+// TPM_ALGORITHM_ID values
+#define TPM_ALG_RSA 0x00000001
+#define TPM_ALG_SHA 0x00000004
+#define TPM_ALG_HMAC 0x00000005
+#define TPM_ALG_AES128 0x00000006
+#define TPM_ALG_MFG1 0x00000007
+#define TPM_ALG_AES192 0x00000008
+#define TPM_ALG_AES256 0x00000009
+#define TPM_ALG_XOR 0x0000000A
+
+// TPM_ENC_SCHEME values
+#define TPM_ES_NONE 0x0001
+#define TPM_ES_RSAESPKCSv15 0x0002
+#define TPM_ES_RSAESOAEP_SHA1_MGF1 0x0003
+
+// TPM_SIG_SCHEME values
+#define TPM_SS_NONE 0x0001
+#define TPM_SS_RSASSAPKCS1v15_SHA1 0x0002
+#define TPM_SS_RSASSAPKCS1v15_DER 0x0003
+
+/*
+ * TPM_CAPABILITY_AREA Values for TPM_GetCapability ([TPM_Part2], Section 21.1)
+ */
+#define TPM_CAP_ORD                     0x00000001
+#define TPM_CAP_ALG                     0x00000002
+#define TPM_CAP_PID                     0x00000003
+#define TPM_CAP_FLAG                    0x00000004
+#define TPM_CAP_PROPERTY                0x00000005
+#define TPM_CAP_VERSION                 0x00000006
+#define TPM_CAP_KEY_HANDLE              0x00000007
+#define TPM_CAP_CHECK_LOADED            0x00000008
+#define TPM_CAP_SYM_MODE                0x00000009
+#define TPM_CAP_KEY_STATUS              0x0000000C
+#define TPM_CAP_NV_LIST                 0x0000000D
+#define TPM_CAP_MFR                     0x00000010
+#define TPM_CAP_NV_INDEX                0x00000011
+#define TPM_CAP_TRANS_ALG               0x00000012
+#define TPM_CAP_HANDLE                  0x00000014
+#define TPM_CAP_TRANS_ES                0x00000015
+#define TPM_CAP_AUTH_ENCRYPT            0x00000017
+#define TPM_CAP_SELECT_SIZE             0x00000018
+#define TPM_CAP_DA_LOGIC                0x00000019
+#define TPM_CAP_VERSION_VAL             0x0000001A
+
+/* subCap definitions ([TPM_Part2], Section 21.2) */
+#define TPM_CAP_PROP_PCR                0x00000101
+#define TPM_CAP_PROP_DIR                0x00000102
+#define TPM_CAP_PROP_MANUFACTURER       0x00000103
+#define TPM_CAP_PROP_KEYS               0x00000104
+#define TPM_CAP_PROP_MIN_COUNTER        0x00000107
+#define TPM_CAP_FLAG_PERMANENT          0x00000108
+#define TPM_CAP_FLAG_VOLATILE           0x00000109
+#define TPM_CAP_PROP_AUTHSESS           0x0000010A
+#define TPM_CAP_PROP_TRANSESS           0x0000010B
+#define TPM_CAP_PROP_COUNTERS           0x0000010C
+#define TPM_CAP_PROP_MAX_AUTHSESS       0x0000010D
+#define TPM_CAP_PROP_MAX_TRANSESS       0x0000010E
+#define TPM_CAP_PROP_MAX_COUNTERS       0x0000010F
+#define TPM_CAP_PROP_MAX_KEYS           0x00000110
+#define TPM_CAP_PROP_OWNER              0x00000111
+#define TPM_CAP_PROP_CONTEXT            0x00000112
+#define TPM_CAP_PROP_MAX_CONTEXT        0x00000113
+#define TPM_CAP_PROP_FAMILYROWS         0x00000114
+#define TPM_CAP_PROP_TIS_TIMEOUT        0x00000115
+#define TPM_CAP_PROP_STARTUP_EFFECT     0x00000116
+#define TPM_CAP_PROP_DELEGATE_ROW       0x00000117
+#define TPM_CAP_PROP_MAX_DAASESS        0x00000119
+#define TPM_CAP_PROP_DAASESS            0x0000011A
+#define TPM_CAP_PROP_CONTEXT_DIST       0x0000011B
+#define TPM_CAP_PROP_DAA_INTERRUPT      0x0000011C
+#define TPM_CAP_PROP_SESSIONS           0x0000011D
+#define TPM_CAP_PROP_MAX_SESSIONS       0x0000011E
+#define TPM_CAP_PROP_CMK_RESTRICTION    0x0000011F
+#define TPM_CAP_PROP_DURATION           0x00000120
+#define TPM_CAP_PROP_ACTIVE_COUNTER     0x00000122
+#define TPM_CAP_PROP_MAX_NV_AVAILABLE   0x00000123
+#define TPM_CAP_PROP_INPUT_BUFFER       0x00000124
+
+// TPM_KEY_USAGE values
+#define TPM_KEY_EK 0x0000
+#define TPM_KEY_SIGNING 0x0010
+#define TPM_KEY_STORAGE 0x0011
+#define TPM_KEY_IDENTITY 0x0012
+#define TPM_KEY_AUTHCHANGE 0x0013
+#define TPM_KEY_BIND 0x0014
+#define TPM_KEY_LEGACY 0x0015
+
+// TPM_AUTH_DATA_USAGE values
+#define TPM_AUTH_NEVER 0x00
+#define TPM_AUTH_ALWAYS 0x01
+
+// Key Handle of owner and srk
+#define TPM_OWNER_KEYHANDLE 0x40000001
+#define TPM_SRK_KEYHANDLE 0x40000000
+
+
+
+// *************************** TYPEDEFS *********************************
+typedef unsigned char BYTE;
+typedef unsigned char BOOL;
+typedef uint16_t UINT16;
+typedef uint32_t UINT32;
+typedef uint64_t UINT64;
+
+typedef UINT32 TPM_RESULT;
+typedef UINT32 TPM_PCRINDEX;
+typedef UINT32 TPM_DIRINDEX;
+typedef UINT32 TPM_HANDLE;
+typedef TPM_HANDLE TPM_AUTHHANDLE;
+typedef TPM_HANDLE TCPA_HASHHANDLE;
+typedef TPM_HANDLE TCPA_HMACHANDLE;
+typedef TPM_HANDLE TCPA_ENCHANDLE;
+typedef TPM_HANDLE TPM_KEY_HANDLE;
+typedef TPM_HANDLE TCPA_ENTITYHANDLE;
+typedef UINT32 TPM_RESOURCE_TYPE;
+typedef UINT32 TPM_COMMAND_CODE;
+typedef UINT16 TPM_PROTOCOL_ID;
+typedef BYTE TPM_AUTH_DATA_USAGE;
+typedef UINT16 TPM_ENTITY_TYPE;
+typedef UINT32 TPM_ALGORITHM_ID;
+typedef UINT16 TPM_KEY_USAGE;
+typedef UINT16 TPM_STARTUP_TYPE;
+typedef UINT32 TPM_CAPABILITY_AREA;
+typedef UINT16 TPM_ENC_SCHEME;
+typedef UINT16 TPM_SIG_SCHEME;
+typedef UINT16 TPM_MIGRATE_SCHEME;
+typedef UINT16 TPM_PHYSICAL_PRESENCE;
+typedef UINT32 TPM_KEY_FLAGS;
+
+#define TPM_DIGEST_SIZE 20  // Don't change this
+typedef BYTE TPM_AUTHDATA[TPM_DIGEST_SIZE];
+typedef TPM_AUTHDATA TPM_SECRET;
+typedef TPM_AUTHDATA TPM_ENCAUTH;
+typedef BYTE TPM_PAYLOAD_TYPE;
+typedef UINT16 TPM_TAG;
+typedef UINT16 TPM_STRUCTURE_TAG;
+
+// Data Types of the TCS
+typedef UINT32 TCS_AUTHHANDLE;  // Handle addressing an authorization session
+typedef UINT32 TCS_CONTEXT_HANDLE; // Basic context handle
+typedef UINT32 TCS_KEY_HANDLE;  // Basic key handle
+
+// ************************* STRUCTURES **********************************
+
+typedef struct TPM_VERSION {
+  BYTE major;
+  BYTE minor;
+  BYTE revMajor;
+  BYTE revMinor;
+} TPM_VERSION;
+
+static const TPM_VERSION TPM_STRUCT_VER_1_1 = { 1,1,0,0 };
+
+typedef struct TPM_CAP_VERSION_INFO {
+   TPM_STRUCTURE_TAG tag;
+   TPM_VERSION version;
+   UINT16 specLevel;
+   BYTE errataRev;
+   BYTE tpmVendorID[4];
+   UINT16 vendorSpecificSize;
+   BYTE* vendorSpecific;
+} TPM_CAP_VERSION_INFO;
+
+inline void free_TPM_CAP_VERSION_INFO(TPM_CAP_VERSION_INFO* v) {
+   free(v->vendorSpecific);
+   v->vendorSpecific = NULL;
+}
+
+typedef struct TPM_DIGEST {
+  BYTE digest[TPM_DIGEST_SIZE];
+} TPM_DIGEST;
+
+typedef TPM_DIGEST TPM_PCRVALUE;
+typedef TPM_DIGEST TPM_COMPOSITE_HASH;
+typedef TPM_DIGEST TPM_DIRVALUE;
+typedef TPM_DIGEST TPM_HMAC;
+typedef TPM_DIGEST TPM_CHOSENID_HASH;
+
+typedef struct TPM_NONCE {
+  BYTE nonce[TPM_DIGEST_SIZE];
+} TPM_NONCE;
+
+typedef struct TPM_SYMMETRIC_KEY_PARMS {
+   UINT32 keyLength;
+   UINT32 blockSize;
+   UINT32 ivSize;
+   BYTE* IV;
+} TPM_SYMMETRIC_KEY_PARMS;
+
+inline void free_TPM_SYMMETRIC_KEY_PARMS(TPM_SYMMETRIC_KEY_PARMS* p) {
+   free(p->IV);
+   p->IV = NULL;
+}
+
+#define TPM_SYMMETRIC_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+typedef struct TPM_RSA_KEY_PARMS {
+  UINT32 keyLength;
+  UINT32 numPrimes;
+  UINT32 exponentSize;
+  BYTE* exponent;
+} TPM_RSA_KEY_PARMS;
+
+#define TPM_RSA_KEY_PARMS_INIT { 0, 0, 0, NULL }
+
+inline void free_TPM_RSA_KEY_PARMS(TPM_RSA_KEY_PARMS* p) {
+   free(p->exponent);
+   p->exponent = NULL;
+}
+
+typedef struct TPM_KEY_PARMS {
+  TPM_ALGORITHM_ID algorithmID;
+  TPM_ENC_SCHEME encScheme;
+  TPM_SIG_SCHEME sigScheme;
+  UINT32 parmSize;
+  union {
+     TPM_SYMMETRIC_KEY_PARMS sym;
+     TPM_RSA_KEY_PARMS rsa;
+  } parms;
+} TPM_KEY_PARMS;
+
+#define TPM_KEY_PARMS_INIT { 0, 0, 0, 0 }
+
+inline void free_TPM_KEY_PARMS(TPM_KEY_PARMS* p) {
+   if(p->parmSize) {
+      switch(p->algorithmID) {
+         case TPM_ALG_RSA:
+            free_TPM_RSA_KEY_PARMS(&p->parms.rsa);
+            break;
+         case TPM_ALG_AES128:
+         case TPM_ALG_AES192:
+         case TPM_ALG_AES256:
+            free_TPM_SYMMETRIC_KEY_PARMS(&p->parms.sym);
+            break;
+      }
+   }
+}
+
+typedef struct TPM_STORE_PUBKEY {
+  UINT32 keyLength;
+  BYTE* key;
+} TPM_STORE_PUBKEY;
+
+#define TPM_STORE_PUBKEY_INIT { 0, NULL }
+
+inline void free_TPM_STORE_PUBKEY(TPM_STORE_PUBKEY* p) {
+   free(p->key);
+   p->key = NULL;
+}
+
+typedef struct TPM_PUBKEY {
+  TPM_KEY_PARMS algorithmParms;
+  TPM_STORE_PUBKEY pubKey;
+} TPM_PUBKEY;
+
+#define TPM_PUBKEY_INIT { TPM_KEY_PARMS_INIT, TPM_STORE_PUBKEY_INIT }
+
+inline void free_TPM_PUBKEY(TPM_PUBKEY* k) {
+   free_TPM_KEY_PARMS(&k->algorithmParms);
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+}
+
+typedef struct TPM_PCR_SELECTION {
+   UINT16 sizeOfSelect;
+   BYTE* pcrSelect;
+} TPM_PCR_SELECTION;
+
+#define TPM_PCR_SELECTION_INIT { 0, NULL }
+
+inline void free_TPM_PCR_SELECTION(TPM_PCR_SELECTION* p) {
+   free(p->pcrSelect);
+   p->pcrSelect = NULL;
+}
+
+typedef struct TPM_PCR_INFO {
+   TPM_PCR_SELECTION pcrSelection;
+   TPM_COMPOSITE_HASH digestAtRelease;
+   TPM_COMPOSITE_HASH digestAtCreation;
+} TPM_PCR_INFO;
+
+#define TPM_PCR_INFO_INIT { TPM_PCR_SELECTION_INIT }
+
+inline void free_TPM_PCR_INFO(TPM_PCR_INFO* p) {
+   free_TPM_PCR_SELECTION(&p->pcrSelection);
+}
+
+typedef struct TPM_PCR_COMPOSITE {
+  TPM_PCR_SELECTION select;
+  UINT32 valueSize;
+  TPM_PCRVALUE* pcrValue;
+} TPM_PCR_COMPOSITE;
+
+#define TPM_PCR_COMPOSITE_INIT { TPM_PCR_SELECTION_INIT, 0, NULL }
+
+inline void free_TPM_PCR_COMPOSITE(TPM_PCR_COMPOSITE* p) {
+   free_TPM_PCR_SELECTION(&p->select);
+   free(p->pcrValue);
+   p->pcrValue = NULL;
+}
+
+typedef struct TPM_KEY {
+  TPM_VERSION         ver;
+  TPM_KEY_USAGE       keyUsage;
+  TPM_KEY_FLAGS       keyFlags;
+  TPM_AUTH_DATA_USAGE authDataUsage;
+  TPM_KEY_PARMS       algorithmParms;
+  UINT32              PCRInfoSize;
+  TPM_PCR_INFO        PCRInfo;
+  TPM_STORE_PUBKEY    pubKey;
+  UINT32              encDataSize;
+  BYTE*               encData;
+} TPM_KEY;
+
+#define TPM_KEY_INIT { .algorithmParms = TPM_KEY_PARMS_INIT,\
+   .PCRInfoSize = 0, .PCRInfo = TPM_PCR_INFO_INIT, \
+   .pubKey = TPM_STORE_PUBKEY_INIT, \
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_KEY(TPM_KEY* k) {
+   if(k->PCRInfoSize) {
+      free_TPM_PCR_INFO(&k->PCRInfo);
+   }
+   free_TPM_STORE_PUBKEY(&k->pubKey);
+   free(k->encData);
+   k->encData = NULL;
+}
+
+typedef struct TPM_BOUND_DATA {
+  TPM_VERSION ver;
+  TPM_PAYLOAD_TYPE payload;
+  BYTE* payloadData;
+} TPM_BOUND_DATA;
+
+#define TPM_BOUND_DATA_INIT { .payloadData = NULL }
+
+inline void free_TPM_BOUND_DATA(TPM_BOUND_DATA* d) {
+   free(d->payloadData);
+   d->payloadData = NULL;
+}
+
+typedef struct TPM_STORED_DATA {
+  TPM_VERSION ver;
+  UINT32 sealInfoSize;
+  TPM_PCR_INFO sealInfo;
+  UINT32 encDataSize;
+  BYTE* encData;
+} TPM_STORED_DATA;
+
+#define TPM_STORED_DATA_INIT { .sealInfoSize = 0, .sealInfo = TPM_PCR_INFO_INIT,\
+   .encDataSize = 0, .encData = NULL }
+
+inline void free_TPM_STORED_DATA(TPM_STORED_DATA* d) {
+   if(d->sealInfoSize) {
+      free_TPM_PCR_INFO(&d->sealInfo);
+   }
+   free(d->encData);
+   d->encData = NULL;
+}
+
+typedef struct TPM_AUTH_SESSION {
+  TPM_AUTHHANDLE  AuthHandle;
+  TPM_NONCE   NonceOdd;   // system
+  TPM_NONCE   NonceEven;   // TPM
+  BOOL   fContinueAuthSession;
+  TPM_AUTHDATA  HMAC;
+} TPM_AUTH_SESSION;
+
+#define TPM_AUTH_SESSION_INIT { .AuthHandle = 0, .fContinueAuthSession = FALSE }
+
+// ---------------------- Functions for checking TPM_RESULTs -----------------
+
+#include <stdio.h>
+
+// FIXME: Review use of these and delete unneeded ones.
+
+// These macros depend on the enclosing function's local structure:
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+#define ERRORDIE(s) do { status = s; \
+                         fprintf (stderr, "*** ERRORDIE in %s at %s: %i\n", __func__, __FILE__, __LINE__); \
+                         goto abort_egress; } \
+                    while (0)
+
+// DEPENDS: local var 'status' of type TPM_RESULT
+// DEPENDS: label 'abort_egress' which cleans up and returns the status
+// Try command c (evaluated exactly once). If it fails, set status to s and goto abort.
+#define TPMTRY(s,c) do { status = c; \
+                         if (status != TPM_SUCCESS) { \
+                            status = s; \
+                            printf("ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                            goto abort_egress; \
+                         } \
+                    } while(0)
+
+// Try command c. If it fails, print an error message, set status to the actual return code, and goto abort_egress.
+#define TPMTRYRETURN(c) do { status = c; \
+                             if (status != TPM_SUCCESS) { \
+                               fprintf(stderr, "ERROR in %s at %s:%i code: %s.\n", __func__, __FILE__, __LINE__, tpm_get_error_name(status)); \
+                               goto abort_egress; \
+                             } \
+                        } while(0)
+
+
+#endif //__TCPA_H__
diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
new file mode 100644
index 0000000..123a27c
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.c
@@ -0,0 +1,938 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdio.h>
+#include <string.h>
+#include <malloc.h>
+#include <unistd.h>
+#include <errno.h>
+
+#include <polarssl/sha1.h>
+
+#include "tcg.h"
+#include "tpm.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpmrsa.h"
+#include "vtpmmgr.h"
+
+#define TCPA_MAX_BUFFER_LENGTH 0x2000
+
+#define TPM_BEGIN(TAG, ORD) \
+   const TPM_TAG intag = TAG;\
+TPM_TAG tag = intag;\
+UINT32 paramSize;\
+const TPM_COMMAND_CODE ordinal = ORD;\
+TPM_RESULT status = TPM_SUCCESS;\
+BYTE in_buf[TCPA_MAX_BUFFER_LENGTH];\
+BYTE out_buf[TCPA_MAX_BUFFER_LENGTH];\
+UINT32 out_len = sizeof(out_buf);\
+BYTE* ptr = in_buf;\
+/*Print a log message */\
+vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);\
+/* Pack the header*/\
+ptr = pack_TPM_TAG(ptr, tag);\
+ptr += sizeof(UINT32);\
+ptr = pack_TPM_COMMAND_CODE(ptr, ordinal)\
+
+#define TPM_AUTH_BEGIN() \
+   sha1_context sha1_ctx;\
+BYTE* authbase = ptr - sizeof(TPM_COMMAND_CODE);\
+TPM_DIGEST paramDigest;\
+sha1_starts(&sha1_ctx)
+
+#define TPM_AUTH1_GEN(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_AUTH2_GEN(HMACkey, auth) do {\
+   generateAuth(&paramDigest, HMACkey, auth);\
+   ptr = pack_TPM_AUTH_SESSION(ptr, auth);\
+} while(0)
+
+#define TPM_TRANSMIT() do {\
+   /* Pack the command size */\
+   paramSize = ptr - in_buf;\
+   pack_UINT32(in_buf + sizeof(TPM_TAG), paramSize);\
+   if((status = TPM_TransmitData(in_buf, paramSize, out_buf, &out_len)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_VERIFY_BEGIN() do {\
+   UINT32 buf[2] = { cpu_to_be32(status), cpu_to_be32(ordinal) };\
+   sha1_starts(&sha1_ctx);\
+   sha1_update(&sha1_ctx, (unsigned char*)buf, sizeof(buf));\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH1_VERIFY(HMACkey, auth) do {\
+   sha1_finish(&sha1_ctx, paramDigest.digest);\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH2_VERIFY(HMACkey, auth) do {\
+   ptr = unpack_TPM_AUTH_SESSION(ptr, auth);\
+   if((status = verifyAuth(&paramDigest, HMACkey, auth)) != TPM_SUCCESS) {\
+      goto abort_egress;\
+   }\
+} while(0)
+
+
+
+#define TPM_UNPACK_VERIFY() do { \
+   ptr = out_buf;\
+   ptr = unpack_TPM_RSP_HEADER(ptr, \
+         &(tag), &(paramSize), &(status));\
+   if((status) != TPM_SUCCESS || (tag) != (intag + 3)) { \
+      vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
+      goto abort_egress;\
+   }\
+} while(0)
+
+#define TPM_AUTH_HASH() do {\
+   sha1_update(&sha1_ctx, authbase, ptr - authbase);\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_SKIP() do {\
+   authbase = ptr;\
+} while(0)
+
+#define TPM_AUTH_ERR_CHECK(auth) do {\
+   if(status != TPM_SUCCESS || auth->fContinueAuthSession == FALSE) {\
+      vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM\n", auth->AuthHandle);\
+      auth->AuthHandle = 0;\
+   }\
+} while(0)
+
+static void xorEncrypt(const TPM_SECRET* sharedSecret,
+      TPM_NONCE* nonce,
+      const TPM_AUTHDATA* inAuth0,
+      TPM_ENCAUTH outAuth0,
+      const TPM_AUTHDATA* inAuth1,
+      TPM_ENCAUTH outAuth1) {
+   BYTE XORbuffer[sizeof(TPM_SECRET) + sizeof(TPM_NONCE)];
+   BYTE XORkey[TPM_DIGEST_SIZE];
+   BYTE* ptr = XORbuffer;
+   ptr = pack_TPM_SECRET(ptr, sharedSecret);
+   ptr = pack_TPM_NONCE(ptr, nonce);
+
+   sha1(XORbuffer, ptr - XORbuffer, XORkey);
+
+   if(inAuth0) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth0[i] = XORkey[i] ^ (*inAuth0)[i];
+      }
+   }
+   if(inAuth1) {
+      for(int i = 0; i < TPM_DIGEST_SIZE; ++i) {
+         outAuth1[i] = XORkey[i] ^ (*inAuth1)[i];
+      }
+   }
+
+}
+
+static void generateAuth(const TPM_DIGEST* paramDigest,
+      const TPM_SECRET* HMACkey,
+      TPM_AUTH_SESSION *auth)
+{
+   //Generate new OddNonce
+   vtpmmgr_rand((BYTE*)auth->NonceOdd.nonce, sizeof(TPM_NONCE));
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac((BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         auth->HMAC);
+}
+
+static TPM_RESULT verifyAuth(const TPM_DIGEST* paramDigest,
+      /*[IN]*/ const TPM_SECRET *HMACkey,
+      /*[IN,OUT]*/ TPM_AUTH_SESSION *auth)
+{
+
+   // Create HMAC text. (Concat inParamsDigest with inAuthSetupParams).
+   TPM_AUTHDATA hm;
+   BYTE hmacText[sizeof(TPM_DIGEST) + (2 * sizeof(TPM_NONCE)) + sizeof(BOOL)];
+   BYTE* ptr = hmacText;
+
+   ptr = pack_TPM_DIGEST(ptr, paramDigest);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceEven);
+   ptr = pack_TPM_NONCE(ptr, &auth->NonceOdd);
+   ptr = pack_BOOL(ptr, auth->fContinueAuthSession);
+
+   sha1_hmac( (BYTE *) HMACkey, sizeof(TPM_DIGEST),
+         (BYTE *) hmacText, sizeof(hmacText),
+         hm);
+
+   // Compare correct HMAC with provided one.
+   if (memcmp(hm, auth->HMAC, sizeof(TPM_DIGEST)) == 0) { // 0 indicates equality
+      return TPM_SUCCESS;
+   } else {
+      vtpmlogerror(VTPM_LOG_TPM, "Auth Session verification failed!\n");
+      return TPM_AUTHFAIL;
+   }
+}
+
+
+
+// ------------------------------------------------------------------
+// Authorization Commands
+// ------------------------------------------------------------------
+
+TPM_RESULT TPM_OIAP(TPM_AUTH_SESSION*   auth)  // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OIAP);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = TRUE;
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OIAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_OSAP(TPM_ENTITY_TYPE  entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth)
+{
+   BYTE* nonceOddOSAP;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_OSAP);
+
+   ptr = pack_TPM_ENTITY_TYPE(ptr, entityType);
+   ptr = pack_UINT32(ptr, entityValue);
+
+   //nonce Odd OSAP
+   nonceOddOSAP = ptr;
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, &auth->AuthHandle);
+   ptr = unpack_TPM_NONCE(ptr, &auth->NonceEven);
+
+   //Calculate session secret
+   sha1_context ctx;
+   sha1_hmac_starts(&ctx, *usageAuth, TPM_DIGEST_SIZE);
+   sha1_hmac_update(&ctx, ptr, TPM_DIGEST_SIZE); //ptr = nonceEvenOSAP
+   sha1_hmac_update(&ctx, nonceOddOSAP, TPM_DIGEST_SIZE);
+   sha1_hmac_finish(&ctx, *sharedSecret);
+
+   memset(&auth->HMAC, 0, sizeof(TPM_DIGEST));
+   auth->fContinueAuthSession = FALSE;
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x opened by TPM_OSAP.\n", auth->AuthHandle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth)   // in, out
+{
+   int keyAlloced = 0;
+   tpmrsa_context ek_rsa = TPMRSA_CTX_INIT;
+
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_TakeOwnership);
+   TPM_AUTH_BEGIN();
+
+   tpmrsa_set_pubkey(&ek_rsa,
+         pubEK->pubKey.key, pubEK->pubKey.keyLength,
+         pubEK->algorithmParms.parms.rsa.exponent,
+         pubEK->algorithmParms.parms.rsa.exponentSize);
+
+   /* Pack the protocol ID */
+   ptr = pack_UINT16(ptr, TPM_PID_OWNER);
+
+   /* Pack the encrypted owner auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) ownerAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the encrypted srk auth */
+   ptr = pack_UINT32(ptr, pubEK->algorithmParms.parms.rsa.keyLength / 8);
+   tpmrsa_pub_encrypt_oaep(&ek_rsa,
+         ctr_drbg_random, &vtpm_globals.ctr_drbg,
+         sizeof(TPM_SECRET),
+         (BYTE*) srkAuth,
+         ptr);
+   ptr += pubEK->algorithmParms.parms.rsa.keyLength / 8;
+
+   /* Pack the Srk key */
+   ptr = pack_TPM_KEY(ptr, inSrk);
+
+   /* Hash everything up to here */
+   TPM_AUTH_HASH();
+
+   /* Generate the authorization */
+   TPM_AUTH1_GEN(ownerAuth, auth);
+
+   /* Send the command to the TPM */
+   TPM_TRANSMIT();
+   /* Unpack and validate the header */
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   if(outSrk != NULL) {
+      /* If the user wants a copy of the srk we give it to them */
+      keyAlloced = 1;
+      ptr = unpack_TPM_KEY(ptr, outSrk, UNPACK_ALLOC);
+   } else {
+      /*otherwise just parse past it */
+      TPM_KEY temp;
+      ptr = unpack_TPM_KEY(ptr, &temp, UNPACK_ALIAS);
+   }
+
+   /* Hash the output key */
+   TPM_AUTH_HASH();
+
+   /* Verify authorization */
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(outSrk);
+   }
+egress:
+   tpmrsa_free(&ek_rsa);
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_DisablePubekRead);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(ownerAuth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   TPM_AUTH1_VERIFY(ownerAuth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+
+TPM_RESULT TPM_TerminateHandle(TPM_AUTHHANDLE  handle)  // in
+{
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Terminate_Handle);
+
+   ptr = pack_TPM_AUTHHANDLE(ptr, handle);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Auth Session: 0x%x closed by TPM_TerminateHandle\n", handle);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Extend( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST  inDigest, // in
+      TPM_PCRVALUE*  outDigest) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_Extend);
+
+   ptr = pack_TPM_PCRINDEX(ptr, pcrNum);
+   ptr = pack_TPM_DIGEST(ptr, &inDigest);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_TPM_PCRVALUE(ptr, outDigest);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_Seal(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealedDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      )
+{
+   int dataAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_Seal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   xorEncrypt(osapSharedSecret, &pubAuth->NonceEven,
+         sealedDataAuth, ptr,
+         NULL, NULL);
+   ptr += sizeof(TPM_ENCAUTH);
+
+   ptr = pack_UINT32(ptr, pcrInfoSize);
+   ptr = pack_TPM_PCR_INFO(ptr, pcrInfo);
+
+   ptr = pack_UINT32(ptr, inDataSize);
+   ptr = pack_BUFFER(ptr, inData, inDataSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pubAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_TPM_STORED_DATA(ptr, sealedData, UNPACK_ALLOC);
+   dataAlloced = 1;
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pubAuth);
+
+   goto egress;
+abort_egress:
+   if(dataAlloced) {
+      free_TPM_STORED_DATA(sealedData);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pubAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Unseal(
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH2_COMMAND, TPM_ORD_Unseal);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_STORED_DATA(ptr, sealedData);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(key_usage_auth, keyAuth);
+   TPM_AUTH2_GEN(data_usage_auth, dataAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, outSize);
+   ptr = unpack_ALLOC(ptr, out, *outSize);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(key_usage_auth, keyAuth);
+   TPM_AUTH2_VERIFY(data_usage_auth, dataAuth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(keyAuth);
+   TPM_AUTH_ERR_CHECK(dataAuth);
+   return status;
+}
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key,
+      const BYTE* in,
+      UINT32 ilen,
+      BYTE* out)
+{
+   TPM_RESULT status;
+   tpmrsa_context rsa = TPMRSA_CTX_INIT;
+   TPM_BOUND_DATA boundData;
+   uint8_t plain[TCPA_MAX_BUFFER_LENGTH];
+   BYTE* ptr = plain;
+
+   vtpmloginfo(VTPM_LOG_TPM, "%s\n", __func__);
+
+   tpmrsa_set_pubkey(&rsa,
+         key->pubKey.key, key->pubKey.keyLength,
+         key->algorithmParms.parms.rsa.exponent,
+         key->algorithmParms.parms.rsa.exponentSize);
+
+   // Fill boundData's accessory information
+   boundData.ver = TPM_STRUCT_VER_1_1;
+   boundData.payload = TPM_PT_BIND;
+   boundData.payloadData = (BYTE*)in;
+
+   //marshall the bound data object
+   ptr = pack_TPM_BOUND_DATA(ptr, &boundData, ilen);
+
+   // Encrypt the data
+   TPMTRYRETURN(tpmrsa_pub_encrypt_oaep(&rsa,
+            ctr_drbg_random, &vtpm_globals.ctr_drbg,
+            ptr - plain,
+            plain,
+            out));
+
+abort_egress:
+   tpmrsa_free(&rsa);
+   return status;
+
+}
+
+TPM_RESULT TPM_UnBind(
+      TPM_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //
+      UINT32* olen, //
+      BYTE*    out, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      )
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_UnBind);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, keyHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_UINT32(ptr, ilen);
+   ptr = pack_BUFFER(ptr, in, ilen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, olen);
+   if(*olen > ilen) {
+      vtpmlogerror(VTPM_LOG_TPM, "Output length > input length!\n");
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+   ptr = unpack_BUFFER(ptr, out, *olen);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_CreateWrapKey(
+      TPM_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in, out
+      TPM_AUTH_SESSION*   pAuth)    // in, out
+{
+   int keyAlloced = 0;
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_CreateWrapKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hWrappingKey);
+
+   TPM_AUTH_SKIP();
+
+   //Encrypted auths
+   xorEncrypt(osapSharedSecret, &pAuth->NonceEven,
+         dataUsageAuth, ptr,
+         dataMigrationAuth, ptr + sizeof(TPM_ENCAUTH));
+   ptr += sizeof(TPM_ENCAUTH) * 2;
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(osapSharedSecret, pAuth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   keyAlloced = 1;
+   ptr = unpack_TPM_KEY(ptr, key, UNPACK_ALLOC);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(osapSharedSecret, pAuth);
+
+   goto egress;
+abort_egress:
+   if(keyAlloced) {
+      free_TPM_KEY(key);
+   }
+egress:
+   TPM_AUTH_ERR_CHECK(pAuth);
+   return status;
+}
+
+TPM_RESULT TPM_LoadKey(
+      TPM_KEY_HANDLE  parentHandle, //
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth)
+{
+   TPM_BEGIN(TPM_TAG_RQU_AUTH1_COMMAND, TPM_ORD_LoadKey);
+   TPM_AUTH_BEGIN();
+
+   TPM_AUTH_HASH();
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, parentHandle);
+
+   TPM_AUTH_SKIP();
+
+   ptr = pack_TPM_KEY(ptr, key);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_GEN(usage_auth, auth);
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+   TPM_AUTH_VERIFY_BEGIN();
+
+   ptr = unpack_UINT32(ptr, keyHandle);
+
+   TPM_AUTH_HASH();
+
+   TPM_AUTH1_VERIFY(usage_auth, auth);
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key Handle: 0x%x opened by TPM_LoadKey\n", *keyHandle);
+
+abort_egress:
+   TPM_AUTH_ERR_CHECK(auth);
+   return status;
+}
+
+TPM_RESULT TPM_EvictKey( TPM_KEY_HANDLE  hKey)  // in
+{
+   if(hKey == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_EvictKey);
+
+   ptr = pack_TPM_KEY_HANDLE(ptr, hKey);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   vtpmloginfo(VTPM_LOG_TPM, "Key handle: 0x%x closed by TPM_EvictKey\n", hKey);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle,
+      TPM_RESOURCE_TYPE rt) {
+   if(handle == 0) {
+      return TPM_SUCCESS;
+   }
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_FlushSpecific);
+
+   ptr = pack_TPM_HANDLE(ptr, handle);
+   ptr = pack_TPM_RESOURCE_TYPE(ptr, rt);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetRandom( UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes) // out
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetRandom);
+
+   // check input params
+   if (bytesRequested == NULL || randomBytes == NULL){
+      return TPM_BAD_PARAMETER;
+   }
+
+   ptr = pack_UINT32(ptr, *bytesRequested);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, bytesRequested);
+   ptr = unpack_BUFFER(ptr, randomBytes, *bytesRequested);
+
+abort_egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_ReadPubek(
+      TPM_PUBKEY* pubEK //out
+      )
+{
+   BYTE* antiReplay = NULL;
+   BYTE* kptr = NULL;
+   BYTE digest[TPM_DIGEST_SIZE];
+   sha1_context ctx;
+
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_ReadPubek);
+
+   //antiReplay nonce
+   vtpmmgr_rand(ptr, TPM_DIGEST_SIZE);
+   antiReplay = ptr;
+   ptr += TPM_DIGEST_SIZE;
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   //unpack and allocate the key
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   //Verify the checksum
+   sha1_starts(&ctx);
+   sha1_update(&ctx, kptr, ptr - kptr);
+   sha1_update(&ctx, antiReplay, TPM_DIGEST_SIZE);
+   sha1_finish(&ctx, digest);
+
+   //ptr points to the checksum computed by TPM
+   if(memcmp(digest, ptr, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_TPM, "TPM_ReadPubek: Checksum returned by TPM was invalid!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr != NULL) { //If we unpacked the pubEK, we have to free it
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+
+TPM_RESULT TPM_SaveState(void)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_SaveState);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp)
+{
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_GetCapability);
+
+   ptr = pack_TPM_CAPABILITY_AREA(ptr, capArea);
+   ptr = pack_UINT32(ptr, subCapSize);
+   ptr = pack_BUFFER(ptr, subCap, subCapSize);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   ptr = unpack_UINT32(ptr, respSize);
+   ptr = unpack_ALLOC(ptr, resp, *respSize);
+
+abort_egress:
+   return status;
+}
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK)
+{
+   BYTE* kptr = NULL;
+   sha1_context ctx;
+   TPM_DIGEST checksum;
+   TPM_DIGEST hash;
+   TPM_NONCE antiReplay;
+   TPM_BEGIN(TPM_TAG_RQU_COMMAND, TPM_ORD_CreateEndorsementKeyPair);
+
+   //Make anti replay nonce
+   vtpmmgr_rand(antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   ptr = pack_TPM_NONCE(ptr, &antiReplay);
+   ptr = pack_TPM_KEY_PARMS(ptr, keyInfo);
+
+   TPM_TRANSMIT();
+   TPM_UNPACK_VERIFY();
+
+   sha1_starts(&ctx);
+
+   kptr = ptr;
+   ptr = unpack_TPM_PUBKEY(ptr, pubEK, UNPACK_ALLOC);
+
+   /* Hash the pub key blob */
+   sha1_update(&ctx, kptr, ptr - kptr);
+   ptr = unpack_TPM_DIGEST(ptr, &checksum);
+
+   sha1_update(&ctx, antiReplay.nonce, sizeof(antiReplay.nonce));
+
+   sha1_finish(&ctx, hash.digest);
+   if(memcmp(checksum.digest, hash.digest, TPM_DIGEST_SIZE)) {
+      vtpmlogerror(VTPM_LOG_TPM, "TPM_CreateEndorsementKeyPair: Checksum verification failed!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   goto egress;
+abort_egress:
+   if(kptr) {
+      free_TPM_PUBKEY(pubEK);
+   }
+egress:
+   return status;
+}
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   UINT32 i;
+   vtpmloginfo(VTPM_LOG_TXDATA, "Sending buffer = 0x");
+   for(i = 0 ; i < insize ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", in[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   ssize_t size = 0;
+
+   // send the request
+   size = write (vtpm_globals.tpm_fd, in, insize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "write() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+   else if ((UINT32) size < insize) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "Wrote %d instead of %d bytes!\n", (int) size, insize);
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   // read the response
+   size = read (vtpm_globals.tpm_fd, out, *outsize);
+   if (size < 0) {
+      vtpmlogerror(VTPM_LOG_TXDATA, "read() failed : %s\n", strerror(errno));
+      ERRORDIE (TPM_IOERROR);
+   }
+
+   vtpmloginfo(VTPM_LOG_TXDATA, "Receiving buffer = 0x");
+   for(i = 0 ; i < (UINT32) size ; i++)
+      vtpmloginfomore(VTPM_LOG_TXDATA, "%2.2x ", out[i]);
+
+   vtpmloginfomore(VTPM_LOG_TXDATA, "\n");
+
+   *outsize = size;
+   goto egress;
+
+abort_egress:
+egress:
+   return status;
+}
diff --git a/stubdom/vtpmmgr/tpm.h b/stubdom/vtpmmgr/tpm.h
new file mode 100644
index 0000000..304e145
--- /dev/null
+++ b/stubdom/vtpmmgr/tpm.h
@@ -0,0 +1,218 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005/2006, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef __TPM_H__
+#define __TPM_H__
+
+#include "tcg.h"
+
+// ------------------------------------------------------------------
+// Exposed API
+// ------------------------------------------------------------------
+
+// TPM v1.1B Command Set
+
+// Authorization
+TPM_RESULT TPM_OIAP(
+      TPM_AUTH_SESSION*   auth //out
+      );
+
+TPM_RESULT TPM_OSAP (
+      TPM_ENTITY_TYPE entityType,  // in
+      UINT32    entityValue, // in
+      const TPM_AUTHDATA* usageAuth, //in
+      TPM_SECRET *sharedSecret, //out
+      TPM_AUTH_SESSION *auth);
+
+TPM_RESULT TPM_TakeOwnership(
+      const TPM_PUBKEY *pubEK, //in
+      const TPM_AUTHDATA* ownerAuth, //in
+      const TPM_AUTHDATA* srkAuth, //in
+      const TPM_KEY* inSrk, //in
+      TPM_KEY* outSrk, //out, optional
+      TPM_AUTH_SESSION*   auth   // in, out
+      );
+
+TPM_RESULT TPM_DisablePubekRead (
+      const TPM_AUTHDATA* ownerAuth,
+      TPM_AUTH_SESSION*   auth
+      );
+
+TPM_RESULT TPM_TerminateHandle ( TPM_AUTHHANDLE  handle  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific ( TPM_HANDLE  handle,  // in
+      TPM_RESOURCE_TYPE resourceType //in
+      );
+
+// TPM Mandatory
+TPM_RESULT TPM_Extend ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_DIGEST   inDigest, // in
+      TPM_PCRVALUE*   outDigest // out
+      );
+
+TPM_RESULT TPM_PcrRead ( TPM_PCRINDEX  pcrNum,  // in
+      TPM_PCRVALUE*  outDigest // out
+      );
+
+TPM_RESULT TPM_Quote ( TCS_KEY_HANDLE  keyHandle,  // in
+      TPM_NONCE   antiReplay,  // in
+      UINT32*    PcrDataSize, // in, out
+      BYTE**    PcrData,  // in, out
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_Seal(
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    pcrInfoSize, // in
+      TPM_PCR_INFO*    pcrInfo,  // in
+      UINT32    inDataSize,  // in
+      const BYTE*    inData,   // in
+      TPM_STORED_DATA* sealedData, //out
+      const TPM_SECRET* osapSharedSecret, //in
+      const TPM_AUTHDATA* sealDataAuth, //in
+      TPM_AUTH_SESSION*   pubAuth  // in, out
+      );
+
+TPM_RESULT TPM_Unseal (
+      TPM_KEY_HANDLE parentHandle, // in
+      const TPM_STORED_DATA* sealedData,
+      UINT32*   outSize,  // out
+      BYTE**    out, //out
+      const TPM_AUTHDATA* key_usage_auth, //in
+      const TPM_AUTHDATA* data_usage_auth, //in
+      TPM_AUTH_SESSION*   keyAuth,  // in, out
+      TPM_AUTH_SESSION*   dataAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirWriteAuth ( TPM_DIRINDEX  dirIndex,  // in
+      TPM_DIRVALUE  newContents, // in
+      TPM_AUTH_SESSION*   ownerAuth  // in, out
+      );
+
+TPM_RESULT TPM_DirRead ( TPM_DIRINDEX  dirIndex, // in
+      TPM_DIRVALUE*  dirValue // out
+      );
+
+TPM_RESULT TPM_Bind(
+      const TPM_KEY* key, //in
+      const BYTE* in, //in
+      UINT32 ilen, //in
+      BYTE* out //out, must be at least cipher block size
+      );
+
+TPM_RESULT TPM_UnBind (
+      TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32 ilen, //in
+      const BYTE* in, //
+      UINT32*   outDataSize, // out
+      BYTE*    outData, //out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth //in, out
+      );
+
+TPM_RESULT TPM_CreateWrapKey (
+      TCS_KEY_HANDLE  hWrappingKey,  // in
+      const TPM_AUTHDATA* osapSharedSecret,
+      const TPM_AUTHDATA* dataUsageAuth, //in
+      const TPM_AUTHDATA* dataMigrationAuth, //in
+      TPM_KEY*     key, //in
+      TPM_AUTH_SESSION*   pAuth    // in, out
+      );
+
+TPM_RESULT TPM_LoadKey (
+      TPM_KEY_HANDLE  parentHandle, //
+      const TPM_KEY* key, //in
+      TPM_HANDLE*  keyHandle,    // out
+      const TPM_AUTHDATA* usage_auth,
+      TPM_AUTH_SESSION* auth
+      );
+
+TPM_RESULT TPM_GetPubKey (  TCS_KEY_HANDLE  hKey,   // in
+      TPM_AUTH_SESSION*   pAuth,   // in, out
+      UINT32*    pcPubKeySize, // out
+      BYTE**    prgbPubKey  // out
+      );
+
+TPM_RESULT TPM_EvictKey ( TCS_KEY_HANDLE  hKey  // in
+      );
+
+TPM_RESULT TPM_FlushSpecific(TPM_HANDLE handle, //in
+      TPM_RESOURCE_TYPE rt //in
+      );
+
+TPM_RESULT TPM_Sign ( TCS_KEY_HANDLE  keyHandle,  // in
+      UINT32    areaToSignSize, // in
+      BYTE*    areaToSign,  // in
+      TPM_AUTH_SESSION*   privAuth,  // in, out
+      UINT32*    sigSize,  // out
+      BYTE**    sig    // out
+      );
+
+TPM_RESULT TPM_GetRandom (  UINT32*    bytesRequested, // in, out
+      BYTE*    randomBytes  // out
+      );
+
+TPM_RESULT TPM_StirRandom (  UINT32    inDataSize, // in
+      BYTE*    inData  // in
+      );
+
+TPM_RESULT TPM_ReadPubek (
+      TPM_PUBKEY* pubEK //out
+      );
+
+TPM_RESULT TPM_GetCapability(
+      TPM_CAPABILITY_AREA capArea,
+      UINT32 subCapSize,
+      const BYTE* subCap,
+      UINT32* respSize,
+      BYTE** resp);
+
+TPM_RESULT TPM_SaveState(void);
+
+TPM_RESULT TPM_CreateEndorsementKeyPair(
+      const TPM_KEY_PARMS* keyInfo,
+      TPM_PUBKEY* pubEK);
+
+TPM_RESULT TPM_TransmitData(
+      BYTE* in,
+      UINT32 insize,
+      BYTE* out,
+      UINT32* outsize);
+
+#endif /* __TPM_H__ */
diff --git a/stubdom/vtpmmgr/tpmrsa.c b/stubdom/vtpmmgr/tpmrsa.c
new file mode 100644
index 0000000..56094e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.c
@@ -0,0 +1,175 @@
+/*
+ *  The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2011, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+/*
+ *  RSA was designed by Ron Rivest, Adi Shamir and Len Adleman.
+ *
+ *  http://theory.lcs.mit.edu/~rivest/rsapaper.pdf
+ *  http://www.cacr.math.uwaterloo.ca/hac/about/chap8.pdf
+ */
+
+#include "tcg.h"
+#include "polarssl/sha1.h"
+
+#include <stdlib.h>
+#include <stdio.h>
+
+#include "tpmrsa.h"
+
+#define HASH_LEN 20
+
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen) {
+
+   tpmrsa_free(ctx);
+
+   if(explen == 0) { //Default e = 2^16 + 1
+      mpi_lset(&ctx->E, 65537);
+   } else {
+      mpi_read_binary(&ctx->E, exponent, explen);
+   }
+   mpi_read_binary(&ctx->N, key, keylen);
+
+   ctx->len = ( mpi_msb(&ctx->N) + 7) >> 3;
+}
+
+static TPM_RESULT tpmrsa_public( tpmrsa_context *ctx,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   size_t olen;
+   mpi T;
+
+   mpi_init( &T );
+
+   MPI_CHK( mpi_read_binary( &T, input, ctx->len ) );
+
+   if( mpi_cmp_mpi( &T, &ctx->N ) >= 0 )
+   {
+      mpi_free( &T );
+      return TPM_ENCRYPT_ERROR;
+   }
+
+   olen = ctx->len;
+   MPI_CHK( mpi_exp_mod( &T, &T, &ctx->E, &ctx->N, &ctx->RN ) );
+   MPI_CHK( mpi_write_binary( &T, output, olen ) );
+
+cleanup:
+
+   mpi_free( &T );
+
+   if( ret != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   return TPM_SUCCESS;
+}
+
+/* MGF1 mask generation (PKCS#1): XOR dst with successive
+ * SHA-1(src || counter) blocks */
+static void mgf_mask( unsigned char *dst, int dlen, unsigned char *src, int slen)
+{
+   unsigned char mask[HASH_LEN];
+   unsigned char counter[4] = {0, 0, 0, 0};
+   int i;
+   sha1_context mctx;
+
+   //We always hash the src with the counter, so save the partial hash
+   sha1_starts(&mctx);
+   sha1_update(&mctx, src, slen);
+
+   // Generate and apply dbMask
+   while(dlen > 0) {
+      //Copy the sha1 context
+      sha1_context ctx = mctx;
+
+      //compute hash for input || counter
+      sha1_update(&ctx, counter, sizeof(counter));
+      sha1_finish(&ctx, mask);
+
+      //Apply the mask
+      for(i = 0; i < (dlen < HASH_LEN ? dlen : HASH_LEN); ++i) {
+         *(dst++) ^= mask[i];
+      }
+
+      //Increment counter
+      ++counter[3];
+
+      dlen -= HASH_LEN;
+   }
+}
+
+/*
+ * EME-OAEP pad the message (SHA-1, encoding parameter "TCPA"),
+ * then do an RSA public-key operation
+ */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output )
+{
+   int ret;
+   int olen;
+   unsigned char* seed = output + 1;
+   unsigned char* db = output + HASH_LEN +1;
+
+   olen = ctx->len-1;
+
+   if( f_rng == NULL )
+      return TPM_ENCRYPT_ERROR;
+
+   if( ilen > olen - 2 * HASH_LEN - 1)
+      return TPM_ENCRYPT_ERROR;
+
+   output[0] = 0;
+
+   //Encoding parameter p
+   sha1((unsigned char*)"TCPA", 4, db);
+
+   //PS
+   memset(db + HASH_LEN, 0,
+         olen - ilen - 2 * HASH_LEN - 1);
+
+   //constant 1 byte
+   db[olen - ilen - HASH_LEN -1] = 0x01;
+
+   //input string
+   memcpy(db + olen - ilen - HASH_LEN,
+         input, ilen);
+
+   //Generate random seed
+   if( ( ret = f_rng( p_rng, seed, HASH_LEN ) ) != 0 )
+      return TPM_ENCRYPT_ERROR;
+
+   // maskedDB: Apply dbMask to DB
+   mgf_mask( db, olen - HASH_LEN, seed, HASH_LEN);
+
+   // maskedSeed: Apply seedMask to seed
+   mgf_mask( seed, HASH_LEN, db, olen - HASH_LEN);
+
+   // Do the crypto op
+   return tpmrsa_public(ctx, output, output);
+}
diff --git a/stubdom/vtpmmgr/tpmrsa.h b/stubdom/vtpmmgr/tpmrsa.h
new file mode 100644
index 0000000..59579e7
--- /dev/null
+++ b/stubdom/vtpmmgr/tpmrsa.h
@@ -0,0 +1,67 @@
+/**
+ * \file rsa.h
+ *
+ * \brief The RSA public-key cryptosystem
+ *
+ *  Copyright (C) 2006-2010, Brainspark B.V.
+ *
+ *  This file is part of PolarSSL (http://www.polarssl.org)
+ *  Lead Maintainer: Paul Bakker <polarssl_maintainer at polarssl.org>
+ *
+ *  All rights reserved.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+#ifndef TPMRSA_H
+#define TPMRSA_H
+
+#include "tcg.h"
+#include <polarssl/bignum.h>
+
+/* tpm software key */
+typedef struct
+{
+    size_t len;                 /*!<  size(N) in chars  */
+
+    mpi N;                      /*!<  public modulus    */
+    mpi E;                      /*!<  public exponent   */
+
+    mpi RN;                     /*!<  cached R^2 mod N  */
+}
+tpmrsa_context;
+
+#define TPMRSA_CTX_INIT { 0, {0, 0, NULL}, {0, 0, NULL}, {0, 0, NULL}}
+
+/* Setup the rsa context using tpm public key data */
+void tpmrsa_set_pubkey(tpmrsa_context* ctx,
+      const unsigned char* key,
+      int keylen,
+      const unsigned char* exponent,
+      int explen);
+
+/* Do rsa public crypto */
+TPM_RESULT tpmrsa_pub_encrypt_oaep( tpmrsa_context *ctx,
+      int (*f_rng)(void *, unsigned char *, size_t),
+      void *p_rng,
+      size_t ilen,
+      const unsigned char *input,
+      unsigned char *output );
+
+/* free tpmrsa key */
+static inline void tpmrsa_free( tpmrsa_context *ctx ) {
+   mpi_free( &ctx->RN ); mpi_free( &ctx->E  ); mpi_free( &ctx->N  );
+}
+
+#endif /* TPMRSA_H */
diff --git a/stubdom/vtpmmgr/uuid.h b/stubdom/vtpmmgr/uuid.h
new file mode 100644
index 0000000..4737645
--- /dev/null
+++ b/stubdom/vtpmmgr/uuid.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_UUID_H
+#define VTPMMGR_UUID_H
+
+#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
+#define UUID_FMTLEN ((2*16)+4) /* 16 hex bytes plus 4 hyphens */
+#define UUID_BYTES(uuid) uuid[0], uuid[1], uuid[2], uuid[3], \
+                                uuid[4], uuid[5], uuid[6], uuid[7], \
+                                uuid[8], uuid[9], uuid[10], uuid[11], \
+                                uuid[12], uuid[13], uuid[14], uuid[15]
+
+
+typedef uint8_t uuid_t[16];
+
+#endif
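The macros above expand a `uuid_t` into the standard 8-4-4-4-12 hex rendering; for example, formatting one with `snprintf` (the helper name here is hypothetical):

```c
#include <stdint.h>
#include <stdio.h>

#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-" \
                 "%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
#define UUID_FMTLEN ((2*16)+4) /* 16 hex bytes plus 4 hyphens */
#define UUID_BYTES(uuid) uuid[0], uuid[1], uuid[2], uuid[3], \
                         uuid[4], uuid[5], uuid[6], uuid[7], \
                         uuid[8], uuid[9], uuid[10], uuid[11], \
                         uuid[12], uuid[13], uuid[14], uuid[15]

typedef uint8_t uuid_t[16];

/* Render a uuid_t into buf, which must hold UUID_FMTLEN + 1 bytes.
 * Returns the number of characters written (UUID_FMTLEN on success). */
static int uuid_to_str(char *buf, const uuid_t u)
{
    return snprintf(buf, UUID_FMTLEN + 1, UUID_FMT, UUID_BYTES(u));
}
```

The same `UUID_FMT`/`UUID_BYTES` pair is what the logging calls in vtpm_storage.c rely on.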
diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
new file mode 100644
index 0000000..f82a2a9
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -0,0 +1,152 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <inttypes.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include "marshal.h"
+#include "log.h"
+#include "vtpm_storage.h"
+#include "vtpmmgr.h"
+#include "tpm.h"
+#include "tcg.h"
+
+static TPM_RESULT vtpmmgr_SaveHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+
+   if(tpmcmd->req_len != VTPM_COMMAND_HEADER_SIZE + HASHKEYSZ) {
+      vtpmlogerror(VTPM_LOG_VTPM, "VTPM_ORD_SAVEHASHKEY hashkey wrong length!\n");
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Do the command */
+   TPMTRYRETURN(vtpm_storage_save_hashkey(uuid, tpmcmd->req + VTPM_COMMAND_HEADER_SIZE));
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, VTPM_COMMAND_HEADER_SIZE, status);
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   return status;
+}
+
+static TPM_RESULT vtpmmgr_LoadHashKey(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd) {
+   TPM_RESULT status = TPM_SUCCESS;
+
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+
+   TPMTRYRETURN(vtpm_storage_load_hashkey(uuid, tpmcmd->resp + VTPM_COMMAND_HEADER_SIZE));
+
+   tpmcmd->resp_len += HASHKEYSZ;
+
+abort_egress:
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         VTPM_TAG_RSP, tpmcmd->resp_len, status);
+
+   return status;
+}
+
+
+TPM_RESULT vtpmmgr_handle_cmd(
+      const uuid_t uuid,
+      tpmcmd_t* tpmcmd)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   TPM_TAG tag;
+   UINT32 size;
+   TPM_COMMAND_CODE ord;
+
+   unpack_TPM_RQU_HEADER(tpmcmd->req,
+         &tag, &size, &ord);
+
+   /* Handle the command now */
+   switch(tag) {
+      case VTPM_TAG_REQ:
+         //This is a vTPM command
+         switch(ord) {
+            case VTPM_ORD_SAVEHASHKEY:
+               return vtpmmgr_SaveHashKey(uuid, tpmcmd);
+            case VTPM_ORD_LOADHASHKEY:
+               return vtpmmgr_LoadHashKey(uuid, tpmcmd);
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "Invalid vTPM Ordinal %" PRIu32 "\n", ord);
+               status = TPM_BAD_ORDINAL;
+         }
+         break;
+      case TPM_TAG_RQU_COMMAND:
+      case TPM_TAG_RQU_AUTH1_COMMAND:
+      case TPM_TAG_RQU_AUTH2_COMMAND:
+         //This is a TPM passthrough command
+         switch(ord) {
+            case TPM_ORD_GetRandom:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
+               break;
+            case TPM_ORD_PcrRead:
+               vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
+               break;
+            default:
+               vtpmlogerror(VTPM_LOG_VTPM, "TPM Disallowed Passthrough ord=%" PRIu32 "\n", ord);
+               status = TPM_DISABLED_CMD;
+               goto abort_egress;
+         }
+
+         size = TCPA_MAX_BUFFER_LENGTH;
+         TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len, tpmcmd->resp, &size));
+         tpmcmd->resp_len = size;
+
+         unpack_TPM_RESULT(tpmcmd->resp + sizeof(TPM_TAG) + sizeof(UINT32), &status);
+         return status;
+
+         break;
+      default:
+         vtpmlogerror(VTPM_LOG_VTPM, "Invalid tag=%" PRIu16 "\n", tag);
+         status = TPM_BADTAG;
+   }
+
+abort_egress:
+   tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+   pack_TPM_RSP_HEADER(tpmcmd->resp,
+         tag + 3, tpmcmd->resp_len, status);
+
+   return status;
+}
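The fallback at the bottom of vtpmmgr_handle_cmd() packs the response header with `tag + 3`. That works because TPM 1.2 numbers each response tag three above its request tag, and the vTPM tags defined in vtpm_manager.h follow the same convention. A quick check (TPM tag values from the TPM 1.2 spec; VTPM tags from this patch):

```c
#include <stdint.h>

/* TPM 1.2 request/response tags (TPM Main Specification, Part 2)
 * plus the vTPM manager tags from this patch. */
#define TPM_TAG_RQU_COMMAND        0x00c1
#define TPM_TAG_RSP_COMMAND        0x00c4
#define TPM_TAG_RQU_AUTH1_COMMAND  0x00c2
#define TPM_TAG_RSP_AUTH1_COMMAND  0x00c5
#define TPM_TAG_RQU_AUTH2_COMMAND  0x00c3
#define TPM_TAG_RSP_AUTH2_COMMAND  0x00c6
#define VTPM_TAG_REQ               0x01c1
#define VTPM_TAG_RSP               0x01c4

/* The handler's error path replies with (request tag + 3), which maps
 * every request tag onto its matching response tag. */
static uint16_t rsp_tag(uint16_t rqu_tag)
{
    return rqu_tag + 3;
}
```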
diff --git a/stubdom/vtpmmgr/vtpm_manager.h b/stubdom/vtpmmgr/vtpm_manager.h
new file mode 100644
index 0000000..a2bbcca
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_manager.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_MANAGER_H
+#define VTPM_MANAGER_H
+
+#define VTPM_TAG_REQ 0x01c1
+#define VTPM_TAG_RSP 0x01c4
+#define COMMAND_BUFFER_SIZE 4096
+
+// Header size
+#define VTPM_COMMAND_HEADER_SIZE ( 2 + 4 + 4)
+
+//************************ Command Codes ****************************
+#define VTPM_ORD_BASE       0x0000
+#define VTPM_PRIV_MASK      0x01000000 // Privileged VTPM Command
+#define VTPM_PRIV_BASE      (VTPM_ORD_BASE | VTPM_PRIV_MASK)
+
+// Non-privileged VTPM commands (from DMIs)
+#define VTPM_ORD_SAVEHASHKEY      (VTPM_ORD_BASE + 1) // DMI requests encryption key for persistent storage
+#define VTPM_ORD_LOADHASHKEY      (VTPM_ORD_BASE + 2) // DMI requests symkey to be regenerated
+
+//************************ Return Codes ****************************
+#define VTPM_SUCCESS               0
+#define VTPM_FAIL                  1
+#define VTPM_UNSUPPORTED           2
+#define VTPM_FORBIDDEN             3
+#define VTPM_RESTORE_CONTEXT_FAILED    4
+#define VTPM_INVALID_REQUEST       5
+
+#endif
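Every vTPM request starts with the 10-byte header defined above: a 2-byte tag, a 4-byte total size, and a 4-byte ordinal, packed big-endian per the TPM wire format. A minimal sketch of building one, with hypothetical helpers standing in for the patch's pack_UINT16/pack_UINT32 marshalling functions and an illustrative 52-byte body:

```c
#include <stddef.h>
#include <stdint.h>

#define VTPM_TAG_REQ 0x01c1
#define VTPM_ORD_BASE 0x0000
#define VTPM_ORD_SAVEHASHKEY (VTPM_ORD_BASE + 1)
#define VTPM_COMMAND_HEADER_SIZE (2 + 4 + 4)

/* Big-endian packers (hypothetical stand-ins for the marshal.h helpers) */
static uint8_t *pack_be16(uint8_t *p, uint16_t v)
{
    p[0] = v >> 8; p[1] = v & 0xff;
    return p + 2;
}
static uint8_t *pack_be32(uint8_t *p, uint32_t v)
{
    p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v & 0xff;
    return p + 4;
}

/* Build a vTPM request header: tag, total size (header + body), ordinal.
 * Returns the number of bytes written. */
static size_t build_req_header(uint8_t *buf, uint32_t body_len, uint32_t ord)
{
    uint8_t *p = buf;
    p = pack_be16(p, VTPM_TAG_REQ);
    p = pack_be32(p, VTPM_COMMAND_HEADER_SIZE + body_len);
    p = pack_be32(p, ord);
    return p - buf;
}
```

The handler in vtpm_cmd_handler.c unpacks exactly these three fields before dispatching on the tag and ordinal.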
diff --git a/stubdom/vtpmmgr/vtpm_storage.c b/stubdom/vtpmmgr/vtpm_storage.c
new file mode 100644
index 0000000..abb0dba
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.c
@@ -0,0 +1,794 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
+ * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
+ * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
+ * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
+ * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
+ * SOFTWARE.
+ */
+
+/***************************************************************
+ * DISK IMAGE LAYOUT
+ * *************************************************************
+ * All data is stored in BIG ENDIAN format
+ * *************************************************************
+ * Section 1: Header
+ *
+ * 10 bytes 	 id			ID String "VTPMMGRDOM"
+ * uint32_t	 version	        Disk Image version number (current == 1)
+ * uint32_t      storage_key_len	Length of the storage Key
+ * TPM_KEY       storage_key		Marshalled TPM_KEY structure (see TPM 1.2 spec, part 2)
+ * RSA_BLOCK     aes_crypto             Encrypted aes key data (RSA_CIPHER_SIZE bytes), bound by the storage_key
+ *  BYTE[32] aes_key                    Aes key for encrypting the uuid table
+ *  uint32_t cipher_sz                  Encrypted size of the uuid table
+ *
+ * *************************************************************
+ * Section 2: Uuid Table
+ *
+ * This table is encrypted by the aes_key in the header. The cipher text size is just
+ * large enough to hold all of the entries plus required padding.
+ *
+ * Each entry is as follows
+ * BYTE[16] uuid                       Uuid of a vtpm that is stored on this disk
+ * uint32_t offset                     Disk offset where the vtpm data is stored
+ *
+ * *************************************************************
+ * Section 3: Vtpm Table
+ *
+ * The rest of the disk stores vtpms. Each vtpm is an RSA_BLOCK encrypted
+ * by the storage key. Each vtpm must exist on an RSA_BLOCK aligned boundary,
+ * starting at the first RSA_BLOCK aligned offset after the uuid table.
+ * As the uuid table grows, vtpms may be relocated.
+ *
+ * RSA_BLOCK     vtpm_crypto          Vtpm data encrypted by storage_key
+ *   BYTE[20]    hash                 Sha1 hash of vtpm encrypted data
+ *   BYTE[16]    vtpm_aes_key         Encryption key for vtpm data
+ *
+ * *************************************************************
+ */
+#define DISKVERS 1
+#define IDSTR "VTPMMGRDOM"
+#define IDSTRLEN 10
+#define AES_BLOCK_SIZE 16
+#define AES_KEY_BITS 256
+#define AES_KEY_SIZE (AES_KEY_BITS/8)
+#define BUF_SIZE 4096
+
+#define UUID_TBL_ENT_SIZE (sizeof(uuid_t) + sizeof(uint32_t))
+
+#define HEADERSZ (10 + 4 + 4)
+
+#define TRY_READ(buf, size, msg) do {\
+   int rc; \
+   if((rc = read(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "read() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#define TRY_WRITE(buf, size, msg) do {\
+   int rc; \
+   if((rc = write(blkfront_fd, buf, (size))) != (size)) { \
+      vtpmlogerror(VTPM_LOG_VTPM, "write() failed! " msg " : rc=(%d/%d), error=(%s)\n", rc, (int)(size), strerror(errno)); \
+      status = TPM_IOERROR;\
+      goto abort_egress;\
+   } \
+} while(0)
+
+#include <blkfront.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <mini-os/byteorder.h>
+#include <polarssl/aes.h>
+
+#include "vtpm_manager.h"
+#include "log.h"
+#include "marshal.h"
+#include "tpm.h"
+#include "uuid.h"
+
+#include "vtpmmgr.h"
+#include "vtpm_storage.h"
+
+#define MAX(a,b) ( ((a) > (b)) ? (a) : (b) )
+#define MIN(a,b) ( ((a) < (b)) ? (a) : (b) )
+
+/* blkfront device objects */
+static struct blkfront_dev* blkdev = NULL;
+static int blkfront_fd = -1;
+
+struct Vtpm {
+   uuid_t uuid;
+   int offset;
+};
+struct Storage {
+   int aes_offset;
+   int uuid_offset;
+   int end_offset;
+
+   int num_vtpms;
+   int num_vtpms_alloced;
+   struct Vtpm* vtpms;
+};
+
+/* Global storage data */
+static struct Storage g_store = {
+   .vtpms = NULL,
+};
+
+static int get_offset(void) {
+   return lseek(blkfront_fd, 0, SEEK_CUR);
+}
+
+static void reset_store(void) {
+   g_store.aes_offset = 0;
+   g_store.uuid_offset = 0;
+   g_store.end_offset = 0;
+
+   g_store.num_vtpms = 0;
+   g_store.num_vtpms_alloced = 0;
+   free(g_store.vtpms);
+   g_store.vtpms = NULL;
+}
+
+static int vtpm_get_index(const uuid_t uuid) {
+   int st = 0;
+   int ed = g_store.num_vtpms-1;
+   while(st <= ed) {
+      int mid = ((unsigned int)st + (unsigned int)ed) >> 1; //avoid overflow
+      int c = memcmp(uuid, &g_store.vtpms[mid].uuid, sizeof(uuid_t));
+      if(c == 0) {
+         return mid;
+      } else if(c > 0) {
+         st = mid + 1;
+      } else {
+         ed = mid - 1;
+      }
+   }
+   return -(st + 1);
+}
+
+static void vtpm_add(const uuid_t uuid, int offset, int index) {
+   /* Realloc more space if needed */
+   if(g_store.num_vtpms >= g_store.num_vtpms_alloced) {
+      g_store.num_vtpms_alloced += 16;
+      g_store.vtpms = realloc(
+            g_store.vtpms,
+            sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+   }
+
+   /* Move everybody after the new guy */
+   for(int i = g_store.num_vtpms; i > index; --i) {
+      g_store.vtpms[i] = g_store.vtpms[i-1];
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Registered vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+
+   /* Finally add new one */
+   memcpy(g_store.vtpms[index].uuid, uuid, sizeof(uuid_t));
+   g_store.vtpms[index].offset = offset;
+   ++g_store.num_vtpms;
+}
+
+#if 0
+static void vtpm_remove(int index) {
+   for(int i = index; i < g_store.num_vtpms - 1; ++i) {
+      g_store.vtpms[i] = g_store.vtpms[i+1];
+   }
+   --g_store.num_vtpms;
+}
+#endif
+
+static int pack_uuid_table(uint8_t* table, int size, int* nvtpms) {
+   uint8_t* ptr = table;
+   while(*nvtpms < g_store.num_vtpms && size >= 0)
+   {
+      /* Pack the uuid */
+      memcpy(ptr, (uint8_t*)g_store.vtpms[*nvtpms].uuid, sizeof(uuid_t));
+      ptr+= sizeof(uuid_t);
+
+
+      /* Pack the offset */
+      ptr = pack_UINT32(ptr, g_store.vtpms[*nvtpms].offset);
+
+      ++*nvtpms;
+      size -= UUID_TBL_ENT_SIZE;
+   }
+   return ptr - table;
+}
+
+/* Extract the uuids */
+static int extract_uuid_table(uint8_t* table, int size) {
+   uint8_t* ptr = table;
+   for(;size >= UUID_TBL_ENT_SIZE; size -= UUID_TBL_ENT_SIZE) {
+      int index;
+      uint32_t v32;
+
+      /*uuid_t is just an array of bytes, so we can do a direct cast here */
+      uint8_t* uuid = ptr;
+      ptr += sizeof(uuid_t);
+
+      /* Get the offset of the key */
+      ptr = unpack_UINT32(ptr, &v32);
+
+      /* Insert the new vtpm in sorted order */
+      if((index = vtpm_get_index(uuid)) >= 0) {
+         vtpmlogerror(VTPM_LOG_VTPM, "Vtpm (" UUID_FMT ") exists multiple times! ignoring...\n", UUID_BYTES(uuid));
+         continue;
+      }
+      index = -index -1;
+
+      vtpm_add(uuid, v32, index);
+
+   }
+   return ptr - table;
+}
+
+static void vtpm_decrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* cipher,
+      uint8_t* plain,
+      int cipher_sz,
+      int* overlap)
+{
+   int bytes_ext;
+   /* Decrypt */
+   aes_crypt_cbc(aes, AES_DECRYPT,
+         cipher_sz,
+         iv, cipher, plain + *overlap);
+
+   /* Extract */
+   bytes_ext = extract_uuid_table(plain, cipher_sz + *overlap);
+
+   /* Copy left overs to the beginning */
+   *overlap = cipher_sz + *overlap - bytes_ext;
+   memcpy(plain, plain + bytes_ext, *overlap);
+}
+
+static int vtpm_encrypt_block(aes_context* aes,
+      uint8_t* iv,
+      uint8_t* plain,
+      uint8_t* cipher,
+      int block_sz,
+      int* overlap,
+      int* num_vtpms)
+{
+   int bytes_to_crypt;
+   int bytes_packed;
+
+   /* Pack the uuid table */
+   bytes_packed = *overlap + pack_uuid_table(plain + *overlap, block_sz - *overlap, num_vtpms);
+   bytes_to_crypt = MIN(bytes_packed, block_sz);
+
+   /* Add padding if we aren't on a multiple of the block size */
+   if(bytes_to_crypt & (AES_BLOCK_SIZE-1)) {
+      int oldsz = bytes_to_crypt;
+      //add padding
+      bytes_to_crypt += AES_BLOCK_SIZE - (bytes_to_crypt & (AES_BLOCK_SIZE-1));
+      //fill padding with random bytes
+      vtpmmgr_rand(plain + oldsz, bytes_to_crypt - oldsz);
+      *overlap = 0;
+   } else {
+      *overlap = bytes_packed - bytes_to_crypt;
+   }
+
+   /* Encrypt this chunk */
+   aes_crypt_cbc(aes, AES_ENCRYPT,
+            bytes_to_crypt,
+            iv, plain, cipher);
+
+   /* Copy the left over partials to the beginning */
+   memcpy(plain, plain + bytes_to_crypt, *overlap);
+
+   return bytes_to_crypt;
+}
+
+static TPM_RESULT vtpm_storage_new_vtpm(const uuid_t uuid, int index) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t plain[BUF_SIZE + AES_BLOCK_SIZE];
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr;
+   int cipher_sz;
+   aes_context aes;
+
+   /* Add new vtpm to the table */
+   vtpm_add(uuid, g_store.end_offset, index);
+   g_store.end_offset += RSA_CIPHER_SIZE;
+
+   /* Compute the new end location of the encrypted uuid table */
+   cipher_sz = AES_BLOCK_SIZE; //IV
+   cipher_sz += g_store.num_vtpms * UUID_TBL_ENT_SIZE; //uuid table
+   cipher_sz += (AES_BLOCK_SIZE - (cipher_sz & (AES_BLOCK_SIZE -1))) & (AES_BLOCK_SIZE-1); //aes padding
+
+   /* Does this overlap any key data? If so they need to be relocated */
+   int uuid_end = (g_store.uuid_offset + cipher_sz + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      if(g_store.vtpms[i].offset < uuid_end) {
+
+         vtpmloginfo(VTPM_LOG_VTPM, "Relocating vtpm data\n");
+
+         //Read the hashkey cipher text
+         lseek(blkfront_fd, g_store.vtpms[i].offset, SEEK_SET);
+         TRY_READ(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Write the cipher text to new offset
+         lseek(blkfront_fd, g_store.end_offset, SEEK_SET);
+         TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey relocate");
+
+         //Save new offset
+         g_store.vtpms[i].offset = g_store.end_offset;
+         g_store.end_offset += RSA_CIPHER_SIZE;
+      }
+   }
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Generating a new symmetric key\n");
+
+   /* Generate an aes key */
+   TPMTRYRETURN(vtpmmgr_rand(plain, AES_KEY_SIZE));
+   aes_setkey_enc(&aes, plain, AES_KEY_BITS);
+   ptr = plain + AES_KEY_SIZE;
+
+   /* Pack the crypted size */
+   ptr = pack_UINT32(ptr, cipher_sz);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding encrypted key\n");
+
+   /* Seal the key and size */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+            plain,
+            ptr - plain,
+            buf));
+
+   /* Write the sealed key to disk */
+   lseek(blkfront_fd, g_store.aes_offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm aes key");
+
+   /* ENCRYPT AND WRITE UUID TABLE */
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Encrypting the uuid table\n");
+
+   int num_vtpms = 0;
+   int overlap = 0;
+   int bytes_crypted;
+   uint8_t iv[AES_BLOCK_SIZE];
+
+   /* Generate the iv for the first block */
+   TPMTRYRETURN(vtpmmgr_rand(iv, AES_BLOCK_SIZE));
+
+   /* Copy the iv to the cipher text buffer to be written to disk */
+   memcpy(buf, iv, AES_BLOCK_SIZE);
+   ptr = buf + AES_BLOCK_SIZE;
+
+   /* Encrypt the first block of the uuid table */
+   bytes_crypted = vtpm_encrypt_block(&aes,
+         iv, //iv
+         plain, //plaintext
+         ptr, //cipher text
+         BUF_SIZE - AES_BLOCK_SIZE,
+         &overlap,
+         &num_vtpms);
+
+   /* Write the iv followed by the crypted table*/
+   TRY_WRITE(buf, bytes_crypted + AES_BLOCK_SIZE, "vtpm uuid table");
+
+   /* Decrement the number of bytes encrypted */
+   cipher_sz -= bytes_crypted + AES_BLOCK_SIZE;
+
+   /* If there are more vtpms, encrypt and write them block by block */
+   while(cipher_sz > 0) {
+      /* Encrypt the next block of the uuid table */
+      bytes_crypted = vtpm_encrypt_block(&aes,
+               iv,
+               plain,
+               buf,
+               BUF_SIZE,
+               &overlap,
+               &num_vtpms);
+
+      /* Write the cipher text to disk */
+      TRY_WRITE(buf, bytes_crypted, "vtpm uuid table");
+
+      cipher_sz -= bytes_crypted;
+   }
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+/**************************************
+ * PUBLIC FUNCTIONS
+ * ***********************************/
+
+int vtpm_storage_init(void) {
+   struct blkfront_info info;
+   if((blkdev = init_blkfront(NULL, &info)) == NULL) {
+      return -1;
+   }
+   if((blkfront_fd = blkfront_open(blkdev)) < 0) {
+      return -1;
+   }
+   return 0;
+}
+
+void vtpm_storage_shutdown(void) {
+   reset_store();
+   close(blkfront_fd);
+}
+
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t cipher[RSA_CIPHER_SIZE];
+   uint8_t clear[RSA_CIPHER_SIZE];
+   UINT32 clear_size;
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      vtpmlogerror(VTPM_LOG_VTPM, "LoadKey failure: Unrecognized uuid! " UUID_FMT "\n", UUID_BYTES(uuid));
+      status = TPM_BAD_PARAMETER;
+      goto abort_egress;
+   }
+
+   /* Read the table entry */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_READ(cipher, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   /* Decrypt the table entry */
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            cipher,
+            &clear_size,
+            clear,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   if(clear_size < HASHKEYSZ) {
+      vtpmloginfo(VTPM_LOG_VTPM, "Decrypted Hash key size (%" PRIu32 ") was too small!\n", clear_size);
+      status = TPM_RESOURCES;
+      goto abort_egress;
+   }
+
+   memcpy(hashkey, clear, HASHKEYSZ);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loaded hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ])
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   int index;
+   uint8_t buf[RSA_CIPHER_SIZE];
+
+   /* Find the index of this uuid */
+   if((index = vtpm_get_index(uuid)) < 0) {
+      index = -index-1;
+      /* Create a new vtpm */
+      TPMTRYRETURN( vtpm_storage_new_vtpm(uuid, index) );
+   }
+
+   /* Encrypt the hash and key */
+   TPMTRYRETURN( TPM_Bind(&vtpm_globals.storage_key,
+            hashkey,
+            HASHKEYSZ,
+            buf));
+
+   /* Write to disk */
+   lseek(blkfront_fd, g_store.vtpms[index].offset, SEEK_SET);
+   TRY_WRITE(buf, RSA_CIPHER_SIZE, "vtpm hashkey data");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved hash and key for vtpm " UUID_FMT "\n", UUID_BYTES(uuid));
+   goto egress;
+abort_egress:
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to save key\n");
+egress:
+   return status;
+}
+
+TPM_RESULT vtpm_storage_new_header()
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t buf[BUF_SIZE];
+   uint8_t keybuf[AES_KEY_SIZE + sizeof(uint32_t)];
+   uint8_t* ptr = buf;
+   uint8_t* sptr;
+
+   /* Clear everything first */
+   reset_store();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Creating new disk image header\n");
+
+   /*Copy the ID string */
+   memcpy(ptr, IDSTR, IDSTRLEN);
+   ptr += IDSTRLEN;
+
+   /*Copy the version */
+   ptr = pack_UINT32(ptr, DISKVERS);
+
+   /*Save the location of the key size */
+   sptr = ptr;
+   ptr += sizeof(UINT32);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saving root storage key..\n");
+
+   /* Copy the storage key */
+   ptr = pack_TPM_KEY(ptr, &vtpm_globals.storage_key);
+
+   /* Now save the size */
+   pack_UINT32(sptr, ptr - (sptr + 4));
+
+   /* Create a fake aes key and set cipher text size to 0 */
+   memset(keybuf, 0, sizeof(keybuf));
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Binding uuid table symmetric key..\n");
+
+   /* Save the location of the aes key */
+   g_store.aes_offset = ptr - buf;
+
+   /* Store the fake aes key and vtpm count */
+   TPMTRYRETURN(TPM_Bind(&vtpm_globals.storage_key,
+         keybuf,
+         sizeof(keybuf),
+         ptr));
+   ptr+= RSA_CIPHER_SIZE;
+
+   /* Write the header to disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_WRITE(buf, ptr-buf, "vtpm header");
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Save the end offset */
+   g_store.end_offset = (g_store.uuid_offset + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Saved new manager disk header.\n");
+
+   goto egress;
+abort_egress:
+egress:
+   return status;
+}
+
+
+TPM_RESULT vtpm_storage_load_header(void)
+{
+   TPM_RESULT status = TPM_SUCCESS;
+   uint32_t v32;
+   uint8_t buf[BUF_SIZE];
+   uint8_t* ptr = buf;
+   aes_context aes;
+
+   /* Clear everything first */
+   reset_store();
+
+   /* Read the header from disk */
+   lseek(blkfront_fd, 0, SEEK_SET);
+   TRY_READ(buf, IDSTRLEN + sizeof(UINT32) + sizeof(UINT32), "vtpm header");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Loading disk image header\n");
+
+   /* Verify the ID string */
+   if(memcmp(ptr, IDSTR, IDSTRLEN)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Invalid ID string in disk image!\n");
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+   ptr+=IDSTRLEN;
+
+   /* Unpack the version */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Verify the version */
+   if(v32 != DISKVERS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unsupported disk image version number %" PRIu32 "\n", v32);
+      status = TPM_FAIL;
+      goto abort_egress;
+   }
+
+   /* Size of the storage key */
+   ptr = unpack_UINT32(ptr, &v32);
+
+   /* Sanity check */
+   if(v32 > BUF_SIZE) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Size of storage key (%" PRIu32 ") is too large!\n", v32);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* read the storage key */
+   TRY_READ(buf, v32, "storage pub key");
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unpacking storage key\n");
+
+   /* unpack the storage key */
+   ptr = unpack_TPM_KEY(buf, &vtpm_globals.storage_key, UNPACK_ALLOC);
+
+   /* Load Storage Key into the TPM */
+   TPMTRYRETURN( TPM_LoadKey(
+            TPM_SRK_KEYHANDLE,
+            &vtpm_globals.storage_key,
+            &vtpm_globals.storage_key_handle,
+            (const TPM_AUTHDATA*)&vtpm_globals.srk_auth,
+            &vtpm_globals.oiap));
+
+   /* Initialize the storage key auth */
+   memset(vtpm_globals.storage_key_usage_auth, 0, sizeof(TPM_AUTHDATA));
+
+   /* Store the offset of the aes key */
+   g_store.aes_offset = get_offset();
+
+   /* Read the rsa cipher text for the aes key */
+   TRY_READ(buf, RSA_CIPHER_SIZE, "aes key");
+   ptr = buf + RSA_CIPHER_SIZE;
+
+   vtpmloginfo(VTPM_LOG_VTPM, "Unbinding uuid table symmetric key\n");
+
+   /* Decrypt the aes key protecting the uuid table */
+   UINT32 datalen;
+   TPMTRYRETURN(TPM_UnBind(
+            vtpm_globals.storage_key_handle,
+            RSA_CIPHER_SIZE,
+            buf,
+            &datalen,
+            ptr,
+            (const TPM_AUTHDATA*)&vtpm_globals.storage_key_usage_auth,
+            &vtpm_globals.oiap));
+
+   /* Validate the length of the output buffer */
+   if(datalen < AES_KEY_SIZE + sizeof(UINT32)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unbound AES key size (%" PRIu32 ") was too small! expected (%zu)\n", datalen, AES_KEY_SIZE + sizeof(UINT32));
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Extract the aes key */
+   aes_setkey_dec(&aes, ptr, AES_KEY_BITS);
+   ptr+= AES_KEY_SIZE;
+
+   /* Extract the ciphertext size */
+   ptr = unpack_UINT32(ptr, &v32);
+   int cipher_size = v32;
+
+   /* Sanity check */
+   if(cipher_size & (AES_BLOCK_SIZE-1)) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Cipher text size (%" PRIu32 ") is not a multiple of the aes block size! (%d)\n", v32, AES_BLOCK_SIZE);
+      status = TPM_IOERROR;
+      goto abort_egress;
+   }
+
+   /* Save the location of the uuid table */
+   g_store.uuid_offset = get_offset();
+
+   /* Only decrypt the table if there are vtpms to decrypt */
+   if(cipher_size > 0) {
+      int rbytes;
+      int overlap = 0;
+      uint8_t plain[BUF_SIZE + AES_BLOCK_SIZE];
+      uint8_t iv[AES_BLOCK_SIZE];
+
+      vtpmloginfo(VTPM_LOG_VTPM, "Decrypting uuid table\n");
+
+      /* Pre allocate the vtpm array */
+      g_store.num_vtpms_alloced = cipher_size / UUID_TBL_ENT_SIZE;
+      g_store.vtpms = malloc(sizeof(struct Vtpm) * g_store.num_vtpms_alloced);
+
+      /* Read the iv and the first chunk of cipher text */
+      rbytes = MIN(cipher_size, BUF_SIZE);
+      TRY_READ(buf, rbytes, "vtpm uuid table");
+      cipher_size -= rbytes;
+
+      /* Copy the iv */
+      memcpy(iv, buf, AES_BLOCK_SIZE);
+      ptr = buf + AES_BLOCK_SIZE;
+
+      /* Remove the iv from the number of bytes to decrypt */
+      rbytes -= AES_BLOCK_SIZE;
+
+      /* Decrypt and extract vtpms */
+      vtpm_decrypt_block(&aes,
+            iv, ptr, plain,
+            rbytes, &overlap);
+
+      /* Read the rest of the table if there is more */
+      while(cipher_size > 0) {
+         /* Read next chunk of cipher text */
+         rbytes = MIN(cipher_size, BUF_SIZE);
+         TRY_READ(buf, rbytes, "vtpm uuid table");
+         cipher_size -= rbytes;
+
+         /* Decrypt a block of text */
+         vtpm_decrypt_block(&aes,
+               iv, buf, plain,
+               rbytes, &overlap);
+
+      }
+      vtpmloginfo(VTPM_LOG_VTPM, "Loaded %d vtpms!\n", g_store.num_vtpms);
+   }
+
+   /* The end of the key table, new vtpms go here */
+   int uuid_end = (get_offset() + RSA_CIPHER_SIZE) & ~(RSA_CIPHER_SIZE -1);
+   g_store.end_offset = uuid_end;
+
+   /* Compute the end offset while validating vtpms */
+   for(int i = 0; i < g_store.num_vtpms; ++i) {
+      /* offset must not collide with previous data */
+      if(g_store.vtpms[i].offset < uuid_end) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset (%d) is before end of uuid table (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, uuid_end);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* offset must be at a multiple of cipher size */
+      if(g_store.vtpms[i].offset & (RSA_CIPHER_SIZE-1)) {
+         vtpmlogerror(VTPM_LOG_VTPM, "vtpm: " UUID_FMT
+               " offset(%d) is not at a multiple of the rsa cipher text size (%d)!\n",
+               UUID_BYTES(g_store.vtpms[i].uuid),
+               g_store.vtpms[i].offset, RSA_CIPHER_SIZE);
+         status = TPM_IOERROR;
+         goto abort_egress;
+      }
+      /* Save the last offset */
+      if(g_store.vtpms[i].offset >= g_store.end_offset) {
+         g_store.end_offset = g_store.vtpms[i].offset + RSA_CIPHER_SIZE;
+      }
+   }
+
+   goto egress;
+abort_egress:
+   //An error occurred somewhere
+   vtpmlogerror(VTPM_LOG_VTPM, "Failed to load manager data!\n");
+
+   //Clear the data store
+   reset_store();
+
+   //Reset the storage key structure
+   free_TPM_KEY(&vtpm_globals.storage_key);
+   {
+      TPM_KEY key = TPM_KEY_INIT;
+      vtpm_globals.storage_key = key;
+   }
+
+   //Reset the storage key handle
+   TPM_EvictKey(vtpm_globals.storage_key_handle);
+   vtpm_globals.storage_key_handle = 0;
+egress:
+   return status;
+}
+
+#if 0
+/* For testing disk IO */
+void add_fake_vtpms(int num) {
+   for(int i = 0; i < num; ++i) {
+      uint32_t ind = cpu_to_be32(i);
+
+      uuid_t uuid;
+      memset(uuid, 0, sizeof(uuid_t));
+      memcpy(uuid, &ind, sizeof(ind));
+      int index = vtpm_get_index(uuid);
+      index = -index-1;
+
+      vtpm_storage_new_vtpm(uuid, index);
+   }
+}
+#endif
diff --git a/stubdom/vtpmmgr/vtpm_storage.h b/stubdom/vtpmmgr/vtpm_storage.h
new file mode 100644
index 0000000..a5a5fd7
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpm_storage.h
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPM_STORAGE_H
+#define VTPM_STORAGE_H
+
+#include "uuid.h"
+
+#define VTPM_NVMKEY_SIZE 32
+#define HASHKEYSZ (sizeof(TPM_DIGEST) + VTPM_NVMKEY_SIZE)
+
+/* Initialize the storage system and its virtual disk */
+int vtpm_storage_init(void);
+
+/* Shutdown the storage system and its virtual disk */
+void vtpm_storage_shutdown(void);
+
+/* Loads the SHA1 hash and 256 bit AES key from disk and stores them
+ * packed together in hashkey, which must hold at least HASHKEYSZ bytes.
+ */
+TPM_RESULT vtpm_storage_load_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* hashkey must contain a SHA1 hash followed by a 256 bit AES key.
+ * Encrypts and stores the hash and key to disk */
+TPM_RESULT vtpm_storage_save_hashkey(const uuid_t uuid, uint8_t hashkey[HASHKEYSZ]);
+
+/* Load the vtpm manager data - call this on startup */
+TPM_RESULT vtpm_storage_load_header(void);
+
+/* Saves the vtpm manager data - call this on shutdown */
+TPM_RESULT vtpm_storage_new_header(void);
+
+
+#endif
diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
new file mode 100644
index 0000000..563f4e8
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdint.h>
+#include <mini-os/tpmback.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+#include "log.h"
+
+#include "vtpmmgr.h"
+#include "tcg.h"
+
+
+void main_loop(void) {
+   tpmcmd_t* tpmcmd;
+   uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
+
+   while(1) {
+      /* Wait for requests from a vtpm */
+      vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPM's:\n");
+      if((tpmcmd = tpmback_req_any()) == NULL) {
+         vtpmlogerror(VTPM_LOG_VTPM, "NULL tpmcmd\n");
+         continue;
+      }
+
+      tpmcmd->resp = respbuf;
+
+      /* Process the command */
+      vtpmmgr_handle_cmd(tpmcmd->uuid, tpmcmd);
+
+      /* Send response */
+      tpmback_resp(tpmcmd);
+   }
+}
+
+int main(int argc, char** argv)
+{
+   int rc = 0;
+   sleep(2);
+   vtpmloginfo(VTPM_LOG_VTPM, "Starting vTPM manager domain\n");
+
+   /* Initialize the vtpm manager */
+   if(vtpmmgr_init(argc, argv) != TPM_SUCCESS) {
+      vtpmlogerror(VTPM_LOG_VTPM, "Unable to initialize vtpmmgr domain!\n");
+      rc = -1;
+      goto exit;
+   }
+
+   main_loop();
+
+   vtpmloginfo(VTPM_LOG_VTPM, "vTPM Manager shutting down...\n");
+
+   vtpmmgr_shutdown();
+
+exit:
+   return rc;
+
+}
diff --git a/stubdom/vtpmmgr/vtpmmgr.h b/stubdom/vtpmmgr/vtpmmgr.h
new file mode 100644
index 0000000..50a1992
--- /dev/null
+++ b/stubdom/vtpmmgr/vtpmmgr.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (c) 2010-2012 United States Government, as represented by
+ * the Secretary of Defense.  All rights reserved.
+ *
+ * based off of the original tools/vtpm_manager code base which is:
+ * Copyright (c) 2005, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above
+ *     copyright notice, this list of conditions and the following
+ *     disclaimer in the documentation and/or other materials provided
+ *     with the distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef VTPMMGR_H
+#define VTPMMGR_H
+
+#include <mini-os/tpmback.h>
+#include <polarssl/entropy.h>
+#include <polarssl/ctr_drbg.h>
+
+#include "uuid.h"
+#include "tcg.h"
+#include "vtpm_manager.h"
+
+#define RSA_KEY_SIZE 0x0800
+#define RSA_CIPHER_SIZE (RSA_KEY_SIZE / 8)
+
+struct vtpm_globals {
+   int tpm_fd;
+   TPM_KEY             storage_key;
+   TPM_HANDLE          storage_key_handle;       // Key used by persistent store
+   TPM_AUTH_SESSION    oiap;                // OIAP session for storageKey
+   TPM_AUTHDATA        storage_key_usage_auth;
+
+   TPM_AUTHDATA        owner_auth;
+   TPM_AUTHDATA        srk_auth;
+
+   entropy_context     entropy;
+   ctr_drbg_context    ctr_drbg;
+};
+
+// --------------------------- Global Values --------------------------
+extern struct vtpm_globals vtpm_globals;   // Key info and DMI states
+
+TPM_RESULT vtpmmgr_init(int argc, char** argv);
+void vtpmmgr_shutdown(void);
+
+TPM_RESULT vtpmmgr_handle_cmd(const uuid_t uuid, tpmcmd_t* tpmcmd);
+
+static inline TPM_RESULT vtpmmgr_rand(unsigned char* bytes, size_t num_bytes) {
+   return ctr_drbg_random(&vtpm_globals.ctr_drbg, bytes, num_bytes) == 0 ? 0 : TPM_FAIL;
+}
+
+#endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:20:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg31-0004wP-5N; Thu, 06 Dec 2012 18:19:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2z-0004vR-Ro
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:46 +0000
Received: from [85.158.138.51:23421] by server-10.bemta-3.messagelabs.com id
	19/D8-19806-1C1E0C05; Thu, 06 Dec 2012 18:19:45 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354817981!19849219!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_23,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2176 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01b8_4fdfff42_38f8_44a7_a58f_7571a01cb330;
	Thu, 06 Dec 2012 13:19:34 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:17 -0500
Message-Id: <1354817961-22196-4-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 4/8] Add vtpm documentation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See the files included in this patch for details

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 docs/misc/vtpm.txt     |  357 ++++++++++++++++++++++++++++++++++--------------
 stubdom/vtpm/README    |   75 ++++++++++
 stubdom/vtpmmgr/README |   75 ++++++++++
 3 files changed, 401 insertions(+), 106 deletions(-)
 create mode 100644 stubdom/vtpm/README
 create mode 100644 stubdom/vtpmmgr/README

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index ad37fe8..fc6029a 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,152 +1,297 @@
-Copyright: IBM Corporation (C), Intel Corporation
-29 June 2006
-Authors: Stefan Berger <stefanb@us.ibm.com> (IBM), 
-         Employees of Intel Corp
-
-This document gives a short introduction to the virtual TPM support
-in XEN and goes as far as connecting a user domain to a virtual TPM
-instance and doing a short test to verify success. It is assumed
-that the user is fairly familiar with compiling and installing XEN
-and Linux on a machine. 
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the virtual Trusted Platform Module (vTPM) subsystem
+for Xen. The reader is assumed to have familiarity with building and installing
+Xen, Linux, and a basic understanding of the TPM and vTPM concepts.
+
+------------------------------
+INTRODUCTION
+------------------------------
+The goal of this work is to provide a TPM functionality to a virtual guest
+operating system (a DomU).  This allows programs to interact with a TPM in a
+virtual system the same way they interact with a TPM on the physical system.
+Each guest gets its own unique, emulated, software TPM.  However, each of the
+vTPM's secrets (Keys, NVRAM, etc) are managed by a vTPM Manager domain, which
+seals the secrets to the Physical TPM.  Thus, the vTPM subsystem extends the
+chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
+major component of vTPM is implemented as a separate domain, providing secure
+separation guaranteed by the hypervisor. The vTPM domains are implemented in
+mini-os to reduce memory and processor overhead.
+
+This mini-os vTPM subsystem was built on top of the previous vTPM
+work done by IBM and Intel corporation.
  
-Production Prerequisites: An x86-based machine machine with a
-Linux-supported TPM on the motherboard (NSC, Atmel, Infineon, TPM V1.2).
-Development Prerequisites: An emulator for TESTING ONLY is provided
-
+------------------------------
+DESIGN OVERVIEW
+------------------------------
+
+The architecture of vTPM is described below:
+
++------------------+
+|    Linux DomU    | ...
+|       |  ^       |
+|       v  |       |
+|   xen-tpmfront   |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|  vtpm-stubdom    | ...
+|       |  ^       |
+|       v  |       |
+| mini-os/tpmfront |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|   vtpmmgrdom     |
+|       |  ^       |
+|       v  |       |
+| mini-os/tpm_tis  |
++------------------+
+        |  ^
+        v  |
++------------------+
+|   Hardware TPM   |
++------------------+
+ * Linux DomU: The Linux based guest that wants to use a vTPM. There may be
+               more than one of these.
+
+ * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This driver
+                    provides vTPM access to a para-virtualized Linux based DomU.
+
+ * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend driver
+                    connects to this backend driver to facilitate
+                    communications between the Linux DomU and its vTPM. This
+                    driver is also used by vtpmmgrdom to communicate with
+                    vtpm-stubdom.
+
+ * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is a
+                 one to one mapping between running vtpm-stubdom instances and
+                 logical vtpms on the system. The vTPM Platform Configuration
+                 Registers (PCRs) are all initialized to zero.
+
+ * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
+                     vtpm-stubdom uses this driver to communicate with
+                     vtpmmgrdom. This driver could also be used separately to
+                     implement a mini-os domain that wishes to use a vTPM of
+                     its own.
+
+ * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
+               There is only one vTPM manager and it should be running during
+               the entire lifetime of the machine.  This domain regulates
+               access to the physical TPM on the system and secures the
+               persistent state of each vTPM.
+
+ * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification (TIS)
+                    driver. This driver is used by vtpmmgrdom to talk directly to
+                    the hardware TPM. Communication is facilitated by mapping
+                    hardware memory pages into vtpmmgrdom.
+
+ * Hardware TPM: The physical TPM that is soldered onto the motherboard.
+
+------------------------------
+INSTALLATION
+------------------------------
+
+Prerequisites:
+--------------
+You must have an x86 machine with a TPM on the motherboard.
+The only software requirement for compiling vTPM is cmake.
+You must use libxl to manage domains with vTPMs. 'xm' is
+deprecated and does not support vTPM.
 
 Compiling the XEN tree:
 -----------------------
 
-Compile the XEN tree as usual after the following lines set in the
-linux-2.6.??-xen/.config file:
+Compile and install the XEN tree as usual. Be sure to build and install
+the stubdom tree.
+
+Compiling the LINUX dom0 kernel:
+--------------------------------
 
-CONFIG_XEN_TPMDEV_BACKEND=m
+The Linux dom0 kernel has no special prerequisites.
 
-CONFIG_TCG_TPM=m
-CONFIG_TCG_TIS=m      (supported after 2.6.17-rc4)
-CONFIG_TCG_NSC=m
-CONFIG_TCG_ATMEL=m
-CONFIG_TCG_INFINEON=m
-CONFIG_TCG_XEN=m
-<possible other TPM drivers supported by Linux>
+Compiling the LINUX domU kernel:
+--------------------------------
 
-If the frontend driver needs to be compiled into the user domain
-kernel, then the following two lines should be changed.
+The domU kernel used by domains with vtpms must
+include the xen-tpmfront.ko driver. It can be built
+directly into the kernel or as a module.
 
 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y
 
+------------------------------
+VTPM MANAGER SETUP
+------------------------------
+
+Manager disk image setup:
+-------------------------
+
+The vTPM Manager requires a disk image to store its
+encrypted data. The image does not require a filesystem
+and can live anywhere on the host disk. The image does not need
+to be large. 8 to 16 MB should be sufficient.
+
+# dd if=/dev/zero of=/var/vtpmmgrdom.img bs=16M count=1
+
+Manager config file:
+--------------------
+
+The vTPM Manager domain (vtpmmgrdom) must be started like
+any other Xen virtual machine and requires a config file.
+The manager requires a disk image for storage and permission
+to access the hardware memory pages for the TPM. An
+example configuration looks like the following.
+
+kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
+memory=16
+disk=["file:/var/vtpmmgrdom.img,hda,w"]
+name="vtpmmgrdom"
+iomem=["fed40,5"]
+
+The iomem line tells xl to allow access to the TPM
+IO memory pages, which are 5 pages that start at
+0xfed40000.
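
The iomem= value is a page frame number, not a byte address. As a
quick sanity check (assuming the standard 4 KiB x86 page size and the
TPM TIS base address 0xfed40000 mentioned above), the frame number in
the example config can be derived like this:

```shell
# Derive the iomem= page frame number from the TPM's MMIO base address.
base=0xfed40000     # standard TPM TIS MMIO base
page_size=4096      # x86 page size
printf 'iomem=["%x,5"]\n' $(( base / page_size ))
```

Running this prints the same iomem line used in the config above.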
+
+Starting and stopping the manager:
+----------------------------------
+
+The vTPM manager should be started at boot; you may wish to
+create an init script to do this.
+
+# xl create -c vtpmmgrdom.cfg
+
+Once initialization is complete you should see the following:
+INFO[VTPM]: Waiting for commands from vTPM's:
+
+To shut down the manager you must destroy it. To avoid data corruption,
+only destroy the manager when you see the above "Waiting for commands"
+message. This ensures the disk is in a consistent state.
+
+# xl destroy vtpmmgrdom
+
+------------------------------
+VTPM AND LINUX PVM SETUP
+------------------------------
 
-You must also enable the virtual TPM to be built:
+In the following examples we will assume we have a Linux
+guest named "domu" with its associated configuration
+located at /home/user/domu. Its vTPM will be named
+domu-vtpm.
 
-In Config.mk in the Xen root directory set the line
+vTPM disk image setup:
+----------------------
 
-VTPM_TOOLS ?= y
+The vTPM requires a disk image to store its persistent
+data. The image does not require a filesystem. The image
+does not need to be large. 8 MB should be sufficient.
 
-and in
+# dd if=/dev/zero of=/home/user/domu/vtpm.img bs=8M count=1
 
-tools/vtpm/Rules.mk set the line
+vTPM config file:
+-----------------
 
-BUILD_EMULATOR = y
+The vTPM domain requires a configuration file like
+any other domain. The vTPM requires a disk image for
+storage and a TPM frontend driver to communicate
+with the manager. An example configuration is given:
 
-Now build the Xen sources from Xen's root directory:
+kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
+memory=8
+disk=["file:/home/user/domu/vtpm.img,hda,w"]
+name="domu-vtpm"
+vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
 
-make install
+The vtpm= line sets up the tpm frontend driver. The backend must be
+set to vtpmmgrdom. You are required to generate a uuid for this vtpm;
+you can use the uuidgen unix program or some other method to create
+one. The uuid uniquely identifies this vtpm to the manager.
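
For example, a fresh vtpm= line can be generated like this (uuidgen
from util-linux is the usual tool; this sketch reads the kernel's uuid
generator instead so it needs no extra package):

```shell
# Generate a random uuid and emit the vtpm= config line for it.
uuid=$(cat /proc/sys/kernel/random/uuid)
echo "vtpm=[\"backend=vtpmmgrdom,uuid=$uuid\"]"
```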
 
+If you wish to clear the vTPM data you can either recreate the
+disk image or change the uuid.
 
-Also build the initial RAM disk if necessary.
+Linux Guest config file:
+------------------------
 
-Reboot the machine with the created Xen kernel.
+The Linux guest config file needs to be modified to include
+the Linux tpmfront driver. Add the following line:
 
-Note: If you do not want any TPM-related code compiled into your
-kernel or built as module then comment all the above lines like
-this example:
-# CONFIG_TCG_TPM is not set
+vtpm=["backend=domu-vtpm"]
 
+Currently only paravirtualized guests are supported.
 
-Modifying VM Configuration files:
----------------------------------
+Launching and shut down:
+------------------------
 
-VM configuration files need to be adapted to make a TPM instance
-available to a user domain. The following VM configuration file is
-an example of how a user domain can be configured to have a TPM
-available. It works similar to making a network interface
-available to a domain.
+To launch a Linux guest with a vTPM we first have to start the vTPM domain.
 
-kernel = "/boot/vmlinuz-2.6.x"
-ramdisk = "/xen/initrd_domU/U1_ramdisk.img"
-memory = 32
-name = "TPMUserDomain0"
-vtpm = ['instance=1,backend=0']
-root = "/dev/ram0 console=tty ro"
-vif = ['backend=0']
+# xl create -c /home/user/domu/vtpm.cfg
 
-In the above configuration file the line 'vtpm = ...' provides
-information about the domain where the virtual TPM is running and
-where the TPM backend has been compiled into - this has to be 
-domain 0  at the moment - and which TPM instance the user domain
-is supposed to talk to. Note that each running VM must use a 
-different instance and that using instance 0 is NOT allowed. The
-instance parameter is taken as the desired instance number, but
-the actual instance number that is assigned to the virtual machine
-can be different. This is the case if for example that particular
-instance is already used by another virtual machine. The association
-of which TPM instance number is used by which virtual machine is
-kept in the file /var/vtpm/vtpm.db. Associations are maintained by
-a xend-internal vTPM UUID and vTPM instance number.
+After initialization is complete, you should see the following:
+Info: Waiting for frontend domain to connect..
 
-Note: If you do not want TPM functionality for your user domain simply
-leave out the 'vtpm' line in the configuration file.
+Next, launch the Linux guest
 
+# xl create -c /home/user/domu/domu.cfg
 
-Running the TPM:
-----------------
+If xen-tpmfront was compiled as a module, be sure to load it
+in the guest.
 
-To run the vTPM, the device /dev/vtpm must be available.
-Verify that 'ls -l /dev/vtpm' shows the following output:
+# modprobe xen-tpmfront
 
-crw-------  1 root root 10, 225 Aug 11 06:58 /dev/vtpm
+After the Linux domain boots and the xen-tpmfront driver is loaded,
+you should see the following on the vtpm console:
 
-If it is not available, run the following command as 'root'.
-mknod /dev/vtpm c 10 225
+Info: VTPM attached to Frontend X/Y
 
-Make sure that the vTPM is running in domain 0. To do this run the
-following:
+If you have trousers and tpm_tools installed on the guest, you can test the
+vtpm.
 
-modprobe tpmbk
+On guest:
+# tcsd (if tcsd is not running already)
+# tpm_version
 
-/usr/bin/vtpm_managerd
+The version command should return the following:
+  TPM 1.2 Version Info:
+  Chip Version:        1.2.0.7
+  Spec Level:          2
+  Errata Revision:     1
+  TPM Vendor ID:       ETHZ
+  TPM Version:         01010000
+  Manufacturer Info:   4554485a
 
-Start a user domain using the 'xm create' command. Once you are in the
-shell of the user domain, you should be able to do the following as
-user 'root':
+You should also see the command being sent to the vtpm console as well
+as the vtpm saving its state. You should see the vtpm key being
+encrypted and stored on the vtpmmgrdom console.
 
-Insert the TPM frontend into the kernel if it has been compiled as a
-kernel module.
+To shut down the guest and its vtpm, you just have to shut down the
+guest normally. As soon as the guest vm disconnects, the vtpm will
+shut itself down automatically.
 
-> modprobe tpm_xenu
+On guest:
+# shutdown -h now
 
-Check the status of the TPM
+You may wish to write a script to start your vtpm and guest together.
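
A minimal wrapper might look like the following (paths are the
hypothetical ones from the examples above; the vTPM domain must be
created before the guest that uses it). The sketch only writes the
script to a file, since actually running it requires a live Xen
toolstack:

```shell
# Write a start-up script that brings up the vTPM domain, then the guest.
cat > start-domu.sh <<'EOF'
#!/bin/sh
set -e
xl create /home/user/domu/vtpm.cfg   # vTPM domain first
xl create /home/user/domu/domu.cfg   # then the guest that uses it
EOF
chmod +x start-domu.sh
echo "wrote start-domu.sh"
```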
 
-> cd /sys/devices/xen/vtpm-0
-> ls
-[...]  cancel  caps   pcrs    pubek   [...]
-> cat pcrs
-PCR-00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-01: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-02: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-03: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-04: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-05: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-06: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-07: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-08: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-[...]
+------------------------------
+MORE INFORMATION
+------------------------------
 
-At this point the user domain has been successfully connected to its
-virtual TPM instance.
+See stubdom/vtpmmgr/README for more details about how
+the manager domain works, how to use it, and its command line
+parameters.
 
-For further information please read the documentation in 
-tools/vtpm_manager/README and tools/vtpm/README
+See stubdom/vtpm/README for more specifics about how vtpm-stubdom
+operates and the command line options it accepts.
 
-Stefan Berger and Employees of the Intel Corp
diff --git a/stubdom/vtpm/README b/stubdom/vtpm/README
new file mode 100644
index 0000000..11bdacb
--- /dev/null
+++ b/stubdom/vtpm/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpm-stubdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpm-stubdom is a mini-OS domain that emulates a TPM for the guest OS to
+use. It is a small wrapper around the Berlios TPM emulator
+version 0.7.4. Commands are passed from the Linux guest via the
+mini-os TPM backend driver. vTPM data is encrypted and stored via a disk image
+provided to the virtual machine. The key used to encrypt the data along
+with a hash of the vTPM's data is sent to the vTPM manager for secure storage
+and later retrieval.  The vTPM domain communicates with the manager using a
+mini-os tpm front/back device pair.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+loglevel=<LOG>: Controls the amount of logging printed to the console.
+	The possible values for <LOG> are:
+	 error
+	 info (default)
+	 debug
+
+clear: Start the Berlios emulator in "clear" mode. (default)
+
+save: Start the Berlios emulator in "save" mode.
+
+deactivated: Start the Berlios emulator in "deactivated" mode.
+	See the Berlios TPM emulator documentation for details
+	about these startup modes. For normal use, stick with clear,
+	which is the default; you should not need to specify any of these.
+
+maintcmds=<1|0>: Enable or disable the TPM maintenance commands.
+	These commands are used by tpm manufacturers and thus
+	open a security hole. They are disabled by default.
+
+hwinitpcrs=<PCRSPEC>: Initialize the virtual Platform Configuration Registers
+	(PCRs) with PCR values from the hardware TPM. Each PCR specified by
+	<PCRSPEC> will be initialized with the value of that same PCR in the
+	hardware TPM once at startup. By default all PCRs are zero initialized.
+	Valid values of <PCRSPEC> are:
+	 all: copy all pcrs
+	 none: copy no pcrs (default)
+	 <N>: copy pcr n
+	 <X-Y>: copy pcrs x to y (inclusive)
+
+	These can also be combined by comma separation, for example:
+	 hwinitpcrs=5,12-16
+	will copy pcrs 5, 12, 13, 14, 15, and 16.
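The combination rules above can be sketched as follows. This is a hypothetical illustration of how such a <PCRSPEC> expands, not vtpm-stubdom's actual parsing code:

```shell
# Hypothetical sketch of <PCRSPEC> expansion (not the stubdom's real
# parser): split on commas, then expand X-Y ranges inclusively.
expand_pcrspec() {
    for part in $(printf '%s' "$1" | tr ',' ' '); do
        case "$part" in
            *-*) seq "${part%-*}" "${part#*-}" ;;   # range X-Y
            *)   echo "$part" ;;                    # single PCR
        esac
    done
}
expand_pcrspec "5,12-16" | xargs    # 5 12 13 14 15 16
```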
+
+------------------------------
+REFERENCES
+------------------------------
+
+Berlios TPM Emulator:
+http://tpm-emulator.berlios.de/
diff --git a/stubdom/vtpmmgr/README b/stubdom/vtpmmgr/README
new file mode 100644
index 0000000..09f3958
--- /dev/null
+++ b/stubdom/vtpmmgr/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpmmgrdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpmmgrdom implements a vTPM manager which has two major functions:
+
+ - Securely store encryption keys for each of the vTPMs
+ - Regulate access to the hardware TPM for the entire system
+
+The manager accepts commands from the vtpm-stubdom domains via the mini-os
+TPM backend driver. The vTPM manager communicates directly with the
+hardware TPM using the mini-os tpm_tis driver.
+
+
+When the manager starts for the first time it will check if the TPM
+has an owner. If the TPM is unowned, it will attempt to take ownership
+with the supplied owner_auth (see below) and then create a TPM
+storage key which will be used to secure vTPM key data. Currently the
+manager only binds vTPM keys to the disk. In the future support
+for sealing to PCRs should be added.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+owner_auth=<AUTHSPEC>: Set the owner auth of the TPM. The default
+	is the well known owner auth of all ones.
+
+srk_auth=<AUTHSPEC>: Set the SRK auth for the TPM. The default is
+	the well known srk auth of all zeroes.
+	The possible values of <AUTHSPEC> are:
+	 well-known: Use the well known auth (default)
+	 random: Randomly generate an auth
+	 hash: <HASH>: Use the given 40-character ASCII hex string
+	 text: <STR>: Use the SHA-1 hash of <STR>.
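Assuming the text: form hashes exactly the bytes of the given string (an assumption based on the description above), the 40-character value it corresponds to can be previewed with sha1sum:

```shell
# Sketch: preview the 40-character hex auth that "text: <STR>" implies,
# assuming it is the plain SHA-1 of the string's bytes.
STR="abc"   # example string; use your real secret instead
auth=$(printf '%s' "$STR" | sha1sum | awk '{print $1}')
echo "$auth"    # a9993e364706816aba3e25717850c26c9cd0d89d
```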
+
+tpmdriver=<DRIVER>: Which driver to use to talk to the hardware TPM.
+	Don't change this unless you know what you're doing.
+	The possible values of <DRIVER> are:
+	 tpm_tis: Use the tpm_tis driver to talk directly to the TPM.
+		The domain must have access to TPM IO memory.  (default)
+	 tpmfront: Use tpmfront to talk to the TPM. The domain must have
+		a tpmfront device setup to talk to another domain
+		which provides access to the TPM.
+
+The following options only apply to the tpm_tis driver:
+
+tpmiomem=<ADDR>: The base address of the hardware memory pages of the
+	TPM (default 0xfed40000).
+
+tpmirq=<IRQ>: The irq of the hardware TPM if using interrupts. A value of
+	"probe" can be set to probe for the irq. A value of 0
+	disables interrupts and uses polling (default 0).
+
+tpmlocality=<LOC>: Attempt to use locality <LOC> of the hardware TPM.
+	(default 0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:20:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg31-0004wP-5N; Thu, 06 Dec 2012 18:19:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu>)
	id 1Tgg2z-0004vR-Ro
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:19:46 +0000
Received: from [85.158.138.51:23421] by server-10.bemta-3.messagelabs.com id
	19/D8-19806-1C1E0C05; Thu, 06 Dec 2012 18:19:45 +0000
X-Env-Sender: prvs=0684f9a7a4=matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354817981!19849219!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_23,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2176 invoked from network); 6 Dec 2012 18:19:42 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 18:19:42 -0000
Received: from anonelbe.jhuapl.edu (anonelbe.jhuapl.edu [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6e8c_01b8_4fdfff42_38f8_44a7_a58f_7571a01cb330;
	Thu, 06 Dec 2012 13:19:34 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Date: Thu,  6 Dec 2012 13:19:17 -0500
Message-Id: <1354817961-22196-4-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [VTPM v7 4/8] Add vtpm documentation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See the files included in this patch for details

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 docs/misc/vtpm.txt     |  357 ++++++++++++++++++++++++++++++++++--------------
 stubdom/vtpm/README    |   75 ++++++++++
 stubdom/vtpmmgr/README |   75 ++++++++++
 3 files changed, 401 insertions(+), 106 deletions(-)
 create mode 100644 stubdom/vtpm/README
 create mode 100644 stubdom/vtpmmgr/README

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index ad37fe8..fc6029a 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,152 +1,297 @@
-Copyright: IBM Corporation (C), Intel Corporation
-29 June 2006
-Authors: Stefan Berger <stefanb@us.ibm.com> (IBM), 
-         Employees of Intel Corp
-
-This document gives a short introduction to the virtual TPM support
-in XEN and goes as far as connecting a user domain to a virtual TPM
-instance and doing a short test to verify success. It is assumed
-that the user is fairly familiar with compiling and installing XEN
-and Linux on a machine. 
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the virtual Trusted Platform Module (vTPM) subsystem
+for Xen. The reader is assumed to have familiarity with building and installing
+Xen, Linux, and a basic understanding of the TPM and vTPM concepts.
+
+------------------------------
+INTRODUCTION
+------------------------------
+The goal of this work is to provide a TPM functionality to a virtual guest
+operating system (a DomU).  This allows programs to interact with a TPM in a
+virtual system the same way they interact with a TPM on the physical system.
+Each guest gets its own unique, emulated, software TPM.  However, each
+vTPM's secrets (keys, NVRAM, etc.) are managed by a vTPM Manager domain, which
+seals the secrets to the Physical TPM.  Thus, the vTPM subsystem extends the
+chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
+major component of vTPM is implemented as a separate domain, providing secure
+separation guaranteed by the hypervisor. The vTPM domains are implemented in
+mini-os to reduce memory and processor overhead.
+
+This mini-os vTPM subsystem was built on top of the previous vTPM
+work done by IBM and Intel corporation.
  
-Production Prerequisites: An x86-based machine machine with a
-Linux-supported TPM on the motherboard (NSC, Atmel, Infineon, TPM V1.2).
-Development Prerequisites: An emulator for TESTING ONLY is provided
-
+------------------------------
+DESIGN OVERVIEW
+------------------------------
+
+The architecture of vTPM is described below:
+
++------------------+
+|    Linux DomU    | ...
+|       |  ^       |
+|       v  |       |
+|   xen-tpmfront   |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|  vtpm-stubdom    | ...
+|       |  ^       |
+|       v  |       |
+| mini-os/tpmfront |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|   vtpmmgrdom     |
+|       |  ^       |
+|       v  |       |
+| mini-os/tpm_tis  |
++------------------+
+        |  ^
+        v  |
++------------------+
+|   Hardware TPM   |
++------------------+
+ * Linux DomU: The Linux-based guest that wants to use a vTPM. There may be
+               more than one of these.
+
+ * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This driver
+                    provides vTPM access to a para-virtualized Linux based DomU.
+
+ * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend driver
+                    connects to this backend driver to facilitate
+                    communications between the Linux DomU and its vTPM. This
+                    driver is also used by vtpmmgrdom to communicate with
+                    vtpm-stubdom.
+
+ * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is a
+                 one to one mapping between running vtpm-stubdom instances and
+                 logical vtpms on the system. The vTPM Platform Configuration
+                 Registers (PCRs) are all initialized to zero.
+
+ * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
+                     vtpm-stubdom uses this driver to communicate with
+                     vtpmmgrdom. This driver could also be used separately to
+                     implement a mini-os domain that wishes to use a vTPM of
+                     its own.
+
+ * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
+               There is only one vTPM manager and it should be running during
+               the entire lifetime of the machine.  This domain regulates
+               access to the physical TPM on the system and secures the
+               persistent state of each vTPM.
+
+ * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification (TIS)
+                    driver. This driver is used by vtpmmgrdom to talk directly to
+                    the hardware TPM. Communication is facilitated by mapping
+                    hardware memory pages into vtpmmgrdom.
+
+ * Hardware TPM: The physical TPM that is soldered onto the motherboard.
+
+------------------------------
+INSTALLATION
+------------------------------
+
+Prerequisites:
+--------------
+You must have an x86 machine with a TPM on the motherboard.
+The only software requirement for compiling the vTPM components is cmake.
+You must use libxl to manage domains with vTPMs. 'xm' is
+deprecated and does not support vTPM.
 
 Compiling the XEN tree:
 -----------------------
 
-Compile the XEN tree as usual after the following lines set in the
-linux-2.6.??-xen/.config file:
+Compile and install the XEN tree as usual. Be sure to build and install
+the stubdom tree.
+
+Compiling the LINUX dom0 kernel:
+--------------------------------
 
-CONFIG_XEN_TPMDEV_BACKEND=m
+The Linux dom0 kernel has no special prerequisites.
 
-CONFIG_TCG_TPM=m
-CONFIG_TCG_TIS=m      (supported after 2.6.17-rc4)
-CONFIG_TCG_NSC=m
-CONFIG_TCG_ATMEL=m
-CONFIG_TCG_INFINEON=m
-CONFIG_TCG_XEN=m
-<possible other TPM drivers supported by Linux>
+Compiling the LINUX domU kernel:
+--------------------------------
 
-If the frontend driver needs to be compiled into the user domain
-kernel, then the following two lines should be changed.
+The domU kernel used by domains with vtpms must
+include the xen-tpmfront.ko driver. It can be built
+directly into the kernel or as a module.
 
 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y
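A quick way to confirm a domU kernel was built with these options is to grep its config file. The config path below is a stand-in (a sample file is generated only so the command is self-contained); point it at your domU kernel's .config or /boot/config-<version>:

```shell
# Sketch: check a kernel config for the vTPM frontend options.
# The sample file stands in for your real domU kernel config.
cfg="${TMPDIR:-/tmp}/sample.config"
printf 'CONFIG_TCG_TPM=y\nCONFIG_TCG_XEN=y\n' > "$cfg"
grep -E '^CONFIG_TCG_(TPM|XEN)=' "$cfg"
```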
 
+------------------------------
+VTPM MANAGER SETUP
+------------------------------
+
+Manager disk image setup:
+-------------------------
+
+The vTPM Manager requires a disk image to store its
+encrypted data. The image does not require a filesystem
+and can live anywhere on the host disk. The image does not need
+to be large; 8 to 16 MB should be sufficient.
+
+# dd if=/dev/zero of=/var/vtpmmgrdom.img bs=16M count=1
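The dd invocation above produces a 16 MB (16777216-byte) zero-filled image. A sketch using a temporary stand-in path (so it can run anywhere) that also verifies the size:

```shell
# Sketch: create a 16 MB zero-filled image and confirm its size.
# The temporary path stands in for /var/vtpmmgrdom.img.
IMG="${TMPDIR:-/tmp}/vtpmmgrdom.img"
rm -f "$IMG"
dd if=/dev/zero of="$IMG" bs=1M count=16 2>/dev/null
wc -c < "$IMG"    # 16777216
```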
+
+Manager config file:
+--------------------
+
+The vTPM Manager domain (vtpmmgrdom) must be started like
+any other Xen virtual machine and requires a config file.
+The manager requires a disk image for storage and permission
+to access the hardware memory pages for the TPM. An
+example configuration looks like the following.
+
+kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
+memory=16
+disk=["file:/var/vtpmmgrdom.img,hda,w"]
+name="vtpmmgrdom"
+iomem=["fed40,5"]
+
+The iomem line tells xl to allow access to the TPM
+IO memory pages, which are 5 pages that start at
+0xfed40000.
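The arithmetic behind that line can be checked directly: xl's iomem syntax takes a start page frame number and a page count, so with 4 KiB pages, "fed40,5" covers 0xfed40000 through 0xfed44fff:

```shell
# Sketch: relate iomem=["fed40,5"] to the TPM MMIO region,
# assuming 4 KiB pages.
base=$(( 0xfed40000 ))
printf 'start pfn: %x\n' $(( base >> 12 ))         # fed40
printf 'last byte: %x\n' $(( base + 5*4096 - 1 ))  # fed44fff
```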
+
+Starting and stopping the manager:
+----------------------------------
+
+The vTPM manager should be started at boot; you may wish to
+create an init script to do this.
+
+# xl create -c vtpmmgrdom.cfg
+
+Once initialization is complete you should see the following:
+INFO[VTPM]: Waiting for commands from vTPM's:
+
+To shutdown the manager you must destroy it. To avoid data corruption,
+only destroy the manager when you see the above "Waiting for commands"
+message. This ensures the disk is in a consistent state.
+
+# xl destroy vtpmmgrdom
+
+------------------------------
+VTPM AND LINUX PVM SETUP
+------------------------------
 
-You must also enable the virtual TPM to be built:
+In the following examples we will assume we have a Linux
+guest named "domu" with its associated configuration
+located at /home/user/domu. Its vtpm will be named
+domu-vtpm.
 
-In Config.mk in the Xen root directory set the line
+vTPM disk image setup:
+----------------------
 
-VTPM_TOOLS ?= y
+The vTPM requires a disk image to store its persistent
+data. The image does not require a filesystem. The image
+does not need to be large; 8 MB should be sufficient.
 
-and in
+# dd if=/dev/zero of=/home/user/domu/vtpm.img bs=8M count=1
 
-tools/vtpm/Rules.mk set the line
+vTPM config file:
+-----------------
 
-BUILD_EMULATOR = y
+The vTPM domain requires a configuration file like
+any other domain. The vTPM requires a disk image for
+storage and a TPM frontend driver to communicate
+with the manager. An example configuration is given:
 
-Now build the Xen sources from Xen's root directory:
+kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
+memory=8
+disk=["file:/home/user/domu/vtpm.img,hda,w"]
+name="domu-vtpm"
+vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
 
-make install
+The vtpm= line sets up the tpm frontend driver. The backend must be set
+to vtpmmgrdom. You are required to generate a uuid for this vtpm.
+You can use the uuidgen unix program or some other method to create a
+uuid. The uuid uniquely identifies this vtpm to the manager.
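For example, a uuid can be generated and dropped into the vtpm= line like this (uuidgen is the common tool; the /proc fallback is Linux-specific):

```shell
# Sketch: generate a uuid for the vtpm= line.
uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "vtpm=[\"backend=vtpmmgrdom,uuid=$uuid\"]"
```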
 
+If you wish to clear the vTPM data you can either recreate the
+disk image or change the uuid.
 
-Also build the initial RAM disk if necessary.
+Linux Guest config file:
+------------------------
 
-Reboot the machine with the created Xen kernel.
+The Linux guest config file needs to be modified to include
+the Linux tpmfront driver. Add the following line:
 
-Note: If you do not want any TPM-related code compiled into your
-kernel or built as module then comment all the above lines like
-this example:
-# CONFIG_TCG_TPM is not set
+vtpm=["backend=domu-vtpm"]
 
+Currently only paravirtualized guests are supported.
 
-Modifying VM Configuration files:
----------------------------------
+Launching and shut down:
+------------------------
 
-VM configuration files need to be adapted to make a TPM instance
-available to a user domain. The following VM configuration file is
-an example of how a user domain can be configured to have a TPM
-available. It works similar to making a network interface
-available to a domain.
+To launch a Linux guest with a vTPM we first have to start the vTPM domain.
 
-kernel = "/boot/vmlinuz-2.6.x"
-ramdisk = "/xen/initrd_domU/U1_ramdisk.img"
-memory = 32
-name = "TPMUserDomain0"
-vtpm = ['instance=1,backend=0']
-root = "/dev/ram0 console=tty ro"
-vif = ['backend=0']
+# xl create -c /home/user/domu/vtpm.cfg
 
-In the above configuration file the line 'vtpm = ...' provides
-information about the domain where the virtual TPM is running and
-where the TPM backend has been compiled into - this has to be 
-domain 0  at the moment - and which TPM instance the user domain
-is supposed to talk to. Note that each running VM must use a 
-different instance and that using instance 0 is NOT allowed. The
-instance parameter is taken as the desired instance number, but
-the actual instance number that is assigned to the virtual machine
-can be different. This is the case if for example that particular
-instance is already used by another virtual machine. The association
-of which TPM instance number is used by which virtual machine is
-kept in the file /var/vtpm/vtpm.db. Associations are maintained by
-a xend-internal vTPM UUID and vTPM instance number.
+After initialization is complete, you should see the following:
+Info: Waiting for frontend domain to connect..
 
-Note: If you do not want TPM functionality for your user domain simply
-leave out the 'vtpm' line in the configuration file.
+Next, launch the Linux guest
 
+# xl create -c /home/user/domu/domu.cfg
 
-Running the TPM:
-----------------
+If xen-tpmfront was compiled as a module, be sure to load it
+in the guest.
 
-To run the vTPM, the device /dev/vtpm must be available.
-Verify that 'ls -l /dev/vtpm' shows the following output:
+# modprobe xen-tpmfront
 
-crw-------  1 root root 10, 225 Aug 11 06:58 /dev/vtpm
+After the Linux domain boots and the xen-tpmfront driver is loaded,
+you should see the following on the vtpm console:
 
-If it is not available, run the following command as 'root'.
-mknod /dev/vtpm c 10 225
+Info: VTPM attached to Frontend X/Y
 
-Make sure that the vTPM is running in domain 0. To do this run the
-following:
+If you have trousers and tpm_tools installed on the guest, you can test the
+vtpm.
 
-modprobe tpmbk
+On guest:
+# tcsd (if tcsd is not running already)
+# tpm_version
 
-/usr/bin/vtpm_managerd
+The version command should return the following:
+  TPM 1.2 Version Info:
+  Chip Version:        1.2.0.7
+  Spec Level:          2
+  Errata Revision:     1
+  TPM Vendor ID:       ETHZ
+  TPM Version:         01010000
+  Manufacturer Info:   4554485a
 
-Start a user domain using the 'xm create' command. Once you are in the
-shell of the user domain, you should be able to do the following as
-user 'root':
+You should also see the command being sent to the vtpm console as well
+as the vtpm saving its state. You should see the vtpm key being
+encrypted and stored on the vtpmmgrdom console.
 
-Insert the TPM frontend into the kernel if it has been compiled as a
-kernel module.
+To shut down the guest and its vtpm, simply shut down the guest
+normally. As soon as the guest VM disconnects, the vtpm will shut itself
+down automatically.
 
-> modprobe tpm_xenu
+On guest:
+# shutdown -h now
 
-Check the status of the TPM
+You may wish to write a script to start your vtpm and guest together.
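Such a script only needs to start the vTPM domain before the guest, so that tpmback is ready when the frontend connects. A minimal sketch, using this document's example paths (the script is only generated here, not executed):

```shell
# Sketch: write a wrapper that starts the vTPM domain, then the guest.
SCRIPT="${TMPDIR:-/tmp}/start-domu.sh"
cat > "$SCRIPT" <<'EOF'
#!/bin/sh
set -e
xl create /home/user/domu/vtpm.cfg    # vTPM first: backend must be up
xl create /home/user/domu/domu.cfg    # then the guest
EOF
chmod +x "$SCRIPT"
grep -c 'xl create' "$SCRIPT"    # 2
```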
 
-> cd /sys/devices/xen/vtpm-0
-> ls
-[...]  cancel  caps   pcrs    pubek   [...]
-> cat pcrs
-PCR-00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-01: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-02: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-03: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-04: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-05: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-06: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-07: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-08: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-[...]
+------------------------------
+MORE INFORMATION
+------------------------------
 
-At this point the user domain has been successfully connected to its
-virtual TPM instance.
+See stubdom/vtpmmgr/README for more details about how
+the manager domain works, how to use it, and its command line
+parameters.
 
-For further information please read the documentation in 
-tools/vtpm_manager/README and tools/vtpm/README
+See stubdom/vtpm/README for more specifics about how vtpm-stubdom
+operates and the command line options it accepts.
 
-Stefan Berger and Employees of the Intel Corp
diff --git a/stubdom/vtpm/README b/stubdom/vtpm/README
new file mode 100644
index 0000000..11bdacb
--- /dev/null
+++ b/stubdom/vtpm/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpm-stubdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpm-stubdom is a mini-OS domain that emulates a TPM for the guest OS to
+use. It is a small wrapper around the Berlios TPM emulator
+version 0.7.4. Commands are passed from the Linux guest via the
+mini-os TPM backend driver. vTPM data is encrypted and stored via a disk image
+provided to the virtual machine. The key used to encrypt the data along
+with a hash of the vTPM's data is sent to the vTPM manager for secure storage
+and later retrieval.  The vTPM domain communicates with the manager using a
+mini-os tpm front/back device pair.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+loglevel=<LOG>: Controls the amount of logging printed to the console.
+	The possible values for <LOG> are:
+	 error
+	 info (default)
+	 debug
+
+clear: Start the Berlios emulator in "clear" mode. (default)
+
+save: Start the Berlios emulator in "save" mode.
+
+deactivated: Start the Berlios emulator in "deactivated" mode.
+	See the Berlios TPM emulator documentation for details
+	about these startup modes. For normal use, stick with clear,
+	which is the default; you should not need to specify any of these.
+
+maintcmds=<1|0>: Enable or disable the TPM maintenance commands.
+	These commands are used by tpm manufacturers and thus
+	open a security hole. They are disabled by default.
+
+hwinitpcrs=<PCRSPEC>: Initialize the virtual Platform Configuration Registers
+	(PCRs) with PCR values from the hardware TPM. Each PCR specified by
+	<PCRSPEC> will be initialized with the value of that same PCR in the
+	hardware TPM once at startup. By default all PCRs are zero initialized.
+	Valid values of <PCRSPEC> are:
+	 all: copy all pcrs
+	 none: copy no pcrs (default)
+	 <N>: copy pcr n
+	 <X-Y>: copy pcrs x to y (inclusive)
+
+	These can also be combined by comma separation, for example:
+	 hwinitpcrs=5,12-16
+	will copy pcrs 5, 12, 13, 14, 15, and 16.
+
+------------------------------
+REFERENCES
+------------------------------
+
+Berlios TPM Emulator:
+http://tpm-emulator.berlios.de/
diff --git a/stubdom/vtpmmgr/README b/stubdom/vtpmmgr/README
new file mode 100644
index 0000000..09f3958
--- /dev/null
+++ b/stubdom/vtpmmgr/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense.  All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpmmgrdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpmmgrdom implements a vTPM manager which has two major functions:
+
+ - Securely store encryption keys for each of the vTPMs
+ - Regulate access to the hardware TPM for the entire system
+
+The manager accepts commands from the vtpm-stubdom domains via the mini-os
+TPM backend driver. The vTPM manager communicates directly with the
+hardware TPM using the mini-os tpm_tis driver.
+
+
+When the manager starts for the first time it will check if the TPM
+has an owner. If the TPM is unowned, it will attempt to take ownership
+with the supplied owner_auth (see below) and then create a TPM
+storage key which will be used to secure vTPM key data. Currently the
+manager only binds vTPM keys to the disk. In the future support
+for sealing to PCRs should be added.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+owner_auth=<AUTHSPEC>: Set the owner auth of the TPM. The default
+	is the well known owner auth of all ones.
+
+srk_auth=<AUTHSPEC>: Set the SRK auth for the TPM. The default is
+	the well known srk auth of all zeroes.
+	The possible values of <AUTHSPEC> are:
+	 well-known: Use the well known auth (default)
+	 random: Randomly generate an auth
+	 hash: <HASH>: Use the given 40-character ASCII hex string
+	 text: <STR>: Use the SHA-1 hash of <STR>.
+
+tpmdriver=<DRIVER>: Which driver to use to talk to the hardware TPM.
+	Don't change this unless you know what you're doing.
+	The possible values of <DRIVER> are:
+	 tpm_tis: Use the tpm_tis driver to talk directly to the TPM.
+		The domain must have access to TPM IO memory.  (default)
+	 tpmfront: Use tpmfront to talk to the TPM. The domain must have
+		a tpmfront device setup to talk to another domain
+		which provides access to the TPM.
+
+The following options only apply to the tpm_tis driver:
+
+tpmiomem=<ADDR>: The base address of the hardware memory pages of the
+	TPM (default 0xfed40000).
+
+tpmirq=<IRQ>: The irq of the hardware TPM if using interrupts. A value of
+	"probe" can be set to probe for the irq. A value of 0
+	disables interrupts and uses polling (default 0).
+
+tpmlocality=<LOC>: Attempt to use locality <LOC> of the hardware TPM.
+	(default 0)
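Putting several of the options above together, an illustrative extra line for the manager's config might look like the following (the values shown are examples only, not recommended settings):

```
extra="owner_auth=random tpmdriver=tpm_tis tpmiomem=0xfed40000 tpmirq=probe"
```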
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Dec 06 18:25:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgg8l-0006Yo-6P; Thu, 06 Dec 2012 18:25:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgg8j-0006YN-O1
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 18:25:42 +0000
Received: from [85.158.139.83:44487] by server-7.bemta-5.messagelabs.com id
	B7/D4-23096-423E0C05; Thu, 06 Dec 2012 18:25:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1354818339!26026592!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 666 invoked from network); 6 Dec 2012 18:25:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:25:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; d="scan'208";a="16208456"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 18:25:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 18:25:39 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgg8g-0004Uy-Qd;
	Thu, 06 Dec 2012 18:25:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgg8g-0008Qy-Nu;
	Thu, 06 Dec 2012 18:25:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14574-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 18:25:38 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14574: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14574 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14574/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:27:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TggAU-0006ob-Mt; Thu, 06 Dec 2012 18:27:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xinxinjin89@gmail.com>) id 1TggAT-0006oN-97
	for Xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:27:29 +0000
Received: from [85.158.138.51:63440] by server-2.bemta-3.messagelabs.com id
	A8/98-04744-093E0C05; Thu, 06 Dec 2012 18:27:28 +0000
X-Env-Sender: xinxinjin89@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1354818444!19765769!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31819 invoked from network); 6 Dec 2012 18:27:25 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:27:25 -0000
Received: by mail-qa0-f52.google.com with SMTP id d13so891075qak.11
	for <Xen-devel@lists.xen.org>; Thu, 06 Dec 2012 10:27:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=D4KHrbvn1sQjtAMJx6IBiB+TN6LtmlUe9bTEubx9x8o=;
	b=TVHNscJeOEAMe86sRShiTkq76CpKOqHKGxF+du4GMxZH3eRh8EDVRT0Pz7Bh5Dtm2K
	Os/lIO3L7HJhxR1R0feyGsi/lw93/CPJpC1xD7eshHvYAug4vGaVTCBgWuWogFgLCUaU
	J7lk2sBXp8uW2PT5I9WAFmqhAoPLNh6Vjkvlf+z4xR1l8fDIhWiuv/xI/iLvQX+7PWW5
	yzhbw5q40HaCl6Yt9G91EyJsdjF7rM3II9XTWKF6usIN2vRIQgdgo19RdNhZ/mYU5608
	4yJVR1kkZZUCqnxS8lSdHbvmdmlkCrxmvxGVaHGzMD+CN/6spKZjGuncItm0O1PqV8Kq
	1nww==
MIME-Version: 1.0
Received: by 10.224.178.201 with SMTP id bn9mr4696869qab.55.1354818443676;
	Thu, 06 Dec 2012 10:27:23 -0800 (PST)
Received: by 10.229.104.203 with HTTP; Thu, 6 Dec 2012 10:27:23 -0800 (PST)
Date: Thu, 6 Dec 2012 10:27:23 -0800
Message-ID: <CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com>
From: Xinxin Jin <xinxinjin89@gmail.com>
To: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: [Xen-devel] Question on local_irq_save/local_irq_retore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8497987321285284371=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8497987321285284371==
Content-Type: multipart/alternative; boundary=20cf303b3afd120cc704d03340a5

--20cf303b3afd120cc704d03340a5
Content-Type: text/plain; charset=ISO-8859-1

Hi, I am a little confused about local_irq_save() and local_irq_restore(). From
the definitions below, you can see that local_irq_save() calls
local_irq_disable(). But why is there no local_irq_enable() in
local_irq_restore()?

#define local_irq_save(x)                                       \
({                                                              \
    local_save_flags(x);                                        \
    local_irq_disable();                                        \
})

#define local_irq_restore(x)                                    \
({                                                              \
    BUILD_BUG_ON(sizeof(x) != sizeof(long));                    \
    asm volatile ( "push" __OS " %0 ; popf" __OS                \
                   : : "g" (x) : "memory", "cc" );              \
})



-- 
Xinxin

--20cf303b3afd120cc704d03340a5--


--===============8497987321285284371==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8497987321285284371==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 18:27:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TggAU-0006ob-Mt; Thu, 06 Dec 2012 18:27:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xinxinjin89@gmail.com>) id 1TggAT-0006oN-97
	for Xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:27:29 +0000
Received: from [85.158.138.51:63440] by server-2.bemta-3.messagelabs.com id
	A8/98-04744-093E0C05; Thu, 06 Dec 2012 18:27:28 +0000
X-Env-Sender: xinxinjin89@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1354818444!19765769!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31819 invoked from network); 6 Dec 2012 18:27:25 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:27:25 -0000
Received: by mail-qa0-f52.google.com with SMTP id d13so891075qak.11
	for <Xen-devel@lists.xen.org>; Thu, 06 Dec 2012 10:27:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=D4KHrbvn1sQjtAMJx6IBiB+TN6LtmlUe9bTEubx9x8o=;
	b=TVHNscJeOEAMe86sRShiTkq76CpKOqHKGxF+du4GMxZH3eRh8EDVRT0Pz7Bh5Dtm2K
	Os/lIO3L7HJhxR1R0feyGsi/lw93/CPJpC1xD7eshHvYAug4vGaVTCBgWuWogFgLCUaU
	J7lk2sBXp8uW2PT5I9WAFmqhAoPLNh6Vjkvlf+z4xR1l8fDIhWiuv/xI/iLvQX+7PWW5
	yzhbw5q40HaCl6Yt9G91EyJsdjF7rM3II9XTWKF6usIN2vRIQgdgo19RdNhZ/mYU5608
	4yJVR1kkZZUCqnxS8lSdHbvmdmlkCrxmvxGVaHGzMD+CN/6spKZjGuncItm0O1PqV8Kq
	1nww==
MIME-Version: 1.0
Received: by 10.224.178.201 with SMTP id bn9mr4696869qab.55.1354818443676;
	Thu, 06 Dec 2012 10:27:23 -0800 (PST)
Received: by 10.229.104.203 with HTTP; Thu, 6 Dec 2012 10:27:23 -0800 (PST)
Date: Thu, 6 Dec 2012 10:27:23 -0800
Message-ID: <CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com>
From: Xinxin Jin <xinxinjin89@gmail.com>
To: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: [Xen-devel] Question on local_irq_save/local_irq_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8497987321285284371=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8497987321285284371==
Content-Type: multipart/alternative; boundary=20cf303b3afd120cc704d03340a5

--20cf303b3afd120cc704d03340a5
Content-Type: text/plain; charset=ISO-8859-1

Hi, I am confused about local_irq_save() and local_irq_restore(). From
the definitions, you can see that local_irq_save() calls local_irq_disable().
But why is there no local_irq_enable() in local_irq_restore()?

#define local_irq_save(x)                                   \
({                                                          \
    local_save_flags(x);                                    \
    local_irq_disable();                                    \
})

#define local_irq_restore(x)                                \
({                                                          \
    BUILD_BUG_ON(sizeof(x) != sizeof(long));                \
    asm volatile ( "push" __OS " %0 ; popf" __OS            \
                   : : "g" (x) : "memory", "cc" );          \
})



-- 
Xinxin

--20cf303b3afd120cc704d03340a5--


--===============8497987321285284371==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8497987321285284371==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 18:27:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TggAd-0006qJ-4J; Thu, 06 Dec 2012 18:27:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TggAc-0006px-A9
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:27:38 +0000
Received: from [85.158.143.35:7533] by server-2.bemta-4.messagelabs.com id
	17/B5-30861-993E0C05; Thu, 06 Dec 2012 18:27:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354818455!5366917!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23467 invoked from network); 6 Dec 2012 18:27:36 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:27:36 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so740561wib.14
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 10:27:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=8VY6a8TPJ+O/uHUPMe3QzfORHtMk7KhTlePf8ui3hF8=;
	b=Z6SsVqIoUXAhyjt7aVA4QS0UScNYnhTsQ0oRZIwDTI8rJaa49g88ikXr8xw98UdBiz
	UXT2F6O7H+GmD5xIQ1qjtaZrbPO+rqgFV3Br4Di96N1mHy5nsZd+jD41+uVrvyszTZ+s
	s67h42Vl/z4kPJQFSs5KcBU5Rws+o0FaGzp9vLojxTUecdRCEpY3+bD/8uykNtRVGrpx
	KpdGPcZ3fzi560lcrbx79ysMfa3DT0Nts5RDzdA3VlUu1PPF6zULwjsGWSCWQNf3t/aL
	MNKy7tTUb11DmITaQxOKUEX0XdvbaceRbafUU1LACzFbrDDM75/0zmTrS3UT5wZDl81V
	n0/A==
Received: by 10.180.99.5 with SMTP id em5mr4059873wib.8.1354818455688;
	Thu, 06 Dec 2012 10:27:35 -0800 (PST)
Received: from [192.168.1.88]
	(host86-183-153-239.range86-183.btcentralplus.com. [86.183.153.239])
	by mx.google.com with ESMTPS id u6sm12126919wif.2.2012.12.06.10.27.33
	(version=SSLv3 cipher=OTHER); Thu, 06 Dec 2012 10:27:34 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Thu, 06 Dec 2012 18:27:30 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	Jan Beulich <JBeulich@suse.com>
Message-ID: <CCE69412.4677F%keir.xen@gmail.com>
Thread-Topic: [PATCH] xen: centralize accounting for domain tot_pages
Thread-Index: Ac3T31m3LWPKryvzHUKiP3AfNdhG0g==
In-Reply-To: <cd4561a9-08d0-4fe5-bd56-7a0ff13d4501@default>
Mime-version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: centralize accounting for domain
	tot_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I will check these two patches in.

 -- Keir

On 06/12/2012 17:24, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> ping?
> 
>> -----Original Message-----
>> From: Dan Magenheimer
>> Sent: Wednesday, November 28, 2012 2:50 PM
>> To: Keir Fraser; Jan Beulich
>> Cc: xen-devel@lists.xen.org; Konrad Wilk; Zhigang Wang
>> Subject: [PATCH] xen: centralize accounting for domain tot_pages
>> 
>> xen: centralize accounting for domain tot_pages
>> 
>> Provide and use a common function for all adjustments to a
>> domain's tot_pages counter in anticipation of future and/or
>> out-of-tree patches that must adjust related counters
>> atomically.
>> 
>> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
>> 
>>  arch/x86/mm.c             |    4 ++--
>>  arch/x86/mm/mem_sharing.c |    4 ++--
>>  common/grant_table.c      |    2 +-
>>  common/memory.c           |    2 +-
>>  common/page_alloc.c       |   10 ++++++++--
>>  include/xen/mm.h          |    2 ++
>>  6 files changed, 16 insertions(+), 8 deletions(-)
>> 
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index ab94b02..3887ca6 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -3842,7 +3842,7 @@ int donate_page(
>>      {
>>          if ( d->tot_pages >= d->max_pages )
>>              goto fail;
>> -        d->tot_pages++;
>> +        domain_adjust_tot_pages(d, 1);
>>      }
>> 
>>      page->count_info = PGC_allocated | 1;
>> @@ -3892,7 +3892,7 @@ int steal_page(
>>      } while ( (y = cmpxchg(&page->count_info, x, x | 1)) != x );
>> 
>>      /* Unlink from original owner. */
>> -    if ( !(memflags & MEMF_no_refcount) && !--d->tot_pages )
>> +    if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
>>          drop_dom_ref = 1;
>>      page_list_del(page, &d->page_list);
>> 
>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>> index 5103285..e91aac5 100644
>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -639,7 +639,7 @@ static int page_make_sharable(struct domain *d,
>>      }
>> 
>>      page_set_owner(page, dom_cow);
>> -    d->tot_pages--;
>> +    domain_adjust_tot_pages(d, -1);
>>      drop_dom_ref = (d->tot_pages == 0);
>>      page_list_del(page, &d->page_list);
>>      spin_unlock(&d->page_alloc_lock);
>> @@ -680,7 +680,7 @@ static int page_make_private(struct domain *d, struct
>> page_info *page)
>>      ASSERT(page_get_owner(page) == dom_cow);
>>      page_set_owner(page, d);
>> 
>> -    if ( d->tot_pages++ == 0 )
>> +    if ( domain_adjust_tot_pages(d, 1) == 1 )
>>          get_domain(d);
>>      page_list_add_tail(page, &d->page_list);
>>      spin_unlock(&d->page_alloc_lock);
>> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
>> index 7912769..ca8d861 100644
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -1656,7 +1656,7 @@ gnttab_transfer(
>>          }
>> 
>>          /* Okay, add the page to 'e'. */
>> -        if ( unlikely(e->tot_pages++ == 0) )
>> +        if ( unlikely(domain_adjust_tot_pages(e, 1) == 1) )
>>              get_knownalive_domain(e);
>>          page_list_add_tail(page, &e->page_list);
>>          page_set_owner(page, e);
>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>> index 83e2666..9842ea9 100644
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -454,7 +454,7 @@ static long
>> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>>                               (j * (1UL << exch.out.extent_order)));
>> 
>>                  spin_lock(&d->page_alloc_lock);
>> -                d->tot_pages -= dec_count;
>> +                domain_adjust_tot_pages(d, -dec_count);
>>                  drop_dom_ref = (dec_count && !d->tot_pages);
>>                  spin_unlock(&d->page_alloc_lock);
>> 
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index 15ebc66..e273bb7 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -239,6 +239,12 @@ static long midsize_alloc_zone_pages;
>> 
>>  static DEFINE_SPINLOCK(heap_lock);
>> 
>> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
>> +{
>> +    ASSERT(spin_is_locked(&d->page_alloc_lock));
>> +    return d->tot_pages += pages;
>> +}
>> +
>>  static unsigned long init_node_heap(int node, unsigned long mfn,
>>                                      unsigned long nr, bool_t *use_tail)
>>  {
>> @@ -1291,7 +1297,7 @@ int assign_pages(
>>          if ( unlikely(d->tot_pages == 0) )
>>              get_knownalive_domain(d);
>> 
>> -        d->tot_pages += 1 << order;
>> +        domain_adjust_tot_pages(d, 1 << order);
>>      }
>> 
>>      for ( i = 0; i < (1 << order); i++ )
>> @@ -1375,7 +1381,7 @@ void free_domheap_pages(struct page_info *pg, unsigned
>> int order)
>>              page_list_del2(&pg[i], &d->page_list, &d->arch.relmem_list);
>>          }
>> 
>> -        d->tot_pages -= 1 << order;
>> +        domain_adjust_tot_pages(d, -(1 << order));
>>          drop_dom_ref = (d->tot_pages == 0);
>> 
>>          spin_unlock_recursive(&d->page_alloc_lock);
>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>> index 64a0cc1..00b1915 100644
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -48,6 +48,8 @@ void free_xenheap_pages(void *v, unsigned int order);
>>  #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
>>  #define free_xenheap_page(v) (free_xenheap_pages(v,0))
>> 
>> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
>> +
>>  /* Domain suballocator. These functions are *not* interrupt-safe.*/
>>  void init_domheap_pages(paddr_t ps, paddr_t pe);
>>  struct page_info *alloc_domheap_pages(



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 18:35:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:35:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TggIT-0007qo-Nf; Thu, 06 Dec 2012 18:35:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TggIS-0007qh-6X
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:35:44 +0000
Received: from [85.158.138.51:51234] by server-2.bemta-3.messagelabs.com id
	F8/DE-04744-F75E0C05; Thu, 06 Dec 2012 18:35:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354818938!27797591!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26726 invoked from network); 6 Dec 2012 18:35:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:35:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; 
	d="scan'208,217";a="216654234"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 18:35:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 13:35:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TggHv-0005JW-6q	for
	xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:35:11 +0000
Message-ID: <50C0E55F.1040903@citrix.com>
Date: Thu, 6 Dec 2012 18:35:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com>
In-Reply-To: <CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com>
X-Enigmail-Version: 1.4.6
Subject: Re: [Xen-devel] Question on local_irq_save/local_irq_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1866579031654875905=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1866579031654875905==
Content-Type: multipart/alternative;
	boundary="------------020601010009060905050706"

--------------020601010009060905050706
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 06/12/12 18:27, Xinxin Jin wrote:
> Hi, I am confused about local_irq_save() and local_irq_restore().
> From the definitions, you can see that local_irq_save()
> calls local_irq_disable(). But why is there no local_irq_enable() in
> local_irq_restore()?
>
> #define local_irq_save(x)                                        
> ({                                                               
>     local_save_flags(x);                                         
>     local_irq_disable();                                         
> })  
>
> #define local_irq_restore(x)                                     
> ({                                                               
>     BUILD_BUG_ON(sizeof(x) != sizeof(long));                     
>     asm volatile ( "push" __OS " %0 ; popf" __OS                 
>                    : : "g" (x) : "memory", "cc" );               
> })      
>
>

The interrupt flag is part of the rflags register.

irq_save "saves" the current state of the flags register in the
parameter you give it, then disables interrupt, and irq_restore puts
flags back to the saved state.

As a result, you can nest calls like:

// Lets define interrupts to currently be enabled
irq_save()
  // Interrupts are disabled
  irq_save()
    // Interrupts are disabled
  irq_restore()
  // Interrupts are still disabled
irq_restore()
// Interrupts are now enabled.


save/restore pairs are for code where you do not know whether interrupts
are enabled or disabled at the entry point, but need to be certain that
interrupts are disabled between the pair, and also certain that you
don't enable interrupts early because of nesting.

~Andrew

>
> -- 
> Xinxin

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------020601010009060905050706
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    On 06/12/12 18:27, Xinxin Jin wrote:<br>
    <blockquote
cite="mid:CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com"
      type="cite">Hi, I have some confusion on local_irq_save() and
      local_irq_restore(). From the definitions,&nbsp;<span
        style="font-family:arial,sans-serif;font-size:12.800000190734863px">you
        can see that&nbsp;</span><span
        style="font-family:arial,sans-serif;font-size:12.800000190734863px">local_irq_save()
        calls&nbsp;</span><span
        style="font-family:arial,sans-serif;font-size:12.800000190734863px">local_irq_disable().
        But why there is no local_irq_enable() in local_irq_restore?</span>&nbsp;&nbsp;
      <div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px"><br>
        </div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">#define
          local_irq_save(x) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">({
          &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">&nbsp;
          &nbsp; local_save_flags(x); &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
          &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">&nbsp;
          &nbsp; local_irq_disable(); &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
          &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">})
          &nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px"><br>
        </div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">#define
          local_irq_restore(x) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">({
          &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">&nbsp;
          &nbsp; BUILD_BUG_ON(sizeof(x) != sizeof(long)); &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
          &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">&nbsp;
          &nbsp; asm volatile ( "push" __OS " %0 ; popf" __OS &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
          &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">
          &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;: : "g" (x) : "memory", "cc" ); &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;
          &nbsp;&nbsp;</div>
        <div
          style="font-family:arial,sans-serif;font-size:12.800000190734863px">})
          &nbsp; &nbsp; &nbsp;</div>
      </div>
      <div
        style="font-family:arial,sans-serif;font-size:12.800000190734863px">
        <br>
      </div>
      <br clear="all">
    </blockquote>
    <br>
    The interrupt flag is part of the rflags register.<br>
    <br>
    irq_save "saves" the current state of the flags register in the
    parameter you give it, then disables interrupt, and irq_restore puts
    flags back to the saved state.<br>
    <br>
    As a result, you can nest calls like:<br>
    <br>
    // Lets define interrupts to currently be enabled<br>
    irq_save()<br>
    &nbsp; // Interrupts are disabled<br>
    &nbsp; irq_save()<br>
    &nbsp;&nbsp;&nbsp; // Interrupts are disabled<br>
    &nbsp; irq_restore()<br>
    &nbsp; // Interrupts are still disabled<br>
    irq_restore()<br>
    // Interrupts are now enabled.<br>
    <br>
    <br>
    save/restore pairs are for code where you do not know whether
    interrupts are enabled or disabled at the entry point, but need to
    be certain that interrupts are disabled between the pair, and also
    certain that you don't enable interrupts early because of nesting.<br>
    <br>
    ~Andrew<br>
    <br>
    <blockquote
cite="mid:CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com"
      type="cite">
      <div><br>
      </div>
      -- <br>
      Xinxin<br>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, <a class="moz-txt-link-freetext" href="http://www.citrix.com">http://www.citrix.com</a></pre>
  </body>
</html>

--------------020601010009060905050706--


--===============1866579031654875905==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1866579031654875905==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 18:35:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 18:35:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TggIT-0007qo-Nf; Thu, 06 Dec 2012 18:35:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TggIS-0007qh-6X
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:35:44 +0000
Received: from [85.158.138.51:51234] by server-2.bemta-3.messagelabs.com id
	F8/DE-04744-F75E0C05; Thu, 06 Dec 2012 18:35:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1354818938!27797591!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26726 invoked from network); 6 Dec 2012 18:35:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 18:35:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,230,1355097600"; 
	d="scan'208,217";a="216654234"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 18:35:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 13:35:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TggHv-0005JW-6q	for
	xen-devel@lists.xen.org; Thu, 06 Dec 2012 18:35:11 +0000
Message-ID: <50C0E55F.1040903@citrix.com>
Date: Thu, 6 Dec 2012 18:35:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com>
In-Reply-To: <CAJJWZcyFoWA9V_10qPRcEDo+Jm0jfQu7ktE5iyO25k4BntrDbg@mail.gmail.com>
X-Enigmail-Version: 1.4.6
Subject: Re: [Xen-devel] Question on local_irq_save/local_irq_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1866579031654875905=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1866579031654875905==
Content-Type: multipart/alternative;
	boundary="------------020601010009060905050706"

--------------020601010009060905050706
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 06/12/12 18:27, Xinxin Jin wrote:
> Hi, I have some confusion on local_irq_save() and local_irq_restore().
> From the definitions, you can see that local_irq_save()
> calls local_irq_disable(). But why is there no local_irq_enable() in
> local_irq_restore?  
>
> #define local_irq_save(x)                                        
> ({                                                               
>     local_save_flags(x);                                         
>     local_irq_disable();                                         
> })  
>
> #define local_irq_restore(x)                                     
> ({                                                               
>     BUILD_BUG_ON(sizeof(x) != sizeof(long));                     
>     asm volatile ( "push" __OS " %0 ; popf" __OS                 
>                    : : "g" (x) : "memory", "cc" );               
> })      
>
>

The interrupt flag is part of the rflags register.

irq_save "saves" the current state of the flags register in the
parameter you give it, then disables interrupts; irq_restore puts the
flags back to the saved state.

As a result, you can nest calls like:

// Assume interrupts are currently enabled
irq_save()
  // Interrupts are disabled
  irq_save()
    // Interrupts are disabled
  irq_restore()
  // Interrupts are still disabled
irq_restore()
// Interrupts are now enabled.


Save/restore pairs are for code that does not know whether interrupts
are enabled or disabled at its entry point, but must be certain that
interrupts are disabled between the pair, and equally certain that
nesting does not re-enable them early.

~Andrew

>
> -- 
> Xinxin

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------020601010009060905050706--


--===============1866579031654875905==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1866579031654875905==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 19:04:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 19:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TggkQ-0001aS-1Y; Thu, 06 Dec 2012 19:04:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TggkO-0001aM-01
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 19:04:36 +0000
Received: from [85.158.139.211:52219] by server-16.bemta-5.messagelabs.com id
	D5/30-21311-34CE0C05; Thu, 06 Dec 2012 19:04:35 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354820674!19413338!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA0ODgyNDg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA0ODgyNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13118 invoked from network); 6 Dec 2012 19:04:34 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Dec 2012 19:04:34 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2c7ofw==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-090-003.pools.arcor-ip.net [84.57.90.3])
	by smtp.strato.de (josoe mo10) (RZmta 31.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 90461doB6J2EAh ;
	Thu, 6 Dec 2012 20:04:23 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 635F71884C; Thu,  6 Dec 2012 20:04:23 +0100 (CET)
Date: Thu, 6 Dec 2012 20:04:23 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121206190423.GA27952@aepfle.de>
References: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
	<50BF2E3802000078000AE162@nat28.tlf.novell.com>
	<20121206162304.GA3989@aepfle.de>
	<50C0DDCA02000078000AEBA9@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C0DDCA02000078000AEBA9@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5558 (2012-10-16)
Cc: konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
 multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, Jan Beulich wrote:

> >>> On 06.12.12 at 17:23, Olaf Hering <olaf@aepfle.de> wrote:
> > On Wed, Dec 05, Jan Beulich wrote:
> > 
> >> >>> On 05.12.12 at 11:01, Olaf Hering <olaf@aepfle.de> wrote:
> >> > backend_changed might be called multiple times, which will leak
> >> > be->mode. free the previous value before storing the current mode value.
> >> 
> >> As said before - this is one possible route to take. But did you
> >> consider at all the alternative of preventing the function from
> >> getting called more than once for a given device? As also said
> >> before, I think that would have other bad effects, and hence
> >> should be preferred (and would likely also result in a smaller
> >> patch).
> > 
> > Maybe it could be done like this, adding a flag to the backend device
> > and exit early if it's called twice.
> 
> Maybe, but it looks odd to me. But then again I had hoped Konrad
> would have an opinion here...

Looking at this some more: if backend_changed is supposed to be called
exactly once, then the major/minor checks can be removed, because
be->major and be->minor will always be zero, like this:


 drivers/block/xen-blkback/xenbus.c | 68 ++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 36 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index a6585a4..5ca77c3 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -28,6 +28,7 @@ struct backend_info {
 	unsigned		major;
 	unsigned		minor;
 	char			*mode;
+	unsigned		alive;
 };
 
 static struct kmem_cache *xen_blkif_cachep;
@@ -502,10 +503,14 @@ static void backend_changed(struct xenbus_watch *watch,
 		= container_of(watch, struct backend_info, backend_watch);
 	struct xenbus_device *dev = be->dev;
 	int cdrom = 0;
-	char *device_type;
+	char *device_type, *p;
+	long handle;
 
 	DPRINTK("");
 
+	if (be->alive)
+		return;
+
 	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
 			   &major, &minor);
 	if (XENBUS_EXIST_ERR(err)) {
@@ -521,12 +526,7 @@ static void backend_changed(struct xenbus_watch *watch,
 		return;
 	}
 
-	if ((be->major || be->minor) &&
-	    ((be->major != major) || (be->minor != minor))) {
-		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
-			be->major, be->minor, major, minor);
-		return;
-	}
+	be->alive = 1;
 
 	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
 	if (IS_ERR(be->mode)) {
@@ -542,39 +542,35 @@ static void backend_changed(struct xenbus_watch *watch,
 		kfree(device_type);
 	}
 
-	if (be->major == 0 && be->minor == 0) {
-		/* Front end dir is a number, which is used as the handle. */
-
-		char *p = strrchr(dev->otherend, '/') + 1;
-		long handle;
-		err = strict_strtoul(p, 0, &handle);
-		if (err)
-			return;
-
-		be->major = major;
-		be->minor = minor;
+	/* Front end dir is a number, which is used as the handle. */
+	p = strrchr(dev->otherend, '/') + 1;
+	err = strict_strtoul(p, 0, &handle);
+	if (err)
+		return;
 
-		err = xen_vbd_create(be->blkif, handle, major, minor,
-				 (NULL == strchr(be->mode, 'w')), cdrom);
-		if (err) {
-			be->major = 0;
-			be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating vbd structure");
-			return;
-		}
+	be->major = major;
+	be->minor = minor;
 
-		err = xenvbd_sysfs_addif(dev);
-		if (err) {
-			xen_vbd_free(&be->blkif->vbd);
-			be->major = 0;
-			be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-			return;
-		}
+	err = xen_vbd_create(be->blkif, handle, major, minor,
+			 (NULL == strchr(be->mode, 'w')), cdrom);
+	if (err) {
+		be->major = 0;
+		be->minor = 0;
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+		return;
+	}
 
-		/* We're potentially connected now */
-		xen_update_blkif_status(be->blkif);
+	err = xenvbd_sysfs_addif(dev);
+	if (err) {
+		xen_vbd_free(&be->blkif->vbd);
+		be->major = 0;
+		be->minor = 0;
+		xenbus_dev_fatal(dev, err, "creating sysfs entries");
+		return;
 	}
+
+	/* We're potentially connected now */
+	xen_update_blkif_status(be->blkif);
 }
 
 
-- 
1.8.0.1



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 19:42:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 19:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TghKH-0002wO-Rg; Thu, 06 Dec 2012 19:41:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cutter409@gmail.com>) id 1TghKG-0002wJ-G5
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 19:41:40 +0000
Received: from [85.158.143.35:5525] by server-1.bemta-4.messagelabs.com id
	66/92-28401-3F4F0C05; Thu, 06 Dec 2012 19:41:39 +0000
X-Env-Sender: cutter409@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354822897!12870964!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19629 invoked from network); 6 Dec 2012 19:41:38 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 19:41:38 -0000
Received: by mail-la0-f43.google.com with SMTP id z14so6626220lag.30
	for <xen-devel@lists.xensource.com>;
	Thu, 06 Dec 2012 11:41:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=8e9qVpTFMRAuVqEvWuf7RGrZhumnrZqodTq3teYtOb4=;
	b=d+ZKQGOH10+xP5S1DdkLz3n0aKtL029plgRGh/QkcKE14QRB58sfC13b12fG7w7WlB
	pdZkl0vJO25qA3dTDIq5ltSZ221lvaoBKuVFbo4cW9853xzXI8T5d4z7UKP2RqkMUuua
	ccGD8sSufiNebD/NaGkJ4407VRCDYXRkGu3n3g2BaFX89UwFg9fKMDBBSp8wXLXsMqYi
	uvJpmbT++/XJbj31jc0PB5jtRzcYKz8OWek3Z6+IWSHAt643/4LxRhev6w2mlKcgl05p
	l6gxpCKGBjPeVBDousSnu9mDYSGn0Udc1klUeFTjL4mVqEmPXNXlToDgrTP1HVEbwFYf
	N6pw==
MIME-Version: 1.0
Received: by 10.152.111.166 with SMTP id ij6mr2980082lab.38.1354822897223;
	Thu, 06 Dec 2012 11:41:37 -0800 (PST)
Received: by 10.112.138.1 with HTTP; Thu, 6 Dec 2012 11:41:37 -0800 (PST)
Date: Thu, 6 Dec 2012 14:41:37 -0500
Message-ID: <CAG4Ohu92oPZyt2WvVH_8MY5+SOPL_fcsMT8UWuNDJFapqAs0EA@mail.gmail.com>
From: Cutter 409 <cutter409@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] xc_map_foreign_* from DomU?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0414669426488928955=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0414669426488928955==
Content-Type: multipart/alternative; boundary=f46d040891f585c9b504d034490a

--f46d040891f585c9b504d034490a
Content-Type: text/plain; charset=ISO-8859-1

Is there any way to map memory from one DomU to another?

In our case, we're trying to use XenClient XT, and we've tried setting one
domain's target as another, and we've also tried it via a service VM.

It seems as though the DomU kernel is what's blocking us, even if we were
to patch the hypervisor in some way.

Is there any better approach?

Thanks!

--f46d040891f585c9b504d034490a
Content-Type: text/html; charset=ISO-8859-1

Is there any way to map memory from one DomU to another?<br><br>In our case, we&#39;re trying to use XenClient XT, and we&#39;ve tried setting one domain&#39;s target as another, and we&#39;ve also tried it via a service VM.<br>
<br>It seems as though the DomU kernel is what&#39;s blocking us, even if we were to patch the hypervisor in some way. <br><br>Is there any better approach?<br><br>Thanks!<br>

--f46d040891f585c9b504d034490a--


--===============0414669426488928955==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0414669426488928955==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 20:43:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 20:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgiHM-000610-W7; Thu, 06 Dec 2012 20:42:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TgiHK-00060v-LH
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 20:42:43 +0000
Received: from [85.158.139.211:18413] by server-12.bemta-5.messagelabs.com id
	B7/EA-02886-14301C05; Thu, 06 Dec 2012 20:42:41 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354826544!19397293!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDE2MTU3Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13433 invoked from network); 6 Dec 2012 20:42:25 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 20:42:25 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354826545; x=1386362545;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:content-transfer-encoding:in-reply-to;
	bh=vh8o7/bZ79vnuwzauwvYqFMJ3K66PwaziALC3coOxIQ=;
	b=AjeUKVsMKe9ny3T3N2uH3VDn8g3zAYyl08flERs+3H3wJSVQfzf+/gpt
	ezswQEUMeTjUYfaiwTlEbRYJ1lcGmHSwejS+uUyuCvQgiHiNZJHmbSZ69
	tpwNPLR/5fLFIO8VpeK4DuQwNXaSXo8iug2fex1L1lUiooKMsLCSIOPjV M=;
X-IronPort-AV: E=McAfee;i="5400,1158,6918"; a="491599028"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 06 Dec 2012 20:42:21 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB6KgKd0020420
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 6 Dec 2012 20:42:21 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.118) by
	ex10-hub-31005.ant.amazon.com (10.185.176.12) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 6 Dec 2012 12:42:14 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 06 Dec 2012 12:42:14 -0800
Date: Thu, 6 Dec 2012 12:42:14 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121206204212.GB3482@u109add4315675089e695.ant.amazon.com>
References: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Roger Pau Monné <roger.pau@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] docs: check for documentation generation
 tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 11:06:54AM +0000, Ian Campbell wrote:
> It is sometimes hard to discover all the optional tools that should be
> on a system to build all available Xen documentation. By checking for
> documentation generation tools at ./configure time and displaying a
> warning, Xen packagers will more easily learn about new optional build
> dependencies, like markdown, when they are introduced.
>
> Based on a patch by Matt Wilson. Changed to use a separate
> docs/configure which is called from the top-level in the same manner
> as stubdoms.
>
> Rerun autogen.sh and "git add docs/configure" after applying this patch.
>
> Signed-off-by: Matt Wilson <msw@amazon.com>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
> Cc: Roger Pau Monné <roger.pau@citrix.com>

For the change to introduce docs/configure:

Acked-by: Matt Wilson <msw@amazon.com>

> ---
> Applies on top of Matthew's "Add autoconf to stubdom" and "Add a top
> level configure script".
> ---
>  .gitignore            |    1 +
>  .hgignore             |    1 +
>  README                |    2 +-
>  autogen.sh            |   15 ++++++---
>  config/Docs.mk.in     |   20 +++++++++++
>  configure             |    4 +-
>  configure.ac          |    2 +-
>  docs/Docs.mk          |   12 -------
>  docs/Makefile         |   86 +++++++++++++++++++++++++++++++++---------------
>  docs/configure.ac     |   27 +++++++++++++++
>  docs/figs/Makefile    |    2 +-
>  docs/xen-api/Makefile |    7 +++-
>  m4/docs_tool.m4       |   17 ++++++++++
>  13 files changed, 146 insertions(+), 50 deletions(-)
>  create mode 100644 config/Docs.mk.in
>  delete mode 100644 docs/Docs.mk
>  create mode 100644 docs/configure.ac
>  create mode 100644 m4/docs_tool.m4
>
> diff --git a/.gitignore b/.gitignore
> index 46ce63a..a4cdd6c 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -120,6 +120,7 @@ config.status
>  config.cache
>  config/Tools.mk
>  config/Stubdom.mk
> +config/Docs.mk
>  tools/blktap2/daemon/blktapctrl
>  tools/blktap2/drivers/img2qcow
>  tools/blktap2/drivers/lock-util
> diff --git a/.hgignore b/.hgignore
> index 0392a56..da3a7e6 100644
> --- a/.hgignore
> +++ b/.hgignore
> @@ -312,6 +312,7 @@
>  ^tools/config\.cache$
>  ^config/Tools\.mk$
>  ^config/Stubdom\.mk$
> +^config/Docs\.mk$
>  ^xen/\.banner.*$
>  ^xen/BLOG$
>  ^xen/System.map$
> diff --git a/README b/README
> index f5d5530..88401f7 100644
> --- a/README
> +++ b/README
> @@ -57,7 +57,6 @@ provided by your OS distributor:
>      * GNU gettext
>      * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
>      * ACPI ASL compiler (iasl)
> -    * markdown
>
>  In addition to the above there are a number of optional build
>  prerequisites. Omitting these will cause the related features to be
> @@ -65,6 +64,7 @@ disabled at compile time:
>      * Development install of Ocaml (e.g. ocaml-nox and
>        ocaml-findlib). Required to build ocaml components which
>        includes the alternative ocaml xenstored.
> +    * markdown
>
>  Second, you need to acquire a suitable kernel for use in domain 0. If
>  possible you should use a kernel provided by your OS distributor. If
> diff --git a/autogen.sh b/autogen.sh
> index 1456d94..b5c9688 100755
> --- a/autogen.sh
> +++ b/autogen.sh
> @@ -1,7 +1,12 @@
>  #!/bin/sh -e
>  autoconf
> -cd tools
> -autoconf
> -autoheader
> -cd ../stubdom
> -autoconf
> +( cd tools
> +  autoconf
> +  autoheader
> +)
> +( cd stubdom
> +  autoconf
> +)
> +( cd docs
> +  autoconf
> +)
> diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> new file mode 100644
> index 0000000..b6ab6fe
> --- /dev/null
> +++ b/config/Docs.mk.in
> @@ -0,0 +1,20 @@
> +# Prefix and install folder
> +prefix              := @prefix@
> +PREFIX              := $(prefix)
> +exec_prefix         := @exec_prefix@
> +libdir              := @libdir@
> +LIBDIR              := $(libdir)
> +
> +# Tools
> +PS2PDF              := @PS2PDF@
> +DVIPS               := @DVIPS@
> +LATEX               := @LATEX@
> +FIG2DEV             := @FIG2DEV@
> +LATEX2HTML          := @LATEX2HTML@
> +DOXYGEN             := @DOXYGEN@
> +POD2MAN             := @POD2MAN@
> +POD2HTML            := @POD2HTML@
> +POD2TEXT            := @POD2TEXT@
> +DOT                 := @DOT@
> +NEATO               := @NEATO@
> +MARKDOWN            := @MARKDOWN@
> diff --git a/configure b/configure
> index 649708f..a307f3a 100755
> --- a/configure
> +++ b/configure
> @@ -606,7 +606,7 @@ enable_option_checking
>        ac_precious_vars='build_alias
>  host_alias
>  target_alias'
> -ac_subdirs_all='tools stubdom'
> +ac_subdirs_all='tools docs stubdom'
>
>  # Initialize some variables set by options.
>  ac_init_help=
> @@ -1675,7 +1675,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
>
>
>
> -subdirs="$subdirs tools stubdom"
> +subdirs="$subdirs tools docs stubdom"
>
>
>  cat >confcache <<\_ACEOF
> diff --git a/configure.ac b/configure.ac
> index 0497d97..637b35b 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -6,6 +6,6 @@ AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
>      [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>  AC_CONFIG_SRCDIR([./xen/common/kernel.c])
>
> -AC_CONFIG_SUBDIRS([tools stubdom])
> +AC_CONFIG_SUBDIRS([tools docs stubdom])
>
>  AC_OUTPUT()
> diff --git a/docs/Docs.mk b/docs/Docs.mk
> deleted file mode 100644
> index aa653d3..0000000
> --- a/docs/Docs.mk
> +++ /dev/null
> @@ -1,12 +0,0 @@
> -PS2PDF		:= ps2pdf
> -DVIPS		:= dvips
> -LATEX		:= latex
> -FIG2DEV		:= fig2dev
> -LATEX2HTML	:= latex2html
> -DOXYGEN		:= doxygen
> -POD2MAN		:= pod2man
> -POD2HTML	:= pod2html
> -POD2TEXT	:= pod2text
> -DOT		:= dot
> -NEATO		:= neato
> -MARKDOWN	:= markdown
> diff --git a/docs/Makefile b/docs/Makefile
> index 03f141a..03621f7 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -2,7 +2,7 @@
>
>  XEN_ROOT=$(CURDIR)/..
>  include $(XEN_ROOT)/Config.mk
> -include $(XEN_ROOT)/docs/Docs.mk
> +include $(XEN_ROOT)/config/Docs.mk
>
>  VERSION		= xen-unstable
>
> @@ -26,10 +26,12 @@ all: build
>
>  .PHONY: build
>  build: html txt man-pages figs
> -	@if which $(DOT) 1>/dev/null 2>/dev/null ; then              \
> -	$(MAKE) -C xen-api build ; else                              \
> -        echo "Graphviz (dot) not installed; skipping xen-api." ; fi
> +ifdef DOT
> +	$(MAKE) -C xen-api build
>  	rm -f *.aux *.dvi *.bbl *.blg *.glo *.idx *.ilg *.log *.ind *.toc
> +else
> +	@echo "Graphviz (dot) not installed; skipping xen-api."
> +endif
>
>  .PHONY: dev-docs
>  dev-docs: python-dev-docs
> @@ -39,30 +41,37 @@ html: $(DOC_HTML) html/index.html
>
>  .PHONY: txt
>  txt:
> -	@if which $(POD2TEXT) 1>/dev/null 2>/dev/null; then \
> -	$(MAKE) $(DOC_TXT); else              \
> -	echo "pod2text not installed; skipping text outputs."; fi
> +ifdef POD2TEXT
> +	$(MAKE) $(DOC_TXT)
> +else
> +	@echo "pod2text not installed; skipping text outputs."
> +endif
>
>  .PHONY: figs
>  figs:
> -	@set -e ; if which $(FIG2DEV) 1>/dev/null 2>/dev/null; then \
> -	set -x; $(MAKE) -C figs ; else                   \
> -	echo "fig2dev (transfig) not installed; skipping figs."; fi
> +ifdef FIG2DEV
> +	set -x; $(MAKE) -C figs
> +else
> +	@echo "fig2dev (transfig) not installed; skipping figs."
> +endif
>
>  .PHONY: python-dev-docs
>  python-dev-docs:
> -	@mkdir -v -p api/tools/python
> -	@set -e ; if which $(DOXYGEN) 1>/dev/null 2>/dev/null; then \
> -        echo "Running doxygen to generate Python tools APIs ... "; \
> -	$(DOXYGEN) Doxyfile;                                       \
> -	$(MAKE) -C api/tools/python/latex ; else                   \
> -        echo "Doxygen not installed; skipping python-dev-docs."; fi
> +ifdef DOXYGEN
> +	@echo "Running doxygen to generate Python tools APIs ... "
> +	mkdir -v -p api/tools/python
> +	$(DOXYGEN) Doxyfile && $(MAKE) -C api/tools/python/latex
> +else
> +	@echo "Doxygen not installed; skipping python-dev-docs."
> +endif
>
>  .PHONY: man-pages
>  man-pages:
> -	@if which $(POD2MAN) 1>/dev/null 2>/dev/null; then \
> -	$(MAKE) $(DOC_MAN1) $(DOC_MAN5); else              \
> -	echo "pod2man not installed; skipping man-pages."; fi
> +ifdef POD2MAN
> +	$(MAKE) $(DOC_MAN1) $(DOC_MAN5)
> +else
> +	@echo "pod2man not installed; skipping man-pages."
> +endif
>
>  man1/%.1: man/%.pod.1 Makefile
>  	$(INSTALL_DIR) $(@D)
> @@ -87,6 +96,7 @@ clean:
>
>  .PHONY: distclean
>  distclean: clean
> +	rm -rf ../config/Docs.mk config.log config.status autom4te.cache
>
>  .PHONY: install
>  install: all
> @@ -104,30 +114,40 @@ html/index.html: $(DOC_HTML) ./gen-html-index INDEX
>  	perl -w -- ./gen-html-index -i INDEX html $(DOC_HTML)
>
>  html/%.html: %.markdown
> -	@$(INSTALL_DIR) $(@D)
> -	@set -e ; if which $(MARKDOWN) 1>/dev/null 2>/dev/null; then \
> -	echo "Running markdown to generate $*.html ... "; \
> +	$(INSTALL_DIR) $(@D)
> +ifdef MARKDOWN
> +	@echo "Running markdown to generate $*.html ... "
>  	$(MARKDOWN) $< > $@.tmp ; \
> -	$(call move-if-changed,$@.tmp,$@) ; else \
> -	echo "markdown not installed; skipping $*.html."; fi
> +	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "markdown not installed; skipping $*.html."
> +endif
>
>  html/%.txt: %.txt
> -	@$(INSTALL_DIR) $(@D)
> +	$(INSTALL_DIR) $(@D)
>  	cp $< $@
>
>  html/man/%.1.html: man/%.pod.1 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2HTML
>  	$(POD2HTML) --infile=$< --outfile=$@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "pod2html not installed; skipping $<."
> +endif
>
>  html/man/%.5.html: man/%.pod.5 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2HTML
>  	$(POD2HTML) --infile=$< --outfile=$@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "pod2html not installed; skipping $<."
> +endif
>
>  html/hypercall/index.html: ./xen-headers
>  	rm -rf $(@D)
> -	@$(INSTALL_DIR) $(@D)
> +	$(INSTALL_DIR) $(@D)
>  	./xen-headers -O $(@D) \
>  		-T 'arch-x86_64 - Xen public headers' \
>  		-X arch-ia64 -X arch-x86_32 -X xen-x86_32 -X arch-arm \
> @@ -147,11 +167,23 @@ txt/%.txt: %.markdown
>
>  txt/man/%.1.txt: man/%.pod.1 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2TEXT
>  	$(POD2TEXT) $< $@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "pod2text not installed; skipping $<."
> +endif
>
>  txt/man/%.5.txt: man/%.pod.5 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2TEXT
>  	$(POD2TEXT) $< $@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> -
> +else
> +	@echo "pod2text not installed; skipping $<."
> +endif
> +
> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
> +$(XEN_ROOT)/config/Docs.mk:
> +	$(error You have to run ./configure before building docs)
> +endif
> diff --git a/docs/configure.ac b/docs/configure.ac
> new file mode 100644
> index 0000000..45dc9b8
> --- /dev/null
> +++ b/docs/configure.ac
> @@ -0,0 +1,27 @@
> +#                                               -*- Autoconf -*-
> +# Process this file with autoconf to produce a configure script.
> +
> +AC_PREREQ([2.67])
> +AC_INIT([Xen Hypervisor Documentation], m4_esyscmd([../version.sh ../xen/Makefile]),
> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
> +AC_CONFIG_SRCDIR([misc/xen-command-line.markdown])
> +AC_CONFIG_FILES([../config/Docs.mk])
> +AC_CONFIG_AUX_DIR([../])
> +
> +# M4 Macro includes
> +m4_include([../m4/docs_tool.m4])
> +
> +AX_DOCS_TOOL_PROG([PS2PDF], [ps2pdf])
> +AX_DOCS_TOOL_PROG([DVIPS], [dvips])
> +AX_DOCS_TOOL_PROG([LATEX], [latex])
> +AX_DOCS_TOOL_PROG([FIG2DEV], [fig2dev])
> +AX_DOCS_TOOL_PROG([LATEX2HTML], [latex2html])
> +AX_DOCS_TOOL_PROG([DOXYGEN], [doxygen])
> +AX_DOCS_TOOL_PROG([POD2MAN], [pod2man])
> +AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
> +AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
> +AX_DOCS_TOOL_PROG([DOT], [dot])
> +AX_DOCS_TOOL_PROG([NEATO], [neato])
> +AX_DOCS_TOOL_PROGS([MARKDOWN], [markdown], [markdown markdown_py])
> +
> +AC_OUTPUT()
> diff --git a/docs/figs/Makefile b/docs/figs/Makefile
> index 5ecdae3..f782dc1 100644
> --- a/docs/figs/Makefile
> +++ b/docs/figs/Makefile
> @@ -1,7 +1,7 @@
>
>  XEN_ROOT=$(CURDIR)/../..
>  include $(XEN_ROOT)/Config.mk
> -include $(XEN_ROOT)/docs/Docs.mk
> +include $(XEN_ROOT)/config/Docs.mk
>
>  TARGETS= network-bridge.png network-basic.png
>
> diff --git a/docs/xen-api/Makefile b/docs/xen-api/Makefile
> index 77a0117..b2da651 100644
> --- a/docs/xen-api/Makefile
> +++ b/docs/xen-api/Makefile
> @@ -2,7 +2,7 @@
>
>  XEN_ROOT=$(CURDIR)/../..
>  include $(XEN_ROOT)/Config.mk
> -include $(XEN_ROOT)/docs/Docs.mk
> +-include $(XEN_ROOT)/config/Docs.mk
>
>
>  TEX := $(wildcard *.tex)
> @@ -42,3 +42,8 @@ xenapi-datamodel-graph.eps: xenapi-datamodel-graph.dot
>  .PHONY: clean
>  clean:
>  	rm -f *.pdf *.ps *.dvi *.aux *.log *.out $(EPSDOT)
> +
> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
> +$(XEN_ROOT)/config/Docs.mk:
> +	$(error You have to run ./configure before building docs)
> +endif
> diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
> new file mode 100644
> index 0000000..3e8814a
> --- /dev/null
> +++ b/m4/docs_tool.m4
> @@ -0,0 +1,17 @@
> +AC_DEFUN([AX_DOCS_TOOL_PROG], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROG([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
> +
> +AC_DEFUN([AX_DOCS_TOOL_PROGS], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROGS([$1], [$3])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
> --
> 1.7.2.5
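A side note on the mechanism the patch relies on: GNU make's `ifdef` treats a variable with an empty value as undefined, so when `AC_PATH_PROG` finds no tool and substitutes an empty string, the Makefile cleanly falls into the "not installed" branch. A minimal sketch of this, with hypothetical file names (the real tree uses `config/Docs.mk` generated by `docs/configure`):

```shell
# Simulate configure failing to find markdown: the substituted
# variable ends up empty in the generated config fragment.
workdir=$(mktemp -d)
printf 'MARKDOWN :=\n' > "$workdir/Docs.mk"

# Same ifdef pattern as the patched docs/Makefile; recipe lines
# need leading tabs, hence printf with \t.
printf 'include Docs.mk\nhtml:\nifdef MARKDOWN\n\t@echo "running $(MARKDOWN)"\nelse\n\t@echo "markdown not installed; skipping html."\nendif\n' > "$workdir/Makefile"

out=$(make -s --no-print-directory -C "$workdir" html)
echo "$out"   # -> markdown not installed; skipping html.
```

Because `ifdef` is evaluated at parse time, the skip message costs nothing at build time, unlike the old `which $(TOOL)` shell test that ran for every invocation.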

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 20:43:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 20:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgiHM-000610-W7; Thu, 06 Dec 2012 20:42:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TgiHK-00060v-LH
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 20:42:43 +0000
Received: from [85.158.139.211:18413] by server-12.bemta-5.messagelabs.com id
	B7/EA-02886-14301C05; Thu, 06 Dec 2012 20:42:41 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354826544!19397293!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDE2MTU3Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13433 invoked from network); 6 Dec 2012 20:42:25 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 20:42:25 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1354826545; x=1386362545;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:content-transfer-encoding:in-reply-to;
	bh=vh8o7/bZ79vnuwzauwvYqFMJ3K66PwaziALC3coOxIQ=;
	b=AjeUKVsMKe9ny3T3N2uH3VDn8g3zAYyl08flERs+3H3wJSVQfzf+/gpt
	ezswQEUMeTjUYfaiwTlEbRYJ1lcGmHSwejS+uUyuCvQgiHiNZJHmbSZ69
	tpwNPLR/5fLFIO8VpeK4DuQwNXaSXo8iug2fex1L1lUiooKMsLCSIOPjV M=;
X-IronPort-AV: E=McAfee;i="5400,1158,6918"; a="491599028"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 06 Dec 2012 20:42:21 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	qB6KgKd0020420
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 6 Dec 2012 20:42:21 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.118) by
	ex10-hub-31005.ant.amazon.com (10.185.176.12) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 6 Dec 2012 12:42:14 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 06 Dec 2012 12:42:14 -0800
Date: Thu, 6 Dec 2012 12:42:14 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121206204212.GB3482@u109add4315675089e695.ant.amazon.com>
References: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] docs: check for documentation generation
 tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 11:06:54AM +0000, Ian Campbell wrote:
> It is sometimes hard to discover all the optional tools that should be
> on a system to build all available Xen documentation. By checking for
> documentation generation tools at ./configure time and displaying a
> warning, Xen packagers will more easily learn about new optional build
> dependencies, like markdown, when they are introduced.
>
> Based on a patch by Matt Wilson. Changed to use a separate
> docs/configure which is called from the top-level in the same manner
> as stubdoms.
>
> Rerun autogen.sh and "git add docs/configure" after applying this patch.
>
> Signed-off-by: Matt Wilson <msw@amazon.com>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
> Cc: Roger Pau Monné <roger.pau@citrix.com>

For the change to introduce docs/configure:

Acked-by: Matt Wilson <msw@amazon.com>

> ---
> Applies on top of Matthew's "Add autoconf to stubdom" and "Add a top
> level configure script".
> ---
>  .gitignore            |    1 +
>  .hgignore             |    1 +
>  README                |    2 +-
>  autogen.sh            |   15 ++++++---
>  config/Docs.mk.in     |   20 +++++++++++
>  configure             |    4 +-
>  configure.ac          |    2 +-
>  docs/Docs.mk          |   12 -------
>  docs/Makefile         |   86 +++++++++++++++++++++++++++++++++---------------
>  docs/configure.ac     |   27 +++++++++++++++
>  docs/figs/Makefile    |    2 +-
>  docs/xen-api/Makefile |    7 +++-
>  m4/docs_tool.m4       |   17 ++++++++++
>  13 files changed, 146 insertions(+), 50 deletions(-)
>  create mode 100644 config/Docs.mk.in
>  delete mode 100644 docs/Docs.mk
>  create mode 100644 docs/configure.ac
>  create mode 100644 m4/docs_tool.m4
>
> diff --git a/.gitignore b/.gitignore
> index 46ce63a..a4cdd6c 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -120,6 +120,7 @@ config.status
>  config.cache
>  config/Tools.mk
>  config/Stubdom.mk
> +config/Docs.mk
>  tools/blktap2/daemon/blktapctrl
>  tools/blktap2/drivers/img2qcow
>  tools/blktap2/drivers/lock-util
> diff --git a/.hgignore b/.hgignore
> index 0392a56..da3a7e6 100644
> --- a/.hgignore
> +++ b/.hgignore
> @@ -312,6 +312,7 @@
>  ^tools/config\.cache$
>  ^config/Tools\.mk$
>  ^config/Stubdom\.mk$
> +^config/Docs\.mk$
>  ^xen/\.banner.*$
>  ^xen/BLOG$
>  ^xen/System.map$
> diff --git a/README b/README
> index f5d5530..88401f7 100644
> --- a/README
> +++ b/README
> @@ -57,7 +57,6 @@ provided by your OS distributor:
>      * GNU gettext
>      * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
>      * ACPI ASL compiler (iasl)
> -    * markdown
>
>  In addition to the above there are a number of optional build
>  prerequisites. Omitting these will cause the related features to be
> @@ -65,6 +64,7 @@ disabled at compile time:
>      * Development install of Ocaml (e.g. ocaml-nox and
>        ocaml-findlib). Required to build ocaml components which
>        includes the alternative ocaml xenstored.
> +    * markdown
>
>  Second, you need to acquire a suitable kernel for use in domain 0. If
>  possible you should use a kernel provided by your OS distributor. If
> diff --git a/autogen.sh b/autogen.sh
> index 1456d94..b5c9688 100755
> --- a/autogen.sh
> +++ b/autogen.sh
> @@ -1,7 +1,12 @@
>  #!/bin/sh -e
>  autoconf
> -cd tools
> -autoconf
> -autoheader
> -cd ../stubdom
> -autoconf
> +( cd tools
> +  autoconf
> +  autoheader
> +)
> +( cd stubdom
> +  autoconf
> +)
> +( cd docs
> +  autoconf
> +)
> diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> new file mode 100644
> index 0000000..b6ab6fe
> --- /dev/null
> +++ b/config/Docs.mk.in
> @@ -0,0 +1,20 @@
> +# Prefix and install folder
> +prefix              := @prefix@
> +PREFIX              := $(prefix)
> +exec_prefix         := @exec_prefix@
> +libdir              := @libdir@
> +LIBDIR              := $(libdir)
> +
> +# Tools
> +PS2PDF              := @PS2PDF@
> +DVIPS               := @DVIPS@
> +LATEX               := @LATEX@
> +FIG2DEV             := @FIG2DEV@
> +LATEX2HTML          := @LATEX2HTML@
> +DOXYGEN             := @DOXYGEN@
> +POD2MAN             := @POD2MAN@
> +POD2HTML            := @POD2HTML@
> +POD2TEXT            := @POD2TEXT@
> +DOT                 := @DOT@
> +NEATO               := @NEATO@
> +MARKDOWN            := @MARKDOWN@
> diff --git a/configure b/configure
> index 649708f..a307f3a 100755
> --- a/configure
> +++ b/configure
> @@ -606,7 +606,7 @@ enable_option_checking
>        ac_precious_vars='build_alias
>  host_alias
>  target_alias'
> -ac_subdirs_all='tools stubdom'
> +ac_subdirs_all='tools docs stubdom'
>
>  # Initialize some variables set by options.
>  ac_init_help=
> @@ -1675,7 +1675,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
>
>
>
> -subdirs="$subdirs tools stubdom"
> +subdirs="$subdirs tools docs stubdom"
>
>
>  cat >confcache <<\_ACEOF
> diff --git a/configure.ac b/configure.ac
> index 0497d97..637b35b 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -6,6 +6,6 @@ AC_INIT([Xen Hypervisor], m4_esyscmd([./version.sh ./xen/Makefile]),
>      [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>  AC_CONFIG_SRCDIR([./xen/common/kernel.c])
>
> -AC_CONFIG_SUBDIRS([tools stubdom])
> +AC_CONFIG_SUBDIRS([tools docs stubdom])
>
>  AC_OUTPUT()
> diff --git a/docs/Docs.mk b/docs/Docs.mk
> deleted file mode 100644
> index aa653d3..0000000
> --- a/docs/Docs.mk
> +++ /dev/null
> @@ -1,12 +0,0 @@
> -PS2PDF		:= ps2pdf
> -DVIPS		:= dvips
> -LATEX		:= latex
> -FIG2DEV		:= fig2dev
> -LATEX2HTML	:= latex2html
> -DOXYGEN		:= doxygen
> -POD2MAN		:= pod2man
> -POD2HTML	:= pod2html
> -POD2TEXT	:= pod2text
> -DOT		:= dot
> -NEATO		:= neato
> -MARKDOWN	:= markdown
> diff --git a/docs/Makefile b/docs/Makefile
> index 03f141a..03621f7 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -2,7 +2,7 @@
>
>  XEN_ROOT=$(CURDIR)/..
>  include $(XEN_ROOT)/Config.mk
> -include $(XEN_ROOT)/docs/Docs.mk
> +include $(XEN_ROOT)/config/Docs.mk
>
>  VERSION		= xen-unstable
>
> @@ -26,10 +26,12 @@ all: build
>
>  .PHONY: build
>  build: html txt man-pages figs
> -	@if which $(DOT) 1>/dev/null 2>/dev/null ; then              \
> -	$(MAKE) -C xen-api build ; else                              \
> -        echo "Graphviz (dot) not installed; skipping xen-api." ; fi
> +ifdef DOT
> +	$(MAKE) -C xen-api build
>  	rm -f *.aux *.dvi *.bbl *.blg *.glo *.idx *.ilg *.log *.ind *.toc
> +else
> +	@echo "Graphviz (dot) not installed; skipping xen-api."
> +endif
>
>  .PHONY: dev-docs
>  dev-docs: python-dev-docs
> @@ -39,30 +41,37 @@ html: $(DOC_HTML) html/index.html
>
>  .PHONY: txt
>  txt:
> -	@if which $(POD2TEXT) 1>/dev/null 2>/dev/null; then \
> -	$(MAKE) $(DOC_TXT); else              \
> -	echo "pod2text not installed; skipping text outputs."; fi
> +ifdef POD2TEXT
> +	$(MAKE) $(DOC_TXT)
> +else
> +	@echo "pod2text not installed; skipping text outputs."
> +endif
>
>  .PHONY: figs
>  figs:
> -	@set -e ; if which $(FIG2DEV) 1>/dev/null 2>/dev/null; then \
> -	set -x; $(MAKE) -C figs ; else                   \
> -	echo "fig2dev (transfig) not installed; skipping figs."; fi
> +ifdef FIG2DEV
> +	set -x; $(MAKE) -C figs
> +else
> +	@echo "fig2dev (transfig) not installed; skipping figs."
> +endif
>
>  .PHONY: python-dev-docs
>  python-dev-docs:
> -	@mkdir -v -p api/tools/python
> -	@set -e ; if which $(DOXYGEN) 1>/dev/null 2>/dev/null; then \
> -        echo "Running doxygen to generate Python tools APIs ... "; \
> -	$(DOXYGEN) Doxyfile;                                       \
> -	$(MAKE) -C api/tools/python/latex ; else                   \
> -        echo "Doxygen not installed; skipping python-dev-docs."; fi
> +ifdef DOXYGEN
> +	@echo "Running doxygen to generate Python tools APIs ... "
> +	mkdir -v -p api/tools/python
> +	$(DOXYGEN) Doxyfile && $(MAKE) -C api/tools/python/latex
> +else
> +	@echo "Doxygen not installed; skipping python-dev-docs."
> +endif
>
>  .PHONY: man-pages
>  man-pages:
> -	@if which $(POD2MAN) 1>/dev/null 2>/dev/null; then \
> -	$(MAKE) $(DOC_MAN1) $(DOC_MAN5); else              \
> -	echo "pod2man not installed; skipping man-pages."; fi
> +ifdef POD2MAN
> +	$(MAKE) $(DOC_MAN1) $(DOC_MAN5)
> +else
> +	@echo "pod2man not installed; skipping man-pages."
> +endif
>
>  man1/%.1: man/%.pod.1 Makefile
>  	$(INSTALL_DIR) $(@D)
> @@ -87,6 +96,7 @@ clean:
>
>  .PHONY: distclean
>  distclean: clean
> +	rm -rf ../config/Docs.mk config.log config.status autom4te.cache
>
>  .PHONY: install
>  install: all
> @@ -104,30 +114,40 @@ html/index.html: $(DOC_HTML) ./gen-html-index INDEX
>  	perl -w -- ./gen-html-index -i INDEX html $(DOC_HTML)
>
>  html/%.html: %.markdown
> -	@$(INSTALL_DIR) $(@D)
> -	@set -e ; if which $(MARKDOWN) 1>/dev/null 2>/dev/null; then \
> -	echo "Running markdown to generate $*.html ... "; \
> +	$(INSTALL_DIR) $(@D)
> +ifdef MARKDOWN
> +	@echo "Running markdown to generate $*.html ... "
>  	$(MARKDOWN) $< > $@.tmp ; \
> -	$(call move-if-changed,$@.tmp,$@) ; else \
> -	echo "markdown not installed; skipping $*.html."; fi
> +	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "markdown not installed; skipping $*.html."
> +endif
>
>  html/%.txt: %.txt
> -	@$(INSTALL_DIR) $(@D)
> +	$(INSTALL_DIR) $(@D)
>  	cp $< $@
>
>  html/man/%.1.html: man/%.pod.1 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2HTML
>  	$(POD2HTML) --infile=$< --outfile=$@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "pod2html not installed; skipping $<."
> +endif
>
>  html/man/%.5.html: man/%.pod.5 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2HTML
>  	$(POD2HTML) --infile=$< --outfile=$@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "pod2html not installed; skipping $<."
> +endif
>
>  html/hypercall/index.html: ./xen-headers
>  	rm -rf $(@D)
> -	@$(INSTALL_DIR) $(@D)
> +	$(INSTALL_DIR) $(@D)
>  	./xen-headers -O $(@D) \
>  		-T 'arch-x86_64 - Xen public headers' \
>  		-X arch-ia64 -X arch-x86_32 -X xen-x86_32 -X arch-arm \
> @@ -147,11 +167,23 @@ txt/%.txt: %.markdown
>
>  txt/man/%.1.txt: man/%.pod.1 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2TEXT
>  	$(POD2TEXT) $< $@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> +else
> +	@echo "pod2text not installed; skipping $<."
> +endif
>
>  txt/man/%.5.txt: man/%.pod.5 Makefile
>  	$(INSTALL_DIR) $(@D)
> +ifdef POD2TEXT
>  	$(POD2TEXT) $< $@.tmp
>  	$(call move-if-changed,$@.tmp,$@)
> -
> +else
> +	@echo "pod2text not installed; skipping $<."
> +endif
> +
> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
> +$(XEN_ROOT)/config/Docs.mk:
> +	$(error You have to run ./configure before building docs)
> +endif
> diff --git a/docs/configure.ac b/docs/configure.ac
> new file mode 100644
> index 0000000..45dc9b8
> --- /dev/null
> +++ b/docs/configure.ac
> @@ -0,0 +1,27 @@
> +#                                               -*- Autoconf -*-
> +# Process this file with autoconf to produce a configure script.
> +
> +AC_PREREQ([2.67])
> +AC_INIT([Xen Hypervisor Documentation], m4_esyscmd([../version.sh ../xen/Makefile]),
> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
> +AC_CONFIG_SRCDIR([misc/xen-command-line.markdown])
> +AC_CONFIG_FILES([../config/Docs.mk])
> +AC_CONFIG_AUX_DIR([../])
> +
> +# M4 Macro includes
> +m4_include([../m4/docs_tool.m4])
> +
> +AX_DOCS_TOOL_PROG([PS2PDF], [ps2pdf])
> +AX_DOCS_TOOL_PROG([DVIPS], [dvips])
> +AX_DOCS_TOOL_PROG([LATEX], [latex])
> +AX_DOCS_TOOL_PROG([FIG2DEV], [fig2dev])
> +AX_DOCS_TOOL_PROG([LATEX2HTML], [latex2html])
> +AX_DOCS_TOOL_PROG([DOXYGEN], [doxygen])
> +AX_DOCS_TOOL_PROG([POD2MAN], [pod2man])
> +AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
> +AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
> +AX_DOCS_TOOL_PROG([DOT], [dot])
> +AX_DOCS_TOOL_PROG([NEATO], [neato])
> +AX_DOCS_TOOL_PROGS([MARKDOWN], [markdown], [markdown markdown_py])
> +
> +AC_OUTPUT()
> diff --git a/docs/figs/Makefile b/docs/figs/Makefile
> index 5ecdae3..f782dc1 100644
> --- a/docs/figs/Makefile
> +++ b/docs/figs/Makefile
> @@ -1,7 +1,7 @@
>
>  XEN_ROOT=$(CURDIR)/../..
>  include $(XEN_ROOT)/Config.mk
> -include $(XEN_ROOT)/docs/Docs.mk
> +include $(XEN_ROOT)/config/Docs.mk
>
>  TARGETS= network-bridge.png network-basic.png
>
>
> diff --git a/docs/xen-api/Makefile b/docs/xen-api/Makefile
> index 77a0117..b2da651 100644
> --- a/docs/xen-api/Makefile
> +++ b/docs/xen-api/Makefile
> @@ -2,7 +2,7 @@
>
>  XEN_ROOT=$(CURDIR)/../..
>  include $(XEN_ROOT)/Config.mk
> -include $(XEN_ROOT)/docs/Docs.mk
> +-include $(XEN_ROOT)/config/Docs.mk
>
>
>  TEX := $(wildcard *.tex)
> @@ -42,3 +42,8 @@ xenapi-datamodel-graph.eps: xenapi-datamodel-graph.dot
>  .PHONY: clean
>  clean:
>  	rm -f *.pdf *.ps *.dvi *.aux *.log *.out $(EPSDOT)
> +
> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
> +$(XEN_ROOT)/config/Docs.mk:
> +	$(error You have to run ./configure before building docs)
> +endif
> diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
> new file mode 100644
> index 0000000..3e8814a
> --- /dev/null
> +++ b/m4/docs_tool.m4
> @@ -0,0 +1,17 @@
> +AC_DEFUN([AX_DOCS_TOOL_PROG], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROG([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
> +
> +AC_DEFUN([AX_DOCS_TOOL_PROGS], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROGS([$1], [$3])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
> --
> 1.7.2.5
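
The Makefile changes in this patch all follow one pattern: ./configure records each tool's path (possibly empty) in config/Docs.mk, and make then tests the variable with ifdef instead of re-probing with `which` at build time. A minimal shell sketch of the two styles, for illustration only (`no-such-docs-tool` is a made-up name standing in for a missing generator):

```shell
#!/bin/sh
# Old style: probe for the tool every time the build runs, and skip quietly
# if it is absent.
probe_and_run() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1 found; building docs"
    else
        echo "$1 not installed; skipping."
    fi
}

probe_and_run sh                  # present on any POSIX system
probe_and_run no-such-docs-tool   # stand-in for a missing generator

# New style: configure-time detection records the path once (this is what
# AX_DOCS_TOOL_PROG substitutes into config/Docs.mk); the build then only
# checks whether the recorded variable is non-empty.
MARKDOWN="$(command -v markdown || true)"
if [ -n "$MARKDOWN" ]; then
    echo "markdown found at $MARKDOWN"
else
    echo "markdown not installed; skipping html."
fi
```

The configure-time approach surfaces every missing tool in one place, which is the point of the patch: packagers see the warnings as soon as a new optional dependency such as markdown is introduced.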

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:07:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgifF-00070F-7J; Thu, 06 Dec 2012 21:07:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgifD-00070A-SR
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 21:07:24 +0000
Received: from [85.158.139.83:31665] by server-8.bemta-5.messagelabs.com id
	75/D8-06050-B0901C05; Thu, 06 Dec 2012 21:07:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354828040!28205720!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDgxMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28601 invoked from network); 6 Dec 2012 21:07:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:07:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,232,1355097600"; d="scan'208";a="16210586"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 21:07:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 21:07:20 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgifA-00057l-GO;
	Thu, 06 Dec 2012 21:07:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgifA-0002K1-Ah;
	Thu, 06 Dec 2012 21:07:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14585-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 21:07:20 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14585: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14585 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14585/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:32:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:32:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgj3H-00082k-O4; Thu, 06 Dec 2012 21:32:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tgj3G-00082f-4T
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 21:32:14 +0000
Received: from [85.158.139.83:2367] by server-6.bemta-5.messagelabs.com id
	B5/42-19321-DDE01C05; Thu, 06 Dec 2012 21:32:13 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354829532!28645208!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19810 invoked from network); 6 Dec 2012 21:32:12 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-15.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Dec 2012 21:32:12 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:57812 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tgj6x-00013s-MD; Thu, 06 Dec 2012 22:36:03 +0100
Date: Thu, 6 Dec 2012 22:32:08 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <92707120.20121206223208@eikelenboom.it>
To: xen-devel <xen-devel@lists.xen.org>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x returns
	same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano / Anthony,

With the debug output turned on I see a difference between qemu-traditional and qemu-upstream:

With the PCI passthrough device that fails under MSI-X, qemu-xen seems to get the same pirq back for every entry.

in qemu-traditional:

pt_msix_update_one: pt_msix_update_one requested pirq = 87
pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
pt_msix_update_one: pt_msix_update_one requested pirq = 86
pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
pt_msix_update_one: pt_msix_update_one requested pirq = 85
pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
pt_msix_update_one: pt_msix_update_one requested pirq = 84
pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0


in qemu-xen (upstream):

[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjGO-00004x-1j; Thu, 06 Dec 2012 21:45:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgjGM-0008WK-7O
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 21:45:46 +0000
Received: from [85.158.139.83:29371] by server-16.bemta-5.messagelabs.com id
	3A/5E-21311-90211C05; Thu, 06 Dec 2012 21:45:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354830336!28646203!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29959 invoked from network); 6 Dec 2012 21:45:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:45:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="46893845"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 21:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 16:45:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgjGB-0007kq-DN;
	Thu, 06 Dec 2012 21:45:35 +0000
MIME-Version: 1.0
X-Mercurial-Node: 3757511a785287066cfd51605ff109c005bfdc08
Message-ID: <3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 21:42:30 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 2 V3] x86/kexec: Change NMI and MCE
	handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this bug, and because future changes to the NMI handling will
make the kexec path more fragile, take the time now to bullet-proof the
kexec behaviour so it is safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for use during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no-op handler which irets immediately.  It is actually in
    the middle of enable_nmis to reuse the iret instruction, without
    having a single lone aligned iret inflating the code size.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpu's NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it getting stuck in
    an NMI context, causing a hang instead of a crash.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we reenter midway through, we attempt the
    whole operation again in preference to leaving it incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into an infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is touched, the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r 43f86afe90be -r 3757511a7852 xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,129 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receiving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immune to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which causes them to write state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this MCE and
+                 * NMI handler (shortly to become a nop) as there is a 1
+                 * instruction race window where NMIs could be
+                 * re-enabled and corrupt the exception frame, leaving
+                 * us unable to continue on this crash path (which half
+                 * defeats the point of using the nop handler in the
+                 * first place).
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                set_ist(&idt_tables[i][TRAP_nmi],           IST_NONE);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+            }
+            else
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r 43f86afe90be -r 3757511a7852 xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -87,6 +87,22 @@ void machine_kexec(xen_kexec_image_t *im
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    set_intr_gate(TRAP_machine_check, &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r 43f86afe90be -r 3757511a7852 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions, and all preserved. */
+ENTRY(enable_nmis)
+        pushq %rax
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+/* No op trap handler.  Required for kexec crash path.
+ * It is not used in performance critical code, and saves having a single
+ * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
+ * explicit alignment. */
+.globl trap_nop;
+trap_nop:
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r 43f86afe90be -r 3757511a7852 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);


From xen-devel-bounces@lists.xen.org Thu Dec 06 21:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjGO-00004x-1j; Thu, 06 Dec 2012 21:45:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgjGM-0008WK-7O
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 21:45:46 +0000
Received: from [85.158.139.83:29371] by server-16.bemta-5.messagelabs.com id
	3A/5E-21311-90211C05; Thu, 06 Dec 2012 21:45:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354830336!28646203!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc1Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29959 invoked from network); 6 Dec 2012 21:45:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:45:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="46893845"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 21:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 16:45:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgjGB-0007kq-DN;
	Thu, 06 Dec 2012 21:45:35 +0000
MIME-Version: 1.0
X-Mercurial-Node: 3757511a785287066cfd51605ff109c005bfdc08
Message-ID: <3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 21:42:30 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 2 V3] x86/kexec: Change NMI and MCE
	handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and that the future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour to be safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for using during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no op handler which irets immediately.  It is actually in
    the middle of enable_nmis to reuse the iret instruction, without
    having a single lone aligned iret inflating the code side.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpus NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it getting stuck in
    an NMI context, causing a hang instead of crash.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  In the case where we reenter midway through,
    attempt the whole operation again in preference to not completing
    it in the first place.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possiblity of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r 43f86afe90be -r 3757511a7852 xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,129 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immue to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor mans self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which cause them to write state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this MCE and
+                 * NMI handler (shortly to become a nop) as there is a 1
+                 * instruction race window where NMIs could be
+                 * re-enabled and corrupt the exception frame, leaving
+                 * us unable to continue on this crash path (which half
+                 * defeats the point of using the nop handler in the
+                 * first place).
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                set_ist(&idt_tables[i][TRAP_nmi],           IST_NONE);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+            }
+            else
+                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r 43f86afe90be -r 3757511a7852 xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -87,6 +87,22 @@ void machine_kexec(xen_kexec_image_t *im
      */
     local_irq_disable();
 
+    /* Now that regular interrupts are disabled, we need to reduce the
+     * impact of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().
+     * All pcpus other than us have the nmi_crash handler, while we have
+     * the nop handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    set_intr_gate(TRAP_machine_check, &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r 43f86afe90be -r 3757511a7852 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions, and all preserved. */
+ENTRY(enable_nmis)
+        pushq %rax
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+/* No op trap handler.  Required for kexec crash path.
+ * It is not used in performance critical code, and saves having a single
+ * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
+ * explicit alignment. */
+.globl trap_nop;
+trap_nop:
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        popq %rax
+        retq
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r 43f86afe90be -r 3757511a7852 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjGL-0008WD-AB; Thu, 06 Dec 2012 21:45:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgjGK-0008W8-SR
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 21:45:44 +0000
Received: from [85.158.137.99:26437] by server-15.bemta-3.messagelabs.com id
	75/5C-23779-30211C05; Thu, 06 Dec 2012 21:45:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354830337!18231155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23188 invoked from network); 6 Dec 2012 21:45:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:45:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="216682868"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 21:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 16:45:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgjGB-0007kq-CM;
	Thu, 06 Dec 2012 21:45:35 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 21:42:28 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 2 V3] Kexec alterations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

V3 of this patch series, following detailed code review from 3 colleagues.

Patch 1 implements a helper function for working with IST entries, to
avoid open-coded bitwise manipulation, while the major difference in
patch 2 is a reworking of the IST alterations to remove the (very
remote) possibility of re-enabling the sysret context switch security
vulnerability.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:46:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjGN-00004q-MX; Thu, 06 Dec 2012 21:45:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TgjGL-0008WH-Vr
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 21:45:46 +0000
Received: from [85.158.137.99:33410] by server-8.bemta-3.messagelabs.com id
	5F/2C-07786-40211C05; Thu, 06 Dec 2012 21:45:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354830337!18231155!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAwMDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23199 invoked from network); 6 Dec 2012 21:45:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:45:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="216682869"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Dec 2012 21:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 6 Dec 2012 16:45:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TgjGB-0007kq-Cq;
	Thu, 06 Dec 2012 21:45:35 +0000
MIME-Version: 1.0
X-Mercurial-Node: 43f86afe90be582e1579ba6aaf105f233832a111
Message-ID: <43f86afe90be582e1579.1354830149@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3.2
Date: Thu, 6 Dec 2012 21:42:29 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 2 V3] x86/IST: Create set_ist() helper
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

... to save using open-coded bitwise operations, and update all IST
manipulation sites to use the function.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

I am not overly happy with the name set_ist(), and certainly not tied to
it.  However, I am unable to think of a better name.  set_idt_ist() is
wrong, as is set_irq_ist(), while set_idt_entry_ist() just seems too
kludgy.  The comment and parameter types do explicitly state what is
expected to be passed, but suggestions welcome for a better name.

diff -r bc624b00d6d6 -r 43f86afe90be xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -869,9 +869,9 @@ static void svm_ctxt_switch_from(struct 
     svm_vmload(per_cpu(root_vmcb, cpu));
 
     /* Resume use of ISTs now that the host TR is reinstated. */
-    idt_tables[cpu][TRAP_double_fault].a  |= IST_DF << 32;
-    idt_tables[cpu][TRAP_nmi].a           |= IST_NMI << 32;
-    idt_tables[cpu][TRAP_machine_check].a |= IST_MCE << 32;
+    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_DF);
+    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NMI);
+    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_MCE);
 }
 
 static void svm_ctxt_switch_to(struct vcpu *v)
@@ -893,9 +893,9 @@ static void svm_ctxt_switch_to(struct vc
      * Cannot use ISTs for NMI/#MC/#DF while we are running with the guest TR.
      * But this doesn't matter: the IST is only req'd to handle SYSCALL/SYSRET.
      */
-    idt_tables[cpu][TRAP_double_fault].a  &= ~(7UL << 32);
-    idt_tables[cpu][TRAP_nmi].a           &= ~(7UL << 32);
-    idt_tables[cpu][TRAP_machine_check].a &= ~(7UL << 32);
+    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_NONE);
+    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NONE);
+    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
 
     svm_restore_dr(v);
 
diff -r bc624b00d6d6 -r 43f86afe90be xen/arch/x86/x86_64/traps.c
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -370,9 +370,9 @@ void __devinit subarch_percpu_traps_init
     {
         /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
         set_intr_gate(TRAP_double_fault, &double_fault);
-        idt_table[TRAP_double_fault].a  |= IST_DF << 32;
-        idt_table[TRAP_nmi].a           |= IST_NMI << 32;
-        idt_table[TRAP_machine_check].a |= IST_MCE << 32;
+        set_ist(&idt_table[TRAP_double_fault],  IST_DF);
+        set_ist(&idt_table[TRAP_nmi],           IST_NMI);
+        set_ist(&idt_table[TRAP_machine_check], IST_MCE);
 
         /*
          * The 32-on-64 hypercall entry vector is only accessible from ring 1.
diff -r bc624b00d6d6 -r 43f86afe90be xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -425,10 +425,20 @@ struct tss_struct {
     u8 __cacheline_filler[24];
 } __cacheline_aligned __attribute__((packed));
 
-#define IST_DF  1UL
-#define IST_NMI 2UL
-#define IST_MCE 3UL
-#define IST_MAX 3UL
+#define IST_NONE 0UL
+#define IST_DF   1UL
+#define IST_NMI  2UL
+#define IST_MCE  3UL
+#define IST_MAX  3UL
+
+/* Set the interrupt stack table used by a particular interrupt
+ * descriptor table entry. */
+static always_inline void set_ist(idt_entry_t * idt, unsigned long ist)
+{
+    /* ist is a 3 bit field, 32 bits into the idt entry. */
+    ASSERT( ist < 8 );
+    idt->a = ( idt->a & ~(7UL << 32) ) | ( (ist & 7UL) << 32 );
+}
 
 #define IDT_ENTRIES 256
 extern idt_entry_t idt_table[];

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:54:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjOR-0000nh-1W; Thu, 06 Dec 2012 21:54:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgjOO-0000nc-Se
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 21:54:05 +0000
Received: from [193.109.254.147:49653] by server-6.bemta-14.messagelabs.com id
	C3/CF-02788-CF311C05; Thu, 06 Dec 2012 21:54:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1354830841!4482077!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13192 invoked from network); 6 Dec 2012 21:54:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:54:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="16210957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 21:54:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 21:54:01 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgjOL-0005Dz-1Z;
	Thu, 06 Dec 2012 21:54:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgjOL-0007w9-19;
	Thu, 06 Dec 2012 21:54:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14575-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 21:54:01 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14575: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14575 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14575/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14565
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14565
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  dc81777ca115
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26243:dc81777ca115
tag:         tip
user:        Liu Jinsong <jinsong.liu@intel.com>
date:        Thu Dec 06 10:47:22 2012 +0000
    
    X86/vMCE: handle broken page with regard to migration
    
    At the sender
      xc_domain_save has a key point: querying the types of all the pages
      with xc_get_pfn_type_batch.
      1) If a broken page occurs before the key point, migration will be
         fine, since the proper pfn_type and pfn number are transferred to
         the target, which then takes the appropriate action;
      2) if a broken page occurs after the key point, the whole system will
         crash, and migration no longer matters.
    
    At the target
      The target populates pages for the guest.  For a broken page, we
      prefer to keep the type of the page for the sake of seamless
      migration: the target sets its p2m type to p2m_ram_broken.  If the
      guest accesses the broken page again, it kills itself as expected.
    
    Suggested-by: George Dunlap <george.dunlap@eu.citrix.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26242:89bd3c43f883
user:        George Dunlap <george.dunlap@eu.citrix.com>
date:        Thu Dec 06 10:19:08 2012 +0000
    
    libxl: Make an internal function explicitly check existence of expected paths
    
    libxl__device_disk_from_xs_be() was failing without error for some
    missing xenstore nodes in a backend, while assuming (without checking)
    that other nodes were valid, causing a crash when another internal
    error wrote these nodes in the wrong place.
    
    Make this function consistent by:
    * Checking the existence of all nodes before using them
    * Choosing a default only when the node is not written in device_disk_add()
    * Failing with a log message if any node written by device_disk_add() is
      not present
    * Returning an error on failure
    * Disposing of the structure before returning, using libxl_device_disk_dispose()
    
    Also make the callers of the function pay attention to the error and
    behave appropriately.  In the case of libxl__append_disk_list_of_type(),
    this means only incrementing *ndisks as the disk structures are
    successfully initialized.
    
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26241:d3c96e02e532
user:        Stefano Stabellini <stefano.stabellini@eu.citrix.com>
date:        Thu Dec 06 10:19:08 2012 +0000
    
    xen/arm: disable interrupts on return_to_hypervisor
    
    At the moment it is possible to reach return_to_hypervisor with
    interrupts enabled (it happens every time we are actually going back
    to hypervisor mode, i.e. when we don't take the return_to_guest path).
    
    If that happens we risk losing the content of ELR_hyp: if we receive an
    interrupt right after restoring ELR_hyp, once we come back we'll have a
    different value in ELR_hyp and the original is lost.
    
    In order to make the return_to_hypervisor path safe, we disable
    interrupts before restoring any registers.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26240:0c96325e2c8f
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 10:19:07 2012 +0000
    
    README: docs/pdf/user.pdf was deleted in 24563:4271634e4c86
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26239:4b6d74b093bc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 10:19:06 2012 +0000
    
    gitignore: ignore xen-foreign/arm.h
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26238:53805e238cca
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 10:56:53 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    
    
changeset:   26237:85ea612be837
user:        Samuel Thibault <samuel.thibault@ens-lyon.org>
date:        Thu Dec 06 09:22:31 2012 +0000
    
    mini-os: drop shutdown variables when CONFIG_XENBUS=n
    
    Shutdown variables are meaningless when CONFIG_XENBUS=n, since no
    shutdown event will ever happen.  Better to make sure that no code
    tries to use them while waiting for a shutdown event that will never
    arrive.
    
    Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   26236:51767f7f6ccc
user:        David Vrabel <david.vrabel@citrix.com>
date:        Thu Dec 06 09:21:49 2012 +0000
    
    MAINTAINERS: Device tree is maintained by the ARM maintainers
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   26235:670b07e8d738
user:        Jan Beulich <jbeulich@suse.com>
date:        Wed Dec 05 09:52:14 2012 +0100
    
    IOMMU/ATS: fix maximum queue depth calculation
    
    The capabilities register field is a 5-bit value, and the 5 bits all
    being zero actually means 32 entries.
    
    Under the assumption that amd_iommu_flush_iotlb() really just tried
    to correct for the miscalculation above when adding 32 to the value,
    that adjustment is also being removed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
    Acked-by: Wei Huang <wei.huang2@amd.com>
    
    
changeset:   26234:bc624b00d6d6
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:38:31 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 21:54:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 21:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjOR-0000nh-1W; Thu, 06 Dec 2012 21:54:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgjOO-0000nc-Se
	for xen-devel@lists.xensource.com; Thu, 06 Dec 2012 21:54:05 +0000
Received: from [193.109.254.147:49653] by server-6.bemta-14.messagelabs.com id
	C3/CF-02788-CF311C05; Thu, 06 Dec 2012 21:54:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1354830841!4482077!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13192 invoked from network); 6 Dec 2012 21:54:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Dec 2012 21:54:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="16210957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Dec 2012 21:54:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 6 Dec 2012 21:54:01 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgjOL-0005Dz-1Z;
	Thu, 06 Dec 2012 21:54:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgjOL-0007w9-19;
	Thu, 06 Dec 2012 21:54:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14575-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Dec 2012 21:54:01 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14575: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14575 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14575/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14565
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14565
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  dc81777ca115
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26243:dc81777ca115
tag:         tip
user:        Liu Jinsong <jinsong.liu@intel.com>
date:        Thu Dec 06 10:47:22 2012 +0000
    
    X86/vMCE: handle broken page with regard to migration
    
    At the sender
      xc_domain_save has a key point: querying the types of all the pages
      with xc_get_pfn_type_batch.
      1) If a broken page occurs before the key point, migration will be
         fine, since the proper pfn_type and pfn number are transferred to
         the target, which then takes the appropriate action;
      2) if a broken page occurs after the key point, the whole system will
         crash, and migration no longer matters.
    
    At the target
      The target populates pages for the guest.  For a broken page, we
      prefer to keep the type of the page for the sake of seamless
      migration: the target sets its p2m type to p2m_ram_broken.  If the
      guest accesses the broken page again, it kills itself as expected.
    
    Suggested-by: George Dunlap <george.dunlap@eu.citrix.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26242:89bd3c43f883
user:        George Dunlap <george.dunlap@eu.citrix.com>
date:        Thu Dec 06 10:19:08 2012 +0000
    
    libxl: Make an internal function explicitly check existence of expected paths
    
    libxl__device_disk_from_xs_be() was failing without error for some
    missing xenstore nodes in a backend, while assuming (without checking)
    that other nodes were valid, causing a crash when another internal
    error wrote these nodes in the wrong place.
    
    Make this function consistent by:
    * Checking the existence of all nodes before using them
    * Choosing a default only when the node is not written in device_disk_add()
    * Failing with a log message if any node written by device_disk_add() is
      not present
    * Returning an error on failure
    * Disposing of the structure before returning, using libxl_device_disk_dispose()
    
    Also make the callers of the function pay attention to the error and
    behave appropriately.  In the case of libxl__append_disk_list_of_type(),
    this means only incrementing *ndisks as the disk structures are
    successfully initialized.
    
    Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26241:d3c96e02e532
user:        Stefano Stabellini <stefano.stabellini@eu.citrix.com>
date:        Thu Dec 06 10:19:08 2012 +0000
    
    xen/arm: disable interrupts on return_to_hypervisor
    
    At the moment it is possible to reach return_to_hypervisor with
    interrupts enabled (it happens every time we are actually going back
    to hypervisor mode, i.e. when we don't take the return_to_guest path).
    
    If that happens we risk losing the content of ELR_hyp: if we receive an
    interrupt right after restoring ELR_hyp, once we come back we'll have a
    different value in ELR_hyp and the original is lost.
    
    In order to make the return_to_hypervisor path safe, we disable
    interrupts before restoring any registers.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26240:0c96325e2c8f
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 10:19:07 2012 +0000
    
    README: docs/pdf/user.pdf was deleted in 24563:4271634e4c86
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26239:4b6d74b093bc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 10:19:06 2012 +0000
    
    gitignore: ignore xen-foreign/arm.h
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26238:53805e238cca
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 10:56:53 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    
    
changeset:   26237:85ea612be837
user:        Samuel Thibault <samuel.thibault@ens-lyon.org>
date:        Thu Dec 06 09:22:31 2012 +0000
    
    mini-os: drop shutdown variables when CONFIG_XENBUS=n
    
    Shutdown variables are meaningless when CONFIG_XENBUS=n, since no
    shutdown event will ever happen.  Better to make sure that no code
    tries to use them while waiting for a shutdown event that will never
    arrive.
    
    Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   26236:51767f7f6ccc
user:        David Vrabel <david.vrabel@citrix.com>
date:        Thu Dec 06 09:21:49 2012 +0000
    
    MAINTAINERS: Device tree is maintained by the ARM maintainers
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   26235:670b07e8d738
user:        Jan Beulich <jbeulich@suse.com>
date:        Wed Dec 05 09:52:14 2012 +0100
    
    IOMMU/ATS: fix maximum queue depth calculation
    
    The capabilities register field is a 5-bit value, and the 5 bits all
    being zero actually means 32 entries.
    
    Under the assumption that amd_iommu_flush_iotlb() really just tried
    to correct for the miscalculation above when adding 32 to the value,
    that adjustment is also being removed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
    Acked-by: Wei Huang <wei.huang2@amd.com>
    
    
changeset:   26234:bc624b00d6d6
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:38:31 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 22:19:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 22:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjmP-0001VQ-NS; Thu, 06 Dec 2012 22:18:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eblake@redhat.com>) id 1TgjmO-0001VK-Fp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 22:18:52 +0000
Received: from [85.158.139.211:63766] by server-12.bemta-5.messagelabs.com id
	BC/FF-02886-BC911C05; Thu, 06 Dec 2012 22:18:51 +0000
X-Env-Sender: eblake@redhat.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1354832330!19348893!1
X-Originating-IP: [76.96.30.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzYuOTYuMzAuMjQgPT4gMTQ1NDE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23010 invoked from network); 6 Dec 2012 22:18:50 -0000
Received: from qmta02.emeryville.ca.mail.comcast.net (HELO
	qmta02.emeryville.ca.mail.comcast.net) (76.96.30.24)
	by server-4.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 22:18:50 -0000
Received: from omta13.emeryville.ca.mail.comcast.net ([76.96.30.52])
	by qmta02.emeryville.ca.mail.comcast.net with comcast
	id YJA31k00M17UAYkA2NJq8l; Thu, 06 Dec 2012 22:18:50 +0000
Received: from [192.168.0.6] ([24.10.251.25])
	by omta13.emeryville.ca.mail.comcast.net with comcast
	id YNJo1k0060ZdyUg8ZNJoZC; Thu, 06 Dec 2012 22:18:49 +0000
Message-ID: <50C119C7.807@redhat.com>
Date: Thu, 06 Dec 2012 15:18:47 -0700
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jim Fehlig <jfehlig@suse.com>
References: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
In-Reply-To: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
X-Enigmail-Version: 1.4.6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net;
	s=q20121106; t=1354832330;
	bh=XwNFkqhVndjL99UTOsWFEaPpXbz4A9HbwcaSErIMIlI=;
	h=Received:Received:Message-ID:Date:From:MIME-Version:To:Subject:
	Content-Type;
	b=oMwHAR06ou5Zj/Lalc7B0yzoQ1brOylWPnqwehYf5L3FYxxjry99M1nhCOhyPua0e
	Z3fwqbLeox3iN1v/S/AYUHXM7+MPZRBPHB+tNOGLCiu3o9f04ys8Ef5zU/Cc5ZXbel
	wRa5LffKdeYMthsmwA05B/kOnMWjSu/tfoAHOqEYiT8opzF07MY8onTrGZJCMvS2hb
	C03n1MdZhYaOZWMBpdCZrF/euuunx+9uunF1LKZ5/YelYFbL4bhkf027uXFAwVETo0
	y2GsKNIsOg87dQOqK23VR+FvPETpKsawQ34wQVXlX9aZgVz+EU/BQqBN2vQK8tbpng
	L6WLasUXq8nVQ==
Cc: libvir-list@redhat.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] [PATCH] Convert libxl driver to Xen 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1164622548883492165=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============1164622548883492165==
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------enigA940809F03E23C6DDE448577"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigA940809F03E23C6DDE448577
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 11/30/2012 03:13 PM, Jim Fehlig wrote:
> Based on a patch originally authored by Daniel De Graaf
>
>   http://lists.xen.org/archives/html/xen-devel/2012-05/msg00565.html
>=20
> This patch converts the Xen libxl driver to support only Xen >=3D 4.2.
> Support for Xen 4.1 libxl is dropped since that version of libxl is
> designated 'technology preview' only and is incompatible with Xen 4.2
> libxl.  Additionally, the default toolstack in Xen 4.1 is still xend,
> for which libvirt has a stable, functional driver.
> ---
> V2:
>   Remove 128 vcpu limit.
>   Remove split_string_into_string_list() function copied from xen
>   sources since libvirt now has virStringSplit().

Tested on Fedora 18, with its use of xen 4.2.  ACK; let's get this pushed=
=2E

> @@ -62,7 +64,6 @@ struct guest_arch {
>  static const char *xen_cap_re =3D "(xen|hvm)-[[:digit:]]+\\.[[:digit:]=
]+-(x86_32|x86_64|ia64|powerpc64)(p|be)?";
>  static regex_t xen_cap_rec;
> =20
> -
>  static int
>  libxlNextFreeVncPort(libxlDriverPrivatePtr driver, int startPort)
>  {

This looks like a spurious whitespace change in isolation, but as long
as the overall file is consistent on one vs. two blank lines before
functions, I don't care if you keep or drop this hunk.


--=20
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


--------------enigA940809F03E23C6DDE448577
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Public key at http://people.redhat.com/eblake/eblake.gpg
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBCAAGBQJQwRnHAAoJEKeha0olJ0NqMywH/jJfP1TmDcfiB2EHnfuUfnp4
chNJXBXNfk4L5fu+2gORRtndjNLaKkyjCmf+Hk9AP4jAGdfx/jXuSiry9nSdzIOK
8lyeJVaXQ1aElb/ZlbU9oayOYcGyGFfb/tZwgWErMwefcKSLZmeLwpBAEz4U/KAn
gfNRXOdvVBZTsaPYgbpP+DGO1knRAX+Xm6K1nZRhGvff+UYU62w0eEc9s/JwL1D9
kEoHYcRg7kcxUw1mC0hvsirNYIQ8bwnjwYzJ5IUny5/A1L7p/Ic0ww3+ndInZCR6
gT20CH/Uw1nH+VRtK3Yelm0iFqB+VX/3H80SjK0H1L6GDpMr49XVI4LzCvuGpPA=
=mmwj
-----END PGP SIGNATURE-----

--------------enigA940809F03E23C6DDE448577--


--===============1164622548883492165==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1164622548883492165==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 22:19:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 22:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgjmP-0001VQ-NS; Thu, 06 Dec 2012 22:18:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eblake@redhat.com>) id 1TgjmO-0001VK-Fp
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 22:18:52 +0000
Received: from [85.158.139.211:63766] by server-12.bemta-5.messagelabs.com id
	BC/FF-02886-BC911C05; Thu, 06 Dec 2012 22:18:51 +0000
X-Env-Sender: eblake@redhat.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1354832330!19348893!1
X-Originating-IP: [76.96.30.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzYuOTYuMzAuMjQgPT4gMTQ1NDE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23010 invoked from network); 6 Dec 2012 22:18:50 -0000
Received: from qmta02.emeryville.ca.mail.comcast.net (HELO
	qmta02.emeryville.ca.mail.comcast.net) (76.96.30.24)
	by server-4.tower-206.messagelabs.com with SMTP;
	6 Dec 2012 22:18:50 -0000
Received: from omta13.emeryville.ca.mail.comcast.net ([76.96.30.52])
	by qmta02.emeryville.ca.mail.comcast.net with comcast
	id YJA31k00M17UAYkA2NJq8l; Thu, 06 Dec 2012 22:18:50 +0000
Received: from [192.168.0.6] ([24.10.251.25])
	by omta13.emeryville.ca.mail.comcast.net with comcast
	id YNJo1k0060ZdyUg8ZNJoZC; Thu, 06 Dec 2012 22:18:49 +0000
Message-ID: <50C119C7.807@redhat.com>
Date: Thu, 06 Dec 2012 15:18:47 -0700
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jim Fehlig <jfehlig@suse.com>
References: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
In-Reply-To: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
X-Enigmail-Version: 1.4.6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net;
	s=q20121106; t=1354832330;
	bh=XwNFkqhVndjL99UTOsWFEaPpXbz4A9HbwcaSErIMIlI=;
	h=Received:Received:Message-ID:Date:From:MIME-Version:To:Subject:
	Content-Type;
	b=oMwHAR06ou5Zj/Lalc7B0yzoQ1brOylWPnqwehYf5L3FYxxjry99M1nhCOhyPua0e
	Z3fwqbLeox3iN1v/S/AYUHXM7+MPZRBPHB+tNOGLCiu3o9f04ys8Ef5zU/Cc5ZXbel
	wRa5LffKdeYMthsmwA05B/kOnMWjSu/tfoAHOqEYiT8opzF07MY8onTrGZJCMvS2hb
	C03n1MdZhYaOZWMBpdCZrF/euuunx+9uunF1LKZ5/YelYFbL4bhkf027uXFAwVETo0
	y2GsKNIsOg87dQOqK23VR+FvPETpKsawQ34wQVXlX9aZgVz+EU/BQqBN2vQK8tbpng
	L6WLasUXq8nVQ==
Cc: libvir-list@redhat.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] [PATCH] Convert libxl driver to Xen 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1164622548883492165=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============1164622548883492165==
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------enigA940809F03E23C6DDE448577"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigA940809F03E23C6DDE448577
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 11/30/2012 03:13 PM, Jim Fehlig wrote:
> Based on a patch originally authored by Daniel De Graaf
>=20
>   http://lists.xen.org/archives/html/xen-devel/2012-05/msg00565.html
>=20
> This patch converts the Xen libxl driver to support only Xen >=3D 4.2.
> Support for Xen 4.1 libxl is dropped since that version of libxl is
> designated 'technology preview' only and is incompatible with Xen 4.2
> libxl.  Additionally, the default toolstack in Xen 4.1 is still xend,
> for which libvirt has a stable, functional driver.
> ---
> V2:
>   Remove 128 vcpu limit.
>   Remove split_string_into_string_list() function copied from xen
>   sources since libvirt now has virStringSplit().

Tested on Fedora 18, with its use of xen 4.2.  ACK; let's get this pushed=
=2E

> @@ -62,7 +64,6 @@ struct guest_arch {
>  static const char *xen_cap_re =3D "(xen|hvm)-[[:digit:]]+\\.[[:digit:]=
]+-(x86_32|x86_64|ia64|powerpc64)(p|be)?";
>  static regex_t xen_cap_rec;
> =20
> -
>  static int
>  libxlNextFreeVncPort(libxlDriverPrivatePtr driver, int startPort)
>  {

This looks like a spurious whitespace change in isolation, but as long
as the overall file is consistent on one vs. two blank lines before
functions, I don't care if you keep or drop this hunk.


--=20
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


--------------enigA940809F03E23C6DDE448577
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Public key at http://people.redhat.com/eblake/eblake.gpg
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBCAAGBQJQwRnHAAoJEKeha0olJ0NqMywH/jJfP1TmDcfiB2EHnfuUfnp4
chNJXBXNfk4L5fu+2gORRtndjNLaKkyjCmf+Hk9AP4jAGdfx/jXuSiry9nSdzIOK
8lyeJVaXQ1aElb/ZlbU9oayOYcGyGFfb/tZwgWErMwefcKSLZmeLwpBAEz4U/KAn
gfNRXOdvVBZTsaPYgbpP+DGO1knRAX+Xm6K1nZRhGvff+UYU62w0eEc9s/JwL1D9
kEoHYcRg7kcxUw1mC0hvsirNYIQ8bwnjwYzJ5IUny5/A1L7p/Ic0ww3+ndInZCR6
gT20CH/Uw1nH+VRtK3Yelm0iFqB+VX/3H80SjK0H1L6GDpMr49XVI4LzCvuGpPA=
=mmwj
-----END PGP SIGNATURE-----

--------------enigA940809F03E23C6DDE448577--


--===============1164622548883492165==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1164622548883492165==--


From xen-devel-bounces@lists.xen.org Thu Dec 06 22:28:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 22:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgjvg-00022z-Vm; Thu, 06 Dec 2012 22:28:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1Tgjvf-00022u-Nz
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 22:28:28 +0000
Received: from [85.158.137.99:23131] by server-8.bemta-3.messagelabs.com id
	61/13-07786-60C11C05; Thu, 06 Dec 2012 22:28:22 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354832896!12908431!1
X-Originating-IP: [216.32.180.13]
X-SpamReason: No, hits=0.7 required=7.0 tests=DATE_IN_PAST_03_06
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19114 invoked from network); 6 Dec 2012 22:28:17 -0000
Received: from va3ehsobe003.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.13)
	by server-8.tower-217.messagelabs.com with AES128-SHA encrypted SMTP;
	6 Dec 2012 22:28:17 -0000
Received: from mail223-va3-R.bigfish.com (10.7.14.244) by
	VA3EHSOBE005.bigfish.com (10.7.40.25) with Microsoft SMTP Server id
	14.1.225.23; Thu, 6 Dec 2012 22:28:15 +0000
Received: from mail223-va3 (localhost [127.0.0.1])	by
	mail223-va3-R.bigfish.com (Postfix) with ESMTP id 93CDBB00223;
	Thu,  6 Dec 2012 22:28:15 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1de0h1202h1d1ah1d2ahzz8275bhz2dh668h839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1155h)
Received: from mail223-va3 (localhost.localdomain [127.0.0.1]) by mail223-va3
	(MessageSwitch) id 1354832892227089_8483;
	Thu,  6 Dec 2012 22:28:12 +0000 (UTC)
Received: from VA3EHSMHS014.bigfish.com (unknown [10.7.14.240])	by
	mail223-va3.bigfish.com (Postfix) with ESMTP id 2A7A2180065;
	Thu,  6 Dec 2012 22:28:12 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	VA3EHSMHS014.bigfish.com (10.7.99.24) with Microsoft SMTP Server id
	14.1.225.23; Thu, 6 Dec 2012 22:28:11 +0000
X-WSS-ID: 0MEMRQV-02-L87-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2C2FBC80C5;	Thu,  6 Dec 2012 16:28:06 -0600 (CST)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 6 Dec 2012 16:28:13 -0600
Received: from linux-62wg.amd.com (163.181.55.254) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server id 14.2.318.4; Thu, 6 Dec 2012
	16:28:08 -0600
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
To: <JBeulich@suse.com>, <Ian.Campbell@citrix.com>
Date: Thu, 6 Dec 2012 13:28:11 -0500
Message-ID: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-OriginatorOrg: amd.com
Cc: boris.ostrovsky@amd.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] x86/ucode: Improve error handling and container
	file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not report an error when a patch is not applicable to the current processor;
simply skip it and move on to the next patch in the container file.

Process container file to the end instead of stopping at the first
applicable patch.

Log the fact that a patch has been applied at KERN_WARNING level, add
CPU number to debug messages.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
---
 xen/arch/x86/microcode_amd.c |   69 +++++++++++++++++++++++-------------------
 1 file changed, 38 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
index 7a54001..bb5968e 100644
--- a/xen/arch/x86/microcode_amd.c
+++ b/xen/arch/x86/microcode_amd.c
@@ -88,13 +88,13 @@ static int collect_cpu_info(int cpu, struct cpu_signature *csig)
 
     rdmsrl(MSR_AMD_PATCHLEVEL, csig->rev);
 
-    printk(KERN_DEBUG "microcode: collect_cpu_info: patch_id=%#x\n",
-           csig->rev);
+    printk(KERN_DEBUG "microcode: CPU%d collect_cpu_info: patch_id=%#x\n",
+           cpu, csig->rev);
 
     return 0;
 }
 
-static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
+static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
 {
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
     const struct microcode_header_amd *mc_header = mc_amd->mpb;
@@ -125,11 +125,16 @@ static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
         printk(KERN_DEBUG "microcode: CPU%d patch does not match "
                "(patch is %x, cpu base id is %x) \n",
                cpu, mc_header->processor_rev_id, equiv_cpu_id);
-        return -EINVAL;
+        return 0;
     }
 
     if ( mc_header->patch_id <= uci->cpu_sig.rev )
+    {
+        printk(KERN_DEBUG "microcode: CPU%d patching is not needed: "
+               "patch provides level 0x%x, cpu is at 0x%x \n",
+               cpu, mc_header->patch_id, uci->cpu_sig.rev);
         return 0;
+    }
 
     printk(KERN_DEBUG "microcode: CPU%d found a matching microcode "
            "update with version %#x (current=%#x)\n",
@@ -173,7 +178,7 @@ static int apply_microcode(int cpu)
         return -EIO;
     }
 
-    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
+    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
            cpu, uci->cpu_sig.rev, hdr->patch_id);
 
     uci->cpu_sig.rev = rev;
@@ -181,7 +186,7 @@ static int apply_microcode(int cpu)
     return 0;
 }
 
-static int get_next_ucode_from_buffer_amd(
+static int get_ucode_from_buffer_amd(
     struct microcode_amd *mc_amd,
     const void *buf,
     size_t bufsize,
@@ -194,8 +199,12 @@ static int get_next_ucode_from_buffer_amd(
     off = *offset;
 
     /* No more data */
-    if ( off >= bufsize )
-        return 1;
+    if ( off >= bufsize ) 
+    {
+        printk(KERN_ERR "microcode: error! "
+               "ucode buffer overrun\n");
+        return -EINVAL;
+    }
 
     mpbuf = (const struct mpbhdr *)&bufp[off];
     if ( mpbuf->type != UCODE_UCODE_TYPE )
@@ -205,8 +214,8 @@ static int get_next_ucode_from_buffer_amd(
         return -EINVAL;
     }
 
-    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
-           bufsize, mpbuf->len, off);
+    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu\n",
+           raw_smp_processor_id(), bufsize, mpbuf->len, off);
 
     if ( (off + mpbuf->len) > bufsize )
     {
@@ -278,8 +287,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
 {
     struct microcode_amd *mc_amd, *mc_old;
     size_t offset = bufsize;
+    size_t last_offset, applied_offset = 0;
     int error = 0;
-    int ret;
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
 
     /* We should bind the task to the CPU */
@@ -321,33 +330,32 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
      */
     mc_amd->mpb = NULL;
     mc_amd->mpb_size = 0;
-    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
-                                                  &offset)) == 0 )
+    last_offset = offset;
+    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
+                                               &offset)) == 0 )
     {
-        error = microcode_fits(mc_amd, cpu);
-        if (error <= 0)
-            continue;
+        if ( microcode_fits(mc_amd, cpu) )
+            if ( apply_microcode(cpu) == 0 )
+                applied_offset = last_offset;
 
-        error = apply_microcode(cpu);
-        if (error == 0)
-        {
-            error = 1;
+        last_offset = offset;
+
+        if ( offset >= bufsize )
             break;
-        }
     }
 
-    if ( ret < 0 )
-        error = ret;
-
     /* On success keep the microcode patch for
      * re-apply on resume.
      */
-    if ( error == 1 )
+    if ( applied_offset != 0 )
     {
-        xfree(mc_old);
-        error = 0;
+        error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
+                                          &applied_offset);
+        if (error == 0)
+            xfree(mc_old);
     }
-    else
+
+    if ( applied_offset == 0 || error != 0 )
     {
         xfree(mc_amd);
         uci->mc.mc_amd = mc_old;
@@ -364,10 +372,9 @@ static int microcode_resume_match(int cpu, const void *mc)
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
     struct microcode_amd *mc_amd = uci->mc.mc_amd;
     const struct microcode_amd *src = mc;
-    int res = microcode_fits(src, cpu);
 
-    if ( res <= 0 )
-        return res;
+    if ( microcode_fits(src, cpu) == 0 )
+        return 0;
 
     if ( src != mc_amd )
     {
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 06 23:21:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Dec 2012 23:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgkkN-0003Fs-9y; Thu, 06 Dec 2012 23:20:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TgkkL-0003Fl-Nb
	for xen-devel@lists.xen.org; Thu, 06 Dec 2012 23:20:49 +0000
Received: from [85.158.143.99:44097] by server-1.bemta-4.messagelabs.com id
	D1/77-28401-15821C05; Thu, 06 Dec 2012 23:20:49 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354836047!27794940!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5693 invoked from network); 6 Dec 2012 23:20:48 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-3.tower-216.messagelabs.com with SMTP;
	6 Dec 2012 23:20:48 -0000
Received: from [137.65.220.93] ([137.65.220.93])
	by mail.novell.com with ESMTP; Thu, 06 Dec 2012 16:20:44 -0700
Message-ID: <50C1284B.2060303@suse.com>
Date: Thu, 06 Dec 2012 16:20:43 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Eric Blake <eblake@redhat.com>
References: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
	<50C119C7.807@redhat.com>
In-Reply-To: <50C119C7.807@redhat.com>
Cc: libvir-list@redhat.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] [PATCH] Convert libxl driver to Xen 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Eric Blake wrote:
> On 11/30/2012 03:13 PM, Jim Fehlig wrote:
>   
>> Based on a patch originally authored by Daniel De Graaf
>>
>>   http://lists.xen.org/archives/html/xen-devel/2012-05/msg00565.html
>>
>> This patch converts the Xen libxl driver to support only Xen >= 4.2.
>> Support for Xen 4.1 libxl is dropped since that version of libxl is
>> designated 'technology preview' only and is incompatible with Xen 4.2
>> libxl.  Additionally, the default toolstack in Xen 4.1 is still xend,
>> for which libvirt has a stable, functional driver.
>> ---
>> V2:
>>   Remove 128 vcpu limit.
>>   Remove split_string_into_string_list() function copied from xen
>>   sources since libvirt now has virStringSplit().
>>     
>
> Tested on Fedora 18, with its use of xen 4.2.  ACK; let's get this pushed.
>   

Thanks, pushed.

>   
>> @@ -62,7 +64,6 @@ struct guest_arch {
>>  static const char *xen_cap_re = "(xen|hvm)-[[:digit:]]+\\.[[:digit:]]+-(x86_32|x86_64|ia64|powerpc64)(p|be)?";
>>  static regex_t xen_cap_rec;
>>  
>> -
>>  static int
>>  libxlNextFreeVncPort(libxlDriverPrivatePtr driver, int startPort)
>>  {
>>     
>
> This looks like a spurious whitespace change in isolation, but as long
> as the overall file is consistent on one vs. two blank lines before
> functions, I don't care if you keep or drop this hunk.
>   

That was spurious, and removed before pushing.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 01:13:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 01:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgmV3-0001W3-IL; Fri, 07 Dec 2012 01:13:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgmV1-0001Vy-9x
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 01:13:07 +0000
Received: from [85.158.139.211:19029] by server-13.bemta-5.messagelabs.com id
	75/4D-27809-2A241C05; Fri, 07 Dec 2012 01:13:06 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354842785!19414102!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30946 invoked from network); 7 Dec 2012 01:13:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 01:13:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,233,1355097600"; d="scan'208";a="16212672"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 01:13:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 01:13:05 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgmUz-0005kQ-6h;
	Fri, 07 Dec 2012 01:13:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgmUy-0006de-V8;
	Fri, 07 Dec 2012 01:13:05 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14583-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 01:13:05 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14583: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14583 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14583/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-i386-i386-xl            3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-pair 4 host-install/dst_host(4) broken in 14574 REGR. vs. 14566
 test-amd64-i386-pair 3 host-install/src_host(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xl           3 host-install(3) broken in 14574 REGR. vs. 14566
 test-i386-i386-pv            3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xl-credit2   3 host-install(3) broken in 14574 REGR. vs. 14566
 test-i386-i386-pair 4 host-install/dst_host(4) broken in 14574 REGR. vs. 14566
 test-i386-i386-pair 3 host-install/src_host(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xl-multivcpu 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-i386-i386-win           3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-win-vcpus1   3 host-install(3) broken in 14574 REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3 3 host-install(3) broken in 14574 REGR. vs. 14566
 test-i386-i386-xl-winxpsp3   3 host-install(3) broken in 14574 REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3 3 host-install(3) broken in 14574 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check fail in 14574 never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check fail in 14574 never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check fail in 14574 never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check fail in 14574 never pass
 test-i386-i386-qemut-win     16 leak-check/check      fail in 14574 never pass
 test-i386-i386-xl-win        13 guest-stop            fail in 14574 never pass
 test-i386-i386-xl-qemut-win  13 guest-stop            fail in 14574 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 14574 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 14574 never pass
 test-amd64-i386-win          16 leak-check/check      fail in 14574 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 14574 never pass
 test-amd64-i386-qemut-win    16 leak-check/check      fail in 14574 never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop        fail in 14574 never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
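    
    [Editorial note: the bounding described above can be illustrated with a
    minimal sketch.  This is not the actual Xen code; the function name
    extent_order_ok() and the MAX_ORDER value below are simplified,
    hypothetical stand-ins for the checks added to populate_physmap(),
    decrease_reservation(), and memory_exchange().]
    
    ```c
    #include <stdbool.h>
    
    /* Illustrative bound only; in Xen, MAX_ORDER caps the largest
     * supported allocation order. */
    #define MAX_ORDER 20
    
    /* Reject guest-specified extent orders above MAX_ORDER, so that
     * loops iterating over (1UL << order) pages stay bounded rather
     * than running almost unbounded on attacker-chosen input. */
    static bool extent_order_ok(unsigned int order)
    {
        return order <= MAX_ORDER;
    }
    ```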
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
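
    [Editorial note: the XSA-31 fix described above amounts to rejecting
    guest-supplied extent orders above MAX_ORDER before entering the
    allocation loops. A minimal stand-alone sketch of that check follows;
    the MAX_ORDER value and function name here are illustrative, not
    Xen's actual code.]

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Illustrative bound; Xen's real MAX_ORDER differs. */
    #define MAX_ORDER 20

    /* Reject unbounded guest-supplied orders up front, as the fix does
     * in populate_physmap(), decrease_reservation() and the "in" side
     * of memory_exchange(). Architecturally PADDR_BITS - PAGE_SHIFT
     * would suffice; the fix constrains further to MAX_ORDER. */
    static bool extent_order_ok(unsigned int order)
    {
        return order <= MAX_ORDER;
    }

    int main(void)
    {
        assert(extent_order_ok(0));
        assert(extent_order_ok(MAX_ORDER));
        assert(!extent_order_ok(MAX_ORDER + 1));
        assert(!extent_order_ok(~0u)); /* hostile guest value */
        return 0;
    }
    ```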
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 02:22:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 02:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgnZL-0002y8-LL; Fri, 07 Dec 2012 02:21:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1TgnZK-0002xu-AL
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 02:21:38 +0000
Received: from [193.109.254.147:32418] by server-10.bemta-14.messagelabs.com
	id 5C/AC-31741-1B251C05; Fri, 07 Dec 2012 02:21:37 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1354846895!2029405!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAyMDc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13648 invoked from network); 7 Dec 2012 02:21:36 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 02:21:36 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB72LWpJ027682
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 02:21:33 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB72LWRK022644
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 02:21:32 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB72LWpx006250; Thu, 6 Dec 2012 20:21:32 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 06 Dec 2012 18:21:32 -0800
Date: Thu, 6 Dec 2012 18:21:31 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20121206182131.7b417212@mantra.us.oracle.com>
In-Reply-To: <50C0866202000078000AE787@nat28.tlf.novell.com>
References: <20121205174153.76fa5dd1@mantra.us.oracle.com>
	<50C0866202000078000AE787@nat28.tlf.novell.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: IanCampbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] page refcount on pages mapped by the toolstack..
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 06 Dec 2012 10:49:54 +0000
"Jan Beulich" <JBeulich@suse.com> wrote:
> 
> Did you perhaps not monitor the changes to the refcnt closely
> enough? It ought to be 2 when the guest is up (on reference for
> the _PGC_allocated bit, and another for it to be mapped
> somewhere in the guest). I.e. between Dom0 creating the guest
> (and touching its memory) and the guest actually starting, there
> could be further adjustments to the refcnt that simply sum up to
> zero.

Correct, it's 2 when the guest is up. Then relinquish_memory() seems to
bring it to 0 when domain destroy is called. 

For PVH, I somehow end up with 3 when the guest is up, so when the guest is
destroyed the mfn's are left around with a refcnt of 1. I take a refcnt on
the page in Xen when doing xen_remap_domain_mfn_range() in dom0, and hold on
to it until xen_unmap_domain_mfn_range(). The unmap results in a call to
XENMEM_remove_from_physmap, where I do put_page if it's not a grant page
and is from a foreign domain....  Debugging that right now.

Thanks for the help,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Dec 07 03:13:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 03:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgoMt-00045K-TX; Fri, 07 Dec 2012 03:12:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgoMs-00045F-Pn
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 03:12:51 +0000
Received: from [85.158.139.211:2333] by server-13.bemta-5.messagelabs.com id
	C6/DA-27809-2BE51C05; Fri, 07 Dec 2012 03:12:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354849968!18628273!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18416 invoked from network); 7 Dec 2012 03:12:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 03:12:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16213384"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 03:12:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 03:12:48 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgoMq-00060H-0b;
	Fri, 07 Dec 2012 03:12:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgoMp-0002Jf-Uy;
	Fri, 07 Dec 2012 03:12:48 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14584-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 03:12:47 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14584: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14584 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14584/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
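
    [Editorial note: the fix above propagates a failed copy back to the
    guest as -EFAULT instead of silently succeeding. A minimal sketch,
    with copy_to_guest_sim() as a hypothetical stand-in for Xen's
    copy-to-guest primitive (which returns the number of bytes left
    uncopied):]

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    /* Stand-in for a copy-to-guest helper: returns bytes NOT copied,
     * so nonzero means failure. A NULL destination simulates a bad
     * guest virtual address. */
    static size_t copy_to_guest_sim(void *dst, const void *src, size_t n)
    {
        if (dst == NULL)
            return n;            /* nothing copied: fault */
        memcpy(dst, src, n);
        return 0;                /* success */
    }

    /* Sketch of the corrected platform op: report -EFAULT when the
     * result cannot be written back to guest memory. */
    static int do_pm_pdc_sim(unsigned int *guest_buf)
    {
        unsigned int pdc_result = 0x1;   /* made-up payload */

        if (copy_to_guest_sim(guest_buf, &pdc_result,
                              sizeof(pdc_result)))
            return -EFAULT;              /* the fix */
        return 0;
    }

    int main(void)
    {
        unsigned int buf;

        assert(do_pm_pdc_sim(&buf) == 0);
        assert(do_pm_pdc_sim(NULL) == -EFAULT);
        return 0;
    }
    ```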
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
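
    [Editorial note: the XSA-32 behaviour above can be sketched as
    follows. The frame table, bound, and function name are hypothetical;
    the point is only that the non-translated path must validate the GFN
    too, rather than blindly treating it as a valid MFN.]

    ```c
    #include <assert.h>
    #include <stddef.h>

    #define MAX_GFN 1024            /* illustrative bound */

    struct page { int dummy; };
    static struct page frame_table[MAX_GFN];

    /* Sketch: return NULL for invalid GFNs in both the translated and
     * the non-translated (gfn == mfn) case. */
    static struct page *get_page_from_gfn_sim(unsigned long gfn,
                                              int translated)
    {
        (void)translated;           /* check applies either way */
        if (gfn >= MAX_GFN)
            return NULL;            /* the fix: reject invalid GFNs */
        return &frame_table[gfn];
    }

    int main(void)
    {
        assert(get_page_from_gfn_sim(0, 0) != NULL);
        assert(get_page_from_gfn_sim(MAX_GFN, 0) == NULL);
        assert(get_page_from_gfn_sim(~0ul, 1) == NULL);
        return 0;
    }
    ```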
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 03:13:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 03:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgoMt-00045K-TX; Fri, 07 Dec 2012 03:12:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgoMs-00045F-Pn
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 03:12:51 +0000
Received: from [85.158.139.211:2333] by server-13.bemta-5.messagelabs.com id
	C6/DA-27809-2BE51C05; Fri, 07 Dec 2012 03:12:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354849968!18628273!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18416 invoked from network); 7 Dec 2012 03:12:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 03:12:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16213384"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 03:12:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 03:12:48 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgoMq-00060H-0b;
	Fri, 07 Dec 2012 03:12:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgoMp-0002Jf-Uy;
	Fri, 07 Dec 2012 03:12:48 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14584-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 03:12:47 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14584: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14584 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14584/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 03:25:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 03:25:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgoYo-0004G0-5s; Fri, 07 Dec 2012 03:25:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1TgoYm-0004Fv-8o
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 03:25:08 +0000
Received: from [85.158.139.211:5523] by server-8.bemta-5.messagelabs.com id
	20/7D-06050-39161C05; Fri, 07 Dec 2012 03:25:07 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1354850705!18629029!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAyMDc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5767 invoked from network); 7 Dec 2012 03:25:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 03:25:06 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB73P3Pi005139
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Fri, 7 Dec 2012 03:25:04 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB73P2pR007505
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Fri, 7 Dec 2012 03:25:03 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB73P2hN002697
	for <xen-devel@lists.xen.org>; Thu, 6 Dec 2012 21:25:02 -0600
Received: from zhenzhong2.localdomain (/10.182.39.88)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 06 Dec 2012 19:25:02 -0800
Message-ID: <50C161AD.9020803@oracle.com>
Date: Fri, 07 Dec 2012 11:25:33 +0800
From: DuanZhenzhong <zhenzhong.duan@oracle.com>
Organization: Oracle Corporation
User-Agent: Thunderbird 2.0.0.24 (X11/20101209)
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Joe Jin <joe.jin@oracle.com>
Subject: [Xen-devel] rombios: add support for special CHS layout (spt=32)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When booting a Windows VM whose image was converted with dd from a SCSI disk, it
fails with "Error Loading Operating System". The root cause is that rombios does
not simulate the disk's logical CHS geometry properly; the real BIOS uses spt=32
here.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
--- a/tools/firmware/rombios/rombios.c	2011-12-09 00:50:28.000000000 +0800
+++ b/tools/firmware/rombios/rombios.c	2012-11-20 09:42:50.000000000 +0800
@@ -2674,6 +2674,8 @@ void ata_detect( )
       Bit32u sectors_low, sectors_high;
       Bit16u cylinders, heads, spt, blksize;
       Bit8u  translation, removable, mode;
+      Bit8u i;
+      Bit8u *p;
 
       // default mode to PIO16
       mode = ATA_MODE_PIO16;
@@ -2738,14 +2740,32 @@ void ata_detect( )
         case ATA_TRANSLATION_NONE:
           break;
         case ATA_TRANSLATION_LBA:
-          spt = 63;
-          sectors_low /= 63;
-          heads = sectors_low / 1024;
-          if (heads>128) heads = 255;
-          else if (heads>64) heads = 128;
-          else if (heads>32) heads = 64;
-          else if (heads>16) heads = 32;
-          else heads=16;
+          spt = heads = 0;
+          if (ata_cmd_data_in(device,ATA_CMD_READ_SECTORS, 1, 0, 0, 1, 0L, 0L, get_SS(),buffer) !=0 )
+            BX_PANIC("ata-detect: Failed to read first sector\n");
+          for(i=0; i<4; i++) {
+            p = buffer + 0x1be + i * 16;
+            if (read_dword(get_SS(), p+12) && read_byte(get_SS(), p+5)) {
+              /* We make the assumption that the partition terminates on
+                 a cylinder boundary */
+              heads = read_byte(get_SS(), p+5) + 1;
+              spt = read_byte(get_SS(), p+6) & 63;
+              if (spt != 0)
+                break;
+            }
+          }
+          if (spt == 32) {
+            sectors_low /= spt;
+          } else {
+            spt = 63;
+            sectors_low /= 63;
+            heads = sectors_low / 1024;
+            if (heads>128) heads = 255;
+            else if (heads>64) heads = 128;
+            else if (heads>32) heads = 64;
+            else if (heads>16) heads = 32;
+            else heads=16;
+          }
           cylinders = sectors_low / heads;
           break;
         case ATA_TRANSLATION_RECHS:

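For readers following the diff, the heuristic the patch adds can be sketched in
Python (a hedged illustration only; parse_geometry and its argument names are
mine, not rombios identifiers): scan the four 16-byte MBR partition entries at
offset 0x1be, take heads from the end-head byte plus one and spt from the low
six bits of the end-sector byte, and fall back to the classic spt=63 LBA
translation unless spt turns out to be 32.

```python
# Hedged sketch of the patched ata_detect() geometry logic, not rombios source.
def parse_geometry(mbr, total_sectors):
    """Return (cylinders, heads, spt) the way the patched BIOS would."""
    heads = spt = 0
    for i in range(4):                       # four MBR partition entries
        p = 0x1BE + i * 16
        sectors = int.from_bytes(mbr[p + 12:p + 16], "little")
        if sectors and mbr[p + 5]:           # non-empty entry with an end head
            heads = mbr[p + 5] + 1           # end-head byte is zero-based
            spt = mbr[p + 6] & 63            # low 6 bits of end-sector byte
            if spt:
                break
    if spt == 32:                            # special layout: trust the MBR
        total_sectors //= spt
    else:                                    # classic LBA-assist translation
        spt = 63
        total_sectors //= 63
        heads = total_sectors // 1024
        if heads > 128:
            heads = 255
        elif heads > 64:
            heads = 128
        elif heads > 32:
            heads = 64
        elif heads > 16:
            heads = 32
        else:
            heads = 16
    return total_sectors // heads, heads, spt
```

With spt=32 recorded in the partition table, as on dd-converted SCSI disks, the
geometry from the MBR is kept instead of the 63-sectors-per-track fallback, so
the boot loader sees the CHS layout the real BIOS would have reported.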

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 04:12:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 04:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgpI1-00056D-2f; Fri, 07 Dec 2012 04:11:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgpHz-000568-25
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 04:11:51 +0000
Received: from [85.158.139.83:64361] by server-16.bemta-5.messagelabs.com id
	15/AB-21311-68C61C05; Fri, 07 Dec 2012 04:11:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1354853509!29030857!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24031 invoked from network); 7 Dec 2012 04:11:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 04:11:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16213632"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 04:11:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 04:11:49 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgpHw-00067y-RO;
	Fri, 07 Dec 2012 04:11:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgpHw-000469-IY;
	Fri, 07 Dec 2012 04:11:48 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14589-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 04:11:48 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14589: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14589 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14589/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Dec 07 05:00:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 05:00:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgq2r-00065V-Ba; Fri, 07 Dec 2012 05:00:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgq2q-00065Q-Ls
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 05:00:17 +0000
Received: from [85.158.139.83:47784] by server-1.bemta-5.messagelabs.com id
	DA/30-12813-FD771C05; Fri, 07 Dec 2012 05:00:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354856414!28095326!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21474 invoked from network); 7 Dec 2012 05:00:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 05:00:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16213804"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 05:00:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 05:00:14 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgq2o-0006Et-04;
	Fri, 07 Dec 2012 05:00:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgq2n-0006VY-Vf;
	Fri, 07 Dec 2012 05:00:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14587-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 05:00:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14587: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14587 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14587/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
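    The fix described above amounts to bounds-checking a guest-supplied
    order before looping over 2^order pages. A minimal illustrative sketch
    of that kind of check (the MAX_ORDER value and function name here are
    assumptions for illustration, not the actual Xen code):

    ```c
    #include <assert.h>

    /* Illustrative bound; in Xen the real limit is the MAX_ORDER
     * constant mentioned in the commit message above. */
    #define MAX_ORDER 20

    /* Reject guest-supplied extent orders above the compile-time bound,
     * so a hypercall cannot request an almost unbounded 2^order loop. */
    static int order_is_valid(unsigned int order)
    {
        return order <= MAX_ORDER;
    }

    int main(void)
    {
        assert(order_is_valid(0));
        assert(order_is_valid(MAX_ORDER));
        assert(!order_is_valid(MAX_ORDER + 1));
        assert(!order_is_valid(63));  /* arbitrary large guest value rejected */
        return 0;
    }
    ```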
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 05:00:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 05:00:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgq2r-00065V-Ba; Fri, 07 Dec 2012 05:00:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgq2q-00065Q-Ls
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 05:00:17 +0000
Received: from [85.158.139.83:47784] by server-1.bemta-5.messagelabs.com id
	DA/30-12813-FD771C05; Fri, 07 Dec 2012 05:00:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354856414!28095326!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21474 invoked from network); 7 Dec 2012 05:00:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 05:00:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16213804"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 05:00:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 05:00:14 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgq2o-0006Et-04;
	Fri, 07 Dec 2012 05:00:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgq2n-0006VY-Vf;
	Fri, 07 Dec 2012 05:00:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14587-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 05:00:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14587: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14587 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14587/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 05:47:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 05:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgqmA-0006uX-O1; Fri, 07 Dec 2012 05:47:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1Tgqm8-0006uS-PQ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 05:47:05 +0000
Received: from [193.109.254.147:35437] by server-9.bemta-14.messagelabs.com id
	3A/2F-30773-8D281C05; Fri, 07 Dec 2012 05:47:04 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354859223!8981736!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24204 invoked from network); 7 Dec 2012 05:47:03 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 05:47:03 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so61912eek.32
	for <xen-devel@lists.xen.org>; Thu, 06 Dec 2012 21:46:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:from:date:x-google-sender-auth
	:message-id:subject:to:content-type;
	bh=KrC6n51ssAhw5cug5KuP5OQ4/rr1xYDeM3Q2JGlE2Oo=;
	b=TC3vtvvLrR06bvwxio8TFR+gOPzR9xhtpsapvKOqf83OYClbeVlpLVbYY2qJ6URy4l
	dr8hoLI/LaMN3f1LjAzP9qwLUNaNW77ZIIcVtwHY31tlWOvy90vmQrd4wjO67xVT2FJg
	di8TCvd6+icGA/vOsyn99n+FikM2hyxFmvoWU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:from:date:x-google-sender-auth
	:message-id:subject:to:content-type:x-gm-message-state;
	bh=KrC6n51ssAhw5cug5KuP5OQ4/rr1xYDeM3Q2JGlE2Oo=;
	b=JCRt42K97cSnZXMPN8terMqNUXog8WpKQG1eNxhdSfd9sQv1mb6Bsy9VUFui/Fki0i
	3BieheR3wtRD7rHuxAEL6rNAjeEsr2RG+n1vGTlS7EdhnSVcMXwqmfOVqpRIi/uTfgfc
	8FGP7UJCjs7CZL11lVNGahQPAZAO4xPyvvtj91nNk25T3wtvSLP/DM+ywBUYF1MPJi26
	4atUj2JK4dpc2HqL4FfuJ0Jd2boE+YJ3b/1h0umQKx+1jr5Ix4duGDamuylWgP5rYtQz
	OP/1sT2K1EECCjSGjU0syZNzQcKnvHRmVAe3ZCW3lIp20pc74XiIZID418hihcwj+aKh
	GnGw==
Received: by 10.14.3.8 with SMTP id 8mr12763880eeg.40.1354859209024; Thu, 06
	Dec 2012 21:46:49 -0800 (PST)
MIME-Version: 1.0
Received: by 10.14.99.204 with HTTP; Thu, 6 Dec 2012 21:46:33 -0800 (PST)
X-Originating-IP: [85.143.161.18]
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 7 Dec 2012 09:46:33 +0400
X-Google-Sender-Auth: pH_yiEgbb8ACfDPZfvI2dHHrC_0
Message-ID: <CACaajQuzns_hFFwR86akZ57Zh9O54zdGOL3yEyqUquVB3W8T5Q@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Gm-Message-State: ALoCoQkmaxJ55F+eUvBCBmbsdwiGvv+lsoJpG+JO9xMnBlovATT54kjg0Ylg7QkjDVyzLN6TCupg
Subject: [Xen-devel] migrate vps from xm to xl toolstack
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, Xen team.
As I understand it, migrating from xm to xl is not possible.
But is it possible to migrate a VPS by hand? For example, on the xm node I
run xm save, and on the xl node I run xl restore... ?

--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 05:49:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 05:49:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgqo0-0006zz-8g; Fri, 07 Dec 2012 05:49:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgqny-0006zt-2N
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 05:48:58 +0000
Received: from [85.158.139.211:54558] by server-9.bemta-5.messagelabs.com id
	85/54-10690-94381C05; Fri, 07 Dec 2012 05:48:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354859336!19438839!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17888 invoked from network); 7 Dec 2012 05:48:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 05:48:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16214440"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 05:48:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 05:48:36 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgqfV-0006Mz-SQ;
	Fri, 07 Dec 2012 05:40:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgqfV-0005Dc-OJ;
	Fri, 07 Dec 2012 05:40:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14590-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 05:40:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14590: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14590 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14590/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 06:07:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 06:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgr5d-0007f6-3S; Fri, 07 Dec 2012 06:07:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgr5b-0007f1-PG
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 06:07:12 +0000
Received: from [193.109.254.147:31823] by server-1.bemta-14.messagelabs.com id
	79/CD-25314-E8781C05; Fri, 07 Dec 2012 06:07:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1354860419!9639408!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 919 invoked from network); 7 Dec 2012 06:07:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 06:07:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,235,1355097600"; d="scan'208";a="16214644"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 06:06:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 06:06:58 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tgr5O-0006Qd-Ht;
	Fri, 07 Dec 2012 06:06:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tgr5O-0000G7-9l;
	Fri, 07 Dec 2012 06:06:58 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14591-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 06:06:58 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14591: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14591 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14591/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 06:38:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 06:38:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgrZQ-00085v-Mm; Fri, 07 Dec 2012 06:38:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgrZP-00085q-AP
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 06:37:59 +0000
Received: from [193.109.254.147:47114] by server-3.bemta-14.messagelabs.com id
	F7/24-01317-6CE81C05; Fri, 07 Dec 2012 06:37:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354862275!4033590!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14941 invoked from network); 7 Dec 2012 06:37:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 06:37:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16215059"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 06:37:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 06:37:55 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgrZK-0006bX-JU;
	Fri, 07 Dec 2012 06:37:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgrZK-0000ec-Ge;
	Fri, 07 Dec 2012 06:37:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14588-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 06:37:54 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14588: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14588 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14588/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14570
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 06:38:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 06:38:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgrZQ-00085v-Mm; Fri, 07 Dec 2012 06:38:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgrZP-00085q-AP
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 06:37:59 +0000
Received: from [193.109.254.147:47114] by server-3.bemta-14.messagelabs.com id
	F7/24-01317-6CE81C05; Fri, 07 Dec 2012 06:37:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1354862275!4033590!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14941 invoked from network); 7 Dec 2012 06:37:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 06:37:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16215059"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 06:37:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 06:37:55 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgrZK-0006bX-JU;
	Fri, 07 Dec 2012 06:37:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgrZK-0000ec-Ge;
	Fri, 07 Dec 2012 06:37:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14588-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 06:37:54 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14588: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14588 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14588/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14570
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 06:49:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 06:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgrkb-0008Pb-Tt; Fri, 07 Dec 2012 06:49:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgrka-0008PW-30
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 06:49:32 +0000
Received: from [85.158.143.35:3440] by server-3.bemta-4.messagelabs.com id
	80/99-18211-B7191C05; Fri, 07 Dec 2012 06:49:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1354862970!11998138!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg1ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22669 invoked from network); 7 Dec 2012 06:49:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 06:49:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16215143"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 06:49:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 06:49:30 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgrkY-0006ex-7C;
	Fri, 07 Dec 2012 06:49:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgrkY-00069b-6p;
	Fri, 07 Dec 2012 06:49:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14586-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 06:49:30 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14586: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14586 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14586/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  12d2786dc549
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 402 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 09:37:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 09:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TguMz-0003Uh-LC; Fri, 07 Dec 2012 09:37:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TguMx-0003Ua-TB
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 09:37:20 +0000
Received: from [85.158.139.211:11557] by server-10.bemta-5.messagelabs.com id
	82/F6-13383-FC8B1C05; Fri, 07 Dec 2012 09:37:19 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1354872952!19401059!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2510 invoked from network); 7 Dec 2012 09:35:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 09:35:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16217928"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 09:35:52 +0000
Received: from [192.168.1.30] (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	09:35:52 +0000
Message-ID: <50C1B877.5000805@citrix.com>
Date: Fri, 7 Dec 2012 10:35:51 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: check for documentation generation
	tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/12 12:06, Ian Campbell wrote:
> diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> new file mode 100644
> index 0000000..b6ab6fe
> --- /dev/null
> +++ b/config/Docs.mk.in
> @@ -0,0 +1,20 @@
> +# Prefix and install folder
> +prefix              := @prefix@
> +PREFIX              := $(prefix)
> +exec_prefix         := @exec_prefix@
> +libdir              := @libdir@
> +LIBDIR              := $(libdir)
> +
> +# Tools
> +PS2PDF              := @PS2PDF@
> +DVIPS               := @DVIPS@
> +LATEX               := @LATEX@
> +FIG2DEV             := @FIG2DEV@
> +LATEX2HTML          := @LATEX2HTML@

Didn't we drop all the LaTeX stuff from Docs? I did a quick grep and
it seems it's still used by the xen-api related docs... What I cannot find
is any user of LATEX2HTML; can't we remove that one?

> @@ -26,10 +26,12 @@ all: build
>
>  .PHONY: build
>  build: html txt man-pages figs
> -	@if which $(DOT) 1>/dev/null 2>/dev/null ; then              \
> -	$(MAKE) -C xen-api build ; else                              \
> -        echo "Graphviz (dot) not installed; skipping xen-api." ; fi
> +ifdef DOT
> +	$(MAKE) -C xen-api build
>  	rm -f *.aux *.dvi *.bbl *.blg *.glo *.idx *.ilg *.log *.ind *.toc
> +else
> +	@echo "Graphviz (dot) not installed; skipping xen-api."
> +endif

Don't we need the latex stuff to build xen-api docs? For the xen-api
Makefile to succeed we seem to need PS2PDF, LATEX, DOT, DVIPS and NEATO.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 09:37:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 09:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TguMw-0003UT-8h; Fri, 07 Dec 2012 09:37:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TguMv-0003UO-4E
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 09:37:17 +0000
Received: from [85.158.139.211:37008] by server-2.bemta-5.messagelabs.com id
	F4/8D-16162-CC8B1C05; Fri, 07 Dec 2012 09:37:16 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1354873035!19509480!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6833 invoked from network); 7 Dec 2012 09:37:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 09:37:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16217965"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 09:37:15 +0000
Received: from [192.168.1.30] (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	09:37:15 +0000
Message-ID: <50C1B8CA.7020609@citrix.com>
Date: Fri, 7 Dec 2012 10:37:14 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1352822539-56930-1-git-send-email-roger.pau@citrix.com>
	<20646.26443.690228.204535@mariner.uk.xensource.com>
	<50BCD2F6.8060106@citrix.com>
	<20672.37281.34306.903202@mariner.uk.xensource.com>
In-Reply-To: <20672.37281.34306.903202@mariner.uk.xensource.com>
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change
 [and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/12 13:37, Ian Jackson wrote:
> Roger Pau Monne writes ("[Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
>> qemu-stubdom was stripping the prefix from the "params" xenstore
>> key in xenstore_parse_domain_config, which was then saved stripped in
>> a variable. In xenstore_process_event we compare the "param" from
>> xenstore (not stripped) with the stripped "param" saved in the
>> variable, which leads to a medium change (even if there isn't any),
>> since we are comparing something like aio:/path/to/file with
>> /path/to/file. This only happens one time, since
>> xenstore_parse_domain_config is the only place where we strip the
>> prefix. The result of this bug is the following:
> 
> Roger Pau Monne writes ("Re: [Xen-devel] [PATCH] qemu-stubdom: prevent useless medium change"):
>> Yes, it's a can of worms indeed.
>>
>> The non-stubdom path is not modified, and the code changes (1st block of
>> the patch) are contained inside a #ifdef STUBDOM (which is not seen on
>> the patch itself, because it's already there).
> 
> Aha!  Indeed.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> This needs to go into 4.2 surely ?

Thanks, and yes, it should be backported to 4.2 stable.

> Ian.
> 
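The stripped-vs-unstripped comparison described in the patch summary above can be sketched as follows. This is a minimal illustration, not the actual qemu code: strip_prefix and medium_changed are hypothetical helpers standing in for what xenstore_parse_domain_config and xenstore_process_event do with the "params" key.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper: drop a driver prefix such as "aio:" from the
 * "params" value, as xenstore_parse_domain_config does once at parse
 * time. */
static const char *strip_prefix(const char *params)
{
    const char *sep = strchr(params, ':');
    return sep ? sep + 1 : params;
}

/* Hypothetical stand-in for the check in xenstore_process_event:
 * returns nonzero when the values differ, i.e. a medium change is
 * reported.  Comparing the saved *stripped* value against the raw
 * (unstripped) value read back from xenstore always differs, which is
 * the spurious medium change the patch fixes. */
static int medium_changed(const char *saved_stripped,
                          const char *from_xenstore)
{
    return strcmp(saved_stripped, from_xenstore) != 0;
}
```

With "aio:/path/to/file" in xenstore and "/path/to/file" saved after stripping, the naive comparison fires once even though the medium never changed.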


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 09:54:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 09:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tguco-0004hZ-F5; Fri, 07 Dec 2012 09:53:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tgucn-0004hS-5h
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 09:53:41 +0000
Received: from [85.158.138.51:5369] by server-10.bemta-3.messagelabs.com id
	93/3C-19806-4ACB1C05; Fri, 07 Dec 2012 09:53:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354874017!27846904!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19712 invoked from network); 7 Dec 2012 09:53:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 09:53:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16218353"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 09:53:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	09:53:37 +0000
Message-ID: <1354874016.31710.3.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 7 Dec 2012 09:53:36 +0000
In-Reply-To: <alpine.DEB.2.02.1212061543230.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<1354787720.17165.59.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212061543230.8801@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/6] xen/arm: introduce map_phys_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 16:06 +0000, Stefano Stabellini wrote:
> On Thu, 6 Dec 2012, Ian Campbell wrote:
> > On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> > > Introduce a function to map a physical memory into virtual memory.
> > > It is going to be used later to map the videoram.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > ---
> > >  xen/arch/arm/mm.c        |   23 +++++++++++++++++++++++
> > >  xen/include/asm-arm/mm.h |    3 +++
> > >  2 files changed, 26 insertions(+), 0 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > > index 68ee9da..418a414 100644
> > > --- a/xen/arch/arm/mm.c
> > > +++ b/xen/arch/arm/mm.c
> > > @@ -376,6 +376,29 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
> > >      frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
> > >  }
> > >  
> > > +/* Map the physical memory range start -  end at the virtual address
> > > + * virt_start in 2MB chunks. start and virt_start have to be 2MB
> > > + * aligned.
> > > + */
> > > +void map_phys_range(paddr_t start, paddr_t end,
> > > +        unsigned long virt_start, unsigned attributes)
> > > +{
> > > +    ASSERT(!(start & ((1 << 21) - 1)));
> > > +    ASSERT(!(virt_start & ((1 << 21) - 1)));
> > > +
> > > +    while ( start < end )
> > > +    {
> > > +        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
> > > +        e.pt.ai = attributes;
> > > +        write_pte(xen_second + second_table_offset(virt_start), e);
> > > +        
> > > +        start += (1<<21);
> > > +        virt_start += (1<<21);
> > > +    }
> > > +
> > > +    flush_xen_data_tlb();
> > 
> > What does this flush? The ptes are flushed by the write_pte aren't they?
> 
> write_pte doesn't flush the tlb at the moment.

Oh right, it just flushes the dcache.

> > Should this be a range over virt_start + len?
> 
> Yes, ideally it would flush only the range virt_start-virt_start+size.

Why doesn't it?

> In practice I think I could remove the flush entirely because we don't
> have any previous mappings at virt_start but I presume that it is not a
> good idea to rely on that?

Better safe than sorry I guess.
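The alignment ASSERTs and the 2MB-chunk loop in the quoted patch can be sketched in isolation as below. These are illustrative helpers only (the constants mirror the patch's `1 << 21` superpage size; the names are not from the Xen tree), useful for checking how many second-level entries a given range produces.

```c
#include <assert.h>
#include <stdint.h>

#define SECOND_SHIFT 21                          /* 2MB superpage */
#define SECOND_SIZE  (UINT64_C(1) << SECOND_SHIFT)
#define SECOND_MASK  (SECOND_SIZE - 1)

/* Mirrors the patch's ASSERTs: map_phys_range requires start and
 * virt_start to be 2MB aligned. */
static int is_2mb_aligned(uint64_t addr)
{
    return (addr & SECOND_MASK) == 0;
}

/* Number of second-level (2MB) entries the patch's while-loop writes
 * for the half-open range [start, end). */
static uint64_t entries_for_range(uint64_t start, uint64_t end)
{
    uint64_t n = 0;
    while ( start < end )
    {
        n++;
        start += SECOND_SIZE;
    }
    return n;
}
```

Note the loop rounds a non-multiple-of-2MB `end` up to the next superpage boundary, since it keeps iterating while `start < end`.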




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 10:08:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 10:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tguqp-0005cN-AH; Fri, 07 Dec 2012 10:08:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tguqn-0005cC-Gk
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 10:08:09 +0000
Received: from [85.158.143.35:49689] by server-2.bemta-4.messagelabs.com id
	2A/D4-30861-800C1C05; Fri, 07 Dec 2012 10:08:08 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1354874881!5534388!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30961 invoked from network); 7 Dec 2012 10:08:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 10:08:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16218811"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 10:08:01 +0000
Received: from [192.168.1.30] (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	10:08:00 +0000
Message-ID: <50C1C000.5020803@citrix.com>
Date: Fri, 7 Dec 2012 11:08:00 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1351097925-26221-1-git-send-email-roger.pau@citrix.com>
	<20121206031455.GA4408@phenom.dumpdata.com>
In-Reply-To: <20121206031455.GA4408@phenom.dumpdata.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] LVM Checksum error when using persistent grants
 (#linux-next + stable/for-jens-3.8)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/12/12 04:14, Konrad Rzeszutek Wilk wrote:
> Hey Roger,
> 
> I am seeing this weird behavior when using #linux-next + stable/for-jens-3.8 tree.
> 
> Basically I can do 'pvscan' on xvd* disk and quite often I get checksum errors:
> 
> # pvscan /dev/xvdf
>   PV /dev/xvdf2   VG VolGroup00        lvm2 [18.88 GiB / 0    free]
>   PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
>   Total: 5 [962.38 GiB] / in use: 5 [962.38 GiB] / in no VG: 0 [0   ]
> # pvscan /dev/xvdf
>   /dev/xvdf2: Checksum error
>   Couldn't read volume group metadata.
>   /dev/xvdf2: Checksum error
>   Couldn't read volume group metadata.
>   PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
>   Total: 4 [943.50 GiB] / in use: 4 [943.50 GiB] / in no VG: 0 [0   ]
> 
> This is with a i386 dom0, 64-bit Xen 4.1.3 hypervisor, and with either
> 64-bit or 32-bit PV or PVHVM guest.
> 
> Have you seen something like this?
> 
> Note, the other LV disks are over iSCSI and are working fine.

Thanks for the report, Konrad; I'm able to reproduce this:

root@debian:~# pvscan -d -v /dev/xvdb2
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes
  PV /dev/xvdb2                      lvm2 [4.99 GiB]
  Total: 1 [4.99 GiB] / in use: 0 [0   ] / in no VG: 1 [4.99 GiB]
root@debian:~# pvscan -d -v /dev/xvdb2
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes
  No matching physical volumes found

What I find strange is that this only happens when using partitions as
LVM PVs; if I use the full disk (/dev/xvdb) as a PV, I'm not able to
reproduce it. I will investigate further.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 10:21:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 10:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgv3c-00064F-LX; Fri, 07 Dec 2012 10:21:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tgv3b-00064A-8G
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 10:21:23 +0000
Received: from [85.158.143.35:15713] by server-2.bemta-4.messagelabs.com id
	66/08-30861-223C1C05; Fri, 07 Dec 2012 10:21:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354875680!15765862!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4683 invoked from network); 7 Dec 2012 10:21:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 10:21:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16219189"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 10:21:17 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	10:21:17 +0000
Message-ID: <1354875675.31710.19.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@amd.com>
Date: Fri, 7 Dec 2012 10:21:15 +0000
In-Reply-To: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
References: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Christoph_Egger@gmx.de" <Christoph_Egger@gmx.de>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/ucode: Improve error handling and
 container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 18:28 +0000, Boris Ostrovsky wrote:
> Do not report an error when a patch is not applicable to the current
> processor; simply skip it and move on to the next patch in the container file.
> 
> Process container file to the end instead of stopping at the first
> applicable patch.
> 
> Log the fact that a patch has been applied at KERN_WARNING level, add
> CPU number to debug messages.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>

This works for me; the logs are:
        (XEN) microcode: CPU0 collect_cpu_info: patch_id=0x6000626
        (XEN) microcode: CPU0 size 5260, block size 2592, offset 60
        (XEN) microcode: CPU0 found a matching microcode update with version 0x6000629 (current=0x6000626)
        (XEN) microcode: CPU0 updated from revision 0x6000626 to 0x6000629
        (XEN) microcode: CPU0 size 5260, block size 2592, offset 2660
        (XEN) microcode: CPU0 patch does not match (patch is 6101, cpu base id is 6012) 
        (XEN) microcode: CPU0 size 5260, block size 2592, offset 60
        (XEN) microcode: CPU1 collect_cpu_info: patch_id=0x6000629
        (XEN) microcode: CPU1 size 5260, block size 2592, offset 60
        (XEN) microcode: CPU1 patching is not needed: patch provides level 0x6000629, cpu is at 0x6000629 
        (XEN) microcode: CPU1 size 5260, block size 2592, offset 2660
        (XEN) microcode: CPU1 patch does not match (patch is 6101, cpu base id is 6012) 

and the overall hypercall reports success.

Tested-by: Ian Campbell <ian.campbell@citrix.com>

Thanks!


> ---
>  xen/arch/x86/microcode_amd.c |   69 +++++++++++++++++++++++-------------------
>  1 file changed, 38 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
> index 7a54001..bb5968e 100644
> --- a/xen/arch/x86/microcode_amd.c
> +++ b/xen/arch/x86/microcode_amd.c
> @@ -88,13 +88,13 @@ static int collect_cpu_info(int cpu, struct cpu_signature *csig)
>  
>      rdmsrl(MSR_AMD_PATCHLEVEL, csig->rev);
>  
> -    printk(KERN_DEBUG "microcode: collect_cpu_info: patch_id=%#x\n",
> -           csig->rev);
> +    printk(KERN_DEBUG "microcode: CPU%d collect_cpu_info: patch_id=%#x\n",
> +           cpu, csig->rev);
>  
>      return 0;
>  }
>  
> -static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
> +static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>  {
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>      const struct microcode_header_amd *mc_header = mc_amd->mpb;
> @@ -125,11 +125,16 @@ static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>          printk(KERN_DEBUG "microcode: CPU%d patch does not match "
>                 "(patch is %x, cpu base id is %x) \n",
>                 cpu, mc_header->processor_rev_id, equiv_cpu_id);
> -        return -EINVAL;
> +        return 0;
>      }
>  
>      if ( mc_header->patch_id <= uci->cpu_sig.rev )
> +    {
> +        printk(KERN_DEBUG "microcode: CPU%d patching is not needed: "
> +               "patch provides level 0x%x, cpu is at 0x%x \n",
> +               cpu, mc_header->patch_id, uci->cpu_sig.rev);
>          return 0;
> +    }
>  
>      printk(KERN_DEBUG "microcode: CPU%d found a matching microcode "
>             "update with version %#x (current=%#x)\n",
> @@ -173,7 +178,7 @@ static int apply_microcode(int cpu)
>          return -EIO;
>      }
>  
> -    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
> +    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
>             cpu, uci->cpu_sig.rev, hdr->patch_id);
>  
>      uci->cpu_sig.rev = rev;
> @@ -181,7 +186,7 @@ static int apply_microcode(int cpu)
>      return 0;
>  }
>  
> -static int get_next_ucode_from_buffer_amd(
> +static int get_ucode_from_buffer_amd(
>      struct microcode_amd *mc_amd,
>      const void *buf,
>      size_t bufsize,
> @@ -194,8 +199,12 @@ static int get_next_ucode_from_buffer_amd(
>      off = *offset;
>  
>      /* No more data */
> -    if ( off >= bufsize )
> -        return 1;
> +    if ( off >= bufsize ) 
> +    {
> +        printk(KERN_ERR "microcode: error! "
> +               "ucode buffer overrun\n");
> +        return -EINVAL;
> +    }
>  
>      mpbuf = (const struct mpbhdr *)&bufp[off];
>      if ( mpbuf->type != UCODE_UCODE_TYPE )
> @@ -205,8 +214,8 @@ static int get_next_ucode_from_buffer_amd(
>          return -EINVAL;
>      }
>  
> -    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
> -           bufsize, mpbuf->len, off);
> +    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu\n",
> +           raw_smp_processor_id(), bufsize, mpbuf->len, off);
>  
>      if ( (off + mpbuf->len) > bufsize )
>      {
> @@ -278,8 +287,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
>  {
>      struct microcode_amd *mc_amd, *mc_old;
>      size_t offset = bufsize;
> +    size_t last_offset, applied_offset = 0;
>      int error = 0;
> -    int ret;
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>  
>      /* We should bind the task to the CPU */
> @@ -321,33 +330,32 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
>       */
>      mc_amd->mpb = NULL;
>      mc_amd->mpb_size = 0;
> -    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> -                                                  &offset)) == 0 )
> +    last_offset = offset;
> +    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> +                                               &offset)) == 0 )
>      {
> -        error = microcode_fits(mc_amd, cpu);
> -        if (error <= 0)
> -            continue;
> +        if ( microcode_fits(mc_amd, cpu) )
> +            if ( apply_microcode(cpu) == 0 )
> +                applied_offset = last_offset;
>  
> -        error = apply_microcode(cpu);
> -        if (error == 0)
> -        {
> -            error = 1;
> +        last_offset = offset;
> +
> +        if ( offset >= bufsize )
>              break;
> -        }
>      }
>  
> -    if ( ret < 0 )
> -        error = ret;
> -
>      /* On success keep the microcode patch for
>       * re-apply on resume.
>       */
> -    if ( error == 1 )
> +    if ( applied_offset != 0 )
>      {
> -        xfree(mc_old);
> -        error = 0;
> +        error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> +                                          &applied_offset);
> +        if (error == 0)
> +            xfree(mc_old);
>      }
> -    else
> +
> +    if ( applied_offset == 0 || error != 0 )
>      {
>          xfree(mc_amd);
>          uci->mc.mc_amd = mc_old;
> @@ -364,10 +372,9 @@ static int microcode_resume_match(int cpu, const void *mc)
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>      struct microcode_amd *mc_amd = uci->mc.mc_amd;
>      const struct microcode_amd *src = mc;
> -    int res = microcode_fits(src, cpu);
>  
> -    if ( res <= 0 )
> -        return res;
> +    if ( microcode_fits(src, cpu) == 0 )
> +        return 0;
>  
>      if ( src != mc_amd )
>      {



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 10:38:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 10:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgvJc-0006Tx-7o; Fri, 07 Dec 2012 10:37:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgvJa-0006Ts-7c
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 10:37:54 +0000
Received: from [85.158.138.51:42403] by server-5.bemta-3.messagelabs.com id
	5E/98-26311-EF6C1C05; Fri, 07 Dec 2012 10:37:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354876669!27561565!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10387 invoked from network); 7 Dec 2012 10:37:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 10:37:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16219697"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 10:37:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	10:37:49 +0000
Message-ID: <1354876667.31710.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Fri, 7 Dec 2012 10:37:47 +0000
In-Reply-To: <50C1B877.5000805@citrix.com>
References: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
	<50C1B877.5000805@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: check for documentation generation
 tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 09:35 +0000, Roger Pau Monne wrote:
> On 06/12/12 12:06, Ian Campbell wrote:
> > diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> > new file mode 100644
> > index 0000000..b6ab6fe
> > --- /dev/null
> > +++ b/config/Docs.mk.in
> > @@ -0,0 +1,20 @@
> > +# Prefix and install folder
> > +prefix              := @prefix@
> > +PREFIX              := $(prefix)
> > +exec_prefix         := @exec_prefix@
> > +libdir              := @libdir@
> > +LIBDIR              := $(libdir)
> > +
> > +# Tools
> > +PS2PDF              := @PS2PDF@
> > +DVIPS               := @DVIPS@
> > +LATEX               := @LATEX@
> > +FIG2DEV             := @FIG2DEV@
> > +LATEX2HTML          := @LATEX2HTML@
> 
> Didn't we drop all the LaTeX stuff from docs? I did a quick grep and
> it seems it's still used by the xen-api related docs... What I cannot find
> is any user of LATEX2HTML; can't we remove that one?

Seems like we can, yes. Probably this was still used when Matt wrote the
original patch.

> 
> > @@ -26,10 +26,12 @@ all: build
> >
> >  .PHONY: build
> >  build: html txt man-pages figs
> > -	@if which $(DOT) 1>/dev/null 2>/dev/null ; then              \
> > -	$(MAKE) -C xen-api build ; else                              \
> > -        echo "Graphviz (dot) not installed; skipping xen-api." ; fi
> > +ifdef DOT
> > +	$(MAKE) -C xen-api build
> >  	rm -f *.aux *.dvi *.bbl *.blg *.glo *.idx *.ilg *.log *.ind *.toc
> > +else
> > +	@echo "Graphviz (dot) not installed; skipping xen-api."
> > +endif
> 
> Don't we need the LaTeX stuff to build the xen-api docs? For the xen-api
> Makefile to succeed we seem to need PS2PDF, LATEX, DOT, DVIPS and NEATO.

Given that this "xen-api" doc documents an old, unmaintained version of
the XenAPI, which bears little to no relation to what is implemented in
xapi and which is only partially implemented in xend (which is
deprecated), I'm leaning strongly towards just nuking this particular
document from unstable. Anyone who is interested can just use the
version which was in 4.2...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 10:38:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 10:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgvJc-0006Tx-7o; Fri, 07 Dec 2012 10:37:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgvJa-0006Ts-7c
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 10:37:54 +0000
Received: from [85.158.138.51:42403] by server-5.bemta-3.messagelabs.com id
	5E/98-26311-EF6C1C05; Fri, 07 Dec 2012 10:37:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354876669!27561565!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10387 invoked from network); 7 Dec 2012 10:37:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 10:37:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16219697"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 10:37:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	10:37:49 +0000
Message-ID: <1354876667.31710.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Fri, 7 Dec 2012 10:37:47 +0000
In-Reply-To: <50C1B877.5000805@citrix.com>
References: <1354792014-23685-1-git-send-email-ian.campbell@citrix.com>
	<50C1B877.5000805@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: check for documentation generation
 tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 09:35 +0000, Roger Pau Monne wrote:
> On 06/12/12 12:06, Ian Campbell wrote:
> > diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> > new file mode 100644
> > index 0000000..b6ab6fe
> > --- /dev/null
> > +++ b/config/Docs.mk.in
> > @@ -0,0 +1,20 @@
> > +# Prefix and install folder
> > +prefix              := @prefix@
> > +PREFIX              := $(prefix)
> > +exec_prefix         := @exec_prefix@
> > +libdir              := @libdir@
> > +LIBDIR              := $(libdir)
> > +
> > +# Tools
> > +PS2PDF              := @PS2PDF@
> > +DVIPS               := @DVIPS@
> > +LATEX               := @LATEX@
> > +FIG2DEV             := @FIG2DEV@
> > +LATEX2HTML          := @LATEX2HTML@
> 
> Didn't we drop all the Latex stuff from Docs? I did a quick grep and
> it seems it's still used by the xen-api related docs... What I cannot find
> is any user of LATEX2HTML; can't we remove that one?

Seems like we can, yes. Probably this was still used when Matt wrote the
original patch.

> 
> > @@ -26,10 +26,12 @@ all: build
> >
> >  .PHONY: build
> >  build: html txt man-pages figs
> > -	@if which $(DOT) 1>/dev/null 2>/dev/null ; then              \
> > -	$(MAKE) -C xen-api build ; else                              \
> > -        echo "Graphviz (dot) not installed; skipping xen-api." ; fi
> > +ifdef DOT
> > +	$(MAKE) -C xen-api build
> >  	rm -f *.aux *.dvi *.bbl *.blg *.glo *.idx *.ilg *.log *.ind *.toc
> > +else
> > +	@echo "Graphviz (dot) not installed; skipping xen-api."
> > +endif
> 
> Don't we need the latex stuff to build xen-api docs? For the xen-api
> Makefile to succeed we seem to need PS2PDF, LATEX, DOT, DVIPS and NEATO.

Given that this "xen-api" doc documents an old unmaintained version of
the XenAPI, which bears little to no relation to what is implemented in
xapi and which is only partially implemented in xend (which is
deprecated), I'm leaning strongly towards just nuking this particular
document from unstable. Anyone who is interested can just use the
version which was in 4.2...
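[Editorial note: the `ifdef DOT` replacement discussed above works because ./configure substitutes an empty string for @DOT@ when dot(1) is absent, and GNU make's `ifdef` treats a variable assigned an empty value with `:=` as undefined. A minimal standalone sketch of that behaviour; the file name and the DOT_PATH variable are hypothetical, and `.RECIPEPREFIX` is used only so the example needs no literal tabs:]

```shell
# Write a toy makefile mimicking the docs build logic.  DOT is assigned
# with ':=', so an empty substitution leaves it empty and `ifdef DOT`
# takes the else branch.
cat > /tmp/docs-demo.mk <<'EOF'
.RECIPEPREFIX = >
DOT := $(DOT_PATH)
.PHONY: build
build:
ifdef DOT
>@echo "building xen-api with $(DOT)"
else
>@echo "Graphviz (dot) not installed; skipping xen-api."
endif
EOF

# Simulate configure finding no dot(1), then finding one.
make -s -f /tmp/docs-demo.mk DOT_PATH= build
make -s -f /tmp/docs-demo.mk DOT_PATH=/usr/bin/dot build
```

The key design point is using `:=` (simple expansion): with recursive `=` assignment, `ifdef` would test the non-empty definition rather than the empty expansion and always take the then-branch.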

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:02:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:02:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgvgh-0007MD-KA; Fri, 07 Dec 2012 11:01:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgvgf-0007LB-E8
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:01:45 +0000
Received: from [85.158.143.99:42770] by server-2.bemta-4.messagelabs.com id
	59/12-30861-89CC1C05; Fri, 07 Dec 2012 11:01:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1354878100!18701049!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18508 invoked from network); 7 Dec 2012 11:01:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:01:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:01:39 +0000
Message-Id: <50C1DAA302000078000AEE3E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:01:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB6872A83.0__="
Subject: [Xen-devel] [PATCH] scheduler: fix rate limit range checking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB6872A83.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

For one, neither of the two checks permitted for the documented value
of zero (disabling the functionality altogether).

Second, the range checking of the command line parameter was done by
the credit scheduler's initialization code, despite it being a generic
scheduler option.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -835,8 +835,9 @@ csched_sys_cntl(const struct scheduler *
     case XEN_SYSCTL_SCHEDOP_putinfo:
         if (params->tslice_ms > XEN_SYSCTL_CSCHED_TSLICE_MAX
             || params->tslice_ms < XEN_SYSCTL_CSCHED_TSLICE_MIN
-            || params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
-            || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN
+            || (params->ratelimit_us
+                && (params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
+                    || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN))
             || MICROSECS(params->ratelimit_us) > MILLISECS(params->tslice_ms) )
                 goto out;
         prv->tslice_ms = params->tslice_ms;
@@ -1593,17 +1594,6 @@ csched_init(struct scheduler *ops)
         sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
     }
 
-    if ( sched_ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
-         || sched_ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN )
-    {
-        printk("WARNING: sched_ratelimit_us outside of valid range [%d,%d].\n"
-               " Resetting to default %u\n",
-               XEN_SYSCTL_SCHED_RATELIMIT_MIN,
-               XEN_SYSCTL_SCHED_RATELIMIT_MAX,
-               SCHED_DEFAULT_RATELIMIT_US);
-        sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
-    }
-
     prv->tslice_ms = sched_credit_tslice_ms;
     prv->ticks_per_tslice = CSCHED_TICKS_PER_TSLICE;
     if ( prv->tslice_ms < prv->ticks_per_tslice )
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1324,6 +1324,18 @@ void __init scheduler_init(void)
     if ( SCHED_OP(&ops, init) )
         panic("scheduler returned error on init\n");
 
+    if ( sched_ratelimit_us &&
+         (sched_ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
+          || sched_ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN) )
+    {
+        printk("WARNING: sched_ratelimit_us outside of valid range [%d,%d].\n"
+               " Resetting to default %u\n",
+               XEN_SYSCTL_SCHED_RATELIMIT_MIN,
+               XEN_SYSCTL_SCHED_RATELIMIT_MAX,
+               SCHED_DEFAULT_RATELIMIT_US);
+        sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
+    }
+
     idle_domain = domain_create(DOMID_IDLE, 0, 0);
     BUG_ON(IS_ERR(idle_domain));
     idle_domain->vcpu = idle_vcpu;




--=__PartB6872A83.0__=
Content-Type: text/plain; name="sched-ratelimit-check.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="sched-ratelimit-check.patch"

scheduler: fix rate limit range checking

For one, neither of the two checks permitted for the documented value
of zero (disabling the functionality altogether).

Second, the range checking of the command line parameter was done by
the credit scheduler's initialization code, despite it being a generic
scheduler option.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -835,8 +835,9 @@ csched_sys_cntl(const struct scheduler *
     case XEN_SYSCTL_SCHEDOP_putinfo:
         if (params->tslice_ms > XEN_SYSCTL_CSCHED_TSLICE_MAX
             || params->tslice_ms < XEN_SYSCTL_CSCHED_TSLICE_MIN
-            || params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
-            || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN
+            || (params->ratelimit_us
+                && (params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
+                    || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN))
             || MICROSECS(params->ratelimit_us) > MILLISECS(params->tslice_ms) )
                 goto out;
         prv->tslice_ms = params->tslice_ms;
@@ -1593,17 +1594,6 @@ csched_init(struct scheduler *ops)
         sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
     }
 
-    if ( sched_ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
-         || sched_ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN )
-    {
-        printk("WARNING: sched_ratelimit_us outside of valid range [%d,%d].\n"
-               " Resetting to default %u\n",
-               XEN_SYSCTL_SCHED_RATELIMIT_MIN,
-               XEN_SYSCTL_SCHED_RATELIMIT_MAX,
-               SCHED_DEFAULT_RATELIMIT_US);
-        sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
-    }
-
     prv->tslice_ms = sched_credit_tslice_ms;
     prv->ticks_per_tslice = CSCHED_TICKS_PER_TSLICE;
     if ( prv->tslice_ms < prv->ticks_per_tslice )
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1324,6 +1324,18 @@ void __init scheduler_init(void)
     if ( SCHED_OP(&ops, init) )
         panic("scheduler returned error on init\n");
 
+    if ( sched_ratelimit_us &&
+         (sched_ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
+          || sched_ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN) )
+    {
+        printk("WARNING: sched_ratelimit_us outside of valid range [%d,%d].\n"
+               " Resetting to default %u\n",
+               XEN_SYSCTL_SCHED_RATELIMIT_MIN,
+               XEN_SYSCTL_SCHED_RATELIMIT_MAX,
+               SCHED_DEFAULT_RATELIMIT_US);
+        sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
+    }
+
     idle_domain = domain_create(DOMID_IDLE, 0, 0);
     BUG_ON(IS_ERR(idle_domain));
     idle_domain->vcpu = idle_vcpu;
--=__PartB6872A83.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB6872A83.0__=--


From xen-devel-bounces@lists.xen.org Fri Dec 07 11:24:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgw2B-0000Cf-SG; Fri, 07 Dec 2012 11:23:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgw2A-0000CV-00
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:23:58 +0000
Received: from [85.158.143.99:48493] by server-1.bemta-4.messagelabs.com id
	2D/E5-28401-DC1D1C05; Fri, 07 Dec 2012 11:23:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354879436!21434742!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13465 invoked from network); 7 Dec 2012 11:23:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:23:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:23:55 +0000
Message-Id: <50C1DFDA02000078000AEEBC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:23:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mats Petersson" <mats.petersson@citrix.com>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
	<01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
	<50C0D33C.1@citrix.com>
In-Reply-To: <50C0D33C.1@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1 of 1] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 18:17, Mats Petersson <mats.petersson@citrix.com> wrote:
> On 06/12/12 15:56, Andrew Cooper wrote:
>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>> +ENTRY(enable_nmis)
>> +        pushq %rax
>> +        movq  %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
> I did see Jan's comment to add "trap_nop" to this section of code, and I 
> do agree that saving 14 bytes by not having a single iret in its own 
> aligned block is a good thing, but how about:
> 
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
> +/* No op trap handler.  Required for kexec crash path.
> + * It is not used in performance critical code, and saves having a single
> + * lone aligned iret. It does not use ENTRY to declare the symbol to avoid 
> the
> + * explicit alignment. */
> +.globl trap_nop;
> +trap_nop:
> +
> +        iretq /* Disable the hardware NMI latch */
> 
> 
> We still have the benefit of not wasting 14 bytes on aligning a single 
> iretq, but the code to "do an iretq to enable nmi" is a bit cleaner.

I'd be okay with that too (with the stray blank line and ; removed),
but personally I prefer the version as sent by Andrew.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:24:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgw2B-0000Cf-SG; Fri, 07 Dec 2012 11:23:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgw2A-0000CV-00
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:23:58 +0000
Received: from [85.158.143.99:48493] by server-1.bemta-4.messagelabs.com id
	2D/E5-28401-DC1D1C05; Fri, 07 Dec 2012 11:23:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354879436!21434742!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13465 invoked from network); 7 Dec 2012 11:23:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:23:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:23:55 +0000
Message-Id: <50C1DFDA02000078000AEEBC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:23:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mats Petersson" <mats.petersson@citrix.com>
References: <patchbomb.1354809382@andrewcoop.uk.xensource.com>
	<01158d25f3bfd50171a1.1354809383@andrewcoop.uk.xensource.com>
	<50C0D33C.1@citrix.com>
In-Reply-To: <50C0D33C.1@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1 of 1] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 18:17, Mats Petersson <mats.petersson@citrix.com> wrote:
> On 06/12/12 15:56, Andrew Cooper wrote:
>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>> +ENTRY(enable_nmis)
>> +        pushq %rax
>> +        movq  %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
> I did see Jan's comment to add "trap_nop" to this section of code, and I 
> do agree that saving 14 bytes by not having a single iret in it's own 
> aligned block is a good thing, but how about:
> 
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
> +/* No op trap handler.  Required for kexec crash path.
> + * It is not used in performance critical code, and saves having a single
> + * lone aligned iret. It does not use ENTRY to declare the symbol to avoid 
> the
> + * explicit alignment. */
> +.globl trap_nop;
> +trap_nop:
> +
> +        iretq /* Disable the hardware NMI latch */
> 
> 
> We still have the benefit of not wasting 14 bytes on aligning a single
> iretq, but the code to "do an iretq to enable NMIs" is a bit cleaner.

I'd be okay with that too (with the stray blank line and ; removed),
but personally I prefer the version as sent by Andrew.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:29:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:29:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgw7G-0000Ym-Pu; Fri, 07 Dec 2012 11:29:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgw7F-0000Yh-1c
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:29:13 +0000
Received: from [193.109.254.147:46402] by server-7.bemta-14.messagelabs.com id
	E9/7C-02272-803D1C05; Fri, 07 Dec 2012 11:29:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354879637!9291047!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10808 invoked from network); 7 Dec 2012 11:27:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:27:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:27:17 +0000
Message-Id: <50C1E0A502000078000AEECA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:27:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "DuanZhenzhong" <zhenzhong.duan@oracle.com>
References: <50C161AD.9020803@oracle.com>
In-Reply-To: <50C161AD.9020803@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Joe Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] rombios: add support for special CHS layout (spt=32)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 04:25, DuanZhenzhong <zhenzhong.duan@oracle.com> wrote:
> When booting a Windows VM whose disk was converted with dd from a SCSI
> disk, it fails with "Error Loading Operating System". The root cause is
> that rombios doesn't simulate the disk's logical CHS geometry properly.
> The real BIOS uses spt=32 here.
> 
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
> --- a/tools/firmware/rombios/rombios.c	2011-12-09 00:50:28.000000000 +0800
> +++ b/tools/firmware/rombios/rombios.c	2012-11-20 09:42:50.000000000 +0800
> @@ -2674,6 +2674,8 @@ void ata_detect( )
>        Bit32u sectors_low, sectors_high;
>        Bit16u cylinders, heads, spt, blksize;
>        Bit8u  translation, removable, mode;
> +      Bit8u i;
> +      Bit8u *p;
>  
>        // default mode to PIO16
>        mode = ATA_MODE_PIO16;
> @@ -2738,14 +2740,32 @@ void ata_detect( )
>          case ATA_TRANSLATION_NONE:
>            break;
>          case ATA_TRANSLATION_LBA:
> -          spt = 63;
> -          sectors_low /= 63;
> -          heads = sectors_low / 1024;
> -          if (heads>128) heads = 255;
> -          else if (heads>64) heads = 128;
> -          else if (heads>32) heads = 64;
> -          else if (heads>16) heads = 32;
> -          else heads=16;
> +          spt = heads = 0;
> +          if (ata_cmd_data_in(device,ATA_CMD_READ_SECTORS, 1, 0, 0, 1, 0L, 0L, get_SS(),buffer) !=0 )
> +            BX_PANIC("ata-detect: Failed to read first sector\n");
> +          for(i=0; i<4; i++) {
> +            p = buffer + 0x1be + i * 16;
> +            if (read_dword(get_SS(), p+12) && read_byte(get_SS(), p+5)) {
> +              /* We make the assumption that the partition terminates on
> +                 a cylinder boundary */

Which certainly isn't a generally correct assumption, so I strongly
recommend against basing any decisions on that.

Jan

> +              heads = read_byte(get_SS(), p+5) + 1;
> +              spt = read_byte(get_SS(), p+6) & 63;
> +              if (spt != 0)
> +                break;
> +            }
> +          }
> +          if (spt == 32) {
> +            sectors_low /= spt;
> +          } else {
> +            spt = 63;
> +            sectors_low /= 63;
> +            heads = sectors_low / 1024;
> +            if (heads>128) heads = 255;
> +            else if (heads>64) heads = 128;
> +            else if (heads>32) heads = 64;
> +            else if (heads>16) heads = 32;
> +            else heads=16;
> +          }
>            cylinders = sectors_low / heads;
>            break;
>          case ATA_TRANSLATION_RECHS:
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:41:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgwIU-0000vr-5L; Fri, 07 Dec 2012 11:40:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <asadrupucit2006@gmail.com>) id 1TgwIS-0000vm-NC
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:40:48 +0000
Received: from [85.158.139.211:39599] by server-1.bemta-5.messagelabs.com id
	AB/4E-12813-FB5D1C05; Fri, 07 Dec 2012 11:40:47 +0000
X-Env-Sender: asadrupucit2006@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1354880445!19044815!1
X-Originating-IP: [209.85.219.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17410 invoked from network); 7 Dec 2012 11:40:47 -0000
Received: from mail-oa0-f45.google.com (HELO mail-oa0-f45.google.com)
	(209.85.219.45)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 11:40:47 -0000
Received: by mail-oa0-f45.google.com with SMTP id i18so394752oag.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 03:40:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=wgW+1ZYbCczGZMEP7NTvJqeKAQcY94BNNB+utjRoR+M=;
	b=XxH2fEfyzdIOmL7YnRzRvcHxY2ZXbFwCmtygYq8+8wwlIwI62kdwzojHjMNc4vsaxE
	8fbIOV7rVS70bFZr2oAAUiVFIBeonzronVM54STH89tFCuTwHXSAisyBTp1LjrM9d9il
	PYGCoLCuJrynp7F9voqzIsOexW+h8kXCQAisSEB1f+LFuXUSMX/2VlKco7eaqxEyPGAa
	fm0Cseo4DBhIjQZdXusCvMXaaT2NJQskAPV7LX+nO6+hVqHptWvvHVKySkdFcziq70Yy
	LAtnid5qrR7GZ+a7OQ+wjDl+RgYdaiTTwapEta+gRV9e0A8BuG/u9g8nHG1/cU/WKx5l
	E3Ig==
MIME-Version: 1.0
Received: by 10.60.28.74 with SMTP id z10mr3083655oeg.29.1354880445147; Fri,
	07 Dec 2012 03:40:45 -0800 (PST)
Received: by 10.60.20.3 with HTTP; Fri, 7 Dec 2012 03:40:44 -0800 (PST)
Date: Fri, 7 Dec 2012 16:40:44 +0500
Message-ID: <CAJ2v2mgRa8Qhxxu3nqAtPftP7WK-5p4zAuJOgSB1WfMhdxjGkw@mail.gmail.com>
From: asad raza <asadrupucit2006@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Why we relocate xen location in __start_xen method?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In the ARM architecture:
__start_xen ()

  ----->  setup_pagetables()
            {
                    /* Calculate virt-to-phys offset for the new location */
                      phys_offset = xen_paddr - (unsigned long) _start;

                     /* Copy */
                    memcpy((void *) dest_va, _start, _end - _start);
            }

Thanks,
---------
Asad

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:42:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgwJc-00011Q-Jv; Fri, 07 Dec 2012 11:42:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgwJb-00011E-9K
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:41:59 +0000
Received: from [85.158.143.35:19962] by server-3.bemta-4.messagelabs.com id
	5B/69-18211-606D1C05; Fri, 07 Dec 2012 11:41:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354880517!13153526!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31117 invoked from network); 7 Dec 2012 11:41:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:41:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:41:56 +0000
Message-Id: <50C1E41402000078000AEED6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:41:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>
References: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
In-Reply-To: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian.Campbell@citrix.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/ucode: Improve error handling and
 container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 19:28, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> Do not report an error when a patch is not applicable to the current
> processor; simply skip it and move on to the next patch in the container
> file.
> 
> Process container file to the end instead of stopping at the first
> applicable patch.
> 
> Log the fact that a patch has been applied at KERN_WARNING level, add
> CPU number to debug messages.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
> ---
>  xen/arch/x86/microcode_amd.c |   69 +++++++++++++++++++++++-------------------
>  1 file changed, 38 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
> index 7a54001..bb5968e 100644
> --- a/xen/arch/x86/microcode_amd.c
> +++ b/xen/arch/x86/microcode_amd.c
> @@ -88,13 +88,13 @@ static int collect_cpu_info(int cpu, struct cpu_signature 
> *csig)
>  
>      rdmsrl(MSR_AMD_PATCHLEVEL, csig->rev);
>  
> -    printk(KERN_DEBUG "microcode: collect_cpu_info: patch_id=%#x\n",
> -           csig->rev);
> +    printk(KERN_DEBUG "microcode: CPU%d collect_cpu_info: patch_id=%#x\n",
> +           cpu, csig->rev);
>  
>      return 0;
>  }
>  
> -static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
> +static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>  {
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>      const struct microcode_header_amd *mc_header = mc_amd->mpb;
> @@ -125,11 +125,16 @@ static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>          printk(KERN_DEBUG "microcode: CPU%d patch does not match "
>                 "(patch is %x, cpu base id is %x) \n",
>                 cpu, mc_header->processor_rev_id, equiv_cpu_id);
> -        return -EINVAL;
> +        return 0;
>      }
>  
>      if ( mc_header->patch_id <= uci->cpu_sig.rev )
> +    {
> +        printk(KERN_DEBUG "microcode: CPU%d patching is not needed: "
> +               "patch provides level 0x%x, cpu is at 0x%x \n",

Did you not notice that all the other messages now use %#x?

Also, I'm not really convinced we need this added verbosity. I
personally tend to run all my systems with "loglvl=all", and the
amount of output produced by the code in this file made me
change that for just the one AMD machine I have that is new
enough to support microcode loading.

Minimally I'd expect nothing to be printed here if the two
versions match.

> +               cpu, mc_header->patch_id, uci->cpu_sig.rev);
>          return 0;
> +    }
>  
>      printk(KERN_DEBUG "microcode: CPU%d found a matching microcode "
>             "update with version %#x (current=%#x)\n",
> @@ -173,7 +178,7 @@ static int apply_microcode(int cpu)
>          return -EIO;
>      }
>  
> -    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
> +    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
>             cpu, uci->cpu_sig.rev, hdr->patch_id);
>  
>      uci->cpu_sig.rev = rev;
> @@ -181,7 +186,7 @@ static int apply_microcode(int cpu)
>      return 0;
>  }
>  
> -static int get_next_ucode_from_buffer_amd(
> +static int get_ucode_from_buffer_amd(
>      struct microcode_amd *mc_amd,
>      const void *buf,
>      size_t bufsize,
> @@ -194,8 +199,12 @@ static int get_next_ucode_from_buffer_amd(
>      off = *offset;
>  
>      /* No more data */
> -    if ( off >= bufsize )
> -        return 1;
> +    if ( off >= bufsize ) 
> +    {
> +        printk(KERN_ERR "microcode: error! "
> +               "ucode buffer overrun\n");

All on one line please (and the "error!" probably is superfluous).

Also, is printing this really correct when off == bufsize? Or can
this not happen at all?

> +        return -EINVAL;
> +    }
>  
>      mpbuf = (const struct mpbhdr *)&bufp[off];
>      if ( mpbuf->type != UCODE_UCODE_TYPE )
> @@ -205,8 +214,8 @@ static int get_next_ucode_from_buffer_amd(
>          return -EINVAL;
>      }
>  
> -    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
> -           bufsize, mpbuf->len, off);
> +    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu\n",
> +           raw_smp_processor_id(), bufsize, mpbuf->len, off);
>  
>      if ( (off + mpbuf->len) > bufsize )
>      {
> @@ -278,8 +287,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
>  {
>      struct microcode_amd *mc_amd, *mc_old;
>      size_t offset = bufsize;
> +    size_t last_offset, applied_offset = 0;
>      int error = 0;
> -    int ret;
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>  
>      /* We should bind the task to the CPU */
> @@ -321,33 +330,32 @@ static int cpu_request_microcode(int cpu, const void 
> *buf, size_t bufsize)
>       */
>      mc_amd->mpb = NULL;
>      mc_amd->mpb_size = 0;
> -    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> -                                                  &offset)) == 0 )
> +    last_offset = offset;
> +    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> +                                               &offset)) == 0 )
>      {
> -        error = microcode_fits(mc_amd, cpu);
> -        if (error <= 0)
> -            continue;
> +        if ( microcode_fits(mc_amd, cpu) )
> +            if ( apply_microcode(cpu) == 0 )

Fold the two if()-s into one, please.

But then again you lose the return value of apply_microcode()
here, which is wrong.

> +                applied_offset = last_offset;
>  
> -        error = apply_microcode(cpu);
> -        if (error == 0)
> -        {
> -            error = 1;
> +        last_offset = offset;
> +
> +        if ( offset >= bufsize )
>              break;
> -        }
>      }
>  
> -    if ( ret < 0 )
> -        error = ret;
> -
>      /* On success keep the microcode patch for
>       * re-apply on resume.
>       */
> -    if ( error == 1 )
> +    if ( applied_offset != 0 )
>      {
> -        xfree(mc_old);
> -        error = 0;
> +        error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> +                                          &applied_offset);
> +        if (error == 0)
> +            xfree(mc_old);
>      }
> -    else
> +
> +    if ( applied_offset == 0 || error != 0 )
>      {
>          xfree(mc_amd);
>          uci->mc.mc_amd = mc_old;
> @@ -364,10 +372,9 @@ static int microcode_resume_match(int cpu, const void *mc)
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>      struct microcode_amd *mc_amd = uci->mc.mc_amd;
>      const struct microcode_amd *src = mc;
> -    int res = microcode_fits(src, cpu);
>  
> -    if ( res <= 0 )
> -        return res;
> +    if ( microcode_fits(src, cpu) == 0 )

microcode_fits() now returning bool_t clearly asks for using ! instead
of == 0 here.

Jan

> +        return 0;
>  
>      if ( src != mc_amd )
>      {
> -- 
> 1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>          return -EIO;
>      }
>  
> -    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
> +    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
>             cpu, uci->cpu_sig.rev, hdr->patch_id);
>  
>      uci->cpu_sig.rev = rev;
> @@ -181,7 +186,7 @@ static int apply_microcode(int cpu)
>      return 0;
>  }
>  
> -static int get_next_ucode_from_buffer_amd(
> +static int get_ucode_from_buffer_amd(
>      struct microcode_amd *mc_amd,
>      const void *buf,
>      size_t bufsize,
> @@ -194,8 +199,12 @@ static int get_next_ucode_from_buffer_amd(
>      off = *offset;
>  
>      /* No more data */
> -    if ( off >= bufsize )
> -        return 1;
> +    if ( off >= bufsize ) 
> +    {
> +        printk(KERN_ERR "microcode: error! "
> +               "ucode buffer overrun\n");

All on one line please (and the "error!" probably is superfluous).

Also, is printing this really correct when off == bufsize? Or can
this not happen at all?

> +        return -EINVAL;
> +    }
>  
>      mpbuf = (const struct mpbhdr *)&bufp[off];
>      if ( mpbuf->type != UCODE_UCODE_TYPE )
> @@ -205,8 +214,8 @@ static int get_next_ucode_from_buffer_amd(
>          return -EINVAL;
>      }
>  
> -    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
> -           bufsize, mpbuf->len, off);
> +    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu\n",
> +           raw_smp_processor_id(), bufsize, mpbuf->len, off);
>  
>      if ( (off + mpbuf->len) > bufsize )
>      {
> @@ -278,8 +287,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
>  {
>      struct microcode_amd *mc_amd, *mc_old;
>      size_t offset = bufsize;
> +    size_t last_offset, applied_offset = 0;
>      int error = 0;
> -    int ret;
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>  
>      /* We should bind the task to the CPU */
> @@ -321,33 +330,32 @@ static int cpu_request_microcode(int cpu, const void 
> *buf, size_t bufsize)
>       */
>      mc_amd->mpb = NULL;
>      mc_amd->mpb_size = 0;
> -    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> -                                                  &offset)) == 0 )
> +    last_offset = offset;
> +    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> +                                               &offset)) == 0 )
>      {
> -        error = microcode_fits(mc_amd, cpu);
> -        if (error <= 0)
> -            continue;
> +        if ( microcode_fits(mc_amd, cpu) )
> +            if ( apply_microcode(cpu) == 0 )

Fold the two if()-s into one, please.

But then again you lose the return value of apply_microcode()
here, which is wrong.

> +                applied_offset = last_offset;
>  
> -        error = apply_microcode(cpu);
> -        if (error == 0)
> -        {
> -            error = 1;
> +        last_offset = offset;
> +
> +        if ( offset >= bufsize )
>              break;
> -        }
>      }
>  
> -    if ( ret < 0 )
> -        error = ret;
> -
>      /* On success keep the microcode patch for
>       * re-apply on resume.
>       */
> -    if ( error == 1 )
> +    if ( applied_offset != 0 )
>      {
> -        xfree(mc_old);
> -        error = 0;
> +        error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
> +                                          &applied_offset);
> +        if (error == 0)
> +            xfree(mc_old);
>      }
> -    else
> +
> +    if ( applied_offset == 0 || error != 0 )
>      {
>          xfree(mc_amd);
>          uci->mc.mc_amd = mc_old;
> @@ -364,10 +372,9 @@ static int microcode_resume_match(int cpu, const void *mc)
>      struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>      struct microcode_amd *mc_amd = uci->mc.mc_amd;
>      const struct microcode_amd *src = mc;
> -    int res = microcode_fits(src, cpu);
>  
> -    if ( res <= 0 )
> -        return res;
> +    if ( microcode_fits(src, cpu) == 0 )

microcode_fits() now returning bool_t clearly asks for using ! instead
of == 0 here.

Jan

> +        return 0;
>  
>      if ( src != mc_amd )
>      {
> -- 
> 1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:48:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgwPQ-0001NB-JE; Fri, 07 Dec 2012 11:48:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgwPP-0001N6-BY
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:47:59 +0000
Received: from [85.158.143.35:62791] by server-1.bemta-4.messagelabs.com id
	78/36-28401-E67D1C05; Fri, 07 Dec 2012 11:47:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354880744!13153917!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13899 invoked from network); 7 Dec 2012 11:45:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:45:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:45:44 +0000
Message-Id: <50C1E4F802000078000AEED9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:45:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
	<43f86afe90be582e1579.1354830149@andrewcoop.uk.xensource.com>
In-Reply-To: <43f86afe90be582e1579.1354830149@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1 of 2 V3] x86/IST: Create set_ist() helper
 function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 22:42, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> ... to save using open-coded bitwise operations, and update all IST
> manipulation sites to use the function.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> --
> 
> I am not overly happy with the name set_ist(), and certainly not tied to
> it.  However, I am unable to think of a better name. set_idt_ist() is
> wrong, as is set_irq_ist(), while set_idt_entry_ist() just seems too
> kludgy.  The comment and parameter types do explicitly state what is
> expected to be passed, but suggestions welcome for a better name.
> 
> diff -r bc624b00d6d6 -r 43f86afe90be xen/arch/x86/hvm/svm/svm.c
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -869,9 +869,9 @@ static void svm_ctxt_switch_from(struct 
>      svm_vmload(per_cpu(root_vmcb, cpu));
>  
>      /* Resume use of ISTs now that the host TR is reinstated. */
> -    idt_tables[cpu][TRAP_double_fault].a  |= IST_DF << 32;
> -    idt_tables[cpu][TRAP_nmi].a           |= IST_NMI << 32;
> -    idt_tables[cpu][TRAP_machine_check].a |= IST_MCE << 32;
> +    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_DF);
> +    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NMI);
> +    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_MCE);
>  }
>  
>  static void svm_ctxt_switch_to(struct vcpu *v)
> @@ -893,9 +893,9 @@ static void svm_ctxt_switch_to(struct vc
>       * Cannot use ISTs for NMI/#MC/#DF while we are running with the guest TR.
>       * But this doesn't matter: the IST is only req'd to handle SYSCALL/SYSRET.
>       */
> -    idt_tables[cpu][TRAP_double_fault].a  &= ~(7UL << 32);
> -    idt_tables[cpu][TRAP_nmi].a           &= ~(7UL << 32);
> -    idt_tables[cpu][TRAP_machine_check].a &= ~(7UL << 32);
> +    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_NONE);
> +    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NONE);
> +    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
>  
>      svm_restore_dr(v);
>  
> diff -r bc624b00d6d6 -r 43f86afe90be xen/arch/x86/x86_64/traps.c
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -370,9 +370,9 @@ void __devinit subarch_percpu_traps_init
>      {
>          /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
>          set_intr_gate(TRAP_double_fault, &double_fault);
> -        idt_table[TRAP_double_fault].a  |= IST_DF << 32;
> -        idt_table[TRAP_nmi].a           |= IST_NMI << 32;
> -        idt_table[TRAP_machine_check].a |= IST_MCE << 32;
> +        set_ist(&idt_table[TRAP_double_fault],  IST_DF);
> +        set_ist(&idt_table[TRAP_nmi],           IST_NMI);
> +        set_ist(&idt_table[TRAP_machine_check], IST_MCE);
>  
>          /*
>           * The 32-on-64 hypercall entry vector is only accessible from ring 
> 1.
> diff -r bc624b00d6d6 -r 43f86afe90be xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -425,10 +425,20 @@ struct tss_struct {
>      u8 __cacheline_filler[24];
>  } __cacheline_aligned __attribute__((packed));
>  
> -#define IST_DF  1UL
> -#define IST_NMI 2UL
> -#define IST_MCE 3UL
> -#define IST_MAX 3UL
> +#define IST_NONE 0UL
> +#define IST_DF   1UL
> +#define IST_NMI  2UL
> +#define IST_MCE  3UL
> +#define IST_MAX  3UL
> +
> +/* Set the interrupt stack table used by a particular interrupt
> + * descriptor table entry. */
> +static always_inline void set_ist(idt_entry_t * idt, unsigned long ist)
> +{
> +    /* ist is a 3 bit field, 32 bits into the idt entry. */
> +    ASSERT( ist < 8 );

This ought to check against IST_MAX.

> +    idt->a = ( idt->a & ~(7UL << 32) ) | ( (ist & 7UL) << 32 );

And with the check above, the right most & is pretty pointless.

Jan

> +}
>  
>  #define IDT_ENTRIES 256
>  extern idt_entry_t idt_table[];
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:49:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgwQn-0001T6-AO; Fri, 07 Dec 2012 11:49:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgwQm-0001Ss-1S
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 11:49:24 +0000
Received: from [85.158.143.35:12710] by server-3.bemta-4.messagelabs.com id
	AA/04-18211-3C7D1C05; Fri, 07 Dec 2012 11:49:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354880962!10134557!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7723 invoked from network); 7 Dec 2012 11:49:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 11:49:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16221636"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 11:49:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 11:49:22 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgwQj-0007sk-NW;
	Fri, 07 Dec 2012 11:49:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgwQj-0004ei-Dt;
	Fri, 07 Dec 2012 11:49:21 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14592-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 11:49:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14592: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14592 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14592/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14565
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14565
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565
 build-i386-pvops             2 host-install(2) broken in 14586 REGR. vs. 14565
 build-i386                   2 host-install(2) broken in 14586 REGR. vs. 14565
 build-i386-oldkern           2 host-install(2) broken in 14586 REGR. vs. 14565

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 14586
 test-amd64-amd64-pair 3 host-install/src_host(3) broken in 14586 pass in 14592
 test-amd64-amd64-pair 4 host-install/dst_host(4) broken in 14586 pass in 14592

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14586 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14586 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14586 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14586 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14586 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14586 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14586 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14586 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14586 n/a

version targeted for testing:
 xen                  12d2786dc549
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 402 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:49:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgwQn-0001T6-AO; Fri, 07 Dec 2012 11:49:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgwQm-0001Ss-1S
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 11:49:24 +0000
Received: from [85.158.143.35:12710] by server-3.bemta-4.messagelabs.com id
	AA/04-18211-3C7D1C05; Fri, 07 Dec 2012 11:49:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354880962!10134557!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7723 invoked from network); 7 Dec 2012 11:49:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 11:49:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,236,1355097600"; d="scan'208";a="16221636"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 11:49:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 11:49:22 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgwQj-0007sk-NW;
	Fri, 07 Dec 2012 11:49:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgwQj-0004ei-Dt;
	Fri, 07 Dec 2012 11:49:21 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14592-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 11:49:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14592: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14592 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14592/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14565
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14565
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565
 build-i386-pvops             2 host-install(2) broken in 14586 REGR. vs. 14565
 build-i386                   2 host-install(2) broken in 14586 REGR. vs. 14565
 build-i386-oldkern           2 host-install(2) broken in 14586 REGR. vs. 14565

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 14586
 test-amd64-amd64-pair 3 host-install/src_host(3) broken in 14586 pass in 14592
 test-amd64-amd64-pair 4 host-install/dst_host(4) broken in 14586 pass in 14592

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14586 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14586 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14586 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14586 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14586 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14586 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14586 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14586 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14586 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14586 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14586 n/a

version targeted for testing:
 xen                  12d2786dc549
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 402 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 11:52:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 11:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgwTa-0001iV-To; Fri, 07 Dec 2012 11:52:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgwTZ-0001hm-OS
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 11:52:17 +0000
Received: from [85.158.139.211:36927] by server-2.bemta-5.messagelabs.com id
	CD/1B-16162-178D1C05; Fri, 07 Dec 2012 11:52:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354881136!19491160!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9716 invoked from network); 7 Dec 2012 11:52:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 11:52:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 11:52:15 +0000
Message-Id: <50C1E67F02000078000AEEEF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 11:52:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
	<3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
In-Reply-To: <3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2 of 2 V3] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.12.12 at 22:42, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
> +     * invokes do_nmi_crash (above), which cause them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +
> +            if ( i == cpu )
> +            {
> +                /* Disable the interrupt stack tables for this MCE and
> +                 * NMI handler (shortly to become a nop) as there is a 1
> +                 * instruction race window where NMIs could be
> +                 * re-enabled and corrupt the exception frame, leaving
> +                 * us unable to continue on this crash path (which half
> +                 * defeats the point of using the nop handler in the
> +                 * first place).
> +                 *
> +                 * This update is safe from a security point of view, as
> +                 * this pcpu is never going to try to sysret back to a
> +                 * PV vcpu.
> +                 */
> +                set_ist(&idt_tables[i][TRAP_nmi],           IST_NONE);
> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
> +
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);

This makes the first set_ist() above pointless, doesn't it?

> +            }
> +            else
> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
> +        }
> +
>      /* Ensure the new callback function is set before sending out the NMI. */
>      wmb();

>  ...

> +/* Enable NMIs.  No special register assumptions, and all preserved. */
> +ENTRY(enable_nmis)
> +        pushq %rax

What's the point of saving %rax here, btw?

Jan

> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +/* No op trap handler.  Required for kexec crash path.
> + * It is not used in performance critical code, and saves having a single
> + * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
> + * explicit alignment. */
> +.globl trap_nop;
> +trap_nop:
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        popq %rax
> +        retq
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
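For readers following the exchange: the lone iretq in trap_nop works because x86 blocks further NMI delivery from the moment an NMI handler is entered until the next IRET, and enable_nmis fabricates a synthetic interrupt frame so that IRET simply falls through to the local label 1:. A toy Python model of that latch behaviour (an illustration under stated assumptions, not Xen code; the class and method names are invented for this sketch):

```python
class NMILatch:
    """Toy model of the x86 NMI blocking latch discussed above."""

    def __init__(self):
        self.blocked = False   # True while an NMI handler is "in progress"
        self.pending = False   # a second NMI arrived and was latched

    def deliver_nmi(self):
        # An NMI arriving before the handler's IRET is latched, not delivered.
        if self.blocked:
            self.pending = True
            return False
        self.blocked = True    # handler entered; further NMIs now blocked
        return True

    def iret(self):
        # IRET re-opens the NMI window; this is why a single iretq in
        # trap_nop is enough to clear the hardware latch.
        self.blocked = False
        if self.pending:
            self.pending = False
            return self.deliver_nmi()  # latched NMI delivered right away
        return None
```

In this model, a second NMI raised while one is "in progress" is held pending and pops out immediately after the IRET, mirroring the window the patch comment worries about.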



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 13:00:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 13:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgxXT-0003LA-PN; Fri, 07 Dec 2012 13:00:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgxXS-0003Ku-3W
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 13:00:22 +0000
Received: from [85.158.143.35:6344] by server-2.bemta-4.messagelabs.com id
	CA/50-30861-468E1C05; Fri, 07 Dec 2012 13:00:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354885191!15786893!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23885 invoked from network); 7 Dec 2012 12:59:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 12:59:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 12:59:51 +0000
Message-Id: <50C1F65602000078000AEF1B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 12:59:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part4776DB56.0__="
Subject: [Xen-devel] [PATCH] x86/oprofile: adjust CPU specific initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part4776DB56.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Drop support for 32-bit only CPU models as well as those that can be
dealt with by the arch_perfmon bits. Models 14 and 15 remain as
questionable (I'm not 100% positive that these don't support 64-bit
mode).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -342,37 +342,13 @@ static int __init ppro_init(char ** cpu_
 		return 0;
 
 	switch (cpu_model) {
-	case 0 ... 2:
-		*cpu_type = "i386/ppro";
-		break;
-	case 3 ... 5:
-		*cpu_type = "i386/pii";
-		break;
-	case 6 ... 8:
-	case 10 ... 11:
-		*cpu_type = "i386/piii";
-		break;
-	case 9:
-	case 13:
-		*cpu_type = "i386/p6_mobile";
-		break;
 	case 14:
 		*cpu_type = "i386/core";
 		break;
 	case 15:
-	case 23:
-	case 29:
 		*cpu_type = "i386/core_2";
 		ppro_has_global_ctrl = 1;
 		break;
-	case 26:
-		arch_perfmon_setup_counters();
-		*cpu_type = "i386/core_i7";
-		ppro_has_global_ctrl = 1;
-		break;
-	case 28:
-		*cpu_type = "i386/atom";
-		break;
 	default:
 		/* Unknown */
 		return 0;
@@ -389,6 +365,7 @@ static int __init arch_perfmon_init(char
 	*cpu_type = "i386/arch_perfmon";
 	model = &op_arch_perfmon_spec;
 	arch_perfmon_setup_counters();
+	ppro_has_global_ctrl = 1;
 	return 1;
 }
 
@@ -413,14 +390,8 @@ static int __init nmi_init(void)
 				       "AMD processor family %d is not "
 				       "supported\n", family);
 				return -ENODEV;
-			case 6:
-				model = &op_athlon_spec;
-				cpu_type = "i386/athlon";
-				break;
 			case 0xf:
 				model = &op_athlon_spec;
-				/* Actually it could be i386/hammer too, but
-				   give user space an consistent name. */
 				cpu_type = "x86-64/hammer";
 				break;
 			case 0x10:
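As context for the model table being pruned above: after this patch only models 14 and 15 keep hard-coded names on the Intel side, and newer models are expected to be picked up by the architectural-perfmon fallback instead. A rough Python sketch of that dispatch order (a hypothetical helper for illustration, not the actual Xen code):

```python
def select_cpu_type(model, has_arch_perfmon):
    """Sketch of the post-patch Intel family-6 dispatch: hard-coded
    model names are tried first, then the arch_perfmon fallback."""
    if model == 14:
        return "i386/core"
    if model == 15:
        return "i386/core_2"
    # Models the switch no longer lists (e.g. 23, 26, 28) fall through
    # to the architectural perfmon path when the CPU advertises it.
    if has_arch_perfmon:
        return "i386/arch_perfmon"
    return None  # unknown / unsupported
```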




--=__Part4776DB56.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part4776DB56.0__=--


From xen-devel-bounces@lists.xen.org Fri Dec 07 13:00:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 13:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgxXT-0003LA-PN; Fri, 07 Dec 2012 13:00:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgxXS-0003Ku-3W
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 13:00:22 +0000
Received: from [85.158.143.35:6344] by server-2.bemta-4.messagelabs.com id
	CA/50-30861-468E1C05; Fri, 07 Dec 2012 13:00:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354885191!15786893!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23885 invoked from network); 7 Dec 2012 12:59:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 12:59:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 12:59:51 +0000
Message-Id: <50C1F65602000078000AEF1B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 12:59:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part4776DB56.0__="
Subject: [Xen-devel] [PATCH] x86/oprofile: adjust CPU specific initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part4776DB56.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Drop support for 32-bit only CPU models as well as those that can be
dealt with by the arch_perfmon bits. Models 14 and 15 remain as
questionable (I'm not 100% positive that these don't support 64-bit
mode).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -342,37 +342,13 @@ static int __init ppro_init(char ** cpu_
 		return 0;
=20
 	switch (cpu_model) {
-	case 0 ... 2:
-		*cpu_type =3D "i386/ppro";
-		break;
-	case 3 ... 5:
-		*cpu_type =3D "i386/pii";
-		break;
-	case 6 ... 8:
-	case 10 ... 11:
-		*cpu_type =3D "i386/piii";
-		break;
-	case 9:
-	case 13:
-		*cpu_type =3D "i386/p6_mobile";
-		break;
 	case 14:
 		*cpu_type =3D "i386/core";
 		break;
 	case 15:
-	case 23:
-	case 29:
 		*cpu_type =3D "i386/core_2";
 		ppro_has_global_ctrl =3D 1;
 		break;
-	case 26:
-		arch_perfmon_setup_counters();
-		*cpu_type =3D "i386/core_i7";
-		ppro_has_global_ctrl =3D 1;
-		break;
-	case 28:
-		*cpu_type =3D "i386/atom";
-		break;
 	default:
 		/* Unknown */
 		return 0;
@@ -389,6 +365,7 @@ static int __init arch_perfmon_init(char
 	*cpu_type =3D "i386/arch_perfmon";
 	model =3D &op_arch_perfmon_spec;
 	arch_perfmon_setup_counters();
+	ppro_has_global_ctrl =3D 1;
 	return 1;
 }
=20
@@ -413,14 +390,8 @@ static int __init nmi_init(void)
 				       "AMD processor family %d is not "
 				       "supported\n", family);
 				return -ENODEV;
-			case 6:
-				model =3D &op_athlon_spec;
-				cpu_type =3D "i386/athlon";
-				break;
 			case 0xf:
 				model =3D &op_athlon_spec;
-				/* Actually it could be i386/hammer too, =
but
-				   give user space an consistent name. */
 				cpu_type =3D "x86-64/hammer";
 				break;
 			case 0x10:




--=__Part4776DB56.0__=--


From xen-devel-bounces@lists.xen.org Fri Dec 07 13:06:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 13:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgxdB-00043u-8Z; Fri, 07 Dec 2012 13:06:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgxd9-00043V-MI
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 13:06:16 +0000
Received: from [85.158.143.35:35008] by server-1.bemta-4.messagelabs.com id
	8E/CB-28401-6C9E1C05; Fri, 07 Dec 2012 13:06:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354885573!10143989!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26726 invoked from network); 7 Dec 2012 13:06:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 13:06:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 13:06:12 +0000
Message-Id: <50C1F7D402000078000AEF3E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 13:06:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC4F558D4.0__="
Subject: [Xen-devel] [PATCH] streamline guest copy operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=__PartC4F558D4.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- use the variants not validating the VA range when writing back
  structures/fields to the same space that they were previously read
  from
- when only a single field of a structure actually changed, copy back
  just that field where possible
- consolidate copying back results in a few places

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
If really necessary, this patch could of course be split up at almost
arbitrary boundaries.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -51,6 +51,7 @@ long arch_do_domctl(
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret =3D 0;
+    bool_t copyback =3D 0;
=20
     switch ( domctl->cmd )
     {
@@ -66,7 +67,7 @@ long arch_do_domctl(
                                 &domctl->u.shadow_op,
                                 guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback =3D 1;
         }=20
     }
     break;
@@ -150,8 +151,7 @@ long arch_do_domctl(
         }
=20
         rcu_unlock_domain(d);
-
-        copy_to_guest(u_domctl, domctl, 1);
+        copyback =3D 1;
     }
     break;
=20
@@ -408,7 +408,7 @@ long arch_do_domctl(
             spin_unlock(&d->page_alloc_lock);
=20
             domctl->u.getmemlist.num_pfns =3D i;
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback =3D 1;
         getmemlist_out:
             rcu_unlock_domain(d);
         }
@@ -539,13 +539,11 @@ long arch_do_domctl(
             ret =3D -EFAULT;
=20
     gethvmcontext_out:
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        rcu_unlock_domain(d);
+        copyback =3D 1;
=20
         if ( c.data !=3D NULL )
             xfree(c.data);
-
-        rcu_unlock_domain(d);
     }
     break;
=20
@@ -627,11 +625,9 @@ long arch_do_domctl(
         domctl->u.address_size.size =3D
             is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
=20
-        ret =3D 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        ret =3D 0;
+        copyback =3D 1;
     }
     break;
=20
@@ -676,13 +672,9 @@ long arch_do_domctl(
=20
         domctl->u.address_size.size =3D d->arch.physaddr_bitsize;
=20
-        ret =3D 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
-
-
+        ret =3D 0;
+        copyback =3D 1;
     }
     break;
=20
@@ -1124,9 +1116,8 @@ long arch_do_domctl(
=20
     ext_vcpucontext_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontext) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        if ( domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontext )
+            copyback =3D 1;
     }
     break;
=20
@@ -1268,10 +1259,10 @@ long arch_do_domctl(
             domctl->u.gdbsx_guest_memio.len;
=20
         ret =3D gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_=
memio);
-        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
=20
         rcu_unlock_domain(d);
+        if ( !ret )
+           copyback =3D 1;
     }
     break;
=20
@@ -1358,10 +1349,9 @@ long arch_do_domctl(
                 }
             }
         }
-        ret =3D 0;
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
         rcu_unlock_domain(d);
+        ret =3D 0;
+        copyback =3D 1;
     }
     break;
=20
@@ -1485,9 +1475,8 @@ long arch_do_domctl(
=20
     vcpuextstate_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd =3D=3D XEN_DOMCTL_getvcpuextstate) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        if ( domctl->cmd =3D=3D XEN_DOMCTL_getvcpuextstate )
+            copyback =3D 1;
     }
     break;
=20
@@ -1504,7 +1493,7 @@ long arch_do_domctl(
                 ret =3D mem_event_domctl(d, &domctl->u.mem_event_op,
                                        guest_handle_cast(u_domctl, =
void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback =3D 1;
         }=20
     }
     break;
@@ -1539,8 +1528,7 @@ long arch_do_domctl(
                   &domctl->u.audit_p2m.m2p_bad,
                   &domctl->u.audit_p2m.p2m_bad);
         rcu_unlock_domain(d);
-        if ( copy_to_guest(u_domctl, domctl, 1) )=20
-            ret =3D -EFAULT;
+        copyback =3D 1;
     }
     break;
 #endif /* P2M_AUDIT */
@@ -1573,6 +1561,9 @@ long arch_do_domctl(
         break;
     }
=20
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret =3D -EFAULT;
+
     return ret;
 }
=20
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4407,7 +4407,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
=20
         if ( xatp.space =3D=3D XENMAPSPACE_gmfn_range )
         {
-            if ( rc && copy_to_guest(arg, &xatp, 1) )
+            if ( rc && __copy_to_guest(arg, &xatp, 1) )
                 rc =3D -EFAULT;
=20
             if ( rc =3D=3D -EAGAIN )
@@ -4492,7 +4492,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
         map.nr_entries =3D min(map.nr_entries, d->arch.pv_domain.nr_e820);=

         if ( copy_to_guest(map.buffer, d->arch.pv_domain.e820,
                            map.nr_entries) ||
-             copy_to_guest(arg, &map, 1) )
+             __copy_to_guest(arg, &map, 1) )
         {
             spin_unlock(&d->arch.pv_domain.e820_lock);
             return -EFAULT;
@@ -4559,7 +4559,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
=20
         ctxt.map.nr_entries =3D ctxt.n;
=20
-        if ( copy_to_guest(arg, &ctxt.map, 1) )
+        if ( __copy_to_guest(arg, &ctxt.map, 1) )
             return -EFAULT;
=20
         return 0;
@@ -4630,7 +4630,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
             target.pod_cache_pages =3D p2m->pod.count;
             target.pod_entries     =3D p2m->pod.entry_count;
=20
-            if ( copy_to_guest(arg, &target, 1) )
+            if ( __copy_to_guest(arg, &target, 1) )
             {
                 rc=3D -EFAULT;
                 goto pod_target_out_unlock;
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -384,7 +384,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         irq_status_query.flags |=3D XENIRQSTAT_needs_eoi;
         if ( pirq_shared(v->domain, irq) )
             irq_status_query.flags |=3D XENIRQSTAT_shared;
-        ret =3D copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
+        ret =3D __copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
         break;
     }
=20
@@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         ret =3D physdev_map_pirq(map.domid, map.type, &map.index, =
&map.pirq,
                                &msi);
=20
-        if ( copy_to_guest(arg, &map, 1) !=3D 0 )
+        if ( __copy_to_guest(arg, &map, 1) )
             ret =3D -EFAULT;
         break;
     }
@@ -440,7 +440,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( ret )
             break;
         ret =3D ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.valu=
e);
-        if ( copy_to_guest(arg, &apic, 1) !=3D 0 )
+        if ( __copy_to_guest(arg, &apic, 1) )
             ret =3D -EFAULT;
         break;
     }
@@ -478,7 +478,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         irq_op.vector =3D irq_op.irq;
         ret =3D 0;
        =20
-        if ( copy_to_guest(arg, &irq_op, 1) !=3D 0 )
+        if ( __copy_to_guest(arg, &irq_op, 1) )
             ret =3D -EFAULT;
         break;
     }
@@ -714,7 +714,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( ret >=3D 0 )
         {
             out.pirq =3D ret;
-            ret =3D copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
+            ret =3D __copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
         }
=20
         break;
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -115,7 +115,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
         {
             op->u.add_memtype.handle =3D 0;
             op->u.add_memtype.reg    =3D ret;
-            ret =3D copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret =3D __copy_field_to_guest(u_xenpf_op, op, u.add_memtype) =
?
+                  -EFAULT : 0;
             if ( ret !=3D 0 )
                 mtrr_del_page(ret, 0, 0);
         }
@@ -157,7 +158,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.read_memtype.mfn     =3D mfn;
             op->u.read_memtype.nr_mfns =3D nr_mfns;
             op->u.read_memtype.type    =3D type;
-            ret =3D copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret =3D __copy_field_to_guest(u_xenpf_op, op, u.read_memtype)
+                  ? -EFAULT : 0;
         }
     }
     break;
@@ -263,8 +265,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             C(legacy_sectors_per_track);
 #undef C
=20
-            ret =3D (copy_field_to_guest(u_xenpf_op, op,
-                                      u.firmware_info.u.disk_info)
+            ret =3D (__copy_field_to_guest(u_xenpf_op, op,
+                                         u.firmware_info.u.disk_info)
                    ? -EFAULT : 0);
             break;
         }
@@ -281,8 +283,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.firmware_info.u.disk_mbr_signature.mbr_signature =3D
                 sig->signature;
=20
-            ret =3D (copy_field_to_guest(u_xenpf_op, op,
-                                      u.firmware_info.u.disk_mbr_signature=
)
+            ret =3D (__copy_field_to_guest(u_xenpf_op, op,
+                                         u.firmware_info.u.disk_mbr_signat=
ure)
                    ? -EFAULT : 0);
             break;
         }
@@ -299,10 +301,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
                 bootsym(boot_edid_caps) >> 8;
=20
             ret =3D 0;
-            if ( copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
-                                     u.vbeddc_info.capabilities) ||
-                 copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
-                                     u.vbeddc_info.edid_transfer_time) ||
+            if ( __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
+                                       u.vbeddc_info.capabilities) ||
+                 __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
+                                       u.vbeddc_info.edid_transfer_time) =
||
                  copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,
                                 bootsym(boot_edid_info), 128) )
                 ret =3D -EFAULT;
@@ -311,8 +313,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             ret =3D efi_get_info(op->u.firmware_info.index,
                                &op->u.firmware_info.u.efi_info);
             if ( ret =3D=3D 0 &&
-                 copy_field_to_guest(u_xenpf_op, op,
-                                     u.firmware_info.u.efi_info) )
+                 __copy_field_to_guest(u_xenpf_op, op,
+                                       u.firmware_info.u.efi_info) )
                 ret =3D -EFAULT;
             break;
         case XEN_FW_KBD_SHIFT_FLAGS:
@@ -323,8 +325,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.firmware_info.u.kbd_shift_flags =3D bootsym(kbd_shift_fl=
ags);
=20
             ret =3D 0;
-            if ( copy_field_to_guest(u_xenpf_op, op,
-                                     u.firmware_info.u.kbd_shift_flags) )
+            if ( __copy_field_to_guest(u_xenpf_op, op,
+                                       u.firmware_info.u.kbd_shift_flags) =
)
                 ret =3D -EFAULT;
             break;
         default:
@@ -340,7 +342,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
=20
         ret =3D efi_runtime_call(&op->u.efi_runtime_call);
         if ( ret =3D=3D 0 &&
-             copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
+             __copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
             ret =3D -EFAULT;
         break;
=20
@@ -412,7 +414,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             ret =3D cpumask_to_xenctl_cpumap(&ctlmap, cpumap);
         free_cpumask_var(cpumap);
=20
-        if ( ret =3D=3D 0 && copy_to_guest(u_xenpf_op, op, 1) )
+        if ( ret =3D=3D 0 && __copy_field_to_guest(u_xenpf_op, op, =
u.getidletime) )
             ret =3D -EFAULT;
     }
     break;
@@ -503,7 +505,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
=20
         put_cpu_maps();
=20
-        ret =3D copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+        ret =3D __copy_field_to_guest(u_xenpf_op, op, u.pcpu_info) ? =
-EFAULT : 0;
     }
     break;
=20
@@ -538,7 +540,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
=20
         put_cpu_maps();
=20
-        if ( copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
+        if ( __copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
             ret =3D -EFAULT;
     }
     break;
@@ -639,7 +641,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
=20
         case XEN_CORE_PARKING_GET:
             op->u.core_parking.idle_nums =3D get_cur_idle_nums();
-            ret =3D copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret =3D __copy_field_to_guest(u_xenpf_op, op, u.core_parking) =
?
+                  -EFAULT : 0;
             break;
=20
         default:
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -93,7 +93,7 @@ long arch_do_sysctl(
         if ( iommu_enabled )
             pi->capabilities |=3D XEN_SYSCTL_PHYSCAP_hvm_directio;
=20
-        if ( copy_to_guest(u_sysctl, sysctl, 1) )
+        if ( __copy_field_to_guest(u_sysctl, sysctl, u.physinfo) )
             ret =3D -EFAULT;
     }
     break;
@@ -133,7 +133,8 @@ long arch_do_sysctl(
             }
         }
=20
-        ret =3D ((i <=3D max_cpu_index) || copy_to_guest(u_sysctl, =
sysctl, 1))
+        ret =3D ((i <=3D max_cpu_index) ||
+               __copy_field_to_guest(u_sysctl, sysctl, u.topologyinfo))
             ? -EFAULT : 0;
     }
     break;
@@ -185,7 +186,8 @@ long arch_do_sysctl(
             }
         }
=20
-        ret =3D ((i <=3D max_node_index) || copy_to_guest(u_sysctl, =
sysctl, 1))
+        ret =3D ((i <=3D max_node_index) ||
+               __copy_field_to_guest(u_sysctl, sysctl, u.numainfo))
             ? -EFAULT : 0;
     }
     break;
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -122,7 +122,7 @@ int compat_arch_memory_op(int op, XEN_GU
 #define XLAT_memory_map_HNDL_buffer(_d_, _s_) ((void)0)
         XLAT_memory_map(&cmp, nat);
 #undef XLAT_memory_map_HNDL_buffer
-        if ( copy_to_guest(arg, &cmp, 1) )
+        if ( __copy_to_guest(arg, &cmp, 1) )
             rc =3D -EFAULT;
=20
         break;
@@ -148,7 +148,7 @@ int compat_arch_memory_op(int op, XEN_GU
=20
         XLAT_pod_target(&cmp, nat);
=20
-        if ( copy_to_guest(arg, &cmp, 1) )
+        if ( __copy_to_guest(arg, &cmp, 1) )
         {
             if ( rc =3D=3D __HYPERVISOR_memory_op )
                 hypercall_cancel_continuation();
@@ -200,7 +200,7 @@ int compat_arch_memory_op(int op, XEN_GU
         }
=20
         xmml.nr_extents =3D i;
-        if ( copy_to_guest(arg, &xmml, 1) )
+        if ( __copy_to_guest(arg, &xmml, 1) )
             rc =3D -EFAULT;
=20
         break;
@@ -219,7 +219,7 @@ int compat_arch_memory_op(int op, XEN_GU
         if ( copy_from_guest(&meo, arg, 1) )
             return -EFAULT;
         rc =3D do_mem_event_op(op, meo.domain, (void *) &meo);
-        if ( !rc && copy_to_guest(arg, &meo, 1) )
+        if ( !rc && __copy_to_guest(arg, &meo, 1) )
             return -EFAULT;
         break;
     }
@@ -231,7 +231,7 @@ int compat_arch_memory_op(int op, XEN_GU
         if ( mso.op =3D=3D XENMEM_sharing_op_audit )
             return mem_sharing_audit();=20
         rc =3D do_mem_event_op(op, mso.domain, (void *) &mso);
-        if ( !rc && copy_to_guest(arg, &mso, 1) )
+        if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
     }
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1074,7 +1074,7 @@ long subarch_memory_op(int op, XEN_GUEST
         }
=20
         xmml.nr_extents =3D i;
-        if ( copy_to_guest(arg, &xmml, 1) )
+        if ( __copy_to_guest(arg, &xmml, 1) )
             return -EFAULT;
=20
         break;
@@ -1092,7 +1092,7 @@ long subarch_memory_op(int op, XEN_GUEST
         if ( copy_from_guest(&meo, arg, 1) )
             return -EFAULT;
         rc =3D do_mem_event_op(op, meo.domain, (void *) &meo);
-        if ( !rc && copy_to_guest(arg, &meo, 1) )
+        if ( !rc && __copy_to_guest(arg, &meo, 1) )
             return -EFAULT;
         break;
     }
@@ -1104,7 +1104,7 @@ long subarch_memory_op(int op, XEN_GUEST
         if ( mso.op =3D=3D XENMEM_sharing_op_audit )
             return mem_sharing_audit();=20
         rc =3D do_mem_event_op(op, mso.domain, (void *) &mso);
-        if ( !rc && copy_to_guest(arg, &mso, 1) )
+        if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
     }
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -292,8 +292,9 @@ int compat_memory_op(unsigned int cmd, X
             }
=20
             cmp.xchg.nr_exchanged =3D nat.xchg->nr_exchanged;
-            if ( copy_field_to_guest(guest_handle_cast(compat, compat_memo=
ry_exchange_t),
-                                     &cmp.xchg, nr_exchanged) )
+            if ( __copy_field_to_guest(guest_handle_cast(compat,
+                                                         compat_memory_exc=
hange_t),
+                                       &cmp.xchg, nr_exchanged) )
                 rc =3D -EFAULT;
=20
             if ( rc < 0 )
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -242,6 +242,7 @@ void domctl_lock_release(void)
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret =3D 0;
+    bool_t copyback =3D 0;
     struct xen_domctl curop, *op =3D &curop;
=20
     if ( copy_from_guest(op, u_domctl, 1) )
@@ -469,8 +470,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
                sizeof(xen_domain_handle_t));
=20
         op->domain =3D d->domain_id;
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret =3D -EFAULT;
+        copyback =3D 1;
     }
     break;
=20
@@ -653,8 +653,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
             goto scheduler_op_out;
=20
         ret =3D sched_adjust(d, &op->u.scheduler_op);
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret =3D -EFAULT;
+        copyback =3D 1;
=20
     scheduler_op_out:
         rcu_unlock_domain(d);
@@ -686,8 +685,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         getdomaininfo(d, &op->u.getdomaininfo);
=20
         op->domain =3D op->u.getdomaininfo.domain;
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret =3D -EFAULT;
+        copyback =3D 1;
=20
     getdomaininfo_out:
         rcu_read_unlock(&domlist_read_lock);
@@ -747,8 +745,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         ret =3D copy_to_guest(op->u.vcpucontext.ctxt, c.nat, 1);
 #endif
=20
-        if ( copy_to_guest(u_domctl, op, 1) || ret )
+        if ( ret )
             ret =3D -EFAULT;
+        copyback =3D 1;
=20
     getvcpucontext_out:
         xfree(c.nat);
@@ -786,9 +785,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         op->u.getvcpuinfo.cpu_time =3D runstate.time[RUNSTATE_running];
         op->u.getvcpuinfo.cpu      =3D v->processor;
         ret =3D 0;
-
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret =3D -EFAULT;
+        copyback =3D 1;
=20
     getvcpuinfo_out:
         rcu_unlock_domain(d);
@@ -1045,6 +1042,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
=20
     domctl_lock_release();
 
+    if ( copyback && __copy_to_guest(u_domctl, op, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -981,7 +981,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&alloc_unbound, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_alloc_unbound(&alloc_unbound);
-        if ( (rc == 0) && (copy_to_guest(arg, &alloc_unbound, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &alloc_unbound, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -991,7 +991,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_interdomain(&bind_interdomain);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_interdomain, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1001,7 +1001,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_virq, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_virq(&bind_virq);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_virq, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_virq, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1011,7 +1011,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_ipi, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_ipi(&bind_ipi);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_ipi, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_ipi, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1021,7 +1021,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_pirq, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_pirq(&bind_pirq);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_pirq, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_pirq, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1047,7 +1047,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&status, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_status(&status);
-        if ( (rc == 0) && (copy_to_guest(arg, &status, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &status, 1) )
             rc = -EFAULT;
         break;
     }
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1115,12 +1115,13 @@ gnttab_unmap_grant_ref(
 
         for ( i = 0; i < c; i++ )
         {
-            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
+            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
                 goto fault;
             __gnttab_unmap_grant_ref(&op, &(common[i]));
             ++partial_done;
-            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
+            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
                 goto fault;
+            guest_handle_add_offset(uop, 1);
         }
 
         flush_tlb_mask(current->domain->domain_dirty_cpumask);
@@ -1177,12 +1178,13 @@ gnttab_unmap_and_replace(
 
         for ( i = 0; i < c; i++ )
         {
-            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
+            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
                 goto fault;
             __gnttab_unmap_and_replace(&op, &(common[i]));
             ++partial_done;
-            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
+            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
                 goto fault;
+            guest_handle_add_offset(uop, 1);
         }
 
         flush_tlb_mask(current->domain->domain_dirty_cpumask);
@@ -1396,7 +1398,7 @@ gnttab_setup_table(
  out2:
     rcu_unlock_domain(d);
  out1:
-    if ( unlikely(copy_to_guest(uop, &op, 1)) )
+    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
         return -EFAULT;
 
     return 0;
@@ -1446,7 +1448,7 @@ gnttab_query_size(
     rcu_unlock_domain(d);
 
  query_out:
-    if ( unlikely(copy_to_guest(uop, &op, 1)) )
+    if ( unlikely(__copy_to_guest(uop, &op, 1)) )
         return -EFAULT;
 
     return 0;
@@ -1542,7 +1544,7 @@ gnttab_transfer(
             return i;
 
         /* Read from caller address space. */
-        if ( unlikely(__copy_from_guest_offset(&gop, uop, i, 1)) )
+        if ( unlikely(__copy_from_guest(&gop, uop, 1)) )
         {
             gdprintk(XENLOG_INFO, "gnttab_transfer: error reading req %d/%d\n",
                     i, count);
@@ -1701,12 +1703,13 @@ gnttab_transfer(
         gop.status = GNTST_okay;
 
     copyback:
-        if ( unlikely(__copy_to_guest_offset(uop, i, &gop, 1)) )
+        if ( unlikely(__copy_field_to_guest(uop, &gop, status)) )
         {
             gdprintk(XENLOG_INFO, "gnttab_transfer: error writing resp "
                      "%d/%d\n", i, count);
             return -EFAULT;
         }
+        guest_handle_add_offset(uop, 1);
     }
 
     return 0;
@@ -2143,17 +2146,18 @@ gnttab_copy(
     {
         if (i && hypercall_preempt_check())
             return i;
-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
+        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
             return -EFAULT;
         __gnttab_copy(&op);
-        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
+        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
             return -EFAULT;
+        guest_handle_add_offset(uop, 1);
     }
     return 0;
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2265,7 +2269,7 @@ out_unlock:
 out:
     op.version = gt->gt_version;
 
-    if (copy_to_guest(uop, &op, 1))
+    if (__copy_to_guest(uop, &op, 1))
         res = -EFAULT;
 
     return res;
@@ -2329,14 +2333,14 @@ gnttab_get_status_frames(XEN_GUEST_HANDL
 out2:
     rcu_unlock_domain(d);
 out1:
-    if ( unlikely(copy_to_guest(uop, &op, 1)) )
+    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
         return -EFAULT;
 
     return 0;
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2359,7 +2363,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARA
 
     rcu_unlock_domain(d);
 
-    if ( copy_to_guest(uop, &op, 1) )
+    if ( __copy_field_to_guest(uop, &op, version) )
         return -EFAULT;
 
     return 0;
@@ -2421,7 +2425,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
                       unsigned int count)
 {
     int i;
@@ -2431,11 +2435,12 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
     {
         if ( i && hypercall_preempt_check() )
             return i;
-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
+        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
            return -EFAULT;
         op.status = __gnttab_swap_grant_ref(op.ref_a, op.ref_b);
-        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
+        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
            return -EFAULT;
+        guest_handle_add_offset(uop, 1);
     }
     return 0;
 }
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -359,7 +359,7 @@ static long memory_exchange(XEN_GUEST_HA
         {
             exch.nr_exchanged = i << in_chunk_order;
             rcu_unlock_domain(d);
-            if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
+            if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
                 return -EFAULT;
             return hypercall_create_continuation(
                 __HYPERVISOR_memory_op, "lh", XENMEM_exchange, arg);
@@ -500,7 +500,7 @@ static long memory_exchange(XEN_GUEST_HA
     }
 
     exch.nr_exchanged = exch.in.nr_extents;
-    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
+    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
         rc = -EFAULT;
     rcu_unlock_domain(d);
     return rc;
@@ -527,7 +527,7 @@ static long memory_exchange(XEN_GUEST_HA
     exch.nr_exchanged = i << in_chunk_order;
 
  fail_early:
-    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
+    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
        rc = -EFAULT;
     return rc;
 }
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -30,6 +30,7 @@
 long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
+    int copyback = -1;
     struct xen_sysctl curop, *op = &curop;
     static DEFINE_SPINLOCK(sysctl_lock);
 
@@ -55,42 +56,28 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
     switch ( op->cmd )
     {
     case XEN_SYSCTL_readconsole:
-    {
         ret = xsm_readconsole(op->u.readconsole.clear);
         if ( ret )
             break;
 
         ret = read_console_ring(&op->u.readconsole);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_tbuf_op:
-    {
         ret = xsm_tbufcontrol();
         if ( ret )
             break;
 
         ret = tb_control(&op->u.tbuf_op);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_sched_id:
-    {
         ret = xsm_sched_id();
         if ( ret )
             break;
 
         op->u.sched_id.sched_id = sched_id();
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-        else
-            ret = 0;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_getdomaininfolist:
     {
@@ -129,38 +116,27 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             break;
 
         op->u.getdomaininfolist.num_domains = num_domains;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
     }
     break;
 
 #ifdef PERF_COUNTERS
     case XEN_SYSCTL_perfc_op:
-    {
         ret = xsm_perfcontrol();
         if ( ret )
             break;
 
         ret = perfc_control(&op->u.perfc_op);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 #endif
 
 #ifdef LOCK_PROFILE
     case XEN_SYSCTL_lockprof_op:
-    {
         ret = xsm_lockprof();
         if ( ret )
             break;
 
         ret = spinlock_profile_control(&op->u.lockprof_op);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 #endif
     case XEN_SYSCTL_debug_keys:
     {
@@ -179,6 +155,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             handle_keypress(c, guest_cpu_user_regs());
         }
         ret = 0;
+        copyback = 0;
     }
     break;
 
@@ -193,22 +170,21 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( ret )
             break;
 
+        ret = -EFAULT;
         for ( i = 0; i < nr_cpus; i++ )
         {
             cpuinfo.idletime = get_cpu_idle_time(i);
 
-            ret = -EFAULT;
             if ( copy_to_guest_offset(op->u.getcpuinfo.info, i, &cpuinfo, 1) )
                 goto out;
         }
 
         op->u.getcpuinfo.nr_cpus = i;
-        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
+        ret = 0;
     }
     break;
 
     case XEN_SYSCTL_availheap:
-    {
         ret = xsm_availheap();
         if ( ret )
             break;
@@ -218,47 +194,26 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             op->u.availheap.min_bitwidth,
             op->u.availheap.max_bitwidth);
         op->u.availheap.avail_bytes <<= PAGE_SHIFT;
-
-        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
-    }
-    break;
+        break;
 
 #ifdef HAS_ACPI
     case XEN_SYSCTL_get_pmstat:
-    {
         ret = xsm_get_pmstat();
         if ( ret )
             break;
 
         ret = do_get_pm_info(&op->u.get_pmstat);
-        if ( ret )
-            break;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-        {
-            ret = -EFAULT;
-            break;
-        }
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_pm_op:
-    {
         ret = xsm_pm_op();
         if ( ret )
             break;
 
         ret = do_pm_op(&op->u.pm_op);
-        if ( ret && (ret != -EAGAIN) )
-            break;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-        {
-            ret = -EFAULT;
-            break;
-        }
-    }
-    break;
+        if ( ret == -EAGAIN )
+            copyback = 1;
+        break;
 #endif
 
     case XEN_SYSCTL_page_offline_op:
@@ -317,41 +272,39 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
         }
 
         xfree(status);
+        copyback = 0;
     }
     break;
 
     case XEN_SYSCTL_cpupool_op:
-    {
         ret = xsm_cpupool_op();
         if ( ret )
             break;
 
         ret = cpupool_do_sysctl(&op->u.cpupool_op);
-        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_scheduler_op:
-    {
         ret = xsm_sched_op();
         if ( ret )
             break;
 
         ret = sched_adjust_global(&op->u.scheduler_op);
-        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     default:
         ret = arch_do_sysctl(op, u_sysctl);
+        copyback = 0;
         break;
     }
 
 out:
     spin_unlock(&sysctl_lock);
 
+    if ( copyback && (!ret || copyback > 0) &&
+         __copy_to_guest(u_sysctl, op, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -449,7 +449,7 @@ static int add_passive_list(XEN_GUEST_HA
             current->domain, __pa(d->xenoprof->rawbuf),
             passive.buf_gmaddr, d->xenoprof->npages);
 
-    if ( copy_to_guest(arg, &passive, 1) )
+    if ( __copy_to_guest(arg, &passive, 1) )
     {
         put_domain(d);
         return -EFAULT;
@@ -604,7 +604,7 @@ static int xenoprof_op_init(XEN_GUEST_HA
     if ( xenoprof_init.is_primary )
         xenoprof_primary_profiler = current->domain;
 
-    return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
+    return __copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0;
 }
 
 #define ret_t long
@@ -651,10 +651,7 @@ static int xenoprof_op_get_buffer(XEN_GU
             d, __pa(d->xenoprof->rawbuf), xenoprof_get_buffer.buf_gmaddr,
             d->xenoprof->npages);
 
-    if ( copy_to_guest(arg, &xenoprof_get_buffer, 1) )
-        return -EFAULT;
-
-    return 0;
+    return __copy_to_guest(arg, &xenoprof_get_buffer, 1) ? -EFAULT : 0;
 }
 
 #define NONPRIV_OP(op) ( (op == XENOPROF_init)          \
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -586,7 +586,7 @@ int iommu_do_domctl(
             domctl->u.get_device_group.num_sdevs = ret;
             ret = 0;
         }
-        if ( copy_to_guest(u_domctl, domctl, 1) )
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
            ret = -EFAULT;
        rcu_unlock_domain(d);
    }



--=__PartC4F558D4.0__=
Content-Type: text/plain; name="guest-copy-streamline.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="guest-copy-streamline.patch"

streamline guest copy operations

- use the variants not validating the VA range when writing back
  structures/fields to the same space that they were previously read
  from
- when only a single field of a structure actually changed, copy back
  just that field where possible
- consolidate copying back results in a few places

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
If really necessary, this patch could of course be split up at almost
arbitrary boundaries.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -51,6 +51,7 @@ long arch_do_domctl(
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
+    bool_t copyback = 0;
 
     switch ( domctl->cmd )
     {
@@ -66,7 +67,7 @@ long arch_do_domctl(
                         &domctl->u.shadow_op,
                         guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback = 1;
         }
     }
     break;
@@ -150,8 +151,7 @@ long arch_do_domctl(
         }
 
         rcu_unlock_domain(d);
-
-        copy_to_guest(u_domctl, domctl, 1);
+        copyback = 1;
     }
     break;
 
@@ -408,7 +408,7 @@ long arch_do_domctl(
             spin_unlock(&d->page_alloc_lock);
 
             domctl->u.getmemlist.num_pfns = i;
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback = 1;
         getmemlist_out:
             rcu_unlock_domain(d);
         }
@@ -539,13 +539,11 @@ long arch_do_domctl(
             ret = -EFAULT;
 
     gethvmcontext_out:
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        rcu_unlock_domain(d);
+        copyback = 1;
 
         if ( c.data != NULL )
             xfree(c.data);
-
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -627,11 +625,9 @@ long arch_do_domctl(
         domctl->u.address_size.size =
             is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
 
-        ret = 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        ret = 0;
+        copyback = 1;
     }
     break;
 
@@ -676,13 +672,9 @@ long arch_do_domctl(
 
         domctl->u.address_size.size = d->arch.physaddr_bitsize;
 
-        ret = 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
-
-
+        ret = 0;
+        copyback = 1;
     }
     break;
 
@@ -1124,9 +1116,8 @@ long arch_do_domctl(
 
     ext_vcpucontext_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
+            copyback = 1;
     }
     break;
 
@@ -1268,10 +1259,10 @@ long arch_do_domctl(
             domctl->u.gdbsx_guest_memio.len;
 
         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
-        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
 
         rcu_unlock_domain(d);
+        if ( !ret )
+           copyback = 1;
     }
     break;
 
@@ -1358,10 +1349,9 @@ long arch_do_domctl(
                 }
             }
         }
-        ret = 0;
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
         rcu_unlock_domain(d);
+        ret = 0;
+        copyback = 1;
     }
     break;
 
@@ -1485,9 +1475,8 @@ long arch_do_domctl(
 
     vcpuextstate_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd == XEN_DOMCTL_getvcpuextstate) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        if ( domctl->cmd == XEN_DOMCTL_getvcpuextstate )
+            copyback = 1;
     }
     break;
 
@@ -1504,7 +1493,7 @@ long arch_do_domctl(
                 ret = mem_event_domctl(d, &domctl->u.mem_event_op,
                                        guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback = 1;
         }
     }
     break;
@@ -1539,8 +1528,7 @@ long arch_do_domctl(
                   &domctl->u.audit_p2m.m2p_bad,
                   &domctl->u.audit_p2m.p2m_bad);
         rcu_unlock_domain(d);
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        copyback = 1;
     }
     break;
 #endif /* P2M_AUDIT */
@@ -1573,6 +1561,9 @@ long arch_do_domctl(
         break;
     }
 
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4407,7 +4407,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
 
         if ( xatp.space == XENMAPSPACE_gmfn_range )
         {
-            if ( rc && copy_to_guest(arg, &xatp, 1) )
+            if ( rc && __copy_to_guest(arg, &xatp, 1) )
                 rc = -EFAULT;
 
             if ( rc == -EAGAIN )
@@ -4492,7 +4492,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
         map.nr_entries = min(map.nr_entries, d->arch.pv_domain.nr_e820);
         if ( copy_to_guest(map.buffer, d->arch.pv_domain.e820,
                            map.nr_entries) ||
-             copy_to_guest(arg, &map, 1) )
+             __copy_to_guest(arg, &map, 1) )
         {
             spin_unlock(&d->arch.pv_domain.e820_lock);
             return -EFAULT;
@@ -4559,7 +4559,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
 
         ctxt.map.nr_entries = ctxt.n;
 
-        if ( copy_to_guest(arg, &ctxt.map, 1) )
+        if ( __copy_to_guest(arg, &ctxt.map, 1) )
             return -EFAULT;
 
         return 0;
@@ -4630,7 +4630,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
             target.pod_cache_pages = p2m->pod.count;
             target.pod_entries     = p2m->pod.entry_count;
 
-            if ( copy_to_guest(arg, &target, 1) )
+            if ( __copy_to_guest(arg, &target, 1) )
             {
                 rc= -EFAULT;
                 goto pod_target_out_unlock;
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -384,7 +384,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         irq_status_query.flags |= XENIRQSTAT_needs_eoi;
         if ( pirq_shared(v->domain, irq) )
             irq_status_query.flags |= XENIRQSTAT_shared;
-        ret = copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
+        ret = __copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
         break;
     }
 
@@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         ret = physdev_map_pirq(map.domid, map.type, &map.index, &map.pirq,
                                &msi);
 
-        if ( copy_to_guest(arg, &map, 1) != 0 )
+        if ( __copy_to_guest(arg, &map, 1) )
             ret = -EFAULT;
         break;
     }
@@ -440,7 +440,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( ret )
             break;
         ret = ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.value);
-        if ( copy_to_guest(arg, &apic, 1) != 0 )
+        if ( __copy_to_guest(arg, &apic, 1) )
             ret = -EFAULT;
         break;
     }
@@ -478,7 +478,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         irq_op.vector = irq_op.irq;
         ret = 0;
 
-        if ( copy_to_guest(arg, &irq_op, 1) != 0 )
+        if ( __copy_to_guest(arg, &irq_op, 1) )
             ret = -EFAULT;
         break;
     }
@@ -714,7 +714,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( ret >= 0 )
         {
             out.pirq = ret;
-            ret = copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
+            ret = __copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
         }
 
         break;
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -115,7 +115,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
         {
             op->u.add_memtype.handle = 0;
             op->u.add_memtype.reg    = ret;
-            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret = __copy_field_to_guest(u_xenpf_op, op, u.add_memtype) ?
+                  -EFAULT : 0;
             if ( ret != 0 )
                 mtrr_del_page(ret, 0, 0);
         }
@@ -157,7 +158,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.read_memtype.mfn     = mfn;
             op->u.read_memtype.nr_mfns = nr_mfns;
             op->u.read_memtype.type    = type;
-            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret = __copy_field_to_guest(u_xenpf_op, op, u.read_memtype)
+                  ? -EFAULT : 0;
         }
     }
     break;
@@ -263,8 +265,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             C(legacy_sectors_per_track);
 #undef C
 
-            ret = (copy_field_to_guest(u_xenpf_op, op,
-                                       u.firmware_info.u.disk_info)
+            ret = (__copy_field_to_guest(u_xenpf_op, op,
+                                         u.firmware_info.u.disk_info)
                    ? -EFAULT : 0);
             break;
         }
@@ -281,8 +283,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.firmware_info.u.disk_mbr_signature.mbr_signature =
                 sig->signature;
 
-            ret = (copy_field_to_guest(u_xenpf_op, op,
-                                       u.firmware_info.u.disk_mbr_signature)
+            ret = (__copy_field_to_guest(u_xenpf_op, op,
                                          u.firmware_info.u.disk_mbr_signature)
                    ? -EFAULT : 0);
             break;
         }
@@ -299,10 +301,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
                 bootsym(boot_edid_caps) >> 8;
 
             ret = 0;
-            if ( copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
-                                     u.vbeddc_info.capabilities) ||
-                 copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
-                                     u.vbeddc_info.edid_transfer_time) ||
+            if ( __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
+                                       u.vbeddc_info.capabilities) ||
+                 __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
+                                       u.vbeddc_info.edid_transfer_time) ||
                  copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,
                                 bootsym(boot_edid_info), 128) )
                 ret = -EFAULT;
@@ -311,8 +313,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             ret = efi_get_info(op->u.firmware_info.index,
                                &op->u.firmware_info.u.efi_info);
             if ( ret == 0 &&
-                 copy_field_to_guest(u_xenpf_op, op,
-                                     u.firmware_info.u.efi_info) )
+                 __copy_field_to_guest(u_xenpf_op, op,
                                        u.firmware_info.u.efi_info) )
                 ret = -EFAULT;
             break;
         case XEN_FW_KBD_SHIFT_FLAGS:
@@ -323,8 +325,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.firmware_info.u.kbd_shift_flags = bootsym(kbd_shift_flags);
 
             ret = 0;
-            if ( copy_field_to_guest(u_xenpf_op, op,
-                                     u.firmware_info.u.kbd_shift_flags) )
+            if ( __copy_field_to_guest(u_xenpf_op, op,
+                                       u.firmware_info.u.kbd_shift_flags) )
                 ret = -EFAULT;
             break;
         default:
@@ -340,7 +342,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         ret = efi_runtime_call(&op->u.efi_runtime_call);
         if ( ret == 0 &&
-             copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
+             __copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
             ret = -EFAULT;
         break;
 
@@ -412,7 +414,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             ret = cpumask_to_xenctl_cpumap(&ctlmap, cpumap);
         free_cpumask_var(cpumap);
 
-        if ( ret == 0 && copy_to_guest(u_xenpf_op, op, 1) )
+        if ( ret == 0 && __copy_field_to_guest(u_xenpf_op, op, u.getidletime) )
             ret = -EFAULT;
     }
     break;
@@ -503,7 +505,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         put_cpu_maps();
 
-        ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+        ret = __copy_field_to_guest(u_xenpf_op, op, u.pcpu_info) ? -EFAULT : 0;
     }
     break;
 
@@ -538,7 +540,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         put_cpu_maps();
 
-        if ( copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
+        if ( __copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
             ret = -EFAULT;
     }
     break;
@@ -639,7 +641,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         case XEN_CORE_PARKING_GET:
             op->u.core_parking.idle_nums = get_cur_idle_nums();
-            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret = __copy_field_to_guest(u_xenpf_op, op, u.core_parking) ?
+                  -EFAULT : 0;
             break;
 
         default:
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -93,7 +93,7 @@ long arch_do_sysctl(
         if ( iommu_enabled )
             pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 
-        if ( copy_to_guest(u_sysctl, sysctl, 1) )
+        if ( __copy_field_to_guest(u_sysctl, sysctl, u.physinfo) )
             ret = -EFAULT;
     }
     break;
@@ -133,7 +133,8 @@ long arch_do_sysctl(
             }
         }
 
-        ret = ((i <= max_cpu_index) || copy_to_guest(u_sysctl, sysctl, 1))
+        ret = ((i <= max_cpu_index) ||
+               __copy_field_to_guest(u_sysctl, sysctl, u.topologyinfo))
             ? -EFAULT : 0;
     }
     break;
@@ -185,7 +186,8 @@ long arch_do_sysctl(
             }
         }
 
-        ret = ((i <= max_node_index) || copy_to_guest(u_sysctl, sysctl, 1))
+        ret = ((i <= max_node_index) ||
+               __copy_field_to_guest(u_sysctl, sysctl, u.numainfo))
            ? -EFAULT : 0;
     }
     break;
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -122,7 +122,7 @@ int compat_arch_memory_op(int op, XEN_GU
 #define XLAT_memory_map_HNDL_buffer(_d_, _s_) ((void)0)
         XLAT_memory_map(&cmp, nat);
 #undef XLAT_memory_map_HNDL_buffer
-        if ( copy_to_guest(arg, &cmp, 1) )
+        if ( __copy_to_guest(arg, &cmp, 1) )
             rc = -EFAULT;
 
         break;
@@ -148,7 +148,7 @@ int compat_arch_memory_op(int op, XEN_GU
 
         XLAT_pod_target(&cmp, nat);
 
-        if ( copy_to_guest(arg, &cmp, 1) )
+        if ( __copy_to_guest(arg, &cmp, 1) )
         {
             if ( rc == __HYPERVISOR_memory_op )
                 hypercall_cancel_continuation();
@@ -200,7 +200,7 @@ int compat_arch_memory_op(int op, XEN_GU
         }
 
         xmml.nr_extents = i;
-        if ( copy_to_guest(arg, &xmml, 1) )
+        if ( __copy_to_guest(arg, &xmml, 1) )
             rc = -EFAULT;
 
         break;
@@ -219,7 +219,7 @@ int compat_arch_memory_op(int op, XEN_GU
         if ( copy_from_guest(&meo, arg, 1) )
             return -EFAULT;
         rc = do_mem_event_op(op, meo.domain, (void *) &meo);
-        if ( !rc && copy_to_guest(arg, &meo, 1) )
+        if ( !rc && __copy_to_guest(arg, &meo, 1) )
             return -EFAULT;
         break;
     }
@@ -231,7 +231,7 @@ int compat_arch_memory_op(int op, XEN_GU
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit();
         rc = do_mem_event_op(op, mso.domain, (void *) &mso);
-        if ( !rc && copy_to_guest(arg, &mso, 1) )
+        if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
     }
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1074,7 +1074,7 @@ long subarch_memory_op(int op, XEN_GUEST
         }
 
         xmml.nr_extents = i;
-        if ( copy_to_guest(arg, &xmml, 1) )
+        if ( __copy_to_guest(arg, &xmml, 1) )
             return -EFAULT;
 
         break;
@@ -1092,7 +1092,7 @@ long subarch_memory_op(int op, XEN_GUEST
         if ( copy_from_guest(&meo, arg, 1) )
             return -EFAULT;
         rc = do_mem_event_op(op, meo.domain, (void *) &meo);
-        if ( !rc && copy_to_guest(arg, &meo, 1) )
+        if ( !rc && __copy_to_guest(arg, &meo, 1) )
             return -EFAULT;
         break;
     }
@@ -1104,7 +1104,7 @@ long subarch_memory_op(int op, XEN_GUEST
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit();
         rc = do_mem_event_op(op, mso.domain, (void *) &mso);
-        if ( !rc && copy_to_guest(arg, &mso, 1) )
+        if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
     }
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -292,8 +292,9 @@ int compat_memory_op(unsigned int cmd, X
             }
 
             cmp.xchg.nr_exchanged = nat.xchg->nr_exchanged;
-            if ( copy_field_to_guest(guest_handle_cast(compat, compat_memory_exchange_t),
-                                     &cmp.xchg, nr_exchanged) )
+            if ( __copy_field_to_guest(guest_handle_cast(compat,
                                                          compat_memory_exchange_t),
+                                       &cmp.xchg, nr_exchanged) )
                 rc = -EFAULT;
 
             if ( rc < 0 )
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -242,6 +242,7 @@ void domctl_lock_release(void)
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
+    bool_t copyback = 0;
     struct xen_domctl curop, *op = &curop;
 
     if ( copy_from_guest(op, u_domctl, 1) )
@@ -469,8 +470,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
                sizeof(xen_domain_handle_t));
 
         op->domain = d->domain_id;
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret = -EFAULT;
+        copyback = 1;
     }
     break;
 
@@ -653,8 +653,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
    goto scheduler_op_out;=0A =0A         ret =3D sched_adjust(d, =
&op->u.scheduler_op);=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A- =
           ret =3D -EFAULT;=0A+        copyback =3D 1;=0A =0A     =
scheduler_op_out:=0A         rcu_unlock_domain(d);=0A@@ -686,8 +685,7 @@ =
long do_domctl(XEN_GUEST_HANDLE_PARAM(xe=0A         getdomaininfo(d, =
&op->u.getdomaininfo);=0A =0A         op->domain =3D op->u.getdomaininfo.do=
main;=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A-            ret =
=3D -EFAULT;=0A+        copyback =3D 1;=0A =0A     getdomaininfo_out:=0A   =
      rcu_read_unlock(&domlist_read_lock);=0A@@ -747,8 +745,9 @@ long =
do_domctl(XEN_GUEST_HANDLE_PARAM(xe=0A         ret =3D copy_to_guest(op->u.=
vcpucontext.ctxt, c.nat, 1);=0A #endif=0A =0A-        if ( copy_to_guest(u_=
domctl, op, 1) || ret )=0A+        if ( ret )=0A             ret =3D =
-EFAULT;=0A+        copyback =3D 1;=0A =0A     getvcpucontext_out:=0A      =
   xfree(c.nat);=0A@@ -786,9 +785,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARA=
M(xe=0A         op->u.getvcpuinfo.cpu_time =3D runstate.time[RUNSTATE_runni=
ng];=0A         op->u.getvcpuinfo.cpu      =3D v->processor;=0A         =
ret =3D 0;=0A-=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A-        =
    ret =3D -EFAULT;=0A+        copyback =3D 1;=0A =0A     getvcpuinfo_out:=
=0A         rcu_unlock_domain(d);=0A@@ -1045,6 +1042,9 @@ long do_domctl(XE=
N_GUEST_HANDLE_PARAM(xe=0A =0A     domctl_lock_release();=0A =0A+    if ( =
copyback && __copy_to_guest(u_domctl, op, 1) )=0A+        ret =3D =
-EFAULT;=0A+=0A     return ret;=0A }=0A =0A--- a/xen/common/event_channel.c=
=0A+++ b/xen/common/event_channel.c=0A@@ -981,7 +981,7 @@ long do_event_cha=
nnel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&alloc_unbound, =
arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_alloc_unbound(&alloc_unbound);=0A-        if ( (rc =3D=3D 0) && =
(copy_to_guest(arg, &alloc_unbound, 1) !=3D 0) )=0A+        if ( !rc && =
__copy_to_guest(arg, &alloc_unbound, 1) )=0A             rc =3D -EFAULT; =
/* Cleaning up here would be a mess! */=0A         break;=0A     }=0A@@ =
-991,7 +991,7 @@ long do_event_channel_op(int cmd, XEN_GU=0A         if ( =
copy_from_guest(&bind_interdomain, arg, 1) !=3D 0 )=0A             return =
-EFAULT;=0A         rc =3D evtchn_bind_interdomain(&bind_interdomain);=0A- =
       if ( (rc =3D=3D 0) && (copy_to_guest(arg, &bind_interdomain, 1) =
!=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, &bind_interdomain, =
1) )=0A             rc =3D -EFAULT; /* Cleaning up here would be a mess! =
*/=0A         break;=0A     }=0A@@ -1001,7 +1001,7 @@ long do_event_channel=
_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&bind_virq, arg, 1) =
!=3D 0 )=0A             return -EFAULT;=0A         rc =3D evtchn_bind_virq(=
&bind_virq);=0A-        if ( (rc =3D=3D 0) && (copy_to_guest(arg, =
&bind_virq, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, =
&bind_virq, 1) )=0A             rc =3D -EFAULT; /* Cleaning up here would =
be a mess! */=0A         break;=0A     }=0A@@ -1011,7 +1011,7 @@ long =
do_event_channel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&bind_i=
pi, arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_bind_ipi(&bind_ipi);=0A-        if ( (rc =3D=3D 0) && (copy_to_guest=
(arg, &bind_ipi, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, =
&bind_ipi, 1) )=0A             rc =3D -EFAULT; /* Cleaning up here would =
be a mess! */=0A         break;=0A     }=0A@@ -1021,7 +1021,7 @@ long =
do_event_channel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&bind_p=
irq, arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_bind_pirq(&bind_pirq);=0A-        if ( (rc =3D=3D 0) && (copy_to_gue=
st(arg, &bind_pirq, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg=
, &bind_pirq, 1) )=0A             rc =3D -EFAULT; /* Cleaning up here =
would be a mess! */=0A         break;=0A     }=0A@@ -1047,7 +1047,7 @@ =
long do_event_channel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&s=
tatus, arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_status(&status);=0A-        if ( (rc =3D=3D 0) && (copy_to_guest(arg=
, &status, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, =
&status, 1) )=0A             rc =3D -EFAULT;=0A         break;=0A     =
}=0A--- a/xen/common/grant_table.c=0A+++ b/xen/common/grant_table.c=0A@@ =
-1115,12 +1115,13 @@ gnttab_unmap_grant_ref(=0A =0A         for ( i =3D 0; =
i < c; i++ )=0A         {=0A-            if ( unlikely(__copy_from_guest_of=
fset(&op, uop, done+i, 1)) )=0A+            if ( unlikely(__copy_from_guest=
(&op, uop, 1)) )=0A                 goto fault;=0A             __gnttab_unm=
ap_grant_ref(&op, &(common[i]));=0A             ++partial_done;=0A-        =
    if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )=0A+       =
     if ( unlikely(__copy_field_to_guest(uop, &op, status)) )=0A           =
      goto fault;=0A+            guest_handle_add_offset(uop, 1);=0A       =
  }=0A =0A         flush_tlb_mask(current->domain->domain_dirty_cpumask);=
=0A@@ -1177,12 +1178,13 @@ gnttab_unmap_and_replace(=0A         =0A        =
 for ( i =3D 0; i < c; i++ )=0A         {=0A-            if ( unlikely(__co=
py_from_guest_offset(&op, uop, done+i, 1)) )=0A+            if ( unlikely(_=
_copy_from_guest(&op, uop, 1)) )=0A                 goto fault;=0A         =
    __gnttab_unmap_and_replace(&op, &(common[i]));=0A             =
++partial_done;=0A-            if ( unlikely(__copy_to_guest_offset(uop, =
done+i, &op, 1)) )=0A+            if ( unlikely(__copy_field_to_guest(uop, =
&op, status)) )=0A                 goto fault;=0A+            guest_handle_=
add_offset(uop, 1);=0A         }=0A         =0A         flush_tlb_mask(curr=
ent->domain->domain_dirty_cpumask);=0A@@ -1396,7 +1398,7 @@ gnttab_setup_ta=
ble(=0A  out2:=0A     rcu_unlock_domain(d);=0A  out1:=0A-    if ( =
unlikely(copy_to_guest(uop, &op, 1)) )=0A+    if ( unlikely(__copy_field_to=
_guest(uop, &op, status)) )=0A         return -EFAULT;=0A =0A     return =
0;=0A@@ -1446,7 +1448,7 @@ gnttab_query_size(=0A     rcu_unlock_domain(d);=
=0A =0A  query_out:=0A-    if ( unlikely(copy_to_guest(uop, &op, 1)) )=0A+ =
   if ( unlikely(__copy_to_guest(uop, &op, 1)) )=0A         return =
-EFAULT;=0A =0A     return 0;=0A@@ -1542,7 +1544,7 @@ gnttab_transfer(=0A  =
           return i;=0A =0A         /* Read from caller address space. =
*/=0A-        if ( unlikely(__copy_from_guest_offset(&gop, uop, i, 1)) =
)=0A+        if ( unlikely(__copy_from_guest(&gop, uop, 1)) )=0A         =
{=0A             gdprintk(XENLOG_INFO, "gnttab_transfer: error reading req =
%d/%d\n",=0A                     i, count);=0A@@ -1701,12 +1703,13 @@ =
gnttab_transfer(=0A         gop.status =3D GNTST_okay;=0A =0A     =
copyback:=0A-        if ( unlikely(__copy_to_guest_offset(uop, i, &gop, =
1)) )=0A+        if ( unlikely(__copy_field_to_guest(uop, &gop, status)) =
)=0A         {=0A             gdprintk(XENLOG_INFO, "gnttab_transfer: =
error writing resp "=0A                      "%d/%d\n", i, count);=0A      =
       return -EFAULT;=0A         }=0A+        guest_handle_add_offset(uop,=
 1);=0A     }=0A =0A     return 0;=0A@@ -2143,17 +2146,18 @@ gnttab_copy(=
=0A     {=0A         if (i && hypercall_preempt_check())=0A             =
return i;=0A-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, =
1)) )=0A+        if ( unlikely(__copy_from_guest(&op, uop, 1)) )=0A        =
     return -EFAULT;=0A         __gnttab_copy(&op);=0A-        if ( =
unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )=0A+        if ( =
unlikely(__copy_field_to_guest(uop, &op, status)) )=0A             return =
-EFAULT;=0A+        guest_handle_add_offset(uop, 1);=0A     }=0A     =
return 0;=0A }=0A =0A static long=0A-gnttab_set_version(XEN_GUEST_HANDLE_PA=
RAM(gnttab_set_version_t uop))=0A+gnttab_set_version(XEN_GUEST_HANDLE_PARAM=
(gnttab_set_version_t) uop)=0A {=0A     gnttab_set_version_t op;=0A     =
struct domain *d =3D current->domain;=0A@@ -2265,7 +2269,7 @@ out_unlock:=
=0A out:=0A     op.version =3D gt->gt_version;=0A =0A-    if (copy_to_guest=
(uop, &op, 1))=0A+    if (__copy_to_guest(uop, &op, 1))=0A         res =3D =
-EFAULT;=0A =0A     return res;=0A@@ -2329,14 +2333,14 @@ gnttab_get_status=
_frames(XEN_GUEST_HANDL=0A out2:=0A     rcu_unlock_domain(d);=0A out1:=0A- =
   if ( unlikely(copy_to_guest(uop, &op, 1)) )=0A+    if ( unlikely(__copy_=
field_to_guest(uop, &op, status)) )=0A         return -EFAULT;=0A =0A     =
return 0;=0A }=0A =0A static long=0A-gnttab_get_version(XEN_GUEST_HANDLE_PA=
RAM(gnttab_get_version_t uop))=0A+gnttab_get_version(XEN_GUEST_HANDLE_PARAM=
(gnttab_get_version_t) uop)=0A {=0A     gnttab_get_version_t op;=0A     =
struct domain *d;=0A@@ -2359,7 +2363,7 @@ gnttab_get_version(XEN_GUEST_HAND=
LE_PARA=0A =0A     rcu_unlock_domain(d);=0A =0A-    if ( copy_to_guest(uop,=
 &op, 1) )=0A+    if ( __copy_field_to_guest(uop, &op, version) )=0A       =
  return -EFAULT;=0A =0A     return 0;=0A@@ -2421,7 +2425,7 @@ out:=0A =
}=0A =0A static long=0A-gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab=
_swap_grant_ref_t uop),=0A+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnt=
tab_swap_grant_ref_t) uop,=0A                       unsigned int count)=0A =
{=0A     int i;=0A@@ -2431,11 +2435,12 @@ gnttab_swap_grant_ref(XEN_GUEST_H=
ANDLE_P=0A     {=0A         if ( i && hypercall_preempt_check() )=0A       =
      return i;=0A-        if ( unlikely(__copy_from_guest_offset(&op, =
uop, i, 1)) )=0A+        if ( unlikely(__copy_from_guest(&op, uop, 1)) =
)=0A             return -EFAULT;=0A         op.status =3D __gnttab_swap_gra=
nt_ref(op.ref_a, op.ref_b);=0A-        if ( unlikely(__copy_to_guest_offset=
(uop, i, &op, 1)) )=0A+        if ( unlikely(__copy_field_to_guest(uop, =
&op, status)) )=0A             return -EFAULT;=0A+        guest_handle_add_=
offset(uop, 1);=0A     }=0A     return 0;=0A }=0A--- a/xen/common/memory.c=
=0A+++ b/xen/common/memory.c=0A@@ -359,7 +359,7 @@ static long memory_excha=
nge(XEN_GUEST_HA=0A         {=0A             exch.nr_exchanged =3D i << =
in_chunk_order;=0A             rcu_unlock_domain(d);=0A-            if ( =
copy_field_to_guest(arg, &exch, nr_exchanged) )=0A+            if ( =
__copy_field_to_guest(arg, &exch, nr_exchanged) )=0A                 =
return -EFAULT;=0A             return hypercall_create_continuation(=0A    =
             __HYPERVISOR_memory_op, "lh", XENMEM_exchange, arg);=0A@@ =
-500,7 +500,7 @@ static long memory_exchange(XEN_GUEST_HA=0A     }=0A =0A  =
   exch.nr_exchanged =3D exch.in.nr_extents;=0A-    if ( copy_field_to_gues=
t(arg, &exch, nr_exchanged) )=0A+    if ( __copy_field_to_guest(arg, =
&exch, nr_exchanged) )=0A         rc =3D -EFAULT;=0A     rcu_unlock_domain(=
d);=0A     return rc;=0A@@ -527,7 +527,7 @@ static long memory_exchange(XEN=
_GUEST_HA=0A     exch.nr_exchanged =3D i << in_chunk_order;=0A =0A  =
fail_early:=0A-    if ( copy_field_to_guest(arg, &exch, nr_exchanged) =
)=0A+    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )=0A         =
rc =3D -EFAULT;=0A     return rc;=0A }=0A--- a/xen/common/sysctl.c=0A+++ =
b/xen/common/sysctl.c=0A@@ -30,6 +30,7 @@=0A long do_sysctl(XEN_GUEST_HANDL=
E_PARAM(xen_sysctl_t) u_sysctl)=0A {=0A     long ret =3D 0;=0A+    int =
copyback =3D -1;=0A     struct xen_sysctl curop, *op =3D &curop;=0A     =
static DEFINE_SPINLOCK(sysctl_lock);=0A =0A@@ -55,42 +56,28 @@ long =
do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A     switch ( op->cmd )=0A     {=0A  =
   case XEN_SYSCTL_readconsole:=0A-    {=0A         ret =3D xsm_readconsole=
(op->u.readconsole.clear);=0A         if ( ret )=0A             break;=0A =
=0A         ret =3D read_console_ring(&op->u.readconsole);=0A-        if ( =
copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-    =
}=0A-    break;=0A+        break;=0A =0A     case XEN_SYSCTL_tbuf_op:=0A-  =
  {=0A         ret =3D xsm_tbufcontrol();=0A         if ( ret )=0A         =
    break;=0A =0A         ret =3D tb_control(&op->u.tbuf_op);=0A-        =
if ( copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-  =
  }=0A-    break;=0A+        break;=0A     =0A     case XEN_SYSCTL_sched_id=
:=0A-    {=0A         ret =3D xsm_sched_id();=0A         if ( ret )=0A     =
        break;=0A =0A         op->u.sched_id.sched_id =3D sched_id();=0A-  =
      if ( copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D =
-EFAULT;=0A-        else=0A-            ret =3D 0;=0A-    }=0A-    =
break;=0A+        break;=0A =0A     case XEN_SYSCTL_getdomaininfolist:=0A  =
   { =0A@@ -129,38 +116,27 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A  =
           break;=0A         =0A         op->u.getdomaininfolist.num_domain=
s =3D num_domains;=0A-=0A-        if ( copy_to_guest(u_sysctl, op, 1) =
)=0A-            ret =3D -EFAULT;=0A     }=0A     break;=0A =0A #ifdef =
PERF_COUNTERS=0A     case XEN_SYSCTL_perfc_op:=0A-    {=0A         ret =3D =
xsm_perfcontrol();=0A         if ( ret )=0A             break;=0A =0A      =
   ret =3D perfc_control(&op->u.perfc_op);=0A-        if ( copy_to_guest(u_=
sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-    }=0A-    break;=0A+=
        break;=0A #endif=0A =0A #ifdef LOCK_PROFILE=0A     case XEN_SYSCTL_=
lockprof_op:=0A-    {=0A         ret =3D xsm_lockprof();=0A         if ( =
ret )=0A             break;=0A =0A         ret =3D spinlock_profile_control=
(&op->u.lockprof_op);=0A-        if ( copy_to_guest(u_sysctl, op, 1) )=0A- =
           ret =3D -EFAULT;=0A-    }=0A-    break;=0A+        break;=0A =
#endif=0A     case XEN_SYSCTL_debug_keys:=0A     {=0A@@ -179,6 +155,7 @@ =
long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A             handle_keypress(c, =
guest_cpu_user_regs());=0A         }=0A         ret =3D 0;=0A+        =
copyback =3D 0;=0A     }=0A     break;=0A =0A@@ -193,22 +170,21 @@ long =
do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A         if ( ret )=0A             =
break;=0A =0A+        ret =3D -EFAULT;=0A         for ( i =3D 0; i < =
nr_cpus; i++ )=0A         {=0A             cpuinfo.idletime =3D get_cpu_idl=
e_time(i);=0A =0A-            ret =3D -EFAULT;=0A             if ( =
copy_to_guest_offset(op->u.getcpuinfo.info, i, &cpuinfo, 1) )=0A           =
      goto out;=0A         }=0A =0A         op->u.getcpuinfo.nr_cpus =3D =
i;=0A-        ret =3D copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;=0A+    =
    ret =3D 0;=0A     }=0A     break;=0A =0A     case XEN_SYSCTL_availheap:=
=0A-    { =0A         ret =3D xsm_availheap();=0A         if ( ret )=0A    =
         break;=0A@@ -218,47 +194,26 @@ long do_sysctl(XEN_GUEST_HANDLE_PAR=
AM(xe=0A             op->u.availheap.min_bitwidth,=0A             =
op->u.availheap.max_bitwidth);=0A         op->u.availheap.avail_bytes =
<<=3D PAGE_SHIFT;=0A-=0A-        ret =3D copy_to_guest(u_sysctl, op, 1) ? =
-EFAULT : 0;=0A-    }=0A-    break;=0A+        break;=0A =0A #ifdef =
HAS_ACPI=0A     case XEN_SYSCTL_get_pmstat:=0A-    {=0A         ret =3D =
xsm_get_pmstat();=0A         if ( ret )=0A             break;=0A =0A       =
  ret =3D do_get_pm_info(&op->u.get_pmstat);=0A-        if ( ret )=0A-     =
       break;=0A-=0A-        if ( copy_to_guest(u_sysctl, op, 1) )=0A-     =
   {=0A-            ret =3D -EFAULT;=0A-            break;=0A-        =
}=0A-    }=0A-    break;=0A+        break;=0A =0A     case XEN_SYSCTL_pm_op=
:=0A-    {=0A         ret =3D xsm_pm_op();=0A         if ( ret )=0A        =
     break;=0A =0A         ret =3D do_pm_op(&op->u.pm_op);=0A-        if ( =
ret && (ret !=3D -EAGAIN) )=0A-            break;=0A-=0A-        if ( =
copy_to_guest(u_sysctl, op, 1) )=0A-        {=0A-            ret =3D =
-EFAULT;=0A-            break;=0A-        }=0A-    }=0A-    break;=0A+     =
   if ( ret =3D=3D -EAGAIN )=0A+            copyback =3D 1;=0A+        =
break;=0A #endif=0A =0A     case XEN_SYSCTL_page_offline_op:=0A@@ -317,41 =
+272,39 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A         }=0A =0A    =
     xfree(status);=0A+        copyback =3D 0;=0A     }=0A     break;=0A =
=0A     case XEN_SYSCTL_cpupool_op:=0A-    {=0A         ret =3D xsm_cpupool=
_op();=0A         if ( ret )=0A             break;=0A =0A         ret =3D =
cpupool_do_sysctl(&op->u.cpupool_op);=0A-        if ( (ret =3D=3D 0) && =
copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-    =
}=0A-    break;=0A+        break;=0A =0A     case XEN_SYSCTL_scheduler_op:=
=0A-    {=0A         ret =3D xsm_sched_op();=0A         if ( ret )=0A      =
       break;=0A =0A         ret =3D sched_adjust_global(&op->u.scheduler_o=
p);=0A-        if ( (ret =3D=3D 0) && copy_to_guest(u_sysctl, op, 1) )=0A- =
           ret =3D -EFAULT;=0A-    }=0A-    break;=0A+        break;=0A =
=0A     default:=0A         ret =3D arch_do_sysctl(op, u_sysctl);=0A+      =
  copyback =3D 0;=0A         break;=0A     }=0A =0A  out:=0A     spin_unloc=
k(&sysctl_lock);=0A =0A+    if ( copyback && (!ret || copyback > 0) &&=0A+ =
        __copy_to_guest(u_sysctl, op, 1) )=0A+        ret =3D -EFAULT;=0A+=
=0A     return ret;=0A }=0A =0A--- a/xen/common/xenoprof.c=0A+++ b/xen/comm=
on/xenoprof.c=0A@@ -449,7 +449,7 @@ static int add_passive_list(XEN_GUEST_H=
A=0A             current->domain, __pa(d->xenoprof->rawbuf),=0A            =
 passive.buf_gmaddr, d->xenoprof->npages);=0A =0A-    if ( copy_to_guest(ar=
g, &passive, 1) )=0A+    if ( __copy_to_guest(arg, &passive, 1) )=0A     =
{=0A         put_domain(d);=0A         return -EFAULT;=0A@@ -604,7 +604,7 =
@@ static int xenoprof_op_init(XEN_GUEST_HA=0A     if ( xenoprof_init.is_pr=
imary )=0A         xenoprof_primary_profiler =3D current->domain;=0A =0A-  =
  return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);=0A+    =
return __copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0;=0A }=0A =0A =
#define ret_t long=0A@@ -651,10 +651,7 @@ static int xenoprof_op_get_buffer=
(XEN_GU=0A             d, __pa(d->xenoprof->rawbuf), xenoprof_get_buffer.bu=
f_gmaddr,=0A             d->xenoprof->npages);=0A =0A-    if ( copy_to_gues=
t(arg, &xenoprof_get_buffer, 1) )=0A-        return -EFAULT;=0A-=0A-    =
return 0;=0A+    return __copy_to_guest(arg, &xenoprof_get_buffer, 1) ? =
-EFAULT : 0;=0A }=0A =0A #define NONPRIV_OP(op) ( (op =3D=3D XENOPROF_init)=
          \=0A--- a/xen/drivers/passthrough/iommu.c=0A+++ b/xen/drivers/pas=
sthrough/iommu.c=0A@@ -586,7 +586,7 @@ int iommu_do_domctl(=0A             =
domctl->u.get_device_group.num_sdevs =3D ret;=0A             ret =3D 0;=0A =
        }=0A-        if ( copy_to_guest(u_domctl, domctl, 1) )=0A+        =
if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )=0A      =
       ret =3D -EFAULT;=0A         rcu_unlock_domain(d);=0A     }=0A
--=__PartC4F558D4.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC4F558D4.0__=--


From xen-devel-bounces@lists.xen.org Fri Dec 07 13:06:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 13:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgxdB-00043u-8Z; Fri, 07 Dec 2012 13:06:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tgxd9-00043V-MI
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 13:06:16 +0000
Received: from [85.158.143.35:35008] by server-1.bemta-4.messagelabs.com id
	8E/CB-28401-6C9E1C05; Fri, 07 Dec 2012 13:06:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1354885573!10143989!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26726 invoked from network); 7 Dec 2012 13:06:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 13:06:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 13:06:12 +0000
Message-Id: <50C1F7D402000078000AEF3E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 13:06:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC4F558D4.0__="
Subject: [Xen-devel] [PATCH] streamline guest copy operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC4F558D4.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- use the variants not validating the VA range when writing back
  structures/fields to the same space that they were previously read
  from
- when only a single field of a structure actually changed, copy back
  just that field where possible
- consolidate copying back results in a few places

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
If really necessary, this patch could of course be split up at almost
arbitrary boundaries.
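The "consolidate copying back results" point can be sketched in isolation. Everything below (struct op, do_op(), the memcpy-backed stand-ins for the guest-copy primitives, the command numbers) is hypothetical and only illustrates the shape of the change, not Xen's real API: each case merely sets a copyback flag, and a single __copy_to_guest() at the end writes the structure back to the space it was read from, so no range re-validation is needed there.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Hypothetical stand-ins: here a guest "handle" is just a pointer into a
 * buffer that do_op() has already validated, which is why the
 * double-underscore variant may skip the range check that
 * copy_from_guest() performs. */
struct op { int cmd; int result; };

static int copy_from_guest(struct op *dst, const struct op *guest)
{
    memcpy(dst, guest, sizeof(*dst)); /* real version validates the range */
    return 0;
}

static int __copy_to_guest(struct op *guest, const struct op *src)
{
    memcpy(guest, src, sizeof(*src)); /* range known good: no re-validation */
    return 0;
}

/* Consolidated pattern: cases only set 'copyback' instead of each one
 * repeating the copy_to_guest()/-EFAULT sequence. */
long do_op(struct op *guest)
{
    struct op op;
    long ret = 0;
    int copyback = 0;

    if ( copy_from_guest(&op, guest) )
        return -EFAULT;

    switch ( op.cmd )
    {
    case 1:                 /* a command that returns data to the caller */
        op.result = 42;
        copyback = 1;
        break;
    default:
        ret = -ENOSYS;
        break;
    }

    if ( copyback && __copy_to_guest(guest, &op) )
        ret = -EFAULT;

    return ret;
}
```

The single write-back site also makes it harder to forget the -EFAULT handling when a new command is added.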

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -51,6 +51,7 @@ long arch_do_domctl(
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret =3D 0;
+    bool_t copyback =3D 0;
=20
     switch ( domctl->cmd )
     {
@@ -66,7 +67,7 @@ long arch_do_domctl(
                                 &domctl->u.shadow_op,
                                 guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback =3D 1;
         }=20
     }
     break;
@@ -150,8 +151,7 @@ long arch_do_domctl(
         }
=20
         rcu_unlock_domain(d);
-
-        copy_to_guest(u_domctl, domctl, 1);
+        copyback =3D 1;
     }
     break;
=20
@@ -408,7 +408,7 @@ long arch_do_domctl(
             spin_unlock(&d->page_alloc_lock);
=20
             domctl->u.getmemlist.num_pfns =3D i;
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback =3D 1;
         getmemlist_out:
             rcu_unlock_domain(d);
         }
@@ -539,13 +539,11 @@ long arch_do_domctl(
             ret =3D -EFAULT;
=20
     gethvmcontext_out:
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        rcu_unlock_domain(d);
+        copyback =3D 1;
=20
         if ( c.data !=3D NULL )
             xfree(c.data);
-
-        rcu_unlock_domain(d);
     }
     break;
=20
@@ -627,11 +625,9 @@ long arch_do_domctl(
         domctl->u.address_size.size =3D
             is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
=20
-        ret =3D 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        ret =3D 0;
+        copyback =3D 1;
     }
     break;
=20
@@ -676,13 +672,9 @@ long arch_do_domctl(
=20
         domctl->u.address_size.size =3D d->arch.physaddr_bitsize;
=20
-        ret =3D 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
-
-
+        ret =3D 0;
+        copyback =3D 1;
     }
     break;
=20
@@ -1124,9 +1116,8 @@ long arch_do_domctl(
=20
     ext_vcpucontext_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontext) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        if ( domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontext )
+            copyback =3D 1;
     }
     break;
=20
@@ -1268,10 +1259,10 @@ long arch_do_domctl(
             domctl->u.gdbsx_guest_memio.len;
=20
         ret =3D gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_=
memio);
-        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
=20
         rcu_unlock_domain(d);
+        if ( !ret )
+           copyback =3D 1;
     }
     break;
=20
@@ -1358,10 +1349,9 @@ long arch_do_domctl(
                 }
             }
         }
-        ret =3D 0;
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
         rcu_unlock_domain(d);
+        ret =3D 0;
+        copyback =3D 1;
     }
     break;
=20
@@ -1485,9 +1475,8 @@ long arch_do_domctl(
=20
     vcpuextstate_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd =3D=3D XEN_DOMCTL_getvcpuextstate) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret =3D -EFAULT;
+        if ( domctl->cmd =3D=3D XEN_DOMCTL_getvcpuextstate )
+            copyback =3D 1;
     }
     break;
=20
@@ -1504,7 +1493,7 @@ long arch_do_domctl(
                 ret =3D mem_event_domctl(d, &domctl->u.mem_event_op,
                                        guest_handle_cast(u_domctl, =
void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback =3D 1;
         }=20
     }
     break;
@@ -1539,8 +1528,7 @@ long arch_do_domctl(
                   &domctl->u.audit_p2m.m2p_bad,
                   &domctl->u.audit_p2m.p2m_bad);
         rcu_unlock_domain(d);
-        if ( copy_to_guest(u_domctl, domctl, 1) )=20
-            ret =3D -EFAULT;
+        copyback =3D 1;
     }
     break;
 #endif /* P2M_AUDIT */
@@ -1573,6 +1561,9 @@ long arch_do_domctl(
         break;
     }
=20
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret =3D -EFAULT;
+
     return ret;
 }
=20
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4407,7 +4407,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
=20
         if ( xatp.space =3D=3D XENMAPSPACE_gmfn_range )
         {
-            if ( rc && copy_to_guest(arg, &xatp, 1) )
+            if ( rc && __copy_to_guest(arg, &xatp, 1) )
                 rc =3D -EFAULT;
=20
             if ( rc =3D=3D -EAGAIN )
@@ -4492,7 +4492,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
         map.nr_entries =3D min(map.nr_entries, d->arch.pv_domain.nr_e820);=

         if ( copy_to_guest(map.buffer, d->arch.pv_domain.e820,
                            map.nr_entries) ||
-             copy_to_guest(arg, &map, 1) )
+             __copy_to_guest(arg, &map, 1) )
         {
             spin_unlock(&d->arch.pv_domain.e820_lock);
             return -EFAULT;
@@ -4559,7 +4559,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
=20
         ctxt.map.nr_entries =3D ctxt.n;
=20
-        if ( copy_to_guest(arg, &ctxt.map, 1) )
+        if ( __copy_to_guest(arg, &ctxt.map, 1) )
             return -EFAULT;
=20
         return 0;
@@ -4630,7 +4630,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
             target.pod_cache_pages =3D p2m->pod.count;
             target.pod_entries     =3D p2m->pod.entry_count;
=20
-            if ( copy_to_guest(arg, &target, 1) )
+            if ( __copy_to_guest(arg, &target, 1) )
             {
                 rc=3D -EFAULT;
                 goto pod_target_out_unlock;
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -384,7 +384,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         irq_status_query.flags |=3D XENIRQSTAT_needs_eoi;
         if ( pirq_shared(v->domain, irq) )
             irq_status_query.flags |=3D XENIRQSTAT_shared;
-        ret =3D copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
+        ret =3D __copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
         break;
     }
=20
@@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         ret =3D physdev_map_pirq(map.domid, map.type, &map.index, =
&map.pirq,
                                &msi);
=20
-        if ( copy_to_guest(arg, &map, 1) !=3D 0 )
+        if ( __copy_to_guest(arg, &map, 1) )
             ret =3D -EFAULT;
         break;
     }
@@ -440,7 +440,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( ret )
             break;
         ret =3D ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.valu=
e);
-        if ( copy_to_guest(arg, &apic, 1) !=3D 0 )
+        if ( __copy_to_guest(arg, &apic, 1) )
             ret =3D -EFAULT;
         break;
     }
@@ -478,7 +478,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         irq_op.vector =3D irq_op.irq;
         ret =3D 0;
        =20
-        if ( copy_to_guest(arg, &irq_op, 1) !=3D 0 )
+        if ( __copy_to_guest(arg, &irq_op, 1) )
             ret =3D -EFAULT;
         break;
     }
@@ -714,7 +714,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( ret >=3D 0 )
         {
             out.pirq =3D ret;
-            ret =3D copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
+            ret =3D __copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
         }
=20
         break;
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -115,7 +115,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
         {
             op->u.add_memtype.handle = 0;
             op->u.add_memtype.reg    = ret;
-            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret = __copy_field_to_guest(u_xenpf_op, op, u.add_memtype) ?
+                  -EFAULT : 0;
             if ( ret != 0 )
                 mtrr_del_page(ret, 0, 0);
         }
@@ -157,7 +158,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.read_memtype.mfn     = mfn;
             op->u.read_memtype.nr_mfns = nr_mfns;
             op->u.read_memtype.type    = type;
-            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret = __copy_field_to_guest(u_xenpf_op, op, u.read_memtype)
+                  ? -EFAULT : 0;
         }
     }
     break;
@@ -263,8 +265,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             C(legacy_sectors_per_track);
 #undef C
 
-            ret = (copy_field_to_guest(u_xenpf_op, op,
-                                      u.firmware_info.u.disk_info)
+            ret = (__copy_field_to_guest(u_xenpf_op, op,
+                                         u.firmware_info.u.disk_info)
                    ? -EFAULT : 0);
             break;
         }
@@ -281,8 +283,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.firmware_info.u.disk_mbr_signature.mbr_signature =
                 sig->signature;
 
-            ret = (copy_field_to_guest(u_xenpf_op, op,
-                                      u.firmware_info.u.disk_mbr_signature)
+            ret = (__copy_field_to_guest(u_xenpf_op, op,
+                                         u.firmware_info.u.disk_mbr_signature)
                    ? -EFAULT : 0);
             break;
         }
@@ -299,10 +301,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
                 bootsym(boot_edid_caps) >> 8;
 
             ret = 0;
-            if ( copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
-                                     u.vbeddc_info.capabilities) ||
-                 copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
-                                     u.vbeddc_info.edid_transfer_time) ||
+            if ( __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
+                                       u.vbeddc_info.capabilities) ||
+                 __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
+                                       u.vbeddc_info.edid_transfer_time) ||
                  copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,
                                 bootsym(boot_edid_info), 128) )
                 ret = -EFAULT;
@@ -311,8 +313,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             ret = efi_get_info(op->u.firmware_info.index,
                                &op->u.firmware_info.u.efi_info);
             if ( ret == 0 &&
-                 copy_field_to_guest(u_xenpf_op, op,
-                                     u.firmware_info.u.efi_info) )
+                 __copy_field_to_guest(u_xenpf_op, op,
+                                       u.firmware_info.u.efi_info) )
                 ret = -EFAULT;
             break;
         case XEN_FW_KBD_SHIFT_FLAGS:
@@ -323,8 +325,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             op->u.firmware_info.u.kbd_shift_flags = bootsym(kbd_shift_flags);
 
             ret = 0;
-            if ( copy_field_to_guest(u_xenpf_op, op,
-                                     u.firmware_info.u.kbd_shift_flags) )
+            if ( __copy_field_to_guest(u_xenpf_op, op,
+                                       u.firmware_info.u.kbd_shift_flags) )
                 ret = -EFAULT;
             break;
         default:
@@ -340,7 +342,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         ret = efi_runtime_call(&op->u.efi_runtime_call);
         if ( ret == 0 &&
-             copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
+             __copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
             ret = -EFAULT;
         break;
 
@@ -412,7 +414,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
             ret = cpumask_to_xenctl_cpumap(&ctlmap, cpumap);
         free_cpumask_var(cpumap);
 
-        if ( ret == 0 && copy_to_guest(u_xenpf_op, op, 1) )
+        if ( ret == 0 && __copy_field_to_guest(u_xenpf_op, op, u.getidletime) )
             ret = -EFAULT;
     }
     break;
@@ -503,7 +505,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         put_cpu_maps();
 
-        ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+        ret = __copy_field_to_guest(u_xenpf_op, op, u.pcpu_info) ? -EFAULT : 0;
     }
     break;
 
@@ -538,7 +540,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         put_cpu_maps();
 
-        if ( copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
+        if ( __copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
             ret = -EFAULT;
     }
     break;
@@ -639,7 +641,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         case XEN_CORE_PARKING_GET:
             op->u.core_parking.idle_nums = get_cur_idle_nums();
-            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
+            ret = __copy_field_to_guest(u_xenpf_op, op, u.core_parking) ?
+                  -EFAULT : 0;
             break;
 
         default:
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -93,7 +93,7 @@ long arch_do_sysctl(
         if ( iommu_enabled )
             pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 
-        if ( copy_to_guest(u_sysctl, sysctl, 1) )
+        if ( __copy_field_to_guest(u_sysctl, sysctl, u.physinfo) )
             ret = -EFAULT;
     }
     break;
@@ -133,7 +133,8 @@ long arch_do_sysctl(
             }
         }
 
-        ret = ((i <= max_cpu_index) || copy_to_guest(u_sysctl, sysctl, 1))
+        ret = ((i <= max_cpu_index) ||
+               __copy_field_to_guest(u_sysctl, sysctl, u.topologyinfo))
             ? -EFAULT : 0;
     }
     break;
@@ -185,7 +186,8 @@ long arch_do_sysctl(
             }
         }
 
-        ret = ((i <= max_node_index) || copy_to_guest(u_sysctl, sysctl, 1))
+        ret = ((i <= max_node_index) ||
+               __copy_field_to_guest(u_sysctl, sysctl, u.numainfo))
             ? -EFAULT : 0;
     }
     break;
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -122,7 +122,7 @@ int compat_arch_memory_op(int op, XEN_GU
 #define XLAT_memory_map_HNDL_buffer(_d_, _s_) ((void)0)
         XLAT_memory_map(&cmp, nat);
 #undef XLAT_memory_map_HNDL_buffer
-        if ( copy_to_guest(arg, &cmp, 1) )
+        if ( __copy_to_guest(arg, &cmp, 1) )
             rc = -EFAULT;
 
         break;
@@ -148,7 +148,7 @@ int compat_arch_memory_op(int op, XEN_GU
 
         XLAT_pod_target(&cmp, nat);
 
-        if ( copy_to_guest(arg, &cmp, 1) )
+        if ( __copy_to_guest(arg, &cmp, 1) )
         {
             if ( rc == __HYPERVISOR_memory_op )
                 hypercall_cancel_continuation();
@@ -200,7 +200,7 @@ int compat_arch_memory_op(int op, XEN_GU
         }
 
         xmml.nr_extents = i;
-        if ( copy_to_guest(arg, &xmml, 1) )
+        if ( __copy_to_guest(arg, &xmml, 1) )
             rc = -EFAULT;
 
         break;
@@ -219,7 +219,7 @@ int compat_arch_memory_op(int op, XEN_GU
         if ( copy_from_guest(&meo, arg, 1) )
             return -EFAULT;
         rc = do_mem_event_op(op, meo.domain, (void *) &meo);
-        if ( !rc && copy_to_guest(arg, &meo, 1) )
+        if ( !rc && __copy_to_guest(arg, &meo, 1) )
             return -EFAULT;
         break;
     }
@@ -231,7 +231,7 @@ int compat_arch_memory_op(int op, XEN_GU
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit();
         rc = do_mem_event_op(op, mso.domain, (void *) &mso);
-        if ( !rc && copy_to_guest(arg, &mso, 1) )
+        if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
     }
     }
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1074,7 +1074,7 @@ long subarch_memory_op(int op, XEN_GUEST
         }
 
         xmml.nr_extents = i;
-        if ( copy_to_guest(arg, &xmml, 1) )
+        if ( __copy_to_guest(arg, &xmml, 1) )
             return -EFAULT;
 
         break;
@@ -1092,7 +1092,7 @@ long subarch_memory_op(int op, XEN_GUEST
         if ( copy_from_guest(&meo, arg, 1) )
             return -EFAULT;
         rc = do_mem_event_op(op, meo.domain, (void *) &meo);
-        if ( !rc && copy_to_guest(arg, &meo, 1) )
+        if ( !rc && __copy_to_guest(arg, &meo, 1) )
             return -EFAULT;
         break;
     }
@@ -1104,7 +1104,7 @@ long subarch_memory_op(int op, XEN_GUEST
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit();
         rc = do_mem_event_op(op, mso.domain, (void *) &mso);
-        if ( !rc && copy_to_guest(arg, &mso, 1) )
+        if ( !rc && __copy_to_guest(arg, &mso, 1) )
            return -EFAULT;
         break;
     }
     }
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -292,8 +292,9 @@ int compat_memory_op(unsigned int cmd, X
             }
 
             cmp.xchg.nr_exchanged = nat.xchg->nr_exchanged;
-            if ( copy_field_to_guest(guest_handle_cast(compat, compat_memory_exchange_t),
-                                     &cmp.xchg, nr_exchanged) )
+            if ( __copy_field_to_guest(guest_handle_cast(compat,
+                                                         compat_memory_exchange_t),
+                                       &cmp.xchg, nr_exchanged) )
                 rc = -EFAULT;
 
             if ( rc < 0 )
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -242,6 +242,7 @@ void domctl_lock_release(void)
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
+    bool_t copyback = 0;
     struct xen_domctl curop, *op = &curop;
 
     if ( copy_from_guest(op, u_domctl, 1) )
@@ -469,8 +470,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
                sizeof(xen_domain_handle_t));
 
         op->domain = d->domain_id;
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret = -EFAULT;
+        copyback = 1;
     }
     break;
 
@@ -653,8 +653,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
             goto scheduler_op_out;
 
         ret = sched_adjust(d, &op->u.scheduler_op);
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret = -EFAULT;
+        copyback = 1;
 
     scheduler_op_out:
         rcu_unlock_domain(d);
@@ -686,8 +685,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         getdomaininfo(d, &op->u.getdomaininfo);
 
         op->domain = op->u.getdomaininfo.domain;
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret = -EFAULT;
+        copyback = 1;
 
     getdomaininfo_out:
         rcu_read_unlock(&domlist_read_lock);
@@ -747,8 +745,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         ret = copy_to_guest(op->u.vcpucontext.ctxt, c.nat, 1);
 #endif
 
-        if ( copy_to_guest(u_domctl, op, 1) || ret )
+        if ( ret )
             ret = -EFAULT;
+        copyback = 1;
 
     getvcpucontext_out:
         xfree(c.nat);
@@ -786,9 +785,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         op->u.getvcpuinfo.cpu_time = runstate.time[RUNSTATE_running];
         op->u.getvcpuinfo.cpu      = v->processor;
         ret = 0;
-
-        if ( copy_to_guest(u_domctl, op, 1) )
-            ret = -EFAULT;
+        copyback = 1;
 
     getvcpuinfo_out:
         rcu_unlock_domain(d);
@@ -1045,6 +1042,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
 
     domctl_lock_release();
 
+    if ( copyback && __copy_to_guest(u_domctl, op, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -981,7 +981,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&alloc_unbound, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_alloc_unbound(&alloc_unbound);
-        if ( (rc == 0) && (copy_to_guest(arg, &alloc_unbound, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &alloc_unbound, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -991,7 +991,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_interdomain(&bind_interdomain);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_interdomain, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1001,7 +1001,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_virq, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_virq(&bind_virq);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_virq, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_virq, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1011,7 +1011,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_ipi, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_ipi(&bind_ipi);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_ipi, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_ipi, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1021,7 +1021,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&bind_pirq, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_bind_pirq(&bind_pirq);
-        if ( (rc == 0) && (copy_to_guest(arg, &bind_pirq, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &bind_pirq, 1) )
             rc = -EFAULT; /* Cleaning up here would be a mess! */
         break;
     }
@@ -1047,7 +1047,7 @@ long do_event_channel_op(int cmd, XEN_GU
         if ( copy_from_guest(&status, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_status(&status);
-        if ( (rc == 0) && (copy_to_guest(arg, &status, 1) != 0) )
+        if ( !rc && __copy_to_guest(arg, &status, 1) )
             rc = -EFAULT;
         break;
     }
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1115,12 +1115,13 @@ gnttab_unmap_grant_ref(
 
         for ( i = 0; i < c; i++ )
         {
-            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
+            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
                 goto fault;
             __gnttab_unmap_grant_ref(&op, &(common[i]));
             ++partial_done;
-            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
+            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
                 goto fault;
+            guest_handle_add_offset(uop, 1);
         }
 
         flush_tlb_mask(current->domain->domain_dirty_cpumask);
@@ -1177,12 +1178,13 @@ gnttab_unmap_and_replace(
 
         for ( i = 0; i < c; i++ )
         {
-            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
+            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
                 goto fault;
             __gnttab_unmap_and_replace(&op, &(common[i]));
             ++partial_done;
-            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
+            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
                 goto fault;
+            guest_handle_add_offset(uop, 1);
         }
 
         flush_tlb_mask(current->domain->domain_dirty_cpumask);
@@ -1396,7 +1398,7 @@ gnttab_setup_table(
  out2:
     rcu_unlock_domain(d);
  out1:
-    if ( unlikely(copy_to_guest(uop, &op, 1)) )
+    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
         return -EFAULT;
 
     return 0;
@@ -1446,7 +1448,7 @@ gnttab_query_size(
     rcu_unlock_domain(d);
 
  query_out:
-    if ( unlikely(copy_to_guest(uop, &op, 1)) )
+    if ( unlikely(__copy_to_guest(uop, &op, 1)) )
         return -EFAULT;
 
     return 0;
@@ -1542,7 +1544,7 @@ gnttab_transfer(
             return i;
 
         /* Read from caller address space. */
-        if ( unlikely(__copy_from_guest_offset(&gop, uop, i, 1)) )
+        if ( unlikely(__copy_from_guest(&gop, uop, 1)) )
         {
             gdprintk(XENLOG_INFO, "gnttab_transfer: error reading req %d/%d\n",
                     i, count);
@@ -1701,12 +1703,13 @@ gnttab_transfer(
         gop.status = GNTST_okay;
 
     copyback:
-        if ( unlikely(__copy_to_guest_offset(uop, i, &gop, 1)) )
+        if ( unlikely(__copy_field_to_guest(uop, &gop, status)) )
         {
             gdprintk(XENLOG_INFO, "gnttab_transfer: error writing resp "
                      "%d/%d\n", i, count);
             return -EFAULT;
         }
+        guest_handle_add_offset(uop, 1);
     }
 
     return 0;
@@ -2143,17 +2146,18 @@ gnttab_copy(
     {
         if (i && hypercall_preempt_check())
             return i;
-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
+        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
             return -EFAULT;
         __gnttab_copy(&op);
-        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
+        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
             return -EFAULT;
+        guest_handle_add_offset(uop, 1);
     }
     return 0;
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2265,7 +2269,7 @@ out_unlock:
 out:
     op.version = gt->gt_version;
 
-    if (copy_to_guest(uop, &op, 1))
+    if (__copy_to_guest(uop, &op, 1))
         res = -EFAULT;
 
     return res;
@@ -2329,14 +2333,14 @@ gnttab_get_status_frames(XEN_GUEST_HANDL
 out2:
     rcu_unlock_domain(d);
 out1:
-    if ( unlikely(copy_to_guest(uop, &op, 1)) )
+    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
         return -EFAULT;
 
     return 0;
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2359,7 +2363,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARA
 
     rcu_unlock_domain(d);
 
-    if ( copy_to_guest(uop, &op, 1) )
+    if ( __copy_field_to_guest(uop, &op, version) )
         return -EFAULT;
 
     return 0;
@@ -2421,7 +2425,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
                       unsigned int count)
 {
     int i;
@@ -2431,11 +2435,12 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
     {
         if ( i && hypercall_preempt_check() )
             return i;
-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
+        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
             return -EFAULT;
         op.status = __gnttab_swap_grant_ref(op.ref_a, op.ref_b);
-        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
+        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
             return -EFAULT;
+        guest_handle_add_offset(uop, 1);
     }
     return 0;
 }
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -359,7 +359,7 @@ static long memory_exchange(XEN_GUEST_HA
         {
             exch.nr_exchanged = i << in_chunk_order;
             rcu_unlock_domain(d);
-            if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
+            if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
                 return -EFAULT;
             return hypercall_create_continuation(
                 __HYPERVISOR_memory_op, "lh", XENMEM_exchange, arg);
@@ -500,7 +500,7 @@ static long memory_exchange(XEN_GUEST_HA
     }
 
     exch.nr_exchanged = exch.in.nr_extents;
-    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
+    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
         rc = -EFAULT;
     rcu_unlock_domain(d);
     return rc;
@@ -527,7 +527,7 @@ static long memory_exchange(XEN_GUEST_HA
     exch.nr_exchanged = i << in_chunk_order;
 
  fail_early:
-    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
+    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
         rc = -EFAULT;
     return rc;
 }
 }
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -30,6 +30,7 @@
 long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
+    int copyback = -1;
     struct xen_sysctl curop, *op = &curop;
     static DEFINE_SPINLOCK(sysctl_lock);
 
@@ -55,42 +56,28 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
     switch ( op->cmd )
     {
     case XEN_SYSCTL_readconsole:
-    {
         ret = xsm_readconsole(op->u.readconsole.clear);
         if ( ret )
             break;
 
         ret = read_console_ring(&op->u.readconsole);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_tbuf_op:
-    {
         ret = xsm_tbufcontrol();
         if ( ret )
             break;
 
         ret = tb_control(&op->u.tbuf_op);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_sched_id:
-    {
         ret = xsm_sched_id();
         if ( ret )
             break;
 
         op->u.sched_id.sched_id = sched_id();
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-        else
-            ret = 0;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_getdomaininfolist:
     {
@@ -129,38 +116,27 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             break;
 
         op->u.getdomaininfolist.num_domains = num_domains;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
     }
     break;
 
 #ifdef PERF_COUNTERS
     case XEN_SYSCTL_perfc_op:
-    {
         ret = xsm_perfcontrol();
         if ( ret )
             break;
 
         ret = perfc_control(&op->u.perfc_op);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 #endif
 
 #ifdef LOCK_PROFILE
     case XEN_SYSCTL_lockprof_op:
-    {
         ret = xsm_lockprof();
         if ( ret )
             break;
 
         ret = spinlock_profile_control(&op->u.lockprof_op);
-        if ( copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 #endif
     case XEN_SYSCTL_debug_keys:
     {
@@ -179,6 +155,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             handle_keypress(c, guest_cpu_user_regs());
         }
         ret = 0;
+        copyback = 0;
     }
     break;
 
@@ -193,22 +170,21 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( ret )
             break;
 
+        ret = -EFAULT;
         for ( i = 0; i < nr_cpus; i++ )
         {
             cpuinfo.idletime = get_cpu_idle_time(i);
 
-            ret = -EFAULT;
             if ( copy_to_guest_offset(op->u.getcpuinfo.info, i, &cpuinfo, 1) )
                 goto out;
         }
 
         op->u.getcpuinfo.nr_cpus = i;
-        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
+        ret = 0;
     }
     break;
 
     case XEN_SYSCTL_availheap:
-    {
         ret = xsm_availheap();
         if ( ret )
             break;
@@ -218,47 +194,26 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             op->u.availheap.min_bitwidth,
             op->u.availheap.max_bitwidth);
         op->u.availheap.avail_bytes <<= PAGE_SHIFT;
-
-        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
-    }
-    break;
+        break;
 
 #ifdef HAS_ACPI
     case XEN_SYSCTL_get_pmstat:
-    {
         ret = xsm_get_pmstat();
         if ( ret )
             break;
 
         ret = do_get_pm_info(&op->u.get_pmstat);
-        if ( ret )
-            break;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-        {
-            ret = -EFAULT;
-            break;
-        }
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_pm_op:
-    {
         ret = xsm_pm_op();
         if ( ret )
             break;
 
         ret = do_pm_op(&op->u.pm_op);
-        if ( ret && (ret != -EAGAIN) )
-            break;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-        {
-            ret = -EFAULT;
-            break;
-        }
-    }
-    break;
+        if ( ret == -EAGAIN )
+            copyback = 1;
+        break;
 #endif
 
     case XEN_SYSCTL_page_offline_op:
@@ -317,41 +272,39 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
         }
 
         xfree(status);
+        copyback = 0;
     }
     break;
 
     case XEN_SYSCTL_cpupool_op:
-    {
         ret = xsm_cpupool_op();
         if ( ret )
             break;
 
         ret = cpupool_do_sysctl(&op->u.cpupool_op);
-        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_scheduler_op:
-    {
         ret = xsm_sched_op();
         if ( ret )
             break;
 
         ret = sched_adjust_global(&op->u.scheduler_op);
-        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     default:
         ret = arch_do_sysctl(op, u_sysctl);
+        copyback = 0;
         break;
     }
 
 out:
     spin_unlock(&sysctl_lock);
 
+    if ( copyback && (!ret || copyback > 0) &&
+         __copy_to_guest(u_sysctl, op, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -449,7 +449,7 @@ static int add_passive_list(XEN_GUEST_HA
             current->domain, __pa(d->xenoprof->rawbuf),
             passive.buf_gmaddr, d->xenoprof->npages);
 
-    if ( copy_to_guest(arg, &passive, 1) )
+    if ( __copy_to_guest(arg, &passive, 1) )
     {
         put_domain(d);
         return -EFAULT;
@@ -604,7 +604,7 @@ static int xenoprof_op_init(XEN_GUEST_HA
     if ( xenoprof_init.is_primary )
         xenoprof_primary_profiler = current->domain;
 
-    return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
+    return __copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0;
 }
 
 #define ret_t long
@@ -651,10 +651,7 @@ static int xenoprof_op_get_buffer(XEN_GU
             d, __pa(d->xenoprof->rawbuf), xenoprof_get_buffer.buf_gmaddr,
             d->xenoprof->npages);
 
-    if ( copy_to_guest(arg, &xenoprof_get_buffer, 1) )
-        return -EFAULT;
-
-    return 0;
+    return __copy_to_guest(arg, &xenoprof_get_buffer, 1) ? -EFAULT : 0;
 }
 
 #define NONPRIV_OP(op) ( (op == XENOPROF_init)          \
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -586,7 +586,7 @@ int iommu_do_domctl(
             domctl->u.get_device_group.num_sdevs = ret;
             ret = 0;
         }
-        if ( copy_to_guest(u_domctl, domctl, 1) )
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
             ret = -EFAULT;
         rcu_unlock_domain(d);
     }



Content-Type: text/plain; name="guest-copy-streamline.patch"
Content-Disposition: attachment; filename="guest-copy-streamline.patch"

streamline guest copy operations

- use the variants not validating the VA range when writing back
  structures/fields to the same space that they were previously read
  from
- when only a single field of a structure actually changed, copy back
  just that field where possible
- consolidate copying back results in a few places

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
If really necessary, this patch could of course be split up at almost
arbitrary boundaries.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -51,6 +51,7 @@ long arch_do_domctl(
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
+    bool_t copyback = 0;
 
     switch ( domctl->cmd )
     {
@@ -66,7 +67,7 @@ long arch_do_domctl(
                                 &domctl->u.shadow_op,
                                 guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback = 1;
         }
     }
     break;
@@ -150,8 +151,7 @@ long arch_do_domctl(
         }
 
         rcu_unlock_domain(d);
-
-        copy_to_guest(u_domctl, domctl, 1);
+        copyback = 1;
     }
     break;
 
@@ -408,7 +408,7 @@ long arch_do_domctl(
             spin_unlock(&d->page_alloc_lock);
 
             domctl->u.getmemlist.num_pfns = i;
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback = 1;
         getmemlist_out:
             rcu_unlock_domain(d);
         }
@@ -539,13 +539,11 @@ long arch_do_domctl(
             ret = -EFAULT;
 
     gethvmcontext_out:
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        rcu_unlock_domain(d);
+        copyback = 1;
 
         if ( c.data != NULL )
             xfree(c.data);
-
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -627,11 +625,9 @@ long arch_do_domctl(
         domctl->u.address_size.size =
             is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
 
-        ret = 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        ret = 0;
+        copyback = 1;
     }
     break;
 
@@ -676,13 +672,9 @@ long arch_do_domctl(
 
         domctl->u.address_size.size = d->arch.physaddr_bitsize;
 
-        ret = 0;
         rcu_unlock_domain(d);
-
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
-
-
+        ret = 0;
+        copyback = 1;
     }
     break;
 
@@ -1124,9 +1116,8 @@ long arch_do_domctl(
 
     ext_vcpucontext_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
+            copyback = 1;
     }
     break;
 
@@ -1268,10 +1259,10 @@ long arch_do_domctl(
             domctl->u.gdbsx_guest_memio.len;
 
         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
-        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
 
         rcu_unlock_domain(d);
+        if ( !ret )
+           copyback = 1;
     }
     break;
 
@@ -1358,10 +1349,9 @@ long arch_do_domctl(
                 }
             }
         }
-        ret = 0;
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
         rcu_unlock_domain(d);
+        ret = 0;
+        copyback = 1;
     }
     break;
 
@@ -1485,9 +1475,8 @@ long arch_do_domctl(
 
     vcpuextstate_out:
         rcu_unlock_domain(d);
-        if ( (domctl->cmd == XEN_DOMCTL_getvcpuextstate) &&
-             copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        if ( domctl->cmd == XEN_DOMCTL_getvcpuextstate )
+            copyback = 1;
     }
     break;
 
@@ -1504,7 +1493,7 @@ long arch_do_domctl(
                 ret = mem_event_domctl(d, &domctl->u.mem_event_op,
                                        guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
-            copy_to_guest(u_domctl, domctl, 1);
+            copyback = 1;
         }
     }
     break;
@@ -1539,8 +1528,7 @@ long arch_do_domctl(
                   &domctl->u.audit_p2m.m2p_bad,
                   &domctl->u.audit_p2m.p2m_bad);
         rcu_unlock_domain(d);
-        if ( copy_to_guest(u_domctl, domctl, 1) )
-            ret = -EFAULT;
+        copyback = 1;
     }
     break;
 #endif /* P2M_AUDIT */
@@ -1573,6 +1561,9 @@ long arch_do_domctl(
         break;
     }
 
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4407,7 +4407,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
 
         if ( xatp.space == XENMAPSPACE_gmfn_range )
         {
-            if ( rc && copy_to_guest(arg, &xatp, 1) )
+            if ( rc && __copy_to_guest(arg, &xatp, 1) )
                 rc = -EFAULT;
 
             if ( rc == -EAGAIN )
@@ -4492,7 +4492,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
         map.nr_entries = min(map.nr_entries, d->arch.pv_domain.nr_e820);
         if ( copy_to_guest(map.buffer, d->arch.pv_domain.e820,
                            map.nr_entries) ||
-             copy_to_guest(arg, &map, 1) )
+             __copy_to_guest(arg, &map, 1) )
         {
             spin_unlock(&d->arch.pv_domain.e820_lock);
             return -EFAULT;
@@ -4559,7 +4559,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
 
         ctxt.map.nr_entries = ctxt.n;
 
-        if ( copy_to_guest(arg, &ctxt.map, 1) )
+        if ( __copy_to_guest(arg, &ctxt.map, 1) )
             return -EFAULT;
 
        return 0;=0A@@ -4630,7 +4630,7 @@ long arch_memory_op(int op, =
XEN_GUEST_HA=0A             target.pod_cache_pages =3D p2m->pod.count;=0A  =
           target.pod_entries     =3D p2m->pod.entry_count;=0A =0A-        =
    if ( copy_to_guest(arg, &target, 1) )=0A+            if ( __copy_to_gue=
st(arg, &target, 1) )=0A             {=0A                 rc=3D -EFAULT;=0A=
                 goto pod_target_out_unlock;=0A--- a/xen/arch/x86/physdev.c=
=0A+++ b/xen/arch/x86/physdev.c=0A@@ -384,7 +384,7 @@ ret_t do_physdev_op(i=
nt cmd, XEN_GUEST_H=0A         irq_status_query.flags |=3D XENIRQSTAT_needs=
_eoi;=0A         if ( pirq_shared(v->domain, irq) )=0A             =
irq_status_query.flags |=3D XENIRQSTAT_shared;=0A-        ret =3D =
copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;=0A+        ret =3D =
__copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;=0A         =
break;=0A     }=0A =0A@@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, =
XEN_GUEST_H=0A         ret =3D physdev_map_pirq(map.domid, map.type, =
&map.index, &map.pirq,=0A                                &msi);=0A =0A-    =
    if ( copy_to_guest(arg, &map, 1) !=3D 0 )=0A+        if ( __copy_to_gue=
st(arg, &map, 1) )=0A             ret =3D -EFAULT;=0A         break;=0A    =
 }=0A@@ -440,7 +440,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H=0A       =
  if ( ret )=0A             break;=0A         ret =3D ioapic_guest_read(api=
c.apic_physbase, apic.reg, &apic.value);=0A-        if ( copy_to_guest(arg,=
 &apic, 1) !=3D 0 )=0A+        if ( __copy_to_guest(arg, &apic, 1) )=0A    =
         ret =3D -EFAULT;=0A         break;=0A     }=0A@@ -478,7 +478,7 @@ =
ret_t do_physdev_op(int cmd, XEN_GUEST_H=0A         irq_op.vector =3D =
irq_op.irq;=0A         ret =3D 0;=0A         =0A-        if ( copy_to_guest=
(arg, &irq_op, 1) !=3D 0 )=0A+        if ( __copy_to_guest(arg, &irq_op, =
1) )=0A             ret =3D -EFAULT;=0A         break;=0A     }=0A@@ =
-714,7 +714,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H=0A         if ( =
ret >=3D 0 )=0A         {=0A             out.pirq =3D ret;=0A-            =
ret =3D copy_to_guest(arg, &out, 1) ? -EFAULT : 0;=0A+            ret =3D =
__copy_to_guest(arg, &out, 1) ? -EFAULT : 0;=0A         }=0A =0A         =
break;=0A--- a/xen/arch/x86/platform_hypercall.c=0A+++ b/xen/arch/x86/platf=
orm_hypercall.c=0A@@ -115,7 +115,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE=
_PA=0A         {=0A             op->u.add_memtype.handle =3D 0;=0A         =
    op->u.add_memtype.reg    =3D ret;=0A-            ret =3D copy_to_guest(=
u_xenpf_op, op, 1) ? -EFAULT : 0;=0A+            ret =3D __copy_field_to_gu=
est(u_xenpf_op, op, u.add_memtype) ?=0A+                  -EFAULT : 0;=0A  =
           if ( ret !=3D 0 )=0A                 mtrr_del_page(ret, 0, =
0);=0A         }=0A@@ -157,7 +158,8 @@ ret_t do_platform_op(XEN_GUEST_HANDL=
E_PA=0A             op->u.read_memtype.mfn     =3D mfn;=0A             =
op->u.read_memtype.nr_mfns =3D nr_mfns;=0A             op->u.read_memtype.t=
ype    =3D type;=0A-            ret =3D copy_to_guest(u_xenpf_op, op, 1) ? =
-EFAULT : 0;=0A+            ret =3D __copy_field_to_guest(u_xenpf_op, op, =
u.read_memtype)=0A+                  ? -EFAULT : 0;=0A         }=0A     =
}=0A     break;=0A@@ -263,8 +265,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE=
_PA=0A             C(legacy_sectors_per_track);=0A #undef C=0A =0A-        =
    ret =3D (copy_field_to_guest(u_xenpf_op, op,=0A-                       =
               u.firmware_info.u.disk_info)=0A+            ret =3D =
(__copy_field_to_guest(u_xenpf_op, op,=0A+                                 =
        u.firmware_info.u.disk_info)=0A                    ? -EFAULT : =
0);=0A             break;=0A         }=0A@@ -281,8 +283,8 @@ ret_t =
do_platform_op(XEN_GUEST_HANDLE_PA=0A             op->u.firmware_info.u.dis=
k_mbr_signature.mbr_signature =3D=0A                 sig->signature;=0A =
=0A-            ret =3D (copy_field_to_guest(u_xenpf_op, op,=0A-           =
                           u.firmware_info.u.disk_mbr_signature)=0A+       =
     ret =3D (__copy_field_to_guest(u_xenpf_op, op,=0A+                    =
                     u.firmware_info.u.disk_mbr_signature)=0A              =
      ? -EFAULT : 0);=0A             break;=0A         }=0A@@ -299,10 =
+301,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=0A                 =
bootsym(boot_edid_caps) >> 8;=0A =0A             ret =3D 0;=0A-            =
if ( copy_field_to_guest(u_xenpf_op, op, u.firmware_info.=0A-              =
                       u.vbeddc_info.capabilities) ||=0A-                 =
copy_field_to_guest(u_xenpf_op, op, u.firmware_info.=0A-                   =
                  u.vbeddc_info.edid_transfer_time) ||=0A+            if ( =
__copy_field_to_guest(u_xenpf_op, op, u.firmware_info.=0A+                 =
                      u.vbeddc_info.capabilities) ||=0A+                 =
__copy_field_to_guest(u_xenpf_op, op, u.firmware_info.=0A+                 =
                      u.vbeddc_info.edid_transfer_time) ||=0A              =
    copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,=0A              =
                   bootsym(boot_edid_info), 128) )=0A                 ret =
=3D -EFAULT;=0A@@ -311,8 +313,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=
=0A             ret =3D efi_get_info(op->u.firmware_info.index,=0A         =
                       &op->u.firmware_info.u.efi_info);=0A             if =
( ret =3D=3D 0 &&=0A-                 copy_field_to_guest(u_xenpf_op, =
op,=0A-                                     u.firmware_info.u.efi_info) =
)=0A+                 __copy_field_to_guest(u_xenpf_op, op,=0A+            =
                           u.firmware_info.u.efi_info) )=0A                =
 ret =3D -EFAULT;=0A             break;=0A         case XEN_FW_KBD_SHIFT_FL=
AGS:=0A@@ -323,8 +325,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=0A     =
        op->u.firmware_info.u.kbd_shift_flags =3D bootsym(kbd_shift_flags);=
=0A =0A             ret =3D 0;=0A-            if ( copy_field_to_guest(u_xe=
npf_op, op,=0A-                                     u.firmware_info.u.kbd_s=
hift_flags) )=0A+            if ( __copy_field_to_guest(u_xenpf_op, =
op,=0A+                                       u.firmware_info.u.kbd_shift_f=
lags) )=0A                 ret =3D -EFAULT;=0A             break;=0A       =
  default:=0A@@ -340,7 +342,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=
=0A =0A         ret =3D efi_runtime_call(&op->u.efi_runtime_call);=0A      =
   if ( ret =3D=3D 0 &&=0A-             copy_field_to_guest(u_xenpf_op, =
op, u.efi_runtime_call) )=0A+             __copy_field_to_guest(u_xenpf_op,=
 op, u.efi_runtime_call) )=0A             ret =3D -EFAULT;=0A         =
break;=0A =0A@@ -412,7 +414,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=
=0A             ret =3D cpumask_to_xenctl_cpumap(&ctlmap, cpumap);=0A      =
   free_cpumask_var(cpumap);=0A =0A-        if ( ret =3D=3D 0 && copy_to_gu=
est(u_xenpf_op, op, 1) )=0A+        if ( ret =3D=3D 0 && __copy_field_to_gu=
est(u_xenpf_op, op, u.getidletime) )=0A             ret =3D -EFAULT;=0A    =
 }=0A     break;=0A@@ -503,7 +505,7 @@ ret_t do_platform_op(XEN_GUEST_HANDL=
E_PA=0A =0A         put_cpu_maps();=0A =0A-        ret =3D copy_to_guest(u_=
xenpf_op, op, 1) ? -EFAULT : 0;=0A+        ret =3D __copy_field_to_guest(u_=
xenpf_op, op, u.pcpu_info) ? -EFAULT : 0;=0A     }=0A     break;=0A =0A@@ =
-538,7 +540,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=0A =0A         =
put_cpu_maps();=0A =0A-        if ( copy_field_to_guest(u_xenpf_op, op, =
u.pcpu_version) )=0A+        if ( __copy_field_to_guest(u_xenpf_op, op, =
u.pcpu_version) )=0A             ret =3D -EFAULT;=0A     }=0A     =
break;=0A@@ -639,7 +641,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA=0A =
=0A         case XEN_CORE_PARKING_GET:=0A             op->u.core_parking.id=
le_nums =3D get_cur_idle_nums();=0A-            ret =3D copy_to_guest(u_xen=
pf_op, op, 1) ? -EFAULT : 0;=0A+            ret =3D __copy_field_to_guest(u=
_xenpf_op, op, u.core_parking) ?=0A+                  -EFAULT : 0;=0A      =
       break;=0A =0A         default:=0A--- a/xen/arch/x86/sysctl.c=0A+++ =
b/xen/arch/x86/sysctl.c=0A@@ -93,7 +93,7 @@ long arch_do_sysctl(=0A        =
 if ( iommu_enabled )=0A             pi->capabilities |=3D XEN_SYSCTL_PHYSC=
AP_hvm_directio;=0A =0A-        if ( copy_to_guest(u_sysctl, sysctl, 1) =
)=0A+        if ( __copy_field_to_guest(u_sysctl, sysctl, u.physinfo) )=0A =
            ret =3D -EFAULT;=0A     }=0A     break;=0A@@ -133,7 +133,8 @@ =
long arch_do_sysctl(=0A             }=0A         }=0A =0A-        ret =3D =
((i <=3D max_cpu_index) || copy_to_guest(u_sysctl, sysctl, 1))=0A+        =
ret =3D ((i <=3D max_cpu_index) ||=0A+               __copy_field_to_guest(=
u_sysctl, sysctl, u.topologyinfo))=0A             ? -EFAULT : 0;=0A     =
}=0A     break;=0A@@ -185,7 +186,8 @@ long arch_do_sysctl(=0A             =
}=0A         }=0A =0A-        ret =3D ((i <=3D max_node_index) || =
copy_to_guest(u_sysctl, sysctl, 1))=0A+        ret =3D ((i <=3D max_node_in=
dex) ||=0A+               __copy_field_to_guest(u_sysctl, sysctl, =
u.numainfo))=0A             ? -EFAULT : 0;=0A     }=0A     break;=0A--- =
a/xen/arch/x86/x86_64/compat/mm.c=0A+++ b/xen/arch/x86/x86_64/compat/mm.c=
=0A@@ -122,7 +122,7 @@ int compat_arch_memory_op(int op, XEN_GU=0A #define =
XLAT_memory_map_HNDL_buffer(_d_, _s_) ((void)0)=0A         XLAT_memory_map(=
&cmp, nat);=0A #undef XLAT_memory_map_HNDL_buffer=0A-        if ( =
copy_to_guest(arg, &cmp, 1) )=0A+        if ( __copy_to_guest(arg, &cmp, =
1) )=0A             rc =3D -EFAULT;=0A =0A         break;=0A@@ -148,7 =
+148,7 @@ int compat_arch_memory_op(int op, XEN_GU=0A =0A         =
XLAT_pod_target(&cmp, nat);=0A =0A-        if ( copy_to_guest(arg, &cmp, =
1) )=0A+        if ( __copy_to_guest(arg, &cmp, 1) )=0A         {=0A       =
      if ( rc =3D=3D __HYPERVISOR_memory_op )=0A                 hypercall_=
cancel_continuation();=0A@@ -200,7 +200,7 @@ int compat_arch_memory_op(int =
op, XEN_GU=0A         }=0A =0A         xmml.nr_extents =3D i;=0A-        =
if ( copy_to_guest(arg, &xmml, 1) )=0A+        if ( __copy_to_guest(arg, =
&xmml, 1) )=0A             rc =3D -EFAULT;=0A =0A         break;=0A@@ =
-219,7 +219,7 @@ int compat_arch_memory_op(int op, XEN_GU=0A         if ( =
copy_from_guest(&meo, arg, 1) )=0A             return -EFAULT;=0A         =
rc =3D do_mem_event_op(op, meo.domain, (void *) &meo);=0A-        if ( !rc =
&& copy_to_guest(arg, &meo, 1) )=0A+        if ( !rc && __copy_to_guest(arg=
, &meo, 1) )=0A             return -EFAULT;=0A         break;=0A     =
}=0A@@ -231,7 +231,7 @@ int compat_arch_memory_op(int op, XEN_GU=0A        =
 if ( mso.op =3D=3D XENMEM_sharing_op_audit )=0A             return =
mem_sharing_audit(); =0A         rc =3D do_mem_event_op(op, mso.domain, =
(void *) &mso);=0A-        if ( !rc && copy_to_guest(arg, &mso, 1) )=0A+   =
     if ( !rc && __copy_to_guest(arg, &mso, 1) )=0A             return =
-EFAULT;=0A         break;=0A     }=0A--- a/xen/arch/x86/x86_64/mm.c=0A+++ =
b/xen/arch/x86/x86_64/mm.c=0A@@ -1074,7 +1074,7 @@ long subarch_memory_op(i=
nt op, XEN_GUEST=0A         }=0A =0A         xmml.nr_extents =3D i;=0A-    =
    if ( copy_to_guest(arg, &xmml, 1) )=0A+        if ( __copy_to_guest(arg=
, &xmml, 1) )=0A             return -EFAULT;=0A =0A         break;=0A@@ =
-1092,7 +1092,7 @@ long subarch_memory_op(int op, XEN_GUEST=0A         if =
( copy_from_guest(&meo, arg, 1) )=0A             return -EFAULT;=0A        =
 rc =3D do_mem_event_op(op, meo.domain, (void *) &meo);=0A-        if ( =
!rc && copy_to_guest(arg, &meo, 1) )=0A+        if ( !rc && __copy_to_guest=
(arg, &meo, 1) )=0A             return -EFAULT;=0A         break;=0A     =
}=0A@@ -1104,7 +1104,7 @@ long subarch_memory_op(int op, XEN_GUEST=0A      =
   if ( mso.op =3D=3D XENMEM_sharing_op_audit )=0A             return =
mem_sharing_audit(); =0A         rc =3D do_mem_event_op(op, mso.domain, =
(void *) &mso);=0A-        if ( !rc && copy_to_guest(arg, &mso, 1) )=0A+   =
     if ( !rc && __copy_to_guest(arg, &mso, 1) )=0A             return =
-EFAULT;=0A         break;=0A     }=0A--- a/xen/common/compat/memory.c=0A++=
+ b/xen/common/compat/memory.c=0A@@ -292,8 +292,9 @@ int compat_memory_op(u=
nsigned int cmd, X=0A             }=0A =0A             cmp.xchg.nr_exchange=
d =3D nat.xchg->nr_exchanged;=0A-            if ( copy_field_to_guest(guest=
_handle_cast(compat, compat_memory_exchange_t),=0A-                        =
             &cmp.xchg, nr_exchanged) )=0A+            if ( __copy_field_to=
_guest(guest_handle_cast(compat,=0A+                                       =
                  compat_memory_exchange_t),=0A+                           =
            &cmp.xchg, nr_exchanged) )=0A                 rc =3D =
-EFAULT;=0A =0A             if ( rc < 0 )=0A--- a/xen/common/domctl.c=0A+++=
 b/xen/common/domctl.c=0A@@ -242,6 +242,7 @@ void domctl_lock_release(void)=
=0A long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)=0A {=0A  =
   long ret =3D 0;=0A+    bool_t copyback =3D 0;=0A     struct xen_domctl =
curop, *op =3D &curop;=0A =0A     if ( copy_from_guest(op, u_domctl, 1) =
)=0A@@ -469,8 +470,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe=0A        =
        sizeof(xen_domain_handle_t));=0A =0A         op->domain =3D =
d->domain_id;=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A-         =
   ret =3D -EFAULT;=0A+        copyback =3D 1;=0A     }=0A     break;=0A =
=0A@@ -653,8 +653,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe=0A         =
    goto scheduler_op_out;=0A =0A         ret =3D sched_adjust(d, =
&op->u.scheduler_op);=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A- =
           ret =3D -EFAULT;=0A+        copyback =3D 1;=0A =0A     =
scheduler_op_out:=0A         rcu_unlock_domain(d);=0A@@ -686,8 +685,7 @@ =
long do_domctl(XEN_GUEST_HANDLE_PARAM(xe=0A         getdomaininfo(d, =
&op->u.getdomaininfo);=0A =0A         op->domain =3D op->u.getdomaininfo.do=
main;=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A-            ret =
=3D -EFAULT;=0A+        copyback =3D 1;=0A =0A     getdomaininfo_out:=0A   =
      rcu_read_unlock(&domlist_read_lock);=0A@@ -747,8 +745,9 @@ long =
do_domctl(XEN_GUEST_HANDLE_PARAM(xe=0A         ret =3D copy_to_guest(op->u.=
vcpucontext.ctxt, c.nat, 1);=0A #endif=0A =0A-        if ( copy_to_guest(u_=
domctl, op, 1) || ret )=0A+        if ( ret )=0A             ret =3D =
-EFAULT;=0A+        copyback =3D 1;=0A =0A     getvcpucontext_out:=0A      =
   xfree(c.nat);=0A@@ -786,9 +785,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARA=
M(xe=0A         op->u.getvcpuinfo.cpu_time =3D runstate.time[RUNSTATE_runni=
ng];=0A         op->u.getvcpuinfo.cpu      =3D v->processor;=0A         =
ret =3D 0;=0A-=0A-        if ( copy_to_guest(u_domctl, op, 1) )=0A-        =
    ret =3D -EFAULT;=0A+        copyback =3D 1;=0A =0A     getvcpuinfo_out:=
=0A         rcu_unlock_domain(d);=0A@@ -1045,6 +1042,9 @@ long do_domctl(XE=
N_GUEST_HANDLE_PARAM(xe=0A =0A     domctl_lock_release();=0A =0A+    if ( =
copyback && __copy_to_guest(u_domctl, op, 1) )=0A+        ret =3D =
-EFAULT;=0A+=0A     return ret;=0A }=0A =0A--- a/xen/common/event_channel.c=
=0A+++ b/xen/common/event_channel.c=0A@@ -981,7 +981,7 @@ long do_event_cha=
nnel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&alloc_unbound, =
arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_alloc_unbound(&alloc_unbound);=0A-        if ( (rc =3D=3D 0) && =
(copy_to_guest(arg, &alloc_unbound, 1) !=3D 0) )=0A+        if ( !rc && =
__copy_to_guest(arg, &alloc_unbound, 1) )=0A             rc =3D -EFAULT; =
/* Cleaning up here would be a mess! */=0A         break;=0A     }=0A@@ =
-991,7 +991,7 @@ long do_event_channel_op(int cmd, XEN_GU=0A         if ( =
copy_from_guest(&bind_interdomain, arg, 1) !=3D 0 )=0A             return =
-EFAULT;=0A         rc =3D evtchn_bind_interdomain(&bind_interdomain);=0A- =
       if ( (rc =3D=3D 0) && (copy_to_guest(arg, &bind_interdomain, 1) =
!=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, &bind_interdomain, =
1) )=0A             rc =3D -EFAULT; /* Cleaning up here would be a mess! =
*/=0A         break;=0A     }=0A@@ -1001,7 +1001,7 @@ long do_event_channel=
_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&bind_virq, arg, 1) =
!=3D 0 )=0A             return -EFAULT;=0A         rc =3D evtchn_bind_virq(=
&bind_virq);=0A-        if ( (rc =3D=3D 0) && (copy_to_guest(arg, =
&bind_virq, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, =
&bind_virq, 1) )=0A             rc =3D -EFAULT; /* Cleaning up here would =
be a mess! */=0A         break;=0A     }=0A@@ -1011,7 +1011,7 @@ long =
do_event_channel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&bind_i=
pi, arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_bind_ipi(&bind_ipi);=0A-        if ( (rc =3D=3D 0) && (copy_to_guest=
(arg, &bind_ipi, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, =
&bind_ipi, 1) )=0A             rc =3D -EFAULT; /* Cleaning up here would =
be a mess! */=0A         break;=0A     }=0A@@ -1021,7 +1021,7 @@ long =
do_event_channel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&bind_p=
irq, arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_bind_pirq(&bind_pirq);=0A-        if ( (rc =3D=3D 0) && (copy_to_gue=
st(arg, &bind_pirq, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg=
, &bind_pirq, 1) )=0A             rc =3D -EFAULT; /* Cleaning up here =
would be a mess! */=0A         break;=0A     }=0A@@ -1047,7 +1047,7 @@ =
long do_event_channel_op(int cmd, XEN_GU=0A         if ( copy_from_guest(&s=
tatus, arg, 1) !=3D 0 )=0A             return -EFAULT;=0A         rc =3D =
evtchn_status(&status);=0A-        if ( (rc =3D=3D 0) && (copy_to_guest(arg=
, &status, 1) !=3D 0) )=0A+        if ( !rc && __copy_to_guest(arg, =
&status, 1) )=0A             rc =3D -EFAULT;=0A         break;=0A     =
}=0A--- a/xen/common/grant_table.c=0A+++ b/xen/common/grant_table.c=0A@@ =
-1115,12 +1115,13 @@ gnttab_unmap_grant_ref(=0A =0A         for ( i =3D 0; =
i < c; i++ )=0A         {=0A-            if ( unlikely(__copy_from_guest_of=
fset(&op, uop, done+i, 1)) )=0A+            if ( unlikely(__copy_from_guest=
(&op, uop, 1)) )=0A                 goto fault;=0A             __gnttab_unm=
ap_grant_ref(&op, &(common[i]));=0A             ++partial_done;=0A-        =
    if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )=0A+       =
     if ( unlikely(__copy_field_to_guest(uop, &op, status)) )=0A           =
      goto fault;=0A+            guest_handle_add_offset(uop, 1);=0A       =
  }=0A =0A         flush_tlb_mask(current->domain->domain_dirty_cpumask);=
=0A@@ -1177,12 +1178,13 @@ gnttab_unmap_and_replace(=0A         =0A        =
 for ( i =3D 0; i < c; i++ )=0A         {=0A-            if ( unlikely(__co=
py_from_guest_offset(&op, uop, done+i, 1)) )=0A+            if ( unlikely(_=
_copy_from_guest(&op, uop, 1)) )=0A                 goto fault;=0A         =
    __gnttab_unmap_and_replace(&op, &(common[i]));=0A             =
++partial_done;=0A-            if ( unlikely(__copy_to_guest_offset(uop, =
done+i, &op, 1)) )=0A+            if ( unlikely(__copy_field_to_guest(uop, =
&op, status)) )=0A                 goto fault;=0A+            guest_handle_=
add_offset(uop, 1);=0A         }=0A         =0A         flush_tlb_mask(curr=
ent->domain->domain_dirty_cpumask);=0A@@ -1396,7 +1398,7 @@ gnttab_setup_ta=
ble(=0A  out2:=0A     rcu_unlock_domain(d);=0A  out1:=0A-    if ( =
unlikely(copy_to_guest(uop, &op, 1)) )=0A+    if ( unlikely(__copy_field_to=
_guest(uop, &op, status)) )=0A         return -EFAULT;=0A =0A     return =
0;=0A@@ -1446,7 +1448,7 @@ gnttab_query_size(=0A     rcu_unlock_domain(d);=
=0A =0A  query_out:=0A-    if ( unlikely(copy_to_guest(uop, &op, 1)) )=0A+ =
   if ( unlikely(__copy_to_guest(uop, &op, 1)) )=0A         return =
-EFAULT;=0A =0A     return 0;=0A@@ -1542,7 +1544,7 @@ gnttab_transfer(=0A  =
           return i;=0A =0A         /* Read from caller address space. =
*/=0A-        if ( unlikely(__copy_from_guest_offset(&gop, uop, i, 1)) =
)=0A+        if ( unlikely(__copy_from_guest(&gop, uop, 1)) )=0A         =
{=0A             gdprintk(XENLOG_INFO, "gnttab_transfer: error reading req =
%d/%d\n",=0A                     i, count);=0A@@ -1701,12 +1703,13 @@ =
gnttab_transfer(=0A         gop.status =3D GNTST_okay;=0A =0A     =
copyback:=0A-        if ( unlikely(__copy_to_guest_offset(uop, i, &gop, =
1)) )=0A+        if ( unlikely(__copy_field_to_guest(uop, &gop, status)) =
)=0A         {=0A             gdprintk(XENLOG_INFO, "gnttab_transfer: =
error writing resp "=0A                      "%d/%d\n", i, count);=0A      =
       return -EFAULT;=0A         }=0A+        guest_handle_add_offset(uop,=
 1);=0A     }=0A =0A     return 0;=0A@@ -2143,17 +2146,18 @@ gnttab_copy(=
=0A     {=0A         if (i && hypercall_preempt_check())=0A             =
return i;=0A-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, =
1)) )=0A+        if ( unlikely(__copy_from_guest(&op, uop, 1)) )=0A        =
     return -EFAULT;=0A         __gnttab_copy(&op);=0A-        if ( =
unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )=0A+        if ( =
unlikely(__copy_field_to_guest(uop, &op, status)) )=0A             return =
-EFAULT;=0A+        guest_handle_add_offset(uop, 1);=0A     }=0A     =
return 0;=0A }=0A =0A static long=0A-gnttab_set_version(XEN_GUEST_HANDLE_PA=
RAM(gnttab_set_version_t uop))=0A+gnttab_set_version(XEN_GUEST_HANDLE_PARAM=
(gnttab_set_version_t) uop)=0A {=0A     gnttab_set_version_t op;=0A     =
struct domain *d =3D current->domain;=0A@@ -2265,7 +2269,7 @@ out_unlock:=
=0A out:=0A     op.version =3D gt->gt_version;=0A =0A-    if (copy_to_guest=
(uop, &op, 1))=0A+    if (__copy_to_guest(uop, &op, 1))=0A         res =3D =
-EFAULT;=0A =0A     return res;=0A@@ -2329,14 +2333,14 @@ gnttab_get_status=
_frames(XEN_GUEST_HANDL=0A out2:=0A     rcu_unlock_domain(d);=0A out1:=0A- =
   if ( unlikely(copy_to_guest(uop, &op, 1)) )=0A+    if ( unlikely(__copy_=
field_to_guest(uop, &op, status)) )=0A         return -EFAULT;=0A =0A     =
return 0;=0A }=0A =0A static long=0A-gnttab_get_version(XEN_GUEST_HANDLE_PA=
RAM(gnttab_get_version_t uop))=0A+gnttab_get_version(XEN_GUEST_HANDLE_PARAM=
(gnttab_get_version_t) uop)=0A {=0A     gnttab_get_version_t op;=0A     =
struct domain *d;=0A@@ -2359,7 +2363,7 @@ gnttab_get_version(XEN_GUEST_HAND=
LE_PARA=0A =0A     rcu_unlock_domain(d);=0A =0A-    if ( copy_to_guest(uop,=
 &op, 1) )=0A+    if ( __copy_field_to_guest(uop, &op, version) )=0A       =
  return -EFAULT;=0A =0A     return 0;=0A@@ -2421,7 +2425,7 @@ out:=0A =
}=0A =0A static long=0A-gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab=
_swap_grant_ref_t uop),=0A+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnt=
tab_swap_grant_ref_t) uop,=0A                       unsigned int count)=0A =
{=0A     int i;=0A@@ -2431,11 +2435,12 @@ gnttab_swap_grant_ref(XEN_GUEST_H=
ANDLE_P=0A     {=0A         if ( i && hypercall_preempt_check() )=0A       =
      return i;=0A-        if ( unlikely(__copy_from_guest_offset(&op, =
uop, i, 1)) )=0A+        if ( unlikely(__copy_from_guest(&op, uop, 1)) =
)=0A             return -EFAULT;=0A         op.status =3D __gnttab_swap_gra=
nt_ref(op.ref_a, op.ref_b);=0A-        if ( unlikely(__copy_to_guest_offset=
(uop, i, &op, 1)) )=0A+        if ( unlikely(__copy_field_to_guest(uop, =
&op, status)) )=0A             return -EFAULT;=0A+        guest_handle_add_=
offset(uop, 1);=0A     }=0A     return 0;=0A }=0A--- a/xen/common/memory.c=
=0A+++ b/xen/common/memory.c=0A@@ -359,7 +359,7 @@ static long memory_excha=
nge(XEN_GUEST_HA=0A         {=0A             exch.nr_exchanged =3D i << =
in_chunk_order;=0A             rcu_unlock_domain(d);=0A-            if ( =
copy_field_to_guest(arg, &exch, nr_exchanged) )=0A+            if ( =
__copy_field_to_guest(arg, &exch, nr_exchanged) )=0A                 =
return -EFAULT;=0A             return hypercall_create_continuation(=0A    =
             __HYPERVISOR_memory_op, "lh", XENMEM_exchange, arg);=0A@@ =
-500,7 +500,7 @@ static long memory_exchange(XEN_GUEST_HA=0A     }=0A =0A  =
   exch.nr_exchanged =3D exch.in.nr_extents;=0A-    if ( copy_field_to_gues=
t(arg, &exch, nr_exchanged) )=0A+    if ( __copy_field_to_guest(arg, =
&exch, nr_exchanged) )=0A         rc =3D -EFAULT;=0A     rcu_unlock_domain(=
d);=0A     return rc;=0A@@ -527,7 +527,7 @@ static long memory_exchange(XEN=
_GUEST_HA=0A     exch.nr_exchanged =3D i << in_chunk_order;=0A =0A  =
fail_early:=0A-    if ( copy_field_to_guest(arg, &exch, nr_exchanged) =
)=0A+    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )=0A         =
rc =3D -EFAULT;=0A     return rc;=0A }=0A--- a/xen/common/sysctl.c=0A+++ =
b/xen/common/sysctl.c=0A@@ -30,6 +30,7 @@=0A long do_sysctl(XEN_GUEST_HANDL=
E_PARAM(xen_sysctl_t) u_sysctl)=0A {=0A     long ret =3D 0;=0A+    int =
copyback =3D -1;=0A     struct xen_sysctl curop, *op =3D &curop;=0A     =
static DEFINE_SPINLOCK(sysctl_lock);=0A =0A@@ -55,42 +56,28 @@ long =
do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A     switch ( op->cmd )=0A     {=0A  =
   case XEN_SYSCTL_readconsole:=0A-    {=0A         ret =3D xsm_readconsole=
(op->u.readconsole.clear);=0A         if ( ret )=0A             break;=0A =
=0A         ret =3D read_console_ring(&op->u.readconsole);=0A-        if ( =
copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-    =
}=0A-    break;=0A+        break;=0A =0A     case XEN_SYSCTL_tbuf_op:=0A-  =
  {=0A         ret =3D xsm_tbufcontrol();=0A         if ( ret )=0A         =
    break;=0A =0A         ret =3D tb_control(&op->u.tbuf_op);=0A-        =
if ( copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-  =
  }=0A-    break;=0A+        break;=0A     =0A     case XEN_SYSCTL_sched_id=
:=0A-    {=0A         ret =3D xsm_sched_id();=0A         if ( ret )=0A     =
        break;=0A =0A         op->u.sched_id.sched_id =3D sched_id();=0A-  =
      if ( copy_to_guest(u_sysctl, op, 1) )=0A-            ret =3D =
-EFAULT;=0A-        else=0A-            ret =3D 0;=0A-    }=0A-    =
break;=0A+        break;=0A =0A     case XEN_SYSCTL_getdomaininfolist:=0A  =
   { =0A@@ -129,38 +116,27 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A  =
           break;=0A         =0A         op->u.getdomaininfolist.num_domain=
s =3D num_domains;=0A-=0A-        if ( copy_to_guest(u_sysctl, op, 1) =
)=0A-            ret =3D -EFAULT;=0A     }=0A     break;=0A =0A #ifdef =
PERF_COUNTERS=0A     case XEN_SYSCTL_perfc_op:=0A-    {=0A         ret =3D =
xsm_perfcontrol();=0A         if ( ret )=0A             break;=0A =0A      =
   ret =3D perfc_control(&op->u.perfc_op);=0A-        if ( copy_to_guest(u_=
sysctl, op, 1) )=0A-            ret =3D -EFAULT;=0A-    }=0A-    break;=0A+=
        break;=0A #endif=0A =0A #ifdef LOCK_PROFILE=0A     case XEN_SYSCTL_=
lockprof_op:=0A-    {=0A         ret =3D xsm_lockprof();=0A         if ( =
ret )=0A             break;=0A =0A         ret =3D spinlock_profile_control=
(&op->u.lockprof_op);=0A-        if ( copy_to_guest(u_sysctl, op, 1) )=0A- =
           ret =3D -EFAULT;=0A-    }=0A-    break;=0A+        break;=0A =
#endif=0A     case XEN_SYSCTL_debug_keys:=0A     {=0A@@ -179,6 +155,7 @@ =
long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A             handle_keypress(c, =
guest_cpu_user_regs());=0A         }=0A         ret =3D 0;=0A+        =
copyback =3D 0;=0A     }=0A     break;=0A =0A@@ -193,22 +170,21 @@ long =
do_sysctl(XEN_GUEST_HANDLE_PARAM(xe=0A         if ( ret )=0A             =
break;=0A =0A+        ret =3D -EFAULT;=0A         for ( i =3D 0; i < =
nr_cpus; i++ )=0A         {=0A             cpuinfo.idletime =3D get_cpu_idl=
e_time(i);=0A =0A-            ret =3D -EFAULT;=0A             if ( =
copy_to_guest_offset(op->u.getcpuinfo.info, i, &cpuinfo, 1) )=0A           =
      goto out;=0A         }=0A =0A         op->u.getcpuinfo.nr_cpus =3D =
i;
-        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
+        ret = 0;
     }
     break;
 
     case XEN_SYSCTL_availheap:
-    { 
         ret = xsm_availheap();
         if ( ret )
             break;
@@ -218,47 +194,26 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             op->u.availheap.min_bitwidth,
             op->u.availheap.max_bitwidth);
         op->u.availheap.avail_bytes <<= PAGE_SHIFT;
-
-        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
-    }
-    break;
+        break;
 
 #ifdef HAS_ACPI
     case XEN_SYSCTL_get_pmstat:
-    {
         ret = xsm_get_pmstat();
         if ( ret )
             break;
 
         ret = do_get_pm_info(&op->u.get_pmstat);
-        if ( ret )
-            break;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-        {
-            ret = -EFAULT;
-            break;
-        }
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_pm_op:
-    {
         ret = xsm_pm_op();
         if ( ret )
             break;
 
         ret = do_pm_op(&op->u.pm_op);
-        if ( ret && (ret != -EAGAIN) )
-            break;
-
-        if ( copy_to_guest(u_sysctl, op, 1) )
-        {
-            ret = -EFAULT;
-            break;
-        }
-    }
-    break;
+        if ( ret == -EAGAIN )
+            copyback = 1;
+        break;
 #endif
 
     case XEN_SYSCTL_page_offline_op:
@@ -317,41 +272,39 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
         }
 
         xfree(status);
+        copyback = 0;
     }
     break;
 
     case XEN_SYSCTL_cpupool_op:
-    {
         ret = xsm_cpupool_op();
         if ( ret )
             break;
 
         ret = cpupool_do_sysctl(&op->u.cpupool_op);
-        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     case XEN_SYSCTL_scheduler_op:
-    {
         ret = xsm_sched_op();
         if ( ret )
             break;
 
         ret = sched_adjust_global(&op->u.scheduler_op);
-        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
-            ret = -EFAULT;
-    }
-    break;
+        break;
 
     default:
         ret = arch_do_sysctl(op, u_sysctl);
+        copyback = 0;
         break;
     }
 
  out:
     spin_unlock(&sysctl_lock);
 
+    if ( copyback && (!ret || copyback > 0) &&
+          __copy_to_guest(u_sysctl, op, 1) )
+        ret = -EFAULT;
+
     return ret;
 }
 
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -449,7 +449,7 @@ static int add_passive_list(XEN_GUEST_HA
             current->domain, __pa(d->xenoprof->rawbuf),
             passive.buf_gmaddr, d->xenoprof->npages);
 
-    if ( copy_to_guest(arg, &passive, 1) )
+    if ( __copy_to_guest(arg, &passive, 1) )
     {
         put_domain(d);
         return -EFAULT;
@@ -604,7 +604,7 @@ static int xenoprof_op_init(XEN_GUEST_HA
     if ( xenoprof_init.is_primary )
         xenoprof_primary_profiler = current->domain;
 
-    return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
+    return __copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0;
 }
 
 #define ret_t long
@@ -651,10 +651,7 @@ static int xenoprof_op_get_buffer(XEN_GU
             d, __pa(d->xenoprof->rawbuf), xenoprof_get_buffer.buf_gmaddr,
             d->xenoprof->npages);
 
-    if ( copy_to_guest(arg, &xenoprof_get_buffer, 1) )
-        return -EFAULT;
-
-    return 0;
+    return __copy_to_guest(arg, &xenoprof_get_buffer, 1) ? -EFAULT : 0;
 }
 
 #define NONPRIV_OP(op) ( (op == XENOPROF_init)          \
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -586,7 +586,7 @@ int iommu_do_domctl(
             domctl->u.get_device_group.num_sdevs = ret;
             ret = 0;
         }
-        if ( copy_to_guest(u_domctl, domctl, 1) )
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
             ret = -EFAULT;
         rcu_unlock_domain(d);
     }
--=__PartC4F558D4.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC4F558D4.0__=--


From xen-devel-bounces@lists.xen.org Fri Dec 07 13:47:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 13:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgyGJ-00055E-MW; Fri, 07 Dec 2012 13:46:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgyGI-000559-Gh
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 13:46:42 +0000
Received: from [85.158.139.211:13407] by server-9.bemta-5.messagelabs.com id
	62/8E-10690-143F1C05; Fri, 07 Dec 2012 13:46:41 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354887999!18028028!1
X-Originating-IP: [213.199.154.208]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14857 invoked from network); 7 Dec 2012 13:46:39 -0000
Received: from am1ehsobe005.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.208)
	by server-2.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Dec 2012 13:46:39 -0000
Received: from mail48-am1-R.bigfish.com (10.3.201.250) by
	AM1EHSOBE006.bigfish.com (10.3.204.26) with Microsoft SMTP Server id
	14.1.225.23; Fri, 7 Dec 2012 13:46:39 +0000
Received: from mail48-am1 (localhost [127.0.0.1])	by mail48-am1-R.bigfish.com
	(Postfix) with ESMTP id 19C773401B3;
	Fri,  7 Dec 2012 13:46:39 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1de0h1202h1e76h1d1ah1d2ahzz8275bhz2dh668h839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1155h)
Received: from mail48-am1 (localhost.localdomain [127.0.0.1]) by mail48-am1
	(MessageSwitch) id 135488799768076_17866;
	Fri,  7 Dec 2012 13:46:37 +0000 (UTC)
Received: from AM1EHSMHS004.bigfish.com (unknown [10.3.201.251])	by
	mail48-am1.bigfish.com (Postfix) with ESMTP id 0432E140064;
	Fri,  7 Dec 2012 13:46:37 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	AM1EHSMHS004.bigfish.com (10.3.207.104) with Microsoft SMTP Server id
	14.1.225.23; Fri, 7 Dec 2012 13:46:33 +0000
X-WSS-ID: 0MENY9I-02-7EP-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 23FDAC8057;	Fri,  7 Dec 2012 07:46:29 -0600 (CST)
Received: from SAUSEXDAG02.amd.com (163.181.55.2) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 7 Dec 2012 07:46:36 -0600
Received: from [10.234.222.132] (163.181.55.254) by sausexdag02.amd.com
	(163.181.55.2) with Microsoft SMTP Server id 14.2.318.4; Fri, 7 Dec 2012
	07:46:30 -0600
Message-ID: <50C1F334.20802@amd.com>
Date: Fri, 7 Dec 2012 08:46:28 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121025 Thunderbird/16.0.2
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
	<50C1E41402000078000AEED6@nat28.tlf.novell.com>
In-Reply-To: <50C1E41402000078000AEED6@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: Ian.Campbell@citrix.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/ucode: Improve error handling and
 container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Inline.

On 12/07/2012 06:41 AM, Jan Beulich wrote:
>>>> On 06.12.12 at 19:28, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> Do not report an error when a patch is not applicable to the current
>> processor; simply skip it and move on to the next patch in the container file.
>>
>> Process container file to the end instead of stopping at the first
>> applicable patch.
>>
>> Log the fact that a patch has been applied at KERN_WARNING level, add
>> CPU number to debug messages.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
>> ---
>>   xen/arch/x86/microcode_amd.c |   69 +++++++++++++++++++++++-------------------
>>   1 file changed, 38 insertions(+), 31 deletions(-)
>>
>> diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
>> index 7a54001..bb5968e 100644
>> --- a/xen/arch/x86/microcode_amd.c
>> +++ b/xen/arch/x86/microcode_amd.c
>> @@ -88,13 +88,13 @@ static int collect_cpu_info(int cpu, struct cpu_signature
>> *csig)
>>
>>       rdmsrl(MSR_AMD_PATCHLEVEL, csig->rev);
>>
>> -    printk(KERN_DEBUG "microcode: collect_cpu_info: patch_id=%#x\n",
>> -           csig->rev);
>> +    printk(KERN_DEBUG "microcode: CPU%d collect_cpu_info: patch_id=%#x\n",
>> +           cpu, csig->rev);
>>
>>       return 0;
>>   }
>>
>> -static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>> +static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>>   {
>>       struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>>       const struct microcode_header_amd *mc_header = mc_amd->mpb;
>> @@ -125,11 +125,16 @@ static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
>>           printk(KERN_DEBUG "microcode: CPU%d patch does not match "
>>                  "(patch is %x, cpu base id is %x) \n",
>>                  cpu, mc_header->processor_rev_id, equiv_cpu_id);
>> -        return -EINVAL;
>> +        return 0;
>>       }
>>
>>       if ( mc_header->patch_id <= uci->cpu_sig.rev )
>> +    {
>> +        printk(KERN_DEBUG "microcode: CPU%d patching is not needed: "
>> +               "patch provides level 0x%x, cpu is at 0x%x \n",
>
> Did you not notice that all the other messages now use %#x?
>
> Also, I'm not really convinced we need this added verbosity. I
> personally tend to run all my systems with "loglvl=all", and the
> amount of output produced by the code in this file made me
> change that for just the one AMD machine I have that is new
> enough to support microcode loading.
>
> Minimally I'd expect nothing to be printed here if the two
> versions match.

Yes, I may have gone a bit overboard with chattiness.

This is actually made worse by the fact that we go through patch 
loading twice during boot --- once through start_secondary() (or via 
microcode_presmp_init() on CPU0) and a second time when the driver is 
initialized.

Do we really need what microcode_init() does?

>
>> +               cpu, mc_header->patch_id, uci->cpu_sig.rev);
>>           return 0;
>> +    }
>>
>>       printk(KERN_DEBUG "microcode: CPU%d found a matching microcode "
>>              "update with version %#x (current=%#x)\n",
>> @@ -173,7 +178,7 @@ static int apply_microcode(int cpu)
>>           return -EIO;
>>       }
>>
>> -    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
>> +    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
>>              cpu, uci->cpu_sig.rev, hdr->patch_id);
>>
>>       uci->cpu_sig.rev = rev;
>> @@ -181,7 +186,7 @@ static int apply_microcode(int cpu)
>>       return 0;
>>   }
>>
>> -static int get_next_ucode_from_buffer_amd(
>> +static int get_ucode_from_buffer_amd(
>>       struct microcode_amd *mc_amd,
>>       const void *buf,
>>       size_t bufsize,
>> @@ -194,8 +199,12 @@ static int get_next_ucode_from_buffer_amd(
>>       off = *offset;
>>
>>       /* No more data */
>> -    if ( off >= bufsize )
>> -        return 1;
>> +    if ( off >= bufsize )
>> +    {
>> +        printk(KERN_ERR "microcode: error! "
>> +               "ucode buffer overrun\n");
>
> All on one line please (and the "error!" probably is superfluous).
>
> Also, is printing this really correct when off == bufsize? Or can
> this not happen at all?

Prior to this patch, off >= bufsize was not an error but the loop 
termination condition for the caller ("==" being the most likely case). 
Now the loop is exited explicitly, so when the helper sees off >= bufsize 
(including "==") it is an error. Having said this, it is pretty much 
impossible to hit with the current code.

>
>> +        return -EINVAL;
>> +    }
>>
>>       mpbuf = (const struct mpbhdr *)&bufp[off];
>>       if ( mpbuf->type != UCODE_UCODE_TYPE )
>> @@ -205,8 +214,8 @@ static int get_next_ucode_from_buffer_amd(
>>           return -EINVAL;
>>       }
>>
>> -    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
>> -           bufsize, mpbuf->len, off);
>> +    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu\n",
>> +           raw_smp_processor_id(), bufsize, mpbuf->len, off);
>>
>>       if ( (off + mpbuf->len) > bufsize )
>>       {
>> @@ -278,8 +287,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
>>   {
>>       struct microcode_amd *mc_amd, *mc_old;
>>       size_t offset = bufsize;
>> +    size_t last_offset, applied_offset = 0;
>>       int error = 0;
>> -    int ret;
>>       struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>>
>>       /* We should bind the task to the CPU */
>> @@ -321,33 +330,32 @@ static int cpu_request_microcode(int cpu, const void
>> *buf, size_t bufsize)
>>        */
>>       mc_amd->mpb = NULL;
>>       mc_amd->mpb_size = 0;
>> -    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
>> -                                                  &offset)) == 0 )
>> +    last_offset = offset;
>> +    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
>> +                                               &offset)) == 0 )
>>       {
>> -        error = microcode_fits(mc_amd, cpu);
>> -        if (error <= 0)
>> -            continue;
>> +        if ( microcode_fits(mc_amd, cpu) )
>> +            if ( apply_microcode(cpu) == 0 )
>
> Fold the two if()-s into one, please.
>
> But then again you lose the return value of apply_microcode()
> here, which is wrong.

Right, I need to deal with that. Unfortunately apply_microcode() may hit 
an error after a previous patch has already been successfully loaded (in 
the very unlikely case where the container has multiple matches). I will 
probably still make the driver return an error in this case too.


Thanks.
-boris


>
>> +                applied_offset = last_offset;
>>
>> -        error = apply_microcode(cpu);
>> -        if (error == 0)
>> -        {
>> -            error = 1;
>> +        last_offset = offset;
>> +
>> +        if ( offset >= bufsize )
>>               break;
>> -        }
>>       }
>>
>> -    if ( ret < 0 )
>> -        error = ret;
>> -
>>       /* On success keep the microcode patch for
>>        * re-apply on resume.
>>        */
>> -    if ( error == 1 )
>> +    if ( applied_offset != 0 )
>>       {
>> -        xfree(mc_old);
>> -        error = 0;
>> +        error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
>> +                                          &applied_offset);
>> +        if (error == 0)
>> +            xfree(mc_old);
>>       }
>> -    else
>> +
>> +    if ( applied_offset == 0 || error != 0 )
>>       {
>>           xfree(mc_amd);
>>           uci->mc.mc_amd = mc_old;
>> @@ -364,10 +372,9 @@ static int microcode_resume_match(int cpu, const void *mc)
>>       struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
>>       struct microcode_amd *mc_amd = uci->mc.mc_amd;
>>       const struct microcode_amd *src = mc;
>> -    int res = microcode_fits(src, cpu);
>>
>> -    if ( res <= 0 )
>> -        return res;
>> +    if ( microcode_fits(src, cpu) == 0 )
>
> microcode_fits() now returning bool_t clearly asks for using ! instead
> of == 0 here.
>
> Jan
>
>> +        return 0;
>>
>>       if ( src != mc_amd )
>>       {
>> --
>> 1.7.10.4
>
>
>



From xen-devel-bounces@lists.xen.org Fri Dec 07 14:06:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgyYt-0005kt-6T; Fri, 07 Dec 2012 14:05:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TgyYs-0005kn-Ae
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 14:05:54 +0000
Received: from [85.158.143.99:18578] by server-1.bemta-4.messagelabs.com id
	DE/57-28401-1C7F1C05; Fri, 07 Dec 2012 14:05:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1354889140!20974610!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTA4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17977 invoked from network); 7 Dec 2012 14:05:41 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 14:05:41 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7E5VO2000597
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 14:05:32 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7E5UFT010832
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 14:05:31 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7E5UR6029706; Fri, 7 Dec 2012 08:05:30 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 06:05:30 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E65E31C05A8; Fri,  7 Dec 2012 09:05:28 -0500 (EST)
Date: Fri, 7 Dec 2012 09:05:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Message-ID: <20121207140528.GA3140@phenom.dumpdata.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
	<20121205174307.GC16072@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"lenb@kernel.org" <lenb@kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 04:27:36AM +0000, Liu, Jinsong wrote:
> >>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> >>>> index 126d8ce..abd0396 100644
> >>>> --- a/drivers/xen/Kconfig
> >>>> +++ b/drivers/xen/Kconfig
> >>>> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
> >>>>  	  Allow kernel fetching MCE error from Xen platform and
> >>>>  	  converting it into Linux mcelog format for mcelog tools
> >>>> 
> >>>> +config XEN_ACPI_MEMORY_HOTPLUG
> >>>> +	bool "Xen ACPI memory hotplug"
> >>> 
> >>> There should be a way to make this a module.
> >> 
> >> I have some concerns about making it a module:
> >> 1. The xen and native memhotplug drivers would both work as modules,
> >> but we need the xen driver to load early.
> >> 2. A xen stub driver might solve the load-sequence issue, but
> >>    it may introduce other problems:
> >>    * if the xen driver loads and then unloads, the native driver
> >>      may get a chance to load successfully;
> > 
> > The stub driver would still "occupy" the ACPI bus for the memory
> > hotplug PnP, so I think this would not be a problem.
> > 
> 
> I'm not quite clear what you mean here. Do you mean:
> 1. a xen_stub driver + a xen_memhotplug driver, where the xen_stub driver unloads and is entirely replaced by the xen_memhotplug driver, or
> 2. a xen_stub driver (with stub ops) + xen_memhotplug ops (not a driver), where the xen_stub driver keeps occupying the bus but its stub ops are later replaced by the xen_memhotplug ops?

#2
> 
> With way #1, there is a risk that the native driver may load (if the xen driver unloads).
> With way #2, the xen_memhotplug ops lose the chance to probe/add/bind existing memory devices (since that is done when the driver is registered).

Could the stub driver have a queue of events?
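
To make the suggestion concrete, here is a minimal, self-contained sketch (all names are hypothetical, not from any posted patch) of how a stub driver could buffer hotplug notifications while no real handler is registered and replay the backlog once the xen_memhotplug ops attach:

```c
#include <stddef.h>

/* Hypothetical sketch of a stub driver that queues hotplug events.
 * Names and structures are illustrative only. */

#define EVQ_MAX 16

struct hotplug_event {
	int type;              /* e.g. a device-check notification */
	unsigned long handle;  /* opaque device handle */
};

static struct hotplug_event evq[EVQ_MAX];
static size_t evq_len;
static void (*real_handler)(const struct hotplug_event *);

/* Demo handler used below; a real driver would do the acpi_bus_add work. */
static int demo_seen;
static void demo_handler(const struct hotplug_event *ev)
{
	(void)ev;
	demo_seen++;
}

/* Notify callback owned by the stub driver. */
static void stub_notify(int type, unsigned long handle)
{
	struct hotplug_event ev = { type, handle };

	if (real_handler)
		real_handler(&ev);       /* real ops loaded: pass through */
	else if (evq_len < EVQ_MAX)
		evq[evq_len++] = ev;     /* otherwise queue for later */
}

/* Called when the xen_memhotplug ops register: drain the backlog. */
static void stub_register(void (*handler)(const struct hotplug_event *))
{
	size_t i;

	real_handler = handler;
	for (i = 0; i < evq_len; i++)
		real_handler(&evq[i]);
	evq_len = 0;
}
```

Under this scheme, notifications that arrive while the real ops are absent are deferred rather than lost.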

> 
> >>   * if the xen driver loads --> unloads --> loads again, it will lose
> >> hotplug notifications during the unload period;
> > 
> > Sure. But I think we can live with that for this driver? After all,
> > its function is just to tell the firmware to turn sockets on/off - and
> > if we miss one notification we won't take advantage of the power
> > savings - but we can do that later on.
> > 
> 
> It does not only inform the firmware.
> The hotplug notify callback will invoke acpi_bus_add -> ... -> which implicitly invokes the drv->ops.add method to add the hot-added memory device.

Gotcha.
> 
> > 
> >>   * if the xen driver loads --> unloads --> loads again, it will
> >> re-add all memory devices, but the handles for a 'boot-time memory
> >> device' and a 'hotplugged memory device' are different, and we have
> >> no way to distinguish these two kinds of devices.
> > 
> > Wouldn't the stub driver hold onto that?
> > 
> 
> Same question as comment #1. Do you mean a xen_stub driver (with stub ops) plus xen_memhotplug ops?

Correct.
> 
> >> 
> >> IMHO, making the xen hotplug logic a module may involve
> >> unexpected results. Are there any obvious advantages to doing so?
> >> After all, we have provided the config choice to the user. Thoughts?
> > 
> > Yes, it becomes a module - which is what we want.
> > 
> 
> What I meant here is that building it as a module will bring some unexpected issues for xen hotplug.
> We can give the user a 'bool' config choice and let them decide whether to build it in, but not a 'tristate' choice.

What would be involved in making it a tristate choice?
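
For reference, a tristate entry would differ only in its type keyword; a hypothetical sketch (the `depends on` line and help text are assumptions, not from the posted patch):

```
config XEN_ACPI_MEMORY_HOTPLUG
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && ACPI
	help
	  Hot-add memory on a Xen platform. When built as a module,
	  the load-ordering and unload issues discussed in this thread
	  would need the stub-driver scheme to be resolved.
```
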
> 
> Thanks
> Jinsong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:09:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:09:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgybz-0005tv-Qs; Fri, 07 Dec 2012 14:09:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tgyby-0005tn-SG
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:09:07 +0000
Received: from [85.158.143.99:35548] by server-1.bemta-4.messagelabs.com id
	3B/CB-28401-188F1C05; Fri, 07 Dec 2012 14:09:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354889337!21463012!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTA4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23836 invoked from network); 7 Dec 2012 14:09:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 14:09:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7E8tCU004195
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 14:08:55 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7E8s0U005314
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 14:08:55 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7E8spZ031013; Fri, 7 Dec 2012 08:08:54 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 06:08:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EDC7C1C05A8; Fri,  7 Dec 2012 09:08:52 -0500 (EST)
Date: Fri, 7 Dec 2012 09:08:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Dongxiao Xu <dongxiao.xu@intel.com>
Message-ID: <20121207140852.GC3140@phenom.dumpdata.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 09:08:42PM +0800, Dongxiao Xu wrote:
> While mapping sg buffers, a check for DMA buffers that cross a page
> boundary is also needed. If a guest DMA buffer crosses a page
> boundary, Xen should exchange contiguous memory for it.

So this is for when we cross those 2MB contiguous swaths of buffers.
Wouldn't we get the same problem with the 'map_page' call? If the
driver tried to map, say, a 4MB DMA region?

What if this check was done in the routines that provide the
software static buffers, and those tried to hand out a nice
DMA-contiguous swath of pages?
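
As a side note, the trigger condition the patch relies on can be sketched in a standalone form. This is a simplified illustration: the real range_straddles_page_boundary() in the Xen tree additionally checks whether the underlying machine frames happen to be contiguous already.

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Simplified sketch of the exchange-path trigger: a buffer needs the
 * contiguous-exchange treatment only if it spills past the page that
 * its first byte lives in. */
static bool straddles_page_boundary(unsigned long paddr, size_t len)
{
	unsigned long offset = paddr & ~PAGE_MASK;

	return offset + len > PAGE_SIZE;
}
```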

> 
> Besides, it is necessary to back up the original page contents
> and copy them back after the memory exchange is done.
> 
> This fixes issues where a device DMAs into software static buffers
> and the static buffer crosses a page boundary whose pages are not
> contiguous in real hardware.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> ---
>  drivers/xen/swiotlb-xen.c |   47 ++++++++++++++++++++++++++++++++++++++++++++-
>  1 files changed, 46 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 58db6df..e8f0cfb 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
>  }
>  EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
>  
> +static bool
> +check_continguous_region(unsigned long vstart, unsigned long order)
> +{
> +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> +	unsigned long next_ma;
> +	int i;
> +
> +	for (i = 1; i < (1 << order); i++) {
> +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> +		if (next_ma != prev_ma + PAGE_SIZE)
> +			return false;
> +		prev_ma = next_ma;
> +	}
> +	return true;
> +}
> +
>  /*
>   * Map a set of buffers described by scatterlist in streaming mode for DMA.
>   * This is the scatter-gather version of the above xen_swiotlb_map_page
> @@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
>  
>  	for_each_sg(sgl, sg, nelems, i) {
>  		phys_addr_t paddr = sg_phys(sg);
> -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> +		unsigned long vstart, order;
> +		dma_addr_t dev_addr;
> +
> +		/*
> +		 * While mapping sg buffers, checking to cross page DMA buffer
> +		 * is also needed. If the guest DMA buffer crosses page
> +		 * boundary, Xen should exchange contiguous memory for it.
> +		 * Besides, it is needed to backup the original page contents
> +		 * and copy it back after memory exchange is done.
> +		 */
> +		if (range_straddles_page_boundary(paddr, sg->length)) {
> +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> +			if (!check_continguous_region(vstart, order)) {
> +				unsigned long buf;
> +				buf = __get_free_pages(GFP_KERNEL, order);
> +				memcpy((void *)buf, (void *)vstart,
> +					PAGE_SIZE * (1 << order));
> +				if (xen_create_contiguous_region(vstart, order,
> +						fls64(paddr))) {
> +					free_pages(buf, order);
> +					return 0;
> +				}
> +				memcpy((void *)vstart, (void *)buf,
> +					PAGE_SIZE * (1 << order));
> +				free_pages(buf, order);
> +			}
> +		}
> +
> +		dev_addr = xen_phys_to_bus(paddr);
>  
>  		if (swiotlb_force ||
>  		    !dma_capable(hwdev, dev_addr, sg->length) ||
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:11:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgye5-00062j-CO; Fri, 07 Dec 2012 14:11:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tgye4-00062Y-4X
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:11:16 +0000
Received: from [85.158.137.99:36543] by server-10.bemta-3.messagelabs.com id
	9C/8D-19806-EF8F1C05; Fri, 07 Dec 2012 14:11:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354889465!15209245!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17120 invoked from network); 7 Dec 2012 14:11:08 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 14:11:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7EB2dB027534
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 14:11:03 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7EB14T020875
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 14:11:02 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7EB1Ud032632; Fri, 7 Dec 2012 08:11:01 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 06:11:01 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4068A1C05A8; Fri,  7 Dec 2012 09:11:00 -0500 (EST)
Date: Fri, 7 Dec 2012 09:11:00 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207141100.GD3140@phenom.dumpdata.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<50C0ADB502000078000AE9DA@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C0ADB502000078000AE9DA@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Dongxiao Xu <dongxiao.xu@intel.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 01:37:41PM +0000, Jan Beulich wrote:
> >>> On 06.12.12 at 14:08, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > While mapping sg buffers, a check for DMA buffers that cross a page
> > boundary is also needed. If a guest DMA buffer crosses a page
> > boundary, Xen should exchange contiguous memory for it.
> > 
> > Besides, it is necessary to back up the original page contents
> > and copy them back after the memory exchange is done.
> > 
> > This fixes issues where a device DMAs into software static buffers
> > and the static buffer crosses a page boundary whose pages are not
> > contiguous in real hardware.
> > 
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > ---
> >  drivers/xen/swiotlb-xen.c |   47 
> > ++++++++++++++++++++++++++++++++++++++++++++-
> >  1 files changed, 46 insertions(+), 1 deletions(-)
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 58db6df..e8f0cfb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, 
> > dma_addr_t dev_addr,
> >  }
> >  EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
> >  
> > +static bool
> > +check_continguous_region(unsigned long vstart, unsigned long order)
> 
> check_continguous_region(unsigned long vstart, unsigned int order)
> 
> But - why do you need to do this check order-based in the first
> place? Checking the actual length of the buffer should suffice.
> 
> > +{
> > +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> > +	unsigned long next_ma;
> 
> phys_addr_t or some such for both of them.
> 
> > +	int i;
> 
> unsigned long
> 
> > +
> > +	for (i = 1; i < (1 << order); i++) {
> 
> 1UL
> 
> > +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> > +		if (next_ma != prev_ma + PAGE_SIZE)
> > +			return false;
> > +		prev_ma = next_ma;
> > +	}
> > +	return true;
> > +}
> > +
> >  /*
> >   * Map a set of buffers described by scatterlist in streaming mode for DMA.
> >   * This is the scatter-gather version of the above xen_swiotlb_map_page
> > @@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct 
> > scatterlist *sgl,
> >  
> >  	for_each_sg(sgl, sg, nelems, i) {
> >  		phys_addr_t paddr = sg_phys(sg);
> > -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> > +		unsigned long vstart, order;
> > +		dma_addr_t dev_addr;
> > +
> > +		/*
> > +		 * While mapping sg buffers, checking to cross page DMA buffer
> > +		 * is also needed. If the guest DMA buffer crosses page
> > +		 * boundary, Xen should exchange contiguous memory for it.
> > +		 * Besides, it is needed to backup the original page contents
> > +		 * and copy it back after memory exchange is done.
> > +		 */
> > +		if (range_straddles_page_boundary(paddr, sg->length)) {
> > +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> > +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> > +			if (!check_continguous_region(vstart, order)) {
> > +				unsigned long buf;
> > +				buf = __get_free_pages(GFP_KERNEL, order);
> > +				memcpy((void *)buf, (void *)vstart,
> > +					PAGE_SIZE * (1 << order));
> > +				if (xen_create_contiguous_region(vstart, order,
> > +						fls64(paddr))) {
> > +					free_pages(buf, order);
> > +					return 0;
> > +				}
> > +				memcpy((void *)vstart, (void *)buf,
> > +					PAGE_SIZE * (1 << order));
> > +				free_pages(buf, order);
> > +			}
> > +		}
> > +
> > +		dev_addr = xen_phys_to_bus(paddr);
> >  
> >  		if (swiotlb_force ||
> >  		    !dma_capable(hwdev, dev_addr, sg->length) ||
> 
> How about swiotlb_map_page() (for the compound page case)?

Heh. Thanks - I just got to your reply now and had the same question.

Interestingly enough - this looks like a problem that has been forever
and nobody ever hit this.

Worst, the problem is even present if a driver uses the pci_alloc_coherent
and asks for a 3MB region or such - as we can at most give out only
2MB swaths.

> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:11:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgye5-00062j-CO; Fri, 07 Dec 2012 14:11:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tgye4-00062Y-4X
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:11:16 +0000
Received: from [85.158.137.99:36543] by server-10.bemta-3.messagelabs.com id
	9C/8D-19806-EF8F1C05; Fri, 07 Dec 2012 14:11:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354889465!15209245!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17120 invoked from network); 7 Dec 2012 14:11:08 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 14:11:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7EB2dB027534
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 14:11:03 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7EB14T020875
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 14:11:02 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7EB1Ud032632; Fri, 7 Dec 2012 08:11:01 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 06:11:01 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4068A1C05A8; Fri,  7 Dec 2012 09:11:00 -0500 (EST)
Date: Fri, 7 Dec 2012 09:11:00 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207141100.GD3140@phenom.dumpdata.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<50C0ADB502000078000AE9DA@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C0ADB502000078000AE9DA@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Dongxiao Xu <dongxiao.xu@intel.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 01:37:41PM +0000, Jan Beulich wrote:
> >>> On 06.12.12 at 14:08, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > While mapping sg buffers, checking to cross page DMA buffer is
> > also needed. If the guest DMA buffer crosses page boundary, Xen
> > should exchange contiguous memory for it.
> > 
> > Besides, it is needed to backup the original page contents
> > and copy it back after memory exchange is done.
> > 
> > This fixes issues where a device DMAs into static software buffers
> > that cross a page boundary whose pages are not contiguous in real
> > hardware.
> > 
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > ---
> >  drivers/xen/swiotlb-xen.c |   47 
> > ++++++++++++++++++++++++++++++++++++++++++++-
> >  1 files changed, 46 insertions(+), 1 deletions(-)
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 58db6df..e8f0cfb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, 
> > dma_addr_t dev_addr,
> >  }
> >  EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
> >  
> > +static bool
> > +check_continguous_region(unsigned long vstart, unsigned long order)
> 
> check_continguous_region(unsigned long vstart, unsigned int order)
> 
> But - why do you need to do this check order based in the first
> place? Checking the actual length of the buffer should suffice.
> 
> > +{
> > +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> > +	unsigned long next_ma;
> 
> phys_addr_t or some such for both of them.
> 
> > +	int i;
> 
> unsigned long
> 
> > +
> > +	for (i = 1; i < (1 << order); i++) {
> 
> 1UL
> 
> > +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> > +		if (next_ma != prev_ma + PAGE_SIZE)
> > +			return false;
> > +		prev_ma = next_ma;
> > +	}
> > +	return true;
> > +}
> > +
> >  /*
> >   * Map a set of buffers described by scatterlist in streaming mode for DMA.
> >   * This is the scatter-gather version of the above xen_swiotlb_map_page
> > @@ -489,7 +505,36 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct 
> > scatterlist *sgl,
> >  
> >  	for_each_sg(sgl, sg, nelems, i) {
> >  		phys_addr_t paddr = sg_phys(sg);
> > -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> > +		unsigned long vstart, order;
> > +		dma_addr_t dev_addr;
> > +
> > +		/*
> > +		 * While mapping sg buffers, checking to cross page DMA buffer
> > +		 * is also needed. If the guest DMA buffer crosses page
> > +		 * boundary, Xen should exchange contiguous memory for it.
> > +		 * Besides, it is needed to backup the original page contents
> > +		 * and copy it back after memory exchange is done.
> > +		 */
> > +		if (range_straddles_page_boundary(paddr, sg->length)) {
> > +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> > +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> > +			if (!check_continguous_region(vstart, order)) {
> > +				unsigned long buf;
> > +				buf = __get_free_pages(GFP_KERNEL, order);
> > +				memcpy((void *)buf, (void *)vstart,
> > +					PAGE_SIZE * (1 << order));
> > +				if (xen_create_contiguous_region(vstart, order,
> > +						fls64(paddr))) {
> > +					free_pages(buf, order);
> > +					return 0;
> > +				}
> > +				memcpy((void *)vstart, (void *)buf,
> > +					PAGE_SIZE * (1 << order));
> > +				free_pages(buf, order);
> > +			}
> > +		}
> > +
> > +		dev_addr = xen_phys_to_bus(paddr);
> >  
> >  		if (swiotlb_force ||
> >  		    !dma_capable(hwdev, dev_addr, sg->length) ||
> 
> How about swiotlb_map_page() (for the compound page case)?

Heh. Thanks - I just got to your reply now and had the same question.

Interestingly enough, this looks like a problem that has been there
forever and nobody has ever hit it.

Worse, the problem is present even if a driver uses pci_alloc_coherent
and asks for a 3MB region or so, as we can give out at most 2MB swaths.

> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgyft-0006Ce-2O; Fri, 07 Dec 2012 14:13:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Tgyfq-0006CV-Qt
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:13:07 +0000
Received: from [85.158.137.99:9242] by server-9.bemta-3.messagelabs.com id
	FF/77-02388-D69F1C05; Fri, 07 Dec 2012 14:13:01 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354889580!18391948!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4216 invoked from network); 7 Dec 2012 14:13:00 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-4.tower-217.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Dec 2012 14:13:00 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Fri, 7 Dec 2012 15:12:59 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: VGA passthrough and AMD drivers
Thread-Index: Ac3UgIzX7YM1+LtSSTesZY1qKjfwPQ==
Date: Fri, 7 Dec 2012 14:12:58 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C539CC@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0247567437819244552=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0247567437819244552==
Content-Language: fr-FR
Content-Type: multipart/alternative;
	boundary="_000_36774CA35642C143BCDE93BA0C68DC5702C539CCdulac_"

--_000_36774CA35642C143BCDE93BA0C68DC5702C539CCdulac_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Hi all,

I have run some tests to find a good driver for the FirePro V8800 on a
Windows 7 64-bit HVM guest, focusing on 'advanced features': quad buffer,
active stereoscopy, synchronization, etc.

With all of this year's FirePro drivers I cannot get the quad
buffer/active stereoscopy feature, although it works on a native
installation.

The only driver that provides this feature is a Radeon HD driver
(Catalyst 12.10 WHQL), but it becomes unstable when an application using
active stereo and synchronization is closed:
- The synchronization between two computers is lost.
- The CCC can crash when synchronization is re-established.

Does anyone have any clues about this?

Thanks,
Aurelien




--_000_36774CA35642C143BCDE93BA0C68DC5702C539CCdulac_--


--===============0247567437819244552==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0247567437819244552==--



From xen-devel-bounces@lists.xen.org Fri Dec 07 14:22:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgyox-0006hh-6a; Fri, 07 Dec 2012 14:22:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Tgyov-0006hc-EQ
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 14:22:29 +0000
Received: from [85.158.143.99:43241] by server-1.bemta-4.messagelabs.com id
	F7/DC-28401-4ABF1C05; Fri, 07 Dec 2012 14:22:28 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1354890146!25140639!1
X-Originating-IP: [209.85.220.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=BODY_VIRGIN,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32003 invoked from network); 7 Dec 2012 14:22:27 -0000
Received: from mail-vc0-f171.google.com (HELO mail-vc0-f171.google.com)
	(209.85.220.171)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:22:27 -0000
Received: by mail-vc0-f171.google.com with SMTP id n11so586180vch.30
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Dec 2012 06:22:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=MmKT4n4vfPrEL4ZVEE2WTnMNcPt+kf/UJVKiDB0IJq0=;
	b=WZ2Rww6hFK4bPuRT1vFRSBApXuLFWEniz2UtUjeus3Z4w0tUuuT6G7A9LhoNuF9xz1
	WJ8RJdpIW/6ddXAN9iBktmyP0hMj7Ff6Sv2qchXPMOO8iZZMdmzYNeptzTNYylHqh1QE
	mU0xazpoWWqNeLLZm1DyySEm7l9063y9Q4uC9XUyaky/fPiYLcfgOn26FRPoIa5ZE/xA
	QN6SPODVh9qXO4n39r7VDM0DwUpfCYbUy8e4V+TVcHrMPbONswEhTjXNhNFLJl7hkyE8
	5rrgmoGDMkO2myXcbsJpkmRUaOYNJy57EgnqqawIP6PenHL63WG4SoYBnqAxz5UTCLyU
	ItMw==
Received: by 10.58.161.41 with SMTP id xp9mr3880744veb.56.1354890146080;
	Fri, 07 Dec 2012 06:22:26 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id zx18sm3467096veb.3.2012.12.07.06.22.25
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 06:22:25 -0800 (PST)
Date: Fri, 7 Dec 2012 09:22:23 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20121207142222.GA3303@phenom.dumpdata.com>
References: <1351097925-26221-1-git-send-email-roger.pau@citrix.com>
	<20121206031455.GA4408@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121206031455.GA4408@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] LVM Checksum error when using persistent grants
 (#linux-next + stable/for-jens-3.8)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 05, 2012 at 10:14:55PM -0500, Konrad Rzeszutek Wilk wrote:
> Hey Roger,
> 
> I am seeing this weird behavior when using #linux-next + stable/for-jens-3.8 tree.

To make it easier I just used v3.7-rc8 and merged stable/for-jens-3.8
tree.

> 
> Basically I can do 'pvscan' on xvd* disk and quite often I get checksum errors:
> 
> # pvscan /dev/xvdf
>   PV /dev/xvdf2   VG VolGroup00        lvm2 [18.88 GiB / 0    free]
>   PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
>   Total: 5 [962.38 GiB] / in use: 5 [962.38 GiB] / in no VG: 0 [0   ]
> # pvscan /dev/xvdf
>   /dev/xvdf2: Checksum error
>   Couldn't read volume group metadata.
>   /dev/xvdf2: Checksum error
>   Couldn't read volume group metadata.
>   PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
>   PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
>   Total: 4 [943.50 GiB] / in use: 4 [943.50 GiB] / in no VG: 0 [0   ]
> 
> This is with a i386 dom0, 64-bit Xen 4.1.3 hypervisor, and with either
> 64-bit or 32-bit PV or PVHVM guest.

And it does not matter if dom0 is 64-bit.
> 
> Have you seen something like this?

More interestingly, the failure is in the frontend. I ran the "new"
guests that do persistent grants against the old backends (so virgin
v3.7-rc8) and still got the same failure.

> 
> Note, the other LV disks are over iSCSI and are working fine.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:50:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzFr-0007KD-DP; Fri, 07 Dec 2012 14:50:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TgzFp-0007K5-Sh
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:50:18 +0000
Received: from [85.158.137.99:23400] by server-5.bemta-3.messagelabs.com id
	DA/84-26311-62202C05; Fri, 07 Dec 2012 14:50:14 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1354891811!18396228!1
X-Originating-IP: [209.85.223.169]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23108 invoked from network); 7 Dec 2012 14:50:12 -0000
Received: from mail-ie0-f169.google.com (HELO mail-ie0-f169.google.com)
	(209.85.223.169)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:50:12 -0000
Received: by mail-ie0-f169.google.com with SMTP id c14so1472353ieb.0
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 06:50:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=lNVDTvEKdyuVSGfsk503nETo19A4gX5Mx1a4Lf2NQgY=;
	b=IKAqpfZY0EWAxON6C9rjEi1W6TDaEQozX9SMQD2e119YvwkvRbkS0tRAzgVe2GgFE6
	+Y7avtcFbyb6ehjSkvTwtkpYgwu1Xl5VHxV9J9SsbTEpFqBUIJl4l7HAPpE0qQeLN+bt
	8LIN16TKL3rbvxE2dNj3+4dafH5lUmZgbs+CJWKuQcNUeX2vIS0GA/fOCBgOsYhn7ADk
	+1lPUGh8C2Oan9omy1xTGvRhM5LZhpDjYbqjrOiip1vOkZMNuNTVXGRg8R7/Wsqcr0+s
	20VddGoiPISEwox7XWpjTaAcFaKRt68qb4lFj6OegWcSiI/GCl/zbOa8gAdkxt/Es46c
	xt2g==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr9526913igq.37.1354891810315; Fri,
	07 Dec 2012 06:50:10 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Fri, 7 Dec 2012 06:50:10 -0800 (PST)
In-Reply-To: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
Date: Fri, 7 Dec 2012 22:50:10 +0800
X-Google-Sender-Auth: O4nEqtC07omvmhkn01grJJ0ZRfM
Message-ID: <CAKhsbWbZXSkRch6+iH_0E-xCPrUPCfe4sAnPMA4C4QXzmQpKjw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4474464799736840499=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4474464799736840499==
Content-Type: multipart/alternative; boundary=14dae9341115101db604d04455ff

--14dae9341115101db604d04455ff
Content-Type: text/plain; charset=ISO-8859-1

Hi, could anybody take a look please?


On Thu, Dec 6, 2012 at 11:42 AM, G.R. <firemeteor@users.sourceforge.net> wrote:

> Sorry, but I have to resend this in a separate thread for better
> visibility.
> Background:
> After backporting this patch to fix my PVHVM interrupt-missing issue in
> 4.1.3:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html
> I found a side effect for pure HVM guests.
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00208.html
>
> I had a follow-up thread analyzing the issue, and I am posting it again here:
>
> Hi, it seems that the patch has some side effect on pure HVM guests.
>>> For an OpenELEC 2.0 guest, which is based on Linux 3.2.x with PVOP
>>> disabled, I see the following symptoms in qemu-dm-xxx.log:
>>>
>>> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
>>> support??
>>> pt_pci_write_config: Internal error: Invalid write emulation return
>>> value[-1]. I/O emulator exit.
>>>
>>> The guest dies immediately after this log, so I have no way to check the
>>> guest kernel log.
>>> Without the patch, this guest can boot with no obvious error in the log,
>>> even though the VGA passthrough does not quite work.
>>> I'll check the code to see what these logs mean...
>>>
>>
>> I did some analysis and it really looks like a bug to me.
>> Since this is a patch back-ported from 4.2.0, I would like to ask whether
>> there is any follow-up patch that would fix this issue.
>> Please see my analysis below:
>>
>> Here is part of the qemu-dm log, with debug logging enabled at compile time:
>>
>> dm-command: hot insert pass-through pci dev
>> register_real_device: Assigning real physical device 00:1b.0 ...
>> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul:
>> No such file or directory: 0x0:0x1b.0x0
>> pt_register_regions: IO region registered (size=0x00004000
>> base_addr=0xf7c10004)
>> pt_msi_setup: msi mapped with pirq 36
>> pci_intx: intx=1
>> register_real_device: Real physical device 00:1b.0 registered successfuly!
>> IRQ type = MSI-INTx
>> ...
>> pt_pci_read_config: [00:06.0]: address=0000 val=0x0000*8086* len=2
>> pt_pci_read_config: [00:06.0]: address=0002 val=0x0000*1e20* len=2
>> ...
>> *pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
>> ...
>> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
>> *pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation*
>> pci_intx: intx=1
>> *pt_msi_disable: Unmap msi with pirq 36*
>> pt_msgctrl_reg_write: setup msi for dev 30
>> pt_msi_setup: msi mapped with pirq 36
>> pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
>> pt_pci_read_config: [00:06.0]: address=0062 val=0x00000081 len=2
>> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
>> pt_pci_write_config: [00:06.0]: address=0064 val=0xfee0300c len=4
>> *pt_pci_write_config: [00:06.0]: address=0068* val=0x00000000 len=4
>>
>> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit
>> support??
>> pt_pci_write_config: Internal error: Invalid write emulation return
>> value[-1]. I/O emulator exit.
>>
>>
>> Here the device in question is the audio controller, 00:1b.0 in
>> the host, which is 64-bit capable:
>> 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset
>> Family High Definition Audio Controller (rev 04)
>>     Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
>>         Address: 00000000fee00378  Data: 0000
>>
>> And there is also an earlier, successful write to offset 0x68 above, while
>> the second write makes the I/O emulator exit.
>>
>>
>> The patch added a pt_msi_disable() call into the *pt_msgctrl_reg_write()*
>> function, and the pt_msi_disable() function has these lines:
>> out:
>>     /* clear msi info */
>>     dev->msi->flags = 0;
>>     dev->msi->pirq = -1;
>>     dev->msi_trans_en = 0;
>>
>> As a result, the flags are cleared -- this is new with the patch.
>> And I believe this change caused the failure in pt_msgaddr64_reg_write():
>>
>> 3882     /* check whether the type is 64 bit or not */
>> 3883     if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
>> 3884     {
>> 3885         /* exit I/O emulator */
>> 3886         PT_LOG("Error: why comes to Upper Address without 64 bit
>> support??\n");
>> 3887         return -1;
>> 3888     }
>>
>>
>> I only see the flags being set up in the pt_msgctrl_reg_init() function; I
>> guess it is not called at this point.
>>
>> Thanks,
>> Timothy
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--14dae9341115101db604d04455ff--


--===============4474464799736840499==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4474464799736840499==--


<b>pt_pci_write_config: [00:06.0]: address=3D0068</b> val=3D0x00000000 len=
=3D4<div>
<br>pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bi=
t support??<br>

pt_pci_write_config: Internal error: Invalid write emulation return value[-=
1]. I/O emulator exit.<br></div></span><br><span style=3D"font-family:couri=
er new,monospace"></span></div></div><br>Here the device in question should=
 is the audio controller, 00:1b.0 in the host, which is 64 bit capable:<br>




00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family=
 High Definition Audio Controller (rev 04)<br>=A0=A0=A0 Capabilities: [60] =
MSI: Enable+ Count=3D1/1 Maskable- 64bit+<br>=A0=A0=A0 =A0=A0=A0 Address: 0=
0000000fee00378=A0 Data: 0000<br>



<br>And there is also a successful write to offset 0x68 above, while the se=
cond write fails the I/O emulator.<br>
<br><br>The patch added pt_msi_disable() call into <span style=3D"font-fami=
ly:courier new,monospace"><b>pt_msgctrl_reg_write() </b></span>function, An=
d the pt_msi_disable() function has these lines:<br><span style=3D"font-fam=
ily:courier new,monospace">out:<br>

=A0=A0=A0 /* clear msi info */<br>=A0=A0=A0 dev-&gt;msi-&gt;flags =3D 0;<br=
>

=A0=A0=A0 dev-&gt;msi-&gt;pirq =3D -1;<br>=A0=A0=A0 dev-&gt;msi_trans_en =
=3D 0;</span><br>
<br>As a result, the flags are cleared -- this is new to the patch. <br>And=
 I believe this change caused the failure in pt_msgaddr64_write():<br><br><=
span style=3D"font-family:courier new,monospace">3882=A0=A0=A0=A0 /* check =
whether the type is 64 bit or not */<br>



3883=A0=A0=A0=A0 if (!(ptdev-&gt;msi-&gt;flags &amp; PCI_MSI_FLAGS_64BIT))<=
br>3884=A0=A0=A0=A0 {<br>3885=A0=A0=A0=A0=A0=A0=A0=A0 /* exit I/O emulator =
*/<br>3886=A0=A0=A0=A0=A0=A0=A0=A0 PT_LOG(&quot;Error: why comes to Upper A=
ddress without 64 bit support??\n&quot;);<br>



3887=A0=A0=A0=A0=A0=A0=A0=A0 return -1;<br>3888=A0=A0=A0=A0 }</span><br><br=
><br>I only see flags setup in=A0 pt_msgctrl_reg_init() function. I guess t=
his is not called this time.<br><br>Thanks,<br>Timothy<br>
</div>
</blockquote><br></div><br></div>
<br>_______________________________________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
<br></blockquote></div><br></div>

--14dae9341115101db604d04455ff--


--===============4474464799736840499==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4474464799736840499==--


From xen-devel-bounces@lists.xen.org Fri Dec 07 14:55:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzKM-0007hV-B5; Fri, 07 Dec 2012 14:54:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgzKK-0007hK-D4
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:54:56 +0000
Received: from [85.158.139.83:39384] by server-1.bemta-5.messagelabs.com id
	0F/2C-12813-D3302C05; Fri, 07 Dec 2012 14:54:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354892093!28906048!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32671 invoked from network); 7 Dec 2012 14:54:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:54:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16226635"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 14:54:53 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	14:54:53 +0000
Message-ID: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 14:54:51 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Matt Wilson <msw@amazon.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 00/03 V2] docs: check for documentation
 generation tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds a configure script for docs/, based on Matthew's stubdom
configure patch.

Main change this time round is to first ditch a bunch of obsolete docs
and therefore avoid the need to check for those tools at all.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:55:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzKr-0007kC-Ow; Fri, 07 Dec 2012 14:55:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgzKp-0007jr-I1
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:55:28 +0000
Received: from [85.158.139.211:57398] by server-11.bemta-5.messagelabs.com id
	77/C4-31624-E5302C05; Fri, 07 Dec 2012 14:55:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1354892123!19415719!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14579 invoked from network); 7 Dec 2012 14:55:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:55:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47003189"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 14:55:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 09:55:11 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgzKY-0004T4-Hh;
	Fri, 07 Dec 2012 14:55:10 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 14:55:09 +0000
Message-ID: <1354892110-31108-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
References: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] docs: drop doxygen stuff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In the 300+ page PDF this produces I couldn't see anything which
wasn't the autogenerated doxygen boilerplate stuff.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 docs/Docs.mk       |    1 -
 docs/Doxyfile      | 1218 ----------------------------------------------------
 docs/Doxyfilter    |   16 -
 docs/Makefile      |   13 -
 docs/html.sty      |  887 --------------------------------------
 docs/pythfilter.py |  658 ----------------------------
 6 files changed, 0 insertions(+), 2793 deletions(-)
 delete mode 100644 docs/Doxyfile
 delete mode 100644 docs/Doxyfilter
 delete mode 100644 docs/html.sty
 delete mode 100644 docs/pythfilter.py

diff --git a/docs/Docs.mk b/docs/Docs.mk
index dcc8a21..db3c19d 100644
--- a/docs/Docs.mk
+++ b/docs/Docs.mk
@@ -1,6 +1,5 @@
 FIG2DEV		:= fig2dev
 LATEX2HTML	:= latex2html
-DOXYGEN		:= doxygen
 POD2MAN		:= pod2man
 POD2HTML	:= pod2html
 POD2TEXT	:= pod2text
diff --git a/docs/Doxyfile b/docs/Doxyfile
deleted file mode 100644
index 8ac4451..0000000
--- a/docs/Doxyfile
+++ /dev/null
@@ -1,1218 +0,0 @@
-# Doxyfile 1.4.2
-
-# This file describes the settings to be used by the documentation system
-# doxygen (www.doxygen.org) for a project
-#
-# All text after a hash (#) is considered a comment and will be ignored
-# The format is:
-#       TAG = value [value, ...]
-# For lists items can also be appended using:
-#       TAG += value [value, ...]
-# Values that contain spaces should be placed between quotes (" ")
-
-#---------------------------------------------------------------------------
-# Project related configuration options
-#---------------------------------------------------------------------------
-
-# The PROJECT_NAME tag is a single word (or a sequence of words surrounded 
-# by quotes) that should identify the project.
-
-PROJECT_NAME           = Xen Python Tools
-
-# The PROJECT_NUMBER tag can be used to enter a project or revision number. 
-# This could be handy for archiving the generated documentation or 
-# if some version control system is used.
-
-PROJECT_NUMBER         = 
-
-# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) 
-# base path where the generated documentation will be put. 
-# If a relative path is entered, it will be relative to the location 
-# where doxygen was started. If left blank the current directory will be used.
-
-OUTPUT_DIRECTORY       = api/tools/python
-
-# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 
-# 4096 sub-directories (in 2 levels) under the output directory of each output 
-# format and will distribute the generated files over these directories. 
-# Enabling this option can be useful when feeding doxygen a huge amount of 
-# source files, where putting all generated files in the same directory would 
-# otherwise cause performance problems for the file system.
-
-CREATE_SUBDIRS         = NO
-
-# The OUTPUT_LANGUAGE tag is used to specify the language in which all 
-# documentation generated by doxygen is written. Doxygen will use this 
-# information to generate all constant output in the proper language. 
-# The default language is English, other supported languages are: 
-# Brazilian, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, 
-# Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, 
-# Japanese-en (Japanese with English messages), Korean, Korean-en, Norwegian, 
-# Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, 
-# Swedish, and Ukrainian.
-
-OUTPUT_LANGUAGE        = English
-
-# This tag can be used to specify the encoding used in the generated output. 
-# The encoding is not always determined by the language that is chosen, 
-# but also whether or not the output is meant for Windows or non-Windows users. 
-# In case there is a difference, setting the USE_WINDOWS_ENCODING tag to YES 
-# forces the Windows encoding (this is the default for the Windows binary), 
-# whereas setting the tag to NO uses a Unix-style encoding (the default for 
-# all platforms other than Windows).
-
-USE_WINDOWS_ENCODING   = NO
-
-# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will 
-# include brief member descriptions after the members that are listed in 
-# the file and class documentation (similar to JavaDoc). 
-# Set to NO to disable this.
-
-BRIEF_MEMBER_DESC      = YES
-
-# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend 
-# the brief description of a member or function before the detailed description. 
-# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the 
-# brief descriptions will be completely suppressed.
-
-REPEAT_BRIEF           = YES
-
-# This tag implements a quasi-intelligent brief description abbreviator 
-# that is used to form the text in various listings. Each string 
-# in this list, if found as the leading text of the brief description, will be 
-# stripped from the text and the result after processing the whole list, is 
-# used as the annotated text. Otherwise, the brief description is used as-is. 
-# If left blank, the following values are used ("$name" is automatically 
-# replaced with the name of the entity): "The $name class" "The $name widget" 
-# "The $name file" "is" "provides" "specifies" "contains" 
-# "represents" "a" "an" "the"
-
-ABBREVIATE_BRIEF       = 
-
-# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then 
-# Doxygen will generate a detailed section even if there is only a brief 
-# description.
-
-ALWAYS_DETAILED_SEC    = NO
-
-# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all 
-# inherited members of a class in the documentation of that class as if those 
-# members were ordinary class members. Constructors, destructors and assignment 
-# operators of the base classes will not be shown.
-
-INLINE_INHERITED_MEMB  = NO
-
-# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full 
-# path before files name in the file list and in the header files. If set 
-# to NO the shortest path that makes the file name unique will be used.
-
-FULL_PATH_NAMES        = YES
-
-# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag 
-# can be used to strip a user-defined part of the path. Stripping is 
-# only done if one of the specified strings matches the left-hand part of 
-# the path. The tag can be used to show relative paths in the file list. 
-# If left blank the directory from which doxygen is run is used as the 
-# path to strip.
-
-STRIP_FROM_PATH        = 
-
-# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of 
-# the path mentioned in the documentation of a class, which tells 
-# the reader which header file to include in order to use a class. 
-# If left blank only the name of the header file containing the class 
-# definition is used. Otherwise one should specify the include paths that 
-# are normally passed to the compiler using the -I flag.
-
-STRIP_FROM_INC_PATH    = 
-
-# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter 
-# (but less readable) file names. This can be useful is your file systems 
-# doesn't support long names like on DOS, Mac, or CD-ROM.
-
-SHORT_NAMES            = NO
-
-# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen 
-# will interpret the first line (until the first dot) of a JavaDoc-style 
-# comment as the brief description. If set to NO, the JavaDoc 
-# comments will behave just like the Qt-style comments (thus requiring an 
-# explicit @brief command for a brief description.
-
-JAVADOC_AUTOBRIEF      = YES
-
-# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen 
-# treat a multi-line C++ special comment block (i.e. a block of //! or /// 
-# comments) as a brief description. This used to be the default behaviour. 
-# The new default is to treat a multi-line C++ comment block as a detailed 
-# description. Set this tag to YES if you prefer the old behaviour instead.
-
-MULTILINE_CPP_IS_BRIEF = NO
-
-# If the DETAILS_AT_TOP tag is set to YES then Doxygen 
-# will output the detailed description near the top, like JavaDoc.
-# If set to NO, the detailed description appears after the member 
-# documentation.
-
-DETAILS_AT_TOP         = YES
-
-# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented 
-# member inherits the documentation from any documented member that it 
-# re-implements.
-
-INHERIT_DOCS           = YES
-
-# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC 
-# tag is set to YES, then doxygen will reuse the documentation of the first 
-# member in the group (if any) for the other members of the group. By default 
-# all members of a group must be documented explicitly.
-
-DISTRIBUTE_GROUP_DOC   = NO
-
-# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce 
-# a new page for each member. If set to NO, the documentation of a member will 
-# be part of the file/class/namespace that contains it.
-
-SEPARATE_MEMBER_PAGES  = NO
-
-# The TAB_SIZE tag can be used to set the number of spaces in a tab. 
-# Doxygen uses this value to replace tabs by spaces in code fragments.
-
-TAB_SIZE               = 8
-
-# This tag can be used to specify a number of aliases that acts 
-# as commands in the documentation. An alias has the form "name=value". 
-# For example adding "sideeffect=\par Side Effects:\n" will allow you to 
-# put the command \sideeffect (or @sideeffect) in the documentation, which 
-# will result in a user-defined paragraph with heading "Side Effects:". 
-# You can put \n's in the value part of an alias to insert newlines.
-
-ALIASES                = 
-
-# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C 
-# sources only. Doxygen will then generate output that is more tailored for C. 
-# For instance, some of the names that are used will be different. The list 
-# of all members will be omitted, etc.
-
-OPTIMIZE_OUTPUT_FOR_C  = NO
-
-# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java sources 
-# only. Doxygen will then generate output that is more tailored for Java. 
-# For instance, namespaces will be presented as packages, qualified scopes 
-# will look different, etc.
-
-OPTIMIZE_OUTPUT_JAVA   = YES
-
-# Set the SUBGROUPING tag to YES (the default) to allow class member groups of 
-# the same type (for instance a group of public functions) to be put as a 
-# subgroup of that type (e.g. under the Public Functions section). Set it to 
-# NO to prevent subgrouping. Alternatively, this can be done per class using 
-# the \nosubgrouping command.
-
-SUBGROUPING            = YES
-
-#---------------------------------------------------------------------------
-# Build related configuration options
-#---------------------------------------------------------------------------
-
-# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in 
-# documentation are documented, even if no documentation was available. 
-# Private class members and static file members will be hidden unless 
-# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
-
-EXTRACT_ALL            = YES
-
-# If the EXTRACT_PRIVATE tag is set to YES all private members of a class 
-# will be included in the documentation.
-
-EXTRACT_PRIVATE        = YES
-
-# If the EXTRACT_STATIC tag is set to YES all static members of a file 
-# will be included in the documentation.
-
-EXTRACT_STATIC         = YES
-
-# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) 
-# defined locally in source files will be included in the documentation. 
-# If set to NO only classes defined in header files are included.
-
-EXTRACT_LOCAL_CLASSES  = YES
-
-# This flag is only useful for Objective-C code. When set to YES local 
-# methods, which are defined in the implementation section but not in 
-# the interface are included in the documentation. 
-# If set to NO (the default) only methods in the interface are included.
-
-EXTRACT_LOCAL_METHODS  = NO
-
-# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all 
-# undocumented members of documented classes, files or namespaces. 
-# If set to NO (the default) these members will be included in the 
-# various overviews, but no documentation section is generated. 
-# This option has no effect if EXTRACT_ALL is enabled.
-
-HIDE_UNDOC_MEMBERS     = NO
-
-# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all 
-# undocumented classes that are normally visible in the class hierarchy. 
-# If set to NO (the default) these classes will be included in the various 
-# overviews. This option has no effect if EXTRACT_ALL is enabled.
-
-HIDE_UNDOC_CLASSES     = NO
-
-# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all 
-# friend (class|struct|union) declarations. 
-# If set to NO (the default) these declarations will be included in the 
-# documentation.
-
-HIDE_FRIEND_COMPOUNDS  = NO
-
-# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any 
-# documentation blocks found inside the body of a function. 
-# If set to NO (the default) these blocks will be appended to the 
-# function's detailed documentation block.
-
-HIDE_IN_BODY_DOCS      = NO
-
-# The INTERNAL_DOCS tag determines if documentation 
-# that is typed after a \internal command is included. If the tag is set 
-# to NO (the default) then the documentation will be excluded. 
-# Set it to YES to include the internal documentation.
-
-INTERNAL_DOCS          = NO
-
-# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate 
-# file names in lower-case letters. If set to YES upper-case letters are also 
-# allowed. This is useful if you have classes or files whose names only differ 
-# in case and if your file system supports case sensitive file names. Windows 
-# and Mac users are advised to set this option to NO.
-
-CASE_SENSE_NAMES       = YES
-
-# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen 
-# will show members with their full class and namespace scopes in the 
-# documentation. If set to YES the scope will be hidden.
-
-HIDE_SCOPE_NAMES       = NO
-
-# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen 
-# will put a list of the files that are included by a file in the documentation 
-# of that file.
-
-SHOW_INCLUDE_FILES     = YES
-
-# If the INLINE_INFO tag is set to YES (the default) then a tag [inline] 
-# is inserted in the documentation for inline members.
-
-INLINE_INFO            = YES
-
-# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen 
-# will sort the (detailed) documentation of file and class members 
-# alphabetically by member name. If set to NO the members will appear in 
-# declaration order.
-
-SORT_MEMBER_DOCS       = YES
-
-# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the 
-# brief documentation of file, namespace and class members alphabetically 
-# by member name. If set to NO (the default) the members will appear in 
-# declaration order.
-
-SORT_BRIEF_DOCS        = NO
-
-# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be 
-# sorted by fully-qualified names, including namespaces. If set to 
-# NO (the default), the class list will be sorted only by class name, 
-# not including the namespace part. 
-# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
-# Note: This option applies only to the class list, not to the 
-# alphabetical list.
-
-SORT_BY_SCOPE_NAME     = NO
-
-# The GENERATE_TODOLIST tag can be used to enable (YES) or 
-# disable (NO) the todo list. This list is created by putting \todo 
-# commands in the documentation.
-
-GENERATE_TODOLIST      = YES
-
-# The GENERATE_TESTLIST tag can be used to enable (YES) or 
-# disable (NO) the test list. This list is created by putting \test 
-# commands in the documentation.
-
-GENERATE_TESTLIST      = YES
-
-# The GENERATE_BUGLIST tag can be used to enable (YES) or 
-# disable (NO) the bug list. This list is created by putting \bug 
-# commands in the documentation.
-
-GENERATE_BUGLIST       = YES
-
-# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or 
-# disable (NO) the deprecated list. This list is created by putting 
-# \deprecated commands in the documentation.
-
-GENERATE_DEPRECATEDLIST= YES
-
-# The ENABLED_SECTIONS tag can be used to enable conditional 
-# documentation sections, marked by \if sectionname ... \endif.
-
-ENABLED_SECTIONS       = 
-
-# The MAX_INITIALIZER_LINES tag determines the maximum number of lines 
-# the initial value of a variable or define consists of for it to appear in 
-# the documentation. If the initializer consists of more lines than specified 
-# here it will be hidden. Use a value of 0 to hide initializers completely. 
-# The appearance of the initializer of individual variables and defines in the 
-# documentation can be controlled using \showinitializer or \hideinitializer 
-# command in the documentation regardless of this setting.
-
-MAX_INITIALIZER_LINES  = 30
-
-# Set the SHOW_USED_FILES tag to NO to disable the list of files generated 
-# at the bottom of the documentation of classes and structs. If set to YES the 
-# list will mention the files that were used to generate the documentation.
-
-SHOW_USED_FILES        = YES
-
-# If the sources in your project are distributed over multiple directories 
-# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy 
-# in the documentation.
-
-SHOW_DIRECTORIES       = YES
-
-# The FILE_VERSION_FILTER tag can be used to specify a program or script that 
-# doxygen should invoke to get the current version for each file (typically from the 
-# version control system). Doxygen will invoke the program by executing (via 
-# popen()) the command <command> <input-file>, where <command> is the value of 
-# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file 
-# provided by doxygen. Whatever the program writes to standard output 
-# is used as the file version. See the manual for examples.
-
-FILE_VERSION_FILTER    = 
-
-#---------------------------------------------------------------------------
-# configuration options related to warning and progress messages
-#---------------------------------------------------------------------------
-
-# The QUIET tag can be used to turn on/off the messages that are generated 
-# by doxygen. Possible values are YES and NO. If left blank NO is used.
-
-QUIET                  = YES
-
-# The WARNINGS tag can be used to turn on/off the warning messages that are 
-# generated by doxygen. Possible values are YES and NO. If left blank 
-# NO is used.
-
-WARNINGS               = YES
-
-# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings 
-# for undocumented members. If EXTRACT_ALL is set to YES then this flag will 
-# automatically be disabled.
-
-WARN_IF_UNDOCUMENTED   = YES
-
-# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for 
-# potential errors in the documentation, such as not documenting some 
-# parameters in a documented function, or documenting parameters that 
-# don't exist or using markup commands wrongly.
-
-WARN_IF_DOC_ERROR      = YES
-
-# The WARN_NO_PARAMDOC option can be enabled to get warnings for 
-# functions that are documented, but have no documentation for their parameters 
-# or return value. If set to NO (the default) doxygen will only warn about 
-# wrong or incomplete parameter documentation, but not about the absence of 
-# documentation.
-
-WARN_NO_PARAMDOC       = NO
-
-# The WARN_FORMAT tag determines the format of the warning messages that 
-# doxygen can produce. The string should contain the $file, $line, and $text 
-# tags, which will be replaced by the file and line number from which the 
-# warning originated and the warning text. Optionally the format may contain 
-# $version, which will be replaced by the version of the file (if it could 
-# be obtained via FILE_VERSION_FILTER).
-
-WARN_FORMAT            = "$file:$line: $text"
-
-# The WARN_LOGFILE tag can be used to specify a file to which warning 
-# and error messages should be written. If left blank the output is written 
-# to stderr.
-
-WARN_LOGFILE           = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the input files
-#---------------------------------------------------------------------------
-
-# The INPUT tag can be used to specify the files and/or directories that contain 
-# documented source files. You may enter file names like "myfile.cpp" or 
-# directories like "/usr/src/myproject". Separate the files or directories 
-# with spaces.
-
-INPUT                  = ../tools/python/xen/
-
-# If the value of the INPUT tag contains directories, you can use the 
-# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp 
-# and *.h) to filter out the source-files in the directories. If left 
-# blank the following patterns are tested: 
-# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx 
-# *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm
-
-FILE_PATTERNS          = *.py *.c
-
-# The RECURSIVE tag can be used to specify whether or not subdirectories 
-# should be searched for input files as well. Possible values are YES and NO. 
-# If left blank NO is used.
-
-RECURSIVE              = YES
-
-# The EXCLUDE tag can be used to specify files and/or directories that should 
-# be excluded from the INPUT source files. This way you can easily exclude a 
-# subdirectory from a directory tree whose root is specified with the INPUT tag.
-
-EXCLUDE                = 
-
-# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or 
-# directories that are symbolic links (a Unix filesystem feature) are excluded 
-# from the input.
-
-EXCLUDE_SYMLINKS       = NO
-
-# If the value of the INPUT tag contains directories, you can use the 
-# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude 
-# certain files from those directories.
-
-EXCLUDE_PATTERNS       = 
-
-# The EXAMPLE_PATH tag can be used to specify one or more files or 
-# directories that contain example code fragments that are included (see 
-# the \include command).
-
-EXAMPLE_PATH           = 
-
-# If the value of the EXAMPLE_PATH tag contains directories, you can use the 
-# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp 
-# and *.h) to filter out the source-files in the directories. If left 
-# blank all files are included.
-
-EXAMPLE_PATTERNS       = 
-
-# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be 
-# searched for input files to be used with the \include or \dontinclude 
-# commands irrespective of the value of the RECURSIVE tag. 
-# Possible values are YES and NO. If left blank NO is used.
-
-EXAMPLE_RECURSIVE      = NO
-
-# The IMAGE_PATH tag can be used to specify one or more files or 
-# directories that contain images that are included in the documentation (see 
-# the \image command).
-
-IMAGE_PATH             = 
-
-# The INPUT_FILTER tag can be used to specify a program that doxygen should 
-# invoke to filter for each input file. Doxygen will invoke the filter program 
-# by executing (via popen()) the command <filter> <input-file>, where <filter> 
-# is the value of the INPUT_FILTER tag, and <input-file> is the name of an 
-# input file. Doxygen will then use the output that the filter program writes 
-# to standard output.  If FILTER_PATTERNS is specified, this tag will be 
-# ignored.
-
-INPUT_FILTER           = "sh ./Doxyfilter ../tools/python"
-
-# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern 
-# basis.  Doxygen will compare the file name with each pattern and apply the 
-# filter if there is a match.  The filters are a list of the form: 
-# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further 
-# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER 
-# is applied to all files.
-
-FILTER_PATTERNS        = 
-
-# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using 
-# INPUT_FILTER) will be used to filter the input files when producing source 
-# files to browse (i.e. when SOURCE_BROWSER is set to YES).
-
-FILTER_SOURCE_FILES    = YES
-
-#---------------------------------------------------------------------------
-# configuration options related to source browsing
-#---------------------------------------------------------------------------
-
-# If the SOURCE_BROWSER tag is set to YES then a list of source files will 
-# be generated. Documented entities will be cross-referenced with these sources. 
-# Note: To get rid of all source code in the generated output, make sure also 
-# VERBATIM_HEADERS is set to NO.
-
-SOURCE_BROWSER         = NO
-
-# Setting the INLINE_SOURCES tag to YES will include the body 
-# of functions and classes directly in the documentation.
-
-INLINE_SOURCES         = NO
-
-# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct 
-# doxygen to hide any special comment blocks from generated source code 
-# fragments. Normal C and C++ comments will always remain visible.
-
-STRIP_CODE_COMMENTS    = YES
-
-# If the REFERENCED_BY_RELATION tag is set to YES (the default) 
-# then for each documented function all documented 
-# functions referencing it will be listed.
-
-REFERENCED_BY_RELATION = YES
-
-# If the REFERENCES_RELATION tag is set to YES (the default) 
-# then for each documented function all documented entities 
-# called/used by that function will be listed.
-
-REFERENCES_RELATION    = YES
-
-# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen 
-# will generate a verbatim copy of the header file for each class for 
-# which an include is specified. Set to NO to disable this.
-
-VERBATIM_HEADERS       = YES
-
-#---------------------------------------------------------------------------
-# configuration options related to the alphabetical class index
-#---------------------------------------------------------------------------
-
-# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index 
-# of all compounds will be generated. Enable this if the project 
-# contains a lot of classes, structs, unions or interfaces.
-
-ALPHABETICAL_INDEX     = NO
-
-# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then 
-# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns 
-# in which this list will be split (can be a number in the range [1..20])
-
-COLS_IN_ALPHA_INDEX    = 5
-
-# In case all classes in a project start with a common prefix, all 
-# classes will be put under the same header in the alphabetical index. 
-# The IGNORE_PREFIX tag can be used to specify one or more prefixes that 
-# should be ignored while generating the index headers.
-
-IGNORE_PREFIX          = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the HTML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_HTML tag is set to YES (the default) Doxygen will 
-# generate HTML output.
-
-GENERATE_HTML          = YES
-
-# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `html' will be used as the default path.
-
-HTML_OUTPUT            = html
-
-# The HTML_FILE_EXTENSION tag can be used to specify the file extension for 
-# each generated HTML page (for example: .htm,.php,.asp). If it is left blank 
-# doxygen will generate files with .html extension.
-
-HTML_FILE_EXTENSION    = .html
-
-# The HTML_HEADER tag can be used to specify a personal HTML header for 
-# each generated HTML page. If it is left blank doxygen will generate a 
-# standard header.
-
-HTML_HEADER            = 
-
-# The HTML_FOOTER tag can be used to specify a personal HTML footer for 
-# each generated HTML page. If it is left blank doxygen will generate a 
-# standard footer.
-
-HTML_FOOTER            = 
-
-# The HTML_STYLESHEET tag can be used to specify a user-defined cascading 
-# style sheet that is used by each HTML page. It can be used to 
-# fine-tune the look of the HTML output. If the tag is left blank doxygen 
-# will generate a default style sheet. Note that doxygen will try to copy 
-# the style sheet file to the HTML output directory, so don't put your own 
-# stylesheet in the HTML output directory as well, or it will be erased!
-
-HTML_STYLESHEET        = 
-
-# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, 
-# files or namespaces will be aligned in HTML using tables. If set to 
-# NO a bullet list will be used.
-
-HTML_ALIGN_MEMBERS     = YES
-
-# If the GENERATE_HTMLHELP tag is set to YES, additional index files 
-# will be generated that can be used as input for tools like the 
-# Microsoft HTML help workshop to generate a compressed HTML help file (.chm) 
-# of the generated HTML documentation.
-
-GENERATE_HTMLHELP      = NO
-
-# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can 
-# be used to specify the file name of the resulting .chm file. You 
-# can add a path in front of the file if the result should not be 
-# written to the html output directory.
-
-CHM_FILE               = 
-
-# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can 
-# be used to specify the location (absolute path including file name) of 
-# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run 
-# the HTML help compiler on the generated index.hhp.
-
-HHC_LOCATION           = 
-
-# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag 
-# controls whether a separate .chi index file is generated (YES) or whether 
-# it should be included in the master .chm file (NO).
-
-GENERATE_CHI           = NO
-
-# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag 
-# controls whether a binary table of contents is generated (YES) or a 
-# normal table of contents (NO) in the .chm file.
-
-BINARY_TOC             = NO
-
-# The TOC_EXPAND flag can be set to YES to add extra items for group members 
-# to the contents of the HTML help documentation and to the tree view.
-
-TOC_EXPAND             = NO
-
-# The DISABLE_INDEX tag can be used to turn on/off the condensed index at 
-# top of each HTML page. The value NO (the default) enables the index and 
-# the value YES disables it.
-
-DISABLE_INDEX          = NO
-
-# This tag can be used to set the number of enum values (range [1..20]) 
-# that doxygen will group on one line in the generated HTML documentation.
-
-ENUM_VALUES_PER_LINE   = 4
-
-# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be
-# generated containing a tree-like index structure (just like the one that 
-# is generated for HTML Help). For this to work a browser that supports 
-# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+, 
-# Netscape 6.0+, Internet explorer 5.0+, or Konqueror). Windows users are 
-# probably better off using the HTML help feature.
-
-GENERATE_TREEVIEW      = NO
-
-# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be 
-# used to set the initial width (in pixels) of the frame in which the tree 
-# is shown.
-
-TREEVIEW_WIDTH         = 250
-
-#---------------------------------------------------------------------------
-# configuration options related to the LaTeX output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will 
-# generate Latex output.
-
-GENERATE_LATEX         = YES
-
-# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `latex' will be used as the default path.
-
-LATEX_OUTPUT           = latex
-
-# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be 
-# invoked. If left blank `latex' will be used as the default command name.
-
-LATEX_CMD_NAME         = latex
-
-# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to 
-# generate index for LaTeX. If left blank `makeindex' will be used as the 
-# default command name.
-
-MAKEINDEX_CMD_NAME     = makeindex
-
-# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact 
-# LaTeX documents. This may be useful for small projects and may help to 
-# save some trees in general.
-
-COMPACT_LATEX          = NO
-
-# The PAPER_TYPE tag can be used to set the paper type that is used 
-# by the printer. Possible values are: a4, a4wide, letter, legal and 
-# executive. If left blank a4wide will be used.
-
-PAPER_TYPE             = a4
-
-# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX 
-# packages that should be included in the LaTeX output.
-
-EXTRA_PACKAGES         = 
-
-# The LATEX_HEADER tag can be used to specify a personal LaTeX header for 
-# the generated latex document. The header should contain everything until 
-# the first chapter. If it is left blank doxygen will generate a 
-# standard header. Notice: only use this tag if you know what you are doing!
-
-LATEX_HEADER           = 
-
-# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated 
-# is prepared for conversion to pdf (using ps2pdf). The pdf file will 
-# contain links (just like the HTML output) instead of page references. 
-# This makes the output suitable for online browsing using a pdf viewer.
-
-PDF_HYPERLINKS         = YES
-
-# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of 
-# plain latex in the generated Makefile. Set this option to YES to get a 
-# higher quality PDF documentation.
-
-USE_PDFLATEX           = YES
-
-# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode 
-# command to the generated LaTeX files. This will instruct LaTeX to keep 
-# running if errors occur, instead of asking the user for help. 
-# This option is also used when generating formulas in HTML.
-
-LATEX_BATCHMODE        = NO
-
-# If LATEX_HIDE_INDICES is set to YES then doxygen will not 
-# include the index chapters (such as File Index, Compound Index, etc.) 
-# in the output.
-
-LATEX_HIDE_INDICES     = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the RTF output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output 
-# The RTF output is optimized for Word 97 and may not look very pretty with 
-# other RTF readers or editors.
-
-GENERATE_RTF           = NO
-
-# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `rtf' will be used as the default path.
-
-RTF_OUTPUT             = rtf
-
-# If the COMPACT_RTF tag is set to YES Doxygen generates more compact 
-# RTF documents. This may be useful for small projects and may help to 
-# save some trees in general.
-
-COMPACT_RTF            = NO
-
-# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated 
-# will contain hyperlink fields. The RTF file will 
-# contain links (just like the HTML output) instead of page references. 
-# This makes the output suitable for online browsing using WORD or other 
-# programs which support those fields. 
-# Note: wordpad (write) and others do not support links.
-
-RTF_HYPERLINKS         = NO
-
-# Load stylesheet definitions from file. Syntax is similar to doxygen's 
-# config file, i.e. a series of assignments. You only have to provide 
-# replacements, missing definitions are set to their default value.
-
-RTF_STYLESHEET_FILE    = 
-
-# Set optional variables used in the generation of an rtf document. 
-# Syntax is similar to doxygen's config file.
-
-RTF_EXTENSIONS_FILE    = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the man page output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_MAN tag is set to YES (the default) Doxygen will 
-# generate man pages
-
-GENERATE_MAN           = NO
-
-# The MAN_OUTPUT tag is used to specify where the man pages will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `man' will be used as the default path.
-
-MAN_OUTPUT             = man
-
-# The MAN_EXTENSION tag determines the extension that is added to 
-# the generated man pages (default is the subroutine's section .3)
-
-MAN_EXTENSION          = .3
-
-# If the MAN_LINKS tag is set to YES and Doxygen generates man output, 
-# then it will generate one additional man file for each entity 
-# documented in the real man page(s). These additional files 
-# only source the real man page, but without them the man command 
-# would be unable to find the correct page. The default is NO.
-
-MAN_LINKS              = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the XML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_XML tag is set to YES Doxygen will 
-# generate an XML file that captures the structure of 
-# the code including all documentation.
-
-GENERATE_XML           = NO
-
-# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `xml' will be used as the default path.
-
-XML_OUTPUT             = xml
-
-# The XML_SCHEMA tag can be used to specify an XML schema, 
-# which can be used by a validating XML parser to check the 
-# syntax of the XML files.
-
-XML_SCHEMA             = 
-
-# The XML_DTD tag can be used to specify an XML DTD, 
-# which can be used by a validating XML parser to check the 
-# syntax of the XML files.
-
-XML_DTD                = 
-
-# If the XML_PROGRAMLISTING tag is set to YES Doxygen will 
-# dump the program listings (including syntax highlighting 
-# and cross-referencing information) to the XML output. Note that 
-# enabling this will significantly increase the size of the XML output.
-
-XML_PROGRAMLISTING     = YES
-
-#---------------------------------------------------------------------------
-# configuration options for the AutoGen Definitions output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will 
-# generate an AutoGen Definitions (see autogen.sf.net) file 
-# that captures the structure of the code including all 
-# documentation. Note that this feature is still experimental 
-# and incomplete at the moment.
-
-GENERATE_AUTOGEN_DEF   = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the Perl module output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_PERLMOD tag is set to YES Doxygen will 
-# generate a Perl module file that captures the structure of 
-# the code including all documentation. Note that this 
-# feature is still experimental and incomplete at the 
-# moment.
-
-GENERATE_PERLMOD       = NO
-
-# If the PERLMOD_LATEX tag is set to YES Doxygen will generate 
-# the necessary Makefile rules, Perl scripts and LaTeX code to be able 
-# to generate PDF and DVI output from the Perl module output.
-
-PERLMOD_LATEX          = NO
-
-# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be 
-# nicely formatted so it can be parsed by a human reader.  This is useful 
-# if you want to understand what is going on.  On the other hand, if this 
-# tag is set to NO the size of the Perl module output will be much smaller 
-# and Perl will parse it just the same.
-
-PERLMOD_PRETTY         = YES
-
-# The names of the make variables in the generated doxyrules.make file 
-# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. 
-# This is useful so different doxyrules.make files included by the same 
-# Makefile don't overwrite each other's variables.
-
-PERLMOD_MAKEVAR_PREFIX = 
-
-#---------------------------------------------------------------------------
-# Configuration options related to the preprocessor   
-#---------------------------------------------------------------------------
-
-# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will 
-# evaluate all C-preprocessor directives found in the sources and include 
-# files.
-
-ENABLE_PREPROCESSING   = YES
-
-# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro 
-# names in the source code. If set to NO (the default) only conditional 
-# compilation will be performed. Macro expansion can be done in a controlled 
-# way by setting EXPAND_ONLY_PREDEF to YES.
-
-MACRO_EXPANSION        = NO
-
-# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES 
-# then the macro expansion is limited to the macros specified with the 
-# PREDEFINED and EXPAND_AS_PREDEFINED tags.
-
-EXPAND_ONLY_PREDEF     = NO
-
-# If the SEARCH_INCLUDES tag is set to YES (the default) the include files 
-# in the INCLUDE_PATH (see below) will be searched if a #include is found.
-
-SEARCH_INCLUDES        = YES
-
-# The INCLUDE_PATH tag can be used to specify one or more directories that 
-# contain include files that are not input files but should be processed by 
-# the preprocessor.
-
-INCLUDE_PATH           = 
-
-# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard 
-# patterns (like *.h and *.hpp) to filter out the header-files in the 
-# directories. If left blank, the patterns specified with FILE_PATTERNS will 
-# be used.
-
-INCLUDE_FILE_PATTERNS  = 
-
-# The PREDEFINED tag can be used to specify one or more macro names that 
-# are defined before the preprocessor is started (similar to the -D option of 
-# gcc). The argument of the tag is a list of macros of the form: name 
-# or name=definition (no spaces). If the definition and the = are 
-# omitted =1 is assumed. To prevent a macro definition from being 
-# undefined via #undef or recursively expanded use the := operator 
-# instead of the = operator.
-
-PREDEFINED             = 
-
-# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then 
-# this tag can be used to specify a list of macro names that should be expanded. 
-# The macro definition that is found in the sources will be used. 
-# Use the PREDEFINED tag if you want to use a different macro definition.
-
-EXPAND_AS_DEFINED      = 
-
-# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then 
-# doxygen's preprocessor will remove all function-like macros that are alone 
-# on a line, have an all uppercase name, and do not end with a semicolon. Such 
-# function macros are typically used for boiler-plate code, and will confuse 
-# the parser if not removed.
-
-SKIP_FUNCTION_MACROS   = YES
-
-#---------------------------------------------------------------------------
-# Configuration::additions related to external references   
-#---------------------------------------------------------------------------
-
-# The TAGFILES option can be used to specify one or more tagfiles. 
-# Optionally an initial location of the external documentation 
-# can be added for each tagfile. The format of a tag file without 
-# this location is as follows: 
-#   TAGFILES = file1 file2 ... 
-# Adding location for the tag files is done as follows: 
-#   TAGFILES = file1=loc1 "file2 = loc2" ... 
-# where "loc1" and "loc2" can be relative or absolute paths or 
-# URLs. If a location is present for each tag, the installdox tool 
-# does not have to be run to correct the links.
-# Note that each tag file must have a unique name
-# (where the name does NOT include the path)
-# If a tag file is not located in the directory in which doxygen 
-# is run, you must also specify the path to the tagfile here.
-
-TAGFILES               = 
-
-# When a file name is specified after GENERATE_TAGFILE, doxygen will create 
-# a tag file that is based on the input files it reads.
-
-GENERATE_TAGFILE       = 
-
-# If the ALLEXTERNALS tag is set to YES all external classes will be listed 
-# in the class index. If set to NO only the inherited external classes 
-# will be listed.
-
-ALLEXTERNALS           = NO
-
-# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed 
-# in the modules index. If set to NO, only the current project's groups will 
-# be listed.
-
-EXTERNAL_GROUPS        = YES
-
-# The PERL_PATH should be the absolute path and name of the perl script 
-# interpreter (i.e. the result of `which perl').
-
-PERL_PATH              = /usr/bin/perl
-
-#---------------------------------------------------------------------------
-# Configuration options related to the dot tool   
-#---------------------------------------------------------------------------
-
-# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will 
-# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base 
-# or super classes. Setting the tag to NO turns the diagrams off. Note that 
-# this option is superseded by the HAVE_DOT option below. This is only a 
-# fallback. It is recommended to install and use dot, since it yields more 
-# powerful graphs.
-
-CLASS_DIAGRAMS         = YES
-
-# If set to YES, the inheritance and collaboration graphs will hide 
-# inheritance and usage relations if the target is undocumented 
-# or is not a class.
-
-HIDE_UNDOC_RELATIONS   = YES
-
-# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is 
-# available from the path. This tool is part of Graphviz, a graph visualization 
-# toolkit from AT&T and Lucent Bell Labs. The other options in this section 
-# have no effect if this option is set to NO (the default)
-
-HAVE_DOT               = NO
-
-# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for each documented class showing the direct and 
-# indirect inheritance relations. Setting this tag to YES will force the 
-# CLASS_DIAGRAMS tag to NO.
-
-CLASS_GRAPH            = YES
-
-# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for each documented class showing the direct and 
-# indirect implementation dependencies (inheritance, containment, and 
-# class references variables) of the class with other documented classes.
-
-COLLABORATION_GRAPH    = YES
-
-# If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for groups, showing the direct group dependencies.
-
-GROUP_GRAPHS           = YES
-
-# If the UML_LOOK tag is set to YES doxygen will generate inheritance and 
-# collaboration diagrams in a style similar to the OMG's Unified Modeling 
-# Language.
-
-UML_LOOK               = NO
-
-# If set to YES, the inheritance and collaboration graphs will show the 
-# relations between templates and their instances.
-
-TEMPLATE_RELATIONS     = NO
-
-# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT 
-# tags are set to YES then doxygen will generate a graph for each documented 
-# file showing the direct and indirect include dependencies of the file with 
-# other documented files.
-
-INCLUDE_GRAPH          = YES
-
-# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and 
-# HAVE_DOT tags are set to YES then doxygen will generate a graph for each 
-# documented header file showing the documented files that directly or 
-# indirectly include this file.
-
-INCLUDED_BY_GRAPH      = YES
-
-# If the CALL_GRAPH and HAVE_DOT tags are set to YES then doxygen will 
-# generate a call dependency graph for every global function or class method. 
-# Note that enabling this option will significantly increase the time of a run. 
-# So in most cases it will be better to enable call graphs for selected 
-# functions only using the \callgraph command.
-
-CALL_GRAPH             = NO
-
-# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graphical hierarchy of all classes instead of a textual one.
-
-GRAPHICAL_HIERARCHY    = YES
-
-# If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES 
-# then doxygen will show the dependencies a directory has on other directories 
-# in a graphical way. The dependency relations are determined by the #include
-# relations between the files in the directories.
-
-DIRECTORY_GRAPH        = YES
-
-# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images 
-# generated by dot. Possible values are png, jpg, or gif. 
-# If left blank png will be used.
-
-DOT_IMAGE_FORMAT       = png
-
-# The tag DOT_PATH can be used to specify the path where the dot tool can be 
-# found. If left blank, it is assumed the dot tool can be found in the path.
-
-DOT_PATH               = 
-
-# The DOTFILE_DIRS tag can be used to specify one or more directories that 
-# contain dot files that are included in the documentation (see the 
-# \dotfile command).
-
-DOTFILE_DIRS           = 
-
-# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width 
-# (in pixels) of the graphs generated by dot. If a graph becomes larger than 
-# this value, doxygen will try to truncate the graph, so that it fits within 
-# the specified constraint. Beware that most browsers cannot cope with very 
-# large images.
-
-MAX_DOT_GRAPH_WIDTH    = 1024
-
-# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allows height 
-# (in pixels) of the graphs generated by dot. If a graph becomes larger than 
-# this value, doxygen will try to truncate the graph, so that it fits within 
-# the specified constraint. Beware that most browsers cannot cope with very 
-# large images.
-
-MAX_DOT_GRAPH_HEIGHT   = 1024
-
-# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the 
-# graphs generated by dot. A depth value of 3 means that only nodes reachable 
-# from the root by following a path via at most 3 edges will be shown. Nodes 
-# that lay further from the root node will be omitted. Note that setting this 
-# option to 1 or 2 may greatly reduce the computation time needed for large 
-# code bases. Also note that a graph may be further truncated if the graph's 
-# image dimensions are not sufficient to fit the graph (see MAX_DOT_GRAPH_WIDTH 
-# and MAX_DOT_GRAPH_HEIGHT). If 0 is used for the depth value (the default), 
-# the graph is not depth-constrained.
-
-MAX_DOT_GRAPH_DEPTH    = 0
-
-# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent 
-# background. This is disabled by default, which results in a white background. 
-# Warning: Depending on the platform used, enabling this option may lead to 
-# badly anti-aliased labels on the edges of a graph (i.e. they become hard to 
-# read).
-
-DOT_TRANSPARENT        = NO
-
-# Set the DOT_MULTI_TARGETS tag to YES allow dot to generate multiple output 
-# files in one run (i.e. multiple -o and -T options on the command line). This 
-# makes dot run faster, but since only newer versions of dot (>1.8.10) 
-# support this, this feature is disabled by default.
-
-DOT_MULTI_TARGETS      = NO
-
-# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will 
-# generate a legend page explaining the meaning of the various boxes and 
-# arrows in the dot generated graphs.
-
-GENERATE_LEGEND        = YES
-
-# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will 
-# remove the intermediate dot files that are used to generate 
-# the various graphs.
-
-DOT_CLEANUP            = YES
-
-#---------------------------------------------------------------------------
-# Configuration::additions related to the search engine   
-#---------------------------------------------------------------------------
-
-# The SEARCHENGINE tag specifies whether or not a search engine should be 
-# used. If set to NO the values of all tags below this one will be ignored.
-
-SEARCHENGINE           = NO
diff --git a/docs/Doxyfilter b/docs/Doxyfilter
deleted file mode 100644
index 6a6d50f..0000000
--- a/docs/Doxyfilter
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/sh
-
-#
-# Doxyfilter <source-root> <filename>
-#
-
-dir=$(dirname "$0")
-
-PYFILTER="$dir/pythfilter.py"
-
-if [ "${2/.py/}" != "$2" ]
-then
-    python "$PYFILTER" -r "$1" -f "$2"
-else
-    cat "$2"
-fi
diff --git a/docs/Makefile b/docs/Makefile
index 620a296..053d7af 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -27,9 +27,6 @@ all: build
 .PHONY: build
 build: html txt man-pages figs
 
-.PHONY: dev-docs
-dev-docs: python-dev-docs
-
 .PHONY: html
 html: $(DOC_HTML) html/index.html
 
@@ -45,15 +42,6 @@ figs:
 	set -x; $(MAKE) -C figs ; else                   \
 	echo "fig2dev (transfig) not installed; skipping figs."; fi
 
-.PHONY: python-dev-docs
-python-dev-docs:
-	@mkdir -v -p api/tools/python
-	@set -e ; if which $(DOXYGEN) 1>/dev/null 2>/dev/null; then \
-        echo "Running doxygen to generate Python tools APIs ... "; \
-	$(DOXYGEN) Doxyfile;                                       \
-	$(MAKE) -C api/tools/python/latex ; else                   \
-        echo "Doxygen not installed; skipping python-dev-docs."; fi
-
 .PHONY: man-pages
 man-pages:
 	@if which $(POD2MAN) 1>/dev/null 2>/dev/null; then \
@@ -76,7 +64,6 @@ clean:
 	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~ 
 	rm -rf *.ilg *.log *.ind *.toc *.bak core
 	rm -rf html txt
-	rm -rf api
 	rm -rf man5
 	rm -rf man1
 
diff --git a/docs/html.sty b/docs/html.sty
deleted file mode 100644
index b5f8fbb..0000000
--- a/docs/html.sty
+++ /dev/null
@@ -1,887 +0,0 @@
-%
-% $Id: html.sty,v 1.23 1998/02/26 10:32:24 latex2html Exp $
-% LaTeX2HTML Version 96.2 : html.sty
-% 
-% This file contains definitions of LaTeX commands which are
-% processed in a special way by the translator. 
-% For example, there are commands for embedding external hypertext links,
-% for cross-references between documents or for including raw HTML.
-% This file includes the comments.sty file v2.0 by Victor Eijkhout
-% In most cases these commands do nothing when processed by LaTeX.
-%
-% Place this file in a directory accessible to LaTeX (i.e., somewhere
-% in the TEXINPUTS path.)
-%
-% NOTE: This file works with LaTeX 2.09 or (the newer) LaTeX2e.
-%       If you only have LaTeX 2.09, some complex LaTeX2HTML features
-%       like support for segmented documents are not available.
-
-% Changes:
-% See the change log at end of file.
-
-
-% Exit if the style file is already loaded
-% (suggested by Lee Shombert <las@potomac.wash.inmet.com>
-\ifx \htmlstyloaded\relax \endinput\else\let\htmlstyloaded\relax\fi
-\makeatletter
-
-\providecommand{\latextohtml}{\LaTeX2\texttt{HTML}}
-
-
-%%% LINKS TO EXTERNAL DOCUMENTS
-%
-% This can be used to provide links to arbitrary documents.
-% The first argumment should be the text that is going to be
-% highlighted and the second argument a URL.
-% The hyperlink will appear as a hyperlink in the HTML 
-% document and as a footnote in the dvi or ps files.
-%
-\newcommand{\htmladdnormallinkfoot}[2]{#1\footnote{#2}} 
-
-
-% This is an alternative definition of the command above which
-% will ignore the URL in the dvi or ps files.
-\newcommand{\htmladdnormallink}[2]{#1}
-
-
-% This command takes as argument a URL pointing to an image.
-% The image will be embedded in the HTML document but will
-% be ignored in the dvi and ps files.
-%
-\newcommand{\htmladdimg}[1]{}
-
-
-%%% CROSS-REFERENCES BETWEEN (LOCAL OR REMOTE) DOCUMENTS
-%
-% This can be used to refer to symbolic labels in other Latex 
-% documents that have already been processed by the translator.
-% The arguments should be:
-% #1 : the URL to the directory containing the external document
-% #2 : the path to the labels.pl file of the external document.
-% If the external document lives on a remote machine then labels.pl 
-% must be copied on the local machine.
-%
-%e.g. \externallabels{http://cbl.leeds.ac.uk/nikos/WWW/doc/tex2html/latex2html}
-%                    {/usr/cblelca/nikos/tmp/labels.pl}
-% The arguments are ignored in the dvi and ps files.
-%
-\newcommand{\externallabels}[2]{}
-
-
-% This complements the \externallabels command above. The argument
-% should be a label defined in another latex document and will be
-% ignored in the dvi and ps files.
-%
-\newcommand{\externalref}[1]{}
-
-
-% Suggested by  Uffe Engberg (http://www.brics.dk/~engberg/)
-% This allows the same effect for citations in external bibliographies.
-% An  \externallabels  command must be given, locating a labels.pl file
-% which defines the location and keys used in the external .html file.
-%  
-\newcommand{\externalcite}{\nocite}
-
-
-%%% HTMLRULE
-% This command adds a horizontal rule and is valid even within
-% a figure caption.
-% Here we introduce a stub for compatibility.
-\newcommand{\htmlrule}{\protect\HTMLrule}
-\newcommand{\HTMLrule}{\@ifstar\htmlrulestar\htmlrulestar}
-\newcommand{\htmlrulestar}[1]{}
-
-% This command adds information within the <BODY> ... </BODY> tag
-%
-\newcommand{\bodytext}[1]{}
-\newcommand{\htmlbody}{}
-
-
-%%% HYPERREF 
-% Suggested by Eric M. Carol <eric@ca.utoronto.utcc.enfm>
-% Similar to \ref but accepts conditional text. 
-% The first argument is HTML text which will become ``hyperized''
-% (underlined).
-% The second and third arguments are text which will appear only in the paper
-% version (DVI file), enclosing the fourth argument which is a reference to a label.
-%
-%e.g. \hyperref{using the tracer}{using the tracer (see Section}{)}{trace}
-% where there is a corresponding \label{trace}
-%
-\newcommand{\hyperref}{\hyperrefx[ref]}
-\def\hyperrefx[#1]{{\def\next{#1}%
- \def\tmp{ref}\ifx\next\tmp\aftergroup\hyperrefref
- \else\def\tmp{pageref}\ifx\next\tmp\aftergroup\hyperpageref
- \else\def\tmp{page}\ifx\next\tmp\aftergroup\hyperpageref
- \else\def\tmp{noref}\ifx\next\tmp\aftergroup\hypernoref
- \else\def\tmp{no}\ifx\next\tmp\aftergroup\hypernoref
- \else\typeout{*** unknown option \next\space to  hyperref ***}%
- \fi\fi\fi\fi\fi}}
-\newcommand{\hyperrefref}[4]{#2\ref{#4}#3}
-\newcommand{\hyperpageref}[4]{#2\pageref{#4}#3}
-\newcommand{\hypernoref}[3]{#2}
-
-
-%%% HYPERCITE --- added by RRM
-% Suggested by Stephen Simpson <simpson@math.psu.edu>
-% effects the same ideas as in  \hyperref, but for citations.
-% It does not allow an optional argument to the \cite, in LaTeX.
-%
-%   \hypercite{<html-text>}{<LaTeX-text>}{<opt-text>}{<key>}
-%
-% uses the pre/post-texts in LaTeX, with a  \cite{<key>}
-%
-%   \hypercite[ext]{<html-text>}{<LaTeX-text>}{<key>}
-%
-% uses the pre/post-texts in LaTeX, with a  \nocite{<key>}
-% the actual reference comes from an \externallabels  file.
-%
-\newcommand{\hypercite}{\hypercitex[int]}
-\def\hypercitex[#1]{{\def\next{#1}%
- \def\tmp{int}\ifx\next\tmp\aftergroup\hyperciteint
- \else\def\tmp{cite}\ifx\next\tmp\aftergroup\hyperciteint
- \else\def\tmp{ext}\ifx\next\tmp\aftergroup\hyperciteext
- \else\def\tmp{nocite}\ifx\next\tmp\aftergroup\hyperciteext
- \else\def\tmp{no}\ifx\next\tmp\aftergroup\hyperciteext
- \else\typeout{*** unknown option \next\space to  hypercite ***}%
- \fi\fi\fi\fi\fi}}
-\newcommand{\hyperciteint}[4]{#2{\def\tmp{#3}\def\emptyopt{}%
- \ifx\tmp\emptyopt\cite{#4}\else\cite[#3]{#4}\fi}}
-\newcommand{\hyperciteext}[3]{#2\nocite{#3}}
-
-
-
-%%% HTMLREF
-% Reference in HTML version only.
-% Mix between \htmladdnormallink and \hyperref.
-% First arg is text for in both versions, second is label for use in HTML
-% version.
-\newcommand{\htmlref}[2]{#1}
-
-%%% HTMLCITE
-% Reference in HTML version only.
-% Mix between \htmladdnormallink and \hypercite.
-% First arg is text for in both versions, second is citation for use in HTML
-% version.
-\newcommand{\htmlcite}[2]{#1}
-
-
-%%% HTMLIMAGE
-% This command can be used inside any environment that is converted
-% into an inlined image (eg a "figure" environment) in order to change
-% the way the image will be translated. The argument of \htmlimage
-% is really a string of options separated by commas ie 
-% [scale=<scale factor>],[external],[thumbnail=<reduction factor>
-% The scale option allows control over the size of the final image.
-% The ``external'' option will cause the image not to be inlined 
-% (images are inlined by default). External images will be accessible
-% via a hypertext link. 
-% The ``thumbnail'' option will cause a small inlined image to be 
-% placed in the caption. The size of the thumbnail depends on the
-% reduction factor. The use of the ``thumbnail'' option implies
-% the ``external'' option.
-%
-% Example:
-% \htmlimage{scale=1.5,external,thumbnail=0.2}
-% will cause a small thumbnail image 1/5th of the original size to be
-% placed in the final document, pointing to an external image 1.5
-% times bigger than the original.
-% 
-\newcommand{\htmlimage}[1]{}
-
-
-% \htmlborder causes a border to be placed around an image or table
-% when the image is placed within a <TABLE> cell.
-\newcommand{\htmlborder}[1]{}
-
-% Put \begin{makeimage}, \end{makeimage} around LaTeX to ensure its
-% translation into an image.
-% This shields sensitive text from being translated.
-\newenvironment{makeimage}{}{}
-
-
-% A dummy environment that can be useful to alter the order
-% in which commands are processed, in LaTeX2HTML
-\newenvironment{tex2html_deferred}{}{}
-
-
-%%% HTMLADDTONAVIGATION
-% This command appends its argument to the buttons in the navigation
-% panel. It is ignored by LaTeX.
-%
-% Example:
-% \htmladdtonavigation{\htmladdnormallink
-%              {\htmladdimg{http://server/path/to/gif}}
-%              {http://server/path}}
-\newcommand{\htmladdtonavigation}[1]{}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% Comment.sty   version 2.0, 19 June 1992
-% selectively in/exclude pieces of text: the user can define new
-% comment versions, and each is controlled separately.
-% This style can be used with plain TeX or LaTeX, and probably
-% most other packages too.
-%
-% Examples of use in LaTeX and TeX follow \endinput
-%
-% Author
-%    Victor Eijkhout
-%    Department of Computer Science
-%    University Tennessee at Knoxville
-%    104 Ayres Hall
-%    Knoxville, TN 37996
-%    USA
-%
-%    eijkhout@cs.utk.edu
-%
-% Usage: all text included in between
-%    \comment ... \endcomment
-% or \begin{comment} ... \end{comment}
-% is discarded. The closing command should appear on a line
-% of its own. No starting spaces, nothing after it.
-% This environment should work with arbitrary amounts
-% of comment.
-%
-% Other 'comment' environments are defined by
-% and are selected/deselected with
-% \includecomment{versiona}
-% \excludecoment{versionb}
-%
-% These environments are used as
-% \versiona ... \endversiona
-% or \begin{versiona} ... \end{versiona}
-% with the closing command again on a line of its own.
-%
-% Basic approach:
-% to comment something out, scoop up  every line in verbatim mode
-% as macro argument, then throw it away.
-% For inclusions, both the opening and closing comands
-% are defined as noop
-%
-% Changed \next to \html@next to prevent clashes with other sty files
-% (mike@emn.fr)
-% Changed \html@next to \htmlnext so the \makeatletter and
-% \makeatother commands could be removed (they were causing other
-% style files - changebar.sty - to crash) (nikos@cbl.leeds.ac.uk)
-% Changed \htmlnext back to \html@next...
-
-\def\makeinnocent#1{\catcode`#1=12 }
-\def\csarg#1#2{\expandafter#1\csname#2\endcsname}
-
-\def\ThrowAwayComment#1{\begingroup
-    \def\CurrentComment{#1}%
-    \let\do\makeinnocent \dospecials
-    \makeinnocent\^^L% and whatever other special cases
-    \endlinechar`\^^M \catcode`\^^M=12 \xComment}
-{\catcode`\^^M=12 \endlinechar=-1 %
- \gdef\xComment#1^^M{\def\test{#1}\edef\test{\meaning\test}
-      \csarg\ifx{PlainEnd\CurrentComment Test}\test
-          \let\html@next\endgroup
-      \else \csarg\ifx{LaLaEnd\CurrentComment Test}\test
-            \edef\html@next{\endgroup\noexpand\end{\CurrentComment}}
-      \else \csarg\ifx{LaInnEnd\CurrentComment Test}\test
-            \edef\html@next{\endgroup\noexpand\end{\CurrentComment}}
-      \else \let\html@next\xComment
-      \fi \fi \fi \html@next}
-}
-
-\def\includecomment
- #1{\expandafter\def\csname#1\endcsname{}%
-    \expandafter\def\csname end#1\endcsname{}}
-\def\excludecomment
- #1{\expandafter\def\csname#1\endcsname{\ThrowAwayComment{#1}}%
-    {\escapechar=-1\relax
-     \edef\tmp{\string\\end#1}%
-      \csarg\xdef{PlainEnd#1Test}{\meaning\tmp}%
-     \edef\tmp{\string\\end\string\{#1\string\}}%
-      \csarg\xdef{LaLaEnd#1Test}{\meaning\tmp}%
-     \edef\tmp{\string\\end \string\{#1\string\}}%
-      \csarg\xdef{LaInnEnd#1Test}{\meaning\tmp}%
-    }}
-
-\excludecomment{comment}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% end Comment.sty
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-%
-% Alternative code by Robin Fairbairns, 22 September 1997
-%
-\newcommand\@gobbleenv{\let\reserved@a\@currenvir\@gobble@nv}
-\long\def\@gobble@nv#1\end#2{\def\reserved@b{#2}%
- \ifx\reserved@a\reserved@b
-  \edef\reserved@a{\noexpand\end{\reserved@a}}%
-  \expandafter\reserved@a
- \else
-  \expandafter\@gobble@nv
- \fi}
-
-\renewcommand{\excludecomment}[1]{%
-    \csname newenvironment\endcsname{#1}{\@gobbleenv}{}}
-
-%%% RAW HTML 
-% 
-% Enclose raw HTML between a \begin{rawhtml} and \end{rawhtml}.
-% The html environment ignores its body
-%
-\excludecomment{rawhtml}
-
-
-%%% HTML ONLY
-%
-% Enclose LaTeX constructs which will only appear in the 
-% HTML output and will be ignored by LaTeX with 
-% \begin{htmlonly} and \end{htmlonly}
-%
-\excludecomment{htmlonly}
-% Shorter version
-\newcommand{\html}[1]{}
-
-% for images.tex only
-\excludecomment{imagesonly}
-
-%%% LaTeX ONLY
-% Enclose LaTeX constructs which will only appear in the 
-% DVI output and will be ignored by latex2html with 
-%\begin{latexonly} and \end{latexonly}
-%
-\newenvironment{latexonly}{}{}
-% Shorter version
-\newcommand{\latex}[1]{#1}
-
-
-%%% LaTeX or HTML
-% Combination of \latex and \html.
-% Say \latexhtml{this should be latex text}{this html text}
-%
-%\newcommand{\latexhtml}[2]{#1}
-\long\def\latexhtml#1#2{#1}
-
-
-%%% tracing the HTML conversions
-% This alters the tracing-level within the processing
-% performed by  latex2html  by adjusting  $VERBOSITY
-% (see  latex2html.config  for the appropriate values)
-%
-\newcommand{\htmltracing}[1]{}
-\newcommand{\htmltracenv}[1]{}
-
-
-%%%  \strikeout for HTML only
-% uses <STRIKE>...</STRIKE> tags on the argument
-% LaTeX just gobbles it up.
-\newcommand{\strikeout}[1]{}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%% JCL - stop input here if LaTeX2e is not present
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\ifx\if@compatibility\undefined
-  %LaTeX209
-  \makeatother\relax\expandafter\endinput
-\fi
-\if@compatibility
-  %LaTeX2e in LaTeX209 compatibility mode
-  \makeatother\relax\expandafter\endinput
-\fi
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% Start providing LaTeX2e extension:
-% This is currently:
-%  - additional optional argument for \htmladdimg
-%  - support for segmented documents
-%
-
-\ProvidesPackage{html}
-          [1996/12/22 v1.1 hypertext commands for latex2html (nd, hws, rrm)]
-%%%%MG
-
-% This command takes as argument a URL pointing to an image.
-% The image will be embedded in the HTML document but will
-% be ignored in the dvi and ps files.  The optional argument
-% denotes additional HTML tags.
-%
-% Example:  \htmladdimg[ALT="portrait" ALIGN=CENTER]{portrait.gif}
-%
-\renewcommand{\htmladdimg}[2][]{}
-
-%%% HTMLRULE for LaTeX2e
-% This command adds a horizontal rule and is valid even within
-% a figure caption.
-%
-% This command is best used with LaTeX2e and HTML 3.2 support.
-% It is like \hrule, but allows for options via key--value pairs
-% as follows:  \htmlrule[key1=value1, key2=value2, ...] .
-% Use \htmlrule* to suppress the <BR> tag.
-% Eg. \htmlrule[left, 15, 5pt, "none", NOSHADE] produces
-% <BR CLEAR="left"><HR NOSHADE SIZE="15">.
-% Renew the necessary part.
-\renewcommand{\htmlrulestar}[1][all]{}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-%  renew some definitions to allow optional arguments
-%
-% The description of the options is missing, as yet.
-%
-\renewcommand{\latextohtml}{\textup{\LaTeX2\texttt{HTML}}}
-\renewcommand{\htmladdnormallinkfoot}[3][]{#2\footnote{#3}} 
-\renewcommand{\htmladdnormallink}[3][]{#2}
-\renewcommand{\htmlbody}[1][]{}
-\renewcommand{\hyperref}[1][ref]{\hyperrefx[#1]}
-\renewcommand{\hypercite}[1][int]{\hypercitex[#1]}
-\renewcommand{\htmlref}[3][]{#2}
-\renewcommand{\htmlcite}[1]{#1\htmlcitex}
-\newcommand{\htmlcitex}[2][]{{\def\tmp{#1}\ifx\tmp\@empty\else~[#1]\fi}}
-\renewcommand{\htmlimage}[2][]{}
-\renewcommand{\htmlborder}[2][]{}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-%  HTML  HTMLset  HTMLsetenv
-%
-%  These commands do nothing in LaTeX, but can be used to place
-%  HTML tags or set Perl variables during the LaTeX2HTML processing;
-%  They are intended for expert use only.
-
-\newcommand{\HTMLcode}[2][]{}
-\ifx\undefined\HTML\newcommand{\HTML}[2][]{}\else
-\typeout{*** Warning: \string\HTML\space had an incompatible definition ***}%
-\typeout{*** instead use \string\HTMLcode\space for raw HTML code ***}%
-\fi 
-\newcommand{\HTMLset}[3][]{}
-\newcommand{\HTMLsetenv}[3][]{}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% The following commands pertain to document segmentation, and
-% were added by Herbert Swan <dprhws@edp.Arco.com> (with help from
-% Michel Goossens <goossens@cern.ch>):
-%
-%
-% This command inputs internal latex2html tables so that large
-% documents can to partitioned into smaller (more manageable)
-% segments.
-%
-\newcommand{\internal}[2][internals]{}
-
-%
-%  Define a dummy stub \htmlhead{}.  This command causes latex2html
-%  to define the title of the start of a new segment.  It is not
-%  normally placed in the user's document.  Rather, it is passed to
-%  latex2html via a .ptr file written by \segment.
-%
-\newcommand{\htmlhead}[3][]{}
-
-%  In the LaTeX2HTML version this will eliminate the title line
-%  generated by a \segment command, but retains the title string
-%  for use in other places.
-%
-\newcommand{\htmlnohead}{}
-
-
-%  In the LaTeX2HTML version this put a URL into a <BASE> tag
-%  within the <HEAD>...</HEAD> portion of a document.
-%
-\newcommand{\htmlbase}[1]{}
-%
-
-%
-%  The dummy command \endpreamble is needed by latex2html to
-%  mark the end of the preamble in document segments that do
-%  not contain a \begin{document}
-%
-\newcommand{\startdocument}{}
-
-
-% \tableofchildlinks, \htmlinfo
-%     by Ross Moore  ---  extensions dated 27 September 1997
-%
-%  These do nothing in LaTeX but for LaTeX2HTML they mark 
-%  where the table of child-links and info-page should be placed,
-%  when the user wants other than the default.
-%	\tableofchildlinks	 % put mini-TOC at this location
-%	\tableofchildlinks[off]	 % not on current page
-%	\tableofchildlinks[none] % not on current and subsequent pages
-%	\tableofchildlinks[on]   % selectively on current page
-%	\tableofchildlinks[all]  % on current and all subsequent pages
-%	\htmlinfo	 	 % put info-page at this location
-%	\htmlinfo[off]		 % no info-page in current document
-%	\htmlinfo[none]		 % no info-page in current document
-%  *-versions omit the preceding <BR> tag.
-%
-\newcommand{\tableofchildlinks}{%
-  \@ifstar\tableofchildlinksstar\tableofchildlinksstar}
-\newcommand{\tableofchildlinksstar}[1][]{}
-
-\newcommand{\htmlinfo}{\@ifstar\htmlinfostar\htmlinfostar}
-\newcommand{\htmlinfostar}[1][]{}
-
-
-%  This redefines  \begin  to allow for an optional argument
-%  which is used by LaTeX2HTML to specify `style-sheet' information
-
-\let\realLaTeX@begin=\begin
-\renewcommand{\begin}[1][]{\realLaTeX@begin}
-
-
-%
-%  Allocate a new set of section counters, which will get incremented
-%  for "*" forms of sectioning commands, and for a few miscellaneous
-%  commands.
-%
-
-\newcounter{lpart}
-\newcounter{lchapter}[part]
-\@ifundefined{c@chapter}%
- {\let\Hchapter\relax \newcounter{lsection}[part]}%
- {\let\Hchapter=\chapter \newcounter{lsection}[chapter]}
-\newcounter{lsubsection}[section]
-\newcounter{lsubsubsection}[subsection]
-\newcounter{lparagraph}[subsubsection]
-\newcounter{lsubparagraph}[paragraph]
-\newcounter{lequation}
-
-%
-%  Redefine "*" forms of sectioning commands to increment their
-%  respective counters.
-%
-\let\Hpart=\part
-%\let\Hchapter=\chapter
-\let\Hsection=\section
-\let\Hsubsection=\subsection
-\let\Hsubsubsection=\subsubsection
-\let\Hparagraph=\paragraph
-\let\Hsubparagraph=\subparagraph
-\let\Hsubsubparagraph=\subsubparagraph
-
-\ifx\c@subparagraph\undefined
- \newcounter{lsubsubparagraph}[lsubparagraph]
-\else
- \newcounter{lsubsubparagraph}[subparagraph]
-\fi
-
-%
-%  The following definitions are specific to LaTeX2e:
-%  (They must be commented out for LaTeX 2.09)
-%
-\renewcommand{\part}{\@ifstar{\stepcounter{lpart}%
-  \bgroup\def\tmp{*}\H@part}{\bgroup\def\tmp{}\H@part}}
-\newcommand{\H@part}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hpart\tmp}
-
-\ifx\Hchapter\relax\else
- \def\chapter{\resetsections \@ifstar{\stepcounter{lchapter}%
-   \bgroup\def\tmp{*}\H@chapter}{\bgroup\def\tmp{}\H@chapter}}\fi
-\newcommand{\H@chapter}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hchapter\tmp}
-
-\renewcommand{\section}{\resetsubsections
- \@ifstar{\stepcounter{lsection}\bgroup\def\tmp{*}%
-   \H@section}{\bgroup\def\tmp{}\H@section}}
-\newcommand{\H@section}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsection\tmp}
-
-\renewcommand{\subsection}{\resetsubsubsections
- \@ifstar{\stepcounter{lsubsection}\bgroup\def\tmp{*}%
-   \H@subsection}{\bgroup\def\tmp{}\H@subsection}}
-\newcommand{\H@subsection}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubsection\tmp}
-
-\renewcommand{\subsubsection}{\resetparagraphs
- \@ifstar{\stepcounter{lsubsubsection}\bgroup\def\tmp{*}%
-   \H@subsubsection}{\bgroup\def\tmp{}\H@subsubsection}}
-\newcommand{\H@subsubsection}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubsubsection\tmp}
-
-\renewcommand{\paragraph}{\resetsubparagraphs
- \@ifstar{\stepcounter{lparagraph}\bgroup\def\tmp{*}%
-   \H@paragraph}{\bgroup\def\tmp{}\H@paragraph}}
-\newcommand\H@paragraph[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hparagraph\tmp}
-
-\renewcommand{\subparagraph}{\resetsubsubparagraphs
- \@ifstar{\stepcounter{lsubparagraph}\bgroup\def\tmp{*}%
-   \H@subparagraph}{\bgroup\def\tmp{}\H@subparagraph}}
-\newcommand\H@subparagraph[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubparagraph\tmp}
-
-\ifx\Hsubsubparagraph\relax\else\@ifundefined{subsubparagraph}{}{%
-\def\subsubparagraph{%
- \@ifstar{\stepcounter{lsubsubparagraph}\bgroup\def\tmp{*}%
-   \H@subsubparagraph}{\bgroup\def\tmp{}\H@subsubparagraph}}}\fi
-\newcommand\H@subsubparagraph[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubsubparagraph\tmp}
-
-\def\check@align{\def\empty{}\ifx\tmp@a\empty
- \else\def\tmp@b{center}\ifx\tmp@a\tmp@b\let\tmp@a\empty
- \else\def\tmp@b{left}\ifx\tmp@a\tmp@b\let\tmp@a\empty
- \else\def\tmp@b{right}\ifx\tmp@a\tmp@b\let\tmp@a\empty
- \else\expandafter\def\expandafter\tmp@a\expandafter{\expandafter[\tmp@a]}%
- \fi\fi\fi \def\empty{}\ifx\tmp\empty\let\tmp=\tmp@a \else 
-  \expandafter\def\expandafter\tmp\expandafter{\expandafter*\tmp@a}%
- \fi\fi}
-%
-\def\resetsections{\setcounter{section}{0}\setcounter{lsection}{0}%
- \reset@dependents{section}\resetsubsections }
-\def\resetsubsections{\setcounter{subsection}{0}\setcounter{lsubsection}{0}%
- \reset@dependents{subsection}\resetsubsubsections }
-\def\resetsubsubsections{\setcounter{subsubsection}{0}\setcounter{lsubsubsection}{0}%
- \reset@dependents{subsubsection}\resetparagraphs }
-%
-\def\resetparagraphs{\setcounter{lparagraph}{0}\setcounter{lparagraph}{0}%
- \reset@dependents{paragraph}\resetsubparagraphs }
-\def\resetsubparagraphs{\ifx\c@subparagraph\undefined\else
-  \setcounter{subparagraph}{0}\fi \setcounter{lsubparagraph}{0}%
- \reset@dependents{subparagraph}\resetsubsubparagraphs }
-\def\resetsubsubparagraphs{\ifx\c@subsubparagraph\undefined\else
-  \setcounter{subsubparagraph}{0}\fi \setcounter{lsubsubparagraph}{0}}
-%
-\def\reset@dependents#1{\begingroup\let \@elt \@stpelt
- \csname cl@#1\endcsname\endgroup}
-%
-%
-%  Define a helper macro to dump a single \secounter command to a file.
-%
-\newcommand{\DumpPtr}[2]{%
-\count255=\arabic{#1}\def\dummy{dummy}\def\tmp{#2}%
-\ifx\tmp\dummy\else\advance\count255 by \arabic{#2}\fi
-\immediate\write\ptrfile{%
-\noexpand\setcounter{#1}{\number\count255}}}
-
-%
-%  Define a helper macro to dump all counters to the file.
-%  The value for each counter will be the sum of the l-counter
-%      actual LaTeX section counter.
-%  Also dump an \htmlhead{section-command}{section title} command
-%      to the file.
-%
-\newwrite\ptrfile
-\def\DumpCounters#1#2#3#4{%
-\begingroup\let\protect=\noexpand
-\immediate\openout\ptrfile = #1.ptr
-\DumpPtr{part}{lpart}%
-\ifx\Hchapter\relax\else\DumpPtr{chapter}{lchapter}\fi
-\DumpPtr{section}{lsection}%
-\DumpPtr{subsection}{lsubsection}%
-\DumpPtr{subsubsection}{lsubsubsection}%
-\DumpPtr{paragraph}{lparagraph}%
-\DumpPtr{subparagraph}{lsubparagraph}%
-\DumpPtr{equation}{lequation}%
-\DumpPtr{footnote}{dummy}%
-\def\tmp{#4}\ifx\tmp\@empty
-\immediate\write\ptrfile{\noexpand\htmlhead{#2}{#3}}\else
-\immediate\write\ptrfile{\noexpand\htmlhead[#4]{#2}{#3}}\fi
-\dumpcitestatus \dumpcurrentcolor
-\immediate\closeout\ptrfile
-\endgroup }
-
-
-%% interface to natbib.sty
-
-\def\dumpcitestatus{}
-\def\loadcitestatus{\def\dumpcitestatus{%
-  \ifciteindex\immediate\write\ptrfile{\noexpand\citeindextrue}%
-  \else\immediate\write\ptrfile{\noexpand\citeindexfalse}\fi }%
-}
-\@ifpackageloaded{natbib}{\loadcitestatus}{%
- \AtBeginDocument{\@ifpackageloaded{natbib}{\loadcitestatus}{}}}
-
-
-%% interface to color.sty
-
-\def\dumpcurrentcolor{}
-\def\loadsegmentcolors{%
- \let\real@pagecolor=\pagecolor
- \let\pagecolor\segmentpagecolor
- \let\segmentcolor\color
- \ifx\current@page@color\undefined \def\current@page@color{{}}\fi
- \def\dumpcurrentcolor{\bgroup\def\@empty@{{}}%
-   \expandafter\def\expandafter\tmp\space####1@{\def\thiscol{####1}}%
-  \ifx\current@color\@empty@\def\thiscol{}\else
-   \expandafter\tmp\current@color @\fi
-  \immediate\write\ptrfile{\noexpand\segmentcolor{\thiscol}}%
-  \ifx\current@page@color\@empty@\def\thiscol{}\else
-   \expandafter\tmp\current@page@color @\fi
-  \immediate\write\ptrfile{\noexpand\segmentpagecolor{\thiscol}}%
- \egroup}%
- \global\let\loadsegmentcolors=\relax
-}
-
-% These macros are needed within  images.tex  since this inputs
-% the <segment>.ptr files for a segment, so that counters are
-% colors are synchronised.
-%
-\newcommand{\segmentpagecolor}[1][]{%
- \@ifpackageloaded{color}{\loadsegmentcolors\bgroup
-  \def\tmp{#1}\ifx\@empty\tmp\def\next{[]}\else\def\next{[#1]}\fi
-  \expandafter\segmentpagecolor@\next}%
- {\@gobble}}
-\def\segmentpagecolor@[#1]#2{\def\tmp{#1}\def\tmpB{#2}%
- \ifx\tmpB\@empty\let\next=\egroup
- \else
-  \let\realendgroup=\endgroup
-  \def\endgroup{\edef\next{\noexpand\realendgroup
-   \def\noexpand\current@page@color{\current@color}}\next}%
-  \ifx\tmp\@empty\real@pagecolor{#2}\def\model{}%
-  \else\real@pagecolor[#1]{#2}\def\model{[#1]}%
-  \fi
-  \edef\next{\egroup\def\noexpand\current@page@color{\current@page@color}%
-  \noexpand\real@pagecolor\model{#2}}%
- \fi\next}
-%
-\newcommand{\segmentcolor}[2][named]{\@ifpackageloaded{color}%
- {\loadsegmentcolors\segmentcolor[#1]{#2}}{}}
-
-\@ifpackageloaded{color}{\loadsegmentcolors}{\let\real@pagecolor=\@gobble
- \AtBeginDocument{\@ifpackageloaded{color}{\loadsegmentcolors}{}}}
-
-
-%  Define the \segment[align]{file}{section-command}{section-title} command,
-%  and its helper macros.  This command does four things:
-%       1)  Begins a new LaTeX section;
-%       2)  Writes a list of section counters to file.ptr, each
-%           of which represents the sum of the LaTeX section
-%           counters, and the l-counters, defined above;
-%       3)  Write an \htmlhead{section-title} command to file.ptr;
-%       4)  Inputs file.tex.
-
-\def\segment{\@ifstar{\@@htmls}{\@@html}}
-\def\endsegment{}
-\newcommand{\@@htmls}[1][]{\@@htmlsx{#1}}
-\newcommand{\@@html}[1][]{\@@htmlx{#1}}
-\def\@@htmlsx#1#2#3#4{\csname #3\endcsname* {#4}%
-                   \DumpCounters{#2}{#3*}{#4}{#1}\input{#2}}
-\def\@@htmlx#1#2#3#4{\csname #3\endcsname {#4}%
-                   \DumpCounters{#2}{#3}{#4}{#1}\input{#2}}
-
-\makeatother
-\endinput
-
-
-% Modifications:
-%
-% (The listing of Initiales see Changes)
-
-% $Log: html.sty,v $
-% Revision 1.23  1998/02/26 10:32:24  latex2html
-%  --  use \providecommand for  \latextohtml
-%  --  implemented \HTMLcode to do what \HTML did previously
-% 	\HTML still works, unless already defined by another package
-%  --  fixed problems remaining with undefined \chapter
-%  --  defined \endsegment
-%
-% Revision 1.22  1997/12/05 11:38:18  RRM
-%  --  implemented an optional argument to \begin for style-sheet info.
-%  --  modified use of an optional argument with sectioning-commands
-%
-% Revision 1.21  1997/11/05 10:28:56  RRM
-%  --  replaced redefinition of \@htmlrule with \htmlrulestar
-%
-% Revision 1.20  1997/10/28 02:15:58  RRM
-%  --  altered the way some special html-macros are defined, so that
-% 	star-variants are explicitly defined for LaTeX
-% 	 -- it is possible for these to occur within  images.tex
-% 	e.g. \htmlinfostar \htmlrulestar \tableofchildlinksstar
-%
-% Revision 1.19  1997/10/11 05:47:48  RRM
-%  --  allow the dummy {tex2html_nowrap} environment in LaTeX
-% 	use it to make its contents be evaluated in environment order
-%
-% Revision 1.18  1997/10/04 06:56:50  RRM
-%  --  uses Robin Fairbairns' code for ignored environments,
-%      replacing the previous  comment.sty  stuff.
-%  --  extensions to the \tableofchildlinks command
-%  --  extensions to the \htmlinfo command
-%
-% Revision 1.17  1997/07/08 11:23:39  RRM
-%     include value of footnote counter in .ptr files for segments
-%
-% Revision 1.16  1997/07/03 08:56:34  RRM
-%     use \textup  within the \latextohtml macro
-%
-% Revision 1.15  1997/06/15 10:24:58  RRM
-%      new command  \htmltracenv  as environment-ordered \htmltracing
-%
-% Revision 1.14  1997/06/06 10:30:37  RRM
-%  -   new command:  \htmlborder  puts environment into a <TABLE> cell
-%      with a border of specified width, + other attributes.
-%  -   new commands: \HTML  for setting arbitrary HTML tags, with attributes
-%                    \HTMLset  for setting Perl variables, while processing
-%                    \HTMLsetenv  same as \HTMLset , but it gets processed
-%                                 as if it were an environment.
-%  -   new command:  \latextohtml  --- to set the LaTeX2HTML name/logo
-%  -   fixed some remaining problems with \segmentcolor & \segmentpagecolor
-%
-% Revision 1.13  1997/05/19 13:55:46  RRM
-%      alterations and extra options to  \hypercite
-%
-% Revision 1.12  1997/05/09 12:28:39  RRM
-%  -  Added the optional argument to \htmlhead, also in \DumpCounters
-%  -  Implemented \HTMLset as a no-op in LaTeX.
-%  -  Fixed a bug in accessing the page@color settings.
-%
-% Revision 1.11  1997/03/26 09:32:40  RRM
-%  -  Implements LaTeX versions of  \externalcite  and  \hypercite  commands.
-%     Thanks to  Uffe Engberg  and  Stephen Simpson  for the suggestions.
-%
-% Revision 1.10  1997/03/06 07:37:58  RRM
-% Added the  \htmltracing  command, for altering  $VERBOSITY .
-%
-% Revision 1.9  1997/02/17 02:26:26  RRM
-% - changes to counter handling (RRM)
-% - shuffled around some definitions
-% - changed \htmlrule of 209 mode
-%
-% Revision 1.8  1997/01/26 09:04:12  RRM
-% RRM: added optional argument to sectioning commands
-%      \htmlbase  sets the <BASE HREF=...> tag
-%      \htmlinfo  and  \htmlinfo* allow the document info to be positioned
-%
-% Revision 1.7  1997/01/03 12:15:44  L2HADMIN
-% % - fixes to the  color  and  natbib  interfaces
-% % - extended usage of  \hyperref, via an optional argument.
-% % - extended use of comment environments to allow shifting expansions
-% %     e.g. within \multicolumn  (`bug' reported by Luc De Coninck).
-% % - allow optional argument to: \htmlimage, \htmlhead,
-% %     \htmladdimg, \htmladdnormallink, \htmladdnormallinkfoot
-% % - added new commands: \htmlbody, \htmlnohead
-% % - added new command: \tableofchildlinks
-%
-% Revision 1.6  1996/12/25 03:04:54  JCL
-% added patches to segment feature from Martin Wilck
-%
-% Revision 1.5  1996/12/23 01:48:06  JCL
-%  o introduced the environment makeimage, which may be used to force
-%    LaTeX2HTML to generate an image from the contents.
-%    There's no magic; all we have now is a defined empty environment
-%    which LaTeX2HTML will not recognize and thus pass to images.tex.
-%  o provided \protect to the \htmlrule commands to allow for usage
-%    within captions.
-%
-% Revision 1.4  1996/12/21 19:59:22  JCL
-% - shuffled some entries
-% - added \latexhtml command
-%
-% Revision 1.3  1996/12/21 12:22:59  JCL
-% removed duplicate \htmlrule, changed \htmlrule back not to create a \hrule
-% to allow occurrence in caption
-%
-% Revision 1.2  1996/12/20 04:03:41  JCL
-% changed occurrence of \makeatletter, \makeatother
-% added new \htmlrule command both for the LaTeX2.09 and LaTeX2e
-% sections
-%
-%
-% jcl 30-SEP-96
-%  - Stuck the commands commonly used by both LaTeX versions to the top,
-%    added a check which stops input or reads further if the document
-%    makes use of LaTeX2e.
-%  - Introduced rrm's \dumpcurrentcolor and \bodytext
-% hws 31-JAN-96 - Added support for document segmentation
-% hws 10-OCT-95 - Added \htmlrule command
-% jz 22-APR-94 - Added support for htmlref
-% nd  - Created
diff --git a/docs/pythfilter.py b/docs/pythfilter.py
deleted file mode 100644
index 3054f7c..0000000
--- a/docs/pythfilter.py
+++ /dev/null
@@ -1,658 +0,0 @@
-#!/usr/bin/env python
-
-# pythfilter.py v1.5.5, written by Matthias Baas (baas@ira.uka.de)
-
-# Doxygen filter which can be used to document Python source code.
-# Classes (incl. methods) and functions can be documented.
-# Every comment that begins with ## is literally turned into a
-# Doxygen comment. Consecutive comment lines are turned into
-# comment blocks (-> /** ... */).
-# Everything is placed inside a namespace with the same name as
-# the source file.
-
-# Conversions:
-# ============
-# ##-blocks                  ->  /** ... */
-# "class name(base): ..."    ->  "class name : public base {...}"
-# "def name(params): ..."    ->  "name(params) {...}"
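As an aside, the "def" rule in the table above can be sketched as a one-line rewrite. This is only an illustration, not part of the filter itself; the regex and the translate_def() name are invented for the example (the real script drives Python's tokenize module instead of matching lines):

```python
import re

# Hypothetical, simplified version of the def -> C++ mapping shown above.
DEF_RE = re.compile(r'^def\s+(\w+)\((.*)\):\s*$')

def translate_def(line):
    """Turn 'def name(params):' into a C++-style declaration stub."""
    m = DEF_RE.match(line)
    if m is None:
        return line  # not a def line; leave untouched
    name, params = m.group(1), m.group(2)
    return "%s(%s) {...}" % (name, params)
```

A line such as "def frob(a, b):" comes out as "frob(a, b) {...}", matching the table.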
-
-# Changelog:
-# 21.01.2003: Raw (r"") or unicode (u"") doc string will now be properly
-#             handled. (thanks to Richard Laager for the patch)
-# 22.12.2003: Fixed a bug where no function names would be output for "def"
-#             blocks that were not in a class.
-#             (thanks to Richard Laager for the patch)
-# 12.12.2003: Implemented code to handle static and class methods with
-#             this logic: Methods with "self" as the first argument are
-#             non-static. Methods with "cls" are Python class methods,
-#             which translate into static methods for Doxygen. Other
-#             methods are assumed to be static methods. As should be
-#             obvious, this logic doesn't take into account whether the
-#             method is actually set up as a classmethod() or a
-#             staticmethod(), only whether it follows the normal conventions.
-#             (thanks to Richard Laager for the patch)
-# 11.12.2003: Corrected #includes to use os.path.sep instead of ".". Corrected
-#             namespace code to use "::" instead of ".".
-#             (thanks to Richard Laager for the patch)
-# 11.12.2003: Methods beginning with two underscores that end with
-#             something other than two underscores are considered private
-#             and are handled accordingly.
-#             (thanks to Richard Laager for the patch)
-# 03.12.2003: The first parameter of class methods (self) is removed from
-#             the documentation.
-# 03.11.2003: The module docstring will be used as namespace documentation
-#             (thanks to Joe Bronkema for the patch)
-# 08.07.2003: Namespaces get a default documentation so that the namespace
-#             and its contents will show up in the generated documentation.
-# 05.02.2003: Directories will be deleted during synchronization.
-# 31.01.2003: -f option & filtering entire directory trees.
-# 10.08.2002: In base classes the '.' will be replaced by '::'
-# 18.07.2002: * and ** will be translated into arguments
-# 18.07.2002: Argument lists may contain default values using constructors.
-# 18.06.2002: Support for ## public:
-# 21.01.2002: from ... import will be translated to "using namespace ...;"
-#             TODO: "from ... import *" vs "from ... import names"
-#             TODO: Using normal imports: name.name -> name::name
-# 20.01.2002: #includes will be placed in front of the namespace
-
-######################################################################
-
-# The program is written as a state machine with the following states:
-#
-# - OUTSIDE               The current position is outside any comment,
-#                         class definition or function.
-#
-# - BUILD_COMMENT         Begins with first "##".
-#                         Ends with the first token that is not "##"
-#                         at the same column as before.
-#
-# - BUILD_CLASS_DECL      Begins with "class".
-#                         Ends with ":"
-# - BUILD_CLASS_BODY      Begins just after BUILD_CLASS_DECL.
-#                         The first following token (which is not a comment)
-#                         determines the indentation depth.
-#                         Ends with a token that has a smaller indentation.
-#
-# - BUILD_DEF_DECL        Begins with "def".
-#                         Ends with ":".
-# - BUILD_DEF_BODY        Begins just after BUILD_DEF_DECL.
-#                         The first following token (which is not a comment)
-#                         determines the indentation depth.
-#                         Ends with a token that has a smaller indentation.
-
-import getopt
-import glob
-import os.path
-import re
-import shutil
-import string
-import sys
-import token
-import tokenize
-
-from stat import *
-
-OUTSIDE          = 0
-BUILD_COMMENT    = 1
-BUILD_CLASS_DECL = 2
-BUILD_CLASS_BODY = 3
-BUILD_DEF_DECL   = 4
-BUILD_DEF_BODY   = 5
-IMPORT           = 6
-IMPORT_OP        = 7
-IMPORT_APPEND    = 8
-
-# Output file stream
-outfile = sys.stdout
-
-# Output buffer
-outbuffer = []
-
-out_row = 1
-out_col = 0
-
-# Variables used by rec_name_n_param()
-name         = ""
-param        = ""
-doc_string   = ""
-record_state = 0
-bracket_counter = 0
-
-# Tuple: (row,column)
-class_spos  = (0,0)
-def_spos    = (0,0)
-import_spos = (0,0)
-
-# Which import was used? ("import" or "from")
-import_token = ""
-
-# Comment block buffer
-comment_block = []
-comment_finished = 0
-
-# Imported modules
-modules = []
-
-# Program state
-stateStack = [OUTSIDE]
-
-# Keep track of whether module has a docstring
-module_has_docstring = False
-
-# Keep track of member protection
-protection_level = "public"
-private_member = False
-
-# Keep track of the module namespace
-namespace = ""
-
-######################################################################
-# Output string s. '\n' may only be at the end of the string (not
-# somewhere in the middle).
-#
-# In: s    - String
-#     spos - Startpos
-######################################################################
-def output(s,spos, immediate=0):
-    global outbuffer, out_row, out_col, outfile
-
-    os = string.rjust(s,spos[1]-out_col+len(s))
-
-    if immediate:
-        outfile.write(os)
-    else:
-        outbuffer.append(os)
-
-    assert -1 == string.find(s[0:-2], "\n"), s
-
-    if (s[-1:]=="\n"):
-        out_row = out_row+1
-        out_col = 0
-    else:
-        out_col = spos[1]+len(s)
-
-
-######################################################################
-# Records a name and parameters. The name is either a class name or
-# a function name; the parameter is accordingly either the base class
-# or the function parameters.
-# The name is stored in the global variable "name", the parameters
-# in "param".
-# The variable "record_state" holds the current state of this internal
-# state machine.
-# The recording is started by calling start_recording().
-#
-# In: type, tok
-######################################################################
-def rec_name_n_param(type, tok):
-    global record_state,name,param,doc_string,bracket_counter
-    s = record_state
-    # State 0: Do nothing.
-    if   (s==0):
-         return
-    # State 1: Remember name.
-    elif (s==1):
-        name = tok
-        record_state = 2
-    # State 2: Wait for opening bracket or colon
-    elif (s==2):
-        if (tok=='('):
-            bracket_counter = 1
-            record_state=3
-        if (tok==':'): record_state=4
-    # State 3: Store parameter (or base class) and wait for an ending bracket
-    elif (s==3):
-        if (tok=='*' or tok=='**'):
-            tok=''
-        if (tok=='('):
-            bracket_counter = bracket_counter+1
-        if (tok==')'):
-            bracket_counter = bracket_counter-1
-        if bracket_counter==0:
-            record_state=4
-        else:
-            param=param+tok
-    # State 4: Look for doc string
-    elif (s==4):
-        if (type==token.NEWLINE or type==token.INDENT or type==token.SLASHEQUAL):
-            return
-        elif (tok==":"):
-            return
-        elif (type==token.STRING):
-            while tok[:1]=='r' or tok[:1]=='u':
-                tok=tok[1:]
-            while tok[:1]=='"':
-                tok=tok[1:]
-            while tok[-1:]=='"':
-                tok=tok[:-1]
-            doc_string=tok
-        record_state=0
-
-######################################################################
-# Starts the recording of a name & param part.
-# The function rec_name_n_param() has to be fed with tokens. Once
-# enough tokens have been fed, the name and parameters can be found
-# in the global variables "name" and "param".
-######################################################################
-def start_recording():
-    global record_state,param,name, doc_string
-    record_state=1
-    name=""
-    param=""
-    doc_string=""
-
-######################################################################
-# Test if recording is finished
-######################################################################
-def is_recording_finished():
-    global record_state
-    return record_state==0
-
-######################################################################
-## Gather comment block
-######################################################################
-def gather_comment(type,tok,spos):
-    global comment_block,comment_finished
-    if (type!=tokenize.COMMENT):
-        comment_finished = 1
-    else:
-        # Output old comment block if a new one is started.
-        if (comment_finished):
-            print_comment(spos)
-            comment_finished=0
-        if (tok[0:2]=="##" and tok[0:3]!="###"):
-            append_comment_lines(tok[2:])
-
-######################################################################
-## Output comment block and empty buffer.
-######################################################################
-def print_comment(spos):
-    global comment_block,comment_finished
-    if (comment_block!=[]):
-        output("/** ",spos)
-        for c in comment_block:
-            output(c,spos)
-        output("*/\n",spos)
-    comment_block    = []
-    comment_finished = 0
-
-######################################################################
-def set_state(s):
-    global stateStack
-    stateStack[len(stateStack)-1]=s
-
-######################################################################
-def get_state():
-    global stateStack
-    return stateStack[len(stateStack)-1]
-
-######################################################################
-def push_state(s):
-    global stateStack
-    stateStack.append(s)
-
-######################################################################
-def pop_state():
-    global stateStack
-    stateStack.pop()
-
-
-######################################################################
-def tok_eater(type, tok, spos, epos, line):
-    global stateStack,name,param,class_spos,def_spos,import_spos
-    global doc_string, modules, import_token, module_has_docstring
-    global protection_level, private_member
-    global out_row
-
-    while out_row + 1 < spos[0]:
-        output("\n", (0, 0))
-
-    rec_name_n_param(type,tok)
-    if (string.replace(string.strip(tok)," ","")=="##private:"):
-         protection_level = "private"
-         output("private:\n",spos)
-    elif (string.replace(string.strip(tok)," ","")=="##protected:"):
-         protection_level = "protected"
-         output("protected:\n",spos)
-    elif (string.replace(string.strip(tok)," ","")=="##public:"):
-         protection_level = "public"
-         output("public:\n",spos)
-    else:
-         gather_comment(type,tok,spos)
-
-    state = get_state()
-
-#    sys.stderr.write("%d: %s\n"%(state, tok))
-
-    # OUTSIDE
-    if   (state==OUTSIDE):
-        if  (tok=="class"):
-            start_recording()
-            class_spos = spos
-            push_state(BUILD_CLASS_DECL)
-        elif (tok=="def"):
-            start_recording()
-            def_spos = spos
-            push_state(BUILD_DEF_DECL)
-        elif (tok=="import") or (tok=="from"):
-            import_token = tok
-            import_spos = spos
-            modules     = []
-            push_state(IMPORT)
-        elif (spos[1] == 0 and tok[:3] == '"""'):
-            # Capture module docstring as namespace documentation
-            module_has_docstring = True
-            append_comment_lines("\\namespace %s\n" % namespace)
-            append_comment_lines(tok[3:-3])
-            print_comment(spos)
-
-    # IMPORT
-    elif (state==IMPORT):
-        if (type==token.NAME):
-            modules.append(tok)
-            set_state(IMPORT_OP)
-    # IMPORT_OP
-    elif (state==IMPORT_OP):
-        if (tok=="."):
-            set_state(IMPORT_APPEND)
-        elif (tok==","):
-            set_state(IMPORT)
-        else:
-            for m in modules:
-                output('#include "'+m.replace('.',os.path.sep)+'.py"\n', import_spos, immediate=1)
-                if import_token=="from":
-                    output('using namespace '+m.replace('.', '::')+';\n', import_spos)
-            pop_state()
-    # IMPORT_APPEND
-    elif (state==IMPORT_APPEND):
-        if (type==token.NAME):
-            modules[len(modules)-1]+="."+tok
-            set_state(IMPORT_OP)
-    # BUILD_CLASS_DECL
-    elif (state==BUILD_CLASS_DECL):
-        if (is_recording_finished()):
-            s = "class "+name
-            if (param!=""): s = s+" : public "+param.replace('.','::')
-            if (doc_string!=""):
-                append_comment_lines(doc_string)
-            print_comment(class_spos)
-            output(s+"\n",class_spos)
-            output("{\n",(class_spos[0]+1,class_spos[1]))
-            protection_level = "public"
-            output("  public:\n",(class_spos[0]+2,class_spos[1]))
-            set_state(BUILD_CLASS_BODY)
-    # BUILD_CLASS_BODY
-    elif (state==BUILD_CLASS_BODY):
-        if (type!=token.INDENT and type!=token.NEWLINE and type!=40 and
-            type!=tokenize.NL and type!=tokenize.COMMENT and
-            (spos[1]<=class_spos[1])):
-            output("}; // end of class\n",(out_row+1,class_spos[1]))
-            pop_state()
-        elif (tok=="def"):
-            start_recording()
-            def_spos = spos
-            push_state(BUILD_DEF_DECL)
-    # BUILD_DEF_DECL
-    elif (state==BUILD_DEF_DECL):
-        if (is_recording_finished()):
-            param = param.replace("\n", " ")
-            param = param.replace("=", " = ")
-            params = param.split(",")
-            if BUILD_CLASS_BODY in stateStack:
-                if len(name) > 1 \
-                   and name[0:2] == '__' \
-                   and name[len(name)-2:len(name)] != '__' \
-                   and protection_level != 'private':
-                       private_member = True
-                       output("  private:\n",(def_spos[0]+2,def_spos[1]))
-
-            if (doc_string != ""):
-                append_comment_lines(doc_string)
-
-            print_comment(def_spos)
-
-            output_function_decl(name, params)
-#       output("{\n",(def_spos[0]+1,def_spos[1]))
-            set_state(BUILD_DEF_BODY)
-    # BUILD_DEF_BODY
-    elif (state==BUILD_DEF_BODY):
-        if (type!=token.INDENT and type!=token.NEWLINE \
-            and type!=40 and type!=tokenize.NL \
-            and (spos[1]<=def_spos[1])):
-#            output("} // end of method/function\n",(out_row+1,def_spos[1]))
-            if private_member and protection_level != 'private':
-                private_member = False
-                output("  " + protection_level + ":\n",(def_spos[0]+2,def_spos[1]))
-            pop_state()
-#       else:
-#            output(tok,spos)
-
-
-def output_function_decl(name, params):
-    global def_spos
-
-    # If we are documenting a class method, remove the 'self' parameter.
-    if params[0] == 'self':
-        preamble = ''
-        params = params[1:]
-    else:
-        preamble = 'static '
-        if params[0] == 'cls':
-            params = params[1:]
-
-    param_string = string.join(params, ", Type ")
-
-    if param_string == '':
-        param_string = '(' + param_string + ');\n'
-    else:
-        param_string = '(Type ' + param_string + ');\n'
-
-    output(preamble, def_spos)
-    output(name, def_spos)
-    output(param_string, def_spos)
-
-
-def append_comment_lines(lines):
-    # Split the given text into lines and append each to the comment block.
-    map(append_comment_line, lines.split('\n'))
-
-paramRE = re.compile(r'(@param \w+):')
-
-def append_comment_line(line):
-    global paramRE
-    
-    comment_block.append(paramRE.sub(r'\1', line) + '\n')
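The paramRE substitution above exists because Doxygen expects "@param name description" with no colon after the name. A self-contained sketch of that rewrite (strip_param_colon() is a name invented for this example):

```python
import re

# Doxygen wants "@param name desc"; epydoc-style comments write
# "@param name: desc", so the trailing colon is dropped.
paramRE = re.compile(r'(@param \w+):')

def strip_param_colon(line):
    return paramRE.sub(r'\1', line)
```

So "## @param x: the value" becomes "## @param x the value"; lines without an @param tag pass through unchanged.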
-
-def dump(filename):
-    f = open(filename)
-    r = f.readlines()
-    for s in r:
-        sys.stdout.write(s)
-
-def filter(filename):
-    global name, module_has_docstring, source_root
-
-    path,name = os.path.split(filename)
-    root,ext  = os.path.splitext(name)
-
-    if source_root and path.find(source_root) == 0:
-        path = path[len(source_root):]
-
-        if path[0] == os.sep:
-            path = path[1:]
-
-        ns = path.split(os.sep)
-    else:
-        ns = []
-
-    ns.append(root)
-
-    for n in ns:
-        output("namespace " + n + " {\n",(0,0))
-
-    # set module name for tok_eater to use if there's a module doc string
-    name = root
-
-#    sys.stderr.write('Filtering "'+filename+'"...')
-    f = open(filename)
-    tokenize.tokenize(f.readline, tok_eater)
-    f.close()
-    print_comment((0,0))
-
-    output("\n",(0,0))
-    
-    for n in ns:
-        output("}  // end of namespace\n",(0,0))
-
-    if not module_has_docstring:
-        # Put in default namespace documentation
-        output('/** \\namespace '+root+' \n',(0,0))
-        output('    \\brief Module "%s" */\n'%(root),(0,0))
-
-    for s in outbuffer:
-        outfile.write(s)
-
-
-def filterFile(filename, out=sys.stdout):
-    global outfile
-
-    outfile = out
-
-    try:
-        root,ext  = os.path.splitext(filename)
-
-        if ext==".py":
-            filter(filename)
-        else:
-            dump(filename)
-
-#        sys.stderr.write("OK\n")
-    except IOError,e:
-        sys.stderr.write(e[1]+"\n")
-
-
-######################################################################
-
-# preparePath
-def preparePath(path):
-    """Prepare a path.
-
-    Checks if the path exists and creates it if it does not exist.
-    """
-    if not os.path.exists(path):
-        parent = os.path.dirname(path)
-        if parent!="":
-            preparePath(parent)
-        os.mkdir(path)
-
-# isNewer
-def isNewer(file1,file2):
-    """Check if file1 is newer than file2.
-
-    file1 must be an existing file.
-    """
-    if not os.path.exists(file2):
-        return True
-    return os.stat(file1)[ST_MTIME]>os.stat(file2)[ST_MTIME]
-
-# convert
-def convert(srcpath, destpath):
-    """Convert a Python source tree into a C++ stub tree.
-
-    All *.py files in srcpath (including sub-directories) are filtered
-    and written to destpath. If destpath exists, only the files
-    that have been modified are filtered again. Files that were deleted
-    from srcpath are also deleted in destpath if they are still present.
-    The function returns the number of processed *.py files.
-    """
-    count=0
-    sp = os.path.join(srcpath,"*")
-    sfiles = glob.glob(sp)
-    dp = os.path.join(destpath,"*")
-    dfiles = glob.glob(dp)
-    leftovers={}
-    for df in dfiles:
-        leftovers[os.path.basename(df)]=1
-
-    for srcfile in sfiles:
-        basename = os.path.basename(srcfile)
-        if basename in leftovers:
-            del leftovers[basename]
-
-        # Is it a subdirectory?
-        if os.path.isdir(srcfile):
-            sdir = os.path.join(srcpath,basename)
-            ddir = os.path.join(destpath,basename)
-            count+=convert(sdir, ddir)
-            continue
-        # Check the extension (only *.py will be converted)
-        root, ext = os.path.splitext(srcfile)
-        if ext.lower()!=".py":
-            continue
-
-        destfile = os.path.join(destpath,basename)
-        if destfile==srcfile:
-            print "WARNING: Input and output names are identical!"
-            sys.exit(1)
-
-        count+=1
-#        sys.stdout.write("%s\015"%(srcfile))
-
-        if isNewer(srcfile, destfile):
-            preparePath(os.path.dirname(destfile))
-#            out=open(destfile,"w")
-#            filterFile(srcfile, out)
-#            out.close()
-            os.system("python %s -f %s>%s"%(sys.argv[0],srcfile,destfile))
-
-    # Delete obsolete files in destpath
-    for df in leftovers:
-        dname=os.path.join(destpath,df)
-        if os.path.isdir(dname):
-            try:
-                shutil.rmtree(dname)
-            except:
-                print "Can't remove obsolete directory '%s'"%dname
-        else:
-            try:
-                os.remove(dname)
-            except:
-                print "Can't remove obsolete file '%s'"%dname
-
-    return count
-
-
-######################################################################
-######################################################################
-######################################################################
-
-filter_file = False
-source_root = None
-
-try:
-    opts, args = getopt.getopt(sys.argv[1:], "hfr:", ["help"])
-except getopt.GetoptError,e:
-    print e
-    sys.exit(1)
-
-for o,a in opts:
-    if o=="-f":
-        filter_file = True
-
-    if o=="-r":
-        source_root = os.path.abspath(a)
-
-if filter_file:
-    # Filter the specified file and print the result to stdout
-    filename = string.join(args)
-    filterFile(os.path.abspath(filename))
-else:
-
-    if len(args)!=2:
-        sys.stderr.write("%s options input output\n"%(os.path.basename(sys.argv[0])))
-        sys.exit(1)
-
-    # Filter an entire Python source tree
-    print '"%s" -> "%s"\n'%(args[0],args[1])
-    c=convert(args[0],args[1])
-    print "%d files"%(c)
-
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:55:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzKr-0007kC-Ow; Fri, 07 Dec 2012 14:55:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgzKp-0007jr-I1
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:55:28 +0000
Received: from [85.158.139.211:57398] by server-11.bemta-5.messagelabs.com id
	77/C4-31624-E5302C05; Fri, 07 Dec 2012 14:55:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1354892123!19415719!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14579 invoked from network); 7 Dec 2012 14:55:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:55:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47003189"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 14:55:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 09:55:11 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgzKY-0004T4-Hh;
	Fri, 07 Dec 2012 14:55:10 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 14:55:09 +0000
Message-ID: <1354892110-31108-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
References: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] docs: drop doxygen stuff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In the 300+ page PDF this produces, I couldn't see anything that
wasn't autogenerated doxygen boilerplate.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 docs/Docs.mk       |    1 -
 docs/Doxyfile      | 1218 ----------------------------------------------------
 docs/Doxyfilter    |   16 -
 docs/Makefile      |   13 -
 docs/html.sty      |  887 --------------------------------------
 docs/pythfilter.py |  658 ----------------------------
 6 files changed, 0 insertions(+), 2793 deletions(-)
 delete mode 100644 docs/Doxyfile
 delete mode 100644 docs/Doxyfilter
 delete mode 100644 docs/html.sty
 delete mode 100644 docs/pythfilter.py

diff --git a/docs/Docs.mk b/docs/Docs.mk
index dcc8a21..db3c19d 100644
--- a/docs/Docs.mk
+++ b/docs/Docs.mk
@@ -1,6 +1,5 @@
 FIG2DEV		:= fig2dev
 LATEX2HTML	:= latex2html
-DOXYGEN		:= doxygen
 POD2MAN		:= pod2man
 POD2HTML	:= pod2html
 POD2TEXT	:= pod2text
diff --git a/docs/Doxyfile b/docs/Doxyfile
deleted file mode 100644
index 8ac4451..0000000
--- a/docs/Doxyfile
+++ /dev/null
@@ -1,1218 +0,0 @@
-# Doxyfile 1.4.2
-
-# This file describes the settings to be used by the documentation system
-# doxygen (www.doxygen.org) for a project
-#
-# All text after a hash (#) is considered a comment and will be ignored
-# The format is:
-#       TAG = value [value, ...]
-# For lists, items can also be appended using:
-#       TAG += value [value, ...]
-# Values that contain spaces should be placed between quotes (" ")
-
-#---------------------------------------------------------------------------
-# Project related configuration options
-#---------------------------------------------------------------------------
-
-# The PROJECT_NAME tag is a single word (or a sequence of words surrounded 
-# by quotes) that should identify the project.
-
-PROJECT_NAME           = Xen Python Tools
-
-# The PROJECT_NUMBER tag can be used to enter a project or revision number. 
-# This could be handy for archiving the generated documentation or 
-# if some version control system is used.
-
-PROJECT_NUMBER         = 
-
-# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) 
-# base path where the generated documentation will be put. 
-# If a relative path is entered, it will be relative to the location 
-# where doxygen was started. If left blank the current directory will be used.
-
-OUTPUT_DIRECTORY       = api/tools/python
-
-# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 
-# 4096 sub-directories (in 2 levels) under the output directory of each output 
-# format and will distribute the generated files over these directories. 
-# Enabling this option can be useful when feeding doxygen a huge amount of 
-# source files, where putting all generated files in the same directory would 
-# otherwise cause performance problems for the file system.
-
-CREATE_SUBDIRS         = NO
-
-# The OUTPUT_LANGUAGE tag is used to specify the language in which all 
-# documentation generated by doxygen is written. Doxygen will use this 
-# information to generate all constant output in the proper language. 
-# The default language is English, other supported languages are: 
-# Brazilian, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, 
-# Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, 
-# Japanese-en (Japanese with English messages), Korean, Korean-en, Norwegian, 
-# Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, 
-# Swedish, and Ukrainian.
-
-OUTPUT_LANGUAGE        = English
-
-# This tag can be used to specify the encoding used in the generated output. 
-# The encoding is not always determined by the language that is chosen, 
-# but also whether or not the output is meant for Windows or non-Windows users. 
-# In case there is a difference, setting the USE_WINDOWS_ENCODING tag to YES 
-# forces the Windows encoding (this is the default for the Windows binary), 
-# whereas setting the tag to NO uses a Unix-style encoding (the default for 
-# all platforms other than Windows).
-
-USE_WINDOWS_ENCODING   = NO
-
-# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will 
-# include brief member descriptions after the members that are listed in 
-# the file and class documentation (similar to JavaDoc). 
-# Set to NO to disable this.
-
-BRIEF_MEMBER_DESC      = YES
-
-# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend 
-# the brief description of a member or function before the detailed description. 
-# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the 
-# brief descriptions will be completely suppressed.
-
-REPEAT_BRIEF           = YES
-
-# This tag implements a quasi-intelligent brief description abbreviator 
-# that is used to form the text in various listings. Each string 
-# in this list, if found as the leading text of the brief description, will be 
-# stripped from the text and the result after processing the whole list, is 
-# used as the annotated text. Otherwise, the brief description is used as-is. 
-# If left blank, the following values are used ("$name" is automatically 
-# replaced with the name of the entity): "The $name class" "The $name widget" 
-# "The $name file" "is" "provides" "specifies" "contains" 
-# "represents" "a" "an" "the"
-
-ABBREVIATE_BRIEF       = 
-
-# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then 
-# Doxygen will generate a detailed section even if there is only a brief 
-# description.
-
-ALWAYS_DETAILED_SEC    = NO
-
-# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all 
-# inherited members of a class in the documentation of that class as if those 
-# members were ordinary class members. Constructors, destructors and assignment 
-# operators of the base classes will not be shown.
-
-INLINE_INHERITED_MEMB  = NO
-
-# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full 
-# path before files name in the file list and in the header files. If set 
-# to NO the shortest path that makes the file name unique will be used.
-
-FULL_PATH_NAMES        = YES
-
-# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag 
-# can be used to strip a user-defined part of the path. Stripping is 
-# only done if one of the specified strings matches the left-hand part of 
-# the path. The tag can be used to show relative paths in the file list. 
-# If left blank the directory from which doxygen is run is used as the 
-# path to strip.
-
-STRIP_FROM_PATH        = 
-
-# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of 
-# the path mentioned in the documentation of a class, which tells 
-# the reader which header file to include in order to use a class. 
-# If left blank only the name of the header file containing the class 
-# definition is used. Otherwise one should specify the include paths that 
-# are normally passed to the compiler using the -I flag.
-
-STRIP_FROM_INC_PATH    = 
-
-# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter 
-# (but less readable) file names. This can be useful is your file systems 
-# doesn't support long names like on DOS, Mac, or CD-ROM.
-
-SHORT_NAMES            = NO
-
-# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen 
-# will interpret the first line (until the first dot) of a JavaDoc-style 
-# comment as the brief description. If set to NO, the JavaDoc 
-# comments will behave just like the Qt-style comments (thus requiring an 
-# explicit @brief command for a brief description.
-
-JAVADOC_AUTOBRIEF      = YES
-
-# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen 
-# treat a multi-line C++ special comment block (i.e. a block of //! or /// 
-# comments) as a brief description. This used to be the default behaviour. 
-# The new default is to treat a multi-line C++ comment block as a detailed 
-# description. Set this tag to YES if you prefer the old behaviour instead.
-
-MULTILINE_CPP_IS_BRIEF = NO
-
-# If the DETAILS_AT_TOP tag is set to YES then Doxygen 
-# will output the detailed description near the top, like JavaDoc.
-# If set to NO, the detailed description appears after the member 
-# documentation.
-
-DETAILS_AT_TOP         = YES
-
-# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented 
-# member inherits the documentation from any documented member that it 
-# re-implements.
-
-INHERIT_DOCS           = YES
-
-# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC 
-# tag is set to YES, then doxygen will reuse the documentation of the first 
-# member in the group (if any) for the other members of the group. By default 
-# all members of a group must be documented explicitly.
-
-DISTRIBUTE_GROUP_DOC   = NO
-
-# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce 
-# a new page for each member. If set to NO, the documentation of a member will 
-# be part of the file/class/namespace that contains it.
-
-SEPARATE_MEMBER_PAGES  = NO
-
-# The TAB_SIZE tag can be used to set the number of spaces in a tab. 
-# Doxygen uses this value to replace tabs by spaces in code fragments.
-
-TAB_SIZE               = 8
-
-# This tag can be used to specify a number of aliases that acts 
-# as commands in the documentation. An alias has the form "name=value". 
-# For example adding "sideeffect=\par Side Effects:\n" will allow you to 
-# put the command \sideeffect (or @sideeffect) in the documentation, which 
-# will result in a user-defined paragraph with heading "Side Effects:". 
-# You can put \n's in the value part of an alias to insert newlines.
-
-ALIASES                = 
-
-# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C 
-# sources only. Doxygen will then generate output that is more tailored for C. 
-# For instance, some of the names that are used will be different. The list 
-# of all members will be omitted, etc.
-
-OPTIMIZE_OUTPUT_FOR_C  = NO
-
-# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java sources 
-# only. Doxygen will then generate output that is more tailored for Java. 
-# For instance, namespaces will be presented as packages, qualified scopes 
-# will look different, etc.
-
-OPTIMIZE_OUTPUT_JAVA   = YES
-
-# Set the SUBGROUPING tag to YES (the default) to allow class member groups of 
-# the same type (for instance a group of public functions) to be put as a 
-# subgroup of that type (e.g. under the Public Functions section). Set it to 
-# NO to prevent subgrouping. Alternatively, this can be done per class using 
-# the \nosubgrouping command.
-
-SUBGROUPING            = YES
-
-#---------------------------------------------------------------------------
-# Build related configuration options
-#---------------------------------------------------------------------------
-
-# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in 
-# documentation are documented, even if no documentation was available. 
-# Private class members and static file members will be hidden unless 
-# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
-
-EXTRACT_ALL            = YES
-
-# If the EXTRACT_PRIVATE tag is set to YES all private members of a class 
-# will be included in the documentation.
-
-EXTRACT_PRIVATE        = YES
-
-# If the EXTRACT_STATIC tag is set to YES all static members of a file 
-# will be included in the documentation.
-
-EXTRACT_STATIC         = YES
-
-# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) 
-# defined locally in source files will be included in the documentation. 
-# If set to NO only classes defined in header files are included.
-
-EXTRACT_LOCAL_CLASSES  = YES
-
-# This flag is only useful for Objective-C code. When set to YES local 
-# methods, which are defined in the implementation section but not in 
-# the interface are included in the documentation. 
-# If set to NO (the default) only methods in the interface are included.
-
-EXTRACT_LOCAL_METHODS  = NO
-
-# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all 
-# undocumented members of documented classes, files or namespaces. 
-# If set to NO (the default) these members will be included in the 
-# various overviews, but no documentation section is generated. 
-# This option has no effect if EXTRACT_ALL is enabled.
-
-HIDE_UNDOC_MEMBERS     = NO
-
-# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all 
-# undocumented classes that are normally visible in the class hierarchy. 
-# If set to NO (the default) these classes will be included in the various 
-# overviews. This option has no effect if EXTRACT_ALL is enabled.
-
-HIDE_UNDOC_CLASSES     = NO
-
-# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all 
-# friend (class|struct|union) declarations. 
-# If set to NO (the default) these declarations will be included in the 
-# documentation.
-
-HIDE_FRIEND_COMPOUNDS  = NO
-
-# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any 
-# documentation blocks found inside the body of a function. 
-# If set to NO (the default) these blocks will be appended to the 
-# function's detailed documentation block.
-
-HIDE_IN_BODY_DOCS      = NO
-
-# The INTERNAL_DOCS tag determines if documentation 
-# that is typed after a \internal command is included. If the tag is set 
-# to NO (the default) then the documentation will be excluded. 
-# Set it to YES to include the internal documentation.
-
-INTERNAL_DOCS          = NO
-
-# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate 
-# file names in lower-case letters. If set to YES upper-case letters are also 
-# allowed. This is useful if you have classes or files whose names only differ 
-# in case and if your file system supports case sensitive file names. Windows 
-# and Mac users are advised to set this option to NO.
-
-CASE_SENSE_NAMES       = YES
-
-# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen 
-# will show members with their full class and namespace scopes in the 
-# documentation. If set to YES the scope will be hidden.
-
-HIDE_SCOPE_NAMES       = NO
-
-# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen 
-# will put a list of the files that are included by a file in the documentation 
-# of that file.
-
-SHOW_INCLUDE_FILES     = YES
-
-# If the INLINE_INFO tag is set to YES (the default) then a tag [inline] 
-# is inserted in the documentation for inline members.
-
-INLINE_INFO            = YES
-
-# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen 
-# will sort the (detailed) documentation of file and class members 
-# alphabetically by member name. If set to NO the members will appear in 
-# declaration order.
-
-SORT_MEMBER_DOCS       = YES
-
-# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the 
-# brief documentation of file, namespace and class members alphabetically 
-# by member name. If set to NO (the default) the members will appear in 
-# declaration order.
-
-SORT_BRIEF_DOCS        = NO
-
-# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be 
-# sorted by fully-qualified names, including namespaces. If set to 
-# NO (the default), the class list will be sorted only by class name, 
-# not including the namespace part. 
-# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
-# Note: This option applies only to the class list, not to the 
-# alphabetical list.
-
-SORT_BY_SCOPE_NAME     = NO
-
-# The GENERATE_TODOLIST tag can be used to enable (YES) or 
-# disable (NO) the todo list. This list is created by putting \todo 
-# commands in the documentation.
-
-GENERATE_TODOLIST      = YES
-
-# The GENERATE_TESTLIST tag can be used to enable (YES) or 
-# disable (NO) the test list. This list is created by putting \test 
-# commands in the documentation.
-
-GENERATE_TESTLIST      = YES
-
-# The GENERATE_BUGLIST tag can be used to enable (YES) or 
-# disable (NO) the bug list. This list is created by putting \bug 
-# commands in the documentation.
-
-GENERATE_BUGLIST       = YES
-
-# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or 
-# disable (NO) the deprecated list. This list is created by putting 
-# \deprecated commands in the documentation.
-
-GENERATE_DEPRECATEDLIST= YES
-
-# The ENABLED_SECTIONS tag can be used to enable conditional 
-# documentation sections, marked by \if sectionname ... \endif.
-
-ENABLED_SECTIONS       = 
-
-# The MAX_INITIALIZER_LINES tag determines the maximum number of lines 
-# the initial value of a variable or define consists of for it to appear in 
-# the documentation. If the initializer consists of more lines than specified 
-# here it will be hidden. Use a value of 0 to hide initializers completely. 
-# The appearance of the initializer of individual variables and defines in the 
-# documentation can be controlled using \showinitializer or \hideinitializer 
-# command in the documentation regardless of this setting.
-
-MAX_INITIALIZER_LINES  = 30
-
-# Set the SHOW_USED_FILES tag to NO to disable the list of files generated 
-# at the bottom of the documentation of classes and structs. If set to YES the 
-# list will mention the files that were used to generate the documentation.
-
-SHOW_USED_FILES        = YES
-
-# If the sources in your project are distributed over multiple directories 
-# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy 
-# in the documentation.
-
-SHOW_DIRECTORIES       = YES
-
-# The FILE_VERSION_FILTER tag can be used to specify a program or script that 
-# doxygen should invoke to get the current version for each file (typically from the 
-# version control system). Doxygen will invoke the program by executing (via 
-# popen()) the command <command> <input-file>, where <command> is the value of 
-# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file 
-# provided by doxygen. Whatever the progam writes to standard output 
-# is used as the file version. See the manual for examples.
-
-FILE_VERSION_FILTER    = 
-
-#---------------------------------------------------------------------------
-# configuration options related to warning and progress messages
-#---------------------------------------------------------------------------
-
-# The QUIET tag can be used to turn on/off the messages that are generated 
-# by doxygen. Possible values are YES and NO. If left blank NO is used.
-
-QUIET                  = YES
-
-# The WARNINGS tag can be used to turn on/off the warning messages that are 
-# generated by doxygen. Possible values are YES and NO. If left blank 
-# NO is used.
-
-WARNINGS               = YES
-
-# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings 
-# for undocumented members. If EXTRACT_ALL is set to YES then this flag will 
-# automatically be disabled.
-
-WARN_IF_UNDOCUMENTED   = YES
-
-# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for 
-# potential errors in the documentation, such as not documenting some 
-# parameters in a documented function, or documenting parameters that 
-# don't exist or using markup commands wrongly.
-
-WARN_IF_DOC_ERROR      = YES
-
-# This WARN_NO_PARAMDOC option can be abled to get warnings for 
-# functions that are documented, but have no documentation for their parameters 
-# or return value. If set to NO (the default) doxygen will only warn about 
-# wrong or incomplete parameter documentation, but not about the absence of 
-# documentation.
-
-WARN_NO_PARAMDOC       = NO
-
-# The WARN_FORMAT tag determines the format of the warning messages that 
-# doxygen can produce. The string should contain the $file, $line, and $text 
-# tags, which will be replaced by the file and line number from which the 
-# warning originated and the warning text. Optionally the format may contain 
-# $version, which will be replaced by the version of the file (if it could 
-# be obtained via FILE_VERSION_FILTER)
-
-WARN_FORMAT            = "$file:$line: $text"
-
-# The WARN_LOGFILE tag can be used to specify a file to which warning 
-# and error messages should be written. If left blank the output is written 
-# to stderr.
-
-WARN_LOGFILE           = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the input files
-#---------------------------------------------------------------------------
-
-# The INPUT tag can be used to specify the files and/or directories that contain 
-# documented source files. You may enter file names like "myfile.cpp" or 
-# directories like "/usr/src/myproject". Separate the files or directories 
-# with spaces.
-
-INPUT                  = ../tools/python/xen/
-
-# If the value of the INPUT tag contains directories, you can use the 
-# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp 
-# and *.h) to filter out the source-files in the directories. If left 
-# blank the following patterns are tested: 
-# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx 
-# *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm
-
-FILE_PATTERNS          = *.py *.c
-
-# The RECURSIVE tag can be used to turn specify whether or not subdirectories 
-# should be searched for input files as well. Possible values are YES and NO. 
-# If left blank NO is used.
-
-RECURSIVE              = YES
-
-# The EXCLUDE tag can be used to specify files and/or directories that should 
-# excluded from the INPUT source files. This way you can easily exclude a 
-# subdirectory from a directory tree whose root is specified with the INPUT tag.
-
-EXCLUDE                = 
-
-# The EXCLUDE_SYMLINKS tag can be used select whether or not files or 
-# directories that are symbolic links (a Unix filesystem feature) are excluded 
-# from the input.
-
-EXCLUDE_SYMLINKS       = NO
-
-# If the value of the INPUT tag contains directories, you can use the 
-# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude 
-# certain files from those directories.
-
-EXCLUDE_PATTERNS       = 
-
-# The EXAMPLE_PATH tag can be used to specify one or more files or 
-# directories that contain example code fragments that are included (see 
-# the \include command).
-
-EXAMPLE_PATH           = 
-
-# If the value of the EXAMPLE_PATH tag contains directories, you can use the 
-# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp 
-# and *.h) to filter out the source-files in the directories. If left 
-# blank all files are included.
-
-EXAMPLE_PATTERNS       = 
-
-# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be 
-# searched for input files to be used with the \include or \dontinclude 
-# commands irrespective of the value of the RECURSIVE tag. 
-# Possible values are YES and NO. If left blank NO is used.
-
-EXAMPLE_RECURSIVE      = NO
-
-# The IMAGE_PATH tag can be used to specify one or more files or 
-# directories that contain image that are included in the documentation (see 
-# the \image command).
-
-IMAGE_PATH             = 
-
-# The INPUT_FILTER tag can be used to specify a program that doxygen should 
-# invoke to filter for each input file. Doxygen will invoke the filter program 
-# by executing (via popen()) the command <filter> <input-file>, where <filter> 
-# is the value of the INPUT_FILTER tag, and <input-file> is the name of an 
-# input file. Doxygen will then use the output that the filter program writes 
-# to standard output.  If FILTER_PATTERNS is specified, this tag will be 
-# ignored.
-
-INPUT_FILTER           = "sh ./Doxyfilter ../tools/python"
-
-# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern 
-# basis.  Doxygen will compare the file name with each pattern and apply the 
-# filter if there is a match.  The filters are a list of the form: 
-# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further 
-# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER 
-# is applied to all files.
-
-FILTER_PATTERNS        = 
-
-# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using 
-# INPUT_FILTER) will be used to filter the input files when producing source 
-# files to browse (i.e. when SOURCE_BROWSER is set to YES).
-
-FILTER_SOURCE_FILES    = YES
-
-#---------------------------------------------------------------------------
-# configuration options related to source browsing
-#---------------------------------------------------------------------------
-
-# If the SOURCE_BROWSER tag is set to YES then a list of source files will 
-# be generated. Documented entities will be cross-referenced with these sources. 
-# Note: To get rid of all source code in the generated output, make sure also 
-# VERBATIM_HEADERS is set to NO.
-
-SOURCE_BROWSER         = NO
-
-# Setting the INLINE_SOURCES tag to YES will include the body 
-# of functions and classes directly in the documentation.
-
-INLINE_SOURCES         = NO
-
-# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct 
-# doxygen to hide any special comment blocks from generated source code 
-# fragments. Normal C and C++ comments will always remain visible.
-
-STRIP_CODE_COMMENTS    = YES
-
-# If the REFERENCED_BY_RELATION tag is set to YES (the default) 
-# then for each documented function all documented 
-# functions referencing it will be listed.
-
-REFERENCED_BY_RELATION = YES
-
-# If the REFERENCES_RELATION tag is set to YES (the default) 
-# then for each documented function all documented entities 
-# called/used by that function will be listed.
-
-REFERENCES_RELATION    = YES
-
-# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen 
-# will generate a verbatim copy of the header file for each class for 
-# which an include is specified. Set to NO to disable this.
-
-VERBATIM_HEADERS       = YES
-
-#---------------------------------------------------------------------------
-# configuration options related to the alphabetical class index
-#---------------------------------------------------------------------------
-
-# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index 
-# of all compounds will be generated. Enable this if the project 
-# contains a lot of classes, structs, unions or interfaces.
-
-ALPHABETICAL_INDEX     = NO
-
-# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then 
-# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns 
-# in which this list will be split (can be a number in the range [1..20])
-
-COLS_IN_ALPHA_INDEX    = 5
-
-# In case all classes in a project start with a common prefix, all 
-# classes will be put under the same header in the alphabetical index. 
-# The IGNORE_PREFIX tag can be used to specify one or more prefixes that 
-# should be ignored while generating the index headers.
-
-IGNORE_PREFIX          = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the HTML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_HTML tag is set to YES (the default) Doxygen will 
-# generate HTML output.
-
-GENERATE_HTML          = YES
-
-# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `html' will be used as the default path.
-
-HTML_OUTPUT            = html
-
-# The HTML_FILE_EXTENSION tag can be used to specify the file extension for 
-# each generated HTML page (for example: .htm,.php,.asp). If it is left blank 
-# doxygen will generate files with .html extension.
-
-HTML_FILE_EXTENSION    = .html
-
-# The HTML_HEADER tag can be used to specify a personal HTML header for 
-# each generated HTML page. If it is left blank doxygen will generate a 
-# standard header.
-
-HTML_HEADER            = 
-
-# The HTML_FOOTER tag can be used to specify a personal HTML footer for 
-# each generated HTML page. If it is left blank doxygen will generate a 
-# standard footer.
-
-HTML_FOOTER            = 
-
-# The HTML_STYLESHEET tag can be used to specify a user-defined cascading 
-# style sheet that is used by each HTML page. It can be used to 
-# fine-tune the look of the HTML output. If the tag is left blank doxygen 
-# will generate a default style sheet. Note that doxygen will try to copy 
-# the style sheet file to the HTML output directory, so don't put your own 
-# stylesheet in the HTML output directory as well, or it will be erased!
-
-HTML_STYLESHEET        = 
-
-# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, 
-# files or namespaces will be aligned in HTML using tables. If set to 
-# NO a bullet list will be used.
-
-HTML_ALIGN_MEMBERS     = YES
-
-# If the GENERATE_HTMLHELP tag is set to YES, additional index files 
-# will be generated that can be used as input for tools like the 
-# Microsoft HTML help workshop to generate a compressed HTML help file (.chm) 
-# of the generated HTML documentation.
-
-GENERATE_HTMLHELP      = NO
-
-# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can 
-# be used to specify the file name of the resulting .chm file. You 
-# can add a path in front of the file if the result should not be 
-# written to the html output directory.
-
-CHM_FILE               = 
-
-# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can 
-# be used to specify the location (absolute path including file name) of 
-# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run 
-# the HTML help compiler on the generated index.hhp.
-
-HHC_LOCATION           = 
-
-# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag 
-# controls if a separate .chi index file is generated (YES) or that 
-# it should be included in the master .chm file (NO).
-
-GENERATE_CHI           = NO
-
-# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag 
-# controls whether a binary table of contents is generated (YES) or a 
-# normal table of contents (NO) in the .chm file.
-
-BINARY_TOC             = NO
-
-# The TOC_EXPAND flag can be set to YES to add extra items for group members 
-# to the contents of the HTML help documentation and to the tree view.
-
-TOC_EXPAND             = NO
-
-# The DISABLE_INDEX tag can be used to turn on/off the condensed index at 
-# top of each HTML page. The value NO (the default) enables the index and 
-# the value YES disables it.
-
-DISABLE_INDEX          = NO
-
-# This tag can be used to set the number of enum values (range [1..20]) 
-# that doxygen will group on one line in the generated HTML documentation.
-
-ENUM_VALUES_PER_LINE   = 4
-
-# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be
-# generated containing a tree-like index structure (just like the one that 
-# is generated for HTML Help). For this to work a browser that supports 
-# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+, 
-# Netscape 6.0+, Internet explorer 5.0+, or Konqueror). Windows users are 
-# probably better off using the HTML help feature.
-
-GENERATE_TREEVIEW      = NO
-
-# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be 
-# used to set the initial width (in pixels) of the frame in which the tree 
-# is shown.
-
-TREEVIEW_WIDTH         = 250
-
-#---------------------------------------------------------------------------
-# configuration options related to the LaTeX output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will 
-# generate Latex output.
-
-GENERATE_LATEX         = YES
-
-# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `latex' will be used as the default path.
-
-LATEX_OUTPUT           = latex
-
-# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be 
-# invoked. If left blank `latex' will be used as the default command name.
-
-LATEX_CMD_NAME         = latex
-
-# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to 
-# generate index for LaTeX. If left blank `makeindex' will be used as the 
-# default command name.
-
-MAKEINDEX_CMD_NAME     = makeindex
-
-# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact 
-# LaTeX documents. This may be useful for small projects and may help to 
-# save some trees in general.
-
-COMPACT_LATEX          = NO
-
-# The PAPER_TYPE tag can be used to set the paper type that is used 
-# by the printer. Possible values are: a4, a4wide, letter, legal and 
-# executive. If left blank a4wide will be used.
-
-PAPER_TYPE             = a4
-
-# The EXTRA_PACKAGES tag can be to specify one or more names of LaTeX 
-# packages that should be included in the LaTeX output.
-
-EXTRA_PACKAGES         = 
-
-# The LATEX_HEADER tag can be used to specify a personal LaTeX header for 
-# the generated latex document. The header should contain everything until 
-# the first chapter. If it is left blank doxygen will generate a 
-# standard header. Notice: only use this tag if you know what you are doing!
-
-LATEX_HEADER           = 
-
-# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated 
-# is prepared for conversion to pdf (using ps2pdf). The pdf file will 
-# contain links (just like the HTML output) instead of page references 
-# This makes the output suitable for online browsing using a pdf viewer.
-
-PDF_HYPERLINKS         = YES
-
-# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of 
-# plain latex in the generated Makefile. Set this option to YES to get a 
-# higher quality PDF documentation.
-
-USE_PDFLATEX           = YES
-
-# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode. 
-# command to the generated LaTeX files. This will instruct LaTeX to keep 
-# running if errors occur, instead of asking the user for help. 
-# This option is also used when generating formulas in HTML.
-
-LATEX_BATCHMODE        = NO
-
-# If LATEX_HIDE_INDICES is set to YES then doxygen will not 
-# include the index chapters (such as File Index, Compound Index, etc.) 
-# in the output.
-
-LATEX_HIDE_INDICES     = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the RTF output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output 
-# The RTF output is optimized for Word 97 and may not look very pretty with 
-# other RTF readers or editors.
-
-GENERATE_RTF           = NO
-
-# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `rtf' will be used as the default path.
-
-RTF_OUTPUT             = rtf
-
-# If the COMPACT_RTF tag is set to YES Doxygen generates more compact 
-# RTF documents. This may be useful for small projects and may help to 
-# save some trees in general.
-
-COMPACT_RTF            = NO
-
-# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated 
-# will contain hyperlink fields. The RTF file will 
-# contain links (just like the HTML output) instead of page references. 
-# This makes the output suitable for online browsing using WORD or other 
-# programs which support those fields. 
-# Note: wordpad (write) and others do not support links.
-
-RTF_HYPERLINKS         = NO
-
-# Load stylesheet definitions from file. Syntax is similar to doxygen's 
-# config file, i.e. a series of assignments. You only have to provide 
-# replacements, missing definitions are set to their default value.
-
-RTF_STYLESHEET_FILE    = 
-
-# Set optional variables used in the generation of an rtf document. 
-# Syntax is similar to doxygen's config file.
-
-RTF_EXTENSIONS_FILE    = 
-
-#---------------------------------------------------------------------------
-# configuration options related to the man page output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_MAN tag is set to YES (the default) Doxygen will 
-# generate man pages
-
-GENERATE_MAN           = NO
-
-# The MAN_OUTPUT tag is used to specify where the man pages will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `man' will be used as the default path.
-
-MAN_OUTPUT             = man
-
-# The MAN_EXTENSION tag determines the extension that is added to 
-# the generated man pages (default is the subroutine's section .3)
-
-MAN_EXTENSION          = .3
-
-# If the MAN_LINKS tag is set to YES and Doxygen generates man output, 
-# then it will generate one additional man file for each entity 
-# documented in the real man page(s). These additional files 
-# only source the real man page, but without them the man command 
-# would be unable to find the correct page. The default is NO.
-
-MAN_LINKS              = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the XML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_XML tag is set to YES Doxygen will 
-# generate an XML file that captures the structure of 
-# the code including all documentation.
-
-GENERATE_XML           = NO
-
-# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
-# put in front of it. If left blank `xml' will be used as the default path.
-
-XML_OUTPUT             = xml
-
-# The XML_SCHEMA tag can be used to specify an XML schema, 
-# which can be used by a validating XML parser to check the 
-# syntax of the XML files.
-
-XML_SCHEMA             = 
-
-# The XML_DTD tag can be used to specify an XML DTD, 
-# which can be used by a validating XML parser to check the 
-# syntax of the XML files.
-
-XML_DTD                = 
-
-# If the XML_PROGRAMLISTING tag is set to YES Doxygen will 
-# dump the program listings (including syntax highlighting 
-# and cross-referencing information) to the XML output. Note that 
-# enabling this will significantly increase the size of the XML output.
-
-XML_PROGRAMLISTING     = YES
-
-#---------------------------------------------------------------------------
-# configuration options for the AutoGen Definitions output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will 
-# generate an AutoGen Definitions (see autogen.sf.net) file 
-# that captures the structure of the code including all 
-# documentation. Note that this feature is still experimental 
-# and incomplete at the moment.
-
-GENERATE_AUTOGEN_DEF   = NO
-
-#---------------------------------------------------------------------------
-# configuration options related to the Perl module output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_PERLMOD tag is set to YES Doxygen will 
-# generate a Perl module file that captures the structure of 
-# the code including all documentation. Note that this 
-# feature is still experimental and incomplete at the 
-# moment.
-
-GENERATE_PERLMOD       = NO
-
-# If the PERLMOD_LATEX tag is set to YES Doxygen will generate 
-# the necessary Makefile rules, Perl scripts and LaTeX code to be able 
-# to generate PDF and DVI output from the Perl module output.
-
-PERLMOD_LATEX          = NO
-
-# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be 
-# nicely formatted so it can be parsed by a human reader.  This is useful 
-# if you want to understand what is going on.  On the other hand, if this 
-# tag is set to NO the size of the Perl module output will be much smaller 
-# and Perl will parse it just the same.
-
-PERLMOD_PRETTY         = YES
-
-# The names of the make variables in the generated doxyrules.make file 
-# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. 
-# This is useful so different doxyrules.make files included by the same 
-# Makefile don't overwrite each other's variables.
-
-PERLMOD_MAKEVAR_PREFIX = 
-
-#---------------------------------------------------------------------------
-# Configuration options related to the preprocessor   
-#---------------------------------------------------------------------------
-
-# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will 
-# evaluate all C-preprocessor directives found in the sources and include 
-# files.
-
-ENABLE_PREPROCESSING   = YES
-
-# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro 
-# names in the source code. If set to NO (the default) only conditional 
-# compilation will be performed. Macro expansion can be done in a controlled 
-# way by setting EXPAND_ONLY_PREDEF to YES.
-
-MACRO_EXPANSION        = NO
-
-# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES 
-# then the macro expansion is limited to the macros specified with the 
-# PREDEFINED and EXPAND_AS_PREDEFINED tags.
-
-EXPAND_ONLY_PREDEF     = NO
-
-# If the SEARCH_INCLUDES tag is set to YES (the default) the includes files 
-# in the INCLUDE_PATH (see below) will be search if a #include is found.
-
-SEARCH_INCLUDES        = YES
-
-# The INCLUDE_PATH tag can be used to specify one or more directories that 
-# contain include files that are not input files but should be processed by 
-# the preprocessor.
-
-INCLUDE_PATH           = 
-
-# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard 
-# patterns (like *.h and *.hpp) to filter out the header-files in the 
-# directories. If left blank, the patterns specified with FILE_PATTERNS will 
-# be used.
-
-INCLUDE_FILE_PATTERNS  = 
-
-# The PREDEFINED tag can be used to specify one or more macro names that 
-# are defined before the preprocessor is started (similar to the -D option of 
-# gcc). The argument of the tag is a list of macros of the form: name 
-# or name=definition (no spaces). If the definition and the = are 
-# omitted =1 is assumed. To prevent a macro definition from being 
-# undefined via #undef or recursively expanded use the := operator 
-# instead of the = operator.
-
-PREDEFINED             = 
-
-# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then 
-# this tag can be used to specify a list of macro names that should be expanded. 
-# The macro definition that is found in the sources will be used. 
-# Use the PREDEFINED tag if you want to use a different macro definition.
-
-EXPAND_AS_DEFINED      = 
-
-# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then 
-# doxygen's preprocessor will remove all function-like macros that are alone 
-# on a line, have an all uppercase name, and do not end with a semicolon. Such 
-# function macros are typically used for boiler-plate code, and will confuse 
-# the parser if not removed.
-
-SKIP_FUNCTION_MACROS   = YES
-
-#---------------------------------------------------------------------------
-# Configuration::additions related to external references   
-#---------------------------------------------------------------------------
-
-# The TAGFILES option can be used to specify one or more tagfiles. 
-# Optionally an initial location of the external documentation 
-# can be added for each tagfile. The format of a tag file without 
-# this location is as follows: 
-#   TAGFILES = file1 file2 ... 
-# Adding location for the tag files is done as follows: 
-#   TAGFILES = file1=loc1 "file2 = loc2" ... 
-# where "loc1" and "loc2" can be relative or absolute paths or 
-# URLs. If a location is present for each tag, the installdox tool 
-# does not have to be run to correct the links.
-# Note that each tag file must have a unique name
-# (where the name does NOT include the path)
-# If a tag file is not located in the directory in which doxygen 
-# is run, you must also specify the path to the tagfile here.
-
-TAGFILES               = 
-
-# When a file name is specified after GENERATE_TAGFILE, doxygen will create 
-# a tag file that is based on the input files it reads.
-
-GENERATE_TAGFILE       = 
-
-# If the ALLEXTERNALS tag is set to YES all external classes will be listed 
-# in the class index. If set to NO only the inherited external classes 
-# will be listed.
-
-ALLEXTERNALS           = NO
-
-# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed 
-# in the modules index. If set to NO, only the current project's groups will 
-# be listed.
-
-EXTERNAL_GROUPS        = YES
-
-# The PERL_PATH should be the absolute path and name of the perl script 
-# interpreter (i.e. the result of `which perl').
-
-PERL_PATH              = /usr/bin/perl
-
-#---------------------------------------------------------------------------
-# Configuration options related to the dot tool   
-#---------------------------------------------------------------------------
-
-# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will 
-# generate a inheritance diagram (in HTML, RTF and LaTeX) for classes with base 
-# or super classes. Setting the tag to NO turns the diagrams off. Note that 
-# this option is superseded by the HAVE_DOT option below. This is only a 
-# fallback. It is recommended to install and use dot, since it yields more 
-# powerful graphs.
-
-CLASS_DIAGRAMS         = YES
-
-# If set to YES, the inheritance and collaboration graphs will hide 
-# inheritance and usage relations if the target is undocumented 
-# or is not a class.
-
-HIDE_UNDOC_RELATIONS   = YES
-
-# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is 
-# available from the path. This tool is part of Graphviz, a graph visualization 
-# toolkit from AT&T and Lucent Bell Labs. The other options in this section 
-# have no effect if this option is set to NO (the default)
-
-HAVE_DOT               = NO
-
-# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for each documented class showing the direct and 
-# indirect inheritance relations. Setting this tag to YES will force the 
-# the CLASS_DIAGRAMS tag to NO.
-
-CLASS_GRAPH            = YES
-
-# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for each documented class showing the direct and 
-# indirect implementation dependencies (inheritance, containment, and 
-# class references variables) of the class with other documented classes.
-
-COLLABORATION_GRAPH    = YES
-
-# If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen 
-# will generate a graph for groups, showing the direct groups dependencies
-
-GROUP_GRAPHS           = YES
-
-# If the UML_LOOK tag is set to YES doxygen will generate inheritance and 
-# collaboration diagrams in a style similar to the OMG's Unified Modeling 
-# Language.
-
-UML_LOOK               = NO
-
-# If set to YES, the inheritance and collaboration graphs will show the 
-# relations between templates and their instances.
-
-TEMPLATE_RELATIONS     = NO
-
-# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT 
-# tags are set to YES then doxygen will generate a graph for each documented 
-# file showing the direct and indirect include dependencies of the file with 
-# other documented files.
-
-INCLUDE_GRAPH          = YES
-
-# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and 
-# HAVE_DOT tags are set to YES then doxygen will generate a graph for each 
-# documented header file showing the documented files that directly or 
-# indirectly include this file.
-
-INCLUDED_BY_GRAPH      = YES
-
-# If the CALL_GRAPH and HAVE_DOT tags are set to YES then doxygen will 
-# generate a call dependency graph for every global function or class method. 
-# Note that enabling this option will significantly increase the time of a run. 
-# So in most cases it will be better to enable call graphs for selected 
-# functions only using the \callgraph command.
-
-CALL_GRAPH             = NO
-
-# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen 
-# will graphical hierarchy of all classes instead of a textual one.
-
-GRAPHICAL_HIERARCHY    = YES
-
-# If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES 
-# then doxygen will show the dependencies a directory has on other directories 
-# in a graphical way. The dependency relations are determined by the #include
-# relations between the files in the directories.
-
-DIRECTORY_GRAPH        = YES
-
-# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images 
-# generated by dot. Possible values are png, jpg, or gif
-# If left blank png will be used.
-
-DOT_IMAGE_FORMAT       = png
-
-# The tag DOT_PATH can be used to specify the path where the dot tool can be 
-# found. If left blank, it is assumed the dot tool can be found in the path.
-
-DOT_PATH               = 
-
-# The DOTFILE_DIRS tag can be used to specify one or more directories that 
-# contain dot files that are included in the documentation (see the 
-# \dotfile command).
-
-DOTFILE_DIRS           = 
-
-# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width 
-# (in pixels) of the graphs generated by dot. If a graph becomes larger than 
-# this value, doxygen will try to truncate the graph, so that it fits within 
-# the specified constraint. Beware that most browsers cannot cope with very 
-# large images.
-
-MAX_DOT_GRAPH_WIDTH    = 1024
-
-# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allows height 
-# (in pixels) of the graphs generated by dot. If a graph becomes larger than 
-# this value, doxygen will try to truncate the graph, so that it fits within 
-# the specified constraint. Beware that most browsers cannot cope with very 
-# large images.
-
-MAX_DOT_GRAPH_HEIGHT   = 1024
-
-# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the 
-# graphs generated by dot. A depth value of 3 means that only nodes reachable 
-# from the root by following a path via at most 3 edges will be shown. Nodes 
-# that lay further from the root node will be omitted. Note that setting this 
-# option to 1 or 2 may greatly reduce the computation time needed for large 
-# code bases. Also note that a graph may be further truncated if the graph's 
-# image dimensions are not sufficient to fit the graph (see MAX_DOT_GRAPH_WIDTH 
-# and MAX_DOT_GRAPH_HEIGHT). If 0 is used for the depth value (the default), 
-# the graph is not depth-constrained.
-
-MAX_DOT_GRAPH_DEPTH    = 0
-
-# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent 
-# background. This is disabled by default, which results in a white background. 
-# Warning: Depending on the platform used, enabling this option may lead to 
-# badly anti-aliased labels on the edges of a graph (i.e. they become hard to 
-# read).
-
-DOT_TRANSPARENT        = NO
-
-# Set the DOT_MULTI_TARGETS tag to YES allow dot to generate multiple output 
-# files in one run (i.e. multiple -o and -T options on the command line). This 
-# makes dot run faster, but since only newer versions of dot (>1.8.10) 
-# support this, this feature is disabled by default.
-
-DOT_MULTI_TARGETS      = NO
-
-# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will 
-# generate a legend page explaining the meaning of the various boxes and 
-# arrows in the dot generated graphs.
-
-GENERATE_LEGEND        = YES
-
-# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will 
-# remove the intermediate dot files that are used to generate 
-# the various graphs.
-
-DOT_CLEANUP            = YES
-
-#---------------------------------------------------------------------------
-# Configuration::additions related to the search engine   
-#---------------------------------------------------------------------------
-
-# The SEARCHENGINE tag specifies whether or not a search engine should be 
-# used. If set to NO the values of all tags below this one will be ignored.
-
-SEARCHENGINE           = NO
diff --git a/docs/Doxyfilter b/docs/Doxyfilter
deleted file mode 100644
index 6a6d50f..0000000
--- a/docs/Doxyfilter
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/sh
-
-#
-# Doxyfilter <source-root> <filename>
-#
-
-dir=$(dirname "$0")
-
-PYFILTER="$dir/pythfilter.py"
-
-if [ "${2/.py/}" != "$2" ]
-then
-    python "$PYFILTER" -r "$1" -f "$2"
-else
-    cat "$2"
-fi
diff --git a/docs/Makefile b/docs/Makefile
index 620a296..053d7af 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -27,9 +27,6 @@ all: build
 .PHONY: build
 build: html txt man-pages figs
 
-.PHONY: dev-docs
-dev-docs: python-dev-docs
-
 .PHONY: html
 html: $(DOC_HTML) html/index.html
 
@@ -45,15 +42,6 @@ figs:
 	set -x; $(MAKE) -C figs ; else                   \
 	echo "fig2dev (transfig) not installed; skipping figs."; fi
 
-.PHONY: python-dev-docs
-python-dev-docs:
-	@mkdir -v -p api/tools/python
-	@set -e ; if which $(DOXYGEN) 1>/dev/null 2>/dev/null; then \
-        echo "Running doxygen to generate Python tools APIs ... "; \
-	$(DOXYGEN) Doxyfile;                                       \
-	$(MAKE) -C api/tools/python/latex ; else                   \
-        echo "Doxygen not installed; skipping python-dev-docs."; fi
-
 .PHONY: man-pages
 man-pages:
 	@if which $(POD2MAN) 1>/dev/null 2>/dev/null; then \
@@ -76,7 +64,6 @@ clean:
 	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~ 
 	rm -rf *.ilg *.log *.ind *.toc *.bak core
 	rm -rf html txt
-	rm -rf api
 	rm -rf man5
 	rm -rf man1
 
diff --git a/docs/html.sty b/docs/html.sty
deleted file mode 100644
index b5f8fbb..0000000
--- a/docs/html.sty
+++ /dev/null
@@ -1,887 +0,0 @@
-%
-% $Id: html.sty,v 1.23 1998/02/26 10:32:24 latex2html Exp $
-% LaTeX2HTML Version 96.2 : html.sty
-% 
-% This file contains definitions of LaTeX commands which are
-% processed in a special way by the translator. 
-% For example, there are commands for embedding external hypertext links,
-% for cross-references between documents or for including raw HTML.
-% This file includes the comments.sty file v2.0 by Victor Eijkhout
-% In most cases these commands do nothing when processed by LaTeX.
-%
-% Place this file in a directory accessible to LaTeX (i.e., somewhere
-% in the TEXINPUTS path.)
-%
-% NOTE: This file works with LaTeX 2.09 or (the newer) LaTeX2e.
-%       If you only have LaTeX 2.09, some complex LaTeX2HTML features
-%       like support for segmented documents are not available.
-
-% Changes:
-% See the change log at end of file.
-
-
-% Exit if the style file is already loaded
-% (suggested by Lee Shombert <las@potomac.wash.inmet.com>
-\ifx \htmlstyloaded\relax \endinput\else\let\htmlstyloaded\relax\fi
-\makeatletter
-
-\providecommand{\latextohtml}{\LaTeX2\texttt{HTML}}
-
-
-%%% LINKS TO EXTERNAL DOCUMENTS
-%
-% This can be used to provide links to arbitrary documents.
-% The first argumment should be the text that is going to be
-% highlighted and the second argument a URL.
-% The hyperlink will appear as a hyperlink in the HTML 
-% document and as a footnote in the dvi or ps files.
-%
-\newcommand{\htmladdnormallinkfoot}[2]{#1\footnote{#2}} 
-
-
-% This is an alternative definition of the command above which
-% will ignore the URL in the dvi or ps files.
-\newcommand{\htmladdnormallink}[2]{#1}
-
-
-% This command takes as argument a URL pointing to an image.
-% The image will be embedded in the HTML document but will
-% be ignored in the dvi and ps files.
-%
-\newcommand{\htmladdimg}[1]{}
-
-
-%%% CROSS-REFERENCES BETWEEN (LOCAL OR REMOTE) DOCUMENTS
-%
-% This can be used to refer to symbolic labels in other Latex 
-% documents that have already been processed by the translator.
-% The arguments should be:
-% #1 : the URL to the directory containing the external document
-% #2 : the path to the labels.pl file of the external document.
-% If the external document lives on a remote machine then labels.pl 
-% must be copied on the local machine.
-%
-%e.g. \externallabels{http://cbl.leeds.ac.uk/nikos/WWW/doc/tex2html/latex2html}
-%                    {/usr/cblelca/nikos/tmp/labels.pl}
-% The arguments are ignored in the dvi and ps files.
-%
-\newcommand{\externallabels}[2]{}
-
-
-% This complements the \externallabels command above. The argument
-% should be a label defined in another latex document and will be
-% ignored in the dvi and ps files.
-%
-\newcommand{\externalref}[1]{}
-
-
-% Suggested by  Uffe Engberg (http://www.brics.dk/~engberg/)
-% This allows the same effect for citations in external bibliographies.
-% An  \externallabels  command must be given, locating a labels.pl file
-% which defines the location and keys used in the external .html file.
-%  
-\newcommand{\externalcite}{\nocite}
-
-
-%%% HTMLRULE
-% This command adds a horizontal rule and is valid even within
-% a figure caption.
-% Here we introduce a stub for compatibility.
-\newcommand{\htmlrule}{\protect\HTMLrule}
-\newcommand{\HTMLrule}{\@ifstar\htmlrulestar\htmlrulestar}
-\newcommand{\htmlrulestar}[1]{}
-
-% This command adds information within the <BODY> ... </BODY> tag
-%
-\newcommand{\bodytext}[1]{}
-\newcommand{\htmlbody}{}
-
-
-%%% HYPERREF 
-% Suggested by Eric M. Carol <eric@ca.utoronto.utcc.enfm>
-% Similar to \ref but accepts conditional text. 
-% The first argument is HTML text which will become ``hyperized''
-% (underlined).
-% The second and third arguments are text which will appear only in the paper
-% version (DVI file), enclosing the fourth argument which is a reference to a label.
-%
-%e.g. \hyperref{using the tracer}{using the tracer (see Section}{)}{trace}
-% where there is a corresponding \label{trace}
-%
-\newcommand{\hyperref}{\hyperrefx[ref]}
-\def\hyperrefx[#1]{{\def\next{#1}%
- \def\tmp{ref}\ifx\next\tmp\aftergroup\hyperrefref
- \else\def\tmp{pageref}\ifx\next\tmp\aftergroup\hyperpageref
- \else\def\tmp{page}\ifx\next\tmp\aftergroup\hyperpageref
- \else\def\tmp{noref}\ifx\next\tmp\aftergroup\hypernoref
- \else\def\tmp{no}\ifx\next\tmp\aftergroup\hypernoref
- \else\typeout{*** unknown option \next\space to  hyperref ***}%
- \fi\fi\fi\fi\fi}}
-\newcommand{\hyperrefref}[4]{#2\ref{#4}#3}
-\newcommand{\hyperpageref}[4]{#2\pageref{#4}#3}
-\newcommand{\hypernoref}[3]{#2}
-
-
-%%% HYPERCITE --- added by RRM
-% Suggested by Stephen Simpson <simpson@math.psu.edu>
-% effects the same ideas as in  \hyperref, but for citations.
-% It does not allow an optional argument to the \cite, in LaTeX.
-%
-%   \hypercite{<html-text>}{<LaTeX-text>}{<opt-text>}{<key>}
-%
-% uses the pre/post-texts in LaTeX, with a  \cite{<key>}
-%
-%   \hypercite[ext]{<html-text>}{<LaTeX-text>}{<key>}
-%
-% uses the pre/post-texts in LaTeX, with a  \nocite{<key>}
-% the actual reference comes from an \externallabels  file.
-%
-\newcommand{\hypercite}{\hypercitex[int]}
-\def\hypercitex[#1]{{\def\next{#1}%
- \def\tmp{int}\ifx\next\tmp\aftergroup\hyperciteint
- \else\def\tmp{cite}\ifx\next\tmp\aftergroup\hyperciteint
- \else\def\tmp{ext}\ifx\next\tmp\aftergroup\hyperciteext
- \else\def\tmp{nocite}\ifx\next\tmp\aftergroup\hyperciteext
- \else\def\tmp{no}\ifx\next\tmp\aftergroup\hyperciteext
- \else\typeout{*** unknown option \next\space to  hypercite ***}%
- \fi\fi\fi\fi\fi}}
-\newcommand{\hyperciteint}[4]{#2{\def\tmp{#3}\def\emptyopt{}%
- \ifx\tmp\emptyopt\cite{#4}\else\cite[#3]{#4}\fi}}
-\newcommand{\hyperciteext}[3]{#2\nocite{#3}}
-
-
-
-%%% HTMLREF
-% Reference in HTML version only.
-% Mix between \htmladdnormallink and \hyperref.
-% First arg is text for in both versions, second is label for use in HTML
-% version.
-\newcommand{\htmlref}[2]{#1}
-
-%%% HTMLCITE
-% Reference in HTML version only.
-% Mix between \htmladdnormallink and \hypercite.
-% First arg is text for in both versions, second is citation for use in HTML
-% version.
-\newcommand{\htmlcite}[2]{#1}
-
-
-%%% HTMLIMAGE
-% This command can be used inside any environment that is converted
-% into an inlined image (eg a "figure" environment) in order to change
-% the way the image will be translated. The argument of \htmlimage
-% is really a string of options separated by commas ie 
-% [scale=<scale factor>],[external],[thumbnail=<reduction factor>
-% The scale option allows control over the size of the final image.
-% The ``external'' option will cause the image not to be inlined 
-% (images are inlined by default). External images will be accessible
-% via a hypertext link. 
-% The ``thumbnail'' option will cause a small inlined image to be 
-% placed in the caption. The size of the thumbnail depends on the
-% reduction factor. The use of the ``thumbnail'' option implies
-% the ``external'' option.
-%
-% Example:
-% \htmlimage{scale=1.5,external,thumbnail=0.2}
-% will cause a small thumbnail image 1/5th of the original size to be
-% placed in the final document, pointing to an external image 1.5
-% times bigger than the original.
-% 
-\newcommand{\htmlimage}[1]{}
-
-
-% \htmlborder causes a border to be placed around an image or table
-% when the image is placed within a <TABLE> cell.
-\newcommand{\htmlborder}[1]{}
-
-% Put \begin{makeimage}, \end{makeimage} around LaTeX to ensure its
-% translation into an image.
-% This shields sensitive text from being translated.
-\newenvironment{makeimage}{}{}
-
-
-% A dummy environment that can be useful to alter the order
-% in which commands are processed, in LaTeX2HTML
-\newenvironment{tex2html_deferred}{}{}
-
-
-%%% HTMLADDTONAVIGATION
-% This command appends its argument to the buttons in the navigation
-% panel. It is ignored by LaTeX.
-%
-% Example:
-% \htmladdtonavigation{\htmladdnormallink
-%              {\htmladdimg{http://server/path/to/gif}}
-%              {http://server/path}}
-\newcommand{\htmladdtonavigation}[1]{}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% Comment.sty   version 2.0, 19 June 1992
-% selectively in/exclude pieces of text: the user can define new
-% comment versions, and each is controlled separately.
-% This style can be used with plain TeX or LaTeX, and probably
-% most other packages too.
-%
-% Examples of use in LaTeX and TeX follow \endinput
-%
-% Author
-%    Victor Eijkhout
-%    Department of Computer Science
-%    University Tennessee at Knoxville
-%    104 Ayres Hall
-%    Knoxville, TN 37996
-%    USA
-%
-%    eijkhout@cs.utk.edu
-%
-% Usage: all text included in between
-%    \comment ... \endcomment
-% or \begin{comment} ... \end{comment}
-% is discarded. The closing command should appear on a line
-% of its own. No starting spaces, nothing after it.
-% This environment should work with arbitrary amounts
-% of comment.
-%
-% Other 'comment' environments are defined by
-% and are selected/deselected with
-% \includecomment{versiona}
-% \excludecoment{versionb}
-%
-% These environments are used as
-% \versiona ... \endversiona
-% or \begin{versiona} ... \end{versiona}
-% with the closing command again on a line of its own.
-%
-% Basic approach:
-% to comment something out, scoop up  every line in verbatim mode
-% as macro argument, then throw it away.
-% For inclusions, both the opening and closing comands
-% are defined as noop
-%
-% Changed \next to \html@next to prevent clashes with other sty files
-% (mike@emn.fr)
-% Changed \html@next to \htmlnext so the \makeatletter and
-% \makeatother commands could be removed (they were causing other
-% style files - changebar.sty - to crash) (nikos@cbl.leeds.ac.uk)
-% Changed \htmlnext back to \html@next...
-
-\def\makeinnocent#1{\catcode`#1=12 }
-\def\csarg#1#2{\expandafter#1\csname#2\endcsname}
-
-\def\ThrowAwayComment#1{\begingroup
-    \def\CurrentComment{#1}%
-    \let\do\makeinnocent \dospecials
-    \makeinnocent\^^L% and whatever other special cases
-    \endlinechar`\^^M \catcode`\^^M=12 \xComment}
-{\catcode`\^^M=12 \endlinechar=-1 %
- \gdef\xComment#1^^M{\def\test{#1}\edef\test{\meaning\test}
-      \csarg\ifx{PlainEnd\CurrentComment Test}\test
-          \let\html@next\endgroup
-      \else \csarg\ifx{LaLaEnd\CurrentComment Test}\test
-            \edef\html@next{\endgroup\noexpand\end{\CurrentComment}}
-      \else \csarg\ifx{LaInnEnd\CurrentComment Test}\test
-            \edef\html@next{\endgroup\noexpand\end{\CurrentComment}}
-      \else \let\html@next\xComment
-      \fi \fi \fi \html@next}
-}
-
-\def\includecomment
- #1{\expandafter\def\csname#1\endcsname{}%
-    \expandafter\def\csname end#1\endcsname{}}
-\def\excludecomment
- #1{\expandafter\def\csname#1\endcsname{\ThrowAwayComment{#1}}%
-    {\escapechar=-1\relax
-     \edef\tmp{\string\\end#1}%
-      \csarg\xdef{PlainEnd#1Test}{\meaning\tmp}%
-     \edef\tmp{\string\\end\string\{#1\string\}}%
-      \csarg\xdef{LaLaEnd#1Test}{\meaning\tmp}%
-     \edef\tmp{\string\\end \string\{#1\string\}}%
-      \csarg\xdef{LaInnEnd#1Test}{\meaning\tmp}%
-    }}
-
-\excludecomment{comment}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% end Comment.sty
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-%
-% Alternative code by Robin Fairbairns, 22 September 1997
-%
-\newcommand\@gobbleenv{\let\reserved@a\@currenvir\@gobble@nv}
-\long\def\@gobble@nv#1\end#2{\def\reserved@b{#2}%
- \ifx\reserved@a\reserved@b
-  \edef\reserved@a{\noexpand\end{\reserved@a}}%
-  \expandafter\reserved@a
- \else
-  \expandafter\@gobble@nv
- \fi}
-
-\renewcommand{\excludecomment}[1]{%
-    \csname newenvironment\endcsname{#1}{\@gobbleenv}{}}
-
-%%% RAW HTML 
-% 
-% Enclose raw HTML between a \begin{rawhtml} and \end{rawhtml}.
-% The html environment ignores its body
-%
-\excludecomment{rawhtml}
-
-
-%%% HTML ONLY
-%
-% Enclose LaTeX constructs which will only appear in the 
-% HTML output and will be ignored by LaTeX with 
-% \begin{htmlonly} and \end{htmlonly}
-%
-\excludecomment{htmlonly}
-% Shorter version
-\newcommand{\html}[1]{}
-
-% for images.tex only
-\excludecomment{imagesonly}
-
-%%% LaTeX ONLY
-% Enclose LaTeX constructs which will only appear in the 
-% DVI output and will be ignored by latex2html with 
-%\begin{latexonly} and \end{latexonly}
-%
-\newenvironment{latexonly}{}{}
-% Shorter version
-\newcommand{\latex}[1]{#1}
-
-
-%%% LaTeX or HTML
-% Combination of \latex and \html.
-% Say \latexhtml{this should be latex text}{this html text}
-%
-%\newcommand{\latexhtml}[2]{#1}
-\long\def\latexhtml#1#2{#1}
-
-
-%%% tracing the HTML conversions
-% This alters the tracing-level within the processing
-% performed by  latex2html  by adjusting  $VERBOSITY
-% (see  latex2html.config  for the appropriate values)
-%
-\newcommand{\htmltracing}[1]{}
-\newcommand{\htmltracenv}[1]{}
-
-
-%%%  \strikeout for HTML only
-% uses <STRIKE>...</STRIKE> tags on the argument
-% LaTeX just gobbles it up.
-\newcommand{\strikeout}[1]{}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%% JCL - stop input here if LaTeX2e is not present
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\ifx\if@compatibility\undefined
-  %LaTeX209
-  \makeatother\relax\expandafter\endinput
-\fi
-\if@compatibility
-  %LaTeX2e in LaTeX209 compatibility mode
-  \makeatother\relax\expandafter\endinput
-\fi
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% Start providing LaTeX2e extension:
-% This is currently:
-%  - additional optional argument for \htmladdimg
-%  - support for segmented documents
-%
-
-\ProvidesPackage{html}
-          [1996/12/22 v1.1 hypertext commands for latex2html (nd, hws, rrm)]
-%%%%MG
-
-% This command takes as argument a URL pointing to an image.
-% The image will be embedded in the HTML document but will
-% be ignored in the dvi and ps files.  The optional argument
-% denotes additional HTML tags.
-%
-% Example:  \htmladdimg[ALT="portrait" ALIGN=CENTER]{portrait.gif}
-%
-\renewcommand{\htmladdimg}[2][]{}
-
-%%% HTMLRULE for LaTeX2e
-% This command adds a horizontal rule and is valid even within
-% a figure caption.
-%
-% This command is best used with LaTeX2e and HTML 3.2 support.
-% It is like \hrule, but allows for options via key--value pairs
-% as follows:  \htmlrule[key1=value1, key2=value2, ...] .
-% Use \htmlrule* to suppress the <BR> tag.
-% Eg. \htmlrule[left, 15, 5pt, "none", NOSHADE] produces
-% <BR CLEAR="left"><HR NOSHADE SIZE="15">.
-% Renew the necessary part.
-\renewcommand{\htmlrulestar}[1][all]{}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-%  renew some definitions to allow optional arguments
-%
-% The description of the options is missing, as yet.
-%
-\renewcommand{\latextohtml}{\textup{\LaTeX2\texttt{HTML}}}
-\renewcommand{\htmladdnormallinkfoot}[3][]{#2\footnote{#3}} 
-\renewcommand{\htmladdnormallink}[3][]{#2}
-\renewcommand{\htmlbody}[1][]{}
-\renewcommand{\hyperref}[1][ref]{\hyperrefx[#1]}
-\renewcommand{\hypercite}[1][int]{\hypercitex[#1]}
-\renewcommand{\htmlref}[3][]{#2}
-\renewcommand{\htmlcite}[1]{#1\htmlcitex}
-\newcommand{\htmlcitex}[2][]{{\def\tmp{#1}\ifx\tmp\@empty\else~[#1]\fi}}
-\renewcommand{\htmlimage}[2][]{}
-\renewcommand{\htmlborder}[2][]{}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-%  HTML  HTMLset  HTMLsetenv
-%
-%  These commands do nothing in LaTeX, but can be used to place
-%  HTML tags or set Perl variables during the LaTeX2HTML processing;
-%  They are intended for expert use only.
-
-\newcommand{\HTMLcode}[2][]{}
-\ifx\undefined\HTML\newcommand{\HTML}[2][]{}\else
-\typeout{*** Warning: \string\HTML\space had an incompatible definition ***}%
-\typeout{*** instead use \string\HTMLcode\space for raw HTML code ***}%
-\fi 
-\newcommand{\HTMLset}[3][]{}
-\newcommand{\HTMLsetenv}[3][]{}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%
-% The following commands pertain to document segmentation, and
-% were added by Herbert Swan <dprhws@edp.Arco.com> (with help from
-% Michel Goossens <goossens@cern.ch>):
-%
-%
-% This command inputs internal latex2html tables so that large
-% documents can to partitioned into smaller (more manageable)
-% segments.
-%
-\newcommand{\internal}[2][internals]{}
-
-%
-%  Define a dummy stub \htmlhead{}.  This command causes latex2html
-%  to define the title of the start of a new segment.  It is not
-%  normally placed in the user's document.  Rather, it is passed to
-%  latex2html via a .ptr file written by \segment.
-%
-\newcommand{\htmlhead}[3][]{}
-
-%  In the LaTeX2HTML version this will eliminate the title line
-%  generated by a \segment command, but retains the title string
-%  for use in other places.
-%
-\newcommand{\htmlnohead}{}
-
-
-%  In the LaTeX2HTML version this put a URL into a <BASE> tag
-%  within the <HEAD>...</HEAD> portion of a document.
-%
-\newcommand{\htmlbase}[1]{}
-%
-
-%
-%  The dummy command \endpreamble is needed by latex2html to
-%  mark the end of the preamble in document segments that do
-%  not contain a \begin{document}
-%
-\newcommand{\startdocument}{}
-
-
-% \tableofchildlinks, \htmlinfo
-%     by Ross Moore  ---  extensions dated 27 September 1997
-%
-%  These do nothing in LaTeX but for LaTeX2HTML they mark 
-%  where the table of child-links and info-page should be placed,
-%  when the user wants other than the default.
-%	\tableofchildlinks	 % put mini-TOC at this location
-%	\tableofchildlinks[off]	 % not on current page
-%	\tableofchildlinks[none] % not on current and subsequent pages
-%	\tableofchildlinks[on]   % selectively on current page
-%	\tableofchildlinks[all]  % on current and all subsequent pages
-%	\htmlinfo	 	 % put info-page at this location
-%	\htmlinfo[off]		 % no info-page in current document
-%	\htmlinfo[none]		 % no info-page in current document
-%  *-versions omit the preceding <BR> tag.
-%
-\newcommand{\tableofchildlinks}{%
-  \@ifstar\tableofchildlinksstar\tableofchildlinksstar}
-\newcommand{\tableofchildlinksstar}[1][]{}
-
-\newcommand{\htmlinfo}{\@ifstar\htmlinfostar\htmlinfostar}
-\newcommand{\htmlinfostar}[1][]{}
-
-
-%  This redefines  \begin  to allow for an optional argument
-%  which is used by LaTeX2HTML to specify `style-sheet' information
-
-\let\realLaTeX@begin=\begin
-\renewcommand{\begin}[1][]{\realLaTeX@begin}
-
-
-%
-%  Allocate a new set of section counters, which will get incremented
-%  for "*" forms of sectioning commands, and for a few miscellaneous
-%  commands.
-%
-
-\newcounter{lpart}
-\newcounter{lchapter}[part]
-\@ifundefined{c@chapter}%
- {\let\Hchapter\relax \newcounter{lsection}[part]}%
- {\let\Hchapter=\chapter \newcounter{lsection}[chapter]}
-\newcounter{lsubsection}[section]
-\newcounter{lsubsubsection}[subsection]
-\newcounter{lparagraph}[subsubsection]
-\newcounter{lsubparagraph}[paragraph]
-\newcounter{lequation}
-
-%
-%  Redefine "*" forms of sectioning commands to increment their
-%  respective counters.
-%
-\let\Hpart=\part
-%\let\Hchapter=\chapter
-\let\Hsection=\section
-\let\Hsubsection=\subsection
-\let\Hsubsubsection=\subsubsection
-\let\Hparagraph=\paragraph
-\let\Hsubparagraph=\subparagraph
-\let\Hsubsubparagraph=\subsubparagraph
-
-\ifx\c@subparagraph\undefined
- \newcounter{lsubsubparagraph}[lsubparagraph]
-\else
- \newcounter{lsubsubparagraph}[subparagraph]
-\fi
-
-%
-%  The following definitions are specific to LaTeX2e:
-%  (They must be commented out for LaTeX 2.09)
-%
-\renewcommand{\part}{\@ifstar{\stepcounter{lpart}%
-  \bgroup\def\tmp{*}\H@part}{\bgroup\def\tmp{}\H@part}}
-\newcommand{\H@part}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hpart\tmp}
-
-\ifx\Hchapter\relax\else
- \def\chapter{\resetsections \@ifstar{\stepcounter{lchapter}%
-   \bgroup\def\tmp{*}\H@chapter}{\bgroup\def\tmp{}\H@chapter}}\fi
-\newcommand{\H@chapter}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hchapter\tmp}
-
-\renewcommand{\section}{\resetsubsections
- \@ifstar{\stepcounter{lsection}\bgroup\def\tmp{*}%
-   \H@section}{\bgroup\def\tmp{}\H@section}}
-\newcommand{\H@section}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsection\tmp}
-
-\renewcommand{\subsection}{\resetsubsubsections
- \@ifstar{\stepcounter{lsubsection}\bgroup\def\tmp{*}%
-   \H@subsection}{\bgroup\def\tmp{}\H@subsection}}
-\newcommand{\H@subsection}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubsection\tmp}
-
-\renewcommand{\subsubsection}{\resetparagraphs
- \@ifstar{\stepcounter{lsubsubsection}\bgroup\def\tmp{*}%
-   \H@subsubsection}{\bgroup\def\tmp{}\H@subsubsection}}
-\newcommand{\H@subsubsection}[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubsubsection\tmp}
-
-\renewcommand{\paragraph}{\resetsubparagraphs
- \@ifstar{\stepcounter{lparagraph}\bgroup\def\tmp{*}%
-   \H@paragraph}{\bgroup\def\tmp{}\H@paragraph}}
-\newcommand\H@paragraph[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hparagraph\tmp}
-
-\renewcommand{\subparagraph}{\resetsubsubparagraphs
- \@ifstar{\stepcounter{lsubparagraph}\bgroup\def\tmp{*}%
-   \H@subparagraph}{\bgroup\def\tmp{}\H@subparagraph}}
-\newcommand\H@subparagraph[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubparagraph\tmp}
-
-\ifx\Hsubsubparagraph\relax\else\@ifundefined{subsubparagraph}{}{%
-\def\subsubparagraph{%
- \@ifstar{\stepcounter{lsubsubparagraph}\bgroup\def\tmp{*}%
-   \H@subsubparagraph}{\bgroup\def\tmp{}\H@subsubparagraph}}}\fi
-\newcommand\H@subsubparagraph[1][]{\def\tmp@a{#1}\check@align
- \expandafter\egroup\expandafter\Hsubsubparagraph\tmp}
-
-\def\check@align{\def\empty{}\ifx\tmp@a\empty
- \else\def\tmp@b{center}\ifx\tmp@a\tmp@b\let\tmp@a\empty
- \else\def\tmp@b{left}\ifx\tmp@a\tmp@b\let\tmp@a\empty
- \else\def\tmp@b{right}\ifx\tmp@a\tmp@b\let\tmp@a\empty
- \else\expandafter\def\expandafter\tmp@a\expandafter{\expandafter[\tmp@a]}%
- \fi\fi\fi \def\empty{}\ifx\tmp\empty\let\tmp=\tmp@a \else 
-  \expandafter\def\expandafter\tmp\expandafter{\expandafter*\tmp@a}%
- \fi\fi}
-%
-\def\resetsections{\setcounter{section}{0}\setcounter{lsection}{0}%
- \reset@dependents{section}\resetsubsections }
-\def\resetsubsections{\setcounter{subsection}{0}\setcounter{lsubsection}{0}%
- \reset@dependents{subsection}\resetsubsubsections }
-\def\resetsubsubsections{\setcounter{subsubsection}{0}\setcounter{lsubsubsection}{0}%
- \reset@dependents{subsubsection}\resetparagraphs }
-%
-\def\resetparagraphs{\setcounter{lparagraph}{0}\setcounter{lparagraph}{0}%
- \reset@dependents{paragraph}\resetsubparagraphs }
-\def\resetsubparagraphs{\ifx\c@subparagraph\undefined\else
-  \setcounter{subparagraph}{0}\fi \setcounter{lsubparagraph}{0}%
- \reset@dependents{subparagraph}\resetsubsubparagraphs }
-\def\resetsubsubparagraphs{\ifx\c@subsubparagraph\undefined\else
-  \setcounter{subsubparagraph}{0}\fi \setcounter{lsubsubparagraph}{0}}
-%
-\def\reset@dependents#1{\begingroup\let \@elt \@stpelt
- \csname cl@#1\endcsname\endgroup}
-%
-%
-%  Define a helper macro to dump a single \secounter command to a file.
-%
-\newcommand{\DumpPtr}[2]{%
-\count255=\arabic{#1}\def\dummy{dummy}\def\tmp{#2}%
-\ifx\tmp\dummy\else\advance\count255 by \arabic{#2}\fi
-\immediate\write\ptrfile{%
-\noexpand\setcounter{#1}{\number\count255}}}
-
-%
-%  Define a helper macro to dump all counters to the file.
-%  The value for each counter will be the sum of the l-counter
-%      actual LaTeX section counter.
-%  Also dump an \htmlhead{section-command}{section title} command
-%      to the file.
-%
-\newwrite\ptrfile
-\def\DumpCounters#1#2#3#4{%
-\begingroup\let\protect=\noexpand
-\immediate\openout\ptrfile = #1.ptr
-\DumpPtr{part}{lpart}%
-\ifx\Hchapter\relax\else\DumpPtr{chapter}{lchapter}\fi
-\DumpPtr{section}{lsection}%
-\DumpPtr{subsection}{lsubsection}%
-\DumpPtr{subsubsection}{lsubsubsection}%
-\DumpPtr{paragraph}{lparagraph}%
-\DumpPtr{subparagraph}{lsubparagraph}%
-\DumpPtr{equation}{lequation}%
-\DumpPtr{footnote}{dummy}%
-\def\tmp{#4}\ifx\tmp\@empty
-\immediate\write\ptrfile{\noexpand\htmlhead{#2}{#3}}\else
-\immediate\write\ptrfile{\noexpand\htmlhead[#4]{#2}{#3}}\fi
-\dumpcitestatus \dumpcurrentcolor
-\immediate\closeout\ptrfile
-\endgroup }
-
-
-%% interface to natbib.sty
-
-\def\dumpcitestatus{}
-\def\loadcitestatus{\def\dumpcitestatus{%
-  \ifciteindex\immediate\write\ptrfile{\noexpand\citeindextrue}%
-  \else\immediate\write\ptrfile{\noexpand\citeindexfalse}\fi }%
-}
-\@ifpackageloaded{natbib}{\loadcitestatus}{%
- \AtBeginDocument{\@ifpackageloaded{natbib}{\loadcitestatus}{}}}
-
-
-%% interface to color.sty
-
-\def\dumpcurrentcolor{}
-\def\loadsegmentcolors{%
- \let\real@pagecolor=\pagecolor
- \let\pagecolor\segmentpagecolor
- \let\segmentcolor\color
- \ifx\current@page@color\undefined \def\current@page@color{{}}\fi
- \def\dumpcurrentcolor{\bgroup\def\@empty@{{}}%
-   \expandafter\def\expandafter\tmp\space####1@{\def\thiscol{####1}}%
-  \ifx\current@color\@empty@\def\thiscol{}\else
-   \expandafter\tmp\current@color @\fi
-  \immediate\write\ptrfile{\noexpand\segmentcolor{\thiscol}}%
-  \ifx\current@page@color\@empty@\def\thiscol{}\else
-   \expandafter\tmp\current@page@color @\fi
-  \immediate\write\ptrfile{\noexpand\segmentpagecolor{\thiscol}}%
- \egroup}%
- \global\let\loadsegmentcolors=\relax
-}
-
-% These macros are needed within  images.tex  since this inputs
-% the <segment>.ptr files for a segment, so that counters are
-% colors are synchronised.
-%
-\newcommand{\segmentpagecolor}[1][]{%
- \@ifpackageloaded{color}{\loadsegmentcolors\bgroup
-  \def\tmp{#1}\ifx\@empty\tmp\def\next{[]}\else\def\next{[#1]}\fi
-  \expandafter\segmentpagecolor@\next}%
- {\@gobble}}
-\def\segmentpagecolor@[#1]#2{\def\tmp{#1}\def\tmpB{#2}%
- \ifx\tmpB\@empty\let\next=\egroup
- \else
-  \let\realendgroup=\endgroup
-  \def\endgroup{\edef\next{\noexpand\realendgroup
-   \def\noexpand\current@page@color{\current@color}}\next}%
-  \ifx\tmp\@empty\real@pagecolor{#2}\def\model{}%
-  \else\real@pagecolor[#1]{#2}\def\model{[#1]}%
-  \fi
-  \edef\next{\egroup\def\noexpand\current@page@color{\current@page@color}%
-  \noexpand\real@pagecolor\model{#2}}%
- \fi\next}
-%
-\newcommand{\segmentcolor}[2][named]{\@ifpackageloaded{color}%
- {\loadsegmentcolors\segmentcolor[#1]{#2}}{}}
-
-\@ifpackageloaded{color}{\loadsegmentcolors}{\let\real@pagecolor=\@gobble
- \AtBeginDocument{\@ifpackageloaded{color}{\loadsegmentcolors}{}}}
-
-
-%  Define the \segment[align]{file}{section-command}{section-title} command,
-%  and its helper macros.  This command does four things:
-%       1)  Begins a new LaTeX section;
-%       2)  Writes a list of section counters to file.ptr, each
-%           of which represents the sum of the LaTeX section
-%           counters, and the l-counters, defined above;
-%       3)  Write an \htmlhead{section-title} command to file.ptr;
-%       4)  Inputs file.tex.
-
-\def\segment{\@ifstar{\@@htmls}{\@@html}}
-\def\endsegment{}
-\newcommand{\@@htmls}[1][]{\@@htmlsx{#1}}
-\newcommand{\@@html}[1][]{\@@htmlx{#1}}
-\def\@@htmlsx#1#2#3#4{\csname #3\endcsname* {#4}%
-                   \DumpCounters{#2}{#3*}{#4}{#1}\input{#2}}
-\def\@@htmlx#1#2#3#4{\csname #3\endcsname {#4}%
-                   \DumpCounters{#2}{#3}{#4}{#1}\input{#2}}
-
-\makeatother
-\endinput
-
-
-% Modifications:
-%
-% (The listing of Initiales see Changes)
-
-% $Log: html.sty,v $
-% Revision 1.23  1998/02/26 10:32:24  latex2html
-%  --  use \providecommand for  \latextohtml
-%  --  implemented \HTMLcode to do what \HTML did previously
-% 	\HTML still works, unless already defined by another package
-%  --  fixed problems remaining with undefined \chapter
-%  --  defined \endsegment
-%
-% Revision 1.22  1997/12/05 11:38:18  RRM
-%  --  implemented an optional argument to \begin for style-sheet info.
-%  --  modified use of an optional argument with sectioning-commands
-%
-% Revision 1.21  1997/11/05 10:28:56  RRM
-%  --  replaced redefinition of \@htmlrule with \htmlrulestar
-%
-% Revision 1.20  1997/10/28 02:15:58  RRM
-%  --  altered the way some special html-macros are defined, so that
-% 	star-variants are explicitly defined for LaTeX
-% 	 -- it is possible for these to occur within  images.tex
-% 	e.g. \htmlinfostar \htmlrulestar \tableofchildlinksstar
-%
-% Revision 1.19  1997/10/11 05:47:48  RRM
-%  --  allow the dummy {tex2html_nowrap} environment in LaTeX
-% 	use it to make its contents be evaluated in environment order
-%
-% Revision 1.18  1997/10/04 06:56:50  RRM
-%  --  uses Robin Fairbairns' code for ignored environments,
-%      replacing the previous  comment.sty  stuff.
-%  --  extensions to the \tableofchildlinks command
-%  --  extensions to the \htmlinfo command
-%
-% Revision 1.17  1997/07/08 11:23:39  RRM
-%     include value of footnote counter in .ptr files for segments
-%
-% Revision 1.16  1997/07/03 08:56:34  RRM
-%     use \textup  within the \latextohtml macro
-%
-% Revision 1.15  1997/06/15 10:24:58  RRM
-%      new command  \htmltracenv  as environment-ordered \htmltracing
-%
-% Revision 1.14  1997/06/06 10:30:37  RRM
-%  -   new command:  \htmlborder  puts environment into a <TABLE> cell
-%      with a border of specified width, + other attributes.
-%  -   new commands: \HTML  for setting arbitrary HTML tags, with attributes
-%                    \HTMLset  for setting Perl variables, while processing
-%                    \HTMLsetenv  same as \HTMLset , but it gets processed
-%                                 as if it were an environment.
-%  -   new command:  \latextohtml  --- to set the LaTeX2HTML name/logo
-%  -   fixed some remaining problems with \segmentcolor & \segmentpagecolor
-%
-% Revision 1.13  1997/05/19 13:55:46  RRM
-%      alterations and extra options to  \hypercite
-%
-% Revision 1.12  1997/05/09 12:28:39  RRM
-%  -  Added the optional argument to \htmlhead, also in \DumpCounters
-%  -  Implemented \HTMLset as a no-op in LaTeX.
-%  -  Fixed a bug in accessing the page@color settings.
-%
-% Revision 1.11  1997/03/26 09:32:40  RRM
-%  -  Implements LaTeX versions of  \externalcite  and  \hypercite  commands.
-%     Thanks to  Uffe Engberg  and  Stephen Simpson  for the suggestions.
-%
-% Revision 1.10  1997/03/06 07:37:58  RRM
-% Added the  \htmltracing  command, for altering  $VERBOSITY .
-%
-% Revision 1.9  1997/02/17 02:26:26  RRM
-% - changes to counter handling (RRM)
-% - shuffled around some definitions
-% - changed \htmlrule of 209 mode
-%
-% Revision 1.8  1997/01/26 09:04:12  RRM
-% RRM: added optional argument to sectioning commands
-%      \htmlbase  sets the <BASE HREF=...> tag
-%      \htmlinfo  and  \htmlinfo* allow the document info to be positioned
-%
-% Revision 1.7  1997/01/03 12:15:44  L2HADMIN
-% % - fixes to the  color  and  natbib  interfaces
-% % - extended usage of  \hyperref, via an optional argument.
-% % - extended use comment environments to allow shifting expansions
-% %     e.g. within \multicolumn  (`bug' reported by Luc De Coninck).
-% % - allow optional argument to: \htmlimage, \htmlhead,
-% %     \htmladdimg, \htmladdnormallink, \htmladdnormallinkfoot
-% % - added new commands: \htmlbody, \htmlnohead
-% % - added new command: \tableofchildlinks
-%
-% Revision 1.6  1996/12/25 03:04:54  JCL
-% added patches to segment feature from Martin Wilck
-%
-% Revision 1.5  1996/12/23 01:48:06  JCL
-%  o introduced the environment makeimage, which may be used to force
-%    LaTeX2HTML to generate an image from the contents.
-%    There's no magic, all what we have now is a defined empty environment
-%    which LaTeX2HTML will not recognize and thus pass it to images.tex.
-%  o provided \protect to the \htmlrule commands to allow for usage
-%    within captions.
-%
-% Revision 1.4  1996/12/21 19:59:22  JCL
-% - shuffled some entries
-% - added \latexhtml command
-%
-% Revision 1.3  1996/12/21 12:22:59  JCL
-% removed duplicate \htmlrule, changed \htmlrule back not to create a \hrule
-% to allow occurrence in caption
-%
-% Revision 1.2  1996/12/20 04:03:41  JCL
-% changed occurrence of \makeatletter, \makeatother
-% added new \htmlrule command both for the LaTeX2.09 and LaTeX2e
-% sections
-%
-%
-% jcl 30-SEP-96
-%  - Stuck the commands commonly used by both LaTeX versions to the top,
-%    added a check which stops input or reads further if the document
-%    makes use of LaTeX2e.
-%  - Introduced rrm's \dumpcurrentcolor and \bodytext
-% hws 31-JAN-96 - Added support for document segmentation
-% hws 10-OCT-95 - Added \htmlrule command
-% jz 22-APR-94 - Added support for htmlref
-% nd  - Created
diff --git a/docs/pythfilter.py b/docs/pythfilter.py
deleted file mode 100644
index 3054f7c..0000000
--- a/docs/pythfilter.py
+++ /dev/null
@@ -1,658 +0,0 @@
-#!/usr/bin/env python
-
-# pythfilter.py v1.5.5, written by Matthias Baas (baas@ira.uka.de)
-
-# Doxygen filter which can be used to document Python source code.
-# Classes (incl. methods) and functions can be documented.
-# Every comment that begins with ## is literally turned into an
-# Doxygen comment. Consecutive comment lines are turned into
-# comment blocks (-> /** ... */).
-# All the stuff is put inside a namespace with the same name as
-# the source file.
-
-# Conversions:
-# ============
-# ##-blocks                  ->  /** ... */
-# "class name(base): ..."    ->  "class name : public base {...}"
-# "def name(params): ..."    ->  "name(params) {...}"
-
-# Changelog:
-# 21.01.2003: Raw (r"") or unicode (u"") doc string will now be properly
-#             handled. (thanks to Richard Laager for the patch)
-# 22.12.2003: Fixed a bug where no function names would be output for "def"
-#             blocks that were not in a class.
-#             (thanks to Richard Laager for the patch)
-# 12.12.2003: Implemented code to handle static and class methods with
-#             this logic: Methods with "self" as the first argument are
-#             non-static. Methods with "cls" are Python class methods,
-#             which translate into static methods for Doxygen. Other
-#             methods are assumed to be static methods. As should be
-#             obvious, this logic doesn't take into account if the method
-#             is actually setup as a classmethod() or a staticmethod(),
-#             just if it follows the normal conventions.
-#             (thanks to Richard Laager for the patch)
-# 11.12.2003: Corrected #includes to use os.path.sep instead of ".". Corrected
-#             namespace code to use "::" instead of ".".
-#             (thanks to Richard Laager for the patch)
-# 11.12.2003: Methods beginning with two underscores that end with
-#             something other than two underscores are considered private
-#             and are handled accordingly.
-#             (thanks to Richard Laager for the patch)
-# 03.12.2003: The first parameter of class methods (self) is removed from
-#             the documentation.
-# 03.11.2003: The module docstring will be used as namespace documentation
-#             (thanks to Joe Bronkema for the patch)
-# 08.07.2003: Namespaces get a default documentation so that the namespace
-#             and its contents will show up in the generated documentation.
-# 05.02.2003: Directories will be delted during synchronization.
-# 31.01.2003: -f option & filtering entire directory trees.
-# 10.08.2002: In base classes the '.' will be replaced by '::'
-# 18.07.2002: * and ** will be translated into arguments
-# 18.07.2002: Argument lists may contain default values using constructors.
-# 18.06.2002: Support for ## public:
-# 21.01.2002: from ... import will be translated to "using namespace ...;"
-#             TODO: "from ... import *" vs "from ... import names"
-#             TODO: Using normal imports: name.name -> name::name
-# 20.01.2002: #includes will be placed in front of the namespace
-
-######################################################################
-
-# The program is written as a state machine with the following states:
-#
-# - OUTSIDE               The current position is outside any comment,
-#                         class definition or function.
-#
-# - BUILD_COMMENT         Begins with first "##".
-#                         Ends with the first token that is no "##"
-#                         at the same column as before.
-#
-# - BUILD_CLASS_DECL      Begins with "class".
-#                         Ends with ":"
-# - BUILD_CLASS_BODY      Begins just after BUILD_CLASS_DECL.
-#                         The first following token (which is no comment)
-#                         determines indentation depth.
-#                         Ends with a token that has a smaller indendation.
-#
-# - BUILD_DEF_DECL        Begins with "def".
-#                         Ends with ":".
-# - BUILD_DEF_BODY        Begins just after BUILD_DEF_DECL.
-#                         The first following token (which is no comment)
-#                         determines indentation depth.
-#                         Ends with a token that has a smaller indendation.
-
-import getopt
-import glob
-import os.path
-import re
-import shutil
-import string
-import sys
-import token
-import tokenize
-
-from stat import *
-
-OUTSIDE          = 0
-BUILD_COMMENT    = 1
-BUILD_CLASS_DECL = 2
-BUILD_CLASS_BODY = 3
-BUILD_DEF_DECL   = 4
-BUILD_DEF_BODY   = 5
-IMPORT           = 6
-IMPORT_OP        = 7
-IMPORT_APPEND    = 8
-
-# Output file stream
-outfile = sys.stdout
-
-# Output buffer
-outbuffer = []
-
-out_row = 1
-out_col = 0
-
-# Variables used by rec_name_n_param()
-name         = ""
-param        = ""
-doc_string   = ""
-record_state = 0
-bracket_counter = 0
-
-# Tuple: (row,column)
-class_spos  = (0,0)
-def_spos    = (0,0)
-import_spos = (0,0)
-
-# Which import was used? ("import" or "from")
-import_token = ""
-
-# Comment block buffer
-comment_block = []
-comment_finished = 0
-
-# Imported modules
-modules = []
-
-# Program state
-stateStack = [OUTSIDE]
-
-# Keep track of whether module has a docstring
-module_has_docstring = False
-
-# Keep track of member protection
-protection_level = "public"
-private_member = False
-
-# Keep track of the module namespace
-namespace = ""
-
-######################################################################
-# Output string s. '\n' may only be at the end of the string (not
-# somewhere in the middle).
-#
-# In: s    - String
-#     spos - Startpos
-######################################################################
-def output(s,spos, immediate=0):
-    global outbuffer, out_row, out_col, outfile
-
-    os = string.rjust(s,spos[1]-out_col+len(s))
-
-    if immediate:
-        outfile.write(os)
-    else:
-        outbuffer.append(os)
-
-    assert -1 == string.find(s[0:-2], "\n"), s
-
-    if (s[-1:]=="\n"):
-        out_row = out_row+1
-        out_col = 0
-    else:
-        out_col = spos[1]+len(s)
-
-
-######################################################################
-# Records a name and parameters. The name is either a class name or
-# a function name. Then the parameter is either the base class or
-# the function parameters.
-# The name is stored in the global variable "name", the parameters
-# in "param".
-# The variable "record_state" holds the current state of this internal
-# state machine.
-# The recording is started by calling start_recording().
-#
-# In: type, tok
-######################################################################
-def rec_name_n_param(type, tok):
-    global record_state,name,param,doc_string,bracket_counter
-    s = record_state
-    # State 0: Do nothing.
-    if   (s==0):
-         return
-    # State 1: Remember name.
-    elif (s==1):
-        name = tok
-        record_state = 2
-    # State 2: Wait for opening bracket or colon
-    elif (s==2):
-        if (tok=='('):
-            bracket_counter = 1
-            record_state=3
-        if (tok==':'): record_state=4
-    # State 3: Store parameter (or base class) and wait for an ending bracket
-    elif (s==3):
-        if (tok=='*' or tok=='**'):
-            tok=''
-        if (tok=='('):
-            bracket_counter = bracket_counter+1
-        if (tok==')'):
-            bracket_counter = bracket_counter-1
-        if bracket_counter==0:
-            record_state=4
-        else:
-            param=param+tok
-    # State 4: Look for doc string
-    elif (s==4):
-        if (type==token.NEWLINE or type==token.INDENT or type==token.SLASHEQUAL):
-            return
-        elif (tok==":"):
-            return
-        elif (type==token.STRING):
-            while tok[:1]=='r' or tok[:1]=='u':
-                tok=tok[1:]
-            while tok[:1]=='"':
-                tok=tok[1:]
-            while tok[-1:]=='"':
-                tok=tok[:-1]
-            doc_string=tok
-        record_state=0
-
-######################################################################
-# Starts the recording of a name & param part.
-# The function rec_name_n_param() has to be fed with tokens. After
-# the necessary tokens are fed the name and parameters can be found
-# in the global variables "name" und "param".
-######################################################################
-def start_recording():
-    global record_state,param,name, doc_string
-    record_state=1
-    name=""
-    param=""
-    doc_string=""
-
-######################################################################
-# Test if recording is finished
-######################################################################
-def is_recording_finished():
-    global record_state
-    return record_state==0
-
-######################################################################
-## Gather comment block
-######################################################################
-def gather_comment(type,tok,spos):
-    global comment_block,comment_finished
-    if (type!=tokenize.COMMENT):
-        comment_finished = 1
-    else:
-        # Output old comment block if a new one is started.
-        if (comment_finished):
-            print_comment(spos)
-            comment_finished=0
-        if (tok[0:2]=="##" and tok[0:3]!="###"):
-            append_comment_lines(tok[2:])
-
-######################################################################
-## Output comment block and empty buffer.
-######################################################################
-def print_comment(spos):
-    global comment_block,comment_finished
-    if (comment_block!=[]):
-        output("/** ",spos)
-        for c in comment_block:
-            output(c,spos)
-        output("*/\n",spos)
-    comment_block    = []
-    comment_finished = 0
-
-######################################################################
-def set_state(s):
-    global stateStack
-    stateStack[len(stateStack)-1]=s
-
-######################################################################
-def get_state():
-    global stateStack
-    return stateStack[len(stateStack)-1]
-
-######################################################################
-def push_state(s):
-    global stateStack
-    stateStack.append(s)
-
-######################################################################
-def pop_state():
-    global stateStack
-    stateStack.pop()
-
-
-######################################################################
-def tok_eater(type, tok, spos, epos, line):
-    global stateStack,name,param,class_spos,def_spos,import_spos
-    global doc_string, modules, import_token, module_has_docstring
-    global protection_level, private_member
-    global out_row
-
-    while out_row + 1 < spos[0]:
-        output("\n", (0, 0))
-
-    rec_name_n_param(type,tok)
-    if (string.replace(string.strip(tok)," ","")=="##private:"):
-         protection_level = "private"
-         output("private:\n",spos)
-    elif (string.replace(string.strip(tok)," ","")=="##protected:"):
-         protection_level = "protected"
-         output("protected:\n",spos)
-    elif (string.replace(string.strip(tok)," ","")=="##public:"):
-         protection_level = "public"
-         output("public:\n",spos)
-    else:
-         gather_comment(type,tok,spos)
-
-    state = get_state()
-
-#    sys.stderr.write("%d: %s\n"%(state, tok))
-
-    # OUTSIDE
-    if   (state==OUTSIDE):
-        if  (tok=="class"):
-            start_recording()
-            class_spos = spos
-            push_state(BUILD_CLASS_DECL)
-        elif (tok=="def"):
-            start_recording()
-            def_spos = spos
-            push_state(BUILD_DEF_DECL)
-        elif (tok=="import") or (tok=="from"):
-            import_token = tok
-            import_spos = spos
-            modules     = []
-            push_state(IMPORT)
-        elif (spos[1] == 0 and tok[:3] == '"""'):
-            # Capture module docstring as namespace documentation
-            module_has_docstring = True
-            append_comment_lines("\\namespace %s\n" % namespace)
-            append_comment_lines(tok[3:-3])
-            print_comment(spos)
-
-    # IMPORT
-    elif (state==IMPORT):
-        if (type==token.NAME):
-            modules.append(tok)
-            set_state(IMPORT_OP)
-    # IMPORT_OP
-    elif (state==IMPORT_OP):
-        if (tok=="."):
-            set_state(IMPORT_APPEND)
-        elif (tok==","):
-            set_state(IMPORT)
-        else:
-            for m in modules:
-                output('#include "'+m.replace('.',os.path.sep)+'.py"\n', import_spos, immediate=1)
-                if import_token=="from":
-                    output('using namespace '+m.replace('.', '::')+';\n', import_spos)
-            pop_state()
-    # IMPORT_APPEND
-    elif (state==IMPORT_APPEND):
-        if (type==token.NAME):
-            modules[len(modules)-1]+="."+tok
-            set_state(IMPORT_OP)
-    # BUILD_CLASS_DECL
-    elif (state==BUILD_CLASS_DECL):
-        if (is_recording_finished()):
-            s = "class "+name
-            if (param!=""): s = s+" : public "+param.replace('.','::')
-            if (doc_string!=""):
-                append_comment_lines(doc_string)
-            print_comment(class_spos)
-            output(s+"\n",class_spos)
-            output("{\n",(class_spos[0]+1,class_spos[1]))
-            protection_level = "public"
-            output("  public:\n",(class_spos[0]+2,class_spos[1]))
-            set_state(BUILD_CLASS_BODY)
-    # BUILD_CLASS_BODY
-    elif (state==BUILD_CLASS_BODY):
-        if (type!=token.INDENT and type!=token.NEWLINE and type!=40 and
-            type!=tokenize.NL and type!=tokenize.COMMENT and
-            (spos[1]<=class_spos[1])):
-            output("}; // end of class\n",(out_row+1,class_spos[1]))
-            pop_state()
-        elif (tok=="def"):
-            start_recording()
-            def_spos = spos
-            push_state(BUILD_DEF_DECL)
-    # BUILD_DEF_DECL
-    elif (state==BUILD_DEF_DECL):
-        if (is_recording_finished()):
-            param = param.replace("\n", " ")
-            param = param.replace("=", " = ")
-            params = param.split(",")
-            if BUILD_CLASS_BODY in stateStack:
-                if len(name) > 1 \
-                   and name[0:2] == '__' \
-                   and name[len(name)-2:len(name)] != '__' \
-                   and protection_level != 'private':
-                       private_member = True
-                       output("  private:\n",(def_spos[0]+2,def_spos[1]))
-
-            if (doc_string != ""):
-                append_comment_lines(doc_string)
-
-            print_comment(def_spos)
-
-            output_function_decl(name, params)
-#       output("{\n",(def_spos[0]+1,def_spos[1]))
-            set_state(BUILD_DEF_BODY)
-    # BUILD_DEF_BODY
-    elif (state==BUILD_DEF_BODY):
-        if (type!=token.INDENT and type!=token.NEWLINE \
-            and type!=40 and type!=tokenize.NL \
-            and (spos[1]<=def_spos[1])):
-#            output("} // end of method/function\n",(out_row+1,def_spos[1]))
-            if private_member and protection_level != 'private':
-                private_member = False
-                output("  " + protection_level + ":\n",(def_spos[0]+2,def_spos[1]))
-            pop_state()
-#       else:
-#            output(tok,spos)
-
-
-def output_function_decl(name, params):
-    global def_spos
-
-    # Do we document a class method? then remove the 'self' parameter
-    if params[0] == 'self':
-        preamble = ''
-        params = params[1:]
-    else:
-        preamble = 'static '
-        if params[0] == 'cls':
-            params = params[1:]
-
-    param_string = string.join(params, ", Type ")
-
-    if param_string == '':
-        param_string = '(' + param_string + ');\n'
-    else:
-        param_string = '(Type ' + param_string + ');\n'
-
-    output(preamble, def_spos)
-    output(name, def_spos)
-    output(param_string, def_spos)
-
-
-def append_comment_lines(lines):
-    map(append_comment_line, doc_string.split('\n'))
-
-paramRE = re.compile(r'(@param \w+):')
-
-def append_comment_line(line):
-    global paramRE
-    
-    comment_block.append(paramRE.sub(r'\1', line) + '\n')
-
-def dump(filename):
-    f = open(filename)
-    r = f.readlines()
-    for s in r:
-        sys.stdout.write(s)
-
-def filter(filename):
-    global name, module_has_docstring, source_root
-
-    path,name = os.path.split(filename)
-    root,ext  = os.path.splitext(name)
-
-    if source_root and path.find(source_root) == 0:
-        path = path[len(source_root):]
-
-        if path[0] == os.sep:
-            path = path[1:]
-
-        ns = path.split(os.sep)
-    else:
-        ns = []
-
-    ns.append(root)
-
-    for n in ns:
-        output("namespace " + n + " {\n",(0,0))
-
-    # set module name for tok_eater to use if there's a module doc string
-    name = root
-
-#    sys.stderr.write('Filtering "'+filename+'"...')
-    f = open(filename)
-    tokenize.tokenize(f.readline, tok_eater)
-    f.close()
-    print_comment((0,0))
-
-    output("\n",(0,0))
-    
-    for n in ns:
-        output("}  // end of namespace\n",(0,0))
-
-    if not module_has_docstring:
-        # Put in default namespace documentation
-        output('/** \\namespace '+root+' \n',(0,0))
-        output('    \\brief Module "%s" */\n'%(root),(0,0))
-
-    for s in outbuffer:
-        outfile.write(s)
-
-
-def filterFile(filename, out=sys.stdout):
-    global outfile
-
-    outfile = out
-
-    try:
-        root,ext  = os.path.splitext(filename)
-
-        if ext==".py":
-            filter(filename)
-        else:
-            dump(filename)
-
-#        sys.stderr.write("OK\n")
-    except IOError,e:
-        sys.stderr.write(e[1]+"\n")
-
-
-######################################################################
-
-# preparePath
-def preparePath(path):
-    """Prepare a path.
-
-    Checks if the path exists and creates it if it does not exist.
-    """
-    if not os.path.exists(path):
-        parent = os.path.dirname(path)
-        if parent!="":
-            preparePath(parent)
-        os.mkdir(path)
-
-# isNewer
-def isNewer(file1,file2):
-    """Check if file1 is newer than file2.
-
-    file1 must be an existing file.
-    """
-    if not os.path.exists(file2):
-        return True
-    return os.stat(file1)[ST_MTIME]>os.stat(file2)[ST_MTIME]
-
-# convert
-def convert(srcpath, destpath):
-    """Convert a Python source tree into a C+ stub tree.
-
-    All *.py files in srcpath (including sub-directories) are filtered
-    and written to destpath. If destpath exists, only the files
-    that have been modified are filtered again. Files that were deleted
-    from srcpath are also deleted in destpath if they are still present.
-    The function returns the number of processed *.py files.
-    """
-    count=0
-    sp = os.path.join(srcpath,"*")
-    sfiles = glob.glob(sp)
-    dp = os.path.join(destpath,"*")
-    dfiles = glob.glob(dp)
-    leftovers={}
-    for df in dfiles:
-        leftovers[os.path.basename(df)]=1
-
-    for srcfile in sfiles:
-        basename = os.path.basename(srcfile)
-        if basename in leftovers:
-            del leftovers[basename]
-
-        # Is it a subdirectory?
-        if os.path.isdir(srcfile):
-            sdir = os.path.join(srcpath,basename)
-            ddir = os.path.join(destpath,basename)
-            count+=convert(sdir, ddir)
-            continue
-        # Check the extension (only *.py will be converted)
-        root, ext = os.path.splitext(srcfile)
-        if ext.lower()!=".py":
-            continue
-
-        destfile = os.path.join(destpath,basename)
-        if destfile==srcfile:
-            print "WARNING: Input and output names are identical!"
-            sys.exit(1)
-
-        count+=1
-#        sys.stdout.write("%s\015"%(srcfile))
-
-        if isNewer(srcfile, destfile):
-            preparePath(os.path.dirname(destfile))
-#            out=open(destfile,"w")
-#            filterFile(srcfile, out)
-#            out.close()
-            os.system("python %s -f %s>%s"%(sys.argv[0],srcfile,destfile))
-
-    # Delete obsolete files in destpath
-    for df in leftovers:
-        dname=os.path.join(destpath,df)
-        if os.path.isdir(dname):
-            try:
-                shutil.rmtree(dname)
-            except:
-                print "Can't remove obsolete directory '%s'"%dname
-        else:
-            try:
-                os.remove(dname)
-            except:
-                print "Can't remove obsolete file '%s'"%dname
-
-    return count
-
-
-######################################################################
-######################################################################
-######################################################################
-
-filter_file = False
-source_root = None
-
-try:
-    opts, args = getopt.getopt(sys.argv[1:], "hfr:", ["help"])
-except getopt.GetoptError,e:
-    print e
-    sys.exit(1)
-
-for o,a in opts:
-    if o=="-f":
-        filter_file = True
-
-    if o=="-r":
-        source_root = os.path.abspath(a)
-
-if filter_file:
-    # Filter the specified file and print the result to stdout
-    filename = string.join(args)
-    filterFile(os.path.abspath(filename))
-else:
-
-    if len(args)!=2:
-        sys.stderr.write("%s options input output\n"%(os.path.basename(sys.argv[0])))
-        sys.exit(1)
-
-    # Filter an entire Python source tree
-    print '"%s" -> "%s"\n'%(args[0],args[1])
-    c=convert(args[0],args[1])
-    print "%d files"%(c)
-
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 14:56:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 14:56:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzLR-0007p1-Hi; Fri, 07 Dec 2012 14:56:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgzLP-0007oe-JQ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:56:03 +0000
Received: from [193.109.254.147:53686] by server-11.bemta-14.messagelabs.com
	id 0E/1F-29027-28302C05; Fri, 07 Dec 2012 14:56:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354892151!9303018!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11352 invoked from network); 7 Dec 2012 14:55:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:55:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216786660"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 14:55:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 09:55:11 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgzKY-0004T4-Is;
	Fri, 07 Dec 2012 14:55:10 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 14:55:10 +0000
Message-ID: <1354892110-31108-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
References: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>,
	Ian Campbell <ian.campbell@citrix.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] docs: check for documentation generation
	tools in docs/configure.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SXQgaXMgc29tZXRpbWVzIGhhcmQgdG8gZGlzY292ZXIgYWxsIHRoZSBvcHRpb25hbCB0b29scyB0
aGF0IHNob3VsZCBiZQpvbiBhIHN5c3RlbSB0byBidWlsZCBhbGwgYXZhaWxhYmxlIFhlbiBkb2N1
bWVudGF0aW9uLiBCeSBjaGVja2luZyBmb3IKZG9jdW1lbnRhdGlvbiBnZW5lcmF0aW9uIHRvb2xz
IGF0IC4vY29uZmlndXJlIHRpbWUgYW5kIGRpc3BsYXlpbmcgYQp3YXJuaW5nLCBYZW4gcGFja2Fn
ZXJzIHdpbGwgbW9yZSBlYXNpbHkgbGVhcm4gYWJvdXQgbmV3IG9wdGlvbmFsIGJ1aWxkCmRlcGVu
ZGVuY2llcywgbGlrZSBtYXJrZG93biwgd2hlbiB0aGV5IGFyZSBpbnRyb2R1Y2VkLgoKQmFzZWQg
b24gYSBwYXRjaCBieSBNYXR0IFdpbHNvbi4gQ2hhbmdlZCB0byB1c2UgYSBzZXBhcmF0ZQpkb2Nz
L2NvbmZpZ3VyZSB3aGljaCBpcyBjYWxsZWQgZnJvbSB0aGUgdG9wLWxldmVsIGluIHRoZSBzYW1l
IG1hbm5lcgphcyBzdHViZG9tcy4KClJlcnVuIGF1dG9nZW4uc2ggYW5kICJnaXQgYWRkIGRvY3Mv
Y29uZmlndXJlIiBhZnRlciBhcHBseWluZyB0aGlzIHBhdGNoLgoKU2lnbmVkLW9mZi1ieTogTWF0
dCBXaWxzb24gPG1zd0BhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gQ2FtcGJlbGwgPGlh
bi5jYW1wYmVsbEBjaXRyaXguY29tPgpDYzogIkZpb3JhdmFudGUsIE1hdHRoZXcgRS4iIDxNYXR0
aGV3LkZpb3JhdmFudGVAamh1YXBsLmVkdT4KQ2M6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBh
dUBjaXRyaXguY29tPgotLS0KQXBwbGllcyBvbiB0b3Agb2YgTWF0dGhldydzICJBZGQgYXV0b2Nv
bmYgdG8gc3R1YmRvbSIgYW5kICJBZGQgYSB0b3AKbGV2ZWwgY29uZmlndXJlIHNjcmlwdCIuCgp2
MjogLSBMQVRFWDJIVE1MIGlzIHVudXNlZAogICAgLSBObyBtb3JlIFhlbkFQSSBvciBEb3h5Z2Vu
IG9yIGFzc29jaWF0ZWQgdG9vbHMuCi0tLQogLmdpdGlnbm9yZSAgICAgICAgIHwgICAgMSArCiAu
aGdpZ25vcmUgICAgICAgICAgfCAgICAxICsKIFJFQURNRSAgICAgICAgICAgICB8ICAgIDIgKy0K
IGF1dG9nZW4uc2ggICAgICAgICB8ICAgMTUgKysrKysrKystLS0tCiBjb25maWcvRG9jcy5tay5p
biAgfCAgIDEzICsrKysrKysrKysKIGNvbmZpZ3VyZSAgICAgICAgICB8ICAgIDQgKy0KIGNvbmZp
Z3VyZS5hYyAgICAgICB8ICAgIDIgKy0KIGRvY3MvRG9jcy5tayAgICAgICB8ICAgIDYgLS0tLQog
ZG9jcy9NYWtlZmlsZSAgICAgIHwgICA2NSArKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrLS0tLS0tLS0tLS0tLS0KIGRvY3MvY29uZmlndXJlLmFjICB8ICAgMjAgKysrKysrKysr
KysrKysrKwogZG9jcy9maWdzL01ha2VmaWxlIHwgICAgMiArLQogbTQvZG9jc190b29sLm00ICAg
IHwgICAxNyArKysrKysrKysrKysrCiAxMiBmaWxlcyBjaGFuZ2VkLCAxMTQgaW5zZXJ0aW9ucygr
KSwgMzQgZGVsZXRpb25zKC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgY29uZmlnL0RvY3MubWsuaW4K
IGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2NzL0RvY3MubWsKIGNyZWF0ZSBtb2RlIDEwMDY0NCBkb2Nz
L2NvbmZpZ3VyZS5hYwogY3JlYXRlIG1vZGUgMTAwNjQ0IG00L2RvY3NfdG9vbC5tNAoKZGlmZiAt
LWdpdCBhLy5naXRpZ25vcmUgYi8uZ2l0aWdub3JlCmluZGV4IDQ2Y2U2M2EuLmE0Y2RkNmMgMTAw
NjQ0Ci0tLSBhLy5naXRpZ25vcmUKKysrIGIvLmdpdGlnbm9yZQpAQCAtMTIwLDYgKzEyMCw3IEBA
IGNvbmZpZy5zdGF0dXMKIGNvbmZpZy5jYWNoZQogY29uZmlnL1Rvb2xzLm1rCiBjb25maWcvU3R1
YmRvbS5taworY29uZmlnL0RvY3MubWsKIHRvb2xzL2Jsa3RhcDIvZGFlbW9uL2Jsa3RhcGN0cmwK
IHRvb2xzL2Jsa3RhcDIvZHJpdmVycy9pbWcycWNvdwogdG9vbHMvYmxrdGFwMi9kcml2ZXJzL2xv
Y2stdXRpbApkaWZmIC0tZ2l0IGEvLmhnaWdub3JlIGIvLmhnaWdub3JlCmluZGV4IDAzOTJhNTYu
LmRhM2E3ZTYgMTAwNjQ0Ci0tLSBhLy5oZ2lnbm9yZQorKysgYi8uaGdpZ25vcmUKQEAgLTMxMiw2
ICszMTIsNyBAQAogXnRvb2xzL2NvbmZpZ1wuY2FjaGUkCiBeY29uZmlnL1Rvb2xzXC5tayQKIF5j
b25maWcvU3R1YmRvbVwubWskCiteY29uZmlnL0RvY3NcLm1rJAogXnhlbi9cLmJhbm5lci4qJAog
Xnhlbi9CTE9HJAogXnhlbi9TeXN0ZW0ubWFwJApkaWZmIC0tZ2l0IGEvUkVBRE1FIGIvUkVBRE1F
CmluZGV4IGY1ZDU1MzAuLjg4NDAxZjcgMTAwNjQ0Ci0tLSBhL1JFQURNRQorKysgYi9SRUFETUUK
QEAgLTU3LDcgKzU3LDYgQEAgcHJvdmlkZWQgYnkgeW91ciBPUyBkaXN0cmlidXRvcjoKICAgICAq
IEdOVSBnZXR0ZXh0CiAgICAgKiAxNi1iaXQgeDg2IGFzc2VtYmxlciwgbG9hZGVyIGFuZCBjb21w
aWxlciAoZGV2ODYgcnBtIG9yIGJpbjg2ICYgYmNjIGRlYnMpCiAgICAgKiBBQ1BJIEFTTCBjb21w
aWxlciAoaWFzbCkKLSAgICAqIG1hcmtkb3duCiAKIEluIGFkZGl0aW9uIHRvIHRoZSBhYm92ZSB0
aGVyZSBhcmUgYSBudW1iZXIgb2Ygb3B0aW9uYWwgYnVpbGQKIHByZXJlcXVpc2l0ZXMuIE9taXR0
aW5nIHRoZXNlIHdpbGwgY2F1c2UgdGhlIHJlbGF0ZWQgZmVhdHVyZXMgdG8gYmUKQEAgLTY1LDYg
KzY0LDcgQEAgZGlzYWJsZWQgYXQgY29tcGlsZSB0aW1lOgogICAgICogRGV2ZWxvcG1lbnQgaW5z
dGFsbCBvZiBPY2FtbCAoZS5nLiBvY2FtbC1ub3ggYW5kCiAgICAgICBvY2FtbC1maW5kbGliKS4g
UmVxdWlyZWQgdG8gYnVpbGQgb2NhbWwgY29tcG9uZW50cyB3aGljaAogICAgICAgaW5jbHVkZXMg
dGhlIGFsdGVybmF0aXZlIG9jYW1sIHhlbnN0b3JlZC4KKyAgICAqIG1hcmtkb3duCiAKIFNlY29u
ZCwgeW91IG5lZWQgdG8gYWNxdWlyZSBhIHN1aXRhYmxlIGtlcm5lbCBmb3IgdXNlIGluIGRvbWFp
biAwLiBJZgogcG9zc2libGUgeW91IHNob3VsZCB1c2UgYSBrZXJuZWwgcHJvdmlkZWQgYnkgeW91
ciBPUyBkaXN0cmlidXRvci4gSWYKZGlmZiAtLWdpdCBhL2F1dG9nZW4uc2ggYi9hdXRvZ2VuLnNo
CmluZGV4IDE0NTZkOTQuLmI1Yzk2ODggMTAwNzU1Ci0tLSBhL2F1dG9nZW4uc2gKKysrIGIvYXV0
b2dlbi5zaApAQCAtMSw3ICsxLDEyIEBACiAjIS9iaW4vc2ggLWUKIGF1dG9jb25mCi1jZCB0b29s
cwotYXV0b2NvbmYKLWF1dG9oZWFkZXIKLWNkIC4uL3N0dWJkb20KLWF1dG9jb25mCisoIGNkIHRv
b2xzCisgIGF1dG9jb25mCisgIGF1dG9oZWFkZXIKKykKKyggY2Qgc3R1YmRvbQorICBhdXRvY29u
ZgorKQorKCBjZCBkb2NzCisgIGF1dG9jb25mCispCmRpZmYgLS1naXQgYS9jb25maWcvRG9jcy5t
ay5pbiBiL2NvbmZpZy9Eb2NzLm1rLmluCm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAw
MDAuLjAyNGVmMjAKLS0tIC9kZXYvbnVsbAorKysgYi9jb25maWcvRG9jcy5tay5pbgpAQCAtMCww
ICsxLDEzIEBACisjIFByZWZpeCBhbmQgaW5zdGFsbCBmb2xkZXIKK3ByZWZpeCAgICAgICAgICAg
ICAgOj0gQHByZWZpeEAKK1BSRUZJWCAgICAgICAgICAgICAgOj0gJChwcmVmaXgpCitleGVjX3By
ZWZpeCAgICAgICAgIDo9IEBleGVjX3ByZWZpeEAKK2xpYmRpciAgICAgICAgICAgICAgOj0gQGxp
YmRpckAKK0xJQkRJUiAgICAgICAgICAgICAgOj0gJChsaWJkaXIpCisKKyMgVG9vbHMKK0ZJRzJE
RVYgICAgICAgICAgICAgOj0gQEZJRzJERVZACitQT0QyTUFOICAgICAgICAgICAgIDo9IEBQT0Qy
TUFOQAorUE9EMkhUTUwgICAgICAgICAgICA6PSBAUE9EMkhUTUxACitQT0QyVEVYVCAgICAgICAg
ICAgIDo9IEBQT0QyVEVYVEAKK01BUktET1dOICAgICAgICAgICAgOj0gQE1BUktET1dOQApkaWZm
IC0tZ2l0IGEvY29uZmlndXJlIGIvY29uZmlndXJlCmluZGV4IDY0OTcwOGYuLmEzMDdmM2EgMTAw
NzU1Ci0tLSBhL2NvbmZpZ3VyZQorKysgYi9jb25maWd1cmUKQEAgLTYwNiw3ICs2MDYsNyBAQCBl
bmFibGVfb3B0aW9uX2NoZWNraW5nCiAgICAgICBhY19wcmVjaW91c192YXJzPSdidWlsZF9hbGlh
cwogaG9zdF9hbGlhcwogdGFyZ2V0X2FsaWFzJwotYWNfc3ViZGlyc19hbGw9J3Rvb2xzIHN0dWJk
b20nCithY19zdWJkaXJzX2FsbD0ndG9vbHMgZG9jcyBzdHViZG9tJwogCiAjIEluaXRpYWxpemUg
c29tZSB2YXJpYWJsZXMgc2V0IGJ5IG9wdGlvbnMuCiBhY19pbml0X2hlbHA9CkBAIC0xNjc1LDcg
KzE2NzUsNyBAQCBhY19jb25maWd1cmU9IiRTSEVMTCAkYWNfYXV4X2Rpci9jb25maWd1cmUiICAj
IFBsZWFzZSBkb24ndCB1c2UgdGhpcyB2YXIuCiAKIAogCi1zdWJkaXJzPSIkc3ViZGlycyB0b29s
cyBzdHViZG9tIgorc3ViZGlycz0iJHN1YmRpcnMgdG9vbHMgZG9jcyBzdHViZG9tIgogCiAKIGNh
dCA+Y29uZmNhY2hlIDw8XF9BQ0VPRgpkaWZmIC0tZ2l0IGEvY29uZmlndXJlLmFjIGIvY29uZmln
dXJlLmFjCmluZGV4IDA0OTdkOTcuLjYzN2IzNWIgMTAwNjQ0Ci0tLSBhL2NvbmZpZ3VyZS5hYwor
KysgYi9jb25maWd1cmUuYWMKQEAgLTYsNiArNiw2IEBAIEFDX0lOSVQoW1hlbiBIeXBlcnZpc29y
XSwgbTRfZXN5c2NtZChbLi92ZXJzaW9uLnNoIC4veGVuL01ha2VmaWxlXSksCiAgICAgW3hlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnXSwgW3hlbl0sIFtodHRwOi8vd3d3Lnhlbi5vcmcvXSkKIEFDX0NP
TkZJR19TUkNESVIoWy4veGVuL2NvbW1vbi9rZXJuZWwuY10pCiAKLUFDX0NPTkZJR19TVUJESVJT
KFt0b29scyBzdHViZG9tXSkKK0FDX0NPTkZJR19TVUJESVJTKFt0b29scyBkb2NzIHN0dWJkb21d
KQogCiBBQ19PVVRQVVQoKQpkaWZmIC0tZ2l0IGEvZG9jcy9Eb2NzLm1rIGIvZG9jcy9Eb2NzLm1r
CmRlbGV0ZWQgZmlsZSBtb2RlIDEwMDY0NAppbmRleCBkYjNjMTlkLi4wMDAwMDAwCi0tLSBhL2Rv
Y3MvRG9jcy5taworKysgL2Rldi9udWxsCkBAIC0xLDYgKzAsMCBAQAotRklHMkRFVgkJOj0gZmln
MmRldgotTEFURVgySFRNTAk6PSBsYXRleDJodG1sCi1QT0QyTUFOCQk6PSBwb2QybWFuCi1QT0Qy
SFRNTAk6PSBwb2QyaHRtbAotUE9EMlRFWFQJOj0gcG9kMnRleHQKLU1BUktET1dOCTo9IG1hcmtk
b3duCmRpZmYgLS1naXQgYS9kb2NzL01ha2VmaWxlIGIvZG9jcy9NYWtlZmlsZQppbmRleCAwNTNk
N2FmLi5iYjJjYjk4IDEwMDY0NAotLS0gYS9kb2NzL01ha2VmaWxlCisrKyBiL2RvY3MvTWFrZWZp
bGUKQEAgLTIsNyArMiw3IEBACiAKIFhFTl9ST09UPSQoQ1VSRElSKS8uLgogaW5jbHVkZSAkKFhF
Tl9ST09UKS9Db25maWcubWsKLWluY2x1ZGUgJChYRU5fUk9PVCkvZG9jcy9Eb2NzLm1rCitpbmNs
dWRlICQoWEVOX1JPT1QpL2NvbmZpZy9Eb2NzLm1rCiAKIFZFUlNJT04JCT0geGVuLXVuc3RhYmxl
CiAKQEAgLTMyLDIxICszMiwyNyBAQCBodG1sOiAkKERPQ19IVE1MKSBodG1sL2luZGV4Lmh0bWwK
IAogLlBIT05ZOiB0eHQKIHR4dDoKLQlAaWYgd2hpY2ggJChQT0QyVEVYVCkgMT4vZGV2L251bGwg
Mj4vZGV2L251bGw7IHRoZW4gXAotCSQoTUFLRSkgJChET0NfVFhUKTsgZWxzZSAgICAgICAgICAg
ICAgXAotCWVjaG8gInBvZDJ0ZXh0IG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIHRleHQgb3V0cHV0
cy4iOyBmaQoraWZkZWYgUE9EMlRFWFQKKwkkKE1BS0UpICQoRE9DX1RYVCkKK2Vsc2UKKwlAZWNo
byAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgdGV4dCBvdXRwdXRzLiIKK2VuZGlm
CiAKIC5QSE9OWTogZmlncwogZmlnczoKLQlAc2V0IC1lIDsgaWYgd2hpY2ggJChGSUcyREVWKSAx
Pi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0Jc2V0IC14OyAkKE1BS0UpIC1DIGZpZ3Mg
OyBlbHNlICAgICAgICAgICAgICAgICAgIFwKLQllY2hvICJmaWcyZGV2ICh0cmFuc2ZpZykgbm90
IGluc3RhbGxlZDsgc2tpcHBpbmcgZmlncy4iOyBmaQoraWZkZWYgRklHMkRFVgorCXNldCAteDsg
JChNQUtFKSAtQyBmaWdzCitlbHNlCisJQGVjaG8gImZpZzJkZXYgKHRyYW5zZmlnKSBub3QgaW5z
dGFsbGVkOyBza2lwcGluZyBmaWdzLiIKK2VuZGlmCiAKIC5QSE9OWTogbWFuLXBhZ2VzCiBtYW4t
cGFnZXM6Ci0JQGlmIHdoaWNoICQoUE9EMk1BTikgMT4vZGV2L251bGwgMj4vZGV2L251bGw7IHRo
ZW4gXAotCSQoTUFLRSkgJChET0NfTUFOMSkgJChET0NfTUFONSk7IGVsc2UgICAgICAgICAgICAg
IFwKLQllY2hvICJwb2QybWFuIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIG1hbi1wYWdlcy4iOyBm
aQoraWZkZWYgUE9EMk1BTgorCSQoTUFLRSkgJChET0NfTUFOMSkgJChET0NfTUFONSkKK2Vsc2UK
KwlAZWNobyAicG9kMm1hbiBub3QgaW5zdGFsbGVkOyBza2lwcGluZyBtYW4tcGFnZXMuIgorZW5k
aWYKIAogbWFuMS8lLjE6IG1hbi8lLnBvZC4xIE1ha2VmaWxlCiAJJChJTlNUQUxMX0RJUikgJChA
RCkKQEAgLTY5LDYgKzc1LDcgQEAgY2xlYW46CiAKIC5QSE9OWTogZGlzdGNsZWFuCiBkaXN0Y2xl
YW46IGNsZWFuCisJcm0gLXJmIC4uL2NvbmZpZy9Eb2NzLm1rIGNvbmZpZy5sb2cgY29uZmlnLnN0
YXR1cyBhdXRvbTR0ZS5jYWNoZQogCiAuUEhPTlk6IGluc3RhbGwKIGluc3RhbGw6IGFsbApAQCAt
ODQsMzAgKzkxLDQwIEBAIGh0bWwvaW5kZXguaHRtbDogJChET0NfSFRNTCkgLi9nZW4taHRtbC1p
bmRleCBJTkRFWAogCXBlcmwgLXcgLS0gLi9nZW4taHRtbC1pbmRleCAtaSBJTkRFWCBodG1sICQo
RE9DX0hUTUwpCiAKIGh0bWwvJS5odG1sOiAlLm1hcmtkb3duCi0JQCQoSU5TVEFMTF9ESVIpICQo
QEQpCi0JQHNldCAtZSA7IGlmIHdoaWNoICQoTUFSS0RPV04pIDE+L2Rldi9udWxsIDI+L2Rldi9u
dWxsOyB0aGVuIFwKLQllY2hvICJSdW5uaW5nIG1hcmtkb3duIHRvIGdlbmVyYXRlICQqLmh0bWwg
Li4uICI7IFwKKwkkKElOU1RBTExfRElSKSAkKEBEKQoraWZkZWYgTUFSS0RPV04KKwlAZWNobyAi
UnVubmluZyBtYXJrZG93biB0byBnZW5lcmF0ZSAkKi5odG1sIC4uLiAiCiAJJChNQVJLRE9XTikg
JDwgPiAkQC50bXAgOyBcCi0JJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEApIDsgZWxz
ZSBcCi0JZWNobyAibWFya2Rvd24gbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJCouaHRtbC4iOyBm
aQorCSQoY2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKQorZWxzZQorCUBlY2hvICJtYXJr
ZG93biBub3QgaW5zdGFsbGVkOyBza2lwcGluZyAkKi5odG1sLiIKK2VuZGlmCiAKIGh0bWwvJS50
eHQ6ICUudHh0Ci0JQCQoSU5TVEFMTF9ESVIpICQoQEQpCisJJChJTlNUQUxMX0RJUikgJChARCkK
IAljcCAkPCAkQAogCiBodG1sL21hbi8lLjEuaHRtbDogbWFuLyUucG9kLjEgTWFrZWZpbGUKIAkk
KElOU1RBTExfRElSKSAkKEBEKQoraWZkZWYgUE9EMkhUTUwKIAkkKFBPRDJIVE1MKSAtLWluZmls
ZT0kPCAtLW91dGZpbGU9JEAudG1wCiAJJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEAp
CitlbHNlCisJQGVjaG8gInBvZDJodG1sIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQ8LiIKK2Vu
ZGlmCiAKIGh0bWwvbWFuLyUuNS5odG1sOiBtYW4vJS5wb2QuNSBNYWtlZmlsZQogCSQoSU5TVEFM
TF9ESVIpICQoQEQpCitpZmRlZiBQT0QySFRNTAogCSQoUE9EMkhUTUwpIC0taW5maWxlPSQ8IC0t
b3V0ZmlsZT0kQC50bXAKIAkkKGNhbGwgbW92ZS1pZi1jaGFuZ2VkLCRALnRtcCwkQCkKK2Vsc2UK
KwlAZWNobyAicG9kMmh0bWwgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJDwuIgorZW5kaWYKIAog
aHRtbC9oeXBlcmNhbGwvaW5kZXguaHRtbDogLi94ZW4taGVhZGVycwogCXJtIC1yZiAkKEBEKQot
CUAkKElOU1RBTExfRElSKSAkKEBEKQorCSQoSU5TVEFMTF9ESVIpICQoQEQpCiAJLi94ZW4taGVh
ZGVycyAtTyAkKEBEKSBcCiAJCS1UICdhcmNoLXg4Nl82NCAtIFhlbiBwdWJsaWMgaGVhZGVycycg
XAogCQktWCBhcmNoLWlhNjQgLVggYXJjaC14ODZfMzIgLVggeGVuLXg4Nl8zMiAtWCBhcmNoLWFy
bSBcCkBAIC0xMjcsMTEgKzE0NCwyMyBAQCB0eHQvJS50eHQ6ICUubWFya2Rvd24KIAogdHh0L21h
bi8lLjEudHh0OiBtYW4vJS5wb2QuMSBNYWtlZmlsZQogCSQoSU5TVEFMTF9ESVIpICQoQEQpCitp
ZmRlZiBQT0QyVEVYVAogCSQoUE9EMlRFWFQpICQ8ICRALnRtcAogCSQoY2FsbCBtb3ZlLWlmLWNo
YW5nZWQsJEAudG1wLCRAKQorZWxzZQorCUBlY2hvICJwb2QydGV4dCBub3QgaW5zdGFsbGVkOyBz
a2lwcGluZyAkPC4iCitlbmRpZgogCiB0eHQvbWFuLyUuNS50eHQ6IG1hbi8lLnBvZC41IE1ha2Vm
aWxlCiAJJChJTlNUQUxMX0RJUikgJChARCkKK2lmZGVmIFBPRDJURVhUCiAJJChQT0QyVEVYVCkg
JDwgJEAudG1wCiAJJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEApCi0KK2Vsc2UKKwlA
ZWNobyAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJDwuIgorZW5kaWYKKworaWZl
cSAoLCQoZmluZHN0cmluZyBjbGVhbiwkKE1BS0VDTURHT0FMUykpKQorJChYRU5fUk9PVCkvY29u
ZmlnL0RvY3MubWs6CisJJChlcnJvciBZb3UgaGF2ZSB0byBydW4gLi9jb25maWd1cmUgYmVmb3Jl
IGJ1aWxkaW5nIGRvY3MpCitlbmRpZgpkaWZmIC0tZ2l0IGEvZG9jcy9jb25maWd1cmUuYWMgYi9k
b2NzL2NvbmZpZ3VyZS5hYwpuZXcgZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi5lYTA1
NTJlCi0tLSAvZGV2L251bGwKKysrIGIvZG9jcy9jb25maWd1cmUuYWMKQEAgLTAsMCArMSwyMCBA
QAorIyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgLSotIEF1
dG9jb25mIC0qLQorIyBQcm9jZXNzIHRoaXMgZmlsZSB3aXRoIGF1dG9jb25mIHRvIHByb2R1Y2Ug
YSBjb25maWd1cmUgc2NyaXB0LgorCitBQ19QUkVSRVEoWzIuNjddKQorQUNfSU5JVChbWGVuIEh5
cGVydmlzb3IgRG9jdW1lbnRhdGlvbl0sIG00X2VzeXNjbWQoWy4uL3ZlcnNpb24uc2ggLi4veGVu
L01ha2VmaWxlXSksCisgICAgW3hlbi1kZXZlbEBsaXN0cy54ZW4ub3JnXSwgW3hlbl0sIFtodHRw
Oi8vd3d3Lnhlbi5vcmcvXSkKK0FDX0NPTkZJR19TUkNESVIoW21pc2MveGVuLWNvbW1hbmQtbGlu
ZS5tYXJrZG93bl0pCitBQ19DT05GSUdfRklMRVMoWy4uL2NvbmZpZy9Eb2NzLm1rXSkKK0FDX0NP
TkZJR19BVVhfRElSKFsuLi9dKQorCisjIE00IE1hY3JvIGluY2x1ZGVzCittNF9pbmNsdWRlKFsu
Li9tNC9kb2NzX3Rvb2wubTRdKQorCitBWF9ET0NTX1RPT0xfUFJPRyhbRklHMkRFVl0sIFtmaWcy
ZGV2XSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtQT0QyTUFOXSwgW3BvZDJtYW5dKQorQVhfRE9DU19U
T09MX1BST0coW1BPRDJIVE1MXSwgW3BvZDJodG1sXSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtQT0Qy
VEVYVF0sIFtwb2QydGV4dF0pCitBWF9ET0NTX1RPT0xfUFJPR1MoW01BUktET1dOXSwgW21hcmtk
b3duXSwgW21hcmtkb3duIG1hcmtkb3duX3B5XSkKKworQUNfT1VUUFVUKCkKZGlmZiAtLWdpdCBh
L2RvY3MvZmlncy9NYWtlZmlsZSBiL2RvY3MvZmlncy9NYWtlZmlsZQppbmRleCA1ZWNkYWUzLi5m
NzgyZGMxIDEwMDY0NAotLS0gYS9kb2NzL2ZpZ3MvTWFrZWZpbGUKKysrIGIvZG9jcy9maWdzL01h
a2VmaWxlCkBAIC0xLDcgKzEsNyBAQAogCiBYRU5fUk9PVD0kKENVUkRJUikvLi4vLi4KIGluY2x1
ZGUgJChYRU5fUk9PVCkvQ29uZmlnLm1rCi1pbmNsdWRlICQoWEVOX1JPT1QpL2RvY3MvRG9jcy5t
aworaW5jbHVkZSAkKFhFTl9ST09UKS9jb25maWcvRG9jcy5tawogCiBUQVJHRVRTPSBuZXR3b3Jr
LWJyaWRnZS5wbmcgbmV0d29yay1iYXNpYy5wbmcKIApkaWZmIC0tZ2l0IGEvbTQvZG9jc190b29s
Lm00IGIvbTQvZG9jc190b29sLm00Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAwMDAu
LjNlODgxNGEKLS0tIC9kZXYvbnVsbAorKysgYi9tNC9kb2NzX3Rvb2wubTQKQEAgLTAsMCArMSwx
NyBAQAorQUNfREVGVU4oW0FYX0RPQ1NfVE9PTF9QUk9HXSwgWworZG5sCisgICAgQUNfQVJHX1ZB
UihbJDFdLCBbUGF0aCB0byAkMiB0b29sXSkKKyAgICBBQ19QQVRIX1BST0coWyQxXSwgWyQyXSkK
KyAgICBBU19JRihbISB0ZXN0IC14ICIkYWNfY3ZfcGF0aF8kMSJdLCBbCisgICAgICAgIEFDX01T
R19XQVJOKFskMiBpcyBub3QgYXZhaWxhYmxlIHNvIHNvbWUgZG9jdW1lbnRhdGlvbiB3b24ndCBi
ZSBidWlsdF0pCisgICAgXSkKK10pCisKK0FDX0RFRlVOKFtBWF9ET0NTX1RPT0xfUFJPR1NdLCBb
CitkbmwKKyAgICBBQ19BUkdfVkFSKFskMV0sIFtQYXRoIHRvICQyIHRvb2xdKQorICAgIEFDX1BB
VEhfUFJPR1MoWyQxXSwgWyQzXSkKKyAgICBBU19JRihbISB0ZXN0IC14ICIkYWNfY3ZfcGF0aF8k
MSJdLCBbCisgICAgICAgIEFDX01TR19XQVJOKFskMiBpcyBub3QgYXZhaWxhYmxlIHNvIHNvbWUg
ZG9jdW1lbnRhdGlvbiB3b24ndCBiZSBidWlsdF0pCisgICAgXSkKK10pCi0tIAoxLjcuMi41CgoK
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVs
IG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9y
Zy94ZW4tZGV2ZWwK

LmRhM2E3ZTYgMTAwNjQ0Ci0tLSBhLy5oZ2lnbm9yZQorKysgYi8uaGdpZ25vcmUKQEAgLTMxMiw2
ICszMTIsNyBAQAogXnRvb2xzL2NvbmZpZ1wuY2FjaGUkCiBeY29uZmlnL1Rvb2xzXC5tayQKIF5j
b25maWcvU3R1YmRvbVwubWskCiteY29uZmlnL0RvY3NcLm1rJAogXnhlbi9cLmJhbm5lci4qJAog
Xnhlbi9CTE9HJAogXnhlbi9TeXN0ZW0ubWFwJApkaWZmIC0tZ2l0IGEvUkVBRE1FIGIvUkVBRE1F
CmluZGV4IGY1ZDU1MzAuLjg4NDAxZjcgMTAwNjQ0Ci0tLSBhL1JFQURNRQorKysgYi9SRUFETUUK
QEAgLTU3LDcgKzU3LDYgQEAgcHJvdmlkZWQgYnkgeW91ciBPUyBkaXN0cmlidXRvcjoKICAgICAq
IEdOVSBnZXR0ZXh0CiAgICAgKiAxNi1iaXQgeDg2IGFzc2VtYmxlciwgbG9hZGVyIGFuZCBjb21w
aWxlciAoZGV2ODYgcnBtIG9yIGJpbjg2ICYgYmNjIGRlYnMpCiAgICAgKiBBQ1BJIEFTTCBjb21w
aWxlciAoaWFzbCkKLSAgICAqIG1hcmtkb3duCiAKIEluIGFkZGl0aW9uIHRvIHRoZSBhYm92ZSB0
aGVyZSBhcmUgYSBudW1iZXIgb2Ygb3B0aW9uYWwgYnVpbGQKIHByZXJlcXVpc2l0ZXMuIE9taXR0
aW5nIHRoZXNlIHdpbGwgY2F1c2UgdGhlIHJlbGF0ZWQgZmVhdHVyZXMgdG8gYmUKQEAgLTY1LDYg
KzY0LDcgQEAgZGlzYWJsZWQgYXQgY29tcGlsZSB0aW1lOgogICAgICogRGV2ZWxvcG1lbnQgaW5z
dGFsbCBvZiBPY2FtbCAoZS5nLiBvY2FtbC1ub3ggYW5kCiAgICAgICBvY2FtbC1maW5kbGliKS4g
UmVxdWlyZWQgdG8gYnVpbGQgb2NhbWwgY29tcG9uZW50cyB3aGljaAogICAgICAgaW5jbHVkZXMg
dGhlIGFsdGVybmF0aXZlIG9jYW1sIHhlbnN0b3JlZC4KKyAgICAqIG1hcmtkb3duCiAKIFNlY29u
ZCwgeW91IG5lZWQgdG8gYWNxdWlyZSBhIHN1aXRhYmxlIGtlcm5lbCBmb3IgdXNlIGluIGRvbWFp
biAwLiBJZgogcG9zc2libGUgeW91IHNob3VsZCB1c2UgYSBrZXJuZWwgcHJvdmlkZWQgYnkgeW91
ciBPUyBkaXN0cmlidXRvci4gSWYKZGlmZiAtLWdpdCBhL2F1dG9nZW4uc2ggYi9hdXRvZ2VuLnNo
CmluZGV4IDE0NTZkOTQuLmI1Yzk2ODggMTAwNzU1Ci0tLSBhL2F1dG9nZW4uc2gKKysrIGIvYXV0
b2dlbi5zaApAQCAtMSw3ICsxLDEyIEBACiAjIS9iaW4vc2ggLWUKIGF1dG9jb25mCi1jZCB0b29s
cwotYXV0b2NvbmYKLWF1dG9oZWFkZXIKLWNkIC4uL3N0dWJkb20KLWF1dG9jb25mCisoIGNkIHRv
b2xzCisgIGF1dG9jb25mCisgIGF1dG9oZWFkZXIKKykKKyggY2Qgc3R1YmRvbQorICBhdXRvY29u
ZgorKQorKCBjZCBkb2NzCisgIGF1dG9jb25mCispCmRpZmYgLS1naXQgYS9jb25maWcvRG9jcy5t
ay5pbiBiL2NvbmZpZy9Eb2NzLm1rLmluCm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAw
MDAuLjAyNGVmMjAKLS0tIC9kZXYvbnVsbAorKysgYi9jb25maWcvRG9jcy5tay5pbgpAQCAtMCww
ICsxLDEzIEBACisjIFByZWZpeCBhbmQgaW5zdGFsbCBmb2xkZXIKK3ByZWZpeCAgICAgICAgICAg
ICAgOj0gQHByZWZpeEAKK1BSRUZJWCAgICAgICAgICAgICAgOj0gJChwcmVmaXgpCitleGVjX3By
ZWZpeCAgICAgICAgIDo9IEBleGVjX3ByZWZpeEAKK2xpYmRpciAgICAgICAgICAgICAgOj0gQGxp
YmRpckAKK0xJQkRJUiAgICAgICAgICAgICAgOj0gJChsaWJkaXIpCisKKyMgVG9vbHMKK0ZJRzJE
RVYgICAgICAgICAgICAgOj0gQEZJRzJERVZACitQT0QyTUFOICAgICAgICAgICAgIDo9IEBQT0Qy
TUFOQAorUE9EMkhUTUwgICAgICAgICAgICA6PSBAUE9EMkhUTUxACitQT0QyVEVYVCAgICAgICAg
ICAgIDo9IEBQT0QyVEVYVEAKK01BUktET1dOICAgICAgICAgICAgOj0gQE1BUktET1dOQApkaWZm
IC0tZ2l0IGEvY29uZmlndXJlIGIvY29uZmlndXJlCmluZGV4IDY0OTcwOGYuLmEzMDdmM2EgMTAw
NzU1Ci0tLSBhL2NvbmZpZ3VyZQorKysgYi9jb25maWd1cmUKQEAgLTYwNiw3ICs2MDYsNyBAQCBl
bmFibGVfb3B0aW9uX2NoZWNraW5nCiAgICAgICBhY19wcmVjaW91c192YXJzPSdidWlsZF9hbGlh
cwogaG9zdF9hbGlhcwogdGFyZ2V0X2FsaWFzJwotYWNfc3ViZGlyc19hbGw9J3Rvb2xzIHN0dWJk
b20nCithY19zdWJkaXJzX2FsbD0ndG9vbHMgZG9jcyBzdHViZG9tJwogCiAjIEluaXRpYWxpemUg
c29tZSB2YXJpYWJsZXMgc2V0IGJ5IG9wdGlvbnMuCiBhY19pbml0X2hlbHA9CkBAIC0xNjc1LDcg
KzE2NzUsNyBAQCBhY19jb25maWd1cmU9IiRTSEVMTCAkYWNfYXV4X2Rpci9jb25maWd1cmUiICAj
IFBsZWFzZSBkb24ndCB1c2UgdGhpcyB2YXIuCiAKIAogCi1zdWJkaXJzPSIkc3ViZGlycyB0b29s
cyBzdHViZG9tIgorc3ViZGlycz0iJHN1YmRpcnMgdG9vbHMgZG9jcyBzdHViZG9tIgogCiAKIGNh
dCA+Y29uZmNhY2hlIDw8XF9BQ0VPRgpkaWZmIC0tZ2l0IGEvY29uZmlndXJlLmFjIGIvY29uZmln
dXJlLmFjCmluZGV4IDA0OTdkOTcuLjYzN2IzNWIgMTAwNjQ0Ci0tLSBhL2NvbmZpZ3VyZS5hYwor
KysgYi9jb25maWd1cmUuYWMKQEAgLTYsNiArNiw2IEBAIEFDX0lOSVQoW1hlbiBIeXBlcnZpc29y
XSwgbTRfZXN5c2NtZChbLi92ZXJzaW9uLnNoIC4veGVuL01ha2VmaWxlXSksCiAgICAgW3hlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnXSwgW3hlbl0sIFtodHRwOi8vd3d3Lnhlbi5vcmcvXSkKIEFDX0NP
TkZJR19TUkNESVIoWy4veGVuL2NvbW1vbi9rZXJuZWwuY10pCiAKLUFDX0NPTkZJR19TVUJESVJT
KFt0b29scyBzdHViZG9tXSkKK0FDX0NPTkZJR19TVUJESVJTKFt0b29scyBkb2NzIHN0dWJkb21d
KQogCiBBQ19PVVRQVVQoKQpkaWZmIC0tZ2l0IGEvZG9jcy9Eb2NzLm1rIGIvZG9jcy9Eb2NzLm1r
CmRlbGV0ZWQgZmlsZSBtb2RlIDEwMDY0NAppbmRleCBkYjNjMTlkLi4wMDAwMDAwCi0tLSBhL2Rv
Y3MvRG9jcy5taworKysgL2Rldi9udWxsCkBAIC0xLDYgKzAsMCBAQAotRklHMkRFVgkJOj0gZmln
MmRldgotTEFURVgySFRNTAk6PSBsYXRleDJodG1sCi1QT0QyTUFOCQk6PSBwb2QybWFuCi1QT0Qy
SFRNTAk6PSBwb2QyaHRtbAotUE9EMlRFWFQJOj0gcG9kMnRleHQKLU1BUktET1dOCTo9IG1hcmtk
b3duCmRpZmYgLS1naXQgYS9kb2NzL01ha2VmaWxlIGIvZG9jcy9NYWtlZmlsZQppbmRleCAwNTNk
N2FmLi5iYjJjYjk4IDEwMDY0NAotLS0gYS9kb2NzL01ha2VmaWxlCisrKyBiL2RvY3MvTWFrZWZp
bGUKQEAgLTIsNyArMiw3IEBACiAKIFhFTl9ST09UPSQoQ1VSRElSKS8uLgogaW5jbHVkZSAkKFhF
Tl9ST09UKS9Db25maWcubWsKLWluY2x1ZGUgJChYRU5fUk9PVCkvZG9jcy9Eb2NzLm1rCitpbmNs
dWRlICQoWEVOX1JPT1QpL2NvbmZpZy9Eb2NzLm1rCiAKIFZFUlNJT04JCT0geGVuLXVuc3RhYmxl
CiAKQEAgLTMyLDIxICszMiwyNyBAQCBodG1sOiAkKERPQ19IVE1MKSBodG1sL2luZGV4Lmh0bWwK
IAogLlBIT05ZOiB0eHQKIHR4dDoKLQlAaWYgd2hpY2ggJChQT0QyVEVYVCkgMT4vZGV2L251bGwg
Mj4vZGV2L251bGw7IHRoZW4gXAotCSQoTUFLRSkgJChET0NfVFhUKTsgZWxzZSAgICAgICAgICAg
ICAgXAotCWVjaG8gInBvZDJ0ZXh0IG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIHRleHQgb3V0cHV0
cy4iOyBmaQoraWZkZWYgUE9EMlRFWFQKKwkkKE1BS0UpICQoRE9DX1RYVCkKK2Vsc2UKKwlAZWNo
byAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgdGV4dCBvdXRwdXRzLiIKK2VuZGlm
CiAKIC5QSE9OWTogZmlncwogZmlnczoKLQlAc2V0IC1lIDsgaWYgd2hpY2ggJChGSUcyREVWKSAx
Pi9kZXYvbnVsbCAyPi9kZXYvbnVsbDsgdGhlbiBcCi0Jc2V0IC14OyAkKE1BS0UpIC1DIGZpZ3Mg
OyBlbHNlICAgICAgICAgICAgICAgICAgIFwKLQllY2hvICJmaWcyZGV2ICh0cmFuc2ZpZykgbm90
IGluc3RhbGxlZDsgc2tpcHBpbmcgZmlncy4iOyBmaQoraWZkZWYgRklHMkRFVgorCXNldCAteDsg
JChNQUtFKSAtQyBmaWdzCitlbHNlCisJQGVjaG8gImZpZzJkZXYgKHRyYW5zZmlnKSBub3QgaW5z
dGFsbGVkOyBza2lwcGluZyBmaWdzLiIKK2VuZGlmCiAKIC5QSE9OWTogbWFuLXBhZ2VzCiBtYW4t
cGFnZXM6Ci0JQGlmIHdoaWNoICQoUE9EMk1BTikgMT4vZGV2L251bGwgMj4vZGV2L251bGw7IHRo
ZW4gXAotCSQoTUFLRSkgJChET0NfTUFOMSkgJChET0NfTUFONSk7IGVsc2UgICAgICAgICAgICAg
IFwKLQllY2hvICJwb2QybWFuIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nIG1hbi1wYWdlcy4iOyBm
aQoraWZkZWYgUE9EMk1BTgorCSQoTUFLRSkgJChET0NfTUFOMSkgJChET0NfTUFONSkKK2Vsc2UK
KwlAZWNobyAicG9kMm1hbiBub3QgaW5zdGFsbGVkOyBza2lwcGluZyBtYW4tcGFnZXMuIgorZW5k
aWYKIAogbWFuMS8lLjE6IG1hbi8lLnBvZC4xIE1ha2VmaWxlCiAJJChJTlNUQUxMX0RJUikgJChA
RCkKQEAgLTY5LDYgKzc1LDcgQEAgY2xlYW46CiAKIC5QSE9OWTogZGlzdGNsZWFuCiBkaXN0Y2xl
YW46IGNsZWFuCisJcm0gLXJmIC4uL2NvbmZpZy9Eb2NzLm1rIGNvbmZpZy5sb2cgY29uZmlnLnN0
YXR1cyBhdXRvbTR0ZS5jYWNoZQogCiAuUEhPTlk6IGluc3RhbGwKIGluc3RhbGw6IGFsbApAQCAt
ODQsMzAgKzkxLDQwIEBAIGh0bWwvaW5kZXguaHRtbDogJChET0NfSFRNTCkgLi9nZW4taHRtbC1p
bmRleCBJTkRFWAogCXBlcmwgLXcgLS0gLi9nZW4taHRtbC1pbmRleCAtaSBJTkRFWCBodG1sICQo
RE9DX0hUTUwpCiAKIGh0bWwvJS5odG1sOiAlLm1hcmtkb3duCi0JQCQoSU5TVEFMTF9ESVIpICQo
QEQpCi0JQHNldCAtZSA7IGlmIHdoaWNoICQoTUFSS0RPV04pIDE+L2Rldi9udWxsIDI+L2Rldi9u
dWxsOyB0aGVuIFwKLQllY2hvICJSdW5uaW5nIG1hcmtkb3duIHRvIGdlbmVyYXRlICQqLmh0bWwg
Li4uICI7IFwKKwkkKElOU1RBTExfRElSKSAkKEBEKQoraWZkZWYgTUFSS0RPV04KKwlAZWNobyAi
UnVubmluZyBtYXJrZG93biB0byBnZW5lcmF0ZSAkKi5odG1sIC4uLiAiCiAJJChNQVJLRE9XTikg
JDwgPiAkQC50bXAgOyBcCi0JJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEApIDsgZWxz
ZSBcCi0JZWNobyAibWFya2Rvd24gbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJCouaHRtbC4iOyBm
aQorCSQoY2FsbCBtb3ZlLWlmLWNoYW5nZWQsJEAudG1wLCRAKQorZWxzZQorCUBlY2hvICJtYXJr
ZG93biBub3QgaW5zdGFsbGVkOyBza2lwcGluZyAkKi5odG1sLiIKK2VuZGlmCiAKIGh0bWwvJS50
eHQ6ICUudHh0Ci0JQCQoSU5TVEFMTF9ESVIpICQoQEQpCisJJChJTlNUQUxMX0RJUikgJChARCkK
IAljcCAkPCAkQAogCiBodG1sL21hbi8lLjEuaHRtbDogbWFuLyUucG9kLjEgTWFrZWZpbGUKIAkk
KElOU1RBTExfRElSKSAkKEBEKQoraWZkZWYgUE9EMkhUTUwKIAkkKFBPRDJIVE1MKSAtLWluZmls
ZT0kPCAtLW91dGZpbGU9JEAudG1wCiAJJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEAp
CitlbHNlCisJQGVjaG8gInBvZDJodG1sIG5vdCBpbnN0YWxsZWQ7IHNraXBwaW5nICQ8LiIKK2Vu
ZGlmCiAKIGh0bWwvbWFuLyUuNS5odG1sOiBtYW4vJS5wb2QuNSBNYWtlZmlsZQogCSQoSU5TVEFM
TF9ESVIpICQoQEQpCitpZmRlZiBQT0QySFRNTAogCSQoUE9EMkhUTUwpIC0taW5maWxlPSQ8IC0t
b3V0ZmlsZT0kQC50bXAKIAkkKGNhbGwgbW92ZS1pZi1jaGFuZ2VkLCRALnRtcCwkQCkKK2Vsc2UK
KwlAZWNobyAicG9kMmh0bWwgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJDwuIgorZW5kaWYKIAog
aHRtbC9oeXBlcmNhbGwvaW5kZXguaHRtbDogLi94ZW4taGVhZGVycwogCXJtIC1yZiAkKEBEKQot
CUAkKElOU1RBTExfRElSKSAkKEBEKQorCSQoSU5TVEFMTF9ESVIpICQoQEQpCiAJLi94ZW4taGVh
ZGVycyAtTyAkKEBEKSBcCiAJCS1UICdhcmNoLXg4Nl82NCAtIFhlbiBwdWJsaWMgaGVhZGVycycg
XAogCQktWCBhcmNoLWlhNjQgLVggYXJjaC14ODZfMzIgLVggeGVuLXg4Nl8zMiAtWCBhcmNoLWFy
bSBcCkBAIC0xMjcsMTEgKzE0NCwyMyBAQCB0eHQvJS50eHQ6ICUubWFya2Rvd24KIAogdHh0L21h
bi8lLjEudHh0OiBtYW4vJS5wb2QuMSBNYWtlZmlsZQogCSQoSU5TVEFMTF9ESVIpICQoQEQpCitp
ZmRlZiBQT0QyVEVYVAogCSQoUE9EMlRFWFQpICQ8ICRALnRtcAogCSQoY2FsbCBtb3ZlLWlmLWNo
YW5nZWQsJEAudG1wLCRAKQorZWxzZQorCUBlY2hvICJwb2QydGV4dCBub3QgaW5zdGFsbGVkOyBz
a2lwcGluZyAkPC4iCitlbmRpZgogCiB0eHQvbWFuLyUuNS50eHQ6IG1hbi8lLnBvZC41IE1ha2Vm
aWxlCiAJJChJTlNUQUxMX0RJUikgJChARCkKK2lmZGVmIFBPRDJURVhUCiAJJChQT0QyVEVYVCkg
JDwgJEAudG1wCiAJJChjYWxsIG1vdmUtaWYtY2hhbmdlZCwkQC50bXAsJEApCi0KK2Vsc2UKKwlA
ZWNobyAicG9kMnRleHQgbm90IGluc3RhbGxlZDsgc2tpcHBpbmcgJDwuIgorZW5kaWYKKworaWZl
cSAoLCQoZmluZHN0cmluZyBjbGVhbiwkKE1BS0VDTURHT0FMUykpKQorJChYRU5fUk9PVCkvY29u
ZmlnL0RvY3MubWs6CisJJChlcnJvciBZb3UgaGF2ZSB0byBydW4gLi9jb25maWd1cmUgYmVmb3Jl
IGJ1aWxkaW5nIGRvY3MpCitlbmRpZgpkaWZmIC0tZ2l0IGEvZG9jcy9jb25maWd1cmUuYWMgYi9k
b2NzL2NvbmZpZ3VyZS5hYwpuZXcgZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi5lYTA1
NTJlCi0tLSAvZGV2L251bGwKKysrIGIvZG9jcy9jb25maWd1cmUuYWMKQEAgLTAsMCArMSwyMCBA
QAorIyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgLSotIEF1
dG9jb25mIC0qLQorIyBQcm9jZXNzIHRoaXMgZmlsZSB3aXRoIGF1dG9jb25mIHRvIHByb2R1Y2Ug
YSBjb25maWd1cmUgc2NyaXB0LgorCitBQ19QUkVSRVEoWzIuNjddKQorQUNfSU5JVChbWGVuIEh5
cGVydmlzb3IgRG9jdW1lbnRhdGlvbl0sIG00X2VzeXNjbWQoWy4uL3ZlcnNpb24uc2ggLi4veGVu
L01ha2VmaWxlXSksCisgICAgW3hlbi1kZXZlbEBsaXN0cy54ZW4ub3JnXSwgW3hlbl0sIFtodHRw
Oi8vd3d3Lnhlbi5vcmcvXSkKK0FDX0NPTkZJR19TUkNESVIoW21pc2MveGVuLWNvbW1hbmQtbGlu
ZS5tYXJrZG93bl0pCitBQ19DT05GSUdfRklMRVMoWy4uL2NvbmZpZy9Eb2NzLm1rXSkKK0FDX0NP
TkZJR19BVVhfRElSKFsuLi9dKQorCisjIE00IE1hY3JvIGluY2x1ZGVzCittNF9pbmNsdWRlKFsu
Li9tNC9kb2NzX3Rvb2wubTRdKQorCitBWF9ET0NTX1RPT0xfUFJPRyhbRklHMkRFVl0sIFtmaWcy
ZGV2XSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtQT0QyTUFOXSwgW3BvZDJtYW5dKQorQVhfRE9DU19U
T09MX1BST0coW1BPRDJIVE1MXSwgW3BvZDJodG1sXSkKK0FYX0RPQ1NfVE9PTF9QUk9HKFtQT0Qy
VEVYVF0sIFtwb2QydGV4dF0pCitBWF9ET0NTX1RPT0xfUFJPR1MoW01BUktET1dOXSwgW21hcmtk
b3duXSwgW21hcmtkb3duIG1hcmtkb3duX3B5XSkKKworQUNfT1VUUFVUKCkKZGlmZiAtLWdpdCBh
L2RvY3MvZmlncy9NYWtlZmlsZSBiL2RvY3MvZmlncy9NYWtlZmlsZQppbmRleCA1ZWNkYWUzLi5m
NzgyZGMxIDEwMDY0NAotLS0gYS9kb2NzL2ZpZ3MvTWFrZWZpbGUKKysrIGIvZG9jcy9maWdzL01h
a2VmaWxlCkBAIC0xLDcgKzEsNyBAQAogCiBYRU5fUk9PVD0kKENVUkRJUikvLi4vLi4KIGluY2x1
ZGUgJChYRU5fUk9PVCkvQ29uZmlnLm1rCi1pbmNsdWRlICQoWEVOX1JPT1QpL2RvY3MvRG9jcy5t
aworaW5jbHVkZSAkKFhFTl9ST09UKS9jb25maWcvRG9jcy5tawogCiBUQVJHRVRTPSBuZXR3b3Jr
LWJyaWRnZS5wbmcgbmV0d29yay1iYXNpYy5wbmcKIApkaWZmIC0tZ2l0IGEvbTQvZG9jc190b29s
Lm00IGIvbTQvZG9jc190b29sLm00Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAwMDAu
LjNlODgxNGEKLS0tIC9kZXYvbnVsbAorKysgYi9tNC9kb2NzX3Rvb2wubTQKQEAgLTAsMCArMSwx
NyBAQAorQUNfREVGVU4oW0FYX0RPQ1NfVE9PTF9QUk9HXSwgWworZG5sCisgICAgQUNfQVJHX1ZB
UihbJDFdLCBbUGF0aCB0byAkMiB0b29sXSkKKyAgICBBQ19QQVRIX1BST0coWyQxXSwgWyQyXSkK
KyAgICBBU19JRihbISB0ZXN0IC14ICIkYWNfY3ZfcGF0aF8kMSJdLCBbCisgICAgICAgIEFDX01T
R19XQVJOKFskMiBpcyBub3QgYXZhaWxhYmxlIHNvIHNvbWUgZG9jdW1lbnRhdGlvbiB3b24ndCBi
ZSBidWlsdF0pCisgICAgXSkKK10pCisKK0FDX0RFRlVOKFtBWF9ET0NTX1RPT0xfUFJPR1NdLCBb
CitkbmwKKyAgICBBQ19BUkdfVkFSKFskMV0sIFtQYXRoIHRvICQyIHRvb2xdKQorICAgIEFDX1BB
VEhfUFJPR1MoWyQxXSwgWyQzXSkKKyAgICBBU19JRihbISB0ZXN0IC14ICIkYWNfY3ZfcGF0aF8k
MSJdLCBbCisgICAgICAgIEFDX01TR19XQVJOKFskMiBpcyBub3QgYXZhaWxhYmxlIHNvIHNvbWUg
ZG9jdW1lbnRhdGlvbiB3b24ndCBiZSBidWlsdF0pCisgICAgXSkKK10pCi0tIAoxLjcuMi41CgoK
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVs
IG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9y
Zy94ZW4tZGV2ZWwK
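The patch body above is base64-encoded; decoded, its docs/Makefile hunks replace runtime `which` probes with GNU make `ifdef` tests on variables that ./configure substitutes (left empty when a tool is missing). A minimal, self-contained sketch of that pattern, assuming GNU make; the /tmp path and the "building text docs" message are illustrative, not from the patch:

```shell
# Write a throwaway Makefile mimicking the ifdef-based tool check the
# patch introduces (file name and success message are hypothetical).
printf 'POD2TEXT :=\n\n.PHONY: txt\ntxt:\nifdef POD2TEXT\n\t@echo "building text docs"\nelse\n\t@echo "pod2text not installed; skipping text outputs."\nendif\n' > /tmp/docs-demo.mk

# POD2TEXT is empty, as configure leaves it when pod2text is not found;
# GNU make's ifdef treats an empty variable as undefined, so the else
# branch runs and the target is skipped with a warning.
make -f /tmp/docs-demo.mk txt
# prints: pod2text not installed; skipping text outputs.

# With the tool available (simulated via a command-line override, which
# takes precedence over the file's empty assignment), the rule runs:
make -f /tmp/docs-demo.mk txt POD2TEXT=pod2text
# prints: building text docs
```

The point of the conversion is that the check happens once, at configure time, instead of shelling out to `which` on every build, and packagers get an AC_MSG_WARN about missing optional tools up front.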

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:00:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzPj-00089z-2B; Fri, 07 Dec 2012 15:00:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TgzPh-00089q-LM
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:00:29 +0000
Received: from [85.158.138.51:31462] by server-11.bemta-3.messagelabs.com id
	E2/8A-19361-C8402C05; Fri, 07 Dec 2012 15:00:28 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1354892424!27830198!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17247 invoked from network); 7 Dec 2012 15:00:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:00:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47004106"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 15:00:20 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1; Fri, 7 Dec 2012
	10:00:20 -0500
Message-ID: <50C20483.4020200@citrix.com>
Date: Fri, 7 Dec 2012 15:00:19 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <36774CA35642C143BCDE93BA0C68DC5702C539CC@dulac>
In-Reply-To: <36774CA35642C143BCDE93BA0C68DC5702C539CC@dulac>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/12 14:12, Aurélien MILLIAT wrote:
>
> Hi all,
>
> I have made some tests to find a good driver for FirePro V8800 on
> windows 7 64bit HVM.
>
> I have been focused on ‘advanced features’: quad buffer and active
> stereoscopy, synchronization …
>
> The results, for all FirePro drivers (of this year); I can’t get the
> quad buffer/active stereoscopy feature.
>
> But they work on a native installation.
>
Can you describe the setup a little more?
How many graphics cards per guest?
How many guests? On how many hosts?
>
> The only driver that allows this feature is a Radeon HD driver
> (Catalyst 12.10 WHQL).
>
> But this driver becomes unstable when an application using active
> stereo and synchronization is closed:
>
> - The synchronization between two computers is lost.
>
> - The CCC can crash when the synchronization is made again.
>
> Someone have any clues about this?
>
I don't know exactly how this works on AMD/ATI graphics cards, but I
worked with synchronisation on other graphics cards about 7 years
ago, so I have some idea of how the various problems are solved.

What I don't quite understand is why it would be different between a
virtual environment and the bare-metal ("native") install. My immediate
guess is that there is a timing difference, for one of three reasons:
1. The IOMMU adds extra delays to the graphics card reading system memory.
2. Interrupt delays due to the hypervisor.
3. Dom0 or other guest domains "stealing" CPU from the guest.
I don't think those are easy to work around (they all inherently
"happen" in a virtual system), but I also don't REALLY understand why
this should cause problems in the first place: there is no
guarantee as to the timing of memory reads, interrupt
latency/responsiveness or CPU availability in Windows, so the same
problem could appear on native systems as well, given "the right"
circumstances.

What exactly is the crash in CCC?

(CCC stands for "Catalyst Control Center", which I think is a Windows
"service" that handles certain requests from the driver that can't be done
in kernel mode [or shouldn't be done in the driver in general].)

--
Mats
>
> Thanks,
>
> Aurelien
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:03:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzSN-0008QW-0Y; Fri, 07 Dec 2012 15:03:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TgzSM-0008QN-2j
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 15:03:14 +0000
Received: from [85.158.139.211:21687] by server-9.bemta-5.messagelabs.com id
	C2/8E-10690-13502C05; Fri, 07 Dec 2012 15:03:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354892592!18025100!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9952 invoked from network); 7 Dec 2012 15:03:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:03:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16226918"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 15:03:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 15:03:12 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgzSK-0008S4-Ac;
	Fri, 07 Dec 2012 15:03:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgzSJ-0003Ck-W1;
	Fri, 07 Dec 2012 15:03:12 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14594-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 15:03:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14594: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14594 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14594/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14570
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14594 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14594/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14570
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:09:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzXx-0000Ut-ST; Fri, 07 Dec 2012 15:09:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TgzXw-0000Un-Rm
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:09:00 +0000
Received: from [85.158.138.51:30976] by server-7.bemta-3.messagelabs.com id
	B6/73-01713-E6602C05; Fri, 07 Dec 2012 15:08:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354892908!19830300!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27480 invoked from network); 7 Dec 2012 15:08:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 15:08:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 15:08:27 +0000
Message-Id: <50C2147C02000078000AEFCC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 15:08:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>
References: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
	<50C1E41402000078000AEED6@nat28.tlf.novell.com>
	<50C1F334.20802@amd.com>
In-Reply-To: <50C1F334.20802@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian.Campbell@citrix.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/ucode: Improve error handling and
 container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 14:46, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> Do we really need what microcode_init() does?

Oh yes, absolutely. How else would boot time microcode loading
work for secondary CPUs without this? Note that the notifier only
deals with the CPU_DEAD case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:10:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzZ9-0000aF-Bp; Fri, 07 Dec 2012 15:10:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgzZ7-0000a4-3J
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:10:13 +0000
Received: from [85.158.137.99:4296] by server-3.bemta-3.messagelabs.com id
	29/73-31566-2D602C05; Fri, 07 Dec 2012 15:10:10 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354893008!12886057!1
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6269 invoked from network); 7 Dec 2012 15:10:08 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:10:08 -0000
Received: by mail-wi0-f181.google.com with SMTP id hm9so380504wib.14
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:10:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=nv3bgEBHwcxopOKS+LKDcEY2toWNsQBasDadwTGH1Io=;
	b=tvIEPVkRgYiTNQNfU/j2q51FSSbsh6Pmj2rbT8nCHsEig3GUCvzMp9C7izonYXpue5
	F54eOE8k0uafshQ413ypWqvzJ0tGXXRXe5p2xuWJJdzJibXLGYsg86l25xON5/Qwx0vj
	/krppijtHjd7MfST2/xWLmKxRDnCSaztxaU9lYO6TuaXjCQnFyL030xdpRL6qN5v/C5y
	XPNGbIoGqrYZxCEqzGV61Tfo3r8yoVEXAz7EsXeaeKKYZiTR6ZKv7E7TvM4TdnzGDjWV
	xl31Kw5DyLPoqssSOLpriRbnslzVebLyRLtIGeF+4KwcXYG7xDkmha6MFs6X4zuIklqZ
	HfWw==
Received: by 10.216.194.170 with SMTP id m42mr2162558wen.30.1354893007999;
	Fri, 07 Dec 2012 07:10:07 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id bd7sm16015537wib.8.2012.12.07.07.10.04
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:10:07 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:09:54 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE7B742.552E7%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] scheduler: fix rate limit range checking
Thread-Index: Ac3UjOlnZZmls3CDqkytKu8U1TQSHg==
In-Reply-To: <50C1DAA302000078000AEE3E@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] scheduler: fix rate limit range checking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/2012 11:01, "Jan Beulich" <JBeulich@suse.com> wrote:

> For one, neither of the two checks permitted the documented value
> of zero (disabling the functionality altogether).
> 
> Second, the range checking of the command line parameter was done by
> the credit scheduler's initialization code, despite it being a generic
> scheduler option.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -835,8 +835,9 @@ csched_sys_cntl(const struct scheduler *
>      case XEN_SYSCTL_SCHEDOP_putinfo:
>          if (params->tslice_ms > XEN_SYSCTL_CSCHED_TSLICE_MAX
>              || params->tslice_ms < XEN_SYSCTL_CSCHED_TSLICE_MIN
> -            || params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
> -            || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN
> +            || (params->ratelimit_us
> +                && (params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
> +                    || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN))
>              || MICROSECS(params->ratelimit_us) > MILLISECS(params->tslice_ms) )
>                  goto out;
>          prv->tslice_ms = params->tslice_ms;
> @@ -1593,17 +1594,6 @@ csched_init(struct scheduler *ops)
>          sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
>      }
>  
> -    if ( sched_ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
> -         || sched_ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN )
> -    {
> -        printk("WARNING: sched_ratelimit_us outside of valid range [%d,%d].\n"
> -               " Resetting to default %u\n",
> -               XEN_SYSCTL_SCHED_RATELIMIT_MIN,
> -               XEN_SYSCTL_SCHED_RATELIMIT_MAX,
> -               SCHED_DEFAULT_RATELIMIT_US);
> -        sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
> -    }
> -
>      prv->tslice_ms = sched_credit_tslice_ms;
>      prv->ticks_per_tslice = CSCHED_TICKS_PER_TSLICE;
>      if ( prv->tslice_ms < prv->ticks_per_tslice )
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1324,6 +1324,18 @@ void __init scheduler_init(void)
>      if ( SCHED_OP(&ops, init) )
>          panic("scheduler returned error on init\n");
>  
> +    if ( sched_ratelimit_us &&
> +         (sched_ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
> +          || sched_ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN) )
> +    {
> +        printk("WARNING: sched_ratelimit_us outside of valid range [%d,%d].\n"
> +               " Resetting to default %u\n",
> +               XEN_SYSCTL_SCHED_RATELIMIT_MIN,
> +               XEN_SYSCTL_SCHED_RATELIMIT_MAX,
> +               SCHED_DEFAULT_RATELIMIT_US);
> +        sched_ratelimit_us = SCHED_DEFAULT_RATELIMIT_US;
> +    }
> +
>      idle_domain = domain_create(DOMID_IDLE, 0, 0);
>      BUG_ON(IS_ERR(idle_domain));
>      idle_domain->vcpu = idle_vcpu;
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:10:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzZC-0000b0-UO; Fri, 07 Dec 2012 15:10:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgzZC-0000ag-2h
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:10:18 +0000
Received: from [85.158.137.99:4736] by server-10.bemta-3.messagelabs.com id
	EF/FD-19806-9D602C05; Fri, 07 Dec 2012 15:10:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354893008!12886057!2
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6981 invoked from network); 7 Dec 2012 15:10:16 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:10:16 -0000
Received: by mail-wi0-f181.google.com with SMTP id hm9so380504wib.14
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:10:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=COn9qaCJiVl4qIjV6lUd7uH37Ow2egbY5fbstbfG9Hs=;
	b=NozQ8gnre+hnPZfkCYRLxjrD2AhW2lxhnh+wqC02h9PyJJ9U8EWiCfP+5zddHIQtWB
	HBi4I6OiR+koKWXTdGAjpKORgxDUDagVnnHvJhNoJRg+ycJ7PLQMhUfRFhrDfTv4YbDo
	u1P2bbefX0wew8Yh/B7PuB577RtLSSwLW1lJa+1fYJ4Ah/VikBAk730McLmdoW8gB5aF
	5guipJcRNKFdqeWiEdvzhIVmD7jgJ5p9i4QKCBneFxMKTtnx2jwszHSkr9KZxm2PpV6o
	lvvgskvHjz42+5oSQJoLtxajxrs7ZH1jVqF/iyTp9T4vUHO5osZiOcn3AmdylIouTdup
	NRQA==
Received: by 10.180.98.8 with SMTP id ee8mr15606482wib.4.1354893016230;
	Fri, 07 Dec 2012 07:10:16 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id bd7sm16015537wib.8.2012.12.07.07.10.11
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:10:15 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:10:08 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE7B750.552E8%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/oprofile: adjust CPU specific
	initialization
Thread-Index: Ac3UjPG/JORA6QZxvkiYH8jOx78TLA==
In-Reply-To: <50C1F65602000078000AEF1B@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/oprofile: adjust CPU specific
 initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:10:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzZC-0000b0-UO; Fri, 07 Dec 2012 15:10:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgzZC-0000ag-2h
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:10:18 +0000
Received: from [85.158.137.99:4736] by server-10.bemta-3.messagelabs.com id
	EF/FD-19806-9D602C05; Fri, 07 Dec 2012 15:10:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354893008!12886057!2
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6981 invoked from network); 7 Dec 2012 15:10:16 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:10:16 -0000
Received: by mail-wi0-f181.google.com with SMTP id hm9so380504wib.14
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:10:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=COn9qaCJiVl4qIjV6lUd7uH37Ow2egbY5fbstbfG9Hs=;
	b=NozQ8gnre+hnPZfkCYRLxjrD2AhW2lxhnh+wqC02h9PyJJ9U8EWiCfP+5zddHIQtWB
	HBi4I6OiR+koKWXTdGAjpKORgxDUDagVnnHvJhNoJRg+ycJ7PLQMhUfRFhrDfTv4YbDo
	u1P2bbefX0wew8Yh/B7PuB577RtLSSwLW1lJa+1fYJ4Ah/VikBAk730McLmdoW8gB5aF
	5guipJcRNKFdqeWiEdvzhIVmD7jgJ5p9i4QKCBneFxMKTtnx2jwszHSkr9KZxm2PpV6o
	lvvgskvHjz42+5oSQJoLtxajxrs7ZH1jVqF/iyTp9T4vUHO5osZiOcn3AmdylIouTdup
	NRQA==
Received: by 10.180.98.8 with SMTP id ee8mr15606482wib.4.1354893016230;
	Fri, 07 Dec 2012 07:10:16 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id bd7sm16015537wib.8.2012.12.07.07.10.11
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:10:15 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:10:08 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE7B750.552E8%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/oprofile: adjust CPU specific
	initialization
Thread-Index: Ac3UjPG/JORA6QZxvkiYH8jOx78TLA==
In-Reply-To: <50C1F65602000078000AEF1B@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/oprofile: adjust CPU specific
 initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/2012 12:59, "Jan Beulich" <JBeulich@suse.com> wrote:

> Drop support for 32-bit only CPU models as well as those that can be
> dealt with by the arch_perfmon bits. Models 14 and 15 remain as
> questionable (I'm not 100% positive that these don't support 64-bit
> mode).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/oprofile/nmi_int.c
> +++ b/xen/arch/x86/oprofile/nmi_int.c
> @@ -342,37 +342,13 @@ static int __init ppro_init(char ** cpu_
>  	return 0;
>  
>  	switch (cpu_model) {
> -	case 0 ... 2:
> -		*cpu_type = "i386/ppro";
> -		break;
> -	case 3 ... 5:
> -		*cpu_type = "i386/pii";
> -		break;
> -	case 6 ... 8:
> -	case 10 ... 11:
> -		*cpu_type = "i386/piii";
> -		break;
> -	case 9:
> -	case 13:
> -		*cpu_type = "i386/p6_mobile";
> -		break;
>  	case 14:
>  		*cpu_type = "i386/core";
>  		break;
>  	case 15:
> -	case 23:
> -	case 29:
>  		*cpu_type = "i386/core_2";
>  		ppro_has_global_ctrl = 1;
>  		break;
> -	case 26:
> -		arch_perfmon_setup_counters();
> -		*cpu_type = "i386/core_i7";
> -		ppro_has_global_ctrl = 1;
> -		break;
> -	case 28:
> -		*cpu_type = "i386/atom";
> -		break;
>  	default:
>  		/* Unknown */
>  		return 0;
> @@ -389,6 +365,7 @@ static int __init arch_perfmon_init(char
>  	*cpu_type = "i386/arch_perfmon";
>  	model = &op_arch_perfmon_spec;
>  	arch_perfmon_setup_counters();
> +	ppro_has_global_ctrl = 1;
>  	return 1;
>  }
>  
> @@ -413,14 +390,8 @@ static int __init nmi_init(void)
>  			       "AMD processor family %d is not "
>  			       "supported\n", family);
>  			return -ENODEV;
> -		case 6:
> -			model = &op_athlon_spec;
> -			cpu_type = "i386/athlon";
> -			break;
>  		case 0xf:
>  			model = &op_athlon_spec;
> -			/* Actually it could be i386/hammer too, but
> -			   give user space an consistent name. */
>  			cpu_type = "x86-64/hammer";
>  			break;
>  		case 0x10:
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:10:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzZc-0000gJ-Cj; Fri, 07 Dec 2012 15:10:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TgzZZ-0000fv-VL
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:10:42 +0000
Received: from [85.158.137.99:25774] by server-5.bemta-3.messagelabs.com id
	F1/C2-26311-1F602C05; Fri, 07 Dec 2012 15:10:41 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354893008!12886057!3
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=1.7 required=7.0 tests=INFO_TLD,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8665 invoked from network); 7 Dec 2012 15:10:38 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:10:38 -0000
Received: by mail-wi0-f181.google.com with SMTP id hm9so380504wib.14
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:10:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=qlEo8uUtNvmRtx97DPlxkCtBV3cRHIOwtqcKQG6mKEI=;
	b=CxOQ4WDVFlIitJfuxKmSgrNHto+/G+I8Y8WDKhi6NDlRGzUzDDEF1J3EH8mhCd8/Df
	z5n6OjqjMzMoj0OR3WB/ILl3c8EWo9GAca10ZMywV2MD2Wx89RBkztFZkyh5XiiR7KFv
	3/w0n7b8yLphbvHs8Xgbp3gJ/tMcubGLA8fswwEV4Z+CEM3FZMvfqPZKlcwugXkYU9tT
	Pt2fgQV4qoqhIrUajl68sH+APjBqTQSTdZOBsD3QkV2sOo3MIlKxbsCPdhs4BBPgHFqr
	9UHLLuSRSennOFazsPI8gSR0Yka4NO6J8XMfVwkglnJQPga2yRAdvO6UCyiiBOHqrCjN
	fVpQ==
Received: by 10.180.73.80 with SMTP id j16mr9155019wiv.5.1354893038625;
	Fri, 07 Dec 2012 07:10:38 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id h19sm21531092wiv.7.2012.12.07.07.10.35
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:10:37 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:10:28 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE7B764.552E9%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] streamline guest copy operations
Thread-Index: Ac3UjP2rgG2jtlo7aE+BgPtyf5tQ5A==
In-Reply-To: <50C1F7D402000078000AEF3E@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] streamline guest copy operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/2012 13:06, "Jan Beulich" <JBeulich@suse.com> wrote:

> - use the variants not validating the VA range when writing back
>   structures/fields to the same space that they were previously read
>   from
> - when only a single field of a structure actually changed, copy back
>   just that field where possible
> - consolidate copying back results in a few places
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> ---
> If really necessary, this patch could of course be split up at almost
> arbitrary boundaries.

I wouldn't bother.

> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -51,6 +51,7 @@ long arch_do_domctl(
>      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
>      long ret = 0;
> +    bool_t copyback = 0;
>  
>      switch ( domctl->cmd )
>      {
> @@ -66,7 +67,7 @@ long arch_do_domctl(
>                                  &domctl->u.shadow_op,
>                                  guest_handle_cast(u_domctl, void));
>              rcu_unlock_domain(d);
> -            copy_to_guest(u_domctl, domctl, 1);
> +            copyback = 1;
>          } 
>      }
>      break;
> @@ -150,8 +151,7 @@ long arch_do_domctl(
>          }
>  
>          rcu_unlock_domain(d);
> -
> -        copy_to_guest(u_domctl, domctl, 1);
> +        copyback = 1;
>      }
>      break;
>  
> @@ -408,7 +408,7 @@ long arch_do_domctl(
>              spin_unlock(&d->page_alloc_lock);
>  
>              domctl->u.getmemlist.num_pfns = i;
> -            copy_to_guest(u_domctl, domctl, 1);
> +            copyback = 1;
>          getmemlist_out:
>              rcu_unlock_domain(d);
>          }
> @@ -539,13 +539,11 @@ long arch_do_domctl(
>              ret = -EFAULT;
>  
>      gethvmcontext_out:
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        rcu_unlock_domain(d);
> +        copyback = 1;
>  
>          if ( c.data != NULL )
>              xfree(c.data);
> -
> -        rcu_unlock_domain(d);
>      }
>      break;
>  
> @@ -627,11 +625,9 @@ long arch_do_domctl(
>          domctl->u.address_size.size =
>              is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
>  
> -        ret = 0;
>          rcu_unlock_domain(d);
> -
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        ret = 0;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -676,13 +672,9 @@ long arch_do_domctl(
>  
>          domctl->u.address_size.size = d->arch.physaddr_bitsize;
>  
> -        ret = 0;
>          rcu_unlock_domain(d);
> -
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> -
> -
> +        ret = 0;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -1124,9 +1116,8 @@ long arch_do_domctl(
>  
>      ext_vcpucontext_out:
>          rcu_unlock_domain(d);
> -        if ( (domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext) &&
> -             copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
> +            copyback = 1;
>      }
>      break;
>  
> @@ -1268,10 +1259,10 @@ long arch_do_domctl(
>              domctl->u.gdbsx_guest_memio.len;
>  
>          ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
> -        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
>  
>          rcu_unlock_domain(d);
> +        if ( !ret )
> +           copyback = 1;
>      }
>      break;
>  
> @@ -1358,10 +1349,9 @@ long arch_do_domctl(
>                  }
>              }
>          }
> -        ret = 0;
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
>          rcu_unlock_domain(d);
> +        ret = 0;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -1485,9 +1475,8 @@ long arch_do_domctl(
>  
>      vcpuextstate_out:
>          rcu_unlock_domain(d);
> -        if ( (domctl->cmd == XEN_DOMCTL_getvcpuextstate) &&
> -             copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        if ( domctl->cmd == XEN_DOMCTL_getvcpuextstate )
> +            copyback = 1;
>      }
>      break;
>  
> @@ -1504,7 +1493,7 @@ long arch_do_domctl(
>                  ret = mem_event_domctl(d, &domctl->u.mem_event_op,
>                                         guest_handle_cast(u_domctl, void));
>              rcu_unlock_domain(d);
> -            copy_to_guest(u_domctl, domctl, 1);
> +            copyback = 1;
>          } 
>      }
>      break;
> @@ -1539,8 +1528,7 @@ long arch_do_domctl(
>                    &domctl->u.audit_p2m.m2p_bad,
>                    &domctl->u.audit_p2m.p2m_bad);
>          rcu_unlock_domain(d);
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>      }
>      break;
>  #endif /* P2M_AUDIT */
> @@ -1573,6 +1561,9 @@ long arch_do_domctl(
>          break;
>      }
>  
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
>      return ret;
>  }
>  
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4407,7 +4407,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>  
>          if ( xatp.space == XENMAPSPACE_gmfn_range )
>          {
> -            if ( rc && copy_to_guest(arg, &xatp, 1) )
> +            if ( rc && __copy_to_guest(arg, &xatp, 1) )
>                  rc = -EFAULT;
>  
>              if ( rc == -EAGAIN )
> @@ -4492,7 +4492,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>          map.nr_entries = min(map.nr_entries, d->arch.pv_domain.nr_e820);
>          if ( copy_to_guest(map.buffer, d->arch.pv_domain.e820,
>                             map.nr_entries) ||
> -             copy_to_guest(arg, &map, 1) )
> +             __copy_to_guest(arg, &map, 1) )
>          {
>              spin_unlock(&d->arch.pv_domain.e820_lock);
>              return -EFAULT;
> @@ -4559,7 +4559,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>  
>          ctxt.map.nr_entries = ctxt.n;
>  
> -        if ( copy_to_guest(arg, &ctxt.map, 1) )
> +        if ( __copy_to_guest(arg, &ctxt.map, 1) )
>              return -EFAULT;
>  
>          return 0;
> @@ -4630,7 +4630,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>              target.pod_cache_pages = p2m->pod.count;
>              target.pod_entries     = p2m->pod.entry_count;
>  
> -            if ( copy_to_guest(arg, &target, 1) )
> +            if ( __copy_to_guest(arg, &target, 1) )
>              {
>                  rc= -EFAULT;
>                  goto pod_target_out_unlock;
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -384,7 +384,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          irq_status_query.flags |= XENIRQSTAT_needs_eoi;
>          if ( pirq_shared(v->domain, irq) )
>              irq_status_query.flags |= XENIRQSTAT_shared;
> -        ret = copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
> +        ret = __copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
>          break;
>      }
>  
> @@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          ret = physdev_map_pirq(map.domid, map.type, &map.index, &map.pirq,
>                                 &msi);
>  
> -        if ( copy_to_guest(arg, &map, 1) != 0 )
> +        if ( __copy_to_guest(arg, &map, 1) )
>              ret = -EFAULT;
>          break;
>      }
> @@ -440,7 +440,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          if ( ret )
>              break;
>          ret = ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.value);
> -        if ( copy_to_guest(arg, &apic, 1) != 0 )
> +        if ( __copy_to_guest(arg, &apic, 1) )
>              ret = -EFAULT;
>          break;
>      }
> @@ -478,7 +478,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          irq_op.vector = irq_op.irq;
>          ret = 0;
>          
> -        if ( copy_to_guest(arg, &irq_op, 1) != 0 )
> +        if ( __copy_to_guest(arg, &irq_op, 1) )
>              ret = -EFAULT;
>          break;
>      }
> @@ -714,7 +714,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          if ( ret >= 0 )
>          {
>              out.pirq = ret;
> -            ret = copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
> +            ret = __copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
>          }
>  
>          break;
> --- a/xen/arch/x86/platform_hypercall.c
> +++ b/xen/arch/x86/platform_hypercall.c
> @@ -115,7 +115,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>          {
>              op->u.add_memtype.handle = 0;
>              op->u.add_memtype.reg    = ret;
> -            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +            ret = __copy_field_to_guest(u_xenpf_op, op, u.add_memtype) ?
> +                  -EFAULT : 0;
>              if ( ret != 0 )
>                  mtrr_del_page(ret, 0, 0);
>          }
> @@ -157,7 +158,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              op->u.read_memtype.mfn     = mfn;
>              op->u.read_memtype.nr_mfns = nr_mfns;
>              op->u.read_memtype.type    = type;
> -            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +            ret = __copy_field_to_guest(u_xenpf_op, op, u.read_memtype)
> +                  ? -EFAULT : 0;
>          }
>      }
>      break;
> @@ -263,8 +265,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              C(legacy_sectors_per_track);
>  #undef C
>  
> -            ret = (copy_field_to_guest(u_xenpf_op, op,
> -                                      u.firmware_info.u.disk_info)
> +            ret = (__copy_field_to_guest(u_xenpf_op, op,
> +                                         u.firmware_info.u.disk_info)
>                     ? -EFAULT : 0);
>              break;
>          }
> @@ -281,8 +283,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              op->u.firmware_info.u.disk_mbr_signature.mbr_signature =
>                  sig->signature;
>  
> -            ret = (copy_field_to_guest(u_xenpf_op, op,
> -                                      u.firmware_info.u.disk_mbr_signature)
> +            ret = (__copy_field_to_guest(u_xenpf_op, op,
> +                                         u.firmware_info.u.disk_mbr_signature)
>                     ? -EFAULT : 0);
>              break;
>          }
> @@ -299,10 +301,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>                  bootsym(boot_edid_caps) >> 8;
>  
>              ret = 0;
> -            if ( copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> -                                     u.vbeddc_info.capabilities) ||
> -                 copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> -                                     u.vbeddc_info.edid_transfer_time) ||
> +            if ( __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> +                                       u.vbeddc_info.capabilities) ||
> +                 __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> +                                       u.vbeddc_info.edid_transfer_time) ||
>                   copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,
>                                  bootsym(boot_edid_info), 128) )
>                  ret = -EFAULT;
> @@ -311,8 +313,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              ret = efi_get_info(op->u.firmware_info.index,
>                                 &op->u.firmware_info.u.efi_info);
>              if ( ret == 0 &&
> -                 copy_field_to_guest(u_xenpf_op, op,
> -                                     u.firmware_info.u.efi_info) )
> +                 __copy_field_to_guest(u_xenpf_op, op,
> +                                       u.firmware_info.u.efi_info) )
>                  ret = -EFAULT;
>              break;
>          case XEN_FW_KBD_SHIFT_FLAGS:
> @@ -323,8 +325,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              op->u.firmware_info.u.kbd_shift_flags = bootsym(kbd_shift_flags);
>  
>              ret = 0;
> -            if ( copy_field_to_guest(u_xenpf_op, op,
> -                                     u.firmware_info.u.kbd_shift_flags) )
> +            if ( __copy_field_to_guest(u_xenpf_op, op,
> +                                       u.firmware_info.u.kbd_shift_flags) )
>                  ret = -EFAULT;
>              break;
>          default:
> @@ -340,7 +342,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          ret = efi_runtime_call(&op->u.efi_runtime_call);
>          if ( ret == 0 &&
> -             copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
> +             __copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
>              ret = -EFAULT;
>          break;
>  
> @@ -412,7 +414,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              ret = cpumask_to_xenctl_cpumap(&ctlmap, cpumap);
>          free_cpumask_var(cpumap);
>  
> -        if ( ret == 0 && copy_to_guest(u_xenpf_op, op, 1) )
> +        if ( ret == 0 && __copy_field_to_guest(u_xenpf_op, op, u.getidletime) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -503,7 +505,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          put_cpu_maps();
>  
> -        ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +        ret = __copy_field_to_guest(u_xenpf_op, op, u.pcpu_info) ? -EFAULT : 0;
>      }
>      break;
>  
> @@ -538,7 +540,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          put_cpu_maps();
>  
> -        if ( copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
> +        if ( __copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -639,7 +641,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          case XEN_CORE_PARKING_GET:
>              op->u.core_parking.idle_nums = get_cur_idle_nums();
> -            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +            ret = __copy_field_to_guest(u_xenpf_op, op, u.core_parking) ?
> +                  -EFAULT : 0;
>              break;
>  
>          default:
> --- a/xen/arch/x86/sysctl.c
> +++ b/xen/arch/x86/sysctl.c
> @@ -93,7 +93,7 @@ long arch_do_sysctl(
>          if ( iommu_enabled )
>              pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
>  
> -        if ( copy_to_guest(u_sysctl, sysctl, 1) )
> +        if ( __copy_field_to_guest(u_sysctl, sysctl, u.physinfo) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -133,7 +133,8 @@ long arch_do_sysctl(
>              }
>          }
>  
> -        ret = ((i <= max_cpu_index) || copy_to_guest(u_sysctl, sysctl, 1))
> +        ret = ((i <= max_cpu_index) ||
> +               __copy_field_to_guest(u_sysctl, sysctl, u.topologyinfo))
>              ? -EFAULT : 0;
>      }
>      break;
> @@ -185,7 +186,8 @@ long arch_do_sysctl(
>              }
>          }
>  
> -        ret = ((i <= max_node_index) || copy_to_guest(u_sysctl, sysctl, 1))
> +        ret = ((i <= max_node_index) ||
> +               __copy_field_to_guest(u_sysctl, sysctl, u.numainfo))
>              ? -EFAULT : 0;
>      }
>      break;
> --- a/xen/arch/x86/x86_64/compat/mm.c
> +++ b/xen/arch/x86/x86_64/compat/mm.c
> @@ -122,7 +122,7 @@ int compat_arch_memory_op(int op, XEN_GU
>  #define XLAT_memory_map_HNDL_buffer(_d_, _s_) ((void)0)
>          XLAT_memory_map(&cmp, nat);
>  #undef XLAT_memory_map_HNDL_buffer
> -        if ( copy_to_guest(arg, &cmp, 1) )
> +        if ( __copy_to_guest(arg, &cmp, 1) )
>              rc = -EFAULT;
>  
>          break;
> @@ -148,7 +148,7 @@ int compat_arch_memory_op(int op, XEN_GU
>  
>          XLAT_pod_target(&cmp, nat);
>  
> -        if ( copy_to_guest(arg, &cmp, 1) )
> +        if ( __copy_to_guest(arg, &cmp, 1) )
>          {
>              if ( rc == __HYPERVISOR_memory_op )
>                  hypercall_cancel_continuation();
> @@ -200,7 +200,7 @@ int compat_arch_memory_op(int op, XEN_GU
>          }
>  
>          xmml.nr_extents = i;
> -        if ( copy_to_guest(arg, &xmml, 1) )
> +        if ( __copy_to_guest(arg, &xmml, 1) )
>              rc = -EFAULT;
>  
>          break;
> @@ -219,7 +219,7 @@ int compat_arch_memory_op(int op, XEN_GU
>          if ( copy_from_guest(&meo, arg, 1) )
>              return -EFAULT;
>          rc = do_mem_event_op(op, meo.domain, (void *) &meo);
> -        if ( !rc && copy_to_guest(arg, &meo, 1) )
> +        if ( !rc && __copy_to_guest(arg, &meo, 1) )
>              return -EFAULT;
>          break;
>      }
> @@ -231,7 +231,7 @@ int compat_arch_memory_op(int op, XEN_GU
>          if ( mso.op == XENMEM_sharing_op_audit )
>              return mem_sharing_audit();
>          rc = do_mem_event_op(op, mso.domain, (void *) &mso);
> -        if ( !rc && copy_to_guest(arg, &mso, 1) )
> +        if ( !rc && __copy_to_guest(arg, &mso, 1) )
>              return -EFAULT;
>          break;
>      }
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -1074,7 +1074,7 @@ long subarch_memory_op(int op, XEN_GUEST
>          }
>  
>          xmml.nr_extents = i;
> -        if ( copy_to_guest(arg, &xmml, 1) )
> +        if ( __copy_to_guest(arg, &xmml, 1) )
>              return -EFAULT;
>  
>          break;
> @@ -1092,7 +1092,7 @@ long subarch_memory_op(int op, XEN_GUEST
>          if ( copy_from_guest(&meo, arg, 1) )
>              return -EFAULT;
>          rc = do_mem_event_op(op, meo.domain, (void *) &meo);
> -        if ( !rc && copy_to_guest(arg, &meo, 1) )
> +        if ( !rc && __copy_to_guest(arg, &meo, 1) )
>              return -EFAULT;
>          break;
>      }
> @@ -1104,7 +1104,7 @@ long subarch_memory_op(int op, XEN_GUEST
>          if ( mso.op == XENMEM_sharing_op_audit )
>              return mem_sharing_audit();
>          rc = do_mem_event_op(op, mso.domain, (void *) &mso);
> -        if ( !rc && copy_to_guest(arg, &mso, 1) )
> +        if ( !rc && __copy_to_guest(arg, &mso, 1) )
>              return -EFAULT;
>          break;
>      }
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -292,8 +292,9 @@ int compat_memory_op(unsigned int cmd, X
>              }
>  
>              cmp.xchg.nr_exchanged = nat.xchg->nr_exchanged;
> -            if ( copy_field_to_guest(guest_handle_cast(compat,
> -                                                       compat_memory_exchange_t),
> -                                     &cmp.xchg, nr_exchanged) )
> +            if ( __copy_field_to_guest(guest_handle_cast(compat,
> +                                                         compat_memory_exchange_t),
> +                                       &cmp.xchg, nr_exchanged) )
>                  rc = -EFAULT;
>  
>              if ( rc < 0 )
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -242,6 +242,7 @@ void domctl_lock_release(void)
>  long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
>      long ret = 0;
> +    bool_t copyback = 0;
>      struct xen_domctl curop, *op = &curop;
>  
>      if ( copy_from_guest(op, u_domctl, 1) )
> @@ -469,8 +470,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>                 sizeof(xen_domain_handle_t));
>  
>          op->domain = d->domain_id;
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -653,8 +653,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>              goto scheduler_op_out;
>  
>          ret = sched_adjust(d, &op->u.scheduler_op);
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>  
>      scheduler_op_out:
>          rcu_unlock_domain(d);
> @@ -686,8 +685,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          getdomaininfo(d, &op->u.getdomaininfo);
>  
>          op->domain = op->u.getdomaininfo.domain;
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>  
>      getdomaininfo_out:
>          rcu_read_unlock(&domlist_read_lock);
> @@ -747,8 +745,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          ret = copy_to_guest(op->u.vcpucontext.ctxt, c.nat, 1);
>  #endif
>  
> -        if ( copy_to_guest(u_domctl, op, 1) || ret )
> +        if ( ret )
>              ret = -EFAULT;
> +        copyback = 1;
>  
>      getvcpucontext_out:
>          xfree(c.nat);
> @@ -786,9 +785,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          op->u.getvcpuinfo.cpu_time = runstate.time[RUNSTATE_running];
>          op->u.getvcpuinfo.cpu      = v->processor;
>          ret = 0;
> -
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>  
>      getvcpuinfo_out:
>          rcu_unlock_domain(d);
> @@ -1045,6 +1042,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>  
>      domctl_lock_release();
>  
> +    if ( copyback && __copy_to_guest(u_domctl, op, 1) )
> +        ret = -EFAULT;
> +
>      return ret;
>  }
>  
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -981,7 +981,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&alloc_unbound, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_alloc_unbound(&alloc_unbound);
> -        if ( (rc == 0) && (copy_to_guest(arg, &alloc_unbound, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &alloc_unbound, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -991,7 +991,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_interdomain(&bind_interdomain);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_interdomain, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1001,7 +1001,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_virq, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_virq(&bind_virq);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_virq, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_virq, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1011,7 +1011,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_ipi, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_ipi(&bind_ipi);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_ipi, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_ipi, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1021,7 +1021,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_pirq, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_pirq(&bind_pirq);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_pirq, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_pirq, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1047,7 +1047,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&status, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_status(&status);
> -        if ( (rc == 0) && (copy_to_guest(arg, &status, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &status, 1) )
>              rc = -EFAULT;
>          break;
>      }
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1115,12 +1115,13 @@ gnttab_unmap_grant_ref(
>  
>          for ( i = 0; i < c; i++ )
>          {
> -            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
> +            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>                  goto fault;
>              __gnttab_unmap_grant_ref(&op, &(common[i]));
>              ++partial_done;
> -            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
> +            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>                  goto fault;
> +            guest_handle_add_offset(uop, 1);
>          }
>  
>          flush_tlb_mask(current->domain->domain_dirty_cpumask);
> @@ -1177,12 +1178,13 @@ gnttab_unmap_and_replace(
>          
>          for ( i = 0; i < c; i++ )
>          {
> -            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
> +            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>                  goto fault;
>              __gnttab_unmap_and_replace(&op, &(common[i]));
>              ++partial_done;
> -            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
> +            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>                  goto fault;
> +            guest_handle_add_offset(uop, 1);
>          }
>          
>          flush_tlb_mask(current->domain->domain_dirty_cpumask);
> @@ -1396,7 +1398,7 @@ gnttab_setup_table(
>   out2:
>      rcu_unlock_domain(d);
>   out1:
> -    if ( unlikely(copy_to_guest(uop, &op, 1)) )
> +    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>          return -EFAULT;
>  
>      return 0;
> @@ -1446,7 +1448,7 @@ gnttab_query_size(
>      rcu_unlock_domain(d);
>  
>   query_out:
> -    if ( unlikely(copy_to_guest(uop, &op, 1)) )
> +    if ( unlikely(__copy_to_guest(uop, &op, 1)) )
>          return -EFAULT;
>  
>      return 0;
> @@ -1542,7 +1544,7 @@ gnttab_transfer(
>              return i;
>  
>          /* Read from caller address space. */
> -        if ( unlikely(__copy_from_guest_offset(&gop, uop, i, 1)) )
> +        if ( unlikely(__copy_from_guest(&gop, uop, 1)) )
>          {
>              gdprintk(XENLOG_INFO, "gnttab_transfer: error reading req %d/%d\n",
>                      i, count);
> @@ -1701,12 +1703,13 @@ gnttab_transfer(
>          gop.status = GNTST_okay;
>  
>      copyback:
> -        if ( unlikely(__copy_to_guest_offset(uop, i, &gop, 1)) )
> +        if ( unlikely(__copy_field_to_guest(uop, &gop, status)) )
>          {
>              gdprintk(XENLOG_INFO, "gnttab_transfer: error writing resp "
>                       "%d/%d\n", i, count);
>              return -EFAULT;
>          }
> +        guest_handle_add_offset(uop, 1);
>      }
>  
>      return 0;
> @@ -2143,17 +2146,18 @@ gnttab_copy(
>      {
>          if (i && hypercall_preempt_check())
>              return i;
> -        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
> +        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>              return -EFAULT;
>          __gnttab_copy(&op);
> -        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
> +        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>              return -EFAULT;
> +        guest_handle_add_offset(uop, 1);
>      }
>      return 0;
>  }
>  
>  static long
> -gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
> +gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
>  {
>      gnttab_set_version_t op;
>      struct domain *d = current->domain;
> @@ -2265,7 +2269,7 @@ out_unlock:
>  out:
>      op.version = gt->gt_version;
>  
> -    if (copy_to_guest(uop, &op, 1))
> +    if (__copy_to_guest(uop, &op, 1))
>          res = -EFAULT;
>  
>      return res;
> @@ -2329,14 +2333,14 @@ gnttab_get_status_frames(XEN_GUEST_HANDL
>  out2:
>      rcu_unlock_domain(d);
>  out1:
> -    if ( unlikely(copy_to_guest(uop, &op, 1)) )
> +    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>          return -EFAULT;
>  
>      return 0;
>  }
>  
>  static long
> -gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
> +gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
>  {
>      gnttab_get_version_t op;
>      struct domain *d;
> @@ -2359,7 +2363,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARA
>  
>      rcu_unlock_domain(d);
>  
> -    if ( copy_to_guest(uop, &op, 1) )
> +    if ( __copy_field_to_guest(uop, &op, version) )
>          return -EFAULT;
>  
>      return 0;
> @@ -2421,7 +2425,7 @@ out:
>  }
>  
>  static long
> -gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
> +gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
>                        unsigned int count)
>  {
>      int i;
> @@ -2431,11 +2435,12 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
>      {
>          if ( i && hypercall_preempt_check() )
>              return i;
> -        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
> +        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>              return -EFAULT;
>          op.status = __gnttab_swap_grant_ref(op.ref_a, op.ref_b);
> -        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
> +        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>              return -EFAULT;
> +        guest_handle_add_offset(uop, 1);
>      }
>      return 0;
>  }
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -359,7 +359,7 @@ static long memory_exchange(XEN_GUEST_HA
>          {
>              exch.nr_exchanged = i << in_chunk_order;
>              rcu_unlock_domain(d);
> -            if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
> +            if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
>                  return -EFAULT;
>              return hypercall_create_continuation(
>                  __HYPERVISOR_memory_op, "lh", XENMEM_exchange, arg);
> @@ -500,7 +500,7 @@ static long memory_exchange(XEN_GUEST_HA
>      }
>  
>      exch.nr_exchanged = exch.in.nr_extents;
> -    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
> +    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
>          rc = -EFAULT;
>      rcu_unlock_domain(d);
>      return rc;
> @@ -527,7 +527,7 @@ static long memory_exchange(XEN_GUEST_HA
>      exch.nr_exchanged = i << in_chunk_order;
>  
>   fail_early:
> -    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
> +    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
>          rc = -EFAULT;
>      return rc;
>  }
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -30,6 +30,7 @@
>  long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
>      long ret = 0;
> +    int copyback = -1;
>      struct xen_sysctl curop, *op = &curop;
>      static DEFINE_SPINLOCK(sysctl_lock);
>  
> @@ -55,42 +56,28 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>      switch ( op->cmd )
>      {
>      case XEN_SYSCTL_readconsole:
> -    {
>          ret = xsm_readconsole(op->u.readconsole.clear);
>          if ( ret )
>              break;
>  
>          ret = read_console_ring(&op->u.readconsole);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_tbuf_op:
> -    {
>          ret = xsm_tbufcontrol();
>          if ( ret )
>              break;
>  
>          ret = tb_control(&op->u.tbuf_op);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>      
>      case XEN_SYSCTL_sched_id:
> -    {
>          ret = xsm_sched_id();
>          if ( ret )
>              break;
>  
>          op->u.sched_id.sched_id = sched_id();
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -        else
> -            ret = 0;
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_getdomaininfolist:
>      { 
> @@ -129,38 +116,27 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>              break;
>          
>          op->u.getdomaininfolist.num_domains = num_domains;
> -
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
>      }
>      break;
>  
>  #ifdef PERF_COUNTERS
>      case XEN_SYSCTL_perfc_op:
> -    {
>          ret = xsm_perfcontrol();
>          if ( ret )
>              break;
>  
>          ret = perfc_control(&op->u.perfc_op);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  #endif
>  
>  #ifdef LOCK_PROFILE
>      case XEN_SYSCTL_lockprof_op:
> -    {
>          ret = xsm_lockprof();
>          if ( ret )
>              break;
>  
>          ret = spinlock_profile_control(&op->u.lockprof_op);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  #endif
>      case XEN_SYSCTL_debug_keys:
>      {
> @@ -179,6 +155,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>              handle_keypress(c, guest_cpu_user_regs());
>          }
>          ret = 0;
> +        copyback = 0;
>      }
>      break;
>  
> @@ -193,22 +170,21 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>          if ( ret )
>              break;
>  
> +        ret = -EFAULT;
>          for ( i = 0; i < nr_cpus; i++ )
>          {
>              cpuinfo.idletime = get_cpu_idle_time(i);
>  
> -            ret = -EFAULT;
>              if ( copy_to_guest_offset(op->u.getcpuinfo.info, i, &cpuinfo, 1) )
>                  goto out;
>          }
>  
>          op->u.getcpuinfo.nr_cpus = i;
> -        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
> +        ret = 0;
>      }
>      break;
>  
>      case XEN_SYSCTL_availheap:
> -    { 
>          ret = xsm_availheap();
>          if ( ret )
>              break;
> @@ -218,47 +194,26 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>              op->u.availheap.min_bitwidth,
>              op->u.availheap.max_bitwidth);
>          op->u.availheap.avail_bytes <<= PAGE_SHIFT;
> -
> -        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
> -    }
> -    break;
> +        break;
>  
>  #ifdef HAS_ACPI
>      case XEN_SYSCTL_get_pmstat:
> -    {
>          ret = xsm_get_pmstat();
>          if ( ret )
>              break;
>  
>          ret = do_get_pm_info(&op->u.get_pmstat);
> -        if ( ret )
> -            break;
> -
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -        {
> -            ret = -EFAULT;
> -            break;
> -        }
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_pm_op:
> -    {
>          ret = xsm_pm_op();
>          if ( ret )
>              break;
>  
>          ret = do_pm_op(&op->u.pm_op);
> -        if ( ret && (ret != -EAGAIN) )
> -            break;
> -
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -        {
> -            ret = -EFAULT;
> -            break;
> -        }
> -    }
> -    break;
> +        if ( ret == -EAGAIN )
> +            copyback = 1;
> +        break;
>  #endif
>  
>      case XEN_SYSCTL_page_offline_op:
> @@ -317,41 +272,39 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>          }
>  
>          xfree(status);
> +        copyback = 0;
>      }
>      break;
>  
>      case XEN_SYSCTL_cpupool_op:
> -    {
>          ret = xsm_cpupool_op();
>          if ( ret )
>              break;
>  
>          ret = cpupool_do_sysctl(&op->u.cpupool_op);
> -        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_scheduler_op:
> -    {
>          ret = xsm_sched_op();
>          if ( ret )
>              break;
>  
>          ret = sched_adjust_global(&op->u.scheduler_op);
> -        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  
>      default:
>          ret = arch_do_sysctl(op, u_sysctl);
> +        copyback = 0;
>          break;
>      }
>  
>   out:
>      spin_unlock(&sysctl_lock);
>  
> +    if ( copyback && (!ret || copyback > 0) &&
> +         __copy_to_guest(u_sysctl, op, 1) )
> +        ret = -EFAULT;
> +
>      return ret;
>  }
>  
> --- a/xen/common/xenoprof.c
> +++ b/xen/common/xenoprof.c
> @@ -449,7 +449,7 @@ static int add_passive_list(XEN_GUEST_HA
>              current->domain, __pa(d->xenoprof->rawbuf),
>              passive.buf_gmaddr, d->xenoprof->npages);
>  
> -    if ( copy_to_guest(arg, &passive, 1) )
> +    if ( __copy_to_guest(arg, &passive, 1) )
>      {
>          put_domain(d);
>          return -EFAULT;
> @@ -604,7 +604,7 @@ static int xenoprof_op_init(XEN_GUEST_HA
>      if ( xenoprof_init.is_primary )
>          xenoprof_primary_profiler = current->domain;
>  
> -    return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
> +    return __copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0;
>  }
>  
>  #define ret_t long
> @@ -651,10 +651,7 @@ static int xenoprof_op_get_buffer(XEN_GU
>              d, __pa(d->xenoprof->rawbuf), xenoprof_get_buffer.buf_gmaddr,
>              d->xenoprof->npages);
>  
> -    if ( copy_to_guest(arg, &xenoprof_get_buffer, 1) )
> -        return -EFAULT;
> -
> -    return 0;
> +    return __copy_to_guest(arg, &xenoprof_get_buffer, 1) ? -EFAULT : 0;
>  }
>  
>  #define NONPRIV_OP(op) ( (op == XENOPROF_init)          \
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -586,7 +586,7 @@ int iommu_do_domctl(
>              domctl->u.get_device_group.num_sdevs = ret;
>              ret = 0;
>          }
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> +        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
>              ret = -EFAULT;
>          rcu_unlock_domain(d);
>      }
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:10:53 2012
Date: Fri, 07 Dec 2012 15:10:28 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCE7B764.552E9%keir@xen.org>
In-Reply-To: <50C1F7D402000078000AEF3E@nat28.tlf.novell.com>
Subject: Re: [Xen-devel] [PATCH] streamline guest copy operations

On 07/12/2012 13:06, "Jan Beulich" <JBeulich@suse.com> wrote:

> - use the variants not validating the VA range when writing back
>   structures/fields to the same space that they were previously read
>   from
> - when only a single field of a structure actually changed, copy back
>   just that field where possible
> - consolidate copying back results in a few places
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> ---
> If really necessary, this patch could of course be split up at almost
> arbitrary boundaries.

I wouldn't bother.
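
For anyone following along, the shape of the consolidation the changelog
describes is roughly the following. This is a toy sketch with stubbed-in
copy primitives (struct op, guest_buf, do_op() etc. are all illustrative
names, not the hypervisor's actual API); the point is only the pattern:
cases set a flag instead of each doing their own copy_to_guest(), and the
non-validating __copy_to_guest() is safe at the end because the same
buffer was already validated on the way in.

```c
#include <errno.h>

/* Toy stand-ins for Xen's guest-copy primitives.  The double-underscore
 * variant skips the VA-range check, which is fine here because the same
 * buffer was already validated by copy_from_guest() on entry. */
struct op { int cmd; int result; };
static struct op guest_buf;                       /* pretend guest memory */

static int copy_from_guest(struct op *dst)       { *dst = guest_buf; return 0; }
static int __copy_to_guest(const struct op *src) { guest_buf = *src; return 0; }

/* Consolidated pattern: one writeback after the switch, not one per case. */
static long do_op(void)
{
    long ret = 0;
    int copyback = 0;
    struct op op;

    if ( copy_from_guest(&op) )
        return -EFAULT;

    switch ( op.cmd )
    {
    case 1:
        op.result = 42;
        copyback = 1;         /* was: an inline copy_to_guest() per case */
        break;
    default:
        ret = -ENOSYS;
        break;
    }

    if ( copyback && __copy_to_guest(&op) )
        ret = -EFAULT;

    return ret;
}
```

The field-granular `__copy_field_to_guest(uop, &op, status)` calls in the
patch are the same idea taken one step further: when only one member
changed, only that member is written back.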

> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -51,6 +51,7 @@ long arch_do_domctl(
>      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
>      long ret = 0;
> +    bool_t copyback = 0;
>  
>      switch ( domctl->cmd )
>      {
> @@ -66,7 +67,7 @@ long arch_do_domctl(
>                                  &domctl->u.shadow_op,
>                                  guest_handle_cast(u_domctl, void));
>              rcu_unlock_domain(d);
> -            copy_to_guest(u_domctl, domctl, 1);
> +            copyback = 1;
>          } 
>      }
>      break;
> @@ -150,8 +151,7 @@ long arch_do_domctl(
>          }
>  
>          rcu_unlock_domain(d);
> -
> -        copy_to_guest(u_domctl, domctl, 1);
> +        copyback = 1;
>      }
>      break;
>  
> @@ -408,7 +408,7 @@ long arch_do_domctl(
>              spin_unlock(&d->page_alloc_lock);
>  
>              domctl->u.getmemlist.num_pfns = i;
> -            copy_to_guest(u_domctl, domctl, 1);
> +            copyback = 1;
>          getmemlist_out:
>              rcu_unlock_domain(d);
>          }
> @@ -539,13 +539,11 @@ long arch_do_domctl(
>              ret = -EFAULT;
>  
>      gethvmcontext_out:
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        rcu_unlock_domain(d);
> +        copyback = 1;
>  
>          if ( c.data != NULL )
>              xfree(c.data);
> -
> -        rcu_unlock_domain(d);
>      }
>      break;
>  
> @@ -627,11 +625,9 @@ long arch_do_domctl(
>          domctl->u.address_size.size =
>              is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
>  
> -        ret = 0;
>          rcu_unlock_domain(d);
> -
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        ret = 0;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -676,13 +672,9 @@ long arch_do_domctl(
>  
>          domctl->u.address_size.size = d->arch.physaddr_bitsize;
>  
> -        ret = 0;
>          rcu_unlock_domain(d);
> -
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> -
> -
> +        ret = 0;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -1124,9 +1116,8 @@ long arch_do_domctl(
>  
>      ext_vcpucontext_out:
>          rcu_unlock_domain(d);
> -        if ( (domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext) &&
> -             copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
> +            copyback = 1;
>      }
>      break;
>  
> @@ -1268,10 +1259,10 @@ long arch_do_domctl(
>              domctl->u.gdbsx_guest_memio.len;
>  
>          ret = gdbsx_guest_mem_io(domctl->domain,
>                                   &domctl->u.gdbsx_guest_memio);
> -        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
>  
>          rcu_unlock_domain(d);
> +        if ( !ret )
> +            copyback = 1;
>      }
>      break;
>  
> @@ -1358,10 +1349,9 @@ long arch_do_domctl(
>                  }
>              }
>          }
> -        ret = 0;
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
>          rcu_unlock_domain(d);
> +        ret = 0;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -1485,9 +1475,8 @@ long arch_do_domctl(
>  
>      vcpuextstate_out:
>          rcu_unlock_domain(d);
> -        if ( (domctl->cmd == XEN_DOMCTL_getvcpuextstate) &&
> -             copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        if ( domctl->cmd == XEN_DOMCTL_getvcpuextstate )
> +            copyback = 1;
>      }
>      break;
>  
> @@ -1504,7 +1493,7 @@ long arch_do_domctl(
>                  ret = mem_event_domctl(d, &domctl->u.mem_event_op,
>                                         guest_handle_cast(u_domctl, void));
>              rcu_unlock_domain(d);
> -            copy_to_guest(u_domctl, domctl, 1);
> +            copyback = 1;
>          } 
>      }
>      break;
> @@ -1539,8 +1528,7 @@ long arch_do_domctl(
>                    &domctl->u.audit_p2m.m2p_bad,
>                    &domctl->u.audit_p2m.p2m_bad);
>          rcu_unlock_domain(d);
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>      }
>      break;
>  #endif /* P2M_AUDIT */
> @@ -1573,6 +1561,9 @@ long arch_do_domctl(
>          break;
>      }
>  
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
>      return ret;
>  }
>  
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4407,7 +4407,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>  
>          if ( xatp.space == XENMAPSPACE_gmfn_range )
>          {
> -            if ( rc && copy_to_guest(arg, &xatp, 1) )
> +            if ( rc && __copy_to_guest(arg, &xatp, 1) )
>                  rc = -EFAULT;
>  
>              if ( rc == -EAGAIN )
> @@ -4492,7 +4492,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>          map.nr_entries = min(map.nr_entries, d->arch.pv_domain.nr_e820);
>          if ( copy_to_guest(map.buffer, d->arch.pv_domain.e820,
>                             map.nr_entries) ||
> -             copy_to_guest(arg, &map, 1) )
> +             __copy_to_guest(arg, &map, 1) )
>          {
>              spin_unlock(&d->arch.pv_domain.e820_lock);
>              return -EFAULT;
> @@ -4559,7 +4559,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>  
>          ctxt.map.nr_entries = ctxt.n;
>  
> -        if ( copy_to_guest(arg, &ctxt.map, 1) )
> +        if ( __copy_to_guest(arg, &ctxt.map, 1) )
>              return -EFAULT;
>  
>          return 0;
> @@ -4630,7 +4630,7 @@ long arch_memory_op(int op, XEN_GUEST_HA
>              target.pod_cache_pages = p2m->pod.count;
>              target.pod_entries     = p2m->pod.entry_count;
>  
> -            if ( copy_to_guest(arg, &target, 1) )
> +            if ( __copy_to_guest(arg, &target, 1) )
>              {
>                  rc= -EFAULT;
>                  goto pod_target_out_unlock;
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -384,7 +384,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          irq_status_query.flags |= XENIRQSTAT_needs_eoi;
>          if ( pirq_shared(v->domain, irq) )
>              irq_status_query.flags |= XENIRQSTAT_shared;
> -        ret = copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
> +        ret = __copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
>          break;
>      }
>  
> @@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          ret = physdev_map_pirq(map.domid, map.type, &map.index, &map.pirq,
>                                 &msi);
>  
> -        if ( copy_to_guest(arg, &map, 1) != 0 )
> +        if ( __copy_to_guest(arg, &map, 1) )
>              ret = -EFAULT;
>          break;
>      }
> @@ -440,7 +440,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          if ( ret )
>              break;
>          ret = ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.value);
> -        if ( copy_to_guest(arg, &apic, 1) != 0 )
> +        if ( __copy_to_guest(arg, &apic, 1) )
>              ret = -EFAULT;
>          break;
>      }
> @@ -478,7 +478,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          irq_op.vector = irq_op.irq;
>          ret = 0;
>          
> -        if ( copy_to_guest(arg, &irq_op, 1) != 0 )
> +        if ( __copy_to_guest(arg, &irq_op, 1) )
>              ret = -EFAULT;
>          break;
>      }
> @@ -714,7 +714,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          if ( ret >= 0 )
>          {
>              out.pirq = ret;
> -            ret = copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
> +            ret = __copy_to_guest(arg, &out, 1) ? -EFAULT : 0;
>          }
>  
>          break;
> --- a/xen/arch/x86/platform_hypercall.c
> +++ b/xen/arch/x86/platform_hypercall.c
> @@ -115,7 +115,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>          {
>              op->u.add_memtype.handle = 0;
>              op->u.add_memtype.reg    = ret;
> -            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +            ret = __copy_field_to_guest(u_xenpf_op, op, u.add_memtype) ?
> +                  -EFAULT : 0;
>              if ( ret != 0 )
>                  mtrr_del_page(ret, 0, 0);
>          }
> @@ -157,7 +158,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              op->u.read_memtype.mfn     = mfn;
>              op->u.read_memtype.nr_mfns = nr_mfns;
>              op->u.read_memtype.type    = type;
> -            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +            ret = __copy_field_to_guest(u_xenpf_op, op, u.read_memtype)
> +                  ? -EFAULT : 0;
>          }
>      }
>      break;
> @@ -263,8 +265,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              C(legacy_sectors_per_track);
>  #undef C
>  
> -            ret = (copy_field_to_guest(u_xenpf_op, op,
> -                                      u.firmware_info.u.disk_info)
> +            ret = (__copy_field_to_guest(u_xenpf_op, op,
> +                                         u.firmware_info.u.disk_info)
>                     ? -EFAULT : 0);
>              break;
>          }
> @@ -281,8 +283,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              op->u.firmware_info.u.disk_mbr_signature.mbr_signature =
>                  sig->signature;
>  
> -            ret = (copy_field_to_guest(u_xenpf_op, op,
> -                                      u.firmware_info.u.disk_mbr_signature)
> +            ret = (__copy_field_to_guest(u_xenpf_op, op,
> +                                         u.firmware_info.u.disk_mbr_signature)
>                     ? -EFAULT : 0);
>              break;
>          }
> @@ -299,10 +301,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>                  bootsym(boot_edid_caps) >> 8;
>  
>              ret = 0;
> -            if ( copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> -                                     u.vbeddc_info.capabilities) ||
> -                 copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> -                                     u.vbeddc_info.edid_transfer_time) ||
> +            if ( __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> +                                       u.vbeddc_info.capabilities) ||
> +                 __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
> +                                       u.vbeddc_info.edid_transfer_time) ||
>                   copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,
>                                  bootsym(boot_edid_info), 128) )
>                  ret = -EFAULT;
> @@ -311,8 +313,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              ret = efi_get_info(op->u.firmware_info.index,
>                                 &op->u.firmware_info.u.efi_info);
>              if ( ret == 0 &&
> -                 copy_field_to_guest(u_xenpf_op, op,
> -                                     u.firmware_info.u.efi_info) )
> +                 __copy_field_to_guest(u_xenpf_op, op,
> +                                       u.firmware_info.u.efi_info) )
>                  ret = -EFAULT;
>              break;
>          case XEN_FW_KBD_SHIFT_FLAGS:
> @@ -323,8 +325,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              op->u.firmware_info.u.kbd_shift_flags = bootsym(kbd_shift_flags);
>  
>              ret = 0;
> -            if ( copy_field_to_guest(u_xenpf_op, op,
> -                                     u.firmware_info.u.kbd_shift_flags) )
> +            if ( __copy_field_to_guest(u_xenpf_op, op,
> +                                       u.firmware_info.u.kbd_shift_flags) )
>                  ret = -EFAULT;
>              break;
>          default:
> @@ -340,7 +342,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          ret = efi_runtime_call(&op->u.efi_runtime_call);
>          if ( ret == 0 &&
> -             copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
> +             __copy_field_to_guest(u_xenpf_op, op, u.efi_runtime_call) )
>              ret = -EFAULT;
>          break;
>  
> @@ -412,7 +414,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>              ret = cpumask_to_xenctl_cpumap(&ctlmap, cpumap);
>          free_cpumask_var(cpumap);
>  
> -        if ( ret == 0 && copy_to_guest(u_xenpf_op, op, 1) )
> +        if ( ret == 0 && __copy_field_to_guest(u_xenpf_op, op, u.getidletime) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -503,7 +505,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          put_cpu_maps();
>  
> -        ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +        ret = __copy_field_to_guest(u_xenpf_op, op, u.pcpu_info) ? -EFAULT : 0;
>      }
>      break;
>  
> @@ -538,7 +540,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          put_cpu_maps();
>  
> -        if ( copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
> +        if ( __copy_field_to_guest(u_xenpf_op, op, u.pcpu_version) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -639,7 +641,8 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
>  
>          case XEN_CORE_PARKING_GET:
>              op->u.core_parking.idle_nums = get_cur_idle_nums();
> -            ret = copy_to_guest(u_xenpf_op, op, 1) ? -EFAULT : 0;
> +            ret = __copy_field_to_guest(u_xenpf_op, op, u.core_parking) ?
> +                  -EFAULT : 0;
>              break;
>  
>          default:
> --- a/xen/arch/x86/sysctl.c
> +++ b/xen/arch/x86/sysctl.c
> @@ -93,7 +93,7 @@ long arch_do_sysctl(
>          if ( iommu_enabled )
>              pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
>  
> -        if ( copy_to_guest(u_sysctl, sysctl, 1) )
> +        if ( __copy_field_to_guest(u_sysctl, sysctl, u.physinfo) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -133,7 +133,8 @@ long arch_do_sysctl(
>              }
>          }
>  
> -        ret = ((i <= max_cpu_index) || copy_to_guest(u_sysctl, sysctl, 1))
> +        ret = ((i <= max_cpu_index) ||
> +               __copy_field_to_guest(u_sysctl, sysctl, u.topologyinfo))
>              ? -EFAULT : 0;
>      }
>      break;
> @@ -185,7 +186,8 @@ long arch_do_sysctl(
>              }
>          }
>  
> -        ret = ((i <= max_node_index) || copy_to_guest(u_sysctl, sysctl, 1))
> +        ret = ((i <= max_node_index) ||
> +               __copy_field_to_guest(u_sysctl, sysctl, u.numainfo))
>              ? -EFAULT : 0;
>      }
>      break;
> --- a/xen/arch/x86/x86_64/compat/mm.c
> +++ b/xen/arch/x86/x86_64/compat/mm.c
> @@ -122,7 +122,7 @@ int compat_arch_memory_op(int op, XEN_GU
>  #define XLAT_memory_map_HNDL_buffer(_d_, _s_) ((void)0)
>          XLAT_memory_map(&cmp, nat);
>  #undef XLAT_memory_map_HNDL_buffer
> -        if ( copy_to_guest(arg, &cmp, 1) )
> +        if ( __copy_to_guest(arg, &cmp, 1) )
>              rc = -EFAULT;
>  
>          break;
> @@ -148,7 +148,7 @@ int compat_arch_memory_op(int op, XEN_GU
>  
>          XLAT_pod_target(&cmp, nat);
>  
> -        if ( copy_to_guest(arg, &cmp, 1) )
> +        if ( __copy_to_guest(arg, &cmp, 1) )
>          {
>              if ( rc == __HYPERVISOR_memory_op )
>                  hypercall_cancel_continuation();
> @@ -200,7 +200,7 @@ int compat_arch_memory_op(int op, XEN_GU
>          }
>  
>          xmml.nr_extents = i;
> -        if ( copy_to_guest(arg, &xmml, 1) )
> +        if ( __copy_to_guest(arg, &xmml, 1) )
>              rc = -EFAULT;
>  
>          break;
> @@ -219,7 +219,7 @@ int compat_arch_memory_op(int op, XEN_GU
>          if ( copy_from_guest(&meo, arg, 1) )
>              return -EFAULT;
>          rc = do_mem_event_op(op, meo.domain, (void *) &meo);
> -        if ( !rc && copy_to_guest(arg, &meo, 1) )
> +        if ( !rc && __copy_to_guest(arg, &meo, 1) )
>              return -EFAULT;
>          break;
>      }
> @@ -231,7 +231,7 @@ int compat_arch_memory_op(int op, XEN_GU
>          if ( mso.op == XENMEM_sharing_op_audit )
>              return mem_sharing_audit();
>          rc = do_mem_event_op(op, mso.domain, (void *) &mso);
> -        if ( !rc && copy_to_guest(arg, &mso, 1) )
> +        if ( !rc && __copy_to_guest(arg, &mso, 1) )
>              return -EFAULT;
>          break;
>      }
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -1074,7 +1074,7 @@ long subarch_memory_op(int op, XEN_GUEST
>          }
>  
>          xmml.nr_extents = i;
> -        if ( copy_to_guest(arg, &xmml, 1) )
> +        if ( __copy_to_guest(arg, &xmml, 1) )
>              return -EFAULT;
>  
>          break;
> @@ -1092,7 +1092,7 @@ long subarch_memory_op(int op, XEN_GUEST
>          if ( copy_from_guest(&meo, arg, 1) )
>              return -EFAULT;
>          rc = do_mem_event_op(op, meo.domain, (void *) &meo);
> -        if ( !rc && copy_to_guest(arg, &meo, 1) )
> +        if ( !rc && __copy_to_guest(arg, &meo, 1) )
>              return -EFAULT;
>          break;
>      }
> @@ -1104,7 +1104,7 @@ long subarch_memory_op(int op, XEN_GUEST
>          if ( mso.op == XENMEM_sharing_op_audit )
>              return mem_sharing_audit();
>          rc = do_mem_event_op(op, mso.domain, (void *) &mso);
> -        if ( !rc && copy_to_guest(arg, &mso, 1) )
> +        if ( !rc && __copy_to_guest(arg, &mso, 1) )
>              return -EFAULT;
>          break;
>      }
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -292,8 +292,9 @@ int compat_memory_op(unsigned int cmd, X
>              }
>  
>              cmp.xchg.nr_exchanged = nat.xchg->nr_exchanged;
> -            if ( copy_field_to_guest(guest_handle_cast(compat, compat_memory_exchange_t),
> -                                     &cmp.xchg, nr_exchanged) )
> +            if ( __copy_field_to_guest(guest_handle_cast(compat,
> +                                                         compat_memory_exchange_t),
> +                                       &cmp.xchg, nr_exchanged) )
>                  rc = -EFAULT;
>  
>              if ( rc < 0 )
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -242,6 +242,7 @@ void domctl_lock_release(void)
>  long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
>      long ret = 0;
> +    bool_t copyback = 0;
>      struct xen_domctl curop, *op = &curop;
>  
>      if ( copy_from_guest(op, u_domctl, 1) )
> @@ -469,8 +470,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>                 sizeof(xen_domain_handle_t));
>  
>          op->domain = d->domain_id;
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>      }
>      break;
>  
> @@ -653,8 +653,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>              goto scheduler_op_out;
>  
>          ret = sched_adjust(d, &op->u.scheduler_op);
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>  
>      scheduler_op_out:
>          rcu_unlock_domain(d);
> @@ -686,8 +685,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          getdomaininfo(d, &op->u.getdomaininfo);
>  
>          op->domain = op->u.getdomaininfo.domain;
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>  
>      getdomaininfo_out:
>          rcu_read_unlock(&domlist_read_lock);
> @@ -747,8 +745,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          ret = copy_to_guest(op->u.vcpucontext.ctxt, c.nat, 1);
>  #endif
>  
> -        if ( copy_to_guest(u_domctl, op, 1) || ret )
> +        if ( ret )
>              ret = -EFAULT;
> +        copyback = 1;
>  
>      getvcpucontext_out:
>          xfree(c.nat);
> @@ -786,9 +785,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          op->u.getvcpuinfo.cpu_time = runstate.time[RUNSTATE_running];
>          op->u.getvcpuinfo.cpu      = v->processor;
>          ret = 0;
> -
> -        if ( copy_to_guest(u_domctl, op, 1) )
> -            ret = -EFAULT;
> +        copyback = 1;
>  
>      getvcpuinfo_out:
>          rcu_unlock_domain(d);
> @@ -1045,6 +1042,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>  
>      domctl_lock_release();
>  
> +    if ( copyback && __copy_to_guest(u_domctl, op, 1) )
> +        ret = -EFAULT;
> +
>      return ret;
>  }
>  
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -981,7 +981,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&alloc_unbound, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_alloc_unbound(&alloc_unbound);
> -        if ( (rc == 0) && (copy_to_guest(arg, &alloc_unbound, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &alloc_unbound, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -991,7 +991,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_interdomain, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_interdomain(&bind_interdomain);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_interdomain, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_interdomain, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1001,7 +1001,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_virq, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_virq(&bind_virq);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_virq, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_virq, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1011,7 +1011,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_ipi, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_ipi(&bind_ipi);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_ipi, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_ipi, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1021,7 +1021,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&bind_pirq, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_bind_pirq(&bind_pirq);
> -        if ( (rc == 0) && (copy_to_guest(arg, &bind_pirq, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &bind_pirq, 1) )
>              rc = -EFAULT; /* Cleaning up here would be a mess! */
>          break;
>      }
> @@ -1047,7 +1047,7 @@ long do_event_channel_op(int cmd, XEN_GU
>          if ( copy_from_guest(&status, arg, 1) != 0 )
>              return -EFAULT;
>          rc = evtchn_status(&status);
> -        if ( (rc == 0) && (copy_to_guest(arg, &status, 1) != 0) )
> +        if ( !rc && __copy_to_guest(arg, &status, 1) )
>              rc = -EFAULT;
>          break;
>      }
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1115,12 +1115,13 @@ gnttab_unmap_grant_ref(
>  
>          for ( i = 0; i < c; i++ )
>          {
> -            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
> +            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>                  goto fault;
>              __gnttab_unmap_grant_ref(&op, &(common[i]));
>              ++partial_done;
> -            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
> +            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>                  goto fault;
> +            guest_handle_add_offset(uop, 1);
>          }
>  
>          flush_tlb_mask(current->domain->domain_dirty_cpumask);
> @@ -1177,12 +1178,13 @@ gnttab_unmap_and_replace(
>          
>          for ( i = 0; i < c; i++ )
>          {
> -            if ( unlikely(__copy_from_guest_offset(&op, uop, done+i, 1)) )
> +            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>                  goto fault;
>              __gnttab_unmap_and_replace(&op, &(common[i]));
>              ++partial_done;
> -            if ( unlikely(__copy_to_guest_offset(uop, done+i, &op, 1)) )
> +            if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>                  goto fault;
> +            guest_handle_add_offset(uop, 1);
>          }
>          
>          flush_tlb_mask(current->domain->domain_dirty_cpumask);
> @@ -1396,7 +1398,7 @@ gnttab_setup_table(
>   out2:
>      rcu_unlock_domain(d);
>   out1:
> -    if ( unlikely(copy_to_guest(uop, &op, 1)) )
> +    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>          return -EFAULT;
>  
>      return 0;
> @@ -1446,7 +1448,7 @@ gnttab_query_size(
>      rcu_unlock_domain(d);
>  
>   query_out:
> -    if ( unlikely(copy_to_guest(uop, &op, 1)) )
> +    if ( unlikely(__copy_to_guest(uop, &op, 1)) )
>          return -EFAULT;
>  
>      return 0;
> @@ -1542,7 +1544,7 @@ gnttab_transfer(
>              return i;
>  
>          /* Read from caller address space. */
> -        if ( unlikely(__copy_from_guest_offset(&gop, uop, i, 1)) )
> +        if ( unlikely(__copy_from_guest(&gop, uop, 1)) )
>          {
>              gdprintk(XENLOG_INFO, "gnttab_transfer: error reading req %d/%d\n",
>                      i, count);
> @@ -1701,12 +1703,13 @@ gnttab_transfer(
>          gop.status = GNTST_okay;
>  
>      copyback:
> -        if ( unlikely(__copy_to_guest_offset(uop, i, &gop, 1)) )
> +        if ( unlikely(__copy_field_to_guest(uop, &gop, status)) )
>          {
>              gdprintk(XENLOG_INFO, "gnttab_transfer: error writing resp "
>                       "%d/%d\n", i, count);
>              return -EFAULT;
>          }
> +        guest_handle_add_offset(uop, 1);
>      }
>  
>      return 0;
> @@ -2143,17 +2146,18 @@ gnttab_copy(
>      {
>          if (i && hypercall_preempt_check())
>              return i;
> -        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
> +        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>              return -EFAULT;
>          __gnttab_copy(&op);
> -        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
> +        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>              return -EFAULT;
> +        guest_handle_add_offset(uop, 1);
>      }
>      return 0;
>  }
>  
>  static long
> -gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
> +gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
>  {
>      gnttab_set_version_t op;
>      struct domain *d = current->domain;
> @@ -2265,7 +2269,7 @@ out_unlock:
>  out:
>      op.version = gt->gt_version;
>  
> -    if (copy_to_guest(uop, &op, 1))
> +    if (__copy_to_guest(uop, &op, 1))
>          res = -EFAULT;
>  
>      return res;
> @@ -2329,14 +2333,14 @@ gnttab_get_status_frames(XEN_GUEST_HANDL
>  out2:
>      rcu_unlock_domain(d);
>  out1:
> -    if ( unlikely(copy_to_guest(uop, &op, 1)) )
> +    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>          return -EFAULT;
>  
>      return 0;
>  }
>  
>  static long
> -gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
> +gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
>  {
>      gnttab_get_version_t op;
>      struct domain *d;
> @@ -2359,7 +2363,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARA
>  
>      rcu_unlock_domain(d);
>  
> -    if ( copy_to_guest(uop, &op, 1) )
> +    if ( __copy_field_to_guest(uop, &op, version) )
>          return -EFAULT;
>  
>      return 0;
> @@ -2421,7 +2425,7 @@ out:
>  }
>  
>  static long
> -gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
> +gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
>                        unsigned int count)
>  {
>      int i;
> @@ -2431,11 +2435,12 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE_P
>      {
>          if ( i && hypercall_preempt_check() )
>              return i;
> -        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
> +        if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>              return -EFAULT;
>          op.status = __gnttab_swap_grant_ref(op.ref_a, op.ref_b);
> -        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
> +        if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
>              return -EFAULT;
> +        guest_handle_add_offset(uop, 1);
>      }
>      return 0;
>  }
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -359,7 +359,7 @@ static long memory_exchange(XEN_GUEST_HA
>          {
>              exch.nr_exchanged = i << in_chunk_order;
>              rcu_unlock_domain(d);
> -            if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
> +            if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
>                  return -EFAULT;
>              return hypercall_create_continuation(
>                  __HYPERVISOR_memory_op, "lh", XENMEM_exchange, arg);
> @@ -500,7 +500,7 @@ static long memory_exchange(XEN_GUEST_HA
>      }
>  
>      exch.nr_exchanged = exch.in.nr_extents;
> -    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
> +    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
>          rc = -EFAULT;
>      rcu_unlock_domain(d);
>      return rc;
> @@ -527,7 +527,7 @@ static long memory_exchange(XEN_GUEST_HA
>      exch.nr_exchanged = i << in_chunk_order;
>  
>   fail_early:
> -    if ( copy_field_to_guest(arg, &exch, nr_exchanged) )
> +    if ( __copy_field_to_guest(arg, &exch, nr_exchanged) )
>          rc = -EFAULT;
>      return rc;
>  }
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -30,6 +30,7 @@
>  long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
>      long ret = 0;
> +    int copyback = -1;
>      struct xen_sysctl curop, *op = &curop;
>      static DEFINE_SPINLOCK(sysctl_lock);
>  
> @@ -55,42 +56,28 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>      switch ( op->cmd )
>      {
>      case XEN_SYSCTL_readconsole:
> -    {
>          ret = xsm_readconsole(op->u.readconsole.clear);
>          if ( ret )
>              break;
>  
>          ret = read_console_ring(&op->u.readconsole);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_tbuf_op:
> -    {
>          ret = xsm_tbufcontrol();
>          if ( ret )
>              break;
>  
>          ret = tb_control(&op->u.tbuf_op);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>      
>      case XEN_SYSCTL_sched_id:
> -    {
>          ret = xsm_sched_id();
>          if ( ret )
>              break;
>  
>          op->u.sched_id.sched_id = sched_id();
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -        else
> -            ret = 0;
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_getdomaininfolist:
>      { 
> @@ -129,38 +116,27 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>              break;
>          
>          op->u.getdomaininfolist.num_domains = num_domains;
> -
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
>      }
>      break;
>  
>  #ifdef PERF_COUNTERS
>      case XEN_SYSCTL_perfc_op:
> -    {
>          ret = xsm_perfcontrol();
>          if ( ret )
>              break;
>  
>          ret = perfc_control(&op->u.perfc_op);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  #endif
>  
>  #ifdef LOCK_PROFILE
>      case XEN_SYSCTL_lockprof_op:
> -    {
>          ret = xsm_lockprof();
>          if ( ret )
>              break;
>  
>          ret = spinlock_profile_control(&op->u.lockprof_op);
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  #endif
>      case XEN_SYSCTL_debug_keys:
>      {
> @@ -179,6 +155,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>              handle_keypress(c, guest_cpu_user_regs());
>          }
>          ret = 0;
> +        copyback = 0;
>      }
>      break;
>  
> @@ -193,22 +170,21 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>          if ( ret )
>              break;
>  
> +        ret = -EFAULT;
>          for ( i = 0; i < nr_cpus; i++ )
>          {
>              cpuinfo.idletime = get_cpu_idle_time(i);
>  
> -            ret = -EFAULT;
>              if ( copy_to_guest_offset(op->u.getcpuinfo.info, i, &cpuinfo, 1) )
>                  goto out;
>          }
>  
>          op->u.getcpuinfo.nr_cpus = i;
> -        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
> +        ret = 0;
>      }
>      break;
>  
>      case XEN_SYSCTL_availheap:
> -    { 
>          ret = xsm_availheap();
>          if ( ret )
>              break;
> @@ -218,47 +194,26 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>              op->u.availheap.min_bitwidth,
>              op->u.availheap.max_bitwidth);
>          op->u.availheap.avail_bytes <<= PAGE_SHIFT;
> -
> -        ret = copy_to_guest(u_sysctl, op, 1) ? -EFAULT : 0;
> -    }
> -    break;
> +        break;
>  
>  #ifdef HAS_ACPI
>      case XEN_SYSCTL_get_pmstat:
> -    {
>          ret = xsm_get_pmstat();
>          if ( ret )
>              break;
>  
>          ret = do_get_pm_info(&op->u.get_pmstat);
> -        if ( ret )
> -            break;
> -
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -        {
> -            ret = -EFAULT;
> -            break;
> -        }
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_pm_op:
> -    {
>          ret = xsm_pm_op();
>          if ( ret )
>              break;
>  
>          ret = do_pm_op(&op->u.pm_op);
> -        if ( ret && (ret != -EAGAIN) )
> -            break;
> -
> -        if ( copy_to_guest(u_sysctl, op, 1) )
> -        {
> -            ret = -EFAULT;
> -            break;
> -        }
> -    }
> -    break;
> +        if ( ret == -EAGAIN )
> +            copyback = 1;
> +        break;
>  #endif
>  
>      case XEN_SYSCTL_page_offline_op:
> @@ -317,41 +272,39 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
>          }
>  
>          xfree(status);
> +        copyback = 0;
>      }
>      break;
>  
>      case XEN_SYSCTL_cpupool_op:
> -    {
>          ret = xsm_cpupool_op();
>          if ( ret )
>              break;
>  
>          ret = cpupool_do_sysctl(&op->u.cpupool_op);
> -        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  
>      case XEN_SYSCTL_scheduler_op:
> -    {
>          ret = xsm_sched_op();
>          if ( ret )
>              break;
>  
>          ret = sched_adjust_global(&op->u.scheduler_op);
> -        if ( (ret == 0) && copy_to_guest(u_sysctl, op, 1) )
> -            ret = -EFAULT;
> -    }
> -    break;
> +        break;
>  
>      default:
>          ret = arch_do_sysctl(op, u_sysctl);
> +        copyback = 0;
>          break;
>      }
>  
>   out:
>      spin_unlock(&sysctl_lock);
>  
> +    if ( copyback && (!ret || copyback > 0) &&
> +         __copy_to_guest(u_sysctl, op, 1) )
> +        ret = -EFAULT;
> +
>      return ret;
>  }
>  
> --- a/xen/common/xenoprof.c
> +++ b/xen/common/xenoprof.c
> @@ -449,7 +449,7 @@ static int add_passive_list(XEN_GUEST_HA
>              current->domain, __pa(d->xenoprof->rawbuf),
>              passive.buf_gmaddr, d->xenoprof->npages);
>  
> -    if ( copy_to_guest(arg, &passive, 1) )
> +    if ( __copy_to_guest(arg, &passive, 1) )
>      {
>          put_domain(d);
>          return -EFAULT;
> @@ -604,7 +604,7 @@ static int xenoprof_op_init(XEN_GUEST_HA
>      if ( xenoprof_init.is_primary )
>          xenoprof_primary_profiler = current->domain;
>  
> -    return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
> +    return __copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0;
>  }
>  
>  #define ret_t long
> @@ -651,10 +651,7 @@ static int xenoprof_op_get_buffer(XEN_GU
>              d, __pa(d->xenoprof->rawbuf), xenoprof_get_buffer.buf_gmaddr,
>              d->xenoprof->npages);
>  
> -    if ( copy_to_guest(arg, &xenoprof_get_buffer, 1) )
> -        return -EFAULT;
> -
> -    return 0;
> +    return __copy_to_guest(arg, &xenoprof_get_buffer, 1) ? -EFAULT : 0;
>  }
>  
>  #define NONPRIV_OP(op) ( (op == XENOPROF_init)          \
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -586,7 +586,7 @@ int iommu_do_domctl(
>              domctl->u.get_device_group.num_sdevs = ret;
>              ret = 0;
>          }
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> +        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
>              ret = -EFAULT;
>          rcu_unlock_domain(d);
>      }
> 
> 
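[Editorial note: the recurring pattern in the patch above — a checked copy_from_guest() validates the guest buffer once on entry, each case merely sets a copyback flag, and a single unchecked __copy_to_guest() at the end writes the result back — can be sketched in miniature. All names below are illustrative stand-ins, not the real Xen guest-access API; the real primitives operate on guest handles, not plain pointers.]

```c
#include <assert.h>
#include <string.h>

#define EFAULT 14
#define ENOSYS 38

struct op { int cmd; int result; };

/* Checked copy-in: in Xen this validates the guest range; modeled
 * here as a plain memcpy that always succeeds (0 == success). */
static int copy_from_guest(struct op *dst, const struct op *guest)
{
    memcpy(dst, guest, sizeof(*dst));
    return 0;
}

/* Unchecked copy-out: legal only because copy_from_guest() above
 * already validated the very same buffer. */
static int __copy_to_guest(struct op *guest, const struct op *src)
{
    memcpy(guest, src, sizeof(*src));
    return 0;
}

static long do_op(struct op *guest)
{
    struct op curop, *op = &curop;
    long ret = 0;
    int copyback = 0;

    if ( copy_from_guest(op, guest) )
        return -EFAULT;

    switch ( op->cmd )
    {
    case 1:                 /* a handler that returns data to the guest */
        op->result = 42;
        copyback = 1;
        break;
    default:
        ret = -ENOSYS;
        break;
    }

    /* One copy-back site replaces a copy_to_guest() in every case. */
    if ( copyback && __copy_to_guest(guest, op) )
        ret = -EFAULT;

    return ret;
}
```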
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Dec 07 15:14:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgzd6-0001Cv-9V; Fri, 07 Dec 2012 15:14:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgzd4-0001Ck-Jg
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:14:18 +0000
Received: from [85.158.137.99:31600] by server-15.bemta-3.messagelabs.com id
	4D/9E-23779-9C702C05; Fri, 07 Dec 2012 15:14:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354893255!17731140!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23810 invoked from network); 7 Dec 2012 15:14:16 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:14:16 -0000
Received: by mail-wi0-f169.google.com with SMTP id hq12so1447176wib.2
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:14:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=IuMoH/E+ffjji6majHe7w/0sVx98z+sTCUTKzwZV6IE=;
	b=JX6p6HZwxz3Myjx5ZHGVfQY48FaM3EGOcmx6Jg8wtrZJB7EmxPeRaUM5OXesdmegqw
	F7rLhhcnoWj7aHn/DgSSlFhVHnLSW3vGyOlc7zrxFRL7Whsr8cR4bSoCTcd4knpRCTtR
	Ns4tsa2JvvZ9TBLoDvkL6nUiIErHLuu28gbJbdBjQgVwGKldJs1eMacofrgjIR6f9yl0
	RCW/8iVgGg9Rp1CGRo807Ex12zdc9tSaGXDGpgAfbroEyRddpeDVlt6Q0pZlAPNCO5F5
	X/Q6M0sysvZwiI7CseIXBgw4QaYBLOnz3rpq+QjOGOTjqvBbKUZAel12YIzIuLk8VJIq
	ygOg==
Received: by 10.180.87.225 with SMTP id bb1mr9143449wib.20.1354893255815;
	Fri, 07 Dec 2012 07:14:15 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id cf6sm16028679wib.3.2012.12.07.07.14.05
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:14:15 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:13:58 +0000
From: Keir Fraser <keir@xen.org>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	Jan Beulich <JBeulich@suse.com>
Message-ID: <CCE7B836.552EF%keir@xen.org>
Thread-Topic: [PATCH] xen: centralize accounting for domain tot_pages
Thread-Index: Ac3UjXrWFdtpXmRfZkOZj/SZreQb6g==
In-Reply-To: <8be77fb4-3393-4f49-99f6-2b6c9f89bc18@default>
Mime-version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: centralize accounting for domain
	tot_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/11/2012 21:50, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> xen: centralize accounting for domain tot_pages
> 
> Provide and use a common function for all adjustments to a
> domain's tot_pages counter in anticipation of future and/or
> out-of-tree patches that must adjust related counters
> atomically.
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

Applied.

 -- Keir

>  arch/x86/mm.c             |    4 ++--
>  arch/x86/mm/mem_sharing.c |    4 ++--
>  common/grant_table.c      |    2 +-
>  common/memory.c           |    2 +-
>  common/page_alloc.c       |   10 ++++++++--
>  include/xen/mm.h          |    2 ++
>  6 files changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index ab94b02..3887ca6 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3842,7 +3842,7 @@ int donate_page(
>      {
>          if ( d->tot_pages >= d->max_pages )
>              goto fail;
> -        d->tot_pages++;
> +        domain_adjust_tot_pages(d, 1);
>      }
>  
>      page->count_info = PGC_allocated | 1;
> @@ -3892,7 +3892,7 @@ int steal_page(
>      } while ( (y = cmpxchg(&page->count_info, x, x | 1)) != x );
>  
>      /* Unlink from original owner. */
> -    if ( !(memflags & MEMF_no_refcount) && !--d->tot_pages )
> +    if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
>          drop_dom_ref = 1;
>      page_list_del(page, &d->page_list);
>  
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 5103285..e91aac5 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -639,7 +639,7 @@ static int page_make_sharable(struct domain *d,
>      }
>  
>      page_set_owner(page, dom_cow);
> -    d->tot_pages--;
> +    domain_adjust_tot_pages(d, -1);
>      drop_dom_ref = (d->tot_pages == 0);
>      page_list_del(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> @@ -680,7 +680,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
>      ASSERT(page_get_owner(page) == dom_cow);
>      page_set_owner(page, d);
>  
> -    if ( d->tot_pages++ == 0 )
> +    if ( domain_adjust_tot_pages(d, 1) == 1 )
>          get_domain(d);
>      page_list_add_tail(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 7912769..ca8d861 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1656,7 +1656,7 @@ gnttab_transfer(
>          }
>  
>          /* Okay, add the page to 'e'. */
> -        if ( unlikely(e->tot_pages++ == 0) )
> +        if ( unlikely(domain_adjust_tot_pages(e, 1) == 1) )
>              get_knownalive_domain(e);
>          page_list_add_tail(page, &e->page_list);
>          page_set_owner(page, e);
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 83e2666..9842ea9 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -454,7 +454,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>                               (j * (1UL << exch.out.extent_order)));
>  
>                  spin_lock(&d->page_alloc_lock);
> -                d->tot_pages -= dec_count;
> +                domain_adjust_tot_pages(d, -dec_count);
>                  drop_dom_ref = (dec_count && !d->tot_pages);
>                  spin_unlock(&d->page_alloc_lock);
>  
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 15ebc66..e273bb7 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -239,6 +239,12 @@ static long midsize_alloc_zone_pages;
>  
>  static DEFINE_SPINLOCK(heap_lock);
>  
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> +{
> +    ASSERT(spin_is_locked(&d->page_alloc_lock));
> +    return d->tot_pages += pages;
> +}
> +
>  static unsigned long init_node_heap(int node, unsigned long mfn,
>                                      unsigned long nr, bool_t *use_tail)
>  {
> @@ -1291,7 +1297,7 @@ int assign_pages(
>          if ( unlikely(d->tot_pages == 0) )
>              get_knownalive_domain(d);
>  
> -        d->tot_pages += 1 << order;
> +        domain_adjust_tot_pages(d, 1 << order);
>      }
>  
>      for ( i = 0; i < (1 << order); i++ )
> @@ -1375,7 +1381,7 @@ void free_domheap_pages(struct page_info *pg, unsigned
> int order)
>              page_list_del2(&pg[i], &d->page_list, &d->arch.relmem_list);
>          }
>  
> -        d->tot_pages -= 1 << order;
> +        domain_adjust_tot_pages(d, -(1 << order));
>          drop_dom_ref = (d->tot_pages == 0);
>  
>          spin_unlock_recursive(&d->page_alloc_lock);
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 64a0cc1..00b1915 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -48,6 +48,8 @@ void free_xenheap_pages(void *v, unsigned int order);
>  #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
>  #define free_xenheap_page(v) (free_xenheap_pages(v,0))
>  
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
> +
>  /* Domain suballocator. These functions are *not* interrupt-safe.*/
>  void init_domheap_pages(paddr_t ps, paddr_t pe);
>  struct page_info *alloc_domheap_pages(



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:14:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgzd6-0001Cv-9V; Fri, 07 Dec 2012 15:14:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgzd4-0001Ck-Jg
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:14:18 +0000
Received: from [85.158.137.99:31600] by server-15.bemta-3.messagelabs.com id
	4D/9E-23779-9C702C05; Fri, 07 Dec 2012 15:14:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1354893255!17731140!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23810 invoked from network); 7 Dec 2012 15:14:16 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:14:16 -0000
Received: by mail-wi0-f169.google.com with SMTP id hq12so1447176wib.2
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:14:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=IuMoH/E+ffjji6majHe7w/0sVx98z+sTCUTKzwZV6IE=;
	b=JX6p6HZwxz3Myjx5ZHGVfQY48FaM3EGOcmx6Jg8wtrZJB7EmxPeRaUM5OXesdmegqw
	F7rLhhcnoWj7aHn/DgSSlFhVHnLSW3vGyOlc7zrxFRL7Whsr8cR4bSoCTcd4knpRCTtR
	Ns4tsa2JvvZ9TBLoDvkL6nUiIErHLuu28gbJbdBjQgVwGKldJs1eMacofrgjIR6f9yl0
	RCW/8iVgGg9Rp1CGRo807Ex12zdc9tSaGXDGpgAfbroEyRddpeDVlt6Q0pZlAPNCO5F5
	X/Q6M0sysvZwiI7CseIXBgw4QaYBLOnz3rpq+QjOGOTjqvBbKUZAel12YIzIuLk8VJIq
	ygOg==
Received: by 10.180.87.225 with SMTP id bb1mr9143449wib.20.1354893255815;
	Fri, 07 Dec 2012 07:14:15 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id cf6sm16028679wib.3.2012.12.07.07.14.05
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:14:15 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:13:58 +0000
From: Keir Fraser <keir@xen.org>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	Jan Beulich <JBeulich@suse.com>
Message-ID: <CCE7B836.552EF%keir@xen.org>
Thread-Topic: [PATCH] xen: centralize accounting for domain tot_pages
Thread-Index: Ac3UjXrWFdtpXmRfZkOZj/SZreQb6g==
In-Reply-To: <8be77fb4-3393-4f49-99f6-2b6c9f89bc18@default>
Mime-version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: centralize accounting for domain
	tot_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/11/2012 21:50, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> xen: centralize accounting for domain tot_pages
> 
> Provide and use a common function for all adjustments to a
> domain's tot_pages counter in anticipation of future and/or
> out-of-tree patches that must adjust related counters
> atomically.
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

Applied.

 -- Keir

>  arch/x86/mm.c             |    4 ++--
>  arch/x86/mm/mem_sharing.c |    4 ++--
>  common/grant_table.c      |    2 +-
>  common/memory.c           |    2 +-
>  common/page_alloc.c       |   10 ++++++++--
>  include/xen/mm.h          |    2 ++
>  6 files changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index ab94b02..3887ca6 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3842,7 +3842,7 @@ int donate_page(
>      {
>          if ( d->tot_pages >= d->max_pages )
>              goto fail;
> -        d->tot_pages++;
> +        domain_adjust_tot_pages(d, 1);
>      }
>  
>      page->count_info = PGC_allocated | 1;
> @@ -3892,7 +3892,7 @@ int steal_page(
>      } while ( (y = cmpxchg(&page->count_info, x, x | 1)) != x );
>  
>      /* Unlink from original owner. */
> -    if ( !(memflags & MEMF_no_refcount) && !--d->tot_pages )
> +    if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
>          drop_dom_ref = 1;
>      page_list_del(page, &d->page_list);
>  
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 5103285..e91aac5 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -639,7 +639,7 @@ static int page_make_sharable(struct domain *d,
>      }
>  
>      page_set_owner(page, dom_cow);
> -    d->tot_pages--;
> +    domain_adjust_tot_pages(d, -1);
>      drop_dom_ref = (d->tot_pages == 0);
>      page_list_del(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> @@ -680,7 +680,7 @@ static int page_make_private(struct domain *d, struct
> page_info *page)
>      ASSERT(page_get_owner(page) == dom_cow);
>      page_set_owner(page, d);
>  
> -    if ( d->tot_pages++ == 0 )
> +    if ( domain_adjust_tot_pages(d, 1) == 1 )
>          get_domain(d);
>      page_list_add_tail(page, &d->page_list);
>      spin_unlock(&d->page_alloc_lock);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 7912769..ca8d861 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1656,7 +1656,7 @@ gnttab_transfer(
>          }
>  
>          /* Okay, add the page to 'e'. */
> -        if ( unlikely(e->tot_pages++ == 0) )
> +        if ( unlikely(domain_adjust_tot_pages(e, 1) == 1) )
>              get_knownalive_domain(e);
>          page_list_add_tail(page, &e->page_list);
>          page_set_owner(page, e);
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 83e2666..9842ea9 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -454,7 +454,7 @@ static long
> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>                               (j * (1UL << exch.out.extent_order)));
>  
>                  spin_lock(&d->page_alloc_lock);
> -                d->tot_pages -= dec_count;
> +                domain_adjust_tot_pages(d, -dec_count);
>                  drop_dom_ref = (dec_count && !d->tot_pages);
>                  spin_unlock(&d->page_alloc_lock);
>  
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 15ebc66..e273bb7 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -239,6 +239,12 @@ static long midsize_alloc_zone_pages;
>  
>  static DEFINE_SPINLOCK(heap_lock);
>  
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> +{
> +    ASSERT(spin_is_locked(&d->page_alloc_lock));
> +    return d->tot_pages += pages;
> +}
> +
>  static unsigned long init_node_heap(int node, unsigned long mfn,
>                                      unsigned long nr, bool_t *use_tail)
>  {
> @@ -1291,7 +1297,7 @@ int assign_pages(
>          if ( unlikely(d->tot_pages == 0) )
>              get_knownalive_domain(d);
>  
> -        d->tot_pages += 1 << order;
> +        domain_adjust_tot_pages(d, 1 << order);
>      }
>  
>      for ( i = 0; i < (1 << order); i++ )
> @@ -1375,7 +1381,7 @@ void free_domheap_pages(struct page_info *pg, unsigned
> int order)
>              page_list_del2(&pg[i], &d->page_list, &d->arch.relmem_list);
>          }
>  
> -        d->tot_pages -= 1 << order;
> +        domain_adjust_tot_pages(d, -(1 << order));
>          drop_dom_ref = (d->tot_pages == 0);
>  
>          spin_unlock_recursive(&d->page_alloc_lock);
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 64a0cc1..00b1915 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -48,6 +48,8 @@ void free_xenheap_pages(void *v, unsigned int order);
>  #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
>  #define free_xenheap_page(v) (free_xenheap_pages(v,0))
>  
> +unsigned long domain_adjust_tot_pages(struct domain *d, long pages);
> +
>  /* Domain suballocator. These functions are *not* interrupt-safe.*/
>  void init_domheap_pages(paddr_t ps, paddr_t pe);
>  struct page_info *alloc_domheap_pages(



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:15:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:15:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tgzdv-0001I0-O1; Fri, 07 Dec 2012 15:15:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tgzdu-0001Hj-O4
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:15:10 +0000
Received: from [85.158.143.99:5360] by server-2.bemta-4.messagelabs.com id
	11/7B-30861-EF702C05; Fri, 07 Dec 2012 15:15:10 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354893308!28490323!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11340 invoked from network); 7 Dec 2012 15:15:09 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:15:09 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so1386725wib.14
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 07:15:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=eWzEEs4uuiWRMDGlBR7coVMLufTIsf45xNok7qBx3EE=;
	b=iKsBFOvdhIkp1dA4RgsYMWa5v4oN42BpdwP/ZqdGvRjYRKlL1mT52DnXVzrpABrtSr
	7K+/F7CbPH54lOvX1uLMdJUz7PQHtdje4DyBWp+X5zponNtfW3yh4FcqnuaEqrGBtazE
	dIN4NXpToNkl5ync65Udsw5+moQtTPBijoD1FOqG1SqgMzn+dyuiuFVQ3V4oW+tbELTM
	6P4ze9n1cwZNQDxlR0EH+wJproZ0BjuxpnLuyAgZYd/qe0JfYFdT4zVJ7ztA3h+I223c
	aypFSo/ep3oiqKr5SZBPEnDDHEXLOoUJTGG2v8lVRWprGScppBWNg6f2gd6TXuMoElnW
	NqgA==
Received: by 10.180.93.3 with SMTP id cq3mr3601030wib.1.1354893308586;
	Fri, 07 Dec 2012 07:15:08 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id s12sm27575376wik.11.2012.12.07.07.15.06
	(version=SSLv3 cipher=OTHER); Fri, 07 Dec 2012 07:15:07 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 07 Dec 2012 15:15:00 +0000
From: Keir Fraser <keir@xen.org>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	Jan Beulich <JBeulich@suse.com>
Message-ID: <CCE7B874.552F0%keir@xen.org>
Thread-Topic: [PATCH] xen: reserve next two XENMEM_ op numbers for
	future/out-of-tree use
Thread-Index: Ac3UjZ/LDT8EkneYIEKX3eWcns1LWg==
In-Reply-To: <36f4ae2f-4fbc-4b14-a084-7b336a052a7a@default>
Mime-version: 1.0
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: reserve next two XENMEM_ op numbers
 for future/out-of-tree use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/11/2012 22:03, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> xen: reserve next two XENMEM_ op numbers for future/out-of-tree use
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

There was some discussion on whether these numbers should just have
XENMEM_reserved_oracle_{1,2} definitions, or similar. Or even just reserved
by a header comment. Does anyone have any strong opinions?

 -- Keir

> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index f1ddbc0..3ee2902 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -421,6 +421,12 @@ struct xen_mem_sharing_op {
>  typedef struct xen_mem_sharing_op xen_mem_sharing_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>  
> +/*
> + * Reserve ops for future/out-of-tree "claim" patches (Oracle)
> + */
> +#define XENMEM_claim_pages                  24
> +#define XENMEM_get_unclaimed_pages          25
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:16:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzfB-0001RP-71; Fri, 07 Dec 2012 15:16:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tgzf9-0001RC-U1
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 15:16:28 +0000
Received: from [85.158.139.211:23624] by server-6.bemta-5.messagelabs.com id
	B9/CF-30498-B4802C05; Fri, 07 Dec 2012 15:16:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354893385!18026901!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27811 invoked from network); 7 Dec 2012 15:16:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:16:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16227270"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 15:15:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 15:15:53 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TgzeX-0008VB-In;
	Fri, 07 Dec 2012 15:15:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TgzeX-00007a-HP;
	Fri, 07 Dec 2012 15:15:49 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14599-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 15:15:49 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14599: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14599 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14599/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:26:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TgzoH-0001xE-99; Fri, 07 Dec 2012 15:25:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1TgzoF-0001x9-9R
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:25:51 +0000
Received: from [85.158.143.99:31851] by server-1.bemta-4.messagelabs.com id
	4C/1A-28401-E7A02C05; Fri, 07 Dec 2012 15:25:50 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354893939!27535358!1
X-Originating-IP: [65.55.88.13]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16614 invoked from network); 7 Dec 2012 15:25:41 -0000
Received: from tx2ehsobe003.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.13)
	by server-5.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Dec 2012 15:25:41 -0000
Received: from mail47-tx2-R.bigfish.com (10.9.14.239) by
	TX2EHSOBE003.bigfish.com (10.9.40.23) with Microsoft SMTP Server id
	14.1.225.23; Fri, 7 Dec 2012 15:25:39 +0000
Received: from mail47-tx2 (localhost [127.0.0.1])	by mail47-tx2-R.bigfish.com
	(Postfix) with ESMTP id 4F409420224;
	Fri,  7 Dec 2012 15:25:39 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1de0h1202h1e76h1d1ah1d2ahzz8275bhz2dh668h839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1155h)
Received: from mail47-tx2 (localhost.localdomain [127.0.0.1]) by mail47-tx2
	(MessageSwitch) id 1354893936954877_7822;
	Fri,  7 Dec 2012 15:25:36 +0000 (UTC)
Received: from TX2EHSMHS008.bigfish.com (unknown [10.9.14.254])	by
	mail47-tx2.bigfish.com (Postfix) with ESMTP id DEC424C0085;
	Fri,  7 Dec 2012 15:25:36 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	TX2EHSMHS008.bigfish.com (10.9.99.108) with Microsoft SMTP Server id
	14.1.225.23; Fri, 7 Dec 2012 15:25:36 +0000
X-WSS-ID: 0MEO2UM-02-D84-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 29B91C8057;	Fri,  7 Dec 2012 09:25:33 -0600 (CST)
Received: from SAUSEXDAG01.amd.com (163.181.55.1) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 7 Dec 2012 09:25:41 -0600
Received: from [10.234.222.132] (163.181.55.254) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server id 14.2.318.4; Fri, 7 Dec 2012
	09:25:34 -0600
Message-ID: <50C20A6D.3010803@amd.com>
Date: Fri, 7 Dec 2012 10:25:33 -0500
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121025 Thunderbird/16.0.2
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
	<50C1E41402000078000AEED6@nat28.tlf.novell.com>
	<50C1F334.20802@amd.com>
	<50C2147C02000078000AEFCC@nat28.tlf.novell.com>
In-Reply-To: <50C2147C02000078000AEFCC@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: Ian.Campbell@citrix.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/ucode: Improve error handling and
 container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 12/07/2012 10:08 AM, Jan Beulich wrote:
>>>> On 07.12.12 at 14:46, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> Do we really need what microcode_init() does?
>
> Oh yes, absolutely. How else would boot time microcode loading
> work for secondary CPUs without this? Note that the notifier only
> deals with the CPU_DEAD case.

start_secondary() -> microcode_resume_cpu() -> 
microcode_ops->apply_microcode() will update non-boot CPUs in most cases.

This won't quite work when secondary CPU is different from (or at 
different patch level than) boot CPU but that can be fixed.

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:42:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th04R-0002OL-2k; Fri, 07 Dec 2012 15:42:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Th04P-0002OG-RL
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:42:33 +0000
Received: from [85.158.137.99:56310] by server-1.bemta-3.messagelabs.com id
	A5/19-12169-86E02C05; Fri, 07 Dec 2012 15:42:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1354894952!13506310!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 877 invoked from network); 7 Dec 2012 15:42:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 15:42:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 15:42:30 +0000
Message-Id: <50C21C7802000078000AF055@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 15:42:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>
References: <1354818491-9723-1-git-send-email-boris.ostrovsky@amd.com>
	<50C1E41402000078000AEED6@nat28.tlf.novell.com>
	<50C1F334.20802@amd.com>
	<50C2147C02000078000AEFCC@nat28.tlf.novell.com>
	<50C20A6D.3010803@amd.com>
In-Reply-To: <50C20A6D.3010803@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian.Campbell@citrix.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/ucode: Improve error handling and
 container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 16:25, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> On 12/07/2012 10:08 AM, Jan Beulich wrote:
>>>>> On 07.12.12 at 14:46, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>>> Do we really need what microcode_init() does?
>>
>> Oh yes, absolutely. How else would boot time microcode loading
>> work for secondary CPUs without this? Note that the notifier only
>> deals with the CPU_DEAD case.
> 
> start_secondary() -> microcode_resume_cpu() -> 
> microcode_ops->apply_microcode() will update non-boot CPUs in most cases.
> 
> This won't quite work when secondary CPU is different from (or at 
> different patch level than) boot CPU but that can be fixed.

Exactly. And I don't believe it can be done differently as easily as
you seem to think.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:42:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th04S-0002OX-F8; Fri, 07 Dec 2012 15:42:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Th04R-0002OP-KJ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:42:35 +0000
Received: from [85.158.143.35:7806] by server-3.bemta-4.messagelabs.com id
	4D/92-18211-A6E02C05; Fri, 07 Dec 2012 15:42:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1354894952!13962065!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25582 invoked from network); 7 Dec 2012 15:42:33 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 15:42:33 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7FgN8e031245
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 15:42:23 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7FgKs1006178
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 15:42:21 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7FgKe1003956; Fri, 7 Dec 2012 09:42:20 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 07:42:20 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C0F881C05A8; Fri,  7 Dec 2012 10:42:18 -0500 (EST)
Date: Fri, 7 Dec 2012 10:42:18 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Keir Fraser <keir@xen.org>
Message-ID: <20121207154218.GA4760@phenom.dumpdata.com>
References: <36f4ae2f-4fbc-4b14-a084-7b336a052a7a@default>
	<CCE7B874.552F0%keir@xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CCE7B874.552F0%keir@xen.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: reserve next two XENMEM_ op numbers
 for future/out-of-tree use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 07, 2012 at 03:15:00PM +0000, Keir Fraser wrote:
> On 28/11/2012 22:03, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:
> 
> > xen: reserve next two XENMEM_ op numbers for future/out-of-tree use
> > 
> > Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> 
> There was some discussion on whether these numbers should just have
> XENMEM_reserved_oracle_{1,2} definitions, or similar. Or even just reserved
> by a header comment. Does anyone have any strong opinions?

I would just go with the claim/get_unclaimed. The 'Oracle' part is already
in the comment section.

> 
>  -- Keir
> 
> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > index f1ddbc0..3ee2902 100644
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -421,6 +421,12 @@ struct xen_mem_sharing_op {
> >  typedef struct xen_mem_sharing_op xen_mem_sharing_op_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
> >  
> > +/*
> > + * Reserve ops for future/out-of-tree "claim" patches (Oracle)
> > + */
> > +#define XENMEM_claim_pages                  24
> > +#define XENMEM_get_unclaimed_pages          25
> > +
> >  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> >  
> >  #endif /* __XEN_PUBLIC_MEMORY_H__ */
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:44:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th05p-0002am-4A; Fri, 07 Dec 2012 15:44:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Th05n-0002aU-Bt
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:43:59 +0000
Received: from [85.158.137.99:8318] by server-15.bemta-3.messagelabs.com id
	18/A7-23779-EBE02C05; Fri, 07 Dec 2012 15:43:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1354895037!13506515!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5174 invoked from network); 7 Dec 2012 15:43:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 15:43:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 15:43:57 +0000
Message-Id: <50C21CCC02000078000AF058@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 15:43:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dan Magenheimer" <dan.magenheimer@oracle.com>,
	"Keir Fraser" <keir@xen.org>
References: <36f4ae2f-4fbc-4b14-a084-7b336a052a7a@default>
	<CCE7B874.552F0%keir@xen.org>
In-Reply-To: <CCE7B874.552F0%keir@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Zhigang Wang <zhigang.x.wang@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: reserve next two XENMEM_ op numbers
 for future/out-of-tree use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 16:15, Keir Fraser <keir@xen.org> wrote:
> On 28/11/2012 22:03, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:
> 
>> xen: reserve next two XENMEM_ op numbers for future/out-of-tree use
>> 
>> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> 
> There was some discussion on whether these numbers should just have
> XENMEM_reserved_oracle_{1,2} definitions, or similar. Or even just reserved
> by a header comment. Does anyone have any strong opinions?

I think if it's known what they're for, calling them by their names
should be quite fine.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:50:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0CF-00033i-PQ; Fri, 07 Dec 2012 15:50:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th0CE-00033X-5B
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 15:50:38 +0000
Received: from [85.158.139.211:38774] by server-8.bemta-5.messagelabs.com id
	C4/EA-15003-D4012C05; Fri, 07 Dec 2012 15:50:37 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354895434!18045755!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29729 invoked from network); 7 Dec 2012 15:50:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 15:50:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47012866"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 15:50:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 10:50:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th0C9-0005NV-L0;
	Fri, 07 Dec 2012 15:50:33 +0000
Date: Fri, 7 Dec 2012 15:50:28 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354789717.17165.76.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212061647510.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212051805020.8801@kaball.uk.xensource.com>
	<1354731588-32579-6-git-send-email-stefano.stabellini@eu.citrix.com>
	<1354789717.17165.76.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/6] xen/arm: introduce a driver for the ARM
 HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> On Wed, 2012-12-05 at 18:19 +0000, Stefano Stabellini wrote:
> > For the moment the resolution is hardcoded to 1280x1024@60.
> 
> Is there a longer term alternative? Something in the DTB perhaps.
> 
> Also there are hardcoded dependencies on the vexpress (clock stuff)
> which aren't mentioned here, but just in /* in Mhz, needs to be set in
> the board config for OSC5 */. Would be good to have some instruction on
> how to use this stuff somewhere.

I managed to get rid of those.


> > Use the generic framebuffer functions to print on the screen.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/Rules.mk         |    1 +
> >  xen/drivers/video/Makefile    |    1 +
> >  xen/drivers/video/arm_hdlcd.c |  165 +++++++++++++++++++++++++++++++++++++++++
> >  xen/include/asm-arm/config.h  |    3 +
> >  4 files changed, 170 insertions(+), 0 deletions(-)
> >  create mode 100644 xen/drivers/video/arm_hdlcd.c
> > 
> > diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> > index fa9f9c1..9580e6b 100644
> > --- a/xen/arch/arm/Rules.mk
> > +++ b/xen/arch/arm/Rules.mk
> > @@ -8,6 +8,7 @@
> >  
> >  HAS_DEVICE_TREE := y
> >  HAS_VIDEO := y
> > +HAS_ARM_HDLCD := y
> >  
> >  CFLAGS += -fno-builtin -fno-common -Wredundant-decls
> >  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> > diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> > index 3b3eb43..8a6f5da 100644
> > --- a/xen/drivers/video/Makefile
> > +++ b/xen/drivers/video/Makefile
> > @@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
> >  obj-$(HAS_VIDEO) += font_8x8.o
> >  obj-$(HAS_VIDEO) += fb.o
> >  obj-$(HAS_VGA) += vesa.o
> > +obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
> > diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
> > new file mode 100644
> > index 0000000..68f588c
> > --- /dev/null
> > +++ b/xen/drivers/video/arm_hdlcd.c
> > @@ -0,0 +1,165 @@
> > +/*
> > + * xen/drivers/video/arm_hdlcd.c
> > + *
> > + * Driver for ARM HDLCD Controller
> > + *
> > + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > + * Copyright (c) 2012 Citrix Systems.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + */
> > +
> > +#include <asm/delay.h>
> > +#include <asm/types.h>
> > +#include <xen/config.h>
> > +#include <xen/device_tree.h>
> > +#include <xen/libfdt/libfdt.h>
> > +#include <xen/init.h>
> > +#include <xen/mm.h>
> > +#include "font.h"
> > +#include "fb.h"
> > +
> > +#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
> > +
> > +#define HDLCD_INTMASK       (0x18/4)
> > +#define HDLCD_FBBASE        (0x100/4)
> > +#define HDLCD_LINELENGTH    (0x104/4)
> > +#define HDLCD_LINECOUNT     (0x108/4)
> > +#define HDLCD_LINEPITCH     (0x10C/4)
> > +#define HDLCD_BUS           (0x110/4)
> > +#define HDLCD_VSYNC         (0x200/4)
> > +#define HDLCD_VBACK         (0x204/4)
> > +#define HDLCD_VDATA         (0x208/4)
> > +#define HDLCD_VFRONT        (0x20C/4)
> > +#define HDLCD_HSYNC         (0x210/4)
> > +#define HDLCD_HBACK         (0x214/4)
> > +#define HDLCD_HDATA         (0x218/4)
> > +#define HDLCD_HFRONT        (0x21C/4)
> > +#define HDLCD_POLARITIES    (0x220/4)
> > +#define HDLCD_COMMAND       (0x230/4)
> > +#define HDLCD_PF            (0x240/4)
> > +#define HDLCD_RED           (0x244/4)
> > +#define HDLCD_GREEN         (0x248/4)
> > +#define HDLCD_BLUE          (0x24C/4)
> > +
> > +#define BPP             4
> > +#define XRES            1280
> > +#define YRES            1024
> > +#define refresh         60
> > +#define pixclock        108 /* in Mhz, needs to be set in the board config for OSC5 */
> > +#define left_margin     80
> > +#define hback left_margin
> > +#define right_margin    48
> > +#define hfront right_margin
> > +#define upper_margin    21
> > +#define vback upper_margin
> > +#define lower_margin    3
> > +#define vfront lower_margin
> > +#define hsync_len       32
> > +#define vsync_len       6
> > +
> > +#define HDLCD_SIZE (XRES*YRES*BPP)
> > +
> > +static void vga_noop_puts(const char *s) {}
> > +void (*video_puts)(const char *) = vga_noop_puts;
> > +
> > +static void hdlcd_flush(void)
> > +{
> > +    dsb();
> > +}
> > +
> > +void __init video_init(void)
> > +{
> > +    int node, depth;
> > +    u32 address_cells, size_cells;
> > +    struct fb_prop fbp;
> > +    unsigned char *lfb = (unsigned char *) VRAM_VIRT_START;
> > +    paddr_t hdlcd_start, hdlcd_size;
> > +    paddr_t framebuffer_start, framebuffer_size;
> > +    const struct fdt_property *prop;
> > +    const u32 *cell;
> > +
> > +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> > +                &address_cells, &size_cells) <= 0 )
> > +        return;
> > +
> > +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> > +    if ( !prop )
> > +        return;
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells,
> > +            &hdlcd_start, &hdlcd_size);
> 
> I wonder why we don't have a function to get the reg given a prop,
> because we have this pattern everywhere AFAICT.

It is not possible from the prop alone, because we also need to know
address_cells and size_cells. Those are only known at depth - 1, which
is why writing such a function is non-trivial.
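The point can be made concrete with a small, self-contained sketch. Everything below is illustrative rather than the real Xen implementation: u32_byteswap stands in for libfdt's fdt32_to_cpu(), and demo_get_reg only approximates what device_tree_get_reg() does.

```c
/* Sketch: why a "get reg from prop" helper is non-trivial.  The number
 * of 32-bit cells making up an address and a size (#address-cells /
 * #size-cells) is a property of the PARENT node, so it cannot be
 * recovered from the "reg" property itself.  Cell values in a
 * flattened device tree are stored big-endian. */
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical helper; libfdt's fdt32_to_cpu() does this portably. */
static uint32_t u32_byteswap(uint32_t v)
{
    return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
           ((v >> 8) & 0x0000ff00u)  | (v >> 24);
}

/* Accumulate 'cells' consecutive big-endian 32-bit cells into one value. */
static paddr_t read_cells(const uint32_t **cellp, uint32_t cells)
{
    paddr_t val = 0;

    while ( cells-- )
    {
        val = (val << 32) | u32_byteswap(**cellp);
        (*cellp)++;
    }
    return val;
}

/* Roughly what device_tree_get_reg() does: both cell counts must be
 * supplied by the caller, who obtained them from the parent node. */
void demo_get_reg(const uint32_t *cell, uint32_t address_cells,
                  uint32_t size_cells, paddr_t *start, paddr_t *size)
{
    *start = read_cells(&cell, address_cells);
    *size  = read_cells(&cell, size_cells);
}
```

With address_cells=2 and size_cells=1, the same six bytes of property data parse completely differently than with 1/1, which is exactly why the counts cannot be inferred from the property itself.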


> > +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> > +    if ( !prop )
> > +        return;
> > +
> > +    cell = (const u32 *)prop->data;
> > +    device_tree_get_reg(&cell, address_cells, size_cells,
> > +            &framebuffer_start, &framebuffer_size); 
> > +
> > +    if ( !hdlcd_start || !framebuffer_start )
> > +        return;
> > +
> > +    printk("Initializing HDLCD driver\n");
> > +
> > +    map_phys_range(framebuffer_start,
> > +                    framebuffer_start + framebuffer_size,
> > +                    VRAM_VIRT_START, DEV_WC);
> > +    memset(lfb, 0x00, HDLCD_SIZE);
> > +
> > +    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
> > +    HDLCD[HDLCD_COMMAND] = 0;
> > +
> > +    HDLCD[HDLCD_LINELENGTH] = XRES * BPP;
> > +    HDLCD[HDLCD_LINECOUNT] = YRES - 1;
> > +    HDLCD[HDLCD_LINEPITCH] = XRES * BPP;
> > +    HDLCD[HDLCD_PF] = ((BPP - 1) << 3);
> > +    HDLCD[HDLCD_INTMASK] = 0;
> > +    HDLCD[HDLCD_FBBASE] = framebuffer_start;
> > +    HDLCD[HDLCD_BUS] = 0xf00|(1<<4);
> > +    HDLCD[HDLCD_VBACK] = upper_margin - 1;
> > +    HDLCD[HDLCD_VSYNC] = vsync_len - 1;
> > +    HDLCD[HDLCD_VDATA] = YRES - 1;
> > +    HDLCD[HDLCD_VFRONT] = lower_margin - 1;
> > +    HDLCD[HDLCD_HBACK] = left_margin - 1;
> > +    HDLCD[HDLCD_HSYNC] = hsync_len - 1;
> > +    HDLCD[HDLCD_HDATA] = XRES - 1;
> > +    HDLCD[HDLCD_HFRONT] = right_margin - 1;
> > +    HDLCD[HDLCD_POLARITIES] = (1<<2)|(1<<3);
> > +    HDLCD[HDLCD_RED] = (8<<8)|0;
> > +    HDLCD[HDLCD_GREEN] = (8<<8)|8;
> > +    HDLCD[HDLCD_BLUE] = (8<<8)|16;
> > +
> > +    HDLCD[HDLCD_COMMAND] = 1;
> > +    clear_fixmap(FIXMAP_MISC);
> > +
> > +    fbp.lfb = lfb;
> > +    fbp.font = &font_vga_8x16;
> > +    fbp.pixel_on = 0xffffff;
> > +    fbp.bits_per_pixel = BPP*8;
> > +    fbp.bytes_per_line = BPP*XRES;
> > +    fbp.width = XRES;
> > +    fbp.height = YRES;
> > +    fbp.flush = hdlcd_flush;
> > +    fbp.text_columns = XRES / 8;
> > +    fbp.text_rows = YRES / 16;
> > +    if ( fb_init(fbp) < 0 )
> > +            return;
> > +    video_puts = fb_scroll_puts;
> > +}
> > +
> > +void video_endboot(void)
> > +{
> > +    if ( video_puts != vga_noop_puts )
> > +        fb_alloc();
> > +}
> 
> Can you stick the standard magic block here please.

OK

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 15:57:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
From xen-devel-bounces@lists.xen.org Fri Dec 07 15:57:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 15:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0J3-0003PV-Ms; Fri, 07 Dec 2012 15:57:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Th0J2-0003PP-Oc
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 15:57:40 +0000
Received: from [85.158.138.51:54189] by server-13.bemta-3.messagelabs.com id
	F9/87-24887-3F112C05; Fri, 07 Dec 2012 15:57:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354895857!28007301!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20417 invoked from network); 7 Dec 2012 15:57:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 15:57:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 15:57:37 +0000
Message-Id: <50C2200202000078000AF0AC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 15:57:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Charles Arnold <CARNOLD@suse.com>
Subject: [Xen-devel] compile flags overrides
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All,

it looks like for the tools part of the tree at least, something has
been done to that effect (via APPEND_CFLAGS).

I don't see how one is supposed to override (e.g. optimization level,
or add to) CFLAGS for the hypervisor part of the build though.

Passing CFLAGS= as make option would suppress all += additions
throughout the makefiles, as they're not being done with "override".

Passing CFLAGS through the environment accumulates the
additions to the variable as the tree gets recursed through (i.e.
at the second recursion level you end up having all flags specified
three times). While that might not appear to be a problem, it really
is (besides being rather odd in the first place) for the EFI build:
The tool chain capability detection passes $(CFLAGS) with
$(CFLAGS-y) stripped off (in particular to suppress the use of
"-MMD -MF -$(@F).d", as $@ and $(@F) are not defined in this
context), thus leaving - from the upper recursion levels - .xen.d
and .built_in.d as stray input files on the command line.

I don't think passing CFLAGS itself through either means is really
to be supported (other than someone passing the _full_ set of
them, suppressing _everything_ the build scripts do, say for
build process debugging purposes). Instead, a separate variable
intended for just that purpose ought to be introduced and then
appended to CFLAGS. Question of course is whether that ought
to be uniform for all subtrees, or separate for each of them.
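
To make the difference between the two means concrete, here is a throwaway
demonstration of GNU make's behaviour (a sketch only; /tmp/demo.mk and the
flags used are invented for illustration and are not part of the Xen tree):

```shell
# A minimal makefile that appends to CFLAGS, as the Xen makefiles do.
printf 'CFLAGS += -Wall\nall:\n\t@echo "$(CFLAGS)"\n' > /tmp/demo.mk

# Command-line assignment: overrides the makefile, so the += is suppressed.
make -s -f /tmp/demo.mk CFLAGS=-O1
# prints: -O1

# Environment assignment: the makefile's += still takes effect.
CFLAGS=-O1 make -s -f /tmp/demo.mk
# prints: -O1 -Wall
```

Under recursive make the environment case compounds, since each level
exports its already-augmented CFLAGS to the next; this is the
repeated-flags effect described above.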

Opinions, ideas?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:17:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:17:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0c0-0004m6-Cm; Fri, 07 Dec 2012 16:17:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Th0by-0004m1-S2
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 16:17:14 +0000
Received: from [85.158.139.83:55530] by server-11.bemta-5.messagelabs.com id
	9E/51-31624-98612C05; Fri, 07 Dec 2012 16:17:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1354896964!24884807!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQyODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4953 invoked from network); 7 Dec 2012 16:16:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 16:16:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Dec 2012 16:16:04 +0000
Message-Id: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 07 Dec 2012 16:16:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <yunhong.jiang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] frame table setup for memory hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yunhong,

in c/s 20617:283a5357d196 you modified init_frametable() to
populate the frame table slightly differently for the hotplug
case. I wonder why you did that, because (apart from the bug
already fixed, and the off-by-one bugs I'm having a fix pending
for) I fear you didn't pay attention to the fact that using
pdx_to_page() on something that doesn't really represent the
PDX for a valid page may return a value not validly usable here.
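
For context, the hazard can be shown with a simplified model of PDX
compression (a sketch with made-up masks and shifts, not Xen's actual
pdx code):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model: an (assumed) hole in PFN bits [4..7] is squeezed out
 * so the frame table stays dense.  A PDX is only meaningful when derived
 * from a PFN whose hole bits are zero; Xen's pdx_to_page() is in effect
 * just frame_table + pdx on top of such a mapping. */
#define HOLE_SHIFT 4
#define LOW_MASK   ((1ull << HOLE_SHIFT) - 1)   /* bits below the hole */
#define HOLE_MASK  (0xFull << HOLE_SHIFT)       /* the hole itself */

static uint64_t pfn_to_pdx(uint64_t pfn)
{
    /* keep the low bits; shift everything above the hole down over it */
    return (pfn & LOW_MASK) | ((pfn & ~(LOW_MASK | HOLE_MASK)) >> HOLE_SHIFT);
}

static uint64_t pdx_to_pfn(uint64_t pdx)
{
    return (pdx & LOW_MASK) | ((pdx & ~LOW_MASK) << HOLE_SHIFT);
}
```

A PFN whose hole bits are non-zero does not round-trip through this
mapping, which is the kind of input for which the result of
pdx_to_page() is not validly usable.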

Do you happen to recall what it was that caused you to do that
adjustment in the first place? If you don't, do you have an
environment where you would be able to test an eventual
change of mine (effectively undoing that part of said c/s)?

Thanks, Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:19:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0dP-0004qs-Sa; Fri, 07 Dec 2012 16:18:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1Th0dO-0004qi-KO
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 16:18:42 +0000
Received: from [85.158.138.51:2576] by server-12.bemta-3.messagelabs.com id
	EF/36-22757-1E612C05; Fri, 07 Dec 2012 16:18:41 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-14.tower-174.messagelabs.com!1354897104!21570961!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28748 invoked from network); 7 Dec 2012 16:18:26 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-14.tower-174.messagelabs.com with SMTP;
	7 Dec 2012 16:18:26 -0000
Received: from [62.94.143.201] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 77034702; Fri, 07 Dec 2012 17:18:36 +0100
Message-ID: <1354897102.23012.49.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: Jim Fehlig <jfehlig@suse.com>
Date: Fri, 07 Dec 2012 17:18:22 +0100
In-Reply-To: <50C1284B.2060303@suse.com>
References: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
	<50C119C7.807@redhat.com> <50C1284B.2060303@suse.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: "xen@lists.fedoraproject.org" <xen@lists.fedoraproject.org>,
	libvir-list@redhat.com, xen-devel@lists.xen.org,
	M A Young <m.a.young@durham.ac.uk>, virt <virt@lists.fedoraproject.org>,
	Eric Blake <eblake@redhat.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH] Convert libxl driver to Xen 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4809500171669715344=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============4809500171669715344==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-WZ1rUWFwugBGyCRkG/9B"


--=-WZ1rUWFwugBGyCRkG/9B
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-06 at 16:20 -0700, Jim Fehlig wrote:
> >> V2:
> >>   Remove 128 vcpu limit.
> >>   Remove split_string_into_string_list() function copied from xen
> >>   sources since libvirt now has virStringSplit().
> >>
> >
> > Tested on Fedora 18, with its use of xen 4.2.  ACK; let's get this pushed.
> >
>
> Thanks, pushed.
>
Cool!

(letting virt@lists.fedoraproject.org and xen@lists.fedoraproject.org
<xen@lists.fedoraproject.org> know about that!)

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-WZ1rUWFwugBGyCRkG/9B
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDCFs4ACgkQk4XaBE3IOsR2tgCfVgxrYGUEXtuUoRjy8+FILSlr
Ut8AoJp7FptEC5OVcrZVCymlmerA0o0W
=9IBY
-----END PGP SIGNATURE-----

--=-WZ1rUWFwugBGyCRkG/9B--



--===============4809500171669715344==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4809500171669715344==--



From xen-devel-bounces@lists.xen.org Fri Dec 07 16:23:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:23:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0hy-00052H-KP; Fri, 07 Dec 2012 16:23:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th0hw-00052A-LV
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 16:23:24 +0000
Received: from [85.158.138.51:31742] by server-11.bemta-3.messagelabs.com id
	49/ED-19361-BF712C05; Fri, 07 Dec 2012 16:23:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354896930!9257592!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15605 invoked from network); 7 Dec 2012 16:15:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:15:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16228708"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 16:14:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 16:14:15 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th0Z2-0000Hy-3w; Fri, 07 Dec 2012 16:14:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th0Z1-0005dC-NZ;
	Fri, 07 Dec 2012 16:14:11 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20674.5586.142286.869968@mariner.uk.xensource.com>
Date: Fri, 7 Dec 2012 16:14:10 +0000
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: dongxiao.xu@intel.com, xen-devel@lists.xensource.com, qemu-devel@nongnu.org
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio
	and	cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini writes ("[Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and cpu_ioreq_move"):
> after reviewing the patch "fix multiply issue for int and uint types"
> with Ian Jackson, we realized that cpu_ioreq_pio and cpu_ioreq_move are
> in much need for a simplification as well as removal of a possible
> integer overflow.
> 
> This patch series tries to accomplish both switching to two new helper
> functions and using a more obvious arithmetic. Doing so it should also
> fix the original problem that Dongxiao was experiencing. The C language
> can be a nasty backstabber when signed and unsigned integers are
> involved.

I think the attached patch is better as it removes some formulaic
code.  I don't think I have a guest which can repro the bug so I have
only compile tested it.

Dongxiao, would you care to take a look?

PS: I'm pretty sure the original overflows aren't security problems.

Thanks,
Ian.

commit d19731e4e452e3415a5c03771d0406efc803baa9
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Fri Dec 7 16:02:04 2012 +0000

    cpu_ioreq_pio, cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
    
    The current code compares i (int) with req->count (uint32_t) in a for
    loop, risking an infinite loop if req->count is >INT_MAX.  It also
    does the multiplication of req->size in a too-small type, leading to
    integer overflows.
    
    Turn read_physical and write_physical into two different helper
    functions, read_phys_req_item and write_phys_req_item, that take care
    of adding or subtracting offset depending on sign.
    
    This moves the formulaic multiplication to a single place, where the
    integer overflows can be dealt with by casting to wide-enough unsigned
    types.
    
    Reported-By: Dongxiao Xu <dongxiao.xu@intel.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..9b8552c 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -339,21 +339,40 @@ static void do_outp(CPUState *env, unsigned long addr,
     }
 }
 
-static inline void read_physical(uint64_t addr, unsigned long size, void *val)
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(target_phys_addr_t addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
 {
-    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 0);
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
+    if (req->df) addr -= offset;
+    else addr += offset;
+    cpu_physical_memory_rw(addr, val, req->size, rw);
 }
-
-static inline void write_physical(uint64_t addr, unsigned long size, void *val)
+static inline void read_phys_req_item(target_phys_addr_t addr,
+                                      ioreq_t *req, uint32_t i, void *val)
 {
-    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 1);
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(target_phys_addr_t addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
 }
 
 static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    uint32_t i;
 
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
@@ -363,9 +382,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
-                write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);
             }
         }
     } else if (req->dir == IOREQ_WRITE) {
@@ -375,9 +392,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 unsigned long tmp = 0;
 
-                read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->data, req, i, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
         }
@@ -386,22 +401,16 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 
 static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    uint32_t i;
 
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                read_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &req->data);
+                read_phys_req_item(req->addr, req, i, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                write_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &req->data);
+                write_phys_req_item(req->addr, req, i, &req->data);
             }
         }
     } else {
@@ -409,21 +418,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
 
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                read_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &tmp);
-                write_physical((target_phys_addr_t )req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
-                write_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
             }
         }
     }
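
The narrow-type overflow being removed here is easy to reproduce in
isolation (a standalone sketch, not Xen/qemu code; both function names
are invented):

```c
#include <assert.h>
#include <stdint.h>

/* The old code computed the offset as sign * i * req->size in 32-bit
 * arithmetic, so the product could wrap before being added to the
 * address.  Widening first, as rw_phys_req_item() does with
 * (target_phys_addr_t)req->size * i, preserves the full value. */
static uint64_t offset_wide(uint32_t size, uint32_t i)
{
    return (uint64_t)size * i;    /* widen before multiplying */
}

static uint32_t offset_narrow(uint32_t size, uint32_t i)
{
    return size * i;              /* 32-bit product wraps silently */
}
```

With size = i = 0x10000 the narrow product wraps to 0 while the widened
one yields 1 << 32.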

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:23:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:23:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0hy-00052H-KP; Fri, 07 Dec 2012 16:23:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th0hw-00052A-LV
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 16:23:24 +0000
Received: from [85.158.138.51:31742] by server-11.bemta-3.messagelabs.com id
	49/ED-19361-BF712C05; Fri, 07 Dec 2012 16:23:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354896930!9257592!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15605 invoked from network); 7 Dec 2012 16:15:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:15:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16228708"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 16:14:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 16:14:15 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th0Z2-0000Hy-3w; Fri, 07 Dec 2012 16:14:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th0Z1-0005dC-NZ;
	Fri, 07 Dec 2012 16:14:11 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20674.5586.142286.869968@mariner.uk.xensource.com>
Date: Fri, 7 Dec 2012 16:14:10 +0000
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: dongxiao.xu@intel.com, xen-devel@lists.xensource.com, qemu-devel@nongnu.org
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio
	and	cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini writes ("[Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and cpu_ioreq_move"):
> after reviewing the patch "fix multiply issue for int and uint types"
> with Ian Jackson, we realized that cpu_ioreq_pio and cpu_ioreq_move are
> in much need of simplification, as well as removal of a possible
> integer overflow.
> 
> This patch series tries to accomplish both by switching to two new
> helper functions and using more obvious arithmetic. In doing so it
> should also fix the original problem that Dongxiao was experiencing.
> The C language can be a nasty backstabber when signed and unsigned
> integers are involved.

I think the attached patch is better as it removes some formulaic
code.  I don't think I have a guest which can repro the bug so I have
only compile tested it.

Dongxiao, would you care to take a look ?

PS: I'm pretty sure the original overflows aren't security problems.

Thanks,
Ian.

commit d19731e4e452e3415a5c03771d0406efc803baa9
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Fri Dec 7 16:02:04 2012 +0000

    cpu_ioreq_pio, cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
    
    The current code compares i (int) with req->count (uint32_t) in a for
    loop, risking an infinite loop if req->count is >INT_MAX.  It also
    performs the multiplication by req->size in a too-small type, leading
    to integer overflows.
    
    Turn read_physical and write_physical into two different helper
    functions, read_phys_req_item and write_phys_req_item, which take care
    of adding or subtracting the offset depending on the direction flag
    (req->df).
    
    This moves the formulaic multiplication to a single place, where the
    integer overflows can be dealt with by casting to wide-enough unsigned
    types.
    
    Reported-By: Dongxiao Xu <dongxiao.xu@intel.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..9b8552c 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -339,21 +339,40 @@ static void do_outp(CPUState *env, unsigned long addr,
     }
 }
 
-static inline void read_physical(uint64_t addr, unsigned long size, void *val)
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(target_phys_addr_t addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
 {
-    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 0);
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
+    if (req->df) addr -= offset;
+    else addr -= offset;
+    cpu_physical_memory_rw(addr, val, req->size, rw);
 }
-
-static inline void write_physical(uint64_t addr, unsigned long size, void *val)
+static inline void read_phys_req_item(target_phys_addr_t addr,
+                                      ioreq_t *req, uint32_t i, void *val)
 {
-    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 1);
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(target_phys_addr_t addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
 }
 
 static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    uint32_t i;
 
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
@@ -363,9 +382,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
-                write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);
             }
         }
     } else if (req->dir == IOREQ_WRITE) {
@@ -375,9 +392,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 unsigned long tmp = 0;
 
-                read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->data, req, i, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
         }
@@ -386,22 +401,16 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 
 static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    uint32_t i;
 
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                read_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &req->data);
+                read_phys_req_item(req->addr, req, i, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                write_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &req->data);
+                write_phys_req_item(req->addr, req, i, &req->data);
             }
         }
     } else {
@@ -409,21 +418,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
 
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                read_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &tmp);
-                write_physical((target_phys_addr_t )req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
-                write_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
             }
         }
     }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:31:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0p5-0005IU-Le; Fri, 07 Dec 2012 16:30:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Th0p5-0005IP-72
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 16:30:47 +0000
Received: from [85.158.139.83:24228] by server-12.bemta-5.messagelabs.com id
	BA/FC-02275-6B912C05; Fri, 07 Dec 2012 16:30:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354897845!28776769!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31303 invoked from network); 7 Dec 2012 16:30:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:30:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16229107"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 16:30:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	16:30:45 +0000
Message-ID: <1354897843.31710.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 16:30:43 +0000
In-Reply-To: <20674.5586.142286.869968@mariner.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
	<20674.5586.142286.869968@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "dongxiao.xu@intel.com" <dongxiao.xu@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio	and
 cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 16:14 +0000, Ian Jackson wrote:
> +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> +    if (req->df) addr -= offset;
> +    else addr -= offset;

One of these -= should be a += I presume?

[...]
> +                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);

This seems to be the only one with this cast, why?

write_phys_req_item takes a target_phys_addr_t so this will happen
regardless I think.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:33:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:33:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0rg-0005Qd-7i; Fri, 07 Dec 2012 16:33:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th0rf-0005QW-0H
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 16:33:27 +0000
Received: from [85.158.137.99:64538] by server-13.bemta-3.messagelabs.com id
	6D/D9-24887-65A12C05; Fri, 07 Dec 2012 16:33:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354898004!18091407!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23641 invoked from network); 7 Dec 2012 16:33:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:33:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47019585"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 16:33:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 11:33:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th0rb-0005ww-2H;
	Fri, 07 Dec 2012 16:33:23 +0000
Date: Fri, 7 Dec 2012 16:33:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: asad raza <asadrupucit2006@gmail.com>
In-Reply-To: <CAJ2v2mgRa8Qhxxu3nqAtPftP7WK-5p4zAuJOgSB1WfMhdxjGkw@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212071626410.8801@kaball.uk.xensource.com>
References: <CAJ2v2mgRa8Qhxxu3nqAtPftP7WK-5p4zAuJOgSB1WfMhdxjGkw@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why we relocate xen location in __start_xen method?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Dec 2012, asad raza wrote:
> in ARM Architecture.
> __start_xen ()
> 
>   ----->  setup_pagetables()
>             {
>                     /* Calculate virt-to-phys offset for the new location */
>                       phys_offset = xen_paddr - (unsigned long) _start;
> 
>                      /* Copy */
>                     memcpy((void *) dest_va, _start, _end - _start);
>             }

We relocate Xen to the top of RAM (aligned to a XEN_PADDR_ALIGN
boundary) to free some precious low RAM and to be independent of the
behaviour of the bootloader.
See xen/arch/arm/setup.c:get_xen_paddr.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:33:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0rp-0005RN-KR; Fri, 07 Dec 2012 16:33:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th0rn-0005RE-Uc
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 16:33:36 +0000
Received: from [85.158.143.35:64288] by server-2.bemta-4.messagelabs.com id
	A1/B0-30861-F5A12C05; Fri, 07 Dec 2012 16:33:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354898014!5475235!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1172 invoked from network); 7 Dec 2012 16:33:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:33:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16229162"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 16:33:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 16:33:34 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th0rl-0000Qc-Uh; Fri, 07 Dec 2012 16:33:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th0rl-0005ft-QI;
	Fri, 07 Dec 2012 16:33:33 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20674.6749.646806.53950@mariner.uk.xensource.com>
Date: Fri, 7 Dec 2012 16:33:33 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354897843.31710.93.camel@zakaz.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
	<20674.5586.142286.869968@mariner.uk.xensource.com>
	<1354897843.31710.93.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "dongxiao.xu@intel.com" <dongxiao.xu@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio	and
 cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio	and cpu_ioreq_move"):
> On Fri, 2012-12-07 at 16:14 +0000, Ian Jackson wrote:
> > +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> > +    if (req->df) addr -= offset;
> > +    else addr -= offset;
> 
> One of these -= should be a += I presume?

Uh, yes.

> [...]
> > +                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);
> 
> This seems to be the only one with this cast, why?

This is a mistake.

> write_phys_req_item takes a target_phys_addr_t so this will happen
> regardless I think.

Indeed.

Ian.

commit fd3865f8e0d867a203b4ddcb22eefa827cfaca0a
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Fri Dec 7 16:02:04 2012 +0000

    cpu_ioreq_pio, cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
    
    The current code compares i (int) with req->count (uint32_t) in a for
    loop, risking an infinite loop if req->count is >INT_MAX.  It also
    performs the multiplication by req->size in a too-small type, leading
    to integer overflows.
    
    Turn read_physical and write_physical into two different helper
    functions, read_phys_req_item and write_phys_req_item, which take care
    of adding or subtracting the offset depending on the direction flag
    (req->df).
    
    This moves the formulaic multiplication to a single place, where the
    integer overflows can be dealt with by casting to wide-enough unsigned
    types.
    
    Reported-By: Dongxiao Xu <dongxiao.xu@intel.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    --
    v2: Fix sign when !!req->df.  Remove a useless cast.

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..63a938b 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -339,21 +339,40 @@ static void do_outp(CPUState *env, unsigned long addr,
     }
 }
 
-static inline void read_physical(uint64_t addr, unsigned long size, void *val)
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(target_phys_addr_t addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
 {
-    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 0);
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
+    if (req->df) addr -= offset;
+    else addr += offset;
+    cpu_physical_memory_rw(addr, val, req->size, rw);
 }
-
-static inline void write_physical(uint64_t addr, unsigned long size, void *val)
+static inline void read_phys_req_item(target_phys_addr_t addr,
+                                      ioreq_t *req, uint32_t i, void *val)
 {
-    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 1);
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(target_phys_addr_t addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
 }
 
 static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    uint32_t i;
 
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
@@ -363,9 +382,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
-                write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         }
     } else if (req->dir == IOREQ_WRITE) {
@@ -375,9 +392,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 unsigned long tmp = 0;
 
-                read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->data, req, i, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
         }
@@ -386,22 +401,16 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
 
 static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    uint32_t i;
 
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                read_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &req->data);
+                read_phys_req_item(req->addr, req, i, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                write_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &req->data);
+                write_phys_req_item(req->addr, req, i, &req->data);
             }
         }
     } else {
@@ -409,21 +418,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
 
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                read_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &tmp);
-                write_physical((target_phys_addr_t )req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
-                  req->size, &tmp);
-                write_physical(req->addr
-                  + (sign * i * req->size),
-                  req->size, &tmp);
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
             }
         }
     }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 16:33:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th0rp-0005RN-KR; Fri, 07 Dec 2012 16:33:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th0rn-0005RE-Uc
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 16:33:36 +0000
Received: from [85.158.143.35:64288] by server-2.bemta-4.messagelabs.com id
	A1/B0-30861-F5A12C05; Fri, 07 Dec 2012 16:33:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354898014!5475235!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1172 invoked from network); 7 Dec 2012 16:33:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:33:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16229162"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 16:33:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 16:33:34 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th0rl-0000Qc-Uh; Fri, 07 Dec 2012 16:33:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th0rl-0005ft-QI;
	Fri, 07 Dec 2012 16:33:33 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20674.6749.646806.53950@mariner.uk.xensource.com>
Date: Fri, 7 Dec 2012 16:33:33 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354897843.31710.93.camel@zakaz.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
	<20674.5586.142286.869968@mariner.uk.xensource.com>
	<1354897843.31710.93.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "dongxiao.xu@intel.com" <dongxiao.xu@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio	and
 cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and cpu_ioreq_move"):
> On Fri, 2012-12-07 at 16:14 +0000, Ian Jackson wrote:
> > +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> > +    if (req->df) addr -= offset;
> > +    else addr -= offset;
> 
> One of these -= should be a += I presume?

Uh, yes.

> [...]
> > +                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);
> 
> This seems to be the only one with this cast, why?

This is a mistake.

> write_phys_req_item takes a target_phys_addr_t so this will happen
> regardless I think.

Indeed.

Ian.


From xen-devel-bounces@lists.xen.org Fri Dec 07 16:45:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th135-0005oc-SW; Fri, 07 Dec 2012 16:45:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th134-0005oX-5R
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 16:45:14 +0000
Received: from [85.158.143.99:31533] by server-1.bemta-4.messagelabs.com id
	57/BD-28401-91D12C05; Fri, 07 Dec 2012 16:45:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354898708!27546221!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 573 invoked from network); 7 Dec 2012 16:45:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:45:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216802840"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 16:45:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 11:45:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th12x-00066x-Br;
	Fri, 07 Dec 2012 16:45:07 +0000
Date: Fri, 7 Dec 2012 16:45:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1408387693-1354898702=:8801"
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry for the delay in the reply.

On Thu, 6 Dec 2012, G.R. wrote:
> Sorry, but I have to resend this in a separate thread for better visibility.
> Background:
> After backporting this patch to fix my PVHVM interrupt missing issue in 4.1.3:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html
> I find a side effect for pure HVM guests.
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00208.html
>
> I had a follow-up thread analysing the issue, and post it again here:
>
>             Hi, it seems that the patch has some side effect on pure HVM guests.
>             For an openelec 2.0 guest, which is based on Linux 3.2.x with PVOP disabled, I see this syndrome in
>             qemu-dm-xxx.log:
>
>             pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit support??
>             pt_pci_write_config: Internal error: Invalid write emulation return value[-1]. I/O emulator exit.
>
>             The guest dies immediately after the log; I have no way to check the guest kernel log.
>             Without the patch, this guest can boot without an obvious error log, even though the VGA passthrough does not
>             quite work.
>             I'll check the code to see what these logs mean...
>
>
> I did some analysis and it really looks like a bug to me.
> Since this is a patch back-ported from 4.2.0, I would like to ask: is there any follow-up patch that would fix this
> issue?
> Please see my analysis below:
>
> Here is part of the qemu-dm log, with debug logging enabled at compile time:
>
> dm-command: hot insert pass-through pci dev
> register_real_device: Assigning real physical device 00:1b.0 ...
> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No such file or directory: 0x0:0x1b.0x0
> pt_register_regions: IO region registered (size=0x00004000 base_addr=0xf7c10004)
> pt_msi_setup: msi mapped with pirq 36
> pci_intx: intx=1
> register_real_device: Real physical device 00:1b.0 registered successfuly!
> IRQ type = MSI-INTx
> ...
> pt_pci_read_config: [00:06.0]: address=0000 val=0x00008086 len=2
> pt_pci_read_config: [00:06.0]: address=0002 val=0x00001e20 len=2
> ...
> pt_pci_write_config: [00:06.0]: address=0068 val=0x00000000 len=4
> ...
> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation
> pci_intx: intx=1
> pt_msi_disable: Unmap msi with pirq 36
> pt_msgctrl_reg_write: setup msi for dev 30
> pt_msi_setup: msi mapped with pirq 36
> pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
> pt_pci_read_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_pci_write_config: [00:06.0]: address=0062 val=0x00000081 len=2
> pt_pci_write_config: [00:06.0]: address=0064 val=0xfee0300c len=4
> pt_pci_write_config: [00:06.0]: address=0068 val=0x00000000 len=4
> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit support??
> pt_pci_write_config: Internal error: Invalid write emulation return value[-1]. I/O emulator exit.
>
>
> Here the device in question should be the audio controller, 00:1b.0 in the host, which is 64-bit capable:
> 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
>     Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Address: 00000000fee00378  Data: 0000

I won't be able to reproduce the bug, so I thank you in advance for any
help you can give me with testing a fix.


> And there is also a successful write to offset 0x68 above, while the second write fails the I/O emulator.
>
>
> The patch added a pt_msi_disable() call into the pt_msgctrl_reg_write() function, and the pt_msi_disable() function has these
> lines:
> out:
>     /* clear msi info */
>     dev->msi->flags = 0;
>     dev->msi->pirq = -1;
>     dev->msi_trans_en = 0;
>
> As a result, the flags are cleared -- this is new in the patch.
> And I believe this change caused the failure in pt_msgaddr64_reg_write():
>
> 3882     /* check whether the type is 64 bit or not */
> 3883     if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
> 3884     {
> 3885         /* exit I/O emulator */
> 3886         PT_LOG("Error: why comes to Upper Address without 64 bit support??\n");
> 3887         return -1;
> 3888     }
>
>
> I only see the flags set up in the pt_msgctrl_reg_init() function. I guess this is not called this time.

I think you might be right, thanks for the detailed analysis!
If this is really the cause of the problem (as it looks like it is),
then the following should fix it:



diff --git a/hw/pt-msi.c b/hw/pt-msi.c
index 73f737d..b03b989 100644
--- a/hw/pt-msi.c
+++ b/hw/pt-msi.c
@@ -213,7 +213,7 @@ void pt_msi_disable(struct pt_dev *dev)
 
 out:
     /* clear msi info */
-    dev->msi->flags = 0;
+    dev->msi->flags &= ~(MSI_FLAG_UNINIT | PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
     dev->msi->pirq = -1;
     dev->msi_trans_en = 0;
 }


From xen-devel-bounces@lists.xen.org Fri Dec 07 16:45:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th135-0005oc-SW; Fri, 07 Dec 2012 16:45:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th134-0005oX-5R
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 16:45:14 +0000
Received: from [85.158.143.99:31533] by server-1.bemta-4.messagelabs.com id
	57/BD-28401-91D12C05; Fri, 07 Dec 2012 16:45:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1354898708!27546221!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 573 invoked from network); 7 Dec 2012 16:45:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 16:45:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216802840"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 16:45:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 11:45:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th12x-00066x-Br;
	Fri, 07 Dec 2012 16:45:07 +0000
Date: Fri, 7 Dec 2012 16:45:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1408387693-1354898702=:8801"
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1408387693-1354898702=:8801
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

Sorry for the delay in the reply

On Thu, 6 Dec 2012, G.R. wrote:
> Sorry, but I have to resend this in a separate thread for better visibili=
ty.
> Back ground:
> After backporting this patch to fix my PVHVM interrupt missing issue in 4=
=2E1.3:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01909.html
> I find side effect for pure HVM guest.
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00208.html
>=20
> I had a follow up thread to analysis the issue and post it again here:
>=20
>             Hi, it seems that the patch has some side effect on pure HVM =
guest.
>             For openelec 2.0 guest, which is based on linux 3.2.x with PV=
OP disabled, I see such syndrome in
>             qemu-dm-xxx.log:
>=20
>             pt_msgaddr64_reg_write: Error: why comes to Upper Address wit=
hout 64 bit support??
>             pt_pci_write_config: Internal error: Invalid write emulation =
return value[-1]. I/O emulator exit.
>=20
>             The guest dies immediately after the log, I have no way to ch=
eck guest kernel log.
>             Without the patch, this guest can boot without obvious error =
log even the VGA passthrough does not
>             quite work.
>             I'll check the code to see what does these log mean...
>=20
>=20
> I did some analysis and it really looks like a bug to me.
> Since this is a patch back-ported from 4.2.0, I would like to ask is ther=
e any follow up patch that would fix this
> issue?
> Please see my analysis below:
>=20
> Here is part of the qemu-dm log, with debug log enabled at compile time:
>=20
> dm-command: hot insert pass-through pci dev
> register_real_device: Assigning real physical device 00:1b.0 ...
> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: N=
o such file or directory: 0x0:0x1b.0x0
> pt_register_regions: IO region registered (size=3D0x00004000 base_addr=3D=
0xf7c10004)
> pt_msi_setup: msi mapped with pirq 36
> pci_intx: intx=3D1
> register_real_device: Real physical device 00:1b.0 registered successfuly=
!
> IRQ type =3D MSI-INTx
> ...
> pt_pci_read_config: [00:06.0]: address=3D0000 val=3D0x00008086 len=3D2
> pt_pci_read_config: [00:06.0]: address=3D0002 val=3D0x00001e20 len=3D2
> ...
> pt_pci_write_config: [00:06.0]: address=3D0068 val=3D0x00000000 len=3D4
> ...
> pt_pci_write_config: [00:06.0]: address=3D0062 val=3D0x00000081 len=3D2
> pt_msgctrl_reg_write: guest enabling MSI, disable MSI-INTx translation
> pci_intx: intx=3D1
> pt_msi_disable: Unmap msi with pirq 36
> pt_msgctrl_reg_write: setup msi for dev 30
> pt_msi_setup: msi mapped with pirq 36
> pt_msi_update: Update msi with pirq 36 gvec 51 gflags 1303
> pt_pci_read_config: [00:06.0]: address=3D0062 val=3D0x00000081 len=3D2
> pt_pci_write_config: [00:06.0]: address=3D0062 val=3D0x00000081 len=3D2
> pt_pci_write_config: [00:06.0]: address=3D0064 val=3D0xfee0300c len=3D4
> pt_pci_write_config: [00:06.0]: address=3D0068 val=3D0x00000000 len=3D4
> pt_msgaddr64_reg_write: Error: why comes to Upper Address without 64 bit =
support??
> pt_pci_write_config: Internal error: Invalid write emulation return value=
[-1]. I/O emulator exit.
>=20
>=20
> Here the device in question should is the audio controller, 00:1b.0 in th=
e host, which is 64 bit capable:
> 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Fami=
ly High Definition Audio Controller (rev 04)
> =C2=A0=C2=A0=C2=A0 Capabilities: [60] MSI: Enable+ Count=3D1/1 Maskable- =
64bit+
> =C2=A0=C2=A0=C2=A0 =C2=A0=C2=A0=C2=A0 Address: 00000000fee00378=C2=A0 Dat=
a: 0000

I won't be able to reproduce the bug, so I thank you in advance for any
help you can give me with testing a fix


> And there is also a successful write to offset 0x68 above, while the seco=
nd write fails the I/O emulator.
>=20
>=20
> The patch added pt_msi_disable() call into pt_msgctrl_reg_write() functio=
n, And the pt_msi_disable() function has these
> lines:
> out:
> =C2=A0=C2=A0=C2=A0 /* clear msi info */
> =C2=A0=C2=A0=C2=A0 dev->msi->flags =3D 0;
> =C2=A0=C2=A0=C2=A0 dev->msi->pirq =3D -1;
> =C2=A0=C2=A0=C2=A0 dev->msi_trans_en =3D 0;
>=20
> As a result, the flags are cleared -- this is new to the patch.
> And I believe this change caused the failure in pt_msgaddr64_reg_write():
>
>     /* check whether the type is 64 bit or not */
>     if (!(ptdev->msi->flags & PCI_MSI_FLAGS_64BIT))
>     {
>         /* exit I/O emulator */
>         PT_LOG("Error: why comes to Upper Address without 64 bit support??\n");
>         return -1;
>     }
>
>
> I only see the flags set up in the pt_msgctrl_reg_init() function. I guess this is not called this time.

I think you might be right, thanks for the detailed analysis!
If this is really the cause of the problem (as it looks like it is),
then the following should fix it:



diff --git a/hw/pt-msi.c b/hw/pt-msi.c
index 73f737d..b03b989 100644
--- a/hw/pt-msi.c
+++ b/hw/pt-msi.c
@@ -213,7 +213,7 @@ void pt_msi_disable(struct pt_dev *dev)
 
 out:
     /* clear msi info */
-    dev->msi->flags = 0;
+    dev->msi->flags &= ~(MSI_FLAG_UNINIT | PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
     dev->msi->pirq = -1;
     dev->msi_trans_en = 0;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Dec 07 16:51:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 16:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th190-00060N-NY; Fri, 07 Dec 2012 16:51:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Th18z-00060H-Oh
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 16:51:22 +0000
Received: from [193.109.254.147:24070] by server-14.bemta-14.messagelabs.com
	id 6D/B6-14517-98E12C05; Fri, 07 Dec 2012 16:51:21 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354899079!9327584!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22940 invoked from network); 7 Dec 2012 16:51:19 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-13.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Dec 2012 16:51:19 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Fri, 7 Dec 2012 17:51:18 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: Re: [Xen-devel] VGA passthrough and AMD drivers
Thread-Index: Ac3UmuRmhuFUnX+GSrChAek6f4kfgg==
Date: Fri, 7 Dec 2012 16:51:16 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53A0A@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5472531898119308267=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


>> Hi all,
>>
>> I have made some tests to find a good driver for FirePro V8800 on
>> windows 7 64bit HVM.
>>
>> I have been focused on 'advanced features': quad buffer and active
>> stereoscopy, synchronization...
>>
>> The result, for all FirePro drivers (of this year): I can't get the
>> quad buffer/active stereoscopy feature.
>>
>> But they work on a native installation.
>>
>Can you describe the setup a little more?



I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.

It's a setup used in a CAVE system; I try (and it works, minus some issues) to virtualize 'virtual reality contexts' that need full graphics card features.



Intel Xeon E5640 CPU with Intel 5520 chipset

cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2660
total_memory           : 4079



>How many graphic cards per guest?

One card per guest.



>How many guests? On how many hosts?

One guest per computer.



>>
>> The only driver that allows this feature is a Radeon HD driver
>> (Catalyst 12.10 WHQL).
>>
>> But this driver becomes unstable when an application using active
>> stereo and synchronization is closed:
>>
>> - The synchronization between the two computers is lost.
>>
>> - The CCC can crash when the synchronization is made again.
>>
>> Does anyone have any clues about this?
>>

>I don't know exactly how this works on AMD/ATI graphics cards, but I
>have worked with synchronisation on other graphics cards about 7 years
>ago, so I have some idea of how you solve the various problems.
>
>What I don't quite understand is why it would be different between a
>virtual environment and the bare-metal ("native") install. My immediate
>guess is that there is a timing difference, for one of a few reasons:
>1. IOMMU is adding extra delays to the graphics card reading system memory.
>2. Interrupt delays due to the hypervisor.
>3. Dom0 or other guest domains "stealing" CPU from the guest.
>I don't think those are easy to work around (as they all have to
>"happen" in a virtual system), but I also don't REALLY understand why
>this should cause problems in the first place, as there isn't any
>guarantee as to the timings of either memory reads, interrupt
>latency/responsiveness or CPU availability in Windows, so the same
>problem would appear in native systems as well, given "the right"
>circumstances.

>
>
>What exactly is the crash in CCC?
>
>(CCC stands for "Catalyst Control Center" - which I think is a Windows
>"service" to handle certain requests from the driver that can't be done
>in kernel mode [or shouldn't be done in the driver in general]).



After the application is closed, I launch the Catalyst Control Center; the synchronization state seems to be good, but there is no synchronization.

If I try to apply any modification of the synchronization (synchro server or client), CCC freezes and I need to kill it manually.

I can set the synchronization back after this.



I will try next week with other computers.



Thanks for the reply,

Aurelien



--
Mats

>
> Thanks,
>
> Aurelien
>






<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859-=
1">
<meta name=3D"Generator" content=3D"Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
	{font-family:Wingdings;
	panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
	{font-family:Wingdings;
	panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:Tahoma;
	panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0cm;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";
	mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
p.MsoPlainText, li.MsoPlainText, div.MsoPlainText
	{mso-style-priority:99;
	mso-style-link:"Texte brut Car";
	margin:0cm;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";
	mso-fareast-language:EN-US;}
-->
</style>
</head>
<body lang="FR" link="blue" vlink="purple">
<div class="WordSection1">
>> Hi all,
>>
>> I have made some tests to find a good driver for FirePro V8800 on
>> Windows 7 64-bit HVM.
>>
>> I have been focused on "advanced features": quad buffer and active
>> stereoscopy, synchronization...
>>
>> The results, for all FirePro drivers (of this year): I can't get the
>> quad buffer/active stereoscopy feature.
>>
>> But they work on a native installation.
>>
> Can you describe the setup a little more?

I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
It's a setup used in a CAVE system; I am trying (and it works, minus some
issues) to virtualize 'virtual reality contexts' that need full graphics
card features.

Intel Xeon E5640 CPU with Intel 5520 chipset
cores_per_socket : 4
threads_per_core : 2
cpu_mhz : 2660
total_memory : 4079

> How many graphic cards per guest?

One card per guest.

> How many guests? On how many hosts?

One guest per computer.

>>
>> The only driver that allows this feature is a Radeon HD driver
>> (Catalyst 12.10 WHQL).
>>
>> But this driver becomes unstable when an application using active
>> stereo and synchronization is closed:
>>
>> - The synchronization between two computers is lost.
>>
>> - The CCC can crash when the synchronization is made again.
>>
>> Does someone have any clues about this?
>>
> I don't know exactly how this works on AMD/ATI graphics cards, but I
> have worked with synchronisation on other graphics cards about 7 years
> ago, so I have some idea of how you solve the various problems.
>
> What I don't quite understand is why it would be different between a
> virtual environment and the bare-metal ("native") install. My immediate
> guess is that there is a timing difference, for one of three reasons:
> 1. IOMMU is adding extra delays to the graphics card reading system memory.
> 2. Interrupt delays due to the hypervisor.
> 3. Dom0 or other guest domains "stealing" CPU from the guest.
> I don't think those are easy to work around (as they all have to
> "happen" in a virtual system), but I also don't REALLY understand why
> this should cause problems in the first place, as there isn't any
> guarantee as to the timings of either memory reads, interrupt
> latency/responsiveness or CPU availability in Windows, so the same
> problem would appear in native systems as well, given "the right"
> circumstances.
>
> What exactly is the crash in CCC?
>
> (CCC stands for "Catalyst Control Center" - which I think is a Windows
> "service" to handle certain requests from the driver that can't be done
> in kernel mode [or shouldn't be done in the driver in general]).

After the application is closed, I launch the Catalyst Control Center;
the synchronization state seems to be good, but there is no synchronization.
If I try to apply any modification of the synchronization (synchro server
or client), CCC freezes and I need to kill it manually.
I can set the synchronization back after this.

I will try next week with other computers.

Thanks for the reply,
Aurelien

--
Mats

> Thanks,
>
> Aurelien
</div>
</body>
</html>

--_000_36774CA35642C143BCDE93BA0C68DC5702C53A0Adulac_--


--===============5472531898119308267==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5472531898119308267==--


From xen-devel-bounces@lists.xen.org Fri Dec 07 17:02:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1JX-0006Vq-FF; Fri, 07 Dec 2012 17:02:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th1JV-0006Vj-N5
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:02:13 +0000
Received: from [85.158.139.83:25700] by server-10.bemta-5.messagelabs.com id
	DF/A2-13383-41122C05; Fri, 07 Dec 2012 17:02:12 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1354899730!27744595!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7426 invoked from network); 7 Dec 2012 17:02:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:02:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47023640"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:01:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:01:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th1J6-0006Ll-85;
	Fri, 07 Dec 2012 17:01:48 +0000
Date: Fri, 7 Dec 2012 17:01:43 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <272767244.20121206175454@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212071655460.8801@kaball.uk.xensource.com>
References: <272767244.20121206175454@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Upstream qemu-xen,
 log verbosity and compile errors when enabling debug, filenaming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
> Hi All,
> 
> Yesterday I have tried building and using upstream qemu and seabios.
> Config.mk:
> QEMU_UPSTREAM_URL ?= git://git.qemu.org/qemu.git
> QEMU_UPSTREAM_REVISION ?= master
> 
> SEABIOS_UPSTREAM_URL ?= git://git.qemu.org/seabios.git
> SEABIOS_UPSTREAM_TAG ?= master
> 
> And I'm happy to say that it works quite OK, even with (secondary PCI) PCI passthrough (using an ATI gfx adapter and Windows 7 as the guest OS) :-).
> 
> But it seems to have an issue with a USB controller which is trying to use MSI-X interrupts, which makes xl dmesg report:
> (XEN) [2012-12-06 16:07:24] vmsi.c:108:d32767 Unsupported delivery mode 3
> and when "pci_msitranslate=0" is set the error still occurs, only this time the correct domain number is reported instead of the 32767.
> 
> 
> However, when trying to debug, I noticed that although I made a debug build (make debug=y && make debug=y install), qemu-dm-<guestname>.log stays almost empty.
> It seems none of the defines related to debugging are set.
> 
> - Would it be appropriate to enable them all when making a debug build?
> - Would it be wise to also have some more verbose logging when not running a debug build?

Yes and yes

> - And if yes, what would be the nicest way to set the defines?

My guess is that it would be enough to turn on XEN_PT_LOGGING_ENABLED by
default

> - Should the naming of the debug defines be made more consistent?

Yes



> 
> When enabling these debug defines by hand:
> 
> xen-all.c
> #define DEBUG_XEN
> 
> xen-mapcache.c
> #define MAPCACHE_DEBUG
> 
> hw/xen-host-pci-device.c
> #define XEN_HOST_PCI_DEVICE_DEBUG
> 
> hw/xen_platform.c
> #define DEBUG_PLATFORM
> 
> hw/xen_pt.h
> #define XEN_PT_LOGGING_ENABLED
> #define XEN_PT_DEBUG_PCI_CONFIG_ACCESS
> 
> I get a lot of compile errors related to wrong types in the debug printfs.

That's really bad. I would like upstream QEMU to have the same level of
logging as qemu-xen-traditional by default. And they should compile.


> Another thing that occurred to me was that the file naming doesn't seem to be overly consistent:
> 
> xen-all.c
> xen-mapcache.c
> xen-mapcache.h
> xen-stub.c
> xen_apic.c
> hw/xen_backend.c
> hw/xen_backend.h
> hw/xen_blkif.h
> hw/xen_common.h
> hw/xen_console.c
> hw/xen_devconfig.c
> hw/xen_disk.c
> hw/xen_domainbuild.c
> hw/xen_domainbuild.h
> hw/xenfb.c
> hw/xen.h
> hw/xen-host-pci-device.c
> hw/xen-host-pci-device.h
> hw/xen_machine_pv.c
> hw/xen_nic.c
> hw/xen_platform.c
> hw/xen_pt.c
> hw/xen_pt_config_init.c
> hw/xen_pt.h
> hw/xen_pt_msi.c
> 
> Would it be worthwhile to make it:
> - consistent: all underscores or all hyphens?
> - always xen_ (or xen-, depending on the above)?

Yes, definitely.
Given that the development window for QEMU 1.4 has just opened, this might
even be the right time to make these changes.

Are you volunteering? :)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:05:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:05:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1M5-0006by-1E; Fri, 07 Dec 2012 17:04:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Th1M3-0006bq-Oy
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:04:51 +0000
Received: from [85.158.138.51:16689] by server-10.bemta-3.messagelabs.com id
	B9/29-19806-EA122C05; Fri, 07 Dec 2012 17:04:46 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354899883!27891820!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3574 invoked from network); 7 Dec 2012 17:04:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:04:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216805662"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:04:33 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1; Fri, 7 Dec 2012
	12:04:32 -0500
Message-ID: <50C2219F.5050505@citrix.com>
Date: Fri, 7 Dec 2012 17:04:31 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <36774CA35642C143BCDE93BA0C68DC5702C53A0A@dulac>
In-Reply-To: <36774CA35642C143BCDE93BA0C68DC5702C53A0A@dulac>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/12 16:51, Aurélien MILLIAT wrote:
>
> >> Hi all,
>
> >>
>
> >> I have made some tests to find a good driver for FirePro V8800 on
>
> >> windows 7 64bit HVM.
>
> >>
>
> >> I have been focused on "advanced features": quad buffer and active
>
> >> stereoscopy, synchronization...
>
> >>
>
> >> The results, for all FirePro drivers (of this year): I can't get the
>
> >> quad buffer/active stereoscopy feature.
>
> >>
>
> >> But they work on a native installation.
>
> >>
>
> >Can you describe the setup a little more?
>
> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>
> It's a setup used in a CAVE system; I am trying (and it works, minus some
> issues) to virtualize 'virtual reality contexts' that need full
> graphics card features.
>
> Intel Xeon E5640 CPU with Intel 5520 chipset
>
> cores_per_socket : 4
>
> threads_per_core : 2
>
> cpu_mhz : 2660
>
> total_memory : 4079
>
> >How many graphic cards per guest?
>
> One card per guest.
>
> >How many guests? On how many hosts?
>
> One guest per computer.
>

And of course, I just thought of some other questions:
What version of Xen are you using?
What kernel are you using in Dom0?

And just to be clear, there is only Dom0 and one Windows 7 HVM guest on
each machine?

> >>
>
> >> The only driver that allows this feature is a Radeon HD driver
>
> >> (Catalyst 12.10 WHQL).
>
> >>
>
> >> But this driver becomes unstable when an application using active
>
> >> stereo and synchronization is closed:
>
> >>
>
> >> -The synchronization between two computers is lost.
>
> >>
>
> >> -The CCC can crash when the synchronization is made again.
>
> >>
>
> >> Does someone have any clues about this?
>
> >>
>
> >I don't know exactly how this works on AMD/ATI graphics cards, but I
>
> >have worked with synchronisation on other graphics cards about 7 years
>
> >ago, so I have some idea of how you solve the various problems.
>
> >
>
> >What I don't quite understand is why it would be different between a
>
> >virtual environment and the bare-metal ("native") install. My immediate
>
> >guess is that there is a timing difference, for one of three reasons:
>
> >1. IOMMU is adding extra delays to the graphics card reading system memory.
>
> >2. Interrupt delays due to the hypervisor.
>
> >3. Dom0 or other guest domains "stealing" CPU from the guest.
>
> >I don't think those are easy to work around (as they all have to
>
> >"happen" in a virtual system), but I also don't REALLY understand why
>
> >this should cause problems in the first place, as there isn't any
>
> >guarantee as to the timings of either memory reads, interrupt
>
> >latency/responsiveness or CPU availability in Windows, so the same
>
> >problem would appear in native systems as well, given "the right"
>
> >circumstances.
>
> >
>
> >
>
> >What exactly is the crash in CCC?
>
> >
>
> >(CCC stands for "Catalyst Control Center" - which I think is a Windows
>
> >"service" to handle certain requests from the driver that can't be done
>
> >in kernel mode [or shouldn't be done in the driver in general]).
>
> After the application is closed, I launch the Catalyst Control Center;
> the synchronization state seems to be good, but there is no
> synchronization.
>
> If I try to apply any modification of the synchronization (synchro server
> or client), CCC freezes and I need to kill it manually.
>
> I can set the synchronization back after this.
>

This clearly sounds like a software issue in the CCC itself. I could be
wrong, but that's what I think right now. It would be rather difficult
to figure out what is going wrong without at least a repro environment.

Whilst I'm all for using Xen for everything, there are sometimes
situations when "not using Xen" may actually be the right choice. Can
you explain why running your guests in Xen is of benefit? [If you'd like
to answer "none of your business", that's fine, but it may help to
understand what the "business case" is for this].

--
Mats
>
> I will try next week with other computers.
>
> Thanks for the reply,
>
> Aurelien
>
> --
>
> Mats
>
> >
>
> > Thanks,
>
> >
>
> > Aurelien
>
> >
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/12 16:51, Aurélien MILLIAT wrote:
>
> >> Hi all,
>
> >>
>
> >> I have made some tests to find a good driver for FirePro V8800 on
>
> >> windows 7 64bit HVM.
>
> >>
>
> >> I have been focused on "advanced features": quad buffer and active
>
> >> stereoscopy, synchronization...
>
> >>
>
> >> The results, for all FirePro drivers (of this year): I can't get the
>
> >> quad buffer/active stereoscopy feature.
>
> >>
>
> >> But they work on a native installation.
>
> >>
>
> >Can you describe the setup a little more?
>
> I've got 2 HP Z800 workstations with FirePro V8800, one per computer.
>
> It's a setup used in a CAVE system; I try (and it works, minus some
> issues) to virtualize 'virtual reality contexts' that need full
> graphics card features.
>
> Intel Xeon E5640 CPU with Intel 5520 chipset
>
> cores_per_socket : 4
>
> threads_per_core : 2
>
> cpu_mhz : 2660
>
> total_memory : 4079
>
> >How many graphic cards per guest?
>
> One card per guest.
>
> >How many guests? On how many hosts?
>
> One guest per computer.
>

And of course, I just thought of some other questions:
What version of Xen are you using?
What kernel are you using in Dom0?

And just to be clear, there is only Dom0 and one Windows 7 HVM guest on
each machine?

> >>
>
> >> The only driver that allows this feature is a Radeon HD driver
>
> >> (Catalyst 12.10 WHQL).
>
> >>
>
> >> But this driver becomes unstable when an application using active
>
> >> stereo and synchronization is closed:
>
> >>
>
> >> -The synchronization between two computers is lost.
>
> >>
>
> >> -The CCC can crash when the synchronization is made again.
>
> >>
>
> >> Does someone have any clues about this?
>
> >>
>
> >I don't know exactly how this works on AMD/ATI graphics cards, but I
>
> >have worked with synchronisation on other graphics cards about 7 years
>
> >ago, so I have some idea of how you solve the various problems.
>
> >
>
> >What I don't quite understand is why it would be different between a
>
> >virtual environment and the bare-metal ("native") install. My immediate
>
> >guess is that there is a timing difference, for one of three reasons:
>
> >1. IOMMU is adding extra delays to the graphics card reading system memory.
>
> >2. Interrupt delays due to hypervisor.
>
> >3. Dom0 or other guest domains "stealing" CPU from the guest.
>
> >I don't think those are easy to work around (as they all have to
>
> >"happen" in a virtual system), but I also don't REALLY understand why
>
> >this should cause problems in the first place, as there isn't any
>
> >guarantee as to the timings of either memory reads, interrupt
>
> >latency/responsiveness or CPU availability in Windows, so the same
>
> >problem would appear in native systems as well, given "the right"
>
> >circumstances.
>
> >
>
> >
>
> >What exactly is the crash in CCC?
>
> >
>
> >(CCC stands for "Catalyst Control Center" - which I think is a Windows
>
> >"service" to handle certain requests from the driver that can't be done
>
> >in kernel mode [or shouldn't be done in the driver in general]).
>
> After the application is closed, I launch the Catalyst Control Center;
> the synchronization state seems to be good. But there is no
> synchronization.
>
> If I try to apply any modifications to the synchronization (synchro
> server or client), CCC freezes and I need to kill it manually.
>
> I can set the synchronization back after this.
>

This clearly sounds like a software issue in the CCC itself. I could be
wrong, but that's what I think right now. It would be rather difficult
to figure out what is going wrong without at least a repro environment.

Whilst I'm all for using Xen for everything, there are sometimes
situations when "not using Xen" may actually be the right choice. Can
you explain why running your guests in Xen is of benefit? [If you'd like
to answer "none of your business", that's fine, but it may help to
understand what the "business case" is for this].

--
Mats
>
> I will try next week with other computers.
>
> Thanks for the reply,
>
> Aurelien
>
> --
>
> Mats
>
> >
>
> > Thanks,
>
> >
>
> > Aurelien
>
> >
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:05:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1Mt-0006fj-FT; Fri, 07 Dec 2012 17:05:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Th1Ms-0006fW-DM
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 17:05:42 +0000
Received: from [85.158.137.99:46522] by server-10.bemta-3.messagelabs.com id
	09/DA-19806-5E122C05; Fri, 07 Dec 2012 17:05:41 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1354899940!18362399!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17883 invoked from network); 7 Dec 2012 17:05:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:05:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="16229798"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 17:05:40 +0000
Received: from [192.168.1.30] (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	17:05:40 +0000
Message-ID: <50C221E3.50901@citrix.com>
Date: Fri, 7 Dec 2012 18:05:39 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
References: <1351097925-26221-1-git-send-email-roger.pau@citrix.com>
	<20121206031455.GA4408@phenom.dumpdata.com>
	<20121207142222.GA3303@phenom.dumpdata.com>
In-Reply-To: <20121207142222.GA3303@phenom.dumpdata.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] LVM Checksum error when using persistent grants
 (#linux-next + stable/for-jens-3.8)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/12 15:22, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 05, 2012 at 10:14:55PM -0500, Konrad Rzeszutek Wilk wrote:
>> Hey Roger,
>>
>> I am seeing this weird behavior when using #linux-next + stable/for-jens-3.8 tree.
> 
> To make it easier I just used v3.7-rc8 and merged stable/for-jens-3.8
> tree.
> 
>>
>> Basically I can do 'pvscan' on xvd* disk and quite often I get checksum errors:
>>
>> # pvscan /dev/xvdf
>>   PV /dev/xvdf2   VG VolGroup00        lvm2 [18.88 GiB / 0    free]
>>   PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
>>   PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
>>   PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
>>   PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
>>   Total: 5 [962.38 GiB] / in use: 5 [962.38 GiB] / in no VG: 0 [0   ]
>> # pvscan /dev/xvdf
>>   /dev/xvdf2: Checksum error
>>   Couldn't read volume group metadata.
>>   /dev/xvdf2: Checksum error
>>   Couldn't read volume group metadata.
>>   PV /dev/dm-14   VG vg_x86_64-pvhvm   lvm2 [4.00 GiB / 68.00 MiB free]
>>   PV /dev/dm-12   VG vg_i386-pvhvm     lvm2 [4.00 GiB / 68.00 MiB free]
>>   PV /dev/dm-11   VG vg_i386           lvm2 [4.00 GiB / 68.00 MiB free]
>>   PV /dev/sda     VG guests            lvm2 [931.51 GiB / 220.51 GiB free]
>>   Total: 4 [943.50 GiB] / in use: 4 [943.50 GiB] / in no VG: 0 [0   ]
>>
>> This is with a i386 dom0, 64-bit Xen 4.1.3 hypervisor, and with either
>> 64-bit or 32-bit PV or PVHVM guest.
> 
> And it does not matter if dom0 is 64-bit.
>>
>> Have you seen something like this?
> 
> More interesting is that the failure is in the frontend. I ran the "new"
> guests that do persistent grants with the old backends (so v3.7-rc8
> virgin) and still got the same failure.
> 
>>
>> Note, the other LV disks are over iSCSI and are working fine.

I've found the problem: it happens when you copy only a part of the
shared data in blkif_completion. Here is an example:


1st loop in rq_for_each_segment
 * bv_offset: 3584
 * bv_len: 512
 * offset += bv_len
 * i: 0

2nd loop:
 * bv_offset: 0
 * bv_len: 512
 * i: 0

As you can see, in the second loop i is still 0 (because offset is
only 512, so 512 >> PAGE_SHIFT is 0) when it should be 1.

This problem made me realize another corner case, which I don't
know whether it can happen; AFAIK I've never seen it:


1st loop in rq_for_each_segment
 * bv_offset: 1024
 * bv_len: 512
 * offset += bv_len
 * i: 0

2nd loop:
 * bv_offset: 0
 * bv_len: 512
 * i: 0
	
In this second case, should i be 1? Can this really happen? I can't see
any way to get a "global offset" or something similar that's not relative
to the bvec being handled right now.

For the problem that you described a quick fix follows, but it doesn't
cover the second case exposed above:

---
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index df21b05..6e155d0 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -869,7 +871,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
                                bvec->bv_len);
                        bvec_kunmap_irq(bvec_data, &flags);
                        kunmap_atomic(shared_data);
-                       offset += bvec->bv_len;
+                       offset = (i * PAGE_SIZE) + (bvec->bv_offset + bvec->bv_len);
                }
        }
        /* Add the persistent grant into the list of free grants */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:06:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1NE-0006j4-TF; Fri, 07 Dec 2012 17:06:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Th1NC-0006ig-RX
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 17:06:03 +0000
Received: from [85.158.138.51:45355] by server-8.bemta-3.messagelabs.com id
	B9/9A-07786-9F122C05; Fri, 07 Dec 2012 17:06:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354899959!19999661!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13090 invoked from network); 7 Dec 2012 17:06:01 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 17:06:01 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7H5woI026847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Fri, 7 Dec 2012 17:05:59 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7H5vuZ007076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Fri, 7 Dec 2012 17:05:58 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7H5vea009359
	for <xen-devel@lists.xensource.com>; Fri, 7 Dec 2012 11:05:57 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 09:05:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5A15D1C05A8; Fri,  7 Dec 2012 12:05:56 -0500 (EST)
Date: Fri, 7 Dec 2012 12:05:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Message-ID: <20121207170556.GA6165@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] Linux Kernel Summit 2012 hallway talks - PV MMU, PVH,
 hpa, tglrx, stefano and me.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This year during the Linux Kernel Summit in the hallways we were
discussing the paravirt and PV MMU interface with tglrx and hpa.

The x86 maintainers would like to rewrite parts of the MMU code
(specifically the page table creation/tear down) and are hitting
the wall of not knowing whether changing some of the paravirt
calls will have an adverse effect on Xen. We hit that in the past
with something seemingly innocent - but it caused us quite the
headache. Look in git commit 279b706bf800b5967037f492dbe4fc5081ad5d0f
(x86,xen: introduce x86_init.mapping.pagetable_reserve) for details.

Peter (hpa) explained a nice and quite neat mechanism for the
pagetable creation, after tglrx and hpa looked at how unwieldy
the pagetable creation is nowadays (arch/x86/mm/*). This
is nicely explained in https://lkml.org/lkml/2012/10/4/701

The patches for this have been written by Yinghai and are on the
queue for v3.8. They will eliminate the above mentioned
hook (pagetable_reserve).

We also explained how the PVH mode that Mukesh is working on
will benefit a re-write of the MMU code, as it would not have to
worry about Xen's PV MMU rules.

We went into more detail about what else we would like to do, and
it came down to:
 - Continue removing pvops function calls we don't use.
   There are some that have the exact same functions for
   Xen, lguest and baremetal. I am on the hook to do an
   audit of this but haven't gotten very far.

 - Wait until the PVH patches have been posted and are in a good
   stage. For those that don't know what PVH is, this blog has
   a very good explanation of it and is worth the read:
   http://blog.xen.org/index.php/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

   I would highly recommend reading it - it also has a bit of history
   and explanation of the different modes.

   Anyhow, once PVH works - so it can do SMP guests, delivers
   interrupts properly, etc. - we would obsolete the PV MMU
   mode in 5 years. This means that arch/x86/xen/p2m.c and arch/x86/xen/mmu.c
   along with a host of paravirt interfaces would be #ifdef-ed out.
   There would also be a note in the Documentation/deprecate-schedule
   pointing that out. If everything time-wise aligns itself, that
   means 2013 is when PVH has its debut and will have its kinks worked
   out. 2018 is when PV MMU would be obsoleted. The impact is that in
   2018 users would need an Intel VT-d or AMD-Vi IOMMU capable machine to run
   the latest Linux dom0 kernel with device drivers on x86.
   You would still be able to run the ancient PV kernels (like 2.6.18) as
   guests - just not as a dom0. The hypervisor would still support the
   hypercalls - so in 2018 you could still run with Xen X.Y with a pre-2018
   mainline kernel.

The reasoning behind this move is:
 - Faster performance. PVH, which uses the hardware VMX container
   and VT-d, allows us to run PV guests faster. Look in detail at
   Mukesh's presentation at this year's XenSummit:
   http://vimeo.com/album/2068760/video/49506288
 - Less code to maintain = less chance of bugs.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:06:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1NE-0006j4-TF; Fri, 07 Dec 2012 17:06:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Th1NC-0006ig-RX
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 17:06:03 +0000
Received: from [85.158.138.51:45355] by server-8.bemta-3.messagelabs.com id
	B9/9A-07786-9F122C05; Fri, 07 Dec 2012 17:06:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1354899959!19999661!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13090 invoked from network); 7 Dec 2012 17:06:01 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Dec 2012 17:06:01 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7H5woI026847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Fri, 7 Dec 2012 17:05:59 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7H5vuZ007076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Fri, 7 Dec 2012 17:05:58 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7H5vea009359
	for <xen-devel@lists.xensource.com>; Fri, 7 Dec 2012 11:05:57 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 09:05:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5A15D1C05A8; Fri,  7 Dec 2012 12:05:56 -0500 (EST)
Date: Fri, 7 Dec 2012 12:05:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Message-ID: <20121207170556.GA6165@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] Linux Kernel Summit 2012 hallway talks - PV MMU, PVH,
 hpa, tglrx, stefano and me.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This year during the Linux Kernel Summit in the hallways we were
discussing the paravirt and PV MMU interface with tglrx and hpa.

The x86 maintainers would like to rewrite parts of the MMU code
(specifically the page table creation/tear down) and are hitting
the wall of not knowing whether changing some of the paravirt
calls will have an adverse effect on Xen. We hit that in the past
with something seemingly innocent - but it caused us quite the
headache. Look in git commit 279b706bf800b5967037f492dbe4fc5081ad5d0f
(x86,xen: introduce x86_init.mapping.pagetable_reserve) for details.

Peter (hpa) explained a nice and quite neat mechanism to the
pagetable creation after tglrx and hpa looked at how unwieldy
the pagetable creation is nowadays (arch/x86/mm/*). This
is nicely explained in https://lkml.org/lkml/2012/10/4/701

The patches for this have been written by Jinghai and are on the
queue for v3.8. They will eliminate the above mentioned
hook (pagetable_reserve).

We also explained how the PVH mode that Mukesh is working on
will benefit re-write of the MMU code as it would not have to
worry about Xen's PV MMU rules.

We got in more details about what else we would like to do and
it came down to:
 - Continue removing pvops function calls we don't use.
   There are some that have the same exact functions for both
   Xen, lguest and baremetal. I am on the hook to do an
   audit of this but hadn't gotten very far.

 - Wait until the PVH patches have been posted and are in a good
   stage. For those that don't know what PVH is, this blog has
   a very good explanation of it and is worth the read:
   http://blog.xen.org/index.php/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

   I would highly recommend reading it - it also has a bit of history
   and explanation of the different modes.

   Anyhow, once PVH works - i.e. it can run SMP guests, delivers
   interrupts properly, etc. - we would obsolete the PV MMU
   mode in 5 years. This means that arch/x86/xen/p2m.c and arch/x86/xen/mmu.c,
   along with a host of paravirt interfaces, would be #ifdef-ed out.
   There would also be a note in Documentation/deprecate-schedule
   pointing that out. If everything aligns itself time-wise, that
   means 2013 is when PVH has its debut and will have its kinks worked
   out, and 2018 is when the PV MMU would be obsoleted. The impact is that in
   2018 users would need an Intel VT-d or AMD-Vi IOMMU capable machine to run
   the latest Linux dom0 kernel with device drivers on x86.
   You would still be able to run the ancient PV kernels (like 2.6.18) as
   guests - just not as a dom0. The hypervisor would still support the
   hypercalls, so in 2018 you could still run Xen X.Y with a pre-2018
   mainline kernel.

The reasoning behind this move is:
 - Faster performance. PVH, which uses the hardware VMX container
   and VT-d, allows us to run PV guests faster. For details, see
   Mukesh's presentation at this year's XenSummit:
   http://vimeo.com/album/2068760/video/49506288
 - Less code to maintain = less chance of bugs.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:24:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:24:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1ev-0007bP-NS; Fri, 07 Dec 2012 17:24:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th1et-0007bK-J8
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:24:19 +0000
Received: from [85.158.139.83:2687] by server-7.bemta-5.messagelabs.com id
	11/70-08009-24622C05; Fri, 07 Dec 2012 17:24:18 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354901056!28782933!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19069 invoked from network); 7 Dec 2012 17:24:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:24:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47027222"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:24:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:24:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th1ep-0006ex-Hs;
	Fri, 07 Dec 2012 17:24:15 +0000
Date: Fri, 7 Dec 2012 17:24:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <92707120.20121206223208@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
 returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
> Hi Stefano / Anthony,
> 
> With the debug output turned on I see some differences between qemu-traditional and qemu-upstream:
> 
> With the PCI passed-through device that fails with MSI-X, qemu-xen seems to get the same pirq back for every entry:
> 
> in qemu-traditional:
> 
> pt_msix_update_one: pt_msix_update_one requested pirq = 87
> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
> pt_msix_update_one: pt_msix_update_one requested pirq = 86
> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
> pt_msix_update_one: pt_msix_update_one requested pirq = 85
> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
> pt_msix_update_one: pt_msix_update_one requested pirq = 84
> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
> 
> 
> in qemu-xen (upstream):
> 
> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)

That is a good pointer, but unfortunately the code that parses those
entries looks exactly alike in both QEMU trees:

qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one

if (!gvec) {
        /* if gvec is 0, the guest is asking for a particular pirq that
         * is passed as dest_id */
        pirq = ((gaddr >> 32) & 0xffffff00) |
               (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);



qemu-xen/hw/xen_pt_msi.c:msi_msix_setup

if (gvec == 0) {
        /* if gvec is 0, the guest is asking for a particular pirq that
         * is passed as dest_id */
        *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);

Given how msi_ext_dest_id and msi_dest_id are defined, they should
behave the same way.

Maybe adding a printk in msi_msix_setup to show addr would help
nonetheless...


From xen-devel-bounces@lists.xen.org Fri Dec 07 17:31:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1lU-0007j7-JF; Fri, 07 Dec 2012 17:31:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th1lS-0007j2-RJ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:31:07 +0000
Received: from [85.158.138.51:13517] by server-8.bemta-3.messagelabs.com id
	8C/24-07786-5D722C05; Fri, 07 Dec 2012 17:31:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354901458!9266016!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3763 invoked from network); 7 Dec 2012 17:31:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:31:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216809671"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:30:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:30:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th1lJ-0006kS-4s;
	Fri, 07 Dec 2012 17:30:57 +0000
Date: Fri, 7 Dec 2012 17:30:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212071728350.8801@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/9] xen: arm: parse modules from DT during
	early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> The bootloader should populate /chosen/modules/module@<N>/ for each
> module it wishes to pass to the hypervisor. The content of these nodes
> is described in docs/misc/arm/device-tree/booting.txt
> 
> The hypervisor parses for two types of module, Linux zImages and Linux
> initrds. Currently we don't do anything with them.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v4: Use /chosen/modules/module@N
>     Identify module type by compatible property not number.
> v3: Use a reg = < > property for the module address/length.
> v2: Reserve the zeroeth module for Xen itself (not used yet)
>     Use a more idiomatic DT layout
>     Document said layout
> ---
>  docs/misc/arm/device-tree/booting.txt |   25 ++++++++++
>  xen/common/device_tree.c              |   86 +++++++++++++++++++++++++++++++++
>  xen/include/xen/device_tree.h         |   14 +++++
>  3 files changed, 125 insertions(+), 0 deletions(-)
>  create mode 100644 docs/misc/arm/device-tree/booting.txt
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> new file mode 100644
> index 0000000..94cd3f1
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -0,0 +1,25 @@
> +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> +node of the device tree.
> +
> +Each node has the form /chosen/modules/module@<N> and contains the following
> +properties:
> +
> +- compatible
> +
> +	Must be:
> +
> +		"xen,<type>", "xen,multiboot-module"
> +
> +	where <type> must be one of:
> +
> +	- "linux-zimage" -- the dom0 kernel
> +	- "linux-initrd" -- the dom0 ramdisk
> +
> +- reg
> +
> +	Specifies the physical address of the module in RAM and the
> +	length of the module.
> +
> +- bootargs (optional)
> +
> +	Command line associated with this module
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index da0af77..4bb640e 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -270,6 +270,90 @@ static void __init process_cpu_node(const void *fdt, int node,
>      cpumask_set_cpu(start, &cpu_possible_map);
>  }
>  
> +static int __init process_chosen_modules_node(const void *fdt, int node,
> +                                              const char *name, int *depth,
> +                                              u32 address_cells, u32 size_cells)
> +{
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +    int nr, nr_modules = 0;
> +    struct dt_mb_module *mod;
> +    int len;
> +
> +    for ( *depth = 1;
> +          *depth >= 1;
> +          node = fdt_next_node(fdt, node, depth) )
> +    {
> +        name = fdt_get_name(fdt, node, NULL);
> +        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
> +
> +            if ( fdt_node_check_compatible(fdt, node,
> +                                           "xen,multiboot-module" ) != 0 )
> +                early_panic("%s not a compatible module node\n", name);
> +
> +            if ( fdt_node_check_compatible(fdt, node,
> +                                           "xen,linux-zimage") == 0 )
> +                nr = 1;
> +            else if ( fdt_node_check_compatible(fdt, node,
> +                                                "xen,linux-initrd") == 0)
> +                nr = 2;
> +            else
> +                early_panic("%s not a known xen multiboot type\n", name);
> +
> +            if ( nr > nr_modules )
> +                nr_modules = nr;
> +
> +            mod = &early_info.modules.module[nr];
> +
> +            prop = fdt_get_property(fdt, node, "reg", NULL);
> +            if ( !prop )
> +                early_panic("node %s missing `reg' property\n", name);
> +
> +            cell = (const u32 *)prop->data;
> +            device_tree_get_reg(&cell, address_cells, size_cells,
> +                                &mod->start, &mod->size);
> +
> +            prop = fdt_get_property(fdt, node, "bootargs", &len);
> +            if ( prop )
> +            {
> +                if ( len > sizeof(mod->cmdline) )
> +                    early_panic("module %d command line too long\n", nr);
> +
> +                safe_strcpy(mod->cmdline, prop->data);
> +            }
> +            else
> +                mod->cmdline[0] = 0;
> +        }
> +    }
> +
> +    for ( nr = 1 ; nr < nr_modules ; nr++ )
> +    {
> +        mod = &early_info.modules.module[nr];
> +        if ( !mod->start || !mod->size )
> +            early_panic("module %d missing / invalid\n", nr);
> +    }
> +
> +    early_info.modules.nr_mods = nr_modules;
> +    return node;
> +}
> +
> +static void __init process_chosen_node(const void *fdt, int node,
> +                                       const char *name,
> +                                       u32 address_cells, u32 size_cells)
> +{
> +    int depth;
> +
> +    for ( depth = 0;
> +          depth >= 0;
> +          node = fdt_next_node(fdt, node, &depth) )
> +    {
> +        name = fdt_get_name(fdt, node, NULL);
> +        if ( depth == 1 && strcmp(name, "modules") == 0 )
> +            node = process_chosen_modules_node(fdt, node, name, &depth,
> +                                               address_cells, size_cells);
> +    }
> +}
> +
>  static int __init early_scan_node(const void *fdt,
>                                    int node, const char *name, int depth,
>                                    u32 address_cells, u32 size_cells,
> @@ -279,6 +363,8 @@ static int __init early_scan_node(const void *fdt,
>          process_memory_node(fdt, node, name, address_cells, size_cells);
>      else if ( device_tree_type_matches(fdt, node, "cpu") )
>          process_cpu_node(fdt, node, name, address_cells, size_cells);
> +    else if ( device_tree_node_matches(fdt, node, "chosen") )
> +        process_chosen_node(fdt, node, name, address_cells, size_cells);
>  
>      return 0;
>  }

You have really written a lot of code here!
I would have thought that just matching on the compatible string would
be enough:

else if ( device_tree_node_matches(fdt, node, "linux-zimage") )
     process_linuxzimage_node(fdt, node, name, address_cells, size_cells);
else if ( device_tree_node_matches(fdt, node, "linux-initrd") )
     process_linuxinitrd_node(fdt, node, name, address_cells, size_cells);

so that your process_linuxzimage_node and process_linuxinitrd_node start
from the right node and have everything they need to parse it.


> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 4d010c0..c383677 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -15,6 +15,7 @@
>  #define DEVICE_TREE_MAX_DEPTH 16
>  
>  #define NR_MEM_BANKS 8
> +#define NR_MODULES 2
>  
>  struct membank {
>      paddr_t start;
> @@ -26,8 +27,21 @@ struct dt_mem_info {
>      struct membank bank[NR_MEM_BANKS];
>  };
>  
> +struct dt_mb_module {
> +    paddr_t start;
> +    paddr_t size;
> +    char cmdline[1024];
> +};
> +
> +struct dt_module_info {
> +    int nr_mods;
> +    /* Module 0 is Xen itself, followed by the provided modules-proper */
> +    struct dt_mb_module module[NR_MODULES + 1];
> +};
> +
>  struct dt_early_info {
>      struct dt_mem_info mem;
> +    struct dt_module_info modules;
>  };
>  
>  typedef int (*device_tree_node_func)(const void *fdt,
> -- 
> 1.7.9.1
> 


From xen-devel-bounces@lists.xen.org Fri Dec 07 17:31:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1lU-0007j7-JF; Fri, 07 Dec 2012 17:31:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th1lS-0007j2-RJ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:31:07 +0000
Received: from [85.158.138.51:13517] by server-8.bemta-3.messagelabs.com id
	8C/24-07786-5D722C05; Fri, 07 Dec 2012 17:31:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354901458!9266016!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3763 invoked from network); 7 Dec 2012 17:31:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:31:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216809671"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:30:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:30:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th1lJ-0006kS-4s;
	Fri, 07 Dec 2012 17:30:57 +0000
Date: Fri, 7 Dec 2012 17:30:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212071728350.8801@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/9] xen: arm: parse modules from DT during
	early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> The bootloader should populate /chosen/modules/module@<N>/ for each
> module it wishes to pass to the hypervisor. The content of these nodes
> is described in docs/misc/arm/device-tree/booting.txt
> 
> The hypervisor parses for 2 types of module, linux zImages and linux
> initrds. Currently we don't do anything with them.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v4: Use /chosen/modules/module@N
>     Identify module type by compatible property not number.
> v3: Use a reg = < > property for the module address/length.
> v2: Reserve the zeroeth module for Xen itself (not used yet)
>     Use a more idiomatic DT layout
>     Document said layout
> ---
>  docs/misc/arm/device-tree/booting.txt |   25 ++++++++++
>  xen/common/device_tree.c              |   86 +++++++++++++++++++++++++++++++++
>  xen/include/xen/device_tree.h         |   14 +++++
>  3 files changed, 125 insertions(+), 0 deletions(-)
>  create mode 100644 docs/misc/arm/device-tree/booting.txt
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> new file mode 100644
> index 0000000..94cd3f1
> --- /dev/null
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -0,0 +1,25 @@
> +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> +node of the device tree.
> +
> +Each node has the form /chosen/modules/module@<N> and contains the following
> +properties:
> +
> +- compatible
> +
> +	Must be:
> +
> +		"xen,<type>", "xen,multiboot-module"
> +
> +	where <type> must be one of:
> +
> +	- "linux-zimage" -- the dom0 kernel
> +	- "linux-initrd" -- the dom0 ramdisk
> +
> +- reg
> +
> +	Specifies the physical address of the module in RAM and the
> +	length of the module.
> +
> +- bootargs (optional)
> +
> +	Command line associated with this module
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index da0af77..4bb640e 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -270,6 +270,90 @@ static void __init process_cpu_node(const void *fdt, int node,
>      cpumask_set_cpu(start, &cpu_possible_map);
>  }
>  
> +static int __init process_chosen_modules_node(const void *fdt, int node,
> +                                              const char *name, int *depth,
> +                                              u32 address_cells, u32 size_cells)
> +{
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +    int nr, nr_modules = 0;
> +    struct dt_mb_module *mod;
> +    int len;
> +
> +    for ( *depth = 1;
> +          *depth >= 1;
> +          node = fdt_next_node(fdt, node, depth) )
> +    {
> +        name = fdt_get_name(fdt, node, NULL);
> +        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
> +
> +            if ( fdt_node_check_compatible(fdt, node,
> +                                           "xen,multiboot-module" ) != 0 )
> +                early_panic("%s not a compatible module node\n", name);
> +
> +            if ( fdt_node_check_compatible(fdt, node,
> +                                           "xen,linux-zimage") == 0 )
> +                nr = 1;
> +            else if ( fdt_node_check_compatible(fdt, node,
> +                                                "xen,linux-initrd") == 0)
> +                nr = 2;
> +            else
> +                early_panic("%s not a known xen multiboot byte\n");
> +
> +            if ( nr > nr_modules )
> +                nr_modules = nr;
> +
> +            mod = &early_info.modules.module[nr];
> +
> +            prop = fdt_get_property(fdt, node, "reg", NULL);
> +            if ( !prop )
> +                early_panic("node %s missing `reg' property\n", name);
> +
> +            cell = (const u32 *)prop->data;
> +            device_tree_get_reg(&cell, address_cells, size_cells,
> +                                &mod->start, &mod->size);
> +
> +            prop = fdt_get_property(fdt, node, "bootargs", &len);
> +            if ( prop )
> +            {
> +                if ( len > sizeof(mod->cmdline) )
> +                    early_panic("module %d command line too long\n", nr);
> +
> +                safe_strcpy(mod->cmdline, prop->data);
> +            }
> +            else
> +                mod->cmdline[0] = 0;
> +        }
> +    }
> +
> +    for ( nr = 1 ; nr < nr_modules ; nr++ )
> +    {
> +        mod = &early_info.modules.module[nr];
> +        if ( !mod->start || !mod->size )
> +            early_panic("module %d  missing / invalid\n", nr);
> +    }
> +
> +    early_info.modules.nr_mods = nr_modules;
> +    return node;
> +}
> +
> +static void __init process_chosen_node(const void *fdt, int node,
> +                                       const char *name,
> +                                       u32 address_cells, u32 size_cells)
> +{
> +    int depth;
> +
> +    for ( depth = 0;
> +          depth >= 0;
> +          node = fdt_next_node(fdt, node, &depth) )
> +    {
> +        name = fdt_get_name(fdt, node, NULL);
> +        if ( depth == 1 && strcmp(name, "modules") == 0 )
> +            node = process_chosen_modules_node(fdt, node, name, &depth,
> +                                               address_cells, size_cells);
> +    }
> +}
> +
>  static int __init early_scan_node(const void *fdt,
>                                    int node, const char *name, int depth,
>                                    u32 address_cells, u32 size_cells,
> @@ -279,6 +363,8 @@ static int __init early_scan_node(const void *fdt,
>          process_memory_node(fdt, node, name, address_cells, size_cells);
>      else if ( device_tree_type_matches(fdt, node, "cpu") )
>          process_cpu_node(fdt, node, name, address_cells, size_cells);
> +    else if ( device_tree_node_matches(fdt, node, "chosen") )
> +        process_chosen_node(fdt, node, name, address_cells, size_cells);
>  
>      return 0;
>  }

You have really written a lot of code here!
I would have thought that just matching on the compatible string would
be enough:

else if ( device_tree_node_matches(fdt, node, "linux-zimage") )
     process_linuxzimage_node(fdt, node, name, address_cells, size_cells);
else if ( device_tree_node_matches(fdt, node, "linux-initrd") )
     process_linuxinitrd_node(fdt, node, name, address_cells, size_cells);

so that your process_linuxzimage_node and process_linuxinitrd_node start
from the right node and have everything they need to parse it



> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 4d010c0..c383677 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -15,6 +15,7 @@
>  #define DEVICE_TREE_MAX_DEPTH 16
>  
>  #define NR_MEM_BANKS 8
> +#define NR_MODULES 2
>  
>  struct membank {
>      paddr_t start;
> @@ -26,8 +27,21 @@ struct dt_mem_info {
>      struct membank bank[NR_MEM_BANKS];
>  };
>  
> +struct dt_mb_module {
> +    paddr_t start;
> +    paddr_t size;
> +    char cmdline[1024];
> +};
> +
> +struct dt_module_info {
> +    int nr_mods;
> +    /* Module 0 is Xen itself, followed by the provided modules-proper */
> +    struct dt_mb_module module[NR_MODULES + 1];
> +};
> +
>  struct dt_early_info {
>      struct dt_mem_info mem;
> +    struct dt_module_info modules;
>  };
>  
>  typedef int (*device_tree_node_func)(const void *fdt,
> -- 
> 1.7.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:36:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th1q4-0007y9-Gc; Fri, 07 Dec 2012 17:35:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th1q2-0007y3-SE
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:35:51 +0000
Received: from [85.158.143.35:43989] by server-3.bemta-4.messagelabs.com id
	25/D8-18211-6F822C05; Fri, 07 Dec 2012 17:35:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354901747!12986006!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22289 invoked from network); 7 Dec 2012 17:35:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:35:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216810462"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:35:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:35:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th1py-0006p9-Ik;
	Fri, 07 Dec 2012 17:35:46 +0000
Date: Fri, 7 Dec 2012 17:35:41 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1354799451-16876-9-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212071733450.8801@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-9-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] xen: strip /chosen/modules/module@<N>/*
 from dom0 device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> These nodes are used by Xen to find the initial modules.
> 
> Only drop the "xen,multiboot-module" compatible nodes in case someone
> else has a similar idea.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v4 - /chosen/modules/modules@N not /chosen/module@N
> v3 - use a helper to filter out DT elements which are not for dom0.
>      Better than an ad-hoc break in the middle of a loop.
> ---
>  xen/arch/arm/domain_build.c |   40 ++++++++++++++++++++++++++++++++++++++--
>  1 files changed, 38 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7a964f7..27e02e4 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -172,6 +172,40 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
>      return prop;
>  }
>  
> +/* Returns the next node in fdt (starting from offset) which should be
> + * passed through to dom0.
> + */
> +static int fdt_next_dom0_node(const void *fdt, int node,
> +                              int *depth_out,
> +                              int parents[DEVICE_TREE_MAX_DEPTH])
> +{
> +    int depth = *depth_out;
> +
> +    while ( (node = fdt_next_node(fdt, node, &depth)) &&
> +            node >= 0 && depth >= 0 )
> +    {
> +        if ( depth >= DEVICE_TREE_MAX_DEPTH )
> +            break;
> +
> +        parents[depth] = node;
> +
> +        /* Skip /chosen/modules/module@<N>/ and all subnodes */
> +        if ( depth >= 3 &&
> +             device_tree_node_matches(fdt, parents[1], "chosen") &&
> +             device_tree_node_matches(fdt, parents[2], "modules") &&
> +             device_tree_node_matches(fdt, parents[3], "module") &&
> +             fdt_node_check_compatible(fdt, parents[3],
> +                                       "xen,multiboot-module" ) == 0 )
> +            continue;
> +
> +        /* We've arrived at a node which dom0 is interested in. */
> +        break;
> +    }
> +
> +    *depth_out = depth;
> +    return node;
> +}

Can't we just skip the node if it is compatible with
"xen,multiboot-module", no matter where it lives?  This should simplify
this function greatly and you wouldn't need the parents parameter
anymore.
This way we could have a simple node blacklist based on the compatible
string, all in a single function.


>  static int write_nodes(struct domain *d, struct kernel_info *kinfo,
>                         const void *fdt)
>  {
> @@ -179,11 +213,12 @@ static int write_nodes(struct domain *d, struct kernel_info *kinfo,
>      int depth = 0, last_depth = -1;
>      u32 address_cells[DEVICE_TREE_MAX_DEPTH];
>      u32 size_cells[DEVICE_TREE_MAX_DEPTH];
> +    int parents[DEVICE_TREE_MAX_DEPTH];
>      int ret;
>  
>      for ( node = 0, depth = 0;
>            node >= 0 && depth >= 0;
> -          node = fdt_next_node(fdt, node, &depth) )
> +          node = fdt_next_dom0_node(fdt, node, &depth, parents) )
>      {
>          const char *name;
>  
> @@ -191,7 +226,8 @@ static int write_nodes(struct domain *d, struct kernel_info *kinfo,
>  
>          if ( depth >= DEVICE_TREE_MAX_DEPTH )
>          {
> -            printk("warning: node `%s' is nested too deep\n", name);
> +            printk("warning: node `%s' is nested too deep (%d)\n",
> +                   name, depth);
>              continue;
>          }
>  
> -- 
> 1.7.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:49:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th22u-0008KM-1p; Fri, 07 Dec 2012 17:49:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th22s-0008KH-Kq
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 17:49:06 +0000
Received: from [85.158.143.35:27480] by server-1.bemta-4.messagelabs.com id
	4A/88-28401-11C22C05; Fri, 07 Dec 2012 17:49:05 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354902387!13842460!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4976 invoked from network); 7 Dec 2012 17:46:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:46:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47030351"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:46:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:46:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th20C-0006xP-Sc;
	Fri, 07 Dec 2012 17:46:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 17:45:55 +0000
Message-ID: <1354902355-10209-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 2/2] xen/arm: use strcmp in
	device_tree_type_matches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We want to match the exact string rather than just a prefix of it.

Changes in v4:
- get rid of len.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/device_tree.c |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 7a072cb..8b4ef2f 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -44,14 +44,13 @@ bool_t device_tree_node_matches(const void *fdt, int node, const char *match)
 
 bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
 {
-    int len;
     const void *prop;
 
-    prop = fdt_getprop(fdt, node, "device_type", &len);
+    prop = fdt_getprop(fdt, node, "device_type", NULL);
     if ( prop == NULL )
         return 0;
 
-    return !strncmp(prop, match, len);
+    return !strcmp(prop, match);
 }
 
 bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:49:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th234-0008Kx-Eh; Fri, 07 Dec 2012 17:49:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th233-0008Kp-Ff
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 17:49:17 +0000
Received: from [85.158.143.35:30065] by server-2.bemta-4.messagelabs.com id
	2F/E4-30861-C1C22C05; Fri, 07 Dec 2012 17:49:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354902387!13842460!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5150 invoked from network); 7 Dec 2012 17:46:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:46:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47030352"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:46:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:46:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th20C-0006xP-S6;
	Fri, 07 Dec 2012 17:46:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 17:45:54 +0000
Message-ID: <1354902355-10209-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 1/2] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Get the addresses of the GIC distributor, CPU, virtual and virtual CPU
interface registers from the device tree.

Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
and friends because we are still using them from mode_switch.S, which is
executed before the device tree has been parsed. But at least
mode_switch.S is known to contain vexpress-specific code anyway.

Changes in v4:
- return ranges from device_tree_nr_reg_ranges;
- remove hard tab.

Changes in v3:
- printk a message with the GIC interface addresses in gic_init;
- use strcmp in device_tree_node_compatible;
- rename device_tree_get_reg_ranges to device_tree_nr_reg_ranges;
- improve error message in process_gic_node.

Changes in v2:
- remove 2 superflous lines from process_gic_node;
- introduce device_tree_get_reg_ranges;
- add a check for uninitialized GIC interface addresses;
- add a check for non-page aligned GIC interface addresses;
- remove the code to deal with non-page aligned addresses from GICC and
GICH.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c            |   40 ++++++++++++++++------
 xen/common/device_tree.c      |   73 +++++++++++++++++++++++++++++++++++++++--
 xen/include/xen/device_tree.h |    8 ++++
 3 files changed, 107 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0c6fab9..81188f0 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -26,6 +26,7 @@
 #include <xen/errno.h>
 #include <xen/softirq.h>
 #include <xen/list.h>
+#include <xen/device_tree.h>
 #include <asm/p2m.h>
 #include <asm/domain.h>
 
@@ -33,10 +34,8 @@
 
 /* Access to the GIC Distributor registers through the fixmap */
 #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
-#define GICC ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICC1)  \
-                                     + (GIC_CR_OFFSET & 0xfff)))
-#define GICH ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICH)  \
-                                     + (GIC_HR_OFFSET & 0xfff)))
+#define GICC ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICC1)) 
+#define GICH ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICH))
 static void gic_restore_pending_irqs(struct vcpu *v);
 
 /* Global state */
@@ -44,6 +43,7 @@ static struct {
     paddr_t dbase;       /* Address of distributor registers */
     paddr_t cbase;       /* Address of CPU interface registers */
     paddr_t hbase;       /* Address of virtual interface registers */
+    paddr_t vbase;       /* Address of virtual cpu interface registers */
     unsigned int lines;
     unsigned int cpus;
     spinlock_t lock;
@@ -306,10 +306,28 @@ static void __cpuinit gic_hyp_disable(void)
 /* Set up the GIC */
 void __init gic_init(void)
 {
-    /* XXX FIXME get this from devicetree */
-    gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET;
-    gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET;
-    gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET;
+    printk("GIC initialization:\n"
+              "        gic_dist_addr=%"PRIpaddr"\n"
+              "        gic_cpu_addr=%"PRIpaddr"\n"
+              "        gic_hyp_addr=%"PRIpaddr"\n"
+              "        gic_vcpu_addr=%"PRIpaddr"\n",
+              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
+              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);
+    if ( !early_info.gic.gic_dist_addr ||
+         !early_info.gic.gic_cpu_addr ||
+         !early_info.gic.gic_hyp_addr ||
+         !early_info.gic.gic_vcpu_addr )
+        panic("the physical address of one of the GIC interfaces is missing\n");
+    if ( (early_info.gic.gic_dist_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_cpu_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_hyp_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_vcpu_addr & ~PAGE_MASK) )
+        panic("GIC interfaces not page aligned.\n");
+
+    gic.dbase = early_info.gic.gic_dist_addr;
+    gic.cbase = early_info.gic.gic_cpu_addr;
+    gic.hbase = early_info.gic.gic_hyp_addr;
+    gic.vbase = early_info.gic.gic_vcpu_addr;
     set_fixmap(FIXMAP_GICD, gic.dbase >> PAGE_SHIFT, DEV_SHARED);
     BUILD_BUG_ON(FIXMAP_ADDR(FIXMAP_GICC1) !=
                  FIXMAP_ADDR(FIXMAP_GICC2)-PAGE_SIZE);
@@ -569,9 +587,9 @@ int gicv_setup(struct domain *d)
 {
     /* map the gic virtual cpu interface in the gic cpu interface region of
      * the guest */
-    return map_mmio_regions(d, GIC_BASE_ADDRESS + GIC_CR_OFFSET,
-                        GIC_BASE_ADDRESS + GIC_CR_OFFSET + (2 * PAGE_SIZE) - 1,
-                        GIC_BASE_ADDRESS + GIC_VR_OFFSET);
+    return map_mmio_regions(d, gic.cbase,
+                        gic.cbase + (2 * PAGE_SIZE) - 1,
+                        gic.vbase);
 }
 
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index da0af77..7a072cb 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -54,6 +54,33 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
     return !strncmp(prop, match, len);
 }
 
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
+{
+    int len, l;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "compatible", &len);
+    if ( prop == NULL )
+        return 0;
+
+    while ( len > 0 ) {
+        if ( !strcmp(prop, match) )
+            return 1;
+        l = strlen(prop) + 1;
+        prop += l;
+        len -= l;
+    }
+
+    return 0;
+}
+
+static int device_tree_nr_reg_ranges(const struct fdt_property *prop,
+        u32 address_cells, u32 size_cells)
+{
+    u32 reg_cells = address_cells + size_cells;
+    return fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+}
+
 static void __init get_val(const u32 **cell, u32 cells, u64 *val)
 {
     *val = 0;
@@ -209,7 +236,6 @@ static void __init process_memory_node(const void *fdt, int node,
                                        u32 address_cells, u32 size_cells)
 {
     const struct fdt_property *prop;
-    size_t reg_cells;
     int i;
     int banks;
     const u32 *cell;
@@ -230,8 +256,7 @@ static void __init process_memory_node(const void *fdt, int node,
     }
 
     cell = (const u32 *)prop->data;
-    reg_cells = address_cells + size_cells;
-    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+    banks = device_tree_nr_reg_ranges(prop, address_cells, size_cells);
 
     for ( i = 0; i < banks && early_info.mem.nr_banks < NR_MEM_BANKS; i++ )
     {
@@ -270,6 +295,46 @@ static void __init process_cpu_node(const void *fdt, int node,
     cpumask_set_cpu(start, &cpu_possible_map);
 }
 
+static void __init process_gic_node(const void *fdt, int node,
+                                    const char *name,
+                                    u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const u32 *cell;
+    paddr_t start, size;
+    int interfaces;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+                     name);
+        return;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", NULL);
+    if ( !prop )
+    {
+        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        return;
+    }
+
+    cell = (const u32 *)prop->data;
+    interfaces = device_tree_nr_reg_ranges(prop, address_cells, size_cells);
+    if ( interfaces < 4 )
+    {
+        early_printk("fdt: node `%s': not enough ranges\n", name);
+        return;
+    }
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_dist_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_cpu_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_hyp_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_vcpu_addr = start;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -279,6 +344,8 @@ static int __init early_scan_node(const void *fdt,
         process_memory_node(fdt, node, name, address_cells, size_cells);
     else if ( device_tree_type_matches(fdt, node, "cpu") )
         process_cpu_node(fdt, node, name, address_cells, size_cells);
+    else if ( device_tree_node_compatible(fdt, node, "arm,cortex-a15-gic") )
+        process_gic_node(fdt, node, name, address_cells, size_cells);
 
     return 0;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 4d010c0..a0e3a97 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -26,8 +26,16 @@ struct dt_mem_info {
     struct membank bank[NR_MEM_BANKS];
 };
 
+struct dt_gic_info {
+    paddr_t gic_dist_addr;
+    paddr_t gic_cpu_addr;
+    paddr_t gic_hyp_addr;
+    paddr_t gic_vcpu_addr;
+};
+
 struct dt_early_info {
     struct dt_mem_info mem;
+    struct dt_gic_info gic;
 };
 
 typedef int (*device_tree_node_func)(const void *fdt,
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:49:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th234-0008Kx-Eh; Fri, 07 Dec 2012 17:49:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th233-0008Kp-Ff
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 17:49:17 +0000
Received: from [85.158.143.35:30065] by server-2.bemta-4.messagelabs.com id
	2F/E4-30861-C1C22C05; Fri, 07 Dec 2012 17:49:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1354902387!13842460!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5150 invoked from network); 7 Dec 2012 17:46:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 17:46:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47030352"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 17:46:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 12:46:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th20C-0006xP-S6;
	Fri, 07 Dec 2012 17:46:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 17:45:54 +0000
Message-ID: <1354902355-10209-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 1/2] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Get the addresses of the GIC distributor, CPU interface, virtual interface
and virtual CPU interface registers from the device tree.

Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
and friends because we are still using them from mode_switch.S, which is
executed before the device tree has been parsed. But at least
mode_switch.S is known to contain vexpress-specific code anyway.
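
The decoding this relies on can be sketched outside the Xen tree. The
following is a minimal, hypothetical stand-in for the patch's
device_tree_get_reg/device_tree_nr_reg_ranges helpers (the names
read_cells and nr_reg_ranges below are mine, not Xen's): a GIC node's
`reg' property is a flat array of big-endian 32-bit cells forming
(address, size) pairs — four pairs here, for the distributor, CPU, hyp
and vcpu interfaces — assuming #address-cells = <2> and #size-cells = <2>:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Number of (address, size) ranges in a `reg' property of `len' bytes;
 * each cell is one 32-bit big-endian word. */
static int nr_reg_ranges(size_t len, unsigned int ac, unsigned int sc)
{
    return (int)(len / ((ac + sc) * sizeof(uint32_t)));
}

/* Read `cells' big-endian 32-bit cells from *p into one 64-bit value,
 * advancing *p; byte-wise, so it works regardless of host endianness. */
static uint64_t read_cells(const uint8_t **p, unsigned int cells)
{
    uint64_t val = 0;
    while ( cells-- )
        for ( int i = 0; i < 4; i++ )
            val = (val << 8) | *(*p)++;
    return val;
}
```

A caller would invoke read_cells twice per range (address, then size),
exactly as process_gic_node calls device_tree_get_reg four times in a row.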

Changes in v4:
- return ranges from device_tree_nr_reg_ranges;
- remove hard tab.

Changes in v3:
- printk a message with the GIC interface addresses in gic_init;
- use strcmp in device_tree_node_compatible;
- rename device_tree_get_reg_ranges to device_tree_nr_reg_ranges;
- improve error message in process_gic_node.

Changes in v2:
- remove 2 superfluous lines from process_gic_node;
- introduce device_tree_get_reg_ranges;
- add a check for uninitialized GIC interface addresses;
- add a check for non-page aligned GIC interface addresses;
- remove the code to deal with non-page aligned addresses from GICC and
GICH.
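
The alignment check added in v2 has a simple shape; a standalone sketch
(PAGE_SHIFT of 12 assumed here, matching the usual 4K pages — the
misaligned() name is illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Non-zero iff addr is not page aligned: any bits below PAGE_SHIFT set. */
static int misaligned(uint64_t addr)
{
    return (addr & ~PAGE_MASK) != 0;
}
```

gic_init panics when this holds for any of the four interface addresses,
since the fixmap mappings and gicv_setup's MMIO mapping both assume
page-aligned bases.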


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c            |   40 ++++++++++++++++------
 xen/common/device_tree.c      |   73 +++++++++++++++++++++++++++++++++++++++--
 xen/include/xen/device_tree.h |    8 ++++
 3 files changed, 107 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0c6fab9..81188f0 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -26,6 +26,7 @@
 #include <xen/errno.h>
 #include <xen/softirq.h>
 #include <xen/list.h>
+#include <xen/device_tree.h>
 #include <asm/p2m.h>
 #include <asm/domain.h>
 
@@ -33,10 +34,8 @@
 
 /* Access to the GIC Distributor registers through the fixmap */
 #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
-#define GICC ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICC1)  \
-                                     + (GIC_CR_OFFSET & 0xfff)))
-#define GICH ((volatile uint32_t *) (FIXMAP_ADDR(FIXMAP_GICH)  \
-                                     + (GIC_HR_OFFSET & 0xfff)))
+#define GICC ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICC1)) 
+#define GICH ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICH))
 static void gic_restore_pending_irqs(struct vcpu *v);
 
 /* Global state */
@@ -44,6 +43,7 @@ static struct {
     paddr_t dbase;       /* Address of distributor registers */
     paddr_t cbase;       /* Address of CPU interface registers */
     paddr_t hbase;       /* Address of virtual interface registers */
+    paddr_t vbase;       /* Address of virtual cpu interface registers */
     unsigned int lines;
     unsigned int cpus;
     spinlock_t lock;
@@ -306,10 +306,28 @@ static void __cpuinit gic_hyp_disable(void)
 /* Set up the GIC */
 void __init gic_init(void)
 {
-    /* XXX FIXME get this from devicetree */
-    gic.dbase = GIC_BASE_ADDRESS + GIC_DR_OFFSET;
-    gic.cbase = GIC_BASE_ADDRESS + GIC_CR_OFFSET;
-    gic.hbase = GIC_BASE_ADDRESS + GIC_HR_OFFSET;
+    printk("GIC initialization:\n"
+              "        gic_dist_addr=%"PRIpaddr"\n"
+              "        gic_cpu_addr=%"PRIpaddr"\n"
+              "        gic_hyp_addr=%"PRIpaddr"\n"
+              "        gic_vcpu_addr=%"PRIpaddr"\n",
+              early_info.gic.gic_dist_addr, early_info.gic.gic_cpu_addr,
+              early_info.gic.gic_hyp_addr, early_info.gic.gic_vcpu_addr);
+    if ( !early_info.gic.gic_dist_addr ||
+         !early_info.gic.gic_cpu_addr ||
+         !early_info.gic.gic_hyp_addr ||
+         !early_info.gic.gic_vcpu_addr )
+        panic("the physical address of one of the GIC interfaces is missing\n");
+    if ( (early_info.gic.gic_dist_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_cpu_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_hyp_addr & ~PAGE_MASK) ||
+         (early_info.gic.gic_vcpu_addr & ~PAGE_MASK) )
+        panic("GIC interfaces not page aligned.\n");
+
+    gic.dbase = early_info.gic.gic_dist_addr;
+    gic.cbase = early_info.gic.gic_cpu_addr;
+    gic.hbase = early_info.gic.gic_hyp_addr;
+    gic.vbase = early_info.gic.gic_vcpu_addr;
     set_fixmap(FIXMAP_GICD, gic.dbase >> PAGE_SHIFT, DEV_SHARED);
     BUILD_BUG_ON(FIXMAP_ADDR(FIXMAP_GICC1) !=
                  FIXMAP_ADDR(FIXMAP_GICC2)-PAGE_SIZE);
@@ -569,9 +587,9 @@ int gicv_setup(struct domain *d)
 {
     /* map the gic virtual cpu interface in the gic cpu interface region of
      * the guest */
-    return map_mmio_regions(d, GIC_BASE_ADDRESS + GIC_CR_OFFSET,
-                        GIC_BASE_ADDRESS + GIC_CR_OFFSET + (2 * PAGE_SIZE) - 1,
-                        GIC_BASE_ADDRESS + GIC_VR_OFFSET);
+    return map_mmio_regions(d, gic.cbase,
+                        gic.cbase + (2 * PAGE_SIZE) - 1,
+                        gic.vbase);
 }
 
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index da0af77..7a072cb 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -54,6 +54,33 @@ bool_t device_tree_type_matches(const void *fdt, int node, const char *match)
     return !strncmp(prop, match, len);
 }
 
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match)
+{
+    int len, l;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "compatible", &len);
+    if ( prop == NULL )
+        return 0;
+
+    while ( len > 0 ) {
+        if ( !strcmp(prop, match) )
+            return 1;
+        l = strlen(prop) + 1;
+        prop += l;
+        len -= l;
+    }
+
+    return 0;
+}
+
+static int device_tree_nr_reg_ranges(const struct fdt_property *prop,
+        u32 address_cells, u32 size_cells)
+{
+    u32 reg_cells = address_cells + size_cells;
+    return fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+}
+
 static void __init get_val(const u32 **cell, u32 cells, u64 *val)
 {
     *val = 0;
@@ -209,7 +236,6 @@ static void __init process_memory_node(const void *fdt, int node,
                                        u32 address_cells, u32 size_cells)
 {
     const struct fdt_property *prop;
-    size_t reg_cells;
     int i;
     int banks;
     const u32 *cell;
@@ -230,8 +256,7 @@ static void __init process_memory_node(const void *fdt, int node,
     }
 
     cell = (const u32 *)prop->data;
-    reg_cells = address_cells + size_cells;
-    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+    banks = device_tree_nr_reg_ranges(prop, address_cells, size_cells);
 
     for ( i = 0; i < banks && early_info.mem.nr_banks < NR_MEM_BANKS; i++ )
     {
@@ -270,6 +295,46 @@ static void __init process_cpu_node(const void *fdt, int node,
     cpumask_set_cpu(start, &cpu_possible_map);
 }
 
+static void __init process_gic_node(const void *fdt, int node,
+                                    const char *name,
+                                    u32 address_cells, u32 size_cells)
+{
+    const struct fdt_property *prop;
+    const u32 *cell;
+    paddr_t start, size;
+    int interfaces;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+                     name);
+        return;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", NULL);
+    if ( !prop )
+    {
+        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        return;
+    }
+
+    cell = (const u32 *)prop->data;
+    interfaces = device_tree_nr_reg_ranges(prop, address_cells, size_cells);
+    if ( interfaces < 4 )
+    {
+        early_printk("fdt: node `%s': not enough ranges\n", name);
+        return;
+    }
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_dist_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_cpu_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_hyp_addr = start;
+    device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+    early_info.gic.gic_vcpu_addr = start;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -279,6 +344,8 @@ static int __init early_scan_node(const void *fdt,
         process_memory_node(fdt, node, name, address_cells, size_cells);
     else if ( device_tree_type_matches(fdt, node, "cpu") )
         process_cpu_node(fdt, node, name, address_cells, size_cells);
+    else if ( device_tree_node_compatible(fdt, node, "arm,cortex-a15-gic") )
+        process_gic_node(fdt, node, name, address_cells, size_cells);
 
     return 0;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 4d010c0..a0e3a97 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -26,8 +26,16 @@ struct dt_mem_info {
     struct membank bank[NR_MEM_BANKS];
 };
 
+struct dt_gic_info {
+    paddr_t gic_dist_addr;
+    paddr_t gic_cpu_addr;
+    paddr_t gic_hyp_addr;
+    paddr_t gic_vcpu_addr;
+};
+
 struct dt_early_info {
     struct dt_mem_info mem;
+    struct dt_gic_info gic;
 };
 
 typedef int (*device_tree_node_func)(const void *fdt,
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 17:56:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 17:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2A5-0000IS-BM; Fri, 07 Dec 2012 17:56:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1Th2A4-0000IN-5M
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 17:56:32 +0000
Received: from [85.158.139.211:26305] by server-5.bemta-5.messagelabs.com id
	49/65-22648-ECD22C05; Fri, 07 Dec 2012 17:56:30 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1354902987!18085811!1
X-Originating-IP: [216.32.180.189]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21599 invoked from network); 7 Dec 2012 17:56:29 -0000
Received: from co1ehsobe006.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.189)
	by server-11.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Dec 2012 17:56:29 -0000
Received: from mail113-co1-R.bigfish.com (10.243.78.249) by
	CO1EHSOBE014.bigfish.com (10.243.66.77) with Microsoft SMTP Server id
	14.1.225.23; Fri, 7 Dec 2012 17:56:27 +0000
Received: from mail113-co1 (localhost [127.0.0.1])	by
	mail113-co1-R.bigfish.com (Postfix) with ESMTP id 275D96801D2;
	Fri,  7 Dec 2012 17:56:27 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1de0h1202h1e76h1d1ah1d2ahzz8275bhz2dh668h839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1155h)
Received: from mail113-co1 (localhost.localdomain [127.0.0.1]) by mail113-co1
	(MessageSwitch) id 1354902985721489_26508;
	Fri,  7 Dec 2012 17:56:25 +0000 (UTC)
Received: from CO1EHSMHS030.bigfish.com (unknown [10.243.78.250])	by
	mail113-co1.bigfish.com (Postfix) with ESMTP id 7B584800082;
	Fri,  7 Dec 2012 17:56:24 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS030.bigfish.com (10.243.66.40) with Microsoft SMTP Server id
	14.1.225.23; Fri, 7 Dec 2012 17:56:23 +0000
X-WSS-ID: 0MEO9TW-02-OG3-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 226F1FCC00F;	Fri,  7 Dec 2012 11:56:20 -0600 (CST)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 7 Dec 2012 11:38:34 -0600
Received: from linux-62wg.amd.com (163.181.55.254) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server id 14.2.318.4; Fri, 7 Dec 2012
	11:56:21 -0600
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
To: <JBeulich@suse.com>, <Ian.Campbell@citrix.com>
Date: Fri, 7 Dec 2012 12:56:12 -0500
Message-ID: <1354902972-16390-1-git-send-email-boris.ostrovsky@amd.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-OriginatorOrg: amd.com
Cc: boris.ostrovsky@amd.com, Christoph_Egger@gmx.de, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] x86/ucode: Improve error handling and
	container file processing on AMD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not report an error when a patch is not applicable to the current
processor; simply skip it and move on to the next patch in the container
file.

Process container file to the end instead of stopping at the first
applicable patch.

Log the fact that a patch has been applied at KERN_WARNING level, and
modify the debug messages.
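
The resulting control flow — skip rather than fail on a mismatched
patch, keep scanning to the end of the container, remember whether
anything was applied — can be modelled with a toy loop (toy_patch,
patch_fits and scan_container are illustrative names, not the Xen ones):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a container file: each patch carries the processor
 * equivalence ID it targets and a revision. */
struct toy_patch {
    unsigned int equiv_id;
    unsigned int rev;
};

/* A patch fits iff it targets this processor and is newer than what is
 * already running -- a mismatch is not an error, just a skip. */
static int patch_fits(const struct toy_patch *p, unsigned int my_id,
                      unsigned int cur_rev)
{
    return p->equiv_id == my_id && p->rev > cur_rev;
}

/* Scan the whole container, applying every applicable patch in order,
 * and return the revision left running at the end. */
static unsigned int scan_container(const struct toy_patch *patches, size_t n,
                                   unsigned int my_id, unsigned int cur_rev)
{
    for ( size_t i = 0; i < n; i++ )   /* process to the end, not just
                                        * the first applicable patch */
        if ( patch_fits(&patches[i], my_id, cur_rev) )
            cur_rev = patches[i].rev;  /* "apply" the patch */
    return cur_rev;
}
```

In the real patch, cpu_request_microcode additionally records the buffer
offset of the last applied patch so it can be kept for re-apply on resume.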

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
---
 xen/arch/x86/microcode_amd.c |  102 +++++++++++++++++++++++-------------------
 1 file changed, 56 insertions(+), 46 deletions(-)

diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
index 7a54001..5073a3c 100644
--- a/xen/arch/x86/microcode_amd.c
+++ b/xen/arch/x86/microcode_amd.c
@@ -88,13 +88,13 @@ static int collect_cpu_info(int cpu, struct cpu_signature *csig)
 
     rdmsrl(MSR_AMD_PATCHLEVEL, csig->rev);
 
-    printk(KERN_DEBUG "microcode: collect_cpu_info: patch_id=%#x\n",
-           csig->rev);
+    printk(KERN_DEBUG "microcode: CPU%d collect_cpu_info: patch_id=%#x\n",
+           cpu, csig->rev);
 
     return 0;
 }
 
-static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
+static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
 {
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
     const struct microcode_header_amd *mc_header = mc_amd->mpb;
@@ -121,12 +121,7 @@ static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
         return 0;
 
     if ( (mc_header->processor_rev_id) != equiv_cpu_id )
-    {
-        printk(KERN_DEBUG "microcode: CPU%d patch does not match "
-               "(patch is %x, cpu base id is %x) \n",
-               cpu, mc_header->processor_rev_id, equiv_cpu_id);
-        return -EINVAL;
-    }
+        return 0;
 
     if ( mc_header->patch_id <= uci->cpu_sig.rev )
         return 0;
@@ -173,7 +168,7 @@ static int apply_microcode(int cpu)
         return -EIO;
     }
 
-    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
+    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
            cpu, uci->cpu_sig.rev, hdr->patch_id);
 
     uci->cpu_sig.rev = rev;
@@ -181,7 +176,7 @@ static int apply_microcode(int cpu)
     return 0;
 }
 
-static int get_next_ucode_from_buffer_amd(
+static int get_ucode_from_buffer_amd(
     struct microcode_amd *mc_amd,
     const void *buf,
     size_t bufsize,
@@ -194,23 +189,22 @@ static int get_next_ucode_from_buffer_amd(
     off = *offset;
 
     /* No more data */
-    if ( off >= bufsize )
-        return 1;
+    if ( off >= bufsize ) 
+    {
+        printk(KERN_ERR "microcode: Microcode buffer overrun\n");
+        return -EINVAL;
+    }
 
     mpbuf = (const struct mpbhdr *)&bufp[off];
     if ( mpbuf->type != UCODE_UCODE_TYPE )
     {
-        printk(KERN_ERR "microcode: error! "
-               "Wrong microcode payload type field\n");
+        printk(KERN_ERR "microcode: Wrong microcode payload type field\n");
         return -EINVAL;
     }
 
-    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
-           bufsize, mpbuf->len, off);
-
     if ( (off + mpbuf->len) > bufsize )
     {
-        printk(KERN_ERR "microcode: error! Bad data in microcode data file\n");
+        printk(KERN_ERR "microcode: Bad data in microcode data file\n");
         return -EINVAL;
     }
 
@@ -230,6 +224,12 @@ static int get_next_ucode_from_buffer_amd(
 
     *offset = off + mpbuf->len + 8;
 
+    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu, "
+           "equivID %#x rev %#x\n",
+           raw_smp_processor_id(), bufsize, mpbuf->len, off,
+           ((struct microcode_header_amd *)mc_amd->mpb)->processor_rev_id,
+           ((struct microcode_header_amd *)mc_amd->mpb)->patch_id);
+
     return 0;
 }
 
@@ -246,14 +246,14 @@ static int install_equiv_cpu_table(
 
     if ( mpbuf->type != UCODE_EQUIV_CPU_TABLE_TYPE )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Wrong microcode equivalent cpu table type field\n");
         return -EINVAL;
     }
 
     if ( mpbuf->len == 0 )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Wrong microcode equivalent cpu table length\n");
         return -EINVAL;
     }
@@ -261,7 +261,7 @@ static int install_equiv_cpu_table(
     mc_amd->equiv_cpu_table = xmalloc_bytes(mpbuf->len);
     if ( !mc_amd->equiv_cpu_table )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Can not allocate memory for equivalent cpu table\n");
         return -ENOMEM;
     }
@@ -278,8 +278,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
 {
     struct microcode_amd *mc_amd, *mc_old;
     size_t offset = bufsize;
+    size_t last_offset, applied_offset = 0;
     int error = 0;
-    int ret;
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
 
     /* We should bind the task to the CPU */
@@ -287,8 +287,7 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
 
     if ( *(const uint32_t *)buf != UCODE_MAGIC )
     {
-        printk(KERN_ERR "microcode: error! Wrong "
-               "microcode patch file magic\n");
+        printk(KERN_ERR "microcode: Wrong microcode patch file magic\n");
         error = -EINVAL;
         goto out;
     }
@@ -296,7 +295,7 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
     mc_amd = xmalloc(struct microcode_amd);
     if ( !mc_amd )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Can not allocate memory for microcode patch\n");
         error = -ENOMEM;
         goto out;
@@ -321,33 +320,39 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
      */
     mc_amd->mpb = NULL;
     mc_amd->mpb_size = 0;
-    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
-                                                  &offset)) == 0 )
+    last_offset = offset;
+    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
+                                               &offset)) == 0 )
     {
-        error = microcode_fits(mc_amd, cpu);
-        if (error <= 0)
-            continue;
-
-        error = apply_microcode(cpu);
-        if (error == 0)
+        if ( microcode_fits(mc_amd, cpu) )
         {
-            error = 1;
-            break;
+            error = apply_microcode(cpu);
+            if (error == 0)
+                applied_offset = last_offset;
+            else
+                break;
         }
-    }
 
-    if ( ret < 0 )
-        error = ret;
+        last_offset = offset;
+
+        if ( offset >= bufsize )
+            break;
+    }
 
     /* On success keep the microcode patch for
      * re-apply on resume.
      */
-    if ( error == 1 )
+    if ( applied_offset != 0 )
     {
-        xfree(mc_old);
-        error = 0;
+        int ret = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
+                                            &applied_offset);
+        if (ret == 0)
+            xfree(mc_old);
+        else
+            error = ret;
     }
-    else
+
+    if ( applied_offset == 0 || error != 0 )
     {
         xfree(mc_amd);
         uci->mc.mc_amd = mc_old;
@@ -356,6 +361,12 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
   out:
     svm_host_osvw_init();
 
+    /* 
+     * In some cases we may return an error even if processor's microcode has
+     * been updated. For example, the first patch in a container file is loaded
+     * successfully but subsequent container file processing encounters a
+     * failure.
+     */
     return error;
 }
 
@@ -364,10 +375,9 @@ static int microcode_resume_match(int cpu, const void *mc)
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
     struct microcode_amd *mc_amd = uci->mc.mc_amd;
     const struct microcode_amd *src = mc;
-    int res = microcode_fits(src, cpu);
 
-    if ( res <= 0 )
-        return res;
+    if ( !microcode_fits(src, cpu) )
+        return 0;
 
     if ( src != mc_amd )
     {
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not report an error when a patch is not applicable to the current
processor; simply skip it and move on to the next patch in the container file.

Process the container file to the end instead of stopping at the first
applicable patch.

Log the fact that a patch has been applied at KERN_WARNING level, and
adjust the debug messages.
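
The revised control flow can be sketched as a small standalone model
(simplified, with stand-in names -- `fits[]` stands for the per-patch
microcode_fits() check; the real logic is in cpu_request_microcode() in
the diff below):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Model of the revised container processing: a non-applicable entry is
 * skipped silently (no error), and iteration continues to the end of
 * the buffer rather than stopping at the first applicable patch.
 * Returns the index of the last patch "applied", or -1 if none fit.
 */
static int process_container(const int *fits, size_t n)
{
    int applied = -1;
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        if ( !fits[i] )
            continue;          /* not applicable: skip, report nothing */
        applied = (int)i;      /* "apply" it and remember its position */
    }

    return applied;
}
```

With this shape, an error is only ever propagated for hard failures
(bad magic, truncated buffer, a failed MSR write), never for a mere
mismatch between a patch and the current processor.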

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
---
 xen/arch/x86/microcode_amd.c |  102 +++++++++++++++++++++++-------------------
 1 file changed, 56 insertions(+), 46 deletions(-)

diff --git a/xen/arch/x86/microcode_amd.c b/xen/arch/x86/microcode_amd.c
index 7a54001..5073a3c 100644
--- a/xen/arch/x86/microcode_amd.c
+++ b/xen/arch/x86/microcode_amd.c
@@ -88,13 +88,13 @@ static int collect_cpu_info(int cpu, struct cpu_signature *csig)
 
     rdmsrl(MSR_AMD_PATCHLEVEL, csig->rev);
 
-    printk(KERN_DEBUG "microcode: collect_cpu_info: patch_id=%#x\n",
-           csig->rev);
+    printk(KERN_DEBUG "microcode: CPU%d collect_cpu_info: patch_id=%#x\n",
+           cpu, csig->rev);
 
     return 0;
 }
 
-static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
+static bool_t microcode_fits(const struct microcode_amd *mc_amd, int cpu)
 {
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
     const struct microcode_header_amd *mc_header = mc_amd->mpb;
@@ -121,12 +121,7 @@ static int microcode_fits(const struct microcode_amd *mc_amd, int cpu)
         return 0;
 
     if ( (mc_header->processor_rev_id) != equiv_cpu_id )
-    {
-        printk(KERN_DEBUG "microcode: CPU%d patch does not match "
-               "(patch is %x, cpu base id is %x) \n",
-               cpu, mc_header->processor_rev_id, equiv_cpu_id);
-        return -EINVAL;
-    }
+        return 0;
 
     if ( mc_header->patch_id <= uci->cpu_sig.rev )
         return 0;
@@ -173,7 +168,7 @@ static int apply_microcode(int cpu)
         return -EIO;
     }
 
-    printk(KERN_INFO "microcode: CPU%d updated from revision %#x to %#x\n",
+    printk(KERN_WARNING "microcode: CPU%d updated from revision %#x to %#x\n",
            cpu, uci->cpu_sig.rev, hdr->patch_id);
 
     uci->cpu_sig.rev = rev;
@@ -181,7 +176,7 @@ static int apply_microcode(int cpu)
     return 0;
 }
 
-static int get_next_ucode_from_buffer_amd(
+static int get_ucode_from_buffer_amd(
     struct microcode_amd *mc_amd,
     const void *buf,
     size_t bufsize,
@@ -194,23 +189,22 @@ static int get_next_ucode_from_buffer_amd(
     off = *offset;
 
     /* No more data */
-    if ( off >= bufsize )
-        return 1;
+    if ( off >= bufsize ) 
+    {
+        printk(KERN_ERR "microcode: Microcode buffer overrun\n");
+        return -EINVAL;
+    }
 
     mpbuf = (const struct mpbhdr *)&bufp[off];
     if ( mpbuf->type != UCODE_UCODE_TYPE )
     {
-        printk(KERN_ERR "microcode: error! "
-               "Wrong microcode payload type field\n");
+        printk(KERN_ERR "microcode: Wrong microcode payload type field\n");
         return -EINVAL;
     }
 
-    printk(KERN_DEBUG "microcode: size %zu, block size %u, offset %zu\n",
-           bufsize, mpbuf->len, off);
-
     if ( (off + mpbuf->len) > bufsize )
     {
-        printk(KERN_ERR "microcode: error! Bad data in microcode data file\n");
+        printk(KERN_ERR "microcode: Bad data in microcode data file\n");
         return -EINVAL;
     }
 
@@ -230,6 +224,12 @@ static int get_next_ucode_from_buffer_amd(
 
     *offset = off + mpbuf->len + 8;
 
+    printk(KERN_DEBUG "microcode: CPU%d size %zu, block size %u, offset %zu, "
+           "equivID %#x rev %#x\n",
+           raw_smp_processor_id(), bufsize, mpbuf->len, off,
+           ((struct microcode_header_amd *)mc_amd->mpb)->processor_rev_id,
+           ((struct microcode_header_amd *)mc_amd->mpb)->patch_id);
+
     return 0;
 }
 
@@ -246,14 +246,14 @@ static int install_equiv_cpu_table(
 
     if ( mpbuf->type != UCODE_EQUIV_CPU_TABLE_TYPE )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Wrong microcode equivalent cpu table type field\n");
         return -EINVAL;
     }
 
     if ( mpbuf->len == 0 )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Wrong microcode equivalent cpu table length\n");
         return -EINVAL;
     }
@@ -261,7 +261,7 @@ static int install_equiv_cpu_table(
     mc_amd->equiv_cpu_table = xmalloc_bytes(mpbuf->len);
     if ( !mc_amd->equiv_cpu_table )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Can not allocate memory for equivalent cpu table\n");
         return -ENOMEM;
     }
@@ -278,8 +278,8 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
 {
     struct microcode_amd *mc_amd, *mc_old;
     size_t offset = bufsize;
+    size_t last_offset, applied_offset = 0;
     int error = 0;
-    int ret;
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
 
     /* We should bind the task to the CPU */
@@ -287,8 +287,7 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
 
     if ( *(const uint32_t *)buf != UCODE_MAGIC )
     {
-        printk(KERN_ERR "microcode: error! Wrong "
-               "microcode patch file magic\n");
+        printk(KERN_ERR "microcode: Wrong microcode patch file magic\n");
         error = -EINVAL;
         goto out;
     }
@@ -296,7 +295,7 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
     mc_amd = xmalloc(struct microcode_amd);
     if ( !mc_amd )
     {
-        printk(KERN_ERR "microcode: error! "
+        printk(KERN_ERR "microcode: "
                "Can not allocate memory for microcode patch\n");
         error = -ENOMEM;
         goto out;
@@ -321,33 +320,39 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
      */
     mc_amd->mpb = NULL;
     mc_amd->mpb_size = 0;
-    while ( (ret = get_next_ucode_from_buffer_amd(mc_amd, buf, bufsize,
-                                                  &offset)) == 0 )
+    last_offset = offset;
+    while ( (error = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
+                                               &offset)) == 0 )
     {
-        error = microcode_fits(mc_amd, cpu);
-        if (error <= 0)
-            continue;
-
-        error = apply_microcode(cpu);
-        if (error == 0)
+        if ( microcode_fits(mc_amd, cpu) )
         {
-            error = 1;
-            break;
+            error = apply_microcode(cpu);
+            if (error == 0)
+                applied_offset = last_offset;
+            else
+                break;
         }
-    }
 
-    if ( ret < 0 )
-        error = ret;
+        last_offset = offset;
+
+        if ( offset >= bufsize )
+            break;
+    }
 
     /* On success keep the microcode patch for
      * re-apply on resume.
      */
-    if ( error == 1 )
+    if ( applied_offset != 0 )
     {
-        xfree(mc_old);
-        error = 0;
+        int ret = get_ucode_from_buffer_amd(mc_amd, buf, bufsize,
+                                            &applied_offset);
+        if (ret == 0)
+            xfree(mc_old);
+        else
+            error = ret;
     }
-    else
+
+    if ( applied_offset == 0 || error != 0 )
     {
         xfree(mc_amd);
         uci->mc.mc_amd = mc_old;
@@ -356,6 +361,12 @@ static int cpu_request_microcode(int cpu, const void *buf, size_t bufsize)
   out:
     svm_host_osvw_init();
 
+    /* 
+     * In some cases we may return an error even if processor's microcode has
+     * been updated. For example, the first patch in a container file is loaded
+     * successfully but subsequent container file processing encounters a
+     * failure.
+     */
     return error;
 }
 
@@ -364,10 +375,9 @@ static int microcode_resume_match(int cpu, const void *mc)
     struct ucode_cpu_info *uci = &per_cpu(ucode_cpu_info, cpu);
     struct microcode_amd *mc_amd = uci->mc.mc_amd;
     const struct microcode_amd *src = mc;
-    int res = microcode_fits(src, cpu);
 
-    if ( res <= 0 )
-        return res;
+    if ( !microcode_fits(src, cpu) )
+        return 0;
 
     if ( src != mc_amd )
     {
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:01:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2EP-0000WH-7H; Fri, 07 Dec 2012 18:01:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Th2EN-0000WA-8k
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 18:00:59 +0000
Received: from [193.109.254.147:58021] by server-13.bemta-14.messagelabs.com
	id 93/D2-11239-ADE22C05; Fri, 07 Dec 2012 18:00:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354903253!9333111!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21567 invoked from network); 7 Dec 2012 18:00:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:00:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; 
   d="scan'208";a="2163"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 18:00:54 +0000
Received: from mac.citrite.net (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	18:00:53 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 19:00:31 +0100
Message-ID: <1354903231-1808-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] xen-blkfront: handle bvecs with partial data
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Q3VycmVudGx5IGJsa2Zyb250IGZhaWxzIHRvIGhhbmRsZSBjYXNlcyBpbiBibGtpZl9jb21wbGV0
aW9uIGxpa2UgdGhlCmZvbGxvd2luZzoKCjFzdCBsb29wIGluIHJxX2Zvcl9lYWNoX3NlZ21lbnQK
ICogYnZfb2Zmc2V0OiAzNTg0CiAqIGJ2X2xlbjogNTEyCiAqIG9mZnNldCArPSBidl9sZW4KICog
aTogMAoKMm5kIGxvb3A6CiAqIGJ2X29mZnNldDogMAogKiBidl9sZW46IDUxMgogKiBpOiAwCgpJ
biB0aGUgc2Vjb25kIGxvb3AgaSBzaG91bGQgYmUgMSwgc2luY2Ugd2UgYXNzdW1lIHdlIG9ubHkg
d2FudGVkIHRvCnJlYWQgYSBwYXJ0IG9mIHRoZSBwcmV2aW91cyBwYWdlLiBUaGlzIHBhdGNoZXMg
Zml4ZXMgdGhpcyBjYXNlcyB3aGVyZQpvbmx5IGEgcGFydCBvZiB0aGUgc2hhcmVkIHBhZ2UgaXMg
cmVhZCwgYW5kIGJsa2lmX2NvbXBsZXRpb24gYXNzdW1lcwp0aGF0IGlmIHRoZSBidl9vZmZzZXQg
b2YgYSBidmVjIGlzIGxlc3MgdGhhbiB0aGUgcHJldmlvdXMgYnZfb2Zmc2V0CnBsdXMgdGhlIGJ2
X3NpemUgd2UgaGF2ZSB0byBzd2l0Y2ggdG8gdGhlIG5leHQgc2hhcmVkIHBhZ2UuCgpSZXBvcnRl
ZC1ieTogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxrb25yYWQud2lsa0BvcmFjbGUuY29tPgpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KQ2M6IGxp
bnV4LWtlcm5lbEB2Z2VyLmtlcm5lbC5vcmcKQ2M6IEtvbnJhZCBSemVzenV0ZWsgV2lsayA8a29u
cmFkLndpbGtAb3JhY2xlLmNvbT4KLS0tCiBkcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5jIHwg
ICAgNyArKysrLS0tCiAxIGZpbGVzIGNoYW5nZWQsIDQgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlv
bnMoLSkKCmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5jIGIvZHJpdmVy
cy9ibG9jay94ZW4tYmxrZnJvbnQuYwppbmRleCBkZjIxYjA1Li44NjBmNzZkIDEwMDY0NAotLS0g
YS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5jCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJs
a2Zyb250LmMKQEAgLTg0Myw3ICs4NDMsNyBAQCBzdGF0aWMgdm9pZCBibGtpZl9mcmVlKHN0cnVj
dCBibGtmcm9udF9pbmZvICppbmZvLCBpbnQgc3VzcGVuZCkKIHN0YXRpYyB2b2lkIGJsa2lmX2Nv
bXBsZXRpb24oc3RydWN0IGJsa19zaGFkb3cgKnMsIHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZv
LAogCQkJICAgICBzdHJ1Y3QgYmxraWZfcmVzcG9uc2UgKmJyZXQpCiB7Ci0JaW50IGk7CisJaW50
IGkgPSAwOwogCXN0cnVjdCBiaW9fdmVjICpidmVjOwogCXN0cnVjdCByZXFfaXRlcmF0b3IgaXRl
cjsKIAl1bnNpZ25lZCBsb25nIGZsYWdzOwpAQCAtODYwLDcgKzg2MCw4IEBAIHN0YXRpYyB2b2lk
IGJsa2lmX2NvbXBsZXRpb24oc3RydWN0IGJsa19zaGFkb3cgKnMsIHN0cnVjdCBibGtmcm9udF9p
bmZvICppbmZvLAogCQkgKi8KIAkJcnFfZm9yX2VhY2hfc2VnbWVudChidmVjLCBzLT5yZXF1ZXN0
LCBpdGVyKSB7CiAJCQlCVUdfT04oKGJ2ZWMtPmJ2X29mZnNldCArIGJ2ZWMtPmJ2X2xlbikgPiBQ
QUdFX1NJWkUpOwotCQkJaSA9IG9mZnNldCA+PiBQQUdFX1NISUZUOworCQkJaWYgKGJ2ZWMtPmJ2
X29mZnNldCA8IG9mZnNldCkKKwkJCQlpKys7CiAJCQlCVUdfT04oaSA+PSBzLT5yZXEudS5ydy5u
cl9zZWdtZW50cyk7CiAJCQlzaGFyZWRfZGF0YSA9IGttYXBfYXRvbWljKAogCQkJCXBmbl90b19w
YWdlKHMtPmdyYW50c191c2VkW2ldLT5wZm4pKTsKQEAgLTg2OSw3ICs4NzAsNyBAQCBzdGF0aWMg
dm9pZCBibGtpZl9jb21wbGV0aW9uKHN0cnVjdCBibGtfc2hhZG93ICpzLCBzdHJ1Y3QgYmxrZnJv
bnRfaW5mbyAqaW5mbywKIAkJCQlidmVjLT5idl9sZW4pOwogCQkJYnZlY19rdW5tYXBfaXJxKGJ2
ZWNfZGF0YSwgJmZsYWdzKTsKIAkJCWt1bm1hcF9hdG9taWMoc2hhcmVkX2RhdGEpOwotCQkJb2Zm
c2V0ICs9IGJ2ZWMtPmJ2X2xlbjsKKwkJCW9mZnNldCA9IGJ2ZWMtPmJ2X29mZnNldCArIGJ2ZWMt
PmJ2X2xlbjsKIAkJfQogCX0KIAkvKiBBZGQgdGhlIHBlcnNpc3RlbnQgZ3JhbnQgaW50byB0aGUg
bGlzdCBvZiBmcmVlIGdyYW50cyAqLwotLSAKMS43LjcuNSAoQXBwbGUgR2l0LTI2KQoKCl9fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWls
aW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVu
LWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:02:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Fo-0000bB-N1; Fri, 07 Dec 2012 18:02:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Fo-0000b3-0h
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:02:28 +0000
Received: from [85.158.143.35:20816] by server-1.bemta-4.messagelabs.com id
	25/11-28401-33F22C05; Fri, 07 Dec 2012 18:02:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354903344!5483038!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10173 invoked from network); 7 Dec 2012 18:02:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:02:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47032551"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:02:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:02:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2Fj-0007FC-EY;
	Fri, 07 Dec 2012 18:02:23 +0000
Date: Fri, 7 Dec 2012 18:02:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 0/8] xen: ARM HDLCD video driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series introduces a very simple driver for the ARM HDLCD
Controller, which means that we can finally have something on the screen
while Xen is booting on ARM :)

The driver is capable of reading the mode property on device tree and
setting the HDLCD accordingly. It is also capable of setting the
required OSC5 timer to the right frequency for the pixel clock.

In order to reduce code duplication with x86, I tried to generalize the
existing vesa character rendering functions into an architecture-agnostic
framebuffer driver that can be used by both the vesa and hdlcd drivers.

I would very much appreciate it if you could take a close look at the
vesa changes: I don't have any x86 test machines that boot in vesa mode,
so I couldn't test them.
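
The kind of helper being factored out can be sketched roughly as follows
(a hypothetical illustration, not the actual fb.c/fb.h interface from the
series -- names, the fixed 8bpp depth, and the fixed pitch are all
stand-ins; the real code takes these from the probed mode data):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of an architecture-agnostic framebuffer primitive of the sort
 * shared between the vesa and hdlcd backends: blit a 1bpp 8x8 glyph
 * into a linear framebuffer at (x, y).  Depth and pitch are hard-coded
 * here purely for illustration.
 */
#define FB_WIDTH  16
#define FB_HEIGHT 16

static uint8_t fb[FB_WIDTH * FB_HEIGHT];   /* 8bpp, pitch == width */

static void fb_putglyph(const uint8_t glyph[8], int x, int y,
                        uint8_t colour)
{
    int row, col;

    for ( row = 0; row < 8; row++ )
        for ( col = 0; col < 8; col++ )
            if ( glyph[row] & (0x80 >> col) )   /* MSB is leftmost pixel */
                fb[(y + row) * FB_WIDTH + (x + col)] = colour;
}
```

Only this pixel-pushing layer needs to know the framebuffer geometry;
the character-cell bookkeeping above it can then be identical on x86
and ARM.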


Changes in v2:
- rebase on latest xen-unstable;
- add support for multiple resolutions;
- add support to dynamically change the OSC5 motherboard timer;
- add the patch "preserve DTB mappings".



Stefano Stabellini (8):
      xen/arm: introduce early_ioremap
      xen: infrastructure to have cross-platform video drivers
      xen: introduce a generic framebuffer driver
      xen/vesa: use the new fb_* functions
      xen/arm: preserve DTB mappings
      xen/device_tree: introduce find_compatible_node
      xen/arm: introduce vexpress_syscfg
      xen/arm: introduce a driver for the ARM HDLCD controller

 xen/arch/arm/Makefile                   |    1 +
 xen/arch/arm/Rules.mk                   |    2 +
 xen/arch/arm/kernel.h                   |    2 +
 xen/arch/arm/mm.c                       |   44 +++++
 xen/arch/arm/platform_vexpress.c        |   97 +++++++++++
 xen/arch/arm/setup.c                    |    8 +-
 xen/arch/x86/Rules.mk                   |    1 +
 xen/common/device_tree.c                |   51 ++++++
 xen/drivers/Makefile                    |    2 +-
 xen/drivers/char/console.c              |   12 +-
 xen/drivers/video/Makefile              |   12 +-
 xen/drivers/video/arm_hdlcd.c           |  282 +++++++++++++++++++++++++++++++
 xen/drivers/video/fb.c                  |  209 +++++++++++++++++++++++
 xen/drivers/video/fb.h                  |   49 ++++++
 xen/drivers/video/modelines.h           |   69 ++++++++
 xen/drivers/video/vesa.c                |  179 +++-----------------
 xen/drivers/video/vga.c                 |   12 +-
 xen/include/asm-arm/config.h            |    4 +
 xen/include/asm-arm/mm.h                |    5 +-
 xen/include/asm-arm/page.h              |   23 +++
 xen/include/asm-arm/platform_vexpress.h |   23 +++
 xen/include/asm-x86/config.h            |    1 +
 xen/include/xen/device_tree.h           |    3 +
 xen/include/xen/vga.h                   |    9 +-
 xen/include/xen/video.h                 |   24 +++
 25 files changed, 940 insertions(+), 184 deletions(-)



Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:02:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Fo-0000bB-N1; Fri, 07 Dec 2012 18:02:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Fo-0000b3-0h
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:02:28 +0000
Received: from [85.158.143.35:20816] by server-1.bemta-4.messagelabs.com id
	25/11-28401-33F22C05; Fri, 07 Dec 2012 18:02:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354903344!5483038!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10173 invoked from network); 7 Dec 2012 18:02:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:02:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47032551"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:02:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:02:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2Fj-0007FC-EY;
	Fri, 07 Dec 2012 18:02:23 +0000
Date: Fri, 7 Dec 2012 18:02:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 0/8] xen: ARM HDLCD video driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series introduces a very simple driver for the ARM HDLCD
controller, which means that we can finally have something on the screen
while Xen is booting on ARM :)

The driver is capable of reading the mode property from the device tree
and configuring the HDLCD accordingly. It is also capable of setting the
OSC5 timer to the frequency required by the pixel clock.

In order to reduce code duplication with x86, I tried to generalize the
existing vesa character rendering functions into an architecture-agnostic
framebuffer driver that can be used by both the vesa and hdlcd drivers.

I would very much appreciate it if you could take a close look at the
vesa changes, because I don't have any x86 test machines that boot in
vesa mode, so I couldn't test them.


Changes in v2:
- rebase on latest xen-unstable;
- add support for multiple resolutions;
- add support to dynamically change the OSC5 motherboard timer;
- add the patch "preserve DTB mappings".



Stefano Stabellini (8):
      xen/arm: introduce early_ioremap
      xen: infrastructure to have cross-platform video drivers
      xen: introduce a generic framebuffer driver
      xen/vesa: use the new fb_* functions
      xen/arm: preserve DTB mappings
      xen/device_tree: introduce find_compatible_node
      xen/arm: introduce vexpress_syscfg
      xen/arm: introduce a driver for the ARM HDLCD controller

 xen/arch/arm/Makefile                   |    1 +
 xen/arch/arm/Rules.mk                   |    2 +
 xen/arch/arm/kernel.h                   |    2 +
 xen/arch/arm/mm.c                       |   44 +++++
 xen/arch/arm/platform_vexpress.c        |   97 +++++++++++
 xen/arch/arm/setup.c                    |    8 +-
 xen/arch/x86/Rules.mk                   |    1 +
 xen/common/device_tree.c                |   51 ++++++
 xen/drivers/Makefile                    |    2 +-
 xen/drivers/char/console.c              |   12 +-
 xen/drivers/video/Makefile              |   12 +-
 xen/drivers/video/arm_hdlcd.c           |  282 +++++++++++++++++++++++++++++++
 xen/drivers/video/fb.c                  |  209 +++++++++++++++++++++++
 xen/drivers/video/fb.h                  |   49 ++++++
 xen/drivers/video/modelines.h           |   69 ++++++++
 xen/drivers/video/vesa.c                |  179 +++-----------------
 xen/drivers/video/vga.c                 |   12 +-
 xen/include/asm-arm/config.h            |    4 +
 xen/include/asm-arm/mm.h                |    5 +-
 xen/include/asm-arm/page.h              |   23 +++
 xen/include/asm-arm/platform_vexpress.h |   23 +++
 xen/include/asm-x86/config.h            |    1 +
 xen/include/xen/device_tree.h           |    3 +
 xen/include/xen/vga.h                   |    9 +-
 xen/include/xen/video.h                 |   24 +++
 25 files changed, 940 insertions(+), 184 deletions(-)



Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2GX-0000fV-6b; Fri, 07 Dec 2012 18:03:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2GW-0000fL-4f
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:12 +0000
Received: from [85.158.143.99:39913] by server-2.bemta-4.messagelabs.com id
	7B/9D-30861-F5F22C05; Fri, 07 Dec 2012 18:03:11 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10576 invoked from network); 7 Dec 2012 18:03:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814394"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-1a;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:50 +0000
Message-ID: <1354903377-13068-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 1/8] xen/arm: introduce early_ioremap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a function to map a range of physical memory into Xen virtual
memory.
It doesn't need the domheap to be set up.
It is going to be used to map the video RAM.

Add flush_xen_data_tlb_range, which flushes a range of virtual addresses.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/mm.c            |   32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/config.h |    2 ++
 xen/include/asm-arm/mm.h     |    3 ++-
 xen/include/asm-arm/page.h   |   23 +++++++++++++++++++++++
 4 files changed, 59 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 855f83d..0d7a163 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -367,6 +367,38 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
 }
 
+/* Map the physical memory range start -  start + len into virtual
+ * memory and return the virtual address of the mapping.
+ * start has to be 2MB aligned.
+ * len has to be < EARLY_VMAP_END - EARLY_VMAP_START.
+ */
+void* early_ioremap(paddr_t start, size_t len, unsigned attributes)
+{
+    static unsigned long virt_start = EARLY_VMAP_START;
+    void* ret_addr = (void *)virt_start;
+    paddr_t end = start + len;
+
+    ASSERT(!(start & (~SECOND_MASK)));
+    ASSERT(!(virt_start & (~SECOND_MASK)));
+
+    /* The range we need to map is too big */
+    if ( virt_start + len >= EARLY_VMAP_END )
+        return NULL;
+
+    while ( start < end )
+    {
+        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
+        e.pt.ai = attributes;
+        write_pte(xen_second + second_table_offset(virt_start), e);
+
+        start += SECOND_SIZE;
+        virt_start += SECOND_SIZE;
+    }
+    flush_xen_data_tlb_range((unsigned long) ret_addr, len);
+
+    return ret_addr;
+}
+
 enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
 static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
 {
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 2a05539..87db0d1 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -73,9 +73,11 @@
 #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
 #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
 #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
+#define EARLY_VMAP_START       mk_unsigned_long(0x10000000)
 #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
 #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
 
+#define EARLY_VMAP_END         XENHEAP_VIRT_START
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index e95ece1..4ed5df6 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -150,7 +150,8 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
 /* Remove a mapping from a fixmap entry */
 extern void clear_fixmap(unsigned map);
-
+/* map a 2MB aligned physical range in virtual memory. */
+void* early_ioremap(paddr_t start, size_t len, unsigned attributes);
 
 #define mfn_valid(mfn)        ({                                              \
     unsigned long __m_f_n = (mfn);                                            \
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index d89261e..0790dda 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -328,6 +328,23 @@ static inline void flush_xen_data_tlb_va(unsigned long va)
                  : : "r" (va) : "memory");
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB. This is not
+ * sufficient when changing code mappings or for self modifying code.
+ */
+static inline void flush_xen_data_tlb_range(unsigned long va, unsigned long size)
+{
+    unsigned long end = va + size;
+    while ( va < end ) {
+        asm volatile("dsb;" /* Ensure preceding are visible */
+                STORE_CP32(0, TLBIMVAH)
+                "dsb;" /* Ensure completion of the TLB flush */
+                "isb;"
+                : : "r" (va) : "memory");
+        va += PAGE_SIZE;
+    }
+}
+
 /* Flush all non-hypervisor mappings from the TLB */
 static inline void flush_guest_tlb(void)
 {
@@ -418,8 +435,14 @@ static inline uint64_t gva_to_ipa(uint32_t va)
 #define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
 
 #define THIRD_SHIFT  PAGE_SHIFT
+#define THIRD_SIZE   (1u << THIRD_SHIFT)
+#define THIRD_MASK   (~(THIRD_SIZE - 1))
 #define SECOND_SHIFT (THIRD_SHIFT + LPAE_SHIFT)
+#define SECOND_SIZE   (1u << SECOND_SHIFT)
+#define SECOND_MASK   (~(SECOND_SIZE - 1))
 #define FIRST_SHIFT  (SECOND_SHIFT + LPAE_SHIFT)
+#define FIRST_SIZE   (1u << FIRST_SHIFT)
+#define FIRST_MASK   (~(FIRST_SIZE - 1))
 
 /* Calculate the offsets into the pagetables for a given VA */
 #define first_linear_offset(va) (va >> FIRST_SHIFT)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2GZ-0000fu-It; Fri, 07 Dec 2012 18:03:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2GX-0000fU-DY
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:13 +0000
Received: from [85.158.143.99:39950] by server-1.bemta-4.messagelabs.com id
	60/81-28401-06F22C05; Fri, 07 Dec 2012 18:03:12 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10603 invoked from network); 7 Dec 2012 18:03:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814395"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-2B;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:51 +0000
Message-ID: <1354903377-13068-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 2/8] xen: infrastructure to have
	cross-platform video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- introduce a new HAS_VIDEO config variable;
- build xen/drivers/video/font* if HAS_VIDEO;
- rename vga_puts to video_puts;
- rename vga_init to video_init;
- rename vga_endboot to video_endboot.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/Rules.mk        |    1 +
 xen/arch/x86/Rules.mk        |    1 +
 xen/drivers/Makefile         |    2 +-
 xen/drivers/char/console.c   |   12 ++++++------
 xen/drivers/video/Makefile   |   10 +++++-----
 xen/drivers/video/vesa.c     |    4 ++--
 xen/drivers/video/vga.c      |   12 ++++++------
 xen/include/asm-x86/config.h |    1 +
 xen/include/xen/vga.h        |    9 +--------
 xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
 10 files changed, 48 insertions(+), 28 deletions(-)
 create mode 100644 xen/include/xen/video.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..fa9f9c1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -7,6 +7,7 @@
 #
 
 HAS_DEVICE_TREE := y
+HAS_VIDEO := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 963850f..0a9d68d 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -3,6 +3,7 @@
 
 HAS_ACPI := y
 HAS_VGA  := y
+HAS_VIDEO  := y
 HAS_CPUFREQ := y
 HAS_PCI := y
 HAS_PASSTHROUGH := y
diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
index 7239375..9c70f20 100644
--- a/xen/drivers/Makefile
+++ b/xen/drivers/Makefile
@@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
 subdir-$(HAS_PCI) += pci
 subdir-$(HAS_PASSTHROUGH) += passthrough
 subdir-$(HAS_ACPI) += acpi
-subdir-$(HAS_VGA) += video
+subdir-$(HAS_VIDEO) += video
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index ff360fe..1b7a593 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -21,7 +21,7 @@
 #include <xen/delay.h>
 #include <xen/guest_access.h>
 #include <xen/shutdown.h>
-#include <xen/vga.h>
+#include <xen/video.h>
 #include <xen/kexec.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
@@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
     buf[sofar] = '\0';
 
     sercon_puts(buf);
-    vga_puts(buf);
+    video_puts(buf);
 
     free_xenheap_pages(buf, order);
 }
@@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
         spin_lock_irq(&console_lock);
 
         sercon_puts(kbuf);
-        vga_puts(kbuf);
+        video_puts(kbuf);
 
         if ( opt_console_to_ring )
         {
@@ -464,7 +464,7 @@ static void __putstr(const char *str)
     ASSERT(spin_is_locked(&console_lock));
 
     sercon_puts(str);
-    vga_puts(str);
+    video_puts(str);
 
     if ( !console_locks_busted )
     {
@@ -592,7 +592,7 @@ void __init console_init_preirq(void)
         if ( *p == ',' )
             p++;
         if ( !strncmp(p, "vga", 3) )
-            vga_init();
+            video_init();
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
@@ -694,7 +694,7 @@ void __init console_endboot(void)
         printk("\n");
     }
 
-    vga_endboot();
+    video_endboot();
 
     /*
      * If user specifies so, we fool the switch routine to redirect input
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 6c3e5b4..2993c39 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -1,5 +1,5 @@
-obj-y := vga.o
-obj-$(CONFIG_X86) += font_8x14.o
-obj-$(CONFIG_X86) += font_8x16.o
-obj-$(CONFIG_X86) += font_8x8.o
-obj-$(CONFIG_X86) += vesa.o
+obj-$(HAS_VGA) := vga.o
+obj-$(HAS_VIDEO) += font_8x14.o
+obj-$(HAS_VIDEO) += font_8x16.o
+obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index d0a83ff..aaf8b23 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -108,7 +108,7 @@ void __init vesa_init(void)
 
     memset(lfb, 0, vram_remap);
 
-    vga_puts = vesa_redraw_puts;
+    video_puts = vesa_redraw_puts;
 
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
@@ -193,7 +193,7 @@ void __init vesa_endboot(bool_t keep)
     if ( keep )
     {
         xpos = 0;
-        vga_puts = vesa_scroll_puts;
+        video_puts = vesa_scroll_puts;
     }
     else
     {
diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
index a98bd00..40e5963 100644
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -21,7 +21,7 @@ static unsigned char *video;
 
 static void vga_text_puts(const char *s);
 static void vga_noop_puts(const char *s) {}
-void (*vga_puts)(const char *) = vga_noop_puts;
+void (*video_puts)(const char *) = vga_noop_puts;
 
 /*
  * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
@@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
 #define vesa_endboot(x)   ((void)0)
 #endif
 
-void __init vga_init(void)
+void __init video_init(void)
 {
     char *p;
 
@@ -85,7 +85,7 @@ void __init vga_init(void)
         columns = vga_console_info.u.text_mode_3.columns;
         lines   = vga_console_info.u.text_mode_3.rows;
         memset(video, 0, columns * lines * 2);
-        vga_puts = vga_text_puts;
+        video_puts = vga_text_puts;
         break;
     case XEN_VGATYPE_VESA_LFB:
     case XEN_VGATYPE_EFI_LFB:
@@ -97,16 +97,16 @@ void __init vga_init(void)
     }
 }
 
-void __init vga_endboot(void)
+void __init video_endboot(void)
 {
-    if ( vga_puts == vga_noop_puts )
+    if ( video_puts == vga_noop_puts )
         return;
 
     printk("Xen is %s VGA console.\n",
            vgacon_keep ? "keeping" : "relinquishing");
 
     if ( !vgacon_keep )
-        vga_puts = vga_noop_puts;
+        video_puts = vga_noop_puts;
     else
     {
         int bus, devfn;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 0c4868c..e8da4f7 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -38,6 +38,7 @@
 #define CONFIG_ACPI_CSTATE 1
 
 #define CONFIG_VGA 1
+#define CONFIG_VIDEO 1
 
 #define CONFIG_HOTPLUG 1
 #define CONFIG_HOTPLUG_CPU 1
diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
index cc690b9..f72b63d 100644
--- a/xen/include/xen/vga.h
+++ b/xen/include/xen/vga.h
@@ -9,17 +9,10 @@
 #ifndef _XEN_VGA_H
 #define _XEN_VGA_H
 
-#include <public/xen.h>
+#include <xen/video.h>
 
 #ifdef CONFIG_VGA
 extern struct xen_vga_console_info vga_console_info;
-void vga_init(void);
-void vga_endboot(void);
-extern void (*vga_puts)(const char *);
-#else
-#define vga_init()    ((void)0)
-#define vga_endboot() ((void)0)
-#define vga_puts(s)   ((void)0)
 #endif
 
 #endif /* _XEN_VGA_H */
diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
new file mode 100644
index 0000000..2e897f9
--- /dev/null
+++ b/xen/include/xen/video.h
@@ -0,0 +1,24 @@
+/*
+ *  video.h
+ *
+ *  This file is subject to the terms and conditions of the GNU General Public
+ *  License.  See the file COPYING in the main directory of this archive
+ *  for more details.
+ */
+
+#ifndef _XEN_VIDEO_H
+#define _XEN_VIDEO_H
+
+#include <public/xen.h>
+
+#ifdef CONFIG_VIDEO
+void video_init(void);
+extern void (*video_puts)(const char *);
+void video_endboot(void);
+#else
+#define video_init()    ((void)0)
+#define video_puts(s)   ((void)0)
+#define video_endboot() ((void)0)
+#endif
+
+#endif /* _XEN_VIDEO_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

- build xen/drivers/video/font* if HAS_VIDEO;
- rename vga_puts to video_puts;
- rename vga_init to video_init;
- rename vga_endboot to video_endboot.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/Rules.mk        |    1 +
 xen/arch/x86/Rules.mk        |    1 +
 xen/drivers/Makefile         |    2 +-
 xen/drivers/char/console.c   |   12 ++++++------
 xen/drivers/video/Makefile   |   10 +++++-----
 xen/drivers/video/vesa.c     |    4 ++--
 xen/drivers/video/vga.c      |   12 ++++++------
 xen/include/asm-x86/config.h |    1 +
 xen/include/xen/vga.h        |    9 +--------
 xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
 10 files changed, 48 insertions(+), 28 deletions(-)
 create mode 100644 xen/include/xen/video.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..fa9f9c1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -7,6 +7,7 @@
 #
 
 HAS_DEVICE_TREE := y
+HAS_VIDEO := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 963850f..0a9d68d 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -3,6 +3,7 @@
 
 HAS_ACPI := y
 HAS_VGA  := y
+HAS_VIDEO  := y
 HAS_CPUFREQ := y
 HAS_PCI := y
 HAS_PASSTHROUGH := y
diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
index 7239375..9c70f20 100644
--- a/xen/drivers/Makefile
+++ b/xen/drivers/Makefile
@@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
 subdir-$(HAS_PCI) += pci
 subdir-$(HAS_PASSTHROUGH) += passthrough
 subdir-$(HAS_ACPI) += acpi
-subdir-$(HAS_VGA) += video
+subdir-$(HAS_VIDEO) += video
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index ff360fe..1b7a593 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -21,7 +21,7 @@
 #include <xen/delay.h>
 #include <xen/guest_access.h>
 #include <xen/shutdown.h>
-#include <xen/vga.h>
+#include <xen/video.h>
 #include <xen/kexec.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
@@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
     buf[sofar] = '\0';
 
     sercon_puts(buf);
-    vga_puts(buf);
+    video_puts(buf);
 
     free_xenheap_pages(buf, order);
 }
@@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
         spin_lock_irq(&console_lock);
 
         sercon_puts(kbuf);
-        vga_puts(kbuf);
+        video_puts(kbuf);
 
         if ( opt_console_to_ring )
         {
@@ -464,7 +464,7 @@ static void __putstr(const char *str)
     ASSERT(spin_is_locked(&console_lock));
 
     sercon_puts(str);
-    vga_puts(str);
+    video_puts(str);
 
     if ( !console_locks_busted )
     {
@@ -592,7 +592,7 @@ void __init console_init_preirq(void)
         if ( *p == ',' )
             p++;
         if ( !strncmp(p, "vga", 3) )
-            vga_init();
+            video_init();
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
@@ -694,7 +694,7 @@ void __init console_endboot(void)
         printk("\n");
     }
 
-    vga_endboot();
+    video_endboot();
 
     /*
      * If user specifies so, we fool the switch routine to redirect input
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 6c3e5b4..2993c39 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -1,5 +1,5 @@
-obj-y := vga.o
-obj-$(CONFIG_X86) += font_8x14.o
-obj-$(CONFIG_X86) += font_8x16.o
-obj-$(CONFIG_X86) += font_8x8.o
-obj-$(CONFIG_X86) += vesa.o
+obj-$(HAS_VGA) := vga.o
+obj-$(HAS_VIDEO) += font_8x14.o
+obj-$(HAS_VIDEO) += font_8x16.o
+obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index d0a83ff..aaf8b23 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -108,7 +108,7 @@ void __init vesa_init(void)
 
     memset(lfb, 0, vram_remap);
 
-    vga_puts = vesa_redraw_puts;
+    video_puts = vesa_redraw_puts;
 
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
@@ -193,7 +193,7 @@ void __init vesa_endboot(bool_t keep)
     if ( keep )
     {
         xpos = 0;
-        vga_puts = vesa_scroll_puts;
+        video_puts = vesa_scroll_puts;
     }
     else
     {
diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
index a98bd00..40e5963 100644
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -21,7 +21,7 @@ static unsigned char *video;
 
 static void vga_text_puts(const char *s);
 static void vga_noop_puts(const char *s) {}
-void (*vga_puts)(const char *) = vga_noop_puts;
+void (*video_puts)(const char *) = vga_noop_puts;
 
 /*
  * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
@@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
 #define vesa_endboot(x)   ((void)0)
 #endif
 
-void __init vga_init(void)
+void __init video_init(void)
 {
     char *p;
 
@@ -85,7 +85,7 @@ void __init vga_init(void)
         columns = vga_console_info.u.text_mode_3.columns;
         lines   = vga_console_info.u.text_mode_3.rows;
         memset(video, 0, columns * lines * 2);
-        vga_puts = vga_text_puts;
+        video_puts = vga_text_puts;
         break;
     case XEN_VGATYPE_VESA_LFB:
     case XEN_VGATYPE_EFI_LFB:
@@ -97,16 +97,16 @@ void __init vga_init(void)
     }
 }
 
-void __init vga_endboot(void)
+void __init video_endboot(void)
 {
-    if ( vga_puts == vga_noop_puts )
+    if ( video_puts == vga_noop_puts )
         return;
 
     printk("Xen is %s VGA console.\n",
            vgacon_keep ? "keeping" : "relinquishing");
 
     if ( !vgacon_keep )
-        vga_puts = vga_noop_puts;
+        video_puts = vga_noop_puts;
     else
     {
         int bus, devfn;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 0c4868c..e8da4f7 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -38,6 +38,7 @@
 #define CONFIG_ACPI_CSTATE 1
 
 #define CONFIG_VGA 1
+#define CONFIG_VIDEO 1
 
 #define CONFIG_HOTPLUG 1
 #define CONFIG_HOTPLUG_CPU 1
diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
index cc690b9..f72b63d 100644
--- a/xen/include/xen/vga.h
+++ b/xen/include/xen/vga.h
@@ -9,17 +9,10 @@
 #ifndef _XEN_VGA_H
 #define _XEN_VGA_H
 
-#include <public/xen.h>
+#include <xen/video.h>
 
 #ifdef CONFIG_VGA
 extern struct xen_vga_console_info vga_console_info;
-void vga_init(void);
-void vga_endboot(void);
-extern void (*vga_puts)(const char *);
-#else
-#define vga_init()    ((void)0)
-#define vga_endboot() ((void)0)
-#define vga_puts(s)   ((void)0)
 #endif
 
 #endif /* _XEN_VGA_H */
diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
new file mode 100644
index 0000000..2e897f9
--- /dev/null
+++ b/xen/include/xen/video.h
@@ -0,0 +1,24 @@
+/*
+ *  video.h
+ *
+ *  This file is subject to the terms and conditions of the GNU General Public
+ *  License.  See the file COPYING in the main directory of this archive
+ *  for more details.
+ */
+
+#ifndef _XEN_VIDEO_H
+#define _XEN_VIDEO_H
+
+#include <public/xen.h>
+
+#ifdef CONFIG_VIDEO
+void video_init(void);
+extern void (*video_puts)(const char *);
+void video_endboot(void);
+#else
+#define video_init()    ((void)0)
+#define video_puts(s)   ((void)0)
+#define video_endboot() ((void)0)
+#endif
+
+#endif /* _XEN_VIDEO_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gi-0000hS-0V; Fri, 07 Dec 2012 18:03:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gf-0000gy-MA
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:21 +0000
Received: from [85.158.143.99:39976] by server-1.bemta-4.messagelabs.com id
	F9/81-28401-96F22C05; Fri, 07 Dec 2012 18:03:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10905 invoked from network); 7 Dec 2012 18:03:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814396"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-4v;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:54 +0000
Message-ID: <1354903377-13068-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 5/8] xen/arm: preserve DTB mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At the moment we destroy the DTB mapping in setup_pagetables and we
don't restore it until setup_mm.

Keep the temporary DTB mapping until the new one is created.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/kernel.h    |    2 ++
 xen/arch/arm/mm.c        |   12 ++++++++++++
 xen/arch/arm/setup.c     |    1 +
 xen/include/asm-arm/mm.h |    2 ++
 4 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index 4533568..a179ffb 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -38,4 +38,6 @@ struct kernel_info {
 int kernel_prepare(struct kernel_info *info);
 void kernel_load(struct kernel_info *info);
 
+extern char _sdtb[];
+
 #endif /* #ifdef __ARCH_ARM_KERNEL_H__ */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0d7a163..2410794 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -34,6 +34,7 @@
 #include <asm/current.h>
 #include <public/memory.h>
 #include <xen/sched.h>
+#include "kernel.h"
 
 struct domain *dom_xen, *dom_io;
 
@@ -295,12 +296,23 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
     /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
 
+    /* preserve the DTB mapping a little while longer */
+    pte = mfn_to_xen_entry(((unsigned long) _sdtb + boot_phys_offset) >> PAGE_SHIFT);
+    write_pte(xen_second + second_linear_offset(BOOT_MISC_VIRT_START), pte);
+
     /* From now on, no mapping may be both writable and executable. */
     WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);
     /* Flush everything after setting WXN bit. */
     flush_xen_text_tlb();
 }
 
+void __init destroy_dtb_mapping(void)
+{
+    /* destroy old DTB mapping */
+    xen_second[second_linear_offset(BOOT_MISC_VIRT_START)].bits = 0;
+    dsb();
+}
+
 /* MMU setup for secondary CPUS (which already have paging enabled) */
 void __cpuinit mmu_init_secondary_cpu(void)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 70397ce..64420ef 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -230,6 +230,7 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     init_xen_time();
 
+    destroy_dtb_mapping();
     setup_mm(atag_paddr, fdt_size);
 
     /* Setup Hyp vector base */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 4ed5df6..6fa4308 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -139,6 +139,8 @@ extern unsigned long total_pages;
 
 /* Boot-time pagetable setup */
 extern void setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr);
+/* Destroy temporary DTB mapping */
+extern void destroy_dtb_mapping(void);
 /* MMU setup for seccondary CPUS (which already have paging enabled) */
 extern void __cpuinit mmu_init_secondary_cpu(void);
 /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Go-0000k8-JP; Fri, 07 Dec 2012 18:03:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gn-0000jZ-Av
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:29 +0000
Received: from [85.158.143.99:2367] by server-1.bemta-4.messagelabs.com id
	01/91-28401-07F22C05; Fri, 07 Dec 2012 18:03:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10913 invoked from network); 7 Dec 2012 18:03:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814398"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-4Q;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:53 +0000
Message-ID: <1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make use of the framebuffer functions previously introduced.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
 1 files changed, 26 insertions(+), 153 deletions(-)

diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index aaf8b23..778cfdf 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -13,20 +13,15 @@
 #include <asm/io.h>
 #include <asm/page.h>
 #include "font.h"
+#include "fb.h"
 
 #define vlfb_info    vga_console_info.u.vesa_lfb
-#define text_columns (vlfb_info.width / font->width)
-#define text_rows    (vlfb_info.height / font->height)
 
-static void vesa_redraw_puts(const char *s);
-static void vesa_scroll_puts(const char *s);
+static void lfb_flush(void);
 
-static unsigned char *lfb, *lbuf, *text_buf;
-static unsigned int *__initdata line_len;
+static unsigned char *lfb;
 static const struct font_desc *font;
 static bool_t vga_compat;
-static unsigned int pixel_on;
-static unsigned int xpos, ypos;
 
 static unsigned int vram_total;
 integer_param("vesa-ram", vram_total);
@@ -87,29 +82,26 @@ void __init vesa_early_init(void)
 
 void __init vesa_init(void)
 {
-    if ( !font )
-        goto fail;
-
-    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
-    if ( !lbuf )
-        goto fail;
+    struct fb_prop fbp;
 
-    text_buf = xzalloc_bytes(text_columns * text_rows);
-    if ( !text_buf )
-        goto fail;
+    if ( !font )
+        return;
 
-    line_len = xzalloc_array(unsigned int, text_columns);
-    if ( !line_len )
-        goto fail;
+    fbp.font = font;
+    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
+    fbp.bytes_per_line = vlfb_info.bytes_per_line;
+    fbp.width = vlfb_info.width;
+    fbp.height = vlfb_info.height;
+    fbp.flush = lfb_flush;
+    fbp.text_columns = vlfb_info.width / font->width;
+    fbp.text_rows = vlfb_info.height / font->height;
 
-    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
+    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);
     if ( !lfb )
-        goto fail;
+        return;
 
     memset(lfb, 0, vram_remap);
 
-    video_puts = vesa_redraw_puts;
-
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
            vlfb_info.lfb_base, lfb,
@@ -131,7 +123,7 @@ void __init vesa_init(void)
     {
         /* Light grey in truecolor. */
         unsigned int grey = 0xaaaaaaaa;
-        pixel_on = 
+        fbp.pixel_on = 
             ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
             ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
             ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
@@ -139,15 +131,14 @@ void __init vesa_init(void)
     else
     {
         /* White(ish) in default pseudocolor palette. */
-        pixel_on = 7;
+        fbp.pixel_on = 7;
     }
 
-    return;
-
- fail:
-    xfree(lbuf);
-    xfree(text_buf);
-    xfree(line_len);
+    if ( fb_init(fbp) < 0 )
+        return;
+    if ( fb_alloc() < 0 )
+        return;
+    video_puts = fb_redraw_puts;
 }
 
 #include <asm/mtrr.h>
@@ -192,8 +183,8 @@ void __init vesa_endboot(bool_t keep)
 {
     if ( keep )
     {
-        xpos = 0;
-        video_puts = vesa_scroll_puts;
+        video_puts = fb_scroll_puts;
+        fb_cr();
     }
     else
     {
@@ -202,124 +193,6 @@ void __init vesa_endboot(bool_t keep)
             memset(lfb + i * vlfb_info.bytes_per_line, 0,
                    vlfb_info.width * bpp);
         lfb_flush();
+        fb_free();
     }
-
-    xfree(line_len);
-}
-
-/* Render one line of text to given linear framebuffer line. */
-static void vesa_show_line(
-    const unsigned char *text_line,
-    unsigned char *video_line,
-    unsigned int nr_chars,
-    unsigned int nr_cells)
-{
-    unsigned int i, j, b, bpp, pixel;
-
-    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
-
-    for ( i = 0; i < font->height; i++ )
-    {
-        unsigned char *ptr = lbuf;
-
-        for ( j = 0; j < nr_chars; j++ )
-        {
-            const unsigned char *bits = font->data;
-            bits += ((text_line[j] * font->height + i) *
-                     ((font->width + 7) >> 3));
-            for ( b = font->width; b--; )
-            {
-                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
-                memcpy(ptr, &pixel, bpp);
-                ptr += bpp;
-            }
-        }
-
-        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
-        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
-        video_line += vlfb_info.bytes_per_line;
-    }
-}
-
-/* Fast mode which redraws all modified parts of a 2D text buffer. */
-static void __init vesa_redraw_puts(const char *s)
-{
-    unsigned int i, min_redraw_y = ypos;
-    char c;
-
-    /* Paste characters into text buffer. */
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            if ( ++ypos >= text_rows )
-            {
-                min_redraw_y = 0;
-                ypos = text_rows - 1;
-                memmove(text_buf, text_buf + text_columns,
-                        ypos * text_columns);
-                memset(text_buf + ypos * text_columns, 0, xpos);
-            }
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++ + ypos * text_columns] = c;
-    }
-
-    /* Render modified section of text buffer to VESA linear framebuffer. */
-    for ( i = min_redraw_y; i <= ypos; i++ )
-    {
-        const unsigned char *line = text_buf + i * text_columns;
-        unsigned int width;
-
-        for ( width = text_columns; width; --width )
-            if ( line[width - 1] )
-                 break;
-        vesa_show_line(line,
-                       lfb + i * font->height * vlfb_info.bytes_per_line,
-                       width, max(line_len[i], width));
-        line_len[i] = width;
-    }
-
-    lfb_flush();
-}
-
-/* Slower line-based scroll mode which interacts better with dom0. */
-static void vesa_scroll_puts(const char *s)
-{
-    unsigned int i;
-    char c;
-
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            unsigned int bytes = (vlfb_info.width *
-                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
-            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
-            unsigned char *dst = lfb;
-            
-            /* New line: scroll all previous rows up one line. */
-            for ( i = font->height; i < vlfb_info.height; i++ )
-            {
-                memcpy(dst, src, bytes);
-                src += vlfb_info.bytes_per_line;
-                dst += vlfb_info.bytes_per_line;
-            }
-
-            /* Render new line. */
-            vesa_show_line(
-                text_buf,
-                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
-                xpos, text_columns);
-
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++] = c;
-    }
-
-    lfb_flush();
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Go-0000k8-JP; Fri, 07 Dec 2012 18:03:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gn-0000jZ-Av
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:29 +0000
Received: from [85.158.143.99:2367] by server-1.bemta-4.messagelabs.com id
	01/91-28401-07F22C05; Fri, 07 Dec 2012 18:03:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10913 invoked from network); 7 Dec 2012 18:03:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814398"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-4Q;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:53 +0000
Message-ID: <1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make use of the framebuffer functions previously introduced.
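
For reference, the per-glyph rendering that the removed vesa_show_line() performed (and that the fb_* helpers now encapsulate) expands each row of a font bitmap bit by bit into pixels. A standalone sketch of that inner loop, assuming an 8-pixel-wide font and a little-endian host (the memcpy copies the low-order bytes of the pixel value, exactly as the driver's loop did); render_glyph_row is an illustrative name, not part of the fb_* API:

```c
#include <assert.h>
#include <string.h>

/*
 * Expand one row of an 8-pixel-wide font glyph into pixels, as the
 * old vesa_show_line() inner loop did: bit 7 of the glyph byte is the
 * leftmost pixel, set bits become pixel_on, clear bits become 0, and
 * each pixel occupies bpp bytes (the low-order bytes of the value, so
 * this matches the driver only on little-endian hosts).
 */
void render_glyph_row(unsigned char bits, unsigned int pixel_on,
                      unsigned int bpp, unsigned char *out)
{
    unsigned int b, pixel;

    for ( b = 8; b--; )
    {
        pixel = (bits & (1u << b)) ? pixel_on : 0;
        memcpy(out, &pixel, bpp);
        out += bpp;
    }
}
```

With bpp == 1 and pixel_on == 7, the glyph byte 0xA0 (binary 10100000) produces pixels 7, 0, 7 followed by five zeros.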

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
 1 files changed, 26 insertions(+), 153 deletions(-)

diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index aaf8b23..778cfdf 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -13,20 +13,15 @@
 #include <asm/io.h>
 #include <asm/page.h>
 #include "font.h"
+#include "fb.h"
 
 #define vlfb_info    vga_console_info.u.vesa_lfb
-#define text_columns (vlfb_info.width / font->width)
-#define text_rows    (vlfb_info.height / font->height)
 
-static void vesa_redraw_puts(const char *s);
-static void vesa_scroll_puts(const char *s);
+static void lfb_flush(void);
 
-static unsigned char *lfb, *lbuf, *text_buf;
-static unsigned int *__initdata line_len;
+static unsigned char *lfb;
 static const struct font_desc *font;
 static bool_t vga_compat;
-static unsigned int pixel_on;
-static unsigned int xpos, ypos;
 
 static unsigned int vram_total;
 integer_param("vesa-ram", vram_total);
@@ -87,29 +82,26 @@ void __init vesa_early_init(void)
 
 void __init vesa_init(void)
 {
-    if ( !font )
-        goto fail;
-
-    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
-    if ( !lbuf )
-        goto fail;
+    struct fb_prop fbp;
 
-    text_buf = xzalloc_bytes(text_columns * text_rows);
-    if ( !text_buf )
-        goto fail;
+    if ( !font )
+        return;
 
-    line_len = xzalloc_array(unsigned int, text_columns);
-    if ( !line_len )
-        goto fail;
+    fbp.font = font;
+    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
+    fbp.bytes_per_line = vlfb_info.bytes_per_line;
+    fbp.width = vlfb_info.width;
+    fbp.height = vlfb_info.height;
+    fbp.flush = lfb_flush;
+    fbp.text_columns = vlfb_info.width / font->width;
+    fbp.text_rows = vlfb_info.height / font->height;
 
-    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
+    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);
     if ( !lfb )
-        goto fail;
+        return;
 
     memset(lfb, 0, vram_remap);
 
-    video_puts = vesa_redraw_puts;
-
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
            vlfb_info.lfb_base, lfb,
@@ -131,7 +123,7 @@ void __init vesa_init(void)
     {
         /* Light grey in truecolor. */
         unsigned int grey = 0xaaaaaaaa;
-        pixel_on = 
+        fbp.pixel_on = 
             ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
             ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
             ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
@@ -139,15 +131,14 @@ void __init vesa_init(void)
     else
     {
         /* White(ish) in default pseudocolor palette. */
-        pixel_on = 7;
+        fbp.pixel_on = 7;
     }
 
-    return;
-
- fail:
-    xfree(lbuf);
-    xfree(text_buf);
-    xfree(line_len);
+    if ( fb_init(fbp) < 0 )
+        return;
+    if ( fb_alloc() < 0 )
+        return;
+    video_puts = fb_redraw_puts;
 }
 
 #include <asm/mtrr.h>
@@ -192,8 +183,8 @@ void __init vesa_endboot(bool_t keep)
 {
     if ( keep )
     {
-        xpos = 0;
-        video_puts = vesa_scroll_puts;
+        video_puts = fb_scroll_puts;
+        fb_cr();
     }
     else
     {
@@ -202,124 +193,6 @@ void __init vesa_endboot(bool_t keep)
             memset(lfb + i * vlfb_info.bytes_per_line, 0,
                    vlfb_info.width * bpp);
         lfb_flush();
+        fb_free();
     }
-
-    xfree(line_len);
-}
-
-/* Render one line of text to given linear framebuffer line. */
-static void vesa_show_line(
-    const unsigned char *text_line,
-    unsigned char *video_line,
-    unsigned int nr_chars,
-    unsigned int nr_cells)
-{
-    unsigned int i, j, b, bpp, pixel;
-
-    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
-
-    for ( i = 0; i < font->height; i++ )
-    {
-        unsigned char *ptr = lbuf;
-
-        for ( j = 0; j < nr_chars; j++ )
-        {
-            const unsigned char *bits = font->data;
-            bits += ((text_line[j] * font->height + i) *
-                     ((font->width + 7) >> 3));
-            for ( b = font->width; b--; )
-            {
-                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
-                memcpy(ptr, &pixel, bpp);
-                ptr += bpp;
-            }
-        }
-
-        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
-        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
-        video_line += vlfb_info.bytes_per_line;
-    }
-}
-
-/* Fast mode which redraws all modified parts of a 2D text buffer. */
-static void __init vesa_redraw_puts(const char *s)
-{
-    unsigned int i, min_redraw_y = ypos;
-    char c;
-
-    /* Paste characters into text buffer. */
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            if ( ++ypos >= text_rows )
-            {
-                min_redraw_y = 0;
-                ypos = text_rows - 1;
-                memmove(text_buf, text_buf + text_columns,
-                        ypos * text_columns);
-                memset(text_buf + ypos * text_columns, 0, xpos);
-            }
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++ + ypos * text_columns] = c;
-    }
-
-    /* Render modified section of text buffer to VESA linear framebuffer. */
-    for ( i = min_redraw_y; i <= ypos; i++ )
-    {
-        const unsigned char *line = text_buf + i * text_columns;
-        unsigned int width;
-
-        for ( width = text_columns; width; --width )
-            if ( line[width - 1] )
-                 break;
-        vesa_show_line(line,
-                       lfb + i * font->height * vlfb_info.bytes_per_line,
-                       width, max(line_len[i], width));
-        line_len[i] = width;
-    }
-
-    lfb_flush();
-}
-
-/* Slower line-based scroll mode which interacts better with dom0. */
-static void vesa_scroll_puts(const char *s)
-{
-    unsigned int i;
-    char c;
-
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            unsigned int bytes = (vlfb_info.width *
-                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
-            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
-            unsigned char *dst = lfb;
-            
-            /* New line: scroll all previous rows up one line. */
-            for ( i = font->height; i < vlfb_info.height; i++ )
-            {
-                memcpy(dst, src, bytes);
-                src += vlfb_info.bytes_per_line;
-                dst += vlfb_info.bytes_per_line;
-            }
-
-            /* Render new line. */
-            vesa_show_line(
-                text_buf,
-                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
-                xpos, text_columns);
-
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++] = c;
-    }
-
-    lfb_flush();
 }
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gp-0000kP-0X; Fri, 07 Dec 2012 18:03:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gn-0000je-Eb
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:29 +0000
Received: from [85.158.143.99:39988] by server-3.bemta-4.messagelabs.com id
	2C/DB-18211-07F22C05; Fri, 07 Dec 2012 18:03:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10929 invoked from network); 7 Dec 2012 18:03:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814399"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-77;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:56 +0000
Message-ID: <1354903377-13068-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 7/8] xen/arm: introduce vexpress_syscfg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a Versatile Express specific function to read/write
motherboard settings.
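
The syscfg protocol this patch implements drives the motherboard controller with a single packed control word: START (bit 31), optionally WRITE (bit 30), then DCC, function, site, position and device fields at fixed shifts. A standalone sketch of that packing, using the same shift constants as the patch (the helper name pack_cfgctrl is hypothetical, for illustration only; it does no MMIO):

```c
#include <assert.h>
#include <stdint.h>

/* Same field layout as platform_vexpress.c; START and WRITE are the
 * two top bits of V2M_SYS_CFGCTRL. */
#define CFG_START      (1u << 31)
#define CFG_WRITE      (1u << 30)
#define DCC_SHIFT      26
#define FUNCTION_SHIFT 20
#define SITE_SHIFT     16
#define POSITION_SHIFT 12
#define DEVICE_SHIFT   0

/* Assemble the control word that vexpress_syscfg() writes to
 * V2M_SYS_CFGCTRL to start a transaction. */
uint32_t pack_cfgctrl(int write, unsigned int dcc, unsigned int function,
                      unsigned int site, unsigned int position,
                      unsigned int device)
{
    uint32_t v = CFG_START |
                 (dcc << DCC_SHIFT) | (function << FUNCTION_SHIFT) |
                 (site << SITE_SHIFT) | (position << POSITION_SHIFT) |
                 (device << DEVICE_SHIFT);

    if ( write )
        v |= CFG_WRITE;
    return v;
}
```

For example, reading oscillator 1 on the motherboard (function 1, device 1, everything else 0) packs to 0x80100001; the same transaction as a write adds bit 30, giving 0xC0100001.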

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Makefile                   |    1 +
 xen/arch/arm/platform_vexpress.c        |   97 +++++++++++++++++++++++++++++++
 xen/include/asm-arm/platform_vexpress.h |   23 +++++++
 3 files changed, 121 insertions(+), 0 deletions(-)
 create mode 100644 xen/arch/arm/platform_vexpress.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4c61b04..24689c5 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -18,6 +18,7 @@ obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
 obj-y += physdev.o
+obj-y += platform_vexpress.o
 obj-y += setup.o
 obj-y += time.o
 obj-y += smpboot.o
diff --git a/xen/arch/arm/platform_vexpress.c b/xen/arch/arm/platform_vexpress.c
new file mode 100644
index 0000000..41e3806
--- /dev/null
+++ b/xen/arch/arm/platform_vexpress.c
@@ -0,0 +1,97 @@
+/*
+ * xen/arch/arm/platform_vexpress.c
+ *
+ * Versatile Express specific settings
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/platform_vexpress.h>
+#include <xen/mm.h>
+
+#define DCC_SHIFT      26
+#define FUNCTION_SHIFT 20
+#define SITE_SHIFT     16
+#define POSITION_SHIFT 12
+#define DEVICE_SHIFT   0
+
+int vexpress_syscfg(int write, int function, int device, uint32_t *data)
+{
+    uint32_t *syscfg = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC);
+    uint32_t stat;
+    int dcc = 0; /* DCC to access */
+    int site = 0; /* motherboard */
+    int position = 0; /* motherboard */
+
+    set_fixmap(FIXMAP_MISC, V2M_SYS_MMIO_BASE >> PAGE_SHIFT, DEV_SHARED);
+
+    if ( syscfg[V2M_SYS_CFGCTRL] & V2M_SYS_CFG_START )
+        return -1;
+
+    /* clear the complete bit in the V2M_SYS_CFGSTAT status register */
+    syscfg[V2M_SYS_CFGSTAT] = 0;
+
+    if ( write )
+    {
+        /* write data */
+        syscfg[V2M_SYS_CFGDATA] = *data;
+
+        /* set control register */
+        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | V2M_SYS_CFG_WRITE |
+            (dcc << DCC_SHIFT) | (function << FUNCTION_SHIFT) |
+            (site << SITE_SHIFT) | (position << POSITION_SHIFT) |
+            (device << DEVICE_SHIFT);
+
+        /* wait for complete flag to be set */
+        do {
+            stat = syscfg[V2M_SYS_CFGSTAT];
+            dsb();
+        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
+
+        /* check error status and return error flag if set */
+        if ( stat & V2M_SYS_CFG_ERROR )
+            return -1;
+    } else {
+        /* set control register */
+        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | (dcc << DCC_SHIFT) |
+            (function << FUNCTION_SHIFT) | (site << SITE_SHIFT) |
+            (position << POSITION_SHIFT) | (device << DEVICE_SHIFT);
+
+        /* wait for complete flag to be set */
+        do {
+            stat = syscfg[V2M_SYS_CFGSTAT];
+            dsb();
+        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
+
+        /* check error status flag and return error flag if set */
+        if ( stat & V2M_SYS_CFG_ERROR )
+            return -1;
+        else
+            /* read data */
+            *data = syscfg[V2M_SYS_CFGDATA];
+    }
+
+    clear_fixmap(FIXMAP_MISC);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/platform_vexpress.h b/xen/include/asm-arm/platform_vexpress.h
index 3556af3..407602d 100644
--- a/xen/include/asm-arm/platform_vexpress.h
+++ b/xen/include/asm-arm/platform_vexpress.h
@@ -6,6 +6,29 @@
 #define V2M_SYS_FLAGSSET      (0x30)
 #define V2M_SYS_FLAGSCLR      (0x34)
 
+#define V2M_SYS_CFGDATA       (0x00A0/4)
+#define V2M_SYS_CFGCTRL       (0x00A4/4)
+#define V2M_SYS_CFGSTAT       (0x00A8/4)
+
+#define V2M_SYS_CFG_START     (1<<31)
+#define V2M_SYS_CFG_WRITE     (1<<30)
+#define V2M_SYS_CFG_ERROR     (1<<1)
+#define V2M_SYS_CFG_COMPLETE  (1<<0)
+
+#define V2M_SYS_CFG_OSC_FUNC  1
+#define V2M_SYS_CFG_OSC0      0
+#define V2M_SYS_CFG_OSC1      1
+#define V2M_SYS_CFG_OSC2      2
+#define V2M_SYS_CFG_OSC3      3
+#define V2M_SYS_CFG_OSC4      4
+#define V2M_SYS_CFG_OSC5      5
+
+#ifndef __ASSEMBLY__
+#include <xen/inttypes.h>
+
+int vexpress_syscfg(int write, int function, int device, uint32_t *data);
+#endif
+
 #endif /* __ASM_ARM_PLATFORM_H */
 /*
  * Local variables:
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gt-0000me-Ed; Fri, 07 Dec 2012 18:03:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gs-0000je-Ln
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:34 +0000
Received: from [85.158.143.99:40006] by server-3.bemta-4.messagelabs.com id
	51/EB-18211-67F22C05; Fri, 07 Dec 2012 18:03:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!6
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10942 invoked from network); 7 Dec 2012 18:03:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814400"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-6c;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:55 +0000
Message-ID: <1354903377-13068-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 6/8] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a find_compatible_node function that can be used by device
drivers to find the node corresponding to their device in the device
tree.

Initialize device_tree_flattened early in start_xen, so that it is
available before setup_mm. Get rid of fdt in the process.

Also add device_tree_node_compatible to device_tree.h, which is currently
missing.
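
The matching that find_compatible_node delegates to device_tree_node_compatible() ultimately tests a node's "compatible" property, which the flattened device tree stores as NUL-terminated strings packed back to back. A hedged sketch of that stringlist test (stringlist_contains is an illustrative name, not the patch's API, and it assumes a well-formed property in which every entry is NUL-terminated):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * A flattened-device-tree "compatible" property is a sequence of
 * NUL-terminated strings packed back to back.  Return 1 when any
 * entry within proplen bytes equals compat, 0 otherwise.
 */
int stringlist_contains(const char *prop, size_t proplen, const char *compat)
{
    size_t pos = 0;

    while ( pos < proplen )
    {
        const char *s = prop + pos;

        if ( strcmp(s, compat) == 0 )
            return 1;
        pos += strlen(s) + 1;   /* skip the entry and its NUL */
    }
    return 0;
}
```

A property "arm,vexpress-sysreg\0arm,vexpress" therefore matches a driver looking for either string, which is why a node can advertise several levels of compatibility.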

Changes in v2:
- remove fdt;
- return early from _find_compatible_node if a node has already been
  found.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/setup.c          |    7 ++---
 xen/common/device_tree.c      |   51 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h |    3 ++
 3 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 64420ef..c46eb36 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -196,7 +196,6 @@ void __init start_xen(unsigned long boot_phys_offset,
                       unsigned long atag_paddr,
                       unsigned long cpuid)
 {
-    void *fdt;
     size_t fdt_size;
     int cpus, i;
 
@@ -204,13 +203,13 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     smp_clear_cpu_maps();
 
-    fdt = (void *)BOOT_MISC_VIRT_START
+    device_tree_flattened = (void *)BOOT_MISC_VIRT_START
         + (atag_paddr & ((1 << SECOND_SHIFT) - 1));
-    fdt_size = device_tree_early_init(fdt);
+    fdt_size = device_tree_early_init(device_tree_flattened);
 
     cpus = smp_get_max_cpus();
     cpus = 1;
-    cmdline_parse(device_tree_bootargs(fdt));
+    cmdline_parse(device_tree_bootargs(device_tree_flattened));
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 8b4ef2f..d4391f8 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -172,6 +172,57 @@ int device_tree_for_each_node(const void *fdt,
     return 0;
 }
 
+struct find_compat {
+    const char *compatible;
+    int found;
+    int node;
+    int depth;
+    u32 address_cells;
+    u32 size_cells;
+};
+
+static int _find_compatible_node(const void *fdt,
+                             int node, const char *name, int depth,
+                             u32 address_cells, u32 size_cells,
+                             void *data)
+{
+    struct find_compat *c = (struct find_compat *) data;
+
+    if ( c->found )
+        return 0;
+
+    if ( device_tree_node_compatible(fdt, node, c->compatible) )
+    {
+        c->found = 1;
+        c->node = node;
+        c->depth = depth;
+        c->address_cells = address_cells;
+        c->size_cells = size_cells;
+    }
+    return 0;
+}
+
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells)
+{
+    int ret;
+    struct find_compat c;
+    c.compatible = compatible;
+    c.found = 0;
+
+    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
+    if ( !c.found )
+        return ret;
+    else
+    {
+        *node = c.node;
+        *depth = c.depth;
+        *address_cells = c.address_cells;
+        *size_cells = c.size_cells;
+        return 1;
+    }
+}
+
 /**
  * device_tree_bootargs - return the bootargs (the Xen command line)
  * @fdt flat device tree.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index a0e3a97..5a75f0e 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -54,6 +54,9 @@ void device_tree_set_reg(u32 **cell, u32 address_cells, u32 size_cells,
                          u64 start, u64 size);
 u32 device_tree_get_u32(const void *fdt, int node, const char *prop_name);
 bool_t device_tree_node_matches(const void *fdt, int node, const char *match);
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match);
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells);
 int device_tree_for_each_node(const void *fdt,
                               device_tree_node_func func, void *data);
 const char *device_tree_bootargs(const void *fdt);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gu-0000nN-Sb; Fri, 07 Dec 2012 18:03:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gt-0000je-4S
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:35 +0000
Received: from [85.158.143.99:40015] by server-3.bemta-4.messagelabs.com id
	61/EB-18211-67F22C05; Fri, 07 Dec 2012 18:03:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!7
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10951 invoked from network); 7 Dec 2012 18:03:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814401"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-7c;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:57 +0000
Message-ID: <1354903377-13068-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 8/8] xen/arm: introduce a driver for the ARM
	HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Read the screen resolution setting from the device tree, find the
corresponding modeline in a small table of standard video modes, and set
the hardware accordingly.

Use vexpress_syscfg to configure the pixel clock.

Use the generic framebuffer functions to print on the screen.

Changes in v2:
- read mode from DT;
- support multiple resolutions;
- use vexpress_syscfg to set the pixclock.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Rules.mk         |    1 +
 xen/drivers/video/Makefile    |    1 +
 xen/drivers/video/arm_hdlcd.c |  282 +++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/modelines.h |   69 ++++++++++
 xen/include/asm-arm/config.h  |    2 +
 5 files changed, 355 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/arm_hdlcd.c
 create mode 100644 xen/drivers/video/modelines.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index fa9f9c1..9580e6b 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -8,6 +8,7 @@
 
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
+HAS_ARM_HDLCD := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 3b3eb43..8a6f5da 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
 obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
+obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
new file mode 100644
index 0000000..9e69856
--- /dev/null
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -0,0 +1,282 @@
+/*
+ * xen/drivers/video/arm_hdlcd.c
+ *
+ * Driver for ARM HDLCD Controller
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/delay.h>
+#include <asm/types.h>
+#include <asm/platform_vexpress.h>
+#include <xen/config.h>
+#include <xen/device_tree.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include "font.h"
+#include "fb.h"
+#include "modelines.h"
+
+#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
+
+#define HDLCD_INTMASK       (0x18/4)
+#define HDLCD_FBBASE        (0x100/4)
+#define HDLCD_LINELENGTH    (0x104/4)
+#define HDLCD_LINECOUNT     (0x108/4)
+#define HDLCD_LINEPITCH     (0x10C/4)
+#define HDLCD_BUS           (0x110/4)
+#define HDLCD_VSYNC         (0x200/4)
+#define HDLCD_VBACK         (0x204/4)
+#define HDLCD_VDATA         (0x208/4)
+#define HDLCD_VFRONT        (0x20C/4)
+#define HDLCD_HSYNC         (0x210/4)
+#define HDLCD_HBACK         (0x214/4)
+#define HDLCD_HDATA         (0x218/4)
+#define HDLCD_HFRONT        (0x21C/4)
+#define HDLCD_POLARITIES    (0x220/4)
+#define HDLCD_COMMAND       (0x230/4)
+#define HDLCD_PF            (0x240/4)
+#define HDLCD_RED           (0x244/4)
+#define HDLCD_GREEN         (0x248/4)
+#define HDLCD_BLUE          (0x24C/4)
+
+static void vga_noop_puts(const char *s) {}
+void (*video_puts)(const char *) = vga_noop_puts;
+
+static void hdlcd_flush(void)
+{
+    dsb();
+}
+
+static void set_color_masks(int bpp,
+                       int *red_shift, int *green_shift, int *blue_shift,
+                       int *red_size, int *green_size, int *blue_size)
+{
+    switch (bpp) {
+        case 2:
+            *red_shift = 0;
+            *green_shift = 5;
+            *blue_shift = 11;
+            *red_size = 5;
+            *green_size = 6;
+            *blue_size = 5;
+            break;
+        case 3:
+        case 4:
+            *red_shift = 0;
+            *green_shift = 8;
+            *blue_shift = 16;
+            *red_size = 8;
+            *green_size = 8;
+            *blue_size = 8;
+            break;
+        default:
+            BUG();
+            break;
+    }
+}
+
+static void set_pixclock(uint32_t pixclock)
+{
+    vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, V2M_SYS_CFG_OSC5, &pixclock);
+}
+
+void __init video_init(void)
+{
+    int node, depth;
+    u32 address_cells, size_cells;
+    struct fb_prop fbp;
+    unsigned char *lfb;
+    paddr_t hdlcd_start, hdlcd_size;
+    paddr_t framebuffer_start, framebuffer_size;
+    const struct fdt_property *prop;
+    const u32 *cell;
+    const char *mode_string;
+    char _mode_string[16];
+    int bpp;
+    int red_shift, green_shift, blue_shift;
+    int red_size, green_size, blue_size;
+    struct modeline *videomode = NULL;
+    int i;
+
+    if ( find_compatible_node("arm,hdlcd", &node, &depth,
+                &address_cells, &size_cells) <= 0 )
+        return;
+
+    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &hdlcd_start, &hdlcd_size);
+
+    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &framebuffer_start, &framebuffer_size);
+
+    mode_string = fdt_getprop(device_tree_flattened, node, "mode", NULL);
+    if ( !mode_string )
+    {
+        bpp = 4;
+        set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                &red_size, &green_size, &blue_size);
+        memcpy(_mode_string, "1280x1024@60", strlen("1280x1024@60") + 1);
+    }
+    else if ( strlen(mode_string) < strlen("800x600@60") )
+    {
+        printk("HDLCD: invalid modeline=%s\n", mode_string);
+        return;
+    } else {
+        char *s = strchr(mode_string, '-');
+        if ( !s )
+        {
+            printk("HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+                    mode_string);
+            bpp = 4;
+            set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                    &red_size, &green_size, &blue_size);
+            memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
+        } else {
+            if ( strlen(s) < 6 )
+            {
+                printk("HDLCD: invalid mode %s\n", mode_string);
+                return;
+            }
+            s++;
+            if ( !strncmp(s, "16", 2) )
+            {
+                bpp = 2;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "24", 2) )
+            {
+                bpp = 3;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "32", 2) )
+            {
+                bpp = 4;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            } else {
+                printk("HDLCD: unsupported bpp %s\n", s);
+                return;
+            }
+            i = s - mode_string - 1;
+            memcpy(_mode_string, mode_string, i);
+            memcpy(_mode_string + i, mode_string + i + 3, 4);
+        }
+    }
+
+    for ( i = 0; i < ARRAY_SIZE(videomodes); i++ )
+    {
+        if ( !strcmp(_mode_string, videomodes[i].mode) )
+        {
+            videomode = &videomodes[i];
+            break;
+        }
+    }
+    if ( !videomode )
+    {
+        printk("HDLCD: unsupported videomode %s\n", _mode_string);
+        return;
+    }
+
+
+    if ( !hdlcd_start || !framebuffer_start )
+        return;
+
+    if ( framebuffer_size < bpp * videomode->xres * videomode->yres )
+    {
+        printk("HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        return;
+    }
+
+    printk("Initializing HDLCD driver\n");
+
+    lfb = early_ioremap(framebuffer_start, framebuffer_size, DEV_WC);
+    if ( !lfb )
+    {
+        printk("Couldn't map the framebuffer\n");
+        return;
+    }
+    memset(lfb, 0x00, bpp * videomode->xres * videomode->yres);
+
+    /* uses FIXMAP_MISC */
+    set_pixclock(videomode->pixclock);
+
+    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
+    HDLCD[HDLCD_COMMAND] = 0;
+
+    HDLCD[HDLCD_LINELENGTH] = videomode->xres * bpp;
+    HDLCD[HDLCD_LINECOUNT] = videomode->yres - 1;
+    HDLCD[HDLCD_LINEPITCH] = videomode->xres * bpp;
+    HDLCD[HDLCD_PF] = ((bpp - 1) << 3);
+    HDLCD[HDLCD_INTMASK] = 0;
+    HDLCD[HDLCD_FBBASE] = framebuffer_start;
+    HDLCD[HDLCD_BUS] = 0xf00 | (1 << 4);
+    HDLCD[HDLCD_VBACK] = videomode->vback - 1;
+    HDLCD[HDLCD_VSYNC] = videomode->vsync - 1;
+    HDLCD[HDLCD_VDATA] = videomode->yres - 1;
+    HDLCD[HDLCD_VFRONT] = videomode->vfront - 1;
+    HDLCD[HDLCD_HBACK] = videomode->hback - 1;
+    HDLCD[HDLCD_HSYNC] = videomode->hsync - 1;
+    HDLCD[HDLCD_HDATA] = videomode->xres - 1;
+    HDLCD[HDLCD_HFRONT] = videomode->hfront - 1;
+    HDLCD[HDLCD_POLARITIES] = (1 << 2) | (1 << 3);
+    HDLCD[HDLCD_RED] = (red_size << 8) | red_shift;
+    HDLCD[HDLCD_GREEN] = (green_size << 8) | green_shift;
+    HDLCD[HDLCD_BLUE] = (blue_size << 8) | blue_shift;
+    HDLCD[HDLCD_COMMAND] = 1;
+    clear_fixmap(FIXMAP_MISC);
+
+    fbp.pixel_on = (((1 << red_size) - 1) << red_shift) |
+        (((1 << green_size) - 1) << green_shift) |
+        (((1 << blue_size) - 1) << blue_shift);
+    fbp.lfb = lfb;
+    fbp.font = &font_vga_8x16;
+    fbp.bits_per_pixel = bpp*8;
+    fbp.bytes_per_line = bpp*videomode->xres;
+    fbp.width = videomode->xres;
+    fbp.height = videomode->yres;
+    fbp.flush = hdlcd_flush;
+    fbp.text_columns = videomode->xres / 8;
+    fbp.text_rows = videomode->yres / 16;
+    if ( fb_init(fbp) < 0 )
+        return;
+    video_puts = fb_scroll_puts;
+}
+
+void video_endboot(void)
+{
+    if ( video_puts != vga_noop_puts )
+        fb_alloc();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/video/modelines.h b/xen/drivers/video/modelines.h
new file mode 100644
index 0000000..b91368d
--- /dev/null
+++ b/xen/drivers/video/modelines.h
@@ -0,0 +1,69 @@
+/*
+ * xen/drivers/video/modelines.h
+ *
+ * Timings for many popular monitor resolutions
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_MODELINES_H
+#define _XEN_MODELINES_H
+
+struct modeline {
+    const char* mode;  /* in the form 1280x1024@60 */
+    uint32_t pixclock; /* kHz */
+    uint32_t xres;
+    uint32_t hfront;   /* horizontal front porch in pixels */
+    uint32_t hsync;    /* horizontal sync pulse in pixels */
+    uint32_t hback;    /* horizontal back porch in pixels */
+    uint32_t yres;
+    uint32_t vfront;   /* vertical front porch in lines */
+    uint32_t vsync;    /* vertical sync pulse in lines */
+    uint32_t vback;    /* vertical back porch in lines */
+};
+
+struct modeline __initdata videomodes[] = {
+    { "640x480@60",   25175,  640,  16,   96,   48,   480,  11,   2,    31 },
+    { "640x480@72",   31500,  640,  24,   40,   128,  480,  9,    3,    28 },
+    { "640x480@75",   31500,  640,  16,   96,   48,   480,  11,   2,    32 },
+    { "640x480@85",   36000,  640,  32,   48,   112,  480,  1,    3,    25 },
+    { "800x600@56",   38100,  800,  32,   128,  128,  600,  1,    4,    14 },
+    { "800x600@60",   40000,  800,  40,   128,  88 ,  600,  1,    4,    23 },
+    { "800x600@72",   50000,  800,  56,   120,  64 ,  600,  37,   6,    23 },
+    { "800x600@75",   49500,  800,  16,   80,   160,  600,  1,    2,    21 },
+    { "800x600@85",   56250,  800,  32,   64,   152,  600,  1,    3,    27 },
+    { "1024x768@60",  65000,  1024, 24,   136,  160,  768,  3,    6,    29 },
+    { "1024x768@70",  75000,  1024, 24,   136,  144,  768,  3,    6,    29 },
+    { "1024x768@75",  78750,  1024, 16,   96,   176,  768,  1,    3,    28 },
+    { "1024x768@85",  94500,  1024, 48,   96,   208,  768,  1,    3,    36 },
+    { "1280x1024@60", 108000, 1280, 48,   112,  248,  1024, 1,    3,    38 },
+    { "1280x1024@75", 135000, 1280, 16,   144,  248,  1024, 1,    3,    38 },
+    { "1280x1024@85", 157500, 1280, 64,   160,  224,  1024, 1,    3,    44 },
+    { "1400x1050@60", 122610, 1400, 88,   152,  240,  1050, 1,    3,    33 },
+    { "1400x1050@75", 155850, 1400, 96,   152,  248,  1050, 1,    3,    42 },
+    { "1600x1200@60", 162000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@65", 175500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@70", 189000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@75", 202500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@85", 229500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1792x1344@60", 204800, 1792, 128,  200,  328,  1344, 1,    3,    46 },
+    { "1792x1344@75", 261000, 1792, 96,   216,  352,  1344, 1,    3,    69 },
+    { "1856x1392@60", 218300, 1856, 96,   224,  352,  1392, 1,    3,    43 },
+    { "1856x1392@75", 288000, 1856, 128,  224,  352,  1392, 1,    3,    104 },
+    { "1920x1200@75", 193160, 1920, 128,  208,  336,  1200, 1,    3,    38 },
+    { "1920x1440@60", 234000, 1920, 128,  208,  344,  1440, 1,    3,    56 },
+    { "1920x1440@75", 297000, 1920, 144,  224,  352,  1440, 1,    3,    56 },
+};
+
+#endif
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 87db0d1..d8aa66b 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 
 #define CONFIG_DOMAIN_PAGE 1
 
+#define CONFIG_VIDEO 1
+
 #define OPT_CONSOLE_STR "com1"
 
 #ifdef MAX_PHYS_CPUS
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gu-0000nN-Sb; Fri, 07 Dec 2012 18:03:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gt-0000je-4S
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:35 +0000
Received: from [85.158.143.99:40015] by server-3.bemta-4.messagelabs.com id
	61/EB-18211-67F22C05; Fri, 07 Dec 2012 18:03:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1354903389!28510874!7
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10951 invoked from network); 7 Dec 2012 18:03:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814401"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-7c;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:57 +0000
Message-ID: <1354903377-13068-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 8/8] xen/arm: introduce a driver for the ARM
	HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Read the screen resolution setting from the device tree, find the
corresponding modeline in a small table of standard video modes, and
program the hardware accordingly.

Use vexpress_syscfg to configure the pixel clock.

Use the generic framebuffer functions to print on the screen.
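As a standalone illustration of the mode-string handling this patch performs, here is a minimal sketch assuming the "WxH-BPP@R" form the driver accepts; `parse_mode` and its signature are illustrative helpers, not the patch's API:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical standalone sketch (not the driver code itself): split a
 * device tree "mode" string such as "1280x1024-32@60" into bytes per
 * pixel and the "WxH@R" key used to search the modeline table. */
static int parse_mode(const char *mode, char *key, size_t keylen, int *bytespp)
{
    const char *dash = strchr(mode, '-');
    int bpp;

    if ( !dash )
    {
        *bytespp = 4;  /* no "-BPP" suffix: assume 32 bpp, as the driver does */
        snprintf(key, keylen, "%s", mode);
        return 0;
    }
    if ( sscanf(dash + 1, "%d", &bpp) != 1 ||
         (bpp != 16 && bpp != 24 && bpp != 32) )
        return -1;     /* unsupported colour depth */
    *bytespp = bpp / 8;
    /* Drop the two-digit "-BPP" part: keep "WxH" and append "@R". */
    snprintf(key, keylen, "%.*s%s", (int)(dash - mode), mode, dash + 3);
    return 0;
}
```

For "1280x1024-32@60" this yields the lookup key "1280x1024@60" and 4 bytes per pixel, matching what video_init() computes in place.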

Changes in v2:
- read mode from DT;
- support multiple resolutions;
- use vexpress_syscfg to set the pixclock.
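The modeline table added below encodes standard timings; as a sanity check, the refresh rate named in each entry can be recovered from the pixel clock and the blanking intervals. A minimal sketch (standalone, not part of the patch; the struct mirrors the one added in modelines.h):

```c
#include <stdint.h>

/* Hypothetical helper (not part of the patch): derive a mode's refresh
 * rate from its modeline.  refresh = pixclock / (htotal * vtotal). */
struct modeline {
    const char *mode;  /* e.g. "640x480@60" */
    uint32_t pixclock; /* kHz */
    uint32_t xres, hfront, hsync, hback;
    uint32_t yres, vfront, vsync, vback;
};

static unsigned int refresh_hz(const struct modeline *m)
{
    uint32_t htotal = m->xres + m->hfront + m->hsync + m->hback;
    uint32_t vtotal = m->yres + m->vfront + m->vsync + m->vback;

    /* pixclock is in kHz, so scale to pixels per second first */
    return (unsigned int)(((uint64_t)m->pixclock * 1000) / (htotal * vtotal));
}
```

For the "640x480@60" entry, htotal is 800 and vtotal is 524, so 25175000 / 419200 comes to roughly 60 Hz, as the mode name promises.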

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Rules.mk         |    1 +
 xen/drivers/video/Makefile    |    1 +
 xen/drivers/video/arm_hdlcd.c |  282 +++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/modelines.h |   69 ++++++++++
 xen/include/asm-arm/config.h  |    2 +
 5 files changed, 355 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/arm_hdlcd.c
 create mode 100644 xen/drivers/video/modelines.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index fa9f9c1..9580e6b 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -8,6 +8,7 @@
 
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
+HAS_ARM_HDLCD := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 3b3eb43..8a6f5da 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
 obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
+obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
new file mode 100644
index 0000000..9e69856
--- /dev/null
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -0,0 +1,282 @@
+/*
+ * xen/drivers/video/arm_hdlcd.c
+ *
+ * Driver for ARM HDLCD Controller
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/delay.h>
+#include <asm/types.h>
+#include <asm/platform_vexpress.h>
+#include <xen/config.h>
+#include <xen/device_tree.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include "font.h"
+#include "fb.h"
+#include "modelines.h"
+
+#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
+
+#define HDLCD_INTMASK       (0x18/4)
+#define HDLCD_FBBASE        (0x100/4)
+#define HDLCD_LINELENGTH    (0x104/4)
+#define HDLCD_LINECOUNT     (0x108/4)
+#define HDLCD_LINEPITCH     (0x10C/4)
+#define HDLCD_BUS           (0x110/4)
+#define HDLCD_VSYNC         (0x200/4)
+#define HDLCD_VBACK         (0x204/4)
+#define HDLCD_VDATA         (0x208/4)
+#define HDLCD_VFRONT        (0x20C/4)
+#define HDLCD_HSYNC         (0x210/4)
+#define HDLCD_HBACK         (0x214/4)
+#define HDLCD_HDATA         (0x218/4)
+#define HDLCD_HFRONT        (0x21C/4)
+#define HDLCD_POLARITIES    (0x220/4)
+#define HDLCD_COMMAND       (0x230/4)
+#define HDLCD_PF            (0x240/4)
+#define HDLCD_RED           (0x244/4)
+#define HDLCD_GREEN         (0x248/4)
+#define HDLCD_BLUE          (0x24C/4)
+
+static void vga_noop_puts(const char *s) {}
+void (*video_puts)(const char *) = vga_noop_puts;
+
+static void hdlcd_flush(void)
+{
+    dsb();
+}
+
+static void set_color_masks(int bpp,
+                       int *red_shift, int *green_shift, int *blue_shift,
+                       int *red_size, int *green_size, int *blue_size)
+{
+    switch (bpp) {
+        case 2:
+            *red_shift = 0;
+            *green_shift = 5;
+            *blue_shift = 11;
+            *red_size = 5;
+            *green_size = 6;
+            *blue_size = 5;
+            break;
+        case 3:
+        case 4:
+            *red_shift = 0;
+            *green_shift = 8;
+            *blue_shift = 16;
+            *red_size = 8;
+            *green_size = 8;
+            *blue_size = 8;
+            break;
+        default:
+            BUG();
+            break;
+    }
+}
+
+static void set_pixclock(uint32_t pixclock)
+{
+    vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, V2M_SYS_CFG_OSC5, &pixclock);
+}
+
+void __init video_init(void)
+{
+    int node, depth;
+    u32 address_cells, size_cells;
+    struct fb_prop fbp;
+    unsigned char *lfb;
+    paddr_t hdlcd_start, hdlcd_size;
+    paddr_t framebuffer_start, framebuffer_size;
+    const struct fdt_property *prop;
+    const u32 *cell;
+    const char *mode_string;
+    char _mode_string[16];
+    int bpp;
+    int red_shift, green_shift, blue_shift;
+    int red_size, green_size, blue_size;
+    struct modeline *videomode = NULL;
+    int i;
+
+    if ( find_compatible_node("arm,hdlcd", &node, &depth,
+                &address_cells, &size_cells) <= 0 )
+        return;
+
+    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &hdlcd_start, &hdlcd_size); 
+
+    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &framebuffer_start, &framebuffer_size); 
+
+    mode_string = fdt_getprop(device_tree_flattened, node, "mode", NULL);
+    if ( !mode_string )
+    {
+        bpp = 4;
+        set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                &red_size, &green_size, &blue_size);
+        memcpy(_mode_string, "1280x1024@60", sizeof("1280x1024@60"));
+    }
+    else if ( strlen(mode_string) < strlen("800x600@60") )
+    {
+        printk("HDLCD: invalid modeline=%s\n", mode_string);
+        return;
+    } else {
+        char *s = strchr(mode_string, '-');
+        if ( !s )
+        {
+            printk("HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+                    mode_string);
+            bpp = 4;
+            set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                    &red_size, &green_size, &blue_size);
+            memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
+        } else {
+            if ( strlen(s) < 6 )
+            {
+                printk("HDLCD: invalid mode %s\n", mode_string);
+                return;
+            }
+            s++;
+            if ( !strncmp(s, "16", 2) )
+            {
+                bpp = 2;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "24", 2) )
+            {
+                bpp = 3;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "32", 2) )
+            {
+                bpp = 4;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            } else  {
+                printk("HDLCD: unsupported bpp %s\n", s);
+                return;
+            }
+            i = s - mode_string - 1;
+            memcpy(_mode_string, mode_string, i);
+            memcpy(_mode_string + i, mode_string + i + 3, 4);
+        }
+    }
+
+    for ( i = 0; i < ARRAY_SIZE(videomodes); i++ )
+    {
+        if ( !strcmp(_mode_string, videomodes[i].mode) )
+        {
+            videomode = &videomodes[i];
+            break;
+        }
+    }
+    if ( !videomode )
+    {
+        printk("HDLCD: unsupported videomode %s\n", _mode_string);
+        return;
+    }
+
+
+    if ( !hdlcd_start || !framebuffer_start )
+        return;
+
+    if ( framebuffer_size < bpp * videomode->xres * videomode->yres )
+    {
+        printk("HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        return;
+    }
+
+    printk("Initializing HDLCD driver\n");
+
+    lfb = early_ioremap(framebuffer_start, framebuffer_size, DEV_WC);
+    if ( !lfb )
+    {
+        printk("Couldn't map the framebuffer\n");
+        return;
+    }
+    memset(lfb, 0x00, bpp * videomode->xres * videomode->yres);
+
+    /* uses FIXMAP_MISC */
+    set_pixclock(videomode->pixclock);
+
+    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
+    HDLCD[HDLCD_COMMAND] = 0;
+
+    HDLCD[HDLCD_LINELENGTH] = videomode->xres * bpp;
+    HDLCD[HDLCD_LINECOUNT] = videomode->yres - 1;
+    HDLCD[HDLCD_LINEPITCH] = videomode->xres * bpp;
+    HDLCD[HDLCD_PF] = ((bpp - 1) << 3);
+    HDLCD[HDLCD_INTMASK] = 0;
+    HDLCD[HDLCD_FBBASE] = framebuffer_start;
+    HDLCD[HDLCD_BUS] = 0xf00 | (1 << 4);
+    HDLCD[HDLCD_VBACK] = videomode->vback - 1;
+    HDLCD[HDLCD_VSYNC] = videomode->vsync - 1;
+    HDLCD[HDLCD_VDATA] = videomode->yres - 1;
+    HDLCD[HDLCD_VFRONT] = videomode->vfront - 1;
+    HDLCD[HDLCD_HBACK] = videomode->hback - 1;
+    HDLCD[HDLCD_HSYNC] = videomode->hsync - 1;
+    HDLCD[HDLCD_HDATA] = videomode->xres - 1;
+    HDLCD[HDLCD_HFRONT] = videomode->hfront - 1;
+    HDLCD[HDLCD_POLARITIES] = (1 << 2) | (1 << 3);
+    HDLCD[HDLCD_RED] = (red_size << 8) | red_shift;
+    HDLCD[HDLCD_GREEN] = (green_size << 8) | green_shift;
+    HDLCD[HDLCD_BLUE] = (blue_size << 8) | blue_shift;
+    HDLCD[HDLCD_COMMAND] = 1;
+    clear_fixmap(FIXMAP_MISC);
+
+    fbp.pixel_on = (((1 << red_size) - 1) << red_shift) |
+        (((1 << green_size) - 1) << green_shift) |
+        (((1 << blue_size) - 1) << blue_shift);
+    fbp.lfb = lfb;
+    fbp.font = &font_vga_8x16;
+    fbp.bits_per_pixel = bpp*8;
+    fbp.bytes_per_line = bpp*videomode->xres;
+    fbp.width = videomode->xres;
+    fbp.height = videomode->yres;
+    fbp.flush = hdlcd_flush;
+    fbp.text_columns = videomode->xres / 8;
+    fbp.text_rows = videomode->yres / 16;
+    if ( fb_init(fbp) < 0 )
+        return;
+    video_puts = fb_scroll_puts;
+}
+
+void video_endboot(void)
+{
+    if ( video_puts != vga_noop_puts )
+        fb_alloc();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/video/modelines.h b/xen/drivers/video/modelines.h
new file mode 100644
index 0000000..b91368d
--- /dev/null
+++ b/xen/drivers/video/modelines.h
@@ -0,0 +1,69 @@
+/*
+ * xen/drivers/video/modelines.h
+ *
+ * Timings for many popular monitor resolutions
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_MODLINES_H
+#define _XEN_MODLINES_H
+
+struct modeline {
+    const char* mode;  /* in the form 1280x1024@60 */
+    uint32_t pixclock; /* kHz */
+    uint32_t xres;
+    uint32_t hfront;   /* horizontal front porch in pixels */
+    uint32_t hsync;    /* horizontal sync pulse in pixels */
+    uint32_t hback;    /* horizontal back porch in pixels */
+    uint32_t yres;
+    uint32_t vfront;   /* vertical front porch in lines */
+    uint32_t vsync;    /* vertical sync pulse in lines */
+    uint32_t vback;    /* vertical back porch in lines */
+};
+
+struct modeline __initdata videomodes[] = {
+    { "640x480@60",   25175,  640,  16,   96,   48,   480,  11,   2,    31 },
+    { "640x480@72",   31500,  640,  24,   40,   128,  480,  9,    3,    28 },
+    { "640x480@75",   31500,  640,  16,   96,   48,   480,  11,   2,    32 },
+    { "640x480@85",   36000,  640,  32,   48,   112,  480,  1,    3,    25 },
+    { "800x600@56",   38100,  800,  32,   128,  128,  600,  1,    4,    14 },
+    { "800x600@60",   40000,  800,  40,   128,  88 ,  600,  1,    4,    23 },
+    { "800x600@72",   50000,  800,  56,   120,  64 ,  600,  37,   6,    23 },
+    { "800x600@75",   49500,  800,  16,   80,   160,  600,  1,    2,    21 },
+    { "800x600@85",   56250,  800,  32,   64,   152,  600,  1,    3,    27 },
+    { "1024x768@60",  65000,  1024, 24,   136,  160,  768,  3,    6,    29 },
+    { "1024x768@70",  75000,  1024, 24,   136,  144,  768,  3,    6,    29 },
+    { "1024x768@75",  78750,  1024, 16,   96,   176,  768,  1,    3,    28 },
+    { "1024x768@85",  94500,  1024, 48,   96,   208,  768,  1,    3,    36 },
+    { "1280x1024@60", 108000, 1280, 48,   112,  248,  1024, 1,    3,    38 },
+    { "1280x1024@75", 135000, 1280, 16,   144,  248,  1024, 1,    3,    38 },
+    { "1280x1024@85", 157500, 1280, 64,   160,  224,  1024, 1,    3,    44 },
+    { "1400x1050@60", 122610, 1400, 88,   152,  240,  1050, 1,    3,    33 },
+    { "1400x1050@75", 155850, 1400, 96,   152,  248,  1050, 1,    3,    42 },
+    { "1600x1200@60", 162000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@65", 175500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@70", 189000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@75", 202500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@85", 229500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1792x1344@60", 204800, 1792, 128,  200,  328,  1344, 1,    3,    46 },
+    { "1792x1344@75", 261000, 1792, 96,   216,  352,  1344, 1,    3,    69 },
+    { "1856x1392@60", 218300, 1856, 96,   224,  352,  1392, 1,    3,    43 },
+    { "1856x1392@75", 288000, 1856, 128,  224,  352,  1392, 1,    3,    104 },
+    { "1920x1200@75", 193160, 1920, 128,  208,  336,  1200, 1,    3,    38 },
+    { "1920x1440@60", 234000, 1920, 128,  208,  344,  1440, 1,    3,    56 },
+    { "1920x1440@75", 297000, 1920, 144,  224,  352,  1440, 1,    3,    56 },
+};
+
+#endif
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 87db0d1..d8aa66b 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 
 #define CONFIG_DOMAIN_PAGE 1
 
+#define CONFIG_VIDEO 1
+
 #define OPT_CONSOLE_STR "com1"
 
 #ifdef MAX_PHYS_CPUS
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gy-0000qK-IZ; Fri, 07 Dec 2012 18:03:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gw-0000je-Ex
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:38 +0000
Received: from [85.158.143.99:2689] by server-3.bemta-4.messagelabs.com id
	1C/EB-18211-A7F22C05; Fri, 07 Dec 2012 18:03:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354903391!21493231!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22039 invoked from network); 7 Dec 2012 18:03:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814397"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-2i;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:52 +0000
Message-ID: <1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Abstract away from vesa.c the functions to handle a linear framebuffer
and print characters to it.
The corresponding functions are going to be removed from vesa.c in the
next patch.
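The buffer management behind these printing functions can be sketched independently of any hardware: characters land in a rows-by-columns text grid, and overflowing the last row scrolls the grid up one text line. The names below and the simplification of clearing the entire freed line are assumptions, not the patch's exact code:

```c
#include <string.h>

/* Standalone sketch of the text-buffer handling behind fb_redraw_puts():
 * a ROWS x COLS character grid that wraps and scrolls up on overflow. */
enum { COLS = 8, ROWS = 3 };

struct grid {
    char text[ROWS * COLS];
    unsigned int x, y;
};

static void grid_puts(struct grid *g, const char *s)
{
    char c;

    while ( (c = *s++) != '\0' )
    {
        if ( c == '\n' || g->x >= COLS )
        {
            if ( ++g->y >= ROWS )
            {
                /* Scroll: drop the top line, clear the new bottom line. */
                g->y = ROWS - 1;
                memmove(g->text, g->text + COLS, g->y * COLS);
                memset(g->text + g->y * COLS, 0, COLS);
            }
            g->x = 0;
        }
        if ( c != '\n' )
            g->text[g->y * COLS + g->x++] = c;
    }
}
```

fb.c additionally renders each dirty row of this grid through the font bitmap into the linear framebuffer; the grid logic itself is the part shown here.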

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/Makefile |    1 +
 xen/drivers/video/fb.c     |  209 ++++++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.h     |   49 ++++++++++
 3 files changed, 259 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/fb.c
 create mode 100644 xen/drivers/video/fb.h

diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 2993c39..3b3eb43 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
 obj-$(HAS_VIDEO) += font_8x14.o
 obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
new file mode 100644
index 0000000..282f97e
--- /dev/null
+++ b/xen/drivers/video/fb.c
@@ -0,0 +1,209 @@
+/******************************************************************************
+ * fb.c
+ *
+ * linear frame buffer handling.
+ */
+
+#include <xen/config.h>
+#include <xen/kernel.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include "fb.h"
+#include "font.h"
+
+#define MAX_XRES 1920
+#define MAX_YRES 1200
+#define MAX_BPP 4
+#define MAX_FONT_W 8
+#define MAX_FONT_H 16
+static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
+static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
+static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) * \
+                          (MAX_YRES / MAX_FONT_H)];
+
+struct fb_status {
+    struct fb_prop fbp;
+
+    unsigned char *lbuf, *text_buf;
+    unsigned int *line_len;
+    unsigned int xpos, ypos;
+};
+static struct fb_status fb;
+
+static void fb_show_line(
+    const unsigned char *text_line,
+    unsigned char *video_line,
+    unsigned int nr_chars,
+    unsigned int nr_cells)
+{
+    unsigned int i, j, b, bpp, pixel;
+
+    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
+
+    for ( i = 0; i < fb.fbp.font->height; i++ )
+    {
+        unsigned char *ptr = fb.lbuf;
+
+        for ( j = 0; j < nr_chars; j++ )
+        {
+            const unsigned char *bits = fb.fbp.font->data;
+            bits += ((text_line[j] * fb.fbp.font->height + i) *
+                     ((fb.fbp.font->width + 7) >> 3));
+            for ( b = fb.fbp.font->width; b--; )
+            {
+                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
+                memcpy(ptr, &pixel, bpp);
+                ptr += bpp;
+            }
+        }
+
+        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
+        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
+        video_line += fb.fbp.bytes_per_line;
+    }
+}
+
+/* Fast mode which redraws all modified parts of a 2D text buffer. */
+void fb_redraw_puts(const char *s)
+{
+    unsigned int i, min_redraw_y = fb.ypos;
+    char c;
+
+    /* Paste characters into text buffer. */
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            if ( ++fb.ypos >= fb.fbp.text_rows )
+            {
+                min_redraw_y = 0;
+                fb.ypos = fb.fbp.text_rows - 1;
+                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
+                        fb.ypos * fb.fbp.text_columns);
+                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, fb.xpos);
+            }
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
+    }
+
+    /* Render modified section of text buffer to VESA linear framebuffer. */
+    for ( i = min_redraw_y; i <= fb.ypos; i++ )
+    {
+        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
+        unsigned int width;
+
+        for ( width = fb.fbp.text_columns; width; --width )
+            if ( line[width - 1] )
+                 break;
+        fb_show_line(line,
+                       fb.fbp.lfb + i * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                       width, max(fb.line_len[i], width));
+        fb.line_len[i] = width;
+    }
+
+    fb.fbp.flush();
+}
+
+/* Slower line-based scroll mode which interacts better with dom0. */
+void fb_scroll_puts(const char *s)
+{
+    unsigned int i;
+    char c;
+
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            unsigned int bytes = (fb.fbp.width *
+                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
+            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * fb.fbp.bytes_per_line;
+            unsigned char *dst = fb.fbp.lfb;
+
+            /* New line: scroll all previous rows up one line. */
+            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
+            {
+                memcpy(dst, src, bytes);
+                src += fb.fbp.bytes_per_line;
+                dst += fb.fbp.bytes_per_line;
+            }
+
+            /* Render new line. */
+            fb_show_line(
+                fb.text_buf,
+                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                fb.xpos, fb.fbp.text_columns);
+
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++] = c;
+    }
+
+    fb.fbp.flush();
+}
+
+void fb_cr(void)
+{
+    fb.xpos = 0;
+}
+
+int __init fb_init(struct fb_prop fbp)
+{
+    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
+    {
+        printk("Couldn't initialize a %ux%u framebuffer early.\n",
+               fbp.width, fbp.height);
+        return -EINVAL;
+    }
+
+    fb.fbp = fbp;
+    fb.lbuf = lbuf;
+    fb.text_buf = text_buf;
+    fb.line_len = line_len;
+    return 0;
+}
+
+int __init fb_alloc(void)
+{
+    fb.lbuf = NULL;
+    fb.text_buf = NULL;
+    fb.line_len = NULL;
+
+    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
+    if ( !fb.lbuf )
+        goto fail;
+
+    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
+    if ( !fb.text_buf )
+        goto fail;
+
+    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_rows);
+    if ( !fb.line_len )
+        goto fail;
+
+    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
+    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
+    memcpy(fb.line_len, line_len, fb.fbp.text_rows * sizeof(*fb.line_len));
+
+    return 0;
+
+fail:
+    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
+                    "the framebuffer\n");
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+
+    return -ENOMEM;
+}
+
+void fb_free(void)
+{
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+}
diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
new file mode 100644
index 0000000..558d058
--- /dev/null
+++ b/xen/drivers/video/fb.h
@@ -0,0 +1,49 @@
+/*
+ * xen/drivers/video/fb.h
+ *
+ * Cross-platform framebuffer library
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_FB_H
+#define _XEN_FB_H
+
+#include <xen/init.h>
+
+struct fb_prop {
+    const struct font_desc *font;
+    unsigned char *lfb;
+    unsigned int pixel_on;
+    uint16_t width, height;
+    uint16_t bytes_per_line;
+    uint16_t bits_per_pixel;
+    void (*flush)(void);
+
+    unsigned int text_columns;
+    unsigned int text_rows;
+};
+
+void fb_redraw_puts(const char *s);
+void fb_scroll_puts(const char *s);
+void fb_cr(void);
+void fb_free(void);
+
+/* Initialize the framebuffer; can be called early (before xmalloc is
+ * available). */
+int __init fb_init(struct fb_prop fbp);
+/* fb_alloc allocates internal structures using xmalloc */
+int __init fb_alloc(void);
+
+#endif
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:03:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2Gy-0000qK-IZ; Fri, 07 Dec 2012 18:03:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2Gw-0000je-Ex
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:38 +0000
Received: from [85.158.143.99:2689] by server-3.bemta-4.messagelabs.com id
	1C/EB-18211-A7F22C05; Fri, 07 Dec 2012 18:03:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354903391!21493231!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22039 invoked from network); 7 Dec 2012 18:03:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216814397"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:03:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2GN-0007G7-2i;
	Fri, 07 Dec 2012 18:03:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Dec 2012 18:02:52 +0000
Message-ID: <1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Abstract away from vesa.c the funcions to handle a linear framebuffer
and print characters to it.
The corresponding functions are going to be removed from vesa.c in the
next patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/Makefile |    1 +
 xen/drivers/video/fb.c     |  209 ++++++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.h     |   49 ++++++++++
 3 files changed, 259 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/fb.c
 create mode 100644 xen/drivers/video/fb.h

diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 2993c39..3b3eb43 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
 obj-$(HAS_VIDEO) += font_8x14.o
 obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
new file mode 100644
index 0000000..282f97e
--- /dev/null
+++ b/xen/drivers/video/fb.c
@@ -0,0 +1,209 @@
+/******************************************************************************
+ * fb.c
+ *
+ * linear frame buffer handling.
+ */
+
+#include <xen/config.h>
+#include <xen/kernel.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include "fb.h"
+#include "font.h"
+
+#define MAX_XRES 1900
+#define MAX_YRES 1200
+#define MAX_BPP 4
+#define MAX_FONT_W 8
+#define MAX_FONT_H 16
+static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
+static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
+static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) *
+                                         (MAX_YRES / MAX_FONT_H)];
+
+struct fb_status {
+    struct fb_prop fbp;
+
+    unsigned char *lbuf, *text_buf;
+    unsigned int *line_len;
+    unsigned int xpos, ypos;
+};
+static struct fb_status fb;
+
+static void fb_show_line(
+    const unsigned char *text_line,
+    unsigned char *video_line,
+    unsigned int nr_chars,
+    unsigned int nr_cells)
+{
+    unsigned int i, j, b, bpp, pixel;
+
+    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
+
+    for ( i = 0; i < fb.fbp.font->height; i++ )
+    {
+        unsigned char *ptr = fb.lbuf;
+
+        for ( j = 0; j < nr_chars; j++ )
+        {
+            const unsigned char *bits = fb.fbp.font->data;
+            bits += ((text_line[j] * fb.fbp.font->height + i) *
+                     ((fb.fbp.font->width + 7) >> 3));
+            for ( b = fb.fbp.font->width; b--; )
+            {
+                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
+                memcpy(ptr, &pixel, bpp);
+                ptr += bpp;
+            }
+        }
+
+        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
+        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
+        video_line += fb.fbp.bytes_per_line;
+    }
+}
+
+/* Fast mode which redraws all modified parts of a 2D text buffer. */
+void fb_redraw_puts(const char *s)
+{
+    unsigned int i, min_redraw_y = fb.ypos;
+    char c;
+
+    /* Paste characters into text buffer. */
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            if ( ++fb.ypos >= fb.fbp.text_rows )
+            {
+                min_redraw_y = 0;
+                fb.ypos = fb.fbp.text_rows - 1;
+                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
+                        fb.ypos * fb.fbp.text_columns);
+                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, fb.xpos);
+            }
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
+    }
+
+    /* Render modified section of text buffer to VESA linear framebuffer. */
+    for ( i = min_redraw_y; i <= fb.ypos; i++ )
+    {
+        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
+        unsigned int width;
+
+        for ( width = fb.fbp.text_columns; width; --width )
+            if ( line[width - 1] )
+                break;
+        fb_show_line(line,
+                     fb.fbp.lfb + i * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                     width, max(fb.line_len[i], width));
+        fb.line_len[i] = width;
+    }
+
+    fb.fbp.flush();
+}
+
+/* Slower line-based scroll mode which interacts better with dom0. */
+void fb_scroll_puts(const char *s)
+{
+    unsigned int i;
+    char c;
+
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            unsigned int bytes = (fb.fbp.width *
+                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
+            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * fb.fbp.bytes_per_line;
+            unsigned char *dst = fb.fbp.lfb;
+
+            /* New line: scroll all previous rows up one line. */
+            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
+            {
+                memcpy(dst, src, bytes);
+                src += fb.fbp.bytes_per_line;
+                dst += fb.fbp.bytes_per_line;
+            }
+
+            /* Render new line. */
+            fb_show_line(
+                fb.text_buf,
+                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                fb.xpos, fb.fbp.text_columns);
+
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++] = c;
+    }
+
+    fb.fbp.flush();
+}
+
+void fb_cr(void)
+{
+    fb.xpos = 0;
+}
+
+int __init fb_init(struct fb_prop fbp)
+{
+    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
+    {
+        printk("Couldn't initialize a %ux%u framebuffer early.\n",
+               fbp.width, fbp.height);
+        return -EINVAL;
+    }
+
+    fb.fbp = fbp;
+    fb.lbuf = lbuf;
+    fb.text_buf = text_buf;
+    fb.line_len = line_len;
+    return 0;
+}
+
+int __init fb_alloc(void)
+{
+    fb.lbuf = NULL;
+    fb.text_buf = NULL;
+    fb.line_len = NULL;
+
+    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
+    if ( !fb.lbuf )
+        goto fail;
+
+    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
+    if ( !fb.text_buf )
+        goto fail;
+
+    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
+    if ( !fb.line_len )
+        goto fail;
+
+    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
+    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
+    memcpy(fb.line_len, line_len, fb.fbp.text_columns);
+
+    return 0;
+
+ fail:
+    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
+           "the framebuffer\n");
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+
+    return -ENOMEM;
+}
+
+void fb_free(void)
+{
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+}
diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
new file mode 100644
index 0000000..558d058
--- /dev/null
+++ b/xen/drivers/video/fb.h
@@ -0,0 +1,49 @@
+/*
+ * xen/drivers/video/fb.h
+ *
+ * Cross-platform framebuffer library
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_FB_H
+#define _XEN_FB_H
+
+#include <xen/init.h>
+
+struct fb_prop {
+    const struct font_desc *font;
+    unsigned char *lfb;
+    unsigned int pixel_on;
+    uint16_t width, height;
+    uint16_t bytes_per_line;
+    uint16_t bits_per_pixel;
+    void (*flush)(void);
+
+    unsigned int text_columns;
+    unsigned int text_rows;
+};
+
+void fb_redraw_puts(const char *s);
+void fb_scroll_puts(const char *s);
+void fb_cr(void);
+void fb_free(void);
+
+/* Initialize the framebuffer; can be called early, before xmalloc
+ * is available. */
+int __init fb_init(struct fb_prop fbp);
+/* fb_alloc allocates internal structures using xmalloc */
+int __init fb_alloc(void);
+
+#endif
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:04:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:04:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2HG-00011j-1n; Fri, 07 Dec 2012 18:03:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th2HF-00010w-4A
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 18:03:57 +0000
Received: from [193.109.254.147:18025] by server-10.bemta-14.messagelabs.com
	id 83/92-31741-C8F22C05; Fri, 07 Dec 2012 18:03:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1354903435!1865384!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26439 invoked from network); 7 Dec 2012 18:03:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:03:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; 
   d="scan'208";a="2233"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 18:03:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 18:03:55 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th2HD-0000jk-8d;
	Fri, 07 Dec 2012 18:03:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th2HD-0005jV-0G;
	Fri, 07 Dec 2012 18:03:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14600-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 18:03:55 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14600: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14600 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14600/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:08:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2LJ-00026g-P9; Fri, 07 Dec 2012 18:08:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Th2LI-00026Y-7i
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 18:08:08 +0000
Received: from [85.158.143.35:18021] by server-3.bemta-4.messagelabs.com id
	42/CE-18211-78032C05; Fri, 07 Dec 2012 18:08:07 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1354903684!12988732!1
X-Originating-IP: [209.85.223.176]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30237 invoked from network); 7 Dec 2012 18:08:06 -0000
Received: from mail-ie0-f176.google.com (HELO mail-ie0-f176.google.com)
	(209.85.223.176)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:08:06 -0000
Received: by mail-ie0-f176.google.com with SMTP id 13so2123612iea.21
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 10:08:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=JaX3qrhRn1MS0QWxShetMjHTzJQXbBHlFiEvKaNWJfU=;
	b=iwI3DFr7Zb7R45j+0vjEWJRNiD+3ehuea0eosnLdHwIVoC7YrM9AGj2X57aV0JLtyf
	sPqtwZqgSqcuGAFaXKvos51t6sE+CDjOXaBN9L9gTBDAiRnqjRfpESsVWgbr5Un/WC5W
	jJZnT1DX6PE1VV2YbRkiuGFLuaVyAp7hRfTCCS5E8yn6ITgwHG8jVroJJCVyoajlbn34
	QklU2116ComMmFHphmnWZpmvN+JPoRWRbjPLn4AdZ5DWmVJOyyplHWH718cJC7GL9gdU
	xcLZ0vabPVcjC26VEqBrAN0crna87WUOgtCmIriYbKolI0cNO9TkWQiNwkMUmsQrMWLD
	4tEw==
MIME-Version: 1.0
Received: by 10.50.34.226 with SMTP id c2mr5826069igj.24.1354903684596; Fri,
	07 Dec 2012 10:08:04 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Fri, 7 Dec 2012 10:08:04 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
	<alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
Date: Sat, 8 Dec 2012 02:08:04 +0800
X-Google-Sender-Auth: k-ev-1NFcDW-7pUFSaZ6dcALkvE
Message-ID: <CAKhsbWZF6NaX92Cz3tp_OUqf=bd5qQ-W9hcPN4Y96D7EKoqXig@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7121696878454860437=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7121696878454860437==
Content-Type: multipart/alternative; boundary=14dae9340595d3456604d04718f3

--14dae9340595d3456604d04718f3
Content-Type: text/plain; charset=ISO-8859-1

>
>
> I won't be able to reproduce the bug, so I thank you in advance for any
> help you can give me with testing a fix
>
Thanks for your fix, I'll try it out tomorrow and post the result -- it's
too late today.
But I'm wondering if it is specific to 4.1.3, or has been fixed in 4.2
already.

Thanks,
Timothy

--14dae9340595d3456604d04718f3
Content-Type: text/html; charset=ISO-8859-1

<div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5">
<br>
</div></div>I won&#39;t be able to reproduce the bug, so I thank you in advance for any<br>
help you can give me with testing a fix<br>
<div class="im"><br></div></blockquote>Thanks for your fix, I&#39;ll try it out tomorrow and post the result -- it&#39;s too late today.<br>But I&#39;m wondering if it is specific to 4.1.3, or have been fixed in 4.2 already.<br>
<br>Thanks,<br>Timothy<br></div><br></div>

--14dae9340595d3456604d04718f3--


--===============7121696878454860437==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7121696878454860437==--


From xen-devel-bounces@lists.xen.org Fri Dec 07 18:12:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2P5-0002Gm-E6; Fri, 07 Dec 2012 18:12:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Th2P3-0002Gg-KH
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 18:12:01 +0000
Received: from [193.109.254.147:9989] by server-14.bemta-14.messagelabs.com id
	F7/08-14517-07132C05; Fri, 07 Dec 2012 18:12:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1354903919!9333974!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTAzMDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16530 invoked from network); 7 Dec 2012 18:12:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:12:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216815570"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:11:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:11:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Th2Oz-0007Nn-OX;
	Fri, 07 Dec 2012 18:11:57 +0000
Date: Fri, 7 Dec 2012 18:11:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWZF6NaX92Cz3tp_OUqf=bd5qQ-W9hcPN4Y96D7EKoqXig@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212071811380.8801@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
	<alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
	<CAKhsbWZF6NaX92Cz3tp_OUqf=bd5qQ-W9hcPN4Y96D7EKoqXig@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Dec 2012, G.R. wrote:
> I won't be able to reproduce the bug, so I thank you in advance for any
> help you can give me with testing a fix
> 
> Thanks for your fix, I'll try it out tomorrow and post the result -- it's too late today.
> But I'm wondering if it is specific to 4.1.3, or has been fixed in 4.2 already.

I don't think it has been fixed in 4.2.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:16:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2TY-0002Ry-4x; Fri, 07 Dec 2012 18:16:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Th2TW-0002Rr-N1
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 18:16:39 +0000
Received: from [85.158.137.99:21979] by server-15.bemta-3.messagelabs.com id
	98/35-23779-18232C05; Fri, 07 Dec 2012 18:16:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354904190!13045717!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16267 invoked from network); 7 Dec 2012 18:16:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:16:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47034288"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:15:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:15:29 -0500
Received: from [10.80.3.146] (helo=localmatsp-T3500.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<mats.petersson@citrix.com>)	id 1Th2SP-0007R1-AV;
	Fri, 07 Dec 2012 18:15:29 +0000
From: Mats Petersson <mats.petersson@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 18:14:51 +0000
Message-ID: <1354904091-30160-1-git-send-email-mats.petersson@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>, david.vrabel@citrix.com,
	Ian.Campbell@citrix.com, JBeulich@suse.com, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The V4 patch contains some cosmetic adjustments: a spelling mistake in a
comment has been fixed, a condition in an if-statement has been reversed
to avoid an "else" branch, and a stray empty line added by accident has
been reverted. Thanks to Jan Beulich for the comments.


V3 of the patch contains improvements suggested by:
David Vrabel - make it two distinct external functions, doc-comments.
Ian Campbell - use one common function for the main work.
Jan Beulich  - found a bug and pointed out some whitespace problems.


One comment asked for more details on the improvements:
Using a small test program to map guest memory into Dom0 (repeatedly,
for "Iterations" rounds, mapping the same first "Num Pages"):

Iterations    Num Pages    Time 3.7rc4    Time with this patch
5000          4096         76.107         37.027
10000         2048         75.703         37.177
20000         1024         75.893         37.247

So it is a little better than twice as fast.

Using this patch in migration, with "time" measuring the overall time
it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest memory,
one network card, one disk, guest at the login prompt):

Time 3.7rc5    Time with this patch
6.697          5.667

Since migration involves a whole lot of other things, it is only about
15% faster - but still a good improvement. A similar measurement with a
guest that is running code to "dirty" memory shows about a 23%
improvement, as the migration spends more time copying dirtied memory.

As discussed elsewhere, a good deal more can be gained from improving
the munmap system call, but it is a little tricky to get that in without
regressing non-PVOPS kernels, so I will have another look at it.

Signed-off-by: Mats Petersson <mats.petersson@citrix.com>

---
 arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
 drivers/xen/privcmd.c |   55 +++++++++++++++++----
 include/xen/xen-ops.h |    5 ++
 3 files changed, 169 insertions(+), 23 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..a67774f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	unsigned long *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contiguous range, just update the mfn itself,
+	   else advance the pointer to the next mfn. */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       unsigned long mfn, int nr,
-			       pgprot_t prot, unsigned domid)
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to an array of per-page error codes (may be NULL)
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that err_ptr is used to indicate whether *mfn
+ * is a list or a "first mfn of a contiguous range". */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			unsigned long *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* record the error and bump done to skip it */
+				last_err = err_ptr[index + done] = err;
+				done++;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr consecutive pages from it into this kernel's
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid)
+{
+	/* We BUG_ON because it is a programmer error to pass a NULL err_ptr,
+	 * and it would later be very hard to track down "wrong memory was
+	 * mapped in" to this actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..75f6e86 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but use each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -250,7 +284,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -260,17 +294,20 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user_mfn;
 };
 
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 	int ret;
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain);
+	BUG_ON(nr < 0);
 
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	ret = xen_remap_domain_mfn_array(st->vma,
+					 st->va & PAGE_MASK,
+					 mfnp, nr,
+					 st->err,
+					 st->vma->vm_page_prot,
+					 st->domain);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..22cad75 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid);
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:16:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2TY-0002Ry-4x; Fri, 07 Dec 2012 18:16:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Th2TW-0002Rr-N1
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 18:16:39 +0000
Received: from [85.158.137.99:21979] by server-15.bemta-3.messagelabs.com id
	98/35-23779-18232C05; Fri, 07 Dec 2012 18:16:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1354904190!13045717!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODc4Nzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16267 invoked from network); 7 Dec 2012 18:16:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 18:16:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="47034288"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 18:15:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 13:15:29 -0500
Received: from [10.80.3.146] (helo=localmatsp-T3500.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<mats.petersson@citrix.com>)	id 1Th2SP-0007R1-AV;
	Fri, 07 Dec 2012 18:15:29 +0000
From: Mats Petersson <mats.petersson@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 18:14:51 +0000
Message-ID: <1354904091-30160-1-git-send-email-mats.petersson@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>, david.vrabel@citrix.com,
	Ian.Campbell@citrix.com, JBeulich@suse.com, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The V4 patch contains some cosmetic adjustments (spelling mistake in
comment and reversing condition in an if-statement to avoid having an
"else" branch, a random empty line added by accident now reverted back
to previous state). Thanks to Jan Beulich for the comments.


The V3 of the patch contains suggested improvements from:
David Vrabel - make it two distinct external functions, doc-comments.
Ian Campbell - use one common function for the main work.
Jan Beulich  - found a bug and pointed out some whitespace problems.


One comment asked for more details on the improvements:
Using a small test program to map Guest memory into Dom0 (repeatedly
for "Iterations" mapping the same first "Num Pages")
Iterations    Num Pages	   Time 3.7rc4	Time With this patch
5000	      4096	   76.107	37.027
10000	      2048	   75.703	37.177
20000	      1024	   75.893	37.247
So a little better than twice as fast.

Using this patch in migration, using "time" to measure the overall
time it take to migrate a guest (Debian Squeeze 6.0, 1024MB guest
memory, one network card, one disk, guest at login prompt):
Time 3.7rc5		Time With this patch
6.697			5.667
Since migration involves a whole lot of other things, it's only about
15% faster - but still a good improvement. Similar measurement with a
guest that is running code to "dirty" memory shows about 23%
improvement, as it spends more time copying dirtied memory.

As discussed elsewhere, a good deal more can be had from improving the
munmap system call, but it is a little tricky to get this in without
worsening non-PVOPS kernel, so I will have another look at this.

Signed-off-by: Mats Petersson <mats.petersson@citrix.com>

---
 arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
 drivers/xen/privcmd.c |   55 +++++++++++++++++----
 include/xen/xen-ops.h |    5 ++
 3 files changed, 169 insertions(+), 23 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..a67774f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	unsigned long *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contigious range, just update the mfn itself,
+	   else update pointer to be "next mfn". */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       unsigned long mfn, int nr,
-			       pgprot_t prot, unsigned domid)
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number entries in the MFN array
+ * @err_ptr: pointer to array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that err_ptr is used to indicate whether *mfn
+ * is a list or a "first mfn of a contiguous range". */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			unsigned long *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use the err_ptr to indicate if there we are doing a contigious
+	 * mapping or a discontigious mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr consecutive pages starting from it
+ * into this kernel's memory. The owner of the pages is defined by domid.
+ * Where the pages are mapped is determined by addr, and vma is used for
+ * "accounting" of the pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from it into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in for any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid)
+{
+	/* BUG_ON here because passing a NULL err_ptr is a programmer error,
+	 * and the consequences later on are hard to diagnose: it is very
+	 * difficult to work out why the "wrong" memory was mapped in.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..75f6e86 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but use each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -250,7 +284,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -260,17 +294,20 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user_mfn;
 };
 
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 	int ret;
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain);
+	BUG_ON(nr < 0);
 
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	ret = xen_remap_domain_mfn_array(st->vma,
+					 st->va & PAGE_MASK,
+					 mfnp, nr,
+					 st->err,
+					 st->vma->vm_page_prot,
+					 st->domain);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..22cad75 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid);
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5
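The inner do/while in do_remap_mfn is the heart of the change: when the hypercall fails partway through, it reports how many entries it processed, the error is recorded for the failing frame, and mapping resumes just past it. Below is a standalone sketch of that bookkeeping; `fake_mmu_update` and `process_batch` are hypothetical stand-ins for illustration only, not kernel code, and assume (as HYPERVISOR_mmu_update does via its success-count argument) that the number of entries processed before a failure is reported back.

```c
#include <assert.h>

/* Stand-in for HYPERVISOR_mmu_update: "maps" entries until it hits a
 * negative value (a frame that fails), reports via *done how many
 * entries it processed, and returns the failing frame's error code
 * (or 0 when every entry succeeded). */
static int fake_mmu_update(const int *frames, int count, int *done)
{
	int i;

	for (i = 0; i < count; i++) {
		if (frames[i] < 0) {
			*done = i;	/* entries before the failure */
			return frames[i];
		}
	}
	*done = count;
	return 0;
}

/* Mirrors the patch's inner do/while: on an error, record it against
 * the failing frame, skip past it, and keep going until the whole
 * batch has been attempted. Returns the last error seen (0 if none). */
int process_batch(const int *frames, int batch, int *err_ptr)
{
	int index = 0, batch_left = batch, last_err = 0;

	do {
		int done = 0;
		int err = fake_mmu_update(&frames[index], batch_left, &done);

		if (err < 0) {
			last_err = err_ptr[index + done] = err;
			done++;		/* skip the failing frame */
		}
		batch_left -= done;
		index += done;
	} while (batch_left);

	return last_err;
}
```

Note how a single failure costs one extra hypercall rather than aborting the batch, which is the point of the V2 behaviour: every frame gets an individual verdict in err_ptr.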

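The chunking in traverse_pages_block is simple arithmetic: each page of buffered data holds PAGE_SIZE/size records, so nelem records are handed to the callback in ceil(nelem / (PAGE_SIZE/size)) invocations instead of one call per record. A small sketch of just that arithmetic (`chunks_needed` is illustrative, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Count how many callback invocations traverse_pages_block-style
 * chunking needs: each page of buffered data holds PAGE_SIZE/size
 * records, and the final chunk may be short. */
unsigned chunks_needed(unsigned nelem, size_t size)
{
	unsigned calls = 0;

	while (nelem) {
		unsigned nr = (unsigned)(PAGE_SIZE / size);

		if (nr > nelem)
			nr = nelem;	/* final, partial page */
		nelem -= nr;
		calls++;
	}
	return calls;
}
```

For 8-byte xen_pfn_t entries this is 512 records per call, so a 1024-entry MMAPBATCH request goes from 1024 callback invocations (and 1024 single-page remap hypercalls) down to 2.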

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 18:26:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 18:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th2cV-0002oG-MF; Fri, 07 Dec 2012 18:25:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Th2cU-0002oB-Fv
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 18:25:54 +0000
Received: from [85.158.139.83:17923] by server-10.bemta-5.messagelabs.com id
	31/F6-13383-1B432C05; Fri, 07 Dec 2012 18:25:53 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354904751!17636325!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19899 invoked from network); 7 Dec 2012 18:25:52 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 18:25:52 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:53728 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Th2gD-0002fc-MU; Fri, 07 Dec 2012 19:29:45 +0100
Date: Fri, 7 Dec 2012 19:25:46 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1877721103.20121207192546@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212071655460.8801@kaball.uk.xensource.com>
References: <272767244.20121206175454@eikelenboom.it>
	<alpine.DEB.2.02.1212071655460.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Upstream qemu-xen,
	log verbosity and compile errors when enabling debug, filenaming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, December 7, 2012, 6:01:43 PM, you wrote:

> On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> Hi All,
>> 
>> Yesterday I have tried building and using upstream qemu and seabios.
>> Config.mk:
>> QEMU_UPSTREAM_URL ?= git://git.qemu.org/qemu.git
>> QEMU_UPSTREAM_REVISION ?= master
>> 
>> SEABIOS_UPSTREAM_URL ?= git://git.qemu.org/seabios.git
>> SEABIOS_UPSTREAM_TAG ?= master
>> 
>> And I'm happy to say that it works quite OK, even with (secondary PCI) pci-passthrough (using an ATI gfx adapter and Windows 7 as guest OS) :-).
>> 
>> But it seems to have an issue with a USB controller which is trying to use MSI-X interrupts, which makes xl dmesg report:
>> (XEN) [2012-12-06 16:07:24] vmsi.c:108:d32767 Unsupported delivery mode 3
>> and when "pci_msitranslate=0" is set the error still occurs, only this time the correct domain number is reported instead of 32767.
>> 
>> 
>> However, when trying to debug, I noticed that although I made a debug build (make debug=y && make debug=y install), qemu-dm-<guestname>.log stays almost empty.
>> It seems none of the defines related to debugging are set.
>> 
>> - Would it be appropriate to enable them all when making a debug build ?
>> - Would it be wise to also have some more verbose logging when not running a debug build ?

> Yes and yes

>> - And if yes, what would be the nicest way to set the defines ?

> My guess is that it would be enough to turn on XEN_PT_LOGGING_ENABLED by
> default

>> - Should the naming of the debug defines be made more consistent ?

> Yes



>> 
>> When enabling these debug defines by hand:
>> 
>> xen-all.c
>> #define DEBUG_XEN
>> 
>> xen-mapcache.c
>> #define MAPCACHE_DEBUG
>> 
>> hw/xen-host-pci-device.c
>> #define XEN_HOST_PCI_DEVICE_DEBUG
>> 
>> hw/xen_platform.c
>> #define DEBUG_PLATFORM
>> 
>> hw/xen_pt.h
>> #define XEN_PT_LOGGING_ENABLED
>> #define XEN_PT_DEBUG_PCI_CONFIG_ACCESS
>> 
>> I get a lot of compile errors related to wrong types in the debug printf's.

> That's really bad. I would like upstream QEMU to have the same level of
> logging as qemu-xen-traditional by default. And they should compile.


>> Another thing that occurred to me was that the file naming doesn't seem to be overly consistent:
>> 
>> xen-all.c
>> xen-mapcache.c
>> xen-mapcache.h
>> xen-stub.c
>> xen_apic.c
>> hw/xen_backend.c
>> hw/xen_backend.h
>> hw/xen_blkif.h
>> hw/xen_common.h
>> hw/xen_console.c
>> hw/xen_devconfig.c
>> hw/xen_disk.c
>> hw/xen_domainbuild.c
>> hw/xen_domainbuild.h
>> hw/xenfb.c
>> hw/xen.h
>> hw/xen-host-pci-device.c
>> hw/xen-host-pci-device.h
>> hw/xen_machine_pv.c
>> hw/xen_nic.c
>> hw/xen_platform.c
>> hw/xen_pt.c
>> hw/xen_pt_config_init.c
>> hw/xen_pt.h
>> hw/xen_pt_msi.c
>> 
>> Would it be worthwhile to make it:
>> - consistently all underscores or all hyphens ?
>> - always xen_ (or xen- depending on the above) ?

> Yes, definitely.
> Given that the development window for QEMU 1.4 has just opened might
> even be the right time to make these changes.

> Are you volunteering? :)

Erhmm, yes, I think I should be able to accomplish this :-)
And yes, I did notice the 1.3 release :-)

Patches would be made directly against the qemu-upstream git tree, with a first round to xen-devel and, once acked, sent to qemu-list ?


For the file renaming, the rest of the qemu sources seem to be mixed, but I think it would be neater for the Xen part to stick to one of the two .. but which one would be preferred ?
1. all underscores
2. all hyphens

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:06:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3F5-0003KE-9f; Fri, 07 Dec 2012 19:05:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3F3-0003K9-BO
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 19:05:45 +0000
Received: from [85.158.137.99:34969] by server-14.bemta-3.messagelabs.com id
	B4/EC-31424-80E32C05; Fri, 07 Dec 2012 19:05:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354907143!15245177!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30988 invoked from network); 7 Dec 2012 19:05:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:05:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; 
   d="scan'208";a="3359"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:05:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:05:40 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th3Ey-0000s1-I3;
	Fri, 07 Dec 2012 19:05:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th3Ey-0006qO-E7;
	Fri, 07 Dec 2012 19:05:40 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14595-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 19:05:40 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14595: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14595 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14595/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14563
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14563
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 14563

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:06:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3F5-0003KE-9f; Fri, 07 Dec 2012 19:05:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3F3-0003K9-BO
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 19:05:45 +0000
Received: from [85.158.137.99:34969] by server-14.bemta-3.messagelabs.com id
	B4/EC-31424-80E32C05; Fri, 07 Dec 2012 19:05:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1354907143!15245177!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30988 invoked from network); 7 Dec 2012 19:05:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:05:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; 
   d="scan'208";a="3359"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:05:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:05:40 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th3Ey-0000s1-I3;
	Fri, 07 Dec 2012 19:05:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th3Ey-0006qO-E7;
	Fri, 07 Dec 2012 19:05:40 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14595-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 19:05:40 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14595: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14595 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14595/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14563
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14563
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 14563

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:11:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:11:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3KW-0003Xk-47; Fri, 07 Dec 2012 19:11:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3KU-0003Xf-8S
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:11:22 +0000
Received: from [85.158.139.211:24938] by server-3.bemta-5.messagelabs.com id
	ED/3E-25441-95F32C05; Fri, 07 Dec 2012 19:11:21 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1354907480!18051322!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27110 invoked from network); 7 Dec 2012 19:11:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:11:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; 
   d="scan'208";a="3429"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:11:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:11:19 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th3KR-0000sd-CG; Fri, 07 Dec 2012 19:11:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th3KR-0006pJ-2D;
	Fri, 07 Dec 2012 19:11:19 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20674.16214.934271.479230@mariner.uk.xensource.com>
Date: Fri, 7 Dec 2012 19:11:18 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354101923.25834.16.camel@zakaz.uk.xensource.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "jfehlig@suse.com" <jfehlig@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
> Can we provide, or (more likely) require the application to provide, a
> lock (perhaps per-event or, again more likely, per-event-loop) which
> must be held while processing callbacks and also while events are being
> registered/unregistered with the application's event handling subsystem?
> With such a lock in place the application would be able to guarantee
> that having returned from the deregister hook any further events would
> be seen as spurious events by its own event processing loop.

I think this might be difficult to get right without deadlocks.

...
> Last half-brained idea would be to split the deregistration into two.
> libxl calls up to the app saying "please deregister" and the app calls
> back to libxl to say "I am no longer watching for this event and
> guarantee that I won't deliver it any more". (Presumably this would be
> implemented by the application via some combination of the above). This
> could be done in a somewhat compatible way by allowing the deregister
> hook to return "PENDING".

This is in fact straightforward and is a subset of the existing API.
If we have libxl always call timeout_modify with abs={0,0}, it will
get timeout_occurred when the application's event loop has dealt with
it.  We can simply never call timeout_deregister.

I have implemented this in the 2-patch RFD series I'm about to send.
NB this series has been compiled but not (as yet) executed by me...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:15:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3Oe-0003fl-PZ; Fri, 07 Dec 2012 19:15:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3Od-0003ff-Q5
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:15:40 +0000
Received: from [85.158.137.99:34877] by server-10.bemta-3.messagelabs.com id
	ED/05-19806-A5042C05; Fri, 07 Dec 2012 19:15:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354907737!18106930!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26460 invoked from network); 7 Dec 2012 19:15:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:15:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3515"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:15:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:15:37 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th3Ob-0000tS-1t; Fri, 07 Dec 2012 19:15:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th3Oa-00070v-Tr;
	Fri, 07 Dec 2012 19:15:36 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Fri, 7 Dec 2012 19:15:33 +0000
Message-ID: <1354907734-26934-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout and
..._fd, in a multithreaded program those calls may be arbitrarily
delayed in relation to other activities within the program.

libxl therefore needs to be prepared to receive very old event
callbacks.  Arrange for this to be the case for fd callbacks.

This requires a new layer of indirection through a "hook nexus" struct
which can outlive the libxl__ev_foo.  Allocation and deallocation of
these nexi are mostly handled in the OSEVENT macros which wrap up
the application's callbacks.

Document the problem and the solution in a comment in libxl_event.c
just before the definition of struct libxl__osevent_hook_nexus.

There is still a race relating to libxl__osevent_occurred_timeout;
this will be addressed in the following patch.

Reported-by: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v2:
  - Prepare for fixing timeout race too
  - Break out osevent_release_nexus()
  - nexusop argument to OSEVENT* macros
  - Clarify OSEVENT* nexusop hooks
  - osevent_ev_from_hook_nexus takes a libxl__osevent_hook_nexus*
---
 tools/libxl/libxl_event.c    |  184 +++++++++++++++++++++++++++++++++++------
 tools/libxl/libxl_internal.h |    8 ++-
 2 files changed, 163 insertions(+), 29 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 72cb723..f1fe425 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -38,23 +38,131 @@
  * The application's registration hooks should be called ONLY via
  * these macros, with the ctx locked.  Likewise all the "occurred"
  * entrypoints from the application should assert(!in_hook);
+ *
+ * During the hook call - including while the arguments are being
+ * evaluated - ev->nexus is guaranteed to be valid and refer to the
+ * nexus which is being used for this event registration.  The
+ * arguments should specify ev->nexus for the for_libxl argument and
+ * ev->nexus->for_app_reg (or a pointer to it) for for_app_reg.
  */
-#define OSEVENT_HOOK_INTERN(retval, hookname, ...) do {                      \
-    if (CTX->osevent_hooks) {                                                \
-        CTX->osevent_in_hook++;                                              \
-        retval CTX->osevent_hooks->hookname(CTX->osevent_user, __VA_ARGS__); \
-        CTX->osevent_in_hook--;                                              \
-    }                                                                        \
+#define OSEVENT_HOOK_INTERN(retval, failedp, evkind, hookop, nexusop, ...) do { \
+    if (CTX->osevent_hooks) {                                           \
+        CTX->osevent_in_hook++;                                         \
+        libxl__osevent_hook_nexi *nexi = &CTX->hook_##evkind##_nexi_idle; \
+        osevent_hook_pre_##nexusop(gc, ev, nexi, &ev->nexus);            \
+        retval CTX->osevent_hooks->evkind##_##hookop                    \
+            (CTX->osevent_user, __VA_ARGS__);                           \
+        if ((failedp))                                                  \
+            osevent_hook_failed_##nexusop(gc, ev, nexi, &ev->nexus);     \
+        CTX->osevent_in_hook--;                                         \
+    }                                                                   \
 } while (0)
 
-#define OSEVENT_HOOK(hookname, ...) ({                                       \
-    int osevent_hook_rc = 0;                                                 \
-    OSEVENT_HOOK_INTERN(osevent_hook_rc = , hookname, __VA_ARGS__);          \
-    osevent_hook_rc;                                                         \
+#define OSEVENT_HOOK(evkind, hookop, nexusop, ...) ({                   \
+    int osevent_hook_rc = 0;                                    \
+    OSEVENT_HOOK_INTERN(osevent_hook_rc =, !!osevent_hook_rc,   \
+                        evkind, hookop, nexusop, __VA_ARGS__);          \
+    osevent_hook_rc;                                            \
 })
 
-#define OSEVENT_HOOK_VOID(hookname, ...) \
-    OSEVENT_HOOK_INTERN(/* void */, hookname, __VA_ARGS__)
+#define OSEVENT_HOOK_VOID(evkind, hookop, nexusop, ...)                         \
+    OSEVENT_HOOK_INTERN(/* void */, 0, evkind, hookop, nexusop, __VA_ARGS__)
+
+/*
+ * The application's calls to libxl_osevent_occurred_... may be
+ * indefinitely delayed with respect to the rest of the program (since
+ * they are not necessarily called with any lock held).  So the
+ * for_libxl value we receive may be (almost) arbitrarily old.  All we
+ * know is that it came from this ctx.
+ *
+ * Therefore we may not free the object referred to by any for_libxl
+ * value until we free the whole libxl_ctx.  And if we reuse it we
+ * must be able to tell when an old use turns up, and discard the
+ * stale event.
+ *
+ * Thus we cannot use the ev directly as the for_libxl value - we need
+ * a layer of indirection.
+ *
+ * We do this by keeping a pool of libxl__osevent_hook_nexus structs,
+ * and use pointers to them as for_libxl values.  In fact, there are
+ * two pools: one for fds and one for timeouts.  This ensures that we
+ * don't risk a type error when we upcast nexus->ev.  In each nexus
+ * the ev is either null or points to a valid libxl__ev_time or
+ * libxl__ev_fd, as applicable.
+ *
+ * We /do/ allow ourselves to reassociate an old nexus with a new ev
+ * as otherwise we would have to leak nexi.  (This reassociation
+ * might, of course, be an old ev being reused for a new purpose so
+ * simply comparing the ev pointer is not sufficient.)  Thus the
+ * libxl_osevent_occurred functions need to check that the condition
+ * allegedly signalled by this event actually exists.
+ *
+ * The nexi and the lists are all protected by the ctx lock.
+ */
+
+struct libxl__osevent_hook_nexus {
+    void *ev;
+    void *for_app_reg;
+    LIBXL_SLIST_ENTRY(libxl__osevent_hook_nexus) next;
+};
+
+static void *osevent_ev_from_hook_nexus(libxl_ctx *ctx,
+           libxl__osevent_hook_nexus *nexus /* pass  void *for_libxl */)
+{
+    return nexus->ev;
+}
+
+static void osevent_release_nexus(libxl__gc *gc,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus *nexus)
+{
+    nexus->ev = 0;
+    LIBXL_SLIST_INSERT_HEAD(nexi_idle, nexus, next);
+}
+
+/*----- OSEVENT* hook functions for nexusop "alloc" -----*/
+static void osevent_hook_pre_alloc(libxl__gc *gc, void *ev,
+                                   libxl__osevent_hook_nexi *nexi_idle,
+                                   libxl__osevent_hook_nexus **nexus_r)
+{
+    libxl__osevent_hook_nexus *nexus = LIBXL_SLIST_FIRST(nexi_idle);
+    if (nexus) {
+        LIBXL_SLIST_REMOVE_HEAD(nexi_idle, next);
+    } else {
+        nexus = libxl__zalloc(NOGC, sizeof(*nexus));
+    }
+    nexus->ev = ev;
+    *nexus_r = nexus;
+}
+static void osevent_hook_failed_alloc(libxl__gc *gc, void *ev,
+                                      libxl__osevent_hook_nexi *nexi_idle,
+                                      libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+
+/*----- OSEVENT* hook functions for nexusop "release" -----*/
+static void osevent_hook_pre_release(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+static void osevent_hook_failed_release(libxl__gc *gc, void *ev,
+                                        libxl__osevent_hook_nexi *nexi_idle,
+                                        libxl__osevent_hook_nexus **nexus)
+{
+    abort();
+}
+
+/*----- OSEVENT* hook functions for nexusop "noop" -----*/
+static void osevent_hook_pre_noop(libxl__gc *gc, void *ev,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus **nexus) { }
+static void osevent_hook_failed_noop(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus) { }
+
 
 /*
  * fd events
@@ -72,7 +180,8 @@ int libxl__ev_fd_register(libxl__gc *gc, libxl__ev_fd *ev,
 
     DBG("ev_fd=%p register fd=%d events=%x", ev, fd, events);
 
-    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
+    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
+                      events, ev->nexus);
     if (rc) goto out;
 
     ev->fd = fd;
@@ -97,7 +206,7 @@ int libxl__ev_fd_modify(libxl__gc *gc, libxl__ev_fd *ev, short events)
 
     DBG("ev_fd=%p modify fd=%d events=%x", ev, ev->fd, events);
 
-    rc = OSEVENT_HOOK(fd_modify, ev->fd, &ev->for_app_reg, events);
+    rc = OSEVENT_HOOK(fd,modify, noop, ev->fd, &ev->nexus->for_app_reg, events);
     if (rc) goto out;
 
     ev->events = events;
@@ -119,7 +228,7 @@ void libxl__ev_fd_deregister(libxl__gc *gc, libxl__ev_fd *ev)
 
     DBG("ev_fd=%p deregister fd=%d", ev, ev->fd);
 
-    OSEVENT_HOOK_VOID(fd_deregister, ev->fd, ev->for_app_reg);
+    OSEVENT_HOOK_VOID(fd,deregister, release, ev->fd, ev->nexus->for_app_reg);
     LIBXL_LIST_REMOVE(ev, entry);
     ev->fd = -1;
 
@@ -171,7 +280,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 {
     int rc;
 
-    rc = OSEVENT_HOOK(timeout_register, &ev->for_app_reg, absolute, ev);
+    rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
+                      absolute, ev->nexus);
     if (rc) return rc;
 
     ev->infinite = 0;
@@ -184,7 +294,7 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout_deregister, ev->for_app_reg);
+        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -270,7 +380,8 @@ int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
         rc = time_register_finite(gc, ev, absolute);
         if (rc) goto out;
     } else {
-        rc = OSEVENT_HOOK(timeout_modify, &ev->for_app_reg, absolute);
+        rc = OSEVENT_HOOK(timeout,modify, noop,
+                          &ev->nexus->for_app_reg, absolute);
         if (rc) goto out;
 
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
@@ -1010,35 +1121,54 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
 
 
 void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
-                               int fd, short events, short revents)
+                               int fd, short events_ign, short revents_ign)
 {
-    libxl__ev_fd *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    assert(fd == ev->fd);
-    revents &= ev->events;
-    if (revents)
-        ev->func(egc, ev, fd, ev->events, revents);
+    libxl__ev_fd *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
+    if (ev->fd != fd) goto out;
 
+    struct pollfd check;
+    for (;;) {
+        check.fd = fd;
+        check.events = ev->events;
+        int r = poll(&check, 1, 0);
+        if (!r)
+            goto out;
+        if (r==1)
+            break;
+        assert(r<0);
+        if (errno != EINTR) {
+            LIBXL__EVENT_DISASTER(egc, "failed poll to check for fd", errno, 0);
+            goto out;
+        }
+    }
+
+    if (check.revents)
+        ev->func(egc, ev, fd, ev->events, check.revents);
+
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
 
 void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 {
-    libxl__ev_time *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
     assert(!ev->infinite);
+
     LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     ev->func(egc, ev, &ev->abs);
 
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index cba3616..6484bcb 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -136,6 +136,8 @@ typedef struct libxl__gc libxl__gc;
 typedef struct libxl__egc libxl__egc;
 typedef struct libxl__ao libxl__ao;
 typedef struct libxl__aop_occurred libxl__aop_occurred;
+typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
+typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
 
 _hidden void libxl__alloc_failed(libxl_ctx *, const char *func,
                          size_t nmemb, size_t size) __attribute__((noreturn));
@@ -163,7 +165,7 @@ struct libxl__ev_fd {
     libxl__ev_fd_callback *func;
     /* remainder is private for libxl__ev_fd... */
     LIBXL_LIST_ENTRY(libxl__ev_fd) entry;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 
@@ -178,7 +180,7 @@ struct libxl__ev_time {
     int infinite; /* not registered in list or with app if infinite */
     LIBXL_TAILQ_ENTRY(libxl__ev_time) entry;
     struct timeval abs;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
@@ -329,6 +331,8 @@ struct libxl__ctx {
     libxl__poller poller_app; /* libxl_osevent_beforepoll and _afterpoll */
     LIBXL_LIST_HEAD(, libxl__poller) pollers_event, pollers_idle;
 
+    LIBXL_SLIST_HEAD(libxl__osevent_hook_nexi, libxl__osevent_hook_nexus)
+        hook_fd_nexi_idle, hook_timeout_nexi_idle;
     LIBXL_LIST_HEAD(, libxl__ev_fd) efds;
     LIBXL_TAILQ_HEAD(, libxl__ev_time) etimes;
 
-- 
1.7.2.5
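[Editorial sketch, not part of the patch: the "hook nexus" pooling scheme
described in the commit message and comment block above can be illustrated
standalone.  The struct and function names below are invented for the
illustration, and a plain singly linked list stands in for the
LIBXL_SLIST macros; the point is that a nexus is never freed, only parked
on an idle list, so a stale for_libxl pointer remains dereferenceable and
a stale callback can be detected and discarded.]

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical stand-in for libxl__osevent_hook_nexus. */
struct nexus {
    void *ev;            /* NULL while the nexus sits on the idle list */
    struct nexus *next;
};

static struct nexus *idle_head = NULL;

/* "alloc" op: reuse an idle nexus if one exists, else allocate one. */
struct nexus *nexus_alloc(void *ev)
{
    struct nexus *n = idle_head;
    if (n)
        idle_head = n->next;
    else {
        n = calloc(1, sizeof(*n));
        if (!n) abort();
    }
    n->ev = ev;
    return n;
}

/* "release" op: never free - park the nexus on the idle list so any
 * old for_libxl pointer handed to the application stays valid until
 * the whole context is torn down. */
void nexus_release(struct nexus *n)
{
    n->ev = NULL;
    n->next = idle_head;
    idle_head = n;
}

/* What the occurred entrypoints do first: a stale callback sees
 * ev == NULL (or a mismatched ev) and is simply dropped. */
void *nexus_lookup(struct nexus *for_libxl)
{
    return for_libxl->ev;
}
```

A released nexus may later be reassociated with a different ev, which is
why the occurred functions must also re-verify the condition the event
claims to report (as the fd path does with its zero-timeout poll).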


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
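[Editorial sketch, not libxl code: the stale-event re-check that the patch
adds to libxl_osevent_occurred_fd can also be shown in isolation.  Before
invoking the callback, the fd is re-polled with a zero timeout, so an
out-of-date notification for an fd that is no longer ready is dropped
rather than acted on.  The function name below is invented.]

```c
#include <poll.h>
#include <errno.h>
#include <unistd.h>
#include <assert.h>

/* Returns the events actually pending on fd right now (0 if none),
 * or -1 on a poll failure other than EINTR.  The zero timeout makes
 * this a pure status check, mirroring the loop added by the patch. */
int recheck_fd_ready(int fd, short wanted_events)
{
    struct pollfd check;
    for (;;) {
        check.fd = fd;
        check.events = wanted_events;
        int r = poll(&check, 1, 0 /* do not block */);
        if (r == 0)
            return 0;            /* stale notification: fd not ready */
        if (r == 1)
            return check.revents;
        /* r < 0: retry on EINTR, otherwise report the failure */
        if (errno != EINTR)
            return -1;
    }
}
```

In the patch itself a poll failure is escalated via LIBXL__EVENT_DISASTER
rather than returned, but the control flow is the same.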

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:16:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:16:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3Oi-0003gL-CN; Fri, 07 Dec 2012 19:15:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3Og-0003ff-IG
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:15:42 +0000
Received: from [85.158.137.99:45212] by server-10.bemta-3.messagelabs.com id
	1B/15-19806-E5042C05; Fri, 07 Dec 2012 19:15:42 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1354907737!18106930!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26823 invoked from network); 7 Dec 2012 19:15:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:15:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3516"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:15:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:15:40 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th3Oe-0000tV-OE; Fri, 07 Dec 2012 19:15:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th3Ob-00070z-Kn;
	Fri, 07 Dec 2012 19:15:37 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Fri, 7 Dec 2012 19:15:34 +0000
Message-ID: <1354907734-26934-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout, in a
multithreaded program those calls may be arbitrarily delayed in
relation to other activities within the program.

Specifically this means when ->timeout_deregister returns, libxl does
not know whether it can safely dispose of the for_libxl value or
whether it needs to retain it in case of an in-progress call to
_occurred_timeout.

The interface could be fixed by requiring the application to make a
new call into libxl to say that the deregistration was complete.

However that new call would have to be threaded through the
application's event loop; this is complicated and some application
authors are likely not to implement it properly.  Furthermore the
easiest way to implement this facility in most event loops is to queue
up a time event for "now".

Shortcut all of this by having libxl always call timeout_modify
setting abs={0,0} (ie, ASAP) instead of timeout_deregister.  This will
cause the application to call _occurred_timeout.  When processing this
calldown we see that we are no longer actually interested and simply
throw it away.

Additionally, there is a race between _occurred_timeout and
->timeout_modify.  If libxl ever adjusts the deadline for a timeout
the application may already be in the process of calling _occurred, in
which case the situation with for_app's lifetime becomes very
complicated.  Therefore abolish libxl__ev_time_modify_{abs,rel} (which
have no callers) and promise to the application only ever to call
->timeout_modify with abs=={0,0}.  The application still needs to cope
with ->timeout_modify racing with its internal function which calls
_occurred_timeout.  Document this.

This is a forwards-compatible change for applications using the libxl
API, and will hopefully eliminate these races in callback-supplying
applications (such as libvirt) without the need for corresponding
changes to the application.

For clarity, fold the body of time_register_finite into its one
remaining call site.  This makes the semantics of ev->infinite
slightly clearer.

Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_event.c |   88 +++++++--------------------------------------
 tools/libxl/libxl_event.h |   17 ++++++++-
 2 files changed, 28 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index f1fe425..65c34da 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -267,18 +267,11 @@ static int time_rel_to_abs(libxl__gc *gc, int ms, struct timeval *abs_out)
     return 0;
 }
 
-static void time_insert_finite(libxl__gc *gc, libxl__ev_time *ev)
-{
-    libxl__ev_time *evsearch;
-    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
-                              timercmp(&ev->abs, &evsearch->abs, >));
-    ev->infinite = 0;
-}
-
 static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
                                 struct timeval absolute)
 {
     int rc;
+    libxl__ev_time *evsearch;
 
     rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
                       absolute, ev->nexus);
@@ -286,7 +279,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 
     ev->infinite = 0;
     ev->abs = absolute;
-    time_insert_finite(gc, ev);
+    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
+                              timercmp(&ev->abs, &evsearch->abs, >));
 
     return 0;
 }
@@ -294,7 +288,11 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
+        struct timeval right_away = { 0, 0 };
+        ev->nexus->ev = 0;
+        OSEVENT_HOOK_VOID(timeout,modify,
+                          noop /* release nexus in _occurred_ */,
+                          ev->nexus->for_app_reg, right_away);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -364,70 +362,6 @@ int libxl__ev_time_register_rel(libxl__gc *gc, libxl__ev_time *ev,
     return rc;
 }
 
-int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
-                              struct timeval absolute)
-{
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify abs==%lu.%06lu",
-        ev, (unsigned long)absolute.tv_sec, (unsigned long)absolute.tv_usec);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (ev->infinite) {
-        rc = time_register_finite(gc, ev, absolute);
-        if (rc) goto out;
-    } else {
-        rc = OSEVENT_HOOK(timeout,modify, noop,
-                          &ev->nexus->for_app_reg, absolute);
-        if (rc) goto out;
-
-        LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
-        ev->abs = absolute;
-        time_insert_finite(gc, ev);
-    }
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
-int libxl__ev_time_modify_rel(libxl__gc *gc, libxl__ev_time *ev,
-                              int milliseconds)
-{
-    struct timeval absolute;
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify ms=%d", ev, milliseconds);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (milliseconds < 0) {
-        time_deregister(gc, ev);
-        ev->infinite = 1;
-        rc = 0;
-        goto out;
-    }
-
-    rc = time_rel_to_abs(gc, milliseconds, &absolute);
-    if (rc) goto out;
-
-    rc = libxl__ev_time_modify_abs(gc, ev, absolute);
-    if (rc) goto out;
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
 void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     CTX_LOCK;
@@ -1161,7 +1095,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    libxl__osevent_hook_nexus *nexus = for_libxl;
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, nexus);
+
+    osevent_release_nexus(gc, &CTX->hook_timeout_nexi_idle, nexus);
+
     if (!ev) goto out;
     assert(!ev->infinite);
 
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3bcb6d3..51f2721 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -287,8 +287,10 @@ typedef struct libxl_osevent_hooks {
   int (*timeout_register)(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl);
   int (*timeout_modify)(void *user, void **for_app_registration_update,
-                         struct timeval abs);
-  void (*timeout_deregister)(void *user, void *for_app_registration);
+                         struct timeval abs)
+      /* only ever called with abs={0,0}, meaning ASAP */;
+  void (*timeout_deregister)(void *user, void *for_app_registration)
+      /* will never be called */;
 } libxl_osevent_hooks;
 
 /* The application which calls register_fd_hooks promises to
@@ -337,6 +339,17 @@ typedef struct libxl_osevent_hooks {
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
+ * Note that the application must cope with a call from libxl to
+ * timeout_modify racing with its own call to
+ * libxl_osevent_occurred_timeout.  libxl guarantees that
+ * timeout_modify will only be called with abs={0,0} but the
+ * application must still ensure that libxl's attempt to cause the
+ * timeout to occur immediately is safely ignored even if the timeout
+ * is actually already in the process of occurring.
+ *
+ * timeout_deregister is not used because it forms part of a
+ * deprecated unsafe mode of use of the API.
+ *
  * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:19:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3SG-0003w8-0x; Fri, 07 Dec 2012 19:19:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3SF-0003w1-AJ
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 19:19:23 +0000
Received: from [85.158.137.99:6231] by server-9.bemta-3.messagelabs.com id
	E0/D9-02388-A3142C05; Fri, 07 Dec 2012 19:19:22 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354907961!17544229!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25968 invoked from network); 7 Dec 2012 19:19:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3590"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:19:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:19:21 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th3SD-0000xN-5v;
	Fri, 07 Dec 2012 19:19:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th3SD-0005i6-1Q;
	Fri, 07 Dec 2012 19:19:21 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14601-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 19:19:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14601: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14601 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14601/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
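    [Editorial note: the fixed pattern can be sketched generically.  This
    is not the actual Xen code; copy_to_guest_stub and do_pdc_op are
    hypothetical stand-ins.  The point is that the guest-copy primitives
    return the number of bytes NOT copied, and that count must be mapped
    to -EFAULT rather than leaked as the platform op's return value.]

    ```c
    #include <assert.h>
    #include <string.h>

    #define EFAULT 14

    /* Hypothetical stand-in for a copy-to-guest primitive: like the real
     * ones, it returns the number of bytes NOT copied (0 on success). */
    static unsigned long copy_to_guest_stub(void *dst, const void *src,
                                            unsigned long n, int fail)
    {
        if (fail)
            return n;          /* nothing copied */
        memcpy(dst, src, n);
        return 0;
    }

    /* Fixed pattern: map any nonzero "bytes left" count to -EFAULT
     * instead of returning the raw count. */
    static int do_pdc_op(void *dst, const void *src, unsigned long n,
                         int fail)
    {
        return copy_to_guest_stub(dst, src, n, fail) ? -EFAULT : 0;
    }

    int main(void)
    {
        char buf[4];
        assert(do_pdc_op(buf, "abc", 4, 0) == 0);
        assert(do_pdc_op(buf, "abc", 4, 1) == -EFAULT);
        return 0;
    }
    ```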
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
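    [Editorial note: the cancellation bug can be modelled with a small
    sketch.  All names here (vcpu_model, compat_wrapper) are hypothetical
    simplifications, not the real Xen code: creating a continuation
    rewinds the guest return address so the hypercall re-executes, and
    cancelling undoes that rewind, so cancelling without a prior create
    corrupts the return address.  The fix is the guard in the wrapper.]

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Illustrative model of hypercall continuations: create rewinds the
     * guest return address; cancel undoes the rewind. */
    struct vcpu_model { unsigned long rip; bool cont_created; };

    static void create_continuation(struct vcpu_model *v)
    {
        v->rip -= 2;              /* re-execute the hypercall insn */
        v->cont_created = true;
    }

    static void cancel_continuation(struct vcpu_model *v)
    {
        v->rip += 2;              /* undo the rewind */
        v->cont_created = false;
    }

    /* Fixed compat-wrapper pattern: cancel ONLY when a continuation was
     * actually established earlier. */
    static void compat_wrapper(struct vcpu_model *v, bool need_cont,
                               bool bail)
    {
        if (need_cont)
            create_continuation(v);
        if (bail && v->cont_created)   /* the guard added by the fix */
            cancel_continuation(v);
    }

    int main(void)
    {
        struct vcpu_model v = { .rip = 100, .cont_created = false };
        compat_wrapper(&v, false, true);  /* no continuation: rip intact */
        assert(v.rip == 100);
        compat_wrapper(&v, true, true);   /* create, then cancel: restored */
        assert(v.rip == 100);
        return 0;
    }
    ```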
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
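    [Editorial note: a generic sketch of the required behaviour, with
    hypothetical names (get_page_from_gfn_stub, a fixed-size page array);
    the real lookup is considerably more involved.  The invariant is that
    an invalid or out-of-range GFN must yield NULL on every path,
    including the non-translated one.]

    ```c
    #include <assert.h>
    #include <stddef.h>

    #define INVALID_GFN (~0UL)

    struct page { unsigned long gfn; };
    static struct page pages[4];

    /* Hypothetical lookup: after the fix, an invalid GFN returns NULL
     * regardless of which path (translated or not) is taken. */
    static struct page *get_page_from_gfn_stub(unsigned long gfn)
    {
        if (gfn == INVALID_GFN || gfn >= sizeof(pages)/sizeof(pages[0]))
            return NULL;
        return &pages[gfn];
    }

    int main(void)
    {
        assert(get_page_from_gfn_stub(2) == &pages[2]);
        assert(get_page_from_gfn_stub(INVALID_GFN) == NULL);
        assert(get_page_from_gfn_stub(1000) == NULL);
        return 0;
    }
    ```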
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:19:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3SG-0003w8-0x; Fri, 07 Dec 2012 19:19:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3SF-0003w1-AJ
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 19:19:23 +0000
Received: from [85.158.137.99:6231] by server-9.bemta-3.messagelabs.com id
	E0/D9-02388-A3142C05; Fri, 07 Dec 2012 19:19:22 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1354907961!17544229!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25968 invoked from network); 7 Dec 2012 19:19:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3590"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:19:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:19:21 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th3SD-0000xN-5v;
	Fri, 07 Dec 2012 19:19:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th3SD-0005i6-1Q;
	Fri, 07 Dec 2012 19:19:21 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14601-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 19:19:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14601: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14601 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14601/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:21:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3UQ-00044p-If; Fri, 07 Dec 2012 19:21:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijackson@chiark.greenend.org.uk>) id 1Th3UP-00044d-Ia
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:21:38 +0000
Received: from [85.158.139.211:9106] by server-16.bemta-5.messagelabs.com id
	9A/10-09208-0C142C05; Fri, 07 Dec 2012 19:21:36 +0000
X-Env-Sender: ijackson@chiark.greenend.org.uk
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354908095!19550892!1
X-Originating-IP: [212.13.197.229]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28035 invoked from network); 7 Dec 2012 19:21:35 -0000
Received: from chiark.greenend.org.uk (HELO chiark.greenend.org.uk)
	(212.13.197.229)
	by server-8.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 19:21:35 -0000
Received: by chiark.greenend.org.uk (Debian Exim 4.72 #1) with local
	(return-path ijackson@chiark.greenend.org.uk)
	id 1Th3UL-0002d7-HM; Fri, 07 Dec 2012 19:21:33 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Fri,  7 Dec 2012 19:21:30 +0000
Message-Id: <1354908091-9905-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout and
..._fd, in a multithreaded program those calls may be arbitrarily
delayed in relation to other activities within the program.

libxl therefore needs to be prepared to receive very old event
callbacks.  Arrange for this to be the case for fd callbacks.

This requires a new layer of indirection through a "hook nexus" struct
which can outlive the libxl__ev_foo.  Allocation and deallocation of
these nexi is mostly handled in the OSEVENT macros which wrap up
the application's callbacks.

Document the problem and the solution in a comment in libxl_event.c
just before the definition of struct libxl__osevent_hook_nexus.

There is still a race relating to libxl__osevent_occurred_timeout;
this will be addressed in the following patch.

Reported-by: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v2:
  - Prepare for fixing timeout race too
  - Break out osevent_release_nexus()
  - nexusop argument to OSEVENT* macros
  - Clarify OSEVENT* nexusop hooks
  - osevent_ev_from_hook_nexus takes a libxl__osevent_hook_nexus*
---
 tools/libxl/libxl_event.c    |  184 +++++++++++++++++++++++++++++++++++------
 tools/libxl/libxl_internal.h |    8 ++-
 2 files changed, 163 insertions(+), 29 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 72cb723..f1fe425 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -38,23 +38,131 @@
  * The application's registration hooks should be called ONLY via
  * these macros, with the ctx locked.  Likewise all the "occurred"
  * entrypoints from the application should assert(!in_hook);
+ *
+ * During the hook call - including while the arguments are being
+ * evaluated - ev->nexus is guaranteed to be valid and refer to the
+ * nexus which is being used for this event registration.  The
+ * arguments should specify ev->nexus for the for_libxl argument and
+ * ev->nexus->for_app_reg (or a pointer to it) for for_app_reg.
  */
-#define OSEVENT_HOOK_INTERN(retval, hookname, ...) do {                      \
-    if (CTX->osevent_hooks) {                                                \
-        CTX->osevent_in_hook++;                                              \
-        retval CTX->osevent_hooks->hookname(CTX->osevent_user, __VA_ARGS__); \
-        CTX->osevent_in_hook--;                                              \
-    }                                                                        \
+#define OSEVENT_HOOK_INTERN(retval, failedp, evkind, hookop, nexusop, ...) do { \
+    if (CTX->osevent_hooks) {                                           \
+        CTX->osevent_in_hook++;                                         \
+        libxl__osevent_hook_nexi *nexi = &CTX->hook_##evkind##_nexi_idle; \
+        osevent_hook_pre_##nexusop(gc, ev, nexi, &ev->nexus);            \
+        retval CTX->osevent_hooks->evkind##_##hookop                    \
+            (CTX->osevent_user, __VA_ARGS__);                           \
+        if ((failedp))                                                  \
+            osevent_hook_failed_##nexusop(gc, ev, nexi, &ev->nexus);     \
+        CTX->osevent_in_hook--;                                         \
+    }                                                                   \
 } while (0)
 
-#define OSEVENT_HOOK(hookname, ...) ({                                       \
-    int osevent_hook_rc = 0;                                                 \
-    OSEVENT_HOOK_INTERN(osevent_hook_rc = , hookname, __VA_ARGS__);          \
-    osevent_hook_rc;                                                         \
+#define OSEVENT_HOOK(evkind, hookop, nexusop, ...) ({                   \
+    int osevent_hook_rc = 0;                                    \
+    OSEVENT_HOOK_INTERN(osevent_hook_rc =, !!osevent_hook_rc,   \
+                        evkind, hookop, nexusop, __VA_ARGS__);          \
+    osevent_hook_rc;                                            \
 })
 
-#define OSEVENT_HOOK_VOID(hookname, ...) \
-    OSEVENT_HOOK_INTERN(/* void */, hookname, __VA_ARGS__)
+#define OSEVENT_HOOK_VOID(evkind, hookop, nexusop, ...)                         \
+    OSEVENT_HOOK_INTERN(/* void */, 0, evkind, hookop, nexusop, __VA_ARGS__)
+
+/*
+ * The application's calls to libxl_osevent_occurred_... may be
+ * indefinitely delayed with respect to the rest of the program (since
+ * they are not necessarily called with any lock held).  So the
+ * for_libxl value we receive may be (almost) arbitrarily old.  All we
+ * know is that it came from this ctx.
+ *
+ * Therefore we may not free the object referred to by any for_libxl
+ * value until we free the whole libxl_ctx.  And if we reuse it we
+ * must be able to tell when an old use turns up, and discard the
+ * stale event.
+ *
+ * Thus we cannot use the ev directly as the for_libxl value - we need
+ * a layer of indirection.
+ *
+ * We do this by keeping a pool of libxl__osevent_hook_nexus structs,
+ * and use pointers to them as for_libxl values.  In fact, there are
+ * two pools: one for fds and one for timeouts.  This ensures that we
+ * don't risk a type error when we upcast nexus->ev.  In each nexus
+ * the ev is either null or points to a valid libxl__ev_time or
+ * libxl__ev_fd, as applicable.
+ *
From xen-devel-bounces@lists.xen.org Fri Dec 07 19:21:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3UQ-00044p-If; Fri, 07 Dec 2012 19:21:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijackson@chiark.greenend.org.uk>) id 1Th3UP-00044d-Ia
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:21:38 +0000
Received: from [85.158.139.211:9106] by server-16.bemta-5.messagelabs.com id
	9A/10-09208-0C142C05; Fri, 07 Dec 2012 19:21:36 +0000
X-Env-Sender: ijackson@chiark.greenend.org.uk
X-Msg-Ref: server-8.tower-206.messagelabs.com!1354908095!19550892!1
X-Originating-IP: [212.13.197.229]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28035 invoked from network); 7 Dec 2012 19:21:35 -0000
Received: from chiark.greenend.org.uk (HELO chiark.greenend.org.uk)
	(212.13.197.229)
	by server-8.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 19:21:35 -0000
Received: by chiark.greenend.org.uk (Debian Exim 4.72 #1) with local
	(return-path ijackson@chiark.greenend.org.uk)
	id 1Th3UL-0002d7-HM; Fri, 07 Dec 2012 19:21:33 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Fri,  7 Dec 2012 19:21:30 +0000
Message-Id: <1354908091-9905-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout and
..._fd, in a multithreaded program those calls may be arbitrarily
delayed in relation to other activities within the program.

libxl therefore needs to be prepared to receive very old event
callbacks.  Arrange for this to be the case for fd callbacks.

This requires a new layer of indirection through a "hook nexus" struct
which can outlive the libxl__ev_foo.  Allocation and deallocation of
these nexi is mostly handled in the OSEVENT macros which wrap up
the application's callbacks.

Document the problem and the solution in a comment in libxl_event.c
just before the definition of struct libxl__osevent_hook_nexus.

There is still a race relating to libxl__osevent_occurred_timeout;
this will be addressed in the following patch.

Reported-by: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v2:
  - Prepare for fixing timeout race too
  - Break out osevent_release_nexus()
  - nexusop argument to OSEVENT* macros
  - Clarify OSEVENT* nexusop hooks
  - osevent_ev_from_hook_nexus takes a libxl__osevent_hook_nexus*
---
 tools/libxl/libxl_event.c    |  184 +++++++++++++++++++++++++++++++++++------
 tools/libxl/libxl_internal.h |    8 ++-
 2 files changed, 163 insertions(+), 29 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 72cb723..f1fe425 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -38,23 +38,131 @@
  * The application's registration hooks should be called ONLY via
  * these macros, with the ctx locked.  Likewise all the "occurred"
  * entrypoints from the application should assert(!in_hook);
+ *
+ * During the hook call - including while the arguments are being
+ * evaluated - ev->nexus is guaranteed to be valid and refer to the
+ * nexus which is being used for this event registration.  The
+ * arguments should specify ev->nexus for the for_libxl argument and
+ * ev->nexus->for_app_reg (or a pointer to it) for for_app_reg.
  */
-#define OSEVENT_HOOK_INTERN(retval, hookname, ...) do {                      \
-    if (CTX->osevent_hooks) {                                                \
-        CTX->osevent_in_hook++;                                              \
-        retval CTX->osevent_hooks->hookname(CTX->osevent_user, __VA_ARGS__); \
-        CTX->osevent_in_hook--;                                              \
-    }                                                                        \
+#define OSEVENT_HOOK_INTERN(retval, failedp, evkind, hookop, nexusop, ...) do { \
+    if (CTX->osevent_hooks) {                                           \
+        CTX->osevent_in_hook++;                                         \
+        libxl__osevent_hook_nexi *nexi = &CTX->hook_##evkind##_nexi_idle; \
+        osevent_hook_pre_##nexusop(gc, ev, nexi, &ev->nexus);            \
+        retval CTX->osevent_hooks->evkind##_##hookop                    \
+            (CTX->osevent_user, __VA_ARGS__);                           \
+        if ((failedp))                                                  \
+            osevent_hook_failed_##nexusop(gc, ev, nexi, &ev->nexus);     \
+        CTX->osevent_in_hook--;                                         \
+    }                                                                   \
 } while (0)
 
-#define OSEVENT_HOOK(hookname, ...) ({                                       \
-    int osevent_hook_rc = 0;                                                 \
-    OSEVENT_HOOK_INTERN(osevent_hook_rc = , hookname, __VA_ARGS__);          \
-    osevent_hook_rc;                                                         \
+#define OSEVENT_HOOK(evkind, hookop, nexusop, ...) ({                   \
+    int osevent_hook_rc = 0;                                    \
+    OSEVENT_HOOK_INTERN(osevent_hook_rc =, !!osevent_hook_rc,   \
+                        evkind, hookop, nexusop, __VA_ARGS__);          \
+    osevent_hook_rc;                                            \
 })
 
-#define OSEVENT_HOOK_VOID(hookname, ...) \
-    OSEVENT_HOOK_INTERN(/* void */, hookname, __VA_ARGS__)
+#define OSEVENT_HOOK_VOID(evkind, hookop, nexusop, ...)                         \
+    OSEVENT_HOOK_INTERN(/* void */, 0, evkind, hookop, nexusop, __VA_ARGS__)
+
+/*
+ * The application's calls to libxl_osevent_occurred_... may be
+ * indefinitely delayed with respect to the rest of the program (since
+ * they are not necessarily called with any lock held).  So the
+ * for_libxl value we receive may be (almost) arbitrarily old.  All we
+ * know is that it came from this ctx.
+ *
+ * Therefore we may not free the object referred to by any for_libxl
+ * value until we free the whole libxl_ctx.  And if we reuse it we
+ * must be able to tell when an old use turns up, and discard the
+ * stale event.
+ *
+ * Thus we cannot use the ev directly as the for_libxl value - we need
+ * a layer of indirection.
+ *
+ * We do this by keeping a pool of libxl__osevent_hook_nexus structs,
+ * and use pointers to them as for_libxl values.  In fact, there are
+ * two pools: one for fds and one for timeouts.  This ensures that we
+ * don't risk a type error when we upcast nexus->ev.  In each nexus
+ * the ev is either null or points to a valid libxl__ev_time or
+ * libxl__ev_fd, as applicable.
+ *
+ * We /do/ allow ourselves to reassociate an old nexus with a new ev
+ * as otherwise we would have to leak nexi.  (This reassociation
+ * might, of course, be an old ev being reused for a new purpose so
+ * simply comparing the ev pointer is not sufficient.)  Thus the
+ * libxl_osevent_occurred functions need to check that the condition
+ * allegedly signalled by this event actually exists.
+ *
+ * The nexi and the lists are all protected by the ctx lock.
+ */
+ 
+struct libxl__osevent_hook_nexus {
+    void *ev;
+    void *for_app_reg;
+    LIBXL_SLIST_ENTRY(libxl__osevent_hook_nexus) next;
+};
+
+static void *osevent_ev_from_hook_nexus(libxl_ctx *ctx,
+           libxl__osevent_hook_nexus *nexus /* pass  void *for_libxl */)
+{
+    return nexus->ev;
+}
+
+static void osevent_release_nexus(libxl__gc *gc,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus *nexus)
+{
+    nexus->ev = 0;
+    LIBXL_SLIST_INSERT_HEAD(nexi_idle, nexus, next);
+}
+
+/*----- OSEVENT* hook functions for nexusop "alloc" -----*/
+static void osevent_hook_pre_alloc(libxl__gc *gc, void *ev,
+                                   libxl__osevent_hook_nexi *nexi_idle,
+                                   libxl__osevent_hook_nexus **nexus_r)
+{
+    libxl__osevent_hook_nexus *nexus = LIBXL_SLIST_FIRST(nexi_idle);
+    if (nexus) {
+        LIBXL_SLIST_REMOVE_HEAD(nexi_idle, next);
+    } else {
+        nexus = libxl__zalloc(NOGC, sizeof(*nexus));
+    }
+    nexus->ev = ev;
+    *nexus_r = nexus;
+}
+static void osevent_hook_failed_alloc(libxl__gc *gc, void *ev,
+                                      libxl__osevent_hook_nexi *nexi_idle,
+                                      libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+
+/*----- OSEVENT* hook functions for nexusop "release" -----*/
+static void osevent_hook_pre_release(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+static void osevent_hook_failed_release(libxl__gc *gc, void *ev,
+                                        libxl__osevent_hook_nexi *nexi_idle,
+                                        libxl__osevent_hook_nexus **nexus)
+{
+    abort();
+}
+
+/*----- OSEVENT* hook functions for nexusop "noop" -----*/
+static void osevent_hook_pre_noop(libxl__gc *gc, void *ev,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus **nexus) { }
+static void osevent_hook_failed_noop(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus) { }
+
 
 /*
  * fd events
@@ -72,7 +180,8 @@ int libxl__ev_fd_register(libxl__gc *gc, libxl__ev_fd *ev,
 
     DBG("ev_fd=%p register fd=%d events=%x", ev, fd, events);
 
-    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
+    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
+                      events, ev->nexus);
     if (rc) goto out;
 
     ev->fd = fd;
@@ -97,7 +206,7 @@ int libxl__ev_fd_modify(libxl__gc *gc, libxl__ev_fd *ev, short events)
 
     DBG("ev_fd=%p modify fd=%d events=%x", ev, ev->fd, events);
 
-    rc = OSEVENT_HOOK(fd_modify, ev->fd, &ev->for_app_reg, events);
+    rc = OSEVENT_HOOK(fd,modify, noop, ev->fd, &ev->nexus->for_app_reg, events);
     if (rc) goto out;
 
     ev->events = events;
@@ -119,7 +228,7 @@ void libxl__ev_fd_deregister(libxl__gc *gc, libxl__ev_fd *ev)
 
     DBG("ev_fd=%p deregister fd=%d", ev, ev->fd);
 
-    OSEVENT_HOOK_VOID(fd_deregister, ev->fd, ev->for_app_reg);
+    OSEVENT_HOOK_VOID(fd,deregister, release, ev->fd, ev->nexus->for_app_reg);
     LIBXL_LIST_REMOVE(ev, entry);
     ev->fd = -1;
 
@@ -171,7 +280,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 {
     int rc;
 
-    rc = OSEVENT_HOOK(timeout_register, &ev->for_app_reg, absolute, ev);
+    rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
+                      absolute, ev->nexus);
     if (rc) return rc;
 
     ev->infinite = 0;
@@ -184,7 +294,7 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout_deregister, ev->for_app_reg);
+        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -270,7 +380,8 @@ int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
         rc = time_register_finite(gc, ev, absolute);
         if (rc) goto out;
     } else {
-        rc = OSEVENT_HOOK(timeout_modify, &ev->for_app_reg, absolute);
+        rc = OSEVENT_HOOK(timeout,modify, noop,
+                          &ev->nexus->for_app_reg, absolute);
         if (rc) goto out;
 
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
@@ -1010,35 +1121,54 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
 
 
 void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
-                               int fd, short events, short revents)
+                               int fd, short events_ign, short revents_ign)
 {
-    libxl__ev_fd *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    assert(fd == ev->fd);
-    revents &= ev->events;
-    if (revents)
-        ev->func(egc, ev, fd, ev->events, revents);
+    libxl__ev_fd *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
+    if (ev->fd != fd) goto out;
 
+    struct pollfd check;
+    for (;;) {
+        check.fd = fd;
+        check.events = ev->events;
+        int r = poll(&check, 1, 0);
+        if (!r)
+            goto out;
+        if (r==1)
+            break;
+        assert(r<0);
+        if (errno != EINTR) {
+            LIBXL__EVENT_DISASTER(egc, "failed poll to check for fd", errno, 0);
+            goto out;
+        }
+    }
+
+    if (check.revents)
+        ev->func(egc, ev, fd, ev->events, check.revents);
+
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
 
 void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 {
-    libxl__ev_time *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
     assert(!ev->infinite);
+
     LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     ev->func(egc, ev, &ev->abs);
 
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index cba3616..6484bcb 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -136,6 +136,8 @@ typedef struct libxl__gc libxl__gc;
 typedef struct libxl__egc libxl__egc;
 typedef struct libxl__ao libxl__ao;
 typedef struct libxl__aop_occurred libxl__aop_occurred;
+typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
+typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
 
 _hidden void libxl__alloc_failed(libxl_ctx *, const char *func,
                          size_t nmemb, size_t size) __attribute__((noreturn));
@@ -163,7 +165,7 @@ struct libxl__ev_fd {
     libxl__ev_fd_callback *func;
     /* remainder is private for libxl__ev_fd... */
     LIBXL_LIST_ENTRY(libxl__ev_fd) entry;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 
@@ -178,7 +180,7 @@ struct libxl__ev_time {
     int infinite; /* not registered in list or with app if infinite */
     LIBXL_TAILQ_ENTRY(libxl__ev_time) entry;
     struct timeval abs;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
@@ -329,6 +331,8 @@ struct libxl__ctx {
     libxl__poller poller_app; /* libxl_osevent_beforepoll and _afterpoll */
     LIBXL_LIST_HEAD(, libxl__poller) pollers_event, pollers_idle;
 
+    LIBXL_SLIST_HEAD(libxl__osevent_hook_nexi, libxl__osevent_hook_nexus)
+        hook_fd_nexi_idle, hook_timeout_nexi_idle;
     LIBXL_LIST_HEAD(, libxl__ev_fd) efds;
     LIBXL_TAILQ_HEAD(, libxl__ev_time) etimes;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3UW-00045o-5K; Fri, 07 Dec 2012 19:21:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijackson@chiark.greenend.org.uk>) id 1Th3UU-00045V-S4
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:21:43 +0000
Received: from [85.158.143.35:49517] by server-3.bemta-4.messagelabs.com id
	81/DF-18211-6C142C05; Fri, 07 Dec 2012 19:21:42 +0000
X-Env-Sender: ijackson@chiark.greenend.org.uk
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354908094!13488851!1
X-Originating-IP: [212.13.197.229]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5192 invoked from network); 7 Dec 2012 19:21:35 -0000
Received: from chiark.greenend.org.uk (HELO chiark.greenend.org.uk)
	(212.13.197.229)
	by server-3.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 19:21:35 -0000
Received: by chiark.greenend.org.uk (Debian Exim 4.72 #1) with local
	(return-path ijackson@chiark.greenend.org.uk)
	id 1Th3UM-0002dD-Im; Fri, 07 Dec 2012 19:21:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Fri,  7 Dec 2012 19:21:31 +0000
Message-Id: <1354908091-9905-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout, in a
multithreaded program those calls may be arbitrarily delayed in
relation to other activities within the program.

Specifically this means when ->timeout_deregister returns, libxl does
not know whether it can safely dispose of the for_libxl value or
whether it needs to retain it in case of an in-progress call to
_occurred_timeout.

The interface could be fixed by requiring the application to make a
new call into libxl to say that the deregistration was complete.

However that new call would have to be threaded through the
application's event loop; this is complicated and some application
authors are likely not to implement it properly.  Furthermore the
easiest way to implement this facility in most event loops is to queue
up a time event for "now".

Shortcut all of this by having libxl always call timeout_modify
setting abs={0,0} (ie, ASAP) instead of timeout_deregister.  This will
cause the application to call _occurred_timeout.  When processing this
calldown we see that we are no longer actually interested and simply
throw it away.

Additionally, there is a race between _occurred_timeout and
->timeout_modify.  If libxl ever adjusts the deadline for a timeout
the application may already be in the process of calling _occurred, in
which case the situation with for_app's lifetime becomes very
complicated.  Therefore abolish libxl__ev_time_modify_{abs,rel} (which
have no callers) and promise to the application only ever to call
->timeout_modify with abs=={0,0}.  The application still needs to cope
with ->timeout_modify racing with its internal function which calls
_occurred_timeout.  Document this.

This is a forwards-compatible change for applications using the libxl
API, and will hopefully eliminate these races in callback-supplying
applications (such as libvirt) without the need for corresponding
changes to the application.

For clarity, fold the body of time_insert_finite into its one
remaining call site.  This makes the semantics of ev->infinite
slightly clearer.

Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_event.c |   88 +++++++--------------------------------------
 tools/libxl/libxl_event.h |   17 ++++++++-
 2 files changed, 28 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index f1fe425..65c34da 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -267,18 +267,11 @@ static int time_rel_to_abs(libxl__gc *gc, int ms, struct timeval *abs_out)
     return 0;
 }
 
-static void time_insert_finite(libxl__gc *gc, libxl__ev_time *ev)
-{
-    libxl__ev_time *evsearch;
-    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
-                              timercmp(&ev->abs, &evsearch->abs, >));
-    ev->infinite = 0;
-}
-
 static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
                                 struct timeval absolute)
 {
     int rc;
+    libxl__ev_time *evsearch;
 
     rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
                       absolute, ev->nexus);
@@ -286,7 +279,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 
     ev->infinite = 0;
     ev->abs = absolute;
-    time_insert_finite(gc, ev);
+    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
+                              timercmp(&ev->abs, &evsearch->abs, >));
 
     return 0;
 }
@@ -294,7 +288,11 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
+        struct timeval right_away = { 0, 0 };
+        ev->nexus->ev = 0;
+        OSEVENT_HOOK_VOID(timeout,modify,
+                          noop /* release nexus in _occurred_ */,
+                          ev->nexus->for_app_reg, right_away);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -364,70 +362,6 @@ int libxl__ev_time_register_rel(libxl__gc *gc, libxl__ev_time *ev,
     return rc;
 }
 
-int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
-                              struct timeval absolute)
-{
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify abs==%lu.%06lu",
-        ev, (unsigned long)absolute.tv_sec, (unsigned long)absolute.tv_usec);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (ev->infinite) {
-        rc = time_register_finite(gc, ev, absolute);
-        if (rc) goto out;
-    } else {
-        rc = OSEVENT_HOOK(timeout,modify, noop,
-                          &ev->nexus->for_app_reg, absolute);
-        if (rc) goto out;
-
-        LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
-        ev->abs = absolute;
-        time_insert_finite(gc, ev);
-    }
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
-int libxl__ev_time_modify_rel(libxl__gc *gc, libxl__ev_time *ev,
-                              int milliseconds)
-{
-    struct timeval absolute;
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify ms=%d", ev, milliseconds);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (milliseconds < 0) {
-        time_deregister(gc, ev);
-        ev->infinite = 1;
-        rc = 0;
-        goto out;
-    }
-
-    rc = time_rel_to_abs(gc, milliseconds, &absolute);
-    if (rc) goto out;
-
-    rc = libxl__ev_time_modify_abs(gc, ev, absolute);
-    if (rc) goto out;
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
 void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     CTX_LOCK;
@@ -1161,7 +1095,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    libxl__osevent_hook_nexus *nexus = for_libxl;
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, nexus);
+
+    osevent_release_nexus(gc, &CTX->hook_timeout_nexi_idle, nexus);
+
     if (!ev) goto out;
     assert(!ev->infinite);
 
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3bcb6d3..51f2721 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -287,8 +287,10 @@ typedef struct libxl_osevent_hooks {
   int (*timeout_register)(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl);
   int (*timeout_modify)(void *user, void **for_app_registration_update,
-                         struct timeval abs);
-  void (*timeout_deregister)(void *user, void *for_app_registration);
+                         struct timeval abs)
+      /* only ever called with abs={0,0}, meaning ASAP */;
+  void (*timeout_deregister)(void *user, void *for_app_registration)
+      /* will never be called */;
 } libxl_osevent_hooks;
 
 /* The application which calls register_fd_hooks promises to
@@ -337,6 +339,17 @@ typedef struct libxl_osevent_hooks {
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
+ * Note that the application must cope with a call from libxl to
+ * timeout_modify racing with its own call to
+ * libxl__osevent_occurred_timeout.  libxl guarantees that
+ * timeout_modify will only be called with abs={0,0} but the
+ * application must still ensure that libxl's attempt to cause the
+ * timeout to occur immediately is safely ignored even if the timeout
+ * is actually already in the process of occurring.
+ *
+ * timeout_deregister is not used because it forms part of a
+ * deprecated unsafe mode of use of the API.
+ *
  * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3UW-00045o-5K; Fri, 07 Dec 2012 19:21:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijackson@chiark.greenend.org.uk>) id 1Th3UU-00045V-S4
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:21:43 +0000
Received: from [85.158.143.35:49517] by server-3.bemta-4.messagelabs.com id
	81/DF-18211-6C142C05; Fri, 07 Dec 2012 19:21:42 +0000
X-Env-Sender: ijackson@chiark.greenend.org.uk
X-Msg-Ref: server-3.tower-21.messagelabs.com!1354908094!13488851!1
X-Originating-IP: [212.13.197.229]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5192 invoked from network); 7 Dec 2012 19:21:35 -0000
Received: from chiark.greenend.org.uk (HELO chiark.greenend.org.uk)
	(212.13.197.229)
	by server-3.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 19:21:35 -0000
Received: by chiark.greenend.org.uk (Debian Exim 4.72 #1) with local
	(return-path ijackson@chiark.greenend.org.uk)
	id 1Th3UM-0002dD-Im; Fri, 07 Dec 2012 19:21:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Fri,  7 Dec 2012 19:21:31 +0000
Message-Id: <1354908091-9905-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout, in a
multithreaded program those calls may be arbitrarily delayed in
relation to other activities within the program.

Specifically this means when ->timeout_deregister returns, libxl does
not know whether it can safely dispose of the for_libxl value or
whether it needs to retain it in case of an in-progress call to
_occurred_timeout.

The interface could be fixed by requiring the application to make a
new call into libxl to say that the deregistration was complete.

However that new call would have to be threaded through the
application's event loop; this is complicated and some application
authors are likely not to implement it properly.  Furthermore the
easiest way to implement this facility in most event loops is to queue
up a time event for "now".

Shortcut all of this by having libxl always call timeout_modify
setting abs={0,0} (ie, ASAP) instead of timeout_deregister.  This will
cause the application to call _occurred_timeout.  When processing this
calldown we see that we are no longer actually interested and simply
throw it away.

Additionally, there is a race between _occurred_timeout and
->timeout_modify.  If libxl ever adjusts the deadline for a timeout
the application may already be in the process of calling _occurred, in
which case the situation with for_app's lifetime becomes very
complicated.  Therefore abolish libxl__ev_time_modify_{abs,rel} (which
have no callers) and promise to the application only ever to call
->timeout_modify with abs=={0,0}.  The application still needs to cope
with ->timeout_modify racing with its internal function which calls
_occurred_timeout.  Document this.
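The application-side obligation described above can be sketched as
follows.  This is a minimal illustration under stated assumptions, not
libxl's or libvirt's actual code; every name here (app_timeout,
app_timeout_modify, the armed flag) is hypothetical:

```c
/* Hypothetical application-side state for one registered timeout.
 * This only illustrates how a timeout_modify hook can tolerate racing
 * with the timeout already being in the process of firing. */
#include <assert.h>
#include <stddef.h>
#include <sys/time.h>

struct app_timeout {
    int armed;            /* nonzero while still queued in the event loop */
    struct timeval abs;   /* current deadline */
    void *for_libxl;      /* token to hand back via _occurred_timeout */
};

/* timeout_modify hook: per the new promise, libxl only ever passes
 * abs=={0,0}, meaning "make this fire as soon as possible".  If the
 * timeout has already been dequeued for delivery (armed==0), the
 * request is simply ignored; the pending _occurred_timeout call will
 * hand for_libxl back to libxl, which discards it. */
static int app_timeout_modify(void *user, void **for_app_reg,
                              struct timeval abs)
{
    struct app_timeout *t = *for_app_reg;
    (void)user;
    assert(abs.tv_sec == 0 && abs.tv_usec == 0);  /* promised by libxl */
    if (t->armed)
        t->abs = abs;     /* requeue at the front; event loop fires it next */
    /* if !armed, the occurred callback is in flight: do nothing */
    return 0;
}
```

The key design point is that the hook never frees or dereferences
state owned by the in-flight occurred callback; it only mutates the
deadline while the entry is still safely queued.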

This is a forwards-compatible change for applications using the libxl
API, and will hopefully eliminate these races in callback-supplying
applications (such as libvirt) without the need for corresponding
changes to the application.

For clarity, fold the body of time_register_finite into its one
remaining call site.  This makes the semantics of ev->infinite
slightly clearer.

Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_event.c |   88 +++++++--------------------------------------
 tools/libxl/libxl_event.h |   17 ++++++++-
 2 files changed, 28 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index f1fe425..65c34da 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -267,18 +267,11 @@ static int time_rel_to_abs(libxl__gc *gc, int ms, struct timeval *abs_out)
     return 0;
 }
 
-static void time_insert_finite(libxl__gc *gc, libxl__ev_time *ev)
-{
-    libxl__ev_time *evsearch;
-    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
-                              timercmp(&ev->abs, &evsearch->abs, >));
-    ev->infinite = 0;
-}
-
 static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
                                 struct timeval absolute)
 {
     int rc;
+    libxl__ev_time *evsearch;
 
     rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
                       absolute, ev->nexus);
@@ -286,7 +279,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 
     ev->infinite = 0;
     ev->abs = absolute;
-    time_insert_finite(gc, ev);
+    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
+                              timercmp(&ev->abs, &evsearch->abs, >));
 
     return 0;
 }
@@ -294,7 +288,11 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
+        struct timeval right_away = { 0, 0 };
+        ev->nexus->ev = 0;
+        OSEVENT_HOOK_VOID(timeout,modify,
+                          noop /* release nexus in _occurred_ */,
+                          ev->nexus->for_app_reg, right_away);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -364,70 +362,6 @@ int libxl__ev_time_register_rel(libxl__gc *gc, libxl__ev_time *ev,
     return rc;
 }
 
-int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
-                              struct timeval absolute)
-{
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify abs==%lu.%06lu",
-        ev, (unsigned long)absolute.tv_sec, (unsigned long)absolute.tv_usec);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (ev->infinite) {
-        rc = time_register_finite(gc, ev, absolute);
-        if (rc) goto out;
-    } else {
-        rc = OSEVENT_HOOK(timeout,modify, noop,
-                          &ev->nexus->for_app_reg, absolute);
-        if (rc) goto out;
-
-        LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
-        ev->abs = absolute;
-        time_insert_finite(gc, ev);
-    }
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
-int libxl__ev_time_modify_rel(libxl__gc *gc, libxl__ev_time *ev,
-                              int milliseconds)
-{
-    struct timeval absolute;
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify ms=%d", ev, milliseconds);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (milliseconds < 0) {
-        time_deregister(gc, ev);
-        ev->infinite = 1;
-        rc = 0;
-        goto out;
-    }
-
-    rc = time_rel_to_abs(gc, milliseconds, &absolute);
-    if (rc) goto out;
-
-    rc = libxl__ev_time_modify_abs(gc, ev, absolute);
-    if (rc) goto out;
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
 void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     CTX_LOCK;
@@ -1161,7 +1095,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    libxl__osevent_hook_nexus *nexus = for_libxl;
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, nexus);
+
+    osevent_release_nexus(gc, &CTX->hook_timeout_nexi_idle, nexus);
+
     if (!ev) goto out;
     assert(!ev->infinite);
 
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3bcb6d3..51f2721 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -287,8 +287,10 @@ typedef struct libxl_osevent_hooks {
   int (*timeout_register)(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl);
   int (*timeout_modify)(void *user, void **for_app_registration_update,
-                         struct timeval abs);
-  void (*timeout_deregister)(void *user, void *for_app_registration);
+                         struct timeval abs)
+      /* only ever called with abs={0,0}, meaning ASAP */;
+  void (*timeout_deregister)(void *user, void *for_app_registration)
+      /* will never be called */;
 } libxl_osevent_hooks;
 
 /* The application which calls register_fd_hooks promises to
@@ -337,6 +339,17 @@ typedef struct libxl_osevent_hooks {
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
+ * Note that the application must cope with a call from libxl to
+ * timeout_modify racing with its own call to
+ * libxl__osevent_occurred_timeout.  libxl guarantees that
+ * timeout_modify will only be called with abs={0,0} but the
+ * application must still ensure that libxl's attempt to cause the
+ * timeout to occur immediately is safely ignored even if the timeout
+ * is actually already in the process of occurring.
+ *
+ * timeout_deregister is not used because it forms part of a
+ * deprecated unsafe mode of use of the API.
+ *
  * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:22:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:22:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3V6-0004CG-KS; Fri, 07 Dec 2012 19:22:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3V5-0004C4-Qp
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:22:20 +0000
Received: from [85.158.138.51:40241] by server-16.bemta-3.messagelabs.com id
	9D/E0-07461-BE142C05; Fri, 07 Dec 2012 19:22:19 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354908138!19858643!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4129 invoked from network); 7 Dec 2012 19:22:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:22:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3645"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:22:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:22:18 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Th3V3-0000xk-W5; Fri, 07 Dec 2012 19:22:18 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Th3V3-00072g-Se;
	Fri, 07 Dec 2012 19:22:17 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20674.16873.782442.249086@mariner.uk.xensource.com>
Date: Fri, 7 Dec 2012 19:22:17 +0000
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354907734-26934-1-git-send-email-ian.jackson@eu.citrix.com>
References: <20674.16214.934271.479230@mariner.uk.xensource.com>
	<1354907734-26934-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("[PATCH 1/2] libxl: fix stale fd event callback race"):
> Because there is not necessarily any lock held at the point the
> application (eg, libvirt) calls libxl_osevent_occurred_timeout and
> ..._fd, in a multithreaded program those calls may be arbitrarily
> delayed in relation to other activities within the program.

Sorry for the duplicate; the first seemed to have vanished so I resent
it and then naturally it turned up.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:32:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3f1-0004eM-ON; Fri, 07 Dec 2012 19:32:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3f0-0004eH-5g
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 19:32:34 +0000
Received: from [193.109.254.147:47402] by server-9.bemta-14.messagelabs.com id
	C7/28-30773-15442C05; Fri, 07 Dec 2012 19:32:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1354908752!9221937!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13956 invoked from network); 7 Dec 2012 19:32:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:32:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3835"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:32:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:32:32 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th3ey-0000z7-C9;
	Fri, 07 Dec 2012 19:32:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th3ey-0003Cc-BW;
	Fri, 07 Dec 2012 19:32:32 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14603-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 19:32:32 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14603: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14603 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14603/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:32:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3f1-0004eM-ON; Fri, 07 Dec 2012 19:32:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th3f0-0004eH-5g
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 19:32:34 +0000
Received: from [193.109.254.147:47402] by server-9.bemta-14.messagelabs.com id
	C7/28-30773-15442C05; Fri, 07 Dec 2012 19:32:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1354908752!9221937!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13956 invoked from network); 7 Dec 2012 19:32:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 19:32:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="3835"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 19:32:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 19:32:32 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th3ey-0000z7-C9;
	Fri, 07 Dec 2012 19:32:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th3ey-0003Cc-BW;
	Fri, 07 Dec 2012 19:32:32 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14603-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 19:32:32 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14603: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14603 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14603/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 19:47:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 19:47:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th3sj-00053u-Oi; Fri, 07 Dec 2012 19:46:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Th3si-00053n-6S
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 19:46:44 +0000
Received: from [85.158.138.51:53157] by server-7.bemta-3.messagelabs.com id
	FE/86-01713-3A742C05; Fri, 07 Dec 2012 19:46:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354909600!27930904!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1697 invoked from network); 7 Dec 2012 19:46:42 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 19:46:42 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7JkbVE029093
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 19:46:38 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7JkbmQ018107
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 19:46:37 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7JkaJR019778; Fri, 7 Dec 2012 13:46:36 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 11:46:36 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 37B7A1C05A8; Fri,  7 Dec 2012 14:46:35 -0500 (EST)
Date: Fri, 7 Dec 2012 14:46:35 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207194635.GA8782@phenom.dumpdata.com>
References: <1354701697-5815-1-git-send-email-olaf@aepfle.de>
	<50BF2E3802000078000AE162@nat28.tlf.novell.com>
	<20121206162304.GA3989@aepfle.de>
	<50C0DDCA02000078000AEBA9@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C0DDCA02000078000AEBA9@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent leak of mode during
 multiple backend_changed calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 05:02:50PM +0000, Jan Beulich wrote:
> >>> On 06.12.12 at 17:23, Olaf Hering <olaf@aepfle.de> wrote:
> > On Wed, Dec 05, Jan Beulich wrote:
> > 
> >> >>> On 05.12.12 at 11:01, Olaf Hering <olaf@aepfle.de> wrote:
> >> > backend_changed might be called multiple times, which will leak
> >> > be->mode. Free the previous value before storing the current mode value.
> >> 
> >> As said before - this is one possible route to take. But did you
> >> consider at all the alternative of preventing the function from
> >> getting called more than once for a given device? As also said
> >> before, I don't think that would have other bad effects, and hence
> >> should be preferred (and would likely also result in a smaller
> >> patch).
> > 
> > Maybe it could be done like this, adding a flag to the backend device
> > and exiting early if it's called twice.
> 
> Maybe, but it looks odd to me. But then again I had hoped Konrad
> would have an opinion here...

Sorry - was lurking around and hadn't paid any attention to this thread.
And it does not help that next week I am out :-)

> 
> Also I don't see why you need to free be->mode now on all error
> paths - afaict it would still get freed when "be" gets freed (with
> your earlier patch).
> 
> Jan
> 
> > --- a/drivers/block/xen-blkback/xenbus.c
> > +++ b/drivers/block/xen-blkback/xenbus.c
> > @@ -28,6 +28,7 @@ struct backend_info {
> >  	unsigned		major;
> >  	unsigned		minor;
> >  	char			*mode;
> > +	unsigned		alive;
> >  };
> >  
> >  static struct kmem_cache *xen_blkif_cachep;
> > @@ -506,6 +507,9 @@ static void backend_changed(struct xenbus_watch *watch,
> >  
> >  	DPRINTK("");
> >  
> > +	if (be->alive)
> > +		return;
> > +
> >  	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
> >  			   &major, &minor);
> >  	if (XENBUS_EXIST_ERR(err)) {
> > @@ -548,8 +552,11 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		char *p = strrchr(dev->otherend, '/') + 1;
> >  		long handle;
> >  		err = strict_strtoul(p, 0, &handle);
> > -		if (err)
> > +		if (err) {
> > +			kfree(be->mode);
> > +			be->mode = NULL;
> >  			return;
> > +		}
> >  
> >  		be->major = major;
> >  		be->minor = minor;
> > @@ -560,6 +567,8 @@ static void backend_changed(struct xenbus_watch *watch,
> >  			be->major = 0;
> >  			be->minor = 0;
> >  			xenbus_dev_fatal(dev, err, "creating vbd structure");
> > +			kfree(be->mode);
> > +			be->mode = NULL;
> >  			return;
> >  		}
> >  
> > @@ -569,10 +578,13 @@ static void backend_changed(struct xenbus_watch 
> > *watch,
> >  			be->major = 0;
> >  			be->minor = 0;
> >  			xenbus_dev_fatal(dev, err, "creating sysfs entries");
> > +			kfree(be->mode);
> > +			be->mode = NULL;
> >  			return;
> >  		}
> >  
> >  		/* We're potentially connected now */
> > +		be->alive = 1;
> >  		xen_update_blkif_status(be->blkif);
> >  	}
> >  }
> > -- 
> > 1.8.0.1
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >  }
> > -- 
> > 1.8.0.1
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 20:02:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 20:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th48A-0005Tn-JU; Fri, 07 Dec 2012 20:02:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th489-0005Ti-E3
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 20:02:41 +0000
Received: from [85.158.139.83:15719] by server-14.bemta-5.messagelabs.com id
	AE/EB-09538-06B42C05; Fri, 07 Dec 2012 20:02:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1354910559!28909657!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 591 invoked from network); 7 Dec 2012 20:02:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 20:02:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="4252"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 20:02:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 20:02:39 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th487-000162-Fy;
	Fri, 07 Dec 2012 20:02:39 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th487-0005LE-7B;
	Fri, 07 Dec 2012 20:02:39 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14605-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 20:02:39 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14605: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14605 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14605/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 20:20:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 20:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th4PL-0005fF-7V; Fri, 07 Dec 2012 20:20:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Th4PJ-0005fA-5O
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 20:20:25 +0000
Received: from [85.158.138.51:31485] by server-15.bemta-3.messagelabs.com id
	B9/2A-23779-88F42C05; Fri, 07 Dec 2012 20:20:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1354911621!19862271!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTAzNjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15077 invoked from network); 7 Dec 2012 20:20:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Dec 2012 20:20:23 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB7KK6ZG026172
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Dec 2012 20:20:07 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB7KK5FK029002
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Dec 2012 20:20:06 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB7KK40M017567; Fri, 7 Dec 2012 14:20:04 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 12:20:04 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 321BF1C05A8; Fri,  7 Dec 2012 15:20:03 -0500 (EST)
Date: Fri, 7 Dec 2012 15:20:03 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>, akpm@linux-foundation.org,
	sfr@canb.auug.org.au, peterz@infradead.org
Message-ID: <20121207202003.GA9462@phenom.dumpdata.com>
References: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
	<1354630913-17287-2-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354630913-17287-2-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
 llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
> Implement a safe version of llist_for_each_entry, and use it in
> blkif_free. Previously grants were freed while iterating the list,
> which led to dereferences when trying to fetch the next item.

Looks like xen-blkfront is the only user of this llist_for_each_entry.

Would it be more prudent to put the macro in the llist.h file?

> 
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
> Cc: xen-devel@lists.xen.org
> ---
>  drivers/block/xen-blkfront.c |   10 +++++++++-
>  1 files changed, 9 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 96e9b00..df21b05 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -143,6 +143,13 @@ static DEFINE_SPINLOCK(minor_lock);
>  
>  #define DEV_NAME	"xvd"	/* name in /dev */
>  
> +#define llist_for_each_entry_safe(pos, n, node, member)		\
> +	for ((pos) = llist_entry((node), typeof(*(pos)), member),	\
> +	     (n) = (pos)->member.next;					\
> +	     &(pos)->member != NULL;					\
> +	     (pos) = llist_entry(n, typeof(*(pos)), member),		\
> +	     (n) = (&(pos)->member != NULL) ? (pos)->member.next : NULL)
> +
>  static int get_id_from_freelist(struct blkfront_info *info)
>  {
>  	unsigned long free = info->shadow_free;
> @@ -792,6 +799,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  {
>  	struct llist_node *all_gnts;
>  	struct grant *persistent_gnt;
> +	struct llist_node *n;
>  
>  	/* Prevent new requests being issued until we fix things up. */
>  	spin_lock_irq(&info->io_lock);
> @@ -804,7 +812,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  	/* Remove all persistent grants */
>  	if (info->persistent_gnts_c) {
>  		all_gnts = llist_del_all(&info->persistent_gnts);
> -		llist_for_each_entry(persistent_gnt, all_gnts, node) {
> +		llist_for_each_entry_safe(persistent_gnt, n, all_gnts, node) {
>  			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
>  			__free_page(pfn_to_page(persistent_gnt->pfn));
>  			kfree(persistent_gnt);
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 20:24:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 20:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th4SY-0005oo-Rc; Fri, 07 Dec 2012 20:23:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th4SX-0005oh-29
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 20:23:45 +0000
Received: from [85.158.143.35:4602] by server-1.bemta-4.messagelabs.com id
	56/5B-28401-05052C05; Fri, 07 Dec 2012 20:23:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1354911818!4791892!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17861 invoked from network); 7 Dec 2012 20:23:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 20:23:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="4438"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 20:23:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 20:23:37 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th4IP-000176-Nn;
	Fri, 07 Dec 2012 20:13:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th4IP-0003cr-N1;
	Fri, 07 Dec 2012 20:13:17 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14604-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 20:13:17 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14604: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14604 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14604/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 20:50:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 20:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th4rp-0006CN-Ir; Fri, 07 Dec 2012 20:49:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th4ro-0006CI-1e
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 20:49:52 +0000
Received: from [85.158.137.99:4211] by server-9.bemta-3.messagelabs.com id
	7F/A4-02388-A6652C05; Fri, 07 Dec 2012 20:49:46 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1354913384!12177941!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22521 invoked from network); 7 Dec 2012 20:49:45 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 20:49:45 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so961462vcb.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 12:49:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=V5vkqibRjaOuTdfOMVYfj8E4Jb9bdn7aXVBpTw47nm8=;
	b=qOafMv4DpW+TnL6F9raWF1140drHjI9ugqnBrP/brntkBlDBiU9gss0R0f1AgGjq0f
	pFY0hg9kfE0OTsa/DuZq+3VHoRQZH1sfJv3EkeDUR/Fwh54/qzVoAq6v076RZuuNXzzE
	GksQMV6aOBU0eiAz4YLruq2KV6wMaWIjKbGCbHEsCaRVlmqIFjLDBVJImVwIWCneVH2f
	xVgmbzZIQX8qxF5oc47rxfVuCEONYTFZUPStZ90GRhtDjVnN/O1IivRgNZN1arM3OOpo
	v4ud9gkxL51PjIsLOnUffklxmDfRPMcvoPKeA+VCP1iYeVop5IuTy7K8GAyBJ5D8ZAOD
	m81A==
Received: by 10.52.180.5 with SMTP id dk5mr3987936vdc.45.1354913384047;
	Fri, 07 Dec 2012 12:49:44 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id g5sm3786672vez.6.2012.12.07.12.49.43
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 12:49:43 -0800 (PST)
Date: Fri, 7 Dec 2012 15:49:41 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: "Omer K." <okhalid.cern@gmail.com>
Message-ID: <20121207204939.GA9664@phenom.dumpdata.com>
References: <CAPM9MsQsggsBAL948ZJ31bVnSNH+Q4kGr4d0Na+_RgPBpcgypA@mail.gmail.com>
	<50A6843F.5050000@citrix.com>
	<CAPM9MsSrmoG4RLJjNDNn+j=hWuU-r+WbckT-UJXPfJ-J8gbvOQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAPM9MsSrmoG4RLJjNDNn+j=hWuU-r+WbckT-UJXPfJ-J8gbvOQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Mats Petersson <mats.petersson@citrix.com>,
	xen-users list <xen-users@lists.xensource.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to activate all VPCUS for a domU?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Nov 16, 2012 at 08:19:07PM +0000, Omer K. wrote:
> Hi,
> 
> Apologies for earlier spamming the xen-devel list, but this message might be
> relevant, as I think there is a bug in xm/create.py where the 'vcpu_avail'
> option doesn't get set properly.
> 
> I managed to solve the issue of activating all of the vcpus by applying a
> modified version of the patch discussed earlier:
> 
> http://old-list-archives.xen.org/archives/html/xen-users/2010-09/msg00353.html
> 
> The following patch fixed the issue for me; after it, the bitmask value
> activating the required vcpus was read from the xen configuration file. Prior
> to that, vcpu_avail was always set to '1' (also verified in 'xenstore-ls -f'
> output).
> 
> --- create.py
> +++ create.py.af
> 
>          if maxvcpus and vcpus:
>              config.append(['vcpus', vcpus])
> -            config.append(['vcpu_avail', (1 << vcpus) - 1])
> +            config.append(['vcpu_avail', getattr(vals, 'vcpu_avail')])
> 
>      def add_conf(n):
>          if hasattr(vals, n):
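For readers following along, the semantics of the vcpu_avail bitmask can be sketched in a few lines of Python (a minimal sketch; the values used here are illustrative, not taken from any particular config):

```python
# vcpu_avail is a bitmask: bit n set means vcpu n is available (online).

def all_vcpus_mask(vcpus):
    # What the original create.py expression computes: a mask with the
    # low 'vcpus' bits set, i.e. every configured vcpu marked online.
    return (1 << vcpus) - 1

print(all_vcpus_mask(5))   # 31 == 0b11111: vcpus 0-4 all online
print(bin(24))             # 0b11000: only vcpus 3 and 4 marked available
print(bin(1))              # 0b1: only vcpu 0 online -- the symptom above
```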


Could you repost this with a Signed-off-by? There is a nice
description of how to do it here:

http://wiki.xen.org/wiki/Submitting_Xen_Patches

> 
> Regards.
> Omer
> 
> 
> On Fri, Nov 16, 2012 at 6:21 PM, Mats Petersson
> <mats.petersson@citrix.com>wrote:
> 
> > On 16/11/12 18:09, Omer K. wrote:
> >
> >> Hi,
> >>
> >> I have set the maxvcpus and vcpus options in my domU configuration file,
> >> and I can see that X number of vcpus are set for the domU.
> >>
> >> I tried to activate all the vcpus by using the vcpu_avail option (using
> >> decimal to represent the vcpu bitmask, e.g. 24=11000), but it doesn't seem
> >> to work: only the first vcpu is activated (i.e. has the -b- state) while
> >> all the other vcpus set for the domU are in the paused state.
> >>
> >> Can anyone share more insight on how to activate all the vcpus for the
> >> guest domain?
> >>
> >> Thanks
> >>
> > This definitely belongs on the Xen-Users list, not Xen-Devel. Please
> > don't post to both lists, it just confuses people.
> >
> > --
> > Mats
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >




From xen-devel-bounces@lists.xen.org Fri Dec 07 20:51:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 20:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th4tD-0006IH-D5; Fri, 07 Dec 2012 20:51:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Th4tA-0006Hy-Lf
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 20:51:17 +0000
Received: from [193.109.254.147:6235] by server-7.bemta-14.messagelabs.com id
	6E/02-02272-3C652C05; Fri, 07 Dec 2012 20:51:15 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354913470!9362104!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31111 invoked from network); 7 Dec 2012 20:51:11 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-12.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 20:51:11 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:54166 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Th4wp-0003Sn-MA; Fri, 07 Dec 2012 21:55:04 +0100
Date: Fri, 7 Dec 2012 21:51:04 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <183697744.20121207215104@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------09400606801CFB705"
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
	returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------09400606801CFB705
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Friday, December 7, 2012, 6:24:10 PM, you wrote:

> On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> Hi Stefano / Anthony,
>> 
>> With the debug output turned on I see some differences between qemu-traditional and qemu-upstream:
>> 
>> With the PCI-passthrough device that fails with MSI-X, qemu-xen seems to get the same pirq back for every entry.
>> 
>> in qemu-traditional:
>> 
>> pt_msix_update_one: pt_msix_update_one requested pirq = 87
>> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
>> pt_msix_update_one: pt_msix_update_one requested pirq = 86
>> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
>> pt_msix_update_one: pt_msix_update_one requested pirq = 85
>> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
>> pt_msix_update_one: pt_msix_update_one requested pirq = 84
>> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
>> 
>> 
>> in qemu-xen (upstream):
>> 
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)

> That is a good pointer, but unfortunately the code that parses those
> entries looks exactly alike in both QEMU trees:

> qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one

> if (!gvec) {
>         /* if gvec is 0, the guest is asking for a particular pirq that
>          * is passed as dest_id */
>         pirq = ((gaddr >> 32) & 0xffffff00) |
>                (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);



> qemu-xen/hw/xen_pt_msi.c:msi_msix_setup

> if (gvec == 0) {
>         /* if gvec is 0, the guest is asking for a particular pirq that
>          * is passed as dest_id */
>         *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);

> given how msi_ext_dest_id and msi_dest_id are defined, they should
> behave the same way.

> Maybe adding a printk in msi_msix_setup to show addr would help
> nonetheless...
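
The claimed equivalence of the two code paths can be checked with a small Python model. Note that MSI_TARGET_CPU_SHIFT and the msi_dest_id/msi_ext_dest_id definitions below are assumptions based on the standard MSI address layout, not copied from either QEMU tree:

```python
MSI_TARGET_CPU_SHIFT = 12  # assumed: the dest-id field starts at bit 12

def pirq_traditional(gaddr):
    # qemu-xen-traditional style (pt_msix_update_one)
    return ((gaddr >> 32) & 0xffffff00) | \
           (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff)

def msi_dest_id(addr):
    # assumed definition: bits 12..19 of the low address word
    return (addr >> MSI_TARGET_CPU_SHIFT) & 0xff

def msi_ext_dest_id(addr_hi):
    # assumed definition: the upper extended dest-id bits
    return addr_hi & 0xffffff00

def pirq_upstream(addr):
    # qemu-xen (upstream) style (msi_msix_setup)
    return msi_ext_dest_id(addr >> 32) | msi_dest_id(addr)

# Both paths agree for typical MSI addresses encoding the pirqs seen
# in the logs above (0xFEE00000 is the usual MSI address base):
for pirq in (87, 86, 85, 84, 4):
    gaddr = 0xFEE00000 | (pirq << MSI_TARGET_CPU_SHIFT)
    assert pirq_traditional(gaddr) == pirq_upstream(gaddr) == pirq
```

So under these assumed definitions the two extraction paths really do behave the same, which points the suspicion at the addr value each one is handed rather than at the parsing itself.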

Hi Stefano,

I have added some printk's. Attached I have:

- qemu-upstream.log           boot of the guest with qemu upstream, device not working
- qemu-traditional.log        boot of the same guest with qemu traditional, device is working

- xl-dmesg-upstream.txt       part of xl-dmesg related to boot of guest with qemu-upstream
- xl-dmesg-traditional.txt    part of xl-dmesg related to boot of the same guest with qemu-traditional
- xl-dmesg.txt                complete xl-dmesg

- interrupts-dom0.txt         /proc/interrupts of dom0
- interrupts-upstream.txt     /proc/interrupts of guest with qemu-upstream

--
Sander
------------09400606801CFB705
Content-Type: text/plain;
 name="interrupts-dom0.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="interrupts-dom0.txt"

[base64-encoded attachment body omitted: interrupts-dom0.txt, the /proc/interrupts listing for dom0 mentioned above (per-CPU counts for the xen-pirq, xen-percpu and xen-dyn-event interrupt lines)]
ICBUTEIgc2hvb3Rkb3ducw0KIFRSTTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICBUaGVybWFsIGV2ZW50IGlu
dGVycnVwdHMNCiBUSFI6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgVGhyZXNob2xkIEFQSUMgaW50ZXJydXB0
cw0KIE1DRTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgICBNYWNoaW5lIGNoZWNrIGV4Y2VwdGlvbnMNCiBNQ1A6
ICAgICAgICAgIDMgICAgICAgICAgMyAgICAgICAgICAzICAgICAgICAgIDMgICAgICAgICAg
MyAgICAgICAgICAzICAgTWFjaGluZSBjaGVjayBwb2xscw0KIEVSUjogICAgICAgICAgMA0K
IE1JUzogICAgICAgICAgMA0K
------------09400606801CFB705
Content-Type: text/plain;
 name="interrupts-upstream.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="interrupts-upstream.txt"

ICAgICAgICAgICAgQ1BVMCAgICAgICBDUFUxICAgICAgIENQVTINCiAgIDA6ICAgICAgICAg
NDkgICAgICAgICAgMCAgICAgICAgICAwICAgSU8tQVBJQy1lZGdlICAgICAgdGltZXINCiAg
IDE6ICAgICAgICAgIDggICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1pb2FwaWMt
ZWRnZSAgaTgwNDINCiAgIDQ6ICAgICAgICAzNzYgICAgICAgICAgMCAgICAgICAgICAwICB4
ZW4tcGlycS1pb2FwaWMtZWRnZSAgc2VyaWFsDQogICA4OiAgICAgICAgICAyICAgICAgICAg
IDAgICAgICAgICAgMCAgeGVuLXBpcnEtaW9hcGljLWVkZ2UgIHJ0YzANCiAgIDk6ICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgSU8tQVBJQy1mYXN0ZW9pICAgYWNwaQ0K
ICAxMjogICAgICAgIDExMSAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLWlvYXBp
Yy1lZGdlICBpODA0Mg0KICAyMzogICAgICAgICAzMyAgICAgICAgICAwICAgICAgICAgIDAg
IHhlbi1waXJxLWlvYXBpYy1sZXZlbCAgdWhjaV9oY2Q6dXNiMQ0KICA2NDogICAgICAyMTg3
OCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtdmlycSAgICAgIHRpbWVyMA0K
ICA2NTogICAgICAgOTQwOCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBp
ICAgICAgIHJlc2NoZWQwDQogIDY2OiAgICAgICAgIDQ0ICAgICAgICAgIDAgICAgICAgICAg
MCAgeGVuLXBlcmNwdS1pcGkgICAgICAgY2FsbGZ1bmMwDQogIDY3OiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS12aXJxICAgICAgZGVidWcwDQogIDY4
OiAgICAgICAgNTEwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS1pcGkgICAg
ICAgY2FsbGZ1bmNzaW5nbGUwDQogIDY5OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgeGVuLXBlcmNwdS1pcGkgICAgICAgaXJxd29yazANCiAgNzA6ICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBzcGlubG9jazAN
CiAgNzE6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlw
aSAgICAgICBzcGlubG9jazENCiAgNzI6ICAgICAgICAgIDAgICAgICAgNzQzOCAgICAgICAg
ICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICByZXNjaGVkMQ0KICA3MzogICAgICAgICAgMCAg
ICAgICAgIDU1ICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBpICAgICAgIGNhbGxmdW5jMQ0K
ICA3NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtdmly
cSAgICAgIGRlYnVnMQ0KICA3NTogICAgICAgICAgMCAgICAgICAgNjA2ICAgICAgICAgIDAg
IHhlbi1wZXJjcHUtaXBpICAgICAgIGNhbGxmdW5jc2luZ2xlMQ0KICA3NjogICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBpICAgICAgIGlycXdvcmsx
DQogIDc3OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS1p
cGkgICAgICAgc3BpbmxvY2syDQogIDc4OiAgICAgICAgICAwICAgICAgMjI3MjcgICAgICAg
ICAgMCAgeGVuLXBlcmNwdS12aXJxICAgICAgdGltZXIxDQogIDc5OiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgNzQ0MSAgeGVuLXBlcmNwdS1pcGkgICAgICAgcmVzY2hlZDINCiAg
ODA6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgIDQ1ICB4ZW4tcGVyY3B1LWlwaSAg
ICAgICBjYWxsZnVuYzINCiAgODE6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICB4ZW4tcGVyY3B1LXZpcnEgICAgICBkZWJ1ZzINCiAgODI6ICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgNTc2ICB4ZW4tcGVyY3B1LWlwaSAgICAgICBjYWxsZnVuY3NpbmdsZTIN
CiAgODM6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlw
aSAgICAgICBpcnF3b3JrMg0KICA4NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgMjIy
NjQgIHhlbi1wZXJjcHUtdmlycSAgICAgIHRpbWVyMg0KICA4NTogICAgICAgIDQ5OCAgICAg
ICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAgICB4ZW5idXMNCiAgODY6ICAg
ICAgICAgIDYgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgaHZj
X2NvbnNvbGUNCiAgODc6ICAgICAgIDQ0NTcgICAgICAgICAgMCAgICAgICAgICAwICAgeGVu
LWR5bi1ldmVudCAgICAgYmxraWYNCiAgODg6ICAgICAgICAyODkgICAgICAgICAgMCAgICAg
ICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgYmxraWYNCiAgODk6ICAgICAgICAxMDEgICAg
ICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXRoMA0KICA5MDogICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLW1zaS14ICAgICB4aGNp
X2hjZA0KICA5MTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJx
LW1zaS14ICAgICB4aGNpX2hjZA0KICA5MjogICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgIHhlbi1waXJxLW1zaS14ICAgICB4aGNpX2hjZA0KICA5MzogICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLW1zaS14ICAgICB4aGNpX2hjZA0KICA5
NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAg
ICB2a2JkDQogTk1JOiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIE5vbi1t
YXNrYWJsZSBpbnRlcnJ1cHRzDQogTE9DOiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgIExvY2FsIHRpbWVyIGludGVycnVwdHMNCiBTUFU6ICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgICAwICAgU3B1cmlvdXMgaW50ZXJydXB0cw0KIFBNSTogICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgICBQZXJmb3JtYW5jZSBtb25pdG9yaW5nIGludGVy
cnVwdHMNCiBJV0k6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgSVJRIHdv
cmsgaW50ZXJydXB0cw0KIFJUUjogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICBBUElDIElDUiByZWFkIHJldHJpZXMNCiBSRVM6ICAgICAgIDk0MDggICAgICAgNzQzOCAg
ICAgICA3NDQxICAgUmVzY2hlZHVsaW5nIGludGVycnVwdHMNCiBDQUw6ICAgICAgICAgMjkg
ICAgICAgICAxNSAgICAgICAgIDE3ICAgRnVuY3Rpb24gY2FsbCBpbnRlcnJ1cHRzDQogVExC
OiAgICAgICAgNTI1ICAgICAgICA2NDYgICAgICAgIDYwNCAgIFRMQiBzaG9vdGRvd25zDQog
VFJNOiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIFRoZXJtYWwgZXZlbnQg
aW50ZXJydXB0cw0KIFRIUjogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICBU
aHJlc2hvbGQgQVBJQyBpbnRlcnJ1cHRzDQogTUNFOiAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgIE1hY2hpbmUgY2hlY2sgZXhjZXB0aW9ucw0KIE1DUDogICAgICAgICAg
MSAgICAgICAgICAxICAgICAgICAgIDEgICBNYWNoaW5lIGNoZWNrIHBvbGxzDQogRVJSOiAg
ICAgICAgICAwDQogTUlTOiAgICAgICAgICAwDQo=
------------09400606801CFB705
Content-Type: application/octet-stream;
 name="qemu-traditional.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="qemu-traditional.log"

ZG9taWQ6IDE1Ci12aWRlb3JhbSBvcHRpb24gZG9lcyBub3Qgd29yayB3aXRoIGNpcnJ1cyB2
Z2EgZGV2aWNlIG1vZGVsLiBWaWRlb3JhbSBzZXQgdG8gNE0uClVzaW5nIGZpbGUgL2Rldi94
ZW5fdm1zL3NlY3VyaXR5YmFja3VwIGluIHJlYWQtd3JpdGUgbW9kZQpVc2luZyBmaWxlIC9k
ZXYveGVuX3Ztcy9zZWN1cml0eV9kYXRhIGluIHJlYWQtd3JpdGUgbW9kZQpXYXRjaGluZyAv
bG9jYWwvZG9tYWluLzAvZGV2aWNlLW1vZGVsLzE1L2xvZ2RpcnR5L2NtZApXYXRjaGluZyAv
bG9jYWwvZG9tYWluLzAvZGV2aWNlLW1vZGVsLzE1L2NvbW1hbmQKV2F0Y2hpbmcgL2xvY2Fs
L2RvbWFpbi8xNS9jcHUKY2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8xNwpx
ZW11X21hcF9jYWNoZV9pbml0IG5yX2J1Y2tldHMgPSAxMDAwMCBzaXplIDQxOTQzMDQKc2hh
cmVkIHBhZ2UgYXQgcGZuIGZlZmZkCmJ1ZmZlcmVkIGlvIHBhZ2UgYXQgcGZuIGZlZmZiCkd1
ZXN0IHV1aWQgPSBkNDdjYzNiYy02ODMyLTRkNTEtYmMyNy1iOWIwMjZhZDJjMWMKcG9wdWxh
dGluZyB2aWRlbyBSQU0gYXQgZmYwMDAwMDAKbWFwcGluZyB2aWRlbyBSQU0gZnJvbSBmZjAw
MDAwMApSZWdpc3RlciB4ZW4gcGxhdGZvcm0uCkRvbmUgcmVnaXN0ZXIgcGxhdGZvcm0uCnBs
YXRmb3JtX2ZpeGVkX2lvcG9ydDogY2hhbmdlZCByby9ydyBzdGF0ZSBvZiBST00gbWVtb3J5
IGFyZWEuIG5vdyBpcyBydyBzdGF0ZS4KeHNfcmVhZCgvbG9jYWwvZG9tYWluLzAvZGV2aWNl
LW1vZGVsLzE1L3hlbl9leHRlbmRlZF9wb3dlcl9tZ210KTogcmVhZCBlcnJvcgpMb2ctZGly
dHk6IG5vIGNvbW1hbmQgeWV0LgpJL08gcmVxdWVzdCBub3QgcmVhZHk6IDAsIHB0cjogMCwg
cG9ydDogMCwgZGF0YTogMCwgY291bnQ6IDAsIHNpemU6IDAKSS9PIHJlcXVlc3Qgbm90IHJl
YWR5OiAwLCBwdHI6IDAsIHBvcnQ6IDAsIGRhdGE6IDAsIGNvdW50OiAwLCBzaXplOiAwCnZj
cHUtc2V0OiB3YXRjaCBub2RlIGVycm9yLgpJL08gcmVxdWVzdCBub3QgcmVhZHk6IDAsIHB0
cjogMCwgcG9ydDogMCwgZGF0YTogMCwgY291bnQ6IDAsIHNpemU6IDAKeHNfcmVhZCgvbG9j
YWwvZG9tYWluLzE1L2xvZy10aHJvdHRsaW5nKTogcmVhZCBlcnJvcgpxZW11OiBpZ25vcmlu
ZyBub3QtdW5kZXJzdG9vZCBkcml2ZSBgL2xvY2FsL2RvbWFpbi8xNS9sb2ctdGhyb3R0bGlu
ZycKbWVkaXVtIGNoYW5nZSB3YXRjaCBvbiBgL2xvY2FsL2RvbWFpbi8xNS9sb2ctdGhyb3R0
bGluZycgLSB1bmtub3duIGRldmljZSwgaWdub3JlZApkbS1jb21tYW5kOiBob3QgaW5zZXJ0
IHBhc3MtdGhyb3VnaCBwY2kgZGV2IApyZWdpc3Rlcl9yZWFsX2RldmljZTogQXNzaWduaW5n
IHJlYWwgcGh5c2ljYWwgZGV2aWNlIDA0OjAwLjAgLi4uCnJlZ2lzdGVyX3JlYWxfZGV2aWNl
OiBEaXNhYmxlIE1TSSB0cmFuc2xhdGlvbiB2aWEgcGVyIGRldmljZSBvcHRpb24KcmVnaXN0
ZXJfcmVhbF9kZXZpY2U6IERpc2FibGUgcG93ZXIgbWFuYWdlbWVudApwdF9pb211bF9pbml0
OiBFcnJvcjogcHRfaW9tdWxfaW5pdCBjYW4ndCBvcGVuIGZpbGUgL2Rldi94ZW4vcGNpX2lv
bXVsOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5OiAweDQ6MHgwLjB4MApwdF9yZWdpc3Rl
cl9yZWdpb25zOiBJTyByZWdpb24gcmVnaXN0ZXJlZCAoc2l6ZT0weDAwMDAyMDAwIGJhc2Vf
YWRkcj0weGY5OGZlMDA0KQpwdF9tc2l4X2luaXQ6IGdldCBNU0ktWCB0YWJsZSBiYXIgYmFz
ZSBmOThmZTAwMApwdF9tc2l4X2luaXQ6IHRhYmxlX29mZiA9IDEwMDAsIHRvdGFsX2VudHJp
ZXMgPSA4CnB0X21zaXhfaW5pdDogbWFwcGluZyBwaHlzaWNhbCBNU0ktWCB0YWJsZSB0byA3
ZjkzZTYyMjMwMDAKcGNpX2ludHg6IGludHg9MQpyZWdpc3Rlcl9yZWFsX2RldmljZTogUmVh
bCBwaHlzaWNhbCBkZXZpY2UgMDQ6MDAuMCByZWdpc3RlcmVkIHN1Y2Nlc3NmdWx5IQpJUlEg
dHlwZSA9IElOVHgKY2lycnVzIHZnYSBtYXAgY2hhbmdlIHdoaWxlIG9uIGxmYiBtb2RlCnB0
X2lvbWVtX21hcDogZV9waHlzPWYzMDIwMDAwIG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBsZW49
ODE5MiBpbmRleD0wIGZpcnN0X21hcD0xCm1hcHBpbmcgdnJhbSB0byBmMDAwMDAwMCAtIGYw
NDAwMDAwCnBsYXRmb3JtX2ZpeGVkX2lvcG9ydDogY2hhbmdlZCByby9ydyBzdGF0ZSBvZiBS
T00gbWVtb3J5IGFyZWEuIG5vdyBpcyBydyBzdGF0ZS4KcGxhdGZvcm1fZml4ZWRfaW9wb3J0
OiBjaGFuZ2VkIHJvL3J3IHN0YXRlIG9mIFJPTSBtZW1vcnkgYXJlYS4gbm93IGlzIHJvIHN0
YXRlLgpVbmtub3duIFBWIHByb2R1Y3QgMyBsb2FkZWQgaW4gZ3Vlc3QKUFYgZHJpdmVyIGJ1
aWxkIDEKcmVnaW9uIHR5cGUgMCBhdCBbZjMwMDAwMDAsZjMwMjAwMDApLgpzcXVhc2ggaW9t
ZW0gW2YzMDAwMDAwLCBmMzAyMDAwMCkuCnJlZ2lvbiB0eXBlIDEgYXQgW2MxMDAsYzE0MCku
CnB0X2lvbWVtX21hcDogZV9waHlzPWZmZmZmZmZmIG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBs
ZW49ODE5MiBpbmRleD0wIGZpcnN0X21hcD0wCnNxdWFzaCBpb21lbSBbZjMwMjEwMDAsIGYz
MDIyMDAwKS4KcHRfaW9tZW1fbWFwOiBlX3BoeXM9ZjMwMjAwMDAgbWFkZHI9Zjk4ZmUwMDAg
dHlwZT0wIGxlbj04MTkyIGluZGV4PTAgZmlyc3RfbWFwPTAKcHRfaW9tZW1fbWFwOiBlX3Bo
eXM9ZmZmZmZmZmYgbWFkZHI9Zjk4ZmUwMDAgdHlwZT0wIGxlbj04MTkyIGluZGV4PTAgZmly
c3RfbWFwPTAKc3F1YXNoIGlvbWVtIFtmMzAyMTAwMCwgZjMwMjIwMDApLgpwdF9pb21lbV9t
YXA6IGVfcGh5cz1mMzAyMDAwMCBtYWRkcj1mOThmZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5k
ZXg9MCBmaXJzdF9tYXA9MApwdF9pb21lbV9tYXA6IGVfcGh5cz1mZmZmZmZmZiBtYWRkcj1m
OThmZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5kZXg9MCBmaXJzdF9tYXA9MApzcXVhc2ggaW9t
ZW0gW2YzMDIxMDAwLCBmMzAyMjAwMCkuCnB0X2lvbWVtX21hcDogZV9waHlzPWYzMDIwMDAw
IG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBsZW49ODE5MiBpbmRleD0wIGZpcnN0X21hcD0wCnB0
X2lvbWVtX21hcDogZV9waHlzPWZmZmZmZmZmIG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBsZW49
ODE5MiBpbmRleD0wIGZpcnN0X21hcD0wCnNxdWFzaCBpb21lbSBbZjMwMjEwMDAsIGYzMDIy
MDAwKS4KcHRfaW9tZW1fbWFwOiBlX3BoeXM9ZjMwMjAwMDAgbWFkZHI9Zjk4ZmUwMDAgdHlw
ZT0wIGxlbj04MTkyIGluZGV4PTAgZmlyc3RfbWFwPTAKcHRfaW9tZW1fbWFwOiBlX3BoeXM9
ZmZmZmZmZmYgbWFkZHI9Zjk4ZmUwMDAgdHlwZT0wIGxlbj04MTkyIGluZGV4PTAgZmlyc3Rf
bWFwPTAKc3F1YXNoIGlvbWVtIFtmMzAyMTAwMCwgZjMwMjIwMDApLgpwdF9pb21lbV9tYXA6
IGVfcGh5cz1mMzAyMDAwMCBtYWRkcj1mOThmZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5kZXg9
MCBmaXJzdF9tYXA9MApwdF9pb21lbV9tYXA6IGVfcGh5cz1mZmZmZmZmZiBtYWRkcj1mOThm
ZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5kZXg9MCBmaXJzdF9tYXA9MApzcXVhc2ggaW9tZW0g
W2YzMDIxMDAwLCBmMzAyMjAwMCkuCnB0X3BjaV93cml0ZV9jb25maWc6IFswMDowNTowXSBX
YXJuaW5nOiBHdWVzdCBhdHRlbXB0IHRvIHNldCBhZGRyZXNzIHRvIHVudXNlZCBCYXNlIEFk
ZHJlc3MgUmVnaXN0ZXIuIFtPZmZzZXQ6MzBoXVtMZW5ndGg6NF0KcHRfaW9tZW1fbWFwOiBl
X3BoeXM9ZjMwMjAwMDAgbWFkZHI9Zjk4ZmUwMDAgdHlwZT0wIGxlbj04MTkyIGluZGV4PTAg
Zmlyc3RfbWFwPTAKcHRfbXNpeF91cGRhdGVfb25lOiBwdF9tc2l4X3VwZGF0ZV9vbmUgcmVx
dWVzdGVkIHBpcnEgPSA4NwpwdF9tc2l4X3VwZGF0ZV9vbmU6IFVwZGF0ZSBtc2l4IGVudHJ5
IDAgd2l0aCBwaXJxIDU3IGd2ZWMgMApwdF9tc2l4X3VwZGF0ZV9vbmU6IHB0X21zaXhfdXBk
YXRlX29uZSByZXF1ZXN0ZWQgcGlycSA9IDg2CnB0X21zaXhfdXBkYXRlX29uZTogVXBkYXRl
IG1zaXggZW50cnkgMSB3aXRoIHBpcnEgNTYgZ3ZlYyAwCnB0X21zaXhfdXBkYXRlX29uZTog
cHRfbXNpeF91cGRhdGVfb25lIHJlcXVlc3RlZCBwaXJxID0gODUKcHRfbXNpeF91cGRhdGVf
b25lOiBVcGRhdGUgbXNpeCBlbnRyeSAyIHdpdGggcGlycSA1NSBndmVjIDAKcHRfbXNpeF91
cGRhdGVfb25lOiBwdF9tc2l4X3VwZGF0ZV9vbmUgcmVxdWVzdGVkIHBpcnEgPSA4NApwdF9t
c2l4X3VwZGF0ZV9vbmU6IFVwZGF0ZSBtc2l4IGVudHJ5IDMgd2l0aCBwaXJxIDU0IGd2ZWMg
MApzaHV0ZG93biByZXF1ZXN0ZWQgaW4gY3B1X2hhbmRsZV9pb3JlcQpJc3N1ZWQgZG9tYWlu
IDE1IHBvd2Vyb2ZmCg==
------------09400606801CFB705
Content-Type: application/octet-stream;
 name="qemu-upstream.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="qemu-upstream.log"

Y2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8xNwp4ZW46IHNoYXJlZCBwYWdl
IGF0IHBmbiBmZWZmZAp4ZW46IGJ1ZmZlcmVkIGlvIHBhZ2UgYXQgcGZuIGZlZmZiCnhlbl9t
YXBjYWNoZTogeGVuX21hcF9jYWNoZV9pbml0LCBucl9idWNrZXRzID0gODAwMCBzaXplIDE1
NzI4NjQKeGVuX3BsYXRmb3JtOiBjaGFuZ2VkIHJvL3J3IHN0YXRlIG9mIFJPTSBtZW1vcnkg
YXJlYS4gbm93IGlzIHJ3IHN0YXRlLgp4ZW46IEkvTyByZXF1ZXN0IG5vdCByZWFkeTogMCwg
cHRyOiAwLCBwb3J0OiAwLCBkYXRhOiAwLCBjb3VudDogMCwgc2l6ZTogMAp4ZW46IEkvTyBy
ZXF1ZXN0IG5vdCByZWFkeTogMCwgcHRyOiAwLCBwb3J0OiAwLCBkYXRhOiAwLCBjb3VudDog
MCwgc2l6ZTogMAp4ZW46IEkvTyByZXF1ZXN0IG5vdCByZWFkeTogMCwgcHRyOiAwLCBwb3J0
OiAwLCBkYXRhOiAwLCBjb3VudDogMCwgc2l6ZTogMApbMDA6MDUuMF0geGVuX3B0X2luaXRm
bjogQXNzaWduaW5nIHJlYWwgcGh5c2ljYWwgZGV2aWNlIDA0OjAwLjAgdG8gZGV2Zm4gMHgy
OApbMDA6MDUuMF0geGVuX3B0X3JlZ2lzdGVyX3JlZ2lvbnM6IElPIHJlZ2lvbiAwIHJlZ2lz
dGVyZWQgKHNpemU9MHgyMDAwbHggYmFzZV9hZGRyPTB4Zjk4ZmUwMDBseCB0eXBlOiAweDQp
ClswMDowNS4wXSB4ZW5fcHRfbXNpeF9pbml0OiBnZXQgTVNJLVggdGFibGUgQkFSIGJhc2Ug
MHhmOThmZTAwMApbMDA6MDUuMF0geGVuX3B0X21zaXhfaW5pdDogdGFibGVfb2ZmID0gMHgx
MDAwLCB0b3RhbF9lbnRyaWVzID0gOApbMDA6MDUuMF0geGVuX3B0X21zaXhfaW5pdDogbWFw
cGluZyBwaHlzaWNhbCBNU0ktWCB0YWJsZSB0byAweDdmYTZjODg2NjAwMApbMDA6MDUuMF0g
eGVuX3B0X3BjaV9pbnR4OiBpbnR4PTEKWzAwOjA1LjBdIHhlbl9wdF9pbml0Zm46IFJlYWwg
cGh5c2ljYWwgZGV2aWNlIDA0OjAwLjAgcmVnaXN0ZXJlZCBzdWNjZXNzZnVseSEKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBhIHZhbD0weDAwMDAw
YzAzIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4
MDAwMCB2YWw9MHgwMDAwMTAzMyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDIgdmFsPTB4MDAwMDAxOTQgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDEwIHZhbD0weDAwMDAwMDA0IGxl
bj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAg
dmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMDEwIHZhbD0weGZmZmZlMDA0IGxlbj00ClswMDowNS4wXSB4ZW5fcHRf
cGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFsPTB4MDAwMDAwMDQgbGVuPTQK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0w
eDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRk
cmVzcz0weDAwMTQgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDow
NS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTQgdmFsPTB4MDAw
MDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9
MHgwMDE4IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRl
X2NvbmZpZzogYWRkcmVzcz0weDAwMTggdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBd
IHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE4IHZhbD0weDAwMDAwMDAw
IGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAw
MTggdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25m
aWc6IGFkZHJlc3M9MHgwMDFjIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5f
cHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMWMgdmFsPTB4ZmZmZmZmZmYgbGVu
PTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDFjIHZh
bD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzog
YWRkcmVzcz0weDAwMWMgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDIwIHZhbD0weDAwMDAwMDAwIGxlbj00Clsw
MDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjAgdmFsPTB4
ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDIwIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dy
aXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjAgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDI0IHZhbD0weDAwMDAw
MDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0w
eDAwMjQgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDI0IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4
ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjQgdmFsPTB4MDAwMDAwMDAg
bGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDMw
IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZp
ZzogYWRkcmVzcz0weDAwMzAgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDMwIHZhbD0weDAwMDAwMDAwIGxlbj00
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMzAgdmFs
PTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFk
ZHJlc3M9MHgwMDNkIHZhbD0weDAwMDAwMDAxIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNp
X3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwM2MgdmFsPTB4MDAwMDAwMGEgbGVuPTEKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAw
MDAwMDAwIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVz
cz0weDAwMDQgdmFsPTB4MDAwMDAwMDQgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDEwIHZhbD0weDAwMDAwMDA0IGxlbj00ClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFsPTB4ZjMwNTAw
MDQgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDA0IHZhbD0weDAwMDAwMDA0IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MCBwaXJxOi0xIGVudHJ5
X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
MSBlbnRyeV9ucjowIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjEgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MSBu
b3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSBlbnRyeV9ucjoyIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjIgbm90IHVwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MyBw
aXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgMSBlbnRyeV9ucjozIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEg
ZW50cnlfbnI6NCBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSBlbnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90
IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
ZW50cnlfbnI6NiBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGly
cTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIDEgZW50cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDAgdmFsPTB4MDAwMDEwMzMgbGVuPTIK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDAwIHZhbD0w
eDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRy
ZXNzPTB4MDAwOCB2YWw9MHgwYzAzMzAwMyBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV9y
ZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMGUgdmFsPTB4MDAwMDAwMDAgbGVuPTEKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBlIHZhbD0weDAwMDAw
MDAwIGxlbj0xCnhlbjogcGh5c21hcHBpbmcgZG9lcyBub3QgZXhpc3QgYXQgMDAwMDAwMDBm
MzA0MDAwMAp4ZW46IG1hcHBpbmcgdnJhbSB0byBmMDAwMDAwMCAtIDQwMzQ5MjA0NDgKeGVu
OiBwaHlzbWFwcGluZyBkb2VzIG5vdCBleGlzdCBhdCAwMDAwMDAwMGYzMDIwMDAwClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzMCB2YWw9MHgwMDAw
MDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9
MHgwMDMwIHZhbD0weGZmZmZmZmZlIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRl
X2NvbmZpZzogV2FybmluZzogR3Vlc3QgYXR0ZW1wdCB0byBzZXQgYWRkcmVzcyB0byB1bnVz
ZWQgQmFzZSBBZGRyZXNzIFJlZ2lzdGVyLiAoYWRkcjogMHgzMCwgbGVuOiA0KQpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMzAgdmFsPTB4MDAwMDAw
MDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4
MDAzMCB2YWw9MHgwMDAwMDAwMCBsZW49NAp4ZW5fcGxhdGZvcm06IFVua25vd24gUFYgcHJv
ZHVjdCAzIGxvYWRlZCBpbiBndWVzdAp4ZW5fcGxhdGZvcm06IHVucGx1ZyBkaXNrcwp4ZW5f
cGxhdGZvcm06IHVucGx1ZyBuaWNzClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmln
OiBhZGRyZXNzPTB4MDAwOCB2YWw9MHgwYzAzMzAwMyBsZW49NApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMGUgdmFsPTB4MDAwMDAwMDAgbGVuPTEK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDAwIHZhbD0w
eDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRy
ZXNzPTB4MDAwYSB2YWw9MHgwMDAwMGMwMyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9y
ZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDAgdmFsPTB4MDAwMDEwMzMgbGVuPTIKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDAyIHZhbD0weDAwMDAw
MTk0IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4
MDAwZSB2YWw9MHgwMDAwMDAwMCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDggdmFsPTB4MGMwMzMwMDMgbGVuPTQKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBlIHZhbD0weDAwMDAwMDAwIGxl
bj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwMCB2
YWw9MHgwMTk0MTAzMyBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzog
YWRkcmVzcz0weDAwMGUgdmFsPTB4MDAwMDAwMDAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0yClsw
MDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9MHgw
MDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVz
cz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDowNS4w
XSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAwMDAw
NSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAw
NzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25m
aWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAwMDAwMDExIGxlbj0xClswMDowNS4wXSB4ZW5f
cHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBsZW49
MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTAgdmFs
PTB4MDAwMDAwMTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFk
ZHJlc3M9MHgwMGEyIHZhbD0weDAwMDAwMDAyIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNp
X3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDBhNCB2YWw9MHgwMDAwOGZjMCBsZW49MgpbMDA6
MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDggdmFsPTB4MGMw
MzMwMDMgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9
MHgwMDAwIHZhbD0weDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRf
Y29uZmlnOiBhZGRyZXNzPTB4MDAzZCB2YWw9MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUuMF0g
eGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwM2MgdmFsPTB4MDAwMDAwMGEg
bGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0
IHZhbD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZp
ZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDQgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDEwIHZhbD0weGYzMDUwMDA0IGxlbj00
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFs
PTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFk
ZHJlc3M9MHgwMDEwIHZhbD0weGZmZmZlMDA0IGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNp
X3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFsPTB4ZjMwNTAwMDQgbGVuPTQKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0weDAw
MDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVz
cz0weDAwMTQgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTQgdmFsPTB4MDAwMDAw
MDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4
MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSBlbnRyeV9ucjowIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjAgbm90
IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
ZW50cnlfbnI6MSBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjoxIG5vdCB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjIgcGly
cTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIDEgZW50cnlfbnI6MiBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjozIHBpcnE6LTEgZW50cnlfdXBk
YXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVu
dHJ5X25yOjMgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgZW50cnlfbnI6NCBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1
LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo0IG5vdCB1
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVu
dHJ5X25yOjUgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNp
eF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NSBub3QgdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo2IHBpcnE6
LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTog
TWUgaGVyZSAxIGVudHJ5X25yOjYgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NyBwaXJxOi0xIGVudHJ5X3VwZGF0
ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRy
eV9ucjo3IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmln
OiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0
X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDA0IGxlbj0y
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAxOCB2YWw9
MHgwMDAwMDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFk
ZHJlc3M9MHgwMDE4IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNp
X3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAxOCB2YWw9MHgwMDAwMDAwMCBsZW49NApbMDA6
MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDE4IHZhbD0weDAw
MDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVz
cz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3Vw
ZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MCBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjow
IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBo
ZXJlIGVudHJ5X25yOjEgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MSBub3QgdXBkYXRlZDow
IApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjoy
IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjIgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MyBwaXJxOi0xIGVudHJ5
X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
MSBlbnRyeV9ucjozIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NCBu
b3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSBlbnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90IHVwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NiBw
aXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGlycTotMSBlbnRyeV91
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEg
ZW50cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNCBs
ZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMWMg
dmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmln
OiBhZGRyZXNzPTB4MDAxYyB2YWw9MHhmZmZmZmZmZiBsZW49NApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMWMgdmFsPTB4MDAwMDAwMDAgbGVuPTQK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAxYyB2YWw9
MHgwMDAwMDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFk
ZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfbXNp
eF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjAgcGlycTotMSBlbnRyeV91cGRhdGVk
OjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlf
bnI6MCBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTog
TWUgaGVyZSBlbnRyeV9ucjoxIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0g
eGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjEgbm90IHVwZGF0
ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlf
bnI6MiBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3Vw
ZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjoyIG5vdCB1cGRhdGVkOjAgClswMDowNS4w
XSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjMgcGlycTotMSBl
bnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBo
ZXJlIDEgZW50cnlfbnI6MyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhf
dXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo0IHBpcnE6LTEgZW50cnlfdXBkYXRlZDow
IApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25y
OjQgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1l
IGhlcmUgZW50cnlfbnI6NSBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo1IG5vdCB1cGRhdGVk
OjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25y
OjYgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NiBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0g
eGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo3IHBpcnE6LTEgZW50
cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSAxIGVudHJ5X25yOjcgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAw
MDQgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDIwIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2Nv
bmZpZzogYWRkcmVzcz0weDAwMjAgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDIwIHZhbD0weDAwMDAwMDAwIGxl
bj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjAg
dmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmln
OiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjowIHBpcnE6LTEgZW50cnlfdXBk
YXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVu
dHJ5X25yOjAgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgZW50cnlfbnI6MSBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1
LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjoxIG5vdCB1
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVu
dHJ5X25yOjIgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNp
eF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MiBub3QgdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjozIHBpcnE6
LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTog
TWUgaGVyZSAxIGVudHJ5X25yOjMgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NCBwaXJxOi0xIGVudHJ5X3VwZGF0
ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRy
eV9ucjo0IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIGVudHJ5X25yOjUgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4w
XSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NSBub3QgdXBk
YXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRy
eV9ucjo2IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhf
dXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjYgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1
LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NyBwaXJxOi0x
IGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1l
IGhlcmUgMSBlbnRyeV9ucjo3IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfcGNp
X3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6
MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAw
MDAwMDA0IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNz
PTB4MDAyNCB2YWw9MHgwMDAwMDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0
ZV9jb25maWc6IGFkZHJlc3M9MHgwMDI0IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDowNS4w
XSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAyNCB2YWw9MHgwMDAwMDAw
MCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgw
MDI0IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MCBwaXJxOi0xIGVudHJ5
X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
MSBlbnRyeV9ucjowIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjEgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MSBu
b3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSBlbnRyeV9ucjoyIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjIgbm90IHVwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MyBw
aXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgMSBlbnRyeV9ucjozIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEg
ZW50cnlfbnI6NCBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSBlbnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90
IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
ZW50cnlfbnI6NiBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGly
cTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIDEgZW50cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9
MHgwMDAwMDAwNCBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwMzAgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
d3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAzMCB2YWw9MHhmZmZmZjgwMCBsZW49NApbMDA6
MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IFdhcm5pbmc6IEd1ZXN0IGF0dGVtcHQg
dG8gc2V0IGFkZHJlc3MgdG8gdW51c2VkIEJhc2UgQWRkcmVzcyBSZWdpc3Rlci4gKGFkZHI6
IDB4MzAsIGxlbjogNCkKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDMwIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dy
aXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMzAgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAw
MDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBl
bnRyeV9ucjowIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21z
aXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjAgbm90IHVwZGF0ZWQ6MCAKWzAw
OjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MSBwaXJx
Oi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6
IE1lIGhlcmUgMSBlbnRyeV9ucjoxIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjIgcGlycTotMSBlbnRyeV91cGRh
dGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50
cnlfbnI6MiBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29u
ZTogTWUgaGVyZSBlbnRyeV9ucjozIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUu
MF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjMgbm90IHVw
ZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50
cnlfbnI6NCBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4
X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo0IG5vdCB1cGRhdGVkOjAgClswMDow
NS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjUgcGlycTot
MSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBN
ZSBoZXJlIDEgZW50cnlfbnI6NSBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21z
aXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo2IHBpcnE6LTEgZW50cnlfdXBkYXRl
ZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5
X25yOjYgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6
IE1lIGhlcmUgZW50cnlfbnI6NyBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBd
IHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo3IG5vdCB1cGRh
dGVkOjAgClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAy
YyB2YWw9MHgwMDAwMTQ2MiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwMmUgdmFsPTB4MDAwMDQyNTcgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0y
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9
MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAw
MDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwNzIgdmFsPTB4MDAwMDAwODAgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVf
Y29uZmlnOiBhZGRyZXNzPTB4MDA3MiB2YWw9MHgwMDAwMDA4MCBsZW49MgpbMDA6MDUuMF0g
eGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDYgdmFsPTB4MDAwMDAwMTAg
bGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDM0
IHZhbD0weDAwMDAwMDUwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmln
OiBhZGRyZXNzPTB4MDA1MCB2YWw9MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTEgdmFsPTB4MDAwMDAwNzAgbGVuPTEK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDcwIHZhbD0w
eDAwMDAwMDA1IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRy
ZXNzPTB4MDA3MSB2YWw9MHgwMDAwMDA5MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9y
ZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwOTAgdmFsPTB4MDAwMDAwMTEgbGVuPTEKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDkyIHZhbD0weDAwMDAw
MDA3IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0w
eDAwOTIgdmFsPTB4MDAwMDAwMDcgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0yClswMDowNS4wXSB4
ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9MHgwMDAwMDA1MCBs
ZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTAg
dmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRf
cGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAwMDAwNSBsZW49MQpb
MDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNzEgdmFsPTB4
MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDkwIHZhbD0weDAwMDAwMDExIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3Jl
YWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBsZW49MQpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTAgdmFsPTB4MDAwMDAw
MTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDA2IHZhbD0weDAwMDAwMDEwIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVu
X3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVu
PTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZh
bD0weDAwMDAwMDcwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBh
ZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAwMDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3Bj
aV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAw
MDAwMDExIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNz
PTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFk
X2NvbmZpZzogYWRkcmVzcz0weDAwYTAgdmFsPTB4MDAwMDAwMTAgbGVuPTEKWzAwOjA1LjBd
IHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMGExIHZhbD0weDAwMDAwMDAw
IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAw
NiB2YWw9MHgwMDAwMDAxMCBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwMzQgdmFsPTB4MDAwMDAwNTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUwIHZhbD0weDAwMDAwMDAxIGxlbj0x
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MiB2YWw9
MHgwMDAwMDAwMyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwMDYgdmFsPTB4MDAwMDAwMTAgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDM0IHZhbD0weDAwMDAwMDUwIGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MCB2YWw9MHgwMDAw
MDAwMSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwNTEgdmFsPTB4MDAwMDAwNzAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDcwIHZhbD0weDAwMDAwMDA1IGxlbj0xClswMDowNS4wXSB4
ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MSB2YWw9MHgwMDAwMDA5MCBs
ZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwOTAg
dmFsPTB4MDAwMDAwMTEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMDkxIHZhbD0weDAwMDAwMGEwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRf
cGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDBhMCB2YWw9MHgwMDAwMDAxMCBsZW49MQpb
MDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTEgdmFsPTB4
MDAwMDAwMDAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDAwIHZhbD0weDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3Jl
YWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAw
MDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDU0IHZhbD0weDAwMDAwMDA4IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVu
X3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVu
PTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZh
bD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzog
YWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDIgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBjIHZhbD0weDAwMDAwMDAwIGxlbj0xClsw
MDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1NCB2YWw9MHgw
MDAwMDAwOCBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVz
cz0weDAwMDQgdmFsPTB4MDAwMDAwMDIgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDAyIGxlbj0yClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAw
MDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDYwIHZhbD0weDAwMDAwMDMwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDAwYyB2YWw9MHgwMDAwMDAwMCBsZW49MQpbMDA6MDUuMF0geGVu
X3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDBjIHZhbD0weDAwMDAwMDEwIGxl
bj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwYyB2
YWw9MHgwMDAwMDAxMCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzog
YWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAxNiBsZW49Mgpb
MDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDYgdmFsPTB4
MDAwMDAwMTAgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDM0IHZhbD0weDAwMDAwMDUwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3Jl
YWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MCB2YWw9MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTEgdmFsPTB4MDAwMDAw
NzAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDcwIHZhbD0weDAwMDAwMDA1IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDA3MSB2YWw9MHgwMDAwMDA5MCBsZW49MQpbMDA6MDUuMF0geGVu
X3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwOTAgdmFsPTB4MDAwMDAwMTEgbGVu
PTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZh
bD0weDAwMDAwMDEwIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBh
ZGRyZXNzPTB4MDAzNCB2YWw9MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3Bj
aV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAw
MDAwMDcwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNz
PTB4MDA3MCB2YWw9MHgwMDAwMDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFk
X2NvbmZpZzogYWRkcmVzcz0weDAwNzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBd
IHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAwMDAwMDEx
IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5
MiB2YWw9MHgwMDAwMDAwNyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwMDYgdmFsPTB4MDAwMDAwMTAgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDM0IHZhbD0weDAwMDAwMDUwIGxlbj0x
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MCB2YWw9
MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwNTEgdmFsPTB4MDAwMDAwNzAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDcwIHZhbD0weDAwMDAwMDA1IGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MSB2YWw9MHgwMDAw
MDA5MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwOTAgdmFsPTB4MDAwMDAwMTEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDkyIHZhbD0weDAwMDAwMDA3IGxlbj0yClswMDowNS4wXSB4
ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwOTIgdmFsPTB4MDAwMDAwMDcg
bGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDk0
IHZhbD0weDAwMDAxMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZp
ZzogYWRkcmVzcz0weDAwOTIgdmFsPTB4MDAwMGMwMDcgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9tc2l4Y3RybF9yZWdfd3JpdGU6IGVuYWJsZSBNU0ktWApbMDA6MDUuMF0geGVuX3B0X3Bj
aV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgw
MDAwMDQwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJl
c3M9MHgwMDkyIHZhbD0weDAwMDA4MDA3IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfbXNpeF91
cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjAgcGlycTotMSBlbnRyeV91cGRhdGVkOjEg
ClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVyZSAuLiBNU0ktWDogKGFkZHI6MTcx
NTIgZGF0YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAwKSBtYXBwZWQ6bm90X21hcHBlZCAKWzAw
OjA1LjBdIG1zaV9tc2l4X3NldHVwOiBvbGRfcGlycV9jYWxjID0gNApbMDA6MDUuMF0gbXNp
X21zaXhfc2V0dXA6IHJlcXVlc3RlZCBwaXJxIDQgZm9yIE1TSS1YICh2ZWM6IDAsIGVudHJ5
OiAwKQpbMDA6MDUuMF0gbXNpX21zaXhfdXBkYXRlOiBVcGRhdGluZyBNU0ktWCB3aXRoIHBp
cnEgNCBndmVjIDAgZ2ZsYWdzIDB4MzA1NyAoZW50cnk6IDApClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMgIHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjEgcGly
cTotMSBlbnRyeV91cGRhdGVkOjEgClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVy
ZSAuLiBNU0ktWDogKGFkZHI6MTcxNTIgZGF0YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAweDEp
IG1hcHBlZDpub3RfbWFwcGVkIApbMDA6MDUuMF0gbXNpX21zaXhfc2V0dXA6IG9sZF9waXJx
X2NhbGMgPSA0ClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogcmVxdWVzdGVkIHBpcnEgNCBm
b3IgTVNJLVggKHZlYzogMCwgZW50cnk6IDB4MSkKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0
ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBwaXJxIDQgZ3ZlYyAwIGdmbGFncyAweDMwNTYgKGVu
dHJ5OiAweDEpClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMg
IHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVf
b25lOiBNZSBoZXJlIGVudHJ5X25yOjIgcGlycTotMSBlbnRyeV91cGRhdGVkOjEgClswMDow
NS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVyZSAuLiBNU0ktWDogKGFkZHI6MTcxNTIgZGF0
YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAweDIpIG1hcHBlZDpub3RfbWFwcGVkIApbMDA6MDUu
MF0gbXNpX21zaXhfc2V0dXA6IG9sZF9waXJxX2NhbGMgPSA0ClswMDowNS4wXSBtc2lfbXNp
eF9zZXR1cDogcmVxdWVzdGVkIHBpcnEgNCBmb3IgTVNJLVggKHZlYzogMCwgZW50cnk6IDB4
MikKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBwaXJx
IDQgZ3ZlYyAwIGdmbGFncyAweDMwNTUgKGVudHJ5OiAweDIpClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMgIHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjMgcGly
cTotMSBlbnRyeV91cGRhdGVkOjEgClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVy
ZSAuLiBNU0ktWDogKGFkZHI6MTcxNTIgZGF0YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAweDMp
IG1hcHBlZDpub3RfbWFwcGVkIApbMDA6MDUuMF0gbXNpX21zaXhfc2V0dXA6IG9sZF9waXJx
X2NhbGMgPSA0ClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogcmVxdWVzdGVkIHBpcnEgNCBm
b3IgTVNJLVggKHZlYzogMCwgZW50cnk6IDB4MykKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0
ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBwaXJxIDQgZ3ZlYyAwIGdmbGFncyAweDMwNTQgKGVu
dHJ5OiAweDMpClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMg
IHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVf
b25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDow
NS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NCBub3Qg
dXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBl
bnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21z
aXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90IHVwZGF0ZWQ6MCAKWzAw
OjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NiBwaXJx
Oi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6
IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGlycTotMSBlbnRyeV91cGRh
dGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50
cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwYTQgdmFsPTB4MDAwMDhmYzAgbGVuPTQKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0y
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9
MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAw
MDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwNzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAwMDAwMDExIGxlbj0xClswMDowNS4wXSB4
ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBs
ZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTAg
dmFsPTB4MDAwMDAwMTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMGExIHZhbD0weDAwMDAwMDAwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRf
cGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1NCB2YWw9MHgwMDAwMDAwOCBsZW49Mgo=
------------09400606801CFB705
Content-Type: text/plain;
 name="xl-dmesg.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg.txt"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fX18gICAgICAgICAgICAgICAgICAgIF8g
ICAgICAgIF8gICAgIF8gICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9fXyAv
ICAgIF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18gDQogIFwgIC8vIF8g
XCAnXyBcICB8IHx8IHxfICAgfF8gXCBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdf
IFx8IHwvIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgX19fKSB8X198IHxffCB8
IHwgfCBcX18gXCB8fCAoX3wgfCB8XykgfCB8ICBfXy8NCiAvXy9cX1xfX198X3wgfF98ICAg
IHxffChfKV9fX18vICAgIFxfXyxffF98IHxffF9fXy9cX19cX18sX3xfLl9fL3xffFxfX198
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4zLXVuc3RhYmxl
IChyb290QGR5bmRucy5vcmcpIChnY2MgKERlYmlhbiA0LjQuNS04KSA0LjQuNSkgRnJpIERl
YyAgNyAyMTowOToyMSBDRVQgMjAxMg0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogVGh1IERl
YyAwNiAxNDoyMDoxNSAyMDEyICswMTAwIDI2MjQ3OmQ4MjhmMjNiNzJjOA0KKFhFTikgQm9v
dGxvYWRlcjogR1JVQiAxLjk4KzIwMTAwODA0LTE0K3NxdWVlemUxDQooWEVOKSBDb21tYW5k
IGxpbmU6IGRvbTBfbWVtPTEwMjRNLG1heDoxMDI0TSBsb2dsdmw9YWxsIGxvZ2x2bF9ndWVz
dD1hbGwgY29uc29sZV90aW1lc3RhbXBzIHZnYT1nZngtMTI4MHgxMDI0eDMyIGNwdWlkbGUg
Y3B1ZnJlcT14ZW4gbm9yZWJvb3QgZGVidWcgbGFwaWM9ZGVidWcgYXBpY192ZXJib3NpdHk9
ZGVidWcgYXBpYz1kZWJ1ZyBpb21tdT1vbix2ZXJib3NlLGRlYnVnLGFtZC1pb21tdS1kZWJ1
ZyBjb20xPTM4NDAwLDhuMSBjb25zb2xlPXZnYSxjb20xDQooWEVOKSBWaWRlbyBpbmZvcm1h
dGlvbjoNCihYRU4pICBWR0EgaXMgZ3JhcGhpY3MgbW9kZSAxMjgweDEwMjQsIDMyIGJwcA0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNv
bmRzDQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDIgTUJSIHNpZ25h
dHVyZXMNCihYRU4pICBGb3VuZCAyIEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVO
KSBYZW4tZTgyMCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAw
MDAwMDlmMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZjAwMCAtIDAwMDAwMDAw
MDAwYTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTQwMDAgLSAwMDAwMDAw
MDAwMTAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAw
MDBhZmY5MDAwMCAodXNhYmxlKQ0KKFhFTikgIDAwMDAwMDAwYWZmOTAwMDAgLSAwMDAwMDAw
MGFmZjllMDAwIChBQ1BJIGRhdGEpDQooWEVOKSAgMDAwMDAwMDBhZmY5ZTAwMCAtIDAwMDAw
MDAwYWZmZTAwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAwMDAwYWZmZTAwMDAgLSAwMDAw
MDAwMGIwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZmZTAwMDAwIC0gMDAw
MDAwMDEwMDAwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDEwMDAwMDAwMCAtIDAw
MDAwMDAyNTAwMDAwMDAgKHVzYWJsZSkNCihYRU4pIEFDUEk6IFJTRFAgMDAwRkIxMDAsIDAw
MTQgKHIwIEFDUElBTSkNCihYRU4pIEFDUEk6IFJTRFQgQUZGOTAwMDAsIDAwNDggKHIxIE1T
SSAgICBPRU1TTElDICAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRkFD
UCBBRkY5MDIwMCwgMDA4NCAocjEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQgICAg
ICAgOTcpDQooWEVOKSBBQ1BJOiBEU0RUIEFGRjkwNUUwLCA5NDI3IChyMSAgQTc2NDAgQTc2
NDAxMDAgICAgICAxMDAgSU5UTCAyMDA1MTExNykNCihYRU4pIEFDUEk6IEZBQ1MgQUZGOUUw
MDAsIDAwNDANCihYRU4pIEFDUEk6IEFQSUMgQUZGOTAzOTAsIDAwODggKHIxIDc2NDBNUyBB
NzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogTUNGRyBBRkY5
MDQyMCwgMDAzQyAocjEgNzY0ME1TIE9FTU1DRkcgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcp
DQooWEVOKSBBQ1BJOiBTTElDIEFGRjkwNDYwLCAwMTc2IChyMSBNU0kgICAgT0VNU0xJQyAg
MjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IE9FTUIgQUZGOUUwNDAsIDAw
NzIgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikg
QUNQSTogU1JBVCBBRkY5QTVFMCwgMDEwOCAocjMgQU1EICAgIEZBTV9GXzEwICAgICAgICAy
IEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBIUEVUIEFGRjlBNkYwLCAwMDM4IChyMSA3
NjQwTVMgT0VNSFBFVCAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IElW
UlMgQUZGOUE3MzAsIDAwRjggKHIxICBBTUQgICAgIFJEODkwUyAgIDIwMjAzMSBBTUQgICAg
ICAgICAwKQ0KKFhFTikgQUNQSTogU1NEVCBBRkY5QTgzMCwgMERBNCAocjEgQSBNIEkgIFBP
V0VSTk9XICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBTeXN0ZW0gUkFNOiA4MTkx
TUIgKDgzODc3NzJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBOb2RlIDAN
CihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBY
TSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNCAtPiBOb2RlIDANCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgNSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IE5vZGUgMCBQ
WE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwLWIwMDAwMDAw
DQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTI1MDAwMDAwMA0KKFhFTikg
TlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAyNGRhMDEwMDAgLSAyNGRhMDQwMDAN
CihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgRG9tYWlu
IGhlYXAgaW5pdGlhbGlzZWQNCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIgYXQgMHhmYjAw
MDAwMCwgbWFwcGVkIHRvIDB4ZmZmZjgyYzAwMDA4MTAwMCwgdXNpbmcgNjE0NGssIHRvdGFs
IDE0MzM2aw0KKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwgbGluZWxlbmd0
aD01MTIwLCBmb250IDh4MTYNCihYRU4pIHZlc2FmYjogVHJ1ZWNvbG9yOiBzaXplPTg6ODo4
OjgsIHNoaWZ0PTI0OjE2Ojg6MA0KKFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZm
NzgwDQooWEVOKSBETUkgcHJlc2VudC4NCihYRU4pIEFQSUMgYm9vdCBzdGF0ZSBpcyAneGFw
aWMnDQooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1U
aW1lciBJTyBQb3J0OiAweDgwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4
X2NudFs4MDQsMF0sIHBtMXhfZXZ0WzgwMCwwXQ0KKFhFTikgQUNQSTogICAgICAgICAgICAg
ICAgICB3YWtldXBfdmVjW2FmZjllMDBjXSwgdmVjX3NpemVbMjBdDQooWEVOKSBBQ1BJOiBM
b2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMCAw
OjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0g
bGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMSAwOjEwIEFQSUMg
dmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRb
MHgwMl0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMiAwOjEwIEFQSUMgdmVyc2lvbiAx
Ng0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwM10gZW5h
YmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMyAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikg
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkNCihY
RU4pIFByb2Nlc3NvciAjNCAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgwNl0gbGFwaWNfaWRbMHgwNV0gZW5hYmxlZCkNCihYRU4pIFByb2Nl
c3NvciAjNSAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogSU9BUElDIChpZFsw
eDA2XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBd
OiBhcGljX2lkIDYsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwN10gYWRkcmVzc1sweGZlYzIwMDAwXSBnc2lf
YmFzZVsyNF0pDQooWEVOKSBJT0FQSUNbMV06IGFwaWNfaWQgNywgdmVyc2lvbiAzMywgYWRk
cmVzcyAweGZlYzIwMDAwLCBHU0kgMjQtNTUNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVO
KSBBQ1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQg
Ynkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVO
KSBFbmFibGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMiBJL08gQVBJQ3MNCihYRU4p
IEFDUEk6IEhQRVQgaWQ6IDB4ODMwMCBiYXNlOiAweGZlZDAwMDAwDQooWEVOKSBUYWJsZSBp
cyBub3QgZm91bmQhDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3Vy
YXRpb24gaW5mb3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNiBDUFVzICgwIGhvdHBs
dWcgQ1BVcykNCihYRU4pIG1hcHBlZCBBUElDIHRvIGZmZmY4MmMzZmZkZmIwMDAgKGZlZTAw
MDAwKQ0KKFhFTikgbWFwcGVkIElPQVBJQyB0byBmZmZmODJjM2ZmZGZhMDAwIChmZWMwMDAw
MCkNCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyYzNmZmRmOTAwMCAoZmVjMjAwMDAp
DQooWEVOKSBJUlEgbGltaXRzOiA1NiBHU0ksIDExMTIgTVNJL01TSS1YDQooWEVOKSBVc2lu
ZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBEZXRl
Y3RlZCAzMjAwLjIzMSBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGluZyBtZW1vcnkgc2hh
cmluZy4NCihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxl
ZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBlMDAwMDAwMCBzZWdt
ZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Ig
c2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZpOiBGb3VuZCBNU0kgY2FwYWJp
bGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkgVGFibGU6DQooWEVOKSBB
TUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4ZjgNCihY
RU4pIEFNRC1WaTogIFJldmlzaW9uIDB4MQ0KKFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0gMHg1
MA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRCAgDQooWEVOKSBBTUQtVmk6ICBPRU1fVGFi
bGVfSWQgUkQ4OTBTDQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgyMDIwMzENCihY
RU4pIEFNRC1WaTogIENyZWF0b3JfSWQgQU1EIA0KKFhFTikgQU1ELVZpOiAgQ3JlYXRvcl9S
ZXZpc2lvbiAwDQooWEVOKSBBTUQtVmk6IElWUlMgQmxvY2s6DQooWEVOKSBBTUQtVmk6ICBU
eXBlIDB4MTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4M2UNCihYRU4pIEFNRC1WaTogIExl
bmd0aCAweGM4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyDQooWEVOKSBBTUQtVmk6IElW
SEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgUmFuZ2U6IDAgLT4gMHgyDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweDEw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGIwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZf
SWQgMHgxOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERl
dmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgMHg5MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAweGEwOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIFJhbmdlOiAweGEwOCAtPiAweGFmZg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IEFsaWFzOiAweGEwMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihYRU4p
IEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyOA0KKFhFTikg
QU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihY
RU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHg4MDANCihY
RU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6
DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4MzAN
CihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50
cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4
NzAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweDUwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDYwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg1OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQt
Vmk6ICBEZXZfSWQgMHg1MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogIERldl9JZCBSYW5nZTogMHg1MDAgLT4gMHg1MDENCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4NjgNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4NDAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6
IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFN
RC1WaTogIERldl9JZCAweDg4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4p
IEFNRC1WaTogIERldl9JZCAweDkwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBB
TUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OTAgLT4gMHg5Mg0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg5OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweDk4IC0+IDB4OWENCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IDB4YTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4YTENCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4YTINCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTog
SVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1E
LVZpOiAgRGV2X0lkIDB4YTMNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikg
QU1ELVZpOiAgRGV2X0lkIDB4YTQNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFN
RC1WaTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihY
RU4pIEFNRC1WaTogIERldl9JZCAweDMwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhF
TikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDMwMCAtPiAweDNmZg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIEFsaWFzOiAweGE0DQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweGE1
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGE4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweGE5DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDEwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHhiMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweGIwIC0+IDB4YjINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDANCihYRU4pIEFNRC1WaTogIERldl9JZCAw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDQ4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQg
MA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMHhkNw0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHg0OA0KKFhFTikgQU1ELVZpOiAgRGV2
X0lkIDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSU9NTVUgMCBF
bmFibGVkLg0KKFhFTikgQU1ELVZpOiBFbmFibGluZyBnbG9iYWwgdmVjdG9yIG1hcA0KKFhF
TikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAtIERvbTAgbW9kZTogUmVs
YXhlZA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBW
RVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBJRDogMA0KKFhFTikgR2V0dGluZyBM
VlQwOiA3MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElO
VCBvbiBDUFUjMA0KKFhFTikgRVNSIHZhbHVlIGJlZm9yZSBlbmFibGluZyB2ZWN0b3I6IDB4
NCAgYWZ0ZXI6IDANCihYRU4pIEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVz
aW5nIG5ldyBBQ0sgbWV0aG9kDQooWEVOKSBpbml0IElPX0FQSUMgSVJRcw0KKFhFTikgIElP
LUFQSUMgKGFwaWNpZC1waW4pIDYtMCwgNi0xNiwgNi0xNywgNi0xOCwgNi0xOSwgNi0yMCwg
Ni0yMSwgNi0yMiwgNi0yMywgNy0wLCA3LTEsIDctMiwgNy0zLCA3LTQsIDctNSwgNy02LCA3
LTcsIDctOCwgNy05LCA3LTEwLCA3LTExLCA3LTEyLCA3LTEzLCA3LTE0LCA3LTE1LCA3LTE2
LCA3LTE3LCA3LTE4LCA3LTE5LCA3LTIwLCA3LTIxLCA3LTIyLCA3LTIzLCA3LTI0LCA3LTI1
LCA3LTI2LCA3LTI3LCA3LTI4LCA3LTI5LCA3LTMwLCA3LTMxIG5vdCBjb25uZWN0ZWQuDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4y
PS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhFTikgbnVtYmVy
IG9mIElPLUFQSUMgIzYgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIG51bWJlciBvZiBJTy1BUElD
ICM3IHJlZ2lzdGVyczogMzIuDQooWEVOKSB0ZXN0aW5nIHRoZSBJTyBBUElDLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4NCihYRU4pIElPIEFQSUMgIzYuLi4uLi4NCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAwOiAwNjAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBoeXNpY2FsIEFQSUMg
aWQ6IDA2DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTogMA0KKFhFTikgLi4u
Li4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAw
MDE3ODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczog
MDAxNw0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6IDENCihYRU4pIC4u
Li4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAuLi4uIHJlZ2lzdGVy
ICMwMjogMDYwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRyYXRpb246IDA2DQoo
WEVOKSAuLi4uIHJlZ2lzdGVyICMwMzogMDcwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDog
Qm9vdCBEVCAgICA6IDANCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOg0KKFhF
TikgIE5SIExvZyBQaHkgTWFzayBUcmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDog
ICANCihYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICAzMA0KKFhFTikgIDAyIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgRjANCihYRU4pICAwMyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAx
ICAgIDM4DQooWEVOKSAgMDQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICBGMQ0KKFhFTikgIDA1IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAg
IDEgICAgNDANCihYRU4pICAwNiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAg
ICAxICAgIDQ4DQooWEVOKSAgMDcgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICA1MA0KKFhFTikgIDA4IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgNTgNCihYRU4pICAwOSAwMDEgMDEgIDEgICAgMSAgICAwICAgMSAgIDAgICAg
MSAgICAxICAgIDYwDQooWEVOKSAgMGEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA2OA0KKFhFTikgIDBiIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgNzANCihYRU4pICAwYyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDc4DQooWEVOKSAgMGQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA4OA0KKFhFTikgIDBlIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgOTANCihYRU4pICAwZiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDk4DQooWEVOKSAgMTAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDExIDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMiAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTMgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE0IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTYgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE3IDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIElPIEFQSUMgIzcuLi4uLi4NCihY
RU4pIC4uLi4gcmVnaXN0ZXIgIzAwOiAwNzAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBo
eXNpY2FsIEFQSUMgaWQ6IDA3DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTog
MA0KKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAxOiAwMDFGODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rp
b24gZW50cmllczogMDAxRg0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6
IDENCihYRU4pIC4uLi4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAu
Li4uIHJlZ2lzdGVyICMwMjogMDAwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRy
YXRpb246IDAwDQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToNCihYRU4pICBO
UiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZlY3Q6ICAgDQoo
WEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0K
KFhFTikgIDAxIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAN
CihYRU4pICAwMiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAw
DQooWEVOKSAgMDMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MA0KKFhFTikgIDA0IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAg
MDANCihYRU4pICAwNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAg
ICAwMA0KKFhFTikgIDA3IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAwOCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMDkgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDBhIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAwYiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMGMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAwZSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMGYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAw
ICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAg
MCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE2IDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTggMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE5IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxYSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWIgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFjIDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxZCAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWUgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFmIDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIFVzaW5nIHZlY3Rvci1iYXNl
ZCBpbmRleGluZw0KKFhFTikgSVJRIHRvIHBpbiBtYXBwaW5nczoNCihYRU4pIElSUTI0MCAt
PiAwOjINCihYRU4pIElSUTQ4IC0+IDA6MQ0KKFhFTikgSVJRNTYgLT4gMDozDQooWEVOKSBJ
UlEyNDEgLT4gMDo0DQooWEVOKSBJUlE2NCAtPiAwOjUNCihYRU4pIElSUTcyIC0+IDA6Ng0K
KFhFTikgSVJRODAgLT4gMDo3DQooWEVOKSBJUlE4OCAtPiAwOjgNCihYRU4pIElSUTk2IC0+
IDA6OQ0KKFhFTikgSVJRMTA0IC0+IDA6MTANCihYRU4pIElSUTExMiAtPiAwOjExDQooWEVO
KSBJUlExMjAgLT4gMDoxMg0KKFhFTikgSVJRMTM2IC0+IDA6MTMNCihYRU4pIElSUTE0NCAt
PiAwOjE0DQooWEVOKSBJUlExNTIgLT4gMDoxNQ0KKFhFTikgLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uIGRvbmUuDQooWEVOKSBVc2luZyBsb2NhbCBBUElDIHRpbWVy
IGludGVycnVwdHMuDQooWEVOKSBjYWxpYnJhdGluZyBBUElDIHRpbWVyIC4uLg0KKFhFTikg
Li4uLi4gQ1BVIGNsb2NrIHNwZWVkIGlzIDMyMDAuMTUzNSBNSHouDQooWEVOKSAuLi4uLiBo
b3N0IGJ1cyBjbG9jayBzcGVlZCBpcyAyMDAuMDA5NSBNSHouDQooWEVOKSAuLi4uLiBidXNf
c2NhbGUgPSAweGNjZDcNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBQbGF0Zm9ybSB0
aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEFs
bG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgNjQgS2lCLg0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDNdIEhWTTogQVNJRHMgZW5hYmxlZC4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQz
XSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjI4OjQzXSAgLSBOZXN0ZWQgUGFnZSBUYWJsZXMgKE5QVCkNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQzXSAgLSBMYXN0IEJyYW5jaCBSZWNvcmQgKExCUikgVmlydHVhbGlzYXRp
b24NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAj
Vk1FWElUDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIC0gUGF1c2UtSW50ZXJjZXB0
IEZpbHRlcg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEhWTTogU1ZNIGVuYWJsZWQN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBh
Z2luZyAoSEFQKSBkZXRlY3RlZA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4
OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMxDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0M10gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBw
YXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQyXSBtYXNrZWQg
RXh0SU5UIG9uIENQVSMzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQVSM0DQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0M10gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEw
MDAwYmYNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQ
VSM1DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gQnJvdWdodCB1cCA2IENQVXMNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86
IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEhQRVQ6
IDMgdGltZXJzICgzIHdpbGwgYmUgdXNlZCBmb3IgYnJvYWRjYXN0KQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mjg6NDNdIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0M10gTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5n
IGZyZXF1ZW5jeQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIG1jaGVja19wb2xsOiBN
YWNoaW5lIGNoZWNrIHBvbGxpbmcgdGltZXIgc3RhcnRlZC4NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjI4OjQzXSBYZW5vcHJvZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0
LCBJQlNDVEwgPSAweGZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gKioq
IExPQURJTkcgRE9NQUlOIDAgKioqDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxm
X3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4ZDJlMDAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFk
ZHI9MHgxZTAwMDAwIG1lbXN6PTB4ZDIwZjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQz
XSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFlZDMwMDAgbWVtc3o9MHgxM2Nj
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6
IHBhZGRyPTB4MWVlNzAwMCBtZW1zej0weGRlYTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MmNkMTAw
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VF
U1RfT1MgPSAibGludXgiDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9w
YXJzZV9ub3RlOiBHVUVTVF9WRVJTSU9OID0gIjIuNiINCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjI4OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JB
U0UgPSAweGZmZmZmZmZmODAwMDAwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBl
bGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhmZmZmZmZmZjgxZWU3MjEwDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFH
RSA9IDB4ZmZmZmZmZmY4MTAwMTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVzfHBh
ZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hl
bl9wYXJzZV9ub3RlOiBQQUVfTU9ERSA9ICJ5ZXMiDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyINCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVs
ZiBub3RlICgweGQpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJz
ZV9ub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6
NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZmODAwMDAwMDAw
MDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBQ
QUREUl9PRkZTRVQgPSAweDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBlbGZfeGVu
X2FkZHJfY2FsY19jaGVjazogYWRkcmVzc2VzOg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6
NDNdICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0M10gICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAweDANCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgICAgdmlydF9vZmZzZXQgICAgICA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICAgICB2aXJ0X2tzdGFy
dCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODJjZDEwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQzXSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4ZmZmZmZmZmY4
MWVlNzIxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICAgICBwMm1fYmFzZSAgICAg
ICAgID0gMHhmZmZmZmZmZmZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10g
IFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzINCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjI4OjQzXSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAw
MDAwMCAtPiAweDJjZDEwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBQSFlTSUNB
TCBNRU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIERv
bTAgYWxsb2MuOiAgIDAwMDAwMDAyNDAwMDAwMDAtPjAwMDAwMDAyNDQwMDAwMDAgKDI0MjUx
NiBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10g
IEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGYzNTQwMDAtPjAwMDAwMDAyNGZmZmZjMDANCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4
MTAwMDAwMC0+ZmZmZmZmZmY4MmNkMTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNd
ICBJbml0LiByYW1kaXNrOiBmZmZmZmZmZjgyY2QxMDAwLT5mZmZmZmZmZjgzOTdjYzAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIFBoeXMtTWFjaCBtYXA6IGZmZmZmZmZmODM5
N2QwMDAtPmZmZmZmZmZmODNiN2QwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAg
U3RhcnQgaW5mbzogICAgZmZmZmZmZmY4M2I3ZDAwMC0+ZmZmZmZmZmY4M2I3ZDRiNA0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgzYjdl
MDAwLT5mZmZmZmZmZjgzYmExMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIEJv
b3Qgc3RhY2s6ICAgIGZmZmZmZmZmODNiYTEwMDAtPmZmZmZmZmZmODNiYTIwMDANCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgVE9UQUw6ICAgICAgICAgZmZmZmZmZmY4MDAwMDAw
MC0+ZmZmZmZmZmY4NDAwMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICBFTlRS
WSBBRERSRVNTOiBmZmZmZmZmZjgxZWU3MjEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHhmZmZmZmZmZjgxMDAwMDAwIC0+IDB4
ZmZmZmZmZmY4MWQyZTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVsZl9sb2Fk
X2JpbmFyeTogcGhkciAxIGF0IDB4ZmZmZmZmZmY4MWUwMDAwMCAtPiAweGZmZmZmZmZmODFl
ZDIwZjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBlbGZfbG9hZF9iaW5hcnk6IHBo
ZHIgMiBhdCAweGZmZmZmZmZmODFlZDMwMDAgLT4gMHhmZmZmZmZmZjgxZWU2Y2MwDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDMgYXQgMHhm
ZmZmZmZmZjgxZWU3MDAwIC0+IDB4ZmZmZmZmZmY4MWY4YTAwMA0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRd
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHJvb3Qg
dGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHgxOCwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4LCByb290IHRhYmxlID0g
MHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4MzAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5n
IG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MCwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDU4
LCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
Mw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4NjgsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4OCwgcm9vdCB0
YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDkwLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAs
IHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTIsIHJvb3QgdGFibGUgPSAw
eDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHg5OCwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCByb290IHRhYmxlID0gMHgyNDk4YTMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTAs
IHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMSwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEyLCByb290IHRh
YmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikg
WzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YTMsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwg
cGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgcm9vdCB0YWJsZSA9IDB4
MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweGE1LCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTgsIHJvb3QgdGFibGUgPSAweDI0OThhMzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMCwg
cm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGIyLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFN
RC1WaTogTm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC4x
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2
aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTog
Tm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjMNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjI4OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC40DQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MDAsIHJvb3QgdGFibGUg
PSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHg1MDEsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2MDAsIHJvb3QgdGFibGUgPSAweDI0
OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg3MDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MDAs
IHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMDAsIHJvb3Qg
dGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gU2NydWJiaW5nIEZyZWUgUkFNOiAuLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLmRvbmUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gSW5pdGlhbCBsb3cg
bWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0Nl0gU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0Nl0gR3Vlc3QgTG9nbGV2ZWw6IEFsbA0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDZdIFhlbiBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLg0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mjg6NDZdICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1h
JyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mjg6NDZdIEZyZWVkIDI1MmtCIGluaXQgbWVtb3J5Lg0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mjg6NDZdIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTkgLT4g
MHg2MCAtPiBJUlEgOSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0Nl0gdHJhcHMuYzoyNDg2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYu
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDow
MC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDow
MDowMC4yDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDowMi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDowMy4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDowNS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDowNi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDowYS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDowYi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDowZC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMi4wDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMi4yDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4yDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4wDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4yDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4z
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
NC40DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxNC41DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxNS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDoxNi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxNi4yDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDoxOC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxOC4xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDoxOC4yDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4zDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC40DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYjowMC4wDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowOTowMC4wDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMS4wDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMS4xDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMS4yDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowODowMC4wDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNzowMC4wDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNjowMC4w
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNTow
MC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDow
NTowMC4xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowNDowMC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2Ug
MDAwMDowMzowNi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gSU9BUElDWzBdOiBT
ZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtOCAtPiAweDU4IC0+IElSUSA4IE1vZGU6MCBBY3Rp
dmU6MCkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ2XSBJT0FQSUNbMF06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNi0xMyAtPiAweDg4IC0+IElSUSAxMyBNb2RlOjAgQWN0aXZlOjAp
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gSU9BUElDWzFdOiBTZXQgUENJIHJvdXRp
bmcgZW50cnkgKDctMjggLT4gMHhiOCAtPiBJUlEgNTIgTW9kZToxIEFjdGl2ZToxKQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mjg6NDZdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVu
dHJ5ICg3LTI5IC0+IDB4YzAgLT4gSVJRIDUzIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQ2XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ny0zMCAtPiAweGM4IC0+IElSUSA1NCBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0Nl0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTYg
LT4gMHhkMCAtPiBJUlEgMTYgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6Mjg6NDZdIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTE4IC0+IDB4
ZDggLT4gSVJRIDE4IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4
OjQ3XSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNi0xNyAtPiAweDIxIC0+
IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10g
SU9BUElDWzFdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDctNSAtPiAweDI5IC0+IElSUSAy
OSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10gSU9BUElD
WzFdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDctNiAtPiAweDMxIC0+IElSUSAzMCBNb2Rl
OjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10gSU9BUElDWzFdOiBT
ZXQgUENJIHJvdXRpbmcgZW50cnkgKDctNyAtPiAweDM5IC0+IElSUSAzMSBNb2RlOjEgQWN0
aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10gSU9BUElDWzFdOiBTZXQgUENJ
IHJvdXRpbmcgZW50cnkgKDctMTYgLT4gMHg0MSAtPiBJUlEgNDAgTW9kZToxIEFjdGl2ZTox
KQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDddIElPQVBJQ1swXTogU2V0IFBDSSByb3V0
aW5nIGVudHJ5ICg2LTIyIC0+IDB4ODkgLT4gSVJRIDIyIE1vZGU6MSBBY3RpdmU6MSkNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ3XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBl
bnRyeSAoNy05IC0+IDB4OTEgLT4gSVJRIDMzIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQ3XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ny04IC0+IDB4OTkgLT4gSVJRIDMyIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQ3XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yMyAt
PiAweGExIC0+IElSUSA0NyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDoyODo0OF0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTkgLT4gMHhh
OSAtPiBJUlEgMTkgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6
NDhdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTIyIC0+IDB4YjkgLT4g
SVJRIDQ2IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ4XSBJ
T0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yNyAtPiAweGM5IC0+IElSUSA1
MSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMDowNl0gdHJhcHMu
YzoyNDg2OmQxIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9t
IDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYuDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozMDoxMV0gdHJhcHMuYzoyNDg2OmQyIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAw
MDAwMDAwMGZmZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMDoxN10gdHJhcHMuYzoyNDg2
OmQzIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAw
MDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYuDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozMDoyM10gdHJhcHMuYzoyNDg2OmQ0IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAw
MGZmZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMDoyOV0gdHJhcHMuYzoyNDg2OmQ1IERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAw
MDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MDozMV0gZ3JhbnRfdGFibGUuYzozMTM6ZDAgSW5jcmVhc2VkIG1hcHRyYWNrIHNpemUgdG8g
MiBmcmFtZXMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMwOjM1XSB0cmFwcy5jOjI0ODY6ZDYg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwMDAw
MDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMwOjQyXSB0cmFwcy5jOjI0ODY6ZDcgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZm
Zi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMwOjQ3XSB0cmFwcy5jOjI0ODY6ZDggRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAw
MDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMwOjUz
XSB0cmFwcy5jOjI0ODY6ZDkgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjMwOjU5XSBncmFudF90YWJsZS5jOjMxMzpkMCBJbmNyZWFz
ZWQgbWFwdHJhY2sgc2l6ZSB0byAzIGZyYW1lcw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzA6
NTldIHRyYXBzLmM6MjQ4NjpkMTAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4N
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMxOjA2XSBBTUQtVmk6IERpc2FibGU6IGRldmljZSBp
ZCA9IDB4YTQsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzE6MDZdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4YTQsIHJvb3QgdGFibGUgPSAweDE4ZTA5ZTAwMCwgZG9tYWluID0gMTEsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzE6MDZdIEFNRC1WaTogUmUtYXNzaWdu
IDAwMDA6MDM6MDYuMCBmcm9tIGRvbTAgdG8gZG9tMTENCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMxOjA2XSB0cmFwcy5jOjI0ODY6ZDExIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZm
ZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMToxNl0gdHJhcHMuYzoyNDg2OmQxMiBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAw
MDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZmLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzE6
MjFdIGdyYW50X3RhYmxlLmM6MzEzOmQwIEluY3JlYXNlZCBtYXB0cmFjayBzaXplIHRvIDQg
ZnJhbWVzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMToyMl0gdHJhcHMuYzoyNDg2OmQxMyBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAw
MDAwMDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZmLg0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzE6MjldIHRyYXBzLmM6MjQ4NjpkMTQgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZm
Zi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMxOjUzXSBncmFudF90YWJsZS5jOjMxMzpkMCBJ
bmNyZWFzZWQgbWFwdHJhY2sgc2l6ZSB0byA1IGZyYW1lcw0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MDhdIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJs
ZSA9IDB4MWMxZjhjDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gaW8uYzoyODI6IGQx
NTogYmluZDogbV9nc2k9MTYgZ19nc2k9MzYgZGV2aWNlPTUgaW50eD0wDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozMzoxMl0gQU1ELVZpOiBEaXNhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzox
Ml0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg0MDAsIHJv
b3QgdGFibGUgPSAweDFjMWY4YzAwMCwgZG9tYWluID0gMTUsIHBhZ2luZyBtb2RlID0gNA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEFNRC1WaTogUmUtYXNzaWduIDAwMDA6MDQ6
MDAuMCBmcm9tIGRvbTAgdG8gZG9tMTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBI
Vk0xNTogSFZNIExvYWRlcg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBE
ZXRlY3RlZCBYZW4gdjQuMy11bnN0YWJsZQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJd
IEhWTTE1OiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2ZW50IGNoYW5uZWwgNQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBTeXN0ZW0gcmVxdWVzdGVkIFJPTUJJ
T1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogQ1BVIHNwZWVkIGlzIDMy
MDAgTUh6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gaXJxLmM6MjcwOiBEb20xNSBQ
Q0kgbGluayAwIGNoYW5nZWQgMCAtPiA1DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1DQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozMzoxMl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAxIGNoYW5nZWQgMCAtPiAx
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBQQ0ktSVNBIGxpbmsgMSBy
b3V0ZWQgdG8gSVJRMTANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBpcnEuYzoyNzA6
IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoxMl0gSFZNMTU6IFBDSS1JU0EgbGluayAyIHJvdXRlZCB0byBJUlExMQ0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMyBjaGFu
Z2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBQQ0ktSVNB
IGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhW
TTE1OiBwY2kgZGV2IDAxOjIgSU5URC0+SVJRNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTANCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIElOVEEtPklSUTUNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwNDowIElOVEEtPklSUTUNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwNTowIElOVEEtPklSUTEw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDI6MCBiYXIg
MTAgc2l6ZSBseDogMDIwMDAwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIGx4OiAwMTAwMDAwMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAgYmFyIDEwIHNpemUgbHg6IDAw
MDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDU6
MCBiYXIgMTAgc2l6ZSBseDogMDAwMDIwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEy
XSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDI6MCBiYXIgMTQgc2l6
ZSBseDogMDAwMDEwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNp
IGRldiAwMzowIGJhciAxMCBzaXplIGx4OiAwMDAwMDEwMA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAgYmFyIDE0IHNpemUgbHg6IDAwMDAwMDQw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDE6MiBiYXIg
MjAgc2l6ZSBseDogMDAwMDAwMjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIGx4OiAwMDAwMDAxMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzM6MTJdIEhWTTE1OiBNdWx0aXByb2Nlc3NvciBpbml0aWFsaXNhdGlvbjoN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gQ1BVMCAuLi4gNDgtYml0
IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICAtIENQVTEgLi4uIDQ4LWJpdCBw
aHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLg0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgLSBDUFUyIC4uLiA0OC1iaXQgcGh5
cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4NCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogVGVzdGluZyBIVk0gZW52aXJvbm1lbnQ6
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICAtIFJFUCBJTlNCIGFjcm9z
cyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBhc3NlZCAyIG9mIDIgdGVzdHMNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogV3JpdGluZyBTTUJJT1MgdGFibGVz
IC4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBMb2FkaW5nIFJPTUJJ
T1MgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IDk2NjAgYnl0ZXMg
b2YgUk9NQklPUyBoaWdoLW1lbW9yeSBleHRlbnNpb25zOg0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiAgIFJlbG9jYXRpbmcgdG8gMHhmYzAwMTAwMC0weGZjMDAzNWJj
IC4uLiBkb25lDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IENyZWF0aW5n
IE1QIHRhYmxlcyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogTG9h
ZGluZyBDaXJydXMgVkdBQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBI
Vk0xNTogTG9hZGluZyBQQ0kgT3B0aW9uIFJPTSAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogIC0gTWFudWZhY3R1cmVyOiBodHRwOi8vaXB4ZS5vcmcNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gUHJvZHVjdCBuYW1lOiBpUFhFDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IE9wdGlvbiBST01zOg0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgYzAwMDAtYzhmZmY6IFZHQSBCSU9TDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICBjOTAwMC1kOWZmZjogRXRoZXJi
b290IFJPTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBMb2FkaW5nIEFD
UEkgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHZtODYgVFNTIGF0
IGZjMDBmNjgwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IEJJT1MgbWFw
Og0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgZjAwMDAtZmZmZmY6IE1h
aW4gQklPUw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBFODIwIHRhYmxl
Og0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzAwXTogMDAwMDAwMDA6
MDAwMDAwMDAgLSAwMDAwMDAwMDowMDA5ZTAwMDogUkFNDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoxMl0gSFZNMTU6ICBbMDFdOiAwMDAwMDAwMDowMDA5ZTAwMCAtIDAwMDAwMDAwOjAw
MGEwMDAwOiBSRVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAg
SE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMA0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzAyXTogMDAwMDAwMDA6MDAwZTAwMDAgLSAw
MDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEy
XSBIVk0xNTogIFswM106IDAwMDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6MmY4MDAwMDA6
IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgSE9MRTogMDAwMDAw
MDA6MmY4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzM6MTJdIEhWTTE1OiAgWzA0XTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTowMDAw
MDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogSW52
b2tpbmcgUk9NQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTog
JFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4LzEyLzA3IDE3OjMyOjI5ICQNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBzdGR2Z2EuYzoxNDc6ZDE1IGVudGVyaW5nIHN0ZHZn
YSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiBWR0FCaW9zICRJZDogdmdhYmlvcy5jLHYgMS42NyAyMDA4LzAxLzI3IDA5OjQ0OjEyIHZy
dXBwZXJ0IEV4cCAkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IEJvY2hz
IEJJT1MgLSBidWlsZDogMDYvMjMvOTkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBI
Vk0xNTogJFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4LzEyLzA3IDE3OjMyOjI5ICQN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogT3B0aW9uczogYXBtYmlvcyBw
Y2liaW9zIGVsdG9yaXRvIFBNTSANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IGF0YTAtMDogUENIUz0x
NjM4My8xNi82MyB0cmFuc2xhdGlvbj1sYmEgTENIUz0xMDI0LzI1NS82Mw0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBhdGEwIG1hc3RlcjogUUVNVSBIQVJERElTSyBB
VEEtNyBIYXJkLURpc2sgKDEwMjQwIE1CeXRlcykNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMz
OjEyXSBIVk0xNTogYXRhMC0xOiBQQ0hTPTE2MzgzLzE2LzYzIHRyYW5zbGF0aW9uPWxiYSBM
Q0hTPTEwMjQvMjU1LzYzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IGF0
YTAgIHNsYXZlOiBRRU1VIEhBUkRESVNLIEFUQS03IEhhcmQtRGlzayAoIDMwMCBHQnl0ZXMp
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IA0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MTJdIEhWTTE1OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFByZXNzIEYxMiBmb3Ig
Ym9vdCBtZW51Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiANCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogQm9vdGluZyBmcm9tIEhhcmQgRGlzay4u
Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBCb290aW5nIGZyb20gMDAw
MDo3YzAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNF0gZ3JhbnRfdGFibGUuYzozMTM6
ZDAgSW5jcmVhc2VkIG1hcHRyYWNrIHNpemUgdG8gNiBmcmFtZXMNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjI1XSBpcnEuYzozNzU6IERvbTE1IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRv
IERpcmVjdCBWZWN0b3IgMHhmMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1m
bj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDpy
ZW1vdmU6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZl
IG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoyNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNSBn
Zm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBt
ZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE1IGdmbj1mMzAy
MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9t
YXA6YWRkOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1m
OThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDphZGQ6
IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzM6MjZdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMCBjaGFuZ2VkIDUgLT4gMA0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MjZdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMSBj
aGFuZ2VkIDEwIC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBpcnEuYzoyNzA6
IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoyNl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjozMF0gQU1ELVZpOiBEaXNhYmxlOiBkZXZpY2UgaWQg
PSAweDQwMCwgZG9tYWluID0gMTUsIHBhZ2luZyBtb2RlID0gNA0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzY6MzBdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4NDAwLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6MzBdIEFNRC1WaTogUmUtYXNzaWdu
IDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTE1IHRvIGRvbTANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjUzXSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUg
PSAweDFjMWU5Zg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIGlvLmM6MjgyOiBkMTY6
IGJpbmQ6IG1fZ3NpPTE2IGdfZ3NpPTM2IGRldmljZT01IGludHg9MA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEFNRC1WaTogRGlzYWJsZTogZGV2aWNlIGlkID0gMHg0MDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NDAwLCByb290
IHRhYmxlID0gMHgxYzFlOWYwMDAsIGRvbWFpbiA9IDE2LCBwYWdpbmcgbW9kZSA9IDQNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBBTUQtVmk6IFJlLWFzc2lnbiAwMDAwOjA0OjAw
LjAgZnJvbSBkb20wIHRvIGRvbTE2DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZN
MTY6IEhWTSBMb2FkZXINCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRGV0
ZWN0ZWQgWGVuIHY0LjMtdW5zdGFibGUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogWGVuYnVzIHJpbmdzIEAweGZlZmZjMDAwLCBldmVudCBjaGFubmVsIDUNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogU3lzdGVtIHJlcXVlc3RlZCBTZWFCSU9T
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENQVSBzcGVlZCBpcyAzMjAw
IE1Ieg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIGlycS5jOjI3MDogRG9tMTYgUENJ
IGxpbmsgMCBjaGFuZ2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhW
TTE2OiBQQ0ktSVNBIGxpbmsgMCByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIGlycS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMSBjaGFuZ2VkIDAgLT4gMTAN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogUENJLUlTQSBsaW5rIDEgcm91
dGVkIHRvIElSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gaXJxLmM6MjcwOiBE
b20xNiBQQ0kgbGluayAyIGNoYW5nZWQgMCAtPiAxMQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiBQQ0ktSVNBIGxpbmsgMiByb3V0ZWQgdG8gSVJRMTENCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBpcnEuYzoyNzA6IERvbTE2IFBDSSBsaW5rIDMgY2hhbmdl
ZCAwIC0+IDUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogUENJLUlTQSBs
aW5rIDMgcm91dGVkIHRvIElSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0x
NjogcGNpIGRldiAwMToyIElOVEQtPklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0
XSBIVk0xNjogcGNpIGRldiAwMTozIElOVEEtPklSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1NF0gSFZNMTY6IHBjaSBkZXYgMDM6MCBJTlRBLT5JUlE1DQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1DQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHBjaSBkZXYgMDU6MCBJTlRBLT5JUlExMA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAyOjAgYmFyIDEw
IHNpemUgbHg6IDAyMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDM6MCBiYXIgMTQgc2l6ZSBseDogMDEwMDAwMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNDowIGJhciAxMCBzaXplIGx4OiAwMDAy
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDA0OjAg
YmFyIDMwIHNpemUgbHg6IDAwMDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSBseDogMDAwMTAwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNTowIGJhciAxMCBzaXplIGx4
OiAwMDAwMjAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIG1lbW9yeV9tYXA6YWRk
OiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMjowIGJhciAxNCBzaXplIGx4OiAwMDAwMTAwMA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAzOjAgYmFyIDEw
IHNpemUgbHg6IDAwMDAwMTAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDQ6MCBiYXIgMTQgc2l6ZSBseDogMDAwMDAwNDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMToyIGJhciAyMCBzaXplIGx4OiAwMDAw
MDAyMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjEg
YmFyIDIwIHNpemUgbHg6IDAwMDAwMDEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IE11bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOg0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzY6NTRdIEhWTTE2OiAgLSBDUFUwIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQg
TVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM2OjU0XSBIVk0xNjogIC0gQ1BVMSAuLi4gNDgtYml0IHBoeXMgLi4uIGZpeGVkIE1U
UlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1NF0gSFZNMTY6ICAtIENQVTIgLi4uIDQ4LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJS
cyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLg0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRhcmll
cyAuLi4gcGFzc2VkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICAtIEdT
IGJhc2UgTVNScyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogUGFzc2VkIDIgb2YgMiB0ZXN0cw0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IExvYWRpbmcgU2VhQklPUyAuLi4NCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogQ3JlYXRpbmcgTVAgdGFibGVzIC4uLg0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBMb2FkaW5nIEFDUEkgLi4uDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHZtODYgVFNTIGF0IGZjMDBhMDgwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEJJT1MgbWFwOg0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgMTAwMDAtMTAwZDM6IFNjcmF0Y2ggc3BhY2UN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIGUwMDAwLWZmZmZmOiBNYWlu
IEJJT1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRTgyMCB0YWJsZToN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIFswMF06IDAwMDAwMDAwOjAw
MDAwMDAwIC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDowMDBl
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgWzAxXTogMDAwMDAw
MDA6MDAwZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIFswMl06IDAwMDAwMDAwOjAwMTAwMDAwIC0gMDAw
MDAwMDA6MmY4MDAwMDA6IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiAgSE9MRTogMDAwMDAwMDA6MmY4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMA0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgWzAzXTogMDAwMDAwMDA6ZmMwMDAwMDAg
LSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2
OjU0XSBIVk0xNjogSW52b2tpbmcgU2VhQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogU2VhQklPUyAodmVyc2lvbiByZWwtMS43LjEtNDQtZ2IxYzM1ZjIt
MjAxMjEyMDdfMjExMTA0LXNlcnZlZXJzdGVydGplKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRm91
bmQgWGVuIGh5cGVydmlzb3Igc2lnbmF0dXJlIGF0IDQwMDAwMDAwDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IHhlbjogY29weSBlODIwLi4uDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IFJhbSBTaXplPTB4MmY4MDAwMDAgKDB4MDAwMDAwMDAw
MDAwMDAwMCBoaWdoKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBSZWxv
Y2F0aW5nIGxvdyBkYXRhIGZyb20gMHgwMDBlMzI3MCB0byAweDAwMGVmNzgwIChzaXplIDIx
NjQpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFJlbG9jYXRpbmcgaW5p
dCBmcm9tIDB4MDAwZTNhZTQgdG8gMHgyZjdlMmEyMCAoc2l6ZSA1NDQ1MikNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogQ1BVIE1oej0zMjAwDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDkgUENJIGRldmljZXMgKG1heCBQQ0kgYnVz
IGlzIDAwKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBBbGxvY2F0ZWQg
WGVuIGh5cGVyY2FsbCBwYWdlIGF0IDJmN2ZmMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gSFZNMTY6IERldGVjdGVkIFhlbiB2NC4zLXVuc3RhYmxlDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDMgY3B1KHMpIG1heCBzdXBwb3J0ZWQgMyBj
cHUocykNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogeGVuOiBjb3B5IEJJ
T1MgdGFibGVzLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENvcHlp
bmcgU01CSU9TIGVudHJ5IHBvaW50IGZyb20gMHgwMDAxMDAxMCB0byAweDAwMGZkYjEwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENvcHlpbmcgTVBUQUJMRSBmcm9t
IDB4ZmMwMDExOTAvZmMwMDExYTAgdG8gMHgwMDBmZGEwMA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBDb3B5aW5nIFBJUiBmcm9tIDB4MDAwMTAwMzAgdG8gMHgwMDBm
ZDk4MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBDb3B5aW5nIEFDUEkg
UlNEUCBmcm9tIDB4MDAwMTAwYjAgdG8gMHgwMDBmZDk1MA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBTY2FuIGZvciBWR0Egb3B0aW9uIHJvbQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBSdW5uaW5nIG9wdGlvbiByb20gYXQgYzAwMDowMDAz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gc3RkdmdhLmM6MTQ3OmQxNiBlbnRlcmlu
ZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0
XSBIVk0xNjogVHVybmluZyBvbiB2Z2EgdGV4dCBtb2RlIGNvbnNvbGUNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogU2VhQklPUyAodmVyc2lvbiByZWwtMS43LjEtNDQt
Z2IxYzM1ZjItMjAxMjEyMDdfMjExMTA0LXNlcnZlZXJzdGVydGplKQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogVUhDSSBpbml0IG9uIGRldiAwMDowMS4yIChpbz1jMTQwKQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBGb3VuZCAxIGxwdCBwb3J0cw0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBGb3VuZCAxIHNlcmlhbCBwb3J0cw0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3Bj
aUBpMGNmOC9pc2FAMS9mZGNAMDNmMC9mbG9wcHlAMA0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiBBVEEgY29udHJvbGxlciAxIGF0IDFmMC8zZjQvYzE2MCAoaXJxIDE0
IGRldiA5KQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBBVEEgY29udHJv
bGxlciAyIGF0IDE3MC8zNzQvYzE2OCAoaXJxIDE1IGRldiA5KQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzY6NTRdIEhWTTE2OiBhdGEwLTA6IFFFTVUgSEFSRERJU0sgQVRBLTcgSGFyZC1E
aXNrICgxMDI0MCBNaUJ5dGVzKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiBTZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAwL2Rp
c2tAMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBhdGEwLTE6IFFFTVUg
SEFSRERJU0sgQVRBLTcgSGFyZC1EaXNrICgzMDAgR2lCeXRlcykNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogU2VhcmNoaW5nIGJvb3RvcmRlciBmb3I6IC9wY2lAaTBj
ZjgvKkAxLDEvZHJpdmVAMC9kaXNrQDENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogRFZEL0NEIFthdGExLTA6IFFFTVUgRFZELVJPTSBBVEFQSS00IERWRC9DRF0NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogU2VhcmNoaW5nIGJvb3RvcmRlciBm
b3I6IC9wY2lAaTBjZjgvKkAxLDEvZHJpdmVAMS9kaXNrQDANCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM2OjU0XSBIVk0xNjogUFMyIGtleWJvYXJkIGluaXRpYWxpemVkDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEFsbCB0aHJlYWRzIGNvbXBsZXRlLg0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTY2FuIGZvciBvcHRpb24gcm9tcw0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBSdW5uaW5nIG9wdGlvbiByb20gYXQg
YzkwMDowMDAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHBtbSBjYWxs
IGFyZzE9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwbW0gY2FsbCBh
cmcxPTANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcG1tIGNhbGwgYXJn
MT0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHBtbSBjYWxsIGFyZzE9
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFyY2hpbmcgYm9vdG9y
ZGVyIGZvcjogL3BjaUBpMGNmOC8qQDQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogUHJlc3MgRjEyIGZvciBib290IG1lbnUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gSFZNMTY6IA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiBkcml2
ZSAweDAwMGZkOGQwOiBQQ0hTPTE2MzgzLzE2LzYzIHRyYW5zbGF0aW9uPWxiYSBMQ0hTPTEw
MjQvMjU1LzYzIHM9MjA5NzE1MjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0x
NjogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IGRyaXZlIDB4MDAwZmQ4
YTA6IFBDSFM9MTYzODMvMTYvNjMgdHJhbnNsYXRpb249bGJhIExDSFM9MTAyNC8yNTUvNjMg
cz02MjkxNDU2MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAwDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IFNwYWNlIGF2YWlsYWJsZSBmb3IgVU1C
OiAwMDBjYTAwMC0wMDBlZTgwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2
OiBSZXR1cm5lZCA2MTQ0MCBieXRlcyBvZiBab25lSGlnaA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTddIEhWTTE2OiBlODIwIG1hcCBoYXMgNiBpdGVtczoNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU3XSBIVk0xNjogICAwOiAwMDAwMDAwMDAwMDAwMDAwIC0gMDAwMDAwMDAw
MDA5ZmMwMCA9IDEgUkFNDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6ICAg
MTogMDAwMDAwMDAwMDA5ZmMwMCAtIDAwMDAwMDAwMDAwYTAwMDAgPSAyIFJFU0VSVkVEDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6ICAgMjogMDAwMDAwMDAwMDBmMDAw
MCAtIDAwMDAwMDAwMDAxMDAwMDAgPSAyIFJFU0VSVkVEDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1N10gSFZNMTY6ICAgMzogMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAwMDAwMmY3ZmYw
MDAgPSAxIFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAgIDQ6IDAw
MDAwMDAwMmY3ZmYwMDAgLSAwMDAwMDAwMDJmODAwMDAwID0gMiBSRVNFUlZFRA0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAgIDU6IDAwMDAwMDAwZmMwMDAwMDAgLSAw
MDAwMDAwMTAwMDAwMDAwID0gMiBSRVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6
NTddIEhWTTE2OiBlbnRlciBoYW5kbGVfMTk6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
N10gSFZNMTY6ICAgTlVMTA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiBC
b290aW5nIGZyb20gSGFyZCBEaXNrLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10g
SFZNMTY6IEJvb3RpbmcgZnJvbSAwMDAwOjdjMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3
OjA0XSBpcnEuYzozNzU6IERvbTE2IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBW
ZWN0b3IgMHhmMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6cmVt
b3ZlOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM3OjA2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBu
cj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRv
bTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6
MDZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMTYgZ2Zu
PWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVt
b3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNiBnZm49ZjMwNTAg
bWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFw
OmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozNzowNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4
ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6YWRkOiBk
b20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3
OjA2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0x
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdm
bj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIGly
cS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMCBjaGFuZ2VkIDUgLT4gMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mzc6MDZdIGlycS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMSBjaGFuZ2VkIDEw
IC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBpcnEuYzoyNzA6IERvbTE2IFBD
SSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0g
aXJxLmM6MjcwOiBEb20xNiBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNzowOF0gaXJxLmM6MTg3ODogZG9tMTY6IHBpcnEgNCBvciBpcnEgNzIg
YWxyZWFkeSBtYXBwZWQgb2xkX3BpcnE6MCBvbGRfaXJxOiA3MQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mzc6MDhdIGlycS5jOjE4Nzg6IGRvbTE2OiBwaXJxIDQgb3IgaXJxIDczIGFscmVh
ZHkgbWFwcGVkIG9sZF9waXJxOjAgb2xkX2lycTogNzENCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM3OjA4XSBpcnEuYzoxODc4OiBkb20xNjogcGlycSA0IG9yIGlycSA3NCBhbHJlYWR5IG1h
cHBlZCBvbGRfcGlycTowIG9sZF9pcnE6IDcxDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzow
OF0gdm1zaS5jOjEwODpkMzI3NjcgVW5zdXBwb3J0ZWQgZGVsaXZlcnkgbW9kZSAzDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNzowOF0gdm1zaS5jOjExMDpkMzI3NjcgVW5zdXBwb3J0ZWQg
ZGVsaXZlcnkgbW9kZTozIHZlY3RvcjowIHRyaWdtb2RlOjAgZGVzdDo1NCBkZXN0X21vZGU6
MA0K
------------09400606801CFB705
Content-Type: text/plain;
 name="xl-dmesg-traditional.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg-traditional.txt"

KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MDhdIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxlIHdp
dGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWMxZjhjDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoxMl0gaW8uYzoyODI6IGQxNTogYmluZDogbV9nc2k9MTYgZ19nc2k9MzYgZGV2aWNlPTUg
aW50eD0wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gQU1ELVZpOiBEaXNhYmxlOiBk
ZXZpY2UgaWQgPSAweDQwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozMzoxMl0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2
aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDFjMWY4YzAwMCwgZG9tYWluID0gMTUs
IHBhZ2luZyBtb2RlID0gNA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEFNRC1WaTog
UmUtYXNzaWduIDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTAgdG8gZG9tMTUNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogSFZNIExvYWRlcg0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiBEZXRlY3RlZCBYZW4gdjQuMy11bnN0YWJsZQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2
ZW50IGNoYW5uZWwgNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBTeXN0
ZW0gcmVxdWVzdGVkIFJPTUJJT1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogQ1BVIHNwZWVkIGlzIDMyMDAgTUh6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
aXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAwIGNoYW5nZWQgMCAtPiA1DQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGlu
ayAxIGNoYW5nZWQgMCAtPiAxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBpcnEuYzoyNzA6IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBDSS1JU0EgbGluayAyIHJvdXRl
ZCB0byBJUlExMQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIGlycS5jOjI3MDogRG9t
MTUgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDAxOjIgSU5URC0+SVJRNQ0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTAN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIElOVEEt
PklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwNDow
IElOVEEtPklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRl
diAwNTowIElOVEEtPklSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSBseDogMDIwMDAwMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIGx4OiAwMTAw
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAg
YmFyIDEwIHNpemUgbHg6IDAwMDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IHBjaSBkZXYgMDU6MCBiYXIgMTAgc2l6ZSBseDogMDAwMDIwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjMzOjEyXSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1m
bj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBk
ZXYgMDI6MCBiYXIgMTQgc2l6ZSBseDogMDAwMDEwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIGJhciAxMCBzaXplIGx4OiAwMDAwMDEwMA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAgYmFyIDE0
IHNpemUgbHg6IDAwMDAwMDQwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
IHBjaSBkZXYgMDE6MiBiYXIgMjAgc2l6ZSBseDogMDAwMDAwMjANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIGx4OiAwMDAw
MDAxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBNdWx0aXByb2Nlc3Nv
ciBpbml0aWFsaXNhdGlvbjoNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTog
IC0gQ1BVMCAuLi4gNDgtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMg
WzIvOF0gLi4uIGRvbmUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICAt
IENQVTEgLi4uIDQ4LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsy
LzhdIC4uLiBkb25lLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgLSBD
UFUyIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84
XSAuLi4gZG9uZS4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogVGVzdGlu
ZyBIVk0gZW52aXJvbm1lbnQ6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
ICAtIFJFUCBJTlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZA0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBH
UyAuLi4gcGFzc2VkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBhc3Nl
ZCAyIG9mIDIgdGVzdHMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogV3Jp
dGluZyBTTUJJT1MgdGFibGVzIC4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhW
TTE1OiBMb2FkaW5nIFJPTUJJT1MgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IDk2NjAgYnl0ZXMgb2YgUk9NQklPUyBoaWdoLW1lbW9yeSBleHRlbnNpb25zOg0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgIFJlbG9jYXRpbmcgdG8gMHhm
YzAwMTAwMC0weGZjMDAzNWJjIC4uLiBkb25lDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzox
Ml0gSFZNMTU6IENyZWF0aW5nIE1QIHRhYmxlcyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogTG9hZGluZyBDaXJydXMgVkdBQklPUyAuLi4NCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogTG9hZGluZyBQQ0kgT3B0aW9uIFJPTSAuLi4NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gTWFudWZhY3R1cmVyOiBodHRw
Oi8vaXB4ZS5vcmcNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gUHJv
ZHVjdCBuYW1lOiBpUFhFDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IE9w
dGlvbiBST01zOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgYzAwMDAt
YzhmZmY6IFZHQSBCSU9TDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICBj
OTAwMC1kOWZmZjogRXRoZXJib290IFJPTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJd
IEhWTTE1OiBMb2FkaW5nIEFDUEkgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IHZtODYgVFNTIGF0IGZjMDBmNjgwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzox
Ml0gSFZNMTU6IEJJT1MgbWFwOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiAgZjAwMDAtZmZmZmY6IE1haW4gQklPUw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJd
IEhWTTE1OiBFODIwIHRhYmxlOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiAgWzAwXTogMDAwMDAwMDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDA5ZTAwMDogUkFNDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICBbMDFdOiAwMDAwMDAwMDowMDA5
ZTAwMCAtIDAwMDAwMDAwOjAwMGEwMDAwOiBSRVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDow
MDBlMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzAyXTogMDAw
MDAwMDA6MDAwZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIFswM106IDAwMDAwMDAwOjAwMTAwMDAwIC0g
MDAwMDAwMDA6MmY4MDAwMDA6IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhW
TTE1OiAgSE9MRTogMDAwMDAwMDA6MmY4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMA0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzA0XTogMDAwMDAwMDA6ZmMwMDAw
MDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogSW52b2tpbmcgUk9NQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjMzOjEyXSBIVk0xNTogJFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4LzEyLzA3
IDE3OjMyOjI5ICQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBzdGR2Z2EuYzoxNDc6
ZDE1IGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MTJdIEhWTTE1OiBWR0FCaW9zICRJZDogdmdhYmlvcy5jLHYgMS42NyAyMDA4
LzAxLzI3IDA5OjQ0OjEyIHZydXBwZXJ0IEV4cCAkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoxMl0gSFZNMTU6IEJvY2hzIEJJT1MgLSBidWlsZDogMDYvMjMvOTkNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogJFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4
LzEyLzA3IDE3OjMyOjI5ICQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTog
T3B0aW9uczogYXBtYmlvcyBwY2liaW9zIGVsdG9yaXRvIFBNTSANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZN
MTU6IGF0YTAtMDogUENIUz0xNjM4My8xNi82MyB0cmFuc2xhdGlvbj1sYmEgTENIUz0xMDI0
LzI1NS82Mw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBhdGEwIG1hc3Rl
cjogUUVNVSBIQVJERElTSyBBVEEtNyBIYXJkLURpc2sgKDEwMjQwIE1CeXRlcykNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogYXRhMC0xOiBQQ0hTPTE2MzgzLzE2LzYz
IHRyYW5zbGF0aW9uPWxiYSBMQ0hTPTEwMjQvMjU1LzYzDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoxMl0gSFZNMTU6IGF0YTAgIHNsYXZlOiBRRU1VIEhBUkRESVNLIEFUQS03IEhhcmQt
RGlzayAoIDMwMCBHQnl0ZXMpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
IA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZN
MTU6IFByZXNzIEYxMiBmb3IgYm9vdCBtZW51Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogQm9vdGlu
ZyBmcm9tIEhhcmQgRGlzay4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiBCb290aW5nIGZyb20gMDAwMDo3YzAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNF0g
Z3JhbnRfdGFibGUuYzozMTM6ZDAgSW5jcmVhc2VkIG1hcHRyYWNrIHNpemUgdG8gNiBmcmFt
ZXMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI1XSBpcnEuYzozNzU6IERvbTE1IGNhbGxi
YWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0b3IgMHhmMw0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5
OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOmFkZDog
ZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoyNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9
MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNSBn
Zm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE1IGdmbj1mMzAy
MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1m
OThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDpyZW1v
dmU6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MjZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5y
PTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
MTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoy
Nl0gbWVtb3J5X21hcDphZGQ6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MjZdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMCBj
aGFuZ2VkIDUgLT4gMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIGlycS5jOjI3MDog
RG9tMTUgUENJIGxpbmsgMSBjaGFuZ2VkIDEwIC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjI2XSBpcnEuYzoyNzA6IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAz
IGNoYW5nZWQgNSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjozMF0gQU1ELVZpOiBE
aXNhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwgZG9tYWluID0gMTUsIHBhZ2luZyBtb2RlID0g
NA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6MzBdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4NDAwLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6MzBd
IEFNRC1WaTogUmUtYXNzaWduIDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTE1IHRvIGRvbTANCg==
------------09400606801CFB705
Content-Type: text/plain;
 name="xl-dmesg-upstream.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg-upstream.txt"

KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTNdIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxlIHdp
dGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWMxZTlmDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gaW8uYzoyODI6IGQxNjogYmluZDogbV9nc2k9MTYgZ19nc2k9MzYgZGV2aWNlPTUg
aW50eD0wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gQU1ELVZpOiBEaXNhYmxlOiBk
ZXZpY2UgaWQgPSAweDQwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozNjo1NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2
aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDFjMWU5ZjAwMCwgZG9tYWluID0gMTYs
IHBhZ2luZyBtb2RlID0gNA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEFNRC1WaTog
UmUtYXNzaWduIDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTAgdG8gZG9tMTYNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogSFZNIExvYWRlcg0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBEZXRlY3RlZCBYZW4gdjQuMy11bnN0YWJsZQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2
ZW50IGNoYW5uZWwgNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTeXN0
ZW0gcmVxdWVzdGVkIFNlYUJJT1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0x
NjogQ1BVIHNwZWVkIGlzIDMyMDAgTUh6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
aXJxLmM6MjcwOiBEb20xNiBQQ0kgbGluayAwIGNoYW5nZWQgMCAtPiA1DQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gaXJxLmM6MjcwOiBEb20xNiBQQ0kgbGlu
ayAxIGNoYW5nZWQgMCAtPiAxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBpcnEuYzoyNzA6IERvbTE2IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFBDSS1JU0EgbGluayAyIHJvdXRl
ZCB0byBJUlExMQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIGlycS5jOjI3MDogRG9t
MTYgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6
NTRdIEhWTTE2OiBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjIgSU5URC0+SVJRNQ0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTAN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMzowIElOVEEt
PklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNDow
IElOVEEtPklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRl
diAwNTowIElOVEEtPklSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSBseDogMDIwMDAwMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIGx4OiAwMTAw
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDA0OjAg
YmFyIDEwIHNpemUgbHg6IDAwMDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IHBjaSBkZXYgMDQ6MCBiYXIgMzAgc2l6ZSBseDogMDAwMjAwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMjowIGJhciAzMCBzaXplIGx4
OiAwMDAxMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2
IDA1OjAgYmFyIDEwIHNpemUgbHg6IDAwMDAyMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAyOjAgYmFyIDE0
IHNpemUgbHg6IDAwMDAxMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSBseDogMDAwMDAxMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNDowIGJhciAxNCBzaXplIGx4OiAwMDAw
MDA0MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjIg
YmFyIDIwIHNpemUgbHg6IDAwMDAwMDIwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6ZSBseDogMDAwMDAwMTANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogTXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlzYXRp
b246DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICAtIENQVTAgLi4uIDQ4
LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25l
Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgLSBDUFUxIC4uLiA0OC1i
aXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4N
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIC0gQ1BVMiAuLi4gNDgtYml0
IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFRlc3RpbmcgSFZNIGVudmlyb25t
ZW50Og0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgLSBSRVAgSU5TQiBh
Y3Jvc3MgcGFnZSBib3VuZGFyaWVzIC4uLiBwYXNzZWQNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogIC0gR1MgYmFzZSBNU1JzIGFuZCBTV0FQR1MgLi4uIHBhc3NlZA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBQYXNzZWQgMiBvZiAyIHRlc3Rz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFdyaXRpbmcgU01CSU9TIHRh
YmxlcyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogTG9hZGluZyBT
ZWFCSU9TIC4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBDcmVhdGlu
ZyBNUCB0YWJsZXMgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IExv
YWRpbmcgQUNQSSAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogdm04
NiBUU1MgYXQgZmMwMGEwODANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjog
QklPUyBtYXA6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICAxMDAwMC0x
MDBkMzogU2NyYXRjaCBzcGFjZQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiAgZTAwMDAtZmZmZmY6IE1haW4gQklPUw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEhWTTE2OiBFODIwIHRhYmxlOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiAgWzAwXTogMDAwMDAwMDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDBhMDAwMDogUkFNDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICBIT0xFOiAwMDAwMDAwMDowMDBh
MDAwMCAtIDAwMDAwMDAwOjAwMGUwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6ICBbMDFdOiAwMDAwMDAwMDowMDBlMDAwMCAtIDAwMDAwMDAwOjAwMTAwMDAwOiBS
RVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgWzAyXTogMDAw
MDAwMDA6MDAxMDAwMDAgLSAwMDAwMDAwMDoyZjgwMDAwMDogUkFNDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6ICBIT0xFOiAwMDAwMDAwMDoyZjgwMDAwMCAtIDAwMDAw
MDAwOmZjMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICBbMDNd
OiAwMDAwMDAwMDpmYzAwMDAwMCAtIDAwMDAwMDAxOjAwMDAwMDAwOiBSRVNFUlZFRA0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBJbnZva2luZyBTZWFCSU9TIC4uLg0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFCSU9TICh2ZXJzaW9uIHJl
bC0xLjcuMS00NC1nYjFjMzVmMi0yMDEyMTIwN18yMTExMDQtc2VydmVlcnN0ZXJ0amUpDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBGb3VuZCBYZW4gaHlwZXJ2aXNvciBzaWduYXR1cmUgYXQgNDAw
MDAwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogeGVuOiBjb3B5IGU4
MjAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogUmFtIFNpemU9MHgy
ZjgwMDAwMCAoMHgwMDAwMDAwMDAwMDAwMDAwIGhpZ2gpDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1NF0gSFZNMTY6IFJlbG9jYXRpbmcgbG93IGRhdGEgZnJvbSAweDAwMGUzMjcwIHRv
IDB4MDAwZWY3ODAgKHNpemUgMjE2NCkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogUmVsb2NhdGluZyBpbml0IGZyb20gMHgwMDBlM2FlNCB0byAweDJmN2UyYTIwIChz
aXplIDU0NDUyKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBDUFUgTWh6
PTMyMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRm91bmQgOSBQQ0kg
ZGV2aWNlcyAobWF4IFBDSSBidXMgaXMgMDApDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
NF0gSFZNMTY6IEFsbG9jYXRlZCBYZW4gaHlwZXJjYWxsIHBhZ2UgYXQgMmY3ZmYwMDANCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRGV0ZWN0ZWQgWGVuIHY0LjMtdW5z
dGFibGUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRm91bmQgMyBjcHUo
cykgbWF4IHN1cHBvcnRlZCAzIGNwdShzKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEhWTTE2OiB4ZW46IGNvcHkgQklPUyB0YWJsZXMuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogQ29weWluZyBTTUJJT1MgZW50cnkgcG9pbnQgZnJvbSAweDAwMDEw
MDEwIHRvIDB4MDAwZmRiMTANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjog
Q29weWluZyBNUFRBQkxFIGZyb20gMHhmYzAwMTE5MC9mYzAwMTFhMCB0byAweDAwMGZkYTAw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENvcHlpbmcgUElSIGZyb20g
MHgwMDAxMDAzMCB0byAweDAwMGZkOTgwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IENvcHlpbmcgQUNQSSBSU0RQIGZyb20gMHgwMDAxMDBiMCB0byAweDAwMGZkOTUw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFNjYW4gZm9yIFZHQSBvcHRp
b24gcm9tDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFJ1bm5pbmcgb3B0
aW9uIHJvbSBhdCBjMDAwOjAwMDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBzdGR2
Z2EuYzoxNDc6ZDE2IGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBUdXJuaW5nIG9uIHZnYSB0ZXh0IG1vZGUg
Y29uc29sZQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFCSU9TICh2
ZXJzaW9uIHJlbC0xLjcuMS00NC1nYjFjMzVmMi0yMDEyMTIwN18yMTExMDQtc2VydmVlcnN0
ZXJ0amUpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IA0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBVSENJIGluaXQgb24gZGV2IDAwOjAxLjIgKGlv
PWMxNDApDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDEgbHB0
IHBvcnRzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDEgc2Vy
aWFsIHBvcnRzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFNlYXJjaGlu
ZyBib290b3JkZXIgZm9yOiAvcGNpQGkwY2Y4L2lzYUAxL2ZkY0AwM2YwL2Zsb3BweUAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEFUQSBjb250cm9sbGVyIDEgYXQg
MWYwLzNmNC9jMTYwIChpcnEgMTQgZGV2IDkpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
NF0gSFZNMTY6IEFUQSBjb250cm9sbGVyIDIgYXQgMTcwLzM3NC9jMTY4IChpcnEgMTUgZGV2
IDkpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IGF0YTAtMDogUUVNVSBI
QVJERElTSyBBVEEtNyBIYXJkLURpc2sgKDEwMjQwIE1pQnl0ZXMpDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IFNlYXJjaGluZyBib290b3JkZXIgZm9yOiAvcGNpQGkw
Y2Y4LypAMSwxL2RyaXZlQDAvZGlza0AwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IGF0YTAtMTogUUVNVSBIQVJERElTSyBBVEEtNyBIYXJkLURpc2sgKDMwMCBHaUJ5
dGVzKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFyY2hpbmcgYm9v
dG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAwL2Rpc2tAMQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBEVkQvQ0QgW2F0YTEtMDogUUVNVSBEVkQtUk9N
IEFUQVBJLTQgRFZEL0NEXQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBT
ZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAxL2Rpc2tA
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBQUzIga2V5Ym9hcmQgaW5p
dGlhbGl6ZWQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogQWxsIHRocmVh
ZHMgY29tcGxldGUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFNjYW4g
Zm9yIG9wdGlvbiByb21zDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFJ1
bm5pbmcgb3B0aW9uIHJvbSBhdCBjOTAwOjAwMDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2
OjU0XSBIVk0xNjogcG1tIGNhbGwgYXJnMT0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
NF0gSFZNMTY6IHBtbSBjYWxsIGFyZzE9MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEhWTTE2OiBwbW0gY2FsbCBhcmcxPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogcG1tIGNhbGwgYXJnMT0wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZN
MTY6IFNlYXJjaGluZyBib290b3JkZXIgZm9yOiAvcGNpQGkwY2Y4LypANA0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBQcmVzcyBGMTIgZm9yIGJvb3QgbWVudS4NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1N10gSFZNMTY6IGRyaXZlIDB4MDAwZmQ4ZDA6IFBDSFM9MTYzODMvMTYvNjMgdHJh
bnNsYXRpb249bGJhIExDSFM9MTAyNC8yNTUvNjMgcz0yMDk3MTUyMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTddIEhWTTE2OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBI
Vk0xNjogZHJpdmUgMHgwMDBmZDhhMDogUENIUz0xNjM4My8xNi82MyB0cmFuc2xhdGlvbj1s
YmEgTENIUz0xMDI0LzI1NS82MyBzPTYyOTE0NTYwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1N10gSFZNMTY6IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogU3Bh
Y2UgYXZhaWxhYmxlIGZvciBVTUI6IDAwMGNhMDAwLTAwMGVlODAwDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1N10gSFZNMTY6IFJldHVybmVkIDYxNDQwIGJ5dGVzIG9mIFpvbmVIaWdo
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IGU4MjAgbWFwIGhhcyA2IGl0
ZW1zOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAgIDA6IDAwMDAwMDAw
MDAwMDAwMDAgLSAwMDAwMDAwMDAwMDlmYzAwID0gMSBSQU0NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM2OjU3XSBIVk0xNjogICAxOiAwMDAwMDAwMDAwMDlmYzAwIC0gMDAwMDAwMDAwMDBh
MDAwMCA9IDIgUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjog
ICAyOiAwMDAwMDAwMDAwMGYwMDAwIC0gMDAwMDAwMDAwMDEwMDAwMCA9IDIgUkVTRVJWRUQN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogICAzOiAwMDAwMDAwMDAwMTAw
MDAwIC0gMDAwMDAwMDAyZjdmZjAwMCA9IDEgUkFNDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1N10gSFZNMTY6ICAgNDogMDAwMDAwMDAyZjdmZjAwMCAtIDAwMDAwMDAwMmY4MDAwMDAg
PSAyIFJFU0VSVkVEDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6ICAgNTog
MDAwMDAwMDBmYzAwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgPSAyIFJFU0VSVkVEDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IGVudGVyIGhhbmRsZV8xOToNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogICBOVUxMDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozNjo1N10gSFZNMTY6IEJvb3RpbmcgZnJvbSBIYXJkIERpc2suLi4NCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogQm9vdGluZyBmcm9tIDAwMDA6N2MwMA0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mzc6MDRdIGlycS5jOjM3NTogRG9tMTYgY2FsbGJhY2sgdmlh
IGNoYW5nZWQgdG8gRGlyZWN0IFZlY3RvciAweGYzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
NzowNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9
MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNiBn
Zm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1
MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1m
OThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDpyZW1v
dmU6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6Mzc6MDZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5y
PTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
MTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzow
Nl0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNiBnZm49
ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1v
cnlfbWFwOmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNzowNl0gaXJxLmM6MjcwOiBEb20xNiBQQ0kgbGluayAwIGNoYW5nZWQg
NSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gaXJxLmM6MjcwOiBEb20xNiBQ
Q0kgbGluayAxIGNoYW5nZWQgMTAgLT4gMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZd
IGlycS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMiBjaGFuZ2VkIDExIC0+IDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM3OjA2XSBpcnEuYzoyNzA6IERvbTE2IFBDSSBsaW5rIDMgY2hhbmdl
ZCA1IC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA4XSBpcnEuYzoxODc4OiBkb20x
NjogcGlycSA0IG9yIGlycSA3MiBhbHJlYWR5IG1hcHBlZCBvbGRfcGlycTowIG9sZF9pcnE6
IDcxDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowOF0gaXJxLmM6MTg3ODogZG9tMTY6IHBp
cnEgNCBvciBpcnEgNzMgYWxyZWFkeSBtYXBwZWQgb2xkX3BpcnE6MCBvbGRfaXJxOiA3MQ0K
KFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDhdIGlycS5jOjE4Nzg6IGRvbTE2OiBwaXJxIDQg
b3IgaXJxIDc0IGFscmVhZHkgbWFwcGVkIG9sZF9waXJxOjAgb2xkX2lycTogNzENCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjM3OjA4XSB2bXNpLmM6MTA4OmQzMjc2NyBVbnN1cHBvcnRlZCBk
ZWxpdmVyeSBtb2RlIDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA4XSB2bXNpLmM6MTEw
OmQzMjc2NyBVbnN1cHBvcnRlZCBkZWxpdmVyeSBtb2RlOjMgdmVjdG9yOjAgdHJpZ21vZGU6
MCBkZXN0OjU0IGRlc3RfbW9kZTowDQo=
------------09400606801CFB705
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------09400606801CFB705--



From xen-devel-bounces@lists.xen.org Fri Dec 07 20:51:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 20:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th4tD-0006IH-D5; Fri, 07 Dec 2012 20:51:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Th4tA-0006Hy-Lf
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 20:51:17 +0000
Received: from [193.109.254.147:6235] by server-7.bemta-14.messagelabs.com id
	6E/02-02272-3C652C05; Fri, 07 Dec 2012 20:51:15 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-27.messagelabs.com!1354913470!9362104!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31111 invoked from network); 7 Dec 2012 20:51:11 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-12.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Dec 2012 20:51:11 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:54166 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Th4wp-0003Sn-MA; Fri, 07 Dec 2012 21:55:04 +0100
Date: Fri, 7 Dec 2012 21:51:04 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <183697744.20121207215104@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------09400606801CFB705"
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
	returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------09400606801CFB705
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Friday, December 7, 2012, 6:24:10 PM, you wrote:

> On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> Hi Stefano / Anthony,
>> 
>> With the debug output turned on, I see some differences between qemu-traditional and qemu-upstream:
>> 
>> With the PCI passed-through device that fails with MSI-X, qemu-xen seems to get the same pirq back for every entry
>> 
>> in qemu-traditional:
>> 
>> pt_msix_update_one: pt_msix_update_one requested pirq = 87
>> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
>> pt_msix_update_one: pt_msix_update_one requested pirq = 86
>> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
>> pt_msix_update_one: pt_msix_update_one requested pirq = 85
>> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
>> pt_msix_update_one: pt_msix_update_one requested pirq = 84
>> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
>> 
>> 
>> in qemu-xen (upstream):
>> 
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)

> That is a good pointer, but unfortunately the code that parses those
> entries looks exactly alike in both QEMU trees:

> qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one

> if (!gvec) {
>         /* if gvec is 0, the guest is asking for a particular pirq that
>          * is passed as dest_id */
>         pirq = ((gaddr >> 32) & 0xffffff00) |
>                (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);



> qemu-xen/hw/xen_pt_msi.c:msi_msix_setup

> if (gvec == 0) {
>         /* if gvec is 0, the guest is asking for a particular pirq that
>          * is passed as dest_id */
>         *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);

> given how msi_ext_dest_id and msi_dest_id are defined, they should
> behave the same way.

> Maybe adding a printk in msi_msix_setup to show addr would help
> nonetheless...

Hi Stefano,

I have added some printk's. Attached I have:

- qemu-upstream.log           boot of the guest with qemu upstream, device not working
- qemu-traditional.log        boot of the same guest with qemu traditional, device is working

- xl-dmesg-upstream.txt       part of xl-dmesg related to boot of guest with qemu-upstream
- xl-dmesg-traditional.txt    part of xl-dmesg related to boot of the same guest with qemu-traditional
- xl-dmesg.txt                complete xl-dmesg

- interrupts-dom0.txt         /proc/interrupts of dom0
- interrupts-upstream.txt     /proc/interrupts of guest with qemu-upstream

--
Sander
------------09400606801CFB705
Content-Type: text/plain;
 name="interrupts-dom0.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="interrupts-dom0.txt"

ICAgICAgICAgICAgQ1BVMCAgICAgICBDUFUxICAgICAgIENQVTIgICAgICAgQ1BVMyAgICAg
ICBDUFU0ICAgICAgIENQVTUgICAgICAgDQogICAxOiAgICAgICAgICAyICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBpcnEt
aW9hcGljLWVkZ2UgIGk4MDQyDQogICA4OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBpcnEtaW9hcGlj
LWVkZ2UgIHJ0YzANCiAgIDk6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1pb2FwaWMtbGV2ZWwg
IGFjcGkNCiAgMTI6ICAgICAgICAgIDQgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1pb2FwaWMtZWRnZSAgaTgwNDIN
CiAgMTY6ICAgICAgICA2NjYgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1pb2FwaWMtbGV2ZWwgIHNuZF9oZGFfaW50
ZWwNCiAgMTc6ICAgICAgICAgIDIgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1pb2FwaWMtbGV2ZWwgIGVoY2lfaGNk
OnVzYjEsIGVoY2lfaGNkOnVzYjIsIGVoY2lfaGNkOnVzYjMNCiAgMTg6ICAgICAgICAyMDcg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICB4ZW4tcGlycS1pb2FwaWMtbGV2ZWwgIG9oY2lfaGNkOnVzYjQsIG9oY2lfaGNkOnVzYjUs
IG9oY2lfaGNkOnVzYjYsIG9oY2lfaGNkOnVzYjcNCiAgMjI6ICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4t
cGlycS1pb2FwaWMtbGV2ZWwgIHhlbi1wY2liYWNrWzAwMDA6MDM6MDYuMF0NCiAgNzI6ICAg
ICAxMTc5NzUgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICB4ZW4tcGVyY3B1LXZpcnEgICAgICB0aW1lcjANCiAgNzM6ICAgICAgNDkw
NTIgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAg
ICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICByZXNjaGVkMA0KICA3NDogICAgICAgICA5NyAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
IHhlbi1wZXJjcHUtaXBpICAgICAgIGNhbGxmdW5jMA0KICA3NTogICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhl
bi1wZXJjcHUtdmlycSAgICAgIGRlYnVnMA0KICA3NjogICAgICAgMTQ0MSAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJj
cHUtaXBpICAgICAgIGNhbGxmdW5jc2luZ2xlMA0KICA3NzogICAgICAgICAgMCAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1w
ZXJjcHUtaXBpICAgICAgIGlycXdvcmswDQogIDc4OiAgICAgICAgICAwICAgICAxMTIxNzkg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNw
dS12aXJxICAgICAgdGltZXIxDQogIDc5OiAgICAgICAgICAwICAgICAgNDM3MTggICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS1pcGkg
ICAgICAgcmVzY2hlZDENCiAgODA6ICAgICAgICAgIDAgICAgICAgIDE3NiAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAg
ICBjYWxsZnVuYzENCiAgODE6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LXZpcnEgICAgICBk
ZWJ1ZzENCiAgODI6ICAgICAgICAgIDAgICAgICAgMTU1OSAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBjYWxsZnVu
Y3NpbmdsZTENCiAgODM6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBpcnF3
b3JrMQ0KICA4NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgNDYzMzMgICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtdmlycSAgICAgIHRpbWVyMg0K
ICA4NTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgNDcwNDggICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBpICAgICAgIHJlc2NoZWQyDQogIDg2
OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgIDE4NSAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS1pcGkgICAgICAgY2FsbGZ1bmMyDQogIDg3OiAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgeGVuLXBlcmNwdS12aXJxICAgICAgZGVidWcyDQogIDg4OiAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgMTUwOSAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgeGVuLXBlcmNwdS1pcGkgICAgICAgY2FsbGZ1bmNzaW5nbGUyDQogIDg5OiAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgICAgMCAgeGVuLXBlcmNwdS1pcGkgICAgICAgaXJxd29yazINCiAgOTA6ICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgMzYyMDIgICAgICAgICAgMCAgICAgICAg
ICAwICB4ZW4tcGVyY3B1LXZpcnEgICAgICB0aW1lcjMNCiAgOTE6ICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgMjIxNzYgICAgICAgICAgMCAgICAgICAgICAwICB4
ZW4tcGVyY3B1LWlwaSAgICAgICByZXNjaGVkMw0KICA5MjogICAgICAgICAgMCAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgIDEzNCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1w
ZXJjcHUtaXBpICAgICAgIGNhbGxmdW5jMw0KICA5MzogICAgICAgICAgMCAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJj
cHUtdmlycSAgICAgIGRlYnVnMw0KICA5NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgMTI1OSAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBp
ICAgICAgIGNhbGxmdW5jc2luZ2xlMw0KICA5NTogICAgICAgICAgMCAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUt
aXBpICAgICAgIGlycXdvcmszDQogIDk2OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgMjI0MTMgICAgICAgICAgMCAgeGVuLXBlcmNwdS12aXJx
ICAgICAgdGltZXI0DQogIDk3OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgMTg5MDMgICAgICAgICAgMCAgeGVuLXBlcmNwdS1pcGkgICAgICAg
cmVzY2hlZDQNCiAgOTg6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgIDEzOCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBjYWxs
ZnVuYzQNCiAgOTk6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LXZpcnEgICAgICBkZWJ1ZzQN
CiAxMDA6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgMTY4NCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBjYWxsZnVuY3Npbmds
ZTQNCiAxMDE6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBpcnF3b3JrNA0K
IDEwMjogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgMjQxOTEgIHhlbi1wZXJjcHUtdmlycSAgICAgIHRpbWVyNQ0KIDEwMzog
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICAgICAgMTg2MTEgIHhlbi1wZXJjcHUtaXBpICAgICAgIHJlc2NoZWQ1DQogMTA0OiAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgIDE1NCAgeGVuLXBlcmNwdS1pcGkgICAgICAgY2FsbGZ1bmM1DQogMTA1OiAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgeGVuLXBlcmNwdS12aXJxICAgICAgZGVidWc1DQogMTA2OiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgMTQ0MCAg
eGVuLXBlcmNwdS1pcGkgICAgICAgY2FsbGZ1bmNzaW5nbGU1DQogMTA3OiAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAg
MCAgeGVuLXBlcmNwdS1pcGkgICAgICAgaXJxd29yazUNCiAxMDg6ICAgICAgIDk0MTMgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
eGVuLWR5bi1ldmVudCAgICAgeGVuYnVzDQogMTA5OiAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNw
dS12aXJxICAgICAgeGVuLXBjcHUNCiAxMTg6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LXZp
cnEgICAgICBtY2UNCiAxMjA6ICAgICAxNDY1NDYgICAgICAgICAgMCAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1tc2kgICAgICAgYWhj
aQ0KIDEyMTogICAgICAxMzcxMiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLW1zaSAgICAgICBldGgwDQogMTIyOiAg
ICAgICA4OTIzICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgeGVuLXBpcnEtbXNpICAgICAgIGV0aDENCiAxMjM6ICAgICAgIDY1OTgg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxMjQ6ICAgICAgICAg
IDMgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAg
ICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxMjU6ICAgICAg
ICAyODAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxMjY6ICAg
ICAgICAyNzQgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOnhlbmNvbnNvbGVkDQogMTI3
OiAgICAgICAyOTkzICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIGJsa2lmLWJhY2tlbmQNCiAxMjg6
ICAgICAgMTM0MTMgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAg
MCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgYmxraWYtYmFja2VuZA0KIDEyOTog
ICAgICAgNjMzMSAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAgICB2aWYxLjANCiAxMzA6ICAgICAgICAx
OTEgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAg
ICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxMzE6ICAgICAg
ICAyODcgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOnhlbmNvbnNvbGVkDQogMTMyOiAg
ICAgICAxMTMwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIGJsa2lmLWJhY2tlbmQNCiAxMzM6ICAg
ICAgIDEwMDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgdmlmMi4wDQogMTM0OiAgICAgICAgMTk1
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAg
MCAgIHhlbi1keW4tZXZlbnQgICAgIGV2dGNobjpveGVuc3RvcmVkDQogMTM1OiAgICAgICAg
Mjg4ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIGV2dGNobjp4ZW5jb25zb2xlZA0KIDEzNjogICAg
ICAgMTE2OSAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
ICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAgICBibGtpZi1iYWNrZW5kDQogMTM3OiAgICAg
ICAgMjI5ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIHZpZjMuMA0KIDEzODogICAgICAgIDE5NiAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICB4ZW4tZHluLWV2ZW50ICAgICBldnRjaG46b3hlbnN0b3JlZA0KIDEzOTogICAgICAgIDI3
NCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICB4ZW4tZHluLWV2ZW50ICAgICBldnRjaG46eGVuY29uc29sZWQNCiAxNDA6ICAgICAg
ICA3MzQgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgYmxraWYtYmFja2VuZA0KIDE0MTogICAgICAg
ICAgMSAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICB4ZW4tZHluLWV2ZW50ICAgICB2aWY0LjANCiAxNDI6ICAgICAgICAxOTIgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
eGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxNDM6ICAgICAgICAyODUg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOnhlbmNvbnNvbGVkDQogMTQ0OiAgICAgICAx
NTkxICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIGJsa2lmLWJhY2tlbmQNCiAxNDU6ICAgICAgICAg
NTYgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAg
ICAwICAgeGVuLWR5bi1ldmVudCAgICAgdmlmNS4wDQogMTQ2OiAgICAgICAgMTkxICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhl
bi1keW4tZXZlbnQgICAgIGV2dGNobjpveGVuc3RvcmVkDQogMTQ3OiAgICAgICAgMzA4ICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
IHhlbi1keW4tZXZlbnQgICAgIGV2dGNobjp4ZW5jb25zb2xlZA0KIDE0ODogICAgICAgMTEx
MyAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICB4ZW4tZHluLWV2ZW50ICAgICBibGtpZi1iYWNrZW5kDQogMTQ5OiAgICAgICAgICA0
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAg
MCAgIHhlbi1keW4tZXZlbnQgICAgIHZpZjYuMA0KIDE1MDogICAgICAgIDE5MCAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4t
ZHluLWV2ZW50ICAgICBldnRjaG46b3hlbnN0b3JlZA0KIDE1MTogICAgICAgIDI2OCAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4
ZW4tZHluLWV2ZW50ICAgICBldnRjaG46eGVuY29uc29sZWQNCiAxNTI6ICAgICAgIDEwODUg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICAgeGVuLWR5bi1ldmVudCAgICAgYmxraWYtYmFja2VuZA0KIDE1MzogICAgICAyMzU5MyAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICB4ZW4tZHluLWV2ZW50ICAgICB2aWY3LjANCiAxNTQ6ICAgICAgICAxOTIgICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5
bi1ldmVudCAgICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxNTU6ICAgICAgICAyODIgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVu
LWR5bi1ldmVudCAgICAgZXZ0Y2huOnhlbmNvbnNvbGVkDQogMTU2OiAgICAgICAgOTA2ICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
IHhlbi1keW4tZXZlbnQgICAgIGJsa2lmLWJhY2tlbmQNCiAxNTc6ICAgICAgICAgIDQgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
eGVuLWR5bi1ldmVudCAgICAgdmlmOC4wDQogMTU4OiAgICAgICAgMTg5ICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4t
ZXZlbnQgICAgIGV2dGNobjpveGVuc3RvcmVkDQogMTU5OiAgICAgICAgMjYzICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1k
eW4tZXZlbnQgICAgIGV2dGNobjp4ZW5jb25zb2xlZA0KIDE2MDogICAgICAgMTE4MyAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4
ZW4tZHluLWV2ZW50ICAgICBibGtpZi1iYWNrZW5kDQogMTYxOiAgICAgIDEyOTkzICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhl
bi1keW4tZXZlbnQgICAgIHZpZjkuMA0KIDE2MjogICAgICAgIDE5NiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2
ZW50ICAgICBldnRjaG46b3hlbnN0b3JlZA0KIDE2MzogICAgICAgIDI3OSAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHlu
LWV2ZW50ICAgICBldnRjaG46eGVuY29uc29sZWQNCiAxNjQ6ICAgICAgIDEzNDQgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVu
LWR5bi1ldmVudCAgICAgYmxraWYtYmFja2VuZA0KIDE2NTogICAgICAgICAzMSAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4t
ZHluLWV2ZW50ICAgICB2aWYxMC4wDQogMTY2OiAgICAgICAgMjQzICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZl
bnQgICAgIGV2dGNobjpveGVuc3RvcmVkDQogMTY3OiAgICAgICAgMzM5ICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4t
ZXZlbnQgICAgIGV2dGNobjp4ZW5jb25zb2xlZA0KIDE2ODogICAgICAgIDM4NyAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4t
ZHluLWV2ZW50ICAgICB4ZW4tcGNpYmFjaw0KIDE2OTogICAgICAgMTM2MiAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHlu
LWV2ZW50ICAgICBibGtpZi1iYWNrZW5kDQogMTcwOiAgICAgICAgIDI5ICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4t
ZXZlbnQgICAgIHZpZjExLjANCiAxNzE6ICAgICAgICAxODQgICAgICAgICAgMCAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAg
ICAgZXZ0Y2huOm94ZW5zdG9yZWQNCiAxNzI6ICAgICAgICAyOTUgICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVu
dCAgICAgZXZ0Y2huOnhlbmNvbnNvbGVkDQogMTczOiAgICAgICAxMTM0ICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4t
ZXZlbnQgICAgIGJsa2lmLWJhY2tlbmQNCiAxNzQ6ICAgICAgICAgMjcgICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1l
dmVudCAgICAgdmlmMTIuMA0KIDE3NTogICAgICAgIDE5MyAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAg
ICBldnRjaG46b3hlbnN0b3JlZA0KIDE3NjogICAgICAgIDI4NSAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50
ICAgICBldnRjaG46eGVuY29uc29sZWQNCiAxNzc6ICAgICAgIDEzNjkgICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1l
dmVudCAgICAgYmxraWYtYmFja2VuZA0KIDE3ODogICAgICAgIDQyMiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2
ZW50ICAgICB2aWYxMy4wDQogMTc5OiAgICAgICAgMTg1ICAgICAgICAgIDAgICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZlbnQgICAg
IGV2dGNobjpveGVuc3RvcmVkDQogMTgwOiAgICAgICAgNzU2ICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZlbnQg
ICAgIGV2dGNobjp4ZW5jb25zb2xlZA0KIDE4MTogICAgICAyODA1OSAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2
ZW50ICAgICBibGtpZi1iYWNrZW5kDQogMTgyOiAgICAgICAgMTU0ICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZl
bnQgICAgIHZpZjE0LjANCiAxODM6ICAgICAgICA0ODAgICAgICAgICAgMCAgICAgICAgICAw
ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAg
ZXZ0Y2huOm94ZW5zdG9yZWQNCiAxODQ6ICAgICAgICAgMTIgICAgICAgICAgMCAgICAgICAg
ICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAg
ICAgZXZ0Y2huOnhlbmNvbnNvbGVkDQogMTg1OiAgICAgMzYxODAzICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZl
bnQgICAgIGV2dGNobjpxZW11LXN5c3RlbS1pMzgNCiAxODY6ICAgICAgMjg3NzcgICAgICAg
ICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgeGVu
LWR5bi1ldmVudCAgICAgZXZ0Y2huOnFlbXUtc3lzdGVtLWkzOA0KIDE4NzogICAgICAyMTA0
NiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICB4ZW4tZHluLWV2ZW50ICAgICBldnRjaG46cWVtdS1zeXN0ZW0taTM4DQogMTg4OiAg
ICAgICAgIDUxICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIGV2dGNobjpxZW11LXN5c3RlbS1pMzgN
CiAxODk6ICAgICAgIDE2MTcgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgYmxraWYtYmFja2VuZA0K
IDE5MDogICAgICAgIDI4NSAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAgICBibGtpZi1iYWNrZW5kDQog
MTkxOiAgICAgICAgIDExICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgICAgMCAgIHhlbi1keW4tZXZlbnQgICAgIHZpZjE2LjANCiAxOTI6ICAg
ICAgICAgIDEgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXZ0Y2huOnFlbXUtc3lzdGVtLWkzOA0K
IE5NSTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgICBOb24tbWFza2FibGUgaW50ZXJydXB0cw0KIExPQzogICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAg
ICAgICAgIDAgICBMb2NhbCB0aW1lciBpbnRlcnJ1cHRzDQogU1BVOiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
IFNwdXJpb3VzIGludGVycnVwdHMNCiBQTUk6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAg
ICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgUGVyZm9ybWFuY2Ug
bW9uaXRvcmluZyBpbnRlcnJ1cHRzDQogSVdJOiAgICAgICAgICAwICAgICAgICAgIDAgICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIElSUSB3b3JrIGlu
dGVycnVwdHMNCiBSVFI6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgQVBJQyBJQ1IgcmVhZCByZXRyaWVzDQog
UkVTOiAgICAgIDQ5MDUyICAgICAgNDM3MTggICAgICA0NzA0OCAgICAgIDIyMTc2ICAgICAg
MTg5MDMgICAgICAxODYxMSAgIFJlc2NoZWR1bGluZyBpbnRlcnJ1cHRzDQogQ0FMOiAgICAg
ICAxNTM4ICAgICAgIDE3MzUgICAgICAgMTY5NCAgICAgICAxMzkzICAgICAgIDE4MjIgICAg
ICAgMTU5NCAgIEZ1bmN0aW9uIGNhbGwgaW50ZXJydXB0cw0KIFRMQjogICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICBUTEIgc2hvb3Rkb3ducw0KIFRSTTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAg
IDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICBUaGVybWFsIGV2ZW50IGlu
dGVycnVwdHMNCiBUSFI6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgVGhyZXNob2xkIEFQSUMgaW50ZXJydXB0
cw0KIE1DRTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgICBNYWNoaW5lIGNoZWNrIGV4Y2VwdGlvbnMNCiBNQ1A6
ICAgICAgICAgIDMgICAgICAgICAgMyAgICAgICAgICAzICAgICAgICAgIDMgICAgICAgICAg
MyAgICAgICAgICAzICAgTWFjaGluZSBjaGVjayBwb2xscw0KIEVSUjogICAgICAgICAgMA0K
IE1JUzogICAgICAgICAgMA0K
------------09400606801CFB705
Content-Type: text/plain;
 name="interrupts-upstream.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="interrupts-upstream.txt"

ICAgICAgICAgICAgQ1BVMCAgICAgICBDUFUxICAgICAgIENQVTINCiAgIDA6ICAgICAgICAg
NDkgICAgICAgICAgMCAgICAgICAgICAwICAgSU8tQVBJQy1lZGdlICAgICAgdGltZXINCiAg
IDE6ICAgICAgICAgIDggICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGlycS1pb2FwaWMt
ZWRnZSAgaTgwNDINCiAgIDQ6ICAgICAgICAzNzYgICAgICAgICAgMCAgICAgICAgICAwICB4
ZW4tcGlycS1pb2FwaWMtZWRnZSAgc2VyaWFsDQogICA4OiAgICAgICAgICAyICAgICAgICAg
IDAgICAgICAgICAgMCAgeGVuLXBpcnEtaW9hcGljLWVkZ2UgIHJ0YzANCiAgIDk6ICAgICAg
ICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgSU8tQVBJQy1mYXN0ZW9pICAgYWNwaQ0K
ICAxMjogICAgICAgIDExMSAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLWlvYXBp
Yy1lZGdlICBpODA0Mg0KICAyMzogICAgICAgICAzMyAgICAgICAgICAwICAgICAgICAgIDAg
IHhlbi1waXJxLWlvYXBpYy1sZXZlbCAgdWhjaV9oY2Q6dXNiMQ0KICA2NDogICAgICAyMTg3
OCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtdmlycSAgICAgIHRpbWVyMA0K
ICA2NTogICAgICAgOTQwOCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBp
ICAgICAgIHJlc2NoZWQwDQogIDY2OiAgICAgICAgIDQ0ICAgICAgICAgIDAgICAgICAgICAg
MCAgeGVuLXBlcmNwdS1pcGkgICAgICAgY2FsbGZ1bmMwDQogIDY3OiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS12aXJxICAgICAgZGVidWcwDQogIDY4
OiAgICAgICAgNTEwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS1pcGkgICAg
ICAgY2FsbGZ1bmNzaW5nbGUwDQogIDY5OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgeGVuLXBlcmNwdS1pcGkgICAgICAgaXJxd29yazANCiAgNzA6ICAgICAgICAgIDAg
ICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICBzcGlubG9jazAN
CiAgNzE6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlw
aSAgICAgICBzcGlubG9jazENCiAgNzI6ICAgICAgICAgIDAgICAgICAgNzQzOCAgICAgICAg
ICAwICB4ZW4tcGVyY3B1LWlwaSAgICAgICByZXNjaGVkMQ0KICA3MzogICAgICAgICAgMCAg
ICAgICAgIDU1ICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBpICAgICAgIGNhbGxmdW5jMQ0K
ICA3NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtdmly
cSAgICAgIGRlYnVnMQ0KICA3NTogICAgICAgICAgMCAgICAgICAgNjA2ICAgICAgICAgIDAg
IHhlbi1wZXJjcHUtaXBpICAgICAgIGNhbGxmdW5jc2luZ2xlMQ0KICA3NjogICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1wZXJjcHUtaXBpICAgICAgIGlycXdvcmsx
DQogIDc3OiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgeGVuLXBlcmNwdS1p
cGkgICAgICAgc3BpbmxvY2syDQogIDc4OiAgICAgICAgICAwICAgICAgMjI3MjcgICAgICAg
ICAgMCAgeGVuLXBlcmNwdS12aXJxICAgICAgdGltZXIxDQogIDc5OiAgICAgICAgICAwICAg
ICAgICAgIDAgICAgICAgNzQ0MSAgeGVuLXBlcmNwdS1pcGkgICAgICAgcmVzY2hlZDINCiAg
ODA6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgIDQ1ICB4ZW4tcGVyY3B1LWlwaSAg
ICAgICBjYWxsZnVuYzINCiAgODE6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAw
ICB4ZW4tcGVyY3B1LXZpcnEgICAgICBkZWJ1ZzINCiAgODI6ICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgNTc2ICB4ZW4tcGVyY3B1LWlwaSAgICAgICBjYWxsZnVuY3NpbmdsZTIN
CiAgODM6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICB4ZW4tcGVyY3B1LWlw
aSAgICAgICBpcnF3b3JrMg0KICA4NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgMjIy
NjQgIHhlbi1wZXJjcHUtdmlycSAgICAgIHRpbWVyMg0KICA4NTogICAgICAgIDQ5OCAgICAg
ICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAgICB4ZW5idXMNCiAgODY6ICAg
ICAgICAgIDYgICAgICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgaHZj
X2NvbnNvbGUNCiAgODc6ICAgICAgIDQ0NTcgICAgICAgICAgMCAgICAgICAgICAwICAgeGVu
LWR5bi1ldmVudCAgICAgYmxraWYNCiAgODg6ICAgICAgICAyODkgICAgICAgICAgMCAgICAg
ICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgYmxraWYNCiAgODk6ICAgICAgICAxMDEgICAg
ICAgICAgMCAgICAgICAgICAwICAgeGVuLWR5bi1ldmVudCAgICAgZXRoMA0KICA5MDogICAg
ICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLW1zaS14ICAgICB4aGNp
X2hjZA0KICA5MTogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJx
LW1zaS14ICAgICB4aGNpX2hjZA0KICA5MjogICAgICAgICAgMCAgICAgICAgICAwICAgICAg
ICAgIDAgIHhlbi1waXJxLW1zaS14ICAgICB4aGNpX2hjZA0KICA5MzogICAgICAgICAgMCAg
ICAgICAgICAwICAgICAgICAgIDAgIHhlbi1waXJxLW1zaS14ICAgICB4aGNpX2hjZA0KICA5
NDogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICB4ZW4tZHluLWV2ZW50ICAg
ICB2a2JkDQogTk1JOiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIE5vbi1t
YXNrYWJsZSBpbnRlcnJ1cHRzDQogTE9DOiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAg
ICAgMCAgIExvY2FsIHRpbWVyIGludGVycnVwdHMNCiBTUFU6ICAgICAgICAgIDAgICAgICAg
ICAgMCAgICAgICAgICAwICAgU3B1cmlvdXMgaW50ZXJydXB0cw0KIFBNSTogICAgICAgICAg
MCAgICAgICAgICAwICAgICAgICAgIDAgICBQZXJmb3JtYW5jZSBtb25pdG9yaW5nIGludGVy
cnVwdHMNCiBJV0k6ICAgICAgICAgIDAgICAgICAgICAgMCAgICAgICAgICAwICAgSVJRIHdv
cmsgaW50ZXJydXB0cw0KIFJUUjogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAg
ICBBUElDIElDUiByZWFkIHJldHJpZXMNCiBSRVM6ICAgICAgIDk0MDggICAgICAgNzQzOCAg
ICAgICA3NDQxICAgUmVzY2hlZHVsaW5nIGludGVycnVwdHMNCiBDQUw6ICAgICAgICAgMjkg
ICAgICAgICAxNSAgICAgICAgIDE3ICAgRnVuY3Rpb24gY2FsbCBpbnRlcnJ1cHRzDQogVExC
OiAgICAgICAgNTI1ICAgICAgICA2NDYgICAgICAgIDYwNCAgIFRMQiBzaG9vdGRvd25zDQog
VFJNOiAgICAgICAgICAwICAgICAgICAgIDAgICAgICAgICAgMCAgIFRoZXJtYWwgZXZlbnQg
aW50ZXJydXB0cw0KIFRIUjogICAgICAgICAgMCAgICAgICAgICAwICAgICAgICAgIDAgICBU
aHJlc2hvbGQgQVBJQyBpbnRlcnJ1cHRzDQogTUNFOiAgICAgICAgICAwICAgICAgICAgIDAg
ICAgICAgICAgMCAgIE1hY2hpbmUgY2hlY2sgZXhjZXB0aW9ucw0KIE1DUDogICAgICAgICAg
MSAgICAgICAgICAxICAgICAgICAgIDEgICBNYWNoaW5lIGNoZWNrIHBvbGxzDQogRVJSOiAg
ICAgICAgICAwDQogTUlTOiAgICAgICAgICAwDQo=
------------09400606801CFB705
Content-Type: application/octet-stream;
 name="qemu-traditional.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="qemu-traditional.log"

ZG9taWQ6IDE1Ci12aWRlb3JhbSBvcHRpb24gZG9lcyBub3Qgd29yayB3aXRoIGNpcnJ1cyB2
Z2EgZGV2aWNlIG1vZGVsLiBWaWRlb3JhbSBzZXQgdG8gNE0uClVzaW5nIGZpbGUgL2Rldi94
ZW5fdm1zL3NlY3VyaXR5YmFja3VwIGluIHJlYWQtd3JpdGUgbW9kZQpVc2luZyBmaWxlIC9k
ZXYveGVuX3Ztcy9zZWN1cml0eV9kYXRhIGluIHJlYWQtd3JpdGUgbW9kZQpXYXRjaGluZyAv
bG9jYWwvZG9tYWluLzAvZGV2aWNlLW1vZGVsLzE1L2xvZ2RpcnR5L2NtZApXYXRjaGluZyAv
bG9jYWwvZG9tYWluLzAvZGV2aWNlLW1vZGVsLzE1L2NvbW1hbmQKV2F0Y2hpbmcgL2xvY2Fs
L2RvbWFpbi8xNS9jcHUKY2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8xNwpx
ZW11X21hcF9jYWNoZV9pbml0IG5yX2J1Y2tldHMgPSAxMDAwMCBzaXplIDQxOTQzMDQKc2hh
cmVkIHBhZ2UgYXQgcGZuIGZlZmZkCmJ1ZmZlcmVkIGlvIHBhZ2UgYXQgcGZuIGZlZmZiCkd1
ZXN0IHV1aWQgPSBkNDdjYzNiYy02ODMyLTRkNTEtYmMyNy1iOWIwMjZhZDJjMWMKcG9wdWxh
dGluZyB2aWRlbyBSQU0gYXQgZmYwMDAwMDAKbWFwcGluZyB2aWRlbyBSQU0gZnJvbSBmZjAw
MDAwMApSZWdpc3RlciB4ZW4gcGxhdGZvcm0uCkRvbmUgcmVnaXN0ZXIgcGxhdGZvcm0uCnBs
YXRmb3JtX2ZpeGVkX2lvcG9ydDogY2hhbmdlZCByby9ydyBzdGF0ZSBvZiBST00gbWVtb3J5
IGFyZWEuIG5vdyBpcyBydyBzdGF0ZS4KeHNfcmVhZCgvbG9jYWwvZG9tYWluLzAvZGV2aWNl
LW1vZGVsLzE1L3hlbl9leHRlbmRlZF9wb3dlcl9tZ210KTogcmVhZCBlcnJvcgpMb2ctZGly
dHk6IG5vIGNvbW1hbmQgeWV0LgpJL08gcmVxdWVzdCBub3QgcmVhZHk6IDAsIHB0cjogMCwg
cG9ydDogMCwgZGF0YTogMCwgY291bnQ6IDAsIHNpemU6IDAKSS9PIHJlcXVlc3Qgbm90IHJl
YWR5OiAwLCBwdHI6IDAsIHBvcnQ6IDAsIGRhdGE6IDAsIGNvdW50OiAwLCBzaXplOiAwCnZj
cHUtc2V0OiB3YXRjaCBub2RlIGVycm9yLgpJL08gcmVxdWVzdCBub3QgcmVhZHk6IDAsIHB0
cjogMCwgcG9ydDogMCwgZGF0YTogMCwgY291bnQ6IDAsIHNpemU6IDAKeHNfcmVhZCgvbG9j
YWwvZG9tYWluLzE1L2xvZy10aHJvdHRsaW5nKTogcmVhZCBlcnJvcgpxZW11OiBpZ25vcmlu
ZyBub3QtdW5kZXJzdG9vZCBkcml2ZSBgL2xvY2FsL2RvbWFpbi8xNS9sb2ctdGhyb3R0bGlu
ZycKbWVkaXVtIGNoYW5nZSB3YXRjaCBvbiBgL2xvY2FsL2RvbWFpbi8xNS9sb2ctdGhyb3R0
bGluZycgLSB1bmtub3duIGRldmljZSwgaWdub3JlZApkbS1jb21tYW5kOiBob3QgaW5zZXJ0
IHBhc3MtdGhyb3VnaCBwY2kgZGV2IApyZWdpc3Rlcl9yZWFsX2RldmljZTogQXNzaWduaW5n
IHJlYWwgcGh5c2ljYWwgZGV2aWNlIDA0OjAwLjAgLi4uCnJlZ2lzdGVyX3JlYWxfZGV2aWNl
OiBEaXNhYmxlIE1TSSB0cmFuc2xhdGlvbiB2aWEgcGVyIGRldmljZSBvcHRpb24KcmVnaXN0
ZXJfcmVhbF9kZXZpY2U6IERpc2FibGUgcG93ZXIgbWFuYWdlbWVudApwdF9pb211bF9pbml0
OiBFcnJvcjogcHRfaW9tdWxfaW5pdCBjYW4ndCBvcGVuIGZpbGUgL2Rldi94ZW4vcGNpX2lv
bXVsOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5OiAweDQ6MHgwLjB4MApwdF9yZWdpc3Rl
cl9yZWdpb25zOiBJTyByZWdpb24gcmVnaXN0ZXJlZCAoc2l6ZT0weDAwMDAyMDAwIGJhc2Vf
YWRkcj0weGY5OGZlMDA0KQpwdF9tc2l4X2luaXQ6IGdldCBNU0ktWCB0YWJsZSBiYXIgYmFz
ZSBmOThmZTAwMApwdF9tc2l4X2luaXQ6IHRhYmxlX29mZiA9IDEwMDAsIHRvdGFsX2VudHJp
ZXMgPSA4CnB0X21zaXhfaW5pdDogbWFwcGluZyBwaHlzaWNhbCBNU0ktWCB0YWJsZSB0byA3
ZjkzZTYyMjMwMDAKcGNpX2ludHg6IGludHg9MQpyZWdpc3Rlcl9yZWFsX2RldmljZTogUmVh
bCBwaHlzaWNhbCBkZXZpY2UgMDQ6MDAuMCByZWdpc3RlcmVkIHN1Y2Nlc3NmdWx5IQpJUlEg
dHlwZSA9IElOVHgKY2lycnVzIHZnYSBtYXAgY2hhbmdlIHdoaWxlIG9uIGxmYiBtb2RlCnB0
X2lvbWVtX21hcDogZV9waHlzPWYzMDIwMDAwIG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBsZW49
ODE5MiBpbmRleD0wIGZpcnN0X21hcD0xCm1hcHBpbmcgdnJhbSB0byBmMDAwMDAwMCAtIGYw
NDAwMDAwCnBsYXRmb3JtX2ZpeGVkX2lvcG9ydDogY2hhbmdlZCByby9ydyBzdGF0ZSBvZiBS
T00gbWVtb3J5IGFyZWEuIG5vdyBpcyBydyBzdGF0ZS4KcGxhdGZvcm1fZml4ZWRfaW9wb3J0
OiBjaGFuZ2VkIHJvL3J3IHN0YXRlIG9mIFJPTSBtZW1vcnkgYXJlYS4gbm93IGlzIHJvIHN0
YXRlLgpVbmtub3duIFBWIHByb2R1Y3QgMyBsb2FkZWQgaW4gZ3Vlc3QKUFYgZHJpdmVyIGJ1
aWxkIDEKcmVnaW9uIHR5cGUgMCBhdCBbZjMwMDAwMDAsZjMwMjAwMDApLgpzcXVhc2ggaW9t
ZW0gW2YzMDAwMDAwLCBmMzAyMDAwMCkuCnJlZ2lvbiB0eXBlIDEgYXQgW2MxMDAsYzE0MCku
CnB0X2lvbWVtX21hcDogZV9waHlzPWZmZmZmZmZmIG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBs
ZW49ODE5MiBpbmRleD0wIGZpcnN0X21hcD0wCnNxdWFzaCBpb21lbSBbZjMwMjEwMDAsIGYz
MDIyMDAwKS4KcHRfaW9tZW1fbWFwOiBlX3BoeXM9ZjMwMjAwMDAgbWFkZHI9Zjk4ZmUwMDAg
dHlwZT0wIGxlbj04MTkyIGluZGV4PTAgZmlyc3RfbWFwPTAKcHRfaW9tZW1fbWFwOiBlX3Bo
eXM9ZmZmZmZmZmYgbWFkZHI9Zjk4ZmUwMDAgdHlwZT0wIGxlbj04MTkyIGluZGV4PTAgZmly
c3RfbWFwPTAKc3F1YXNoIGlvbWVtIFtmMzAyMTAwMCwgZjMwMjIwMDApLgpwdF9pb21lbV9t
YXA6IGVfcGh5cz1mMzAyMDAwMCBtYWRkcj1mOThmZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5k
ZXg9MCBmaXJzdF9tYXA9MApwdF9pb21lbV9tYXA6IGVfcGh5cz1mZmZmZmZmZiBtYWRkcj1m
OThmZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5kZXg9MCBmaXJzdF9tYXA9MApzcXVhc2ggaW9t
ZW0gW2YzMDIxMDAwLCBmMzAyMjAwMCkuCnB0X2lvbWVtX21hcDogZV9waHlzPWYzMDIwMDAw
IG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBsZW49ODE5MiBpbmRleD0wIGZpcnN0X21hcD0wCnB0
X2lvbWVtX21hcDogZV9waHlzPWZmZmZmZmZmIG1hZGRyPWY5OGZlMDAwIHR5cGU9MCBsZW49
ODE5MiBpbmRleD0wIGZpcnN0X21hcD0wCnNxdWFzaCBpb21lbSBbZjMwMjEwMDAsIGYzMDIy
MDAwKS4KcHRfaW9tZW1fbWFwOiBlX3BoeXM9ZjMwMjAwMDAgbWFkZHI9Zjk4ZmUwMDAgdHlw
ZT0wIGxlbj04MTkyIGluZGV4PTAgZmlyc3RfbWFwPTAKcHRfaW9tZW1fbWFwOiBlX3BoeXM9
ZmZmZmZmZmYgbWFkZHI9Zjk4ZmUwMDAgdHlwZT0wIGxlbj04MTkyIGluZGV4PTAgZmlyc3Rf
bWFwPTAKc3F1YXNoIGlvbWVtIFtmMzAyMTAwMCwgZjMwMjIwMDApLgpwdF9pb21lbV9tYXA6
IGVfcGh5cz1mMzAyMDAwMCBtYWRkcj1mOThmZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5kZXg9
MCBmaXJzdF9tYXA9MApwdF9pb21lbV9tYXA6IGVfcGh5cz1mZmZmZmZmZiBtYWRkcj1mOThm
ZTAwMCB0eXBlPTAgbGVuPTgxOTIgaW5kZXg9MCBmaXJzdF9tYXA9MApzcXVhc2ggaW9tZW0g
W2YzMDIxMDAwLCBmMzAyMjAwMCkuCnB0X3BjaV93cml0ZV9jb25maWc6IFswMDowNTowXSBX
YXJuaW5nOiBHdWVzdCBhdHRlbXB0IHRvIHNldCBhZGRyZXNzIHRvIHVudXNlZCBCYXNlIEFk
ZHJlc3MgUmVnaXN0ZXIuIFtPZmZzZXQ6MzBoXVtMZW5ndGg6NF0KcHRfaW9tZW1fbWFwOiBl
X3BoeXM9ZjMwMjAwMDAgbWFkZHI9Zjk4ZmUwMDAgdHlwZT0wIGxlbj04MTkyIGluZGV4PTAg
Zmlyc3RfbWFwPTAKcHRfbXNpeF91cGRhdGVfb25lOiBwdF9tc2l4X3VwZGF0ZV9vbmUgcmVx
dWVzdGVkIHBpcnEgPSA4NwpwdF9tc2l4X3VwZGF0ZV9vbmU6IFVwZGF0ZSBtc2l4IGVudHJ5
IDAgd2l0aCBwaXJxIDU3IGd2ZWMgMApwdF9tc2l4X3VwZGF0ZV9vbmU6IHB0X21zaXhfdXBk
YXRlX29uZSByZXF1ZXN0ZWQgcGlycSA9IDg2CnB0X21zaXhfdXBkYXRlX29uZTogVXBkYXRl
IG1zaXggZW50cnkgMSB3aXRoIHBpcnEgNTYgZ3ZlYyAwCnB0X21zaXhfdXBkYXRlX29uZTog
cHRfbXNpeF91cGRhdGVfb25lIHJlcXVlc3RlZCBwaXJxID0gODUKcHRfbXNpeF91cGRhdGVf
b25lOiBVcGRhdGUgbXNpeCBlbnRyeSAyIHdpdGggcGlycSA1NSBndmVjIDAKcHRfbXNpeF91
cGRhdGVfb25lOiBwdF9tc2l4X3VwZGF0ZV9vbmUgcmVxdWVzdGVkIHBpcnEgPSA4NApwdF9t
c2l4X3VwZGF0ZV9vbmU6IFVwZGF0ZSBtc2l4IGVudHJ5IDMgd2l0aCBwaXJxIDU0IGd2ZWMg
MApzaHV0ZG93biByZXF1ZXN0ZWQgaW4gY3B1X2hhbmRsZV9pb3JlcQpJc3N1ZWQgZG9tYWlu
IDE1IHBvd2Vyb2ZmCg==
------------09400606801CFB705
Content-Type: application/octet-stream;
 name="qemu-upstream.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="qemu-upstream.log"

Y2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8xNwp4ZW46IHNoYXJlZCBwYWdl
IGF0IHBmbiBmZWZmZAp4ZW46IGJ1ZmZlcmVkIGlvIHBhZ2UgYXQgcGZuIGZlZmZiCnhlbl9t
YXBjYWNoZTogeGVuX21hcF9jYWNoZV9pbml0LCBucl9idWNrZXRzID0gODAwMCBzaXplIDE1
NzI4NjQKeGVuX3BsYXRmb3JtOiBjaGFuZ2VkIHJvL3J3IHN0YXRlIG9mIFJPTSBtZW1vcnkg
YXJlYS4gbm93IGlzIHJ3IHN0YXRlLgp4ZW46IEkvTyByZXF1ZXN0IG5vdCByZWFkeTogMCwg
cHRyOiAwLCBwb3J0OiAwLCBkYXRhOiAwLCBjb3VudDogMCwgc2l6ZTogMAp4ZW46IEkvTyBy
ZXF1ZXN0IG5vdCByZWFkeTogMCwgcHRyOiAwLCBwb3J0OiAwLCBkYXRhOiAwLCBjb3VudDog
MCwgc2l6ZTogMAp4ZW46IEkvTyByZXF1ZXN0IG5vdCByZWFkeTogMCwgcHRyOiAwLCBwb3J0
OiAwLCBkYXRhOiAwLCBjb3VudDogMCwgc2l6ZTogMApbMDA6MDUuMF0geGVuX3B0X2luaXRm
bjogQXNzaWduaW5nIHJlYWwgcGh5c2ljYWwgZGV2aWNlIDA0OjAwLjAgdG8gZGV2Zm4gMHgy
OApbMDA6MDUuMF0geGVuX3B0X3JlZ2lzdGVyX3JlZ2lvbnM6IElPIHJlZ2lvbiAwIHJlZ2lz
dGVyZWQgKHNpemU9MHgyMDAwbHggYmFzZV9hZGRyPTB4Zjk4ZmUwMDBseCB0eXBlOiAweDQp
ClswMDowNS4wXSB4ZW5fcHRfbXNpeF9pbml0OiBnZXQgTVNJLVggdGFibGUgQkFSIGJhc2Ug
MHhmOThmZTAwMApbMDA6MDUuMF0geGVuX3B0X21zaXhfaW5pdDogdGFibGVfb2ZmID0gMHgx
MDAwLCB0b3RhbF9lbnRyaWVzID0gOApbMDA6MDUuMF0geGVuX3B0X21zaXhfaW5pdDogbWFw
cGluZyBwaHlzaWNhbCBNU0ktWCB0YWJsZSB0byAweDdmYTZjODg2NjAwMApbMDA6MDUuMF0g
eGVuX3B0X3BjaV9pbnR4OiBpbnR4PTEKWzAwOjA1LjBdIHhlbl9wdF9pbml0Zm46IFJlYWwg
cGh5c2ljYWwgZGV2aWNlIDA0OjAwLjAgcmVnaXN0ZXJlZCBzdWNjZXNzZnVseSEKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBhIHZhbD0weDAwMDAw
YzAzIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4
MDAwMCB2YWw9MHgwMDAwMTAzMyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDIgdmFsPTB4MDAwMDAxOTQgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDEwIHZhbD0weDAwMDAwMDA0IGxl
bj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAg
dmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMDEwIHZhbD0weGZmZmZlMDA0IGxlbj00ClswMDowNS4wXSB4ZW5fcHRf
cGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFsPTB4MDAwMDAwMDQgbGVuPTQK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0w
eDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRk
cmVzcz0weDAwMTQgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDow
NS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTQgdmFsPTB4MDAw
MDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9
MHgwMDE4IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRl
X2NvbmZpZzogYWRkcmVzcz0weDAwMTggdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBd
IHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE4IHZhbD0weDAwMDAwMDAw
IGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAw
MTggdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25m
aWc6IGFkZHJlc3M9MHgwMDFjIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5f
cHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMWMgdmFsPTB4ZmZmZmZmZmYgbGVu
PTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDFjIHZh
bD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzog
YWRkcmVzcz0weDAwMWMgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDIwIHZhbD0weDAwMDAwMDAwIGxlbj00Clsw
MDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjAgdmFsPTB4
ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDIwIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dy
aXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjAgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDI0IHZhbD0weDAwMDAw
MDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0w
eDAwMjQgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDI0IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4
ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjQgdmFsPTB4MDAwMDAwMDAg
bGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDMw
IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZp
ZzogYWRkcmVzcz0weDAwMzAgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDMwIHZhbD0weDAwMDAwMDAwIGxlbj00
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMzAgdmFs
PTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFk
ZHJlc3M9MHgwMDNkIHZhbD0weDAwMDAwMDAxIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNp
X3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwM2MgdmFsPTB4MDAwMDAwMGEgbGVuPTEKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAw
MDAwMDAwIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVz
cz0weDAwMDQgdmFsPTB4MDAwMDAwMDQgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDEwIHZhbD0weDAwMDAwMDA0IGxlbj00ClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFsPTB4ZjMwNTAw
MDQgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDA0IHZhbD0weDAwMDAwMDA0IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MCBwaXJxOi0xIGVudHJ5
X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
MSBlbnRyeV9ucjowIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjEgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MSBu
b3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSBlbnRyeV9ucjoyIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjIgbm90IHVwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MyBw
aXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgMSBlbnRyeV9ucjozIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEg
ZW50cnlfbnI6NCBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSBlbnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90
IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
ZW50cnlfbnI6NiBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGly
cTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIDEgZW50cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDAgdmFsPTB4MDAwMDEwMzMgbGVuPTIK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDAwIHZhbD0w
eDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRy
ZXNzPTB4MDAwOCB2YWw9MHgwYzAzMzAwMyBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV9y
ZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMGUgdmFsPTB4MDAwMDAwMDAgbGVuPTEKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBlIHZhbD0weDAwMDAw
MDAwIGxlbj0xCnhlbjogcGh5c21hcHBpbmcgZG9lcyBub3QgZXhpc3QgYXQgMDAwMDAwMDBm
MzA0MDAwMAp4ZW46IG1hcHBpbmcgdnJhbSB0byBmMDAwMDAwMCAtIDQwMzQ5MjA0NDgKeGVu
OiBwaHlzbWFwcGluZyBkb2VzIG5vdCBleGlzdCBhdCAwMDAwMDAwMGYzMDIwMDAwClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzMCB2YWw9MHgwMDAw
MDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9
MHgwMDMwIHZhbD0weGZmZmZmZmZlIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRl
X2NvbmZpZzogV2FybmluZzogR3Vlc3QgYXR0ZW1wdCB0byBzZXQgYWRkcmVzcyB0byB1bnVz
ZWQgQmFzZSBBZGRyZXNzIFJlZ2lzdGVyLiAoYWRkcjogMHgzMCwgbGVuOiA0KQpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMzAgdmFsPTB4MDAwMDAw
MDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4
MDAzMCB2YWw9MHgwMDAwMDAwMCBsZW49NAp4ZW5fcGxhdGZvcm06IFVua25vd24gUFYgcHJv
ZHVjdCAzIGxvYWRlZCBpbiBndWVzdAp4ZW5fcGxhdGZvcm06IHVucGx1ZyBkaXNrcwp4ZW5f
cGxhdGZvcm06IHVucGx1ZyBuaWNzClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmln
OiBhZGRyZXNzPTB4MDAwOCB2YWw9MHgwYzAzMzAwMyBsZW49NApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMGUgdmFsPTB4MDAwMDAwMDAgbGVuPTEK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDAwIHZhbD0w
eDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRy
ZXNzPTB4MDAwYSB2YWw9MHgwMDAwMGMwMyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9y
ZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDAgdmFsPTB4MDAwMDEwMzMgbGVuPTIKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDAyIHZhbD0weDAwMDAw
MTk0IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4
MDAwZSB2YWw9MHgwMDAwMDAwMCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDggdmFsPTB4MGMwMzMwMDMgbGVuPTQKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBlIHZhbD0weDAwMDAwMDAwIGxl
bj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwMCB2
YWw9MHgwMTk0MTAzMyBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzog
YWRkcmVzcz0weDAwMGUgdmFsPTB4MDAwMDAwMDAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0yClsw
MDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9MHgw
MDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVz
cz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDowNS4w
XSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAwMDAw
NSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAw
NzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25m
aWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAwMDAwMDExIGxlbj0xClswMDowNS4wXSB4ZW5f
cHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBsZW49
MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTAgdmFs
PTB4MDAwMDAwMTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFk
ZHJlc3M9MHgwMGEyIHZhbD0weDAwMDAwMDAyIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNp
X3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDBhNCB2YWw9MHgwMDAwOGZjMCBsZW49MgpbMDA6
MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDggdmFsPTB4MGMw
MzMwMDMgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9
MHgwMDAwIHZhbD0weDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRf
Y29uZmlnOiBhZGRyZXNzPTB4MDAzZCB2YWw9MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUuMF0g
eGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwM2MgdmFsPTB4MDAwMDAwMGEg
bGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0
IHZhbD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZp
ZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDQgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDEwIHZhbD0weGYzMDUwMDA0IGxlbj00
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFs
PTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFk
ZHJlc3M9MHgwMDEwIHZhbD0weGZmZmZlMDA0IGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNp
X3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTAgdmFsPTB4ZjMwNTAwMDQgbGVuPTQKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0weDAw
MDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVz
cz0weDAwMTQgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDE0IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMTQgdmFsPTB4MDAwMDAw
MDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4
MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSBlbnRyeV9ucjowIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjAgbm90
IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
ZW50cnlfbnI6MSBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjoxIG5vdCB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjIgcGly
cTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIDEgZW50cnlfbnI6MiBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjozIHBpcnE6LTEgZW50cnlfdXBk
YXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVu
dHJ5X25yOjMgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgZW50cnlfbnI6NCBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1
LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo0IG5vdCB1
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVu
dHJ5X25yOjUgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNp
eF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NSBub3QgdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo2IHBpcnE6
LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTog
TWUgaGVyZSAxIGVudHJ5X25yOjYgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NyBwaXJxOi0xIGVudHJ5X3VwZGF0
ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRy
eV9ucjo3IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmln
OiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0
X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDA0IGxlbj0y
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAxOCB2YWw9
MHgwMDAwMDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFk
ZHJlc3M9MHgwMDE4IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNp
X3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAxOCB2YWw9MHgwMDAwMDAwMCBsZW49NApbMDA6
MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDE4IHZhbD0weDAw
MDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVz
cz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3Vw
ZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MCBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjow
IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBo
ZXJlIGVudHJ5X25yOjEgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MSBub3QgdXBkYXRlZDow
IApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjoy
IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjIgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MyBwaXJxOi0xIGVudHJ5
X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
MSBlbnRyeV9ucjozIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NCBu
b3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSBlbnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90IHVwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NiBw
aXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGlycTotMSBlbnRyeV91
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEg
ZW50cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNCBs
ZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMWMg
dmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmln
OiBhZGRyZXNzPTB4MDAxYyB2YWw9MHhmZmZmZmZmZiBsZW49NApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMWMgdmFsPTB4MDAwMDAwMDAgbGVuPTQK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAxYyB2YWw9
MHgwMDAwMDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFk
ZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfbXNp
eF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjAgcGlycTotMSBlbnRyeV91cGRhdGVk
OjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlf
bnI6MCBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTog
TWUgaGVyZSBlbnRyeV9ucjoxIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0g
eGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjEgbm90IHVwZGF0
ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlf
bnI6MiBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3Vw
ZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjoyIG5vdCB1cGRhdGVkOjAgClswMDowNS4w
XSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjMgcGlycTotMSBl
bnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBo
ZXJlIDEgZW50cnlfbnI6MyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhf
dXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo0IHBpcnE6LTEgZW50cnlfdXBkYXRlZDow
IApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25y
OjQgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1l
IGhlcmUgZW50cnlfbnI6NSBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo1IG5vdCB1cGRhdGVk
OjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25y
OjYgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NiBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0g
eGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo3IHBpcnE6LTEgZW50
cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSAxIGVudHJ5X25yOjcgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAw
MDQgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDIwIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2Nv
bmZpZzogYWRkcmVzcz0weDAwMjAgdmFsPTB4ZmZmZmZmZmYgbGVuPTQKWzAwOjA1LjBdIHhl
bl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDIwIHZhbD0weDAwMDAwMDAwIGxl
bj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMjAg
dmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmln
OiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjowIHBpcnE6LTEgZW50cnlfdXBk
YXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVu
dHJ5X25yOjAgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgZW50cnlfbnI6MSBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1
LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjoxIG5vdCB1
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVu
dHJ5X25yOjIgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNp
eF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MiBub3QgdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjozIHBpcnE6
LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTog
TWUgaGVyZSAxIGVudHJ5X25yOjMgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NCBwaXJxOi0xIGVudHJ5X3VwZGF0
ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRy
eV9ucjo0IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIGVudHJ5X25yOjUgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4w
XSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NSBub3QgdXBk
YXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRy
eV9ucjo2IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhf
dXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjYgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1
LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NyBwaXJxOi0x
IGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1l
IGhlcmUgMSBlbnRyeV9ucjo3IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfcGNp
X3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6
MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAw
MDAwMDA0IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNz
PTB4MDAyNCB2YWw9MHgwMDAwMDAwMCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0
ZV9jb25maWc6IGFkZHJlc3M9MHgwMDI0IHZhbD0weGZmZmZmZmZmIGxlbj00ClswMDowNS4w
XSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAyNCB2YWw9MHgwMDAwMDAw
MCBsZW49NApbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgw
MDI0IHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2Nv
bmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhl
bl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MCBwaXJxOi0xIGVudHJ5
X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
MSBlbnRyeV9ucjowIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRh
dGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjEgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6MSBu
b3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVy
ZSBlbnRyeV9ucjoyIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjIgbm90IHVwZGF0ZWQ6MCAK
WzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MyBw
aXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9v
bmU6IE1lIGhlcmUgMSBlbnRyeV9ucjozIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5f
cHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91
cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEg
ZW50cnlfbnI6NCBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRl
X29uZTogTWUgaGVyZSBlbnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6
MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90
IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUg
ZW50cnlfbnI6NiBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9t
c2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGly
cTotMSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25l
OiBNZSBoZXJlIDEgZW50cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9
MHgwMDAwMDAwNCBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwMzAgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
d3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAzMCB2YWw9MHhmZmZmZjgwMCBsZW49NApbMDA6
MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IFdhcm5pbmc6IEd1ZXN0IGF0dGVtcHQg
dG8gc2V0IGFkZHJlc3MgdG8gdW51c2VkIEJhc2UgQWRkcmVzcyBSZWdpc3Rlci4gKGFkZHI6
IDB4MzAsIGxlbjogNCkKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDMwIHZhbD0weDAwMDAwMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dy
aXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMzAgdmFsPTB4MDAwMDAwMDAgbGVuPTQKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAw
MDAwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBl
bnRyeV9ucjowIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21z
aXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjAgbm90IHVwZGF0ZWQ6MCAKWzAw
OjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6MSBwaXJx
Oi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6
IE1lIGhlcmUgMSBlbnRyeV9ucjoxIG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjIgcGlycTotMSBlbnRyeV91cGRh
dGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50
cnlfbnI6MiBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29u
ZTogTWUgaGVyZSBlbnRyeV9ucjozIHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUu
MF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjMgbm90IHVw
ZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50
cnlfbnI6NCBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4
X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo0IG5vdCB1cGRhdGVkOjAgClswMDow
NS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjUgcGlycTot
MSBlbnRyeV91cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBN
ZSBoZXJlIDEgZW50cnlfbnI6NSBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21z
aXhfdXBkYXRlX29uZTogTWUgaGVyZSBlbnRyeV9ucjo2IHBpcnE6LTEgZW50cnlfdXBkYXRl
ZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5
X25yOjYgbm90IHVwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6
IE1lIGhlcmUgZW50cnlfbnI6NyBwaXJxOi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBd
IHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgMSBlbnRyeV9ucjo3IG5vdCB1cGRh
dGVkOjAgClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAy
YyB2YWw9MHgwMDAwMTQ2MiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwMmUgdmFsPTB4MDAwMDQyNTcgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0y
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9
MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAw
MDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwNzIgdmFsPTB4MDAwMDAwODAgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVf
Y29uZmlnOiBhZGRyZXNzPTB4MDA3MiB2YWw9MHgwMDAwMDA4MCBsZW49MgpbMDA6MDUuMF0g
eGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDYgdmFsPTB4MDAwMDAwMTAg
bGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDM0
IHZhbD0weDAwMDAwMDUwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmln
OiBhZGRyZXNzPTB4MDA1MCB2YWw9MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUuMF0geGVuX3B0
X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTEgdmFsPTB4MDAwMDAwNzAgbGVuPTEK
WzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDcwIHZhbD0w
eDAwMDAwMDA1IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRy
ZXNzPTB4MDA3MSB2YWw9MHgwMDAwMDA5MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9y
ZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwOTAgdmFsPTB4MDAwMDAwMTEgbGVuPTEKWzAwOjA1
LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDkyIHZhbD0weDAwMDAw
MDA3IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0w
eDAwOTIgdmFsPTB4MDAwMDAwMDcgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0yClswMDowNS4wXSB4
ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9MHgwMDAwMDA1MCBs
ZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTAg
dmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRf
cGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAwMDAwNSBsZW49MQpb
MDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNzEgdmFsPTB4
MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDkwIHZhbD0weDAwMDAwMDExIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3Jl
YWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBsZW49MQpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTAgdmFsPTB4MDAwMDAw
MTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDA2IHZhbD0weDAwMDAwMDEwIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVu
X3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVu
PTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZh
bD0weDAwMDAwMDcwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBh
ZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAwMDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3Bj
aV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAw
MDAwMDExIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNz
PTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFk
X2NvbmZpZzogYWRkcmVzcz0weDAwYTAgdmFsPTB4MDAwMDAwMTAgbGVuPTEKWzAwOjA1LjBd
IHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMGExIHZhbD0weDAwMDAwMDAw
IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAw
NiB2YWw9MHgwMDAwMDAxMCBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwMzQgdmFsPTB4MDAwMDAwNTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUwIHZhbD0weDAwMDAwMDAxIGxlbj0x
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MiB2YWw9
MHgwMDAwMDAwMyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwMDYgdmFsPTB4MDAwMDAwMTAgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDM0IHZhbD0weDAwMDAwMDUwIGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MCB2YWw9MHgwMDAw
MDAwMSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwNTEgdmFsPTB4MDAwMDAwNzAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDcwIHZhbD0weDAwMDAwMDA1IGxlbj0xClswMDowNS4wXSB4
ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MSB2YWw9MHgwMDAwMDA5MCBs
ZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwOTAg
dmFsPTB4MDAwMDAwMTEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMDkxIHZhbD0weDAwMDAwMGEwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRf
cGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDBhMCB2YWw9MHgwMDAwMDAxMCBsZW49MQpb
MDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTEgdmFsPTB4
MDAwMDAwMDAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDAwIHZhbD0weDAxOTQxMDMzIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3Jl
YWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAw
MDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDU0IHZhbD0weDAwMDAwMDA4IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAwNiBsZW49MgpbMDA6MDUuMF0geGVu
X3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVu
PTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZh
bD0weDAwMDAwMDA2IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzog
YWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDIgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDBjIHZhbD0weDAwMDAwMDAwIGxlbj0xClsw
MDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1NCB2YWw9MHgw
MDAwMDAwOCBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVz
cz0weDAwMDQgdmFsPTB4MDAwMDAwMDIgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVh
ZF9jb25maWc6IGFkZHJlc3M9MHgwMDA0IHZhbD0weDAwMDAwMDAyIGxlbj0yClswMDowNS4w
XSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAw
MDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDYwIHZhbD0weDAwMDAwMDMwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDAwYyB2YWw9MHgwMDAwMDAwMCBsZW49MQpbMDA6MDUuMF0geGVu
X3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJlc3M9MHgwMDBjIHZhbD0weDAwMDAwMDEwIGxl
bj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAwYyB2
YWw9MHgwMDAwMDAxMCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzog
YWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9w
Y2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgwMDAwMDAxNiBsZW49Mgpb
MDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDYgdmFsPTB4
MDAwMDAwMTAgbGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJl
c3M9MHgwMDM0IHZhbD0weDAwMDAwMDUwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3Jl
YWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MCB2YWw9MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUu
MF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTEgdmFsPTB4MDAwMDAw
NzAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgw
MDcwIHZhbD0weDAwMDAwMDA1IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29u
ZmlnOiBhZGRyZXNzPTB4MDA3MSB2YWw9MHgwMDAwMDA5MCBsZW49MQpbMDA6MDUuMF0geGVu
X3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwOTAgdmFsPTB4MDAwMDAwMTEgbGVu
PTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZh
bD0weDAwMDAwMDEwIGxlbj0yClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBh
ZGRyZXNzPTB4MDAzNCB2YWw9MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3Bj
aV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAw
MDAwMDcwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNz
PTB4MDA3MCB2YWw9MHgwMDAwMDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFk
X2NvbmZpZzogYWRkcmVzcz0weDAwNzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBd
IHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAwMDAwMDEx
IGxlbj0xClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5
MiB2YWw9MHgwMDAwMDAwNyBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwMDYgdmFsPTB4MDAwMDAwMTAgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDM0IHZhbD0weDAwMDAwMDUwIGxlbj0x
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1MCB2YWw9
MHgwMDAwMDAwMSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwNTEgdmFsPTB4MDAwMDAwNzAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDcwIHZhbD0weDAwMDAwMDA1IGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MSB2YWw9MHgwMDAw
MDA5MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwOTAgdmFsPTB4MDAwMDAwMTEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDkyIHZhbD0weDAwMDAwMDA3IGxlbj0yClswMDowNS4wXSB4
ZW5fcHRfcGNpX3dyaXRlX2NvbmZpZzogYWRkcmVzcz0weDAwOTIgdmFsPTB4MDAwMDAwMDcg
bGVuPTIKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDk0
IHZhbD0weDAwMDAxMDAwIGxlbj00ClswMDowNS4wXSB4ZW5fcHRfcGNpX3dyaXRlX2NvbmZp
ZzogYWRkcmVzcz0weDAwOTIgdmFsPTB4MDAwMGMwMDcgbGVuPTIKWzAwOjA1LjBdIHhlbl9w
dF9tc2l4Y3RybF9yZWdfd3JpdGU6IGVuYWJsZSBNU0ktWApbMDA6MDUuMF0geGVuX3B0X3Bj
aV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwMDQgdmFsPTB4MDAwMDAwMDYgbGVuPTIKWzAw
OjA1LjBdIHhlbl9wdF9wY2lfd3JpdGVfY29uZmlnOiBhZGRyZXNzPTB4MDAwNCB2YWw9MHgw
MDAwMDQwNiBsZW49MgpbMDA6MDUuMF0geGVuX3B0X3BjaV93cml0ZV9jb25maWc6IGFkZHJl
c3M9MHgwMDkyIHZhbD0weDAwMDA4MDA3IGxlbj0yClswMDowNS4wXSB4ZW5fcHRfbXNpeF91
cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjAgcGlycTotMSBlbnRyeV91cGRhdGVkOjEg
ClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVyZSAuLiBNU0ktWDogKGFkZHI6MTcx
NTIgZGF0YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAwKSBtYXBwZWQ6bm90X21hcHBlZCAKWzAw
OjA1LjBdIG1zaV9tc2l4X3NldHVwOiBvbGRfcGlycV9jYWxjID0gNApbMDA6MDUuMF0gbXNp
X21zaXhfc2V0dXA6IHJlcXVlc3RlZCBwaXJxIDQgZm9yIE1TSS1YICh2ZWM6IDAsIGVudHJ5
OiAwKQpbMDA6MDUuMF0gbXNpX21zaXhfdXBkYXRlOiBVcGRhdGluZyBNU0ktWCB3aXRoIHBp
cnEgNCBndmVjIDAgZ2ZsYWdzIDB4MzA1NyAoZW50cnk6IDApClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMgIHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjEgcGly
cTotMSBlbnRyeV91cGRhdGVkOjEgClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVy
ZSAuLiBNU0ktWDogKGFkZHI6MTcxNTIgZGF0YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAweDEp
IG1hcHBlZDpub3RfbWFwcGVkIApbMDA6MDUuMF0gbXNpX21zaXhfc2V0dXA6IG9sZF9waXJx
X2NhbGMgPSA0ClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogcmVxdWVzdGVkIHBpcnEgNCBm
b3IgTVNJLVggKHZlYzogMCwgZW50cnk6IDB4MSkKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0
ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBwaXJxIDQgZ3ZlYyAwIGdmbGFncyAweDMwNTYgKGVu
dHJ5OiAweDEpClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMg
IHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVf
b25lOiBNZSBoZXJlIGVudHJ5X25yOjIgcGlycTotMSBlbnRyeV91cGRhdGVkOjEgClswMDow
NS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVyZSAuLiBNU0ktWDogKGFkZHI6MTcxNTIgZGF0
YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAweDIpIG1hcHBlZDpub3RfbWFwcGVkIApbMDA6MDUu
MF0gbXNpX21zaXhfc2V0dXA6IG9sZF9waXJxX2NhbGMgPSA0ClswMDowNS4wXSBtc2lfbXNp
eF9zZXR1cDogcmVxdWVzdGVkIHBpcnEgNCBmb3IgTVNJLVggKHZlYzogMCwgZW50cnk6IDB4
MikKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBwaXJx
IDQgZ3ZlYyAwIGdmbGFncyAweDMwNTUgKGVudHJ5OiAweDIpClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMgIHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClsw
MDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjMgcGly
cTotMSBlbnRyeV91cGRhdGVkOjEgClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogTWUgaGVy
ZSAuLiBNU0ktWDogKGFkZHI6MTcxNTIgZGF0YToxNzE1MiB2ZWM6IDAsIGVudHJ5OiAweDMp
IG1hcHBlZDpub3RfbWFwcGVkIApbMDA6MDUuMF0gbXNpX21zaXhfc2V0dXA6IG9sZF9waXJx
X2NhbGMgPSA0ClswMDowNS4wXSBtc2lfbXNpeF9zZXR1cDogcmVxdWVzdGVkIHBpcnEgNCBm
b3IgTVNJLVggKHZlYzogMCwgZW50cnk6IDB4MykKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0
ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBwaXJxIDQgZ3ZlYyAwIGdmbGFncyAweDMwNTQgKGVu
dHJ5OiAweDMpClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDMg
IHJjID0gMCBlbnRyeSB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVf
b25lOiBNZSBoZXJlIGVudHJ5X25yOjQgcGlycTotMSBlbnRyeV91cGRhdGVkOjAgClswMDow
NS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50cnlfbnI6NCBub3Qg
dXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21zaXhfdXBkYXRlX29uZTogTWUgaGVyZSBl
bnRyeV9ucjo1IHBpcnE6LTEgZW50cnlfdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X21z
aXhfdXBkYXRlX29uZTogTWUgaGVyZSAxIGVudHJ5X25yOjUgbm90IHVwZGF0ZWQ6MCAKWzAw
OjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6IE1lIGhlcmUgZW50cnlfbnI6NiBwaXJx
Oi0xIGVudHJ5X3VwZGF0ZWQ6MCAKWzAwOjA1LjBdIHhlbl9wdF9tc2l4X3VwZGF0ZV9vbmU6
IE1lIGhlcmUgMSBlbnRyeV9ucjo2IG5vdCB1cGRhdGVkOjAgClswMDowNS4wXSB4ZW5fcHRf
bXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIGVudHJ5X25yOjcgcGlycTotMSBlbnRyeV91cGRh
dGVkOjAgClswMDowNS4wXSB4ZW5fcHRfbXNpeF91cGRhdGVfb25lOiBNZSBoZXJlIDEgZW50
cnlfbnI6NyBub3QgdXBkYXRlZDowIApbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZp
ZzogYWRkcmVzcz0weDAwYTQgdmFsPTB4MDAwMDhmYzAgbGVuPTQKWzAwOjA1LjBdIHhlbl9w
dF9wY2lfcmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDA2IHZhbD0weDAwMDAwMDEwIGxlbj0y
ClswMDowNS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDAzNCB2YWw9
MHgwMDAwMDA1MCBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRk
cmVzcz0weDAwNTAgdmFsPTB4MDAwMDAwMDEgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lf
cmVhZF9jb25maWc6IGFkZHJlc3M9MHgwMDUxIHZhbD0weDAwMDAwMDcwIGxlbj0xClswMDow
NS4wXSB4ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA3MCB2YWw9MHgwMDAw
MDAwNSBsZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0w
eDAwNzEgdmFsPTB4MDAwMDAwOTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9j
b25maWc6IGFkZHJlc3M9MHgwMDkwIHZhbD0weDAwMDAwMDExIGxlbj0xClswMDowNS4wXSB4
ZW5fcHRfcGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA5MSB2YWw9MHgwMDAwMDBhMCBs
ZW49MQpbMDA6MDUuMF0geGVuX3B0X3BjaV9yZWFkX2NvbmZpZzogYWRkcmVzcz0weDAwYTAg
dmFsPTB4MDAwMDAwMTAgbGVuPTEKWzAwOjA1LjBdIHhlbl9wdF9wY2lfcmVhZF9jb25maWc6
IGFkZHJlc3M9MHgwMGExIHZhbD0weDAwMDAwMDAwIGxlbj0xClswMDowNS4wXSB4ZW5fcHRf
cGNpX3JlYWRfY29uZmlnOiBhZGRyZXNzPTB4MDA1NCB2YWw9MHgwMDAwMDAwOCBsZW49Mgo=
------------09400606801CFB705
Content-Type: text/plain;
 name="xl-dmesg.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg.txt"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fX18gICAgICAgICAgICAgICAgICAgIF8g
ICAgICAgIF8gICAgIF8gICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9fXyAv
ICAgIF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18gDQogIFwgIC8vIF8g
XCAnXyBcICB8IHx8IHxfICAgfF8gXCBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdf
IFx8IHwvIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgX19fKSB8X198IHxffCB8
IHwgfCBcX18gXCB8fCAoX3wgfCB8XykgfCB8ICBfXy8NCiAvXy9cX1xfX198X3wgfF98ICAg
IHxffChfKV9fX18vICAgIFxfXyxffF98IHxffF9fXy9cX19cX18sX3xfLl9fL3xffFxfX198
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4zLXVuc3RhYmxl
IChyb290QGR5bmRucy5vcmcpIChnY2MgKERlYmlhbiA0LjQuNS04KSA0LjQuNSkgRnJpIERl
YyAgNyAyMTowOToyMSBDRVQgMjAxMg0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogVGh1IERl
YyAwNiAxNDoyMDoxNSAyMDEyICswMTAwIDI2MjQ3OmQ4MjhmMjNiNzJjOA0KKFhFTikgQm9v
dGxvYWRlcjogR1JVQiAxLjk4KzIwMTAwODA0LTE0K3NxdWVlemUxDQooWEVOKSBDb21tYW5k
IGxpbmU6IGRvbTBfbWVtPTEwMjRNLG1heDoxMDI0TSBsb2dsdmw9YWxsIGxvZ2x2bF9ndWVz
dD1hbGwgY29uc29sZV90aW1lc3RhbXBzIHZnYT1nZngtMTI4MHgxMDI0eDMyIGNwdWlkbGUg
Y3B1ZnJlcT14ZW4gbm9yZWJvb3QgZGVidWcgbGFwaWM9ZGVidWcgYXBpY192ZXJib3NpdHk9
ZGVidWcgYXBpYz1kZWJ1ZyBpb21tdT1vbix2ZXJib3NlLGRlYnVnLGFtZC1pb21tdS1kZWJ1
ZyBjb20xPTM4NDAwLDhuMSBjb25zb2xlPXZnYSxjb20xDQooWEVOKSBWaWRlbyBpbmZvcm1h
dGlvbjoNCihYRU4pICBWR0EgaXMgZ3JhcGhpY3MgbW9kZSAxMjgweDEwMjQsIDMyIGJwcA0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNv
bmRzDQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDIgTUJSIHNpZ25h
dHVyZXMNCihYRU4pICBGb3VuZCAyIEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVO
KSBYZW4tZTgyMCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAw
MDAwMDlmMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZjAwMCAtIDAwMDAwMDAw
MDAwYTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTQwMDAgLSAwMDAwMDAw
MDAwMTAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAw
MDBhZmY5MDAwMCAodXNhYmxlKQ0KKFhFTikgIDAwMDAwMDAwYWZmOTAwMDAgLSAwMDAwMDAw
MGFmZjllMDAwIChBQ1BJIGRhdGEpDQooWEVOKSAgMDAwMDAwMDBhZmY5ZTAwMCAtIDAwMDAw
MDAwYWZmZTAwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAwMDAwYWZmZTAwMDAgLSAwMDAw
MDAwMGIwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZmZTAwMDAwIC0gMDAw
MDAwMDEwMDAwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDEwMDAwMDAwMCAtIDAw
MDAwMDAyNTAwMDAwMDAgKHVzYWJsZSkNCihYRU4pIEFDUEk6IFJTRFAgMDAwRkIxMDAsIDAw
MTQgKHIwIEFDUElBTSkNCihYRU4pIEFDUEk6IFJTRFQgQUZGOTAwMDAsIDAwNDggKHIxIE1T
SSAgICBPRU1TTElDICAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRkFD
UCBBRkY5MDIwMCwgMDA4NCAocjEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQgICAg
ICAgOTcpDQooWEVOKSBBQ1BJOiBEU0RUIEFGRjkwNUUwLCA5NDI3IChyMSAgQTc2NDAgQTc2
NDAxMDAgICAgICAxMDAgSU5UTCAyMDA1MTExNykNCihYRU4pIEFDUEk6IEZBQ1MgQUZGOUUw
MDAsIDAwNDANCihYRU4pIEFDUEk6IEFQSUMgQUZGOTAzOTAsIDAwODggKHIxIDc2NDBNUyBB
NzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogTUNGRyBBRkY5
MDQyMCwgMDAzQyAocjEgNzY0ME1TIE9FTU1DRkcgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcp
DQooWEVOKSBBQ1BJOiBTTElDIEFGRjkwNDYwLCAwMTc2IChyMSBNU0kgICAgT0VNU0xJQyAg
MjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IE9FTUIgQUZGOUUwNDAsIDAw
NzIgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikg
QUNQSTogU1JBVCBBRkY5QTVFMCwgMDEwOCAocjMgQU1EICAgIEZBTV9GXzEwICAgICAgICAy
IEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBIUEVUIEFGRjlBNkYwLCAwMDM4IChyMSA3
NjQwTVMgT0VNSFBFVCAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IElW
UlMgQUZGOUE3MzAsIDAwRjggKHIxICBBTUQgICAgIFJEODkwUyAgIDIwMjAzMSBBTUQgICAg
ICAgICAwKQ0KKFhFTikgQUNQSTogU1NEVCBBRkY5QTgzMCwgMERBNCAocjEgQSBNIEkgIFBP
V0VSTk9XICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBTeXN0ZW0gUkFNOiA4MTkx
TUIgKDgzODc3NzJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBOb2RlIDAN
CihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBY
TSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNCAtPiBOb2RlIDANCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgNSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IE5vZGUgMCBQ
WE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwLWIwMDAwMDAw
DQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTI1MDAwMDAwMA0KKFhFTikg
TlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAyNGRhMDEwMDAgLSAyNGRhMDQwMDAN
CihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgRG9tYWlu
IGhlYXAgaW5pdGlhbGlzZWQNCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIgYXQgMHhmYjAw
MDAwMCwgbWFwcGVkIHRvIDB4ZmZmZjgyYzAwMDA4MTAwMCwgdXNpbmcgNjE0NGssIHRvdGFs
IDE0MzM2aw0KKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwgbGluZWxlbmd0
aD01MTIwLCBmb250IDh4MTYNCihYRU4pIHZlc2FmYjogVHJ1ZWNvbG9yOiBzaXplPTg6ODo4
OjgsIHNoaWZ0PTI0OjE2Ojg6MA0KKFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZm
NzgwDQooWEVOKSBETUkgcHJlc2VudC4NCihYRU4pIEFQSUMgYm9vdCBzdGF0ZSBpcyAneGFw
aWMnDQooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1U
aW1lciBJTyBQb3J0OiAweDgwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4
X2NudFs4MDQsMF0sIHBtMXhfZXZ0WzgwMCwwXQ0KKFhFTikgQUNQSTogICAgICAgICAgICAg
ICAgICB3YWtldXBfdmVjW2FmZjllMDBjXSwgdmVjX3NpemVbMjBdDQooWEVOKSBBQ1BJOiBM
b2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMCAw
OjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0g
bGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMSAwOjEwIEFQSUMg
dmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRb
MHgwMl0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMiAwOjEwIEFQSUMgdmVyc2lvbiAx
Ng0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwM10gZW5h
YmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMyAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikg
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkNCihY
RU4pIFByb2Nlc3NvciAjNCAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgwNl0gbGFwaWNfaWRbMHgwNV0gZW5hYmxlZCkNCihYRU4pIFByb2Nl
c3NvciAjNSAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogSU9BUElDIChpZFsw
eDA2XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBd
OiBhcGljX2lkIDYsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwN10gYWRkcmVzc1sweGZlYzIwMDAwXSBnc2lf
YmFzZVsyNF0pDQooWEVOKSBJT0FQSUNbMV06IGFwaWNfaWQgNywgdmVyc2lvbiAzMywgYWRk
cmVzcyAweGZlYzIwMDAwLCBHU0kgMjQtNTUNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVO
KSBBQ1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQg
Ynkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVO
KSBFbmFibGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMiBJL08gQVBJQ3MNCihYRU4p
IEFDUEk6IEhQRVQgaWQ6IDB4ODMwMCBiYXNlOiAweGZlZDAwMDAwDQooWEVOKSBUYWJsZSBp
cyBub3QgZm91bmQhDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3Vy
YXRpb24gaW5mb3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNiBDUFVzICgwIGhvdHBs
dWcgQ1BVcykNCihYRU4pIG1hcHBlZCBBUElDIHRvIGZmZmY4MmMzZmZkZmIwMDAgKGZlZTAw
MDAwKQ0KKFhFTikgbWFwcGVkIElPQVBJQyB0byBmZmZmODJjM2ZmZGZhMDAwIChmZWMwMDAw
MCkNCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyYzNmZmRmOTAwMCAoZmVjMjAwMDAp
DQooWEVOKSBJUlEgbGltaXRzOiA1NiBHU0ksIDExMTIgTVNJL01TSS1YDQooWEVOKSBVc2lu
ZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBEZXRl
Y3RlZCAzMjAwLjIzMSBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGluZyBtZW1vcnkgc2hh
cmluZy4NCihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxl
ZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBlMDAwMDAwMCBzZWdt
ZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Ig
c2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZpOiBGb3VuZCBNU0kgY2FwYWJp
bGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkgVGFibGU6DQooWEVOKSBB
TUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4ZjgNCihY
RU4pIEFNRC1WaTogIFJldmlzaW9uIDB4MQ0KKFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0gMHg1
MA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRCAgDQooWEVOKSBBTUQtVmk6ICBPRU1fVGFi
bGVfSWQgUkQ4OTBTDQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgyMDIwMzENCihY
RU4pIEFNRC1WaTogIENyZWF0b3JfSWQgQU1EIA0KKFhFTikgQU1ELVZpOiAgQ3JlYXRvcl9S
ZXZpc2lvbiAwDQooWEVOKSBBTUQtVmk6IElWUlMgQmxvY2s6DQooWEVOKSBBTUQtVmk6ICBU
eXBlIDB4MTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4M2UNCihYRU4pIEFNRC1WaTogIExl
bmd0aCAweGM4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyDQooWEVOKSBBTUQtVmk6IElW
SEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgUmFuZ2U6IDAgLT4gMHgyDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweDEw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGIwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZf
SWQgMHgxOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERl
dmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgMHg5MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAweGEwOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIFJhbmdlOiAweGEwOCAtPiAweGFmZg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IEFsaWFzOiAweGEwMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihYRU4p
IEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyOA0KKFhFTikg
QU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihY
RU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHg4MDANCihY
RU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6
DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4MzAN
CihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50
cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4
NzAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweDUwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDYwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg1OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQt
Vmk6ICBEZXZfSWQgMHg1MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogIERldl9JZCBSYW5nZTogMHg1MDAgLT4gMHg1MDENCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4NjgNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4NDAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6
IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFN
RC1WaTogIERldl9JZCAweDg4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4p
IEFNRC1WaTogIERldl9JZCAweDkwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBB
TUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OTAgLT4gMHg5Mg0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg5OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweDk4IC0+IDB4OWENCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IDB4YTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4YTENCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4YTINCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTog
SVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1E
LVZpOiAgRGV2X0lkIDB4YTMNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikg
QU1ELVZpOiAgRGV2X0lkIDB4YTQNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFN
RC1WaTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihY
RU4pIEFNRC1WaTogIERldl9JZCAweDMwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhF
TikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDMwMCAtPiAweDNmZg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIEFsaWFzOiAweGE0DQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweGE1
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGE4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweGE5DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDEwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHhiMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweGIwIC0+IDB4YjINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDANCihYRU4pIEFNRC1WaTogIERldl9JZCAw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDQ4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQg
MA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMHhkNw0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHg0OA0KKFhFTikgQU1ELVZpOiAgRGV2
X0lkIDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSU9NTVUgMCBF
bmFibGVkLg0KKFhFTikgQU1ELVZpOiBFbmFibGluZyBnbG9iYWwgdmVjdG9yIG1hcA0KKFhF
TikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAtIERvbTAgbW9kZTogUmVs
YXhlZA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBW
RVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBJRDogMA0KKFhFTikgR2V0dGluZyBM
VlQwOiA3MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElO
VCBvbiBDUFUjMA0KKFhFTikgRVNSIHZhbHVlIGJlZm9yZSBlbmFibGluZyB2ZWN0b3I6IDB4
NCAgYWZ0ZXI6IDANCihYRU4pIEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVz
aW5nIG5ldyBBQ0sgbWV0aG9kDQooWEVOKSBpbml0IElPX0FQSUMgSVJRcw0KKFhFTikgIElP
LUFQSUMgKGFwaWNpZC1waW4pIDYtMCwgNi0xNiwgNi0xNywgNi0xOCwgNi0xOSwgNi0yMCwg
Ni0yMSwgNi0yMiwgNi0yMywgNy0wLCA3LTEsIDctMiwgNy0zLCA3LTQsIDctNSwgNy02LCA3
LTcsIDctOCwgNy05LCA3LTEwLCA3LTExLCA3LTEyLCA3LTEzLCA3LTE0LCA3LTE1LCA3LTE2
LCA3LTE3LCA3LTE4LCA3LTE5LCA3LTIwLCA3LTIxLCA3LTIyLCA3LTIzLCA3LTI0LCA3LTI1
LCA3LTI2LCA3LTI3LCA3LTI4LCA3LTI5LCA3LTMwLCA3LTMxIG5vdCBjb25uZWN0ZWQuDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4y
PS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhFTikgbnVtYmVy
IG9mIElPLUFQSUMgIzYgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIG51bWJlciBvZiBJTy1BUElD
ICM3IHJlZ2lzdGVyczogMzIuDQooWEVOKSB0ZXN0aW5nIHRoZSBJTyBBUElDLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4NCihYRU4pIElPIEFQSUMgIzYuLi4uLi4NCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAwOiAwNjAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBoeXNpY2FsIEFQSUMg
aWQ6IDA2DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTogMA0KKFhFTikgLi4u
Li4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAw
MDE3ODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczog
MDAxNw0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6IDENCihYRU4pIC4u
Li4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAuLi4uIHJlZ2lzdGVy
ICMwMjogMDYwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRyYXRpb246IDA2DQoo
WEVOKSAuLi4uIHJlZ2lzdGVyICMwMzogMDcwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDog
Qm9vdCBEVCAgICA6IDANCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOg0KKFhF
TikgIE5SIExvZyBQaHkgTWFzayBUcmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDog
ICANCihYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICAzMA0KKFhFTikgIDAyIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgRjANCihYRU4pICAwMyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAx
ICAgIDM4DQooWEVOKSAgMDQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICBGMQ0KKFhFTikgIDA1IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAg
IDEgICAgNDANCihYRU4pICAwNiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAg
ICAxICAgIDQ4DQooWEVOKSAgMDcgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICA1MA0KKFhFTikgIDA4IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgNTgNCihYRU4pICAwOSAwMDEgMDEgIDEgICAgMSAgICAwICAgMSAgIDAgICAg
MSAgICAxICAgIDYwDQooWEVOKSAgMGEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA2OA0KKFhFTikgIDBiIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgNzANCihYRU4pICAwYyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDc4DQooWEVOKSAgMGQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA4OA0KKFhFTikgIDBlIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgOTANCihYRU4pICAwZiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDk4DQooWEVOKSAgMTAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDExIDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMiAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTMgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE0IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTYgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE3IDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIElPIEFQSUMgIzcuLi4uLi4NCihY
RU4pIC4uLi4gcmVnaXN0ZXIgIzAwOiAwNzAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBo
eXNpY2FsIEFQSUMgaWQ6IDA3DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTog
MA0KKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAxOiAwMDFGODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rp
b24gZW50cmllczogMDAxRg0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6
IDENCihYRU4pIC4uLi4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAu
Li4uIHJlZ2lzdGVyICMwMjogMDAwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRy
YXRpb246IDAwDQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToNCihYRU4pICBO
UiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZlY3Q6ICAgDQoo
WEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0K
KFhFTikgIDAxIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAN
CihYRU4pICAwMiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAw
DQooWEVOKSAgMDMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MA0KKFhFTikgIDA0IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAg
MDANCihYRU4pICAwNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAg
ICAwMA0KKFhFTikgIDA3IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAwOCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMDkgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDBhIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAwYiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMGMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAwZSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMGYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAw
ICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAg
MCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE2IDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTggMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE5IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxYSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWIgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFjIDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxZCAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWUgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFmIDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIFVzaW5nIHZlY3Rvci1iYXNl
ZCBpbmRleGluZw0KKFhFTikgSVJRIHRvIHBpbiBtYXBwaW5nczoNCihYRU4pIElSUTI0MCAt
PiAwOjINCihYRU4pIElSUTQ4IC0+IDA6MQ0KKFhFTikgSVJRNTYgLT4gMDozDQooWEVOKSBJ
UlEyNDEgLT4gMDo0DQooWEVOKSBJUlE2NCAtPiAwOjUNCihYRU4pIElSUTcyIC0+IDA6Ng0K
KFhFTikgSVJRODAgLT4gMDo3DQooWEVOKSBJUlE4OCAtPiAwOjgNCihYRU4pIElSUTk2IC0+
IDA6OQ0KKFhFTikgSVJRMTA0IC0+IDA6MTANCihYRU4pIElSUTExMiAtPiAwOjExDQooWEVO
KSBJUlExMjAgLT4gMDoxMg0KKFhFTikgSVJRMTM2IC0+IDA6MTMNCihYRU4pIElSUTE0NCAt
PiAwOjE0DQooWEVOKSBJUlExNTIgLT4gMDoxNQ0KKFhFTikgLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uIGRvbmUuDQooWEVOKSBVc2luZyBsb2NhbCBBUElDIHRpbWVy
IGludGVycnVwdHMuDQooWEVOKSBjYWxpYnJhdGluZyBBUElDIHRpbWVyIC4uLg0KKFhFTikg
Li4uLi4gQ1BVIGNsb2NrIHNwZWVkIGlzIDMyMDAuMTUzNSBNSHouDQooWEVOKSAuLi4uLiBo
b3N0IGJ1cyBjbG9jayBzcGVlZCBpcyAyMDAuMDA5NSBNSHouDQooWEVOKSAuLi4uLiBidXNf
c2NhbGUgPSAweGNjZDcNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBQbGF0Zm9ybSB0
aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEFs
bG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgNjQgS2lCLg0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDNdIEhWTTogQVNJRHMgZW5hYmxlZC4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQz
XSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjI4OjQzXSAgLSBOZXN0ZWQgUGFnZSBUYWJsZXMgKE5QVCkNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQzXSAgLSBMYXN0IEJyYW5jaCBSZWNvcmQgKExCUikgVmlydHVhbGlzYXRp
b24NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAj
Vk1FWElUDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIC0gUGF1c2UtSW50ZXJjZXB0
IEZpbHRlcg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEhWTTogU1ZNIGVuYWJsZWQN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBh
Z2luZyAoSEFQKSBkZXRlY3RlZA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4
OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMxDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0M10gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBw
YXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQyXSBtYXNrZWQg
RXh0SU5UIG9uIENQVSMzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQVSM0DQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0M10gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEw
MDAwYmYNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQyXSBtYXNrZWQgRXh0SU5UIG9uIENQ
VSM1DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gQnJvdWdodCB1cCA2IENQVXMNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86
IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIEhQRVQ6
IDMgdGltZXJzICgzIHdpbGwgYmUgdXNlZCBmb3IgYnJvYWRjYXN0KQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mjg6NDNdIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0M10gTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5n
IGZyZXF1ZW5jeQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIG1jaGVja19wb2xsOiBN
YWNoaW5lIGNoZWNrIHBvbGxpbmcgdGltZXIgc3RhcnRlZC4NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjI4OjQzXSBYZW5vcHJvZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0
LCBJQlNDVEwgPSAweGZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gKioq
IExPQURJTkcgRE9NQUlOIDAgKioqDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxm
X3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4ZDJlMDAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFk
ZHI9MHgxZTAwMDAwIG1lbXN6PTB4ZDIwZjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQz
XSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFlZDMwMDAgbWVtc3o9MHgxM2Nj
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6
IHBhZGRyPTB4MWVlNzAwMCBtZW1zej0weGRlYTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MmNkMTAw
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VF
U1RfT1MgPSAibGludXgiDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9w
YXJzZV9ub3RlOiBHVUVTVF9WRVJTSU9OID0gIjIuNiINCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjI4OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JB
U0UgPSAweGZmZmZmZmZmODAwMDAwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBl
bGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhmZmZmZmZmZjgxZWU3MjEwDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFH
RSA9IDB4ZmZmZmZmZmY4MTAwMTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVzfHBh
ZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hl
bl9wYXJzZV9ub3RlOiBQQUVfTU9ERSA9ICJ5ZXMiDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyINCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVs
ZiBub3RlICgweGQpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJz
ZV9ub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6
NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZmODAwMDAwMDAw
MDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBQ
QUREUl9PRkZTRVQgPSAweDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBlbGZfeGVu
X2FkZHJfY2FsY19jaGVjazogYWRkcmVzc2VzOg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6
NDNdICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0M10gICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAweDANCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgICAgdmlydF9vZmZzZXQgICAgICA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICAgICB2aXJ0X2tzdGFy
dCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODJjZDEwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQzXSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4ZmZmZmZmZmY4
MWVlNzIxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICAgICBwMm1fYmFzZSAgICAg
ICAgID0gMHhmZmZmZmZmZmZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10g
IFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzINCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjI4OjQzXSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAw
MDAwMCAtPiAweDJjZDEwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBQSFlTSUNB
TCBNRU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIERv
bTAgYWxsb2MuOiAgIDAwMDAwMDAyNDAwMDAwMDAtPjAwMDAwMDAyNDQwMDAwMDAgKDI0MjUx
NiBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10g
IEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGYzNTQwMDAtPjAwMDAwMDAyNGZmZmZjMDANCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4
MTAwMDAwMC0+ZmZmZmZmZmY4MmNkMTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNd
ICBJbml0LiByYW1kaXNrOiBmZmZmZmZmZjgyY2QxMDAwLT5mZmZmZmZmZjgzOTdjYzAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIFBoeXMtTWFjaCBtYXA6IGZmZmZmZmZmODM5
N2QwMDAtPmZmZmZmZmZmODNiN2QwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAg
U3RhcnQgaW5mbzogICAgZmZmZmZmZmY4M2I3ZDAwMC0+ZmZmZmZmZmY4M2I3ZDRiNA0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgzYjdl
MDAwLT5mZmZmZmZmZjgzYmExMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0M10gIEJv
b3Qgc3RhY2s6ICAgIGZmZmZmZmZmODNiYTEwMDAtPmZmZmZmZmZmODNiYTIwMDANCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjI4OjQzXSAgVE9UQUw6ICAgICAgICAgZmZmZmZmZmY4MDAwMDAw
MC0+ZmZmZmZmZmY4NDAwMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdICBFTlRS
WSBBRERSRVNTOiBmZmZmZmZmZjgxZWU3MjEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHhmZmZmZmZmZjgxMDAwMDAwIC0+IDB4
ZmZmZmZmZmY4MWQyZTAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDNdIGVsZl9sb2Fk
X2JpbmFyeTogcGhkciAxIGF0IDB4ZmZmZmZmZmY4MWUwMDAwMCAtPiAweGZmZmZmZmZmODFl
ZDIwZjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQzXSBlbGZfbG9hZF9iaW5hcnk6IHBo
ZHIgMiBhdCAweGZmZmZmZmZmODFlZDMwMDAgLT4gMHhmZmZmZmZmZjgxZWU2Y2MwDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDMgYXQgMHhm
ZmZmZmZmZjgxZWU3MDAwIC0+IDB4ZmZmZmZmZmY4MWY4YTAwMA0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRd
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHJvb3Qg
dGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHgxOCwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4LCByb290IHRhYmxlID0g
MHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4MzAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5n
IG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MCwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDU4
LCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
Mw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4NjgsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4OCwgcm9vdCB0
YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDkwLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAs
IHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTIsIHJvb3QgdGFibGUgPSAw
eDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHg5OCwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCByb290IHRhYmxlID0gMHgyNDk4YTMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTAs
IHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMSwgcm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEyLCByb290IHRh
YmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikg
WzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YTMsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwg
cGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgcm9vdCB0YWJsZSA9IDB4
MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweGE1LCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTgsIHJvb3QgdGFibGUgPSAweDI0OThhMzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMCwg
cm9vdCB0YWJsZSA9IDB4MjQ5OGEzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGIyLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFN
RC1WaTogTm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC4x
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2
aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDRdIEFNRC1WaTog
Tm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjMNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjI4OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC40DQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MDAsIHJvb3QgdGFibGUg
PSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHg1MDEsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2MDAsIHJvb3QgdGFibGUgPSAweDI0
OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg3MDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MDAs
IHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMDAsIHJvb3QgdGFibGUgPSAweDI0OThhMzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0NF0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMDAsIHJvb3Qg
dGFibGUgPSAweDI0OThhMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0NF0gU2NydWJiaW5nIEZyZWUgUkFNOiAuLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLmRvbmUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gSW5pdGlhbCBsb3cg
bWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0Nl0gU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0Nl0gR3Vlc3QgTG9nbGV2ZWw6IEFsbA0KKFhFTikgWzIwMTItMTItMDcgMjA6
Mjg6NDZdIFhlbiBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLg0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mjg6NDZdICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1h
JyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mjg6NDZdIEZyZWVkIDI1MmtCIGluaXQgbWVtb3J5Lg0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mjg6NDZdIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTkgLT4g
MHg2MCAtPiBJUlEgOSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0Nl0gdHJhcHMuYzoyNDg2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYu
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDow
MC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDow
MDowMC4yDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDowMi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDowMy4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDowNS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDowNi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDowYS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDowYi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDowZC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMi4wDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMi4yDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4yDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4wDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4yDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4z
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
NC40DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxNC41DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxNS4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDoxNi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxNi4yDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDoxOC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxOC4xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDoxOC4yDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4zDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0
Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC40DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoy
ODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYjowMC4wDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowOTowMC4wDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMS4wDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMS4xDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMS4yDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowODowMC4wDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNzowMC4wDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNjowMC4w
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNTow
MC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDow
NTowMC4xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowNDowMC4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gUENJIGFkZCBkZXZpY2Ug
MDAwMDowMzowNi4wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gSU9BUElDWzBdOiBT
ZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtOCAtPiAweDU4IC0+IElSUSA4IE1vZGU6MCBBY3Rp
dmU6MCkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ2XSBJT0FQSUNbMF06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNi0xMyAtPiAweDg4IC0+IElSUSAxMyBNb2RlOjAgQWN0aXZlOjAp
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0Nl0gSU9BUElDWzFdOiBTZXQgUENJIHJvdXRp
bmcgZW50cnkgKDctMjggLT4gMHhiOCAtPiBJUlEgNTIgTW9kZToxIEFjdGl2ZToxKQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mjg6NDZdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVu
dHJ5ICg3LTI5IC0+IDB4YzAgLT4gSVJRIDUzIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQ2XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ny0zMCAtPiAweGM4IC0+IElSUSA1NCBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDoyODo0Nl0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTYg
LT4gMHhkMCAtPiBJUlEgMTYgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6Mjg6NDZdIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTE4IC0+IDB4
ZDggLT4gSVJRIDE4IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4
OjQ3XSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNi0xNyAtPiAweDIxIC0+
IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10g
SU9BUElDWzFdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDctNSAtPiAweDI5IC0+IElSUSAy
OSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10gSU9BUElD
WzFdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDctNiAtPiAweDMxIC0+IElSUSAzMCBNb2Rl
OjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10gSU9BUElDWzFdOiBT
ZXQgUENJIHJvdXRpbmcgZW50cnkgKDctNyAtPiAweDM5IC0+IElSUSAzMSBNb2RlOjEgQWN0
aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoyODo0N10gSU9BUElDWzFdOiBTZXQgUENJ
IHJvdXRpbmcgZW50cnkgKDctMTYgLT4gMHg0MSAtPiBJUlEgNDAgTW9kZToxIEFjdGl2ZTox
KQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6NDddIElPQVBJQ1swXTogU2V0IFBDSSByb3V0
aW5nIGVudHJ5ICg2LTIyIC0+IDB4ODkgLT4gSVJRIDIyIE1vZGU6MSBBY3RpdmU6MSkNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ3XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBl
bnRyeSAoNy05IC0+IDB4OTEgLT4gSVJRIDMzIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjI4OjQ3XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ny04IC0+IDB4OTkgLT4gSVJRIDMyIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjI4OjQ3XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yMyAt
PiAweGExIC0+IElSUSA0NyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDoyODo0OF0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTkgLT4gMHhh
OSAtPiBJUlEgMTkgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mjg6
NDhdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTIyIC0+IDB4YjkgLT4g
SVJRIDQ2IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjI4OjQ4XSBJ
T0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yNyAtPiAweGM5IC0+IElSUSA1
MSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMDowNl0gdHJhcHMu
YzoyNDg2OmQxIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9t
IDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYuDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozMDoxMV0gdHJhcHMuYzoyNDg2OmQyIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAw
MDAwMDAwMGZmZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMDoxN10gdHJhcHMuYzoyNDg2
OmQzIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAw
MDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYuDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozMDoyM10gdHJhcHMuYzoyNDg2OmQ0IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAw
MGZmZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMDoyOV0gdHJhcHMuYzoyNDg2OmQ1IERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAw
MDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZmZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MDozMV0gZ3JhbnRfdGFibGUuYzozMTM6ZDAgSW5jcmVhc2VkIG1hcHRyYWNrIHNpemUgdG8g
MiBmcmFtZXMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMwOjM1XSB0cmFwcy5jOjI0ODY6ZDYg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwMDAw
MDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMwOjQyXSB0cmFwcy5jOjI0ODY6ZDcgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZm
Zi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMwOjQ3XSB0cmFwcy5jOjI0ODY6ZDggRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAw
MDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMwOjUz
XSB0cmFwcy5jOjI0ODY6ZDkgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjMwOjU5XSBncmFudF90YWJsZS5jOjMxMzpkMCBJbmNyZWFz
ZWQgbWFwdHJhY2sgc2l6ZSB0byAzIGZyYW1lcw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzA6
NTldIHRyYXBzLmM6MjQ4NjpkMTAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZmZi4N
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMxOjA2XSBBTUQtVmk6IERpc2FibGU6IGRldmljZSBp
ZCA9IDB4YTQsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzE6MDZdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4YTQsIHJvb3QgdGFibGUgPSAweDE4ZTA5ZTAwMCwgZG9tYWluID0gMTEsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzE6MDZdIEFNRC1WaTogUmUtYXNzaWdu
IDAwMDA6MDM6MDYuMCBmcm9tIGRvbTAgdG8gZG9tMTENCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMxOjA2XSB0cmFwcy5jOjI0ODY6ZDExIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAweDAwMDAwMDAwMDAwMGZm
ZmYuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMToxNl0gdHJhcHMuYzoyNDg2OmQxMiBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAw
MDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZmLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzE6
MjFdIGdyYW50X3RhYmxlLmM6MzEzOmQwIEluY3JlYXNlZCBtYXB0cmFjayBzaXplIHRvIDQg
ZnJhbWVzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMToyMl0gdHJhcHMuYzoyNDg2OmQxMyBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAw
MDAwMDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZmLg0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzE6MjldIHRyYXBzLmM6MjQ4NjpkMTQgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZm
Zi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMxOjUzXSBncmFudF90YWJsZS5jOjMxMzpkMCBJ
bmNyZWFzZWQgbWFwdHJhY2sgc2l6ZSB0byA1IGZyYW1lcw0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MDhdIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJs
ZSA9IDB4MWMxZjhjDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gaW8uYzoyODI6IGQx
NTogYmluZDogbV9nc2k9MTYgZ19nc2k9MzYgZGV2aWNlPTUgaW50eD0wDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozMzoxMl0gQU1ELVZpOiBEaXNhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzox
Ml0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg0MDAsIHJv
b3QgdGFibGUgPSAweDFjMWY4YzAwMCwgZG9tYWluID0gMTUsIHBhZ2luZyBtb2RlID0gNA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEFNRC1WaTogUmUtYXNzaWduIDAwMDA6MDQ6
MDAuMCBmcm9tIGRvbTAgdG8gZG9tMTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBI
Vk0xNTogSFZNIExvYWRlcg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBE
ZXRlY3RlZCBYZW4gdjQuMy11bnN0YWJsZQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJd
IEhWTTE1OiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2ZW50IGNoYW5uZWwgNQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBTeXN0ZW0gcmVxdWVzdGVkIFJPTUJJ
T1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogQ1BVIHNwZWVkIGlzIDMy
MDAgTUh6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gaXJxLmM6MjcwOiBEb20xNSBQ
Q0kgbGluayAwIGNoYW5nZWQgMCAtPiA1DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1DQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozMzoxMl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAxIGNoYW5nZWQgMCAtPiAx
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBQQ0ktSVNBIGxpbmsgMSBy
b3V0ZWQgdG8gSVJRMTANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBpcnEuYzoyNzA6
IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoxMl0gSFZNMTU6IFBDSS1JU0EgbGluayAyIHJvdXRlZCB0byBJUlExMQ0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMyBjaGFu
Z2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBQQ0ktSVNB
IGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhW
TTE1OiBwY2kgZGV2IDAxOjIgSU5URC0+SVJRNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTANCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIElOVEEtPklSUTUNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwNDowIElOVEEtPklSUTUNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwNTowIElOVEEtPklSUTEw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDI6MCBiYXIg
MTAgc2l6ZSBseDogMDIwMDAwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIGx4OiAwMTAwMDAwMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAgYmFyIDEwIHNpemUgbHg6IDAw
MDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDU6
MCBiYXIgMTAgc2l6ZSBseDogMDAwMDIwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEy
XSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDI6MCBiYXIgMTQgc2l6
ZSBseDogMDAwMDEwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNp
IGRldiAwMzowIGJhciAxMCBzaXplIGx4OiAwMDAwMDEwMA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAgYmFyIDE0IHNpemUgbHg6IDAwMDAwMDQw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBkZXYgMDE6MiBiYXIg
MjAgc2l6ZSBseDogMDAwMDAwMjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIGx4OiAwMDAwMDAxMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzM6MTJdIEhWTTE1OiBNdWx0aXByb2Nlc3NvciBpbml0aWFsaXNhdGlvbjoN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gQ1BVMCAuLi4gNDgtYml0
IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICAtIENQVTEgLi4uIDQ4LWJpdCBw
aHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLg0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgLSBDUFUyIC4uLiA0OC1iaXQgcGh5
cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4NCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogVGVzdGluZyBIVk0gZW52aXJvbm1lbnQ6
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICAtIFJFUCBJTlNCIGFjcm9z
cyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBhc3NlZCAyIG9mIDIgdGVzdHMNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogV3JpdGluZyBTTUJJT1MgdGFibGVz
IC4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBMb2FkaW5nIFJPTUJJ
T1MgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IDk2NjAgYnl0ZXMg
b2YgUk9NQklPUyBoaWdoLW1lbW9yeSBleHRlbnNpb25zOg0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiAgIFJlbG9jYXRpbmcgdG8gMHhmYzAwMTAwMC0weGZjMDAzNWJj
IC4uLiBkb25lDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IENyZWF0aW5n
IE1QIHRhYmxlcyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogTG9h
ZGluZyBDaXJydXMgVkdBQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBI
Vk0xNTogTG9hZGluZyBQQ0kgT3B0aW9uIFJPTSAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogIC0gTWFudWZhY3R1cmVyOiBodHRwOi8vaXB4ZS5vcmcNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gUHJvZHVjdCBuYW1lOiBpUFhFDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IE9wdGlvbiBST01zOg0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgYzAwMDAtYzhmZmY6IFZHQSBCSU9TDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICBjOTAwMC1kOWZmZjogRXRoZXJi
b290IFJPTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBMb2FkaW5nIEFD
UEkgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHZtODYgVFNTIGF0
IGZjMDBmNjgwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IEJJT1MgbWFw
Og0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgZjAwMDAtZmZmZmY6IE1h
aW4gQklPUw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBFODIwIHRhYmxl
Og0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzAwXTogMDAwMDAwMDA6
MDAwMDAwMDAgLSAwMDAwMDAwMDowMDA5ZTAwMDogUkFNDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoxMl0gSFZNMTU6ICBbMDFdOiAwMDAwMDAwMDowMDA5ZTAwMCAtIDAwMDAwMDAwOjAw
MGEwMDAwOiBSRVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAg
SE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMA0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzAyXTogMDAwMDAwMDA6MDAwZTAwMDAgLSAw
MDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEy
XSBIVk0xNTogIFswM106IDAwMDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6MmY4MDAwMDA6
IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgSE9MRTogMDAwMDAw
MDA6MmY4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzM6MTJdIEhWTTE1OiAgWzA0XTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTowMDAw
MDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogSW52
b2tpbmcgUk9NQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTog
JFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4LzEyLzA3IDE3OjMyOjI5ICQNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBzdGR2Z2EuYzoxNDc6ZDE1IGVudGVyaW5nIHN0ZHZn
YSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiBWR0FCaW9zICRJZDogdmdhYmlvcy5jLHYgMS42NyAyMDA4LzAxLzI3IDA5OjQ0OjEyIHZy
dXBwZXJ0IEV4cCAkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IEJvY2hz
IEJJT1MgLSBidWlsZDogMDYvMjMvOTkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBI
Vk0xNTogJFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4LzEyLzA3IDE3OjMyOjI5ICQN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogT3B0aW9uczogYXBtYmlvcyBw
Y2liaW9zIGVsdG9yaXRvIFBNTSANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IGF0YTAtMDogUENIUz0x
NjM4My8xNi82MyB0cmFuc2xhdGlvbj1sYmEgTENIUz0xMDI0LzI1NS82Mw0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBhdGEwIG1hc3RlcjogUUVNVSBIQVJERElTSyBB
VEEtNyBIYXJkLURpc2sgKDEwMjQwIE1CeXRlcykNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMz
OjEyXSBIVk0xNTogYXRhMC0xOiBQQ0hTPTE2MzgzLzE2LzYzIHRyYW5zbGF0aW9uPWxiYSBM
Q0hTPTEwMjQvMjU1LzYzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IGF0
YTAgIHNsYXZlOiBRRU1VIEhBUkRESVNLIEFUQS03IEhhcmQtRGlzayAoIDMwMCBHQnl0ZXMp
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IA0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MTJdIEhWTTE1OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFByZXNzIEYxMiBmb3Ig
Ym9vdCBtZW51Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiANCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogQm9vdGluZyBmcm9tIEhhcmQgRGlzay4u
Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBCb290aW5nIGZyb20gMDAw
MDo3YzAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNF0gZ3JhbnRfdGFibGUuYzozMTM6
ZDAgSW5jcmVhc2VkIG1hcHRyYWNrIHNpemUgdG8gNiBmcmFtZXMNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjI1XSBpcnEuYzozNzU6IERvbTE1IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRv
IERpcmVjdCBWZWN0b3IgMHhmMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1m
bj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDpy
ZW1vdmU6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZl
IG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoyNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNSBn
Zm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBt
ZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE1IGdmbj1mMzAy
MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9t
YXA6YWRkOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1m
OThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDphZGQ6
IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzM6MjZdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMCBjaGFuZ2VkIDUgLT4gMA0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MjZdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMSBj
aGFuZ2VkIDEwIC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBpcnEuYzoyNzA6
IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoyNl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjozMF0gQU1ELVZpOiBEaXNhYmxlOiBkZXZpY2UgaWQg
PSAweDQwMCwgZG9tYWluID0gMTUsIHBhZ2luZyBtb2RlID0gNA0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzY6MzBdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4NDAwLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6MzBdIEFNRC1WaTogUmUtYXNzaWdu
IDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTE1IHRvIGRvbTANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjUzXSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUg
PSAweDFjMWU5Zg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIGlvLmM6MjgyOiBkMTY6
IGJpbmQ6IG1fZ3NpPTE2IGdfZ3NpPTM2IGRldmljZT01IGludHg9MA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEFNRC1WaTogRGlzYWJsZTogZGV2aWNlIGlkID0gMHg0MDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NDAwLCByb290
IHRhYmxlID0gMHgxYzFlOWYwMDAsIGRvbWFpbiA9IDE2LCBwYWdpbmcgbW9kZSA9IDQNCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBBTUQtVmk6IFJlLWFzc2lnbiAwMDAwOjA0OjAw
LjAgZnJvbSBkb20wIHRvIGRvbTE2DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZN
MTY6IEhWTSBMb2FkZXINCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRGV0
ZWN0ZWQgWGVuIHY0LjMtdW5zdGFibGUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogWGVuYnVzIHJpbmdzIEAweGZlZmZjMDAwLCBldmVudCBjaGFubmVsIDUNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogU3lzdGVtIHJlcXVlc3RlZCBTZWFCSU9T
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENQVSBzcGVlZCBpcyAzMjAw
IE1Ieg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIGlycS5jOjI3MDogRG9tMTYgUENJ
IGxpbmsgMCBjaGFuZ2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhW
TTE2OiBQQ0ktSVNBIGxpbmsgMCByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIGlycS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMSBjaGFuZ2VkIDAgLT4gMTAN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogUENJLUlTQSBsaW5rIDEgcm91
dGVkIHRvIElSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gaXJxLmM6MjcwOiBE
b20xNiBQQ0kgbGluayAyIGNoYW5nZWQgMCAtPiAxMQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiBQQ0ktSVNBIGxpbmsgMiByb3V0ZWQgdG8gSVJRMTENCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBpcnEuYzoyNzA6IERvbTE2IFBDSSBsaW5rIDMgY2hhbmdl
ZCAwIC0+IDUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogUENJLUlTQSBs
aW5rIDMgcm91dGVkIHRvIElSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0x
NjogcGNpIGRldiAwMToyIElOVEQtPklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0
XSBIVk0xNjogcGNpIGRldiAwMTozIElOVEEtPklSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1NF0gSFZNMTY6IHBjaSBkZXYgMDM6MCBJTlRBLT5JUlE1DQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1DQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHBjaSBkZXYgMDU6MCBJTlRBLT5JUlExMA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAyOjAgYmFyIDEw
IHNpemUgbHg6IDAyMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDM6MCBiYXIgMTQgc2l6ZSBseDogMDEwMDAwMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNDowIGJhciAxMCBzaXplIGx4OiAwMDAy
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDA0OjAg
YmFyIDMwIHNpemUgbHg6IDAwMDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSBseDogMDAwMTAwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNTowIGJhciAxMCBzaXplIGx4
OiAwMDAwMjAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIG1lbW9yeV9tYXA6YWRk
OiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMjowIGJhciAxNCBzaXplIGx4OiAwMDAwMTAwMA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAzOjAgYmFyIDEw
IHNpemUgbHg6IDAwMDAwMTAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDQ6MCBiYXIgMTQgc2l6ZSBseDogMDAwMDAwNDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMToyIGJhciAyMCBzaXplIGx4OiAwMDAw
MDAyMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjEg
YmFyIDIwIHNpemUgbHg6IDAwMDAwMDEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IE11bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOg0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzY6NTRdIEhWTTE2OiAgLSBDUFUwIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQg
TVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM2OjU0XSBIVk0xNjogIC0gQ1BVMSAuLi4gNDgtYml0IHBoeXMgLi4uIGZpeGVkIE1U
UlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1NF0gSFZNMTY6ICAtIENQVTIgLi4uIDQ4LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJS
cyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLg0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRhcmll
cyAuLi4gcGFzc2VkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICAtIEdT
IGJhc2UgTVNScyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogUGFzc2VkIDIgb2YgMiB0ZXN0cw0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IExvYWRpbmcgU2VhQklPUyAuLi4NCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogQ3JlYXRpbmcgTVAgdGFibGVzIC4uLg0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBMb2FkaW5nIEFDUEkgLi4uDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHZtODYgVFNTIGF0IGZjMDBhMDgwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEJJT1MgbWFwOg0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgMTAwMDAtMTAwZDM6IFNjcmF0Y2ggc3BhY2UN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIGUwMDAwLWZmZmZmOiBNYWlu
IEJJT1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRTgyMCB0YWJsZToN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIFswMF06IDAwMDAwMDAwOjAw
MDAwMDAwIC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDowMDBl
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgWzAxXTogMDAwMDAw
MDA6MDAwZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIFswMl06IDAwMDAwMDAwOjAwMTAwMDAwIC0gMDAw
MDAwMDA6MmY4MDAwMDA6IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiAgSE9MRTogMDAwMDAwMDA6MmY4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMA0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgWzAzXTogMDAwMDAwMDA6ZmMwMDAwMDAg
LSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2
OjU0XSBIVk0xNjogSW52b2tpbmcgU2VhQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogU2VhQklPUyAodmVyc2lvbiByZWwtMS43LjEtNDQtZ2IxYzM1ZjIt
MjAxMjEyMDdfMjExMTA0LXNlcnZlZXJzdGVydGplKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRm91
bmQgWGVuIGh5cGVydmlzb3Igc2lnbmF0dXJlIGF0IDQwMDAwMDAwDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IHhlbjogY29weSBlODIwLi4uDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IFJhbSBTaXplPTB4MmY4MDAwMDAgKDB4MDAwMDAwMDAw
MDAwMDAwMCBoaWdoKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBSZWxv
Y2F0aW5nIGxvdyBkYXRhIGZyb20gMHgwMDBlMzI3MCB0byAweDAwMGVmNzgwIChzaXplIDIx
NjQpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFJlbG9jYXRpbmcgaW5p
dCBmcm9tIDB4MDAwZTNhZTQgdG8gMHgyZjdlMmEyMCAoc2l6ZSA1NDQ1MikNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogQ1BVIE1oej0zMjAwDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDkgUENJIGRldmljZXMgKG1heCBQQ0kgYnVz
IGlzIDAwKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBBbGxvY2F0ZWQg
WGVuIGh5cGVyY2FsbCBwYWdlIGF0IDJmN2ZmMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gSFZNMTY6IERldGVjdGVkIFhlbiB2NC4zLXVuc3RhYmxlDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDMgY3B1KHMpIG1heCBzdXBwb3J0ZWQgMyBj
cHUocykNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogeGVuOiBjb3B5IEJJ
T1MgdGFibGVzLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENvcHlp
bmcgU01CSU9TIGVudHJ5IHBvaW50IGZyb20gMHgwMDAxMDAxMCB0byAweDAwMGZkYjEwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENvcHlpbmcgTVBUQUJMRSBmcm9t
IDB4ZmMwMDExOTAvZmMwMDExYTAgdG8gMHgwMDBmZGEwMA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBDb3B5aW5nIFBJUiBmcm9tIDB4MDAwMTAwMzAgdG8gMHgwMDBm
ZDk4MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBDb3B5aW5nIEFDUEkg
UlNEUCBmcm9tIDB4MDAwMTAwYjAgdG8gMHgwMDBmZDk1MA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBTY2FuIGZvciBWR0Egb3B0aW9uIHJvbQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBSdW5uaW5nIG9wdGlvbiByb20gYXQgYzAwMDowMDAz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gc3RkdmdhLmM6MTQ3OmQxNiBlbnRlcmlu
ZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0
XSBIVk0xNjogVHVybmluZyBvbiB2Z2EgdGV4dCBtb2RlIGNvbnNvbGUNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogU2VhQklPUyAodmVyc2lvbiByZWwtMS43LjEtNDQt
Z2IxYzM1ZjItMjAxMjEyMDdfMjExMTA0LXNlcnZlZXJzdGVydGplKQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogVUhDSSBpbml0IG9uIGRldiAwMDowMS4yIChpbz1jMTQwKQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBGb3VuZCAxIGxwdCBwb3J0cw0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBGb3VuZCAxIHNlcmlhbCBwb3J0cw0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3Bj
aUBpMGNmOC9pc2FAMS9mZGNAMDNmMC9mbG9wcHlAMA0KKFhFTikgWzIwMTItMTItMDcgMjA6
MzY6NTRdIEhWTTE2OiBBVEEgY29udHJvbGxlciAxIGF0IDFmMC8zZjQvYzE2MCAoaXJxIDE0
IGRldiA5KQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBBVEEgY29udHJv
bGxlciAyIGF0IDE3MC8zNzQvYzE2OCAoaXJxIDE1IGRldiA5KQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzY6NTRdIEhWTTE2OiBhdGEwLTA6IFFFTVUgSEFSRERJU0sgQVRBLTcgSGFyZC1E
aXNrICgxMDI0MCBNaUJ5dGVzKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiBTZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAwL2Rp
c2tAMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBhdGEwLTE6IFFFTVUg
SEFSRERJU0sgQVRBLTcgSGFyZC1EaXNrICgzMDAgR2lCeXRlcykNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogU2VhcmNoaW5nIGJvb3RvcmRlciBmb3I6IC9wY2lAaTBj
ZjgvKkAxLDEvZHJpdmVAMC9kaXNrQDENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogRFZEL0NEIFthdGExLTA6IFFFTVUgRFZELVJPTSBBVEFQSS00IERWRC9DRF0NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogU2VhcmNoaW5nIGJvb3RvcmRlciBm
b3I6IC9wY2lAaTBjZjgvKkAxLDEvZHJpdmVAMS9kaXNrQDANCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM2OjU0XSBIVk0xNjogUFMyIGtleWJvYXJkIGluaXRpYWxpemVkDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEFsbCB0aHJlYWRzIGNvbXBsZXRlLg0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTY2FuIGZvciBvcHRpb24gcm9tcw0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBSdW5uaW5nIG9wdGlvbiByb20gYXQg
YzkwMDowMDAzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHBtbSBjYWxs
IGFyZzE9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwbW0gY2FsbCBh
cmcxPTANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcG1tIGNhbGwgYXJn
MT0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IHBtbSBjYWxsIGFyZzE9
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFyY2hpbmcgYm9vdG9y
ZGVyIGZvcjogL3BjaUBpMGNmOC8qQDQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogUHJlc3MgRjEyIGZvciBib290IG1lbnUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gSFZNMTY6IA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiBkcml2
ZSAweDAwMGZkOGQwOiBQQ0hTPTE2MzgzLzE2LzYzIHRyYW5zbGF0aW9uPWxiYSBMQ0hTPTEw
MjQvMjU1LzYzIHM9MjA5NzE1MjANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0x
NjogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IGRyaXZlIDB4MDAwZmQ4
YTA6IFBDSFM9MTYzODMvMTYvNjMgdHJhbnNsYXRpb249bGJhIExDSFM9MTAyNC8yNTUvNjMg
cz02MjkxNDU2MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAwDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IFNwYWNlIGF2YWlsYWJsZSBmb3IgVU1C
OiAwMDBjYTAwMC0wMDBlZTgwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2
OiBSZXR1cm5lZCA2MTQ0MCBieXRlcyBvZiBab25lSGlnaA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTddIEhWTTE2OiBlODIwIG1hcCBoYXMgNiBpdGVtczoNCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU3XSBIVk0xNjogICAwOiAwMDAwMDAwMDAwMDAwMDAwIC0gMDAwMDAwMDAw
MDA5ZmMwMCA9IDEgUkFNDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6ICAg
MTogMDAwMDAwMDAwMDA5ZmMwMCAtIDAwMDAwMDAwMDAwYTAwMDAgPSAyIFJFU0VSVkVEDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6ICAgMjogMDAwMDAwMDAwMDBmMDAw
MCAtIDAwMDAwMDAwMDAxMDAwMDAgPSAyIFJFU0VSVkVEDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1N10gSFZNMTY6ICAgMzogMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAwMDAwMmY3ZmYw
MDAgPSAxIFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAgIDQ6IDAw
MDAwMDAwMmY3ZmYwMDAgLSAwMDAwMDAwMDJmODAwMDAwID0gMiBSRVNFUlZFRA0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAgIDU6IDAwMDAwMDAwZmMwMDAwMDAgLSAw
MDAwMDAwMTAwMDAwMDAwID0gMiBSRVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6
NTddIEhWTTE2OiBlbnRlciBoYW5kbGVfMTk6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
N10gSFZNMTY6ICAgTlVMTA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiBC
b290aW5nIGZyb20gSGFyZCBEaXNrLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10g
SFZNMTY6IEJvb3RpbmcgZnJvbSAwMDAwOjdjMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3
OjA0XSBpcnEuYzozNzU6IERvbTE2IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBW
ZWN0b3IgMHhmMw0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6cmVt
b3ZlOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM3OjA2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBu
cj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRv
bTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6
MDZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMTYgZ2Zu
PWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVt
b3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNiBnZm49ZjMwNTAg
bWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFw
OmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozNzowNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4
ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6YWRkOiBk
b20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3
OjA2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0x
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdm
bj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIGly
cS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMCBjaGFuZ2VkIDUgLT4gMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6Mzc6MDZdIGlycS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMSBjaGFuZ2VkIDEw
IC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBpcnEuYzoyNzA6IERvbTE2IFBD
SSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0g
aXJxLmM6MjcwOiBEb20xNiBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNzowOF0gaXJxLmM6MTg3ODogZG9tMTY6IHBpcnEgNCBvciBpcnEgNzIg
YWxyZWFkeSBtYXBwZWQgb2xkX3BpcnE6MCBvbGRfaXJxOiA3MQ0KKFhFTikgWzIwMTItMTIt
MDcgMjA6Mzc6MDhdIGlycS5jOjE4Nzg6IGRvbTE2OiBwaXJxIDQgb3IgaXJxIDczIGFscmVh
ZHkgbWFwcGVkIG9sZF9waXJxOjAgb2xkX2lycTogNzENCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM3OjA4XSBpcnEuYzoxODc4OiBkb20xNjogcGlycSA0IG9yIGlycSA3NCBhbHJlYWR5IG1h
cHBlZCBvbGRfcGlycTowIG9sZF9pcnE6IDcxDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzow
OF0gdm1zaS5jOjEwODpkMzI3NjcgVW5zdXBwb3J0ZWQgZGVsaXZlcnkgbW9kZSAzDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNzowOF0gdm1zaS5jOjExMDpkMzI3NjcgVW5zdXBwb3J0ZWQg
ZGVsaXZlcnkgbW9kZTozIHZlY3RvcjowIHRyaWdtb2RlOjAgZGVzdDo1NCBkZXN0X21vZGU6
MA0K
------------09400606801CFB705
Content-Type: text/plain;
 name="xl-dmesg-traditional.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg-traditional.txt"

KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MDhdIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxlIHdp
dGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWMxZjhjDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoxMl0gaW8uYzoyODI6IGQxNTogYmluZDogbV9nc2k9MTYgZ19nc2k9MzYgZGV2aWNlPTUg
aW50eD0wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gQU1ELVZpOiBEaXNhYmxlOiBk
ZXZpY2UgaWQgPSAweDQwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozMzoxMl0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2
aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDFjMWY4YzAwMCwgZG9tYWluID0gMTUs
IHBhZ2luZyBtb2RlID0gNA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEFNRC1WaTog
UmUtYXNzaWduIDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTAgdG8gZG9tMTUNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogSFZNIExvYWRlcg0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiBEZXRlY3RlZCBYZW4gdjQuMy11bnN0YWJsZQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2
ZW50IGNoYW5uZWwgNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBTeXN0
ZW0gcmVxdWVzdGVkIFJPTUJJT1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0x
NTogQ1BVIHNwZWVkIGlzIDMyMDAgTUh6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
aXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAwIGNoYW5nZWQgMCAtPiA1DQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGlu
ayAxIGNoYW5nZWQgMCAtPiAxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBpcnEuYzoyNzA6IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBDSS1JU0EgbGluayAyIHJvdXRl
ZCB0byBJUlExMQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIGlycS5jOjI3MDogRG9t
MTUgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDAxOjIgSU5URC0+SVJRNQ0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTAN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIElOVEEt
PklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwNDow
IElOVEEtPklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRl
diAwNTowIElOVEEtPklSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSBseDogMDIwMDAwMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIGx4OiAwMTAw
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAg
YmFyIDEwIHNpemUgbHg6IDAwMDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IHBjaSBkZXYgMDU6MCBiYXIgMTAgc2l6ZSBseDogMDAwMDIwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjMzOjEyXSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1m
bj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IHBjaSBk
ZXYgMDI6MCBiYXIgMTQgc2l6ZSBseDogMDAwMDEwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMzowIGJhciAxMCBzaXplIGx4OiAwMDAwMDEwMA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBwY2kgZGV2IDA0OjAgYmFyIDE0
IHNpemUgbHg6IDAwMDAwMDQwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
IHBjaSBkZXYgMDE6MiBiYXIgMjAgc2l6ZSBseDogMDAwMDAwMjANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIGx4OiAwMDAw
MDAxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBNdWx0aXByb2Nlc3Nv
ciBpbml0aWFsaXNhdGlvbjoNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTog
IC0gQ1BVMCAuLi4gNDgtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMg
WzIvOF0gLi4uIGRvbmUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICAt
IENQVTEgLi4uIDQ4LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsy
LzhdIC4uLiBkb25lLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgLSBD
UFUyIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84
XSAuLi4gZG9uZS4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogVGVzdGlu
ZyBIVk0gZW52aXJvbm1lbnQ6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
ICAtIFJFUCBJTlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZA0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBH
UyAuLi4gcGFzc2VkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IFBhc3Nl
ZCAyIG9mIDIgdGVzdHMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogV3Jp
dGluZyBTTUJJT1MgdGFibGVzIC4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhW
TTE1OiBMb2FkaW5nIFJPTUJJT1MgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IDk2NjAgYnl0ZXMgb2YgUk9NQklPUyBoaWdoLW1lbW9yeSBleHRlbnNpb25zOg0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgIFJlbG9jYXRpbmcgdG8gMHhm
YzAwMTAwMC0weGZjMDAzNWJjIC4uLiBkb25lDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzox
Ml0gSFZNMTU6IENyZWF0aW5nIE1QIHRhYmxlcyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogTG9hZGluZyBDaXJydXMgVkdBQklPUyAuLi4NCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogTG9hZGluZyBQQ0kgT3B0aW9uIFJPTSAuLi4NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gTWFudWZhY3R1cmVyOiBodHRw
Oi8vaXB4ZS5vcmcNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIC0gUHJv
ZHVjdCBuYW1lOiBpUFhFDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6IE9w
dGlvbiBST01zOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgYzAwMDAt
YzhmZmY6IFZHQSBCSU9TDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICBj
OTAwMC1kOWZmZjogRXRoZXJib290IFJPTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJd
IEhWTTE1OiBMb2FkaW5nIEFDUEkgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0g
SFZNMTU6IHZtODYgVFNTIGF0IGZjMDBmNjgwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzox
Ml0gSFZNMTU6IEJJT1MgbWFwOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiAgZjAwMDAtZmZmZmY6IE1haW4gQklPUw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJd
IEhWTTE1OiBFODIwIHRhYmxlOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiAgWzAwXTogMDAwMDAwMDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDA5ZTAwMDogUkFNDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6ICBbMDFdOiAwMDAwMDAwMDowMDA5
ZTAwMCAtIDAwMDAwMDAwOjAwMGEwMDAwOiBSRVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MTJdIEhWTTE1OiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDow
MDBlMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzAyXTogMDAw
MDAwMDA6MDAwZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogIFswM106IDAwMDAwMDAwOjAwMTAwMDAwIC0g
MDAwMDAwMDA6MmY4MDAwMDA6IFJBTQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhW
TTE1OiAgSE9MRTogMDAwMDAwMDA6MmY4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMA0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiAgWzA0XTogMDAwMDAwMDA6ZmMwMDAw
MDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjEyXSBIVk0xNTogSW52b2tpbmcgUk9NQklPUyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjMzOjEyXSBIVk0xNTogJFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4LzEyLzA3
IDE3OjMyOjI5ICQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBzdGR2Z2EuYzoxNDc6
ZDE1IGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MTJdIEhWTTE1OiBWR0FCaW9zICRJZDogdmdhYmlvcy5jLHYgMS42NyAyMDA4
LzAxLzI3IDA5OjQ0OjEyIHZydXBwZXJ0IEV4cCAkDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoxMl0gSFZNMTU6IEJvY2hzIEJJT1MgLSBidWlsZDogMDYvMjMvOTkNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogJFJldmlzaW9uOiAxLjIyMSAkICREYXRlOiAyMDA4
LzEyLzA3IDE3OjMyOjI5ICQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTog
T3B0aW9uczogYXBtYmlvcyBwY2liaW9zIGVsdG9yaXRvIFBNTSANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZN
MTU6IGF0YTAtMDogUENIUz0xNjM4My8xNi82MyB0cmFuc2xhdGlvbj1sYmEgTENIUz0xMDI0
LzI1NS82Mw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiBhdGEwIG1hc3Rl
cjogUUVNVSBIQVJERElTSyBBVEEtNyBIYXJkLURpc2sgKDEwMjQwIE1CeXRlcykNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogYXRhMC0xOiBQQ0hTPTE2MzgzLzE2LzYz
IHRyYW5zbGF0aW9uPWxiYSBMQ0hTPTEwMjQvMjU1LzYzDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozMzoxMl0gSFZNMTU6IGF0YTAgIHNsYXZlOiBRRU1VIEhBUkRESVNLIEFUQS03IEhhcmQt
RGlzayAoIDMwMCBHQnl0ZXMpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZNMTU6
IA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1OiANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjMzOjEyXSBIVk0xNTogDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoxMl0gSFZN
MTU6IFByZXNzIEYxMiBmb3IgYm9vdCBtZW51Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6
MTJdIEhWTTE1OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjEyXSBIVk0xNTogQm9vdGlu
ZyBmcm9tIEhhcmQgRGlzay4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MTJdIEhWTTE1
OiBCb290aW5nIGZyb20gMDAwMDo3YzAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNF0g
Z3JhbnRfdGFibGUuYzozMTM6ZDAgSW5jcmVhc2VkIG1hcHRyYWNrIHNpemUgdG8gNiBmcmFt
ZXMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI1XSBpcnEuYzozNzU6IERvbTE1IGNhbGxi
YWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0b3IgMHhmMw0KKFhFTikgWzIwMTItMTIt
MDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5
OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOmFkZDog
ZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
MzoyNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9
MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNSBn
Zm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE1IGdmbj1mMzAy
MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTUgZ2ZuPWYzMDIwIG1mbj1m
OThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gbWVtb3J5X21hcDpyZW1v
dmU6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzM6MjZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNSBnZm49ZjMwMjAgbWZuPWY5OGZlIG5y
PTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjMzOjI2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
MTUgZ2ZuPWYzMDIwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozMzoy
Nl0gbWVtb3J5X21hcDphZGQ6IGRvbTE1IGdmbj1mMzAyMCBtZm49Zjk4ZmUgbnI9MQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzM6MjZdIGlycS5jOjI3MDogRG9tMTUgUENJIGxpbmsgMCBj
aGFuZ2VkIDUgLT4gMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzM6MjZdIGlycS5jOjI3MDog
RG9tMTUgUENJIGxpbmsgMSBjaGFuZ2VkIDEwIC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjMzOjI2XSBpcnEuYzoyNzA6IERvbTE1IFBDSSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozMzoyNl0gaXJxLmM6MjcwOiBEb20xNSBQQ0kgbGluayAz
IGNoYW5nZWQgNSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjozMF0gQU1ELVZpOiBE
aXNhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwgZG9tYWluID0gMTUsIHBhZ2luZyBtb2RlID0g
NA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6MzBdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4NDAwLCByb290IHRhYmxlID0gMHgyNDk4YTMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6MzBd
IEFNRC1WaTogUmUtYXNzaWduIDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTE1IHRvIGRvbTANCg==
------------09400606801CFB705
Content-Type: text/plain;
 name="xl-dmesg-upstream.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg-upstream.txt"

KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTNdIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxlIHdp
dGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWMxZTlmDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gaW8uYzoyODI6IGQxNjogYmluZDogbV9nc2k9MTYgZ19nc2k9MzYgZGV2aWNlPTUg
aW50eD0wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gQU1ELVZpOiBEaXNhYmxlOiBk
ZXZpY2UgaWQgPSAweDQwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBb
MjAxMi0xMi0wNyAyMDozNjo1NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2
aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDFjMWU5ZjAwMCwgZG9tYWluID0gMTYs
IHBhZ2luZyBtb2RlID0gNA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEFNRC1WaTog
UmUtYXNzaWduIDAwMDA6MDQ6MDAuMCBmcm9tIGRvbTAgdG8gZG9tMTYNCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogSFZNIExvYWRlcg0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBEZXRlY3RlZCBYZW4gdjQuMy11bnN0YWJsZQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2
ZW50IGNoYW5uZWwgNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTeXN0
ZW0gcmVxdWVzdGVkIFNlYUJJT1MNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0x
NjogQ1BVIHNwZWVkIGlzIDMyMDAgTUh6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
aXJxLmM6MjcwOiBEb20xNiBQQ0kgbGluayAwIGNoYW5nZWQgMCAtPiA1DQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gaXJxLmM6MjcwOiBEb20xNiBQQ0kgbGlu
ayAxIGNoYW5nZWQgMCAtPiAxMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTANCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBpcnEuYzoyNzA6IERvbTE2IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFBDSS1JU0EgbGluayAyIHJvdXRl
ZCB0byBJUlExMQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIGlycS5jOjI3MDogRG9t
MTYgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6
NTRdIEhWTTE2OiBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjIgSU5URC0+SVJRNQ0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTAN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMzowIElOVEEt
PklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNDow
IElOVEEtPklSUTUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRl
diAwNTowIElOVEEtPklSUTEwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSBseDogMDIwMDAwMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIGx4OiAwMTAw
MDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDA0OjAg
YmFyIDEwIHNpemUgbHg6IDAwMDIwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IHBjaSBkZXYgMDQ6MCBiYXIgMzAgc2l6ZSBseDogMDAwMjAwMDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwMjowIGJhciAzMCBzaXplIGx4
OiAwMDAxMDAwMA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2
IDA1OjAgYmFyIDEwIHNpemUgbHg6IDAwMDAyMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1NF0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAyOjAgYmFyIDE0
IHNpemUgbHg6IDAwMDAxMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6
IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSBseDogMDAwMDAxMDANCihYRU4pIFsyMDEyLTEy
LTA3IDIwOjM2OjU0XSBIVk0xNjogcGNpIGRldiAwNDowIGJhciAxNCBzaXplIGx4OiAwMDAw
MDA0MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBwY2kgZGV2IDAxOjIg
YmFyIDIwIHNpemUgbHg6IDAwMDAwMDIwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6ZSBseDogMDAwMDAwMTANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogTXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlzYXRp
b246DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICAtIENQVTAgLi4uIDQ4
LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25l
Lg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgLSBDUFUxIC4uLiA0OC1i
aXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4N
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogIC0gQ1BVMiAuLi4gNDgtYml0
IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFRlc3RpbmcgSFZNIGVudmlyb25t
ZW50Og0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgLSBSRVAgSU5TQiBh
Y3Jvc3MgcGFnZSBib3VuZGFyaWVzIC4uLiBwYXNzZWQNCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogIC0gR1MgYmFzZSBNU1JzIGFuZCBTV0FQR1MgLi4uIHBhc3NlZA0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBQYXNzZWQgMiBvZiAyIHRlc3Rz
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFdyaXRpbmcgU01CSU9TIHRh
YmxlcyAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogTG9hZGluZyBT
ZWFCSU9TIC4uLg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBDcmVhdGlu
ZyBNUCB0YWJsZXMgLi4uDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IExv
YWRpbmcgQUNQSSAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogdm04
NiBUU1MgYXQgZmMwMGEwODANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjog
QklPUyBtYXA6DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICAxMDAwMC0x
MDBkMzogU2NyYXRjaCBzcGFjZQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiAgZTAwMDAtZmZmZmY6IE1haW4gQklPUw0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEhWTTE2OiBFODIwIHRhYmxlOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2
OiAgWzAwXTogMDAwMDAwMDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDBhMDAwMDogUkFNDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICBIT0xFOiAwMDAwMDAwMDowMDBh
MDAwMCAtIDAwMDAwMDAwOjAwMGUwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6ICBbMDFdOiAwMDAwMDAwMDowMDBlMDAwMCAtIDAwMDAwMDAwOjAwMTAwMDAwOiBS
RVNFUlZFRA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiAgWzAyXTogMDAw
MDAwMDA6MDAxMDAwMDAgLSAwMDAwMDAwMDoyZjgwMDAwMDogUkFNDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6ICBIT0xFOiAwMDAwMDAwMDoyZjgwMDAwMCAtIDAwMDAw
MDAwOmZjMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6ICBbMDNd
OiAwMDAwMDAwMDpmYzAwMDAwMCAtIDAwMDAwMDAxOjAwMDAwMDAwOiBSRVNFUlZFRA0KKFhF
TikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBJbnZva2luZyBTZWFCSU9TIC4uLg0K
KFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFCSU9TICh2ZXJzaW9uIHJl
bC0xLjcuMS00NC1nYjFjMzVmMi0yMDEyMTIwN18yMTExMDQtc2VydmVlcnN0ZXJ0amUpDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IA0KKFhFTikgWzIwMTItMTItMDcg
MjA6MzY6NTRdIEhWTTE2OiBGb3VuZCBYZW4gaHlwZXJ2aXNvciBzaWduYXR1cmUgYXQgNDAw
MDAwMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogeGVuOiBjb3B5IGU4
MjAuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogUmFtIFNpemU9MHgy
ZjgwMDAwMCAoMHgwMDAwMDAwMDAwMDAwMDAwIGhpZ2gpDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1NF0gSFZNMTY6IFJlbG9jYXRpbmcgbG93IGRhdGEgZnJvbSAweDAwMGUzMjcwIHRv
IDB4MDAwZWY3ODAgKHNpemUgMjE2NCkNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogUmVsb2NhdGluZyBpbml0IGZyb20gMHgwMDBlM2FlNCB0byAweDJmN2UyYTIwIChz
aXplIDU0NDUyKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBDUFUgTWh6
PTMyMDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRm91bmQgOSBQQ0kg
ZGV2aWNlcyAobWF4IFBDSSBidXMgaXMgMDApDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
NF0gSFZNMTY6IEFsbG9jYXRlZCBYZW4gaHlwZXJjYWxsIHBhZ2UgYXQgMmY3ZmYwMDANCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRGV0ZWN0ZWQgWGVuIHY0LjMtdW5z
dGFibGUNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogRm91bmQgMyBjcHUo
cykgbWF4IHN1cHBvcnRlZCAzIGNwdShzKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEhWTTE2OiB4ZW46IGNvcHkgQklPUyB0YWJsZXMuLi4NCihYRU4pIFsyMDEyLTEyLTA3IDIw
OjM2OjU0XSBIVk0xNjogQ29weWluZyBTTUJJT1MgZW50cnkgcG9pbnQgZnJvbSAweDAwMDEw
MDEwIHRvIDB4MDAwZmRiMTANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjog
Q29weWluZyBNUFRBQkxFIGZyb20gMHhmYzAwMTE5MC9mYzAwMTFhMCB0byAweDAwMGZkYTAw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IENvcHlpbmcgUElSIGZyb20g
MHgwMDAxMDAzMCB0byAweDAwMGZkOTgwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IENvcHlpbmcgQUNQSSBSU0RQIGZyb20gMHgwMDAxMDBiMCB0byAweDAwMGZkOTUw
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFNjYW4gZm9yIFZHQSBvcHRp
b24gcm9tDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFJ1bm5pbmcgb3B0
aW9uIHJvbSBhdCBjMDAwOjAwMDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBzdGR2
Z2EuYzoxNDc6ZDE2IGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikg
WzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBUdXJuaW5nIG9uIHZnYSB0ZXh0IG1vZGUg
Y29uc29sZQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFCSU9TICh2
ZXJzaW9uIHJlbC0xLjcuMS00NC1nYjFjMzVmMi0yMDEyMTIwN18yMTExMDQtc2VydmVlcnN0
ZXJ0amUpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IA0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBVSENJIGluaXQgb24gZGV2IDAwOjAxLjIgKGlv
PWMxNDApDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDEgbHB0
IHBvcnRzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEZvdW5kIDEgc2Vy
aWFsIHBvcnRzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFNlYXJjaGlu
ZyBib290b3JkZXIgZm9yOiAvcGNpQGkwY2Y4L2lzYUAxL2ZkY0AwM2YwL2Zsb3BweUAwDQoo
WEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IEFUQSBjb250cm9sbGVyIDEgYXQg
MWYwLzNmNC9jMTYwIChpcnEgMTQgZGV2IDkpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
NF0gSFZNMTY6IEFUQSBjb250cm9sbGVyIDIgYXQgMTcwLzM3NC9jMTY4IChpcnEgMTUgZGV2
IDkpDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IGF0YTAtMDogUUVNVSBI
QVJERElTSyBBVEEtNyBIYXJkLURpc2sgKDEwMjQwIE1pQnl0ZXMpDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1NF0gSFZNMTY6IFNlYXJjaGluZyBib290b3JkZXIgZm9yOiAvcGNpQGkw
Y2Y4LypAMSwxL2RyaXZlQDAvZGlza0AwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0g
SFZNMTY6IGF0YTAtMTogUUVNVSBIQVJERElTSyBBVEEtNyBIYXJkLURpc2sgKDMwMCBHaUJ5
dGVzKQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBTZWFyY2hpbmcgYm9v
dG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAwL2Rpc2tAMQ0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBEVkQvQ0QgW2F0YTEtMDogUUVNVSBEVkQtUk9N
IEFUQVBJLTQgRFZEL0NEXQ0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBT
ZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAxL2Rpc2tA
MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBQUzIga2V5Ym9hcmQgaW5p
dGlhbGl6ZWQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogQWxsIHRocmVh
ZHMgY29tcGxldGUuDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFNjYW4g
Zm9yIG9wdGlvbiByb21zDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZNMTY6IFJ1
bm5pbmcgb3B0aW9uIHJvbSBhdCBjOTAwOjAwMDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2
OjU0XSBIVk0xNjogcG1tIGNhbGwgYXJnMT0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1
NF0gSFZNMTY6IHBtbSBjYWxsIGFyZzE9MA0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTRd
IEhWTTE2OiBwbW0gY2FsbCBhcmcxPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBI
Vk0xNjogcG1tIGNhbGwgYXJnMT0wDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1NF0gSFZN
MTY6IFNlYXJjaGluZyBib290b3JkZXIgZm9yOiAvcGNpQGkwY2Y4LypANA0KKFhFTikgWzIw
MTItMTItMDcgMjA6MzY6NTRdIEhWTTE2OiBQcmVzcyBGMTIgZm9yIGJvb3QgbWVudS4NCihY
RU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU0XSBIVk0xNjogDQooWEVOKSBbMjAxMi0xMi0wNyAy
MDozNjo1N10gSFZNMTY6IGRyaXZlIDB4MDAwZmQ4ZDA6IFBDSFM9MTYzODMvMTYvNjMgdHJh
bnNsYXRpb249bGJhIExDSFM9MTAyNC8yNTUvNjMgcz0yMDk3MTUyMA0KKFhFTikgWzIwMTIt
MTItMDcgMjA6MzY6NTddIEhWTTE2OiANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBI
Vk0xNjogZHJpdmUgMHgwMDBmZDhhMDogUENIUz0xNjM4My8xNi82MyB0cmFuc2xhdGlvbj1s
YmEgTENIUz0xMDI0LzI1NS82MyBzPTYyOTE0NTYwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1N10gSFZNMTY6IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogU3Bh
Y2UgYXZhaWxhYmxlIGZvciBVTUI6IDAwMGNhMDAwLTAwMGVlODAwDQooWEVOKSBbMjAxMi0x
Mi0wNyAyMDozNjo1N10gSFZNMTY6IFJldHVybmVkIDYxNDQwIGJ5dGVzIG9mIFpvbmVIaWdo
DQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IGU4MjAgbWFwIGhhcyA2IGl0
ZW1zOg0KKFhFTikgWzIwMTItMTItMDcgMjA6MzY6NTddIEhWTTE2OiAgIDA6IDAwMDAwMDAw
MDAwMDAwMDAgLSAwMDAwMDAwMDAwMDlmYzAwID0gMSBSQU0NCihYRU4pIFsyMDEyLTEyLTA3
IDIwOjM2OjU3XSBIVk0xNjogICAxOiAwMDAwMDAwMDAwMDlmYzAwIC0gMDAwMDAwMDAwMDBh
MDAwMCA9IDIgUkVTRVJWRUQNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjog
ICAyOiAwMDAwMDAwMDAwMGYwMDAwIC0gMDAwMDAwMDAwMDEwMDAwMCA9IDIgUkVTRVJWRUQN
CihYRU4pIFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogICAzOiAwMDAwMDAwMDAwMTAw
MDAwIC0gMDAwMDAwMDAyZjdmZjAwMCA9IDEgUkFNDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
Njo1N10gSFZNMTY6ICAgNDogMDAwMDAwMDAyZjdmZjAwMCAtIDAwMDAwMDAwMmY4MDAwMDAg
PSAyIFJFU0VSVkVEDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6ICAgNTog
MDAwMDAwMDBmYzAwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgPSAyIFJFU0VSVkVEDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNjo1N10gSFZNMTY6IGVudGVyIGhhbmRsZV8xOToNCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogICBOVUxMDQooWEVOKSBbMjAxMi0xMi0w
NyAyMDozNjo1N10gSFZNMTY6IEJvb3RpbmcgZnJvbSBIYXJkIERpc2suLi4NCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM2OjU3XSBIVk0xNjogQm9vdGluZyBmcm9tIDAwMDA6N2MwMA0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mzc6MDRdIGlycS5jOjM3NTogRG9tMTYgY2FsbGJhY2sgdmlh
IGNoYW5nZWQgdG8gRGlyZWN0IFZlY3RvciAweGYzDQooWEVOKSBbMjAxMi0xMi0wNyAyMDoz
NzowNl0gbWVtb3J5X21hcDpyZW1vdmU6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9
MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNiBn
Zm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVO
KSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1
MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEy
LTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFwOmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1m
OThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gbWVtb3J5X21hcDpyZW1v
dmU6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhFTikgWzIwMTItMTItMDcg
MjA6Mzc6MDZdIG1lbW9yeV9tYXA6YWRkOiBkb20xNiBnZm49ZjMwNTAgbWZuPWY5OGZlIG5y
PTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
MTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzow
Nl0gbWVtb3J5X21hcDphZGQ6IGRvbTE2IGdmbj1mMzA1MCBtZm49Zjk4ZmUgbnI9MQ0KKFhF
TikgWzIwMTItMTItMDcgMjA6Mzc6MDZdIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xNiBnZm49
ZjMwNTAgbWZuPWY5OGZlIG5yPTENCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA2XSBtZW1v
cnlfbWFwOmFkZDogZG9tMTYgZ2ZuPWYzMDUwIG1mbj1mOThmZSBucj0xDQooWEVOKSBbMjAx
Mi0xMi0wNyAyMDozNzowNl0gaXJxLmM6MjcwOiBEb20xNiBQQ0kgbGluayAwIGNoYW5nZWQg
NSAtPiAwDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowNl0gaXJxLmM6MjcwOiBEb20xNiBQ
Q0kgbGluayAxIGNoYW5nZWQgMTAgLT4gMA0KKFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDZd
IGlycS5jOjI3MDogRG9tMTYgUENJIGxpbmsgMiBjaGFuZ2VkIDExIC0+IDANCihYRU4pIFsy
MDEyLTEyLTA3IDIwOjM3OjA2XSBpcnEuYzoyNzA6IERvbTE2IFBDSSBsaW5rIDMgY2hhbmdl
ZCA1IC0+IDANCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA4XSBpcnEuYzoxODc4OiBkb20x
NjogcGlycSA0IG9yIGlycSA3MiBhbHJlYWR5IG1hcHBlZCBvbGRfcGlycTowIG9sZF9pcnE6
IDcxDQooWEVOKSBbMjAxMi0xMi0wNyAyMDozNzowOF0gaXJxLmM6MTg3ODogZG9tMTY6IHBp
cnEgNCBvciBpcnEgNzMgYWxyZWFkeSBtYXBwZWQgb2xkX3BpcnE6MCBvbGRfaXJxOiA3MQ0K
KFhFTikgWzIwMTItMTItMDcgMjA6Mzc6MDhdIGlycS5jOjE4Nzg6IGRvbTE2OiBwaXJxIDQg
b3IgaXJxIDc0IGFscmVhZHkgbWFwcGVkIG9sZF9waXJxOjAgb2xkX2lycTogNzENCihYRU4p
IFsyMDEyLTEyLTA3IDIwOjM3OjA4XSB2bXNpLmM6MTA4OmQzMjc2NyBVbnN1cHBvcnRlZCBk
ZWxpdmVyeSBtb2RlIDMNCihYRU4pIFsyMDEyLTEyLTA3IDIwOjM3OjA4XSB2bXNpLmM6MTEw
OmQzMjc2NyBVbnN1cHBvcnRlZCBkZWxpdmVyeSBtb2RlOjMgdmVjdG9yOjAgdHJpZ21vZGU6
MCBkZXN0OjU0IGRlc3RfbW9kZTowDQo=
------------09400606801CFB705
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------09400606801CFB705--



From xen-devel-bounces@lists.xen.org Fri Dec 07 21:02:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:02:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th53X-0006jp-1s; Fri, 07 Dec 2012 21:01:59 +0000
From xen-devel-bounces@lists.xen.org Fri Dec 07 21:02:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:02:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th53X-0006jp-1s; Fri, 07 Dec 2012 21:01:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Th53V-0006jk-GV
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:01:57 +0000
Received: from [85.158.143.99:52037] by server-2.bemta-4.messagelabs.com id
	B9/64-30861-44952C05; Fri, 07 Dec 2012 21:01:56 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1354914116!23228194!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25592 invoked from network); 7 Dec 2012 21:01:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:01:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="4752"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 21:01:56 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	21:01:55 +0000
Message-ID: <50C25933.1050407@citrix.com>
Date: Fri, 7 Dec 2012 21:01:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
	<43f86afe90be582e1579.1354830149@andrewcoop.uk.xensource.com>
	<50C1E4F802000078000AEED9@nat28.tlf.novell.com>
In-Reply-To: <50C1E4F802000078000AEED9@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 2 V3] x86/IST: Create set_ist() helper
 function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/2012 11:45, Jan Beulich wrote:
>>>> On 06.12.12 at 22:42, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> ... to save using open-coded bitwise operations, and update all IST
>> manipulation sites to use the function.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> --
>>
>> I am not overly happy with the name set_ist(), and certainly not tied to
>> it.  However, I am unable to think of a better name. set_idt_ist() is
>> wrong, as is set_irq_ist(), while set_idt_entry_ist() just seems too
>> kludgy.  The comment and parameter types do explicitly state what is
>> expected to be passed, but suggestions welcome for a better name.
>>
>> diff -r bc624b00d6d6 -r 43f86afe90be xen/arch/x86/hvm/svm/svm.c
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -869,9 +869,9 @@ static void svm_ctxt_switch_from(struct 
>>      svm_vmload(per_cpu(root_vmcb, cpu));
>>  
>>      /* Resume use of ISTs now that the host TR is reinstated. */
>> -    idt_tables[cpu][TRAP_double_fault].a  |= IST_DF << 32;
>> -    idt_tables[cpu][TRAP_nmi].a           |= IST_NMI << 32;
>> -    idt_tables[cpu][TRAP_machine_check].a |= IST_MCE << 32;
>> +    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_DF);
>> +    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NMI);
>> +    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_MCE);
>>  }
>>  
>>  static void svm_ctxt_switch_to(struct vcpu *v)
>> @@ -893,9 +893,9 @@ static void svm_ctxt_switch_to(struct vc
>>       * Cannot use ISTs for NMI/#MC/#DF while we are running with the guest TR.
>>       * But this doesn't matter: the IST is only req'd to handle SYSCALL/SYSRET.
>>       */
>> -    idt_tables[cpu][TRAP_double_fault].a  &= ~(7UL << 32);
>> -    idt_tables[cpu][TRAP_nmi].a           &= ~(7UL << 32);
>> -    idt_tables[cpu][TRAP_machine_check].a &= ~(7UL << 32);
>> +    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_NONE);
>> +    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NONE);
>> +    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
>>  
>>      svm_restore_dr(v);
>>  
>> diff -r bc624b00d6d6 -r 43f86afe90be xen/arch/x86/x86_64/traps.c
>> --- a/xen/arch/x86/x86_64/traps.c
>> +++ b/xen/arch/x86/x86_64/traps.c
>> @@ -370,9 +370,9 @@ void __devinit subarch_percpu_traps_init
>>      {
>>          /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
>>          set_intr_gate(TRAP_double_fault, &double_fault);
>> -        idt_table[TRAP_double_fault].a  |= IST_DF << 32;
>> -        idt_table[TRAP_nmi].a           |= IST_NMI << 32;
>> -        idt_table[TRAP_machine_check].a |= IST_MCE << 32;
>> +        set_ist(&idt_table[TRAP_double_fault],  IST_DF);
>> +        set_ist(&idt_table[TRAP_nmi],           IST_NMI);
>> +        set_ist(&idt_table[TRAP_machine_check], IST_MCE);
>>  
>>          /*
>>           * The 32-on-64 hypercall entry vector is only accessible from ring 
>> 1.
>> diff -r bc624b00d6d6 -r 43f86afe90be xen/include/asm-x86/processor.h
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -425,10 +425,20 @@ struct tss_struct {
>>      u8 __cacheline_filler[24];
>>  } __cacheline_aligned __attribute__((packed));
>>  
>> -#define IST_DF  1UL
>> -#define IST_NMI 2UL
>> -#define IST_MCE 3UL
>> -#define IST_MAX 3UL
>> +#define IST_NONE 0UL
>> +#define IST_DF   1UL
>> +#define IST_NMI  2UL
>> +#define IST_MCE  3UL
>> +#define IST_MAX  3UL
>> +
>> +/* Set the interrupt stack table used by a particular interrupt
>> + * descriptor table entry. */
>> +static always_inline void set_ist(idt_entry_t * idt, unsigned long ist)
>> +{
>> +    /* ist is a 3 bit field, 32 bits into the idt entry. */
>> +    ASSERT( ist < 8 );
> This ought to check against IST_MAX.

Ok

>
>> +    idt->a = ( idt->a & ~(7UL << 32) ) | ( (ist & 7UL) << 32 );
> And with the check above, the rightmost & is pretty pointless.
>
> Jan

I was trying to protect against overflowing into the rest of the bits.
I would hope that all actual call sites use the IST_ macros, in which
case the compiler will optimise the mask away, but if someone calls
set_ist(<entry>, 100), the masking will still do the right thing.

Thinking about it though, anyone calling set_ist() with a value greater
than 7 probably has larger bugs in their code.

I will remove it.

~Andrew

>
>> +}
>>  
>>  #define IDT_ENTRIES 256
>>  extern idt_entry_t idt_table[];
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5EK-0007G4-Mo; Fri, 07 Dec 2012 21:13:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th5EI-0007Fi-SU
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:13:07 +0000
Received: from [85.158.139.83:26806] by server-10.bemta-5.messagelabs.com id
	56/D2-13383-2EB52C05; Fri, 07 Dec 2012 21:13:06 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1354914784!28944790!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11502 invoked from network); 7 Dec 2012 21:13:05 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:13:05 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so981035vcb.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 13:13:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=UoTMuWdR6idRSqj+2q+TCfk5KMT3fYG38EYCmtfwh1g=;
	b=bE1R7VhiC01DANibY5nci1JRFfjSAuNCxxBoX6rf2z99UUiXuw3Ff78VhoJoptxYf2
	NWBQ/sXpcGMaVx4sceZjWBJEsJzVLQqrglFrLWcuJpi52Xf1KKCSLMVfGo0uq3Uh7nOc
	iIndMvWcP7ClUfTx6sSIg+FsfSO/wO1YUcBjcXC+huVdeKBwYOUVBcNThVxyikaecuMw
	1QTveuuawjj0vPjVfVFtcG9ssMXy8Bl6BN0CiGnzMXxshLwPywBaRkwaih/s3f5L7Iqp
	zvPBnN8xqUuCYRH49U13EDGPbl1sDHphU8DswOnoVeRs5wUZ0Hn/5Xa1D0mOm5QiHnKF
	J8Uw==
Received: by 10.220.156.10 with SMTP id u10mr4617601vcw.28.1354914783814;
	Fri, 07 Dec 2012 13:13:03 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id x4sm4375171vdh.13.2012.12.07.13.13.02
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 13:13:03 -0800 (PST)
Date: Fri, 7 Dec 2012 16:13:01 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: David Xu <davidxu06@gmail.com>
Message-ID: <20121207211300.GB9664@phenom.dumpdata.com>
References: <CAGjowiTaWSbyWH-NEXz2FEdeXdxHMVbOf9Kj7t25tNj0fvUZ-w@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAGjowiTaWSbyWH-NEXz2FEdeXdxHMVbOf9Kj7t25tNj0fvUZ-w@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Is: xen-netfront on 3.2.x crashes on TCP traffic. Was
 Re: kernel panic on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Nov 25, 2012 at 10:40:10PM -0500, David Xu wrote:
> Hi all,
> 
> When I run the iperf benchmark to measure the TCP throughput between a
> physical machine and a VM, the TCP server, which is a Xen VM, crashed. Do
> you know what might be causing this bug? Thanks.

No idea. Is this easy to reproduce? Do you see it only with 3.2.x
kernels?
> 
> [  100.973027] BUG: unable to handle kernel NULL pointer dereference at
> 0000000000000008
> [  100.973040] IP: [<ffffffff81455f16>] xennet_alloc_rx_buffers+0x166/0x350
> [  100.973050] PGD 1cc98067 PUD 1d74c067 PMD 0
> [  100.973051] Oops: 0002 [#1] SMP
> [  100.973051] CPU 1
> [  100.973051] Modules linked in:
> [  100.973051]
> [  100.973051] Pid: 9, comm: ksoftirqd/1 Not tainted 3.2.23 #131
> [  100.973051] RIP: e030:[<ffffffff81455f16>]  [<ffffffff81455f16>]
> xennet_alloc_rx_buffers+0x166/0x350
> [  100.973051] RSP: e02b:ffff88001e8f1c10  EFLAGS: 00010206
> [  100.973051] RAX: 0000000000000000 RBX: ffff88001da98000 RCX:
> 00000000000012b0
> [  100.973051] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
> 0000000000000256
> [  100.973051] RBP: ffff88001e8f1c60 R08: ffffc90000000000 R09:
> 0000000000017a41
> [  100.973051] R10: 0000000000000002 R11: 0000000000017298 R12:
> ffff880019a7b700
> [  100.973051] R13: 0000000000000256 R14: 0000000000012092 R15:
> 0000000000000092
> [  100.973051] FS:  00007f7ace91f700(0000) GS:ffff88001fd00000(0000)
> knlGS:0000000000000000
> [  100.973051] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  100.973051] CR2: 0000000000000008 CR3: 000000001da64000 CR4:
> 0000000000002660
> [  100.973051] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  100.973051] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  100.973051] Process ksoftirqd/1 (pid: 9, threadinfo ffff88001e8f0000,
> task ffff88001e8d7000)
> [  100.973051] Stack:
> [  100.973051]  0000000000000091 ffff88001da99c80 ffff88001da99400
> 0000000100012091
> [  100.973051]  ffff88001e8f1c60 000000000000002d ffff880019a8ac4e
> ffff88001fd1a590
> [  100.973051]  0000160000000000 ffff880000000000 ffff88001e8f1db0
> ffffffff8145699a
> [  100.973051] Call Trace:
> [  100.973051]  [<ffffffff8145699a>] xennet_poll+0x7ca/0xe80
> [  100.973051]  [<ffffffff814e3e51>] net_rx_action+0x151/0x2b0
> [  100.973051]  [<ffffffff8106090d>] __do_softirq+0xbd/0x250
> [  100.973051]  [<ffffffff81060b67>] run_ksoftirqd+0xc7/0x170
> [  100.973051]  [<ffffffff81060aa0>] ? __do_softirq+0x250/0x250
> [  100.973051]  [<ffffffff8107b0ac>] kthread+0x8c/0xa0
> [  100.973051]  [<ffffffff8167ca04>] kernel_thread_helper+0x4/0x10
> [  100.973051]  [<ffffffff81672d21>] ? retint_restore_args+0x13/0x13
> [  100.973051]  [<ffffffff8167ca00>] ? gs_change+0x13/0x13
> [  100.973051] Code: 0f 84 19 01 00 00 83 ab 10 14 00 00 01 45 0f b6 fe 49
> 8b 14 24 49 8b 44 24 08 49 c7 04 24 00 00 00 00 49 c7 44 24 08 00 00 00 00
> <48> 89 42 08 48 89 10 41 0f b6 d7 49 89 5c 24 20 48 8d 82 b8 01
> [  100.973051] RIP  [<ffffffff81455f16>] xennet_alloc_rx_buffers+0x166/0x350
> [  100.973051]  RSP <ffff88001e8f1c10>
> [  100.973051] CR2: 0000000000000008
> [  100.973259] ---[ end trace b0530821c3527d70 ]---
> [  100.973263] Kernel panic - not syncing: Fatal exception in interrupt
> [  100.973267] Pid: 9, comm: ksoftirqd/1 Tainted: G      D      3.2.23 #131
> [  100.973270] Call Trace:
> [  100.973273]  [<ffffffff816674ae>] panic+0x91/0x1a2
> [  100.973278]  [<ffffffff8100adb2>] ? check_events+0x12/0x20
> [  100.973282]  [<ffffffff81673b0a>] oops_end+0xea/0xf0
> [  100.973286]  [<ffffffff81666e6b>] no_context+0x214/0x223
> [  100.973291]  [<ffffffff8113cf94>] ? kmem_cache_free+0x104/0x110
> [  100.973295]  [<ffffffff8166704b>] __bad_area_nosemaphore+0x1d1/0x1f0
> [  100.973299]  [<ffffffff8166707d>] bad_area_nosemaphore+0x13/0x15
> [  100.973304]  [<ffffffff816763fb>] do_page_fault+0x35b/0x4f0
> [  100.973308]  [<ffffffff814d6044>] ? __netdev_alloc_skb+0x24/0x50
> [  100.973313]  [<ffffffff8129f75a>] ? trace_hardirqs_off_thunk+0x3a/0x6c
> [  100.973318]  [<ffffffff81672fa5>] page_fault+0x25/0x30
> [  100.973322]  [<ffffffff81455f16>] ? xennet_alloc_rx_buffers+0x166/0x350
> [  100.973326]  [<ffffffff8145699a>] xennet_poll+0x7ca/0xe80
> [  100.973330]  [<ffffffff814e3e51>] net_rx_action+0x151/0x2b0
> [  100.973334]  [<ffffffff8106090d>] __do_softirq+0xbd/0x250
> [  100.973338]  [<ffffffff81060b67>] run_ksoftirqd+0xc7/0x170
> [  100.973342]  [<ffffffff81060aa0>] ? __do_softirq+0x250/0x250
> [  100.973346]  [<ffffffff8107b0ac>] kthread+0x8c/0xa0
> [  100.973350]  [<ffffffff8167ca04>] kernel_thread_helper+0x4/0x10
> [  100.973354]  [<ffffffff81672d21>] ? retint_restore_args+0x13/0x13
> [  100.973358]  [<ffffffff8167ca00>] ? gs_change+0x13/0x13
> 
> Regards,
> Cong

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:17:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5Hz-0007Uc-Cq; Fri, 07 Dec 2012 21:16:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th5Hx-0007UQ-RP
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 21:16:54 +0000
Received: from [85.158.143.35:40091] by server-1.bemta-4.messagelabs.com id
	8D/7C-28401-5CC52C05; Fri, 07 Dec 2012 21:16:53 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1354915011!4794826!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7403 invoked from network); 7 Dec 2012 21:16:52 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:16:52 -0000
Received: by mail-vb0-f43.google.com with SMTP id fs19so1067568vbb.30
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Dec 2012 13:16:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=xX6GUuFJFtNbQI6+phFagly1UvI89fG8IjalRkEKhG8=;
	b=WW+eIRYM+tXM8lfX5toRbBx3ENZoPBWo2lPcvOPMRR7N0hlaVo4qDOaVpiEs16tWxQ
	ZZR+Nrzq4W3q5YAW27lDcWUoD/JJv79YHTgSEx8QPezh+tPrQcziVXIT7ZjL82X384pc
	Tl6NbsMHshL35cKcSaoU6yUkSnYDKHXeFw3+FJUutyq0bA2gMTLug9wBK+5qu5urXHMB
	14CP0g0AZs+lIjqWUiWP9etjuPVTX8ldwGu4wiUIe5QpJZdB39KPAhHeL02/YwYbvrUh
	2KvszRYjEdrctMdPB5KzU7OKcvVm/YAZFCta7EovslLN1kaUUf7c+VwyKTWvpry6OGb2
	JmjQ==
Received: by 10.220.108.79 with SMTP id e15mr4537722vcp.61.1354915011310;
	Fri, 07 Dec 2012 13:16:51 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id yu4sm3809025veb.7.2012.12.07.13.16.50
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 13:16:51 -0800 (PST)
Date: Fri, 7 Dec 2012 16:16:48 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Sachin Kamat <sachin.kamat@linaro.org>
Message-ID: <20121207211648.GC9664@phenom.dumpdata.com>
References: <1353324150-21785-1-git-send-email-sachin.kamat@linaro.org>
	<CAK9yfHwz3jUpT6tRR4hUC9kLfocEbC-pkmwjiOkgmAmkPrZ-6Q@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAK9yfHwz3jUpT6tRR4hUC9kLfocEbC-pkmwjiOkgmAmkPrZ-6Q@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: konrad.wilk@oracle.com, xen-devel@lists.xensource.com, patches@linaro.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [Xen-devel] [PATCH 1/1] xen/xenbus: Remove duplicate inclusion
 of asm/xen/hypervisor.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Nov 27, 2012 at 09:03:26AM +0530, Sachin Kamat wrote:
> ping

I think I have this in my tree.
> 
> On 19 November 2012 16:52, Sachin Kamat <sachin.kamat@linaro.org> wrote:
> > asm/xen/hypervisor.h was included twice.
> >
> > Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
> > ---
> >  drivers/xen/xenbus/xenbus_xs.c |    1 -
> >  1 files changed, 0 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
> > index acedeab..88e677b 100644
> > --- a/drivers/xen/xenbus/xenbus_xs.c
> > +++ b/drivers/xen/xenbus/xenbus_xs.c
> > @@ -48,7 +48,6 @@
> >  #include <xen/xenbus.h>
> >  #include <xen/xen.h>
> >  #include "xenbus_comms.h"
> > -#include <asm/xen/hypervisor.h>
> >
> >  struct xs_stored_msg {
> >         struct list_head list;
> > --
> > 1.7.4.1
> >
> 
> 
> 
> -- 
> With warm regards,
> Sachin
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:19:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5Jw-0007dn-UQ; Fri, 07 Dec 2012 21:18:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Th5Jv-0007dd-LE
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:18:55 +0000
Received: from [85.158.143.35:42428] by server-1.bemta-4.messagelabs.com id
	59/1D-28401-F3D52C05; Fri, 07 Dec 2012 21:18:55 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1354915134!13208238!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2723 invoked from network); 7 Dec 2012 21:18:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:18:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="4897"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 21:18:55 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1; Fri, 7 Dec 2012
	21:18:53 +0000
Message-ID: <50C25D3D.10407@citrix.com>
Date: Fri, 7 Dec 2012 21:18:53 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
	<3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
	<50C1E67F02000078000AEEEF@nat28.tlf.novell.com>
In-Reply-To: <50C1E67F02000078000AEEEF@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 V3] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/2012 11:52, Jan Beulich wrote:
>>>> On 06.12.12 at 22:42, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> +
>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>> +     * invokes do_nmi_crash (above), which cause them to write state and
>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>> +     * cause it to return to this function ASAP.
>> +     */
>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>> +        if ( idt_tables[i] )
>> +        {
>> +
>> +            if ( i == cpu )
>> +            {
>> +                /* Disable the interrupt stack tables for this MCE and
>> +                 * NMI handler (shortly to become a nop) as there is a 1
>> +                 * instruction race window where NMIs could be
>> +                 * re-enabled and corrupt the exception frame, leaving
>> +                 * us unable to continue on this crash path (which half
>> +                 * defeats the point of using the nop handler in the
>> +                 * first place).
>> +                 *
>> +                 * This update is safe from a security point of view, as
>> +                 * this pcpu is never going to try to sysret back to a
>> +                 * PV vcpu.
>> +                 */
>> +                set_ist(&idt_tables[i][TRAP_nmi],           IST_NONE);
>> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
>> +
>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> This makes the first set_ist() above pointless, doesn't it?

No.  The set_ist() is to remove the possibility of stack corruption from
reentrant NMIs, while the trap_nop handler is so we don't get diverted
into the regular NMI handler.  There is nothing the NMI handler would do
which could alter the outcome, and there are many cases where the
regular NMI handler would try to panic, starting us reentrantly on the
kexec path again (where we would deadlock on the one_cpu_only() check).

Given that we are going to crash, any time spent in the NMI handler is
time not spent trying to kill the other pcpus, which themselves might be
heading towards a cascade failure.
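[Editorial sketch] The split Andrew describes above (crashing CPU gets a no-op NMI gate with its IST entry cleared first; every other CPU gets the crash handler) can be modeled in plain C. All names here (idt_entry_t, IST_NONE, NR_CPUS, the handler stubs) are illustrative stand-ins, not Xen's real types or the real descriptor layout:

```c
#include <assert.h>

/* Illustrative model only: a real IDT entry is a packed gate descriptor,
 * not a struct of a function pointer and an int. */
#define NR_CPUS   4
#define NR_TRAPS  32
#define TRAP_NMI  2
#define IST_NONE  0

typedef struct {
    void (*handler)(void);
    int ist;
} idt_entry_t;

static void nmi_crash(void) {}  /* non-crashing CPUs: save state, park */
static void trap_nop(void)  {}  /* crashing CPU: return immediately */

static idt_entry_t idt[NR_CPUS][NR_TRAPS];

static void redirect_nmis(int crashing_cpu)
{
    for (int i = 0; i < NR_CPUS; ++i) {
        if (i == crashing_cpu) {
            /* Clear the IST slot *before* swapping the handler, so an NMI
             * arriving in the one-instruction race window cannot re-enter
             * on the dedicated stack and corrupt the exception frame. */
            idt[i][TRAP_NMI].ist = IST_NONE;
            idt[i][TRAP_NMI].handler = trap_nop;
        } else {
            idt[i][TRAP_NMI].handler = nmi_crash;
        }
    }
}
```

The ordering comment is the point of the exchange: the set_ist() call is not made redundant by installing trap_nop, because it protects the frame during the window before the nop gate takes effect.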

>
>> +            }
>> +            else
>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
>> +        }
>> +
>>      /* Ensure the new callback function is set before sending out the NMI. */
>>      wmb();
>>  ...
>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>> +ENTRY(enable_nmis)
>> +        pushq %rax
> What's the point of saving %rax here, btw?
>
> Jan

Because at the moment I believe I might need to call it from asm
context, when doing some of the later fixes.  I figured that it was
better to make it safe now, rather than patch it up later.

~Andrew

>
>> +        movq  %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
>> +
>> +/* No op trap handler.  Required for kexec crash path.
>> + * It is not used in performance critical code, and saves having a single
>> + * lone aligned iret. It does not use ENTRY to declare the symbol to avoid the
>> + * explicit alignment. */
>> +.globl trap_nop;
>> +trap_nop:
>> +
>> +        iretq /* Disable the hardware NMI latch */
>> +1:
>> +        popq %rax
>> +        retq
>> +
>>  .section .rodata, "a", @progbits
>>  
>>  ENTRY(exception_table)
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:23:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5Oa-0007sK-O7; Fri, 07 Dec 2012 21:23:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th5OY-0007s8-PQ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:23:42 +0000
Received: from [85.158.143.35:3755] by server-2.bemta-4.messagelabs.com id
	88/1B-30861-E5E52C05; Fri, 07 Dec 2012 21:23:42 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1354915420!13987528!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6129 invoked from network); 7 Dec 2012 21:23:41 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:23:41 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so991287vbi.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 13:23:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=rkWOfEccHGnO71XuMzzM3KjI2wc+H7hb8Sq4KbbesX8=;
	b=iI3WwaWXpgXef7PDh5b8IoKjQZTqdNLr2ZCYgVjYurJGiPiuF3HGegt+UWWu4ocVIp
	cAPU4wdVEoeklePKU+4UM+GtA8D/IulOBL6V67bioEv+/ONgoUA6q5xoIcb186KWpa4I
	/RtosX1hHmE5cODf/NfirlhCo1QI8LLQ+hHpnKkFJuV3A/CeO3AYHWRBnjMMySy9VuNu
	pgsbk+kz7Haq66ZSttKXEScjjABTTuv6hoUYBbt25x46Ms60mHlos1msMjh66d5pQXO1
	3zrLwwSqWy7Nq76tLl6ZVn51SGm0Qr1kAkBOFd7GfyfW+sqxlgCykEjA8oRNTBun0nyp
	O/PQ==
Received: by 10.52.35.129 with SMTP id h1mr4054262vdj.74.1354915419839;
	Fri, 07 Dec 2012 13:23:39 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id ey7sm3826894ved.0.2012.12.07.13.23.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 13:23:39 -0800 (PST)
Date: Fri, 7 Dec 2012 16:23:37 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207212336.GD9664@phenom.dumpdata.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Paolo Bonzini <pbonzini@redhat.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Nov 30, 2012 at 08:33:34AM +0000, Jan Beulich wrote:
> >>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On some machines, the location at 0x40e does not point to the beginning
> > of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
> > area of the EBDA, while the option ROMs place their data below that
> > segment.
> > 
> > For this reason, 0x413 is actually a better source than 0x40e to get
> > the location of the real-mode trampoline.  But it is even better to
> > fetch the information from the multiboot structure, where the boot
> > loader has placed the data for us already.
> 
> I think if anything we really should make this a minimum calculation
> of all three (sanity checked) values, rather than throwing the other
> sources out. It's just not certain enough that we can trust all
> multiboot implementations.
> 
> Of course, ideally we'd consult the memory map, but the E820 one
> is unavailable at that point (and getting at it would create a
> chicken-and-egg problem).

Can we scan the memory for the possible EBDA regions? There is an
"EBDA"-type header in those regions, if I recall correctly?
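[Editorial sketch] The segment arithmetic in the quoted head.S hunk (shl $10-4; sub $0x1000) can be written out in C to make the units explicit. The function name is hypothetical; the constants mirror the assembly: base memory size in KiB (from the multiboot mem_lower field, or the BDA word at 0x413 as fallback) is converted to a real-mode segment, i.e. 16-byte paragraphs, and then 0x1000 paragraphs (64 KiB) are stepped back for the trampoline:

```c
#include <assert.h>
#include <stdint.h>

/* KiB of base memory -> real-mode segment of a 64 KiB trampoline area
 * placed just below the top of low memory.  Sketch of the patch's math,
 * not Xen's actual code. */
static uint32_t trampoline_segment(uint32_t base_mem_kib)
{
    /* KiB -> bytes is <<10, bytes -> paragraphs is >>4, hence <<(10-4). */
    uint32_t top_seg = base_mem_kib << (10 - 4);
    return top_seg - 0x1000;   /* reserve 64 KiB (0x1000 paragraphs) */
}
```

For the classic 640 KiB of base memory this yields segment 0x9000, i.e. the trampoline occupies 0x90000-0x9FFFF, directly below the 0xA0000 video hole.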
> 
> Jan
> 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  xen/arch/x86/boot/head.S | 21 ++++++++++++---------
> >  1 file changed, 12 insertions(+), 9 deletions(-)
> > 
> > diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> > index 7efa155..1790462 100644
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -78,16 +78,19 @@ __start:
> >          cmp     $0x2BADB002,%eax
> >          jne     not_multiboot
> >  
> > -        /* Set up trampoline segment 64k below EBDA */
> > -        movzwl  0x40e,%eax          /* EBDA segment */
> > -        cmp     $0xa000,%eax        /* sanity check (high) */
> > -        jae     0f
> > -        cmp     $0x4000,%eax        /* sanity check (low) */
> > -        jae     1f
> > -0:
> > -        movzwl  0x413,%eax          /* use base memory size on failure */
> > -        shl     $10-4,%eax
> > +        /* Set up trampoline segment just below end of base memory.
> > +         * Prefer to get this information from the multiboot
> > +         * structure, if available.
> > +         */
> > +        mov     4(%ebx),%eax        /* kb of low memory */
> > +        testb   $1,(%ebx)           /* test MBI_MEMLIMITS */
> > +        jnz     1f
> > +
> > +        movzwl  0x413,%eax          /* base memory size in kb */
> >  1:
> > +        shl     $10-4,%eax          /* convert to a segment number */
> > +
> > +        /* Reserve 64kb for the trampoline */
> >          sub     $0x1000,%eax
> >  
> >          /* From arch/x86/smpboot.c: start_eip had better be page-aligned! */
> > -- 
> > 1.8.0
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org 
> > http://lists.xen.org/xen-devel 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:23:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5Oa-0007sK-O7; Fri, 07 Dec 2012 21:23:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th5OY-0007s8-PQ
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:23:42 +0000
Received: from [85.158.143.35:3755] by server-2.bemta-4.messagelabs.com id
	88/1B-30861-E5E52C05; Fri, 07 Dec 2012 21:23:42 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1354915420!13987528!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6129 invoked from network); 7 Dec 2012 21:23:41 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:23:41 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so991287vbi.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 13:23:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=rkWOfEccHGnO71XuMzzM3KjI2wc+H7hb8Sq4KbbesX8=;
	b=iI3WwaWXpgXef7PDh5b8IoKjQZTqdNLr2ZCYgVjYurJGiPiuF3HGegt+UWWu4ocVIp
	cAPU4wdVEoeklePKU+4UM+GtA8D/IulOBL6V67bioEv+/ONgoUA6q5xoIcb186KWpa4I
	/RtosX1hHmE5cODf/NfirlhCo1QI8LLQ+hHpnKkFJuV3A/CeO3AYHWRBnjMMySy9VuNu
	pgsbk+kz7Haq66ZSttKXEScjjABTTuv6hoUYBbt25x46Ms60mHlos1msMjh66d5pQXO1
	3zrLwwSqWy7Nq76tLl6ZVn51SGm0Qr1kAkBOFd7GfyfW+sqxlgCykEjA8oRNTBun0nyp
	O/PQ==
Received: by 10.52.35.129 with SMTP id h1mr4054262vdj.74.1354915419839;
	Fri, 07 Dec 2012 13:23:39 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id ey7sm3826894ved.0.2012.12.07.13.23.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 13:23:39 -0800 (PST)
Date: Fri, 7 Dec 2012 16:23:37 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207212336.GD9664@phenom.dumpdata.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Paolo Bonzini <pbonzini@redhat.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Nov 30, 2012 at 08:33:34AM +0000, Jan Beulich wrote:
> >>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On some machines, the location at 0x40e does not point to the beginning
> > of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
> > area of the EBDA, while the option ROMs place their data below that
> > segment.
> > 
> > For this reason, 0x413 is actually a better source than 0x40e to get
> > the location of the real-mode trampoline.  But it is even better to
> > fetch the information from the multiboot structure, where the boot
> > loader has placed the data for us already.
> 
> I think if anything we really should make this a minimum calculation
> of all three (sanity checked) values, rather than throwing the other
> sources out. It's just not certain enough that we can trust all
> multiboot implementations.
> 
> Of course, ideally we'd consult the memory map, but the E820 one
> is unavailable at that point (and getting at it would create a
> chicken-and-egg problem).

Can we scan memory for the possible EBDA regions? There is an
"EBDA"-type header in those regions, if I recall correctly?
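For reference, Jan's suggestion above - take the minimum of all (sanity-checked) sources rather than trusting any single one - could be sketched in C along these lines. This is purely illustrative: the real logic lives in assembly in xen/arch/x86/boot/head.S, the function and constant names here are made up, and the sanity bounds are simplified from the ones in the patch (the original only bounds-checks the 0x40e EBDA segment).

```c
/* Hypothetical C rendering of the "minimum of all sanity-checked
 * sources" idea.  Segment values are in 16-byte paragraphs. */

#define SEG_HIGH 0xa000u   /* 640k: highest plausible end of base memory */
#define SEG_LOW  0x4000u   /* 256k: lowest plausible end of base memory */

/* Treat an out-of-range value as "no information" (i.e. the maximum). */
static unsigned int sane_or_max(unsigned int seg)
{
    return (seg >= SEG_LOW && seg <= SEG_HIGH) ? seg : SEG_HIGH;
}

/* ebda_seg: word at 0x40e; base_kb: word at 0x413; mb_lower_kb:
 * mem_lower from the multiboot info (valid only if have_mb is set). */
unsigned int trampoline_seg(unsigned int ebda_seg, unsigned int base_kb,
                            unsigned int mb_lower_kb, int have_mb)
{
    unsigned int seg = sane_or_max(ebda_seg);
    unsigned int s2 = sane_or_max(base_kb << (10 - 4)); /* kb -> segment */

    if (s2 < seg)
        seg = s2;
    if (have_mb) {
        unsigned int s3 = sane_or_max(mb_lower_kb << (10 - 4));
        if (s3 < seg)
            seg = s3;
    }
    /* Reserve 64kb (0x1000 paragraphs) for the trampoline. */
    return seg - 0x1000;
}
```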
> 
> Jan
> 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  xen/arch/x86/boot/head.S | 21 ++++++++++++---------
> >  1 file changed, 12 insertions(+), 9 deletions(-)
> > 
> > diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> > index 7efa155..1790462 100644
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -78,16 +78,19 @@ __start:
> >          cmp     $0x2BADB002,%eax
> >          jne     not_multiboot
> >  
> > -        /* Set up trampoline segment 64k below EBDA */
> > -        movzwl  0x40e,%eax          /* EBDA segment */
> > -        cmp     $0xa000,%eax        /* sanity check (high) */
> > -        jae     0f
> > -        cmp     $0x4000,%eax        /* sanity check (low) */
> > -        jae     1f
> > -0:
> > -        movzwl  0x413,%eax          /* use base memory size on failure */
> > -        shl     $10-4,%eax
> > +        /* Set up trampoline segment just below end of base memory.
> > +         * Prefer to get this information from the multiboot
> > +         * structure, if available.
> > +         */
> > +        mov     4(%ebx),%eax        /* kb of low memory */
> > +        testb   $1,(%ebx)           /* test MBI_MEMLIMITS */
> > +        jnz     1f
> > +
> > +        movzwl  0x413,%eax          /* base memory size in kb */
> >  1:
> > +        shl     $10-4,%eax          /* convert to a segment number */
> > +
> > +        /* Reserve 64kb for the trampoline */
> >          sub     $0x1000,%eax
> >  
> >          /* From arch/x86/smpboot.c: start_eip had better be page-aligned! */
> > -- 
> > 1.8.0
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org 
> > http://lists.xen.org/xen-devel 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:26:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5Qb-00081l-DT; Fri, 07 Dec 2012 21:25:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th5QZ-00081T-Vl
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:25:48 +0000
Received: from [85.158.139.83:10834] by server-3.bemta-5.messagelabs.com id
	00/BC-25441-BDE52C05; Fri, 07 Dec 2012 21:25:47 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1354915540!17648768!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8714 invoked from network); 7 Dec 2012 21:25:41 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:25:41 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so990630vcb.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 13:25:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=9nrv+cazEct0erfAH7MT2V74ivFxmKOVbIkEUtbtp40=;
	b=D0Yn+2o6eJkqITMeONDYudoqdynXDIUiYV2qVEHK43JZO0NDc20Mm78yIzJud143Bj
	Tv4t4MG1VCufJlMmN3wo2tYpPKsNApyqDstbSNJ2vlRtfKiB+fWE7Am41J2HN5zt34ro
	f9yP9wvg6TaDw/lko08u1tLnP1v8bD49sksNJkKD7OD9MyR7rZTnGlxWAkmp4qDt+yv8
	k3/rfS5wU1Z2GbYNDoYF+2TzXOyt/jsc2/a1B0gGVZmFrNLW7acOLHcvZnoed7OR9Q21
	ut9PHiSTn3xsLN428cfNcDpfUvfUQpBOjSEovW44DyC9zWTotUf4xmYvHV6KuXQVxQ8S
	ehiQ==
Received: by 10.52.89.68 with SMTP id bm4mr4015776vdb.129.1354915540133;
	Fri, 07 Dec 2012 13:25:40 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id a10sm3812909vez.10.2012.12.07.13.25.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 13:25:39 -0800 (PST)
Date: Fri, 7 Dec 2012 16:25:37 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Message-ID: <20121207212536.GE9664@phenom.dumpdata.com>
References: <50B4D060.9070403@jhuapl.edu>
	<1354029286-17652-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354029286-17652-2-git-send-email-dgdegra@tycho.nsa.gov>
	<50B76DBB.90504@jhuapl.edu>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B76DBB.90504@jhuapl.edu>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] stubdom: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >+   snprintf(path, 512, "backend/vtpm/%u/%u/feature-protocol-v2", (unsigned int) tpmif->domid, tpmif->handle);
> >+   if ((err = xenbus_write(XBT_NIL, path, "1")))
> >+   {
> >+      /* if we got an error here we should carefully remove the interface and then return */
> >+      TPMBACK_ERR("Unable to write feature-protocol-v2 node: %s\n", err);
> >+      free(err);
> >+      remove_tpmif(tpmif);
> >+      goto error_post_irq;
> >+   }
> >+
> My preference is still to do away with the versioning stuff since
> the vTPM is just getting released. It's not even in Linux yet, so there
> is no confusion. We can even merge the Linux patches together and
> resubmit as one if that's preferable. Konrad, Ian, your final votes
> on that?

I am up for just removing the versioning stuff - and if one really
wants to be fool-proof, rename 'backend/vtpm' to 'backend/vtpm2',
perhaps?
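For context, the negotiation the quoted hunk implements could be sketched from the frontend's point of view as follows. This is a hypothetical illustration: `vtpm_protocol_version` is not a real function, and the xenstore read itself is stubbed out - only the decision logic is shown.

```c
#include <string.h>

/* node_val: contents of backend/vtpm/<domid>/<handle>/feature-protocol-v2
 * as read from xenstore, or NULL if the node does not exist (an older
 * backend that never wrote it). */
int vtpm_protocol_version(const char *node_val)
{
    /* Absent or anything other than "1" means the backend only
     * speaks the original (v1) shared page layout. */
    if (node_val == NULL || strcmp(node_val, "1") != 0)
        return 1;
    return 2;
}
```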


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 21:35:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 21:35:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th5ZH-0008Om-Dd; Fri, 07 Dec 2012 21:34:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Th5ZG-0008Oh-0h
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 21:34:46 +0000
Received: from [85.158.143.99:48860] by server-3.bemta-4.messagelabs.com id
	D1/8D-18211-5F062C05; Fri, 07 Dec 2012 21:34:45 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1354916081!18914473!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31948 invoked from network); 7 Dec 2012 21:34:42 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 21:34:42 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so999604vbi.32
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 13:34:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=r/4w3yvT4M+Eg0Q41u3zU+dpqDslkv0LjXJzeZQSfLs=;
	b=ypNgSfWOM050Pm+2WRt/JvCrPrcgJEw/yU1RaKEOqZYpR24Lz9cBDbRP+eoCfmGY4N
	IC0Q7h0S38bwPo+9mXCcamqs/heiTgfO5AtzQtCimV/bVZD91uWiOx+EwfMfAVwiLHP6
	dI8r/wEULMa7vDESDrP9Ek0kPjZbe+IpPk9Kajqre9FZ3wcouIE63vClYKVY2aWjpP2k
	/Crj5oW1MtebWUXoIs24Gs3YaX5p5EpLWDUS+nQbmLj64o6kTnmOZrrqGJ1y8PI/nIbm
	2D2sbyHHeOI8kxfaqqO1c7Mb8V3cn7wIBs7ylHmYyhoTHv4SZ0nmeoh7Bn1enLG4nacI
	1WqA==
Received: by 10.220.238.148 with SMTP id ks20mr4689193vcb.5.1354916081218;
	Fri, 07 Dec 2012 13:34:41 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id a10sm3820445vez.10.2012.12.07.13.34.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 07 Dec 2012 13:34:40 -0800 (PST)
Date: Fri, 7 Dec 2012 16:34:38 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207213437.GF9664@phenom.dumpdata.com>
References: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] vscsiif: allow larger segments-per-request
 values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Nov 27, 2012 at 11:37:31AM +0000, Jan Beulich wrote:
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.

How do you deal with migration to older backends?
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).

Right. If we could separate those two, it would solve that.
So: separate 'request' and 'response' rings.


> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.

No 'feature-segs-per-req'?
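The pre-set scheme described above means one logical I/O request can occupy several ring slots: some number of SG_PRESET slots carrying segment descriptors, then the main request carrying the rest. A hypothetical C sketch of the slot accounting a frontend might do (`slots_for_request` is made up; VSCSIIF_SG_TABLESIZE is the value from vscsiif.h, while the SG_LIST_SIZE value here is merely illustrative, since the real one is derived from sizeof(vscsiif_request_t)):

```c
#define VSCSIIF_SG_TABLESIZE 26  /* inline segments, from vscsiif.h */
#define VSCSIIF_SG_LIST_SIZE 52  /* segments per SG_PRESET slot; illustrative */

/* Returns the number of ring slots consumed for nr_segs segments:
 * one SG_PRESET slot per chunk of pre-set segments, plus the main
 * request carrying the final <= VSCSIIF_SG_TABLESIZE segments.  All
 * slots for one request share the same rqid, per the patch. */
unsigned int slots_for_request(unsigned int nr_segs)
{
    unsigned int slots = 1;  /* the main request itself */

    while (nr_segs > VSCSIIF_SG_TABLESIZE) {
        unsigned int chunk = nr_segs - VSCSIIF_SG_TABLESIZE;

        if (chunk > VSCSIIF_SG_LIST_SIZE)
            chunk = VSCSIIF_SG_LIST_SIZE;
        nr_segs -= chunk;    /* these go out in an SG_PRESET slot */
        slots++;
    }
    return slots;
}
```

Note the backend side of this accounting in the quoted kernel patch: pending_req->nr_segments accumulates across SG_PRESET slots, and prepare_pending_reqs() rejects the request once the running total would exceed the negotiated vscsiif_segs.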

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.

> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;
> 
> 
> 

> vscsiif: allow larger segments-per-request values
> 
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;

> --- sle11sp3.orig/drivers/xen/scsiback/common.h	2012-06-06 13:53:26.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/common.h	2012-11-22 14:55:58.000000000 +0100
> @@ -94,10 +94,15 @@ struct vscsibk_info {
>  	unsigned int waiting_reqs;
>  	struct page **mmap_pages;
>  
> +	struct pending_req *preq;
> +
> +	union {
> +		struct gnttab_map_grant_ref   *gmap;
> +		struct gnttab_unmap_grant_ref *gunmap;
> +	};
>  };
>  
> -typedef struct {
> -	unsigned char act;
> +typedef struct pending_req {
>  	struct vscsibk_info *info;
>  	struct scsi_device *sdev;
>  
> @@ -114,7 +119,8 @@ typedef struct {
>  	
>  	uint32_t request_bufflen;
>  	struct scatterlist *sgl;
> -	grant_ref_t gref[VSCSIIF_SG_TABLESIZE];
> +	grant_ref_t *gref;
> +	vscsiif_segment_t *segs;
>  
>  	int32_t rslt;
>  	uint32_t resid;
> @@ -123,7 +129,7 @@ typedef struct {
>  	struct list_head free_list;
>  } pending_req_t;
>  
> -
> +extern unsigned int vscsiif_segs;
>  
>  #define scsiback_get(_b) (atomic_inc(&(_b)->nr_unreplied_reqs))
>  #define scsiback_put(_b)				\
> @@ -163,7 +169,7 @@ void scsiback_release_translation_entry(
>  
>  void scsiback_cmd_exec(pending_req_t *pending_req);
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req);
> +			uint32_t resid, pending_req_t *, uint8_t act);
>  void scsiback_fast_flush_area(pending_req_t *req);
>  
>  void scsiback_rsp_emulation(pending_req_t *pending_req);
> --- sle11sp3.orig/drivers/xen/scsiback/emulate.c	2012-01-11 12:14:54.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/emulate.c	2012-11-22 14:29:27.000000000 +0100
> @@ -352,7 +352,9 @@ void scsiback_req_emulation_or_cmdexec(p
>  	else {
>  		scsiback_fast_flush_area(pending_req);
>  		scsiback_do_resp_with_sense(pending_req->sense_buffer,
> -		  pending_req->rslt, pending_req->resid, pending_req);
> +					    pending_req->rslt,
> +					    pending_req->resid, pending_req,
> +					    VSCSIIF_ACT_SCSI_CDB);
>  	}
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/interface.c	2011-10-10 11:58:37.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/interface.c	2012-11-13 13:21:10.000000000 +0100
> @@ -51,6 +51,13 @@ struct vscsibk_info *vscsibk_info_alloc(
>  	if (!info)
>  		return ERR_PTR(-ENOMEM);
>  
> +	info->gmap = kcalloc(max(sizeof(*info->gmap), sizeof(*info->gunmap)),
> +			     vscsiif_segs, GFP_KERNEL);
> +	if (!info->gmap) {
> +		kfree(info);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
>  	info->domid = domid;
>  	spin_lock_init(&info->ring_lock);
>  	atomic_set(&info->nr_unreplied_reqs, 0);
> @@ -120,6 +127,7 @@ void scsiback_disconnect(struct vscsibk_
>  
>  void scsiback_free(struct vscsibk_info *info)
>  {
> +	kfree(info->gmap);
>  	kmem_cache_free(scsiback_cachep, info);
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:11.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:16.000000000 +0100
> @@ -56,6 +56,10 @@ int vscsiif_reqs = VSCSIIF_BACK_MAX_PEND
>  module_param_named(reqs, vscsiif_reqs, int, 0);
>  MODULE_PARM_DESC(reqs, "Number of scsiback requests to allocate");
>  
> +unsigned int vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(segs, vscsiif_segs, uint, 0);
> +MODULE_PARM_DESC(segs, "Number of segments to allow per request");
> +
>  static unsigned int log_print_stat = 0;
>  module_param(log_print_stat, int, 0644);
>  
> @@ -67,7 +71,7 @@ static grant_handle_t *pending_grant_han
>  
>  static int vaddr_pagenr(pending_req_t *req, int seg)
>  {
> -	return (req - pending_reqs) * VSCSIIF_SG_TABLESIZE + seg;
> +	return (req - pending_reqs) * vscsiif_segs + seg;
>  }
>  
>  static unsigned long vaddr(pending_req_t *req, int seg)
> @@ -82,7 +86,7 @@ static unsigned long vaddr(pending_req_t
>  
>  void scsiback_fast_flush_area(pending_req_t *req)
>  {
> -	struct gnttab_unmap_grant_ref unmap[VSCSIIF_SG_TABLESIZE];
> +	struct gnttab_unmap_grant_ref *unmap = req->info->gunmap;
>  	unsigned int i, invcount = 0;
>  	grant_handle_t handle;
>  	int err;
> @@ -117,6 +121,7 @@ static pending_req_t * alloc_req(struct 
>  	if (!list_empty(&pending_free)) {
>  		req = list_entry(pending_free.next, pending_req_t, free_list);
>  		list_del(&req->free_list);
> +		req->nr_segments = 0;
>  	}
>  	spin_unlock_irqrestore(&pending_free_lock, flags);
>  	return req;
> @@ -144,7 +149,8 @@ static void scsiback_notify_work(struct 
>  }
>  
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req)
> +				 uint32_t resid, pending_req_t *pending_req,
> +				 uint8_t act)
>  {
>  	vscsiif_response_t *ring_res;
>  	struct vscsibk_info *info = pending_req->info;
> @@ -159,6 +165,7 @@ void scsiback_do_resp_with_sense(char *s
>  	ring_res = RING_GET_RESPONSE(&info->ring, info->ring.rsp_prod_pvt);
>  	info->ring.rsp_prod_pvt++;
>  
> +	ring_res->act    = act;
>  	ring_res->rslt   = result;
>  	ring_res->rqid   = pending_req->rqid;
>  
> @@ -186,7 +193,8 @@ void scsiback_do_resp_with_sense(char *s
>  	if (notify)
>  		notify_remote_via_irq(info->irq);
>  
> -	free_req(pending_req);
> +	if (act != VSCSIIF_ACT_SCSI_SG_PRESET)
> +		free_req(pending_req);
>  }
>  
>  static void scsiback_print_status(char *sense_buffer, int errors,
> @@ -225,25 +233,25 @@ static void scsiback_cmd_done(struct req
>  		scsiback_rsp_emulation(pending_req);
>  
>  	scsiback_fast_flush_area(pending_req);
> -	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req);
> +	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req,
> +				    VSCSIIF_ACT_SCSI_CDB);
>  	scsiback_put(pending_req->info);
>  
>  	__blk_put_request(req->q, req);
>  }
>  
>  
> -static int scsiback_gnttab_data_map(vscsiif_request_t *ring_req,
> -					pending_req_t *pending_req)
> +static int scsiback_gnttab_data_map(const vscsiif_segment_t *segs,
> +				    unsigned int nr_segs,
> +				    pending_req_t *pending_req)
>  {
>  	u32 flags;
> -	int write;
> -	int i, err = 0;
> -	unsigned int data_len = 0;
> -	struct gnttab_map_grant_ref map[VSCSIIF_SG_TABLESIZE];
> +	int write, err = 0;
> +	unsigned int i, j, data_len = 0;
>  	struct vscsibk_info *info   = pending_req->info;
> -
> +	struct gnttab_map_grant_ref *map = info->gmap;
>  	int data_dir = (int)pending_req->sc_data_direction;
> -	unsigned int nr_segments = (unsigned int)pending_req->nr_segments;
> +	unsigned int nr_segments = pending_req->nr_segments + nr_segs;
>  
>  	write = (data_dir == DMA_TO_DEVICE);
>  
> @@ -264,14 +272,20 @@ static int scsiback_gnttab_data_map(vscs
>  		if (write)
>  			flags |= GNTMAP_readonly;
>  
> -		for (i = 0; i < nr_segments; i++)
> +		for (i = 0; i < pending_req->nr_segments; i++)
>  			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> -						ring_req->seg[i].gref,
> +						pending_req->segs[i].gref,
> +						info->domid);
> +		for (j = 0; i < nr_segments; i++, j++)
> +			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> +						segs[j].gref,
>  						info->domid);
>  
> +
>  		err = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nr_segments);
>  		BUG_ON(err);
>  
> +		j = 0;
>  		for_each_sg (pending_req->sgl, sg, nr_segments, i) {
>  			struct page *pg;
>  
> @@ -294,8 +308,15 @@ static int scsiback_gnttab_data_map(vscs
>  			set_phys_to_machine(page_to_pfn(pg),
>  				FOREIGN_FRAME(map[i].dev_bus_addr >> PAGE_SHIFT));
>  
> -			sg_set_page(sg, pg, ring_req->seg[i].length,
> -				    ring_req->seg[i].offset);
> +			if (i < pending_req->nr_segments)
> +				sg_set_page(sg, pg,
> +					    pending_req->segs[i].length,
> +					    pending_req->segs[i].offset);
> +			else {
> +				sg_set_page(sg, pg, segs[j].length,
> +					    segs[j].offset);
> +				++j;
> +			}
>  			data_len += sg->length;
>  
>  			barrier();
> @@ -306,6 +327,8 @@ static int scsiback_gnttab_data_map(vscs
>  
>  		}
>  
> +		pending_req->nr_segments = nr_segments;
> +
>  		if (err)
>  			goto fail_flush;
>  	}
> @@ -471,7 +494,8 @@ static void scsiback_device_reset_exec(p
>  	scsiback_get(info);
>  	err = scsi_reset_provider(sdev, SCSI_TRY_RESET_DEVICE);
>  
> -	scsiback_do_resp_with_sense(NULL, err, 0, pending_req);
> +	scsiback_do_resp_with_sense(NULL, err, 0, pending_req,
> +				    VSCSIIF_ACT_SCSI_RESET);
>  	scsiback_put(info);
>  
>  	return;
> @@ -489,13 +513,11 @@ static int prepare_pending_reqs(struct v
>  {
>  	struct scsi_device *sdev;
>  	struct ids_tuple vir;
> +	unsigned int nr_segs;
>  	int err = -EINVAL;
>  
>  	DPRINTK("%s\n",__FUNCTION__);
>  
> -	pending_req->rqid       = ring_req->rqid;
> -	pending_req->act        = ring_req->act;
> -
>  	pending_req->info       = info;
>  
>  	pending_req->v_chn = vir.chn = ring_req->channel;
> @@ -525,11 +547,10 @@ static int prepare_pending_reqs(struct v
>  		goto invalid_value;
>  	}
>  
> -	pending_req->nr_segments = ring_req->nr_segments;
> +	nr_segs = ring_req->nr_segments;
>  	barrier();
> -	if (pending_req->nr_segments > VSCSIIF_SG_TABLESIZE) {
> -		DPRINTK("scsiback: invalid parameter nr_seg = %d\n",
> -			pending_req->nr_segments);
> +	if (pending_req->nr_segments + nr_segs > vscsiif_segs) {
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
>  		err = -EINVAL;
>  		goto invalid_value;
>  	}
> @@ -546,7 +567,7 @@ static int prepare_pending_reqs(struct v
>  	
>  	pending_req->timeout_per_command = ring_req->timeout_per_command;
>  
> -	if(scsiback_gnttab_data_map(ring_req, pending_req)) {
> +	if (scsiback_gnttab_data_map(ring_req->seg, nr_segs, pending_req)) {
>  		DPRINTK("scsiback: invalid buffer\n");
>  		err = -EINVAL;
>  		goto invalid_value;
> @@ -558,6 +579,20 @@ invalid_value:
>  	return err;
>  }
>  
> +static void latch_segments(pending_req_t *pending_req,
> +			   const struct vscsiif_sg_list *sgl)
> +{
> +	unsigned int nr_segs = sgl->nr_segments;
> +
> +	barrier();
> +	if (pending_req->nr_segments + nr_segs <= vscsiif_segs) {
> +		memcpy(pending_req->segs + pending_req->nr_segments,
> +		       sgl->seg, nr_segs * sizeof(*sgl->seg));
> +		pending_req->nr_segments += nr_segs;
> +	}
> +	else
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
> +}
>  
>  static int _scsiback_do_cmd_fn(struct vscsibk_info *info)
>  {
> @@ -575,9 +610,11 @@ static int _scsiback_do_cmd_fn(struct vs
>  	rmb();
>  
>  	while ((rc != rp)) {
> +		int act, rqid;
> +
>  		if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
>  			break;
> -		pending_req = alloc_req(info);
> +		pending_req = info->preq ?: alloc_req(info);
>  		if (NULL == pending_req) {
>  			more_to_do = 1;
>  			break;
> @@ -586,32 +623,55 @@ static int _scsiback_do_cmd_fn(struct vs
>  		ring_req = RING_GET_REQUEST(ring, rc);
>  		ring->req_cons = ++rc;
>  
> +		act = ring_req->act;
> +		rqid = ring_req->rqid;
> +		barrier();
> +		if (!pending_req->nr_segments)
> +			pending_req->rqid = rqid;
> +		else if (pending_req->rqid != rqid)
> +			DPRINTK("scsiback: invalid rqid %04x, expected %04x\n",
> +				rqid, pending_req->rqid);
> +
> +		info->preq = NULL;
> +		if (pending_req->rqid != rqid) {
> +			scsiback_do_resp_with_sense(NULL, DRIVER_INVALID << 24,
> +						    0, pending_req, act);
> +			continue;
> +		}
> +
> +		if (act == VSCSIIF_ACT_SCSI_SG_PRESET) {
> +			latch_segments(pending_req, (void *)ring_req);
> +			info->preq = pending_req;
> +			scsiback_do_resp_with_sense(NULL, 0, 0,
> +						    pending_req, act);
> +			continue;
> +		}
> +
>  		err = prepare_pending_reqs(info, ring_req,
>  						pending_req);
>  		if (err == -EINVAL) {
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		} else if (err == -ENODEV) {
>  			scsiback_do_resp_with_sense(NULL, (DID_NO_CONNECT << 16),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  
> -		if (pending_req->act == VSCSIIF_ACT_SCSI_CDB) {
> -
> +		if (act == VSCSIIF_ACT_SCSI_CDB) {
>  			/* The Host mode is through as for Emulation. */
>  			if (info->feature == VSCSI_TYPE_HOST)
>  				scsiback_cmd_exec(pending_req);
>  			else
>  				scsiback_req_emulation_or_cmdexec(pending_req);
>  
> -		} else if (pending_req->act == VSCSIIF_ACT_SCSI_RESET) {
> +		} else if (act == VSCSIIF_ACT_SCSI_RESET) {
>  			scsiback_device_reset_exec(pending_req);
>  		} else {
>  			pr_err("scsiback: invalid parameter for request\n");
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  	}
> @@ -673,17 +733,32 @@ static int __init scsiback_init(void)
>  	if (!is_running_on_xen())
>  		return -ENODEV;
>  
> -	mmap_pages = vscsiif_reqs * VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs < VSCSIIF_SG_TABLESIZE)
> +		vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs != (uint8_t)vscsiif_segs)
> +		return -EINVAL;
> +	mmap_pages = vscsiif_reqs * vscsiif_segs;
>  
>  	pending_reqs          = kzalloc(sizeof(pending_reqs[0]) *
>  					vscsiif_reqs, GFP_KERNEL);
> +	if (!pending_reqs)
> +		return -ENOMEM;
>  	pending_grant_handles = kmalloc(sizeof(pending_grant_handles[0]) *
>  					mmap_pages, GFP_KERNEL);
>  	pending_pages         = alloc_empty_pages_and_pagevec(mmap_pages);
>  
> -	if (!pending_reqs || !pending_grant_handles || !pending_pages)
> +	if (!pending_grant_handles || !pending_pages)
>  		goto out_of_memory;
>  
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		pending_reqs[i].gref = kcalloc(sizeof(*pending_reqs->gref),
> +					       vscsiif_segs, GFP_KERNEL);
> +		pending_reqs[i].segs = kcalloc(sizeof(*pending_reqs->segs),
> +					       vscsiif_segs, GFP_KERNEL);
> +		if (!pending_reqs[i].gref || !pending_reqs[i].segs)
> +			goto out_of_memory;
> +	}
> +
>  	for (i = 0; i < mmap_pages; i++)
>  		pending_grant_handles[i] = SCSIBACK_INVALID_HANDLE;
>  
> @@ -705,6 +780,10 @@ static int __init scsiback_init(void)
>  out_interface:
>  	scsiback_interface_exit();
>  out_of_memory:
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
>  	free_empty_pages_and_pagevec(pending_pages, mmap_pages);
> @@ -715,12 +794,17 @@ out_of_memory:
>  #if 0
>  static void __exit scsiback_exit(void)
>  {
> +	unsigned int i;
> +
>  	scsiback_xenbus_unregister();
>  	scsiback_interface_exit();
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
> -	free_empty_pages_and_pagevec(pending_pages, (vscsiif_reqs * VSCSIIF_SG_TABLESIZE));
> -
> +	free_empty_pages_and_pagevec(pending_pages, vscsiif_reqs * vscsiif_segs);
>  }
>  #endif
>  
> --- sle11sp3.orig/drivers/xen/scsiback/xenbus.c	2011-06-30 17:04:59.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/xenbus.c	2012-11-13 14:36:16.000000000 +0100
> @@ -339,6 +339,13 @@ static int scsiback_probe(struct xenbus_
>  	if (val)
>  		be->info->feature = VSCSI_TYPE_HOST;
>  
> +	if (vscsiif_segs > VSCSIIF_SG_TABLESIZE) {
> +		err = xenbus_printf(XBT_NIL, dev->nodename, "segs-per-req",
> +				    "%u", vscsiif_segs);
> +		if (err)
> +			xenbus_dev_error(dev, err, "writing segs-per-req");
> +	}
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> --- sle11sp3.orig/drivers/xen/scsifront/common.h	2011-01-31 17:29:16.000000000 +0100
> +++ sle11sp3/drivers/xen/scsifront/common.h	2012-11-22 13:45:50.000000000 +0100
> @@ -95,7 +95,7 @@ struct vscsifrnt_shadow {
>  
>  	/* requested struct scsi_cmnd is stored from kernel */
>  	unsigned long req_scsi_cmnd;
> -	int gref[VSCSIIF_SG_TABLESIZE];
> +	int gref[SG_ALL];
>  };
>  
>  struct vscsifrnt_info {
> @@ -110,7 +110,6 @@ struct vscsifrnt_info {
>  
>  	grant_ref_t ring_ref;
>  	struct vscsiif_front_ring ring;
> -	struct vscsiif_response	ring_res;
>  
>  	struct vscsifrnt_shadow shadow[VSCSIIF_MAX_REQS];
>  	uint32_t shadow_free;
> @@ -119,6 +118,12 @@ struct vscsifrnt_info {
>  	wait_queue_head_t wq;
>  	unsigned int waiting_resp;
>  
> +	struct {
> +		struct scsi_cmnd *sc;
> +		unsigned int rqid;
> +		unsigned int done;
> +		vscsiif_segment_t segs[];
> +	} active;
>  };
>  
>  #define DPRINTK(_f, _a...)				\
> --- sle11sp3.orig/drivers/xen/scsifront/scsifront.c	2011-06-28 18:57:14.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/scsifront.c	2012-11-22 16:37:35.000000000 +0100
> @@ -106,6 +106,66 @@ irqreturn_t scsifront_intr(int irq, void
>  	return IRQ_HANDLED;
>  }
>  
> +static bool push_cmd_to_ring(struct vscsifrnt_info *info,
> +			     vscsiif_request_t *ring_req)
> +{
> +	unsigned int left, rqid = info->active.rqid;
> +	struct scsi_cmnd *sc;
> +
> +	for (; ; ring_req = NULL) {
> +		struct vscsiif_sg_list *sgl;
> +
> +		if (!ring_req) {
> +			struct vscsiif_front_ring *ring = &info->ring;
> +
> +			ring_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +			ring->req_prod_pvt++;
> +			ring_req->rqid = rqid;
> +		}
> +
> +		left = info->shadow[rqid].nr_segments - info->active.done;
> +		if (left <= VSCSIIF_SG_TABLESIZE)
> +			break;
> +
> +		sgl = (void *)ring_req;
> +		sgl->act = VSCSIIF_ACT_SCSI_SG_PRESET;
> +
> +		if (left > VSCSIIF_SG_LIST_SIZE)
> +			left = VSCSIIF_SG_LIST_SIZE;
> +		memcpy(sgl->seg, info->active.segs + info->active.done,
> +		       left * sizeof(*sgl->seg));
> +
> +		sgl->nr_segments = left;
> +		info->active.done += left;
> +
> +		if (RING_FULL(&info->ring))
> +			return false;
> +	}
> +
> +	sc = info->active.sc;
> +
> +	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> +	ring_req->id      = sc->device->id;
> +	ring_req->lun     = sc->device->lun;
> +	ring_req->channel = sc->device->channel;
> +	ring_req->cmd_len = sc->cmd_len;
> +
> +	if ( sc->cmd_len )
> +		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> +	else
> +		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> +
> +	ring_req->sc_data_direction   = sc->sc_data_direction;
> +	ring_req->timeout_per_command = sc->request->timeout / HZ;
> +	ring_req->nr_segments         = left;
> +
> +	memcpy(ring_req->seg, info->active.segs + info->active.done,
> +               left * sizeof(*ring_req->seg));
> +
> +	info->active.sc = NULL;
> +
> +	return !RING_FULL(&info->ring);
> +}
>  
>  static void scsifront_gnttab_done(struct vscsifrnt_shadow *s, uint32_t id)
>  {
> @@ -194,6 +254,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		
>  		ring_res = RING_GET_RESPONSE(&info->ring, i);
>  
> +		if (info->host->sg_tablesize > VSCSIIF_SG_TABLESIZE) {
> +			u8 act = ring_res->act;
> +
> +			if (act == VSCSIIF_ACT_SCSI_SG_PRESET)
> +				continue;
> +			if (act != info->shadow[ring_res->rqid].act)
> +				DPRINTK("Bogus backend response (%02x vs %02x)\n",
> +					act, info->shadow[ring_res->rqid].act);
> +		}
> +
>  		if (info->shadow[ring_res->rqid].act == VSCSIIF_ACT_SCSI_CDB)
>  			scsifront_cdb_cmd_done(info, ring_res);
>  		else
> @@ -208,8 +278,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		info->ring.sring->rsp_event = i + 1;
>  	}
>  
> -	spin_unlock_irqrestore(&info->io_lock, flags);
> +	spin_unlock(&info->io_lock);
> +
> +	spin_lock(info->host->host_lock);
> +
> +	if (info->active.sc && !RING_FULL(&info->ring)) {
> +		push_cmd_to_ring(info, NULL);
> +		scsifront_do_request(info);
> +	}
>  
> +	spin_unlock_irqrestore(info->host->host_lock, flags);
>  
>  	/* Yield point for this unbounded loop. */
>  	cond_resched();
> @@ -242,7 +320,8 @@ int scsifront_schedule(void *data)
>  
>  
>  static int map_data_for_request(struct vscsifrnt_info *info,
> -		struct scsi_cmnd *sc, vscsiif_request_t *ring_req, uint32_t id)
> +				struct scsi_cmnd *sc,
> +				struct vscsifrnt_shadow *shadow)
>  {
>  	grant_ref_t gref_head;
>  	struct page *page;
> @@ -254,7 +333,7 @@ static int map_data_for_request(struct v
>  	if (sc->sc_data_direction == DMA_NONE)
>  		return 0;
>  
> -	err = gnttab_alloc_grant_references(VSCSIIF_SG_TABLESIZE, &gref_head);
> +	err = gnttab_alloc_grant_references(info->host->sg_tablesize, &gref_head);
>  	if (err) {
>  		pr_err("scsifront: gnttab_alloc_grant_references() error\n");
>  		return -ENOMEM;
> @@ -266,7 +345,7 @@ static int map_data_for_request(struct v
>  		unsigned int data_len = scsi_bufflen(sc);
>  
>  		nr_pages = (data_len + sgl->offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -		if (nr_pages > VSCSIIF_SG_TABLESIZE) {
> +		if (nr_pages > info->host->sg_tablesize) {
>  			pr_err("scsifront: Unable to map request_buffer for command!\n");
>  			ref_cnt = (-E2BIG);
>  			goto big_to_sg;
> @@ -294,10 +373,10 @@ static int map_data_for_request(struct v
>  				gnttab_grant_foreign_access_ref(ref, info->dev->otherend_id,
>  					buffer_pfn, write);
>  
> -				info->shadow[id].gref[ref_cnt]  = ref;
> -				ring_req->seg[ref_cnt].gref     = ref;
> -				ring_req->seg[ref_cnt].offset   = (uint16_t)off;
> -				ring_req->seg[ref_cnt].length   = (uint16_t)bytes;
> +				shadow->gref[ref_cnt] = ref;
> +				info->active.segs[ref_cnt].gref   = ref;
> +				info->active.segs[ref_cnt].offset = off;
> +				info->active.segs[ref_cnt].length = bytes;
>  
>  				buffer_pfn++;
>  				len -= bytes;
> @@ -336,34 +415,27 @@ static int scsifront_queuecommand(struct
>  		return SCSI_MLQUEUE_HOST_BUSY;
>  	}
>  
> +	if (info->active.sc && !push_cmd_to_ring(info, NULL)) {
> +		scsifront_do_request(info);
> +		spin_unlock_irqrestore(shost->host_lock, flags);
> +		return SCSI_MLQUEUE_HOST_BUSY;
> +	}
> +
>  	sc->result    = 0;
>  
>  	ring_req          = scsifront_pre_request(info);
>  	rqid              = ring_req->rqid;
> -	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> -
> -	ring_req->id      = sc->device->id;
> -	ring_req->lun     = sc->device->lun;
> -	ring_req->channel = sc->device->channel;
> -	ring_req->cmd_len = sc->cmd_len;
>  
>  	BUG_ON(sc->cmd_len > VSCSIIF_MAX_COMMAND_SIZE);
>  
> -	if ( sc->cmd_len )
> -		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> -	else
> -		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> -
> -	ring_req->sc_data_direction   = (uint8_t)sc->sc_data_direction;
> -	ring_req->timeout_per_command = (sc->request->timeout / HZ);
> -
>  	info->shadow[rqid].req_scsi_cmnd     = (unsigned long)sc;
>  	info->shadow[rqid].sc_data_direction = sc->sc_data_direction;
> -	info->shadow[rqid].act               = ring_req->act;
> +	info->shadow[rqid].act               = VSCSIIF_ACT_SCSI_CDB;
>  
> -	ref_cnt = map_data_for_request(info, sc, ring_req, rqid);
> +	ref_cnt = map_data_for_request(info, sc, &info->shadow[rqid]);
>  	if (ref_cnt < 0) {
>  		add_id_to_freelist(info, rqid);
> +		scsifront_do_request(info);
>  		spin_unlock_irqrestore(shost->host_lock, flags);
>  		if (ref_cnt == (-ENOMEM))
>  			return SCSI_MLQUEUE_HOST_BUSY;
> @@ -372,9 +444,13 @@ static int scsifront_queuecommand(struct
>  		return 0;
>  	}
>  
> -	ring_req->nr_segments          = (uint8_t)ref_cnt;
>  	info->shadow[rqid].nr_segments = ref_cnt;
>  
> +	info->active.sc  = sc;
> +	info->active.rqid = rqid;
> +	info->active.done = 0;
> +	push_cmd_to_ring(info, ring_req);
> +
>  	scsifront_do_request(info);
>  	spin_unlock_irqrestore(shost->host_lock, flags);
>  
> --- sle11sp3.orig/drivers/xen/scsifront/xenbus.c	2012-10-02 14:32:45.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/xenbus.c	2012-11-21 13:35:47.000000000 +0100
> @@ -43,6 +43,10 @@
>    #define DEFAULT_TASK_COMM_LEN	TASK_COMM_LEN
>  #endif
>  
> +static unsigned int max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(max_segs, max_nr_segs, uint, 0);
> +MODULE_PARM_DESC(max_segs, "Maximum number of segments per request");
> +
>  extern struct scsi_host_template scsifront_sht;
>  
>  static void scsifront_free(struct vscsifrnt_info *info)
> @@ -181,7 +185,9 @@ static int scsifront_probe(struct xenbus
>  	int i, err = -ENOMEM;
>  	char name[DEFAULT_TASK_COMM_LEN];
>  
> -	host = scsi_host_alloc(&scsifront_sht, sizeof(*info));
> +	host = scsi_host_alloc(&scsifront_sht,
> +			       offsetof(struct vscsifrnt_info,
> +					active.segs[max_nr_segs]));
>  	if (!host) {
>  		xenbus_dev_fatal(dev, err, "fail to allocate scsi host");
>  		return err;
> @@ -223,7 +229,7 @@ static int scsifront_probe(struct xenbus
>  	host->max_id      = VSCSIIF_MAX_TARGET;
>  	host->max_channel = 0;
>  	host->max_lun     = VSCSIIF_MAX_LUN;
> -	host->max_sectors = (VSCSIIF_SG_TABLESIZE - 1) * PAGE_SIZE / 512;
> +	host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
>  	host->max_cmd_len = VSCSIIF_MAX_COMMAND_SIZE;
>  
>  	err = scsi_add_host(host, &dev->dev);
> @@ -278,6 +284,23 @@ static int scsifront_disconnect(struct v
>  	return 0;
>  }
>  
> +static void scsifront_read_backend_params(struct xenbus_device *dev,
> +					  struct vscsifrnt_info *info)
> +{
> +	unsigned int nr_segs;
> +	int ret;
> +	struct Scsi_Host *host = info->host;
> +
> +	ret = xenbus_scanf(XBT_NIL, dev->otherend, "segs-per-req", "%u",
> +			   &nr_segs);
> +	if (ret == 1 && nr_segs > host->sg_tablesize) {
> +		host->sg_tablesize = min(nr_segs, max_nr_segs);
> +		dev_info(&dev->dev, "using up to %d SG entries\n",
> +			 host->sg_tablesize);
> +		host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
> +	}
> +}
> +
>  #define VSCSIFRONT_OP_ADD_LUN	1
>  #define VSCSIFRONT_OP_DEL_LUN	2
>  
> @@ -368,6 +391,7 @@ static void scsifront_backend_changed(st
>  		break;
>  
>  	case XenbusStateConnected:
> +		scsifront_read_backend_params(dev, info);
>  		if (xenbus_read_driver_state(dev->nodename) ==
>  			XenbusStateInitialised) {
>  			scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN);
> @@ -413,8 +437,13 @@ static DEFINE_XENBUS_DRIVER(scsifront, ,
>  	.otherend_changed	= scsifront_backend_changed,
>  );
>  
> -int scsifront_xenbus_init(void)
> +int __init scsifront_xenbus_init(void)
>  {
> +	if (max_nr_segs > SG_ALL)
> +		max_nr_segs = SG_ALL;
> +	if (max_nr_segs < VSCSIIF_SG_TABLESIZE)
> +		max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +
>  	return xenbus_register_frontend(&scsifront_driver);
>  }
>  
> --- sle11sp3.orig/include/xen/interface/io/vscsiif.h	2008-07-21 11:00:33.000000000 +0200
> +++ sle11sp3/include/xen/interface/io/vscsiif.h	2012-11-22 14:32:31.000000000 +0100
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  
>  #define VSCSIIF_BACK_MAX_PENDING_REQS    128
> @@ -53,6 +54,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -69,18 +76,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Dec 07 21:35:01 2012
Date: Fri, 7 Dec 2012 16:34:38 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207213437.GF9664@phenom.dumpdata.com>
References: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] vscsiif: allow larger segments-per-request
 values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Nov 27, 2012 at 11:37:31AM +0000, Jan Beulich wrote:
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.

How do you deal with migration to older backends?
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).

Right. If we could separate those two, that would solve it.
So: a separate 'request' ring and 'response' ring.


> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.

No 'feature-segs-per-req'?

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.

> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;
> 
> 
> 

> vscsiif: allow larger segments-per-request values
> 
> At least certain tape devices require fixed size blocks to be operated
> upon, i.e. breaking up of I/O requests is not permitted. Consequently
> we need an interface extension that (leaving aside implementation
> limitations) doesn't impose a limit on the number of segments that can
> be associated with an individual request.
> 
> This, in turn, excludes the blkif extension FreeBSD folks implemented,
> as that still imposes an upper limit (the actual I/O request still
> specifies the full number of segments - as an 8-bit quantity -, and
> subsequent ring slots get used to carry the excess segment
> descriptors).
> 
> The alternative therefore is to allow the frontend to pre-set segment
> descriptors _before_ actually issuing the I/O request. I/O will then
> be done by the backend for the accumulated set of segments.
> 
> To properly associate segment preset operations with the main request,
> the rqid-s between them should match (originally I had hoped to use
> this to avoid producing individual responses for the pre-set
> operations, but that turned out to violate the underlying shared ring
> implementation).
> 
> Negotiation of the maximum number of segments a particular backend
> implementation supports happens through a new "segs-per-req" xenstore
> node.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> As I have no plans to backport this to the 2.6.18 tree, I'm attaching
> for reference the full kernel side patch we're intending to use.
> 
> --- a/xen/include/public/io/vscsiif.h
> +++ b/xen/include/public/io/vscsiif.h
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  /*
>   * Maximum scatter/gather segments per request.
> @@ -50,6 +51,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -66,18 +73,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;

> --- sle11sp3.orig/drivers/xen/scsiback/common.h	2012-06-06 13:53:26.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/common.h	2012-11-22 14:55:58.000000000 +0100
> @@ -94,10 +94,15 @@ struct vscsibk_info {
>  	unsigned int waiting_reqs;
>  	struct page **mmap_pages;
>  
> +	struct pending_req *preq;
> +
> +	union {
> +		struct gnttab_map_grant_ref   *gmap;
> +		struct gnttab_unmap_grant_ref *gunmap;
> +	};
>  };
>  
> -typedef struct {
> -	unsigned char act;
> +typedef struct pending_req {
>  	struct vscsibk_info *info;
>  	struct scsi_device *sdev;
>  
> @@ -114,7 +119,8 @@ typedef struct {
>  	
>  	uint32_t request_bufflen;
>  	struct scatterlist *sgl;
> -	grant_ref_t gref[VSCSIIF_SG_TABLESIZE];
> +	grant_ref_t *gref;
> +	vscsiif_segment_t *segs;
>  
>  	int32_t rslt;
>  	uint32_t resid;
> @@ -123,7 +129,7 @@ typedef struct {
>  	struct list_head free_list;
>  } pending_req_t;
>  
> -
> +extern unsigned int vscsiif_segs;
>  
>  #define scsiback_get(_b) (atomic_inc(&(_b)->nr_unreplied_reqs))
>  #define scsiback_put(_b)				\
> @@ -163,7 +169,7 @@ void scsiback_release_translation_entry(
>  
>  void scsiback_cmd_exec(pending_req_t *pending_req);
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req);
> +			uint32_t resid, pending_req_t *, uint8_t act);
>  void scsiback_fast_flush_area(pending_req_t *req);
>  
>  void scsiback_rsp_emulation(pending_req_t *pending_req);
> --- sle11sp3.orig/drivers/xen/scsiback/emulate.c	2012-01-11 12:14:54.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/emulate.c	2012-11-22 14:29:27.000000000 +0100
> @@ -352,7 +352,9 @@ void scsiback_req_emulation_or_cmdexec(p
>  	else {
>  		scsiback_fast_flush_area(pending_req);
>  		scsiback_do_resp_with_sense(pending_req->sense_buffer,
> -		  pending_req->rslt, pending_req->resid, pending_req);
> +					    pending_req->rslt,
> +					    pending_req->resid, pending_req,
> +					    VSCSIIF_ACT_SCSI_CDB);
>  	}
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/interface.c	2011-10-10 11:58:37.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/interface.c	2012-11-13 13:21:10.000000000 +0100
> @@ -51,6 +51,13 @@ struct vscsibk_info *vscsibk_info_alloc(
>  	if (!info)
>  		return ERR_PTR(-ENOMEM);
>  
> +	info->gmap = kcalloc(max(sizeof(*info->gmap), sizeof(*info->gunmap)),
> +			     vscsiif_segs, GFP_KERNEL);
> +	if (!info->gmap) {
> +		kfree(info);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
>  	info->domid = domid;
>  	spin_lock_init(&info->ring_lock);
>  	atomic_set(&info->nr_unreplied_reqs, 0);
> @@ -120,6 +127,7 @@ void scsiback_disconnect(struct vscsibk_
>  
>  void scsiback_free(struct vscsibk_info *info)
>  {
> +	kfree(info->gmap);
>  	kmem_cache_free(scsiback_cachep, info);
>  }
>  
> --- sle11sp3.orig/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:11.000000000 +0100
> +++ sle11sp3/drivers/xen/scsiback/scsiback.c	2012-11-22 15:36:16.000000000 +0100
> @@ -56,6 +56,10 @@ int vscsiif_reqs = VSCSIIF_BACK_MAX_PEND
>  module_param_named(reqs, vscsiif_reqs, int, 0);
>  MODULE_PARM_DESC(reqs, "Number of scsiback requests to allocate");
>  
> +unsigned int vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(segs, vscsiif_segs, uint, 0);
> +MODULE_PARM_DESC(segs, "Number of segments to allow per request");
> +
>  static unsigned int log_print_stat = 0;
>  module_param(log_print_stat, int, 0644);
>  
> @@ -67,7 +71,7 @@ static grant_handle_t *pending_grant_han
>  
>  static int vaddr_pagenr(pending_req_t *req, int seg)
>  {
> -	return (req - pending_reqs) * VSCSIIF_SG_TABLESIZE + seg;
> +	return (req - pending_reqs) * vscsiif_segs + seg;
>  }
>  
>  static unsigned long vaddr(pending_req_t *req, int seg)
> @@ -82,7 +86,7 @@ static unsigned long vaddr(pending_req_t
>  
>  void scsiback_fast_flush_area(pending_req_t *req)
>  {
> -	struct gnttab_unmap_grant_ref unmap[VSCSIIF_SG_TABLESIZE];
> +	struct gnttab_unmap_grant_ref *unmap = req->info->gunmap;
>  	unsigned int i, invcount = 0;
>  	grant_handle_t handle;
>  	int err;
> @@ -117,6 +121,7 @@ static pending_req_t * alloc_req(struct 
>  	if (!list_empty(&pending_free)) {
>  		req = list_entry(pending_free.next, pending_req_t, free_list);
>  		list_del(&req->free_list);
> +		req->nr_segments = 0;
>  	}
>  	spin_unlock_irqrestore(&pending_free_lock, flags);
>  	return req;
> @@ -144,7 +149,8 @@ static void scsiback_notify_work(struct 
>  }
>  
>  void scsiback_do_resp_with_sense(char *sense_buffer, int32_t result,
> -			uint32_t resid, pending_req_t *pending_req)
> +				 uint32_t resid, pending_req_t *pending_req,
> +				 uint8_t act)
>  {
>  	vscsiif_response_t *ring_res;
>  	struct vscsibk_info *info = pending_req->info;
> @@ -159,6 +165,7 @@ void scsiback_do_resp_with_sense(char *s
>  	ring_res = RING_GET_RESPONSE(&info->ring, info->ring.rsp_prod_pvt);
>  	info->ring.rsp_prod_pvt++;
>  
> +	ring_res->act    = act;
>  	ring_res->rslt   = result;
>  	ring_res->rqid   = pending_req->rqid;
>  
> @@ -186,7 +193,8 @@ void scsiback_do_resp_with_sense(char *s
>  	if (notify)
>  		notify_remote_via_irq(info->irq);
>  
> -	free_req(pending_req);
> +	if (act != VSCSIIF_ACT_SCSI_SG_PRESET)
> +		free_req(pending_req);
>  }
>  
>  static void scsiback_print_status(char *sense_buffer, int errors,
> @@ -225,25 +233,25 @@ static void scsiback_cmd_done(struct req
>  		scsiback_rsp_emulation(pending_req);
>  
>  	scsiback_fast_flush_area(pending_req);
> -	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req);
> +	scsiback_do_resp_with_sense(sense_buffer, errors, resid, pending_req,
> +				    VSCSIIF_ACT_SCSI_CDB);
>  	scsiback_put(pending_req->info);
>  
>  	__blk_put_request(req->q, req);
>  }
>  
>  
> -static int scsiback_gnttab_data_map(vscsiif_request_t *ring_req,
> -					pending_req_t *pending_req)
> +static int scsiback_gnttab_data_map(const vscsiif_segment_t *segs,
> +				    unsigned int nr_segs,
> +				    pending_req_t *pending_req)
>  {
>  	u32 flags;
> -	int write;
> -	int i, err = 0;
> -	unsigned int data_len = 0;
> -	struct gnttab_map_grant_ref map[VSCSIIF_SG_TABLESIZE];
> +	int write, err = 0;
> +	unsigned int i, j, data_len = 0;
>  	struct vscsibk_info *info   = pending_req->info;
> -
> +	struct gnttab_map_grant_ref *map = info->gmap;
>  	int data_dir = (int)pending_req->sc_data_direction;
> -	unsigned int nr_segments = (unsigned int)pending_req->nr_segments;
> +	unsigned int nr_segments = pending_req->nr_segments + nr_segs;
>  
>  	write = (data_dir == DMA_TO_DEVICE);
>  
> @@ -264,14 +272,20 @@ static int scsiback_gnttab_data_map(vscs
>  		if (write)
>  			flags |= GNTMAP_readonly;
>  
> -		for (i = 0; i < nr_segments; i++)
> +		for (i = 0; i < pending_req->nr_segments; i++)
>  			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> -						ring_req->seg[i].gref,
> +						pending_req->segs[i].gref,
> +						info->domid);
> +		for (j = 0; i < nr_segments; i++, j++)
> +			gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
> +						segs[j].gref,
>  						info->domid);
>  
> +
>  		err = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nr_segments);
>  		BUG_ON(err);
>  
> +		j = 0;
>  		for_each_sg (pending_req->sgl, sg, nr_segments, i) {
>  			struct page *pg;
>  
> @@ -294,8 +308,15 @@ static int scsiback_gnttab_data_map(vscs
>  			set_phys_to_machine(page_to_pfn(pg),
>  				FOREIGN_FRAME(map[i].dev_bus_addr >> PAGE_SHIFT));
>  
> -			sg_set_page(sg, pg, ring_req->seg[i].length,
> -				    ring_req->seg[i].offset);
> +			if (i < pending_req->nr_segments)
> +				sg_set_page(sg, pg,
> +					    pending_req->segs[i].length,
> +					    pending_req->segs[i].offset);
> +			else {
> +				sg_set_page(sg, pg, segs[j].length,
> +					    segs[j].offset);
> +				++j;
> +			}
>  			data_len += sg->length;
>  
>  			barrier();
> @@ -306,6 +327,8 @@ static int scsiback_gnttab_data_map(vscs
>  
>  		}
>  
> +		pending_req->nr_segments = nr_segments;
> +
>  		if (err)
>  			goto fail_flush;
>  	}
> @@ -471,7 +494,8 @@ static void scsiback_device_reset_exec(p
>  	scsiback_get(info);
>  	err = scsi_reset_provider(sdev, SCSI_TRY_RESET_DEVICE);
>  
> -	scsiback_do_resp_with_sense(NULL, err, 0, pending_req);
> +	scsiback_do_resp_with_sense(NULL, err, 0, pending_req,
> +				    VSCSIIF_ACT_SCSI_RESET);
>  	scsiback_put(info);
>  
>  	return;
> @@ -489,13 +513,11 @@ static int prepare_pending_reqs(struct v
>  {
>  	struct scsi_device *sdev;
>  	struct ids_tuple vir;
> +	unsigned int nr_segs;
>  	int err = -EINVAL;
>  
>  	DPRINTK("%s\n",__FUNCTION__);
>  
> -	pending_req->rqid       = ring_req->rqid;
> -	pending_req->act        = ring_req->act;
> -
>  	pending_req->info       = info;
>  
>  	pending_req->v_chn = vir.chn = ring_req->channel;
> @@ -525,11 +547,10 @@ static int prepare_pending_reqs(struct v
>  		goto invalid_value;
>  	}
>  
> -	pending_req->nr_segments = ring_req->nr_segments;
> +	nr_segs = ring_req->nr_segments;
>  	barrier();
> -	if (pending_req->nr_segments > VSCSIIF_SG_TABLESIZE) {
> -		DPRINTK("scsiback: invalid parameter nr_seg = %d\n",
> -			pending_req->nr_segments);
> +	if (pending_req->nr_segments + nr_segs > vscsiif_segs) {
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
>  		err = -EINVAL;
>  		goto invalid_value;
>  	}
> @@ -546,7 +567,7 @@ static int prepare_pending_reqs(struct v
>  	
>  	pending_req->timeout_per_command = ring_req->timeout_per_command;
>  
> -	if(scsiback_gnttab_data_map(ring_req, pending_req)) {
> +	if (scsiback_gnttab_data_map(ring_req->seg, nr_segs, pending_req)) {
>  		DPRINTK("scsiback: invalid buffer\n");
>  		err = -EINVAL;
>  		goto invalid_value;
> @@ -558,6 +579,20 @@ invalid_value:
>  	return err;
>  }
>  
> +static void latch_segments(pending_req_t *pending_req,
> +			   const struct vscsiif_sg_list *sgl)
> +{
> +	unsigned int nr_segs = sgl->nr_segments;
> +
> +	barrier();
> +	if (pending_req->nr_segments + nr_segs <= vscsiif_segs) {
> +		memcpy(pending_req->segs + pending_req->nr_segments,
> +		       sgl->seg, nr_segs * sizeof(*sgl->seg));
> +		pending_req->nr_segments += nr_segs;
> +	}
> +	else
> +		DPRINTK("scsiback: invalid nr_segs = %u\n", nr_segs);
> +}
>  
>  static int _scsiback_do_cmd_fn(struct vscsibk_info *info)
>  {
> @@ -575,9 +610,11 @@ static int _scsiback_do_cmd_fn(struct vs
>  	rmb();
>  
>  	while ((rc != rp)) {
> +		int act, rqid;
> +
>  		if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
>  			break;
> -		pending_req = alloc_req(info);
> +		pending_req = info->preq ?: alloc_req(info);
>  		if (NULL == pending_req) {
>  			more_to_do = 1;
>  			break;
> @@ -586,32 +623,55 @@ static int _scsiback_do_cmd_fn(struct vs
>  		ring_req = RING_GET_REQUEST(ring, rc);
>  		ring->req_cons = ++rc;
>  
> +		act = ring_req->act;
> +		rqid = ring_req->rqid;
> +		barrier();
> +		if (!pending_req->nr_segments)
> +			pending_req->rqid = rqid;
> +		else if (pending_req->rqid != rqid)
> +			DPRINTK("scsiback: invalid rqid %04x, expected %04x\n",
> +				rqid, pending_req->rqid);
> +
> +		info->preq = NULL;
> +		if (pending_req->rqid != rqid) {
> +			scsiback_do_resp_with_sense(NULL, DRIVER_INVALID << 24,
> +						    0, pending_req, act);
> +			continue;
> +		}
> +
> +		if (act == VSCSIIF_ACT_SCSI_SG_PRESET) {
> +			latch_segments(pending_req, (void *)ring_req);
> +			info->preq = pending_req;
> +			scsiback_do_resp_with_sense(NULL, 0, 0,
> +						    pending_req, act);
> +			continue;
> +		}
> +
>  		err = prepare_pending_reqs(info, ring_req,
>  						pending_req);
>  		if (err == -EINVAL) {
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		} else if (err == -ENODEV) {
>  			scsiback_do_resp_with_sense(NULL, (DID_NO_CONNECT << 16),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  
> -		if (pending_req->act == VSCSIIF_ACT_SCSI_CDB) {
> -
> +		if (act == VSCSIIF_ACT_SCSI_CDB) {
>  			/* The Host mode is through as for Emulation. */
>  			if (info->feature == VSCSI_TYPE_HOST)
>  				scsiback_cmd_exec(pending_req);
>  			else
>  				scsiback_req_emulation_or_cmdexec(pending_req);
>  
> -		} else if (pending_req->act == VSCSIIF_ACT_SCSI_RESET) {
> +		} else if (act == VSCSIIF_ACT_SCSI_RESET) {
>  			scsiback_device_reset_exec(pending_req);
>  		} else {
>  			pr_err("scsiback: invalid parameter for request\n");
>  			scsiback_do_resp_with_sense(NULL, (DRIVER_ERROR << 24),
> -				0, pending_req);
> +						    0, pending_req, act);
>  			continue;
>  		}
>  	}
> @@ -673,17 +733,32 @@ static int __init scsiback_init(void)
>  	if (!is_running_on_xen())
>  		return -ENODEV;
>  
> -	mmap_pages = vscsiif_reqs * VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs < VSCSIIF_SG_TABLESIZE)
> +		vscsiif_segs = VSCSIIF_SG_TABLESIZE;
> +	if (vscsiif_segs != (uint8_t)vscsiif_segs)
> +		return -EINVAL;
> +	mmap_pages = vscsiif_reqs * vscsiif_segs;
>  
>  	pending_reqs          = kzalloc(sizeof(pending_reqs[0]) *
>  					vscsiif_reqs, GFP_KERNEL);
> +	if (!pending_reqs)
> +		return -ENOMEM;
>  	pending_grant_handles = kmalloc(sizeof(pending_grant_handles[0]) *
>  					mmap_pages, GFP_KERNEL);
>  	pending_pages         = alloc_empty_pages_and_pagevec(mmap_pages);
>  
> -	if (!pending_reqs || !pending_grant_handles || !pending_pages)
> +	if (!pending_grant_handles || !pending_pages)
>  		goto out_of_memory;
>  
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		pending_reqs[i].gref = kcalloc(sizeof(*pending_reqs->gref),
> +					       vscsiif_segs, GFP_KERNEL);
> +		pending_reqs[i].segs = kcalloc(sizeof(*pending_reqs->segs),
> +					       vscsiif_segs, GFP_KERNEL);
> +		if (!pending_reqs[i].gref || !pending_reqs[i].segs)
> +			goto out_of_memory;
> +	}
> +
>  	for (i = 0; i < mmap_pages; i++)
>  		pending_grant_handles[i] = SCSIBACK_INVALID_HANDLE;
>  
> @@ -705,6 +780,10 @@ static int __init scsiback_init(void)
>  out_interface:
>  	scsiback_interface_exit();
>  out_of_memory:
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
>  	free_empty_pages_and_pagevec(pending_pages, mmap_pages);
> @@ -715,12 +794,17 @@ out_of_memory:
>  #if 0
>  static void __exit scsiback_exit(void)
>  {
> +	unsigned int i;
> +
>  	scsiback_xenbus_unregister();
>  	scsiback_interface_exit();
> +	for (i = 0; i < vscsiif_reqs; ++i) {
> +		kfree(pending_reqs[i].gref);
> +		kfree(pending_reqs[i].segs);
> +	}
>  	kfree(pending_reqs);
>  	kfree(pending_grant_handles);
> -	free_empty_pages_and_pagevec(pending_pages, (vscsiif_reqs * VSCSIIF_SG_TABLESIZE));
> -
> +	free_empty_pages_and_pagevec(pending_pages, vscsiif_reqs * vscsiif_segs);
>  }
>  #endif
>  
> --- sle11sp3.orig/drivers/xen/scsiback/xenbus.c	2011-06-30 17:04:59.000000000 +0200
> +++ sle11sp3/drivers/xen/scsiback/xenbus.c	2012-11-13 14:36:16.000000000 +0100
> @@ -339,6 +339,13 @@ static int scsiback_probe(struct xenbus_
>  	if (val)
>  		be->info->feature = VSCSI_TYPE_HOST;
>  
> +	if (vscsiif_segs > VSCSIIF_SG_TABLESIZE) {
> +		err = xenbus_printf(XBT_NIL, dev->nodename, "segs-per-req",
> +				    "%u", vscsiif_segs);
> +		if (err)
> +			xenbus_dev_error(dev, err, "writing segs-per-req");
> +	}
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> --- sle11sp3.orig/drivers/xen/scsifront/common.h	2011-01-31 17:29:16.000000000 +0100
> +++ sle11sp3/drivers/xen/scsifront/common.h	2012-11-22 13:45:50.000000000 +0100
> @@ -95,7 +95,7 @@ struct vscsifrnt_shadow {
>  
>  	/* requested struct scsi_cmnd is stored from kernel */
>  	unsigned long req_scsi_cmnd;
> -	int gref[VSCSIIF_SG_TABLESIZE];
> +	int gref[SG_ALL];
>  };
>  
>  struct vscsifrnt_info {
> @@ -110,7 +110,6 @@ struct vscsifrnt_info {
>  
>  	grant_ref_t ring_ref;
>  	struct vscsiif_front_ring ring;
> -	struct vscsiif_response	ring_res;
>  
>  	struct vscsifrnt_shadow shadow[VSCSIIF_MAX_REQS];
>  	uint32_t shadow_free;
> @@ -119,6 +118,12 @@ struct vscsifrnt_info {
>  	wait_queue_head_t wq;
>  	unsigned int waiting_resp;
>  
> +	struct {
> +		struct scsi_cmnd *sc;
> +		unsigned int rqid;
> +		unsigned int done;
> +		vscsiif_segment_t segs[];
> +	} active;
>  };
>  
>  #define DPRINTK(_f, _a...)				\
> --- sle11sp3.orig/drivers/xen/scsifront/scsifront.c	2011-06-28 18:57:14.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/scsifront.c	2012-11-22 16:37:35.000000000 +0100
> @@ -106,6 +106,66 @@ irqreturn_t scsifront_intr(int irq, void
>  	return IRQ_HANDLED;
>  }
>  
> +static bool push_cmd_to_ring(struct vscsifrnt_info *info,
> +			     vscsiif_request_t *ring_req)
> +{
> +	unsigned int left, rqid = info->active.rqid;
> +	struct scsi_cmnd *sc;
> +
> +	for (; ; ring_req = NULL) {
> +		struct vscsiif_sg_list *sgl;
> +
> +		if (!ring_req) {
> +			struct vscsiif_front_ring *ring = &info->ring;
> +
> +			ring_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +			ring->req_prod_pvt++;
> +			ring_req->rqid = rqid;
> +		}
> +
> +		left = info->shadow[rqid].nr_segments - info->active.done;
> +		if (left <= VSCSIIF_SG_TABLESIZE)
> +			break;
> +
> +		sgl = (void *)ring_req;
> +		sgl->act = VSCSIIF_ACT_SCSI_SG_PRESET;
> +
> +		if (left > VSCSIIF_SG_LIST_SIZE)
> +			left = VSCSIIF_SG_LIST_SIZE;
> +		memcpy(sgl->seg, info->active.segs + info->active.done,
> +		       left * sizeof(*sgl->seg));
> +
> +		sgl->nr_segments = left;
> +		info->active.done += left;
> +
> +		if (RING_FULL(&info->ring))
> +			return false;
> +	}
> +
> +	sc = info->active.sc;
> +
> +	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> +	ring_req->id      = sc->device->id;
> +	ring_req->lun     = sc->device->lun;
> +	ring_req->channel = sc->device->channel;
> +	ring_req->cmd_len = sc->cmd_len;
> +
> +	if ( sc->cmd_len )
> +		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> +	else
> +		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> +
> +	ring_req->sc_data_direction   = sc->sc_data_direction;
> +	ring_req->timeout_per_command = sc->request->timeout / HZ;
> +	ring_req->nr_segments         = left;
> +
> +	memcpy(ring_req->seg, info->active.segs + info->active.done,
> +               left * sizeof(*ring_req->seg));
> +
> +	info->active.sc = NULL;
> +
> +	return !RING_FULL(&info->ring);
> +}
>  
>  static void scsifront_gnttab_done(struct vscsifrnt_shadow *s, uint32_t id)
>  {
> @@ -194,6 +254,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		
>  		ring_res = RING_GET_RESPONSE(&info->ring, i);
>  
> +		if (info->host->sg_tablesize > VSCSIIF_SG_TABLESIZE) {
> +			u8 act = ring_res->act;
> +
> +			if (act == VSCSIIF_ACT_SCSI_SG_PRESET)
> +				continue;
> +			if (act != info->shadow[ring_res->rqid].act)
> +				DPRINTK("Bogus backend response (%02x vs %02x)\n",
> +					act, info->shadow[ring_res->rqid].act);
> +		}
> +
>  		if (info->shadow[ring_res->rqid].act == VSCSIIF_ACT_SCSI_CDB)
>  			scsifront_cdb_cmd_done(info, ring_res);
>  		else
> @@ -208,8 +278,16 @@ int scsifront_cmd_done(struct vscsifrnt_
>  		info->ring.sring->rsp_event = i + 1;
>  	}
>  
> -	spin_unlock_irqrestore(&info->io_lock, flags);
> +	spin_unlock(&info->io_lock);
> +
> +	spin_lock(info->host->host_lock);
> +
> +	if (info->active.sc && !RING_FULL(&info->ring)) {
> +		push_cmd_to_ring(info, NULL);
> +		scsifront_do_request(info);
> +	}
>  
> +	spin_unlock_irqrestore(info->host->host_lock, flags);
>  
>  	/* Yield point for this unbounded loop. */
>  	cond_resched();
> @@ -242,7 +320,8 @@ int scsifront_schedule(void *data)
>  
>  
>  static int map_data_for_request(struct vscsifrnt_info *info,
> -		struct scsi_cmnd *sc, vscsiif_request_t *ring_req, uint32_t id)
> +				struct scsi_cmnd *sc,
> +				struct vscsifrnt_shadow *shadow)
>  {
>  	grant_ref_t gref_head;
>  	struct page *page;
> @@ -254,7 +333,7 @@ static int map_data_for_request(struct v
>  	if (sc->sc_data_direction == DMA_NONE)
>  		return 0;
>  
> -	err = gnttab_alloc_grant_references(VSCSIIF_SG_TABLESIZE, &gref_head);
> +	err = gnttab_alloc_grant_references(info->host->sg_tablesize, &gref_head);
>  	if (err) {
>  		pr_err("scsifront: gnttab_alloc_grant_references() error\n");
>  		return -ENOMEM;
> @@ -266,7 +345,7 @@ static int map_data_for_request(struct v
>  		unsigned int data_len = scsi_bufflen(sc);
>  
>  		nr_pages = (data_len + sgl->offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -		if (nr_pages > VSCSIIF_SG_TABLESIZE) {
> +		if (nr_pages > info->host->sg_tablesize) {
>  			pr_err("scsifront: Unable to map request_buffer for command!\n");
>  			ref_cnt = (-E2BIG);
>  			goto big_to_sg;
> @@ -294,10 +373,10 @@ static int map_data_for_request(struct v
>  				gnttab_grant_foreign_access_ref(ref, info->dev->otherend_id,
>  					buffer_pfn, write);
>  
> -				info->shadow[id].gref[ref_cnt]  = ref;
> -				ring_req->seg[ref_cnt].gref     = ref;
> -				ring_req->seg[ref_cnt].offset   = (uint16_t)off;
> -				ring_req->seg[ref_cnt].length   = (uint16_t)bytes;
> +				shadow->gref[ref_cnt] = ref;
> +				info->active.segs[ref_cnt].gref   = ref;
> +				info->active.segs[ref_cnt].offset = off;
> +				info->active.segs[ref_cnt].length = bytes;
>  
>  				buffer_pfn++;
>  				len -= bytes;
> @@ -336,34 +415,27 @@ static int scsifront_queuecommand(struct
>  		return SCSI_MLQUEUE_HOST_BUSY;
>  	}
>  
> +	if (info->active.sc && !push_cmd_to_ring(info, NULL)) {
> +		scsifront_do_request(info);
> +		spin_unlock_irqrestore(shost->host_lock, flags);
> +		return SCSI_MLQUEUE_HOST_BUSY;
> +	}
> +
>  	sc->result    = 0;
>  
>  	ring_req          = scsifront_pre_request(info);
>  	rqid              = ring_req->rqid;
> -	ring_req->act     = VSCSIIF_ACT_SCSI_CDB;
> -
> -	ring_req->id      = sc->device->id;
> -	ring_req->lun     = sc->device->lun;
> -	ring_req->channel = sc->device->channel;
> -	ring_req->cmd_len = sc->cmd_len;
>  
>  	BUG_ON(sc->cmd_len > VSCSIIF_MAX_COMMAND_SIZE);
>  
> -	if ( sc->cmd_len )
> -		memcpy(ring_req->cmnd, sc->cmnd, sc->cmd_len);
> -	else
> -		memset(ring_req->cmnd, 0, VSCSIIF_MAX_COMMAND_SIZE);
> -
> -	ring_req->sc_data_direction   = (uint8_t)sc->sc_data_direction;
> -	ring_req->timeout_per_command = (sc->request->timeout / HZ);
> -
>  	info->shadow[rqid].req_scsi_cmnd     = (unsigned long)sc;
>  	info->shadow[rqid].sc_data_direction = sc->sc_data_direction;
> -	info->shadow[rqid].act               = ring_req->act;
> +	info->shadow[rqid].act               = VSCSIIF_ACT_SCSI_CDB;
>  
> -	ref_cnt = map_data_for_request(info, sc, ring_req, rqid);
> +	ref_cnt = map_data_for_request(info, sc, &info->shadow[rqid]);
>  	if (ref_cnt < 0) {
>  		add_id_to_freelist(info, rqid);
> +		scsifront_do_request(info);
>  		spin_unlock_irqrestore(shost->host_lock, flags);
>  		if (ref_cnt == (-ENOMEM))
>  			return SCSI_MLQUEUE_HOST_BUSY;
> @@ -372,9 +444,13 @@ static int scsifront_queuecommand(struct
>  		return 0;
>  	}
>  
> -	ring_req->nr_segments          = (uint8_t)ref_cnt;
>  	info->shadow[rqid].nr_segments = ref_cnt;
>  
> +	info->active.sc  = sc;
> +	info->active.rqid = rqid;
> +	info->active.done = 0;
> +	push_cmd_to_ring(info, ring_req);
> +
>  	scsifront_do_request(info);
>  	spin_unlock_irqrestore(shost->host_lock, flags);
>  
> --- sle11sp3.orig/drivers/xen/scsifront/xenbus.c	2012-10-02 14:32:45.000000000 +0200
> +++ sle11sp3/drivers/xen/scsifront/xenbus.c	2012-11-21 13:35:47.000000000 +0100
> @@ -43,6 +43,10 @@
>    #define DEFAULT_TASK_COMM_LEN	TASK_COMM_LEN
>  #endif
>  
> +static unsigned int max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +module_param_named(max_segs, max_nr_segs, uint, 0);
> +MODULE_PARM_DESC(max_segs, "Maximum number of segments per request");
> +
>  extern struct scsi_host_template scsifront_sht;
>  
>  static void scsifront_free(struct vscsifrnt_info *info)
> @@ -181,7 +185,9 @@ static int scsifront_probe(struct xenbus
>  	int i, err = -ENOMEM;
>  	char name[DEFAULT_TASK_COMM_LEN];
>  
> -	host = scsi_host_alloc(&scsifront_sht, sizeof(*info));
> +	host = scsi_host_alloc(&scsifront_sht,
> +			       offsetof(struct vscsifrnt_info,
> +					active.segs[max_nr_segs]));
>  	if (!host) {
>  		xenbus_dev_fatal(dev, err, "fail to allocate scsi host");
>  		return err;
> @@ -223,7 +229,7 @@ static int scsifront_probe(struct xenbus
>  	host->max_id      = VSCSIIF_MAX_TARGET;
>  	host->max_channel = 0;
>  	host->max_lun     = VSCSIIF_MAX_LUN;
> -	host->max_sectors = (VSCSIIF_SG_TABLESIZE - 1) * PAGE_SIZE / 512;
> +	host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
>  	host->max_cmd_len = VSCSIIF_MAX_COMMAND_SIZE;
>  
>  	err = scsi_add_host(host, &dev->dev);
> @@ -278,6 +284,23 @@ static int scsifront_disconnect(struct v
>  	return 0;
>  }
>  
> +static void scsifront_read_backend_params(struct xenbus_device *dev,
> +					  struct vscsifrnt_info *info)
> +{
> +	unsigned int nr_segs;
> +	int ret;
> +	struct Scsi_Host *host = info->host;
> +
> +	ret = xenbus_scanf(XBT_NIL, dev->otherend, "segs-per-req", "%u",
> +			   &nr_segs);
> +	if (ret == 1 && nr_segs > host->sg_tablesize) {
> +		host->sg_tablesize = min(nr_segs, max_nr_segs);
> +		dev_info(&dev->dev, "using up to %d SG entries\n",
> +			 host->sg_tablesize);
> +		host->max_sectors = (host->sg_tablesize - 1) * PAGE_SIZE / 512;
> +	}
> +}
> +
>  #define VSCSIFRONT_OP_ADD_LUN	1
>  #define VSCSIFRONT_OP_DEL_LUN	2
>  
> @@ -368,6 +391,7 @@ static void scsifront_backend_changed(st
>  		break;
>  
>  	case XenbusStateConnected:
> +		scsifront_read_backend_params(dev, info);
>  		if (xenbus_read_driver_state(dev->nodename) ==
>  			XenbusStateInitialised) {
>  			scsifront_do_lun_hotplug(info, VSCSIFRONT_OP_ADD_LUN);
> @@ -413,8 +437,13 @@ static DEFINE_XENBUS_DRIVER(scsifront, ,
>  	.otherend_changed	= scsifront_backend_changed,
>  );
>  
> -int scsifront_xenbus_init(void)
> +int __init scsifront_xenbus_init(void)
>  {
> +	if (max_nr_segs > SG_ALL)
> +		max_nr_segs = SG_ALL;
> +	if (max_nr_segs < VSCSIIF_SG_TABLESIZE)
> +		max_nr_segs = VSCSIIF_SG_TABLESIZE;
> +
>  	return xenbus_register_frontend(&scsifront_driver);
>  }
>  
> --- sle11sp3.orig/include/xen/interface/io/vscsiif.h	2008-07-21 11:00:33.000000000 +0200
> +++ sle11sp3/include/xen/interface/io/vscsiif.h	2012-11-22 14:32:31.000000000 +0100
> @@ -34,6 +34,7 @@
>  #define VSCSIIF_ACT_SCSI_CDB         1    /* SCSI CDB command */
>  #define VSCSIIF_ACT_SCSI_ABORT       2    /* SCSI Device(Lun) Abort*/
>  #define VSCSIIF_ACT_SCSI_RESET       3    /* SCSI Device(Lun) Reset*/
> +#define VSCSIIF_ACT_SCSI_SG_PRESET   4    /* Preset SG elements */
>  
>  
>  #define VSCSIIF_BACK_MAX_PENDING_REQS    128
> @@ -53,6 +54,12 @@
>  #define VSCSIIF_MAX_COMMAND_SIZE         16
>  #define VSCSIIF_SENSE_BUFFERSIZE         96
>  
> +struct scsiif_request_segment {
> +    grant_ref_t gref;
> +    uint16_t offset;
> +    uint16_t length;
> +};
> +typedef struct scsiif_request_segment vscsiif_segment_t;
>  
>  struct vscsiif_request {
>      uint16_t rqid;          /* private guest value, echoed in resp  */
> @@ -69,18 +76,26 @@ struct vscsiif_request {
>                                           DMA_NONE(3) requests  */
>      uint8_t nr_segments;              /* Number of pieces of scatter-gather */
>  
> -    struct scsiif_request_segment {
> -        grant_ref_t gref;
> -        uint16_t offset;
> -        uint16_t length;
> -    } seg[VSCSIIF_SG_TABLESIZE];
> +    vscsiif_segment_t seg[VSCSIIF_SG_TABLESIZE];
>      uint32_t reserved[3];
>  };
>  typedef struct vscsiif_request vscsiif_request_t;
>  
> +#define VSCSIIF_SG_LIST_SIZE ((sizeof(vscsiif_request_t) - 4) \
> +                              / sizeof(vscsiif_segment_t))
> +
> +struct vscsiif_sg_list {
> +    /* First two fields must match struct vscsiif_request! */
> +    uint16_t rqid;          /* private guest value, must match main req */
> +    uint8_t act;            /* VSCSIIF_ACT_SCSI_SG_PRESET */
> +    uint8_t nr_segments;    /* Number of pieces of scatter-gather */
> +    vscsiif_segment_t seg[VSCSIIF_SG_LIST_SIZE];
> +};
> +typedef struct vscsiif_sg_list vscsiif_sg_list_t;
> +
>  struct vscsiif_response {
>      uint16_t rqid;
> -    uint8_t padding;
> +    uint8_t act;               /* valid only when backend supports SG_PRESET */
>      uint8_t sense_len;
>      uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
>      int32_t rslt;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 22:30:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 22:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th6R9-0001WC-7S; Fri, 07 Dec 2012 22:30:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th6R7-0001W7-F3
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 22:30:25 +0000
Received: from [193.109.254.147:60832] by server-16.bemta-14.messagelabs.com
	id 1B/AC-09215-00E62C05; Fri, 07 Dec 2012 22:30:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354919422!9337923!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14227 invoked from network); 7 Dec 2012 22:30:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 22:30:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="5742"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 22:30:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 22:30:21 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th6R3-0001UA-Ip;
	Fri, 07 Dec 2012 22:30:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th6R3-0001cW-CA;
	Fri, 07 Dec 2012 22:30:21 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14596-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 22:30:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14596: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14596 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14596/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566
 build-amd64                  2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-pvops            2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-oldkern          2 host-install(2) broken in 14587 REGR. vs. 14566
 build-i386-pvops             2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386-oldkern           2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386                   2 host-install(2) broken in 14583 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 14574
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 14583

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14587 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14587 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-win         1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-win            1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 22:42:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 22:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th6cW-0001jW-L2; Fri, 07 Dec 2012 22:42:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th6cV-0001jR-0J
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 22:42:11 +0000
Received: from [193.109.254.147:34873] by server-12.bemta-14.messagelabs.com
	id 64/64-00510-2C072C05; Fri, 07 Dec 2012 22:42:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1354920128!2951858!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29109 invoked from network); 7 Dec 2012 22:42:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 22:42:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,240,1355097600"; 
   d="scan'208";a="5858"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 22:42:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 22:42:08 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th6cS-0001bN-HG;
	Fri, 07 Dec 2012 22:42:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th6cS-0001Th-Fg;
	Fri, 07 Dec 2012 22:42:08 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14606-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 22:42:08 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14606: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14606 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14606/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 22:54:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 22:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th6nq-00020i-27; Fri, 07 Dec 2012 22:53:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th6no-00020d-Bf
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 22:53:52 +0000
Received: from [85.158.138.51:2530] by server-11.bemta-3.messagelabs.com id
	0F/B1-19361-F7372C05; Fri, 07 Dec 2012 22:53:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1354920830!27650020!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20892 invoked from network); 7 Dec 2012 22:53:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 22:53:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,241,1355097600"; 
   d="scan'208";a="6049"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 22:53:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 22:53:50 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th6nm-0001hf-10;
	Fri, 07 Dec 2012 22:53:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th6nm-00005r-0Q;
	Fri, 07 Dec 2012 22:53:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14607-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 22:53:50 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14607: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14607 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14607/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 23:05:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 23:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th6yZ-0002Ff-AA; Fri, 07 Dec 2012 23:04:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th6yX-0002Fa-5C
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 23:04:57 +0000
Received: from [85.158.139.83:8309] by server-1.bemta-5.messagelabs.com id
	E4/25-12813-81672C05; Fri, 07 Dec 2012 23:04:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1354921495!28874627!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9729 invoked from network); 7 Dec 2012 23:04:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 23:04:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,241,1355097600"; 
   d="scan'208";a="6125"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 23:04:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 23:04:54 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th6yU-0001t4-Ov;
	Fri, 07 Dec 2012 23:04:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th6yU-0006eD-MT;
	Fri, 07 Dec 2012 23:04:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14608-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 23:04:54 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14608: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14608 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14608/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 23:10:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 23:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th73c-0002Qa-6a; Fri, 07 Dec 2012 23:10:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th73Z-0002QQ-Rk
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 23:10:10 +0000
Received: from [85.158.139.83:10094] by server-4.bemta-5.messagelabs.com id
	A2/C1-14693-15772C05; Fri, 07 Dec 2012 23:10:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354921807!28369220!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11291 invoked from network); 7 Dec 2012 23:10:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 23:10:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,241,1355097600"; 
   d="scan'208";a="6219"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 23:10:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 23:10:06 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th73W-0001uS-RL;
	Fri, 07 Dec 2012 23:10:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th73W-00082Y-QC;
	Fri, 07 Dec 2012 23:10:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14597-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 23:10:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14597: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14597 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14597/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565
 build-i386-pvops             2 host-install(2) broken in 14586 REGR. vs. 14565
 build-i386-oldkern           2 host-install(2) broken in 14586 REGR. vs. 14565
 test-amd64-i386-xl-credit2   3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl           3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-pair 4 host-install/dst_host(4) broken in 14592 REGR. vs. 14565
 test-amd64-i386-pair 3 host-install/src_host(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-pv           3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-multivcpu 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-win-vcpus1   3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 14592 REGR. vs. 14565

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 14586
 test-amd64-amd64-pair         3 host-install/src_host(3)  broken pass in 14592
 test-amd64-amd64-pair         4 host-install/dst_host(4)  broken pass in 14592
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail in 14586 pass in 14597

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 14592 never pass
 test-amd64-i386-win          16 leak-check/check      fail in 14592 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 14592 never pass
 test-amd64-i386-qemut-win    16 leak-check/check      fail in 14592 never pass

version targeted for testing:
 xen                  12d2786dc549
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 402 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 23:10:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 23:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th73c-0002Qa-6a; Fri, 07 Dec 2012 23:10:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th73Z-0002QQ-Rk
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 23:10:10 +0000
Received: from [85.158.139.83:10094] by server-4.bemta-5.messagelabs.com id
	A2/C1-14693-15772C05; Fri, 07 Dec 2012 23:10:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1354921807!28369220!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11291 invoked from network); 7 Dec 2012 23:10:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 23:10:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,241,1355097600"; 
   d="scan'208";a="6219"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 23:10:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 23:10:06 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th73W-0001uS-RL;
	Fri, 07 Dec 2012 23:10:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th73W-00082Y-QC;
	Fri, 07 Dec 2012 23:10:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14597-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 23:10:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14597: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14597 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14597/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565
 build-i386-pvops             2 host-install(2) broken in 14586 REGR. vs. 14565
 build-i386-oldkern           2 host-install(2) broken in 14586 REGR. vs. 14565
 test-amd64-i386-xl-credit2   3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl           3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-pair 4 host-install/dst_host(4) broken in 14592 REGR. vs. 14565
 test-amd64-i386-pair 3 host-install/src_host(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-pv           3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-multivcpu 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-win-vcpus1   3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1 3 host-install(3) broken in 14592 REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 14592 REGR. vs. 14565

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 14586
 test-amd64-amd64-pair         3 host-install/src_host(3)  broken pass in 14592
 test-amd64-amd64-pair         4 host-install/dst_host(4)  broken pass in 14592
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail in 14586 pass in 14597

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 14592 never pass
 test-amd64-i386-win          16 leak-check/check      fail in 14592 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 14592 never pass
 test-amd64-i386-qemut-win    16 leak-check/check      fail in 14592 never pass

version targeted for testing:
 xen                  12d2786dc549
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 402 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 07 23:59:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Dec 2012 23:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th7p8-000313-4a; Fri, 07 Dec 2012 23:59:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Th7p7-00030y-AE
	for xen-devel@lists.xensource.com; Fri, 07 Dec 2012 23:59:17 +0000
Received: from [85.158.138.51:50901] by server-13.bemta-3.messagelabs.com id
	EE/A6-24887-4D282C05; Fri, 07 Dec 2012 23:59:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1354924755!27921777!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDg4Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31365 invoked from network); 7 Dec 2012 23:59:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 23:59:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,241,1355097600"; 
   d="scan'208";a="6798"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Dec 2012 23:59:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 7 Dec 2012 23:59:15 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Th7p4-00027Z-Sk;
	Fri, 07 Dec 2012 23:59:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Th7p4-000234-LZ;
	Fri, 07 Dec 2012 23:59:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14609-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Dec 2012 23:59:14 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14609: trouble: blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14609 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14609/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14609 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14609/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 01:43:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 01:43:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th9R3-0008Gs-Ed; Sat, 08 Dec 2012 01:42:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Th9R1-0008Gk-L3
	for Xen-devel@lists.xensource.com; Sat, 08 Dec 2012 01:42:31 +0000
Received: from [85.158.143.99:50424] by server-3.bemta-4.messagelabs.com id
	8F/F8-18211-60B92C05; Sat, 08 Dec 2012 01:42:30 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354930948!23302961!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTE5OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31452 invoked from network); 8 Dec 2012 01:42:30 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Dec 2012 01:42:30 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB81gQLb028767
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 8 Dec 2012 01:42:26 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB81gPh0027481
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 8 Dec 2012 01:42:26 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB81gPuG022306; Fri, 7 Dec 2012 19:42:25 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 17:42:24 -0800
Date: Fri, 7 Dec 2012 17:42:23 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20121207174223.5411233d@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir.xen@gmail.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PVH] Help: mtrr.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi,

I am getting closer to submitting an RFC patch for PVH, but have two more things
that need sorting out before I can do that. The first is mtrr.c; I'm attaching a
list of the affected functions below. I am guessing the handling should be
similar to the HVM path, but I'd need to spend time understanding it. If
anyone already knows, please let me know.

Thanks,
Mukesh


diff -r ba5e9253d04d xen/arch/x86/hvm/mtrr.c
--- a/xen/arch/x86/hvm/mtrr.c	Thu Nov 01 16:53:31 2012 -0700
+++ b/xen/arch/x86/hvm/mtrr.c	Fri Dec 07 17:17:31 2012 -0800
@@ -553,6 +553,10 @@ int32_t hvm_get_mem_pinned_cacheattr(
 
     *type = 0;
 
+    if ( is_pvh_domain(d) ) {
+        printk("PVH: fixme: hvm_get_mem_pinned_cacheattr()\n");
+        return 0;
+    }
     if ( !is_hvm_domain(d) )
         return 0;
 
@@ -578,6 +582,11 @@ int32_t hvm_set_mem_pinned_cacheattr(
 {
     struct hvm_mem_pinned_cacheattr_range *range;
 
+    if ( is_pvh_domain(d) ) {
+        printk("PVH: fixme: hvm_set_mem_pinned_cacheattr()\n");
+        return 0;
+    }
+
     if ( !((type == PAT_TYPE_UNCACHABLE) ||
            (type == PAT_TYPE_WRCOMB) ||
            (type == PAT_TYPE_WRTHROUGH) ||
@@ -606,6 +615,12 @@ static int hvm_save_mtrr_msr(struct doma
     struct vcpu *v;
     struct hvm_hw_mtrr hw_mtrr;
     struct mtrr_state *mtrr_state;
+
+    if ( is_pvh_domain(d) ) {
+        printk("PVH: fixme: hvm_save_mtrr_msr()\n");
+        return 0;
+    }
+
     /* save mtrr&pat */
     for_each_vcpu(d, v)
     {
@@ -644,6 +659,10 @@ static int hvm_load_mtrr_msr(struct doma
     struct mtrr_state *mtrr_state;
     struct hvm_hw_mtrr hw_mtrr;
 
+    if ( is_pvh_domain(d) ) {
+        printk("PVH: fixme: hvm_load_mtrr_msr()\n");
+        return 0;
+    }
     vcpuid = hvm_load_instance(h);
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
     {
@@ -693,6 +712,14 @@ uint8_t epte_get_entry_emt(struct domain
          ((d->vcpu == NULL) || ((v = d->vcpu[0]) == NULL)) )
         return MTRR_TYPE_WRBACK;
 
+    /* PVH TBD: FIXME: this needs to be studied; figure out what needs to be
+     * done for PVH. */
+    if ( is_pvh_domain(d) ) {
+        if (direct_mmio)
+            return MTRR_TYPE_UNCACHABLE;
+        return MTRR_TYPE_WRBACK;
+    }
+
     if ( !v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] )
         return MTRR_TYPE_WRBACK;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 01:47:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 01:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Th9V8-0008OM-43; Sat, 08 Dec 2012 01:46:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Th9V6-0008OE-Sv
	for Xen-devel@lists.xensource.com; Sat, 08 Dec 2012 01:46:45 +0000
Received: from [193.109.254.147:50101] by server-10.bemta-14.messagelabs.com
	id 29/18-31741-40C92C05; Sat, 08 Dec 2012 01:46:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1354931202!9347592!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTA0Nzc1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25879 invoked from network); 8 Dec 2012 01:46:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Dec 2012 01:46:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qB81kcMJ032019
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 8 Dec 2012 01:46:38 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qB81kbUM000107
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 8 Dec 2012 01:46:38 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qB81kbJ2021954; Fri, 7 Dec 2012 19:46:37 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Dec 2012 17:46:37 -0800
Date: Fri, 7 Dec 2012 17:46:36 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>, Keir
	Fraser <keir.xen@gmail.com>, Jan Beulich <JBeulich@suse.com>
Message-ID: <20121207174636.49c4f7eb@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The second is msi.c. I don't understand it very well, and need to figure out
what to do for PVH. I would appreciate suggestions if anyone knows.

Thanks for your time,
Mukesh



diff -r ba5e9253d04d xen/arch/x86/msi.c
--- a/xen/arch/x86/msi.c	Thu Nov 01 16:53:31 2012 -0700
+++ b/xen/arch/x86/msi.c	Fri Dec 07 17:45:07 2012 -0800
@@ -766,6 +766,9 @@ static int msix_capability_init(struct p
         WARN_ON(rangeset_overlaps_range(mmio_ro_ranges, dev->msix_pba.first,
                                         dev->msix_pba.last));
 
+/* PVH: fixme: not a clue what to do here :) */
+if (is_pvh_domain(dev->domain) && dev->domain->domain_id != 0)
+{
         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
                                 dev->msix_table.last) )
             WARN();
@@ -793,6 +796,7 @@ static int msix_capability_init(struct p
                 /* XXX How to deal with existing mappings? */
             }
         }
+}
     }
     WARN_ON(dev->msix_nr_entries != nr_entries);
     WARN_ON(dev->msix_table.first != (table_paddr >> PAGE_SHIFT));

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 02:58:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 02:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThAbc-0000m5-Od; Sat, 08 Dec 2012 02:57:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThAbb-0000m0-8J
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 02:57:31 +0000
Received: from [85.158.143.99:2850] by server-3.bemta-4.messagelabs.com id
	D3/87-18211-A9CA2C05; Sat, 08 Dec 2012 02:57:30 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354935449!21526226!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20152 invoked from network); 8 Dec 2012 02:57:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 02:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,242,1355097600"; 
   d="scan'208";a="8192"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 02:57:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 02:57:27 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThAbW-0002zP-Uc;
	Sat, 08 Dec 2012 02:57:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThAbW-0001Gk-Hs;
	Sat, 08 Dec 2012 02:57:26 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14598-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 02:57:26 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14598: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14598 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14598/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14481
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14481
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14481
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14481
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14481
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14481
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14481
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14481
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14481
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14481
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)       broken REGR. vs. 14481
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-amd64-xl-qemut-win7-amd64  3 host-install(3) broken REGR. vs. 14481
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14481
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14481
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-amd64-qemut-win    3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14481

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14481
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14481
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14481

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-amd64-amd64-qemut-win                                   broken  
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 05:39:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 05:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThD7w-0002Ct-0m; Sat, 08 Dec 2012 05:39:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1ThD7u-0002Co-EF
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 05:39:02 +0000
Received: from [85.158.143.99:13489] by server-2.bemta-4.messagelabs.com id
	C8/E9-30861-572D2C05; Sat, 08 Dec 2012 05:39:01 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1354945139!28429496!1
X-Originating-IP: [209.85.210.174]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29619 invoked from network); 8 Dec 2012 05:39:00 -0000
Received: from mail-ia0-f174.google.com (HELO mail-ia0-f174.google.com)
	(209.85.210.174)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 05:39:00 -0000
Received: by mail-ia0-f174.google.com with SMTP id y25so2234584iay.5
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 21:38:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=90UaLM8f8L/rFI0j551LcQn/uhE2lvc3+lsXNZCG9y0=;
	b=wDyzRzrfM6ifKRMoFzYUAwbR5Df7sDFTn9v1u28OQdpCdQRrSnjxy9nMr8cxlS4G9J
	MsGyw8vd13XNPR9yOT4ERHfcXEkJSOUBrwFN0wqUMOtbG+EaIraKIa6R3Nj0O5H/oVGu
	Ng/W7PZNY0eJNUr8AAMATCTI6CnrN/a1U4TKAt5y+DwlFdqyrWNcjblNLDJ+M6l1t5Pe
	K/xyIXNIOAOgKVz3BMkDk/ufS0cTJMhFamS/+tXCBCTXa6efKqD39RABW+1TAcnM/dqQ
	Ix8ATmgPNRU05QwutKdseQo2SYACrAcL2irqgvEUV9iB9e4yYNX3IKOZWsozcgcu9pH2
	Rdng==
MIME-Version: 1.0
Received: by 10.50.194.132 with SMTP id hw4mr1194062igc.37.1354945139355; Fri,
	07 Dec 2012 21:38:59 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Fri, 7 Dec 2012 21:38:59 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212071811380.8801@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
	<alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
	<CAKhsbWZF6NaX92Cz3tp_OUqf=bd5qQ-W9hcPN4Y96D7EKoqXig@mail.gmail.com>
	<alpine.DEB.2.02.1212071811380.8801@kaball.uk.xensource.com>
Date: Sat, 8 Dec 2012 13:38:59 +0800
X-Google-Sender-Auth: 0cXpL31QLemo3am2u8YcHSwPbWo
Message-ID: <CAKhsbWaCb5LYwLLKqF8_5TuxShCFASs1p+PW77tbxCGYNHd4Aw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2351402363160171476=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2351402363160171476==
Content-Type: multipart/alternative; boundary=14dae93411e9b8af8d04d050bfe1

--14dae93411e9b8af8d04d050bfe1
Content-Type: text/plain; charset=ISO-8859-1

On Sat, Dec 8, 2012 at 2:11 AM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Fri, 7 Dec 2012, G.R. wrote:
> > I won't be able to reproduce the bug, so I thank you in advance for any
> > help you can give me with testing a fix
> >
> > Thanks for your fix, I'll try it out tomorrow and post the result --
> it's too late today.
> > But I'm wondering if it is specific to 4.1.3, or has been fixed in 4.2
> > already.
>
> I don't think it has been fixed in 4.2
>

Thanks Stefano. The fix solved the regression caused by the msi_retranslate
patch.
I guess you can submit this patch for further verification.

But unfortunately it does not help to make the IGD work in a pure HVM guest.
I've enabled the drm.debug log and dumped the IGD register values for both
the passing and failing cases.
They do have some differences; I'm contacting the driver developer to
identify which of them are fatal.

I hope this effort can finally make win7 work as well.
Currently it simply BSODs at an early stage of boot after the driver is
installed.
I have no way to debug it directly -- if you have any suggestions, please
share them with me.

Thanks,
Timothy

--14dae93411e9b8af8d04d050bfe1--


--===============2351402363160171476==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2351402363160171476==--


From xen-devel-bounces@lists.xen.org Sat Dec 08 05:39:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 05:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThD7w-0002Ct-0m; Sat, 08 Dec 2012 05:39:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1ThD7u-0002Co-EF
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 05:39:02 +0000
Received: from [85.158.143.99:13489] by server-2.bemta-4.messagelabs.com id
	C8/E9-30861-572D2C05; Sat, 08 Dec 2012 05:39:01 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1354945139!28429496!1
X-Originating-IP: [209.85.210.174]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29619 invoked from network); 8 Dec 2012 05:39:00 -0000
Received: from mail-ia0-f174.google.com (HELO mail-ia0-f174.google.com)
	(209.85.210.174)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 05:39:00 -0000
Received: by mail-ia0-f174.google.com with SMTP id y25so2234584iay.5
	for <xen-devel@lists.xen.org>; Fri, 07 Dec 2012 21:38:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=90UaLM8f8L/rFI0j551LcQn/uhE2lvc3+lsXNZCG9y0=;
	b=wDyzRzrfM6ifKRMoFzYUAwbR5Df7sDFTn9v1u28OQdpCdQRrSnjxy9nMr8cxlS4G9J
	MsGyw8vd13XNPR9yOT4ERHfcXEkJSOUBrwFN0wqUMOtbG+EaIraKIa6R3Nj0O5H/oVGu
	Ng/W7PZNY0eJNUr8AAMATCTI6CnrN/a1U4TKAt5y+DwlFdqyrWNcjblNLDJ+M6l1t5Pe
	K/xyIXNIOAOgKVz3BMkDk/ufS0cTJMhFamS/+tXCBCTXa6efKqD39RABW+1TAcnM/dqQ
	Ix8ATmgPNRU05QwutKdseQo2SYACrAcL2irqgvEUV9iB9e4yYNX3IKOZWsozcgcu9pH2
	Rdng==
MIME-Version: 1.0
Received: by 10.50.194.132 with SMTP id hw4mr1194062igc.37.1354945139355; Fri,
	07 Dec 2012 21:38:59 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Fri, 7 Dec 2012 21:38:59 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212071811380.8801@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
	<alpine.DEB.2.02.1212071635370.8801@kaball.uk.xensource.com>
	<CAKhsbWZF6NaX92Cz3tp_OUqf=bd5qQ-W9hcPN4Y96D7EKoqXig@mail.gmail.com>
	<alpine.DEB.2.02.1212071811380.8801@kaball.uk.xensource.com>
Date: Sat, 8 Dec 2012 13:38:59 +0800
X-Google-Sender-Auth: 0cXpL31QLemo3am2u8YcHSwPbWo
Message-ID: <CAKhsbWaCb5LYwLLKqF8_5TuxShCFASs1p+PW77tbxCGYNHd4Aw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Mats Petersson <mats.petersson@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Inconsistent MSI flags for pure HVM guest in 4.1.3
 with the back-ported 4.2.0 msi_retranslate patch. (Was: Issue about domU
 missing interrupt)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2351402363160171476=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2351402363160171476==
Content-Type: multipart/alternative; boundary=14dae93411e9b8af8d04d050bfe1

--14dae93411e9b8af8d04d050bfe1
Content-Type: text/plain; charset=ISO-8859-1

On Sat, Dec 8, 2012 at 2:11 AM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Fri, 7 Dec 2012, G.R. wrote:
> > I won't be able to reproduce the bug, so I thank you in advance for any
> > help you can give me with testing a fix
> >
> > Thanks for your fix, I'll try it out tomorrow and post the result --
> it's too late today.
> > But I'm wondering if it is specific to 4.1.3, or has been fixed in 4.2
> already.
>
> I don't think it has been fixed in 4.2
>

Thanks Stefano. The fix solved the regression caused by the mistranslate
patch.
I guess you can submit this patch for further verification.

But unfortunately it does not help to make the IGD work in a pure HVM guest.
I've enabled the drm.debug log and dumped the IGD register values for both
the passing and failing cases.
They do have some differences. I'm contacting the driver developer to
identify the fatal ones.

I hope this effort will finally make Win7 work as well.
Currently it simply BSODs at an early boot stage after the driver is
installed.
I have no way to debug it directly -- if you have any suggestions, please
share them with me.

Thanks,
Timothy

--14dae93411e9b8af8d04d050bfe1--


--===============2351402363160171476==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2351402363160171476==--


From xen-devel-bounces@lists.xen.org Sat Dec 08 05:43:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 05:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThDBv-0002N3-UZ; Sat, 08 Dec 2012 05:43:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThDBu-0002My-Gp
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 05:43:10 +0000
Received: from [193.109.254.147:3097] by server-15.bemta-14.messagelabs.com id
	2A/E5-12105-D63D2C05; Sat, 08 Dec 2012 05:43:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1354945389!2092948!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24836 invoked from network); 8 Dec 2012 05:43:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 05:43:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,242,1355097600"; 
   d="scan'208";a="9518"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 05:43:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 05:43:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThDBn-0003nt-JJ;
	Sat, 08 Dec 2012 05:43:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThDBn-0006uH-CU;
	Sat, 08 Dec 2012 05:43:03 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14613-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 05:43:03 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14613: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14613 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14570
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-xl-qemuu-win-vcpus1 10 guest-saverestore.2  fail pass in 14594
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 06:34:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 06:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThDz4-00036C-Nh; Sat, 08 Dec 2012 06:33:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThDz2-000367-BO
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 06:33:56 +0000
Received: from [85.158.143.99:28003] by server-3.bemta-4.messagelabs.com id
	A1/C5-18211-35FD2C05; Sat, 08 Dec 2012 06:33:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1354948434!21537134!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8772 invoked from network); 8 Dec 2012 06:33:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 06:33:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="9930"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 06:33:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 06:33:52 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThDyy-00045r-QW;
	Sat, 08 Dec 2012 06:33:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThDyy-0003BG-JH;
	Sat, 08 Dec 2012 06:33:52 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14610-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 06:33:52 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14610: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14610 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-qemuu-winxpsp3  8 guest-saverestore   fail REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 06:46:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 06:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThEAn-0003Hb-63; Sat, 08 Dec 2012 06:46:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThEAl-0003HW-Rh
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 06:46:04 +0000
Received: from [85.158.138.51:28567] by server-12.bemta-3.messagelabs.com id
	39/0F-22757-622E2C05; Sat, 08 Dec 2012 06:45:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1354949157!9313730!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18507 invoked from network); 8 Dec 2012 06:45:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 06:45:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="9972"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 06:45:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 06:45:57 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThEAf-00049U-C3;
	Sat, 08 Dec 2012 06:45:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThEAf-00007h-Am;
	Sat, 08 Dec 2012 06:45:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14614-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 06:45:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14614: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14614 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14614/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 06:58:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 06:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThEM7-0003ST-DT; Sat, 08 Dec 2012 06:57:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThEM5-0003SO-Uv
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 06:57:46 +0000
Received: from [85.158.138.51:51490] by server-13.bemta-3.messagelabs.com id
	36/6D-24887-4E4E2C05; Sat, 08 Dec 2012 06:57:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354949859!28067714!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23260 invoked from network); 8 Dec 2012 06:57:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 06:57:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="10006"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 06:57:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 06:57:38 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThELy-0004D8-LT;
	Sat, 08 Dec 2012 06:57:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThELy-0007BK-Gh;
	Sat, 08 Dec 2012 06:57:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14616-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 06:57:38 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14616: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14616 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14616/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 07:09:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 07:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThEX3-0003q8-3h; Sat, 08 Dec 2012 07:09:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThEX1-0003q3-5h
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 07:09:03 +0000
Received: from [85.158.139.211:30999] by server-1.bemta-5.messagelabs.com id
	8B/11-12813-E87E2C05; Sat, 08 Dec 2012 07:09:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354950541!18107063!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24340 invoked from network); 8 Dec 2012 07:09:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 07:09:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="10084"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 07:09:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 07:08:59 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThEWx-0004Gn-Nx;
	Sat, 08 Dec 2012 07:08:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThEWx-0005Vx-MK;
	Sat, 08 Dec 2012 07:08:59 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14617-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 07:08:59 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14617: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14617 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14617/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 07:09:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 07:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThEX3-0003q8-3h; Sat, 08 Dec 2012 07:09:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThEX1-0003q3-5h
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 07:09:03 +0000
Received: from [85.158.139.211:30999] by server-1.bemta-5.messagelabs.com id
	8B/11-12813-E87E2C05; Sat, 08 Dec 2012 07:09:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354950541!18107063!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24340 invoked from network); 8 Dec 2012 07:09:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 07:09:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="10084"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 07:09:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 07:08:59 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThEWx-0004Gn-Nx;
	Sat, 08 Dec 2012 07:08:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThEWx-0005Vx-MK;
	Sat, 08 Dec 2012 07:08:59 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14617-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 07:08:59 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14617: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14617 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14617/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 07:23:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 07:23:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThEke-000438-I0; Sat, 08 Dec 2012 07:23:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThEkc-000430-RZ
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 07:23:07 +0000
Received: from [85.158.139.83:29417] by server-10.bemta-5.messagelabs.com id
	05/23-13383-9DAE2C05; Sat, 08 Dec 2012 07:23:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1354951385!28251851!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkwNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1340 invoked from network); 8 Dec 2012 07:23:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 07:23:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="10146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 07:22:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 8 Dec 2012 07:22:05 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThEjd-0004Kx-OT;
	Sat, 08 Dec 2012 07:22:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThEjd-0001bl-Nn;
	Sat, 08 Dec 2012 07:22:05 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14611-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 07:22:05 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14611: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14611 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14611/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
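    [Editor's note: the range check this changeset describes can be
    sketched as a stand-alone C example. The MAX_ORDER value and the
    function name below are illustrative assumptions, not Xen's actual
    code; Xen defines its own MAX_ORDER and performs the check inside
    populate_physmap(), decrease_reservation(), and memory_exchange().]

    ```c
    #include <assert.h>

    /* Illustrative value only; Xen defines its own MAX_ORDER. */
    #define MAX_ORDER 20

    /*
     * Sketch of the added range check: a guest-specified extent order
     * is accepted only if it is at most MAX_ORDER, so the per-extent
     * loops in the memory ops can no longer run for an effectively
     * unbounded number of pages.
     */
    static int extent_order_ok(unsigned int order)
    {
        return order <= MAX_ORDER;
    }

    int main(void)
    {
        assert(extent_order_ok(0));              /* ordinary single-page extent */
        assert(extent_order_ok(MAX_ORDER));      /* largest permitted order */
        assert(!extent_order_ok(MAX_ORDER + 1)); /* oversized order is rejected */
        return 0;
    }
    ```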
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
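The shape of this kind of leak fix can be sketched as follows. This is a minimal illustration, not Xen's actual code: `struct hvm_domain_sketch` and `dirty_vram_teardown()` are assumed stand-ins for `domain->arch.hvm_domain.dirty_vram` and its teardown path.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for the per-domain HVM state that owns the
 * dirty-VRAM tracking buffer. */
struct hvm_domain_sketch {
    void *dirty_vram;   /* allocated when VRAM dirty tracking starts */
};

/* The fix's shape: free the buffer and clear the pointer on teardown,
 * so repeated teardown (or a later teardown after enable/disable
 * cycles) neither leaks nor double-frees. */
static void dirty_vram_teardown(struct hvm_domain_sketch *d)
{
    free(d->dirty_vram);     /* free(NULL) is a no-op, so this is safe */
    d->dirty_vram = NULL;    /* prevent dangling use / double free */
}
```

Clearing the pointer after freeing is what makes the teardown idempotent.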
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
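The range check this changeset describes can be sketched as below. This is an illustration under assumed names, not Xen's code: `do_memop_sketch()` is hypothetical, and the `MAX_ORDER` value here is made up (the real one is architecture-dependent).

```c
#include <assert.h>

/* Illustrative stand-in for Xen's MAX_ORDER. */
#define MAX_ORDER 20

/* The fix's shape: reject a guest-specified extent order above
 * MAX_ORDER up front, so the loop over 2^order pages per extent
 * stays bounded instead of running almost unbounded. */
static long do_memop_sketch(unsigned int extent_order)
{
    if (extent_order > MAX_ORDER)
        return -1;   /* real code fails the subop instead of looping */
    /* ... process up to (1UL << extent_order) pages per extent ... */
    return 0;
}
```

The check sits before any allocation work, which is the point of the fix: an attacker-controlled order never reaches the loop.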
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 10:19:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 10:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThHVA-0005OF-Cf; Sat, 08 Dec 2012 10:19:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThHV8-0005OA-2C
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 10:19:18 +0000
Received: from [85.158.139.211:13659] by server-3.bemta-5.messagelabs.com id
	BB/1A-25441-52413C05; Sat, 08 Dec 2012 10:19:17 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1354961956!18119283!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14058 invoked from network); 8 Dec 2012 10:19:16 -0000
Received: from unknown (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 10:19:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="10983"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 10:17:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 10:17:12 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThHT6-0005Cc-1Y;
	Sat, 08 Dec 2012 10:17:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThHT5-00075K-Tu;
	Sat, 08 Dec 2012 10:17:11 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14612-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 10:17:11 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14612: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14612 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14612/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14563
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14563
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair         3 host-install/src_host(3)  broken pass in 14595
 test-amd64-amd64-pair         4 host-install/dst_host(4)  broken pass in 14595
 test-i386-i386-xl-win    12 guest-localmigrate/x10 fail in 14595 pass in 14612

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
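The error-propagation pattern this fix describes looks roughly like this. A hedged sketch only: `copy_to_guest_sketch()` and `pdc_op_sketch()` are hypothetical stand-ins for Xen's guest-copy helpers (which return the number of bytes *not* copied) and the platform op.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Stand-in for a Xen guest-copy helper: returns bytes NOT copied.
 * A NULL destination models a faulting guest pointer. */
static unsigned long copy_to_guest_sketch(void *dst, const void *src,
                                          size_t n)
{
    if (dst == NULL)
        return n;
    memcpy(dst, src, n);
    return 0;
}

/* The fix's shape: check the copy's result and return -EFAULT on
 * failure, instead of silently reporting success. */
static int pdc_op_sketch(void *guest_buf)
{
    unsigned int pdc_result = 0x1;
    if (copy_to_guest_sketch(guest_buf, &pdc_result, sizeof(pdc_result)))
        return -EFAULT;
    return 0;
}
```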
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
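The control-flow invariant this changeset enforces can be sketched as follows. Both functions are illustrative stand-ins, not Xen's real `hypercall_create_continuation()` / `hypercall_cancel_continuation()`: the essential point is that cancellation rewinds guest state on the assumption that creation advanced it, so it must only run when creation did.

```c
#include <assert.h>
#include <stdbool.h>

static bool continuation_active;

static void create_continuation_sketch(void)
{
    continuation_active = true;   /* real code adjusts guest state */
}

static void cancel_continuation_sketch(void)
{
    /* Cancelling without a prior create would mis-adjust the guest's
     * return address; that pairing is the invariant being asserted. */
    assert(continuation_active);
    continuation_active = false;
}

/* The fix's shape: the compat wrapper only cancels the continuation
 * when one was actually established. */
static void compat_wrapper_sketch(bool preempted)
{
    if (preempted)
        create_continuation_sketch();
    /* ... the wrapped operation turns out to complete in full ... */
    if (preempted)                /* the fix: guard the cancellation */
        cancel_continuation_sketch();
}
```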
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
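The bounds behaviour this fix requires can be sketched as below. This is a hedged illustration with assumed names and a made-up frame-table size, not Xen's implementation: the point is only that any invalid GFN must yield NULL rather than an out-of-range frame-table entry, in the non-translated case too.

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_MAX_PFN 4UL   /* illustrative machine-frame bound */

struct page_info_sketch { int refcount; };
static struct page_info_sketch frame_table_sketch[SKETCH_MAX_PFN];

/* The fix's shape: validate the GFN before indexing the frame table,
 * so an invalid GFN cannot produce a pointer past the table. */
static struct page_info_sketch *get_page_from_gfn_sketch(unsigned long gfn)
{
    if (gfn >= SKETCH_MAX_PFN)   /* invalid GFN => NULL */
        return NULL;
    return &frame_table_sketch[gfn];
}
```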
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 13:02:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 13:02:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThK2l-0006QU-Tu; Sat, 08 Dec 2012 13:02:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <asadrupucit2006@gmail.com>) id 1ThK2k-0006QP-3I
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 13:02:10 +0000
Received: from [85.158.139.211:42737] by server-2.bemta-5.messagelabs.com id
	42/0D-16162-15A33C05; Sat, 08 Dec 2012 13:02:09 +0000
X-Env-Sender: asadrupucit2006@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1354971727!19544779!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3029 invoked from network); 8 Dec 2012 13:02:08 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 13:02:08 -0000
Received: by mail-ob0-f173.google.com with SMTP id xn12so1480815obc.32
	for <xen-devel@lists.xen.org>; Sat, 08 Dec 2012 05:02:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=y/JCQ667vHgNnmCOV0f0GnceZLDSjg3DwQyR6F/ImDo=;
	b=sy7LPeycqtfDTQ6GOnBvlHKUQ1TnOC+/iVneHBINboFQtw68m+q/BtZSQpnK2fv31A
	0G/58TsPnouMaemi3Kv0vbeyzLjcRui6exqMuoOFdfLZGlcae52KQ3/MFvRK3runfrQU
	DT5P0k2k4UZQm1gkKg80dDpHzWgRa0hRlLoa+onChGF9amMuD/SjuqgREQEBIyzENUiO
	aWIHItd44Q5+byaBPqhIBMzu6XKtk8DcBxM+KPQpVwatrQxtQrKJxML+amU+NW173W4n
	XTQTJW4JnW8MqI9j/PGoipymkGVGKU5K8jUgQTk5WxVzLIHEoHkTJTheqMLf8I6VVqZg
	9ULw==
MIME-Version: 1.0
Received: by 10.60.4.35 with SMTP id h3mr407618oeh.123.1354971726950; Sat, 08
	Dec 2012 05:02:06 -0800 (PST)
Received: by 10.60.20.3 with HTTP; Sat, 8 Dec 2012 05:02:06 -0800 (PST)
Date: Sat, 8 Dec 2012 18:02:06 +0500
Message-ID: <CAJ2v2mgkTjuq7cJWsgwbY-NJpAeEVRtai=GJV_EhrVzMuZ7QBA@mail.gmail.com>
From: asad raza <asadrupucit2006@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Why are we doing this (the code below) in the __init
 setup_pagetables() method in the ARM arch setup.c?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 /* Link in the fixmap pagetable */
    pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
                           >> PAGE_SHIFT);
    pte.pt.table = 1;
    write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
    /*
     * No flush required here. Individual flushes are done in
     * set_fixmap as entries are used.
     */

    /* Break up the Xen mapping into 4k pages and protect them separately. */
    for ( i = 0; i < LPAE_ENTRIES; i++ )
    {
        unsigned long mfn = paddr_to_pfn(xen_paddr) + i;
        unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT);
        if ( !is_kernel(va) )
            break;
        pte = mfn_to_xen_entry(mfn);
        pte.pt.table = 1; /* 4k mappings always have this bit set */
        if ( is_kernel_text(va) || is_kernel_inittext(va) )
        {
            pte.pt.xn = 0;
            pte.pt.ro = 1;
        }
        if ( is_kernel_rodata(va) )
            pte.pt.ro = 1;
        write_pte(xen_xenmap + i, pte);
        /* No flush required here as page table is not hooked in yet. */
    }
    pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
                           >> PAGE_SHIFT);
    pte.pt.table = 1;
    write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
    /* Have changed a mapping used for .text. Flush everything for safety. */
    flush_xen_text_tlb();

    /* From now on, no mapping may be both writable and executable. */
    WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 14:21:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 14:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThLH8-0007Bx-OO; Sat, 08 Dec 2012 14:21:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThLH7-0007Bq-9T
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 14:21:05 +0000
Received: from [85.158.138.51:41418] by server-6.bemta-3.messagelabs.com id
	9D/20-28265-0DC43C05; Sat, 08 Dec 2012 14:21:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1354976463!27997652!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2361 invoked from network); 8 Dec 2012 14:21:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 14:21:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="12034"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 14:21:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 14:21:02 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThLH4-0006Og-NV;
	Sat, 08 Dec 2012 14:21:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThLH4-0002QK-K0;
	Sat, 08 Dec 2012 14:21:02 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14620-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 14:21:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14620: trouble: blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14620 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14620/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

flight 14620 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14620/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)


From xen-devel-bounces@lists.xen.org Sat Dec 08 15:02:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 15:02:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThLv1-0007Yp-4f; Sat, 08 Dec 2012 15:02:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThLuz-0007Yk-7e
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 15:02:17 +0000
Received: from [85.158.139.83:33176] by server-4.bemta-5.messagelabs.com id
	3A/9F-14693-87653C05; Sat, 08 Dec 2012 15:02:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354978935!28985894!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1893 invoked from network); 8 Dec 2012 15:02:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 15:02:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="12264"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 15:02:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 15:02:12 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThLuu-0006bS-DV;
	Sat, 08 Dec 2012 15:02:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThLuu-0004yP-9R;
	Sat, 08 Dec 2012 15:02:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14623-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 15:02:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14623: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14623 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14623/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)


From xen-devel-bounces@lists.xen.org Sat Dec 08 15:32:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 15:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThMO5-0007nh-Ph; Sat, 08 Dec 2012 15:32:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThMO4-0007nc-FY
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 15:32:20 +0000
Received: from [193.109.254.147:37032] by server-8.bemta-14.messagelabs.com id
	CB/37-05026-38D53C05; Sat, 08 Dec 2012 15:32:19 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354980738!9885730!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17301 invoked from network); 8 Dec 2012 15:32:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 15:32:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="12450"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 15:32:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 15:32:18 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThMO1-0006kb-Vu;
	Sat, 08 Dec 2012 15:32:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThMO1-0006cW-Uu;
	Sat, 08 Dec 2012 15:32:17 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14619-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 15:32:17 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14619: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14619 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14619/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 15:44:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 15:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThMZf-00080A-9D; Sat, 08 Dec 2012 15:44:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThMZd-000805-Fa
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 15:44:17 +0000
Received: from [85.158.143.99:3711] by server-2.bemta-4.messagelabs.com id
	27/EC-30861-05063C05; Sat, 08 Dec 2012 15:44:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1354981456!23350264!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19001 invoked from network); 8 Dec 2012 15:44:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 15:44:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="12509"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 15:44:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 15:44:16 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThMZb-0006o8-VU;
	Sat, 08 Dec 2012 15:44:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThMZb-00055Y-Pz;
	Sat, 08 Dec 2012 15:44:15 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14624-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 15:44:15 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14624: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14624 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14624/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 17:44:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 17:44:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThORB-00013p-MM; Sat, 08 Dec 2012 17:43:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1ThOR9-00013j-Pn
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 17:43:40 +0000
Received: from [193.109.254.147:11437] by server-10.bemta-14.messagelabs.com
	id 08/65-31741-A4C73C05; Sat, 08 Dec 2012 17:43:38 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1354988616!9892123!1
X-Originating-IP: [209.85.223.172]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16669 invoked from network); 8 Dec 2012 17:43:38 -0000
Received: from mail-ie0-f172.google.com (HELO mail-ie0-f172.google.com)
	(209.85.223.172)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 17:43:38 -0000
Received: by mail-ie0-f172.google.com with SMTP id c13so4478236ieb.3
	for <xen-devel@lists.xen.org>; Sat, 08 Dec 2012 09:43:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=MoDuppC/L7uu9fFlptU3mC+CVktpJC/bzLKTTEimWzQ=;
	b=cHW4QtG7pdxlZSNXlMgl9j8smT4dtTqRwU6j5JZU1VeolhGHW6SlnJ0wcem257FRhU
	5r50rKc17PmOXtyqJLQc+Eq/riCOpfbNDqUBDr9KZSjmAvhFpK6D8SGo+oHF3lQ1IN2W
	8umUSYYnS3KrMClTqYiXpZMhr0RjGyTMfUTL4wSQ6i80Xb3ibZVhhSbDeWPdZfdtwpgg
	knh4H0A3POUhrEyRgeX6j/RSLq8jXZnYOU8nrGH9fRCtC+pF0LwWL7wA2nr2BtjP/Y2l
	qbM3xYJr7R/Kv4ARvaLTf4T/wCndT8oO7zl7JbgsFsLRHlYF8NHYbM9YnHae8cBANxIE
	tubQ==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr2355055igq.37.1354988616609; Sat,
	08 Dec 2012 09:43:36 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Sat, 8 Dec 2012 09:43:36 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Sat, 8 Dec 2012 09:43:36 -0800 (PST)
Date: Sun, 9 Dec 2012 01:43:36 +0800
X-Google-Sender-Auth: Cj3U94b2IOD3EKClSa1mxyLos6A
Message-ID: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8823744821956459500=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8823744821956459500==
Content-Type: multipart/alternative; boundary=14dae93411152af09204d05adfc5

--14dae93411152af09204d05adfc5
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
I'm debugging an issue where an HVM guest fails to produce any output with
IGD passed through.
This is a pure HVM Linux guest with the i915 driver compiled directly into
the kernel. A PVHVM kernel with the i915 driver compiled as a module works
without issue. I'm not yet sure which factor matters more: pure HVM, or the
I915=y kernel config.

The direct cause of the missing output is that the driver does not select
the Display PLL properly, which in turn is due to failing to detect the PCH
type properly.

Strangely enough, the intel_detect_pch() function works by checking the
device ID of the ISA bridge that comes with the chipset:

/*
 * The reason to probe ISA bridge instead of Dev31:Fun0 is to
 * make graphics device passthrough work easy for VMM, that only
 * need to expose ISA bridge to let driver know the real hardware
 * underneath. This is a requirement from virtualization team.
 */

I added some debug output in this function and found that it obtains a
strange device ID:
[ 1.005423] [drm] intel pch detect, found 00007000

This looks like the ISA bridge provided by qemu:
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.0 0601: 8086:7000

However, I can see the same device in a PVHVM Linux guest, yet there
intel_detect_pch() is not fooled by it. Is this due to the I915=m config, or
some magic played by PVOPS? Any suggestion on how to fix this?

Thanks,
Timothy

--14dae93411152af09204d05adfc5--


--===============8823744821956459500==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8823744821956459500==--


From xen-devel-bounces@lists.xen.org Sat Dec 08 18:02:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:02:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThOic-0001dY-Up; Sat, 08 Dec 2012 18:01:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThOib-0001dT-5q
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:01:41 +0000
Received: from [85.158.137.99:20900] by server-10.bemta-3.messagelabs.com id
	FB/8F-19806-48083C05; Sat, 08 Dec 2012 18:01:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1354989699!18512330!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7431 invoked from network); 8 Dec 2012 18:01:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:01:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13181"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:01:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:01:38 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThOiY-0007UA-6Q;
	Sat, 08 Dec 2012 18:01:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThOiX-0006yy-Vt;
	Sat, 08 Dec 2012 18:01:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14626-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:01:37 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14626: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14626 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14626/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 18:12:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThOsS-0001ty-1W; Sat, 08 Dec 2012 18:11:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThOsQ-0001tr-AY
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:11:50 +0000
Received: from [193.109.254.147:4600] by server-8.bemta-14.messagelabs.com id
	B9/09-05026-5E283C05; Sat, 08 Dec 2012 18:11:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354990308!9671268!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12787 invoked from network); 8 Dec 2012 18:11:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:11:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13235"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:11:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:11:46 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThOsM-0007Wy-D3;
	Sat, 08 Dec 2012 18:11:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThOsM-0005OK-Bd;
	Sat, 08 Dec 2012 18:11:46 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14625-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:11:46 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14625: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14625 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14625/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 18:12:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThOsS-0001ty-1W; Sat, 08 Dec 2012 18:11:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThOsQ-0001tr-AY
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:11:50 +0000
Received: from [193.109.254.147:4600] by server-8.bemta-14.messagelabs.com id
	B9/09-05026-5E283C05; Sat, 08 Dec 2012 18:11:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1354990308!9671268!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12787 invoked from network); 8 Dec 2012 18:11:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:11:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13235"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:11:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:11:46 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThOsM-0007Wy-D3;
	Sat, 08 Dec 2012 18:11:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThOsM-0005OK-Bd;
	Sat, 08 Dec 2012 18:11:46 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14625-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:11:46 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14625: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14625 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14625/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 18:46:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThPPj-0002VR-5L; Sat, 08 Dec 2012 18:46:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThPPh-0002VJ-Ed
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:46:13 +0000
Received: from [85.158.139.83:4735] by server-1.bemta-5.messagelabs.com id
	24/85-12813-4FA83C05; Sat, 08 Dec 2012 18:46:12 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354992371!28872501!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22451 invoked from network); 8 Dec 2012 18:46:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:46:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13369"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:46:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:46:10 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThPPe-0007hg-Kq;
	Sat, 08 Dec 2012 18:46:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThPPe-00058p-IN;
	Sat, 08 Dec 2012 18:46:10 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14621-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:46:10 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14621: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14621 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14621/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566
 build-amd64                  2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-pvops            2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-oldkern          2 host-install(2) broken in 14587 REGR. vs. 14566
 build-i386-pvops             2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386-oldkern           2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386                   2 host-install(2) broken in 14583 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 14574
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 14583
 test-amd64-amd64-xl           3 host-install(3)  broken in 14574 pass in 14621

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14587 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14587 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-win         1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-win            1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 18:46:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThPPj-0002VR-5L; Sat, 08 Dec 2012 18:46:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThPPh-0002VJ-Ed
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:46:13 +0000
Received: from [85.158.139.83:4735] by server-1.bemta-5.messagelabs.com id
	24/85-12813-4FA83C05; Sat, 08 Dec 2012 18:46:12 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1354992371!28872501!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22451 invoked from network); 8 Dec 2012 18:46:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:46:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13369"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:46:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:46:10 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThPPe-0007hg-Kq;
	Sat, 08 Dec 2012 18:46:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThPPe-00058p-IN;
	Sat, 08 Dec 2012 18:46:10 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14621-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:46:10 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14621: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14621 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14621/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566
 build-amd64                  2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-pvops            2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-oldkern          2 host-install(2) broken in 14587 REGR. vs. 14566
 build-i386-pvops             2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386-oldkern           2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386                   2 host-install(2) broken in 14583 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 14574
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 14583
 test-amd64-amd64-xl           3 host-install(3)  broken in 14574 pass in 14621

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14587 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14587 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-win         1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-win            1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 18:58:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThPbB-0002yE-JW; Sat, 08 Dec 2012 18:58:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThPbA-0002y7-C1
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:58:04 +0000
Received: from [85.158.143.35:25681] by server-3.bemta-4.messagelabs.com id
	69/FB-18211-BBD83C05; Sat, 08 Dec 2012 18:58:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354993082!5560408!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5348 invoked from network); 8 Dec 2012 18:58:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:58:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13423"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:58:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:58:02 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThPb8-0007lD-D2;
	Sat, 08 Dec 2012 18:58:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThPb8-0003la-Bb;
	Sat, 08 Dec 2012 18:58:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14627-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:58:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14627: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14627 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14627/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 18:58:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 18:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThPbB-0002yE-JW; Sat, 08 Dec 2012 18:58:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThPbA-0002y7-C1
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 18:58:04 +0000
Received: from [85.158.143.35:25681] by server-3.bemta-4.messagelabs.com id
	69/FB-18211-BBD83C05; Sat, 08 Dec 2012 18:58:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1354993082!5560408!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5348 invoked from network); 8 Dec 2012 18:58:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 18:58:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13423"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 18:58:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 18:58:02 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThPb8-0007lD-D2;
	Sat, 08 Dec 2012 18:58:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThPb8-0003la-Bb;
	Sat, 08 Dec 2012 18:58:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14627-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 18:58:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14627: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14627 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14627/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)


From xen-devel-bounces@lists.xen.org Sat Dec 08 19:10:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 19:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThPmr-0003EB-QR; Sat, 08 Dec 2012 19:10:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThPmp-0003E3-L2
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 19:10:08 +0000
Received: from [85.158.143.35:43387] by server-2.bemta-4.messagelabs.com id
	7C/5C-30861-E8093C05; Sat, 08 Dec 2012 19:10:06 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354993805!15898101!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23271 invoked from network); 8 Dec 2012 19:10:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 19:10:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13462"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 19:10:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 19:10:05 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThPmn-0007p4-82;
	Sat, 08 Dec 2012 19:10:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThPmn-0003xr-7a;
	Sat, 08 Dec 2012 19:10:05 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14628-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 19:10:05 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14628: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14628 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14628/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 19:10:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 19:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThPmr-0003EB-QR; Sat, 08 Dec 2012 19:10:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThPmp-0003E3-L2
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 19:10:08 +0000
Received: from [85.158.143.35:43387] by server-2.bemta-4.messagelabs.com id
	7C/5C-30861-E8093C05; Sat, 08 Dec 2012 19:10:06 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1354993805!15898101!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23271 invoked from network); 8 Dec 2012 19:10:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 19:10:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13462"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 19:10:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 19:10:05 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThPmn-0007p4-82;
	Sat, 08 Dec 2012 19:10:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThPmn-0003xr-7a;
	Sat, 08 Dec 2012 19:10:05 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14628-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 19:10:05 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14628: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14628 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14628/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:01:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThQaH-0003vW-MQ; Sat, 08 Dec 2012 20:01:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThQaG-0003vR-01
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:01:12 +0000
Received: from [85.158.143.99:27147] by server-3.bemta-4.messagelabs.com id
	75/F7-18211-78C93C05; Sat, 08 Dec 2012 20:01:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1354996867!28012267!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29446 invoked from network); 8 Dec 2012 20:01:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:01:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13606"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:01:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:01:07 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThQaB-00084e-1S;
	Sat, 08 Dec 2012 20:01:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThQaB-0006k0-0n;
	Sat, 08 Dec 2012 20:01:07 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14629-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:01:07 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14629: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14629 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14629/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

flight 14629 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14629/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:11:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThQjk-0004AT-UO; Sat, 08 Dec 2012 20:11:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThQjj-0004AO-Th
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:11:00 +0000
Received: from [85.158.137.99:49010] by server-14.bemta-3.messagelabs.com id
	65/80-31424-3DE93C05; Sat, 08 Dec 2012 20:10:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354997458!13002137!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3028 invoked from network); 8 Dec 2012 20:10:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:10:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13631"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:10:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:10:57 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThQjh-00087S-8I;
	Sat, 08 Dec 2012 20:10:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThQjh-00054M-7g;
	Sat, 08 Dec 2012 20:10:57 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14630-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:10:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14630: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14630 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14630/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:11:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThQjk-0004AT-UO; Sat, 08 Dec 2012 20:11:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThQjj-0004AO-Th
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:11:00 +0000
Received: from [85.158.137.99:49010] by server-14.bemta-3.messagelabs.com id
	65/80-31424-3DE93C05; Sat, 08 Dec 2012 20:10:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1354997458!13002137!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3028 invoked from network); 8 Dec 2012 20:10:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:10:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13631"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:10:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:10:57 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThQjh-00087S-8I;
	Sat, 08 Dec 2012 20:10:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThQjh-00054M-7g;
	Sat, 08 Dec 2012 20:10:57 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14630-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:10:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14630: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14630 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14630/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:20:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThQs5-0004JA-U3; Sat, 08 Dec 2012 20:19:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThQs4-0004J4-Uu
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:19:37 +0000
Received: from [193.109.254.147:47518] by server-16.bemta-14.messagelabs.com
	id 33/48-09215-8D0A3C05; Sat, 08 Dec 2012 20:19:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1354997975!9139203!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32181 invoked from network); 8 Dec 2012 20:19:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:19:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13652"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:19:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:19:34 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThQs2-0008AK-RK;
	Sat, 08 Dec 2012 20:19:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThQs2-0004OI-QN;
	Sat, 08 Dec 2012 20:19:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14631-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:19:34 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14631: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14631 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14631/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:46:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRHp-0004ba-Dp; Sat, 08 Dec 2012 20:46:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRHn-0004bV-9p
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:46:11 +0000
Received: from [85.158.139.83:49204] by server-4.bemta-5.messagelabs.com id
	76/DE-14693-217A3C05; Sat, 08 Dec 2012 20:46:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354999569!29004060!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19090 invoked from network); 8 Dec 2012 20:46:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:46:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13748"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:46:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:46:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRHl-0008IN-4e;
	Sat, 08 Dec 2012 20:46:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRHl-00055o-3x;
	Sat, 08 Dec 2012 20:46:09 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14618-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:46:09 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14618: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14618 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14618/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14563
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14563
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
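
[Editor's note: the fix above boils down to a common hypervisor pattern:
a handler must translate a failed copy-out to guest memory into -EFAULT
rather than propagating a raw leftover-byte count. A minimal,
self-contained sketch of that pattern; copy_to_guest_stub and
do_pm_pdc_op are illustrative stand-ins, not Xen's real interfaces.]

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Hypothetical stand-in for Xen's copy_to_guest(): returns the number
 * of bytes that could NOT be copied (0 on success), mirroring the real
 * interface.  'fail' simulates an unmapped guest buffer. */
static unsigned long copy_to_guest_stub(void *dst, const void *src,
                                        unsigned long len, int fail)
{
    if (fail)
        return len;          /* nothing copied */
    memcpy(dst, src, len);
    return 0;
}

/* The pattern from the changeset: map a partial or failed copy to
 * -EFAULT instead of returning the leftover byte count to the caller. */
static int do_pm_pdc_op(void *guest_buf, const void *result,
                        unsigned long len, int fail)
{
    return copy_to_guest_stub(guest_buf, result, len, fail) ? -EFAULT : 0;
}
```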
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also be no attempt
    to cancel one: hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest-mode return address on the
    assumption that an earlier call to hypercall_create_continuation()
    took place.

    While touching this code, also restructure it slightly to improve
    readability, and switch to the more relaxed copy function (the
    earlier copy from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
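
[Editor's note: the mechanism described above can be modelled with a
toy example: creating a continuation rewinds the guest instruction
pointer so the hypercall re-executes later, and cancelling undoes that
rewind, so an unpaired cancel corrupts the return address. All names
below are illustrative, not Xen's real internals.]

```c
#include <assert.h>
#include <stdbool.h>

struct vcpu_state {
    unsigned long rip;        /* guest-mode return address */
    bool cont_created;
};

#define HYPERCALL_INSN_LEN 2  /* assumed length of the hypercall insn */

static void create_continuation(struct vcpu_state *v)
{
    v->rip -= HYPERCALL_INSN_LEN;   /* re-execute the hypercall later */
    v->cont_created = true;
}

static void cancel_continuation(struct vcpu_state *v)
{
    v->rip += HYPERCALL_INSN_LEN;   /* undo the rewind */
    v->cont_created = false;
}

/* Fixed compat-wrapper logic: only cancel when a continuation was
 * actually established.  Without the guard, the unpaired rip adjustment
 * would skip guest instructions after the hypercall. */
static void compat_wrapper(struct vcpu_state *v, bool need_continuation)
{
    if (need_continuation)
        create_continuation(v);
    /* ... the wrapped operation completes without being preempted ... */
    if (v->cont_created)            /* the guard the fix introduces */
        cancel_continuation(v);
}
```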
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
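
[Editor's note: in the non-translated (PV) case the GFN is used
directly as a machine frame number, so an out-of-range GFN must be
rejected before a page reference is taken. A self-contained sketch of
the required check; MAX_PAGE, frame_table and get_page_from_gfn_stub
are stand-ins for Xen's globals, not the real implementation.]

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PAGE 4                       /* assumed number of frames */

struct page_info { int refcount; };
static struct page_info frame_table[MAX_PAGE];

/* Returns a referenced page for a valid GFN, NULL otherwise.  The
 * range check is the behaviour the XSA-32 fix mandates for the
 * non-translated case. */
static struct page_info *get_page_from_gfn_stub(unsigned long gfn)
{
    if (gfn >= MAX_PAGE)                 /* invalid GFN -> NULL */
        return NULL;
    frame_table[gfn].refcount++;         /* take a page reference */
    return &frame_table[gfn];
}
```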
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:46:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRHp-0004ba-Dp; Sat, 08 Dec 2012 20:46:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRHn-0004bV-9p
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:46:11 +0000
Received: from [85.158.139.83:49204] by server-4.bemta-5.messagelabs.com id
	76/DE-14693-217A3C05; Sat, 08 Dec 2012 20:46:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1354999569!29004060!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19090 invoked from network); 8 Dec 2012 20:46:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:46:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13748"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:46:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:46:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRHl-0008IN-4e;
	Sat, 08 Dec 2012 20:46:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRHl-00055o-3x;
	Sat, 08 Dec 2012 20:46:09 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14618-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:46:09 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14618: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14618 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14618/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14563
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14563
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to the more relaxed copy
    function (the earlier copy from the same guest memory already
    validated the virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also be no attempt
    to cancel one: hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest-mode return address on the
    assumption that an earlier call to hypercall_create_continuation()
    took place.

    While touching this code, also restructure it slightly to improve
    readability, and switch to the more relaxed copy function (the
    earlier copy from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:58:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRTB-0004oY-PS; Sat, 08 Dec 2012 20:57:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRT9-0004oT-S7
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:57:56 +0000
Received: from [193.109.254.147:52096] by server-1.bemta-14.messagelabs.com id
	97/BD-25314-3D9A3C05; Sat, 08 Dec 2012 20:57:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355000274!2139655!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25096 invoked from network); 8 Dec 2012 20:57:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:57:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13778"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:57:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:57:49 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRT3-0008Lv-Le;
	Sat, 08 Dec 2012 20:57:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRT3-0003kI-LA;
	Sat, 08 Dec 2012 20:57:49 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14632-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:57:49 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14632: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14632 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14632/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to the more relaxed copy
    function (the earlier copy from the same guest memory already
    validated the virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also be no attempt
    to cancel one: hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest-mode return address on the
    assumption that an earlier call to hypercall_create_continuation()
    took place.

    While touching this code, also restructure it slightly to improve
    readability, and switch to the more relaxed copy function (the
    earlier copy from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 20:58:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 20:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRTB-0004oY-PS; Sat, 08 Dec 2012 20:57:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRT9-0004oT-S7
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 20:57:56 +0000
Received: from [193.109.254.147:52096] by server-1.bemta-14.messagelabs.com id
	97/BD-25314-3D9A3C05; Sat, 08 Dec 2012 20:57:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355000274!2139655!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25096 invoked from network); 8 Dec 2012 20:57:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 20:57:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13778"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 20:57:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 20:57:49 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRT3-0008Lv-Le;
	Sat, 08 Dec 2012 20:57:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRT3-0003kI-LA;
	Sat, 08 Dec 2012 20:57:49 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14632-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 20:57:49 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14632: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14632 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14632/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 21:09:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 21:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRdl-00051p-VA; Sat, 08 Dec 2012 21:08:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRdk-00051k-PZ
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 21:08:53 +0000
Received: from [85.158.138.51:6852] by server-16.bemta-3.messagelabs.com id
	BD/D6-07461-F5CA3C05; Sat, 08 Dec 2012 21:08:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355000926!19137354!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28325 invoked from network); 8 Dec 2012 21:08:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 21:08:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13875"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 21:08:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 21:08:46 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRdd-0008PL-Vh;
	Sat, 08 Dec 2012 21:08:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRdd-0002If-Ui;
	Sat, 08 Dec 2012 21:08:45 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14633-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 21:08:45 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14633: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14633 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14633/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 21:09:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 21:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRdl-00051p-VA; Sat, 08 Dec 2012 21:08:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRdk-00051k-PZ
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 21:08:53 +0000
Received: from [85.158.138.51:6852] by server-16.bemta-3.messagelabs.com id
	BD/D6-07461-F5CA3C05; Sat, 08 Dec 2012 21:08:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355000926!19137354!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28325 invoked from network); 8 Dec 2012 21:08:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 21:08:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13875"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 21:08:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 21:08:46 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRdd-0008PL-Vh;
	Sat, 08 Dec 2012 21:08:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRdd-0002If-Ui;
	Sat, 08 Dec 2012 21:08:45 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14633-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 21:08:45 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14633: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14633 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14633/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest-mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    While touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 21:20:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 21:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRoS-0005DR-68; Sat, 08 Dec 2012 21:19:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRoQ-0005DM-JI
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 21:19:55 +0000
Received: from [85.158.137.99:35633] by server-9.bemta-3.messagelabs.com id
	1F/B3-02388-9FEA3C05; Sat, 08 Dec 2012 21:19:53 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355001592!18466228!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16454 invoked from network); 8 Dec 2012 21:19:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 21:19:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13916"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 21:19:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 21:19:52 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRoO-0008Sx-9D;
	Sat, 08 Dec 2012 21:19:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRoO-0000Q1-8c;
	Sat, 08 Dec 2012 21:19:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14634-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 21:19:52 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14634: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14634 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14634/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM,
    non-multicall case, adjusts the guest-mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    While touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 21:20:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 21:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThRoS-0005DR-68; Sat, 08 Dec 2012 21:19:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThRoQ-0005DM-JI
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 21:19:55 +0000
Received: from [85.158.137.99:35633] by server-9.bemta-3.messagelabs.com id
	1F/B3-02388-9FEA3C05; Sat, 08 Dec 2012 21:19:53 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355001592!18466228!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16454 invoked from network); 8 Dec 2012 21:19:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 21:19:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,243,1355097600"; 
   d="scan'208";a="13916"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 21:19:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 21:19:52 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThRoO-0008Sx-9D;
	Sat, 08 Dec 2012 21:19:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThRoO-0000Q1-8c;
	Sat, 08 Dec 2012 21:19:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14634-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 21:19:52 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14634: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14634 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14634/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    While touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 08 23:15:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Dec 2012 23:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThTbt-00060l-QN; Sat, 08 Dec 2012 23:15:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThTbr-00060g-Q4
	for xen-devel@lists.xensource.com; Sat, 08 Dec 2012 23:15:04 +0000
Received: from [85.158.143.35:55141] by server-3.bemta-4.messagelabs.com id
	17/1D-18211-7F9C3C05; Sat, 08 Dec 2012 23:15:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355008502!14063673!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12496 invoked from network); 8 Dec 2012 23:15:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 23:15:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,245,1355097600"; 
   d="scan'208";a="14240"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Dec 2012 23:15:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 8 Dec 2012 23:15:01 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThTbp-0000b3-Mf;
	Sat, 08 Dec 2012 23:15:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThTbp-0004WB-Ly;
	Sat, 08 Dec 2012 23:15:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14622-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Dec 2012 23:15:01 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14622: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14622 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14622/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14481
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14481
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14481
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14481
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14481
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14481
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14481
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14481
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14481
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14481
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)       broken REGR. vs. 14481
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-amd64-xl-qemut-win7-amd64  3 host-install(3) broken REGR. vs. 14481
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14481
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14481
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-amd64-qemut-win    3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14481

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14481
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14481
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14481

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-amd64-amd64-qemut-win                                   broken  
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-amd64-amd64-qemut-win                                   broken  
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 03:14:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 03:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThXKp-0003k8-MG; Sun, 09 Dec 2012 03:13:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThXKo-0003k3-IN
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 03:13:43 +0000
Received: from [85.158.139.211:16046] by server-3.bemta-5.messagelabs.com id
	8D/5F-25441-5E104C05; Sun, 09 Dec 2012 03:13:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355022821!15420394!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25904 invoked from network); 9 Dec 2012 03:13:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 03:13:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,245,1355097600"; 
   d="scan'208";a="14929"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 03:13:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 03:13:40 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThXKm-0001lv-Dm;
	Sun, 09 Dec 2012 03:13:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThXKm-0002DY-DD;
	Sun, 09 Dec 2012 03:13:40 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14636-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 03:13:40 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14636: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14636 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14636/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 05:00:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 05:00:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThYzn-0004XG-Tz; Sun, 09 Dec 2012 05:00:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1ThYzl-0004X8-KW
	for xen-devel@lists.xen.org; Sun, 09 Dec 2012 05:00:06 +0000
Received: from [193.109.254.147:6459] by server-14.bemta-14.messagelabs.com id
	7E/8B-14517-4DA14C05; Sun, 09 Dec 2012 05:00:04 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1355029200!6762836!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5512 invoked from network); 9 Dec 2012 05:00:01 -0000
Received: from mail-la0-f45.google.com (HELO mail-la0-f45.google.com)
	(209.85.215.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 05:00:01 -0000
Received: by mail-la0-f45.google.com with SMTP id p9so1459443laa.32
	for <xen-devel@lists.xen.org>; Sat, 08 Dec 2012 21:00:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=uY8q2l11mxZ9bxGReMjgNYrOnLbKftMTjKGWhjI0GVA=;
	b=khPxd/oxP0RM/t+z/VELWxS4mpn1UoMugu/BbIlETqGKNvmRMkBWiEA/QV+cMkVlxo
	NKGg8QVF93Hsa+3Rf78HZK1wix1neoqFU7UjzAuDiEDw+mSM6Wq1wf/h0bvj/89crJDj
	r09PiuhigFtVrwRd+m+LI3nt3eXfOLXtj0LIZ3IAh9OQQihHxP2oRufBjdWUnHRlYIXX
	HaxN0eSMUFwS02xmm/S8XvBiB6NhSsnSTApJZtNd36H/HPtLX4NU4w+rCXvwYeWRDcCw
	YytBxvcKgk2lF3R7zxoCDmzNtVz0jHdjByP3GCXeDk72/4fg4ma94CjB34FfPmosCrJK
	Mjvw==
MIME-Version: 1.0
Received: by 10.152.104.115 with SMTP id gd19mr10015181lab.13.1355029200267;
	Sat, 08 Dec 2012 21:00:00 -0800 (PST)
Received: by 10.112.99.197 with HTTP; Sat, 8 Dec 2012 21:00:00 -0800 (PST)
In-Reply-To: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
Date: Sun, 9 Dec 2012 13:00:00 +0800
X-Google-Sender-Auth: SrhHxyGuMkDnlBrgKT4-2zu2bmM
Message-ID: <CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3127713365379019960=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3127713365379019960==
Content-Type: multipart/alternative; boundary=f46d04088d23246a8104d0645275

--f46d04088d23246a8104d0645275
Content-Type: text/plain; charset=ISO-8859-1

I dug further and got confused.
The host ISA bridge 00:1f.0 is automatically passed-through as part of the
gfx_passthru magic.
However, it is passed through as a PCI bridge:
On host:   00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express Chipset
LPC Controller [8086:1e4a] (rev 04)
On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express Chipset
LPC Controller [8086:1e4a] (rev 04)

This is the case for both pure HVM and PVHVM. And this one exists in both
cases too:
00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA
[Natoma/Triton II] [8086:7000]

And the intel_detect_pch() function only checks the first ISA bridge on the
PCI bus:
pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);

Unless there is magic elsewhere, I can't imagine the code would behave
differently on the two builds.
But what's the magic behind this?

Also, is there any way to get rid of the ISA bridge emulated by qemu?
I don't think it is ever required in most cases...

Thanks,
Timothy


On Sun, Dec 9, 2012 at 1:43 AM, G.R. <firemeteor@users.sourceforge.net>wrote:

> Hi all,
> I'm debugging an issue that an HVM guest failed to produce any output with
> IGD passed through.
> This is a pure HVM Linux guest with the i915 driver compiled in directly.
> A PVHVM kernel with the i915 driver compiled as a module works without issue.
> I'm not yet sure which factor is more important, pure HVM or the I915=y
> kernel config.
>
> The direct cause of the missing output is that the driver does not select
> the Display PLL properly, which in turn is due to failing to detect the PCH
> type properly.
>
> Strangely enough, the intel_detect_pch() function works by checking the
> device ID of the ISA bridge coming with the chipset:
>
> /*
>  * The reason to probe ISA bridge instead of Dev31:Fun0 is to make
>  * graphics device passthrough work easy for VMM, that only need to
>  * expose ISA bridge to let driver know the real hardware underneath.
>  * This is a requirement from virtualization team.
>  */
>
> I added some debug output in this function and found that it obtained a
> strange device ID:
> [ 1.005423] [drm] intel pch detect, found 00007000
>
> This looks like the ISA bridge provided by qemu:
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.0 0601: 8086:7000
>
> However, I can find the same device on a PVHVM Linux guest, but
> intel_detect_pch() is not fooled by that. Is it due to the I915=m config or
> some magic played by PVOPS? Any suggestion how to fix this?
>
> Thanks,
> Timothy
>

--f46d04088d23246a8104d0645275--


--===============3127713365379019960==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3127713365379019960==--


From xen-devel-bounces@lists.xen.org Sun Dec 09 08:05:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 08:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thbsz-0006Fj-Sc; Sun, 09 Dec 2012 08:05:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thbsy-0006Fe-BN
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 08:05:16 +0000
Received: from [193.109.254.147:58797] by server-12.bemta-14.messagelabs.com
	id 89/EF-00510-B3644C05; Sun, 09 Dec 2012 08:05:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355040313!1975893!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2059 invoked from network); 9 Dec 2012 08:05:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 08:05:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="16063"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 08:05:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 08:05:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thbsv-0003GI-2S;
	Sun, 09 Dec 2012 08:05:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thbsu-0000Gj-Rl;
	Sun, 09 Dec 2012 08:05:12 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14635-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 08:05:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14635: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14635 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14635/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566
 build-amd64                  2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-pvops            2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-oldkern          2 host-install(2) broken in 14587 REGR. vs. 14566
 build-i386-pvops             2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386-oldkern           2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386                   2 host-install(2) broken in 14583 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 14574
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 14583
 test-amd64-amd64-xl           3 host-install(3)  broken in 14574 pass in 14635

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14587 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14587 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-win         1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-win            1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 08:05:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 08:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thbsz-0006Fj-Sc; Sun, 09 Dec 2012 08:05:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thbsy-0006Fe-BN
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 08:05:16 +0000
Received: from [193.109.254.147:58797] by server-12.bemta-14.messagelabs.com
	id 89/EF-00510-B3644C05; Sun, 09 Dec 2012 08:05:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355040313!1975893!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2059 invoked from network); 9 Dec 2012 08:05:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 08:05:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="16063"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 08:05:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 08:05:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thbsv-0003GI-2S;
	Sun, 09 Dec 2012 08:05:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thbsu-0000Gj-Rl;
	Sun, 09 Dec 2012 08:05:12 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14635-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 08:05:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14635: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14635 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14635/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566
 build-amd64                  2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-pvops            2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-oldkern          2 host-install(2) broken in 14587 REGR. vs. 14566
 build-i386-pvops             2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386-oldkern           2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386                   2 host-install(2) broken in 14583 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 14574
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 14583
 test-amd64-amd64-xl           3 host-install(3)  broken in 14574 pass in 14635

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14587 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14587 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-win         1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-win            1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 08:30:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 08:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThcGQ-0006Tq-4g; Sun, 09 Dec 2012 08:29:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThcGN-0006Tl-Vd
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 08:29:28 +0000
Received: from [85.158.139.211:23308] by server-7.bemta-5.messagelabs.com id
	58/42-08009-7EB44C05; Sun, 09 Dec 2012 08:29:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355041766!18171690!1
From xen-devel-bounces@lists.xen.org Sun Dec 09 08:30:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 08:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThcGQ-0006Tq-4g; Sun, 09 Dec 2012 08:29:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThcGN-0006Tl-Vd
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 08:29:28 +0000
Received: from [85.158.139.211:23308] by server-7.bemta-5.messagelabs.com id
	58/42-08009-7EB44C05; Sun, 09 Dec 2012 08:29:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355041766!18171690!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14775 invoked from network); 9 Dec 2012 08:29:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 08:29:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="16188"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 08:29:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 08:29:26 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThcGM-0003Nd-4a;
	Sun, 09 Dec 2012 08:29:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThcGM-00086a-3r;
	Sun, 09 Dec 2012 08:29:26 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14637-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 08:29:26 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14637: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14637 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14637/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14619
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 11:41:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 11:41:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThfFx-0007eo-UK; Sun, 09 Dec 2012 11:41:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThfFw-0007ej-FT
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 11:41:12 +0000
Received: from [85.158.139.83:29244] by server-15.bemta-5.messagelabs.com id
	4D/35-20523-6D874C05; Sun, 09 Dec 2012 11:41:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355053270!27888568!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15598 invoked from network); 9 Dec 2012 11:41:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 11:41:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="17344"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 11:41:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 11:41:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThfFs-0004K1-UA;
	Sun, 09 Dec 2012 11:41:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThfFs-0008JO-Mf;
	Sun, 09 Dec 2012 11:41:08 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14638-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 11:41:08 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14638: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14638 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14638/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 14:49:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 14:49:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThiBY-0000yE-5o; Sun, 09 Dec 2012 14:48:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThiBV-0000y9-RV
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 14:48:50 +0000
Received: from [85.158.138.51:53646] by server-2.bemta-3.messagelabs.com id
	74/56-04744-0D4A4C05; Sun, 09 Dec 2012 14:48:48 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1355064528!20070602!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23830 invoked from network); 9 Dec 2012 14:48:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 14:48:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="18189"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 14:48:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 14:48:47 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThiBT-0005F6-N9;
	Sun, 09 Dec 2012 14:48:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThiBT-0004V3-IW;
	Sun, 09 Dec 2012 14:48:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14639-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 14:48:47 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14639: regressions -
	trouble: blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14639 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14639/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 test-amd64-amd64-xl-qemuu-win 8 guest-saverestore fail in 14570 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 14619
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 14570
 test-amd64-i386-qemuu-win-vcpus1  3 host-install(3)       broken pass in 14570
 test-amd64-i386-xl-qemuu-win7-amd64  3 host-install(3)    broken pass in 14570

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14570 never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check  fail in 14570 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 14570 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 14570 never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check      fail in 14570 never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-win-vcpus1                             broken  
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 15:03:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 15:03:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThiPH-0001Dk-JE; Sun, 09 Dec 2012 15:03:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThiPG-0001Dd-2V
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 15:03:02 +0000
Received: from [193.109.254.147:43053] by server-12.bemta-14.messagelabs.com
	id 28/35-00510-528A4C05; Sun, 09 Dec 2012 15:03:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355065380!2189843!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19512 invoked from network); 9 Dec 2012 15:03:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 15:03:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="18230"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 15:03:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 15:03:00 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThiPE-0005Jb-90;
	Sun, 09 Dec 2012 15:03:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThiPE-00025Y-5y;
	Sun, 09 Dec 2012 15:03:00 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14644-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 15:03:00 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14644: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14644 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14644/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 18:03:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 18:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThlDd-0002iy-4i; Sun, 09 Dec 2012 18:03:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThlDb-0002it-NH
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 18:03:11 +0000
Received: from [85.158.143.99:23931] by server-1.bemta-4.messagelabs.com id
	57/F9-28401-F52D4C05; Sun, 09 Dec 2012 18:03:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1355076190!27807687!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32501 invoked from network); 9 Dec 2012 18:03:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 18:03:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,246,1355097600"; 
   d="scan'208";a="19190"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 18:03:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 18:03:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThlDZ-0006CX-Nx;
	Sun, 09 Dec 2012 18:03:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThlDZ-0006BU-59;
	Sun, 09 Dec 2012 18:03:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14645-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 18:03:09 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14645: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14645 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14645/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 19:05:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 19:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThmAu-0003E8-OR; Sun, 09 Dec 2012 19:04:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThmAs-0003E3-Gl
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 19:04:26 +0000
Received: from [193.109.254.147:41074] by server-14.bemta-14.messagelabs.com
	id 2F/63-14517-9B0E4C05; Sun, 09 Dec 2012 19:04:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355079861!9495525!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8025 invoked from network); 9 Dec 2012 19:04:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 19:04:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="19456"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 19:04:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 19:04:21 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThmAn-0006VH-1r;
	Sun, 09 Dec 2012 19:04:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThmAm-0001Cj-S5;
	Sun, 09 Dec 2012 19:04:20 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14641-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 19:04:20 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14641: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14641 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14641/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14563
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14563
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14563
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-qemut-win     3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14563
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)   broken REGR. vs. 14563
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14563
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14563
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14563

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14563
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    broken  
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 19:18:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 19:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThmOH-0003RR-97; Sun, 09 Dec 2012 19:18:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThmOF-0003RM-J0
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 19:18:15 +0000
Received: from [85.158.139.211:52119] by server-6.bemta-5.messagelabs.com id
	3F/F6-30498-6F3E4C05; Sun, 09 Dec 2012 19:18:14 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355080693!19641625!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2994 invoked from network); 9 Dec 2012 19:18:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 19:18:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="19528"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 19:18:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 19:18:12 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThmOC-0006Zc-Q4;
	Sun, 09 Dec 2012 19:18:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThmOC-0007Lm-Ob;
	Sun, 09 Dec 2012 19:18:12 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14646-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 19:18:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14646: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14646 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14646/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 19:32:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 19:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thmb7-0003cj-IZ; Sun, 09 Dec 2012 19:31:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thmb5-0003ce-OW
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 19:31:32 +0000
Received: from [85.158.138.51:16861] by server-9.bemta-3.messagelabs.com id
	DF/BA-02388-217E4C05; Sun, 09 Dec 2012 19:31:30 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355081489!23867902!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4317 invoked from network); 9 Dec 2012 19:31:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 19:31:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="19615"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 19:31:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 19:31:12 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thmam-0006dZ-RE;
	Sun, 09 Dec 2012 19:31:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thmam-0004RY-Ip;
	Sun, 09 Dec 2012 19:31:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14647-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 19:31:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14647: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14647 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14647/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 20:03:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 20:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thn5G-0003vk-9X; Sun, 09 Dec 2012 20:02:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thn5E-0003vf-S4
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 20:02:41 +0000
Received: from [193.109.254.147:37115] by server-15.bemta-14.messagelabs.com
	id 5A/06-12105-06EE4C05; Sun, 09 Dec 2012 20:02:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355083359!9498279!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5608 invoked from network); 9 Dec 2012 20:02:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 20:02:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="19982"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 20:02:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 20:02:38 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thn5C-0006n5-Hy;
	Sun, 09 Dec 2012 20:02:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thn5C-0007Vn-Ar;
	Sun, 09 Dec 2012 20:02:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14649-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 20:02:38 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14649: trouble:
	blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14649 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14649/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472
 build-i386                    2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 20:13:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 20:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThnFi-000475-JY; Sun, 09 Dec 2012 20:13:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThnFg-000470-WB
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 20:13:29 +0000
Received: from [85.158.139.83:18190] by server-3.bemta-5.messagelabs.com id
	F6/88-25441-7E0F4C05; Sun, 09 Dec 2012 20:13:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355084003!29071955!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25292 invoked from network); 9 Dec 2012 20:13:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 20:13:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20080"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 20:13:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 20:13:22 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThnFa-0006q2-Le;
	Sun, 09 Dec 2012 20:13:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThnFa-00061C-LA;
	Sun, 09 Dec 2012 20:13:22 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14648-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 20:13:22 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14648: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14648 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14648/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 20:13:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 20:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThnFi-000475-JY; Sun, 09 Dec 2012 20:13:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThnFg-000470-WB
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 20:13:29 +0000
Received: from [85.158.139.83:18190] by server-3.bemta-5.messagelabs.com id
	F6/88-25441-7E0F4C05; Sun, 09 Dec 2012 20:13:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355084003!29071955!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25292 invoked from network); 9 Dec 2012 20:13:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 20:13:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20080"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 20:13:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 20:13:22 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThnFa-0006q2-Le;
	Sun, 09 Dec 2012 20:13:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThnFa-00061C-LA;
	Sun, 09 Dec 2012 20:13:22 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14648-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 20:13:22 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14648: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14648 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14648/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14563
 build-i386                    2 host-install(2)         broken REGR. vs. 14563
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 21:36:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 21:36:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThoXl-0004ts-QI; Sun, 09 Dec 2012 21:36:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1ThoXk-0004tm-As
	for xen-devel@lists.xen.org; Sun, 09 Dec 2012 21:36:12 +0000
Received: from [193.109.254.147:12040] by server-12.bemta-14.messagelabs.com
	id B1/65-00510-B4405C05; Sun, 09 Dec 2012 21:36:11 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355088970!9367378!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28934 invoked from network); 9 Dec 2012 21:36:10 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	9 Dec 2012 21:36:10 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:54270 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1ThobU-0002cS-8B; Sun, 09 Dec 2012 22:40:04 +0100
Date: Sun, 9 Dec 2012 22:36:02 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <341064135.20121209223602@eikelenboom.it>
To: annie li <annie.li@oracle.com>
In-Reply-To: <50C4A8FD.5070407@oracle.com>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Eric Dumazet <edumazet@google.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 9, 2012, 4:06:37 PM, you wrote:

> Hi Ian,

> I guess this issue is similar to this one:
> http://comments.gmane.org/gmane.linux.network/236358. And netfront also 
> needs to reserve some tail room for IP/TCP headers too?

Hi Annie,
Thanks for digging this up!

That does indeed look remarkably similar. It's probably revealed by the other netfront/netback changes in 3.7, because I have never seen it before.
It also seems to take some time before it gets triggered.
But the code in netfront.c is so different that I don't think I'm able to determine the proper size and suggest a fix.



--
Sander


> Thanks
> Annie

> On 2012-12-9 4:14, Sander Eikelenboom wrote:
>> Hi All,
>>
>> I still seem to hit some network warn in 3.7-rc8+.
>> It only seems to appear in guests and not in dom0, it happens every once in a while and in multiple guests.
>>
>> [  778.846089] ------------[ cut here ]------------
>> [  778.846107] WARNING: at net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
>> [  778.846114] Modules linked in:
>> [  778.846121] Pid: 0, comm: swapper/0 Not tainted 3.7.0-rc8-20121208 #1
>> [  778.846127] Call Trace:
>> [  778.846131]  <IRQ>  [<ffffffff8106752a>] warn_slowpath_common+0x7a/0xb0
>> [  778.846143]  [<ffffffff81067575>] warn_slowpath_null+0x15/0x20
>> [  778.846149]  [<ffffffff816a2b29>] skb_try_coalesce+0x359/0x390
>> [  778.846157]  [<ffffffff8175a009>] tcp_try_coalesce+0x69/0xc0
>> [  778.846163]  [<ffffffff8175a0b4>] tcp_queue_rcv+0x54/0x100
>> [  778.846168]  [<ffffffff817602ff>] ? tcp_wfree+0x2f/0x140
>> [  778.846174]  [<ffffffff8175eb7b>] tcp_rcv_established+0x2bb/0x6a0
>> [  778.846180]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>> [  778.846186]  [<ffffffff817668e5>] tcp_v4_do_rcv+0x135/0x480
>> [  778.846192]  [<ffffffff8180bdc2>] ? _raw_spin_lock_nested+0x42/0x50
>> [  778.846198]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>> [  778.846203]  [<ffffffff8176758d>] tcp_v4_rcv+0x95d/0xb10
>> [  778.846209]  [<ffffffff810b21e8>] ? lock_acquire+0xd8/0x100
>> [  778.846216]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>> [  778.846222]  [<ffffffff81743e2a>] ip_local_deliver_finish+0x11a/0x230
>> [  778.846228]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>> [  778.846284]  [<ffffffff81743f78>] ip_local_deliver+0x38/0x80
>> [  778.846291]  [<ffffffff8174353a>] ip_rcv_finish+0x15a/0x630
>> [  778.846297]  [<ffffffff81743c28>] ip_rcv+0x218/0x300
>> [  778.846303]  [<ffffffff816abf4d>] __netif_receive_skb+0x65d/0x8d0
>> [  778.846309]  [<ffffffff816aba35>] ? __netif_receive_skb+0x145/0x8d0
>> [  778.846315]  [<ffffffff810ae48d>] ? trace_hardirqs_on+0xd/0x10
>> [  778.846322]  [<ffffffff810fafc3>] ? free_hot_cold_page+0x1b3/0x1e0
>> [  778.846329]  [<ffffffff816ae4a8>] netif_receive_skb+0x28/0xf0
>> [  778.846334]  [<ffffffff816a3fc3>] ? __pskb_pull_tail+0x253/0x340
>> [  778.846342]  [<ffffffff814b3c75>] xennet_poll+0xad5/0xe10
>> [  778.846349]  [<ffffffff816af256>] net_rx_action+0x136/0x260
>> [  778.846355]  [<ffffffff8106f419>] __do_softirq+0xc9/0x1a0
>> [  778.846361]  [<ffffffff8180e93c>] call_softirq+0x1c/0x30
>> [  778.846368]  [<ffffffff8100fd95>] do_softirq+0x85/0xf0
>> [  778.846373]  [<ffffffff8106f28e>] irq_exit+0x9e/0xd0
>> [  778.846380]  [<ffffffff813463af>] xen_evtchn_do_upcall+0x2f/0x40
>> [  778.846386]  [<ffffffff8180e99e>] xen_do_hypervisor_callback+0x1e/0x30
>> [  778.846391]  <EOI>  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>> [  778.846401]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>> [  778.846408]  [<ffffffff81008850>] ? xen_safe_halt+0x10/0x20
>> [  778.846415]  [<ffffffff810170f0>] ? default_idle+0x40/0x90
>> [  778.846421]  [<ffffffff810174a6>] ? cpu_idle+0x96/0xf0
>> [  778.846428]  [<ffffffff817e4c0c>] ? rest_init+0xbc/0xd0
>> [  778.846433]  [<ffffffff817e4b50>] ? csum_partial_copy_generic+0x170/0x170
>> [  778.846441]  [<ffffffff81ee7be7>] ? start_kernel+0x390/0x39d
>> [  778.846447]  [<ffffffff81ee7677>] ? repair_env_string+0x5b/0x5b
>> [  778.846454]  [<ffffffff81ee7356>] ? x86_64_start_reservations+0x131/0x136
>> [  778.846461]  [<ffffffff81eea915>] ? xen_start_kernel+0x54e/0x550
>> [  778.846467] ---[ end trace d13d814dbabaca0e ]---
>>
>>
>> --
>> Sander
>>
>>   



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 21:36:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 21:36:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThoXl-0004ts-QI; Sun, 09 Dec 2012 21:36:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1ThoXk-0004tm-As
	for xen-devel@lists.xen.org; Sun, 09 Dec 2012 21:36:12 +0000
Received: from [193.109.254.147:12040] by server-12.bemta-14.messagelabs.com
	id B1/65-00510-B4405C05; Sun, 09 Dec 2012 21:36:11 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355088970!9367378!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28934 invoked from network); 9 Dec 2012 21:36:10 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	9 Dec 2012 21:36:10 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:54270 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1ThobU-0002cS-8B; Sun, 09 Dec 2012 22:40:04 +0100
Date: Sun, 9 Dec 2012 22:36:02 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <341064135.20121209223602@eikelenboom.it>
To: annie li <annie.li@oracle.com>
In-Reply-To: <50C4A8FD.5070407@oracle.com>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Eric Dumazet <edumazet@google.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 9, 2012, 4:06:37 PM, you wrote:

> Hi Ian,

> I guess this issue is similar to this one:
> http://comments.gmane.org/gmane.linux.network/236358. Does netfront also
> need to reserve some tail room for the IP/TCP headers?

Hi Annie,
Thanks for digging this up!

That does indeed look remarkably similar. It is probably revealed by the other netfront/netback changes in 3.7, because I have never seen it before.
It also seems to take some time before it gets triggered.
The code in netfront.c is different enough, though, that I don't think I can determine the proper size and suggest a fix myself.



--
Sander


> Thanks
> Annie

> On 2012-12-9 4:14, Sander Eikelenboom wrote:
>> Hi All,
>>
>> I still seem to hit some network warn in 3.7-rc8+.
>> It only seems to appear in guests and not in dom0, it happens every once in a while and in multiple guests.
>>
>> [  778.846089] ------------[ cut here ]------------
>> [  778.846107] WARNING: at net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
>> [  778.846114] Modules linked in:
>> [  778.846121] Pid: 0, comm: swapper/0 Not tainted 3.7.0-rc8-20121208 #1
>> [  778.846127] Call Trace:
>> [  778.846131]  <IRQ>  [<ffffffff8106752a>] warn_slowpath_common+0x7a/0xb0
>> [  778.846143]  [<ffffffff81067575>] warn_slowpath_null+0x15/0x20
>> [  778.846149]  [<ffffffff816a2b29>] skb_try_coalesce+0x359/0x390
>> [  778.846157]  [<ffffffff8175a009>] tcp_try_coalesce+0x69/0xc0
>> [  778.846163]  [<ffffffff8175a0b4>] tcp_queue_rcv+0x54/0x100
>> [  778.846168]  [<ffffffff817602ff>] ? tcp_wfree+0x2f/0x140
>> [  778.846174]  [<ffffffff8175eb7b>] tcp_rcv_established+0x2bb/0x6a0
>> [  778.846180]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>> [  778.846186]  [<ffffffff817668e5>] tcp_v4_do_rcv+0x135/0x480
>> [  778.846192]  [<ffffffff8180bdc2>] ? _raw_spin_lock_nested+0x42/0x50
>> [  778.846198]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>> [  778.846203]  [<ffffffff8176758d>] tcp_v4_rcv+0x95d/0xb10
>> [  778.846209]  [<ffffffff810b21e8>] ? lock_acquire+0xd8/0x100
>> [  778.846216]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>> [  778.846222]  [<ffffffff81743e2a>] ip_local_deliver_finish+0x11a/0x230
>> [  778.846228]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>> [  778.846284]  [<ffffffff81743f78>] ip_local_deliver+0x38/0x80
>> [  778.846291]  [<ffffffff8174353a>] ip_rcv_finish+0x15a/0x630
>> [  778.846297]  [<ffffffff81743c28>] ip_rcv+0x218/0x300
>> [  778.846303]  [<ffffffff816abf4d>] __netif_receive_skb+0x65d/0x8d0
>> [  778.846309]  [<ffffffff816aba35>] ? __netif_receive_skb+0x145/0x8d0
>> [  778.846315]  [<ffffffff810ae48d>] ? trace_hardirqs_on+0xd/0x10
>> [  778.846322]  [<ffffffff810fafc3>] ? free_hot_cold_page+0x1b3/0x1e0
>> [  778.846329]  [<ffffffff816ae4a8>] netif_receive_skb+0x28/0xf0
>> [  778.846334]  [<ffffffff816a3fc3>] ? __pskb_pull_tail+0x253/0x340
>> [  778.846342]  [<ffffffff814b3c75>] xennet_poll+0xad5/0xe10
>> [  778.846349]  [<ffffffff816af256>] net_rx_action+0x136/0x260
>> [  778.846355]  [<ffffffff8106f419>] __do_softirq+0xc9/0x1a0
>> [  778.846361]  [<ffffffff8180e93c>] call_softirq+0x1c/0x30
>> [  778.846368]  [<ffffffff8100fd95>] do_softirq+0x85/0xf0
>> [  778.846373]  [<ffffffff8106f28e>] irq_exit+0x9e/0xd0
>> [  778.846380]  [<ffffffff813463af>] xen_evtchn_do_upcall+0x2f/0x40
>> [  778.846386]  [<ffffffff8180e99e>] xen_do_hypervisor_callback+0x1e/0x30
>> [  778.846391]  <EOI>  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>> [  778.846401]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>> [  778.846408]  [<ffffffff81008850>] ? xen_safe_halt+0x10/0x20
>> [  778.846415]  [<ffffffff810170f0>] ? default_idle+0x40/0x90
>> [  778.846421]  [<ffffffff810174a6>] ? cpu_idle+0x96/0xf0
>> [  778.846428]  [<ffffffff817e4c0c>] ? rest_init+0xbc/0xd0
>> [  778.846433]  [<ffffffff817e4b50>] ? csum_partial_copy_generic+0x170/0x170
>> [  778.846441]  [<ffffffff81ee7be7>] ? start_kernel+0x390/0x39d
>> [  778.846447]  [<ffffffff81ee7677>] ? repair_env_string+0x5b/0x5b
>> [  778.846454]  [<ffffffff81ee7356>] ? x86_64_start_reservations+0x131/0x136
>> [  778.846461]  [<ffffffff81eea915>] ? xen_start_kernel+0x54e/0x550
>> [  778.846467] ---[ end trace d13d814dbabaca0e ]---
>>
>>
>> --
>> Sander
>>
>>   



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 21:55:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 21:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThoqJ-0005Qs-9F; Sun, 09 Dec 2012 21:55:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThoqH-0005Qm-D0
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 21:55:21 +0000
Received: from [85.158.139.83:15039] by server-2.bemta-5.messagelabs.com id
	03/2E-16162-8C805C05; Sun, 09 Dec 2012 21:55:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355090117!24766180!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11915 invoked from network); 9 Dec 2012 21:55:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 21:55:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20530"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 21:55:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 21:54:59 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thopv-0007Lj-LW;
	Sun, 09 Dec 2012 21:54:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thopv-0008Qa-7Q;
	Sun, 09 Dec 2012 21:54:59 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14642-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 21:54:59 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14642: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14642 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14642/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14565
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14565
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14565
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14565
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken REGR. vs. 14565
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14565
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14565
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14565
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)   broken REGR. vs. 14565

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14565
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 22:09:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 22:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thp31-0005xY-EN; Sun, 09 Dec 2012 22:08:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thp2z-0005xT-5D
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 22:08:29 +0000
Received: from [193.109.254.147:27001] by server-14.bemta-14.messagelabs.com
	id 46/BA-14517-CDB05C05; Sun, 09 Dec 2012 22:08:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1355090907!9972152!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6990 invoked from network); 9 Dec 2012 22:08:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 22:08:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20578"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 22:08:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 22:08:26 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thp2w-0007Pn-LK;
	Sun, 09 Dec 2012 22:08:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thp2w-0005kp-A5;
	Sun, 09 Dec 2012 22:08:26 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14650-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 22:08:26 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14650: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14650 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14650/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 22:21:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 22:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThpEp-0006I6-RK; Sun, 09 Dec 2012 22:20:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThpEo-0006Hz-66
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 22:20:42 +0000
Received: from [85.158.139.83:33291] by server-3.bemta-5.messagelabs.com id
	F0/6C-25441-9BE05C05; Sun, 09 Dec 2012 22:20:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355091639!25079410!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9270 invoked from network); 9 Dec 2012 22:20:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 22:20:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20647"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 22:20:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 22:20:39 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThpEl-0007Th-5A;
	Sun, 09 Dec 2012 22:20:39 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThpEl-0002Y4-3z;
	Sun, 09 Dec 2012 22:20:39 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14651-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 22:20:39 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14651: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14651 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14651/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 22:21:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 22:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThpEp-0006I6-RK; Sun, 09 Dec 2012 22:20:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThpEo-0006Hz-66
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 22:20:42 +0000
Received: from [85.158.139.83:33291] by server-3.bemta-5.messagelabs.com id
	F0/6C-25441-9BE05C05; Sun, 09 Dec 2012 22:20:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355091639!25079410!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9270 invoked from network); 9 Dec 2012 22:20:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 22:20:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20647"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 22:20:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 22:20:39 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThpEl-0007Th-5A;
	Sun, 09 Dec 2012 22:20:39 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThpEl-0002Y4-3z;
	Sun, 09 Dec 2012 22:20:39 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14651-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 22:20:39 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14651: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14651 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14651/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 09 22:33:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Dec 2012 22:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThpQU-0006XJ-Pf; Sun, 09 Dec 2012 22:32:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThpQT-0006X9-OL
	for xen-devel@lists.xensource.com; Sun, 09 Dec 2012 22:32:46 +0000
Received: from [85.158.139.211:51277] by server-2.bemta-5.messagelabs.com id
	2C/BF-16162-C8115C05; Sun, 09 Dec 2012 22:32:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355092364!19766615!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2910 invoked from network); 9 Dec 2012 22:32:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Dec 2012 22:32:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,247,1355097600"; 
   d="scan'208";a="20733"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Dec 2012 22:32:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 9 Dec 2012 22:32:43 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThpQR-0007XG-Ht;
	Sun, 09 Dec 2012 22:32:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThpQR-0001Wz-5r;
	Sun, 09 Dec 2012 22:32:43 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14652-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Dec 2012 22:32:43 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14652: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14652 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14652/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 14565
 build-i386                    2 host-install(2)         broken REGR. vs. 14565
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 00:52:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 00:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thrb5-00087G-Iu; Mon, 10 Dec 2012 00:51:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thrb4-00087B-81
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 00:51:50 +0000
Received: from [85.158.139.211:3945] by server-4.bemta-5.messagelabs.com id
	9C/7F-14693-52235C05; Mon, 10 Dec 2012 00:51:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355100708!18279126!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16586 invoked from network); 10 Dec 2012 00:51:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 00:51:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,248,1355097600"; 
   d="scan'208";a="21407"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 00:51:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 00:51:48 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thrb2-0008Cz-5j;
	Mon, 10 Dec 2012 00:51:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thrb2-0000QX-4U;
	Mon, 10 Dec 2012 00:51:48 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14640-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 00:51:48 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14640: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14640 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14640/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14566
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14566
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14566
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-qemut-win-vcpus1  3 host-install(3)  broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14566
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-i386-win-vcpus1    3 host-install(3)         broken REGR. vs. 14566
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14566
 test-amd64-amd64-xl-win       3 host-install(3)         broken REGR. vs. 14566
 test-amd64-amd64-xl-qemut-win  3 host-install(3)        broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566
 build-amd64                  2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-pvops            2 host-install(2) broken in 14587 REGR. vs. 14566
 build-amd64-oldkern          2 host-install(2) broken in 14587 REGR. vs. 14566
 build-i386-pvops             2 host-install(2) broken in 14583 REGR. vs. 14566
 build-i386                   2 host-install(2) broken in 14583 REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 14574
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 14574
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 14635
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 14583

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14566
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14587 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14587 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14587 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14587 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14587 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14587 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-win         1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-win            1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 14583 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 14583 n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   broken  
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          broken  
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                broken  
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      broken  
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 01:30:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 01:30:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThsC2-0003mg-8q; Mon, 10 Dec 2012 01:30:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThsC0-0003mb-86
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 01:30:00 +0000
Received: from [85.158.139.211:43237] by server-6.bemta-5.messagelabs.com id
	4A/61-30498-61B35C05; Mon, 10 Dec 2012 01:29:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355102997!18881598!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18808 invoked from network); 10 Dec 2012 01:29:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 01:29:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,248,1355097600"; 
   d="scan'208";a="21615"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 01:29:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 01:29:56 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThsBw-0008P5-SU;
	Mon, 10 Dec 2012 01:29:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThsBw-00014d-Rt;
	Mon, 10 Dec 2012 01:29:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14653-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 01:29:56 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14653: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14653 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14653/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 02:02:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 02:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thsh9-0004NS-4R; Mon, 10 Dec 2012 02:02:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thsh7-0004NN-Iw
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 02:02:09 +0000
Received: from [85.158.138.51:25983] by server-6.bemta-3.messagelabs.com id
	5D/53-28265-0A245C05; Mon, 10 Dec 2012 02:02:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355104927!28206267!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11166 invoked from network); 10 Dec 2012 02:02:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 02:02:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,248,1355097600"; 
   d="scan'208";a="21795"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 02:02:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 02:02:07 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thsh5-00007C-7m;
	Mon, 10 Dec 2012 02:02:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thsh5-0002NA-0T;
	Mon, 10 Dec 2012 02:02:07 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14654-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 02:02:07 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14654: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14654 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14654/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 build-i386                    2 host-install(2)         broken REGR. vs. 14566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win   1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
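
    The fix described above boils down to bounding a guest-controlled
    order before it is used as a loop extent. A minimal sketch of that
    kind of check, in C -- the MAX_ORDER value and the function name here
    are illustrative assumptions, not Xen's actual definitions:

    ```c
    #include <assert.h>

    /* Illustrative bound; Xen's real MAX_ORDER is configuration-dependent. */
    #define MAX_ORDER 20

    /* Reject guest-specified extent orders that would make the per-extent
     * loop run for (1 << order) iterations with an attacker-chosen order. */
    int extent_order_ok(unsigned int order)
    {
        return order <= MAX_ORDER;
    }
    ```

    With such a check in populate_physmap(), decrease_reservation(), and
    memory_exchange(), a hypercall passing e.g. order 0x7fffffff is
    refused up front instead of looping over 2^order pages.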
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 02:15:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 02:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thsu2-0004YS-Qh; Mon, 10 Dec 2012 02:15:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1Thsu1-0004YN-Fi
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 02:15:29 +0000
Received: from [85.158.139.83:40776] by server-16.bemta-5.messagelabs.com id
	97/91-09208-0C545C05; Mon, 10 Dec 2012 02:15:28 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355105726!17829206!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTM4MTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26966 invoked from network); 10 Dec 2012 02:15:27 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Dec 2012 02:15:27 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBA2FOHG005465
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Dec 2012 02:15:25 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBA2FNtW001452
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Dec 2012 02:15:24 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBA2FNWV004547; Sun, 9 Dec 2012 20:15:23 -0600
Received: from [192.168.1.100] (/106.3.103.160)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 09 Dec 2012 18:15:23 -0800
Message-ID: <50C545C3.4000604@oracle.com>
Date: Mon, 10 Dec 2012 10:15:31 +0800
From: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50C161AD.9020803@oracle.com>
	<50C1E0A502000078000AEECA@nat28.tlf.novell.com>
In-Reply-To: <50C1E0A502000078000AEECA@nat28.tlf.novell.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Joe Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] rombios: add support for special CHS layout (spt=32)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2012-12-07 19:27, Jan Beulich wrote:
>>>> On 07.12.12 at 04:25, DuanZhenzhong <zhenzhong.duan@oracle.com> wrote:
>> When booting a Windows VM whose disk image was copied from a SCSI disk
>> with dd, it fails with "Error Loading Operating System". The root cause
>> is that rombios doesn't simulate the disk's logical CHS geometry
>> properly; the real BIOS uses spt=32 here.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
>> --- a/tools/firmware/rombios/rombios.c	2011-12-09 00:50:28.000000000 +0800
>> +++ b/tools/firmware/rombios/rombios.c	2012-11-20 09:42:50.000000000 +0800
>> @@ -2674,6 +2674,8 @@ void ata_detect( )
>>         Bit32u sectors_low, sectors_high;
>>         Bit16u cylinders, heads, spt, blksize;
>>         Bit8u  translation, removable, mode;
>> +      Bit8u i;
>> +      Bit8u *p;
>>   
>>         // default mode to PIO16
>>         mode = ATA_MODE_PIO16;
>> @@ -2738,14 +2740,32 @@ void ata_detect( )
>>           case ATA_TRANSLATION_NONE:
>>             break;
>>           case ATA_TRANSLATION_LBA:
>> -          spt = 63;
>> -          sectors_low /= 63;
>> -          heads = sectors_low / 1024;
>> -          if (heads>128) heads = 255;
>> -          else if (heads>64) heads = 128;
>> -          else if (heads>32) heads = 64;
>> -          else if (heads>16) heads = 32;
>> -          else heads=16;
>> +          spt = heads = 0;
>> +          if (ata_cmd_data_in(device,ATA_CMD_READ_SECTORS, 1, 0, 0, 1, 0L, 0L, get_SS(),buffer) !=0 )
>> +            BX_PANIC("ata-detect: Failed to read first sector\n");
>> +          for(i=0; i<4; i++) {
>> +            p = buffer + 0x1be + i * 16;
>> +            if (read_dword(get_SS(), p+12) && read_byte(get_SS(), p+5)) {
>> +              /* We make the assumption that the partition terminates on
>> +                 a cylinder boundary */
> Which certainly isn't a generally correct assumption, so I strongly
> recommend against basing any decisions on that.
>
> Jan
Thanks for the comment. Are we the first to hit this bug?
Is there any workaround for this issue?
zduan
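
For reference, the LBA-assisted translation that the patch replaces, and
the MBR fields it consults instead, can be sketched in standalone C.
This is a sketch, not the rombios source: function names are
illustrative, and only the arithmetic mirrors the quoted hunk.

```c
#include <stdint.h>

/* Classic LBA-assisted translation: fixed spt=63, heads picked by
 * capacity thresholds, cylinders derived from what is left.  A disk
 * imaged from hardware whose real geometry used spt=32 gets a
 * different logical geometry from this, so CHS-addressed boot code
 * ends up reading the wrong sectors. */
void lba_assisted_chs(uint32_t total_sectors, uint16_t *cylinders,
                      uint16_t *heads, uint16_t *spt)
{
    uint32_t h;

    *spt = 63;
    total_sectors /= 63;
    h = total_sectors / 1024;
    if (h > 128)      h = 255;
    else if (h > 64)  h = 128;
    else if (h > 32)  h = 64;
    else if (h > 16)  h = 32;
    else              h = 16;
    *heads = (uint16_t)h;
    *cylinders = (uint16_t)(total_sectors / h);
}

/* MBR partition entries are 16 bytes each, starting at offset 0x1be of
 * the first sector.  Byte 5 holds the ending head and bytes 12..15 the
 * little-endian sector count -- the two fields the patch tests before
 * inferring geometry from the partition table. */
uint8_t mbr_end_head(const uint8_t *entry)
{
    return entry[5];
}

uint32_t mbr_num_sectors(const uint8_t *entry)
{
    return (uint32_t)entry[12] | ((uint32_t)entry[13] << 8) |
           ((uint32_t)entry[14] << 16) | ((uint32_t)entry[15] << 24);
}
```

For an 8 GiB disk (16777216 sectors) the heuristic yields spt=63,
heads=255; a dd-copied disk whose MBR was written against spt=32 then
disagrees with the BIOS-reported geometry, which is the failure mode
described in the patch.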

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 03:16:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 03:16:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thtq1-0005I2-KQ; Mon, 10 Dec 2012 03:15:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thtq0-0005Hx-7L
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 03:15:24 +0000
Received: from [85.158.139.211:34249] by server-10.bemta-5.messagelabs.com id
	CE/D9-13383-BC355C05; Mon, 10 Dec 2012 03:15:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355109321!17181851!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25635 invoked from network); 10 Dec 2012 03:15:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 03:15:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,248,1355097600"; 
   d="scan'208";a="22035"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 03:15:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 03:15:21 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Thtpx-0000UC-22;
	Mon, 10 Dec 2012 03:15:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Thtpx-0002FN-1P;
	Mon, 10 Dec 2012 03:15:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14656-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 03:15:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14656: trouble: blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14656 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14656/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14565
 build-amd64                   2 host-install(2)         broken REGR. vs. 14565
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f96a0cda1216
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 472 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 03:23:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 03:23:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThtxR-0005R0-IU; Mon, 10 Dec 2012 03:23:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThtxQ-0005Qv-45
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 03:23:04 +0000
Received: from [85.158.143.35:18115] by server-2.bemta-4.messagelabs.com id
	94/4D-30861-79555C05; Mon, 10 Dec 2012 03:23:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355109781!12244253!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32761 invoked from network); 10 Dec 2012 03:23:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 03:23:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,249,1355097600"; 
   d="scan'208";a="22078"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 03:23:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 03:23:00 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThtxM-0000Wi-ND;
	Mon, 10 Dec 2012 03:23:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThtxM-0001Qd-Mc;
	Mon, 10 Dec 2012 03:23:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14657-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 03:23:00 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14657: trouble:
	blocked/broken/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14657 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14657/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14472
 build-amd64                   2 host-install(2)         broken REGR. vs. 14472
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14472

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemuu-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemuu-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemuu-win-vcpus1                          blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-qemuu-win                                   blocked 
 test-amd64-i386-qemuu-win                                    blocked 
 test-amd64-amd64-xl-qemuu-win                                blocked 
 test-amd64-i386-xend-qemuu-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 03:37:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 03:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThuAi-0005fy-52; Mon, 10 Dec 2012 03:36:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThuAh-0005ft-5W
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 03:36:47 +0000
Received: from [85.158.143.35:40099] by server-1.bemta-4.messagelabs.com id
	BB/E3-28401-EC855C05; Mon, 10 Dec 2012 03:36:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355110604!5759534!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24495 invoked from network); 10 Dec 2012 03:36:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 03:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,249,1355097600"; 
   d="scan'208";a="22130"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 03:36:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 03:36:43 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThuAd-0000at-UA;
	Mon, 10 Dec 2012 03:36:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThuAd-0006JE-SS;
	Mon, 10 Dec 2012 03:36:43 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14643-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 03:36:43 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14643: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14643 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14643/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu  3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 14481
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14481
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-amd64-xl           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 14481
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 14481
 test-amd64-i386-pv            3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xl-credit2    3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14481
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 14481
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 14481
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 14481
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 14481
 test-amd64-i386-xl-win7-amd64  3 host-install(3)        broken REGR. vs. 14481
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)       broken REGR. vs. 14481
 test-amd64-i386-win           3 host-install(3)         broken REGR. vs. 14481
 test-amd64-amd64-xl-qemut-win7-amd64  3 host-install(3) broken REGR. vs. 14481
 test-amd64-i386-qemut-win-vcpus1  3 host-install(3)     broken REGR. vs. 14481
 test-amd64-i386-xl-win-vcpus1  3 host-install(3)        broken REGR. vs. 14481
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-amd64-qemut-win    3 host-install(3)         broken REGR. vs. 14481
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14481

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14481
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14481
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14481

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-amd64-amd64-qemut-win                                   broken  
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 14481
 test-amd64-i386-xend-winxpsp3  3 host-install(3)        broken REGR. vs. 14481

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14481
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 14481
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 14481

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             broken  
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          broken  
 test-amd64-amd64-qemut-win                                   broken  
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                broken  
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Wed Dec 5 18:40:31 2012 -0800

    Linux 3.0.55

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 05:47:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 05:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwCf-0006RA-28; Mon, 10 Dec 2012 05:46:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ThwCd-0006R5-7L
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 05:46:55 +0000
Received: from [85.158.139.83:54350] by server-7.bemta-5.messagelabs.com id
	92/1F-08009-E4775C05; Mon, 10 Dec 2012 05:46:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355118412!25222182!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16844 invoked from network); 10 Dec 2012 05:46:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 05:46:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,249,1355097600"; 
   d="scan'208";a="22875"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 05:46:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 05:46:52 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThwCa-0001Ko-0G;
	Mon, 10 Dec 2012 05:46:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThwCZ-0007Cy-W5;
	Mon, 10 Dec 2012 05:46:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14655-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 05:46:51 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14655: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14655 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14655/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 14563
 build-amd64                   2 host-install(2)         broken REGR. vs. 14563
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14563
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14563
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14563
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14563
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14563
 test-i386-i386-xl-qemut-win   3 host-install(3)         broken REGR. vs. 14563
 test-i386-i386-xl-qemut-winxpsp3  3 host-install(3)     broken REGR. vs. 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  broken  
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    While touching this code, also switch to the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way that
    assumes an earlier call to hypercall_create_continuation() took place.
    
    While touching this code, also restructure it slightly to improve
    readability and switch to the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbA-0006mQ-T7; Mon, 10 Dec 2012 06:12:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb9-0006lZ-HH
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:15 +0000
Received: from [85.158.137.99:19895] by server-5.bemta-3.messagelabs.com id
	CB/23-26311-E3D75C05; Mon, 10 Dec 2012 06:12:14 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!3
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27586 invoked from network); 10 Dec 2012 06:12:14 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:14 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:24 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997332"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:11 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:19 +0800
Message-Id: <1355162243-11857-8-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 07/11] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the GUEST_PDPTR registers need to be synced on each
virtual vmentry.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ab68b52..3fc128b 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -824,9 +824,15 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_READ_SHADOW);
     vvmcs_to_shadow(vvmcs, CR4_READ_SHADOW);
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
-    vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+                    (v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thwb3-0006jv-LB; Mon, 10 Dec 2012 06:12:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb2-0006jk-Np
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:09 +0000
Received: from [193.109.254.147:6196] by server-9.bemta-14.messagelabs.com id
	DC/D8-30773-83D75C05; Mon, 10 Dec 2012 06:12:08 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355119924!2300111!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4648 invoked from network); 10 Dec 2012 06:12:05 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:05 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:15 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997292"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:02 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:13 +0800
Message-Id: <1355162243-11857-2-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 01/11] nestedhap: Change hostcr3 and p2m->cr3 to
	meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

VMX doesn't have the concept of a host CR3 for the nested p2m; only SVM
does, so rename these to more neutral terms.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c             |    6 +++---
 xen/arch/x86/hvm/svm/svm.c         |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    2 +-
 xen/arch/x86/mm/hap/nested_hap.c   |   15 ++++++++-------
 xen/arch/x86/mm/mm-locks.h         |    2 +-
 xen/arch/x86/mm/p2m.c              |   26 +++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h      |    4 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +-
 xen/include/asm-x86/p2m.h          |   16 ++++++++--------
 10 files changed, 39 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b6026d7..85bc9be 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4536,10 +4536,10 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v)
     return -EOPNOTSUPP;
 }
 
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v)
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v)
 {
-    if (hvm_funcs.nhvm_vcpu_hostcr3)
-        return hvm_funcs.nhvm_vcpu_hostcr3(v);
+    if (hvm_funcs.nhvm_vcpu_p2m_base)
+        return hvm_funcs.nhvm_vcpu_p2m_base(v);
     return -EOPNOTSUPP;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 4c4abfc..6c469ec 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2003,7 +2003,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vcpu_vmexit = nsvm_vcpu_vmexit_inject,
     .nhvm_vcpu_vmexit_trap = nsvm_vcpu_vmexit_trap,
     .nhvm_vcpu_guestcr3 = nsvm_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3 = nsvm_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base = nsvm_vcpu_hostcr3,
     .nhvm_vcpu_asid = nsvm_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9fb9562..47d8ca6 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1504,7 +1504,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3    = nvmx_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b005816..6d1a736 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -94,7 +94,7 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
     return 0;
 }
 
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v)
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
     /* TODO */
     ASSERT(0);
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 317875d..f9a5edc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -48,9 +48,10 @@
  *    1. If #NPF is from L1 guest, then we crash the guest VM (same as old 
  *       code)
  *    2. If #NPF is from L2 guest, then we continue from (3)
- *    3. Get h_cr3 from L1 guest. Map h_cr3 into L0 hypervisor address space.
- *    4. Walk the h_cr3 page table
- *    5.    - if not present, then we inject #NPF back to L1 guest and 
+ *    3. Get np2m base from L1 guest. Map np2m base into L0 hypervisor address space.
+ *    4. Walk the np2m's  page table
+ *    5.    - if not present or permission check failure, then we inject #NPF back to 
+ *    L1 guest and 
  *            re-launch L1 guest (L1 guest will either treat this #NPF as MMIO,
  *            or fix its p2m table for L2 guest)
  *    6.    - if present, then we will get the a new translated value L1-GPA 
@@ -89,7 +90,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
     if (old_flags & _PAGE_PRESENT)
         flush_tlb_mask(p2m->dirty_cpumask);
-    
+
     paging_unlock(d);
 }
 
@@ -110,7 +111,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     /* If this p2m table has been flushed or recycled under our feet, 
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
-    if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
+    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
     {
         unsigned long gfn, mask;
         mfn_t mfn;
@@ -186,7 +187,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
     
-    nested_cr3 = nhvm_vcpu_hostcr3(v);
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
 
     pfec = PFEC_user_mode | PFEC_page_present;
     if (access_w)
@@ -221,7 +222,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 3700e32..1817f81 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -249,7 +249,7 @@ declare_mm_order_constraint(per_page_sharing)
  * A per-domain lock that protects the mapping from nested-CR3 to 
  * nested-p2m.  In particular it covers:
  * - the array of nested-p2m tables, and all LRU activity therein; and
- * - setting the "cr3" field of any p2m table to a non-CR3_EADDR value. 
+ * - setting the "cr3" field of any p2m table to a non-P2M_BASE_EADDR value. 
  *   (i.e. assigning a p2m table to be the shadow of that cr3 */
 
 /* PoD lock (per-p2m-table)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e351942..62c2d78 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -81,7 +81,7 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
 
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ept_p2m_init(p2m);
@@ -1445,7 +1445,7 @@ p2m_flush_table(struct p2m_domain *p2m)
     ASSERT(page_list_empty(&p2m->pod.single));
 
     /* This is no longer a valid nested p2m for any address space */
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
     
     /* Zap the top level of the trie */
     top = mfn_to_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
@@ -1483,7 +1483,7 @@ p2m_flush_nestedp2m(struct domain *d)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
+p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
     /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
      * this may change within the loop by an other (v)cpu.
@@ -1492,8 +1492,8 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     struct domain *d;
     struct p2m_domain *p2m;
 
-    /* Mask out low bits; this avoids collisions with CR3_EADDR */
-    cr3 &= ~(0xfffull);
+    /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
+    np2m_base &= ~(0xfffull);
 
     if (nv->nv_flushp2m && nv->nv_p2m) {
         nv->nv_p2m = NULL;
@@ -1505,14 +1505,14 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR )
+        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             nv->nv_flushp2m = 0;
             p2m_getlru_nestedp2m(d, p2m);
             nv->nv_p2m = p2m;
-            if (p2m->cr3 == CR3_EADDR)
+            if (p2m->np2m_base == P2M_BASE_EADDR)
                 hvm_asid_flush_vcpu(v);
-            p2m->cr3 = cr3;
+            p2m->np2m_base = np2m_base;
             cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
@@ -1527,7 +1527,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     nv->nv_p2m = p2m;
-    p2m->cr3 = cr3;
+    p2m->np2m_base = np2m_base;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
@@ -1543,7 +1543,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1561,15 +1561,15 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint32_t pfec_21 = *pfec;
-        uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
+        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, ncr3);
+        p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
+        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
                                        gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index fdb0f58..d3535b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -170,7 +170,7 @@ struct hvm_function_table {
                                 uint64_t exitcode);
     int (*nhvm_vcpu_vmexit_trap)(struct vcpu *v, struct hvm_trap *trap);
     uint64_t (*nhvm_vcpu_guestcr3)(struct vcpu *v);
-    uint64_t (*nhvm_vcpu_hostcr3)(struct vcpu *v);
+    uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v);
     uint32_t (*nhvm_vcpu_asid)(struct vcpu *v);
     int (*nhvm_vmcx_guest_intercepts_trap)(struct vcpu *v, 
                                unsigned int trapnr, int errcode);
@@ -475,7 +475,7 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v);
 /* returns l1 guest's cr3 that points to the page table used to
  * translate l2 guest physical address to l1 guest physical address.
  */
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v);
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v);
 /* returns the asid number l1 guest wants to use to run the l2 guest */
 uint32_t nhvm_vcpu_asid(struct vcpu *v);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..d97011d 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -99,7 +99,7 @@ int nvmx_vcpu_initialise(struct vcpu *v);
 void nvmx_vcpu_destroy(struct vcpu *v);
 int nvmx_vcpu_reset(struct vcpu *v);
 uint64_t nvmx_vcpu_guestcr3(struct vcpu *v);
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v);
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v);
 uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 907a817..1807ad6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -197,17 +197,17 @@ struct p2m_domain {
 
     struct domain     *domain;   /* back pointer to domain */
 
-    /* Nested p2ms only: nested-CR3 value that this p2m shadows. 
-     * This can be cleared to CR3_EADDR under the per-p2m lock but
+    /* Nested p2ms only: nested p2m base value that this p2m shadows. 
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
      * needs both the per-p2m lock and the per-domain nestedp2m lock
      * to set it to any other value. */
-#define CR3_EADDR     (~0ULL)
-    uint64_t           cr3;
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
      * The host p2m hasolds the head of the list and the np2ms are 
      * threaded on in LRU order. */
-    struct list_head np2m_list; 
+    struct list_head   np2m_list; 
 
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
@@ -282,11 +282,11 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

VMX has no concept of a host cr3 for the nested p2m;
only SVM does, so change the naming to neutral words.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c             |    6 +++---
 xen/arch/x86/hvm/svm/svm.c         |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    2 +-
 xen/arch/x86/mm/hap/nested_hap.c   |   15 ++++++++-------
 xen/arch/x86/mm/mm-locks.h         |    2 +-
 xen/arch/x86/mm/p2m.c              |   26 +++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h      |    4 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +-
 xen/include/asm-x86/p2m.h          |   16 ++++++++--------
 10 files changed, 39 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b6026d7..85bc9be 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4536,10 +4536,10 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v)
     return -EOPNOTSUPP;
 }
 
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v)
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v)
 {
-    if (hvm_funcs.nhvm_vcpu_hostcr3)
-        return hvm_funcs.nhvm_vcpu_hostcr3(v);
+    if (hvm_funcs.nhvm_vcpu_p2m_base)
+        return hvm_funcs.nhvm_vcpu_p2m_base(v);
     return -EOPNOTSUPP;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 4c4abfc..6c469ec 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2003,7 +2003,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vcpu_vmexit = nsvm_vcpu_vmexit_inject,
     .nhvm_vcpu_vmexit_trap = nsvm_vcpu_vmexit_trap,
     .nhvm_vcpu_guestcr3 = nsvm_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3 = nsvm_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base = nsvm_vcpu_hostcr3,
     .nhvm_vcpu_asid = nsvm_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9fb9562..47d8ca6 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1504,7 +1504,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3    = nvmx_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b005816..6d1a736 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -94,7 +94,7 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
     return 0;
 }
 
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v)
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
     /* TODO */
     ASSERT(0);
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 317875d..f9a5edc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -48,9 +48,10 @@
  *    1. If #NPF is from L1 guest, then we crash the guest VM (same as old 
  *       code)
  *    2. If #NPF is from L2 guest, then we continue from (3)
- *    3. Get h_cr3 from L1 guest. Map h_cr3 into L0 hypervisor address space.
- *    4. Walk the h_cr3 page table
- *    5.    - if not present, then we inject #NPF back to L1 guest and 
+ *    3. Get np2m base from L1 guest. Map np2m base into L0 hypervisor address space.
+ *    4. Walk the np2m's page table
+ *    5.    - if not present, or the permission check fails, then we inject
+ *            #NPF back to the L1 guest and
  *            re-launch L1 guest (L1 guest will either treat this #NPF as MMIO,
  *            or fix its p2m table for L2 guest)
  *    6.    - if present, then we will get the a new translated value L1-GPA 
@@ -89,7 +90,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
     if (old_flags & _PAGE_PRESENT)
         flush_tlb_mask(p2m->dirty_cpumask);
-    
+
     paging_unlock(d);
 }
 
@@ -110,7 +111,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     /* If this p2m table has been flushed or recycled under our feet, 
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
-    if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
+    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
     {
         unsigned long gfn, mask;
         mfn_t mfn;
@@ -186,7 +187,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
     
-    nested_cr3 = nhvm_vcpu_hostcr3(v);
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
 
     pfec = PFEC_user_mode | PFEC_page_present;
     if (access_w)
@@ -221,7 +222,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 3700e32..1817f81 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -249,7 +249,7 @@ declare_mm_order_constraint(per_page_sharing)
  * A per-domain lock that protects the mapping from nested-CR3 to 
  * nested-p2m.  In particular it covers:
  * - the array of nested-p2m tables, and all LRU activity therein; and
- * - setting the "cr3" field of any p2m table to a non-CR3_EADDR value. 
+ * - setting the "cr3" field of any p2m table to a non-P2M_BASE_EADDR value. 
  *   (i.e. assigning a p2m table to be the shadow of that cr3 */
 
 /* PoD lock (per-p2m-table)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e351942..62c2d78 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -81,7 +81,7 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
 
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ept_p2m_init(p2m);
@@ -1445,7 +1445,7 @@ p2m_flush_table(struct p2m_domain *p2m)
     ASSERT(page_list_empty(&p2m->pod.single));
 
     /* This is no longer a valid nested p2m for any address space */
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
     
     /* Zap the top level of the trie */
     top = mfn_to_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
@@ -1483,7 +1483,7 @@ p2m_flush_nestedp2m(struct domain *d)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
+p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
     /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
      * this may change within the loop by an other (v)cpu.
@@ -1492,8 +1492,8 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     struct domain *d;
     struct p2m_domain *p2m;
 
-    /* Mask out low bits; this avoids collisions with CR3_EADDR */
-    cr3 &= ~(0xfffull);
+    /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
+    np2m_base &= ~(0xfffull);
 
     if (nv->nv_flushp2m && nv->nv_p2m) {
         nv->nv_p2m = NULL;
@@ -1505,14 +1505,14 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR )
+        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             nv->nv_flushp2m = 0;
             p2m_getlru_nestedp2m(d, p2m);
             nv->nv_p2m = p2m;
-            if (p2m->cr3 == CR3_EADDR)
+            if (p2m->np2m_base == P2M_BASE_EADDR)
                 hvm_asid_flush_vcpu(v);
-            p2m->cr3 = cr3;
+            p2m->np2m_base = np2m_base;
             cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
@@ -1527,7 +1527,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     nv->nv_p2m = p2m;
-    p2m->cr3 = cr3;
+    p2m->np2m_base = np2m_base;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
@@ -1543,7 +1543,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1561,15 +1561,15 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint32_t pfec_21 = *pfec;
-        uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
+        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, ncr3);
+        p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
+        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
                                        gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index fdb0f58..d3535b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -170,7 +170,7 @@ struct hvm_function_table {
                                 uint64_t exitcode);
     int (*nhvm_vcpu_vmexit_trap)(struct vcpu *v, struct hvm_trap *trap);
     uint64_t (*nhvm_vcpu_guestcr3)(struct vcpu *v);
-    uint64_t (*nhvm_vcpu_hostcr3)(struct vcpu *v);
+    uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v);
     uint32_t (*nhvm_vcpu_asid)(struct vcpu *v);
     int (*nhvm_vmcx_guest_intercepts_trap)(struct vcpu *v, 
                                unsigned int trapnr, int errcode);
@@ -475,7 +475,7 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v);
 /* returns l1 guest's cr3 that points to the page table used to
  * translate l2 guest physical address to l1 guest physical address.
  */
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v);
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v);
 /* returns the asid number l1 guest wants to use to run the l2 guest */
 uint32_t nhvm_vcpu_asid(struct vcpu *v);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..d97011d 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -99,7 +99,7 @@ int nvmx_vcpu_initialise(struct vcpu *v);
 void nvmx_vcpu_destroy(struct vcpu *v);
 int nvmx_vcpu_reset(struct vcpu *v);
 uint64_t nvmx_vcpu_guestcr3(struct vcpu *v);
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v);
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v);
 uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 907a817..1807ad6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -197,17 +197,17 @@ struct p2m_domain {
 
     struct domain     *domain;   /* back pointer to domain */
 
-    /* Nested p2ms only: nested-CR3 value that this p2m shadows. 
-     * This can be cleared to CR3_EADDR under the per-p2m lock but
+    /* Nested p2ms only: nested p2m base value that this p2m shadows. 
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
      * needs both the per-p2m lock and the per-domain nestedp2m lock
      * to set it to any other value. */
-#define CR3_EADDR     (~0ULL)
-    uint64_t           cr3;
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
     * The host p2m holds the head of the list and the np2ms are 
      * threaded on in LRU order. */
-    struct list_head np2m_list; 
+    struct list_head   np2m_list; 
 
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
@@ -282,11 +282,11 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thwb5-0006kN-6J; Mon, 10 Dec 2012 06:12:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb3-0006ju-Sq
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:10 +0000
Received: from [193.109.254.147:6220] by server-1.bemta-14.messagelabs.com id
	BB/84-25314-93D75C05; Mon, 10 Dec 2012 06:12:09 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355119923!9510627!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10921 invoked from network); 10 Dec 2012 06:12:06 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:06 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:17 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997301"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:04 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:14 +0800
Message-Id: <1355162243-11857-3-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 02/11] nestedhap: Change nested p2m's walker to
	vendor-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

EPT and NPT adopt different formats for their per-level entries,
so make the walker functions vendor-specific.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c              |    1 +
 xen/arch/x86/hvm/vmx/vmx.c              |    3 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |   13 +++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   46 +++++++++++--------------------
 xen/include/asm-x86/hvm/hvm.h           |    5 +++
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 ++
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    5 +++
 8 files changed, 76 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index ed0faa6..5dcb354 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1171,6 +1171,37 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
     return vcpu_nestedsvm(v).ns_hap_enabled;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    uint32_t pfec;
+    unsigned long nested_cr3, gfn;
+    
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
+
+    pfec = PFEC_user_mode | PFEC_page_present;
+    if (access_w)
+        pfec |= PFEC_write_access;
+    if (access_x)
+        pfec |= PFEC_insn_fetch;
+
+    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
+
+    if ( gfn == INVALID_GFN ) 
+        return NESTEDHVM_PAGEFAULT_INJECT;
+
+    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+    return NESTEDHVM_PAGEFAULT_DONE;
+}
+
+
 enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 6c469ec..a905764 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2008,6 +2008,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
+    .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 47d8ca6..c67ac59 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1511,7 +1511,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_intr_blocked    = nvmx_intr_blocked,
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap,
-    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled
+    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled,
+    .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6d1a736..4495dd6 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1445,6 +1445,19 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
     return 1;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    /*TODO:*/
+    return 0;
+}
+
 void nvmx_idtv_handling(void)
 {
     struct vcpu *v = current;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /* Walk nested p2m */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /* Walk nested p2m */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbA-0006mA-Fr; Mon, 10 Dec 2012 06:12:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb8-0006lJ-To
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:15 +0000
Received: from [85.158.137.99:26253] by server-10.bemta-3.messagelabs.com id
	FC/BD-19806-E3D75C05; Mon, 10 Dec 2012 06:12:14 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27562 invoked from network); 10 Dec 2012 06:12:13 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:13 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:22 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997326"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:09 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:18 +0800
Message-Id: <1355162243-11857-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 06/11] nEPT: Try to enable EPT paging for L2
	guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Once EPT is found to be enabled by the L1 VMM, enable nested EPT
support for the L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        |   50 ++++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
 3 files changed, 56 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 06455bf..1bfb67f 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1513,6 +1513,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
     .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
+    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
     .nhvm_intr_blocked    = nvmx_intr_blocked,
@@ -2055,6 +2056,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
+    int ret;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2073,14 +2075,22 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    if ( hvm_hap_nested_page_fault(gpa,
+    ret = hvm_hap_nested_page_fault(gpa,
                                    qualification & EPT_GLA_VALID       ? 1 : 0,
                                    qualification & EPT_GLA_VALID
                                      ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
                                    qualification & EPT_READ_VIOLATION  ? 1 : 0,
                                    qualification & EPT_WRITE_VIOLATION ? 1 : 0,
-                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
-        return;
+                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
+    switch (ret) {
+        case 0:
+            break;
+        case 1:
+            return;
+        case -1:
+            vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+            return;
+    }
 
     /* Everything else is an error. */
     mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 76cf757..ab68b52 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
 	goto out;
     }
+    nvmx->ept.enabled = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
 
 uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
-    /* TODO */
-    ASSERT(0);
-    return 0;
+    uint64_t eptp_base;
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
+    return eptp_base & PAGE_MASK; 
 }
 
 uint32_t nvmx_vcpu_asid(struct vcpu *v)
@@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
     return 0;
 }
 
+bool_t nvmx_ept_enabled(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    return !!(nvmx->ept.enabled);
+}
+
 static const enum x86_segment sreg_to_index[] = {
     [VMX_SREG_ES] = x86_seg_es,
     [VMX_SREG_CS] = x86_seg_cs,
@@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 }
 
 void nvmx_update_secondary_exec_control(struct vcpu *v,
-                                            unsigned long value)
+                                            unsigned long host_cntrl)
 {
     u32 shadow_cntrl;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
     shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
-    shadow_cntrl |= value;
-    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
+    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
+    shadow_cntrl |= host_cntrl;
+    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
 }
 
 static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
@@ -818,6 +830,19 @@ static void load_shadow_guest_state(struct vcpu *v)
     /* TODO: CR3 target control */
 }
 
+
+static uint64_t get_shadow_eptp(struct vcpu *v)
+{
+    uint64_t eptp_asr;
+    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct ept_data *ept_data = p2m->hap_data;
+
+    eptp_asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    ept_data->ept_ctl.asr = eptp_asr;
+    return ept_data->ept_ctl.eptp;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -862,7 +887,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
 
-    /* TODO: EPT_POINTER */
+    /* Set up virtual EPT for L2 guest */
+    if ( nestedhvm_paging_mode_hap(v) )
+        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -915,8 +943,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
     /* Adjust exit_reason/exit_qualifciation for violation case */
     if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
                 EXIT_REASON_EPT_VIOLATION ) {
-        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
-        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
     }
 }
 
@@ -1480,8 +1508,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
         case EPT_TRANSLATE_VIOLATION:
         case EPT_TRANSLATE_MISCONFIG:
             rc = NESTEDHVM_PAGEFAULT_INJECT;
-            nvmx->ept_exit.exit_reason = exit_reason;
-            nvmx->ept_exit.exit_qual = exit_qual;
+            nvmx->ept.exit_reason = exit_reason;
+            nvmx->ept.exit_qual = exit_qual;
             break;
         case EPT_TRANSLATE_RETRY:
             rc = NESTEDHVM_PAGEFAULT_RETRY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 8eb377b..661cd8a 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -33,9 +33,10 @@ struct nestedvmx {
         u32           error_code;
     } intr;
     struct {
+        char     enabled;
         uint32_t exit_reason;
         uint32_t exit_qual;
-    } ept_exit;
+    } ept;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
                               unsigned int trap, int error_code);
 void nvmx_domain_relinquish_resources(struct domain *d);
 
+bool_t nvmx_ept_enabled(struct vcpu *v);
+
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thwb2-0006ji-8e; Mon, 10 Dec 2012 06:12:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb0-0006jZ-9Z
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:06 +0000
Received: from [193.109.254.147:24142] by server-5.bemta-14.messagelabs.com id
	AF/A2-10257-53D75C05; Mon, 10 Dec 2012 06:12:05 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355119923!9510627!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10880 invoked from network); 10 Dec 2012 06:12:04 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:04 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997286"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:01 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:12 +0800
Message-Id: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 00/11] Add virtual EPT support to Xen.
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use EPT hardware
for the L2 guest's memory virtualization, which improves the L2
guest's performance significantly. In our testing, some benchmarks
show a > 5x performance gain.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Zhang Xiantao (11):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  nested_ept: Add permission check for success case
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nEPT: expose EPT capability to L1 VMM
  nVMX: Expose VPID capability to nested VMM.

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   76 +++++---
 xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++-
 xen/arch/x86/mm/guest_walk.c            |   12 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  345 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   79 +++----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |   96 +++++++--
 xen/arch/x86/mm/p2m.c                   |   44 +++--
 xen/arch/x86/mm/shadow/multi.c          |    2 +-
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   31 ++-
 xen/include/asm-x86/hvm/vmx/vmx.h       |    6 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
 xen/include/asm-x86/p2m.h               |   17 +-
 22 files changed, 859 insertions(+), 153 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thwb2-0006ji-8e; Mon, 10 Dec 2012 06:12:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb0-0006jZ-9Z
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:06 +0000
Received: from [193.109.254.147:24142] by server-5.bemta-14.messagelabs.com id
	AF/A2-10257-53D75C05; Mon, 10 Dec 2012 06:12:05 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355119923!9510627!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10880 invoked from network); 10 Dec 2012 06:12:04 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:04 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997286"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:01 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:12 +0800
Message-Id: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 00/11] Add virtual EPT support to Xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use EPT hardware
for the L2 guest's memory virtualization. This improves L2 guest
performance considerably: in our testing, some benchmarks showed
a more than 5x performance gain.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Zhang Xiantao (11):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  nested_ept: Add permission check for success case
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nEPT: expose EPT capability to L1 VMM
  nVMX: Expose VPID capability to nested VMM.

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   76 +++++---
 xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++-
 xen/arch/x86/mm/guest_walk.c            |   12 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  345 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   79 +++----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |   96 +++++++--
 xen/arch/x86/mm/p2m.c                   |   44 +++--
 xen/arch/x86/mm/shadow/multi.c          |    2 +-
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   31 ++-
 xen/include/asm-x86/hvm/vmx/vmx.h       |    6 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
 xen/include/asm-x86/p2m.h               |   17 +-
 22 files changed, 859 insertions(+), 153 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbD-0006oN-Gr; Mon, 10 Dec 2012 06:12:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbC-0006n7-FQ
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:18 +0000
Received: from [85.158.137.99:26349] by server-7.bemta-3.messagelabs.com id
	02/C2-01713-14D75C05; Mon, 10 Dec 2012 06:12:17 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!5
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27682 invoked from network); 10 Dec 2012 06:12:16 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:16 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997349"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:14 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:21 +0800
Message-Id: <1355162243-11857-10-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 09/11] nEPT: handle invept instruction from L1
	VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Add the INVEPT instruction emulation logic.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |    6 +++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   37 ++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c              |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 1bfb67f..36f6d82 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2622,11 +2622,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
             update_guest_eip();
         break;
-
+    case EXIT_REASON_INVEPT:
+        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVEPT:
     case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 41779bc..07ca90e 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1357,6 +1357,43 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_invept(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long eptp;
+    u64 inv_type;
+
+    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
+             != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG, "inv_type:%lu, eptp:%lx\n", inv_type, eptp);
+
+    switch ( inv_type )
+    {
+    case INVEPT_SINGLE_CONTEXT:
+        {
+            struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
+            if ( p2m )
+            {
+                p2m_flush(current, p2m);
+                ept_sync_domain(p2m);
+            }
+        }
+        break;
+    case INVEPT_ALL_CONTEXT:
+        p2m_flush_nestedp2m(current->domain);
+        __invept(INVEPT_ALL_CONTEXT, 0, 0);
+        break;
+    default:
+        return X86EMUL_EXCEPTION;
+    }
+    vmreturn(regs, VMSUCCEED);
+
+    return X86EMUL_OKAY;
+}
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 799bbfb..657fc03 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1478,7 +1478,7 @@ p2m_flush_table(struct p2m_domain *p2m)
 void
 p2m_flush(struct vcpu *v, struct p2m_domain *p2m)
 {
-    ASSERT(v->domain == p2m->domain);
+    ASSERT(p2m && v->domain == p2m->domain);
     vcpu_nestedhvm(v).nv_p2m = NULL;
     p2m_flush_table(p2m);
     hvm_asid_flush_vcpu(v);
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 55c0ad1..cf5ed9a 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -190,6 +190,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs);
 int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+int nvmx_handle_invept(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbA-0006m0-2a; Mon, 10 Dec 2012 06:12:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb8-0006l7-7V
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:14 +0000
Received: from [85.158.137.99:56770] by server-2.bemta-3.messagelabs.com id
	06/4C-04744-D3D75C05; Mon, 10 Dec 2012 06:12:13 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27537 invoked from network); 10 Dec 2012 06:12:11 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:11 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997320"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:08 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:17 +0800
Message-Id: <1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case by making
the related data structures and operations neutral to both
common EPT and nested EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |   39 +++++++++------
 xen/arch/x86/mm/p2m-ept.c          |   96 ++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m.c              |   16 +++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |   30 +++++++----
 xen/include/asm-x86/hvm/vmx/vmx.h  |    6 ++-
 xen/include/asm-x86/p2m.h          |    1 +
 7 files changed, 137 insertions(+), 53 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..b9ebdfe 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -942,7 +942,7 @@ static int construct_vmcs(struct vcpu *v)
     }
 
     if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept.ept_ctl.eptp);
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index c67ac59..06455bf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -79,22 +79,23 @@ static void __ept_sync_domain(void *info);
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
+    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
 
     /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
+    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
 
     /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
+    ept->ept_ctl.ept_wl = 3;
 
-    d->arch.hvm_domain.vmx.ept_control.asr  =
+    ept->ept_ctl.asr  =
         pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
 
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
+    if ( !zalloc_cpumask_var(&ept->ept_synced) )
         return -ENOMEM;
 
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
     {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
+        free_cpumask_var(ept->ept_synced);
         return rc;
     }
 
@@ -103,9 +104,10 @@ static int vmx_domain_initialise(struct domain *d)
 
 static void vmx_domain_destroy(struct domain *d)
 {
+    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
     if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
+        on_each_cpu(__ept_sync_domain, p2m_get_hostp2m(d), 1);
+    free_cpumask_var(ept->ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +643,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = p2m_get_hostp2m(d)->hap_data;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +653,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1218,12 +1221,16 @@ static void vmx_update_guest_efer(struct vcpu *v)
 
 static void __ept_sync_domain(void *info)
 {
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+    struct p2m_domain *p2m = info;
+    struct ept_data *ept_data = p2m->hap_data;
+
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
 }
 
-void ept_sync_domain(struct domain *d)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept_data = p2m->hap_data;
     /* Only if using EPT and this domain has some VCPUs to dirty. */
     if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
         return;
@@ -1236,11 +1243,11 @@ void ept_sync_domain(struct domain *d)
      * the ept_synced mask before on_selected_cpus() reads it, resulting in
      * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
      */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
+    cpumask_and(ept_get_synced_mask(ept_data),
                 d->domain_dirty_cpumask, &cpu_online_map);
 
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
+    on_selected_cpus(ept_get_synced_mask(ept_data),
+                     __ept_sync_domain, p2m, 1);
 }
 
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..8adf3f9 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept_data = p2m->hap_data;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept_data);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept_data) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept_data); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table.*/
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept_data = p2m->hap_data;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept_data); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept_data = p2m->hap_data;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept_data); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept_data = p2m->hap_data;
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept_data); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,16 +784,16 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept_data = p2m->hap_data;
+    if ( ept_get_asr(ept_data) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept_data)), ept_get_wl(ept_data), ot, nt);
 
-    ept_sync_domain(d);
+    ept_sync_domain(p2m);
 }
 
 void ept_p2m_init(struct p2m_domain *p2m)
@@ -811,6 +817,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept_data;
 
     for_each_domain(d)
     {
@@ -818,15 +825,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept_data = p2m->hap_data;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept_data); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
@@ -858,6 +866,52 @@ out:
     }
 }
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m)
+{
+    struct domain *d = p2m->domain;
+    struct ept_data *ept;
+
+    ASSERT(d);
+    if ( !hap_enabled(d) )
+        return 0;
+
+    p2m->hap_data = ept = xzalloc(struct ept_data);
+    if ( !p2m->hap_data )
+        return -ENOMEM;
+    if ( !zalloc_cpumask_var(&ept->ept_synced) )
+    {
+        xfree(ept);
+        p2m->hap_data = NULL;
+        return -ENOMEM;
+    }
+    return 0;
+}
+
+void free_p2m_hap_data(struct p2m_domain *p2m)
+{
+    struct ept_data *ept;
+
+    if ( !hap_enabled(p2m->domain) )
+        return;
+
+    if ( p2m_is_nestedp2m(p2m) ) {
+        ept = p2m->hap_data;
+        if ( ept ) {
+            free_cpumask_var(ept->ept_synced);
+            xfree(ept);
+        }
+    }
+}
+
+void p2m_init_hap_data(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = p2m->hap_data;
+
+    ept->ept_ctl.ept_wl = 3;
+    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
+    ept->ept_ctl.asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+}
+
 static struct keyhandler ept_p2m_table = {
     .diagnostic = 0,
     .u.fn = ept_dump_p2m_table,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 62c2d78..799bbfb 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -105,6 +105,8 @@ p2m_init_nestedp2m(struct domain *d)
         if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
             return -ENOMEM;
         p2m_initialise(d, p2m);
+        if ( cpu_has_vmx && alloc_p2m_hap_data(p2m) )
+            return -ENOMEM;
         p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
     }
@@ -126,12 +128,14 @@ int p2m_init(struct domain *d)
         return -ENOMEM;
     }
     p2m_initialise(d, p2m);
+    if ( hap_enabled(d) && cpu_has_vmx )
+        p2m->hap_data = &d->arch.hvm_domain.vmx.ept;
 
     /* Must initialise nestedp2m unconditionally
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
+    if ( rc )
         p2m_final_teardown(d);
     return rc;
 }
@@ -354,6 +358,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     if ( hap_enabled(d) )
         iommu_share_p2m_table(d);
+    if ( p2m_is_nestedp2m(p2m) && hap_enabled(d) )
+        p2m_init_hap_data(p2m);
 
     P2M_PRINTK("populating p2m table\n");
 
@@ -436,12 +442,16 @@ void p2m_teardown(struct p2m_domain *p2m)
 static void p2m_teardown_nestedp2m(struct domain *d)
 {
     uint8_t i;
+    struct p2m_domain *p2m;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         if ( !d->arch.nested_p2m[i] )
             continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
+        p2m = d->arch.nested_p2m[i];
+        if ( p2m->hap_data )
+            free_p2m_hap_data(p2m);
+        free_cpumask_var(p2m->dirty_cpumask);
+        xfree(p2m);
         d->arch.nested_p2m[i] = NULL;
     }
 }
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..e6b4e3b 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,34 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
-    union {
-        struct {
+union eptp_control {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
+};
+
+struct ept_data {
+    union eptp_control ept_ctl;
     cpumask_var_t ept_synced;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+struct vmx_domain {
+    unsigned long apic_access_mfn;
+    struct ept_data ept;
+};
+
+#define ept_get_wl(ept_data)   \
+    (((struct ept_data*)(ept_data))->ept_ctl.ept_wl)
+#define ept_get_asr(ept_data)  \
+    (((struct ept_data*)(ept_data))->ept_ctl.asr)
+#define ept_get_eptp(ept_data) \
+    (((struct ept_data*)(ept_data))->ept_ctl.eptp)
+#define ept_get_synced_mask(ept_data)\
+    (((struct ept_data*)(ept_data))->ept_synced)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..573a12e 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -333,7 +333,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -401,6 +401,10 @@ void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 1807ad6..0fb1b2d 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,7 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    void *hap_data;
 };
 
 /* get host p2m table */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbA-0006m0-2a; Mon, 10 Dec 2012 06:12:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb8-0006l7-7V
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:14 +0000
Received: from [85.158.137.99:56770] by server-2.bemta-3.messagelabs.com id
	06/4C-04744-D3D75C05; Mon, 10 Dec 2012 06:12:13 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27537 invoked from network); 10 Dec 2012 06:12:11 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:11 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997320"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:08 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:17 +0800
Message-Id: <1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case by making
the related data structures and operations neutral to both
common EPT and nested EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |   39 +++++++++------
 xen/arch/x86/mm/p2m-ept.c          |   96 ++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m.c              |   16 +++++-
 xen/include/asm-x86/hvm/vmx/vmcs.h |   30 +++++++----
 xen/include/asm-x86/hvm/vmx/vmx.h  |    6 ++-
 xen/include/asm-x86/p2m.h          |    1 +
 7 files changed, 137 insertions(+), 53 deletions(-)
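
Since the patch is easier to follow with the final layout in view, here is a
minimal stand-alone sketch (not Xen code; `make_eptp()` is a hypothetical
helper, and the bitfield layout assumes a GCC/x86 target) of how the EPTP
control word and its accessors become neutral to the owning structure:

```c
#include <stdint.h>

/*
 * The EPTP control word and its accessors move out of struct vmx_domain
 * into a free-standing struct ept_data, so the same code serves both the
 * host p2m (which embeds it) and nested p2m's (which allocate their own).
 * Bit positions follow the EPTP format used in the patch.
 */
union eptp_control {
    struct {
        uint64_t ept_mt : 3;    /* memory type for EPT paging structures */
        uint64_t ept_wl : 3;    /* page-walk length minus 1 */
        uint64_t rsvd   : 6;
        uint64_t asr    : 52;   /* pfn of the top-level EPT table */
    };
    uint64_t eptp;
};

struct ept_data {
    union eptp_control ept_ctl;
};

/* Accessors now take an ept_data pointer instead of a domain. */
#define ept_get_wl(e)   ((e)->ept_ctl.ept_wl)
#define ept_get_eptp(e) ((e)->ept_ctl.eptp)

/* Build an EPTP value the way vmx_domain_initialise() and
 * p2m_init_hap_data() do: walk length 3, write-back memory
 * type (MTRR_TYPE_WRBACK == 6). */
uint64_t make_eptp(uint64_t root_pfn)
{
    struct ept_data ept;

    ept.ept_ctl.eptp   = 0;
    ept.ept_ctl.ept_wl = 3;
    ept.ept_ctl.ept_mt = 6;
    ept.ept_ctl.asr    = root_pfn;
    return ept_get_eptp(&ept);
}
```

With this split, callers that used to reach through
`d->arch.hvm_domain.vmx.ept_control` go through the `ept_data`-based
accessors instead, which is what lets the nested p2m reuse them.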

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..b9ebdfe 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -942,7 +942,7 @@ static int construct_vmcs(struct vcpu *v)
     }
 
     if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept.ept_ctl.eptp);
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index c67ac59..06455bf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -79,22 +79,23 @@ static void __ept_sync_domain(void *info);
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
+    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
 
     /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
+    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
 
     /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
+    ept->ept_ctl.ept_wl = 3;
 
-    d->arch.hvm_domain.vmx.ept_control.asr  =
+    ept->ept_ctl.asr  =
         pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
 
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
+    if ( !zalloc_cpumask_var(&ept->ept_synced) )
         return -ENOMEM;
 
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
     {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
+        free_cpumask_var(ept->ept_synced);
         return rc;
     }
 
@@ -103,9 +104,10 @@ static int vmx_domain_initialise(struct domain *d)
 
 static void vmx_domain_destroy(struct domain *d)
 {
+    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
     if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
+        on_each_cpu(__ept_sync_domain, p2m_get_hostp2m(d), 1);
+    free_cpumask_var(ept->ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +643,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = p2m_get_hostp2m(d)->hap_data;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +653,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1218,12 +1221,16 @@ static void vmx_update_guest_efer(struct vcpu *v)
 
 static void __ept_sync_domain(void *info)
 {
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+    struct p2m_domain *p2m = info;
+    struct ept_data *ept_data = p2m->hap_data;
+
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
 }
 
-void ept_sync_domain(struct domain *d)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept_data = p2m->hap_data;
     /* Only if using EPT and this domain has some VCPUs to dirty. */
     if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
         return;
@@ -1236,11 +1243,11 @@ void ept_sync_domain(struct domain *d)
      * the ept_synced mask before on_selected_cpus() reads it, resulting in
      * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
      */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
+    cpumask_and(ept_get_synced_mask(ept_data),
                 d->domain_dirty_cpumask, &cpu_online_map);
 
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
+    on_selected_cpus(ept_get_synced_mask(ept_data),
+                     __ept_sync_domain, p2m, 1);
 }
 
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..8adf3f9 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept_data = p2m->hap_data;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept_data);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept_data) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept_data); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table. */
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept_data = p2m->hap_data;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept_data); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept_data = p2m->hap_data;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept_data); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept_data = p2m->hap_data;
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept_data); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,16 +784,16 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept_data = p2m->hap_data;
+    if ( ept_get_asr(ept_data) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept_data)), ept_get_wl(ept_data), ot, nt);
 
-    ept_sync_domain(d);
+    ept_sync_domain(p2m);
 }
 
 void ept_p2m_init(struct p2m_domain *p2m)
@@ -811,6 +817,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept_data;
 
     for_each_domain(d)
     {
@@ -818,15 +825,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept_data = p2m->hap_data;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept_data); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
@@ -858,6 +866,52 @@ out:
     }
 }
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m)
+{
+    struct domain *d = p2m->domain;
+    struct ept_data *ept;
+
+    ASSERT(d);
+    if ( !hap_enabled(d) )
+        return 0;
+
+    p2m->hap_data = ept = xzalloc(struct ept_data);
+    if ( !p2m->hap_data )
+        return -ENOMEM;
+    if ( !zalloc_cpumask_var(&ept->ept_synced) )
+    {
+        xfree(ept);
+        p2m->hap_data = NULL;
+        return -ENOMEM;
+    }
+    return 0;
+}
+
+void free_p2m_hap_data(struct p2m_domain *p2m)
+{
+    struct ept_data *ept;
+
+    if ( !hap_enabled(p2m->domain) )
+        return;
+
+    if ( p2m_is_nestedp2m(p2m) ) {
+        ept = p2m->hap_data;
+        if ( ept ) {
+            free_cpumask_var(ept->ept_synced);
+            xfree(ept);
+        }
+    }
+}
+
+void p2m_init_hap_data(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = p2m->hap_data;
+
+    ept->ept_ctl.ept_wl = 3;
+    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
+    ept->ept_ctl.asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+}
+
 static struct keyhandler ept_p2m_table = {
     .diagnostic = 0,
     .u.fn = ept_dump_p2m_table,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 62c2d78..799bbfb 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -105,6 +105,8 @@ p2m_init_nestedp2m(struct domain *d)
         if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
             return -ENOMEM;
         p2m_initialise(d, p2m);
+        if ( cpu_has_vmx && alloc_p2m_hap_data(p2m) )
+            return -ENOMEM;
         p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
     }
@@ -126,12 +128,14 @@ int p2m_init(struct domain *d)
         return -ENOMEM;
     }
     p2m_initialise(d, p2m);
+    if ( hap_enabled(d) && cpu_has_vmx )
+        p2m->hap_data = &d->arch.hvm_domain.vmx.ept;
 
     /* Must initialise nestedp2m unconditionally
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
+    if ( rc )
         p2m_final_teardown(d);
     return rc;
 }
@@ -354,6 +358,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     if ( hap_enabled(d) )
         iommu_share_p2m_table(d);
+    if ( p2m_is_nestedp2m(p2m) && hap_enabled(d) )
+        p2m_init_hap_data(p2m);
 
     P2M_PRINTK("populating p2m table\n");
 
@@ -436,12 +442,16 @@ void p2m_teardown(struct p2m_domain *p2m)
 static void p2m_teardown_nestedp2m(struct domain *d)
 {
     uint8_t i;
+    struct p2m_domain *p2m;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         if ( !d->arch.nested_p2m[i] )
             continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
+        p2m = d->arch.nested_p2m[i];
+        if ( p2m->hap_data )
+            free_p2m_hap_data(p2m);
+        free_cpumask_var(p2m->dirty_cpumask);
+        xfree(p2m);
         d->arch.nested_p2m[i] = NULL;
     }
 }
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..e6b4e3b 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,34 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
-    union {
-        struct {
+union eptp_control {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
+};
+
+struct ept_data {
+    union eptp_control ept_ctl;
     cpumask_var_t ept_synced;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+struct vmx_domain {
+    unsigned long apic_access_mfn;
+    struct ept_data ept;
+};
+
+#define ept_get_wl(ept_data)   \
+    (((struct ept_data*)(ept_data))->ept_ctl.ept_wl)
+#define ept_get_asr(ept_data)  \
+    (((struct ept_data*)(ept_data))->ept_ctl.asr)
+#define ept_get_eptp(ept_data) \
+    (((struct ept_data*)(ept_data))->ept_ctl.eptp)
+#define ept_get_synced_mask(ept_data)\
+    (((struct ept_data*)(ept_data))->ept_synced)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..573a12e 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -333,7 +333,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -401,6 +401,10 @@ void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 1807ad6..0fb1b2d 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,7 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    void *hap_data;
 };
 
 /* get host p2m table */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbD-0006o8-3Q; Mon, 10 Dec 2012 06:12:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbB-0006mW-I9
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:17 +0000
Received: from [85.158.137.99:19956] by server-9.bemta-3.messagelabs.com id
	E8/62-02388-04D75C05; Mon, 10 Dec 2012 06:12:16 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!4
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27637 invoked from network); 10 Dec 2012 06:12:15 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:15 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997343"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:12 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:20 +0800
Message-Id: <1355162243-11857-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 08/11] nEPT: Use minimal permission for nested
	p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Emulate the permission check for the nested p2m. The current solution
is to use minimal permissions: on a permission violation in L0,
determine whether it was caused by the guest EPT or the host EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |    4 ++--
 xen/arch/x86/mm/hap/nested_ept.c        |    9 +++++----
 xen/arch/x86/mm/hap/nested_hap.c        |   22 +++++++++++++---------
 xen/include/asm-x86/hvm/hvm.h           |    2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    6 +++---
 7 files changed, 26 insertions(+), 21 deletions(-)
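
As a rough illustration of the "minimal permission" idea, here is a
hand-written sketch (not Xen code; `combine_access()` and `permission_ok()`
are hypothetical helpers, with `permission_ok()` mirroring what
`nept_permission_check()` does, and rights modelled as plain r/w/x bitmasks):

```c
#include <stdint.h>

/*
 * The shadow (nested) p2m entry must grant no more access than either
 * the L1 guest's EPT entry or the host p2m entry allows, i.e. the
 * effective rights are the intersection of the two rwx sets.
 */
#define P2M_R (1u << 0)
#define P2M_W (1u << 1)
#define P2M_X (1u << 2)

/* Effective rights for the shadow entry: intersection of the rights
 * recovered from the guest EPT walk (what nept_translate_l2ga() now
 * returns via *p2m_acc) and the host p2m rights. */
uint8_t combine_access(uint8_t guest_rwx, uint8_t host_rwx)
{
    return guest_rwx & host_rwx;
}

/* The attempted access must be a subset of the granted rights. */
int permission_ok(uint8_t rwx_acc, uint8_t rwx_bits)
{
    return (rwx_acc & ~rwx_bits) == 0;
}
```

On a violation, L0 can then decide which side denied the access: if the
guest EPT itself forbids it, the fault is reflected to L1; otherwise it is a
host-side fault for L0 to fix up.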

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 5dcb354..ab455a9 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
  */
 int
 nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint32_t pfec;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3fc128b..41779bc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1494,7 +1494,7 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
  */
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
@@ -1504,7 +1504,7 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
-    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn, p2m_acc,
                                 &exit_qual, &exit_reason);
     switch ( rc ) {
         case EPT_TRANSLATE_SUCCEED:
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 2d733a8..637db1a 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -286,8 +286,8 @@ bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason)
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason)
 {
     uint32_t rc, rwx_bits = 0;
     walk_t gw;
@@ -317,8 +317,9 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
             }
             if ( nept_permission_check(rwx_acc, rwx_bits) )
             {
-                 *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
-                 break;
+                *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
+                *p2m_acc = (uint8_t)rwx_bits;
+                break;
             }
             rc = EPT_TRANSLATE_VIOLATION;
         /* Fall through to EPT violation if permission check fails. */
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 6d1264b..9c1654d 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -142,12 +142,12 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
  */
 static int
 nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
 
-    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order, p2m_acc,
         access_r, access_w, access_x);
 }
 
@@ -158,16 +158,15 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      p2m_type_t *p2mt,
+                      p2m_type_t *p2mt, p2m_access_t *p2ma,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
@@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
     p2m_type_t p2mt_10;
+    p2m_access_t p2ma_10;
+    uint8_t p2ma_21;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
@@ -229,7 +230,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     /* ==> we have to walk L0 P2M */
     rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
-        &p2mt_10, &page_order_10,
+        &p2mt_10, &p2ma_10, &page_order_10,
         access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
@@ -250,10 +251,13 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     page_order_20 = min(page_order_21, page_order_10);
 
+    if (p2ma_10 > p2m_access_rwx)
+        p2ma_10 = p2m_access_rwx;
+    p2ma_10 &= (p2m_access_t)p2ma_21; /* Use minimal permission for nested p2m. */
+
     /* fix p2m_get_pagetable(nested_p2m) */
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
-        p2mt_10,
-        p2m_access_rwx /* FIXME: Should use minimum permission. */);
+        p2mt_10, p2ma_10);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 80f07e9..889e3c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -186,7 +186,7 @@ struct hvm_function_table {
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 0c90f30..748cc04 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -134,7 +134,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
 int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 661cd8a..55c0ad1 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -124,7 +124,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
@@ -207,7 +207,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason);
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbD-0006o8-3Q; Mon, 10 Dec 2012 06:12:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbB-0006mW-I9
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:17 +0000
Received: from [85.158.137.99:19956] by server-9.bemta-3.messagelabs.com id
	E8/62-02388-04D75C05; Mon, 10 Dec 2012 06:12:16 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!4
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27637 invoked from network); 10 Dec 2012 06:12:15 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:15 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997343"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:12 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:20 +0800
Message-Id: <1355162243-11857-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 08/11] nEPT: Use minimal permission for nested
	p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Emulate the permission check for the nested p2m. The current solution is
to use the minimal permission; once a permission violation is met in L0,
determine whether it was caused by the guest EPT or the host EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |    4 ++--
 xen/arch/x86/mm/hap/nested_ept.c        |    9 +++++----
 xen/arch/x86/mm/hap/nested_hap.c        |   22 +++++++++++++---------
 xen/include/asm-x86/hvm/hvm.h           |    2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    6 +++---
 7 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 5dcb354..ab455a9 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
  */
 int
 nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint32_t pfec;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3fc128b..41779bc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1494,7 +1494,7 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
  */
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
@@ -1504,7 +1504,7 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
-    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn, p2m_acc,
                                 &exit_qual, &exit_reason);
     switch ( rc ) {
         case EPT_TRANSLATE_SUCCEED:
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 2d733a8..637db1a 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -286,8 +286,8 @@ bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason)
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason)
 {
     uint32_t rc, rwx_bits = 0;
     walk_t gw;
@@ -317,8 +317,9 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
             }
             if ( nept_permission_check(rwx_acc, rwx_bits) )
             {
-                 *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
-                 break;
+                *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
+                *p2m_acc = (uint8_t)rwx_bits;
+                break;
             }
             rc = EPT_TRANSLATE_VIOLATION;
         /* Fall through to EPT violation if permission check fails. */
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 6d1264b..9c1654d 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -142,12 +142,12 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
  */
 static int
 nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
 
-    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order, p2m_acc,
         access_r, access_w, access_x);
 }
 
@@ -158,16 +158,15 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      p2m_type_t *p2mt,
+                      p2m_type_t *p2mt, p2m_access_t *p2ma,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
@@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
     p2m_type_t p2mt_10;
+    p2m_access_t p2ma_10;
+    uint8_t p2ma_21;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
@@ -229,7 +230,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     /* ==> we have to walk L0 P2M */
     rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
-        &p2mt_10, &page_order_10,
+        &p2mt_10, &p2ma_10, &page_order_10,
         access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
@@ -250,10 +251,13 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     page_order_20 = min(page_order_21, page_order_10);
 
+    if (p2ma_10 > p2m_access_rwx)
+        p2ma_10 = p2m_access_rwx;
+    p2ma_10 &= (p2m_access_t)p2ma_21; /* Use minimal permission for nested p2m. */
+
     /* fix p2m_get_pagetable(nested_p2m) */
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
-        p2mt_10,
-        p2m_access_rwx /* FIXME: Should use minimum permission. */);
+        p2mt_10, p2ma_10);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 80f07e9..889e3c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -186,7 +186,7 @@ struct hvm_function_table {
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 0c90f30..748cc04 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -134,7 +134,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
 int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 661cd8a..55c0ad1 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -124,7 +124,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
@@ -207,7 +207,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason);
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbB-0006mt-GZ; Mon, 10 Dec 2012 06:12:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbA-0006lv-89
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:16 +0000
Received: from [193.109.254.147:24458] by server-3.bemta-14.messagelabs.com id
	3E/DF-01317-F3D75C05; Mon, 10 Dec 2012 06:12:15 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355119924!2300111!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4682 invoked from network); 10 Dec 2012 06:12:07 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:07 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:18 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997307"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:05 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:15 +0800
Message-Id: <1355162243-11857-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 03/11] nEPT: Implement guest ept's walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement the guest EPT page-table walker; some of the logic is based on
the shadow code's ia32e PT walker. During the walk, if the target pages
are not in memory, use the RETRY mechanism to get a chance to bring the
target pages back.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
 xen/arch/x86/mm/guest_walk.c        |   12 +-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  327 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/arch/x86/mm/shadow/multi.c      |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   14 ++
 11 files changed, 403 insertions(+), 8 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 85bc9be..3400e6b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occured while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4495dd6..76cf757 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
+                EXIT_REASON_EPT_VIOLATION ) {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1454,8 +1463,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    int rc;
+    unsigned long gfn;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+            rc = NESTEDHVM_PAGEFAULT_DONE;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = NESTEDHVM_PAGEFAULT_INJECT;
+            nvmx->ept_exit.exit_reason = exit_reason;
+            nvmx->ept_exit.exit_qual = exit_qual;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            rc = NESTEDHVM_PAGEFAULT_RETRY;
+            break;
+        case EPT_TRANSLATE_ERR_PAGE:
+            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "GUEST EPT translation error!\n");
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 13ea0bb..afbe9db 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,10 +88,11 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
+void *map_domain_gfn(struct p2m_domain *p2m,
                                    gfn_t gfn, 
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
+                                   p2m_query_t *q,
                                    uint32_t *rc) 
 {
     struct page_info *page;
@@ -99,7 +100,7 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  *q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         &qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         &qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             &qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..da868e7
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,327 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for guest in nested case.
+ *
+ * pt walker logic based on arch/x86/mm/guest_walk.c
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* For EPT's walker reserved bits and EMT check  */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+
+#define EPT_EMT_WB  6
+#define EPT_EMT_UC  0
+
+#define NEPT_VPID_CAP_BITS 0
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+/* Always expose 1G and 2M capabilities to the guest,
+   so no additional check is needed */
+bool_t nept_sp_entry(uint64_t entry)
+{
+    return !!(entry & EPTE_SUPER_PAGE_MASK);
+}
+
+static bool_t nept_rsv_bits_check(uint64_t entry, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level ) {
+    case 1:
+        break;
+    case 2 ... 3:
+        if ( nept_sp_entry(entry) )
+            rsv_bits |= ((1ull << (9 * (level - 1))) - 1) << PAGE_SHIFT;
+        else
+            rsv_bits |= 0xfull << 3;
+        break;
+    case 4:
+        rsv_bits |= 0xf8;
+        break;
+    default:
+        printk("Unsupported EPT paging level: %d\n", level);
+    }
+    if ( !(entry & rsv_bits) )
+        return 0;
+    return 1;
+}
+
+/* EMT check: EMT values 2, 3 and 7 are reserved. */
+static bool_t nept_emt_bits_check(uint64_t entry, uint32_t level)
+{
+    ept_entry_t e;
+    e.epte = entry;
+    if ( e.sp || level == 1 ) {
+        if ( e.emt == 2 || e.emt == 3 || e.emt == 7 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_rwx_bits_check(uint64_t entry)
+{
+    /* Reject write-only and write/execute-only entries. */
+    uint8_t rwx_bits = entry & 0x7;
+    if ( rwx_bits == 2 || rwx_bits == 6 )
+        return 1;
+    if ( rwx_bits == 4 && !(NEPT_VPID_CAP_BITS &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED))
+        return 1;
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(uint64_t entry, uint32_t level)
+{
+    return (nept_rsv_bits_check(entry, level) ||
+                nept_emt_bits_check(entry, level) ||
+                nept_rwx_bits_check(entry));
+}
+
+static bool_t nept_present_check(uint64_t entry)
+{
+    if ( entry & 0x7 )
+        return 1;
+    return 0;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    /* TODO: expose EPT and VPID features to the guest. */
+    return NEPT_VPID_CAP_BITS;
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, walk_t *gw)
+{
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    p2m_query_t qt = P2M_ALLOC;
+
+    guest_l1e_t *l1p = NULL;
+    guest_l2e_t *l2p = NULL;
+    guest_l3e_t *l3p = NULL;
+    guest_l4e_t *l4p = NULL;
+
+    bool_t sp = 0;
+
+    memset(gw, 0, sizeof(*gw));
+    gw->va = l2ga;
+
+    /* Map the l4 root entry */
+    l4p = map_domain_gfn(p2m, base_gfn, &gw->l4mfn, &p2mt, &qt, &rc);
+    if ( !l4p )
+        goto map_err;
+    gw->l4e = l4p[guest_l4_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l4e.l4) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l4e.l4, 4) )
+        goto misconfig_err;
+
+    /* Map the l3 table */
+    base_gfn = guest_l4e_get_gfn(gw->l4e);
+    l3p = map_domain_gfn(p2m, base_gfn, &gw->l3mfn, &p2mt, &qt, &rc);
+    if ( l3p == NULL )
+        goto map_err;
+
+    /* Get the l3e and check its flags. */
+    gw->l3e = l3p[guest_l3_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l3e.l3) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l3e.l3, 3) )
+        goto misconfig_err;
+
+    sp = nept_sp_entry(gw->l3e.l3);
+    /* Super 1G entry */
+    if ( sp )
+    {
+        /* Generate a fake l1 table entry so callers don't all
+         * have to understand superpages. */
+        gfn_t start = guest_l3e_get_gfn(gw->l3e);
+
+        /* Increment the pfn by the right number of 4k pages. */
+        start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
+                     ((l2ga >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
+        gflags = (gw->l3e.l3 & 0x7f) | NEPT_1G_ENTRY_FLAG;
+        gw->l1e = guest_l1e_from_gfn(start, gflags);
+        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
+        goto done;
+    }
+
+    /* Map the l2 table */
+    base_gfn = guest_l3e_get_gfn(gw->l3e);
+    l2p = map_domain_gfn(p2m, base_gfn, &gw->l2mfn, &p2mt, &qt, &rc);
+    if ( l2p == NULL )
+        goto map_err;
+    /* Get the l2e */
+    gw->l2e = l2p[guest_l2_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l2e.l2) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l2e.l2, 2) )
+        goto misconfig_err;
+    sp = nept_sp_entry(gw->l2e.l2);
+
+    if ( sp )
+    {
+        gfn_t start = guest_l2e_get_gfn(gw->l2e);
+        gflags = (gw->l2e.l2 & 0x7f) | NEPT_2M_ENTRY_FLAG;
+
+        /* Increment the pfn by the right number of 4k pages. */
+        start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
+                     guest_l1_table_offset(l2ga));
+        gw->l1e = guest_l1e_from_gfn(start, gflags);
+        gw->l1mfn = _mfn(INVALID_MFN);
+        goto done;
+    }
+    /* Not a superpage: carry on and find the l1e. */
+    base_gfn = guest_l2e_get_gfn(gw->l2e);
+    l1p = map_domain_gfn(p2m, base_gfn, &gw->l1mfn, &p2mt, &qt, &rc);
+    if ( l1p == NULL )
+        goto map_err;
+    /* Get the l1e */
+    gw->l1e = l1p[guest_l1_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l1e.l1) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l1e.l1, 1) )
+        goto misconfig_err;
+
+    gflags = (gw->l1e.l1 & 0x7f) | NEPT_4K_ENTRY_FLAG;
+    gw->l1e.l1 = (gw->l1e.l1 & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto unmap;
+
+misconfig_err:
+    ret = EPT_TRANSLATE_MISCONFIG;
+    goto unmap;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+        ret = EPT_TRANSLATE_RETRY;
+    else
+        ret = EPT_TRANSLATE_ERR_PAGE;
+    goto unmap;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+
+unmap:
+    if ( l4p )
+    {
+        unmap_domain_page(l4p);
+        put_page(mfn_to_page(mfn_x(gw->l4mfn)));
+    }
+    if ( l3p )
+    {
+        unmap_domain_page(l3p);
+        put_page(mfn_to_page(mfn_x(gw->l3mfn)));
+    }
+    if ( l2p )
+    {
+        unmap_domain_page(l2p);
+        put_page(mfn_to_page(mfn_x(gw->l2mfn)));
+    }
+    if ( l1p )
+    {
+        unmap_domain_page(l1p);
+        put_page(mfn_to_page(mfn_x(gw->l1mfn)));
+    }
+    return ret;
+}
+
+/* Translate an L2 guest address to an L1 gpa via the L1 EPT paging structure. */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    walk_t gw;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            if ( likely(gw.l1e.l1 & NEPT_2M_ENTRY_FLAG) )
+            {
+                rwx_bits = gw.l4e.l4 & gw.l3e.l3 & gw.l2e.l2 & 0x7;
+                *page_order = 9;
+            }
+            else if ( gw.l1e.l1 & NEPT_4K_ENTRY_FLAG ) {
+                rwx_bits = gw.l4e.l4 & gw.l3e.l3 & gw.l2e.l2 & gw.l1e.l1 & 0x7;
+                *page_order = 0;
+            }
+            else if ( gw.l1e.l1 & NEPT_1G_ENTRY_FLAG  )
+            {
+                rwx_bits = gw.l4e.l4 & gw.l3e.l3  & 0x7;
+                *page_order = 18;
+            }
+            else
+                gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+
+            *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+            *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+            *exit_reason = EXIT_REASON_EPT_VIOLATION;
+            break;
+
+        case EPT_TRANSLATE_ERR_PAGE:
+            break;
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = EPT_TRANSLATE_MISCONFIG;
+            *exit_qual = 0;
+            *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 4967da1..409198c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
     /* Translate the GFN to an MFN */
     ASSERT(!paging_locked_by_me(v->domain));
     mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
-        
+
     if ( p2m_is_readonly(p2mt) )
     {
         put_gfn(v->domain, gfn);
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..600c52d 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn, 
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t *q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..649c511 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..8eb377b 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,12 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_SUCCEED   0
+#define EPT_TRANSLATE_VIOLATION 1
+#define EPT_TRANSLATE_ERR_PAGE  2
+#define EPT_TRANSLATE_MISCONFIG 3
+#define EPT_TRANSLATE_RETRY     4
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +202,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbB-0006mt-GZ; Mon, 10 Dec 2012 06:12:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbA-0006lv-89
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:16 +0000
Received: from [193.109.254.147:24458] by server-3.bemta-14.messagelabs.com id
	3E/DF-01317-F3D75C05; Mon, 10 Dec 2012 06:12:15 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355119924!2300111!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4682 invoked from network); 10 Dec 2012 06:12:07 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:07 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:18 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997307"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:05 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:15 +0800
Message-Id: <1355162243-11857-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 03/11] nEPT: Implement guest ept's walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement the guest EPT page-table walker; some of the logic is based
on the shadow code's ia32e PT walker. During the walk, if a target
page is not in memory, use the RETRY mechanism to get a chance to
bring the target page back in.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
 xen/arch/x86/mm/guest_walk.c        |   12 +-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  327 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/arch/x86/mm/shadow/multi.c      |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   14 ++
 11 files changed, 403 insertions(+), 8 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 85bc9be..3400e6b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occured while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4495dd6..76cf757 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case. */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
+                EXIT_REASON_EPT_VIOLATION ) {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1454,8 +1463,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    int rc;
+    unsigned long gfn;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+            rc = NESTEDHVM_PAGEFAULT_DONE;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = NESTEDHVM_PAGEFAULT_INJECT;
+            nvmx->ept_exit.exit_reason = exit_reason;
+            nvmx->ept_exit.exit_qual = exit_qual;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            rc = NESTEDHVM_PAGEFAULT_RETRY;
+            break;
+        case EPT_TRANSLATE_ERR_PAGE:
+            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "GUEST EPT translation error!\n");
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 13ea0bb..afbe9db 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,10 +88,11 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
+void *map_domain_gfn(struct p2m_domain *p2m,
                                    gfn_t gfn, 
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
+                                   p2m_query_t *q,
                                    uint32_t *rc) 
 {
     struct page_info *page;
@@ -99,7 +100,7 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  *q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         &qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         &qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             &qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..da868e7
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,327 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for the guest in the nested case.
+ *
+ * pt walker logic based on arch/x86/mm/guest_walk.c
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure. */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Reserved bits and EMT check for the EPT walker. */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) - 1) & \
+                           ~((1ull << paddr_bits) - 1))
+
+
+#define EPT_EMT_WB  6
+#define EPT_EMT_UC  0
+
+#define NEPT_VPID_CAP_BITS 0
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+/* 1G and 2M super page capabilities are always exposed to the guest,
+ * so no additional capability check is needed. */
+bool_t nept_sp_entry(uint64_t entry)
+{
+    return !!(entry & EPTE_SUPER_PAGE_MASK);
+}
+
+static bool_t nept_rsv_bits_check(uint64_t entry, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level ) {
+    case 1:
+        break;
+    case 2 ... 3:
+        if ( nept_sp_entry(entry) )
+            rsv_bits |= ((1ull << (9 * (level - 1))) - 1) << PAGE_SHIFT;
+        else
+            rsv_bits |= 0xfull << 3;
+        break;
+    case 4:
+        rsv_bits |= 0xf8;
+        break;
+    default:
+        printk("Unsupported EPT paging level: %d\n", level);
+    }
+    if ( !(entry & rsv_bits) )
+        return 0;
+    return 1;
+}
+
+/* EMT check: EMT values 2, 3 and 7 are reserved. */
+static bool_t nept_emt_bits_check(uint64_t entry, uint32_t level)
+{
+    ept_entry_t e;
+    e.epte = entry;
+    if ( e.sp || level == 1 ) {
+        if ( e.emt == 2 || e.emt == 3 || e.emt == 7 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_rwx_bits_check(uint64_t entry)
+{
+    /* Reject write-only and write/execute-only entries. */
+    uint8_t rwx_bits = entry & 0x7;
+    if ( rwx_bits == 2 || rwx_bits == 6 )
+        return 1;
+    if ( rwx_bits == 4 && !(NEPT_VPID_CAP_BITS &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED))
+        return 1;
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(uint64_t entry, uint32_t level)
+{
+    return (nept_rsv_bits_check(entry, level) ||
+                nept_emt_bits_check(entry, level) ||
+                nept_rwx_bits_check(entry));
+}
+
+static bool_t nept_present_check(uint64_t entry)
+{
+    if ( entry & 0x7 )
+        return 1;
+    return 0;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    /* TODO: expose EPT and VPID features to the guest. */
+    return NEPT_VPID_CAP_BITS;
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, walk_t *gw)
+{
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    p2m_query_t qt = P2M_ALLOC;
+
+    guest_l1e_t *l1p = NULL;
+    guest_l2e_t *l2p = NULL;
+    guest_l3e_t *l3p = NULL;
+    guest_l4e_t *l4p = NULL;
+
+    bool_t sp = 0;
+
+    memset(gw, 0, sizeof(*gw));
+    gw->va = l2ga;
+
+    /* Map the l4 root entry */
+    l4p = map_domain_gfn(p2m, base_gfn, &gw->l4mfn, &p2mt, &qt, &rc);
+    if ( !l4p )
+        goto map_err;
+    gw->l4e = l4p[guest_l4_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l4e.l4) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l4e.l4, 4) )
+        goto misconfig_err;
+
+    /* Map the l3 table */
+    base_gfn = guest_l4e_get_gfn(gw->l4e);
+    l3p = map_domain_gfn(p2m, base_gfn, &gw->l3mfn, &p2mt, &qt, &rc);
+    if ( l3p == NULL )
+        goto map_err;
+
+    /* Get the l3e and check its flags. */
+    gw->l3e = l3p[guest_l3_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l3e.l3) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l3e.l3, 3) )
+        goto misconfig_err;
+
+    sp = nept_sp_entry(gw->l3e.l3);
+    /* Super 1G entry */
+    if ( sp )
+    {
+        /* Generate a fake l1 table entry so callers don't all
+         * have to understand superpages. */
+        gfn_t start = guest_l3e_get_gfn(gw->l3e);
+
+        /* Increment the pfn by the right number of 4k pages. */
+        start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
+                     ((l2ga >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
+        gflags = (gw->l3e.l3 & 0x7f) | NEPT_1G_ENTRY_FLAG;
+        gw->l1e = guest_l1e_from_gfn(start, gflags);
+        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
+        goto done;
+    }
+
+    /* Map the l2 table */
+    base_gfn = guest_l3e_get_gfn(gw->l3e);
+    l2p = map_domain_gfn(p2m, base_gfn, &gw->l2mfn, &p2mt, &qt, &rc);
+    if ( l2p == NULL )
+        goto map_err;
+    /* Get the l2e */
+    gw->l2e = l2p[guest_l2_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l2e.l2) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l2e.l2, 2) )
+        goto misconfig_err;
+    sp = nept_sp_entry(gw->l2e.l2);
+
+    if ( sp )
+    {
+        gfn_t start = guest_l2e_get_gfn(gw->l2e);
+        gflags = (gw->l2e.l2 & 0x7f) | NEPT_2M_ENTRY_FLAG;
+
+        /* Increment the pfn by the right number of 4k pages. */
+        start = _gfn((gfn_x(start) & ~GUEST_L2_GFN_MASK) +
+                     guest_l1_table_offset(l2ga));
+        gw->l1e = guest_l1e_from_gfn(start, gflags);
+        gw->l1mfn = _mfn(INVALID_MFN);
+        goto done;
+    }
+    /* Not a superpage: carry on and find the l1e. */
+    base_gfn = guest_l2e_get_gfn(gw->l2e);
+    l1p = map_domain_gfn(p2m, base_gfn, &gw->l1mfn, &p2mt, &qt, &rc);
+    if ( l1p == NULL )
+        goto map_err;
+    /* Get the l1e */
+    gw->l1e = l1p[guest_l1_table_offset(l2ga)];
+    if ( !nept_present_check(gw->l1e.l1) )
+        goto non_present;
+    if ( nept_misconfiguration_check(gw->l1e.l1, 1) )
+        goto misconfig_err;
+
+    gflags = (gw->l1e.l1 & 0x7f) | NEPT_4K_ENTRY_FLAG;
+    gw->l1e.l1 = (gw->l1e.l1 & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto unmap;
+
+misconfig_err:
+    ret = EPT_TRANSLATE_MISCONFIG;
+    goto unmap;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+        ret = EPT_TRANSLATE_RETRY;
+    else
+        ret = EPT_TRANSLATE_ERR_PAGE;
+    goto unmap;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+
+unmap:
+    if ( l4p )
+    {
+        unmap_domain_page(l4p);
+        put_page(mfn_to_page(mfn_x(gw->l4mfn)));
+    }
+    if ( l3p )
+    {
+        unmap_domain_page(l3p);
+        put_page(mfn_to_page(mfn_x(gw->l3mfn)));
+    }
+    if ( l2p )
+    {
+        unmap_domain_page(l2p);
+        put_page(mfn_to_page(mfn_x(gw->l2mfn)));
+    }
+    if ( l1p )
+    {
+        unmap_domain_page(l1p);
+        put_page(mfn_to_page(mfn_x(gw->l1mfn)));
+    }
+    return ret;
+}
+
+/* Translate an L2 guest address to an L1 gpa via the L1 EPT paging structure. */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    walk_t gw;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            if ( likely(gw.l1e.l1 & NEPT_2M_ENTRY_FLAG) )
+            {
+                rwx_bits = gw.l4e.l4 & gw.l3e.l3 & gw.l2e.l2 & 0x7;
+                *page_order = 9;
+            }
+            else if ( gw.l1e.l1 & NEPT_4K_ENTRY_FLAG ) {
+                rwx_bits = gw.l4e.l4 & gw.l3e.l3 & gw.l2e.l2 & gw.l1e.l1 & 0x7;
+                *page_order = 0;
+            }
+            else if ( gw.l1e.l1 & NEPT_1G_ENTRY_FLAG  )
+            {
+                rwx_bits = gw.l4e.l4 & gw.l3e.l3  & 0x7;
+                *page_order = 18;
+            }
+            else
+                gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+
+            *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+            *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+            *exit_reason = EXIT_REASON_EPT_VIOLATION;
+            break;
+
+        case EPT_TRANSLATE_ERR_PAGE:
+            break;
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = EPT_TRANSLATE_MISCONFIG;
+            *exit_qual = 0;
+            *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 4967da1..409198c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
     /* Translate the GFN to an MFN */
     ASSERT(!paging_locked_by_me(v->domain));
     mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
-        
+
     if ( p2m_is_readonly(p2mt) )
     {
         put_gfn(v->domain, gfn);
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..600c52d 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn, 
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t *q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..649c511 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..8eb377b 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,12 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_SUCCEED   0
+#define EPT_TRANSLATE_VIOLATION 1
+#define EPT_TRANSLATE_ERR_PAGE  2
+#define EPT_TRANSLATE_MISCONFIG 3
+#define EPT_TRANSLATE_RETRY     4
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +202,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thwb6-0006ko-L1; Mon, 10 Dec 2012 06:12:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Thwb4-0006k0-E2
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:10 +0000
Received: from [193.109.254.147:6240] by server-13.bemta-14.messagelabs.com id
	6C/51-11239-93D75C05; Mon, 10 Dec 2012 06:12:09 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355119923!9510627!3
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10995 invoked from network); 10 Dec 2012 06:12:09 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:09 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:20 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997316"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:06 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:16 +0800
Message-Id: <1355162243-11857-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, JBeulich@suse.com, eddie.dong@intel.com,
	Xu Dongxiao <dongxiao.xu@intel.com>, jun.nakajima@intel.com,
	Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 04/11] nEPT: Do further permission check for
	successful translation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

If the permission check fails, inject an EPT violation vmexit into the guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Xu Dongxiao <dongxiao.xu@intel.com>
---
 xen/arch/x86/mm/hap/nested_ept.c |   24 ++++++++++++++++++++----
 1 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index da868e7..2d733a8 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -272,6 +272,16 @@ unmap:
     return ret;
 }
 
+static
+bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    if ( ((rwx_acc & 0x1) && !(rwx_bits & 0x1)) ||
+        ((rwx_acc & 0x2) && !(rwx_bits & 0x2 )) ||
+        ((rwx_acc & 0x4) && !(rwx_bits & 0x4 )) )
+        return 0;
+    return 1;
+}
+
 /* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
@@ -301,11 +311,17 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
                 rwx_bits = gw.l4e.l4 & gw.l3e.l3  & 0x7;
                 *page_order = 18;
             }
-            else
+            else {
                 gdprintk(XENLOG_ERR, "Uncorrect l1 entry!\n");
-
-            *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
-            break;
+                BUG();
+            }
+            if ( nept_permission_check(rwx_acc, rwx_bits) )
+            {
+                 *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
+                 break;
+            }
+            rc = EPT_TRANSLATE_VIOLATION;
+        /* Fall through to EPT violation if permission check fails. */
         case EPT_TRANSLATE_VIOLATION:
             *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
             *exit_reason = EXIT_REASON_EPT_VIOLATION;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbK-0006sM-DS; Mon, 10 Dec 2012 06:12:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbI-0006qr-28
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:24 +0000
Received: from [193.109.254.147:27225] by server-8.bemta-14.messagelabs.com id
	47/CA-05026-74D75C05; Mon, 10 Dec 2012 06:12:23 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355119939!9500028!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13891 invoked from network); 10 Dec 2012 06:12:20 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:20 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997361"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:17 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:23 +0800
Message-Id: <1355162243-11857-12-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 11/11] nVMX: Expose VPID capability to nested
	VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Virtualize VPID for the nested VMM, using the host's VPID
to emulate the guest's VPID. On each virtual vmentry, if the
guest's VPID has changed, allocate a new host VPID for the
L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   10 +++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   60 +++++++++++++++++++++++++++++++++++-
 xen/arch/x86/mm/hap/nested_ept.c   |    7 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +
 4 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 36f6d82..fb40392 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2626,10 +2626,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
             update_guest_eip();
         break;
+    case EXIT_REASON_INVVPID:
+        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
          * running in guest context, and the CPU checks that before getting
@@ -2747,8 +2750,11 @@ void vmx_vmenter_helper(void)
 
     if ( !cpu_has_vmx_vpid )
         goto out;
+    if ( nestedhvm_vcpu_in_guestmode(curr) )
+        p_asid =  &vcpu_nestedhvm(curr).nv_n2asid;
+    else
+        p_asid = &curr->arch.hvm_vcpu.n1asid;
 
-    p_asid = &curr->arch.hvm_vcpu.n1asid;
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
     new_asid = p_asid->asid;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ec875d2..28a8e78 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -42,6 +42,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 	goto out;
     }
     nvmx->ept.enabled = 0;
+    nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -849,6 +850,16 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     return ept_data->ept_ctl.eptp;
 }
 
+static bool_t nvmx_vpid_enabled(struct nestedvcpu *nvcpu)
+{
+    uint32_t second_cntl;
+
+    second_cntl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
+    if ( second_cntl & SECONDARY_EXEC_ENABLE_VPID )
+        return 1;
+    return 0;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -897,6 +908,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     if ( nestedhvm_paging_mode_hap(v) )
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
 
+    /* nested VPID support! */
+    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
+    {
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);
+        if ( nvmx->guest_vpid != new_vpid )
+        {
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
+            nvmx->guest_vpid = new_vpid;
+        }
+    }
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -1188,7 +1211,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
-        return X86EMUL_OKAY;        
+        return X86EMUL_OKAY;
     }
 
     launched = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
@@ -1363,6 +1386,9 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     unsigned long eptp;
     u64 inv_type;
 
+    if(!cpu_has_vmx_ept)
+        return X86EMUL_EXCEPTION;
+
     if ( decode_vmx_inst(regs, &decode, &eptp, 0)
              != X86EMUL_OKAY )
         return X86EMUL_EXCEPTION;
@@ -1401,6 +1427,37 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long vpid;
+    u64 inv_type;
+
+    if(!cpu_has_vmx_vpid)
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &vpid, 0)
+             != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG,"inv_type:%ld, vpid:%lx\n", inv_type, vpid);
+
+    switch ( inv_type ){
+        /* Just invalidate all tlb entries for all types! */
+        case INVVPID_INDIVIDUAL_ADDR:
+	    case INVVPID_SINGLE_CONTEXT:
+	    case INVVPID_ALL_CONTEXT:
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
+            break;
+	    default:
+		    return X86EMUL_EXCEPTION;
+	}
+    vmreturn(regs, VMSUCCEED);
+
+    return X86EMUL_OKAY;
+}
+
 /*
  * Capability reporting
  */
@@ -1458,6 +1515,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* 1-seetings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
                SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+               SECONDARY_EXEC_ENABLE_VPID |
                SECONDARY_EXEC_ENABLE_EPT;
         data = gen_vmx_msr(data, 0, host_data);
         break;
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 8dfb70a..d0be5ce 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -48,7 +48,7 @@
 #define EPT_EMT_WB  6
 #define EPT_EMT_UC  0
 
-#define NEPT_VPID_CAP_BITS 0x0000000006134140ul
+#define NEPT_VPID_CAP_BITS 0xf0106134140ul
 
 #define NEPT_1G_ENTRY_FLAG (1 << 11)
 #define NEPT_2M_ENTRY_FLAG (1 << 10)
@@ -126,8 +126,9 @@ static bool_t nept_present_check(uint64_t entry)
 
 uint64_t nept_get_ept_vpid_cap(void)
 {
-    /*TODO: exposed ept and vpid features*/
-    return NEPT_VPID_CAP_BITS;
+    if (cpu_has_vmx_ept && cpu_has_vmx_vpid)
+        return NEPT_VPID_CAP_BITS;
+    return 0;
 }
 
 static uint32_t
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index fcdce62..1e7a6d7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,6 +37,7 @@ struct nestedvmx {
         uint32_t exit_reason;
         uint32_t exit_qual;
     } ept;
+    uint32_t guest_vpid;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -191,6 +192,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
 int nvmx_handle_invept(struct cpu_user_regs *regs);
+int nvmx_handle_invvpid(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbK-0006sM-DS; Mon, 10 Dec 2012 06:12:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbI-0006qr-28
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:24 +0000
Received: from [193.109.254.147:27225] by server-8.bemta-14.messagelabs.com id
	47/CA-05026-74D75C05; Mon, 10 Dec 2012 06:12:23 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355119939!9500028!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13891 invoked from network); 10 Dec 2012 06:12:20 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 06:12:20 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997361"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:17 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:23 +0800
Message-Id: <1355162243-11857-12-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 11/11] nVMX: Expose VPID capability to nested
	VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Virtualize VPID for the nested vmm, use host's VPID
to emualte guest's VPID. For each virtual vmentry, if
guest'v vpid is changed, allocate a new host VPID for
L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   10 +++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   60 +++++++++++++++++++++++++++++++++++-
 xen/arch/x86/mm/hap/nested_ept.c   |    7 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +
 4 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 36f6d82..fb40392 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2626,10 +2626,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
             update_guest_eip();
         break;
+    case EXIT_REASON_INVVPID:
+        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
          * running in guest context, and the CPU checks that before getting
@@ -2747,8 +2750,11 @@ void vmx_vmenter_helper(void)
 
     if ( !cpu_has_vmx_vpid )
         goto out;
+    if ( nestedhvm_vcpu_in_guestmode(curr) )
+        p_asid =  &vcpu_nestedhvm(curr).nv_n2asid;
+    else
+        p_asid = &curr->arch.hvm_vcpu.n1asid;
 
-    p_asid = &curr->arch.hvm_vcpu.n1asid;
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
     new_asid = p_asid->asid;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ec875d2..28a8e78 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -42,6 +42,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 	goto out;
     }
     nvmx->ept.enabled = 0;
+    nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -849,6 +850,16 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     return ept_data->ept_ctl.eptp;
 }
 
+static bool_t nvmx_vpid_enabled(struct nestedvcpu *nvcpu)
+{
+    uint32_t second_cntl;
+
+    second_cntl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
+    if ( second_cntl & SECONDARY_EXEC_ENABLE_VPID )
+        return 1;
+    return 0;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -897,6 +908,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     if ( nestedhvm_paging_mode_hap(v) )
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
 
+    /* nested VPID support! */
+    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
+    {
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);
+        if ( nvmx->guest_vpid != new_vpid )
+        {
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
+            nvmx->guest_vpid = new_vpid;
+        }
+    }
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -1188,7 +1211,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
-        return X86EMUL_OKAY;        
+        return X86EMUL_OKAY;
     }
 
     launched = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
@@ -1363,6 +1386,9 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     unsigned long eptp;
     u64 inv_type;
 
+    if(!cpu_has_vmx_ept)
+        return X86EMUL_EXCEPTION;
+
     if ( decode_vmx_inst(regs, &decode, &eptp, 0)
              != X86EMUL_OKAY )
         return X86EMUL_EXCEPTION;
@@ -1401,6 +1427,37 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long vpid;
+    u64 inv_type;
+
+    if(!cpu_has_vmx_vpid)
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &vpid, 0)
+             != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG,"inv_type:%ld, vpid:%lx\n", inv_type, vpid);
+
+    switch ( inv_type ){
+        /* Just invalidate all tlb entries for all types! */
+        case INVVPID_INDIVIDUAL_ADDR:
+	    case INVVPID_SINGLE_CONTEXT:
+	    case INVVPID_ALL_CONTEXT:
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
+            break;
+	    default:
+		    return X86EMUL_EXCEPTION;
+	}
+    vmreturn(regs, VMSUCCEED);
+
+    return X86EMUL_OKAY;
+}
+
 /*
  * Capability reporting
  */
@@ -1458,6 +1515,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         /* 1-seetings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
                SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+               SECONDARY_EXEC_ENABLE_VPID |
                SECONDARY_EXEC_ENABLE_EPT;
         data = gen_vmx_msr(data, 0, host_data);
         break;
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 8dfb70a..d0be5ce 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -48,7 +48,7 @@
 #define EPT_EMT_WB  6
 #define EPT_EMT_UC  0
 
-#define NEPT_VPID_CAP_BITS 0x0000000006134140ul
+#define NEPT_VPID_CAP_BITS 0xf0106134140ul
 
 #define NEPT_1G_ENTRY_FLAG (1 << 11)
 #define NEPT_2M_ENTRY_FLAG (1 << 10)
@@ -126,8 +126,9 @@ static bool_t nept_present_check(uint64_t entry)
 
 uint64_t nept_get_ept_vpid_cap(void)
 {
-    /*TODO: exposed ept and vpid features*/
-    return NEPT_VPID_CAP_BITS;
+    if ( cpu_has_vmx_ept && cpu_has_vmx_vpid )
+        return NEPT_VPID_CAP_BITS;
+    return 0;
 }
 
 static uint32_t
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index fcdce62..1e7a6d7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,6 +37,7 @@ struct nestedvmx {
         uint32_t exit_reason;
         uint32_t exit_qual;
     } ept;
+    uint32_t guest_vpid;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -191,6 +192,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
 int nvmx_handle_invept(struct cpu_user_regs *regs);
+int nvmx_handle_invvpid(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:12:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThwbE-0006pS-VH; Mon, 10 Dec 2012 06:12:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1ThwbD-0006oD-Pu
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:12:20 +0000
Received: from [85.158.137.99:20055] by server-14.bemta-3.messagelabs.com id
	42/67-31424-34D75C05; Mon, 10 Dec 2012 06:12:19 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355119931!13499244!6
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27974 invoked from network); 10 Dec 2012 06:12:18 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 06:12:18 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 09 Dec 2012 22:11:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,249,1355126400"; d="scan'208";a="254997353"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by orsmga002.jf.intel.com with ESMTP; 09 Dec 2012 22:12:16 -0800
From: xiantao.zhang@intel.com
To: xen-devel@lists.xensource.com
Date: Tue, 11 Dec 2012 01:57:22 +0800
Message-Id: <1355162243-11857-11-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Cc: eddie.dong@intel.com, Zhang Xiantao <xiantao.zhang@intel.com>, keir@xen.org,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 10/11] nEPT: expose EPT capability to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Expose EPT's basic features to the L1 VMM.
The EPT A/D bit feature is not supported.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |    6 +++++-
 xen/arch/x86/mm/hap/nested_ept.c   |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 ++
 3 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 07ca90e..ec875d2 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1457,7 +1457,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
-               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+               SECONDARY_EXEC_ENABLE_EPT;
         data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
@@ -1510,6 +1511,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_MISC:
         gdprintk(XENLOG_WARNING, "VMX MSR %x not fully supported yet.\n", msr);
         break;
+    case MSR_IA32_VMX_EPT_VPID_CAP:
+        data = nept_get_ept_vpid_cap();
+        break;
     default:
         r = 0;
         break;
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 637db1a..8dfb70a 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -48,7 +48,7 @@
 #define EPT_EMT_WB  6
 #define EPT_EMT_UC  0
 
-#define NEPT_VPID_CAP_BITS 0
+#define NEPT_VPID_CAP_BITS 0x0000000006134140ul
 
 #define NEPT_1G_ENTRY_FLAG (1 << 11)
 #define NEPT_2M_ENTRY_FLAG (1 << 10)
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index cf5ed9a..fcdce62 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -206,6 +206,8 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+uint64_t nept_get_ept_vpid_cap(void);
+
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
                         unsigned long *l1gfn, uint8_t *p2m_acc,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 06:28:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 06:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thwr5-00007R-96; Mon, 10 Dec 2012 06:28:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <maheen_butt26@yahoo.com>) id 1Thwr4-00007J-1V
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 06:28:42 +0000
Received: from [85.158.139.211:52172] by server-9.bemta-5.messagelabs.com id
	5B/BB-10690-91185C05; Mon, 10 Dec 2012 06:28:41 +0000
X-Env-Sender: maheen_butt26@yahoo.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355120919!18275243!1
X-Originating-IP: [98.138.90.153]
X-SpamReason: No, hits=0.7 required=7.0 tests=FROM_HAS_ULINE_NUMS,
	HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21869 invoked from network); 10 Dec 2012 06:28:40 -0000
Received: from nm5-vm2.bullet.mail.ne1.yahoo.com (HELO
	nm5-vm2.bullet.mail.ne1.yahoo.com) (98.138.90.153)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Dec 2012 06:28:40 -0000
Received: from [98.138.90.55] by nm5.bullet.mail.ne1.yahoo.com with NNFMP;
	10 Dec 2012 06:27:39 -0000
Received: from [98.138.87.11] by tm8.bullet.mail.ne1.yahoo.com with NNFMP;
	10 Dec 2012 06:27:39 -0000
Received: from [127.0.0.1] by omp1011.mail.ne1.yahoo.com with NNFMP;
	10 Dec 2012 06:27:39 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 480547.82467.bm@omp1011.mail.ne1.yahoo.com
Received: (qmail 14022 invoked by uid 60001); 10 Dec 2012 06:27:39 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1355120859; bh=Li4Rfb5ZCgFKysP/NuRh26ffzP9kArFaTzsuvLbHbqg=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=0d3X+hGQtnC72NkOvGa/gqIckveaTJlPjWfIGSGZ5ozYAqmH9b1ojHFfbYlwNPkEoff63OQxFY2qXcGvhfzOOVCk8PEpoAt22wpHGd11lZA3vBg5zVcZ4cUO5wyCSenFFFh6wtDrF/tbtGbr7LkETGhWHKBgNAdBiniaO+vhqUU=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=phFmWXXgzwKFr6I9PThWFPK0yAqtF3P2NWpJ7axYDsQgXQeNT2mI929zSQyJbVd7iVDAQcPmn9QZlTgeHaZb9QYpue8kp1wL+y8a4hiUK3UV4bJgjISVMzFRCoGIVGh++f0B1CPkkOf/QoaPzyIBab6ikUo+vMfvRVbBWoW0knc=;
X-YMail-OSG: Ja_DbpMVM1lINhhf7jkSaj5nsNiI08B6tfwW4BSN7BujUIg
	4EzzWBP_by_CLvv8Uandziz_lSPXzY5_z0j6WurE6SZXztC0Eh6HcWp4QHQa
	Ip9zusP617sEdR9Vas0Jt.pgWLayYfkYPYbX039pJ5yrWkw.8ozpVoLs7Zmn
	aSMCOpYHgfXyLQRgLyPaU8lIb1PESInPKFny7OeE64PK7QmeT5.Ry4QH1ZJF
	7jK5CafqJAALLUWmcUkfpj1TanolJo_XKHlTGGsYvLRSxpQQP3hdwxE29efQ
	ZGZMbb3Jql6pbvmlV13H8oEEPqg8VncTJsxCKethzeuffJGG2_Nyi4E_kCXj
	zzXZZOJgJsqWC81qCSEOt2J4UKoNFIyQFdoo4Yyk6R_Ds9zlZfxaiODHszXy
	VDe.RAjpoiklWKSvsSroV.aqGbHzV3tjYZLE313CESw1ho556nlYxAxcVCaE
	kRFQpYqkXazMPpqCQwPHTLffBgXhjKdb0laaolVTt9xL.sJqjrU.tNrW.st. 6S1Zr
Received: from [58.27.199.186] by web120003.mail.ne1.yahoo.com via HTTP;
	Sun, 09 Dec 2012 22:27:39 PST
X-Rocket-MIMEInfo: 001.001,
	SGksCgpJbiBzdGFydF94ZW4oKTp4ZW4vYXJjaC94ODYvc2V0dXAuYwpoYXMgZnVuY3Rpb24gc21wX3ByZXBhcmVfYm9vdF9jcHUoKSB3aGljaCBhbHNvIGV4aXN0IGluIHZhbmlsbGEga2VybmVsCnRoZSBrZXJuZWwgdmVyc2lvbiBpcyBnaXZlbiB0aGF0Ogp2b2lkIF9faW5pdCBuYXRpdmVfc21wX3ByZXBhcmVfYm9vdF9jcHUodm9pZCkKewrCoMKgwqAgaW50IG1lID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwrCoMKgwqAgc3dpdGNoX3RvX25ld19nZHQobWUpOwrCoMKgwqAgLyogYWxyZWFkeSBzZXQgbWUgaW4gY3ABMAEBAQE-
X-Mailer: YahooMailWebService/0.8.128.478
Message-ID: <1355120859.46777.YahooMailNeo@web120003.mail.ne1.yahoo.com>
Date: Sun, 9 Dec 2012 22:27:39 -0800 (PST)
From: maheen butt <maheen_butt26@yahoo.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
MIME-Version: 1.0
Subject: [Xen-devel] smp_prepare_boot_cpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: maheen butt <maheen_butt26@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8191420588852697460=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8191420588852697460==
Content-Type: multipart/alternative; boundary="-1007318780-1574078396-1355120859=:46777"

---1007318780-1574078396-1355120859=:46777
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit

Hi,

In start_xen() (xen/arch/x86/setup.c) there is a function
smp_prepare_boot_cpu() which also exists in the vanilla kernel.
The kernel version is:

void __init native_smp_prepare_boot_cpu(void)
{
    int me = smp_processor_id();
    switch_to_new_gdt(me);
    /* already set me in cpu_online_mask in boot_cpu_init() */
    cpumask_set_cpu(me, cpu_callout_mask);
    per_cpu(cpu_state, me) = CPU_ONLINE;
}

Whereas in the case of Xen we have:

void __init smp_prepare_boot_cpu(void)
{
    cpumask_set_cpu(smp_processor_id(), &cpu_online_map);
    cpumask_set_cpu(smp_processor_id(), &cpu_present_map);
}

My question is: why is there no need to switch the GDT pointer to the
current CPU's segment, as is done in the kernel code?

Thanks
---1007318780-1574078396-1355120859=:46777--


--===============8191420588852697460==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8191420588852697460==--



From xen-devel-bounces@lists.xen.org Mon Dec 10 07:23:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 07:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thxhf-0000k1-UG; Mon, 10 Dec 2012 07:23:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Thxhe-0000jw-9z
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 07:23:02 +0000
Received: from [85.158.143.35:28787] by server-1.bemta-4.messagelabs.com id
	47/91-28401-5DD85C05; Mon, 10 Dec 2012 07:23:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355124177!5668657!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31944 invoked from network); 10 Dec 2012 07:22:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 07:22:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="23851"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 07:22:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 07:22:56 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ThxhY-0001zk-O8;
	Mon, 10 Dec 2012 07:22:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ThxhY-0007sL-NV;
	Mon, 10 Dec 2012 07:22:56 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14658-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 07:22:56 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14658: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14658 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14658/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
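    
    [Archive note: the check described above can be sketched as follows.
    This is an illustration of the range-check idea, not the actual Xen
    patch; the MAX_ORDER value and function name are stand-ins.]
    
    ```c
    /* Sketch of an XSA-31 style range check: reject a guest-supplied
     * extent order above a fixed bound before looping 1UL << order
     * times over it.  Names and values are illustrative. */
    #include <assert.h>
    
    #define MAX_ORDER 20  /* stand-in for Xen's MAX_ORDER */
    
    /* Returns 0 if the order is acceptable, -1 otherwise. */
    static int check_extent_order(unsigned int order)
    {
        if (order > MAX_ORDER)
            return -1;  /* would cause an almost unbounded loop */
        return 0;
    }
    
    int main(void)
    {
        assert(check_extent_order(0) == 0);
        assert(check_extent_order(MAX_ORDER) == 0);
        /* an unchecked order of 63 would mean 2^63 iterations */
        assert(check_extent_order(63) == -1);
        return 0;
    }
    ```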
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:33:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:33:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thzil-0002tZ-CU; Mon, 10 Dec 2012 09:32:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Thzij-0002tU-94
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:32:17 +0000
Received: from [193.109.254.147:33071] by server-5.bemta-14.messagelabs.com id
	64/BE-10257-02CA5C05; Mon, 10 Dec 2012 09:32:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355131933!2062834!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1733 invoked from network); 10 Dec 2012 09:32:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 09:32:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 09:32:13 +0000
Message-Id: <50C5BA2A02000078000AF4F4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 09:32:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
	<3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
	<50C1E67F02000078000AEEEF@nat28.tlf.novell.com>
	<50C25D3D.10407@citrix.com>
In-Reply-To: <50C25D3D.10407@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 V3] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 22:18, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 07/12/2012 11:52, Jan Beulich wrote:
>>>>> On 06.12.12 at 22:42, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> +
>>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>>> +     * invokes do_nmi_crash (above), which cause them to write state and
>>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>>> +     * cause it to return to this function ASAP.
>>> +     */
>>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>>> +        if ( idt_tables[i] )
>>> +        {
>>> +
>>> +            if ( i == cpu )
>>> +            {
>>> +                /* Disable the interrupt stack tables for this MCE and
>>> +                 * NMI handler (shortly to become a nop) as there is a 1
>>> +                 * instruction race window where NMIs could be
>>> +                 * re-enabled and corrupt the exception frame, leaving
>>> +                 * us unable to continue on this crash path (which half
>>> +                 * defeats the point of using the nop handler in the
>>> +                 * first place).
>>> +                 *
>>> +                 * This update is safe from a security point of view, as
>>> +                 * this pcpu is never going to try to sysret back to a
>>> +                 * PV vcpu.
>>> +                 */
>>> +                set_ist(&idt_tables[i][TRAP_nmi],           IST_NONE);
>>> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
>>> +
>>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
>> This makes the first set_ist() above pointless, doesn't it?
> 
> No.  The set_ist() is to remove the possibility of stack corruption from
> reentrant NMIs, while the trap_nop handler is so we don't get diverted
> into the regular NMI handler.  There is nothing the NMI handler would do
> which could alter the outcome, and there are many cases where the
> regular NMI handler would try to panic, starting us reentrantly on the
> kexec path again (where we would deadlock on the one_cpu_only() check).

I think you didn't get the point of the question: _set_gate() clears
the IST field of the descriptor anyway, so why clear it separately
first, and then overwrite it again?

>>> +            }
>>> +            else
>>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0, &nmi_crash);
>>> +        }
>>> +
>>>      /* Ensure the new callback function is set before sending out the NMI. 
> */
>>>      wmb();
>>>  ...
>>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>>> +ENTRY(enable_nmis)
>>> +        pushq %rax
>> What's the point of saving %rax here, btw?
>>
>> Jan
> 
> Because at the moment I believe I might need to call it from asm
> context, when doing some of the later fixes.  I figured that it was
> better to make it safe now, rather than patch it up later.

I don't think that's good practice - if you end up not calling the
thing from assembly code, the question about the purpose of saving
%rax will resurface sooner or later. Plus, the patch by itself
wouldn't really explain why this is so (which might be of
interest in the context of backporting it to older trees).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:33:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thzj0-0002uA-P6; Mon, 10 Dec 2012 09:32:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Thziz-0002u4-Th
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 09:32:34 +0000
Received: from [85.158.139.83:44152] by server-14.bemta-5.messagelabs.com id
	21/18-09538-03CA5C05; Mon, 10 Dec 2012 09:32:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355131951!21780912!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7160 invoked from network); 10 Dec 2012 09:32:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 09:32:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="27145"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 09:32:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 09:32:31 +0000
Message-ID: <1355131949.31710.95.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 10 Dec 2012 09:32:29 +0000
In-Reply-To: <20121207170556.GA6165@phenom.dumpdata.com>
References: <20121207170556.GA6165@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Linux Kernel Summit 2012 hallway talks - PV MMU, PVH,
 hpa, tglrx, stefano and me.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:05 +0000, Konrad Rzeszutek Wilk wrote:

>    Anyhow, once PVH works - so it can do SMP guests, delivers
>    interrupts properly, etc. - we would obsolete the PV MMU
>    mode in 5 years. This means that arch/x86/xen/p2m.c and arch/x86/xen/mmu.c,
>    along with a host of paravirt interfaces, would be #ifdef-ed out.
>    There would also be a note in the Documentation/deprecate-schedule
>    pointing that out. If everything aligns itself time-wise, that
>    means 2013 is when PVH has its debut and will have its kinks worked
>    out. 2018 is when the PV MMU would be obsoleted. The impact is that in
>    2018 users would need an Intel VT-d or AMD-Vi IOMMU capable machine to run
>    the latest Linux dom0 kernel with device drivers on x86.
>    You would still be able to run the ancient PV kernels (like 2.6.18) as
>    guests - just not as a dom0.

I'm not sure I follow -- why does this future change in mainline Linux
have any impact on other kernel trees and their ability to run as dom0?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:35:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thzl3-000340-Ff; Mon, 10 Dec 2012 09:34:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Thzl2-00033u-Fz
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:34:40 +0000
Received: from [193.109.254.147:18949] by server-11.bemta-14.messagelabs.com
	id C6/B8-29027-FACA5C05; Mon, 10 Dec 2012 09:34:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355132079!9795544!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16388 invoked from network); 10 Dec 2012 09:34:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 09:34:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 09:34:39 +0000
Message-Id: <50C5BABC02000078000AF4F7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 09:34:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad@kernel.org>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<20121207212336.GD9664@phenom.dumpdata.com>
In-Reply-To: <20121207212336.GD9664@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Paolo Bonzini <pbonzini@redhat.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 22:23, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> On Fri, Nov 30, 2012 at 08:33:34AM +0000, Jan Beulich wrote:
>> >>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> > On some machines, the location at 0x40e does not point to the beginning
>> > of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>> > area of the EBDA, while the option ROMs place their data below that
>> > segment.
>> > 
>> > For this reason, 0x413 is actually a better source than 0x40e to get
>> > the location of the real-mode trampoline.  But it is even better to
>> > fetch the information from the multiboot structure, where the boot
>> > loader has placed the data for us already.
>> 
>> I think if anything we really should make this a minimum calculation
>> of all three (sanity checked) values, rather than throwing the other
>> sources out. It's just not certain enough that we can trust all
>> multiboot implementations.
>> 
>> Of course, ideally we'd consult the memory map, but the E820 one
>> is unavailable at that point (and getting at it would create a
>> chicken-and-egg problem).
> 
> Can we scan the memory for the possible EBDA regions? There is an
> "EBDA" type header in those regions, if I recall?

I don't think there are any signatures - the value at (real mode)
0040:000e has to be relied upon.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:38:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThzoZ-0003H8-3q; Mon, 10 Dec 2012 09:38:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1ThzoX-0003H2-AR
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:38:17 +0000
Received: from [193.109.254.147:30427] by server-2.bemta-14.messagelabs.com id
	97/51-20829-88DA5C05; Mon, 10 Dec 2012 09:38:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355132290!9796100!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10650 invoked from network); 10 Dec 2012 09:38:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 09:38:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 09:38:10 +0000
Message-Id: <50C5BB8E02000078000AF50A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 09:38:06 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad@kernel.org>
References: <50B4B40B02000078000ABA19@nat28.tlf.novell.com>
	<20121207213437.GF9664@phenom.dumpdata.com>
In-Reply-To: <20121207213437.GF9664@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] vscsiif: allow larger segments-per-request
 values
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 22:34, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> On Tue, Nov 27, 2012 at 11:37:31AM +0000, Jan Beulich wrote:
>> At least certain tape devices require fixed size blocks to be operated
>> upon, i.e. breaking up of I/O requests is not permitted. Consequently
>> we need an interface extension that (leaving aside implementation
>> limitations) doesn't impose a limit on the number of segments that can
>> be associated with an individual request.
>> 
>> This, in turn, excludes the blkif extension FreeBSD folks implemented,
>> as that still imposes an upper limit (the actual I/O request still
>> specifies the full number of segments - as an 8-bit quantity -, and
>> subsequent ring slots get used to carry the excess segment
>> descriptors).
>> 
>> The alternative therefore is to allow the frontend to pre-set segment
>> descriptors _before_ actually issuing the I/O request. I/O will then
>> be done by the backend for the accumulated set of segments.
> 
> How do you deal with migration to older backends?

As being able to use larger blocks is - as described - a functional
requirement in some cases, imo there's little point in supporting
such migration.

>> To properly associate segment preset operations with the main request,
>> the rqid-s between them should match (originally I had hoped to use
>> this to avoid producing individual responses for the pre-set
>> operations, but that turned out to violate the underlying shared ring
>> implementation).
> 
> Right. If we could separate those two, it would solve that.
> So separate 'request' and 'response' rings.

Yes. But that would be an entirely incompatible change, so only
suitable in the context of (as for blkif) re-designing the whole
thing.

>> Negotiation of the maximum number of segments a particular backend
>> implementation supports happens through a new "segs-per-req" xenstore
>> node.
> 
> No 'feature-segs-per-req'?

To me, a name like this implies a boolean nature. Hence I prefer
not to have the "feature-" prefix.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

requirement in some cases, imo there's little point in supporting
such migration.

>> To properly associate segment preset operations with the main request,
>> the rqid-s between them should match (originally I had hoped to use
>> this to avoid producing individual responses for the pre-set
>> operations, but that turned out to violate the underlying shared ring
>> implementation).
> 
> Right. If we could separate those two it would solve that.
> So separate 'request' and 'response' rings.

Yes. But that would be an entirely incompatible change, so only
suitable in the context of (as for blkif) re-designing the whole
thing.

>> Negotiation of the maximum number of segments a particular backend
>> implementation supports happens through a new "segs-per-req" xenstore
>> node.
> 
> No 'feature-segs-per-req'?

To me, a name like this implies boolean nature. Hence I prefer to
not have the "feature-" prefix.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:39:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thzpm-0003Ls-Ja; Mon, 10 Dec 2012 09:39:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Thzpl-0003Li-DI
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:39:33 +0000
Received: from [85.158.143.99:62113] by server-3.bemta-4.messagelabs.com id
	DF/EE-18211-4DDA5C05; Mon, 10 Dec 2012 09:39:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1355132372!27883671!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29082 invoked from network); 10 Dec 2012 09:39:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 09:39:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="27350"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 09:39:32 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 09:39:31 +0000
Message-ID: <1355132370.31710.98.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Dec 2012 09:39:30 +0000
In-Reply-To: <alpine.DEB.2.02.1212071728350.8801@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212071728350.8801@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/9] xen: arm: parse modules from DT during
	early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:30 +0000, Stefano Stabellini wrote:
> On Thu, 6 Dec 2012, Ian Campbell wrote:
> > The bootloader should populate /chosen/modules/module@<N>/ for each
> > module it wishes to pass to the hypervisor. The content of these nodes
> > is described in docs/misc/arm/device-tree/booting.txt
> > 
> > The hypervisor parses for 2 types of module, linux zImages and linux
> > initrds. Currently we don't do anything with them.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > v4: Use /chosen/modules/module@N
> >     Identify module type by compatible property not number.
> > v3: Use a reg = < > property for the module address/length.
> > v2: Reserve the zeroeth module for Xen itself (not used yet)
> >     Use a more idiomatic DT layout
> >     Document said layout
> > ---
> >  docs/misc/arm/device-tree/booting.txt |   25 ++++++++++
> >  xen/common/device_tree.c              |   86 +++++++++++++++++++++++++++++++++
> >  xen/include/xen/device_tree.h         |   14 +++++
> >  3 files changed, 125 insertions(+), 0 deletions(-)
> >  create mode 100644 docs/misc/arm/device-tree/booting.txt
> > 
> > diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> > new file mode 100644
> > index 0000000..94cd3f1
> > --- /dev/null
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -0,0 +1,25 @@
> > +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> > +node of the device tree.
> > +
> > +Each node has the form /chosen/modules/module@<N> and contains the following
> > +properties:
> > +
> > +- compatible
> > +
> > +	Must be:
> > +
> > +		"xen,<type>", "xen,multiboot-module"
> > +
> > +	where <type> must be one of:
> > +
> > +	- "linux-zimage" -- the dom0 kernel
> > +	- "linux-initrd" -- the dom0 ramdisk
> > +
> > +- reg
> > +
> > +	Specifies the physical address of the module in RAM and the
> > +	length of the module.
> > +
> > +- bootargs (optional)
> > +
> > +	Command line associated with this module
> > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > index da0af77..4bb640e 100644
> > --- a/xen/common/device_tree.c
> > +++ b/xen/common/device_tree.c
> > @@ -270,6 +270,90 @@ static void __init process_cpu_node(const void *fdt, int node,
> >      cpumask_set_cpu(start, &cpu_possible_map);
> >  }
> >  
> > +static int __init process_chosen_modules_node(const void *fdt, int node,
> > +                                              const char *name, int *depth,
> > +                                              u32 address_cells, u32 size_cells)
> > +{
> > +    const struct fdt_property *prop;
> > +    const u32 *cell;
> > +    int nr, nr_modules = 0;
> > +    struct dt_mb_module *mod;
> > +    int len;
> > +
> > +    for ( *depth = 1;
> > +          *depth >= 1;
> > +          node = fdt_next_node(fdt, node, depth) )
> > +    {
> > +        name = fdt_get_name(fdt, node, NULL);
> > +        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
> > +
> > +            if ( fdt_node_check_compatible(fdt, node,
> > +                                           "xen,multiboot-module" ) != 0 )
> > +                early_panic("%s not a compatible module node\n", name);
> > +
> > +            if ( fdt_node_check_compatible(fdt, node,
> > +                                           "xen,linux-zimage") == 0 )
> > +                nr = 1;
> > +            else if ( fdt_node_check_compatible(fdt, node,
> > +                                                "xen,linux-initrd") == 0)
> > +                nr = 2;
> > +            else
> > +                early_panic("%s not a known xen multiboot type\n", name);
> > +
> > +            if ( nr > nr_modules )
> > +                nr_modules = nr;
> > +
> > +            mod = &early_info.modules.module[nr];
> > +
> > +            prop = fdt_get_property(fdt, node, "reg", NULL);
> > +            if ( !prop )
> > +                early_panic("node %s missing `reg' property\n", name);
> > +
> > +            cell = (const u32 *)prop->data;
> > +            device_tree_get_reg(&cell, address_cells, size_cells,
> > +                                &mod->start, &mod->size);
> > +
> > +            prop = fdt_get_property(fdt, node, "bootargs", &len);
> > +            if ( prop )
> > +            {
> > +                if ( len > sizeof(mod->cmdline) )
> > +                    early_panic("module %d command line too long\n", nr);
> > +
> > +                safe_strcpy(mod->cmdline, prop->data);
> > +            }
> > +            else
> > +                mod->cmdline[0] = 0;
> > +        }
> > +    }
> > +
> > +    for ( nr = 1 ; nr < nr_modules ; nr++ )
> > +    {
> > +        mod = &early_info.modules.module[nr];
> > +        if ( !mod->start || !mod->size )
> > +            early_panic("module %d missing / invalid\n", nr);
> > +    }
> > +
> > +    early_info.modules.nr_mods = nr_modules;
> > +    return node;
> > +}
> > +
> > +static void __init process_chosen_node(const void *fdt, int node,
> > +                                       const char *name,
> > +                                       u32 address_cells, u32 size_cells)
> > +{
> > +    int depth;
> > +
> > +    for ( depth = 0;
> > +          depth >= 0;
> > +          node = fdt_next_node(fdt, node, &depth) )
> > +    {
> > +        name = fdt_get_name(fdt, node, NULL);
> > +        if ( depth == 1 && strcmp(name, "modules") == 0 )
> > +            node = process_chosen_modules_node(fdt, node, name, &depth,
> > +                                               address_cells, size_cells);
> > +    }
> > +}
> > +
> >  static int __init early_scan_node(const void *fdt,
> >                                    int node, const char *name, int depth,
> >                                    u32 address_cells, u32 size_cells,
> > @@ -279,6 +363,8 @@ static int __init early_scan_node(const void *fdt,
> >          process_memory_node(fdt, node, name, address_cells, size_cells);
> >      else if ( device_tree_type_matches(fdt, node, "cpu") )
> >          process_cpu_node(fdt, node, name, address_cells, size_cells);
> > +    else if ( device_tree_node_matches(fdt, node, "chosen") )
> > +        process_chosen_node(fdt, node, name, address_cells, size_cells);
> >  
> >      return 0;
> >  }
> 
> You have really written a lot of code here!
> I would have thought that just matching on the compatible string would
> be enough:
> 
> else if ( device_tree_node_matches(fdt, node, "linux-zimage") )
>      process_linuxzimage_node(fdt, node, name, address_cells, size_cells);
> else if ( device_tree_node_matches(fdt, node, "linux-initrd") )
>      process_linuxinitrd_node(fdt, node, name, address_cells, size_cells);
> 
> so that your process_linuxzimage_node and process_linuxinitrd_node start
> from the right node and have everything they need to parse it

Is the tree structure of Device Tree meaningless? I'd have thought that
a compatible node would only have meaning at a particular place in the
tree. Granted compatible nodes are often pretty specific and precise,
but is that inherent enough in DT that we can rely on it?


> 
> 
> 
> > diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> > index 4d010c0..c383677 100644
> > --- a/xen/include/xen/device_tree.h
> > +++ b/xen/include/xen/device_tree.h
> > @@ -15,6 +15,7 @@
> >  #define DEVICE_TREE_MAX_DEPTH 16
> >  
> >  #define NR_MEM_BANKS 8
> > +#define NR_MODULES 2
> >  
> >  struct membank {
> >      paddr_t start;
> > @@ -26,8 +27,21 @@ struct dt_mem_info {
> >      struct membank bank[NR_MEM_BANKS];
> >  };
> >  
> > +struct dt_mb_module {
> > +    paddr_t start;
> > +    paddr_t size;
> > +    char cmdline[1024];
> > +};
> > +
> > +struct dt_module_info {
> > +    int nr_mods;
> > +    /* Module 0 is Xen itself, followed by the provided modules-proper */
> > +    struct dt_mb_module module[NR_MODULES + 1];
> > +};
> > +
> >  struct dt_early_info {
> >      struct dt_mem_info mem;
> > +    struct dt_module_info modules;
> >  };
> >  
> >  typedef int (*device_tree_node_func)(const void *fdt,
> > -- 
> > 1.7.9.1
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:42:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ThzsV-0003ad-63; Mon, 10 Dec 2012 09:42:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1ThzsT-0003aT-IT
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:42:21 +0000
Received: from [85.158.138.51:24475] by server-10.bemta-3.messagelabs.com id
	3E/33-19806-C7EA5C05; Mon, 10 Dec 2012 09:42:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355132534!28100066!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3593 invoked from network); 10 Dec 2012 09:42:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 09:42:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="27434"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 09:42:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 09:42:14 +0000
Message-ID: <1355132533.31710.100.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Dec 2012 09:42:13 +0000
In-Reply-To: <alpine.DEB.2.02.1212071733450.8801@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-9-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212071733450.8801@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] xen: strip /chosen/modules/module@<N>/*
 from dom0 device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:35 +0000, Stefano Stabellini wrote:
> On Thu, 6 Dec 2012, Ian Campbell wrote:
> > These nodes are used by Xen to find the initial modules.
> > 
> > Only drop the "xen,multiboot-module" compatible nodes in case someone
> > else has a similar idea.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > v4 - /chosen/modules/modules@N not /chosen/module@N
> > v3 - use a helper to filter out DT elements which are not for dom0.
> >      Better than an ad-hoc break in the middle of a loop.
> > ---
> >  xen/arch/arm/domain_build.c |   40 ++++++++++++++++++++++++++++++++++++++--
> >  1 files changed, 38 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 7a964f7..27e02e4 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -172,6 +172,40 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
> >      return prop;
> >  }
> >  
> > +/* Returns the next node in fdt (starting from offset) which should be
> > + * passed through to dom0.
> > + */
> > +static int fdt_next_dom0_node(const void *fdt, int node,
> > +                              int *depth_out,
> > +                              int parents[DEVICE_TREE_MAX_DEPTH])
> > +{
> > +    int depth = *depth_out;
> > +
> > +    while ( (node = fdt_next_node(fdt, node, &depth)) &&
> > +            node >= 0 && depth >= 0 )
> > +    {
> > +        if ( depth >= DEVICE_TREE_MAX_DEPTH )
> > +            break;
> > +
> > +        parents[depth] = node;
> > +
> > +        /* Skip /chosen/modules/module@<N>/ and all subnodes */
> > +        if ( depth >= 3 &&
> > +             device_tree_node_matches(fdt, parents[1], "chosen") &&
> > +             device_tree_node_matches(fdt, parents[2], "modules") &&
> > +             device_tree_node_matches(fdt, parents[3], "module") &&
> > +             fdt_node_check_compatible(fdt, parents[3],
> > +                                       "xen,multiboot-module" ) == 0 )
> > +            continue;
> > +
> > +        /* We've arrived at a node which dom0 is interested in. */
> > +        break;
> > +    }
> > +
> > +    *depth_out = depth;
> > +    return node;
> > +}
> 
> Can't we just skip the node if it is compatible with
> "xen,multiboot-module", no matter where it lives?  This should simplify
> this function greatly and you wouldn't need the parents parameter
> anymore.

As well as my previous query about the meaning of the tree structure, I
think that even if we could get away with this in this particular case,
we are going to need this sort of infrastructure once we start doing
proper filtering of dom0 vs xen nodes in the tree.

> This way we could have a simple node blacklist based on the compatible
> node all in a single function.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] xen: strip /chosen/modules/module@<N>/*
 from dom0 device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:35 +0000, Stefano Stabellini wrote:
> On Thu, 6 Dec 2012, Ian Campbell wrote:
> > These nodes are used by Xen to find the initial modules.
> > 
> > Only drop the "xen,multiboot-module" compatible nodes in case someone
> > else has a similar idea.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > v4 - /chosen/modules/modules@N not /chosen/module@N
> > v3 - use a helper to filter out DT elements which are not for dom0.
> >      Better than an ad-hoc break in the middle of a loop.
> > ---
> >  xen/arch/arm/domain_build.c |   40 ++++++++++++++++++++++++++++++++++++++--
> >  1 files changed, 38 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 7a964f7..27e02e4 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -172,6 +172,40 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
> >      return prop;
> >  }
> >  
> > +/* Returns the next node in fdt (starting from offset) which should be
> > + * passed through to dom0.
> > + */
> > +static int fdt_next_dom0_node(const void *fdt, int node,
> > +                              int *depth_out,
> > +                              int parents[DEVICE_TREE_MAX_DEPTH])
> > +{
> > +    int depth = *depth_out;
> > +
> > +    while ( (node = fdt_next_node(fdt, node, &depth)) &&
> > +            node >= 0 && depth >= 0 )
> > +    {
> > +        if ( depth >= DEVICE_TREE_MAX_DEPTH )
> > +            break;
> > +
> > +        parents[depth] = node;
> > +
> > +        /* Skip /chosen/modules/module@<N>/ and all subnodes */
> > +        if ( depth >= 3 &&
> > +             device_tree_node_matches(fdt, parents[1], "chosen") &&
> > +             device_tree_node_matches(fdt, parents[2], "modules") &&
> > +             device_tree_node_matches(fdt, parents[3], "module") &&
> > +             fdt_node_check_compatible(fdt, parents[3],
> > +                                       "xen,multiboot-module" ) == 0 )
> > +            continue;
> > +
> > +        /* We've arrived at a node which dom0 is interested in. */
> > +        break;
> > +    }
> > +
> > +    *depth_out = depth;
> > +    return node;
> > +}
> 
> Can't we just skip the node if it is compatible with
> "xen,multiboot-module", no matter where it lives?  This should simplify
> this function greatly and you wouldn't need the parents parameter
> anymore.

As well as my previous query about the meaning of the tree structure, I
think that even if we could get away with this in this particular case,
we are going to need this sort of infrastructure once we start doing
proper filtering of dom0 vs xen nodes in the tree.

> This way we could have a simple node blacklist based on the compatible
> node all in a single function.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:44:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Thztk-0003gk-LZ; Mon, 10 Dec 2012 09:43:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Thztj-0003gW-AX
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:43:39 +0000
Received: from [85.158.143.99:49686] by server-2.bemta-4.messagelabs.com id
	8D/B5-30861-ACEA5C05; Mon, 10 Dec 2012 09:43:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355132617!23456887!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22436 invoked from network); 10 Dec 2012 09:43:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 09:43:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 09:43:36 +0000
Message-Id: <50C5BCD602000078000AF51A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 09:43:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
In-Reply-To: <20121207174636.49c4f7eb@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: KeirFraser <keir.xen@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.12.12 at 02:46, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> The second is msi.c. I don't understand it very well, and need to
> figure what to do for PVH. Would appreciate suggestions if anyone knows.

Why do you think you need to do something special for PVH here
in the first place? The only adjustment I would expect might be
needed is address translation (depending on how PVH deals with
MMIO addresses).

In any case, checking the domain ID here is pointless - this code
can only run in the context of Dom0 anyway.

What the code here does is protect the MSI-X table of a device
from Dom0 write accesses - Xen itself needs to be able to fully
control the mask bits, and hence Dom0 shouldn't (and qemu, even
if running in Dom0, mustn't) write to it (which was happening in
the past).

Jan

> --- a/xen/arch/x86/msi.c	Thu Nov 01 16:53:31 2012 -0700
> +++ b/xen/arch/x86/msi.c	Fri Dec 07 17:45:07 2012 -0800
> @@ -766,6 +766,9 @@ static int msix_capability_init(struct p
>          WARN_ON(rangeset_overlaps_range(mmio_ro_ranges, dev->msix_pba.first,
>                                          dev->msix_pba.last));
>  
> +/* PVH: fixme: not a clue what to do here :) */
> +if (is_pvh_domain(dev->domain) && dev->domain->domain_id != 0)
> +{
>          if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
>                                  dev->msix_table.last) )
>              WARN();
> @@ -793,6 +796,7 @@ static int msix_capability_init(struct p
>                  /* XXX How to deal with existing mappings? */
>              }
>          }
> +}
>      }
>      WARN_ON(dev->msix_nr_entries != nr_entries);
>      WARN_ON(dev->msix_table.first != (table_paddr >> PAGE_SHIFT));




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:56:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti05Y-00043z-7g; Mon, 10 Dec 2012 09:55:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti05W-00043u-GV
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 09:55:50 +0000
Received: from [85.158.137.99:46398] by server-13.bemta-3.messagelabs.com id
	C2/80-24887-5A1B5C05; Mon, 10 Dec 2012 09:55:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355133338!15489035!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17811 invoked from network); 10 Dec 2012 09:55:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 09:55:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="27918"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 09:55:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 09:55:37 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Ti05J-0002ov-UB;
	Mon, 10 Dec 2012 09:55:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Ti05J-00025X-SF;
	Mon, 10 Dec 2012 09:55:37 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14659-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 09:55:37 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14659: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14659 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14659/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-pair          4 host-install/dst_host(4) broken REGR. vs. 14566
 test-i386-i386-pair          3 host-install/src_host(3) broken REGR. vs. 14566
 build-amd64                   2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-win            3 host-install(3)         broken REGR. vs. 14566
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 14566
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 14566
 test-i386-i386-xl-winxpsp3    3 host-install(3)         broken REGR. vs. 14566
 test-i386-i386-xl-qemuu-winxpsp3  3 host-install(3)     broken REGR. vs. 14566

Tests which are failing intermittently (not blocking):
 test-i386-i386-pv             3 host-install(3)           broken pass in 14587

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win     1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-qemut-win-vcpus1                             blocked 
 test-amd64-i386-xl-qemut-win-vcpus1                          blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           broken  
 test-amd64-amd64-qemut-win                                   blocked 
 test-amd64-i386-qemut-win                                    blocked 
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                blocked 
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             broken  
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   broken  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23423:309ff3ad9dcc
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:13:00 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   23422:6c7febc0bbeb
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:03:05 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   23421:a8a9e1c126ea
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:50:03 2012 +0000
    
    memop: limit guest specified extent order
    
    Allowing unbounded order values here causes almost unbounded loops
    and/or partially incomplete requests, particularly in PoD code.
    
    The added range checks in populate_physmap(), decrease_reservation(),
    and the "in" one in memory_exchange() architecturally all could use
    PADDR_BITS - PAGE_SHIFT, and are being artificially constrained to
    MAX_ORDER.
    
    This is XSA-31 / CVE-2012-5515.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 09:58:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 09:58:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti087-0004Am-W3; Mon, 10 Dec 2012 09:58:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti086-0004Ae-9j
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 09:58:30 +0000
Received: from [85.158.143.99:8364] by server-3.bemta-4.messagelabs.com id
	F9/5E-18211-542B5C05; Mon, 10 Dec 2012 09:58:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355133509!23518549!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5508 invoked from network); 10 Dec 2012 09:58:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 09:58:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="28148"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 09:58:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 09:58:28 +0000
Message-ID: <1355133507.31710.104.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Date: Mon, 10 Dec 2012 09:58:27 +0000
In-Reply-To: <20121207212536.GE9664@phenom.dumpdata.com>
References: <50B4D060.9070403@jhuapl.edu>
	<1354029286-17652-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354029286-17652-2-git-send-email-dgdegra@tycho.nsa.gov>
	<50B76DBB.90504@jhuapl.edu> <20121207212536.GE9664@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] stubdom: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 21:25 +0000, Konrad Rzeszutek Wilk wrote:
> > >+   snprintf(path, 512, "backend/vtpm/%u/%u/feature-protocol-v2", (unsigned int) tpmif->domid, tpmif->handle);
> > >+   if ((err = xenbus_write(XBT_NIL, path, "1")))
> > >+   {
> > >+      /* if we got an error here we should carefully remove the interface and then return */
> > >+      TPMBACK_ERR("Unable to write feature-protocol-v2 node: %s\n", err);
> > >+      free(err);
> > >+      remove_tpmif(tpmif);
> > >+      goto error_post_irq;
> > >+   }
> > >+
> > My preference is still to do away with the versioning stuff since
> > tpm is just getting released.

It is present in the 2.6.18-xen tree and has made its way into distros,
at least SLES11.

>  Its not even in linux yet so there is
> > no confusion. We can even merge the linux patches together and
> > resubmit as one if thats preferrable. Konrad, Ian, your final votes
> > on that?
> 
> I am up for just removing the versioning stuff - and if one really
> wants to be fool-proof - rename the 'backend/vtpm' to 'backend/vtpm2'
> Perhaps?
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:08:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0H6-0004eY-MY; Mon, 10 Dec 2012 10:07:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti0H5-0004eT-EZ
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:07:47 +0000
Received: from [85.158.139.83:63587] by server-8.bemta-5.messagelabs.com id
	E3/00-15003-274B5C05; Mon, 10 Dec 2012 10:07:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355134032!24831871!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11251 invoked from network); 10 Dec 2012 10:07:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 10:07:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="28444"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 10:07:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 10:07:12 +0000
Message-ID: <1355134031.31710.111.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: asad raza <asadrupucit2006@gmail.com>
Date: Mon, 10 Dec 2012 10:07:11 +0000
In-Reply-To: <CAJ2v2mgkTjuq7cJWsgwbY-NJpAeEVRtai=GJV_EhrVzMuZ7QBA@mail.gmail.com>
References: <CAJ2v2mgkTjuq7cJWsgwbY-NJpAeEVRtai=GJV_EhrVzMuZ7QBA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why we are doing this (below code) in __init
 setup_pagetables() method in ARM arch in setup.c.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please read http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions and
consider re-asking your question with more context. 

e.g. include your actual goal, what steps have you taken to find the
answer for yourself etc.

On Sat, 2012-12-08 at 13:02 +0000, asad raza wrote:
>  /* Link in the fixmap pagetable */
>     pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
>                            >> PAGE_SHIFT);
>     pte.pt.table = 1;
>     write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
>     /*
>      * No flush required here. Individual flushes are done in
>      * set_fixmap as entries are used.
>      */
> 
>     /* Break up the Xen mapping into 4k pages and protect them separately. */
>     for ( i = 0; i < LPAE_ENTRIES; i++ )
>     {
>         unsigned long mfn = paddr_to_pfn(xen_paddr) + i;
>         unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT);
>         if ( !is_kernel(va) )
>             break;
>         pte = mfn_to_xen_entry(mfn);
>         pte.pt.table = 1; /* 4k mappings always have this bit set */
>         if ( is_kernel_text(va) || is_kernel_inittext(va) )
>         {
>             pte.pt.xn = 0;
>             pte.pt.ro = 1;
>         }
>         if ( is_kernel_rodata(va) )
>             pte.pt.ro = 1;
>         write_pte(xen_xenmap + i, pte);
>         /* No flush required here as page table is not hooked in yet. */
>     }
>     pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
>                            >> PAGE_SHIFT);
>     pte.pt.table = 1;
>     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
>     /* Have changed a mapping used for .text. Flush everything for safety. */
>     flush_xen_text_tlb();
> 
>     /* From now on, no mapping may be both writable and executable. */
>     WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0Ky-0004rR-CV; Mon, 10 Dec 2012 10:11:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti0Kw-0004rH-Gc
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:11:46 +0000
Received: from [85.158.138.51:58979] by server-4.bemta-3.messagelabs.com id
	5D/D2-30023-165B5C05; Mon, 10 Dec 2012 10:11:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355134235!28257177!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23435 invoked from network); 10 Dec 2012 10:10:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 10:10:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 10:10:34 +0000
Message-Id: <50C5C32802000078000AF542@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 10:10:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 19:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> Abstract away from vesa.c the functions to handle a linear framebuffer
> and print characters to it.
> The corresponding functions are going to be removed from vesa.c in the
> next patch.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/drivers/video/Makefile |    1 +
>  xen/drivers/video/fb.c     |  209 
> ++++++++++++++++++++++++++++++++++++++++++++
>  xen/drivers/video/fb.h     |   49 ++++++++++
>  3 files changed, 259 insertions(+), 0 deletions(-)
>  create mode 100644 xen/drivers/video/fb.c
>  create mode 100644 xen/drivers/video/fb.h
> 
> diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> index 2993c39..3b3eb43 100644
> --- a/xen/drivers/video/Makefile
> +++ b/xen/drivers/video/Makefile
> @@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
>  obj-$(HAS_VIDEO) += font_8x14.o
>  obj-$(HAS_VIDEO) += font_8x16.o
>  obj-$(HAS_VIDEO) += font_8x8.o
> +obj-$(HAS_VIDEO) += fb.o
>  obj-$(HAS_VGA) += vesa.o
> diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
> new file mode 100644
> index 0000000..282f97e
> --- /dev/null
> +++ b/xen/drivers/video/fb.c
> @@ -0,0 +1,209 @@
> +/**************************************************************************
> ****
> + * fb.c
> + *
> + * linear frame buffer handling.
> + */
> +
> +#include <xen/config.h>
> +#include <xen/kernel.h>
> +#include <xen/lib.h>
> +#include <xen/errno.h>
> +#include "fb.h"
> +#include "font.h"
> +
> +#define MAX_XRES 1900
> +#define MAX_YRES 1200
> +#define MAX_BPP 4
> +#define MAX_FONT_W 8
> +#define MAX_FONT_H 16
> +static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
> +static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
> +static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) * \
> +                          (MAX_YRES / MAX_FONT_H)];
> +
> +struct fb_status {
> +    struct fb_prop fbp;
> +
> +    unsigned char *lbuf, *text_buf;
> +    unsigned int *line_len;
> +    unsigned int xpos, ypos;
> +};
> +static struct fb_status fb;
> +
> +static void fb_show_line(
> +    const unsigned char *text_line,
> +    unsigned char *video_line,
> +    unsigned int nr_chars,
> +    unsigned int nr_cells)
> +{
> +    unsigned int i, j, b, bpp, pixel;
> +
> +    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
> +
> +    for ( i = 0; i < fb.fbp.font->height; i++ )
> +    {
> +        unsigned char *ptr = fb.lbuf;
> +
> +        for ( j = 0; j < nr_chars; j++ )
> +        {
> +            const unsigned char *bits = fb.fbp.font->data;
> +            bits += ((text_line[j] * fb.fbp.font->height + i) *
> +                     ((fb.fbp.font->width + 7) >> 3));
> +            for ( b = fb.fbp.font->width; b--; )
> +            {
> +                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
> +                memcpy(ptr, &pixel, bpp);
> +                ptr += bpp;
> +            }
> +        }
> +
> +        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
> +        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
> +        video_line += fb.fbp.bytes_per_line;
> +    }
> +}
> +
> +/* Fast mode which redraws all modified parts of a 2D text buffer. */
> +void fb_redraw_puts(const char *s)
> +{
> +    unsigned int i, min_redraw_y = fb.ypos;
> +    char c;
> +
> +    /* Paste characters into text buffer. */
> +    while ( (c = *s++) != '\0' )
> +    {
> +        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
> +        {
> +            if ( ++fb.ypos >= fb.fbp.text_rows )
> +            {
> +                min_redraw_y = 0;
> +                fb.ypos = fb.fbp.text_rows - 1;
> +                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
> +                        fb.ypos * fb.fbp.text_columns);
> +                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, 
> fb.xpos);
> +            }
> +            fb.xpos = 0;
> +        }
> +
> +        if ( c != '\n' )
> +            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
> +    }
> +
> +    /* Render modified section of text buffer to VESA linear framebuffer. 
> */
> +    for ( i = min_redraw_y; i <= fb.ypos; i++ )
> +    {
> +        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
> +        unsigned int width;
> +
> +        for ( width = fb.fbp.text_columns; width; --width )
> +            if ( line[width - 1] )
> +                 break;
> +        fb_show_line(line,
> +                       fb.fbp.lfb + i * fb.fbp.font->height * 
> fb.fbp.bytes_per_line,
> +                       width, max(fb.line_len[i], width));
> +        fb.line_len[i] = width;
> +    }
> +
> +    fb.fbp.flush();
> +}
> +
> +/* Slower line-based scroll mode which interacts better with dom0. */
> +void fb_scroll_puts(const char *s)
> +{
> +    unsigned int i;
> +    char c;
> +
> +    while ( (c = *s++) != '\0' )
> +    {
> +        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
> +        {
> +            unsigned int bytes = (fb.fbp.width *
> +                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
> +            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * 
> fb.fbp.bytes_per_line;
> +            unsigned char *dst = fb.fbp.lfb;
> +
> +            /* New line: scroll all previous rows up one line. */
> +            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
> +            {
> +                memcpy(dst, src, bytes);
> +                src += fb.fbp.bytes_per_line;
> +                dst += fb.fbp.bytes_per_line;
> +            }
> +
> +            /* Render new line. */
> +            fb_show_line(
> +                fb.text_buf,
> +                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * 
> fb.fbp.bytes_per_line,
> +                fb.xpos, fb.fbp.text_columns);
> +
> +            fb.xpos = 0;
> +        }
> +
> +        if ( c != '\n' )
> +            fb.text_buf[fb.xpos++] = c;
> +    }
> +
> +    fb.fbp.flush();
> +}
> +
> +void fb_cr(void)
> +{
> +    fb.xpos = 0;
> +}
> +
> +int __init fb_init(struct fb_prop fbp)
> +{
> +    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
> +    {
> +        printk("Couldn't initialize a %xx%x framebuffer early.\n",
> +                        fbp.width, fbp.height);
> +        return -EINVAL;
> +    }
> +
> +    fb.fbp = fbp;
> +    fb.lbuf = lbuf;
> +    fb.text_buf = text_buf;
> +    fb.line_len = line_len;
> +    return 0;
> +}
> +
> +int __init fb_alloc(void)
> +{
> +    fb.lbuf = NULL;
> +    fb.text_buf = NULL;
> +    fb.line_len = NULL;
> +
> +    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
> +    if ( !fb.lbuf )
> +        goto fail;
> +
> +    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
> +    if ( !fb.text_buf )
> +        goto fail;
> +
> +    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
> +    if ( !fb.line_len )
> +        goto fail;
> +
> +    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
> +    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
> +    memcpy(fb.line_len, line_len, fb.fbp.text_columns);
> +
> +    return 0;
> +
> +fail:
> +    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
> +                    "the framebuffer\n");
> +    xfree(fb.lbuf);
> +    xfree(fb.text_buf);
> +    xfree(fb.line_len);
> +
> +    return -ENOMEM;
> +}
> +
> +void fb_free(void)
> +{
> +    xfree(fb.lbuf);
> +    xfree(fb.text_buf);
> +    xfree(fb.line_len);
> +}
> diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
> new file mode 100644
> index 0000000..558d058
> --- /dev/null
> +++ b/xen/drivers/video/fb.h
> @@ -0,0 +1,49 @@
> +/*
> + * xen/drivers/video/fb.h
> + *
> + * Cross-platform framebuffer library
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + * Copyright (c) 2012 Citrix Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef _XEN_FB_H
> +#define _XEN_FB_H
> +
> +#include <xen/init.h>
> +
> +struct fb_prop {
> +    const struct font_desc *font;
> +    unsigned char *lfb;
> +    unsigned int pixel_on;
> +    uint16_t width, height;
> +    uint16_t bytes_per_line;
> +    uint16_t bits_per_pixel;
> +    void (*flush)(void);
> +
> +    unsigned int text_columns;
> +    unsigned int text_rows;
> +};
> +
> +void fb_redraw_puts(const char *s);
> +void fb_scroll_puts(const char *s);
> +void fb_cr(void);

Please make this fb_create() or the like - "cr" alone could well be
mistaken for "carriage return".

Jan

> +void fb_free(void);
> +
> +/* initialize the framebuffer, can be called early (before xmalloc is
> + * available) */
> +int __init fb_init(struct fb_prop fbp);
> +/* fb_alloc allocates internal structures using xmalloc */
> +int __init fb_alloc(void);
> +
> +#endif
> -- 
> 1.7.2.5
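[Editorial note: the heart of fb_show_line() in the patch above is the loop that expands one scanline of a font glyph into framebuffer pixels. The sketch below pulls that loop out so it can be exercised standalone; the function name expand_scanline is illustrative and does not appear in the patch, which inlines this logic.]

```c
#include <assert.h>
#include <string.h>

/* Expand one scanline byte of a font glyph into pixels: each set bit
 * becomes a bpp-byte copy of pixel_on, each clear bit a zero pixel.
 * Bits are consumed most-significant first, matching the patch. */
static void expand_scanline(unsigned char bits, unsigned int font_width,
                            unsigned int bpp, unsigned int pixel_on,
                            unsigned char *out)
{
    unsigned int b, pixel;

    for ( b = font_width; b--; )
    {
        pixel = (bits & (1u << b)) ? pixel_on : 0;
        memcpy(out, &pixel, bpp);  /* first bpp bytes of the pixel value */
        out += bpp;
    }
}
```

Note that, as in the patch, copying the first `bpp` bytes of an `unsigned int` assumes the low-order bytes come first (little endian) for bpp < sizeof(int); with bpp == 4 the whole value is copied regardless.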



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
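[Editorial note: the character-paste phase of fb_redraw_puts() in the patch (wrap on newline or column overflow, scroll the 2D text buffer one row when the last row fills) can be sketched standalone as follows. The struct and function names here are illustrative only; the patch keeps this state in its static `struct fb_status`.]

```c
#include <assert.h>
#include <string.h>

/* A cols x rows text buffer with a cursor, mirroring the paste loop
 * of fb_redraw_puts(): '\n' or hitting the right edge starts a new
 * line, and overflowing the bottom row scrolls the buffer up one row. */
struct textbuf {
    unsigned char *buf;
    unsigned int cols, rows, xpos, ypos;
};

static void textbuf_puts(struct textbuf *t, const char *s)
{
    char c;

    while ( (c = *s++) != '\0' )
    {
        if ( (c == '\n') || (t->xpos >= t->cols) )
        {
            if ( ++t->ypos >= t->rows )
            {
                t->ypos = t->rows - 1;
                /* Drop the top row, shift the rest up. */
                memmove(t->buf, t->buf + t->cols, t->ypos * t->cols);
                /* Clear the portion of the new bottom row that was used. */
                memset(t->buf + t->ypos * t->cols, 0, t->xpos);
            }
            t->xpos = 0;
        }

        if ( c != '\n' )
            t->buf[t->xpos++ + t->ypos * t->cols] = c;
    }
}
```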

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:12:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:12:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0Lb-0004v3-Qj; Mon, 10 Dec 2012 10:12:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti0La-0004un-26
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:12:26 +0000
Received: from [193.109.254.147:7411] by server-8.bemta-14.messagelabs.com id
	89/58-05026-985B5C05; Mon, 10 Dec 2012 10:12:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355134343!4315530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17929 invoked from network); 10 Dec 2012 10:12:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 10:12:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 10:12:23 +0000
Message-Id: <50C5C39302000078000AF545@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 10:12:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 19:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> Make use of the framebuffer functions previously introduced.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
>  1 files changed, 26 insertions(+), 153 deletions(-)
> 
> diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> index aaf8b23..778cfdf 100644
> --- a/xen/drivers/video/vesa.c
> +++ b/xen/drivers/video/vesa.c
> @@ -13,20 +13,15 @@
>  #include <asm/io.h>
>  #include <asm/page.h>
>  #include "font.h"
> +#include "fb.h"
>  
>  #define vlfb_info    vga_console_info.u.vesa_lfb
> -#define text_columns (vlfb_info.width / font->width)
> -#define text_rows    (vlfb_info.height / font->height)
>  
> -static void vesa_redraw_puts(const char *s);
> -static void vesa_scroll_puts(const char *s);
> +static void lfb_flush(void);
>  
> -static unsigned char *lfb, *lbuf, *text_buf;
> -static unsigned int *__initdata line_len;
> +static unsigned char *lfb;

What's the point of retaining this, when ...

>  static const struct font_desc *font;
>  static bool_t vga_compat;
> -static unsigned int pixel_on;
> -static unsigned int xpos, ypos;
>  
>  static unsigned int vram_total;
>  integer_param("vesa-ram", vram_total);
> @@ -87,29 +82,26 @@ void __init vesa_early_init(void)
>  
>  void __init vesa_init(void)
>  {
> -    if ( !font )
> -        goto fail;
> -
> -    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
> -    if ( !lbuf )
> -        goto fail;
> +    struct fb_prop fbp;
>  
> -    text_buf = xzalloc_bytes(text_columns * text_rows);
> -    if ( !text_buf )
> -        goto fail;
> +    if ( !font )
> +        return;
>  
> -    line_len = xzalloc_array(unsigned int, text_columns);
> -    if ( !line_len )
> -        goto fail;
> +    fbp.font = font;
> +    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
> +    fbp.bytes_per_line = vlfb_info.bytes_per_line;
> +    fbp.width = vlfb_info.width;
> +    fbp.height = vlfb_info.height;
> +    fbp.flush = lfb_flush;
> +    fbp.text_columns = vlfb_info.width / font->width;
> +    fbp.text_rows = vlfb_info.height / font->height;
>  
> -    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
> +    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);

... you set up the consolidated field at the same time anyway?

Jan

>      if ( !lfb )
> -        goto fail;
> +        return;
>  
>      memset(lfb, 0, vram_remap);
>  
> -    video_puts = vesa_redraw_puts;
> -
>      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
>             "using %uk, total %uk\n",
>             vlfb_info.lfb_base, lfb,
> @@ -131,7 +123,7 @@ void __init vesa_init(void)
>      {
>          /* Light grey in truecolor. */
>          unsigned int grey = 0xaaaaaaaa;
> -        pixel_on = 
> +        fbp.pixel_on = 
>              ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
>              ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
>              ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
> @@ -139,15 +131,14 @@ void __init vesa_init(void)
>      else
>      {
>          /* White(ish) in default pseudocolor palette. */
> -        pixel_on = 7;
> +        fbp.pixel_on = 7;
>      }
>  
> -    return;
> -
> - fail:
> -    xfree(lbuf);
> -    xfree(text_buf);
> -    xfree(line_len);
> +    if ( fb_init(fbp) < 0 )
> +        return;
> +    if ( fb_alloc() < 0 )
> +        return;
> +    video_puts = fb_redraw_puts;
>  }
>  
>  #include <asm/mtrr.h>
> @@ -192,8 +183,8 @@ void __init vesa_endboot(bool_t keep)
>  {
>      if ( keep )
>      {
> -        xpos = 0;
> -        video_puts = vesa_scroll_puts;
> +        video_puts = fb_scroll_puts;
> +        fb_cr();
>      }
>      else
>      {
> @@ -202,124 +193,6 @@ void __init vesa_endboot(bool_t keep)
>              memset(lfb + i * vlfb_info.bytes_per_line, 0,
>                     vlfb_info.width * bpp);
>          lfb_flush();
> +        fb_free();
>      }
> -
> -    xfree(line_len);
> -}
> -
> -/* Render one line of text to given linear framebuffer line. */
> -static void vesa_show_line(
> -    const unsigned char *text_line,
> -    unsigned char *video_line,
> -    unsigned int nr_chars,
> -    unsigned int nr_cells)
> -{
> -    unsigned int i, j, b, bpp, pixel;
> -
> -    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
> -
> -    for ( i = 0; i < font->height; i++ )
> -    {
> -        unsigned char *ptr = lbuf;
> -
> -        for ( j = 0; j < nr_chars; j++ )
> -        {
> -            const unsigned char *bits = font->data;
> -            bits += ((text_line[j] * font->height + i) *
> -                     ((font->width + 7) >> 3));
> -            for ( b = font->width; b--; )
> -            {
> -                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
> -                memcpy(ptr, &pixel, bpp);
> -                ptr += bpp;
> -            }
> -        }
> -
> -        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
> -        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
> -        video_line += vlfb_info.bytes_per_line;
> -    }
> -}
> -
> -/* Fast mode which redraws all modified parts of a 2D text buffer. */
> -static void __init vesa_redraw_puts(const char *s)
> -{
> -    unsigned int i, min_redraw_y = ypos;
> -    char c;
> -
> -    /* Paste characters into text buffer. */
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            if ( ++ypos >= text_rows )
> -            {
> -                min_redraw_y = 0;
> -                ypos = text_rows - 1;
> -                memmove(text_buf, text_buf + text_columns,
> -                        ypos * text_columns);
> -                memset(text_buf + ypos * text_columns, 0, xpos);
> -            }
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++ + ypos * text_columns] = c;
> -    }
> -
> -    /* Render modified section of text buffer to VESA linear framebuffer. */
> -    for ( i = min_redraw_y; i <= ypos; i++ )
> -    {
> -        const unsigned char *line = text_buf + i * text_columns;
> -        unsigned int width;
> -
> -        for ( width = text_columns; width; --width )
> -            if ( line[width - 1] )
> -                 break;
> -        vesa_show_line(line,
> -                       lfb + i * font->height * vlfb_info.bytes_per_line,
> -                       width, max(line_len[i], width));
> -        line_len[i] = width;
> -    }
> -
> -    lfb_flush();
> -}
> -
> -/* Slower line-based scroll mode which interacts better with dom0. */
> -static void vesa_scroll_puts(const char *s)
> -{
> -    unsigned int i;
> -    char c;
> -
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            unsigned int bytes = (vlfb_info.width *
> -                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
> -            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
> -            unsigned char *dst = lfb;
> -            
> -            /* New line: scroll all previous rows up one line. */
> -            for ( i = font->height; i < vlfb_info.height; i++ )
> -            {
> -                memcpy(dst, src, bytes);
> -                src += vlfb_info.bytes_per_line;
> -                dst += vlfb_info.bytes_per_line;
> -            }
> -
> -            /* Render new line. */
> -            vesa_show_line(
> -                text_buf,
> -                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
> -                xpos, text_columns);
> -
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++] = c;
> -    }
> -
> -    lfb_flush();
>  }
> -- 
> 1.7.2.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:12:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:12:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0Lb-0004v3-Qj; Mon, 10 Dec 2012 10:12:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti0La-0004un-26
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:12:26 +0000
Received: from [193.109.254.147:7411] by server-8.bemta-14.messagelabs.com id
	89/58-05026-985B5C05; Mon, 10 Dec 2012 10:12:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355134343!4315530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17929 invoked from network); 10 Dec 2012 10:12:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 10:12:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 10:12:23 +0000
Message-Id: <50C5C39302000078000AF545@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 10:12:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.12.12 at 19:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> Make use of the framebuffer functions previously introduced.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
>  1 files changed, 26 insertions(+), 153 deletions(-)
> 
> diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> index aaf8b23..778cfdf 100644
> --- a/xen/drivers/video/vesa.c
> +++ b/xen/drivers/video/vesa.c
> @@ -13,20 +13,15 @@
>  #include <asm/io.h>
>  #include <asm/page.h>
>  #include "font.h"
> +#include "fb.h"
>  
>  #define vlfb_info    vga_console_info.u.vesa_lfb
> -#define text_columns (vlfb_info.width / font->width)
> -#define text_rows    (vlfb_info.height / font->height)
>  
> -static void vesa_redraw_puts(const char *s);
> -static void vesa_scroll_puts(const char *s);
> +static void lfb_flush(void);
>  
> -static unsigned char *lfb, *lbuf, *text_buf;
> -static unsigned int *__initdata line_len;
> +static unsigned char *lfb;

What's the point of retaining this, when ...

>  static const struct font_desc *font;
>  static bool_t vga_compat;
> -static unsigned int pixel_on;
> -static unsigned int xpos, ypos;
>  
>  static unsigned int vram_total;
>  integer_param("vesa-ram", vram_total);
> @@ -87,29 +82,26 @@ void __init vesa_early_init(void)
>  
>  void __init vesa_init(void)
>  {
> -    if ( !font )
> -        goto fail;
> -
> -    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
> -    if ( !lbuf )
> -        goto fail;
> +    struct fb_prop fbp;
>  
> -    text_buf = xzalloc_bytes(text_columns * text_rows);
> -    if ( !text_buf )
> -        goto fail;
> +    if ( !font )
> +        return;
>  
> -    line_len = xzalloc_array(unsigned int, text_columns);
> -    if ( !line_len )
> -        goto fail;
> +    fbp.font = font;
> +    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
> +    fbp.bytes_per_line = vlfb_info.bytes_per_line;
> +    fbp.width = vlfb_info.width;
> +    fbp.height = vlfb_info.height;
> +    fbp.flush = lfb_flush;
> +    fbp.text_columns = vlfb_info.width / font->width;
> +    fbp.text_rows = vlfb_info.height / font->height;
>  
> -    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
> +    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);

... you set up the consolidated field at the same time anyway?

Jan

>      if ( !lfb )
> -        goto fail;
> +        return;
>  
>      memset(lfb, 0, vram_remap);
>  
> -    video_puts = vesa_redraw_puts;
> -
>      printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
>             "using %uk, total %uk\n",
>             vlfb_info.lfb_base, lfb,
> @@ -131,7 +123,7 @@ void __init vesa_init(void)
>      {
>          /* Light grey in truecolor. */
>          unsigned int grey = 0xaaaaaaaa;
> -        pixel_on = 
> +        fbp.pixel_on = 
>              ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
>              ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
>              ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
> @@ -139,15 +131,14 @@ void __init vesa_init(void)
>      else
>      {
>          /* White(ish) in default pseudocolor palette. */
> -        pixel_on = 7;
> +        fbp.pixel_on = 7;
>      }
>  
> -    return;
> -
> - fail:
> -    xfree(lbuf);
> -    xfree(text_buf);
> -    xfree(line_len);
> +    if ( fb_init(fbp) < 0 )
> +        return;
> +    if ( fb_alloc() < 0 )
> +        return;
> +    video_puts = fb_redraw_puts;
>  }
>  
>  #include <asm/mtrr.h>
> @@ -192,8 +183,8 @@ void __init vesa_endboot(bool_t keep)
>  {
>      if ( keep )
>      {
> -        xpos = 0;
> -        video_puts = vesa_scroll_puts;
> +        video_puts = fb_scroll_puts;
> +        fb_cr();
>      }
>      else
>      {
> @@ -202,124 +193,6 @@ void __init vesa_endboot(bool_t keep)
>              memset(lfb + i * vlfb_info.bytes_per_line, 0,
>                     vlfb_info.width * bpp);
>          lfb_flush();
> +        fb_free();
>      }
> -
> -    xfree(line_len);
> -}
> -
> -/* Render one line of text to given linear framebuffer line. */
> -static void vesa_show_line(
> -    const unsigned char *text_line,
> -    unsigned char *video_line,
> -    unsigned int nr_chars,
> -    unsigned int nr_cells)
> -{
> -    unsigned int i, j, b, bpp, pixel;
> -
> -    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
> -
> -    for ( i = 0; i < font->height; i++ )
> -    {
> -        unsigned char *ptr = lbuf;
> -
> -        for ( j = 0; j < nr_chars; j++ )
> -        {
> -            const unsigned char *bits = font->data;
> -            bits += ((text_line[j] * font->height + i) *
> -                     ((font->width + 7) >> 3));
> -            for ( b = font->width; b--; )
> -            {
> -                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
> -                memcpy(ptr, &pixel, bpp);
> -                ptr += bpp;
> -            }
> -        }
> -
> -        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
> -        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
> -        video_line += vlfb_info.bytes_per_line;
> -    }
> -}
> -
> -/* Fast mode which redraws all modified parts of a 2D text buffer. */
> -static void __init vesa_redraw_puts(const char *s)
> -{
> -    unsigned int i, min_redraw_y = ypos;
> -    char c;
> -
> -    /* Paste characters into text buffer. */
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            if ( ++ypos >= text_rows )
> -            {
> -                min_redraw_y = 0;
> -                ypos = text_rows - 1;
> -                memmove(text_buf, text_buf + text_columns,
> -                        ypos * text_columns);
> -                memset(text_buf + ypos * text_columns, 0, xpos);
> -            }
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++ + ypos * text_columns] = c;
> -    }
> -
> -    /* Render modified section of text buffer to VESA linear framebuffer. */
> -    for ( i = min_redraw_y; i <= ypos; i++ )
> -    {
> -        const unsigned char *line = text_buf + i * text_columns;
> -        unsigned int width;
> -
> -        for ( width = text_columns; width; --width )
> -            if ( line[width - 1] )
> -                 break;
> -        vesa_show_line(line,
> -                       lfb + i * font->height * vlfb_info.bytes_per_line,
> -                       width, max(line_len[i], width));
> -        line_len[i] = width;
> -    }
> -
> -    lfb_flush();
> -}
> -
> -/* Slower line-based scroll mode which interacts better with dom0. */
> -static void vesa_scroll_puts(const char *s)
> -{
> -    unsigned int i;
> -    char c;
> -
> -    while ( (c = *s++) != '\0' )
> -    {
> -        if ( (c == '\n') || (xpos >= text_columns) )
> -        {
> -            unsigned int bytes = (vlfb_info.width *
> -                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
> -            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
> -            unsigned char *dst = lfb;
> -            
> -            /* New line: scroll all previous rows up one line. */
> -            for ( i = font->height; i < vlfb_info.height; i++ )
> -            {
> -                memcpy(dst, src, bytes);
> -                src += vlfb_info.bytes_per_line;
> -                dst += vlfb_info.bytes_per_line;
> -            }
> -
> -            /* Render new line. */
> -            vesa_show_line(
> -                text_buf,
> -                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
> -                xpos, text_columns);
> -
> -            xpos = 0;
> -        }
> -
> -        if ( c != '\n' )
> -            text_buf[xpos++] = c;
> -    }
> -
> -    lfb_flush();
>  }
> -- 
> 1.7.2.5




From xen-devel-bounces@lists.xen.org Mon Dec 10 10:19:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0SS-0005SC-I1; Mon, 10 Dec 2012 10:19:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti0SQ-0005S3-W5
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:19:31 +0000
Received: from [85.158.139.83:49361] by server-14.bemta-5.messagelabs.com id
	C6/A2-09538-237B5C05; Mon, 10 Dec 2012 10:19:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355134768!28451834!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12684 invoked from network); 10 Dec 2012 10:19:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 10:19:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="28804"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 10:19:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 10:19:27 +0000
Message-ID: <1355134766.31710.119.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 10:19:26 +0000
In-Reply-To: <20674.16214.934271.479230@mariner.uk.xensource.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
	<20674.16214.934271.479230@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "jfehlig@suse.com" <jfehlig@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 19:11 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
> > Can we provide, or (more likely) require the application to provide, a
> > lock (perhaps per-event or, again more likely, per-event-loop) which
> > must be held while processing callbacks and also while events are being
> > registered/unregistered with the application's event handling subsystem?
> > With such a lock in place the application would be able to guarantee
> > that having returned from the deregister hook any further events would
> > be seen as spurious events by its own event processing loop.
> 
> I think this might be difficult to get right without deadlocks.

I took Bamvor's most recent response to mean that a per-event lock was
already in place in libvirt and inferred that this was the reason why
the originally proposed one line fix worked for them. Perhaps I
misunderstood?

Ian.



From xen-devel-bounces@lists.xen.org Mon Dec 10 10:25:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0Y5-0005k3-ED; Mon, 10 Dec 2012 10:25:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti0Y3-0005jp-13
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:25:19 +0000
Received: from [85.158.137.99:21872] by server-5.bemta-3.messagelabs.com id
	78/6D-26311-D88B5C05; Mon, 10 Dec 2012 10:25:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355135098!16934069!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32559 invoked from network); 10 Dec 2012 10:24:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 10:24:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 10:24:58 +0000
Message-Id: <50C5C68602000078000AF565@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 10:24:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2312BB66.0__="
Subject: [Xen-devel] [PATCH] x86: frame table related improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


- fix super page frame table setup for memory hotplug case (should
  create full table, or else the hotplug code would need to do the
  necessary table population)
- simplify super page frame table setup (can re-use frame table setup
  code)
- slightly streamline frame table setup code
- fix (tighten) a BUG_ON() and an ASSERT() condition
- fix spage <-> pdx conversion macros (they had no users so far, and
  hence no-one noticed how broken they were)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -182,28 +182,6 @@ static uint32_t base_disallow_mask;
       !is_hvm_domain(d)) ?                                      \
      L1_DISALLOW_MASK : (L1_DISALLOW_MASK & ~PAGE_CACHE_ATTRS))
 
-static void __init init_spagetable(void)
-{
-    unsigned long s, start = SPAGETABLE_VIRT_START;
-    unsigned long end = SPAGETABLE_VIRT_END;
-    unsigned long step, mfn;
-    unsigned int max_entries;
-
-    step = 1UL << PAGETABLE_ORDER;
-    max_entries = (max_pdx + ((1UL<<SUPERPAGE_ORDER)-1)) >> SUPERPAGE_ORDER;
-    end = start + (((max_entries * sizeof(*spage_table)) +
-                    ((1UL<<SUPERPAGE_SHIFT)-1)) & (~((1UL<<SUPERPAGE_SHIFT)-1)));
-
-    for (s = start; s < end; s += step << PAGE_SHIFT)
-    {
-        mfn = alloc_boot_pages(step, step);
-        if ( !mfn )
-            panic("Not enough memory for spage table");
-        map_pages_to_xen(s, mfn, step, PAGE_HYPERVISOR);
-    }
-    memset((void *)start, 0, end - start);
-}
-
 static void __init init_frametable_chunk(void *start, void *end)
 {
     unsigned long s = (unsigned long)start;
@@ -232,15 +210,25 @@ static void __init init_frametable_chunk
     }
 
     memset(start, 0, end - start);
-    memset(end, -1, s - (unsigned long)end);
+    memset(end, -1, s - e);
+}
+
+static void __init init_spagetable(void)
+{
+    BUILD_BUG_ON(XEN_VIRT_END > SPAGETABLE_VIRT_START);
+
+    init_frametable_chunk(spage_table,
+                          mem_hotplug ? (void *)SPAGETABLE_VIRT_END
+                                      : pdx_to_spage(max_pdx - 1) + 1);
 }
 
 void __init init_frametable(void)
 {
     unsigned int sidx, eidx, nidx;
     unsigned int max_idx = (max_pdx + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT;
+    struct page_info *end_pg, *top_pg;
 
-    BUILD_BUG_ON(XEN_VIRT_END > FRAMETABLE_VIRT_END);
+    BUILD_BUG_ON(XEN_VIRT_END > FRAMETABLE_VIRT_START);
     BUILD_BUG_ON(FRAMETABLE_VIRT_START & ((1UL << L2_PAGETABLE_SHIFT) - 1));
 
     for ( sidx = 0; ; sidx = nidx )
@@ -252,17 +240,13 @@ void __init init_frametable(void)
         init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
                               pdx_to_page(eidx * PDX_GROUP_COUNT));
     }
-    if ( !mem_hotplug )
-        init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
-                              pdx_to_page(max_pdx - 1) + 1);
-    else
-    {
-        init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
-                              pdx_to_page(max_idx * PDX_GROUP_COUNT - 1) + 1);
-        memset(pdx_to_page(max_pdx), -1,
-               (unsigned long)pdx_to_page(max_idx * PDX_GROUP_COUNT) -
-               (unsigned long)pdx_to_page(max_pdx));
-    }
+
+    end_pg = pdx_to_page(max_pdx - 1) + 1;
+    top_pg = mem_hotplug ? pdx_to_page(max_idx * PDX_GROUP_COUNT - 1) + 1
+                         : end_pg;
+    init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT), top_pg);
+    memset(end_pg, -1, (unsigned long)top_pg - (unsigned long)end_pg);
+
     if (opt_allow_superpage)
         init_spagetable();
 }
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -301,7 +301,7 @@ static inline struct page_info *__virt_t
 
 static inline void *__page_to_virt(const struct page_info *pg)
 {
-    ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
+    ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_SIZE);
     /*
      * (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg). The
      * division and re-multiplication avoids one shift when sizeof(*pg) is a
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -46,8 +46,8 @@ extern void pfn_pdx_hole_setup(unsigned 
 
 #define page_to_pdx(pg)  ((pg) - frame_table)
 #define pdx_to_page(pdx) (frame_table + (pdx))
-#define spage_to_pdx(spg) ((spg>>(SUPERPAGE_SHIFT-PAGE_SHIFT)) - spage_table)
-#define pdx_to_spage(pdx) (spage_table + ((pdx)<<(SUPERPAGE_SHIFT-PAGE_SHIFT)))
+#define spage_to_pdx(spg) (((spg) - spage_table)<<(SUPERPAGE_SHIFT-PAGE_SHIFT))
+#define pdx_to_spage(pdx) (spage_table + ((pdx)>>(SUPERPAGE_SHIFT-PAGE_SHIFT)))
 /*
  * Note: These are solely for the use by page_{get,set}_owner(), and
  *       therefore don't need to handle the XEN_VIRT_{START,END} range.



((pg) - frame_table)=0A #define pdx_to_page(pdx) (frame_table + (pdx))=0A-#=
define spage_to_pdx(spg) ((spg>>(SUPERPAGE_SHIFT-PAGE_SHIFT)) - spage_table=
)=0A-#define pdx_to_spage(pdx) (spage_table + ((pdx)<<(SUPERPAGE_SHIFT-PAGE=
_SHIFT)))=0A+#define spage_to_pdx(spg) (((spg) - spage_table)<<(SUPERPAGE_S=
HIFT-PAGE_SHIFT))=0A+#define pdx_to_spage(pdx) (spage_table + ((pdx)>>(SUPE=
RPAGE_SHIFT-PAGE_SHIFT)))=0A /*=0A  * Note: These are solely for the use =
by page_{get,set}_owner(), and=0A  *       therefore don't need to handle =
the XEN_VIRT_{START,END} range.=0A
--=__Part2312BB66.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2312BB66.0__=--


From xen-devel-bounces@lists.xen.org Mon Dec 10 10:25:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0Y5-0005k3-ED; Mon, 10 Dec 2012 10:25:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti0Y3-0005jp-13
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:25:19 +0000
Received: from [85.158.137.99:21872] by server-5.bemta-3.messagelabs.com id
	78/6D-26311-D88B5C05; Mon, 10 Dec 2012 10:25:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355135098!16934069!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32559 invoked from network); 10 Dec 2012 10:24:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 10:24:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 10:24:58 +0000
Message-Id: <50C5C68602000078000AF565@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 10:24:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2312BB66.0__="
Subject: [Xen-devel] [PATCH] x86: frame table related improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2312BB66.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- fix super page frame table setup for memory hotplug case (should
  create full table, or else the hotplug code would need to do the
  necessary table population)
- simplify super page frame table setup (can re-use frame table setup
  code)
- slightly streamline frame table setup code
- fix (tighten) a BUG_ON() and an ASSERT() condition
- fix spage <-> pdx conversion macros (they had no users so far, and
  hence no-one noticed how broken they were)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -182,28 +182,6 @@ static uint32_t base_disallow_mask;
       !is_hvm_domain(d)) ?                                      \
      L1_DISALLOW_MASK : (L1_DISALLOW_MASK & ~PAGE_CACHE_ATTRS))
 
-static void __init init_spagetable(void)
-{
-    unsigned long s, start = SPAGETABLE_VIRT_START;
-    unsigned long end = SPAGETABLE_VIRT_END;
-    unsigned long step, mfn;
-    unsigned int max_entries;
-
-    step = 1UL << PAGETABLE_ORDER;
-    max_entries = (max_pdx + ((1UL<<SUPERPAGE_ORDER)-1)) >> SUPERPAGE_ORDER;
-    end = start + (((max_entries * sizeof(*spage_table)) +
-                    ((1UL<<SUPERPAGE_SHIFT)-1)) & (~((1UL<<SUPERPAGE_SHIFT)-1)));
-
-    for (s = start; s < end; s += step << PAGE_SHIFT)
-    {
-        mfn = alloc_boot_pages(step, step);
-        if ( !mfn )
-            panic("Not enough memory for spage table");
-        map_pages_to_xen(s, mfn, step, PAGE_HYPERVISOR);
-    }
-    memset((void *)start, 0, end - start);
-}
-
 static void __init init_frametable_chunk(void *start, void *end)
 {
     unsigned long s = (unsigned long)start;
@@ -232,15 +210,25 @@ static void __init init_frametable_chunk
     }
 
     memset(start, 0, end - start);
-    memset(end, -1, s - (unsigned long)end);
+    memset(end, -1, s - e);
+}
+
+static void __init init_spagetable(void)
+{
+    BUILD_BUG_ON(XEN_VIRT_END > SPAGETABLE_VIRT_START);
+
+    init_frametable_chunk(spage_table,
+                          mem_hotplug ? (void *)SPAGETABLE_VIRT_END
+                                      : pdx_to_spage(max_pdx - 1) + 1);
 }
 
 void __init init_frametable(void)
 {
     unsigned int sidx, eidx, nidx;
     unsigned int max_idx = (max_pdx + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT;
+    struct page_info *end_pg, *top_pg;
 
-    BUILD_BUG_ON(XEN_VIRT_END > FRAMETABLE_VIRT_END);
+    BUILD_BUG_ON(XEN_VIRT_END > FRAMETABLE_VIRT_START);
     BUILD_BUG_ON(FRAMETABLE_VIRT_START & ((1UL << L2_PAGETABLE_SHIFT) - 1));
 
     for ( sidx = 0; ; sidx = nidx )
@@ -252,17 +240,13 @@ void __init init_frametable(void)
         init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
                               pdx_to_page(eidx * PDX_GROUP_COUNT));
     }
-    if ( !mem_hotplug )
-        init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
-                              pdx_to_page(max_pdx - 1) + 1);
-    else
-    {
-        init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
-                              pdx_to_page(max_idx * PDX_GROUP_COUNT - 1) + 1);
-        memset(pdx_to_page(max_pdx), -1,
-               (unsigned long)pdx_to_page(max_idx * PDX_GROUP_COUNT) -
-               (unsigned long)pdx_to_page(max_pdx));
-    }
+
+    end_pg = pdx_to_page(max_pdx - 1) + 1;
+    top_pg = mem_hotplug ? pdx_to_page(max_idx * PDX_GROUP_COUNT - 1) + 1
+                         : end_pg;
+    init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT), top_pg);
+    memset(end_pg, -1, (unsigned long)top_pg - (unsigned long)end_pg);
+
     if (opt_allow_superpage)
         init_spagetable();
 }
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -301,7 +301,7 @@ static inline struct page_info *__virt_t
 
 static inline void *__page_to_virt(const struct page_info *pg)
 {
-    ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
+    ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_SIZE);
    /*
     * (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg). The
     * division and re-multiplication avoids one shift when sizeof(*pg) is a
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -46,8 +46,8 @@ extern void pfn_pdx_hole_setup(unsigned
 
 #define page_to_pdx(pg)  ((pg) - frame_table)
 #define pdx_to_page(pdx) (frame_table + (pdx))
-#define spage_to_pdx(spg) ((spg>>(SUPERPAGE_SHIFT-PAGE_SHIFT)) - spage_table)
-#define pdx_to_spage(pdx) (spage_table + ((pdx)<<(SUPERPAGE_SHIFT-PAGE_SHIFT)))
+#define spage_to_pdx(spg) (((spg) - spage_table)<<(SUPERPAGE_SHIFT-PAGE_SHIFT))
+#define pdx_to_spage(pdx) (spage_table + ((pdx)>>(SUPERPAGE_SHIFT-PAGE_SHIFT)))
 /*
  * Note: These are solely for the use by page_{get,set}_owner(), and
  *       therefore don't need to handle the XEN_VIRT_{START,END} range.



--=__Part2312BB66.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2312BB66.0__=--


From xen-devel-bounces@lists.xen.org Mon Dec 10 10:26:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0ZN-0005p5-Tm; Mon, 10 Dec 2012 10:26:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti0ZM-0005ow-W2
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 10:26:41 +0000
Received: from [85.158.139.211:19537] by server-12.bemta-5.messagelabs.com id
	C3/48-02275-0E8B5C05; Mon, 10 Dec 2012 10:26:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355134955!18290613!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27658 invoked from network); 10 Dec 2012 10:22:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 10:22:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="28904"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 10:22:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 10:22:35 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti0VP-00033m-8Z	for xen-devel@lists.xensource.com;
	Mon, 10 Dec 2012 10:22:35 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti0VP-0000oA-4z	for
	xen-devel@lists.xensource.com; Mon, 10 Dec 2012 10:22:35 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20677.47083.50190.657814@mariner.uk.xensource.com>
Date: Mon, 10 Dec 2012 10:22:35 +0000
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
In-Reply-To: <osstest-14659-mainreport@xen.org>
References: <osstest-14659-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [xen-4.1-testing test] 14659: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[xen-4.1-testing test] 14659: trouble: blocked/broken/fail/pass"):
> flight 14659 xen-4.1-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/14659/
> 
> Failures and problems with tests :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-i386-i386-xl             3 host-install(3)         broken REGR. vs. 14566

These failures are due to an infrastructure problem which I think I have
just fixed.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:34:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:34:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0gc-00066r-TG; Mon, 10 Dec 2012 10:34:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti0gb-00066k-Er
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 10:34:09 +0000
Received: from [85.158.143.99:24055] by server-1.bemta-4.messagelabs.com id
	D1/AC-28401-0AAB5C05; Mon, 10 Dec 2012 10:34:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1355135644!19148065!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1609 invoked from network); 10 Dec 2012 10:34:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 10:34:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="29263"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 10:34:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 10:34:03 +0000
Message-ID: <1355135642.21160.0.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: maheen butt <maheen_butt26@yahoo.com>
Date: Mon, 10 Dec 2012 10:34:02 +0000
In-Reply-To: <1355120859.46777.YahooMailNeo@web120003.mail.ne1.yahoo.com>
References: <1355120859.46777.YahooMailNeo@web120003.mail.ne1.yahoo.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] smp_prepare_boot_cpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-10 at 06:27 +0000, maheen butt wrote:
> Hi,
> 
> 
> start_xen() (xen/arch/x86/setup.c) calls the function
> smp_prepare_boot_cpu(), which also exists in the vanilla kernel.
> The kernel version is given below:

I'm not sure you can infer, just because these functions are similarly
named in Linux and Xen, that they should have the same behaviour /
semantics.

> void __init native_smp_prepare_boot_cpu(void)
> {
>     int me = smp_processor_id();
>     switch_to_new_gdt(me);
>     /* already set me in cpu_online_mask in boot_cpu_init() */
>     cpumask_set_cpu(me, cpu_callout_mask);
>     per_cpu(cpu_state, me) = CPU_ONLINE;
> }
> 
> 
> Whereas in the case of Xen we have:
> void __init smp_prepare_boot_cpu(void)
> {
>     cpumask_set_cpu(smp_processor_id(), &cpu_online_map);
>     cpumask_set_cpu(smp_processor_id(), &cpu_present_map);
> }
> My question is that why there is no need to change gdt pointer to
> current cpu segment as it is done in case of kernel code?

Perhaps it is done somewhere else?

IIRC the gdtr contains a virtual address and Xen uses a fixed virtual
address for the GDT with per-PCPU mappings, so perhaps loading the gdtr
here is simply not necessary.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:34:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:34:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0gc-00066r-TG; Mon, 10 Dec 2012 10:34:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti0gb-00066k-Er
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 10:34:09 +0000
Received: from [85.158.143.99:24055] by server-1.bemta-4.messagelabs.com id
	D1/AC-28401-0AAB5C05; Mon, 10 Dec 2012 10:34:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1355135644!19148065!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1609 invoked from network); 10 Dec 2012 10:34:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 10:34:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="29263"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 10:34:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 10:34:03 +0000
Message-ID: <1355135642.21160.0.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: maheen butt <maheen_butt26@yahoo.com>
Date: Mon, 10 Dec 2012 10:34:02 +0000
In-Reply-To: <1355120859.46777.YahooMailNeo@web120003.mail.ne1.yahoo.com>
References: <1355120859.46777.YahooMailNeo@web120003.mail.ne1.yahoo.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] smp_prepare_boot_cpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-10 at 06:27 +0000, maheen butt wrote:
> Hi,
> 
> 
> In start_xen():xen/arch/x86/setup.c
> has function smp_prepare_boot_cpu() which also exist in vanilla kernel
> the kernel version is given that:

I'm not sure that you can infer that because these functions are
similarly named in Linux and Xen that they should have the same
behaviour / semantics.

> void __init native_smp_prepare_boot_cpu(void)
> {
>     int me = smp_processor_id();
>     switch_to_new_gdt(me);
>     /* already set me in cpu_online_mask in boot_cpu_init() */
>     cpumask_set_cpu(me, cpu_callout_mask);
>     per_cpu(cpu_state, me) = CPU_ONLINE;
> }
> 
> 
> Whereas in the case of Xen we have:
> void __init smp_prepare_boot_cpu(void)
> {
>     cpumask_set_cpu(smp_processor_id(), &cpu_online_map);
>     cpumask_set_cpu(smp_processor_id(), &cpu_present_map);
> }
> My question is: why is there no need to change the GDT pointer to the
> current CPU's segment, as is done in the kernel code?

Perhaps it is done somewhere else?

IIRC the gdtr contains a virtual address and Xen uses a fixed virtual
address for the GDT with per-PCPU mappings, so perhaps loading the gdtr
here is simply not necessary.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 10:45:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 10:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti0rY-0006Jt-3d; Mon, 10 Dec 2012 10:45:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti0rW-0006Jo-5p
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 10:45:26 +0000
Received: from [85.158.143.35:64510] by server-3.bemta-4.messagelabs.com id
	6F/DC-18211-54DB5C05; Mon, 10 Dec 2012 10:45:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355135867!4087956!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22083 invoked from network); 10 Dec 2012 10:37:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 10:37:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="29384"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 10:37:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 10:37:47 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti0k7-0003BT-H0; Mon, 10 Dec 2012 10:37:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti0k7-0000oc-Cr;
	Mon, 10 Dec 2012 10:37:47 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20677.47995.298291.120095@mariner.uk.xensource.com>
Date: Mon, 10 Dec 2012 10:37:47 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355134766.31710.119.camel@zakaz.uk.xensource.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
	<20674.16214.934271.479230@mariner.uk.xensource.com>
	<1355134766.31710.119.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "jfehlig@suse.com" <jfehlig@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
> I took Bamvor's most recent response to mean that a per-event lock was
> already in place in libvirt and inferred that this was the reason why
> the originally proposed one line fix worked for them. Perhaps I
> misunderstood?

Yes, I think that's what Bamvor meant but I don't think it's correct
that such a lock eliminates the race.  libvirt has to release that
lock before making the callback (to follow the libxl locking rules
which are necessary to avoid deadlock).

I'm not surprised that the original patch makes Bamvor's symptoms go
away.  Bamvor had one of the possible races (the fd-related one) but
not the other.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 11:13:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 11:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti1Ht-0006i4-P1; Mon, 10 Dec 2012 11:12:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Ti1Hs-0006hz-8I
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 11:12:40 +0000
Received: from [85.158.137.99:25985] by server-14.bemta-3.messagelabs.com id
	8B/70-31424-7A3C5C05; Mon, 10 Dec 2012 11:12:39 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355137956!13175142!1
X-Originating-IP: [209.85.223.174]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18907 invoked from network); 10 Dec 2012 11:12:37 -0000
Received: from mail-ie0-f174.google.com (HELO mail-ie0-f174.google.com)
	(209.85.223.174)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 11:12:37 -0000
Received: by mail-ie0-f174.google.com with SMTP id c11so7915518ieb.33
	for <xen-devel@lists.xen.org>; Mon, 10 Dec 2012 03:12:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=UXhY0RTJOmcY0kf+BrjiLrG/EZsQpzykuJH/BYcGwSk=;
	b=nYKw2fCK7m+63i1WhXaiUWSR/TczDaJ12RvkMG75W8BLd0Do2MI1A+HrMbpIWrFZOB
	06CEKIp1DYHqmH3kaS1V5aCFEjKkDnCZ/aeV4b+8ALndi++MdBWib0gRXiVAKGhg+XdX
	Vnwpd8i9iNWZOPL3W5L13DhMHy6H9mxUeuq/pSGzpY7LwukGkk4dFVl3Itx0lddWXev+
	lbbSxK3Zl55bD6UqKa8Cz0kwwbva8bvQwgd4+w2frEDRSFaKJ0tZ2o7jZcz0DA0rwror
	nnBBdp8yxgL5Q12ZJ415d6XPUjz7FyOozmJ3EkNcO7FsvVfoe/J3uyHVpRnN32CxIQoL
	V/7g==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr6241284igq.37.1355137955541; Mon,
	10 Dec 2012 03:12:35 -0800 (PST)
Received: by 10.64.37.39 with HTTP; Mon, 10 Dec 2012 03:12:35 -0800 (PST)
In-Reply-To: <CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
Date: Mon, 10 Dec 2012 19:12:35 +0800
X-Google-Sender-Auth: rK1GydDZB0Y0_dBtMdqSnNk_WWI
Message-ID: <CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1792337151670081720=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1792337151670081720==
Content-Type: multipart/alternative; boundary=14dae934111576335004d07da498

--14dae934111576335004d07da498
Content-Type: text/plain; charset=ISO-8859-1

Hello, could anybody help?

On Sun, Dec 9, 2012 at 1:00 PM, G.R. <firemeteor@users.sourceforge.net> wrote:

> I dug further and got confused.
> The host ISA bridge 00:1f.0 is automatically passed-through as part of the
> gfx_passthru magic.
> However, it is passed through as a PCI bridge:
> On host:   00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express
> Chipset LPC Controller [8086:1e4a] (rev 04)
> On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express Chipset
> LPC Controller [8086:1e4a] (rev 04)
>
> This is the case for both pure HVM and PVHVM. And this one exists in both
> cases too:
> 00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA
> [Natoma/Triton II] [8086:7000]
>
> And the intel_detect_pch() function only checks the first ISA bridge on the
> PCI bus:
> pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
>
> Unless there is magic elsewhere, I can't imagine the code would behave
> differently on the two builds.
> But what's the magic behind this?
>
> Also, is there any way to get rid of the ISA bridge emulated by qemu?
> I don't think this is ever required in most cases...
>
> Thanks,
> Timothy
>
>
>
> On Sun, Dec 9, 2012 at 1:43 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>
>> Hi all,
>> I'm debugging an issue that an HVM guest failed to produce any output
>> with IGD passed through.
>> This is a pure HVM Linux guest with the i915 driver compiled in directly.
>> A PVHVM kernel with the i915 driver compiled as a module works without issue.
>> I'm not yet sure which factor is more important, pure HVM or the I915=y
>> kernel config.
>>
>> The direct cause of the missing output is that the driver does not select
>> the Display PLL properly, which in turn is due to failing to detect the
>> PCH type properly.
>>
>> Strangely enough, the intel_detect_pch() function works by checking the
>> device ID of the ISA bridge coming with the chipset:
>>
>> /*
>>  * The reason to probe ISA bridge instead of Dev31:Fun0 is to
>>  * make graphics device passthrough work easy for VMM, that only
>>  * need to expose ISA bridge to let driver know the real hardware
>>  * underneath. This is a requirement from virtualization team.
>>  */
>>
>> I added some debug output in this function and found that it obtained
>> a strange device ID:
>> [ 1.005423] [drm] intel pch detect, found 00007000
>>
>> This looks like the ISA bridge provided by qemu:
>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton
>> II]
>> 00:01.0 0601: 8086:7000
>>
>> However, I can find the same device on a PVHVM Linux guest, and there
>> intel_detect_pch() is not fooled by it. Is it due to the I915=m config or
>> some magic played by PVOPS? Any suggestions on how to fix this?
>>
>> Thanks,
>> Timothy
>>
>
>

--14dae934111576335004d07da498--


--===============1792337151670081720==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1792337151670081720==--


From xen-devel-bounces@lists.xen.org Mon Dec 10 11:17:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 11:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti1Mm-0006sj-O4; Mon, 10 Dec 2012 11:17:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Ti1Mk-0006sX-PM
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 11:17:43 +0000
Received: from [85.158.137.99:48487] by server-12.bemta-3.messagelabs.com id
	05/39-22757-1D4C5C05; Mon, 10 Dec 2012 11:17:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1355138256!18685633!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2373 invoked from network); 10 Dec 2012 11:17:36 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 11:17:36 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so1190453wgb.32
	for <xen-devel@lists.xen.org>; Mon, 10 Dec 2012 03:17:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=P9R/o6cVIFLC8Y5edOF26S1Zvd4bKd7FdmEyZJjBi98=;
	b=SvX46IFmLiql151cHNxBNgQKiun2xzKclYVnu+s0zjvj1UZ+/0n88nvtF8JR+cnCCT
	40RLqTz03aKsTliIdZAWnVOTBB+Qg+7TRxtUAzyTZKvN3A0hCtvTww0o2vNjtwsfvDF9
	BYA5+W0PSanKowSPSM3RppmYOSKQDV0IT7LOgGGljWpRJ3f1DAlCkY02aykaqqvFX/RQ
	SUJ/SD7a7ttRUDZfmOsmh4oAZo1GnP6cOTRwMph0NHu2h4uKE8K3cjFFoOLw5lCiTw5G
	tKmq2pn0BajHeDX7kpevNgkSofRNy6fIcuVdfxUP/fotAMKrnBgQBh/tCY4Thxv9PmbE
	/5Ww==
Received: by 10.216.28.78 with SMTP id f56mr4879924wea.172.1355138255806;
	Mon, 10 Dec 2012 03:17:35 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id bz12sm11059998wib.5.2012.12.10.03.17.34
	(version=SSLv3 cipher=OTHER); Mon, 10 Dec 2012 03:17:35 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Mon, 10 Dec 2012 11:17:30 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCEB754A.5550F%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86: frame table related improvements
Thread-Index: Ac3Wx/Ff7ZeEa6H4VUS/c2z4zPPe7Q==
In-Reply-To: <50C5C68602000078000AF565@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: frame table related improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/2012 10:24, "Jan Beulich" <JBeulich@suse.com> wrote:

> - fix super page frame table setup for memory hotplug case (should
>   create full table, or else the hotplug code would need to do the
>   necessary table population)
> - simplify super page frame table setup (can re-use frame table setup
>   code)
> - slightly streamline frame table setup code
> - fix (tighten) a BUG_ON() and an ASSERT() condition
> - fix spage <-> pdx conversion macros (they had no users so far, and
>   hence no-one noticed how broken they were)
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -182,28 +182,6 @@ static uint32_t base_disallow_mask;
>        !is_hvm_domain(d)) ?                                      \
>       L1_DISALLOW_MASK : (L1_DISALLOW_MASK & ~PAGE_CACHE_ATTRS))
>  
> -static void __init init_spagetable(void)
> -{
> -    unsigned long s, start = SPAGETABLE_VIRT_START;
> -    unsigned long end = SPAGETABLE_VIRT_END;
> -    unsigned long step, mfn;
> -    unsigned int max_entries;
> -
> -    step = 1UL << PAGETABLE_ORDER;
> -    max_entries = (max_pdx + ((1UL<<SUPERPAGE_ORDER)-1)) >> SUPERPAGE_ORDER;
> -    end = start + (((max_entries * sizeof(*spage_table)) +
> -                    ((1UL<<SUPERPAGE_SHIFT)-1)) & (~((1UL<<SUPERPAGE_SHIFT)-1)));
> -
> -    for (s = start; s < end; s += step << PAGE_SHIFT)
> -    {
> -        mfn = alloc_boot_pages(step, step);
> -        if ( !mfn )
> -            panic("Not enough memory for spage table");
> -        map_pages_to_xen(s, mfn, step, PAGE_HYPERVISOR);
> -    }
> -    memset((void *)start, 0, end - start);
> -}
> -
>  static void __init init_frametable_chunk(void *start, void *end)
>  {
>      unsigned long s = (unsigned long)start;
> @@ -232,15 +210,25 @@ static void __init init_frametable_chunk
>      }
>  
>      memset(start, 0, end - start);
> -    memset(end, -1, s - (unsigned long)end);
> +    memset(end, -1, s - e);
> +}
> +
> +static void __init init_spagetable(void)
> +{
> +    BUILD_BUG_ON(XEN_VIRT_END > SPAGETABLE_VIRT_START);
> +
> +    init_frametable_chunk(spage_table,
> +                          mem_hotplug ? (void *)SPAGETABLE_VIRT_END
> +                                      : pdx_to_spage(max_pdx - 1) + 1);
>  }
>  
>  void __init init_frametable(void)
>  {
>      unsigned int sidx, eidx, nidx;
>      unsigned int max_idx = (max_pdx + PDX_GROUP_COUNT - 1) / PDX_GROUP_COUNT;
> +    struct page_info *end_pg, *top_pg;
>  
> -    BUILD_BUG_ON(XEN_VIRT_END > FRAMETABLE_VIRT_END);
> +    BUILD_BUG_ON(XEN_VIRT_END > FRAMETABLE_VIRT_START);
>      BUILD_BUG_ON(FRAMETABLE_VIRT_START & ((1UL << L2_PAGETABLE_SHIFT) - 1));
>  
>      for ( sidx = 0; ; sidx = nidx )
> @@ -252,17 +240,13 @@ void __init init_frametable(void)
>          init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
>                                pdx_to_page(eidx * PDX_GROUP_COUNT));
>      }
> -    if ( !mem_hotplug )
> -        init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
> -                              pdx_to_page(max_pdx - 1) + 1);
> -    else
> -    {
> -        init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT),
> -                              pdx_to_page(max_idx * PDX_GROUP_COUNT - 1) + 1);
> -        memset(pdx_to_page(max_pdx), -1,
> -               (unsigned long)pdx_to_page(max_idx * PDX_GROUP_COUNT) -
> -               (unsigned long)pdx_to_page(max_pdx));
> -    }
> +
> +    end_pg = pdx_to_page(max_pdx - 1) + 1;
> +    top_pg = mem_hotplug ? pdx_to_page(max_idx * PDX_GROUP_COUNT - 1) + 1
> +                         : end_pg;
> +    init_frametable_chunk(pdx_to_page(sidx * PDX_GROUP_COUNT), top_pg);
> +    memset(end_pg, -1, (unsigned long)top_pg - (unsigned long)end_pg);
> +
>      if (opt_allow_superpage)
>          init_spagetable();
>  }
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -301,7 +301,7 @@ static inline struct page_info *__virt_t
>  
>  static inline void *__page_to_virt(const struct page_info *pg)
>  {
> -    ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
> +    ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_SIZE);
>      /*
>       * (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg). The
>       * division and re-multiplication avoids one shift when sizeof(*pg) is a
> --- a/xen/include/asm-x86/x86_64/page.h
> +++ b/xen/include/asm-x86/x86_64/page.h
> @@ -46,8 +46,8 @@ extern void pfn_pdx_hole_setup(unsigned
>  
>  #define page_to_pdx(pg)  ((pg) - frame_table)
>  #define pdx_to_page(pdx) (frame_table + (pdx))
> -#define spage_to_pdx(spg) ((spg>>(SUPERPAGE_SHIFT-PAGE_SHIFT)) - spage_table)
> -#define pdx_to_spage(pdx) (spage_table + ((pdx)<<(SUPERPAGE_SHIFT-PAGE_SHIFT)))
> +#define spage_to_pdx(spg) (((spg) - spage_table)<<(SUPERPAGE_SHIFT-PAGE_SHIFT))
> +#define pdx_to_spage(pdx) (spage_table + ((pdx)>>(SUPERPAGE_SHIFT-PAGE_SHIFT)))
>  /*
>   * Note: These are solely for the use by page_{get,set}_owner(), and
>   *       therefore don't need to handle the XEN_VIRT_{START,END} range.
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
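
The corrected spage <-> pdx conversion macros above can be sanity-checked with a small standalone harness. This is a sketch, not Xen code: the shift values are the usual x86-64 ones (4 KiB pages, 2 MiB superpages), and spage_table here is a dummy local array standing in for Xen's real table.

```c
#include <assert.h>

/* Assumed x86-64 values: 4 KiB pages, 2 MiB superpages. */
#define PAGE_SHIFT      12
#define SUPERPAGE_SHIFT 21

struct spage_info { unsigned long type_info; };

/* Dummy table base standing in for Xen's spage_table. */
static struct spage_info spage_table[4];

/* The corrected macros from the patch. */
#define spage_to_pdx(spg) (((spg) - spage_table) << (SUPERPAGE_SHIFT - PAGE_SHIFT))
#define pdx_to_spage(pdx) (spage_table + ((pdx) >> (SUPERPAGE_SHIFT - PAGE_SHIFT)))

/* Round-trip: map a pdx to its superpage entry and back; the result is
 * the first pdx covered by that superpage. */
unsigned long superpage_base_pdx(unsigned long pdx)
{
    return spage_to_pdx(pdx_to_spage(pdx));
}
```

The round trip makes the earlier breakage visible: with the old macros (shift directions swapped), pdx_to_spage scaled the index up instead of down, indexing far past the table.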



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 12:16:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2H6-0007dR-Ok; Mon, 10 Dec 2012 12:15:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Ti2H4-0007dM-GT
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:15:54 +0000
Received: from [85.158.143.99:33528] by server-1.bemta-4.messagelabs.com id
	FA/AC-28401-972D5C05; Mon, 10 Dec 2012 12:15:53 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1355141751!22404071!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19606 invoked from network); 10 Dec 2012 12:15:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:15:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,250,1355097600"; 
   d="scan'208";a="32008"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 12:15:52 +0000
Received: from [192.168.1.30] (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 12:15:51 +0000
Message-ID: <50C5D276.6090009@citrix.com>
Date: Mon, 10 Dec 2012 13:15:50 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
	<1354630913-17287-2-git-send-email-roger.pau@citrix.com>
	<20121207202003.GA9462@phenom.dumpdata.com>
In-Reply-To: <20121207202003.GA9462@phenom.dumpdata.com>
Cc: "sfr@canb.auug.org.au" <sfr@canb.auug.org.au>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
 llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
>> Implement a safe version of llist_for_each_entry, and use it in
>> blkif_free. Previously grants were freed while iterating the list,
>> which led to dereferences when trying to fetch the next item.
> 
> Looks like xen-blkfront is the only user of this llist_for_each_entry.
> 
> Would it be more prudent to put the macro in the llist.h file?

I'm not able to find out who the maintainer of llist is; should I just
CC its author?
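
The fix under discussion is to cache the next pointer before the loop body runs, so the body may free the current entry. A minimal userspace sketch of that idea (these are stand-in types and names, not the actual <linux/llist.h> definitions or the blkfront code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-in for the kernel's lock-less list node. */
struct llist_node { struct llist_node *next; };

/* Safe iteration: fetch 'next' BEFORE the body runs, so the body may
 * free the current node -- the bug blkif_free hit when it freed grants
 * while walking the list with the unsafe iterator. */
#define llist_for_each_safe(pos, n, head) \
    for ((pos) = (head); (pos) != NULL && ((n) = (pos)->next, 1); (pos) = (n))

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical entry type playing the role of blkfront's granted frames. */
struct grant { int ref; struct llist_node node; };

struct llist_node *grant_push(struct llist_node *head, int ref)
{
    struct grant *g = malloc(sizeof(*g));
    g->ref = ref;
    g->node.next = head;
    return &g->node;
}

/* Free every entry while iterating; returns the sum of refs freed. */
int grant_free_all(struct llist_node *head)
{
    struct llist_node *pos, *n;
    int sum = 0;
    llist_for_each_safe(pos, n, head) {
        struct grant *g = container_of(pos, struct grant, node);
        sum += g->ref;
        free(g); /* safe: 'n' already holds pos->next */
    }
    return sum;
}
```

With the unsafe form, the iterator would read pos->next after free(g), i.e. a use-after-free on every step.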


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 12:27:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2Rl-0007nx-Vk; Mon, 10 Dec 2012 12:26:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti2Rl-0007ns-2I
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:26:57 +0000
Received: from [85.158.138.51:32351] by server-8.bemta-3.messagelabs.com id
	3C/E5-07786-015D5C05; Mon, 10 Dec 2012 12:26:56 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355142413!28283996!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12648 invoked from network); 10 Dec 2012 12:26:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:26:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="93825"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 12:26:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 07:26:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti2Rg-0008Sj-Tk;
	Mon, 10 Dec 2012 12:26:52 +0000
Date: Mon, 10 Dec 2012 12:26:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1438485043-1355142294=:4633"
Content-ID: <alpine.DEB.2.02.1212101225200.4633@kaball.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	allen.m.kay@intel.com, eddie.dong@intel.com,
	xen-devel <xen-devel@lists.xen.org>, dongxiao.xu@intel.com,
	xiantao.zhang@intel
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1438485043-1355142294=:4633
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1212101225201.4633@kaball.uk.xensource.com>

CC'ing some engineers that could have some useful suggestions

On Mon, 10 Dec 2012, G.R. wrote:
> Hello, could anybody help?
>
> On Sun, Dec 9, 2012 at 1:00 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>       I dug further and got confused.
>       The host ISA bridge 00:1f.0 is automatically passed-through as part of the gfx_passthru magic.
>       However, it is passed through as a PCI bridge:
>       On host:   00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>       On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>
>       This is both the case for pure HVM && PVHVM. And this one exists for both cases too:
>       00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
>
>       And the intel_detect_pch() function only checks the first ISA bridge on the PCI bus:
>       pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
>
>       Unless there is magic elsewhere, I can't imagine the code would behave differently on the two builds.
>       But what's the magic behind this?
>
>       Also, is there any way to get rid of the ISA bridge emulated by qemu?
>       I don't think this is ever required for most cases...
>
>       Thanks,
>       Timothy
>
>
>       On Sun, Dec 9, 2012 at 1:43 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>
>             Hi all,
>             I'm debugging an issue where an HVM guest failed to produce any output with IGD passed through.
>             This is a pure HVM linux guest with the i915 driver directly compiled in.
>             A PVHVM kernel with the i915 driver compiled as a module works without issue.
>             I'm not yet sure which factor is more important, pure HVM or the I915=y kernel config.
>
>             The direct cause of no output is that the driver does not select the Display PLL properly, which is in turn
>             due to failing to detect the pch type properly.
>
>             Strangely enough, the intel_detect_pch() function works by checking the device ID of the ISA bridge
>             coming with the chipset:
>
>             /*
>              * The reason to probe ISA bridge instead of Dev31:Fun0 is to
>              * make graphics device passthrough work easy for VMM, that only
>              * need to expose ISA bridge to let driver know the real hardware
>              * underneath. This is a requirement from virtualization team.
>              */
>
>             I added some debug output in this function and found out that it obtained a strange device ID:
>             [ 1.005423] [drm] intel pch detect, found 00007000
>
>             This looks like the ISA bridge provided by qemu:
>             00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>             00:01.0 0601: 8086:7000
>
>             However, I can find the same device on a PVHVM linux guest, but the intel_detect_pch() is not fooled by
>             that. Is it due to I915=m config or some magic played by PVOPS? Any suggestion how to fix this?
>
>             Thanks,
>             Timothy
>
>
>
>
>
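
The failure mode Timothy describes (the emulated PIIX3 being the first ISA bridge and masking the real PCH) can be illustrated with a toy scan. This is a sketch, not i915's actual code: pci_get_class is replaced by a plain array walk, and only two of the PCH device-ID prefixes are shown.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical device records standing in for the guest's PCI bus; in
 * the failing guest the emulated PIIX3 ISA bridge appears before the
 * passed-through PCH's LPC bridge. */
struct pci_dev { uint16_t vendor; uint16_t device; };

#define PCI_VENDOR_ID_INTEL 0x8086
/* Two of the PCH LPC device-ID prefixes i915 recognises (abbreviated). */
#define INTEL_PCH_CPT_DEVICE_ID_TYPE 0x1c00  /* Cougar Point */
#define INTEL_PCH_PPT_DEVICE_ID_TYPE 0x1e00  /* Panther Point (H77) */

/* Keep scanning ISA bridges instead of trusting only the first one, so
 * QEMU's PIIX3 (8086:7000) no longer masks the real PCH.  Returns the
 * recognised device-ID prefix, or 0 if no PCH is found. */
uint16_t detect_pch(const struct pci_dev *isa_bridges, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint16_t id = isa_bridges[i].device & 0xff00;
        if (isa_bridges[i].vendor != PCI_VENDOR_ID_INTEL)
            continue;
        if (id == INTEL_PCH_CPT_DEVICE_ID_TYPE ||
            id == INTEL_PCH_PPT_DEVICE_ID_TYPE)
            return id;
    }
    return 0;
}
```

Scanning with only the first bridge visible reproduces the bug report (PIIX3's 0x7000 matches no PCH type), while scanning the whole list finds the H77's 8086:1e4a and classifies it as Panther Point.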
--1342847746-1438485043-1355142294=:4633
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1438485043-1355142294=:4633--


From xen-devel-bounces@lists.xen.org Mon Dec 10 12:27:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2Rl-0007nx-Vk; Mon, 10 Dec 2012 12:26:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti2Rl-0007ns-2I
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:26:57 +0000
Received: from [85.158.138.51:32351] by server-8.bemta-3.messagelabs.com id
	3C/E5-07786-015D5C05; Mon, 10 Dec 2012 12:26:56 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355142413!28283996!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12648 invoked from network); 10 Dec 2012 12:26:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:26:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="93825"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 12:26:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 07:26:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti2Rg-0008Sj-Tk;
	Mon, 10 Dec 2012 12:26:52 +0000
Date: Mon, 10 Dec 2012 12:26:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1438485043-1355142294=:4633"
Content-ID: <alpine.DEB.2.02.1212101225200.4633@kaball.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	allen.m.kay@intel.com, eddie.dong@intel.com,
	xen-devel <xen-devel@lists.xen.org>, dongxiao.xu@intel.com,
	xiantao.zhang@intel
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1438485043-1355142294=:4633
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1212101225201.4633@kaball.uk.xensource.com>

CC'ing some engineers who may have useful suggestions.

On Mon, 10 Dec 2012, G.R. wrote:
> Hello, could anybody help?
>
> On Sun, Dec 9, 2012 at 1:00 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>       I dug further and got confused.
>       The host ISA bridge 00:1f.0 is automatically passed through as part of the gfx_passthru magic.
>       However, it is passed through as a PCI bridge:
>       On host:  00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>       On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>
>       This is the case for both pure HVM and PVHVM. And this device exists in both cases too:
>       00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
>
>       And the intel_detect_pch() function only checks the first ISA bridge on the PCI bus:
>       pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
>
>       Unless there is magic elsewhere, I can't imagine the code would behave differently on the two builds.
>       But what's the magic behind this?
>
>       Also, is there any way to get rid of the ISA bridge emulated by qemu?
>       I don't think it is ever required in most cases...
>
>       Thanks,
>       Timothy
>
>
>       On Sun, Dec 9, 2012 at 1:43 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>
>             Hi all,
>             I'm debugging an issue where an HVM guest fails to produce any output with IGD passed through.
>             This is a pure HVM Linux guest with the i915 driver compiled in directly.
>             A PVHVM kernel with the i915 driver compiled as a module works without issue.
>             I'm not yet sure which factor matters more: pure HVM, or the I915=y kernel config.
>
>             The direct cause of the missing output is that the driver does not select the display PLL
>             properly, which in turn is due to failing to detect the PCH type properly.
>
>             Strangely enough, the intel_detect_pch() function works by checking the device ID of the
>             ISA bridge coming with the chipset:
>
>             /*
>              * The reason to probe ISA bridge instead of Dev31:Fun0 is to
>              * make graphics device passthrough work easy for VMM, that only
>              * need to expose ISA bridge to let driver know the real hardware
>              * underneath. This is a requirement from virtualization team.
>              */
>
>             I added some debug output in this function and found that it obtained a strange device ID:
>             [ 1.005423] [drm] intel pch detect, found 00007000
>
>             This looks like the ISA bridge provided by qemu:
>             00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>             00:01.0 0601: 8086:7000
>
>             However, I can find the same device on a PVHVM Linux guest, and yet intel_detect_pch() is
>             not fooled by it. Is this due to the I915=m config, or some magic played by PVOPS?
>             Any suggestion on how to fix this?
>
>             Thanks,
>             Timothy
>
>
>
>
>
--1342847746-1438485043-1355142294=:4633
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1438485043-1355142294=:4633--


From xen-devel-bounces@lists.xen.org Mon Dec 10 12:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2bJ-0007xr-8j; Mon, 10 Dec 2012 12:36:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti2bH-0007xm-Ku
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:36:47 +0000
Received: from [193.109.254.147:23460] by server-7.bemta-14.messagelabs.com id
	41/71-02272-E57D5C05; Mon, 10 Dec 2012 12:36:46 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355143003!9546739!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26571 invoked from network); 10 Dec 2012 12:36:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:36:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="94462"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 12:36:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 07:36:42 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti2bC-0000B4-G0;
	Mon, 10 Dec 2012 12:36:42 +0000
Date: Mon, 10 Dec 2012 12:36:41 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212101227490.4633@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1759592547-1355142773=:4633"
Content-ID: <alpine.DEB.2.02.1212101232560.4633@kaball.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] qemu-xen-trad/pt_msi_disable: do not clear all
	MSI flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1759592547-1355142773=:4633
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1212101232561.4633@kaball.uk.xensource.com>

"qemu-xen-trad: fix msi_translate with PV event delivery" added a
pt_msi_disable() call into pt_msgctrl_reg_write, clearing the MSI flags
as a consequence. MSIs get enabled again soon after by calling
pt_msi_setup.

However, the MSI flags are only set up once, in the pt_msgctrl_reg_init
function, so from the QEMU point of view the device has lost some
important properties, for example PCI_MSI_FLAGS_64BIT.

This patch fixes the bug by clearing only the MSI
enabled/mapped/initialized flags in pt_msi_disable.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: G.R. <firemeteor@users.sourceforge.net>
Xen-devel: http://marc.info/?l=xen-devel&m=135489879503075

diff --git a/hw/pt-msi.c b/hw/pt-msi.c
index 73f737d..b03b989 100644
--- a/hw/pt-msi.c
+++ b/hw/pt-msi.c
@@ -213,7 +213,7 @@ void pt_msi_disable(struct pt_dev *dev)
 
 out:
     /* clear msi info */
-    dev->msi->flags = 0;
+    dev->msi->flags &= ~(MSI_FLAG_UNINIT | PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
     dev->msi->pirq = -1;
     dev->msi_trans_en = 0;
 }
--1342847746-1759592547-1355142773=:4633
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1759592547-1355142773=:4633--


From xen-devel-bounces@lists.xen.org Mon Dec 10 12:37:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2ba-0007yh-MZ; Mon, 10 Dec 2012 12:37:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Ti2bY-0007yV-NB
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:37:04 +0000
Received: from [85.158.143.99:13265] by server-1.bemta-4.messagelabs.com id
	7B/6E-28401-077D5C05; Mon, 10 Dec 2012 12:37:04 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355143022!27828006!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_TEST_2,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20092 invoked from network); 10 Dec 2012 12:37:03 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:37:03 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so2869901vbi.32
	for <xen-devel@lists.xen.org>; Mon, 10 Dec 2012 04:37:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=S/zXgVjOQ1XFpML7OsmFSw/3mU79TEi4ttoAt4FpHbY=;
	b=uPCh6UFbg9MoJdNkMb14Q2dgU9QseA10Y6guU5QIQWRPl+TOElxzflB7INMp7ZVzqV
	5xH8NLZ13RQwNtFlTt8Oxa8JV0HZNwDWWlOfix2vea0/t6cNmr70A++Ggo5NJkRDVbP8
	PwwmJ2F3s0z5uWTuaaIj4PpInsWjZOUgIM2AfNN4DYhA3KUEqtsbyNz5gi+SWvGj5vNu
	RO4yq/r1hcozUkaC4Zn2C/XuBXX8oCMxvLTgjTsUc0EvW5tmxPqMqkk0yxsJwjcOWsgS
	4uEq/uAKGLdYAl3tm6oKTjDf5r4TQnBGo/Q2W2C4AcMtOwYgDEhVu+ZSEJBBlGZtx0/G
	7vcQ==
MIME-Version: 1.0
Received: by 10.58.64.51 with SMTP id l19mr9154838ves.15.1355143021773; Mon,
	10 Dec 2012 04:37:01 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Mon, 10 Dec 2012 04:37:01 -0800 (PST)
In-Reply-To: <CACaajQuzns_hFFwR86akZ57Zh9O54zdGOL3yEyqUquVB3W8T5Q@mail.gmail.com>
References: <CACaajQuzns_hFFwR86akZ57Zh9O54zdGOL3yEyqUquVB3W8T5Q@mail.gmail.com>
Date: Mon, 10 Dec 2012 12:37:01 +0000
X-Google-Sender-Auth: pWQPjuttj3yv8Z2lbdfB1oNmKgQ
Message-ID: <CAFLBxZYSnHB3GvkwaKdrJZGuNU5m=QQ+gohFaY1=r9tcbvpf5Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] migrate vps from xm to xl toolstack
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7765851455012727175=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7765851455012727175==
Content-Type: multipart/alternative; boundary=047d7b6d81026ec1d804d07ed2f3

--047d7b6d81026ec1d804d07ed2f3
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 7, 2012 at 5:46 AM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:

> Hello, Xen team.
> As I understand it, migrating from xm to xl is not possible.
> But is it possible to migrate a VPS by hand? For example, on the xm node
> I do xm save, and on the xl node I do xl restore...?
>

I take it you're looking to do a "rolling upgrade" of a set of servers?  I
don't think I had thought of that before. :-)

A couple of points though:
* The save file is actually not made by xm or xl, but by the lower-level xc
libraries which are shared by both.  So although I don't think we've tested
saving with xm and restoring with xl, I have every reason to think it will
work just fine.
* Switching from xm to xl and back should just be a matter of starting or
stopping xend.  You could try starting xend on the "upgraded" side
temporarily, just long enough to do the migration, and then shut it down
again.  There are certain commands which xl can't safely do if xend is
running, but xl will automatically look for xend and refuse to execute if
that's the case.  Then you can just shut xend down, and xl will work again.

Obviously you should test this out on some non-critical VMs before doing
it on production VMs. :-)

 -George


> --
> Vasiliy Tolstov,
> Clodo.ru
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--047d7b6d81026ec1d804d07ed2f3--


--===============7765851455012727175==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7765851455012727175==--


From xen-devel-bounces@lists.xen.org Mon Dec 10 12:40:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2eD-00089S-An; Mon, 10 Dec 2012 12:39:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti2eC-00089J-HH
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:39:48 +0000
Received: from [85.158.138.51:18062] by server-7.bemta-3.messagelabs.com id
	0D/C1-01713-318D5C05; Mon, 10 Dec 2012 12:39:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355143186!28178951!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4677 invoked from network); 10 Dec 2012 12:39:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:39:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="32631"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 12:39:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 12:39:46 +0000
Message-ID: <1355143184.21160.10.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Mon, 10 Dec 2012 12:39:44 +0000
In-Reply-To: <CAFLBxZYSnHB3GvkwaKdrJZGuNU5m=QQ+gohFaY1=r9tcbvpf5Q@mail.gmail.com>
References: <CACaajQuzns_hFFwR86akZ57Zh9O54zdGOL3yEyqUquVB3W8T5Q@mail.gmail.com>
	<CAFLBxZYSnHB3GvkwaKdrJZGuNU5m=QQ+gohFaY1=r9tcbvpf5Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Vasiliy Tolstov <v.tolstov@selfip.ru>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] migrate vps from xm to xl toolstack
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-10 at 12:37 +0000, George Dunlap wrote:
> On Fri, Dec 7, 2012 at 5:46 AM, Vasiliy Tolstov <v.tolstov@selfip.ru>
> wrote:
>         Hello, xen  team.
>         As i understand migrate from xm to xl is not possible.
>         But is that possible to migrate vps by hands? For example on
>         xm node i
>         do xm save and on xl node doins xl restore... ?
> 
> I take it you're looking to do a "rolling upgrade" of a set of
> servers?  I don't think I had thought of that before. :-)
> 
> A couple of points though:
> * The save file is actually not made by xm or xl, but by the
> lower-level xc libraries which are shared by both.  So although I
> don't think we've tested saving with xm and restoring with xl, I have
> every reason to think it will work just fine.

Ian J tested this as part of the 4.2.0 release; it could be made to work
with a couple of simple tricks. Details ought to be in the list
archives.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 12:54:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 12:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2sd-00004z-0A; Mon, 10 Dec 2012 12:54:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Ti2sb-00004u-0U
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:54:41 +0000
Received: from [85.158.139.83:59127] by server-16.bemta-5.messagelabs.com id
	D1/AF-09208-09BD5C05; Mon, 10 Dec 2012 12:54:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355144078!25289658!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14529 invoked from network); 10 Dec 2012 12:54:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:54:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="95724"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 12:54:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 07:54:37 -0500
Received: from [10.80.239.153]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id 1Ti2sW-0000Uz-SR;
	Mon, 10 Dec 2012 12:54:37 +0000
Message-ID: <50C5DB8C.4050601@citrix.com>
Date: Mon, 10 Dec 2012 12:54:36 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121127 Icedove/10.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1354830148@andrewcoop.uk.xensource.com>
	<3757511a785287066cfd.1354830150@andrewcoop.uk.xensource.com>
	<50C1E67F02000078000AEEEF@nat28.tlf.novell.com>
	<50C25D3D.10407@citrix.com>
	<50C5BA2A02000078000AF4F4@nat28.tlf.novell.com>
In-Reply-To: <50C5BA2A02000078000AF4F4@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 V3] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/12 09:32, Jan Beulich wrote:
>>>> On 07.12.12 at 22:18, Andrew Cooper<andrew.cooper3@citrix.com>  wrote:
>> On 07/12/2012 11:52, Jan Beulich wrote:
>>>>>> On 06.12.12 at 22:42, Andrew Cooper<andrew.cooper3@citrix.com>  wrote:
>>>> +
>>>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>>>> +     * invokes do_nmi_crash (above), which cause them to write state and
>>>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>>>> +     * cause it to return to this function ASAP.
>>>> +     */
>>>> +    for ( i = 0; i<  nr_cpu_ids; ++i )
>>>> +        if ( idt_tables[i] )
>>>> +        {
>>>> +
>>>> +            if ( i == cpu )
>>>> +            {
>>>> +                /* Disable the interrupt stack tables for this MCE and
>>>> +                 * NMI handler (shortly to become a nop) as there is a 1
>>>> +                 * instruction race window where NMIs could be
>>>> +                 * re-enabled and corrupt the exception frame, leaving
>>>> +                 * us unable to continue on this crash path (which half
>>>> +                 * defeats the point of using the nop handler in the
>>>> +                 * first place).
>>>> +                 *
>>>> +                 * This update is safe from a security point of view, as
>>>> +                 * this pcpu is never going to try to sysret back to a
>>>> +                 * PV vcpu.
>>>> +                 */
>>>> +                set_ist(&idt_tables[i][TRAP_nmi],           IST_NONE);
>>>> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
>>>> +
>>>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0,&trap_nop);
>>> This makes the first set_ist() above pointless, doesn't it?
>> No.  The set_ist() is to remove the possibility of stack corruption from
>> reentrant NMIs, while the trap_nop handler is so we don't get diverted
>> into the regular NMI handler.  There is nothing the NMI handler would do
>> which could alter the outcome, and there are many cases where the
>> regular NMI handler would try to panic, starting us reentrantly on the
>> kexec path again (where we would deadlock on the one_cpu_only() check).
> I think you didn't get the point of the question: _set_gate() clears
> the IST field of the descriptor anyway, so why clear it separately
> first, and then overwrite it again?

Ah - my apologies.  I indeed was not understanding the point.

I will need to fix up the calls then.  Having _set_gate() change the IST 
as well reintroduces the security vulnerability.  I will create a new 
function similar to _set_gate() which only changes the handler and 
nothing else.

>
>>>> +            }
>>>> +            else
>>>> +                _set_gate(&idt_tables[i][TRAP_nmi], 14, 0,&nmi_crash);
>>>> +        }
>>>> +
>>>>       /* Ensure the new callback function is set before sending out the NMI.
>> */
>>>>       wmb();
>>>>   ...
>>>> +/* Enable NMIs.  No special register assumptions, and all preserved. */
>>>> +ENTRY(enable_nmis)
>>>> +        pushq %rax
>>> What's the point of saving %rax here, btw?
>>>
>>> Jan
>> Because at the moment I believe I might need to call it from asm
>> context, when doing some of the later fixes.  I figured that it was
>> better to make it safe now, rather than patch it up later.
> I don't think that's good practice - if you end up not calling the
> thing from assembly code, the question on the purpose of saving
> %rax will re-surface sooner or later. Plus the patch by itself
> wouldn't really explain either why this is so (which might be
> of interest in the context of backporting it to older trees).
>
> Jan
>

Ok - will remove.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 13:00:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti2xi-0000EW-VB; Mon, 10 Dec 2012 12:59:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti2xh-0000EQ-FA
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 12:59:57 +0000
Received: from [85.158.139.211:60087] by server-10.bemta-5.messagelabs.com id
	32/16-13383-CCCD5C05; Mon, 10 Dec 2012 12:59:56 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355144394!15585109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20911 invoked from network); 10 Dec 2012 12:59:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 12:59:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="96066"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 12:59:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 07:59:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti2xd-0000Zi-4G;
	Mon, 10 Dec 2012 12:59:53 +0000
Date: Mon, 10 Dec 2012 12:59:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1877721103.20121207192546@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212101256080.4633@kaball.uk.xensource.com>
References: <272767244.20121206175454@eikelenboom.it>
	<alpine.DEB.2.02.1212071655460.8801@kaball.uk.xensource.com>
	<1877721103.20121207192546@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Upstream qemu-xen,
 log verbosity and compile errors when enabling debug, filenaming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
> Friday, December 7, 2012, 6:01:43 PM, you wrote:
> 
> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
> >> Hi All,
> >> 
> >> Yesterday I have tried building and using upstream qemu and seabios.
> >> Config.mk:
> >> QEMU_UPSTREAM_URL ?= git://git.qemu.org/qemu.git
> >> QEMU_UPSTREAM_REVISION ?= master
> >> 
> >> SEABIOS_UPSTREAM_URL ?= git://git.qemu.org/seabios.git
> >> SEABIOS_UPSTREAM_TAG ?= master
> >> 
> >> And i'm happy to say that it works quite ok, even with (secondary pci) pci-passthrough ( using an ATI gfx adapter and windows7 as guest os) :-).
> >> 
> >> But it seems to have an issue with a USB controller which is trying to use msi-X interrupts, which makes xl dmesg report:
> >> (XEN) [2012-12-06 16:07:24] vmsi.c:108:d32767 Unsupported delivery mode 3
> >> and when "pci_msitranslate=0" is set the error still occurs, only this time the correct domain number is reported, instead of the 32767.
> >> 
> >> 
> >> However, when trying to debug, i noticed although making a debug build (make debug=y && make debug=y install), qemu-dm-<guestname>.log stays almost empty.
> >> It seems all the defines related to debugging are not set.
> >> 
> >> - Would it be appropriate to enable them all when making a debug build ?
> >> - Would it be wise to also have some more verbose logging when not running a debug build ?
> 
> > Yes and yes
> 
> >> - And if yes, what would be the nicest way to set the defines ?
> 
> > My guess is that it would be enough to turn on XEN_PT_LOGGING_ENABLED by
> > default
> 
> >> - Should the naming of the debug defines be made more consistent ?
> 
> > Yes
> 
> 
> 
> >> 
> >> When enabling these debug defines by hand:
> >> 
> >> xen-all.c
> >> #define DEBUG_XEN
> >> 
> >> xen-mapcache.c
> >> #define MAPCACHE_DEBUG
> >> 
> >> hw/xen-host-pci-device.c
> >> #define XEN_HOST_PCI_DEVICE_DEBUG
> >> 
> >> hw/xen_platform.c
> >> #define DEBUG_PLATFORM
> >> 
> >> hw/xen_pt.h
> >> #define XEN_PT_LOGGING_ENABLED
> >> #define XEN_PT_DEBUG_PCI_CONFIG_ACCESS
> >> 
> >> I get a lot of compile errors related to wrong types in the debug printf's.
> 
> > That's really bad. I would like upstream QEMU to have the same level of
> > logging as qemu-xen-traditional by default. And they should compile.
> 
> 
> >> Another thing that occurred to me was that the file naming doesn't seem to be overly consistent:
> >> 
> >> xen-all.c
> >> xen-mapcache.c
> >> xen-mapcache.h
> >> xen-stub.c
> >> xen_apic.c
> >> hw/xen_backend.c
> >> hw/xen_backend.h
> >> hw/xen_blkif.h
> >> hw/xen_common.h
> >> hw/xen_console.c
> >> hw/xen_devconfig.c
> >> hw/xen_disk.c
> >> hw/xen_domainbuild.c
> >> hw/xen_domainbuild.h
> >> hw/xenfb.c
> >> hw/xen.h
> >> hw/xen-host-pci-device.c
> >> hw/xen-host-pci-device.h
> >> hw/xen_machine_pv.c
> >> hw/xen_nic.c
> >> hw/xen_platform.c
> >> hw/xen_pt.c
> >> hw/xen_pt_config_init.c
> >> hw/xen_pt.h
> >> hw/xen_pt_msi.c
> >> 
> >> Would it be worthwhile to make it:
> >> - consistent all underscore or all minus ?
> >> - allways xen_ (or xen- depending on the above) ?
> 
> > Yes, definitely.
> > Given that the development window for QEMU 1.4 has just opened might
> > even be the right time to make these changes.
> 
> > Are you volunteering? :)
> 
> Erhmm yes i think i should be able to accomplish this :-)
> And yes i did notice the 1.3 release :-)
> 
> Patches would be directly against the qemu-upstream git-tree, first round to xen-devel and when acked send to qemu-list ?

It is best to CC qemu-devel from the start; they are used to high levels
of traffic anyway ;)

> For the file renaming, the rest of the qemu sources seem to be mixed, but I think it would be neater for the xen part to stick to one of the two .. but which one would be preferred ?
> 1. all underscores
> 2. all minus

I would go for 1.
However I would keep the renaming patch separate from the others,
because it could start a flame war between underscores supporters
and minuses supporters :)

The other changes (better default debug levels, working debugs, debug
functions naming) should all be non-controversial.
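Option 1 (all underscores) could be scripted roughly as below. The file list is taken from Sander's listing above, the loop only prints the `git mv` commands for review, and nothing here is the agreed final rename set:

```shell
#!/bin/sh
# Sketch: normalize minus-style Xen file names to underscore style
# (option 1). Dry-run: commands are printed, not executed.
for f in xen-all.c xen-mapcache.c xen-mapcache.h xen-stub.c \
         hw/xen-host-pci-device.c hw/xen-host-pci-device.h; do
    # tr rewrites '-' to '_' in the whole path; only the basename
    # here contains minuses, so directories are unaffected.
    echo git mv "$f" "$(echo "$f" | tr '-' '_')"
done
```

Keeping this as a single mechanical commit, separate from the debug-define changes, matches Stefano's suggestion to isolate the (potentially bikeshed-prone) rename patch.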

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1877721103.20121207192546@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212101256080.4633@kaball.uk.xensource.com>
References: <272767244.20121206175454@eikelenboom.it>
	<alpine.DEB.2.02.1212071655460.8801@kaball.uk.xensource.com>
	<1877721103.20121207192546@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Upstream qemu-xen,
 log verbosity and compile errors when enabling debug, filenaming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
> Friday, December 7, 2012, 6:01:43 PM, you wrote:
> 
> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
> >> Hi All,
> >> 
> >> Yesterday I have tried building and using upstream qemu and seabios.
> >> Config.mk:
> >> QEMU_UPSTREAM_URL ?= git://git.qemu.org/qemu.git
> >> QEMU_UPSTREAM_REVISION ?= master
> >> 
> >> SEABIOS_UPSTREAM_URL ?= git://git.qemu.org/seabios.git
> >> SEABIOS_UPSTREAM_TAG ?= master
> >> 
> >> And I'm happy to say that it works quite OK, even with (secondary) PCI passthrough (using an ATI gfx adapter and Windows 7 as the guest OS) :-).
> >> 
> >> But it seems to have an issue with a USB controller which is trying to use MSI-X interrupts, which makes xl dmesg report:
> >> (XEN) [2012-12-06 16:07:24] vmsi.c:108:d32767 Unsupported delivery mode 3
> >> and when "pci_msitranslate=0" is set the error still occurs, only this time the correct domain number is reported instead of 32767.
> >> 
> >> 
> >> However, when trying to debug, I noticed that although I made a debug build (make debug=y && make debug=y install), qemu-dm-<guestname>.log stays almost empty.
> >> It seems none of the defines related to debugging are set.
> >> 
> >> - Would it be appropriate to enable them all when making a debug build ?
> >> - Would it be wise to also have some more verbose logging when not running a debug build ?
> 
> > Yes and yes
> 
> >> - And if yes, what would be the nicest way to set the defines ?
> 
> > My guess is that it would be enough to turn on XEN_PT_LOGGING_ENABLED by
> > default
> 
> >> - Should the naming of the debug defines be made more consistent ?
> 
> > Yes
> 
> 
> 
> >> 
> >> When enabling these debug defines by hand:
> >> 
> >> xen-all.c
> >> #define DEBUG_XEN
> >> 
> >> xen-mapcache.c
> >> #define MAPCACHE_DEBUG
> >> 
> >> hw/xen-host-pci-device.c
> >> #define XEN_HOST_PCI_DEVICE_DEBUG
> >> 
> >> hw/xen_platform.c
> >> #define DEBUG_PLATFORM
> >> 
> >> hw/xen_pt.h
> >> #define XEN_PT_LOGGING_ENABLED
> >> #define XEN_PT_DEBUG_PCI_CONFIG_ACCESS
> >> 
> >> I get a lot of compile errors related to wrong types in the debug printfs.
> 
> > That's really bad. I would like upstream QEMU to have the same level of
> > logging as qemu-xen-traditional by default. And they should compile.
> 
> 
> >> Another thing that occurred to me was that the file naming doesn't seem to be overly consistent:
> >> 
> >> xen-all.c
> >> xen-mapcache.c
> >> xen-mapcache.h
> >> xen-stub.c
> >> xen_apic.c
> >> hw/xen_backend.c
> >> hw/xen_backend.h
> >> hw/xen_blkif.h
> >> hw/xen_common.h
> >> hw/xen_console.c
> >> hw/xen_devconfig.c
> >> hw/xen_disk.c
> >> hw/xen_domainbuild.c
> >> hw/xen_domainbuild.h
> >> hw/xenfb.c
> >> hw/xen.h
> >> hw/xen-host-pci-device.c
> >> hw/xen-host-pci-device.h
> >> hw/xen_machine_pv.c
> >> hw/xen_nic.c
> >> hw/xen_platform.c
> >> hw/xen_pt.c
> >> hw/xen_pt_config_init.c
> >> hw/xen_pt.h
> >> hw/xen_pt_msi.c
> >> 
> >> Would it be worthwhile to make it:
> >> - consistent all underscore or all minus ?
> >> - always xen_ (or xen- depending on the above) ?
> 
> > Yes, definitely.
> > Given that the development window for QEMU 1.4 has just opened might
> > even be the right time to make these changes.
> 
> > Are you volunteering? :)
> 
> Erhmm, yes, I think I should be able to accomplish this :-)
> And yes, I did notice the 1.3 release :-)
> 
> Patches would be made directly against the qemu-upstream git tree, with a first round to xen-devel and, when acked, sent to qemu-devel ?

It is best to CC qemu-devel from the start, they are used to high levels
of traffic anyway ;)

> For the file renaming, the rest of the qemu sources seem to be mixed, but I think it would be neater for the Xen part to stick to one of the two .. but which one would be preferred ?
> 1. all underscores
> 2. all minus

I would go for 1.
However I would keep the renaming patch separate from the others,
because it could start a flame war between underscores supporters
and minuses supporters :)

The other changes (better default debug levels, working debugs, debug
functions naming) should all be non-controversial.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:05:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti32S-0000PS-MN; Mon, 10 Dec 2012 13:04:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti32Q-0000PM-QM
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:04:51 +0000
Received: from [85.158.138.51:50392] by server-12.bemta-3.messagelabs.com id
	A9/31-22757-DEDD5C05; Mon, 10 Dec 2012 13:04:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355144683!26450758!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTA3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17652 invoked from network); 10 Dec 2012 13:04:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:04:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="89745"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 13:04:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 08:04:42 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti32H-0000fL-NB;
	Mon, 10 Dec 2012 13:04:41 +0000
Date: Mon, 10 Dec 2012 13:04:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355132370.31710.98.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212101300250.4633@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212071728350.8801@kaball.uk.xensource.com>
	<1355132370.31710.98.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/9] xen: arm: parse modules from DT during
	early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Ian Campbell wrote:
> On Fri, 2012-12-07 at 17:30 +0000, Stefano Stabellini wrote:
> > On Thu, 6 Dec 2012, Ian Campbell wrote:
> > > The bootloader should populate /chosen/modules/module@<N>/ for each
> > > module it wishes to pass to the hypervisor. The content of these nodes
> > > is described in docs/misc/arm/device-tree/booting.txt
> > > 
> > > The hypervisor parses for 2 types of module, linux zImages and linux
> > > initrds. Currently we don't do anything with them.
> > > 
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > ---
> > > v4: Use /chosen/modules/module@N
> > >     Identify module type by compatible property not number.
> > > v3: Use a reg = < > property for the module address/length.
> > > v2: Reserve the zeroeth module for Xen itself (not used yet)
> > >     Use a more idiomatic DT layout
> > >     Document said layout
> > > ---
> > >  docs/misc/arm/device-tree/booting.txt |   25 ++++++++++
> > >  xen/common/device_tree.c              |   86 +++++++++++++++++++++++++++++++++
> > >  xen/include/xen/device_tree.h         |   14 +++++
> > >  3 files changed, 125 insertions(+), 0 deletions(-)
> > >  create mode 100644 docs/misc/arm/device-tree/booting.txt
> > > 
> > > diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> > > new file mode 100644
> > > index 0000000..94cd3f1
> > > --- /dev/null
> > > +++ b/docs/misc/arm/device-tree/booting.txt
> > > @@ -0,0 +1,25 @@
> > > +Xen is passed the dom0 kernel and initrd via a reference in the /chosen
> > > +node of the device tree.
> > > +
> > > +Each node has the form /chosen/modules/module@<N> and contains the following
> > > +properties:
> > > +
> > > +- compatible
> > > +
> > > +	Must be:
> > > +
> > > +		"xen,<type>", "xen,multiboot-module"
> > > +
> > > +	where <type> must be one of:
> > > +
> > > +	- "linux-zimage" -- the dom0 kernel
> > > +	- "linux-initrd" -- the dom0 ramdisk
> > > +
> > > +- reg
> > > +
> > > +	Specifies the physical address of the module in RAM and the
> > > +	length of the module.
> > > +
> > > +- bootargs (optional)
> > > +
> > > +	Command line associated with this module
> > > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > > index da0af77..4bb640e 100644
> > > --- a/xen/common/device_tree.c
> > > +++ b/xen/common/device_tree.c
> > > @@ -270,6 +270,90 @@ static void __init process_cpu_node(const void *fdt, int node,
> > >      cpumask_set_cpu(start, &cpu_possible_map);
> > >  }
> > >  
> > > +static int __init process_chosen_modules_node(const void *fdt, int node,
> > > +                                              const char *name, int *depth,
> > > +                                              u32 address_cells, u32 size_cells)
> > > +{
> > > +    const struct fdt_property *prop;
> > > +    const u32 *cell;
> > > +    int nr, nr_modules = 0;
> > > +    struct dt_mb_module *mod;
> > > +    int len;
> > > +
> > > +    for ( *depth = 1;
> > > +          *depth >= 1;
> > > +          node = fdt_next_node(fdt, node, depth) )
> > > +    {
> > > +        name = fdt_get_name(fdt, node, NULL);
> > > +        if ( strncmp(name, "module@", strlen("module@")) == 0 ) {
> > > +
> > > +            if ( fdt_node_check_compatible(fdt, node,
> > > +                                           "xen,multiboot-module" ) != 0 )
> > > +                early_panic("%s not a compatible module node\n", name);
> > > +
> > > +            if ( fdt_node_check_compatible(fdt, node,
> > > +                                           "xen,linux-zimage") == 0 )
> > > +                nr = 1;
> > > +            else if ( fdt_node_check_compatible(fdt, node,
> > > +                                                "xen,linux-initrd") == 0)
> > > +                nr = 2;
> > > +            else
> > > +                early_panic("%s not a known xen multiboot type\n", name);
> > > +
> > > +            if ( nr > nr_modules )
> > > +                nr_modules = nr;
> > > +
> > > +            mod = &early_info.modules.module[nr];
> > > +
> > > +            prop = fdt_get_property(fdt, node, "reg", NULL);
> > > +            if ( !prop )
> > > +                early_panic("node %s missing `reg' property\n", name);
> > > +
> > > +            cell = (const u32 *)prop->data;
> > > +            device_tree_get_reg(&cell, address_cells, size_cells,
> > > +                                &mod->start, &mod->size);
> > > +
> > > +            prop = fdt_get_property(fdt, node, "bootargs", &len);
> > > +            if ( prop )
> > > +            {
> > > +                if ( len > sizeof(mod->cmdline) )
> > > +                    early_panic("module %d command line too long\n", nr);
> > > +
> > > +                safe_strcpy(mod->cmdline, prop->data);
> > > +            }
> > > +            else
> > > +                mod->cmdline[0] = 0;
> > > +        }
> > > +    }
> > > +
> > > +    for ( nr = 1 ; nr < nr_modules ; nr++ )
> > > +    {
> > > +        mod = &early_info.modules.module[nr];
> > > +        if ( !mod->start || !mod->size )
> > > +            early_panic("module %d  missing / invalid\n", nr);
> > > +    }
> > > +
> > > +    early_info.modules.nr_mods = nr_modules;
> > > +    return node;
> > > +}
> > > +
> > > +static void __init process_chosen_node(const void *fdt, int node,
> > > +                                       const char *name,
> > > +                                       u32 address_cells, u32 size_cells)
> > > +{
> > > +    int depth;
> > > +
> > > +    for ( depth = 0;
> > > +          depth >= 0;
> > > +          node = fdt_next_node(fdt, node, &depth) )
> > > +    {
> > > +        name = fdt_get_name(fdt, node, NULL);
> > > +        if ( depth == 1 && strcmp(name, "modules") == 0 )
> > > +            node = process_chosen_modules_node(fdt, node, name, &depth,
> > > +                                               address_cells, size_cells);
> > > +    }
> > > +}
> > > +
> > >  static int __init early_scan_node(const void *fdt,
> > >                                    int node, const char *name, int depth,
> > >                                    u32 address_cells, u32 size_cells,
> > > @@ -279,6 +363,8 @@ static int __init early_scan_node(const void *fdt,
> > >          process_memory_node(fdt, node, name, address_cells, size_cells);
> > >      else if ( device_tree_type_matches(fdt, node, "cpu") )
> > >          process_cpu_node(fdt, node, name, address_cells, size_cells);
> > > +    else if ( device_tree_node_matches(fdt, node, "chosen") )
> > > +        process_chosen_node(fdt, node, name, address_cells, size_cells);
> > >  
> > >      return 0;
> > >  }
> > 
> > You have really written a lot of code here!
> > I would have thought that just matching on the compatible string would
> > be enough:
> > 
> > else if ( device_tree_node_matches(fdt, node, "linux-zimage") )
> >      process_linuxzimage_node(fdt, node, name, address_cells, size_cells);
> > else if ( device_tree_node_matches(fdt, node, "linux-initrd") )
> >      process_linuxinitrd_node(fdt, node, name, address_cells, size_cells);
> > 
> > so that your process_linuxzimage_node and process_linuxinitrd_node start
> > from the right node and have everything they need to parse it
> 
> Is the tree structure of Device Tree meaningless? I'd have thought that
> a compatible node would only have meaning at a particular place in the
> tree. Granted compatible nodes are often pretty specific and precise,
> but is that inherent enough in DT that we can rely on it?

I don't know if it is entirely meaningless, but AFAIK the compatible
string is regarded as a much more reliable way to identify a node.
More often than not, Linux drivers just use of_find_compatible_node to
find their DT node.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:07:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:07:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti35C-0000Wo-8y; Mon, 10 Dec 2012 13:07:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1ThNBf-0000YE-Km
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 16:23:35 +0000
Received: from [85.158.138.51:21793] by server-11.bemta-3.messagelabs.com id
	10/BA-19361-68963C05; Sat, 08 Dec 2012 16:23:34 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1354983812!28103095!1
X-Originating-IP: [209.85.210.174]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4887 invoked from network); 8 Dec 2012 16:23:33 -0000
Received: from mail-ia0-f174.google.com (HELO mail-ia0-f174.google.com)
	(209.85.210.174)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Dec 2012 16:23:33 -0000
Received: by mail-ia0-f174.google.com with SMTP id y25so2418868iay.19
	for <xen-devel@lists.xen.org>; Sat, 08 Dec 2012 08:23:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=25GDOAPM3QEdG///WzeFUBEijBWxzqbhnZectthvS80=;
	b=bJ+u2uBtoo3IigBF0UF7EvUgmbpDLDNd2e1NyFqmR3Xh9hWnJJ+WmYM2TlvxhDk1sZ
	DDSWYwtkF1ROqG7bGi0MUEAoUW3jjfhfAgkWdDJ6+DxVAmAcpDvfvd50IPgYLaksL2d8
	Kps1/cuRZrWwlAdQotgiQLSDX2NRzwFf2O+/enFGqBBnni2VypnCCxnySzQYZzSAxOCK
	sWiAOJu4/oRFZmflP9FHVP3ddpxxnH0osZ1YEf856IYNnJYU2EF336zfCxOExFqpGegr
	br1Il1YTSa9Be9kF05b98mbYLakcRSQhzQf8DjjwxNsioKVjhiW6U8lWqp6XuJHQqDcP
	xd6A==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr2203518igq.37.1354983811740; Sat,
	08 Dec 2012 08:23:31 -0800 (PST)
Received: by 10.64.139.1 with HTTP; Sat, 8 Dec 2012 08:23:31 -0800 (PST)
Date: Sun, 9 Dec 2012 00:23:31 +0800
Message-ID: <CAKhsbWYd0R23yaaPJ43uSJaagBh_oCdP0=wt7GECzE8zKB9BSQ@mail.gmail.com>
From: "G.R." <firemeteor.guo@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Mailman-Approved-At: Mon, 10 Dec 2012 13:07:40 +0000
Subject: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4414825909082529498=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4414825909082529498==
Content-Type: multipart/alternative; boundary=14dae9341115c6727504d059c0ba

--14dae9341115c6727504d059c0ba
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
I'm debugging an issue where an HVM guest fails to produce any output
with IGD passed through.
This is a pure HVM Linux guest with the i915 driver compiled in directly.
A PVHVM kernel with the i915 driver compiled as a module works without
issue. I'm not yet sure which factor matters more: pure HVM, or the
I915=y kernel config.

The direct cause of the missing output is that the driver does not
select the Display PLL properly, which in turn is because it fails to
detect the PCH type properly.

Strangely enough, the intel_detect_pch() function works by checking
the device ID of the ISA bridge that comes with the chipset:

    /*
     * The reason to probe ISA bridge instead of Dev31:Fun0 is to
     * make graphics device passthrough work easy for VMM, that only
     * need to expose ISA bridge to let driver know the real hardware
     * underneath. This is a requirement from virtualization team.
     */

I added some debug output to this function and found that it obtained
a strange device ID:
[    1.005423] [drm] intel pch detect, found 00007000

This looks like the ISA bridge provided by qemu:
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.0 0601: 8086:7000
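For reference, the detection logic in question boils down to masking the ISA bridge's PCI device ID to its upper byte and comparing it against a short list of known PCH IDs. A minimal sketch follows; the ID constants are modelled on i915's i915_drv.h of this era and should be treated as illustrative, not authoritative:

```c
/* Known PCH device IDs, masked to the upper byte, as used by
 * intel_detect_pch() in drivers/gpu/drm/i915 around v3.x kernels.
 * Values are illustrative; check i915_drv.h for the real list. */
#define INTEL_PCH_IBX_DEVICE_ID_TYPE 0x3b00 /* Ibex Peak */
#define INTEL_PCH_CPT_DEVICE_ID_TYPE 0x1c00 /* Cougar Point */
#define INTEL_PCH_PPT_DEVICE_ID_TYPE 0x1e00 /* Panther Point */

enum pch_type { PCH_NONE, PCH_IBX, PCH_CPT };

/* Given the PCI device ID of the first ISA bridge found, return the
 * PCH type.  The PIIX3 bridge qemu emulates (8086:7000) matches none
 * of the known IDs, which is why detection fails in the guest. */
static enum pch_type detect_pch(unsigned short isa_bridge_dev_id)
{
    switch (isa_bridge_dev_id & 0xff00) {
    case INTEL_PCH_IBX_DEVICE_ID_TYPE:
        return PCH_IBX;
    case INTEL_PCH_CPT_DEVICE_ID_TYPE:
    case INTEL_PCH_PPT_DEVICE_ID_TYPE: /* PPT is CPT-compatible */
        return PCH_CPT;
    default:
        return PCH_NONE; /* unknown bridge: no PCH configured */
    }
}
```

This is why the kernel comment asks the VMM to expose the host's real ISA bridge to the guest: the lookup above only succeeds when the bridge it finds carries one of the expected device IDs.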

However, I can find the same device on a PVHVM Linux guest, yet
intel_detect_pch() is not fooled by it there.
Is this due to the I915=m config, or some magic played by PVOPS? Any
suggestions on how to fix this?

Thanks,
Timothy


--14dae9341115c6727504d059c0ba--


--===============4414825909082529498==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4414825909082529498==--


From xen-devel-bounces@lists.xen.org Mon Dec 10 13:08:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti35c-0000ZO-5w; Mon, 10 Dec 2012 13:08:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liyz@pku.edu.cn>) id 1ThF1F-0004Eo-ES
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 07:40:17 +0000
Received: from [85.158.137.99:10333] by server-9.bemta-3.messagelabs.com id
	0F/FD-02388-0EEE2C05; Sat, 08 Dec 2012 07:40:16 +0000
X-Env-Sender: liyz@pku.edu.cn
X-Msg-Ref: server-6.tower-217.messagelabs.com!1354952413!13328462!1
X-Originating-IP: [162.105.129.28]
X-SpamReason: No, hits=1.4 required=7.0 tests=FROM_EXCESS_BASE64,
	SUBJECT_EXCESS_BASE64
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32132 invoked from network); 8 Dec 2012 07:40:15 -0000
Received: from mx4.pku.edu.cn (HELO mail.pku.edu.cn) (162.105.129.28)
	by server-6.tower-217.messagelabs.com with SMTP;
	8 Dec 2012 07:40:15 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by mail.pku.edu.cn (tmailer) with ESMTP id 9225218022
	for <xen-devel@lists.xen.org>; Sat,  8 Dec 2012 15:40:10 +0800 (CST)
X-Spam-Flag: NO
X-Spam-Score: -415.249
X-Spam-Level: 
X-Spam-Status: No, score=-415.249 tagged_above=-1000 required=20
	tests=[ALL_TRUSTED=-1.8, AWL=-104.709, BAYES_00=-10.396,
	CN_BODY_1041=0.2, FROM_EXCESS_BASE64=1.456, USER_IN_WHITELIST=-300]
	autolearn=no
Received: from mail.pku.edu.cn ([127.0.0.1])
	by localhost (bj-mail05.pku.edu.cn [127.0.0.1]) (theinterface-new,
	port 10024) with ESMTP id 4KryrC4jAOs3 for <xen-devel@lists.xen.org>;
	Sat,  8 Dec 2012 15:40:08 +0800 (CST)
Received: from bj-mail05.pku.edu.cn (bj-mail05.pku.edu.cn [162.105.129.125])
	by mail.pku.edu.cn (tmailer) with ESMTP id 735FC18013
	for <xen-devel@lists.xen.org>; Sat,  8 Dec 2012 15:40:06 +0800 (CST)
Date: Sat, 8 Dec 2012 15:40:04 +0800 (CST)
From: =?gbk?B?WWFuemhhbmcgTGkg?=<liyz@pku.edu.cn>
To: xen-devel@lists.xen.org
Message-ID: <1947237017.20.1354952403984.JavaMail.root@bj-mail05.pku.edu.cn>
MIME-Version: 1.0
X-Originating-IP: [162.105.129.97]
X-Mailman-Approved-At: Mon, 10 Dec 2012 13:08:04 +0000
Subject: [Xen-devel] =?gbk?q?_=5BPATCH=5D_memshr_tools=3A_use_hypercall_re?=
	=?gbk?q?turn_value_incorrectly?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the tools call a hypercall and it fails, they get a return value of -1 and must look up the real error number in "errno". The following code, however, incorrectly switches on the -1 return value itself.

Signed-off-by: Yanzhang Li <liyz@pku.edu.cn>

diff -u xen-4.2.0/tools/memshr/interface.c change/xen-4.2.0/tools/memshr/interface.c
--- xen-4.2.0/tools/memshr/interface.c  2012-09-17 18:21:18.000000000 +0800
+++ change/xen-4.2.0/tools/memshr/interface.c   2012-12-07 09:50:16.208705246 +0800
@@ -18,6 +18,7 @@
  */
 #include <string.h>
 #include <inttypes.h>
+#include <errno.h>

 #include "memshr.h"
 #include "memshr-priv.h"
@@ -184,7 +185,7 @@
         if(!ret) return 0;
         /* Handles failed to be shared => at least one of them must be invalid,
            remove the relevant ones from the map */
-        switch(ret)
+        switch(-errno)
         {
             case XENMEM_SHARING_OP_S_HANDLE_INVALID:
                 ret = blockshr_shrhnd_remove(memshr.blks, source_st, NULL);
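The failure mode the patch fixes can be reproduced in miniature: a Unix-style wrapper returns -1 and parks the real code in errno, so dispatching on the return value always falls through to the wrong case. In this hypothetical sketch, mock_hypercall and the MOCK_* code are made up for illustration:

```c
#include <errno.h>

/* Stand-in for XENMEM_SHARING_OP_S_HANDLE_INVALID (value is made up). */
#define MOCK_S_HANDLE_INVALID (-17)

/* Unix-style wrapper: on failure, return -1 and put the real
 * (positive) error code in errno, as the tools' hypercall wrappers do. */
static int mock_hypercall(int fail_with)
{
    if (fail_with) {
        errno = fail_with;
        return -1;
    }
    return 0;
}

/* Dispatching on the raw return value: on failure this is always -1,
 * so a switch over specific error codes never matches. */
static int broken_dispatch(int fail_with)
{
    int ret = mock_hypercall(fail_with);
    return ret;
}

/* Dispatching on -errno recovers the real code, as the patch does. */
static int fixed_dispatch(int fail_with)
{
    int ret = mock_hypercall(fail_with);
    if (!ret)
        return 0;
    return -errno;
}
```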

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti35b-0000Z9-QF; Mon, 10 Dec 2012 13:08:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liyz@pku.edu.cn>) id 1ThE9g-0003H8-UB
	for xen-devel@lists.xen.org; Sat, 08 Dec 2012 06:44:57 +0000
Received: from [85.158.143.35:5722] by server-3.bemta-4.messagelabs.com id
	09/E7-18211-8E1E2C05; Sat, 08 Dec 2012 06:44:56 +0000
X-Env-Sender: liyz@pku.edu.cn
X-Msg-Ref: server-16.tower-21.messagelabs.com!1354949092!14014016!1
X-Originating-IP: [162.105.129.28]
X-SpamReason: No, hits=1.4 required=7.0 tests=FROM_EXCESS_BASE64,
	SUBJECT_EXCESS_BASE64
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20049 invoked from network); 8 Dec 2012 06:44:54 -0000
Received: from mx4.pku.edu.cn (HELO mail.pku.edu.cn) (162.105.129.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	8 Dec 2012 06:44:54 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by mail.pku.edu.cn (tmailer) with ESMTP id 94FBA180AF;
	Sat,  8 Dec 2012 14:44:50 +0800 (CST)
X-Spam-Flag: NO
X-Spam-Score: -421.066
X-Spam-Level: 
X-Spam-Status: No, score=-421.066 tagged_above=-1000 required=20
	tests=[ALL_TRUSTED=-1.8, AWL=-110.526, BAYES_00=-10.396,
	CN_BODY_1041=0.2, FROM_EXCESS_BASE64=1.456, USER_IN_WHITELIST=-300]
	autolearn=no
Received: from mail.pku.edu.cn ([127.0.0.1])
	by localhost (bj-mail05.pku.edu.cn [127.0.0.1]) (theinterface-new,
	port 10024)
	with ESMTP id 9+LfgVdznm0D; Sat,  8 Dec 2012 14:44:50 +0800 (CST)
Received: from bj-mail05.pku.edu.cn (bj-mail05.pku.edu.cn [162.105.129.125])
	by mail.pku.edu.cn (tmailer) with ESMTP id 1C2C718022;
	Sat,  8 Dec 2012 14:44:50 +0800 (CST)
Date: Sat, 8 Dec 2012 14:44:49 +0800 (CST)
From: =?gbk?B?WWFuemhhbmcgTGkg?=<liyz@pku.edu.cn>
To: George Dunlap <dunlapg@umich.edu>
Message-ID: <799502979.150.1354949089864.JavaMail.root@bj-mail05.pku.edu.cn>
In-Reply-To: <1320657526.120.1354947728395.JavaMail.root@bj-mail05.pku.edu.cn>
MIME-Version: 1.0
X-Originating-IP: [162.105.129.97]
X-Mailman-Approved-At: Mon, 10 Dec 2012 13:08:04 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel]
	=?gbk?q?Suggestion=3A_Improve_hypercall_Interface_to_?=
	=?gbk?q?get_real_return_value?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dear George,
Thank you very much for your comments. That really helps a lot.
Best Regards,
Yanzhang Li

----- Original Message -----
From: "George Dunlap" <dunlapg@umich.edu>
To: "Yanzhang Li" <liyz@pku.edu.cn>
Cc: xen-devel@lists.xen.org
Sent: un5efine5 7:05:32 PM
Subject: Re: [Xen-devel] Suggestion: Improve hypercall Interface to get real
 return value

On Wed, Dec 5, 2012 at 6:21 AM, Yanzhang Li < liyz@pku.edu.cn > wrote:

Do you think this would be a good modification?
Also, I am curious why the original design didn't do that. Is it a bug or is it designed that way intentionally?
Any suggestions and comments will be highly appreciated.

The reason we just return -1 is because that is the standard practice for Unix system libraries: to return -1 but set the error value in "errno". I couldn't tell you why Unix does this, but there's an advantage to following standard interfaces, because it reduces the surprise factor, and reduces the amount of information programmers need to keep in their head.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti35a-0000Yz-Ry; Mon, 10 Dec 2012 13:08:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgzMZ-0007yn-1e
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:57:16 +0000
Received: from [85.158.137.99:18653] by server-13.bemta-3.messagelabs.com id
	C3/87-24887-5C302C05; Fri, 07 Dec 2012 14:57:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1354892163!15225862!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=Mail larger than max spam size
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4650 invoked from network); 7 Dec 2012 14:56:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:56:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216786685"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 14:55:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 09:55:10 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgzKY-0004T4-Bs;
	Fri, 07 Dec 2012 14:55:10 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 14:55:08 +0000
Message-ID: <1354892110-31108-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
References: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 10 Dec 2012 13:08:04 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_1/3=5D_docs=3A_Remove_xen-api_docs?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBkb2N1bWVudCBpcyBhYm91dCBhbiBvbGQgdW5tYWludGFpbmVkIHZlcnNpb24gb2YgdGhl
IFhlbkFQSSwKd2hpY2ggYmVhcnMgbGl0dGxlIHRvIG5vIHJlbGF0aW9uIHRvIHdoYXQgaXMgaW1w
bGVtZW50ZWQgaW4geGFwaSBhbmQKd2hpY2ggaXMgb25seSBwYXJ0aWFsbHkgaW1wbGVtZW50ZWQg
aW4geGVuZCAod2hpY2ggaXMgZGVwcmVjYXRlZCkuIFRoZQpkb2MgaGFzbid0IHNlZW4gbXVjaCBp
biB0aGUgd2F5IG9mIHVwZGF0ZXMgc2luY2UgMjAwOS4KCkFueW9uZSB3aG8gaXMgYWN0dWFsbHkg
aW50ZXJlc3RlZCBjYW4gY29udGludWUgdG8gdXNlIHRoZSB2ZXJzaW9uCndoaWNoIHdhcyBpbiA0
LjIuCgpTaWduZWQtb2ZmLWJ5OiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29t
PgotLS0KIGRvY3MvRG9jcy5tayAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgIDUgLQog
ZG9jcy9NYWtlZmlsZSAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgNyAtCiBkb2NzL3hl
bi1hcGkvTWFrZWZpbGUgICAgICAgICAgICAgICAgICAgfCAgIDQ0IC0KIGRvY3MveGVuLWFwaS9i
aWJsaW9ncmFwaHkudGV4ICAgICAgICAgICB8ICAgIDUgLQogZG9jcy94ZW4tYXBpL2NvdmVyc2hl
ZXQudGV4ICAgICAgICAgICAgIHwgICA2NSAtCiBkb2NzL3hlbi1hcGkvZmRsLnRleCAgICAgICAg
ICAgICAgICAgICAgfCAgNDg4IC0KIGRvY3MveGVuLWFwaS9wcmVzZW50YXRpb24udGV4ICAgICAg
ICAgICB8ICAxNDYgLQogZG9jcy94ZW4tYXBpL3JldmlzaW9uLWhpc3RvcnkudGV4ICAgICAgIHwg
ICA2MSAtCiBkb2NzL3hlbi1hcGkvdG9kby50ZXggICAgICAgICAgICAgICAgICAgfCAgMTM1IC0K
IGRvY3MveGVuLWFwaS92bS1saWZlY3ljbGUudGV4ICAgICAgICAgICB8ICAgNDMgLQogZG9jcy94
ZW4tYXBpL3ZtX2xpZmVjeWNsZS5kb3QgICAgICAgICAgIHwgICAxNyAtCiBkb2NzL3hlbi1hcGkv
d2lyZS1wcm90b2NvbC50ZXggICAgICAgICAgfCAgMzgzIC0KIGRvY3MveGVuLWFwaS94ZW4uZXBz
ICAgICAgICAgICAgICAgICAgICB8ICAgNDQgLQogZG9jcy94ZW4tYXBpL3hlbmFwaS1jb3ZlcnNo
ZWV0LnRleCAgICAgIHwgICAzOCAtCiBkb2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC1ncmFw
aC5kb3QgfCAgIDU3IC0KIGRvY3MveGVuLWFwaS94ZW5hcGktZGF0YW1vZGVsLnRleCAgICAgICB8
MjAyNDUgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogZG9jcy94ZW4tYXBpL3hlbmFw
aS50ZXggICAgICAgICAgICAgICAgIHwgICA2MCAtCiAxNyBmaWxlcyBjaGFuZ2VkLCAwIGluc2Vy
dGlvbnMoKyksIDIxODQzIGRlbGV0aW9ucygtKQogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVu
LWFwaS9NYWtlZmlsZQogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVuLWFwaS9iaWJsaW9ncmFw
aHkudGV4CiBkZWxldGUgbW9kZSAxMDA2NDQgZG9jcy94ZW4tYXBpL2NvdmVyc2hlZXQudGV4CiBk
ZWxldGUgbW9kZSAxMDA2NDQgZG9jcy94ZW4tYXBpL2ZkbC50ZXgKIGRlbGV0ZSBtb2RlIDEwMDY0
NCBkb2NzL3hlbi1hcGkvcHJlc2VudGF0aW9uLnRleAogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3Mv
eGVuLWFwaS9yZXZpc2lvbi1oaXN0b3J5LnRleAogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVu
LWFwaS90b2RvLnRleAogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVuLWFwaS92bS1saWZlY3lj
bGUudGV4CiBkZWxldGUgbW9kZSAxMDA2NDQgZG9jcy94ZW4tYXBpL3ZtX2xpZmVjeWNsZS5kb3QK
IGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2NzL3hlbi1hcGkvd2lyZS1wcm90b2NvbC50ZXgKIGRlbGV0
ZSBtb2RlIDEwMDY0NCBkb2NzL3hlbi1hcGkveGVuLmVwcwogZGVsZXRlIG1vZGUgMTAwNjQ0IGRv
Y3MveGVuLWFwaS94ZW5hcGktY292ZXJzaGVldC50ZXgKIGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2Nz
L3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QKIGRlbGV0ZSBtb2RlIDEwMDY0NCBk
b2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC50ZXgKIGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2Nz
L3hlbi1hcGkveGVuYXBpLnRleAoKZGlmZiAtLWdpdCBhL2RvY3MvRG9jcy5tayBiL2RvY3MvRG9j
cy5tawppbmRleCBhYTY1M2QzLi5kY2M4YTIxIDEwMDY0NAotLS0gYS9kb2NzL0RvY3MubWsKKysr
IGIvZG9jcy9Eb2NzLm1rCkBAIC0xLDEyICsxLDcgQEAKLVBTMlBERgkJOj0gcHMycGRmCi1EVklQ
UwkJOj0gZHZpcHMKLUxBVEVYCQk6PSBsYXRleAogRklHMkRFVgkJOj0gZmlnMmRldgogTEFURVgy
SFRNTAk6PSBsYXRleDJodG1sCiBET1hZR0VOCQk6PSBkb3h5Z2VuCiBQT0QyTUFOCQk6PSBwb2Qy
bWFuCiBQT0QySFRNTAk6PSBwb2QyaHRtbAogUE9EMlRFWFQJOj0gcG9kMnRleHQKLURPVAkJOj0g
ZG90Ci1ORUFUTwkJOj0gbmVhdG8KIE1BUktET1dOCTo9IG1hcmtkb3duCmRpZmYgLS1naXQgYS9k
b2NzL01ha2VmaWxlIGIvZG9jcy9NYWtlZmlsZQppbmRleCAwM2YxNDFhLi42MjBhMjk2IDEwMDY0
NAotLS0gYS9kb2NzL01ha2VmaWxlCisrKyBiL2RvY3MvTWFrZWZpbGUKQEAgLTI2LDEwICsyNiw2
IEBAIGFsbDogYnVpbGQKIAogLlBIT05ZOiBidWlsZAogYnVpbGQ6IGh0bWwgdHh0IG1hbi1wYWdl
cyBmaWdzCi0JQGlmIHdoaWNoICQoRE9UKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbCA7IHRoZW4g
ICAgICAgICAgICAgIFwKLQkkKE1BS0UpIC1DIHhlbi1hcGkgYnVpbGQgOyBlbHNlICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgXAotICAgICAgICBlY2hvICJHcmFwaHZpeiAoZG90KSBub3Qg
aW5zdGFsbGVkOyBza2lwcGluZyB4ZW4tYXBpLiIgOyBmaQotCXJtIC1mICouYXV4ICouZHZpICou
YmJsICouYmxnICouZ2xvICouaWR4ICouaWxnICoubG9nICouaW5kICoudG9jCiAKIC5QSE9OWTog
ZGV2LWRvY3MKIGRldi1kb2NzOiBweXRob24tZGV2LWRvY3MKQEAgLTc2LDcgKzcyLDYgQEAgbWFu
NS8lLjU6IG1hbi8lLnBvZC41IE1ha2VmaWxlCiAKIC5QSE9OWTogY2xlYW4KIGNsZWFuOgotCSQo
TUFLRSkgLUMgeGVuLWFwaSBjbGVhbgogCSQoTUFLRSkgLUMgZmlncyBjbGVhbgogCXJtIC1yZiAu
d29yZF9jb3VudCAqLmF1eCAqLmR2aSAqLmJibCAqLmJsZyAqLmdsbyAqLmlkeCAqfiAKIAlybSAt
cmYgKi5pbGcgKi5sb2cgKi5pbmQgKi50b2MgKi5iYWsgY29yZQpAQCAtOTMsOCArODgsNiBAQCBp
bnN0YWxsOiBhbGwKIAlybSAtcmYgJChERVNURElSKSQoRE9DRElSKQogCSQoSU5TVEFMTF9ESVIp
ICQoREVTVERJUikkKERPQ0RJUikKIAotCSQoTUFLRSkgLUMgeGVuLWFwaSBpbnN0YWxsCi0KIAkk
KElOU1RBTExfRElSKSAkKERFU1RESVIpJChNQU5ESVIpCiAJY3AgLWRSIG1hbjEgJChERVNURElS
KSQoTUFORElSKQogCWNwIC1kUiBtYW41ICQoREVTVERJUikkKE1BTkRJUikKZGlmZiAtLWdpdCBh
L2RvY3MveGVuLWFwaS9NYWtlZmlsZSBiL2RvY3MveGVuLWFwaS9NYWtlZmlsZQpkZWxldGVkIGZp
bGUgbW9kZSAxMDA2NDQKaW5kZXggNzdhMDExNy4uMDAwMDAwMAotLS0gYS9kb2NzL3hlbi1hcGkv
TWFrZWZpbGUKKysrIC9kZXYvbnVsbApAQCAtMSw0NCArMCwwIEBACi0jIS91c3IvYmluL21ha2Ug
LWYKLQotWEVOX1JPT1Q9JChDVVJESVIpLy4uLy4uCi1pbmNsdWRlICQoWEVOX1JPT1QpL0NvbmZp
Zy5tawotaW5jbHVkZSAkKFhFTl9ST09UKS9kb2NzL0RvY3MubWsKLQotCi1URVggOj0gJCh3aWxk
Y2FyZCAqLnRleCkKLUVQUyA6PSAkKHdpbGRjYXJkICouZXBzKQotRVBTRE9UIDo9ICQocGF0c3Vi
c3QgJS5kb3QsJS5lcHMsJCh3aWxkY2FyZCAqLmRvdCkpCi0KLS5QSE9OWTogYWxsCi1hbGw6IGJ1
aWxkCi0KLS5QSE9OWTogYnVpbGQKLWJ1aWxkOiB4ZW5hcGkucGRmIHhlbmFwaS5wcwotCi1pbnN0
YWxsOgotCSQoSU5TVEFMTF9ESVIpICQoREVTVERJUikkKERPQ0RJUikvcHMKLQkkKElOU1RBTExf
RElSKSAkKERFU1RESVIpJChET0NESVIpL3BkZgotCi0JWyAtZSB4ZW5hcGkucHMgXSAmJiBjcCB4
ZW5hcGkucHMgJChERVNURElSKSQoRE9DRElSKS9wcyB8fCB0cnVlCi0JWyAtZSB4ZW5hcGkucGRm
IF0gJiYgY3AgeGVuYXBpLnBkZiAkKERFU1RESVIpJChET0NESVIpL3BkZiB8fCB0cnVlCi0KLXhl
bmFwaS5kdmk6ICQoVEVYKSAkKEVQUykgJChFUFNET1QpCi0JJChMQVRFWCkgeGVuYXBpLnRleAot
CSQoTEFURVgpIHhlbmFwaS50ZXgKLQlybSAtZiAqLmF1eCAqLmxvZwotCi0lLnBkZjogJS5wcwot
CSQoUFMyUERGKSAkPCAkQAotCi0lLnBzOiAlLmR2aQotCSQoRFZJUFMpICQ8IC1vICRACi0KLSUu
ZXBzOiAlLmRvdAotCSQoRE9UKSAtVHBzICQ8ID4kQAotCi14ZW5hcGktZGF0YW1vZGVsLWdyYXBo
LmVwczogeGVuYXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QKLQkkKE5FQVRPKSAtR292ZXJsYXA9ZmFs
c2UgLVRwcyAkPCA+JEAKLQotLlBIT05ZOiBjbGVhbgotY2xlYW46Ci0Jcm0gLWYgKi5wZGYgKi5w
cyAqLmR2aSAqLmF1eCAqLmxvZyAqLm91dCAkKEVQU0RPVCkKZGlmZiAtLWdpdCBhL2RvY3MveGVu
LWFwaS9iaWJsaW9ncmFwaHkudGV4IGIvZG9jcy94ZW4tYXBpL2JpYmxpb2dyYXBoeS50ZXgKZGVs
ZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDMwZGYzODcuLjAwMDAwMDAKLS0tIGEvZG9jcy94
ZW4tYXBpL2JpYmxpb2dyYXBoeS50ZXgKKysrIC9kZXYvbnVsbApAQCAtMSw1ICswLDAgQEAKLVxi
ZWdpbnt0aGViaWJsaW9ncmFwaHl9ezl9Ci1cYmliaXRlbVtSRkMyMzk3XXtSRkMyMzk3fQotTWFz
aW50ZXIgTC4sIFx0ZXh0YmZ7VGhlICJkYXRhIiBVUkwgc2NoZW1lfSwgUkZDIDIzOTcsIEF1Z3Vz
dCAxOTk4LAotTmV0d29yayBXb3JraW5nIEdyb3VwLCBodHRwOi8vd3d3LmlldGYub3JnL3JmYy9y
ZmMyMzk3LnR4dAotXGVuZHt0aGViaWJsaW9ncmFwaHl9CmRpZmYgLS1naXQgYS9kb2NzL3hlbi1h
cGkvY292ZXJzaGVldC50ZXggYi9kb2NzL3hlbi1hcGkvY292ZXJzaGVldC50ZXgKZGVsZXRlZCBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDJkNTY4YzUuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBp
L2NvdmVyc2hlZXQudGV4CisrKyAvZGV2L251bGwKQEAgLTEsNjUgKzAsMCBAQAotJQotJSBDb3B5
cmlnaHQgKGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSUKLSUgUGVybWlzc2lvbiBpcyBn
cmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50IHVu
ZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlLCBW
ZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUg
U29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2VjdGlvbnMsIG5vIEZy
b250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRoZQot
JSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0lICJHTlUgRnJl
ZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0lCi0lIEF1dGhv
cnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBKb24gSGFycm9wLgot
JQotCi1ccGFnZXN0eWxle2VtcHR5fQotCi1cZG9jdGl0bGV7fSBcaGZpbGwgXHJldnN0cmluZ3t9
Ci0KLVx2c3BhY2V7MWNtfQotCi1cYmVnaW57Y2VudGVyfQotXHJlc2l6ZWJveHs4Y219eyF9e1xp
bmNsdWRlZ3JhcGhpY3N7XGNvdmVyc2hlZXRsb2dvfX0KLQotXHZzcGFjZXsyY219Ci0KLVxiZWdp
bntIdWdlfQotICBcZG9jdGl0bGV7fQotXGVuZHtIdWdlfQotCi1cdnNwYWNlezFjbX0KLVxiZWdp
bntMYXJnZX0KLVZlcnNpb246IFxyZXZzdHJpbmd7fVxcCi1EYXRlOiBcZGF0ZXN0cmluZ3t9Ci1c
XAotXHJlbGVhc2VzdGF0ZW1lbnR7fQotCi1cdnNwYWNlezFjbX0KLVxiZWdpbnt0YWJ1bGFyfXty
bH0KLVxkb2NhdXRob3Jze30KLVxlbmR7dGFidWxhcn0KLVxlbmR7TGFyZ2V9Ci1cZW5ke2NlbnRl
cn0KLVx2c3BhY2V7LjVjbX0KLVxiZWdpbntsYXJnZX0KLVx0ZXh0YmZ7Q29udHJpYnV0b3JzOn0g
XFwKLVxcCi1cYmVnaW57dGFidWxhcn17cHswLjVcdGV4dHdpZHRofWx9Ci1TdGVmYW4gQmVyZ2Vy
LCBJQk0gJiBWaW5jZW50IEhhbnF1ZXosIFhlblNvdXJjZSBcXAotRGFuaWVsIEJlcnJhbmdcJ2Us
IFJlZCBIYXQgJiBKb2huIExldm9uLCBTdW4gTWljcm9zeXN0ZW1zIFxcCi1HYXJldGggQmVzdG9y
LCBJQk0gJiBKb24gTHVkbGFtLCBYZW5Tb3VyY2UgXFwKLUhvbGxpcyBCbGFuY2hhcmQsIElCTSAm
IEFsYXN0YWlyIFRzZSwgWGVuU291cmNlIFxcCi1NaWtlIERheSwgSUJNICYgRGFuaWVsIFZlaWxs
YXJkLCBSZWQgSGF0IFxcCi1KaW0gRmVobGlnLCBOb3ZlbGwgJiBUb20gV2lsa2llLCBVbml2ZXJz
aXR5IG9mIENhbWJyaWRnZSBcXAotSm9uIEhhcnJvcCwgWGVuU291cmNlICYgWW9zdWtlIEl3YW1h
dHN1LCBORUMgXFwKLU1hc2FraSBLYW5ubywgRlVKSVRTVSBcXAotTHV0eiBEdWJlLCBGVUpJVFNV
IFRFQ0hOT0xPR1kgU09MVVRJT05TIFxcCi1cZW5ke3RhYnVsYXJ9Ci1cZW5ke2xhcmdlfQotCi1c
dmZpbGwKLQotXG5vaW5kZW50Ci1cbGVnYWxub3RpY2V7fQotCi1cbmV3cGFnZQotXHBhZ2VzdHls
ZXtmYW5jeX0KZGlmZiAtLWdpdCBhL2RvY3MveGVuLWFwaS9mZGwudGV4IGIvZG9jcy94ZW4tYXBp
L2ZkbC50ZXgKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IGQ4MjE0NTcuLjAwMDAwMDAK
LS0tIGEvZG9jcy94ZW4tYXBpL2ZkbC50ZXgKKysrIC9kZXYvbnVsbApAQCAtMSw0ODggKzAsMCBA
QAotXGNoYXB0ZXJ7R05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlfQotJVxsYWJlbHtsYWJl
bF9mZGx9Ci0KLSBcYmVnaW57Y2VudGVyfQotCi0gICAgICAgVmVyc2lvbiAxLjIsIE5vdmVtYmVy
IDIwMDIKLQotCi0gQ29weXJpZ2h0IFxjb3B5cmlnaHQgMjAwMCwyMDAxLDIwMDIgIEZyZWUgU29m
dHdhcmUgRm91bmRhdGlvbiwgSW5jLgotIAotIFxiaWdza2lwCi0gCi0gICAgIDUxIEZyYW5rbGlu
IFN0LCBGaWZ0aCBGbG9vciwgQm9zdG9uLCBNQSAgMDIxMTAtMTMwMSAgVVNBCi0gIAotIFxiaWdz
a2lwCi0gCi0gRXZlcnlvbmUgaXMgcGVybWl0dGVkIHRvIGNvcHkgYW5kIGRpc3RyaWJ1dGUgdmVy
YmF0aW0gY29waWVzCi0gb2YgdGhpcyBsaWNlbnNlIGRvY3VtZW50LCBidXQgY2hhbmdpbmcgaXQg
aXMgbm90IGFsbG93ZWQuCi1cZW5ke2NlbnRlcn0KLQotCi1cYmVnaW57Y2VudGVyfQote1xiZlxs
YXJnZSBQcmVhbWJsZX0KLVxlbmR7Y2VudGVyfQotCi1UaGUgcHVycG9zZSBvZiB0aGlzIExpY2Vu
c2UgaXMgdG8gbWFrZSBhIG1hbnVhbCwgdGV4dGJvb2ssIG9yIG90aGVyCi1mdW5jdGlvbmFsIGFu
ZCB1c2VmdWwgZG9jdW1lbnQgImZyZWUiIGluIHRoZSBzZW5zZSBvZiBmcmVlZG9tOiB0bwotYXNz
dXJlIGV2ZXJ5b25lIHRoZSBlZmZlY3RpdmUgZnJlZWRvbSB0byBjb3B5IGFuZCByZWRpc3RyaWJ1
dGUgaXQsCi13aXRoIG9yIHdpdGhvdXQgbW9kaWZ5aW5nIGl0LCBlaXRoZXIgY29tbWVyY2lhbGx5
IG9yIG5vbmNvbW1lcmNpYWxseS4KLVNlY29uZGFyaWx5LCB0aGlzIExpY2Vuc2UgcHJlc2VydmVz
IGZvciB0aGUgYXV0aG9yIGFuZCBwdWJsaXNoZXIgYSB3YXkKLXRvIGdldCBjcmVkaXQgZm9yIHRo
ZWlyIHdvcmssIHdoaWxlIG5vdCBiZWluZyBjb25zaWRlcmVkIHJlc3BvbnNpYmxlCi1mb3IgbW9k
aWZpY2F0aW9ucyBtYWRlIGJ5IG90aGVycy4KLQotVGhpcyBMaWNlbnNlIGlzIGEga2luZCBvZiAi
Y29weWxlZnQiLCB3aGljaCBtZWFucyB0aGF0IGRlcml2YXRpdmUKLXdvcmtzIG9mIHRoZSBkb2N1
bWVudCBtdXN0IHRoZW1zZWx2ZXMgYmUgZnJlZSBpbiB0aGUgc2FtZSBzZW5zZS4gIEl0Ci1jb21w
bGVtZW50cyB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UsIHdoaWNoIGlzIGEgY29weWxl
ZnQKLWxpY2Vuc2UgZGVzaWduZWQgZm9yIGZyZWUgc29mdHdhcmUuCi0KLVdlIGhhdmUgZGVzaWdu
ZWQgdGhpcyBMaWNlbnNlIGluIG9yZGVyIHRvIHVzZSBpdCBmb3IgbWFudWFscyBmb3IgZnJlZQot
c29mdHdhcmUsIGJlY2F1c2UgZnJlZSBzb2Z0d2FyZSBuZWVkcyBmcmVlIGRvY3VtZW50YXRpb246
IGEgZnJlZQotcHJvZ3JhbSBzaG91bGQgY29tZSB3aXRoIG1hbnVhbHMgcHJvdmlkaW5nIHRoZSBz
YW1lIGZyZWVkb21zIHRoYXQgdGhlCi1zb2Z0d2FyZSBkb2VzLiAgQnV0IHRoaXMgTGljZW5zZSBp
cyBub3QgbGltaXRlZCB0byBzb2Z0d2FyZSBtYW51YWxzOwotaXQgY2FuIGJlIHVzZWQgZm9yIGFu
eSB0ZXh0dWFsIHdvcmssIHJlZ2FyZGxlc3Mgb2Ygc3ViamVjdCBtYXR0ZXIgb3IKLXdoZXRoZXIg
aXQgaXMgcHVibGlzaGVkIGFzIGEgcHJpbnRlZCBib29rLiAgV2UgcmVjb21tZW5kIHRoaXMgTGlj
ZW5zZQotcHJpbmNpcGFsbHkgZm9yIHdvcmtzIHdob3NlIHB1cnBvc2UgaXMgaW5zdHJ1Y3Rpb24g
b3IgcmVmZXJlbmNlLgotCi0KLVxiZWdpbntjZW50ZXJ9Ci17XExhcmdlXGJmIDEuIEFQUExJQ0FC
SUxJVFkgQU5EIERFRklOSVRJT05TfQotXGFkZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rpb259ezEu
IEFQUExJQ0FCSUxJVFkgQU5EIERFRklOSVRJT05TfQotXGVuZHtjZW50ZXJ9Ci0KLVRoaXMgTGlj
ZW5zZSBhcHBsaWVzIHRvIGFueSBtYW51YWwgb3Igb3RoZXIgd29yaywgaW4gYW55IG1lZGl1bSwg
dGhhdAotY29udGFpbnMgYSBub3RpY2UgcGxhY2VkIGJ5IHRoZSBjb3B5cmlnaHQgaG9sZGVyIHNh
eWluZyBpdCBjYW4gYmUKLWRpc3RyaWJ1dGVkIHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGlzIExpY2Vu
c2UuICBTdWNoIGEgbm90aWNlIGdyYW50cyBhCi13b3JsZC13aWRlLCByb3lhbHR5LWZyZWUgbGlj
ZW5zZSwgdW5saW1pdGVkIGluIGR1cmF0aW9uLCB0byB1c2UgdGhhdAotd29yayB1bmRlciB0aGUg
Y29uZGl0aW9ucyBzdGF0ZWQgaGVyZWluLiAgVGhlIFx0ZXh0YmZ7IkRvY3VtZW50In0sIGJlbG93
LAotcmVmZXJzIHRvIGFueSBzdWNoIG1hbnVhbCBvciB3b3JrLiAgQW55IG1lbWJlciBvZiB0aGUg
cHVibGljIGlzIGEKLWxpY2Vuc2VlLCBhbmQgaXMgYWRkcmVzc2VkIGFzIFx0ZXh0YmZ7InlvdSJ9
LiAgWW91IGFjY2VwdCB0aGUgbGljZW5zZSBpZiB5b3UKLWNvcHksIG1vZGlmeSBvciBkaXN0cmli
dXRlIHRoZSB3b3JrIGluIGEgd2F5IHJlcXVpcmluZyBwZXJtaXNzaW9uCi11bmRlciBjb3B5cmln
aHQgbGF3LgotCi1BIFx0ZXh0YmZ7Ik1vZGlmaWVkIFZlcnNpb24ifSBvZiB0aGUgRG9jdW1lbnQg
bWVhbnMgYW55IHdvcmsgY29udGFpbmluZyB0aGUKLURvY3VtZW50IG9yIGEgcG9ydGlvbiBvZiBp
dCwgZWl0aGVyIGNvcGllZCB2ZXJiYXRpbSwgb3Igd2l0aAotbW9kaWZpY2F0aW9ucyBhbmQvb3Ig
dHJhbnNsYXRlZCBpbnRvIGFub3RoZXIgbGFuZ3VhZ2UuCi0KLUEgXHRleHRiZnsiU2Vjb25kYXJ5
IFNlY3Rpb24ifSBpcyBhIG5hbWVkIGFwcGVuZGl4IG9yIGEgZnJvbnQtbWF0dGVyIHNlY3Rpb24g
b2YKLXRoZSBEb2N1bWVudCB0aGF0IGRlYWxzIGV4Y2x1c2l2ZWx5IHdpdGggdGhlIHJlbGF0aW9u
c2hpcCBvZiB0aGUKLXB1Ymxpc2hlcnMgb3IgYXV0aG9ycyBvZiB0aGUgRG9jdW1lbnQgdG8gdGhl
IERvY3VtZW50J3Mgb3ZlcmFsbCBzdWJqZWN0Ci0ob3IgdG8gcmVsYXRlZCBtYXR0ZXJzKSBhbmQg
Y29udGFpbnMgbm90aGluZyB0aGF0IGNvdWxkIGZhbGwgZGlyZWN0bHkKLXdpdGhpbiB0aGF0IG92
ZXJhbGwgc3ViamVjdC4gIChUaHVzLCBpZiB0aGUgRG9jdW1lbnQgaXMgaW4gcGFydCBhCi10ZXh0
Ym9vayBvZiBtYXRoZW1hdGljcywgYSBTZWNvbmRhcnkgU2VjdGlvbiBtYXkgbm90IGV4cGxhaW4g
YW55Ci1tYXRoZW1hdGljcy4pICBUaGUgcmVsYXRpb25zaGlwIGNvdWxkIGJlIGEgbWF0dGVyIG9m
IGhpc3RvcmljYWwKLWNvbm5lY3Rpb24gd2l0aCB0aGUgc3ViamVjdCBvciB3aXRoIHJlbGF0ZWQg
bWF0dGVycywgb3Igb2YgbGVnYWwsCi1jb21tZXJjaWFsLCBwaGlsb3NvcGhpY2FsLCBldGhpY2Fs
IG9yIHBvbGl0aWNhbCBwb3NpdGlvbiByZWdhcmRpbmcKLXRoZW0uCi0KLVRoZSBcdGV4dGJmeyJJ
bnZhcmlhbnQgU2VjdGlvbnMifSBhcmUgY2VydGFpbiBTZWNvbmRhcnkgU2VjdGlvbnMgd2hvc2Ug
dGl0bGVzCi1hcmUgZGVzaWduYXRlZCwgYXMgYmVpbmcgdGhvc2Ugb2YgSW52YXJpYW50IFNlY3Rp
b25zLCBpbiB0aGUgbm90aWNlCi10aGF0IHNheXMgdGhhdCB0aGUgRG9jdW1lbnQgaXMgcmVsZWFz
ZWQgdW5kZXIgdGhpcyBMaWNlbnNlLiAgSWYgYQotc2VjdGlvbiBkb2VzIG5vdCBmaXQgdGhlIGFi
b3ZlIGRlZmluaXRpb24gb2YgU2Vjb25kYXJ5IHRoZW4gaXQgaXMgbm90Ci1hbGxvd2VkIHRvIGJl
IGRlc2lnbmF0ZWQgYXMgSW52YXJpYW50LiAgVGhlIERvY3VtZW50IG1heSBjb250YWluIHplcm8K
LUludmFyaWFudCBTZWN0aW9ucy4gIElmIHRoZSBEb2N1bWVudCBkb2VzIG5vdCBpZGVudGlmeSBh
bnkgSW52YXJpYW50Ci1TZWN0aW9ucyB0aGVuIHRoZXJlIGFyZSBub25lLgotCi1UaGUgXHRleHRi
ZnsiQ292ZXIgVGV4dHMifSBhcmUgY2VydGFpbiBzaG9ydCBwYXNzYWdlcyBvZiB0ZXh0IHRoYXQg
YXJlIGxpc3RlZCwKLWFzIEZyb250LUNvdmVyIFRleHRzIG9yIEJhY2stQ292ZXIgVGV4dHMsIGlu
IHRoZSBub3RpY2UgdGhhdCBzYXlzIHRoYXQKLXRoZSBEb2N1bWVudCBpcyByZWxlYXNlZCB1bmRl
ciB0aGlzIExpY2Vuc2UuICBBIEZyb250LUNvdmVyIFRleHQgbWF5Ci1iZSBhdCBtb3N0IDUgd29y
ZHMsIGFuZCBhIEJhY2stQ292ZXIgVGV4dCBtYXkgYmUgYXQgbW9zdCAyNSB3b3Jkcy4KLQotQSBc
dGV4dGJmeyJUcmFuc3BhcmVudCJ9IGNvcHkgb2YgdGhlIERvY3VtZW50IG1lYW5zIGEgbWFjaGlu
ZS1yZWFkYWJsZSBjb3B5LAotcmVwcmVzZW50ZWQgaW4gYSBmb3JtYXQgd2hvc2Ugc3BlY2lmaWNh
dGlvbiBpcyBhdmFpbGFibGUgdG8gdGhlCi1nZW5lcmFsIHB1YmxpYywgdGhhdCBpcyBzdWl0YWJs
ZSBmb3IgcmV2aXNpbmcgdGhlIGRvY3VtZW50Ci1zdHJhaWdodGZvcndhcmRseSB3aXRoIGdlbmVy
aWMgdGV4dCBlZGl0b3JzIG9yIChmb3IgaW1hZ2VzIGNvbXBvc2VkIG9mCi1waXhlbHMpIGdlbmVy
aWMgcGFpbnQgcHJvZ3JhbXMgb3IgKGZvciBkcmF3aW5ncykgc29tZSB3aWRlbHkgYXZhaWxhYmxl
Ci1kcmF3aW5nIGVkaXRvciwgYW5kIHRoYXQgaXMgc3VpdGFibGUgZm9yIGlucHV0IHRvIHRleHQg
Zm9ybWF0dGVycyBvcgotZm9yIGF1dG9tYXRpYyB0cmFuc2xhdGlvbiB0byBhIHZhcmlldHkgb2Yg
Zm9ybWF0cyBzdWl0YWJsZSBmb3IgaW5wdXQKLXRvIHRleHQgZm9ybWF0dGVycy4gIEEgY29weSBt
YWRlIGluIGFuIG90aGVyd2lzZSBUcmFuc3BhcmVudCBmaWxlCi1mb3JtYXQgd2hvc2UgbWFya3Vw
LCBvciBhYnNlbmNlIG9mIG1hcmt1cCwgaGFzIGJlZW4gYXJyYW5nZWQgdG8gdGh3YXJ0Ci1vciBk
aXNjb3VyYWdlIHN1YnNlcXVlbnQgbW9kaWZpY2F0aW9uIGJ5IHJlYWRlcnMgaXMgbm90IFRyYW5z
cGFyZW50LgotQW4gaW1hZ2UgZm9ybWF0IGlzIG5vdCBUcmFuc3BhcmVudCBpZiB1c2VkIGZvciBh
bnkgc3Vic3RhbnRpYWwgYW1vdW50Ci1vZiB0ZXh0LiAgQSBjb3B5IHRoYXQgaXMgbm90ICJUcmFu
c3BhcmVudCIgaXMgY2FsbGVkIFx0ZXh0YmZ7Ik9wYXF1ZSJ9LgotCi1FeGFtcGxlcyBvZiBzdWl0
YWJsZSBmb3JtYXRzIGZvciBUcmFuc3BhcmVudCBjb3BpZXMgaW5jbHVkZSBwbGFpbgotQVNDSUkg
d2l0aG91dCBtYXJrdXAsIFRleGluZm8gaW5wdXQgZm9ybWF0LCBMYVRlWCBpbnB1dCBmb3JtYXQs
IFNHTUwKLW9yIFhNTCB1c2luZyBhIHB1YmxpY2x5IGF2YWlsYWJsZSBEVEQsIGFuZCBzdGFuZGFy
ZC1jb25mb3JtaW5nIHNpbXBsZQotSFRNTCwgUG9zdFNjcmlwdCBvciBQREYgZGVzaWduZWQgZm9y
IGh1bWFuIG1vZGlmaWNhdGlvbi4gIEV4YW1wbGVzIG9mCi10cmFuc3BhcmVudCBpbWFnZSBmb3Jt
YXRzIGluY2x1ZGUgUE5HLCBYQ0YgYW5kIEpQRy4gIE9wYXF1ZSBmb3JtYXRzCi1pbmNsdWRlIHBy
b3ByaWV0YXJ5IGZvcm1hdHMgdGhhdCBjYW4gYmUgcmVhZCBhbmQgZWRpdGVkIG9ubHkgYnkKLXBy
b3ByaWV0YXJ5IHdvcmQgcHJvY2Vzc29ycywgU0dNTCBvciBYTUwgZm9yIHdoaWNoIHRoZSBEVEQg
YW5kL29yCi1wcm9jZXNzaW5nIHRvb2xzIGFyZSBub3QgZ2VuZXJhbGx5IGF2YWlsYWJsZSwgYW5k
IHRoZQotbWFjaGluZS1nZW5lcmF0ZWQgSFRNTCwgUG9zdFNjcmlwdCBvciBQREYgcHJvZHVjZWQg
Ynkgc29tZSB3b3JkCi1wcm9jZXNzb3JzIGZvciBvdXRwdXQgcHVycG9zZXMgb25seS4KLQotVGhl
IFx0ZXh0YmZ7IlRpdGxlIFBhZ2UifSBtZWFucywgZm9yIGEgcHJpbnRlZCBib29rLCB0aGUgdGl0
bGUgcGFnZSBpdHNlbGYsCi1wbHVzIHN1Y2ggZm9sbG93aW5nIHBhZ2VzIGFzIGFyZSBuZWVkZWQg
dG8gaG9sZCwgbGVnaWJseSwgdGhlIG1hdGVyaWFsCi10aGlzIExpY2Vuc2UgcmVxdWlyZXMgdG8g
YXBwZWFyIGluIHRoZSB0aXRsZSBwYWdlLiAgRm9yIHdvcmtzIGluCi1mb3JtYXRzIHdoaWNoIGRv
IG5vdCBoYXZlIGFueSB0aXRsZSBwYWdlIGFzIHN1Y2gsICJUaXRsZSBQYWdlIiBtZWFucwotdGhl
IHRleHQgbmVhciB0aGUgbW9zdCBwcm9taW5lbnQgYXBwZWFyYW5jZSBvZiB0aGUgd29yaydzIHRp
dGxlLAotcHJlY2VkaW5nIHRoZSBiZWdpbm5pbmcgb2YgdGhlIGJvZHkgb2YgdGhlIHRleHQuCi0K
LUEgc2VjdGlvbiBcdGV4dGJmeyJFbnRpdGxlZCBYWVoifSBtZWFucyBhIG5hbWVkIHN1YnVuaXQg
b2YgdGhlIERvY3VtZW50IHdob3NlCi10aXRsZSBlaXRoZXIgaXMgcHJlY2lzZWx5IFhZWiBvciBj
b250YWlucyBYWVogaW4gcGFyZW50aGVzZXMgZm9sbG93aW5nCi10ZXh0IHRoYXQgdHJhbnNsYXRl
cyBYWVogaW4gYW5vdGhlciBsYW5ndWFnZS4gIChIZXJlIFhZWiBzdGFuZHMgZm9yIGEKLXNwZWNp
ZmljIHNlY3Rpb24gbmFtZSBtZW50aW9uZWQgYmVsb3csIHN1Y2ggYXMgXHRleHRiZnsiQWNrbm93
bGVkZ2VtZW50cyJ9LAotXHRleHRiZnsiRGVkaWNhdGlvbnMifSwgXHRleHRiZnsiRW5kb3JzZW1l
bnRzIn0sIG9yIFx0ZXh0YmZ7Ikhpc3RvcnkifS4pICAKLVRvIFx0ZXh0YmZ7IlByZXNlcnZlIHRo
ZSBUaXRsZSJ9Ci1vZiBzdWNoIGEgc2VjdGlvbiB3aGVuIHlvdSBtb2RpZnkgdGhlIERvY3VtZW50
IG1lYW5zIHRoYXQgaXQgcmVtYWlucyBhCi1zZWN0aW9uICJFbnRpdGxlZCBYWVoiIGFjY29yZGlu
ZyB0byB0aGlzIGRlZmluaXRpb24uCi0KLVRoZSBEb2N1bWVudCBtYXkgaW5jbHVkZSBXYXJyYW50
eSBEaXNjbGFpbWVycyBuZXh0IHRvIHRoZSBub3RpY2Ugd2hpY2gKLXN0YXRlcyB0aGF0IHRoaXMg
TGljZW5zZSBhcHBsaWVzIHRvIHRoZSBEb2N1bWVudC4gIFRoZXNlIFdhcnJhbnR5Ci1EaXNjbGFp
bWVycyBhcmUgY29uc2lkZXJlZCB0byBiZSBpbmNsdWRlZCBieSByZWZlcmVuY2UgaW4gdGhpcwot
TGljZW5zZSwgYnV0IG9ubHkgYXMgcmVnYXJkcyBkaXNjbGFpbWluZyB3YXJyYW50aWVzOiBhbnkg
b3RoZXIKLWltcGxpY2F0aW9uIHRoYXQgdGhlc2UgV2FycmFudHkgRGlzY2xhaW1lcnMgbWF5IGhh
dmUgaXMgdm9pZCBhbmQgaGFzCi1ubyBlZmZlY3Qgb24gdGhlIG1lYW5pbmcgb2YgdGhpcyBMaWNl
bnNlLgotCi0KLVxiZWdpbntjZW50ZXJ9Ci17XExhcmdlXGJmIDIuIFZFUkJBVElNIENPUFlJTkd9
Ci1cYWRkY29udGVudHNsaW5le3RvY317c2VjdGlvbn17Mi4gVkVSQkFUSU0gQ09QWUlOR30KLVxl
bmR7Y2VudGVyfQotCi1Zb3UgbWF5IGNvcHkgYW5kIGRpc3RyaWJ1dGUgdGhlIERvY3VtZW50IGlu
IGFueSBtZWRpdW0sIGVpdGhlcgotY29tbWVyY2lhbGx5IG9yIG5vbmNvbW1lcmNpYWxseSwgcHJv
dmlkZWQgdGhhdCB0aGlzIExpY2Vuc2UsIHRoZQotY29weXJpZ2h0IG5vdGljZXMsIGFuZCB0aGUg
bGljZW5zZSBub3RpY2Ugc2F5aW5nIHRoaXMgTGljZW5zZSBhcHBsaWVzCi10byB0aGUgRG9jdW1l
bnQgYXJlIHJlcHJvZHVjZWQgaW4gYWxsIGNvcGllcywgYW5kIHRoYXQgeW91IGFkZCBubyBvdGhl
cgotY29uZGl0aW9ucyB3aGF0c29ldmVyIHRvIHRob3NlIG9mIHRoaXMgTGljZW5zZS4gIFlvdSBt
YXkgbm90IHVzZQotdGVjaG5pY2FsIG1lYXN1cmVzIHRvIG9ic3RydWN0IG9yIGNvbnRyb2wgdGhl
IHJlYWRpbmcgb3IgZnVydGhlcgotY29weWluZyBvZiB0aGUgY29waWVzIHlvdSBtYWtlIG9yIGRp
c3RyaWJ1dGUuICBIb3dldmVyLCB5b3UgbWF5IGFjY2VwdAotY29tcGVuc2F0aW9uIGluIGV4Y2hh
bmdlIGZvciBjb3BpZXMuICBJZiB5b3UgZGlzdHJpYnV0ZSBhIGxhcmdlIGVub3VnaAotbnVtYmVy
IG9mIGNvcGllcyB5b3UgbXVzdCBhbHNvIGZvbGxvdyB0aGUgY29uZGl0aW9ucyBpbiBzZWN0aW9u
IDMuCi0KLVlvdSBtYXkgYWxzbyBsZW5kIGNvcGllcywgdW5kZXIgdGhlIHNhbWUgY29uZGl0aW9u
cyBzdGF0ZWQgYWJvdmUsIGFuZAoteW91IG1heSBwdWJsaWNseSBkaXNwbGF5IGNvcGllcy4KLQot
Ci1cYmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiAzLiBDT1BZSU5HIElOIFFVQU5USVRZfQotXGFk
ZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rpb259ezMuIENPUFlJTkcgSU4gUVVBTlRJVFl9Ci1cZW5k
e2NlbnRlcn0KLQotCi1JZiB5b3UgcHVibGlzaCBwcmludGVkIGNvcGllcyAob3IgY29waWVzIGlu
IG1lZGlhIHRoYXQgY29tbW9ubHkgaGF2ZQotcHJpbnRlZCBjb3ZlcnMpIG9mIHRoZSBEb2N1bWVu
dCwgbnVtYmVyaW5nIG1vcmUgdGhhbiAxMDAsIGFuZCB0aGUKLURvY3VtZW50J3MgbGljZW5zZSBu
b3RpY2UgcmVxdWlyZXMgQ292ZXIgVGV4dHMsIHlvdSBtdXN0IGVuY2xvc2UgdGhlCi1jb3BpZXMg
aW4gY292ZXJzIHRoYXQgY2FycnksIGNsZWFybHkgYW5kIGxlZ2libHksIGFsbCB0aGVzZSBDb3Zl
cgotVGV4dHM6IEZyb250LUNvdmVyIFRleHRzIG9uIHRoZSBmcm9udCBjb3ZlciwgYW5kIEJhY2st
Q292ZXIgVGV4dHMgb24KLXRoZSBiYWNrIGNvdmVyLiAgQm90aCBjb3ZlcnMgbXVzdCBhbHNvIGNs
ZWFybHkgYW5kIGxlZ2libHkgaWRlbnRpZnkKLXlvdSBhcyB0aGUgcHVibGlzaGVyIG9mIHRoZXNl
IGNvcGllcy4gIFRoZSBmcm9udCBjb3ZlciBtdXN0IHByZXNlbnQKLXRoZSBmdWxsIHRpdGxlIHdp
dGggYWxsIHdvcmRzIG9mIHRoZSB0aXRsZSBlcXVhbGx5IHByb21pbmVudCBhbmQKLXZpc2libGUu
ICBZb3UgbWF5IGFkZCBvdGhlciBtYXRlcmlhbCBvbiB0aGUgY292ZXJzIGluIGFkZGl0aW9uLgot
Q29weWluZyB3aXRoIGNoYW5nZXMgbGltaXRlZCB0byB0aGUgY292ZXJzLCBhcyBsb25nIGFzIHRo
ZXkgcHJlc2VydmUKLXRoZSB0aXRsZSBvZiB0aGUgRG9jdW1lbnQgYW5kIHNhdGlzZnkgdGhlc2Ug
Y29uZGl0aW9ucywgY2FuIGJlIHRyZWF0ZWQKLWFzIHZlcmJhdGltIGNvcHlpbmcgaW4gb3RoZXIg
cmVzcGVjdHMuCi0KLUlmIHRoZSByZXF1aXJlZCB0ZXh0cyBmb3IgZWl0aGVyIGNvdmVyIGFyZSB0
b28gdm9sdW1pbm91cyB0byBmaXQKLWxlZ2libHksIHlvdSBzaG91bGQgcHV0IHRoZSBmaXJzdCBv
bmVzIGxpc3RlZCAoYXMgbWFueSBhcyBmaXQKLXJlYXNvbmFibHkpIG9uIHRoZSBhY3R1YWwgY292
ZXIsIGFuZCBjb250aW51ZSB0aGUgcmVzdCBvbnRvIGFkamFjZW50Ci1wYWdlcy4KLQotSWYgeW91
IHB1Ymxpc2ggb3IgZGlzdHJpYnV0ZSBPcGFxdWUgY29waWVzIG9mIHRoZSBEb2N1bWVudCBudW1i
ZXJpbmcKLW1vcmUgdGhhbiAxMDAsIHlvdSBtdXN0IGVpdGhlciBpbmNsdWRlIGEgbWFjaGluZS1y
ZWFkYWJsZSBUcmFuc3BhcmVudAotY29weSBhbG9uZyB3aXRoIGVhY2ggT3BhcXVlIGNvcHksIG9y
IHN0YXRlIGluIG9yIHdpdGggZWFjaCBPcGFxdWUgY29weQotYSBjb21wdXRlci1uZXR3b3JrIGxv
Y2F0aW9uIGZyb20gd2hpY2ggdGhlIGdlbmVyYWwgbmV0d29yay11c2luZwotcHVibGljIGhhcyBh
Y2Nlc3MgdG8gZG93bmxvYWQgdXNpbmcgcHVibGljLXN0YW5kYXJkIG5ldHdvcmsgcHJvdG9jb2xz
Ci1hIGNvbXBsZXRlIFRyYW5zcGFyZW50IGNvcHkgb2YgdGhlIERvY3VtZW50LCBmcmVlIG9mIGFk
ZGVkIG1hdGVyaWFsLgotSWYgeW91IHVzZSB0aGUgbGF0dGVyIG9wdGlvbiwgeW91IG11c3QgdGFr
ZSByZWFzb25hYmx5IHBydWRlbnQgc3RlcHMsCi13aGVuIHlvdSBiZWdpbiBkaXN0cmlidXRpb24g
b2YgT3BhcXVlIGNvcGllcyBpbiBxdWFudGl0eSwgdG8gZW5zdXJlCi10aGF0IHRoaXMgVHJhbnNw
YXJlbnQgY29weSB3aWxsIHJlbWFpbiB0aHVzIGFjY2Vzc2libGUgYXQgdGhlIHN0YXRlZAotbG9j
YXRpb24gdW50aWwgYXQgbGVhc3Qgb25lIHllYXIgYWZ0ZXIgdGhlIGxhc3QgdGltZSB5b3UgZGlz
dHJpYnV0ZSBhbgotT3BhcXVlIGNvcHkgKGRpcmVjdGx5IG9yIHRocm91Z2ggeW91ciBhZ2VudHMg
b3IgcmV0YWlsZXJzKSBvZiB0aGF0Ci1lZGl0aW9uIHRvIHRoZSBwdWJsaWMuCi0KLUl0IGlzIHJl
cXVlc3RlZCwgYnV0IG5vdCByZXF1aXJlZCwgdGhhdCB5b3UgY29udGFjdCB0aGUgYXV0aG9ycyBv
ZiB0aGUKLURvY3VtZW50IHdlbGwgYmVmb3JlIHJlZGlzdHJpYnV0aW5nIGFueSBsYXJnZSBudW1i
ZXIgb2YgY29waWVzLCB0byBnaXZlCi10aGVtIGEgY2hhbmNlIHRvIHByb3ZpZGUgeW91IHdpdGgg
YW4gdXBkYXRlZCB2ZXJzaW9uIG9mIHRoZSBEb2N1bWVudC4KLQotCi1cYmVnaW57Y2VudGVyfQot
e1xMYXJnZVxiZiA0LiBNT0RJRklDQVRJT05TfQotXGFkZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rp
b259ezQuIE1PRElGSUNBVElPTlN9Ci1cZW5ke2NlbnRlcn0KLQotWW91IG1heSBjb3B5IGFuZCBk
aXN0cmlidXRlIGEgTW9kaWZpZWQgVmVyc2lvbiBvZiB0aGUgRG9jdW1lbnQgdW5kZXIKLXRoZSBj
b25kaXRpb25zIG9mIHNlY3Rpb25zIDIgYW5kIDMgYWJvdmUsIHByb3ZpZGVkIHRoYXQgeW91IHJl
bGVhc2UKLXRoZSBNb2RpZmllZCBWZXJzaW9uIHVuZGVyIHByZWNpc2VseSB0aGlzIExpY2Vuc2Us
IHdpdGggdGhlIE1vZGlmaWVkCi1WZXJzaW9uIGZpbGxpbmcgdGhlIHJvbGUgb2YgdGhlIERvY3Vt
ZW50LCB0aHVzIGxpY2Vuc2luZyBkaXN0cmlidXRpb24KLWFuZCBtb2RpZmljYXRpb24gb2YgdGhl
IE1vZGlmaWVkIFZlcnNpb24gdG8gd2hvZXZlciBwb3NzZXNzZXMgYSBjb3B5Ci1vZiBpdC4gIElu
IGFkZGl0aW9uLCB5b3UgbXVzdCBkbyB0aGVzZSB0aGluZ3MgaW4gdGhlIE1vZGlmaWVkIFZlcnNp
b246Ci0KLVxiZWdpbntpdGVtaXplfQotXGl0ZW1bQS5dIAotICAgVXNlIGluIHRoZSBUaXRsZSBQ
YWdlIChhbmQgb24gdGhlIGNvdmVycywgaWYgYW55KSBhIHRpdGxlIGRpc3RpbmN0Ci0gICBmcm9t
IHRoYXQgb2YgdGhlIERvY3VtZW50LCBhbmQgZnJvbSB0aG9zZSBvZiBwcmV2aW91cyB2ZXJzaW9u
cwotICAgKHdoaWNoIHNob3VsZCwgaWYgdGhlcmUgd2VyZSBhbnksIGJlIGxpc3RlZCBpbiB0aGUg
SGlzdG9yeSBzZWN0aW9uCi0gICBvZiB0aGUgRG9jdW1lbnQpLiAgWW91IG1heSB1c2UgdGhlIHNh
bWUgdGl0bGUgYXMgYSBwcmV2aW91cyB2ZXJzaW9uCi0gICBpZiB0aGUgb3JpZ2luYWwgcHVibGlz
aGVyIG9mIHRoYXQgdmVyc2lvbiBnaXZlcyBwZXJtaXNzaW9uLgotICAgCi1caXRlbVtCLl0KLSAg
IExpc3Qgb24gdGhlIFRpdGxlIFBhZ2UsIGFzIGF1dGhvcnMsIG9uZSBvciBtb3JlIHBlcnNvbnMg
b3IgZW50aXRpZXMKLSAgIHJlc3BvbnNpYmxlIGZvciBhdXRob3JzaGlwIG9mIHRoZSBtb2RpZmlj
YXRpb25zIGluIHRoZSBNb2RpZmllZAotICAgVmVyc2lvbiwgdG9nZXRoZXIgd2l0aCBhdCBsZWFz
dCBmaXZlIG9mIHRoZSBwcmluY2lwYWwgYXV0aG9ycyBvZiB0aGUKLSAgIERvY3VtZW50IChhbGwg
b2YgaXRzIHByaW5jaXBhbCBhdXRob3JzLCBpZiBpdCBoYXMgZmV3ZXIgdGhhbiBmaXZlKSwKLSAg
IHVubGVzcyB0aGV5IHJlbGVhc2UgeW91IGZyb20gdGhpcyByZXF1aXJlbWVudC4KLSAgIAotXGl0
ZW1bQy5dCi0gICBTdGF0ZSBvbiB0aGUgVGl0bGUgcGFnZSB0aGUgbmFtZSBvZiB0aGUgcHVibGlz
aGVyIG9mIHRoZQotICAgTW9kaWZpZWQgVmVyc2lvbiwgYXMgdGhlIHB1Ymxpc2hlci4KLSAgIAot
XGl0ZW1bRC5dCi0gICBQcmVzZXJ2ZSBhbGwgdGhlIGNvcHlyaWdodCBub3RpY2VzIG9mIHRoZSBE
b2N1bWVudC4KLSAgIAotXGl0ZW1bRS5dCi0gICBBZGQgYW4gYXBwcm9wcmlhdGUgY29weXJpZ2h0
IG5vdGljZSBmb3IgeW91ciBtb2RpZmljYXRpb25zCi0gICBhZGphY2VudCB0byB0aGUgb3RoZXIg
Y29weXJpZ2h0IG5vdGljZXMuCi0gICAKLVxpdGVtW0YuXQotICAgSW5jbHVkZSwgaW1tZWRpYXRl
bHkgYWZ0ZXIgdGhlIGNvcHlyaWdodCBub3RpY2VzLCBhIGxpY2Vuc2Ugbm90aWNlCi0gICBnaXZp
bmcgdGhlIHB1YmxpYyBwZXJtaXNzaW9uIHRvIHVzZSB0aGUgTW9kaWZpZWQgVmVyc2lvbiB1bmRl
ciB0aGUKLSAgIHRlcm1zIG9mIHRoaXMgTGljZW5zZSwgaW4gdGhlIGZvcm0gc2hvd24gaW4gdGhl
IEFkZGVuZHVtIGJlbG93LgotICAgCi1caXRlbVtHLl0KLSAgIFByZXNlcnZlIGluIHRoYXQgbGlj
ZW5zZSBub3RpY2UgdGhlIGZ1bGwgbGlzdHMgb2YgSW52YXJpYW50IFNlY3Rpb25zCi0gICBhbmQg
cmVxdWlyZWQgQ292ZXIgVGV4dHMgZ2l2ZW4gaW4gdGhlIERvY3VtZW50J3MgbGljZW5zZSBub3Rp
Y2UuCi0gICAKLVxpdGVtW0guXQotICAgSW5jbHVkZSBhbiB1bmFsdGVyZWQgY29weSBvZiB0aGlz
IExpY2Vuc2UuCi0gICAKLVxpdGVtW0kuXQotICAgUHJlc2VydmUgdGhlIHNlY3Rpb24gRW50aXRs
ZWQgIkhpc3RvcnkiLCBQcmVzZXJ2ZSBpdHMgVGl0bGUsIGFuZCBhZGQKLSAgIHRvIGl0IGFuIGl0
ZW0gc3RhdGluZyBhdCBsZWFzdCB0aGUgdGl0bGUsIHllYXIsIG5ldyBhdXRob3JzLCBhbmQKLSAg
IHB1Ymxpc2hlciBvZiB0aGUgTW9kaWZpZWQgVmVyc2lvbiBhcyBnaXZlbiBvbiB0aGUgVGl0bGUg
UGFnZS4gIElmCi0gICB0aGVyZSBpcyBubyBzZWN0aW9uIEVudGl0bGVkICJIaXN0b3J5IiBpbiB0
aGUgRG9jdW1lbnQsIGNyZWF0ZSBvbmUKLSAgIHN0YXRpbmcgdGhlIHRpdGxlLCB5ZWFyLCBhdXRo
b3JzLCBhbmQgcHVibGlzaGVyIG9mIHRoZSBEb2N1bWVudCBhcwotICAgZ2l2ZW4gb24gaXRzIFRp
dGxlIFBhZ2UsIHRoZW4gYWRkIGFuIGl0ZW0gZGVzY3JpYmluZyB0aGUgTW9kaWZpZWQKLSAgIFZl
cnNpb24gYXMgc3RhdGVkIGluIHRoZSBwcmV2aW91cyBzZW50ZW5jZS4KLSAgIAotXGl0ZW1bSi5d
Ci0gICBQcmVzZXJ2ZSB0aGUgbmV0d29yayBsb2NhdGlvbiwgaWYgYW55LCBnaXZlbiBpbiB0aGUg
RG9jdW1lbnQgZm9yCi0gICBwdWJsaWMgYWNjZXNzIHRvIGEgVHJhbnNwYXJlbnQgY29weSBvZiB0
aGUgRG9jdW1lbnQsIGFuZCBsaWtld2lzZQotICAgdGhlIG5ldHdvcmsgbG9jYXRpb25zIGdpdmVu
IGluIHRoZSBEb2N1bWVudCBmb3IgcHJldmlvdXMgdmVyc2lvbnMKLSAgIGl0IHdhcyBiYXNlZCBv
bi4gIFRoZXNlIG1heSBiZSBwbGFjZWQgaW4gdGhlICJIaXN0b3J5IiBzZWN0aW9uLgotICAgWW91
IG1heSBvbWl0IGEgbmV0d29yayBsb2NhdGlvbiBmb3IgYSB3b3JrIHRoYXQgd2FzIHB1Ymxpc2hl
ZCBhdAotICAgbGVhc3QgZm91ciB5ZWFycyBiZWZvcmUgdGhlIERvY3VtZW50IGl0c2VsZiwgb3Ig
aWYgdGhlIG9yaWdpbmFsCi0gICBwdWJsaXNoZXIgb2YgdGhlIHZlcnNpb24gaXQgcmVmZXJzIHRv
IGdpdmVzIHBlcm1pc3Npb24uCi0gICAKLVxpdGVtW0suXQotICAgRm9yIGFueSBzZWN0aW9uIEVu
dGl0bGVkICJBY2tub3dsZWRnZW1lbnRzIiBvciAiRGVkaWNhdGlvbnMiLAotICAgUHJlc2VydmUg
dGhlIFRpdGxlIG9mIHRoZSBzZWN0aW9uLCBhbmQgcHJlc2VydmUgaW4gdGhlIHNlY3Rpb24gYWxs
Ci0gICB0aGUgc3Vic3RhbmNlIGFuZCB0b25lIG9mIGVhY2ggb2YgdGhlIGNvbnRyaWJ1dG9yIGFj
a25vd2xlZGdlbWVudHMKLSAgIGFuZC9vciBkZWRpY2F0aW9ucyBnaXZlbiB0aGVyZWluLgotICAg
Ci1caXRlbVtMLl0KLSAgIFByZXNlcnZlIGFsbCB0aGUgSW52YXJpYW50IFNlY3Rpb25zIG9mIHRo
ZSBEb2N1bWVudCwKLSAgIHVuYWx0ZXJlZCBpbiB0aGVpciB0ZXh0IGFuZCBpbiB0aGVpciB0aXRs
ZXMuICBTZWN0aW9uIG51bWJlcnMKLSAgIG9yIHRoZSBlcXVpdmFsZW50IGFyZSBub3QgY29uc2lk
ZXJlZCBwYXJ0IG9mIHRoZSBzZWN0aW9uIHRpdGxlcy4KLSAgIAotXGl0ZW1bTS5dCi0gICBEZWxl
dGUgYW55IHNlY3Rpb24gRW50aXRsZWQgIkVuZG9yc2VtZW50cyIuICBTdWNoIGEgc2VjdGlvbgot
ICAgbWF5IG5vdCBiZSBpbmNsdWRlZCBpbiB0aGUgTW9kaWZpZWQgVmVyc2lvbi4KLSAgIAotXGl0
ZW1bTi5dCi0gICBEbyBub3QgcmV0aXRsZSBhbnkgZXhpc3Rpbmcgc2VjdGlvbiB0byBiZSBFbnRp
dGxlZCAiRW5kb3JzZW1lbnRzIgotICAgb3IgdG8gY29uZmxpY3QgaW4gdGl0bGUgd2l0aCBhbnkg
SW52YXJpYW50IFNlY3Rpb24uCi0gICAKLVxpdGVtW08uXQotICAgUHJlc2VydmUgYW55IFdhcnJh
bnR5IERpc2NsYWltZXJzLgotXGVuZHtpdGVtaXplfQotCi1JZiB0aGUgTW9kaWZpZWQgVmVyc2lv
biBpbmNsdWRlcyBuZXcgZnJvbnQtbWF0dGVyIHNlY3Rpb25zIG9yCi1hcHBlbmRpY2VzIHRoYXQg
cXVhbGlmeSBhcyBTZWNvbmRhcnkgU2VjdGlvbnMgYW5kIGNvbnRhaW4gbm8gbWF0ZXJpYWwKLWNv
cGllZCBmcm9tIHRoZSBEb2N1bWVudCwgeW91IG1heSBhdCB5b3VyIG9wdGlvbiBkZXNpZ25hdGUg
c29tZSBvciBhbGwKLW9mIHRoZXNlIHNlY3Rpb25zIGFzIGludmFyaWFudC4gIFRvIGRvIHRoaXMs
IGFkZCB0aGVpciB0aXRsZXMgdG8gdGhlCi1saXN0IG9mIEludmFyaWFudCBTZWN0aW9ucyBpbiB0
aGUgTW9kaWZpZWQgVmVyc2lvbidzIGxpY2Vuc2Ugbm90aWNlLgotVGhlc2UgdGl0bGVzIG11c3Qg
YmUgZGlzdGluY3QgZnJvbSBhbnkgb3RoZXIgc2VjdGlvbiB0aXRsZXMuCi0KLVlvdSBtYXkgYWRk
IGEgc2VjdGlvbiBFbnRpdGxlZCAiRW5kb3JzZW1lbnRzIiwgcHJvdmlkZWQgaXQgY29udGFpbnMK
LW5vdGhpbmcgYnV0IGVuZG9yc2VtZW50cyBvZiB5b3VyIE1vZGlmaWVkIFZlcnNpb24gYnkgdmFy
aW91cwotcGFydGllcy0tZm9yIGV4YW1wbGUsIHN0YXRlbWVudHMgb2YgcGVlciByZXZpZXcgb3Ig
dGhhdCB0aGUgdGV4dCBoYXMKLWJlZW4gYXBwcm92ZWQgYnkgYW4gb3JnYW5pemF0aW9uIGFzIHRo
ZSBhdXRob3JpdGF0aXZlIGRlZmluaXRpb24gb2YgYQotc3RhbmRhcmQuCi0KLVlvdSBtYXkgYWRk
IGEgcGFzc2FnZSBvZiB1cCB0byBmaXZlIHdvcmRzIGFzIGEgRnJvbnQtQ292ZXIgVGV4dCwgYW5k
IGEKLXBhc3NhZ2Ugb2YgdXAgdG8gMjUgd29yZHMgYXMgYSBCYWNrLUNvdmVyIFRleHQsIHRvIHRo
ZSBlbmQgb2YgdGhlIGxpc3QKLW9mIENvdmVyIFRleHRzIGluIHRoZSBNb2RpZmllZCBWZXJzaW9u
LiAgT25seSBvbmUgcGFzc2FnZSBvZgotRnJvbnQtQ292ZXIgVGV4dCBhbmQgb25lIG9mIEJhY2st
Q292ZXIgVGV4dCBtYXkgYmUgYWRkZWQgYnkgKG9yCi10aHJvdWdoIGFycmFuZ2VtZW50cyBtYWRl
IGJ5KSBhbnkgb25lIGVudGl0eS4gIElmIHRoZSBEb2N1bWVudCBhbHJlYWR5Ci1pbmNsdWRlcyBh
IGNvdmVyIHRleHQgZm9yIHRoZSBzYW1lIGNvdmVyLCBwcmV2aW91c2x5IGFkZGVkIGJ5IHlvdSBv
cgotYnkgYXJyYW5nZW1lbnQgbWFkZSBieSB0aGUgc2FtZSBlbnRpdHkgeW91IGFyZSBhY3Rpbmcg
b24gYmVoYWxmIG9mLAoteW91IG1heSBub3QgYWRkIGFub3RoZXI7IGJ1dCB5b3UgbWF5IHJlcGxh
Y2UgdGhlIG9sZCBvbmUsIG9uIGV4cGxpY2l0Ci1wZXJtaXNzaW9uIGZyb20gdGhlIHByZXZpb3Vz
IHB1Ymxpc2hlciB0aGF0IGFkZGVkIHRoZSBvbGQgb25lLgotCi1UaGUgYXV0aG9yKHMpIGFuZCBw
dWJsaXNoZXIocykgb2YgdGhlIERvY3VtZW50IGRvIG5vdCBieSB0aGlzIExpY2Vuc2UKLWdpdmUg
cGVybWlzc2lvbiB0byB1c2UgdGhlaXIgbmFtZXMgZm9yIHB1YmxpY2l0eSBmb3Igb3IgdG8gYXNz
ZXJ0IG9yCi1pbXBseSBlbmRvcnNlbWVudCBvZiBhbnkgTW9kaWZpZWQgVmVyc2lvbi4KLQotCi1c
YmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiA1LiBDT01CSU5JTkcgRE9DVU1FTlRTfQotXGFkZGNv
bnRlbnRzbGluZXt0b2N9e3NlY3Rpb259ezUuIENPTUJJTklORyBET0NVTUVOVFN9Ci1cZW5ke2Nl
bnRlcn0KLQotCi1Zb3UgbWF5IGNvbWJpbmUgdGhlIERvY3VtZW50IHdpdGggb3RoZXIgZG9jdW1l
bnRzIHJlbGVhc2VkIHVuZGVyIHRoaXMKLUxpY2Vuc2UsIHVuZGVyIHRoZSB0ZXJtcyBkZWZpbmVk
IGluIHNlY3Rpb24gNCBhYm92ZSBmb3IgbW9kaWZpZWQKLXZlcnNpb25zLCBwcm92aWRlZCB0aGF0
IHlvdSBpbmNsdWRlIGluIHRoZSBjb21iaW5hdGlvbiBhbGwgb2YgdGhlCi1JbnZhcmlhbnQgU2Vj
dGlvbnMgb2YgYWxsIG9mIHRoZSBvcmlnaW5hbCBkb2N1bWVudHMsIHVubW9kaWZpZWQsIGFuZAot
bGlzdCB0aGVtIGFsbCBhcyBJbnZhcmlhbnQgU2VjdGlvbnMgb2YgeW91ciBjb21iaW5lZCB3b3Jr
IGluIGl0cwotbGljZW5zZSBub3RpY2UsIGFuZCB0aGF0IHlvdSBwcmVzZXJ2ZSBhbGwgdGhlaXIg
V2FycmFudHkgRGlzY2xhaW1lcnMuCi0KLVRoZSBjb21iaW5lZCB3b3JrIG5lZWQgb25seSBjb250
YWluIG9uZSBjb3B5IG9mIHRoaXMgTGljZW5zZSwgYW5kCi1tdWx0aXBsZSBpZGVudGljYWwgSW52
YXJpYW50IFNlY3Rpb25zIG1heSBiZSByZXBsYWNlZCB3aXRoIGEgc2luZ2xlCi1jb3B5LiAgSWYg
dGhlcmUgYXJlIG11bHRpcGxlIEludmFyaWFudCBTZWN0aW9ucyB3aXRoIHRoZSBzYW1lIG5hbWUg
YnV0Ci1kaWZmZXJlbnQgY29udGVudHMsIG1ha2UgdGhlIHRpdGxlIG9mIGVhY2ggc3VjaCBzZWN0
aW9uIHVuaXF1ZSBieQotYWRkaW5nIGF0IHRoZSBlbmQgb2YgaXQsIGluIHBhcmVudGhlc2VzLCB0
aGUgbmFtZSBvZiB0aGUgb3JpZ2luYWwKLWF1dGhvciBvciBwdWJsaXNoZXIgb2YgdGhhdCBzZWN0
aW9uIGlmIGtub3duLCBvciBlbHNlIGEgdW5pcXVlIG51bWJlci4KLU1ha2UgdGhlIHNhbWUgYWRq
dXN0bWVudCB0byB0aGUgc2VjdGlvbiB0aXRsZXMgaW4gdGhlIGxpc3Qgb2YKLUludmFyaWFudCBT
ZWN0aW9ucyBpbiB0aGUgbGljZW5zZSBub3RpY2Ugb2YgdGhlIGNvbWJpbmVkIHdvcmsuCi0KLUlu
IHRoZSBjb21iaW5hdGlvbiwgeW91IG11c3QgY29tYmluZSBhbnkgc2VjdGlvbnMgRW50aXRsZWQg
Ikhpc3RvcnkiCi1pbiB0aGUgdmFyaW91cyBvcmlnaW5hbCBkb2N1bWVudHMsIGZvcm1pbmcgb25l
IHNlY3Rpb24gRW50aXRsZWQKLSJIaXN0b3J5IjsgbGlrZXdpc2UgY29tYmluZSBhbnkgc2VjdGlv
bnMgRW50aXRsZWQgIkFja25vd2xlZGdlbWVudHMiLAotYW5kIGFueSBzZWN0aW9ucyBFbnRpdGxl
ZCAiRGVkaWNhdGlvbnMiLiAgWW91IG11c3QgZGVsZXRlIGFsbCBzZWN0aW9ucwotRW50aXRsZWQg
IkVuZG9yc2VtZW50cyIuCi0KLVxiZWdpbntjZW50ZXJ9Ci17XExhcmdlXGJmIDYuIENPTExFQ1RJ
T05TIE9GIERPQ1VNRU5UU30KLVxhZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0aW9ufXs2LiBDT0xM
RUNUSU9OUyBPRiBET0NVTUVOVFN9Ci1cZW5ke2NlbnRlcn0KLQotWW91IG1heSBtYWtlIGEgY29s
bGVjdGlvbiBjb25zaXN0aW5nIG9mIHRoZSBEb2N1bWVudCBhbmQgb3RoZXIgZG9jdW1lbnRzCi1y
ZWxlYXNlZCB1bmRlciB0aGlzIExpY2Vuc2UsIGFuZCByZXBsYWNlIHRoZSBpbmRpdmlkdWFsIGNv
cGllcyBvZiB0aGlzCi1MaWNlbnNlIGluIHRoZSB2YXJpb3VzIGRvY3VtZW50cyB3aXRoIGEgc2lu
Z2xlIGNvcHkgdGhhdCBpcyBpbmNsdWRlZCBpbgotdGhlIGNvbGxlY3Rpb24sIHByb3ZpZGVkIHRo
YXQgeW91IGZvbGxvdyB0aGUgcnVsZXMgb2YgdGhpcyBMaWNlbnNlIGZvcgotdmVyYmF0aW0gY29w
eWluZyBvZiBlYWNoIG9mIHRoZSBkb2N1bWVudHMgaW4gYWxsIG90aGVyIHJlc3BlY3RzLgotCi1Z
b3UgbWF5IGV4dHJhY3QgYSBzaW5nbGUgZG9jdW1lbnQgZnJvbSBzdWNoIGEgY29sbGVjdGlvbiwg
YW5kIGRpc3RyaWJ1dGUKLWl0IGluZGl2aWR1YWxseSB1bmRlciB0aGlzIExpY2Vuc2UsIHByb3Zp
ZGVkIHlvdSBpbnNlcnQgYSBjb3B5IG9mIHRoaXMKLUxpY2Vuc2UgaW50byB0aGUgZXh0cmFjdGVk
IGRvY3VtZW50LCBhbmQgZm9sbG93IHRoaXMgTGljZW5zZSBpbiBhbGwKLW90aGVyIHJlc3BlY3Rz
IHJlZ2FyZGluZyB2ZXJiYXRpbSBjb3B5aW5nIG9mIHRoYXQgZG9jdW1lbnQuCi0KLQotXGJlZ2lu
e2NlbnRlcn0KLXtcTGFyZ2VcYmYgNy4gQUdHUkVHQVRJT04gV0lUSCBJTkRFUEVOREVOVCBXT1JL
U30KLVxhZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0aW9ufXs3LiBBR0dSRUdBVElPTiBXSVRIIElO
REVQRU5ERU5UIFdPUktTfQotXGVuZHtjZW50ZXJ9Ci0KLQotQSBjb21waWxhdGlvbiBvZiB0aGUg
RG9jdW1lbnQgb3IgaXRzIGRlcml2YXRpdmVzIHdpdGggb3RoZXIgc2VwYXJhdGUKLWFuZCBpbmRl
cGVuZGVudCBkb2N1bWVudHMgb3Igd29ya3MsIGluIG9yIG9uIGEgdm9sdW1lIG9mIGEgc3RvcmFn
ZSBvcgotZGlzdHJpYnV0aW9uIG1lZGl1bSwgaXMgY2FsbGVkIGFuICJhZ2dyZWdhdGUiIGlmIHRo
ZSBjb3B5cmlnaHQKLXJlc3VsdGluZyBmcm9tIHRoZSBjb21waWxhdGlvbiBpcyBub3QgdXNlZCB0
byBsaW1pdCB0aGUgbGVnYWwgcmlnaHRzCi1vZiB0aGUgY29tcGlsYXRpb24ncyB1c2VycyBiZXlv
bmQgd2hhdCB0aGUgaW5kaXZpZHVhbCB3b3JrcyBwZXJtaXQuCi1XaGVuIHRoZSBEb2N1bWVudCBp
cyBpbmNsdWRlZCBpbiBhbiBhZ2dyZWdhdGUsIHRoaXMgTGljZW5zZSBkb2VzIG5vdAotYXBwbHkg
dG8gdGhlIG90aGVyIHdvcmtzIGluIHRoZSBhZ2dyZWdhdGUgd2hpY2ggYXJlIG5vdCB0aGVtc2Vs
dmVzCi1kZXJpdmF0aXZlIHdvcmtzIG9mIHRoZSBEb2N1bWVudC4KLQotSWYgdGhlIENvdmVyIFRl
eHQgcmVxdWlyZW1lbnQgb2Ygc2VjdGlvbiAzIGlzIGFwcGxpY2FibGUgdG8gdGhlc2UKLWNvcGll
cyBvZiB0aGUgRG9jdW1lbnQsIHRoZW4gaWYgdGhlIERvY3VtZW50IGlzIGxlc3MgdGhhbiBvbmUg
aGFsZiBvZgotdGhlIGVudGlyZSBhZ2dyZWdhdGUsIHRoZSBEb2N1bWVudCdzIENvdmVyIFRleHRz
IG1heSBiZSBwbGFjZWQgb24KLWNvdmVycyB0aGF0IGJyYWNrZXQgdGhlIERvY3VtZW50IHdpdGhp
biB0aGUgYWdncmVnYXRlLCBvciB0aGUKLWVsZWN0cm9uaWMgZXF1aXZhbGVudCBvZiBjb3ZlcnMg
aWYgdGhlIERvY3VtZW50IGlzIGluIGVsZWN0cm9uaWMgZm9ybS4KLU90aGVyd2lzZSB0aGV5IG11
c3QgYXBwZWFyIG9uIHByaW50ZWQgY292ZXJzIHRoYXQgYnJhY2tldCB0aGUgd2hvbGUKLWFnZ3Jl
Z2F0ZS4KLQotCi1cYmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiA4LiBUUkFOU0xBVElPTn0KLVxh
ZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0aW9ufXs4LiBUUkFOU0xBVElPTn0KLVxlbmR7Y2VudGVy
fQotCi0KLVRyYW5zbGF0aW9uIGlzIGNvbnNpZGVyZWQgYSBraW5kIG9mIG1vZGlmaWNhdGlvbiwg
c28geW91IG1heQotZGlzdHJpYnV0ZSB0cmFuc2xhdGlvbnMgb2YgdGhlIERvY3VtZW50IHVuZGVy
IHRoZSB0ZXJtcyBvZiBzZWN0aW9uIDQuCi1SZXBsYWNpbmcgSW52YXJpYW50IFNlY3Rpb25zIHdp
dGggdHJhbnNsYXRpb25zIHJlcXVpcmVzIHNwZWNpYWwKLXBlcm1pc3Npb24gZnJvbSB0aGVpciBj
b3B5cmlnaHQgaG9sZGVycywgYnV0IHlvdSBtYXkgaW5jbHVkZQotdHJhbnNsYXRpb25zIG9mIHNv
bWUgb3IgYWxsIEludmFyaWFudCBTZWN0aW9ucyBpbiBhZGRpdGlvbiB0byB0aGUKLW9yaWdpbmFs
IHZlcnNpb25zIG9mIHRoZXNlIEludmFyaWFudCBTZWN0aW9ucy4gIFlvdSBtYXkgaW5jbHVkZSBh
Ci10cmFuc2xhdGlvbiBvZiB0aGlzIExpY2Vuc2UsIGFuZCBhbGwgdGhlIGxpY2Vuc2Ugbm90aWNl
cyBpbiB0aGUKLURvY3VtZW50LCBhbmQgYW55IFdhcnJhbnR5IERpc2NsYWltZXJzLCBwcm92aWRl
ZCB0aGF0IHlvdSBhbHNvIGluY2x1ZGUKLXRoZSBvcmlnaW5hbCBFbmdsaXNoIHZlcnNpb24gb2Yg
dGhpcyBMaWNlbnNlIGFuZCB0aGUgb3JpZ2luYWwgdmVyc2lvbnMKLW9mIHRob3NlIG5vdGljZXMg
YW5kIGRpc2NsYWltZXJzLiAgSW4gY2FzZSBvZiBhIGRpc2FncmVlbWVudCBiZXR3ZWVuCi10aGUg
dHJhbnNsYXRpb24gYW5kIHRoZSBvcmlnaW5hbCB2ZXJzaW9uIG9mIHRoaXMgTGljZW5zZSBvciBh
IG5vdGljZQotb3IgZGlzY2xhaW1lciwgdGhlIG9yaWdpbmFsIHZlcnNpb24gd2lsbCBwcmV2YWls
LgotCi1JZiBhIHNlY3Rpb24gaW4gdGhlIERvY3VtZW50IGlzIEVudGl0bGVkICJBY2tub3dsZWRn
ZW1lbnRzIiwKLSJEZWRpY2F0aW9ucyIsIG9yICJIaXN0b3J5IiwgdGhlIHJlcXVpcmVtZW50IChz
ZWN0aW9uIDQpIHRvIFByZXNlcnZlCi1pdHMgVGl0bGUgKHNlY3Rpb24gMSkgd2lsbCB0eXBpY2Fs
bHkgcmVxdWlyZSBjaGFuZ2luZyB0aGUgYWN0dWFsCi10aXRsZS4KLQotCi1cYmVnaW57Y2VudGVy
fQote1xMYXJnZVxiZiA5LiBURVJNSU5BVElPTn0KLVxhZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0
aW9ufXs5LiBURVJNSU5BVElPTn0KLVxlbmR7Y2VudGVyfQotCi0KLVlvdSBtYXkgbm90IGNvcHks
IG1vZGlmeSwgc3VibGljZW5zZSwgb3IgZGlzdHJpYnV0ZSB0aGUgRG9jdW1lbnQgZXhjZXB0Ci1h
cyBleHByZXNzbHkgcHJvdmlkZWQgZm9yIHVuZGVyIHRoaXMgTGljZW5zZS4gIEFueSBvdGhlciBh
dHRlbXB0IHRvCi1jb3B5LCBtb2RpZnksIHN1YmxpY2Vuc2Ugb3IgZGlzdHJpYnV0ZSB0aGUgRG9j
dW1lbnQgaXMgdm9pZCwgYW5kIHdpbGwKLWF1dG9tYXRpY2FsbHkgdGVybWluYXRlIHlvdXIgcmln
aHRzIHVuZGVyIHRoaXMgTGljZW5zZS4gIEhvd2V2ZXIsCi1wYXJ0aWVzIHdobyBoYXZlIHJlY2Vp
dmVkIGNvcGllcywgb3IgcmlnaHRzLCBmcm9tIHlvdSB1bmRlciB0aGlzCi1MaWNlbnNlIHdpbGwg
bm90IGhhdmUgdGhlaXIgbGljZW5zZXMgdGVybWluYXRlZCBzbyBsb25nIGFzIHN1Y2gKLXBhcnRp
ZXMgcmVtYWluIGluIGZ1bGwgY29tcGxpYW5jZS4KLQotCi1cYmVnaW57Y2VudGVyfQote1xMYXJn
ZVxiZiAxMC4gRlVUVVJFIFJFVklTSU9OUyBPRiBUSElTIExJQ0VOU0V9Ci1cYWRkY29udGVudHNs
aW5le3RvY317c2VjdGlvbn17MTAuIEZVVFVSRSBSRVZJU0lPTlMgT0YgVEhJUyBMSUNFTlNFfQot
XGVuZHtjZW50ZXJ9Ci0KLQotVGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiBtYXkgcHVibGlz
aCBuZXcsIHJldmlzZWQgdmVyc2lvbnMKLW9mIHRoZSBHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExp
Y2Vuc2UgZnJvbSB0aW1lIHRvIHRpbWUuICBTdWNoIG5ldwotdmVyc2lvbnMgd2lsbCBiZSBzaW1p
bGFyIGluIHNwaXJpdCB0byB0aGUgcHJlc2VudCB2ZXJzaW9uLCBidXQgbWF5Ci1kaWZmZXIgaW4g
ZGV0YWlsIHRvIGFkZHJlc3MgbmV3IHByb2JsZW1zIG9yIGNvbmNlcm5zLiAgU2VlCi1odHRwOi8v
d3d3LmdudS5vcmcvY29weWxlZnQvLgotCi1FYWNoIHZlcnNpb24gb2YgdGhlIExpY2Vuc2UgaXMg
Z2l2ZW4gYSBkaXN0aW5ndWlzaGluZyB2ZXJzaW9uIG51bWJlci4KLUlmIHRoZSBEb2N1bWVudCBz
cGVjaWZpZXMgdGhhdCBhIHBhcnRpY3VsYXIgbnVtYmVyZWQgdmVyc2lvbiBvZiB0aGlzCi1MaWNl
bnNlICJvciBhbnkgbGF0ZXIgdmVyc2lvbiIgYXBwbGllcyB0byBpdCwgeW91IGhhdmUgdGhlIG9w
dGlvbiBvZgotZm9sbG93aW5nIHRoZSB0ZXJtcyBhbmQgY29uZGl0aW9ucyBlaXRoZXIgb2YgdGhh
dCBzcGVjaWZpZWQgdmVyc2lvbiBvcgotb2YgYW55IGxhdGVyIHZlcnNpb24gdGhhdCBoYXMgYmVl
biBwdWJsaXNoZWQgKG5vdCBhcyBhIGRyYWZ0KSBieSB0aGUKLUZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbi4gIElmIHRoZSBEb2N1bWVudCBkb2VzIG5vdCBzcGVjaWZ5IGEgdmVyc2lvbgotbnVtYmVy
IG9mIHRoaXMgTGljZW5zZSwgeW91IG1heSBjaG9vc2UgYW55IHZlcnNpb24gZXZlciBwdWJsaXNo
ZWQgKG5vdAotYXMgYSBkcmFmdCkgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4KLQot
Ci1cYmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiBBRERFTkRVTTogSG93IHRvIHVzZSB0aGlzIExp
Y2Vuc2UgZm9yIHlvdXIgZG9jdW1lbnRzfQotXGFkZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rpb259
e0FEREVORFVNOiBIb3cgdG8gdXNlIHRoaXMgTGljZW5zZSBmb3IgeW91ciBkb2N1bWVudHN9Ci1c
ZW5ke2NlbnRlcn0KLQotVG8gdXNlIHRoaXMgTGljZW5zZSBpbiBhIGRvY3VtZW50IHlvdSBoYXZl
IHdyaXR0ZW4sIGluY2x1ZGUgYSBjb3B5IG9mCi10aGUgTGljZW5zZSBpbiB0aGUgZG9jdW1lbnQg
YW5kIHB1dCB0aGUgZm9sbG93aW5nIGNvcHlyaWdodCBhbmQKLWxpY2Vuc2Ugbm90aWNlcyBqdXN0
IGFmdGVyIHRoZSB0aXRsZSBwYWdlOgotCi1cYmlnc2tpcAotXGJlZ2lue3F1b3RlfQotICAgIENv
cHlyaWdodCBcY29weXJpZ2h0ICBZRUFSICBZT1VSIE5BTUUuCi0gICAgUGVybWlzc2lvbiBpcyBn
cmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50Ci0g
ICAgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2Us
IFZlcnNpb24gMS4yCi0gICAgb3IgYW55IGxhdGVyIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRoZSBG
cmVlIFNvZnR3YXJlIEZvdW5kYXRpb247Ci0gICAgd2l0aCBubyBJbnZhcmlhbnQgU2VjdGlvbnMs
IG5vIEZyb250LUNvdmVyIFRleHRzLCBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4KLSAgICBBIGNv
cHkgb2YgdGhlIGxpY2Vuc2UgaXMgaW5jbHVkZWQgaW4gdGhlIHNlY3Rpb24gZW50aXRsZWQgIkdO
VQotICAgIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlIi4KLVxlbmR7cXVvdGV9Ci1cYmlnc2tp
cAotICAgIAotSWYgeW91IGhhdmUgSW52YXJpYW50IFNlY3Rpb25zLCBGcm9udC1Db3ZlciBUZXh0
cyBhbmQgQmFjay1Db3ZlciBUZXh0cywKLXJlcGxhY2UgdGhlICJ3aXRoLi4uVGV4dHMuIiBsaW5l
IHdpdGggdGhpczoKLQotXGJpZ3NraXAKLVxiZWdpbntxdW90ZX0KLSAgICB3aXRoIHRoZSBJbnZh
cmlhbnQgU2VjdGlvbnMgYmVpbmcgTElTVCBUSEVJUiBUSVRMRVMsIHdpdGggdGhlCi0gICAgRnJv
bnQtQ292ZXIgVGV4dHMgYmVpbmcgTElTVCwgYW5kIHdpdGggdGhlIEJhY2stQ292ZXIgVGV4dHMg
YmVpbmcgTElTVC4KLVxlbmR7cXVvdGV9Ci1cYmlnc2tpcAotICAgIAotSWYgeW91IGhhdmUgSW52
YXJpYW50IFNlY3Rpb25zIHdpdGhvdXQgQ292ZXIgVGV4dHMsIG9yIHNvbWUgb3RoZXIKLWNvbWJp
bmF0aW9uIG9mIHRoZSB0aHJlZSwgbWVyZ2UgdGhvc2UgdHdvIGFsdGVybmF0aXZlcyB0byBzdWl0
IHRoZQotc2l0dWF0aW9uLgotCi1JZiB5b3VyIGRvY3VtZW50IGNvbnRhaW5zIG5vbnRyaXZpYWwg
ZXhhbXBsZXMgb2YgcHJvZ3JhbSBjb2RlLCB3ZQotcmVjb21tZW5kIHJlbGVhc2luZyB0aGVzZSBl
eGFtcGxlcyBpbiBwYXJhbGxlbCB1bmRlciB5b3VyIGNob2ljZSBvZgotZnJlZSBzb2Z0d2FyZSBs
aWNlbnNlLCBzdWNoIGFzIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSwKLXRvIHBlcm1p
dCB0aGVpciB1c2UgaW4gZnJlZSBzb2Z0d2FyZS4KZGlmZiAtLWdpdCBhL2RvY3MveGVuLWFwaS9w
cmVzZW50YXRpb24udGV4IGIvZG9jcy94ZW4tYXBpL3ByZXNlbnRhdGlvbi50ZXgKZGVsZXRlZCBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDE3ZmUzYzUuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBp
L3ByZXNlbnRhdGlvbi50ZXgKKysrIC9kZXYvbnVsbApAQCAtMSwxNDYgKzAsMCBAQAotJQotJSBD
b3B5cmlnaHQgKGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSUKLSUgUGVybWlzc2lvbiBp
cyBncmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50
IHVuZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNl
LCBWZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZy
ZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2VjdGlvbnMsIG5v
IEZyb250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRo
ZQotJSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0lICJHTlUg
RnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0lCi0lIEF1
dGhvcnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBKb24gSGFycm9w
LgotJQotCi1UaGUgQVBJIGlzIHByZXNlbnRlZCBoZXJlIGFzIGEgc2V0IG9mIFJlbW90ZSBQcm9j
ZWR1cmUgQ2FsbHMsIHdpdGggYSB3aXJlCi1mb3JtYXQgYmFzZWQgdXBvbiBYTUwtUlBDLiBObyBz
cGVjaWZpYyBsYW5ndWFnZSBiaW5kaW5ncyBhcmUgcHJlc2NyaWJlZCwKLWFsdGhvdWdoIGV4YW1w
bGVzIHdpbGwgYmUgZ2l2ZW4gaW4gdGhlIHB5dGhvbiBwcm9ncmFtbWluZyBsYW5ndWFnZS4KLSAK
LUFsdGhvdWdoIHdlIGFkb3B0IHNvbWUgdGVybWlub2xvZ3kgZnJvbSBvYmplY3Qtb3JpZW50ZWQg
cHJvZ3JhbW1pbmcsIAotZnV0dXJlIGNsaWVudCBsYW5ndWFnZSBiaW5kaW5ncyBtYXkgb3IgbWF5
IG5vdCBiZSBvYmplY3Qgb3JpZW50ZWQuCi1UaGUgQVBJIHJlZmVyZW5jZSB1c2VzIHRoZSB0ZXJt
aW5vbG9neSB7XGVtIGNsYXNzZXNcL30gYW5kIHtcZW0gb2JqZWN0c1wvfS4KLUZvciBvdXIgcHVy
cG9zZXMgYSB7XGVtIGNsYXNzXC99IGlzIHNpbXBseSBhIGhpZXJhcmNoaWNhbCBuYW1lc3BhY2U7
Ci1hbiB7XGVtIG9iamVjdFwvfSBpcyBhbiBpbnN0YW5jZSBvZiBhIGNsYXNzIHdpdGggaXRzIGZp
ZWxkcyBzZXQgdG8KLXNwZWNpZmljIHZhbHVlcy4gT2JqZWN0cyBhcmUgcGVyc2lzdGVudCBhbmQg
ZXhpc3Qgb24gdGhlIHNlcnZlci1zaWRlLgotQ2xpZW50cyBtYXkgb2J0YWluIG9wYXF1ZSByZWZl
cmVuY2VzIHRvIHRoZXNlIHNlcnZlci1zaWRlIG9iamVjdHMgYW5kIHRoZW4KLWFjY2VzcyB0aGVp
ciBmaWVsZHMgdmlhIGdldC9zZXQgUlBDcy4KLQotJUluIGVhY2ggY2xhc3MgdGhlcmUgaXMgYSAk
XG1hdGhpdHt1aWR9JCBmaWVsZCB0aGF0IGFzc2lnbnMgYW4gaW5kZW50aWZpZXIKLSV0byBlYWNo
IG9iamVjdC4gVGhpcyAkXG1hdGhpdHt1aWR9JCBzZXJ2ZXMgYXMgYW4gb2JqZWN0IHJlZmVyZW5j
ZQotJW9uIGJvdGggY2xpZW50LSBhbmQgc2VydmVyLXNpZGUsIGFuZCBpcyBvZnRlbiBpbmNsdWRl
ZCBhcyBhbiBhcmd1bWVudCBpbgotJVJQQyBtZXNzYWdlcy4KLQotRm9yIGVhY2ggY2xhc3Mgd2Ug
c3BlY2lmeSBhIGxpc3Qgb2YKLWZpZWxkcyBhbG9uZyB3aXRoIHRoZWlyIHtcZW0gdHlwZXNcL30g
YW5kIHtcZW0gcXVhbGlmaWVyc1wvfS4gIEEKLXF1YWxpZmllciBpcyBvbmUgb2Y6Ci1cYmVnaW57
aXRlbWl6ZX0KLSAgXGl0ZW0gJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQ6IHRoZSBmaWVsZCBp
cyBSZWFkCi1Pbmx5LiBGdXJ0aGVybW9yZSwgaXRzIHZhbHVlIGlzIGF1dG9tYXRpY2FsbHkgY29t
cHV0ZWQgYXQgcnVudGltZS4KLUZvciBleGFtcGxlOiBjdXJyZW50IENQVSBsb2FkIGFuZCBkaXNr
IElPIHRocm91Z2hwdXQuCi0gIFxpdGVtICRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kOiB0aGUg
ZmllbGQgbXVzdCBiZSBtYW51YWxseSBzZXQKLXdoZW4gYSBuZXcgb2JqZWN0IGlzIGNyZWF0ZWQs
IGJ1dCBpcyB0aGVuIFJlYWQgT25seSBmb3IKLXRoZSBkdXJhdGlvbiBvZiB0aGUgb2JqZWN0J3Mg
bGlmZS4KLUZvciBleGFtcGxlLCB0aGUgbWF4aW11bSBtZW1vcnkgYWRkcmVzc2FibGUgYnkgYSBn
dWVzdCBpcyBzZXQgCi1iZWZvcmUgdGhlIGd1ZXN0IGJvb3RzLgotICBcaXRlbSAkXG1hdGhpdHtS
V30kOiB0aGUgZmllbGQgaXMgUmVhZC9Xcml0ZS4gRm9yIGV4YW1wbGUsIHRoZSBuYW1lCi1vZiBh
IFZNLgotXGVuZHtpdGVtaXplfQotCi1BIGZ1bGwgbGlzdCBvZiB0eXBlcyBpcyBnaXZlbiBpbiBD
aGFwdGVyflxyZWZ7YXBpLXJlZmVyZW5jZX0uIEhvd2V2ZXIsCi10aGVyZSBhcmUgdGhyZWUgdHlw
ZXMgdGhhdCByZXF1aXJlIGV4cGxpY2l0IG1lbnRpb246Ci1cYmVnaW57aXRlbWl6ZX0KLSAgXGl0
ZW0gJHR+XG1hdGhpdHtSZWZ9JDogc2lnbmlmaWVzIGEgcmVmZXJlbmNlIHRvIGFuIG9iamVjdAot
b2YgdHlwZSAkdCQuCi0gIFxpdGVtICR0flxtYXRoaXR7U2V0fSQ6IHNpZ25pZmllcyBhIHNldCBj
b250YWluaW5nCi12YWx1ZXMgb2YgdHlwZSAkdCQuCi0gIFxpdGVtICQodF8xLCB0XzIpflxtYXRo
aXR7TWFwfSQ6IHNpZ25pZmllcyBhIG1hcHBpbmcgZnJvbSB2YWx1ZXMgb2YKLXR5cGUgJHRfMSQg
dG8gdmFsdWVzIG9mIHR5cGUgJHRfMiQuCi1cZW5ke2l0ZW1pemV9Ci0KLU5vdGUgdGhhdCB0aGVy
ZSBhcmUgYSBudW1iZXIgb2YgY2FzZXMgd2hlcmUge1xlbSBSZWZ9cyBhcmUge1xlbSBkb3VibHkK
LWxpbmtlZFwvfS0tLWUuZy5cIGEgVk0gaGFzIGEgZmllbGQgY2FsbGVkIHtcdHQgVklGc30gb2Yg
dHlwZQotJChcbWF0aGl0e1ZJRn1+XG1hdGhpdHtSZWZ9KX5cbWF0aGl0e1NldH0kOyB0aGlzIGZp
ZWxkIGxpc3RzCi10aGUgbmV0d29yayBpbnRlcmZhY2VzIGF0dGFjaGVkIHRvIGEgcGFydGljdWxh
ciBWTS4gU2ltaWxhcmx5LCB0aGUgVklGCi1jbGFzcyBoYXMgYSBmaWVsZCBjYWxsZWQge1x0dCBW
TX0gb2YgdHlwZSAkKFxtYXRoaXR7Vk19fntcbWF0aGl0Ci1SZWZ9KSQgd2hpY2ggcmVmZXJlbmNl
cyB0aGUgVk0gdG8gd2hpY2ggdGhlIGludGVyZmFjZSBpcyBjb25uZWN0ZWQuCi1UaGVzZSB0d28g
ZmllbGRzIGFyZSB7XGVtIGJvdW5kIHRvZ2V0aGVyXC99LCBpbiB0aGUgc2Vuc2UgdGhhdAotY3Jl
YXRpbmcgYSBuZXcgVklGIGNhdXNlcyB0aGUge1x0dCBWSUZzfSBmaWVsZCBvZiB0aGUgY29ycmVz
cG9uZGluZwotVk0gb2JqZWN0IHRvIGJlIHVwZGF0ZWQgYXV0b21hdGljYWxseS4KLQotVGhlIEFQ
SSByZWZlcmVuY2UgZXhwbGljaXRseSBsaXN0cyB0aGUgZmllbGRzIHRoYXQgYXJlCi1ib3VuZCB0
b2dldGhlciBpbiB0aGlzIHdheS4gSXQgYWxzbyBjb250YWlucyBhIGRpYWdyYW0gdGhhdCBzaG93
cwotcmVsYXRpb25zaGlwcyBiZXR3ZWVuIGNsYXNzZXMuIEluIHRoaXMgZGlhZ3JhbSBhbiBlZGdl
IHNpZ25pZmllcyB0aGUKLWV4aXN0ZW5jZSBvZiBhIHBhaXIgb2YgZmllbGRzIHRoYXQgYXJlIGJv
dW5kIHRvZ2V0aGVyLCB1c2luZyBzdGFuZGFyZAotY3Jvd3MtZm9vdCBub3RhdGlvbiB0byBzaWdu
aWZ5IHRoZSB0eXBlIG9mIHJlbGF0aW9uc2hpcCAoZS5nLlwKLW9uZS1tYW55LCBtYW55LW1hbnkp
LgotCi1cc2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBmaWVsZHN9Ci0KLUVhY2ggZmllbGQs
IHtcdHQgZn0sIGhhcyBhbiBSUEMgYWNjZXNzb3IgYXNzb2NpYXRlZCB3aXRoIGl0Ci10aGF0IHJl
dHVybnMge1x0dCBmfSdzIHZhbHVlOgotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBgYHtcdHQgZ2V0
XF9mKFJlZiB4KX0nJzogdGFrZXMgYQote1x0dCBSZWZ9IHRoYXQgcmVmZXJzIHRvIGFuIG9iamVj
dCBhbmQgcmV0dXJucyB0aGUgdmFsdWUgb2Yge1x0dCBmfS4KLVxlbmR7aXRlbWl6ZX0KLQotRWFj
aCBmaWVsZCwge1x0dCBmfSwgd2l0aCBhdHRyaWJ1dGUKLXtcZW0gUld9IGFuZCB3aG9zZSBvdXRl
cm1vc3QgdHlwZSBpcyB7XGVtIFNldFwvfSBoYXMgdGhlIGZvbGxvd2luZwotYWRkaXRpb25hbCBS
UENzIGFzc29jaWF0ZWQgd2l0aCBpdDoKLVxiZWdpbntpdGVtaXplfQotXGl0ZW0gYW4gYGB7XHR0
IGFkZFxfdG9cX2YoUmVmIHgsIHYpfScnIFJQQyBhZGRzIGEgbmV3IGVsZW1lbnQgdiB0byB0aGUg
c2V0XGZvb3Rub3RlewotJQotU2luY2Ugc2V0cyBjYW5ub3QgY29udGFpbiBkdXBsaWNhdGUgdmFs
dWVzIHRoaXMgb3BlcmF0aW9uIGhhcyBubyBhY3Rpb24gaW4gdGhlIGNhc2UKLXRoYXQge1x0dCB2
fSB3YXMgYWxyZWFkeSBpbiB0aGUgc2V0LgotJQotfTsKLVxpdGVtIGEgYGB7XHR0IHJlbW92ZVxf
ZnJvbVxfZihSZWYgeCwgdil9JycgUlBDIHJlbW92ZXMgZWxlbWVudCB7XHR0IHZ9IGZyb20gdGhl
IHNldDsKLVxlbmR7aXRlbWl6ZX0KLQotRWFjaCBmaWVsZCwge1x0dCBmfSwgd2l0aCBhdHRyaWJ1
dGUKLXtcZW0gUld9IGFuZCB3aG9zZSBvdXRlcm1vc3QgdHlwZSBpcyB7XGVtIE1hcFwvfSBoYXMg
dGhlIGZvbGxvd2luZwotYWRkaXRpb25hbCBSUENzIGFzc29jaWF0ZWQgd2l0aCBpdDoKLVxiZWdp
bntpdGVtaXplfQotXGl0ZW0gYW4gYGB7XHR0IGFkZFxfdG9cX2YoUmVmIHgsIGssIHYpfScnIFJQ
QyBhZGRzIG5ldyBwYWlyIHtcdHQgKGssIHYpfQotdG8gdGhlIG1hcHBpbmcgc3RvcmVkIGluIHtc
dHQgZn0gaW4gb2JqZWN0IHtcdHQgeH0uIEFkZGluZyBhIG5ldyBwYWlyIGZvciBkdXBsaWNhdGUK
LWtleSwge1x0dCBrfSwgb3ZlcndyaXRlcyBhbnkgcHJldmlvdXMgbWFwcGluZyBmb3Ige1x0dCBr
fS4KLVxpdGVtIGEgYGB7XHR0IHJlbW92ZVxfZnJvbVxfZihSZWYgeCwgayl9JycgUlBDIHJlbW92
ZXMgdGhlIHBhaXIgd2l0aCBrZXkge1x0dCBrfQotZnJvbSB0aGUgbWFwcGluZyBzdG9yZWQgaW4g
e1x0dCBmfSBpbiBvYmplY3Qge1x0dCB4fS4KLVxlbmR7aXRlbWl6ZX0KLQotRWFjaCBmaWVsZCB3
aG9zZSBvdXRlcm1vc3QgdHlwZSBpcyBuZWl0aGVyIHtcZW0gU2V0XC99IG5vciB7XGVtIE1hcFwv
fSwgCi1idXQgd2hvc2UgYXR0cmlidXRlIGlzIHtcZW0gUld9IGhhcyBhbiBSUEMgYWNlc3NvciBh
c3NvY2lhdGVkIHdpdGggaXQKLXRoYXQgc2V0cyBpdHMgdmFsdWU6Ci1cYmVnaW57aXRlbWl6ZX0K
LVxpdGVtIEZvciB7XGVtIFJXXC99ICh7XGVtIFJcL31lYWQve1xlbQotV1wvfXJpdGUpLCBhIGBg
e1x0dCBzZXRcX2YoUmVmIHgsIHYpfScnIFJQQyBmdW5jdGlvbiBpcyBhbHNvIHByb3ZpZGVkLgot
VGhpcyBzZXRzIGZpZWxkIHtcdHQgZn0gb24gb2JqZWN0IHtcdHQgeH0gdG8gdmFsdWUge1x0dCB2
fS4KLVxlbmR7aXRlbWl6ZX0KLQotXHNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3Nl
c30KLQotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBFYWNoIGNsYXNzIGhhcyBhIHtcZW0gY29uc3Ry
dWN0b3JcL30gUlBDIG5hbWVkIGBge1x0dCBjcmVhdGV9JycgdGhhdAotdGFrZXMgYXMgcGFyYW1l
dGVycyBhbGwgZmllbGRzIG1hcmtlZCB7XGVtIFJXXC99IGFuZAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7aW5zfSQuIFRoZSByZXN1bHQgb2YgdGhpcyBSUEMgaXMgdGhhdCBhIG5ldyB7XGVtCi1wZXJz
aXN0ZW50XC99IG9iamVjdCBpcyBjcmVhdGVkIG9uIHRoZSBzZXJ2ZXItc2lkZSB3aXRoIHRoZSBz
cGVjaWZpZWQgZmllbGQKLXZhbHVlcy4KLQotXGl0ZW0gRWFjaCBjbGFzcyBoYXMgYSB7XHR0IGdl
dFxfYnlcX3V1aWQodXVpZCl9IFJQQyB0aGF0IHJldHVybnMgdGhlIG9iamVjdAotb2YgdGhhdCBj
bGFzcyB0aGF0IGhhcyB0aGUgc3BlY2lmaWVkIHtcdHQgdXVpZH0uCi0KLVxpdGVtIEVhY2ggY2xh
c3MgdGhhdCBoYXMgYSB7XHR0IG5hbWVcX2xhYmVsfSBmaWVsZCBoYXMgYQotYGB7XHR0IGdldFxf
YnlcX25hbWVcX2xhYmVsKG5hbWUpfScnIFJQQyB0aGF0IHJldHVybnMgYSBzZXQgb2Ygb2JqZWN0
cyBvZiB0aGF0Ci1jbGFzcyB0aGF0IGhhdmUgdGhlIHNwZWNpZmllZCB7XHR0IGxhYmVsfS4KLQot
XGl0ZW0gRWFjaCBjbGFzcyBoYXMgYSBgYHtcdHQgZGVzdHJveShSZWYgeCl9JycgUlBDIHRoYXQg
ZXhwbGljaXRseSBkZWxldGVzCi10aGUgcGVyc2lzdGVudCBvYmplY3Qgc3BlY2lmaWVkIGJ5IHtc
dHQgeH0gZnJvbSB0aGUgc3lzdGVtLiAgVGhpcyBpcyBhCi1ub24tY2FzY2FkaW5nIGRlbGV0ZSAt
LSBpZiB0aGUgb2JqZWN0IGJlaW5nIHJlbW92ZWQgaXMgcmVmZXJlbmNlZCBieSBhbm90aGVyCi1v
YmplY3QgdGhlbiB0aGUge1x0dCBkZXN0cm95fSBjYWxsIHdpbGwgZmFpbC4KLQotXGVuZHtpdGVt
aXplfQotCi1cc3Vic2VjdGlvbntBZGRpdGlvbmFsIFJQQ3N9Ci0KLUFzIHdlbGwgYXMgdGhlIFJQ
Q3MgZW51bWVyYXRlZCBhYm92ZSwgc29tZSBjbGFzc2VzIGhhdmUgYWRkaXRpb25hbCBSUENzCi1h
c3NvY2lhdGVkIHdpdGggdGhlbS4gRm9yIGV4YW1wbGUsIHRoZSB7XHR0IFZNfSBjbGFzcyBoYXMg
UlBDcyBmb3IgY2xvbmluZywKLXN1c3BlbmRpbmcsIHN0YXJ0aW5nIGV0Yy4gU3VjaCBhZGRpdGlv
bmFsIFJQQ3MgYXJlIGRlc2NyaWJlZCBleHBsaWNpdGx5Ci1pbiB0aGUgQVBJIHJlZmVyZW5jZS4K
ZGlmZiAtLWdpdCBhL2RvY3MveGVuLWFwaS9yZXZpc2lvbi1oaXN0b3J5LnRleCBiL2RvY3MveGVu
LWFwaS9yZXZpc2lvbi1oaXN0b3J5LnRleApkZWxldGVkIGZpbGUgbW9kZSAxMDA2NDQKaW5kZXgg
M2ZhODMyOS4uMDAwMDAwMAotLS0gYS9kb2NzL3hlbi1hcGkvcmV2aXNpb24taGlzdG9yeS50ZXgK
KysrIC9kZXYvbnVsbApAQCAtMSw2MSArMCwwIEBACi17IFxiZiBSZXZpc2lvbiBIaXN0b3J5fQot
Ci0lIFBsZWFzZSBkbyBub3QgdXNlIG1pbmlwYWdlcyBpbiBhIHRhYnVsYXIgZW52aXJvbm1lbnQ7
IHRoaXMgcmVzdWx0cwotJSBpbiBiYWQgdmVydGljYWwgYWxpZ25tZW50LiAKLQotXGJlZ2lue2Zs
dXNobGVmdH0KLVxiZWdpbntjZW50ZXJ9Ci0gXGJlZ2lue3RhYnVsYXJ9e3xsfGx8bHw+e1xyYWdn
ZWRyaWdodH1wezdjbX18fQotICBcaGxpbmUKLSAgMS4wLjAgJiAyN3RoIEFwcmlsIDA3ICYgWGVu
c291cmNlIGV0IGFsLiAmCi0gICAgIEluaXRpYWwgUmV2aXNpb25cdGFidWxhcm5ld2xpbmUKLSAg
XGhsaW5lCi0gIDEuMC4xICYgMTB0aCBEZWMuIDA3ICYgUy4gQmVyZ2VyICYKLSAgICAgQWRkZWQg
WFNQb2xpY3kucmVzZXRcX3hzcG9saWN5LCBWVFBNLmdldFxfb3RoZXJcX2NvbmZpZywKLSAgICAg
VlRQTS5zZXRcX290aGVyY29uZmlnLiBBQ01Qb2xpY3kuZ2V0XF9lbmZvcmNlZFxfYmluYXJ5IG1l
dGhvZHMuXHRhYnVsYXJuZXdsaW5lCi0gIFxobGluZQotICAxLjAuMiAmIDI1dGggSmFuLiAwOCAm
IEouIEZlaGxpZyAmCi0gICAgIEFkZGVkIENyYXNoZWQgVk0gcG93ZXIgc3RhdGUuXHRhYnVsYXJu
ZXdsaW5lCi0gIFxobGluZQotICAxLjAuMyAmIDExdGggRmViLiAwOCAmIFMuIEJlcmdlciAmCi0g
ICAgIEFkZGVkIHRhYmxlIG9mIGNvbnRlbnRzIGFuZCBoeXBlcmxpbmsgY3Jvc3MgcmVmZXJlbmNl
Llx0YWJ1bGFybmV3bGluZQotICBcaGxpbmUKLSAgMS4wLjQgJiAyM3JkIE1hcmNoIDA4ICYgUy4g
QmVyZ2VyICYKLSAgICAgQWRkZWQgWFNQb2xpY3kuY2FuXF9ydW5cdGFidWxhcm5ld2xpbmUKLSAg
XGhsaW5lCi0gIDEuMC41ICYgMTd0aCBBcHIuIDA4ICYgUy4gQmVyZ2VyICYKLSAgICAgQWRkZWQg
dW5kb2N1bWVudGVkIGZpZWxkcyBhbmQgbWV0aG9kcyBmb3IgZGVmYXVsdFxfbmV0bWFzayBhbmQK
LSAgICAgZGVmYXVsdFxfZ2F0ZXdheSB0byB0aGUgTmV0d29yayBjbGFzcy4gUmVtb3ZlZCBhbiB1
bmltcGxlbWVudGVkCi0gICAgIG1ldGhvZCBmcm9tIHRoZSBYU1BvbGljeSBjbGFzcyBhbmQgcmVt
b3ZlZCB0aGUgJ29wdGlvbmFsJyBmcm9tCi0gICAgICdvbGRsYWJlbCcgcGFyYW1ldGVycy5cdGFi
dWxhcm5ld2xpbmUKLSAgXGhsaW5lCi0gIDEuMC42ICYgMjR0aCBKdWwuIDA4ICYgWS4gSXdhbWF0
c3UgJgotICAgICBBZGRlZCBkZWZpbml0aW9ucyBvZiBuZXcgY2xhc3NlcyBEUENJIGFuZCBQUENJ
LiBVcGRhdGVkIHRoZSB0YWJsZQotICAgICBhbmQgdGhlIGRpYWdyYW0gcmVwcmVzZW50aW5nIHJl
bGF0aW9uc2hpcHMgYmV0d2VlbiBjbGFzc2VzLgotICAgICBBZGRlZCBob3N0LlBQQ0lzIGFuZCBW
TS5EUENJcyBmaWVsZHMuXHRhYnVsYXJuZXdsaW5lCi0gIFxobGluZQotICAxLjAuNyAmIDIwdGgg
T2N0LiAwOCAmIE0uIEthbm5vICYKLSAgICAgQWRkZWQgZGVmaW5pdGlvbnMgb2YgbmV3IGNsYXNz
ZXMgRFNDU0kgYW5kIFBTQ1NJLiBVcGRhdGVkIHRoZSB0YWJsZQotICAgICBhbmQgdGhlIGRpYWdy
YW0gcmVwcmVzZW50aW5nIHJlbGF0aW9uc2hpcHMgYmV0d2VlbiBjbGFzc2VzLgotICAgICBBZGRl
ZCBob3N0LlBTQ1NJcyBhbmQgVk0uRFNDU0lzIGZpZWxkcy5cdGFidWxhcm5ld2xpbmUKLSAgXGhs
aW5lCi0gIDEuMC44ICYgMTd0aCBKdW4uIDA5ICYgQS4gRmxvcmF0aCAmCi0gICAgIFVwZGF0ZWQg
aW50ZXJhY3RpdmUgc2Vzc2lvbiBleGFtcGxlLgotICAgICBBZGRlZCBkZXNjcmlwdGlvbiBmb3Ig
XHRleHR0dHtQVi9rZXJuZWx9IGFuZCBcdGV4dHR0e1BWL3JhbWRpc2t9Ci0gICAgIHBhcmFtZXRl
cnMgdXNpbmcgVVJJcy5cdGFidWxhcm5ld2xpbmUKLSAgXGhsaW5lCi0gIDEuMC45ICYgMjB0aCBO
b3YuIDA5ICYgTS4gS2Fubm8gJgotICAgICBBZGRlZCBkZWZpbml0aW9ucyBvZiBuZXcgY2xhc3Nl
cyBEU0NTSVxfSEJBIGFuZCBQU0NTSVxfSEJBLgotICAgICBVcGRhdGVkIHRoZSB0YWJsZSBhbmQg
dGhlIGRpYWdyYW0gcmVwcmVzZW50aW5nIHJlbGF0aW9uc2hpcHMKLSAgICAgYmV0d2VlbiBjbGFz
c2VzLiBBZGRlZCBob3N0LlBTQ1NJXF9IQkFzIGFuZCBWTS5EU0NTSVxfSEJBcwotICAgICBmaWVs
ZHMuXHRhYnVsYXJuZXdsaW5lCi0gIFxobGluZQotICAxLjAuMTAgJiAxMHRoIEphbi4gMTAgJiBM
LiBEdWJlICYKLSAgICAgQWRkZWQgZGVmaW5pdGlvbnMgb2YgbmV3IGNsYXNzZXMgY3B1XF9wb29s
LiBVcGRhdGVkIHRoZSB0YWJsZQotICAgICBhbmQgdGhlIGRpYWdyYW0gcmVwcmVzZW50aW5nIHJl
bGF0aW9uc2hpcHMgYmV0d2VlbiBjbGFzc2VzLgotICAgICBBZGRlZCBmaWVsZHMgaG9zdC5yZXNp
ZGVudFxfY3B1XF9wb29scywgVk0uY3B1XF9wb29sIGFuZAotICAgICBob3N0XF9jcHUuY3B1XF9w
b29sLlx0YWJ1bGFybmV3bGluZQotICBcaGxpbmUKLSBcZW5ke3RhYnVsYXJ9Ci1cZW5ke2NlbnRl
cn0KLVxlbmR7Zmx1c2hsZWZ0fQpkaWZmIC0tZ2l0IGEvZG9jcy94ZW4tYXBpL3RvZG8udGV4IGIv
ZG9jcy94ZW4tYXBpL3RvZG8udGV4CmRlbGV0ZWQgZmlsZSBtb2RlIDEwMDY0NAppbmRleCA2MTVh
NWU1Li4wMDAwMDAwCi0tLSBhL2RvY3MveGVuLWFwaS90b2RvLnRleAorKysgL2Rldi9udWxsCkBA
IC0xLDEzNSArMCwwIEBACi0lCi0lIENvcHlyaWdodCAoYykgMjAwNiBYZW5Tb3VyY2UsIEluYy4K
LSUKLSUgUGVybWlzc2lvbiBpcyBncmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1v
ZGlmeSB0aGlzIGRvY3VtZW50IHVuZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9j
dW1lbnRhdGlvbiBMaWNlbnNlLCBWZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBw
dWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlh
bnQKLSUgU2VjdGlvbnMsIG5vIEZyb250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRl
eHRzLiAgQSBjb3B5IG9mIHRoZQotJSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9u
IGVudGl0bGVkCi0lICJHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxl
IGZkbC50ZXguCi0lCi0lIEF1dGhvcnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZl
IFNjb3R0LCBKb24gSGFycm9wLgotJQotCi1cc2VjdGlvbntUby1Eb30KLQotTG90cyBhbmQgbG90
cyEgSW5jbHVkaW5nOgotCi1cc3Vic2VjdGlvbntDbGFyaXR5fQotCi1cYmVnaW57aXRlbWl6ZX0K
LQotXGl0ZW0gUm9sbCBjb25zdHJ1Y3RvcnMgYW5kIGdldFxfYnlcX3V1aWQgZXRjIChzZWN0aW9u
IDEuMikgaW50byBzZWN0aW9uIDIgc28KLXRoYXQgaXQgaXMgY2xlYXJlciB0aGF0IGVhY2ggY2xh
c3MgaGFzIHRoZXNlLgotCi1caXRlbSBFbXBoYXNpc2UgdGhhdCBlbnVtcyBhcmUgc3RyaW5ncyBv
biB0aGUgd2lyZSwgYW5kIHNvIGFyZSBub3QgcmVzdHJpY3RlZAotdG8gYSBjZXJ0YWluIG51bWJl
ciBvZiBiaXRzLgotCi1caXRlbSBDbGFyaWZ5IHJldHVybiB2YWx1ZXMsIGluIHBhcnRpY3VsYXIg
dGhhdCB2b2lkIG1lYW5zIHJldHVybiBhIHN0YXR1cwotY29kZSwgcG90ZW50aWFsIGVycm9yIGRl
c2NyaXB0aW9uLCBidXQgb3RoZXJ3aXNlIG5vIHZhbHVlLgotCi1caXRlbSBUYWxrIGFib3V0IFVV
SUQgZ2VuZXJhdGlvbi4KLQotXGl0ZW0gQ2xhcmlmeSBzZXNzaW9uIGJlaGF2aW91ciB3cnQgdGlt
ZW91dHMgYW5kIGRpc2Nvbm5lY3RzLgotCi1caXRlbSBDbGFyaWZ5IGJlaGF2aW91ciBvZiBwcm9n
cmVzcyBmaWVsZCBvbiBhc3luY2hyb25vdXMgcmVxdWVzdCBwb2xsaW5nIHdoZW4KLXRoYXQgcmVx
dWVzdCBmYWlscy4KLQotXGl0ZW0gQ2xhcmlmeSB3aGljaCBjYWxscyBoYXZlIGFzeW5jaHJvbm91
cyBjb3VudGVycGFydHMgYnkgbWFya2luZyB0aGVtIGFzIHN1Y2ggaW4gdGhlIHJlZmVyZW5jZS4g
KEluZGl2aWR1YWwgZ2V0dGVycyBhbmQgc2V0dGVycyBhcmUgdG9vIHNtYWxsIGFuZCBxdWljayB0
byBqdXN0aWZ5IGhhdmluZyBhc3luYyB2ZXJzaW9ucykKLQotXGVuZHtpdGVtaXplfQotCi1cc3Vi
c2VjdGlvbntDb250ZW50fQotCi1cc3Vic3Vic2VjdGlvbntNb2RlbH0KLQotXGJlZ2lue2l0ZW1p
emV9Ci0KLVxpdGVtIEltcHJvdmUgdGhlIHNldCBvZiBhdmFpbGFibGUgcG93ZXJcX3N0YXRlcyBh
bmQgY29ycmVzcG9uZGluZyBsaWZlY3ljbGUKLXNlbWFudGljcy4gIFJlbmFtZSBwb3dlclxfc3Rh
dGUsIG1heWJlLgotCi1caXRlbSBTcGVjaWZ5IHRoZSBDUFUgc2NoZWR1bGVyIGNvbmZpZ3VyYXRp
b24gcHJvcGVybHksIGluYyBDUFUgYWZmaW5pdHksCi13ZWlnaHRzLCBldGMuCi0KLVxpdGVtIEFk
ZCBWbS5hcmNoaXRlY3R1cmUgYW5kIEhvc3QuY29tcGF0aWJsZVxfYXJjaGl0ZWN0dXJlIGZpZWxk
cy4KLQotXGl0ZW0gQWRkIG1pZ3JhdGlvbiBjYWxscywgaW5jbHVkaW5nIHRoZSBhYmlsaXR5IHRv
IHRlc3Qgd2hldGhlciBhIG1pZ3JhdGlvbgotd2lsbCBzdWNjZWVkLCBhbmQgYXV0aGVudGljYXRp
b24gdG9rZW4gZXhjaGFuZ2UuCi0KLVxpdGVtIEltcHJvdmUgYXN5bmNocm9ub3VzIHRhc2sgaGFu
ZGxpbmcsIHdpdGggYSByZWdpc3RyYXRpb24gY2FsbCwgYQotYGBibG9ja2luZyBwb2xsJycgY2Fs
bCwgYW5kIGFuIGV4cGxpY2l0IG5vdGlmaWNhdGlvbiBkZXN0aW5hdGlvbi4gIFJlZ2lzdHJhdGlv
bgotZm9yIGBgcG93ZXJcX3N0YXRlJycgaXMgdXNlZnVsLgotCi1caXRlbSBTcGVjaWZ5IHRoYXQg
c2Vzc2lvbiBrZXlzIG91dGxpdmUgdGhlIEhUVFAgc2Vzc2lvbiwgYW5kIGFkZCBhIHRpbWVvdXQK
LWZvciB0aGVtIChjb25maWd1cmFibGUgaW4gdGhlIHRvb2xzKS4KLQotXGl0ZW0gQWRkIHBsYWNl
cyBmb3IgcGVvcGxlIHRvIHN0b3JlIGV4dHJhIGRhdGEgKGBgb3RoZXJDb25maWcnJyBwZXJoYXBz
KQotCi1caXRlbSBTcGVjaWZ5IGhvdyBoYXJkd2FyZSBVVUlEcyBhcmUgdXNlZCAvIGFjY2Vzc2Vk
LgotCi1caXRlbSBNYXJraW5nIFZESXMgYXMgZXhjbHVzaXZlIC8gc2hhcmVhYmxlIChsb2NraW5n
PykKLQotXGl0ZW0gQ29uc2lkZXIgaG93IHRvIHJlcHJlc2VudCBDRFJPTXMgKGFzIFZESXM/KQot
Ci1caXRlbSBEZWZpbmUgbGlzdHMgb2YgZXhjZXB0aW9ucyB3aGljaCBtYXkgYmUgdGhyb3duIGJ5
IGVhY2ggUlBDLCBpbmNsdWRpbmcKLWVycm9yIGNvZGVzIGFuZCBwYXJhbWV0ZXJzLgotCi1caXRl
bSBIb3N0IGNoYXJhY3RlcmlzdGljczogbWluaW11bSBhbW91bnQgb2YgbWVtb3J5LCBUUE0sIG5l
dHdvcmsgYmFuZHdpZHRoLAotYW1vdW50IG9mIGhvc3QgbWVtb3J5LCBhbW91bnQgY29uc3VtZWQg
YnkgVk1zLCBtYXggYW1vdW50IGF2YWlsYWJsZSBmb3IgbmV3Ci1WTXM/Ci0KLVxpdGVtIENvb2tl
ZCByZXNvdXJjZSBtb25pdG9yaW5nIGludGVyZmFjZS4KLQotXGl0ZW0gTmV0d29yayBuZWVkcyBh
ZGRpdGlvbmFsIGF0dHJpYnV0ZXMgdGhhdCBwcm92aWRlIG1lZGlhIGNoYXJhY3RlcmlzdGljcwot
b2YgdGhlIE5JQzoKLQotXGJlZ2lue2l0ZW1pemV9Ci0KLVxpdGVtIFJPIGJhbmR3aWR0aCBpbnRl
Z2VyIEJhbmR3aWR0aCBpbiBtYnBzCi1caXRlbSBSTyBsYXRlbmN5IGludGVnZXIgdGltZSBpbiBt
cyBmb3IgYW4gaWNtcCByb3VuZHRyaXAgdG8gYSBob3N0IG9uIHRoZQotc2FtZSBzdWJuZXQuCi0K
LVxlbmR7aXRlbWl6ZX0KLQotXGl0ZW0gQUNNCi1cYmVnaW57aXRlbWl6ZX0KLQotXGl0ZW0gQSBY
ZW4gc3lzdGVtIGNhbiBiZSBydW5uaW5nIGFuIGFjY2VzcyBjb250cm9sIHBvbGljeSB3aGVyZSBl
YWNoCi1WTSdzIHJ1bi10aW1lIGFjY2VzcyB0byByZXNvdXJjZXMgaXMgcmVzdHJpY3RlZCBieSB0
aGUgbGFiZWwgaXQgaGFzIGJlZW4gZ2l2ZW4KLWNvbXBhcmVkIHRvIHRob3NlIG9mIHRoZSByZXNv
dXJjZXMuIEN1cnJlbnRseSBhIFZNJ3MgY29uZmlndXJhdGlvbiBmaWxlIG1heQotY29udGFpbiBh
IGxpbmUgbGlrZSBhY2Nlc3NcX2NvbnRyb2xbcG9saWN5PSckPCRuYW1lIG9mIHRoZSBzeXN0ZW0n
cwotcG9saWN5JD4kJyxsYWJlbD0nJDwkbGFiZWwgZ2l2ZW4gdG8gVk0kPiQnXS4gIEkgdGhpbmsg
dGhlIGlkZW50aWZpZXJzICdwb2xpY3knCi1hbmQgJ2xhYmVsJyBzaG91bGQgYWxzbyBiZSBwYXJ0
IG9mIHRoZSBWTSBjbGFzcyBlaXRoZXIgZGlyZWN0bHkgaW4gdGhlIGZvcm0KLSdhY2Nlc3NcX2Nv
bnRyb2wvcG9saWN5JyBvciBpbmRpcmVjdGx5IGluIGFuIGFjY2Vzc1xfY29udHJvbCBjbGFzcy4K
LQotXGVuZHtpdGVtaXplfQotCi1caXRlbSBNaWtlIERheSdzIFZtLnByb2ZpbGUgZmllbGQ/Ci0K
LVxpdGVtIENsb25lIGN1c3RvbWlzYXRpb24/Ci0KLVxpdGVtIE5JQyB0ZWFtaW5nPyAgVGhlIE5J
QyBmaWVsZCBvZiB0aGUgTmV0d29yayBjbGFzcyBzaG91bGQgYmUgYSBsaXN0IChTZXQpCi1zbyB0
aGF0IHdlIGNhbiBzaWduaWZ5IE5JQyB0ZWFtaW5nLiAoQ29tYmluaW5nIHBoeXNpY2FsIE5JQ3Mg
aW4gYSBzaW5nbGUgaG9zdAotaW50ZXJmYWNlIHRvIGFjaGlldmUgZ3JlYXRlciBiYW5kd2lkdGgp
LgotCi1cZW5ke2l0ZW1pemV9Ci0KLVxzdWJzdWJzZWN0aW9ue1RyYW5zcG9ydH0KLQotXGJlZ2lu
e2l0ZW1pemV9Ci0KLVxpdGVtIEFsbG93IG5vbi1IVFRQIHRyYW5zcG9ydHMuICBFeHBsaWNpdGx5
IGFsbG93IHN0ZGlvIHRyYW5zcG9ydCwgZm9yIFNTSC4KLQotXGVuZHtpdGVtaXplfQotCi1cc3Vi
c3Vic2VjdGlvbntBdXRoZW50aWNhdGlvbn0KLQotXGJlZ2lue2l0ZW1pemV9Ci0KLVxpdGVtIERl
bGVnYXRpb24gdG8gdGhlIHRyYW5zcG9ydCBsYXllci4KLQotXGl0ZW0gRXh0ZW5kIFBBTSBleGNo
YW5nZSBhY3Jvc3MgdGhlIHdpcmUuCi0KLVxpdGVtIEZpbmUtZ3JhaW5lZCBhY2Nlc3MgY29udHJv
bC4KLQotXGVuZHtpdGVtaXplfQpkaWZmIC0tZ2l0IGEvZG9jcy94ZW4tYXBpL3ZtLWxpZmVjeWNs
ZS50ZXggYi9kb2NzL3hlbi1hcGkvdm0tbGlmZWN5Y2xlLnRleApkZWxldGVkIGZpbGUgbW9kZSAx
MDA2NDQKaW5kZXggYzU4NGI2Ny4uMDAwMDAwMAotLS0gYS9kb2NzL3hlbi1hcGkvdm0tbGlmZWN5
Y2xlLnRleAorKysgL2Rldi9udWxsCkBAIC0xLDQzICswLDAgQEAKLSUKLSUgQ29weXJpZ2h0IChj
KSAyMDA2LTIwMDcgWGVuU291cmNlLCBJbmMuCi0lCi0lIFBlcm1pc3Npb24gaXMgZ3JhbnRlZCB0
byBjb3B5LCBkaXN0cmlidXRlIGFuZC9vciBtb2RpZnkgdGhpcyBkb2N1bWVudCB1bmRlcgotJSB0
aGUgdGVybXMgb2YgdGhlIEdOVSBGcmVlIERvY3VtZW50YXRpb24gTGljZW5zZSwgVmVyc2lvbiAx
LjIgb3IgYW55IGxhdGVyCi0lIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJl
IEZvdW5kYXRpb247IHdpdGggbm8gSW52YXJpYW50Ci0lIFNlY3Rpb25zLCBubyBGcm9udC1Db3Zl
ciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4gIEEgY29weSBvZiB0aGUKLSUgbGljZW5z
ZSBpcyBpbmNsdWRlZCBpbiB0aGUgc2VjdGlvbiBlbnRpdGxlZAotJSAiR05VIEZyZWUgRG9jdW1l
bnRhdGlvbiBMaWNlbnNlIiBvciB0aGUgZmlsZSBmZGwudGV4LgotJQotJSBBdXRob3JzOiBFd2Fu
IE1lbGxvciwgUmljaGFyZCBTaGFycCwgRGF2ZSBTY290dCwgSm9uIEhhcnJvcC4KLSUKLQotXHNl
Y3Rpb257Vk0gTGlmZWN5Y2xlfQotCi1cYmVnaW57ZmlndXJlfQotXGNlbnRlcmluZwotXHJlc2l6
ZWJveHswLjlcdGV4dHdpZHRofXshfXtcaW5jbHVkZWdyYXBoaWNze3ZtX2xpZmVjeWNsZX19Ci1c
Y2FwdGlvbntWTSBMaWZlY3ljbGV9Ci1cbGFiZWx7ZmlnLXZtLWxpZmVjeWNsZX0KLVxlbmR7Zmln
dXJlfQotCi1GaWd1cmV+XHJlZntmaWctdm0tbGlmZWN5Y2xlfSBzaG93cyB0aGUgc3RhdGVzIHRo
YXQgYSBWTSBjYW4gYmUgaW4KLWFuZCB0aGUgQVBJIGNhbGxzIHRoYXQgY2FuIGJlIHVzZWQgdG8g
bW92ZSB0aGUgVk0gYmV0d2VlbiB0aGVzZSBzdGF0ZXMuICBUaGUgY3Jhc2hlZAotc3RhdGUgaW5k
aWNhdGVzIHRoYXQgdGhlIGd1ZXN0IE9TIHJ1bm5pbmcgd2l0aGluIHRoZSBWTSBoYXMgY3Jhc2hl
ZC4gIFRoZXJlIGlzIG5vCi1BUEkgdG8gZXhwbGljaXRseSBtb3ZlIHRvIHRoZSBjcmFzaGVkIHN0
YXRlLCBob3dldmVyIGEgaGFyZFNodXRkb3duIHdpbGwgbW92ZSB0aGUKLVZNIHRvIHRoZSBwb3dl
cmVkIGRvd24gc3RhdGUuCi0KLVxzZWN0aW9ue1ZNIGJvb3QgcGFyYW1ldGVyc30KLQotVGhlIFZN
IGNsYXNzIGNvbnRhaW5zIGEgbnVtYmVyIG9mIGZpZWxkcyB0aGF0IGNvbnRyb2wgdGhlIHdheSBp
biB3aGljaCB0aGUgVk0gaXMgYm9vdGVkLgotV2l0aCByZWZlcmVuY2UgdG8gdGhlIGZpZWxkcyBk
ZWZpbmVkIGluIHRoZSBWTSBjbGFzcyAoc2VlIGxhdGVyIGluIHRoaXMgZG9jdW1lbnQpLAotdGhp
cyBzZWN0aW9uIG91dGxpbmVzIHRoZSBib290IG9wdGlvbnMgYXZhaWxhYmxlIGFuZCB0aGUgbWVj
aGFuaXNtcyBwcm92aWRlZCBmb3IgY29udHJvbGxpbmcgdGhlbS4KLQotVk0gYm9vdGluZyBpcyBj
b250cm9sbGVkIGJ5IHNldHRpbmcgb25lIG9mIHRoZSB0d28gbXV0dWFsbHkgZXhjbHVzaXZlIGdy
b3VwczogYGBQVicnLCBhbmQgYGBIVk0nJy4gIElmIEhWTS5ib290XF9wb2xpY3kgaXMgdGhlIGVt
cHR5IHN0cmluZywgdGhlbiBwYXJhdmlydHVhbCBkb21haW4gYnVpbGRpbmcgYW5kIGJvb3Rpbmcg
d2lsbCBiZSB1c2VkOyBvdGhlcndpc2UgdGhlIFZNIHdpbGwgYmUgbG9hZGVkIGFzIGFuIEhWTSBk
b21haW4sIGFuZCBib290ZWQgdXNpbmcgYW4gZW11bGF0ZWQgQklPUy4KLQotV2hlbiBwYXJhdmly
dHVhbCBib290aW5nIGlzIGluIHVzZSwgdGhlIFBWL2Jvb3Rsb2FkZXIgZmllbGQgaW5kaWNhdGVz
IHRoZSBib290bG9hZGVyIHRvIHVzZS4gIEl0IG1heSBiZSBgYHB5Z3J1YicnLCBpbiB3aGljaCBj
YXNlIHRoZSBwbGF0Zm9ybSdzIGRlZmF1bHQgaW5zdGFsbGF0aW9uIG9mIHB5Z3J1YiB3aWxsIGJl
IHVzZWQsIG9yIGEgZnVsbCBwYXRoIHdpdGhpbiB0aGUgY29udHJvbCBkb21haW4gdG8gc29tZSBv
dGhlciBib290bG9hZGVyLiAgVGhlIG90aGVyIGZpZWxkcywgUFYva2VybmVsLCBQVi9yYW1kaXNr
LCBQVi9hcmdzIGFuZCBQVi9ib290bG9hZGVyXF9hcmdzIHdpbGwgYmUgcGFzc2VkIHRvIHRoZSBi
b290bG9hZGVyIHVubW9kaWZpZWQsIGFuZCBpbnRlcnByZXRhdGlvbiBvZiB0aG9zZSBmaWVsZHMg
aXMgdGhlbiBzcGVjaWZpYyB0byB0aGUgYm9vdGxvYWRlciBpdHNlbGYsIGluY2x1ZGluZyB0aGUg
cG9zc2liaWxpdHkgdGhhdCB0aGUgYm9vdGxvYWRlciB3aWxsIGlnbm9yZSBzb21lIG9yIGFsbCBv
ZiB0aG9zZSBnaXZlbiB2YWx1ZXMuIEZpbmFsbHkgdGhlIHBhdGhzIG9mIGFsbCBib290YWJsZSBk
aXNrcyBhcmUgYWRkZWQgdG8gdGhlIGJvb3Rsb2FkZXIgY29tbWFuZGxpbmUgKGEgZGlzayBpcyBi
b290YWJsZSBpZiBpdHMgVkJEIGhhcyB0aGUgYm9vdGFibGUgZmxhZyBzZXQpLiBUaGVyZSBtYXkg
YmUgemVybywgb25lIG9yIG1hbnkgYm9vdGFibGUgZGlza3M7IHRoZSBib290bG9hZGVyIGRlY2lk
ZXMgd2hpY2ggZGlzayAoaWYgYW55KSB0byBib290IGZyb20uCi0KLUlmIHRoZSBib290bG9hZGVy
IGlzIHB5Z3J1YiwgdGhlbiB0aGUgbWVudS5sc3QgaXMgcGFyc2VkIGlmIHByZXNlbnQgaW4gdGhl
IGd1ZXN0J3MgZmlsZXN5c3RlbSwgb3RoZXJ3aXNlIHRoZSBzcGVjaWZpZWQga2VybmVsIGFuZCBy
YW1kaXNrIGFyZSB1c2VkLCBvciBhbiBhdXRvZGV0ZWN0ZWQga2VybmVsIGlzIHVzZWQgaWYgbm90
aGluZyBpcyBzcGVjaWZpZWQgYW5kIGF1dG9kZXRlY3Rpb24gaXMgcG9zc2libGUuICBQVi9hcmdz
IGlzIGFwcGVuZGVkIHRvIHRoZSBrZXJuZWwgY29tbWFuZCBsaW5lLCBubyBtYXR0ZXIgd2hpY2gg
bWVjaGFuaXNtIGlzIHVzZWQgZm9yIGZpbmRpbmcgdGhlIGtlcm5lbC4KLQotSWYgUFYvYm9vdGxv
YWRlciBpcyBlbXB0eSBidXQgUFYva2VybmVsIGlzIHNwZWNpZmllZCwgdGhlbiB0aGUga2VybmVs
IGFuZCByYW1kaXNrIHZhbHVlcyB3aWxsIGJlIHRyZWF0ZWQgYXMgcGF0aHMgd2l0aGluIHRoZSBj
b250cm9sIGRvbWFpbi4gIElmIGJvdGggUFYvYm9vdGxvYWRlciBhbmQgUFYva2VybmVsIGFyZSBl
bXB0eSwgdGhlbiB0aGUgYmVoYXZpb3VyIGlzIGFzIGlmIFBWL2Jvb3Rsb2FkZXIgd2FzIHNwZWNp
ZmllZCBhcyBgYHB5Z3J1YicnLgotCi1XaGVuIHVzaW5nIEhWTSBib290aW5nLCBIVk0vYm9vdFxf
cG9saWN5IGFuZCBIVk0vYm9vdFxfcGFyYW1zIHNwZWNpZnkgdGhlIGJvb3QgaGFuZGxpbmcuICBP
bmx5IG9uZSBwb2xpY3kgaXMgY3VycmVudGx5IGRlZmluZWQ6IGBgQklPUyBvcmRlcicnLiAgSW4g
dGhpcyBjYXNlLCBIVk0vYm9vdFxfcGFyYW1zIHNob3VsZCBjb250YWluIG9uZSBrZXktdmFsdWUg
cGFpciBgYG9yZGVyJycgPSBgYE4nJyB3aGVyZSBOIGlzIHRoZSBzdHJpbmcgdGhhdCB3aWxsIGJl
IHBhc3NlZCB0byBRRU1VLgpcIE5vIG5ld2xpbmUgYXQgZW5kIG9mIGZpbGUKZGlmZiAtLWdpdCBh
L2RvY3MveGVuLWFwaS92bV9saWZlY3ljbGUuZG90IGIvZG9jcy94ZW4tYXBpL3ZtX2xpZmVjeWNs
ZS5kb3QKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDJjMDYyZjkuLjAwMDAwMDAKLS0t
IGEvZG9jcy94ZW4tYXBpL3ZtX2xpZmVjeWNsZS5kb3QKKysrIC9kZXYvbnVsbApAQCAtMSwxNyAr
MCwwIEBACi1kaWdyYXBoIGd7Ci0KLW5vZGUgW3NoYXBlPWJveF07ICJwb3dlcmVkIGRvd24iIHBh
dXNlZCBydW5uaW5nIHN1c3BlbmRlZCBjcmFzaGVkOwotCi0icG93ZXJlZCBkb3duIiAtPiBwYXVz
ZWQgW2xhYmVsPSJzdGFydChwYXVzZWQ9dHJ1ZSkiXTsKLSJwb3dlcmVkIGRvd24iIC0+IHJ1bm5p
bmcgW2xhYmVsPSJzdGFydChwYXVzZWQ9ZmFsc2UpIl07Ci1ydW5uaW5nIC0+IHN1c3BlbmRlZCBb
bGFiZWw9InN1c3BlbmQiXTsKLXN1c3BlbmRlZCAtPiBydW5uaW5nIFtsYWJlbD0icmVzdW1lKHBh
dXNlZD1mYWxzZSkiXTsKLXN1c3BlbmRlZCAtPiBwYXVzZWQgW2xhYmVsPSJyZXN1bWUocGF1c2Vk
PXRydWUpIl07Ci1wYXVzZWQgLT4gc3VzcGVuZGVkIFtsYWJlbD0ic3VzcGVuZCJdOwotcGF1c2Vk
IC0+IHJ1bm5pbmcgW2xhYmVsPSJyZXN1bWUiXTsKLXJ1bm5pbmcgLT4gInBvd2VyZWQgZG93biIg
W2xhYmVsPSJjbGVhblNodXRkb3duIC9cbmhhcmRTaHV0ZG93biJdOwotcnVubmluZyAtPiBwYXVz
ZWQgW2xhYmVsPSJwYXVzZSJdOwotcnVubmluZyAtPiBjcmFzaGVkIFtsYWJlbD0iZ3Vlc3QgT1Mg
Y3Jhc2giXQotY3Jhc2hlZCAtPiAicG93ZXJlZCBkb3duIiBbbGFiZWw9ImhhcmRTaHV0ZG93biJd
Ci0KLX0KXCBObyBuZXdsaW5lIGF0IGVuZCBvZiBmaWxlCmRpZmYgLS1naXQgYS9kb2NzL3hlbi1h
cGkvd2lyZS1wcm90b2NvbC50ZXggYi9kb2NzL3hlbi1hcGkvd2lyZS1wcm90b2NvbC50ZXgKZGVs
ZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IGRjYjFhMWMuLjAwMDAwMDAKLS0tIGEvZG9jcy94
ZW4tYXBpL3dpcmUtcHJvdG9jb2wudGV4CisrKyAvZGV2L251bGwKQEAgLTEsMzgzICswLDAgQEAK
LSUKLSUgQ29weXJpZ2h0IChjKSAyMDA2LTIwMDcgWGVuU291cmNlLCBJbmMuCi0lIENvcHlyaWdo
dCAoYykgMjAwOSBmbG9uYXRlbCBHbWJIICYgQ28uIEtHCi0lCi0lIFBlcm1pc3Npb24gaXMgZ3Jh
bnRlZCB0byBjb3B5LCBkaXN0cmlidXRlIGFuZC9vciBtb2RpZnkgdGhpcyBkb2N1bWVudCB1bmRl
cgotJSB0aGUgdGVybXMgb2YgdGhlIEdOVSBGcmVlIERvY3VtZW50YXRpb24gTGljZW5zZSwgVmVy
c2lvbiAxLjIgb3IgYW55IGxhdGVyCi0lIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNv
ZnR3YXJlIEZvdW5kYXRpb247IHdpdGggbm8gSW52YXJpYW50Ci0lIFNlY3Rpb25zLCBubyBGcm9u
dC1Db3ZlciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4gIEEgY29weSBvZiB0aGUKLSUg
bGljZW5zZSBpcyBpbmNsdWRlZCBpbiB0aGUgc2VjdGlvbiBlbnRpdGxlZAotJSAiR05VIEZyZWUg
RG9jdW1lbnRhdGlvbiBMaWNlbnNlIiBvciB0aGUgZmlsZSBmZGwudGV4LgotJQotJSBBdXRob3Jz
OiBFd2FuIE1lbGxvciwgUmljaGFyZCBTaGFycCwgRGF2ZSBTY290dCwgSm9uIEhhcnJvcC4KLSUg
Q29udHJpYnV0b3I6IEFuZHJlYXMgRmxvcmF0aAotJQotCi1cc2VjdGlvbntXaXJlIFByb3RvY29s
IGZvciBSZW1vdGUgQVBJIENhbGxzfQotCi1BUEkgY2FsbHMgYXJlIHNlbnQgb3ZlciBhIG5ldHdv
cmsgdG8gYSBYZW4tZW5hYmxlZCBob3N0IHVzaW5nCi10aGUgWE1MLVJQQyBwcm90b2NvbC4gSW4g
dGhpcyBTZWN0aW9uIHdlIGRlc2NyaWJlIGhvdyB0aGUKLWhpZ2hlci1sZXZlbCB0eXBlcyB1c2Vk
IGluIG91ciBBUEkgUmVmZXJlbmNlIGFyZSBtYXBwZWQgdG8KLXByaW1pdGl2ZSBYTUwtUlBDIHR5
cGVzLgotCi1JbiBvdXIgQVBJIFJlZmVyZW5jZSB3ZSBzcGVjaWZ5IHRoZSBzaWduYXR1cmVzIG9m
IEFQSSBmdW5jdGlvbnMgaW4gdGhlIGZvbGxvd2luZwotc3R5bGU6Ci1cYmVnaW57dmVyYmF0aW19
Ci0gICAgKHJlZl92bSBTZXQpICAgVk0uZ2V0X2FsbCgpCi1cZW5ke3ZlcmJhdGltfQotVGhpcyBz
cGVjaWZpZXMgdGhhdCB0aGUgZnVuY3Rpb24gd2l0aCBuYW1lIHtcdHQgVk0uZ2V0XF9hbGx9IHRh
a2VzCi1ubyBwYXJhbWV0ZXJzIGFuZCByZXR1cm5zIGEgU2V0IG9mIHtcdHQgcmVmXF92bX1zLgot
VGhlc2UgdHlwZXMgYXJlIG1hcHBlZCBvbnRvIFhNTC1SUEMgdHlwZXMgaW4gYSBzdHJhaWdodC1m
b3J3YXJkIG1hbm5lcjoKLVxiZWdpbntpdGVtaXplfQotICBcaXRlbSBGbG9hdHMsIEJvb2xzLCBE
YXRlVGltZXMgYW5kIFN0cmluZ3MgbWFwIGRpcmVjdGx5IHRvIHRoZSBYTUwtUlBDIHtcdHQKLSAg
ZG91YmxlfSwge1x0dCBib29sZWFufSwge1x0dCBkYXRlVGltZS5pc284NjAxfSwgYW5kIHtcdHQg
c3RyaW5nfSBlbGVtZW50cy4KLQotICBcaXRlbSBhbGwgYGB7XHR0IHJlZlxffScnIHR5cGVzIGFy
ZSBvcGFxdWUgcmVmZXJlbmNlcywgZW5jb2RlZCBhcyB0aGUKLSAgWE1MLVJQQydzIHtcdHQgU3Ry
aW5nfSB0eXBlLiBVc2VycyBvZiB0aGUgQVBJIHNob3VsZCBub3QgbWFrZSBhc3N1bXB0aW9ucwot
ICBhYm91dCB0aGUgY29uY3JldGUgZm9ybSBvZiB0aGVzZSBzdHJpbmdzIGFuZCBzaG91bGQgbm90
IGV4cGVjdCB0aGVtIHRvCi0gIHJlbWFpbiB2YWxpZCBhZnRlciB0aGUgY2xpZW50J3Mgc2Vzc2lv
biB3aXRoIHRoZSBzZXJ2ZXIgaGFzIHRlcm1pbmF0ZWQuCi0KLSAgXGl0ZW0gZmllbGRzIG5hbWVk
IGBge1x0dCB1dWlkfScnIG9mIHR5cGUgYGB7XHR0IFN0cmluZ30nJyBhcmUgbWFwcGVkIHRvCi0g
IHRoZSBYTUwtUlBDIHtcdHQgU3RyaW5nfSB0eXBlLiBUaGUgc3RyaW5nIGl0c2VsZiBpcyB0aGUg
T1NGCi0gIERDRSBVVUlEIHByZXNlbnRhdGlvbiBmb3JtYXQgKGFzIG91dHB1dCBieSB7XHR0IHV1
aWRnZW59LCBldGMpLgotCi0gIFxpdGVtIGludHMgYXJlIGFsbCBhc3N1bWVkIHRvIGJlIDY0LWJp
dCBpbiBvdXIgQVBJIGFuZCBhcmUgZW5jb2RlZCBhcyBhCi0gIHN0cmluZyBvZiBkZWNpbWFsIGRp
Z2l0cyAocmF0aGVyIHRoYW4gdXNpbmcgWE1MLVJQQydzIGJ1aWx0LWluIDMyLWJpdCB7XHR0Ci0g
IGk0fSB0eXBlKS4KLQotICBcaXRlbSB2YWx1ZXMgb2YgZW51bSB0eXBlcyBhcmUgZW5jb2RlZCBh
cyBzdHJpbmdzLiBGb3IgZXhhbXBsZSwgYSB2YWx1ZSBvZgotICB7XHR0IGRlc3Ryb3l9IG9mIHR5
cGUge1x0dCBvblxfbm9ybWFsXF9leGl0fSwgd291bGQgYmUgY29udmV5ZWQgYXM6Ci0gIFxiZWdp
bnt2ZXJiYXRpbX0KLSAgICA8dmFsdWU+PHN0cmluZz5kZXN0cm95PC9zdHJpbmc+PC92YWx1ZT4K
LSAgXGVuZHt2ZXJiYXRpbX0KLQotICBcaXRlbSBmb3IgYWxsIG91ciB0eXBlcywge1x0dCB0fSwg
b3VyIHR5cGUge1x0dCB0IFNldH0gc2ltcGx5IG1hcHMgdG8KLSAgWE1MLVJQQydzIHtcdHQgQXJy
YXl9IHR5cGUsIHNvIGZvciBleGFtcGxlIGEgdmFsdWUgb2YgdHlwZSB7XHR0IGNwdVxfZmVhdHVy
ZQotICBTZXR9IHdvdWxkIGJlIHRyYW5zbWl0dGVkIGxpa2UgdGhpczoKLQotICBcYmVnaW57dmVy
YmF0aW19Ci08YXJyYXk+Ci0gIDxkYXRhPgotICAgIDx2YWx1ZT48c3RyaW5nPkNYODwvc3RyaW5n
PjwvdmFsdWU+Ci0gICAgPHZhbHVlPjxzdHJpbmc+UFNFMzY8L3N0cmluZz48L3ZhbHVlPgotICAg
IDx2YWx1ZT48c3RyaW5nPkZQVTwvc3RyaW5nPjwvdmFsdWU+Ci0gIDwvZGF0YT4KLTwvYXJyYXk+
IAotICBcZW5ke3ZlcmJhdGltfQotCi0gIFxpdGVtIGZvciB0eXBlcyB7XHR0IGt9IGFuZCB7XHR0
IHZ9LCBvdXIgdHlwZSB7XHR0IChrLCB2KSBNYXB9IG1hcHMgb250byBhbgotICBYTUwtUlBDIHN0
cnVjdCwgd2l0aCB0aGUga2V5IGFzIHRoZSBuYW1lIG9mIHRoZSBzdHJ1Y3QuICBOb3RlIHRoYXQg
dGhlIHtcdHQKLSAgKGssIHYpIE1hcH0gdHlwZSBpcyBvbmx5IHZhbGlkIHdoZW4ge1x0dCBrfSBp
cyBhIHtcdHQgU3RyaW5nfSwge1x0dCBSZWZ9LCBvcgotICB7XHR0IEludH0sIGFuZCBpbiBlYWNo
IGNhc2UgdGhlIGtleXMgb2YgdGhlIG1hcHMgYXJlIHN0cmluZ2lmaWVkIGFzCi0gIGFib3ZlLiBG
b3IgZXhhbXBsZSwgdGhlIHtcdHQgKFN0cmluZywgZG91YmxlKSBNYXB9IGNvbnRhaW5pbmcgYSB0
aGUgbWFwcGluZ3MKLSAgTWlrZSAkXHJpZ2h0YXJyb3ckIDIuMyBhbmQgSm9obiAkXHJpZ2h0YXJy
b3ckIDEuMiB3b3VsZCBiZSByZXByZXNlbnRlZCBhczoKLQotICBcYmVnaW57dmVyYmF0aW19Ci08
dmFsdWU+Ci0gIDxzdHJ1Y3Q+Ci0gICAgPG1lbWJlcj4KLSAgICAgIDxuYW1lPk1pa2U8L25hbWU+
Ci0gICAgICA8dmFsdWU+PGRvdWJsZT4yLjM8L2RvdWJsZT48L3ZhbHVlPgotICAgIDwvbWVtYmVy
PgotICAgIDxtZW1iZXI+Ci0gICAgICA8bmFtZT5Kb2huPC9uYW1lPgotICAgICAgPHZhbHVlPjxk
b3VibGU+MS4yPC9kb3VibGU+PC92YWx1ZT4KLSAgICA8L21lbWJlcj4KLSAgPC9zdHJ1Y3Q+Ci08
L3ZhbHVlPgotICBcZW5ke3ZlcmJhdGltfQotCi0gIFxpdGVtIG91ciB7XHR0IFZvaWR9IHR5cGUg
aXMgdHJhbnNtaXR0ZWQgYXMgYW4gZW1wdHkgc3RyaW5nLgotCi1cZW5ke2l0ZW1pemV9Ci0KLVxz
dWJzZWN0aW9ue05vdGUgb24gUmVmZXJlbmNlcyB2cyBVVUlEc30KLQotUmVmZXJlbmNlcyBhcmUg
b3BhcXVlIHR5cGVzIC0tLSBlbmNvZGVkIGFzIFhNTC1SUEMgc3RyaW5ncyBvbiB0aGUgd2lyZSAt
LS0gdW5kZXJzdG9vZAotb25seSBieSB0aGUgcGFydGljdWxhciBzZXJ2ZXIgd2hpY2ggZ2VuZXJh
dGVkIHRoZW0uIFNlcnZlcnMgYXJlIGZyZWUgdG8gY2hvb3NlCi1hbnkgY29uY3JldGUgcmVwcmVz
ZW50YXRpb24gdGhleSBmaW5kIGNvbnZlbmllbnQ7IGNsaWVudHMgc2hvdWxkIG5vdCBtYWtlIGFu
eSAKLWFzc3VtcHRpb25zIG9yIGF0dGVtcHQgdG8gcGFyc2UgdGhlIHN0cmluZyBjb250ZW50cy4g
UmVmZXJlbmNlcyBhcmUgbm90IGd1YXJhbnRlZWQKLXRvIGJlIHBlcm1hbmVudCBpZGVudGlmaWVy
cyBmb3Igb2JqZWN0czsgY2xpZW50cyBzaG91bGQgbm90IGFzc3VtZSB0aGF0IHJlZmVyZW5jZXMg
Ci1nZW5lcmF0ZWQgZHVyaW5nIG9uZSBzZXNzaW9uIGFyZSB2YWxpZCBmb3IgYW55IGZ1dHVyZSBz
ZXNzaW9uLiBSZWZlcmVuY2VzIGRvIG5vdAotYWxsb3cgb2JqZWN0cyB0byBiZSBjb21wYXJlZCBm
b3IgZXF1YWxpdHkuIFR3byByZWZlcmVuY2VzIHRvIHRoZSBzYW1lIG9iamVjdCBhcmUKLW5vdCBn
dWFyYW50ZWVkIHRvIGJlIHRleHR1YWxseSBpZGVudGljYWwuCi0KLVVVSURzIGFyZSBpbnRlbmRl
ZCB0byBiZSBwZXJtYW5lbnQgbmFtZXMgZm9yIG9iamVjdHMuIFRoZXkgYXJlCi1ndWFyYW50ZWVk
IHRvIGJlIGluIHRoZSBPU0YgRENFIFVVSUQgcHJlc2VudGF0aW9uIGZvcm1hdCAoYXMgb3V0cHV0
IGJ5IHtcdHQgdXVpZGdlbn0uCi1DbGllbnRzIG1heSBzdG9yZSBVVUlEcyBvbiBkaXNrIGFuZCB1
c2UgdGhlbSB0byBsb29rdXAgb2JqZWN0cyBpbiBzdWJzZXF1ZW50IHNlc3Npb25zCi13aXRoIHRo
ZSBzZXJ2ZXIuIENsaWVudHMgbWF5IGFsc28gdGVzdCBlcXVhbGl0eSBvbiBvYmplY3RzIGJ5IGNv
bXBhcmluZyBVVUlEIHN0cmluZ3MuCi0KLVRoZSBBUEkgcHJvdmlkZXMgbWVjaGFuaXNtcwotZm9y
IHRyYW5zbGF0aW5nIGJldHdlZW4gVVVJRHMgYW5kIG9wYXF1ZSByZWZlcmVuY2VzLiBFYWNoIGNs
YXNzIHRoYXQgY29udGFpbnMgYSBVVUlECi1maWVsZCBwcm92aWRlczoKLVxiZWdpbntpdGVtaXpl
fQotXGl0ZW0gIEEgYGB7XHR0IGdldFxfYnlcX3V1aWR9JycgbWV0aG9kIHRoYXQgdGFrZXMgYSBV
VUlELCAkdSQsIGFuZCByZXR1cm5zIGFuIG9wYXF1ZSByZWZlcmVuY2UKLXRvIHRoZSBzZXJ2ZXIt
c2lkZSBvYmplY3QgdGhhdCBoYXMgVVVJRD0kdSQ7IAotXGl0ZW0gQSB7XHR0IGdldFxfdXVpZH0g
ZnVuY3Rpb24gKGEgcmVndWxhciBgYGZpZWxkIGdldHRlcicnIFJQQykgdGhhdCB0YWtlcyBhbiBv
cGFxdWUgcmVmZXJlbmNlLAotJHIkLCBhbmQgcmV0dXJucyB0aGUgVVVJRCBvZiB0aGUgc2VydmVy
LXNpZGUgb2JqZWN0IHRoYXQgaXMgcmVmZXJlbmNlZCBieSAkciQuCi1cZW5ke2l0ZW1pemV9Ci0K
LVxzdWJzZWN0aW9ue1JldHVybiBWYWx1ZXMvU3RhdHVzIENvZGVzfQotXGxhYmVse3N5bmNocm9u
b3VzLXJlc3VsdH0KLQotVGhlIHJldHVybiB2YWx1ZSBvZiBhbiBSUEMgY2FsbCBpcyBhbiBYTUwt
UlBDIHtcdHQgU3RydWN0fS4KLQotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBUaGUgZmlyc3QgZWxl
bWVudCBvZiB0aGUgc3RydWN0IGlzIG5hbWVkIHtcdHQgU3RhdHVzfTsgaXQKLWNvbnRhaW5zIGEg
c3RyaW5nIHZhbHVlIGluZGljYXRpbmcgd2hldGhlciB0aGUgcmVzdWx0IG9mIHRoZSBjYWxsIHdh
cwotYSBgYHtcdHQgU3VjY2Vzc30nJyBvciBhIGBge1x0dCBGYWlsdXJlfScnLgotXGVuZHtpdGVt
aXplfQotCi1JZiB7XHR0IFN0YXR1c30gd2FzIHNldCB0byB7XHR0IFN1Y2Nlc3N9IHRoZW4gdGhl
IFN0cnVjdCBjb250YWlucyBhIHNlY29uZAotZWxlbWVudCBuYW1lZCB7XHR0IFZhbHVlfToKLVxi
ZWdpbntpdGVtaXplfQotXGl0ZW0gVGhlIGVsZW1lbnQgb2YgdGhlIHN0cnVjdCBuYW1lZCB7XHR0
IFZhbHVlfSBjb250YWlucyB0aGUgZnVuY3Rpb24ncyByZXR1cm4gdmFsdWUuCi1cZW5ke2l0ZW1p
emV9Ci0KLUluIHRoZSBjYXNlIHdoZXJlIHtcdHQgU3RhdHVzfSBpcyBzZXQgdG8ge1x0dCBGYWls
dXJlfSB0aGVuCi10aGUgc3RydWN0IGNvbnRhaW5zIGEgc2Vjb25kIGVsZW1lbnQgbmFtZWQge1x0
dCBFcnJvckRlc2NyaXB0aW9ufToKLVxiZWdpbntpdGVtaXplfQotXGl0ZW0gVGhlIGVsZW1lbnQg
b2YgdGhlIHN0cnVjdCBuYW1lZCB7XHR0IEVycm9yRGVzY3JpcHRpb259IGNvbnRhaW5zCi1hbiBh
cnJheSBvZiBzdHJpbmcgdmFsdWVzLiBUaGUgZmlyc3QgZWxlbWVudCBvZiB0aGUgYXJyYXkgaXMg
YW4gZXJyb3IgY29kZTsKLXRoZSByZW1haW5kZXIgb2YgdGhlIGFycmF5IGFyZSBzdHJpbmdzIHJl
cHJlc2VudGluZyBlcnJvciBwYXJhbWV0ZXJzIHJlbGF0aW5nCi10byB0aGF0IGNvZGUuCi1cZW5k
e2l0ZW1pemV9Ci0KLUZvciBleGFtcGxlLCBhbiBYTUwtUlBDIHJldHVybiB2YWx1ZSBmcm9tIHRo
ZSB7XHR0IGhvc3QuZ2V0XF9yZXNpZGVudFxfVk1zfQotZnVuY3Rpb24gYWJvdmUKLW1heSBsb29r
IGxpa2UgdGhpczoKLVxiZWdpbnt2ZXJiYXRpbX0KLSAgICA8c3RydWN0PgotICAgICAgIDxtZW1i
ZXI+Ci0gICAgICAgICA8bmFtZT5TdGF0dXM8L25hbWU+Ci0gICAgICAgICA8dmFsdWU+U3VjY2Vz
czwvdmFsdWU+Ci0gICAgICAgPC9tZW1iZXI+Ci0gICAgICAgPG1lbWJlcj4KLSAgICAgICAgICA8
bmFtZT5WYWx1ZTwvbmFtZT4KLSAgICAgICAgICA8dmFsdWU+Ci0gICAgICAgICAgICA8YXJyYXk+
Ci0gICAgICAgICAgICAgICA8ZGF0YT4KLSAgICAgICAgICAgICAgICAgPHZhbHVlPjgxNTQ3YTM1
LTIwNWMtYTU1MS1jNTc3LTAwYjk4MmM1ZmUwMDwvdmFsdWU+Ci0gICAgICAgICAgICAgICAgIDx2
YWx1ZT42MWM4NWEyMi0wNWRhLWI4YTItMmU1NS0wNmIwODQ3ZGE1MDM8L3ZhbHVlPgotICAgICAg
ICAgICAgICAgICA8dmFsdWU+MWQ0MDFlYzQtM2MxNy0zNWE2LWZjNzktY2VlNmJkOTgxMWZlPC92
YWx1ZT4KLSAgICAgICAgICAgICAgIDwvZGF0YT4KLSAgICAgICAgICAgIDwvYXJyYXk+Ci0gICAg
ICAgICA8L3ZhbHVlPgotICAgICAgIDwvbWVtYmVyPgotICAgIDwvc3RydWN0PgotXGVuZHt2ZXJi
YXRpbX0KLQotXHNlY3Rpb257TWFraW5nIFhNTC1SUEMgQ2FsbHN9Ci0KLVxzdWJzZWN0aW9ue1Ry
YW5zcG9ydCBMYXllcn0KLQotVGhlIGZvbGxvd2luZyB0cmFuc3BvcnQgbGF5ZXJzIGFyZSBjdXJy
ZW50bHkgc3VwcG9ydGVkOgotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBIVFRQL1MgZm9yIHJlbW90
ZSBhZG1pbmlzdHJhdGlvbgotXGl0ZW0gSFRUUCBvdmVyIFVuaXggZG9tYWluIHNvY2tldHMgZm9y
IGxvY2FsIGFkbWluaXN0cmF0aW9uCi1cZW5ke2l0ZW1pemV9Ci0KLVxzdWJzZWN0aW9ue1Nlc3Np
b24gTGF5ZXJ9Ci0KLVRoZSBYTUwtUlBDIGludGVyZmFjZSBpcyBzZXNzaW9uLWJhc2VkOyBiZWZv
cmUgeW91IGNhbiBtYWtlIGFyYml0cmFyeSBSUEMgY2FsbHMKLXlvdSBtdXN0IGxvZ2luIGFuZCBp
bml0aWF0ZSBhIHNlc3Npb24uIEZvciBleGFtcGxlOgotXGJlZ2lue3ZlcmJhdGltfQotICAgc2Vz
c2lvbl9pZCAgICBzZXNzaW9uLmxvZ2luX3dpdGhfcGFzc3dvcmQoc3RyaW5nIHVuYW1lLCBzdHJp
bmcgcHdkKQotXGVuZHt2ZXJiYXRpbX0KLVdoZXJlIHtcdHQgdW5hbWV9IGFuZCB7XHR0IHBhc3N3
b3JkfSByZWZlciB0byB5b3VyIHVzZXJuYW1lIGFuZCBwYXNzd29yZAotcmVzcGVjdGl2ZWx5LCBh
cyBkZWZpbmVkIGJ5IHRoZSBYZW4gYWRtaW5pc3RyYXRvci4KLVRoZSB7XHR0IHNlc3Npb25cX2lk
fSByZXR1cm5lZCBieSB7XHR0IHNlc3Npb24ubG9naW5cX3dpdGhcX3Bhc3N3b3JkfSBpcyBwYXNz
ZWQKLXRvIHN1YnNlcXVlbnQgUlBDIGNhbGxzIGFzIGFuIGF1dGhlbnRpY2F0aW9uIHRva2VuLgot
Ci1BIHNlc3Npb24gY2FuIGJlIHRlcm1pbmF0ZWQgd2l0aCB0aGUge1x0dCBzZXNzaW9uLmxvZ291
dH0gZnVuY3Rpb246Ci1cYmVnaW57dmVyYmF0aW19Ci0gICB2b2lkICAgICAgICAgIHNlc3Npb24u
bG9nb3V0KHNlc3Npb25faWQgc2Vzc2lvbikKLVxlbmR7dmVyYmF0aW19Ci0KLVxzdWJzZWN0aW9u
e1N5bmNocm9ub3VzIGFuZCBBc3luY2hyb25vdXMgaW52b2NhdGlvbn0KLQotRWFjaCBtZXRob2Qg
Y2FsbCAoYXBhcnQgZnJvbSBtZXRob2RzIG9uIGBgU2Vzc2lvbicnIGFuZCBgYFRhc2snJyBvYmpl
Y3RzIAotYW5kIGBgZ2V0dGVycycnIGFuZCBgYHNldHRlcnMnJyBkZXJpdmVkIGZyb20gZmllbGRz
KQotY2FuIGJlIG1hZGUgZWl0aGVyIHN5bmNocm9ub3VzbHkgb3IgYXN5bmNocm9ub3VzbHkuCi1B
IHN5bmNocm9ub3VzIFJQQyBjYWxsIGJsb2NrcyB1bnRpbCB0aGUKLXJldHVybiB2YWx1ZSBpcyBy
ZWNlaXZlZDsgdGhlIHJldHVybiB2YWx1ZSBvZiBhIHN5bmNocm9ub3VzIFJQQyBjYWxsIGlzCi1l
eGFjdGx5IGFzIHNwZWNpZmllZCBpbiBTZWN0aW9uflxyZWZ7c3luY2hyb25vdXMtcmVzdWx0fS4K
LQotT25seSBzeW5jaHJvbm91cyBBUEkgY2FsbHMgYXJlIGxpc3RlZCBleHBsaWNpdGx5IGluIHRo
aXMgZG9jdW1lbnQuIAotQWxsIGFzeW5jaHJvbm91cyB2ZXJzaW9ucyBhcmUgaW4gdGhlIHNwZWNp
YWwge1x0dCBBc3luY30gbmFtZXNwYWNlLgotRm9yIGV4YW1wbGUsIHN5bmNocm9ub3VzIGNhbGwg
e1x0dCBWTS5jbG9uZSguLi4pfQotKGRlc2NyaWJlZCBpbiBDaGFwdGVyflxyZWZ7YXBpLXJlZmVy
ZW5jZX0pCi1oYXMgYW4gYXN5bmNocm9ub3VzIGNvdW50ZXJwYXJ0LCB7XHR0Ci1Bc3luYy5WTS5j
bG9uZSguLi4pfSwgdGhhdCBpcyBub24tYmxvY2tpbmcuCi0KLUluc3RlYWQgb2YgcmV0dXJuaW5n
IGl0cyByZXN1bHQgZGlyZWN0bHksIGFuIGFzeW5jaHJvbm91cyBSUEMgY2FsbAotcmV0dXJucyBh
IHtcdHQgdGFzay1pZH07IHRoaXMgaWRlbnRpZmllciBpcyBzdWJzZXF1ZW50bHkgdXNlZAotdG8g
dHJhY2sgdGhlIHN0YXR1cyBvZiBhIHJ1bm5pbmcgYXN5bmNocm9ub3VzIFJQQy4gTm90ZSB0aGF0
IGFuIGFzeW5jaHJvbm91cwotY2FsbCBtYXkgZmFpbCBpbW1lZGlhdGVseSwgYmVmb3JlIGEge1x0
dCB0YXNrLWlkfSBoYXMgZXZlbiBiZWVuIGNyZWF0ZWQtLS10bwotcmVwcmVzZW50IHRoaXMgZXZl
bnR1YWxpdHksIHRoZSByZXR1cm5lZCB7XHR0IHRhc2staWR9Ci1pcyB3cmFwcGVkIGluIGFuIFhN
TC1SUEMgc3RydWN0IHdpdGggYSB7XHR0IFN0YXR1c30sIHtcdHQgRXJyb3JEZXNjcmlwdGlvbn0g
YW5kCi17XHR0IFZhbHVlfSBmaWVsZHMsIGV4YWN0bHkgYXMgc3BlY2lmaWVkIGluIFNlY3Rpb25+
XHJlZntzeW5jaHJvbm91cy1yZXN1bHR9LgotCi1UaGUge1x0dCB0YXNrLWlkfSBpcyBwcm92aWRl
ZCBpbiB0aGUge1x0dCBWYWx1ZX0gZmllbGQgaWYge1x0dCBTdGF0dXN9IGlzIHNldCB0bwote1x0
dCBTdWNjZXNzfS4KLQotVGhlIFJQQyBjYWxsCi1cYmVnaW57dmVyYmF0aW19Ci0gICAgKHJlZl90
YXNrIFNldCkgICBUYXNrLmdldF9hbGwoc2Vzc2lvbl9pZCBzKQotXGVuZHt2ZXJiYXRpbX0gCi1y
ZXR1cm5zIGEgc2V0IG9mIGFsbCB0YXNrIElEcyBrbm93biB0byB0aGUgc3lzdGVtLiBUaGUgc3Rh
dHVzIChpbmNsdWRpbmcgYW55Ci1yZXR1cm5lZCByZXN1bHQgYW5kIGVycm9yIGNvZGVzKSBvZiB0
aGVzZSB0YXNrcwotY2FuIHRoZW4gYmUgcXVlcmllZCBieSBhY2Nlc3NpbmcgdGhlIGZpZWxkcyBv
ZiB0aGUgVGFzayBvYmplY3QgaW4gdGhlIHVzdWFsIHdheS4gCi1Ob3RlIHRoYXQsIGluIG9yZGVy
IHRvIGdldCBhIGNvbnNpc3RlbnQgc25hcHNob3Qgb2YgYSB0YXNrJ3Mgc3RhdGUsIGl0IGlzIGFk
dmlzYWJsZSB0byBjYWxsIHRoZSBgYGdldFxfcmVjb3JkJycgZnVuY3Rpb24uCi0KLVxzZWN0aW9u
e0V4YW1wbGUgaW50ZXJhY3RpdmUgc2Vzc2lvbn0KLVRoaXMgc2VjdGlvbiBkZXNjcmliZXMgaG93
IGFuIGludGVyYWN0aXZlIHNlc3Npb24gbWlnaHQgbG9vaywgdXNpbmcKLXRoZSBweXRob24gQVBJ
LiAgQWxsIHB5dGhvbiB2ZXJzaW9ucyBzdGFydGluZyBmcm9tIDIuNCBzaG91bGQgd29yay4KLQot
VGhlIGV4YW1wbGVzIGluIHRoaXMgc2VjdGlvbiB1c2UgYSByZW1vdGUgWGVuIGhvc3Qgd2l0aCB0
aGUgaXAgYWRkcmVzcwotb2YgXHRleHR0dHsxOTIuMTY4LjcuMjB9IGFuZCB0aGUgeG1scnBjIHBv
cnQgXHRleHR0dHs5MzYzfS4gIE5vCi1hdXRoZW50aWNhdGlvbiBpcyB1c2VkLgotCi1Ob3RlIHRo
YXQgdGhlIHJlbW90ZSBzZXJ2ZXIgbXVzdCBiZSBjb25maWd1cmVkIGluIHRoZSB3YXksIHRoYXQg
aXQKLWFjY2VwdHMgcmVtb3RlIGNvbm5lY3Rpb25zLiAgU29tZSBsaW5lcyBtdXN0IGJlIGFkZGVk
IHRvIHRoZQoteGVuZC1jb25maWcuc3hwIGNvbmZpZ3VyYXRpb24gZmlsZToKLVxiZWdpbnt2ZXJi
YXRpbX0KLSh4ZW4tYXBpLXNlcnZlciAoKDkzNjMgbm9uZSkKLSAgICAgICAgICAgICAgICAgKHVu
aXggbm9uZSkpKQotKHhlbmQtdGNwLXhtbHJwYy1zZXJ2ZXIgeWVzKQotXGVuZHt2ZXJiYXRpbX0K
LVRoZSB4ZW5kIG11c3QgYmUgcmVzdGFydGVkIGFmdGVyIGNoYW5naW5nIHRoZSBjb25maWd1cmF0
aW9uLgotCi1CZWZvcmUgc3RhcnRpbmcgcHl0aG9uLCB0aGUgXHRleHR0dHtQWVRIT05QQVRIfSBt
dXN0IGJlIHNldCB0aGF0IHRoZQotXHRleHR0dHtYZW5BUEkucHl9IGNhbiBiZSBmb3VuZC4gIFR5
cGljYWxseSB0aGUgXHRleHR0dHtYZW5BUEkucHl9IGlzCi1pbnN0YWxsZWQgd2l0aCBvbmUgb2Yg
dGhlIFhlbiBoZWxwZXIgcGFja2FnZXMgd2hpY2ggdGhlIGxhc3QgcGFydCBvZgotdGhlIHBhdGgg
aXMgXHRleHR0dHt4ZW4veG0vWGVuQVBJLnB5fS4KLQotRXhhbXBsZTogVW5kZXIgRGViaWFuIDUu
MCB0aGUgcGFja2FnZSB3aGljaCBjb250YWlucyB0aGUKLVx0ZXh0dHR7WGVuQVBJLnB5fSBpcyBc
dGV4dHR0e3hlbi11dGlscy0zLjItMX0uIFx0ZXh0dHR7WGVuQVBJLnB5fSBpcwotbG9jYXRlZCBp
biBcdGV4dHR0ey91c3IvbGliL3hlbi0zLjItMS9saWIvcHl0aG9uL3hlbi94bX0uIFRoZQotZm9s
bG93aW5nIGNvbW1hbmQgd2lsbCBzZXQgdGhlIFx0ZXh0dHR7UFlUSE9OUEFUSH0gZW52aXJvbm1l
bnQKLXZhcmlhYmxlIGluIGEgYmFzaDoKLQotXGJlZ2lue3ZlcmJhdGltfQotJCBleHBvcnQgUFlU
SE9OUEFUSD0vdXNyL2xpYi94ZW4tMy4yLTEvbGliL3B5dGhvbgotXGVuZHt2ZXJiYXRpbX0KLQot
VGhlbiBweXRob24gY2FuIGJlIHN0YXJ0ZWQgYW5kIHRoZSBYZW5BUEkgbXVzdCBiZSBpbXBvcnRl
ZDoKLQotXGJlZ2lue3ZlcmJhdGltfQotJCBweXRob24KLS4uLgotPj4+IGltcG9ydCB4ZW4ueG0u
WGVuQVBJCi1cZW5ke3ZlcmJhdGltfQotCi1UbyBjcmVhdGUgYSBzZXNzaW9uIHRvIHRoZSByZW1v
dGUgc2VydmVyLCB0aGUKLVx0ZXh0dHR7eGVuLnhtLlhlbkFQSS5TZXNzaW9ufSBjb25zdHJ1Y3Rv
ciBpcyB1c2VkOgotXGJlZ2lue3ZlcmJhdGltfQotPj4+IHNlc3Npb24gPSB4ZW4ueG0uWGVuQVBJ
LlNlc3Npb24oImh0dHA6Ly8xOTIuMTY4LjcuMjA6OTM2MyIpCi1cZW5ke3ZlcmJhdGltfQotCi1G
b3IgYXV0aGVudGljYXRpb24gd2l0aCBhIHVzZXJuYW1lIGFuZCBwYXNzd29yZCB0aGUKLVx0ZXh0
dHR7bG9naW5cX3dpdGhcX3Bhc3N3b3JkfSBpcyB1c2VkOgotXGJlZ2lue3ZlcmJhdGltfQotPj4+
IHNlc3Npb24ubG9naW5fd2l0aF9wYXNzd29yZCgiIiwgIiIpCi1cZW5ke3ZlcmJhdGltfQotCi1X
aGVuIHNlcmlhbGlzZWQsIHRoaXMgY2FsbCBsb29rcyBsaWtlOgotXGJlZ2lue3ZlcmJhdGltfQot
UE9TVCAvUlBDMiBIVFRQLzEuMAotSG9zdDogMTkyLjE2OC43LjIwOjkzNjMKLVVzZXItQWdlbnQ6
IHhtbHJwY2xpYi5weS8xLjAuMSAoYnkgd3d3LnB5dGhvbndhcmUuY29tKQotQ29udGVudC1UeXBl
OiB0ZXh0L3htbAotQ29udGVudC1MZW5ndGg6IDIyMQotCi08P3htbCB2ZXJzaW9uPScxLjAnPz4K
LTxtZXRob2RDYWxsPgotPG1ldGhvZE5hbWU+c2Vzc2lvbi5sb2dpbl93aXRoX3Bhc3N3b3JkPC9t
ZXRob2ROYW1lPgotPHBhcmFtcz4KLTxwYXJhbT4KLTx2YWx1ZT48c3RyaW5nPjwvc3RyaW5nPjwv
dmFsdWU+Ci08L3BhcmFtPgotPHBhcmFtPgotPHZhbHVlPjxzdHJpbmc+PC9zdHJpbmc+PC92YWx1
ZT4KLTwvcGFyYW0+Ci08L3BhcmFtcz4KLTwvbWV0aG9kQ2FsbD4KLVxlbmR7dmVyYmF0aW19Ci0K
LUFuZCB0aGUgcmVzcG9uc2U6Ci1cYmVnaW57dmVyYmF0aW19Ci1IVFRQLzEuMSAyMDAgT0sKLVNl
cnZlcjogQmFzZUhUVFAvMC4zIFB5dGhvbi8yLjUuMgotRGF0ZTogRnJpLCAxMCBKdWwgMjAwOSAw
OTowMToyNyBHTVQKLUNvbnRlbnQtVHlwZTogdGV4dC94bWwKLUNvbnRlbnQtTGVuZ3RoOiAzMTMK
LQotPD94bWwgdmVyc2lvbj0nMS4wJz8+Ci08bWV0aG9kUmVzcG9uc2U+Ci08cGFyYW1zPgotPHBh
cmFtPgotPHZhbHVlPjxzdHJ1Y3Q+Ci08bWVtYmVyPgotPG5hbWU+U3RhdHVzPC9uYW1lPgotPHZh
bHVlPjxzdHJpbmc+U3VjY2Vzczwvc3RyaW5nPjwvdmFsdWU+Ci08L21lbWJlcj4KLTxtZW1iZXI+
Ci08bmFtZT5WYWx1ZTwvbmFtZT4KLTx2YWx1ZT48c3RyaW5nPjY4ZTNhMDA5LTAyNDktNzI1Yi0y
NDZiLTdmYzQzY2Y0ZjE1NDwvc3RyaW5nPjwvdmFsdWU+Ci08L21lbWJlcj4KLTwvc3RydWN0Pjwv
dmFsdWU+Ci08L3BhcmFtPgotPC9wYXJhbXM+Ci08L21ldGhvZFJlc3BvbnNlPgotXGVuZHt2ZXJi
YXRpbX0KLQotTmV4dCwgdGhlIHVzZXIgbWF5IGFjcXVpcmUgYSBsaXN0IG9mIGFsbCB0aGUgVk1z
IGtub3duIHRvIHRoZSBob3N0OgotCi1cYmVnaW57dmVyYmF0aW19Ci0+Pj4gdm1zID0gc2Vzc2lv
bi54ZW5hcGkuVk0uZ2V0X2FsbCgpCi0+Pj4gdm1zCi1bJzAwMDAwMDAwLTAwMDAtMDAwMC0wMDAw
LTAwMDAwMDAwMDAwMCcsICdiMjhlNGVlMy0yMTZmLWZhODUtOWNhZS02MTVlOTU0ZGJiZTcnXQot
XGVuZHt2ZXJiYXRpbX0KLQotVGhlIFZNIHJlZmVyZW5jZXMgaGVyZSBoYXZlIHRoZSBmb3JtIG9m
IGFuIHV1aWQsIHRob3VnaCB0aGV5IG1heQotY2hhbmdlIGluIHRoZSBmdXR1cmUsIGFuZCB0aGV5
IHNob3VsZCBiZSB0cmVhdGVkIGFzIG9wYXF1ZSBzdHJpbmdzLgotCi1Tb21lIGV4YW1wbGVzIG9m
IHVzaW5nIGFjY2Vzc29ycyBmb3Igb2JqZWN0IGZpZWxkczoKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+
PiBzZXNzaW9uLnhlbmFwaS5WTS5nZXRfbmFtZV9sYWJlbCh2bXNbMV0pCi0nZ3Vlc3QwMDInCi0+
Pj4gc2Vzc2lvbi54ZW5hcGkuVk0uZ2V0X2FjdGlvbnNfYWZ0ZXJfcmVib290KHZtc1sxXSkKLSdy
ZXN0YXJ0JwotXGVuZHt2ZXJiYXRpbX0KLQotR3JhYiB0aGUgYWN0dWFsIG1lbW9yeSBhbmQgY3B1
IHV0aWxpc2F0aW9uIG9mIG9uZSB2bToKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+PiBtID0gc2Vzc2lv
bi54ZW5hcGkuVk0uZ2V0X21ldHJpY3Modm1zWzFdKQotPj4+IHNlc3Npb24ueGVuYXBpLlZNX21l
dHJpY3MuZ2V0X21lbW9yeV9hY3R1YWwobSkKLScyNjg0MzU0NTYnCi0+Pj4gc2Vzc2lvbi54ZW5h
cGkuVk1fbWV0cmljcy5nZXRfVkNQVXNfdXRpbGlzYXRpb24obSkKLXsnMCc6IDAuMDAwNDE3NTk5
NTU2MzI5MzUzNjJ9Ci1cZW5ke3ZlcmJhdGltfQotKFRoZSB2aXJ0dWFsIG1hY2hpbmUgaGFzIGFi
b3V0IDI1NiBNQnl0ZSBSQU0gYW5kIGlzIGlkbGUuKQotCi1QYXVzaW5nIGFuZCB1bnBhdXNpbmcg
YSB2bToKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+PiBzZXNzaW9uLnhlbmFwaS5WTS5wYXVzZSh2bXNb
MV0pCi0nJwotPj4+IHNlc3Npb24ueGVuYXBpLlZNLnVucGF1c2Uodm1zWzFdKQotJycKLVxlbmR7
dmVyYmF0aW19Ci0KLVRyeWluZyB0byBzdGFydCBhbiB2bToKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+
PiBzZXNzaW9uLnhlbmFwaS5WTS5zdGFydCh2bXNbMV0sIEZhbHNlKQotLi4uCi06IFhlbi1BUEkg
ZmFpbHVyZTogWydWTV9CQURfUE9XRVJfU1RBVEUnLCBcCi0gICAgJ2IyOGU0ZWUzLTIxNmYtZmE4
NS05Y2FlLTYxNWU5NTRkYmJlNycsICdIYWx0ZWQnLCAnUnVubmluZyddCi1cZW5ke3ZlcmJhdGlt
fQotCi1JbiB0aGlzIGNhc2UgdGhlIHtcdHQgc3RhcnR9IG1lc3NhZ2UgaGFzIGJlZW4gcmVqZWN0
ZWQsIGJlY2F1c2UgdGhlIFZNIGlzCi1hbHJlYWR5IHJ1bm5pbmcsIGFuZCBzbyBhbiBlcnJvciBy
ZXNwb25zZSBoYXMgYmVlbiByZXR1cm5lZC4gIFRoZXNlIGhpZ2gtbGV2ZWwKLWVycm9ycyBhcmUg
cmV0dXJuZWQgYXMgc3RydWN0dXJlZCBkYXRhIChyYXRoZXIgdGhhbiBhcyBYTUwtUlBDIGZhdWx0
cyksCi1hbGxvd2luZyB0aGVtIHRvIGJlIGludGVybmF0aW9uYWxpc2VkLiAgCmRpZmYgLS1naXQg
YS9kb2NzL3hlbi1hcGkveGVuLmVwcyBiL2RvY3MveGVuLWFwaS94ZW4uZXBzCmRlbGV0ZWQgZmls
ZSBtb2RlIDEwMDY0NAppbmRleCBkYTE0ZmU5Li4wMDAwMDAwCi0tLSBhL2RvY3MveGVuLWFwaS94
ZW4uZXBzCisrKyAvZGV2L251bGwKQEAgLTEsNDQgKzAsMCBAQAotJSFQUy1BZG9iZS0zLjEgRVBT
Ri0zLjAKJSVUaXRsZTogeGVuMy0xLjAuZXBzCiUlQ3JlYXRvcjogQWRvYmUgSWxsdXN0cmF0b3Io
UikgMTEKJSVBSThfQ3JlYXRvclZlcnNpb246IDExLjAuMAolQUk5X1ByaW50aW5nRGF0YUJlZ2lu
CiUlRm9yOiBSaWNoIFF1YXJsZXMKJSVDcmVhdGlvbkRhdGU6IDYvMjYvMDYKJSVCb3VuZGluZ0Jv
eDogMCAwIDIxNSA5NAolJUhpUmVzQm91bmRpbmdCb3g6IDAgMCAyMTQuMTY0NiA5My41MTk2CiUl
Q3JvcEJveDogMCAwIDIxNC4xNjQ2IDkzLjUxOTYKJSVMYW5ndWFnZUxldmVsOiAyCiUlRG9jdW1l
bnREYXRhOiBDbGVhbjdCaXQKJSVQYWdlczogMQolJURvY3VtZW50TmVlZGVkUmVzb3VyY2VzOiAK
JSVEb2N1bWVudFN1cHBsaWVkUmVzb3VyY2VzOiBwcm9jc2V0IEFkb2JlX0FHTV9JbWFnZSAoMS4w
IDApCiUlKyBwcm9jc2V0IEFkb2JlX0Nvb2xUeXBlX1V0aWxpdHlfVDQyICgxLjAgMCkKJSUrIHBy
b2NzZXQgQWRvYmVfQ29vbFR5cGVfVXRpbGl0eV9NQUtFT0NGICgxLjE5IDApCiUlKyBwcm9jc2V0
IEFkb2JlX0Nvb2xUeXBlX0NvcmUgKDIuMjMgMCkKJSUrIHByb2NzZXQgQWRvYmVfQUdNX0NvcmUg
KDIuMCAwKQolJSsgcHJvY3NldCBBZG9iZV9BR01fVXRpbHMgKDEuMCAwKQolJURvY3VtZW50Rm9u
dHM6IAolJURvY3VtZW50TmVlZGVkRm9udHM6IAolJURvY3VtZW50TmVlZGVkRmVhdHVyZXM6IAol
JURvY3VtZW50U3VwcGxpZWRGZWF0dXJlczogCiUlRG9jdW1lbnRQcm9jZXNzQ29sb3JzOiAgQmxh
Y2sKJSVEb2N1bWVudEN1c3RvbUNvbG9yczogCiUlQ01ZS0N1c3RvbUNvbG9yOiAKJSVSR0JDdXN0
b21Db2xvcjogCiVBRE9fQ29udGFpbnNYTVA6IE1haW5GaXJzdAolQUk3X1RodW1ibmFpbDogMTI4
IDU2IDgKJSVCZWdpbkRhdGE6IDYyNjYgSGV4IEJ5dGVzCiUwMDAwMzMwMDAwNjYwMDAwOTkwMDAw
Q0MwMDMzMDAwMDMzMzMwMDMzNjYwMDMzOTkwMDMzQ0MwMDMzRkYKJTAwNjYwMDAwNjYzMzAwNjY2
NjAwNjY5OTAwNjZDQzAwNjZGRjAwOTkwMDAwOTkzMzAwOTk2NjAwOTk5OQolMDA5OUNDMDA5OUZG
MDBDQzAwMDBDQzMzMDBDQzY2MDBDQzk5MDBDQ0NDMDBDQ0ZGMDBGRjMzMDBGRjY2CiUwMEZGOTkw
MEZGQ0MzMzAwMDAzMzAwMzMzMzAwNjYzMzAwOTkzMzAwQ0MzMzAwRkYzMzMzMDAzMzMzMzMKJTMz
MzM2NjMzMzM5OTMzMzNDQzMzMzNGRjMzNjYwMDMzNjYzMzMzNjY2NjMzNjY5OTMzNjZDQzMzNjZG
RgolMzM5OTAwMzM5OTMzMzM5OTY2MzM5OTk5MzM5OUNDMzM5OUZGMzNDQzAwMzNDQzMzMzNDQzY2
MzNDQzk5CiUzM0NDQ0MzM0NDRkYzM0ZGMDAzM0ZGMzMzM0ZGNjYzM0ZGOTkzM0ZGQ0MzM0ZGRkY2
NjAwMDA2NjAwMzMKJTY2MDA2NjY2MDA5OTY2MDBDQzY2MDBGRjY2MzMwMDY2MzMzMzY2MzM2NjY2
MzM5OTY2MzNDQzY2MzNGRgolNjY2NjAwNjY2NjMzNjY2NjY2NjY2Njk5NjY2NkNDNjY2NkZGNjY5
OTAwNjY5OTMzNjY5OTY2NjY5OTk5CiU2Njk5Q0M2Njk5RkY2NkNDMDA2NkNDMzM2NkNDNjY2NkND
OTk2NkNDQ0M2NkNDRkY2NkZGMDA2NkZGMzMKJTY2RkY2NjY2RkY5OTY2RkZDQzY2RkZGRjk5MDAw
MDk5MDAzMzk5MDA2Njk5MDA5OTk5MDBDQzk5MDBGRgolOTkzMzAwOTkzMzMzOTkzMzY2OTkzMzk5
OTkzM0NDOTkzM0ZGOTk2NjAwOTk2NjMzOTk2NjY2OTk2Njk5CiU5OTY2Q0M5OTY2RkY5OTk5MDA5
OTk5MzM5OTk5NjY5OTk5OTk5OTk5Q0M5OTk5RkY5OUNDMDA5OUNDMzMKJTk5Q0M2Njk5Q0M5OTk5
Q0NDQzk5Q0NGRjk5RkYwMDk5RkYzMzk5RkY2Njk5RkY5OTk5RkZDQzk5RkZGRgolQ0MwMDAwQ0Mw
MDMzQ0MwMDY2Q0MwMDk5Q0MwMENDQ0MwMEZGQ0MzMzAwQ0MzMzMzQ0MzMzY2Q0MzMzk5CiVDQzMz
Q0NDQzMzRkZDQzY2MDBDQzY2MzNDQzY2NjZDQzY2OTlDQzY2Q0NDQzY2RkZDQzk5MDBDQzk5MzMK
JUNDOTk2NkNDOTk5OUNDOTlDQ0NDOTlGRkNDQ0MwMENDQ0MzM0NDQ0M2NkNDQ0M5OUNDQ0NDQ0ND
Q0NGRgolQ0NGRjAwQ0NGRjMzQ0NGRjY2Q0NGRjk5Q0NGRkNDQ0NGRkZGRkYwMDMzRkYwMDY2RkYw
MDk5RkYwMENDCiVGRjMzMDBGRjMzMzNGRjMzNjZGRjMzOTlGRjMzQ0NGRjMzRkZGRjY2MDBGRjY2
MzNGRjY2NjZGRjY2OTkKJUZGNjZDQ0ZGNjZGRkZGOTkwMEZGOTkzM0ZGOTk2NkZGOTk5OUZGOTlD
Q0ZGOTlGRkZGQ0MwMEZGQ0MzMwolRkZDQzY2RkZDQzk5RkZDQ0NDRkZDQ0ZGRkZGRjMzRkZGRjY2
RkZGRjk5RkZGRkNDMTEwMDAwMDAxMTAwCiUwMDAwMTExMTExMTEyMjAwMDAwMDIyMDAwMDAwMjIy
MjIyMjI0NDAwMDAwMDQ0MDAwMDAwNDQ0NDQ0NDQKJTU1MDAwMDAwNTUwMDAwMDA1NTU1NTU1NTc3
MDAwMDAwNzcwMDAwMDA3Nzc3Nzc3Nzg4MDAwMDAwODgwMAolMDAwMDg4ODg4ODg4QUEwMDAwMDBB
QTAwMDAwMEFBQUFBQUFBQkIwMDAwMDBCQjAwMDAwMEJCQkJCQkJCCiVERDAwMDAwMEREMDAwMDAw
RERERERERERFRTAwMDAwMEVFMDAwMDAwRUVFRUVFRUUwMDAwMDAwMDAwRkYKJTAwRkYwMDAwRkZG
RkZGMDAwMEZGMDBGRkZGRkYwMEZGRkZGRgolNTI0QzQ1RkQxOUZGQThBODdEQThGRDA3N0RBOEE4
RkQ3MEZGN0Q3RDUyN0Q1MjdENTI3RDdEN0Q1MjdECiU1MjdENTJGRDA0N0RGRDZBRkZBODdENTI3
RDdEN0Q1MkZEMEI3RDUyRkQwNDdEQThBOEZENjVGRkE4N0QKJTdENTI3RDUyRkQwNDdERkQwOUE4
N0Q3RDUyN0Q1MjdENTI3RDdERkQ2M0ZGQThGRDA1N0RBOEE4RkZBOAolRkZBOEZGQThGRkE4RkZB
OEZGQThGRkE4RkZBOEE4RkQwNTdEQThGRDVGRkZBODdENTI3RDUyN0Q3REE4CiVBOEZGQThBOEE4
RkZBOEE4QThGRkE4QThBOEZGQThBOEE4RkZBOEE4N0Q3RDUyN0Q1MjdEQThGRDVDRkYKJUE4N0Q1
MjdEN0RBOEE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRgol
QThBODdEN0Q1MjdEQThGRDVBRkY3RDdENTI3RDdERkQwNEE4RkZBOEE4QThGRkE4QThBOEZGQThB
OEE4CiVGRkE4QThBOEZGQThBOEE4RkZGRDA0QTg1MjdENTI3REE4RkQ1OEZGRkQwNTdERkZBOEZG
QThGRkE4RkYKJUE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThB
OEZEMDQ3REE4RkQ1NgolRkY3RDdENTI3RDdEQThBOEZGQThBOEE4RkZBOEE4QThGRkE4QThBOEZG
QThBOEE4RkZBOEE4QThGRkE4CiVBOEE4RkZBOEE4QThGRkE4QThGRDA0N0RBOEZENTRGRkE4RkQw
NDdERkZBOEZGQThGRkE4RkZBOEZGQTgKJUZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4
RkZBOEZGQThGRkE4RkZBOEZGRkQwNDdEQThGRAolNTJGRkE4N0Q1MjdEN0RGRkE4RkZBOEZGQThG
RkE4RkZBOEZGQThGRkE4RkZBOEE4QThGRkE4QThBOEZGCiVBOEE4QThGRkZEMDVBOEZGQThGRkE4
RkZGRDA0N0RBOEZENTFGRjdEN0Q3RDI3RkQwRTUyN0RBOEZGQTgKJUZGQThGRkE4RkZBOEZGQThG
RkE4RkZBOEZGRkZBODI3RkQwNTUyMjcyNzI3RkQwOTUyN0RGRDQ3RkZBOAolNTI3RDdENTJGRDBG
Rjg1MkE4QThBOEZGQThBOEE4RkZBOEE4QThGRkE4RkZBODdERkQxMkY4NTJGRDQ4CiVGRjdEN0Q3
REE4QTgyN0ZEMEZGODdERkZGRkE4RkZBOEZGQThGRkE4RkZBOEZGRkY3REZEMTFGODI3N0QKJUZE
NDhGRjdEN0Q1MjdEQThGRkE4MjdGRDBGRjhGRDA0QThGRkE4QThBOEZGQThGRkE4NTJGRDExRjgy
NwolQThGRDQ5RkY3RDdEN0RBOEZGQThGRjdERkQwRkY4MjdGRkE4RkZBOEZGQThGRkE4RkZBODI3
RkQxMUY4CiU1MkZENEFGRkE4NTI3RDdERkZBOEE4QThGRjUyRkQwRkY4NTJGRkE4RkZBOEE4QThG
RjdERkQxMkY4N0QKJUZENEJGRjdEN0Q3REE4QThGRkE4RkZBOEZGNTJGRDBGRjg3REZGQThGRkE4
RkY3REZEMTJGODUyQThGRAolNEFGRkE4N0Q1MjdEQThBOEE4RkZBOEE4QThGRjI3RkQwRUY4MjdB
OEZGQThGRjUyRkQxMUY4Mjc3RDdECiU3REE4RkQwN0ZGQThGRkE4RkZBOEZEMjNGRkE4RkZBOEZE
MTdGRkE4NTI3RDdERkZBOEZGQThGRkE4RkYKJUZGRkZGRDBGRjg1MkZGRkYyN0ZEMTFGODUyRkY3
RDdEN0RGRkE4N0Q1MjUyMjcyN0Y4MjdGODI3RjgyNwolMjc1MjdERkQwQkZGRkQwNEE4N0RBOEE4
QTg3REZEMDRBOEZGRkZGRkE4N0Q1MjI3RjgyN0Y4MjcyNzUyCiU3REZEMTRGRjdEN0Q1MkZEMDRB
OEZGQThBOEE4RkZBOEE4RkQwRkY4NTIyN0ZEMTBGODI3N0RGRkE4QTgKJTUyMjdGRDExRjgyNzUy
RkQwN0ZGQThGRDBDRjhGRkZGQTgyN0ZEMENGODdERkZGRkZGQThGOEY4Rjg3RAolMjdGODUyN0RG
OEY4RkQwNEZGN0Q3RDdEQThGRkE4RkZBOEZGQThGRkE4RkZGRjdERkQxRkY4MjdBOEZGCiVGRjdE
MjdGRDE2RjhBOEZEMDVGRkE4RkQwQkY4MjdGRjdERkQwRkY4N0RGRkZGRkZBOEY4N0RGRjI3RjgK
JTI3MjdGOEY4RkZGRkZGQTg1MjdEN0RBOEE4RkZBOEE4QThGRkE4QThBOEZGQTg1MkZEMURGODUy
QThGRgolQTg1MkZEMTlGODdERkQwNEZGNTJGRDBCRjgyNzI3RkQxMUY4QThGRkZGQThGODUyRkZG
ODI3RjhGODUyCiVGOEZGRkZGRkE4N0Q3REE4QThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQTg1MkZE
MUJGOEE4RkZGRkE4MjcKJUZEMEJGODdEQThGRjdENTJGRDBCRjhGRDA0RkY1MkZEMUVGODUyRkZG
RkZGNTJBOEZGRkQwNDdEQTg1MgolRkZGRkZGQTg1MjdEN0RGRkE4RkZBOEE4QThGRkE4QThBOEZG
QThGRkE4MjdGRDE4RjgyN0E4QThGRkE4CiUyN0ZEMEFGODI3RkQwNkZGMjdGRDBBRjgyN0ZGRkZG
RkZEMEZGODI3MjdGRDBFRjg1MkZEMEZGRkE4N0QKJTUyQThBOEZGQThGRkE4RkZBOEZGQThGRkE4
RkZBOEZGQTgyN0ZEMTZGODUyRkZGRkZGQTg1MkZEMEJGOAolRkQwN0ZGNTJGRDBBRjgyN0ZGRkZB
OEZEMERGODI3QThGRkZGRkY1MkZEMENGODUyRkQwRkZGQTg1MjdECiU3REZGQThBOEE4RkZBOEE4
QThGRkE4QThBOEZGQThGRjdERkQxNUY4NTJGRkE4QThBODdERkQwQkY4NTIKJUZEMDRGRkE4RkZG
RjUyRkQwQkY4QThGRjdERkQwQ0Y4MjdGRDA1RkZBOEZEMENGODUyRkQwRkZGQTg3RAolN0RBOEE4
RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGNTJGRDEyRjgyN0E4RkZBOEZGQThGRjI3CiVG
RDBDRjhGRDA3MjdGRDBDRjhBOEZGNTJGRDBDRjhGRDA3RkZGRDBDRjg3REZEMEZGRkE4NTI3RDdE
RkYKJUE4QThBOEZGQThBOEE4RkZBOEE4QThGRkE4RkY1MkZEMTJGODUyQThGRkE4QThBOEZGN0RG
RDIwRjhGRgolRkYyN0ZEMEJGODUyRkQwNkZGN0RGRDBDRjhBOEZEMEZGRkE4N0Q3REE4QThGRkE4
RkZBOEZGQThGRkE4CiVGRkE4RkZGRkE4MjdGRDEyRjgyN0E4RkZBOEZGQThGRkE4N0RGRDFGRjgy
N0ZGRkZGRDBDRjg3REZEMDYKJUZGN0RGRDBCRjgyN0ZEMTBGRkE4NTI3RDdEQThBOEZGQThBOEE4
RkZBOEE4QThGRkE4N0RGRDE1Rjg1MgolQThBOEE4RkZBOEZGNTJGRDBDRjgyN0Y4MjdGODI3Rjgy
N0Y4MjdGODI3RjgyN0Y4MjdGODI3RjgyNzUyCiVGRjdERkQwQ0Y4QThGRDA2RkYyN0ZEMEJGODI3
RkQxMEZGQTg3RDdEN0RBOEZGQThGRkE4RkZBOEZGQTgKJUZGRkY3REZEMTdGODdERkZGRkE4RkZB
ODUyRkQwQkY4QThGRDE1RkY3REZEMEJGODI3RkQwN0ZGMjdGRAolMEJGODdERkQxMUZGN0Q3RDdE
QThBOEZGQThBOEE4RkZBOEZGQTg1MkZEMTlGODdEQThGRkE4RkY1MkZECiUwQkY4N0RGRDE1RkYy
N0ZEMEJGODI3RkQwNkZGQThGRDBDRjhBOEZEMTFGRkE4NTI3REE4RkZBOEZGQTgKJUZGQThGRkE4
MjdGRDFCRjhBOEE4RkZGRjdERkQwQkY4N0RGRDA2RkY3RDI3Mjc1MjI3NTIyNzUyMjc1MgolMjcy
NzdERkZGRjI3RkQwQkY4N0RGRDA2RkY3REZEMENGOEZEMTJGRjdEN0Q1MkZEMDZBOEZGNTJGRDFE
CiVGODI3RkZBOEZGQThGRDBDRjhBOEZEMDRGRjUyRkQwQ0Y4QThGRkE4RkQwQ0Y4N0RGRDA2RkY1
MkZEMEIKJUY4NTJGRDEzRkY3RDdEN0RGRkE4RkZGRkZGNTJGRDFGRjg1MkZGQThGRjdERkQwQ0Y4
NTI1MjUyRkQwRAolRjhBOEZGRkY3REZEMENGOEZEMDdGRjI3RkQwQkY4NTJGRDEzRkY3RDUyN0RB
OEZGQThBODI3RkQxMUY4CiUyNzI3RkQwRUY4NTJGRkE4RkY1MkZEMTlGODI3QThGRkZGRkY1MkZE
MEJGODUyRkQwNkZGQThGRDBDRjgKJUE4RkQxM0ZGQTg3RDUyQThGRjdERkQxMkY4NTJGRjUyRkQw
RkY4N0RGRkE4RkY3RDI3RkQxNUY4Mjc3RAolRkQwNUZGRkQwQ0Y4NTJGRDA2RkY3REZEMENGOEZE
MTVGRjUyN0Q3RDUyRkQxMkY4N0RGRkE4RkYyN0ZECiUwRUY4MjdBOEZGQThGRkE4N0QyNzI3RkQw
RkY4NTI1MkE4RkQwNkZGQTg1MjI3MjcyNzUyMjcyNzI3NTIKJTI3MjcyN0E4RkQwNkZGNTJGRDA0
Mjc1MjI3MjcyNzUyMjcyNzUyRkQxNUZGQTg1MjI3RkQxMUY4MjdBOAolRkZBOEZGQThBOEZEMEZG
ODUyRkZGRkE4RkY3RDdEN0RBOEE4QTg3RDdENTI3RDUyRkQwNDdEQThBOEZECiU0MEZGN0RGRDEy
Rjg1MkE4RkZBOEZGQThBOEE4N0RGRDBGRjg3REZGRkY3RDdENTI3REZENERGRjdERkQKJTEyRjg3
REE4RkZBOEZGQThGRkE4RkZBODUyRkQwRkY4QThBODdEN0Q3REE4RkQ0Q0ZGNTJGRDEyRjhGRAol
MDRBOEZGQThBOEE4RkZBOEZGN0RGRDEwRjhGRDA0N0RGRDRDRkYyN0ZEMTFGODI3RkZGRkZGQThG
RkE4CiVGRkE4RkZBOEZGQThGRjdERkQwRkY4Mjc3RDdERkQ0QkZGQThGRDEyRjg1MkZGQThGRkE4
QThBOEZGQTgKJUE4QThGRkE4QThBOEZGMjdGRDBGRjgyN0ZENEJGRjdERkQwRkY4MjdGODI3N0RG
RkE4RkZBOEZGQThGRgolQThGRkE4RkZBOEZGQThGRkE4QThGRDEwRjg3REZENEFGRjdERkQwQUE4
N0Q1MjUyNTI3RDdEQThBOEZGCiVBOEZGQThBOEE4RkZBOEE4QThGRkE4QThBOEZGRkQwNEE4N0RB
ODdEQTg3REE4N0RBODdERkQwNDUyQTgKJUE4QThGRDU2RkZBODdEN0Q3REE4QThGRkE4RkZBOEZG
QThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRgolQThGRkE4RkQwNUZGQThGRDA0N0RGRDVCRkZB
ODUyN0Q1MjdEN0RBOEE4RkZBOEE4QThGRkE4QThBOEZGCiVBOEE4QThGRkE4QThBOEZGQThBOEE4
RkZBOEE4RkQwNDdENTJGRDVFRkZGRDA1N0RBOEE4RkZBOEZGQTgKJUZGQThGRkE4RkZBOEZGQThG
RkE4RkZBOEZGQThGRkE4QThGRDA1N0RGRDYwRkY3RDdENTI3RDUyN0Q3RAolRkQwNEE4RkZBOEE4
QThGRkE4QThBOEZGRkQwNEE4N0Q3RDUyN0Q1MjdEN0RGRDYyRkZBOEE4RkQwNzdECiVGRDA0QThG
RkZEMDZBOEZEMDc3REE4RkQ2N0ZGN0Q3RDUyN0Q1MjdENTI3RDUyN0Q1MjdEN0Q3RDUyN0QKJTUy
N0Q1MjdENTI3RDdERkQ2QkZGQThGRDA0N0Q1MjdEN0Q3RDUyN0Q3RDdENTJGRDA0N0RGRDcwRkZG
RAolMDRBOEZEMDc3REE4QThGRkE4RkQ1OEZGRkYKJSVFbmREYXRhCiUlRW5kQ29tbWVudHMKJSVC
ZWdpbkRlZmF1bHRzCiUlVmlld2luZ09yaWVudGF0aW9uOiAxIDAgMCAxCiUlRW5kRGVmYXVsdHMK
JSVCZWdpblByb2xvZwolJUJlZ2luUmVzb3VyY2U6IHByb2NzZXQgQWRvYmVfQUdNX1V0aWxzIDEu
MCAwCiUlVmVyc2lvbjogMS4wIDAKJSVDb3B5cmlnaHQ6IENvcHlyaWdodCAoQykgMjAwMC0yMDAz
IEFkb2JlIFN5c3RlbXMsIEluYy4gIEFsbCBSaWdodHMgUmVzZXJ2ZWQuCnN5c3RlbWRpY3QgL3Nl
dHBhY2tpbmcga25vd24KewoJY3VycmVudHBhY2tpbmcKCXRydWUgc2V0cGFja2luZwp9IGlmCnVz
ZXJkaWN0IC9BZG9iZV9BR01fVXRpbHMgNjggZGljdCBkdXAgYmVnaW4gcHV0Ci9iZGYKewoJYmlu
ZCBkZWYKfSBiaW5kIGRlZgovbmR7CgludWxsIGRlZgp9YmRmCi94ZGYKewoJZXhjaCBkZWYKfWJk
ZgovbGRmIAp7Cglsb2FkIGRlZgp9YmRmCi9kZGYKewoJcHV0Cn1iZGYJCi94ZGRmCnsKCTMgLTEg
cm9sbCBwdXQKfWJkZgkKL3hwdAp7CglleGNoIHB1dAp9YmRmCi9uZGYKeyAKCWV4Y2ggZHVwIHdo
ZXJlewoJCXBvcCBwb3AgcG9wCgl9ewoJCXhkZgoJfWlmZWxzZQp9ZGVmCi9jZG5kZgp7CglleGNo
IGR1cCBjdXJyZW50ZGljdCBleGNoIGtub3duewoJCXBvcCBwb3AKCX17CgkJZXhjaCBkZWYKCX1p
ZmVsc2UKfWRlZgovYmRpY3QKewoJbWFyawp9YmRmCi9lZGljdAp7Cgljb3VudHRvbWFyayAyIGlk
aXYgZHVwIGRpY3QgYmVnaW4ge2RlZn0gcmVwZWF0IHBvcCBjdXJyZW50ZGljdCBlbmQKfWRlZgov
cHNfbGV2ZWwKCS9sYW5ndWFnZWxldmVsIHdoZXJlewoJCXBvcCBzeXN0ZW1kaWN0IC9sYW5ndWFn
ZWxldmVsIGdldCBleGVjCgl9ewoJCTEKCX1pZmVsc2UKZGVmCi9sZXZlbDIgCglwc19sZXZlbCAy
IGdlCmRlZgovbGV2ZWwzIAoJcHNfbGV2ZWwgMyBnZQpkZWYKL3BzX3ZlcnNpb24KCXt2ZXJzaW9u
IGN2cn0gc3RvcHBlZCB7CgkJLTEKCX1pZgpkZWYKL21ha2VyZWFkb25seWFycmF5CnsKCS9wYWNr
ZWRhcnJheSB3aGVyZXsKCQlwb3AgcGFja2VkYXJyYXkKCX17CgkJYXJyYXkgYXN0b3JlIHJlYWRv
bmx5Cgl9aWZlbHNlCn1iZGYKL21hcF9yZXNlcnZlZF9pbmtfbmFtZQp7CglkdXAgdHlwZSAvc3Ry
aW5ndHlwZSBlcXsKCQlkdXAgL1JlZCBlcXsKCQkJcG9wIChfUmVkXykKCQl9ewoJCQlkdXAgL0dy
ZWVuIGVxewoJCQkJcG9wIChfR3JlZW5fKQoJCQl9ewoJCQkJZHVwIC9CbHVlIGVxewoJCQkJCXBv
cCAoX0JsdWVfKQoJCQkJfXsKCQkJCQlkdXAgKCkgY3ZuIGVxewoJCQkJCQlwb3AgKFByb2Nlc3Mp
CgkJCQkJfWlmCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgl9aWYKfWJkZgovQUdN
VVRJTF9HU1RBVEUgMjIgZGljdCBkZWYKL2dldF9nc3RhdGUKewoJQUdNVVRJTF9HU1RBVEUgYmVn
aW4KCS9BR01VVElMX0dTVEFURV9jbHJfc3BjIGN1cnJlbnRjb2xvcnNwYWNlIGRlZgoJL0FHTVVU
SUxfR1NUQVRFX2Nscl9pbmR4IDAgZGVmCgkvQUdNVVRJTF9HU1RBVEVfY2xyX2NvbXBzIDEyIGFy
cmF5IGRlZgoJbWFyayBjdXJyZW50Y29sb3IgY291bnR0b21hcmsKCQl7QUdNVVRJTF9HU1RBVEVf
Y2xyX2NvbXBzIEFHTVVUSUxfR1NUQVRFX2Nscl9pbmR4IDMgLTEgcm9sbCBwdXQKCQkvQUdNVVRJ
TF9HU1RBVEVfY2xyX2luZHggQUdNVVRJTF9HU1RBVEVfY2xyX2luZHggMSBhZGQgZGVmfSByZXBl
YXQgcG9wCgkvQUdNVVRJTF9HU1RBVEVfZm50IHJvb3Rmb250IGRlZgoJL0FHTVVUSUxfR1NUQVRF
X2x3IGN1cnJlbnRsaW5ld2lkdGggZGVmCgkvQUdNVVRJTF9HU1RBVEVfbGMgY3VycmVudGxpbmVj
YXAgZGVmCgkvQUdNVVRJTF9HU1RBVEVfbGogY3VycmVudGxpbmVqb2luIGRlZgoJL0FHTVVUSUxf
R1NUQVRFX21sIGN1cnJlbnRtaXRlcmxpbWl0IGRlZgoJY3VycmVudGRhc2ggL0FHTVVUSUxfR1NU
QVRFX2RvIHhkZiAvQUdNVVRJTF9HU1RBVEVfZGEgeGRmCgkvQUdNVVRJTF9HU1RBVEVfc2EgY3Vy
cmVudHN0cm9rZWFkanVzdCBkZWYKCS9BR01VVElMX0dTVEFURV9jbHJfcm5kIGN1cnJlbnRjb2xv
cnJlbmRlcmluZyBkZWYKCS9BR01VVElMX0dTVEFURV9vcCBjdXJyZW50b3ZlcnByaW50IGRlZgoJ
L0FHTVVUSUxfR1NUQVRFX2JnIGN1cnJlbnRibGFja2dlbmVyYXRpb24gY3ZsaXQgZGVmCgkvQUdN
VVRJTF9HU1RBVEVfdWNyIGN1cnJlbnR1bmRlcmNvbG9ycmVtb3ZhbCBjdmxpdCBkZWYKCWN1cnJl
bnRjb2xvcnRyYW5zZmVyIGN2bGl0IC9BR01VVElMX0dTVEFURV9neV94ZmVyIHhkZiBjdmxpdCAv
QUdNVVRJTF9HU1RBVEVfYl94ZmVyIHhkZgoJCWN2bGl0IC9BR01VVElMX0dTVEFURV9nX3hmZXIg
eGRmIGN2bGl0IC9BR01VVElMX0dTVEFURV9yX3hmZXIgeGRmCgkvQUdNVVRJTF9HU1RBVEVfaHQg
Y3VycmVudGhhbGZ0b25lIGRlZgoJL0FHTVVUSUxfR1NUQVRFX2ZsdCBjdXJyZW50ZmxhdCBkZWYK
CWVuZAp9ZGVmCi9zZXRfZ3N0YXRlCnsKCUFHTVVUSUxfR1NUQVRFIGJlZ2luCglBR01VVElMX0dT
VEFURV9jbHJfc3BjIHNldGNvbG9yc3BhY2UKCUFHTVVUSUxfR1NUQVRFX2Nscl9pbmR4IHtBR01V
VElMX0dTVEFURV9jbHJfY29tcHMgQUdNVVRJTF9HU1RBVEVfY2xyX2luZHggMSBzdWIgZ2V0Cgkv
QUdNVVRJTF9HU1RBVEVfY2xyX2luZHggQUdNVVRJTF9HU1RBVEVfY2xyX2luZHggMSBzdWIgZGVm
fSByZXBlYXQgc2V0Y29sb3IKCUFHTVVUSUxfR1NUQVRFX2ZudCBzZXRmb250CglBR01VVElMX0dT
VEFURV9sdyBzZXRsaW5ld2lkdGgKCUFHTVVUSUxfR1NUQVRFX2xjIHNldGxpbmVjYXAKCUFHTVVU
SUxfR1NUQVRFX2xqIHNldGxpbmVqb2luCglBR01VVElMX0dTVEFURV9tbCBzZXRtaXRlcmxpbWl0
CglBR01VVElMX0dTVEFURV9kYSBBR01VVElMX0dTVEFURV9kbyBzZXRkYXNoCglBR01VVElMX0dT
VEFURV9zYSBzZXRzdHJva2VhZGp1c3QKCUFHTVVUSUxfR1NUQVRFX2Nscl9ybmQgc2V0Y29sb3Jy
ZW5kZXJpbmcKCUFHTVVUSUxfR1NUQVRFX29wIHNldG92ZXJwcmludAoJQUdNVVRJTF9HU1RBVEVf
YmcgY3Z4IHNldGJsYWNrZ2VuZXJhdGlvbgoJQUdNVVRJTF9HU1RBVEVfdWNyIGN2eCBzZXR1bmRl
cmNvbG9ycmVtb3ZhbAoJQUdNVVRJTF9HU1RBVEVfcl94ZmVyIGN2eCBBR01VVElMX0dTVEFURV9n
X3hmZXIgY3Z4IEFHTVVUSUxfR1NUQVRFX2JfeGZlciBjdngKCQlBR01VVElMX0dTVEFURV9neV94
ZmVyIGN2eCBzZXRjb2xvcnRyYW5zZmVyCglBR01VVElMX0dTVEFURV9odCAvSGFsZnRvbmVUeXBl
IGdldCBkdXAgOSBlcSBleGNoIDEwMCBlcSBvcgoJCXsKCQljdXJyZW50aGFsZnRvbmUgL0hhbGZ0
b25lVHlwZSBnZXQgQUdNVVRJTF9HU1RBVEVfaHQgL0hhbGZ0b25lVHlwZSBnZXQgbmUKCQkJewoJ
CQkgIG1hcmsgQUdNVVRJTF9HU1RBVEVfaHQge3NldGhhbGZ0b25lfSBzdG9wcGVkIGNsZWFydG9t
YXJrCgkJCX0gaWYKCQl9ewoJCUFHTVVUSUxfR1NUQVRFX2h0IHNldGhhbGZ0b25lCgkJfSBpZmVs
c2UKCUFHTVVUSUxfR1NUQVRFX2ZsdCBzZXRmbGF0CgllbmQKfWRlZgovZ2V0X2dzdGF0ZV9hbmRf
bWF0cml4CnsKCUFHTVVUSUxfR1NUQVRFIGJlZ2luCgkvQUdNVVRJTF9HU1RBVEVfY3RtIG1hdHJp
eCBjdXJyZW50bWF0cml4IGRlZgoJZW5kCglnZXRfZ3N0YXRlCn1kZWYKL3NldF9nc3RhdGVfYW5k
X21hdHJpeAp7CglzZXRfZ3N0YXRlCglBR01VVElMX0dTVEFURSBiZWdpbgoJQUdNVVRJTF9HU1RB
VEVfY3RtIHNldG1hdHJpeAoJZW5kCn1kZWYKL0FHTVVUSUxfc3RyMjU2IDI1NiBzdHJpbmcgZGVm
Ci9BR01VVElMX3NyYzI1NiAyNTYgc3RyaW5nIGRlZgovQUdNVVRJTF9kc3Q2NCA2NCBzdHJpbmcg
ZGVmCi9BR01VVElMX3NyY0xlbiBuZAovQUdNVVRJTF9uZHggbmQKL2FnbV9zZXRoYWxmdG9uZQp7
IAoJZHVwCgliZWdpbgoJCS9fRGF0YSBsb2FkCgkJL1RocmVzaG9sZHMgeGRmCgllbmQKCWxldmVs
MyAKCXsgc2V0aGFsZnRvbmUgfXsKCQlkdXAgL0hhbGZ0b25lVHlwZSBnZXQgMyBlcSB7CgkJCXNl
dGhhbGZ0b25lCgkJfSB7cG9wfSBpZmVsc2UKCX1pZmVsc2UKfSBkZWYgCi9yZGNtbnRsaW5lCnsK
CWN1cnJlbnRmaWxlIEFHTVVUSUxfc3RyMjU2IHJlYWRsaW5lIHBvcAoJKCUpIGFuY2hvcnNlYXJj
aCB7cG9wfSBpZgp9IGJkZgovZmlsdGVyX2NteWsKewkKCWR1cCB0eXBlIC9maWxldHlwZSBuZXsK
CQlleGNoICgpIC9TdWJGaWxlRGVjb2RlIGZpbHRlcgoJfQoJewoJCWV4Y2ggcG9wCgl9CglpZmVs
c2UKCVsKCWV4Y2gKCXsKCQlBR01VVElMX3NyYzI1NiByZWFkc3RyaW5nIHBvcAoJCWR1cCBsZW5n
dGggL0FHTVVUSUxfc3JjTGVuIGV4Y2ggZGVmCgkJL0FHTVVUSUxfbmR4IDAgZGVmCgkJQUdNQ09S
RV9wbGF0ZV9uZHggNCBBR01VVElMX3NyY0xlbiAxIHN1YnsKCQkJMSBpbmRleCBleGNoIGdldAoJ
CQlBR01VVElMX2RzdDY0IEFHTVVUSUxfbmR4IDMgLTEgcm9sbCBwdXQKCQkJL0FHTVVUSUxfbmR4
IEFHTVVUSUxfbmR4IDEgYWRkIGRlZgoJCX1mb3IKCQlwb3AKCQlBR01VVElMX2RzdDY0IDAgQUdN
VVRJTF9uZHggZ2V0aW50ZXJ2YWwKCX0KCWJpbmQKCS9leGVjIGN2eAoJXSBjdngKfSBiZGYKL2Zp
bHRlcl9pbmRleGVkX2Rldm4KewoJY3ZpIE5hbWVzIGxlbmd0aCBtdWwgbmFtZXNfaW5kZXggYWRk
IExvb2t1cCBleGNoIGdldAp9IGJkZgovZmlsdGVyX2Rldm4KewkKCTQgZGljdCBiZWdpbgoJL3Ny
Y1N0ciB4ZGYKCS9kc3RTdHIgeGRmCglkdXAgdHlwZSAvZmlsZXR5cGUgbmV7CgkJMCAoKSAvU3Vi
RmlsZURlY29kZSBmaWx0ZXIKCX1pZgoJWwoJZXhjaAoJCVsKCQkJL2RldmljZW5fY29sb3JzcGFj
ZV9kaWN0IC9BR01DT1JFX2dnZXQgY3Z4IC9iZWdpbiBjdngKCQkJY3VycmVudGRpY3QgL3NyY1N0
ciBnZXQgL3JlYWRzdHJpbmcgY3Z4IC9wb3AgY3Z4CgkJCS9kdXAgY3Z4IC9sZW5ndGggY3Z4IDAg
L2d0IGN2eCBbCgkJCQlBZG9iZV9BR01fVXRpbHMgL0FHTVVUSUxfbmR4IDAgL2RkZiBjdngKCQkJ
CW5hbWVzX2luZGV4IE5hbWVzIGxlbmd0aCBjdXJyZW50ZGljdCAvc3JjU3RyIGdldCBsZW5ndGgg
MSBzdWIgewoJCQkJCTEgL2luZGV4IGN2eCAvZXhjaCBjdnggL2dldCBjdngKCQkJCQljdXJyZW50
ZGljdCAvZHN0U3RyIGdldCAvQUdNVVRJTF9uZHggL2xvYWQgY3Z4IDMgLTEgL3JvbGwgY3Z4IC9w
dXQgY3Z4CgkJCQkJQWRvYmVfQUdNX1V0aWxzIC9BR01VVElMX25keCAvQUdNVVRJTF9uZHggL2xv
YWQgY3Z4IDEgL2FkZCBjdnggL2RkZiBjdngKCQkJCX0gZm9yCgkJCQljdXJyZW50ZGljdCAvZHN0
U3RyIGdldCAwIC9BR01VVElMX25keCAvbG9hZCBjdnggL2dldGludGVydmFsIGN2eAoJCQldIGN2
eCAvaWYgY3Z4CgkJCS9lbmQgY3Z4CgkJXSBjdngKCQliaW5kCgkJL2V4ZWMgY3Z4CgldIGN2eAoJ
ZW5kCn0gYmRmCi9BR01VVElMX2ltYWdlZmlsZSBuZAovcmVhZF9pbWFnZV9maWxlCnsKCUFHTVVU
SUxfaW1hZ2VmaWxlIDAgc2V0ZmlsZXBvc2l0aW9uCgkxMCBkaWN0IGJlZ2luCgkvaW1hZ2VEaWN0
IHhkZgoJL2ltYnVmTGVuIFdpZHRoIEJpdHNQZXJDb21wb25lbnQgbXVsIDcgYWRkIDggaWRpdiBk
ZWYKCS9pbWJ1ZklkeCAwIGRlZgoJL29yaWdEYXRhU291cmNlIGltYWdlRGljdCAvRGF0YVNvdXJj
ZSBnZXQgZGVmCgkvb3JpZ011bHRpcGxlRGF0YVNvdXJjZXMgaW1hZ2VEaWN0IC9NdWx0aXBsZURh
dGFTb3VyY2VzIGdldCBkZWYKCS9vcmlnRGVjb2RlIGltYWdlRGljdCAvRGVjb2RlIGdldCBkZWYK
CS9kc3REYXRhU3RyIGltYWdlRGljdCAvV2lkdGggZ2V0IGNvbG9yU3BhY2VFbGVtQ250IG11bCBz
dHJpbmcgZGVmCgkvc3JjRGF0YVN0cnMgWyBpbWFnZURpY3QgYmVnaW4KCQljdXJyZW50ZGljdCAv
TXVsdGlwbGVEYXRhU291cmNlcyBrbm93biB7TXVsdGlwbGVEYXRhU291cmNlcyB7RGF0YVNvdXJj
ZSBsZW5ndGh9ezF9aWZlbHNlfXsxfSBpZmVsc2UKCQl7CgkJCVdpZHRoIERlY29kZSBsZW5ndGgg
MiBkaXYgbXVsIGN2aSBzdHJpbmcKCQl9IHJlcGVhdAoJCWVuZCBdIGRlZgoJaW1hZ2VEaWN0IC9N
dWx0aXBsZURhdGFTb3VyY2VzIGtub3duIHtNdWx0aXBsZURhdGFTb3VyY2VzfXtmYWxzZX0gaWZl
bHNlCgl7CgkJL2ltYnVmQ250IGltYWdlRGljdCAvRGF0YVNvdXJjZSBnZXQgbGVuZ3RoIGRlZgoJ
CS9pbWJ1ZnMgaW1idWZDbnQgYXJyYXkgZGVmCgkJMCAxIGltYnVmQ250IDEgc3ViIHsKCQkJL2lt
YnVmSWR4IHhkZgoJCQlpbWJ1ZnMgaW1idWZJZHggaW1idWZMZW4gc3RyaW5nIHB1dAoJCQlpbWFn
ZURpY3QgL0RhdGFTb3VyY2UgZ2V0IGltYnVmSWR4IFsgQUdNVVRJTF9pbWFnZWZpbGUgaW1idWZz
IGltYnVmSWR4IGdldCAvcmVhZHN0cmluZyBjdnggL3BvcCBjdnggXSBjdnggcHV0CgkJfSBmb3IK
CQlEZXZpY2VOX1BTMiB7CgkJCWltYWdlRGljdCBiZWdpbgoJCSAJL0RhdGFTb3VyY2UgWyBEYXRh
U291cmNlIC9kZXZuX3NlcF9kYXRhc291cmNlIGN2eCBdIGN2eCBkZWYKCQkJL011bHRpcGxlRGF0
YVNvdXJjZXMgZmFsc2UgZGVmCgkJCS9EZWNvZGUgWzAgMV0gZGVmCgkJCWVuZAoJCX0gaWYKCX17
CgkJL2ltYnVmIGltYnVmTGVuIHN0cmluZyBkZWYKCQlJbmRleGVkX0RldmljZU4gbGV2ZWwzIG5v
dCBhbmQgRGV2aWNlTl9Ob25lTmFtZSBvciB7CgkJCWltYWdlRGljdCBiZWdpbgoJCSAJL0RhdGFT
b3VyY2UgW0FHTVVUSUxfaW1hZ2VmaWxlIERlY29kZSBCaXRzUGVyQ29tcG9uZW50IGZhbHNlIDEg
L2ZpbHRlcl9pbmRleGVkX2Rldm4gbG9hZCBkc3REYXRhU3RyIHNyY0RhdGFTdHJzIGRldm5fYWx0
X2RhdGFzb3VyY2UgL2V4ZWMgY3Z4XSBjdnggZGVmCgkJCS9EZWNvZGUgWzAgMV0gZGVmCgkJCWVu
ZAoJCX17CgkJCWltYWdlRGljdCAvRGF0YVNvdXJjZSB7QUdNVVRJTF9pbWFnZWZpbGUgaW1idWYg
cmVhZHN0cmluZyBwb3B9IHB1dAoJCX0gaWZlbHNlCgl9IGlmZWxzZQoJaW1hZ2VEaWN0IGV4Y2gK
CWxvYWQgZXhlYwoJaW1hZ2VEaWN0IC9EYXRhU291cmNlIG9yaWdEYXRhU291cmNlIHB1dAoJaW1h
Z2VEaWN0IC9NdWx0aXBsZURhdGFTb3VyY2VzIG9yaWdNdWx0aXBsZURhdGFTb3VyY2VzIHB1dAoJ
aW1hZ2VEaWN0IC9EZWNvZGUgb3JpZ0RlY29kZSBwdXQJCgllbmQKfSBiZGYKL3dyaXRlX2ltYWdl
X2ZpbGUKewoJYmVnaW4KCXsgKEFHTVVUSUxfaW1hZ2VmaWxlKSAodyspIGZpbGUgfSBzdG9wcGVk
ewoJCWZhbHNlCgl9ewoJCUFkb2JlX0FHTV9VdGlscy9BR01VVElMX2ltYWdlZmlsZSB4ZGRmIAoJ
CTIgZGljdCBiZWdpbgoJCS9pbWJ1ZkxlbiBXaWR0aCBCaXRzUGVyQ29tcG9uZW50IG11bCA3IGFk
ZCA4IGlkaXYgZGVmCgkJTXVsdGlwbGVEYXRhU291cmNlcyB7RGF0YVNvdXJjZSAwIGdldH17RGF0
YVNvdXJjZX1pZmVsc2UgdHlwZSAvZmlsZXR5cGUgZXEgewoJCQkvaW1idWYgaW1idWZMZW4gc3Ry
aW5nIGRlZgoJCX1pZgoJCTEgMSBIZWlnaHQgeyAKCQkJcG9wCgkJCU11bHRpcGxlRGF0YVNvdXJj
ZXMgewoJCQkgCTAgMSBEYXRhU291cmNlIGxlbmd0aCAxIHN1YiB7CgkJCQkJRGF0YVNvdXJjZSB0
eXBlIGR1cAoJCQkJCS9hcnJheXR5cGUgZXEgewoJCQkJCQlwb3AgRGF0YVNvdXJjZSBleGNoIGdl
dCBleGVjCgkJCQkJfXsKCQkJCQkJL2ZpbGV0eXBlIGVxIHsKCQkJCQkJCURhdGFTb3VyY2UgZXhj
aCBnZXQgaW1idWYgcmVhZHN0cmluZyBwb3AKCQkJCQkJfXsKCQkJCQkJCURhdGFTb3VyY2UgZXhj
aCBnZXQKCQkJCQkJfSBpZmVsc2UKCQkJCQl9IGlmZWxzZQoJCQkJCUFHTVVUSUxfaW1hZ2VmaWxl
IGV4Y2ggd3JpdGVzdHJpbmcKCQkJCX0gZm9yCgkJCX17CgkJCQlEYXRhU291cmNlIHR5cGUgZHVw
CgkJCQkvYXJyYXl0eXBlIGVxIHsKCQkJCQlwb3AgRGF0YVNvdXJjZSBleGVjCgkJCQl9ewoJCQkJ
CS9maWxldHlwZSBlcSB7CgkJCQkJCURhdGFTb3VyY2UgaW1idWYgcmVhZHN0cmluZyBwb3AKCQkJ
CQl9ewoJCQkJCQlEYXRhU291cmNlCgkJCQkJfSBpZmVsc2UKCQkJCX0gaWZlbHNlCgkJCQlBR01V
VElMX2ltYWdlZmlsZSBleGNoIHdyaXRlc3RyaW5nCgkJCX0gaWZlbHNlCgkJfWZvcgoJCWVuZAoJ
CXRydWUKCX1pZmVsc2UKCWVuZAp9IGJkZgovY2xvc2VfaW1hZ2VfZmlsZQp7CglBR01VVElMX2lt
YWdlZmlsZSBjbG9zZWZpbGUgKEFHTVVUSUxfaW1hZ2VmaWxlKSBkZWxldGVmaWxlCn1kZWYKc3Rh
dHVzZGljdCAvcHJvZHVjdCBrbm93biB1c2VyZGljdCAvQUdNUF9jdXJyZW50X3Nob3cga25vd24g
bm90IGFuZHsKCS9wc3RyIHN0YXR1c2RpY3QgL3Byb2R1Y3QgZ2V0IGRlZgoJcHN0ciAoSFAgTGFz
ZXJKZXQgMjIwMCkgZXEgCQoJcHN0ciAoSFAgTGFzZXJKZXQgNDAwMCBTZXJpZXMpIGVxIG9yCglw
c3RyIChIUCBMYXNlckpldCA0MDUwIFNlcmllcyApIGVxIG9yCglwc3RyIChIUCBMYXNlckpldCA4
MDAwIFNlcmllcykgZXEgb3IKCXBzdHIgKEhQIExhc2VySmV0IDgxMDAgU2VyaWVzKSBlcSBvcgoJ
cHN0ciAoSFAgTGFzZXJKZXQgODE1MCBTZXJpZXMpIGVxIG9yCglwc3RyIChIUCBMYXNlckpldCA1
MDAwIFNlcmllcykgZXEgb3IKCXBzdHIgKEhQIExhc2VySmV0IDUxMDAgU2VyaWVzKSBlcSBvcgoJ
cHN0ciAoSFAgQ29sb3IgTGFzZXJKZXQgNDUwMCkgZXEgb3IKCXBzdHIgKEhQIENvbG9yIExhc2Vy
SmV0IDQ2MDApIGVxIG9yCglwc3RyIChIUCBMYXNlckpldCA1U2kpIGVxIG9yCglwc3RyIChIUCBM
YXNlckpldCAxMjAwIFNlcmllcykgZXEgb3IKCXBzdHIgKEhQIExhc2VySmV0IDEzMDAgU2VyaWVz
KSBlcSBvcgoJcHN0ciAoSFAgTGFzZXJKZXQgNDEwMCBTZXJpZXMpIGVxIG9yIAoJewogCQl1c2Vy
ZGljdCAvQUdNUF9jdXJyZW50X3Nob3cgL3Nob3cgbG9hZCBwdXQKCQl1c2VyZGljdCAvc2hvdyB7
CgkJICBjdXJyZW50Y29sb3JzcGFjZSAwIGdldAoJCSAgL1BhdHRlcm4gZXEKCQkgIHtmYWxzZSBj
aGFycGF0aCBmfQoJCSAge0FHTVBfY3VycmVudF9zaG93fSBpZmVsc2UKCQl9IHB1dAoJfWlmCglj
dXJyZW50ZGljdCAvcHN0ciB1bmRlZgp9IGlmCi9jb25zdW1laW1hZ2VkYXRhCnsKCWJlZ2luCglj
dXJyZW50ZGljdCAvTXVsdGlwbGVEYXRhU291cmNlcyBrbm93biBub3QKCQl7L011bHRpcGxlRGF0
YVNvdXJjZXMgZmFsc2UgZGVmfSBpZgoJTXVsdGlwbGVEYXRhU291cmNlcwoJCXsKCQkxIGRpY3Qg
YmVnaW4KCQkvZmx1c2hidWZmZXIgV2lkdGggY3ZpIHN0cmluZyBkZWYKCQkxIDEgSGVpZ2h0IGN2
aQoJCQl7CgkJCXBvcAoJCQkwIDEgRGF0YVNvdXJjZSBsZW5ndGggMSBzdWIKCQkJCXsKCQkJCURh
dGFTb3VyY2UgZXhjaCBnZXQKCQkJCWR1cCB0eXBlIGR1cCAKCQkJCS9maWxldHlwZSBlcQoJCQkJ
CXsKCQkJCQlleGNoIGZsdXNoYnVmZmVyIHJlYWRzdHJpbmcgcG9wIHBvcAoJCQkJCX1pZgoJCQkJ
L2FycmF5dHlwZSBlcQoJCQkJCXsKCQkJCQlleGVjIHBvcAoJCQkJCX1pZgoJCQkJfWZvcgoJCQl9
Zm9yCgkJZW5kCgkJfQoJCXsKCQkvRGF0YVNvdXJjZSBsb2FkIHR5cGUgZHVwIAoJCS9maWxldHlw
ZSBlcQoJCQl7CgkJCTEgZGljdCBiZWdpbgoJCQkvZmx1c2hidWZmZXIgV2lkdGggRGVjb2RlIGxl
bmd0aCAyIGRpdiBtdWwgY3ZpIHN0cmluZyBkZWYKCQkJMSAxIEhlaWdodCB7IHBvcCBEYXRhU291
cmNlIGZsdXNoYnVmZmVyIHJlYWRzdHJpbmcgcG9wIHBvcH0gZm9yCgkJCWVuZAoJCQl9aWYKCQkv
YXJyYXl0eXBlIGVxCgkJCXsKCQkJMSAxIEhlaWdodCB7IHBvcCBEYXRhU291cmNlIHBvcCB9IGZv
cgoJCQl9aWYKCQl9aWZlbHNlCgllbmQKfWJkZgovYWRkcHJvY3MKewoJICAyey9leGVjIGxvYWR9
cmVwZWF0CgkgIDMgMSByb2xsCgkgIFsgNSAxIHJvbGwgXSBiaW5kIGN2eAp9ZGVmCi9tb2RpZnlf
aGFsZnRvbmVfeGZlcgp7CgljdXJyZW50aGFsZnRvbmUgZHVwIGxlbmd0aCBkaWN0IGNvcHkgYmVn
aW4KCSBjdXJyZW50ZGljdCAyIGluZGV4IGtub3duewoJIAkxIGluZGV4IGxvYWQgZHVwIGxlbmd0
aCBkaWN0IGNvcHkgYmVnaW4KCQljdXJyZW50ZGljdC9UcmFuc2ZlckZ1bmN0aW9uIGtub3duewoJ
CQkvVHJhbnNmZXJGdW5jdGlvbiBsb2FkCgkJfXsKCQkJY3VycmVudHRyYW5zZmVyCgkJfWlmZWxz
ZQoJCSBhZGRwcm9jcyAvVHJhbnNmZXJGdW5jdGlvbiB4ZGYgCgkJIGN1cnJlbnRkaWN0IGVuZCBk
ZWYKCQljdXJyZW50ZGljdCBlbmQgc2V0aGFsZnRvbmUKCX17IAoJCWN1cnJlbnRkaWN0L1RyYW5z
ZmVyRnVuY3Rpb24ga25vd257CgkJCS9UcmFuc2ZlckZ1bmN0aW9uIGxvYWQgCgkJfXsKCQkJY3Vy
cmVudHRyYW5zZmVyCgkJfWlmZWxzZQoJCWFkZHByb2NzIC9UcmFuc2ZlckZ1bmN0aW9uIHhkZgoJ
CWN1cnJlbnRkaWN0IGVuZCBzZXRoYWxmdG9uZQkJCgkJcG9wCgl9aWZlbHNlCn1kZWYKL2Nsb25l
YXJyYXkKewoJZHVwIHhjaGVjayBleGNoCglkdXAgbGVuZ3RoIGFycmF5IGV4Y2gKCUFkb2JlX0FH
TV9Db3JlL0FHTUNPUkVfdG1wIC0xIGRkZiAKCXsKCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfdG1w
IEFHTUNPUkVfdG1wIDEgYWRkIGRkZiAKCWR1cCB0eXBlIC9kaWN0dHlwZSBlcQoJCXsKCQkJQUdN
Q09SRV90bXAKCQkJZXhjaAoJCQljbG9uZWRpY3QKCQkJQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV90
bXAgNCAtMSByb2xsIGRkZiAKCQl9IGlmCglkdXAgdHlwZSAvYXJyYXl0eXBlIGVxCgkJewoJCQlB
R01DT1JFX3RtcCBleGNoCgkJCWNsb25lYXJyYXkKCQkJQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV90
bXAgNCAtMSByb2xsIGRkZiAKCQl9IGlmCglleGNoIGR1cAoJQUdNQ09SRV90bXAgNCAtMSByb2xs
IHB1dAoJfWZvcmFsbAoJZXhjaCB7Y3Z4fSBpZgp9YmRmCi9jbG9uZWRpY3QKewoJZHVwIGxlbmd0
aCBkaWN0CgliZWdpbgoJCXsKCQlkdXAgdHlwZSAvZGljdHR5cGUgZXEKCQkJewoJCQkJY2xvbmVk
aWN0CgkJCX0gaWYKCQlkdXAgdHlwZSAvYXJyYXl0eXBlIGVxCgkJCXsKCQkJCWNsb25lYXJyYXkK
CQkJfSBpZgoJCWRlZgoJCX1mb3JhbGwKCWN1cnJlbnRkaWN0CgllbmQKfWJkZgovRGV2aWNlTl9Q
UzIKewoJL2N1cnJlbnRjb2xvcnNwYWNlIEFHTUNPUkVfZ2dldCAwIGdldCAvRGV2aWNlTiBlcSBs
ZXZlbDMgbm90IGFuZAp9IGJkZgovSW5kZXhlZF9EZXZpY2VOCnsKCS9pbmRleGVkX2NvbG9yc3Bh
Y2VfZGljdCBBR01DT1JFX2dnZXQgZHVwIG51bGwgbmUgewoJCS9DU0Qga25vd24KCX17CgkJcG9w
IGZhbHNlCgl9IGlmZWxzZQp9IGJkZgovRGV2aWNlTl9Ob25lTmFtZQp7CQoJL05hbWVzIHdoZXJl
IHsKCQlwb3AKCQlmYWxzZSBOYW1lcwoJCXsKCQkJKE5vbmUpIGVxIG9yCgkJfSBmb3JhbGwKCX17
CgkJZmFsc2UKCX1pZmVsc2UKfSBiZGYKL0RldmljZU5fUFMyX2luUmlwX3NlcHMKewoJL0FHTUNP
UkVfaW5fcmlwX3NlcCB3aGVyZQoJewoJCXBvcCBkdXAgdHlwZSBkdXAgL2FycmF5dHlwZSBlcSBl
eGNoIC9wYWNrZWRhcnJheXR5cGUgZXEgb3IKCQl7CgkJCWR1cCAwIGdldCAvRGV2aWNlTiBlcSBs
ZXZlbDMgbm90IGFuZCBBR01DT1JFX2luX3JpcF9zZXAgYW5kCgkJCXsKCQkJCS9jdXJyZW50Y29s
b3JzcGFjZSBleGNoIEFHTUNPUkVfZ3B1dAoJCQkJZmFsc2UKCQkJfQoJCQl7CgkJCQl0cnVlCgkJ
CX1pZmVsc2UKCQl9CgkJewoJCQl0cnVlCgkJfSBpZmVsc2UKCX0KCXsKCQl0cnVlCgl9IGlmZWxz
ZQp9IGJkZgovYmFzZV9jb2xvcnNwYWNlX3R5cGUKewoJZHVwIHR5cGUgL2FycmF5dHlwZSBlcSB7
MCBnZXR9IGlmCn0gYmRmCi9kb2Nfc2V0dXB7CglBZG9iZV9BR01fVXRpbHMgYmVnaW4KfWJkZgov
ZG9jX3RyYWlsZXJ7CgljdXJyZW50ZGljdCBBZG9iZV9BR01fVXRpbHMgZXF7CgkJZW5kCgl9aWYK
fWJkZgpzeXN0ZW1kaWN0IC9zZXRwYWNraW5nIGtub3duCnsKCXNldHBhY2tpbmcKfSBpZgolJUVu
ZFJlc291cmNlCiUlQmVnaW5SZXNvdXJjZTogcHJvY3NldCBBZG9iZV9BR01fQ29yZSAyLjAgMAol
JVZlcnNpb246IDIuMCAwCiUlQ29weXJpZ2h0OiBDb3B5cmlnaHQgKEMpIDE5OTctMjAwMyBBZG9i
ZSBTeXN0ZW1zLCBJbmMuICBBbGwgUmlnaHRzIFJlc2VydmVkLgpzeXN0ZW1kaWN0IC9zZXRwYWNr
aW5nIGtub3duCnsKCWN1cnJlbnRwYWNraW5nCgl0cnVlIHNldHBhY2tpbmcKfSBpZgp1c2VyZGlj
dCAvQWRvYmVfQUdNX0NvcmUgMjE2IGRpY3QgZHVwIGJlZ2luIHB1dAovbmR7CgludWxsIGRlZgp9
YmluZCBkZWYKL0Fkb2JlX0FHTV9Db3JlX0lkIC9BZG9iZV9BR01fQ29yZV8yLjBfMCBkZWYKL0FH
TUNPUkVfc3RyMjU2IDI1NiBzdHJpbmcgZGVmCi9BR01DT1JFX3NhdmUgbmQKL0FHTUNPUkVfZ3Jh
cGhpY3NhdmUgbmQKL0FHTUNPUkVfYyAwIGRlZgovQUdNQ09SRV9tIDAgZGVmCi9BR01DT1JFX3kg
MCBkZWYKL0FHTUNPUkVfayAwIGRlZgovQUdNQ09SRV9jbXlrYnVmIDQgYXJyYXkgZGVmCi9BR01D
T1JFX3NjcmVlbiBbY3VycmVudHNjcmVlbl0gY3Z4IGRlZgovQUdNQ09SRV90bXAgMCBkZWYKL0FH
TUNPUkVfJnNldGdyYXkgbmQKL0FHTUNPUkVfJnNldGNvbG9yIG5kCi9BR01DT1JFXyZzZXRjb2xv
cnNwYWNlIG5kCi9BR01DT1JFXyZzZXRjbXlrY29sb3IgbmQKL0FHTUNPUkVfY3lhbl9wbGF0ZSBu
ZAovQUdNQ09SRV9tYWdlbnRhX3BsYXRlIG5kCi9BR01DT1JFX3llbGxvd19wbGF0ZSBuZAovQUdN
Q09SRV9ibGFja19wbGF0ZSBuZAovQUdNQ09SRV9wbGF0ZV9uZHggbmQKL0FHTUNPUkVfZ2V0X2lu
a19kYXRhIG5kCi9BR01DT1JFX2lzX2NteWtfc2VwIG5kCi9BR01DT1JFX2hvc3Rfc2VwIG5kCi9B
R01DT1JFX2F2b2lkX0wyX3NlcF9zcGFjZSBuZAovQUdNQ09SRV9kaXN0aWxsaW5nIG5kCi9BR01D
T1JFX2NvbXBvc2l0ZV9qb2IgbmQKL0FHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbmQKL0FHTUNPUkVf
cHNfbGV2ZWwgLTEgZGVmCi9BR01DT1JFX3BzX3ZlcnNpb24gLTEgZGVmCi9BR01DT1JFX2Vudmly
b25fb2sgbmQKL0FHTUNPUkVfQ1NBX2NhY2hlIDAgZGljdCBkZWYKL0FHTUNPUkVfQ1NEX2NhY2hl
IDAgZGljdCBkZWYKL0FHTUNPUkVfcGF0dGVybl9jYWNoZSAwIGRpY3QgZGVmCi9BR01DT1JFX2N1
cnJlbnRvdmVycHJpbnQgZmFsc2UgZGVmCi9BR01DT1JFX2RlbHRhWCBuZAovQUdNQ09SRV9kZWx0
YVkgbmQKL0FHTUNPUkVfbmFtZSBuZAovQUdNQ09SRV9zZXBfc3BlY2lhbCBuZAovQUdNQ09SRV9l
cnJfc3RyaW5ncyA0IGRpY3QgZGVmCi9BR01DT1JFX2N1cl9lcnIgbmQKL0FHTUNPUkVfb3ZwIG5k
Ci9BR01DT1JFX2N1cnJlbnRfc3BvdF9hbGlhcyBmYWxzZSBkZWYKL0FHTUNPUkVfaW52ZXJ0aW5n
IGZhbHNlIGRlZgovQUdNQ09SRV9mZWF0dXJlX2RpY3RDb3VudCBuZAovQUdNQ09SRV9mZWF0dXJl
X29wQ291bnQgbmQKL0FHTUNPUkVfZmVhdHVyZV9jdG0gbmQKL0FHTUNPUkVfQ29udmVydFRvUHJv
Y2VzcyBmYWxzZSBkZWYKL0FHTUNPUkVfRGVmYXVsdF9DVE0gbWF0cml4IGRlZgovQUdNQ09SRV9E
ZWZhdWx0X1BhZ2VTaXplIG5kCi9BR01DT1JFX2N1cnJlbnRiZyBuZAovQUdNQ09SRV9jdXJyZW50
dWNyIG5kCi9BR01DT1JFX2dyYWRpZW50Y2FjaGUgMzIgZGljdCBkZWYKL0FHTUNPUkVfaW5fcGF0
dGVybiBmYWxzZSBkZWYKL2tub2Nrb3V0X3VuaXRzcSBuZAovQUdNQ09SRV9DUkRfY2FjaGUgd2hl
cmV7Cglwb3AKfXsKCS9BR01DT1JFX0NSRF9jYWNoZSAwIGRpY3QgZGVmCn1pZmVsc2UKL0FHTUNP
UkVfa2V5X2tub3duCnsKCXdoZXJlewoJCS9BZG9iZV9BR01fQ29yZV9JZCBrbm93bgoJfXsKCQlm
YWxzZQoJfWlmZWxzZQp9bmRmCi9mbHVzaGlucHV0CnsKCXNhdmUKCTIgZGljdCBiZWdpbgoJL0Nv
bXBhcmVCdWZmZXIgMyAtMSByb2xsIGRlZgoJL3JlYWRidWZmZXIgMjU2IHN0cmluZyBkZWYKCW1h
cmsKCXsKCWN1cnJlbnRmaWxlIHJlYWRidWZmZXIge3JlYWRsaW5lfSBzdG9wcGVkCgkJe2NsZWFy
dG9tYXJrIG1hcmt9CgkJewoJCW5vdAoJCQl7cG9wIGV4aXR9CgkJaWYKCQlDb21wYXJlQnVmZmVy
IGVxCgkJCXtleGl0fQoJCWlmCgkJfWlmZWxzZQoJfWxvb3AKCWNsZWFydG9tYXJrCgllbmQKCXJl
c3RvcmUKfWJkZgovZ2V0c3BvdGZ1bmN0aW9uCnsKCUFHTUNPUkVfc2NyZWVuIGV4Y2ggcG9wIGV4
Y2ggcG9wCglkdXAgdHlwZSAvZGljdHR5cGUgZXF7CgkJZHVwIC9IYWxmdG9uZVR5cGUgZ2V0IDEg
ZXF7CgkJCS9TcG90RnVuY3Rpb24gZ2V0CgkJfXsKCQkJZHVwIC9IYWxmdG9uZVR5cGUgZ2V0IDIg
ZXF7CgkJCQkvR3JheVNwb3RGdW5jdGlvbiBnZXQKCQkJfXsgCgkJCQlwb3AKCQkJCXsKCQkJCQlh
YnMgZXhjaCBhYnMgMiBjb3B5IGFkZCAxIGd0ewoJCQkJCQkxIHN1YiBkdXAgbXVsIGV4Y2ggMSBz
dWIgZHVwIG11bCBhZGQgMSBzdWIKCQkJCQl9ewoJCQkJCQlkdXAgbXVsIGV4Y2ggZHVwIG11bCBh
ZGQgMSBleGNoIHN1YgoJCQkJCX1pZmVsc2UKCQkJCX1iaW5kCgkJCX1pZmVsc2UKCQl9aWZlbHNl
Cgl9aWYKfSBkZWYKL2NscF9ucHRoCnsKCWNsaXAgbmV3cGF0aAp9IGRlZgovZW9jbHBfbnB0aAp7
Cgllb2NsaXAgbmV3cGF0aAp9IGRlZgovbnB0aF9jbHAKewoJbmV3cGF0aCBjbGlwCn0gZGVmCi9h
ZGRfZ3JhZAp7CglBR01DT1JFX2dyYWRpZW50Y2FjaGUgMyAxIHJvbGwgcHV0Cn1iZGYKL2V4ZWNf
Z3JhZAp7CglBR01DT1JFX2dyYWRpZW50Y2FjaGUgZXhjaCBnZXQgZXhlYwp9YmRmCi9ncmFwaGlj
X3NldHVwCnsKCS9BR01DT1JFX2dyYXBoaWNzYXZlIHNhdmUgZGVmCgljb25jYXQKCTAgc2V0Z3Jh
eQoJMCBzZXRsaW5lY2FwCgkwIHNldGxpbmVqb2luCgkxIHNldGxpbmV3aWR0aAoJW10gMCBzZXRk
YXNoCgkxMCBzZXRtaXRlcmxpbWl0CgluZXdwYXRoCglmYWxzZSBzZXRvdmVycHJpbnQKCWZhbHNl
IHNldHN0cm9rZWFkanVzdAoJQWRvYmVfQUdNX0NvcmUvc3BvdF9hbGlhcyBnZXQgZXhlYwoJL0Fk
b2JlX0FHTV9JbWFnZSB3aGVyZSB7CgkJcG9wCgkJQWRvYmVfQUdNX0ltYWdlL3Nwb3RfYWxpYXMg
MiBjb3B5IGtub3duewoJCQlnZXQgZXhlYwoJCX17CgkJCXBvcCBwb3AKCQl9aWZlbHNlCgl9IGlm
CgkxMDAgZGljdCBiZWdpbgoJL2RpY3RzdGFja2NvdW50IGNvdW50ZGljdHN0YWNrIGRlZgoJL3No
b3dwYWdlIHt9IGRlZgoJbWFyawp9IGRlZgovZ3JhcGhpY19jbGVhbnVwCnsKCWNsZWFydG9tYXJr
CglkaWN0c3RhY2tjb3VudCAxIGNvdW50ZGljdHN0YWNrIDEgc3ViIHtlbmR9Zm9yCgllbmQKCUFH
TUNPUkVfZ3JhcGhpY3NhdmUgcmVzdG9yZQp9IGRlZgovY29tcG9zZV9lcnJvcl9tc2cKewoJZ3Jl
c3RvcmVhbGwgaW5pdGdyYXBoaWNzCQoJL0hlbHZldGljYSBmaW5kZm9udCAxMCBzY2FsZWZvbnQg
c2V0Zm9udAoJL0FHTUNPUkVfZGVsdGFZIDEwMCBkZWYKCS9BR01DT1JFX2RlbHRhWCAzMTAgZGVm
CgljbGlwcGF0aCBwYXRoYmJveCBuZXdwYXRoIHBvcCBwb3AgMzYgYWRkIGV4Y2ggMzYgYWRkIGV4
Y2ggbW92ZXRvCgkwIEFHTUNPUkVfZGVsdGFZIHJsaW5ldG8gQUdNQ09SRV9kZWx0YVggMCBybGlu
ZXRvCgkwIEFHTUNPUkVfZGVsdGFZIG5lZyBybGluZXRvIEFHTUNPUkVfZGVsdGFYIG5lZyAwIHJs
aW5ldG8gY2xvc2VwYXRoCgkwIEFHTUNPUkVfJnNldGdyYXkKCWdzYXZlIDEgQUdNQ09SRV8mc2V0
Z3JheSBmaWxsIGdyZXN0b3JlIAoJMSBzZXRsaW5ld2lkdGggZ3NhdmUgc3Ryb2tlIGdyZXN0b3Jl
CgljdXJyZW50cG9pbnQgQUdNQ09SRV9kZWx0YVkgMTUgc3ViIGFkZCBleGNoIDggYWRkIGV4Y2gg
bW92ZXRvCgkvQUdNQ09SRV9kZWx0YVkgMTIgZGVmCgkvQUdNQ09SRV90bXAgMCBkZWYKCUFHTUNP
UkVfZXJyX3N0cmluZ3MgZXhjaCBnZXQKCQl7CgkJZHVwIDMyIGVxCgkJCXsKCQkJcG9wCgkJCUFH
TUNPUkVfc3RyMjU2IDAgQUdNQ09SRV90bXAgZ2V0aW50ZXJ2YWwKCQkJc3RyaW5nd2lkdGggcG9w
IGN1cnJlbnRwb2ludCBwb3AgYWRkIEFHTUNPUkVfZGVsdGFYIDI4IGFkZCBndAoJCQkJewoJCQkJ
Y3VycmVudHBvaW50IEFHTUNPUkVfZGVsdGFZIHN1YiBleGNoIHBvcAoJCQkJY2xpcHBhdGggcGF0
aGJib3ggcG9wIHBvcCBwb3AgNDQgYWRkIGV4Y2ggbW92ZXRvCgkJCQl9IGlmCgkJCUFHTUNPUkVf
c3RyMjU2IDAgQUdNQ09SRV90bXAgZ2V0aW50ZXJ2YWwgc2hvdyAoICkgc2hvdwoJCQkwIDEgQUdN
Q09SRV9zdHIyNTYgbGVuZ3RoIDEgc3ViCgkJCQl7CgkJCQlBR01DT1JFX3N0cjI1NiBleGNoIDAg
cHV0CgkJCQl9Zm9yCgkJCS9BR01DT1JFX3RtcCAwIGRlZgoJCQl9CgkJCXsKCQkJCUFHTUNPUkVf
c3RyMjU2IGV4Y2ggQUdNQ09SRV90bXAgeHB0CgkJCQkvQUdNQ09SRV90bXAgQUdNQ09SRV90bXAg
MSBhZGQgZGVmCgkJCX0gaWZlbHNlCgkJfSBmb3JhbGwKfSBiZGYKL2RvY19zZXR1cHsKCUFkb2Jl
X0FHTV9Db3JlIGJlZ2luCgkvQUdNQ09SRV9wc192ZXJzaW9uIHhkZgoJL0FHTUNPUkVfcHNfbGV2
ZWwgeGRmCgllcnJvcmRpY3QgL0FHTV9oYW5kbGVlcnJvciBrbm93biBub3R7CgkJZXJyb3JkaWN0
IC9BR01faGFuZGxlZXJyb3IgZXJyb3JkaWN0IC9oYW5kbGVlcnJvciBnZXQgcHV0CgkJZXJyb3Jk
aWN0IC9oYW5kbGVlcnJvciB7CgkJCUFkb2JlX0FHTV9Db3JlIGJlZ2luCgkJCSRlcnJvciAvbmV3
ZXJyb3IgZ2V0IEFHTUNPUkVfY3VyX2VyciBudWxsIG5lIGFuZHsKCQkJCSRlcnJvciAvbmV3ZXJy
b3IgZmFsc2UgcHV0CgkJCQlBR01DT1JFX2N1cl9lcnIgY29tcG9zZV9lcnJvcl9tc2cKCQkJfWlm
CgkJCSRlcnJvciAvbmV3ZXJyb3IgdHJ1ZSBwdXQKCQkJZW5kCgkJCWVycm9yZGljdCAvQUdNX2hh
bmRsZWVycm9yIGdldCBleGVjCgkJCX0gYmluZCBwdXQKCQl9aWYKCS9BR01DT1JFX2Vudmlyb25f
b2sgCgkJcHNfbGV2ZWwgQUdNQ09SRV9wc19sZXZlbCBnZQoJCXBzX3ZlcnNpb24gQUdNQ09SRV9w
c192ZXJzaW9uIGdlIGFuZCAKCQlBR01DT1JFX3BzX2xldmVsIC0xIGVxIG9yCglkZWYKCUFHTUNP
UkVfZW52aXJvbl9vayBub3QKCQl7L0FHTUNPUkVfY3VyX2VyciAvQUdNQ09SRV9iYWRfZW52aXJv
biBkZWZ9IGlmCgkvQUdNQ09SRV8mc2V0Z3JheSBzeXN0ZW1kaWN0L3NldGdyYXkgZ2V0IGRlZgoJ
bGV2ZWwyewoJCS9BR01DT1JFXyZzZXRjb2xvciBzeXN0ZW1kaWN0L3NldGNvbG9yIGdldCBkZWYK
CQkvQUdNQ09SRV8mc2V0Y29sb3JzcGFjZSBzeXN0ZW1kaWN0L3NldGNvbG9yc3BhY2UgZ2V0IGRl
ZgoJfWlmCgkvQUdNQ09SRV9jdXJyZW50YmcgY3VycmVudGJsYWNrZ2VuZXJhdGlvbiBkZWYKCS9B
R01DT1JFX2N1cnJlbnR1Y3IgY3VycmVudHVuZGVyY29sb3JyZW1vdmFsIGRlZgoJL0FHTUNPUkVf
ZGlzdGlsbGluZwoJCS9wcm9kdWN0IHdoZXJlewoJCQlwb3Agc3lzdGVtZGljdC9zZXRkaXN0aWxs
ZXJwYXJhbXMga25vd24gcHJvZHVjdCAoQWRvYmUgUG9zdFNjcmlwdCBQYXJzZXIpIG5lIGFuZAoJ
CX17CgkJCWZhbHNlCgkJfWlmZWxzZQoJZGVmCglsZXZlbDIgbm90ewoJCS94cHV0ewoJCQlkdXAg
bG9hZCBkdXAgbGVuZ3RoIGV4Y2ggbWF4bGVuZ3RoIGVxewoJCQkJZHVwIGR1cCBsb2FkIGR1cAoJ
CQkJbGVuZ3RoIGR1cCAwIGVxIHtwb3AgMX0gaWYgMiBtdWwgZGljdCBjb3B5IGRlZgoJCQl9aWYK
CQkJbG9hZCBiZWdpbgoJCQkJZGVmCiAJCQllbmQKCQl9ZGVmCgl9ewoJCS94cHV0ewoJCQlsb2Fk
IDMgMSByb2xsIHB1dAoJCX1kZWYKCX1pZmVsc2UKCS9BR01DT1JFX0dTVEFURSBBR01DT1JFX2tl
eV9rbm93biBub3R7CgkJL0FHTUNPUkVfR1NUQVRFIDIxIGRpY3QgZGVmCgkJL0FHTUNPUkVfdG1w
bWF0cml4IG1hdHJpeCBkZWYKCQkvQUdNQ09SRV9nc3RhY2sgMzIgYXJyYXkgZGVmCgkJL0FHTUNP
UkVfZ3N0YWNrcHRyIDAgZGVmCgkJL0FHTUNPUkVfZ3N0YWNrc2F2ZXB0ciAwIGRlZgoJCS9BR01D
T1JFX2dzdGFja2ZyYW1la2V5cyAxMCBkZWYKCQkvQUdNQ09SRV8mZ3NhdmUgL2dzYXZlIGxkZgoJ
CS9BR01DT1JFXyZncmVzdG9yZSAvZ3Jlc3RvcmUgbGRmCgkJL0FHTUNPUkVfJmdyZXN0b3JlYWxs
IC9ncmVzdG9yZWFsbCBsZGYKCQkvQUdNQ09SRV8mc2F2ZSAvc2F2ZSBsZGYKCQkvQUdNQ09SRV9n
ZGljdGNvcHkgewoJCQliZWdpbgoJCQl7IGRlZiB9IGZvcmFsbAoJCQllbmQKCQl9ZGVmCgkJL0FH
TUNPUkVfZ3B1dCB7CgkJCUFHTUNPUkVfZ3N0YWNrIEFHTUNPUkVfZ3N0YWNrcHRyIGdldAoJCQkz
IDEgcm9sbAoJCQlwdXQKCQl9ZGVmCgkJL0FHTUNPUkVfZ2dldCB7CgkJCUFHTUNPUkVfZ3N0YWNr
IEFHTUNPUkVfZ3N0YWNrcHRyIGdldAoJCQlleGNoCgkJCWdldAoJCX1kZWYKCQkvZ3NhdmUgewoJ
CQlBR01DT1JFXyZnc2F2ZQoJCQlBR01DT1JFX2dzdGFjayBBR01DT1JFX2dzdGFja3B0ciBnZXQK
CQkJQUdNQ09SRV9nc3RhY2twdHIgMSBhZGQKCQkJZHVwIDMyIGdlIHtsaW1pdGNoZWNrfSBpZgoJ
CQlBZG9iZV9BR01fQ29yZSBleGNoCgkJCS9BR01DT1JFX2dzdGFja3B0ciB4cHQKCQkJQUdNQ09S
RV9nc3RhY2sgQUdNQ09SRV9nc3RhY2twdHIgZ2V0CgkJCUFHTUNPUkVfZ2RpY3Rjb3B5CgkJfWRl
ZgoJCS9ncmVzdG9yZSB7CgkJCUFHTUNPUkVfJmdyZXN0b3JlCgkJCUFHTUNPUkVfZ3N0YWNrcHRy
IDEgc3ViCgkJCWR1cCBBR01DT1JFX2dzdGFja3NhdmVwdHIgbHQgezEgYWRkfSBpZgoJCQlBZG9i
ZV9BR01fQ29yZSBleGNoCgkJCS9BR01DT1JFX2dzdGFja3B0ciB4cHQKCQl9ZGVmCgkJL2dyZXN0
b3JlYWxsIHsKCQkJQUdNQ09SRV8mZ3Jlc3RvcmVhbGwKCQkJQWRvYmVfQUdNX0NvcmUKCQkJL0FH
TUNPUkVfZ3N0YWNrcHRyIEFHTUNPUkVfZ3N0YWNrc2F2ZXB0ciBwdXQgCgkJfWRlZgoJCS9zYXZl
IHsKCQkJQUdNQ09SRV8mc2F2ZQoJCQlBR01DT1JFX2dzdGFjayBBR01DT1JFX2dzdGFja3B0ciBn
ZXQKCQkJQUdNQ09SRV9nc3RhY2twdHIgMSBhZGQKCQkJZHVwIDMyIGdlIHtsaW1pdGNoZWNrfSBp
ZgoJCQlBZG9iZV9BR01fQ29yZSBiZWdpbgoJCQkJL0FHTUNPUkVfZ3N0YWNrcHRyIGV4Y2ggZGVm
CgkJCQkvQUdNQ09SRV9nc3RhY2tzYXZlcHRyIEFHTUNPUkVfZ3N0YWNrcHRyIGRlZgoJCQllbmQK
CQkJQUdNQ09SRV9nc3RhY2sgQUdNQ09SRV9nc3RhY2twdHIgZ2V0CgkJCUFHTUNPUkVfZ2RpY3Rj
b3B5CgkJfWRlZgoJCTAgMSBBR01DT1JFX2dzdGFjayBsZW5ndGggMSBzdWIgewoJCQkJQUdNQ09S
RV9nc3RhY2sgZXhjaCBBR01DT1JFX2dzdGFja2ZyYW1la2V5cyBkaWN0IHB1dAoJCX0gZm9yCgl9
aWYKCWxldmVsMyAvQUdNQ09SRV8mc3lzc2hmaWxsIEFHTUNPUkVfa2V5X2tub3duIG5vdCBhbmQK
CXsKCQkvQUdNQ09SRV8mc3lzc2hmaWxsIHN5c3RlbWRpY3Qvc2hmaWxsIGdldCBkZWYKCQkvQUdN
Q09SRV8mdXNyc2hmaWxsIC9zaGZpbGwgbG9hZCBkZWYKCQkvQUdNQ09SRV8mc3lzbWFrZXBhdHRl
cm4gc3lzdGVtZGljdC9tYWtlcGF0dGVybiBnZXQgZGVmCgkJL0FHTUNPUkVfJnVzcm1ha2VwYXR0
ZXJuIC9tYWtlcGF0dGVybiBsb2FkIGRlZgoJfWlmCgkvY3VycmVudGNteWtjb2xvciBbMCAwIDAg
MF0gQUdNQ09SRV9ncHV0CgkvY3VycmVudHN0cm9rZWFkanVzdCBmYWxzZSBBR01DT1JFX2dwdXQK
CS9jdXJyZW50Y29sb3JzcGFjZSBbL0RldmljZUdyYXldIEFHTUNPUkVfZ3B1dAoJL3NlcF90aW50
IDAgQUdNQ09SRV9ncHV0CgkvZGV2aWNlbl90aW50cyBbMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAg
MCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwXSBBR01DT1JFX2dwdXQKCS9z
ZXBfY29sb3JzcGFjZV9kaWN0IG51bGwgQUdNQ09SRV9ncHV0CgkvZGV2aWNlbl9jb2xvcnNwYWNl
X2RpY3QgbnVsbCBBR01DT1JFX2dwdXQKCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBudWxsIEFH
TUNPUkVfZ3B1dAoJL2N1cnJlbnRjb2xvcl9pbnRlbnQgKCkgQUdNQ09SRV9ncHV0CgkvY3VzdG9t
Y29sb3JfdGludCAxIEFHTUNPUkVfZ3B1dAoJPDwKCS9NYXhQYXR0ZXJuSXRlbSBjdXJyZW50c3lz
dGVtcGFyYW1zIC9NYXhQYXR0ZXJuQ2FjaGUgZ2V0Cgk+PgoJc2V0dXNlcnBhcmFtcwoJZW5kCn1k
ZWYKL3BhZ2Vfc2V0dXAKewoJL3NldGNteWtjb2xvciB3aGVyZXsKCQlwb3AKCQlBZG9iZV9BR01f
Q29yZS9BR01DT1JFXyZzZXRjbXlrY29sb3IgL3NldGNteWtjb2xvciBsb2FkIHB1dAoJfWlmCglB
ZG9iZV9BR01fQ29yZSBiZWdpbgoJL3NldGNteWtjb2xvcgoJewoJCTQgY29weSBBR01DT1JFX2Nt
eWtidWYgYXN0b3JlIC9jdXJyZW50Y215a2NvbG9yIGV4Y2ggQUdNQ09SRV9ncHV0CgkJMSBzdWIg
NCAxIHJvbGwKCQkzIHsKCQkJMyBpbmRleCBhZGQgbmVnIGR1cCAwIGx0IHsKCQkJCXBvcCAwCgkJ
CX0gaWYKCQkJMyAxIHJvbGwKCQl9IHJlcGVhdAoJCXNldHJnYmNvbG9yIHBvcAoJfW5kZgoJL2N1
cnJlbnRjbXlrY29sb3IKCXsKCQkvY3VycmVudGNteWtjb2xvciBBR01DT1JFX2dnZXQgYWxvYWQg
cG9wCgl9bmRmCgkvc2V0b3ZlcnByaW50Cgl7CgkJcG9wCgl9bmRmCgkvY3VycmVudG92ZXJwcmlu
dAoJewoJCWZhbHNlCgl9bmRmCgkvQUdNQ09SRV9kZXZpY2VEUEkgNzIgMCBtYXRyaXggZGVmYXVs
dG1hdHJpeCBkdHJhbnNmb3JtIGR1cCBtdWwgZXhjaCBkdXAgbXVsIGFkZCBzcXJ0IGRlZgoJL0FH
TUNPUkVfY3lhbl9wbGF0ZSAxIDAgMCAwIHRlc3RfY215a19jb2xvcl9wbGF0ZSBkZWYKCS9BR01D
T1JFX21hZ2VudGFfcGxhdGUgMCAxIDAgMCB0ZXN0X2NteWtfY29sb3JfcGxhdGUgZGVmCgkvQUdN
Q09SRV95ZWxsb3dfcGxhdGUgMCAwIDEgMCB0ZXN0X2NteWtfY29sb3JfcGxhdGUgZGVmCgkvQUdN
Q09SRV9ibGFja19wbGF0ZSAwIDAgMCAxIHRlc3RfY215a19jb2xvcl9wbGF0ZSBkZWYKCS9BR01D
T1JFX3BsYXRlX25keCAKCQlBR01DT1JFX2N5YW5fcGxhdGV7IAoJCQkwCgkJfXsKCQkJQUdNQ09S
RV9tYWdlbnRhX3BsYXRlewoJCQkJMQoJCQl9ewoJCQkJQUdNQ09SRV95ZWxsb3dfcGxhdGV7CgkJ
CQkJMgoJCQkJfXsKCQkJCQlBR01DT1JFX2JsYWNrX3BsYXRlewoJCQkJCQkzCgkJCQkJfXsKCQkJ
CQkJNAoJCQkJCX1pZmVsc2UKCQkJCX1pZmVsc2UKCQkJfWlmZWxzZQoJCX1pZmVsc2UKCQlkZWYK
CS9BR01DT1JFX2hhdmVfcmVwb3J0ZWRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UgZmFsc2UgZGVm
CgkvQUdNQ09SRV9yZXBvcnRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UKCXsKCQlBR01DT1JFX2hh
dmVfcmVwb3J0ZWRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UgZmFsc2UgZXEKCQl7CgkJCShXYXJu
aW5nOiBKb2IgY29udGFpbnMgY29udGVudCB0aGF0IGNhbm5vdCBiZSBzZXBhcmF0ZWQgd2l0aCBv
bi1ob3N0IG1ldGhvZHMuIFRoaXMgY29udGVudCBhcHBlYXJzIG9uIHRoZSBibGFjayBwbGF0ZSwg
YW5kIGtub2NrcyBvdXQgYWxsIG90aGVyIHBsYXRlcy4pID09CgkJCUFkb2JlX0FHTV9Db3JlIC9B
R01DT1JFX2hhdmVfcmVwb3J0ZWRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UgdHJ1ZSBkZGYKCQl9
IGlmCgl9ZGVmCgkvQUdNQ09SRV9jb21wb3NpdGVfam9iCgkJQUdNQ09SRV9jeWFuX3BsYXRlIEFH
TUNPUkVfbWFnZW50YV9wbGF0ZSBhbmQgQUdNQ09SRV95ZWxsb3dfcGxhdGUgYW5kIEFHTUNPUkVf
YmxhY2tfcGxhdGUgYW5kIGRlZgoJL0FHTUNPUkVfaW5fcmlwX3NlcAoJCS9BR01DT1JFX2luX3Jp
cF9zZXAgd2hlcmV7CgkJCXBvcCBBR01DT1JFX2luX3JpcF9zZXAKCQl9ewoJCQlBR01DT1JFX2Rp
c3RpbGxpbmcgCgkJCXsKCQkJCWZhbHNlCgkJCX17CgkJCQl1c2VyZGljdC9BZG9iZV9BR01fT25I
b3N0X1NlcHMga25vd257CgkJCQkJZmFsc2UKCQkJCX17CgkJCQkJbGV2ZWwyewoJCQkJCQljdXJy
ZW50cGFnZWRldmljZS9TZXBhcmF0aW9ucyAyIGNvcHkga25vd257CgkJCQkJCQlnZXQKCQkJCQkJ
fXsKCQkJCQkJCXBvcCBwb3AgZmFsc2UKCQkJCQkJfWlmZWxzZQoJCQkJCX17CgkJCQkJCWZhbHNl
CgkJCQkJfWlmZWxzZQoJCQkJfWlmZWxzZQoJCQl9aWZlbHNlCgkJfWlmZWxzZQoJZGVmCgkvQUdN
Q09SRV9wcm9kdWNpbmdfc2VwcyBBR01DT1JFX2NvbXBvc2l0ZV9qb2Igbm90IEFHTUNPUkVfaW5f
cmlwX3NlcCBvciBkZWYKCS9BR01DT1JFX2hvc3Rfc2VwIEFHTUNPUkVfcHJvZHVjaW5nX3NlcHMg
QUdNQ09SRV9pbl9yaXBfc2VwIG5vdCBhbmQgZGVmCgkvQUdNX3ByZXNlcnZlX3Nwb3RzIAoJCS9B
R01fcHJlc2VydmVfc3BvdHMgd2hlcmV7CgkJCXBvcCBBR01fcHJlc2VydmVfc3BvdHMKCQl9ewoJ
CQlBR01DT1JFX2Rpc3RpbGxpbmcgQUdNQ09SRV9wcm9kdWNpbmdfc2VwcyBvcgoJCX1pZmVsc2UK
CWRlZgoJL0FHTV9pc19kaXN0aWxsZXJfcHJlc2VydmluZ19zcG90aW1hZ2VzCgl7CgkJY3VycmVu
dGRpc3RpbGxlcnBhcmFtcy9QcmVzZXJ2ZU92ZXJwcmludFNldHRpbmdzIGtub3duCgkJewoJCQlj
dXJyZW50ZGlzdGlsbGVycGFyYW1zL1ByZXNlcnZlT3ZlcnByaW50U2V0dGluZ3MgZ2V0CgkJCQl7
CgkJCQkJY3VycmVudGRpc3RpbGxlcnBhcmFtcy9Db2xvckNvbnZlcnNpb25TdHJhdGVneSBrbm93
bgoJCQkJCXsKCQkJCQkJY3VycmVudGRpc3RpbGxlcnBhcmFtcy9Db2xvckNvbnZlcnNpb25TdHJh
dGVneSBnZXQKCQkJCQkJL0xlYXZlQ29sb3JVbmNoYW5nZWQgZXEKCQkJCQl9ewoJCQkJCQl0cnVl
CgkJCQkJfWlmZWxzZQoJCQkJfXsKCQkJCQlmYWxzZQoJCQkJfWlmZWxzZQoJCX17CgkJCWZhbHNl
CgkJfWlmZWxzZQoJfWRlZgoJL2NvbnZlcnRfc3BvdF90b19wcm9jZXNzIHdoZXJlIHtwb3B9ewoJ
CS9jb252ZXJ0X3Nwb3RfdG9fcHJvY2VzcwoJCXsKCQkJZHVwIG1hcF9hbGlhcyB7CgkJCQkvTmFt
ZSBnZXQgZXhjaCBwb3AKCQkJfSBpZgoJCQlkdXAgZHVwIChOb25lKSBlcSBleGNoIChBbGwpIGVx
IG9yCgkJCQl7CgkJCQlwb3AgZmFsc2UKCQkJCX17CgkJCQlBR01DT1JFX2hvc3Rfc2VwCgkJCQl7
IAoJCQkJCWdzYXZlCgkJCQkJMSAwIDAgMCBzZXRjbXlrY29sb3IgY3VycmVudGdyYXkgMSBleGNo
IHN1YgoJCQkJCTAgMSAwIDAgc2V0Y215a2NvbG9yIGN1cnJlbnRncmF5IDEgZXhjaCBzdWIKCQkJ
CQkwIDAgMSAwIHNldGNteWtjb2xvciBjdXJyZW50Z3JheSAxIGV4Y2ggc3ViCgkJCQkJMCAwIDAg
MSBzZXRjbXlrY29sb3IgY3VycmVudGdyYXkgMSBleGNoIHN1YgoJCQkJCWFkZCBhZGQgYWRkIDAg
ZXEKCQkJCQl7CgkJCQkJCXBvcCBmYWxzZQoJCQkJCX17CgkJCQkJCWZhbHNlIHNldG92ZXJwcmlu
dAoJCQkJCQkxIDEgMSAxIDUgLTEgcm9sbCBmaW5kY215a2N1c3RvbWNvbG9yIDEgc2V0Y3VzdG9t
Y29sb3IKCQkJCQkJY3VycmVudGdyYXkgMCBlcQoJCQkJCX1pZmVsc2UKCQkJCQlncmVzdG9yZQoJ
CQkJfXsKCQkJCQlBR01DT1JFX2Rpc3RpbGxpbmcKCQkJCQl7CgkJCQkJCXBvcCBBR01faXNfZGlz
dGlsbGVyX3ByZXNlcnZpbmdfc3BvdGltYWdlcyBub3QKCQkJCQl9ewoJCQkJCQlBZG9iZV9BR01f
Q29yZS9BR01DT1JFX25hbWUgeGRkZgoJCQkJCQlmYWxzZQoJCQkJCQlBZG9iZV9BR01fQ29yZS9B
R01DT1JFX2luX3BhdHRlcm4ga25vd24ge0Fkb2JlX0FHTV9Db3JlL0FHTUNPUkVfaW5fcGF0dGVy
biBnZXR9e2ZhbHNlfSBpZmVsc2UKCQkJCQkJbm90IGN1cnJlbnRwYWdlZGV2aWNlL092ZXJyaWRl
U2VwYXJhdGlvbnMga25vd24gYW5kCgkJCQkJCQl7CgkJCQkJCQljdXJyZW50cGFnZWRldmljZS9P
dmVycmlkZVNlcGFyYXRpb25zIGdldAoJCQkJCQkJCXsKCQkJCQkJCQkvSHFuU3BvdHMgL1Byb2NT
ZXQgcmVzb3VyY2VzdGF0dXMKCQkJCQkJCQkJewoJCQkJCQkJCQlwb3AgcG9wIHBvcCB0cnVlCgkJ
CQkJCQkJCX1pZgoJCQkJCQkJCX1pZgoJCQkJCQkJfWlmCQkJCQkKCQkJCQkJCXsKCQkJCQkJCUFH
TUNPUkVfbmFtZSAvSHFuU3BvdHMgL1Byb2NTZXQgZmluZHJlc291cmNlIC9UZXN0U3BvdCBnZXQg
ZXhlYyBub3QKCQkJCQkJCX17CgkJCQkJCQlnc2F2ZQoJCQkJCQkJWy9TZXBhcmF0aW9uIEFHTUNP
UkVfbmFtZSAvRGV2aWNlR3JheSB7fV1zZXRjb2xvcnNwYWNlCgkJCQkJCQlmYWxzZQoJCQkJCQkJ
Y3VycmVudHBhZ2VkZXZpY2UvU2VwYXJhdGlvbkNvbG9yTmFtZXMgMiBjb3B5IGtub3duCgkJCQkJ
CQl7CgkJCQkJCQkJZ2V0CgkJCQkJCQkJeyBBR01DT1JFX25hbWUgZXEgb3J9Zm9yYWxsCgkJCQkJ
CQlub3QKCQkJCQkJCX17CgkJCQkJCQkJcG9wIHBvcCBwb3AgdHJ1ZQoJCQkJCQkJfWlmZWxzZQoJ
CQkJCQkJZ3Jlc3RvcmUKCQkJCQkJfWlmZWxzZQoJCQkJCX1pZmVsc2UKCQkJCX1pZmVsc2UKCQkJ
fWlmZWxzZQoJCX1kZWYKCX1pZmVsc2UKCS9jb252ZXJ0X3RvX3Byb2Nlc3Mgd2hlcmUge3BvcH17
CgkJL2NvbnZlcnRfdG9fcHJvY2VzcwoJCXsKCQkJZHVwIGxlbmd0aCAwIGVxCgkJCQl7CgkJCQlw
b3AgZmFsc2UKCQkJCX17CgkJCQlBR01DT1JFX2hvc3Rfc2VwCgkJCQl7IAoJCQkJZHVwIHRydWUg
ZXhjaAoJCQkJCXsKCQkJCQlkdXAgKEN5YW4pIGVxIGV4Y2gKCQkJCQlkdXAgKE1hZ2VudGEpIGVx
IDMgLTEgcm9sbCBvciBleGNoCgkJCQkJZHVwIChZZWxsb3cpIGVxIDMgLTEgcm9sbCBvciBleGNo
CgkJCQkJZHVwIChCbGFjaykgZXEgMyAtMSByb2xsIG9yCgkJCQkJCXtwb3B9CgkJCQkJCXtjb252
ZXJ0X3Nwb3RfdG9fcHJvY2VzcyBhbmR9aWZlbHNlCgkJCQkJfQoJCQkJZm9yYWxsCgkJCQkJewoJ
CQkJCXRydWUgZXhjaAoJCQkJCQl7CgkJCQkJCWR1cCAoQ3lhbikgZXEgZXhjaAoJCQkJCQlkdXAg
KE1hZ2VudGEpIGVxIDMgLTEgcm9sbCBvciBleGNoCgkJCQkJCWR1cCAoWWVsbG93KSBlcSAzIC0x
IHJvbGwgb3IgZXhjaAoJCQkJCQkoQmxhY2spIGVxIG9yIGFuZAoJCQkJCQl9Zm9yYWxsCgkJCQkJ
CW5vdAoJCQkJCX17cG9wIGZhbHNlfWlmZWxzZQoJCQkJfXsKCQkJCWZhbHNlIGV4Y2gKCQkJCQl7
CgkJCQkJZHVwIChDeWFuKSBlcSBleGNoCgkJCQkJZHVwIChNYWdlbnRhKSBlcSAzIC0xIHJvbGwg
b3IgZXhjaAoJCQkJCWR1cCAoWWVsbG93KSBlcSAzIC0xIHJvbGwgb3IgZXhjaAoJCQkJCWR1cCAo
QmxhY2spIGVxIDMgLTEgcm9sbCBvcgoJCQkJCXtwb3B9CgkJCQkJe2NvbnZlcnRfc3BvdF90b19w
cm9jZXNzIG9yfWlmZWxzZQoJCQkJCX0KCQkJCWZvcmFsbAoJCQkJfWlmZWxzZQoJCQl9aWZlbHNl
CgkJfWRlZgoJfWlmZWxzZQkKCS9BR01DT1JFX2F2b2lkX0wyX3NlcF9zcGFjZSAgCgkJdmVyc2lv
biBjdnIgMjAxMiBsdCAKCQlsZXZlbDIgYW5kIAoJCUFHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbm90
IGFuZAoJZGVmCgkvQUdNQ09SRV9pc19jbXlrX3NlcAoJCUFHTUNPUkVfY3lhbl9wbGF0ZSBBR01D
T1JFX21hZ2VudGFfcGxhdGUgb3IgQUdNQ09SRV95ZWxsb3dfcGxhdGUgb3IgQUdNQ09SRV9ibGFj
a19wbGF0ZSBvcgoJZGVmCgkvQUdNX2F2b2lkXzBfY215ayB3aGVyZXsKCQlwb3AgQUdNX2F2b2lk
XzBfY215awoJfXsKCQlBR01fcHJlc2VydmVfc3BvdHMgCgkJdXNlcmRpY3QvQWRvYmVfQUdNX09u
SG9zdF9TZXBzIGtub3duIAoJCXVzZXJkaWN0L0Fkb2JlX0FHTV9JblJpcF9TZXBzIGtub3duIG9y
CgkJbm90IGFuZAoJfWlmZWxzZQoJewoJCS9zZXRjbXlrY29sb3JbCgkJCXsKCQkJCTQgY29weSBh
ZGQgYWRkIGFkZCAwIGVxIGN1cnJlbnRvdmVycHJpbnQgYW5kewoJCQkJCXBvcCAwLjAwMDUKCQkJ
CX1pZgoJCQl9L2V4ZWMgY3Z4CgkJCS9BR01DT1JFXyZzZXRjbXlrY29sb3IgbG9hZCBkdXAgdHlw
ZS9vcGVyYXRvcnR5cGUgbmV7CgkJCQkvZXhlYyBjdngKCQkJfWlmCgkJXWN2eCBkZWYKCX1pZgoJ
QUdNQ09SRV9ob3N0X3NlcHsKCQkvc2V0Y29sb3J0cmFuc2ZlcgoJCXsgCgkJCUFHTUNPUkVfY3lh
bl9wbGF0ZXsKCQkJCXBvcCBwb3AgcG9wCgkJCX17CgkJCSAgCUFHTUNPUkVfbWFnZW50YV9wbGF0
ZXsKCQkJICAJCTQgMyByb2xsIHBvcCBwb3AgcG9wCgkJCSAgCX17CgkJCSAgCQlBR01DT1JFX3ll
bGxvd19wbGF0ZXsKCQkJICAJCQk0IDIgcm9sbCBwb3AgcG9wIHBvcAoJCQkgIAkJfXsKCQkJICAJ
CQk0IDEgcm9sbCBwb3AgcG9wIHBvcAoJCQkgIAkJfWlmZWxzZQoJCQkgIAl9aWZlbHNlCgkJCX1p
ZmVsc2UKCQkJc2V0dHJhbnNmZXIgIAoJCX0JCgkJZGVmCgkJL0FHTUNPUkVfZ2V0X2lua19kYXRh
CgkJCUFHTUNPUkVfY3lhbl9wbGF0ZXsKCQkJCXtwb3AgcG9wIHBvcH0KCQkJfXsKCQkJICAJQUdN
Q09SRV9tYWdlbnRhX3BsYXRlewoJCQkgIAkJezQgMyByb2xsIHBvcCBwb3AgcG9wfQoJCQkgIAl9
ewoJCQkgIAkJQUdNQ09SRV95ZWxsb3dfcGxhdGV7CgkJCSAgCQkJezQgMiByb2xsIHBvcCBwb3Ag
cG9wfQoJCQkgIAkJfXsKCQkJICAJCQl7NCAxIHJvbGwgcG9wIHBvcCBwb3B9CgkJCSAgCQl9aWZl
bHNlCgkJCSAgCX1pZmVsc2UKCQkJfWlmZWxzZQoJCWRlZgoJCS9BR01DT1JFX1JlbW92ZVByb2Nl
c3NDb2xvck5hbWVzCgkJCXsKCQkJMSBkaWN0IGJlZ2luCgkJCS9maWx0ZXJuYW1lCgkJCQl7CgkJ
CQlkdXAgL0N5YW4gZXEgMSBpbmRleCAoQ3lhbikgZXEgb3IKCQkJCQl7cG9wIChfY3lhbl8pfWlm
CgkJCQlkdXAgL01hZ2VudGEgZXEgMSBpbmRleCAoTWFnZW50YSkgZXEgb3IKCQkJCQl7cG9wIChf
bWFnZW50YV8pfWlmCgkJCQlkdXAgL1llbGxvdyBlcSAxIGluZGV4IChZZWxsb3cpIGVxIG9yCgkJ
CQkJe3BvcCAoX3llbGxvd18pfWlmCgkJCQlkdXAgL0JsYWNrIGVxIDEgaW5kZXggKEJsYWNrKSBl
cSBvcgoJCQkJCXtwb3AgKF9ibGFja18pfWlmCgkJCQl9ZGVmCgkJCWR1cCB0eXBlIC9hcnJheXR5
cGUgZXEKCQkJCXtbZXhjaCB7ZmlsdGVybmFtZX1mb3JhbGxdfQoJCQkJe2ZpbHRlcm5hbWV9aWZl
bHNlCgkJCWVuZAoJCQl9ZGVmCgkJL0FHTUNPUkVfSXNTZXBhcmF0aW9uQVByb2Nlc3NDb2xvcgoJ
CQl7CgkJCWR1cCAoQ3lhbikgZXEgZXhjaCBkdXAgKE1hZ2VudGEpIGVxIGV4Y2ggZHVwIChZZWxs
b3cpIGVxIGV4Y2ggKEJsYWNrKSBlcSBvciBvciBvcgoJCQl9ZGVmCgkJbGV2ZWwzIHsKCQkJL0FH
TUNPUkVfSXNDdXJyZW50Q29sb3IKCQkJCXsKCQkJCWdzYXZlCgkJCQlmYWxzZSBzZXRvdmVycHJp
bnQKCQkJCTEgMSAxIDEgNSAtMSByb2xsIGZpbmRjbXlrY3VzdG9tY29sb3IgMSBzZXRjdXN0b21j
b2xvcgoJCQkJY3VycmVudGdyYXkgMCBlcSAKCQkJCWdyZXN0b3JlCgkJCQl9ZGVmCgkJCS9BR01D
T1JFX2ZpbHRlcl9mdW5jdGlvbmRhdGFzb3VyY2UKCQkJCXsJCgkJCQk1IGRpY3QgYmVnaW4KCQkJ
CS9kYXRhX2luIHhkZgoJCQkJZGF0YV9pbiB0eXBlIC9zdHJpbmd0eXBlIGVxCgkJCQkJewoJCQkJ
CS9uY29tcCB4ZGYKCQkJCQkvY29tcCB4ZGYKCQkJCQkvc3RyaW5nX291dCBkYXRhX2luIGxlbmd0
aCBuY29tcCBpZGl2IHN0cmluZyBkZWYKCQkJCQkwIG5jb21wIGRhdGFfaW4gbGVuZ3RoIDEgc3Vi
CgkJCQkJCXsKCQkJCQkJc3RyaW5nX291dCBleGNoIGR1cCBuY29tcCBpZGl2IGV4Y2ggZGF0YV9p
biBleGNoIG5jb21wIGdldGludGVydmFsIGNvbXAgZ2V0IDI1NSBleGNoIHN1YiBwdXQKCQkJCQkJ
fWZvcgoJCQkJCXN0cmluZ19vdXQKCQkJCQl9ewoJCQkJCXN0cmluZyAvc3RyaW5nX2luIHhkZgoJ
CQkJCS9zdHJpbmdfb3V0IDEgc3RyaW5nIGRlZgoJCQkJCS9jb21wb25lbnQgeGRmCgkJCQkJWwoJ
CQkJCWRhdGFfaW4gc3RyaW5nX2luIC9yZWFkc3RyaW5nIGN2eAoJCQkJCQlbY29tcG9uZW50IC9n
ZXQgY3Z4IDI1NSAvZXhjaCBjdnggL3N1YiBjdnggc3RyaW5nX291dCAvZXhjaCBjdnggMCAvZXhj
aCBjdnggL3B1dCBjdnggc3RyaW5nX291dF1jdngKCQkJCQkJWy9wb3AgY3Z4ICgpXWN2eCAvaWZl
bHNlIGN2eAoJCQkJCV1jdnggL1JldXNhYmxlU3RyZWFtRGVjb2RlIGZpbHRlcgoJCQkJfWlmZWxz
ZQoJCQkJZW5kCgkJCQl9ZGVmCgkJCS9BR01DT1JFX3NlcGFyYXRlU2hhZGluZ0Z1bmN0aW9uCgkJ
CQl7CgkJCQkyIGRpY3QgYmVnaW4KCQkJCS9wYWludD8geGRmCgkJCQkvY2hhbm5lbCB4ZGYKCQkJ
CQliZWdpbgoJCQkJCUZ1bmN0aW9uVHlwZSAwIGVxCgkJCQkJCXsKCQkJCQkJL0RhdGFTb3VyY2Ug
Y2hhbm5lbCBSYW5nZSBsZW5ndGggMiBpZGl2IERhdGFTb3VyY2UgQUdNQ09SRV9maWx0ZXJfZnVu
Y3Rpb25kYXRhc291cmNlIGRlZgoJCQkJCQljdXJyZW50ZGljdCAvRGVjb2RlIGtub3duCgkJCQkJ
CQl7L0RlY29kZSBEZWNvZGUgY2hhbm5lbCAyIG11bCAyIGdldGludGVydmFsIGRlZn1pZgoJCQkJ
CQlwYWludD8gbm90CgkJCQkJCQl7L0RlY29kZSBbMSAxXWRlZn1pZgoJCQkJCQl9aWYKCQkJCQlG
dW5jdGlvblR5cGUgMiBlcQoJCQkJCQl7CgkJCQkJCXBhaW50PwoJCQkJCQkJewoJCQkJCQkJL0Mw
IFtDMCBjaGFubmVsIGdldCAxIGV4Y2ggc3ViXSBkZWYKCQkJCQkJCS9DMSBbQzEgY2hhbm5lbCBn
ZXQgMSBleGNoIHN1Yl0gZGVmCgkJCQkJCQl9ewoJCQkJCQkJL0MwIFsxXSBkZWYKCQkJCQkJCS9D
MSBbMV0gZGVmCgkJCQkJCQl9aWZlbHNlCQkJCgkJCQkJCX1pZgoJCQkJCUZ1bmN0aW9uVHlwZSAz
IGVxCgkJCQkJCXsKCQkJCQkJL0Z1bmN0aW9ucyBbRnVuY3Rpb25zIHtjaGFubmVsIHBhaW50PyBB
R01DT1JFX3NlcGFyYXRlU2hhZGluZ0Z1bmN0aW9ufSBmb3JhbGxdIGRlZgkJCQoJCQkJCQl9aWYK
CQkJCQljdXJyZW50ZGljdCAvUmFuZ2Uga25vd24KCQkJCQkJey9SYW5nZSBbMCAxXSBkZWZ9aWYK
CQkJCQljdXJyZW50ZGljdAoJCQkJCWVuZAoJCQkJZW5kCgkJCQl9ZGVmCgkJCS9BR01DT1JFX3Nl
cGFyYXRlU2hhZGluZwoJCQkJewoJCQkJMyAtMSByb2xsIGJlZ2luCgkJCQljdXJyZW50ZGljdCAv
RnVuY3Rpb24ga25vd24KCQkJCQl7CgkJCQkJY3VycmVudGRpY3QgL0JhY2tncm91bmQga25vd24K
CQkJCQkJe1sxIGluZGV4e0JhY2tncm91bmQgMyBpbmRleCBnZXQgMSBleGNoIHN1Yn17MX1pZmVs
c2VdL0JhY2tncm91bmQgeGRmfWlmCgkJCQkJRnVuY3Rpb24gMyAxIHJvbGwgQUdNQ09SRV9zZXBh
cmF0ZVNoYWRpbmdGdW5jdGlvbiAvRnVuY3Rpb24geGRmCgkJCQkJL0NvbG9yU3BhY2UgWy9EZXZp
Y2VHcmF5XSBkZWYKCQkJCQl9ewoJCQkJCUNvbG9yU3BhY2UgZHVwIHR5cGUgL2FycmF5dHlwZSBl
cSB7MCBnZXR9aWYgL0RldmljZUNNWUsgZXEKCQkJCQkJewoJCQkJCQkvQ29sb3JTcGFjZSBbL0Rl
dmljZU4gWy9fY3lhbl8gL19tYWdlbnRhXyAvX3llbGxvd18gL19ibGFja19dIC9EZXZpY2VDTVlL
IHt9XSBkZWYKCQkJCQkJfXsKCQkJCQkJQ29sb3JTcGFjZSBkdXAgMSBnZXQgQUdNQ09SRV9SZW1v
dmVQcm9jZXNzQ29sb3JOYW1lcyAxIGV4Y2ggcHV0CgkJCQkJCX1pZmVsc2UKCQkJCQlDb2xvclNw
YWNlIDAgZ2V0IC9TZXBhcmF0aW9uIGVxCgkJCQkJCXsKCQkJCQkJCXsKCQkJCQkJCQlbMSAvZXhj
aCBjdnggL3N1YiBjdnhdY3Z4CgkJCQkJCQl9ewoJCQkJCQkJCVsvcG9wIGN2eCAxXWN2eAoJCQkJ
CQkJfWlmZWxzZQoJCQkJCQkJQ29sb3JTcGFjZSAzIDMgLTEgcm9sbCBwdXQKCQkJCQkJCXBvcAoJ
CQkJCQl9ewoJCQkJCQkJewoJCQkJCQkJCVtleGNoIENvbG9yU3BhY2UgMSBnZXQgbGVuZ3RoIDEg
c3ViIGV4Y2ggc3ViIC9pbmRleCBjdnggMSAvZXhjaCBjdnggL3N1YiBjdnggQ29sb3JTcGFjZSAx
IGdldCBsZW5ndGggMSBhZGQgMSAvcm9sbCBjdnggQ29sb3JTcGFjZSAxIGdldCBsZW5ndGh7L3Bv
cCBjdnh9IHJlcGVhdF1jdngKCQkJCQkJCX17CgkJCQkJCQkJcG9wIFtDb2xvclNwYWNlIDEgZ2V0
IGxlbmd0aCB7L3BvcCBjdnh9IHJlcGVhdCBjdnggMV1jdngKCQkJCQkJCX1pZmVsc2UKCQkJCQkJ
CUNvbG9yU3BhY2UgMyAzIC0xIHJvbGwgYmluZCBwdXQKCQkJCQkJfWlmZWxzZQoJCQkJCUNvbG9y
U3BhY2UgMiAvRGV2aWNlR3JheSBwdXQJCQkJCQkJCQkJCQkJCQkJCQkKCQkJCQl9aWZlbHNlCgkJ
CQllbmQKCQkJCX1kZWYKCQkJL0FHTUNPUkVfc2VwYXJhdGVTaGFkaW5nRGljdAoJCQkJewoJCQkJ
ZHVwIC9Db2xvclNwYWNlIGdldAoJCQkJZHVwIHR5cGUgL2FycmF5dHlwZSBuZQoJCQkJCXtbZXhj
aF19aWYKCQkJCWR1cCAwIGdldCAvRGV2aWNlQ01ZSyBlcQoJCQkJCXsKCQkJCQlleGNoIGJlZ2lu
IAoJCQkJCWN1cnJlbnRkaWN0CgkJCQkJQUdNQ09SRV9jeWFuX3BsYXRlCgkJCQkJCXswIHRydWV9
aWYKCQkJCQlBR01DT1JFX21hZ2VudGFfcGxhdGUKCQkJCQkJezEgdHJ1ZX1pZgoJCQkJCUFHTUNP
UkVfeWVsbG93X3BsYXRlCgkJCQkJCXsyIHRydWV9aWYKCQkJCQlBR01DT1JFX2JsYWNrX3BsYXRl
CgkJCQkJCXszIHRydWV9aWYKCQkJCQlBR01DT1JFX3BsYXRlX25keCA0IGVxCgkJCQkJCXswIGZh
bHNlfWlmCQkKCQkJCQlkdXAgbm90IGN1cnJlbnRvdmVycHJpbnQgYW5kCgkJCQkJCXsvQUdNQ09S
RV9pZ25vcmVzaGFkZSB0cnVlIGRlZn1pZgoJCQkJCUFHTUNPUkVfc2VwYXJhdGVTaGFkaW5nCgkJ
CQkJY3VycmVudGRpY3QKCQkJCQllbmQgZXhjaAoJCQkJCX1pZgoJCQkJZHVwIDAgZ2V0IC9TZXBh
cmF0aW9uIGVxCgkJCQkJewoJCQkJCWV4Y2ggYmVnaW4KCQkJCQlDb2xvclNwYWNlIDEgZ2V0IGR1
cCAvTm9uZSBuZSBleGNoIC9BbGwgbmUgYW5kCgkJCQkJCXsKCQkJCQkJQ29sb3JTcGFjZSAxIGdl
dCBBR01DT1JFX0lzQ3VycmVudENvbG9yIEFHTUNPUkVfcGxhdGVfbmR4IDQgbHQgYW5kIENvbG9y
U3BhY2UgMSBnZXQgQUdNQ09SRV9Jc1NlcGFyYXRpb25BUHJvY2Vzc0NvbG9yIG5vdCBhbmQKCQkJ
CQkJCXsKCQkJCQkJCUNvbG9yU3BhY2UgMiBnZXQgZHVwIHR5cGUgL2FycmF5dHlwZSBlcSB7MCBn
ZXR9aWYgL0RldmljZUNNWUsgZXEgCgkJCQkJCQkJewoJCQkJCQkJCS9Db2xvclNwYWNlCgkJCQkJ
CQkJCVsKCQkJCQkJCQkJL1NlcGFyYXRpb24KCQkJCQkJCQkJQ29sb3JTcGFjZSAxIGdldAoJCQkJ
CQkJCQkvRGV2aWNlR3JheQoJCQkJCQkJCQkJWwoJCQkJCQkJCQkJQ29sb3JTcGFjZSAzIGdldCAv
ZXhlYyBjdngKCQkJCQkJCQkJCTQgQUdNQ09SRV9wbGF0ZV9uZHggc3ViIC0xIC9yb2xsIGN2eAoJ
CQkJCQkJCQkJNCAxIC9yb2xsIGN2eAoJCQkJCQkJCQkJMyBbL3BvcCBjdnhdY3Z4IC9yZXBlYXQg
Y3Z4CgkJCQkJCQkJCQkxIC9leGNoIGN2eCAvc3ViIGN2eAoJCQkJCQkJCQkJXWN2eAkJCQkJCQkJ
CQoJCQkJCQkJCQldZGVmCgkJCQkJCQkJfXsKCQkJCQkJCQlBR01DT1JFX3JlcG9ydF91bnN1cHBv
cnRlZF9jb2xvcl9zcGFjZQoJCQkJCQkJCUFHTUNPUkVfYmxhY2tfcGxhdGUgbm90CgkJCQkJCQkJ
CXsKCQkJCQkJCQkJY3VycmVudGRpY3QgMCBmYWxzZSBBR01DT1JFX3NlcGFyYXRlU2hhZGluZwoJ
CQkJCQkJCQl9aWYKCQkJCQkJCQl9aWZlbHNlCgkJCQkJCQl9ewoJCQkJCQkJY3VycmVudGRpY3Qg
Q29sb3JTcGFjZSAxIGdldCBBR01DT1JFX0lzQ3VycmVudENvbG9yCgkJCQkJCQkwIGV4Y2ggCgkJ
CQkJCQlkdXAgbm90IGN1cnJlbnRvdmVycHJpbnQgYW5kCgkJCQkJCQkJey9BR01DT1JFX2lnbm9y
ZXNoYWRlIHRydWUgZGVmfWlmCgkJCQkJCQlBR01DT1JFX3NlcGFyYXRlU2hhZGluZwoJCQkJCQkJ
fWlmZWxzZQkKCQkJCQkJfWlmCQkJCgkJCQkJY3VycmVudGRpY3QKCQkJCQllbmQgZXhjaAoJCQkJ
CX1pZgoJCQkJZHVwIDAgZ2V0IC9EZXZpY2VOIGVxCgkJCQkJewoJCQkJCWV4Y2ggYmVnaW4KCQkJ
CQlDb2xvclNwYWNlIDEgZ2V0IGNvbnZlcnRfdG9fcHJvY2VzcwoJCQkJCQl7CgkJCQkJCUNvbG9y
U3BhY2UgMiBnZXQgZHVwIHR5cGUgL2FycmF5dHlwZSBlcSB7MCBnZXR9aWYgL0RldmljZUNNWUsg
ZXEgCgkJCQkJCQl7CgkJCQkJCQkvQ29sb3JTcGFjZQoJCQkJCQkJCVsKCQkJCQkJCQkvRGV2aWNl
TgoJCQkJCQkJCUNvbG9yU3BhY2UgMSBnZXQKCQkJCQkJCQkvRGV2aWNlR3JheQoJCQkJCQkJCQlb
CgkJCQkJCQkJCUNvbG9yU3BhY2UgMyBnZXQgL2V4ZWMgY3Z4CgkJCQkJCQkJCTQgQUdNQ09SRV9w
bGF0ZV9uZHggc3ViIC0xIC9yb2xsIGN2eAoJCQkJCQkJCQk0IDEgL3JvbGwgY3Z4CgkJCQkJCQkJ
CTMgWy9wb3AgY3Z4XWN2eCAvcmVwZWF0IGN2eAoJCQkJCQkJCQkxIC9leGNoIGN2eCAvc3ViIGN2
eAoJCQkJCQkJCQldY3Z4CQkJCQkJCQkJCgkJCQkJCQkJXWRlZgoJCQkJCQkJfXsKCQkJCQkJCUFH
TUNPUkVfcmVwb3J0X3Vuc3VwcG9ydGVkX2NvbG9yX3NwYWNlCgkJCQkJCQlBR01DT1JFX2JsYWNr
X3BsYXRlIG5vdAoJCQkJCQkJCXsKCQkJCQkJCQljdXJyZW50ZGljdCAwIGZhbHNlIEFHTUNPUkVf
c2VwYXJhdGVTaGFkaW5nCgkJCQkJCQkJL0NvbG9yU3BhY2UgWy9EZXZpY2VHcmF5XSBkZWYKCQkJ
CQkJCQl9aWYKCQkJCQkJCX1pZmVsc2UKCQkJCQkJfXsKCQkJCQkJY3VycmVudGRpY3QKCQkJCQkJ
ZmFsc2UgLTEgQ29sb3JTcGFjZSAxIGdldAoJCQkJCQkJewoJCQkJCQkJQUdNQ09SRV9Jc0N1cnJl
bnRDb2xvcgoJCQkJCQkJCXsKCQkJCQkJCQkxIGFkZAoJCQkJCQkJCWV4Y2ggcG9wIHRydWUgZXhj
aCBleGl0CgkJCQkJCQkJfWlmCgkJCQkJCQkxIGFkZAoJCQkJCQkJfWZvcmFsbAoJCQkJCQlleGNo
IAoJCQkJCQlkdXAgbm90IGN1cnJlbnRvdmVycHJpbnQgYW5kCgkJCQkJCQl7L0FHTUNPUkVfaWdu
b3Jlc2hhZGUgdHJ1ZSBkZWZ9aWYKCQkJCQkJQUdNQ09SRV9zZXBhcmF0ZVNoYWRpbmcKCQkJCQkJ
fWlmZWxzZQoJCQkJCWN1cnJlbnRkaWN0CgkJCQkJZW5kIGV4Y2gKCQkJCQl9aWYKCQkJCWR1cCAw
IGdldCBkdXAgL0RldmljZUNNWUsgZXEgZXhjaCBkdXAgL1NlcGFyYXRpb24gZXEgZXhjaCAvRGV2
aWNlTiBlcSBvciBvciBub3QKCQkJCQl7CgkJCQkJZXhjaCBiZWdpbgoJCQkJCUNvbG9yU3BhY2Ug
ZHVwIHR5cGUgL2FycmF5dHlwZSBlcQoJCQkJCQl7MCBnZXR9aWYKCQkJCQkvRGV2aWNlR3JheSBu
ZQoJCQkJCQl7CgkJCQkJCUFHTUNPUkVfcmVwb3J0X3Vuc3VwcG9ydGVkX2NvbG9yX3NwYWNlCgkJ
CQkJCUFHTUNPUkVfYmxhY2tfcGxhdGUgbm90CgkJCQkJCQl7CgkJCQkJCQlDb2xvclNwYWNlIDAg
Z2V0IC9DSUVCYXNlZEEgZXEKCQkJCQkJCQl7CgkJCQkJCQkJL0NvbG9yU3BhY2UgWy9TZXBhcmF0
aW9uIC9fY2llYmFzZWRhXyAvRGV2aWNlR3JheSB7fV0gZGVmCgkJCQkJCQkJfWlmCgkJCQkJCQlD
b2xvclNwYWNlIDAgZ2V0IGR1cCAvQ0lFQmFzZWRBQkMgZXEgZXhjaCBkdXAgL0NJRUJhc2VkREVG
IGVxIGV4Y2ggL0RldmljZVJHQiBlcSBvciBvcgoJCQkJCQkJCXsKCQkJCQkJCQkvQ29sb3JTcGFj
ZSBbL0RldmljZU4gWy9fcmVkXyAvX2dyZWVuXyAvX2JsdWVfXSAvRGV2aWNlUkdCIHt9XSBkZWYK
CQkJCQkJCQl9aWYKCQkJCQkJCUNvbG9yU3BhY2UgMCBnZXQgL0NJRUJhc2VkREVGRyBlcQoJCQkJ
CQkJCXsKCQkJCQkJCQkvQ29sb3JTcGFjZSBbL0RldmljZU4gWy9fY3lhbl8gL19tYWdlbnRhXyAv
X3llbGxvd18gL19ibGFja19dIC9EZXZpY2VDTVlLIHt9XQoJCQkJCQkJCX1pZgoJCQkJCQkJY3Vy
cmVudGRpY3QgMCBmYWxzZSBBR01DT1JFX3NlcGFyYXRlU2hhZGluZwoJCQkJCQkJfWlmCgkJCQkJ
CX1pZgoJCQkJCWN1cnJlbnRkaWN0CgkJCQkJZW5kIGV4Y2gKCQkJCQl9aWYKCQkJCXBvcAoJCQkJ
ZHVwIC9BR01DT1JFX2lnbm9yZXNoYWRlIGtub3duCgkJCQkJewoJCQkJCWJlZ2luCgkJCQkJL0Nv
bG9yU3BhY2UgWy9TZXBhcmF0aW9uIChOb25lKSAvRGV2aWNlR3JheSB7fV0gZGVmCgkJCQkJY3Vy
cmVudGRpY3QgZW5kCgkJCQkJfWlmCgkJCQl9ZGVmCgkJCS9zaGZpbGwKCQkJCXsKCQkJCWNsb25l
ZGljdAoJCQkJQUdNQ09SRV9zZXBhcmF0ZVNoYWRpbmdEaWN0IAoJCQkJZHVwIC9BR01DT1JFX2ln
bm9yZXNoYWRlIGtub3duCgkJCQkJe3BvcH0KCQkJCQl7QUdNQ09SRV8mc3lzc2hmaWxsfWlmZWxz
ZQoJCQkJfWRlZgoJCQkvbWFrZXBhdHRlcm4KCQkJCXsKCQkJCWV4Y2gKCQkJCWR1cCAvUGF0dGVy
blR5cGUgZ2V0IDIgZXEKCQkJCQl7CgkJCQkJY2xvbmVkaWN0CgkJCQkJYmVnaW4KCQkJCQkvU2hh
ZGluZyBTaGFkaW5nIEFHTUNPUkVfc2VwYXJhdGVTaGFkaW5nRGljdCBkZWYKCQkJCQljdXJyZW50
ZGljdCBlbmQKCQkJCQlleGNoIEFHTUNPUkVfJnN5c21ha2VwYXR0ZXJuCgkJCQkJfXsKCQkJCQll
eGNoIEFHTUNPUkVfJnVzcm1ha2VwYXR0ZXJuCgkJCQkJfWlmZWxzZQoJCQkJfWRlZgoJCX1pZgoJ
fWlmCglBR01DT1JFX2luX3JpcF9zZXB7CgkJL3NldGN1c3RvbWNvbG9yCgkJewoJCQlleGNoIGFs
b2FkIHBvcAoJCQlkdXAgNyAxIHJvbGwgaW5SaXBfc3BvdF9oYXNfaW5rIG5vdAl7IAoJCQkJNCB7
NCBpbmRleCBtdWwgNCAxIHJvbGx9CgkJCQlyZXBlYXQKCQkJCS9EZXZpY2VDTVlLIHNldGNvbG9y
c3BhY2UKCQkJCTYgLTIgcm9sbCBwb3AgcG9wCgkJCX17IAoJCQkJQWRvYmVfQUdNX0NvcmUgYmVn
aW4KCQkJCQkvQUdNQ09SRV9rIHhkZiAvQUdNQ09SRV95IHhkZiAvQUdNQ09SRV9tIHhkZiAvQUdN
Q09SRV9jIHhkZgoJCQkJZW5kCgkJCQlbL1NlcGFyYXRpb24gNCAtMSByb2xsIC9EZXZpY2VDTVlL
CgkJCQl7ZHVwIEFHTUNPUkVfYyBtdWwgZXhjaCBkdXAgQUdNQ09SRV9tIG11bCBleGNoIGR1cCBB
R01DT1JFX3kgbXVsIGV4Y2ggQUdNQ09SRV9rIG11bH0KCQkJCV0KCQkJCXNldGNvbG9yc3BhY2UK
CQkJfWlmZWxzZQoJCQlzZXRjb2xvcgoJCX1uZGYKCQkvc2V0c2VwYXJhdGlvbmdyYXkKCQl7CgkJ
CVsvU2VwYXJhdGlvbiAoQWxsKSAvRGV2aWNlR3JheSB7fV0gc2V0Y29sb3JzcGFjZV9vcHQKCQkJ
MSBleGNoIHN1YiBzZXRjb2xvcgoJCX1uZGYKCX17CgkJL3NldHNlcGFyYXRpb25ncmF5CgkJewoJ
CQlBR01DT1JFXyZzZXRncmF5CgkJfW5kZgoJfWlmZWxzZQoJL2ZpbmRjbXlrY3VzdG9tY29sb3IK
CXsKCQk1IG1ha2VyZWFkb25seWFycmF5Cgl9bmRmCgkvc2V0Y3VzdG9tY29sb3IKCXsKCQlleGNo
IGFsb2FkIHBvcCBwb3AKCQk0IHs0IGluZGV4IG11bCA0IDEgcm9sbH0gcmVwZWF0CgkJc2V0Y215
a2NvbG9yIHBvcAoJfW5kZgoJL2hhc19jb2xvcgoJCS9jb2xvcmltYWdlIHdoZXJlewoJCQlBR01D
T1JFX3Byb2R1Y2luZ19zZXBzewoJCQkJcG9wIHRydWUKCQkJfXsKCQkJCXN5c3RlbWRpY3QgZXEK
CQkJfWlmZWxzZQoJCX17CgkJCWZhbHNlCgkJfWlmZWxzZQoJZGVmCgkvbWFwX2luZGV4Cgl7CgkJ
MSBpbmRleCBtdWwgZXhjaCBnZXRpbnRlcnZhbCB7MjU1IGRpdn0gZm9yYWxsCgl9IGJkZgoJL21h
cF9pbmRleGVkX2Rldm4KCXsKCQlMb29rdXAgTmFtZXMgbGVuZ3RoIDMgLTEgcm9sbCBjdmkgbWFw
X2luZGV4Cgl9IGJkZgoJL25fY29sb3JfY29tcG9uZW50cwoJewoJCWJhc2VfY29sb3JzcGFjZV90
eXBlCgkJZHVwIC9EZXZpY2VHcmF5IGVxewoJCQlwb3AgMQoJCX17CgkJCS9EZXZpY2VDTVlLIGVx
ewoJCQkJNAoJCQl9ewoJCQkJMwoJCQl9aWZlbHNlCgkJfWlmZWxzZQoJfWJkZgoJbGV2ZWwyewoJ
CS9tbyAvbW92ZXRvIGxkZgoJCS9saSAvbGluZXRvIGxkZgoJCS9jdiAvY3VydmV0byBsZGYKCQkv
a25vY2tvdXRfdW5pdHNxCgkJewoJCQkxIHNldGdyYXkKCQkJMCAwIDEgMSByZWN0ZmlsbAoJCX1k
ZWYKCQkvbGV2ZWwyU2NyZWVuRnJlcXsKCQkJYmVnaW4KCQkJNjAKCQkJSGFsZnRvbmVUeXBlIDEg
ZXF7CgkJCQlwb3AgRnJlcXVlbmN5CgkJCX1pZgoJCQlIYWxmdG9uZVR5cGUgMiBlcXsKCQkJCXBv
cCBHcmF5RnJlcXVlbmN5CgkJCX1pZgoJCQlIYWxmdG9uZVR5cGUgNSBlcXsKCQkJCXBvcCBEZWZh
dWx0IGxldmVsMlNjcmVlbkZyZXEKCQkJfWlmCgkJCSBlbmQKCQl9ZGVmCgkJL2N1cnJlbnRTY3Jl
ZW5GcmVxewoJCQljdXJyZW50aGFsZnRvbmUgbGV2ZWwyU2NyZWVuRnJlcQoJCX1kZWYKCQlsZXZl
bDIgL3NldGNvbG9yc3BhY2UgQUdNQ09SRV9rZXlfa25vd24gbm90IGFuZHsKCQkJL0FHTUNPUkVf
JiYmc2V0Y29sb3JzcGFjZSAvc2V0Y29sb3JzcGFjZSBsZGYKCQkJL0FHTUNPUkVfUmVwbGFjZU1h
cHBlZENvbG9yCgkJCXsKCQkJCWR1cCB0eXBlIGR1cCAvYXJyYXl0eXBlIGVxIGV4Y2ggL3BhY2tl
ZGFycmF5dHlwZSBlcSBvcgoJCQkJewoJCQkJCWR1cCAwIGdldCBkdXAgL1NlcGFyYXRpb24gZXEK
CQkJCQl7CgkJCQkJCXBvcAoJCQkJCQlkdXAgbGVuZ3RoIGFycmF5IGNvcHkKCQkJCQkJZHVwIGR1
cCAxIGdldAoJCQkJCQljdXJyZW50X3Nwb3RfYWxpYXMKCQkJCQkJewoJCQkJCQkJZHVwIG1hcF9h
bGlhcwoJCQkJCQkJewoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJL3NlcF9jb2xvcnNwYWNlX2RpY3Qg
Y3VycmVudGRpY3QgQUdNQ09SRV9ncHV0CgkJCQkJCQkJcG9wIHBvcAlwb3AKCQkJCQkJCQlbIAoJ
CQkJCQkJCQkvU2VwYXJhdGlvbiBOYW1lIAoJCQkJCQkJCQlDU0EgbWFwX2NzYQoJCQkJCQkJCQlk
dXAgL01hcHBlZENTQSB4ZGYgCgkJCQkJCQkJCS9zZXBfY29sb3JzcGFjZV9wcm9jIGxvYWQKCQkJ
CQkJCQldCgkJCQkJCQkJZHVwIE5hbWUKCQkJCQkJCQllbmQKCQkJCQkJCX1pZgoJCQkJCQl9aWYK
CQkJCQkJbWFwX3Jlc2VydmVkX2lua19uYW1lIDEgeHB0CgkJCQkJfXsKCQkJCQkJL0RldmljZU4g
ZXEgCgkJCQkJCXsKCQkJCQkJCWR1cCBsZW5ndGggYXJyYXkgY29weQoJCQkJCQkJZHVwIGR1cCAx
IGdldCBbIAoJCQkJCQkJCWV4Y2ggewoJCQkJCQkJCQljdXJyZW50X3Nwb3RfYWxpYXN7CgkJCQkJ
CQkJCQlkdXAgbWFwX2FsaWFzewoJCQkJCQkJCQkJCS9OYW1lIGdldCBleGNoIHBvcAoJCQkJCQkJ
CQkJfWlmCgkJCQkJCQkJCX1pZgoJCQkJCQkJCQltYXBfcmVzZXJ2ZWRfaW5rX25hbWUKCQkJCQkJ
CQl9IGZvcmFsbCAKCQkJCQkJCV0gMSB4cHQKCQkJCQkJfWlmCgkJCQkJfWlmZWxzZQoJCQkJfWlm
CgkJCX1kZWYKCQkJL3NldGNvbG9yc3BhY2UKCQkJewoJCQkJZHVwIHR5cGUgZHVwIC9hcnJheXR5
cGUgZXEgZXhjaCAvcGFja2VkYXJyYXl0eXBlIGVxIG9yCgkJCQl7CgkJCQkJZHVwIDAgZ2V0IC9J
bmRleGVkIGVxCgkJCQkJewoJCQkJCQlBR01DT1JFX2Rpc3RpbGxpbmcKCQkJCQkJewoJCQkJCQkJ
L1Bob3Rvc2hvcER1b3RvbmVMaXN0IHdoZXJlCgkJCQkJCQl7CgkJCQkJCQkJcG9wIGZhbHNlCgkJ
CQkJCQl9ewoJCQkJCQkJCXRydWUKCQkJCQkJCX1pZmVsc2UKCQkJCQkJfXsKCQkJCQkJCXRydWUK
CQkJCQkJfWlmZWxzZQoJCQkJCQl7CgkJCQkJCQlhbG9hZCBwb3AgMyAtMSByb2xsCgkJCQkJCQlB
R01DT1JFX1JlcGxhY2VNYXBwZWRDb2xvcgoJCQkJCQkJMyAxIHJvbGwgNCBhcnJheSBhc3RvcmUK
CQkJCQkJfWlmCgkJCQkJfXsKCQkJCQkJQUdNQ09SRV9SZXBsYWNlTWFwcGVkQ29sb3IKCQkJCQl9
aWZlbHNlCgkJCQl9aWYKCQkJCURldmljZU5fUFMyX2luUmlwX3NlcHMge0FHTUNPUkVfJiYmc2V0
Y29sb3JzcGFjZX0gaWYKCQkJfWRlZgoJCX1pZgkKCX17CgkJL2FkagoJCXsKCQkJY3VycmVudHN0
cm9rZWFkanVzdHsKCQkJCXRyYW5zZm9ybQoJCQkJMC4yNSBzdWIgcm91bmQgMC4yNSBhZGQgZXhj
aAoJCQkJMC4yNSBzdWIgcm91bmQgMC4yNSBhZGQgZXhjaAoJCQkJaXRyYW5zZm9ybQoJCQl9aWYK
CQl9ZGVmCgkJL21vewoJCQlhZGogbW92ZXRvCgkJfWRlZgoJCS9saXsKCQkJYWRqIGxpbmV0bwoJ
CX1kZWYKCQkvY3Z7CgkJCTYgMiByb2xsIGFkagoJCQk2IDIgcm9sbCBhZGoKCQkJNiAyIHJvbGwg
YWRqIGN1cnZldG8KCQl9ZGVmCgkJL2tub2Nrb3V0X3VuaXRzcQoJCXsKCQkJMSBzZXRncmF5CgkJ
CTggOCAxIFs4IDAgMCA4IDAgMF0gezxmZmZmZmZmZmZmZmZmZmZmPn0gaW1hZ2UKCQl9ZGVmCgkJ
L2N1cnJlbnRzdHJva2VhZGp1c3R7CgkJCS9jdXJyZW50c3Ryb2tlYWRqdXN0IEFHTUNPUkVfZ2dl
dAoJCX1kZWYKCQkvc2V0c3Ryb2tlYWRqdXN0ewoJCQkvY3VycmVudHN0cm9rZWFkanVzdCBleGNo
IEFHTUNPUkVfZ3B1dAoJCX1kZWYKCQkvY3VycmVudFNjcmVlbkZyZXF7CgkJCWN1cnJlbnRzY3Jl
ZW4gcG9wIHBvcAoJCX1kZWYKCQkvc2V0Y29sb3JzcGFjZQoJCXsKCQkJL2N1cnJlbnRjb2xvcnNw
YWNlIGV4Y2ggQUdNQ09SRV9ncHV0CgkJfSBkZWYKCQkvY3VycmVudGNvbG9yc3BhY2UKCQl7CgkJ
CS9jdXJyZW50Y29sb3JzcGFjZSBBR01DT1JFX2dnZXQKCQl9IGRlZgoJCS9zZXRjb2xvcl9kZXZp
Y2Vjb2xvcgoJCXsKCQkJYmFzZV9jb2xvcnNwYWNlX3R5cGUKCQkJZHVwIC9EZXZpY2VHcmF5IGVx
ewoJCQkJcG9wIHNldGdyYXkKCQkJfXsKCQkJCS9EZXZpY2VDTVlLIGVxewoJCQkJCXNldGNteWtj
b2xvcgoJCQkJfXsKCQkJCQlzZXRyZ2Jjb2xvcgoJCQkJfWlmZWxzZQoJCQl9aWZlbHNlCgkJfWRl
ZgoJCS9zZXRjb2xvcgoJCXsKCQkJY3VycmVudGNvbG9yc3BhY2UgMCBnZXQKCQkJZHVwIC9EZXZp
Y2VHcmF5IG5lewoJCQkJZHVwIC9EZXZpY2VDTVlLIG5lewoJCQkJCWR1cCAvRGV2aWNlUkdCIG5l
ewoJCQkJCQlkdXAgL1NlcGFyYXRpb24gZXF7CgkJCQkJCQlwb3AKCQkJCQkJCWN1cnJlbnRjb2xv
cnNwYWNlIDMgZ2V0IGV4ZWMKCQkJCQkJCWN1cnJlbnRjb2xvcnNwYWNlIDIgZ2V0CgkJCQkJCX17
CgkJCQkJCQlkdXAgL0luZGV4ZWQgZXF7CgkJCQkJCQkJcG9wCgkJCQkJCQkJY3VycmVudGNvbG9y
c3BhY2UgMyBnZXQgZHVwIHR5cGUgL3N0cmluZ3R5cGUgZXF7CgkJCQkJCQkJCWN1cnJlbnRjb2xv
cnNwYWNlIDEgZ2V0IG5fY29sb3JfY29tcG9uZW50cwoJCQkJCQkJCQkzIC0xIHJvbGwgbWFwX2lu
ZGV4CgkJCQkJCQkJfXsKCQkJCQkJCQkJZXhlYwoJCQkJCQkJCX1pZmVsc2UKCQkJCQkJCQljdXJy
ZW50Y29sb3JzcGFjZSAxIGdldAoJCQkJCQkJfXsKCQkJCQkJCQkvQUdNQ09SRV9jdXJfZXJyIC9B
R01DT1JFX2ludmFsaWRfY29sb3Jfc3BhY2UgZGVmCgkJCQkJCQkJQUdNQ09SRV9pbnZhbGlkX2Nv
bG9yX3NwYWNlCgkJCQkJCQl9aWZlbHNlCgkJCQkJCX1pZmVsc2UKCQkJCQl9aWYKCQkJCX1pZgoJ
CQl9aWYKCQkJc2V0Y29sb3JfZGV2aWNlY29sb3IKCQl9IGRlZgoJfWlmZWxzZQoJL3NvcCAvc2V0
b3ZlcnByaW50IGxkZgoJL2x3IC9zZXRsaW5ld2lkdGggbGRmCgkvbGMgL3NldGxpbmVjYXAgbGRm
CgkvbGogL3NldGxpbmVqb2luIGxkZgoJL21sIC9zZXRtaXRlcmxpbWl0IGxkZgoJL2RzaCAvc2V0
ZGFzaCBsZGYKCS9zYWRqIC9zZXRzdHJva2VhZGp1c3QgbGRmCgkvZ3J5IC9zZXRncmF5IGxkZgoJ
L3JnYiAvc2V0cmdiY29sb3IgbGRmCgkvY215ayAvc2V0Y215a2NvbG9yIGxkZgoJL3NlcCAvc2V0
c2VwY29sb3IgbGRmCgkvZGV2biAvc2V0ZGV2aWNlbmNvbG9yIGxkZgoJL2lkeCAvc2V0aW5kZXhl
ZGNvbG9yIGxkZgoJL2NvbHIgL3NldGNvbG9yIGxkZgoJL2NzYWNyZCAvc2V0X2NzYV9jcmQgbGRm
Cgkvc2VwY3MgL3NldHNlcGNvbG9yc3BhY2UgbGRmCgkvZGV2bmNzIC9zZXRkZXZpY2VuY29sb3Jz
cGFjZSBsZGYKCS9pZHhjcyAvc2V0aW5kZXhlZGNvbG9yc3BhY2UgbGRmCgkvY3AgL2Nsb3NlcGF0
aCBsZGYKCS9jbHAgL2NscF9ucHRoIGxkZgoJL2VjbHAgL2VvY2xwX25wdGggbGRmCgkvZiAvZmls
bCBsZGYKCS9lZiAvZW9maWxsIGxkZgoJL0AgL3N0cm9rZSBsZGYKCS9uY2xwIC9ucHRoX2NscCBs
ZGYKCS9nc2V0IC9ncmFwaGljX3NldHVwIGxkZgoJL2djbG4gL2dyYXBoaWNfY2xlYW51cCBsZGYK
CWN1cnJlbnRkaWN0ewoJCWR1cCB4Y2hlY2sgMSBpbmRleCB0eXBlIGR1cCAvYXJyYXl0eXBlIGVx
IGV4Y2ggL3BhY2tlZGFycmF5dHlwZSBlcSBvciBhbmQgewoJCQliaW5kCgkJfWlmCgkJZGVmCgl9
Zm9yYWxsCgkvY3VycmVudHBhZ2VkZXZpY2UgY3VycmVudHBhZ2VkZXZpY2UgZGVmCi9nZXRyYW1w
Y29sb3IgewovaW5keCBleGNoIGRlZgowIDEgTnVtQ29tcCAxIHN1YiB7CmR1cApTYW1wbGVzIGV4
Y2ggZ2V0CmR1cCB0eXBlIC9zdHJpbmd0eXBlIGVxIHsgaW5keCBnZXQgfSBpZgpleGNoClNjYWxp
bmcgZXhjaCBnZXQgYWxvYWQgcG9wCjMgMSByb2xsCm11bCBhZGQKfSBmb3IKQ29sb3JTcGFjZUZh
bWlseSAvU2VwYXJhdGlvbiBlcQoJewoJc2VwCgl9Cgl7CglDb2xvclNwYWNlRmFtaWx5IC9EZXZp
Y2VOIGVxCgkJewoJCWRldm4KCQl9CgkJewoJCXNldGNvbG9yCgkJfWlmZWxzZQoJfWlmZWxzZQp9
IGJpbmQgZGVmCi9zc3NldGJhY2tncm91bmQgeyBhbG9hZCBwb3Agc2V0Y29sb3IgfSBiaW5kIGRl
ZgovUmFkaWFsU2hhZGUgewo0MCBkaWN0IGJlZ2luCi9Db2xvclNwYWNlRmFtaWx5IGV4Y2ggZGVm
Ci9iYWNrZ3JvdW5kIGV4Y2ggZGVmCi9leHQxIGV4Y2ggZGVmCi9leHQwIGV4Y2ggZGVmCi9CQm94
IGV4Y2ggZGVmCi9yMiBleGNoIGRlZgovYzJ5IGV4Y2ggZGVmCi9jMnggZXhjaCBkZWYKL3IxIGV4
Y2ggZGVmCi9jMXkgZXhjaCBkZWYKL2MxeCBleGNoIGRlZgovcmFtcGRpY3QgZXhjaCBkZWYKL3Nl
dGlua292ZXJwcmludCB3aGVyZSB7cG9wIC9zZXRpbmtvdmVycHJpbnR7cG9wfWRlZn1pZgpnc2F2
ZQpCQm94IGxlbmd0aCAwIGd0IHsKbmV3cGF0aApCQm94IDAgZ2V0IEJCb3ggMSBnZXQgbW92ZXRv
CkJCb3ggMiBnZXQgQkJveCAwIGdldCBzdWIgMCBybGluZXRvCjAgQkJveCAzIGdldCBCQm94IDEg
Z2V0IHN1YiBybGluZXRvCkJCb3ggMiBnZXQgQkJveCAwIGdldCBzdWIgbmVnIDAgcmxpbmV0bwpj
bG9zZXBhdGgKY2xpcApuZXdwYXRoCn0gaWYKYzF4IGMyeCBlcQp7CmMxeSBjMnkgbHQgey90aGV0
YSA5MCBkZWZ9ey90aGV0YSAyNzAgZGVmfSBpZmVsc2UKfQp7Ci9zbG9wZSBjMnkgYzF5IHN1YiBj
MnggYzF4IHN1YiBkaXYgZGVmCi90aGV0YSBzbG9wZSAxIGF0YW4gZGVmCmMyeCBjMXggbHQgYzJ5
IGMxeSBnZSBhbmQgeyAvdGhldGEgdGhldGEgMTgwIHN1YiBkZWZ9IGlmCmMyeCBjMXggbHQgYzJ5
IGMxeSBsdCBhbmQgeyAvdGhldGEgdGhldGEgMTgwIGFkZCBkZWZ9IGlmCn0KaWZlbHNlCmdzYXZl
CmNsaXBwYXRoCmMxeCBjMXkgdHJhbnNsYXRlCnRoZXRhIHJvdGF0ZQotOTAgcm90YXRlCnsgcGF0
aGJib3ggfSBzdG9wcGVkCnsgMCAwIDAgMCB9IGlmCi95TWF4IGV4Y2ggZGVmCi94TWF4IGV4Y2gg
ZGVmCi95TWluIGV4Y2ggZGVmCi94TWluIGV4Y2ggZGVmCmdyZXN0b3JlCnhNYXggeE1pbiBlcSB5
TWF4IHlNaW4gZXEgb3IKewpncmVzdG9yZQplbmQKfQp7Ci9tYXggeyAyIGNvcHkgZ3QgeyBwb3Ag
fSB7ZXhjaCBwb3B9IGlmZWxzZSB9IGJpbmQgZGVmCi9taW4geyAyIGNvcHkgbHQgeyBwb3AgfSB7
ZXhjaCBwb3B9IGlmZWxzZSB9IGJpbmQgZGVmCnJhbXBkaWN0IGJlZ2luCjQwIGRpY3QgYmVnaW4K
YmFja2dyb3VuZCBsZW5ndGggMCBndCB7IGJhY2tncm91bmQgc3NzZXRiYWNrZ3JvdW5kIGdzYXZl
IGNsaXBwYXRoIGZpbGwgZ3Jlc3RvcmUgfSBpZgpnc2F2ZQpjMXggYzF5IHRyYW5zbGF0ZQp0aGV0
YSByb3RhdGUKLTkwIHJvdGF0ZQovYzJ5IGMxeCBjMnggc3ViIGR1cCBtdWwgYzF5IGMyeSBzdWIg
ZHVwIG11bCBhZGQgc3FydCBkZWYKL2MxeSAwIGRlZgovYzF4IDAgZGVmCi9jMnggMCBkZWYKZXh0
MCB7CjAgZ2V0cmFtcGNvbG9yCmMyeSByMiBhZGQgcjEgc3ViIDAuMDAwMSBsdAp7CmMxeCBjMXkg
cjEgMzYwIDAgYXJjbgpwYXRoYmJveAovYXltYXggZXhjaCBkZWYKL2F4bWF4IGV4Y2ggZGVmCi9h
eW1pbiBleGNoIGRlZgovYXhtaW4gZXhjaCBkZWYKL2J4TWluIHhNaW4gYXhtaW4gbWluIGRlZgov
YnlNaW4geU1pbiBheW1pbiBtaW4gZGVmCi9ieE1heCB4TWF4IGF4bWF4IG1heCBkZWYKL2J5TWF4
IHlNYXggYXltYXggbWF4IGRlZgpieE1pbiBieU1pbiBtb3ZldG8KYnhNYXggYnlNaW4gbGluZXRv
CmJ4TWF4IGJ5TWF4IGxpbmV0bwpieE1pbiBieU1heCBsaW5ldG8KYnhNaW4gYnlNaW4gbGluZXRv
CmVvZmlsbAp9CnsKYzJ5IHIxIGFkZCByMiBsZQp7CmMxeCBjMXkgcjEgMCAzNjAgYXJjCmZpbGwK
fQp7CmMyeCBjMnkgcjIgMCAzNjAgYXJjIGZpbGwKcjEgcjIgZXEKewovcDF4IHIxIG5lZyBkZWYK
L3AxeSBjMXkgZGVmCi9wMnggcjEgZGVmCi9wMnkgYzF5IGRlZgpwMXggcDF5IG1vdmV0byBwMngg
cDJ5IGxpbmV0byBwMnggeU1pbiBsaW5ldG8gcDF4IHlNaW4gbGluZXRvCmZpbGwKfQp7Ci9BQSBy
MiByMSBzdWIgYzJ5IGRpdiBkZWYKL3RoZXRhIEFBIDEgQUEgZHVwIG11bCBzdWIgc3FydCBkaXYg
MSBhdGFuIGRlZgovU1MxIDkwIHRoZXRhIGFkZCBkdXAgc2luIGV4Y2ggY29zIGRpdiBkZWYKL3Ax
eCByMSBTUzEgU1MxIG11bCBTUzEgU1MxIG11bCAxIGFkZCBkaXYgc3FydCBtdWwgbmVnIGRlZgov
cDF5IHAxeCBTUzEgZGl2IG5lZyBkZWYKL1NTMiA5MCB0aGV0YSBzdWIgZHVwIHNpbiBleGNoIGNv
cyBkaXYgZGVmCi9wMnggcjEgU1MyIFNTMiBtdWwgU1MyIFNTMiBtdWwgMSBhZGQgZGl2IHNxcnQg
bXVsIGRlZgovcDJ5IHAyeCBTUzIgZGl2IG5lZyBkZWYKcjEgcjIgZ3QKewovTDFtYXhYIHAxeCB5
TWluIHAxeSBzdWIgU1MxIGRpdiBhZGQgZGVmCi9MMm1heFggcDJ4IHlNaW4gcDJ5IHN1YiBTUzIg
ZGl2IGFkZCBkZWYKfQp7Ci9MMW1heFggMCBkZWYKL0wybWF4WCAwIGRlZgp9aWZlbHNlCnAxeCBw
MXkgbW92ZXRvIHAyeCBwMnkgbGluZXRvIEwybWF4WCBMMm1heFggcDJ4IHN1YiBTUzIgbXVsIHAy
eSBhZGQgbGluZXRvCkwxbWF4WCBMMW1heFggcDF4IHN1YiBTUzEgbXVsIHAxeSBhZGQgbGluZXRv
CmZpbGwKfQppZmVsc2UKfQppZmVsc2UKfSBpZmVsc2UKfSBpZgpjMXggYzJ4IHN1YiBkdXAgbXVs
CmMxeSBjMnkgc3ViIGR1cCBtdWwKYWRkIDAuNSBleHAKMCBkdHJhbnNmb3JtCmR1cCBtdWwgZXhj
aCBkdXAgbXVsIGFkZCAwLjUgZXhwIDcyIGRpdgowIDcyIG1hdHJpeCBkZWZhdWx0bWF0cml4IGR0
cmFuc2Zvcm0gZHVwIG11bCBleGNoIGR1cCBtdWwgYWRkIHNxcnQKNzIgMCBtYXRyaXggZGVmYXVs
dG1hdHJpeCBkdHJhbnNmb3JtIGR1cCBtdWwgZXhjaCBkdXAgbXVsIGFkZCBzcXJ0CjEgaW5kZXgg
MSBpbmRleCBsdCB7IGV4Y2ggfSBpZiBwb3AKL2hpcmVzIGV4Y2ggZGVmCmhpcmVzIG11bAovbnVt
cGl4IGV4Y2ggZGVmCi9udW1zdGVwcyBOdW1TYW1wbGVzIGRlZgovcmFtcEluZHhJbmMgMSBkZWYK
L3N1YnNhbXBsaW5nIGZhbHNlIGRlZgpudW1waXggMCBuZQp7Ck51bVNhbXBsZXMgbnVtcGl4IGRp
diAwLjUgZ3QKewovbnVtc3RlcHMgbnVtcGl4IDIgZGl2IHJvdW5kIGN2aSBkdXAgMSBsZSB7IHBv
cCAyIH0gaWYgZGVmCi9yYW1wSW5keEluYyBOdW1TYW1wbGVzIDEgc3ViIG51bXN0ZXBzIGRpdiBk
ZWYKL3N1YnNhbXBsaW5nIHRydWUgZGVmCn0gaWYKfSBpZgoveEluYyBjMnggYzF4IHN1YiBudW1z
dGVwcyBkaXYgZGVmCi95SW5jIGMyeSBjMXkgc3ViIG51bXN0ZXBzIGRpdiBkZWYKL3JJbmMgcjIg
cjEgc3ViIG51bXN0ZXBzIGRpdiBkZWYKL2N4IGMxeCBkZWYKL2N5IGMxeSBkZWYKL3JhZGl1cyBy
MSBkZWYKbmV3cGF0aAp4SW5jIDAgZXEgeUluYyAwIGVxIHJJbmMgMCBlcSBhbmQgYW5kCnsKMCBn
ZXRyYW1wY29sb3IKY3ggY3kgcmFkaXVzIDAgMzYwIGFyYwpzdHJva2UKTnVtU2FtcGxlcyAxIHN1
YiBnZXRyYW1wY29sb3IKY3ggY3kgcmFkaXVzIDcyIGhpcmVzIGRpdiBhZGQgMCAzNjAgYXJjCjAg
c2V0bGluZXdpZHRoCnN0cm9rZQp9CnsKMApudW1zdGVwcwp7CmR1cApzdWJzYW1wbGluZyB7IHJv
dW5kIGN2aSB9IGlmCmdldHJhbXBjb2xvcgpjeCBjeSByYWRpdXMgMCAzNjAgYXJjCi9jeCBjeCB4
SW5jIGFkZCBkZWYKL2N5IGN5IHlJbmMgYWRkIGRlZgovcmFkaXVzIHJhZGl1cyBySW5jIGFkZCBk
ZWYKY3ggY3kgcmFkaXVzIDM2MCAwIGFyY24KZW9maWxsCnJhbXBJbmR4SW5jIGFkZAp9CnJlcGVh
dApwb3AKfSBpZmVsc2UKZXh0MSB7CmMyeSByMiBhZGQgcjEgbHQKewpjMnggYzJ5IHIyIDAgMzYw
IGFyYwpmaWxsCn0KewpjMnkgcjEgYWRkIHIyIHN1YiAwLjAwMDEgbGUKewpjMnggYzJ5IHIyIDM2
MCAwIGFyY24KcGF0aGJib3gKL2F5bWF4IGV4Y2ggZGVmCi9heG1heCBleGNoIGRlZgovYXltaW4g
ZXhjaCBkZWYKL2F4bWluIGV4Y2ggZGVmCi9ieE1pbiB4TWluIGF4bWluIG1pbiBkZWYKL2J5TWlu
IHlNaW4gYXltaW4gbWluIGRlZgovYnhNYXggeE1heCBheG1heCBtYXggZGVmCi9ieU1heCB5TWF4
IGF5bWF4IG1heCBkZWYKYnhNaW4gYnlNaW4gbW92ZXRvCmJ4TWF4IGJ5TWluIGxpbmV0bwpieE1h
eCBieU1heCBsaW5ldG8KYnhNaW4gYnlNYXggbGluZXRvCmJ4TWluIGJ5TWluIGxpbmV0bwplb2Zp
bGwKfQp7CmMyeCBjMnkgcjIgMCAzNjAgYXJjIGZpbGwKcjEgcjIgZXEKewovcDF4IHIyIG5lZyBk
ZWYKL3AxeSBjMnkgZGVmCi9wMnggcjIgZGVmCi9wMnkgYzJ5IGRlZgpwMXggcDF5IG1vdmV0byBw
MnggcDJ5IGxpbmV0byBwMnggeU1heCBsaW5ldG8gcDF4IHlNYXggbGluZXRvCmZpbGwKfQp7Ci9B
QSByMiByMSBzdWIgYzJ5IGRpdiBkZWYKL3RoZXRhIEFBIDEgQUEgZHVwIG11bCBzdWIgc3FydCBk
aXYgMSBhdGFuIGRlZgovU1MxIDkwIHRoZXRhIGFkZCBkdXAgc2luIGV4Y2ggY29zIGRpdiBkZWYK
L3AxeCByMiBTUzEgU1MxIG11bCBTUzEgU1MxIG11bCAxIGFkZCBkaXYgc3FydCBtdWwgbmVnIGRl
ZgovcDF5IGMyeSBwMXggU1MxIGRpdiBzdWIgZGVmCi9TUzIgOTAgdGhldGEgc3ViIGR1cCBzaW4g
ZXhjaCBjb3MgZGl2IGRlZgovcDJ4IHIyIFNTMiBTUzIgbXVsIFNTMiBTUzIgbXVsIDEgYWRkIGRp
diBzcXJ0IG11bCBkZWYKL3AyeSBjMnkgcDJ4IFNTMiBkaXYgc3ViIGRlZgpyMSByMiBsdAp7Ci9M
MW1heFggcDF4IHlNYXggcDF5IHN1YiBTUzEgZGl2IGFkZCBkZWYKL0wybWF4WCBwMnggeU1heCBw
Mnkgc3ViIFNTMiBkaXYgYWRkIGRlZgp9CnsKL0wxbWF4WCAwIGRlZgovTDJtYXhYIDAgZGVmCn1p
ZmVsc2UKcDF4IHAxeSBtb3ZldG8gcDJ4IHAyeSBsaW5ldG8gTDJtYXhYIEwybWF4WCBwMnggc3Vi
IFNTMiBtdWwgcDJ5IGFkZCBsaW5ldG8KTDFtYXhYIEwxbWF4WCBwMXggc3ViIFNTMSBtdWwgcDF5
IGFkZCBsaW5ldG8KZmlsbAp9CmlmZWxzZQp9CmlmZWxzZQp9IGlmZWxzZQp9IGlmCmdyZXN0b3Jl
CmdyZXN0b3JlCmVuZAplbmQKZW5kCn0gaWZlbHNlCn0gYmluZCBkZWYKL0dlblN0cmlwcyB7CjQw
IGRpY3QgYmVnaW4KL0NvbG9yU3BhY2VGYW1pbHkgZXhjaCBkZWYKL2JhY2tncm91bmQgZXhjaCBk
ZWYKL2V4dDEgZXhjaCBkZWYKL2V4dDAgZXhjaCBkZWYKL0JCb3ggZXhjaCBkZWYKL3kyIGV4Y2gg
ZGVmCi94MiBleGNoIGRlZgoveTEgZXhjaCBkZWYKL3gxIGV4Y2ggZGVmCi9yYW1wZGljdCBleGNo
IGRlZgovc2V0aW5rb3ZlcnByaW50IHdoZXJlIHtwb3AgL3NldGlua292ZXJwcmludHtwb3B9ZGVm
fWlmCmdzYXZlCkJCb3ggbGVuZ3RoIDAgZ3QgewpuZXdwYXRoCkJCb3ggMCBnZXQgQkJveCAxIGdl
dCBtb3ZldG8KQkJveCAyIGdldCBCQm94IDAgZ2V0IHN1YiAwIHJsaW5ldG8KMCBCQm94IDMgZ2V0
IEJCb3ggMSBnZXQgc3ViIHJsaW5ldG8KQkJveCAyIGdldCBCQm94IDAgZ2V0IHN1YiBuZWcgMCBy
bGluZXRvCmNsb3NlcGF0aApjbGlwCm5ld3BhdGgKfSBpZgp4MSB4MiBlcQp7CnkxIHkyIGx0IHsv
dGhldGEgOTAgZGVmfXsvdGhldGEgMjcwIGRlZn0gaWZlbHNlCn0Kewovc2xvcGUgeTIgeTEgc3Vi
IHgyIHgxIHN1YiBkaXYgZGVmCi90aGV0YSBzbG9wZSAxIGF0YW4gZGVmCngyIHgxIGx0IHkyIHkx
IGdlIGFuZCB7IC90aGV0YSB0aGV0YSAxODAgc3ViIGRlZn0gaWYKeDIgeDEgbHQgeTIgeTEgbHQg
YW5kIHsgL3RoZXRhIHRoZXRhIDE4MCBhZGQgZGVmfSBpZgp9CmlmZWxzZQpnc2F2ZQpjbGlwcGF0
aAp4MSB5MSB0cmFuc2xhdGUKdGhldGEgcm90YXRlCnsgcGF0aGJib3ggfSBzdG9wcGVkCnsgMCAw
IDAgMCB9IGlmCi95TWF4IGV4Y2ggZGVmCi94TWF4IGV4Y2ggZGVmCi95TWluIGV4Y2ggZGVmCi94
TWluIGV4Y2ggZGVmCmdyZXN0b3JlCnhNYXggeE1pbiBlcSB5TWF4IHlNaW4gZXEgb3IKewpncmVz
dG9yZQplbmQKfQp7CnJhbXBkaWN0IGJlZ2luCjIwIGRpY3QgYmVnaW4KYmFja2dyb3VuZCBsZW5n
dGggMCBndCB7IGJhY2tncm91bmQgc3NzZXRiYWNrZ3JvdW5kIGdzYXZlIGNsaXBwYXRoIGZpbGwg
Z3Jlc3RvcmUgfSBpZgpnc2F2ZQp4MSB5MSB0cmFuc2xhdGUKdGhldGEgcm90YXRlCi94U3RhcnQg
MCBkZWYKL3hFbmQgeDIgeDEgc3ViIGR1cCBtdWwgeTIgeTEgc3ViIGR1cCBtdWwgYWRkIDAuNSBl
eHAgZGVmCi95U3BhbiB5TWF4IHlNaW4gc3ViIGRlZgovbnVtc3RlcHMgTnVtU2FtcGxlcyBkZWYK
L3JhbXBJbmR4SW5jIDEgZGVmCi9zdWJzYW1wbGluZyBmYWxzZSBkZWYKeFN0YXJ0IDAgdHJhbnNm
b3JtCnhFbmQgMCB0cmFuc2Zvcm0KMyAtMSByb2xsCnN1YiBkdXAgbXVsCjMgMSByb2xsCnN1YiBk
dXAgbXVsCmFkZCAwLjUgZXhwIDcyIGRpdgowIDcyIG1hdHJpeCBkZWZhdWx0bWF0cml4IGR0cmFu
c2Zvcm0gZHVwIG11bCBleGNoIGR1cCBtdWwgYWRkIHNxcnQKNzIgMCBtYXRyaXggZGVmYXVsdG1h
dHJpeCBkdHJhbnNmb3JtIGR1cCBtdWwgZXhjaCBkdXAgbXVsIGFkZCBzcXJ0CjEgaW5kZXggMSBp
bmRleCBsdCB7IGV4Y2ggfSBpZiBwb3AKbXVsCi9udW1waXggZXhjaCBkZWYKbnVtcGl4IDAgbmUK
ewpOdW1TYW1wbGVzIG51bXBpeCBkaXYgMC41IGd0CnsKL251bXN0ZXBzIG51bXBpeCAyIGRpdiBy
b3VuZCBjdmkgZHVwIDEgbGUgeyBwb3AgMiB9IGlmIGRlZgovcmFtcEluZHhJbmMgTnVtU2FtcGxl
cyAxIHN1YiBudW1zdGVwcyBkaXYgZGVmCi9zdWJzYW1wbGluZyB0cnVlIGRlZgp9IGlmCn0gaWYK
ZXh0MCB7CjAgZ2V0cmFtcGNvbG9yCnhNaW4geFN0YXJ0IGx0CnsgeE1pbiB5TWluIHhNaW4gbmVn
IHlTcGFuIHJlY3RmaWxsIH0gaWYKfSBpZgoveEluYyB4RW5kIHhTdGFydCBzdWIgbnVtc3RlcHMg
ZGl2IGRlZgoveCB4U3RhcnQgZGVmCjAKbnVtc3RlcHMKewpkdXAKc3Vic2FtcGxpbmcgeyByb3Vu
ZCBjdmkgfSBpZgpnZXRyYW1wY29sb3IKeCB5TWluIHhJbmMgeVNwYW4gcmVjdGZpbGwKL3ggeCB4
SW5jIGFkZCBkZWYKcmFtcEluZHhJbmMgYWRkCn0KcmVwZWF0CnBvcApleHQxIHsKeE1heCB4RW5k
IGd0CnsgeEVuZCB5TWluIHhNYXggeEVuZCBzdWIgeVNwYW4gcmVjdGZpbGwgfSBpZgp9IGlmCmdy
ZXN0b3JlCmdyZXN0b3JlCmVuZAplbmQKZW5kCn0gaWZlbHNlCn0gYmluZCBkZWYKfWRlZgovcGFn
ZV90cmFpbGVyCnsKCWVuZAp9ZGVmCi9kb2NfdHJhaWxlcnsKfWRlZgpzeXN0ZW1kaWN0IC9maW5k
Y29sb3JyZW5kZXJpbmcga25vd257CgkvZmluZGNvbG9ycmVuZGVyaW5nIHN5c3RlbWRpY3QgL2Zp
bmRjb2xvcnJlbmRlcmluZyBnZXQgZGVmCn1pZgpzeXN0ZW1kaWN0IC9zZXRjb2xvcnJlbmRlcmlu
ZyBrbm93bnsKCS9zZXRjb2xvcnJlbmRlcmluZyBzeXN0ZW1kaWN0IC9zZXRjb2xvcnJlbmRlcmlu
ZyBnZXQgZGVmCn1pZgovdGVzdF9jbXlrX2NvbG9yX3BsYXRlCnsKCWdzYXZlCglzZXRjbXlrY29s
b3IgY3VycmVudGdyYXkgMSBuZQoJZ3Jlc3RvcmUKfWRlZgovaW5SaXBfc3BvdF9oYXNfaW5rCnsK
CWR1cCBBZG9iZV9BR01fQ29yZS9BR01DT1JFX25hbWUgeGRkZgoJY29udmVydF9zcG90X3RvX3By
b2Nlc3Mgbm90Cn1kZWYKL21hcDI1NV90b19yYW5nZQp7CgkxIGluZGV4IHN1YgoJMyAtMSByb2xs
IDI1NSBkaXYgbXVsIGFkZAp9ZGVmCi9zZXRfY3NhX2NyZAp7Cgkvc2VwX2NvbG9yc3BhY2VfZGlj
dCBudWxsIEFHTUNPUkVfZ3B1dAoJYmVnaW4KCQlDU0EgbWFwX2NzYSBzZXRjb2xvcnNwYWNlX29w
dAoJCXNldF9jcmQKCWVuZAp9CmRlZgovc2V0c2VwY29sb3IKeyAKCS9zZXBfY29sb3JzcGFjZV9k
aWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJCWR1cCAvc2VwX3RpbnQgZXhjaCBBR01DT1JFX2dwdXQK
CQlUaW50UHJvYwoJZW5kCn0gZGVmCi9zZXRkZXZpY2VuY29sb3IKeyAKCS9kZXZpY2VuX2NvbG9y
c3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQlOYW1lcyBsZW5ndGggY29weQoJCU5hbWVz
IGxlbmd0aCAxIHN1YiAtMSAwCgkJewoJCQkvZGV2aWNlbl90aW50cyBBR01DT1JFX2dnZXQgMyAx
IHJvbGwgeHB0CgkJfSBmb3IKCQlUaW50UHJvYwoJZW5kCn0gZGVmCi9zZXBfY29sb3JzcGFjZV9w
cm9jCnsKCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfdG1wIHhkZGYKCS9zZXBfY29sb3JzcGFjZV9k
aWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJY3VycmVudGRpY3QvQ29tcG9uZW50cyBrbm93bnsKCQlD
b21wb25lbnRzIGFsb2FkIHBvcCAKCQlUaW50TWV0aG9kL0xhYiBlcXsKCQkJMiB7QUdNQ09SRV90
bXAgbXVsIE5Db21wb25lbnRzIDEgcm9sbH0gcmVwZWF0CgkJCUxNYXggc3ViIEFHTUNPUkVfdG1w
IG11bCBMTWF4IGFkZCAgTkNvbXBvbmVudHMgMSByb2xsCgkJfXsKCQkJVGludE1ldGhvZC9TdWJ0
cmFjdGl2ZSBlcXsKCQkJCU5Db21wb25lbnRzewoJCQkJCUFHTUNPUkVfdG1wIG11bCBOQ29tcG9u
ZW50cyAxIHJvbGwKCQkJCX1yZXBlYXQKCQkJfXsKCQkJCU5Db21wb25lbnRzewoJCQkJCTEgc3Vi
IEFHTUNPUkVfdG1wIG11bCAxIGFkZCAgTkNvbXBvbmVudHMgMSByb2xsCgkJCQl9IHJlcGVhdAoJ
CQl9aWZlbHNlCgkJfWlmZWxzZQoJfXsKCQlDb2xvckxvb2t1cCBBR01DT1JFX3RtcCBDb2xvckxv
b2t1cCBsZW5ndGggMSBzdWIgbXVsIHJvdW5kIGN2aSBnZXQKCQlhbG9hZCBwb3AKCX1pZmVsc2UK
CWVuZAp9IGRlZgovc2VwX2NvbG9yc3BhY2VfZ3JheV9wcm9jCnsKCUFkb2JlX0FHTV9Db3JlL0FH
TUNPUkVfdG1wIHhkZGYKCS9zZXBfY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJ
R3JheUxvb2t1cCBBR01DT1JFX3RtcCBHcmF5TG9va3VwIGxlbmd0aCAxIHN1YiBtdWwgcm91bmQg
Y3ZpIGdldAoJZW5kCn0gZGVmCi9zZXBfcHJvY19uYW1lCnsKCWR1cCAwIGdldCAKCWR1cCAvRGV2
aWNlUkdCIGVxIGV4Y2ggL0RldmljZUNNWUsgZXEgb3IgbGV2ZWwyIG5vdCBhbmQgaGFzX2NvbG9y
IG5vdCBhbmR7CgkJcG9wIFsvRGV2aWNlR3JheV0KCQkvc2VwX2NvbG9yc3BhY2VfZ3JheV9wcm9j
Cgl9ewoJCS9zZXBfY29sb3JzcGFjZV9wcm9jCgl9aWZlbHNlCn0gZGVmCi9zZXRzZXBjb2xvcnNw
YWNlCnsgCgljdXJyZW50X3Nwb3RfYWxpYXN7CgkJZHVwIGJlZ2luCgkJCU5hbWUgbWFwX2FsaWFz
ewoJCQkJZXhjaCBwb3AKCQkJfWlmCgkJZW5kCgl9aWYKCWR1cCAvc2VwX2NvbG9yc3BhY2VfZGlj
dCBleGNoIEFHTUNPUkVfZ3B1dAoJYmVnaW4KCS9NYXBwZWRDU0EgQ1NBIG1hcF9jc2EgZGVmCglB
ZG9iZV9BR01fQ29yZS9BR01DT1JFX3NlcF9zcGVjaWFsIE5hbWUgZHVwICgpIGVxIGV4Y2ggKEFs
bCkgZXEgb3IgZGRmCglBR01DT1JFX2F2b2lkX0wyX3NlcF9zcGFjZXsKCQlbL0luZGV4ZWQgTWFw
cGVkQ1NBIHNlcF9wcm9jX25hbWUgMjU1IGV4Y2ggCgkJCXsgMjU1IGRpdiB9IC9leGVjIGN2eCAz
IC0xIHJvbGwgWyA0IDEgcm9sbCBsb2FkIC9leGVjIGN2eCBdIGN2eCAKCQldIHNldGNvbG9yc3Bh
Y2Vfb3B0CgkJL1RpbnRQcm9jIHsKCQkJMjU1IG11bCByb3VuZCBjdmkgc2V0Y29sb3IKCQl9YmRm
Cgl9ewoJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcSAKCQljdXJyZW50ZGljdC9Db21w
b25lbnRzIGtub3duIGFuZCAKCQlBR01DT1JFX3NlcF9zcGVjaWFsIG5vdCBhbmR7CgkJCS9UaW50
UHJvYyBbCgkJCQlDb21wb25lbnRzIGFsb2FkIHBvcCBOYW1lIGZpbmRjbXlrY3VzdG9tY29sb3Ig
CgkJCQkvZXhjaCBjdnggL3NldGN1c3RvbWNvbG9yIGN2eAoJCQldIGN2eCBiZGYKCQl9ewogCQkJ
QUdNQ09SRV9ob3N0X3NlcCBOYW1lIChBbGwpIGVxIGFuZHsKIAkJCQkvVGludFByb2MgeyAKCQkJ
CQkxIGV4Y2ggc3ViIHNldHNlcGFyYXRpb25ncmF5IAoJCQkJfWJkZgogCQkJfXsKCQkJCUFHTUNP
UkVfaW5fcmlwX3NlcCBNYXBwZWRDU0EgMCBnZXQgL0RldmljZUNNWUsgZXEgYW5kIAoJCQkJQUdN
Q09SRV9ob3N0X3NlcCBvcgoJCQkJTmFtZSAoKSBlcSBhbmR7CgkJCQkJL1RpbnRQcm9jIFsKCQkJ
CQkJTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCAwIGdldCAvRGV2aWNlQ01ZSyBlcXsKCQkJ
CQkJCWN2eCAvc2V0Y215a2NvbG9yIGN2eAoJCQkJCQl9ewoJCQkJCQkJY3Z4IC9zZXRncmF5IGN2
eAoJCQkJCQl9aWZlbHNlCgkJCQkJXSBjdnggYmRmCgkJCQl9ewoJCQkJCUFHTUNPUkVfcHJvZHVj
aW5nX3NlcHMgTWFwcGVkQ1NBIDAgZ2V0IGR1cCAvRGV2aWNlQ01ZSyBlcSBleGNoIC9EZXZpY2VH
cmF5IGVxIG9yIGFuZCBBR01DT1JFX3NlcF9zcGVjaWFsIG5vdCBhbmR7CgkgCQkJCQkvVGludFBy
b2MgWwoJCQkJCQkJL2R1cCBjdngKCQkJCQkJCU1hcHBlZENTQSBzZXBfcHJvY19uYW1lIGN2eCBl
eGNoCgkJCQkJCQkwIGdldCAvRGV2aWNlR3JheSBlcXsKCQkJCQkJCQkxIC9leGNoIGN2eCAvc3Vi
IGN2eCAwIDAgMCA0IC0xIC9yb2xsIGN2eAoJCQkJCQkJfWlmCgkJCQkJCQkvTmFtZSBjdnggL2Zp
bmRjbXlrY3VzdG9tY29sb3IgY3Z4IC9leGNoIGN2eAoJCQkJCQkJQUdNQ09SRV9ob3N0X3NlcHsK
CQkJCQkJCQlBR01DT1JFX2lzX2NteWtfc2VwCgkJCQkJCQkJL05hbWUgY3Z4IAoJCQkJCQkJCS9B
R01DT1JFX0lzU2VwYXJhdGlvbkFQcm9jZXNzQ29sb3IgbG9hZCAvZXhlYyBjdngKCQkJCQkJCQkv
bm90IGN2eCAvYW5kIGN2eCAKCQkJCQkJCX17CgkJCQkJCQkJTmFtZSBpblJpcF9zcG90X2hhc19p
bmsgbm90CgkJCQkJCQl9aWZlbHNlCgkJCQkJCQlbCgkJIAkJCQkJCS9wb3AgY3Z4IDEKCQkJCQkJ
CV0gY3Z4IC9pZiBjdngKCQkJCQkJCS9zZXRjdXN0b21jb2xvciBjdngKCQkJCQkJXSBjdnggYmRm
CiAJCQkJCX17IAoJCQkJCQkvVGludFByb2MgL3NldGNvbG9yIGxkZgoJCQkJCQlbL1NlcGFyYXRp
b24gTmFtZSBNYXBwZWRDU0Egc2VwX3Byb2NfbmFtZSBsb2FkIF0gc2V0Y29sb3JzcGFjZV9vcHQK
CQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgl9aWZlbHNlCglz
ZXRfY3JkCglzZXRzZXBjb2xvcgoJZW5kCn0gZGVmCi9hZGRpdGl2ZV9ibGVuZAp7CiAgCTMgZGlj
dCBiZWdpbgogIAkvbnVtYXJyYXlzIHhkZgogIAkvbnVtY29sb3JzIHhkZgogIAkwIDEgbnVtY29s
b3JzIDEgc3ViCiAgCQl7CiAgCQkvYzEgeGRmCiAgCQkxCiAgCQkwIDEgbnVtYXJyYXlzIDEgc3Vi
CiAgCQkJewoJCQkxIGV4Y2ggYWRkIC9pbmRleCBjdngKICAJCQljMSAvZ2V0IGN2eCAvbXVsIGN2
eAogIAkJCX1mb3IKIAkJbnVtYXJyYXlzIDEgYWRkIDEgL3JvbGwgY3Z4IAogIAkJfWZvcgogCW51
bWFycmF5cyBbL3BvcCBjdnhdIGN2eCAvcmVwZWF0IGN2eAogIAllbmQKfWRlZgovc3VidHJhY3Rp
dmVfYmxlbmQKewoJMyBkaWN0IGJlZ2luCgkvbnVtYXJyYXlzIHhkZgoJL251bWNvbG9ycyB4ZGYK
CTAgMSBudW1jb2xvcnMgMSBzdWIKCQl7CgkJL2MxIHhkZgoJCTEgMQoJCTAgMSBudW1hcnJheXMg
MSBzdWIKCQkJewoJCQkxIDMgMyAtMSByb2xsIGFkZCAvaW5kZXggY3Z4ICAKCQkJYzEgL2dldCBj
dnggL3N1YiBjdnggL211bCBjdngKCQkJfWZvcgoJCS9zdWIgY3Z4CgkJbnVtYXJyYXlzIDEgYWRk
IDEgL3JvbGwgY3Z4CgkJfWZvcgoJbnVtYXJyYXlzIFsvcG9wIGN2eF0gY3Z4IC9yZXBlYXQgY3Z4
CgllbmQKfWRlZgovZXhlY190aW50X3RyYW5zZm9ybQp7CgkvVGludFByb2MgWwoJCS9UaW50VHJh
bnNmb3JtIGN2eCAvc2V0Y29sb3IgY3Z4CgldIGN2eCBiZGYKCU1hcHBlZENTQSBzZXRjb2xvcnNw
YWNlX29wdAp9IGJkZgovZGV2bl9tYWtlY3VzdG9tY29sb3IKewoJMiBkaWN0IGJlZ2luCgkvbmFt
ZXNfaW5kZXggeGRmCgkvTmFtZXMgeGRmCgkxIDEgMSAxIE5hbWVzIG5hbWVzX2luZGV4IGdldCBm
aW5kY215a2N1c3RvbWNvbG9yCgkvZGV2aWNlbl90aW50cyBBR01DT1JFX2dnZXQgbmFtZXNfaW5k
ZXggZ2V0IHNldGN1c3RvbWNvbG9yCglOYW1lcyBsZW5ndGgge3BvcH0gcmVwZWF0CgllbmQKfSBi
ZGYKL3NldGRldmljZW5jb2xvcnNwYWNlCnsgCglkdXAgL0FsaWFzZWRDb2xvcmFudHMga25vd24g
e2ZhbHNlfXt0cnVlfWlmZWxzZSAKCWN1cnJlbnRfc3BvdF9hbGlhcyBhbmQgewoJCTYgZGljdCBi
ZWdpbgoJCS9uYW1lc19pbmRleCAwIGRlZgoJCWR1cCAvbmFtZXNfbGVuIGV4Y2ggL05hbWVzIGdl
dCBsZW5ndGggZGVmCgkJL25ld19uYW1lcyBuYW1lc19sZW4gYXJyYXkgZGVmCgkJL25ld19Mb29r
dXBUYWJsZXMgbmFtZXNfbGVuIGFycmF5IGRlZgoJCS9hbGlhc19jbnQgMCBkZWYKCQlkdXAgL05h
bWVzIGdldAoJCXsKCQkJZHVwIG1hcF9hbGlhcyB7CgkJCQlleGNoIHBvcAoJCQkJZHVwIC9Db2xv
ckxvb2t1cCBrbm93biB7CgkJCQkJZHVwIGJlZ2luCgkJCQkJbmV3X0xvb2t1cFRhYmxlcyBuYW1l
c19pbmRleCBDb2xvckxvb2t1cCBwdXQKCQkJCQllbmQKCQkJCX17CgkJCQkJZHVwIC9Db21wb25l
bnRzIGtub3duIHsKCQkJCQkJZHVwIGJlZ2luCgkJCQkJCW5ld19Mb29rdXBUYWJsZXMgbmFtZXNf
aW5kZXggQ29tcG9uZW50cyBwdXQKCQkJCQkJZW5kCgkJCQkJfXsKCQkJCQkJZHVwIGJlZ2luCgkJ
CQkJCW5ld19Mb29rdXBUYWJsZXMgbmFtZXNfaW5kZXggW251bGwgbnVsbCBudWxsIG51bGxdIHB1
dAoJCQkJCQllbmQKCQkJCQl9IGlmZWxzZQoJCQkJfSBpZmVsc2UKCQkJCW5ld19uYW1lcyBuYW1l
c19pbmRleCAzIC0xIHJvbGwgL05hbWUgZ2V0IHB1dAoJCQkJL2FsaWFzX2NudCBhbGlhc19jbnQg
MSBhZGQgZGVmIAoJCQl9ewoJCQkJL25hbWUgeGRmCQkJCQoJCQkJbmV3X25hbWVzIG5hbWVzX2lu
ZGV4IG5hbWUgcHV0CgkJCQlkdXAgL0xvb2t1cFRhYmxlcyBrbm93biB7CgkJCQkJZHVwIGJlZ2lu
CgkJCQkJbmV3X0xvb2t1cFRhYmxlcyBuYW1lc19pbmRleCBMb29rdXBUYWJsZXMgbmFtZXNfaW5k
ZXggZ2V0IHB1dAoJCQkJCWVuZAoJCQkJfXsKCQkJCQlkdXAgYmVnaW4KCQkJCQluZXdfTG9va3Vw
VGFibGVzIG5hbWVzX2luZGV4IFtudWxsIG51bGwgbnVsbCBudWxsXSBwdXQKCQkJCQllbmQKCQkJ
CX0gaWZlbHNlCgkJCX0gaWZlbHNlCgkJCS9uYW1lc19pbmRleCBuYW1lc19pbmRleCAxIGFkZCBk
ZWYgCgkJfSBmb3JhbGwKCQlhbGlhc19jbnQgMCBndCB7CgkJCS9BbGlhc2VkQ29sb3JhbnRzIHRy
dWUgZGVmCgkJCTAgMSBuYW1lc19sZW4gMSBzdWIgewoJCQkJL25hbWVzX2luZGV4IHhkZgoJCQkJ
bmV3X0xvb2t1cFRhYmxlcyBuYW1lc19pbmRleCBnZXQgMCBnZXQgbnVsbCBlcSB7CgkJCQkJZHVw
IC9OYW1lcyBnZXQgbmFtZXNfaW5kZXggZ2V0IC9uYW1lIHhkZgoJCQkJCW5hbWUgKEN5YW4pIGVx
IG5hbWUgKE1hZ2VudGEpIGVxIG5hbWUgKFllbGxvdykgZXEgbmFtZSAoQmxhY2spIGVxCgkJCQkJ
b3Igb3Igb3Igbm90IHsKCQkJCQkJL0FsaWFzZWRDb2xvcmFudHMgZmFsc2UgZGVmCgkJCQkJCWV4
aXQKCQkJCQl9IGlmCgkJCQl9IGlmCgkJCX0gZm9yCgkJCUFsaWFzZWRDb2xvcmFudHMgewoJCQkJ
ZHVwIGJlZ2luCgkJCQkvTmFtZXMgbmV3X25hbWVzIGRlZgoJCQkJL0FsaWFzZWRDb2xvcmFudHMg
dHJ1ZSBkZWYKCQkJCS9Mb29rdXBUYWJsZXMgbmV3X0xvb2t1cFRhYmxlcyBkZWYKCQkJCWN1cnJl
bnRkaWN0IC9UVFRhYmxlc0lkeCBrbm93biBub3QgewoJCQkJCS9UVFRhYmxlc0lkeCAtMSBkZWYK
CQkJCX0gaWYKCQkJCWN1cnJlbnRkaWN0IC9OQ29tcG9uZW50cyBrbm93biBub3QgewoJCQkJCS9O
Q29tcG9uZW50cyBUaW50TWV0aG9kIC9TdWJ0cmFjdGl2ZSBlcSB7NH17M31pZmVsc2UgZGVmCgkJ
CQl9IGlmCgkJCQllbmQKCQkJfSBpZgoJCX1pZgoJCWVuZAoJfSBpZgoJZHVwIC9kZXZpY2VuX2Nv
bG9yc3BhY2VfZGljdCBleGNoIEFHTUNPUkVfZ3B1dAoJYmVnaW4KCS9NYXBwZWRDU0EgQ1NBIG1h
cF9jc2EgZGVmCgljdXJyZW50ZGljdCAvQWxpYXNlZENvbG9yYW50cyBrbm93biB7CgkJQWxpYXNl
ZENvbG9yYW50cwoJfXsKCQlmYWxzZQoJfSBpZmVsc2UKCS9UaW50VHJhbnNmb3JtIGxvYWQgdHlw
ZSAvbnVsbHR5cGUgZXEgb3IgewoJCS9UaW50VHJhbnNmb3JtIFsKCQkJMCAxIE5hbWVzIGxlbmd0
aCAxIHN1YgoJCQkJewoJCQkJL1RUVGFibGVzSWR4IFRUVGFibGVzSWR4IDEgYWRkIGRlZgoJCQkJ
ZHVwIExvb2t1cFRhYmxlcyBleGNoIGdldCBkdXAgMCBnZXQgbnVsbCBlcQoJCQkJCXsKCQkJCQkx
IGluZGV4CgkJCQkJTmFtZXMgZXhjaCBnZXQKCQkJCQlkdXAgKEN5YW4pIGVxCgkJCQkJCXsKCQkJ
CQkJcG9wIGV4Y2gKCQkJCQkJTG9va3VwVGFibGVzIGxlbmd0aCBleGNoIHN1YgoJCQkJCQkvaW5k
ZXggY3Z4CgkJCQkJCTAgMCAwCgkJCQkJCX0KCQkJCQkJewoJCQkJCQlkdXAgKE1hZ2VudGEpIGVx
CgkJCQkJCQl7CgkJCQkJCQlwb3AgZXhjaAoJCQkJCQkJTG9va3VwVGFibGVzIGxlbmd0aCBleGNo
IHN1YgoJCQkJCQkJL2luZGV4IGN2eAoJCQkJCQkJMCAvZXhjaCBjdnggMCAwCgkJCQkJCQl9CgkJ
CQkJCQl7CgkJCQkJCQkoWWVsbG93KSBlcQoJCQkJCQkJCXsKCQkJCQkJCQlleGNoCgkJCQkJCQkJ
TG9va3VwVGFibGVzIGxlbmd0aCBleGNoIHN1YgoJCQkJCQkJCS9pbmRleCBjdngKCQkJCQkJCQkw
IDAgMyAtMSAvcm9sbCBjdnggMAoJCQkJCQkJCX0KCQkJCQkJCQl7CgkJCQkJCQkJZXhjaAoJCQkJ
CQkJCUxvb2t1cFRhYmxlcyBsZW5ndGggZXhjaCBzdWIKCQkJCQkJCQkvaW5kZXggY3Z4CgkJCQkJ
CQkJMCAwIDAgNCAtMSAvcm9sbCBjdngKCQkJCQkJCQl9IGlmZWxzZQoJCQkJCQkJfSBpZmVsc2UK
CQkJCQkJfSBpZmVsc2UKCQkJCQk1IC0xIC9yb2xsIGN2eCAvYXN0b3JlIGN2eAoJCQkJCX0KCQkJ
CQl7CgkJCQkJZHVwIGxlbmd0aCAxIHN1YgoJCQkJCUxvb2t1cFRhYmxlcyBsZW5ndGggNCAtMSBy
b2xsIHN1YiAxIGFkZAoJCQkJCS9pbmRleCBjdnggL211bCBjdnggL3JvdW5kIGN2eCAvY3ZpIGN2
eCAvZ2V0IGN2eAoJCQkJCX0gaWZlbHNlCgkJCQkJTmFtZXMgbGVuZ3RoIFRUVGFibGVzSWR4IGFk
ZCAxIGFkZCAxIC9yb2xsIGN2eAoJCQkJfSBmb3IKCQkJTmFtZXMgbGVuZ3RoIFsvcG9wIGN2eF0g
Y3Z4IC9yZXBlYXQgY3Z4CgkJCU5Db21wb25lbnRzIE5hbWVzIGxlbmd0aAogIAkJCVRpbnRNZXRo
b2QgL1N1YnRyYWN0aXZlIGVxCiAgCQkJCXsKICAJCQkJc3VidHJhY3RpdmVfYmxlbmQKICAJCQkJ
fQogIAkJCQl7CiAgCQkJCWFkZGl0aXZlX2JsZW5kCiAgCQkJCX0gaWZlbHNlCgkJXSBjdnggYmRm
Cgl9IGlmCglBR01DT1JFX2hvc3Rfc2VwIHsKCQlOYW1lcyBjb252ZXJ0X3RvX3Byb2Nlc3MgewoJ
CQlleGVjX3RpbnRfdHJhbnNmb3JtCgkJfQoJCXsJCgkJCWN1cnJlbnRkaWN0IC9BbGlhc2VkQ29s
b3JhbnRzIGtub3duIHsKCQkJCUFsaWFzZWRDb2xvcmFudHMgbm90CgkJCX17CgkJCQlmYWxzZQoJ
CQl9IGlmZWxzZQoJCQk1IGRpY3QgYmVnaW4KCQkJL0F2b2lkQWxpYXNlZENvbG9yYW50cyB4ZGYK
CQkJL3BhaW50ZWQ/IGZhbHNlIGRlZgoJCQkvbmFtZXNfaW5kZXggMCBkZWYKCQkJL25hbWVzX2xl
biBOYW1lcyBsZW5ndGggZGVmCgkJCU5hbWVzIHsKCQkJCUF2b2lkQWxpYXNlZENvbG9yYW50cyB7
CgkJCQkJL2N1cnJlbnRzcG90YWxpYXMgY3VycmVudF9zcG90X2FsaWFzIGRlZgoJCQkJCWZhbHNl
IHNldF9zcG90X2FsaWFzCgkJCQl9IGlmCgkJCQlBR01DT1JFX2lzX2NteWtfc2VwIHsKCQkJCQlk
dXAgKEN5YW4pIGVxIEFHTUNPUkVfY3lhbl9wbGF0ZSBhbmQgZXhjaAoJCQkJCWR1cCAoTWFnZW50
YSkgZXEgQUdNQ09SRV9tYWdlbnRhX3BsYXRlIGFuZCBleGNoCgkJCQkJZHVwIChZZWxsb3cpIGVx
IEFHTUNPUkVfeWVsbG93X3BsYXRlIGFuZCBleGNoCgkJCQkJKEJsYWNrKSBlcSBBR01DT1JFX2Js
YWNrX3BsYXRlIGFuZCBvciBvciBvciB7CgkJCQkJCS9kZXZpY2VuX2NvbG9yc3BhY2VfZGljdCBB
R01DT1JFX2dnZXQgL1RpbnRQcm9jIFsKCQkJCQkJCU5hbWVzIG5hbWVzX2luZGV4IC9kZXZuX21h
a2VjdXN0b21jb2xvciBjdngKCQkJCQkJXSBjdnggZGRmCgkJCQkJCS9wYWludGVkPyB0cnVlIGRl
ZgoJCQkJCX0gaWYKCQkJCQlwYWludGVkPyB7ZXhpdH0gaWYKCQkJCX17CgkJCQkJMCAwIDAgMCA1
IC0xIHJvbGwgZmluZGNteWtjdXN0b21jb2xvciAxIHNldGN1c3RvbWNvbG9yIGN1cnJlbnRncmF5
IDAgZXEgewoJCQkJCS9kZXZpY2VuX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgL1RpbnRQ
cm9jIFsKCQkJCQkJTmFtZXMgbmFtZXNfaW5kZXggL2Rldm5fbWFrZWN1c3RvbWNvbG9yIGN2eAoJ
CQkJCV0gY3Z4IGRkZgoJCQkJCS9wYWludGVkPyB0cnVlIGRlZgoJCQkJCWV4aXQKCQkJCQl9IGlm
CgkJCQl9IGlmZWxzZQoJCQkJQXZvaWRBbGlhc2VkQ29sb3JhbnRzIHsKCQkJCQljdXJyZW50c3Bv
dGFsaWFzIHNldF9zcG90X2FsaWFzCgkJCQl9IGlmCgkJCQkvbmFtZXNfaW5kZXggbmFtZXNfaW5k
ZXggMSBhZGQgZGVmCgkJCX0gZm9yYWxsCgkJCXBhaW50ZWQ/IHsKCQkJCS9kZXZpY2VuX2NvbG9y
c3BhY2VfZGljdCBBR01DT1JFX2dnZXQgL25hbWVzX2luZGV4IG5hbWVzX2luZGV4IHB1dAoJCQl9
ewoJCQkJL2RldmljZW5fY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvVGludFByb2MgWwoJ
CQkJCW5hbWVzX2xlbiBbL3BvcCBjdnhdIGN2eCAvcmVwZWF0IGN2eCAxIC9zZXRzZXBhcmF0aW9u
Z3JheSBjdngKCQkJCQkwIDAgMCAwICgpIC9maW5kY215a2N1c3RvbWNvbG9yIGN2eCAwIC9zZXRj
dXN0b21jb2xvciBjdngKCQkJCV0gY3Z4IGRkZgoJCQl9IGlmZWxzZQoJCQllbmQKCQl9IGlmZWxz
ZQoJfQoJewoJCUFHTUNPUkVfaW5fcmlwX3NlcCB7CgkJCU5hbWVzIGNvbnZlcnRfdG9fcHJvY2Vz
cyBub3QKCQl9ewoJCQlsZXZlbDMKCQl9IGlmZWxzZQoJCXsKCQkJWy9EZXZpY2VOIE5hbWVzIE1h
cHBlZENTQSAvVGludFRyYW5zZm9ybSBsb2FkXSBzZXRjb2xvcnNwYWNlX29wdAoJCQkvVGludFBy
b2MgbGV2ZWwzIG5vdCBBR01DT1JFX2luX3JpcF9zZXAgYW5kIHsKCQkJCVsKCQkJCQlOYW1lcyAv
bGVuZ3RoIGN2eCBbL3BvcCBjdnhdIGN2eCAvcmVwZWF0IGN2eAoJCQkJXSBjdnggYmRmCgkJCX17
CgkJCQkvc2V0Y29sb3IgbGRmCgkJCX0gaWZlbHNlCgkJfXsKCQkJZXhlY190aW50X3RyYW5zZm9y
bQoJCX0gaWZlbHNlCgl9IGlmZWxzZQoJc2V0X2NyZAoJL0FsaWFzZWRDb2xvcmFudHMgZmFsc2Ug
ZGVmCgllbmQKfSBkZWYKL3NldGluZGV4ZWRjb2xvcnNwYWNlCnsKCWR1cCAvaW5kZXhlZF9jb2xv
cnNwYWNlX2RpY3QgZXhjaCBBR01DT1JFX2dwdXQKCWJlZ2luCgkJY3VycmVudGRpY3QgL0NTRCBr
bm93biB7CgkJCUNTRCBnZXRfY3NkIC9OYW1lcyBrbm93biB7CgkJCQlDU0QgZ2V0X2NzZCBiZWdp
bgoJCQkJY3VycmVudGRpY3QgZGV2bmNzCgkJCQlBR01DT1JFX2hvc3Rfc2VwewoJCQkJCTQgZGlj
dCBiZWdpbgoJCQkJCS9kZXZuQ29tcENudCBOYW1lcyBsZW5ndGggZGVmCgkJCQkJL05ld0xvb2t1
cCBIaVZhbCAxIGFkZCBzdHJpbmcgZGVmCgkJCQkJMCAxIEhpVmFsIHsKCQkJCQkJL3RhYmxlSW5k
ZXggeGRmCgkJCQkJCUxvb2t1cCBkdXAgdHlwZSAvc3RyaW5ndHlwZSBlcSB7CgkJCQkJCQlkZXZu
Q29tcENudCB0YWJsZUluZGV4IG1hcF9pbmRleAoJCQkJCQl9ewoJCQkJCQkJZXhlYwoJCQkJCQl9
IGlmZWxzZQoJCQkJCQlzZXRkZXZpY2VuY29sb3IKCQkJCQkJY3VycmVudGdyYXkKCQkJCQkJdGFi
bGVJbmRleCBleGNoCgkJCQkJCUhpVmFsIG11bCBjdmkgCgkJCQkJCU5ld0xvb2t1cCAzIDEgcm9s
bCBwdXQKCQkJCQl9IGZvcgoJCQkJCVsvSW5kZXhlZCBjdXJyZW50Y29sb3JzcGFjZSBIaVZhbCBO
ZXdMb29rdXBdIHNldGNvbG9yc3BhY2Vfb3B0CgkJCQkJZW5kCgkJCQl9ewoJCQkJCWxldmVsMwoJ
CQkJCXsKCQkJCQlbL0luZGV4ZWQgWy9EZXZpY2VOIE5hbWVzIE1hcHBlZENTQSAvVGludFRyYW5z
Zm9ybSBsb2FkXSBIaVZhbCBMb29rdXBdIHNldGNvbG9yc3BhY2Vfb3B0CgkJCQkJfXsKCQkJCQlb
L0luZGV4ZWQgTWFwcGVkQ1NBIEhpVmFsCgkJCQkJCVsKCQkJCQkJTG9va3VwIGR1cCB0eXBlIC9z
dHJpbmd0eXBlIGVxCgkJCQkJCQl7L2V4Y2ggY3Z4IENTRCBnZXRfY3NkIC9OYW1lcyBnZXQgbGVu
Z3RoIGR1cCAvbXVsIGN2eCBleGNoIC9nZXRpbnRlcnZhbCBjdnggezI1NSBkaXZ9IC9mb3JhbGwg
Y3Z4fQoJCQkJCQkJey9leGVjIGN2eH1pZmVsc2UKCQkJCQkJCS9UaW50VHJhbnNmb3JtIGxvYWQg
L2V4ZWMgY3Z4CgkJCQkJCV1jdngKCQkJCQldc2V0Y29sb3JzcGFjZV9vcHQKCQkJCQl9aWZlbHNl
CgkJCQl9IGlmZWxzZQoJCQkJZW5kCgkJCX17CgkJCX0gaWZlbHNlCgkJCXNldF9jcmQKCQl9CgkJ
ewoJCQkvTWFwcGVkQ1NBIENTQSBtYXBfY3NhIGRlZgoJCQlBR01DT1JFX2hvc3Rfc2VwIGxldmVs
MiBub3QgYW5kewoJCQkJMCAwIDAgMCBzZXRjbXlrY29sb3IKCQkJfXsKCQkJCVsvSW5kZXhlZCBN
YXBwZWRDU0EgCgkJCQlsZXZlbDIgbm90IGhhc19jb2xvciBub3QgYW5kewoJCQkJCWR1cCAwIGdl
dCBkdXAgL0RldmljZVJHQiBlcSBleGNoIC9EZXZpY2VDTVlLIGVxIG9yewoJCQkJCQlwb3AgWy9E
ZXZpY2VHcmF5XQoJCQkJCX1pZgoJCQkJCUhpVmFsIEdyYXlMb29rdXAKCQkJCX17CgkJCQkJSGlW
YWwgCgkJCQkJY3VycmVudGRpY3QvUmFuZ2VBcnJheSBrbm93bnsKCQkJCQkJeyAKCQkJCQkJCS9p
bmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQkJCQkJCUxvb2t1cCBl
eGNoIAoJCQkJCQkJZHVwIEhpVmFsIGd0ewoJCQkJCQkJCXBvcCBIaVZhbAoJCQkJCQkJfWlmCgkJ
CQkJCQlOQ29tcG9uZW50cyBtdWwgTkNvbXBvbmVudHMgZ2V0aW50ZXJ2YWwge30gZm9yYWxsCgkJ
CQkJCQlOQ29tcG9uZW50cyAxIHN1YiAtMSAwewoJCQkJCQkJCVJhbmdlQXJyYXkgZXhjaCAyIG11
bCAyIGdldGludGVydmFsIGFsb2FkIHBvcCBtYXAyNTVfdG9fcmFuZ2UKCQkJCQkJCQlOQ29tcG9u
ZW50cyAxIHJvbGwKCQkJCQkJCX1mb3IKCQkJCQkJCWVuZAoJCQkJCQl9IGJpbmQKCQkJCQl9ewoJ
CQkJCQlMb29rdXAKCQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCgkJCQldIHNldGNvbG9yc3BhY2Vf
b3B0CgkJCQlzZXRfY3JkCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgllbmQKfWRlZgovc2V0aW5kZXhl
ZGNvbG9yCnsKCUFHTUNPUkVfaG9zdF9zZXAgewoJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBB
R01DT1JFX2dnZXQgZHVwIC9DU0Qga25vd24gewoJCQliZWdpbgoJCQlDU0QgZ2V0X2NzZCBiZWdp
bgoJCQltYXBfaW5kZXhlZF9kZXZuCgkJCWRldm4KCQkJZW5kCgkJCWVuZAoJCX17CgkJCUFHTUNP
UkVfZ2dldC9Mb29rdXAgZ2V0IDQgMyAtMSByb2xsIG1hcF9pbmRleAoJCQlwb3Agc2V0Y215a2Nv
bG9yCgkJfSBpZmVsc2UKCX17CgkJbGV2ZWwzIG5vdCBBR01DT1JFX2luX3JpcF9zZXAgYW5kIC9p
bmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgL0NTRCBrbm93biBhbmQgewoJCQkv
aW5kZXhlZF9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0IC9DU0QgZ2V0IGdldF9jc2QgYmVn
aW4KCQkJbWFwX2luZGV4ZWRfZGV2bgoJCQlkZXZuCgkJCWVuZAoJCX0KCQl7CgkJCXNldGNvbG9y
CgkJfSBpZmVsc2UKCX1pZmVsc2UKfSBkZWYKL2lnbm9yZWltYWdlZGF0YQp7CgljdXJyZW50b3Zl
cnByaW50IG5vdHsKCQlnc2F2ZQoJCWR1cCBjbG9uZWRpY3QgYmVnaW4KCQkxIHNldGdyYXkKCQkv
RGVjb2RlIFswIDFdIGRlZgoJCS9EYXRhU291cmNlIDxGRj4gZGVmCgkJL011bHRpcGxlRGF0YVNv
dXJjZXMgZmFsc2UgZGVmCgkJL0JpdHNQZXJDb21wb25lbnQgOCBkZWYKCQljdXJyZW50ZGljdCBl
bmQKCQlzeXN0ZW1kaWN0IC9pbWFnZSBnZXQgZXhlYwoJCWdyZXN0b3JlCgkJfWlmCgljb25zdW1l
aW1hZ2VkYXRhCn1kZWYKL2FkZF9jc2EKewoJQWRvYmVfQUdNX0NvcmUgYmVnaW4KCQkJL0FHTUNP
UkVfQ1NBX2NhY2hlIHhwdXQKCWVuZAp9ZGVmCi9nZXRfY3NhX2J5X25hbWUKewoJZHVwIHR5cGUg
ZHVwIC9uYW1ldHlwZSBlcSBleGNoIC9zdHJpbmd0eXBlIGVxIG9yewoJCUFkb2JlX0FHTV9Db3Jl
IGJlZ2luCgkJMSBkaWN0IGJlZ2luCgkJL25hbWUgeGRmCgkJQUdNQ09SRV9DU0FfY2FjaGUKCQl7
CgkJCTAgZ2V0IG5hbWUgZXEgewoJCQkJZXhpdAoJCQl9ewoJCQkJcG9wCgkJCX0gaWZlbHNlCgkJ
fWZvcmFsbAoJCWVuZAoJCWVuZAoJfXsKCQlwb3AKCX0gaWZlbHNlCn1kZWYKL21hcF9jc2EKewoJ
ZHVwIHR5cGUgL25hbWV0eXBlIGVxewoJCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfQ1NBX2NhY2hl
IGdldCBleGNoIGdldAoJfWlmCn1kZWYKL2FkZF9jc2QKewoJQWRvYmVfQUdNX0NvcmUgYmVnaW4K
CQkvQUdNQ09SRV9DU0RfY2FjaGUgeHB1dAoJZW5kCn1kZWYKL2dldF9jc2QKewoJZHVwIHR5cGUg
L25hbWV0eXBlIGVxewoJCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfQ1NEX2NhY2hlIGdldCBleGNo
IGdldAoJfWlmCn1kZWYKL3BhdHRlcm5fYnVmX2luaXQKewoJL2NvdW50IGdldCAwIDAgcHV0Cn0g
ZGVmCi9wYXR0ZXJuX2J1Zl9uZXh0CnsKCWR1cCAvY291bnQgZ2V0IGR1cCAwIGdldAoJZHVwIDMg
MSByb2xsCgkxIGFkZCAwIHhwdAoJZ2V0CQkJCQp9IGRlZgovY2FjaGVwYXR0ZXJuX2NvbXByZXNz
CnsKCTUgZGljdCBiZWdpbgoJY3VycmVudGZpbGUgZXhjaCAwIGV4Y2ggL1N1YkZpbGVEZWNvZGUg
ZmlsdGVyIC9SZWFkRmlsdGVyIGV4Y2ggZGVmCgkvcGF0YXJyYXkgMjAgZGljdCBkZWYKCS9zdHJp
bmdfc2l6ZSAxNjAwMCBkZWYKCS9yZWFkYnVmZmVyIHN0cmluZ19zaXplIHN0cmluZyBkZWYKCWN1
cnJlbnRnbG9iYWwgdHJ1ZSBzZXRnbG9iYWwgCglwYXRhcnJheSAxIGFycmF5IGR1cCAwIDEgcHV0
IC9jb3VudCB4cHQKCXNldGdsb2JhbAoJL0xaV0ZpbHRlciAKCXsKCQlleGNoCgkJZHVwIGxlbmd0
aCAwIGVxIHsKCQkJcG9wCgkJfXsKCQkJcGF0YXJyYXkgZHVwIGxlbmd0aCAxIHN1YiAzIC0xIHJv
bGwgcHV0CgkJfSBpZmVsc2UKCQl7c3RyaW5nX3NpemV9ezB9aWZlbHNlIHN0cmluZwoJfSAvTFpX
RW5jb2RlIGZpbHRlciBkZWYKCXsgCQkKCQlSZWFkRmlsdGVyIHJlYWRidWZmZXIgcmVhZHN0cmlu
ZwoJCWV4Y2ggTFpXRmlsdGVyIGV4Y2ggd3JpdGVzdHJpbmcKCQlub3Qge2V4aXR9IGlmCgl9IGxv
b3AKCUxaV0ZpbHRlciBjbG9zZWZpbGUKCXBhdGFycmF5CQkJCQoJZW5kCn1kZWYKL2NhY2hlcGF0
dGVybgp7CgkyIGRpY3QgYmVnaW4KCWN1cnJlbnRmaWxlIGV4Y2ggMCBleGNoIC9TdWJGaWxlRGVj
b2RlIGZpbHRlciAvUmVhZEZpbHRlciBleGNoIGRlZgoJL3BhdGFycmF5IDIwIGRpY3QgZGVmCglj
dXJyZW50Z2xvYmFsIHRydWUgc2V0Z2xvYmFsIAoJcGF0YXJyYXkgMSBhcnJheSBkdXAgMCAxIHB1
dCAvY291bnQgeHB0CglzZXRnbG9iYWwKCXsKCQlSZWFkRmlsdGVyIDE2MDAwIHN0cmluZyByZWFk
c3RyaW5nIGV4Y2gKCQlwYXRhcnJheSBkdXAgbGVuZ3RoIDEgc3ViIDMgLTEgcm9sbCBwdXQKCQlu
b3Qge2V4aXR9IGlmCgl9IGxvb3AKCXBhdGFycmF5IGR1cCBkdXAgbGVuZ3RoIDEgc3ViICgpIHB1
dAkJCQkJCgllbmQJCn1kZWYKL2FkZF9wYXR0ZXJuCnsKCUFkb2JlX0FHTV9Db3JlIGJlZ2luCgkJ
L0FHTUNPUkVfcGF0dGVybl9jYWNoZSB4cHV0CgllbmQKfWRlZgovZ2V0X3BhdHRlcm4KewoJZHVw
IHR5cGUgL25hbWV0eXBlIGVxewoJCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfcGF0dGVybl9jYWNo
ZSBnZXQgZXhjaCBnZXQKCQlkdXAgd3JhcF9wYWludHByb2MKCX1pZgp9ZGVmCi93cmFwX3BhaW50
cHJvYwp7IAogIHN0YXR1c2RpY3QgL2N1cnJlbnRmaWxlbmFtZWV4dGVuZCBrbm93bnsKCSAgYmVn
aW4KCQkvT2xkUGFpbnRQcm9jIC9QYWludFByb2MgbG9hZCBkZWYKCQkvUGFpbnRQcm9jCgkJewoJ
CSAgbWFyayBleGNoCgkJICBkdXAgL09sZFBhaW50UHJvYyBnZXQgc3RvcHBlZAoJCSAge2Nsb3Nl
ZmlsZSByZXN0b3JlIGVuZH0gaWYKCQkgIGNsZWFydG9tYXJrCgkJfSAgZGVmCgkgIGVuZAogIH0g
e3BvcH0gaWZlbHNlCn0gZGVmCi9tYWtlX3BhdHRlcm4KewoJZHVwIG1hdHJpeCBjdXJyZW50bWF0
cml4IG1hdHJpeCBjb25jYXRtYXRyaXggMCAwIDMgMiByb2xsIGl0cmFuc2Zvcm0KCWV4Y2ggMyBp
bmRleCAvWFN0ZXAgZ2V0IDEgaW5kZXggZXhjaCAyIGNvcHkgZGl2IGN2aSBtdWwgc3ViIHN1YgoJ
ZXhjaCAzIGluZGV4IC9ZU3RlcCBnZXQgMSBpbmRleCBleGNoIDIgY29weSBkaXYgY3ZpIG11bCBz
dWIgc3ViCgltYXRyaXggdHJhbnNsYXRlIGV4Y2ggbWF0cml4IGNvbmNhdG1hdHJpeAoJCQkgIDEg
aW5kZXggYmVnaW4KCQlCQm94IDAgZ2V0IFhTdGVwIGRpdiBjdmkgWFN0ZXAgbXVsIC94c2hpZnQg
ZXhjaCBuZWcgZGVmCgkJQkJveCAxIGdldCBZU3RlcCBkaXYgY3ZpIFlTdGVwIG11bCAveXNoaWZ0
IGV4Y2ggbmVnIGRlZgoJCUJCb3ggMCBnZXQgeHNoaWZ0IGFkZAoJCUJCb3ggMSBnZXQgeXNoaWZ0
IGFkZAoJCUJCb3ggMiBnZXQgeHNoaWZ0IGFkZAoJCUJCb3ggMyBnZXQgeXNoaWZ0IGFkZAoJCTQg
YXJyYXkgYXN0b3JlCgkJL0JCb3ggZXhjaCBkZWYKCQlbIHhzaGlmdCB5c2hpZnQgL3RyYW5zbGF0
ZSBsb2FkIG51bGwgL2V4ZWMgbG9hZCBdIGR1cAoJCTMgL1BhaW50UHJvYyBsb2FkIHB1dCBjdngg
L1BhaW50UHJvYyBleGNoIGRlZgoJCWVuZAoJZ3NhdmUgMCBzZXRncmF5CgltYWtlcGF0dGVybgoJ
Z3Jlc3RvcmUKfWRlZgovc2V0X3BhdHRlcm4KewoJZHVwIC9QYXR0ZXJuVHlwZSBnZXQgMSBlcXsK
CQlkdXAgL1BhaW50VHlwZSBnZXQgMSBlcXsKCQkJY3VycmVudG92ZXJwcmludCBzb3AgWy9EZXZp
Y2VHcmF5XSBzZXRjb2xvcnNwYWNlIDAgc2V0Z3JheQoJCX1pZgoJfWlmCglzZXRwYXR0ZXJuCn1k
ZWYKL3NldGNvbG9yc3BhY2Vfb3B0CnsKCWR1cCBjdXJyZW50Y29sb3JzcGFjZSBlcXsKCQlwb3AK
CX17CgkJc2V0Y29sb3JzcGFjZQoJfWlmZWxzZQp9ZGVmCi91cGRhdGVjb2xvcnJlbmRlcmluZwp7
CgljdXJyZW50Y29sb3JyZW5kZXJpbmcvSW50ZW50IGtub3duewoJCWN1cnJlbnRjb2xvcnJlbmRl
cmluZy9JbnRlbnQgZ2V0Cgl9ewoJCW51bGwKCX1pZmVsc2UKCUludGVudCBuZXsKCQlmYWxzZSAg
CgkJSW50ZW50CgkJQUdNQ09SRV9DUkRfY2FjaGUgewoJCQlleGNoIHBvcCAKCQkJYmVnaW4KCQkJ
CWR1cCBJbnRlbnQgZXF7CgkJCQkJY3VycmVudGRpY3Qgc2V0Y29sb3JyZW5kZXJpbmdfb3B0CgkJ
CQkJZW5kIAoJCQkJCWV4Y2ggcG9wIHRydWUgZXhjaAkKCQkJCQlleGl0CgkJCQl9aWYKCQkJZW5k
CgkJfSBmb3JhbGwKCQlwb3AKCQlub3R7CgkJCXN5c3RlbWRpY3QgL2ZpbmRjb2xvcnJlbmRlcmlu
ZyBrbm93bnsKCQkJCUludGVudCBmaW5kY29sb3JyZW5kZXJpbmcgcG9wCgkJCQkvQ29sb3JSZW5k
ZXJpbmcgZmluZHJlc291cmNlIAoJCQkJZHVwIGxlbmd0aCBkaWN0IGNvcHkKCQkJCXNldGNvbG9y
cmVuZGVyaW5nX29wdAoJCQl9aWYKCQl9aWYKCX1pZgp9IGRlZgovYWRkX2NyZAp7CglBR01DT1JF
X0NSRF9jYWNoZSAzIDEgcm9sbCBwdXQKfWRlZgovc2V0X2NyZAp7CglBR01DT1JFX2hvc3Rfc2Vw
IG5vdCBsZXZlbDIgYW5kewoJCWN1cnJlbnRkaWN0L0NSRCBrbm93bnsKCQkJQUdNQ09SRV9DUkRf
Y2FjaGUgQ1JEIGdldCBkdXAgbnVsbCBuZXsKCQkJCXNldGNvbG9ycmVuZGVyaW5nX29wdAoJCQl9
ewoJCQkJcG9wCgkJCX1pZmVsc2UKCQl9ewoJCQljdXJyZW50ZGljdC9JbnRlbnQga25vd257CgkJ
CQl1cGRhdGVjb2xvcnJlbmRlcmluZwoJCQl9aWYKCQl9aWZlbHNlCgkJY3VycmVudGNvbG9yc3Bh
Y2UgZHVwIHR5cGUgL2FycmF5dHlwZSBlcQoJCQl7MCBnZXR9aWYKCQkvRGV2aWNlUkdCIGVxCgkJ
CXsKCQkJY3VycmVudGRpY3QvVUNSIGtub3duCgkJCQl7L1VDUn17L0FHTUNPUkVfY3VycmVudHVj
cn1pZmVsc2UKCQkJbG9hZCBzZXR1bmRlcmNvbG9ycmVtb3ZhbAoJCQljdXJyZW50ZGljdC9CRyBr
bm93biAKCQkJCXsvQkd9ey9BR01DT1JFX2N1cnJlbnRiZ31pZmVsc2UKCQkJbG9hZCBzZXRibGFj
a2dlbmVyYXRpb24KCQkJfWlmCgl9aWYKfWRlZgovc2V0Y29sb3JyZW5kZXJpbmdfb3B0CnsKCWR1
cCBjdXJyZW50Y29sb3JyZW5kZXJpbmcgZXF7CgkJcG9wCgl9ewoJCWJlZ2luCgkJCS9JbnRlbnQg
SW50ZW50IGRlZgoJCQljdXJyZW50ZGljdAoJCWVuZAoJCXNldGNvbG9ycmVuZGVyaW5nCgl9aWZl
bHNlCn1kZWYKL2NwYWludF9nY29tcAp7Cgljb252ZXJ0X3RvX3Byb2Nlc3MgQWRvYmVfQUdNX0Nv
cmUvQUdNQ09SRV9Db252ZXJ0VG9Qcm9jZXNzIHhkZGYKCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVf
Q29udmVydFRvUHJvY2VzcyBnZXQgbm90Cgl7CgkJKCVlbmRfY3BhaW50X2djb21wKSBmbHVzaGlu
cHV0Cgl9aWYKfWRlZgovY3BhaW50X2dzZXAKewoJQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV9Db252
ZXJ0VG9Qcm9jZXNzIGdldAoJewkKCQkoJWVuZF9jcGFpbnRfZ3NlcCkgZmx1c2hpbnB1dAoJfWlm
Cn1kZWYKL2NwYWludF9nZW5kCnsKCW5ld3BhdGgKfWRlZgovcGF0aF9yZXoKewoJZHVwIDAgbmV7
CgkJQUdNQ09SRV9kZXZpY2VEUEkgZXhjaCBkaXYgCgkJZHVwIDEgbHR7CgkJCXBvcCAxCgkJfWlm
CgkJc2V0ZmxhdAoJfXsKCQlwb3AKCX1pZmVsc2UgCQp9ZGVmCi9zZXRfc3BvdF9hbGlhc19hcnkK
ewoJL0FHTUNPUkVfU3BvdEFsaWFzQXJ5IHdoZXJlewoJCXBvcCBwb3AKCX17CgkJQWRvYmVfQUdN
X0NvcmUvQUdNQ09SRV9TcG90QWxpYXNBcnkgeGRkZgoJCXRydWUgc2V0X3Nwb3RfYWxpYXMKCX1p
ZmVsc2UKfWRlZgovc2V0X3Nwb3RfYWxpYXMKewoJL0FHTUNPUkVfU3BvdEFsaWFzQXJ5IHdoZXJl
ewoJCS9BR01DT1JFX2N1cnJlbnRfc3BvdF9hbGlhcyAzIC0xIHJvbGwgcHV0Cgl9ewoJCXBvcAoJ
fWlmZWxzZQp9ZGVmCi9jdXJyZW50X3Nwb3RfYWxpYXMKewoJL0FHTUNPUkVfU3BvdEFsaWFzQXJ5
IHdoZXJlewoJCS9BR01DT1JFX2N1cnJlbnRfc3BvdF9hbGlhcyBnZXQKCX17CgkJZmFsc2UKCX1p
ZmVsc2UKfWRlZgovbWFwX2FsaWFzCnsKCS9BR01DT1JFX1Nwb3RBbGlhc0FyeSB3aGVyZXsKCQli
ZWdpbgoJCQkvQUdNQ09SRV9uYW1lIHhkZgoJCQlmYWxzZQkKCQkJQUdNQ09SRV9TcG90QWxpYXNB
cnl7CgkJCQlkdXAvTmFtZSBnZXQgQUdNQ09SRV9uYW1lIGVxewoJCQkJCXNhdmUgZXhjaAoJCQkJ
CS9BZG9iZV9BR01fQ29yZSBjdXJyZW50ZGljdCBkZWYKCQkJCQkvQ1NEIGdldCBnZXRfY3NkCgkJ
CQkJZXhjaCByZXN0b3JlCgkJCQkJZXhjaCBwb3AgdHJ1ZQoJCQkJCWV4aXQKCQkJCX17CgkJCQkJ
cG9wCgkJCQl9aWZlbHNlCgkJCX1mb3JhbGwKCQllbmQKCX17CgkJcG9wIGZhbHNlCgl9aWZlbHNl
Cn1iZGYKL3Nwb3RfYWxpYXMKewoJdHJ1ZSBzZXRfc3BvdF9hbGlhcwoJL0FHTUNPUkVfJnNldGN1
c3RvbWNvbG9yIEFHTUNPUkVfa2V5X2tub3duIG5vdCB7CgkJQWRvYmVfQUdNX0NvcmUvQUdNQ09S
RV8mc2V0Y3VzdG9tY29sb3IgL3NldGN1c3RvbWNvbG9yIGxvYWQgcHV0Cgl9IGlmCgkvY3VzdG9t
Y29sb3JfdGludCAxIEFHTUNPUkVfZ3B1dAoJQWRvYmVfQUdNX0NvcmUgYmVnaW4KCS9zZXRjdXN0
b21jb2xvcgoJewoJCWR1cCAvY3VzdG9tY29sb3JfdGludCBleGNoIEFHTUNPUkVfZ3B1dAoJCWN1
cnJlbnRfc3BvdF9hbGlhc3sKCQkJMSBpbmRleCA0IGdldCBtYXBfYWxpYXN7CgkJCQltYXJrIDMg
MSByb2xsCgkJCQlzZXRzZXBjb2xvcnNwYWNlCgkJCQljb3VudHRvbWFyayAwIG5lewoJCQkJCXNl
dHNlcGNvbG9yCgkJCQl9aWYKCQkJCXBvcAoJCQkJcG9wCgkJCX17CgkJCQlBR01DT1JFXyZzZXRj
dXN0b21jb2xvcgoJCQl9aWZlbHNlCgkJfXsKCQkJQUdNQ09SRV8mc2V0Y3VzdG9tY29sb3IKCQl9
aWZlbHNlCgl9YmRmCgllbmQKfWRlZgovYmVnaW5fZmVhdHVyZQp7CglBZG9iZV9BR01fQ29yZS9B
R01DT1JFX2ZlYXR1cmVfZGljdENvdW50IGNvdW50ZGljdHN0YWNrIHB1dAoJY291bnQgQWRvYmVf
QUdNX0NvcmUvQUdNQ09SRV9mZWF0dXJlX29wQ291bnQgMyAtMSByb2xsIHB1dAoJe0Fkb2JlX0FH
TV9Db3JlL0FHTUNPUkVfZmVhdHVyZV9jdG0gbWF0cml4IGN1cnJlbnRtYXRyaXggcHV0fWlmCn1k
ZWYKL2VuZF9mZWF0dXJlCnsKCTIgZGljdCBiZWdpbgoJL3NwZCAvc2V0cGFnZWRldmljZSBsb2Fk
IGRlZgoJL3NldHBhZ2VkZXZpY2UgeyBnZXRfZ3N0YXRlIHNwZCBzZXRfZ3N0YXRlIH0gZGVmCglz
dG9wcGVkeyRlcnJvci9uZXdlcnJvciBmYWxzZSBwdXR9aWYKCWVuZAoJY291bnQgQWRvYmVfQUdN
X0NvcmUvQUdNQ09SRV9mZWF0dXJlX29wQ291bnQgZ2V0IHN1YiBkdXAgMCBndHt7cG9wfXJlcGVh
dH17cG9wfWlmZWxzZQoJY291bnRkaWN0c3RhY2sgQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV9mZWF0
dXJlX2RpY3RDb3VudCBnZXQgc3ViIGR1cCAwIGd0e3tlbmR9cmVwZWF0fXtwb3B9aWZlbHNlCgl7
QWRvYmVfQUdNX0NvcmUvQUdNQ09SRV9mZWF0dXJlX2N0bSBnZXQgc2V0bWF0cml4fWlmCn1kZWYK
L3NldF9uZWdhdGl2ZQp7CglBZG9iZV9BR01fQ29yZSBiZWdpbgoJL0FHTUNPUkVfaW52ZXJ0aW5n
IGV4Y2ggZGVmCglsZXZlbDJ7CgkJY3VycmVudHBhZ2VkZXZpY2UvTmVnYXRpdmVQcmludCBrbm93
bnsKCQkJY3VycmVudHBhZ2VkZXZpY2UvTmVnYXRpdmVQcmludCBnZXQgQWRvYmVfQUdNX0NvcmUv
QUdNQ09SRV9pbnZlcnRpbmcgZ2V0IG5lewoJCQkJdHJ1ZSBiZWdpbl9mZWF0dXJlIHRydWV7CgkJ
CQkJCWJkaWN0IC9OZWdhdGl2ZVByaW50IEFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfaW52ZXJ0aW5n
IGdldCBlZGljdCBzZXRwYWdlZGV2aWNlCgkJCQl9ZW5kX2ZlYXR1cmUKCQkJfWlmCgkJCS9BR01D
T1JFX2ludmVydGluZyBmYWxzZSBkZWYKCQl9aWYKCX1pZgoJQUdNQ09SRV9pbnZlcnRpbmd7CgkJ
W3sxIGV4Y2ggc3VifS9leGVjIGxvYWQgZHVwIGN1cnJlbnR0cmFuc2ZlciBleGNoXWN2eCBiaW5k
IHNldHRyYW5zZmVyCgkJZ3NhdmUgbmV3cGF0aCBjbGlwcGF0aCAxIC9zZXRzZXBhcmF0aW9uZ3Jh
eSB3aGVyZXtwb3Agc2V0c2VwYXJhdGlvbmdyYXl9e3NldGdyYXl9aWZlbHNlIAoJCS9BR01JUlNf
JmZpbGwgd2hlcmUge3BvcCBBR01JUlNfJmZpbGx9e2ZpbGx9IGlmZWxzZSBncmVzdG9yZQoJfWlm
CgllbmQKfWRlZgovbHdfc2F2ZV9yZXN0b3JlX292ZXJyaWRlIHsKCS9tZCB3aGVyZSB7CgkJcG9w
CgkJbWQgYmVnaW4KCQlpbml0aWFsaXplcGFnZQoJCS9pbml0aWFsaXplcGFnZXt9ZGVmCgkJL3Bt
U1ZzZXR1cHt9IGRlZgoJCS9lbmRwe31kZWYKCQkvcHNle31kZWYKCQkvcHNie31kZWYKCQkvb3Jp
Z19zaG93cGFnZSB3aGVyZQoJCQl7cG9wfQoJCQl7L29yaWdfc2hvd3BhZ2UgL3Nob3dwYWdlIGxv
YWQgZGVmfQoJCWlmZWxzZQoJCS9zaG93cGFnZSB7b3JpZ19zaG93cGFnZSBnUn0gZGVmCgkJZW5k
Cgl9aWYKfWRlZgovcHNjcmlwdF9zaG93cGFnZV9vdmVycmlkZSB7CgkvTlRQU09jdDk1IHdoZXJl
Cgl7CgkJYmVnaW4KCQlzaG93cGFnZQoJCXNhdmUKCQkvc2hvd3BhZ2UgL3Jlc3RvcmUgbG9hZCBk
ZWYKCQkvcmVzdG9yZSB7ZXhjaCBwb3B9ZGVmCgkJZW5kCgl9aWYKfWRlZgovZHJpdmVyX21lZGlh
X292ZXJyaWRlCnsKCS9tZCB3aGVyZSB7CgkJcG9wCgkJbWQgL2luaXRpYWxpemVwYWdlIGtub3du
IHsKCQkJbWQgL2luaXRpYWxpemVwYWdlIHt9IHB1dAoJCX0gaWYKCQltZCAvckMga25vd24gewoJ
CQltZCAvckMgezR7cG9wfXJlcGVhdH0gcHV0CgkJfSBpZgoJfWlmCgkvbXlzZXR1cCB3aGVyZSB7
CgkJL215c2V0dXAgWzEgMCAwIDEgMCAwXSBwdXQKCX1pZgoJQWRvYmVfQUdNX0NvcmUgL0FHTUNP
UkVfRGVmYXVsdF9DVE0gbWF0cml4IGN1cnJlbnRtYXRyaXggcHV0CglsZXZlbDIKCQl7QWRvYmVf
QUdNX0NvcmUgL0FHTUNPUkVfRGVmYXVsdF9QYWdlU2l6ZSBjdXJyZW50cGFnZWRldmljZS9QYWdl
U2l6ZSBnZXQgcHV0fWlmCn1kZWYKL2RyaXZlcl9jaGVja19tZWRpYV9vdmVycmlkZQp7CgkvUHJl
cHNEaWN0IHdoZXJlCgkJe3BvcH0KCQl7CgkJQWRvYmVfQUdNX0NvcmUgL0FHTUNPUkVfRGVmYXVs
dF9DVE0gZ2V0IG1hdHJpeCBjdXJyZW50bWF0cml4IG5lCgkJQWRvYmVfQUdNX0NvcmUgL0FHTUNP
UkVfRGVmYXVsdF9QYWdlU2l6ZSBnZXQgdHlwZSAvYXJyYXl0eXBlIGVxCgkJCXsKCQkJQWRvYmVf
QUdNX0NvcmUgL0FHTUNPUkVfRGVmYXVsdF9QYWdlU2l6ZSBnZXQgMCBnZXQgY3VycmVudHBhZ2Vk
ZXZpY2UvUGFnZVNpemUgZ2V0IDAgZ2V0IGVxIGFuZAoJCQlBZG9iZV9BR01fQ29yZSAvQUdNQ09S
RV9EZWZhdWx0X1BhZ2VTaXplIGdldCAxIGdldCBjdXJyZW50cGFnZWRldmljZS9QYWdlU2l6ZSBn
ZXQgMSBnZXQgZXEgYW5kCgkJCX1pZgoJCQl7CgkJCUFkb2JlX0FHTV9Db3JlIC9BR01DT1JFX0Rl
ZmF1bHRfQ1RNIGdldCBzZXRtYXRyaXgKCQkJfWlmCgkJfWlmZWxzZQp9ZGVmCkFHTUNPUkVfZXJy
X3N0cmluZ3MgYmVnaW4KCS9BR01DT1JFX2JhZF9lbnZpcm9uIChFbnZpcm9ubWVudCBub3Qgc2F0
aXNmYWN0b3J5IGZvciB0aGlzIGpvYi4gRW5zdXJlIHRoYXQgdGhlIFBQRCBpcyBjb3JyZWN0IG9y
IHRoYXQgdGhlIFBvc3RTY3JpcHQgbGV2ZWwgcmVxdWVzdGVkIGlzIHN1cHBvcnRlZCBieSB0aGlz
IHByaW50ZXIuICkgZGVmCgkvQUdNQ09SRV9jb2xvcl9zcGFjZV9vbmhvc3Rfc2VwcyAoVGhpcyBq
b2IgY29udGFpbnMgY29sb3JzIHRoYXQgd2lsbCBub3Qgc2VwYXJhdGUgd2l0aCBvbi1ob3N0IG1l
dGhvZHMuICkgZGVmCgkvQUdNQ09SRV9pbnZhbGlkX2NvbG9yX3NwYWNlIChUaGlzIGpvYiBjb250
YWlucyBhbiBpbnZhbGlkIGNvbG9yIHNwYWNlLiApIGRlZgplbmQKZW5kCnN5c3RlbWRpY3QgL3Nl
dHBhY2tpbmcga25vd24KewoJc2V0cGFja2luZwp9IGlmCiUlRW5kUmVzb3VyY2UKJSVCZWdpblJl
c291cmNlOiBwcm9jc2V0IEFkb2JlX0Nvb2xUeXBlX0NvcmUgMi4yMyAwCiUlQ29weXJpZ2h0OiBD
b3B5cmlnaHQgMTk5Ny0yMDAzIEFkb2JlIFN5c3RlbXMgSW5jb3Jwb3JhdGVkLiAgQWxsIFJpZ2h0
cyBSZXNlcnZlZC4KJSVWZXJzaW9uOiAyLjIzIDAKMTAgZGljdCBiZWdpbgovQWRvYmVfQ29vbFR5
cGVfUGFzc3RocnUgY3VycmVudGRpY3QgZGVmCi9BZG9iZV9Db29sVHlwZV9Db3JlX0RlZmluZWQg
dXNlcmRpY3QgL0Fkb2JlX0Nvb2xUeXBlX0NvcmUga25vd24gZGVmCkFkb2JlX0Nvb2xUeXBlX0Nv
cmVfRGVmaW5lZAoJeyAvQWRvYmVfQ29vbFR5cGVfQ29yZSB1c2VyZGljdCAvQWRvYmVfQ29vbFR5
cGVfQ29yZSBnZXQgZGVmIH0KaWYKdXNlcmRpY3QgL0Fkb2JlX0Nvb2xUeXBlX0NvcmUgNjAgZGlj
dCBkdXAgYmVnaW4gcHV0Ci9BZG9iZV9Db29sVHlwZV9WZXJzaW9uIDIuMjMgZGVmCi9MZXZlbDI/
CglzeXN0ZW1kaWN0IC9sYW5ndWFnZWxldmVsIGtub3duIGR1cAoJCXsgcG9wIHN5c3RlbWRpY3Qg
L2xhbmd1YWdlbGV2ZWwgZ2V0IDIgZ2UgfQoJaWYgZGVmCkxldmVsMj8gbm90Cgl7CgkvY3VycmVu
dGdsb2JhbCBmYWxzZSBkZWYKCS9zZXRnbG9iYWwgL3BvcCBsb2FkIGRlZgoJL2djaGVjayB7IHBv
cCBmYWxzZSB9IGJpbmQgZGVmCgkvY3VycmVudHBhY2tpbmcgZmFsc2UgZGVmCgkvc2V0cGFja2lu
ZyAvcG9wIGxvYWQgZGVmCgkvU2hhcmVkRm9udERpcmVjdG9yeSAwIGRpY3QgZGVmCgl9CmlmCmN1
cnJlbnRwYWNraW5nCnRydWUgc2V0cGFja2luZwovQF9TYXZlU3RhY2tMZXZlbHMKCXsKCUFkb2Jl
X0Nvb2xUeXBlX0RhdGEKCQliZWdpbgoJCUBvcFN0YWNrQ291bnRCeUxldmVsIEBvcFN0YWNrTGV2
ZWwKCQkyIGNvcHkga25vd24gbm90CgkJCXsgMiBjb3B5IDMgZGljdCBkdXAgL2FyZ3MgNyBpbmRl
eCA1IGFkZCBhcnJheSBwdXQgcHV0IGdldCB9CgkJCXsKCQkJZ2V0IGR1cCAvYXJncyBnZXQgZHVw
IGxlbmd0aCAzIGluZGV4IGx0CgkJCQl7CgkJCQlkdXAgbGVuZ3RoIDUgYWRkIGFycmF5IGV4Y2gK
CQkJCTEgaW5kZXggZXhjaCAwIGV4Y2ggcHV0aW50ZXJ2YWwKCQkJCTEgaW5kZXggZXhjaCAvYXJn
cyBleGNoIHB1dAoJCQkJfQoJCQkJeyBwb3AgfQoJCQlpZmVsc2UKCQkJfQoJCWlmZWxzZQoJCQli
ZWdpbgoJCQljb3VudCAyIHN1YiAxIGluZGV4IGx0CgkJCQl7IHBvcCBjb3VudCAxIHN1YiB9CgkJ
CWlmCgkJCWR1cCAvYXJnQ291bnQgZXhjaCBkZWYKCQkJZHVwIDAgZ3QKCQkJCXsKCQkJCWV4Y2gg
MSBpbmRleCAyIGFkZCAxIHJvbGwKCQkJCWFyZ3MgZXhjaCAwIGV4Y2ggZ2V0aW50ZXJ2YWwgCgkJ
CWFzdG9yZSBwb3AKCQkJCX0KCQkJCXsgcG9wIH0KCQkJaWZlbHNlCgkJCWNvdW50IDEgc3ViIC9y
ZXN0Q291bnQgZXhjaCBkZWYKCQkJZW5kCgkJL0BvcFN0YWNrTGV2ZWwgQG9wU3RhY2tMZXZlbCAx
IGFkZCBkZWYKCQljb3VudGRpY3RzdGFjayAxIHN1YgoJCUBkaWN0U3RhY2tDb3VudEJ5TGV2ZWwg
ZXhjaCBAZGljdFN0YWNrTGV2ZWwgZXhjaCBwdXQKCQkvQGRpY3RTdGFja0xldmVsIEBkaWN0U3Rh
Y2tMZXZlbCAxIGFkZCBkZWYKCQllbmQKCX0gYmluZCBkZWYKL0BfUmVzdG9yZVN0YWNrTGV2ZWxz
Cgl7CglBZG9iZV9Db29sVHlwZV9EYXRhCgkJYmVnaW4KCQkvQG9wU3RhY2tMZXZlbCBAb3BTdGFj
a0xldmVsIDEgc3ViIGRlZgoJCUBvcFN0YWNrQ291bnRCeUxldmVsIEBvcFN0YWNrTGV2ZWwgZ2V0
CgkJCWJlZ2luCgkJCWNvdW50IHJlc3RDb3VudCBzdWIgZHVwIDAgZ3QKCQkJCXsgeyBwb3AgfSBy
ZXBlYXQgfQoJCQkJeyBwb3AgfQoJCQlpZmVsc2UKCQkJYXJncyAwIGFyZ0NvdW50IGdldGludGVy
dmFsIHt9IGZvcmFsbAoJCQllbmQKCQkvQGRpY3RTdGFja0xldmVsIEBkaWN0U3RhY2tMZXZlbCAx
IHN1YiBkZWYKCQlAZGljdFN0YWNrQ291bnRCeUxldmVsIEBkaWN0U3RhY2tMZXZlbCBnZXQKCQll
bmQKCWNvdW50ZGljdHN0YWNrIGV4Y2ggc3ViIGR1cCAwIGd0CgkJeyB7IGVuZCB9IHJlcGVhdCB9
CgkJeyBwb3AgfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi9AX1BvcFN0YWNrTGV2ZWxzCgl7CglBZG9i
ZV9Db29sVHlwZV9EYXRhCgkJYmVnaW4KCQkvQG9wU3RhY2tMZXZlbCBAb3BTdGFja0xldmVsIDEg
c3ViIGRlZgoJCS9AZGljdFN0YWNrTGV2ZWwgQGRpY3RTdGFja0xldmVsIDEgc3ViIGRlZgoJCWVu
ZAoJfSBiaW5kIGRlZgovQFJhaXNlCgl7CglleGNoIGN2eCBleGNoIGVycm9yZGljdCBleGNoIGdl
dCBleGVjCglzdG9wCgl9IGJpbmQgZGVmCi9AUmVSYWlzZQoJewoJY3Z4ICRlcnJvciAvZXJyb3Ju
YW1lIGdldCBlcnJvcmRpY3QgZXhjaCBnZXQgZXhlYwoJc3RvcAoJfSBiaW5kIGRlZgovQFN0b3Bw
ZWQKCXsKCTAgQCNTdG9wcGVkCgl9IGJpbmQgZGVmCi9AI1N0b3BwZWQKCXsKCUBfU2F2ZVN0YWNr
TGV2ZWxzCglzdG9wcGVkCgkJeyBAX1Jlc3RvcmVTdGFja0xldmVscyB0cnVlIH0KCQl7IEBfUG9w
U3RhY2tMZXZlbHMgZmFsc2UgfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi9AQXJnCgl7CglBZG9iZV9D
b29sVHlwZV9EYXRhCgkJYmVnaW4KCQlAb3BTdGFja0NvdW50QnlMZXZlbCBAb3BTdGFja0xldmVs
IDEgc3ViIGdldCAvYXJncyBnZXQgZXhjaCBnZXQKCQllbmQKCX0gYmluZCBkZWYKY3VycmVudGds
b2JhbCB0cnVlIHNldGdsb2JhbAovQ1RIYXNSZXNvdXJjZUZvckFsbEJ1ZwoJTGV2ZWwyPwoJCXsK
CQkxIGRpY3QgZHVwIGJlZ2luCgkJbWFyawoJCQl7CgkJCQkoKikgeyBwb3Agc3RvcCB9IDEyOCBz
dHJpbmcgL0NhdGVnb3J5CgkJCXJlc291cmNlZm9yYWxsCgkJCX0KCQlzdG9wcGVkCgkJY2xlYXJ0
b21hcmsKCQljdXJyZW50ZGljdCBlcSBkdXAKCQkJeyBlbmQgfQoJCWlmCgkJbm90CgkJfQoJCXsg
ZmFsc2UgfQoJaWZlbHNlCglkZWYKL0NUSGFzUmVzb3VyY2VTdGF0dXNCdWcKCUxldmVsMj8KCQl7
CgkJbWFyawoJCQl7IC9zdGV2ZWFtZXJpZ2UgL0NhdGVnb3J5IHJlc291cmNlc3RhdHVzIH0KCQlz
dG9wcGVkCgkJCXsgY2xlYXJ0b21hcmsgdHJ1ZSB9CgkJCXsgY2xlYXJ0b21hcmsgY3VycmVudGds
b2JhbCBub3QgfQoJCWlmZWxzZQoJCX0KCQl7IGZhbHNlIH0KCWlmZWxzZQoJZGVmCnNldGdsb2Jh
bAovQ1RSZXNvdXJjZVN0YXR1cwoJCXsKCQltYXJrIDMgMSByb2xsCgkJL0NhdGVnb3J5IGZpbmRy
ZXNvdXJjZQoJCQliZWdpbgoJCQkoe1Jlc291cmNlU3RhdHVzfSBzdG9wcGVkKSAwICgpIC9TdWJG
aWxlRGVjb2RlIGZpbHRlciBjdnggZXhlYwoJCQkJeyBjbGVhcnRvbWFyayBmYWxzZSB9CgkJCQl7
IHsgMyAyIHJvbGwgcG9wIHRydWUgfSB7IGNsZWFydG9tYXJrIGZhbHNlIH0gaWZlbHNlIH0KCQkJ
aWZlbHNlCgkJCWVuZAoJCX0gYmluZCBkZWYKL0NUV29ya0Fyb3VuZEJ1Z3MKCXsKCUxldmVsMj8K
CQl7CgkJL2NpZF9QcmVMb2FkIC9Qcm9jU2V0IHJlc291cmNlc3RhdHVzCgkJCXsKCQkJcG9wIHBv
cAoJCQljdXJyZW50Z2xvYmFsCgkJCW1hcmsKCQkJCXsKCQkJCSgqKQoJCQkJCXsKCQkJCQlkdXAg
L0NNYXAgQ1RIYXNSZXNvdXJjZVN0YXR1c0J1ZwoJCQkJCQl7IENUUmVzb3VyY2VTdGF0dXMgfQoJ
CQkJCQl7IHJlc291cmNlc3RhdHVzIH0KCQkJCQlpZmVsc2UKCQkJCQkJewoJCQkJCQlwb3AgZHVw
IDAgZXEgZXhjaCAxIGVxIG9yCgkJCQkJCQl7CgkJCQkJCQlkdXAgL0NNYXAgZmluZHJlc291cmNl
IGdjaGVjayBzZXRnbG9iYWwKCQkJCQkJCS9DTWFwIHVuZGVmaW5lcmVzb3VyY2UKCQkJCQkJCX0K
CQkJCQkJCXsKCQkJCQkJCXBvcCBDVEhhc1Jlc291cmNlRm9yQWxsQnVnCgkJCQkJCQkJeyBleGl0
IH0KCQkJCQkJCQl7IHN0b3AgfQoJCQkJCQkJaWZlbHNlCgkJCQkJCQl9CgkJCQkJCWlmZWxzZQoJ
CQkJCQl9CgkJCQkJCXsgcG9wIH0KCQkJCQlpZmVsc2UKCQkJCQl9CgkJCQkxMjggc3RyaW5nIC9D
TWFwIHJlc291cmNlZm9yYWxsCgkJCQl9CgkJCXN0b3BwZWQKCQkJCXsgY2xlYXJ0b21hcmsgfQoJ
CQlzdG9wcGVkIHBvcAoJCQlzZXRnbG9iYWwKCQkJfQoJCWlmCgkJfQoJaWYKCX0gYmluZCBkZWYK
L2RvY19zZXR1cAoJewoJQWRvYmVfQ29vbFR5cGVfQ29yZQoJCWJlZ2luCgkJQ1RXb3JrQXJvdW5k
QnVncwoJCS9tb3YgL21vdmV0byBsb2FkIGRlZgoJCS9uZm50IC9uZXdlbmNvZGVkZm9udCBsb2Fk
IGRlZgoJCS9tZm50IC9tYWtlZm9udCBsb2FkIGRlZgoJCS9zZm50IC9zZXRmb250IGxvYWQgZGVm
CgkJL3VmbnQgL3VuZGVmaW5lZm9udCBsb2FkIGRlZgoJCS9jaHAgL2NoYXJwYXRoIGxvYWQgZGVm
CgkJL2F3c2ggL2F3aWR0aHNob3cgbG9hZCBkZWYKCQkvd3NoIC93aWR0aHNob3cgbG9hZCBkZWYK
CQkvYXNoIC9hc2hvdyBsb2FkIGRlZgoJCS9zaCAvc2hvdyBsb2FkIGRlZgoJCWVuZAoJdXNlcmRp
Y3QgL0Fkb2JlX0Nvb2xUeXBlX0RhdGEgMTAgZGljdCBkdXAKCQliZWdpbgoJCS9BZGRXaWR0aHM/
IGZhbHNlIGRlZgoJCS9DQyAwIGRlZgoJCS9jaGFyY29kZSAyIHN0cmluZyBkZWYKCQkvQG9wU3Rh
Y2tDb3VudEJ5TGV2ZWwgMzIgZGljdCBkZWYKCQkvQG9wU3RhY2tMZXZlbCAwIGRlZgoJCS9AZGlj
dFN0YWNrQ291bnRCeUxldmVsIDMyIGRpY3QgZGVmCgkJL0BkaWN0U3RhY2tMZXZlbCAwIGRlZgoJ
CS9JblZNRm9udHNCeUNNYXAgMTAgZGljdCBkZWYKCQkvSW5WTURlZXBDb3BpZWRGb250cyAxMCBk
aWN0IGRlZgoJCWVuZCBwdXQKCX0gYmluZCBkZWYKL2RvY190cmFpbGVyCgl7CgljdXJyZW50ZGlj
dCBBZG9iZV9Db29sVHlwZV9Db3JlIGVxCgkJeyBlbmQgfQoJaWYKCX0gYmluZCBkZWYKL3BhZ2Vf
c2V0dXAKCXsKCUFkb2JlX0Nvb2xUeXBlX0NvcmUgYmVnaW4KCX0gYmluZCBkZWYKL3BhZ2VfdHJh
aWxlcgoJewoJZW5kCgl9IGJpbmQgZGVmCi91bmxvYWQKCXsKCXN5c3RlbWRpY3QgL2xhbmd1YWdl
bGV2ZWwga25vd24KCQl7CgkJc3lzdGVtZGljdC9sYW5ndWFnZWxldmVsIGdldCAyIGdlCgkJCXsK
CQkJdXNlcmRpY3QvQWRvYmVfQ29vbFR5cGVfQ29yZSAyIGNvcHkga25vd24KCQkJCXsgdW5kZWYg
fQoJCQkJeyBwb3AgcG9wIH0KCQkJaWZlbHNlCgkJCX0KCQlpZgoJCX0KCWlmCgl9IGJpbmQgZGVm
Ci9uZGYKCXsKCTEgaW5kZXggd2hlcmUKCQl7IHBvcCBwb3AgcG9wIH0KCQl7IGR1cCB4Y2hlY2sg
eyBiaW5kIH0gaWYgZGVmIH0KCWlmZWxzZQoJfSBkZWYKL2ZpbmRmb250IHN5c3RlbWRpY3QKCWJl
Z2luCgl1c2VyZGljdAoJCWJlZ2luCgkJL2dsb2JhbGRpY3Qgd2hlcmUgeyAvZ2xvYmFsZGljdCBn
ZXQgYmVnaW4gfSBpZgoJCQlkdXAgd2hlcmUgcG9wIGV4Y2ggZ2V0CgkJL2dsb2JhbGRpY3Qgd2hl
cmUgeyBwb3AgZW5kIH0gaWYKCQllbmQKCWVuZApBZG9iZV9Db29sVHlwZV9Db3JlX0RlZmluZWQK
CXsgL3N5c3RlbWZpbmRmb250IGV4Y2ggZGVmIH0KCXsKCS9maW5kZm9udCAxIGluZGV4IGRlZgoJ
L3N5c3RlbWZpbmRmb250IGV4Y2ggZGVmCgl9CmlmZWxzZQovdW5kZWZpbmVmb250Cgl7IHBvcCB9
IG5kZgovY29weWZvbnQKCXsKCWN1cnJlbnRnbG9iYWwgMyAxIHJvbGwKCTEgaW5kZXggZ2NoZWNr
IHNldGdsb2JhbAoJZHVwIG51bGwgZXEgeyAwIH0geyBkdXAgbGVuZ3RoIH0gaWZlbHNlCgkyIGlu
ZGV4IGxlbmd0aCBhZGQgMSBhZGQgZGljdAoJCWJlZ2luCgkJZXhjaAoJCQl7CgkJCTEgaW5kZXgg
L0ZJRCBlcQoJCQkJeyBwb3AgcG9wIH0KCQkJCXsgZGVmIH0KCQkJaWZlbHNlCgkJCX0KCQlmb3Jh
bGwKCQlkdXAgbnVsbCBlcQoJCQl7IHBvcCB9CgkJCXsgeyBkZWYgfSBmb3JhbGwgfQoJCWlmZWxz
ZQoJCWN1cnJlbnRkaWN0CgkJZW5kCglleGNoIHNldGdsb2JhbAoJfSBiaW5kIGRlZgovY29weWFy
cmF5Cgl7CgljdXJyZW50Z2xvYmFsIGV4Y2gKCWR1cCBnY2hlY2sgc2V0Z2xvYmFsCglkdXAgbGVu
Z3RoIGFycmF5IGNvcHkKCWV4Y2ggc2V0Z2xvYmFsCgl9IGJpbmQgZGVmCi9uZXdlbmNvZGVkZm9u
dAoJewoJY3VycmVudGdsb2JhbAoJCXsKCQlTaGFyZWRGb250RGlyZWN0b3J5IDMgaW5kZXggIGtu
b3duCgkJCXsgU2hhcmVkRm9udERpcmVjdG9yeSAzIGluZGV4IGdldCAvRm9udFJlZmVyZW5jZWQg
a25vd24gfQoJCQl7IGZhbHNlIH0KCQlpZmVsc2UKCQl9CgkJewoJCUZvbnREaXJlY3RvcnkgMyBp
bmRleCBrbm93bgoJCQl7IEZvbnREaXJlY3RvcnkgMyBpbmRleCBnZXQgL0ZvbnRSZWZlcmVuY2Vk
IGtub3duIH0KCQkJewoJCQlTaGFyZWRGb250RGlyZWN0b3J5IDMgaW5kZXgga25vd24KCQkJCXsg
U2hhcmVkRm9udERpcmVjdG9yeSAzIGluZGV4IGdldCAvRm9udFJlZmVyZW5jZWQga25vd24gfQoJ
CQkJeyBmYWxzZSB9CgkJCWlmZWxzZQoJCQl9CgkJaWZlbHNlCgkJfQoJaWZlbHNlCglkdXAKCQl7
CgkJMyBpbmRleCBmaW5kZm9udCAvRm9udFJlZmVyZW5jZWQgZ2V0CgkJMiBpbmRleCBkdXAgdHlw
ZSAvbmFtZXR5cGUgZXEKCQkJe2ZpbmRmb250fQoJCWlmIG5lCgkJCXsgcG9wIGZhbHNlIH0KCQlp
ZgoJCX0KCWlmCgkJewoJCXBvcAoJCTEgaW5kZXggZmluZGZvbnQKCQkvRW5jb2RpbmcgZ2V0IGV4
Y2gKCQkwIDEgMjU1CgkJCXsgMiBjb3B5IGdldCAzIGluZGV4IDMgMSByb2xsIHB1dCB9CgkJZm9y
CgkJcG9wIHBvcCBwb3AKCQl9CgkJewoJCWR1cCB0eXBlIC9uYW1ldHlwZSBlcQoJCSAgeyBmaW5k
Zm9udCB9CgkgIGlmCgkJZHVwIGR1cCBtYXhsZW5ndGggMiBhZGQgZGljdAoJCQliZWdpbgoJCQll
eGNoCgkJCQl7CgkJCQkxIGluZGV4IC9GSUQgbmUKCQkJCQl7ZGVmfQoJCQkJCXtwb3AgcG9wfQoJ
CQkJaWZlbHNlCgkJCQl9CgkJCWZvcmFsbAoJCQkvRm9udFJlZmVyZW5jZWQgZXhjaCBkZWYKCQkJ
L0VuY29kaW5nIGV4Y2ggZHVwIGxlbmd0aCBhcnJheSBjb3B5IGRlZgoJCQkvRm9udE5hbWUgMSBp
bmRleCBkdXAgdHlwZSAvc3RyaW5ndHlwZSBlcSB7IGN2biB9IGlmIGRlZiBkdXAKCQkJY3VycmVu
dGRpY3QKCQkJZW5kCgkJZGVmaW5lZm9udCBkZWYKCQl9CglpZmVsc2UKCX0gYmluZCBkZWYKL1Nl
dFN1YnN0aXR1dGVTdHJhdGVneQoJewoJJFN1YnN0aXR1dGVGb250CgkJYmVnaW4KCQlkdXAgdHlw
ZSAvZGljdHR5cGUgbmUKCQkJeyAwIGRpY3QgfQoJCWlmCgkJY3VycmVudGRpY3QgLyRTdHJhdGVn
aWVzIGtub3duCgkJCXsKCQkJZXhjaCAkU3RyYXRlZ2llcyBleGNoIAoJCQkyIGNvcHkga25vd24K
CQkJCXsKCQkJCWdldAoJCQkJMiBjb3B5IG1heGxlbmd0aCBleGNoIG1heGxlbmd0aCBhZGQgZGlj
dAoJCQkJCWJlZ2luCgkJCQkJeyBkZWYgfSBmb3JhbGwKCQkJCQl7IGRlZiB9IGZvcmFsbAoJCQkJ
CWN1cnJlbnRkaWN0CgkJCQkJZHVwIC8kSW5pdCBrbm93bgoJCQkJCQl7IGR1cCAvJEluaXQgZ2V0
IGV4ZWMgfQoJCQkJCWlmCgkJCQkJZW5kCgkJCQkvJFN0cmF0ZWd5IGV4Y2ggZGVmCgkJCQl9CgkJ
CQl7IHBvcCBwb3AgcG9wIH0KCQkJaWZlbHNlCgkJCX0KCQkJeyBwb3AgcG9wIH0KCQlpZmVsc2UK
CQllbmQKCX0gYmluZCBkZWYKL3NjZmYKCXsKCSRTdWJzdGl0dXRlRm9udAoJCWJlZ2luCgkJZHVw
IHR5cGUgL3N0cmluZ3R5cGUgZXEKCQkJeyBkdXAgbGVuZ3RoIGV4Y2ggfQoJCQl7IG51bGwgfQoJ
CWlmZWxzZQoJCS8kc25hbWUgZXhjaCBkZWYKCQkvJHNsZW4gZXhjaCBkZWYKCQkvJGluVk1JbmRl
eAoJCQkkc25hbWUgbnVsbCBlcQoJCQkJewoJCQkJMSBpbmRleCAkc3RyIGN2cwoJCQkJZHVwIGxl
bmd0aCAkc2xlbiBzdWIgJHNsZW4gZ2V0aW50ZXJ2YWwgY3ZuCgkJCQl9CgkJCQl7ICRzbmFtZSB9
CgkJCWlmZWxzZSBkZWYKCQllbmQKCQl7IGZpbmRmb250IH0KCUBTdG9wcGVkCgkJewoJCWR1cCBs
ZW5ndGggOCBhZGQgc3RyaW5nIGV4Y2gKCQkxIGluZGV4IDAgKEJhZEZvbnQ6KSBwdXRpbnRlcnZh
bAoJCTEgaW5kZXggZXhjaCA4IGV4Y2ggZHVwIGxlbmd0aCBzdHJpbmcgY3ZzIHB1dGludGVydmFs
IGN2bgoJCQl7IGZpbmRmb250IH0KCQlAU3RvcHBlZAoJCQl7IHBvcCAvQ291cmllciBmaW5kZm9u
dCB9CgkJaWYKCQl9CglpZgoJJFN1YnN0aXR1dGVGb250CgkJYmVnaW4KCQkvJHNuYW1lIG51bGwg
ZGVmCgkJLyRzbGVuIDAgZGVmCgkJLyRpblZNSW5kZXggbnVsbCBkZWYKCQllbmQKCX0gYmluZCBk
ZWYKL2lzV2lkdGhzT25seUZvbnQKCXsKCWR1cCAvV2lkdGhzT25seSBrbm93bgoJCXsgcG9wIHBv
cCB0cnVlIH0KCQl7CgkJZHVwIC9GRGVwVmVjdG9yIGtub3duCgkJCXsgL0ZEZXBWZWN0b3IgZ2V0
IHsgaXNXaWR0aHNPbmx5Rm9udCBkdXAgeyBleGl0IH0gaWYgfSBmb3JhbGwgfQoJCQl7CgkJCWR1
cCAvRkRBcnJheSBrbm93bgoJCQkJeyAvRkRBcnJheSBnZXQgeyBpc1dpZHRoc09ubHlGb250IGR1
cCB7IGV4aXQgfSBpZiB9IGZvcmFsbCB9CgkJCQl7IHBvcCB9CgkJCWlmZWxzZQoJCQl9CgkJaWZl
bHNlCgkJfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi8/c3RyMSAyNTYgc3RyaW5nIGRlZgovP3NldAoJ
ewoJJFN1YnN0aXR1dGVGb250CgkJYmVnaW4KCQkvJHN1YnN0aXR1dGVGb3VuZCBmYWxzZSBkZWYK
CQkvJGZvbnRuYW1lIDQgaW5kZXggZGVmCgkJLyRkb1NtYXJ0U3ViIGZhbHNlIGRlZgoJCWVuZAoJ
MyBpbmRleAoJY3VycmVudGdsb2JhbCBmYWxzZSBzZXRnbG9iYWwgZXhjaAoJL0NvbXBhdGlibGVG
b250cyAvUHJvY1NldCByZXNvdXJjZXN0YXR1cwoJCXsKCQlwb3AgcG9wCgkJL0NvbXBhdGlibGVG
b250cyAvUHJvY1NldCBmaW5kcmVzb3VyY2UKCQkJYmVnaW4KCQkJZHVwIC9Db21wYXRpYmxlRm9u
dCBjdXJyZW50ZXhjZXB0aW9uCgkJCTEgaW5kZXggL0NvbXBhdGlibGVGb250IHRydWUgc2V0ZXhj
ZXB0aW9uCgkJCTEgaW5kZXggL0ZvbnQgcmVzb3VyY2VzdGF0dXMKCQkJCXsKCQkJCXBvcCBwb3AK
CQkJCTMgMiByb2xsIHNldGdsb2JhbAoJCQkJZW5kCgkJCQlleGNoCgkJCQlkdXAgZmluZGZvbnQK
CQkJCS9Db21wYXRpYmxlRm9udHMgL1Byb2NTZXQgZmluZHJlc291cmNlCgkJCQkJYmVnaW4KCQkJ
CQkzIDEgcm9sbCBleGNoIC9Db21wYXRpYmxlRm9udCBleGNoIHNldGV4Y2VwdGlvbgoJCQkJCWVu
ZAoJCQkJfQoJCQkJewoJCQkJMyAyIHJvbGwgc2V0Z2xvYmFsCgkJCQkxIGluZGV4IGV4Y2ggL0Nv
bXBhdGlibGVGb250IGV4Y2ggc2V0ZXhjZXB0aW9uCgkJCQllbmQKCQkJCWZpbmRmb250CgkJCQkk
U3Vic3RpdHV0ZUZvbnQgLyRzdWJzdGl0dXRlRm91bmQgdHJ1ZSBwdXQKCQkJCX0KCQkJaWZlbHNl
CgkJfQoJCXsgZXhjaCBzZXRnbG9iYWwgZmluZGZvbnQgfQoJaWZlbHNlCgkkU3Vic3RpdHV0ZUZv
bnQKCQliZWdpbgoJCSRzdWJzdGl0dXRlRm91bmQKCQkJewoJCSBmYWxzZQoJCSAoJSVbVXNpbmcg
ZW1iZWRkZWQgZm9udCApIHByaW50CgkJIDUgaW5kZXggP3N0cjEgY3ZzIHByaW50CgkJICggdG8g
YXZvaWQgdGhlIGZvbnQgc3Vic3RpdHV0aW9uIHByb2JsZW0gbm90ZWQgZWFybGllci5dJSVcbikg
cHJpbnQKCQkgfQoJCQl7CgkJCWR1cCAvRm9udE5hbWUga25vd24KCQkJCXsKCQkJCWR1cCAvRm9u
dE5hbWUgZ2V0ICRmb250bmFtZSBlcQoJCQkJMSBpbmRleCAvRGlzdGlsbGVyRmF1eEZvbnQga25v
d24gbm90IGFuZAoJCQkJL2N1cnJlbnRkaXN0aWxsZXJwYXJhbXMgd2hlcmUKCQkJCQl7IHBvcCBm
YWxzZSAyIGluZGV4IGlzV2lkdGhzT25seUZvbnQgbm90IGFuZCB9CgkJCQlpZgoJCQkJfQoJCQkJ
eyBmYWxzZSB9CgkJCWlmZWxzZQoJCQl9CgkJaWZlbHNlCgkJZXhjaCBwb3AKCQkvJGRvU21hcnRT
dWIgdHJ1ZSBkZWYKCQllbmQKCQl7CgkJZXhjaCBwb3AgZXhjaCBwb3AgZXhjaAoJCTIgZGljdCBk
dXAgL0ZvdW5kIDMgaW5kZXggcHV0CgkJZXhjaCBmaW5kZm9udCBleGNoCgkJfQoJCXsKCQlleGNo
IGV4ZWMKCQlleGNoIGR1cCBmaW5kZm9udAoJCWR1cCAvRm9udFR5cGUgZ2V0IDMgZXEKCSAgewoJ
CSAgZXhjaCA/c3RyMSBjdnMKCQkgIGR1cCBsZW5ndGggMSBzdWIKCQkgIC0xIDAKCQl7CgkJCSAg
ZXhjaCBkdXAgMiBpbmRleCBnZXQgNDIgZXEKCQkJewoJCQkJIGV4Y2ggMCBleGNoIGdldGludGVy
dmFsIGN2biA0IDEgcm9sbCAzIDIgcm9sbCBwb3AKCQkJCSBleGl0CgkJCSAgfQoJCQkgIHtleGNo
IHBvcH0gaWZlbHNlCgkJICB9Zm9yCgkJfQoJCXsKCQkgZXhjaCBwb3AKCSAgfSBpZmVsc2UKCQky
IGRpY3QgZHVwIC9Eb3dubG9hZGVkIDYgNSByb2xsIHB1dAoJCX0KCWlmZWxzZQoJZHVwIC9Gb250
TmFtZSA0IGluZGV4IHB1dCBjb3B5Zm9udCBkZWZpbmVmb250IHBvcAoJfSBiaW5kIGRlZgovP3N0
cjIgMjU2IHN0cmluZyBkZWYKLz9hZGQKCXsKCTEgaW5kZXggdHlwZSAvaW50ZWdlcnR5cGUgZXEK
CQl7IGV4Y2ggdHJ1ZSA0IDIgfQoJCXsgZmFsc2UgMyAxIH0KCWlmZWxzZQoJcm9sbAoJMSBpbmRl
eCBmaW5kZm9udAoJZHVwIC9XaWR0aHMga25vd24KCQl7CgkJQWRvYmVfQ29vbFR5cGVfRGF0YSAv
QWRkV2lkdGhzPyB0cnVlIHB1dAoJCWdzYXZlIGR1cCAxMDAwIHNjYWxlZm9udCBzZXRmb250CgkJ
fQoJaWYKCS9Eb3dubG9hZGVkIGtub3duCgkJewoJCWV4ZWMKCQlleGNoCgkJCXsKCQkJZXhjaCA/
c3RyMiBjdnMgZXhjaAoJCQlmaW5kZm9udCAvRG93bmxvYWRlZCBnZXQgMSBkaWN0IGJlZ2luIC9E
b3dubG9hZGVkIDEgaW5kZXggZGVmID9zdHIxIGN2cyBsZW5ndGgKCQkJP3N0cjEgMSBpbmRleCAx
IGFkZCAzIGluZGV4IHB1dGludGVydmFsCgkJCWV4Y2ggbGVuZ3RoIDEgYWRkIDEgaW5kZXggYWRk
CgkJCT9zdHIxIDIgaW5kZXggKCopIHB1dGludGVydmFsCgkJCT9zdHIxIDAgMiBpbmRleCBnZXRp
bnRlcnZhbCBjdm4gZmluZGZvbnQgCgkJCT9zdHIxIDMgaW5kZXggKCspIHB1dGludGVydmFsCgkJ
CTIgZGljdCBkdXAgL0ZvbnROYW1lID9zdHIxIDAgNiBpbmRleCBnZXRpbnRlcnZhbCBjdm4gcHV0
CgkJCWR1cCAvRG93bmxvYWRlZCBEb3dubG9hZGVkIHB1dCBlbmQgY29weWZvbnQKCQkJZHVwIC9G
b250TmFtZSBnZXQgZXhjaCBkZWZpbmVmb250IHBvcCBwb3AgcG9wCgkJCX0KCQkJewoJCQlwb3AK
CQkJfQoJCWlmZWxzZQoJCX0KCQl7CgkJcG9wCgkJZXhjaAoJCQl7CgkJCWZpbmRmb250CgkJCWR1
cCAvRm91bmQgZ2V0CgkJCWR1cCBsZW5ndGggZXhjaCA/c3RyMSBjdnMgcG9wCgkJCT9zdHIxIDEg
aW5kZXggKCspIHB1dGludGVydmFsCgkJCT9zdHIxIDEgaW5kZXggMSBhZGQgNCBpbmRleCA/c3Ry
MiBjdnMgcHV0aW50ZXJ2YWwKCQkJP3N0cjEgZXhjaCAwIGV4Y2ggNSA0IHJvbGwgP3N0cjIgY3Zz
IGxlbmd0aCAxIGFkZCBhZGQgZ2V0aW50ZXJ2YWwgY3ZuCgkJCTEgZGljdCBleGNoIDEgaW5kZXgg
ZXhjaCAvRm9udE5hbWUgZXhjaCBwdXQgY29weWZvbnQKCQkJZHVwIC9Gb250TmFtZSBnZXQgZXhj
aCBkZWZpbmVmb250IHBvcAoJCQl9CgkJCXsKCQkJcG9wCgkJCX0KCQlpZmVsc2UKCQl9CglpZmVs
c2UKCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0FkZFdpZHRocz8gZ2V0CgkJeyBncmVzdG9yZSBBZG9i
ZV9Db29sVHlwZV9EYXRhIC9BZGRXaWR0aHM/IGZhbHNlIHB1dCB9CglpZgoJfSBiaW5kIGRlZgov
P3NoCgl7CgljdXJyZW50Zm9udCAvRG93bmxvYWRlZCBrbm93biB7IGV4Y2ggfSBpZiBwb3AKCX0g
YmluZCBkZWYKLz9jaHAKCXsKCWN1cnJlbnRmb250IC9Eb3dubG9hZGVkIGtub3duIHsgcG9wIH0g
eyBmYWxzZSBjaHAgfSBpZmVsc2UKCX0gYmluZCBkZWYKLz9tdiAKCXsKCWN1cnJlbnRmb250IC9E
b3dubG9hZGVkIGtub3duIHsgbW92ZXRvIHBvcCBwb3AgfSB7IHBvcCBwb3AgbW92ZXRvIH0gaWZl
bHNlCgl9IGJpbmQgZGVmCnNldHBhY2tpbmcKdXNlcmRpY3QgLyRTdWJzdGl0dXRlRm9udCAyNSBk
aWN0IHB1dAoxIGRpY3QKCWJlZ2luCgkvU3Vic3RpdHV0ZUZvbnQKCQlkdXAgJGVycm9yIGV4Y2gg
MiBjb3B5IGtub3duCgkJCXsgZ2V0IH0KCQkJeyBwb3AgcG9wIHsgcG9wIC9Db3VyaWVyIH0gYmlu
ZCB9CgkJaWZlbHNlIGRlZgoJL2N1cnJlbnRkaXN0aWxsZXJwYXJhbXMgd2hlcmUgZHVwCgkJewoJ
CXBvcCBwb3AKCQljdXJyZW50ZGlzdGlsbGVycGFyYW1zIC9DYW5ub3RFbWJlZEZvbnRQb2xpY3kg
MiBjb3B5IGtub3duCgkJCXsgZ2V0IC9FcnJvciBlcSB9CgkJCXsgcG9wIHBvcCBmYWxzZSB9CgkJ
aWZlbHNlCgkJfQoJaWYgbm90CgkJewoJCWNvdW50ZGljdHN0YWNrIGFycmF5IGRpY3RzdGFjayAw
IGdldAoJCQliZWdpbgoJCQl1c2VyZGljdAoJCQkJYmVnaW4KCQkJCSRTdWJzdGl0dXRlRm9udAoJ
CQkJCWJlZ2luCgkJCQkJLyRzdHIgMTI4IHN0cmluZyBkZWYKCQkJCQkvJGZvbnRwYXQgMTI4IHN0
cmluZyBkZWYKCQkJCQkvJHNsZW4gMCBkZWYKCQkJCQkvJHNuYW1lIG51bGwgZGVmCgkJCQkJLyRt
YXRjaCBmYWxzZSBkZWYKCQkJCQkvJGZvbnRuYW1lIG51bGwgZGVmCgkJCQkJLyRzdWJzdGl0dXRl
Rm91bmQgZmFsc2UgZGVmCgkJCQkJLyRpblZNSW5kZXggbnVsbCBkZWYKCQkJCQkvJGRvU21hcnRT
dWIgdHJ1ZSBkZWYKCQkJCQkvJGRlcHRoIDAgZGVmCgkJCQkJLyRmb250bmFtZSBudWxsIGRlZgoJ
CQkJCS8kaXRhbGljYW5nbGUgMjYuNSBkZWYKCQkJCQkvJGRzdGFjayBudWxsIGRlZgoJCQkJCS8k
U3RyYXRlZ2llcyAxMCBkaWN0IGR1cAoJCQkJCQliZWdpbgoJCQkJCQkvJFR5cGUzVW5kZXJwcmlu
dAoJCQkJCQkJewoJCQkJCQkJY3VycmVudGdsb2JhbCBleGNoIGZhbHNlIHNldGdsb2JhbAoJCQkJ
CQkJMTEgZGljdAoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJL1VzZUZvbnQgZXhjaAoJCQkJCQkJCQkk
V01vZGUgMCBuZQoJCQkJCQkJCQkJewoJCQkJCQkJCQkJZHVwIGxlbmd0aCBkaWN0IGNvcHkKCQkJ
CQkJCQkJCWR1cCAvV01vZGUgJFdNb2RlIHB1dAoJCQkJCQkJCQkJL1VzZUZvbnQgZXhjaCBkZWZp
bmVmb250CgkJCQkJCQkJCQl9CgkJCQkJCQkJCWlmIGRlZgoJCQkJCQkJCS9Gb250TmFtZSAkZm9u
dG5hbWUgZHVwIHR5cGUgL3N0cmluZ3R5cGUgZXEgeyBjdm4gfSBpZiBkZWYKCQkJCQkJCQkvRm9u
dFR5cGUgMyBkZWYKCQkJCQkJCQkvRm9udE1hdHJpeCBbIC4wMDEgMCAwIC4wMDEgMCAwIF0gZGVm
CgkJCQkJCQkJL0VuY29kaW5nIDI1NiBhcnJheSBkdXAgMCAxIDI1NSB7IC8ubm90ZGVmIHB1dCBk
dXAgfSBmb3IgcG9wIGRlZgoJCQkJCQkJCS9Gb250QkJveCBbIDAgMCAwIDAgXSBkZWYKCQkJCQkJ
CQkvQ0NJbmZvIDcgZGljdCBkdXAKCQkJCQkJCQkJYmVnaW4KCQkJCQkJCQkJL2NjIG51bGwgZGVm
CgkJCQkJCQkJCS94IDAgZGVmCgkJCQkJCQkJCS95IDAgZGVmCgkJCQkJCQkJCWVuZCBkZWYKCQkJ
CQkJCQkvQnVpbGRDaGFyCgkJCQkJCQkJCXsKCQkJCQkJCQkJZXhjaAoJCQkJCQkJCQkJYmVnaW4K
CQkJCQkJCQkJCUNDSW5mbwoJCQkJCQkJCQkJCWJlZ2luCgkJCQkJCQkJCQkJMSBzdHJpbmcgZHVw
IDAgMyBpbmRleCBwdXQgZXhjaCBwb3AKCQkJCQkJCQkJCQkvY2MgZXhjaCBkZWYKCQkJCQkJCQkJ
CQlVc2VGb250IDEwMDAgc2NhbGVmb250IHNldGZvbnQKCQkJCQkJCQkJCQljYyBzdHJpbmd3aWR0
aCAveSBleGNoIGRlZiAveCBleGNoIGRlZgoJCQkJCQkJCQkJCXggeSBzZXRjaGFyd2lkdGgKCQkJ
CQkJCQkJCQkkU3Vic3RpdHV0ZUZvbnQgLyRTdHJhdGVneSBnZXQgLyRVbmRlcnByaW50IGdldCBl
eGVjCgkJCQkJCQkJCQkJMCAwIG1vdmV0byBjYyBzaG93CgkJCQkJCQkJCQkJeCB5IG1vdmV0bwoJ
CQkJCQkJCQkJCWVuZAoJCQkJCQkJCQkJZW5kCgkJCQkJCQkJCX0gYmluZCBkZWYKCQkJCQkJCQlj
dXJyZW50ZGljdAoJCQkJCQkJCWVuZAoJCQkJCQkJZXhjaCBzZXRnbG9iYWwKCQkJCQkJCX0gYmlu
ZCBkZWYKCQkJCQkJLyRHZXRhVGludAoJCQkJCQkJMiBkaWN0IGR1cAoJCQkJCQkJCWJlZ2luCgkJ
CQkJCQkJLyRCdWlsZEZvbnQKCQkJCQkJCQkJewoJCQkJCQkJCQlkdXAgL1dNb2RlIGtub3duCgkJ
CQkJCQkJCQl7IGR1cCAvV01vZGUgZ2V0IH0KCQkJCQkJCQkJCXsgMCB9CgkJCQkJCQkJCWlmZWxz
ZQoJCQkJCQkJCQkvJFdNb2RlIGV4Y2ggZGVmCgkJCQkJCQkJCSRmb250bmFtZSBleGNoCgkJCQkJ
CQkJCWR1cCAvRm9udE5hbWUga25vd24KCQkJCQkJCQkJCXsKCQkJCQkJCQkJCWR1cCAvRm9udE5h
bWUgZ2V0CgkJCQkJCQkJCQlkdXAgdHlwZSAvc3RyaW5ndHlwZSBlcSB7IGN2biB9IGlmCgkJCQkJ
CQkJCQl9CgkJCQkJCQkJCQl7IC91bm5hbWVkZm9udCB9CgkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJ
CQlleGNoCgkJCQkJCQkJCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0luVk1EZWVwQ29waWVkRm9udHMg
Z2V0CgkJCQkJCQkJCTEgaW5kZXggL0ZvbnROYW1lIGdldCBrbm93bgoJCQkJCQkJCQkJewoJCQkJ
CQkJCQkJcG9wCgkJCQkJCQkJCQlBZG9iZV9Db29sVHlwZV9EYXRhIC9JblZNRGVlcENvcGllZEZv
bnRzIGdldAoJCQkJCQkJCQkJMSBpbmRleCBnZXQKCQkJCQkJCQkJCW51bGwgY29weWZvbnQKCQkJ
CQkJCQkJCX0KCQkJCQkJCQkJCXsgJGRlZXBjb3B5Zm9udCB9CgkJCQkJCQkJCWlmZWxzZQoJCQkJ
CQkJCQlleGNoIDEgaW5kZXggZXhjaCAvRm9udEJhc2VkT24gZXhjaCBwdXQKCQkJCQkJCQkJZHVw
IC9Gb250TmFtZSAkZm9udG5hbWUgZHVwIHR5cGUgL3N0cmluZ3R5cGUgZXEgeyBjdm4gfSBpZiBw
dXQKCQkJCQkJCQkJZGVmaW5lZm9udAoJCQkJCQkJCQlBZG9iZV9Db29sVHlwZV9EYXRhIC9JblZN
RGVlcENvcGllZEZvbnRzIGdldAoJCQkJCQkJCQkJYmVnaW4KCQkJCQkJCQkJCWR1cCAvRm9udEJh
c2VkT24gZ2V0IDEgaW5kZXggZGVmCgkJCQkJCQkJCQllbmQKCQkJCQkJCQkJfSBiaW5kIGRlZgoJ
CQkJCQkJCS8kVW5kZXJwcmludAoJCQkJCQkJCQl7CgkJCQkJCQkJCWdzYXZlCgkJCQkJCQkJCXgg
YWJzIHkgYWJzIGd0CgkJCQkJCQkJCQl7IC95IDEwMDAgZGVmIH0KCQkJCQkJCQkJCXsgL3ggLTEw
MDAgZGVmIDUwMCAxMjAgdHJhbnNsYXRlIH0KCQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCUxldmVs
Mj8KCQkJCQkJCQkJCXsKCQkJCQkJCQkJCVsgL1NlcGFyYXRpb24gKEFsbCkgL0RldmljZUNNWUsg
eyAwIDAgMCAxIHBvcCB9IF0KCQkJCQkJCQkJCXNldGNvbG9yc3BhY2UKCQkJCQkJCQkJCX0KCQkJ
CQkJCQkJCXsgMCBzZXRncmF5IH0KCQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCTEwIHNldGxpbmV3
aWR0aAoJCQkJCQkJCQl4IC44IG11bAoJCQkJCQkJCQlbIDcgMyBdCgkJCQkJCQkJCQl7CgkJCQkJ
CQkJCQl5IG11bCA4IGRpdiAxMjAgc3ViIHggMTAgZGl2IGV4Y2ggbW92ZXRvCgkJCQkJCQkJCQkw
IHkgNCBkaXYgbmVnIHJsaW5ldG8KCQkJCQkJCQkJCWR1cCAwIHJsaW5ldG8KCQkJCQkJCQkJCTAg
eSA0IGRpdiBybGluZXRvCgkJCQkJCQkJCQljbG9zZXBhdGgKCQkJCQkJCQkJCWdzYXZlCgkJCQkJ
CQkJCQlMZXZlbDI/CgkJCQkJCQkJCQkJeyAuMiBzZXRjb2xvciB9CgkJCQkJCQkJCQkJeyAuOCBz
ZXRncmF5IH0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJZmlsbCBncmVzdG9yZQoJCQkJCQkJ
CQkJc3Ryb2tlCgkJCQkJCQkJCQl9CgkJCQkJCQkJCWZvcmFsbAoJCQkJCQkJCQlwb3AKCQkJCQkJ
CQkJZ3Jlc3RvcmUKCQkJCQkJCQkJfSBiaW5kIGRlZgoJCQkJCQkJCWVuZCBkZWYKCQkJCQkJLyRP
YmxpcXVlCgkJCQkJCQkxIGRpY3QgZHVwCgkJCQkJCQkJYmVnaW4KCQkJCQkJCQkvJEJ1aWxkRm9u
dAoJCQkJCQkJCQl7CgkJCQkJCQkJCWN1cnJlbnRnbG9iYWwgZXhjaCBkdXAgZ2NoZWNrIHNldGds
b2JhbAoJCQkJCQkJCQludWxsIGNvcHlmb250CgkJCQkJCQkJCQliZWdpbgoJCQkJCQkJCQkJL0Zv
bnRCYXNlZE9uCgkJCQkJCQkJCQljdXJyZW50ZGljdCAvRm9udE5hbWUga25vd24KCQkJCQkJCQkJ
CQl7CgkJCQkJCQkJCQkJRm9udE5hbWUKCQkJCQkJCQkJCQlkdXAgdHlwZSAvc3RyaW5ndHlwZSBl
cSB7IGN2biB9IGlmCgkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJCXsgL3VubmFtZWRmb250IH0KCQkJ
CQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJZGVmCgkJCQkJCQkJCQkvRm9udE5hbWUgJGZvbnRuYW1l
IGR1cCB0eXBlIC9zdHJpbmd0eXBlIGVxIHsgY3ZuIH0gaWYgZGVmCgkJCQkJCQkJCQkvY3VycmVu
dGRpc3RpbGxlcnBhcmFtcyB3aGVyZQoJCQkJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCQkJCQl7CgkJ
CQkJCQkJCQkJL0ZvbnRJbmZvIGN1cnJlbnRkaWN0IC9Gb250SW5mbyBrbm93bgoJCQkJCQkJCQkJ
CQl7IEZvbnRJbmZvIG51bGwgY29weWZvbnQgfQoJCQkJCQkJCQkJCQl7IDIgZGljdCB9CgkJCQkJ
CQkJCQkJaWZlbHNlCgkJCQkJCQkJCQkJZHVwCgkJCQkJCQkJCQkJCWJlZ2luCgkJCQkJCQkJCQkJ
CS9JdGFsaWNBbmdsZSAkaXRhbGljYW5nbGUgZGVmCgkJCQkJCQkJCQkJCS9Gb250TWF0cml4IEZv
bnRNYXRyaXgKCQkJCQkJCQkJCQkJWyAxIDAgSXRhbGljQW5nbGUgZHVwIHNpbiBleGNoIGNvcyBk
aXYgMSAwIDAgXQoJCQkJCQkJCQkJCQltYXRyaXggY29uY2F0bWF0cml4IHJlYWRvbmx5CgkJCQkJ
CQkJCQkJCWVuZAoJCQkJCQkJCQkJCTQgMiByb2xsIGRlZgoJCQkJCQkJCQkJCWRlZgoJCQkJCQkJ
CQkJCX0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJRm9udE5hbWUgY3VycmVudGRpY3QKCQkJ
CQkJCQkJCWVuZAoJCQkJCQkJCQlkZWZpbmVmb250CgkJCQkJCQkJCWV4Y2ggc2V0Z2xvYmFsCgkJ
CQkJCQkJCX0gYmluZCBkZWYKCQkJCQkJCQllbmQgZGVmCgkJCQkJCS8kTm9uZQoJCQkJCQkJMSBk
aWN0IGR1cAoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJLyRCdWlsZEZvbnQge30gYmluZCBkZWYKCQkJ
CQkJCQllbmQgZGVmCgkJCQkJCWVuZCBkZWYKCQkJCQkvJE9ibGlxdWUgU2V0U3Vic3RpdHV0ZVN0
cmF0ZWd5CgkJCQkJLyRmaW5kZm9udEJ5RW51bQoJCQkJCQl7CgkJCQkJCWR1cCB0eXBlIC9zdHJp
bmd0eXBlIGVxIHsgY3ZuIH0gaWYKCQkJCQkJZHVwIC8kZm9udG5hbWUgZXhjaCBkZWYKCQkJCQkJ
JHNuYW1lIG51bGwgZXEKCQkJCQkJCXsgJHN0ciBjdnMgZHVwIGxlbmd0aCAkc2xlbiBzdWIgJHNs
ZW4gZ2V0aW50ZXJ2YWwgfQoJCQkJCQkJeyBwb3AgJHNuYW1lIH0KCQkJCQkJaWZlbHNlCgkJCQkJ
CSRmb250cGF0IGR1cCAwIChmb250cy8qKSBwdXRpbnRlcnZhbCBleGNoIDcgZXhjaCBwdXRpbnRl
cnZhbAoJCQkJCQkvJG1hdGNoIGZhbHNlIGRlZgoJCQkJCQkkU3Vic3RpdHV0ZUZvbnQgLyRkc3Rh
Y2sgY291bnRkaWN0c3RhY2sgYXJyYXkgZGljdHN0YWNrIHB1dAoJCQkJCQltYXJrCgkJCQkJCQl7
CgkJCQkJCQkkZm9udHBhdCAwICRzbGVuIDcgYWRkIGdldGludGVydmFsCgkJCQkJCQkJeyAvJG1h
dGNoIGV4Y2ggZGVmIGV4aXQgfQoJCQkJCQkJJHN0ciBmaWxlbmFtZWZvcmFsbAoJCQkJCQkJfQoJ
CQkJCQlzdG9wcGVkCgkJCQkJCQl7CgkJCQkJCQljbGVhcmRpY3RzdGFjawoJCQkJCQkJY3VycmVu
dGRpY3QKCQkJCQkJCXRydWUKCQkJCQkJCSRTdWJzdGl0dXRlRm9udCAvJGRzdGFjayBnZXQKCQkJ
CQkJCQl7CgkJCQkJCQkJZXhjaAoJCQkJCQkJCQl7CgkJCQkJCQkJCTEgaW5kZXggZXEKCQkJCQkJ
CQkJCXsgcG9wIGZhbHNlIH0KCQkJCQkJCQkJCXsgdHJ1ZSB9CgkJCQkJCQkJCWlmZWxzZQoJCQkJ
CQkJCQl9CgkJCQkJCQkJCXsgYmVnaW4gZmFsc2UgfQoJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCX0K
CQkJCQkJCWZvcmFsbAoJCQkJCQkJcG9wCgkJCQkJCQl9CgkJCQkJCWlmCgkJCQkJCWNsZWFydG9t
YXJrCgkJCQkJCS8kc2xlbiAwIGRlZgoJCQkJCQkkbWF0Y2ggZmFsc2UgbmUKCQkJCQkJCXsgJG1h
dGNoIChmb250cy8pIGFuY2hvcnNlYXJjaCBwb3AgcG9wIGN2biB9CgkJCQkJCQl7IC9Db3VyaWVy
IH0KCQkJCQkJaWZlbHNlCgkJCQkJCX0gYmluZCBkZWYKCQkJCQkvJFJPUyAxIGRpY3QgZHVwCgkJ
CQkJCWJlZ2luCgkJCQkJCS9BZG9iZSA0IGRpY3QgZHVwCgkJCQkJCQliZWdpbgoJCQkJCQkJL0ph
cGFuMSAgWyAvUnl1bWluLUxpZ2h0IC9IZWlzZWlNaW4tVzMKCQkJCQkJCQkJCSAgL0dvdGhpY0JC
Qi1NZWRpdW0gL0hlaXNlaUtha3VHby1XNQoJCQkJCQkJCQkJICAvSGVpc2VpTWFydUdvLVc0IC9K
dW4xMDEtTGlnaHQgXSBkZWYKCQkJCQkJCS9Lb3JlYTEgIFsgL0hZU015ZW9uZ0pvLU1lZGl1bSAv
SFlHb1RoaWMtTWVkaXVtIF0gZGVmCgkJCQkJCQkvR0IxCSAgWyAvU1RTb25nLUxpZ2h0IC9TVEhl
aXRpLVJlZ3VsYXIgXSBkZWYKCQkJCQkJCS9DTlMxCSBbIC9NS2FpLU1lZGl1bSAvTUhlaS1NZWRp
dW0gXSBkZWYKCQkJCQkJCWVuZCBkZWYKCQkJCQkJZW5kIGRlZgoJCQkJCS8kY21hcG5hbWUgbnVs
bCBkZWYKCQkJCQkvJGRlZXBjb3B5Zm9udAoJCQkJCQl7CgkJCQkJCWR1cCAvRm9udFR5cGUgZ2V0
IDAgZXEKCQkJCQkJCXsKCQkJCQkJCTEgZGljdCBkdXAgL0ZvbnROYW1lIC9jb3BpZWQgcHV0IGNv
cHlmb250CgkJCQkJCQkJYmVnaW4KCQkJCQkJCQkvRkRlcFZlY3RvciBGRGVwVmVjdG9yIGNvcHlh
cnJheQoJCQkJCQkJCTAgMSAyIGluZGV4IGxlbmd0aCAxIHN1YgoJCQkJCQkJCQl7CgkJCQkJCQkJ
CTIgY29weSBnZXQgJGRlZXBjb3B5Zm9udAoJCQkJCQkJCQlkdXAgL0ZvbnROYW1lIC9jb3BpZWQg
cHV0CgkJCQkJCQkJCS9jb3BpZWQgZXhjaCBkZWZpbmVmb250CgkJCQkJCQkJCTMgY29weSBwdXQg
cG9wIHBvcAoJCQkJCQkJCQl9CgkJCQkJCQkJZm9yCgkJCQkJCQkJZGVmCgkJCQkJCQkJY3VycmVu
dGRpY3QKCQkJCQkJCQllbmQKCQkJCQkJCX0KCQkJCQkJCXsgJFN0cmF0ZWdpZXMgLyRUeXBlM1Vu
ZGVycHJpbnQgZ2V0IGV4ZWMgfQoJCQkJCQlpZmVsc2UKCQkJCQkJfSBiaW5kIGRlZgoJCQkJCS8k
YnVpbGRmb250bmFtZQoJCQkJCQl7CgkJCQkJCWR1cCAvQ0lERm9udCBmaW5kcmVzb3VyY2UgL0NJ
RFN5c3RlbUluZm8gZ2V0CgkJCQkJCQliZWdpbgoJCQkJCQkJUmVnaXN0cnkgbGVuZ3RoIE9yZGVy
aW5nIGxlbmd0aCBTdXBwbGVtZW50IDggc3RyaW5nIGN2cwoJCQkJCQkJMyBjb3B5IGxlbmd0aCAy
IGFkZCBhZGQgYWRkIHN0cmluZwoJCQkJCQkJZHVwIDUgMSByb2xsIGR1cCAwIFJlZ2lzdHJ5IHB1
dGludGVydmFsCgkJCQkJCQlkdXAgNCBpbmRleCAoLSkgcHV0aW50ZXJ2YWwKCQkJCQkJCWR1cCA0
IGluZGV4IDEgYWRkIE9yZGVyaW5nIHB1dGludGVydmFsCgkJCQkJCQk0IDIgcm9sbCBhZGQgMSBh
ZGQgMiBjb3B5ICgtKSBwdXRpbnRlcnZhbAoJCQkJCQkJZW5kCgkJCQkJCTEgYWRkIDIgY29weSAw
IGV4Y2ggZ2V0aW50ZXJ2YWwgJGNtYXBuYW1lICRmb250cGF0IGN2cyBleGNoCgkJCQkJCWFuY2hv
cnNlYXJjaAoJCQkJCQkJeyBwb3AgcG9wIDMgMiByb2xsIHB1dGludGVydmFsIGN2biAvJGNtYXBu
YW1lIGV4Y2ggZGVmIH0KCQkJCQkJCXsgcG9wIHBvcCBwb3AgcG9wIHBvcCB9CgkJCQkJCWlmZWxz
ZQoJCQkJCQlsZW5ndGgKCQkJCQkJJHN0ciAxIGluZGV4ICgtKSBwdXRpbnRlcnZhbCAxIGFkZAoJ
CQkJCQkkc3RyIDEgaW5kZXggJGNtYXBuYW1lICRmb250cGF0IGN2cyBwdXRpbnRlcnZhbAoJCQkJ
CQkkY21hcG5hbWUgbGVuZ3RoIGFkZAoJCQkJCQkkc3RyIGV4Y2ggMCBleGNoIGdldGludGVydmFs
IGN2bgoJCQkJCQl9IGJpbmQgZGVmCgkJCQkJLyRmaW5kZm9udEJ5Uk9TCgkJCQkJCXsKCQkJCQkJ
LyRmb250bmFtZSBleGNoIGRlZgoJCQkJCQkkUk9TIFJlZ2lzdHJ5IDIgY29weSBrbm93bgoJCQkJ
CQkJewoJCQkJCQkJZ2V0IE9yZGVyaW5nIDIgY29weSBrbm93bgoJCQkJCQkJCXsgZ2V0IH0KCQkJ
CQkJCQl7IHBvcCBwb3AgW10gfQoJCQkJCQkJaWZlbHNlCgkJCQkJCQl9CgkJCQkJCQl7IHBvcCBw
b3AgW10gfQoJCQkJCQlpZmVsc2UKCQkJCQkJZmFsc2UgZXhjaAoJCQkJCQkJewoJCQkJCQkJZHVw
IC9DSURGb250IHJlc291cmNlc3RhdHVzCgkJCQkJCQkJewoJCQkJCQkJCXBvcCBwb3AKCQkJCQkJ
CQlzYXZlCgkJCQkJCQkJMSBpbmRleCAvQ0lERm9udCBmaW5kcmVzb3VyY2UKCQkJCQkJCQlkdXAg
L1dpZHRoc09ubHkga25vd24KCQkJCQkJCQkJeyBkdXAgL1dpZHRoc09ubHkgZ2V0IH0KCQkJCQkJ
CQkJeyBmYWxzZSB9CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJZXhjaCBwb3AKCQkJCQkJCQlleGNo
IHJlc3RvcmUKCQkJCQkJCQkJeyBwb3AgfQoJCQkJCQkJCQl7IGV4Y2ggcG9wIHRydWUgZXhpdCB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJfQoJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCWlmZWxzZQoJ
CQkJCQkJfQoJCQkJCQlmb3JhbGwKCQkJCQkJCXsgJHN0ciBjdnMgJGJ1aWxkZm9udG5hbWUgfQoJ
CQkJCQkJewoJCQkJCQkJZmFsc2UgKCopCgkJCQkJCQkJewoJCQkJCQkJCXNhdmUgZXhjaAoJCQkJ
CQkJCWR1cCAvQ0lERm9udCBmaW5kcmVzb3VyY2UKCQkJCQkJCQlkdXAgL1dpZHRoc09ubHkga25v
d24KCQkJCQkJCQkJeyBkdXAgL1dpZHRoc09ubHkgZ2V0IG5vdCB9CgkJCQkJCQkJCXsgdHJ1ZSB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJZXhjaCAvQ0lEU3lzdGVtSW5mbyBnZXQKCQkJCQkJCQlk
dXAgL1JlZ2lzdHJ5IGdldCBSZWdpc3RyeSBlcQoJCQkJCQkJCWV4Y2ggL09yZGVyaW5nIGdldCBP
cmRlcmluZyBlcSBhbmQgYW5kCgkJCQkJCQkJCXsgZXhjaCByZXN0b3JlIGV4Y2ggcG9wIHRydWUg
ZXhpdCB9CgkJCQkJCQkJCXsgcG9wIHJlc3RvcmUgfQoJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCX0K
CQkJCQkJCSRzdHIgL0NJREZvbnQgcmVzb3VyY2Vmb3JhbGwKCQkJCQkJCQl7ICRidWlsZGZvbnRu
YW1lIH0KCQkJCQkJCQl7ICRmb250bmFtZSAkZmluZGZvbnRCeUVudW0gfQoJCQkJCQkJaWZlbHNl
CgkJCQkJCQl9CgkJCQkJCWlmZWxzZQoJCQkJCQl9IGJpbmQgZGVmCgkJCQkJZW5kCgkJCQllbmQK
CQkJCWN1cnJlbnRkaWN0IC8kZXJyb3Iga25vd24gY3VycmVudGRpY3QgL2xhbmd1YWdlbGV2ZWwg
a25vd24gYW5kIGR1cAoJCQkJCXsgcG9wICRlcnJvciAvU3Vic3RpdHV0ZUZvbnQga25vd24gfQoJ
CQkJaWYKCQkJCWR1cAoJCQkJCXsgJGVycm9yIH0KCQkJCQl7IEFkb2JlX0Nvb2xUeXBlX0NvcmUg
fQoJCQkJaWZlbHNlCgkJCQliZWdpbgoJCQkJCXsKCQkJCQkvU3Vic3RpdHV0ZUZvbnQKCQkJCQkv
Q01hcCAvQ2F0ZWdvcnkgcmVzb3VyY2VzdGF0dXMKCQkJCQkJewoJCQkJCQlwb3AgcG9wCgkJCQkJ
CXsKCQkJCQkJJFN1YnN0aXR1dGVGb250CgkJCQkJCQliZWdpbgoJCQkJCQkJLyRzdWJzdGl0dXRl
Rm91bmQgdHJ1ZSBkZWYKCQkJCQkJCWR1cCBsZW5ndGggJHNsZW4gZ3QKCQkJCQkJCSRzbmFtZSBu
dWxsIG5lIG9yCgkJCQkJCQkkc2xlbiAwIGd0IGFuZAoJCQkJCQkJCXsKCQkJCQkJCQkkc25hbWUg
bnVsbCBlcQoJCQkJCQkJCQl7IGR1cCAkc3RyIGN2cyBkdXAgbGVuZ3RoICRzbGVuIHN1YiAkc2xl
biBnZXRpbnRlcnZhbCBjdm4gfQoJCQkJCQkJCQl7ICRzbmFtZSB9CgkJCQkJCQkJaWZlbHNlCgkJ
CQkJCQkJQWRvYmVfQ29vbFR5cGVfRGF0YSAvSW5WTUZvbnRzQnlDTWFwIGdldAoJCQkJCQkJCTEg
aW5kZXggMiBjb3B5IGtub3duCgkJCQkJCQkJCXsKCQkJCQkJCQkJZ2V0CgkJCQkJCQkJCWZhbHNl
IGV4Y2gKCQkJCQkJCQkJCXsKCQkJCQkJCQkJCXBvcAoJCQkJCQkJCQkJY3VycmVudGdsb2JhbAoJ
CQkJCQkJCQkJCXsKCQkJCQkJCQkJCQlHbG9iYWxGb250RGlyZWN0b3J5IDEgaW5kZXgga25vd24K
CQkJCQkJCQkJCQkJeyBleGNoIHBvcCB0cnVlIGV4aXQgfQoJCQkJCQkJCQkJCQl7IHBvcCB9CgkJ
CQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJCXsKCQkJCQkJCQkJCQlGb250
RGlyZWN0b3J5IDEgaW5kZXgga25vd24KCQkJCQkJCQkJCQkJeyBleGNoIHBvcCB0cnVlIGV4aXQg
fQoJCQkJCQkJCQkJCQl7CgkJCQkJCQkJCQkJCUdsb2JhbEZvbnREaXJlY3RvcnkgMSBpbmRleCBr
bm93bgoJCQkJCQkJCQkJCQkJeyBleGNoIHBvcCB0cnVlIGV4aXQgfQoJCQkJCQkJCQkJCQkJeyBw
b3AgfQoJCQkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJCWlmZWxzZQoJ
CQkJCQkJCQkJCX0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJfQoJCQkJCQkJCQlmb3JhbGwK
CQkJCQkJCQkJfQoJCQkJCQkJCQl7IHBvcCBwb3AgZmFsc2UgfQoJCQkJCQkJCWlmZWxzZQoJCQkJ
CQkJCQl7CgkJCQkJCQkJCWV4Y2ggcG9wIGV4Y2ggcG9wCgkJCQkJCQkJCX0KCQkJCQkJCQkJewoJ
CQkJCQkJCQlkdXAgL0NNYXAgcmVzb3VyY2VzdGF0dXMKCQkJCQkJCQkJCXsKCQkJCQkJCQkJCXBv
cCBwb3AKCQkJCQkJCQkJCWR1cCAvJGNtYXBuYW1lIGV4Y2ggZGVmCgkJCQkJCQkJCQkvQ01hcCBm
aW5kcmVzb3VyY2UgL0NJRFN5c3RlbUluZm8gZ2V0IHsgZGVmIH0gZm9yYWxsCgkJCQkJCQkJCQkk
ZmluZGZvbnRCeVJPUwoJCQkJCQkJCQkJfQoJCQkJCQkJCQkJewoJCQkJCQkJCQkJMTI4IHN0cmlu
ZyBjdnMKCQkJCQkJCQkJCWR1cCAoLSkgc2VhcmNoCgkJCQkJCQkJCQkJewoJCQkJCQkJCQkJCTMg
MSByb2xsIHNlYXJjaAoJCQkJCQkJCQkJCQl7CgkJCQkJCQkJCQkJCTMgMSByb2xsIHBvcAoJCQkJ
CQkJCQkJCQkJeyBkdXAgY3ZpIH0KCQkJCQkJCQkJCQkJc3RvcHBlZAoJCQkJCQkJCQkJCQkJeyBw
b3AgcG9wIHBvcCBwb3AgcG9wICRmaW5kZm9udEJ5RW51bSB9CgkJCQkJCQkJCQkJCQl7CgkJCQkJ
CQkJCQkJCQk0IDIgcm9sbCBwb3AgcG9wCgkJCQkJCQkJCQkJCQlleGNoIGxlbmd0aAoJCQkJCQkJ
CQkJCQkJZXhjaAoJCQkJCQkJCQkJCQkJMiBpbmRleCBsZW5ndGgKCQkJCQkJCQkJCQkJCTIgaW5k
ZXgKCQkJCQkJCQkJCQkJCXN1YgoJCQkJCQkJCQkJCQkJZXhjaCAxIHN1YiAtMSAwCgkJCQkJCQkJ
CQkJCQkJewoJCQkJCQkJCQkJCQkJCSRzdHIgY3ZzIGR1cCBsZW5ndGgKCQkJCQkJCQkJCQkJCQk0
IGluZGV4CgkJCQkJCQkJCQkJCQkJMAoJCQkJCQkJCQkJCQkJCTQgaW5kZXgKCQkJCQkJCQkJCQkJ
CQk0IDMgcm9sbCBhZGQKCQkJCQkJCQkJCQkJCQlnZXRpbnRlcnZhbAoJCQkJCQkJCQkJCQkJCWV4
Y2ggMSBpbmRleCBleGNoIDMgaW5kZXggZXhjaAoJCQkJCQkJCQkJCQkJCXB1dGludGVydmFsCgkJ
CQkJCQkJCQkJCQkJZHVwIC9DTWFwIHJlc291cmNlc3RhdHVzCgkJCQkJCQkJCQkJCQkJCXsKCQkJ
CQkJCQkJCQkJCQkJcG9wIHBvcAoJCQkJCQkJCQkJCQkJCQk0IDEgcm9sbCBwb3AgcG9wIHBvcAoJ
CQkJCQkJCQkJCQkJCQlkdXAgLyRjbWFwbmFtZSBleGNoIGRlZgoJCQkJCQkJCQkJCQkJCQkvQ01h
cCBmaW5kcmVzb3VyY2UgL0NJRFN5c3RlbUluZm8gZ2V0IHsgZGVmIH0gZm9yYWxsCgkJCQkJCQkJ
CQkJCQkJCSRmaW5kZm9udEJ5Uk9TCgkJCQkJCQkJCQkJCQkJCXRydWUgZXhpdAoJCQkJCQkJCQkJ
CQkJCQl9CgkJCQkJCQkJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJ
CQkJCQkJCQl9CgkJCQkJCQkJCQkJCQlmb3IKCQkJCQkJCQkJCQkJCWR1cCB0eXBlIC9ib29sZWFu
dHlwZSBlcQoJCQkJCQkJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCQkJCQkJCQl7IHBvcCBwb3AgcG9w
ICRmaW5kZm9udEJ5RW51bSB9CgkJCQkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJCQkJCX0KCQkJ
CQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCQkJCX0KCQkJCQkJCQkJCQkJeyBwb3AgcG9wIHBvcCAk
ZmluZGZvbnRCeUVudW0gfQoJCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJCX0KCQkJCQkJCQkJ
CQl7IHBvcCBwb3AgJGZpbmRmb250QnlFbnVtIH0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJ
fQoJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJfQoJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCX0KCQkJ
CQkJCQl7IC8vU3Vic3RpdHV0ZUZvbnQgZXhlYyB9CgkJCQkJCQlpZmVsc2UKCQkJCQkJCS8kc2xl
biAwIGRlZgoJCQkJCQkJZW5kCgkJCQkJCX0KCQkJCQkJfQoJCQkJCQl7CgkJCQkJCXsKCQkJCQkJ
JFN1YnN0aXR1dGVGb250CgkJCQkJCQliZWdpbgoJCQkJCQkJLyRzdWJzdGl0dXRlRm91bmQgdHJ1
ZSBkZWYKCQkJCQkJCWR1cCBsZW5ndGggJHNsZW4gZ3QKCQkJCQkJCSRzbmFtZSBudWxsIG5lIG9y
CgkJCQkJCQkkc2xlbiAwIGd0IGFuZAoJCQkJCQkJCXsgJGZpbmRmb250QnlFbnVtIH0KCQkJCQkJ
CQl7IC8vU3Vic3RpdHV0ZUZvbnQgZXhlYyB9CgkJCQkJCQlpZmVsc2UKCQkJCQkJCWVuZAoJCQkJ
CQl9CgkJCQkJCX0KCQkJCQlpZmVsc2UKCQkJCQliaW5kIHJlYWRvbmx5IGRlZgoJCQkJCUFkb2Jl
X0Nvb2xUeXBlX0NvcmUgL3NjZmluZGZvbnQgL3N5c3RlbWZpbmRmb250IGxvYWQgcHV0CgkJCQkJ
fQoJCQkJCXsKCQkJCQkvc2NmaW5kZm9udAoJCQkJCQl7CgkJCQkJCSRTdWJzdGl0dXRlRm9udAoJ
CQkJCQkJYmVnaW4KCQkJCQkJCWR1cCBzeXN0ZW1maW5kZm9udAoJCQkJCQkJZHVwIC9Gb250TmFt
ZSBrbm93bgoJCQkJCQkJCXsgZHVwIC9Gb250TmFtZSBnZXQgZHVwIDMgaW5kZXggbmUgfQoJCQkJ
CQkJCXsgL25vbmFtZSB0cnVlIH0KCQkJCQkJCWlmZWxzZQoJCQkJCQkJZHVwCgkJCQkJCQkJewoJ
CQkJCQkJCS8kb3JpZ2ZvbnRuYW1lZm91bmQgMiBpbmRleCBkZWYKCQkJCQkJCQkvJG9yaWdmb250
bmFtZSA0IGluZGV4IGRlZiAvJHN1YnN0aXR1dGVGb3VuZCB0cnVlIGRlZgoJCQkJCQkJCX0KCQkJ
CQkJCWlmCgkJCQkJCQlleGNoIHBvcAoJCQkJCQkJCXsKCQkJCQkJCQkkc2xlbiAwIGd0CgkJCQkJ
CQkJJHNuYW1lIG51bGwgbmUKCQkJCQkJCQkzIGluZGV4IGxlbmd0aCAkc2xlbiBndCBvciBhbmQK
CQkJCQkJCQkJewoJCQkJCQkJCQlwb3AgZHVwICRmaW5kZm9udEJ5RW51bSBmaW5kZm9udAoJCQkJ
CQkJCQlkdXAgbWF4bGVuZ3RoIDEgYWRkIGRpY3QKCQkJCQkJCQkJCWJlZ2luCgkJCQkJCQkJCQkJ
eyAxIGluZGV4IC9GSUQgZXEgeyBwb3AgcG9wIH0geyBkZWYgfSBpZmVsc2UgfQoJCQkJCQkJCQkJ
Zm9yYWxsCgkJCQkJCQkJCQljdXJyZW50ZGljdAoJCQkJCQkJCQkJZW5kCgkJCQkJCQkJCWRlZmlu
ZWZvbnQKCQkJCQkJCQkJZHVwIC9Gb250TmFtZSBrbm93biB7IGR1cCAvRm9udE5hbWUgZ2V0IH0g
eyBudWxsIH0gaWZlbHNlCgkJCQkJCQkJCSRvcmlnZm9udG5hbWVmb3VuZCBuZQoJCQkJCQkJCQkJ
ewoJCQkJCQkJCQkJJG9yaWdmb250bmFtZSAkc3RyIGN2cyBwcmludAoJCQkJCQkJCQkJKCBzdWJz
dGl0dXRpb24gcmV2aXNlZCwgdXNpbmcgKSBwcmludAoJCQkJCQkJCQkJZHVwIC9Gb250TmFtZSBr
bm93bgoJCQkJCQkJCQkJCXsgZHVwIC9Gb250TmFtZSBnZXQgfSB7ICh1bnNwZWNpZmllZCBmb250
KSB9CgkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJCSRzdHIgY3ZzIHByaW50ICguXG4pIHByaW50
CgkJCQkJCQkJCQl9CgkJCQkJCQkJCWlmCgkJCQkJCQkJCX0KCQkJCQkJCQkJeyBleGNoIHBvcCB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJfQoJCQkJCQkJCXsgZXhjaCBwb3AgfQoJCQkJCQkJaWZl
bHNlCgkJCQkJCQllbmQKCQkJCQkJfSBiaW5kIGRlZgoJCQkJCX0KCQkJCWlmZWxzZQoJCQkJZW5k
CgkJCWVuZAoJCUFkb2JlX0Nvb2xUeXBlX0NvcmVfRGVmaW5lZCBub3QKCQkJewoJCQlBZG9iZV9D
b29sVHlwZV9Db3JlIC9maW5kZm9udAoJCQkJewoJCQkJJFN1YnN0aXR1dGVGb250CgkJCQkJYmVn
aW4KCQkJCQkkZGVwdGggMCBlcQoJCQkJCQl7CgkJCQkJCS8kZm9udG5hbWUgMSBpbmRleCBkdXAg
dHlwZSAvc3RyaW5ndHlwZSBuZSB7ICRzdHIgY3ZzIH0gaWYgZGVmCgkJCQkJCS8kc3Vic3RpdHV0
ZUZvdW5kIGZhbHNlIGRlZgoJCQkJCQl9CgkJCQkJaWYKCQkJCQkvJGRlcHRoICRkZXB0aCAxIGFk
ZCBkZWYKCQkJCQllbmQKCQkJCXNjZmluZGZvbnQKCQkJCSRTdWJzdGl0dXRlRm9udAoJCQkJCWJl
Z2luCgkJCQkJLyRkZXB0aCAkZGVwdGggMSBzdWIgZGVmCgkJCQkJJHN1YnN0aXR1dGVGb3VuZCAk
ZGVwdGggMCBlcSBhbmQKCQkJCQkJewoJCQkJCQkkaW5WTUluZGV4IG51bGwgbmUKCQkJCQkJCXsg
ZHVwICRpblZNSW5kZXggJEFkZEluVk1Gb250IH0KCQkJCQkJaWYKCQkJCQkJJGRvU21hcnRTdWIK
CQkJCQkJCXsKCQkJCQkJCWN1cnJlbnRkaWN0IC8kU3RyYXRlZ3kga25vd24KCQkJCQkJCQl7ICRT
dHJhdGVneSAvJEJ1aWxkRm9udCBnZXQgZXhlYyB9CgkJCQkJCQlpZgoJCQkJCQkJfQoJCQkJCQlp
ZgoJCQkJCQl9CgkJCQkJaWYKCQkJCQllbmQKCQkJCX0gYmluZCBwdXQKCQkJfQoJCWlmCgkJfQoJ
aWYKCWVuZAovJEFkZEluVk1Gb250Cgl7CglleGNoIC9Gb250TmFtZSAyIGNvcHkga25vd24KCQl7
CgkJZ2V0CgkJMSBkaWN0IGR1cCBiZWdpbiBleGNoIDEgaW5kZXggZ2NoZWNrIGRlZiBlbmQgZXhj
aAoJCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0luVk1Gb250c0J5Q01hcCBnZXQgZXhjaAoJCSREaWN0
QWRkCgkJfQoJCXsgcG9wIHBvcCBwb3AgfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi8kRGljdEFkZAoJ
ewoJMiBjb3B5IGtub3duIG5vdAoJCXsgMiBjb3B5IDQgaW5kZXggbGVuZ3RoIGRpY3QgcHV0IH0K
CWlmCglMZXZlbDI/IG5vdAoJCXsKCQkyIGNvcHkgZ2V0IGR1cCBtYXhsZW5ndGggZXhjaCBsZW5n
dGggNCBpbmRleCBsZW5ndGggYWRkIGx0CgkJMiBjb3B5IGdldCBkdXAgbGVuZ3RoIDQgaW5kZXgg
bGVuZ3RoIGFkZCBleGNoIG1heGxlbmd0aCAxIGluZGV4IGx0CgkJCXsKCQkJMiBtdWwgZGljdAoJ
CQkJYmVnaW4KCQkJCTIgY29weSBnZXQgeyBmb3JhbGwgfSBkZWYKCQkJCTIgY29weSBjdXJyZW50
ZGljdCBwdXQKCQkJCWVuZAoJCQl9CgkJCXsgcG9wIH0KCQlpZmVsc2UKCQl9CglpZgoJZ2V0CgkJ
YmVnaW4KCQkJeyBkZWYgfQoJCWZvcmFsbAoJCWVuZAoJfSBiaW5kIGRlZgplbmQKZW5kCiUlRW5k
UmVzb3VyY2UKJSVCZWdpblJlc291cmNlOiBwcm9jc2V0IEFkb2JlX0Nvb2xUeXBlX1V0aWxpdHlf
TUFLRU9DRiAxLjE5IDAKJSVDb3B5cmlnaHQ6IENvcHlyaWdodCAxOTg3LTIwMDMgQWRvYmUgU3lz
dGVtcyBJbmNvcnBvcmF0ZWQuCiUlVmVyc2lvbjogMS4xOSAwCnN5c3RlbWRpY3QgL2xhbmd1YWdl
bGV2ZWwga25vd24gZHVwCgl7IGN1cnJlbnRnbG9iYWwgZmFsc2Ugc2V0Z2xvYmFsIH0KCXsgZmFs
c2UgfQppZmVsc2UKZXhjaAp1c2VyZGljdCAvQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAyIGNvcHkg
a25vd24KCXsgMiBjb3B5IGdldCBkdXAgbWF4bGVuZ3RoIDI1IGFkZCBkaWN0IGNvcHkgfQoJeyAy
NSBkaWN0IH0KaWZlbHNlIHB1dApBZG9iZV9Db29sVHlwZV9VdGlsaXR5CgliZWdpbgoJL2N0X0xl
dmVsMj8gZXhjaCBkZWYKCS9jdF9DbG9uZT8gMTE4MzYxNTg2OSBpbnRlcm5hbGRpY3QgZHVwCgkJ
CS9DQ1J1biBrbm93biBub3QKCQkJZXhjaCAvZUNDUnVuIGtub3duIG5vdAoJCQljdF9MZXZlbDI/
IGFuZCBvciBkZWYKY3RfTGV2ZWwyPwoJeyBnbG9iYWxkaWN0IGJlZ2luIGN1cnJlbnRnbG9iYWwg
dHJ1ZSBzZXRnbG9iYWwgfQppZgoJL2N0X0FkZFN0ZENJRE1hcAoJCWN0X0xldmVsMj8KCQkJeyB7
CgkJCSgoSGV4KSA1NyBTdGFydERhdGEKCQkJMDYxNSAxZTI3IDJjMzkgMWM2MCBkOGE4IGNjMzEg
ZmUyYiBmNmUwCgkJCTdhYTMgZTU0MSBlMjFjIDYwZDggYThjOSBjM2QwIDZkOWUgMWM2MAoJCQlk
OGE4IGM5YzIgMDJkNyA5YTFjIDYwZDggYTg0OSAxYzYwIGQ4YTgKCQkJY2MzNiA3NGY0IDExNDQg
YjEzYiA3NykgMCAoKSAvU3ViRmlsZURlY29kZSBmaWx0ZXIgY3Z4IGV4ZWMKCQkJfSB9CgkJCXsg
ewoJCQk8QkFCNDMxRUEwN0YyMDlFQjhDNDM0ODMxMTQ4MUQ5RDNGNzZFM0QxNTI0NjU1NTU3N0Q4
N0JDNTEwRUQ1NEUKCQkgMTE4QzM5Njk3RkE5RjZEQjU4MTI4RTYwRUI4QTEyRkEyNEQ3Q0REMkZB
OTREMjIxRkE5RUM4REEzRTVFNkExQwoJCQk0QUNFQ0M4QzJEMzlDNTRFN0M5NDYwMzFERDE1NkMz
QTZCNEEwOUFEMjlFMTg2N0E+IGVleGVjCgkJCX0gfQoJCWlmZWxzZSBiaW5kIGRlZgp1c2VyZGlj
dCAvY2lkX2V4dGVuc2lvbnMga25vd24KZHVwIHsgY2lkX2V4dGVuc2lvbnMgL2NpZF9VcGRhdGVE
QiBrbm93biBhbmQgfSBpZgoJIHsKCSBjaWRfZXh0ZW5zaW9ucwoJIGJlZ2luCgkgL2NpZF9HZXRD
SURTeXN0ZW1JbmZvCgkJIHsKCQkgMSBpbmRleCB0eXBlIC9zdHJpbmd0eXBlIGVxCgkJCSB7IGV4
Y2ggY3ZuIGV4Y2ggfQoJCSBpZgoJCSBjaWRfZXh0ZW5zaW9ucwoJCQkgYmVnaW4KCQkJIGR1cCBs
b2FkIDIgaW5kZXgga25vd24KCQkJCSB7CgkJCQkgMiBjb3B5CgkJCQkgY2lkX0dldFN0YXR1c0lu
Zm8KCQkJCSBkdXAgbnVsbCBuZQoJCQkJCSB7CgkJCQkJIDEgaW5kZXggbG9hZAoJCQkJCSAzIGlu
ZGV4IGdldAoJCQkJCSBkdXAgbnVsbCBlcQoJCQkJCQkgIHsgcG9wIHBvcCBjaWRfVXBkYXRlREIg
fQoJCQkJCQkgIHsKCQkJCQkJICBleGNoCgkJCQkJCSAgMSBpbmRleCAvQ3JlYXRlZCBnZXQgZXEK
CQkJCQkJCSAgeyBleGNoIHBvcCBleGNoIHBvcCB9CgkJCQkJCQkgIHsgcG9wIGNpZF9VcGRhdGVE
QiB9CgkJCQkJCSAgaWZlbHNlCgkJCQkJCSAgfQoJCQkJCSBpZmVsc2UKCQkJCQkgfQoJCQkJCSB7
IHBvcCBjaWRfVXBkYXRlREIgfQoJCQkJIGlmZWxzZQoJCQkJIH0KCQkJCSB7IGNpZF9VcGRhdGVE
QiB9CgkJCSBpZmVsc2UKCQkJIGVuZAoJCSB9IGJpbmQgZGVmCgkgZW5kCgkgfQppZgpjdF9MZXZl
bDI/Cgl7IGVuZCBzZXRnbG9iYWwgfQppZgoJL2N0X1VzZU5hdGl2ZUNhcGFiaWxpdHk/ICBzeXN0
ZW1kaWN0IC9jb21wb3NlZm9udCBrbm93biBkZWYKCS9jdF9NYWtlT0NGIDM1IGRpY3QgZGVmCgkv
Y3RfVmFycyAyNSBkaWN0IGRlZgoJL2N0X0dseXBoRGlyUHJvY3MgNiBkaWN0IGRlZgoJL2N0X0J1
aWxkQ2hhckRpY3QgMTUgZGljdCBkdXAKCQliZWdpbgoJCS9jaGFyY29kZSAyIHN0cmluZyBkZWYK
CQkvZHN0X3N0cmluZyAxNTAwIHN0cmluZyBkZWYKCQkvbnVsbHN0cmluZyAoKSBkZWYKCQkvdXNl
d2lkdGhzPyB0cnVlIGRlZgoJCWVuZCBkZWYKCWN0X0xldmVsMj8geyBzZXRnbG9iYWwgfSB7IHBv
cCB9IGlmZWxzZQoJY3RfR2x5cGhEaXJQcm9jcwoJCWJlZ2luCgkJL0dldEdseXBoRGlyZWN0b3J5
CgkJCXsKCQkJc3lzdGVtZGljdCAvbGFuZ3VhZ2VsZXZlbCBrbm93bgoJCQkJeyBwb3AgL0NJREZv
bnQgZmluZHJlc291cmNlIC9HbHlwaERpcmVjdG9yeSBnZXQgfQoJCQkJewoJCQkJMSBpbmRleCAv
Q0lERm9udCBmaW5kcmVzb3VyY2UgL0dseXBoRGlyZWN0b3J5CgkJCQlnZXQgZHVwIHR5cGUgL2Rp
Y3R0eXBlIGVxCgkJCQkJewoJCQkJCWR1cCBkdXAgbWF4bGVuZ3RoIGV4Y2ggbGVuZ3RoIHN1YiAy
IGluZGV4IGx0CgkJCQkJCXsKCQkJCQkJZHVwIGxlbmd0aCAyIGluZGV4IGFkZCBkaWN0IGNvcHkg
MiBpbmRleAoJCQkJCQkvQ0lERm9udCBmaW5kcmVzb3VyY2UvR2x5cGhEaXJlY3RvcnkgMiBpbmRl
eCBwdXQKCQkJCQkJfQoJCQkJCWlmCgkJCQkJfQoJCQkJaWYKCQkJCWV4Y2ggcG9wIGV4Y2ggcG9w
CgkJCQl9CgkJCWlmZWxzZQoJCQkrCgkJCX0gZGVmCgkJLysKCQkJewoJCQlzeXN0ZW1kaWN0IC9s
YW5ndWFnZWxldmVsIGtub3duCgkJCQl7CgkJCQljdXJyZW50Z2xvYmFsIGZhbHNlIHNldGdsb2Jh
bAoJCQkJMyBkaWN0IGJlZ2luCgkJCQkJL3ZtIGV4Y2ggZGVmCgkJCQl9CgkJCQl7IDEgZGljdCBi
ZWdpbiB9CgkJCWlmZWxzZQoJCQkvJCBleGNoIGRlZgoJCQlzeXN0ZW1kaWN0IC9sYW5ndWFnZWxl
dmVsIGtub3duCgkJCQl7CgkJCQl2bSBzZXRnbG9iYWwKCQkJCS9ndm0gY3VycmVudGdsb2JhbCBk
ZWYKCQkJCSQgZ2NoZWNrIHNldGdsb2JhbAoJCQkJfQoJCQlpZgoJCQk/IHsgJCBiZWdpbiB9IGlm
CgkJCX0gZGVmCgkJLz8geyAkIHR5cGUgL2RpY3R0eXBlIGVxIH0gZGVmCgkJL3wgewoJCQl1c2Vy
ZGljdCAvQWRvYmVfQ29vbFR5cGVfRGF0YSBrbm93bgoJCQkJewoJCQlBZG9iZV9Db29sVHlwZV9E
YXRhIC9BZGRXaWR0aHM/IGtub3duCgkJCQl7CgkJCQkgY3VycmVudGRpY3QgQWRvYmVfQ29vbFR5
cGVfRGF0YQoJCQkJCWJlZ2luCgkJCQkJICBiZWdpbgoJCQkJCQlBZGRXaWR0aHM/CgkJCQkJCQkJ
ewoJCQkJCQkJCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0NDIDMgaW5kZXggcHV0CgkJCQkJCQkJPyB7
IGRlZiB9IHsgJCAzIDEgcm9sbCBwdXQgfSBpZmVsc2UKCQkJCQkJCQlDQyBjaGFyY29kZSBleGNo
IDEgaW5kZXggMCAyIGluZGV4IDI1NiBpZGl2IHB1dAoJCQkJCQkJCTEgaW5kZXggZXhjaCAxIGV4
Y2ggMjU2IG1vZCBwdXQKCQkJCQkJCQlzdHJpbmd3aWR0aCAyIGFycmF5IGFzdG9yZQoJCQkJCQkJ
CWN1cnJlbnRmb250IC9XaWR0aHMgZ2V0IGV4Y2ggQ0MgZXhjaCBwdXQKCQkJCQkJCQl9CgkJCQkJ
CQkJeyA/IHsgZGVmIH0geyAkIDMgMSByb2xsIHB1dCB9IGlmZWxzZSB9CgkJCQkJCQlpZmVsc2UK
CQkJCQllbmQKCQkJCWVuZAoJCQkJfQoJCQkJeyA/IHsgZGVmIH0geyAkIDMgMSByb2xsIHB1dCB9
IGlmZWxzZSB9CWlmZWxzZQoJCQkJfQoJCQkJeyA/IHsgZGVmIH0geyAkIDMgMSByb2xsIHB1dCB9
IGlmZWxzZSB9CgkJCWlmZWxzZQoJCQl9IGRlZgoJCS8hCgkJCXsKCQkJPyB7IGVuZCB9IGlmCgkJ
CXN5c3RlbWRpY3QgL2xhbmd1YWdlbGV2ZWwga25vd24KCQkJCXsgZ3ZtIHNldGdsb2JhbCB9CgkJ
CWlmCgkJCWVuZAoJCQl9IGRlZgoJCS86IHsgc3RyaW5nIGN1cnJlbnRmaWxlIGV4Y2ggcmVhZHN0
cmluZyBwb3AgfSBleGVjdXRlb25seSBkZWYKCQllbmQKCWN0X01ha2VPQ0YKCQliZWdpbgoJCS9j
dF9jSGV4RW5jb2RpbmcKCQlbL2MwMC9jMDEvYzAyL2MwMy9jMDQvYzA1L2MwNi9jMDcvYzA4L2Mw
OS9jMEEvYzBCL2MwQy9jMEQvYzBFL2MwRi9jMTAvYzExL2MxMgoJCSAvYzEzL2MxNC9jMTUvYzE2
L2MxNy9jMTgvYzE5L2MxQS9jMUIvYzFDL2MxRC9jMUUvYzFGL2MyMC9jMjEvYzIyL2MyMy9jMjQv
YzI1CgkJIC9jMjYvYzI3L2MyOC9jMjkvYzJBL2MyQi9jMkMvYzJEL2MyRS9jMkYvYzMwL2MzMS9j
MzIvYzMzL2MzNC9jMzUvYzM2L2MzNy9jMzgKCQkgL2MzOS9jM0EvYzNCL2MzQy9jM0QvYzNFL2Mz
Ri9jNDAvYzQxL2M0Mi9jNDMvYzQ0L2M0NS9jNDYvYzQ3L2M0OC9jNDkvYzRBL2M0QgoJCSAvYzRD
L2M0RC9jNEUvYzRGL2M1MC9jNTEvYzUyL2M1My9jNTQvYzU1L2M1Ni9jNTcvYzU4L2M1OS9jNUEv
YzVCL2M1Qy9jNUQvYzVFCgkJIC9jNUYvYzYwL2M2MS9jNjIvYzYzL2M2NC9jNjUvYzY2L2M2Ny9j
NjgvYzY5L2M2QS9jNkIvYzZDL2M2RC9jNkUvYzZGL2M3MC9jNzEKCQkgL2M3Mi9jNzMvYzc0L2M3
NS9jNzYvYzc3L2M3OC9jNzkvYzdBL2M3Qi9jN0MvYzdEL2M3RS9jN0YvYzgwL2M4MS9jODIvYzgz
L2M4NAoJCSAvYzg1L2M4Ni9jODcvYzg4L2M4OS9jOEEvYzhCL2M4Qy9jOEQvYzhFL2M4Ri9jOTAv
YzkxL2M5Mi9jOTMvYzk0L2M5NS9jOTYvYzk3CgkJIC9jOTgvYzk5L2M5QS9jOUIvYzlDL2M5RC9j
OUUvYzlGL2NBMC9jQTEvY0EyL2NBMy9jQTQvY0E1L2NBNi9jQTcvY0E4L2NBOS9jQUEKCQkgL2NB
Qi9jQUMvY0FEL2NBRS9jQUYvY0IwL2NCMS9jQjIvY0IzL2NCNC9jQjUvY0I2L2NCNy9jQjgvY0I5
L2NCQS9jQkIvY0JDL2NCRAoJCSAvY0JFL2NCRi9jQzAvY0MxL2NDMi9jQzMvY0M0L2NDNS9jQzYv
Y0M3L2NDOC9jQzkvY0NBL2NDQi9jQ0MvY0NEL2NDRS9jQ0YvY0QwCgkJIC9jRDEvY0QyL2NEMy9j
RDQvY0Q1L2NENi9jRDcvY0Q4L2NEOS9jREEvY0RCL2NEQy9jREQvY0RFL2NERi9jRTAvY0UxL2NF
Mi9jRTMKCQkgL2NFNC9jRTUvY0U2L2NFNy9jRTgvY0U5L2NFQS9jRUIvY0VDL2NFRC9jRUUvY0VG
L2NGMC9jRjEvY0YyL2NGMy9jRjQvY0Y1L2NGNgoJCSAvY0Y3L2NGOC9jRjkvY0ZBL2NGQi9jRkMv
Y0ZEL2NGRS9jRkZdIGRlZgoJCS9jdF9DSURfU1RSX1NJWkUgODAwMCBkZWYKCQkvY3RfbWtvY2ZT
dHIxMDAgMTAwIHN0cmluZyBkZWYKCQkvY3RfZGVmYXVsdEZvbnRNdHggWy4wMDEgMCAwIC4wMDEg
MCAwXSBkZWYKCQkvY3RfMTAwME10eCBbMTAwMCAwIDAgMTAwMCAwIDBdIGRlZgoJCS9jdF9yYWlz
ZSB7ZXhjaCBjdnggZXhjaCBlcnJvcmRpY3QgZXhjaCBnZXQgZXhlYyBzdG9wfSBiaW5kIGRlZgoJ
CS9jdF9yZXJhaXNlCgkJCXsgY3Z4ICRlcnJvciAvZXJyb3JuYW1lIGdldCAoRXJyb3I6ICkgcHJp
bnQgZHVwICgJCQkJCQkgICkgY3ZzIHByaW50CgkJCQkJZXJyb3JkaWN0IGV4Y2ggZ2V0IGV4ZWMg
c3RvcAoJCQl9IGJpbmQgZGVmCgkJL2N0X2N2bnNpCgkJCXsKCQkJMSBpbmRleCBhZGQgMSBzdWIg
MSBleGNoIDAgNCAxIHJvbGwKCQkJCXsKCQkJCTIgaW5kZXggZXhjaCBnZXQKCQkJCWV4Y2ggOCBi
aXRzaGlmdAoJCQkJYWRkCgkJCQl9CgkJCWZvcgoJCQlleGNoIHBvcAoJCQl9IGJpbmQgZGVmCgkJ
L2N0X0dldEludGVydmFsCgkJCXsKCQkJQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfQnVpbGRD
aGFyRGljdCBnZXQKCQkJCWJlZ2luCgkJCQkvZHN0X2luZGV4IDAgZGVmCgkJCQlkdXAgZHN0X3N0
cmluZyBsZW5ndGggZ3QKCQkJCQl7IGR1cCBzdHJpbmcgL2RzdF9zdHJpbmcgZXhjaCBkZWYgfQoJ
CQkJaWYKCQkJCTEgaW5kZXggY3RfQ0lEX1NUUl9TSVpFIGlkaXYKCQkJCS9hcnJheUluZGV4IGV4
Y2ggZGVmCgkJCQkyIGluZGV4IGFycmF5SW5kZXggIGdldAoJCQkJMiBpbmRleAoJCQkJYXJyYXlJ
bmRleCBjdF9DSURfU1RSX1NJWkUgbXVsCgkJCQlzdWIKCQkJCQl7CgkJCQkJZHVwIDMgaW5kZXgg
YWRkIDIgaW5kZXggbGVuZ3RoIGxlCgkJCQkJCXsKCQkJCQkJMiBpbmRleCBnZXRpbnRlcnZhbAoJ
CQkJCQlkc3Rfc3RyaW5nICBkc3RfaW5kZXggMiBpbmRleCBwdXRpbnRlcnZhbAoJCQkJCQlsZW5n
dGggZHN0X2luZGV4IGFkZCAvZHN0X2luZGV4IGV4Y2ggZGVmCgkJCQkJCWV4aXQKCQkJCQkJfQoJ
CQkJCQl7CgkJCQkJCTEgaW5kZXggbGVuZ3RoIDEgaW5kZXggc3ViCgkJCQkJCWR1cCA0IDEgcm9s
bAoJCQkJCQlnZXRpbnRlcnZhbAoJCQkJCQlkc3Rfc3RyaW5nICBkc3RfaW5kZXggMiBpbmRleCBw
dXRpbnRlcnZhbAoJCQkJCQlwb3AgZHVwIGRzdF9pbmRleCBhZGQgL2RzdF9pbmRleCBleGNoIGRl
ZgoJCQkJCQlzdWIKCQkJCQkJL2FycmF5SW5kZXggYXJyYXlJbmRleCAxIGFkZCBkZWYKCQkJCQkJ
MiBpbmRleCBkdXAgbGVuZ3RoIGFycmF5SW5kZXggZ3QKCQkJCQkJCSAgeyBhcnJheUluZGV4IGdl
dCB9CgkJCQkJCQkgIHsKCQkJCQkJCSAgcG9wCgkJCQkJCQkgIGV4aXQKCQkJCQkJCSAgfQoJCQkJ
CQlpZmVsc2UKCQkJCQkJMAoJCQkJCQl9CgkJCQkJaWZlbHNlCgkJCQkJfQoJCQkJbG9vcAoJCQkJ
cG9wIHBvcCBwb3AKCQkJCWRzdF9zdHJpbmcgMCBkc3RfaW5kZXggZ2V0aW50ZXJ2YWwKCQkJCWVu
ZAoJCQl9IGJpbmQgZGVmCgkJY3RfTGV2ZWwyPwoJCQl7CgkJCS9jdF9yZXNvdXJjZXN0YXR1cwoJ
CQljdXJyZW50Z2xvYmFsIG1hcmsgdHJ1ZSBzZXRnbG9iYWwKCQkJCXsgL3Vua25vd25pbnN0YW5j
ZW5hbWUgL0NhdGVnb3J5IHJlc291cmNlc3RhdHVzIH0KCQkJc3RvcHBlZAoJCQkJeyBjbGVhcnRv
bWFyayBzZXRnbG9iYWwgdHJ1ZSB9CgkJCQl7IGNsZWFydG9tYXJrIGN1cnJlbnRnbG9iYWwgbm90
IGV4Y2ggc2V0Z2xvYmFsIH0KCQkJaWZlbHNlCgkJCQl7CgkJCQkJewoJCQkJCW1hcmsgMyAxIHJv
bGwgL0NhdGVnb3J5IGZpbmRyZXNvdXJjZQoJCQkJCQliZWdpbgoJCQkJCQljdF9WYXJzIC92bSBj
dXJyZW50Z2xvYmFsIHB1dAoJCQkJCQkoe1Jlc291cmNlU3RhdHVzfSBzdG9wcGVkKSAwICgpIC9T
dWJGaWxlRGVjb2RlIGZpbHRlciBjdnggZXhlYwoJCQkJCQkJeyBjbGVhcnRvbWFyayBmYWxzZSB9
CgkJCQkJCQl7IHsgMyAyIHJvbGwgcG9wIHRydWUgfSB7IGNsZWFydG9tYXJrIGZhbHNlIH0gaWZl
bHNlIH0KCQkJCQkJaWZlbHNlCgkJCQkJCWN0X1ZhcnMgL3ZtIGdldCBzZXRnbG9iYWwKCQkJCQkJ
ZW5kCgkJCQkJfQoJCQkJfQoJCQkJeyB7IHJlc291cmNlc3RhdHVzIH0gfQoJCQlpZmVsc2UgYmlu
ZCBkZWYKCQkJL0NJREZvbnQgL0NhdGVnb3J5IGN0X3Jlc291cmNlc3RhdHVzCgkJCQl7IHBvcCBw
b3AgfQoJCQkJewoJCQkJY3VycmVudGdsb2JhbCAgdHJ1ZSBzZXRnbG9iYWwKCQkJCS9HZW5lcmlj
IC9DYXRlZ29yeSBmaW5kcmVzb3VyY2UKCQkJCWR1cCBsZW5ndGggZGljdCBjb3B5CgkJCQlkdXAg
L0luc3RhbmNlVHlwZSAvZGljdHR5cGUgcHV0CgkJCQkvQ0lERm9udCBleGNoIC9DYXRlZ29yeSBk
ZWZpbmVyZXNvdXJjZSBwb3AKCQkJCXNldGdsb2JhbAoJCQkJfQoJCQlpZmVsc2UKCQkJY3RfVXNl
TmF0aXZlQ2FwYWJpbGl0eT8KCQkJCXsKCQkJCS9DSURJbml0IC9Qcm9jU2V0IGZpbmRyZXNvdXJj
ZSBiZWdpbgoJCQkJMTIgZGljdCBiZWdpbgoJCQkJYmVnaW5jbWFwCgkJCQkvQ0lEU3lzdGVtSW5m
byAzIGRpY3QgZHVwIGJlZ2luCgkJCQkgIC9SZWdpc3RyeSAoQWRvYmUpIGRlZgoJCQkJICAvT3Jk
ZXJpbmcgKElkZW50aXR5KSBkZWYKCQkJCSAgL1N1cHBsZW1lbnQgMCBkZWYKCQkJCWVuZCBkZWYK
CQkJCS9DTWFwTmFtZSAvSWRlbnRpdHktSCBkZWYKCQkJCS9DTWFwVmVyc2lvbiAxLjAwMCBkZWYK
CQkJCS9DTWFwVHlwZSAxIGRlZgoJCQkJMSBiZWdpbmNvZGVzcGFjZXJhbmdlCgkJCQk8MDAwMD4g
PEZGRkY+CgkJCQllbmRjb2Rlc3BhY2VyYW5nZQoJCQkJMSBiZWdpbmNpZHJhbmdlCgkJCQk8MDAw
MD4gPEZGRkY+IDAKCQkJCWVuZGNpZHJhbmdlCgkJCQllbmRjbWFwCgkJCQlDTWFwTmFtZSBjdXJy
ZW50ZGljdCAvQ01hcCBkZWZpbmVyZXNvdXJjZSBwb3AKCQkJCWVuZAoJCQkJZW5kCgkJCQl9CgkJ
CWlmCgkJCX0KCQkJewoJCQkvY3RfQ2F0ZWdvcnkgMiBkaWN0IGJlZ2luCgkJCS9DSURGb250ICAx
MCBkaWN0IGRlZgoJCQkvUHJvY1NldAkyIGRpY3QgZGVmCgkJCWN1cnJlbnRkaWN0CgkJCWVuZAoJ
CQlkZWYKCQkJL2RlZmluZXJlc291cmNlCgkJCQl7CgkJCQljdF9DYXRlZ29yeSAxIGluZGV4IDIg
Y29weSBrbm93bgoJCQkJCXsKCQkJCQlnZXQKCQkJCQlkdXAgZHVwIG1heGxlbmd0aCBleGNoIGxl
bmd0aCBlcQoJCQkJCQl7CgkJCQkJCWR1cCBsZW5ndGggMTAgYWRkIGRpY3QgY29weQoJCQkJCQlj
dF9DYXRlZ29yeSAyIGluZGV4IDIgaW5kZXggcHV0CgkJCQkJCX0KCQkJCQlpZgoJCQkJCTMgaW5k
ZXggMyBpbmRleCBwdXQKCQkJCQlwb3AgZXhjaCBwb3AKCQkJCQl9CgkJCQkJeyBwb3AgcG9wIC9k
ZWZpbmVyZXNvdXJjZSAvdW5kZWZpbmVkIGN0X3JhaXNlIH0KCQkJCWlmZWxzZQoJCQkJfSBiaW5k
IGRlZgoJCQkvZmluZHJlc291cmNlCgkJCQl7CgkJCQljdF9DYXRlZ29yeSAxIGluZGV4IDIgY29w
eSBrbm93bgoJCQkJCXsKCQkJCQlnZXQKCQkJCQkyIGluZGV4IDIgY29weSBrbm93bgoJCQkJCQl7
IGdldCAzIDEgcm9sbCBwb3AgcG9wfQoJCQkJCQl7IHBvcCBwb3AgL2ZpbmRyZXNvdXJjZSAvdW5k
ZWZpbmVkcmVzb3VyY2UgY3RfcmFpc2UgfQoJCQkJCWlmZWxzZQoJCQkJCX0KCQkJCQl7IHBvcCBw
b3AgL2ZpbmRyZXNvdXJjZSAvdW5kZWZpbmVkIGN0X3JhaXNlIH0KCQkJCWlmZWxzZQoJCQkJfSBi
aW5kIGRlZgoJCQkvcmVzb3VyY2VzdGF0dXMKCQkJCXsKCQkJCWN0X0NhdGVnb3J5IDEgaW5kZXgg
MiBjb3B5IGtub3duCgkJCQkJewoJCQkJCWdldAoJCQkJCTIgaW5kZXgga25vd24KCQkJCQlleGNo
IHBvcCBleGNoIHBvcAoJCQkJCQl7CgkJCQkJCTAgLTEgdHJ1ZQoJCQkJCQl9CgkJCQkJCXsKCQkJ
CQkJZmFsc2UKCQkJCQkJfQoJCQkJCWlmZWxzZQoJCQkJCX0KCQkJCQl7IHBvcCBwb3AgL2ZpbmRy
ZXNvdXJjZSAvdW5kZWZpbmVkIGN0X3JhaXNlIH0KCQkJCWlmZWxzZQoJCQkJfSBiaW5kIGRlZgoJ
CQkvY3RfcmVzb3VyY2VzdGF0dXMgL3Jlc291cmNlc3RhdHVzIGxvYWQgZGVmCgkJCX0KCQlpZmVs
c2UKCQkvY3RfQ0lESW5pdCAyIGRpY3QKCQkJYmVnaW4KCQkJL2N0X2NpZGZvbnRfc3RyZWFtX2lu
aXQKCQkJCXsKCQkJCQl7CgkJCQkJZHVwIChCaW5hcnkpIGVxCgkJCQkJCXsKCQkJCQkJcG9wCgkJ
CQkJCW51bGwKCQkJCQkJY3VycmVudGZpbGUKCQkJCQkJY3RfTGV2ZWwyPwoJCQkJCQkJewoJCQkJ
CQkJCXsgY2lkX0JZVEVfQ09VTlQgKCkgL1N1YkZpbGVEZWNvZGUgZmlsdGVyIH0KCQkJCQkJCXN0
b3BwZWQKCQkJCQkJCQl7IHBvcCBwb3AgcG9wIH0KCQkJCQkJCWlmCgkJCQkJCQl9CgkJCQkJCWlm
CgkJCQkJCS9yZWFkc3RyaW5nIGxvYWQKCQkJCQkJZXhpdAoJCQkJCQl9CgkJCQkJaWYKCQkJCQlk
dXAgKEhleCkgZXEKCQkJCQkJewoJCQkJCQlwb3AKCQkJCQkJY3VycmVudGZpbGUKCQkJCQkJY3Rf
TGV2ZWwyPwoJCQkJCQkJewoJCQkJCQkJCXsgbnVsbCBleGNoIC9BU0NJSUhleERlY29kZSBmaWx0
ZXIgL3JlYWRzdHJpbmcgfQoJCQkJCQkJc3RvcHBlZAoJCQkJCQkJCXsgcG9wIGV4Y2ggcG9wICg+
KSBleGNoIC9yZWFkaGV4c3RyaW5nIH0KCQkJCQkJCWlmCgkJCQkJCQl9CgkJCQkJCQl7ICg+KSBl
eGNoIC9yZWFkaGV4c3RyaW5nIH0KCQkJCQkJaWZlbHNlCgkJCQkJCWxvYWQKCQkJCQkJZXhpdAoJ
CQkJCQl9CgkJCQkJaWYKCQkJCQkvU3RhcnREYXRhIC90eXBlY2hlY2sgY3RfcmFpc2UKCQkJCQl9
CgkJCQlsb29wCgkJCQljaWRfQllURV9DT1VOVCBjdF9DSURfU1RSX1NJWkUgbGUKCQkJCQl7CgkJ
CQkJMiBjb3B5IGNpZF9CWVRFX0NPVU5UIHN0cmluZyBleGNoIGV4ZWMKCQkJCQlwb3AKCQkJCQkx
IGFycmF5IGR1cAoJCQkJCTMgLTEgcm9sbAoJCQkJCTAgZXhjaCBwdXQKCQkJCQl9CgkJCQkJewoJ
CQkJCWNpZF9CWVRFX0NPVU5UIGN0X0NJRF9TVFJfU0laRSBkaXYgY2VpbGluZyBjdmkKCQkJCQlk
dXAgYXJyYXkgZXhjaCAyIHN1YiAwIGV4Y2ggMSBleGNoCgkJCQkJCXsKCQkJCQkJMiBjb3B5CgkJ
CQkJCTUgaW5kZXgKCQkJCQkJY3RfQ0lEX1NUUl9TSVpFCgkJCQkJCXN0cmluZwoJCQkJCQk2IGlu
ZGV4IGV4ZWMKCQkJCQkJcG9wCgkJCQkJCXB1dAoJCQkJCQlwb3AKCQkJCQkJfQoJCQkJCWZvcgoJ
CQkJCTIgaW5kZXgKCQkJCQljaWRfQllURV9DT1VOVCBjdF9DSURfU1RSX1NJWkUgbW9kIHN0cmlu
ZwoJCQkJCTMgaW5kZXggZXhlYwoJCQkJCXBvcAoJCQkJCTEgaW5kZXggZXhjaAoJCQkJCTEgaW5k
ZXggbGVuZ3RoIDEgc3ViCgkJCQkJZXhjaCBwdXQKCQkJCQl9CgkJCQlpZmVsc2UKCQkJCWNpZF9D
SURGT05UIGV4Y2ggL0dseXBoRGF0YSBleGNoIHB1dAoJCQkJMiBpbmRleCBudWxsIGVxCgkJCQkJ
ewoJCQkJCXBvcCBwb3AgcG9wCgkJCQkJfQoJCQkJCXsKCQkJCQlwb3AgL3JlYWRzdHJpbmcgbG9h
ZAoJCQkJCTEgc3RyaW5nIGV4Y2gKCQkJCQkJewoJCQkJCQkzIGNvcHkgZXhlYwoJCQkJCQlwb3AK
CQkJCQkJZHVwIGxlbmd0aCAwIGVxCgkJCQkJCQl7CgkJCQkJCQlwb3AgcG9wIHBvcCBwb3AgcG9w
CgkJCQkJCQl0cnVlIGV4aXQKCQkJCQkJCX0KCQkJCQkJaWYKCQkJCQkJNCBpbmRleAoJCQkJCQll
cQoJCQkJCQkJewoJCQkJCQkJcG9wIHBvcCBwb3AgcG9wCgkJCQkJCQlmYWxzZSBleGl0CgkJCQkJ
CQl9CgkJCQkJCWlmCgkJCQkJCX0KCQkJCQlsb29wCgkJCQkJcG9wCgkJCQkJfQoJCQkJaWZlbHNl
CgkJCQl9IGJpbmQgZGVmCgkJCS9TdGFydERhdGEKCQkJCXsKCQkJCW1hcmsKCQkJCQl7CgkJCQkJ
Y3VycmVudGRpY3QKCQkJCQlkdXAgL0ZEQXJyYXkgZ2V0IDAgZ2V0IC9Gb250TWF0cml4IGdldAoJ
CQkJCTAgZ2V0IDAuMDAxIGVxCgkJCQkJCXsKCQkJCQkJZHVwIC9DRGV2UHJvYyBrbm93biBub3QK
CQkJCQkJCXsKCQkJCQkJCS9DRGV2UHJvYyAxMTgzNjE1ODY5IGludGVybmFsZGljdCAvc3RkQ0Rl
dlByb2MgMiBjb3B5IGtub3duCgkJCQkJCQkJeyBnZXQgfQoJCQkJCQkJCXsKCQkJCQkJCQlwb3Ag
cG9wCgkJCQkJCQkJeyBwb3AgcG9wIHBvcCBwb3AgcG9wIDAgLTEwMDAgNyBpbmRleCAyIGRpdiA4
ODAgfQoJCQkJCQkJCX0KCQkJCQkJCWlmZWxzZQoJCQkJCQkJZGVmCgkJCQkJCQl9CgkJCQkJCWlm
CgkJCQkJCX0KCQkJCQkJewoJCQkJCQkgL0NEZXZQcm9jCgkJCQkJCQkgewoJCQkJCQkJIHBvcCBw
b3AgcG9wIHBvcCBwb3AKCQkJCQkJCSAwCgkJCQkJCQkgMSBjaWRfdGVtcCAvY2lkX0NJREZPTlQg
Z2V0CgkJCQkJCQkgL0ZEQXJyYXkgZ2V0IDAgZ2V0CgkJCQkJCQkgL0ZvbnRNYXRyaXggZ2V0IDAg
Z2V0IGRpdgoJCQkJCQkJIDcgaW5kZXggMiBkaXYKCQkJCQkJCSAxIGluZGV4IDAuODggbXVsCgkJ
CQkJCQkgfSBkZWYKCQkJCQkJfQoJCQkJCWlmZWxzZQoJCQkJCS9jaWRfdGVtcCAxNSBkaWN0IGRl
ZgoJCQkJCWNpZF90ZW1wCgkJCQkJCWJlZ2luCgkJCQkJCS9jaWRfQ0lERk9OVCBleGNoIGRlZgoJ
CQkJCQkzIGNvcHkgcG9wCgkJCQkJCWR1cCAvY2lkX0JZVEVfQ09VTlQgZXhjaCBkZWYgMCBndAoJ
CQkJCQkJewoJCQkJCQkJY3RfY2lkZm9udF9zdHJlYW1faW5pdAoJCQkJCQkJRkRBcnJheQoJCQkJ
CQkJCXsKCQkJCQkJCQkvUHJpdmF0ZSBnZXQKCQkJCQkJCQlkdXAgL1N1YnJNYXBPZmZzZXQga25v
d24KCQkJCQkJCQkJewoJCQkJCQkJCQliZWdpbgoJCQkJCQkJCQkvU3VicnMgU3VickNvdW50IGFy
cmF5IGRlZgoJCQkJCQkJCQlTdWJycwoJCQkJCQkJCQlTdWJyTWFwT2Zmc2V0CgkJCQkJCQkJCVN1
YnJDb3VudAoJCQkJCQkJCQlTREJ5dGVzCgkJCQkJCQkJCWN0X0xldmVsMj8KCQkJCQkJCQkJCXsK
CQkJCQkJCQkJCWN1cnJlbnRkaWN0IGR1cCAvU3Vick1hcE9mZnNldCB1bmRlZgoJCQkJCQkJCQkJ
ZHVwIC9TdWJyQ291bnQgdW5kZWYKCQkJCQkJCQkJCS9TREJ5dGVzIHVuZGVmCgkJCQkJCQkJCQl9
CgkJCQkJCQkJCWlmCgkJCQkJCQkJCWVuZAoJCQkJCQkJCQkvY2lkX1NEX0JZVEVTIGV4Y2ggZGVm
CgkJCQkJCQkJCS9jaWRfU1VCUl9DT1VOVCBleGNoIGRlZgoJCQkJCQkJCQkvY2lkX1NVQlJfTUFQ
X09GRlNFVCBleGNoIGRlZgoJCQkJCQkJCQkvY2lkX1NVQlJTIGV4Y2ggZGVmCgkJCQkJCQkJCWNp
ZF9TVUJSX0NPVU5UIDAgZ3QKCQkJCQkJCQkJCXsKCQkJCQkJCQkJCUdseXBoRGF0YSBjaWRfU1VC
Ul9NQVBfT0ZGU0VUIGNpZF9TRF9CWVRFUyBjdF9HZXRJbnRlcnZhbAoJCQkJCQkJCQkJMCBjaWRf
U0RfQllURVMgY3RfY3Zuc2kKCQkJCQkJCQkJCTAgMSBjaWRfU1VCUl9DT1VOVCAxIHN1YgoJCQkJ
CQkJCQkJCXsKCQkJCQkJCQkJCQlleGNoIDEgaW5kZXgKCQkJCQkJCQkJCQkxIGFkZAoJCQkJCQkJ
CQkJCWNpZF9TRF9CWVRFUyBtdWwgY2lkX1NVQlJfTUFQX09GRlNFVCBhZGQKCQkJCQkJCQkJCQlH
bHlwaERhdGEgZXhjaCBjaWRfU0RfQllURVMgY3RfR2V0SW50ZXJ2YWwKCQkJCQkJCQkJCQkwIGNp
ZF9TRF9CWVRFUyBjdF9jdm5zaQoJCQkJCQkJCQkJCWNpZF9TVUJSUyA0IDIgcm9sbAoJCQkJCQkJ
CQkJCUdseXBoRGF0YSBleGNoCgkJCQkJCQkJCQkJNCBpbmRleAoJCQkJCQkJCQkJCTEgaW5kZXgK
CQkJCQkJCQkJCQlzdWIKCQkJCQkJCQkJCQljdF9HZXRJbnRlcnZhbAoJCQkJCQkJCQkJCWR1cCBs
ZW5ndGggc3RyaW5nIGNvcHkgcHV0CgkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJZm9yCgkJCQkJCQkJ
CQlwb3AKCQkJCQkJCQkJCX0KCQkJCQkJCQkJaWYKCQkJCQkJCQkJfQoJCQkJCQkJCQl7IHBvcCB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJfQoJCQkJCQkJZm9yYWxsCgkJCQkJCQl9CgkJCQkJCWlm
CgkJCQkJCWNsZWFydG9tYXJrIHBvcCBwb3AKCQkJCQkJZW5kCgkJCQkJQ0lERm9udE5hbWUgY3Vy
cmVudGRpY3QgL0NJREZvbnQgZGVmaW5lcmVzb3VyY2UgcG9wCgkJCQkJZW5kIGVuZAoJCQkJCX0K
CQkJCXN0b3BwZWQKCQkJCQl7IGNsZWFydG9tYXJrIC9TdGFydERhdGEgY3RfcmVyYWlzZSB9CgkJ
CQlpZgoJCQkJfSBiaW5kIGRlZgoJCQljdXJyZW50ZGljdAoJCQllbmQgZGVmCgkJL2N0X3NhdmVD
SURJbml0CgkJCXsKCQkJL0NJREluaXQgL1Byb2NTZXQgY3RfcmVzb3VyY2VzdGF0dXMKCQkJCXsg
dHJ1ZSB9CgkJCQl7IC9DSURJbml0QyAvUHJvY1NldCBjdF9yZXNvdXJjZXN0YXR1cyB9CgkJCWlm
ZWxzZQoJCQkJewoJCQkJcG9wIHBvcAoJCQkJL0NJREluaXQgL1Byb2NTZXQgZmluZHJlc291cmNl
CgkJCQljdF9Vc2VOYXRpdmVDYXBhYmlsaXR5PwoJCQkJCXsgcG9wIG51bGwgfQoJCQkJCXsgL0NJ
REluaXQgY3RfQ0lESW5pdCAvUHJvY1NldCBkZWZpbmVyZXNvdXJjZSBwb3AgfQoJCQkJaWZlbHNl
CgkJCQl9CgkJCQl7IC9DSURJbml0IGN0X0NJREluaXQgL1Byb2NTZXQgZGVmaW5lcmVzb3VyY2Ug
cG9wIG51bGwgfQoJCQlpZmVsc2UKCQkJY3RfVmFycyBleGNoIC9jdF9vbGRDSURJbml0IGV4Y2gg
cHV0CgkJCX0gYmluZCBkZWYKCQkvY3RfcmVzdG9yZUNJREluaXQKCQkJewoJCQljdF9WYXJzIC9j
dF9vbGRDSURJbml0IGdldCBkdXAgbnVsbCBuZQoJCQkJeyAvQ0lESW5pdCBleGNoIC9Qcm9jU2V0
IGRlZmluZXJlc291cmNlIHBvcCB9CgkJCQl7IHBvcCB9CgkJCWlmZWxzZQoJCQl9IGJpbmQgZGVm
CgkJL2N0X0J1aWxkQ2hhclNldFVwCgkJCXsKCQkJMSBpbmRleAoJCQkJYmVnaW4KCQkJCUNJREZv
bnQKCQkJCQliZWdpbgoJCQkJCUFkb2JlX0Nvb2xUeXBlX1V0aWxpdHkgL2N0X0J1aWxkQ2hhckRp
Y3QgZ2V0CgkJCQkJCWJlZ2luCgkJCQkJCS9jdF9kZkNoYXJDb2RlIGV4Y2ggZGVmCgkJCQkJCS9j
dF9kZkRpY3QgZXhjaCBkZWYKCQkJCQkJQ0lERmlyc3RCeXRlIGN0X2RmQ2hhckNvZGUgYWRkCgkJ
CQkJCWR1cCBDSURDb3VudCBnZQoJCQkJCQkJeyBwb3AgMCB9CgkJCQkJCWlmCgkJCQkJCS9jaWQg
ZXhjaCBkZWYKCQkJCQkJCXsKCQkJCQkJCUdseXBoRGlyZWN0b3J5IGNpZCAyIGNvcHkga25vd24K
CQkJCQkJCQl7IGdldCB9CgkJCQkJCQkJeyBwb3AgcG9wIG51bGxzdHJpbmcgfQoJCQkJCQkJaWZl
bHNlCgkJCQkJCQlkdXAgbGVuZ3RoIEZEQnl0ZXMgc3ViIDAgZ3QKCQkJCQkJCQl7CgkJCQkJCQkJ
ZHVwCgkJCQkJCQkJRkRCeXRlcyAwIG5lCgkJCQkJCQkJCXsgMCBGREJ5dGVzIGN0X2N2bnNpIH0K
CQkJCQkJCQkJeyBwb3AgMCB9CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJL2ZkSW5kZXggZXhjaCBk
ZWYKCQkJCQkJCQlkdXAgbGVuZ3RoIEZEQnl0ZXMgc3ViIEZEQnl0ZXMgZXhjaCBnZXRpbnRlcnZh
bAoJCQkJCQkJCS9jaGFyc3RyaW5nIGV4Y2ggZGVmCgkJCQkJCQkJZXhpdAoJCQkJCQkJCX0KCQkJ
CQkJCQl7CgkJCQkJCQkJcG9wCgkJCQkJCQkJY2lkIDAgZXEKCQkJCQkJCQkJeyAvY2hhcnN0cmlu
ZyBudWxsc3RyaW5nIGRlZiBleGl0IH0KCQkJCQkJCQlpZgoJCQkJCQkJCS9jaWQgMCBkZWYKCQkJ
CQkJCQl9CgkJCQkJCQlpZmVsc2UKCQkJCQkJCX0KCQkJCQkJbG9vcAoJCQl9IGRlZgoJCS9jdF9T
ZXRDYWNoZURldmljZQoJCQl7CgkJCTAgMCBtb3ZldG8KCQkJZHVwIHN0cmluZ3dpZHRoCgkJCTMg
LTEgcm9sbAoJCQl0cnVlIGNoYXJwYXRoCgkJCXBhdGhiYm94CgkJCTAgLTEwMDAKCQkJNyBpbmRl
eCAyIGRpdiA4ODAKCQkJc2V0Y2FjaGVkZXZpY2UyCgkJCTAgMCBtb3ZldG8KCQkJfSBkZWYKCQkv
Y3RfQ2xvbmVTZXRDYWNoZVByb2MKCQkJewoJCQkxIGVxCgkJCQl7CgkJCQlzdHJpbmd3aWR0aAoJ
CQkJcG9wIC0yIGRpdiAtODgwCgkJCQkwIC0xMDAwIHNldGNoYXJ3aWR0aAoJCQkJbW92ZXRvCgkJ
CQl9CgkJCQl7CgkJCQl1c2V3aWR0aHM/CgkJCQkJewoJCQkJCWN1cnJlbnRmb250IC9XaWR0aHMg
Z2V0IGNpZAoJCQkJCTIgY29weSBrbm93bgoJCQkJCQl7IGdldCBleGNoIHBvcCBhbG9hZCBwb3Ag
fQoJCQkJCQl7IHBvcCBwb3Agc3RyaW5nd2lkdGggfQoJCQkJCWlmZWxzZQoJCQkJCX0KCQkJCQl7
IHN0cmluZ3dpZHRoIH0KCQkJCWlmZWxzZQoJCQkJc2V0Y2hhcndpZHRoCgkJCQkwIDAgbW92ZXRv
CgkJCQl9CgkJCWlmZWxzZQoJCQl9IGRlZgoJCS9jdF9UeXBlM1Nob3dDaGFyU3RyaW5nCgkJCXsK
CQkJY3RfRkREaWN0IGZkSW5kZXggMiBjb3B5IGtub3duCgkJCQl7IGdldCB9CgkJCQl7CgkJCQlj
dXJyZW50Z2xvYmFsIDMgMSByb2xsCgkJCQkxIGluZGV4IGdjaGVjayBzZXRnbG9iYWwKCQkJCWN0
X1R5cGUxRm9udFRlbXBsYXRlIGR1cCBtYXhsZW5ndGggZGljdCBjb3B5CgkJCQkJYmVnaW4KCQkJ
CQlGREFycmF5IGZkSW5kZXggZ2V0CgkJCQkJZHVwIC9Gb250TWF0cml4IDIgY29weSBrbm93bgoJ
CQkJCQl7IGdldCB9CgkJCQkJCXsgcG9wIHBvcCBjdF9kZWZhdWx0Rm9udE10eCB9CgkJCQkJaWZl
bHNlCgkJCQkJL0ZvbnRNYXRyaXggZXhjaCBkdXAgbGVuZ3RoIGFycmF5IGNvcHkgZGVmCgkJCQkJ
L1ByaXZhdGUgZ2V0CgkJCQkJL1ByaXZhdGUgZXhjaCBkZWYKCQkJCQkvV2lkdGhzIHJvb3Rmb250
IC9XaWR0aHMgZ2V0IGRlZgoJCQkJCS9DaGFyU3RyaW5ncyAxIGRpY3QgZHVwIC8ubm90ZGVmCgkJ
CQkJCTxkODQxMjcyY2YxOGY1NGZjMTM+IGR1cCBsZW5ndGggc3RyaW5nIGNvcHkgcHV0IGRlZgoJ
CQkJCWN1cnJlbnRkaWN0CgkJCQkJZW5kCgkJCQkvY3RfVHlwZTFGb250IGV4Y2ggZGVmaW5lZm9u
dAoJCQkJZHVwIDUgMSByb2xsIHB1dAoJCQkJc2V0Z2xvYmFsCgkJCQl9CgkJCWlmZWxzZQoJCQlk
dXAgL0NoYXJTdHJpbmdzIGdldCAxIGluZGV4IC9FbmNvZGluZyBnZXQKCQkJY3RfZGZDaGFyQ29k
ZSBnZXQgY2hhcnN0cmluZyBwdXQKCQkJcm9vdGZvbnQgL1dNb2RlIDIgY29weSBrbm93bgoJCQkJ
eyBnZXQgfQoJCQkJeyBwb3AgcG9wIDAgfQoJCQlpZmVsc2UKCQkJZXhjaAoJCQkxMDAwIHNjYWxl
Zm9udCBzZXRmb250CgkJCWN0X3N0cjEgMCBjdF9kZkNoYXJDb2RlIHB1dAoJCQljdF9zdHIxIGV4
Y2ggY3RfZGZTZXRDYWNoZVByb2MKCQkJY3RfU3ludGhldGljQm9sZAoJCQkJewoJCQkJY3VycmVu
dHBvaW50CgkJCQljdF9zdHIxIHNob3cKCQkJCW5ld3BhdGgKCQkJCW1vdmV0bwoJCQkJY3Rfc3Ry
MSB0cnVlIGNoYXJwYXRoCgkJCQljdF9TdHJva2VXaWR0aCBzZXRsaW5ld2lkdGgKCQkJCXN0cm9r
ZQoJCQkJfQoJCQkJeyBjdF9zdHIxIHNob3cgfQoJCQlpZmVsc2UKCQkJfSBkZWYKCQkvY3RfVHlw
ZTRTaG93Q2hhclN0cmluZwoJCQl7CgkJCWN0X2RmRGljdCBjdF9kZkNoYXJDb2RlIGNoYXJzdHJp
bmcKCQkJRkRBcnJheSBmZEluZGV4IGdldAoJCQlkdXAgL0ZvbnRNYXRyaXggZ2V0IGR1cCBjdF9k
ZWZhdWx0Rm9udE10eCBjdF9tYXRyaXhlcSBub3QKCQkJCXsgY3RfMTAwME10eCBtYXRyaXggY29u
Y2F0bWF0cml4IGNvbmNhdCB9CgkJCQl7IHBvcCB9CgkJCWlmZWxzZQoJCQkvUHJpdmF0ZSBnZXQK
CQkJQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfTGV2ZWwyPyBnZXQgbm90CgkJCQl7CgkJCQlj
dF9kZkRpY3QgL1ByaXZhdGUKCQkJCTMgLTEgcm9sbAoJCQkJCXsgcHV0IH0KCQkJCTExODM2MTU4
NjkgaW50ZXJuYWxkaWN0IC9zdXBlcmV4ZWMgZ2V0IGV4ZWMKCQkJCX0KCQkJaWYKCQkJMTE4MzYx
NTg2OSBpbnRlcm5hbGRpY3QKCQkJQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfTGV2ZWwyPyBn
ZXQKCQkJCXsgMSBpbmRleCB9CgkJCQl7IDMgaW5kZXggL1ByaXZhdGUgZ2V0IG1hcmsgNiAxIHJv
bGwgfQoJCQlpZmVsc2UKCQkJZHVwIC9SdW5JbnQga25vd24KCQkJCXsgL1J1bkludCBnZXQgfQoJ
CQkJeyBwb3AgL0NDUnVuIH0KCQkJaWZlbHNlCgkJCWdldCBleGVjCgkJCUFkb2JlX0Nvb2xUeXBl
X1V0aWxpdHkgL2N0X0xldmVsMj8gZ2V0IG5vdAoJCQkJeyBjbGVhcnRvbWFyayB9CgkJCWlmCgkJ
CX0gYmluZCBkZWYKCQkvY3RfQnVpbGRDaGFySW5jcmVtZW50YWwKCQkJewoJCQkJewoJCQkJQWRv
YmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfTWFrZU9DRiBnZXQgYmVnaW4KCQkJCWN0X0J1aWxkQ2hh
clNldFVwCgkJCQljdF9TaG93Q2hhclN0cmluZwoJCQkJfQoJCQlzdG9wcGVkCgkJCQl7IHN0b3Ag
fQoJCQlpZgoJCQllbmQKCQkJZW5kCgkJCWVuZAoJCQllbmQKCQkJfSBiaW5kIGRlZgoJCS9CYXNl
Rm9udE5hbWVTdHIgKEJGMDApIGRlZgoJCS9jdF9UeXBlMUZvbnRUZW1wbGF0ZSAxNCBkaWN0CgkJ
CWJlZ2luCgkJCS9Gb250VHlwZSAxIGRlZgoJCQkvRm9udE1hdHJpeCAgWzAuMDAxIDAgMCAwLjAw
MSAwIDBdIGRlZgoJCQkvRm9udEJCb3ggIFstMjUwIC0yNTAgMTI1MCAxMjUwXSBkZWYKCQkJL0Vu
Y29kaW5nIGN0X2NIZXhFbmNvZGluZyBkZWYKCQkJL1BhaW50VHlwZSAwIGRlZgoJCQljdXJyZW50
ZGljdAoJCQllbmQgZGVmCgkJL0Jhc2VGb250VGVtcGxhdGUgMTEgZGljdAoJCQliZWdpbgoJCQkv
Rm9udE1hdHJpeCAgWzAuMDAxIDAgMCAwLjAwMSAwIDBdIGRlZgoJCQkvRm9udEJCb3ggIFstMjUw
IC0yNTAgMTI1MCAxMjUwXSBkZWYKCQkJL0VuY29kaW5nIGN0X2NIZXhFbmNvZGluZyBkZWYKCQkJ
L0J1aWxkQ2hhciAvY3RfQnVpbGRDaGFySW5jcmVtZW50YWwgbG9hZCBkZWYKCQkJY3RfQ2xvbmU/
CgkJCQl7CgkJCQkvRm9udFR5cGUgMyBkZWYKCQkJCS9jdF9TaG93Q2hhclN0cmluZyAvY3RfVHlw
ZTNTaG93Q2hhclN0cmluZyBsb2FkIGRlZgoJCQkJL2N0X2RmU2V0Q2FjaGVQcm9jIC9jdF9DbG9u
ZVNldENhY2hlUHJvYyBsb2FkIGRlZgoJCQkJL2N0X1N5bnRoZXRpY0JvbGQgZmFsc2UgZGVmCgkJ
CQkvY3RfU3Ryb2tlV2lkdGggMSBkZWYKCQkJCX0KCQkJCXsKCQkJCS9Gb250VHlwZSA0IGRlZgoJ
CQkJL1ByaXZhdGUgMSBkaWN0IGR1cCAvbGVuSVYgNCBwdXQgZGVmCgkJCQkvQ2hhclN0cmluZ3Mg
MSBkaWN0IGR1cCAvLm5vdGRlZiA8ZDg0MTI3MmNmMThmNTRmYzEzPiBwdXQgZGVmCgkJCQkvUGFp
bnRUeXBlIDAgZGVmCgkJCQkvY3RfU2hvd0NoYXJTdHJpbmcgL2N0X1R5cGU0U2hvd0NoYXJTdHJp
bmcgbG9hZCBkZWYKCQkJCX0KCQkJaWZlbHNlCgkJCS9jdF9zdHIxIDEgc3RyaW5nIGRlZgoJCQlj
dXJyZW50ZGljdAoJCQllbmQgZGVmCgkJL0Jhc2VGb250RGljdFNpemUgQmFzZUZvbnRUZW1wbGF0
ZSBsZW5ndGggNSBhZGQgZGVmCgkJL2N0X21hdHJpeGVxCgkJCXsKCQkJdHJ1ZSAwIDEgNQoJCQkJ
ewoJCQkJZHVwIDQgaW5kZXggZXhjaCBnZXQgZXhjaCAzIGluZGV4IGV4Y2ggZ2V0IGVxIGFuZAoJ
CQkJZHVwIG5vdAoJCQkJCXsgZXhpdCB9CgkJCQlpZgoJCQkJfQoJCQlmb3IKCQkJZXhjaCBwb3Ag
ZXhjaCBwb3AKCQkJfSBiaW5kIGRlZgoJCS9jdF9tYWtlb2NmCgkJCXsKCQkJMTUgZGljdAoJCQkJ
YmVnaW4KCQkJCWV4Y2ggL1dNb2RlIGV4Y2ggZGVmCgkJCQlleGNoIC9Gb250TmFtZSBleGNoIGRl
ZgoJCQkJL0ZvbnRUeXBlIDAgZGVmCgkJCQkvRk1hcFR5cGUgMiBkZWYKCQkJZHVwIC9Gb250TWF0
cml4IGtub3duCgkJCQl7IGR1cCAvRm9udE1hdHJpeCBnZXQgL0ZvbnRNYXRyaXggZXhjaCBkZWYg
fQoJCQkJeyAvRm9udE1hdHJpeCBtYXRyaXggZGVmIH0KCQkJaWZlbHNlCgkJCQkvYmZDb3VudCAx
IGluZGV4IC9DSURDb3VudCBnZXQgMjU2IGlkaXYgMSBhZGQKCQkJCQlkdXAgMjU2IGd0IHsgcG9w
IDI1Nn0gaWYgZGVmCgkJCQkvRW5jb2RpbmcKCQkJCQkyNTYgYXJyYXkgMCAxIGJmQ291bnQgMSBz
dWIgeyAyIGNvcHkgZHVwIHB1dCBwb3AgfSBmb3IKCQkJCQliZkNvdW50IDEgMjU1IHsgMiBjb3B5
IGJmQ291bnQgcHV0IHBvcCB9IGZvcgoJCQkJCWRlZgoJCQkJL0ZEZXBWZWN0b3IgYmZDb3VudCBk
dXAgMjU2IGx0IHsgMSBhZGQgfSBpZiBhcnJheSBkZWYKCQkJCUJhc2VGb250VGVtcGxhdGUgQmFz
ZUZvbnREaWN0U2l6ZSBkaWN0IGNvcHkKCQkJCQliZWdpbgoJCQkJCS9DSURGb250IGV4Y2ggZGVm
CgkJCQkJQ0lERm9udCAvRm9udEJCb3gga25vd24KCQkJCQkJeyBDSURGb250IC9Gb250QkJveCBn
ZXQgL0ZvbnRCQm94IGV4Y2ggZGVmIH0KCQkJCQlpZgoJCQkJCUNJREZvbnQgL0NEZXZQcm9jIGtu
b3duCgkJCQkJCXsgQ0lERm9udCAvQ0RldlByb2MgZ2V0IC9DRGV2UHJvYyBleGNoIGRlZiB9CgkJ
CQkJaWYKCQkJCQljdXJyZW50ZGljdAoJCQkJCWVuZAoJCQkJQmFzZUZvbnROYW1lU3RyIDMgKDAp
IHB1dGludGVydmFsCgkJCQkwIDEgYmZDb3VudCBkdXAgMjU2IGVxIHsgMSBzdWIgfSBpZgoJCQkJ
CXsKCQkJCQlGRGVwVmVjdG9yIGV4Y2gKCQkJCQkyIGluZGV4IEJhc2VGb250RGljdFNpemUgZGlj
dCBjb3B5CgkJCQkJCWJlZ2luCgkJCQkJCWR1cCAvQ0lERmlyc3RCeXRlIGV4Y2ggMjU2IG11bCBk
ZWYKCQkJCQkJRm9udFR5cGUgMyBlcQoJCQkJCQkJeyAvY3RfRkREaWN0IDIgZGljdCBkZWYgfQoJ
CQkJCQlpZgoJCQkJCQljdXJyZW50ZGljdAoJCQkJCQllbmQKCQkJCQkxIGluZGV4ICAxNgoJCQkJ
CUJhc2VGb250TmFtZVN0ciAgMiAyIGdldGludGVydmFsIGN2cnMgcG9wCgkJCQkJQmFzZUZvbnRO
YW1lU3RyIGV4Y2ggZGVmaW5lZm9udAoJCQkJCXB1dAoJCQkJCX0KCQkJCWZvcgoJCQkJY3RfQ2xv
bmU/CgkJCQkJeyAvV2lkdGhzIDEgaW5kZXggL0NJREZvbnQgZ2V0IC9HbHlwaERpcmVjdG9yeSBn
ZXQgbGVuZ3RoIGRpY3QgZGVmIH0KCQkJCWlmCgkJCQlGb250TmFtZQoJCQkJY3VycmVudGRpY3QK
CQkJCWVuZAoJCQlkZWZpbmVmb250CgkJCWN0X0Nsb25lPwoJCQkJewoJCQkJZ3NhdmUKCQkJCWR1
cCAxMDAwIHNjYWxlZm9udCBzZXRmb250CgkJCQljdF9CdWlsZENoYXJEaWN0CgkJCQkJYmVnaW4K
CQkJCQkvdXNld2lkdGhzPyBmYWxzZSBkZWYKCQkJCQljdXJyZW50Zm9udCAvV2lkdGhzIGdldAoJ
CQkJCQliZWdpbgoJCQkJCQlleGNoIC9DSURGb250IGdldCAvR2x5cGhEaXJlY3RvcnkgZ2V0CgkJ
CQkJCQl7CgkJCQkJCQlwb3AKCQkJCQkJCWR1cCBjaGFyY29kZSBleGNoIDEgaW5kZXggMCAyIGlu
ZGV4IDI1NiBpZGl2IHB1dAoJCQkJCQkJMSBpbmRleCBleGNoIDEgZXhjaCAyNTYgbW9kIHB1dAoJ
CQkJCQkJc3RyaW5nd2lkdGggMiBhcnJheSBhc3RvcmUgZGVmCgkJCQkJCQl9CgkJCQkJCWZvcmFs
bAoJCQkJCQllbmQKCQkJCQkvdXNld2lkdGhzPyB0cnVlIGRlZgoJCQkJCWVuZAoJCQkJZ3Jlc3Rv
cmUKCQkJCX0KCQkJCXsgZXhjaCBwb3AgfQoJCQlpZmVsc2UKCQkJfSBiaW5kIGRlZgoJCS9jdF9D
b21wb3NlRm9udAoJCQl7CgkJCWN0X1VzZU5hdGl2ZUNhcGFiaWxpdHk/CgkJCQl7CgkJCQkyIGlu
ZGV4IC9DTWFwIGN0X3Jlc291cmNlc3RhdHVzCgkJCQkJeyBwb3AgcG9wIGV4Y2ggcG9wIH0KCQkJ
CQl7CgkJCQkJL0NJREluaXQgL1Byb2NTZXQgZmluZHJlc291cmNlCgkJCQkJCWJlZ2luCgkJCQkJ
CTEyIGRpY3QKCQkJCQkJCWJlZ2luCgkJCQkJCQliZWdpbmNtYXAKCQkJCQkJCS9DTWFwTmFtZSAz
IGluZGV4IGRlZgoJCQkJCQkJL0NNYXBWZXJzaW9uIDEuMDAwIGRlZgoJCQkJCQkJL0NNYXBUeXBl
IDEgZGVmCgkJCQkJCQlleGNoIC9XTW9kZSBleGNoIGRlZgoJCQkJCQkJL0NJRFN5c3RlbUluZm8g
MyBkaWN0IGR1cAoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJL1JlZ2lzdHJ5IChBZG9iZSkgZGVmCgkJ
CQkJCQkJL09yZGVyaW5nCgkJCQkJCQkJQ01hcE5hbWUgY3RfbWtvY2ZTdHIxMDAgY3ZzCgkJCQkJ
CQkJKEFkb2JlLSkgc2VhcmNoCgkJCQkJCQkJCXsKCQkJCQkJCQkJcG9wIHBvcAoJCQkJCQkJCQko
LSkgc2VhcmNoCgkJCQkJCQkJCQl7CgkJCQkJCQkJCQlkdXAgbGVuZ3RoIHN0cmluZyBjb3B5CgkJ
CQkJCQkJCQlleGNoIHBvcCBleGNoIHBvcAoJCQkJCQkJCQkJfQoJCQkJCQkJCQkJeyBwb3AgKElk
ZW50aXR5KX0KCQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCX0KCQkJCQkJCQkJeyBwb3AgIChJZGVu
dGl0eSkgIH0KCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQlkZWYKCQkJCQkJCQkvU3VwcGxlbWVudCAw
IGRlZgoJCQkJCQkJCWVuZCBkZWYKCQkJCQkJCTEgYmVnaW5jb2Rlc3BhY2VyYW5nZQoJCQkJCQkJ
PDAwMDA+IDxGRkZGPgoJCQkJCQkJZW5kY29kZXNwYWNlcmFuZ2UKCQkJCQkJCTEgYmVnaW5jaWRy
YW5nZQoJCQkJCQkJPDAwMDA+IDxGRkZGPiAwCgkJCQkJCQllbmRjaWRyYW5nZQoJCQkJCQkJZW5k
Y21hcAoJCQkJCQkJQ01hcE5hbWUgY3VycmVudGRpY3QgL0NNYXAgZGVmaW5lcmVzb3VyY2UgcG9w
CgkJCQkJCQllbmQKCQkJCQkJZW5kCgkJCQkJfQoJCQkJaWZlbHNlCgkJCQljb21wb3NlZm9udAoJ
CQkJfQoJCQkJewoJCQkJMyAyIHJvbGwgcG9wCgkJCQkwIGdldCAvQ0lERm9udCBmaW5kcmVzb3Vy
Y2UKCQkJCWN0X21ha2VvY2YKCQkJCX0KCQkJaWZlbHNlCgkJCX0gYmluZCBkZWYKCQkvY3RfTWFr
ZUlkZW50aXR5CgkJCXsKCQkJY3RfVXNlTmF0aXZlQ2FwYWJpbGl0eT8KCQkJCXsKCQkJCTEgaW5k
ZXggL0NNYXAgY3RfcmVzb3VyY2VzdGF0dXMKCQkJCQl7IHBvcCBwb3AgfQoJCQkJCXsKCQkJCQkv
Q0lESW5pdCAvUHJvY1NldCBmaW5kcmVzb3VyY2UgYmVnaW4KCQkJCQkxMiBkaWN0IGJlZ2luCgkJ
CQkJYmVnaW5jbWFwCgkJCQkJL0NNYXBOYW1lIDIgaW5kZXggZGVmCgkJCQkJL0NNYXBWZXJzaW9u
IDEuMDAwIGRlZgoJCQkJCS9DTWFwVHlwZSAxIGRlZgoJCQkJCS9DSURTeXN0ZW1JbmZvIDMgZGlj
dCBkdXAKCQkJCQkJYmVnaW4KCQkJCQkJL1JlZ2lzdHJ5IChBZG9iZSkgZGVmCgkJCQkJCS9PcmRl
cmluZwoJCQkJCQlDTWFwTmFtZSBjdF9ta29jZlN0cjEwMCBjdnMKCQkJCQkJKEFkb2JlLSkgc2Vh
cmNoCgkJCQkJCQl7CgkJCQkJCQlwb3AgcG9wCgkJCQkJCQkoLSkgc2VhcmNoCgkJCQkJCQkJeyBk
dXAgbGVuZ3RoIHN0cmluZyBjb3B5IGV4Y2ggcG9wIGV4Y2ggcG9wIH0KCQkJCQkJCQl7IHBvcCAo
SWRlbnRpdHkpIH0KCQkJCQkJCWlmZWxzZQoJCQkJCQkJfQoJCQkJCQkJeyBwb3AgKElkZW50aXR5
KSB9CgkJCQkJCWlmZWxzZQoJCQkJCQlkZWYKCQkJCQkJL1N1cHBsZW1lbnQgMCBkZWYKCQkJCQkJ
ZW5kIGRlZgoJCQkJCTEgYmVnaW5jb2Rlc3BhY2VyYW5nZQoJCQkJCTwwMDAwPiA8RkZGRj4KCQkJ
CQllbmRjb2Rlc3BhY2VyYW5nZQoJCQkJCTEgYmVnaW5jaWRyYW5nZQoJCQkJCTwwMDAwPiA8RkZG
Rj4gMAoJCQkJCWVuZGNpZHJhbmdlCgkJCQkJZW5kY21hcAoJCQkJCUNNYXBOYW1lIGN1cnJlbnRk
aWN0IC9DTWFwIGRlZmluZXJlc291cmNlIHBvcAoJCQkJCWVuZAoJCQkJCWVuZAoJCQkJCX0KCQkJ
CWlmZWxzZQoJCQkJY29tcG9zZWZvbnQKCQkJCX0KCQkJCXsKCQkJCWV4Y2ggcG9wCgkJCQkwIGdl
dCAvQ0lERm9udCBmaW5kcmVzb3VyY2UKCQkJCWN0X21ha2VvY2YKCQkJCX0KCQkJaWZlbHNlCgkJ
CX0gYmluZCBkZWYKCQljdXJyZW50ZGljdCByZWFkb25seSBwb3AKCQllbmQKCWVuZAolJUVuZFJl
c291cmNlCiUlQmVnaW5SZXNvdXJjZTogcHJvY3NldCBBZG9iZV9Db29sVHlwZV9VdGlsaXR5X1Q0
MiAxLjAgMAolJUNvcHlyaWdodDogQ29weXJpZ2h0IDE5ODctMjAwMyBBZG9iZSBTeXN0ZW1zIElu
Y29ycG9yYXRlZC4KJSVWZXJzaW9uOiAxLjAgMAp1c2VyZGljdCAvY3RfVDQyRGljdCAxNSBkaWN0
IHB1dApjdF9UNDJEaWN0IGJlZ2luCi9JczIwMTU/CnsKICB2ZXJzaW9uCiAgY3ZpCiAgMjAxNQog
IGdlCn0gYmluZCBkZWYKL0FsbG9jR2x5cGhTdG9yYWdlCnsKICBJczIwMTU/CiAgewkKCQlwb3AK
ICB9IAogIHsgCgkJe3N0cmluZ30gZm9yYWxsCiAgfSBpZmVsc2UKfSBiaW5kIGRlZgovVHlwZTQy
RGljdEJlZ2luCnsKCTI1IGRpY3QgYmVnaW4KICAvRm9udE5hbWUgZXhjaCBkZWYKICAvQ2hhclN0
cmluZ3MgMjU2IGRpY3QgCgliZWdpbgoJCSAgLy5ub3RkZWYgMCBkZWYKCQkgIGN1cnJlbnRkaWN0
IAoJZW5kIGRlZgogIC9FbmNvZGluZyBleGNoIGRlZgogIC9QYWludFR5cGUgMCBkZWYKICAvRm9u
dFR5cGUgNDIgZGVmCiAgL0ZvbnRNYXRyaXggWzEgMCAwIDEgMCAwXSBkZWYKICA0IGFycmF5ICBh
c3RvcmUgY3Z4IC9Gb250QkJveCBleGNoIGRlZgogIC9zZm50cwp9IGJpbmQgZGVmCi9UeXBlNDJE
aWN0RW5kICAKewoJIGN1cnJlbnRkaWN0IGR1cCAvRm9udE5hbWUgZ2V0IGV4Y2ggZGVmaW5lZm9u
dCBlbmQKCWN0X1Q0MkRpY3QgZXhjaAoJZHVwIC9Gb250TmFtZSBnZXQgZXhjaCBwdXQKfSBiaW5k
IGRlZgovUkQge3N0cmluZyBjdXJyZW50ZmlsZSBleGNoIHJlYWRzdHJpbmcgcG9wfSBleGVjdXRl
b25seSBkZWYKL1ByZXBGb3IyMDE1CnsKCUlzMjAxNT8KCXsJCSAgCgkJIC9HbHlwaERpcmVjdG9y
eSAKCQkgMTYKCQkgZGljdCBkZWYKCQkgc2ZudHMgMCBnZXQKCQkgZHVwCgkJIDIgaW5kZXgKCQkg
KGdseXgpCgkJIHB1dGludGVydmFsCgkJIDIgaW5kZXggIAoJCSAobG9jeCkKCQkgcHV0aW50ZXJ2
YWwKCQkgcG9wCgkJIHBvcAoJfQoJewoJCSBwb3AKCQkgcG9wCgl9IGlmZWxzZQkJCQp9IGJpbmQg
ZGVmCi9BZGRUNDJDaGFyCnsKCUlzMjAxNT8KCXsKCQkvR2x5cGhEaXJlY3RvcnkgZ2V0IAoJCWJl
Z2luCgkJZGVmCgkJZW5kCgkJcG9wCgkJcG9wCgl9Cgl7CgkJL3NmbnRzIGdldAoJCTQgaW5kZXgK
CQlnZXQKCQkzIGluZGV4CgkgIDIgaW5kZXgKCQlwdXRpbnRlcnZhbAoJCXBvcAoJCXBvcAoJCXBv
cAoJCXBvcAoJfSBpZmVsc2UKfSBiaW5kIGRlZgplbmQKJSVFbmRSZXNvdXJjZQpBZG9iZV9Db29s
VHlwZV9Db3JlIGJlZ2luIC8kT2JsaXF1ZSBTZXRTdWJzdGl0dXRlU3RyYXRlZ3kgZW5kCiUlQmVn
aW5SZXNvdXJjZTogcHJvY3NldCBBZG9iZV9BR01fSW1hZ2UgMS4wIDAKJSVWZXJzaW9uOiAxLjAg
MAolJUNvcHlyaWdodDogQ29weXJpZ2h0IChDKSAyMDAwLTIwMDMgQWRvYmUgU3lzdGVtcywgSW5j
LiAgQWxsIFJpZ2h0cyBSZXNlcnZlZC4Kc3lzdGVtZGljdCAvc2V0cGFja2luZyBrbm93bgp7Cglj
dXJyZW50cGFja2luZwoJdHJ1ZSBzZXRwYWNraW5nCn0gaWYKdXNlcmRpY3QgL0Fkb2JlX0FHTV9J
bWFnZSA3NSBkaWN0IGR1cCBiZWdpbiBwdXQKL0Fkb2JlX0FHTV9JbWFnZV9JZCAvQWRvYmVfQUdN
X0ltYWdlXzEuMF8wIGRlZgovbmR7CgludWxsIGRlZgp9YmluZCBkZWYKL0FHTUlNR18maW1hZ2Ug
bmQKL0FHTUlNR18mY29sb3JpbWFnZSBuZAovQUdNSU1HXyZpbWFnZW1hc2sgbmQKL0FHTUlNR19t
YnVmICgpIGRlZgovQUdNSU1HX3lidWYgKCkgZGVmCi9BR01JTUdfa2J1ZiAoKSBkZWYKL0FHTUlN
R19jIDAgZGVmCi9BR01JTUdfbSAwIGRlZgovQUdNSU1HX3kgMCBkZWYKL0FHTUlNR19rIDAgZGVm
Ci9BR01JTUdfdG1wIG5kCi9BR01JTUdfaW1hZ2VzdHJpbmcwIG5kCi9BR01JTUdfaW1hZ2VzdHJp
bmcxIG5kCi9BR01JTUdfaW1hZ2VzdHJpbmcyIG5kCi9BR01JTUdfaW1hZ2VzdHJpbmczIG5kCi9B
R01JTUdfaW1hZ2VzdHJpbmc0IG5kCi9BR01JTUdfaW1hZ2VzdHJpbmc1IG5kCi9BR01JTUdfY250
IG5kCi9BR01JTUdfZnNhdmUgbmQKL0FHTUlNR19jb2xvckFyeSBuZAovQUdNSU1HX292ZXJyaWRl
IG5kCi9BR01JTUdfbmFtZSBuZAovQUdNSU1HX21hc2tTb3VyY2UgbmQKL2ludmVydF9pbWFnZV9z
YW1wbGVzIG5kCi9rbm9ja291dF9pbWFnZV9zYW1wbGVzCW5kCi9pbWcgbmQKL3NlcGltZyBuZAov
ZGV2bmltZyBuZAovaWR4aW1nIG5kCi9kb2Nfc2V0dXAKeyAKCUFkb2JlX0FHTV9Db3JlIGJlZ2lu
CglBZG9iZV9BR01fSW1hZ2UgYmVnaW4KCS9BR01JTUdfJmltYWdlIHN5c3RlbWRpY3QvaW1hZ2Ug
Z2V0IGRlZgoJL0FHTUlNR18maW1hZ2VtYXNrIHN5c3RlbWRpY3QvaW1hZ2VtYXNrIGdldCBkZWYK
CS9jb2xvcmltYWdlIHdoZXJlewoJCXBvcAoJCS9BR01JTUdfJmNvbG9yaW1hZ2UgL2NvbG9yaW1h
Z2UgbGRmCgl9aWYKCWVuZAoJZW5kCn1kZWYKL3BhZ2Vfc2V0dXAKewoJQWRvYmVfQUdNX0ltYWdl
IGJlZ2luCgkvQUdNSU1HX2NjaW1hZ2VfZXhpc3RzIHsvY3VzdG9tY29sb3JpbWFnZSB3aGVyZSAK
CQl7CgkJCXBvcAoJCQkvQWRvYmVfQUdNX09uSG9zdF9TZXBzIHdoZXJlCgkJCXsKCQkJcG9wIGZh
bHNlCgkJCX17CgkJCS9BZG9iZV9BR01fSW5SaXBfU2VwcyB3aGVyZQoJCQkJewoJCQkJcG9wIGZh
bHNlCgkJCQl9ewoJCQkJCXRydWUKCQkJCSB9aWZlbHNlCgkJCSB9aWZlbHNlCgkJCX17CgkJCWZh
bHNlCgkJfWlmZWxzZSAKCX1iZGYKCWxldmVsMnsKCQkvaW52ZXJ0X2ltYWdlX3NhbXBsZXMKCQl7
CgkJCUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfdG1wIERlY29kZSBsZW5ndGggZGRmCgkJCS9EZWNv
ZGUgWyBEZWNvZGUgMSBnZXQgRGVjb2RlIDAgZ2V0XSBkZWYKCQl9ZGVmCgkJL2tub2Nrb3V0X2lt
YWdlX3NhbXBsZXMKCQl7CgkJCU9wZXJhdG9yL2ltYWdlbWFzayBuZXsKCQkJCS9EZWNvZGUgWzEg
MV0gZGVmCgkJCX1pZgoJCX1kZWYKCX17CQoJCS9pbnZlcnRfaW1hZ2Vfc2FtcGxlcwoJCXsKCQkJ
ezEgZXhjaCBzdWJ9IGN1cnJlbnR0cmFuc2ZlciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCX1kZWYK
CQkva25vY2tvdXRfaW1hZ2Vfc2FtcGxlcwoJCXsKCQkJeyBwb3AgMSB9IGN1cnJlbnR0cmFuc2Zl
ciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCX1kZWYKCX1pZmVsc2UKCS9pbWcgL2ltYWdlb3JtYXNr
IGxkZgoJL3NlcGltZyAvc2VwX2ltYWdlb3JtYXNrIGxkZgoJL2Rldm5pbWcgL2Rldm5faW1hZ2Vv
cm1hc2sgbGRmCgkvaWR4aW1nIC9pbmRleGVkX2ltYWdlb3JtYXNrIGxkZgoJL19jdHlwZSA3IGRl
ZgoJY3VycmVudGRpY3R7CgkJZHVwIHhjaGVjayAxIGluZGV4IHR5cGUgZHVwIC9hcnJheXR5cGUg
ZXEgZXhjaCAvcGFja2VkYXJyYXl0eXBlIGVxIG9yIGFuZHsKCQkJYmluZAoJCX1pZgoJCWRlZgoJ
fWZvcmFsbAp9ZGVmCi9wYWdlX3RyYWlsZXIKewoJZW5kCn1kZWYKL2RvY190cmFpbGVyCnsKfWRl
ZgovaW1hZ2Vvcm1hc2tfc3lzCnsKCWJlZ2luCgkJc2F2ZSBtYXJrCgkJbGV2ZWwyewoJCQljdXJy
ZW50ZGljdAoJCQlPcGVyYXRvciAvaW1hZ2VtYXNrIGVxewoJCQkJQUdNSU1HXyZpbWFnZW1hc2sK
CQkJfXsKCQkJCXVzZV9tYXNrIHsKCQkJCQlsZXZlbDMge3Byb2Nlc3NfbWFza19MMyBBR01JTUdf
JmltYWdlfXttYXNrZWRfaW1hZ2Vfc2ltdWxhdGlvbn1pZmVsc2UKCQkJCX17CgkJCQkJQUdNSU1H
XyZpbWFnZQoJCQkJfWlmZWxzZQoJCQl9aWZlbHNlCgkJfXsKCQkJV2lkdGggSGVpZ2h0CgkJCU9w
ZXJhdG9yIC9pbWFnZW1hc2sgZXF7CgkJCQlEZWNvZGUgMCBnZXQgMSBlcSBEZWNvZGUgMSBnZXQg
MCBlcQlhbmQKCQkJCUltYWdlTWF0cml4IC9EYXRhU291cmNlIGxvYWQKCQkJCUFHTUlNR18maW1h
Z2VtYXNrCgkJCX17CgkJCQlCaXRzUGVyQ29tcG9uZW50IEltYWdlTWF0cml4IC9EYXRhU291cmNl
IGxvYWQKCQkJCUFHTUlNR18maW1hZ2UKCQkJfWlmZWxzZQoJCX1pZmVsc2UKCQljbGVhcnRvbWFy
ayByZXN0b3JlCgllbmQKfWRlZgovb3ZlcnByaW50X3BsYXRlCnsKCWN1cnJlbnRvdmVycHJpbnQg
ewoJCTAgZ2V0IGR1cCB0eXBlIC9uYW1ldHlwZSBlcSB7CgkJCWR1cCAvRGV2aWNlR3JheSBlcXsK
CQkJCXBvcCBBR01DT1JFX2JsYWNrX3BsYXRlIG5vdAoJCQl9ewoJCQkJL0RldmljZUNNWUsgZXF7
CgkJCQkJQUdNQ09SRV9pc19jbXlrX3NlcCBub3QKCQkJCX1pZgoJCQl9aWZlbHNlCgkJfXsKCQkJ
ZmFsc2UgZXhjaAoJCQl7CgkJCQkgQUdNT0hTX3NlcGluayBlcSBvcgoJCQl9IGZvcmFsbAoJCQlu
b3QKCQl9IGlmZWxzZQoJfXsKCQlwb3AgZmFsc2UKCX1pZmVsc2UKfWRlZgovcHJvY2Vzc19tYXNr
X0wzCnsKCWR1cCBiZWdpbgoJL0ltYWdlVHlwZSAxIGRlZgoJZW5kCgk0IGRpY3QgYmVnaW4KCQkv
RGF0YURpY3QgZXhjaCBkZWYKCQkvSW1hZ2VUeXBlIDMgZGVmCgkJL0ludGVybGVhdmVUeXBlIDMg
ZGVmCgkJL01hc2tEaWN0IDkgZGljdCBiZWdpbgoJCQkvSW1hZ2VUeXBlIDEgZGVmCgkJCS9XaWR0
aCBEYXRhRGljdCBkdXAgL01hc2tXaWR0aCBrbm93biB7L01hc2tXaWR0aH17L1dpZHRofSBpZmVs
c2UgZ2V0IGRlZgoJCQkvSGVpZ2h0IERhdGFEaWN0IGR1cCAvTWFza0hlaWdodCBrbm93biB7L01h
c2tIZWlnaHR9ey9IZWlnaHR9IGlmZWxzZSBnZXQgZGVmCgkJCS9JbWFnZU1hdHJpeCBbV2lkdGgg
MCAwIEhlaWdodCBuZWcgMCBIZWlnaHRdIGRlZgoJCQkvTkNvbXBvbmVudHMgMSBkZWYKCQkJL0Jp
dHNQZXJDb21wb25lbnQgMSBkZWYKCQkJL0RlY29kZSBbMCAxXSBkZWYKCQkJL0RhdGFTb3VyY2Ug
QUdNSU1HX21hc2tTb3VyY2UgZGVmCgkJY3VycmVudGRpY3QgZW5kIGRlZgoJY3VycmVudGRpY3Qg
ZW5kCn1kZWYKL3VzZV9tYXNrCnsKCWR1cCB0eXBlIC9kaWN0dHlwZSBlcQoJewoJCWR1cCAvTWFz
ayBrbm93bgl7CgkJCWR1cCAvTWFzayBnZXQgewoJCQkJbGV2ZWwzCgkJCQl7dHJ1ZX0KCQkJCXsK
CQkJCQlkdXAgL01hc2tXaWR0aCBrbm93biB7ZHVwIC9NYXNrV2lkdGggZ2V0IDEgaW5kZXggL1dp
ZHRoIGdldCBlcX17dHJ1ZX1pZmVsc2UgZXhjaAoJCQkJCWR1cCAvTWFza0hlaWdodCBrbm93biB7
ZHVwIC9NYXNrSGVpZ2h0IGdldCAxIGluZGV4IC9IZWlnaHQgZ2V0IGVxfXt0cnVlfWlmZWxzZQoJ
CQkJCTMgLTEgcm9sbCBhbmQKCQkJCX0gaWZlbHNlCgkJCX0KCQkJe2ZhbHNlfSBpZmVsc2UKCQl9
CgkJe2ZhbHNlfSBpZmVsc2UKCX0KCXtmYWxzZX0gaWZlbHNlCn1kZWYKL21ha2VfbGluZV9zb3Vy
Y2UKewoJYmVnaW4KCU11bHRpcGxlRGF0YVNvdXJjZXMgewoJCVsKCQlEZWNvZGUgbGVuZ3RoIDIg
ZGl2IGN2aSB7V2lkdGggc3RyaW5nfSByZXBlYXQKCQldCgl9ewoJCVdpZHRoIERlY29kZSBsZW5n
dGggMiBkaXYgbXVsIGN2aSBzdHJpbmcKCX1pZmVsc2UKCWVuZAp9ZGVmCi9kYXRhc291cmNlX3Rv
X3N0cgp7CglleGNoIGR1cCB0eXBlCglkdXAgL2ZpbGV0eXBlIGVxIHsKCQlwb3AgZXhjaCByZWFk
c3RyaW5nCgl9ewoJCS9hcnJheXR5cGUgZXEgewoJCQlleGVjIGV4Y2ggY29weQoJCX17CgkJCXBv
cAoJCX1pZmVsc2UKCX1pZmVsc2UKCXBvcAp9ZGVmCi9tYXNrZWRfaW1hZ2Vfc2ltdWxhdGlvbgp7
CgkzIGRpY3QgYmVnaW4KCWR1cCBtYWtlX2xpbmVfc291cmNlIC9saW5lX3NvdXJjZSB4ZGYKCS9t
YXNrX3NvdXJjZSBBR01JTUdfbWFza1NvdXJjZSAvTFpXRGVjb2RlIGZpbHRlciBkZWYKCWR1cCAv
V2lkdGggZ2V0IDggZGl2IGNlaWxpbmcgY3ZpIHN0cmluZyAvbWFza19zdHIgeGRmCgliZWdpbgoJ
Z3NhdmUKCTAgMSB0cmFuc2xhdGUgMSAtMSBIZWlnaHQgZGl2IHNjYWxlCgkxIDEgSGVpZ2h0IHsK
CQlwb3AKCQlnc2F2ZQoJCU11bHRpcGxlRGF0YVNvdXJjZXMgewoJCQkwIDEgRGF0YVNvdXJjZSBs
ZW5ndGggMSBzdWIgewoJCQkJZHVwIERhdGFTb3VyY2UgZXhjaCBnZXQKCQkJCWV4Y2ggbGluZV9z
b3VyY2UgZXhjaCBnZXQKCQkJCWRhdGFzb3VyY2VfdG9fc3RyCgkJCX0gZm9yCgkJfXsKCQkJRGF0
YVNvdXJjZSBsaW5lX3NvdXJjZSBkYXRhc291cmNlX3RvX3N0cgoJCX0gaWZlbHNlCgkJPDwKCQkJ
L1BhdHRlcm5UeXBlIDEKCQkJL1BhaW50UHJvYyBbCgkJCQkvcG9wIGN2eAoJCQkJPDwKCQkJCQkv
SW1hZ2VUeXBlIDEKCQkJCQkvV2lkdGggV2lkdGgKCQkJCQkvSGVpZ2h0IDEKCQkJCQkvSW1hZ2VN
YXRyaXggV2lkdGggMS4wIHN1YiAxIG1hdHJpeCBzY2FsZSAwLjUgMCBtYXRyaXggdHJhbnNsYXRl
IG1hdHJpeCBjb25jYXRtYXRyaXgKCQkJCQkvTXVsdGlwbGVEYXRhU291cmNlcyBNdWx0aXBsZURh
dGFTb3VyY2VzCgkJCQkJL0RhdGFTb3VyY2UgbGluZV9zb3VyY2UKCQkJCQkvQml0c1BlckNvbXBv
bmVudCBCaXRzUGVyQ29tcG9uZW50CgkJCQkJL0RlY29kZSBEZWNvZGUKCQkJCT4+CgkJCQkvaW1h
Z2UgY3Z4CgkJCV0gY3Z4CgkJCS9CQm94IFswIDAgV2lkdGggMV0KCQkJL1hTdGVwIFdpZHRoCgkJ
CS9ZU3RlcCAxCgkJCS9QYWludFR5cGUgMQoJCQkvVGlsaW5nVHlwZSAyCgkJPj4KCQltYXRyaXgg
bWFrZXBhdHRlcm4gc2V0X3BhdHRlcm4KCQk8PAoJCQkvSW1hZ2VUeXBlIDEKCQkJL1dpZHRoIFdp
ZHRoCgkJCS9IZWlnaHQgMQoJCQkvSW1hZ2VNYXRyaXggV2lkdGggMSBtYXRyaXggc2NhbGUKCQkJ
L011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UKCQkJL0RhdGFTb3VyY2UgbWFza19zb3VyY2UgbWFz
a19zdHIgcmVhZHN0cmluZyBwb3AKCQkJL0JpdHNQZXJDb21wb25lbnQgMQoJCQkvRGVjb2RlIFsw
IDFdCgkJPj4KCQlpbWFnZW1hc2sKCQlncmVzdG9yZQoJCTAgMSB0cmFuc2xhdGUKCX0gZm9yCgln
cmVzdG9yZQoJZW5kCgllbmQKfWRlZgovaW1hZ2Vvcm1hc2sKewoJYmVnaW4KCQlTa2lwSW1hZ2VQ
cm9jIHsKCQkJY3VycmVudGRpY3QgY29uc3VtZWltYWdlZGF0YQoJCX0KCQl7CgkJCXNhdmUgbWFy
awoJCQlsZXZlbDIgQUdNQ09SRV9ob3N0X3NlcCBub3QgYW5kewoJCQkJY3VycmVudGRpY3QKCQkJ
CU9wZXJhdG9yIC9pbWFnZW1hc2sgZXEgRGV2aWNlTl9QUzIgbm90IGFuZCB7CgkJCQkJaW1hZ2Vt
YXNrCgkJCQl9ewoJCQkJCUFHTUNPUkVfaW5fcmlwX3NlcCBjdXJyZW50b3ZlcnByaW50IGFuZCBj
dXJyZW50Y29sb3JzcGFjZSAwIGdldCAvRGV2aWNlR3JheSBlcSBhbmR7CgkJCQkJCVsvU2VwYXJh
dGlvbiAvQmxhY2sgL0RldmljZUdyYXkge31dIHNldGNvbG9yc3BhY2UKCQkJCQkJL0RlY29kZSBb
IERlY29kZSAxIGdldCBEZWNvZGUgMCBnZXQgXSBkZWYKCQkJCQl9aWYKCQkJCQl1c2VfbWFzayB7
CgkJCQkJCWxldmVsMyB7cHJvY2Vzc19tYXNrX0wzIGltYWdlfXttYXNrZWRfaW1hZ2Vfc2ltdWxh
dGlvbn1pZmVsc2UKCQkJCQl9ewoJCQkJCQlEZXZpY2VOX05vbmVOYW1lIERldmljZU5fUFMyIElu
ZGV4ZWRfRGV2aWNlTiBsZXZlbDMgbm90IGFuZCBvciBvciBBR01DT1JFX2luX3JpcF9zZXAgYW5k
IAoJCQkJCQl7CgkJCQkJCQlOYW1lcyBjb252ZXJ0X3RvX3Byb2Nlc3Mgbm90IHsKCQkJCQkJCQky
IGRpY3QgYmVnaW4KCQkJCQkJCQkvaW1hZ2VEaWN0IHhkZgoJCQkJCQkJCS9uYW1lc19pbmRleCAw
IGRlZgoJCQkJCQkJCWdzYXZlCgkJCQkJCQkJaW1hZ2VEaWN0IHdyaXRlX2ltYWdlX2ZpbGUgewoJ
CQkJCQkJCQlOYW1lcyB7CgkJCQkJCQkJCQlkdXAgKE5vbmUpIG5lIHsKCQkJCQkJCQkJCQlbL1Nl
cGFyYXRpb24gMyAtMSByb2xsIC9EZXZpY2VHcmF5IHsxIGV4Y2ggc3VifV0gc2V0Y29sb3JzcGFj
ZQoJCQkJCQkJCQkJCU9wZXJhdG9yIGltYWdlRGljdCByZWFkX2ltYWdlX2ZpbGUKCQkJCQkJCQkJ
CQluYW1lc19pbmRleCAwIGVxIHt0cnVlIHNldG92ZXJwcmludH0gaWYKCQkJCQkJCQkJCQkvbmFt
ZXNfaW5kZXggbmFtZXNfaW5kZXggMSBhZGQgZGVmCgkJCQkJCQkJCQl9ewoJCQkJCQkJCQkJCXBv
cAoJCQkJCQkJCQkJfSBpZmVsc2UKCQkJCQkJCQkJfSBmb3JhbGwKCQkJCQkJCQkJY2xvc2VfaW1h
Z2VfZmlsZQoJCQkJCQkJCX0gaWYKCQkJCQkJCQlncmVzdG9yZQoJCQkJCQkJCWVuZAoJCQkJCQkJ
fXsKCQkJCQkJCQlPcGVyYXRvciAvaW1hZ2VtYXNrIGVxIHsKCQkJCQkJCQkJaW1hZ2VtYXNrCgkJ
CQkJCQkJfXsKCQkJCQkJCQkJaW1hZ2UKCQkJCQkJCQl9IGlmZWxzZQoJCQkJCQkJfSBpZmVsc2UK
CQkJCQkJfXsKCQkJCQkJCU9wZXJhdG9yIC9pbWFnZW1hc2sgZXEgewoJCQkJCQkJCWltYWdlbWFz
awoJCQkJCQkJfXsKCQkJCQkJCQlpbWFnZQoJCQkJCQkJfSBpZmVsc2UKCQkJCQkJfSBpZmVsc2UK
CQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCgkJCX17CgkJCQlXaWR0aCBIZWlnaHQKCQkJCU9wZXJh
dG9yIC9pbWFnZW1hc2sgZXF7CgkJCQkJRGVjb2RlIDAgZ2V0IDEgZXEgRGVjb2RlIDEgZ2V0IDAg
ZXEJYW5kCgkJCQkJSW1hZ2VNYXRyaXggL0RhdGFTb3VyY2UgbG9hZAoJCQkJCS9BZG9iZV9BR01f
T25Ib3N0X1NlcHMgd2hlcmUgewoJCQkJCQlwb3AgaW1hZ2VtYXNrCgkJCQkJfXsKCQkJCQkJY3Vy
cmVudGdyYXkgMSBuZXsKCQkJCQkJCWN1cnJlbnRkaWN0IGltYWdlb3JtYXNrX3N5cwoJCQkJCQl9
ewoJCQkJCQkJY3VycmVudG92ZXJwcmludCBub3R7CgkJCQkJCQkJMSBBR01DT1JFXyZzZXRncmF5
CgkJCQkJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2tfc3lzCgkJCQkJCQl9ewoJCQkJCQkJCWN1
cnJlbnRkaWN0IGlnbm9yZWltYWdlZGF0YQoJCQkJCQkJfWlmZWxzZQkJCQkgCQkKCQkJCQkJfWlm
ZWxzZQoJCQkJCX1pZmVsc2UKCQkJCX17CgkJCQkJQml0c1BlckNvbXBvbmVudCBJbWFnZU1hdHJp
eCAKCQkJCQlNdWx0aXBsZURhdGFTb3VyY2VzewoJCQkJCQkwIDEgTkNvbXBvbmVudHMgMSBzdWJ7
CgkJCQkJCQlEYXRhU291cmNlIGV4Y2ggZ2V0CgkJCQkJCX1mb3IKCQkJCQl9ewoJCQkJCQkvRGF0
YVNvdXJjZSBsb2FkCgkJCQkJfWlmZWxzZQoJCQkJCU9wZXJhdG9yIC9jb2xvcmltYWdlIGVxewoJ
CQkJCQlBR01DT1JFX2hvc3Rfc2VwewoJCQkJCQkJTXVsdGlwbGVEYXRhU291cmNlcyBsZXZlbDIg
b3IgTkNvbXBvbmVudHMgNCBlcSBhbmR7CgkJCQkJCQkJQUdNQ09SRV9pc19jbXlrX3NlcHsKCQkJ
CQkJCQkJTXVsdGlwbGVEYXRhU291cmNlc3sKCQkJCQkJCQkJCS9EYXRhU291cmNlIFsKCQkJCQkJ
CQkJCQlEYXRhU291cmNlIDAgZ2V0IC9leGVjIGN2eAoJCQkJCQkJCQkJCURhdGFTb3VyY2UgMSBn
ZXQgL2V4ZWMgY3Z4CgkJCQkJCQkJCQkJRGF0YVNvdXJjZSAyIGdldCAvZXhlYyBjdngKCQkJCQkJ
CQkJCQlEYXRhU291cmNlIDMgZ2V0IC9leGVjIGN2eAoJCQkJCQkJCQkJCS9BR01DT1JFX2dldF9p
bmtfZGF0YSBjdngKCQkJCQkJCQkJCV0gY3Z4IGRlZgoJCQkJCQkJCQl9ewoJCQkJCQkJCQkJL0Rh
dGFTb3VyY2UgCgkJCQkJCQkJCQlXaWR0aCBCaXRzUGVyQ29tcG9uZW50IG11bCA3IGFkZCA4IGlk
aXYgSGVpZ2h0IG11bCA0IG11bCAKCQkJCQkJCQkJCS9EYXRhU291cmNlIGxvYWQKCQkJCQkJCQkJ
CWZpbHRlcl9jbXlrIDAgKCkgL1N1YkZpbGVEZWNvZGUgZmlsdGVyIGRlZgoJCQkJCQkJCQl9aWZl
bHNlCgkJCQkJCQkJCS9EZWNvZGUgWyBEZWNvZGUgMCBnZXQgRGVjb2RlIDEgZ2V0IF0gZGVmCgkJ
CQkJCQkJCS9NdWx0aXBsZURhdGFTb3VyY2VzIGZhbHNlIGRlZgoJCQkJCQkJCQkvTkNvbXBvbmVu
dHMgMSBkZWYKCQkJCQkJCQkJL09wZXJhdG9yIC9pbWFnZSBkZWYKCQkJCQkJCQkJaW52ZXJ0X2lt
YWdlX3NhbXBsZXMKCQkJCQkJIAkJCTEgQUdNQ09SRV8mc2V0Z3JheQoJCQkJCQkJCQljdXJyZW50
ZGljdCBpbWFnZW9ybWFza19zeXMKCQkJCQkJCQl9ewoJCQkJCQkJCQljdXJyZW50b3ZlcnByaW50
IG5vdCBPcGVyYXRvci9pbWFnZW1hc2sgZXEgYW5kewogIAkJCSAJCQkJCQkJMSBBR01DT1JFXyZz
ZXRncmF5CiAgCQkJIAkJCQkJCQljdXJyZW50ZGljdCBpbWFnZW9ybWFza19zeXMKICAJCQkgCQkJ
CQkJfXsKICAJCQkgCQkJCQkJCWN1cnJlbnRkaWN0IGlnbm9yZWltYWdlZGF0YQogIAkJCSAJCQkJ
CQl9aWZlbHNlCgkJCQkJCQkJfWlmZWxzZQoJCQkJCQkJfXsJCgkJCQkJCQkJTXVsdGlwbGVEYXRh
U291cmNlcyBOQ29tcG9uZW50cyBBR01JTUdfJmNvbG9yaW1hZ2UJCQkJCQkKCQkJCQkJCX1pZmVs
c2UKCQkJCQkJfXsKCQkJCQkJCXRydWUgTkNvbXBvbmVudHMgY29sb3JpbWFnZQoJCQkJCQl9aWZl
bHNlCgkJCQkJfXsKCQkJCQkJT3BlcmF0b3IgL2ltYWdlIGVxewoJCQkJCQkJQUdNQ09SRV9ob3N0
X3NlcHsKCQkJCQkJCQkvRG9JbWFnZSB0cnVlIGRlZgoJCQkJCQkJCUhvc3RTZXBDb2xvckltYWdl
ewoJCQkJCQkJCQlpbnZlcnRfaW1hZ2Vfc2FtcGxlcwoJCQkJCQkJCX17CgkJCQkJCQkJCUFHTUNP
UkVfYmxhY2tfcGxhdGUgbm90IE9wZXJhdG9yL2ltYWdlbWFzayBuZSBhbmR7CgkJCQkJCQkJCQkv
RG9JbWFnZSBmYWxzZSBkZWYKCQkJCQkJCQkJCWN1cnJlbnRkaWN0IGlnbm9yZWltYWdlZGF0YQoJ
CQkJCSAJCQkJfWlmCgkJCQkJCQkJfWlmZWxzZQoJCQkJCQkgCQkxIEFHTUNPUkVfJnNldGdyYXkK
CQkJCQkJCQlEb0ltYWdlCgkJCQkJCQkJCXtjdXJyZW50ZGljdCBpbWFnZW9ybWFza19zeXN9IGlm
CgkJCQkJCQl9ewoJCQkJCQkJCXVzZV9tYXNrIHsKCQkJCQkJCQkJbGV2ZWwzIHtwcm9jZXNzX21h
c2tfTDMgaW1hZ2V9e21hc2tlZF9pbWFnZV9zaW11bGF0aW9ufWlmZWxzZQoJCQkJCQkJCX17CgkJ
CQkJCQkJCWltYWdlCgkJCQkJCQkJfWlmZWxzZQoJCQkJCQkJfWlmZWxzZQoJCQkJCQl9ewoJCQkJ
CQkJT3BlcmF0b3Iva25vY2tvdXQgZXF7CgkJCQkJCQkJcG9wIHBvcCBwb3AgcG9wIHBvcAoJCQkJ
CQkJCWN1cnJlbnRjb2xvcnNwYWNlIG92ZXJwcmludF9wbGF0ZSBub3R7CgkJCQkJCQkJCWtub2Nr
b3V0X3VuaXRzcQoJCQkJCQkJCX1pZgoJCQkJCQkJfWlmCgkJCQkJCX1pZmVsc2UKCQkJCQl9aWZl
bHNlCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQkJY2xlYXJ0b21hcmsgcmVzdG9yZQoJCX1pZmVs
c2UKCWVuZAp9ZGVmCi9zZXBfaW1hZ2Vvcm1hc2sKewogCS9zZXBfY29sb3JzcGFjZV9kaWN0IEFH
TUNPUkVfZ2dldCBiZWdpbgoJL01hcHBlZENTQSBDU0EgbWFwX2NzYSBkZWYKCWJlZ2luCglTa2lw
SW1hZ2VQcm9jIHsKCQljdXJyZW50ZGljdCBjb25zdW1laW1hZ2VkYXRhCgl9Cgl7CgkJc2F2ZSBt
YXJrIAoJCUFHTUNPUkVfYXZvaWRfTDJfc2VwX3NwYWNlewoJCQkvRGVjb2RlIFsgRGVjb2RlIDAg
Z2V0IDI1NSBtdWwgRGVjb2RlIDEgZ2V0IDI1NSBtdWwgXSBkZWYKCQl9aWYKIAkJQUdNSU1HX2Nj
aW1hZ2VfZXhpc3RzIAoJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcSBhbmQKCQljdXJy
ZW50ZGljdC9Db21wb25lbnRzIGtub3duIGFuZCAKCQlOYW1lICgpIG5lIGFuZCAKCQlOYW1lIChB
bGwpIG5lIGFuZCAKCQlPcGVyYXRvciAvaW1hZ2UgZXEgYW5kCgkJQUdNQ09SRV9wcm9kdWNpbmdf
c2VwcyBub3QgYW5kCgkJbGV2ZWwyIG5vdCBhbmQKCQl7CgkJCVdpZHRoIEhlaWdodCBCaXRzUGVy
Q29tcG9uZW50IEltYWdlTWF0cml4IAoJCQlbCgkJCS9EYXRhU291cmNlIGxvYWQgL2V4ZWMgY3Z4
CgkJCXsKCQkJCTAgMSAyIGluZGV4IGxlbmd0aCAxIHN1YnsKCQkJCQkxIGluZGV4IGV4Y2gKCQkJ
CQkyIGNvcHkgZ2V0IDI1NSB4b3IgcHV0CgkJCQl9Zm9yCgkJCX0gL2V4ZWMgY3Z4CgkJCV0gY3Z4
IGJpbmQKCQkJTWFwcGVkQ1NBIDAgZ2V0IC9EZXZpY2VDTVlLIGVxewoJCQkJQ29tcG9uZW50cyBh
bG9hZCBwb3AKCQkJfXsKCQkJCTAgMCAwIENvbXBvbmVudHMgYWxvYWQgcG9wIDEgZXhjaCBzdWIK
CQkJfWlmZWxzZQoJCQlOYW1lIGZpbmRjbXlrY3VzdG9tY29sb3IKCQkJY3VzdG9tY29sb3JpbWFn
ZQoJCX17CgkJCUFHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbm90ewoJCQkJbGV2ZWwyewoJCQkJCUFH
TUNPUkVfYXZvaWRfTDJfc2VwX3NwYWNlIG5vdCBjdXJyZW50Y29sb3JzcGFjZSAwIGdldCAvU2Vw
YXJhdGlvbiBuZSBhbmR7CgkJCQkJCVsvU2VwYXJhdGlvbiBOYW1lIE1hcHBlZENTQSBzZXBfcHJv
Y19uYW1lIGV4Y2ggMCBnZXQgZXhjaCBsb2FkIF0gc2V0Y29sb3JzcGFjZV9vcHQKCQkJCQkJL3Nl
cF90aW50IEFHTUNPUkVfZ2dldCBzZXRjb2xvcgoJCQkJCX1pZgoJCQkJCWN1cnJlbnRkaWN0IGlt
YWdlb3JtYXNrCgkJCQl9eyAKCQkJCQljdXJyZW50ZGljdAoJCQkJCU9wZXJhdG9yIC9pbWFnZW1h
c2sgZXF7CgkJCQkJCWltYWdlb3JtYXNrCgkJCQkJfXsKCQkJCQkJc2VwX2ltYWdlb3JtYXNrX2xl
djEKCQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCiAJCQl9ewoJCQkJQUdNQ09SRV9ob3N0X3NlcHsK
CQkJCQlPcGVyYXRvci9rbm9ja291dCBlcXsKCQkJCQkJY3VycmVudGRpY3QvSW1hZ2VNYXRyaXgg
Z2V0IGNvbmNhdAoJCQkJCQlrbm9ja291dF91bml0c3EKCQkJCQl9ewoJCQkJCQljdXJyZW50Z3Jh
eSAxIG5lewogCQkJCQkJCUFHTUNPUkVfaXNfY215a19zZXAgTmFtZSAoQWxsKSBuZSBhbmR7CiAJ
CQkJCQkJCWxldmVsMnsKCSAJCQkJCQkJCVsgL1NlcGFyYXRpb24gTmFtZSBbL0RldmljZUdyYXld
CgkgCQkJCQkJCQl7IAoJIAkJCQkJCQkJCXNlcF9jb2xvcnNwYWNlX3Byb2MgQUdNQ09SRV9nZXRf
aW5rX2RhdGEKCQkJCQkJCQkJCTEgZXhjaCBzdWIKCSAJCQkJCQkJCX0gYmluZAoJCQkJCQkJCQld
IEFHTUNPUkVfJnNldGNvbG9yc3BhY2UKCQkJCQkJCQkJL3NlcF90aW50IEFHTUNPUkVfZ2dldCBB
R01DT1JFXyZzZXRjb2xvcgogCQkJCQkJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2tfc3lzCgkg
CQkJCQkJCX17CgkgCQkJCQkJCQljdXJyZW50ZGljdAoJCQkJCQkJCQlPcGVyYXRvciAvaW1hZ2Vt
YXNrIGVxewoJCQkJCQkJCQkJaW1hZ2Vvcm1hc2tfc3lzCgkJCQkJCQkJCX17CgkJCQkJCQkJCQlz
ZXBfaW1hZ2VfbGV2MV9zZXAKCQkJCQkJCQkJfWlmZWxzZQoJIAkJCQkJCQl9aWZlbHNlCiAJCQkJ
CQkJfXsKIAkJCQkJCQkJT3BlcmF0b3IvaW1hZ2VtYXNrIG5lewoJCQkJCQkJCQlpbnZlcnRfaW1h
Z2Vfc2FtcGxlcwogCQkJCQkJCQl9aWYKCQkgCQkJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2tf
c3lzCiAJCQkJCQkJfWlmZWxzZQogCQkJCQkJfXsKIAkJCQkJCQljdXJyZW50b3ZlcnByaW50IG5v
dCBOYW1lIChBbGwpIGVxIG9yIE9wZXJhdG9yL2ltYWdlbWFzayBlcSBhbmR7CgkJCQkJCQkJY3Vy
cmVudGRpY3QgaW1hZ2Vvcm1hc2tfc3lzIAoJCQkJCQkJCX17CgkJCQkJCQkJY3VycmVudG92ZXJw
cmludCBub3QKCQkJCQkJCQkJewogCQkJCQkJCQkJZ3NhdmUgCiAJCQkJCQkJCQlrbm9ja291dF91
bml0c3EKIAkJCQkJCQkJCWdyZXN0b3JlCgkJCQkJCQkJCX1pZgoJCQkJCQkJCWN1cnJlbnRkaWN0
IGNvbnN1bWVpbWFnZWRhdGEgCgkJIAkJCQkJfWlmZWxzZQogCQkJCQkJfWlmZWxzZQoJCSAJCQl9
aWZlbHNlCiAJCQkJfXsKCQkJCQljdXJyZW50Y29sb3JzcGFjZSAwIGdldCAvU2VwYXJhdGlvbiBu
ZXsKCQkJCQkJWy9TZXBhcmF0aW9uIE5hbWUgTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCAw
IGdldCBleGNoIGxvYWQgXSBzZXRjb2xvcnNwYWNlX29wdAoJCQkJCQkvc2VwX3RpbnQgQUdNQ09S
RV9nZ2V0IHNldGNvbG9yCgkJCQkJfWlmCgkJCQkJY3VycmVudG92ZXJwcmludCAKCQkJCQlNYXBw
ZWRDU0EgMCBnZXQgL0RldmljZUNNWUsgZXEgYW5kIAoJCQkJCU5hbWUgaW5SaXBfc3BvdF9oYXNf
aW5rIG5vdCBhbmQgCgkJCQkJTmFtZSAoQWxsKSBuZSBhbmQgewoJCQkJCQlpbWFnZW9ybWFza19s
Ml9vdmVycHJpbnQKCQkJCQl9ewoJCQkJCQljdXJyZW50ZGljdCBpbWFnZW9ybWFzawogCQkJCQl9
aWZlbHNlCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgkJY2xlYXJ0b21hcmsgcmVz
dG9yZQoJfWlmZWxzZQoJZW5kCgllbmQKfWRlZgovZGVjb2RlX2ltYWdlX3NhbXBsZQp7Cgk0IDEg
cm9sbCBleGNoIGR1cCA1IDEgcm9sbAoJc3ViIDIgNCAtMSByb2xsIGV4cCAxIHN1YiBkaXYgbXVs
IGFkZAp9IGJkZgovY29sb3JTcGFjZUVsZW1DbnQKewoJY3VycmVudGNvbG9yc3BhY2UgMCBnZXQg
ZHVwIC9EZXZpY2VDTVlLIGVxIHsKCQlwb3AgNAoJfQoJewoJCS9EZXZpY2VSR0IgZXEgewoJCQlw
b3AgMwoJCX17CgkJCTEKCQl9IGlmZWxzZQoJfSBpZmVsc2UKfSBiZGYKL2Rldm5fc2VwX2RhdGFz
b3VyY2UKewoJMSBkaWN0IGJlZ2luCgkvZGF0YVNvdXJjZSB4ZGYKCVsKCQkwIDEgZGF0YVNvdXJj
ZSBsZW5ndGggMSBzdWIgewoJCQlkdXAgY3VycmVudGRpY3QgL2RhdGFTb3VyY2UgZ2V0IC9leGNo
IGN2eCAvZ2V0IGN2eCAvZXhlYyBjdngKCQkJL2V4Y2ggY3Z4IG5hbWVzX2luZGV4IC9uZSBjdngg
WyAvcG9wIGN2eCBdIGN2eCAvaWYgY3Z4CgkJfSBmb3IKCV0gY3Z4IGJpbmQKCWVuZAp9IGJkZgkJ
Ci9kZXZuX2FsdF9kYXRhc291cmNlCnsKCTExIGRpY3QgYmVnaW4KCS9zcmNEYXRhU3RycyB4ZGYK
CS9kc3REYXRhU3RyIHhkZgoJL2NvbnZQcm9jIHhkZgoJL29yaWdjb2xvclNwYWNlRWxlbUNudCB4
ZGYKCS9vcmlnTXVsdGlwbGVEYXRhU291cmNlcyB4ZGYKCS9vcmlnQml0c1BlckNvbXBvbmVudCB4
ZGYKCS9vcmlnRGVjb2RlIHhkZgoJL29yaWdEYXRhU291cmNlIHhkZgoJL2RzQ250IG9yaWdNdWx0
aXBsZURhdGFTb3VyY2VzIHtvcmlnRGF0YVNvdXJjZSBsZW5ndGh9ezF9aWZlbHNlIGRlZgoJL3Nh
bXBsZXNOZWVkRGVjb2RpbmcKCQkwIDAgMSBvcmlnRGVjb2RlIGxlbmd0aCAxIHN1YiB7CgkJCW9y
aWdEZWNvZGUgZXhjaCBnZXQgYWRkCgkJfSBmb3IKCQlvcmlnRGVjb2RlIGxlbmd0aCAyIGRpdiBk
aXYKCQlkdXAgMSBlcSB7CgkJCS9kZWNvZGVEaXZpc29yIDIgb3JpZ0JpdHNQZXJDb21wb25lbnQg
ZXhwIDEgc3ViIGRlZgoJCX0gaWYKCQkyIG9yaWdCaXRzUGVyQ29tcG9uZW50IGV4cCAxIHN1YiBu
ZQoJZGVmCglbCgkJMCAxIGRzQ250IDEgc3ViIFsKCQkJY3VycmVudGRpY3QgL29yaWdNdWx0aXBs
ZURhdGFTb3VyY2VzIGdldCB7CgkJCQlkdXAgY3VycmVudGRpY3QgL29yaWdEYXRhU291cmNlIGdl
dCBleGNoIGdldCBkdXAgdHlwZQoJCQl9ewoJCQkJY3VycmVudGRpY3QgL29yaWdEYXRhU291cmNl
IGdldCBkdXAgdHlwZQoJCQl9IGlmZWxzZQoJCQlkdXAgL2ZpbGV0eXBlIGVxIHsKCQkJCXBvcCBj
dXJyZW50ZGljdCAvc3JjRGF0YVN0cnMgZ2V0IDMgLTEgL3JvbGwgY3Z4IC9nZXQgY3Z4IC9yZWFk
c3RyaW5nIGN2eCAvcG9wIGN2eAoJCQl9ewoJCQkJL3N0cmluZ3R5cGUgbmUgewoJCQkJCS9leGVj
IGN2eAoJCQkJfSBpZgoJCQkJY3VycmVudGRpY3QgL3NyY0RhdGFTdHJzIGdldCAvZXhjaCBjdngg
MyAtMSAvcm9sbCBjdnggL3hwdCBjdngKCQkJfSBpZmVsc2UKCQldIGN2eCAvZm9yIGN2eAoJCWN1
cnJlbnRkaWN0IC9zcmNEYXRhU3RycyBnZXQgMCAvZ2V0IGN2eCAvbGVuZ3RoIGN2eCAwIC9uZSBj
dnggWwoJCQkwIDEgV2lkdGggMSBzdWIgWwoJCQkJQWRvYmVfQUdNX1V0aWxzIC9BR01VVElMX25k
eCAveGRkZiBjdngKCQkJCWN1cnJlbnRkaWN0IC9vcmlnTXVsdGlwbGVEYXRhU291cmNlcyBnZXQg
ewoJCQkJCTAgMSBkc0NudCAxIHN1YiBbCgkJCQkJCUFkb2JlX0FHTV9VdGlscyAvQUdNVVRJTF9u
ZHgxIC94ZGRmIGN2eAoJCQkJCQljdXJyZW50ZGljdCAvc3JjRGF0YVN0cnMgZ2V0IC9BR01VVElM
X25keDEgL2xvYWQgY3Z4IC9nZXQgY3Z4IC9BR01VVElMX25keCAvbG9hZCBjdnggL2dldCBjdngK
CQkJCQkJc2FtcGxlc05lZWREZWNvZGluZyB7CgkJCQkJCQljdXJyZW50ZGljdCAvZGVjb2RlRGl2
aXNvciBrbm93biB7CgkJCQkJCQkJY3VycmVudGRpY3QgL2RlY29kZURpdmlzb3IgZ2V0IC9kaXYg
Y3Z4CgkJCQkJCQl9ewoJCQkJCQkJCWN1cnJlbnRkaWN0IC9vcmlnRGVjb2RlIGdldCAvQUdNVVRJ
TF9uZHgxIC9sb2FkIGN2eCAyIC9tdWwgY3Z4IDIgL2dldGludGVydmFsIGN2eCAvYWxvYWQgY3Z4
IC9wb3AgY3Z4cwoJCQkJCQkJCUJpdHNQZXJDb21wb25lbnQgL2RlY29kZV9pbWFnZV9zYW1wbGUg
bG9hZCAvZXhlYyBjdngKCQkJCQkJCX0gaWZlbHNlCgkJCQkJCX0gaWYKCQkJCQldIGN2eCAvZm9y
IGN2eAoJCQkJfXsKCQkJCQlBZG9iZV9BR01fVXRpbHMgL0FHTVVUSUxfbmR4MSAwIC9kZGYgY3Z4
CgkJCQkJY3VycmVudGRpY3QgL3NyY0RhdGFTdHJzIGdldCAwIC9nZXQgY3Z4IC9BR01VVElMX25k
eCAvbG9hZCBjdngJCQoJCQkJCWN1cnJlbnRkaWN0IC9vcmlnRGVjb2RlIGdldCBsZW5ndGggMiBp
ZGl2IGR1cCAzIDEgL3JvbGwgY3Z4IC9tdWwgY3Z4IC9leGNoIGN2eCAvZ2V0aW50ZXJ2YWwgY3Z4
IAoJCQkJCVsKCQkJCQkJc2FtcGxlc05lZWREZWNvZGluZyB7CgkJCQkJCQljdXJyZW50ZGljdCAv
ZGVjb2RlRGl2aXNvciBrbm93biB7CgkJCQkJCQkJY3VycmVudGRpY3QgL2RlY29kZURpdmlzb3Ig
Z2V0IC9kaXYgY3Z4CgkJCQkJCQl9ewoJCQkJCQkJCWN1cnJlbnRkaWN0IC9vcmlnRGVjb2RlIGdl
dCAvQUdNVVRJTF9uZHgxIC9sb2FkIGN2eCAyIC9tdWwgY3Z4IDIgL2dldGludGVydmFsIGN2eCAv
YWxvYWQgY3Z4IC9wb3AgY3Z4CgkJCQkJCQkJQml0c1BlckNvbXBvbmVudCAvZGVjb2RlX2ltYWdl
X3NhbXBsZSBsb2FkIC9leGVjIGN2eAoJCQkJCQkJCUFkb2JlX0FHTV9VdGlscyAvQUdNVVRJTF9u
ZHgxIC9BR01VVElMX25keDEgL2xvYWQgY3Z4IDEgL2FkZCBjdnggL2RkZiBjdngKCQkJCQkJCX0g
aWZlbHNlCgkJCQkJCX0gaWYKCQkJCQldIGN2eCAvZm9yYWxsIGN2eAoJCQkJfSBpZmVsc2UKCQkJ
CWN1cnJlbnRkaWN0IC9jb252UHJvYyBnZXQgL2V4ZWMgY3Z4CgkJCQljdXJyZW50ZGljdCAvb3Jp
Z2NvbG9yU3BhY2VFbGVtQ250IGdldCAxIHN1YiAtMSAwIFsKCQkJCQljdXJyZW50ZGljdCAvZHN0
RGF0YVN0ciBnZXQgMyAxIC9yb2xsIGN2eCAvQUdNVVRJTF9uZHggL2xvYWQgY3Z4IGN1cnJlbnRk
aWN0IC9vcmlnY29sb3JTcGFjZUVsZW1DbnQgZ2V0IC9tdWwgY3Z4IC9hZGQgY3Z4IC9leGNoIGN2
eAoJCQkJCWN1cnJlbnRkaWN0IC9jb252UHJvYyBnZXQgL2ZpbHRlcl9pbmRleGVkX2Rldm4gbG9h
ZCBuZSB7CgkJCQkJCTI1NSAvbXVsIGN2eCAvY3ZpIGN2eCAKCQkJCQl9IGlmCgkJCQkJL3B1dCBj
dnggCgkJCQldIGN2eCAvZm9yIGN2eAoJCQldIGN2eCAvZm9yIGN2eAoJCQljdXJyZW50ZGljdCAv
ZHN0RGF0YVN0ciBnZXQKCQldIGN2eCAvaWYgY3Z4CgldIGN2eCBiaW5kCgllbmQKfSBiZGYKL2Rl
dm5faW1hZ2Vvcm1hc2sKewogCS9kZXZpY2VuX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQg
YmVnaW4KCS9NYXBwZWRDU0EgQ1NBIG1hcF9jc2EgZGVmCgkyIGRpY3QgYmVnaW4KCWR1cCBkdXAK
CS9kc3REYXRhU3RyIGV4Y2ggL1dpZHRoIGdldCBjb2xvclNwYWNlRWxlbUNudCBtdWwgc3RyaW5n
IGRlZgoJL3NyY0RhdGFTdHJzIFsgMyAtMSByb2xsIGJlZ2luCgkJY3VycmVudGRpY3QgL011bHRp
cGxlRGF0YVNvdXJjZXMga25vd24ge011bHRpcGxlRGF0YVNvdXJjZXMge0RhdGFTb3VyY2UgbGVu
Z3RofXsxfWlmZWxzZX17MX0gaWZlbHNlCgkJewoJCQlXaWR0aCBEZWNvZGUgbGVuZ3RoIDIgZGl2
IG11bCBjdmkgc3RyaW5nCgkJfSByZXBlYXQKCQllbmQgXSBkZWYKCWJlZ2luCglTa2lwSW1hZ2VQ
cm9jIHsKCQljdXJyZW50ZGljdCBjb25zdW1laW1hZ2VkYXRhCgl9Cgl7CgkJc2F2ZSBtYXJrIAoJ
CUFHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbm90IHsKCQkJbGV2ZWwzIG5vdCB7CgkJCQlPcGVyYXRv
ciAvaW1hZ2VtYXNrIG5lIHsKCQkJCQkvRGF0YVNvdXJjZSBbCgkJCQkJCURhdGFTb3VyY2UgRGVj
b2RlIEJpdHNQZXJDb21wb25lbnQgY3VycmVudGRpY3QgL011bHRpcGxlRGF0YVNvdXJjZXMga25v
d24ge011bHRpcGxlRGF0YVNvdXJjZXN9e2ZhbHNlfSBpZmVsc2UKCQkJCQkJY29sb3JTcGFjZUVs
ZW1DbnQgL2RldmljZW5fY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvVGludFRyYW5zZm9y
bSBnZXQgCgkJCQkJCWRzdERhdGFTdHIgc3JjRGF0YVN0cnMgZGV2bl9hbHRfZGF0YXNvdXJjZSAv
ZXhlYyBjdngKCQkJCQkJXSBjdnggMCAoKSAvU3ViRmlsZURlY29kZSBmaWx0ZXIgZGVmCgkJCQkJ
L011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UgZGVmCgkJCQkJL0RlY29kZSBjb2xvclNwYWNlRWxl
bUNudCBbIGV4Y2ggezAgMX0gcmVwZWF0IF0gZGVmCgkJCQl9IGlmCgkJCX1pZgoJCQljdXJyZW50
ZGljdCBpbWFnZW9ybWFzawogCQl9ewoJCQlBR01DT1JFX2hvc3Rfc2VwewoJCQkJTmFtZXMgY29u
dmVydF90b19wcm9jZXNzIHsKCQkJCQlDU0EgbWFwX2NzYSAwIGdldCAvRGV2aWNlQ01ZSyBlcSB7
CgkJCQkJCS9EYXRhU291cmNlCgkJCQkJCQlXaWR0aCBCaXRzUGVyQ29tcG9uZW50IG11bCA3IGFk
ZCA4IGlkaXYgSGVpZ2h0IG11bCA0IG11bCAKCQkJCQkJCVsKCQkJCQkJCURhdGFTb3VyY2UgRGVj
b2RlIEJpdHNQZXJDb21wb25lbnQgY3VycmVudGRpY3QgL011bHRpcGxlRGF0YVNvdXJjZXMga25v
d24ge011bHRpcGxlRGF0YVNvdXJjZXN9e2ZhbHNlfSBpZmVsc2UKCQkJCQkJCTQgL2RldmljZW5f
Y29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvVGludFRyYW5zZm9ybSBnZXQgCgkJCQkJCQlk
c3REYXRhU3RyIHNyY0RhdGFTdHJzIGRldm5fYWx0X2RhdGFzb3VyY2UgL2V4ZWMgY3Z4CgkJCQkJ
CQldIGN2eAoJCQkJCQlmaWx0ZXJfY215ayAwICgpIC9TdWJGaWxlRGVjb2RlIGZpbHRlciBkZWYK
CQkJCQkJL011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UgZGVmCgkJCQkJCS9EZWNvZGUgWzEgMF0g
ZGVmCgkJCQkJCS9EZXZpY2VHcmF5IHNldGNvbG9yc3BhY2UKCQkJIAkJCWN1cnJlbnRkaWN0IGlt
YWdlb3JtYXNrX3N5cwogCQkJCQl9ewoJCQkJCQlBR01DT1JFX3JlcG9ydF91bnN1cHBvcnRlZF9j
b2xvcl9zcGFjZQoJCQkJCQlBR01DT1JFX2JsYWNrX3BsYXRlIHsKCQkJCQkJCS9EYXRhU291cmNl
IFsKCQkJCQkJCQlEYXRhU291cmNlIERlY29kZSBCaXRzUGVyQ29tcG9uZW50IGN1cnJlbnRkaWN0
IC9NdWx0aXBsZURhdGFTb3VyY2VzIGtub3duIHtNdWx0aXBsZURhdGFTb3VyY2VzfXtmYWxzZX0g
aWZlbHNlCgkJCQkJCQkJQ1NBIG1hcF9jc2EgMCBnZXQgL0RldmljZVJHQiBlcXszfXsxfWlmZWxz
ZSAvZGV2aWNlbl9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0IC9UaW50VHJhbnNmb3JtIGdl
dAoJCQkJCQkJCWRzdERhdGFTdHIgc3JjRGF0YVN0cnMgZGV2bl9hbHRfZGF0YXNvdXJjZSAvZXhl
YyBjdngKCQkJCQkJCQldIGN2eCAwICgpIC9TdWJGaWxlRGVjb2RlIGZpbHRlciBkZWYKCQkJCQkJ
CS9NdWx0aXBsZURhdGFTb3VyY2VzIGZhbHNlIGRlZgoJCQkJCQkJL0RlY29kZSBjb2xvclNwYWNl
RWxlbUNudCBbIGV4Y2ggezAgMX0gcmVwZWF0IF0gZGVmCgkJCQkgCQkJY3VycmVudGRpY3QgaW1h
Z2Vvcm1hc2tfc3lzCgkJCQkgCQl9CgkJCQkJCXsKCSAJCQkJCQlnc2F2ZSAKCSAJCQkJCQlrbm9j
a291dF91bml0c3EKCSAJCQkJCQlncmVzdG9yZQoJCQkJCQkJY3VycmVudGRpY3QgY29uc3VtZWlt
YWdlZGF0YSAKCQkJCQkJfSBpZmVsc2UKIAkJCQkJfSBpZmVsc2UKCQkJCX0KCQkJCXsJCgkJCQkJ
L2RldmljZW5fY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvbmFtZXNfaW5kZXgga25vd24g
ewoJIAkJCQkJT3BlcmF0b3IvaW1hZ2VtYXNrIG5lewoJIAkJCQkJCU11bHRpcGxlRGF0YVNvdXJj
ZXMgewoJCSAJCQkJCQkvRGF0YVNvdXJjZSBbIERhdGFTb3VyY2UgZGV2bl9zZXBfZGF0YXNvdXJj
ZSAvZXhlYyBjdnggXSBjdnggZGVmCgkJCQkJCQkJL011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2Ug
ZGVmCgkgCQkJCQkJfXsKCQkJCQkJCQkvRGF0YVNvdXJjZSAvRGF0YVNvdXJjZSBsb2FkIGRzdERh
dGFTdHIgc3JjRGF0YVN0cnMgMCBnZXQgZmlsdGVyX2Rldm4gZGVmCgkgCQkJCQkJfSBpZmVsc2UK
CQkJCQkJCWludmVydF9pbWFnZV9zYW1wbGVzCgkgCQkJCQl9IGlmCgkJCSAJCQljdXJyZW50ZGlj
dCBpbWFnZW9ybWFza19zeXMKCSAJCQkJfXsKCSAJCQkJCWN1cnJlbnRvdmVycHJpbnQgbm90IE9w
ZXJhdG9yL2ltYWdlbWFzayBlcSBhbmR7CgkJCQkJCQljdXJyZW50ZGljdCBpbWFnZW9ybWFza19z
eXMgCgkJCQkJCQl9ewoJCQkJCQkJY3VycmVudG92ZXJwcmludCBub3QKCQkJCQkJCQl7CgkgCQkJ
CQkJCWdzYXZlIAoJIAkJCQkJCQlrbm9ja291dF91bml0c3EKCSAJCQkJCQkJZ3Jlc3RvcmUKCQkJ
CQkJCQl9aWYKCQkJCQkJCWN1cnJlbnRkaWN0IGNvbnN1bWVpbWFnZWRhdGEgCgkJCSAJCQl9aWZl
bHNlCgkgCQkJCX1pZmVsc2UKCSAJCQl9aWZlbHNlCiAJCQl9ewoJCQkJY3VycmVudGRpY3QgaW1h
Z2Vvcm1hc2sKCQkJfWlmZWxzZQoJCX1pZmVsc2UKCQljbGVhcnRvbWFyayByZXN0b3JlCgl9aWZl
bHNlCgllbmQKCWVuZAoJZW5kCn1kZWYKL2ltYWdlb3JtYXNrX2wyX292ZXJwcmludAp7CgljdXJy
ZW50ZGljdAoJY3VycmVudGNteWtjb2xvciBhZGQgYWRkIGFkZCAwIGVxewoJCWN1cnJlbnRkaWN0
IGNvbnN1bWVpbWFnZWRhdGEKCX17CgkJbGV2ZWwzeyAJCQkKCQkJY3VycmVudGNteWtjb2xvciAK
CQkJL0FHTUlNR19rIHhkZiAKCQkJL0FHTUlNR195IHhkZiAKCQkJL0FHTUlNR19tIHhkZiAKCQkJ
L0FHTUlNR19jIHhkZgoJCQlPcGVyYXRvci9pbWFnZW1hc2sgZXF7CgkJCQlbL0RldmljZU4gWwoJ
CQkJQUdNSU1HX2MgMCBuZSB7L0N5YW59IGlmCgkJCQlBR01JTUdfbSAwIG5lIHsvTWFnZW50YX0g
aWYKCQkJCUFHTUlNR195IDAgbmUgey9ZZWxsb3d9IGlmCgkJCQlBR01JTUdfayAwIG5lIHsvQmxh
Y2t9IGlmCgkJCQldIC9EZXZpY2VDTVlLIHt9XSBzZXRjb2xvcnNwYWNlCgkJCQlBR01JTUdfYyAw
IG5lIHtBR01JTUdfY30gaWYKCQkJCUFHTUlNR19tIDAgbmUge0FHTUlNR19tfSBpZgoJCQkJQUdN
SU1HX3kgMCBuZSB7QUdNSU1HX3l9IGlmCgkJCQlBR01JTUdfayAwIG5lIHtBR01JTUdfa30gaWYK
CQkJCXNldGNvbG9yCQkJCgkJCX17CQoJCQkJL0RlY29kZSBbIERlY29kZSAwIGdldCAyNTUgbXVs
IERlY29kZSAxIGdldCAyNTUgbXVsIF0gZGVmCgkJCQlbL0luZGV4ZWQgCQkJCQoJCQkJCVsKCQkJ
CQkJL0RldmljZU4gWwoJCQkJCQkJQUdNSU1HX2MgMCBuZSB7L0N5YW59IGlmCgkJCQkJCQlBR01J
TUdfbSAwIG5lIHsvTWFnZW50YX0gaWYKCQkJCQkJCUFHTUlNR195IDAgbmUgey9ZZWxsb3d9IGlm
CgkJCQkJCQlBR01JTUdfayAwIG5lIHsvQmxhY2t9IGlmCgkJCQkJCV0gCgkJCQkJCS9EZXZpY2VD
TVlLIHsKCQkJCQkJCUFHTUlNR19rIDAgZXEgezB9IGlmCgkJCQkJCQlBR01JTUdfeSAwIGVxIHsw
IGV4Y2h9IGlmCgkJCQkJCQlBR01JTUdfbSAwIGVxIHswIDMgMSByb2xsfSBpZgoJCQkJCQkJQUdN
SU1HX2MgMCBlcSB7MCA0IDEgcm9sbH0gaWYJCQkJCQkKCQkJCQkJfQoJCQkJCV0KCQkJCQkyNTUK
CQkJCQl7CgkJCQkJCTI1NSBkaXYgCgkJCQkJCW1hcmsgZXhjaAoJCQkJCQlkdXAJZHVwIGR1cAoJ
CQkJCQlBR01JTUdfayAwIG5lewoJCQkJCQkJL3NlcF90aW50IEFHTUNPUkVfZ2dldCBtdWwgTWFw
cGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCBwb3AgbG9hZCBleGVjIDQgMSByb2xsIHBvcCBwb3Ag
cG9wCQkKCQkJCQkJCWNvdW50dG9tYXJrIDEgcm9sbAoJCQkJCQl9ewoJCQkJCQkJcG9wCgkJCQkJ
CX1pZmVsc2UKCQkJCQkJQUdNSU1HX3kgMCBuZXsKCQkJCQkJCS9zZXBfdGludCBBR01DT1JFX2dn
ZXQgbXVsIE1hcHBlZENTQSBzZXBfcHJvY19uYW1lIGV4Y2ggcG9wIGxvYWQgZXhlYyA0IDIgcm9s
bCBwb3AgcG9wIHBvcAkJCgkJCQkJCQljb3VudHRvbWFyayAxIHJvbGwKCQkJCQkJfXsKCQkJCQkJ
CXBvcAoJCQkJCQl9aWZlbHNlCgkJCQkJCUFHTUlNR19tIDAgbmV7CgkJCQkJCQkvc2VwX3RpbnQg
QUdNQ09SRV9nZ2V0IG11bCBNYXBwZWRDU0Egc2VwX3Byb2NfbmFtZSBleGNoIHBvcCBsb2FkIGV4
ZWMgNCAzIHJvbGwgcG9wIHBvcCBwb3AJCQoJCQkJCQkJY291bnR0b21hcmsgMSByb2xsCgkJCQkJ
CX17CgkJCQkJCQlwb3AKCQkJCQkJfWlmZWxzZQoJCQkJCQlBR01JTUdfYyAwIG5lewoJCQkJCQkJ
L3NlcF90aW50IEFHTUNPUkVfZ2dldCBtdWwgTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCBw
b3AgbG9hZCBleGVjIHBvcCBwb3AgcG9wCQkKCQkJCQkJCWNvdW50dG9tYXJrIDEgcm9sbAoJCQkJ
CQl9ewoJCQkJCQkJcG9wCgkJCQkJCX1pZmVsc2UKCQkJCQkJY291bnR0b21hcmsgMSBhZGQgLTEg
cm9sbCBwb3AKCQkJCQl9CgkJCQldIHNldGNvbG9yc3BhY2UKCQkJfWlmZWxzZQoJCQlpbWFnZW9y
bWFza19zeXMKCQl9ewoJd3JpdGVfaW1hZ2VfZmlsZXsKCQljdXJyZW50Y215a2NvbG9yCgkJMCBu
ZXsKCQkJWy9TZXBhcmF0aW9uIC9CbGFjayAvRGV2aWNlR3JheSB7fV0gc2V0Y29sb3JzcGFjZQoJ
CQlnc2F2ZQoJCQkvQmxhY2sKCQkJW3sxIGV4Y2ggc3ViIC9zZXBfdGludCBBR01DT1JFX2dnZXQg
bXVsfSAvZXhlYyBjdnggTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgY3Z4IGV4Y2ggcG9wIHs0IDEg
cm9sbCBwb3AgcG9wIHBvcCAxIGV4Y2ggc3VifSAvZXhlYyBjdnhdCgkJCWN2eCBtb2RpZnlfaGFs
ZnRvbmVfeGZlcgoJCQlPcGVyYXRvciBjdXJyZW50ZGljdCByZWFkX2ltYWdlX2ZpbGUKCQkJZ3Jl
c3RvcmUKCQl9aWYKCQkwIG5lewoJCQlbL1NlcGFyYXRpb24gL1llbGxvdyAvRGV2aWNlR3JheSB7
fV0gc2V0Y29sb3JzcGFjZQoJCQlnc2F2ZQoJCQkvWWVsbG93CgkJCVt7MSBleGNoIHN1YiAvc2Vw
X3RpbnQgQUdNQ09SRV9nZ2V0IG11bH0gL2V4ZWMgY3Z4IE1hcHBlZENTQSBzZXBfcHJvY19uYW1l
IGN2eCBleGNoIHBvcCB7NCAyIHJvbGwgcG9wIHBvcCBwb3AgMSBleGNoIHN1Yn0gL2V4ZWMgY3Z4
XQoJCQljdnggbW9kaWZ5X2hhbGZ0b25lX3hmZXIKCQkJT3BlcmF0b3IgY3VycmVudGRpY3QgcmVh
ZF9pbWFnZV9maWxlCgkJCWdyZXN0b3JlCgkJfWlmCgkJMCBuZXsKCQkJWy9TZXBhcmF0aW9uIC9N
YWdlbnRhIC9EZXZpY2VHcmF5IHt9XSBzZXRjb2xvcnNwYWNlCgkJCWdzYXZlCgkJCS9NYWdlbnRh
CgkJCVt7MSBleGNoIHN1YiAvc2VwX3RpbnQgQUdNQ09SRV9nZ2V0IG11bH0gL2V4ZWMgY3Z4IE1h
cHBlZENTQSBzZXBfcHJvY19uYW1lIGN2eCBleGNoIHBvcCB7NCAzIHJvbGwgcG9wIHBvcCBwb3Ag
MSBleGNoIHN1Yn0gL2V4ZWMgY3Z4XQoJCQljdnggbW9kaWZ5X2hhbGZ0b25lX3hmZXIKCQkJT3Bl
cmF0b3IgY3VycmVudGRpY3QgcmVhZF9pbWFnZV9maWxlCgkJCWdyZXN0b3JlCgkJfWlmCgkJMCBu
ZXsKCQkJWy9TZXBhcmF0aW9uIC9DeWFuIC9EZXZpY2VHcmF5IHt9XSBzZXRjb2xvcnNwYWNlCgkJ
CWdzYXZlCgkJCS9DeWFuIAoJCQlbezEgZXhjaCBzdWIgL3NlcF90aW50IEFHTUNPUkVfZ2dldCBt
dWx9IC9leGVjIGN2eCBNYXBwZWRDU0Egc2VwX3Byb2NfbmFtZSBjdnggZXhjaCBwb3Age3BvcCBw
b3AgcG9wIDEgZXhjaCBzdWJ9IC9leGVjIGN2eF0KCQkJY3Z4IG1vZGlmeV9oYWxmdG9uZV94ZmVy
CgkJCU9wZXJhdG9yIGN1cnJlbnRkaWN0IHJlYWRfaW1hZ2VfZmlsZQoJCQlncmVzdG9yZQoJCX0g
aWYKCQkJCWNsb3NlX2ltYWdlX2ZpbGUKCQkJfXsKCQkJCWltYWdlb3JtYXNrCgkJCX1pZmVsc2UK
CQl9aWZlbHNlCgl9aWZlbHNlCn0gZGVmCi9pbmRleGVkX2ltYWdlb3JtYXNrCnsKCWJlZ2luCgkJ
c2F2ZSBtYXJrIAogCQljdXJyZW50ZGljdAogCQlBR01DT1JFX2hvc3Rfc2VwewoJCQlPcGVyYXRv
ci9rbm9ja291dCBlcXsKCQkJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQg
ZHVwIC9DU0Ega25vd24gewoJCQkJCS9DU0EgZ2V0IG1hcF9jc2EKCQkJCX17CgkJCQkJL0NTRCBn
ZXQgZ2V0X2NzZCAvTmFtZXMgZ2V0CgkJCQl9IGlmZWxzZQoJCQkJb3ZlcnByaW50X3BsYXRlIG5v
dHsKCQkJCQlrbm9ja291dF91bml0c3EKCQkJCX1pZgoJCQl9ewoJCQkJSW5kZXhlZF9EZXZpY2VO
IHsKCQkJCQkvZGV2aWNlbl9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0IC9uYW1lc19pbmRl
eCBrbm93biB7CgkJCSAJCQlpbmRleGVkX2ltYWdlX2xldjJfc2VwCgkJCQkJfXsKCQkJCQkJY3Vy
cmVudG92ZXJwcmludCBub3R7CgkJCQkJCQlrbm9ja291dF91bml0c3EKCQkJIAkJCX1pZgoJCQkg
CQkJY3VycmVudGRpY3QgY29uc3VtZWltYWdlZGF0YQoJCQkJCX0gaWZlbHNlCgkJCQl9ewoJCSAJ
CQlBR01DT1JFX2lzX2NteWtfc2VwewoJCQkJCQlPcGVyYXRvciAvaW1hZ2VtYXNrIGVxewoJCQkJ
CQkJaW1hZ2Vvcm1hc2tfc3lzCgkJCQkJCX17CgkJCQkJCQlsZXZlbDJ7CgkJCQkJCQkJaW5kZXhl
ZF9pbWFnZV9sZXYyX3NlcAoJCQkJCQkJfXsKCQkJCQkJCQlpbmRleGVkX2ltYWdlX2xldjFfc2Vw
CgkJCQkJCQl9aWZlbHNlCgkJCQkJCX1pZmVsc2UKCQkJCQl9ewoJCQkJCQljdXJyZW50b3ZlcnBy
aW50IG5vdHsKCQkJCQkJCWtub2Nrb3V0X3VuaXRzcQoJCQkgCQkJfWlmCgkJCSAJCQljdXJyZW50
ZGljdCBjb25zdW1laW1hZ2VkYXRhCgkJCQkJfWlmZWxzZQoJCQkJfWlmZWxzZQoJCQl9aWZlbHNl
CiAJCX17CgkJCWxldmVsMnsKCQkJCUluZGV4ZWRfRGV2aWNlTiB7CgkJCQkJL2luZGV4ZWRfY29s
b3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJCQkJCUNTRCBnZXRfY3NkIGJlZ2luCgkJ
CQl9ewoJCQkJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQkJ
CQlDU0EgbWFwX2NzYSAwIGdldCAvRGV2aWNlQ01ZSyBlcSBwc19sZXZlbCAzIGdlIGFuZCBwc192
ZXJzaW9uIDMwMTUuMDA3IGx0IGFuZCB7CgkJCQkJCVsvSW5kZXhlZCBbL0RldmljZU4gWy9DeWFu
IC9NYWdlbnRhIC9ZZWxsb3cgL0JsYWNrXSAvRGV2aWNlQ01ZSyB7fV0gSGlWYWwgTG9va3VwXQoJ
CQkJCQlzZXRjb2xvcnNwYWNlCgkJCQkJfSBpZgoJCQkJCWVuZAoJCQkJfSBpZmVsc2UKCQkJCWlt
YWdlb3JtYXNrCgkJCQlJbmRleGVkX0RldmljZU4gewoJCQkJCWVuZAoJCQkJCWVuZAoJCQkJfSBp
ZgoJCQl9eyAKCQkJCU9wZXJhdG9yIC9pbWFnZW1hc2sgZXF7CgkJCQkJaW1hZ2Vvcm1hc2sKCQkJ
CX17CgkJCQkJaW5kZXhlZF9pbWFnZW9ybWFza19sZXYxCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UK
IAkJfWlmZWxzZQoJCWNsZWFydG9tYXJrIHJlc3RvcmUKCWVuZAp9ZGVmCi9pbmRleGVkX2ltYWdl
X2xldjJfc2VwCnsKCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4K
CWJlZ2luCgkJSW5kZXhlZF9EZXZpY2VOIG5vdCB7CgkJCWN1cnJlbnRjb2xvcnNwYWNlIAoJCQlk
dXAgMSAvRGV2aWNlR3JheSBwdXQKCQkJZHVwIDMKCQkJY3VycmVudGNvbG9yc3BhY2UgMiBnZXQg
MSBhZGQgc3RyaW5nCgkJCTAgMSAyIDMgQUdNQ09SRV9nZXRfaW5rX2RhdGEgNCBjdXJyZW50Y29s
b3JzcGFjZSAzIGdldCBsZW5ndGggMSBzdWIKCQkJewoJCQlkdXAgNCBpZGl2IGV4Y2ggY3VycmVu
dGNvbG9yc3BhY2UgMyBnZXQgZXhjaCBnZXQgMjU1IGV4Y2ggc3ViIDIgaW5kZXggMyAxIHJvbGwg
cHV0CgkJCX1mb3IgCgkJCXB1dAlzZXRjb2xvcnNwYWNlCgkJfSBpZgoJCWN1cnJlbnRkaWN0IAoJ
CU9wZXJhdG9yIC9pbWFnZW1hc2sgZXF7CgkJCUFHTUlNR18maW1hZ2VtYXNrCgkJfXsKCQkJdXNl
X21hc2sgewoJCQkJbGV2ZWwzIHtwcm9jZXNzX21hc2tfTDMgQUdNSU1HXyZpbWFnZX17bWFza2Vk
X2ltYWdlX3NpbXVsYXRpb259aWZlbHNlCgkJCX17CgkJCQlBR01JTUdfJmltYWdlCgkJCX1pZmVs
c2UKCQl9aWZlbHNlCgllbmQgZW5kCn1kZWYKICAvT1BJaW1hZ2UKICB7CiAgCWR1cCB0eXBlIC9k
aWN0dHlwZSBuZXsKICAJCTEwIGRpY3QgYmVnaW4KICAJCQkvRGF0YVNvdXJjZSB4ZGYKICAJCQkv
SW1hZ2VNYXRyaXggeGRmCiAgCQkJL0JpdHNQZXJDb21wb25lbnQgeGRmCiAgCQkJL0hlaWdodCB4
ZGYKICAJCQkvV2lkdGggeGRmCiAgCQkJL0ltYWdlVHlwZSAxIGRlZgogIAkJCS9EZWNvZGUgWzAg
MSBkZWZdCiAgCQkJY3VycmVudGRpY3QKICAJCWVuZAogIAl9aWYKICAJZHVwIGJlZ2luCiAgCQkv
TkNvbXBvbmVudHMgMSBjZG5kZgogIAkJL011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UgY2RuZGYK
ICAJCS9Ta2lwSW1hZ2VQcm9jIHtmYWxzZX0gY2RuZGYKICAJCS9Ib3N0U2VwQ29sb3JJbWFnZSBm
YWxzZSBjZG5kZgogIAkJL0RlY29kZSBbCiAgCQkJCTAgCiAgCQkJCWN1cnJlbnRjb2xvcnNwYWNl
IDAgZ2V0IC9JbmRleGVkIGVxewogIAkJCQkJMiBCaXRzUGVyQ29tcG9uZW50IGV4cCAxIHN1Ygog
IAkJCQl9ewogIAkJCQkJMQogIAkJCQl9aWZlbHNlCiAgCQldIGNkbmRmCiAgCQkvT3BlcmF0b3Ig
L2ltYWdlIGNkbmRmCiAgCWVuZAogIAkvc2VwX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQg
bnVsbCBlcXsKICAJCWltYWdlb3JtYXNrCiAgCX17CiAgCQlnc2F2ZQogIAkJZHVwIGJlZ2luIGlu
dmVydF9pbWFnZV9zYW1wbGVzIGVuZAogIAkJc2VwX2ltYWdlb3JtYXNrCiAgCQlncmVzdG9yZQog
IAl9aWZlbHNlCiAgfWRlZgovY2FjaGVtYXNrX2xldmVsMgp7CgkzIGRpY3QgYmVnaW4KCS9MWldF
bmNvZGUgZmlsdGVyIC9Xcml0ZUZpbHRlciB4ZGYKCS9yZWFkQnVmZmVyIDI1NiBzdHJpbmcgZGVm
CgkvUmVhZEZpbHRlcgoJCWN1cnJlbnRmaWxlCgkJMCAoJUVuZE1hc2spIC9TdWJGaWxlRGVjb2Rl
IGZpbHRlcgoJCS9BU0NJSTg1RGVjb2RlIGZpbHRlcgoJCS9SdW5MZW5ndGhEZWNvZGUgZmlsdGVy
CglkZWYKCXsKCQlSZWFkRmlsdGVyIHJlYWRCdWZmZXIgcmVhZHN0cmluZyBleGNoCgkJV3JpdGVG
aWx0ZXIgZXhjaCB3cml0ZXN0cmluZwoJCW5vdCB7ZXhpdH0gaWYKCX1sb29wCglXcml0ZUZpbHRl
ciBjbG9zZWZpbGUKCWVuZAp9ZGVmCi9jYWNoZW1hc2tfbGV2ZWwzCnsKCWN1cnJlbnRmaWxlCgk8
PAoJCS9GaWx0ZXIgWyAvU3ViRmlsZURlY29kZSAvQVNDSUk4NURlY29kZSAvUnVuTGVuZ3RoRGVj
b2RlIF0KCQkvRGVjb2RlUGFybXMgWyA8PCAvRU9EQ291bnQgMCAvRU9EU3RyaW5nICglRW5kTWFz
aykgPj4gbnVsbCBudWxsIF0KCQkvSW50ZW50IDEKCT4+CgkvUmV1c2FibGVTdHJlYW1EZWNvZGUg
ZmlsdGVyCn1kZWYKL3Nwb3RfYWxpYXMKewoJL21hcHRvX3NlcF9pbWFnZW9ybWFzayAKCXsKCQlk
dXAgdHlwZSAvZGljdHR5cGUgbmV7CgkJCTEyIGRpY3QgYmVnaW4KCQkJCS9JbWFnZVR5cGUgMSBk
ZWYKCQkJCS9EYXRhU291cmNlIHhkZgoJCQkJL0ltYWdlTWF0cml4IHhkZgoJCQkJL0JpdHNQZXJD
b21wb25lbnQgeGRmCgkJCQkvSGVpZ2h0IHhkZgoJCQkJL1dpZHRoIHhkZgoJCQkJL011bHRpcGxl
RGF0YVNvdXJjZXMgZmFsc2UgZGVmCgkJfXsKCQkJYmVnaW4KCQl9aWZlbHNlCgkJCQkvRGVjb2Rl
IFsvY3VzdG9tY29sb3JfdGludCBBR01DT1JFX2dnZXQgMF0gZGVmCgkJCQkvT3BlcmF0b3IgL2lt
YWdlIGRlZgoJCQkJL0hvc3RTZXBDb2xvckltYWdlIGZhbHNlIGRlZgoJCQkJL1NraXBJbWFnZVBy
b2Mge2ZhbHNlfSBkZWYKCQkJCWN1cnJlbnRkaWN0IAoJCQllbmQKCQlzZXBfaW1hZ2Vvcm1hc2sK
CX1iZGYKCS9jdXN0b21jb2xvcmltYWdlCgl7CgkJQWRvYmVfQUdNX0ltYWdlL0FHTUlNR19jb2xv
ckFyeSB4ZGRmCgkJL2N1c3RvbWNvbG9yX3RpbnQgQUdNQ09SRV9nZ2V0CgkJYmRpY3QKCQkJL05h
bWUgQUdNSU1HX2NvbG9yQXJ5IDQgZ2V0CgkJCS9DU0EgWyAvRGV2aWNlQ01ZSyBdIAoJCQkvVGlu
dE1ldGhvZCAvU3VidHJhY3RpdmUKCQkJL1RpbnRQcm9jIG51bGwKCQkJL01hcHBlZENTQSBudWxs
CgkJCS9OQ29tcG9uZW50cyA0IAoJCQkvQ29tcG9uZW50cyBbIEFHTUlNR19jb2xvckFyeSBhbG9h
ZCBwb3AgcG9wIF0gCgkJZWRpY3QKCQlzZXRzZXBjb2xvcnNwYWNlCgkJbWFwdG9fc2VwX2ltYWdl
b3JtYXNrCgl9bmRmCglBZG9iZV9BR01fSW1hZ2UvQUdNSU1HXyZjdXN0b21jb2xvcmltYWdlIC9j
dXN0b21jb2xvcmltYWdlIGxvYWQgcHV0CgkvY3VzdG9tY29sb3JpbWFnZQoJewoJCUFkb2JlX0FH
TV9JbWFnZS9BR01JTUdfb3ZlcnJpZGUgZmFsc2UgcHV0CgkJZHVwIDQgZ2V0IG1hcF9hbGlhc3sK
CQkJL2N1c3RvbWNvbG9yX3RpbnQgQUdNQ09SRV9nZ2V0IGV4Y2ggc2V0c2VwY29sb3JzcGFjZQoJ
CQlwb3AKCQkJbWFwdG9fc2VwX2ltYWdlb3JtYXNrCgkJfXsKCQkJQUdNSU1HXyZjdXN0b21jb2xv
cmltYWdlCgkJfWlmZWxzZQkJCQoJfWJkZgp9ZGVmCi9zbmFwX3RvX2RldmljZQp7Cgk2IGRpY3Qg
YmVnaW4KCW1hdHJpeCBjdXJyZW50bWF0cml4CglkdXAgMCBnZXQgMCBlcSAxIGluZGV4IDMgZ2V0
IDAgZXEgYW5kCgkxIGluZGV4IDEgZ2V0IDAgZXEgMiBpbmRleCAyIGdldCAwIGVxIGFuZCBvciBl
eGNoIHBvcAoJewoJCTEgMSBkdHJhbnNmb3JtIDAgZ3QgZXhjaCAwIGd0IC9BR01JTUdfeFNpZ24/
IGV4Y2ggZGVmIC9BR01JTUdfeVNpZ24/IGV4Y2ggZGVmCgkJMCAwIHRyYW5zZm9ybQoJCUFHTUlN
R195U2lnbj8ge2Zsb29yIDAuMSBzdWJ9e2NlaWxpbmcgMC4xIGFkZH0gaWZlbHNlIGV4Y2gKCQlB
R01JTUdfeFNpZ24/IHtmbG9vciAwLjEgc3VifXtjZWlsaW5nIDAuMSBhZGR9IGlmZWxzZSBleGNo
CgkJaXRyYW5zZm9ybSAvQUdNSU1HX2xsWSBleGNoIGRlZiAvQUdNSU1HX2xsWCBleGNoIGRlZgoJ
CTEgMSB0cmFuc2Zvcm0KCQlBR01JTUdfeVNpZ24/IHtjZWlsaW5nIDAuMSBhZGR9e2Zsb29yIDAu
MSBzdWJ9IGlmZWxzZSBleGNoCgkJQUdNSU1HX3hTaWduPyB7Y2VpbGluZyAwLjEgYWRkfXtmbG9v
ciAwLjEgc3VifSBpZmVsc2UgZXhjaAoJCWl0cmFuc2Zvcm0gL0FHTUlNR191clkgZXhjaCBkZWYg
L0FHTUlNR191clggZXhjaCBkZWYJCQkKCQlbQUdNSU1HX3VyWCBBR01JTUdfbGxYIHN1YiAwIDAg
QUdNSU1HX3VyWSBBR01JTUdfbGxZIHN1YiAgQUdNSU1HX2xsWCBBR01JTUdfbGxZXSBjb25jYXQK
CX17Cgl9aWZlbHNlCgllbmQKfSBkZWYKbGV2ZWwyIG5vdHsKCS9jb2xvcmJ1ZgoJewoJCTAgMSAy
IGluZGV4IGxlbmd0aCAxIHN1YnsKCQkJZHVwIDIgaW5kZXggZXhjaCBnZXQgCgkJCTI1NSBleGNo
IHN1YiAKCQkJMiBpbmRleCAKCQkJMyAxIHJvbGwgCgkJCXB1dAoJCX1mb3IKCX1kZWYKCS90aW50
X2ltYWdlX3RvX2NvbG9yCgl7CgkJYmVnaW4KCQkJV2lkdGggSGVpZ2h0IEJpdHNQZXJDb21wb25l
bnQgSW1hZ2VNYXRyaXggCgkJCS9EYXRhU291cmNlIGxvYWQKCQllbmQKCQlBZG9iZV9BR01fSW1h
Z2UgYmVnaW4KCQkJL0FHTUlNR19tYnVmIDAgc3RyaW5nIGRlZgoJCQkvQUdNSU1HX3lidWYgMCBz
dHJpbmcgZGVmCgkJCS9BR01JTUdfa2J1ZiAwIHN0cmluZyBkZWYKCQkJewoJCQkJY29sb3JidWYg
ZHVwIGxlbmd0aCBBR01JTUdfbWJ1ZiBsZW5ndGggbmUKCQkJCQl7CgkJCQkJZHVwIGxlbmd0aCBk
dXAgZHVwCgkJCQkJL0FHTUlNR19tYnVmIGV4Y2ggc3RyaW5nIGRlZgoJCQkJCS9BR01JTUdfeWJ1
ZiBleGNoIHN0cmluZyBkZWYKCQkJCQkvQUdNSU1HX2tidWYgZXhjaCBzdHJpbmcgZGVmCgkJCQkJ
fSBpZgoJCQkJZHVwIEFHTUlNR19tYnVmIGNvcHkgQUdNSU1HX3lidWYgY29weSBBR01JTUdfa2J1
ZiBjb3B5IHBvcAoJCQl9CgkJCWFkZHByb2NzCgkJCXtBR01JTUdfbWJ1Zn17QUdNSU1HX3lidWZ9
e0FHTUlNR19rYnVmfSB0cnVlIDQgY29sb3JpbWFnZQkKCQllbmQKCX0gZGVmCQkJCgkvc2VwX2lt
YWdlb3JtYXNrX2xldjEKCXsKCQliZWdpbgoJCQlNYXBwZWRDU0EgMCBnZXQgZHVwIC9EZXZpY2VS
R0IgZXEgZXhjaCAvRGV2aWNlQ01ZSyBlcSBvciBoYXNfY29sb3Igbm90IGFuZHsKCQkJCXsKCQkJ
CQkyNTUgbXVsIHJvdW5kIGN2aSBHcmF5TG9va3VwIGV4Y2ggZ2V0CgkJCQl9IGN1cnJlbnR0cmFu
c2ZlciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2sKCQkJ
fXsKCQkJCS9zZXBfY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldC9Db21wb25lbnRzIGtub3du
ewoJCQkJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcXsKCQkJCQkJQ29tcG9uZW50cyBh
bG9hZCBwb3AKCQkJCQl9ewoJCQkJCQkwIDAgMCBDb21wb25lbnRzIGFsb2FkIHBvcCAxIGV4Y2gg
c3ViCgkJCQkJfWlmZWxzZQoJCQkJCUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfayB4ZGRmIAoJCQkJ
CUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfeSB4ZGRmIAoJCQkJCUFkb2JlX0FHTV9JbWFnZS9BR01J
TUdfbSB4ZGRmIAoJCQkJCUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfYyB4ZGRmIAoJCQkJCUFHTUlN
R195IDAuMCBlcSBBR01JTUdfbSAwLjAgZXEgYW5kIEFHTUlNR19jIDAuMCBlcSBhbmR7CgkJCQkJ
CXtBR01JTUdfayBtdWwgMSBleGNoIHN1Yn0gY3VycmVudHRyYW5zZmVyIGFkZHByb2NzIHNldHRy
YW5zZmVyCgkJCQkJCWN1cnJlbnRkaWN0IGltYWdlb3JtYXNrCgkJCQkJfXsgCgkJCQkJCWN1cnJl
bnRjb2xvcnRyYW5zZmVyCgkJCQkJCXtBR01JTUdfayBtdWwgMSBleGNoIHN1Yn0gZXhjaCBhZGRw
cm9jcyA0IDEgcm9sbAoJCQkJCQl7QUdNSU1HX3kgbXVsIDEgZXhjaCBzdWJ9IGV4Y2ggYWRkcHJv
Y3MgNCAxIHJvbGwKCQkJCQkJe0FHTUlNR19tIG11bCAxIGV4Y2ggc3VifSBleGNoIGFkZHByb2Nz
IDQgMSByb2xsCgkJCQkJCXtBR01JTUdfYyBtdWwgMSBleGNoIHN1Yn0gZXhjaCBhZGRwcm9jcyA0
IDEgcm9sbAoJCQkJCQlzZXRjb2xvcnRyYW5zZmVyCgkJCQkJCWN1cnJlbnRkaWN0IHRpbnRfaW1h
Z2VfdG9fY29sb3IKCQkJCQl9aWZlbHNlCgkJCQl9ewoJCQkJCU1hcHBlZENTQSAwIGdldCAvRGV2
aWNlR3JheSBlcSB7CgkJCQkJCXsyNTUgbXVsIHJvdW5kIGN2aSBDb2xvckxvb2t1cCBleGNoIGdl
dCAwIGdldH0gY3VycmVudHRyYW5zZmVyIGFkZHByb2NzIHNldHRyYW5zZmVyCgkJCQkJCWN1cnJl
bnRkaWN0IGltYWdlb3JtYXNrCgkJCQkJfXsKCQkJCQkJTWFwcGVkQ1NBIDAgZ2V0IC9EZXZpY2VD
TVlLIGVxIHsKCQkJCQkJCWN1cnJlbnRjb2xvcnRyYW5zZmVyCgkJCQkJCQl7MjU1IG11bCByb3Vu
ZCBjdmkgQ29sb3JMb29rdXAgZXhjaCBnZXQgMyBnZXQgMSBleGNoIHN1Yn0gZXhjaCBhZGRwcm9j
cyA0IDEgcm9sbAoJCQkJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9va3VwIGV4Y2ggZ2V0
IDIgZ2V0IDEgZXhjaCBzdWJ9IGV4Y2ggYWRkcHJvY3MgNCAxIHJvbGwKCQkJCQkJCXsyNTUgbXVs
IHJvdW5kIGN2aSBDb2xvckxvb2t1cCBleGNoIGdldCAxIGdldCAxIGV4Y2ggc3VifSBleGNoIGFk
ZHByb2NzIDQgMSByb2xsCgkJCQkJCQl7MjU1IG11bCByb3VuZCBjdmkgQ29sb3JMb29rdXAgZXhj
aCBnZXQgMCBnZXQgMSBleGNoIHN1Yn0gZXhjaCBhZGRwcm9jcyA0IDEgcm9sbAoJCQkJCQkJc2V0
Y29sb3J0cmFuc2ZlciAKCQkJCQkJCWN1cnJlbnRkaWN0IHRpbnRfaW1hZ2VfdG9fY29sb3IKCQkJ
CQkJfXsgCgkJCQkJCQljdXJyZW50Y29sb3J0cmFuc2ZlcgoJCQkJCQkJe3BvcCAxfSBleGNoIGFk
ZHByb2NzIDQgMSByb2xsCgkJCQkJCQl7MjU1IG11bCByb3VuZCBjdmkgQ29sb3JMb29rdXAgZXhj
aCBnZXQgMiBnZXR9IGV4Y2ggYWRkcHJvY3MgNCAxIHJvbGwKCQkJCQkJCXsyNTUgbXVsIHJvdW5k
IGN2aSBDb2xvckxvb2t1cCBleGNoIGdldCAxIGdldH0gZXhjaCBhZGRwcm9jcyA0IDEgcm9sbAoJ
CQkJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9va3VwIGV4Y2ggZ2V0IDAgZ2V0fSBleGNo
IGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCQlzZXRjb2xvcnRyYW5zZmVyIAoJCQkJCQkJY3VycmVu
dGRpY3QgdGludF9pbWFnZV90b19jb2xvcgoJCQkJCQl9aWZlbHNlCgkJCQkJfWlmZWxzZQoJCQkJ
fWlmZWxzZQoJCQl9aWZlbHNlCgkJZW5kCgl9ZGVmCgkvc2VwX2ltYWdlX2xldjFfc2VwCgl7CgkJ
YmVnaW4KCQkJL3NlcF9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0L0NvbXBvbmVudHMga25v
d257CgkJCQlDb21wb25lbnRzIGFsb2FkIHBvcAoJCQkJQWRvYmVfQUdNX0ltYWdlL0FHTUlNR19r
IHhkZGYgCgkJCQlBZG9iZV9BR01fSW1hZ2UvQUdNSU1HX3kgeGRkZiAKCQkJCUFkb2JlX0FHTV9J
bWFnZS9BR01JTUdfbSB4ZGRmIAoJCQkJQWRvYmVfQUdNX0ltYWdlL0FHTUlNR19jIHhkZGYgCgkJ
CQl7QUdNSU1HX2MgbXVsIDEgZXhjaCBzdWJ9CgkJCQl7QUdNSU1HX20gbXVsIDEgZXhjaCBzdWJ9
CgkJCQl7QUdNSU1HX3kgbXVsIDEgZXhjaCBzdWJ9CgkJCQl7QUdNSU1HX2sgbXVsIDEgZXhjaCBz
dWJ9CgkJCX17IAoJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9va3VwIGV4Y2ggZ2V0IDAg
Z2V0IDEgZXhjaCBzdWJ9CgkJCQl7MjU1IG11bCByb3VuZCBjdmkgQ29sb3JMb29rdXAgZXhjaCBn
ZXQgMSBnZXQgMSBleGNoIHN1Yn0KCQkJCXsyNTUgbXVsIHJvdW5kIGN2aSBDb2xvckxvb2t1cCBl
eGNoIGdldCAyIGdldCAxIGV4Y2ggc3VifQoJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9v
a3VwIGV4Y2ggZ2V0IDMgZ2V0IDEgZXhjaCBzdWJ9CgkJCX1pZmVsc2UKCQkJQUdNQ09SRV9nZXRf
aW5rX2RhdGEgY3VycmVudHRyYW5zZmVyIGFkZHByb2NzIHNldHRyYW5zZmVyCgkJCWN1cnJlbnRk
aWN0IGltYWdlb3JtYXNrX3N5cwoJCWVuZAoJfWRlZgoJL2luZGV4ZWRfaW1hZ2Vvcm1hc2tfbGV2
MQoJewoJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQliZWdp
bgoJCQljdXJyZW50ZGljdAoJCQlNYXBwZWRDU0EgMCBnZXQgZHVwIC9EZXZpY2VSR0IgZXEgZXhj
aCAvRGV2aWNlQ01ZSyBlcSBvciBoYXNfY29sb3Igbm90IGFuZHsKCQkJCXtIaVZhbCBtdWwgcm91
bmQgY3ZpIEdyYXlMb29rdXAgZXhjaCBnZXQgSGlWYWwgZGl2fSBjdXJyZW50dHJhbnNmZXIgYWRk
cHJvY3Mgc2V0dHJhbnNmZXIKCQkJCWltYWdlb3JtYXNrCgkJCX17CgkJCQlNYXBwZWRDU0EgMCBn
ZXQgL0RldmljZUdyYXkgZXEgewoJCQkJCXtIaVZhbCBtdWwgcm91bmQgY3ZpIExvb2t1cCBleGNo
IGdldCBIaVZhbCBkaXZ9IGN1cnJlbnR0cmFuc2ZlciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCQkJ
CWltYWdlb3JtYXNrCgkJCQl9ewoJCQkJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcSB7
CgkJCQkJCWN1cnJlbnRjb2xvcnRyYW5zZmVyCgkJCQkJCXs0IG11bCBIaVZhbCBtdWwgcm91bmQg
Y3ZpIDMgYWRkIExvb2t1cCBleGNoIGdldCBIaVZhbCBkaXYgMSBleGNoIHN1Yn0gZXhjaCBhZGRw
cm9jcyA0IDEgcm9sbAoJCQkJCQl7NCBtdWwgSGlWYWwgbXVsIHJvdW5kIGN2aSAyIGFkZCBMb29r
dXAgZXhjaCBnZXQgSGlWYWwgZGl2IDEgZXhjaCBzdWJ9IGV4Y2ggYWRkcHJvY3MgNCAxIHJvbGwK
CQkJCQkJezQgbXVsIEhpVmFsIG11bCByb3VuZCBjdmkgMSBhZGQgTG9va3VwIGV4Y2ggZ2V0IEhp
VmFsIGRpdiAxIGV4Y2ggc3VifSBleGNoIGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCXs0IG11bCBI
aVZhbCBtdWwgcm91bmQgY3ZpCQkgTG9va3VwIGV4Y2ggZ2V0IEhpVmFsIGRpdiAxIGV4Y2ggc3Vi
fSBleGNoIGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCXNldGNvbG9ydHJhbnNmZXIgCgkJCQkJCXRp
bnRfaW1hZ2VfdG9fY29sb3IKCQkJCQl9eyAKCQkJCQkJY3VycmVudGNvbG9ydHJhbnNmZXIKCQkJ
CQkJe3BvcCAxfSBleGNoIGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCXszIG11bCBIaVZhbCBtdWwg
cm91bmQgY3ZpIDIgYWRkIExvb2t1cCBleGNoIGdldCBIaVZhbCBkaXZ9IGV4Y2ggYWRkcHJvY3Mg
NCAxIHJvbGwKCQkJCQkJezMgbXVsIEhpVmFsIG11bCByb3VuZCBjdmkgMSBhZGQgTG9va3VwIGV4
Y2ggZ2V0IEhpVmFsIGRpdn0gZXhjaCBhZGRwcm9jcyA0IDEgcm9sbAoJCQkJCQl7MyBtdWwgSGlW
YWwgbXVsIHJvdW5kIGN2aSAJCUxvb2t1cCBleGNoIGdldCBIaVZhbCBkaXZ9IGV4Y2ggYWRkcHJv
Y3MgNCAxIHJvbGwKCQkJCQkJc2V0Y29sb3J0cmFuc2ZlciAKCQkJCQkJdGludF9pbWFnZV90b19j
b2xvcgoJCQkJCX1pZmVsc2UKCQkJCX1pZmVsc2UKCQkJfWlmZWxzZQoJCWVuZCBlbmQKCX1kZWYK
CS9pbmRleGVkX2ltYWdlX2xldjFfc2VwCgl7CgkJL2luZGV4ZWRfY29sb3JzcGFjZV9kaWN0IEFH
TUNPUkVfZ2dldCBiZWdpbgoJCWJlZ2luCgkJCXs0IG11bCBIaVZhbCBtdWwgcm91bmQgY3ZpCQkg
TG9va3VwIGV4Y2ggZ2V0IEhpVmFsIGRpdiAxIGV4Y2ggc3VifQoJCQl7NCBtdWwgSGlWYWwgbXVs
IHJvdW5kIGN2aSAxIGFkZCBMb29rdXAgZXhjaCBnZXQgSGlWYWwgZGl2IDEgZXhjaCBzdWJ9CgkJ
CXs0IG11bCBIaVZhbCBtdWwgcm91bmQgY3ZpIDIgYWRkIExvb2t1cCBleGNoIGdldCBIaVZhbCBk
aXYgMSBleGNoIHN1Yn0KCQkJezQgbXVsIEhpVmFsIG11bCByb3VuZCBjdmkgMyBhZGQgTG9va3Vw
IGV4Y2ggZ2V0IEhpVmFsIGRpdiAxIGV4Y2ggc3VifQoJCQlBR01DT1JFX2dldF9pbmtfZGF0YSBj
dXJyZW50dHJhbnNmZXIgYWRkcHJvY3Mgc2V0dHJhbnNmZXIKCQkJY3VycmVudGRpY3QgaW1hZ2Vv
cm1hc2tfc3lzCgkJZW5kIGVuZAoJfWRlZgp9aWYKZW5kCnN5c3RlbWRpY3QgL3NldHBhY2tpbmcg
a25vd24KewoJc2V0cGFja2luZwp9IGlmCiUlRW5kUmVzb3VyY2UKY3VycmVudGRpY3QgQWRvYmVf
QUdNX1V0aWxzIGVxIHtlbmR9IGlmCiUlRW5kUHJvbG9nCiUlQmVnaW5TZXR1cApBZG9iZV9BR01f
VXRpbHMgYmVnaW4KMiAyMDEwIEFkb2JlX0FHTV9Db3JlL2RvY19zZXR1cCBnZXQgZXhlYwpBZG9i
ZV9Db29sVHlwZV9Db3JlL2RvY19zZXR1cCBnZXQgZXhlYwpBZG9iZV9BR01fSW1hZ2UvZG9jX3Nl
dHVwIGdldCBleGVjCmN1cnJlbnRkaWN0IEFkb2JlX0FHTV9VdGlscyBlcSB7ZW5kfSBpZgolJUVu
ZFNldHVwCiUlUGFnZTogeGVuMy0xLjAuZXBzIDEKJSVFbmRQYWdlQ29tbWVudHMKJSVCZWdpblBh
Z2VTZXR1cAovY3VycmVudGRpc3RpbGxlcnBhcmFtcyB3aGVyZQp7cG9wIGN1cnJlbnRkaXN0aWxs
ZXJwYXJhbXMgL0NvcmVEaXN0VmVyc2lvbiBnZXQgNTAwMCBsdH0ge3RydWV9IGlmZWxzZQp7IHVz
ZXJkaWN0IC9BSTExX1BERk1hcms1IC9jbGVhcnRvbWFyayBsb2FkIHB1dAp1c2VyZGljdCAvQUkx
MV9SZWFkTWV0YWRhdGFfUERGTWFyazUge2ZsdXNoZmlsZSBjbGVhcnRvbWFyayB9IGJpbmQgcHV0
fQp7IHVzZXJkaWN0IC9BSTExX1BERk1hcms1IC9wZGZtYXJrIGxvYWQgcHV0CnVzZXJkaWN0IC9B
STExX1JlYWRNZXRhZGF0YV9QREZNYXJrNSB7L1BVVCBwZGZtYXJrfSBiaW5kIHB1dCB9IGlmZWxz
ZQpbL05hbWVzcGFjZVB1c2ggQUkxMV9QREZNYXJrNQpbL19vYmpkZWYge2FpX21ldGFkYXRhX3N0
cmVhbV8xMjN9IC90eXBlIC9zdHJlYW0gL09CSiBBSTExX1BERk1hcms1Clt7YWlfbWV0YWRhdGFf
c3RyZWFtXzEyM30KY3VycmVudGZpbGUgMCAoJSAgJiZlbmQgWE1QIHBhY2tldCBtYXJrZXImJikK
L1N1YkZpbGVEZWNvZGUgZmlsdGVyIEFJMTFfUmVhZE1ldGFkYXRhX1BERk1hcms1Cjw/eHBhY2tl
dCBiZWdpbj0n77u/JyBpZD0nVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkJz8+PHg6eG1wbWV0YSB4
bWxuczp4PSdhZG9iZTpuczptZXRhLycgeDp4bXB0az0nWE1QIHRvb2xraXQgMy4wLTI5LCBmcmFt
ZXdvcmsgMS42Jz4KLTxyZGY6UkRGIHhtbG5zOnJkZj0naHR0cDovL3d3dy53My5vcmcvMTk5OS8w
Mi8yMi1yZGYtc3ludGF4LW5zIycgeG1sbnM6aVg9J2h0dHA6Ly9ucy5hZG9iZS5jb20vaVgvMS4w
Lyc+Ci0KLSA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTEx
ZGEtOGYxYS0wMDBkOTNhZmViYjInCi0gIHhtbG5zOnBkZj0naHR0cDovL25zLmFkb2JlLmNvbS9w
ZGYvMS4zLyc+Ci0gIDxwZGY6UHJvZHVjZXI+QWRvYmUgUERGIGxpYnJhcnkgNi42NjwvcGRmOlBy
b2R1Y2VyPgotIDwvcmRmOkRlc2NyaXB0aW9uPgotCi0gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJv
dXQ9J3V1aWQ6YmFjZjQyMzUtZTQzNS0xMWRhLThmMWEtMDAwZDkzYWZlYmIyJwotICB4bWxuczp0
aWZmPSdodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyc+Ci0gPC9yZGY6RGVzY3JpcHRpb24+
Ci0KLSA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTExZGEt
OGYxYS0wMDBkOTNhZmViYjInCi0gIHhtbG5zOnhhcD0naHR0cDovL25zLmFkb2JlLmNvbS94YXAv
MS4wLycKLSAgeG1sbnM6eGFwR0ltZz0naHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL2cvaW1n
Lyc+Ci0gIDx4YXA6Q3JlYXRlRGF0ZT4yMDA2LTA1LTE0VDA5OjM0OjE0LTA3OjAwPC94YXA6Q3Jl
YXRlRGF0ZT4KLSAgPHhhcDpNb2RpZnlEYXRlPjIwMDYtMDYtMjZUMTg6MDM6MTlaPC94YXA6TW9k
aWZ5RGF0ZT4KLSAgPHhhcDpDcmVhdG9yVG9vbD5JbGx1c3RyYXRvcjwveGFwOkNyZWF0b3JUb29s
PgotICA8eGFwOk1ldGFkYXRhRGF0ZT4yMDA2LTA1LTE0VDA5OjM0OjE0LTA3OjAwPC94YXA6TWV0
YWRhdGFEYXRlPgotICA8eGFwOlRodW1ibmFpbHM+Ci0gICA8cmRmOkFsdD4KLSAgICA8cmRmOmxp
IHJkZjpwYXJzZVR5cGU9J1Jlc291cmNlJz4KLSAgICAgPHhhcEdJbWc6Zm9ybWF0PkpQRUc8L3hh
cEdJbWc6Zm9ybWF0PgotICAgICA8eGFwR0ltZzp3aWR0aD4yNTY8L3hhcEdJbWc6d2lkdGg+Ci0g
ICAgIDx4YXBHSW1nOmhlaWdodD4xMTI8L3hhcEdJbWc6aGVpZ2h0PgotICAgICA8eGFwR0ltZzpp
bWFnZT4vOWovNEFBUVNrWkpSZ0FCQWdFQVNBQklBQUQvN1FBc1VHaHZkRzl6YUc5d0lETXVNQUE0
UWtsTkErMEFBQUFBQUJBQVNBQUFBQUVBJiN4QTtBUUJJQUFBQUFRQUIvKzRBRGtGa2IySmxBR1RB
QUFBQUFmL2JBSVFBQmdRRUJBVUVCZ1VGQmdrR0JRWUpDd2dHQmdnTERBb0tDd29LJiN4QTtEQkFN
REF3TURBd1FEQTRQRUE4T0RCTVRGQlFURXh3Ykd4c2NIeDhmSHg4Zkh4OGZId0VIQndjTkRBMFlF
QkFZR2hVUkZSb2ZIeDhmJiN4QTtIeDhmSHg4Zkh4OGZIeDhmSHg4Zkh4OGZIeDhmSHg4Zkh4OGZI
eDhmSHg4Zkh4OGZIeDhmSHg4Zkh4OGYvOEFBRVFnQWNBRUFBd0VSJiN4QTtBQUlSQVFNUkFmL0VB
YUlBQUFBSEFRRUJBUUVBQUFBQUFBQUFBQVFGQXdJR0FRQUhDQWtLQ3dFQUFnSURBUUVCQVFFQUFB
QUFBQUFBJiN4QTtBUUFDQXdRRkJnY0lDUW9MRUFBQ0FRTURBZ1FDQmdjREJBSUdBbk1CQWdNUkJB
QUZJUkl4UVZFR0UyRWljWUVVTXBHaEJ4V3hRaVBCJiN4QTtVdEhoTXhaaThDUnlndkVsUXpSVGtx
S3lZM1BDTlVRbms2T3pOaGRVWkhURDB1SUlKb01KQ2hnWmhKUkZScVMwVnROVktCcnk0L1BFJiN4
QTsxT1QwWlhXRmxhVzF4ZFhsOVdaMmhwYW10c2JXNXZZM1IxZG5kNGVYcDdmSDErZjNPRWhZYUhp
SW1LaTR5TmpvK0NrNVNWbHBlWW1aJiN4QTtxYm5KMmVuNUtqcEtXbXA2aXBxcXVzcmE2dm9SQUFJ
Q0FRSURCUVVFQlFZRUNBTURiUUVBQWhFREJDRVNNVUVGVVJOaElnWnhnWkV5JiN4QTtvYkh3Rk1I
UjRTTkNGVkppY3ZFekpEUkRnaGFTVXlXaVk3TENCM1BTTmVKRWd4ZFVrd2dKQ2hnWkpqWkZHaWRr
ZEZVMzhxT3p3eWdwJiN4QTswK1B6aEpTa3RNVFU1UFJsZFlXVnBiWEYxZVgxUmxabWRvYVdwcmJH
MXViMlIxZG5kNGVYcDdmSDErZjNPRWhZYUhpSW1LaTR5TmpvJiN4QTsrRGxKV1dsNWlabXB1Y25a
NmZrcU9rcGFhbnFLbXFxNnl0cnErdi9hQUF3REFRQUNFUU1SQUQ4QTlVNHE3RlhZcTdGVW4xM3pa
b2VpJiN4QTtML3BzNDllbFZ0by9pbFAreDdmTnFZcTgvd0JXL05yVnB5eWFiYnBaeDlwSC9leWZQ
ZjRCOXh4VklYMVR6cHF4NWZXTHk0UnY1QzZ4JiN4QTsvY3RFeFZUL0FNSitacHZpYTFaajR2SWdQ
L0ROaXJZOHYrYXJUNG80Sm95TndZWEJQL0NNY1ZWN2Z6ZjV6MHB3ajNjNHAvdXE2QmVvJiN4QTs4
UDNvSkgwWXF5dlJmemNpZGxpMWkxOU91eHViZXBYNW1NMVAzRS9MRldmYWZxVmhxTnVMbXhuUzRo
UDdhR3RENEVkUWZZNHFpY1ZkJiN4QTtpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZkaXJzVmRpcnNW
ZGlyc1ZkaXJzVmRpcnNWZGlyc1ZlY2VjdnpMTWJTYWZvYmd1S3JOZmRRJiN4QTtEM0VYai9yZmQ0
NHF4RFNmTE9xNnpJYnFabWpoa1BKcm1Xck01N2xRZDIrZUtzejAzeXJvMWdBVmdFMG8vd0IyelVj
MTloMEgwREZVJiN4QTszeFYyS3V4VlpMREZNaGpsUlpFUFZIQVlINkRpckhkVjhqYWJjaG5zejlV
bTZnRGVNbjNYdDlHS3NXams4dytWdFJFa2JOYnkvd0F3JiN4QTsrS0tWUjJQWmg4K21LdlZQS0hu
ZXgxK0wwbkF0OVNRVmt0NjdNUDVvNjlSN2R2eHhWazJLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYy
JiN4QTtLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYyS3ZOUHpJODZ1SGswUFRwT05QaHZwMU81OFln
ZitKZmQ0NHFrM2xUeWtzeXBmNmd0WWp2JiN4QTtCYmtmYUhabTl2QWQ4Vlp1QUFBQUtBYkFERlc4
VmRpcnNWZGlyc1ZkaXFoZTJOcmUyN1c5ekdKSW43SHFENGc5amlyem5XZEh2dkwrJiN4QTtveHoy
OGpDTU56dGJsZGlDdTlEL0FKUS9IRlhxM2tyelpGNWcwNnNsRTFDQ2d1b2hzRDRPdnMzNFlxeVBG
WFlxN0ZYWXE3RlhZcTdGJiN4QTtYWXE3RlhZcTdGWFlxN0ZYWXE3RlhZcTdGWFlxN0ZXUCtkL01Z
MFBSSkpveVByay83cTFIZ3hHNy93Q3hHL3pwaXJ4YVB5UEo1eHNMJiN4QTs2MWx2SjdGU2hLWDhE
TXNxM0JCS01DQ09RQjNZVjNHM2V1S3ZtZnpaZi9tYjVWMSs3MExWdGIxS0s4dEc0a2k4dU9Eb2Qw
a2pKWVZSJiN4QTtodU1WU2ovSFBuYi9BS21EVXY4QXBNbi9BT2E4VmQvam56dC8xTUdwZjlKay93
RHpYaXFjK1dmemgvTVBRdFlzdFFYWGIrOWd0SkF6JiN4QTs2ZmRYVTh0dktoMmRIamRtWDRnVHZT
b080M3hWOXRlVVBOZWtlYS9MMW5ydWxTYzdTN1RseFAyNDNHenh1T3pJMngvcGlxYzRxN0ZYJiN4
QTtsLzV2L25ub2ZrU0I3QzA0YWo1bWRmM2RpRDhFSVlWRWx3UjBGTndnK0krdzN4VjhtNnArWm41
Z2FucUU5L2RlWUw4VDNERjNXSzRsJiN4QTtoakhza2NiS2lxT3dBeFZCUytjL09FeWNKdGQxQ1JP
dkY3dWRoWDVGOFZXMi9uRHpiYk9aTGJXNytDUWloZU82bVEwOEtxd3hWRzIvJiN4QTtucjh4cm1l
TzN0L01Pc1RYRXpCSW9ZN3k2ZDNkalJWVlE1SkpQUURGWDFiK1IvNUkrZkxjMi9tTDh3dk1HcVBP
T010bjVlK3YzQlZDJiN4QTtOMWE3SWVqSC9pb2JmelYzVUt2b1BGWFlxN0ZYWXE3RlhZcTdGWFlx
N0ZYWXE3RlhZcTdGWFlxN0ZYWXE3RlhqWDVtNnU5LzVrTm9oJiN4QTs1UTJDaUZGSGVScU01K2Rh
TDlHS3NxMExUVjA3UzRMVUQ0d09VeDhYYmR2Nllxd1A4Ny95anR2UHVnZXRacWtYbVRUMUxhZmNH
ZzlSJiN4QTtlcHQ1Ry9sYjlrL3N0N0U0cStKN3UxdWJPNm10THFKb0xtM2RvcDRaQVZkSFE4V1Zn
ZWhCR0txV0t1eFY2cCtRZjV0UDVJOHcvVWRSJiN4QTtsUDhBaHZWSEMzaW5jUVM3S3R3QjdkSHAx
WC9WR0t2dEpaRWRCSWpCbzJISlhCcUNEdUNEaXI1Ky9Pai9BSnlSZzAzMS9MM2txWlo5JiN4QTtS
Rlk3dldWbzBjSkd4U0N0VmQvRi9zanRVOUZYeTljWEZ4YzNFbHhjeXZOY1RNWkpwcEdMdTdzYXN6
TWFra25xVGlxbmlyc1ZUUFFQJiN4QTtMbXM2L2ZDejBxMmU0bEFEU3NCOEVhRWdjNUg2S3RUMVB5
NjRxK3hmK2NhUHlyOHFlWFlMcS9taFM5ODFRa0JyK1FWRVVVaTA0MjZuJiN4QTs3RzRZTS8yaVBB
R21LdmZNVmRpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZXU3lKRkU4cm1p
UnFXWSt3RlRpJiN4QTtyd2pRMWZWUE5FVXMyN1N6UGN5MThRVElhL000cTlPeFYyS3ZBZjhBbkpI
OG1mMHZheStjOUFnLzNLMnFWMWExakc5eENnL3ZsQTZ5JiN4QTtSZ2IvQU15KzY3cXZsWEZYWXE3
RldjeS9uTjU3ZnlKYitTMXZURnBzSEtOcDBxTGg3Yy9adDJrci9kcnZzTzN3L1oyeFZnMkt1eFYy
JiN4QTtLdlJ2eXAvSlR6SjUrdVZ1RkJzUEw4YjhialU1RjJhaCtKSUZOT2IvQUlEdjRZcTkxODBl
YnZ5bS9LSHl6UDVUMHVINnpxa2tmN3l6JiN4QTtnS3ZjTktSVlpieWNpaW51QjFwOWxlT0t2RDV2
K2NndnpFaG1uYlE3dE5Gam5YMDNGdWl2SVVxRHZKS0hvYWpxb1hGV0thdCtZUG56JiN4QTtWMkxh
cDVpMUs4NWJjWnJ1WjFvZXdVdHhBOWdNVlNLV2FhWnpKTTdTU0hxN2tzZHZjNHFpTFBWdFZzaURa
WGs5c1ZxVk1NcngwSjYwJiN4QTs0a1lxelh5MytmZjV1K1g1Rk5sNW12SjRsTzhGOC8xeU1qK1ds
eDZoVWY2cEdLdnBQOG0vK2NyZEo4MTMxdm9IbXkzajBqVzdobGl0JiN4QTtMdUV0OVR1SldORlNq
Rm1oZGlmaEJZZytJTkJpcjZCeFY4UGZuTCtldjVrV241bytaTEhRL01OMVk2WFkzaldrRnJDeThF
TnNvaGtwJiN4QTtWZThpTVQ3NHF3ei9BSlg5K2NuL0FGTmw5L3dTL3dETk9LdS81WDkrY24vVTJY
My9BQVMvODA0cTl4MHovbktDTHloK1ZtaXg2aGNQJiN4QTs1bzgrWDhVbHpjUnlTRDA3ZEpabjlE
NnhJdGFIMHVCRWFpdmp4cUNWWGlYbTc4Ly9BTTJmTkUwalhtdjNGbGJPVFN5MDVtdElWVS9zJiN4
QTswaUlkeC9yczJLc0J1Ynk3dTVUTGRUeVR5bXRaSldaMjNOVHV4SjY0cW1taCtkZk9HZ3lwTG91
dFh1bnNsT0l0N2lTTmRoUUFxRzRrJiN4QTtVN0VZcStxUCtjZC8rY2x0UjgwYXRENVE4NUZIMWE0
Qi9SbXFvaXhpZGtVczBVeUxSRmZpUGhaUUFlbEswcXF6WC9uSTdYUFBubGZ5JiN4QTtkSjVyOHIr
WW0wcExBeFF6YWNMUzF1Rm5hZVpVRG1XZEpHVGlHNkFiNHE4NDg4K2Yvd0E3TkI4ai9sOWVhYjVy
YS8xdnpxVW5GZFBzJiN4QTtJK0gxbTN0bWl0bEhwTXJjWkptK09nSnJpckRkZS81eVovTnZVZERz
ZFM4dmFwOVJnMHF4dExiekJNYlcwY3phbE04dzVqMUlYVmZVJiN4QTtqaDVCVW9vM3hWOWllWkpQ
VDh1Nm80TkN0cE9RVDQrbTFNVmVSZVFFRGEzSVQreEF4SHo1S1A0NHE5RHhWMkt1eFY4ay93RE9S
ZjVNJiN4QTtmNGR2bjgxNkRCVFFieVQvQUU2MmpIdzJrN25xQU9rVWhPM1pXMjZGY1ZlRzRxN0ZY
WXE3RlcxVm1ZS29KWW1nQTNKSnhWOUNmazkvJiN4QTt6alBjYWdJTmQ4OFJ0YjJKcEpiYUxVck5L
T29hNElvWTEveUI4Ujc4ZTZyMFA4K2Z6UWovQUM5OHRXdWhlWFZqdHRadjR6SFpyRXFxJiN4QTts
cGFwOEprVkFLQS9zeGlsT3AvWm9WWHgxTk5OUE04MHp0TE5LeGVTUnlXWm1ZMUxNVHVTVDFPS3JN
VmU5LzhBT0wvNUhhWjU0dTdyJiN4QTt6SjVqaU0zbC9USlJCQlpWS3JjM1hFT3djZ2crbkVyS1NQ
MmlSMnFDcSt3b1BLUGxTM3NQMGZCb3RqRllVNC9WRXRvVmlwNGNBdkh0JiN4QTs0WXErSC84QW5L
SHlQNWY4b2ZtZWJYUVlFdExIVUxLTFVEWlJiUnd5U1NTeE9pTCt5cDlIa0Y2Q3UyMU1WZVJZcTJD
UWFqWWpGWDZHJiN4QTsvazE1OGZXL3lZMG56UHE4cGFhMXM1bDFLZC90TWJGbmplUmllcGRZdVor
ZUt2ejcxWFVKOVMxTzgxR2YrL3ZaNUxpWHY4Y3JsMi9FJiN4QTs0cWhjVmRpcjNQOEFLci9uRkx6
VjV6MGVEWGRXdjAwRFNidFJKWkJvalBjelJuY1NDTGxFcUk0K3l4YXA2OGFVSlZhL09iL25HRy8v
JiN4QTtBQys4dG56Slk2d05YMHlHU09LOGplRDZ2TEY2cDRJNG84aXVwY2hUMElxT3UrS3ZEY1Zk
aXJKZnkwa3VJL3pHOHJQYmxoT05Yc2ZUJiN4QTs0YnR5TnlnQUE3MXhWK2tlcmFOcEdzV0wyR3Iy
TnZxTmpJVk1scGR4SlBFeFU4bEpqa0RLYUVWRzJLb2VieXQ1WW5UVFVuMGl5bFRSJiN4QTtpaDBk
WHRvbUZtWStQcG0yQlg5enc0THg0VXBRZUdLb0wvbFhmNWYvQUZPYXkvd3hwUDFLNWxXZTR0dnFO
dDZVa3FBaFpIVGh4WjFEJiN4QTt0UmlLN25GVXc4eHhtVHk5cWlLS3MxcE9GSHVZMnBpcnlIeUE0
WFc1RlA3Y0RBZk1NcC9oaXIwUEZYWXFwWE56Yld0dkpjM01xUVc4JiN4QTtLbDVwcEdDSWlxS2xt
WnFBQUR1Y1ZmTC9BT2RuL09Sc2VzV3Q1NVg4b2hXMHVkV2d2OVZrU3BtUnRtU0JISHdvUisyUlU5
cWRTcStmJiN4QTtNVmRpcnNWZGlyNmcvd0NjWHZJZmtHNjBmL0ZITWFsNWt0NURITkJPb0FzV3Fl
QmpqcWFsMUhJU241Q2hEWXEraU1WZkN2NTdlWUp0JiN4QTtiL05UWDVYWW1PeXVHMCtCRDBSTFQ5
MHdIemtWbStuRldBNHE3RlV3dG90ZmppSDFaTHRJVytKZlRFZ1UxSFVjZHQ4VlZmOEFuYWYrJiN4
QTtYNy9rdGlxaE5ZNjVPL09hM3VaWHBUazZTTWFmTWpGVm42SjFYL2xqbi81RnYvVEZYZm9uVmY4
QWxqbi9BT1JiL3dCTVZmVjBkeGNlJiN4QTtUUDhBbkRMalB5aXZ0V3RwWUVpZllrYW5kc0NvSC9N
TTViRlh5Tmlyc1ZaVCtWM2xkUE5YNWgrWDlBa1V2YjMxNUd0MG82bTNRK3BQJiN4QTswLzRxUnNW
ZnBUSEhIRkdzY2FoSTBBVkVVQUtxZ1VBQUhRREZYaG4vQURtTHI2NmYrVkthWUcvZTZ6ZndRbFAr
SzRLM0RINkhqVDc4JiN4QTtWZkVHS3V4VjZuL3pqSjVkR3Qvbk5vUWRPY0dtbVhVWnZiNnVoTVIv
NUhHUEZYNkE0cTdGWFlxdGxqU1NONDNGVWNGV0hzUlE0cThIJiN4QTswVm4wcnpSSEZOc1laMnRw
YTdkU1l6OXgzeFY2ZGlxU2ViL09mbDN5am84bXJhN2RyYld5YlJwMWtsZnRIRW5WMlA4QWFkc1Zm
SG41JiN4QTtzL25oNWk4K1hEMmtaYlR2TGlOV0hUVWJlVGlhcTl3dysyM2ZqOWxlMis1VmVhNHE3
RlhZcTdGWFlxeTM4c3Z6RDFYeUo1bmcxaXpyJiN4QTtKYk5TTFViS3RGbmdKK0pmOVplcUhzZmF1
S3Z1dnkvcjJsZVlOR3ROWjBxY1hGaGV4aVdDVWVCMktzT3pLZG1IWTdZcStGZnphMFc1JiN4QTsw
Yjh5dk1kbE9oU3QvUFBEeTd3M0RtYUp2ZXFPTVZZamlyc1ZmcForV0d1NlByZjVmNkJmYVJNa3Rt
YkczaW9oSDd0NG9sUjRtQSt5JiN4QTswYkRpUmlxMzh4L3pJOHRlUVBMczJzNjFNQVFDdG5aS3c5
ZTVsN1J4S2Y4QWhtNktOemlyeFQvb2Q3eXQvd0JTMWZmOGpvY1ZkLzBPJiN4QTs5NVcvNmxxKy93
Q1IwT0twMTVOLzV5eTB2emI1bjAveTlwWGxhK2U4MUNaWWxiMW9pc2E5WGxlbjdNYUFzM3NNVlkv
L0FNNXUrWXZTJiN4QTswVHkzNWNqZmU3dUpyK2RCMlczUVJSMTltTTcvQUhZcStSc1ZkaXIzei9u
RFh5NytrUHpOdXRZa1NzV2kyRWp4di9MUGNzSVVIMHhHJiN4QTtYRlgyeGlyNDgvNXpaOHhmV2ZO
MmcrWDBlcWFiWnZkeXFPZ2t1NU9ORDdoTGNINmNWZk4yS3V4VjlSLzg0UStYZVY3NW04eVNKL2RS
JiN4QTt3YWRiU2VQcU1acGg5SHB4WXErc2NWZGlyc1ZkaXJ4ejh6OUlheDh4RzhSYVEzNmlWU09n
a1dpdVAxTjlPS3BENTUvUHp5MzVTMENGJiN4QTttSXYvQURITkhTUFM0Mm9WY2JlcE93cjZhSHFP
NTdlSVZmSmZuUHp4NWw4NDZ3K3E2OWRtNG1OUkRDUGhoaFFtdnB3cDBWZnhQVWtuJiN4QTtmRlVn
eFYyS3JvNDNrZFk0MUx5T1FxSW9xU1RzQUFNVmU1K1ZmK2NXUE1lbytVcnpVOVhuT25hMUxCejBm
U2pTdk1mRUJkRS9ZNWpiJiN4QTtpTjFyVnVoWEZYaDkxYlhGcmN5MnR6RzBOeEE3UlRST0tNam9l
TEt3UFFnaWh4VlN4VjJLdlgvK2NmZnpnYnliclA2RjFlYW5sblU1JiN4QTtCemRqdGF6dFJSTVA4
aHRoSjkvYmRWNzcrY1A1TGFSK1lkbkZlVzhxV1htQzJUamEzOU9VY2tmMmhGTngzSzFOVllicjc5
TVZmSjNtJiN4QTs3OHIvQUR2NVR2SHR0WDAyUUJSeUZ6QisvaFpha0J1YVY0MXAwYWg5c1ZZcGlx
WWFWNWcxN1NDNTBuVXJyVHpKL2VHMW5raDVVL205JiN4QTtObHJpcWhmNmxxT28zTFhXb1hVMTVj
dHMwOXhJMHNoSHV6a25GWFdHbWFqcU00dDlQdFpyeTRiN01Odkcwcm12K1NnSnhWNm41TC81JiN4
QTt4ZS9ObnpKTEcxenB2NkJzV29YdXRUckM0SFdndHhXYXZ6VUQzeFY5WmZsSCtSM2xMOHRyTm1z
QTE5cmR3bkM4MWVjQVNNdXhNY1NpJiN4QTtvampxSzhRU1QzSm9NVmZOWC9PVjYrWXRlL05xZUsw
MDI3dUxQU2JTM3M0WllvSlhqWXNwdUhJWlZvZmluNG41WXE4Yi93QUorYWYrJiN4QTtyTmZmOUkw
My9OT0t1L3duNXAvNnMxOS8walRmODA0cSt2UCtjTnZLTjVvL2t2V3RWdjdXUzB1OVV2bGlFY3lN
am1HMGorQnVMQUg3JiN4QTtjMG1Ldm9QRlh3QitmY1BtYnpIK2JubVRVSWRMdlpiV081K3AyenJi
eXNoanRGRUFaQ0ZvVll4bGg4OFZlZjhBK0UvTlAvVm12djhBJiN4QTtwR20vNXB4VjMrRS9OUDhB
MVpyNy9wR20vd0NhY1ZmY1AvT0xIbFc0OHY4QTVSV1J1NEd0NzNWTGk0dnJpS1JTanJWdlJqNUE3
N3h3JiN4QTtxZnB4VjY5aXJzVmRpclJJVUVrMEEzSlBRREZYeTEvemtWL3prYm9Nc0xlV1BLQlMv
dm9KS3o2NHBEUVFzQVZaTGM3aVZ0OTIreU8zJiN4QTtMc3ErVmJpNG51Sm5udUpHbW5sSmFTVnlX
Wm1QVWtuY25GVlBGWFlxbVhsM3k1cmZtUFZvTkowVzBlOHY3ZzBTS01kQjNaaWRsVmU3JiN4QTtI
WVlxK3ZmeWYvSVBSUEpNY2VxYXB3MUx6TXdyOVlJckRiVkc2d0EvdGVNaDM4S2IxVmVzNHErWi93
RG5LUDhBS29Sdi9qdlNJYUk1JiN4QTtXUFhZVUhSalJZN21nOGRrZjNvZTVPS3ZtL0ZYWXE3Rlgw
ei9BTTQ4Zm5oRU5ML3dqNWptSnVMS01uUkxoalV5eHFQOTVTZjVrL1lQJiN4QTs4dTNZVlZlaDZU
cDk5NXA4eGlOaWF6dDZseElPa2NTOWFmSWJMaXIyQzc4bitVN3lPT085MGF4dTBpVlVRWEZ0RkxS
VkZGSHhxM1FZJiN4QTtxbG4vQUNxZjhyUCtwTjBQL3VHMm4vVlBGVmUxL0xYOHViUnVWcjVWMGUz
YW9ibEZZV3FHbzZINFl4MHhWUHJXenRMU0lRMnNFZHZDJiN4QTtPa2NTcWlqYW5SUUIyeFZXeFYy
S3V4VjJLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYyS3Ztajg3N24vbklmenFaOUM4dWVWN3pTdkt4
JiN4QTtxa3A5ZTJXNXZGNkgxU3Mzd1JuL0FIMkR2KzBUMENydzMvb1cvd0RPMy9xVnAvOEFrZGJm
OVZjVlVybi9BSngzL09pMmdlZWJ5dmNDJiN4QTtLTUZuSWt0MklBNm1peUU0cWtpL2xaNStaZ28w
aDZuWVZraEErOHZpckp2THYvT09INW82cnFkdmJYZW5EUzdLVDRwdFFua2llTkU2JiN4QTsxQ3h1
ek9UK3lCMThRTjhWZlZuNWUvbHI1WThpYVQ5UjBlQ3M4Z0J2TCtRQXp6c083dDJVZnNxTmg4Nm5G
V1Y0cTdGVkc5czdXOXM1JiN4QTs3TzdpV2UxdVkyaG5oY1ZWNDNCVmxZZUJCcGlyNDY4Ni93RE9P
SG4vQUU3ekxlMjNsN1M1TlQwWG56c0xwWklRZlNmY0k0ZDFQSlBzJiN4QTtrMDM2OThWU1AvbFFQ
NXYvQVBVdHpmOEFJMjMvQU9xbUtxRjUrUi81cVdjUHJYV2dTUXg5T1RUVys1OEFQVTN4VkQ2ZitW
WDVseVhzJiN4QTtDMkdqem04NWcyL3BQSHo1ZzFCV2o5dXVLdnQ3OGp0Tzh3V25rNVQ1bTBlVFN2
TWZNeDM1bGFKeE1FL3U1SXpFemdJVk80UDdWZTFNJiN4QTtWZWlZcTdGWFlxN0ZYWXE3RlhZcTdG
WFlxN0ZYWXE3RlhZcTdGWFlxN0ZYWXE3RlhZcTdGWFlxODg4NS9scUxsNU5RMFJRa3pWYWF5JiN4
QTs2S3g2a3g5Z2Y4bnBpckN0TDh4YXZvY3h0WlVab296U1MwbXFwVSsxZDF4Vm1XbStiZEd2Z0Y5
YjZ2TWVzYzN3Nyt6ZlpQMzRxbklJJiN4QTtJQkJxRDBJeFZ2RlhZcXB6M0Z2YnhtU2VSWW94MVoy
Q2o3emlyRzlWODkyRnVDbGl2MXFYK2MxV01IOVorajc4Vll6YjIzbUx6VnFQJiN4QTtHTld1SkIx
YjdNVVNueFBSUitKOThWZXIrVXZKZGg1Zmc1N1hHb1NDa3R5UjBIOHNmZ3Y2OFZaSGlyc1ZkaXJz
VmRpcnNWZGlyc1ZkJiN4QTtpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZkaXJzVmRpcnNWZGlxVmE1
NVkwWFdvK04vYmhwQUtKT253eXI4bUg2anRpckFOVy9LUFVZJiN4QTttTDZYZEpjeDlvcHYzY2c5
cWlxbjhNVlk4K2dlZGRLSkMydDVDQjFNSEowKytJc3VLcWY2ZTgyUmZBMDA0STdNbFQrSzF4VnNh
bjV3JiN4QTt1L2hqZTdrSjJwRWpBLzhBQ0FZcWlMWHlQNXkxS1FQSmFTcFhyTGROd0krWWM4L3d4
Vmx1amZsSGFSRlpOWHVUY01OemJ3VlJQcGMvJiN4QTtFZm9BeFZuZGpZV1ZqYnJiV2NLUVFMMFJC
UWZNK0o5OFZSR0t1eFYyS3V4VjJLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYyS3V4VjJLJiN4QTt1
eFYyS3V4Vi85az08L3hhcEdJbWc6aW1hZ2U+Ci0gICAgPC9yZGY6bGk+Ci0gICA8L3JkZjpBbHQ+
Ci0gIDwveGFwOlRodW1ibmFpbHM+Ci0gPC9yZGY6RGVzY3JpcHRpb24+Ci0KLSA8cmRmOkRlc2Ny
aXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTExZGEtOGYxYS0wMDBkOTNhZmVi
YjInCi0gIHhtbG5zOnhhcE1NPSdodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vJz4KLSAg
PHhhcE1NOkRvY3VtZW50SUQ+dXVpZDo2NWFkNGUwZS1lMzY3LTExZGEtOGYxYS0wMDBkOTNhZmVi
YjI8L3hhcE1NOkRvY3VtZW50SUQ+Ci0gPC9yZGY6RGVzY3JpcHRpb24+Ci0KLSA8cmRmOkRlc2Ny
aXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTExZGEtOGYxYS0wMDBkOTNhZmVi
YjInCi0gIHhtbG5zOmRjPSdodHRwOi8vcHVybC5vcmcvZGMvZWxlbWVudHMvMS4xLyc+Ci0gIDxk
Yzpmb3JtYXQ+YXBwbGljYXRpb24vcG9zdHNjcmlwdDwvZGM6Zm9ybWF0PgotIDwvcmRmOkRlc2Ny
aXB0aW9uPgotCi08L3JkZjpSREY+Ci08L3g6eG1wbWV0YT4KLSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPD94cGFja2V0IGVuZD0ndyc/PgolICAm
JmVuZCBYTVAgcGFja2V0IG1hcmtlciYmClt7YWlfbWV0YWRhdGFfc3RyZWFtXzEyM30KPDwvVHlw
ZSAvTWV0YWRhdGEgL1N1YnR5cGUgL1hNTD4+Ci9QVVQgQUkxMV9QREZNYXJrNQpbL0RvY3VtZW50
CjEgZGljdCBiZWdpbiAvTWV0YWRhdGEge2FpX21ldGFkYXRhX3N0cmVhbV8xMjN9IGRlZgpjdXJy
ZW50ZGljdCBlbmQgL0JEQyBBSTExX1BERk1hcms1CkFkb2JlX0FHTV9VdGlscyBiZWdpbgpBZG9i
ZV9BR01fQ29yZS9wYWdlX3NldHVwIGdldCBleGVjCkFkb2JlX0Nvb2xUeXBlX0NvcmUvcGFnZV9z
ZXR1cCBnZXQgZXhlYwpBZG9iZV9BR01fSW1hZ2UvcGFnZV9zZXR1cCBnZXQgZXhlYwolJUVuZFBh
Z2VTZXR1cApBZG9iZV9BR01fQ29yZS9BR01DT1JFX3NhdmUgc2F2ZSBkZGYKMSAtMSBzY2FsZSAw
IC05My41MTk2IHRyYW5zbGF0ZQpbMSAwIDAgMSAwIDAgXSAgY29uY2F0CiUgcGFnZSBjbGlwCmdz
YXZlCm5ld3BhdGgKZ3NhdmUgJSBQU0dTdGF0ZQowIDAgbW8KMCA5My41MTk2IGxpCjIxNC4xNjUg
OTMuNTE5NiBsaQoyMTQuMTY1IDAgbGkKY2xwClsxIDAgMCAxIDAgMCBdIGNvbmNhdAo4LjI1ODc5
IDQ2Ljc1NzkgbW8KOC4yNTg3OSAyMi4zMTY1IDI4LjA3ODIgMi41IDUyLjUyMSAyLjUgY3YKNzYu
OTYzNCAyLjUgOTYuNzc2OSAyMi4zMTY1IDk2Ljc3NjkgNDYuNzU3OSBjdgo5Ni43NzY5IDcxLjIw
MzIgNzYuOTYzNCA5MS4wMTk2IDUyLjUyMSA5MS4wMTk2IGN2CjI4LjA3ODIgOTEuMDE5NiA4LjI1
ODc5IDcxLjIwMzIgOC4yNTg3OSA0Ni43NTc5IGN2CmZhbHNlIHNvcAovMCAKWy9EZXZpY2VHcmF5
XSBhZGRfY3NhCjAuODcwNiBncnkKZgo1IGx3CjAgbGMKMCBsago0IG1sCltdIDAgZHNoCnRydWUg
c2Fkago4LjI1ODc5IDQ2Ljc1NzkgbW8KOC4yNTg3OSAyMi4zMTY1IDI4LjA3ODIgMi41IDUyLjUy
MSAyLjUgY3YKNzYuOTYzNCAyLjUgOTYuNzc2OSAyMi4zMTY1IDk2Ljc3NjkgNDYuNzU3OSBjdgo5
Ni43NzY5IDcxLjIwMzIgNzYuOTYzNCA5MS4wMTk2IDUyLjUyMSA5MS4wMTk2IGN2CjI4LjA3ODIg
OTEuMDE5NiA4LjI1ODc5IDcxLjIwMzIgOC4yNTg3OSA0Ni43NTc5IGN2CmNwCjAuNTY0NyBncnkK
QAoxMTYuMTE2IDQ3LjEwNTUgbW8KMTE3LjA3NSA0Mi45OTgxIDExNS41NTUgNDAuMjc5MyAxMTAu
ODk2IDQwLjI3OTMgY3YKMTA2LjUxNiA0MC4yNzkzIDEwMy40NiA0Mi45MzU2IDEwMi40ODMgNDcu
MTA1NSBjdgoxMTYuMTE2IDQ3LjEwNTUgbGkKY3AKMTAxLjA2MyA1My4xNyBtbwo5OS44MDUyIDU4
LjU0MTEgMTAxLjU5NSA2MS4wMDQgMTA2LjI1NiA2MS4wMDQgY3YKMTEwLjIyIDYxLjAwNCAxMTIu
MjA1IDU5LjM1OTQgMTEzLjIzMyA1Ny4zMzc5IGN2CjEzMy4yNjYgNTcuMzM3OSBsaQoxMzEuMzk3
IDYyLjY0NjUgMTIzLjA1IDY3LjcwMTIgMTA1LjAzOCA2Ny43MDEyIGN2Cjg4LjY5MSA2Ny43MDEy
IDc4Ljc2OTEgNjIuODM0IDgxLjU3OTYgNTAuODMyMSBjdgo4NC40NjA1IDM4LjUxMzcgOTcuMDIy
IDMzLjU4NiAxMTIuNDY2IDMzLjU4NiBjdgoxMjUuODIgMzMuNTg2IDEzOC40MSAzNy40Mzk1IDEz
NS4xMjcgNTEuNDYyOSBjdgoxMzQuNzI4IDUzLjE3IGxpCjEwMS4wNjMgNTMuMTcgbGkKLzEgClsv
RGV2aWNlQ01ZS10gYWRkX2NzYQowIDAgMCAxIGNteWsKZgoxMzkuODcxIDQ3LjIzMjUgbW8KMTQw
Ljg2IDQyLjk5ODEgMTQxLjc2NiAzOC44MjgyIDE0Mi4zNjUgMzQuNzg3MiBjdgoxNjIuNTM2IDM0
[base64-encoded binary attachment omitted: Adobe Illustrator EPS image "xen.eps" (Creator: Adobe Illustrator(R) 11.0)]
KS4ybSddPmJfOyhvdUwxUVlRKCUqZ2dpLUQpSXIkRlRKUzIKJUdXJ2xfN15pQl5OP05WJydrIyFc
PytyK0RkQT5mWiIlUEl1ay5OcFhhV0ZXS2hbKlo/L0cvX0RZazVzWWI9XitPYzdWUjwxRS0wWWx1
Jm88TyZZaithNyQxcm1GKUtmPGFqPCgKJXFvWFdqYk1uU2BnQTQxaVFcMV9qOXRhZkM2cG5fVEdy
YSRaQ209S2srXSJZTS5saXFyKktkQVtmQCk9QFBTSDJpakkyT04/LkgiIT1tKGVuZFo8V2tRPjVn
MjdyTVcwRDJaL0cKJW5bQytMW2AuLjRwdEhTYXFxXjg0ZDRALikuL0YkaGtBWiFdPitdYSNJWSJY
RlJtYUQvQGVlRF5XL0Y7Kmx1WyE6bGpCbXRBZ1liXzg/Q2NOVlUvIUM5JnQ5Sio1SGVrPSduLyEK
JWgmOVxgSkk6IWw1ci5oPWVSOi1zMDI5Z0UhMCVuUE88dTFfJmt0cmpRM1FPUWdtcEFoMmxSLDss
J1VYNidEKi5IQlpda2FHcWddMCcwUihubEgoQiE5KFFIZTwpN0pxS2ZpLyIKJWpiWWBUM1FaV2k9
WURYbDcyO1onZCUkR2RCWjtMM2AxQEtaMGFjWk9UVUhKIjE/UlBXKWNUUE5TTTprW1okPTJsVSIz
XGNvaVhZK0ZLMSlOVGQ/ODxTM14rcUxwbDpiZjk9MFsKJVpBQ1c3OmZQblNibk5eakJnaUVYXWJH
UyhmOiQkMWBgK2FRJV0ibEouI2YiVkFkPGYuXTxKMiFlc29uWVluSlx1RnNkU2Q3clxYZHFSISJa
VUBiOlgqXnFFUGpsRTBXRUEiIjMKJStQZFQlLElwOCxPUG4+LCdyYypjbyxhMytoQ05ZQWFYQkxV
WTA+Xk9dKl1jYHAoM3Q+LjQlOC44T0pZSj5SamlGJWEpIlpudXNYMlExSydNL01gKFpVTypGYF0h
XDNDZi5gbiYKJSRiQ1pvJz11R0xqXEFMMUZUUlJmPk0uXjMiLk02US0uWHJZWGcpZURRYU1iNy9F
Ok9jOlRXVTVoY0QnNChWQlVKNDhsPllyJEdhYm1ZS01YKEhlZS5EclUkSU43YWdqTEQ7Z0sKJWkv
IydtWGVfWGdMLkUwLE1cZVtoOnIsSUUkQlpYSSUmJmQ7WiJdRShKZ0ZQSFlDWF8zTksoMEdjdS5I
Vjo0WW05UFcjQVRnJVZkbFBPZkJGQ11QTSVfX1lrNTVZTCQjPGwxYFsKJT1WVDJzb2puUFxTUnE7
K2osaEVPLSdaXmRnITJvWlwwPnNkPDBRKGJKUHQ7T2BIVGFlKTlsYk80bGVgWUhLUmw5IV9tckEo
TC8+JSldSVhpXDBxZChjPi0tQG1fWVZUaU4scGoKJScxLnJqQDY9K1FUVj9DX2pPP040MEZaMGdT
PiVTZHBrK2YhOTZcR2xANlNxZU1LRlVuISYkRXNJcTg5Mj5rYmduTUg0dF5QTU41O0hKSUZbRidF
LkZSTTFJPj0kKENoV18rPmsKJU9EPCVNLWVcRUpFSF0tWjpNPSciKU5dKCkxLGVGPTZMRSVIJEdi
RGNiWCpZV1UyN1FkTCRWVCwpVGMmNGs+ckxJSjteQ088WDBdK19QOWpnJTM9MDlhX25jXUxAXFE9
US46OU0KJUxoMEYiamkxTlFIZ2QxXU1CZkcnK0tHYCZjWUZmbCMiZEZoXFhedF9LSllAQVo6Z1Ar
QUs0KkJTJjxsajBGbGNvPE9BO2tROj9JQGBuWCIpV10+MDBCbU9rTS5uT1pjcEY1TjQKJSFuRCtk
V09sVydGXVN1c24kWiFgXG9GZzpDM1FINjtRN1BYY1EzPFVdcC1lNTlWK201VFVzSGlWOyk1JT8i
LTkzKHFNYktJdEgoZERvWCQ9PitCZjNibVBoY1FbZnNQJHBWVjwKJUZEcVlQZ0pfSTteSmY4XDIx
O2xXbmpTZ0dmLkEqRkdmcj83OGxAckJyT0ZRLTBlOjAlPlplaGMzMVZWSTouN0UwNTRqUytUJlFL
VCJWcFZYNz1BJj5cVy1OaDNZIVc3ZCxvTiEKJSUyTnFZNVsyMFwtMmJfbjg7a1lyUFVkZF9LY209
MWRFYW9SZ1BNWVokTF1oZDUxbVk6bnFLLUlXL08lPjw/cXIxOCRFX1w3MSJ0b0FGVnQ2JTZBIitX
JFNgJmAhYjp0XzdibUkKJUFFc0NMYzg4KT1nRlNrZmReJz5uN2cnbUE7bCJ0bzIlKCctNilbYk8/
cS0pXGJgSmVtUTwqJHNuRlhcYFtpbEg1TChLZzRQViZTPS9TcF5iMVxEPzlJOC5jRSMsaTRXUGEq
SmQKJXE+bzU5IkszdXE2YFZUTFg7UiVfUCEkIXEzXGI2cFFfTDNyLFVdV1UzRUw/VjVlai4qSyJR
TFBrczYyOl4oZlxsNWM7SS89T1RsSUk3TV9yYyNhWkxGOlxrXGheJWMiUEdgPTAKJSxec2N1Tj5H
bEsvK04zYTkoMWBaXjNtIXJCOHVpLzdvN0htXkZmbHJWPmotSCk/NGssRkUvc1gxPGxgLSRxMnM1
SSguSjsrMU0uZk4uNmQrKipLSyUva1U9U2wuOmVLRGhccUoKJUdjV2xzLTJWWUw6KzUwSG1lRS1d
VG89b1JpJiY9MzBVb3Q9TlxXXT9tWWxgSzIyKWt1NiglM2Q9Qk0oMTJnQSwhNHE9Vyphcy09UlNX
Q19LQldPS0RaOGBuQCxHOCI/QWhJXFAKJWJgXUllWUNcRFEidW4kaD1vSTFfQ2ZNa0pwWU10LjhI
SjkqQCNHYGdsc2Z1ZUJsSSZ1ayZQMFQmI2RGLC5gPlNUMSZmLSEwa1dAUSZTI2BqSVswY1k9OjE+
bDxWaGFpNllkbGoKJSwsISw4TUM+PzIiYlxIZWVxVnNKQU9ta082LmUiVyZdXCpaS2dELjwyJT9t
XmZZWmEnNV4vQjdMc3EmazE+NzlwLEhTVkFlJ1IqU19HckUhXFJ0L21gX0leSl5OMmZNLm8zU1UK
JTQzdEA4Rlw+YDppQm1BWnI5RXIwUWE0JT06RiI/aGA8VylPZzM1YE5CUWdiRDJNT0hKUiVjZShj
dHVUViopST45TUM+Ijk3MWh0OUpubUtyP0xXR1dZTmFUIVUvP000SE5LZEwKJWpmJnNXcy9cVXI7
bUIlKUNLU0tgcVBmNUhnVHBwKE5VblhQZmRnTzpDalsrcWRZR10+VjorQ1s9cDopSz9uYlAuMTVz
SE9EMj9mXS1lK3EvNVQ6b0kiXCdcRyo/bmVxRzQ5Kj4KJUw6Sm1pJmBLWjRbQWNQNFJuNnNEZFZL
O05KWE4jV0ZELW9cX0ZdQERfNj4nbSlIKFpRRykqP1A3LDszImQ3UHErazkoMFVaPFcqVkh1Pkwh
QmMyVl4iP0tGTWxLIydLLFMrNDQKJWdeKVk5I2BDWSNgYEVtcC51SWMjXWxrVEgrbk83bV9jQEcz
LUpEZWc6WStNOm5kdS1mbXNyZmMxbkJJXSNYTSJYVmZUNlAwJ0ViUGdcPCRkXTMlNVZAZEhaOEtE
QjtyIksqOj0KJU1rVWVFajBwNl9GSEtVVl1HYCdIQCRsYXVERVFQNCUoZDdBWE1AZ2ItUF5rUm1o
aXNyWnJiWUtLdWQnYVJXXjJEVVlUXz5YWEkzSkNZR21zUiIoUSE5cFEmQy8jP04jUj9zK04KJXJg
Zz1BYi4oPypab2F1TltTNUFbPnEnaSJTM1tfLVFgTVpdZUs6XTJNQD5dNUM4RGFnU2xOLjMqOWZA
WWUtPTJnZDpYMkAyQF5ySGU2LUc0VWh1JkJBQEdDXiJwUUdQY11XPiIKJVpUP0NtSWgwbT1KYTYy
JVNuOFNSQkpWPUg1RyxxQSZXQlktcEpIXGZASF5udFZdW1ojTU91I181Tk1fJV1JMXFdXyVKayxK
cDI6OF1sT1JuIU8za0dVbFhoJWkzN2EqJVBgSjgKJWhkIUZoOWk/YjVHVTk3NyswUVg5bDlSWi1Q
Iko4NDJLMWNkYnNqaCdnOUg3MWldQShkUypWaC46cjxvXTMwbGxVPztMbEJrcFFUMTI8QkZWJjRB
Z2hrMWFXSnIxVGolVHEiN0YKJWhBPkc9Wjo7bnIiK2I7T1YrOTFlMiZuaGI9QTAvRDFEPFQ8IVoz
TSJaUlJwaltXIzo+RShWNkNbM2BqQGpKP1RHXWtxL1ZbOkFUV0pQdTI1QF1rLXAyM043RVMtMFJU
bnBGQGYKJT5hKm4yS3JkQHVdYUAtVjJeV1EuamRDL2U7SlRQTlM7SS02TjVmJURAX21bVVxGJ1Ih
aDRQJFAyLUI7a0YqYT1ja1NoKzc2cmZGUGN0cThlWGoxcTI5Yl1JITkkV1gqWDRBODgKJWNTXj1y
SUpwU0Zmcjh1JCxZa2ZcLz5GQiowaURybSRaMysxTzo3OzcxTT4sKyxbLVlzRXM/bzohKTY0Qm0j
NSJbTiwiVGdRITBtOTMxJTR0XlhSUnMiczs9UEFaRWRHQGlTM3EKJVghbyZOY0QxKTRoV0lrTzo9
VSxoRzQ3P0o8R0BJN29nMllDPCZXMypoS3VPQU8mVkc3ZmRpP0tJLjlqMFFjV0ZEO2FLXG0sa3Ft
WmIxaF9ANiR0Ik4lMXAwIUNXXTt0UCRZVUIKJTdHSGJWLTdANzRERTVOOmcmJl9JLVQ0MjBDTWdf
IkNJXTtTXydVRUIycUtXLWIxSVNFPSRoUDReaEc5LzJsPUZsMjhYMmJXNFstN01nWDtyRktFWGdg
TVJYWERlb0FpbUQjZSUKJVwkYzdFcEFbNFZyUkotT1ZSXTAhQ0QkSy9dY24rTDxWUCljWFFNRywu
WC9fUGpJLkRHZkpHIzBacCZtJiY7IU08Lj1FZydJMF4qXnM2P0UsXEFjQUFhdGpSSTMzPkNyIU5l
TV0KJUBuL25kZ2hJdGBUYSlTZERSRCZEW2pmY21lc1dCcElbVipKUCtiR1MxZWk/VThab0NtPSZt
TUZrWjJcdSNNI2goPStiNiZRKWcqbTMzZXEmPmVvZWYlNSg3ZD1ITmxXOk02VysKJTxdM2duNiwr
YzVmNktHWCR1TW85XkVOQXIkIz5sTSZwJF0hWiw/dGFKRjBvTmdlTDRoOSo5MUNHSyhzZU1gMCM7
VjYiNy5wR2gkI0dPc0UhZ1QvdCcxZ1gtPkNXM0UqRmdQWyQKJUZfTTRiTWBdXkZQcW9IWGtEdUZI
MSo3cDRRT0UhMU00J0tgX286I2EoSVBedC8rN0cqXjckN0g4cEhVOl4vQls8aFlcKm1wXyM2KVIt
c05NbVFCVVcvVSZpRlxiIllpT2M6S2wKJSY0QV87OCdEJmZZbi0xOnA0OEgwaHRZLTJsNmlcXCFY
XHNPX1IqT1VoNFBjLD5KXGBIPiRaaD9gYjcnPGkjWElmNWxtRmJuXWpEKSljNWwmJ2A8SVNTPiNB
VG8yImBQQHBPVzcKJSQtOV5bblZdTF5jcVc2KF84UDxtYlhPMjNuPFRDJydsTSwyO2VySm1YOjVo
ZlwxNVxvL0A7P01fVTlGYWdmQV1wLzxvJCFvNjVsMDNhPk9CTyc/MG42T1FdS0lTWEdnMUxsZzkK
JVVGaiFnX1RRMFhhUik6U2ZjMW0uZEQqKEgsVHFmLk5zNTBgODE+TT41PEJQQmhhRnVPTFNGXnMj
KzE4PEdBQ1hkUU5HRG4rNl0+TXEoSks/Um5UQFAxPFApYy1cT2gxOFgrXTwKJTdKVWpJYigiIm1f
KGs7UExxYzdOb2A/Zzk7MmYmcVg/SVg1NT5JQjRbMWtcPXBlQTdjTllAY1M5dDNRTic6OEpnXDNs
OGlRaFg6YjxDa3BSaipfciNsJm4+SzZXUGtbMkpJdSEKJVFYbVkzTElOc2lWL0FxXUFhbClSIj1L
XUMtV2deNyZ0X0YpQTItYTtRVllqViwjNGw0MClNSF4qYS9mcC1bNFdaYTtHP3IqZSxXbWJTcCtH
cjRpRHFGNGloaSdiME5JXFchTywKJTxvRGxxQlc7ZFJmbSpWL01QQCYkQS9zWj5RJjEsRUNuVFdC
RDNeJDBoRXNRP0dJQj5abSgmP1wrYjNpQmYzSyN0WSg+bVhaWmNxbCdsJl09YDJDNiRMOFdXZV5h
K1QyIz0sQCUKJVFRQEAhKXAxV1ZLZkYvaktxV0ZEMms6bDI2aElvSlBVQGdLP2xWK008T2snbWhK
XVNFMVRuZVwyTVJ0WUNedXNcSyM+M0JaXlcwcC4mNGVwKz49Vz4tUU5CKFRNMllUVFUxQzgKJU9j
YyJCQTNsUmBASjBla081UCo4OC1CPmtCPlsiU2dvWl5oQCw6akNKbWd1KFMhIThAMU9pOEpPNl4r
MyRnJnNxWClQcFQtcWdmUlZbVj9LTiM0QVc2TjMhIW51R15hTEgoWkEKJUFnUCY7ZyhnPm9UbCwo
S0RlRydBQz5Ebl1OYlhjLScjcWBxPSRtW1NccWlXRWlgLVlHU2QrXyIpb1NlalonNUNDLy8+Nzw6
X2EqX1MpUGNeOWM/ZChCQW5fK2BkLzlhLWMoQkkKJWdKRUo6QUo4SS0+WDBKJ1FCWEdJKyZfYSkl
RTcnQENQbShbJHA3TClKPURgLlxQcWlEW0M8J3BYPTdBRkEqNUVtNzVWRWdNQHNfUl8xKEJcLVxj
T1JJYU8iU0teQyUiKzUoc20KJWFIKWBBcWktQy9dQWsldCM6XTBNXGw9JSlIaiQ3SC9ePHVpQWly
PV5mX2g2cG9mVyZxNV9AIUFXPltrQE9eT0cibTQ/NlRDSlFIakIzJ2VoJW5AIVFcN1I1bWBYa0hT
TmlpXUQKJTlUJ1lVM1dXMDRtITNoLWpBblVZJU8wc1ZrTW1KdClNaEhbUi9hQWxQLG9YbFFcLDlT
QTBwR0NrZSg2Q2BbOlxQQ3I8KlFLcGpoN1RiNm5xWD5AZz8hdURfJzlDUHJGXjlmN08KJTcrPEt0
VDJBbVpFJExvJThbY1IiME1iUlonNTFBaz5OL0EjJmdDP2lJajFnLGNLJjNeKWlcTi1PQShMX1Yi
PiJXL2tXNWdEV2FkZVNwVnIlR1UyVDlCOzZsa0YnK11zV1A9JEcKJTdxNiM9YjUjTytHVW5gOT1i
NGsmQE4xMmtcVlhrRyo0aldrakdZcE5YYFkwJW5URjQzQ0BYMWplLWFYIkhYLlgra01zYTs5alhj
UGxDRV5rZkxEQkFdYFFAaEBucDRHOF1lb2gKJTopSzZBQSpTbCQobDxPaicnIUJQTClvZHM+Ti4u
a19FSkEwWStsMjdAc0FjcW8/TmhyLVNaSDlJWjBsLlM1c2xtPiUhVSYnKkMwODRFXE5PNlJScmRR
S19POyZhXGMyOCNLLW8KJUN0M2xeaWJUanQtKl5TWFgwbV9fVkRGJFJQUTRoIStWcDxbbFNPYEch
M1JbNCcsTWdhOUxfTDQvX007TG9GWTw5TWk/PUw4TmlFM1htXWIlbFdXQ21iKk8hYjs5JTg+RTov
aj4KJWIhTUBhNVFRTSokLEBBXSdVN3RNVDEndGRxJlUjLC4qRWJaNjosT2ZHSDdeR2xMbCxFOTc+
QGFmMER0SW85PT1hJmcua2kuJDpUMHJmMGtxZUAqdEtQLGQhNmtJTlwnVHFjRDYKJStrT3UoITNT
MGdOJ19BczUzNk1oJERHTlFqLzlnXzJkUl8/Mks5ZkQwNy9nQztVdWVkTTZXUlhxPVJlSWxXLzgo
YEIlTUQ1JDFQaFlLLWBXQTRmVEdIYDQiX2hAPWsyKk9AVnAKJUNEXTFRVGRxUzA2QyQ9RmVfXzo6
Rz5kQzYvVnRRKkFrRWduRCg+KkpbYiRecFMpUiprW1NWLTskRUhca2Q8ZWBZM3FHN2FPQzteMkda
Q1dqXiwyXHVwXHBBQU1yI29sWUoxXFYKJV09Pl5WV1NTNDJrIy5YayZJSFhKZGFmZztjQTdhSUtH
WWBXQU4jJFU/Uz1BJUViIVtobW1GcSEnUjJ1RDBTVj9PRkdyKyhqNDojO1BpQlg/XVQ+dFRkWCtU
PzRINkpaPHNbRmIKJSl0LHRlZko5NkZFOG5LOWVCQDM8Ul9RKmRmTG1fdCouc1RDTW8kVzppbWdU
ZCI9YzJNQU0vUz4yNXRFZjdgJ1dAJEBgYklGIWVlSFM3dCUiKjNCZzZpMVFANS1ccU1UJSwjVSMK
JW9IVkM4Ik10YGEmNDhRYlI3WispKUdrZFIjNk1ic2YwRFk8SCFIbzwpK2xqXSpkOGJTKEo2NWYu
JGc2ImJgIyxRJENHOWcrcCY2U2w/YnNxIT5XT0IiL10lK0UnQ1tGXm4/U1gKJUAlQyRUUlY5UG5T
bGgkJkAmYT8uJTlSRWtqXDJmLTZKPjdKPGY5Uk07Z3NnOlpVJk9kTDZ1Z0tWX3NWcU1fZylETHU6
XCRYLTBjQiVxS0QsOzVXaGRsc0UranBZKj9EWWRYdHMKJS5mO0ItbjhIS2NeW01NV2t1IyRMLEdB
bEU9Ky5iXnBnOmJQaDEiTl5OW2QpcD5adFBnbmI/byZEYmklQF9vOGEqRUktY0FQQiYsbmNLb1gk
V0RwV3VXJDJgZWVuLk1mKFsiLkUKJUQ1ZTtJTihPXkBBblQ7RHErWTQ5Jj1YLFBaYSJOR2ZLTiZG
cyRqa1NxQSchcCNmTi8uaDQ7Lkk6WWIxPkVobzg0Um5xKj1GKGluJkRjUEdZRFJpOV4hOVVgT0Vd
WmBwZEhOaVoKJUZOcU1PY29bbC1ZYFclKltBU2RaNz44NFBuKVw5RUp1Rl04UEYxJVJjb3I6VkhV
bUw0QzJkZiY+KzwyalhtWWUobFFYZ1RrWTpOTmxeKWhsJyhjImVjMHNyZyxycVwzX19yQGgKJWUi
Q10nJjYnSV42Yi9yKTJ1RlUkcWxxW0w7KDBdbl1ZdGNvRUslSzEmc1k7dWlyTj5IK1hAcExPcmpE
LyxDVzNGO1IwX2VMLmBzYjY3c2dvOC1xXlksX1NRRmo4MWkqajw4ZC0KJT1DN1NfMFxFMEopYnRU
aDhWNldULjJLdTotPFdILiFLNTdoPyVjS08hbV5IT05xJUQ6Ni5jSlgzVyJHaCM0L282OkpnYG0q
WEdVX11nME9BPC5NLHVNLCxRKGpFZmhsZ190VCkKJS8rV0oqYFM3XkQ6WDlaTkhSOVApRlJbNixP
YjBnOzJqQEpLU04wPzFQQVJRUkw4JDE6JjBxYzVydVYlMF1caT8lXTslJVtCMEBAV2MxUDJIWGIj
Tk4iQDoiTDlDPWYjcFJVcEUKJUlANV5aKDxFaF1OSmdAP2dfRjBkOXNMRG89UCZyJzw4dC5xZ2Fn
LUxGYSRaJkhhVTtlTCc8WlhYMV1SK2BuRyFmXEUwdUg6TkYhZlgkMms5ZEIqPktsSFAxbkFsTEZh
RWdiQXIKJUNxRXRLNWZmRFZKQVo4JFUhMmU0UnU3SWNhbG44ZEs5OVJnbmNoI0hUKEwoKTRsLTon
NWdBaT1fLWhFVyJOL2FpPy8qNUhlTmRFXFNAZGNiUGYkXmlfcVMoNj43LDpBaUdDI1MKJUEiVChi
O2hTO0ZxXyFPIVFaLExrRGRVRWJvZjNlZDJ1L148ZDNZQUJrUE5hVGBMViM9VHVVJ2lDblRuQkA6
ay06MGQ+Y2JMcSFWdFAvTi9lIiU4RnUrcXU8Pk9ZO3FSJzdVJTwKJU8pU2tOQTAhXDtxOyNOU1Q5
PUQvOEg5UzVjX2YqOWFkaGYlbWBaUmNDT0orLkd0a1UhRTsjQXVXNkhrOToqZychLkouQTJbJGBl
cCIzaihYJDIhMkhiX11yJk03cSdxSEAoSEUKJWFha1s/R1hOXFNSTT4nQUtmdHNYLS1rZTUkLmFP
NzVWWTAuUGFuI2Y2VnVYZmxEYmApLnAvY0pxTGpjRVFgTFlZOihGbTxnYVVudGk2SiViQ2AoLmVw
Vz45UFtmbF9HLjRfRy4KJWE2PCdtVkI7N3NLUipSMTA7PlU6PGwsakFbaWdEQVJIMEU0ZUJgLChi
QmdHKUwkUkUia2dkMzQ9QTpzJ0BQOVAnND4pJjxWPTZQKzhYPVthRDdxbm0sRzN0U2s/KmhoWF5u
VDQKJT5yXzw0Jy8pQjs+YS8zWTpWQV87O0s7SnQjbXBGJ19gQms0VjliMUBGSFknViYhKWkiLE1a
MEg2TDQyIl5OakFnUiNXXFBTNiZJKyZqSVwqalxVYmpFU1lfLj9GdD9IOWE3WFkKJWpRVEJBSHNQ
Z28qQiRyXm5hUltGInErYm5oRDVoYUpaPFw6IyZDTjYvMjM/V29OOXNBUnNjdD5BKm4oaj1yIVBc
JCYpdDljSWZ1JVo9UFNVW3A9IiRsXG8raE0xXGtWRlEsXmcKJWtPaCF1WkBOXDBfQl1VTjNQMnFS
RiomLFhwLltOMD4vLCI+P3VkU0osLEVSWzJHYihvQEgvSl5BJGJUcU49Lz0jQyJPbmI6OjdIVyhf
QEZhW0JndTcwZkRBaVtPXjJhNFQ3RDUKJUdXXyQkUy9hbj0kcC9GRTwmdTY3LmxxSF5iJCFsNlZY
PTA5ZkczP2teLkNoQyVvNS5bWUJVUGZLbjJJbE0rRSZIO0VNViVpJmJra1wjMCZca25EaDtITlQ2
Pk91JT1qNGEsNGEKJW4xYi1wOC1PKURaLFJfVUkqYlA1XGdgdW45WlFzSjxAYW4sLkZMX0syTiFk
SSIoYV01MWsybS1tLCs8NlooXmc7UWlWMyRDdS9pI1k5YS8sTDJuQVZePz9SYWJQbEZtLytpTHQK
JWg2QzI9J3BuOUpCU1gqVkYnW1okLSlbKlFsayglUFAoZWZlRSxJXEM5PWg/I1NEPlVNTkZiJCRl
O3MnXlNWOFw7T1Y3RE0lNEJgYiM1dFo9RGNxLUM0NSk2YTxDRUIlQyhxJnUKJSdybFZXVVYyaTQo
QWJIaWpScEMnRVNPQGcqMSpwYT5QVEVJWW5Ra2ViOHFrdCVTOUhZb2M0KlxgN1pgSjRBJy5GIjxY
aTtmSCwkXVA/OjJtcT4mUz4vb2csNm5bbF5dLj42VjwKJU1wPlYlcWQuKVAkQT46SjEqVnBSXUJW
Ryk9LUNgJV0mazlfOWlJQ1NMVFdERGwzbScxUkQrIWdsRG9dbSpkQjAkU18jREUzOHVuLjZINk0h
T0FmKElSSGFuTzQtU3FsZXRSJV4KJTY1OmEoJkpfPE1BMC9zLzsuRlJVUz8laVRWUmcjR200aEsz
Rj1BTWdsXT4xMWpRKURtVCMjKFdTPSwuNytkW2dlVyJyIm48NzBNXlJcTidwRyZQI2crYjItMV1k
IkdiRl01REkKJS1UWnN0KS1EU2FGbDtISSN1MjA6aDI5WVEnWDk8bGcyUyJEY0kvWXE/IShPJTZT
SD4hb1YkRSVmNGhQMyhPXm1dWiwiJFlETGxEI3JuYG5vVnM0OmAnRjYpTSNaIkEyXSVlVnEKJWJi
I21nQDw7QCNrST4iY19LJDAoQUJxPmY8XVchTT5HSEFXP108blpSaGBzPVctSVBCSHVpJyJOMSM4
Oz5CJjNxbDU5LD45TGswYkBbRjw4XE8jWC1rJW5uLUFaL2xzZiRSPGwKJUs6UiRrXThTajozZzR1
c05Fa2dbcFxOIm1AWjM9UGIkXEozajJtT3NlSzosL05bbk5SPmRCOUBhLTdmRHE7RC9uOCJ1XF1Q
bWMyZC1PMU1xUVUpRS5VNmwtQGdzSSVkIidbXiUKJU9UUTk6Z2o4LTdpK15kQ0M9YVNDQ0lyaCFT
aG5bKGM3RUM9L1BwOW07bkUxNDRVPiZHQCd0bGtJZ1Q7WzVoNDtXTVlKcDZDWU9NQGYnRi8tNmcm
RU5MZURkbiIsNl1kIVhxXUIKJTZhdDdzLDtoWTAqUkI7XEpockVwNHUoa0szTHFFRiQyTmdCSk80
PG5nLzNXYjI5Il84WCQscEhCTXNKZjgvZU5TcUVBJ2YhMS81UyRrWCNOQSMxbSQzYFpZKFA8cWVQ
PTZnOVAKJTw1PVdyVTYwMk1WXTpiXUBTMD9PODNCajVsUmZPTCFPUUI4O2cnVk1KJERZS2BDPkY4
LFJkTmklQyoxUyYzTGFdLmxJZV8tR2lybiVYSyRrTEw5MiZdNiJnMC1UXjtNSmBlaFwKJVxGKFpH
XD9SRDZGWjpJYkFYakcqIi1PVDkpQUtYdFEtP2I+R1VbOVxgYXJHQG8tOUxpPGptWSEqRFdCVW5m
RjdpMFo2YD9kRTZFZE9iZj5vXUtrSk04M0k7bzBVdUpzblMjamkKJWdaYGE4TEkyOixPU1lhbFlN
NTVxckRuImdpO3VvUWU/RjBvcCFIaykmZForUCRcIV5LLjdcXjhNWSUyTGJRLGM4RmstK2E6bzpt
dSVWZUFXRE1cZikxWTtRLUBgUVRSaU1KLEcKJVNAbCJvMHIpdCduR1tLbjNXSkY1XUEnQ05pVWJt
T1NHPSouZT5JMVMqJV1FPXErLEB0VGFSUkpTNSpcZS46bVo3Z2NHU09HNmtSWS46YEtRXSZdJFMs
WGUvWzpKSEpfbk5HXGkKJSFNPCMwblZlNkVyIy8xb0hbJUEyK08pPFBNZTlIVzRATFNZWl0vLmIy
Wj1GOmVZVHUmPD01ZmY2Ol10RitXblNILV5gZzs9IzFnXC0oUixmR1ozKC9tY24nMjlsVzNsUV9U
RUsKJUU2NiFfcm0zPFU7X2FtL2lpRExtTGY/QzFfdEEkRURAcyI2NCFOLUBkTjRTWSpIVk08U2ss
UW9Ta2pDIi1ZXWJYWSJiTkQ4V2snNFJZL2xrKz9aYTU5LHRmaSEiSldqZjQ9R3EKJXFvRVYlNjs2
YltIYV0pRFRiZkZTS2Qkc1phJlBxJ1MvUTplYlpVMWxiX2xuUFI0XjpxLClpLWlGUVAiNFpBQShB
Oz00ZDNrSCU1aCheLmBwVkNZcnFzKGNwOjlsQilsUmxTVy4KJVtbIyZOKCNldDJQU1QwVTtEPipt
VlBAZkJnITs1L2Y4KVlzOCVMYzRocCpaJWNhaEZuRUc+blZOJXQxXVpIYWA3Li9oRiMjcixuQG0q
MW9faG86IlFedGVfMENpb2ZhTG1saiIKJVw1WCtUIk9uXl1Lay5qI0BWXEZuWj1fInM/bEdHXFtk
cjJLOTEnRnRsJls9JzY5Xk5IN0JvOyVMRSgrZSkpT2hBOnJMNHRbdWYndCRSNmF0TFA4J29OLGxk
RGQ0Si0yOHBmNz8KJUM6aVdqcE80V0pWQjpHPm49VTxlYSIqZCY9YS1pSV9obVBbN3UsWzZIJyhZ
K2RgZFhoUCtHTCVYaHFAKXB0Vm0tXk9jNzZGczdwUENPKjBWTDMrbipMPz01IVldWz1NKmMmZ00K
JS9BL05QKlFyImc9XVI8RlU8JWluXG5oZEdtOFhAbDVOKCVhSzs4YiYnVlYpPV1QPGcySSt1ZkAl
KWlpVW48b1BfNSd1TUVSallecVVJY1c/OUtGOVJJVWJXMT1eP19MZHE1IyEKJShXPEY6bWcqbmVY
UW9GVl9zK0Y3bCdaL3BQUU1AZ19YLFZCRTpXSllqOj5vU3FHVj83Uzo7Mz5LWTdybl1ydU88TGN1
SidyXSo5YnA8XjMhYllcYj1ULGBlWSJjKGU7TEtrLysKJWcnL21zSypCYD9QXHNtI0MoNGM9JHBZ
UFBPNG9mVTJQUls1alpfPl85cldPLmdlKl1BQF5jRmo0ZmwxKygwaChsOSFOKE05W2o6L1hhaCgw
SjxkP28nNydhUkcoXy51XzZWT00KJSY0YGdJP08sI2lmXTZCS0MlJUEwXlQ7LS0kTGAoMCJFXkVb
WlBxRFYiJykqMDxSWS1oZiVcKk8iMTNMRlRbaklBZy1FOHIzMUMwMiI9RGNfLSQ+dDUuZD4jaDE6
LyYuIXU3Zi4KJUpoWERWKFs/YUZWYENecjQmKz5EIkFUSEYtMCwmZ3MyLEQ+NGZtYGQpRXBNdUA+
MktUUXA/bSY0SDBoL2JuZkhKSmlzOTEjaEUjW101SVw4WEZOR3BeZmFiPyw7OjlhN0taUkcKJSI2
ak4ubUVMNlMwdSteM2tTXmJccTlcMG5FcVZNUFttSi5vNSpwN19aQmJWJC1KJ1I5Qy5ZUGkiZlla
PC5SVSlnLls7SW1NKmRlVDlpNik1VCQvZHNgZzZrMTQtUU4yUF5yR20KJSFQXDZLUzFsTSJrdCom
SW9bZ1VrQ3NfRDk5MWEnPVxaO2tgRiNBX1k5JTg0PEwrNiEvUUtzY2tlQCQ3bkZjNmdbaiJGSGlS
JjZJIl1sITNEMV9NWiVfYSMqWGA6bnVkTSVsVj0KJURYM1k2NU5DPjA3X28nKHJtVWhEXURxYkNI
RlNDdGtGXTxbL18wWSRGN11aWG9FS0MvIyYqUl5EPWIxTWhObCU1O1JkLnRrXjNtJiZwVTQxXD1U
YllfVDArciQxanBhXi1Pb3IKJW5OXyJUY2w5Qj40OEBadV9WN3Q1Pjcsa3A6bGsrXjJuc1dhcjlG
K1duXklIc0c8Yyw8QD8nVWltUGolYDJgNipWZWE5ODFHTCk+YzFNcmBQRFFlTmRxKV91S3IxRGMl
SylUKm4KJV5ycSxxYyJDZF5bZDdATVRYWHE3cnF1ZlNfblRHNj0yMktHX3VEMGdoVUNaZm5sdEEl
I1BhO08nTmAmNF5TRj5GVD4nP2QqcUBDUmMxZ2tYUV9cKm5WQCMyK1hBZy5kPD0/TE0KJS1yK1lm
bkYwPVBfcFFscz9aKDlyLnI/ZG8hWms/PlBPcmVKTGJgW1JHVFlkamxOYCQyR0Ujaio1VWUoYjRE
NSphU2AuL3VIVihFX2ZRNVFKRWsjaz5sWWBYXypoTj9MUT0vckEKJS5tIjEyQ1txQCJUUVtgVkRZ
ayRbKy8tbUlabjotZVRDcitiIVZPbjxWVy4/cGFSPzB1aGo4XmBMW0ZCM2c3aT0jSGppK1AwJ1RP
WlowOmBWMydiKzEoNDAhTTRvS00vcCdsSCQKJTYiZiwoZi9AXl08ZjghV15ZWmZHPD0nSm5YYVVE
WmRhbW5aZSdhaldcPzxBV2k3VUkuP2NYZm5VXDJgOW4rYmw2MGA9bydKbz5HZ10hYEY0X2BsWzk7
UDBCckw4cSU1OzRqP0sKJUhMMFU5M1w5ZFcldVpCTyItWV1SWzUsJlteOTdDdGo4Mzhkb2VBOWo6
ZWB0RDFiOko5JTxjQXRXI3RSWEZoIyNjTW5cZVQ8QTlrQTBqYEI0X0IzciYoJXUxOllOazhdV1FM
PkUKJTs4WU9bMXBRSlA+RyxlNllWKVNJRmtyLFJWXHIzViNKS09EXDVyJV5XcDlqaVAtOz9JL2Uk
MDZJPEpbdUdKdU9KQyJyUkYkWiNAKVhTOl9jQDQ2VFFVVDsiOWFtLztLZXBZZWwKJVBuIS5QXlZq
cCZnV0MrP1pFTm5xJEonJ05UQ2hjUj1XJk0wNHAhPGsiWUkyP11qRFlqPm9DIjtvdW0+XCE6aW0n
YkhdJVw0aWVgO14/VEs+YVdPajhhdCIlak9dRS4wRGRiTDQKJS8jdCspRVFOZms8Ym8iKm0zNT4t
XlldZ3NMNUMrUV0lLDYiXEAsVjg6TWczY2VEbGhmKTRpYnNyMTpwKVheRihFVjs9UkxMUyctLkdi
bWJYO1xlSEJYOU11Mioxa3EkWjJmXlcKJUM8U0c/OzttLCNqVWU8Yks9bUVEOktCc19cP21qMlZJ
bUZyNjt1ZXVWTG50N1dkcm5zSXRoW1FhVyRuRU5jbSkzIjNaQkk1XnJPKTkuWDd0bmFfWGpZJCc8
RkVwKUN0ZmpIKCsKJVVgLjVfTFE3RSopOmpCTSkpQDwkR1JpckY+YHUiKjkxJyVWYDxdWlpxViJ0
KCtYOTFKbWcmWV0rI21SYWojJSw7amNzJ11tNl1IaS8jYSR0Z3BhU1RUVmAzYzFKbEYuW05kPnMK
JSlUU2VqYGdlNVU2Q1E/VixxV0hrWVFqOnNOYDYsMl8zN001MWFJPFcnaG9LP0hcWSteZ11LKEFX
Tl4qSVNaazYyaWoxOlo0QSQuQyg4KTJCLlYmTmVuSTdBUk5lVnAjQXJkdFUKJT4+QUhKZG9NOHBw
WG8iUl5gQ2tdKiY7QjRsdDNKLTouSmZSLTxbcGZVaGUvbWdLZGxMNmRvOSEzRG00QW5ecShAZHVK
SEEuZ1lJaUhfQSxAJTxRNW9aX2Q6P1ZFX1ZaOFM/QEMKJUNwZ2k3XSJUP1lfTGpHUFUtY0JIIWdk
TzlEMnUhJiszWTtrbVswVEM2Zj8hMypPWl1vJkNDdFQ8YDQ4KCN1TzxpMS4xcytgUVZuO2VJP3RJ
XyZHTCtpKD8qaVJLOzpZJFxDLzkKJSxNTk5bSjwtTS42NkM9ckNeby9LRmI9VlVIS11wPys7YEdX
NEgkXmRDQUEsTmxIJShua1VpbVhPWjBwYTQtNV4ycVVRaE03JTouWDE0bFZXXTtrTEEjPHVKYThX
Vlg1WSZgVCgKJVRcMkpLMDMuJiNaRyYrZykwLm1FVSdCVi9USmcyOE9PRVtwPW0tcjNpbmpSUFpo
LSpzX1hcIXFaZktScUM8cmAoOmkwaj9gWFwlWGIyPFMsWy5rRGkoTz9IZS9BQ047W1VLLkcKJVtv
QHUlXycwJm8lRlxDPDRGTVgkcU47LXAxRVo5N0Y+KidqKnRdQl4xPDtcLVFIOkwzYG90ZGs+cV50
Z2EvKkYpJm5NN0E2T2dwZnIuLCxmO0RBUGo7QytPRGo/KThtOGVyQ2EKJVNaXjNIPT5IW1xYSjpO
KXA7ZC4mbUk2U3VQMjswYGwlUyMqUSUrbTQmYXQ/Y1FSb1JBX11PI0dAUkZ1OWY3bFwtUGhybHMj
ZyFPcywxJmdeaGJganQqZV9KVVQpMlFiUnJIYCYKJTBFUCNxLl0vcU46PkQoXGIoYnVXJWJEbDNQ
RCxrRDc5NGZuRm9FSD9ALzctbGBSbydtVkJDOmxUIyRhTDcjQSxpTDc0TypMbTAzPTAiW0k9V2VH
YDpaWUgwI2cjOTs9WjtmbHEKJWhDP3MkO0MhQy5qWj5sMz1BO11rPVEnVWlAJ0VqRyxKJGRdMyV1
KnJMcFw2VSJDTzwxcnVyQmAubS4saChcOFA6a0FXNmo7Tl9UW20qNVpIRTVYZGVqRkJqbzxiUyhg
N1ZiSVwKJUw/O1JNK2pIcjM2XmVuYmI3QiFYIyRUa1xvOS1PcEhQXVBGUi1dJ1RbJ2J0alEjQ0xB
bmlPOkBvKj1wQzlLYSFfQThZcVdAYFRfOTZJQS1wXnJxX1VTPVotaHF1c1BSTFA1OzgKJV8wRGlJ
bCU0ISQwWG1JYFwoSTI3cmwmciMpIkJDPVtwQHNySWAyJmBxZixsOTpiYEU7LXNYLVJQTjxrU0wj
bFhPUzAoWEw8bjBtaDU0QWdqNCZOaF5MWF01M049JkcpJmJeZ2gKJVhmSVssV0dzcjdaXD1tSmxZ
P1hsPG5fPz4yZl1QWDgraU4mIVBSbD4uLVJrLihaJ3RoJ0BbNSZBJCtldWhVXG5URmYzX0hQQ0la
JzZZZjRnNmdebkNZLm9rSiUvaCZHIT9qc10KJVY0OkBuM01BM0k2R2JKIS1kIzVZLCwtXjBYKW9L
JVpzOkhgOE9KbXRXWERdI248bU9AZzxlNSg8KkYtQDJZZVY/MiFNYWI5LzloQSZKXmEqPSM/SnI3
IU1AVko1P10iPkJSNTMKJUtcTSQ8Yis9dERJLi5EOElWa1NEIWssJF8jQmhwUkBDMmIhS2JicS5i
O1BxbkB0Z2slVCZlRkpdW2NfI1gyXk9mKkc6PC5nL2E2VDApV1xzYUQ7aSQ+a0NuPkNlYC9tQ1pw
SGAKJSpJOWU4NyMqc1hjM1Zna1wkNSlVU2BWcFZVWC9UYTs2JzReRmkkZVowaUN1XlVhblNzVj4h
Rj8uNFJKTU5aLiNKa1ZGTzduZWFJbUYvbkEoQHU2U002NF9HOD9WYTopOmYqIVMKJTprY2okMXMt
QiFjNSpAWTo8biJYNypiSjpdY00sZUIoZjdsY09zdC86UVYoUjIvSShLOUtqMitQZkhYXilZTiNX
Q20iL3RNXUhxImY3ZVhPLFBELVZNaF1bOD86bWFXNjplRSMKJTN1WT5KMWp1dUJgWGpIYHIiLDVU
bTdIKS01NjxVblBvMScpKVxVI00pMzJublUzKzw6XjBHS1NBQ0lDUDEwJTtuSEhgNFs7bitGL1xL
VVZbP0VBXTFRS1lUZz1FRiMkOUdWSFcKJVJUIUNzVE4iUWY3UCVSTC1aMnIzI3R1PUBLQFw6Ljky
XiNJLSo+X3RjZUk0QT4qIUlQQDhDayFpT11EKGhcNz4uVW88Jy8iVUJpXDYlWDNLXC5KajBVWU9z
UWQoPF8uZ1szVVIKJWBtOkdPMCghUS4+JzU0NDo0OlBzUC9vcTBFY1YvPFxdSExuW3VEb1Jvb2db
UU1NTDhdRk5tMG9VbjtbcGpLQjAzNCsvVmJzN3BabWhCZiVEZEBIdFJJN09UOGVlMkEtPT4uPSwK
JV1yc09vRTFWLWNcSjBvbyxLKWBnM1Q2JmxqYVhuLF5Mb0xVbl5CP09bVXEzYzUtOUxXVypPVmxu
dVpFNnA8cytZYi1lP1hdZjtIVjApZlNCIWFVITpjWlxPbkxMIzQlKDhgOioKJWgxOj0vRGY/X08k
UzxgL0lZMGZOS1FaMiQpPjpuUD84RSZiWWctXFM+SFhCNFNGOGJSZVYsOUtPREZhOlwjYHJEcjY4
NkliTjRdcmksKC8kOWZgSjc1Mm9lbyMkYyUqXVhzYEUKJV4/MGdNUEM5dHVuOG1LJDwrZjg/Wydr
OUUpWzBoPHAkQl4/PyY+bmNuXGwxO20mXzc+YXVrZlgtPGxYYXEwR2I9WUZcdVQ0NFVvIzBIKmk/
cGg1J1FsTCRMcGpDI1RJWWRDSyIKJVdBU2BDTGcrWlckRDxHODFfXVVrbzhlJEhsYUVhNkVETEZY
KzZeZDtUcE8+JyQ2U1d1bl9MUGtWcmZnV1QtajtDQFwjJ1NZOW1HdGVnVCplV05LUCQhaVhzKk0+
KFhYZy1PNi8KJUJzV1ptYWVDVXJTPVYoYzBNMUchciV1ZUVpJVstViorKUsmW11dWE5mJ05rQ1df
WyteTzY4TFglaG1zPTMmZDhuVlBFYlVNJ0g8PCE3JkpdIlpOdGUkTFhQXm5VUmJDMy1eUCsKJTxY
QnFRVGIrSDBAP0xvJFFHP0U7blhtUHFGbWhHNFw9XytITzduLjhNLFAzM0g+KkRYQW4+S19VQVpQ
SFdMNlwsPz9AWTtvKFNzLDlGTGdCcFNPS29mVG9rPFhMPWdeUykjS0cKJUIqbk1kXm40Wl47Nk50
KCcvSCxyUHRZNUpyJz9rV0J1U24kRFtFMHNJVT9aSEpyIl5jQjdjKXNHLEdIdVlBb2FEaj5vazBd
ZVl1aSMhOlVKX3NJRWdbXEleaihqOmQmMyk1WS4KJTdCZXJeaGwwK0VMRjRwZ0tNX144N2oobmRy
PkIpKEMkIz9Gb2RTN3NqO0AiOys9cy0xQmVPKDIpPnBmNmQzQTZdNFtQJzgwRHF0QzElPWUpR2FM
RydILDA8aWNSWWtDbzBBZC0KJW0lNT9OJlxlaDBaJVotSmtNQUFqPDI1LDotMFI5dG9xI0AwTCJb
RSJHYilWWExdLSc7cEpVWkg/Ml5zVWlQbHMmZVQnOFg0aC9lRiUsMyUsT14xNzEkSFVwa1hmIisn
TCI6PCYKJUNzW0srQzBGRjJQRGRMc0JcV3RJcCFTInVjRjc5O1I8I2xDWm8zWGsqOVA4PUUlcms4
cGRsO1xUZFhiTFFBL01CZkNfIzYuMjQpTjpLWV9SW2BIRDdtXT5ZQzU3QVg3VkBpUlIKJSM8VEBq
XE4sb0JSckNGdGJCPUFBMmljJkVBO29rdCNYYiFLVFFmRiZZQk9AKlNpUWRPMEJkZDYiUG5iYihZ
J2FUMVZhXCdjVTBOWXJDcC4/O3BhSEArZyY+cCZEJW5yT0JzWWoKJVokLSJWTD85LG5bLHIiR0kn
Q0BbVlg1MjtwR1xFLjBWLU9VcCRZci5ZZmMqLnBwPHBOaktFbmVSaDFFTC8wKnQlOTcqJGllNGNk
Pm9bTzgpMlZrN2BwYlpEJCUiIz4+biY3OCIKJUZfWDo7bTwjQy0wSixQOHI1U0BoKStRKCdrcW9w
IW5BITItVlxzJTBeVk4qM05bRl8uQiEkMyg/QDhAcFZVPEdmOiwqNHE/PTgpMkVqVjwwPGc8bjRo
cHRIYFdTNlo/aDhWODsKJTk+ZFZhbTpiTnQycCd1XDVnUm1VPENnNzNLN09Nai1hWVZDRjAzLVQr
JzZYYU5NU1tOQSxWIiw/U0BzU3FVSFY/UyZjVHJuZ0xZUj1vS285VD0rajkoTTZFWGtoRlpGb0gm
XHUKJVUpQ2NwUFhBOkEuanAjQy8+ITEraDJncU9jWHBWW1hMYUAjZS4oPGYsQVZaXWtnb09fUzlT
MC1Ec0YvcChzSD9WKVk1XWRbbyVLaWo3UDcrWlF0dGNlW0ldSiYnYyxJLms6Y04KJVlra18vMWEx
aT0zVEBJaWlyZnNDMihkRU1ZUEVPUnByLWVNY2xgN19hN2BHS0lQalZrSltwIUNtTSpfXW5cJWxa
bXBoVUlDVyslJ1JrYjJOLzpIU0VsXFtIL25jZnVXPlRALkIKJU10MWMiRjRfUlQ/Oy1xP0BASURw
WHQsYUJxQmU9NnFHdTQoQlxVa2peYDktQEskImQ6MGExczRmOi4hSGMtalovMlEnXnAvYF0hVzFb
YGZeP2toUE4zKUZAK15DQ2gtYlFlXEMKJTcocj0pOjYoRlg6JUM+YmEsMGM+SEkuVWM5Y3B0PVNV
OWdyVFQ/L1JyTG91UUlCQC5vPS8iRmtwUyI/SzdwPTIoTFo/J2kyUDJcLFk5UyZeS1dFWSljTSg7
c2UhKXFPJmIkTWYKJWNbKXIocGhEK2JPNjxaOyQ/WT9yK0Y4MVY4IzI5c2VFPDI9ZCdjVGowdTto
YjNAO2UuJzgubiFSKUM0RSw9MVBEU1JXOVNDT107ZiwwKHRfZ1crNSZULUNSKXFBYlhqIUQ9ZSwK
JSFrQVJyJHRMVUAwW1Q4X10ubFkyPkROayJrNWtHLHElYUdeNU4zYU5oTUFyVWhubFkzZzFaRSk9
c0AvWEphNigsLlJWMmNVP2VAcighZjlLPGtQbkNaSydcWmNBVC09YWQ2PloKJUgxLUY1ZTxYSVkp
PUVrJ2huIzZwLjQ8MD9FVCMhPTJ0O14xQ01OLTVEXnA/MVFMOWYxSD4/MGVXR3FaPi5WXicyP0Je
Qm1PVVksM2dyQSJJTzBgUGxFViJrTUJCcUM+cWFpdXQKJXJqRkF0IloqPFlRX2NVQmA6JF9XO1hw
OSomVWZdK01ubC9iJklAX0lGWFdwVTp1MTBhYyViLEpAPEUrUFFgYWZxVmpUOGdtXE1LZzltdUJS
WkpRTjFHYkhvN2xCJmY0LnFedV8KJTFRaDRzYzFJRiVmSml0Z2tySSZGREg7aElyVmJySidEaT9K
SjMxYG1CZCMmTyNYODplJV9JK09JJ1tkPidxVlUlIWVETkhGR2c5aT4kSFNQKkFVNiVEMkNYTTlJ
KCoyOj5PT0gKJUJcXVsxcCt1RnNAT1Y/WmEmMyUuU0ojbUZJOy8hNEhMJWIlbEM0ckBVI291a21i
MzI9Rzk2U1lANlFWWD9iUWptQjk5bGw5dGdJTlEjRWppKHVbQEpDaFpCMzosQW47UlNnY0kKJU44
ZzZyMVx0XzJjI1JFXDZWZGVKOlBzJihlMz47L0MyVzw8Iz03Wk5mQChVaEw8VS5dMnRDQ0ErZTxr
R2pOXT89QW0rZDZtMD83XG9wNSpBPj1gU1dQSmA8LVhxIigvKWExPmMKJVFXMVRZRSEqMXM1NG1y
QT9LMzVLJXNGNShDI1haQVE+M15jXEwoRz1MbXMzMi9yN2NZW000Tz09Nk88Pk5mU05rOitgMSdn
Zm03cTNPWEVVM0NSVC5HM2BCVzphXkAhN0NdRkMKJSZGVHBybEknWCFYJzI/V29XOSJlSGBWXDJZ
JmtfNUdSKENBIkVdQl1bLCk2dUxCX3NaQFQyYWxJYTxRIjhCaThAITFmcyk2cGVvWDInKlZRVl5I
OFBadUM+RE5iZyhiYWx1UUUKJT9iPGBFcV1oMWEuIz9PXjwiSXNXXXFIVGopXiJzTkgwYWdUJys5
IyZCVkgqUG81WV4oZSxKW21PbHQzND0qTShsOipCLTdqbjAxOytNUHBpaTpVRFsicE0yazBnX1Vh
InUuXy4KJUIrV1BfRlJdKiNScSYpKW9Xa3RLKEI5U0tKLTlbIy4sP2FKOl9qL2hWZjJBJmdbWztt
U1VKMGIzKEJWNGEhR0xlSW9scj5jTydQWTNMWU5hS0cnRiRKNydTbmZlKEMvR19DcF4KJSFNQUZx
LlZER04qXi1bQj1Xa3VrRi08R2pWSz5KXlAhVGxJLm9WM2U7SVIvTiJkTWlMS2BtbyIkW3FANVNm
P29oTyVYQCswPTFHIWg/dWRZN3RlSzReLmFMQ1YsLD4mUSpkLloKJTFqNkQzVWVRUXNqM3UyOChO
SyVzY2hlPHVaTnBoLm8tLE0uMGV1Q2pxLTVZRjQncVU/MUtmQD0zPjk8TFVRYik1JithZTRaMUVY
P2s1LVw5KltKXlcxZy9kVE1pYjRPcFkzOWYKJU9LcDZicTdcSilbS3Fib0cvYDcvZlJZOkMtYDIp
ME9ycF9oO0E8VTVbaUMqRSUtdD5hREZsRkdhIj0/J0gkMCtBO1YrZ2RlaFAxPl06U100UUJjKDRa
NmNWdXBaUG9LVGdWPnAKJTxwM1pPWGNUWls2SW9eSkttciZkWVgxRlRsNVVgOUpKUWd0ZEVzXzg2
O3JdKVYtJyUzQFI1Z1A4QC9aLSdxI2h1ajgoUCloNlwjbG1eYkgyY3A/Lm00KyRLQT4qaCVQVTwo
VGoKJXF0ZFE1ZW45UmQ2WlBYZTJyPXUsKD9sXXQwWVRnLk5QWVplZjlgYi8/KDlROnJUQ04oVkRa
Y25SVkNSZGxmcXAzbUhePDxAM08uay40Zy4sI1JST3AvT1o+RkFyT1tHO1ozXjIKJWphQW1WTSI0
KSsjdU1iYy1yMSYzLV9KNkE7Z0BqcHBfYm9qcyVFbUBvNkFDWnEnL2tEKXRMMyNndDJiZy8xQCws
KkhkJyxFIWZsWl0yZV1FPDlaSzo9UWslRSdsJS0kYE9xPysKJTRzRm9VT2NMLnA9XEhnUmtLcXN1
ZlxGclptYmxZYWNuKG9fZixxRGdhTVNAJUIiUEk9RDpFMS9iZSIyKChTaS1CaXJeJ2ZdLidHOUc3
QXViYSliVFhHcjFEWjo3MXVCSWRhV0AKJXFJdSs/R3R1ZjNELmRILzldPTZEWVtnTTtjNTBGQ280
Ij9SXzJHc24oMmJOaCZaSzwzV0opZ00tZGo/SDJpLEZFQzxkMnI9dGQnMGxNXUFrajEnYEReWiU7
K21MTV1DbTNVY1cKJUNbLVE/QEFuZidbZiRlVlhZJmJOLUNvUF1pOT9bNUEjLVdNQSUzTSpVcHJP
VE8yQGs1M0Y6WkROI1hPWVdDbWdiJj5IZW0pLlxMQzZyI0ZvYktmdDNEVjo3dVViUiNVYVVOT14K
JTZTXHJWTFYqPUAob0RJOWssNDldNSlxQDg/Kzt0cF1jVm8zcE5TJytcKz9sdTpBIVwjU1ImVltc
OGE1T09lRUdAKCldLmZZXlBrXyUiQDdjYDgjO002OztsZ19MW2ZKQ1pHLGoKJV9cI2gwVVtYKXJO
OjhZKGNYU2ZDUDooU0dqR2ouNVZtcC5la2g5TkdiRiNrbWxUKDYpZWhQZFlTalAhX1VYPF9fKjU+
W0NeR0EnXzFwPiNNclM5Nm89NSg6dVQsJnM6cVRATjQKJTFsK1deQl81LVcxXFJDL2gwRCYwRmBP
M05cVHBlMjoib24/aC8oL1oxJWxARkhvVzRJLCFeOzxkXDRDXVxCUydiZmtOR1RuaF5rYFZUSTpk
RWVJXC5FKV5ZI3EuTFdjOlRPIWAKJSh1RDZhcDUwOFxdTzpEY2tAPzY1bTBRUjVeKWI2VWNlNUtz
JVgiOGZcND8zMmZYLzhNNDVWPEc/YztVZ2I/bE1xKk9Ca1ExKmxCdDJWXTMuWSsxK0lHOEs3MSw3
IUlJQVxLOCwKJTQxRV4kcls8N2RuTj10XihXJChwcFJoLWBwKmZMJVprKylJM2hcNW8xYURkS1pE
STdmVCZHPSsqW0xQOFtcSTYiVjtBTnMxUT42akhwYHQnK2RLNG0xIjVRKWg3amNIcWhXKWQKJVtB
Ymw2MVUlPjI0Qy9QLjNfUTVOPigsLzNHQkQ/JywxY2FvPklNOGJQbzhRVF1dXm1qRVIiOkc9TEpn
cyo+PkZgMkBgLyNkT1ZENDcvIlQwbHJZMVREYUwtKzFMWUpqbF5YKTYKJSgwWUshUzEuTzRrcEUr
Q1s2SWxSTWZjQnRYLm0pSThEJmJOMiVrKEE8O1pJXz1OdXMzJlc+STlobyJyZ05wOEhPWT4xaGU8
algjKiIvOz83J1JCPHNaP3I9TGNMLHRiZCowbjQKJUYqLVhgaSZBXlZddTEuPmB0L19aQVkpSWha
ZCM0JztDRm0oMkQiXD1NUGQoaiMkRzgmJDo5UU84NCEjX0xyWDNgOlFxXVUxYDxRMixQT0UyI09i
QEpVRkgvWEwxX2Vdbi1eTmsKJW4tYkIsaXRsL2c8Y2wtcGprYU0/VG8vXTtBVDlLcWJwTDYoOF9E
QTxHb0FuQUZVUW5vUWo+dHMjbnJBdWJmR2RaNS5XZUhDb2pyUEZ1IWZjSmYlV3NUY2NnamZDcWJc
JWxEZCIKJWMic2kkOVNTcypnV2VOblc9TWYnSHM4bi09KyRCOmp1bUQxVE8pZkZbXkQrTGhDKz1p
LGtbJzxKISdyaWpzKGBNMUx0Mzc9X0hlJ1tFUSpIL1c5N28tYjhyXUppWzl0STtzI0sKJWVDckNi
UCttX29PUyRkdTEyZiwzRFVfPjs4TmFhWW1yKztUZEpySkFmMWY5VkpwYF9kWldOXS5oUSRlI0Bn
Wko+SWJ1P1hOMmtidS1YUHVcQipyQjxYdFIoc1E/Q082MjJ1PC8KJWdkQTlCMCRmPjlUXzxpT2Nv
ST81KF1uQzZmXDRzSD5AYzE7OE44KmdDbUdrJmFmKFVbIkYnZEVUOGYpN0xGQGVBMzhIMz9HRSsn
PF4obFtPTltUL1trQlZwWEgmLE5lUV40Qm0KJUphbkZoIjcqXE9tMGdabiNPNDItSGNHLE40TXVe
XjlxJVJLZT4jI3BAaC5Nbi00S2FRM1ZEO0NSOWFVJ2tINEAhckIoUiZlQVI0aVkzO21sXEEhKTZd
SkNaQGJvWFhOMmUzaGUKJWtOVWZnYGwkZihTTzBbM2tlIUciJzxlXUdXRWpFTUVQI2R0bTs+KDJb
NUdFQCcqcisrREFXMDc6ciVXITEjajdKSWxgVSZbbSwoWj1QSV9DbWNxJGNoJmFNQk11RD9NNydZ
R1IKJVpMPWZHUFcvVTUyTFNoM2FtbFReIkFwU3UsVlNbQHIlODUkWVElX1lBIl9EYWhqNyI1OmM8
ckVabHRvP1pZc0o3Y1kycXNScU9ZVFJhMiJsQ2hEQmQuW3VrYk9SL0M/Ijc5b0YKJU4jN1tubnIi
P2lHMjc5YnBiP0doMEQ2TnUiUEBIXUQiKHFWcEBaazFwKWdNaFJbODpNSmBfLCc3ST8tKW5aQXFC
JDFbP3RCPF5oJj5GYE9iTDchNTVgYkI+SWtGTEtQbmNPPzIKJURNR2RbUDlUWCkmPzNwOW9gYmw5
NDxSWW8rMUxxXHJuMmlxWHExZFZCPHJZXyVRN3FdNVdWa0tgPG5bPzJhaDtPWjhFdEVtN1Y2LmJj
NXNORmQkajtYX2V0UCc3PzJHZSdkRnIKJWMrS001QU4kYS1MSzBkSU0pKD5JXT86NVM0SFRPNTRX
W2xcLnEwaFwsSm5fZT47dURuNTpQdEA8WSMtaGs1Qio1NHVdaVlEVzUuYixrRSMqXE5BJE5EYVhX
Tmcuczp0cSJddF8KJVsjSl0/VnAuPzNcbG5iXypGa1s5MU8sO0RtbWxeMWc0SG4kbC50KVIyNV9Y
KDBwMj8lVDA7WW82ISQuVnBsazxvckVOQ29GZjFmYmc9NWMqJldFL2EjN05qbGRXPzg1WlM2aTcK
JT4tcj1POyVnYEZhRDVQTlFGXSlNY2trVG4qVnBEcDR0X1w7Ri41LCdCbEEqZDcuYC8/VCxFLmtx
WDNrRDdTU29vWlMsK1hePXRGbjpXVWU0Q0RMYTRaYWFvSzwpS110JCVSOF4KJTIpaG82Tk9GSWZJ
JmUhdG8hPUNYRSMoRj41XkhIX2BfZF1cSCg0cCg3bCY/NmVRK01MQyRiP3FhLEpPV2pZUFxlQCUk
KiQrSSI0MG9jaGEoaS1vJilLV2hYVF48SitSPFpXRygKJWdbdTEnLkl0Y0BnbWdjWUYxTy9fZUVP
KFVvOTssMGMnSFM2I29pRHE6Omdob1YiOzY7W11NV2dBQCQiK18lZDZEbSY5U2IxSCYxNmFhcC1Y
WGVCM0BZOFNDaGUwS0xsPypzZmIKJTM5PUtQLWlZYF8rODkxS21NPUY/Rj5lRTtvcVAzZW84USdS
KFByJCcwXWRTPGQ5bzpfZGMwLFJdPyQ0LUYzY291RkRHbD04ZFxTNGxJNTVzIiREUWxmOlQ9Syo8
YDNvbDBNaD4KJT1oWzY9VEZIYVVkPl01aCFSV2gtVSJjSnU7dGpSNjtFaz1WNGdSO0IrU2xOISJS
MUdcZj9nYzVHTWNSY2cqNkgtNl9DOFpyWTtvPkZZS3FcUVhzXDNIKD1VZjdpJDdWRGlFZWkKJUst
dHVmSCNvUFAiJG1iX1hoM0o4M2MjYnA9LkNsPlpmSyMsb0JBQzY7Vjw2Im4jP11gPl5uUF0yU3VO
IjNnak4mNVU1MGtEIkJqaExXKTw4MFJsW2glLSRWQmZZK0FhODBfM08KJV0kISo1cnAtLiZFWFIw
KG1VQkRjVy4mWEgrJ0txWVxTI3NkLk1ULW5uWEksdCVjbUk/OClAOTpWTEVySVNFTTY/TWsjJnE9
QUlkZHJcYz0tNUZnRCRiQyNmIVs5cEMlPTstUnEKJTM1PDYmXCpuaDIsKzpCc3BLOkgoX1c3IjxJ
Py9GI1VcQC1BTFUkJHIkWSxpR0pcK1dSMkFOc2tVOWgtZGxgdV9TNDBQbEQoJjUiKmYyZSpwMW4z
TDlyYnV1MUc7PGlwO1BPWWkKJUtcU0UlTFsiaUUzbzNxZ05cXGEvXHFPci0kL0wwa09EcHUrX0gt
J3NOYWBATXBuSl5DWGczVUtWXD81UGpiNThSTWhQWUIrNzJVJ0xJPy1pQGB0SilAYEVXSjY8MlJM
ZjEvbVgKJWUyOUtvM28maSVDOkNBZy9pZGVWU05aXT5VSztxW1lRISNhLk1EWWBqXW47RzFdciI2
Yk89Jjg5J3U2blZPIywwWEBCQHRxJlg+J2FPXT9YRjpmTjIzXVFiW0puO2djKWNYJ14KJSxSJyol
Mk84czY8STZmcTstWUMlbDojXVBuMUMrIiNPIkdfSmhMQ0hSMVpeXztjPjJRUmVZQmgxSDMsVFdG
QChPSVNXJlY9bilCckhJK1h0XloqUEcmPmolMFlgU0xtPnUrbFsKJVFDOCFxPCpWXGUvKCo4WytH
NCElKmRUU0dJX1U4S1tMU0FMaiZiOF1nZm0xSkxzZUwyQCdhKG0kZVhmQEtyV1gwW21DcVxFVFJl
XEtzTE9QZycwckhoIkJcTU1FRjdBYWg3YVwKJUg6PktCZlsqOkBwc3QzIVsnWC4tZyojbGstKS5H
JEgjRytma0htL1ZOVzJOYUNBbFFIbG9XZnVMWjU+bDxaUmM1Y20xIkQxJUtrN0tcJFpEWUE6ZVMi
QU5iMmRKc11oXEBeP0MKJUNvKDRjM187I15MZlhsPT0sI2U1JlpdaFY1MykyUD01UmhwUzRvP1Nb
U0NtTD4ucihwJVZnJTJIJFlJVkRrPEUkRjgmXHViVF5hNj4sSkwzQzZkJU4qTGJURG5HJls1IW9l
ZW0KJSp0PkRQJ009M0ErW3JlVy5LOSJAMi5YXTVgPy0qSmpEYW9IaV1ZXjFFTnFzTkAnSmZhYDFT
a1cqaD86Q1RCX3JLWVF1Tzo2YWtNRDhcN0RaXlxWOG9rWkhuXlEvJk4obFdedDkKJVY4Xm1IamE0
OVguRSMoX24xSjhVLDUhaFk7QjtKUT9NaS8hPyJ1UkhlSV9MKD00PC9HWFdQMWgmO3RlLUlRVmM/
Y1lWXHFsJiVCOE5hMWknZmk7JiFHdC9IcmRGSDk5czZRXzYKJW9qbEFZYFMnRTZxXXJxRyRONGUv
LWUkT3RNcFNDSiRUJDEkJWFYVTskSUVdIVVYLWMvcS9KL2I1Kz00UWA3O3N1ZDcsbllrSDIuU2dz
aWEmMXFAUitrPDZicD4uN1VVQ0tIXz8KJUA6UiRqT0tWWCYoYGorRW8+SStHazs3Q1RWdFhqcypB
K0NnKT49IzJmX00kVDxpNmFsZDVTSTJVVWVoZklAN29JWlRMZHJTUmVyYj80OCs9alJbZ0FXLFhO
U18xJWFeWUQwTy4KJS8yblo+KkU5VTIxTVV1LWwwP0tsVmthKEJKWyFWdDNUSi5pckkhbVtxWV0t
ZVFTLHJFbydKLWBIJls7cmxfaUteYGRPaURbJitqXGghSEYzciQlaEluYm9jQSoqZyc5OTRpNmEK
JVdTYFExbWpvPl0zUk1KdDpzJCwtRjFiJkxwVlVEJWwwaG5kaD9cakFAX2NDZFJeNUQxaFdDYThG
Qm9uQEUrITVXak04b21obyE+MGJsY0YpRG5UX0hLVlAxW1Y8RWhuST9EPEAKJU1rPkUkcFtCMyVR
aVoiazZfREBhVEJGL0FdUWwuPXAyTz5NIj1KXyxBbWpyQWdxKmRvQWVAJV1jdTJEPjRoXDVQPUpO
NkY3JSJhZT0vS2xnbnRNVlZcclFGKjFmIVpmXWYsSDYKJWE4UCM9X2ZrVF1qdVNOImojTixzaihb
KzRldSg+R0NTRj00VXQvTDwqPkkwXDE6QkQ7ZmVNZy9PK0NlPD1AaWl0TTdRPCdlUHFlR2wnPyhA
QFBucG0tJlU0WWteKyItZG42T1oKJVw/R25fKkJaN2JmMC5iSlBkQjE4TCsxczVoP0wtLVtlPj5Y
W1NeciZdJDFeamdnZTpnT0U7ZVhSUDBuSWhuVjA7NEg3LyorQUVHdUYraUk9JkUuT2tJR1ljZ0Eh
aFdvN1BQJEoKJTFwcF0pZEY/TkxVOGFqIjRJT3FOYlAjS0AqZlQqQmBWZF1Kcmo3S1lGWihualZr
bzlVTz5hJzhAYjFubiosNy5naGNFTTdNPWRXYSleakIiKEVfWytUMWpMOU0vUEdzKFk/OCsKJUQ1
dCwmPEc1PWcpKigyQ2tbWXJlKHRXM1trR2R1Lkk5XlY6VWQpMi5sZVxIO2UoVC5RMHFDQiRWJy1e
clF1TW9BVWNNVVEtSCF0LlBqZGRKVjpXdWVASUcnbFZVbytTRC8mJFIKJTJiTiUzNykpQGAtU19I
LDstKGNiLW5BaidrSCxBXUYza19gRl05YFtRSzdxI2wuZjFLIkBmKEg4RTsuPGg5NUBsZChhQiRx
bCNJPTE/YEhkKCoscSVJTGQ6cWYoQ0YsYzwlS20KJUctWkIpaDFOZTtqT29JJEhQMjFhKGVtQm1n
KmBxS2RBS2g6U2VoOU9NV20mSk4qa2xoMTpELT9QJ1AhO1wraWY3YyhhTFhUIlU9XG8rNlE/UzxU
czRvdHM/ZGY+JGFMVzthNmsKJWwwQEg4ZiciUD5HRHQ7RDZbSGRkaGAyREcwanNaM2RVOCklcTlk
JlgzXylJJCQ6QHBjPWVlJ2ombzQhXW9vNVVDTlQ/X2BRQiElKilVWzo1bylTR0MxKFJlLDtkQDlV
Rm4rN1IKJUc6JCUiVEExOylSJTdsLEVrMEc0I05fKztzMEY0bDRuU2JrYUJ1VzJQOidvM1BJc1pI
RUdFN2siVEUoTnE6TWBcPTVII0kqaGMqVW9RJyNrQWVLbkU3bD9KcTpYVTY5bSxAY0IKJS03S0RN
RGAoKyNwLTI5XChQMikrTjB0N2A/ZDRyVWlTOmokQlQqcHA8Y0ZrNSVUNT1iY1IpIzokNXNxL05s
ZltoSnJWSDItamdeUWYiU1c2cDpZc0pZNSJeMzBrbD4xKjNTLCwKJTFCUWo6MCtoIyZJR05LVU1M
Y1NPZS50N2M4Lk90ajdRV149MGQ+UidMXzhzVUIyPSttMkpWaC9PQ21OKGJwbideLy80M21gSCM7
TnAqNWEhOSQ+ajk/K3VjVllDJWsrWmY5WmIKJVlQXCg3KSknXFQnLy8nIWMtPy1pbyhyTiU3KS4v
XnBuaUA1cWppPVpbI08sMEcpZlRDOUpiYUNPL15dNnFqamxEJz1zVy9BLkZXdW9cKSRLRzMqSDMl
OGxmLyd0IzhSbEpuc2wKJVI8ZFZhUzdaUStgb0AxREJFLFYkXGpcRVlWUkZfMigmNS4uU1RHSGxN
NTYnbkNMZEo8R1hVZ25ALTU5LCJTXUVLVV00QUxlU3RpRGJbXU1TTD5kMyU0NkBrUkc9Ky4zNyc7
WnIKJTpnOU1cNT40Z2FuPyM1bkxoQGctYy0rVyVCJixRaUEiSl06R2IhWFphMk46ITQ4VkAzaE40
JCZJYmtFN2JPJUBaZ29kTjtYYjBAYWNXVUs0XSxdZlFRbUVua01xa0QyQ1JvJiUKJS85NUJjcjRL
Z1kmI2M/M05cJFtSO0doMXFfcmJkMnBuUDBYaW47QSswbE0oJygtKF1DJVs7YmozXWIxV1RDRDpa
K2BISUZlTldlTmtrNUI2OldVZj9DRExqN0YnMyhYOVQwc0oKJTdDPUVXZV5xYTYzW1BXXVpdQW1B
KC1BailIPClgPjs0SWk5UWlaWjIpOXBFS0cyKFpHV1EsKVwnMzZiXzUvVE9AUVZtVTxVMW9vdFpX
TVIpcWdKRSFyOUMhNGNxLmZgM2EmXTgKJWdBTDchZzRFSlFOXUFaLSYpKldySXJMaiRYPnVlND9E
cy9uQFooIURKTW1bW01uP15GKj9zbjBGPSJgREhUOV0yNDdWOnJNVFxGOCZQRloncGsjSjVrOG9Z
XG9AQlFZUkUyajgKJTszY2lma3NYamM7LyM9YG5lWVQnMCJgKlo+Ok9UM3IwWW1GYTZLY0hYUmlU
ailPV2goUHRvam9mVTtXYzJuIU02ZSY1Rm5gcjshK11IQic0cSZTKnFjPnA+LWFOI0ZeXVdSWzoK
JSs7NERBJHJgXkFZQFRjKjtnNDs3KCZxPDQ8WnBuLmJDRj0vRVciWmRBIz5QVXAnJXVEVnAia29G
XWRnMF49UmctYz0nRUdsTjM6NUsxSlhhUkxVamNASFFPTWoqUCE2Pj1NJT0KJThVSkJNZTthb1Eu
LVJuOyxRJDVYKm1bP2tgPHNAXUU2QlZ1KFxvVGJTY2piPldDSXVQQzZYJSJEMDIiVC9aJEBMcEFI
ZFtqSSkiRWpVVl9yJVZzOTtgZzVKKTc0dCZ0W1pmME4KJThjKFFtTWxkS1wsMEVmIlc/aVhXWSov
VWpyYSNjZi1ubGBSV2ZwUlZeUnBSUWJwKiNfcisjJDo3c1txXlxdQWdWcWpMTXEsM1A5YlV1RGpG
XTxzUT9RO1I/PGlBNHVyXy07XDcKJXBgUF03QzktdCFgcjonY2MvXFNzJlgtT09iZC09RlA3Sjs8
VktMYz9nYTpdLmFHcmohYlU4IzRdXlRtNVlZayFLRGMnVW86XCxKZjlZR15wXjxHR2gkMyUxJD9V
VmRfNHNJTDEKJVVsI2FoPG1BMk0sVUQ6IV5Jako7MlM6YGJyVUkjKGZcSW1oNGhaZFBYSCI5Uz1T
N2ZQJj9FTjZISmJBUGY7biJQYkBuLiNEU0ZhdSVZcDYyUGUtQGlMWEs4aFRHKidQNG1uJCEKJTRd
S1JDWy5oMSUsXTg+LEY9aldBWDZRWDYqTVdcNUM4QGwqbV8jUCJZa0MtJiM/YkUpanFYOlZeKG1W
PmRMLjhAVE80JUpCRyVPIjErYmduNGNcZWlvJGJNb3FERUUsTExWRUcKJT49W0siVj9vXCc8Y2tB
QCtkdUwsXWVBP3NKJWpEWmhCR3JFYDdtNClOXissTTBKbiU+OiJSXiRJMjJEU1hRSickIyRSPkM/
c0EsZ11nKDM1YVZcZnFXUD0wZydsYThmOmwwUl8KJU9NSHVTJmVdR3A9U3JwYl0zRGZ0STtOPEhv
QkxmPyZCMjttQjtEV2pdaC1IYVAwY2VGcllbK3JyRl9lNTRVSUI8OV5BL0RpNytHdCVfZyotZjJe
dUJWcEd1SUMpVERWMj5COksKJW5DWlIhZFkzMCVbKFlyREZvLDlPay85RyFcTi1wVTRjSUpaXFJK
SnJeSSNJJiFwRVdMXXJqUDNAWUdGbkRDKTA9OGteK2ZRbGEtX2hNMzlMK0ZQZiVfUFYsLGgiR1pe
YjkyP3MKJWlBPCQ+LSwxTVhoVEkmXUIxREAzJG1kPCxSQHRsaGZHdT9XakQ7OHBYSTpkZGpBaixc
TGBxTCNMJlRRPFBHQjdKLmdvczcxISxWXmpOTUNJPXNGUiRrbCVJOTl1UE4qK1EvdEIKJWB1U3VH
UiJDYkNTU14hXmFcKlZpcD02Z28mTiFjVEw+PkM5V0ZkSyQ+Il5fL3EzLUBYMTp0RTgobiJhaT8w
bl51Wl9IUnVYIjQrKG9XZl5MOVVNUypYcTFmTHA/KmhgUUgibFcKJV8mIiE+UGtIaENMbitNQlNP
WjE+a1ZKNVQva3BERy43ND05KmVkaD9HVUk2TW8hWCIsXCYnOk0zYyNtcygvRTI0T20rOlpgPSku
RW5VPkYlNTlWP1I7XDNEXWAncSFbKCFTOSIKJT50OT5GJCs3QklyWCQ6TmljdGckTDF0WlljWWhu
TmNaSjYzaD1XNCtTOXVpQVonSStVUChfOGQlaGllKyZjXSxFW2FvTGpSMmRiPlpeSCVyLEVHbWZm
YS5PY1VqY0s9ZHBlZnUKJWltVzNySmpxVzhHcEkmWy9FMSUyUzopYyxHRmhOOUolQVBzZ1BkKCFq
Qzk9TFA7LXI1ZWtLZi04Oyp1cSEiYXVgZUhII2dbQVpmI1woTjgqMXFybyZRdE9MaC85Oi5fQyF0
SVYKJWUiVzxhckZsXmlTbFhGLVw5UGkhX1dpJ0JsIVBVI2JRZ3BfXilhc01jZShkQlNwLWw6TWdG
J2tnPyg7MkklcmZRJl84VkByKDE6cTg1c0hKN3MvT2Y4K2dKb081cVBDTVdDT0kKJVZZWzM1cSpB
S0NlNUJNRjgrQVteanNTIiQ4Ykg2SV5NWEVTRGk0Tl5IQls1SEBGOV1hajdlSCREZEFFNWwxRjRv
U1dGbihHVj9OUVUlVF0lVCZlbGY3YWhwX3AxWUowIyY+OUQKJSM1YCEzQW4nRkwkSWRWPEwoI1oo
TDFEQDFrS1NHaTdeSz5xW1YnTG5XRDMrNHIvMGZCXC5OdVNbOTc3WkUkS0Jicm9oL2M9TFxyVGdl
WHJrWDlfTkYxK3R1Q0dSUFkiNXFuUEMKJTBGM0QyPW1EWiQ3TlBeZk1gYGUyWER1b0ZdQyh1Qk1C
L0glcExIVTRZImlmJE9mSy1XQyc6Ri5fXSViTVooTTJEUSlAM0suIVQpakFKLjZlU29OJCQ2ckg6
VytpOzpzRyIscToKJW9ZbFlNZEZDYV5kaTpARClIXSRCLGojWlIzQ05BS1I0dWpOR3JAaDFcRDBe
QDpdTS9oJkFVRzBKVTFzOVtMJEcjNUw4S1ZEYlNlc1ZcUEEuQWdlcVBiXlROMWlYKVMmQ1doPC4K
JSd0PEEtI2lDTkBMbGVXdFBBZCVjZSw0V2NpaElmSlpGIURecSJJZTxmYlZwX0RKJDY6Q3Q+K2dp
QT1EPWs0LW8qR0grJmVnLjdKNzs+XFU3TlBkYzlwPSJcPjBOJCtCYlNTYF8KJSc+LVkhTiEtPzIm
PihMJVQtcCJLLmRDUG5SQjg4QGhrL1p1ZFlFVSxeOSstVFgxXzpdbExwbFpNXTI7bUJsYjRQXi5C
QmxfS3FPL1k3IzRnMnU2JzRuaVFNZFpVcl1FUlJMW14KJVcrSmdcbT1RK1pCYXF1YjElLEZkTD0s
JDhNVSxHNjU0QThWSiVEZUMobUk+NE1hWEl0aG8iYVhtOCVyI0BHZGdySk4xZnE7NCxYbHAvTVln
OEg/SWBQZVUrY25pLTVVKjsiTU8KJTRPJz9TO3A6aVFGW2c8T3BnallJQHJickJoJUtqclM9Sl4p
MmwkJVhbYTdBUVo4Njh1YGkoVCNrJmZFKEU2MGIjVTA3Qz9kXCNObVZkUCc5NV5BO3EqZVRzRmFR
R1E7cURpYC4KJUNTKi82SzlecUJmcFRhJz8zMWszIipWODU7Xj9xNSRiSzJXZ1ZgcCFCNzZsLyoo
O1NQPVpnKFJfPmNXMHI4bEZoXFhWSVskYlxGLipoNEldQjw/NmROQTZLIl1CUGwtLlRhTXAKJWFe
ITZdRGQ8X21qSk1VcFQiWypBQXNvLTtVSF4/LSNOc0ciKy5FLXVjWCRGV2ZEaiw8czVJRGJwaWxj
JG9ybkNDckdWWilZUHQ8OjhvRCRCPzRkNiZedEw8Sm5rVj9TYSFZZyQKJW9kdG5GVGhkLFs9LmQq
ZmlZUzJKLyo5L2VfWHMiVGIra2YqTmliaShxXUM2IjxlLHBoPDFnU1hkJlxAQzZaKCk4Omk2Ul5s
UGtJWGhpJT45aDVYaChFSlwtc1B0UnBNKVMpbm4KJWpKR0FZP1o4KyhQJD9uJS8/I1hFKzQ+b1tU
RHRTSDRiNUd0WCFcMzs3MXUsclUsWzFUIlFQWDg6VV1SJXJTM01MXkxHQGY2UFVASkZgTF9FUCJY
OXAoWlAzYzFiRlBhR3U8PVoKJVAwPlVHRnA9SmREJEBXQUpdKWIrblxWSkdTdEZWNFYqLTBDTTki
X2gwJy0hdUliSFwzPW9TIlBeWmoxclBjRkUnJillZz07I0JcVzU0aihKNTJ1SHFKcjstO19YYnQ7
P15KU04KJTEmbUgtYlAxWEllKGlFRXMkYFRobixNJmZycj9AOT4+MEdCQjhzanJJYitPJWsnWidW
VCI7UFBIb0YhLUVIQjhlZEA8RjVXcVMrOVNDQlljUFxCO3FtYi1vL141dS11UylnVWEKJTFdSWcu
XnFMUGVsUilDIjJuST49ZzRcb1xcZ0IvPyN1PkY5N3NwJllmdShLJURfbkZdV2FjZToxN2JeVUpZ
JitqXVdeX2k3bS1EKyZGLXMzRyJPX3RXJihGV0pOYjctbW8jaWIKJV1LdVJYNT90SylOPmxAQjpD
JCdSW3MzLEwqWk1XaDFDYTlCNy9ZRCQkW0xRJEs7Yl1sPFs4bCNJPkR0UUhpS3VbcEFGJSdLS1dc
PE0lZWVCWSgmK2xaR2FMSj03VnI3cmhyUUwKJW89ZiJAYFxaTSNqM0s0SUBzKkg8TzQ9Z2Q1Pz5n
c0RmJi9KaFUmWlEmT0xzVldQPFVzcDxVI2lgXHJoUDg+Um1DOTdKVUdANGhFUTxsM0xyPjsnXjZe
Y09pYEpsI3U/ZVs2MzgKJUNWWTpKUVtuZjNZQ0xXbDVQL1onTWA2Oys1QXBjXW89WjZhYCNrb1VK
I2MvJSxnWVFqbzwqWydFbkwtMSs5Si1YOGV0bkBXZVkyKlU5YkYlRltCcmBaZzJUUG1FYl8hSVF1
LTgKJURRUHJUKktnPW1dSyc8NzBDL0RVRnJgIyosNWJVQTM+cmIvVmBzYEpdMEJhYmRQLiMrPycq
W2krWyw7ODEtTCo/RXRRPV0mZiNLKiE2LSxKQkFCM3AzWmRAOWhxQzNCSENLRGsKJWI7Py0rUUNH
RTY1M1o5ZFNmZD1SZyxaYy8/aVRyZytoNFxCWW1nK1JNdFVjJmZ0LzVIczJfXENvdURLWURzImRq
bXRwZUlEIzwhZihBM2FKTzY7VCRjPjBvaCkiRWRKTl5AIm8KJUdpQjRyKUlHOjwoIzpbL2hfPCM8
OCQjQV1uZSlgLktLW1k/WTJOMUpPMT04L05xUy4/PFBxX0BMbT0jciV0Q2xqXGxKXEJyKipJZVYv
YmBxY01jc2NBbWRUI1wmTmlMbTItRWgKJWdaJTNjWWghWlNSZGo9L2MqJGppXFBRXTEiWFFjNTBC
PEE7WzhAPDNMMTlKUG04blYiXkZbOldpdHApQ15FRDdeW0puaTdubytSI1lRKzdQck4jczRxZ1xW
PGh1M0hQb3JuQUQKJUosOEtHcyNYTmQldEZPInJvV1pza0ouJmUwN05vJHM3bD48b0FfTSdlKTEy
TUosSlZecG02SyFYdVJMNkFtJUlvZz1TMSNZVi82P2MxVEVfbiUvPSVJZEI7dWs5J2BVczFTQCoK
JXBVcE9wclxtUChAYSVuLjQwM1EoZE8pNm9eT09qIWAkXiRhMTAuN0tFSF89LCxQQ29uaHUwU1wl
Q1o5MjA9aSlSYmtUZkk/TjlqMF48S2w5WjlAdEBWRTtjT2M/UXEtVmI7QFQKJXBvYVY0b3NvUUol
SClISnJWdWxBaHUzNDRyOlEtTnJHVls5RSJDUEM6ZF4lbXJyXFs+MEZlJzZhU2IzSy9gaVZDSXNn
UzZJNCU1Sk1lLHJgWi9NOihybGBLc0BgNFFTcnNTQUEKJUVYdDBWZ3AscHVtJWVlYydGQ0xVMkgw
LD9PZnJdV28kSElMW1FIREFbcipzKDc5QzgvWjgtMXQ2QlUkIXJrQzVlITBcUjNKNmtDdU0hc2hE
RCdaV2RZTVFQOWYqNmtYQV9RQSQKJVRIWk5WSWVMdGBaVUg9cV5IQi47cFxyIWdCLE84P1NQc0xA
WmxDQChsUmZrbjVOQmQiSTVrUys/IWEuUCdlYksraTJMITlMNldRTlhkOXE9aGhqPDlWXV5IXzlE
Ok10bSRIPjUKJVs/JWgxcVFsUy1OImIlXE5KdWM2ZjcpKmtBSEMyNlQ+Zy1VMVk3bGsnLWgpUSZP
UCNeYCh1YSksK1pBSCcuQ1xAUD1aaXQpXCw/Q1ZkZ24hLHNVdCwqXWBnXTxfJS9YalwrPzMKJWE6
YGtMPkBcUkdBUUBdMmAwOjE6WiUtTXFvZ2wpYj88XHU0WW9kYi5HMiVLInBSOmFCS0FrLWxdSzl0
aXIwcEc9WnVNRk1hTFY/VzE+OC4hPmthR2lbRjpIOHAzQ21XYjQtXUEKJUUhLzJXRD9QTV5URz9Y
PyEhLEJYXTtrWl41aChAWDkwIT0mUjxKa21XYCJhQGZHRVlAJmQ2OCNyLlVgcz8kMWNCMztdI2Yi
cGZVI2hBWmF1cl9hWD9yXVYiViNmPWtobXU2PWYKJTVRU1JSZiVOcjZpKlkpXStnNTstUTVmcGds
Zk5EXVUqbyFZJi0wPG5YSiY8cG1TM3A/NkwmXy48XlA6MVJBOlptciQzX05IUydXX05XYmpIcUx1
cXRsSE9Ub2BcXDZaOio1TXUKJWRHckdDYE46KmBIQi5nb1hWQ3JzT1VOV2wlIU1ZYjZYMU9VSjxF
UEZnZDxLW3FwcktCXFdCVVxQLmJMOl4vWGNvLXQjKmRaU2lMT2pmNERLRjEvNDBkTm5DVyZIV0M5
Tk8uNSoKJS49KkNsZytdblsrIVMtIyhyI1ZsNHBpLU8jLyI4SlFHKCxNY14yJlVuP150TT06IUQx
VFJjaVEnSmVwcVszMUdxNmlaRm85YmE/NzZAJk9IaVxzRyRyNUM3PFk6bDVYaFkzY0EKJUM0RkA1
clFtYUBRPWE/XzcxMlVNTnNINVVFJFFeKDk9LkQ1RTNITltuJE41Qk9QM0AwalE+O2InYmsiZ3BV
Zj1ra0dSNTpjSjlzPEUxbGRQaCc0US8uRzBUQ1NsRElta1AnLUQKJWMvQEdEaEZAXHUvKCxINEI1
R15KVk86V3MhSmg5KTpaSG91ZHMxXmA0T0BhLSdjLih1Vj5UbkNuWVg8UXI9JVdoR1s8bnUvSiI9
WyJwVzlIK1hBIUhtaz4pYmF1Ylc3TE51JW0KJWZpaTFOSWVWVWpyOkpiJCN0ZiQwIk5SXitITj9t
UylYRjZXTSIwNExaWk5yQF5RL1tMUWFiMkQ/UGkkLT05Uy1dIjl0dF9yYl5IWklpSF5YTXNOTHI7
REhxUDFJO0ltPjUjOnUKJTpqXT0+YiNZJm5YZXRINlJZZGtqNDIkcFpGWitCVFFOLT88RFMxLU8m
US9SNy1zNys7SixBKC5pcSFHZDhJaUgwbEwpV0BKKzteOV0sMU1HamwoWCRSWVlFRD5uPnAxNkYv
dVoKJT9Qb0kyRk5ZM29vKG9OUictNmRxZXAiXjdxKjNdWSpxJWZacklgK0U/Z0QlVURWYFcuYkZg
XC5uIWkva0oiZC9zVCdlXHBtV0M0a0I6blRkNWBIJEcyZURQJ20xdCVrRWJIW0EKJVA3VU45WG8m
OlRuRmA5MUtwWW1laWpMSCNbUnNlIVg4Q2NQQTxoI0BdYEsrcFYsYkhISWImQzgyUTNTTS1dZyQj
UU8mM0tcWDsldHMmPiZVRCs2RmJpInQicGdlaDwzU2QvJWQKJWc9ZHBpRUUrNG5hTl82VT4xbipY
Sz9fJ2NQZWNiNzJTdU44Q047aClIZVghWjF0ayRNPVQ9cGdxSmA8PkJucGhqTko2MVk1XGspbF5p
TkQ+KjtFKVVZb1FxMmFPYkVCYVVQcSEKJUlZak9TNSNHa2ZxLHIiaWQoPytFT1wiZGJSUyZzM1k5
UUhOI0owZ2gpZ0M8WTU7QFdXTD9NXz0mYl9iciVsR1c6TTF1NGBHPzxLT2d0LWUnT0QqTio8bycw
WDNAY1w4Rz9xUl0KJUtwSm9oM0E3YlgsciVZa0s8THQ5YltCZW5uYVQ7XFFvPVIwIy9ncVJhLGZw
L0UqX0FeOSpLJmI8JDJTVjVYR1Ynb2RWXjIySEFcaC41bE9QSmI/UXNeV0ZiWXAsYXM1JGpRKDQK
JUstPnI7KkdyV1NfXE8pK2BMSzM3R01ldGIpZ0FqK19oYlxnZUBWNmo/ViFgVWEmWkFua3JsWUkw
QVw2SEVYYE5tcT5hVzRMNTNTOkJDMm9TMldlQE8nOy0/LElfUCU8Jj4vSGsKJWlGIkR1IVE/VFcm
bSMnamcpM0giJD0raFErMDZdIjVrVGteMVZPSVdyKXA9cm0wRyxxYiE7dWsvckxwb0ohTzhnWS9Y
Nkc3IWpLaWw5P0tRaExEKnBtYmRybUosJT8+cCVTTFcKJTpbbnM5YVZVJzksSkojK1JQJFE4RVcu
RV86TidEJWcuVjM5M1JrWCRyNmpFSkI2N3FjcDJtVyhnNy5DREthNCgwQnBZZVY0OVonJVkrZVAu
MDUuS2IxVWxlXUpGJ2pnRjxvLD0KJWY4ZWRQLGk9S11cU3VYPWE3UmVoaTc1TC46dEFLJT9eIzxk
NFhbWERaQVs+ZFluT2deQiJYYCpbSlZlL2BxTSljSS8oc0g1XE9KKklZWjVgWGtJcDJgNTVBQkdY
OUE2QTI3MScKJWFBOEt0NV9rPnVHUSRvXmxfTkE3N0MoczppUmFRPEJPNXRNOi09OiY5ZStnV0ZM
MVJgQmc/bVFcY09dcCdXI1s1RWR0MkZibSVkVVdsPF8sXV8oSCQ/IXM8dE1fXnRsY3JYJzYKJUs9
MWJQazcpOWRDU1NscFdqaTwhQS9zN1ZvaEwyXCEzSUJsW0FXQixoc2Q9PTo6YjRTOypvPTZPYV5b
L0VMZ0xsWnEkMV1tOisyUlhCT05eSm1cVkUiQ0hpcEgwSyZLU0MvQFgKJT4xSzpEK2BgXF1CL0Yl
IU5mPCpgLV1jR29bV0NLM0RsZEUmSlBjU1RFbSw3QFtwV1A3RihaNFY/Sk4vOU0kIVE+XUFCPDFl
PlorVzZVc2NsVEtAKlhhSGdbWTFAVDcyWGNWQkEKJUcuVTdiXmItIkNAJGU8YyRDbypLYFVyKS1I
S2Y3JFNNXF0hJGIiNFQoY2Q6aiFMUSxhRC40ZCxNSFZrcUZjWi5pJDAyQzUvTmxZQjpQV05qPFku
MGVbOVhBc2xPLj8/W3MlP00KJVwqSGJDJW1bTTxxOFssazdncltxOFwkTzJUQjg2LUZoX2wiT0Em
WGxFUCNmdDkmTkNJblBCTURwKkZRKjZPLHQ3PzwpLFUqVytPK3A3bDNMYC1nJiNGMDpaQjBuISc9
WE5Ka0gKJVFYP3RnNilMLU0/Ii1QUicrMHRgUjtrPThfKVU9LyQlLVVCRnN1NmZEdGZzND8hZTdZ
TFllPjlwOGZdXlBBZlIiJTEtSD45RDJaOVxWQFhLWltrMSYiPDFYWixBbVpuKVZRaWMKJTNIaDY5
LEwuISgjU1AsI0lEJ1gsJXJeJlg8PzluZzZoSShKWyZlK1I3QkFqQTgkcmAuajU6aztJTjZdYDJJ
Vmc6J10lVWxqOylUQ1xpK1hcQGEhYVBLTzNOSWxnJWhmZUtqOi4KJVQrPmYwNDVgYkc4KWFdUlo5
J0IxWjwrJko+TCVNR1kvS1xvVGZZNj9KSzIudFgoUGMhXThYV1w/MSFfcDdbTURWPFRoQ2tiUCc8
LVhGaGA4V20hX0U1dTVwbC9IcC9CZ108V3IKJSY+WVlhXERWbWY/Pzs7Xm5IMyhzV2NSKnByOCUp
WSozbGhuUz9KLWNNc0hoc1RnTUNIOi4uVU9UZT07TlRRXUJKIUQmYlAjMmwvbURZKCNXIj44Vk4/
WyRLXG4qNCZxZChPUC8KJTZWXC05OiEmJ1pcXSxiKEtpTjUyKEhNUVQ8STUxcWArbUREVHBEQ2Zk
ZyFTVD87Vkg5V1BZZ0FOPmhCdWo1KVpuWSFGWjYyN3NvI0hXbl4xJUZkcUZfcHNxWE51P0hxZEBQ
QnEKJUEtSTxVWFk3M1M6Y1tOOTxVT2U9VlcyXEozYmlDMDpBRmhOU3AkY0AwJWgkISQiZklPYjtN
RW8lODBtSyNgSlRhR1RHRDFKVi5Mb1wxSF1JOixQVyhEOXR1YTIyL09zQy05TVQKJV1tWiVBWVso
YlZqVD01MTttb3VBaWgvdHJMOFY6VG49MSFBSipoOSxuTihSUyc1QzlBKSxXS1dmYFB1J2hlQ3VI
WC1NWklASi8ndCVRSU44MGRFL2dtR2I3IkhVQUJtLylVZmoKJUJedCRIY19KI28oWDhMViZQYUhq
PVo4V2hTZGBXNWgnQi0wKlRWdGowYEwzVlU0Zl4pIUVIQE82YCtOcXJITi5TJFlRMSNxSDNOZENu
JnVzQF4vR2snSyszcT5jbDFAaks5LiYKJWtyTGEhSCNSZj49TT1ANzZWYVlNUWltciU8YkgySi4z
dW07RUFzNjAuVDcnLkMpITA/OCtYJCRFMHRNU21LRTVFRVImanVkJ10hR19IUWldbClLIUJWS19S
VzRBSkc5bjoiOTgKJUwhSHNFZVRwTFNuai8tKjhvZCk/Jlw1JTI2NjErIWNfTiQsSFRpNU5acFdt
dF4lZUctPy1udEdORl1HNj0uVT9HXUJaMisqP186MCNCRnBRaS1UN0xmJi8/WyQzbGJEQjxHOTEK
JSdEPlRxSmJ0UiZdL0Zdby4vV3Q0ZGhOXGZsXU1rRSFLXmQvIUdDV25GPF1cbkZzSk5Bajt1KkM+
Z042Sj5zMmdoRERWImdcb0pmMzJCXXU/bT9QdUFQO2dNLEcwZ1ElYDBHVzMKJS8wMy0oP0djT0w6
TTVaJnFgSF5XRjBiQ0lLO3VWIVU3XSVbZVdoMVBMXjQ7SiJjaUFIZCh0UXExO05FIVwyaSNdP18+
VURfK01wYSc5Q1JFWjZnWTVjZy49Tk9jXVtWOHVcI0kKJUssQ0ouRSdnXm48UUxgYDs0PmBfVHJR
XjkkZG1WWSVRUHVONSgiQFg9PC0iKjFfV2dPXmZzWkQjQTA8bjNzXlpGKS1ibjZBQVstaVkuYC5n
WWNhT10vIkRsWSFST2VAPGE/THIKJUJDQkslTlc8OlJra2s/LDZbQGxZT2NHTTU7aj4rUmM1L2Jt
PiwyW0gkPVRjZWoyJ2xAMSwuN2UtQkdrLGM0IlYuN0VUKExgRFdcajdtOEQ4ZW9KUWhSbEIoYFFB
JU9oREMyLXIKJURqKl5VQTJbamNrQ3J0KzUjSE1ZQm5EWE0yMDdjLixQZm5XNWNZVlpPUUlPQUMj
NnMkRzI8YnI1MGtqXjUjSzheWiokW007YTNJT0g1J21JQCttNU5MZiM+M2BqdWo5V2s7UEYKJTA4
a15fP0dQJzUtLk9JczBiS2IyVUU1JGRFSj1LTjMzdVRqXjZQM0Q8I1dncjtCJVNINzonJWFIO28k
VGUyO3NbamU6cCprPUBOPmQ9SiVwaSI3QCtbVUJaPm9eWDpIP1RSaEkKJUQ8I3I1O2duISllJ2Nt
RGYpbzdLS2htOHFiMUlHbyYnVl47I0JCOFcsRGI7PTJTbTtpIlsiYzplaCJdbSkxQ2cnUCpJVSgu
bGAudS9tLSR1TGNCLz0tcnAnZF5vK1ByWl1EWEcKJWBlUSViIWg3aC4vcFVEVz4+YF1CIkJpUnJV
WCc+YyJaQiJlSUZlZi0tSDhQcUMuaEZmJERPb0RnJ0UzOWloSjlZbVMoSzlERCkyQlcuXUdcLkdk
P0RKRVJeTD9QKXFBJEpoYDUKJSEhbGFDOGxDTkZATzgvK2wmJz1IPk9UK1llTjknQUBZSz4uPzo8
WVM6bVlIaFNYL0NaSTowJWE/SCdfKCJZZ3VULkhbUnRdT1o8ZTEvcFdJaihIak5dPjNBciFNJi50
IWpSS0wKJV51NyFRaSNKIj0mX3JcZ19iQWQ/U2BxXUZMMWFDYDkxLGY1S2FRKSZRLGQodCdUJDki
UmU+TWNzKDIxNWkxWDhvNlcrTUVdcnAjI0dFUE4sNG8zQUI7LSZNYSQ2JEZCJSsnUycKJShdW05Z
J1puVW9YOVYjZk5eU3FELURaPkVuUlQnYDVdZzQ6ISlgT201XDtAOmpUYU4hLGgncW88RkMiZSpN
VmwmI1o6cTxiYGZYJWktZG8wQkc4YGZVXXI7QWtRMm5nXExyT0YKJWJYTl9eKl5YUSxkcTxoL089
MmNYWTplODklOGY5VlEkQ09vVlNFbTdZbFVSLkMnXUpJNjg4SFBwcHBwXlltSS1cSFwnI3E+W11e
ZVV1MkJZMCQuT1RSJGUmaihYYTRKOGpaOHQKJWUqQDhQZDheZCJMLkdxUSRSMT5eT2RjVlttL2Rn
LE00b1xnSTotSHVdOSVvJ0c7NVdYOTNdKV0sQmlWPT9oIS0oQzVHZyhMK2tlWisrPmJmaiVcUjRU
MzkhayJmdXQjNmUxLjgKJVRISyJePmAxckZuNForTj1YSTsxYGZzbWtOKXU4XWg8T05fUmhhbl0m
Ql1gbDI8RmxMTTcxNHBaOC1MbVBbWCpZOTpxVWJfI0lHPDY3YVlDYVRzVWImSjJiZkwha2kwTDsp
UyYKJV1vbSZoZDZzayZwZjRYUW0vZ3Q9anB1dGIiKT0pNExFcEgpX2FFYysjKU0iaS01Q2VbWmc9
LUk6dGhVR0o6cnE+aV5wMmsjWjpEcCgwMXFpVWxhVT8qYWs1XFIoTiFRYmxITyYKJVY8VnBQQDtX
U1hlYzZoKSdQO1RiOW9SRG89QGFFIjVRUlMtcXA+U3BOWDlvQVdRZnRXJnNuc2FfQmZBKiRKdVx0
J11LK0s6cG9AOmRLYmZnS2Y8R3JhZVpsLzosKFdTV0Q/IUsKJWJFISFjVztLcSpJQmNUR1YsJjJk
Iy1QR2NiU0Bkaz1JQmIxaEBGTXBYTVJFNlFzKFRzI209Iig9Y14tdWRzdVB0S29pW1lBRSkkOTda
RVJyKy0vdHNTJE85bCYhM0QoLCIwJGMKJT4wLyZPPzotLC5LZlRsQUUmJGdFPEFOODdfOmInZFg0
LTUyZFxpLkIhMlJRZjItaCc7Q1A4VzRZS05lTkJJa1hVSzJZSj5hQ0Q4VUBuXFg4NHQjLHJCMyRR
Pm4zUGssbCJHKi4KJSRcW1BGR2NnMXJuVkw0Vm1ZYiRGMmJHQ2E1Z1RYV1AwZyJoZ0VZNVJTST5s
bmZ1O0w8PnRhW0cqVyJ1aT9db1M3XV49LSZDVlc9JSUhXTMzKCpBZiUzPktmIWVWJUxjSmR1YVwK
JXMoXUBSclRcS14kMU5LYiVyMDw8TkAhdEdwaSxaa1I9aE1ZYEYzXGU+I0NxLU4icmhDQHNyNzsq
QUJbPk5pblYqSmQ7cFQuQT41LURYdTVQOlFHI21mbUJ1P2IxLTRXZyNNIUsKJVxaKmtrMk0oPU5j
W2ojVVg8aGNKYSpqXDZnY0MtYj4ucVdbV1xDRCcnVyNtclQuSiszR2hPIWRJbU0qJ21VPE9VJ1FM
Lm8wWU5lN1c9Iyc7S19JOW1UJVskVG5fJFxlRClEXW0KJVl0I2ZzZzxsKkBgQyI7NUxwaDlCUGZv
aVcyLEtIcSk7IjJDTnU/KGdoKyxZcmgnKlVFIzdpJ2k+UUFjVydyUDwsJ2RdNnUhL1RrVkI5QU1f
ITpjSVdnSEFSPUZ0PkA/KVYhMlcKJUFIYCEiRGZmREVYSyxlNzlxOlIxJjZYZVwwXl1dUUBeIkJF
a1Y/QFBUNm8mZ2pBV1goI19DOzRaQmZSJGYsckk5OTpLSDtfKDxeUnBPYGVyPldCJkU7NG4+KVw1
OUxbTXByTCIKJUVtR1s/OXQ9WlJMWSE5KFRJdVgpTiV0PzRtXDs8c1BFPCJLTjpJXE9mVGdoZkI+
PmhFJjskLm1TXWY/P2U4aCJTIW9iNm5BQm4xUVIjOlJsUydQbjc0Uz4vbkpIZCRIZjwtJCQKJVQo
JG9FQ1sibkpIaGlfUCwjTydeWGc3JzwsMlJRTGR1XWReQj5WJV1gX1FMOzghaC9CNUkhZ1RrImU3
blxdTko7JkoqNlJbXzpAVGM6b3NvMUErJklDY2hMRE5MJ1tGWjw9UD4KJUZoYEozK0ZhM3E9Ymg2
SmopNmkpIl5QTHQ6UlNiRCZ0XzBfcFdWcFttLVFfNlAoMzphZmEyU1w9VFQ2W2xtaEFWXy83PDc5
Qy5VblBVIjcvTUsqL1pkL3BgXzJKTS5HN0JWOW4KJUhQWkFqOk1TQ2VicmZcJztDXzN0UEklWC1K
ZUJGakZwSUY6PUtwWk5XMkdiV1NaLU0yQ3IkOisxPTZkWkRWWUo9I3VgaE4vbkUiVzFnUkx0OEFY
byRXNVthWSJCL1xWTllpc2QKJS4taFl0JSg7ZCNcUllmb1djTE1BTlQ3OFdfTGpYXGVaVkdNXTVg
LzdyZTFrZkhGZDAvZ24mQVVHajt0Jy1TXEl1Xm1wcl9LIjsyRUtGTVQvJlxzV3IuPm0oNTI+a3FU
NlRmQiQKJW8tMFppb0ouLkVqK19vJ09lclluIVRQTmBeXiUjLmBtaGVVS1luKWhVS28zTCdsZzBy
RDA7QkJnRi1vcTd0JXA7RSJwOlIrMjs5U0cxa285IzU8LjFkdEYtQ1h1Q0drYDA8clsKJSInZEMp
OjBsal1nRTpAbz4wRXErcF4xYXA4OG0vPW4zMDtebzZpM2EiYXEoMjpmMEwtS25MNWU2QHJkUFZH
L1MhWFgiQUdXRk9gK0E6VDpoZV1KQSklXDtLPzlNLHRQSWBTZ3MKJU1tRDk2XDBfWSg6XUk8cDsj
L0o7MUNZMTAmYWZbMCVGMmdFYiJ1ZWwiJHI6bVZGX11rNWNEQj1AdW9KO190L3A7YD1YZV4hUkZk
WzBpQ2s8NVtKLzQlS2spQmNrRVJsQSRdP1sKJSpwa1svPG9TQUBDdDlEOV06byk3ayE2U00yRmtK
dG9oaDc+WmxHNUMkXkBUci5BKUIzS2NWVjVJNXFPOHEjT1lBOCtTYnNYPyYqb2wsZWxeTV9NLmA0
TkFNOz5FPmRoTV0mOS0KJT4pWG9DYFtLNWVGWm1saDhCQCplQTg/aUkvWGNvOj4hdCRPOzUxSFFN
LWJlcjlwXSRrTF9FMDtIdHRUYC5xOmxoSi9FbFArYkM7Yyg6UCExSUVJK2kzbj5HWU07QTstMjI6
VjYKJWktR3BeNChfKFBUalVALTNBJDknbHNYNDpRVj9CbWVMa0BjRWZTUWhDWlNfZ0xvIS1LMGo4
V2JKY0dmMUNZTFwsOUE9LDJfZyVLZWIvPj4rZDBWWk8uZy0mNyxqZig3RFAjbU0KJUpRWl5rQnRV
YigrV3NjPl80dUcvWy05Oz1OKSRASCcxI3VHUGxaZS1QIUlYWExsZlYxLSg7NE88VGIrVCNLPDUj
Tm49OUNXQSQraywmTUYxQ0lcRTgyMD4kJ2Y1P3U0MFRHKDcKJVdnOChVNjVSUEhUPTFwTjMrTDgh
K0JhTGRJK0A4UVwwJkZwUSdhJlApNlAjJChxPSl1KF9xOyEkbFhxcVNLPFQhM0tsaUFadDlzJWlE
Pm41aydDUjVgc2suME4xUCJgRjRTS00KJVtdVXBhcC9QbmJENERgNTVTWyNeIkY6L3NGOFNXc0k0
bE0mLUdFVVJKXVxETU1NWDxfLkckaE0oXWw7aD5xVnMzXjlJJWM5WjEmJyk7L28jSV1uOzxEJ0k8
Vi47KylLVWxEXCcKJUNCJUdTLltoKWNpXiNVL3BFZGFPJm0rWlIkITEsTmJqO1o9Skl1MyhOJktZ
KkMvaztvPEZcRkhkT0FmQGU+VTpZMV5rPl9STlBNXWVmbTQxQS5bYkpdJVZGP2tUXitkNTg9c0IK
JSckXCpqOCxTODhTPkBDL1czS2ZmPF9zIktKN0ZlNjhwT2A5VDxTVFllJytMPENyXmdeLz48RDhb
QzVqMzRmbUhGXWpXL0ouSDFtTGc6OTZzPCpsNSc0WDRBT2IlWl9OZFUhV0oKJTtPNE1CRVkoYCE9
PVVeVypKdTJbWWNTJDI0LExqZzBtM2ZoLV51WGpuPk10Wk8saUVeJSw4KCxTOz1sIThSMD5wPCdf
UisoUzdzRCJFT0M+ZjVNWlYoYSktdFUsVm0xTnFQIUsKJSphJWEtVCFWNGhhRyddUGw+aDFgZGBU
J11wXk5OVzlNYWpaUVxLdSNZQllPSSInSzhFXTVkXUdRPi82SyRmUj1dXUZDJ2JaXl04aTR0Tyla
bCFeVCRgW2BzMUNuWjduZmAzYUUKJTgkXjc/ZkpuUGg0IjM8dSxSQlBIcDFeYFEsJ1hMXGUkZWRj
ZTEtYmxtIU50YnB0MTkpKEdeKWxYX3Q3bGZGLkhQZWZgQmEzbW5KZzU4citHaTVEZkkiJCRFOipu
PC9CLyp0Pj4KJWItJ0hYOzpYZWM3bUFwaEEzVz86ZyRjMkJBSUklIWlHM3FaWzJZdCJEXzVVa0Ap
TCI9OG5XNjpkYHNjPlNTLWwxK2M4ZlhOYCxETVA8SVNDIzZvQFo/LT5PZl9iUnEwPTs8PmQKJU4y
LW1tMW0tLWYmVW5cRGBqLiM4WSc5ZVk3RSg6UjZtKj5QJWVDMXQmOGlGJj1yPlE7Sy1ePSg0XCkh
JlNrRCZNRCp1PERIS3RSISM7b1tgNDwzJihZLWQ6aFNbWmkpK1htVWQKJXFyZjE6ZGdNdUchNTJn
QkI7NjRvJ1ghOU41V1tBM2JdKClrXDFeOFBXWnA2QmoyMyJSbE9ER3BPak5CTCRkalxkX2E7RkYm
UEBdZTVeLWovaVc7Jj9WPDF0VDw5L09uW0BbZEEKJTpQY0lPaSxBaFYsLSwhTnJnJ2g0QjJUOlZa
ayFMbDNjO1deLkNKP3VeYm1zJGVIR2UjYjQ7UmFFMEEtZiQvWF9rQzJDZWhqZFRjVUI2ZiI2NVo/
UkglUVQ+RTMpP047IjBnVDgKJXJJMVhGNixwZF5nUXFGcC9KMlk3STZBbWhoLyEuUmlya2lwZXRj
Z14zMDt0MDkvakpLVlVkKjhOOzRkS2pQSSowcDBnKzleNG43cktrRFA1S09tbDc/UjhnUmJIVT88
WD9zOzYKJUEjMkxwTT1wdSlvdDZRRjxPSnUlUi8jb05OWlc3YDssJ1dxVCFoO1BeTCJWPm4tcCp1
KVJlOD4yMz1EMzlNZ19oVWltaiZdbTBYZSlsRE5bXGUoTydPJlZfKS4tWEVwXU00blwKJS46YW8i
IydsPFNuZFpWXj9LQlRyWz9FPHRPMmwzU10xS1c+IUZNUSFrc1BgLDtlMSw6X25HMTZPOC8wUWx1
My8jUydFQytFUmYvPTFtWVJfMV8wdDo/I0lXNEw2MGs4ODgmLVsKJUVWdF1nUzlCaWM+Jz9SQFVZ
QENaPmpBZ10vPGkrNG4jaTRAPmQ/NXBhLVxta0YtWEU8JzxeLjZeIzs2OEtZZl9NWUtPdDc9Ql1n
RFs+SUhMQztAQk5GLkI5LlhZUyVrKiUvJkoKJUY2dCkrJUowJjg7Xz1nWEs9TnRpLmRgQjNkTCko
IVlBNTBQTW1PMF88IlRbdDdDPmRgUExWIkxfVjRDcFFGPFQ5Uzw/NXA2V0dTN3JKJDtCUGhgRyxg
NG0zI2w/bWpIQDMoXi4KJTUtZDQyMTFGKjJcWjxcSlBwUik6VDs2VGtUXDlrJ21ySURhR2ovYl9J
OFpnQlQxaSotJT9aaiZLLj8xOlhrOk8kXWUucF8jWDo8TlBTWSI+ITlsKEwtcVk/OiE0PytKRlIp
NTEKJSpVLjgjZDtWOy1EJVNMVklvRjViai1DUCNHbFRwSUJObz9zIiFHI2hUWihpcDVaQygjPGBC
IyhmQGpLOF9BTkRqUjF0T0tHP2YhMT0yNzhxbm1hbT01QEs4JWEmST0vWjRKP24KJTQ4W2k8JU9K
X2tuO2pJJFhYUy4iWGIvTCQ3PFQuYVUhUGFkYUMrLlMoLGFWSzJuUCVDKGBGYCQnSWNnQytHa0Bo
bVA0X1ViZFUxZTdsWG1XKUZkbnAzUnRGPipCY01CXVE6MGYKJWoxZGwvLyRhOjxiM0tXTEUpXTJR
PyctQ2lbOC5RJlQnSzFNcCopXz89bVdMKS5qYG9tODY3NDJqPG0zIT46UVVAKGpCKk9uMj1hXHJj
RmxoTmorPS8wVio0RD9WL1pWLy9XQyIKJWdBMjs8XCIiX29YUSpdWi8jTi5FKUFRT1tmQTQhMENO
WllzRV5rKWJpY0hVOC9XNVwwOVRORkhncztPRE0rXiRTOyhIPG4qM3JibUJcXStGUWs4VTYmM2M8
YmBpVDwlYVwoQTIKJWdZUUIvUTdxNmRiUz1hYydsaVAwWDpWcSduX11qZzEjVVxwKEw8N2tacHJo
dFZpWW0wUEpyZDgpPVQsWERpMkNOJENdc0QtdUBaSy4lSTZKLlQocSdFN2o2dVJSSUdFXGItc10K
JV5ZQFApWzc7X2BcN0hHQjRpUS1KRUlaZlAvTV1uRVguQyZRZWBxOl0kK00wIkhFLEtaOWVlKWMi
TTdcdFViSCFwOmZMPGdiWVpacSZsOiIzLVM1aEk5TSNeS1kjbF9HPzxaNj8KJU9NLC4nTEJwI3E/
XnR1Wis/SGouJUJKVTc3OzsuMz1aZUwpcFJCciZrX1tsck9LXi1bbTQuITo7cSlZQkg/K0lWP3Iw
aUZeZlhDc1BsXyVHcFN0aWo2IjxJVTttZS1vUz1hUUcKJWZfPnI4Rz4tViY7N1svbyRsRCg/LzYm
SCNYZGFTSE9LVyNzciNjVlgmR3QiWUtdMXNCTCclcmU0MmxnTEphV0MiJSpZR1wrMDYmN2oxdWBf
bj1bVV4iO1NQKidHUSkuUTpRRm0KJWc+QmE6XSspXGhNIiQ6cFNlLDozQllwLnREIkxaX0xbKVow
WDlkc21fLE0rciY9VkQoVEo6QW4/dXJPY2NOOkxCNzIkdFNPKF9NUyUwUlQ7JVJBZiRWRk0zRkRH
V2c5cDdSJnUKJSpHWzRXNT5IMjRRX3U5by41R1Y+PXVTbUMxOk5rLj1bOyVlLCdsPW1qPkdGZmtT
NltIMGYnNioiQlZdOjtIMzo9Ol8mRzcqN0REIyYxLEgjJVxiPC48KTssSTg/cGJWRkZfIzQKJXAu
WFUwVVdlQ3VwcCoubj9pVWs4UDM5YEdHTCRnLSc7Ujx0O2FJc1JBU1pPSD5Raz9ORE1COUFkKWc/
TylMKCxFYjJHOD02ImBsUXBYJSgvamE2Pls2I11AKEBNJUMzIzxHISgKJTBKR2lMZ0Y0WDNFPmo9
MFZZREQjLWcsYHRMQ0gpSzZiMkpFajJfMzNYcCZWZFdYSDNgY0JlOWZIa1F1JF9YV0FVMkgkT1o4
ITNObGkxUysnR0ZicU1Sb2VcOV1FW0RtckVcZEUKJSlzMDg9LVtjVk5Pa1MwZGAobSRjaUtoOz4r
WmRUQkVeZjQkO2MkZVxJNVY9NyluJSRSVGYsaTNOWm5wbWA7SEFKJT5KNnEpWTNUMkJiW1VnRWVc
UTU9MG1ZNEc/RExkMVcqQ2wKJU48ZCZpQD43LjQsVlVTSitdPSglN08vOUBgbz9vQyMmcVgyUy9O
MCVFTVBJRG5cQ2VeaShlcT1vVV4rL2Boa1FnTGRtXCEvXlI+YmVTOiFWRi1eTkQ9QChQOiJpbzhI
JVpmTGMKJVNBSlNKMT8rLFwsdWNAcEBdV1xATGQsK0RWRklcZ2BnZ0szMTsnNlVfL1o3cmlzYj0r
YmFVYjM9JW5SX24pLlUwW2g0WGtXYHMhM2lgTjlKR24uNCdpVzNOKUBTVnJncj5jZ0YKJT1GaiJa
XjFwWjAlSXFXKkw5PTRpUiprJ0BYUWBhNlVYQz9EN0JrTCo6NitwUWg2MUNWK2dNQ10zXD8yTUtN
TEkxLyQrITZAaFsmJjA+PXVfMUtNQk9vX2JmT2NAWkpGOUReZTUKJUlcNEZbLyFKdS9gVSg/S1VE
Ljo5XDYoXjdTJjwnOVtTVDtXLW8uUyclVTZAaEJqJHUibT9lUm5lJE88YUEyLj84NVZPMCpKNT80
V0BTS1kpYFItJixCa0g0a1UldD4jYVZJQzMKJWsxY21KT2ViRGBvR2ZBZCxLY0osXTpSLGcoLVc2
Zjplc0JRJTQ8QzA4ZWVHUz43ZkFhX3IwU19mKSlzWWMuKT9DWzJpR1lPTUptQzIpWHBBcHNWVFJy
UFAtM2xMOmNLK1w3J3UKJTVuVS40KUFoRDI6QzIkNChOXSJMMCElOWcsVyM8TkJqZWROVDpuYyFB
dDZNSyUjNkdfUj06bVJlRF8lR0c0K09TUVdBUFJiTlVePCF1cTRQSE00KHAmWW91IylAJmtCNSlO
I0EKJUMlcipENz5iTkxdcU5kKSwnJ3NqJSlXOSNxKU81dGBFYzVfU1YjR1NvZDY8LiVYMFVLbiQi
cmVCXTVrL09iZDQmbk1XR0xlX2FAQi40MTxrbTgwVCJBajJEKylYITlBJyljUy8KJUJMQyg3X1o8
aG0oKmJiXDJtISpCQ3IyTzxfaTg4NlhHQCtmRXA0KyZSZVRIcitfWDomNnAqQT1MUEJOK0RNOGAj
KTEjQUtWXmNTajpnOUAmP2dkbi9RYiQ0Yi1WZklLajV1bHMKJWYmOW9rWGVFWGouTV9qbz10KkJA
J21CMnNhcSRsNEUoN2g9NFo4a1ZfWFxKYVxUMyJqTyY1KSZTQkViMGVaTDkxQThTVGBAazhiVitq
YW45PWxIcyhwMmhRIkc/LVpMNkVKP1kKJV5fLSFvVXRZJFJLV0szRFRBJydLZkdmYmNOJWotJ2ZH
aylSUk1NI01RT2JhR0t1cCxnQltBZCdnZG9STFwoNHRXP1hYYCFUKiMtdGAzdTxFSDBxXTlcT1tl
W0g8ZlE3TlpPZm0KJS8vTWRbUjJicm4iPCIrNkImWFUjT1FcQW1UaCl0MlcjSC81MFhpZmlqQV8y
LmsoLEBwNGtwLj8nMmlvIWZIXy0jK1NZNlhtdDVSOz8vWClpNlAza0Q9SmIwUkoiNicxXjltSUsK
JUxvX2pIaHIyTiswSEcyQy4rbmltZUgocStqXnVlI04tNl5fcFsucmNicDhQYmBWRkpXT084QVhs
K09dJkk8PVxZLEJvbSo0QE9kTDZINXQvTk0iMEpfKChAQ0s1QWdycjVeaiQKJSpfTy5ZQkpjJiQh
Wk1LWjZXbjhWP0F1N28qKG9JaTBORFpJL3VOT0NIVTlKLSlVWzRCNjJ1UExtJkEjdT8uUy4+VHJb
dUtKdWlbJGpDYSQxQEhZPlw3XDxadF4pLyJcaylUJjsKJWpuO01qOCZoVCxdTkNVUzBzTkkzMzRz
PDNtL2oiXlZZS04xM2ZXNDFhYitXRFkhZEIjMz4lcTFmWWEwLDJ0T0I2KWRqJC1kJzYrc0tsKWFL
V09SWm0qQCRyQUFRKy5NLDRDMW4KJT4mMFQ0JUppUTE5Tmw4W0xFOWI/Kk1rZiEoNGVNZylOY2A4
cDdPLGNxOGdHRlY0IXMwWjZbbk8/Wi4vQEdTbVtLIUBTdThOOSpMNlRNMXJYZ3JgbVpDWTNFYzlZ
OHJuV3U6MGUKJWxDOk09VDsnZjgvMjBHaDJjMjJcKWg+YVo2dC8wK11KVDVUXl8jLiYnIWlsZVgh
T2kmaV0mQFs9ZWRiYVQxaEw2ak9gSkdWak0wWVxrRmxLPFBjTkJDLG5wYT83a24yLkBhbzMKJUVE
aCRub1UpWk8hRSwvVjJ0YDEuMFpYOyhHO3BQLjtxa11kNyk9RzcvQ01NZV8yUkswJENcOXM5WnFU
RnA6c09vKTRTb1pBcTZNYzlmPUwhV2RLcm1vWXJILVUhRzVoZjxFbFQKJSlNdGlhMjVwQko3X1Ru
TEoyXWd0XDZjUSksO2lDNVhtXF40WmNsWHQ+OytyLlRHRTw5ZFlOcXRGZnI7YjQhYkRzPTlpVV9I
SFAmV2tpc0ErS1AlO1UrPCE0MWAzaEtcbk1CV2AKJTEqLCN1JWEhRF82XSdxLDtsSD5ablhrZGkv
RmwvMDtnNFdsQ2VfWC44IlNvLjl0Jz9cNnI5RFc6QEhESF1zZlE6RmNVbTBfSERNVj5vW0EtUVI3
MSVDWU45RjxKTEo7LFUoN1gKJWIkZjNgO0lJMkZqOllIPlYjO0QqTSk8TDteXnUnWChcQXFBKyU3
J19OMUNpcVpOVzQwQiUvPl9pYD1QXEBqWWg2VGdYQEstSVw0XDx1SSQoNCFsUT4jRSsoaTJxIzg2
JGttRysKJVtRLj1MYElbcyEuLWtFR0VYXicpcnU5O2EkdWxSKSlcMG5eZj4kJEpUKGdaUFFTZlZs
KWMndDRtXS5sRU1taT8uI05nX0s8TFwrQypXbTk4NloyTHI/XU8/KGBGOzEhWlooWXAKJSRqL29g
KixWLUNTTz4ibTgvO1s9cGJNSVxWZnNTXkxvamxjVTYuRWEmX0pSPkhfZ11jUENlSWRSaSghTTAr
ZldKK1NeYDAoVm1xdS0jUjUnNmdHPmc/WE90IWAsXFpnblpwalsKJVMmS0JuNldYaF5fREdZPS5e
cUUsKWNCSWJiLVk5L0ZOb3UmQDI3Yko9PCk5XCdaVUs7blxLNDwlcmZETlplMSpbTTIzUllUV1dE
cENrZjZoK1NcX1stYVN0NCZWZVBkQD9YYF4KJUBkP0NrZClwRi1Ka0hcUyInbFhaX2tZQSE5ZGU1
aSZnInQsUXFJYj1gOUpRbENzZz82UCJPbkwnLS83QkZgakpqSUhKKFFsNDtOOV41VlhnU18yNlBL
LmU3Q1FVU3JbMFNMLV8KJVxuaGUlJ25iQVtwazVda1BzUD5Mb2ZPRGo2Jy1pcylwYU9vRj5vL0dd
JE8sXWBocnItNkUuPDYzcjJkVm10SjZcTS4mbDotY3BxLzBdSFxXYypTJVM0X2Q9JVNEOF8laGZV
cGYKJV08KDpSVmMjTUtbOU9ObUlORCRGWGk0X0k8Kz8+YD1xbCJTNDo4KGJdUzpCLUE6b0hWYWZz
Qm8vMWYiaDdqbmdGJ3VWODszajMxSmxFYkZAak0jOElnaTRQU21VWC9Nb1MnLisKJWc+Okc4VCtf
YHVFci1GZWYtOTklL2Y9I3RSLT5hYl1KLFU4PU4yQi9VZF0lU2srYG9YazdrQEI5YlRYTFs3KCNK
NlkwVXBuMUdBLlFwKFQwMD9yT29TYyIwS1ZAMjBuUiZBSTgKJVAyLTpzV2pZcyVjOkFBPURIN0xp
YzdtTi4/b2AvMExGYUFYPDFjVU4sZ2lxUGZRMDFXRzh1M2MvRDdJMmlZSjtMXD8mJURmcUcxQjoz
KkE3amhTKW5gZ3BGRkg7VVY0PEtyaUwKJVovSjdXZ1UyLGNIUCUjLkNXNTROPDwzJkZYb1VXRVNI
L05zSWFKTmxdP08+SzdfOk0lWik2KyxLJjxrW2hPPj1RTC9mM1pxLm1RZSJbImYrJkIvSkhYK0Ve
OClgcS0iLmBIMCkKJWxbcD1ILGBybyViXnIiIVoiUmknLDNWPiYwaW1bTUVqNyo7JEdpPUUkITJ1
NGUibTUwOT1cayxTVyVGWD0+I15YUHNpczliaiYrQVlfbCZTLGlyOzJWPWRhV21RckhKPlBlaDsK
JSJGZSs7OWlXZmZRdCgkaEdtSWlLJVRfRVkoRGpkRGwhaVZUUjJRak5ZS0MkRkJyIjJOLiNWaHNY
VT0uKjxXczBhXDM6OnU4Vz5UYTdOYy1jMFBFa0s2UF5CYEUpaFYrKitGLCcKJVArKTFqV3FxY1Ul
Xy88MSglPkFhSFhoIiRacUw1b19ATkxRTVFpJDEsNCZBamhYM25MYzlcPyFSZ2A6JmQ/Tk9GLWk+
TVFLWSEuUUdyQjsrXmJwNDNxPV0tLWhhTzQwa1MnV2AKJSJYcFJgUl9fYkU/K0NINU5LJSxbLD5L
PjdsdGlbdTApRD1VcWw3KkhkaFJuXUpWVlImOV5vWEk5YDVebVtvRG85cF4scEM9S1xKKF5rcGY7
JCJPWSRaYDhJNUJjRlJzL2w/YT4KJVdJamhOOGwmbmonbDQmQHA4Lz5dMFtObD1BIUohXTlYNUI/
Oi5XQWUuRTJxPjFXMTdjcUQhcT5KYnBlZ1AmTkBUOFZENFU9VD4sYWMhWXI2KiMsWWQwdXVcUSxy
JDgoInFcLD4KJUxkaCJDRWBAcyIiRmkhTVErTFNoQDMvaTw2YFtuMSxVNVxTWFxNZDoqYzVVYyVa
cGtrajxYM3IhTFpZQUhTalJmQGAqcHU/IiVNayRqJEpcYjMiXTdcdCo/KzNPXkdtPCFlYzIKJS1l
K2BBOUJNMWpPLEslRlZiPkY/WHBoWl9lVCs5K2MzbThgY1xRdWBOM3E3X2IhPTAhTCs2WUtJSzgl
R3EuOmpfYCFuSURTXjRaM1c2KyFiKlBvKl0qTzpWTCNfa1JFIj5NXTYKJVpZVC0lYiNvcjQjaDI+
SlRwXWhTLmM9WkttcD1fTl0waVxdKGpaZ2dxbiNNKTBPVk4vOkZAY1U+UlhTNEg4KCpbXHEvY01d
QSluT1YlWGJxblcwP11vYFxDRVlVYF8wUENFYCUKJVVbWTBVXzBWPmxCby43XT9PbmcoIkRgZXAn
KidTI01QMjhoVW5jOF03QEdXOyRrZlFOUV1mNGVfQTY/I3MvV01JK19QPHBTTFBeNVNYS1I6RTBn
b1hlXkBQNzF0THJfNDEvb0EKJVJAYyNsMjM8IykhITNIOyplUDtJWmEndUVKYCNbQyQvZ2BTK3Ar
P2tCKlBWTDdsO2t0O0JHW25lMGUqb1lqY0tGJT09KlVldGdtPVNCWDhZQjlPTTZAOjw7RFRON15J
KWgvTSUKJVFSVF5gK2VVIjhgQTEjMEIlRk9kLisrIVtXKyRbZkIya1NNPC5fbUc2SzM2LyIrVig8
VSRALl1yYFxcYSNVR0tQbG0vK0AsJTEyKlozVjtNTGlzL01NVCNTcGteaG5HUDlIaEAKJVsmIVFb
OGldOS9wQmUiSzZGbydwOS8sL2Y9Q1Mxbio8aG9EKiI7WWdUSFkrO2pyJ19xIkpQZl0/XWhBNEUu
XWYwOz1lJjhRMDokWThHJmxJa0cyb1pQNFctXDROKklRXSEnWjMKJSlvak9fQS89bUk1cFBJcG1J
I2RAQ145M0BuMGM4cWwlZ0llYWVMUjlxNGM5W1NQITdjTlcqQjUwbzo9MUpMY1g3TVRDYyQpcyFJ
Lk5pXDcoIk1xbVlfUnVPPzJJY01iamAvMDQKJWg0VishVTFmbGtpaDciUV00ciZJWSE8NGgiJClk
JUZWa00vOGoqVEA4XCorKkMpIzovOC06OWdjPjZyNjotZTFvSFdFbEZqaDVlQTEnP05wcTNSN2ox
NC0valZTaUtFZ0ZfXlgKJWw3PSVaYC1cSDY3R0VTN2JuTEBJKSRmYEBLb1wkIiwpNUJJMFtXXlJt
JUI3NUdEVGo7Ky1hJVJZJT9yaUVuJjdrSnNqX1pnJTt1bF9Pajk5RCwoUyQnWi1mKTwuZV1oUG4l
bTUKJTdRY00tMS5AVlxgVGhhRCJlNlVzOHV1VzNqOS5TVlVUXnVdVVtmayU9QjEpMHE+JyZfUmJb
NyQwRk1EKkFqcVteO3VqNGpUW3BgPkNRcyQjYFQ8Q1NkUzlaTUInWyI2KydKIy8KJWRCL2NtIj9V
PD83b2MpJzc+WmBGWlpQaEY+Rzg8I0dyaTYqP2tdXSM0JC1fJE4iS1BrcjJkbHJRYHVxPS81VFZl
Iz5xZy03L2ZqXzpQXDRqKiUqODQmXHFESG44WD4iIVg4OloKJTtjOVddNEErQDtbZyNGSkBKXF1r
Iko5YiNPUEZrWTwjOm5UVCkvRFIrV2YvOmBHaC5sWl0oV2hvLFM0JStyNWpLLjslTishYHIyWVJG
NSYnIi5aZC5aO0BwLDNfM2FIJi50JU0KJU9HQV5FMitTKVQpXC5hYzVnNXVRKmMvWEI0Rm90YSkr
KUBtRiQ7J0pfKzdhKU9dMVlaQ2kjY0FYKEdfbDksKlMwVlljXD5CWGQzLDNxck0tWCNiI088Y0ww
XmRpcjEpYyZeL1MKJUpuNGBkcElrXkApK0laVjZmQT1pcEopJU1kcTNtRF5PVTs4TiRpVWE5LDtg
RSlfLjw8Z2N1R1tMO1NERSFMXj8jRWVCS3EqUko4JVRgY1dxIz1RSmZMZER1aTllYGssJXEzWmwK
JUFIVDNvVksuMkwzVDJzZDtKZnBZNUhwO2hAJidxMGs6UjQxS0UhdWI0OkQvQGBFPlpAazlrazNm
PG1bVz1pVTxYUHRoYj1GRz9RVm0qcWBXMGRoI2dJUVNvQjhnKTlNLUgzMF4KJSt0Q20mZUk4JWhP
JywvbCdaQlNrZ1AlRjVndStSdXFeMWJiMzREWCVtSXNtQF1mPGJuRytjbm5RNCxkOjBhWWxRSXFK
KDs3RmZrL2clODtNSyJEbytFVUg1Ky81YFB0TCNPIjkKJWBoYkRqWWxIcz9fVURhT2FGJGBwTSwh
XVcsbiQ+a0x1Q05YMz9QTkdpJzFWY2NjIklHOkFJLzomcCYxQVsrLUkiWyYuNzwhOkEqQXFwcklk
XT0yWDpnRmFgSWwlNSpQKldGOHQKJU5SdVJQXlNuX1teYUgyJi08X2deM3UuISNdXCNaKCVFWzlf
VmIqaipiQi9tblE2bmoiNDJLLWhZJzIxbUEvPjdgcVJDJ29DKyxgZyJvLl5lPXIralZUU1RnLUo2
XCw1ci5XZU4KJSpQR2pXIWBwYzJtSzJQNllVTyYtZW81OFU4PjxPOkxZQV4zRi9NNVY1STIzdDRx
V2hXXmJEYU03UCxXWGQvaGhASjhfcCFULktEaFk3YXVEN2FOamIjKVpNTz0uQDlwM0ByIi0KJSVO
YyRtJCptVVowJFxBTDxBMUJZPmJoJy1sM3RONjxdPFVoST8+ISxhPi9WPSVURlFkWlNjXSRMbm5s
LEMtKT9GWkNaaW1NITYuNWwxSlw0QTxjJk0vKidbND9zamdgMSlBRDUKJXAoXzhmYl0wJi5bLy81
dDZeI1I5YVs1ZlUpLEwsUURLWFx0T2xSIlkiPyciSV1nSjVYWz51PVM2YEhbRGc0NS8rRkNBY1kq
MnFwZ3EwPFxNMS9RXlpNPjxiT18/ZVNuaGAvYGQKJT5tLTY3VSJOUFRJSF42NzJJVypmQyhiNyVI
LWE9bEMsY01pWCZLaSk2WElbZGcnZWJqOkIpL0o8JTdVT2MpST1PP2l0cCZoUCQ5JyY6cilOa0w/
QjNNLyRnNilHLWpzaDx1JDUKJVtPT0pBPidXSS05YzAnK3A1PEdEbVNhOFBrRT4haGssJF1XUDYr
KEU3XktoTitoUVJjbC1GMVhSanA4NDU/NjxeIytkRmg3Ziw2YjQjVDFxWmstX1YiQmpwW29FbkM+
anMwaVAKJWF0M1xYTDdrZTBKPzUmZmIoUWlhSU9PJ0xCKS1kVW4rXyJoLEM/QT4rQj9FZVtbUEdF
NGNXLkc6NltDYD1sbjtZPys9NEtdcG9mPSU5aVswJ1ZmYDE1SThVUFUxQiY+Jm5aJSEKJS1UPGk1
LWEqN0hmbV1OVV8nVGRhQmxUOlUqUlk+c2V0dEQ4LUdncCFYW25rXzJCWD5KLiMjPGhWJkhdOitL
PWM1MUg8UC4tPEVXbit1XyVxV2smOD07ZjhQWTpvSjlcPmBJRT0KJT1iPEJJNFYzZDNQRXFWOiUk
YmIxVXMoaS1MUyRbK1I7SShGZDpDT3JlMFxQdEZLIkF0Rj8hRDFYK0hFQlQ4J1MzWVZdbnAxQDZp
NUA9cV5ZNkJjTkZrYEwhJzcrQ11KVTlaVmcKJV5HcVdIVWknNHNDVW1TVDJ1RFQ6JkZcbz1Gc245
MFkpOUwmPClja1kpUCFoKzk+JnBFXzJYPUViQFpkJixsJTNXSXEwZVtgIyszVFUtNz1SJXUoZlkq
YW8vKzpvXCsyIlQrcUAKJWoqaCVeMVs2I00idUFaWDxMcTkjJGYuVGQ2bGZkbWQoWDlhQENxNCdf
SyU4c0ZlZ0g/RSdVQjRNRW5GTS4qT3BuZz1nX2stVzYuTV4/ci1sPF5jcTcsY2hpXGZRRkZPQFpc
WyUKJVgxOjAycGQ2UFpmUXFiVVNMSy1raHNNc0BmXEpMPCdJITRDIi4qLTVoRWBGblosOVVvSTBs
JFBUTUMnXDZVSmVnJyhpcllIOVhpSUJtaDlMKWQtYV8jR1o7cG9PcTNaTGo1TWwKJS9iMD0jViIj
JyFXJGktcmtBJS9VJT5RZTFHXiUsdTdBXjkrbmt1JGA7YyM5cE1VJVsmcjNrIS08NCFpZVtYXGkv
NURXSD1dIURaaEFJSmpePFhgOXE6N09jO049PmwqO2FiZHEKJUlIKyhObSM9aiFkLTJDTzg5SVdk
TWYpRiZXMWc+NmJKTDdsXVY5cTpgdDtOXltfZl9WbEtsY2lwOkNVV1pWXl5pPEAhI1Vlbz9tTWQ1
PXUhU2laZTsmSjQlaDEhXkJMWTpxQFgKJUNbKXVzcS42I3UmIjMxX2lrS0JVXVhPTWotOjIsbmVp
JWAqREdfYkpCJ1VPUjwyUjl0Mi9kQDBDLTpHKGdZSzshZT4uMEptSSE1SjwiaEVeSiNgZWVfLjwo
PTRWMG9BLitoTCEKJUxEST8xUy46bi02MSpeUWVFYDBTSlA4Pk5JK01aPzEvIVMiSjBcJFQqOV9R
KkJmbiUkZVlOPnAxalVmWS9yLCgiRSVGX3EsLml0QkxCQCN0Z0s7REdkVVgnPzZYNy9PU2o8XGwK
JVBkNV9oIypRU1JPRiVSOiNSMi9UOEpybWFwZTZqRi4/UHFgTj1aJUBWS2ctMDBqLGYiaSQxQW1d
Zjk3P0RbLCxVWCdmKWspRmghX2knUDhZNlkzdCNAampYTyNOSEAhLDJiUl8KJShwOkNiby0rdE4p
TVJHW3FeKmc1WCcmXztHQDBIIkBUY05fU1JLUWwjb0BqJ0U/M0E5WmghTmVgS3RXXFNXRjwsPmsx
RS83LSQjamA5KkxJRCwzajRnR29JWyYvZThpZG4zTTMKJSEtNDljLzpLLnEmdDBzPGJXczYpQy5P
Rmc8Ui9YODtXZywiJTNZXj9POy5iTVs0IitHbDo/LGVcXCYyJlZTY1wrRThQMTtBJCpHXzIyaysv
XVRaUVBqK1EqbjQxLUZPXFssdGkKJTVEYSQyOj5gSkFPaFx0MGF0UHRqazE8USk3YWElY0A3WTNv
bDRPbmw0IXVDLWEmLlslWzlgT0c8VExhWklwYVRrS2kjTjZCKlVZb3ArdDgtQTA4ZkAtYFkjbWZO
IkgnYUEuU3EKJVA8Oyo1KiZLX0M1My8iWkYpK21STz1YRlsuWVleXV1xYTw6U2xzIWpdNF5wU0Vn
OTtMOmJrWi9SWDIlXVFsUlc9KCFNMi1CMygpXC5aInQ2T0BtcnU9Rjw6XUc1S1NrSHRHOUkKJWBF
cDY7OSElUVlXcls1UC5RNmU3OEhpbEtMWXFHImtfRiNyInIlKVJMSilxWmxzW3FHPy1RIzdtJGVL
bF5iXzs3XisnNCwia21xSkMhUz1jTydaOlk0cl1gajlYYEA7TFEzQisKJTUkV2FncDI4cWksTE1N
YyFtLC9DcCcuMjIrak9INiRtLiRZXj0uX01aKDwtNmJgOWIiZyxSTUJzMic7ZFIhT29yIUMyaEtV
ZlNCYEN0ITJPOy9uQkZNWWZWPTxUMmA+JlZMVnIKJTQ9UC9GMG1HWGwwQk9lblpnUVxRaWVeKk9h
T2taRDJvdC9WVzQjYjJQc0AxQy5bKzwnNSpVUFdKRUxgPTBeQCZATDU+XGUmNTonTzhZalJBYzss
KGJiUSojdTtsITBePTQpVjcKJS8/JzooWCVrNiwuRm5PUC0/RV1fMjhhKUMjRUBoNmhIXm1kXi1x
RFdqKWg4cCpTYmlgaz1ZSFZTT1YuJDNYKzRuLmpJKEs2UGRbaGhRNStPPEBmbTpKRUBAcitdRUk1
VGVcXTEKJSk+MlohWDY6K1NJLW9MYmxMTVg4Mz1sM0s2aCRMbDBjdWk6YSEpQzE1ZS9zU1NxcDNx
bUxvcUMvPylYKSJITVptITJaTlE4WSJEPl5mI2tBPD1XUlhtWzZnQmk0M2FNKSslW0IKJW4za1Bo
SGMwZlVyRDtFZkVBN2soOGpzLiQnZFFDc1Y2Xlo+W01lKGdcWz1eXExeNSlnXSo3bElHMD1KIzJc
YmgmckwhS15PZy1WV2BVLiMiYEJ0UVtnXz07QTxMWCs8NUljVCsKJSljSGZSJl01S19RRllDRWw+
aStmbixfUS5qY0JwbGNnaEI0SnBPcW1AJ0BMREhiSldsQiU5MFtFYC1aMig0PVgvVz5tZklBTVZR
NDExPVsuNjwlLEhoYypiSDFDYSUoRkppcjQKJT4vQUY6XWI5J0ZIdTpfQysuNDFWQUpwWSMlaTsx
aFV1VEQpIm86S0pUZD5Ob2FlWk9XcC0iS0YuaCtCYWZIc0VSKj4pOk5BY1RmREAhJW9KPU8iRXNo
SSQlaTtAcTVUZHNmJV0KJWoqQGdYPnFbNiMiJWFPREosdUUxP0VOYz9NUzc4XWUmaCY1NS4mP1Fq
blI+YUFpcyJzXFVOdUQ3bi0kKWBiXiQqRWkkTF84TWMhRk1zTWMxZzhhR15aOmtSZEE9LENqTVVt
TS0KJWNiQEBFV0EoNy5QOkFhODZOZmAlIW0/RF5oTVwwYWQqMTtqOF5wXE4xMDdRYVtYajkhVGRE
V1I8YVxMIV4wNmZaWzhVTkYkZyhoaj08OFxGKSxJUV9gZlQ+TVw1VyxkWjpJPSMKJWxzJVBsTD8v
WDIxX208ciIqIXJiS0ticG5CRUUoUHE6KV85OzwkZEFdUVt1PSdwayQ3JXIzR25DYEozS1dkVk03
P0BGKXRgb3FsLypNKFhNXDwhZ1FsIWkkYiRTKzw7TURgaFkKJSw7RWQ/I09mVmQuXy8oXDdgKVdx
V1JST19hTSVUYXJWc2YiJCsxQ2xJJlk2ZzJPNjE3IUBuZTQtMDlQYSpcTEVOQks/QjdTIU86SC4i
WGdEZVtBY0dvUmQhbz8wY1cxUSQzR0AKJSlHPCsqTVRIXnEmXEU4XTtiREBlajQ2TnQjNmBacEVO
azJHTG1ER2JPU2UjayRQUz0yZiI4WC0tNkMpPTtUdUQlWkhGQCpMQllzJ0h0S1MrLURaPl5RaWtY
ME9KNjQhJyZcRyoKJSdoXFNNJSFoc08sMzYzUkZCSj5tLCwoNVZiYUE5IUJpcjtMWyVFcUQ9aykn
OlJZMTZZUlEwcW1GMyQkQ09sJDhDWWhpZDknYyxvTm8uJFNrTE8pS2JvQyVpNmFAQDA8SSpqcG0K
JU48UTxxaGczQSg1J1IhM2I5X24qbWw/Vl5EJUkoUig5L3BRM1FWTkM2WyVGNjE1OnBzYzclKldF
ODthRmZtWFVlZ0NRUS0uRTZSREg7NidmVzVvMDk2JGlfaF09PzlkM28uMDoKJUAxZztGNDEuKlok
SlMiXUY+Zls5Q2RgOSlKSCxrbyJqZDQmIWBkUyFKaXM1KlIzZSUzPidGLkU8YEBra0teRHVlKkhS
cS8jXkVrOiFMQVAwIS1qL2paaygiXEpNWV8kcV1eSz8KJUMpIl5jJkM7dU9BYyUxcUUqL0BrW2ly
JG0nZGhTcF41ZmVaUnNvQGc+Z0lSWEU0VjNgRG1hPzJnU1g7VTxPX19YWEo7azA8OzR0amRvOydX
XkghTTBPZ1hmMXAnSW1sQ19BWUcKJU9iYmhiTmU6JlEnRDxbayxOKXRkV0RZVyxDNzVKdW5OcyJL
JGZvW3U9QiVFZ1tTajdsIiEnVjJTWydrR21oLD8qUicpV0E0VzNTLCNXdEEzNWQ8XCEkO1UyQWEj
MStTV2JgQEMKJSZCdS9TZmVbQVpvLWgobDZBU0tdVnNNWz9NXzI0VSdJVDJPLm02RnEpKzxHSl9F
KjRZS19RRVAlWU5LbyFnMiRAR2QqQVRkbCYmRUAjTW9fb2suISg7RC9hPVwlazs+YjI+cF0KJSQi
Qks/YXBaO04oSi9tV2VhMjsiLm9pM1pDSmdBUjRcPlo5KkZLJiMsVmdNJDZxNDdaPlQ3ZCY2I2hy
R1pWVXU/VVNkVF1MRmI8MCZbLmBqRzZPW2ZvNCJwQVF0RUZGMXVcKHAKJWViIXQ2KkBQKVdYcERN
XlUxJSlSbXAxIWRnOygrJE5gMFMvVylCamNyTideLC1HSktocCo5VEpSaUlXTU0wXWFBO1NfXi0s
XTJVYjw9JjU6O200ZT89ZCRDVCJKIWQ5ZmhVNGoKJVoyQyRjal9LTG9Hb1pRWFIlYzpVZitkNDVP
VXNMSlslW3QibDlOXkJaa3AtREckNS8wUGdRJSFxUzlVdT0hOjdybEdLMXFFI0xeN1RyR09XWzlX
Jz9kJ3I5YStHdSJLSig+VE8KJUEuIj9bRSlCcypQKls9LkFeRCVaUyxVZVNKVVdjTGVQbTIyZjNr
Mm1ldWhoIlJdPmhSPDNvTlFMW10saiJ0USwlaWdQQzFFUyYvbS0+WHQpOipZbWgsQzluIj1laFEy
I1pJKGwKJVsyKWNuOkElX0ota2Y0RikuNjdsYSw2ZzdBJ1BjZ0hrRDVcOGdoTzhLVGQyUzx1NlIm
UTxpIjU1QzNTWjJCK1tscldbYVlMIUQkUy0wQTQ0SlVyK0UsWUBDJzAkalYmMD01VCsKJW9GWmNe
QStxRzI2NHBYZjg7J3VtWW01QTosQjJpKllpZjRhK0c/QSZDRGY9SjouRW0xRXRFOjYzdEByY2Q5
bVkhLG87aTFDaEI+KDFYaz8jKGQrL2gyT0RsWSs7XyQ2JVY3P0YKJVhBQilxcVxbVl9iKEtgc1g4
LDtJNmI8ZzYrQW43MyNVP01VXihcXy5VNC50Oi9sN1YkLGFqJzFkV2RcbG43bmNAK2cpUmpIQy9z
bjhKY0U4bTRAblJpZDxfVnE5a1hLTWdUOW8KJVZNV1I+XGxCVzw8OUloZj11PWEkO3RjTkhNYUVO
MCJWazYjQjlZVGFqZG8zVkxETXE1YUE8PF5WdTM4VyhXSidSKUo8YC83JWMlVSlAbFZyWEZBXzdB
ZCgocFosIj4yWmlJPGgKJUFodFYzRTkoT1A7XlZSK1QtU0VrOmNEIS9sKVlJKiQyay9FO207XCUk
b0U1YiZBRXQuSWA8TitSWk8jYDU0MS4hKlAzRVNxcyRlLFBhT0xcUUsnU3Q3S2pyPUkoSkFcJ1NU
T2sKJUJUTFZUIyUsIlBkKHNuOGZHOS1aXTBpXzxYTCUzP0xAQl1wOzhnTUhuaU8sJWc8WjUpalFN
OyUia2xxQDIlMF5wb1NdQkdAW1N0N25JJFw4QSFeZTdFVFg1YXMiSk5YQk0nO18KJUhfbl1bSkZC
QSRhMGNOQjdOVjsnTnRdLnAsLXFYOlo8JldUUGU6IUBJIyhiTURYRWlDNFxbPV1BK1ltXChqUiRl
PCQ6JlBRUjVjaC0hLURHIUZcaHNpZDFuPjpqJ2xFMylTWWEKJXMmajwoLjBjVFFZVjRNNkZXdE4l
YFRodSooNGFGOTJQbiZvVDgnZSlQQHEvXlZSbDE6WEU5Q0RGYEE1O00oP1JSJmJVLVhhPmxvUVN1
UVdbUGpZOUwhUzdZQSdzVk9CMGEvMkcKJSo3N3JTNlxlc0RsbitFL18hISNEMGpbPkxfLUlnTSIz
dU5rRVMxcVk5R0tUa0AkQFVqJ0YmPj8wanVJJU4/WXRjSFdSPUJrOUI7YkozLlRxITA8YWc6Mjdw
TCgpX3RoOmQvRj0KJVhSX0YuTS5JRFgySTRAWyJeQU5nYTBjTCw2WSJuTXFWOTQhSWhIMzY7SGVo
PlE/Ll1SMV5LKFMvbyYkIyttYDBhUz07aEBhTGxuaENuZDtzYyV0dSgpLj1KbFdaZkBDOUcjJ0YK
JTcwdV9oVjRXR2hdWjJfYFklJE9rYTJFaXMmYDlFLCJRSnFiLWdCM1Y9cilwT2ZeKGFrclhIMiwl
TCUxXUVHaCRnZ11mTiYvakVZVV1oTjZCcWkuRW0lVE1TMEA3MzU6WHRkYScKJW9jbVdKNSI8Ym1a
QUlSV2Bvbldsb1Alcjk4UDZib09wdXFgRTViImRBM3FFSyglcmgyXitYOylMQmc+bzc5Xy5kNyNX
cjInRlo4QmBQZTRpXE5uKj4iREp1IjU2OU1hX0UqOSwKJSFvJFtuTGZQVUFARF0vXy4yV15aPEJy
IXVbJjpzUG5ZWF9UNUgtLjBaIz9AJzdrPzJcKktrMiZnPGpXVEZIIVo2VktlZ0E9MmVBa0xkJihC
aic8QCM4U09sUSlcRkVZY25MOEAKJTYyLEtgZDByPUM9UV9bNUNwJSZzNDkyW1w0KGtLYS8nXm1K
cWFbNDRDI2ZgMXJKT0MlKkUhODI1KWBIXj84YSZKb0ZwUzBWaC4oQlhzaFx1QCZgUEE/ckxwJj9R
OlBRXEYxV0cKJTdRO2AjWDBzcmtRUihzMTIiYnMxND9FPHRDNHVhVFRAcEw+W05eTF1tWUguQ24j
bkxmNlNDUHJQSUhBLUt0dEw/RUBNJVBCMGlOUVlXKlVaSnNXZDU9LCozQl5PU0NwaW5YRnIKJUxp
Ul9PS0RgUEM8TT5vMmhJMTNKPzsuOF88OTYwRE0tYz8zZHNiNWs0JSRERG4sXm40J2xDSzU7YlUz
QG9PX2tOV1wzMVInJjRLbW4hXmlYUk9YWGBfZDNhLjZaJT8pOVxaKEQKJU1MTT1JJjVtKEBMXlsz
MEllR2NBKCpdN2RpOmIkJDVdcWdeI1NyaFhvX0FYcWZLX0tIWkg2STNEJVlyTiMiWENiJjFkNz0t
cjpNPiJOX3VcRzMkQ25LLydOZzFmbjJhTj0vUSoKJXI+LTBOXTZKKi5yR2JuRz9QU14+W19iLTVe
R1tXMTgtYk06cVRLbDdsSCpqKEQ4OEZyY10hSy5XVV5qXjcnPkRpbUVPXjdJIm1sZzYxcz1LREtY
PzUwXko1XjIxMmIwPmFmQXAKJUFXOihwLlItQSgnTyg1UjdEJVBnN0FLaFFUNUxmQWpEXE5eWmlV
QTQmXEhgU2FjNkxgJThqPFU9InFaOHI1Ym00YF1gSlJsdU1lJzxCWXVqUEE+YzMyPVZxcVZzQSx0
L0VNMD8KJSdwME1pamtwV0gjLiksKiVQITY0USR0SWU3ajNsMkdwQVZEPk4mMUg0OiEiXjopYXFU
UDE0ZnI/c0o7WzArYzwlVihsYVNPaGZmblxHIkBvY09BOmBfRicsKiZrTzQ4PkBDMGwKJSciMFlp
WCFIQDw6dGReQiwxb0dYQWBrK2ZUcUtkVU8lSXU8NVM9PUxbQ1BcJz1HSFdqYUc4OGFLQUdjJzNz
TDY8Nzs1YzE2UyRlNVRlUVNnRCdhX21Ua3BiYlciOj8vcFlnXiQKJUMjdT1sVEBvTjVPLWZadDhP
byZFJlxxbT87SmpDdFxdSV9RaSgvQWI2TERdOm8vNFQ9OFRgJk1OLyxeRkFHKVhyRmllPUYhLCM6
c1h1P0BtWC45OjhXZW1kVj9XQSlwWzQ8bDEKJW1MUio1YERvRS4hR14+WT91KzhfXFtJRXVcPz1d
RGRnbGwqV3AhXmAvblhlW0MzOSJ0Ozo9IyojTSdGQzZ1IywoS1pzPz1hPyRSUTJwdDBfOkc7JTA9
XjpxM1FrXz4hUT5YIWoKJU9sQ0s5OFJNYXMxPThJdGw9WS0oJSQ5Ri07IiFUXmUub0RaXy4nMzci
aERMOWJwa2J1by5xJTUuaVVjaFh1XjJgQUhuYVBcJkYlZjs4TzpLISlSXztfcWQsLi0wUE5kRERL
QmIKJUUqLDJbTC4/O3JaQzdMKTtjRTAkJy9pbidqISsjQl5nX0xyVTptOCRvLXU9VltFO0NnQTs+
b1InYmIlWC5daStXRDZCRFdwW14wSy9YYCkiXD9qPTkoVTEwdUNNTE0zUkFdIkUKJWAjQ2dOU25q
PlBOUydCcVxbayssT1khSW0/YEY7UFJGXFIwUypIISUwSy8pMTphOV1FYilETHFZYicmWCIpPV5S
JC50V1daIjQsZkY2SmA6TShLal4yWSw1RCYnVixOXjBbJUIKJWBNZy5FJzRaKVlHOC9oRUBAaV44
LzFXWmk3UnJoUmI7dEtSIyo8ZDsscS4hR1gocVBmXGY3LFozQkdlYGIrNyQiX1BbMiFmNzNcR1ZS
REtwXEYjMUZMZThSIiJcPzNYTWVNTXIKJSpRUGZNMVUlQ1xyUUpCRD4xYj9IOVBLcSttJ0k3dGAl
QD9RUkhUdE9DRkxpK2ZKLFxsPTZrLG9AJWRYW01TT2RMUlhCP2NHdCIlW2VCU2xnZDJAc10tQW07
UUZSKDhzcVpicE4KJT4oRCUtKk1YP1UhSEJEQGYtb102I0ltVlYqTW0raHFpVF11b2BlSG4xLDom
XWVLazZBLjJaKCssYEVgVjNdWEUrJWknIXNdJ0UncU5gZWdVczNuQXMrY10/dS9aU1lqZjJDSWkK
JSRlX2FvK2pFX1I0X248LmRhK2QjMlhHVCpTZCtES2w0RD40XSxSXWhiaiV0NyVbLC48Q2pHPTQ3
aSJlXjU0ZjQtWzFRU1YnS01rPzdvJixXW1Y5S2YvNiZCXT8rSnRhaVR0PzQKJU5cMmdwaWJ0USk6
RlxtJkhDcSJPO0NSXytrYmZGSFJPXVpxXGBBJltWNkFeZ2JOTSNRXCtdSnAyYXR0OGVYUnBDUD8l
LjRlZVQhJylFUmJYLj9gNT4/cU5wVUQtXylEYU9wbnUKJTFPXXVbLzNHYzlUWiE6MjxSPEkzaVJj
RmMvaComKjB1S0ZdbjIwXCVFMlphOzh0JzpFISZpbi84IWNCVj1lMWwxLmIsLVBBTDEsSGE+S1BN
UGBdK2FPOTIoU1E6VlpZLUw/MEYKJWtzbExYUUNgXCglJTVuZStjRGlBX0lwYWAuMm9xQU5fYE11
aUtTMF1hLjE0P1ROIWk8JyY9RjEvZ1FrQiItXWhSKCpxQDtrJTduK1crZTRpO0YlTjUvW1VgZDkl
NCc5bTUrb2UKJTxrZV90Km4kLkA+dWZwQi1TaGRLWkZlSWFocUxyLksuN1I4OF0vRj0jQlVBXSE3
cTJdPVFKa0xnYSRSK2UyQU43Tlgsb2EjX3B1J01jLTZKJydGM0cqYHFJIl1hZVksajJhMCQKJVJn
czFUZjteX19lQSglYFpLT1FqWlZoPD4zYjo/aGVWcGAtaVsyRVkoREldJFdmUy9ccWpGRkU0JCZW
NlUnMClIJG5dPVVMUVQnT0xGR2JUK1lsMUdhUz5ELihISDhnLUhFPDAKJWVbKTg3cjJOYWgqTDJn
VWRcJWBnJTJOdF5pWCJnJExzIyNtLSc+NzZnKCkyKEttM2NBNEsuOSVqcTtDKClgdHRWM1ZXMlBa
UTQ2N1w8azxQOzlvRmIkWiEjO0c/JWQnQzBSY2QKJSVxZF43IVR0RUlMWUwzQSxMOCtmQXBKRmBf
WjMlNDZSYlhQZiVdYWphQ3RMTjBoNy5sXDwqR2FMQ0R0I2xOR20sVmZpYiFrO2E3TkExSDZtKW8x
TjxNW09iJT8yYCJGTCxvUnAKJUs/aCgvMVE4U3BjLFUkSyFzKHJpXC4uQHVGRGJyTkQxU0BcJSk+
R2NUNigsJV0/IiNIcnArRUhQOypQdWxkLGVvQVtURG0+K0AiVWFWXDJkSTNnPmNbVSQkZiU2ZVs3
Zkc7QSEKJUdiUz9rIydhIUojSVR1VChqXm5HUSJ0a1FQaFE3VmdibF4kaiIkPj1bV15nVSwsKypN
TiIydUAtbFMlNCVHYGJyME5zJiJlZXVkNWolbDsjWVRAOUdNaDM1ciljRzpsQWVPO2UKJTJ0USxy
Mi1mPnJOUUdDbyp0QzJLTSRiNFcmQSVxZS9OTWJIIjJrOCQzKmJGWE5FRi5lanErRG5FLzdPcCxo
aS9ZZ21JVC5KUTZKZUhBJkhRNCohUlJYZmU8Y21rdUhhQiglb1MKJS9cbjksKnNzXFc5UHQpPikk
OkBeSikpRVRPUkU/X19zXFlGQF1hbG5gSDI8I103RzhVb151JlI2dUc3QmtBamIyYGZVPm5sYTU3
V2dEQ1BHa186Uj0kaT8xVXFcQFIuUkI3RlAKJUcsTChQPE5mbDElVzE4NmUtJyZxVkVvZFQoYFkj
bDNuZ1s6J1xBLzliLFVDWz8qQUlMRVNRMkhfOjdiT1RKYFprOFRkamRpLUZNXixCMDlfNWlHTExp
M18tKyZdKDE9MmwlQWoKJW1cLmRcK04mdDE8a0s8U203LUIzRTdsRmkoJEgtI1w7PyxoVjQkQHUu
b3I0ZGw1WV9zNip0OEIuLXRZUjhbRWZVajtKLyhWOFNUWURJcWVrZUZAN2lGYkE1MjtQV2xYTFBw
TEUKJU5LRlpKM09raCUxYkNTRmwhRkNnb2dCUSRmTnA5dU1Hb1ZBLkZKIy9mNE50NUpzJjo3IUEw
Rj86OXVQIzc+YTZvZyxcckwrYFYrSCk1N0dyZC9fcTdXJzZUdG8iLC1NTzdOaWAKJU5kZjI+ZDEs
STIyN1hvdVAxOXFCYlpybWU5PmJWSj9IOGk9QmJDWU05KiEsRVZWT0pvOihgUixNZEk2RnFQVkkz
QnVXVjlQXG1sXCxhcW1iImNML0JGNDQybXMiODhhJj1iKCIKJV9tZ29nKTkqOEVINmRGIVsqQ0M7
b2NMVTorOVQnazBmWy5gR1pKIWZdM1oyZW1EWyttMyVwZE9DVDonWTluPFBncl43XVNxMiJpM09Q
b0ZlNCMiRSpdST9CTGhBQjQoNkVhNksKJUNEcihXPDhTJlxjJ2xbcCpwZG80P2xFVUkzPiJUIjdj
YVZKQDMyL10+V0MjIWZpdHU1QWh0KU5QcCk4RyRpa3RgOTpaZ1xPXGs5Uiw2UDduQ0I3TVU9XTQ1
KGluTy5BZFhkWlcKJSY6ZW0iOWlQSW4xQWpZRTUhI1BxZy40UEhSYDxXPztuSDshMEMrYWNJQiti
NiknYXUkTVdGM0plKCtQMzs5XWplL2EmaSdxWWt1dWhKQzZAL0B1RGhLKUNFJEBSKFhDX1dpKVQK
JWFrKGpDJjp0PVU3NUo/WDdjQjRfQko5XlssI2UhZ3FpIy1pXiFGNWQyRFlEJCFMPEVHZl4zRWQx
WmZOKl0qK2lyVGJrZFxAaVlJOGNFLl5WOTRtMTBsJkFwKUlLaio1UF1xaUMKJUBbVFkpUlgwTDco
V05MIzMwY29LYGZzPy8lO3RNOk1dRUAxOCZhOlo8bUFOM2VIaCpWQ3EjZF9gL2xkRC4paS4pQUhK
QExqOmssSWRhPV06cGBNL0JxXGl1LjJnI2hqQC5cWToKJTNRKik+UnAwZ0dxOlckWW5kdFhVQ1Fp
O0FabDJrTiZpKy4yRDwnMkw9cSMtXW04Qm49K21eKFwxTkhYRztZcmw2JmpGMWoqLCFbNDFOI0FR
XS5xTXUqbVBCIUw6XkgqMi5OP1wKJVRxJSdlPUEjXU9GIiJDYE5FW08qJEI0NjFZOztJUF1ebEVf
T1xnLj1iTjFBQERJbyg8ajdCKEgtTF8vM08nVSU3YlNWW2NlZnA4ImRDMnAhMD1iO1QjcE1ldHFo
NE9yY3RoYzEKJWxlYmYtUUkwXjhmVDk5RFBXYENoYlVWRlsjUV9LM1RZNXAtMj4sY28oR0pqaGQ1
N2M7PTpFOEpdYStlPkJnckhdWj4maXUsJGBiVTNzcSdlPyJwdWgraEM3PlQvU2g1ajhkJikKJStI
VjYiLlNEVmUwLTBlcDFdW21VTz0iPWlYVVRlPUwwbVcuJXAwRCVhQDBrWF1MV1AyWSVHLSpfU1Rj
X1IxYmFUQm0rTGUhLW5aNyZCcUFAOlgnTEsvdCZYdFBPXUQyPSQvPG8KJSEpTiwjNFVrVSM1Qj9A
aW0xZ0gqa0ksMWEuN1FKUmlccU5mW0woOWJUcGQmX1xRRCkrRzMoODlOUTFFTzI4QSdjYTpwR2xk
WTU4JmkiO0ZlIjBkTSdKNyIhbGlXVl0zWjlnOj4KJWBEdVM4bjJzYlVjKEksWGE/O0Y5UTgzNjNe
JC0qT2s+PFBDTUcoIkxNRyFRQGY9bUw4Zk04Mlo8VENgLThtPCo5XTFGbm9VIkZELGYkcyNjUy4o
MS9IKlNhVU08M09SO0dIY2IKJVE9K2MyKz5lb1dALUVkNSJfMi1AM2BUYGgwYSlfcmYzRz0+YD1o
IkFCZ3JiTj9YKl5uWEdGcGxcOnQtOUBtLyElWE9LO14naXBjY1tWaSMmOkxBYGRwQF5AYlRINU1u
S0YyKW8KJTRjMjU2bCUxNmBOOT0hZVA/JkYzSjYvIzRwS0RjO0AjNWNcVC5xS2sjamFBRj51UlAm
MEFaOHMqIj8nJEJsUW1MOyVTIkVBT1deTWdJYTcvZGQvNjgsVnVDdFw0SSIxOCYnRzUKJWFFK3U2
YW4yczshaGlddDZGSXI+N01gc1xZcSlPT1dRTmE4KkomUi1HcVRfVGomb0prXE1bVT1sU2lyLj0i
JCUsLGBrWy0mZXNWZ1QuaSo6M1lQVzlccDxNRCRrRjhPZnUtaVEKJTcqMj8iQFVMTFYkcmYsL3Ay
O01TX2FLUGsvPXAuZHJwRmJlaEckT0tYUm5jMkNRMkddJ3RqTVxqLkMpWGJPUSluR2s7QDdnZFBS
Qi8hM0ldYjtgYjNkM2YjSFpJaFVNP2tcKWwKJSJcbzRsRlhsb0tnOXJdQlowNWJPXkstJE5RUCQs
XzI4WVlTaT9fYjk7VHVoVmMqJVdTZWxbTyFfXHExMTkkXm0iQU1DWDlkL0hiUVFdVloxZm5XclJr
JGgydFNCSDZWYz1WUWUKJXE5ITJRKCxqOykjMk11LGhvMDxfWE5kMyMnXzA0Wzw9RFg/aCg6RTtP
VFljSG9hYT1uNTkzOFliLE1NPVAsb2hRJmNkRGAnYC1vNC1iaSQ9MExbRCJqTnRVNmUhRylhUiNZ
MVgKJWlyUl42XD1HV1lLa1hZOT5lMDVaO3BKIWFdRittXDQscnBSODk/ZCs9Ii9qOktcSE84cS4s
OGVnImJUalxtYiItYytLUEo2O1FSPl5XUG8uWnE2T0tmQmU5aVQkK29mbDkycnIKJVJWTVQ0YzhV
dEZAQTUjckElcm5ZbFQqInNcQillQTNMVCVWYD04NTNNO15FcDRwYmdNTGhIT0RWVz50bmltWGIh
M2ooRkRiIk4qWEdBMGUiOiw0dE48KkozZURAaSpkYFgsIUkKJSMkXD9fUVl1WGwuVzdjUC84R0Uv
a3ExOmA8LkY2NDVhVUgmMklYcDc9W1VpKDZWPyFjTHBXWkJUQChhXDxiLnR1PVVFSFxFXzxML0Eu
OS5WIWpUa2pDO24vPFNeSF1TalVqNDkKJUZDdU0vO2lHMS5WXD5oWTwhcD10LDIpWUtVOSstTG5K
P3IjM0crXy5wKmVPOG5XLWowIi0yWVZROjddRWohZ1BEUnNIWjxEYmwtYVRncks/XV1pdENbJVdY
TEBbLERuS3AsbWIKJVUyTyMlXmNfMVtdYGFlPmxhPChuUUBiJiEibCRKXEMkN19oMFMhW18pV0l1
S3AuKGxRVFEjcF1TMls9USc8aXQtW01pa29RLkJwKGJZKFk6ZypFOD48RCtFbHJrYitKTzM4O18K
JWAsLUQlS1osJHNsU25SMCJwJiQoLlYmbDs2PE5DPylhZU8qZHNkPSJcSiIrIl51VmhoSFJVKVRd
VzU5YT9JQjVyYlE+IkRXdG9SYm1EO2hYVjZVNE9zLk1UNEU8NmhxRUsyODcKJUNoUD0jTFprIkc3
M2UvQjpaQD45bmlPYm89R1FEJDk6c0oiWS0oImluN2dqY0BsWUZGP1s1RipVX0xQJVptTSI9XiIi
V0E1cEBSYk1IPiVaOyZZdTkqX2s5OTtwImEuJTBEdDsKJSw0MV5uaEMrLmpGRCZlWU81QGxuWXM/
Y0M2RUluQV81O2NvV1ouN1JAUlFSb1NBOTkmRVIyclsnQE08LEEpM3JJW0xOVDJPbyVrcT5LRF5s
QE1udEsiLGtoaEVKVjBDRWkqayUKJTg6ITI1SVJ0YUZtYWMtU10jSmtkRChVOFI2TTQsNSNTQyFU
N01HL0U0TjhZJEs7WTxQQVVaaytaTkA8MThuR0hgX15MLGk3OVozJTtoOnFVWjQ1TDpUO1dsLWYy
Ik5EPjVLSicKJSFELGRBWjc4IURDO3BzailhL25YNnJrLUNRcikpaFU3MD1NaSMkITlwZzo5bG5W
U2ZZXyg5N09DY0tAOVBHITxFUjxSZT1XNEUjTVtsXVczNlo1IlI3a0clTSNyM3VUYy5RIXIKJT1s
RU5ULylbK29KZzlOQCRwOEwrVHRKRGtZOm4uRWxcZVBMVjkkLVUtXEhFSDdwalcjZmphO2Q9Yy5y
NiEtWFQlOmtkPDNVRldWdGwqLTZNW1BMVjpJJjIwdVcuY2AncXVbYz4KJT1eUFo7TSxcTCtvL3Aw
SUw1P3N0Z1VzODtAamMlJExcbkhkYXA/WTtwdE5KLE9vZihkTl51ZCQzWS4qNzhYTmdlbCdPKktY
ITBoTHA0R2ZrPy1uPE04ZCQrImQoTTlsWFdOIjEKJSZ1ZiRWRj1TP046MiYzWU9gQWQlcjh1VzlU
UVkrcGEpPzBSWjssQkxwLisoZixyckI3I2FyKF5ZWUBSTEsicjRzVlJBNDYuJzRxJy4hPkNHS2Bz
Sm1EXnRXNzZVO0YsJ25IdWAKJW1lIyc0OiEtLDwlUkJcKGE6KzRcT2deWSpaOWpkKGJwRXV1XD9t
P0gzZkovVUpIVkVNPF9IO1JRWmZ1cCYzYzZHQmRTNDc2SGN0U0JMSGUxNVdXRVo4Mi9JVytIZGtt
ZnRYOXUKJVdgQy4yO2xwNzlGZGBBaEwtZDpNQDohOE1WUExeMltkPk9fYHFZdXFhPUdhZz8sV3Ei
XGtoVXEnMy1raGQ/MkxWTFlzb0xuJUMqN2RPNGRsPUNOT0lKbFhdcjNVZVZoOlQuVV8KJVpddTon
VHVmWStfNVY1UiQ8Ri0uV0tRcUBmIjYoRC1oOSdsSVlBV1dAZE8vQl9ZYVU1O0RfI19eXldpIzdd
Tk5sZHI0dEtVS2kmNmRqMkpnWGxmdWIrbSUpZEVxJ2MwOGt1N20KJU4tUVpEVT4wMChiVjorWlVY
TFxnZHRYIiVWI1ohUy9nUjRfYlVkaShMciNiK11jWlQyLSRdaEdRQls9YFYsdSNgUjdOKF1YSXMr
S25AZ1NlVEZYOnBmLj5RTSU8cSxraGpbNWgKJWhDJllnUWhYNSwiW3UyTVpHK1IhI2BKZCNodV00
bzIjVDRWVEcrQmw7ZjJuRywpPFQ2TDloYXBAdEUkbFJqXzJDamhSUjQiNSghISZFTDZxOlYjaGQi
X0lwbCgrIzhjSjY4TTMKJVZdUCYyJCtrRDQqTWFyZVUjbktkJFFLXFM9I2ArMjhobFAsO0wzLGEm
QG4zaEl1M2dtVjwsRldiNWNyLjFQckY7Z15RXU9QTmsickM7PHVWMC8xQ0MnQFheays1ayYsRi4q
OVoKJUtnNWExTFUnKm4ja1RIJCp0cmlWb0YlTWVXJXEtKThyUmZDKyckN0hdTzMwOCRrLTNQIz5H
W0EwRz1pJGRzLkdoOiI6JFI/R007VVZJOzJFMUhfXm9RNWhkIlliOCVBNz1UIjEKJUZOZiJFbHE0
c0VLNS9QP0giamZCLG5zTXBoJkRIVUAzXT45W0QlIXFeX1o3RCg5OF8mXkRGWkAkISFHXTUsVXUh
bz9saVZlS2VyKVdfWWFfXFJEXDo5MVFTaSttTy9jLFQtJzoKJTQja1JEYjFwVS4pMicmRigvQnJb
M1NYWlFJSE4rXC9lXz9XVilDYXI6KGZXJEtIVS9AaFRyNkMhciNIKjkmI19NIyQpKklMUz89OzI/
YyhbPFVSbTxnUkVqLFNTaG1KSSZdTykKJVY1OnFrJzdiUE5BRThaWEctQiJSMkFfb3E8b0ovOWJa
VyhlcGgrW2VkREotTUZIPnBST1QpPUFiV2NfIkdUKVooRSUucW45XithWlkkREhCV143VG9yalhX
TFlHSitOOyhRL10KJW9GRzFsJF8tVmUjXCxBRTo1dWs0bWw4RXBCN10saWhZRV89XiVZWW1JRShy
XiM1Zmc7SCVIQic8Vi86OTFjI0cuKlxtckg4UCIsJFhgLHJccjIlSXVIanItUShZTitmUDRrRU8K
JV1zLDRKXChpOG87YiFwPl0kSk1fSXJFSWlEVCxzdFc6NFBjb3JNUnNFI2koWDxhOGlgUjZsUCFW
Ji5hcHFlUW0sWGVBVW8kV0pCNz0sZ0czKi1AT2hfJiVZaDIrOi1jUlRkYlwKJTZNVEFQJF81W21R
XipyTkZCOVYzIVlWSiErcElVT2okRjxiOjxkZ2g2YGJMcStuRzFTO1lTVlw/XSoiJEdNbF8iPjVB
ciFaT1k5ZiUuay89YWNiVWguNyJmY1U6WV84SisiPEEKJVZEdFsuOU5iRXVKSnMkPlcncmVDMUl0
T0VmTzVFaVFnQGxrU2BmZ08nZWRHLSk+SUs6Pm02XS9cPkgiUTRPPEoiQiFbcEMhdDgmJl5QJSM9
YV48YFM+UFpdc2pkXS07UG46SHUKJVV1V1xJL01OV19PcD00YVZvK04qMCR0cjopJDdYNCchbyVP
ckNaalROXU9ETD4ldWVPWEBhSVFALkEqPmtkXlhbPCJIZXVzLD43RmtPMlwzJVNQNmhQc2o3P1dj
WlInXWZJaFcKJWN1Wj9gLVpbOzZpaUVkYkRXX288RGYvN1ctRUl0Yzg/SUwiTEw+MEwxIzRSaWdV
NVg2SzMnJidYJzcmTzhTJnUuND5uXyFwVXFpJWJYdE05ZDU2S0ErJmFIbUMrNGxHJSZCdEwKJTg5
OD5wSDg5XjVYbmZAUUc3PkUoSCshIi1BVEUrZ2U0cEpOLkJlODpTRV1ybmYxPFUiQjY8Q0c5UGA8
Xz1WayJqSnQ8NywxRWtuYVc/amhPL091JjFgZy8yRjltO10xOjQ3WVgKJSVAX0I8ZUo6LDUwWDVP
TVUiVU1HJkd1WiJyQEI/VFE9T15hWWwrPi5bMColNWQ5cz05cXVjJSY+KkBNIWw9LydzL3I+NFgz
NC5sbENpaHVmcEdWdGQ5SCVpO1IxbDNxUC9SYy8KJTMiUkAiM0FsNV9MdTA8MUUnUSNSUitwQ25p
ZigkOmVlSiIuRD1PMkpkZC9BbUwhO1dSPVRCNFkvNHMxdGNCb0cmLUpPNk83SU1RVXE0dDxzOXAq
Xk4oOFRZKEtpOitUNGk8I1cKJU1WaiVoIVM4LFZaR0I4TyNnbTQxPGVpLF04LTlxdVxMVV9WIl9g
MjhnZ1xUUj0jK3NKLShwMVQlMCQqM186NUFgXWdeKC07KV5tM0tJUnVgIjIoKWExUmYwU0UzJCY3
OFNUNkYKJSRJWk9JLiphNyQwXDspIS1dNXFrTTBKZEUoYmZvMWlAMixOTlluOlgrdVVQWC5cSUld
NldccidIRURMNlUicXNHXk8vITZVP1ZOQDpdKC43KEo4cUYtJVhTYmM3TEJuXT1zPlEKJTxIRGxo
KWlFIUs7Sy5rTFhFcmwjWiksQUZMaW0qPkchaVA2XXN0LVonWkhsUUZeLFApXztVQ25VUiVdUDZI
TmcnSXVqbDJSQTptNSsjJF1wZW0+ZyM/J1kyOjAmQSI8STgqMlkKJVQ+J2RGPUomLFUwRjIlLSFk
PChJVHBeXGckRlUqVEUsSTg8Q1wtUDErWVI4bilBaT9WUy0jbDMkLHRlRHFpKC8oIV9SRF4mUWtv
LVAvT2o2VHIrZkBvYml1YStMdCw4bjYmVHMKJTMmYE11MFgqNWdLQUkmUlgwSTsibGQhRl1UWk5A
TUlDUCRGSWNkY1s0c1FQMkNEcnE+cj5LLT1DVmJmcSRvOjUwT0cnVG5RSigkRSM2LE1waHI5PHBB
RD1aOkJSJ1tgOCk6TCQKJVhDNzJzSl9wbCljUjpaUmtxajlZNkVzZChjW19aXlBLZF5SMj9vQEtP
YEkuIjlPYXEhIyNLQ2NXXiosU2JTR1IlYGBuQExxbjpeTjc0Jzo0RlMxU0lZMTpkKGpsVSQsVi06
ZG0KJWMqXFMhIjtFOlshSEgjI0pJXisjZl5dVV8ySydaLTJGcFNdQmoma2pXXyVkTnJja2E9T1k3
LCZULjRrK2ZTVCo3VzpAXTdncldUQGxTdXBUb1ZjQEFFaUVBMG5uUEAoYV47cCwKJWlpOWZKaUhL
XHNQdTFwSjZlMCVmImtldSkxL25iS2FcQSpuKjJmR0IjNUxCaihkNks0ITpTSlFBOC0iXEhsPVQm
UlAicFkjUltsT2MhWCMqNzpobVpMXWguU2otaCwkKVJnT20KJWlWM1xjKGZpbnQ0Q00oIUksaD9G
NHNPdUtqPGk+J1g8aCYvQDk9QV5MYUJbUExfUEVNMkEhcjlnREtyVS4/Tz42R21acD1GW1YvODll
LiFBIj8uUldxNVdLOUUlV1gmJSxxZGcKJTExPkkwOTlfcm5WOFc0MToiaFNuJCxhOSojT09FOi8y
NGQ+MXA0JmUmNVY8NEllaCxqUyxJNDY5NjJKMjlYXDdwRCFdITYpZSV1SzBfUURfU1xxOTVmR1Vy
b09za29LOTA8QSQKJSI9MFhpbzowRz5VOz9sJj1OJ1ZyJFI4NFZSPChOLFlERlAzWCEkRCFMZlFl
cGFLKGJvQiVjLnUzcjpmUztSXVI/VClnLTolLjJXISFJRl5nPkRtbyFWbTZSVDYjLSg1N04ibyUK
JSkiQy9HM289KlAkYTNVbHA/RyxHO2wqN0FbclZtTVZOZGsnLlZuXmQoaVouLiNKWz9CajBRP1kj
cClncGY9RCtiTT5rKi06aFMyKDxbY3RAJ3QkPyJOY15OTkxmQkFbNWJmS1sKJSU9OE9lS0BlR14y
OzBHPS4tamFcRFRMdFJoYVJrJmdYQl1XMGIoOS1Zb1YzMjcqUywuZXNeNkk5L3JaWFJLWG9IU3BF
PWApbD4yNVpBKVpZRV8vU2I7JlVCbVNMUyhWQVErUSEKJSwsI0AoKjs0JS9NVkA2UkRzMUslJCRR
JVMmYk9PMmhBWV9vSEJWPzRHPVsvIkxOUzUoM15TPEFbbTQiazFvWnBTX208P0FWQzNzP2NINDNS
XF5CVjs8TXIsSkw3M2ZzVFlIRC0KJVJRY2RsamQ5WTxoS15vTDBGLWIjMlpbQEZAN0E2WE07XSNU
bF0wdUZCN1JDVmI/XjdocXRjXl5Sc2RiMGRQc1VLMCE3SnVWWSxZc0toLCcmJVRSNVRickZGWChZ
PF9nLyw0UyoKJSU6WkRgOiFDXGNsOkUrMDRedFNlNztzRzAhNTg1RGBeaE4hQFEjQ0pKUF1eXz82
L2BsJl9gT0wuV3Q4JUJKX11NL0w1W01eQFRvaFQrSS1kaEAkZjNfUCxJTEdAYktmRU1MWGoKJSpm
PlJSVTRma29BXz8xOVU3S0RjN1xPSmFtU1ZpO0JxclQ6VlRrKUVYYDQvYTNUIVdrP0M2OCE7bzA4
OVkrXFVTPFxKL2o8cFB0P1JcPUQkSGoySSZRWlFRPj00LW5YaFYuPmoKJUNJTHVlX1tuaVwlUEdV
QEI4UjFdXEJhb2ZLTkdpZCI6KThvcnFnb2NFQ05zK1lgRV4vJ0hxJGRda0ZzRTJxdClXTGptVnUh
am9TLTItcFtMbSpKSjBSYmpsM0cjWCUqV2g/a2sKJVI6WmYtVFskUmFnJ3QwRUx1SVokSVAnYz1L
biglRi8+QS5Wb2EhJCFARmllOz5hKVZbQF4lMyxUKDpkcEFzQEAoQS03cllbaSZbPVtFR2NHRTBV
SWpyNkA7LVEiNTUyUnJSdFoKJSNTX0QjQ0ppcjk4Q3J1RltFPEVIRUpBclZsJm1HQ2M5Tml0ZDBi
ImxodGJxcioyXFNFJDJWVGssOjJrOWs9JEtfbTI/b0phbm8xTChwa2E9Rmo/czlaaENrRF1yVGo7
K045Li0KJUlrOyFfUDNjRDBnKUwsZDMpTjhNKS5SXCgrbi1uYj8iO2hgTG88cnRPIS1icV9gXUVe
IjZBJTtTV1AxcEhNNU8raEU5Ilg3KXU8WzwoUVNpOVZQIjw4Xlhmc3AhW2s9VWpdZlAKJVBuVHNu
TmRJSGc8MFgwZlpMYUExMSVudTdLJlQkM2srJ0tvRmJWa3M1VikuQ2YtLnJkNHNVSytpV2lMUWsi
N0tiQDl1YiE2YFZEQF5dTjQ6PXNqKHBjVTlwYFpOYD5zL05XYFIKJV1sX0ZdViEma2doazwwRl9i
MylKNkMsQU4nRjE4KGNtTHFkM2dja080QHJxdU9EUT9dM1slazwkQnU2Tl1DJEUmcVxaZkQsSVgr
WkRmWTAkSCJvPTEsLlVpJjA4NmcrLkhtaysKJTRkRmFxKDk5ZVtKVVVxUTJbJD5iRWt0QXQuUE1T
J0xTNW4uNkJgVExrM2gzNzElKShuRl81LUZBPSxPVkxTODZEJUtWb3RCIW5ZQWBDKypxWiVzX14k
JFF1UCFrTzhTZ1JtS1cKJT5CTVZRJT1ARTo6UDhfSyIsVFM0TCRPXFYjNG1bPzg2Y1FGK0c1TSZj
IXAuWzgrW2F0UCRJNDEpQkF1cy5VbWokYFxEWlJcOi1tRG5LSCUtUTdJXTxQdUxSRlpTY0ghRSdl
LG0KJS1WbSlDNVVyRVRfP1MtbEVzPS4oJUlTYkRMYS9IXW85ZFJeWCkhWDhuV0NEQVFQbUtEOzVb
XCYubVxYcFI9TjslLkM3QmQiKjBdJVMpYjROPEhXIWZiUj9JN1lLX0NaVWVjWGAKJTNIJnE9XVdC
P0AsZjI3XzhQLUhbMCYtYDRVVGIlMlBVUjQ7KjYiS2RRa2lrPCdIKDwwQmRGSyxmUVY3YTc2cWhO
KyFaa2Q2SEhRM2giYU8nS1YvZy5MRHI7KSxWayFaRERsTEEKJS8mNmRaP2tgaVI2Qm07IkZzKGJX
JlVCKkwjYDA0cy9qXTxKSDlFN2JVVS8rWD4vOkwlIm5HOyckbnFnPzJERl9UPltKaC9bUVY8dClA
QDxwUzZbZGJIZ0BaZylzWE1qZEwoPmwKJWg6Ki5VNyhldDhWQDJAXVB0bkFqYl0lR0U1QVpUXDxw
S2pQTEA+UF1DaFszKUckWTthRT5LYzphLE9eZS80Slc/OkM+czE1bmQwMEpvOltxNkBIJ0BFITMi
UyhtZFhUZFBQSjoKJT0wNnIlTCIjUiZrMDY8PFpJbkFAPVYjXWYzQyQscmAqI29OUzxvR2Q8Tz86
TUVITVZKInIyOygvREJrPFZza0hZbSdUXS9rYnE6MTxpXE8rb1tXUG4vZGQqIj1hYlovJ3V1VykK
JVFONihjUC5dYWlFQW5JU2gtSE0lUWlqY0piJlQucT9IKzArPDE9LCo0MDxeNWFqWEpVaEkrPjlH
Ny9LISV1REdKQjBqQyYsQDMyJTtPR14hQSY1anVVaFJNWHI8bTs7KyYqMGQKJUEza1knUUlDTTNG
JEZOWjlHP0E0Nm0ndU5GTyZub1pTJ2guZEBJMy9UWnArSFE+LEwyR1x1JjowNz4lOTlDREw+YS1a
K0ZIXG9vK2J0OWxva3NdQ3VmUEchMEA0PUhPL2ViY3AKJTtmYmoyXV5nQ2FZZlcnKUBVPGVhTTlJ
I0VCTW5VOD1RI1pYXVU9MTlOJG1zRCJQP1AxWklVSFBpITlPT1A/VkNoTGU5Sz1xOGQ0IzxcX2x1
UFlndCc3QClVclguLXApP1pjaWUKJS8pXWpeMiJXWSQhMk0tOD5PN0hGZlJzJ1c9J3VbQ1NAb2lX
ZW5nNVgyWEBLL0lXIzMoXT9RU21CQCcnOmBNUVpzVWFpQVpNdV91ZS4nN188VUxuWmhGOVojSVd0
JGw7ZzovMXIKJUZeUW1HTmlhWDtqPj4kbFcoVyNLJ01IT01TXWgvcy5VNGBsYWVAXW0vNCM+NSgz
SkhcWjoxZStVNkZqUD9nTSE+SmZZcDRRL1l1PkM8OEdaJj9CYDMwczZzNzlLKiVMIm9vbVUKJS9p
cTc2X3ItPStEKEFyOls1cStsYmFtMmtdSm8vWUswQm1OLU01JmxCNEhwbDQsSE5OV2xQby1oU2B0
Wlc2XFZiImhQWW5pJyhWMDtSO2AiIVAxI2RNV20wKGUmJ0otXTg0J14KJVZJSilNVjFBMCMvXT5u
R1pccUEkbDdTZ0NJXjcrJS03PHRmaktdYGVcdVpZKTAnV01CWkk2ITtxZChsKHFKNDsyNEhlNHMi
TD8xMyw+LE9OLVZzNkUqP0UpaiF1QT5KKmhpUFkKJTF0KGdMPzVyUmc3REQmSiVZazlPQiNJXjQq
J0wpaD5LIyYzR1UqOEpubz9TIVpBVj4+XkJaMHJHbFQoVk1cL0xiXG9dZT0kQl1YLixmak9SKUMh
RGFfKDxTQTZJIS5wXiw1Ul4KJSQnJEYkZixRc0BmVDI3ZzYiQGAmSzxbMUNDUTU4LU0tZWw8R2Ft
Ukg/IVskJWNEYDRsSkNoQDgzM0cwYVcwLDdwPHUka19qL1RDTW0iJnMsWVsrKlguRkMvWjltYzMo
aF1KKicKJWBlXiNrODg1QChTcSlAOVhCQVpcUlBgSChSOlpRdFZeYWFfOThcUyRRPVBvPFI0ZFtS
IWEuJl9lUCUodEBzaHRYMEhlNldJQlspYFxXXkUiSllCZC9YXlo2IT0sZysnUThMJzcKJWIrJyxr
YDZnSnBRP1xCMmMrOGNGIk5mMi0vYl1rITdCbWJpLG9gaTQhXFxMa1lkSDFPXE82bTpwX0AoRys7
K2ojPCI7Jl9nQHIhUERcOT41VClyRD05USJcVStoVG4wX1wkLlwKJUo1SjgnZClAQzQsclZDTCU+
Z05kNlA/JjdXTk5HNyZ1Y2BYTU8lXTFKJyExYSsjLkw/I1JtYW0qQF0kMj1QQChQVlo6R1EzSz5n
Z3ElZEcqRicjODFfNDF1ZUNZU0tOal1oLXUKJUtoOF8tMkE1Zk9QLFpDWVk8IzBWPDMkP1ttT1En
b3ErPEJMKipPQHEnQ1BeR0FsZFpOYSMsbEJhOm43I1teVjpuckZsTUwoY0ZYUC8jKHNgQC5NT2As
alZuS2QwSlw1YDctLUIKJWs+YydTZTVlQi0mbFM+al9qWT9oMFhuaCpOZFdMRmwhaDFMYlQxTDJg
Uk5CUyVuL0MoXy1SWDYwJHE7XUcrJlBXYTtwYikrMV5VXlUxRVh0YjgkJ3JBL2VcaU41RCVeXnRo
cykKJUtuUUxmIytnRUo8VEZRQlxKVU88KD4wM1suRGpebC9IcWMiSVIyUS1xcUEtXDU1UDNoQy1z
ZV9sST1UVlhPb2NLJTR1TD9KdCF1Uj5oWmJTQ09zWC5MOHFWI0FdcHJFQkBgPlcKJUMyb0grLk5y
bGZkPmMjbS43VzQiUyVBWUlgP2pndVVvSU1MKD8lPDg3S2A7dSc6RDFTTW9vTU9jWWtGX2pzQlFd
QUFrKXE8RDJbSm1OQVlRQlFiKUdlZnBEWzNrcE04VTouLk8KJU9uVSRjVCUzP14uRGhaIUNqW3JB
czUnaGE5PDJITUAwKkJtI04nQWslakJ0ZDRPM0xaPjE+P0A+OyNtJjMvT1lxQXNKOlFCUzEkPTtO
Iys1RShJQFNJQydNTTs+cy9BKlhoVWEKJUAmVkY/I2tFYjRtSnQqKDdOTG9JYD4hczYkLiRTZCdt
R0NbTU1oJmUsWkJHYlk9c1M9KXBiX1U3VVghaClTcG0nL2VXaD1bN2tlKiVAPGNbZlJ1UDI0XVVC
SDIwI0BCSlJQRkIKJUxBYnAsTElVbS4uL2ArYDdXXTNiXVsvckk+OkpiZzsrWD4hRi08SGNTXGEs
PCdXJ2Q2OFs1MjVob3NSYi1tRTQkNSFDK0NOTE1PL0tkKj83NS11aCFLUnA3XFE5YihgJlE5LjoK
JUBNc25EIlw2RSJRQyMiTnEnZ1cxO0gxOjNHXTBYKmgpZ0sxJT40OURbQXEoMEIvIj9hPEFmZ048
Zm90PDc6Q2cnWF1KQDRwU0NsLlorPF5bVGcjO3AtdCpdWisha2NvWThLcUYKJUFMQ1E2NnI/K1FW
cEQjJGs7XCxDNlpSQ2A/amFOKCReTytNLyUvSzA9STRUV00oSWFoLCM4V1hwOGtwKEJhb1NPJmsu
ZFpLIVBhcG8yIiY9Y1YnaWpHaFlxKGBIR3NhXyU4TzAKJU1uYk8kIUdcQ0BhXSY8WCUyRGNcbkRO
NydaKmQ5Lk40b0ItOU5PTS5nbUVyN2BYQVouJDxDJjlcQyFpVEQuLkUrZlZsOmJcOEE4JChVWF08
ZV9QUEc9Py84QWVmOG47aHQrNycKJSQtOHRccSEzZWBcVE86YyMtXXBfM3RsJGJXNkNJPjdkMWki
PEJHWW9gImUwbi9obFcnM1dUUE0kIyU4ITJHJ3NGZWZkJ1FnbkdzbVdvUj4hMko7PXE4UFpdKF9b
Oi0hN01Dby8KJSE9LCZMLD1IczsqVkpaRF42WVx0S2BFaig2aSYkRCpAJE10OFxeVT48VG85KVk7
STg8S11XWDgzQ0g/PUNzRy1HUFMycUM6OVJKdSQpMEZVUT8lamQrIWNOQCVdMiliSU5vXT8KJSQz
KTFvWiE1OE9YXWU1SVYkVUtqViljbm4qZSsnWjAtZ2NyN2g1ImxLIlolPmltcFV0Kiw2by41SDhF
bV0hPD9PLlVNSURVVm48XC9IWmJWRCMuUElLSEljY2YmZShWPlJlVCYKJS5hTnNPal8mSGM0PSdq
Km1zUStvVGg3ME5uYjleY0pONi88WWVZKFhOKSpnUFtAbUhLclc/NydURzVmZTtWKHJBN3BcdVgk
V11SJV8wdUxvJ2Y4cTpMWk8/K244UD9CYW1na0YKJSIsV1oqZ0A3RjhCJyo6R14yb1NgQ0YkP01c
WVRJcmN0V3VDXDhnSGxwPydBJ11kPyE8MF5ZYVlYaCYoPT9iWV09WiNLKHVLVV5JJlZjJltpQUFR
MFxNaUtMU2JKZXJuYmlTOEcKJSovRF85TFUnSXBDZGhscy9wKlBRTS8wTC5VSkhyWitXU1VnTXBu
VzRLS2clRkkrZWNuWzpsIVFJUUNxLWFIPVNYMitvXE9gbStIdF4oPmZRK0AhKWNQJHE+a3E4ZUYy
OicmLiMKJTtdLjAhKXEpaic/LFNxW0ZWKDtUWFRkOHVwYWprdUwuUUAjLzgtVFlPSzFGRUBwREQo
IkQwclE0UDs9cWNhZD9KJGBbV2tpZ21zQj1GbihHP00qZUs/T1FjVTxlMXRFNSIlMCEKJWJVXV1i
aVQ6SkRpaEB0NTlmMixDVEwtKi9KN2Q4KG5VXzAqbFZgS1ddUENaJ3FGY19bW2UkRmJPY0NKcmcs
X2ZJMnVvaGwmYTp1KVxuUkg1cVVAPilRMyhdViEvVWVZXnRsal4KJUkvXVdoIl5FWUZOL0YjO1ZH
MHQhIjg6X0tnJmchT1VLTjkmJDxlMyZARVVXXCJxOzBlZk9XVTBBbzROTS8iRGw/MHM5YG9lbjVm
b1s3Q0U4QGdGRy9QJWEhJCg3SXRwPjdIZE0KJSdqXT1DVmEqXjFkRiQuSEhtdWlrPFBDOmAnaS9S
UUgxMENPJlJaRl9ibWxFOEZEMkEvSTtWIiY1VVJXI0ZQJTo4PiUkYGIuSFZsLGxyZCxIW2pjJlhs
cSRTTydrXEBLZCZLXzkKJV9tNSk8LChwXyNcKCM3blwuTmo0NyZGWEM2OjhTdUllZktlNzJEZCRp
PC5BcG1DYihmJFVsYi0xalo1LSoqbFxvTUNxMF1WNWBWPkxGYmltVEpdYCNDY1FPYmBuLiJYNEEx
bkoKJUchXV9KJTh0I0c5KkZHaWFmQyhaZChEP19fUVFxRmNyQzA3KWUjMGZTanBwRFBkbkFSYipW
KzQ7XiZfTDZvWGtDLSRhUkkwYEJ1TGZcbTBTby1RVkxRSzFBY0NpKEphcGQlTUgKJVEsPWAkYG11
cT5qJWlYLFJtbERQZWhZRUhqSUVabSc2YzVwYDloI19bQzZGSUEvdWs2TGU6QEhST0wtYlpROFZt
UGwzbUghTUNjOChmTGtiYDY1Vz4tLlh1PVhoYHImRCMsXU4KJSk4ZEZNPyJmL0RbOVJVYzE4dXVj
NjRFQ2hNLHQyJkM4cmFZOiQ6aHNPUztrSlteVlJCQTA3Yz1pSFFhNyhtQF8oRS1ydEItWjZ1IXAz
ajhxPThDcVxKdEBSJGc3ViU6KDROXlUKJUBXRCJUNFtxaVBIUyRzVGpOR1NuL0whKGUwbkFwJThD
PiV0LUJLJVJCInM4N0JgTkVSXidENlI8WlYocko6UUA+b2FPUmMpMC5EXkhTQW1ATkhNWWhBIk9x
Z0ZqYD5hRUFkLioKJSxqImBuO2ZPb1U3QCZdcHJzQV43JyctN0dlPzguOTtmZWwhcUZfPylGT2Bm
O1QlUzlbOydca1FjIm9sRWpOVlFXJmxBdGFuWXBkdClxVzYsbCVVMnI/OyhzRCJLIi4mMltcTFEK
JU1pPTwoUCpUZ1xOREFSa1w2LVYhOmthOS1gXnNJa1tKMFUnOCctYjIwZVZOUCxjIkc3bDVnPDtF
LmRRaiZwSiEhO01mV3NhRiVNbkhjOW9oIWBBNG9SVzJpO0dWLmxdTikkb04KJVJdSkJNK1oqbCU4
LWw+ZTVhLiQsUEpLIU8lJnRxVC9tclVASDk9InJkPCpuKEg+XmouS3NMUHA9K19LIz1ma1dUakQ6
Yk1KaFZtS0tjOUEsPlNOPSguXydPJlBjP0Ancm82MScKJS0hZDZATHMuJF9Ub18kdC5eNl9RYlJ0
W2MnRSxLbilAQCpiMjYudT42PyRSLyg/IyE9XihlTk5vJ2lXMmRDV0heZWZNXE4sWU48Y201cjdr
UiwmPlNRaXA4JGAlMGZ1JkNIV10KJUZOMyxLIVR1amMrWUlPXUsvTUI0NiNibiEmckBrN09APXM9
VV9aUCJjPT8pZm5uVW1aNjhxVSg1Mzw1YDlITjQuKUZsSTxvIVM2bShKR1hgU0QrYTU7ZjYsaDA+
WldPOGROOFcKJTowaTM4YC49P2A2aytrYyZKVkJ1Ym8hbiIqO241PWpFPWZPYE5GIlgjbyNaZ2A7
VixBJW0ycUVSYTdSSC47OU5aWzgyV1szMzBlPzNWVykhP10vMk05Z1c0LiZVUkZkNlguNjoKJVox
YT4mKnVHVC9RUFhYTlFAOi1BRigiNDM8Z0c9RyVbJyk/SzFLLiguMjd1SilKW1lRV2ckbWpWQyNb
ZmdjRCEwUk4lIilULkUtazg1M1xGV1o1Qzw2RHAzcWUhR0cqQltzSi8KJTthMlVDWyRcMENHaF1Q
WilxZGVfXDs9QVxYWjhfYWclKkoxYS5VNHNDUTVBKWNqV3V1TkQkOUU5YEA6czFbMW91I2RANmAr
WEVXZFFYNE83YDk3cVIhPlckKzlpbTdILFhLZ2IKJTUzP1c4WC86Sm5YO1ZBLldGXTg5JD9WSy4y
Ly8oOUZNZUczWFI7a1lhakY5TlU6ciRvXGNoXSxVMzVtIT9RI2A6TCo7MXJOPyJyUGY+S3Nya045
PDhfQEpwJy9ka3FtbDchZ2QKJSRCUEhlQTw5P3BJKjouJDxEQ0ldW1s/MiRsKEtAVDFzZGh0Nj9Y
NTk9USJYUEBEa0pgMFA7JjNBRClaMDs6OywtRjAqJUo5OHAtJG9iZzY8TVVzREBoLyFnKD5YYl5b
I11hTmcKJSVdS0BzTm5gV3M2KCtfODpcZD5hV1gyWF4zLUguOj5vczw6Py1LbVZPPl8xdFdrdGdW
P2ttMiJXRlQpIU1VWTxjVkxuQGYyWl45JTZyIW4oPCRMOVlBNjc6dUJGPGZvUEtxRnIKJU5Jamtg
XThkX0dPV20hS1YuUXRCNkFAJFxJSyNKNzdCM0FOJWJZJ0VnNF5lcWchcSE2Zkc5UFwhSXFgQzRz
R1FgKkMuaFwlInJGTWphUTlNVFVqLkMwPzFZTk09Ij9dMjRHbG4KJSgkKj1ATkNmN1NtUTpyQiJb
V1VrSDw+S0taK24tOkEscUdebmdSb1E7Y09SQEIyYDNRQ0YtdUxPSWAmM2xFMERdWmZMLz8mME86
TD49PCpQTlpWZTZAaDlXPiJTOSdpPTBgK2cKJVdiKkYsKV9gaWhOXE5FMDpIKWdwJ0w9RC0uUmQi
KT4ocEVeXz5SWiguUj9yQGtiamJjcSwzVFtkUTVNdEdaOyMqOHBwbDouZkNhW2NHSmElcWZuPWNS
aERzQmhKRUcxSlJLQ20KJSYtMi46MFkjPXBTbSFGOlFWc0FQbzkjX0RHKUsrPFgmS3JEQykqXl88
TC5mcU5COj9CalM7MFcnVkRCNCIyWkMhNW9xS3EzWE5HR14kTi1RU04mJyM5N1R0ZHBxVl9mUDNh
YFoKJS48Qi86WlIvX0xWTVFANm5tP1lISG9cTF03ZjY+a1Y8SXRTZ2E1YkhMdFtNXCZ1JVJzLW5Y
YD8/JEdpOi0hcSFUJ2FuZyRYMTZ0U2BEaSUyPUcwbUY3YUFIKm4uOWhYPCcvPkYKJVdBRSteJlIn
cVFadFhRSFpIVnRRbTJYLF0hOGxIQEZHUjlwZkk9WmVFTjdjaU1mRClOLDRYKyUsSC89LFUuVF9Z
Y2txJCxRRVB0Mk1SUW9QRV0mXlc4JS8yTC9WQWw1PE5xbCIKJUgqNClza1VIJ0VVNkQpU1ZETClS
IlZPPGteTihRci9ddHFcWVIzNW89Xls7IiVdNiVDOT1cNVtiR1pDdWpnMF08UTtoamVWZ1FMXjpc
ZTJYVyZQTys8JkxPckAwSi1GRT1oUF0KJTBTS1IvI2NLNDk5cCpvX0RHVT4sIlBeIkdNWktyMjpL
P0tRWlZobzw1Vi5XUEZaUi42T1dcLz8vPCthMWJLdXViQXMmLCpfZixTWEFAREAuJ11cYCxbc1Uq
WmooYE5hVWsoMEwKJVZRITg7aEtHMCs7Vz1kV01TIzhDXl5fRywhRGxdUUg+YDRwTXNLUyRFRWNa
W1Y+cUIpI18zay85OWVZN1kjO3BvNWFnQ1FfImtOLGpHP1JQSCJCJl05Mk9kNiYlVURoYkhOJ0gK
JWdoY0BuZW1xUSZYWkhobDZOdDA4PDlzNlliKj4vIjxgaFBgV11fJ3Q2PEliOUZ0PnNcISgyX2gx
OkBWWz9uJD9VKFBOIUxXLV8oJiJZZzhXZ1RhXCNmN1dRODk5a0NVbGxSIjkKJUctNV9xSFZUblcn
ZTQkTVZxWGtkKTNWKkkuXmJJUil1LDE3Nzo7InFuND8hS1FyIjNeaVteLWpxZF1WQyFlMUtsaHUu
JC1samZKJCIydCRDXmVJWl4wazhoZzEtPXFEXkU4XCoKJSZmZiFcVyY6R3JaOnJwZyMjQkMkLSRk
bigoZ0ppYjEiJUpTSygoW1pxcjVyQ1BBUSorXDlMWlNFRytBUlZGL1dqQyImTjpNY2ldYTthOGRE
akRdXyMpUjFEaFJZUnJiMDdyZ0gKJWpxUWxQIjNzbXU2IlxYZj5RWD5waDNDKk4qQUI1R09BSkRM
XmhPNCheP15kI1tnKEEzTXFOLWE/ZDBTJDZXZnNsODI7LV1PSnQzW1E6ODdGJyFMSCRsNkZHcm00
TmQqP2c+QzsKJWFwJkRGXzRaUFQ8Slwwbzc4OUs5OUM/LmM4SFo6J2E+JFInVzFGKmttRHM5Yz1Z
ZUdvXyNQKUpaODBcZTNnWWRsLiE0LiI9QlFEJjxTPFRWZWFLQk5rXENpI2UhMzlHOXAtbSwKJTlN
SDs/PGhjNGZAaDtzJUk3SC9cS15rLSheM1VtRWNBR1NaZjF1bVRCIVI1PFZBKFEvPiokRiZtY0gx
J2hBKTdlUkpjNkc1XWlBblxJNEguRnUrQT0qWENMTSQsQmV0QEBdaSsKJTkzU0dnXDpgJzZKSmJv
QWFJNHBqRD1eMUU7KF1YXmJSLk81QCZGJz0rXkNISig5ZT5RMGZxN2lQM0Y7S1IzTXRIVEU9OWdI
SCVRVm9gNT0tQHNJKzxAclJKVCtpQkU5WUImPTwKJWRiLUdoaTw4NzhHUihLcUsoJlQjb0VDLlU4
WHVsRmZjaGhCO21iXGBUOzI5WjtdIW44ZHE/VHNKSHU/TTAyUipuZGFZQiQiPSFuWl8tQUlPcEZS
SjhUSSVxJyM+R0ltNi1DZjsKJSNGO25EZTw3b1pPaUZsJmhWYUA6WHFHWlNuSmxLUHJSLWBpIz5v
PmhFPGZkQ2dWRCJjLjE3Y2Q8SiM6WyZYbCJFci91ciYwKj1vbGVeV1RlKidtYFxNSG5BYUc0JCRJ
TU9GW0oKJSJHMUZKSjhTKDRRaCo2YW5Ka1JJX09qdDFKLDR1dUFMLEVwZFpBMFRXN1piPyMlRSg3
W1BZbXM6TWlrTmdOLzljMURLIl1KLWMmVHEjI3JLSyhmUVImMCI+dWUkPkxqI0BVLHIKJSwmIS5z
OmBPIjE6JUQ1NFM7bktoLTdqNjVdUENYbj5TU2RiO0xlZWFLUGNFQ1A0MmxmU1xkLFtjdSQ5OEBV
I1J1RFIjMi1mOmJZVjppITdtQkhQXV4zb0RYPGI2RDxsbGNSIloKJVUkRDlQNVVVS2RTZ2NYK1Y/
RylIV2MiZmgoUik/PyVzZEgtVWk+ZD0jTzRiOWsiSlxqPTZFYjtrLTdMM1BRYFYvXEhOTUI6Li5K
OSktS189I0VgLUlkRlA/WmRVcj1nVzAqc1cKJUM9JT1ON1ozSkdBb0xRLz0qJXVoLTNfVSpNWydt
b2tcYUxnKjk8SW9HW2BEUCI6aWtjYEhpaURPdFAxMEVaKmZAPCljZj5ePCU7OUhiOSZgLiZEYGBc
QjtcVVRPSko4V2tKaFoKJSEqbkpcPDxObCwrQjBFIlxtZCcrPFMsbVVaKEVxTEhXM0U4X2hoc04+
UG0pK1dbXCY4MiJuLFwlVmhOViwpcTJVKkExX2o7bGxfQyczOCgzYERFMmBVcE1bVmdZQWFMYFBf
aU0KJUA6UjlXYGAvbUwvTHJrIlBlVmpAZiFVMFgkJTEkNj5XdGwnWElAaUBoR18yL10lYDQjOEZm
MCNKL2ozQVJxOSN1XWZndG9QO0ZMPCxYQ2YyNj9WKjcmbHIjXDY7PCctYTtUXEkKJVdwLFJRVHBl
T2FSOj1YbmROKGlYKCdvLmUuN2c5WDBhdSktXDxDW1QiO2pJITZTMFU/TGxrIlJePCNWZXJLbkch
ME1YPzJrN0VCJUhjbzxlKklwWEJZSWY7ZlIhaWNRVzRCbDEKJSFCbEhCSSlzLzpaST9uWWIjTmRs
XHRrOF1DOE5ONmN1LEoyYSYiTiMoX29GKEhXJiFyLT1LZDwqRkwpcnAocElYY3FkYl4uQlY/PUlQ
XVA0UEVKXExtN0AuMFZTTlI1ciYrQ2oKJVYvWmF0QVFbKz5hM0FgbTo0M0ZdV1hqOkhvIVFsPjBp
OVwyanUncSduPEs6VDw1JDVHR1k+QT1NMiFEb2goMjdnRihhSHJabyQ5b2xpSD4/JjlccGFtU0VO
NCFBRDxyV3NaMGgKJU1IaE4+Tk5lITo8RypkXF9VNXRBM2Ija0MxPWZhM0o1WituV0QpSChfNSUt
Sk9bUDtrQkpkXkdpYWRkRSpzNSloaG5GUGcxXmo5Iy9hPzQkWkw5dFo3alNedDNKVDJLRztLX04K
JUomV1I4PSlxaCEyUiJDNF9EO1BLRyo5MDVOJDE6cGtYUWpkNSk0cVBCbFkwZWohWkA4RV5dPi0w
cEhgTUpwQVtIRm1LZCgmMCo7PDNaQjlkISRUYnMmLVRiYHEkLEoyV2c8LUgKJTk5WWNjS2k/a2FT
NWssJSJkTS1zRFMsPEkhbGtpcSM5IWhlUWR0MnBgQiFGJCJ0cS1ISlxSaUlqXkJvVm1hUWwjIy4r
Y0JiKXJeOT51ajInQWdHTWFdaFxPPTwicF1HbTFUcigKJUE4LTI5TjxkOSldUGxkQFJDb0RtPlo0
T05qKkFnb1A9NEhCN1RFMmZuQkRTdElvckY1SW5ZQFJWa0ZaM0EyYnQpMWsuLloqRiFKTE4pYSE/
J0k1P2tlIT1obmFJZ0Y8bHNkZS4KJW06T2shOSs0LDxKVWoyJCpPNDlfJSMnUURWQVlvOUBIdHND
aCVbI1o8VipCOC5PT1taPl9BUGhFdUxTKkg1PzFfb1lNIixUPEt1XWxSWlY1V2JUb3NlMC8pIkxW
M3M8SFYpMjYKJWInV0xCVHIoYmYrOF8mXm9rRyJyMG8nTSNLJFhiQlkhJmBPUStSYl8wZD1TQUk7
UzBFKitlXWJZNVolT2BcQGRMKi5IU3RkVGZwUmphPmVvS2YtJU9cJDkuXThyJ2EwRz5ZKVMKJVIx
MDFoJEInLD9YTWBYRDE1T2ltJkhrUF5kIUtIZDZWJkZWRi4vcl1jQVQhPFZRayVxNV9TKlprWjpr
aUJmR1Jkay1RQzRhMzFuXllJT1A7KUNKTkImSzZuYl9JSV5jPk1yOmMKJT89bGY/LFBtZS1vOV9O
bixjTmtgUlsycVpRcj0kL2k7JyVBT0hCY0Qtb0RsTVIvaUhzQG5TLWppaUoub2xtXFBMKV1PUCMi
L2NbIig0LE8qS19WcDBbPUBKUzxiLTtuUG9VOmAKJUUwTj8/blBvMiFpZTVIVFtRa3BJaWMkZm5a
Oj9IXF81cUplaDUsM3RQS2pvU0BRSGlbTU1JL2JOV21cPThsSV1wUW5VOlM3NiNtOj1PclMsOCNW
LC5QJWFLKWE+PjQ1ZGFERmEKJVhVLFUnQS9vPylFMiV0ZTVgSSxsaTpKZExjXyttaSFWWUV0IS5P
Wi5DJE1SKWgnK3RFRUhlb01PU1BjK1Q9alteL044cShMbSReWzcmR2stUXQiJU9ncT5RJjE/K2Jc
PFxaJVQKJVZ1XFVPQ1BGSF8naHVZTldhTzcmJVk1SWApNlliUUxiUlJkM2RqNUBia1hrQ0UwQWle
V2oxb1lYP10uQTU5KVAibV8pa0VHJUBXLm5jVl1HXHVJcktTX1QnQUAlXG5NTjdFKm0KJSxHY0pU
bDJpZ0FsZjUyQzEnUk9kPF5EX1tRLFlIIS0uMERsciRRIlNrRzotWW1dVmtdJSpaM1goT1AoT0Fn
b0RZIk9WUkVvKTRQKzlTP0dHWVJvaDxAQW0qV2RHXGFjKHItc2gKJVBmUmQ3clxWY3U9XjNdI1on
IVJVaVIxXFBQXlcxNkVEMyM/MVlRNi4nOThYLmJDZTNKWjRLKVQ8O2docWJkcEc0Jm4/UFIzM3Q+
U14vUCE+TDNiMixZb0FQUGlwJ0xeZ1UrL18KJV0rQkJiZkclVlZTTyRBTy9tRzBeTkwmUmttaWk1
cGpiMEtrPDNbRixiXTVGMyxsWVtGND9AMDNMaltDZ09tc3ByVmFJN3JqOyNlNDgvMDwsXmxMXiZj
TSk5VDwoS2ohTT9ySnIKJTkkbm1rJllzaEg8RmxGZ1E5SSlBW0Unay9aZjhAVz1COlRBSj8iJS0m
bFlJXmsjZTtqPls3W2FYSjNrc0A9ZyI2TTZUQF80UjxfbUI3Yk5WUklETklhJldKQ1ZfMEFnUVk4
VEoKJUonYlQsSDJLT05GXjhYXEpCaTdtMFtrcFtAWnJeYzBcLEJ1NE1iaVhNOy9NdS03VFRkYmNJ
dSRWSUQyRj1YXiQjZEppMHFGJylPYSZRaWk+YVUibjBKN0k7cTdCRE08YjxwRy0KJUROXVpmVW9Y
cD5IOzg9V3IwdWg9ZDEwJE1nMD5BOVc9JWEtUmdBaCYwIXQzdTooczJvZFM7MUxuZWU/Y3AxUW90
YjdHOkEhR2xCSUxXMz8nNldGayRcSks/J0duc2VyQik1TSIKJTYhMVQ4ISRTLU1tZkJHJ2V1aitN
R0Q3MVY4UTFtRDFPL0JNXW9MJl4wNWliOyhfNWc5Pis2XkdNSUVedEsiLGg5WEZVZHEidU1ANDgj
YClVK1smaDFiTi0nNmM0UkM9cTM0bjEKJSNNWVBxVyQ6Yy9ETW9eIjt0UE5SUzpdOjpMZ1tidUBP
OXJNLWwsbCdqLl5MOT8+ViNaYl9kI2M7YkVXYDs7ZEFLPDAtczhbQEVURWZVT1Y2SVlqL0AxJGws
OEBQTEcoZVxpV0kKJSs9XHIoQWI4Nl4qUkc8RCZAUWU9KnVHKmJLUDBTcWQ8I0hjVDZvWGZROiFg
QCsxTSlqOjxaSFRTIl46TGdhO2k4bG8oclNubyNoMnBZRzZMT1ZEZCouWk85Vl9aaHQrPVIoRC0K
JT1pPXNYcT9tPSZHWj1fREZDOihfMFg3Ii0yU1I7OnE7R0MhMGRYW29DIScxXXEwQCRITCJrW005
JV0pWSQ1UkRUNm9Tbj0/dXIwJTJiJWp0IWw3QEJQYWQwW1B1NjlSOiE5SXAKJUBjclBkVG1RODFq
RkpLL3I3IjQwZTxKUylndVZAJC1RY09uKC1fWFtAW3BoRiIrYEQvYjBZMTsjPF0tLSpwcT5KL183
akozOEpML2cjMFFCLDN0KCRnQ3FzUUEjbDEmMkNBYV0KJThoKj8pZD1JSVxGXD5yY1doUmItJ2RM
bjRKXjAsclpXLEApN0NJZz1DaD44YyhyI2VfOTwpdW5PPEcvaUphITRATVIjJDZEMDM9K2ZtWjJN
JURkWTBWMztLUSMmSikwPSEzZ2wKJWQ6MWsvbFhGKlIoWXJJIUNIQDRHS1FIXzhJR205OSJKIigk
MTc4QUVTSUFRJlhaPDZOQCRzSUFXayk7NDdtdSlkWGxsYVByJS1pTFgkTWglW0xPLF0zMHI/M19t
ZFk4Y1s8aGUKJU5vPD4qNTdoZmdiVGE2XDNXPTJjRmUlLT44bEFXK1xLc21nJGkjWS1iSkNuKlhi
KHBxPTIrWG05aj9ZYj4ubidcRVNoXS9OU2QrVFRdUjZnVzluPiYoOTRQYjtHaHJJOVQ8VmgKJUkr
MzZtS1EtSVI1YnNSX1pOMTs1amJIdDRxJU9xaShtUzdVZjpXYjlYOXI3TzcrYVI0JUknPHFfXWV1
SkNAcnBhZU5Lc2khUWFQUmFkcDVcLTFmaCdOVCphb28+STVpSDVAN24KJS5hOk5YMVU9cDcxR0oy
XWFXVkEuUCh0P3QsQFo5MnJvckBsXzRBRydtWiZyaCtlLiVzI1RbPldkL0c/O2ondXJZO1w6OCFb
XS5JZGEjPURUaSc0dVdWLF5Zc044VDFhO19oOUIKJUheR09UPTpbRVU6IzVObCM5VkUrbCFZYHVB
SVc6VHJfUitnVTorJmllbyMjO2o3Uk9EcSNcMjRAaDVwdGowODhDWF5NVTk5LGo0Nm87OHQ9TUhH
RzpPXD4mKi5GdV1MJSgkbV0KJS5tWlxkTFg2KlJCOjRVSCRmKiVJVC1iLXIrYFhzLksmRCJbaCVN
TVMoLU8nTUwsQ2E7VWkwIjBSUWtPWi8tNmomYjNHWDlaKGBoaztxSyFDNiZRZjwkO0MlMSFZP1By
PUlJOl8KJURjcUVuai89QyFWRjdLQTluJyY7TlRUcF8xPy8oSFE+IT9wT0FdJl9NX3EmKm0rK1g7
UnBEOUZiWXUhSl1ONVFeXU8yUyspJCdDXiw0ajd0QyxJRnNNKWBqS0kjREkmPVJlK0cKJUBuVV1W
JCY5NEQzQjUnOlJgLlstciFaTTBGcHVgbEZWcXBGLjlcOGNfXCUjTkdYbGVEaTBraVIpQzQiOC1z
NkhrZWpSV2dlVDg8JVAlc1VHLSs+amNpL2NqYFxvKGNOcVtkY1MKJTBTJmk/S21NJ14maFM1L0dR
I2ZmaDNnNFBoPChHTmxdWCRjZkcoOU0mOGxcWVBOJnJEUC1eR11SNCQ3VSVwYVlyJ3ItNko2ZWFl
cGRtalIkLm1ZNzwkNHJyNVFWX1kxcGopMicKJVphdDM6ODs6UCxgTDtjZi5QcyksSzY0UUNCX25e
c21cLSInYz5TLz9XMmZHNG5KMD0zKSNUJEFbXWJnMmA5RDQqTUctY0ZeLi5jJW0qQm9lQyJIdStE
YG8vPGg+LW42LWpdSj8KJUkvZTVvMU5iblVEKldtT0lUW25pRGd0LWZJSig3XG9hImhAaEVJNi4x
NFxaXmZPJTRgTCExRjpNS1Y2cEVKJDFKcjI2Z2xTJlxWZWZuL0IqNGo5IlsjMGs0SEAsLnQrQGVt
W0wKJUEuYlFASUlmdUBRTGhRUFdgQD0nWXU+RlAjVzFsQyE8ZEAmMWNhSTFKX0gpaEpTIV5iNkpV
QjdCNDA8PkFRSSVwLCooLlZePjclMyRINmdgOHQ3PS5oRWBlKEYtIWRCJl5GTSkKJVtObD5ARW41
J2ZuLWZuIlYlcTdHO05eU2xCIys+XjMkSiZKJFFMS1c9ZUJFKERnT2xXVTJnNyJrbzU6aDNMcS5V
IjlsW1dfakdsLD9wSnUxJDhfZ19TUi8lT2BWSXBmNTxAKkIKJSdQaWdFPD41alFNWVFAMV5cbDNS
O0hlR20yR2dlUSZpcWddJytXKiFXQ20pTS4lRzosPFtlJWAzYEV1SVAyY0RccmtVYjhdQVN0SzJH
Q09iY1VwIlsmNWItUWtLcyUrTGJOOTcKJVM2PCY3VTs+SVVdUiUhLl8qVFYhPWw1VEAhTW4jZEBX
WjI+KGZeUmpnIz9cZm07Vmo2SDIibSxhaz4xKjlSLlVnK1YqQGZuNnVCUjpoZmpWJFFPZHMwSE0t
ZWFubUEjbDphNXIKJW5ua1Y8JEJYPmc3VExwdF85cyFHL1JmIilSJWVUOlo9JWNjY0ROOmdHX2ZY
LDlrMikmZG08PS1qc2w6Sk1cU2sxX3BIZ0RwYCNxUjY4J2hRKzI9OkRMZGpZZDdvPDdtVWRHIWgK
JVotV2NgQD9DZ2hccFBTSjRvRmVpJDwiZkw4NU1CaUtaJGlCUV9IcUFnNGlcYFAucTFWKVpbL3Ax
NW9MdTxzIV5CbmE4S0wuYiQib2FSN1QhLiRhdUNTUlI7cEFsP1NiO2MvJnUKJTY9QEdKIklcS2Ez
KjltbGBoUm8jbzU4I2YzMyJbTzxOckEsNkFiaUwpdDZBS0ZzL01yJ0BdQDw3OD9dJThKNEsjMGZF
U1FRVTs/PFhVcEo1V2Ikc0RnZi9gPlVxTFA5PzJfYE8KJTtEQ2hbX1pyXD1jS1BDZVUkcCdFYUtT
SjgsIlxjWmwtTyZvOT9DTjdGL0MzM0UiSEgzWmBuKyI7Y25RVjg8UiQkY2dQa05dI2FFNWhYPCtX
Vm5vKzAuYzg9QUZeXyVTYUpPRigKJWcrMnJZQHBGZWlfJEpeL1ktL0FJMj0pXUooJyZvPDFVcExn
Yy5gUiVtN20/UShAbG85JShEO0RKaS9pQF9pP0xqQTFbWldKPWIxQV0yXmxQVnAhNENiR2pDL2JM
dEdINFQibWwKJT1zTUAjbzxmYU5WQjRXa2E3bGE4JXMrWFNIVCxSKm0zYlQiU1krNFZeNHVkLF07
azcrRlZwMUhdP2VoLWJMcCpfSExeMXMqP3JiN0krdTtvRUFjPy5gKmgibkI/PWdhVXFjKDkKJWNR
SzgqJSNRPmgiYmA1LiZqZVFoTnE7ajFKPk5qNmlIRGVnLi1vV15yWlJWVHIkbTo5JVdFL3RLTz05
SyZeYWJOWzQhZVRuYTcvX1MiIWVqWUplRmw1NWRNJGxIRCMuSy41NHMKJURUVV0vRVBuN15TUSlk
WDclKj4hNmluRWVGakk3NlM1LkRnU0laRjFSUzx0JGIkcU9QYzA3Uy8rZihCSTQ4RikmbyU5SkBs
Y05UWGovLVY0Kj5JJTJCLSUyW0Q1aCE9XSdkUXUKJXJ0KVlFPSQ4dUMyX0w0JWBzI1pRTDZvKiJP
OEJHKVRBVFhlP1tWYjpzLTpAdW9lNkAkXlxYU2ZzOEs1JXJdZ0EtSiw3MzhyMHIzMkhpPHBjcm9Y
N1lnVjxLRnM4TGNmcHBZYkwKJV0+KzwmP2g+Rj5PbXQ6NSdkcDVsNTQ4Jj1nMXBIKG9eL2peKzkp
MHNPKzcuOU8rW0sqPlBuLDReXHIqTjBuJnBwNFQ8dDRUKVZBdWdEImtqSylMKVdxazlmSlJmRTla
cjpkdHQKJU9rWioib105O2hIOzglXD1iWmduOUYuWWVnS0lbVU1WQ184JyleL1xGJ3BpZlA6RF0j
JDFMJmMnTTgtRTMrcU04blQ2JiYlZXIhNlg8RjBdO1REWCREW0s+VWVhSzBFVDRcVEoKJW47X2Zy
Ik9zQzBTS19XRDlKU2pHQDFNW0g4L0onOi04XDcqKEU5Ykw9P3JPdVZBUCYhYkUwaGxQXypMZUIv
NmlNPTNvKnE1Q2BbIzpkcj5ILiRZM2s3Wkk/MTw4bW1faUJFJTUKJTBiYmEscyhsI1NzL2BPWVJl
XTQjK09IU21LZCFgLGR1X3JGV05aJVlBPyNnSTJLNDB0NlMzLDJhWkUmU0NLcSxvI1YhK0lcdU1O
QDNncm5ZajJfOzNUWWRGVkxlK2NqTl47W1MKJTBRaGBLUU1SRV1jR0VcSm9LWykxUmxfaTsrW0Ez
K1RkYVJVIlJILCJUV2hEW1xLTiMhTmJGcXRTIz5QUiZWT1tQWC5LWGdSTlNPMCdCM11TV1lXVEhB
RC0iJWBTJDAjOUdZdGUKJUdAUmoiLmtXPUEpcTgtMUYlQCNhVSphJG5tS2RxPy4kQjEqWWhpQD47
JmolVUQsSTBzcm1WKHUxZExIRkZRTjUpQUIodSVqOkwqUUM/MG9pLVorOm9iJCtcXTRBSEhaNlpq
OWIKJWwwcnRbaTUuV1pgODJkUGQkIVUuOmtUU0YoQzQ8UFVoTkYlSzdzOz0oRm5rcmE9I0UmNlhH
QzVyWHVPTytXUztrQWYlbVdKcW5wcFY/T1lXMVdCOGk5OUU4NTVEX0lNclZAMmkKJUNJYXFTYXRp
RUgtMy9aS1NmalFYMiwwQTFXKjM/Jj4taCFpTV1VZGdhSEpHMTslZVQkbERoMy49NTo5OyVcZTco
JWdLaWU6QmA3QS0wQ2lyLW9bXy01bFwwakpnY1Q3PDFpSksKJUJnaCpoZFZWTE9LM0lda11PXXMp
Z0gkMDw8a2orXkVSJnFtQEtyM0tVbkRHJWlQXGhAPjkyZSwxQztIIlFwVEJIYyNIOCgmOjEsJWk4
MVgkM3QxI2MnTDVWOGAsPF08JFlMVDsKJTRwaDpQNjEuQVw8XSlbOkctYmJKT2s7WlRVPysoQT1K
Rl8rXm1yOW01ITcuRzV1S3IhTGtLbV4+UVIlViY9aXErMHNCSVZtPU9tT2ddRUJwJyNsYVgjOHVa
XmI5PGNIbUtlQmkKJTAiI1khQXFhKVJhPExTWzFrWlhDJmlnInFHTTpAZE5FV3EkQ0I0cz0mNnBJ
TS9gWmslWFdlSmg5LCpTTzZjMHApU1AuI3UhJVxkL2gtMTo4PW11YTdPVmhObF4xPFk5VG80PSEK
JSYuP2VlSm07WnQ1TUYqMjRzaDUxVF5WRlsnSjxSWCE8QCVZZWg6JiYsN0A1Z1VpZ1pLIUc4VURy
MzU1ZmBpb2FXPUUwIy8vI2E2VFwnKW8yQUU9YlAiZCI1UShkdVZoXU41cCQKJWFTMjw2cCIxWVQs
aGI5PFg1by4sK3I8W2xcZTBmNEpFIzJmTzsrMCIwODxhX11CRi4jXCtwT1xYIkxNLDZfb29bZDs1
UidtRElXU0lRRlwsclRWaWJBI11TbSNEREVVck1SYCkKJWplLDEyImYmKUVRY1JqXXA5PEAwXSNh
MT1MQStIJE46PSc+PEg0LVI4aCRFTzJTRVglLWonOSI2az9pO1ZIPm0vcjFQRUlNTDovZihnYSFu
ST5oU01JQVxsazMjNl5wTi5wXEEKJTZQZTJYZWgsPHVubzZuJDg5PD8tZU1cWTdaTTRKbUhXJlNX
XzkpO3VrMkZFPTE1LkFxYGNLcEtTIk8qa04rM0tMTHBFNj5kRSs/MVpTL25eJmBnRldhWFlpJDh0
LURhMDhBPTkKJWRURlpEbi42VG5qT1FsOCxFJmgiN1dKJEcsYFRPXWZPRktvT1BSQiplN3ROSC0/
JCs+P0hAKCsvWzk+IiJxKHJOPlkxVjk9cUZ0UT1cIUY9K0RwZm5ZQ2o5MURKIWU4JjFtaEcKJSxh
LE9EazxrI3IqTDszbUFuWl1COW5PX0hFP1VCQU4lYzJFPEpOUCUhLmlSVlAjO2JaRzBRV0VKdGYi
MChrW14+cTBcLFszWS5uYjAxXzxcRGVfVDxoNmkqUlAiaFluPC5JYUkKJWhzZGc7W1o7S2djaCtR
QScwZ1RtUmhkP25iOVM8K1ZNbTVvblJYbWowYzhCNlI9KTkmYXNbUDJqPiEnYFYmb2FtOidBXUIw
Vl9BRi1CJTw7TENkTj1qRWRXaiFuJ2IzKXVWZUsKJTkuJ2xFV2YwKkExaXRWWT9lXGE0ZS5nTWhh
YSVGbidKNEpmZFRJaUwtLGddXTEsVUUrWyxLKlVcbHJkbVlkb3VdRStlLWMmVkhyTCdlQ1ZkPy5X
TUBKbUFccDAsbGZQPCdQR04KJShXTChaLz9IM0AiS01XaTkiMi5GUF09QUpYO1cjS1tyXEY3WDAs
NSc3dVNZU2YzLWtVUGo/OlFTZ3NKJTAycTtCQUVCIlRcWXFXWChcLEQzPkV0JCE1bjo8ZClmNFYw
N1ZVIScKJStiTS81XV0sVjZpSXEpMlEzPTcvLTJ1W1xGJ0ZWbmVwOTNHSW0sTW1MX2pXX0tQVT9G
WURSOGhvZyIlM1s3ZyddNFYnUyQ0VWBAUkY7I3FHLDxiaD9ZKjVAYlw6IlFeRk4vczoKJVBJQXRn
REM5azonL1tWdUk0NC9abWx0MVFRYm5YWUg4MGgobGk2bV4mTHNhZVBrLl80LTBJO3NWVz9QLEd0
PHA0IyxmJCZSOlVxcTEwOUE2TFhUak9QJkdMdSEkI280Z1lWOWEKJU9HWkFvYExFQSNiMmVFXEsx
QjFgUU5eK0hUZDpnW105V2hmaVprb1YhY0NkKy9CJUpSSk50ZXFoV0xxTjprVCpVbWt1UnMlSmU3
PkNgXHRrPCtpWCYlTThcXj49Y2I3O3IpJUUKJVw/Jys+VmI9YzJqZERkNlpuPSRDPEIxRmc+Q2VA
MGhVal0oTyw3dG0odVxgNGVFbk9GLCc/Kjs+NVBdVyJhR09JZl9HQlYwaHQtLSkjajZTRmo/VWte
NjVEakJEKixebk9pNigKJVo6RE9CXExUKmRqZSc8MCRBQDcjLENuS0dDSTROL2I7T0FscV51aSZa
aERxXVQjOnRYVFA9PkhxSENEbm5JWW9ZTjsxIigkJFlYIiovaDgkQ2djbWQ+NVtWQUZqJjY6bVE0
K0gKJW1QLFxlYlMmJXExXz1XW01TXE5ISmxjbGU0Ok5ndERFajFHY3VELlNXPWRrLGBOQnVvKF9N
a2tkMi9Sa0gwLjdhTVBUSVRJST8lK2V0L2NWX1gkYWYwYl4kREt0JGw4YSkzUVQKJTBybkxuVT5d
aFdoXU0vMWluZWReTzxDYkBmWV5nMFJZO3NSYjdfJF9FL08rYihMRjtXTkZTKFk3RyVFP3BpTiou
OV87R288MzMsOzg/cVsmZE1wNTlBXFMyQTFxZUJnXlRJJUUKJSY6L2VNLWItb1dsa2BIcUghPlBU
Y15XN29LanAqaDNfcWdUZTRaaTQ6JGQoIlZeNSlzQ1xpIzllNEsjR0YpNSslQGU1JEVoNVlwKm02
UyI6aG5MT15xdGdiUzsvZFZlQDY2VEIKJUIra0JQWm9qQl1KakpQbDJsbV87RlxMN29nU2k+Yjhf
TF9oJz89Ri9zKkQjQW1TYzdXLzNHXGNCMDVIPDJvQWdHM2oiMD4vQ1FNJF9zY2lzSElWMClNQGFD
XGRzNSdGJCxJJWkKJThYbEtCLDJsR1JCWUFuSC9SJ2llKm1xRUZtcillT3E3WnVPPShpcSJrIVk8
JUk9RVtiajB0PlFWYGFrS1InJDVeWnFYPWgvMzNBcElHalJnW2ZIMUtPPCE0UUo0XmdRMileP2UK
JUQzUTQ8T3VWVHBMNVBaK1NcNkcjczB1b0lrOFw+UG5LdSJXSGVlUiJAckZjaSU3YGZtXCM/Y0Ml
VSRESy0tJlZXZmpqYHE1Lk10QC0ydSJjbVMkXDU/Z2c0ZVBKJztpbUE7Vz4KJT8sbFlSVC8nU1Yv
aCE6OUdWT1FAMGJpLVJyZ1UsUm9PbTFvaVUrVycqIkhzbnMkSiQiNTRjPikvcC5ScDQ1S1goX0Nh
cyMnPkJvQTQrRTxgVnJsaDhrSFFvRXIlclVDIVFRbSsKJSwxKGtsK1xBbkomYnFuOyVaUEUyYGtG
YVVOK1UiL3A+Sj5eP09UKTg1P15JMz9KXGxgVkhDPkZKOCg+JUhYMzI1SDRtVHJMYidbQzREPF9z
YElaQlUmUGw4b1cwdTNgSmsvUj0KJVBrcz5Dal8uVypZb1M+KHBGMytyayFCbkpAVUBHXSZKXjVb
Z2RrU2E1Y14tdCwtSFtINiM4OEJEdGtAO1RhOHI7YVQzQjxmTmpIMSNrLGAnMV4nKkZeaiglQCNO
TTxbK2laWm8KJWFAS2cqMG8iXyJtUmEjOTtHZUs0WyxTU2VOOi4lUU9GQ1lmSFdfTTBuX0pPaXFE
aT5OVjtZYypdOlFlOU8yJCZkUTVVLl0rS0Z1MUcvU14hMkwoV18jLG1qMkA3TiVHMklkWCYKJXFl
OkAyViFHNiNhaDRFSVsnYXQwZyUoTGRAZDZHNGBgJmo6ayhmMmBYIkkrN0tjTUJrLT8nLEZmVyI/
SzxfcChhMTZeNjdSLDtIak5UKkEiREYyajJzJXA5NDhZS2RpPiFwWEQKJWZjcV4/Oz5JP0JESClm
dV8+O1FrXk07WFEuMCtCLjhSY25EWDwzbD0+aSkkQ25CN11YSkE1aHA7P09cVTE4JlJBPDxuPjsi
NGVhbXBNVmc4PTlvbmUhMXArMyhKJmhaPHJEaz0KJUYuPCFYMyFSWVJQY0A1OiMsbkY8SktkVDEh
TXIsMGBOKS5fS1Y4LTUqKi00JCInQ2RhOmhcNlJrLEZHQjpTXk5DP2wrUjFcIS04YiEqWicrYV1q
VyI1SSk0NDBhclxHYDsrZ1cKJScqN2NhMSlGc2VZaXRRZSgtcyhkMUAscSNXQ0M9YyYmMmRYMkk5
TywkUmVyJClCJiQ2OGRwYEA6bVlqSWlrVnBDR3MoREI4Ok5mM0xVdDR1SkgjY0xYW1ppKS89YHQs
ZXM2aEUKJV1sUGorVk9KQiNKayc4YEk3QlIxQU1vIkdXcHApX1I0YUQ3KC1BTVRPMi0zckJSQWhg
ZyhvKUpwUUtTOU1EUzoiUiZhSUVCU14zaG1YbF9tZkI4N0ZaRS1iMCkzPzpFTz1HIjsKJVFTL2U5
KUEpZV1XPkI0IVVcb2pGRyNKO1NjWkNuZDZyIWBcJVFvdDUyYid0UEVJZkIqTSNqcS5ubFtNc0w+
UTk4SEoiRTc3WENoPzhVKG0oVHNHPk0sREphWVdZWix1JjpDPUIKJWg6NSs1OXFxa1dJSVtWIldM
NF1EYDxFN1lYXUkqYU5mY19FZVgrVXMhPkFRSz1OKjVzL2dsVHQwKF5YRSNsRnUwIypOUS1KPi9L
MyhMbVcnPS00bU9sXjUyMlRgZ2kiKmBKNDAKJWJiUSZfMSJJSmsuXWM+bTEqVCcpNV1WcjciVWVQ
WiwuLktiJSRNK1c6ZlNOKV5TRGUvSjg2Py1MPDpBPmJScCMoPDdfP0I5b2MsPFZ0QnUlMGZsT1la
XGFIWFklRDdBOWlIXmUKJTcsXlIzVk1IKEVtUi86VjFBV0hVaSNZOFQ2PWshVEpjbzElXmYmZGpX
ZSQ5bG1RbC8mY0MzJ3IldGA9M2hQLzovJTBeLGMqbG03WmUiOl5lODE0bSRhWHAnKSYzOmI2Kmo4
cGwKJU85TlkxO1A3WU5TaHVUR2kraCwnRFBSPSs2bCtJakBgbi1hcTM0OzsnSSttM1xuWURDJzBw
VGY8dSFFcmpgK0IqOGpIWihrIkUsO1Q6OkZEUyoyclBhSTlzYmVWMXBpNlJvK0oKJWpdOlExQC49
NWdQLSdoJioiQWE1PDQyUG9gUjNTIUomLiYxLVFTLDAxU0ovW11TQiRvQHBPVl4jWGo5LE90PlhW
MFVvIWEpL09BSy9GWFNtZFRGWWtEcG9FPTVwLEpKVzEjT18KJWFMMEkrYWAuTChJX1cjLTlVTThN
MlBVPFooV1M2W1YoXzoiPEgsQTEhXyNlZTtxMmtEXTVHMT8vWmQqVm1zQktcI3JIc0NMU1BZKGBF
UWIuTiEnPi9bSCYlSWxyJjUqMEdrKWYKJVtMQDlvMyJJYmknJV9oaylvK11xNE5TNmxXdVBpZWNE
c11dLUJYYj9mbkcnP0MxJl1OV1BDQmFFV00zO0MuY0QpSHNEMFNpSjNpNSEiYVpSVk1iJHE1Wl5a
XW8ob1VHLU48K1YKJUdWcjVcaVBHZjlUckRmJUF0YGhDVXRUVCJFWyR0VjIjcE5HOmBzX0BAbVgs
TEgpVUJZSD0qZFNMSjt0cUFJbFNjYHNbS0I0XHJwRjA2ZF1bRDMlJmJNcTtlV0lwKG5naEpZMCkK
JTUtZWsvXU50NXMlYk1oLFU8VUxbI2BFT1Nsa108TTxkMzxYby5SXThbTCVAMTlMYixKVmdZWD1z
ODZsaVNOYDU3KlxwTyMyQDVoTWhrXD5bMj5uUkw3QF0/MF0oYTEoaGRmVCQKJW07OkU4aFEiYVVm
LUZQN0tPbGMxPnNNMU9dVGphUkxta19gZGpSWC4oTVI/QTEwcyNTVnNHJTdUSlM9cU4vNUo3Zyhf
Q1UkdU5nPWAqVyI5WkpMR0MlYztwY2ZMXjYyJGpQaUEKJV01aCFaLVI8SDZjbUMiRmQ7I2JdPjI1
QGVANEAzPVlmNW9PUzUiKkNHVE82SStTXCdiamI/bCs6KzJuOVZKYS1cSHJmIUgqYytrWklfZ21O
V3QvMGc4OGtfYTozK0duO3FVUWMKJTxaSytLLFwuRTMhaTdAWSNGcVgqQz8sXUA0KFlMP0JMWE4v
X0dkQ3FOOyw5Zzg7VU0oYDVFaVQiZDFxU2pxUkEtZFNUKiNRdVtHKkQqQ0FQWzVzai4zKGhpOE87
dCFLI21VM1MKJUFzRVgpUlJpJF1CWzZtOVkzVEEnIUpuXzZOZW4vZ2FPUHF0KDVrPShuM0xQL0dw
cDdZLG1nPS9ZTjAoKDhZMGsoJj5QPmg5ZG5bPzVWSVRbLlc5WHVCMkQhLmptK1UoKDFrWmgKJTgo
VWJKSi9sYlhfS2M8Z1srSz1vNzk2Pms2WkU1P1lKO105cTVFPW4vRUlrTiM3cTtHTHM/XkslNFBU
cEBFczBcZiJwTmRsdWVTYk8kZExmYnM0U3UrXTNLczZtMz5QTFYiMDMKJU5mV3FmSyYlSnI7TyVj
ZTRoQjJqZ2dsPSQqK0U4ImZvbWtKIkwsTz1JNT85UTkvR1E4JWQpUisnNGUoRTZmTjxYOWRlQVVR
UFBDQzM+X2JVXk9qMWooKW4nOy88PFgvUm1eUU8KJVJpNz1xNSNyWGsiMSRJQyFMRm03TTMoPTo4
SCFiQ11qQCFOP3RjNlNuZ1MmbTVdcHA6MCVzWnNhSVBgaiJTIS1AZy0nJko8I0I7Rm0jLytRQ0Qr
TE4+SWNebU0vTWAyaWQkJVEKJWNKc2E1MGklbz9maio8bkBtckBmNkcqbmVTbFQ8bkYxOUVHZmVi
YT5hRTgndCFaImc4SVhrJEtWS2AyPSFOVUZKSmYiWCIjVkMyYVtSaW49PktbMWRpbEk+c0dqXEkv
RTYqLzIKJUlQUT9xZ2JmM19nYGZMMCUrJCFVTitUQTVOL2ZOITwyKW4pTjZkNFFsMERoVUo+N0JE
JjJCcVQjSEEqJFolZTdbZCI3RjZsPCk4ME1AOzZoTmJcPERBbDBCVFskPmMuO0s3J2kKJS11U2sy
Nl5YMFZHQWF1UW1BO19HXW9ydWdZbkMxJVwrPz1vMlEnLGpoLT4lOFtQS1JAOEtWVlhaNSE5Ok5B
MlBGWzY/bmxEa1IzWD5mQlJBN0luSkpqSmtrYjhGWD0mJDZMPXQKJUA4aF9XaUYoZVBfRHNEOltF
MSpqWTZiQklJOWYqME1EKlpFI0Q1ZTE5UC9IM1g1dSdVSnM6ST1bJGpfJDViQEUpOnNHZm9xOiFB
UHMkQl4rMkVBO2svYTNaTixUOmI2Q2pJM1IKJVZxLiYpMWsmQFpCRzQkUiVXSyNcP0NqaypfNlEt
PzNtLlN1RS8/allbQjFMQiJbXytQXW9hS2QnQCpGbjJBdW1PW2UrPyNccUUmXWVuNypPT15AS1du
Q3FAIzZKRXBwaCRMQ1kKJVM6Q0xdLWZiJnE+RzFaWHJwckFSbT5qTDFoNyNXSmh1RT5ISiwjTjVu
ZV0oPUZAOVxFNmR1Vm5DJjZxLEZeJTApQTVoSzprR01nPltXMmI5TnRgYjpGRC8vIm5zUVkubTIi
L2UKJXFtRWx1Rix1WiVuYUdTQT1CWjtQaGlTVUsrW3NNL1xzNTt1IipUKGhsW09UbHI2W1FkU2FH
MF1QcWFOYlJqUU1bcXM+NSJOP1omay5ERkQoTFo2SWMmKzUwXnMkY0NVR0hvKGwKJTxQcCl1USlY
KShDYzojVmM0c1wiZy5lOyleTWtrZjcqaXVhI0RYTlRkdVRPLlYvSlRVbUlrLmxePEk7Q1gmM2ZZ
PjIzZ3Q9RiNIYUpFIyxBPDUzTzsmQUkkM09mNXU3NDohWVkKJSshJ3NMRGEzLkhIRjlpdUxoaE9J
XU1FO2gvam08UC9OYmBwb2tJbT82JFQ/Y11RdFpoRygnbTdxNlMtJGc6JVVcR0ouOHJEK2cndGBK
L3VmRmFEPzdELWdsKSE/QiNnaG0tWUgKJURvWkhOP1ZcMiZlTlEjLj0yJyolXCs8ZVxVNmxTbmor
Rjw3WilDdVFMQTNiZVkyTVZnZEpFPy9cXTM0ZGg7WjZyTStKXWBVTyNWUFosTWxHIiRgJmRyU1VM
V3E9X1lcNTJHS10KJUVrPXVqSGJVUypsKzIuQS5GQUJVaWxcX0BtZTQmXjkhNWckcVlzUSJPNkFd
ZUpQWWE+Wmw3dUBkWyhJU2BLYj9gaCFKbDVdTlkqI2pdcHBnTyg2UHFfX0hkSHJwNm8icD9kTTAK
JUk7KjdEQiRdaSkiXUlCOk5lKS4oNHMuYUMnWGRfbz1CdGFzXz5TNylwUjFFZkFTRWohQE9AVVRq
IzY3ZiMxSkBKYXUnWjltaF1uRGpJazVbSCwjQW0kcTxvOWU1Olg5YlpET1cKJUZLWz8/Oj1tYiZU
KGRbU1dVJSxXLTUoaFkrISdcIVJtLGBxQ0VqYisqSmNKKFs8VEhIK0Q/R2JPJURhNltMXERvYi4i
IzJvc0llTSlYVig7STw5TmFdKSo/LyV1YnRSJS9QJWMKJWluQ2pQbWU+dEJQT3FUMj1oNlxUVGJd
XlIqT0NNVU1AWWBkWztSL0JuLEhuQEBZaDVLO28mPFs1Pz11ODpiOUJXaU5bSkRoQ25VYlJvVTIy
NCFjJVJXI15gNHJTXihKN05KMW4KJTU5OTkqRGEzLkRkdT1qZCVDXGd0OV5oIURGXkI5SC9dWEJ1
I0tNLydqTzlbNjJGTHVbQUhrbFdnYlBrZzphTTM5KkJPYmFrRHRaIWReUGs7cDllJDRQRHE9I2ky
Jy1xTFdPJGsKJV1ybj02V0lPbkw+bDQ2MnJQI11TRExGWDhLZ1lSXmEzKSVZKzMjOihqSi00KW1G
cW0/cXE9NSZFbkFPZm5KVjpGTidhO1toQlVxVjpJakVVW2glbUFEdF9NIklmSzI4czdSUTYKJW9R
TGg8ZkRmNllIbjxoSHAwb1o9Ni4kM3JaaCE5IWhxLEtTaGk+WzhbOHN0KW1lTE8kczdYOGwwKmJX
JEM9OSV1OHVucipjSTFrI1dwczMxOVtwWk5tR0lpWVstPko5T3E+cUoKJU5rM1YsN2lpbCQ6XVIo
W1sqdDBAQE1XKXJMImpQX0osRyNBVVI+KFQ1WF9UXjczQ0AsRmNHb0UuS1hMQSpLYFlrMEJcVzQ7
cGBpIVpZJFlJI1NJRHE9LWxOJiFsaWEtL3FlY2YKJUpBPnRZcHEmPVJJbjBuTiNYTCRGNXFVSDlp
MGpfW2ExMDolTDJzYkMjZVxiWFhIJjcoaihrcSc8RVUoTT9ccy9XZzglcWFjKzxRWCkkZz5STmNZ
X0FHV10xIWlSY0osRWJdYE8KJSE3O2NBWyZYNGRiXTdwJSdVWVNrWTRjPUZMXkMmKjRLMUk3RldH
PTJpZXMySEFTKHVAU2gxUFM3KTlwQF8lSHA8MEE5WlNKPDBwS3BCMiluWD50TzNKTihERD5aaEdM
ciQ3NGUKJUEqZTlkNkM4MEs/Ol1XJUJzbzlqYExQTmtFdVpUJiY0PD5rQVopZ1VQNzwkQkZUSj4i
VWBWR1JhW2teVDJfNUtdRzsoOUsuMzUoOCE1QCpPNmljYGpyWElDUV5ecyRnZzcrSS8KJUouTSYs
LyNdPUc0M15rcV0xQz5WPnNgYVcoS0hxcVsxc1VXL3JrTS5CZWUoJDlcVS4vMDI3c1ViYUpYXiFs
MSNsQVoxczQzNEthNEM9MFA3QDdocj9NSixSViI/TVp1JiVvTFcKJW43NHVAY2JTOi0+Q3BxQ2pw
SkwjYUksIjZySmVZSVIxX0lFJSZlOD4zc1ZBSCghKWFQWHMmcFI7QFhXOUw9aDtKV15nQUlnbUgz
aTtILjYiNlJJNFNVcUJxPls1PSRnJHJMQXAKJSZOLWUpITFQUzMicUcwPlE8b2BiLj1jKDNnO3NA
J0lDLWVdQSxzKVphU0JBNFBqIjxmNjg/KjE9XFo2aDU7Wks+QWZubDxkTE5aLyY2UFFXOkJSQjRf
VDVhdWxyTidPZzM2SVgKJSVLV0hkLHIyLkJRcW0uMU9HamNXJzF0NGs8Qmw9PWJpSkkwaFxcJSpc
TUxpZV5pXitWPWwsMVxlalFAUSdmbEouRkE7NlUlaio5VCZQLWs3JUdUTU1POzxCS2ZrSTFYJG1u
dUEKJVAwSFgtYGs4YFYmMUlvXDpNUWktKGYqXmxxayZdTVRaUGhBT1pGSztfYjteZlczLTBKJUdC
Mj86VlwnW0pTYVdvJnMpLHM/MSlLI3BnbSokR2A0LCtxOTVgXUotdURTRkw8a1gKJSdjVyFrVGg1
VlwsL0w8KFE9biUwXzg5PjghVnAiTFMoUFRJUDV1PGpLKDwpSk1OMSI8IWhtXC9tVHEwX0UoQVAt
VXEvYGxtLTpaNyc4O0BtWU4tYiZUXnRLMGsmRWE4Yi1EWlYKJUhcI0VpMT45Z3FDbnNMVi1qL0dt
IVljYnMyUilvLEJVZEQpMlEiRSNybmEsaERHbkRIT3VUXmdNKV9cUCdvNygyXyVpSVo5THNkJ1hY
Wk5fJCNUL3NlIiUkLk5CJThNJGQoSFkKJVhtQEtFNGpzOGxbNDAndU5YSmYtVT05PiRLSFYpS1Zr
WzpkOU9VV2hwPSEvVVYyWCEsO1k0XXFIQ0s9Sz9rYFNmLkgwVyFiT2BOaVBgJGhQayMnaj1TMkRD
SDZHXlR0PEJvTzkKJVJlUDo6Kzg+KGksbzEsJmNfUEpcR3BGMkdkWD1rKj4mWTRDIkhEXC9ZQFRo
Q3A7VC1nV1VlPFE6KClAMGw4UDwkRWtZIjgpTU47VFpIMSldYUs4dXFnUm9zOiMySztGU11wUlcK
JUQ5ZiZVQy06J0cnNG06IVtHK20mMlRqZjNPZjFJJ2ArMF11TWtjUTleQi00VVx1MzpaVmtlajtT
ZSZnY1Y3KD9XMyFrbXJeLFY8J0NnRk4/UjQxbDpNYkphRCtWQjszMklyMjoKJVYjbE1gWD9UNVgh
Xi9XN01XS3RmYj9obzU4amFBRFJXVE5fYElfbDJRVCgzWVVXMHNiUW0iNm00OklydGZcXzRoNmZW
KWUpVDxLZTQpWnVRLToxckc3K05OZ1ppcCsoJ2JkbksKJVF1ZGRYSj8+ZUM6bXAncEQ6JDw1Mmwn
OFU4YUZpRk81PGxaInAydXFvXWZqKTo8NyhpL0VaQlE2LnFNPyFARXA5Mk9LQldCMmVAXUVoOTdl
SDY0XkkndVFXPUpkbm9FLlE8RikKJTdgY0kpRUJtdFJlJy5VIlQhKWl1QmsjLWo8dCJKb0NSZWpb
VSsvIlxDWW5hWlxNbihOUko5Ny8nYGVOZ0VjOFxWTDU9QmNPbkQoQjFGTHFfY1Y5RUsjYVMnbWw+
YWtsK2ZDWycKJThTMyRyV1RxTU0kbDA8JCZkKVBKaVpcKUM1Wy1UbSROQHRmUyZfaWRAZEREMSI1
OHIzQ3RDJ1BgXV1EMy1cbEg4P0pGJFMmYUxjSCk3OzRDZzUvU2ojKTpDISgtPSpIWmsxJEcKJVRf
W1MoSC9CbEVcJm5nRFs2JUJoZFFdJyUsWm5hRzMxUDssIzZnYktnNCI0TmFoO1YmcydfYD5gWmcw
MDEnOV5KTSZcJE9tI1BmTSwyOmtqQWtLUmpRSmVtJUpnRzlfZyZvbzEKJVEqMTdsJjJVXy4vVz4r
QSM+S0tqT2ZENGxVZksjKmo/NlgtInBGTmQpWS1qIWpDUV48WEZpLW8kOVdoVEsnWWxfbnNuOURX
bz5CdTYhNF0wLisrMypvXlMqa2Y0NFZiSk5kNTUKJSZsUDNpSFNQJVZncUU3N1EtUFhzXyNZKWw2
RVNII2FRKkxFZ2RGbmsrZFUkJCk5XmlZMWJBS3IrQGtvKG8vTVRWPnJHNjk/JF5aUmVrSypNTXFF
P01TOEZtbzNbLXBJUUhHX3QKJS5NJ0lGXlhBcCk7P0FLTDd0NVNMbVtvZDJwPD4/I1MjUmBkLldw
Mmo6STJrNW5zKD1NbVdmLyNEVUx0Z2dWWSwwPCpCU2JjXnFGbSs/MmNscFsqPSg1Vm9WbUloU0Ah
NkY3LTsKJV05bDpdWFAqLFclXltcZ1JcPmZQSlRhbzUwZ1p0KmU3NlEsZXBvRzJabHQqczJrWkol
VSVGT089U2s3KVBtIW9pKFRGJ2srOFtLXSVjVygqS2hILlYsW2NdVTIwLltiQSooUj4KJSEwdSIo
Nm46WCIjaipbM2VHMWpbN2RrLnNeQHJwNmY7TjJVKVFjRV9JR20zcVg7V0BMSlxRdWoxVkluR1Ai
YCsnZFNVZy05RlJwLlpJVDE3Ym1COVs9aUROKE48SFA/Jm8xRGwKJWNMYD45X1EyZFg7MykjcFxN
XTZpRVlGLV8qZSRySUkmbHE3bV9lW2lsbilUa19zSTI8TUJRIickXzRwUHBfZVdOVkpjWDRFNFE5
WUlQLXVqKEVtVWxdPm9YZj9ycD45UjFrXzwKJSdhKVlvP2dRZDdCQ2w6OVw5QUwkXmU8P2FoRk8x
NzZiQGBgOnUsJkBYP09PQWJDXTlEIk8tXjklVFdbYkwmRC1wMUM7RDY2MHFjQ2J0bCZWLXRUOysy
O15VUl47MEBnJFZoQDkKJSNzXSdHSUwyb0VRaT02YCdxLkxyZnBQcVZda0I2Zz1SLzxpcUg3WFND
YkVyNEcla1VOSjxFWT8qR1clMy9dUyxrJSwiZCIiIywuTjQ9bVNPPjczaGVXS0lSYjFzdGwtVVpX
JSEKJSJKR1FiW3NeJy5AVC90Wy1KOG1tYTVZWXI5QUVPQ0wyXGkwZ2QnO2pTSllRPlNMZFkhcFxX
PCdsaVhcWDdTWDlHSE9oK2tEVSU6PTBsRz8tbi1FTFUrPTUiQ2s+OzxgW01mJlkKJUs7UDgtNUVE
NEFWVExdKkNqT3UvOUtIWztaP19MQjFrbEEmV1BVbFtPVEgoT0xjJlw4VE44RSonIS0nc2pqRXJS
SDBjJmJKaUg0clUoK0ZTTCdUPCRUI00iR1AjLHM4X2gzLjIKJW1hcFopVS1YMkcqKmxQSjBmRyp0
V3BQYFIsYyJbJig/OyxbO2AuI0osJWRkL00qVi5qSDYwT0ElOmQpZiNmZChiNFM1WFxwNk9DWDM6
WSE0NEpkTm5nQ2ZKQ0ExN11XblNYOl4KJVVxTW1USDdFcVdNZSRhbDhtaUtyM1tCbm5CMl5laydp
Uy5zPFs0YGBqL08/alZicCo5WV1tWUhYaipePU1GdSU2TFlHNFciQStbV0tqPyksYVY9XkA/RzQ2
ai9gTGNZZ1lfVTMKJSw7JGp0KDhZRy4xQ2tGNCdjSFllOUojKUlJWGhEPSwxZUclZXUrK14nOVtM
cjkuJV5QNjdwZysvITEoJysnTFc4Oi5bQ3A8UlIsOTQuS0VDUF45Mi0uVFNta0dOKiR1SG80LSkK
JVJGLmJfLSMrIXFKTWA3dU1uWl0/ZkFFOkRjPFYrNmE6akhxNzhkciwoTColNCFjOkk4bmdxIXU7
ZzpLXltNLWpAIyVcRyRmKSFMZTE/YUVvPD1LanUuYTskLipRLDsncGpOVmgKJV88VkprTzolOXM+
MFJkSEs+U0NVRVMrRlNQK1VETTtvNzQsQ0VTPkAyKWQ2THFpa2RcQkwqXSErP2ElPXBZWixBP2l1
U1cuPUdSa09xXF5jLGVYKC8jWiEiMz4hI05BbDA0XGwKJTtVNCZZKyYrUys6YGchTU11YSY4N3BY
TlQmS1JMKSJYZEtfPVJRKiU+aTVeMUhaJVQ4KCZHPCgmQHMpKShBZl0/cm1uOz08WzVOciU5b10k
WSJ1cVJlR0lhdWwiWCcsLydOOFcKJVsqJFMrMmVTbitxYF8tPSE6JS1TLS0xNWs7Yk0ub0A4bTxL
U3RQXl06Ty5GW2Fva0Z1VHQ1KTdfOWJncF8+Vz40OnRXJDYkTjFoLCR0RSxVJnVDVCI2aWFfaVpO
PFs2bDlhIVkKJUwxLFdhK0tLYm8nPzhhVjxJSk1eNGhKaTJnPVRlPE4nU1suW2UvRjY9YC1SZCdb
cFc8KDxqQzM4bW5KMixBb2ZkXyJzQC00PmUrWVRzXU9XNDwlPTBBZCU2MUlBNDdWVjZlaT0KJVM5
TWRCMkZpaTpCNGU6NGI6NipSajZTZTI3JyIkK0dCK0kzcyMuNE9uY3FvRXIzSCxwO1s2JFoxMmlr
VDFgPnBTRjBcP3VoOkRKQCItaC9xLio9N24zKzBpYyVSYCg/UzdJPzEKJSIjVm5XT3FqM2EsSGhx
JSZwUDF0T1lmNS0tSGtnWDtZWSRETzAoUV86PG9xdSchMz9IT0VeKi5YREM+JSo5blVtLCJbXSkq
TjtAOkRvP1wpW0tuaWZsbCZIXzRbXk5kOk5XNTcKJUBSLTppbzspSCUhJzJiQU5eMTklaXRvKiFj
LCoyMkJBdC41RChtKCNLQDJkKEdSdDAtS2tFZCMnXnUtQl47OyVIYSUyKl84UDBkQz1HSllILitQ
PS1xR0pOQTteZGpfNEkwdEUKJVdEbkthSykxQ3JeMCRBaGtzQXFyQ0InQ3QpT1tta0ZIZ1E1bkow
XScmRzk/Pi5McjNZJjJwNUIqUThZJzJCNz8+LDhKJS1BLzNdYi1ES0tfKCExKm5EbT1LRy9JdC4/
M0M2VkwKJV01RDhtPT5jYSYyX0wtJTMoOlsoKzskJzVELlkxbTFRVjJRMSksa0ZyW0MsNltxZW1a
UXJUK004KjZBYC5eJFd1UCVIUSlLTT1NLSVtLFJFYy9XPyxjcFNeVXAqQG5kJWtvPGwKJT4kV0ln
ZnBAWDVIbHVzW1Z0SmNrRnFmXVJMTzRlQi5fKjxuSlJMMzVWNGA+clM8RUtAKldSI1FiXU5CVmFz
N3QlTHInZWFLPGNvTW0iT1ksZF1PaDZNV3AwJ0pAZERaWSpGcGIKJW9WcUhEVWc0JUszbSYkXzJu
Zy5IU0JQYW1UOHE1JUwmITtsS2MoOmRFXFxhS0U1N0lwInE/PGspX0FZKUUya1t0LUk8XGpGYGIy
Mz5UTW8mN2E7JEM0XzEpcWVxTmU3XEtFdUAKJSVNIToubT9HZSNbdWBoTmMtX1lIRU4kK18qXXFO
PUspQkBdP3RZKlg/QlRGVitTbldzO3NEZF5sW0RqMiNvb2xXbnUlOGNfKlZVLjkkPiRWKWk8cUcw
XG4zcFlXYV1ROERPYSkKJXExWlxSLEwlcCVKUm9gLi5UZEA8amE0KDNaLC85aVBfVnM9P19gc1dY
YzUjPjs3WkAwMWBMIm1WTW1UV2s5KztmP2pkMDxRI21vSygqXTZzO0BEKWdPOzcoaTNIRCg6LGM4
ViwKJSYyRlRPJWpIKSxYRGM7K0NGUztOY3UjbSJHUXJKKGZvY25TMGtxPltBISdrPElcRnJyK3JC
QjxTP0NYcCdJa1E6KUsnO19HaE00UWpxXWtlJ1NzKGhPZF9hV2NtXDszWURPcVoKJS9ONio5OFNY
Iy9nZ2RfWC0iWzgvY1A4a1U6RyhyVUZrUGc6LFxLXXNcOUVYOTxlYFVGX21CYzArZ1ozJyhadCdb
W1BOdCg7VCxEblxPUC9vZTViWmo/dFM4WWFidD4jci89Y2kKJVI4NlViZiM5I1VFLzxTLGY4NiIr
N09ZPy0rOlAuKEpEXj1jcUIvZ1FfQSNEaGpQQixCPUwrNWtYMihLaFVkMDhBVWgsJ1BSZmI0P2cm
PEI9Qk0zV1JDSTx1Mi1vaDc2W1tWQGoKJSpqPD8zNEBKMmVJX0VqIyxaaGtNSUtWMmgiPD07SURi
J1UqUW0xPGM6c2NWTjw2V0hFN1o7b0NhXFgpQkFpal9AOSVub0Vwc1FHaz1fR2w/ND9iZF1KT09l
Y1xnV0FQaEhrYT0KJSVbdXBEYSE5X0ssLVsjR00xa0thbmVGXWRFKDk9Mlk5SWY8RCRSTVNHKVto
Yy45Y0hmIitcY0RRRGBhMlVdSk9mMkVcTyZMP2M9bC9nbiFFcXBXQyYqQUJHNFw5LTM+YXN0OGwK
JUVdQ0NzPCVSV1cwPSczQEVjblknQTYrS0pIJDEkTE87ZyI9RmMycUBSPzliR3AmdGI8TmpDQW1V
XmVVcDI/cHJOJDNSRmEzczRMUkVySm8jXXVHPVxWV2tYNSZkdU9bS09cX2wKJSliMUEsb09xYFpu
Vm1Bc0hodXVlO0wtSTslPXBgcT0qSk9RQVhCXU9eKXF1KldHMG1JNmtRYG9Yay0tQiYjbHUtVjNl
Y0gtTHRdY05FU2RBJHJCVChLYV5yUSE1PXFHUSJHLkgKJUdVZVIzcWFuMmU1Tz8uLGdAQVZvaSZc
ZylDVUE7Ly9rPmZYOyJ1ayxGKktuPCNEYEItUiw+alxDY14yX0w8Imonb25nZSNPZUg/aF5JK2o6
bSp1UF1CKiktOlA8dU1FW3BrVlgKJUVUVzkxX188Oi9pIXE1KEJVU1tPNWE7cVgxUi1eXVMyL25z
a0ArS29gWltvakgzOz5HbSpbKkssNDhGXEkpaVtnI1ZhMDZwbCw7L0BpLidwOSJdbGMtY2ExIj9T
N0VYX3IiWjIKJUlSS1NJZT07YHFoQVA0LktLPmA7Uz0vczRYV0YsUVpmdTZHVXRKWyU0ZVFER1c+
JmNeTGMrNUFARWYmMCUvMidTaStaU29jJmJnTnI6KDdVYk01by8uald0JiM4Xz1aYzsqTS4KJStW
aCY+N2NPRDRjQnUnbzNBPilBQyY3biRFRUwiSzJzOltwYWZKJStjSUE4WXJCLWpiUGhdKFhTNT1f
Yl8qKCRTI3QoKmlZMz8nbFQhWl00Q25iJWRtKzViTFp0JXNpJmgwO1YKJWsjIidTZnBQPVlOPyw+
Mk8iKUtwM089T2JsMGAwczRhSiRkN25iKEYmTiFRTGB0VVQpWEkpOCw1YWpLVyY0XipuKmInOTko
NlJ0T2JeK3FZOGB0ZFVNXixaOGFdPnFSW1RmYEoKJSYpc3VuQyg2KlNpUyhGNG5JIWkuVEZkZiNe
ZD4xPzQ0cTxNcnUvPUBrVD4lOzBQKEBeI0heMTE1b11TaC9qMl9LIjhAUj0jXkpINXAvYVhbRlxt
UVRtZjMjZzAka110WDw1TSoKJTtQcikzIXFCQCNPQmFhIVAoNWIuLGloVkM5bUZHMlNNZjJAQl8r
RHFeU1s5Sy9wKE0iW2FuYnByP3NJdSlGa0JHLzdSQ1BZc1Y2cEhhIVgwQjhTUSw8YDMyOkZbQENB
MD02KzUKJSw6M0BwPVNpJ2c6XUtwRkora21Pbk9yZTRxIl5tNUtoIyhXMDBrYUI8OkNGMXA2U1Mq
QFImXHJlWidDKTNbRjhMJWwrNG8wRHNiVG1EPTRIcDs+Qk0sdDFfLltLVSQ0byZJZl0KJVRaQDJq
WEBobVtWRCVbbjciX0lMIV88UCdlaGUxISRwTFNwX0YiODVEQ1dAO28kLDtTRk46YV1FWVtWOj9l
U1BsIUJNP1QrTXNpRChjKmNzcTIiWGBWWCVgM2hFOXNZZDBPay4KJTVKIS83NDtSdSdTODRWTy9h
R1FJKmhValJtPSphL1ZWXVk+R1YqPHRFRis6MFImUChQKkxqZE8/L0wzWDUxbV9VXTpSY1dLYEZn
N2s0NjhIRzBVMVhjLFwqY15hZEcmcCQ6QEkKJXBOXzMoMy0nRShWWWdDWSg1NyFvbFtBKjBtWCFk
TG9CUF9xPGtBVUNaLmlnITlzLnVER0puMmopYFAwJzpKWkQ+cmxNV2JmQk0mPS1BWTElY1ZAaStc
RFI4O24/b2ImIU0mbDQKJUgvNk1ZQHF0KWZuYkc1I0osQT5AWFkjXFsoIk1iUCN0JjlpTTVdOSVz
MCM9bDVPJWhnbkU5YzxrcDBdY2l1IjpXNTBjbUdrcUxFMFQ9Ni8qZjNHZC5Ybi1GOSJucnJVKHU8
JyYKJW1NOjZFXDZXPjtEUlxiX1wmaTo8XS4xQztoN0NiZDc8QGw4K3UpTy9EKXIoOV9nLl5dLEU4
JWA/ZEZpJVwqOVBxaiNXNSxWVVYzLzlAMHMzW3IwWTxtYDRxN0FOXTk5IlsqbWMKJTRqJVYmWyVG
NTpuOU9cTWJZJmZKV2MtQDtySGBzU0JRbC9WZ25tR14pckxQZFZrTWktKSRBSmlnOEU5XkQqI1VT
WGdfYFFJaUExVF90M0hiUWw+ZEdJKGtYIkNIRENdaDE4Nz4KJSNVWDFNXD4nKkVgckwsSlFNNXMs
S2ZFRzNbcSFQQ3EtJ0NcKCxhSGMsJi1Rb01hWVY+bUZdZWFkLGd0J0YsbE5gbz5JXWVUJ2pYKSdM
RCwtRD1Nb0tGdEVOXU1cN0ZlQkIxNUkKJSc1Ui1cMy8hKiJXKzdgN2Z0Jz5TcWVsO3FRcykwIT4v
NEsibVNFTWBVWWBeIztFcEc+UTBVT1c3XSZeUnJIOk46NTkzV21ncmdIPXE8aUonb24uXkFxclE5
V2snPj51ZlNHO1IKJW9haTppTy4xS0NMZzd1YGNoIlI9VGdcYVdIRmdKL0xnOCEzTEo8NTRhUigt
a0R0Kio6RVpHTkpMQU5ZVTU0QD9aYW1XSlBfYElbY2xnOjgnSGU3WGpnLFg/Vk5wVCxRcyxRQkkK
JVxGL1BLKSMqNGk+L1tJWmNXS1ZocWMiaSFuKzI+WEU6V2ckTFFGQklROFVhNVRDMkA7IzxkRDxh
ajIxJkotW0BZIlJwITJHPz9GMiFHRWo0TkBfLWA7ajowSF02PSEhUzY0XCkKJVtlQT11PlZjXztb
Uz51TzlWRzxTM0BQX0hCZSs8JFZWay8mXCkwRVc7KSFQQmEmKF8jUC4iczhqQm89MlApUUcnMGJG
JSVOWTBxX2ExQSslWlNYUEVhX2NSb1dzSiZ1Pk48YmwKJT50bTMvOztcSURvcl8zWjErPllYWzc0
ZyhBJ3FedVhgQyVBT2tJXS5gO15DJDd0YFNbXHIqdS9AOSg+OUMsTF4sXzVBW09jQTxNZ2JTclwy
WUlwSCledE9IZT1FL2tSInNHTWQKJSktQ2RiMSdmL3Nxc0lHajV1IV8xRS4vQV1AKi47ZmE6PXBG
RShGQ2VUXD5nYzRTQjZiVlY9cWorbWFzMz5pby9vRlpBTV1iNHNKXycnQi47cyNBU2FhQkcpK2xI
clA/KE9CYCgKJS8uWCgoRGBqQkFNUzVyPiFhWiFbRlkqcE5WK0w0NSg7TnQ9Kyk9WywnI3UjX1U+
PkkvcWdlR18iKEdxLytAMWk4M0dQc1BAJkElXjksQ246VThpNlpRbkxdbFRQZEIqPGAnMiwKJVpO
LDRcY0BBMHRfQDQtZkxnIzFiWFNjM0Qza1FhdWtoc2pAW3VaQzUpTFhrMzRXXChaQXUrIWBfKk5k
Kj5jKUpRa2U/TCgqb21hcnFfMkZrXlhuSV5yO1JjWUE8PF4sMFJIUnEKJSwoY2NATWE/cFZQMjt1
TyUnb2NcMyhIck80Y1BKSSdySSdhOzpkZSwtXGlTYyFTQSlPInJrPy81a1g/JklhaDJOIVc1Szs/
ST1wKCZCJWEtb1xKJVcjSS1JVy81TmdIOD9FSm4KJT9GY2hNPVcrMC1RbihXMkBfamJJMEozc09L
YnRMLiRpOU0nNDBRS2pNKVlUVCMrazs5OWslb11SIzVCQjhpbCRuKihlUi9BTGlwb1NrYS9BKi01
LVM4amRuOj5haS5NXyx1OmwKJSUjWUpcXG1nNWlpKEhRODQxX3BZRCNlKmYoPmZCQkhlakBaaG1K
KEMvUkdmS1A0IyxJQVlwPTxqbGA5UV8tMmdrISRrNzxoaVFvKkIrWkRAXW1GKnI2VSpealcxKXA7
ZkUlcjwKJUBxQlp0IU5nQDsqJVdZUCdpPT9XKCJxWmhRc0NxKEAlSW0hInVtcWExKzhNTFYqbWYk
Wi8mQXQ7SWdJYkdnLD4tTkQ/UnMvZywpWj1obi8oO0NkJVhjPi42PGckaThdKzJkc18KJTd0WlJr
UUorI19PMmYyJktacFpKTj4nSTEybmozS0o5JDBAKE07dUQzLTJMR0BYVCRHRjcjUSgiTlYlI2Yo
M2wlQSpJPVpbclZLczsmLWM0UkkvcyJjayJzczl0OlVDInUnOV4KJVVZc01PPlswbF8xKGAwNE1t
PktKWlRxL1c+SHFDaDZUKlVMTjslOUc+cCI4OEkuMVNcU3MnVyQiVjZJSSpcVUsjTS42c1xWdXJr
dE5rO29qRzUzUihOKVskTUBfbWQwaEY1YDEKJW9iVzhAREhfOyMwRj8/alhVOi4lKUpLSiExY1sn
Wk5nNkNIXiledUchXnRjOCNtWWVEaTl0JS1BdTkvUVEzaDFsV1xeaFxoKnNTRUJgTnFKJj5faGJG
a2UoV2MtUUlXSklQbEkKJT1rbFMmYiUkKSIrJlg8I1I5RSIqJyI/NWlOOGZ1KkpbVGtFai5ccidO
cmghOygoS2kjOEYmWXFub2wlVi1wVjxqQ1FmP2JaXl42SU0qKSRFLE4pNz40V19EOzZQJ15IZlJR
PkAKJVUnLzBMcWRBKy9DaSEtQk1JVVRdWVRhWVFQWjI1JjMwcSdRSEU4ZitgL3BKLC9FaT4oUGNx
KFshISc5QUdRdTFxWlhtQkdUWl9vdSM3dT0+UkdNUHUnXTpAVWhKNWgqbydEPCwKJW9VY3VKNk0o
MmVKJ1ZOdWQoM1I0MT9uW007XidgcU5nMyZlJiIqZXVTNV4mc186N2dNPmwqbWIhSE4wIilqM1xE
N1BMUXEjMiFbMlQxZlVobnFuYHAncUBhbD9tLWhmUGA7QjAKJSZHPEZoZjNHSTsyR3RINi0hVl5b
ITVTWShuLnUqOVBnTkgxVUYkaTdtV2NURWc2TidZZSUrYiNWaUhocW1lQmJVcEVYOT1tVVkrKjBJ
KWQnQDo2a3BFJTc0TCxaTVh0bE9ydUoKJVlLbTklVyJTR3NGZjJtbHI3UW8qOzs1dUVIUVB0NmlZ
WF41KSFGbWUkL2FrMjpgcF5wIioiYi1QJGltJ1VaTDZHR0NGQy0pPmAuJlNyP09oKm9JbE8zY0le
bFtiLEdqLEg2U1gKJTJraWFdcCJMc2lOKztkZkNcW2QuU01VImgzZW9oM2gzPy1mQlVwYE1pU2JC
RV0lRV1yYT1RW1k1KidlPDZkJCQyZCdfYCsuNS03VkNkOWtOJ1wjakhOLWNhZThAWWcmKUNSYyQK
JSNrKj8nIXAjU1VMPnVoSj8yRyVPajgwZmBZQ1kiKlttMyhoOU49W2ZEVWxwViJEUVM4YyJKJE40
bk9ULDJNUjlcZXVRWCw3aVlFbGVebD5gZlZgUC03Vz5WczlFQDQ7SVhpIU0KJWA3dTdwS1RvcWdC
ZiZHUl0pNj1KbFRDQzEpRiooaipsaG1eS2dMOzRuWkQmMCFHayJlbHRJaGdIZiNqLCtvI0opSyJV
JjAmLTUqLjBVI2IhPnRkQitoX20rNU9FLVU3Ly4/XU0KJVEtOTlyPz91ZiRlRm1TN2AucjdrLSRt
XD0/P1RnW0ZWSEZVOWxuSTBLa2diblRDRENHP3UuZzxMKFMiMSJJPWwkKlxBciROYW48UGxsOXUl
RnE6XTJoKEEnQl9XV3QnJ2AibnIKJWJOX01PQGhdcmJnM0c8b1FhaSlvVTNbNFlKRGNPRVZEcmA4
ZT9LT0VMJHU9Nj0/X0NQTT1ISDs9QkZZcEVcMEZzbi8lKElVJSQ9VUUrc0xUVmtmTWxUdF5kISdl
RnJpISxtJSUKJU4hcF0rbD1IOTg/c3VAYFY7IiNLbiNoX1NYZSNdUlBKLF5bVk1TTHQpQD4wSWhA
MEhAP0A4IWouOGtHQ1IxL1FsQjxTPE0hdSRLKjRWJiEyVFIyaFhMKEQ5UGxNNSE3YVxOO2QKJVV0
QV4zJFY7TDE6X2pGKm8uMFFOVmhdVS1gcl89ZFo+SV40SmlDKy9fRFxLaThiOiRFOW9EWVQmWkpI
XFdJZCotQ0pyYSFGUiQ8NSZANnA8THJJQFtrVVtWM2kyP2RJNUJaLisKJS8sSHJZJD5fYjooa2Ys
dDEpZDJDWWtrajJsW1Y6Z1FYYl01a1pXX2UqJm0jaTc8I3VUVWBjbVVqRHI0WHEkYklAN2RTYDlT
Vi9nZ1V1Ii4xPS5Ja3NFVSlLOFQpTzU9SF5BIzIKJVAzImFvJGdKZ1MsIVsnTiZIO0E/OGRQczxP
XHVQZ1ImcmJEJjsyNVJaJTh0bGZPUDxlcFYjMXI0P1c8dEgkQ2pLLTNBSTJoPGJxKSpwMlZJayU7
ZiZUbjpqKi5nUUdmWDAwM0MKJVZfZisyNzozKGFgUi9qcnAqPVMuKlVDPDA+VHUiSWg9VjB1W3Is
XUFfPGkrXUlFKkRgZkN0VHM0MGlGWWo1ViUuUUYjPW89XVYhI1c/XysyVkw9WUJvNiM+SjJxbys2
Z0tucyMKJTNHUjEnUUYrWl9dVG9xZkkmMk0rZl8hdGwwI18tRy8pQmQtWS4rYUhsLEwoL11vWixw
W3JNXT1WVE4rRENUNmxybCswamlIcTBJSjJsXHVYK3FAVkVuIzE4UjghVnUtR0A2QmUKJTs8QEQv
LmR1WU5XSCo/XGIkLUVHO3MiIj0uXEgxLm8oZTpxW2w6PGxEYkRpYkgrbS87XVcmQT9xNm40T2hU
akFWPmNLcERXLGdTUkQ2LGBpbiM1ZilVV1Y/PmZNb0RyOzw7KFAKJTAzKW9ZVzpLYVJQckhEUTha
T3NZa3VQN1EhTyw3I1NdMDI7SmUlJEVkVTduSy8zTl5eK0ssLjNxbypeMD1PSXNmMmNZXyMmLydp
cyU2SithOmBAclpsViEzS0YkK1hwSEBgSE8KJSlRV2tsOWw2Iy9uVEBLKVJickRqZ0Q0NWFjLHFz
Yk9jSC4zJUgiPyheW250Oi9RLSMpbV5aV2Q9QWhsSkZpY0RwWD5kUztiM0hMdF1eQF9lbF5bW2NX
SFQ7Pm1vWmpzayRdOCgKJT8pWkY+ZnA8KHVJVjY4R2UqXTYtYVkuMjloNjZRcWg+MCVcJDl1V0g3
MC9fM1YqZUQiJHJqZWhlUjZFTzVZZ1VAKUUzcS9fLC0yU0U3XlMtZkBxY0VIIUUuWygpNEUhQ2gk
JUQKJUlwMylqJUxbRlY7Uk49ZlUxMU11LGMsdEcmI0NLJGo+QFNdQnFFV2hnPDVaSzIqMnRFW1Y9
N2VOY29WXmtjV2FsWmRXZ0dGJytWWk49UUlYRF0yZl1hayhbLD1oJ0piQ0xibjUKJTs8aTs9RVJY
QClSYlspNEc0LXJgMCJHNHNJQkUhMyMxTUpBQ002byc/MVBVPEhIaktRXldXP0dDW0VISV5iZFVX
aDRoMUQqamlVLzw9TSpxZ0AhUSU+TElyZFZOWFpkaGhnWS4KJV9eNTZET2dWKUdPNjQpQVluJVNe
MTFsMWshPTw1KnEqbFdOTVw2KCJYVjQ9IlQzLlBDNmQ/ZU4sdSdWIUU+bnMicU1PXkY4aEUtViIt
WF44WlNlXWNqV29ZL2VNdUtoJVcsT2oKJTtjSHQuWWFdKlNGJkpXaS1mXT1sTzgkR1A1ZTBVbzxK
J2BMLjlvODpKXURrSlpYJE5gVlkjWWJnVCJ0TF4yIkRGSGZDVTdkRShUYFN0Sm4xLDIrYm5WJGpr
LytNMjghO15jdSEKJV07bGYwKzRXaTIjb2lmZiVUQUtvXTI0ckw6Jm45bFhGNCM9KEtVPEAyJ1sv
MVg6JmgyIWc/LjQ1VS1AbnFuL1ZTSzdPUDAobk42SlFQPSJEbFYjR2BVW1kxVW1jO25cUm81O1cK
JSJdVW4mX08wcmosV05WUSE3LT5OKDFRLkUobjNlbFlXaHVzbSkjLzszWE8iZjFGbTFVJGRORj1N
Yj1WcF9DYmxGNipVQSdpQSJRJjBuYjosOD8uJl0jKyw6XC5xcydsXydSczYKJS4tOjRcUmtKQ0pW
dHImVUtsK0U7XSQsREo8aWNvY3I9QlVXLURfMywsS0g3QywxTCkmS2laL3I/KVMuNGQ1VF1mSi02
K2lFITIuVyQnbVRmO3BGNTQ3M3BqJGdcZyciVGlONy8KJURyP3VgcWQ6SFViWDohMC88XUFaSi8u
J1E9PDFCNVhJb3IxJFIlXUUjPCNqcWNWJzNDVGBzKU8za0txYEUjZjU2UDZjPDE2PzxvbiUnIzhN
USlYRFssLFQhNEonSmE5WC0iYD4KJTJdNjZsU0VwW1wvVVNydHJpbjR0bHJTdDRFWWwmQEB1LShw
Mjc2bEdCQ0U3XGtaIlRYM2Qkb0ErTyNLVD9yYWVhLGBwWVQ+REI0IlEnIUZpITM+dVdGcT5ZMy1t
OE1iOlReMjEKJTVAT0lGSjAjc0xKPTU9LSklW1huSFJzJjxJVy87bj1QU0NnV2JCWUhNQVs6PS5b
[base64-encoded attachment data omitted]
JVdTVEkxJGsvWlAuNGFSLlFVMDoxZi9CWl4odCZaSzpZQ1xdSj1BWWxEVkVjJ0o+RFE9VCpybjhM
WCNwZjUwWCI9XFAtIy9BWiRscWtISl42SXVBbithWTpcTS0hYG0pUU48REoKJV04Zz1jNTsrPlFL
SSVMVlxyYlY8N0lmK0cjIXQsLXJIUnExbmZmJTM0aG1bbVFUZ2B0RDtxWm9gNWIlPVRIOUUjQ1kx
aUNTKiNJZiFgSSRkVEJHTDlBZF9fKEJOS3FYQSYzSE0KJWZrVWdjJHRndXNpRyNDLDRlbXUoblAy
P2xUU2olS2dNKCxTO15KNi5HbG0xIk43S3JLQS0iW2ArOU9fbj1cc2UiajVAcmMrVVU9OCJySUEm
Nm8sY0xKZlg0X005TkMiRS0rSSYKJTwrbixyY25QMzdWQyglVTInRTs9O2FXOGNWbk9ZalQmIT4o
LWZDOkMiWnQ3TkxbNDguYHMzSzRYSEguKzlVby5GRSlXIV5fJWM1ZmoxUCpjQFdESiZCVUYhXDhf
ciNubSIxJTEKJTZBLmRFUT5cU0xkPjtBdU9IQTpOYycqPVQ9YTBMODtvQV1YYVI9R0Q2WjZ0J0FS
TEJsWUBdK0ZAJFFMRlw2Llc5cytwWSxOYHEoPDJiZ3BTJ2pTdU4oQyo5aDoyZGVdPlBRRCgKJU9O
MSdoKU4zX3VyMWAhdENtckNzRkVabGpQPVRSLiMvZVdgX2MjWz9VQ1ZeX0g5LytSMDduY0VsOkww
Qi0taHN1Z188OFVPIXNDIilaZEVvZ0s1VF9uMmhXWDNuPmkvYDZFQEYKJSQlODgyKEQkJyRsQlVP
dFk8dHA4Kk8rXDBtX1tXZytqNEt0cWtlV2IlaGFjWyVzSTZ1cSZEVzhUcWRIVVMySGNKbEk2bDdN
OF4mNENTOjJacStbO1tUI14laiova286UGklOWIKJW0kVVNra19wJllcaTQucCckKTVacWA6L1El
RltxOTI5UnJMWm1xOS1lVzs5alBFaUVNKUMzTS4pVnIrTz1FdDY+TjVqQkJgLHQqMjpcOypVIURQ
QTlFVTxBVWJVKz8xNDQpJToKJVU1OXNlLk9AUzJVJV0mUm0qa0c5Vi5zJmM+JCNFNV8xLk4yUjRC
US8/ayR0UjNkRERPPypwQCE+SFRtLmY/SVZgRywlZTpWMlhgOjZzZ1FbZVQiJk5NRWlENVgvXkM/
YSUtP04KJV5GbixQM1UjQmFEXl9cMFNZbD1PbFZUPGhDYHMzblRKbW8hOD89YV42ZXBickwiLi0j
aWlHQGpAbz8rJUdnVGZyUEVSRVw7KGxvTVRSN0NYR1xkdDA+KGUnIVw7NF1tPWVmVCMKJUFHSDRa
SVRfPTRibWUuNi1RV09WLUtdTkBdMWxuVldoNW1UKUE+RChDQjx1NyI7VysjVGw7UjMhTSVlb0RH
JWJlQFldTmlFSGxdQk1wSnNQNjAqZkMhO0cnN28sbFNaNSUnJVcKJW5uc0w6akNyPnVHcyFbPzBE
ajtDTkNDMXMmWUhzZVlJX2RcUC1dKzM0bVBaQWozWkZUMFtNZThsVzEyXChqZCFzP0suNUBjL01l
K0tiRVghUiQsWXQoYVM0XW4uV3BqLy1gOksKJTBaJWcobVU/XU5ST1dHOltAY3VhP0huZmpbQ2s/
dSJuXyw/JzJlSEcxO1h0P0VbQFw9ajR0TyZbY2swUiVyXE49OWhvayc+dSxjXG05dVcjYj9FZ1Uw
KUxDcEZXcSI8OUZCJWwKJStcPTA3YUc1P3Q4SElIVDB0YHBNajtTUnBITC0waFkiKm81YCh1aXFo
L1wkTz5xXj9FImpNTmUxJm9FTShJWCMkXlJTMHVIbEZqY1JqaC08TzBKW19VMG5cTU0nXjteLCFQ
KzAKJSFhSVRfby9wQ2dSbFMtaUcoTDFeVyZJLCIiSkJqKWtsVjc/aFMuQUZrLD9DcU1mNWp0QFc1
dC4rLEc9QG9zNSRDKW8hNypGU2NKX0RFKy1laT0rPD0jXGooJC81NyRdVjssWygKJSFNMSNGKUIp
IWY0cUgtYlkoO0M6REFzMmhCPjk7JS1zRjA+LiheNSg2XVdMQColXVRPUy80JjdQRT1dRCRyJi5k
bE40XjdbZV4uYCtJXikoamVcUFtvJU0rW1xFVE5ybC4/LDQKJU1OOFRBWXJNSSZvdDBPcyo1PHIp
LTwoK28kIl1ERldPbGFYVixyMiJdLyEqSFlAUik5JlRHYFxUczlEX01ULFduaCdgJDA8VDVAX0w+
MnVbKC9oZU4+Y15VKGlbPTpYPTk4TyMKJSEiL2g+JFk6Pi8+P2VbKXFOUFdMXV1LQj47N2JgS1Jz
NW4jPnFsTyM0YEZCMXA1bGRhNGxQOTYwL0d0NC5XLEFFZkhFUmEyUz1fdCs1YmhoIT5wcjxnSmZf
TkFEKU9KQVlUJF8KJWhJOCZacmQwdGJcWmtNbTdFVj9vMEdAUD4yQS5YYk5DTGxAXmlbUGRUcG1o
Q1lSNE5aLyQpUUtpb0Y1aDAwMVonSUIqNU1HLW8+ZWZtVCokN01AWiJBKVohJFZMVFwqaVhPOSgK
JVdfK2c/bmZsZkNwIlJhRXBWLV9PKFJaUHNSKWtlciNBZEY9TFtEdCozWi5ETSxLSTBROyYscjQ9
Xy9BZFVBWlEuZ1A4bClTQDw0M3FOSD4yV1czLnIua2pwXF5GYmsiTExQOEoKJWdnTUg8NTNaSjMs
a11RYVYkXVVKZ21HcScoXElzMUAyQHAvb0c5TnIoJiQ9LG0vTyQ9IiNwPWJKMTlwaytGTV1zYDo/
YElUcEtYUGhkISlgMipraWdOQ3VEZ0VFSnJGbTphX2cKJVhQWyRSNk1tQypZbGouIlNPdCRBJyZo
byw9JG9tSiNCTjhtMiYlTFhtOSM7Kj48R1I/N0UnLTNNRyNsKV5qUnRRZWNWT0dmaixaIjNnZEpX
QEw7QGFMV01uRSFfRydlWyZhYV8KJVtCWl8yOmVSUSNsZk9kRV5QYDBdcXFsQDw8SGM+U1snRTk2
QHBGcTIpbGxAUjkvLjwhaWNAaytXX2I9KGdkRXNdQGxNUWo4TGJnUyUtNiFsOVpAKmEuVzFHYi1Z
OGBGUyhbPFAKJVBEbUFmRSlIYmBoIjBYOzFvJipxSUJCZDVBVFpgLDxfY0hGYGUqQElkLzFBQjVp
bidpayx1NlQmUmE/Ui9KakdgUyxJZWU+REVANUdGVSM/Oj1uQW1QbSZZU2VvVDBeMFZ0XFAKJVo4
bGpDayg/OT1OUWpuZ1piTmlMQ1kvTlooJ2o4P1xjLFhial9dVlcpaWtFMDMlZDJyMixEcmdMXl83
SFgyZFE1SUU3NGBwYHNjaVtwS2RDR21BJ1shTHFbTWY8PUdGWShtcHQKJUlmJVgnZTg1MF86cFpY
azplWEk/PmtlIV01QlAvXz5uXW1xR3IqMmlJOltiMzlMNkBPaDdpLEIrKzdDYTZIOkNHT04rLGE3
Xi5PQkJLSWVaT21FRGtQMD5pZyNzVWVDRU5Fc04KJWdDZ05xQ0whOEZwXENbPmY9VUhDbWtVV2w0
JDdyP1d0PjwjKUxVR0U4RFI8Zj9SdE1JVjEhSiZmWi4lZUsmYk42aCo0UjcwL3IzWS9WKi85PSQt
OmgnRE5Oc3JERStkTDYyOVwKJTZDWF4nLD8iWGxIPnFDMDInUmcyblpYSWBmVygjQzRTMEVaNmMx
Sz8+U05AWUVwSj9yIk9aNmtvKi1xNjMmZ25AXnA0QGckJjYkK2d1JVM+aSFvKltdKzE3WCNhUjFV
RDEzYTsKJSZHLV8hXjArYT1KSD8kNTkoZyIqZEtVWG88PT5TVSdLVDAvSkkyNlRZOVhzMzoyK0pl
MkM/JUEwSW0ySFJsayglZU8/O3RQNVQ6O2c9O0gkMyEkZ1U1Yl9oXzFYajQxTWpfZkwKJV41S0JL
Smllbz8hQU9Wcj8rVzAxNURlQy0jWkZXdSUiIVJca0dRW1FoIj5YSW86YW88RW9FXVBVOFZsTzhM
PjxyNEgvIXFRKXU4S2JxaDZuRW0rT2RTM0QzVSpKb2xDVTtOLz0KJUloJDIuQj9ZKz9pMSwqXEMr
M29PVHFham07N01mNVlTNClrR04jLmxJXVEvJGEqc3M8IWczbGhLYXUuK2k4W05NWUQnaVZhR1tZ
UTtDOHFfVk5YZCJqJVBLWCE6Xl5cNCVINj4KJVRwbFloOWZfR0tMQy0vZkxsJV0uVjpxZGhMSEs5
LjtacD0pYmIvRVxMKy1TRFtFNTJGbDg4WXU1Yz9yQiJMOEowW2k9WykjaDc3RWVkc2chTTpQdDI2
Sl5PMWIrb2FkKDlqMGoKJSJWOW5kajciPU8oQ2UzW2QtQVtnUU8tUjNYO3U6XDxBSzddSHNLLjBQ
J15MLCpoXFFeSi5rKVJQTmdYZCYiPGZlV0NeZGojIWpHVzFTaWg1VVE4bjlLVG9ebEpMcitDImw6
V3AKJVBWSXFnNihYKXEwaS5qS0oydVUxZyE1a18jJ3VFVkMvLzI7WVN0KWViNio/SCohWF1cOUov
UGRqNVhCJU1qazk1I2RIK1NLTXJDUD5gZiMobEdjMEdtPF9fbWpadD85R05kKkoKJWVlJFgiJ2Am
RE5McktqYjFEODZqQEExdFNSMCQ5YSklRGt0RSZJQWokc1lNXj4wV2xVI1U5RkljRU5oU29vK2k/
ZUgmY21BNlFvWzlQSi1rVCFAcjJLPFJzbDo3YG9WcTEiVUoKJWJJTSUualsmRVhOKWA/dGVBUilz
P1lsIUBeJmhHKDFSJFFPLEQ2c0U1RWFkKTpxVGEiU0khVnE0VFFDZUY9XiRWTDMiK3NXX1RyNWBn
MUMiJXFKMjcxLz85cUZkYWckZCEiPWoKJWkzWHFzY0slVTk0VD9raC1ERF8lOEY9OTs2bVc+a0Ey
bTZ1QDYmTGZJJiohND45YVY3T0tyWGcxO0cxKVAmanVoNFVfZzAvS2M8UnMrPmlUbWQiX0pMYm8s
UTZucTdAJyZKL0UKJV89XkRWcmZsJ0NZXj0pWDVsYmdpKltDTlpSbi5kMCM8MF1VM0o1OWdEMitN
Zmg7dTJrN1xVMDhrZlg3JVNjWnNPTDtbW2FrPl9tYkRSWTYlYkIuVHEtNS9mak4hQHMwOS5qWGQK
JTQpRFUyQ15oV24raEw9ISl0ajwia3FEI05AJUtbSEJKZlAmSC9WdGluIkhvTUx1QUN1J08kSD9O
ISpEYSw1a24uX1QldVwwYl8/OWo/IyMhMlhmSjlHSVdZP2M0IlhtP2YnMWUKJVFkLFM1TTNacnNh
b0Y7bVtcK2hwcStHMSQ6OSRqSGA0dWdMUy5KPG9JPkMlMFUtXTtfW0hsUUpFX2hZJG8raWRMYj49
ViwnMmdSbVAoQidzTS0qUTtgM2I/UE9FTGIrLlo+QHEKJU9eRCdWMF9YVlEjSkY0RGQkOSVFLy5O
JE9FKCswU1pTYHU4YT1YVmpQOWhya0tANGIiViRGSU8+U189Il8jbzRybDoqV3ImOS5GNilkQWJi
NllIYkZja2xmKyxZQVo/JVA9LlcKJVMvOChTMD9rWE9WVkE8UG8oSUU6PD8wUW1PXVozPUlfYkRB
LCpoSXI2dU5sM14jVWVhYWdCYVddYTtTSDNwalIkMV1DPzkjZ1NrNVVDbW9kXHRNJGVcQEEiQ19D
QF06WUNWbVoKJS91LFxkbW1sUDE3ciwwSyhDWEw1ZCFSYHNbMkktLFQjPmNGT1dOUUxwclUrPmwu
QTxUTFc0Uy8lQ21VUD9MLjxvOkRNNl1iNGxBbjwqQTw5ZVJOYiNELWdgPFx1QWI2L2AlMVMKJUJi
J2ZhWzpATGtEKDciNFU5V2ppXiJ0PFJyO3FZLVQqLydEP2BuPT45NU41RysrRGFmXW1lUUVdLD1r
PyZUQ1BlZGRxZyFCKEMsV0NlOk1ZUWxfYE1QbHA1Y2tWalVaYSJHPywKJT1cY3JeZ1hmVjVwVig1
bEIpKCNNcEpgQWVScyEzNUdWJFNuXSYvXFVeOiRpZztrIm4/Nz1hQzBeMmo6RT5oXlNxMC9IRUZn
IlFzIT9wM0VpZWdQXG1IS1lDYS40JHVMTzVDVSsKJXJVdS8xYl88SWJmPWVNJVNWOV5BKUVVTGRg
Pi9iZCE+WU8rXCwjRl5CTlFVSmVvZjFoYEgvZixyUUgmZUohRTUjOHFrX2MmLiRQKEddKF0iJyxv
K1BoQT1tTDQ9IXAjYjhhQCMKJV40aTNVXUkiVWBYYSldVW1nQ19NPkdfXz08KUdbTmw1Y0dMPi1R
ViQ6Wih1MTxMYypeITNtQERgbUBYLEgoL2giaU5Ra0NGImVQR15MZi5dbmRtOUE7NjooWixGZW8z
OkIybFYKJWtfI0I9N1MhYDklJlE5SSFCY11wcTVdMjJuX0BqaVpuIz9jWjVoOHImdUUzKi5tUism
KkZTSSNQVmopdTdYXjBvWVE+R0VBVlhqM1teblMuMUQ9TEtXWGJUaVddMUtNOUlIc2AKJWVeWlVg
WHI1VCEoQTZhcGIqL2s4KnA7WlAkSz44aSE6SipfVHU0TlNPRStPXEwrY1RjO3AsbTZbaFEhb1Bo
U1pWUEFoJS9vW15VYFNhRmthaGRhISlUaDlfJFA6RjVaZWdRZmwKJUBEbzQmP1U2YVZEMGhOJysx
YXBMMW1QQFYpJ0pORSlRJl1KZVcuI3RxTz0lN1UmJTBzNU4hMGFLLiQqVmMuODkkWT4rW2o/aFFf
VWckLVtBMEUlPm5EV1ZOJXA6OlYyTyssXVEKJUdTLzhpPmhzNDI+NU8yOThQajoxakVDNDAkMis4
KiorWS1nU1JtKkBcKiY2TyJjczoyNEBuKUVGcUtHNSNYVGAoXDlxU2c1NDBDYVlcMG1BKT5eUWs/
IVchNGRbMG5MTkV0Z28KJTUpJCNxLVRJNmNrUF4+X19KdW1XQV9QWz80MWUlNUhQLUFTI1Q+OE9v
KEVxUyp0b0wwX09NZS1zJzdPNk9RXFlXU0VhLkRiZSFnU2hUUzRvIVdoSjdxRi1Oc2xSWjFYKzou
aiEKJShLLWZHKzQ2XmMobUNaIU9mdT9KVlpjW0dfaE0+LCgwMCNnJVAkaC8zKiQ+Z2Y7KUFXXlBV
RU1mMCxfKz9vU1lxIUkncGcsXmdHMF8yRG9lXEg0M2Q9Mi9dUDBmSnRqVUVhJiQKJSY9ci9GZmpM
KmhFMmleZ0VmNVA6KS11PlUtZDBXUkBbJT5LSmdQXilsPm0rJlM8blBmcyVCMyhOMTlcSjlSYC9c
LFNLKihabzpgb2FaMi1GaScjcWQzPSdjVVpBJjNrVmooMmwKJThRNWIxODkkPTFZcWgyZU46Ljc2
JidtMkBAMCxnVUpVbzpfXj1paW5RRF8hRiJWNCxeSC9qbVEuVlA5TklRY05SXj48Kl4xbVE5PXBP
M2BSOUdMO1dNR2AjWjp1U1I1XmMibzkKJTxjJCI7YUktYS9HX2VPNTQ0Ni08JTZiOWdbOXVkdU1Z
K2U7IVBFRCUuTVIjVFF0TV5qViM5SWszYVNObGIhbWorUEw9X29mZz9QJDtmaShQOkxNSiZKKiEp
OG4xNXRtUG1ZM3EKJT44MXU6LClWRSNHVUE5ZGknITFJVlBrK0BGJDFMIUxDbGEqWzAzVjxSWVgx
MldwUkAhRSxESnE1Q2ctcG0rUkAhND1JdD9oWkkjOlBIW1Q7WnRXOmNPJlQ2ZWAyWU5jQidya08K
JUQ9RGtdV1dqVSNiLm1DU0pQc3VxUXJNQ003TEI3KSFmYWdhV1BkdCJIJXNFSzM+SFZSam9jN0cu
O0IiMzBaMyZSL0lgR2JuISE8LjBKciZzWjBrZSVDUWZPKFNJW0Q5ZDdgLUIKJS5jOVRobzw2ZiMy
LUElI2g8KG9OSyhlYFYuXGosUV0xT0llZkNicj88WW9cOGUmWiYiZj9wMD1cJmldOEppNzsibTBo
SlFbMFc9N0lnZ05FX0MjJy0kTDAjND9vSGdRQENKdDkKJS9scGFOUEFqbHM4LjtGciJCcGEhZkE0
TSo0JVVrQ24/YCRQbWA3YVEiUTQyJDxQVkVPUSE4R2dMLGpDcDVnZjo4PSU0WHA9XnBBbWZMRW9e
P1kuMjIqQTxcJUFjb2BxYG9gO1AKJVNuLytkXVtDLktcYTshKD1aQmxYVW8nRElHOUU8TDJZUCkz
OTZmKypcbWpXT11CaDZOSTtqMyJrUTswNUo5XnBUclQwYVFpLGhKbzQzc3EuJSRwa1VjPSM7N0dQ
LzMuX1lKTTgKJSc2OiwvVHRWdU9cZVsqNG1eUzFbYGAjUD1kPWpNJ0AwKlpXMCkqXGxrQXEuSFZx
NSZUM0UwT0guNTFUJ1NrWnViJEtgZydqKXNpVjVRRGVeM0ROUmJkaWd1OiNbJ0M8Yj1CYkkKJTpw
LUJyYVozaTw9Y25TcCVNTSJdU0hbMHRZLzEnWXEpVWBvRCUjdSxjO1dAXilSOFhuampUYXE1LSw+
VSdhMk0lV0syWzVqWjJBYCckVz1pZ21iUjNEbzwtVCJ1M0RMTTdhPkgKJW9NXztDYmxAUW0zSjc7
KFg5TnNqK0BSIWlBcEEkbkZYTk9dZjo4QFpRV2QsTyZ0VWVrZ0poNDY3QTRnaEFyPmE4TFdWMSw6
KSRRQmdoLE5hZ21zcDkrRypPRCF0ZlliUFEhJkgKJTN1SnNZVyo8SmhBb28sRUxCWUtRRzU9K3VE
TV5Vc0khVkNCTzZia2JKSCUnMiVzbzkpaFBuNW4/W11kXy8tS08lMlMkbSQmMUljWGFYPSFKZ0op
TEExTyE6Ni05IkA/OXNCNloKJUFhPmAhXlpaS1U6J0NMKkNRPHRvI3RBJEFVRy8wOSJKTURRWmJe
XFpfSEBsSlFgN1hoPS5fVyZCOWcrbEFCQU8yOj8qRTlXY0NmL0VoZmlsUDYvXipOb3NSP1cibmFa
YEZGZVYKJT1HMlxzL09KIy4tYVxsQlJdNElnZiNBTFwhRkQqRmRVISM0cE83Qy1mbSxbSk9oUFdC
NycqRilCL1IpaWg3QERKSzJLdFxNOTVeQVwsSTdJQnRBJTZrQFFNKW11NSE9b04nXzAKJWxkSDws
QXA5SSk6TiddaG02IzwpbVtaQlRZNGNdK0c5XT51TSwuNyIkdEBMZk5aTGY+IXU6XVJSKms7PVwj
NzRuZ2NOXldvZTZAQVFbZV1eczBxUWtycTVgMG9DKVxNNVEmRlYKJXJyKTxcaD0xOytyUy5BOHBP
RHIzSityaClyOCRpQW9ybj9lX3VLIj9zNkpTQF5Baip1cnA+M1VeUj1wV3M2QihRbmMvUStFTSVh
W2Y+JUBiVD4xRW5oZ0EmcmlnOF1TXk81dEIKJTVRJFJMcnBmTi5SdVtKdE8hIkUhaislIWFPLmxV
RFdJRlwmVERsJW9HXlQ/LD5sM3U+cTVcaGciKyxqM1AsJWRLMDdXWklsVz8xKFlMLCM0ZD4lamU2
JiNEQ0osZUJYcCItdFUKJVY+Y05faENLZikoZD9SUGVfajsoM05CUSVnQTxuUyEzLTZbQz8tTiUt
VDBQdUppXG8xUHFqOCxSLkBhP1FzTW4ySTgvPWY8Xjs7OiVSNWhmayZsPFZRSlJcXVhScy4wPktz
bUoKJTlWXTw3NiNbPiljazg5Rl1xYiJiVyRCIltsbydWQGVvXCJQIWc1NiRtXWxaUWdQVTU4RlVK
PUREXEk0OCZrUj5cREdWbz5TLmxpV25oPC1ybkBqVHJAaCJxIW1ncU5WSmlOJiwKJTB1KnBYJVxS
LylxYmNuc0RhTz48JVRkbVUqWV45Y0peXGgjUyomYz5vPTpfJzYuXTomTi5uQkxIIlBbazZNUyxJ
QSNIXl0mLU0jNU4sSWtdUixZQHEtSzhJcWJHVFAhbS4ybjEKJWBITzpdcm8sRlZXSlhQaDcuXHBs
ajdGRGAhXW1qZmUlLE1zKmhNOyVVbHIpMUtnSVk0TUFiaTIydSYoUVZOTjtuTk5yaVRybCdxZmYn
MU4lJXUnVDRYT0IsUWs+SlZFNElfWVQKJWFGO18jUy5yOHNgRkZpRztZOyxERl4tUy1rdT4zK0VS
UXBxL1ErZU9tJkJnYFFpaTlgQCs3S2JUQFpJbiVhRG4wRlxVcSxvayRGZ1ZtW3FYWS0wdSM7Ijo0
OTVkWTApS1UucnIKJSs5ZVp1Y09mJ3EzLVAuJUIpNEFFPjI1ZSc7TnI6Pm8naSdLIzomdUlHanRY
LzIzXlVXJUpDQzdNJmcrKFtBX3AwVFpBRl1HWjM9UE5mcHJTcWhCXD9KMUUoJypbRztGWEdWSCQK
JTh0cGVlRVNoRGtfQHRCLz9jRVlvSGNnYWNyS0BmLWEpRzwmIXV1IVVdUTRQVXJwYiNWczRaa0Nm
QVE2V28kcDM5PiEjYEJxOyZzOztPSGVBIyo5bk5mOS45KUtBXk0nK0Q2QTIKJWBmYEJOMjVeK2JF
IUxfbnFcRV1XZURVITBBNmRSQF9kRmkkVkFYLmMqSDczOV9HSWFGS0JKMTRVZlFLb05uNCs3aj5X
KTxXOl5ZKDoiOVMrXmhBcjBeO003MllcLyFuKmwtWDAKJWR0PlZEJ1FBLk9tTCpWNUlJSWVwTlVD
JCMxXCEzJGFDQzE3aS9qXW9iQ1Uxbks4KF1KcG5SUUloPi0kSmVgLmNuYjQqJEo3PzBpYS0/JjYq
S1Y2Y09AckxyWDJfSFJlV0UwcC8KJTciK2I8UisnTTxpY3VUOHFNNjxeSDEobydVTiQyYjZmN1NL
VW02QC1gNUQrVzVnazc3a19rMyxQIS5pPyppIWljcl9GJDAtLkJMI048XiluNS4pKFRdODxdPGpP
JUJKQSRGWkoKJSYubiVraGAuPClAT3NzSVVcYjZIRFlVTWxbMWtiREtEQm4tMF5YTVleKyUqTiZQ
aXBTZi1VbzJgXGkjPDBSclQ5WitIcSRLLiNXbThZXV89MFc6dTZUW3JYRD1RVDBfQmpFQzMKJUNI
OCguUUM1XTdNOm0iUmZyLWhfaSJsTCwpdEpATlZHZmBmXik8cU0mTGNjMUgnVSFRVT5WIi0nbyFn
NSoxJkVhZzc/MzhabDlvL05mXGwmPWVYMFU2LEA4JF9xSyotK1lodFoKJTBiYHAtP0BDZEIkUyhL
MkNPcWhrKFJXLTBDMDNaNTU+TnA1aGRSISdaNT1JMztlXzYzIjxCTz1hKmdEKk5hXj8/bG49KVpU
Ij1rKTxFbVlPXF5DMSs3KDtxRkxBV1VBRiolcVkKJVxjdGdNKlghJkhaUjo/LUZcK11OXURIYyNe
MWNMS1gqMywjMWBGPE0yTXNjYVM9VlxNVEFfME1BYFlMQUdeS2VgXjNlYmBmKXNdU0pNKUBjczQk
WW8pJzE3LlIyVy5RVDVmWVgKJXI6SmlfMDpVSlNANFFeSjFAW01eY1A0QTpUUDRpRl0tLTstJWM5
JysrJ0o+bjpqYE0mRTtWNmk6JVNkNktNP2Y+WU5wbic1Jj50YXMyTU1GbU8jN1VvSiRDY2pKcmtQ
aVw0R0EKJUNhZHNVQks6Q09nc2I8I2UvVylHKy1GdFo+TjdqWD1uJlgmN2dQYT8+a3MkZDgwSGFI
Omp1V1ctREFmYWVGO11bcCkiZCVQJVA5KFtpYTZAXkRGSEQwcS1bWThZLjgsPCo3R1cKJVZpT249
K2NlRl1rTkBKbEBJTkUmbT9HZUYwSjM4ckdHVllgT0lwZmRiISh0MTxUMEdEY2FkZEZmUlxoTDJe
Llc4IVEnaFQpdS0oYkdTZzNxS08nbEU3X2wuSEAqLyZaRTddXk4KJUg1SyxoQWJycl5LO3NScCJF
b1MyL1goNjpXbT9QcEg7ITAvP3JcUmZoNnJwKlFbMWU7O1dndXBBKzVgK1FJSExLSik/S15CPzdv
PDNoRFFuPFlXZCxAKGBGSSdDa2otZTsuNWIKJVsmcS5pJz5oc19PKj00ZWdzWzxZUXUoSlhIdTBo
MkU3OTxYPHNYIlozWWltREFGMFs6TTFjTTYyMCltcE5UaGQpOS9CdGIoUD9tLDEiXEFFS1VNakJj
NUtVZmFXJkFKWkdZIVEKJTtpLGQibWlaPjFVKCNEIWM0XyZkM1VMKSomcVVdVGgmPkBuO0QpaDRo
Qk9eRmZPL0MpY2A/aG1IJXJjOig+JStvWGkoIWBpKCtaNmJBWXNRbSdUUitlcV5Zb1swQTwvVnA8
MFoKJTVCJCMhQ0tscyxMT0RKZTZiTmFKJChrLFAwRDYnKWo5SFhZWVRdY1UmciVQZCVTPCZtRnBo
dS02ZFhfOzdEcCNmRjc2TVtNMz8nTTdYKEM6Mi04KG5IJl4iJyZbQTVmclIyQzsKJS4uZWBJJWFK
bnRDJDpbOEQyNktbZnFNIVYmZE1rI1lvX1paXy8mIT06LWQqRSJqSWNENFBsPzVxK3BKX2BvLTw3
KU0iNkJfX1ldKzhUVyYwNzRvSjMmPyQ5O3JSWjJGSmxlZD4KJU85bz9kSUtiJG8qSl5rTTpFInFo
U3AiQ0ZuRVttJy02WlY/bV4rIjJYakI+dHJPUF9VXnJkVCozLUlOPiw4Tzk/TVNoOFEmaygrSFZF
J1ZPaHFoYjEzTUkwVmFqLFQpVkRpPnEKJSNXIUNnMStUbFpVaUtdNzI7cVQvP2NBbyIyKnNQLHA6
PTtEYU8hMl9jNihiPCNiOm5zTXBMXVxxLDkwOjdNcWgiZ05zUENjWFsiV1c1IzxJUkRHbF9rdUZe
XUpMR20saWxrSzcKJS02NGlfLVJqUlhFTV5xVzkpPGQ0NkZpTmJuJkp0Il83U0pyYihcQzpjaik/
ckNZIk1ZSTorMWZtIkkrKm1xPypQTU9pbEtfU1VGUWcjZjQsO29zOHRaPS8rNWYqcjc9NiZfaWYK
JTdIUUZcJzIqZFlYZlxLSzFULysuQHBqRnRZIVFWI1NYQnIqRVYlLktJIWNdWTdXNFE/clY6JSFx
LUQpJCQ2T2dGbFZRPzJLNUpiakE7XTFbWWAwWzQhdTViaSNZLUM4Z0tWKTUKJSpPKXQvNlJjLzVk
PjFsQzY4bWReXTk/UShqNj5iaSRMJUxJTGcwIzVzNz9JYjA7Tk40YEFkOzgtNi9qRCNobDU1KmBe
ZHBDQyJHPVxTdVUsU2VQJVg2ZE4kbmhrcU5sVm8za1kKJT9oVEJiakgrQUAlLU1oamwkbzVeR15u
YyZFRWZzTSFjIkRgcmxWLmRiME1gMTRnbSpDNmB1Kj5JI1IpMGg9VEAuOz40Z09Rbjc+cCRDZDxG
KHRUS2AoPW10MG9dV0pncGhzR2sKJVhwQ0I6SF87bnE7dFNfW0EoJ0M+aFUoIzJGKWJCIkZQcDwq
TFxAZDxOVkBMTz5uYVtkUzZRcTYjTWIhaz9hPWFHbV0pYlxTTS9hTWZuTmhmXUhDOC1rJE9gSjN0
OGA+X1FvX0UKJVo9YUZiKC1sTjVSbDZAKDA3LlljNWVYdWpaPidMLiFOZztTLT0wUCQ4MClbPkxH
JVxoR0hfKm9aP0U8b1ckQV1IPUslN2xqdFYuJitlbW9UPFJDcCYjV0IhJVVPZGZfM25EQC0KJUgx
PmUvKlduJl8qRT8+bmlGUFRgMD9uUWJYTmdqTVMmU0JZSXM0QmRfQFlrZCthITMpKFQ3RzZJPCVl
UDFeRyJpWlkqWk49RSpLOGwjO209Wk9TSCI4cEAjTVpfXkxrNExcY1AKJWEvJk88b0E0Ui8wQzdS
MkhdSlkmMD80ZGRkV3Q/TEZJbDNsX0BpYnVRYFM7IWZ0a2YtVGBQIXI0dURjLiNCajtLZVhxPzY1
UTtcc01gQUtZYF9cRkdmSzckcC5NLjkhPjt0KkYKJUE/dT5zZT5GTEQlMVZNNFFOSnNdJicnN1cu
aU1cMj9KWilkUGFlZmojQmM+aT4kMVhaUmlDaUddRjE8J11HSmsjPXBRRT49cyRMWDpPS19gb2JI
bS83anUzX2hoIWZoNGVTbTgKJU5BbjhUQCZfT2tvWk41WSZHP0QjSEhIUExJNzNfUCpYbExlbEQk
MFdiMkNTcidFUE91RWdKSWk8KGgpaylKdVRhWVI4KF0oNVUzUDxnbHREV0o0SDNHbzYyVDBNLTxj
ZUJrOCsKJSFSN0NUIy0sbisqNWs8ZXFsQy40Rk9PJyNnaFdNW1smSkJnKGc2QDJaQFQuKlZZVF1H
IUY1bHMjXz1bMG9HJzNaUXMoPmEuI1lxO1ZSR2Z1UVFZKHNeP0Bpbk8mI3R0MGloLykKJSdyZE0+
SEBsQ1pXKUJhLSRUO29bVE88bD9dL2VmLz4lMz51P08uNlhMXiNbRCpcRjE7SURGXmNZbmFlNU5j
VjEiM25XVmtaS3NDSishLmRGO3RhW2BubFgrayZMKWxebTY3KEwKJSldMmRGIk9ESDFlPmYuPSk3
RGdAS0hvPjE+QGdTWEsqYzZOTilpI0FTNVVHPGMiRk9qLy80QC0qVV1cJDhFSVpNNVEpUUA7PDYq
ZWYoTlJbXSZKbHFQQE8vK0puTj5JQm85cWsKJSwpYz1jVSVFJmE7dSFgO3Etam04X1Q1anVfOlYl
SXFwRltJIyU0MUtjRUNGWF1lbEYsJFo3QSYocjpPJyZZTW05QkMnQTxTaU8vPjVFRDtYXiw3MjZF
KEpfLFNvQjoiR3NjTnQKJV1MKionW0NyczoiPy1DRE9vJ25JQlhrbUsuO1NlRE8zUk8/bk4xTj43
PVViWS47WV1sO2srXGtJUz9BYEhyTCtsbjchW01fZC9SaWZEP1F0KidhY2smblNbPitaSCE+ISUk
OUMKJVFOZVg7YShvSFk2ST1NL1FzYWhEbidIOm0oKm8/SVhcZnBKYWBHKEFARFlgSUA+SmFPTzIl
bkJINGFRaztkJiNJWmtqNkBGOTY7LDVZSjBicFtjKDcoR0BVXUc5WjVzPjhgV0sKJU1JRzZeKlZZ
UCYuJSFIWGZBWVsnY1BBJzsnOTdZbGg1OjJOcUs6LyU9cVxLYm83V2dsMXA5c2JBLShLN2MsdWYk
UXAoPE02M1JJPkA9NixhPjlVU24kZGwyMF8rJjQ8bjVCUzsKJVBRVmlATFRYQCk9QTU1RGBFY3Nx
WzsmJkdOW2tkU1kzaipCKkgrbTlrPFFFLzQ2TDttMkdjbUNDQFlcUyR0SlA3SmVlPSZZTDdXUGVQ
Km0ma0w/Pz1JWjJbQkU3aV90K18nUEgKJTVYIlEpPEJGTFJpNixgVnJjOFQwUW07MiQ+K2VKI1FD
RWBpKjZZJiI3Ty0kRlQtN0l1RDBDZSVgZ0Qxckl0LTksYGtuXFNxRiJeRi5qL0poKyRlKFpiMis4
PHI8KSg6S0ZNYyMKJUlmJEcsbWkpK3RgVD4pTCNHMydaKjRITjU9RSxDWktXJkplZWwtSClxUVZL
UGk+XyNbZChwNWVJWDQuOUhEM25rW0UmR3FMUjxYP0xgdCFRO01dMS1fSG5rUkc7YDksVVVMTmAK
JV1ZL2I3LTZzWkFLbUVTJDo0I2JPOUdCajRvby0yMjphak9qNmFQPzxSL2Y/MlIkXSEiUDBGYlxW
ZjQ4c3E8QGhwVFNjLVQ1RlNcNEc9alNSZChWaGtpLXVGMSxxSV1AR1chI24KJWs7c1NRRUNDTGQj
OWEoRy0qZ11dRmF1OjIvbUxAX2giSCxOKHVAZT5GUmRJIiJFXEpnQSpXYURDJGBHcXIxKzs8cmhb
NkM2MGshXUgvJ0llNFpIYUVlMiJXPylwJDprYWQudG8KJTRiLCxpMWEvT0RfIUIoRTNOKiclb01G
a2U8Pms+cmBPdSZVUXFnOlViXXBeOVknMVFSNm5zKWEtNEU2bTNFRV5YYURKWzorbVYzLkxnbzJM
NDJpLVUmR0QoWUktU09sRDFNZWQKJW0sbEBJKC5jOklxbCJvPilQWGNeZnAsRCJMa10wdUwjJzsv
Qk0qMHNFajBPVkBIbklTRUdVL11cIl0pM1ooaHRGMVRZLWFJMihAMEFpX1cza3NUcT9MWDFQbzBL
WEozMj80YSgKJXEpRmdgS1dXby0hPUNoSDgoIkJGRT1fW1UvaTtsWyxHXHU5OjotQHFOcEVDU29a
UXRwbCczMTchIy5gOl5zQ0tGaD9kUVRiI1FbMC4pXElLPyZmVCQmKmcnSFlXL2ZxNlkvKGQKJV09
QWVWcFQmK04oKzVjLix1VCc9IippRWomOFQjTUtFSUk+LW5PQ2k+bT5zRi9uRDBFPT5AOWo/NnVk
UThqTjA3aHBta3A5Kk5DNjtuYHJxWktVXVBvbjBfPXE3QGQyOmp0YUAKJS1pPGFTZVh1NnVcP1kj
S1gqL0gmNCJPVmZNXEkqQSI3aid1LTk7P1ovSUsmWDJHayRHTDdEbi41LyRBXigjbDE8VzYlXDRU
bC9SXC9NVE0uQFtaVmQ2JW5hW0cnKTshW3NQOj0KJSMoKiZVLTI5KixpJHErOnFtNkErZjQ/Mj1Z
YDxBVExLOTZcMUtZUzorTjRzbmhCZiRgVEZoJCRrVk1wN0BxdWNScUFCdTRwLitCSVwwIWxOXT9x
LTBAKmJwMUJuTmtIYyNlQiMKJUdZJ2NIOFN1PyM1OldvYlsqNiNrMk0vUSMiQUs1RDZQc3MtVCN1
YWhTJ083LSNFPWFwOTc9SWxFQkIuITIrWXM1WCYiM0BxPVVmaEpnSzJhY2BhNzRSMC9vOCFjVVNZ
VFNZVnIKJT1MdURPJ01qQkAnKzEzLS9ZNFBYODFaN15wUzIlV3FXNT9BTTVtVWZgSishYT41VWlF
T1EuSlNLW0A4YkBjTE0zSi4jW0xqN047TkF1VztWXCFURTQ5P2tfRnBCJl9ZRUlkPlEKJS91XD1J
ZCo/QWQhIzdGWSZyUC8wJTdjSjkmYDBpYFFaLltVOFk4KGBYZSQlYjdaMW5UR2NOPGIxNS5FI1No
XmpIN1ouKVlaO2tKdVpmYWBWZzBGY1llb2xfPVtWVCgjUEtvbVoKJSwnKjJgWXNUVSM0QDRVNC1f
NF5bKWslPV9gIjwrYmhxOjhjQlU6LE1EXW1WbGtIXSI1Ri9DVnVDVjkhTnI8IjpHI2xiXkRvIjc/
JjZLVlNnODw+OlsnciFMPFxAWkE6K1EpXikKJVdoXmJ0MUxcJTtJQFUmX0s5ZUltMiZZZ3NjQ21O
WVBYMChCVTdsO0JZXHRjVz8yW2FyJF1JNVZdLS0kZjVwMVdQLmtdMi0ic3MtIiInISxBVkxDYCkr
U09LRTVEQVRTV0o8Sm8KJWdtaThuMEguNDNoVS9LdSI4Si9aTXI+NFwucFNZOFhLNilOMj1tdFMl
aSZDMFZuJUxAZCFkPDJgMzVfWlUjRyc+TzkqKlA6KEVGXS9UOnNUMy0xc11ZX2h1SUdsYD9GYjUv
SicKJVsnQmcpODQtPEtgUTBhYloySUQsJFxhTklRbyRQTVNxNjslYzgkckNOTjI2JGA+TGZwXUgh
UiNCOiMzU09yOVd0I1clWW4jZD8lUDVqN2V0YTloJGJNKHJqJyRKOyMmP0ljaSUKJVguT11XOz9b
ODRwQDZDJCdkMmpmYGxUNWZLXDhLJEA1MU1JbTxgJGpwMlNCLSZpNWJMaTpXV2dTPEFUXVJPOy9f
P1BzYWUqOlZIYD82KUshNVgiVDttaEJcWDxhbkNkSVAwQDwKJV9NJEphcyEkX3NIcSs4TGdDYkhY
UlJccUFCYjAvNDpoWHVNKyUiJDpXSV9eZVovcC89XTEhZ2BvLV4mTjlmbyRKJygjJDAvXmIrNlRP
RSInXztFXURramtjJSg5aSlYWEFKc18KJSNpQS49Xz5pIzltaSQ5dC4zbVYnKi1CK1ZkU05Ec2RJ
S2A5azk0cyQta2czSHByPFBdSEFHVyYzKmI4Ni82aiJCITpbNk4kWi8nL1pjYFI6aCZeUWZpRlFz
cyRuXkIiOlovXi0KJStiZTQtJGEram5vRDA8NiZtVnBfTCVAT1gyPlkuRTpyXSwyNGdHUz1wX11d
czNAWywvKz4hV2hNWj1DXUtYKzgjKjVHOS4kJSlbVE9dTzRXcnIqS3AuZm5xXWxbWHNYJlBrS0cK
JWtBMV1NR0RyMzxAQ3FaQWlhUS5fWldyLWk8YFAudD5bSmA4bm5kNmhKKEUpVExnKTBpM2FvJy5i
QS1KOkAzI1pDX15PMTQtaWtidDhQXVk/OU0/PEw5YWdLX11MXFBvaSo0IjAKJSlLQlVoYlAoPSle
aSVaRDokSCtMSU8rNCxfV2VdNUJBaEgwXWMjMzgkZiZXLTJXW3MwXDMnL2FHc1FPUllCWCQmU2tg
TS9HNyNQJi1nPyY2PzshKys0bG5KZUwxNF4mS00mPU0KJWpUcTdjQHFhSlNBZiItbykwWyhXRmdD
PFtTSy41al9eVlIpYjFTV15nMkpOSjFiVU09TTlSJVtYMUo/cjE2amFeVE04Wl5XdEJPaEltMF9E
OShtQj49SSo0SFJxZEs8UVIqIk4KJVosL2o+aG1KMDEhWl1yMCJRPzdKaW1pRFgibmpLSjJWOmM2
QCgwaUQ7Ui1FUysrcD1HITQrUicqSjE5VlpBJmNSS29dPUorcTVeIit1XDRFXjdTV1A1XEFMXDpc
bU1LJlgzZGEKJVtIM0xrRGJxbW0waDJyNjwyJ1QoaVtOY0ItIyppPEhFTUNHbjlCMkBfSDJLOEpt
UTBFLnA0YSI4P2lZRGBuQERHKUsiKEIzNyYqVltlVSkySCs0MSs4YyxTODtuKT9IWj1hOToKJT9s
PzluRm11YUpIRHUvKDRHNkMoPiQmWWduajc7QEB0dWpBSzhUMlhqS0syNWtRUlBlbV1xOlVOX1Rp
cGJzI1EkTSc0SV9FLm49X0BtVy5iL1AoXEElN14oZFFuJkRYJD1sLVgKJT4/akBpXV8pOUw/Uklw
YjdNYGlPMXNlPTI1XEkvKUU/S1VZRSk7QTAwcXF1IUhiYiRrJlFPb2RCTmBROT43VmVCTE4iR0RL
cV1YOUo7KUFFSUJESTI+dWZ0Oj06PWJHcmkuYGIKJSlKQ21UV0FYKz4vVE0mLVZsTCdjIW8tcVsm
c0hTRStfSmNsPmRRUnVmOTVJc0JYXmZoblhALi1RWVpOXzI7OnNlXHFDKWRZb1s7Mk4lPS42Um4m
JU0+IXEjOi0qLExcLyU/RCgKJWo0cFYpLmBsKixCOjY+aUtxNExUNFRxSV8tYyguXXFASUlOOSt0
bSJlTWM6ajN0JnJHLyprNkA4NydOUGFKXitpZFk4VElcRkJzcHFkJCZqTmhAM3UrbGQ0cSRNYSI3
cVtcRSIKJUdiIks/IlE/Y0Y8RWMxaipvZ2dDSVFFWVlBWDBhZzZ1Nkw/Qyk2MWdhKHFvMWEvWiNa
IWs3Wy1cZUJiOyxLLiFMWXNkMmdAS3VpKlU2YVNYRVQtPChLSVo7JzhKRUBJX2c1S0MKJVpeV3Qr
XjphIjhjJUhLYW1EQElhL20tJ0U3Y1ttKHBZOiZTKG8qdU9FWD51cS1RI3FmJXEvTCJsJzF1WGlt
JDVrbVk0NnJmVTFYJyQ2ZWY8MUJlOy9kQGlbVDJDPSpqOzhKYzEKJSJLNF8oP0ItPEpUQlFoTVVD
LnFQMnJ1L1woPEFSPVYiQUdrXS1IVjxMMl5mLTRCM2thYSQrdUhiIVo+JCVuKVszUC5MUW8rcSc/
TCduY11qSVkvRmpuZnBoRWdxUWAnbGldMGgKJWM8ciYwIy5UJ2UiZCJNYnFXIks4ZXVkS0YkIWQ0
Rkc+QVskPDE0Vy8jQDJgJDJGIz0/K0BgZGhWXnFOVWM9cVMyXkwxVWlTXnBGNjRHJiksb0FrJXUn
OyQ4NV9NKyFmLUA/Oy4KJT9aSCovS11cb0JBXERUXDVwUEs/UVchS1xsM1xjRyQlZC5IIilbJz4v
Pm06KE1rST9KOmYtc0FPb0hGYzN1Pz5XTEJBMStgRFtIL0I4R041bEckPHMqaixeclg8R11UP0Ey
WCIKJWVXOVNRInNpaFYoUUxtKkFnQl5UTz85Wz40Y09YTjVRZjwlYS5bQyRTJShPK1ZPRlMnL1dh
RkYoZythL2d1cWlVTGV0SDxvTCQmWFFJUmpCWSU1TjloQDstWDtUUHJ1R110PEYKJTpZSUlqX3JN
UEshSSZOYDRXWW1OVk9pMyUoKnMzOGxnNHIoJm1pMCUvLXA+dVRfUk82UmokOGJvS2U5RnAyN0Q2
KzJUMDNXXFpiXSNUajlmSEd0OC9ocTsiV0EsPzJAPCNuJzEKJUBLa0RhX2E6Q2pCIjU+UkImJzhb
RmZcVV9BIWU4TzZtSFxeL1tASVclSV40WWM9QV0nYTM6LlowXEdGLGk+ZCNcPFlLKS9pbUMuJFN0
S19SQWNtMkA0QT1jVGo7JGBGJUI0Y2MKJTFhWWVFLlRmZllfUUhXJm1eUkNpNkVMOzJDXltpZ0Qx
cDkoaDFcanJUS1dvITBnWjpnVFhjRXBJYTpCRmArXSxzcE5YXGoyNXNfXy07azpTYD5jYDdrPjI+
ZmY6MGpTNzc0QSkKJSNUSyJdKD91JVdTUWo4VjplbEFqaywoZkxldGdqLHFXU2AzbDUibjQkaiFS
R2ZXbElzbV1CLihHcDlVVE4jOCxPI1FYSUU8OWtHYEBnRy4pa14lJEBQOjhBWlk7XzlgMWNVTzgK
JUtSZmVDcl9Ucz9GXGI+Kzo5MWJXQy1YQUUmZGNvTWghNy1fP1ktSStsNFxeZUNKTCIuNV8xSktS
KTlOZ0tSWDdaQyg5KWIudStqTStyS2A3LGNnOFwlQ2NPSDMnVnJ1NlA4LHQKJUZNM0Q8YGMhUFIz
YiZgSmVZPjolMWY/MHFZUUc7IipYcWAwWTJoOmdIKHAuYjNSQmNANUg6YVZiIVhwLDFsL2dSa210
QjNjW2snXCtfMVI1MU5zUWA1SVJcbklHc2lUOTFYUlAKJWsnSEJpa1xsS0xNJD5OP1lYJERoWkNR
Klsnayc7LWVPXStKTUYsdXQuRmB1cD1lUjoxKUlaUmdETCxFQCxRZmo4JWFqVEhhakNUNSNOPSRx
JVU/b1whYyloNlAjOW5jcCsxVD0KJUxXQGBiZDZxRlxtYj4uR0ssRmRIWiE3VFUybFkjM1A1RkAp
VlJcPzxgbVI7YVlmMTdCX3FRJTwqKjBRWj9lMVteXWJhSG81S1VqLz09Y2wwYnU7dFxHPVI+JyFx
VUdTNWAraSsKJTFnaWsnVE5ibz0sJVRmQlJfTEJgMjApbHUxblsvQEY4VkwhUG8zVllxYzJwOFhU
QyktJlVUPEpEXyImVzBuNy4zQD9aXWgicllZSj82TGdmUnNWKDk0Wk1iOz4wNi5kSylAMGsKJU48
QFZBM2g6TElaI1hzKm07ITheajRBTm5KalNraTpsVCIlcmsqKkJGPC4xSSMmdEg5K1VXLjlxciFm
ZUNnRV09PTFzRCw7ZipjXCkrcVxATDdlIk9qRmdDMEFXZ25fZjQ3J20KJS9UdT1qTmNmNXBNNGVW
a19OOVdgO185bXMnWzVrTF4kPUktZG0wbHBwKGs7U1QpKSdIQUhTb3IkJ0FEKitWLFdVZWooUC4y
ajorVzFFRmhyNmBKNGIkWiMoXFNtMFJrKiFYTUQKJUdPZmNpQipJaEwrVWhcUUknRywmbm5mXHRn
Jl9xRzRoTFRoNGBKaTw7bjk1JWMvNzlgXExKT1guMjY5MUcpTCZrUUEuOSJYUTJeUC41UjFNYm1s
MFhOaUBzWEZHaklSJydHdUEKJS4iX10oMGFsJSZwa0ZrWzpkMF9XNmN1NkhNRExOdS5nVj1gYm9x
YVdFLV1nKGxmOScwYiN0LUNMST0lKVciXWJrSV5lRWsnIlJKUy9VZjYjKkNrVylaK0xEb1NjW1I/
Yy9WO14KJT1RVjxAKGZvVz9iIUM0bjZINVMiOyhZKVUnaD1RTENiOSokK0QkaitoPylkP3FwKlZc
Lyw+JTtPJFdGQ14+J1wzZUlTa0BsQ1txQlRvIyUxXS5cSS5sSyFwZVFrbDIsVUB1NzYKJVlCXiNb
R3EuWjcxQzE0OFBydCklLiVON3A/b1hVXDRvIm4+WThWQFJNOGAmMTsqKWxFS3MwMGBxL09GPyIq
Z0g4ISg6ZDZXZ043PjxMViVHT3FQSGYoOEY7MGw+Xz8yMWtZVkwKJXAyY0Q5OWszSjdpM0lEW25R
UjopSCstSUxsVFVfcGZjcnUza2dxYWw1OWdFZldXMFIjNW0lTVM6S0FjOCdAbjhpQWYkUz5PPFY3
MUUyaHEpQ14vOFdiIjwtLF0oXzJbKEonLFgKJS07dSdnJXBLN2JvKldObScuTGtFUCRtbnJeJ1pc
J0ZDUFp0JE48MDkpXVwnZTZKQnJJO0ZEcEMwNGwzJS41RkAtYyZEakknOWdxSzBHMyZtOVxNK1Y+
QEo9ZGohL0VzM24/XycKJV9NQ05WO0psVSQ+VipkVlgzWWBNbmo6MzUmLVhlVjU9c2prWHBwVDNm
dS02LkBFWUgrSl9oTkgzTDBjJV8nPjhzSVN0RjhjOTc5Ij4pRm0rKzlpPSQ/ZWVSalhyYXJnVD1R
cWAKJU9kU3IpM2BJXTRIbkVmIU4oT0VlO0cvamAlNlZNRU5nRjw0OVNMKD1aRWQ0bmZGJ0lSWTFf
amoqSEE0WDEvXnExYFpYM1ZxVWpkNDgqJTwzRmhTZlJUMFVpPXBoVjNtcW4lKkkKJTJFQGpWXGU4
U3JxbTVAPmxFSTpaKXRQPzklcCpYNW1IK2xwWz5UU1tbQWNfa2dvSkxQWWpcbTVQNytRakYxW1Jh
TFowZWQ9c00hNk84SCImXzQ0VDRgWnVedW08ZHRyY0RqNkwKJVNQdVEqJGZlVWZBclVkXGpMY0Jb
OGQuODReI1dGbUtHRzcpUl9ANzljPVJKaHE1M1lCIjJYQGUnOUBHTD4nSik4PXRGay0uNnN1ai5M
bSVQJ0x1ZkpiUnIvJCZERkkkUkVFbFwKJUkwZjZsR0JbY3JtcTVXYUlJNiNoQVxydTZjKzAiLmJn
W1JLcWpKSG4+Piw8ZXFiaWc1WkdHL04oRlplakh0c2ZtLzkiNjBHUC9mYEM4ZkVmLDZRK2phKzow
blxkNjdhTSRzZCUKJV84bnVMOUpRIkRUTj9JbzwnPE0hKScmXl5xQ25dTTNHRXBBTCUnSVVZKzEs
MVsmPUtvbm0vPHRJOSdGRWA5cS1TYFU9VTosND5MR1g+KDskTzUhMVtZbkxpUmxnaWkhPT9BYDcK
JUFbLlJyKk9VRE1OPFM0NyskR05XJSpQdTRiNkI/bSlvTi1aZ19BOyIrV05VPyREOyNpTCdbVWpa
LF8lPixWKExuLkkwPkZMbjMsLiQpTSVrYGg5Yl0lNEs4MkFNcCwiK0dpOjQKJWlVXy1wPz5YXHUx
LFtvSFkvOkdzN1pxOHNlZVVzK0htTz91VTdfKTJHaHJvayJiOTA2YT1UU1UlMzRKVC9CJjcuI2Zw
aGxpPDc0RytMQ2xJOUl1ZHAjQ0wuXC0qOytcRmxUZEMKJSVORyEiKzgnKUY/Lj1TPUQ0KHNSZD1I
ZS5NaSJJXGtndCI3XnQiTGAmdCMzTDUlRmY8aSs/TiskbiFDbDFMIXNtY3JQWkFBL0hxYEQnIz0l
TTJVX1hfLHA8TDwraUt0UjUhOF0KJTlkbE9ATS0lUzgyU2x1XzY9ckE/OUYmR1pbLS8xaCssQjst
KkFaaysxQSVAbCM6QWZNbiIiUW0lMXE8XTc1V0JwUW9MW2ZkREIqKDRwLjQhX2xFRW45WXReIi0/
NSM1JzlZPT0KJUhabVREIyF1bSddTEkoQGtcIUNjUj9gYFBSZ1ZMckNYNyRSJ11qL25YN0NkdE8y
XVpUJXVETF5ZY1tDa1A6YzY1ZC0pZ1ticzdQcUJdJ2pDPVZRM1IqPTYvPmVoVTk4NjwkRGUKJT1o
NiIsYSs3bSFfXkFwUEVjT2hKQ1ZSUFhTcT5UWFcxZHR0cEhdTzhYNT4vZjp1PCk8YkZmUmIkSTMk
bGxdcy4qbSNMZ3RHTF5oZykjbSp1KyYqMWBmT0ksR2U3PWNpSC1VW2EKJSIjYENTTV4vYUdnaWVP
L15wN1dmISQkVGhFPWNYRSY+MypNTUdkZ3E+ZkklX01MXmRoZjsySTAvK3JCVnBBaTFrUVxxXVdf
YT5wKCVqT2ozNjcoJUxGZzMpcTcwOzhlKWteKSUKJSI0P2lQTFhudE1dNTFAcSszYTBqbTxbR0dJ
IkhYZEFhazYnZWgoLHM1NHAjLERRIzxIP2hwJGYwVllpaSJxbGkzImQ5UDtaJ2BHT2MwT2AlK3E4
Xk1VLF42WmsnX21VMy9oXFkKJTNtT21ATCRxXWhOIylSNXMuXVtwckElOCluQFMwJDJpUzYhXiVa
OiQ9PSMyMmtmVCs0NSwhR1VyXkFmOU9jXUddRXNsSDdLRSxyO15IY2o1T19vXClqPkdTLFNuSys/
Kk9ScykKJUwzaGg1IywqTV5lS0wzT1ZwWWwvUDAscy9BJSVUWzIhL2RWMzFRLWtSLnJIVWtaNW9o
cmVoNFRsK1ZdLUZYJSQ6RkZENWY9Y21BOyUwM0FXZUBcPTFtLDxKUlc8I2wmbmFlI2wKJTNMRSdu
cCshW14+c1I9UyZNRVllP05jLiwiU0g2XyxEK0QxRmYmdTpSLmkncEBfMHE1aDRzZVEtViIwT0hV
RVwnVC4pQWFSR05OKW0qZCRBbUZuQGckby5MKkE1WW04XmROcUUKJTljcSksWVI6dUdDTTQnUSNF
JjFhJSlbYzNqdCpvISc/Zy50aUwpREU6JjUhW2pQUlFeNVNDKmxLVG5BLmleUmpSYVByX1dUaWo8
MjViMGBjazdTLiczVWMyXmwuJCNBXD5fJGMKJWREalFkNmBrXDJhSlhUc2AmUiZaYCQ8dHNXaz5S
TzVQMSRAUShzclVNLWBGRElCQyo4LlFjUS00Kks5Lmo+XDptY0ZDQ1EwSj0qUigxMmVCX0lfI1lf
W2poKFMtQyNRIyM/WnMKJVQ3KCYmXmVqVmBoPmMzUiZBZ1I5ZHRYdEdOQGxTNmNSZG5oPiZIXGE6
YXRtRnImczo3Jz5YM2Y4aCFQbmtKJjRESlRzZnIuW2lRPzo2ZTktQU89QGRdXDliX2pocXBDKEEq
ZTQKJVIqVyRZKUNVXmBgcywtWlQpdXEuKkZROGpUcmtRJnFiWC5aXFg7bWA6cVoickBiPC9yM18y
I2lCLDJfb1hAWEdbNGc8XXI3LFFsKUonWm5aJ0heUDxbKmNROz9fVHFqJTtVRVQKJTByMGVPcCxy
a0ZVLS5CNStcNWJTR2gqYCczY20vKVEzcENQMkE2LiZGXFRkZl1jKXIsX0ZvYkdhUixBTk4oc3Mh
QCZOWClvWzRzS0N1RSlBZ181OWw/QDUiM3BHTG5Qa1UiYyUKJTtVQSZgMFZaR25rcDxXOT49PjhB
XWBwS28kI2c7LScrW008SCRFdEhNZGpPTzQqOygpQkoiXCo8QDkuYTonZEZbaEcmbSVgWlpWP2Eo
KGxqczEsPEwsMkxnIWkpZ0taWiRmVmEKJUUzI0lDXl8maEMhVUVoVGY8a2xbS21DVS0hYkUwWFZi
LDFeMTY6Kk82bVFsRnAqT1RpTm9BSyk9X1BjIm5FUCoxU0BGVV9Qa2c/I0JeSVknIyo1NEgsJVxI
PEJmV1soLUZqRDwKJTIlcCFYMEIqc0RwS0JKMihXV3QiXmFiJjZuWyt1WTIiSzVgRCZPYD1gTCc1
KzZlZltAY2suLCg1LixMaXAzT0skO2xvaUNaYGs7cmljLyc/VTtoLlokUHVeZkJNJlJSVGNiVEoK
JThKbiJpXmk0VmtFLXRBcChaYT9bcUwhQVRfSzJ0LT5gPE1pYzFEUVpBdC0yVjs5WTxJMVRhZUs0
UitrY0FVMjJCXzc6XmgsKHBPbG9PbCEpZ09mI1xMK0FtXzswVHNOMyFebzIKJWs/Tz5zRCI6Nkcq
Oj0lT1F0I2JMI1IjR3FRQU4mUjNVXilETkhILkVCQCZCXGV0ciw6YydqLSEhSU10biNHcUIoWVhq
b0w3aUNjKz5nVj5eIks6KEdCMEU8Zm1Qa20uNGM1cVoKJUZjS3BKSGtpWVhLdCU1ZVB1TydnInJE
XyNpcENQZTNvK3RXLCxDPyosK1VZXSxhJFBAVExhcjxWUWskRlppbl09YD5ZJ1phLUFuYiImaF83
Rl5ORidqOWIqTzUlQ3NbbEpTPFYKJTo7YystKShTJzpWXT9JdU0/OzxPOUxtQGZgR19PW1MpM0U2
TlJBK1AnWnJbIVU/LC9lKVZNJCFOLzp1T0w9dTsqQnApZDUwIzYkTTdXcW9FWmJmOWEoam9oOko7
WVlFLWZTQiIKJVg7QD9GUVNDTSo/QSZKSnMpRGdwUik+aWsjNF5gSkMqMG9VM3UoTVJaRi1kQzZd
PE9UIlQmKjA9WT9Wak5XbkB0KEhrInVlbSFpJkxsUkxYOzF0S1g3XGEjYmM2OGIpKlBrZXQKJVQ9
XXNWMVteNElkaDVhUGozYlpcaENiXWFZNmNeJzZTdEJuZiknVGlLbERwdVBQTGdwWl1ILF8lQlFS
bD1YOj5IMk5xVilLV24nIyhcR1AyNlJMLHRaNlMmMzxoO0VWJDJfQ2kKJS5UbG9qWkVfZ3VlNTJz
XF4vYGVsTDVnImRwS2AjSm5RcktWVSRZOTFZViFoYU1zLiFTPCJeP208Rm9mOzRMbDFpW2lgO2Nl
LlMyJEFnOHV1VEBqYU9iTFZCLDZjNTkiJ19BMScKJWhSV3RLNSgjVzFVXDlzaD9gaEtXOEJTci5Y
ZCFvTW51aDRUOFExaklfRUxNRCFNLzw8YS5BdVpQVGliZm1CWXFrJmlHKVFpXkhKdS9kcm1xU186
SzNHMW9aJywlSnF1PEZKRzgKJUtBKEMzSEBZSDpyayoscC42TWt1S0xGW25GMFJWXz4lMkw2ZG0u
OypqZFAwQF84JVB1SU5ia2VEdEBnVVZCYSdqMUl0PENWYC1vPi9AITBySmZYaWNRSUcsQWRRLGw2
Zys9ZFUKJUdwN1NlSzFALzxHaUA9aSQySDQxP2NANG4hcmgsPHJqUiFFKVBWJmQub0tlPDNdWkxG
b0g1X20scVEkWltPYEppbSRTKEpxbERnZ1ZaXWk+RTcrS3MrQCdkXmRQOV5HITgwVVEKJSRBW1Rb
PHBtMTdOL0wvM0xVW0QtOjc4OHRXVFBHQ0A7SVlyYmFHQmNYPWQ+NCsmIyUwSVBBX2tXTHUnY0JM
cm1cLTU9XGo8N0dXWz5ZbDQxcCQzWTVFJTIpZjpSPmpSbVh1Ry8KJUJNZnUjQGxFNk1DWlVFc1xk
K2I/OlNPPD1IdFNgKGE4cSwsK2dbUi1gRVNSXFJFYExmQFZWcEtrVTtnUzAlZVY9Ik9FM0pbTyJk
ZmZnMzBvYiMvUSZLYTwjdVhJU0ZoWTpMPCYKJUpLSVxDVyxAbiEuLGc1NVVOaUVwZUpvM0NwTGQ/
TWVaSkVyWDQjWmc9P19kUmg3Iy1lck1GSmBGSigoXEdKUGs+ZzAhTEwoMlVEbmkjJFEkKGJONFdw
YzwkKi1gTVE7WltRQFUKJSZMPCg3Y29sSzBQSWQiUFA2Lj9JInUzPF8nW10xJ0hmZSVXQ1VeSGNX
YzhFZlQKJTd0ZGJEIkBVLFI8ZCprKFpBQGgwXC0tUkgkXE9oYCFCZEQ3NCM/J1w1NnBjPnA/L0dW
OTY3UWc9SHRfYGtSUWNIYEk7UT4uMVdaNlAvaFFnPDloKVE/cDBPdWlYZTpGLkYqcTcKJVA1NG05
O21dNl8uKmM9OyFbWzVpbTc/VCs+IUZvMmZhZFFNOWIsVjxsY2pVJyMqYWlIMHE6UGBVVTJmM1om
VEUqUUNiZihAMihIJU9wQzxrImI7LTUnYCxbc2xQdV8jMDRqJHAKJUM1LlZcaEghXHBrWzVkWCdB
NiVdWmcmITFONE9tJjhFN14oXmQmQGQzZmpmXy4xL1BrbjZlcF1pJ0Q5TF51JC5XNDAzKFkpMkNS
KENnUUM3Ui5FQkwiYGssO1pwY2ZEMjNsW0kKJSovZCJdbTRIYSlkcE0jVTJJR1RcV1BXWFcmUyhb
YzU0IXRIJ1RJc10lUSpgUzFIO1tNcGVFNykpNjgxPEpfJGY4ZS8pKC9qPFp1KCdZZmEpUiNrZERp
VU5TLSFQVWhhXFcmRnMKJT1DTiRQajFtSmAsVS1KTyJLSiNpPDJhZyMtNSwhNlxwcDRTXDFkRy1P
V1Y4O09ZQ0JwbUJtJGNeaC9pTEFvaSNXNWNgJWxbOVwhZD9qNU1lQidlZVskSXNaNl06Wy82Mz1W
UmsKJStARDFVR29IaE8iOkNVYyNmWklKR2x1Pk9fOFlZdDxITjpuUCU+U2w1YWlGamReVGpOS2pR
K2VBKC5fJC5qUEBcQVUtVEpga0hAMkxEPnRgSDdhMm8pcmVQR0E+VycqQGwsI2cKJTo/XkkwOj9p
Sm1CZUAmQjkpYzJQU3MyRnFFK2tMJERZWihCK3VvZTQoc1skMF9TQWNNQGJPTWEuYzguLF5fNT0y
YzlxRDgtN1tDUjlOYSpMWFNbKW5AJE1lZGlUUCNjXmh0ak4KJV5VKFJYQnEtcmhHPEEwLk8zLEFR
QzMmJlZVMU0rY1xIN19uLiwkYFtOVW9bWDNHa0gsVVwkbWI2KzpMNmlnSChtYDdTZityLC4pZjNM
cm5oNVdwSmlXSWo8T2ZbOWtiZTpFPS8KJXFAdF10WCtMSypLP1otbyxAUiFbVWVGPnVCRENENW1z
dTRmOUpvKCVscnMyZmchVWkvQDJqQ21uQU1bN29WPz1bMUtkaS4nNkQ7aTk6OUNCRS9rS0pCaWA9
XTNoajptVypRalUKJUs6Xz5dIWRFK0dLKUM+aEpzSFwuaXRFTShwTkMoXmM8ZmRdOy8zYCxHM3NR
UDtpXTVMXDhTMj5QKjlVN0ZKR0AiXWcuNj1Qc1snTSM5XE8jI0E3I147LC9pOSVOMF0tb2drPSgK
JSZdRmNGa2spa05iM0goNmlhJEBrWS1uLD5VKWU0bjwyMTVecGBbIkRWbHQxPiRjdTYpRkEwOi0k
cXMuYWxyQ0pmSks7JzE/TiZpNjg7cm5IIzdybEIyYERPb1RqXCQ0OUc4PDgKJXItZSsyIU5Ab0VX
aF40bU08bDgpNmhBLW03T0JxQm4xa1R1UVloJipGLypsWylSVjVaPz9kNV4rJS9fOUE4TkRdYk4+
NGs9OS8yRUBxKipTLDlDPElZKTcoVEEyOmlhQlxQQDsKJSslcmJBTlEsLTU9bmgyQ2YpLG1cWWsh
bFdTKmFbSi9bZ11hYGE1Uz88MzxBZVUvSjQtLFY0Lko5YVZBPkRqM0FhaztFSCJKV2ZrXTwzM1dL
USdIZ0spI0AmcVJOY1plPCcnY1gKJVhzL1wzMDlpX0pbViE3OyUhSnM/OmxHQ05tLE86USUyI2gm
UVlNcm9QNCYoLSReWnQ9b2pfQGEwJF5RXDNPWEUnZ1ZtTWhtQGUxLTc9c1REb1lAY1ZePjMxWllx
S11ebnFqQ1QKJTNwMm80RDQhQTxEPjI8W1A9aiYsQTNRXG5lVS5pI2A+WV9lZzZGK3U8bGg0UU9T
OlJmZkloMnNfWC8rMikuRy83ZGpVM3QiYkhSRUpNQmhdbl0vRkNQVmszKER0Ilo/Yz40bHQKJSRF
Q25zQzctZC9HK0BcLzpJVEdvQGtZPXJXc24zNmNbNW80bU0kKDJScyprS0VSNVpXTGo2bktSJCkk
JUZzWGBKSTdxOzxDK2xeMzYvNzhUWjYsLk4xXUEiKiZiMzYwK0RcdFQKJXBecGdlZU8hcldFVFdt
MVp1MWFMbC5pVC1Jb0xLdVQ7LENFL0VXSlYkI1xAKSgxcjUjRmFIUSQ9bl1ZNiJMYm1vMCQ+M0xy
Qi4kKE4/XSpiI21Ba0QsJ2FPNGlBTjx1IkYrV04KJSwhZlxlNjZaRERVJUtbJjVvRidEXS8iQExT
azUjIUdlc0kvKk5TIWxDazRGalMjWGEhXVxATCkhYzVZXi9aQGxKKjNcKGBwJyRmRyhTI0V0Vl9g
OUxfOCtHYldgdEZ0YGI4O1EKJWM3cVJzIWg4XWs/RSEna0BiWDdbZWZzNCxBYFFDKF1laE5gNW9Q
KnBPNCIwVzgnLyEyYmUyXDxCSTI/ITtOWFdyLkNwaVpGYDVodWBNYHFjSGBFOmc7KFI7Om5aRztM
XF8jSSMKJVhfYjMrbWZQZTknR0VIa0BLcy8+NV1gZWFWVEVWQExfYjdLaVFBQjoxaVUwc0BbaVFv
a3Q6KEZROzFKbFNGZGtrUUgkazhrKismVlBkRl89SihpSU1PZExlVFdrSG8yK0BDclkKJUsncDEn
MScxVyRZYEhAWGBMJTo5USdaOU5tXnAham0vYDMwQVtEVCEuWXN0TWFJPkpUN1FTYlUmdHVoK2Zx
MFg/NTtkPGtIUWVVdWBvYnUqZSRBXUBbYixUPD07ZSY2aCo/cT0KJTg1WDwnKitsRVEySnBWOydq
VGZtT0psTV1iU1UzayxBcVZlQFBROGhXalx1UWtXVk84SEI+ZVdZR2U4KCJwZm9wUE8mTEg2Sypg
TlJST1dKYyFoPDVnMEAyVSdyLFRhSWohOE8KJUw6V2MsVE9jSiFEVCRmS2d1P1VSbFFNb0ZcSEk0
Iy5Xa1Reb24sRD9KJVZwJXFyQSFjbU0mOEBuXSFyPTc+TUY9QnEiMUk/JFhGNF0yK1hLQCRoaC5T
YUJyQk9sWl5HRyoqUTkKJVhmRC4jI29rSFclYmtVLyJ1T19WLFtkRys7dHBQLEpsIz1aYCtVX2dG
TCU7YkFvRDZBQXU2OVBsZ19ca0RzRV5FZSJVWTchV3NVW0U0NHNvPE0tTTVxTlRWQU1FOz9QOW49
NF0KJWBWTW0hT2pYOTFPIiRyRCtkYC5RcDJbMC44JWhhQGFAJVUncTVNZWYhdURfMEZZUy9GK08n
XTApO0ZTQkZbdUloOVY6akRbMlVJWVknWjolL2dzLjI/RkhOQCRCLGJSTWlETzkKJUphTydRWlxO
MFtMO1QpSEdeQUpeJWlHMVI+IS1cISZGXjI4akdAIUVgRUckc2U2aE1tWWZgX3VfRGVOZWdJKWVI
NDAkOFtcJltHUygiNm1uOS0jQ2ZyRz8sRjgnT2tAUEhtZiQKJSw3OnVIPi4iQDNuYUM4SSo3UCFU
YChLK3VIMipCWC8nSjE+T0olMDYuSFhiczw2X1FaVmk7aENcMSU6W09GbCspamYwbWttKkhkYGU8
WGtYKytPb2ttSV5qNEpiL2h1OWE3KScKJVBqSFNSPFVYSCtDK2dgQ0AoK0BERUFIW0tSNFRMMTdj
XiExQDFEQTAyQScuS1dIS1YvNWREKz4/XjQvP0JnbEsiODpmI2daOjBiLGhzYTpwSU1laGBKVTxy
QUomIiJnKV9CKisKJUI9c0xZMloqQSk9QFIvXD9tbDJzQVB1Q1ZIKktGJTNhQzUvcEZbdU1PZzZu
LyJSRkVlP007bU0rWzs5WlMtPmdxRy9oVDljYCxQKV4rQSY9MSJOLXQ5bEhyZC4vKm5wNm0/OWUK
JTdaJWRoMyEkMyQ2cj9mJiRiVi9XWkNaKUBjcl9nWm0qdUFdQVxrTU5gTTxobSJrWmZZaWM1PTM7
czYhJC4sTFYiVDwlKFA+ZWlpOFc5c0Q+Kz8uLE9ERCRtJ1heRUdyXWRHTWEKJWA9ck9PIWhjKGwt
QjBEXzhhYCJnMkhJKjBsWyduI0ZKbzVbYldOKiRwJXVaWkk5S1wvLzBGT08pOmVnNEY9JzRZSk1v
Y1lCQm4qaVZoUD1BOCpRLWwhNDxTZj1LQWpuWzVjaEwKJW1RcVhOJS0hRDFbIkk6ZFwyPiQsSCYs
OWNSXmBDR1BRRV1ER1c4cFdXXSpjSUxsTEA7TiwsIWJYSXRzZkUocVV1QktrTz1oW0QvLExUaWBU
OmY+NTQ/J0pFcCRBdVgkIS9dc24KJWRTRyU9bVglRF1cPjEhPjhVcDUzLGpqQF0pKVcwKCIja2JT
Kj8ucnAlVnVnaCRyOCZeO2pzQicqNU1TKlhGMD5hZ1FJdFZYUSJSUStJc2dETUlGb2NYdE1JS1hG
dUs5VjVHUlwKJVsxKXU4L1o4RW5EPUojSjpuZTcxYjMzNzsuWSMjZklNN0E1LiNiKmdSSCZBcUtw
PG9HLUM6cjdjLTxqTiJ0ZltKLGlnZC9cJUlnL1JxOzFbPCpjImZZNUA6dVs/LlstY204WjkKJWIh
WV9JOm80bXFcakpfYWYzJz4yQms+LmE9XyxUU1JhYDpgX2x0ZUNPdSp1KGwlTV1NImpVU25pcVRs
Ii9CXCReR3JuNlpIMy9kNUpWblA9W1wtXzhgYnMtMHA2KT1KZUghLykKJUY9P09aUUQlNzJwaFdi
ZWQnVDp1VE5QYVFiaVVsNyZJIjhuYFkrWUpdb2MjKC5NTnR0XSdJZExfLHAxbUwsS1xpKDZSK1Ar
X1QwT24oJnA5ZiQjX3RNSWc2IUQtXnM7OGQzaDwKJVdaKyNpMEwkQ1xlcExfXk1IQDJRSy9pJGZr
WE1gZGgtPFosMjQ+Wz1nZyZIKU4iOS9YMlEtRHVIJnU/UGkmRmAnXXA1K09fJjs8XVleMm9ZZU0v
YlE8Lm8/IVdqS1AncUI0Y24KJTEuYjU8ZSg9US4xQktyO0ApdCw0TGJJWEcoXVBaZyVTO1BRNyNW
VEknNSMpayZPS29fKU4lMjFlUTE0Z2c0R2wsbm9OJDJnTDx1RS9UIS5XIzd1WlFSaTNsXTpCdEg9
PDRkYlcKJTI5LmBmZUdEY0kkaihqcWFGX3RzM2YhXCsxMjFZV25sdFpsMlFeREAiIkJeUEo8LG0t
XlVoc1VEMjIpZFY0MUUkW0IqXFdKTzdiP1JZMlVEVyhKWWNQXzk6T0lOSnBJTDRrZD4KJTI0LE1z
NWRiQnVHVXRNYE0icy9NK2shbl1YR0VbaGQyWitCLzVPU0QqWmguP1FJWUJBZldFUkZrUlM/dSct
ZlZWckw2UVxIKUZXTFozWThVJEF0Uj42TkRxWGU/PCdWLVRKQV8KJSopTiZqT3EsM1ZQbCQiZDMy
LiNmUjBOZzhqRGh1cCNhVmtoQjYubzBdb1BYcVRVX0lsSGNeaC9Dbls0OFRiQkNbNm9gUSFndUg+
NkBoQlMtS2ghTEokQj8+RU9GYjFhQCQ5YycKJWorJzdNSk0nVzYqLkpAYVR1LzU8R01sclBNL0k5
Nl8vYm8hLVZvakVXNSJhNShyMVFnYTA6KGtmbjlZXFRcW2FcM29FLVdBcCNmSlYjSF4qK3B0bjos
TyEwR187Ni1OPCZqbmkKJUIuZTRMPiI0R2hwIkEpJ3JEbm5DbG4xcS5mYVNpNVVna0RlOlFGZkQx
bCs1TVEpU2loVis7MT9xME5QRVtZXEQsYFAkJEA2IXU7ZEBJRiMnSTQjdT5XJmA+Q1JYUD9pN2hj
M14KJSIjX0A4RlNFTz9lbCVTWzxcMGFTcF4lbWokRGRdMlkkakd1PFUvW1g/Sj5oSCJFWlYiLD5c
TyZCK21rQSRrRDNoPjNFdVAiXSU/JTZXUlFwLGIoKTRQQ1hSVk5EN1xwIUBROT4KJWk4YkYjRmg/
UCk7TnIsUUZoQiE5ZUAxO181YjEwRCJfJjA9ajkpKGtUaU1dIkljSmcvTVs3UGJLYWc+LSduYSI3
bz4wNFJtMDpKVkRmdTNtaXExaENROTRyaStGSUBFPDZiUUMKJWhedCxXX1xaQXVoTTdwVC8zJnRE
ISJoWzUrZFJNQU0wczoxUTIqXiM+QyopKjdFTDdqLSItcFY7amcodTNELjtKX1dkS1JPNGsvXVtd
cTpMQm5kKV9CS2tIRnE1bkwzWmMwPk4KJVVubjBIcHAuTWdLIl4nK0gkKUI1VlQ1ZCYwLVEpazNt
ciFtPV1fKlFWYi8vOlMqWTlDYDEsMVxVW25MRUdoZ15cYXE/RzgzJnQzUzdtVVhlZlAxNmdaXERM
RjpwWV5dO0lXP2kKJU4hQUktWENGLC1BUztRVWVoVUNvYDBcIlVQXlZbNlA0ImlqNjpfbjtQNjc8
KV5sKltiQ0onJDw5KWJTJiE9K2EoJmBuWitBWl5nV2RFTD9zbjA3VDEqXzthQWdcNzpVai9DKVgK
JUNoOF84NUxGQXElPEhEXyE7MUV0YCw9WSc9ZmxGSS47RyhKPFMnMSViVmRJMDFTO1UoQjBROE9N
Z0I2OTA+JkozPmt1YCVGUWU+cyppWklgJ1ZfcFpkLmJycyJjWG5YLEJdUToKJT89NF1vTFMxbl0z
KTNEJl9rLmQlRSZgU1cqbU4oU2crQCFTOGFtY1tSODY5Oig4aDJ1T0daNVZFYXJSOUFpJ1UqZz9e
MF1WNEslR283WUksIi5RdHAiczY/XjZdQFFFL001S0sKJSNUV2I9aGI9I2psZCojRWdQPWxtQUxr
Il4sYWFHNzNcKzJyYVZlIkMwQk42QD4wWFB1P0cvJTA8dUVSSU8kalpyTE0sWHVWTCJSIis/V3Fx
PC9UKFdyWzRVK1Jfbj1QQFdiSG0KJTNARFEiMzhOW1ljUkNlVCddODhAXjoqL0BiZFw+ImRjS3Nq
T2M2XUpKMmktTWYyVF5sITJWJmc2ckllRS4tdWlwXlUkYlYtR2A4SENSTT10Vm9XaiYmVUMrWVhW
SlolUD4hYzoKJSwyWzg8IlYrWyosP1pIbiVcPG0iLEdnK14hU2xvNGlLZ0ppPDRgZmxBXnIyYDlJ
SUQhZ0M5Sjs+NnVpPldLLzcnLDNtSTRXLjwrWWJva0Q7Xm4qRlwzWVoxXl8rLyRrIkUlNW4KJSs7
ZUhxZiMsMk1OWChtTUk7R0VPb2VgJWtAIWdRNEBrZ2ItXj1abVQ1PTxRSydcMVtbaltbcSxpP0hD
bCRiZktRP0s3IzA6ZXNRc0NTSyUzSE0iIzteL0JpaVRSREw1W3FoZkkKJVEpIy8nQ0JDWS5lLm5J
XmlTJ1BaJS9uaTw4Wy9uNDFZSkEjaTlYSXVEUXFDSEhrI0dnV3VkPGljLDQocFgtX1IvIXNaP11l
XjFnPDcpNWJARjlpc1pudVcjZ1k6Plw7QGpCOW8KJW5oYUxxU1ojVFRdT08iVVRXTyhNS09wUHNu
WEtxMCpAPzxuQ3JnYnNDTGI9X00/c2pRJ2JxWD46OGlWZSwpSEVcZyxhMGVBOyIkbl9RT2dYN00k
Zz8nOCgiSjBmbDcqQTA/ayIKJS9TVFVeRTMhZDZNSzksSFlSI0FSYV4lQUhYRmsuN2YiWEpoYSko
dGQiZjhqJzQ6aThoOVUpaVMnVDRbXU1bPXVKIz1CUkdjI050KjBuUCcwKS9pLjYkWF1mMSk0QixS
T2xUV1cKJVBeQElHVWlxW29ZNyctTzltWlZXXEsqQysiPzRxclo/M1otOV0mM3A9JkdlPyFNazFm
RyFDJ0NuS15GQyMjaC8iRUJQXj81aWJKNiJCMzA6b000ZFVtVko6bVMnMTcyOTVua2gKJVlhOHM3
LzFkRTo+TTtRUGxdPGcnISU/TUlpYWA0RENEazkiND1VbjMsZnNZUEAiQ3FWcys2PmAoXShLW21V
bzJMLjI8LVIlZ0FnUmRsTGQrRWphJVBsVTJYajVHTVdbZChjWkYKJXBFaCpgKCwqcGtWVFImbFRq
aSxQYmRvcmZaPi89SkMtQ2lMMG8lMHA4PmNYLmg1JzIncTVCSnRNOyVTSydcK1tKKnQhPjRxKGA/
JEhPPihQcGkvMjQpcj8qYFJnbGY+NURFXygKJWJgNVM+UllqSUc7NDQ3V2kkOTVZLS9WNWIjMWNi
cWBGJFRvL1dBLydWO1dyNTdBP2REQUlOKiJHWW0wUSc8XWhcKFFMJFAjLDZjLE9ka29HT0cvdWUh
UkBFLTddLk0wZDoiPmgKJWtAL3BqJUxrRWxvdV8xLkU6IzJuYCFtNyhjRitibnA9RnE7LytRVFIy
TXFwSF0+UF9vaG8zU0xsNlsvZ2hwOytjU19rXUMoWV9Ka3IsIT47S2FDXFMtM29yT0c+WS9RVUo0
I2gKJVtocjNMQT8uKytCampPQm8yRHRoKW5sST8qSExOcSgiOCVKMnA0TjJML1NBX1koPGtCa21G
ZnVoUSZWQEwrLzhnIi5EdD1ecCNLWlg9SCJFNnJsZHNHaz5gPSEtLV4lTGlFcVYKJVhgQTxAOFY+
REZabUBCQ0phcG1DWig2VHUzVmg3XydQbDpfY2k/NlRuWlFzLDFoJjojRT0qLFdAO1MlWDkvWyw9
Y3JCSzhDbUUjaU8kbXQhSVs8QUQ0Pi8ubSczWmBxUCEvPj4KJW9VaHQwYmstXSluMmJDZF0sPlRM
YC0qVUtQYFtMTDZucyJaOilxdC5LYjpjcTJZYm4+LEFTVjtjVGQ4NFEvc3BWIj1CUGsrc1pKQ0pb
bjxPZWtdLjZSN2JPXDg7NV1kWW0haSgKJT5IRU9MZnFlJEQ7U2ZlZl5EU0NpQHVxcz1nNWRGJG5U
QDdyalVBOmMtRkpdZmNXImoqZShgZW87amlUM2lwN2lKKTFgU1QkSzZWci4+ZlUhYSpvT0thcER0
YiFRQCIoPFpZXVMKJWgrLm5EUFBQKF9BPWg8R09ZdUViYGJvN1dYOiNSYFhOI2soZGBHZS8/WUtT
NWdSK0BmWlNgWUgpPHFKUmZTVzhba3Uwa0xhRSw+c3IyInJYJDtGJCtqIm5zYkUubHRRQEFcOG4K
JUopRixFQnBZdSFQPjtcLzokRiJlOERgVEdqTWtjPFoqdDZ0YC1xLmMrOUZVLmxFRUQ0cTBMXnNe
PUIySEBLYnIiM2QnSC1cLXByNyZXUTglcF4+YysrY2U+OiovPUMwUilzU10KJTRiYUJlT0BzbFg/
JXM/RE9OaSZaSUtoKEZPQTElMlNxXGZIT08mNzdeJzZeUU9BVUA5aGhGaFxKQ0FsKHMrWDUrSjVM
XSYrLDxQT09PSlU5NT1cdUNPQiRePUAjJlJjT09rTG8KJV5kR2hcYlVlZkhrVE89M19KRzkobjAp
LiZAYi9SRnIwWFs7QEo1SSshMjNqZVRJYic0KzouV0k2PWtkRyJYLWk9I2RrRE0zWGk5KjZfcyxd
LlJBYGYmW0F1Oi80OihiPFdxXCcKJShGPiM6MideVyZcOCQjcmFgYEA2Kl1HWzQ6Y0lJMDBRTCM4
JCdcKExOS1tIUyx1ZFZHJ05RaGkvPjZyUyxrLGolSnBDRlckLldVKThpMzVWOXJNdU9kRD8zNEYk
YlNPTV0wbzEKJVxMNW8lJDImZm1sKENSVmtjLilkLWw3UlU3b3FqTDgzLzNIPU0hXU5MIl04aSlH
Xy8sVyl1JjBlPG1pMjsyKl8oVHV0blMmTSRpLCQ/YGQwUlR0dSY9L2JfNWQwJ2pNX1pIWz4KJU1e
bUNLXyg9KHNdQkd1QylKWXUiZy1xRF08LGk8PWhRR1JPSlc9ITIzI0BNJWBCJjE5aW5bVFQwMyRA
LTIiclghSGwiQkw1a1hUPyYvX0RSTDpXIVJOUFw2MzBbcjd1QkkmPVkKJU1OUkpoN0FdT09rU1Mm
IUwsdEc1OiEkNFsxL3Q3XjxAcFQ6YkRURDVXMmdkY0UqaCRUMD9UbDsjJigoUylKWTJNLlxtbSJk
WWtcTlcuUSFbVk09QjlUYWtpaGZjJG0mTCx1T1QKJWMuR2oiMkEyYUQyLz5SLWdLNGRWVTcqI05t
IlNdVihaIXE/Ikg+aEA9R2tmcUsoMmAlMmdGdThAXFMpczRGK2NVYyVWOS01aFxzNzhQMSk+QDs0
VWJcbEpZSGZyTCEkVTsqU1QKJXJhIXVpXldaVmZeXGp1MUZQaD9kVC5sJWpPY1dqaEBBOkQ1PFda
bVZLYHVPK2JmJjsqTGtRTjVGW0JcOWE4dVxkNlFyciZfRicyMDkqKU5kX0xpMVU7XiZkPF9TY2Rh
PjlVY0kKJV8/R2oyQGovVlJfRjlBckNFXklbX00sVmtrIUZmLmA1VzFRSFNOUzBfP1okdUstQTsp
YiF1bE1NYDZyM2AvJ0FvUDlKKylBN1UjVzxuV3RYMDMtSy1cIzcqXVw4Yiwucm9uV2kKJWxAbUQi
bm00XDxtJDE5YTEyZDhUbVkvdCs7MUNybm46YVljaT8jYj1QVjghWCRpO3JZOFZfXVlPRWhqZVpu
SGdwJGhsX25NKzw2aU9GJiUyZTFaNENPU14qKWo9Y0pnT0Y4NSIKJW9IUWoqTlZzc2EiQFNbW1oy
UiJlPCxiXGxcMHJJJj1wWiFOS2khQ0Y8LDh1bDQrbydwVGpHWis+TjJBRSZMXltiZTFHUTEobmtQ
MlhLTjRORUoxPjRUdSc8dSJxcDBUPlZub1oKJTRkJlhQakokWi0kKTMqcS4yUzg3Om0hJmVEUmxe
WSJGO2JxRFRrT1otSVNYcVEidDdVMmg7bEJXMS5rWWZFcTRxIXAtI2NiImosYydJWDJsPEpDUURV
LzhKY1UycSNgLCksO1IKJShTckZXTXN0U08uTVxfVjtIVidXXUdpPz86SDQ4cyZBVjErXnAmXV43
bENTJUcqOGwkMCJlOGRtPEplWy9rNmdSSjlZZDpmMFQ7LFg0bCZsUjQ/XmokTUNQJ0FaVWZBPGlF
SEwKJTs8QTNDXUwnPDhNYCtcOSw1Ly91KCtQTmUpSjY4MGZnUWMxZTUtUEdlbycyWFRvKidNcF9L
bUlLZlg1amMrL2ByPylWOlg7OCE+UG5qNHFBTXNhb0VkMi9oWT8nUEA1T2hhZGYKJW49aSRpOygy
cUgkbVkvVzphW3R1PC9lZDRNYF5UaTllTWMxcSJZKzdRPl5mKnA9R1JoZGNDaWBNSCRKKyRVOylx
PWEvdU07ISU3UiVlSEViR3VOQ1RvZionQGM7PFtbPCE5bjUKJUxQQUVyT1gwMGteJT5VbzFmUWxW
V2kmPFFYNyhxSVRsdDRRbnA4XTklQ0tcNVNfOG9GbTdrOmY/ME84c2A0YkJ0P0Y0a1g8U2lYY21t
WlYmZS01aCgqdXE5aExPLW5ERyZhSSUKJSoiIWxzWz9FdEdqXFYhdWRWXVA7U2pQczFYKlhFXV9y
MyYnbWpCUSlgLzwjZjpuZG11TCpXVVhXWyVqQTU7MyhPRkJWZDpNVEsyXydxZiM/OE9VV3FhMVo2
bT9vdCNwWXNJV2kKJXFBRzMpPSxIcjVCP3RUMW9PNSFjTlFgbmEmRDRvXCk6YShTOzlVPUBYWyRr
IzM4LEApVEoyZDI8YTQuWFdDTUgsNmYkR2o8RyxpU2ElKXBPYCgyJkBHM0lnRWUiUWBwUzR0LCEK
JSxyOj5HJE09cSRCWWdXNWVsZE1AR211JT1RODgrbFAqOCRmKUJbQktNPUhoY15rQ28vUD9QTCMw
TylvJkowNz1TN11dPD5AJUZTWmkqXjtLInB0WSFrXllMP19RJzYwYyUlSiYKJUtNXXJqZDFZQlBP
SGorNE5pWnRIJ0AvXk8/aUMwajcoWDM1P3BuVk1GVWNqMU9SPjBfV01NI2JOYk1KIURoSFA7KSYx
bDFkRTd1S00qWzhfZjpYZTksPWRoXCY8PC9rKylTclgKJVdqJmQpQTtPWy1cbWVSMzk3RjxQRDBA
V3A+Kl5wYTJkaW4wVHEubS5rbydjLjBbSlJuNidjW0BvbGQsQG0qZmUuQXFDaShLNFxMTVZCIUYz
ckBFVj5lT0NVVSQwYjExZGQoMEQKJWpbbnJJQT1hYGM2bVlKR2k4Nlxfak1hPG4oPnJGNDcpRE9c
UFxNWDslKi4iJXBbQU5qLz81Ryg+WjxhRmcvOktdUGY7bzU+QThqYGdJbWFKUElpJj1dLC0oTEJG
UCRdWDdNLkEKJWxQWyJQOUw9RygsWydZKUhwdUtoOy4lVURjJWJfSWNKWW9MXVBUMDNfU29jLyIi
USRfTyw9NW81Sy1nN1w/ZkFkPCY4OHBsTTtGXztpS1hgMyE2a10jTl91XWFZMGJaJCxTYz4KJS08
XFJgQDJNLEVdaC9nXXJkYS4pWjZzLGYhKGI7aWJvZG0kZm80Jm9kdVUvKW1RVjhKMSZHOT4jXUBi
Mzo7JFo/ZGUuIUJkT0AoJ1swUkBARVdBViNtU1lCLXE3TTtaVlotLUgKJU9UXG1ETHVBJVxbMk1o
NU5SIWptMDQpPFlbTFkncGY3RE9fODBdIjVkZUhgOFh0TSNtKE5TWCo5RmApNlRabFB0JDdzQjUu
aG5kKU48RyhRbT5YX0c7LyRRSzBXNWglYi91JEIKJTFwP1lJM05xJTppYVkhRTZkMyFULFg3RGFu
YDRVTDFidTMnQDsuLHRUNz4vISszMUJsTyhYS0I+JFJVYi51NlNyUWgyKGRBWUM2QD0+UEkoJChB
dVo9UEFFYj8kIyMyQS1YSj4KJV1RQD4qKGJWbWdmbzolc1RAYm0hYGVzX0NBIi1ZSXFOUTlpQ0Zy
dGZHLWFSNEFeI0wyOUhMayw5JzE1PiY+V15JT01dKTlrayY7P2olNCskO2U/UkhRc2krcU4wP1ho
YzhdciUKJTBlLVohQk4rS0ttKmRkTT5iXTB1cmomKmBMVzdjMjQxPj5aNmhaYWlxVkY5WEQpK25q
IzxvVClic20+WV08dC02QU1LNy9ZX2FLSyZhRD1LQ2djSk4jVFJfUC1RLHInITVXRikKJWRCMERG
OFdxb1dQKnVaakNIO29kYEN1aVBsbiVVKUxiSjFhJi8+cDZXLmpAJDdaPWNURSRlQUpNVVBPOCxc
Rm06K0VdYldZT3VKVSRmVlcyYGA6LD8oRyJHPjtFU0hUU0VuVVMKJSlJXW1CIU5eSDg2ODsxYT5L
Yz9tZmQtbnI+NDBwUGlqLEVsXDppPW0vWGFOZmFVTkYmRW9aWFVGdUBwLz04XC5eVzJJKVglP0A5
ZyE5VTYmTnFkVkpSMlpcbjZoOkU+ZzxxQDsKJW8zaChtaDNgcFQzUyQ9YzhoSUMmb1wnO144TDdV
aFFjcjIpWmFRTHNtL14nYF1qIixkYjk+SzoyRzlxdE9bLStIZnVCcGZGb1gmJE88Ql9ARFdub0lC
JF1RdW1ZU05eXmw+Y20KJU1HX1VFL3NMLkZWaXA4NDJbPVs5Pz5hXyltNjJxM1pPKWVbOTwkRk89
RC82ZlRoSHJXSydCQWhyLT49QSNsPFdkM24tKkRLLz01NFNjVGlKW18/bDstUjxDZDxCckxMOiYs
ZXAKJVw5J1o3UF9GTmErVDJkXzhSbVJoPmBzNFY2IzxhPlZyJmUxXSVhXUQlKmJOI0ZALFBAMEUy
Lmg1YDhVMSNNQXMpZlNSUFs/QnJNNmdpUFwhLz9RR2JHWixUbVo5O11wZC00NEwKJV5waTNOSiNC
I1NTMERbbyxkWS9KUnIqYEQkamw6OHJhWUNDViZrLjFMNWlDTj0ja10wakxaOlVXQ1ImcSZidUlQ
QW4/KzNoM2FSUTByTm1kZyhVUzgyYFBhXkxdV2hUaTNuRV4KJUA2KDFZQXBPbVpbIixjRkU+Wych
UzhiL0IrZypUNT9JZy1yO2BeRGw5PkZ1L0BQSkxDS0xgRWhGTjs8LiE3OkQ6N09SaUQyU0poc3I7
TDdrK0QiXWxfR1dBSC1kNVEvbEouK28KJTlYQnFMVWFNXU4yZVd1RTZJN3NoIycoQ21QKGZRImct
RzljMzdpWnRBWipoUUM5VVRSOU5eLyhiVEsvSixcXk9URFI6SElAWTg/akE/SzBVWT4lZUcqW2x0
bnJmZyZEYiNqVG4KJS91VDdjaXBTUCg1InU5QSNiPz8+Ik9HVVQ0Vl1UMjVzZDxlbWptJ2g1JDQj
czlwM3InOl9UQ1tGUjVrcmFaJkZDO01xImkwTmZgYmNOUS5eMF0rN19HP3Q9ImQ1cEwnQD5eMT4K
JVplczorMVZaNTdnIkJRLipCZEJPUGEmUmc0MHRsZT4wMSM3S3BLb0cubkw4UmBtIUVPT0NCOWo9
a2FYaEwwMXA1V0YwMWxBbihrRjklZUpTMEY8azk8YEosP1tIJVFOXDtuQk4KJXJvOEYoWCJcTXQ0
XkZ1ZC1IZlBKIXVHWE47KVNUJCNVYS83Oy9ccUItX2ZZXlRKOTBSJ18kN1VUODQoLWVTUjtPMUda
R09ORC4vLSY/a0ZILlkxLEtLNCwjcTJGNiRGbFcwMTEKJWJKNjJxXEprZj9wa2pON2BqXTw+LUNX
YitxPygkcUlVSFIpckUuOzM6TWs7VWdwZWQ9Z2cjVF1QS1RzPkNrPChZJ2pfX0VNLzJtalphbF9f
Mj9oU0JxIkx0Z2A2Zz5kWnAlS2AKJVEyc2tbRHIzOCRkMCghYC9bb0lOZ2NrQStZaURoSlMnSGYp
PVczOERXNzIiYiZNKC1lakI/NkAnZkhGdVouMlEnJFI0NDxPZXQ0UWw6WCwnRmdzSl8nSGNabkxR
UDBFLDBwc0sKJTUjT2dVY0NTNFppTy1xZyljST5LJ2lVQSZbSCdNRzxEYWdmPEFaU1xdKCEtMm5i
OGZWPS9mPTZOKyJRaVA9IiUsP21aQD9XQj1MLig0XC5PMGFjKUZZXEVIQm9AQUpzaCgta3MKJWlP
TFhgW106OjtvY1J0XzZFczxMPVMxZz5QW05mMmNZUFBlYiJdbTpCa0lCNFouTz1UOVFFUiZmRjxe
LCc2WXJbWGprL3Upb3JZdFhYLVJXOjRoUUddLyVJKjRyL2ZHKWhFdSgKJS5gOTU7IUszKzxPWDRE
TUl1Nk1XQlt1c1QpWEhVRF5XRT4rJUdrKWIsTypgXFZEWE5ZJUFbPmEsLWRpN0NiLl9yT2Y/I2Jn
NG5VclRob1ZUV1owK0w/RE1tI1pHKiI0bjNubycKJTlpYTY8KzIxOFxMVStwbiE6aVEwcScpbVs4
aGBiVWMoVkxjLy06NTNONHQqQExHSUBMWGRYYS9JMFRTKi9HaCFLRHEyKiMya1hRRilCU1ItWyI2
SE06UloubzhRLGhvYSt0PVQKJTxSRW9STy8qVnM3NzdeYiJpWS8oUilqTiZdSHIlOyJYNClHXVZl
P2dYSE5cNk81KShYZkcjP1dyUSouPVUrVG1uOEdfRV1OT2pWbyVDMGJfREcoZTUmS1s4TVA+WVRa
YVAybGMKJSwuVU1mMyw9Ti9lQzlXbGtzNGhEKjxBS29oaE5tJmtJJWo8PjtuZUtqJXVcP1tyKTg9
QW41Qk0yPTZqcSUpQS1tTGslPSYwZEg7PFxvWU5tVW9dRm0nNjMwJ1o4LzFKMmxrVigKJSI1UW9u
QmhVNFRHY087VEtHbnJwR20hN1I4bC5uRz9UIy5OYkshbmNtXFdIYFlVRiRwO1FfbUhrMDIiUj1m
LDc4PnNmV2kmYzcuLXIlK2lCZzRESCNUbVVWUCx0TEcpWi9PazsKJUpwUkZzXyxvP11iSDFsJ25C
IWRuMkZuYj9AKUxWUUMvQ0pTWi9KODk4PUEtSiU1X08yIjttJ108KEUnRl5aTnFcUzZNQU9dNENK
YVhDJT46XV1LakVdXTE7Kikua1tnM1VgN1sKJVYnW2poZCJncVgnKE1RVmNYaTtYImtiT1VTLGIh
OjMnTmQkLFJCN2pTIW5Aa3MkQUdHJVhgO2YxLDYoaFFSTEBhX2VLTG1Tck9BWklUV2cqWkdDNUVc
UFYkU0lbOUQxWF0taGIKJWZmQDpmPzhsIl1mbzQjKk0zMXVeUVY7IllvYUxHZWYzQmVGUSdtbkpW
L0ZNb0ExWzlfbiNQQXBXRT4hIkBsNWFRajpnWmVuNHVvalVWKEA0U1lOYlFNOjwxaVJHT1VwXzly
QVkKJU5vSzdvSmIsP1kuNXVkP1gvTmBCNShQWFJPX19jVSdKb0xlM0tbOmtnPGlUQkhcTFpIVnVM
a0ooJW1EJSZRPlhhTVVpanJVPiE/Skg3IT0wWW51WWVXV2ZpdElcPCxRVHQ3cjIKJWdSKkAiaiMx
WVcpSTU/aFgxMmFFbiVINnJTVWAxZWxSdSopMlBbPEYkN1snIUFPbS90JkwpdWM4M2ZtQS4xZGsr
SFlEcFUvRFpnVTYkakxiN1o+LSFsKlhQVTA3dUIlcTAiQSEKJT44ZjQhWGs5dFFSVjVacmY8cUcz
bEZMRnBkRThbPGkwLDZmYER0TV5AS0JpQSxDInBUUkI+I20xVDtKJCQqQGtqXEdhJGY8YV4uNSFN
Q0dVaFxvRyEpVTdnYGJSQjpUOGcxZlkKJTEpOWwrRGwhM25LZ2hQQjRlcyxVI1toPDxSLkEicEUh
SSJ0RDtUIWFITz4lU2U9PSQlWzV0Lio9SDVNZSRcLytGO29nOidYO2FKRlddLyhXTkMldTVSXDEu
VU9VL25YLW1MajYKJVJiXSJrQUdIOVRoNnQtYjlGSXNManNLbXA0LT9vck1qJUwxPEBUSl85SyY7
TGtmdSJTaXUwWElncWBhUUQtNitvKy5WKVRmN0xnaDdcQ0NkQmFwaTcuX3FNMU9Ma2o5YWZGOzcK
JSRjampKZFAvL2hCL1pRREQvS0h0ZGlTTSVQaFAqUmI3ZCxWVUZeRy1nMlEiMDBqNUpkbVY1c2ZS
L3VwbzFNWCUnO2xOWlI1V09dM1glPFVWVyhtcCpKNWZMO3A1VTVsKUpjYFkKJWRJSjhGJj5bZnFU
cyQkazlwaihyV2RaO0ghQV49TylDSDgoOTo+VjJkZkMiLFA5PG8rJysjYyYtR15yZD5mZEdzQilL
IyRfZXUtcl5YKkMkNWM4UmcoZik1Z0oxQilvVyFUVHQKJUYmVj0vQkZJP3FEJFZRN0xdaS5sJW9A
SGpiU21MUUMqYHVjTSdJIzY7OFgzUzhFQ1kkMCMoXi80MTstISolZSpfLEJcTCYxIy0uVidQTFsv
OTY6ZTRBOV49RlRpbGhIQmBybV0KJSwwVU07UEYtJXNaYWhWcjRLQjU4RWhzYWAlIyxkPEcoV0tB
KnVUXSpiUlZeT1dxJz84KF1aNmknUFVcVyckc0ZXKV1SRFFNK0FiYm9IYEVWTXVlYD5XSDdKdVRQ
RD5kRydJLigKJUJLKi5cVzNJMjk3Y1EhV29MXGgvMmgzXjlKQmtKVWo9c2AsRVpoIXQ8NVAjU1t1
VzUyIUdMIlxYRl5SMWlrUElwK1ZHJmVRaTw1dUkoVyssP0wtbi5McCJdZlEpYCIpViQ5OUMKJSUz
OnUlPGg3YEBbO00hXDdPXkkuT08qQTMqcydjcWc0JSpEUHVvcVFcPy1oQmshZEZzKGNVOU1pV0FR
OGFNbyFeZlArWUJdY1FnPjtcNk1OKEtWaSc5MHJtR2lwOz5ZOzwxJnQKJWNJVTVsJFFaYThaaE5T
dExAL1BqZCFuPDVJa190KkpgW1IsO29Pbm9kdUBeMW9LVDInKikqS1xobGJFXlM/VW5KcCgnamFB
WFs2JWs3bCosX1w3KyEhNXFyQXEyMTghMVckZCoKJSEtdFtiS24wY1I9Zmc4VWJNJTJeb1xVSlNr
QVN0aypjO0AoPktAPW0rWlJXJTZRYlY+I1koLixlR21KRzkiO1xcZlZsP3UtaFdcQS5JdWk2KHFy
PytTZk41X2JYN1oicSVJP3MKJTVBMTpRRChSQyFcSkZPPE1DR1RbbF1GI2M/QzVWSD9CLnUybUVR
SGdBTGRLX19gJ0staVNzO0hsImxUZUtoSFxwVFNKWElrcjpTQnAkO25VaUslMis8PztMSSlePGlD
VThhKiUKJVoxX0BISG1oRUAoL0NGV3JTI0Q9UUowOkc2aGkjZDdoZTgnOEtyaD5CYWlOSGtrSFFz
QyhuP0lTRix0QmhAWl5Gais7YzpFIlo1QyJmXiYuJzVHS2pbUmlhcHEnVGAoOkopPjIKJVtTcTE2
RkAhUEFuMEJYcFBqN25kX1hBXVxUZWJJWSZWZUh0a3E8YmQ5L0B1REAnSi8oQDFvNSJAOmAmMF9A
QjMlIkNOcCVXTFsqQElWXixBSVI/UC8mLzUvcmllI0AwK1dUckkKJU5yVHBjY2Qob1pGb2k1UylP
QU1hRihzQS1VOVEwUUwhRCFrcS9TbHArRC5TLmlDbyVyNVMtdTFQb2lBJVgtNkkqRW5rL3EkNlku
KS5QUEsxOnV1JHVVdUBSWHA1QG9zPUdAKGgKJSg0UytycylwQU1yY1t0XTY9MkB0Ris6U15MSEc9
YD8ya0dqSkI1XVdlNERhLlc4YFRZRlNfW0pFJWtMI1MwKi00V2kzK2VNN2FCRTFfLGMzcW1YbCZu
MTlTcW1CPl5NZ0YwZ2AKJXBuJzZZX2AibWkkRGBpRTFJW0ZMcGU0OW9rNHMqZTVjKlRDZURnI2Ml
WjNjOiZxaiNjNCE2OSpYUVgrUChyUiMjX1QoZ01oKSRoSVxOQUZKO1YwW2srUDVjaVVlXippaTkm
Yl0KJV9sYy4vby48TUQiPmpRTm8raVtkTzBtN3QzLXEjb1wrO2FHKitpa1lNNlopX0NHKis6U1Ns
MickLFxQVDNhWVxaVGIsYzJdT1FvcUU5SltCVE4nY0kmX2ZQRkBbL1lNOm9lPiEKJWBpTTR1cWdB
YkwiQzhEL1VbQFpeP1E9XldKVSwtYF4xLypMJ11dXVc4Tl8ubW8oTl48QmVEXyI3TFFlZEhab3Rv
OiQsXHVWUU8haDFFQEUrNENUUzwhLj4rPiMpPDM4bC9UbW4KJSdLI3Fza01kTXArNz0nbFpUaGsu
KWY1ZzNcTSo/XjA8TXNAVlZtZEZpZzQ5OzstMz11SXNLRSs0RF1uKGxebzYqPXFAKy4/UTZLW2Mv
LUBIbjxYcF1AQ3U/PjEmW3IqSUciJTwKJVpkPmpUNm1FIltNLFsnXzM9VEdvYlwiITBQZjcuaT5Q
PT1TKSpuVmFsLTw9KyJLVXFyMmxIczdmLVtsJjc9WyRXT1Fka05IXjBSV2pSdEAhUicjJ29GSE9H
VDpQbVNtJDxhVXEKJTkhTzhvVlpwRkpKOUtbSD1KKSNuWSwnaT1gaWpPQjAjPTc/PHM0Q0lwSlhj
R2snS0BmQzVeQGhDYjNdNFVuITRVOjVaVCdhY1FBYDtXdEVyOjJyIVtiJD1zUVBZO0w3VicyVGMK
JUc+WlorZjBYLENUU05JVnI6dF1TcXElSC1gKW1uRVJ1QDB1N00rciZEaz9rU2JcMj0jQih0QStX
ZiVKRjQ1OGNlO2dATnVHc14wX0IjI1xQJUEvK2ZeWiIzU1IqPzMhKTwhaEoKJWJjI24tMktPVHVK
aXVAZEY5LXE+SC4kUCMpMyMtSTpmJygjLGZGQm5bO1t1LFdPPUFlUD8ucUspPTxRM0szJklXRjJR
OzEqOztYYWJJQFllNlMqSjQpU1tLP0s9TDRbVHRObTkKJStoaTYrJXUibz4iPzsjQy9pWiNYSEFb
KixpRmR1RHFZNj1aQ1QtUzxDOz5STVosUTloTDhUWVBVdWtCMEU4WUQqK11oXTg+IUYiNVdMUkos
a1dqLG0/UGYuVG1AX1MsMmozVSwKJSY/OGpsOWY8KTheJV5qcCwySkI9bXQySTpGU0E/LmVbSj5B
KCRdQnJEST9fKUdBVUw8TV9tZGVFNERYVGxsaTNjNmVqNT0+VV41LiZET2xGRUhILSdbdVdVPS9k
QmRRX3FZXiUKJW1pajdFOkJTLyg1WUc+P2gsVSQvQDhqMlpHJVl1XEpeTTxBJTdJbCdUbzZkNixS
YzVCLCFOSmlvb2k3OlRTXks0Mzc/KE0hPVZIYC5sRCk+RC9sc19JXThoQFQpNyQuT3BORUkKJWtT
LF5EZDd0RT1yLi00NFE1PllEO1swSlNFVC1KbllqOSE7Nk0tbChGVSNINE0qPkBGSGUwNl1WOTBi
bFRLbCFnTCUoczZrR1k5RU8sNS1zV1hRRC8yUE4wTlw8L2lrXj4kYFwKJS9xL0hHRTknU3Nkak0p
NWoxL1s3XkklRl07ZlA6ck1EXC0mI3JBazQqUyRHZGVJQUVgWyZnN11MRlM2XSIqWWVkcjk5MiMi
QmdUZ3AtaVshLzdMP14/XFBNV2gqOyNrV2hoQ0YKJSxZXWpyRCYwMHMyUDFLLkU4cFRjYnIlVm04
VlZZXG9YX0hMO25Yaj08aHQ5bz5SQ2BtV2klU2w/YDtZVFIxKGtsckNzIWlWKCNdRjBVNTpoTTQq
cFZPLlIlTnEyRGwhK0YjRlMKJTtfKyNmPFlOKjxNXSZwI187XHI8VCNZISglYCJaPDBfVlcvLU44
MkdHbCQpOWJ1bm9CYj4mNCo/NGFAR3JgJ0QrX1N0XF1JZVU0KmxVK1xKZEg9VWwoR0tIalRPcjop
P2xeVCgKJUNXa3EmNXFtR2NEMmQ4OGhWUCMnUEYqZnBOOjooWmVmOUVeKjxjJHUtX280WHJUUD1y
LEk4SixwMmRLXFU+LTEkXnVaUm1xSXI1Sj9QKEshS3NDYTxwdXNCckhoLm5UKVUkISUKJURvVSom
SkpuKjlLS3IuVGJyNHIwVmFfWEUsNXB1YS1ROXNGJDwtdDlwZSRtLEFTI146YHNOVyZbKmxGRT5X
MXJwLWNbJ2EwKURGOC5sL2soXysyNUZQdUhgaERRXigzSUhzKlwKJTVicyQ8LWZVPEJFOWZkdWoj
TmchXStGOCcoQCRfOWZRb09SJlAhWWglTFMwJVVKP2hgVGg2TlVrImJuTWRxTmtUayFSJFZpYGQm
M2ByJmNpS0lxSkZQMWMwdDdbQzk1OiJcRTgKJV5MS1ouU0xaOXBCMkVUMU0wUTlzXWtIUUpaNWQ5
NW06Q0FAUnEpQHEvP2t1YyU6TCFFPjw1Z25bODdebDg9dGM1ZlZzJWRpKjwrc1VQay9iKC5jXTAh
a2lIQWVJOHEiZSRSdWAKJSlMXV5ZNV88RGM8RXNKIUQ2NV0vLF1QWjxxI2RmY1B0XSRvJSxbSkcz
SF9vL11NSVNvOWA1dDtxWyUnNFdPPTw2YlFmKkAuPFFXb2csQzpbNSVUXSwlSXEsNS9GXiVDViRc
VD0KJTgkXEtVOk4+Pm03OG05Zj5OZVFQaiNmVyRxNlxwM0pQPjF0Qjdic0hbK0owcXIzWCwkUklk
L1chbEA9TElzKG4xRVNlZ0tAVlQ0UnEzVmYpYmo7XjhLTUwocGpQZkxcN3UvcT8KJURqZyxrUiNy
Pk8vR19WTElXRFdUNmouJFBLWFlhUWpCSDc9TUBVRj1iLnNUalFSbjdBZ3AxVSZLWGNuTUAuSWM5
SF4nYGFgTzclOiREOSsiPz9JWjcqTCJMUE46M3NVX2RPVEgKJWF1TS0oUWFXREtBPHR0XEtWSXQo
Yi1wJjVGbDY6bCNcc3EtbFAmMD5ldV5TczpmZi5zPTdRPFQ1VShgPGYoNTcuMiFpNW9iZ1wjYlRk
M1ZkPzUzY1BLcV9cQ0krQ2AkLyFULywKJSNeRCY0P11jdG43NmlDPCluUUI+IlZQRWtcWy1Lb1px
KlVQLC5KREhaTlZPaihWLiM1MWdERlFHN2c+UWA5QlQrXTt0IXUvPDpTU2whNl5PNkBsaklhdEso
WiRSR2lWMHU4ZTsKJWknakZxUFZdc0MrLDJQLVRxR2pDXikuMjdMNkBRNXE4XENtQTEvb0ZrdGFj
QFEoYiM4PFc0Yj5hcF5dciI5JEo4V1xZQF5dXWFqIjtQQmVba1hCa1UnSWpQOzllWig4ZVlWTl0K
JWwrWXMuVDMsRlBUKDc6SGROQUVxQyVZZS4+RSRdJ21pITUiOjEyOjIxIVtkMGdCMUdtJD0+MzZt
UzBDMVZ1TitMZFZBTWhDNk1eMkBDJG1tZnBKKkRdby9faF02QlxnP3NWYykKJWYvYlBGQjhOJSZj
J3FTJHA4WkYpbylMS1FDXENNO21aT1JFW1pWI1REc09sTGBiPmopKCFKU0tLTD9QaD90O0ZRZTBg
dS02XD5mVGpcLztCalFuaTlTTS9hLHJCKzJEPE81REcKJVI2OTBOLUVgbERgYFgkJHB1NWFWamc5
YlQpLy1PbCxLQC1yN0MnT19UUDonYCI6XWg6Sjc8b14+QjtIXmcydF8waCxpPSolbj0xIj01X3Ns
SlFKaSktNCdbXmdYV3RgbChgLSQKJUFJL183ajZuRW1TVytEPilhSTFgVVpJVGlGSWVrdWZncmQm
RyNqUTAjZSNxQFNLPl8valRGXVpPYWBoJTlSKTNiJ1VaLl8tPVRzJ0kwKHI4JGw6dVw4OE83Lydc
WztpT1hgamsKJWZOR19yTSVyTz8rRiF1OCtmYiFAbygtZmBmXT9OXi9nUEsvUzpHRmk7K1xYVmZ0
T0Q4Vk1SKzFNYixyV0VSclY3V0VNLWVAcGwjcU1BamxJOSVWZVpKWURLQVFmJzopXzhKKCQKJU0l
cyolIXBXLyplIW83cWVVdWctISZmZSdSZlBAQzkrLU9mLiYpIlBfc1AhbSUvK0QkcS5Gc11oL1Zy
YG0qVmpxZHRYQFVAP200WnFjUUk6KDFXKTJIWjokam0hIkc6Z3UxSF0KJSozUjRdJGptPk9BZjpl
XURiQ2IoPCEjL2lFLUNeOFAxZS9VTjhROmcpUStWRlpbM1FMPjdZJlA2MV49VEVFckVYbWFQaztE
KGZXSmVcc0FZMSZOVChjbWRiaTVXMUYpK1ghST0KJUFnRjAuQ1lMLXMkXjQvZi8xWTtsLFkvJV9N
OiVYXkUhaCtGSSdPZFtmWTAtZWIkMFxsY3JAIi4uUkZ1M0wkVkNlYFNuZChtOnQjRHBXMG07UFFi
VSVWIiY8NEM7c3IhMCowYHAKJSQxaytsXj1ZbkchIUU5bU5WV0FKZHVgUUs2TyUtQC9rWGBdIjBv
ZTAxWUw5NCxWWnBaM0xFMyxlZz5GOl5gLm9TXmdpXD09KGJYPCFKQjJOZ0VPJidWXEsmO1I8Z0M+
MjgmTzkKJTdddSZGIlIpPk9iKFAkPGo7WmlOWSNQWiJLUkFKVGY8bXNHZF9JN0hTJEJhcy0tTyJE
RWs4RGlUazdQZDE7OS1TYDBvRj1jaXQnOlBTYSg0cUJXS3RxIktzUktVQVpMIi1jSy4KJWgwVTxH
IVg/JT4jPXIyP0FQa15UWGRbYWZoIlpFIl11PmQ5XzJrQjVWRDBzWGVLJD4haT80YFctQyZEPk9B
PTxcJyFLOE5HQWk8XzVGKi5iLCc0SGVHUEJPVltwdWtPcE5oJEQKJT1JRmQpa1VwaEw4VF1hMUsv
ME1UVmE9OHQpJWVoZz9ebUw1VG1aTGkkWlFQWlgmIyplYW1TKCZTcThjJyk5UF5POFxMIiZiZFJh
TkZ1MzVRQGs7OVFYXChHXVNOJCRSV3A5XTMKJWUpczI/YW9mP3VlNmpCRGBoNl9pZU1GZ15FSWts
S25JNy4jQURoRzNoV2FlJ2RTNlE5cmpUV0A6OzR0WHFMPm5YXkFWUW9fZnEydFEuLiREOC42Tkw4
cz1yYUF0ZS85NzMkLyoKJTJPdFIkZmI8V1QrQE1LODUlPGBrYHVTYichWVItRCJmLV1LbnJKU1wx
YSFHKzc8UzUyVWQ+MnNfN1tNbStiKHVIWUFSZig9SCcwaUNVcT05OUQ/b2FFMmp0Kl8vN2pTPzIo
T10KJUNUNFVkI01PaihTLk5GKyM1LmYoMVA+ZVVOPGgjOlxhMz45bztrKHNpUCdkZlhJYnB1N1JA
aywiYUo0Xl1iVDNkV2VuPyM0ZSNGbFVmI00rRkBPTGhQI3BuYD8sIW1rLThKXDsKJUZQUlsrPzJF
NFpacicnUSh0bDxzWmw6SmBUVEFyUSMoZXJiXWM2WTZlM3BgZVUpQmtFO18sdE5XMFFHTGptXkEk
aVNxUXJPPC9fbjE8cW9pXyxaJkRBTFpcXjdCPl11R2xzJWUKJVs8Q3RRciFiYWtlWFEudTA3K101
bUdXTEwyNCg6OlxMT0o9WmheTUYxYjFmOS41U1IzMEdKIThhZyJeLkEla2Y3PHJBQFhtdSpXM0Vq
ZE1rMiQ8S1csY2YjVzoiXkxNbilZbVEKJXIrVGBdbVpVYmM1QGE5TW1wI1RfLUBSLEw5P2BlRkwh
c0tEJFo1O0NMbEY3LWpjPkZcQU1bOWlUI1ZJR21FIVVWZ0ptXC1dbHAoa2UsVWU4L1pQYCZjRilc
OUUnXFY1RlcrT1IKJTVkXywiaz9xZSNSRi43J0xZbFE2Sm9saVxLY1xOZ0MvWl5aaGo+ISpuRkFq
bDdfYHI6Jl8rckIqMkk5UWxgMis4XEVtIlQ6VSItSFZoci90MmlwdW9BYSRQZ0NvYSNNNm8rJkIK
JSkzNmIqLlJgUzBAVyRJTCRQW1BrW3QjPTdGb2NXP2YxYVhmSSIpc1czSlVBYTAhSFMkL2xORz0y
J0VZMTctWnQuTTczKUA1VUpSKVRRXTJdJmkvT1VETGdbVz5eU2dcSWc3UCkKJXJMVigsXl8vT0NJ
Il8hMT5zPCpiJTUmPVYyUyNRPShlaVcrb1dHbmEvYlZTMDBXJl1obS9SSkg7cl1XbV8/b3JwM1Qh
XHREYGNWN0RqUy1pZyhRLjpCPXVdSjQqJWlYa0pvW2UKJSJcYzxqPTkiSDI7ZWB0a15YaV88Zm1l
KURIWkJDS2U4Ml8yV1UzPlYwV0U4JUM8JmpBckM0JCRYPFRdViMtI1JwK05bTSRgQzlpYjdZTEk4
Sz1PI1BaVjlVaCxxdDJLVFIrO20KJUgjPStiQihiS3BNaTpGKEk+aFBnJzRIOnIhISVcTjRGbmdG
ZjtyWVptRCc+bytdPWslXVczQzRAMENfJClzPlJVKlMkb28kKywzYlhGPzZeZjxxMDRbVVxzMHEs
O200V3BgVzMKJWVxYWI6JWJpXGgwTVhnSUpOSENPVUoqZCRfSyVCKjtzZTlsbj9UYipJX21CTi9o
JzVJaERbWS4hLG1NO1QuREplaDMlZkduNTU4NElTYTo8VXA1SU9QWydrPV45S1IrYkA3bmMKJUoq
TVUwVHV0WW1OJCZ0KWJTUE1VZHBkLGhMXVt0cTZQampsImM7bDM9aDxPXC9sOEpQMTNHbGFGbS9Y
Oz0tUXRTbmJadUlZRiRvZyJJQmpER185JnROSTFzbERDXTYvOWlgI2MKJW1XcXJdV2pXcyhoWFkn
Pik0TytKajxmOkpOMCZoR1dfcFcvLjQqVyVjbk0zWmJwNWU3PjNTbXBOWC9NPFhAYS8tX04wTjhQ
OkFOIXBecnVRQiw3a3QuTGY8bFVVL05yQl9QLEoKJUhyOWBQOz5QSkwpRDN0US1IWlwsIWNobkUl
KSdMcWMiZ0FPaS9eX05GNSNTXGBhVikuZ0g6J0skRCdcMFcpTUpXXnBiLXNhLzJZYTlOUClnaUIl
VmA5ITclc0o3OkBGQlNXKjEKJT1fJ2xjNEBHbV1XXz0+dDM9bE1SNExRMEhRSF9yMFlcRCc9bVxT
L0YhJVFqPy5ERUkhbFYzRnAsZWg1NTU6cl5IYTIoXWlNKTZUMjYhL106SC4tVWpHV04ya1ZXSFY1
XmotLFgKJXAlKUAqXDFHWVtqKyRbYDA0JyxKKlgmMCkoajxndThlQjRaJGQpM1smN2g/Ui1pQmRO
blY2W05RPUtEQVojL0YtPkpITi8yYl9eaWswZTVyQ2ViWCRaKzlTPzYuKjFDPSgiTSIKJU1xX0lQ
TzRdbCY9S1J0b0ZZOFJbJFNGYzhgKDhuYytzVT89cWYmPDQtQ1xGJiZDOl0oXSotRTtkTyVgVUFZ
LzRDJD4kWl4mNVNkMmZhVHUoXixJRWhNMWRDJGVlWjZaWTM8UzIKJSQkYThwP0xXWEwoLDgzLz80
aTQ8YD9gYyZpMDlMXkB0SmE3QlglPidFMV1MLj5dXSJYO2MtJVc2QWpmZi0sKHBBIicwbjdaREZU
O1NZOVdCTztWRGBVTjBwSytZNjIrUkczbFMKJSZuaWpEPEQlMlFHbF08cUNAIzYhWTBfaT02XGZj
Uyw/KCViRT1aPFVIYCcwOFhQVkcmUCxDbDMlPzhPLiJnbXFfLTM8PkUsJEBOdFJtZmNSWyRdNyhr
PEZSLyFmPnUoOUd1PksKJTwoXzVuSFlvZnFYO2Z1Xi4sY1A6V0UnPGZMVTUtcmlpbFNKVHBdbUpU
J2FmclFlJCwwZDFOVHNkT2lhSjdFXjhgPlYjIVg1XlczIkhJKVtkKz8qdHFdZGJYIWlKVW8lZ0xb
U1gKJTlcLU9rMjUrQmgoOzhzUixMb1MmSi8pUiNgX2Vdc3IzVV9JWXFLOFNFRS4rZWRJQ2VPbV5z
VDowVCMsO2JQUmhYZi1WbyZvMjVYZFVINGhTcUwqVClfby5RTmIrJG0yP05VOz0KJWAoYWZcJ3Q4
O0JhPF82Zm5rNWxeaiRYOio5Jzo0TTVDKzdRNkNPcThhWzs5bytKS2cwSVVwcENQVnNyNkVtMDpB
OFk1OFBlUDljTjQ8LGIyTGUjUj5CW1pVaUZsaSZEPWBMNSsKJSE+UHBya2NTIz4zQiktR09RNTM8
Ok0nVVQsaCI5MVxbJzowN1xiSVRdXS1SRi1OREMySkB0UmVJKkBqbUBbW3Q8ajlbL25jIy9cR0co
WXVoQkY6LFpYXSJrT1RIbkNNXD5YcjsKJUtyYVBMNS4wKCk4UkIrUFE0KC4zWyxlaDdaJGgzcFts
QD5ISVRKbzkiNT9AUScrZXAkXF1VNSdXPEcmLmIiMEdJYVlwM1dmRjgmXjo1WlNqY1B1ImVeaD8w
cTFnSnRHKlEjRFMKJVlHNSJQIlQ5TlRHQWJaYjxZWkMwRUM0a3IhSi0vJnE5MU9aXilicUJXQzc+
TzxCXTpGLS9TPUlqJ2AuYlk9UmQuWHQ8LCNsNCYoU10uPGpHbmEkOz80RyNJUzpfRk8wVSwxIUgK
JUpHXipqRmlqSGpHa2JmVVglZVZgKnJaP0s8QFE2aFwzPVMybTMiNWNxXSJDLkYoVjtWVks1bllH
b1dyVTdSKSYlLjkmdEpvIWJCQWxDQSRrRlcnclZJaCo8RzRJLV8kZV45JCsKJVJXSmVDOSQzRyQ6
T2IhQ28uJTUpbjpzWkgiRF5QLkBBYl4hRVNuImEoIVRAW29XY0ZTJ2dSXyhALCRHO2ooMDVHYEcp
Vy41TiFBI25RUWMnZWcxKyQpNSxKcWNtaWgoOEZsNS0KJUorJmZcS2Y0PVEtPTxtRHAkUzNeR008
Yys4bnBRbTBEUHFHLDcrT0VybUsjQ1pMXFs0VlU/TUdPbFFNKmROP2E3PTAkIzdBXCdIc2hEU2pj
U25ncHBXLSpcM1VCaChtJyJmLnEKJSRuPjxMQDwudXA5JVQ4S25jZyFlZFckMzFVKyssKz51aWZe
ImIhTzNdRi4iYWkKJWxIQERfJWYkbjFodXFXZzpXdEs9LTUmaGdeRis5SlFpWUgjNWBxPlw5X1cy
Pm1yUVgnYEhwYFoiXys7UCxQLm9NPFxAaSlpYFdmdT1ZMXFWRmNjOUpOTUd0XjZXKTZDV1JTREsK
JXIpI0o3TSQ6LzxTc2hXTEgjMU5YY1hza1JvazRaUCc4NVA0JWQ2YixvanQ9I0hmREswPXNBZV1l
WlAna1xUSylHIT4lNUBsV2Q/KUJJaCtXUCpBPFYmSFEyPkg4VUBfVFpJc1AKJUsqbW9RJEckSiNy
I1o/Si1BRC9vJVZRSk8uPFc9QlEsMkwlOVsoVi83QkQ4M2NZVDFBSHI+ODcvMXU/ZWlqWE0yLDNy
KT1NJmpPSCNhJCFcLyoiKCdraVM6VzBrOzZTLipTY1MKJWAuQz9gaUwiLmk4PS01MWRzZidgSWAi
LEwpOyRcc0VrXmoqWmFDaDozVSQ6QVpsZllsKDp0XVM/KENeQ0E0UVM2VjVYbFxHWGwudUhwSiNg
VStBKzxfYShoPFFWUSVWVHEzLTYKJVY8QjBtNTJdLyk2dGFNUmpsZWo5ZUhBUDNaMDZoPDwvTU85
VV5GQSgvZG07QWgoZzkzV2hpTk9LOy5WQzxhXSZHSj5QZT1BRWU9QVBBOSI/RDQtJm0uXDEvci1D
LjZnQT05PlsKJT00V1NFTy9IMmsjNCVabStjNWpQOHVIZnE3Im4yW2cjUzNKJ1pEa2ZNWmNDbSJG
MHRcMUxrZzlLTDZ0UTpySz43XywnIjNuRVBgX1g6LlEwTmssay43P2VGXDxvTmU9YzRSZmEKJWtL
cFhDUmdYSF8yIm9hWDBqNGdtbkFqYT9gWWwoI0BdYDBhYDpbYCMjLilUJUE2cVdkQ2U6VzwtRWRi
SjBrYWRJXFlNMSs2I0tDW0VBSlxfay9aSlM2ZFgiRU4pcGpKP0BBLG4KJSdKLC8/TmUpKjs6NzNz
JDVfUkVuOyVsbyo/ZCUxJzFhSWxCVT9cTzNYSl0jWGluQFU7JlMsWD5JY1ZHTEF1OSZwNWdnMUNL
ZCdyOlhzL2gxYCFgLlo7RTYuYDctc1pgSFVGajUKJW8tMz1COHBVc1xQVV9yS01oW1w/MjI3aiw1
Jm9sISNrXDlMYUIjJVZjT2hmMz1PTDtZblxRSVkmTlooPFxSMHFFJVZSIisvYl89cTRFcVVESTwt
NFJDXyhnKSUjXSF1S150MkUKJTQsTUhqKzF1Oydma180LkpQP0lTTWVbcltkTF5QM3FIV003ITdf
UyosJUpxZykrKD5LQipQJiUuWUMsbmBIaUVkOGAvY2lOSFE2SlFCaSRyWUJMaUxITG4nSjEmOTdc
OjdYb3AKJUZIVGVEMEQpYzNFKE1nPDBTcjMuTGM3OnAsbnVecTxOPCo5PFo2cjAuVF09NDsrPUVY
aUhXWDY1Ij0+IVdtJksqaShRYUBiOmosKGxnTSheNylOc11sWEU4TSkhJmIpZXBzak8KJVA2RF85
Lj5nKHRMJE4hOmFWcC0mKz1pMmo/a0A8YnBdKVRQPD0wXmY/MEo3Km9lUC1JTSlkKEozMyw0aWM+
ImpSRSJaMy9vRlwqUy1JIyVEJTVOSFsjU0paLz1FdVpZbEI9MVkKJWVqN2NyYCdxYzY9PEQnImg/
bjkuLEFBPFNvNW5qOGNBIzI/Jl9kMztfJlNAImlpImRlY3QvXktoP3FhM2VCOG5EJUpXZzU4QWAk
RGpFNlsoVFBJKWU0XVZOXmBeQUlXQDhgZFAKJSI/SCNMZyxkW0UlQU0hTDhFbjZSJ2tYISVcSkxI
LidYaXFZLSZWVW8oOEZfWEFHKFgmOW1BPSguTDslKlcsY0tjPjonc1dacmFNJ2koXUNJXm1ySCZq
TlJAPyQjN2FxbUorbUYKJUAhdWRgPD9SJCQqXllDJW9SX3JwWG1KRylEcTlHWFZ0J0UpJ2JIP15a
LWk2c1IyLHJxZDdCUUJSOT1CZ3EhInNwIm5aRGJsKj8qSy9BYnVKU3U0VCFrS2JJZWcyZnJYVGgq
UFsKJV8nSS9ET3EmZlY+MkImT1g7OjIyMFB0IVUyY0BGNktVaCpZT2hhSm1bNSJSOzdoSydxLkFM
ZjUmOCMjZFRnRDc4Z1I3a29MIihzImBbXEdPNWIxR2JdZVRyWFRnV2F1RFxlaSIKJTcmWltdXVVD
WFVKVigyLiI+PnRdK3RQbmAzKC9sYWJ1ZFpRV2Q5I2hYXDF0XFczKmU8IkFlcl8mbSxzXS4mYi9C
QmBSYG4rOjNyXSFnNkBhbTZHcyY4Zm05P1VQInJXWE1VNDkKJSE8ZzdOYSJnW0gkPE5bNEpzVihT
NSldTyNsNTVAaE1XVWRzaU4vKzQ5ODBdaTctLGY4Xi03XEw/VEpcJHAhRTUsKjw4c2FCbjgqLXFY
VVM8MWxYciI4JSM8UGFTJDYsWyl1N00KJWM7Ul8yWzdgME9hblAqWkxmVGZuUW0wby1NMjxbRGBt
NGxxcDtXK08qQCpkPlBBMUIhRDlCMiZZISw1ZzlRbWlPPj1dJ2lMbXFHLCUxQmdIK3MtLVRTaD1m
Sz9yM1BgUy8rWXUKJUQkOUFRQW1lR2QvZVpMOiExXWc5Q2JsXDhATz01aztjIi9wPzZVRllgM0k6
IScjND0tJERga1dnNS01WT9pWGkxKT1hWiIoOC9dT2BMQUApbEVxa1ZANmIoOSE3IjgkOFAsTEIK
JShvcz1eTTxRI2U9VTdmWDdAQElGVkZLIWI0YEdLdSwmOWc6TSFCMSM0XiwySWpeYXEtQF0sQShU
WSRTJzM8W01WUCdUZkZnKjRUaGttSCg5LkQmbSNLLkEwOTBpa0M+PlkwKVMKJWVSU1JVK1I0VCol
Z1FNSCQ+RCJiZDRyXiosY3MzY245ITBcajplQDUkYFw1S2UxKl5RallBNy9XVFBRQj8+OTc0UTRF
K3VRPWtSZFRuK0NhNzBWOThQWSFxWlF1dHEmSzU9VUMKJUEjPGFJbGgrcCRfQjRwOkBwM2lZbF1c
OjE8VmNbK1NCaVwsZGxyQmlaOig7RyFfYWRUYjFCLiZfa2NnKEg4Q140UDguVj5YVjFSRidQKSd1
Mz9GLyJGSiVyIV89OjRhKGRkK10KJV9iYEtvbS1CPCJYNDFIRkxxTCpDZCUpX2JHaEJjXnFQJl46
J3NlMU9QcTNVOUdjPmEvJlBVcCkiYCJyVCVVO1swNydFW1NeIXM8NHJfKkY0ZG0/TVlcWEkiYC1G
Rj4tJjp1XnQKJTNGWG1OaGRdUFNXLjg0XCdtN0cnIXNZPGo9YC0sUEQ6LUVJXWxFNTgsbl07VV4t
WUheZlJLY3NhWGVwKjk3IzgvciVOMTlrR2o7SldDPm1dT01uJlMkdHQ5NlNwOFFTPS0sIz8KJXMq
PTEtLVU3TG5RYlUrW2RsRyk+bSZSbyFodD1iYHEvLSk2WmIlM0w/TGYwSkZGQjwpJVdRclxkMmg0
X0wxcmNxaCtVdW9qRFpHNTUvY1JMQjQkJ2NzJElRRSptVkpRUEFcWjMKJWhGdVJrIjhwMzhNaD0u
NVhHXVVFT0BlYmpdWkJUWSI4cDVOSGQ7J01EOE86NGpQQGFbKjlRLjNpPnBqP0FtSU9ULmgsY0hj
N19tcSs2OGlFL2U3QUZaZCU+I1FkWCxBYltaT0IKJTM2NWApIycjYGdSV1MoJTJgNTt0T0pBN1xi
VFgtKEQtcDxSOmJRWk8lV0h1Xj8kWCFwUFYyMy8vby9uUytcbGgkX2RoTlJWIkdta0NZNGBIXHVz
X2BQJXBTS1ZRL2VFX1JzXTcKJS1uSk5qKzRRIVI6UWJaYVknJyVXPVIzTio8aDg3LCNMQE5MX0Bi
VmdhPGxCcmFIX2FzMzZbZGJdJUQ0PidEcnUuMlgwMyRRU3BETmUiLDNnTFArQzlcO1M2RWdVTi9v
PmhjY0QKJV9qb0tCRnJmWiMjIiFmaUs6QHRqSzNbMmpNZ2AxI1MkPFQlR0I6KVxuIzs9YkdlM1dT
aDtmKE8zUDNDUTNRMF9fI09eZ2REdFxpSWIybEAzR0NcLV08cF1tcjJxY1czKjhmZ2IKJSpUWyYk
JDFlL1syaiJzZ0VkczkqKnBXZTIrbllnTzM6PmNbLVRMQU0rPyk5SmE/YmkqT1NTV1VpUldMJ0po
bkQtb0Uzbi1BLyM4L2AoOmMjXEFaS0UrNjQrYFJzZ0ImKi1aQ2kKJUBpVF00T0Q+ajFFMCwuMm9Q
Zm49Ki1aQ2lLOkklJ09HYiomRUw7clttKU1EMEVjU0FvRWcyYj8qcFk6J1FmNTNwRXAzQiNHPCpu
OiEvRVQhaVpPUkpVRVdLWFJLRy0nXmhqUGwKJXAmVTFwP3RsOCQzWD1kMiFSPmIyKSVMVlgxSlE2
RipVKGo9L24yVz9AVHNGNjJpMSQuXF40J2dFa2c2WEl1JVVTKzd0Z0RKJTAmZGlCPDZTOERcXXJG
XldIKUVpb0twR150LF0KJVptUSk8ZkVrSyhMbDM7K2w5LUxWUDdYPlJCI2UzNGpfLSUkbU5aJClV
bUsrX2dJXl9PXSVdZ0JWYFdcQW9tO1lAU1s6UHVrbWYyTSI1Tjw8TDNeQWoqQTZqX1knN0MsOk5z
TzoKJWBCOzY/JWQzRWAjPF5FNCo5O1xWKCM4bWNHYlcsO2E6MXEwT1RURU1HPCteKip0RT4zSzBl
TXRMSE9oXCY9NUxBVkdERVlLLSdoYGEjczpZITdzPnEzdD87VlA6ZD5iY11dQ1sKJUVSOW1qJVdd
cjFHNVdPMjRga1tvUyI9X0FPSjxqbGA8ISxmRzNcTDlmPkFdJW1dXCgmNE5NKTBcaUBLPjpIRWEz
Z1JeIkNQOSdET2VcOHFIOl0+PGhxayFHIU85OypcXyRATz8KJU0oIWg3JzsnQWRKL2IsXmAkKC5n
XDVIK0ZqYiU9MWwtPVo5RD9tXUM4Sl9VbG9MVGlwRzNQM18rNjpwYU8zUS8wcVFMRzczOzIlVV4h
IXFzWltCPzIrNVcsLTQ4PjRFamxuQ0gKJWQtVmgyajlOcnAhSCpAbGQsYUIhU0dyR1k6QjBuImxH
Kj5vKmsvbU1GNjY+S1xeTVFGUzo6QSNgVGNdLDA+PiVcSUU8QDlJbzBFdTs/J1NwcydQYy9xSHIj
OXI6L2RrUycwUGYKJUlzQ2tRcVdjVm1EZjlPLnI5ajo8aDt0YlNEXWNJZEdFVE4pTmEtYkJeNChF
U3FSYDZPbiVRTCJaSXJxOSksPXBaYEEkUVZdaks0ITh0OSNKNFFoaSNcbCNmXDcrRDhsUztVTE0K
JSwhRUhTXkxwRnFQS2NYTyYudGxKQTMsYm9bPVBqZ0pzaGY5MEpMNyw/cmVMaixsPjU+LHU7TC9T
Z0hAamZALmctS1lWbjFgYF4mYGFbVC1oRDghJ0pGSERfcS0kVGktYUtJRzQKJSplVlJfQ15rRWdr
OEw0cS03NTBIKmcyRCpLdU45UGlnTVgpNHNgUS5jUyYyL043cFEtaVkjKWY1dDYqXThiOGE4Mlpk
LWptX0wvNCIlXUNnKzE1ODBPcVFJVSdkWVE2JjQ4JWcKJU8hbURpZSdjJjgjRlBwM25jMD1XUTtR
RFIqbEJZdWAzKyI9Kil1OyFiamRCVUFsVkhfQzNhQyIxRlRqck9IYz8oSlUicSU+OTFNWGM0Q0Jn
RU1kM2tUOzRZTGJUOWtlT1RrYEwKJT1cZyReP3FUXVYyJSo0I2syUiRyJF42czBZVTgtYGxjNyha
I0slYlFuMmgoc0FhUEdMWltAWEQ6Sy8scVE8al1hYmhvV3NePCNZPWlJRWQmN288KkdQMj49WU1I
WlFUY0FSOHQKJSU0c15bUmhhU0wvMHBFKFFYJiRhJWsiTF49aGZgKllBbytwMEg3NXFOL1lWYi81
Py1gQEBKUnRnNUlJN2RqZ21RLzM2YDY3PF90UDYhYFNDM2AnX3RubU9tJCkiTDcnUUU0cEgKJWE8
bzpFMSxwayxnXUplVyVXbm9gJGZGVVM6cixeRW8zIz5xJ01RQlpPY1JrQi46aCpFbTlrS0M9YVlG
LVxvQ1wuYmAmSFxDdGMuajRfLFMyVEhxM1UubTEpLzFWbnUlLCU0JUgKJUFiT3RiSzU9KXFZaDVd
Iz9qMTJKZ15eMWZWPytWKE5NclYrYG0tQyRIWkIqKTFSc2AyPCcnO1A4YnIwcFpzRClfVG1EPC5T
ckd1REw4KCFhKSY9bmUjS1RiIVdybldZTmVoVlEKJTooS2lhXmowJlFoRHQtXGJIYThqTVtGSjZB
b1w5ZSZPXWM+M1paMlouXGMxVUNeQl5bYjtBNHNWWTVAWkVOSzE0YEZeNkFmYFZMQlJ1OT5aWl5R
SHA9UV5VczMvQVhWckVgNU0KJVxXcWtPWG4mZ1BJcWEhUjFlYEFjOzc5OGpZbTNRN3EqPzIwMHNV
XUptNTpNVkVdUTxRRVg/QDtSSTJbT1pqLE5RYC48IkdePzFBa3MvXHVZVmhacSxgKT1BTlJEW0hU
PSgyVSYKJTNMYEYyJ3BeUDNEX15lKkZQQnUsVFJfWmIqb1trKStNSlI/XidVYDYwRFFAM1s/O0BD
O1okY1IjTmEhK25lR00rSDIuYj8wclVGJ1MiSmc6ZDM5LEtdUiddVGw2IWxIIUVKWm8KJUA+X11R
YGVhKUVaSzopYC1WQSI1ay9FYC1jNDBvXVJSRmVMa2dsOz80ck8uYi4oZmIwWUxdQDByPDI6LU5Z
JS08Nm8kPDombjRdKTFCXT5pIjFHMCghNVlNXDBXVSxVT0pDJlkKJU1NIidbQlpJJWxecmBVNCE/
USMxTU9bRyVLbkZlSTxSLkJYYl0zbyU9Q1FfKyI5NEhlVzBGZzNaOlskNDJNRlVeTUFAbXJcQFF0
MVdaJz1yM2A4N1oxL0RWKDF1V2RlLlMuYz4KJSFgbk09V203SF5xXy1tKzY+ViZVUVAmNnRYUDQv
cyZJVSclWnVOc2wmL2A/IkpPaFdOOk5xK2JwdSFzUC1nWmR0MDMjM3VIOkJvR2A4JDI1UmJmWkMl
IUklWSgrSmspU1Yhcy0KJUYmWjpkTlshV2gxKyE4MTxPWS0lWFJKL1xxLFIqXmA4JT9gYSE8NFFd
K11tMCFza0pmQ1dqTzwkanIzOCM0aSEiMF5Ga0lGZ045LXAjKGRtYiI1I1pXOSFJa1JaKj9JaDNS
MDYKJTNoS2ZkKCIsTFNIR1U+X2BVZkldRCJEJlhWPXJtMCkwLWk0V2NDLCExPVo4WWAsQ1sxK2Vi
QF4sNj4sQlRwRW9naUZ1PiY7K3JcJ2YscHQ4UDFhMUNkOnA5ZltiKSQwK1wlWSsKJUo9PTkwJEJH
UjoiX0tHdVd0dGc7S2FKYk1QQEVaVDQ/LGZROiwpRF9iKmsoWyNdaE5TXVNQSXBSaU89RlVrNGYv
IlUzK0klJSo0YWpLYUBMOSFNKDZYIWstPFBvITVQVG06UlwKJW9vcV5eTV0hLSU6PC0wSkpPRE8s
ZDJHaipNRSgrMmhHLUdIKDBLKmJQL2dROElhclksIlhcaDZaJzVqIUxIREo8P2JnKV5lUCUvPTIl
a1dpJUVNKFNaM1EsM2leQmtEJ0FObTAKJWRWaVpETGtOJSg1LUlDO05EYyhjMDk8XyFqZVZUUDhB
STUhKywpL0U3bWtEPksxVik6QzUhJy9NXiYocD86LWFXbDk4I0FYTF0rMDVgNCYvbSglWEZVaS8h
YmlsV14nXlEhZDEKJS9MXCYwT3M2XyknI0ZpNFcyTzJea3Q6Tj5bZWBFNFYkTFZEWSNxZElXdFFL
XiZqZi0rTWdAYUxkKEdTPjcxa1pJbitjcjtrV3BPNFx0XkdqakNdSEFfS1QyPUhJTj1SPytqQmsK
JWFSIVZzJWYiIkMoRWRtOjtfVzlbPyhPOmlCPjNVQzxEXVNWNVZ0IUxHZnFlYjRTJ0ZQVUlzU2BU
QGtjVS4wUnF0R1k1cE8rc2dVPFU0bSZZPlVTM20yOUN0KGs2XWcxMTA/L1YKJVBFS21LbygrLDVu
O09YXG9VMzU3QWlZVEM0T2FLQj1bIzZSP0tCNXFAK0BZJitjXGc6NW06NTtyJXElMCZzQ1ZHXjlZ
Vm4xR3VNOSJmQF8mYEBeQTY0WCgrI24iUnVfYlVrIikKJS8yKHA2X1lzcWEkajVBczMjMHAiYFcm
Jl8wQVdeVlIkJzUpZTxZNGQ0X1NYZT8haFNyOEciKiVRSmEvZXIjLWttXicrcCIuayhmVGhOSlZd
WWooTkdKcGFsMjZzODRKJ0NRU2sKJSooOGAlVEVDa14wYERjc09GYGtvME1BIU1sa0dSTzswNTkx
W2trSW8wLVlxbyZYZWNKLzF0dHBgP0FYdXB0LjkoOVNHTihOK1NdVjM0LilJMDYrNzU3QDwlUTUs
RCxBYW42QDQKJUlUI2p1Wzs8KENBLiFecmwwMGgjZDQvVkxCOSEkWktCcy9aVzJYVz1BdU43UjQu
IXEmQkMlPWZlazVLNFQjSTNAWlQmMUtwKE1iWHFPaTE7Nis0b0s+QUdIWz5HWlgtY2M/MFAKJWUw
O2lKOihgMG9mZy9SP1IpRj5YNVtqITJoOyZfKTgpZEwqIy9LZlQmNGRTW1g6XVltciZBMl8rP1Bk
IytTM3ROJkAoQ1RfaShiIW88ZF42WVgzJmNUR0ZqJVEpSzlMWlZUQW0KJUE9aUplKkpLKj5LJlI+
NVJfI1QjVDFSa209OiFCK14jKWBLMy1jU1dUPlgvZWtVSSs9TDdnalhiVlBFLTM3N2IiVW5fQkhr
KWdAY19QakheInNWaHFLZSYzSCQmaEF0P2VnIVAKJWldLjBvVVlKNXEnQDs7XUZaMi4nPEMqal4k
VXNIYWdHWCFicVVnRi8iQlMlIU9vcUtvPm4+KSwlX2xvYUc8NEQ0NlxCLGouMylNVDI4ZHQpZkkk
dDJFPnRRZWppR2kmSkNlWFoKJTgtJFczP20lTCxZUGwxUCI1TzUkMzZrRzlcXSkrVClaZT8nPWAy
aldYTzAwNjEwJyZPLCdpcWwoPWxIZzYndVohZyJndWhENk5aRSpeUEFKY0VCT0NaNGFicFJkIjpV
Ni5SSCkKJSVaTz03NkdqbDRfLHM2WkEuW0VcODItXCIqYyknKyxGPHBpbE9ATVpnSWAzYixDXT5J
OXVtTCwqISJhQWFuSSYpTHBFUEk/bS5uVXBfUDs0Q09VbD5pNGVxLlQ+aipyR1s7LkcKJWBpYFw+
MTxKaG9eZzpcNUNVYFQlPys/I2FfR0wqQjE4M1NqLWpcMTxJYioiJ2JrMVMhLT9YKVtLPltMVmhj
PXNlPiUqOUhwUj1dMzY4PjtuUnRZZFswbHRedD9rSlBNKXVBNmIKJS1JNSdONXA0bm8wSlArRjZO
IVk6TDFoZDpsUUNNbTc5Y3UmUzIjUj1UYjVAI2tPMjFCL2sxXWZsVVJWViUnUGE5aD5CJWthW1Yy
Vlc/dSEnNWEsOSY/aWpANFVZX0c5cTxiS0YKJTYqTlUkMl82YGYxYzVbKiVwPUktWSNFRFZcSlwp
RWtMUFB0VVVXS1pLMzUiVD9DMlcjWmZBKFErTXJJYVdrZiM2bThJOmA0UzBdRGlAQloxaHBxOyMq
cS9zNiVEPSZMa3NoZ2YKJXEhUWFMJTMmYSY2ZUtEQGpKdXFGcDVLY1k2WWJlc186K1hCVVo8Qioj
MTlFJWs5a100IVtGI0NlXWxiLGM3WihwOFIxLUYoQm5iQld0KigyVzNwO107PlE+OiEyZFJhSXRN
ZWsKJXFdZyZmKV82UihyVWlPbWgxPXVbJCJHbioqQWxEPGxUcSpwbVEoaGM1dT1bSklnVVhTWjYi
WDZnSFx1dCMnT15QPi8rMzRjMU0vcG8zPjRQKUwvdDdYbk1Wb14jSFU3cWpjKicKJS9zKzdtMEVw
czorRjZjIVguUVJKRyw8Tis/STs3IjVzKGFuJyQkY2A7XyRxSSxQNyNlImM0dF1wRFA+YGAvNkhu
TmZHbGNZLy9JT1NIKUVaZk5wXyg2UFFTL0lpZyU4bDFESCkKJScnaEEqKT5cU3VQK2hzSWNFV0It
MlJccFg9UCVgcEhnMyJPW1g6aTA2TVstNjlYKlhBVXNeP1tIU2JrcWtJYzg5anVWdCpWWVZhIy9S
TV9SZjhubFkhXD1fImxrWV1BOnFDPHUKJSc3Ryk4Sy4zYVVkW2teYkYrZTQrKCdHMSRsO2pSalJm
J21rR3VpT0NWVD8vXj5jZSpuczIiZCovVlVBQyg7WkY6O3EpUSRbMmZNOV5CYjVzQyEvVkhBTkgy
KWxTWjFHXG0sW2kKJWZAKFZaMDVcTnBORV1ZP2dmRDNuKidxcSUzZTZiRkk7RGBUb0tELEkiOl1O
IV5ySktUQWBvOStAPkJTTko9Kz0zOy9EZ19OaWRyb2Jfay5CXk1BcCRuR1FadC81OklcSFZjQSoK
JTJyL1hzcW1Ic1BGP0BpRGZEIUgubUB1cGY4b2lub05tXyZYRnU/Zz0/YDwuKF87bSg7YEM5Uyss
OkMtZVJZRzJtU05zbFdvV2pSNVRNY2BbJjxOVy4uSDdgczhcUU4jXi42VUsKJTsrI2pwMyx0MT9L
SkdGSClsOj1lQDBbKEpSUFchc1E+KjJxTiUwUG9XOjw5R2kkK0pBTT4jRG5XWzkiX01uJWBGUlcr
N2tsOz44WitqQU46Kl1aLEYrQTJfSk89UHAhITJUT1MKJVRAPGIyZT9ZMWQ/R0xpWz1OYTgzOG49
NiY+JDsqXWIxZDJjJFw8OjBLa188IiwyYjs3NjwuJSZdOT4pbGlaKSNBO2JhTklLPUkvPS0kUEZl
QUUuWC9Bcl9SU285TD1dIzsuPTMKJS5BMSV1LlxFdGEmYUxFYHJHcl85VThXL2s8TEUhQlpfa3Il
PT9tQWleRTchX1o/MjA+LV4uJDA+XVxpYF5jaiVjTjo/PCg0LEA/TURCRydXPUhjLypwTD4mamZa
dD4tVFEtMVwKJVU8biNPZFAmLjlaYHRBRStLPiwkUCJSM3BnRSk1PCdGY0kpXj1BSDctW2hHWDQq
SUAlb2U8SUBCaVlFIiRFVmkyM3RORmZmYldsT0c0PWc3PWxhbDJqQVY3NVR1NCVgLjloKmYKJXFQ
VF1kbENVKUo4OUZQdU43MXNDVC9AdTBUZUo1Xm83MHA8ajYoSUtHOi4vPyg4QilgSjA3N0ApKl5C
aiMjSFVkKCs5R2BTQVleXjJhOFxxWSJUJmJfdWs1bl9XY3NoYCVqQDUKJV9YOnFEKEglQDlkK0tD
JGZMaFBvZTJCalZTcS0vbGhgdThgQHUwaVo8V2ZWZzMobnUuLVpOY2YjSGRBO1IzQjNwQUpFPkA0
MS5tdU9GXz8pXmUiUzBmLSE6NjBRRiZNSzcjLCkKJSsqYk4qXU1ML1hnN0hOLSFgTk08VkojJVlS
PWFNM0BbK3MsZV0mMEgxRj83NjZWUlJeTChvVEtRSXV1KVJ1KzU1WFtsTTNpMTMsJmJsUWxOJTIo
LT02LXM/WDhlaCU6Ly1oRWYKJSFdI0JdI1RAI0csLS1ATG9XXFZabFBSYjpgMTZfYGNVMllkZmBb
SHUqbElcXl4nI0prW3RXWVw5R05bTWAxKy0ucTZuOUghQkxvZ1MpS0xtIyo5cyNvcTQ/cG9VcmIq
XDs/SmUKJWYlUlFlVjItP3FcKFtaTDtURShYMm9NTUZMcGxVODgzK1E6O3BpIkNgQC8vVkVCPVo3
TFs8JzNNRSY+PFI4Im4sQWpoM1xaVVQ1WTFkbl9sOThdPmpUaUcwMVlcSkA6J2UjIS4KJVFvXjxy
NSdENlg7QV81TTlwckQ0TkI3UGw2PW80WixRaHVyIycicVY4ME9nTllfVFs3LU9STF8/RDNNNl0+
N0AvTzhCcUtSYztTQ1VEV1YiQDRAIUJQU2hdQGpqbkhGMEw6M0IKJUs4bmkpcW8mUyRGNl1xOiJh
TlFIUW1TSE1RPm0rP15IOihhViZpVHMuS1ZYKFolOywzWSlKKy1pcEEhIVhNZypeWl5TMkljPDw6
U2dqVF5xRShBWl1FPDw6RW1pRUp1a0tCc2sKJWY/RG4kXm8+OkYxbSRxJWxbWEVQKG9BYDtKVy5b
KC1NYjonQFU7WClLRS0+QUUwaUY2QUA4KSFFTERkJClHTl8wPDpwWDxtLj09bDoyRVY8M1RwNmMk
MS1iQVp0L1FPWSw1WFEKJSZEPlBnSTlZNTs4XiIzSGE8V2UkYmRKS2BtLj06Zz5AKG5pRSlhOzlU
XUBycEZJV0szOi5OS25sK3RmI01KalVAVGsmOWtTTEhrR0BbOl8iazBPZ2wnTFlrWipNcTZbO24x
KU4KJWVoPUVVYUtlY2osWVQraiY9JFYhTGZDZCkyTldPIUIqOmhHKzwjSU5bOk5Ya0YrbEo7Mm1j
IVQsKWMpKCVSJjhiXy8tMidAXCtTPFteTjg9UCgoTmVTZTdKOmV0NSQ/azVCTykKJSJnbF9iLVQu
OERpcFNMLTBhU2Q6PXN1WC1Da0pLUCQpPCtXWCcxOkU+ZidnPGsoJilQJmcmQGVxcVFDZyRDSkZx
aSJpVCMxKCJgaixyYVIkSlhqUGwlOlRuYT5kKDdJSm5ZZ2wKJVVrRU5Maj8oJGVjSWQyKDtLaGpR
LFAjaUFfUyFAaVIpMThmLypBOXEoZlwldCZva1ZlMHNCLTxgQVgxQWRkOUBXOGVFSGAtZD1LY1Rq
dUojOD1xRGllQ11UPlo1PjtrI0RjJiYKJWlfaTJdWisxOWQwXTEyNkJLK1VzQiEtLE5CKVNpTXJw
Tjt1KUNcSm8qSDUsJShXVEhnJC45WlpTVWokNChbYGdVXjkoOzhrRk9yLDYsNkYpTm9nQEEqQmNI
U2VdYVx0SidkIzsKJShFIkFcWSx1RWU0YjZVTTNwODM7bkhlUnVobVUzJ05QYiMhWiRvSVE4V2At
QjVOaSJfYTBHY1kpTDNBbiwxaXQicFMtTC4oWSwrcUQhbXRpVS5Mcj5BRFU/MywoQTI1Qz5GR2YK
JVdgb11cSnMoMDZFX247N2dHZF9ASGBBTyk4Iz40cltWJEImMSxpRWJjXkJGdV4xRypjS1I4L3Ra
QSwsaGojWVFeViQ8ckBnQSwlW2BeXGU1XyoqZnBHPDNEcDp1aEk6KFJGLFIKJWcqcSl0cG5WUzxT
MGVcKlQhOFNCby0/QFRuPFwrVFQxZz9ZZlFnKF4mUzxfcS5xXyVaKC90QSg9XytsS2JvUDhjWDxM
IjovKj9rSixybU5GVHVSVTg2LS8+OFtWX1VvR0NoV0AKJSFJP2BnXEROVFcrLylmSGI9VCFKT0Qo
KU1WLEdgVltDdSc/UCE2XE09LylHc0FTN1IkbCw6JXM2OUVibUEhOWlebjdfQVNBQXRsTWJaMSZE
aVxUTFNnR1hucyIuU3B1NixtNS8KJV49Mm4iaGJiLS8+TkhQUFgzV0VFKD1JXUNRSTFuLF0vSDF1
M2luYlNgJixsPyxiLWw3bT1oKF4uTDtwU0MiKDNtQic3ZlIzcT43Izg8VlNrT0Q4aipxQDAyaj1t
MSNJUiU8VGwKJV80JCJjIz49QUpEYEdQP0hLXCxrO1c5Q1cyKV8+Wk5cIiIhUkJsMVtdUzpeNjc9
XWdyZi8oVyo0KlYzPCRHPDZBUnVncGdMSEIpKF4oOWtNXlwhRSNvWGA0LV1zKFFaZ1g8cC8KJWhW
RURQSmAjaU1JXmFDb1NSMU9TR2NXaiNwKVRUPmY3REtiQm87SlBsRyhmIjVTcVszNzdxXkUiYDt0
QiFyJjVRR0E5TFRfYTBuXSozcEl1aWlFP0I2MGcuYyduYkNyU18+RE0KJWopSyJHNWxhJyFPOiY8
QCQ/KTVwU0prM15IITVvU0E2RCEoJE9vRj5KZ0dIdGtdOUtZUzpWVFRWMVhBKVUmSlEuViI9I2pd
dGFmPVkzbF49Y3RtZ1w1MTwyLFxoSXBTPSdJPWoKJWNnJjtYOy83MW8wSmxSTCMlWyhpak50XmBK
Sz1qJlIsLVY+V2BQI3U7K21VKUo3LzkiZkprJ25XJm1PTEQqW1tkRT10bzRSVCI3TUBKJyVsQi5c
NzxkPjNJNzM2SkpATVVTb0gKJSRoUCI3a1tqTU5DaDtsaD43TGhgJTcoKShPQkxNZVRVPEBPQi84
YWVeb1QocE1GaDt1QCxJWVczTF1KLU0yJERtTF4nZTZxYlcjMU0kKj5iQ11iPnJwZmRsYEZURDNe
P0pmSk8KJS8tViInKz9AJ1s8cCJGNSJKY2h0NzQhXmlHXTc4RCtEMXFfNiEwbUMwVVAqalhHOCY5
R0Q3S3I4JmxBNy04JFlORFtTbDopWD9VZF5NQmwnS24qJ3NiMipoPW42KHNkRzQscl0KJUMoYU4/
YG40L1shbythZEthdWZFZlJcOEpwTypncSFGaTUwSjtdQzBqNy4oXFRwRyRIOyRRVEFRcFtGTFxq
IUssZGpgJTI9UkZDXy0mPCJwTjQsJG9iKERVNCpfbyRDY0Q0dVEKJThCTXRWOGJeZ2kla0RoLVBZ
JGo5Szo2akBJSWEkQ0ZlaVZQSl1jLVZjW3V1QGo1QTlNQ0csQWwvLTpxOEUpJ0pYL2RFVEFOV11Z
IzJnWisuOz1KdStvYFxbWSdHL0A9Jz8pI2kKJXIxKnBJSGtGN0xlY0JnTzBTaixbOzhyLlghQzVt
V2UxdTNzNiM5JkdlWnRdTSl0aEYsQTVbISMiNzBuJy8xMzA+LTxjIlNtK2UqS1laaydLYTMvJlcs
UVAuUmRmSXAqVUV1VSQKJWVxYzU6RD9SbVlOPSg7XFY+czxPWXJmMFxyXk1HJC1ZalF1TGFgZDso
cHNHcjNCKnMpJjJEP0YhNi8kUldkLjdsVEljODorL3JIRmldRkw/Qztkbi8yOU5MZ0ZfckFqXkNS
a20KJTF0a09VYEAhb2VRaTJ1OTRkNHI9YV9zRypVRk0vY2opVjFML01HRCE+PmUxKF1OX0xFMVBq
SllkTD8nYVM8MlwzKzpgbzktS0AoK0kxSjo7SigyXi1hQm09VjxATF1KMFlwL3EKJWAyRkRGKS0l
cU5NJE9TISg8TEtcWDpYdCQjaEZydFdRZ1NDTGElQTUlKjorJSs+NC9PZydcXWxaMTUwLmVHPFBF
Z2UzbGM/VHRtcFU0V1lHVE4qJj9XZkEsPz4pRjBYQ0xVQ18KJTo4RXBnME5zSzpRQUtAKiQlU0Nt
JVQrYCNmai44QVdxN08/PlhcWDpnNDxqckBAc3EuKkBNXy81b1cqJkM9JmgjKkAsMUxwKWltQ0A0
LnRRJzw1ZURnJihcWCVDO21ZaDVeV14KJVk3Li0tXnEmOUU5LUAxbiFjYHFZXkxSUDVSbShONys+
MkBeZ2NnTihMXm1JbWc3Ly5FKkdEaTpAY2lwMEBrYWpkNE8lJ3RXY14hV28ybDYlI0tiTDBSbCpR
V2JQK0RDXG5TUFkKJWV0ZHRJI0tda1IwLF9DQ003bHU6LFI4KWI/LCpWJTNDdSkpJ2ltOmBhRm4r
U1gtTnNKV2dQJEg1V2FfI0tZZmBFJC08UktTKkU4SyRabUxlISQtJVRyUCFtaiQydXNaVW06QmUK
JUZlcEZrLmhGInRrTWZsdDdSVj5NUlU8VmNBOi1nZ2EvTSlOJ1pXSj0rM1lYPl4rSyNpbj8tSlMw
TFdgO14uYTM0WzJeSCQiTjZyRWBpNVpYPTJJZ1U7a20qLzxdUypEWD9ubUUKJTgkZDooSkJNQm9k
UmNpKVMkV0cvRk9qI1cvLC5bIz5AIScpTnM7QjVxYi9JQyJbQ0RmUWRramYkO0BbZU1gRyo5VWEm
WUNcUCJCaGFEJl01IjFZVnJDaSgjUm9AQWVsQjNgSlkKJWVVTD1oVF5wJlhTQVVHKlRgXl5TVDpg
Mm5HY2MqMlgtailpWF5pNks6U2lzT2U0YWhEPj9aYEtWTC49MGsmQDI1WSgpQTdCOV9KUSNdNkdi
LmNbPj1LOWsjODFiVUJwcDghPl8KJTFtIT5YSHA5aHVHNipoSVIxTDAxZ1RTJ18lTGRPMy4vUzpd
N2lYJSliOCsqbEcrT2ppJ05WRVAhWGZXUXE4NXI2azEsMTlJJlQtInFrPW9sbmFfJCFsOCxGYSEh
WmRbN2tQLTIKJVprWkNmXE5taVklTjkuQF5LVXBgMy9wUk1UV0EuVDRxK0k3SjYlWUVyZjFkNz4s
a2wvM0YwOVVLZWRkQmlrVkZPVHNBMypiRixzcC0mTlNpJ21YUTYoWD0qSU00VTIvTUA1US0KJTpl
QFdVTTlEVT06ZT5ZIkZGXmciWGFBP3Q8PixJXmBZaVIiOjdxaDoxJ1trQzU3ci10X3NCU10+aUJM
Ni5Gal5yXDFDNSVMb1QiIkY/KlVnQFFpWnFRTSVyb2hxIXROLTclJUIKJUxebEQoRW8vZE0wTWk5
cGhRWF1yWzMzYz89Lk9CYCglcGxPXk5yV2JQW2VrYTYnZDhyL3JYLFMuYztfY11tVS00VDM+JSxT
WFA1dTRzIk9QY3VtXG5kWmV1XV0xbDAwW0ZiUDEKJUMocmNwPWkzLk9KOTk1UmYtJSU9azBBaEBU
QS1OJCMyMD9da1lEUDRNYEApclw5O2kpI01VKUMoQ29MXVYlKTNvIzpiYC9KYmNgbD9xbCFuNTQu
USE5LF9mODkrY24tViVgJDAKJW9hOS1kbE1ILVlKMG0sKkZtb0NQV21sK1crSzIkSUJpbEFWXGQq
JmtKZSFNbmVxJms4W1NKP15Tak0kRVsqQjdhOHA+c0goSXNsNGdpXnQvLE9PUTlQaDBpK2hfQ1VQ
NTZpYloKJVVIPiwkak07ciNYdDotXmRKRyFHWGhiJFQrMSZELk1mKkxPX0VpM1hpIikhUUpUJzs3
OCYjMDFFYUg0WTptOExTOmM4KEMtaU5ULV1qYmQlQUVETipOQXUzVmJdcl4vKVhCKXQKJSlta3Iv
NjJcSTVpKXEpUU4sKUMsO186QFhiS05IRjQ6TGdmZ084Q1giJFAsZl8pPkktN2EyOyw1LEJBPSFB
OlVqRFZJRz9rcG4tNFFqbXNKQldSIyZgZjA2UVM2bmEqYkZWP2AKJUFkSSxibF1bP08ha1xgMl5A
Kzxmb15ETU0yLDtcU2JdXmJNTyVpT0BiZDRuI2paLT0qZkImTkxmcURtc1NZaSJEKEtoZmVQKkZj
XVE1IjxVXTk+KmdVJmJ1UTYtS1EhWj9vbjwKJUVTQmBMbyVITTZTXltfdFpkM2kqMHNuLEM8LExv
K082V2g3KSNfcl4wS1ZKNVJIWU1wTUEhcFktW2xjT0tJZS5Ca0BTZCMmMjkhKGZTUXUqYlNfXTs9
bmstRWtsaVNbSlFSUGQKJUYuS1w3OydiP2o0ZUhjRDkmJ1lcPUlTIkByMWs6IXFcVVY1L248Uyhf
Uk0uIzhacTE0Vl4vPSxRMUMrUVpuQywoSTM+YDA6J0sjYjdIYGBSZmdYcWo9WypxZClIPGdwRkAt
TScKJTwxQVA0RkwnTDAoSkNWYlxKMzcuNDxnPT9xOGwvYjIwMzRtaFVbc0hmcVZgPFtLIWM+Mjdq
dHJGPi01W2JLUDYkRyZQWEZmUVZqQUZXK2pWIWhTI1Qhcz4kMTlSQydHI0o3REUKJVNXO0hXbHJK
Zk1UR3FyRmZHJ1ElbEgicm1CVzQ9RihxTGxnTi0nNGE+P05gS1JyWUxtI0cnPFRBYFpsPSdXODVl
VyRyL0QrRVg3XSJxYi1dMWZdTmZDT15Nb2lePj5YSl4uWjAKJTMzcUNwTi9YWjYvO29ecy9UbCRa
Xis6XFgjNEpbXC8oYztabTVEUDMuIUtHZjolKyQnSidRTmRbNE50OV5UQmo+XztvIlU2K15RI0sw
OzwwXDMjXTtAaEVFLio6XlFNQ2BQPykKJUxHUk02YmxPcV4zRXArcExZQjBjMWRWTGY3OSNpKlU/
VCg1WTc5aGYmQm5iOFs4alchKytfKmBBNEtgZTwnJltfVEc+KFQybU5QPUNrV0ouSltuNSFrSzgi
cSg8MDkhIzktTl8KJV5NZC9mcCwlQSE2T2hdKydOS05xVUpfY2ddRGBcRV8jIW9iYiI5IVg6ZnRN
MThpdV9oOjZMMkdLLWxFWVowQjlkaERZK2kvJUFhY3MxVUV1Sm0+IihZUUMjdTxVT29laShyZSMK
JT1kaXIhR1NZNlclOThEIkEkJEIkTFs7P081SldfVSRJTGsyMzcsOExHJlVANlMpW1xBQjQmXCdR
U0xCdUczLkkrKzxtMFgnUUxhOzhJKD00QCFNKiUlJ2gycUVlWTgwbDU5QFwKJU9sRD49VCwnLUtq
OiFba1xbSF40OWlQIzAvTGttW1MicC9UVjY0QzdKZk8vZj87TnFJTGVfYUAnRidJJyM0LGBdTUcu
STooWFtrUSZQWHRQTWVGQClFZkxJIk9Cak9BRmhJPmoKJU5HZjFoVmxqKUtDXGJGLEBxYVUrJF9M
PC5dN0BpX21VTls1P0AkPmthWkhrUVlSKztOS1I4ZF9daDwzQFpbOEFxUjg3XmE8RDdOR1RBbThT
ZXAlZE42ZS0kZGgzJ2Rnb2tkX08KJSo+ZyllR1c4YTtkNCEmXi9cXWptPkdwKTteLF8tNmRbUiok
Zj1UU05RbjtVSUhZVCFyXS1JO2lEVlVlYWlOMSIwUl9cIWRUVCw2JWJjIjVJRXNCdDFAQzZyJyMt
Jk9XbHFNOVwKJUVsJ2FIS3U+IV1qOSliMEojIilJaGgxQ2FfRyVEWElzLXQ3TnE3PU0/anMuKltF
MW80JWE5JXE6bV5NRlA7U2s3Tj84UzEiXFE6NVhtR2xBLXNiVElIYipOJlw0REk4MjRzMS0KJUVO
YiI9LWlHTjoiKUhpVUJAXkhZRS0uanQrSTVtQDIkYFI5akRBVmImVTFqZio7RkN0OnEoNkNtWUdY
X0UpLGZjSj9zYWhQLFZ1Q3FaTSliWjI5R1kmRk1UUGZXXVctbVNnVmsKJTlvcSpfX0xqYyY0ZCJv
IVNzaDlqNXM2LG4uYTdoIkUpa1NTVGQsJkQ5LCNhOlYwIkYqKlg4dCJKS1daISJcWTJzQTkpZUxK
aEVvXj8zaEElVFBrLypLaEZob2lbWlNAaUJHVGwKJT50IkZZPWB0XkFEZkJsODE5KTJnZC85RUdb
YGlKSzx1Pj9WTlMzTVxCUS42LktcLHBcUTxkNnJFcmAqTlw1TWs2cTJgLmg3R14haT9dYCxaYzUs
R1wvQD5tVkZrZy5hWFw2TiIKJVM+UScoM1hgMiNSbnVLKj1kUzxuVm5oVE9aRVs2czdnNXJAZFpT
c2ZyKSMqKGVkMjtOMCJEYzc8L2MrUVtVbDpNZnFVVVRDbmoxOV9aYl1dPW86LjVCRlxcM29pXFks
TF1xQCEKJUwrTzk/JWtkRkpsW1hqdCxJSjtdJk0jaUFOUTRjMFZbUlcsWm9tXi85J0pZYStbTSZb
LXBBbkc3UElkNllAWjdtbygwYl48a1pkXkg/IzJuYyw2c3QqZCMoPkdmPyhwJClWQkkKJSNOI290
NWFTWCsoP2oyU2VILTlcRlNWNU01Z19tYGYrLCwvIUo+bVkpbS9kW2NQNWY7RE4haCNEOzFBZzc7
TjpoXSkxK21fUjVhSCNKKUAvWWRuYmBJYmouSmtXUSxxNXRgNk0KJUZsOyRGYS5BcCk+TUMiS0Q8
U2JwX0I8ZEpsbEomV2RscUBqRFo7LyZcX2RFIyVfPjx1XmtkJHVoQWtyPmkmKTtnWSQvQ2YucTcx
Zk8zZF9uRHAiI3RNOk9zO0M8YDQ3YUlhajMKJTwwMCNAbSYzJlFoXWtYLmBUNFFrbCVzWlZCWl5H
QkQ+TyIoVjo4RihITXEvbEQlVFJBVFokY1I7Qk5KZ2xUa2dEajltcmFdJyk/P0A9JyJYI2FyPGpQ
c0xBOyVGIVBfIjFFS2MKJUI9anU/bHBkWU5SODIjVCphUWs3MScudGpYdWQuNDJhUHQtLTgyYSQi
MT5IM1tWWi1mVy8xdHFUVnFcQDJVTEdQNGlmMkFxOG1xamYlLlxpU19CX18yWEpvYzxOMTs5XWYw
RVgKJWVvPiUrbSNwSyslW2JkbmhYYHEmL1tSV3RUJVY9Y1hHMCRBRV1MVHNNRV5PXShiSGdSbHMk
YjpuR0hoUSg9bFUubEZAQ2AvSGlmc14vRTVAOmk+dTcxUkwvSTxpcTVCVFlackYKJVktR3R1W2BA
cGhoUnE+NG4xbio0JTJWZHBOMko4cyhiTy0kSzQuRjw1Yz9rLVlaWV5GZFIpakImZipIPEZOXVpK
bVFma0xrZT5BajpxJjYlM2ArQW5kUXBNOGY8ayEhbCtvTlUKJWpmTktZJj1RQTdGa1RMam1lOkpI
NVIwa1RNQ2Y6MWxXKGRpWzROIkIxXmYsOWJcTDVvI0lTZF1RXEllOSxIX2I2KCddPk9NJTU0NFc8
NkA+Ni5gOmlWPjpMPytHUigvMGE8QykKJSVXKVAqJk9Xbl5cc3M5RDFwcFFkXjQoMjdvNGtNdEJY
UzxOaD9EYyE7PGcjYmhKWzRKPE5CRFdiOC10YTAwPkAnSWNfcnM9bUpEZ2xuOk9rK1gpaUlBUUpL
ZzkuSVVkazVaZGAKJTQ0YXQuLz5aJVAlVEdtXzQjIyszaz1SSXRrYFcnVGY0TCZfNGdwLSIlKnEs
IlkhKllQcVtQWWA/UCVhZ1wkOWwrNWlkamZedWNXaCpXV3RyOiVmVTdLaDVfXGZVX1hCaEQzWmwK
JVEjLnNnZ2JwVmZpZVFsVy9TQyk6LCpTTW9oOXJ1LS02Sj0lVEloMG0jSi9nWSE8XWp0OFhGQDJq
TV5mSEVcQTMyXHBFNDMqXyFba0g0U1M1VDlhRG5UVlFuR0ZCa1BlISY8UWgKJWVSXydoOFVPXGs3
SkhDU1VWLEUjO0taLUJcSVFubmcyQ2IxKSQ8SWQyXzhyQ0ZtYSkzT3U+MXQiLT8lIichOFlSKVZK
VFtrSFteZF9LRDFpRWA/MjlQcEp1K2dxTjEuUDIzTzMKJVVDLj9PKnVjZWpbS0JlPyEoM2YhZ1w3
YFAhKmZgXUFKQmx1RGBLZ0IyOnE7LTlUXFFfVUFCMT82VlVnQjoiOE1fYUUjdSY9MTFIY1klO2Q9
ZTFLRGJpLWwhPy4hTi4pYT9JJmQKJWAhVi5LRSJjKDs/KzQiP0tRcTlKKSlaYXAzKnNLLEIoMjNd
Yl0+RXMoIzhRb1tRTCtwPHNRL0cwUjE9I0snKU1hRihwRzg5cFYpNEZkOCo8X2o0STlcVltFNSRG
NWdkV1xATD8KJVZXQHIlRl9OdDozVV8sPUNILzlNTnIpb2BOYlAoKkx1WmxqOTxkJEZpOjtaM1FF
Sjtscy1RSVZrPFRxS2dmVXNSZFd1NlFuSzpgOXFKOnNZIWYqazZub04iK01QPUhqL2U5IlMKJT1Z
b11aUF5PckAtNjRBMiVhOGxtNF8yX0hMaF5VOTUhTDFcUCJHdEZjSG1sL04pXD5QU201SUhaXjBS
NDhtYj5MPkRxUi1OXU01YjQzZi86WydHQTdFNit0RGFrNnUyIz90LS8KJTlDbEVJQTxlJDw7VVdO
NjFFc3FRUWh1dEZQK00sYmk3YlVMPmQlZ0Q+XyZRbG1BZjRGIUg+RW9MbDhmL0ZSWWk8WzBDIkw+
MUxrPSVPNiJyK3RwISovdSosMmZdXE5QcU5fUlgKJTFpRVYmYy1cXy9DZllHWUFgKi1YWFgwUypF
ZEcvPlB1X2JIa0VDITozRW5DNl1wbjE0Q1k4J1AjT1FGKj9FTy1tY1VAKyo2LjNzaT8yRGczMlJ0
JmVhUUQrZTM4YDo4QS02K28KJS9EaWAtTTxhbi46Q0NGLTorP3IsLjklPlBEQ20uXmtJTFJ1UCIt
MEhCZGZLIWE2b0tgTmxVU2BaR19mS3MzUFV1JFgoKERtODdcbClJMVJIUUdbVT5xaUwyMmQiSVJJ
Pkw3bkcKJVVXK2c1YTc9TyQ8VGZjR0prJS5vZ2dvRlE0TiNuaiIoYyJpYlUhR2VAKjttLXBdPiFg
SSw4cEE/PisiR1VJbE9JKEJhSkRjLGRxKzxNVzIhViJjI1MmP1s7JG8uIk1Tb0BvSUoKJXFBVjo4
ODVVUGFqKD88NUUrIjsjXy9pUkVHNks6aG5CZD5gTztDVE4zJF9Qb1N1YExpNylhIiMzPWUuOFpL
NnMuKDxvNW5RM0VkTFFvOmI5W0VCUmAuZXRYYW47KSpZYnBZaCIKJTQlMk82Pz0vZz9MT1czPSE4
MzMzSm9kZEhSUyg8L0JyOFVnYFRXY2leWyFha1lbXmFuWHIuZjtXbGlBSlBYQnJQbzsiKy43UClq
OFVmYkMuUy5qPSNPVClSO1JkOmlHUVNiUE8KJWFrWTxURE1tMj9wYUk6Zz5RcSwncTNWYygxX1p0
Q0FcNHNzIkstNjUvaTMjOSJIMWNcO1tVY1NYKUtCUitALj9FbmQlUWVLOjgwUGY4Ny9LMWhUUz5r
R2JFRSRdViE5Mmg1LT8KJT00LUpDRTNlITc+bUhSKTYsa2dvJldlcjxlXCZwLm9QakZAOTdMUkEi
NDpvOSE8anJHW0RdV3BiNltjU1pCKWdcYDhsIVJbNFs8M2tWO2xfInIkS1tKb21aKGtxbnVLRGpu
SU0KJWw7W2xqRlklXGw6YWxNZjFgLWBwZjc9JS9oZC1aUz9vUWRFXnRFTio8MGY5KW80Ik4jWkRX
J2RxL003UWdmPDdALGdkZmVoWFtFZV9XWU9hOWJxbVUqMUImOS8oVShYSykkYGMKJVwjbDs0Q1E9
XjZLTC9DUSFfRi5UVFlXNnBXcDJjb1dFNkpRQlNsMl1LXS4nV1ZPRD43QzFXM2RwO003JjcuYjol
P0NmbERTXydWVTYpdXNHNzFbP0YlWFdnUGpXRzJbMnIuPnIKJWQudEI+MmV0O2ZFXTQjYWprJkNe
UnV0Uj9hPDlaUSQyVVdkcDtfVSo5dCMzJjwjZnNkO2I2TChkRC80TSssKjknUGliLyJUOUFQckk+
MmJmalI5Q0IwRGB1YUJKayZHTkY2Oi4KJWxWXSEhJUVhMkxmbWc2N0lHOFdIUzwlRW1MXm47SVRJ
SCxoKyg0ITNKIUs5OmJYb250ZGRSaS4lP25OSy5AWEBSTTk8SjdKUT5wKzsrXWkjUktCbEs+SGgz
TSFWKD4xPSpXMWsKJVRTIUdFWGJQNysrPlgjKyMwSShaIV05UWlgMVk8VXBiPV0yMCpCUkk3NkVh
Tz4kcWpiJ2pCM1RIUDpbQTc0Ii4lTnE7MW4qOj5zLFEvPkh0I2gxQ3BASUBgMzA9aio0cE9eMUgK
JTRWLzkuKipqNzgzSmNDTGVvbj1qZzYpZi41aytGcVc3LS0kPyVLLyJQZ1JlJFhyaVVFY09yZ0lA
aGkuUipzJyViUEQ5PStdR0U9LDFObD5WPjQtTkwiZXNaRjtPWjtQWjVRMzoKJUFFQlhVJk0+LlRt
ZV5dTCVgXCpFVkVIVm5AQ2AuY1hKMDcxQ2hBcD0ycEFqPzdcaiJWQDtvcCk1ViJdalFUSkE3Ry0z
QDRAKjRlYCpZNUIxZStUZy0jalAhIk50ViVEWlcnTTcKJWxZPUNWUWlZI1hQKUxhY1osdC5xal88
LlgzQE1RIXBfIWFNTjldclFCSCdCVjgmSjpVRlFqUithLVRpXkJbUCYrSShrXy0/YFZsZzM4PUJF
cEg3bDpqaiwkZURUL2dsRHBBW1MKJW11XGd1V103ZlcjSkQtUVQ/dTwjNlAqdGpZdUpQWyFqWyZj
ZjUkK007QDh1Y248SHVzT11Ba2VeQkYoTS5bVlxGa3QmakJkSyhKJSdhYDs7Qlw9LGRgJWNkQ2BP
XVNZRz1uYCEKJW1RZ0BrZCY1bF4xWE86P2JZWFZRLjMsRzUqVyVVOUZeSnFnSSkwU2dpaDF0UzJi
aU1IWC1mK1Una0ZHYjlcW0NWPjBLRlkvXyNocS0xV2Bub2M8Y1U9IyZJQzwzQSdANGM4dWMKJTlX
ZFxmQjkxSjlocyZsSlhaL2BnXyxBTFdlIUEhJmBiYD46IzZkZUpoYSltY0VxTV87KUtIZEpBMUI5
OGNcP0U7cE9URSw1UjIiXnBtQWVdRj5hbjMvT2hQQVs2IyRNVC1LcFEKJVJdYkZWSyUxVzIrLkso
cTNUWmI+bzRoXnRxJkowVU4pQCIpYUs/RXQ5aEEuPTRLTE9OPCZmKWdSRzI+I1AzQEw8YGJhM2ZD
Tz9zSllhMjYxa2lmKE44O19KPzNBYU9JVSc5QEcKJWo0VDtGWl5sUVE2X1IvJChpME9bXCUvWVln
WS9NYjJlaXNcKWUuRj5ZXEFtSURFNytUVTwsRHJxSkctQGVDK2tLZUBeMyRDVyddXyZ0LmlSYSpV
WCppRzZmPnJwT1FaYWBiQSsKJSZPYlZeTiNfLUlbUT0pMVs5WCM7QExBPGsoWSNjOE9dXUAiWjkt
L0hQXS85My1YW0ooaDNwaUFldFgnOT1LPD9MY1YxIT5fNlYsPEwpS2UmOD1bYyI3QV43M3I/YSIt
JShicGMKJUFHZystMGFCWVlcJD9iOSdWPyk/TDo2RUlpQWUycjduKnNWIihOakk5NzVycigjX2g5
JUkuUTwhbFttL1k2W1lgJykmOXFbRy1LbCZkPydBQD9xITQsT3RkISRiRTArcjxkQ1UKJSJZO11k
RnUzKl1CXVJcalAyQS4xPitNWEtJIUVOLCpQY2lqLC4nPjpbPChyaik1TT9rRCdGakw5SkcwdWRo
V2I7U3FzRldraHBVcjNZOyY3Yy4xXV87TlBWX1wwQ25XWjxVJCoKJS9NbC1DNzkxYzxRYzw6MSdm
MzNgQjA2N0ZXTEhPJik/ZCdtbXUqZzMlOC9zaiJgQD9IZicuKm5NWEpuOjMxTyUoQ2dARj1JR2hv
aWZAM0hoU3JLcyxVOWtMWSEpQypWPFw1YWwKJVxBJVpva0M+XD1vOmwsIVFSN1NPTGpWbDxBS0Zt
Z3JHRU8zVyUlTls3ZkQqRlNabms4REhQXjVCVz9DUz09YDN1ZS4+aT5MImhfXFJZJkAxTzAvKl9i
R1BFOk9Bck4rUUVDTDMKJVo5LjUkUTwmQE9eSVtKcU43V2g9My91VVQuX2htaklhZl9bWC8vUVdK
dU1lQDRLbkBMSUJwNTVoYjFndURFRChpKFp0JCRadXEpSyhrS0RFSDtIZjIpIS5UTVdnTFQlNWtn
Yi0KJVBpdGZnNXUqV0hiRTE4XCxzdWtDNiNqLDVGY1xGXkxyTyJUVSNqdCUka1JJNVJkNTY7OUlJ
VVIzLVxRQUd1MXFhJGtqaSEsXXNOSDtvbT9lJjhZQFJccGdWQTE2Rl4xIkkiRXMKJWBuLz8kNGUs
QzZyXiFjLlE1M2NnZmlTVlVuPnJVcTIvJzBJNXRKcV1CO0AqYksicjdUK0AyMzIpQlRHPk1QU25z
cjRVWVErVUo2MyhsUVomQ1UwKkAtOTRwIlM7IjpkPFRZXmQKJTZiYmpzOUo/bTlrWiZqdS9OLzhK
USRkcipKIk9dcXJhU2czV2o3R1ZGVG9sSltAZiFkOTpcbzw9XjxkUDtnZjk8I3IqTVI8aFlxal0o
Mk49UUtIc20zWTgwPC5TMTgycT8mUScKJWNYUSRhSUYvT0gqQWlray9eXS1ma2NFSnJkLk9Ebl9n
VmAjakUjb2ZaVFwkP1U/bCg4IU1dOyFPNTBuaWZpYElFQCdDcWZJOCpSJUhda0RBWDlvWkEmQXQ0
bDYkcUhFVy5FM3UKJWFNUl8mU2E0VVcmNi9hPXE2YF5LJzJLQ05aWXJiKG9bbVZrRyc1NlU4cEls
TyxEIUMoKWF0LmRFbz8kRDZhLzlsV3RsVi1sJWcnWkIlPkcjTU1qZmxaUUN0YTp0RDM3U1RHIyYK
JTQoaypGRilrRTQzWixnSiNUUkhhP3NAY3A/amNCYWJXLlJ1aTNeOic0ZFwhJCkuPy0vJS9cSzBu
dCJDP2VXKlhGYE9gWEkuaEw3OUJWIUA3YzVAVmFVJXUqSG5rYFB1YlwnakMKJSZOSVwsbGtHZltw
I1BKKiMzTlI5PGMiJjRFVVMhMSYwb0RtajxcMWs8XDI2VjtONDtPUC9OX2YpMnA4NzFtQGljOiYt
Y2lRVjpdK0RrTVdIQHJIaXJjWWA/P2gxWTFqalc5QW4KJVhgUTBwNid0S2ZRa20pc2ooI1AhJGJR
WDBbU1cqO2onZ1dsWWRDX3BXTkRvby9EQ1hfcyRLTVVXSF1mSG0pZ0djMlJgUGpcOSJMJEkzTmpD
Jk9DbSIpX2oxWjk+SF0hJ0VvTFsKJTJGYVRwRF1ZUl4uSm03PCwrdXJebVo2OjJLLWVVTStaRnQj
KmFUYSVvSHA3UVw/Z2ZlRS9KV0dBLitTMTlaM2A4WkRxMlhDUCctQlROMl1dUlQ5YlU+N19tSUJG
b2dHQy8vQmoKJVs/JWIvPVJ0Izopci9TN2IuM2NsUjoya1tyamdpKShlIzVJT01dOGFIcFMqLlxc
TWA5RlU5JkJJMHJpZj9sOkhobVwiQjo9O0d0YDo/XjUxYV8iUExuPGhpXCVudCVPWT1FcGUKJVcj
NzNCMmYpVi5sXnEjWE9UQzZiUyQjUk9IVVIhZSldPms8MF4zXnJlOV9gYy9kQjNVQDo0QGVabUJL
RWQ2ViMmV0NKYGdNa0VSOj5GPHQxZ1VzPGBNdG5tVz1XWWhJOzNiZDEKJSxzdHUuRUFZT2k2VVVq
MVYmQiwhRmwiST1YM3A0TyRCbCE0JCImW2JoMSheVj9wX2ZZUipeYmZXYEIkbUA4bmIyNzlCW2VH
TFg2Jm8nNk0vS3IwNUFvQEY3cCRBcTY2OldhKGIKJSYsUyE+cE5aLmI9ZTMxQCwmUUI+JSw9UHBf
WC4mR10uJTRaWCJfXVQmPypuJjtsZjFHXmRqQmMiSSdZWDFmdVBVcyxKdUk4Ui9KST1aWUIyQDhf
cjdgPU85a2UiUlRdaigvRj4KJSFLZUBzNEVScFVMP1NTKSM1SGtLPj5pVy1CTjcuWi4tJy8mU0Bd
Iy5cayZuXWBuSVhINl9tLHAtKDo5LEVHRU85JkZHKGpLNWtsYWtUaWJLb29JTXQxSlM6b05qXmtI
OyNSODgKJWJTJkVjT2Y+ZjEkZChyclZCcjI5XTFiXCJWRURYQjVeW05UMVFjPEo7aV9BKzo8K0l1
JWNcNmIhczNEUWE+OkI7Wk9BMURPZzhESUpJSUY7L0VQS1BAbUVWJ0F0SipkSlpDWUEKJVtoV09X
WnRYXGRRX0laPiEtdCdMJSdbRlE6cWRfOCRuc2tTUl5IdVBEOy1mJkBbWlE7YUxqUVBPPTNpTkdl
YyFzSi0tcDJpKnFdXE1dLTVhQ01gdFMjXHFHSDVhJ1gzX2xXMEAKJXJhWTZ0Zk8jSmlGTW08Q0Fs
TklWIWFHdSFxOV4mSmhZY1RyZnVQIkduUTYnP0lLVjJUI1QyPUJVJGBGQ0FzSWBPSmtEO2cyPCg6
PHJVMyxEZXI8XGUpKmIxPG9VSDcsWFlHJzYKJWU1I0RyIk5HOVQvSElMSjFmZ1U2ajdaXUdkXHBW
MEYuQCdMJk49dXAqWENkLlgKJSg5PkNzMipNPG5VMmk4XSZLV3B0Lk9eKWkhIzVKU1I2O3NDYksp
ZWoyXkQ1LzBPLC5uN3UxRURodT1cMSx1c0ZSPXE/RCtWLiI+byEySy0oYzVzNVQ8KGplI1FOYTU9
KzNMXDQKJWc9ISZIN1lxMWQmS0JOb11acENPTE5YS0xYdTY7cG1ubVJeSktkQHEsbXVpWWxVaF0x
L2dGOTsjaDNiTGcsPG0sKTtydDApS0dFOz42LEU3KnI5PDQ+N19nTSg1Tj04Qkp0ci0KJWMwUyRq
MCFRRStBQSN1ISo3SjkhaVw5QSYvKmNCKWZFYlRWKShVaWVIPzNQY0pfRCZEKSEnN2wmOENxK2Ym
b1BBZ0lzZDppOWlgbl4qLmptQy5McXRvc3VvYW9zK1IzYWEyXi0KJVZDWyl1ayVFK1ctNU4qSTMr
O3Q9IidxRChvdVVjb0kwOCZqRUZVJG5lR01vWD9XZGxSIUYkKVdiKllnaTVvTWhSa3UkIytYYk86
VlxPXCNnV2c1SS0hYUwhKlY6closMFs6KmAKJWRPN3BgQic6cTMpJi9URFRXV25iLkchbi9hMGE9
SjlZNUlTZUo0PiwrWykiLGdnLTY9OVhgPlk6LDhyX0Q0XyUwM2dLPDZZJjklMjMpYjg3OywnWTlC
UlVAPig2U3Jycl1wIjgKJVEwWFEtSG1yJl46RyhDbSJmR2o2Xz1XO0ZEVitnSjxsOE1sX21IaiMy
NTdoRSFlb1ExImMvWVNvTGoxSjBAKDNDRU83M1NQXUhwS0ZuJDI3WW5YJWtJYWUjQjdvdCdvNiFp
PCcKJU5pZXVPTVNCPnJtNys8cyltNjFXWUgjcUhoRGAhV25QMyIyQ11PLyJMPmVGIkNxKjZpamld
LmYwI05aQEQ/dFFcUGZwaiREMFtGbk8vNGQ9SDJZPTsyNXUmUHFzMSMiWE0jRDsKJTo7dCtlaSxW
UDpzITVfOytcN1cvKk5PbmVgLC8+Mz1OS1IqayNnWUU9RU4rQEAlUF1AS0NXI3FqJyhPa1JDLUBZ
NzZvXC5kNE80aSRPNGVfaFIrOyk7Tm4hVGdVOyU6aURJOCkKJV9aSjM+WS1Ec107MFskRFBzJVNV
NVpVVkQsRSIjbzNmJFpaN1A8WSohKFNwdEhgMG9DXzlsXCkqJUlVUl5XRTZTUkNOK1M9alRrajtH
IlIiMEgwXSYqNGtMQkFyMDkkJldnZCIKJS8idCZHPz0oK0RwL1JjXlwnM1BgWkdgYVlNXyU9NE8/
NytENio9a2ZKSTdLVm5QSzhrNlVaRTBbWnBvQTIkUXIyOl4nQyQ1UiN1a3BUS3BsQ0hfIys1anVP
KURtS2AtMmRgWiMKJVhnZEFdaVVrMnBQaEojJzAnWFJSaThYT1JaYXVvTEojSitUVjQsOTc/MjRp
b21KNlkmQnBmXHNqVHA7aGYhcSFMLXNPS2dYZDROQC8tYWpuJkRxSFEoVUsrMFhpYjBVaDgmXVsK
JWRSZz1qR1AvQTQ3VVElPjhxQytjJ3U5ZUJoZ2pJcV1HJzRCI0lFaiM6KTQ1PFw0WmhiRUdKZTpI
Lz0wY0BuNTRfRHJZN3RcMU8tOzVjNGhhbDk0V1xdX0NBbj1fQ2hgTDpyUCoKJSMuNTBvMGliL1M0
c0phQVRJQTdyYkpRczspXEBGby9yR2BQbGYncyhRYzdhazJxSTVZYUw5aklFRG0lUl8nIUFmY1Ui
NSZEMyMlaGFmJUxXWydDaVdSWyhNZjchKVxhcilPQW8KJWAkcT9lTStdPHAhaDZdYWNMUT5ILmw3
Om5mI2I7UTAlRjd1SztGXyIjYDhXOzRmZlMhRVBAKixJRm4wRWZRcihLOi90VmU5MlBxJldcZkBl
UlYtUlgrQl0+K1BhV2JCQmJmMVgKJW5ibGx0TCVpRTRmZzRFTm1eaipZLG5BLCJQS1BqcCpkIUts
STRoO2w7JC1Oc104WVZDWWQ0QklsVlQ6bScwIkNNS0g9X0pBPStbSFdzMm5eX1NFamhULHFVZ1Jw
VWRGJV5VKGoKJWg+Vl0wcTE3LDZZV2oxSF49WG9QMStyPFZrNk4rK0k9dUVMPklDaE1iPyJdcVx0
KDAxIi1XZUVrKFVeOWgsY1ouZGhJcGAoVS9UTFVAWmJNUz45VCNEKVJLcV4xNWxsSnArVyMKJSkh
bzQvZWA8LThlU0xDIz9OaEckZEJoTDg5TD81amJvXSQ2JTVALiYjNEpLdWRGdGxZN1ZkOHVzJ1db
azJCb3NBMEZWNG5GdDcrR21HUSlZVWgpX0dFJVBRVkEnIy1rRUtSV0QKJSkwaDdETzEpWDtYNCZB
YThAOFghMmRkSW0nLSlBR2cobFpDXUxgNShFXzY/WXBFNTBhcDBhTmRxckpQdDVXPkpbPFhYI0oi
Kzg9OScxWjFoTDkyS2YqIVhnYzRiTU4mPGgmRWEKJWkxXHE/Il5nbSJnQGR1PlZ0UidZXCVwaVRC
Y0hTYWhndVh0WkpRITNcNV8haCJrV1JNJEhsaiRbZGdxKl8ndDNwQ2ddZ2Y4XiQ2TSVyIzZCXDFe
W0hoNmUzL1BzRDclXytGTiwKJT5MK0FWVnNlWXEzYDlhZ1ssZElxPUdiXGFfaTtlcF9vJ0JwUihH
O2RzKT1WV2dzOlhsXmFnb1AxWmslZkxybXNOb1ckNUUwQCpHWiFNM1gmWTM0QDIlJ2pHLyEiR05l
NDI7TjoKJVNSNEtGJUdcRylyMDs4Vi1lbzcrPjglPzExV1o+U0g0NHA7M05jIz0maFA8MjNgTCJc
JERyaDMzX3NJUmhnaGskOzNyZVhBJWNFJFIiMC0vJy0rWzxwK2ppSS4idW5ZUjtLLUIKJV1TNSdR
MjI4KWFVcmgxWzA1JkRsX05vVlwjQjNcSEZePm0vTTgsc3JPKSJZdFkzYVUjZkFbQkBHW1hMYj8/
SlpcWU5xKkxtRlNxPShbUWg8bTEtIltQLltwaTkxZGVFRGhcQG8KJW9yNSpLQE9bM3MtciFsJUoo
XSJlKGE7SCdKRSNkSjdjai1OKll0amQhSUVaNG1gSlkoJU8wVl8rOWZEcHFjPCIvSzxMPUduIkZM
cUxIbzUybShuK15QPS1yPjErbS5EQlY6aFIKJWBcQj0laEtIVXFsal5JWS48MGBoMTllaSRwXWJp
WCZQXSRWS0A3Mj5gM11lNmAuc1RhWmg5dWNROFVjNDQuJm5cLzttUG05N3JKXUA1aXFEbFkmayRu
RDczKVF1PU9GQzUiNz8KJVQkMi07NiZmXUsibCF1aCYlKF8oWUo/OUsuZl5wUjsqWiFHQUxgVltb
akY3QWBaUT0qOFIyYlMpOl85KUh1R0Vham5mdD5nNWxZUjBpJSpPUy5GWSMlY1pqM2ZhPzgvUkcz
LycKJW9PIjJTaUtzMU1yNlRzYEpnIUIoNWk4cGBjVTUpQ3BrXiVTRikxOS8jVWsuYilWL05bPSNk
c2ckNHI8VDtYOEZXRjJFOyw7UTpZJVYkQ1I+WT80U2s3NV9DLiREPig2YF8zclkKJSk2VlwjNi9o
MEkibWVqajhoLlgmTzBOUSlVXldYWGg7WTNaQURRMiEzOWpvXV1ULUtJZm9CZ19BYWNaQGFGNzBO
NjsuJTFmcz5WTWxGUiRWJ15CNjFFWF5nNS9NTjUzKCNcIjAKJUxoLlwuQjJEQFNISiJwMmY2bENt
XnA6TzEhRClWKVlFNVcrUkU7LWpOLCk5cEVjLCZXYF1kKVZfIlwnbmozcVBxWWBgK1U1NVNYL3Jn
Xl00bCpFMSU+JEJMO1ZcalZJPE4tXG8KJVppOj80VCU9OVVuI2FvYkxNPihrUUQlWSIqISZcb0o1
Yk5FMy81VycwNDVJL28rbXBMWUUnOGJMNV5eWUM0WFxdXlxXY2ZwWUw/JFxtJ21qVyNyQyU7PHJI
N0BMVTdQK0RwRmAKJUReM0ckI1hzKjE1R0VZWm9JR0VSYmdXVVo1ISlIIVEtPDUvb2hrOGVEcWZe
Y1Bvc0NWbSIjU1RLKE9dO1VZOGRvJmInUXFTPTNbbU4/cWYyWU1fdUUuIlRYLjRsK2pjKzdeMUgK
JTZOXzlFKUBIOTQhbi5yYywzWHIiQ1Ena0BkQV0wbDJiWVBobFdERThLO2k5ImNJI1ZtRGFGRDhZ
T2FbLGxHViNSYm9NQEYwXGYlZkZsKlsmSWBDTzNDVHIyVnFXJ1hAMDQvb2MKJTFZTjZmIVAnOFBW
KWcqXC5XXWopcHBrUWpNOStDTj1JKllBPk5XIWJsWE4/L2xbRVxTaz5uLixGXF8tbD4zQWJyPFVM
aidqLz5uY185dTczNTExZyxQITVMUTZoVSg3N1Ekaz0KJVJRPykwLzwiTUNmVTRKWSU+T1dTZEU5
YWdIV2JmcC9hUVUiM09TaCJYPWpmRmplJ1ghPSZSRy8hcXVSNDJuM2UlKD9maTFbME84ISdAYiIs
ckI/WHNYSi0lZ15uViowKW5hKjoKJSZkJzMlUDtQZ0tTJTEuaUVhOjIjXj81YW5mTzZLVCQwNlIo
cSFtcVBcUi4kR2RJTGkiY0lFX2dILklWVWRENGtSITpDXylFRVEzKCJkQFJDZ1hjTls7MSRaQTAh
NyItYD4kLyYKJSVcRSUiSnAvJ1IiRkkhTjwrYSssQERmN0JQRkVfNSxKYW9qVDd1JmheOF1gdSlq
azIhJUdwTWVIImZwVTRQOk0rKFs3cSY3ZmZgL0pUUGZoWV5HXDBZLis/OUdgRUpKMHFeRmQKJUIl
W2g9bzVbUXRFOV04NTFuMT5DQzhCRWNNckQ+MkBzRVk2RjpcJ05xVm48JEk+U25iT19YXl8vQXVE
cDkkPHU7XHUoIWpIRSsxJV5iXkEtLElnRFxDdThCWGZPL3BNXHRwQl8KJWQwP2JvW1dZI2IkOEN0
bkxmQGtJbm1dbC4vTVFcSUI1TjczP2MuKi4hMSxSVU9mTTprPVohPGBNXFpRZUZgJWYvMG5OODJm
MEUsaiVmbSRkWlFeTXNgSUZSdTw1KSU9WiU2Wm4KJUg/VERVWmMlbmw+L25JW1hWb3FeMl9ZYElI
VFQzS3I1WmhgOUE9ImdMTzAtKD5BTVo5Wz4tTDQ6LV9JZCRdWDBJXCtTSW8iXGYiJF9hLmtsaVQy
VHBPS1QmXTw8L1FJJDlndUMKJSovclBdMzckUVFMTFdDSzRGPzMqbypfX2chMktPa1w9ZkhHbkA6
NkZRYWk7al5fTVVlY0c3VTIjKUVNcEszLjQqU2ZbMEVYOERRTlQiUTpQZl5rViFcQW9pb1pISDhF
LFE3QSYKJW9HPnUiPzhCW05cVT0qUjRLU2ZPLDhYYyFvUSNDSV5wQCpjPTA3Ji5GSzsvbl5ha3E2
bE1PVUBdci9RUS8kKVlOTVo7OGNBTjlhbzBfJCJrSkNBUUo0a00hcjY9JXNoZWV1IW8KJWxQQjhj
LzIxP0pbSlQ3Z2dXPiZYWXNeMDpkdC9rYzRecGM2Ul1TYE1pUVhMRSM8VEwpcnBzV1FpR0okK3I6
MSdnUVMzPEoiNCo5alxGVG8mV2hpQzYlQUZVQEAuOD8zRXNpaUsKJUQlVHN1Vi5MUXJfOic3LG4v
Y0RMT0Zbc11qRTpSMCVdV2EyLFU6Sl9BMTVJLGksUV10JGBPaFovYU11dVlbQyl1KDJRLF4zMSRx
JkxhTUFJRXEmWmhKQCRuaVVdOkhnNWBCT2wKJVpjMCE+W0wvbl1COypQLixBPVxkRS07V2tmWD5Y
MGdOaTRtblZtVFxtc09odVBxYE5ma2EicWBRT0ZMQE1sPEFCSj1qaT5ecCw5ITY9JDhqSUE+WiFp
Ml5iXEVcOCslcDEsLycKJUAzMDYqQUNrN1RpRjE6bUMvUWtNQ2NvXVRNNUclJ0hkJFZqY1UhUGhW
UFM6RmEtUHNgLSk9QFw7XkdKcmxmI3JpLl9mal5kKCpDS11TJVwmZCZhMDY5alNhWmk6UkY3aFFp
IUgKJShhZFlJVCRwNHQhO1o1LTw4ZG8sVFNWK0knKV1cL1lCL1QuMXIpb3FeSnVHYExlKUktVy8/
aC5pI2I8amQ7O2xqQUhgUGc4JT5IazwrQStuP2o7M0BFJVMmX2AkWTZjIVM+JHAKJTs1Xi1YMmwz
LmhIK080cC89LjFXbytJaC1MODpMdGZYJSomJGRcX01YdFg4Wik3b0cqLDklW2lbKm9Rai8lNlQt
YU0mXDZvJGNAVSdzQmxVPHI1aGRIRFUvX0VeT1ZaJCElVVwKJV0xbD49WzpQWmtAMUhyUFNkWHJi
Pjc1Q2FpJ0lySjZYVDtnTVVjYlZqM3VubkhtYzBgcXQmcChOTCNsRVtrWURUZ2tRdEElKm1NcURz
UkgxY0dnczFzJkxqNkhvZyJfJ1RUX00KJXBEQEpCKzBqNlpeXyErJTo8b0twZTZoI0VlNjxdVTEq
PEBPJGtyKGdjYGJFXW5QTUxULm5KZEhSXUNtQTJNIXE8MSkyb1M1R1ErWCU6by45InMzQjVpOS8v
NXA8ZCY1cWkyTUEKJWw7OEkmLFFib0tVbkFaanJjaS5IUURmclNaZSg3TTlQVlQ9WnBxJU5wRE4r
T2hOYkowVkxiaUVdcTAvTzloVnJiUUBWZmRRWURAJjM4ITV0RCRiJiVUMEVVMy9adVEvPU1cIXAK
JS9TWHM2TSwoXjltZ290IXMhOk1LZXVINC8jTTFJUTZVbFF0b0kvbW1VLW5eMmdAaSsuPlVlNFJn
NWklLls3ZDhuNDosQzxQK1VwQjxhVzEuNWtVR29EUVtvTTN1OiE0QWc4PlIKJVM5VUVebDQ7M1BV
MTdaNVghUEU8MUtVPGtsRCQxVGZqQjI3ZG06aFJsO0w6SDZGQjVSX0gzM2UnJTExImJBbitYPSld
SWRgJzpccUJXdWcxKC0sRSotI0lHJTc7PlBzLTl1IyUKJTY4bTIjNGJlQVM2VTlwRiZtLyRgZi0x
aSlcSXQ+JCU+Xz9GaTM5PXUrQVBXP00lP1VQQ2U7X0c/QlIhSjkyTDUrJ19EQydFT2YsdVwzLk1N
WnRgUilMRmg9VCk4bCJOZXMtLDwKJWVrR04zbGonU1NjNy04aEotLF0rNEsoWTdwZi5dLkdpcV9g
LT9bNSFaM2lyYi40Mm8hPG84ZVZyMSglYFpscWsmV1JQczMubVtBbDNKLC4ta1xVXk9PXUFVbzBV
cjU7XTE1RVUKJXEvUi81Q2tPZ1A4LmpLZCElSE9yTWRpVkxxaiZVOlRVJTtdZkRaXU45OUU8KnJb
UlJqOGhIPyRwUyhiX004KXJTWDQ0aV4qYj4yXSwlOTRCRWpZI0kwQ3IjdV1QYjZdUT9jYlYKJVYl
P2Q4Y05jLkFuc3BSamcoal1rZzozQ0ovPCdrVSU4SkdOSlZxWGlpZl5fUVRsRFh0N2hXaGc4ZVU3
dFc9LGUqTWE4bWcqNkNzbiptRjRAM1wrKmRiLWlrVmFhKGNGQ24zOG0KJTlMP19XPE9TWWwrQkoz
OitxUTkiUW5UNUBYWS5HLW4uYVRMWkZDN1xPVnF0cnFaPkdPUCkoJykqVFY7SEwiODEqRDdkb1Bv
STwrUF5wXz9dZS0uK2U7PT5AY1tMQE9ZSCZzdDkKJSZ1bEFmYVppNTNyPDQ1dFlZK2oqX25DIy5R
L09uMy09MSwxMlJtVU8oYk9lTjMnPTRxaCRtcnJXNFlHIV9lZTEjXVpSc2hdaEswN1laZzlCJ1dV
SS9LRGhjI011Ii8sLWtoSGQKJVw+RVtkSmc/MjlBTE90TUE4TigvWzVQLWU1YSxaUGRbKF1QWCU3
X2krR043VCxkayRrYD0oZSI0PWooKG9NS0g/ZHRRRS0uW1gqQUNkI2lLOjEjMFJicTkhY1ZVY1JQ
MEdxO10KJS4sP0FQPks0KSRePXJncT43SVwuYikxXnRdWCJaIlFdbGlZQCJtYCNcTD50KzQrRk1l
YWEhckxZPkIxVWdePDBaP1xdJk1LP3JMRS5nRTlBb1w2NmcpXCpgaEQyLzRNRzIlMjcKJVYqUytC
NnM9R11xWTxZN0Eja2MsNXQxZlYoMDQ9amQ6Tl1jQmkwYlFXYEtAMi5TKWhpYEhIQDw/PF1UbC48
bmMwOGtSUkREMD9JLy0sRG1wJ1Y7VzhWV3NrSi50ISFGOnJwcmMKJWJUYk4xJltoSlRBXGRWS1Rn
Mjg1a2x1SicicEdeYi5iNCpvIy1xRzU1PGhXKTIxdGxHYlc+KygjOC5VPltUZS40aT4jYGxQVlVq
YkpKUkZPPlc+UDNEUWxvSV1mPHVvYTEnODMKJVc3TnE2UXIpJnM4JVVJPyFcRCs8aWRQblNXSWUu
UE1IWDxOXHJtaGVDK289UFx1cGsnP1p1ZU1qcF9NIV1zKE1bXVdVTDlfYTBgWD9qIm0kOSNmclY/
IllGNEdGJiM6TWdTOVwKJS41IUs9IWFtPC45RmxBc29ERSpITm8lbSJgcSkuZEdnRFRbLDVMVTdK
JyltYmNfLVksY0MqYiYiN01EV0pePGsrPmxhV1c7N08ndFhhU2I7a1JmVTcobjlANz03XjhcRjFN
QFwKJWpqPyxVPWtYKGlcbWBdYVNwPmhCU3RHaVhuWFMtIUxBPiVdZGg0QU0yZmsqJ0NjP2stSzA+
RURWSE5cRj0yM286XVljbypKP3E9YiRzW1gpczcrTlxgPmlnU1o6cUlbQUJOcFsKJSI8PGM/Lmdg
WyQ6aUhfRV0pbmI6MzxtL0tfKSVIJjdlIldQL3MwcFFENWBmTEs0aksrYjsicWw1ZVs7LkVzZFY0
N0tpKSIhOnI8ZiYqLFJBTjZlM3RKOiFwXkxqZzpJRFVWZTwKJWpgJTk7MGdFXHNiJzYpQEQ9UD0i
PWdsKWNJPD02KGFTIyQ7Qj1gdWAubmpdTSNNZSllMU91Vk4nX21PXF4pUmUxPD5dMlVSMzBbOlYi
OWk4YExEOksnQzVgLUNPTDtsRzU5QnAKJWNBWWtoJy8/S3JnYnJhaUREcS9nYCxAM1cyTms8TSFO
LTFwSEVeIj5CMVo8TFg9XD5vW0gzc2Q5YStgLSpyJUpUXXJcOkYxbWpIS0pSSyk5WW03dVpYTnJ1
OkJIUHErTlwsNCIKJVtOWjdcY0pdNjZjWTtKcWciUS0jUVJVJCY6ZXBHKGU/Zi9JZ1EmSyw+Vzs2
SlA2WTo3Lz8xbjlpPjNMMkRnUUpvNWs7KGReTWE3JzY5QkYpVzllXUlQa0o2UjJsSSIsZTRKak4K
JW9zTnM3W2YrTzVaKXRuRVQnZm1oLSpydVJbXlkkPFpyQjVVWE9xaSQiOFQ2YGNCVy4jWmojNSxl
aiw4MyZhbDkyJSQndGViPVhoRElrTXBmK3BvX1hba11IamlyS2tMcEExUTIKJU9vajhqZyVLLmxF
UCE6LmwmYDwzJ3JsVWglPyppW0JvIWY9OTJUJmk0cS9Qa3A3VCkjI0AoUzoiZTZHTGRHUiw2JEMn
IjRfKTNHdD1zJFgtSTBDNVBIcjhMI1pLMm1mImgwXSgKJWRuO3RDbkFoY1xVP3NVTWkzc1hvIzgs
RnVUPlM0QFRIRU9lIjFyXyg5OEg3USZRJHJPLjFBKXAsK3EtclpPPiRPSWBdXzVxIjwmL1I7XmZO
WWZhRD9CK2djVlkxMjBHOTcuR2gKJW0zJChEXWh1Ul8+MlFdMm9kRyUxN3VdaWBjJyNWPktbO2pc
XGVNOlBnSEAnRi9IYlRAYVBrYVdcTltJWTtESE8yMl89MlpwLE88XypNIkJMb1lLXE0pI0MjcG9b
dDlqZzQnImEKJSk0cENJbiw2WEwnLlU0bHI7JkUmTmIudVJxZj5gbHJEUGxrcGhlX3RoXUYwK1M7
MkNdSj5dX0hXcU8hbCs+aU9DRz1RPmhmVG9wSThbQzBJYSNZXTIlSF5RKkAmXmEpQDlhREoKJSo9
SGk+UV47OC1ZZTRpW0dpZ144JShmWEplalgzPWZcO2tBXSxgRXJoZ29SbyZxNTg4QlI7YmNHa1Rf
KGRyKGtqIm8zVVstO0ZBYnJINGAsRTgmQDlUXHN0PzdmQy1LZ3RRMFcKJTxoS19zKmxvX1xoRmNO
L0xQQUw2Qz4kVTdNYD9pLUcwY2MpQ1xLNyhqT21HZzRcNzYvQDhzVEA0cVU2Pl5gQFhkZzY7a0Ah
NE5dUmFsUi00O1kpSStwVlVhKCRnLGQ8SWQiSSQKJWNYUixjMCE5QCpybSlcZGJRJU9VclpEMT1K
LEg1RXFYbSlQbGk2az5yaCc1aWY1TF1EIjh1VSxxdCM9O3AtNTJMWmk8Mic/Q3IyNUkuMWxdKW9c
SEJMUDYrUz8lWj4oNXAiKTcKJSRSJ2NTME8ubGE5KnNiXlhpL0kwIUJ1OnRjTGkwQGYiZ1Vlb1Fu
KVlAblJEQiRGKWY4OEFlXE4oYkwtZCtfKzlfNCJfMFxAQUpuMy49VldtTmRRcyFUKTRAaSM3PUln
R1puVVYKJWhfO2lfKEA4UmBDXm4nYyRAPjluTFlFblZLTzxASm1idUxJcUMrQyJUXnJYdExQJ3Iq
QjdXMF5eYSlsJ3BBN1NNL09eV19mIytyLk0uaEJIcEVtKypmY1E0ZlttP01gJzopL0YKJW49RFgn
YFUnMEBbWyIuWFJgYHMkPjtwV1pyZSQnbyE+InQmMktSSlpib0NVdVRSXlsxazZhUWJnaUgyXVE0
JixeKXFJV1YiQyJjV1I6YW0nSypvYkVkdUxLN0s9STMkSUUpWEgKJSVDR1tBJ1g9LFZDNVk7bShS
M0pxWz1aLyNPITNOTWxNWT1jJXN1MCtNQVomS1RDbXRUYElRZjhsPE1CQE4uXmJRKFBpIU1iZU9j
dT81JEJGRDIxTGZvYEsqXGBiZ29zVU9rLC0KJTctbDo2YGNXQjlAcFk/TGdCSmtIYEg4MiZfMCMz
UDhyMi1FP2FjWEVYSFIoUV41NnAnJk1VQzkhXyZJbVBVQTgkWVQxPFw+Yi5XJS1kQlQpaCZTIVtF
P1pkYWNhczpKZUIoQkkKJWdOVlJxZmk8bFBJcXNvMUApWmRnO0UrZDEpdUNMJUVTX189J04nP2ZV
bVsnIVcyJmlXQiZKJ3EqOytHSCZeUkA5KE1yUDgwYCQ9XnJhL2AzUFwpbU9CPVZpKSdLW1FbV0cw
WXQKJUdTcFUtND1gIV1gPyRgM2Npb2tOW3JzL00iQ2w9YHJyWTpNTl0iaytjUm1kUVQhO0Y6Y05m
a1pxM2dTP2xFUkBtTGZxJ1ckYlk5Z3FnOmk/b0hXYipCIk9GKk9tcV86WVoqOzUKJW9RZjtJSSc5
cGpMX2JvOHBHWSE7USYnWmcwKzFIY2tERzUlLS1oQUprO2NsWlxSQS8jIShcY2c6cTtuaUg8KFxs
XkBIajYjQ0Y2clRkQSVMW1MiS1BaRi0jaHJrV11kMVZhOzcKJSdGJDJlOGhla2FIJlI9VmY1TGpd
NU4mZmsmQmImQWJNPk4lanNdT2QhRztzbkhKPV4lVC9XU0hDcEdcZEwzW01wQ1VaJFpzMSctOSIo
bVdIXThbZ0BoaWpudFNKL3FuYXVDWjoKJWlQcFs8XnFPOjk1LFsmV29cOS9vcjRGS2MqXUcwajVW
ZkM6b2g/cmBIdFJpRF1raHJQQVpnKzE+JlAzZjA6V1pAblQ7Iy5aODlSMlNiWD41LzViYFBGV0hd
MEU/cGlzXzlhaloKJW1YKC4qOEowOUFGLFs5S2VLNDU0NjdtUC1sJEZXXFxuYXJCTiQ9Qi1fTGZj
KUhvJEIsJzpxXUVUIj8wNWQpMkspUzhaaEQrbF4sMGReU0o9NmBNVildTTMzalJmYEk2PnJHKlAK
JVRUNjZAT0xyZlQpalpDTVpaSHU7NHFQQkQ3U0M8NSwydUkhZlFzOS1eKFBfI2oxSlxgQS47SlNn
RG82NGZ1UyhlUVVBXE1ILVU9YVU1LysiJyNDWnNrOiZMKUkuKksjIzJjOy8KJWR1OSVJXScqb0w0
OFEwXHJnNUBWYUY9OlhVQSpVSkJzMzxUPCQyI0ROXig9TkFZQkAmNC4/TTFecT0zSlFcYEUkOklj
JlIrNS1LYms1M21RcTNRSlQ1NEVGX3JTJ0k1MmBeZ1sKJWlZcilQW3AvcWVdRS9mO0w1IlA7P2Jt
XCFLK1JQXic+ZHUhS1JtTGgxSVxyZkEsJzliTDksZTcsLyxGV0dcKWhRWCpqcDkvSGB0PVkpN2Ve
VS1vRFAoZEhwVWlAU2licEtTXEYKJURhXzZIajdtc2ImO2teRCVzYz5TbyNhTiomY1krKG5yRHFQ
bztNSFVcKlBQXkd1ZTYyPVhgOEsjKyU4UjAqPF41ImhgclhILTghIkU3MHUkW2dZMDVcOEdWdEVA
PjloXScoUlcKJU5XPkQrZlZZUk44SEZkQ2AhX2dqLHNvLSMuZiY4W0dBcHQ6XUAlJ086bDBHPy0/
UGRwRWZzNlRfOHFTRExhIloyVUpsS0RaW2VnWS9rMllyRC1IRChpZEJeMFU1NDAtcSQ0RVYKJSU6
JTVNLCZXOkpwRGpkbmUnJWMxYmpsIj80IVRaQCVFckAoXilsPlhKJll0Ykw4RC5ZRDooTE47ZmNq
JjozVWhnUzIzNGY4QiooNChwZElxLXFrK1cuYCVISiY8bmI8Y0YnYloKJSxgbmU7Ri0tSC4/NGVh
bTZrcmFsIXR1UV1TZjhwbGRcdGs9MmNxZkFtRmhYVjspUi5balxQZ1dJck87c0ReMVhbNFwqSCwn
V0MmNVVcMj1gOC8rX05IbjZvbih0R28qYkcnSWwKJW42QV8/YVpQO0xAQjN1aVVXbkxARE5vKi9K
UTtqYiorQGQxUDhFWEpDL0xIRVN1ZDlMSW0ncDAvWFlMZVMzMCVhamhOV1c3KUg6aSVROyRsQkRv
MWAoRVtCR15ATFRUTGo7ZDoKJS9LIShKPmMuXmdaSW1qbkBPTkZHTEVhMFosMV48SylyPElCWFtW
WUNPW2cjMDkicldJJ3RoVTUzUjlNUChHP0NTPkRgJV8vUGlwVT1SMTwkQ0tHWl1vXCxmRmM8IyRs
TFEwLzAKJVZnQi1iaTU8PVJYUlxeUUk5MFwqKWo4RFomYyEnanFIRFNLM0FdJURRKVE/OmpDUSRo
NylzY21JV00mN2BxSSE+PUk6ZGVeLzA+WGA4OzJeXjgyVD4qXkdUQjsjcC0zO2JmOy4KJSg8dEpr
MUJsRkJbVm91bWtdXSleb3AuaTcoVi1pIVkoSjApY1hXZUhib0JjQm9qNTJKbWInSklGMTtkJi9c
USxIUEtRLjIzKUxSWUBLQ1heOTxTS0c3ZzFAS0dBJVpiIi1mMDcKJVk1NT4wYlVXZGRRXl0wcW1i
OmdkOW1aTGJjUigsOGsvUTVtV09nMlNCUFZDcG5DJFddX1QqPmY7bmE7VT8yKDJGIUNfQjcodV08
PEtxWzknN0QoNHRuNGFkVVUjXFJ0R1osYUUKJUQoMT0uWjRkckIlbi50VFJST0UsNWxrbl5BQHNM
bG5QWFtjLC8sTVVuWz43NjlESypVR1p1YFtELTwhZiZnU1hbRDhEQlI4dDNKWDVtXC1BTWwobCM/
S0ohdCRVcV1baCIlbmwKJWI+OFxHNyVBJjJeV2kwPUBzPW1FZ14tX1lDNCJpQEF1Ok01RyxJRGJJ
X21fRmxjLFlbY1JvWDJbLFk3X1MxLHVFKjdsPlhmJDQuVD8zNGQibzlNbmEsKypMbT9MJSlYYDEm
NT8KJURzSTs2Vz05T2NeOFdDTVtjWXEwSTNPUkRWT0dvZVxqbyxJLGA2aTQqMjNhTEorVTxGI2xe
NS84YHMqSlNGXFdqSVVEbGBTKVw1VU1bNTdBQmdRKi1WV2w8ZGRSQGZHcklLTiUKJS5qRzo7MU4o
TXRqKkc0WUhkO1JcSVdDPzJCVWlBSENsMTVfSUdoWVxGc25TLEYsLGBcW2JRZF4wL2pAO21gITtN
cWQrLiJoO1FUTHI0Mm07WWB0VikwUGhPMCpYSFlPKSpwaCsKJShGISpyYGBqQGssQT8zWWQrLGVv
YExcNTJHTzQvXl5dbipfOj8sYVQsI2trNDRiRD5wQSY2bCNCaDhwMCNDMnRvQi1fUEFkLj1ENEtJ
PT8iUFg+U1cpQWskS21NKFIwWkYjIm4KJWgsLFl0clNgTDciRnA0Qm5jckJSOyRfInJaVGU9Wjk7
ayMtbnIxTWxRZ1MiREAkRi9ja01aMlhwQzoqUUFEayVTYVNyRUhAQjg4SyYlOi82PlV1PSNHQF1Q
cCRJUTB0ME9mMHEKJUNRJyM6LVRMI2IzRF4wJTEjKCEqMypLQC4+RTUpZF4qXjxOcj9DcmovTisx
S25YWC1vL2s2Y1YsPVhYImVMXyQ+Y2RObkg1PiQvcVZkZnFVU0hvKXJhSzxwL2pPX0VFWyQsIzEK
JUJoZkI0QktgU1paZjlqRTUiQGJMazI3aDplRDdKWjU8SU9xLEBVOnQjdW0uPD05O2E8RipkWXUh
MSw0U19NREE+JjxQayNmW2pyUS9kMEslVmo2MktNP2xtIUBrTSFUNVZqZnEKJTMvOCVeQSlPLVQ+
VzwmWVpYLi9PXWYrMF1GKF06YFlHV2clRCNwZHU1clhwakxoL15tZ0pAYkpULyMtUkhCLC4yPjM8
aGpMSjRmNCNlNE5vRihNITIiSWdSTzpdMToyLzM5PG8KJXAnNTVJO1UqIm5OPSNbNlxmPXUtP1hO
JURfVUgnLihDMnRpPzxkbCYoOHVxcmEkLFReQ2JGVj9eVzpTbTIoTFtOQCg8Mmw2bGFEUiMkcnRs
PmBncEUsK146Yjwkcz5fJ1o1ZSgKJW1XW01fQGM0U09tKCtrO01zcTc4aCYoVicrLyYmbiVmZGdi
SE47W1gnXyMkaD5kMjpVZj9CZyMycVxkbDhVcCtqbGs7ckJRQGo7OTF0RCI2JU1ITiYyLC1jJk1g
YjU8L09SR1cKJWguJj5Ka19Aa0hsJVtIcks2dC9hXXVNMGBpRUNOXmVVNlNZJzE+Zkg9TnFSYyw0
cVVVNi51JCNKLl9DP2kjZ1c1J0g2dVJDPC47UVQ9aFRwJDNFS21mKVlQPFQhdEowXy4paCsKJTJL
blxeSzYoOWMqRidFOVstaT9HWyouV2hTIU5MZ3I8NDo2QDYxUC1kJyRMQ25iQGxQLVVHaldxXUlo
NzQsQUtGMmw3VHQvRmtoOU8sLGZPZnNBPy5QWydpIT9SWE1ARjpqKDQKJVQmU1hWZWZQQ1k0RTV1
XGZpUFc9RXAqbltYXz49KSlQNlhCJUcuKWQzNWhCSFJAVk8jb2NAIiExUGFUdTY4ZyZqNG9VXWNj
MyolNkltSV4oRSs4SlEnSiRWOllETjJVaXFNc3AKJUNWRT89RkQnbyZhOWk/RyRiUy1ZYzd0JFE2
bkonQCZhUW1mYGQrYUEoci5ZSVJyNEslbzEyLWwkJDBiKk9JYDw8S1tcO2FUUlxmZTZzW1xULlkz
I0YjUzRgPzVOSiEhJjRHSmIKJTBfY0xLT1whOnRGYCFzUjVHLidiWTg8QShEXClCTzhVNUk3IlFx
PHJcPF1BK2VGSTVqNk5mcjklNlVQbVM2SjBDYlkxc25ZTjI3c3BfQ1xcN1IhMytsLzlebkckPz8w
aideJWYKJWJ1Oyh0PlsuTkRVU3VhSGA2Yms9b2kwRkRbVG8xR0QhLlowPEA7clgtOTxTVGU4TWVt
YWdRY3VsLkBuR2JGVFxIQU9Xc2pwZiomNmJiNUdKOihnNidDSklyaG9FZWx1SlEoZ28KJV9dUm1A
ZjBUOy5Jbjhsb2A9UTo4MnJMIls7JU1IRDtyQ1shOmMtbUhVVFNrLzIlMTwmcVRlM1w+MD4zXjdN
V2kkSmllPjchYWFUZThITSNqUjBJOSVQIUJbZztGVlBmWSxXXDQKJXFZK2c+L0poIyMsMCVbPnEu
QXI8VTU4SituOD9dPGY0Nm0uYzB0cy1UVFdgJCxdSmRDajJRMkknYmlnI1FwaklOZityX1ZQa3As
Y2RnIyo6L0xwSFNcQVQtYVgnWCooXmdRYD0KJWFuZVtLNDpBWF8jNlg5WGpxMltDSy1JInMvTSle
XTNbVDBGLz5IcnFrI1lWUy5ac089WV5iJi9lXDIoN1VOaidRPmYocGEqSSspaV1uSEFuWGspalg3
MzYmWCQoQGRHITJoXkYKJWJWU3FIMm8kXlBiIUZwPFhZTiFHazdRYCpIYDBIJGpyWV1pYGxHbj9f
KGIhaEdmbyxfaXBoOjgwYD0lZTlfRnQsKXVXcDVKZixEWkRtTCpVNER0YiYuUlxzZ3E1aURxVD5l
YG4KJV5WLl44aSEjNXRzNz1KV0FtVW5mKGBDWjw3bDc3ZycmaUBpLl9jQzUoaispaE9nNm9fMEVJ
b29JWVNGbEoySzdHUlAxZzhxQihhWmk4ZmQvYTVXQTlYQTNENTpfLz9eaStAbVcKJT47dC5fVVhu
aS4oOixjMmVRWUVaR2JOaTNFL0BBIkFHSVRLQEo/TGVGNGdHNltPbjw6M1lwInVsI1taYShuW3Qk
YD83YCFwYztqSDBUVjI5X0peLVk4RXI7SWA1Rm5pPWNVSDAKJStqNVYpKSdvP0JYR0ZnbWAhL05N
OiRdLSUnUHBwLV43MychVHFdXykzWSNzJEJGb3MrUFlMQEhfMzBwYSMpZVUqN24xRmEoY1k8P1Yp
Y2dsImZrUWRLI1Z0cDFyTV1BInJDKnEKJUo4aW4/YGM1SDJQTjtvUiZLVzRXWic7NElcM1w/VEkk
IylxKChRcFtsYnRkQiVxalw3L0g3NnU6UUAjK01FRFRaNWBsVixKI0JRb1tqSHFObCE3Z2UjPD5K
SzduPCZbJ146Ui0KJToyb0BWMiE5PlpVcjZcN19PSGxqZk9CRHVSOmw0M3FQWmpXNjZFVEtodChJ
aEkhIUonUF9tSG5wIiYoSlwzM1hhIlIlc1khQk9ZYUtDXyIjNTJ0R0NrVDRVdVZMI2M+ZkthXCsK
JXJGJjItYy01LUwta1NGbnEkMCM5XEAnJFFjWDcqUU0jKWJZTDdvP1EsYThDKlNbSlooOz4iQ2hh
XjU5Ziw2T1ItOUlIR2doPFkySSRwYU1xS3BRRmc0XlhqYDY8VWkhZjNRMj8KJS1fRztjJV9GZy42
UUI+WU9BJnJpXVhXXGVrLEVFKTo/X3JTOGMtRTw7UUU3WU9GXSo4MyNLa1lHO0hmKisjXTNZKkNp
ODI3YURSSWRLXCFrLzI/QURzJ1JRODU+MTBfRGNNbTcKJWk9dGolMDUhJlBfSydNT1M9Rkg7KWwp
Y18mLFFpZ1ssbGhcblVQN01XU0prMEVZWFQzQypkPkxVMnAsc2staiJhLnAmcmI0V3BWWz8tWyVp
Kl5yRiY7dD9HIU9tRitIXHQqTV8KJSowZmFkOzFYUVVJYD8odSouQy1EJW5qal1POWRQVTgtQkxI
XnRUOj4nPDFyY1AiIlBucmRpV1EiMmlEbD90PWdKXD82WVQ0bykwQ2JhYnU2V2xtT1tcRCc4LUwn
OEVOJyZMIz0KJT5STSxbIjZZVWIwQUxPT2lca0RVSTs8WT5kdFBxLDopbkt1aWw/clMhK1klTm1y
NytBQigmanNgZj4hYDNbLWpLcDRyLGJmSDReMSNIKjJjIXIoNzREMVFPb1ljdDVBODZIUUwKJSdk
YG5MX2dRTksiRHI5OCZla3U+NDchUVwxVWMuKi8lTXNlR0FsXmdIZ3RwQ0A0XGFMQmpAZGJyIW09
MSZiNTNzZ09wQzVpcFMvcEVGUDYpRDMrW2FZIiFWVmVmVEBOQGhSPm0KJSJlUnIlTTpWciZmbEpF
PGYwYiZNKzhDYloyO28jOk9LRTc8aSxsRC9rTkdYSXJSZl5nZDNnX0o3cWduckFXY0cnclspXHJI
JWouQUNtMGRxYk5VbU43OnBINkZoKS5mcDlQYV4KJWwtcExaYm0tQilvQyJzPUJ0ZkhRIW0kNkEm
LT5CWXBUQXQrQCo9RlI1ZmM5UzVbJlNnMFQ/RkdAZyxlXWZzKV9TcC8kUiFaKkQvI0tON05NLyZK
OihMX11rY0lwYTZwNz1QUzUKJUNJT05cUExhMj5TbzQiO2RmKXFuLSYwKDxBZDcqbVNAMjA8Zl0k
KkZnXDMzaVttRlZZOSRscUJaWW9yQjkxWTtsQDxQRydaQy09TFdXIlQnak5NSGAtamo8TU9KSyop
Sy5PImYKJTdQciR0SDsmIVtHQCVeMk8wMVUzVkJCJmlkTG9Xa0EkNzxMJSRFK0I1OFJiRTxJPCgn
Pj9MXkBSYGhZOS8lZz0yNDRqK0kjcU4zal5kPDNHS25ZNjZZQTJcWzY6Mj4mKWVxcjAKJWJKSWBc
TF90VFYqUV05MUZIKSMhcDUjaTE5M0JfbUxOMis0SkVEOXFRQm09PSguLlRVK1V1b1xcZHIsKChk
c1olaTZRYzphLyNFZlczcCZQJTNjcyFiP1ReIjotXVwkQSdWYzIKJSdgailZWzNjIWdHdS9uN1lt
VjlAVXRqME8mWDNQOyRddFtibnNTMUtBaC9gVWZrN0ksPV1fIWlaLE9cUyt0TmNMUHJpY0BQYlE/
OyhORkIjJ1xSIkhGQUdLUF81U2Q0J1V1WW4KJWFea0YlYHNOLFUhN3F1R1RCMz0jK3NMIT1BI2cs
THA1SVdZW1ohMlc9QENFJTMub3RJLDNDXGFHOD9pcE9XO2FYK2dJV0FLQ21ydDhuOWxMXTk0VDAl
aFg1NFhSbGpFaWwvUmgKJXJvOCk8IUReKy4lPztfOVw0YWImOlowSl1wLGJxPDZzUXBIQ1FhPnFo
cklvIXBfIjA7cVpAbkxvNz9dXWNhOjcpTTxeYz4nTicndG87Om5jcDhrPUpcdWFzM0kyI1Y5Y21c
L00KJUBPL08nUCFNSFM+MG1JPjZpX0pzZCFeTWBGXidsITZwMUBgTkJfcllnOypOPUxiKkBkPW1c
JzRoXT88OCtzc0NHUjVQNlNoM1QsbV1VOCwvVCpKUFdTXyFmUFdnM05rNDEiVS4KJVo4SCpdOWVM
VXUzUHFzbUtYNHRcKnVPJ2YpRmAvR1hRUF4tRFJpZDFLZDA2Lms/XEgmKF00VCskLTBvKUdwKDY8
K1NcP2FDOmw8ZVQwRkc7YF5ESiQtQCleRCQkYFlFRWhQVDUKJT1gOzg9KWInYG0jbjQxISYvL0hv
JnNTdExgaUpbMVdMLCVXTS1UXDxYbGxLdWNUNVIlXmo7X2ZSY0VeYGFPVXEzJyxiViskcickMTdk
cmZxUU80cFtaUzVDMEFQJTJUSWNkVWIKJSxHQjFQJzojUDVKJS5GIjA/WmhWZT0/ZV9qJSNQTEVn
bXFdN09LTW9sPCYmNmwvSGk2YyYtOiwzKj5PWV4uOSUjJlIsTT5KOCctbydGaGo5M0osbmdwKmVk
cyZvS1U2VmVnYioKJSRSTD1RM1FvRSg9SjwmS1pgQ0cqOlxobWhiOmFMWEdQUy1RNEFZSD4/WlF1
RjM2bG1mQlxFWlZPZVNkK1NAJipIU1Vyc1VWXjlqPlFwP1lUZE86SE1gN2BuRiwrYytDLC9FI0YK
JTcyTzVXTkw/aFVpLXFEb1FaPVZgWmEzZ3FLRDthVltDXEA7NEtDNFxmVmxFVkQpY0VSPShVU1pn
O3QsV1JgKmcsLnM5Jzk2TjJHL0k/RWssIUgtQUZiNXVDSEtLbStWPWJIaHIKJU1ddEY3WVRMdTkr
SSk3cjJddEwmXzBrOlAtVSVFNz1oRFkraHNYbFNkYVMwcVFbdEdwVzldb2U5IypsViw0bERVNUJk
WXRpWFRqJ3A2UXUjZC5IVEUpV2I4Nm8qKzIha1RVTWMKJSQ3OSJoKT4ubXRNXik2QkRUJ2JYWTo5
bFA8NWJKZFdoWmUzblImKElGY2BLKDM8QzNNaV5jL1E5VFM6SClCMTRiRjgyKU8iSnMnTFE0cCN1
UXFHO2xPX2lXLTRFK21MIjhpIzYKJWZmX0hESCtgVFFERi1KSjUnIjdhL0NXLmFacV1pbDVbb11X
TVFqPkYkKyNIXlsvQ1BHKXExbkRaRitcKDkxRzNGK1xrdTUiKWFoO2M6P2E1QjRCaFppIkdpWCN1
YSJ1bSFgUWAKJVxuPjlzLWBnUl5DVFVeIWM0QGNVbSpyWiY3TikqLkpdaiI4K2pCRjdtXkEmMzs1
UW5dSy5DYlRTSTFgJTE6SCEtXjJhJUpKLjVCUz1tbUtLM0BuTlgsTXQ7YUBDUzFhK20uXGQKJTNQ
V1MmZy82NzpsakdEOF9sXktMaUNGJFhBX05mQ28iX1JcSUJuPF0tNlYlYltjdW9yQFpyXShkV0FT
Ly1VPFVaJj9uUS0mVG0yOC11ZzpJUisyNFxWIyNBWypQRkdUM1VENl8KJShdPDUhcWtTZitbYyhn
Il07TGk8N11yQj45Uz0yI2Ymak47RllQaGQqUy5KJz5dS25HR0dzUWZPbEpAP18vNU5nMCFPLDxd
bF9Ec1lPTiNRPipFcEtXJDY1PmZuNiItaSI3cEIKJV4hVWZjKDFrNHRhWWUhViNqbXErRSEiaCdm
UWQuSVI2S0cmayxjTVFJYGBhJUUoOk1nIyZZMGRUbW0nYzhxPTkjXy1sZk5xNFdDXzxFVDApNysr
YkNlSCteYkYzWVEvQE1rMGkKJSkvamAqZW5wNVE1X15iOz90WHNYYldtclI5MSdRTjkjOzBhX0sz
b1tCKlJjLD1UWnBiWWY8JUtZaC1QWTU7NCdjM0toOHFxUDM5a2FMajdhWURDbz5XLlFNQjhKLm4m
QFhrblIKJUNgTyEjJXBZTHRFYkcvWDpAKVU+aVg1WUxiZkRAQDJja1lxVXVFVF4nZS4tPiVOMFVC
JFcoMzBfRWFYXy5yOW5KKGIkJ2kjSG1cJ1pGbFFUTGV0KG1hWEUhXF4jT2pwOGUwPmoKJVhFbm4k
JmM/Qklxb244L191Ii4scXNeLHNQJU0nci9fJEdHZTlWOCtbNnRjW18xMGRKUUpsMFosakgzYEla
MFhPTiclQF9XYm89ZGNHQmVeKCQmXz8+NF1WclM/RktyazgqZGwKJXBNNEMhZVBRV2pITnJNVmxg
Zjl1ZDtpZUZNaDYiYC83M1pgZUs7R20/b3RacT08UTdBVXVlPFwnXm1pYEk/dGpfOnVYTCYxaTI8
W1wyKz1gMWAiOFk+T2RJb0s6Pi1oQTA8J3AKJUFOLXQtXXJkSXJDUC5kWk8nQUlBUTgsTGIoPW8v
PlpwM2ZbMnI5O0wtTkErO2NFZTFXVlpVNC0qNC8qVkciSl1nMk5YXz8hXmM+LElvSFI6WHU9WHBr
P15TZlEuSDk0XnU2ZzAKJWNza1JFI0tnUiZfKmkzaWtOQW4zNmktMV5cJ1cyTFhXbTY9M2o1NkBL
N0pFcDVpZ2hLT2w5QGNndThyXFo3blhqQ0RRYmlOZWRaTzpvIzQoXGpZSWcuMFw5Ky85NzljW1pk
WXQKJTdaTzZyMmAhNU9QKUZxRFtvdFBxWydRNURORSU2P1BNX2NeTUE+XTRANzBhQWQ0IkJMJlUj
JFgpQ1M4aCMhdFQjYlQybyUxSS0oKD00VTpGO2xvLSInJ1Y1RzBNOCkkbSE0QmgKJVxeTDJBUWxf
WmUxZilMW0hsZVRaNyNEWGxmZFhZRCU0QTgqMzE0TENrSz1zLSlVSFtPRkk8KXFWcGNFODBlOl9a
IjojQihrKF0xMHJXN2FhNUZhNTxQc1UvWENkV0k5JFRoT1MKJVBPXTUpWyJBWVZZPWhcZTNwO1gk
I2xWWnFlZCU1OyMkdFI8YUFiO0xKPGBZQCs/NWdLWlpWMlhbNlRsblQsSThYTEA7dWY5MW4oakdM
SSE9PkZLaTg+ZUQyYEtBYjozbTc8LisKJUskWCpaTl9ZKkQ3QFovTFI3TVk5TTA7TyNlQThaQSgy
UyNXJmY8PmM6KSg+bitRcTNBazEsXjsqTSdnWjElL00zbGNvb28xZkU8cDtTJyE9N1RCPHAjb0A9
KlM1IiNpaihrJkoKJTdnVk4hZUpLSCM3SHBBOkoyOixFMm9gYHJrZE1tM1JjSFsoOSYiLytyXyRP
WTQoJWNURHVQamJpYnMrYD9UPzZwaFdycCJyKmQ5YEZbInVaLC1aST42PyhpJzMpdDhmXk0nZl0K
JUFFS0coXCRBXj5pPSQ3YERuUG9lUzNOWUIsIlBTLiszTUQ1VXM6TCFmW21EJ1hGVy0uYk8pM3FC
N1c0aGlRbFBnQ14xNExrJHQuQ18zNyknVXA+QGxCZjxkJiZHSF9pMVNqczgKJVF1Z0kxYTQkJGdV
ZGI5WjtxYmBYKE5YMGM6XTgyUTU+NjA0RCVRYmRpMGxqNzdpUCtqbD9tMl4+RjQtVWpML0JCSWIy
OSplYkAnNWsudCZJZWZVNDlvbTZWQVNMZiFpVVU1UnAKJUJKVj5PNjlPSilyJixsR1UvTlVsIUxI
XzhEN3JGMGkwKGYvMDJhcTgvQSNdNSlxbm5kbXRhX01KKUB0bkhsWEcnZlFfOj8yNFkvIlJPbEtW
SGQrZi9uRU9EZC9AWV87ZlZdJyEKJTo4LzVGPjBuVHVHcS4wKk4qV0NNSChJY3VvbCg8NzFcImo3
cDciVEJpPEVTZT8iTkdOODxZYERTaSFVWW5nQTg2cWpEW1chV0tuIVRdR280PUw7cExPNUpmOlhg
a18oUF1aaCgKJV5TKk89RlVPJigoNylVPl1TITxsVSxQajgiXEJOYi0sSXRWZkNUQjQwbl1qLzU0
b29OLjA/c1ZJTTNfI0pFVj1GYDY5aGlEcnAoRGo8WileKjxxPGFsT1dVNDU9ZFgvXUhEbzkKJV5H
NzdyXigySi5bJkQ5PjYubyNHUkU2TXIvY1pLdEJFTm9yZTlPVXJic09lYGgiZ3VUcFQ/VEgqKjc9
SSdEK0I5Okp1QDVFUEVMUnIqXThEJGxMcGcrYU1kWChrdDlqZ0NmTEsKJS06KmREaVMxIVtIYHNo
JV4uWFVFPE0jPFgmMlkiW09daDNjN2BLIkhlbT9QVVNMYCVhazJgUj84SkcnZ2xgMiZwY00xQjRZ
Nk1ZLDFxIyxlJVZAVjpqbG5PU1x0dVxPOE9bYi8KJT1XcSNARiZMMjVdJmNoMEUsQWxHVEEhZGlx
W1pgMWRqbSlXWG5XWC0rRTdAczA7J0ErJytwUiMsVTQsT1NNWldyZlhUOS9kZUJDRUgwVSQrJidM
UzpDODYiSGdLVklhWGUtMDEKJSYjNTw7XTxkOHMvNCNzLXFQMFVwI2lBT0Usaio/IVRBRnRjaGlF
OEYydWlBVmpiRidTXF03a0tZZExaZjE9UC1ZYWs4VVNuKGVWZUhVQFNMP2F0QUVgM20lJkVkVHEk
Lls+K2IKJVZrLFxeKFhVZT9wWFxWbmExQWJSUWpfWUFBcDBfa1spKTthPSEobkQ6YCIkOV9uJEUk
RVRzYVddb2EiKmRbM1NrW3JZZkc+WW9zUjI1OV8yJEQqUydIYztFOCw3PS5eZTpVJD4KJW8mKD1j
M2AoKiFpN1VXTCVoYGM1Zz1Ucjo9Lil0R2ZcSHFeOzk3UWFscT03RjhQaWlQa2UzUU5fWS9BRDYq
MUMsOnJeOW9VUjJLSSlXazNBT2RmUWgmTjhMTDAkR1klXXQ2YiIKJSdKYzsqNGdfWkRfXDp0SlJZ
TS9NLjhWOWRXamwkYm0+Ql5hIig9TG8+Uk5ZNVsjNm1MNWQ8cT44KVBXWC8iYydjWEtOJmU+Pikk
cVBGPGFyPi9BMU1xcCRvViJdK1lAaS1vQ0wKJVBoaiFISCpXQlFuMmFDMHFWZy5eJUswZnMhKGlu
IjdvX14oLFdbUUZTQGpIOVUzRyE3bWBsQmJwYzI7Um5ARyJyVzxbLD9XJFhSV3FyTmBQRG9vakVG
Wi1Yc0xGWC9Sa1xbMCkKJS5MYypWYmRvbzM/RV9rSl1LOzRvKykyaFVGSDheJ3MyU1hNVk5dbklx
aXFYYCNIOW4wMSwlTm9zLFwkWmxqckRLRms2OD5gSzZYOWBMNEgrYDRAbFszKDQ+YShMWi0nXkgm
WCoKJTs6WVhRKWElRG0kWmFQdV5MMWtlN1h0aWRUYjcjYChQKGdhV0RdaSpJImsoYGZpXG9qTkow
S209IiRRIiZWJ0owZ2YoYFlvbCZdOkorTGVGXlBVcTNyZ1NbXV5cUWplPSNZWCkKJW85XEBLXW05
RDhYMStfWHI7NU5XcylAbzFwcUpLXWg9M15VRHU/ZUdJZktCJG4pIio8TFtYLEtyVmwxS0llMkNh
Xl0hVERUQUclK2g8dVxBb19qUkZxIzc+T3M3R2ckRHIwSVEKJUosL2U2P2JdK1MtU0tzb3B1QXVx
Ml9YKCZyVWBPZ3E+QyRWKE9wbCdfc3JlOzM8JmcwKFxLLTMiVD9YXzU5MTQ1cT1wPFZJNUxvcVZW
TSp1YzJbUjJpVDtaZE1yLWZpaEQ0cEcKJSE+Q0F1N0BGRCMmZ1BKO1cmbDZjWjspKFduQEomLGw+
XDxaMEVuTVJOaHJDKFNkYjRTX0B0JyhANF5vM2JdUUQ5QF89RXJEa1tFM2BqQllcLVJtSGlnbjcs
WDhDZkdKIzFNUjUKJWg5cC01Ijs+YGA0al1ZSEdUV2gzS1hDZW1cZW90cChHR2dgLCUoQ1hycVQh
cXE7ZTEqXl0zbGBtOGw1cnFYMlAxXjQoRVZzIlFeJmJGZD1TKS9eXnJJazNrPDp1c0IyNy42Kj4K
JW1WXyliUFVXK3BSPjFAWG5vJWVKOTQlVl5gTjtyRSNWV0VESE82cnI7VWJJOklfbG9eaihpVUwq
VCkmM25qK1AzRGRML2o1IV9DRnBpLz5RT1xdbkVyVlEmdXF0ZzhgREVgQEEKJW8oMmJVKyEsbipj
MWJTZlhaIz9rXiZSJzlZUSlkKWh1RS0zckFXVl5GaEZTVnJuK15VcCReYWRxPGpsaDRGWmkjZG4w
VClvQUJIbk43TkxVP2YtOCpAJGBTJ08jP0lYMmgtNVAKJUxYMUgrcyk+V2ZuJVxuZWg6ay9xa0xn
Vj4zOUtURT9fKVYtb1JVQWY8ZTo1PDEiblslJUJEJkpDWk9wbGlRMkhObGBmOiVgaD1YVVBWalMn
P2w5JVlmP2w6S0dgJ1BuUERKSUIKJU1JR086OS5JLjtwR0hvPCVAaG0rVDo+aUVJLzZKOWBLJ1Zn
Wy1BMCgnS29ASUBTa2ZJLl8/PidSc1tPazcnVyI/P1F0K1xxM3A3ak4xMSQtJ0BCNjY7QVdrbigx
b3IpSVdnQTkKJWdbcGdcLDRlbmZCVTZrSTFfbyxLP1F1OlNSQEstMTt1LVhWSy9NKy0pQ28/ZGlN
JWJfLCNEXGE2Z3U4al5aXmQ6Yi5fcFlBY0hLP2ZaclMjaVswSVlFVyNFTWpyQSFRaVE0V0UKJTct
WUwwYDhnSEYuM15pRlBqL19iIzc0USljKWpDLy86TXNBSyldVC9pJDcuQURuUVhgSiE/UWklNmpl
LV4mIz44bmpVJm4sVnVkbjcuVmg4OSJPIUwlSEwyc1JkIWhPZU5tXEMKJT43dChvNCMtcTNbMV47
QGM2PjNzTHU6M0QqPGgxIVc4aSxyXHM8dFBSQTlmV2FOJE1PNUYjJ18wN0o+Lm00ZVRDZ3JBdDFU
USl1R3BraDxqLjc/OS9JaUJVKlpSaUMoP2RMKyoKJV8sKVdCZGxYJTEmOW9RbmQ1PSM6PUtNWmQr
X2E/K1khXmZRPUQ7bzk1JyIxSGRFNys6bmVIVjozSEsnMWBxbFouUDlgNFRcQjFJWkU0P0wzQU4+
NTVDcWBJNGlBZFswRzNATVQKJTwzXVRQWE1hXkJIQTIidFAlWFJMaUl0YFRnUHFsOU8haFIxKEJZ
O2FxR05MPUlzTSghNVArSDRJbiJIZjBzXWsxbEI+VDNGNU1mS2xpUTklJTEiaGoiYi9MS1BwLzlp
MyFHVV0KJVBeLzBmNiZQZUA6bEBmdHJbV2U1cE5wT2IlVitnSTI8IlhRb0NjMzZYJGU6V0ZYayZp
L0IuLGFAJiVuTCpkNS8+XiIxZCNgPF5rNG1OOT1PKG5MSillQWkmWklmdDhUZlhHaicKJT5DIWBp
J2gvKGRQS2NXQCJWbCw/TVBrcz5SYVVwZVVhXkY/UlhsUDQyVUA5RkAmZ0djKkwzQSc9I19uPydz
WzNKJHJUZWomTEBTVEtQclAlaS8oR1hlTlM8UXElZjo7PC5qVkMKJTdgTUdPI0dfcEgrP1RgJi9b
U1VgRGtBZk0/V2RlRCpARS5VL1BKdToubEIuSVspYFxWNkdwV2pCMklQcDplc2R0LThbJGU8IXAv
MyEwc3RTIkxYIVpSO2kzOjhSTVpKSy4nMC4KJXElNy5PNGElX09vNFVcMFpsYWNjTF1PUWdHNi9y
TD5wJlBKUmFpLFkpQVp0Q2I1UExoJC9sbi9BJ25dVV4iPG5OMWU+UjVvZWhJW1orTk5oVGFJYWYs
PzQqPT5abkVRa11fUEcKJTthNT8jUjp0WyRIN3NvKlBsWUFcXzQnSi5YN1tcRyFbZllsPGAhbGAl
ND1DLTFgXiskMiVSVEJnZmJTUyZnaltDKC41SVZGYGA6MiJyMF5kMUpUYkovOmk+VE1nZzsiLD1u
TWoKJTskO2Y5PCpsNU9lJCY3Xm9BSjk3O04oUFtrQ0lPMms4cyRKcFtrQk5sMDdCJFcqQWVZIm9Q
PSRxIktzbSpDJHFpb1ErKSxsN0IlNzhMLCl1aD5WQklYSyYnU2ZjIWdWbUhqOT4KJUZCSG4lVC1Z
QU1lVUZER0wiM0xePSFOM2RKSTQ1cEE9QmJOU1o6Q0BjX25QKDdQN2IpbD87V0JXTkZkcSFJMFgm
LTJzXz1WO1hCRFgnIlJfTyZJMms8Uj5VOWNKX3U5MW1MPjgKJTZsYE1KSkJsR0w1NS1fPzxQUmBM
SUUzKiNnTi9NJlNLVyI6L0pCYjwpM1dVVUs/KCRfcXQtLTpZdSEkQVYtbydVLW1pdF5OR002V2wj
K2MrKy1zNDtKcWUpMGdbYD0wIkE6QSkKJTwua1Njayg9ZTNLSz41OSkhQChfRyopPnFDN1xyWixn
ZGtrQWBDNkZCPTBTXSpqNzknQDdbYFouYSFOVEpcWGtZTltuVy88MyZrT0ZONEFxImE7X29qS1E5
IkpVRytRbDxQUEgKJWEuZVU2JCdLRisycWUyTU5ANmFsWCwvNUkjXTAtNmRrMENyMzpDQVQ5Ly1Q
SWwvUTBDY109V0ZqOXRwZ1FjJytIbzAzbVNycWg6bGxATEdJMSQiQyYkRjZRZzs+TmtNNWZtJmkK
JS9fI1FEUlNWIyJiWzxAL2hjSVtpTFZvPW5hSio+UTFybGlTYlFtcC1lYGFdYiE8PkFXa2UwbTBG
QFczMEJbM3UkaFdpbW5aSGFDNio5LCRQOGNnQUs9UTUrZGw7ITxSMCxSY08KJWtVQlg0LF5BSCZG
PmtRbnI4Ym8iMkg0aDUuRFI6bGA/bkVIWGljTkBqTVRQY1FAZ1M2IT4+cms8MDEpUjdQO2E+PWI1
V19ZcGs3bmNoIltsZmIsR0RQYjwiRztQXGNPWEtpVlsKJUMoblAjWkc5K0lgPXVHNVooJC9yVTBp
VjsnTC9xaTgoKCtFTClQYDMwSmQ1IzsKJUhOcUEjRlRHTj1mQkZVMGw1NSgtMFdXMCxIZUAlNEQ3
JipjRFMqXGNkIigzNVEoQyVSL19eN2EnQTVYRzhLK09GPy9FOyJrTm4zLUtha1w1TktgOkElUDhU
K1hVWytNOUo0OkAKJTgvKytIOzc9SiZJSWJXRFtxQkJGXFo2MGBZOzNRO1wwZl9xLkc5RS5kRk01
PVRycy10PGwhNihcJGFAOVBdRltZYXBZdGBVNV1GWCI6Tyk9Zy9NdV9eRyZBXDhOTlolU0RrJUsK
JSpuL19oMSpJWSdfQ0JYYUcvXHAkSzMrQjUkJ1kxUFsnbUxZOlFMXj5vUFw6dWBHQEIiRmAiOC5q
R00mU0tKVE9uNmMxZUJZMEZqa3A/K2xKOy85IjgsRTZQJlg6J2xhYXBmVTIKJVNYaTZYYEx0S25f
Mzg8ZUE8Wl1ZV1kqb0ExK2lmY2VLN0YvQlhpZmQkWT1MJENtbklmQmJmJ1BkYktOPD9ucE8yNDI1
L1MpTDxRRU5zT2NQN3IwZFk4V2BJODozSFY5LzklTjAKJWVGJXBfVVwqQiZiI2YqRGYzSkQwQmxZ
NDJpRGY6UGIxYSlZZy5LOWo9L1guWT43NlpwYUlTVFZVSls0QGl0U08lIkVOUyFhYzlGRTFJP0RH
Nj5PXjY1L1k7SDZoQW8zcUswVkMKJUw8ZjV0IXA0aVhqX2hTNGYlO0ZyQl5sLlgyJ1FzbjdFbHA9
UFdVVmtaQjA7NCNCbmNrZmtmNW5BMz8hS0YuJVBcSSYrIyo0NS1VJ3E8S3UhQG5cUSJYX1t0RyIy
ZjhqW0w6YUgKJS1Ia105OSxhOyI4cD0iZjJhb2Y2JWdhWyxiOF1TPzZOJ0B0Il8mWUs+cl9zOTJB
K0FcIVFjVWRWKTpuTGQkQ0FuIyw6bTFQQioqMFNCQktjKD9TUSkwdGxMU1ApXD4vZiMlWG0KJTYr
S0VZOSw/cl86SkoidUxiTmBoaXEqOlleVXIsRlBYNDMuXTJSVm5mITR1L1AtVy49PkBab2YvV2ki
a2oxblVEZTBRMDAkRFdAUzw7Uy9LXj8mYitTV2MoUG09SE4+bSklMGcKJVNBc3RcInEwIlhZb0Qy
OjVgRmxUSUBPJTlrTGZqKFk5JGoxWVo4KD9SaWJtZD01RiYzPXIibjtmUnFWdFJVbTJHTWsmVCcm
az9ZWUBIQ1hoSjM6azFGbSsuW2NWU10yOyI4QTEKJStSMyRuU2NYbXNnSU1NVjMlLFNYTlFqa21M
PEQsQi0hKlxJPjBHal1xaEFkbGpYUU9QKkYkWmVYP29KT1w7ajFHNVcjKS0iMCQrS0dhc29gUUw8
LGVbbywhbidwWjNUJWstJUMKJTFpSU1mU0RzVTlgUzBjPClKXEVhO0U2SSM8I1M/dGpScTAwZE05
I3VOLEQvcykyIWElQmZyLy9qRGxmSkxjR1BkRV1dRWNWSGJLQD4jXCVQJVE5XVVhVnFnXi1JZjZZ
b0U/cFAKJSxGci8/P00rZFdWKlluUFJeIyxkYXAlOVQudE4hTj47OGRcS2Uxa0FjUksvU1E2aF5Z
PGlSXHRDbWR0ajptRkhlVmVvKC5QcytVPEM6TEAtJlcoK11JZio1O3B0MUojZmNIa1UKJVQvKURd
KmFDZVdeYFwkW005NE8zLG1DdEk4X3EpbXFPViJhWlsrUDpGJ209T0R1QiQ0QDB1VUcoXENLUFRk
KmZMImgxM1w8MU5QMjslW0xuR2hNMF0oRkI3NyNPWVZmZ1hTNUoKJSw3OC5BcW0vZG9cKDI0OHFo
bC82SlRcKTg1WGxTakE/T0ZeTlk6JGE8WHBeM1w7VWUxS21ZKCRBLy5DPlwxR1VEaTNMalgmJWVQ
SFxHPStsZzBzRWsvYSUvUzA5ZmA+XFl1cyMKJWZ1VjUmaV4kTyhnX3U0Z01tNjBxM0lgaEclJ180
IWc4Xj1lXCRQUDAkcmItQT1cKUBPSV8jck1VMWpfJ2BuLFw2bkJGalZrWjlcRilIaDIzN3FsQkVt
aUdkRzRxJmExTkZhZEEKJSplPXUuR0Y1TiRecVI0Ykg8bCVSUjYiVz1wJTFrSlUrO05OV1khJmE6
ZiMoSkc/NmVsTDZrZTE3K05qaUBqPlIhLz5hIixQJSJeVlRBVE5YMz1MJFVQNjk/VDZNRCdCQ18h
RHAKJThMIm5nP3F0O01mYideRl4vVVlLJGBFOERfPTNJXSJqdUBDNzZdZUxhPzQvOkE8OTdkZTtC
SCFpUkJCTGI7amxBJFdmcjBUTF45QiV0NVxHYVU5dVhoMHFfW1xaXWpRQyRCbiYKJUEzXGlPRWZi
XmlqXiVdMDw+VmEmUj1TOkklOStcbGZrUD5GY146ZD4hXS5RXHIzXUE8NE0wZFs9Zl81NUNBdWly
MDBsMTphSFtQV1cnWkpNYjpcWG47Oj0kdSkqKVFjU1NlNisKJS8hbmRnOlRFMCcmZV8ncEIlR046
J1oucUZdZk4mJiEmJiFLUzViKiQmPFt1XWcnLyhETUVhcEFncyE0YHI1KlEuUWs8Jz1KI2VRQmtX
LjI5TWRuUCVGImpZODAoVzYhPlRoOWEKJT49Qko5Q3UtYTtCZEheRSlzQTsiY3FHWCJiVmJJTjwh
bEZWaEktbl1iaHJ1WzJoalVTTTo4cVoyOjg2dUFNN1xMcXMjR1ReNCkvXTk0aCFYNzZiLTBCW2pv
IWVITiI6ImdNKVgKJUcpdDtpbVQsOFUpSz9CVlxTKF9IYj5abzIzQiM9MEdGayk0NC48ZHJSUVov
UjVOQF5ES1k+S2RsWzVwNlctPjBiRVVPb2lQSEdhbjlUblskcCZLSGNHSytBZDVCX3FOUT5aRnEK
JSxhNHAiL0BfOHJjTCIqOUIiRm08Yl5FVUBLNDFFW2BtakljJFFfQCYiI2woSztrKFEvbjc9Zm9R
QDlnLWQ0UjtPaVUlQ0JBQy0uUzxfYztzYFRVb1k2OEpbIlxoZFhzbjUkSG8KJSdEa2R1UyguXEsj
KmdxUEFsZmUtKmAwR3JhRkksTiYpXy0wJFs6KEBjUmZRY0lzOSlOTCE4RW1dQ1Yqa08qOExoaC8t
WSRoLHFTRShDY1ZlRHNhXSUkT2lDT0FfIiozQHBoIWoKJVldKk9eNCIhQmdUdD1CU1xwI2V1TnQv
UVo9VUcxWj08QlxLMl1aTWJBY21wMSMuT1slKnBbUGdROlQnNVhaUXBvOCRkPHM/bkk9WDtaKkhI
UEMrL3JDZUo5OVdBaltMLTYwJGAKJTI1biNVKEdDOjFvU0dWTCd1X0BMbFxUNUElJ2BgSztsWGF0
OiNONmxmPnAlPTc9UipVJFxPTltmdCJKJlhzPVdNV0trV3AxU0U+RU5FNGRWTkVhTWo4cTg2YExi
VWs/NFZrTl4KJSc4UF05Im4sN0Q1Kz4wRjMoNi45WWhqaXJbN1ozRVphYUgwKWw5LGJTaUwlJyhN
O1xQVSdzZEpaSW1vSFVtWThsRzc6MicrRU1gVzs3MTltWGJQJE5HZFlmYDgzbjFUTmdlRV4KJVdO
ZycvLmBEM2xmYmw1bzdVQFFzXS1zNTVOSHAqSm9gRW8mJ2dVZyhwcGovTE9aIkNHTmRBP0wwWiIz
YDFRQnRbUSJlPERTJzheNkdQRDBCUyhRUCwhamJubC5XIVdSKGRmMDcKJSI0Oy5yLDlcJiZdaD5e
QlVsL1JRMiRIKyQ+aEpyYldMOnQkY210MGVsa1BcLztZLVVALTA1T2w9JGkpRVowXF9INm0uYiwr
YVljYDZ1QWEkVTo9TEo4bzpQLVgxTVpWJF5GQ1YKJSlHcjpmISVuKUEubVhwUENLbHFEaUlyaHE8
c2lxVzNwVWxGU2ZUdTAyOl9xby1DTUBZQGFuciJAXGsjVU43LGlkY2xuTGs+ZjEvU0NTYCsrWSc/
PFloOnEpOStjVzRoIyNDQzIKJTB0bi1FXCVwR1tTL05mUmdGZm9TMz5DaSdLYllHaiRjMWlTWGFB
VlNKRTQiNlRhTmRcJmRcPDhvcFtyLE5xW05zW24lVzEiLGNPM1pvcTJmYjI9Qk1kJWU7LW5qPFwv
TT9ZXGMKJWwiUmAsPXJjMS45WF42QSIjWl5XTihHR2FHLDM6LUJOUEVlYzQjUVozUUFZNk5PYj9s
IyY1bGRJRilRMzI6SyVgQzs3UFM0KyEqLGozZlNEMHBXX01rUEI+YlJYPlFiSTxkNz4KJUJfOFNl
T2FVcDw3TEZZJ2p0Jk89SmhPZzY+L3Q8RChHc0dqVEFobzdqVjI+aGgrQycuaW8vbjZFKFViWmta
T25AKG1fOzRgYVplLEN0dFw0T2ZrJSZqOVhQQStPakNNISZaZS0KJWQiJk1fTVFvKz4vVydQVk5P
ITVzalsqTStGZyg4ISRdKUokYnVfK0JSJTFNIjY1MW1yKUA9TWFhclk+JzM1JkdSX0o7JE8/NCM1
NG9YZys+PFo9b2gtZ1RUY1BwLlJscl9pZjwKJUstVj5RXS8yMV07aW5MQFw0L0EvQV4vbHNUKmdZ
ZUdTKUxyPGEhLXJpbVFZXzxtbi4xKWExZz8ybzZTWnJtayxESiNYQj1oSDs5XS88KGMicl9jLTRR
O3BlLWpLNycsayF0Zm4KJUwlP2BPODAzNzhtazNOLjokT01iLicrK1dwVFE5YyV1SCRhXFFda1hF
bD0xOlRqSTRsITk6OTQ3YzZiMS8pTjQ/UmdfWW0vPCJVYmU4cGBbUUEpT2psMCtJa0Vtai5SWEho
REAKJV1GMyQwOEIwXEpbUSFPL2FHUGAxXlFzQzJPLz5PJWBsZDtYaTRiL1cudVgjJ2E9JjxdVmFI
My1nUlY3MiVYP1NiJC0xMmtTZihyXExuUV1QSywxPUFZJ2Q1W1leL11vV0RpMVwKJU00LExgX0sj
ITNdOmErNWdjX1VWKGVzYHI6RzFcLDokVCtXPjZ0cV1udCImRS1CNihRYmQ8VnIsaVczODIvK0tQ
ZzNzKGYsZGpeKWIxaktWJXIsVGJeUSJMYVdnOj5VIkllQ1AKJSMzKVllWVotQ1wmI01qUWZiXTAv
TCFjaUpzKzYlSkhVIS8zYTlSTmMnNFVXMmxFcV1pIlk1NUtZNENDTGJULCs/K0Y7Ui5YL3M1RERq
SWFmMCwqL0BEPXAzdWkzcElgZjU/R3UKJWVbJD4uSmBCXjNMOj4tI3IqW1ImXFEqTEtsL2A7ZUdk
OkJhNWoiMjdBQ2ZdLl9DWyNyPklzI1c+LF0yZ2xeMFkzTS5iV204cCxfP14uUEU4LDdJTSUnV2Zk
LTw4NyJiaCoyLmUKJTUqUkZkSFotPSdaXiJwXk0pKDUjaE5TW1skXmslX0RpaWtpby1xO0knL2Rw
YnEvJENUV0MqOzFDXFZgIT5qZStLal9acWk3SygsPTJGajIjVjxgKWomMilTcWVlLzNQU2dAc2QK
JWxBPVU6QU1EOkBkdWhbRSJobzVRXmt1WVYpW2dHO3BoaGosI29rN15LPCwsLkE7cGJVT2FFTCRO
UiQhKFAzQS9Lajo3MlQmTF07cmNkOjpbWGdFOVVDRGs6Pyt0K0NfbWgqb0cKJVA4cT9OWFpFaClt
dF5HMktRREJrLTElM1RqXlZYKzg1JShZb2tiUVZWQ15mRm5GJTxoUFUvVzJfK2JCW0MhP2pCPy9f
PDpXa3MxUD5aNVtsUWoqODJjcGosOik+NVtWJXBfQ1YKJS5lQi0tNzpuQGRePi1nPV0iWGA9WGRn
I2s+KTxyWiY0XmRqMkkyNG5samgzLClyJiQ+Qz1nPkNMUVcmJ3Eybi5OJCsrOVNBSjVvYmBmMjU9
byd0IVpkVChKUEsib2V0cjItNFIKJTdHR3RcQCNdVD1qSTdrJz4uai4vUlNHMUdecCZvVkg6Ym4v
VyVBbjtiYE0mdS9LNTk/ZCkzRVUnOmJnS0xTaGc2KWYwYGBScHByK1tQRzlwal9CblFINlcvNW42
WFVmbGtNP0wKJUBaaml0NVtBcTdmMmtwRkhSSTk0MUwsVlIjPyc/IW1wLWRKWFpiSE1qVFVgY2gi
Zj9PRUQ9UiM1Pzo1QiEzJVlZXV5UZWUsZSEmJV0+SjROPi4yZXVIRztRaTAna10kX0s5TmwKJTBK
V3JCZTs/aDdGMDI4YjtFc1FCWnImP1wzPD5dTlc6bW8mOkxSPyRMLkFfOzloJF4nNTcsbStLaEIo
dVtgRVxeLStQX142KFNpaDhNMSRENTBBSiJGYlpLIytdVGUtLFhhaWkKJVUpVWQyKllhZClTX18h
IzIhZFFkMCFJKC5DUTNcVXBLSkc4bXA+dCRDSldAM2sjVWU6IXUvcXRZZWpqXihKcyc3Tk1RP3Uq
OVpZVEJhMTslM1grc0xKbzFkOUA3KkNRNi1HV0QKJVFWPylBOEtcKyhOSSlFP29uXzp0Z1ZgMU1R
QHVnOi5dcU9kaW4/cVdBWzBDKF82aFw8Y3VTY2NEV15gLWwqVDhNK0EtTiJicFFjcF45QDghRTk0
dUFSLypoZzIybVlBOzdCPzMKJSghRipAalpgW3FnLD4rWFQ0PFM4Lz05PHVoYl5hQjBiXi1VSURJ
VGRHUUpTXScmYzVaL15IZ1ZHMypOI1koQCFrM3M7N1UzNSElLTpAXmozV290ND4sPlFCY1BaLGZR
UV0wPnQKJSlWWDJcMksrUDIiWC5TTSZhXkQqJWZfVzJaQS5eLF8rOlNCJi4jQGw3QkQ+TkBiNmxp
Q2pTOXRXZ2hVaTg4b0lhSycoNClkWUtzVTFQcVI6aWBJNSdZVVdtPSJMSms0Zy9xLUEKJSpIR10y
REdgTWZXSzdjWmw9bHVaOl1ZRFlTa0JWSlkwPUtzNzohXUFvbTRKWiY8KF9wQC1MO0ZANXVXdEtS
I24qQFpOJTVeOidAMzA9QkxocFl0KEhII2hGVysiLTBlWjIiUiwKJS4wXUNRMlFmcz1mJlA+a2hI
XElIRnM0R1UpcmE5PmFDPkxbNnR0RUo3LmtybW0iK0ZvT19oUGUlJXVgZkNEI2JGWDRkR2RYZlAt
KTsnWWpbZ0JIPS1SVnVARzpxSkMjLW5DSCkKJSkvIUpvTihiXzhSM3BeLDRkc3VEVUQ/S1stQ2RG
VjY+I3BObUdLVVlxbyk4ImJ1OWk3YGIxc0RVO1hbb1FuXjciSSNnXWdlRW9xbGUwMVY9S0tbXlJf
Vzo1YDU6NCNRUTpiM00KJT1eUltSKkFYYipwZkZfRVtxW1s2NUBbRW5sNEEoQjxxSnMnQD5oKGJT
OGtJYkNHbFMlMjReaVdVZmdbbkZLYic5LzRDPCVTPGppPyMtRD5AUy4nJExrQ2pTSi5eLlZFW0hA
S3QKJV5HKiwpW2pdZ05obG5gcDtsbUJDVFRPQjBFQDUoIUxVJW0oR1JbRCouPzBOZDkwJnNSIzxs
OyhSSFpoJDxoaUBwaiFyb2cjLElEdW4jUXBLZFtzZ2BJNyo9Qm5dWj0rbT88YWsKJTlva09Acl06
aVlpa2otNVVoXHA3M2I/V2pHJ2BsJC9SJjdvW0lhMnBUWlY8QEppKzxSLkZRNVRqOmdjIm1Ob1RZ
QVMqJF5mWEA/LiQhSWJxLlxxIiZVM2J0YWJNIW0mLnVwb1YKJTUlUEIzTztaSGAkI3NQSD1ANWwy
PSE6RSw+PUlII2w5R25gMEJBWkZHLlE8YkJTWCI/K0crMTAzZC5COFY2T25gPSlyTC1QVz1KXmhl
Xyg8bzlLZzNZJD1XNDNxcTd1QUk+cVcKJUdBQ05HTTgiJi9KSEdIMHI5LEVSUy44VXRfPz5CK1gz
Jjw4UC00biVJOTVzKEsuTFM9KGdvXjE7Q1VAazQqK1RqbDU9PlhKbjEvXzVcKEArYEdibiVTJjY2
RDYsU3BoNXFGPjAKJSgmbE5aK1suXC5gO0JpUm5OPDdnbEk4OS1hYjNXUUw5ZC9fR0ZUODlYSW0o
YGdKRzZsbyt0cFA5YGUmQmh0cSg2LDovRzdbSEptWFZOW2JnLUppImBkXVY+bF5RU1drZjpSUWUK
JXJCR1tYczhDUixdYDdRN1Q1T1dNaCZDa0JTK2tndEkuSSlPQF5ydUtrMkcnNUlYKG5DYyFVSkpq
Z1QvSCprcUk4bUFEb0Nyal84WSo8LzIkbXUpWz5yL11cUnM2SCQvSixmNVMKJXBWNltjcWZjRF5K
LD47YUkvIU1WXlxkVCJcKTImc3FpPUQtSiwjKW5eXFtuKSV0Rk4+KldIKSpEdV1KJkhhSjFrP2Ja
TmdUPi8vOnJWJ1pKcFpbVFRvRC5xNjBFMWgtTFo5SmsKJTRvVDZzTT5FaHReM2wmXEtNOTklXlxJ
JydyaCM3JHE7OTYjXllZM0VKJUdGdHMiaUdYLG91cE9MIlhLcl0vNU8yYS4vMEYoVXI9LStAOEki
MUBDc0syQ2c1XGteSEVJYDcpPyEKJXJTLS4iaHI9VFYiKCVHaF4iPEdiRD84ckZBb2xdI1xwSnBp
Yk5IVGRuXDopY082YWNpNlJFaVRXWm9ObjFPOCcxVDM/SSk7MGkuNjdcbF1bVEdjY3RuaXNbcT1i
dSU8a0IvU1EKJWoxT1xEaWlvSVovYW1rKkcoK0heZWIoZkduYGU8JUosZklrP2JjV2licFAvT0lY
WmVIX28ocldxPk8kdEdGTlVCazQ8TVA6XUokQyFkaFFUOkc0IWxHXGVSUlEhXEFZa19fbFsKJTMs
UjZrQjxkajg7UygtIz1ObGY3QEg1JjhIWm1EIl5YaVUySDlDRzdDWkguIiNAIlAkNlZzTk4/PC1m
ZFAzPihZZlZoVSg6VjVNcmhKO0hNRVhfQUNdX25OWFonVHIoN2R1VWsKJUlmS0RqTFVVWEhyO1os
cyNJSU4jUThcWWloaDpFaCk1akNFPTE+O0team4waUddSFcxLFk6YEA2JzA8RSFALyM9JFtjSjFL
Lk4ySF1TSiU3ZDNURC4iOWZQYCkkT1JQUCo6UGYKJUxBUlRLSGdkc3FrQyMhZ1p1dTZCPGhFXiFe
IXFDKFNaNF5jRzFEJzhvLyg3P1coRG9FZEcvLi5WYCJcSl5aK1pucE9RRVRsdTFnU0wldWdiVF8m
VCw4PG1rOj9mUypKUTctYFsKJSpgR247cnQiN2lOYWFjJF4wZ0xTcGoxMydeVT0lTkhmUnIvIiZC
bDZiIllAJFoyZFxOLiM1YlAtR2BWQFcmXkc/Jl0sYT5pbCJaYWczXSI+Nyk1SS9sWF8zXyQvUD0h
Kyh1QUkKJSNjNVVLO2NDN04rbzZhbmgkaDUwInJgKDEmckFxXDZzTTVBKTU5Ly85I1xlNSROYzVC
SVQjUyklSShoZiY3RFRGK0dDJ2k4KDVSKCxnbkdZLiJWcV1xLCY4JCQtWWwoS25JRSIKJSEvclhE
Vi1RLiwydDxbXSc7SmotS28lTy0nKkYpcEVFYmlPQD1tU1pfXVNyIjdTMTZWPikzWWIpNmNcQy5B
UEExcmleS1JRcS5JPl9CWihmPC5wXTg1ZUIqTDEqcFY0ZmRYYiUKJVJvLyxiTENhRlA1Z1AjXlNt
VnNeWUdKMzlSaiQ3UVJhVjYkVFg7KUtLIzNFPC9XcURJOGZabk5fYUlGQFxWMVlFQjQzT0JbVT9L
czMiJD8yJGU5ZydjdTk7KmxfUSZIJEs2OjYKJXBddFNMOFpISW5mYjJhRyNeSGVZOiI5XCtocmFE
O2QnSTAjOTNeaDU0VzQmIl1KRU1CMC4tVydkUVBabCZoOjxuQC4/V1UnOChybE9dWVoiLUloNkEq
S0k2N0tFWGJEJExwdSwKJVIvPXJgTFBbbmpfRidLYTtkJFVJWW44SSlPWToxYGFJWCY4aWc8N1Zq
S1JUYlpxaCYmMDNXXFVVdThPKy9kcjxSTTJBI1MiV0xVaGI6UkQ+NmknKWgqNHNGZCk6QzVaY01p
Rk4KJS4wKGVxaClkMVspcGVucD1PQjU+cUEzakRqaFIpRmpXVyVtVz5wPDhBVikkcWRxOlt0MF8/
ZSU/aDQoKkwmc24xJ0Zmb2ZQPyctLkxRXyRBSlVKVS9KNjMicyFNUyczKTIoSVgKJVJnazxKUCE8
V2wvSSpUISU6bl8jTkozLVNJWUt0QCJDdDhYZzdedXA5Y11PM1AqJVYoWT1rQ3VNPT5PVkEtKjMi
WUFqV2Q2PEFvWyslIz8iOigzXWM3KzAxUGI2IlNKcGBcXVIKJUxCSWhlPV8xJlZfbDNrdSZqXmVm
Y1hSTyVkMV45Jmw2SyhsW20yLjAnVl0maTpQbCkjIj0kJ0VJW2Q5MDZHIjFoRilYai86WS4tMT5y
PkpnPCxLOFpqWFpSZ1I5SWpHTkc6dW0KJSpSN18/USZBJjgoMzo2LDBtU1FzOEEzYiVaOyFsImhB
XkZpUkpcXF5Xc3IjT1NrJiNyJytybCNYcylFQVZBTD45a1xHNF41W0U7RXJLc0FnXDFFPCwrPW8n
QiZMcEBkOWVrdDQKJS85Jj9HXTo3VytRdUJjbyNWQjBiLm9scDcjbCYhWVxoZTA4PWIqJWwlMWFv
VzljQUxmTEAjKkNKOHRBWktXVTAoQ0s3MXM0W19sXE0/K0A7U3Msa2wqWUdqRSglaCxSRiVhI04K
JWRUW1VDJUlPX05gOClLUmFpMXVMN2doYz5mN09yNjJCNzNeVC1DQEddYFFhaSw9PDhWb3IodUJA
aygkX0BqNjFzPXMiNnFWJi9JZkkiRTpWKVpedEZWaTt0UVApPjcyJkNlT1oKJSw4Qjsrb1lETFs9
W2QvZl9uZmE9YTYvOD9fZC9FXU1AYzM8Skt1SGc9QTtQX1c7b0tWYGIlS3RrKC41MjRxISlMVTNr
PyYvWGJoJy8vV1BpRiJwK11Sajh1S2NlPHBKVkkhdTQKJS46dFE6Y1JYVFgkcHJcXGYrKl82PEE3
ZXM8dVEzUCQ5bmZoUTxdIyQmT2lyNlBmKUZ0ZC0hZXElZ0s6a09PbFVcRFFyM3REOSlmSyo1cG06
QHIqOydpMHRqRi9cMy1ITEdkOyQKJUZWWkAvXUZzT0czTGdTSUFlOD5GXyUyLk9OUU9XaGglKGJC
OEQiVC1oPmhzYGI1KSstMi9uL1dTSDpjJzArPzoyKVhwLTxQImk5NktfaGs/YGlVUCtWXWspKF9S
OU1VUT5AMUYKJU0xSUJTVmsiX0tnaCIlIUtTOmRxSjZkPm9QISdZbk4mOHA1N2xeT1RxMzRgNDFJ
SGNTYCwqanVFX0w2IUNJazAmQnNvWUxpcyVnKTE1PVFtMlFrX2ZMJygwUmJVNHJaXC9TIzgKJWBT
RDU2anRTUFpqRDJoN1F1JjctLlk9I2tTKVRfVz1rRDtVbCxuZyxCSVpFV0xXJFtEPW5qUHAtWyQh
JENlXWs4K3BabDZGJDhIITpZb3FEalc6W1xXPT1eWj5UPnIybXA9b1MKJTtfK29eJU9jRURvTFtk
UDYycUw7OWxeYWpINHFCZm84WDRUU2UzNyc9JFkpJV42LzFjJV9BayEzcUJ1cTNrQis0XTc+J1xK
bXV1OWNdWWAvJSE9ZGdYbjBMZGA2bm9TKDBEcl0KJTwzYExTM1RUZ1M6JW9ycSNRSk9XJHEmZ1Qn
Il9nJzRkdCNFX14mW1VdQ1YsX1M3QjIrOl5BYFhsJE5EKzgwT0dsJzd1PXU0Q2AiMUpzRl4pRTpO
dWIqSS5iOzM8XlxWbCheO2UKJSNcRGUxaXFybi01Xjg5Mz1vdSU0YDxaWjM8c19rXEVWcExUayVm
VVtRTUFaLiZvQmY9aW1Tbz1qUzFfJSxVZW5lLldvSk9jMmZBJzpuQj1zNyRFSDJbKnBIJEpeOFJU
aWwvaWQKJW4mQTsvYDE6TT04Ojw3YDgxNCQmT0JQQllKUV9FQjBfJWFHXWpDNSwkZislTGJRYyo/
YkZoSlgoLCMvZWwwVXJWb05KaygpblRpNjxhV2IsOylyJjkmMDA4NlBtKVlPN2wmbjkKJUlEUzsz
ZmlTYWRLMEZDISU6MjQ6TEVVTExMaWxDMSgpWCZpUE1DYkxuLSMhWUZvJm1TSnRtQDJfIksrOklA
Q0VTY3NqVD9VYV8nP2k7Pzk6S2dlUzdyJSxsRDM2IWNZOWY7S20KJTI6OkRES3IsLDhvMmpCXTlk
ZyVhUyNFbT8tXm1xIy9WOCFFJzkoZTIwVSZAcSZTVTdpOVcnXi0uYGtuMylNTlVBMDNLdUNkTTg7
NlJPWFZuVTAyamoyT09yYUhBQylTUmVqaXAKJWJkIyprS2dMdT86ZU0tVGk2P2NjQmBbYzxMNWZI
PzsiLTAnWC5VZ0Q+WUpIQkJiXz07J2NNRnRPKWxvMGMublVLWlFUWmkvUGUhT1kwLmA6VGNbVzQ5
ZSQ9OUYkX0ZLQE5DaDgKJT00ZFZSWVkhOTFSVDhAUFFtTkw/LUYrYWUyOiRTPilvZ0tjQzhKPF9l
NV1lX0cuPEJXJ0wsR3RPPGwzN0owKCYvQ2JDNnFLR04xIiRvQiU0QnBGTkZlc3JUQVZYLUtSNi01
QFYKJS4qZmdISCsoIi09XWQ6Z0kwcnRnJDZraEZXbUhgXz1qMUYwIXJnbD0vN0JPQzYxMlJWYEBN
XjZbQUsmWjI0UGJPJ1RvXT4hTWpHWFlgWClqYDpOaUpQRGpdSUJtMylDZzFnQW8KJVVdP1pcJDZM
TUFpYXBEMzQtcWZLUEFMPVBCO1cuQ2Bcby9xIVdGOGhBSlUrRVBiXT5zbjZHJkVeZEpSKEdzWlsn
UFtuNV1DVV89bUBASixrSSl1bk5nRVZjSjwtSUBjZ0tBRWsKJWA4KWNoLys/KDNOXCpVMVwoUCFu
LShoPldsJE5aNztnVWFVY0poRG5EQTsmT2dOLkslaC5URVIqdDJCMS9fS3QmVSVyPW48YyJEZ2RS
Q2dNZytsL2tGLDMmKixlLCZsZDRBVmAKJW50TzE6RFA6YidCbVRvRiZQYlpwYWwvJW5LKTorI15s
ckM8WnFgdTFdV1xtZGZPNVEsPD9cKEAwZSVbNCl0NmtXTVNPcT86NTkjMixdc1VWMHJsKWY1XU43
UDBTaWA7OW9CTzwKJSJQc1g7NEBAMDRDL1gnNDlXQVNUNm0zUC5pTTgjMiU1Oic/SSJra3NhT0RR
cTMiKFFlPD5JdDUsZyVWSms0ImEyZTE6bWRhZGhEN0cqJD0mM15PS0VmRSNXOUxNXDV0ImA9JEoK
JToqXmAtZ11FbWNyUFtOYGApcDZMWCRKMjFcOUc0bWFGLEUnLUt0UmtGYFxUdUFGSGVRUz9zITVe
TW1mIyJvZEY2SiNATlhAQm9zbiYpbltJUVwuO21OTDVuWE05UjpcYWA8VTcKJUNxUSdRYDlSU2Bw
Pj8mM1srPTl0ajROMThjbERwNFtQVFxkOGpPYFNxQTg+VG1UQUIpWmdrRG9gJGFmU0lWM0JqYXJc
XDFvWj9OO0A4TCYnanVmcyIkREVNLyRiN1RuX3A9VzcKJURWQCZSXVZOYHFVUTkvUiNzY0hMK0lz
TV9NOnUwK1c/bConKztRZTZOWlVTNENHUVFLSzMvLC1qdCxhWTkkVkMpXnNbR1xJIydzOC1VKVRE
ZjJjcGBRJltQODtIPThgRD4lI2UKJVtEWj1SPl1KSz9UVV4yWilQVUphTSVyTkAiTFtub0duLjZJ
OV9mUD5sXmxKQSUsQ1YtJDJPRktfcz07QD1nMztkP1tbQF5WMl0tSDE2blZIW2c3Yic4NEpAb1ln
PUFlKGZfW2UKJSlYMjtoVkxGWHJALDhAO2RfLVFLK1pYIWg2VjRoKic4RFpqbm5Ob0JFWC1CN1Am
X1RrJG1rIk1lZE5rZFBoJDZoamhhY1tZMCxfKik3bXNKWHBhO181QzIrNjxfZGtgZk4uRWMKJWJx
KXJBci5HWmM+bFU4N1k8VzBDayEyMz4mbUdZRzVnVkRjJEU5XDhbMW9iZVlxXTAqI10xTlluLk9I
MDpTQjlPZ0kkXDZjby4ncFtAcUchJ3VoSyc4UUNRWEQrZ3E2LE1NOV0KJStcYFNSQVZFZSkyViY1
T1UyQF5daEJmRjg9NkxLPEhgcUJGbDVXJz9EQSVeLyREPS5oKF9pLSc0LD9fQ1Q+XCFnNjQlQF0q
IiNLOypKWic4OzwlKDVAP24zcCg1NkwkQi8wZXEKJVByUVMsISlyMG03dFhqQUstOlxbJ2dHMTk/
c0RwQzhIWDQ6aiFiOT1NXVpNRV5iS3V1VipXPlQhTFgtX2RnamlLbjx1SWFVST0sKzdiOmMvNSZs
cltlbDQhI1gsSmpRUWRYUVsKJTMtVDIwcD1dWF1jSylTblU/Zz0+LE0mIyRKY19aUDsvNmopXnU7
LkgvXSpSK1FxNG5SIXQ8PCVnUVUkMVtjLihfNGRja0AnVmotWFFETmp1RWI/Nm4wbTBxVTUuRjAs
M3V0NVMKJWFvXlNUZl8vbWNvLUA9JiZGWGNgQVxlcHVkSSIiaC9JUmIlcDxyR1s3TW5cMTsrRXFZ
NF9FWWNLcF9MaWArcXFxTStmTVY9clVaTSpNbEYlL3FbTz1HWE80VEwkL3BcVnUzIT4KJVZcbkFO
SmJVY0ojTERFTWo9Sz9ESSRVQ1QoWVYoNSVMIiFZLSkpTGA2bFY7Wj9bPWxwUTFVL2MpdHA3bjBy
N211YTVDJShlYihyJ2N0LjUyQmY4UGonRzlpMWFaW0w/YXRbOWoKJThLXXBWJD41NzUwMW9SWVRR
S2srZTwzcCg9XDdVPEtgKmZTUS4sJm09NDFXZlNXR1skPjJRNy8kYDZWJS9UJlU8bUwjTzM4WXVw
UD1sQU1wKCtuU0ohSSM8R2QkLCkpYT9fRl0KJS9vcj5XXzMjYCMnZT1tdFlXcD9COTBLQm1INS5h
PC1APlxwYjkmSmAqL1hQSU5lSz1XT0NEP1wjYz4+RUYkUEZsZzBAJT9kaE9UbipbZkNlPUljOkNc
cU9RUSVZcV9XWEtiTSEKJSJANEotQWUlNWZaWkEnO3JCZS9SRVdvLDdDRWtZXVM8PTpvOGFEY0k/
V3RtUypKOCppZW1mLTtSOm86LVYvPWZAKCUsdDlrbSsiNnBzUDc7RDEhWERpJGomSk5VMjhYbSsq
P3QKJUBpLnNUZFpRUk0qLSpoUnFAOUlKTiM1PURoPFNiI0EwbmE7TVZoLkA2akVxJ2NLZzdDbmQi
KEEjVE9EXUQ7YUQ3SkBHZDM6Y2Vhc1I6cG4tVFRlRF0hWWxhbiNFJFpZJClqSUEKJUAmNSxSNWMn
Uz4jYmRQIixpR1xaTU5rYiwpJSxXTmZmKTBiZiUwRElCXFIrWSVqQl1aWWZGJ3E6ckpbUzNTdS0p
QjVEVnRYRGVwdTpNaytoTGItQ3MvSU5UM0M/Wjo1JUkyKjQKJUo4VDIhSzQ1Sy9kPGk3UTNZUFUx
XjcvJCIxXm8uKS9zPmtNUEJeYTBrJklLJmQnVVxPbDhHSEFRLyFgPjBzUTAkI2xuS1ExMiUmYG1m
WSxdZFpzbSpBOUFGdWpCNDVHTydYNScKJTZCZnNIWCFpdVY2NE5kZUBUa0EiYEw2ZUVVPiNtRzpV
V0JbJi1TTXRLVDtgclMiaEViPVRnTTZnZ0VbPGpNWFxZYVFkQ2kkO1cwNCcyX14vWlM0OCxET0E0
WUVAb1l0WlxqciQKJVJUKGgtV1kmblU+bE0xTj9vT1BnPnRWRTEhWGJJIV87SkgrbSI1XjU2SkdY
VF8iIj9hcm1rJEg0VitvOzdVMGwrKUhuL1RKLkNXPUNhZlVFMypfMkhDPi5cVnJVWWksJ01jWTEK
JU0rUCczP21VPVooaF8hSDQ8YWs2YnA/NloiTTM0OGRDajROYSJXPlNtPDkvK0x0ZzdfPjNJaSZq
KidQJi8nXFxEJktLSG9GQWxUMiNeIzxNYCs5ZFM1MyI3WSpCUGdMK08jRjoKJXE7QVgtb2JibmdF
XCswTG47WTxdZHVJTj5HcipvRjVSc0wyM0VZZXI8aiUwYDdKOCNRV3NJbmBUcE5qQSw9MlpgVjhf
ZEE2OUk5QUozR1BFSD5SLDE8Tl5McXFtbD5nIWNBIlkKJTpHZiVILF8zLz46S0JpZEBYblBUb3M8
Nj8jPFxpP291YEI1VTE6a3EsYzJwVCxSU3I9Jm1cTT03dGpBISo6JXJrb1c1KUxQVU1AIm0rMjlW
PTFhOyRPQiltb09wUGcnVmFiL2UKJTZHdGlAJkIuMXIpcWshMyZlYiJoZzY7XEJwXydjK0RHT1s7
XCVWUSlRQiYsbVAvWVYjYFFhNG05aU9OYVYrRURgYWksS1VaOEVMWVg1ZTFPYWVUbkhtaj1UVXFE
b2V0TGxQYCkKJVVyJS1SSEU0UyNXTSkvQURCLioqIUonISJNaGJhVUxDS1dlbFMtTzYpTFsrVkZW
T25RX1c1akknQzM5J1YiN2oiInVMIl9OQWBoYyVaXGIhPVU3WF44YD4icjc1SkcpRVBKMDkKJSNP
Lk4zSDdES2tsJTNSRk9tQURJQklkQmNFQ19dJmU+RUNuI3JXP01xXCRcMFpkZltCQFYrNCEybkB0
Z202S1daMmU/Nlc8J3E+aGFPbzJiYiRWW1A5c1RpYTpILWY2NFxKVTUKJWBnZlpCSjhQWCpyJz4+
PEI3cG0yR1g4W0tlKFwiST05KSUiSz0lYkYhMmBrSUkqM15hLjltXSFPSVJcR1I5OmNhYFtRcD1R
RW1zJUEzSjtLQHQ6YStpQ15NJycwMDA4TyFgOG0KJW5TNGtcLlxEdTZIIyk1PnJda2pUWVYwVW4i
QCxyaiZ1XVMiLFJSViZZcFZETzwzUGFtRVt1JyNCTk46UShxYT5wa1YsQUFjSFM9MDlNdTkyMC5b
NEcuMTpiQSdENG08PTk/S1MKJUJhNVlOQ3Rga0hUSVppaCFxI1YpQTFAKDdNWixYIm8jZkNkQCM+
X2w9QE03YEAsIyg2KiU7PSlxQjY7SSlCY2JdOjAmVmpcbVhnJDFEa201SEFST0M0WkEscEtiR3Ry
XD1GK2oKJWktPEgpZFkqRUYnOW8ybmAkVGdITCNFNj0nKmZRUzlObjZGXk0tYVdrKVJYKzhaJHJq
MVRzJjZtb2s5UG5Ia1Y6RGQlN3EmQmM3cE8qRHJhIStUIyNRX28wRV8kNjpqZlVUXycKJTwyOl5h
L2VML08zb01WQSRnJ0RxJ2I4MDtMKG5kVCNIUCddPT1kLV1rWVdpUmNyWEVHSys1REJTOnVwTFZJ
UyN0ODgyKUleZ2VzIjBpUCNTWjdvJTBAYygtJ0ExLy80ZSIyLmcKJTQiUFpPbmJybDxHS1dfKEsi
OkFaZFsoJSpCTDlhN1U/TW4zKyFRdXArVV4oVyYzSmlHWURCMilKVzdkNFR0KVdZREc5NlFJLzVN
U2BmLjRjLEY5NkBnUGB0VklBcztbPVleKUQKJURqMU5FNmgxQ19MKGEyUF9kMDE4VyJVUnIkL08y
JmdwKmtqWS4qbS5bOCtuLjAwalxBRSpNb0FAPi5mMzFibytJYlYjLGpwbiNGS0xkWzxWOi5SSyMj
MmUiMUcuImFgRCVdYS8KJTpnXz8rIi05aCNgQixyJC1QcSEyV2ArISVGJlpcM2xEPic+VXJjQSs0
UmZVSCdKVF9tS2k5Mi1IKXJ1TSFZaypiQyUxXT9AIXRsNkEoM1FOTVpkI3RsT2c9cjpcJGsyJnAo
KkcKJUg1OCRaNCUvb2tBZlVeUVtLOC8jcislVShnYlxWMzdnRVkxTyZiLU8pQyIiJz5zMiFdM2U4
bHUiajwpQyQibSwuQ1c5UHJYKFdgXSkzYlk0Il1cIUxLVjBtNGhUbFIiNVk7LiYKJT84JVReUFBq
KVxeaHFSbjg3XmgkYUMsNDtiZWRzXltyIUgtTTAqdSlDQ29dYD9tUy9KNzM0NmM2YSd1QiJcalFJ
Om46ZUtiImRnL19XPCJaISglIjVrcERLR2VkdT09PEAsVzMKJS5VYFsjOTRmPUpvaWVGLyxwZ0Ek
QS9ZX0FKTV1CQ1tMRWNPWEdgOUMlREBlTUUrN1VVMSZzWmU6REM7RiklSXJkJz5kSScuXFs5VGpD
LnM6cSE4T2NCYUpDWCZGaTQ1VEZ0Z1wKJT4kciQ5aDZTdT4jUyd1WCFvWEJtL1RXVHRjR0FHVzZw
MztvbzdXPEEvXnM1cCFIQmstRXBrdUhKQENiZzlkNGgkX1dzNy8rSjsuaTRBcEFnUi51XHBZRkNB
bmBjUyI7RCcuY0kKJUcxcFZjLy07LWpNLSEnSlUubUFCIWhBQU5ZJW1RSyEkRicpSVRIPkdKUSRY
P2hVSWZdW25tbD5aXVZDJk9EXkFLRzRRLStaWiRRM0B0QWFOcFBhXkZNMUJLKFBkPF9IOl1TKTMK
JTRkckhAVlNgbE1rMGtUcGxxaTsnL1ZZZDIvZjBzKlZvOzQyQStrImIrP1lsMm9Vbms8R09jOzY1
Uk4+czRccyI7IVskZlMxKC9mM3AoWnBVPGZwKVRKN0w1WGgqaEQnVydfYCEKJV0pakAuNFFbMWkk
TVo7XlFsMzAvJF5VcF0+R2xyRXE5VmlTZXAldSYhUmBLXWRCUSYlNzYsTjJfWzNoUzByOlVXPXVM
Umg+OCRDdVZRPVxcOUtZdGtfNl5bZkhnSDg7QTBNTmMKJWk+VytUbVY9ajdQUCpsU2UvXy0oK2Nz
P3NuUVAtNlBccl9qI1YkJ2JBL0BsdStiOWMkQig/WU82XVplUiVWaWZjUiVlJm8oME1KMEZNaj1f
MmhqdGhOcT9kTzpMWUxiLFEubC8KJS5qXXNdKjMpUTRiO2NGVDs4XEAsa2tMKztJKG49SVE1QUk2
Ok8mW0QiVk0sVTE2MGFANEonWHNqaixqJm9KcC0jXkRZZD4uSVhgQTBQN14uVWZBV0xDYG1YW2l1
WEsjaCpDcF0KJS9gOTZvIkA+YEFjbidhXlBjS1pXYTk+XUxUYGVPSzRzdUVOYGwrR2YnMWk2Migv
PyhZTlVWPCotNS4pdFM0K2o0K1JFQjs0bFZfUGtYRT9OJjNwXTgjYT1cMkRvcTYqNV8yKksKJVUq
WV8yPjw2SmBLdVQia19JWF5iQDJkU1wwVVVDS2pYRU5cPSZnKiYhKUlcP1Rsb1NALzNJV2lSJTBQ
QjgwIUBPKGA1MDonRmAkKShua1k4MkRKY2YpODY2JnFTWCpqYFo6dWoKJVpRak1LUmdJanA5SyxJ
bUJLV1MzQSRgVG0tJSQubk1Mc2IrNCJaQFkyOkI7UyclIlE0WlUuW1Y1WjQvN05VMWdNLFRcYGkt
cldkLV44Rm9ZZD82I2NPUlEvZGgkP0FRVGhAYjAKJS9yYWwqZDUyTldVQUIrWilbITJoQzY8XG5W
cjI6JXEsJW1XZCZMVj4kSWtQNER0WUtZPEgsV0owazQ8Uy1jQmwhL0lzYENsSFEmPkgiU0spRFMu
UC9yQVkpNjZDKXElX0ZJJDEKJS5MOTE+cGondEVTOlxoTTtUaSxtLSl1dWE4bFY5ZCVETjFcQSF1
SSZfVT85bDEwJFNvN2BLSVYmN2xoPTkxVktnLD46PDVZIzAmKiZIVi05KCsoQ1VsQ2gzVV9nPy5d
LFdbajUKJTxCcHFHLCphTWkwYStcJ1JOXyssTSV1PVA1IiksOk1OKUY4TGZeUiNIOzQ+aiVAYUVJ
PEQoSz1MPyVCMWtdZ0k6REh1U0RENzg/JW1HIydlUmJiVmpgc3NpND9mcTVDbzFBP1gKJSlRIjNr
RDo+QmA5JVJhLyZhYGdsUVpqUkA3I1pjRk4tXzJAaVxWaVEuSGBuRS1EaGdqLnROSkVYLzhWajg0
Wi1XRjQ4KlkqXGZlYyhsUjxpMiQ2JGYoKjcoTkA/dWIlK2U3J0IKJVI1W2RLZS9ZYT1AXjZPUihQ
ZnNmK3BGW0FKYERxUEthMWczLz84cyg1O10wTGlzVG5sK2ZeLUUwazFBPF9wNkgoTTxzY1IxYFEj
TiU0W3RObCllMTJVKk0hXkwjVzddTl5NUXMKJUdTOmBWPD90JFE+KGoxL1JXXUM0cD9BZzhbdUdW
PUtJYmFlMyE1OGknN1AwO0tzbEFTbjE7a1AsQkpDXSJRMENvViZfXEFfQFVCa2dDO01XTjotQi5c
MV89clU6Z09uK0YqLTYKJWQkLiYqQDQzXiZiKzw+Ol1IZlAkQ1FiUyhfZy9IbDZBWig3L2cicU4h
RlkjRnA6dWpQYFRRXzFPdCJINEo6XlNcMUxSQCJwKDtGOE5jcjxKQWFzPl5UWnRPPyFVdFpnVzFG
T0AKJSQ0XC44bT8hIWA3SikoPGc1Y0tHT1V1PUduXmA6LExlaTgjbUcqaU0zLFJwPEo/SkBnZGZy
RCVhX3V1bF9JUHFYS2JiXCZJISRROiw1Z0dIW2ReUChlNGU1ZSRXIkdCZExML3IKJTlbVFRcaj9m
NW0xKVo1JS1bLlZPITdASFBQcG08aC9ELzJlJzFFUEQhSyM9TzJQYUhKKGhlTGBjQ3VhS0xDNmFk
ISFQZmA/J0BVaSYuW1FtVmNjaiQkIWA3Ill0cS9MN10xdWMKJVk6MlVOSjAvZUNNVzVOI0xdXClV
T049Wy1jVDt0ZiVCZidFQGRIYXA5OV4+OCMwMDJyOmluc0VwO2RAJ1pZcDhyPGA8Xyw9XEtzclJP
VlIqJmt1VWhAWW5pQyYvbF9VaCNMI1EKJT9cU1ZPKV81Pj0pXS9nO21nJiddN0J1a0QqWGJJQiVZ
cVNlI1tvLUhJVjooTywlPk1YKjQpZipxKTZrWig3SSEoVF0yW04kcEVhPTpFOVk4YjhjMEczSyFq
Zy1Ta2g8K1xOU2IKJU0nNT5gMHBPWm5CMERqVWRJTUs4PS5aY2VMQjNCWiVnaFtiVlppMlU9RSRZ
LyJTM29sZF1PL3BvcFdrVSJmdSttKGchYisrSmRYTTciaT1ASSIvZjpALXRFKWczLkpzTSEoSEUK
JVNfM1xhSDZPJi87OV1tO0NMUHBsQURpa1g2RzNTY18oJFpLVEw1QyRoODxNLDU0PmM3LGAoJWE/
cXRCXi0nZjJmaz1oIyc1MElwKl0sWycmTSMnPFtPJzJRTGJfYnUuPDxpOiYKJVYvQjlgTyo3UmNK
O1FCbSVXcjRGWyk6TnFrX0E7bCgnQCMvZWVUI1ZSTj9MOU04MExPaGRxQ0dQXDorayMpQGQ9KnQm
JVorWWpfaFNbPTtwL0hTMUhbTm42KDVnTFg+MTcpKUcKJT8sb0ovYmViYlFDUGQ+UDBiZD9wMyZq
Ol1IUmlYU2cnZWBuUVtCQFVTKUppNEo/NU85QEEnPipNRmlMKkAvNzA0QUg4I2xrYi0+ST5zTylB
USJZPkpAQSt1SSE2dVNiZGM6YGwKJT8hJSklXVU4NVFpJ2IjLzhub1lVaHRRIypKUCkwSkNscG9i
czE/KTxRLm9wTj00PTdabUAtTipJODksajFSQCt0cyheOGQ8TWAxPG05PVpKMkpVQG5qaVVFZVdy
STp0cDkhajQKJVo7KVZYW1JvNW45Y0ZkIihILidWOmJCOlBYLzY/VUNHZCVIQ3NROVomaFJlcUtH
SiptSHI1QlMvRUFwMG9CLlZdbEhpSCxBSlRPKTcsLC9QJDVCTW1VLnJoLjlzQi8iLHAsXEsKJSxE
NF1lOnFCKXJWRCVGJ0tRNnJ0TkIwSFNIQl0sVyxjWDska2NbJT9QUT11UDs0JEpbTT1dKEpSZG5g
NCY9IXExJE09YW5kSDw8XFspZlVsSVcmIz5ISUM6P2hhIjZXJ0UtNy0KJU9xbmJOL0NjL1xkMGct
JVczPydsZUxMO0hXUDtlWnJOQlQhPywsSURycUEpQFkoLTtnP2dfNSJvI3BeXUlzTE83P2lPJG1x
SyY/XkosIl8ocCVyRmlJY3FTbz0yQUEtTUE/JiMKJT9NLzotUldZVjdKLENVIW9nJHBZL1F0JVxv
QFBfZFkzMFQvbUZuQzplTyUkZFd1YG5OV3FXMDhmcm1SNSRiWFoiZlE2aU9UcSx1dCFxMUgyRXFc
NiQtNEs/Qmo2PDM5PnUiR1gKJW5eQWkiaDdpZj41UHFLJldnR3FZZzw5aipSZWY4R1gmO0NPb2M/
Pj1xV1ApczcrVl8lKGQqKWBVXUdDdXI6dWlPXytHXzw6N1hGJFZ0L2s0cWw8VktDMzpVXUFVTkJd
Y2khXkUKJU1FWStGRzomP29IMCNpZTxVMERWNFwoSlBkY1NOQG9kKy0uJ3M/IyZmVm5kJElia1M3
Ji0hZE9vNW90UCI1aU9xYDsqUW5vaFlUNXJwckNDUUchS0VEZS9tTG4rbm9wMS9AZEMKJV1mSipe
X241bVQ8cEcrSnMtPEB0VHJLXCtGZj1kLlNGPz0hSytJcG1wdUJdPmxFdE5ZVF9TSjJoWUBbSGZk
KztIaTAuXUknZTgrRkgsSHBgakVdb1FyYCJnTmhrKVpyVVQ0NUgKJVc3SVZqVjhATjdxQENAO00o
YTUqSCw0bFlyZE8obWw/MCw/cjBLUDdyVEUpOVVtS25VJ2VmIXJNPmNabzIvPzk2bmhmJiFuUiRk
J3EwZyJcWjxTaE9HKSY5YzxGO3E4LUxdWF4KJWZEa1FqNVE0WzYvIVZTIUoqc2QkaU1lU1QpVi1Q
OkJPOkg1ZHIwYTFAcihkTl0xZ2FpJ2dhTTxxZDZHWFVmajYkIm5JISFoc1VObl5dKGBFM0cjOEFY
Qk1xIW4kS0g3LlE7NHIKJVtVSWslZiNxRyFsLVcjaXMwaipBV2NIWCo8QF9xK10jKCg0MXNZQD1q
YylsLmptZ140NTVPVmU6WWtXPi49LCFSZyw3JSxNMFp0OmRzOTxwckImTk1YK05QKHIhUC5ucTl0
Q00KJTxCOTZnTT1hOmZHOHJlbGIzMG1oVCRSPmlEa0FbW3JZPjEhYmIlLFFBWktIaTxAViJVaDVH
VGwoLzVuI1UwLVE9cDJvUGZbQiJ1VG4/VXBqVj1TcTNiJWErZG5fLFdzMCxnPFkKJW9AWmZ1KCtl
KDFIJ0BOJUwqdVQmU2dhK1ldXXQlVjU8W1M6ZUFrZS5xPCJhPmUsLWVEV0o8VEJeMWhjMig5Y2xU
QCIkc2UuXmEmWEc7R1NmYi00XDlTaUNmWGxTWiNbXlJBVF8KJXBaR1c3a2wrVzdWUFdpZFlzMThh
Q1ksVElBWkFhUFBvbVUuXmtNKVVjWzwnNjZHInFobW1VSVIpYEZwMW9tQ1dfJF5ILlJJWjldYlJh
OTBqXiYlVkFcW2c8ZkRJOyYycVNWUjQKJVFZY11xXDVCcFhqTmUqWFROTFRfTCknPE1ARVQwY0hn
Rz9eRFhQTEJiPT1FXnMiJzVca1Y3ODdLUm4lVXMqXUBRPE1YciFlZFk6NmQ9Uz5zS2I8V0dna3Nz
MmpqN1hNWmRNb00KJSxBWjVlM1JgX2svczJjTTRdSms4VmlIYWc0RmFPa3FUST9zYE9mMDFYQV44
L2tITjRQZyRkMCZnKjV0b1sqLFcwWzlxNkFEITdtWmdPKG8qPFVRXl4qdEtjaHJwcVhHLUcyOWUK
JXFPanFmKG01RDBsbkR1QmkmMzdMR3NndWA0ImU9XGNMJCJvOURWcGllZi1TaWlJV0xzZV1ILDsu
cFRqWFVgJjJSUWNTTGVCPGk/QV5DbjZGYHBrXS9BailSVW8/Mic0SCNONSIKJUBsbV03Y3BRODdc
KWtcVFFjcD1MM05xPTUkczdmOSJ1UDtRTzU8TGQrO0xwLkomXDUlWlxDQmFvLEhtL2xPMkMwNUA0
PDFlQCZDIXBYWV85a3RLU2FEUCNRRVhbVXNSLXI/YmAKJXEiWiFfUitTMDtKTlAiUmh1QkFKajVt
RmJlbyc9OWRFJScxOkdFMyM9UmMhW1cqNU11Ly9xXERtRk4uWSYiaWE4I0JQWDVPXzY+Q1NIclFA
SFMzc2BKLGVZYklGRTdoWDVfbEwKJVRGI1AvckcjQjZbIURaKEwnKSQ/cS5PMjlYWlZaYklcZzdQ
LS9NaDJvQFUkcCllTzw4ZTgoW2wxTT1gPTs3M1BLPzhYXkUwU085N1QiOEljZFEpTGc+bFNvPERK
NT0lXUNiRXEKJVFkNmNLSSk2Zz8/PDFzYk1TaU1qRz9ybCJET11eV25eTT47VkNZbFRJLjttOFEx
TUJKSWUucEJjNzhBOGRWZ2JuXHA4YylNRD0/T2QnS2llRGBccDJOOCg2aF1NZy1IZ0xJRjEKJVNi
MGJgW0VJX0NdIy5bZUlKPGJzcDkxMlIoQFI3JmxIT0JzNHMhbFxcTTA2aV1dKCY1bGZQSFBeIStF
NTA9TiYnaVZIOjc1RlU3KC0nYWtRUFB0NDYjSVIlblpLSCppI0BtR1YKJXA0W0ZNOFFLV1g9KGJw
dF4lKSdMZiNdQ1NORUdjNU5wWi1KJSIpPiQjbGpBU1FMK0AsNGYyNkdNKy8vbTMhIWc3ZVJLUDJe
RFsmXmpUW0EmI11pSydOTUZbQ2BwPDsyb3RLbUMKJVokNjVOZV5MajM8aUtmZl5IV0tdOGlQcysl
cDhPcmFSSmM6Mi5JVGlVWzRDTDFVaXMxQz89OzVcVUhxb2hZI1N0P0c2KlNpRWFWckJgbXJLZGU2
NF0uZUs3RiMiUktxNCsuKGAKJWdIVSlhQDZfRzhRZlYtK2srOkhiWlgxO2VJQCc6W0czN0c7YFAi
bURgN1ZqP2ZEWCN1Ol1KJyNuI0BVNisxV18lbEslKTFEM1c8QDw6ZClZOj5OajQ+czVSLVtyLSJF
ZDxPSSMKJUtCRm5ZTGRubTRfPmVUMSg/cG4rcTNGMmU/TDUrNk8nIjhbWThnWSlYSCZnb010Wyks
UnVAJ0FCXHFBcnFiOnBWTkknKl87cUoxRWcsdUBDUUlCZCNHIkQ/anFiZi5RMVUzTmYKJTNwRFE0
Zj5WQV9aaS9dNG5jWXU4YUwrT0FlJTlZKCpGMW4rUTBMQF9wXE5oNiRiKCZIZmZZdD48aXQ5RGJz
YCoyRXFxSm5BbWA0IUhjX2okWVtXQWdOTW9JSzpVY2k/YFxDYmMKJSYkMj5LN2E9SCFxQ2FvQSVE
M2RrOXMxSmRxM00/ZEVhU0JDTnUsISNaWFRTJG4+QDFiPzJNPjFAXmpYI1RBL0tHZm5CNVRhXUlU
XF5AOXQ0a3JJayllITF1ZW8pI0IyKFxjLWYKJVc6S29bX2JXRnU6RDdcMTtaKnBsJTowS0NhMSIr
UmdPQWkzSV4vIU48KWUyQFpidERGXTVoZ3E0TGVnK15cWlQsYXBfUihvRUtFKk83Yi1zT1ByQm1m
SDooXktqKWAqQVFALz4KJTpyVU1hJ3RzTHFXUWVBMiRhXWclYFxiXDhicToqb1xwLj5DR1ppXy9m
LFRfKkZtSlxebC03Xm9wb24sMlw8U2JWTWM9Vl9oN2NgaFtISjxCMjA4QFw9UXNsW0lePmlKUWpC
Y08KJWRsM28xUG9JPzwqb0hlQFckVl9eJE9VcTldK29yS1c8Ul5LRWpaNydJIiwsIm5mKGxHQi9f
PDtxRj9zJmQ7MD9IPnNHVyxLV21WUEZ1LlMnUSRfb0pZKWMoME5nPjw8X3ElJ2wKJTdOWm08ciU9
U0JmPzYjaWVwZlE5RCwoJThaPiVwPklCbVc7KXJwMiZqJCI3XXFZJlVFZSFuL2IxS21LT2lxU01P
XTVrKklnQGFGT1xjbCRvaHU2O080THMoKERqXUNNRD4/REwKJVs5QnFPcUopLXBgTFFkMSM3YzBf
WGBiY1JGRjgjSTNVI0U8cEZPQzo2MlA/T09YZ3JLOFxRX0xnNTJzI0ZZPWJQUFpBaj43YC9jXXBf
R3RGNUA6YGtrRj8vR0BTPmw/MjZtSUgKJSo3aGBJLkZrakFMOTsuYj9lTjMkQi8pcSpkPSkiUzFX
YCtGSVF0PmNAZiowYkxAUztoKWNrVz0xcF0+Tls4InQyU0shakBrTSQ2YlstQl9mYmhIRTAyZTpA
Q0RuYGpMNEZJLzIKJWJQSStcb1ZucHNULGtHK3A5JihIPVpFV09lSywqQVQ7Uj9gSFRyQktzIWMr
IkFhZEo2NCkzSGNmO1JHUGA6dSVoPipXI11lay4vTkxNaE0sZC43MExFdSsoKEdMTmd0QmVSKi4K
JW4pTTZMO21qVm5hV29mYmwoKmIvY21uP21vYm9hJERzT11UP1tTZmhSbVkxayo3Om1LMSw1Qmhc
N2pjMkdoN2ZScG9ZdVolcEg6IWVGaG43WHJMXDFVZzBvZmhiU0wxYWg7azYKJWosRFJBTU06QTwr
Iy5BUFwoV1MmaEooVmhvWWUkNT5BMVgiUlprMU0oJW47RlYyQClMcDxpOmFhZDY5cl1RLVEyZT9j
OCRvNzFKc2JNVEg+PGBgQiklb3BtbTUmb3E+QiFHKE0KJSxGKDp1KSpfUDNhJUU/RD9RJC4xKVA+
Rk9sO3JqQE1EUTNhQ3NTTTthMWxlQmBMdFlZPDk8KT5FIjdsMVMjPGEvZ0pyP288NypVQjpSWEQ/
QSonJ1tnaHRkQVNWYStnZVElVDIKJVxDM3FRWU45KmRwN3BLYDNRWiJzQz4lSU1uWCFORVw/JyZF
WCpMRGZJLXIjWiNjbyJHOko9LG5vMEpTPE82W1A8LCFCMGtxc0VvYCM7ayYwY0UlMyw/RmJMS2xj
IS4vUmFHcnEKJW51cEJZVlN0KENJYUM1bVU7dVNcPWEsZnRFViY3Z1I7S3AhMCUtRU1gIzA2YzJp
XyovRFVyJVxOI1U/QVpKJThxPlBSJD4+Tl40VENTX24uRy1jQ1lKK0o8bFo6ZlElQ0NOM14KJSo+
YS9WTFZzXzhrJyVETGlabSFiTy5BSUtAKzJWT3AxVF9dQEgwbFBXXDU9PW0uJz83ZjVMVmxENXQ9
TzdzITRqPFZuR0oyKUciKXAqbURmMDMtJjFPTTBdNmYxVUdKIzpsQi8KJUVsdT1POWJVIy4/Rilw
Mi00WG9CWGJgdWUyT3FMKHFxVGJaWUp0SSQ+KVpbNmkzSVtcXExsakE7VkA/WGk8aXJnayhCOSNR
IVw0cnA7Q3RMWjRicCZhNk47ZDRFSVVaXXBecz4KJV46PUFMTkRFW0U8aWAyPU5pUTJDXTZPZyU2
Q1RQZUZDIUk1RUhlY11LOWgvMU5wYmA9TSEnbU1oZTByXDNDREZZV0FULCxiUUwuSFNXTVZRcSFI
dHBGJkIrV2g5XEwlbGhtcFsKJTNhcjNlL0klLlRmKEhIM2NFVmNzcEspYCdWam1JWUczcl1nYDZz
JF5nU0BtYXBTRDE5XSlLKTdAY3QzclhzJF4xaENGWGFGXlpUNWpaRjlia1ZtbGNdQ0pPRDhXP1Rq
W1ZULywKJURkXyZHQ0goTkFxKHFAPFFaZ1Z1L2J1UEtScTlZJE4uOildR0ZSVVAnPVhMLFpzK29t
KGZMbF4vX1s1XmQnb0dpJWshcmEyZ2FTOVVCZThYW1VAXDFwUWZfXEhyMVVWbEYuLywKJWRiamBe
RVZaYVY3b3AzUiMqblRJYVQ4ISRZOS1uKSJEP1ZJLnNEZ2lVaD9JYDhRYE8+VEMrcGZqbW9ybGhz
VD0sczJbM2hFLEhfJGUlXVNVY0I9cV1FWGJFJHBvcnAzWWc7MEsKJU5aN103Rz4hOWo8Sm87bjJJ
b0BmXHIhUiVENkM9PUpxOEZDbEstMmplTkBxXmNobmMyczgtJ2NIQHFQKWQrNE1QQkQ2ZC9WWWBH
SkdGayoyPy8uNkYncS8nMG89aSt0RktWJWEKJUJwS2pKcic6KXMmIkM7WEYjbWUxSUFVXEdiLWE7
QFIxVG0/ckptQjElclY6KFdycU5hcTByKlImcnVGOVg2LiZYSDpXZ0tuWkRvQ2ItOGhJX0sqInBy
a1BWZVdgPVBePypEKT4KJVZfMi9kclZNTV5Acz9fQkcyX2VBWCc0NjE5ay5tYjJpOHJyXzYlKCU9
IkYsTiksKGlkPnM1VC5wJiNcay1kUzFbblJsM0hoZ2I5UkNST1dpZjlxV2VpOm5xXUlgLkotSzo5
RW8KJUM0MTRjQSRxc0kwLSZgXUNdIWdyaF0rViVsTVFvKF9fKGhFbDZYSUVQQkhhc1NpcFhNVVVd
P1dyNHBOYE1sck9OL3RQOlduYWEoLFwrQD0oXTsiYT9WT3VpZ2ExbylrZzA2RVYKJWQ5bFE/YjhG
T1RYT2InbGQ9Lm5takkudChyRCxWSWhDWDB0PTFcRFpoZCpzQypyM0xQKnJfczIxZW9PbUlKOidq
YkBib0xWU0REcVZHZkNFXVFzNjZucEIxVl03Imp0QlIlai0KJVtzZS89UjJvZXFMbVBcK3JxYlgx
WmNOL2hyVjJdM0lFJF1Gb3QwXGVZRms+c1g7YjFbW2VgT2BpVi9QPUNGQWpvcVdjZzYoWD0sVTBY
Wz0vbSFIJjZabjVxXWY+STAtaDdSay8KJV9VWS1iWUZqVDAwdCdqUFNgXUNPRTsuRCZiXVdnTFFB
Ty9hRzg5R01vXDcsaTA4czMiW25XaXFeVDE/NjpDaj1AJE48KjxWOzhLRGxMMCUxaTo5NlpTYD4x
W0ZuT0s4ZyVyUFMKJS5nKDc3R01QNmFXa1t0KC4sIidRIiRdaitoRUpWZVhFS20zVmItYzdZLmck
RkVVV2dBbUdRWEFJVyppYWVFPVpuPVpyalxISWloVFZKXFRyPSEjUVVccFxKOl1KYHNeVyYza0kK
JV9gcTVjYUx1bjVqZSl0N2d0UVxuVCg+MEpgR1Y3QkNBKy0zczdLQEJLUVRlNltZZT9TWV5BaUVB
XkhlMVxUJGAiXlojOzFdNUYuazFxUUJhSChJcDxyaHBSaHJgaWIhZicrMzUKJSNMdWJOO0orXEo/
UT9WL0VWISwzbFIwLm4qaCFoQl9yIj9UaHFyYlhCNmkxNCRPPjgvbDE7S2NoWHRFTStcVDlCKiYi
cExpVUghSyw/Z0RfYzBRVmBMRyU4K29bNypXa01PIiIKJWsxXj1ROCRsJSxKSTQ9U1xtWFFsJFwt
XmBHbDxDQjQkVDJqWVkhNjw7bUFsYWhRdGYkNDdoJkNCQGVkVmg2ViJIUXFrJklVc2NEdGpaNio1
RlFfJjBKIUklaVxwIikqaENrMGcKJWxJRUM0anAyOWtnJGpdJm1RNV5ZVm5WUWxORCFtb21wPyUx
YCpVXFxqbG9XaV4wXjktVDU9TXBvXTtfOVtbWVo2R2xMakcoSmEyTVYiQzNzYzM2aiJtbCtiMFFN
aTJWP2k7Xl8KJV03JzspVi5kb19EPSxiU19uJ0hGXUdtVFY6dFxtJUFwaDciY182UWMwM3U4P2o2
IUtBJT4zRUVZJFxrYXJsZzU6cW1OVjIzZFhodVFCZEJqS2dhQ25CP1MhWWV0ZSFNXj4vOCcKJUdE
RDUhNClzUVAmIXNHKF5OJlNqbT5QdHNJSl9UVENSOmcqSSIxKEQqTGtXXj0+RlomY14lUmhqWj9B
NjshQjw0I0VQQGZraTpkKmV1XGFoOT0xSUlHRmghSC1OLHVScVcqTC4KJUxcQyxqWnU0REFNcyFi
UjNPJkIiPEYzREtZYGFTa1FEc2pRa01ILkIlMzdXWV9YLDdiPGlvWjpyZWlHRWdsVDFfcjo1ZD1D
SGwlM0FDZTcmS0RUaCs9Z01TOihcPyp0bDBmIjsKJVFSWXFCSS0kZyFdRCFDJkNyRkNsVUdIL003
Wko8RFw2a1I8MihGbkBxOC1KXmAxdFplLHRqUlQ9MiUidVgqal9gQVI/bkZyclJgc0okVHMwUEo1
SkIjdDB0RENpITVhS29PTWsKJStEZy9gRU9JWDZbV28/ZTVqPT0va1BvV1EhME9gRmw1TCNWUSVW
JlkjY0UpL2hJTFI4Vi03dFNlQDY1YGtmVnRMWF1pKUlmOmwhRzZ1b2dyM0RKN0BvQSknVyFdM0hc
KUltQSYKJUllXzRUMmhFLUxNJHRpWTg9aWJJTSE+KkRNIVo/STQ+PmhdaSZmKV8hKC43RikkOUM4
T3RdNC1Ic2hWamsoJ0RfNFx0QCtHVExVLyNbLktcaCIpTGJmTDtlRlNXbEUycUI5aHMKJV1oPjQn
ST9kKlxwUzokMFpvOyUkLy0uWGFbTCdxZ1QyYGlvK05uMCw9MkJmVzZdOyI3XmswbXE7KmFcUV9k
PlQqU09KJFNhUkxMVEFMTDpWZ0NcZnUmRV86aUFqQFJkODMhP0wKJTBeIkBXOidPVCJVb2N0aCJ0
I1JVOz1rJE03Nz9TZVlfOE1zV1c1PFcmR1ZycEtnNzQ9K1ozXT05TkdrXCtVT2NwWEBWT3VEKyo7
Ij80Z1BtOGpvOkUjU1ZRPlY2bmsmVCRMaV4KJSVPOy9baTxBLTBqL2omWjVWX3BWalAnc3Ercy0+
L1csLW5BTm40WF1MYyVQb2NqNUxHP0pvTGhKMjg8ZyZIMnI7MD80a2tWKFY4YVVJNzRgW05INkRN
OEhrcT1sTWpsN00oPCEKJVFEPlEqJDNGYWFMXztqW0MmZUdyJ1Y6P10xUG9lYiRPYiooIU9KUG9A
TSxqPzBgQGAtU2JJcVpicV87M2tUPCpAJHA0KHJPbDQzZC9SaCsnIVtNNF8sXyUmQ2gwOV1KXyJq
XlQKJVA+a0plOF1mLi5AWnRRNy89J3ExRSFrKCZHY2ttLkFWbE8jSlMsJSpPREtRcidKX2BjOmZO
Z0pDZGQ9biJGXyc1QWQqbywmK3UqW0RANUFVM0YjWipBV0QvMlRRVDhJQyxIOUkKJWZgSlFjSjNE
PTgscDxmQFtmYVpGP3FFZzBNdEslTDc9PCMmN0pjS1ItTkNrK0BRTEJJJ244ITZHdD9Za24vKWRd
IS9IT1ovSi89SjhHSGJpKiJsLU5LZkdRSCFAZ09ZJzQnMEAKJUxhVDVAIXIvNlk5SmhWcFY2VkxI
JjlcR1ktX0RLKVJCJUEnQG91aD5MaFFwYmhKcWZRKDZjYzA8Ly9fJy1zN2BgVSonc3EvMyZuY2s/
XnNWTTluZCdBRUpQIyU2JCNYSkRjTCsKJSdLJWNYPiskZVtDXVlkXSc8RmtOcEs2MTgzWzlKZFI9
XkAoZSgoQE03KmU2LigvUTU1RDUsIlRFQjkoQ1omLT5jS29yZSFHRjtPKCgoLV8+LU1laVA7VzRT
ZmtqOVtcIjw6c3EKJS0jdVo8NEw4dXUxMEJKS01GQ2MiOVsmZENeXWJTXDA4Tl1HWzp0WVE5UHI5
ZSJfRFEqalQ4UFsoQyY+K2FePi4zKUtGUTUnYTcuW1IpRm5UUHNvTzxXIk9xTCNPMVNTIVMjVyIK
JUFZQEBIZzpfUCJkR0cqcWE8QXNpN0hHLy4wYCQvNjdVb2AqWWJERmtBNzo2XTtiVDgsM1hoWiRA
WS1OXmYrUVVsK0lYO25XPG1PLEZeOjxSSkspaiFGRG9ickRKWF5XN2xoQW4KJVI/M1hARTQuYlRX
by8tR2h1cFMnZ1FiLjNCP1dBKlFBS0guPGxRc1BrY1s9OjtxTSkuLjNPS2QnLFpmXi8kcVZnUDs8
WF1eLFVvYFBcPjY0KkROOzBKTEo+S1BYW1BeRDopKG4KJUdfcSsnNWtdcStLWFFzPkBtPnI1LWY5
WFQ0LXJfQVw5LDR0Y2xpaUJvRTsvJkgzSVFLNVxNWTRTQGljbkZNJkxrcy9uSUU6LEtgYCVGRjI2
bGBdYUxNMCgwM2g2KSk3VG1RUisKJWFYSE09cFVYb2dyZCghRWhcPFtQRkQsZlhKYC9LXyU9Si0y
U3U9R1tDNi0iUEFobFApbWVwIT9tWENYS2VUYmErVWtcWDlgL0Q9bDovL1VtbXJDOSo1SEY3ZEwk
VFdFMmcmZmEKJV1BS0ptYnNpNUhHNSZXQFg2dVhQalxpaz48czBAR0ZoNSdRO05LclhQPG1mPWU9
WTsyM1JSU0duaC1ecEonYWtHVjpyPS1Sbj5obFFWUlJKTDZGJUMjIiNQY2I6PitMWUozMnAKJV1x
XDFNXml0JDdtZURxbVs8UWJqNzgjbTNeL1FXL1RXNzMqP3RwOVBlTip1T2hGaWpvazhoaUxZYUMx
MC8oOm5lNWpmQSRkJik4YDxATSpDal02KjJQVGZoPGJJKWMpNjY2WV4KJUZWTksscU50aDklNHJx
KzIhU1gyYVcvOTAyTDg8cV1DT3U/bTlBMXBybyNlUj0kSSJgX24zTVJROiVoKHFRaCFJbytUTHJY
WVBWZl1mKCpgSytvaVszTFpHTFwjOS8nNkNVSDQKJT9JSiYuR08hWUwvUVE2KGMyRk4rcjVpajEx
J0AsO0pbU0xTMW5oXEFnaStKc0dALDpdQStfYG9SKE1RczUlRiNNb09sQG5ZTj1wN24iWT80ZiIj
WFBmcmsvK1Y4SzkiPCpqZi0KJTJ0dSg5ZThNYCFvWltwXWY0PGE8PWNNWTY2L1VoZSpaVWklQ1w1
b04zcEJHaVMkZEVSLWksb09sY241RlVYYTtNaz1jWzddQUxQdS1vJCxCa0hiSDA7LltsRmdyaT46
Ul4nYCoKJWNQRiFBNTk+ZCglXl1CTXJKT09MbmFFcHRrTUcoRlNxJCI7azNMQDM1OTtIaEhfU0Jh
RD5RTmJKOkRkcykpXmtacVAhVjw4J1lyZSJZUV9YXWEtN1RcYCpHQWRrS2RUayxONC0KJTVDQSJL
O2d1bHVbcjFUcyRAYjdBTmlpPF1ASVMxZDVCMU1ZKWppPkJAZF02I3BqTGtGSU8tcHNdNkhnaFtG
ImpeK3ImMkc1Qy5jT0JDWUZvQ1oyPmcmcTpGTlJlayMuMGtpa00KJXEoS1phNFJCXSdIaCJdV2dt
PU0iSFpURGc9byNyV0VjWl80LiVLcTxKcSpgJWJeOU5kaGAqQShyaSNpR3BnbCoySmFlQCdScjpk
IilbT1IzUFopdTY/VDdJXm1cUHU9QDkkbjQKJSZgTDdxWjskUUNwY3JgYltJcDg2OUBrVzQlaHBL
PjdlaXFCXFtjKS9VZituOCxANTRXMnUmYjtUaGA8LD9kaWA7OHA6JURbL2csLEk3NXREJVNvXik9
QS4kKURbKiU5bTVFaEYKJV4mOnFpXEVoWnREdDtbcDIvV0YqcFkkKGMqVnMuX0EjWU1fLyI1YjYy
YExLVCtySlg3YSp0LS5oPVBIW3A8aU8sbD0sRUcuWXVSSlA/cGlqNTkuQFAwXGZaUG5CX0MzSkRZ
ZVUKJSEyWSYvOSJKR1NwSVNAaUY+IT1hUVZkLGVIVlBXdT8sIyRDXCJlTE9vbiwmVyNIaU80MlV1
OV1RLyM9MEA5VFpnSEhwSiFsU2EkJDwmPnRFaGslZCxTVHFWYWUzdHVyaF5ebFsKJU1lUjgwJmlm
TltbcTpUJFBIKy0vV1BlbHQyS3FgaFo5Qzl0SDhZPlZVQWB1P2hORDJCQSg4MD0yLF5SLj9WZjJL
a2UoVGcvTWhKQlkkJywsQ1E9ZklKWkRYZG0tTUc+VjdRIikKJVtzZSJ1JVdePzNxLDJUbj9FO11N
TF05OSIwWGo5ZWIxMWlPSSs2LF5tIipuZDovLy5RTl9eWGArXCQ9aEFZNGYsKUdRVUdKajRMTiZZ
bCh0bExkJGlGdGFoWU1oRz0zaVs1U1cKJVFHJT1eSHBHcTVUL11GZV8tXC1CXUUoN2VoOCEiNzI1
NCU6bT5KYCZxO1VUSkJXSjpXaE5sPmhqVDM4Z1ZhUEhaSEleQUFHPlohbFgzI25YWmZoMU1tOEhO
OHEvIjplZyFhPGwKJTJRI1hGV05KPkFgbWBDN1A+NyFdXkpEVikzWFQ4dUBqZlBCPWVMQHBSWjJV
KTddUGkmbkgiXVI0SChvKDlsN2dlTlJvbmRraDY1JClsNEk2KzVcTD5sOjItREBqTDZWMDFVL2EK
JVQjdDVfPigzKWxLQENRNWshW1xKUWtQPF9wJjIlWW88cj1cbUlVWTNHMDY4W0NqXnJkYUNEZCZs
YypNTFg1MlFzYnRsaVtda0twTWVGW2ZyOkYqKTE0QHVEQmMuSUFQcFg2cycKJUhRYCgiLDIkLlBw
VCM0aWlPWlIrZzg9aUJodS5gLltXVV5bcUw0cEstWydYcS1PKFI3aFwxQHQmaklfVkA1Qy1xTV1V
ZyZTYzxuVXIkbj5jTXIiPExkIVNrPUc6MXFSZzx0QDgKJUVjLGZsZVtzUnJJKG5VXylSMl0uYjA3
dD85QkA0ZU1uUHE2N3FfR2ZebDwyQVVYYFJqMTROKWlTNTMwb00wRi4rJGZaTC4kZV9SZVMlXjZs
Xm1mKyRhL1lRJGstU3RpaCFLW1kKJUhSUCRcWUdzYE1cZFBiMGc3QmFNJTs+XUM/XytwS1IoQnBk
cSo2YmtwMzspKV49L0tpJ1xBQUsrJ1U4azVdZyJQRjNrMEhkTkNVJmNRMWEyamgrP1IxLT9cKj1T
QyoxbGFROTgKJVpZLUpPcD9eWUREPmczZjRrbysxbC9TNzpHT0Y9byhMOTI5R0FgcltBR0pac15I
OFEtOj9yZmlYZD11IUQqJENbPy4oXEVrSisuQDJkY2hSV0MhblpNLEQyIiUyKVAxSG1vNmUKJVUj
Oy9zaEQtKTg+QiE6MCFYLlJxIXMpQC9yUEJadDlVWlxbSGhlZjNvQFouX2lSJ1ozMjJTPVNvSVdh
OTZTa1F0KmRCNWw8OEsvPVJVSzxrLFw5NkBdQVQyZSN0cFVsLUllWVkKJUUiZlNuS1YiblMuZCRq
WVBORWpwMjlHWSFyXkUmSyEub1ZgQC9XcSlJP0wxVyVmTVFcIUQ8ITEqbS1qRGoyVkZaW1wpYFVV
Q1tzZTZXXlknI0FRcWUvLGA+SkkkbygoPVhaVWcKJS9QQnRdRSxwWjY9dDBCJkhzZVZHJGRgZlok
NVolYCcwaTZuLGtOKSVlWCMzOWluTGIrYzo+RGM8Zl1cQVUxWlFgQGsycS5RdFJbTDU6RCk2T0Zo
LnUhJ1p1QVdiNmxRVEAoSEcKJS01K0dkIWRGSmhkZlArJVRoQFMpLitwbjMiPVJCJnJbOjxaIXFK
UTJDXVU+ITZEInFxVU1hSl0kcXVIPEcoX3NlJ2FzdGE1MDBNb2xtZnVYI2o7P1M7aUY+KW1RXVRw
K3NUc08KJVtBOSVlTTNTLzAoX1NhNE1iIyJKIS8pQylaPSQrTGY0UTpXK2BXSXAyJ2JIdTY8QFRv
RkAvWmAkZiFbVygzM3VhMF9pZSwhZHRtcWInakhvXHBtUENmalNGb0JQUGo5ZVNYX0AKJStTOEQ3
T2Y1LmMpY0Vna0hBVmZmJmBqP1otdEVucTpeKitrXDtzSGcuRFF1LThtYSE0IkRURlU7K29ALzJH
Lz8pKS09TzBFKCImYWVnOUZoXnVbOU4rJlwrIk9TVDw8SzFyWWAKJWEjWkBZKG0lTm9XJ3VsNTgl
UG1YcWhkMEA4aUZQKzd1RzxcISRPQW85VSVwS0ZZaFZtZlNtVExpOFtbRy85WzMpKF8tT3RAbzwr
WCRqMzpZIWlcO0NXaVZSXV4tKFc7VWo8ZiwKJTUtSUVDQSlUNV02bSdyQjY/JFZbIUZZRmMjanVR
SHBrWTdDcEdbbVg9ayFFdXBIVDl1MkhlVSZMZDt0JHFPYVUjTXJXT25hQzplSigrKUxoJU0wSSM2
bTUzZENsPDViVWMtMWUKJUlHbGJISSNSWDwkTydhVk0/YlBWYVI5WFU3Ym5wMiEpXjNuY2ptIyU9
OWN0NT1MTjA7Sm1NOjsta00sZTVTV0chZk4nLHNsNkVBMlBiTF8/MXVRSj0sZ1xrTF9QKCdJRyJl
OEIKJTloK008JTpFLC8wNkxlL1pqT0UhY04jKmQ2WTNaOT49L00mNjksWjhMRGJyRFNpOWIuU0py
Pk1gclRvaUBdUTAlJnArcUEtJUtBV19MazVdal1OU1dlXTIyOVo9LztROTwpNWMKJU00V2ctLm4r
bFUpQG9lZjhbUCNbV1VaMDVXJSYiKF9fYDNkalRIJ1FKJzNgZ1g8X0FtM3FHTmAsPm1jMmtRY2px
Sk50TiNVVmtlZUpRLFlOb2JyVlhGYVIocSZUWXJAalFBIVEKJTlDViRePCUlZE9ARWInMG9hMkNG
YjxhTnNuT1heKDciKGFORD1WbFw0ZlNMXFs2IXNVUzpMNkJWJzc9OC4yWChaOElhL3M5ZW00XmRN
T2lSVF43YGQ7IlNVTVlMS0I/OEM1cm0KJVVQSClmKz0iZV0rTDNPaC09JlVYWXVnTzshP1BDLTVF
J2dxKEUzcGtpPSdOUStKI0ZvRilga2s+VyZNP0JVKDhWPXAoTzc7XFBQMj9qZHVcSz4zYUEsMGQh
SlJnIyIwRCcjZWAKJTBGKkNFInBQUj5pZGdKUWA9Vk5UVjxAJzlXP2Y4aU9pPkR0XnViQzcxVSkn
OUVraTtOJU1ISTpLbzpaIVFbZiZCKjJEbEBWSD0qaV9ULmU9Yi8yPl4rPF1RUmZqPCtYIlV1YzcK
JUZTK1pXKm5hRmhePTVJWClIaCwmSytHcXFNTF9QRUFzJihvMkhrVj0vMS1mNjtzaF07OiVUNjEq
SSczRl1CaSJjMiUsK0w1W1VtSXFPRDYqLTgzLks3QC5XZU1dPCU6OGpMazIKJWU5OT1QJDJiJCxX
YDgkZyZhODIqal9DMSkkc1swKXFoXj1LO1BORXU2anE2TGY+Rkc4I0k+X2AhIS1uJltMaFlzVTdO
UiVGVVghYk9zVDZPUk9KaCsmTGZKKGFoSHBfPFxzZTYKJT4nYEM+OGpXYDphYChEO087V14mLTBR
NCtyJU82NTkyVyRPPEtCYVskKnNrSWI5UzMsJ1szQjhwLlBjLix1I2lQZ25xcElUQCpZVmAiK0E7
SCEuNEFMPCg6Y1hPK0JbJ1lNZjQKJUwmRCtcKWNrUGEvREAqUExgLGgqUE5DXFVXImN0aTczRUI8
QlwncCIvPloiRmhJPV1LLDhxaGcqQ3UxWFcxVC9UPUNfTjxrY0krbmZyWFRMOD9uPFwpN0lZW00m
T2Q7WTVpX3MKJW1LaXJvKj05V1AyIilYQyUrTzYuUDFnWlpGQSdcUFBaXWopZzRcZVFdY2hjIydC
YTY4YS4wKHBQWUJOUjY6b2dOa1siWStHLUhFZSNxbiM7VG8/J1A/P0RJL2siNFVOImM1TzcKJWJW
LXRpSmsyODtQNDw0PEQnM1RPXSJIXSZlL2YlMEZNUSZAQE1haigsMGApdEVlUzxENjQkPjxbVGEk
ayQ6KGElIzg1N1lVbi1dTklMRz0lP2MvLz9ePi4/QS9mVFNPQj9JcVcKJWxQRnQ7IkRDLjFMTVRs
KDJtPmFGTjNZM1AsZSwrOC1Vb11WJFRvbTEpXGxObiExXFRSI2w+dWEsOzhbLDNHdVs7TiJTaTY5
K0hmPU9SNzIrT0NTKVY4NEppNyctTSQsRjEnQCEKJUxfOnA3ZCVrLihCV2E+a2BcKzlQQjhUUkhL
ISRkIzZUNi1ZN2puQ15qSltxb0BjWDdPSl01UDk/dFVbdCU5Lm47PUQyWTBUSVAyImZnZjxtMWti
RkNQaWZtJTwwQkc+KGksaUcKJV1TKm08TyphcTc5dTgubU1FL3RVOzlvQ086YWF0SkMiQiFQSGBo
VVtpdXBgKCcta1ttNFJYJ1VbLjtaSW0rQl9qTSZKVzk7M1lgY0xgT0RCKG5zb2dNJ2BYXDZeJFlT
JSxSRkEKJVNPZlw+OCFSKyFMVGsqWD8ibGU0MjsnWz1XPV5eR2dXLWM9KD5zWnNAakBNKTw+SWE5
UCk7WjRqUlFpOTpoaU9FUFhyLT06Y2hIZnExNio8TG9KSVUkaiM2Y2BoPiltVTFAQykKJSosLnEm
LCVKTVMlKTJkbWA9XXIvTTdiZEFZSD9VZ2o1MUs9LnBJazdfQ2VNYV1YRUlVPEk/dHNJNSI5KiFS
OHUmRkVFYV04PjtKalJjSz5ULEopP1VjR0cqPWN0Z08wMTJoalIKJS9dKlRkQEF0JWNVYWo1NFRo
dDczJlgpQz8kN2RDVitqJz06Q2Q/VypEMFElQ2wzVlNtQk5FazMjVlNNMjZWOFs/Pj1eI11GLUop
KTlhR0VpSUYmMj5XQzVZSyUsJFpQInQ/bTEKJV5wInNubDY9V04+L0JWNm1nTzMuYFltLDAiWGtp
KVc1aV9cKCRYQCEpTidCQ0oyUmVzQk8yKjgxNEdYKUNzWVhGTGMiLlhjZ2M6LSNgMF5CcVAzIXJw
VyZwSkQkQzFSY0hudSwKJUBGTzdCSzU0JWRZdC46WTljS2VtUWlaRWAjMUJOVjMhNU4wJlhTUCNL
aSMwJ0ZyNStJUTMoWTBvYHVwUV9bTU4vNVVZIV1rPz5PPSZWQUgpcHAvPC4sOW5LV2ZnSVFBUXAq
V1gKJTA6MTcmSlJBMS1acSgzI2ZVIW08ODM3Q0wtLywqclpJYTJPLG90ZjdWP1o2aGRxS1ojLWRe
WVMmYjBHOj81UyxWZUk7cVZOW2JwTkNyLVpMX1dfLm1JUlchJl1hRzZfNkBxU0cKJSIrRzVjLldA
XVQ9O1dwbE0/W1JTNiFQPlE1Z1l1akghNj9vbys+anNUblpNdShbWW5uX1MmRTFiKCo2OktfVlY2
KD0zNWEibVhqPTpNXU5YJnQjKWtnMkoiYigwYy91OD1eTUMKJTxnc14yY2VyJlIvKWQzQ150ak1i
SVVJIzdILUNyT1IvdCE2TyUtJENuRldZLipFLF05RFRFOFhfb01ocWU+NyNTKiFMa2drOXFuLy9i
ZkYoPm5RPzhAaGIsTzRXUDUpVD5JRFcKJStscVxAXmRnRTxQISJjUGZSbD1JWzMqVSZHXidJK0BY
Kj5RM2pJXy5Zby8zP2lYNU0wcSg2RSJFS186SipIOClIYThuK2EuTkttMnJMYXBoLjVpV0RtSExc
Yz86TitwMC1gZyUKJVhRVDE1PVdHOmswdStoRzJkbzIpVzt0LmhvWDNYaCVeaUNZcC9iZW0kRS0p
NVZOKitGMmxRXnJuM2lLaypuWipaOUpcdSE4NEdFVTNqRE5iP2RWQURGLld0OTc2I2xMVkooYnMK
JSNZZSdYSyI0MmZRUmI+WSgxTS5BYDopLk1WaD0jYERGQkNSKjhsLnNsZ1hzLStSXVZqMyZoKl9U
VEdRQjkybzs8IkheR0kqPiJrJlw5KjA3b2YkLDpfKjtHWUA0dUtlS1tXSj4KJSglPG1bSStHPkU+
QS8wPW81cSxLNkdTSyMpPkZbSGtRUjlJLCNVZkRuWUQsMVpZMkkqK2JfUCdHM0RnSk1WWDM+LC0m
QiVSclFlcVFMYV4/bT8oIiYqQltfaS90TEkpVEFTYCkKJSQ9Q2ZCXzg6J0NXISItSDZvSD09JEk5
PEkqMC5tXmdqcllSL050X0BqJD1LUkxRL1suKThRSiQ9IUBgdXFLczQ5MTMsZ1BXVCpFJSRnI2Fa
JkJ1MC5aaixHRjYqc1ZgXUpwLzUKJSg4MUktTnI8WHQhaGhnbT8wTDw8SkJcdDtoZ0hGKFF0VHQ3
Si0wQ0o8OSE6aVY/ciU6Q187REBMKjM/OjM5JGdhS1s9NDkrR0FFLjtRQVRpSzBdZEtKSkZYMEld
Z2Q9JjxTaTcKJS5pc0w3Vi5HKTxkLSYyJFQuRmgzIUxYRyUmVHBoJCM2UmBebz41VXFCMyUxdD0i
TkFyK1pAPS9VSjs+dDxeTmkuUU5mLEdVOiZZV0c5azRgI100LStnLzVZQlZeODEyXzpLPW0KJWhQ
YWRPJzFWViRMY1RXY0RNWGI7Vk49M3VoSFFmTENxc1pPLWBUPmVYXVlWUFlxI1FfZnRZNC5RJTNA
QCUrJmRNXGkpdDFBLXVAb2hbJklrMV1JRFtLMmwxKiYsbGNiPElXL1cKJWBhOFtUWHJqPnNRJDMq
VHMvSko0PzRUOGErLCxrPj9PclBubnMkUShwW3RWL1t0b2pQP1ZfQTFXXllfP29pUzMhKS8rLVtu
WmpYMGFdaUYzXlImJGBWMGhaSGFUb1thP1Y6OSMKJUAsKCxuLzUrU0xwaVZnMmQ5PUxmLjhVQklP
KmNzJWUhMz9lNUcxdFZlUGxrZWttdW5VNmlUb0JYMUE4ZjtsTkJbSGRqUEwnUV09IVQ9PGBjcjsr
a2VSTXN1WzpJW0BxKiZfWiQKJTpfQSQjOGx0SCZGP05VLVRPKVU9WiM7PmhVVWNwaExgOlUsL3FJ
S2FSS1snJm9GYUFjJ0xGWEZjWDRHYz51W2lFO0pwZllxbE4lUHIiMm0/TUd1MkUucmBvJCYubUIj
YFU5ITMKJVVKYipqcUg+WiRsZXM0NUQzRURkJiNFMU1eInNxSyczMWhvLm1VdV1naWQvXV5mNz0x
VVJYN1ZPV11ePUROK1MnaSsoMERiaXUxT0BBSE08KToxLihDOj0mYVhdKTloVixGRk4KJW9oJEBu
XVEwVkNjXExuZGMpTThEMygvZmBYIytxN3FdaiteInUsLywyNT5yTlI2QDNqLzw/Xi9lO1Q2Qidu
M0RhLlRMOTZdWkcoMUVeIm9dZ28vUW4qQjhvYkxSL00+ZnNKOEUKJXA+VyMkXkpNXEoqamJJWjku
ckkxJlQ4JWhGPXMjLllSalIuTy87XGplcloqWUdMKCJIKzhGazxVNC1DSj1XTyVgbnNQaTs4KEpY
WmhPJW5FV19iK1RLMm5dOm5GW140cCRDREgKJTlbIUZybzFxYSM3I2EtODYkQ1MhMEoiLSdUQTJU
QG4rZ0ZTVVkya01xdEJKITtVRU1AUDwlZEZZOUJeU01mcCxyOEMrKENiXldzbk04Sz9kRFxpV29w
SylfOFguUUNGRVNXU0oKJV5ScC9RTDM3azgmMSkhRWo4QiE+bGdYTCE3cC9uMjE4UmRvVzMhc0VO
L0hfQCpeP1oyLmIvU2ZUZT9yJG11I09KSWE2SF5aXk9DNmU9TyFsby4zJnFgZUciMURvL1NsQCFA
JjsKJS1Hc1NuKD0kXy9RJXFXPlFXPnE8R3A5S1FbNConaEQmZWU/aFcoQzA1OSslZTtUbSIpW0Yz
Tyc8cUFRP1FSXmpiXG5MVmdaZSRnYCExJ2txXE5NWWpsRytVLS02Rjs+cVNTPygKJS5zQEpvU1U1
S0hOZ0BlTmpeME8pbEcpVDxjYzhgdU1LR1Q7NU1XUktKaDpTRFVSW3BHOGQtREhXbDlpO0ouREY1
T0xhJkhLYlskcEMjJCxSZywlPjZJRDk4SCMxS3FeRmxMbzoKJW5cWl5AXylBajxJOyVwbiwiQTxz
VmNxdDwySFBlQ3JINS1vY3FmWmducUc9bzBwQiFdRjA/V0wvOXVVckllQCJYVSRQVGZTZWJgKF90
Iy0wTzVKaWVIRC9xXy9HUzFkV0RwN28KJTJJMlRgRytcUTojN2YjazdvXEtFWzQ6JT01XzNVVWJz
XlhqZVAzcmYpYGpyKk1cPDhXJzIrZVFjIXVnN2AhYDM6Y1ZrWyw9NkQuQHJPb2peamdfNkY5aFpc
J15SXiYuWVAhYDEKJSdUT2kvX2w3Ji4mbEpEJ1xzVj8mb0I8KlM7SDRALm5CNVYpOWA9ZS8jPylx
Z0klcTg7KFVqZzozXzshUydSRD5ESVhaLVxwSmNvIkVNMmZwKTNuKWdNQUsjRj9FMyEiMCgjKVEK
JU9oRk9qSGJEN1hkXlNyP1k9SFUwYGxEUXNZJjNOcThzUT1yVlY4Mjo7IWErUFZQbmFlVFFVYyFN
dGBMKWgiWidXclx1aDRndT4/NSw1JUZqODlOXzMsczhZUlo4TkAxQ09mQEgKJU0hcWAxYScxRnFr
Vz5iQ2ZvRFVRVWNMNSRzNV9uPUhKRTgnUTZMLkFuUnErRytTJVFSL3NVYTJkIiNqZFA5OEJKTjZV
KSxoQVhjZlZxXVNLYVxYKVhdIUQwbDwyV21abEQvSS8KJUBVLXMpOzVyMkpHT2klPkNVYClRSGg2
JElmck1sNFtySkdIN1lKYkxrbkRDPWcmJTk1KjUhZVdwVSwpKiVQO09cQy49S0ZkNjQnbXA0alNH
RCNHdC4rU28iWTIsJzo1ITJvM2kKJUBzTSNsXTQwMSdcKSxlYEdQKGxrZnFoV0grVTljNW8mPGpx
WGIrWCdqQjBOQ0k2MGk+YVc5NDA5UFZvL1tjY1wqV1JVRyttdWcsKCQlc1Z0YS5TWiZJSiQ6Ji9q
NDdGbkdKJD8KJS48a1gqR1REYyk5NT1CRW5iPCM7cVU0OEtxNzU7ODgmK3VbZ1VpNiE+aHFKZWZb
TzFkKnFnKXElPiUnQV5eUDA8P2M5bEE6RTtISEghRllMVkkjaEFaXT01M3E4QERGYCk+REUKJW1D
V0FUci5uI1YlbzM2IWhtPD5LN1d0XTNxMXBkdT9BRzhZbThkMDBVczMhPiNmYlJzUD5eTChHI1Ql
XWpeb0xEcWE4S2tmJ1xyYSgvTGteMUotXEVwXnNVMlwnQ1BzcEwsIysKJThWUGhmbiw/LjxsZzIz
cz10RmxxOWc9TGJxczlkLj4rQmohM0hrdVE5PkBYc0lucVo3ZD9YXCs/L29fS1kzWHRuZUAxZzo4
WFY6cG1QJTRGMCNzUW9tMVY1SGVZaE9XNEBtKSUKJW0lJ0c7LFNCLVkwQVBKRSYmIkhoTjhoODBI
aChPPSliMyo6Y0IoT3FRSEBuTSlATUFRW0JrNEclUCFfdVA+bnJwQ0NhZGJiSko4YmRgTUdMMmdk
SGI5Y2VSSmheRWpKV3BQU2UKJT9kMUMlWCUwViVJV25WdHAnaltYUSVkO1Y3RjBZZjJLJTlTRjY+
MGVHKDRyMG9WSD1eRHIvLVVqaWk5dUY4RThER2MwYSMuYzdVXTo/TzZPRm1bbmJmQUxxdGY1S0hX
SCxKQDQKJTtxZ0U2LCglcUI5VHNZUTM/PkhcQiYrdHNhb0IwTHBaU1BERkIkOFsoK1QhTCJlTW9g
aCoqO11KTzduVTJlcHNPNychMiVSVCstZ2g3UXVBSHUjRF9ILztINDk9cSc4N0N1ZSUKJSd0QUko
O24rL2pWYXRKX1NxdCxfXkYtaUFsaTgmOm0oPS8mQUIuJEc6Q11EUkk6UFFhNE4+QnFWVShXOjVU
ZVlsNjokRFIoUyVUMSI7OEZpX15FR1wlIzRodFdGbXNpVnMuOykKJV03LFNTZTE+JzNTdTVpVidL
Zm5eSmVIcC87KEAsPypHPGNobGk8L2wvaXIuKSExW2tlazcwNyk0U0kvMEE0ZD5vUi8lNjMiKnIk
MiRFMGJecSMiQj00RyZkSCxrUVZAP3Ioc08KJSQqSD9lMSFgNSJMYG1kLDslJEteQVZPR0FoXWM8
XCE5P15RNFtPXSZDYDFYbGkkPlhyTy0iV2A2TkVvYUtJZFE2TkhDMSllJT5aKjcwcWgoa11LLjtd
LWFsODstQTxTR0JCPiYKJUhZUXEyJDJWLzokQGFjYTMuaCdVYDpSX1E8Iy5SOyI6JCFabjVYUU9I
L250PSVSRE8oKjZFdVAwJlZqIkQyMC1uSWNyLG0hbFRrZVZcKkRDZnIrLVxqcWFXMDlGOypZMTUl
LzsKJTg3b3QkbkIkKS0hOU5SM0E1QW5bLiRJY1AhMTZAKTElYy1nVzEzbDlDR0MwZCRPQS4zJHBZ
cHEnWEFQTTUhJypnTEk6Yy8iRykpLVEyUiUlQSdfXGhncDNHJU49YkxfLTM2K0QKJV1GR08+NSVA
TG9lUlA+XkpBQz9vLiU8aj8iUz1BZDYkLC1PS1NGIkZaJCQqcTczakwuMzxwM2xedGRuTD5fN1wz
PmVqMmZSVCkpcFAwQz5sZClST0EoVywpaU9PLUY0IyFhbisKJWwsJEYzYmltTiZZIi1gKmklNzQs
KUZTQmpCYS1gOTtfWXAlX0IrLFZBZXFub087VUQ7TidkVXNPUkZtKWNfSmlfcUQnJXVCSGtiVU84
TiooRSFjW2UuT1RcSyg+OW5GXDdJIVYKJVpjLVY/QlQ8WU46ayJnYkVHcDBQKUEzWXUyNFM2YTgh
YkRZXFI/JmYoajEwQTs3QWBgcSRMJ01VLzEvLl9dRjVfZUhdPyFeXUA9RTFNU3IoRzo4RTRLOllH
O18vXzZGI2Q9clUKJSNVSkJDXycrdVtlPm4nPWVNLEE1LXVNcGBKUUUhbUB1ITxkbWBQbV0sal5j
ISw2K0NKOW9eaz0iPGlFcVleNlNPVXE/bU9WXz5rVidWWWNpS0V1Jmo4QFRQWWtEOjVvRW9NPS8K
JTpQTSYiS1dyXShgZzE8T0cyMjN0NkZAckNdNGw6TlZlcTxKJFErQFxhaDtLJSVSRzE+IlowZyZQ
bjkmc0VXSGo0R2duYDopPVJaO2JMaFkyPCFUJU5RcWJKVDpFLT04QydmVXIKJTZ1KnQ9PHNfJihk
PktqSCpRSGZpLXBjPCxXIzZoVSF1XU1wK0VHSl4/dGcydVhgWWhLVVVTYU89KVcsckdhS1xrbmYv
QywtcnNnWVhgUFA+am9CTGlNZU01YV89JkBSLFRxUisKJSpjSCIrIm9NWWgyN21uNkZwI19jJGQ6
alFUL143c2EpREN1QjQ+aEkqc0VuYig8MHAsUiVFSWBgT1xdTTRjJU9tbSFwRSIpLiF0M0JcMC9E
ZUUkMyVpaScnXVtVQmJxOERvcVgKJScpMjVWMSIqMXEoSy8mKExsWiRpRUEtQ0xQJ05gJWk5J2hM
ZERccz0mOFVHNlktdVAuYjFbLmdNTj9yOitgYXVJMW47SFM8KyxnM0MhLW5kJVA7YztyNVAoIys5
NjcoW1Y3PjUKJU05SUBYSTNRc1NUVSZbLD9fMXVjUlcyOy9LSUJ1cCEibmVyX0AtaF1tI1RDRSZC
b2M/Ml5mJy5vYFhsNWtqKWFaTildT09ZVCZRVEZgMihiTGlXKUA5RjUwUT5gUjotR11EJkIKJStK
YVAhZ1Vxbl8oMlBYM1prPE1nJjszOG8vRltXPFp0L0JgOkpobi1HQjlpKV4hKWBBRzRmUGwuKXIv
NTdbcCtCXU1gZmw7YV88MTBXWmdaJk5eSWxac3Q6S0xCPToqalVWajIKJWZZQWRASjE4KjFZLSVq
IjZZZGVRTzRRIy5KNUNnJUBJJE9cO2Q4RS1AS14xRjReaykrazhDInAzWGMtWkA7SDNGU1FHayxu
Pk4pW2klWXQnOEhJREhqcTVUbTptIk5oS2ZAJSgKJUEtZyNGLW4sR0pWLSJYb2o1MXV0US5xQyYo
X0JKWTBfO24oNmJFR1gkKSw0PEplNiQ0SjRuW24lMy1XYFghMz1qLEBEVUY1KCY1OE8qXFFuQ0M3
MClVJDBQNDZIckFBZClaRlsKJVdlYlwiNmopQ01LUXFxT0ApWTM1TjM3Z1ZIUG4nc0w8LTZDWitu
NVQhI1RuZmonRmtXN0VcUTU5LCkiLCcxIlVaKSlyb0k9dEddVko4MmwuLmpnImxfIURENGA1M1x1
YmxzbzAKJTgvLjFpcF5ybmw1aTMsPEAmU142PlJMU2ZmUzEoYFU8Tm1tTEEvQSgzTUVHIzdXL2oz
JnFyYz1PRmwlJlM6UUlYP2MoZ24lI2VPLSIsLU9QQ2NCYVgyREk8NENMQ1JFXkQiajwKJSE9TUlh
UFJeMHVCbkkiVkU3XmchUGgkcixmV2hoXls0REdxZmVRYVZgKF9wUlhWaGlNJ0oxIjVxWkxtckw1
SiwscytpODhFKThlQyRePkU6N0VeUFdYT3FVciZKUjQ/WyxpP0wKJVEjUm5uIWlDOFskIV1HWD1X
XFNVKjcuUyNWLCZkISouaXFrUj5fRVAnTlRhNzZIXE84RSIhKSVpU1lwY2I5QjhzIlMqVF1gL2sn
PUNmPUpnMk0wUSJfJGI8TDhGbmI1XVw2dU0KJSZEODZeZCdUalwvXUJsRkcrK1VKOVc9UllqPE5Z
QiIiWDVqJDp1LWZRclg3OiFIKlsrakk2QHRSJzwxIlc9N0kyK2lnXkA7RlFmRTpQPW1DMCRYRFVK
MkckY3BwVU1PRltITjoKJVVqZGwuLGhydHNNRGtrQldPTmlHczZhXmRWWCs3aCQ2VyZKXlY5UnNH
SWRdbEltMmJTKFMoISheT1ErTTRGVG4tXldAYCFjT1pxRklTTlxbKW5EP1YtaTo3RzlaZXVqTV5A
TUYKJU9oRURobnEvUyFXRTNFVlw+IktJVGAwIzZVJHIiZS5jWE9BRFFQaycxY1d1UGNmb19uU3BF
KCRrXFFacWg3PkptVFA+IT4zZCNZKiJTYmQ9QiRSL0xHMGZPLEZHc0AyOWlxKEcKJStOQ3RqPmFV
P2Nnakg2QyltQHIiUFMzb0gxRz10QT1da15ncT06XXBdXXUtZ1crQjJHOSU+W01hKTRPUTxOQldx
M2o3KTdrZi1SMVQ7WWokXmtlJWM4b21iQGstKFpgRWhCQVwKJV4pPl4+QkI8WixXPDA9ODwzNU1n
OlNmKl48XkQyTms4WDBePzlPXUQuZVJQS1RqPmo9O0RlWGViNXI1ZiZabE5AM2QoRE4sQHQxM2FV
K2ZVTnU9KzM6VSZpbzNVW0hbUUtuLDEKJUklWFJtP0dUSCRwOE9NUlVbUzRsSzAhOTJiKEhSUFdM
JzJGY3VcLl5DUjdfI1xUPmdOKlxIWmREIiolMkdxRSlacVVgJnI+MV1LJzsuOERMYWomO0A4Mlpx
L2VXSm1JOlJqV0oKJW91XkZUXS9MNG86SSdIVFZkSFA0YmVbOj9abSl1VGY0RmZbaEFNXlk/UVFg
VmpPTUdvcDQlU01XdDB1biJyLE5SIj5nUmdRXERWSDBySXIlNjZrKSM1JWVEXFJcWnQuZ05XQSwK
JVdwLS1uPldcZGVHZj9GSyU6P1Y7N0Y7RGxrMmIkSDpBMyIxbFBWTCpeUV1uJmhPajdBZWgsRWVu
bSJNbiE2P14mUUU9M3JiP1ZILT9zRVVAKGRONiRSNV9pZ0wrR0E9PnFyM2MKJWxEO2UkSFIkJmRu
XStPXWItZ1RrWS5FaDI3XGREcShQdFcxUENUNlBXVjtwLCNHdWlQOSs6XmVdbHFlUTk2VFhZPitl
aWRIOT4zNV9tJitlJmJLNS9GbFREdHMnZkthWFpMXnIKJW0na0U1U1A8XkU0QUUxW1VTLmRAIjhF
SjtAJmhBYikmKVEzXiFOVmlKQDF0bykmXyI0bSg7MylwTmUpcSQhbkxfSSlZU2FuNWs3STtuPkMy
SmItYXJXRzk/LlQrKik+M0dtWHAKJSE7U2hRJEBVLCZQRiZKQWcsLEJpXTs9N19GOmohW1MkRDFK
M1twdElkSEUoaFQiZkxRT01QIT81IkJfOWEhdCcpcnEwMG03QSMmai5gTi0vRjcuWE1lZmVtRlZv
PjpSa2dBVj0KJTJfLkBnVztWcWtoXTNSNGQ6MXRUXGAwMFZsO3UhaVVbM2Z1bFRhVS5JRyhwZEck
VmJkJUc0dDBHJ0pOKFZkbkcmKWBAaUdWVCFsLD5JSidFWzhjJjBHOmFJLlNDNVtKJlhdS1wKJWgi
QDo4Zis9b0FsUnE4XCpnY3B1Y3JKZ1pjWThcWzFqOCFlSjAxLUZeXEgiJiRSYCxCSVZ0V181SGBf
PytpLzJXcEtQNiFqJD09NXBZJXE2cCJcMV1GaSpddC1dYkROPCU6bWgKJWdvJGQpRFxCYWZfVzZe
Ul5ZPHR0RzQja2taVms1PmVubzNucilSZ2soTFBCb2lvVmA+cEJlVjVjLW4kbmpsVy46JkBNSltf
cSsyLTMicj1MQV8iOEYlaiY8UFJzWEdMXyplJ0EKJUNTNjIpSUZpPUE1XFE0QSxKVltDKCNLQ1Q+
UEA/Vy1hbyVsVk88YS4ySmxQWUk3S09CTTJkRV9cJ2lYQjZhdUxeTF45b0pJWEc+PiRRTDVqTS0x
cGteMmBHdWtUcnMkMC00PVoKJWVVUkldZ2UydTFwQ0tKODxNakpjL3EkMyozZFUhJywmY3FhL1ZZ
XmNvSTQwJDkjI11zcFxPI2MkPlZoU0Y+T2U3U2JxZ01QOGY2RDdzTTpJZjlzZkBDdTlVN19XP0RD
WV1tYzkKJXFhYlZrZy02JHM2Pm1zZ2MyRE5yV2xpXC5oNFdIVUlfOW4lOUU1YT9kLFkoYWhrLlhu
XD85bVctUidGV2YhNUc+amlaLlw7PV5BLk1dNyM/Xkg2cUgoY2VFXjBFNDlrZkBITSEKJVFfNUk2
XjcqVW0hUlJ0Ii04P2sxZkNsRDVSTDQjRClvcV1UKmZUIlVORzI8W2BFVEg0KHMnKzsvKVI9bEg7
WVJqWGswRU45SjFlI3FZZ0FLVyclRFZPS1ItaypcNFA7aEIyZl8KJShPKzBoPT4nN18xaXNYYUVe
I21bRFhMTk1wPDVqNFBSVnUiKnM9ImFjSiwhMElUWlZGWyQzXT5pa2tXOV1HJ080YVFxPWltS1JV
UlIrRV9eVlZRSTQpbnRNOiY8SCFJZSZqZGwKJWhdOSM0b0VpRktoY1IrIzYyT1ddKzVEOVwqMSFa
RWJyVkUoaEVkN3VbTTRtTG88WCksLzhnc3IjSyxkW0ViTGVVODxbRVBQbzFLLT83MihOaUdDYGQ/
K3RKa3JWVVRtSUZzME4KJVs1cC5fYFtIMUhTQ3JgcEhBXFlSRzV1aTAiJD07YGVXPV5nP0ElIlw3
K2dKaTQvLmVdbHJaQl8/aU5cWkY+V0BTW0EzVS9Rb0s6cClcVGUzMnMzYCdjYmpuY2d1ODRoKksu
Zm8KJTA3WCNIW1M9cmI4MGFoZ2czXFR0amtgMjlwVlMnI2pzck9lU2o/WjBGbHBtK2coYVBPalBC
UDNgWS1uIipUTFopVDo0NkReLjcsPGpaUU5UIyQ2ZHVIZ2BERWJCcCtfUjtCKE8KJVw8aDVAZ2Ng
JlcxJ10zdElfZmEtWSs0OS4zSjYiUDpNQUpUMVouVFFbWFJLMTRKW1YwOllMLVtOcCtkQ1FdPnJn
MFhQSmpbWi0vVWM2PCVtVVIoOjBHMl8wIj5JUUQmQ2dUMzoKJUhnUm4wR0BSMHJmdD5xPDpZMEk7
WkxvX2ZEMllSYTQqTCdTSE4qSzNGNU8tSXBUN0otbXVpNzM4bllRVGNLV2lIWTphK2xvYyxyMGps
YDZoU1hCQjlPXnRGanEnTjRKMm4xKEwKJW1YUDY0NCwzKWwwbC1yKk1PO00lVTpZKTVOI2ArYjhr
RT1wO3NHRGM1VVlXXiJScEghYFdUdF8hV24hN2JadExVPUdBMUA7Wl9hIkotMWllOlNxIi1bPWpt
Y1ExdHFeJHFQZCIKJVBcXkg7LE4+SiZjVURfMGsybj9vXTJDV144OEMlaTlGLi8kUmo7YCMoY1JP
LWFwIzpXYUosM0MhV2QwXWEoQnU/RT5BV3MlXCw3MF5lLV45P3VELnM2MUw+UTtKPU9EOCsmVnMK
JSYxU1MlKnUyPEVyKk9HdXBkaTo3ZVxJayohNV0vRCU7UltCNlo/cE5GLD42XF1HRCgjUjhRa0NX
TlpeW1ouampUTGoyI189Ykg+Ul51clwkJCVWQS5hdSd0YU0nXz1PQltdZ0AKJV8mTlUuM2pKcWwl
OVtAI2VOKzlzMydJczdFcz4kQSJmTGMwYWdhQ1VIMC80bFhWKFVxTHMwakRDZVsrTS1NQVFpPjlb
XFswVCUqb1NTTGI0Im42KEhVczo3aT8tTCltaTBPZ24KJWJeaThza3NYOkJtPWNxOFtgOy1NP2cw
KkgmZFciXDdnPmFtVSM1dFE6RGJVWktDWjgyJlFnQWpuPUBDT11WbnJWPjY6dXBtTWFcQjxzLTRb
NWJdODk5OCQlQT9OQFdDPlksMiwKJTNtQiFkQVdvV0BUY1NmMSNtaTdKUEhjSHBtQ0I3JEtvJEFx
K1syUHFFYmtfY1s7I0c/VC4+ak9kSEckRU8vYDkuNm0lJG05Oy8vMCIvMEIwKjtLbSdUaS5TYmJU
alcxJWAoSTcKJU1ASVtePT1rSUcvOmVIZkN0LzJhKislLSlxLT43Qm1zPChDTzVMMFtUPkdTPmdy
Wz5WLEJfY1ZJQ1IqbS8mN0ZnOV5OI3MsVStoKCxcLkN1SjtBWFVNJDtBIiYoIls5Si04VW0KJXFO
TXNkT0k1N28mQ2dRWjhPKk06QTxcJnFJMCc5SGtOSldEIkxdYTBTcTBQQTNuc2ojbl04dWJVc0Nz
bDM1bzIqKF9rXk09Jz1JZjRGOHAlNDwwcm8yc3M9PkZpUWRmIWV0SEMKJWtUWTRVRkxgLyZRakQ6
WyI+Qz4lb0VsN1MqbkdsKlhyJzRYLFEhPHU9Sig4OmBMKm4+IV5IR2dVXzpzW2BYVyFcL2c7bVNr
bVlvUCJAI2prKD0zXmNoP1NROyQlUHUuIjFEPGMKJUlMcCFoNVsuSzRpOHJgMWNiYFhsS2VQJVs6
W0BbdTA3WkdtQT1iNVRKNFooMS0lSjIqOmdCa0U1cVJiQlJRXktLOEtfYXBHXTsmRTk+RDF1O146
XjRRYXQ9Rk5OSCYjVnVkbjIKJUwtXS1dOS4kKy1AOCtbIiFAT2lcOk5mKXBbcV9IQStCU2NJb0ZU
YXRjVGtRNUxNPCEvREBQVEZwUDk7P3JqO2FXQV1XRUY8SypRQUFeYkRdZEAoMlNEWzc2YzVXRCNg
RVlxVycKJWJgWTQwTUIpb2IuXUYzYjdoPSFZXSQsRigqLTYuWEpmNWhxJ0w4SDdpXjM7YWNgVm1L
IycpPiNeOSxYYW0lNVdARUdhNEJrPFNnU0xZckRcOkNGJTk2dHFGcGZJPktdVGVRaioKJSRFZTkh
aztea0NuNklgLSl0XmZdOSpOOFFgJz1hPEssLkQocVNbLlghI0osOl5vMFI1ITtMLjReJlxjXCI6
KUsjTyVwZjZPSideSC5FbiwjUUMtLUc9aU82JmtYQj48JkZeLDMKJW5iQzFtL2IncD1XZjNlcVZL
YC8nQi4zb00uUWRnXEJZMHVMRVZWYSYtW0EmdClkb09zJGszaW5ROHU3Uj9Ecy0+NHVcJzMqPWxs
U25QNzdtaEg3YTwtXWZbSyhyaFNtJi1PJFgKJVU6XSowaV1tLFNXYV49RCpYJ2ouVVwiZTFMNSN0
ayJHVj42Z3BDVWRSWikpZiFzYmxsY0YsJCdGITE2JTZkMWgrPUAiMjhUcEUuLChDPFEtSjsvZCM1
UUtkYS03QzEhR1F1PlUKJTVBMSQ9XiJBPmcxcVNCUlRVQjJjNkksbUk+NGhYXjdLWD9aREF1O0Ao
KEJfSCVQcUItWnJSMk07VDshJlEzbzYhOyplJkVyVm87OzVlXDMtRFs/ZzBaJjpqM009aDNEZSNG
J24KJWBXRihzJ05iP1QxXUlEW1lsTTREcydMWmZhTztOL24qOz8iOF9qY1Q6Wmd1WkxbS1VlYHA3
IUFwNWYsQnAqV2w2IWZ1MSVeOlorIXA7YGY6ZHNaTEheWEM9KVVqcWo/OlZaaXEKJS5lVC9zP1Rk
biJkPGklOj9RTWI3U2wxU3BeTzBXbkM6azA5Om5tbjFHay1vcFBrQnBQKWBUXV1OVmlcNXFSPkBf
bycuJWRdSDtBNiQ2QF1zXz5eY208PyJ0Tjw1TlJBSyc7cikKJUBtYERAVFBhZy1gVCdBZSQxTGlO
JWVdVkEwL291TTQ5SCw5Olw3b15RW3JXKzA4VlQ1LGVEcWAhQmAmPl1wNmleXVssMDFxXWFTYyNt
bGA9VTFWUzw7Pl0vS0NxUnFUcSphTHMKJSIvLSlzTFwtXlRKdSllM0VxZy8xNzZLVSZQWDNSbjZl
WSZQIlxnIW5vOCRhJWIhQzdwP2c5WFhXUTpFTm1BNmBfbkVfcmdTWktaR0twKkhHYVkrQ1xcPi44
V2QwRCdGRWk6cC8KJWVoRUxacTc5ImkqV2o4VU8tVjk/WUFDcGxlKFRyJztebz4hSDI0RUhRN1gs
YmI7NFs/LXBNaDE0QHUvRVo9U3FCTFNvTjVyS2pRPS1EXFBKbWdndHBNT3BoRVk8Jl4/JF9qaGgK
JTttVVhKXUAkKCNSP0VPT1RMPyplX3FJUmtKJG9MOHBJXTksK1VtIkdqak0uVGQydGdCWyZBUGk9
LzQuXmkpPXMpZyMyKiZRW0xYJCYiXlg+KlAkRjZdQVNwI20kNExsNidFWTcKJTBTPCFoOlpnZWc1
OSQybWdvX01FJHMyXUE8T0A0LTheImtOMVtoR0xkJWMxT0FvQEdEbjYzWm47bkRNYm9mSTg0OFRN
Yj9bcDcsJDRoaFcuQys9Q0MtT3VaO2xxckI8VDFbb2cKJVgxcWNoKV1QTChIWEMuKl10Ujs7Ul1D
TkxvXy1YPG9zKHUtUWozXGRmL0o2aEdFbz5haWxvJ1hjcmpqSEAhZjo5KytINllxYS45JTQySFBo
clkkRDdXXFxWTFIiPE1dR0JWWz8KJUtFU2MuIVVRTGRULGM4Uz9EIkg0PiFyJzBsb2ZQMjsnN1xC
JV04Xk9YLkxtQGsuaCNSUydgb0E0V1JnQHFxbThwOlVhYk5NQFwzRWBxIj8kQ1lOXCcpIk9XNyxN
bmlYTERLWEoKJWRdSUFnbmlZWDNMaVZfM2ZNbm1EZTRFM2RkcjxVcSs1ajE9XUpIImQqJyxpLGNB
Y2Mpb2tnU0IoPj5vXTxlXWpkVk1uZmhJTmM6bT9fPiNPOWVVbjFVXj8pYW0nPDFxJ0YvNFAKJT5p
Zj5tUjwlTXFPZG1VUzopdUU7Q15uTXJRXm1qJGg/QU11Vm90ZTxYUHBhS0V0RjVcSiFBdXRfXjxY
MUVjIW8uU0lGLm5rLmpMQ1gvNmRmaCJmSi9yMFkvRDdFN0BjVTQ1P1kKJUVDYXJ1bGI6MTtVb1FX
RV8hUD89W09uLFhwPzpWRT9fWDRZLXFYR1huPUFkODA6KFUrZVJYQyUjc0dhZ3BjdGouI1ZQPnE9
M2EpTipNM1FXVT5aSSRjI0lIN2UndVFEb20hVmgKJTFKO0FrXD9Wa2gsa2Zvb3EuUXNLa29bPXMh
cSFFUFgrbFpUZT49V3FVXEBPPlVPLjAwZUZaSiRgVFdeQkk1UTtlbD03RWtraE9IcGonLCc6b2JQ
X1xcIj90aUlhWnBRYWlqP1UKJS1QSixDJzVJaEdlNVledVokVHFkZE9WNGpoVkopJnJQRTBIOTtR
QzlKJ1dLYUIwUko3O1RnNUtaaTNeW2wxRjtta3E+UUZoLGBYWENzQW1LWispRC09P1xTK3BUJCZw
SFtbIVAKJTJwTkxNNUdNTEZJbVdoVVd0JklHNCRoL29iTVJmV0BWV3NTQyFYWyVrcUhWbFNlSUlZ
aTlRR01lcXI0VGMwWUJ1XFAxPi1ePWhlUlxMVFctM0hGKF43T0YlXyhRTmwnWUFdPVwKJU1XNzBv
MXM+LEIwMj8jXkNcUUg8U04oXWVhT2FXczolK0g2W3BiNjUhXz1rZyVbc2U5blZob1djJHQ+c1la
LzteSitrdSZeWjBDWkEoTjM9VjlhNXQ3QkBeNF5LTDQvR2JjZWoKJTAqRilDJCxLNjlqVVwwZSFM
MyR1QUhfMmpbWVYrJFJYPTcxJmwoZENkNmknK090JF5nKz07P1JPWGAsQ3BYUDVqcmJwTElSZnVG
Q1pnIjJZPDM0OlY6RHEtRTEnZ0JZKDdBLiEKJW5yVFY9UEhNYkJpKFYxaCs+T144IWY4LlxtSzUo
XiZsYVFgPjAwME0iZUkuXiNGbCMnWlQ9MScwYSpRNCcrU09XVWkpNVxVMD5xUENnNSl1W0tALmgr
SisnRj1VVSRRSz5YXUUKJUtqXD1IMSdWUSslLi1YcCI2MXMnI1Q/Pj1SKzdMSEIrLjtSKHE+bGpF
VDglZSI2KShqY0Q/PEw7JjJMQWIsLTZeUm1qTj5jbHE0OmdUWTZHUjJvLC0mb2ppckFJYTo5QD1A
M28KJShaQ0Q8UmdxUVsmPUZ1T09MK0ZNTEVtMl1KLklCP2NRdGpfKSYhNztVK1QlNiMvNzorXyUw
WnFKY2gia11VP0tnS2dOWURCTmxmQlY7blVrNnBvJ1M4MUhvTWdHImM+W0BxSWoKJSN0YjRfTElw
Yj02SD1oZSEzLSQiP20mYFw4LjA4TVdyVy1mUmUjVT5LaCpyTUVdRSw1RkRRdTdqRClhQjczIzg2
U0tzZSguKGhZOnM0TWA0bkA1WGUwXEdBW000YzhaVSxHWF8KJTtJJSFAaTdmJ04hXlZEVzg8PHJq
OydKRjwjIiZJTz9qPCYjLitBLVlJL1VBJTstaDRKXjtUT1M5OFk9QU5vUzJVbDZ1VmUvOVNcSEVF
YWV0cy8mUFRmUFRWYTojX3NPQ3JdLT0KJVRDb29XMU4uZEU4O2BNUmQ+Nmw9R01nKy5nakFKImtG
TkZfTEcmZCErdCZKTjxuJmNzPCg3KjRaNmssV2NvIl41YWJebDFTMk51MVNhWFohWjtKbDRPPlRW
IUpNLGYxMzM9Sj0KJUJ1Mi1sKzFkIUMmbWc3Pmw9cHRFVnAmYzlFSjU8cjA8cTtkK2JeVkdkZC9D
NVpqRVxvZUwiYWhePFNHPldZME04b1FiSU9DO1YxQDFgTjY9UDFkZF4pJGYxaW8rbiZtXm0iXS4K
JVE+ODZUTVMocUIzIi9UZEJkL0tlYVwqVF0nYW1JbTkiQkQhOWdmaDZQLF5KZ2R1QXJeJkNyVD8u
JThbY0EhKSNQIz9HTVk5S0s3Q0pvLDRDRyFaP1dPSiFdc1ZJa11aKDErRWAKJSFIX2EyK1hBSy8h
JmdvazBVPlwiIVRtPm5eSmpiWEFyUzwlcTRVKjdfTWZCXSpaWEgtOiIlVDBGWTZNU1Imb3V1bmZk
JnNhVDREYXBSM2gwb1RmPSZZVDBANmhUIWRfVW5Ca1wKJUNBVyszZFkjVlNiKDdMTj5jOzpOPHE5
aj4rc1lXWkYlWVl0ak8pSUJHLGpkQjhaQTBsPURKUzxsKWxmVUBsR15dZWc9V3RSMCMtNGleLFhJ
T15QXzsmcEFxO1t1ZEc/RCVJSkgKJV9EblwkcStzNDgic2RWbyZFW3RtTiVwIkcuV2doOlBhZkhu
M2U+SmJZVHFHWkw2S19ebjdbciMnO05EZ05EdEctNGtTLDhSdCtLXVlSdEZtLDgtI2ciIWUibUJI
bDxranNDbUsKJUo7MCVdJ1RiUm4oPzJTTiRcQ20xbFo3T2lnIzhzITMiRiZccV5LbEcvOWpTWDBy
KGZTYUdFJVAkLSg9W0ljLihNYz9JLHJaLFA8PF0saSNfTGZeVSZhMkk7MEErVFZTXTxtVE0KJUJt
al90LEpiW1VJYmBKQkMoW3VcLCxQQUYtJCdgLzVtXzI2RzJmaWZILDsiLiFcZHVNXWFiLnJcMVkm
aUpLcyRnM1g/L2JLazphRUNEZDo0KUYmcGhNJVFCdSpXZF89LC48LjEKJV5wRyIsMyM2ckUkNj49
UDNWRmIqNmZAX1prdEssO2tWViRrR0BFbyMwdCI0ZjokV1NCW1lQMm4sMVxYZylAJ14oTDs6YkYl
M3BhL1peYm83M0g4bToxRnBtXiFPKGFNS2pcbCgKJVlwYTFVQiZaIU9dSCM4ITU2ZFFjUTU8Wyhh
V1ZeXGJib0YkUz47N3VLOHBbIjJGSVtobUJBYz06J0lsbVwuN0pfLmxsKlIrV1VvNztYLFpYYVgy
ZD5YLDBlc104WkdzNVNwYDYKJTZCJW47OWUlXk9qV2hyV2xVXC9yRTI5PSRndFIxPi1LR2VqJjUp
KDI5M0huJ0puM3JKUEErXzVQMzM4UGF1L28tbnVlI0Iwc01XYF9FKGwqZTBHdDsqPFFMIytvPGol
QXBGOiUKJSZlVDpWK0VVWk8pYTJIKk4rQ1g/UCpNaVkkUmNnXFZbOVJlJFpWajgrMWpMLylnVU4p
ZWchUD9BRlZZMSlQWD1vTkZpUGUzXUotOkgydE9OcWtnPjFbX0NnRT8wMjkmVUpkLUsKJWYwQ3As
Yl0pRl5tODwzTmQmXjIhNGJURTg+KilwKDNJbGluQlM8L3BrTmBrS2M4bDg9THFEOSVxaWpLKGFA
dF1tOWs1TXViPWlNN09cVXErI2gkOHVPc1pAOW1wck9OWW0tJioKJTtYUVNuN2Zgb1JPci5CI2Zk
N041Q1BCVm4kRScxJTBOIjpTI0dFQUQ5LzRjTjY0UXRNS2RMKXQiRmE1STJKR0s1PClSSGZQRG1y
UTctVHEtZydxViw6Il1PYk87PkEwMVU3TCYKJUExNkxjMEowZFxbcyx0KGNVNzpgYTxJNVJHWkEv
XEVdRW41PihQVGFsU1lYJj5CSXFgVkpPQkZSKURxQWVFXElXVXJGWyQzPCNPLkpiVyE2Mz9xVSI6
YnIvV1NobWRubSMoNSoKJVhzVXNuZm9AdV05WTx0RyJSYShjNjdaSkxjbUJcOS07QDtmN1ExOWlf
RWAlPFovXERhYDYkWW87UjlARzJcQyFNPGheZ0pIcSQraypUaGJvaFBpUXUjay1TaVE6TkZHPCQ+
VEIKJS0jMDleaCZQOSRVai1ZMCI+WDI/OFkvRGo2TzR0WCh0cGgzNm9aTmNdJ0gwUyhiYWlRbWRZ
IWUnQkZFT1AjUiV1QWBhWlY9aitsTVJbTChcM01pXmwrOUk8YE5eZWNVTUlYJicKJSRbRERUQzBG
dV5yREMjdT84SWVsU1JnVzw/KWBoUjhmVFFPLUhlJWA9X01Tcz5mVFhkS0kkUWlPP3JQXlA7TzBt
U1Asbk1qQEs9ZDJwIiNkMyktZi8oJGZoQTg1QlNgYUc6JmkKJUpdRyIpODQrRSJoJTgwYFUlWEtd
TTJkYkFtWUZhPioza1VyK3UpOiVqNCJsZC80KChfMDIzRT5Ic2teUT87YzxcUWpYIm5wKFNrI1tr
blImbnVDQ0gjKlxTclZNZitYKEUpKUgKJUZiQyoxSWxrKUZtQCM2O24zJSdlKkZuP1w+ZXJRXGJK
bGxwZGImPjkobDUuLVJIcy08JGc6IkZMXzFMSVImQEU8OT0lTEU8QzpaPDYoaTQ4UzAwRFhpJF8s
cSpiZ0VcU1JPIWgKJVNPZk9yOXAnSVRMb1Z1LToxYVgjWENzYUpCTEBtbTljUj9TTWA7K0M4VU5T
bzBOPUVPLywocGhZJkE6Yzs/NnBGM28qJ0xZXFZIQ0JZcS8kYWZURytQU3FUckZWR0dsJ0ptOW0K
JWNsT3FuRlg6bjxPUkoiMmg3SS1HQnI9P3VJWmoxPG4jTlVKRVlOOyI5ako9O1FCJ1ZOXnVgcTUj
cDtLRkFIUV1qOlA9YmctMlNgJi8rXCFRRl8qalJPb2c4TSpwL3NkSywuPVAKJWNDKkIpLEwsN1JV
PScxZiFPNUlRZjNySHRlZFBJM1xyVilROT89K3VKcEFQPkZzdGBZJ2ZFPmFAMmZAOkMkWEdHNDUz
NWdnZEpmRkJ0ckQxbFstcSxldTQpSCNvPzJsZWosQ28KJUdEWjk1Vy5gX3RBRW5XdF1kTV9cX2U5
KTpdQCk7LDBkZGlfVydVOy9ZI1ZKKlJsMWFSSGYqSTczdVgiLU0+PVNUVHRrSi09XTBIajZyXTJt
TUdYUkZaWi88OSFzUGBCL3BdSlwKJV9BPmkmNT1JTDs2QUFvMD8vSGQ8YVNTKltTXk9TY2tuUmNl
LSZScT9eVyw2ajc3XCtBci4ma0JVMHNEPEBiOTVfSzgrLCU7TWFhIyNxUC10JEVFJ2duT2hrNVRr
REAwLS5XYmQKJT5UKEUqYCxGYCtXImVyMVUwcTJbYWhXZCJBMFEyJDozbUlGaUtfUS84Syk8bFhE
TGpMMmJaWStBdWRGNCZhV1Y/bldBQE9KXkZtN1RrPWEjRFhMNzpgUkMpIWFOSTRwWG1qK08KJVtJ
dUYrWXQ5ZTI5byo8XG4jci9tMzJQcjdlP3RsXU9tTEZycyJdYXM6YSQhSFkuRyhaXXUlUztWJmo0
XT01LmRqcGJJZzlsQj9LT1FQJVw7OUgzKWVta2xbIj83X2RJLHM9QmYKJVNbbTEjbFFyLUAvMzQ0
aFEkb1k5RUNAW1thXjJvOjc9YmBkbCYobj4pWDQpPTxHMzNua05FYz0rQz03VjdwVXVsbkklXT0o
TjhQb2UpIy1aSlQyYTovMiUkZVNEL1xza1s+USsKJV5jLyxuT2ZPJjo3Q1tsJSQrIz9zQGBPYkRQ
bmBZSVxLU01AT3BiNT9BV0VOIjAyMVJWUGZxO2ZNMy5MOmBRJWo0aUJza0hqU2FZYjtnVjRQUC02
clRwRCM8PSpPISJWbltRRyYKJSdFVFNEKCtYUV5eLzghW0pfPSVIPjQrcCFiTClmXDxKViZsK0hl
K3FaJkV1SCpQO0hqT1tzU1s+U3NjWSEyVi0qNFk7VkA7KkxnVlc3WEtYTG1sQ1EuITdFOjMqaEFj
P1xQWS8KJWMoTExUJlpzc25RW05fLWYrY09kPHVobE0mRT9BXl8nVkVGOyM2bF9AUmlAXjQmTV9e
J00vZDlNQ2FqZjZAQGdvTTY3Vj8wI2ksZ0JZO29cZ3RdcltJSSchMSk2IkwiWEtkYC8KJU1cPCx1
YFZQZTheXl8hOlEpLWQjWiMpTCkiNiVBUzohTjkvamxzZU0wK3NwXkAnOkglUzpMclNpblhXJzpq
PVNvOFBdNjonKXJnI1tGMXRabXFBQkpRbWkybl1BdEIzKHFgbWkKJU5UNi48IlZPM1o6PTJwZF5o
UURmXmxmRFtnaj5AXmZMM2EmVypxaGVqTU5udCpONldqLihqJiVSWVdQLVFmNDBNJ01mQ0FVbEVE
cUs/JHMtMWEqbSdIPHFVcixrRiRycEg5OXEKJUBxIT1HTCRecHVSOy5OOlRiVWE/KExTRSwvJSdl
RmNIMW0yWDczWzRaYG8waFYvVnJqVidGUSpDNzoiT2M4P3RqTiMsSTk4WVFfdD1INCJ1P14tTWJf
THQydClDQ1ZKR0RTZyIKJThIQ0UkU1EmUGEuIzZsRz1CaEwiO2RvdSVnODNobiRSSjtrTisiXkta
UnNQREsiXCoyIyxlNiRYKm9hNTNSaERNcjoqI1VJWyEvX2QwNjtQX0ltLzBEaD5aMytXajNWLXAn
PyoKJSFSWVhOMGRpI1FSWVcubCppKSxkOS1GJl8yUGM8PkQ6LERJI1dDJzZfKkRRRy05P0ZSJWNz
J042Oi5QLi9QKEtDLiNBWlslNE5bcVQ0PWkqUUFEIXM1XDhKNENQXjFBaGUhYSgKJSh0STs0YDVs
a3JHJEhDME83THA8cTNUNCsmO2t1UyktNz1kUUs2dGo2aj1MTzRdWGIhW00pNWxAYVVQVjZfWzFz
X1ZNIWtoZj5SJSMoc29eJU9rL1JCKENHdWgqczZgI28xQWAKJU5cOm86JU1KLVRlajtBTFJaakJp
cVBUOjouOEImLGNDIzBdQGhyb0Q+VmokQ2pUP28uUFM1RmFGJS5OMDlXOipNLmNROVtMc2MuPWdi
bDlqL3VUJUo6cVtVPktWXUdzO0ZtOT0KJUAlZUFASEhPcjIiNERdYjhkanFPaixBW21aWlhLWkds
WzJdPUhGckY/dVFCM2ZgUUFTREBXLGtDS2c1W0JxRWE7TGdgblNJcWdzT1xWODp0US5wSlw0RnNB
UltackBCYGw5Tm0KJWpsJ0dSby81WEwuYDk0ZC5tVU4+NChSKmRvK01lVkMzYTMtVSw2JStwL1Ms
Rmo0NCg4QFBxMENiUWtTOUw1Oks2YGVHcmhxK1kwcjtVVGk8Wjh1O0FqcChHLUA7J0xWTFIlMGsK
JVpiQDUoMlRTKD4mIypyWDJaPEFcXjxzYSJWLkNkTlJ1P15gVGJxMTJOPC4hNUU/M2dlNnU+J0pK
biE9dT9JKjBxNygzOklGQ0dydDZwZz5kblwnbyZgOCp0Py1JPFl0KjdlcGUKJTlWRUolIlRiXi1X
PlReIz80RC89V0BgX2FdbDU2UCE4OWY9XSRONW5XXD9jRU0vTV4sQ2RwZVw0IilRRWEncEVeTVxp
Q08jYERxbGg7c0tibzMiRFNfKzhmS0VcIypwN0hlXjYKJSNuSkc8I1M/RVEkLl9uXywxcWgwQSpm
LktfPzRuOEJCUzEpWXIvPT5gb2hQW1FjUiw/YCY8PUBXaD0/REFSYSFyNyZDT0lKPlg/MEg4OF4t
Qi9HSm9HNEUqcVYtRlZHTGRyQS0KJT5dLzF1IV5YYWBJT3BMOTZqJmxPTG4qM2dgXUI+OS9SSjRz
cHNQKFxUMUVhbSosNmQ6TEc5PVlKZmBrcmkkMG1Sa2RzL2phOk11WEBSRSw4V3AxbT8jVW9uKDVO
Ol1kTC0kMmYKJTxFNDhIRVlPWWQ7aCZ1NnI+aDYtSFVoOWhjVkAsQEUnOidGTilNXzMiRzpJclpK
XVsqZitFWHVVYFgjQ1IiXFMkM05cRkA5QkkpRj1KIms7ZSNLN0hsRiYxYCNeZE1XU15iJ2wKJUJI
OlNlZUkyTV4rLz85T04jNXIxbzFiIVIqSjs/MCZnQU4/IjNMSjchPypBJydoY0xpMjBRZEMiS2sk
RVN0XC44OGkuX1JFXlUnXHBPOk9mImdNL0pQMyFXIUg8KSQ6KkB0R0gKJW8lRmV0VSZsb0cxajFi
OFNXL0BgRWgzN3IpUDhCNmBCTlRsal8wa0Yna2RZKzEwcGgzQS9CLFYiVHRhOEJJb1JBOFdLXm0z
YTYiPlloVUxUSkFeVj4wS1hyYSFdYV5mcTxYPCEKJUNnTlE6JCJmOzZxXGtiKC0nUVBMLUt1czVd
OUU7R19BUShYJ2AySkgsK11YWyRoLU1AYUpISkxgRDEwOiMiVU9VaGxcQHQ9XSgtb0Yqc19mOWFb
J2c6YDFYLjk1S2hiMlBpX1IKJVA2cGRJLV80SFlXIkM0b29TTmZpQlQpQ21VYWVzS1BTNkQhPkFE
bS9LcTkhaVJcdSslUklKMGpQJyU/LzFpLDJhVyNDRllBVS1LLkVrNlxrOGxSQHJNKjBYQFRkP1FK
XjAzIyUKJWVoX1VIKipGMTIxdVZWb2BVYGVlai1aMlowZS1TSlcuKzIpJ1Rzci9aUltlVkQxIzhA
K1s6RldQUWlFamRYVy1SLiQ5PjE/UUQ7WzY7bmxxJl5HSlVkIzBYM2VkM3RfV19idTkKJVEqdEly
X0EhTGxfLDZvRjVzZ2l1RDdBVj0rcjIjWClsYmssS3AlSmEmPThSRjhUPyQsM2lJTSdnQ1tddVI+
IitZVShzSXEqPEc7V0w9ZGVfJlZIZ2c7J1Eyajc4biQhI1cmJkkKJUlkRkBzTXNyNTYobFloc0JT
QGYhN0cwSTtfZWhBYVUxW20jayQ2JFRLT28mXl01M3JeZDwyWmlRZSNZXWBuMi5IUGpxb05jQHJw
c1lWJylWTU4xZVJcbTM7WSEjRUJDRCZKWmwKJT41cUQpKFlQNjRaW25OLkk3S0RVRjsyPmU/XVcw
ZENSIlQ7M1okL01XXyZASjFGVTxebEgrXkxOUSJwTVw6ZjdTTl91SmgtdXFBU0JBPkk4OUZQcVMt
Y3VLWyUlNihaXFcsVmUKJU50RmJVYT83OnFLWSZNSTtJJ0M4NDJyYHRhNXVsJDIpM1NQTC5OUUpd
ODpqI1BVOC1KWWdARUQkXjdaLmU1PTo5QmZML1taWlsyXWBSWiNtbHQxZjY4TktMRDNqVVRFLSM7
WFYKJWhAZDNsJGxuQikvTUBaPDNrTmUkLSYqLkw3UGBqVWkhM19vQThCJS02OHVyTjQ/JF1hN1Ez
NklkXE10Vic1N1FfRCksWWxBSThlY2dSIlIrJyFoalYtIl9YV09SLERdJilqKEcKJWYvPCZXJDRd
RXNgSVMvRTtXZV8+Y2pNW0Q4TWYxK20mLnNoOGFxTmtKSjw1PSJ0TkE8TmAkKl1TZ0Bwbm9iakNT
IUNoKmJIJUFpdUZCIkFpMjx0N1ZJTGFpUG9vKGg6aC4yIWoKJWowKS5sPUA5QDRNQThNXkYpdC4w
OVovcV1EQFYsQTA0TFpkLiZhPm1iXEtTSFdQVGUuNVovUmAjKyhOU2oiT3NHPjAlcEJVSjk5NWko
Q0UyNDlsI3MwT3B1XU5cZWBASGo1YlMKJSM7NU9LKlc3aDY2SVJwTihFLmdcTm1eQkVVLj5eLi1s
YThySWU4ZSJmLWM+SkZWK3RTPCsxbUREPXFqK0thJDAvXikrPGhGaldqdCxeJjBJNWQ+dCYsPz4+
Ois6Y1pIJjRSZzcKJXJgSyZOQWFEXUphX1xxTCUzI2FZM3VPbjxKL11rQGpNU1BMSklvJ00tN09R
bk5ocUA/U3JEZmEyNUsjWG4mLyknPVJOSC4vXi0hZ2ZRJixoclNnOjgxNEM4RiZkLmVgaXBeVkAK
JT4+Ris1ak5XSm4mZFk6cGhbZWRdWC9oLlIxcSlLRmRNJ0QmTkY5bjxCISw1KVN1cDROJlZYQ01C
R0tgZmk2S09TQUlBckZLbUpRY1tOQEhYRl0mInIwTmMlKyFOJDJtU11NPGQKJWA9Wj1uZjxsTiQp
SzcyYSFcSUFrTU82cjFsXzgkTE1JdENsKE41KlxqWCdwVzNDRTk7JnJFbmYtJEsmRSw2OTUrJy5n
TDJuXks6XTYjZD5TW1BidXJLMmFfOyRHNltqTkFxV1cKJTdfYCR1XiJpPFg1TycuLExhI3BRNmQ1
cHNiSktgKXAwIUpjOD0pZClAZElIIkNISVpJIVlMSjQ4NlAvM19ITiRdV000cDE3QT8/ZFItWmlz
PCtYNmE9cW9EcEA6Xk9ZRHFJU2QKJUtUX3VcNGVYLF4iKlJTME88V1lcSE9ZWzxlTyUmXEFfQWZA
M15tXWE6N3ImcDVWKyhtSDc7Xk1HOSdXNjtLI1NlUFJXN2MrKjVoOzYnbUNrLjpqYkxKXFciaGZV
Y2VcLTdqMScKJVI3WVNjaVk7KDpjQjhDI2Ftaz9NMFcvcWNjQk88SVIwVj1lKlQ4Pk1cVl01SGMo
Xi05SXExLWxLazEqMStzKG1lX01gISohQVonc1x1b0k4Ki8+REdmN2BtNigwa1NmQlhfSyoKJWdp
ZUxSTzhlM2spXTdIZUctL1NJYSJnPyY7cFBjSG1SYzNQRmtyOWElczokWi0rLEN0ckc+K1hBWnQu
LmAoYUxfTk4oZShHdWYsOUMnIm9fbl50K3IrISNSJSJLRWc7THNSK2MKJVg+MiNbUl4pV1goNGc9
cWU3XzxyJDIrcCFTWXRyT1hpTHIxJlpYWTpPaD1NWV1pXFFPOENQbUE8dCVoUWBJLW08NW87cz9W
IW8/VVBwcGYjQURfJSM6K1JaRkpJNFF0YmJdKl0KJWFYLTZySS5sMDI0aVZPOWxrIm8lNmpUcmNP
OUQlMkpQYlU9Oj4lY2hkXiwhZDhbQz9KTDFtJlxMbEktVDFpY0stWl87ZmMjOjEvZk1GcTwtLUgp
bDFAV2FmITF1TUlhNDpFIS8KJTdSNWpRJUZ1am1FPVJkTTxAJ0xRWS45QWFeNEhDS1ZET2RORkAx
PmZEKD9EOG8xIVtEX1s7LjxOXWNWV1dfSCNIRElgcj1oVzhXY2VAKWwzUUY/KyNOKSVNXEROKSk3
YmU+TWwKJSInZyI4bFhHXiQ5ODtNTFRrN2ttQ1JoQ3FTU11xIm5CX2ZBKkdQJGFqa1wiXFI4NVRL
a1twQXJnaztHWjQjLUs7Q2hYLWY4TXMuIWpVTigqZjVlZTlfPU5pb1lxXDlxN3BwO2QKJS4kbGos
RDJXVWlcaUY/V2JhJSpsYTUoIWlSL1FGLWtiTWpANUdvajZEV3EjVl0nZWVFRS0hIyhIbU1RJS4y
NWotZjlmaVArYnFtRXA7QmFDbnNacz0xOylyJV08TGpOY0teQmIKJU5hPTs8RisnPiVCYDwlSWcz
PFRURzAiM2UuOV9QVDlPMFdjPWAmcFYxRmdyciV0Z0QrNV1NYExma0VnSTlKTl1oa3M/cEZgWyI2
JWJVLCRYRChLI0BVLCcjMktLOlVaMWViQyEKJSVaOzEkPjljQU01PTxGdD9eOFpCX3JVWWtsNjw6
Rk89ZVosQSZDL0lcUSlCR0U1WlAjJDQqYiFIalt0bztoSDdKK1ljVTFiLUpoaWsmcTZWPWpddUJa
P3I5NTloJXFGLGV1KVgKJTUnXkErXCJncUo3TldLUVlHXlN0LzM9Z0g6Yzs/MUtuWVFgXHNBMlRp
S0NFJDg0SWAlYDldNStjRlMiaGZzMzdDJkJIWV9Lai9RYCJqRV1kbSNrWltRa151VCk3WCFyYl9m
T04KJU1VZF88UC8lMSlBZ1hbTTZuP1lRUjVIKkldcEtWKiM0TlpMZmQ+M21YSFsmL0QwLz5sKSpX
XmkyUy50PC9pKmlKaT5hciQjXSpfKWQ3ayZEIkhANS0uWyhTQ2lbN1snOCVDSSYKJWlsI2R1QiV0
NjI4bC0iQ2RpXzgnXWBTcShGV0Z0UGg3T0sqUC1RJyUtOEQ3IW1GLCpnXFxmR0Q5SGYvOTdRbj8j
Tyk0X19nQ08kb1ozLUJWLVNLYlg3J18oWSw2TjUtUElKJj4KJSpGcDBXRnMzMVBwazFVSmcoO1xl
WV5LUS8xJDY/RU51ND9DIjowX3JwTyR1JDxSRWViSWEhM1hoKiVvRTchXmItPCRSVSJfQ1RfPFBQ
R0ZPZCZOIi9jcXQxSyZWdGdzXDcpLnMKJW83MDJfVU1lOV5sTUpGYGlEKDVHbzA7IXVVKVQ4KiJt
WXQsTV88L1lgITg9NFFBX0c7ZCIhI25SSD9HQlNsaC48IWksIyI2bl9pQVFOVyFeYFVHTEdUWTNL
a2QiWTcvWl0hJyIKJTZHRUldNSJXVDE3IzEsVSlXPSZZME9WSjolYlk2cD1uNVEhMz1fRExKM3NY
QW5VJTU0RUJzTUI0RzxmPSpFLmNHPUAzQktyRjo+Xl8iM1ErNU5cajtCWHVNOVFJNFA2SzFJR3QK
JWQsXjFRJF85XyM2ZlQxYDZUKFJeNFtqPFQyX1VEIUM0dT47KHMnTTFbLStcPVZ1YSpcIThOKVBP
TkBYUCFYYVopPkZ1LT5NQCxaZCtcM0xdQTM0N1xWSTRDUVJDWV5xSFglUy4KJTtOLyg9YyoyZERz
M1FsaVMmSF9LKitfWXRfaVlbSmRMaTdNLCNeZkwvY08zRGxWOkRGcSdIWjEnRVVvdShmQUBabl43
bSRyOTM2M3FZMSdYMU1oKGlPVTckU1MlTTonOVhyN2MKJSRNNl86S0kwMmpMI2I7XTJcLnRzUTFd
XmFRTkQ3OCd0NWlUPG9tODUqRVI4MkJlRzwnLDBZJShsQ19RYWdqaz9sPlsnV2oxciNWNGEuP3I3
ckV1aUExLDZaXVE1X1E0NW1fN3AKJSIkN1Q+RnA6KHVxPTRzPFI8Vyw0OTxVZUlQQTtLN2NbOC9Z
ZjN0akdGI25PU0pbJExnTWUwPUtgXzA4WjxtSURZZWFSc0dnIV82Qk5bLDpSJzlrMCFHXFhmKjdJ
YFhXMzElMCoKJVhLdVJxYzZJcDw6RSg7RUMuREFTQ15yJEU3TFdaczctOGEhInB0RjMwIU51WjtN
VUAwU1c4NTZxaGEmVW5KM2FfNiZ0TUAmSko4R1lNNTJtRF1aNGhZKFc4JjY5RixpODhNIVQKJT8m
QlhhTiZaQmopMyQ9WlE7PC8rKi1SQ0cwPTpOIkleUW1CPzhSO11XVUolP2MvKGo0Q21SJTJIRGsi
Xkg7YiZOLXE8a0g5N2I8SiZHXC5hN0ojIkVmXFFxSlp1JnVGLHQ7QjIKJSxfTUkuXFlQIkhKOERQ
cGQ4YDN0Nl1WQGpTdVsvQGdob0NwJFgscTJZNkZta0NjWCNtL10sbzI+PFo+cT8vRGNqS2VRRHI6
Sz4paWNTKytXJl50Ojoyb0hNUlloMmdqISU7MWAKJT1rKV0xLj1WdDdaby5BJU4mPC9VKWAnKyNg
aWxJaEBjW0kpUEs3Mm4zSTUsU0hzXmlcQWQuaClNcSdUbjU7XkZsa11aLEtiW2xpbEM3K2ZvaktC
TzhYQytqTVMrS0Y+WkdaOFoKJTdATEZVNzUyWGloU20oSTg2IicwPVc1UWglRDJELjZuXzxQNm9B
Iy4hKD5sXDxyWTUnTidiMz5fZTtAS0hENklWbExGWSojYFVbYzdbIy1VNzFrYUc8JFo6IUJTb1E/
X0UrWFoKJWguS09HPVEwRmZiUUhMNDsuJDhRLio+RyQ4UGxpMTNnJ0xPKXJSSkk0VEhrSG41Ky1T
VjNTYnUycTEpW19tLTNmPDVWcWNEMipZJWE5YmpkWCZrV0U9PycqWCdzVVVPQlNmZG0KJVJFSnEm
aSFHPmZYSSJWKmlKRVI/PzZJN3NaNk5MO04hQU8mcjR0ZTwwb0xXc0I4aWBGUD4nVkckVV4jJSpV
USNPRDdtOGw9OUIzQCNDVGwlJCM+VzVgQnJuaGFIWmArKVBcZlAKJSxvcT9rPkFBLEVKWk9QRUZg
P0IyIjhlcl1fL1NxYTcoanA6R0IoRkQ+SitYLFNOXU9zZCxoWk88Sj86WEpGTGppIklHKC0xdXI9
N1hQY2AuTS4qPmFBJyZiRFc5Pz0qQmAxb2UKJU4zPDJQLSVaW2grVzIjIzNLMHBpXTt1UXFaWTMy
REgpckYjWyVkRS4wUzI3a1VcWVNcVitMMTU6SyE9O3IlOElaRFxtR21Vc29nRyJQSGRsQUFjWCJH
UEVBTDJRLEchKyV1NmMKJSlBPWdEJT85Llg2cWlHUERiSlo8KjFkUytpYFFyYV1wXT5mJSsucWBO
aUZXXjJqPTs4RFg4WiFETkMibF1dWidrUFw3Tyktcz8iUC1pOVQ+OmYtcEFlXiZ1VDA9I21xNT1H
b18KJWQ5KC4uKnNaXjpKSTo6RT0nckYpLHQjV3E3NzpWcDYoYUM3U2cpS0wuQiZKRVcyM2deLiR1
ImBNQmtVNSZPVDp1PzEmYCRKUVZNVitePiM1J1BkW2lkVDsrXT45R2lAJWEkSnEKJTp0SSFNPUtW
MltXc1hnKGAuOUxNPD08MEgzPSdtdE4oT1thJCEuXktHYSI2VFU7KTozPzRfOSZHJXNiSF9BaV9n
JSVtSS0/UllvRlxsKmU+UDklN1ROMEhpKzspNk90PzFRSicKJUxiMCtiJzBEJjYzJiVwbjQwZUtF
akRaQTBXS0ctVzFxS2JkXmxaW0NfJjRpc1BWal1OY1FbZztUIkEkajFyJz0nakAoVWxIKmwiMDlM
J0g0OmUnY08nNzkpQVtGXS9AbF41JlcKJVVjJlFLQHEuOTssLk1TSEUyOCZAJlZUO2k7cl4rdFxT
N1g+VlRCPipgXlBcJCs6Q0olOUpkSlIuaWpqblx1PkRNP3FTOiwuPkJGUmpNbGs6QWQ0YEZWVio8
bkA6NChjJVFtXWcKJUprQkUwLF02R3NpTVwwYihNYVJeZzxyQ29dcmtWaTZiWC0nV1Q7MVQjMz4k
bFltNC5dN0ZtRyE3Y2IyKicoVV0nQU1gSCY8J19nNmNWbSo4XmVhX2xvSFdCWDI1Vyx0W247ci8K
JUtWRSJQJFtoO1RlM00jbkExVG0jIixlJ0UwOVZlY2psWVNSKXE1LSIxKzFLWidpPltJPTxzUUJc
MWlaMjNddDkqKSozT0pjazdCRSpIVy5ZQyxjaTc2ZlZWN29vQGoxblEnNT4KJVU/I0tQIzc+J2xQ
P1UwYiZrUD5lLERkWiFdWGolViplbTNCKGRmLic2UWxtN1heRlAtcjYjazdAQkJpdDRxXy5IQ3NY
cGFnXVk3SFU3dGVRIjtga3VgIywmSj4/PTg1JzFpSUoKJWY3W1hqNFVuPEQiQU1oQTtQNThRMSg8
aj5XW2VgKC5aUXVATURiPGFZRyYwNDhTbCdmLFwrVyhGWCJHMz9tXWg6bDJZWi1adGRmajslYDFM
PnNWNmZWRWNRciU7WzJKJyJzUmkKJWxhW3EyO1phQE1FQDxFUUI8VzgoPjMrRW05LF9bP1ptVixg
J0VwViwnUiFZU0lDdCh0cytTPWZnWjBuODEuQ2NNYlpwUFRhLzVPWlk5LUIsOmlhIVVGJF46QidU
NVhpUFAxLWYKJSRIWkBjLGpNKS1dLW9OSTM0KkQ8UCY7blpdTF9CQlBZSkpcL0tKTDllNTM+IVcv
WXBnSlpsWSskNTJVR2w6Qj8vRWhzLy9uTmRxaStCUF5CXEFxRGRtWkNxJG02STI1OyxPQmEKJTNR
aGRqQXVoYzsuKSFhQClEP251VzQ3aWwlPzhWNyxucXRpZWA4MkpkOEZlMzI6Ll9DWiQ2QVU9LmNc
PFtQP1dHLFFXbF4+YVVNZmU0P10oNkYiMG1uTkkxUiVUO0kyJEFDSmkKJVgoQSJrOnJbUGw0Q0xq
KCg9blYxI24nQEE7JnBFNXFYY2InVzQiP2ZRZT1bTDskLk1zSXU1J1M3OVVCQyYuLEdgLDcvNk4w
Qi0zTWFDNmtQLyVMO21GXkQ1XFc+IS05Kl9XPUoKJWtbR00kKTwlIiE/Lj4sbSFlJCcrTlY4V2I2
RDNDRWgnSmRsY0wtNVg4STxLWSQibSJSMShKM0U2XCYkayJHST1CVGcoN0JlaS5VPC5IMmpOKls4
MEBDTDgnbSNBKGhrXm5ER3IKJVo4ImRyX28nUkVKL1YvUmcqVTxpZFZnSGRXKERkbm9TNls/L19m
RTlEZUtJKix0aiRwTCJrUVxMcElhZllcamJQNilpcnBINl9oJDVpUUtlUXJPZWgxYmNDNjgrKllz
UTw9OG8KJT4laDk5T2daXjtNRm87PDRjUl5RQ3FVNGdac2JEWFtScW9MbCU4WTc1Qj1zWGk0KE0/
NklVSFA4N0hqLkQlKk1qTCkjTzBVQCw/QipgbGtsUjY2RDkmNWpRbF4tZGVeOkohKmEKJUs0Vzkt
THI7WCZJMVw0ajZEOE1BNm9yRlJYMGRET1tgIzQwOEpILUNhak5lSCdIQER1TXJnK19cQWkuZ2tu
QUksUCI9VGwjckE8MVo0V1NpXDZOJV5LOTVbUiQmVUgwOD5JLkkKJXA+VnI0a2FKIl4ob1dtVHFa
LiczRk1QZ3FEaChKXyFCXnRIRHEsWGYhNy1PWVw0QUJbYWw0NDk2OztoYEg4X00qI11rdSNcLCU0
X2AlOzwsbGA4OHFUUnFDOzYxP0txXGdyQVYKJSxTUDJLVU5FaF8ma2RJRkJwNy1kYEZGSDk9XjEj
NktvOzBiVENuMXRbLGlhP2pfNCdlNmpfRzRPazFlZy5QZ15DNXFMWF5qOiJEbUQ4ZCw1PzNxNUps
PVZFS2NlLVFDOGUtKT0KJV9BTytlWlUtLFkkMGQjLy41WFVeLGxAcztbQSQ7P0MuYVNnLnFERnVX
RDFPbzEhYEZhZShIYC5fQUFiKDdtaEpGMz4nRlFLRTgxVlBdQjFPOGE3NjI1V2g1IWdtLXJvN20+
JG0KJSVqdGAxNnRIJnAtUGpIXjxGRWUzI1NrR0cwX0EmT0YwWSIoUF1bZmo2ciQuOnI+aDppXTNb
JXNNLkVmMTxRKGhQaUM1MnNVXCJ0XiFgdGNzPkdFOSVOaS1MQVQoMFpjOW9MPGoKJUVXWmNXV2pb
YkskOks9OFJqUmg1TVQuYjk+L0YhbUpRJ1FWWjYzJUMkQDojUkBQayltL0FldUMjTTJFOWZkZERw
OTxQSkdpLiopMGFkVUA5L1JhcGMnW0VaRGNzQl9qQkxtcSkKJWFaV3Q3KkNuYSw4YWhadStYWShp
VGwvO2JVTkFHOjFwcXRzVFQ1IToxbms/NidtLyF1Vlg8XExMPF5gMm84WkBqRlVDTFxxKFJuTTBI
cTA4Yy1SOHI2RkddQEY6Sz1OPWZkNTcKJVZRJU9HUmw4X289LWUsKVAiKUcoKGFkRkAtSGoyWks2
MVhtaignKHJBL1AsVUY9XmNDZl9JIyordVFqTUpKRnEqX2JbNiRZMFs0clElL3VoT1IsZl0hSmEu
Rm1ZYGU8XipkJlQKJTRJNGg4N0EqJy0pVDBWSUU0SUFDRWchWG9eJ05NQjhROkFTLiNQaTVncVNT
N09dLjE/S3JjUEIzPz1yPFpVXEFKYSYqOipFP1c3MFAmKTpVX0tDWWtUTEY+VXFPTl8tNjpvLjIK
JURVb1s0R0UlWDVYSWRhPWo+U09kUEcjLCgtM0RIUyJBLVBXLUpPL0gsREBcJCMrZy1HLXI3P1Qy
dDMlPls6NkZRR0M7QypqOjA6MGN1Oko5PCU4bzFvNChtbSxsQG5abCYvXHUKJWZbWFJAN0cqMkk7
aFU7W0NoWFsvLW0yWiVTN1tuRWsrRSRGPWJJZidlPVVFYTo1aEFTX25dZHRaJzY3OSorU3UqJXQq
cFRfWnFMbT0wKy1kOnJvSTZac20oXTteI2olJFlKRD4KJUw3IV4zSCpMc2tPR2JuTUFXayVbOUQ1
MXEoVUYmQy5fJGQqKC9lSzRXWllqa0xOSkdAX3Q2UFw5alBJRWMhRFVOLSI3UWFGIl1QKUBgRDFy
JHBxWlBINXNVXE03OUUsQD9NJGMKJWAzVDMlPGxicDw0XkpkITgyP2M9UWluVWlZOmx0QUwnLUFH
SEYldUMkJm1ebGMsYzJzMXMmN3RKUDljTVZWRUliN2AucWtVaCsiPTFyPUdeV0ZBRS4rPGtiVCY6
SE5QJlVaTjIKJUpMaCxZOm05anFsNDteUEBRRHA4ODxwN2NdVlVAIyxtQ25vPihtaWc1bEJ0WUtI
VW1AI1tkWDRIInMwYmtUZEs0WGQvJSRCXCQvSlp0SlZcUnJ0QmVxQkszU1JcPGwkZF9RXVgKJWs8
Y01wW1RVM21LP01nTEI1KkAzMXNDc0stMGpcJlhgXUFJPkEnLk1VJ0ghR1tVIm0mTUAnWV89RCE/
T08qXkBxREQvMFMjVTpwajAmbVFnITJIOzNxYC5RdUonV3JDOVdnRy4KJUVFMmQpYGxbIz0nZmZ0
OmpOLjErYCk2LkQ/WStzSEUzQ1orWlJjLzdIPm5XOzVjMUFEUD5zcF5kXUxXRytbXnNiK2wrbWdj
NUVZSDlKbl9oM0UsMTVWJmFMKlVONk4/M3FqRTQKJXEkJzk6K1gpJzwtRWc1XCVAWzlCWiRIWTIy
aUVBbT9OU0pjJlhhZDdQWU4+Mixbay1yKVMoYSNfPClgQmEya2dyQ001YVZAUyp1KEA6Q1ZyP0s9
YU9caEszb2BNJyo4MTxSJE4KJWImJjctNSheLlBTdXQvLi8+M0BJUlxmRF9yMGFrZC05OyZjZHBc
IVxeUC4tMDYna0QlKGhAPmU2QWBuT0YkTihvY2hATyQ9ZXNfUDBxPDQibj1rdGxpYj43IyEtQjlB
ZFJjUnIKJV5fRmhVMzlFPl4kLU9UXmJ0b3U9QW49XWZObG5DWikrWEJaNSJNKT5BZj1yZSkvL1Mr
SGs/T0ZkPTtRdGQ2WkclT3EuOVE4dGQ3YiZtZ2ZjW2ppcF1iTCQkVWJScFYrUkM1NCQKJXJpbVts
JVo9VlArcUAoUFJAKkZbOmY8SGdIJjU5R0BYL28mVnFCSEFoXEt1UVdcPWIuOEM9ZThdODVdOixg
PnM3b0hWSUMja28kXEo/QW5pZHA2VGMvTnFLOlNRcUFpWkxoUFkKJTlXJEFsOT4/ZGhNclchRSMx
TEtSXS0xKkZiIllJZTNtKic0KU8iXGQwXydpJzhGQmlKZClUJyxtTDZGSSkwMlU2ST9RWz87RTtY
PjVMW2hAX14waSZhRS9RR0VdKHBZOUAtPTMKJVZAdT4nXTBobSIiSE9iVU9Xcjk5LC9aUl1LSWgi
XWsvX1pBYURxbCQ4Ty9TLzhaKGZGUEYlMDZPUjpLRWpTajImV0hSOFBQTmMsM2phcSE6JjJzdERh
MzRYLy1SJmNQRWc+KCsKJVMta00rVTAtZyMzQG1CMCMiZWAiZTlvPSdraiokWjFNVk4lLSQiOWtE
M0deXlU6VjtDYjQmSChPSW9ZSWE8QEdpaUJrUnQ+c0skOTRWQUU5KCdycCUrX2VnYUkkI2ptXDVX
bjIKJUM4aV5AYEF1NTZNcHQtUD48IW1aZWlidENoJmwyNyYqNC1lRjxWPzU/T1kqJ0hQbkVQZGh0
cyNjQVVuPi5dKE82MC1GXDYmOEVoKEZMdSdUKEUybkIiJFM2KHBlNWFeL0Bjb0UKJT4uPms5O0w5
Jm8kS0twJW5wN2AoUFdQPFUvXylsSGcjUWZqZGRGKFRtak9sWGhlOSIhU2M7OUA5NlBZISJNQjJW
aihTZ2dfZmUpLjRIJm9xVD1eMjs/QDNJLTR1a0F1L00tJVEKJXIrRDc4OyRcOGwpXW5vaUZmXHFH
NG4kUWlqTD07ZytSTiVIUERBVChhNG02QFQ6PmsrOUJwVCFhKVlJWFwnNGJrWCwhNURSMD1eUyc/
aEMqYT1WdTU3WkhudG9fNEJDbictWFgKJWBXTUFCczFJRT5hOEM0KmhnK2hTZEJvIkxfJTk8Qzgu
KCJSbj0+J0w2TVxvMlpwOz41bkxOXS87U1kmVFI/Njg2Y3QzVD82RXRkb11JISZRMWlNV1FxMDBd
Im5iZyNfSFJGOloKJWMhUyRmLmtHOm8lQVlSVm83Zk1zTDFRWyRkLD1AcjE0UnU/IldeNVAvNzYh
SG1RMjVkXDgxJWRyKjpYVi4iUURgZzlHL2hKLisodCRPNGdLKHA9MCohdUo9bF1tcVk8R2pocWwK
JS5tIk5LPi8nNDdqYm0wc14laFJBTV1dUzpeLlROcTBEcEppR1FgMFctIiooc1xSPSknQ2xsSFZM
MmhDYDBrXmt1I2dPRnBRNUtfbyEhMlo7XWg5RFFcYEJrQ1FUaHVxbCFzSD4KJT8qakNRW0ovU1Nw
XUdgQlVHKU5JSVI1Mj9IPVlSU0xbUU5zKCIwXSM5WWZCY2BwM1VZJTJ1U1guZzlsSDprMT4tJE5Q
MnBpYC1xSmdyWSdII2pXU0ZDWkBlKjNaaFw6bClidHEKJSs2aEdTQkxnW0xiSVxrNkRxQUxCW20l
Qk5iOUolc0JaITNcZWllaSlrI1VDOC1wXDNiNjVNPXUhJF9tU2BXSD1UVUUzLUIhMWUjMTFoYSpr
WVRtTm8hTVdSNyFYaUJNb0tHLFQKJT42a2M/XzkrQD1mQT5ucl4iW3NfS3RoTG9ZXlQkTnJaOnVU
XmQhSG8vJWlvKF5zZC8yXSljI2NGUDRaaFFgVlcrKzIpU0FvRUI7XHBSTVwjLFZVSmozM3JUPT1o
JDslTGpLTnUKJW05JG1wbVI9P0ZEZEJtViZXQytLZyJyKiJPWWUyUCEtMWxCaUxZO1NdZlM6RGg3
O2UsPyg0cGBoUWgxSlFCYjZePVtjXXFwWzY5Vmk9WFE/OER0Oy5mM1dPLVtIP29GZ2k/TlYKJWMj
O3RMMFFbIUUkIVZRNlo6LUluMElkb0dSb25CcFIxKCllZ3VBMDFdOG80MFhUP1RKcF1JUSFQRG5H
JCtdckRII2BzQk0mL1syaVojcyZCZjtjTz9YN2dcblt0IS5cSUZgZk8KJW0rKFNnOERhTyNsRnEt
MF81IlxMWC5dJS85ZTNca11KbmdSVjY8XDwkR2FPLlhhYjotaSFFZCQ5SnUxJEtlaXJmJTM4K1Nt
J1lwYGcyXj1SZkZiPnUhI2ZxYWI2NDE6QHQ4OSsKJUUxWV5uJyNyVlpddWc8Vis8NUZpNU9ANlZg
NGBxQ11EbmkwXnAyRGBXLydKaUNrVCgmQ0RCTUgvRkpObXBNVEg8MEAoOFlpPyYlVGJUVUAiaSRw
OSVDal5sM2wjIyciSi4kJlgKJSEnNDRBZlBAXz0jZGdGaCcvamEqPEowXkliSjdyXzZGWy9zIXNt
MWUoYy4qamBdayJzRScvQzdHa2czUm5BMTsjSUwjLFxzK1lRbWRuXXRcXT1gYS0jWjJKIWlGZ0tm
PERfTz4KJUpBW2duYjpOcmVLdSFhYCIxbTs4Si45KyQ/QGBvRE1jYGEkXzAhTSIiNDFHQiFHR2Js
T1JELlEiI2ImU0dSbS5II2hFVzNCYC8oY25fYUcmal0pMClDJXVScWtZMTdMNUc8SWQKJSEvKE09
bStgLT9wMyw8dVd0LjFvJFVNQ2s2LFI7Q2RLa1UvYmFrPSFcbmJASWZgPmdpajpsOC9YTVJYM2A5
IUpEYjZPZ1swR0UiSVRTI2hyPE86OV9vYUJCcDdZO2xTODczOGcKJTwqNlgySydqPmpbSCFzZTQh
JmhgXix0aW1zJCFkTEA0MzRWKyQpXjIjSDNSLUZXbWFDWzAkQ0gzYG88R0RhXEY5SVwkWmpMS1Q9
OiYpLjVxK1dTIV9CNF80PF8sIzpmP0FFdCcKJWFOQz5FOVhbLjUhTUJIdTlMJlMwYm0jcCJjQVch
MDBaPCZ0ISJnVzZOV0N1YGRGdF5XSTEjRXVyQTRzPUZGXmlOJWcuSXFjbUdDamU6W2kmJEdlVic0
cFNNLFlZN19dQio5Pz0KJShFY08+VlAiTmcuajRmaypMcCZfLlBhVFFWO0wrPz1dNFE2PTo0QCIz
LWdoKjFzZENYSDhzaTJnQWlnLDYqaTdUMWlNIk0xXjMpaE9KIkQuJU1KcDAjam9PSl9EVk5IIzhA
XEQKJXIrODpMIyhVcjIhPE5GOCQ/ZE0tNFpKKEVSLWBPW2JKXExvOE5NZzROW0FSN1RyVGctTSw4
QD5KISRyPCYmUUNycXU/bzVjSygrSnFcOmoxK2dfNTIlVUtCPjdsNmxFJTAoay8KJVRII0dzQ3I7
UD4yQDBcKEhSRDYjXyJlMk1BSUcxREJGOis2UXNULjJcZHRZWSg/YztNPzJ0NkYhYDFSQVIhYDNB
aE5hJilOKzVQJkArPytDQCx1JWM/blprS2JWQis3L0FccXAKJSFWUzlCJU84YiwsImRCZGNtZDBk
OmlFI2ZeY1RAPEBAVCUqIyhDRG0hLmZnPyE8RWw9KU0rYm5gbyg1K0UiTWQpbkE5L2RZU3BTb3E2
JCk9bTBVV2QtRVRyaW5AV2dDZHAmbHMKJT42I0hOQTlbRnMlMC9eSi1acDZDKEJGcGNKOSUjPytK
Y2RSX0ZYYlFLUS9XNTVCJE87IUpFcTguQDtpVzJjcjkhJ0khMCMiOyEhOTJJJldNTFBIM0VpU2g9
Q2FPWjNbTTZYWFEKJSVYbiw5JW0sO2koW2VuXlFtSGVxOlRyXiFcZVFKTyRzcyMoMnVzOyU2PWpP
XGtVN1xWcSdfYTpKL25kTl9vbEVmMFEvWjo0S0lUTSIqZ0xOTy9HXWEvUVNZWCZZWklXPUR1MHIK
JTNrXUM+RzhicChRNig9KzllMUtCLF5XQTY6IlsrbyZhZThXJDtfWSQ2PTpWYTVPMSorPlpoUlti
VD8qUFZBTiJpKVpbZzdeY0siS1V0SlRsWmZzLWtoRykoNG8kTXRtUkBaS3AKJUQ0aj1uaithTV1U
RTVCcFRhUDhAcTVmPmprXUhzWmFULTMlISE8QjlNJzZiKFF0ZzArbi1VPDRTTj4sUi1fWUo9UjAo
RVRjVzlsNFo1VzNNWiZxTidROVZONypxVHIzIXBUbVIKJSRZTVdYYk8wLzhSOyQ6OyYqViJUZTMv
VG0pPVVeMyJRLVkrOUhGKFpLPj49LDFma2oyKnRgRipHVkVbRy83Q3FuI1pAaFNnUCNjZmFWcTYk
b1ouJSc9ViNvWlJbR2ljRCNuPVAKJWs0clFPQl9iXXAiYyhdZk5yLSE/OjtnK2MlbyokKXJTMG5e
JCMwJ28tJUozZ2pRPjZDX1NGNi8iN2tnaSUvQSZqK0ZudGMxTSgwcSJqZyhDXj5uazc9W3BURE1W
LXNTXS9NKikKJTh0UUxSZCxaaWNTUT5Lc0VdQllga3JQdCdgYmlSLG1GYHRqOzZSOHBjbyMvayNY
RHA0KmhOYHAxR29uaWVmbCU4Q1ZfJV04RzpMNVl0JGBKTyY/MVAiP2I2V2kvbk5RRWMwa0IKJXEo
Yks6JTYpUUhQLik6IlhnWTtVai9RUGZALW5GYTZJV0BWSkE8PG1GTy0qYm00Z0VHU3FEIi9wTkZf
Y21wUEdLUUtDTl9nYG0/JGZlXjs+ZDdXcW9FJW1UVUJESjlxKjRVUV4KJVkibEtQJT8udVBvQFBf
UyQocl4oJDZPI1BSOnVpX19eZ0cpSk9zPl01Q0hXWUY9RjY4ZCkjS2tuMW8kcSszPk9iXkxuN2BG
N0EhU2hcOyQoSXVsOVxGSylgUm1OIlEnUks4cUkKJVI6dXUjbj1uKyI6JCstVk0sbDtOK0RTWSg4
Y1dwP2pyJDQxLHJQSixUJWZbZ1EtPlI6NGRHak5OLXBWWk1adXBkbE0wO09SKStDVyhQaUspLWw+
M1hBbkQwUk1WM2AtVEpeREgKJVxTLSVjaWovVzgibF1XKmM5L21WTDJQVlw6WHRMam9EQS90cSop
MipKK0FKaCZmP25sJjJUQHAuQG1LX14mbF46bm4/J0ksVmxaVzFCZklcPmxuYUxWXmFDPGNuXVZb
RjwkW00KJTgmb1ZkKG5YZ2xiaSU3UmlaXSJvQkRCPVwrSkVEVSojW0FRX1hQQS8iI2UvJkYjUj8k
ZkEobD0iJTVSak1QPjI3YCFjI0BcYVYzVEFkNUsoUjo0YSw0JDM2UlQuajVsWUNaS1MKJTlaLERL
K2BxPC5APCk/OytaT01sYi5jVXJiUydEIUU6ajNpIW8hWStJN24hblpOPzUsb0lALktLYFcuTSoq
P2E2XipZKzAwMSsqTCtRcihraCM1ZkZBaSI9U2E9cDg3KlxmU1oKJSIrWGVbN19vdTRMZSgjUGtX
KFUpISpMWXRaIiptPmxgXmchKS40JmcrJ1Q1XShYXSZ1QlIjLSErXlZhNjBVZ2gsSGZlWz01cGYz
XVc5M05fJEhWSVQhWUI7V24uUlxpOmo/QlYKJUAyMzAlZy01aSJVL0ttZjZxQ2AqIVpGMG47RG0r
WXA9ZC0wU0pKMWtySmVmLkosL04kcTNQSUxQRC46WUY2aG9JJiUzVl0ja0pKMlNRaClaaGc3R0hm
a3M8MVQqQ0Q9VSdoZDEKJSQ0NWdnYjg4ISchQWFwXTBVUCM1JDM+WkJeX28/QWxENiwyXCkyXWg/
XUlaLnI0ayVRXUZnRzJKMCNeQkUmQC5qS19eRSNpOFUuLCNsKVZHJEZfVDMpVnNsb1NlbEA4IyY0
Pm4KJVciQlh0cEdJSi4nSV40K0hUIzVaU1hOMEhxS3VXOTVucSFLJUxBXjNFLHJYN2pqTyNzNGQi
O005cWdzViImUTY6TCJkbGEjPnRYZGdaJmgiJC5oaW5kPlhudC1qUl5CWyFxSlkKJSZSV3UtZkkv
TjJjTyxSXi4rOEE1Qz89Qig9OWRzbFBRQyEjQ3FwMmw+L0RdZzlYJUpmOWo1OUVgSzhRLkhuPVBP
PEh0ITUycl1hdFI+WDwlJm9VPiktXUkkMzxKJGpFIyZWQisKJWwqOF9EbjkjPU8tYzJaNk4lZyNt
TyJUPVU2ZmY/UzBsUHA4PEo3OlpJMiVKYldeLl5PKlUidFReLG0sYklQcl5eUyJaPHQmUmhEN2dr
QEdmQWU2QkNqLnNAV1Yna0xqcTpnMlQKJV44PVY8KypyTFBUTFw1P09kL1JKJSZDakswU0w6Uy9I
Qm8qKEczRWkrJ1E5XFs9aScmWXNlOVIydU8sdDRHN2tgOTsoMj5nOTxxWCskZUNkMVE4RVBMMVhr
JFpkZihcOnVKZm0KJWlRQFldS3BLN1cxbjIkaytuUS9VOk1XUE0tQGNPWlo0WHJfWycqJFJOUCYr
cW1WWElWNFk8OE4oOHFzdTtvPXUpYTtCaUBQWXArLnMzOmsqayklTD5Ibzk/QStgcytQXipSP0cK
JTxiRHRnLyYoV0VTJEcubUA/Mz5OSj88M3EiU1BsR1Z1Qj9qPnJzYWNuXiJTaF1tN0cxKzEqczdd
W21KO1wnNCM5WUpVaEpnbF1GWmMoXiVWay50W29yXDpKSVsjK1ZNazpdckkKJUU3KEUhTVxlJE1a
dUE8YFAnLzBILzcsdSJsKCojTWxXQVcxaDAxT0BcODE+PDMmSU9JbF0pYFRncURePVEpZHFNbkJi
QV1IL3UoPDVCLExDKlBic2ZCSWZMLnEkU3NJR1ZvMGkKJXFUZD5rZ01HWDBnOWc6MWVMVSZtMXIk
I2hMLV0/LVopLFdhSzZaUk5oUyk/SU5aP2NmLWVRJChKSkNmbzJAS3VJVGYsUlBwVEQwNFomJ2lT
JWlkY1ZpcS9FTm8jNmFTN1BFcXMKJW0sKVpePy87Y3FJVFVnPD5IaGVfU11kPWc1czJTJlpcZCk3
azlvLVsqV1BkL3JuMEJpblU5OnQvWShKLG9vWFRlL2tAcERhOENXYW1eZGMscj07JlM+RGxFZzNG
OCorYCZsaU8KJW1YUDBbbXNqUmgrOExhcD9bZz1aZipfSm1dKTFvITMlbUZsS2hJYzlIPihSQzAm
KUcrLkxtT0AzLWtAKjdNQSNWblhvZzFWZ25eJ3MzYk9FcW9mJC5DRUlyL3A+aGY+JXMwP3AKJTIp
VDxnUDMmX2RnWjdlWEdCaSRbOWd1JFpTXDNQL2AlVi81WUM/SmNyOyhwQWJxSydXczcrdXJyTmle
K2tdW0wvRlsqTmRFQUkhLGJaZjJAV2xNUGM1QyElWE9gSEdZaWgrIU8KJT9pSFBoQTJOaTJeXGQt
bXAlczcuN3Q6KkVzNVheKm9HYUZ1bksvVC5AUiJtV1x1PmorbVhBaXVwN2lFU2hzcFs2cilaLFI1
JXJuJjVPbUlSQTJXblRKK2hGLXByaV4vPStDMC0KJUosTDE1b05UKiRaJDIzS28sYk1WYW8jKzpH
SFAzMiprWEsuIiRjXys6T19sbF9HcEhVSixKKU9iJiQnVV5cNSg9SilfSnRwTk0sXU5QRzpxMEUn
VSNeXGRoal80I15MOiFIblAKJW0vRHUrOktOIyciLyNBNFlDLTxXZUFAUk9rTzJ0Pm5dZWlLcW5O
LjlyLVwqTz9pPUAzal8rUztqUVpJMFBwcE86XCNBPlxYaD5QTlQtKydOMEU6bGVJSk5WVi5mU2Vc
Wl5eYSkKJW9fSE9XcjRpOiROUEdEaT9pVGJGWSs9PVhyazpIZztnNkdpSS5walMqUEQkWDBFOFhr
czNgImZmakVaOnI5YCtNVCVTZEdUVy50XyNRT0pbaHU9XTVwMm1JT0NTPkhXcm8xTzMKJUVWYz0p
XzJuTUwwRTBwO3I2PGl1S2lVcDhjVGg/YD9pQUFqcnFpJl1CdGVmXyIzVis1NjtvQlVGY1ZOcW8t
YVJ1biwyRy1JLGpaaSRYJDFMOiZYYVdJLjxPZW9SNF1tWShtVzAKJVEpO18oaWhvITdWKkciNjJs
Q0k4SmFmLllvPW9lUFpRSyFVLERXRDEvb1BFPFRfdHJncjdbTTtyK2dmPjxfTGZoaHU7PHRybDop
NUdiczpoaUAkVldRb0BqWFZTO3AiRys4QCoKJVlQbjlgNTU0U3M7Kk8sK10jJ1BzZmYmTlIwRTlg
UklWQVloVDFkT1pROylmK1g4aGNVSUleYEcjSyU3ZWNhS2dmcHE4InJUM1lCYGtXW2F1MT0xR005
a0Mqc0NuJlI0cFhxRTgKJWQvRD4iIlBtX0ZlREw4WW1oISguOEZlPSQ2P043WE5ycXNLVSY0Mkch
ZCZvbkFcYFpacU9AP0hRVHROIW1pRl1VZyxaS1ZNS1s6W1doanBiWElFdF0zTzxOT0NaaXBRV1RL
YHIKJTwwV2tlUVxuKy5ZXUtlSl9wMHQ+UjsnOikmXnUvOWtQWGlkSipHImdHPCEhT2JfYnJdP2k9
N2lFVV9hdWtPWnMlczQmM0BuV3VLPCIuJ085SFsnaEkuSGZvWyhJLmNMaHRqK18KJXBfdTJSZ1xI
NmFEdV1bNz9ocXJScjl0NVUqLmZ0OkZxMStoTyclMC1eQWpKRXI1UComa1lNNlZYNms/N3IobUBe
R0pFWTk/TWBNTyNJZ091Pi4iNGRZNVMxYkhJJEIhckpmXzIKJWgoTzJlXiFFJG9rRFIrUysrRiZW
XkBVJVY+bVRAL2InbElXQyFXIU4oTE1MVTFTPVdyY2klJG9UP21LJEIwWiJBI0NpdVNqUipdbTNI
KmtecFQ0aSIkZUpeNGNSVSs5cGVRPTQKJV4hQksmcjFnR1BFT0wnQj1GLzo8PkJjXGZlQHQ4Rz9b
ViVLZ0FTXFVgVUBhKVtwXVMpPGovb1NwSFAmZlxNQDNnWy5jIzE+b0BAZiJYKCJeaWRuTEFLMEIp
QiskRE8vbWNXNk0KJV9wKkNoNDZBMy1GMmRFN2Vua0tGaGZXNS5TSWE0azEyXWQwYmFaQFxCdWBD
MjRuLnFjTzJDKGJHITkkdU9jNFcsWydhbENkXyJHV1w3LWVXYmhGSCFpOEY7dXBYa21dWkw+SGcK
JThpX09NPCZhPi9CKTlWY2pUXitgWU1XXC5gamRIPDpFSlI2MCg8J2hcL2FoPUYkOFFSXEBqcCNx
OGoqdWlrTSxQa3QoY18jNEVVZlQ/bFZoZ14+LDgwZThtc0JjLEpRT3AoLmQKJTQ/SzskQDlWR249
LFw6aj8mWW87MEEwXmtYdCc1U11Fbk1lVEE9Wz9gdDQhNCk6KjI5OV8vbSpmUSU9VygtPDpkTHRl
RXRAJyFBY0hSMCVqNnJBJVwwVicibkk4JDdbYWpIYmMKJU4jc0c3I0ZIM28rc19tWSNObSJoXEZa
cz85IVRsXiIzRyROV1BXNCpiXlo0QTlXVDpsXzFmLUwwaUdDY1RvSG5bPmAqW2VSbT5NLS8qc15X
XHEyM2tbMiovUSVCODthYUNHX1QKJUQ+QUA+SDdjRiRZNkRYcDlXKzgyPydNVkYlNFNPdWBnaCcv
VGJEJEVUST5pXzxMX0hKRSExO1kvVi9VNkJCNHJFbWExWyZlPmwwYjA3OzwxLFMtK1E+KyxuI1U/
MjMlO2NmK2IKJVAzMDhBSjBxNy9BXFExUFEiXTJVJnQnVDQoazo2Um9mIVhUUz44T1xATFxLaidJ
OUJWOWk6al4nXHReTDAxXC51UEc4JnIsJktnV0N0IkNuOEQ0QjVVQWkuVkQrPXBRZHVHa0oKJWc1
MXVlcmwtLiE3Z2BNTU5bWltMIkUlLGVoY00va0JTISlLLFIkYiRAOGJWND1KVy1TTj1DWTtpc0db
XW5LXk9BcHRCV2QxYiVvbUpeWVAjLVJrNCVbKG9AXztOTzlZW0A+X3QKJS4/XGtdXmBZc0E/aW9P
cTxbN1U+TVJYWlkwU2wsSDhEOG9gVUFpLlZEJ3VAQCdyY2QxVFBuUnA4TlJiQTNSZ0dtWiosI2ct
LVc7dF01YWpoJi5nRSEwZmhATlE/PTFoZjdPWWcKJSF0UDtQJzUjJT84QWMmZjdLJjM3JEIjVERZ
YUMiRnBUMTBoXlRnPSxjKVBcQTxSQHRqZSlOX0xBQFdFSSVgP1hiJjVvLHViPU43SmI1SUxNajxY
JCY6SiFVRzxlUkZRLiUyOmkKJTUpO3F1NVVkIjZmUTheTF1BayM1QytoUlwuOCw0LFZuKjgiR19n
RSVxVkMoI0FVSm00SSg6ajIqY2pGbT89bycrR1Q+TW1pW2dnJC1zVmBcXyY6ZWEuIVBnQU08VU1q
L2ZhIzkKJWl1Q1tiLTpTZG0sWldIP3E0SG5QO2hePlpFQThiYUN0WHQkYSMobDJsbixpUEYmVCUl
VmcrbDZcZDpwLDxwTG5eRlIxIkxZLTdHPVxYWGBsJWQsSEdvPkA/Qm8jKWtJSGwtP10KJV8oTTMq
TFA3MFI+bmNgPXM0aHQzVVdeWHM9RW1xWFVmLUlYMGY5MU1fNlElJUotVUVZbnE8ciRKQThXQmE3
cWUuPVVYIi07MzpHcC08LVdMTnJgajhecmghKTw1NHBeTyNqWkIKJWlqU148NCwwcEJSJCImYmA9
LE0wWUA3LCtVLCpDODMwKjcyMk1VSHBuOCsmJSkwOiZRXXB1S0htcDlWRnE4YzpdSGBYcGFwODVr
LzZxMC81LGM/KlVLa3NWPENkY04iOGQoRi8KJTc2MnRtUyRcVjYnUCdqL08+Y3RLV25mTGZZdSo8
PWJ0akVRKS8tVWEhQGNRNjRsaWAtWFVEI11gUVYiXVlnYU1hJ0dHUScxLFdHWWBnRixQb2VDJEVL
YFRbKmJbbDxwVG5Ab1kKJSxwXV0/PWBoTXVmKydyMzJwPGJwXzMtQy8vdERfXS1EO19QKisxVTpO
LEs0NyQwUSNdSlFvYWoiXFBpSDo2MG9jR2ZFXDM9W1o0IlA6QUgpTGhwZWA4X0BrRVllPzBGOixj
aGIKJV83UExER1VcVEhxYnB1WWNsI0BPWy01TERqPV1OT0ImUj5qcWIxYTQmX0gjLFZpMTBSTF9H
NUEiU3VyS1wpNDM3RHNAPTcsZTgwOzNndDNbZkFCNWpeIy1KX1ZYPT4qZjVHMzcKJVtzUG1HcV5x
KC9RTTpCNSVIKUV1W3NOVmIjQjc8TD5MTkJfPy9QKUtnVT9pcHEjVDk0aElmIjBnVihxZGhWMmMz
TDVeRCQiLjdvM2dbQkhHZW4tbCdSPEcuU2c/Y2dBXSlnNzsKJUs5ZTdaZlZkTFpKKzwpVm1wMk1D
cTlOcXByZldRNEc5Jm0yNUhsUUI8TydKRlpAKGM2IzwyJmlIMDVHMzxRU1FqN3JvLUJaM2kpSzRG
SEw7bCFNMHVmakIhJ1UsPmwvODxvc2oKJSMwcklpblhmTFJmWi9MKz0kSEhKOT1sOCtoV0hJYylF
Ql1wcjYwNk5HZ0Ngb3BYNyoycDx0LF5Zbik2R0ErKGxARDdGZEU1ckEmNWdsPWJCTlpDPypqVzNm
OHMlQlEwcU9ydGEKJXA1cEdEa3U0NVNnZTV0YzNJVS5hTjl1KDUuR0lQUFlyRl07XWlhQitURDdK
Q0ktPU0kczg4ZERsbEZFQzNXSXAtcmAlRXBxIWw4WkgjOy5yQzUwMEVhN209QU0tIzJobzpMTyIK
JT8pPkY0SS87PUxJPFBBKUxHa2BScTFFPCQ+NFVVLkosTV8+bW9BLE9RJXNrXVBPQm9MaGtyXjs/
JihAVzNXSlpua0xabj5aPmZLUEhmJG8oaGprXlhIaEUuKVJAOjE6SUlOSSMKJVVxY0VBYj41PEVH
NWxRXFleNjloOyMkZU1ITC8/XF1BRSkzNCpFM2o2ZWFgb0EmJU8kO1xxW2ZfNlhZKlRyKGpEUigl
VkZuXEZFN24rayVHKWpkTDNnQEFQPHI1RGFATi4nY2IKJVg1NyY+QGRiNzpZP0pVP3MzXU5xb0xn
OGdyU1cnL1IzYGVGVDckVG1oWWlAPkdkVl9ncjpkbCdxWS1SJzMmWWFUSFtiREdePzxAZW4sQ2cw
IjAyaVJiXl0tS2E4WVpgSS0qQmYKJWk0bzZXYkhKU1hgU15bNj01WCY3cFYtUyVKLGV1QF5bZk9u
SUslLWBSZSdRI0orWDJhb1ZsM3NtNFozUWIwKiRIaHInM1Q+MUhsJ2o5KjBsXz0pWD1rP243XG80
TWpCSWMyKlcKJWtOTiNycV40VXFXWDBnSGkyPkkyXEZTW2A+aEpbXjpqX1RbQjAlLFpUXyQjTlk/
XV9tcTgsZW40LWxuV0FFL3I5XjBFLVRjLUQqUiVfcWwjKEsqK1RwdGV0RSE5cywrVDZmKiIKJV4m
STliSWVqMz4kQGBLbjRUNCo/WE8taT9eJCJuLVVhRTIrSFtrIzs/M2Y4MWwkWTMuTT8kJzpwRkVH
XXJnPWliOTEvamtodEZjY0lmKFFXbyghJUJzKy1NJEpxKTIqWVBlMUEKJXJuPlxfYlMmO2crNT9Q
XmNxcERpS0hMIXRyW3NGYmtrLElZMGhdR0JIXzhGazIxOjUiaHUzRHBXQ19dMWJPIVhmXiVac0pU
RCppblEma24xWVBkUk1OaV09Y0gwJkUyLXA8KEsKJUhbaDhFbXNZNGtzNHFWbWlXJm8lKzc8RXNZ
LjxiSWk6KmA7Y2ZVakpxVUU7OHJaIi5Ib3UiUTtmNzNjIV1yVT5RZTJaPkBEcGFHNF1SYnVwJFYu
Y0xcZHBhaldyIWsyYSdTWkEKJURnXlU+NTVPWCtNc0lHRlFgTkBJY1RndSFZSW0jKTBxWUdfTVZb
YiFOUEc9RkosJCklcHVhYCpmQD0tP2l1T2NoYXJjLSdLODQkYWJlKlZqTFZFPyhKLEk/KkljV3Bl
bVMtV2AKJUgyJHJaaTYsTC1dMEg5ZFNXZj9zVSE0Q2duPFtNXmIoays3bVRAMTokR1E+XF5ca3Ms
JUQ9RSRGNzAhNnEiIVRzOmxEbUIucHIraHFwMkNES0JPb1lRP0FkR25IODU+RDJQPU4KJV9dTz90
PmomQnMsITE7LVJONmY8RWQsYyQoJG4mMUdpdE4sb0FBIWMhSjVPQW8jOjRjJEBscUdjNjFtJits
NiRsPTQzQ1NpUTRSTF8/TihnWW9nbjBJNSoqW0Z1YG5qPk01XSYKJU5BWmtzZ2IxLU9nVThEKFxF
RVsoPGVzbF4iLidZPkBnMUElcE5JJUdVNVI1MVdELSYuQ2U2XD8ja1FsNGttMltabnVLYilFb3Ih
L2wtJ1ZwTTdKVFBDVUpXPy4pbzZMbitfbmsKJTRZYzFdJXVzPG9CTyJrZmx1NV5AYmtnXTIxJ09o
Yz0jPGlAckJxNWshbmgzLzcmWDZpX2FoYkFiZTRKWTEmN3JUM15nOCZANXByKEBuTyNtYEVGP3U2
KiQxR2dQXi9GJjNQbFgKJWZOTzcsPkZVZjBdWUxmLkQ6dC9BKERBYV0zYSsjMUNvRiVHRWw1bU5b
OEEzKzpMZkgsPFBnaiQvby5QbjBROjEyTjJfX0YyTChIcTJVOCpEWytNOC4jPnUuOlUvTSNyXjVn
MHMKJSpCXVNZUDIpcE87LSlRI21MR3U0ZkRbTDVYcVYpUzFdViRGLjNnQVlIXCEjQz1VcUAiYClb
JWxWQVZYKUNvLjUtLVlnMUQ0KjlJVlxTWnM0ZV48c0w+aid0VzRiXmM2ZlpEOSgKJSpwXVxeQFVy
SkwobU1jcDgvPWpVQDxTQTFvNSRHdGchYUNzOU51TkYjQ29PJUFtLmFgS3BbXVpabEE7TWgqME43
ZiFVcVxhcDg9cG91MVVnYE1RKSc8YjNvTDghJz4jZTdfZCcKJSc9XnJCMlUpN3RuQy9lbkpAMWho
W0Q+bVUvQk5pNy5yLSJcR0lvSiMrWjNvJlNaUnRsUCROak5jbEJDXVUnJ1E7WXRGZ09tImJ1IydP
PXVuMVBQSVJOYXEsRSxpVTJsOTlcbC4KJV5zSS9BX0YtbixjWiFIb1Ira2k5WEd1OyRrMCpTRGBH
MDpgM1hnWS5xSDhoI3A/IyNFZEVDKTxwckksN0M3X0cpOSYiQihETDxYUCZncS9DJ05JKlJcaU1t
T0lYcStnOSk0PiEKJTw7ITQ/OVFFVldlNVY+QkkhZkFcQzspTkFnQ0NZZDohbUBHWG1dR1E5YG9c
TVBbUyxuVyNkZ21RJFokbFYoOSptNj9SQmBXKVV1aTYwbXFAZ2NBRUtQLF5vRUFebEJpXDMsQFsK
JTxzdDlvX3BHdW1AL08pSGEvUSVJXE0tJz1vczhxWmNVK0ktV1RKYSE8aiN0bFJANmFfZGwwOWxn
cFNaYGJucFs2PikuI1QoaU5XWyRmR1kpOGFtTj8lPVMnUFBWOWpQNl8uTi8KJSMyP1h0KzI+T0Ro
Z2lHYiVjJGk+JCRRXG1XRlBfZjBWbExWR2VRTGxfbjElMS05Qz1aQm9fKykqNiheQWwiJ0I3XmBM
PDM9dSxGOzlgZDx1PG4mam9rQ1ImSCNdPF81MF0wczIKJU9MOVJIRCJTS3QjLmxJYkFxXWxkU0du
O1JZcTc+PmxMZiVTKDFScjZlaThJLjRVT2s3bWAoOS5kZyhNKVdhUjBoKWBvKTVBQGxoXmdjJ0Mx
RVw+XyRjbkgwOi1gZnFjSFhrK0gKJUY5IjAsL3VoRF9jWitAXFQmSFZbXUZBQSIhXmUkIVw1ayox
M2pwJVg9J2Q8KCYpW1tTcEQobEhAclBGLiMiJk1nREhkVypNY1lpNUdbJUAqTGpjJEUhIk4/OSxU
JixyXCxyIjcKJVUxJXUoLlUwNWMpUCJtZFAwTkttMkw9KUhZRGo8XDJnQlRQL21ZSWwtUC1ZVDJV
UyRBMDZHYy5nVyRSWDdeRS9GX3QtZyUnLicqNDFfMSU6PExHZ0wvNiUvMSovOTVTWV9GTmAKJXEs
NEIlRWlDc0ptJypBa0BqUloiWFc8R2tnczs2NWBwcmVgKjQ7MGdkL2RhdT0hb2FnQDE4UztIQURL
KilRNjlhQnVbXUNYMyZmUDhBOV4nYFphOTdDVl5WMCpROFY6IkJaaUgKJSd0Xi90LFMnZCJUSjYp
LiJbR2YtKEh1ZmRNJCF0NUh1XiYuPFA0Uk5PbUMpMlY/WUNWKD1FbGVrIV1SVTJGPzEtS1A2YTdo
RmAnPEVjZSNqLTs0U2dlb11kVEZNbGUocUxwQk0KJSIoK2kpQGZUT1NHJSkhRlMvOS9hbz4zaCQn
akZiVGw/N0RwNEkicmhxZmRERDs8MFRyPzQ1NEJYb3NdLzAtImA7Y0k5MlxyaT0kL18hbDw0Qlko
WlAvLlorLio3NFQpLGBrciYKJSlyTXVcLFJxQ1AyITJlMl0maHMtcV5EJihRUG5QUTlvKEQ/aDZu
PS5cRUc/UjdLKnRPSUxbcSpuXExnVXImRlZhMykwVktAZGNqZVt1JlRpSiVtLjgsST9UTjddIVBM
WS0wKm4KJTMnV05JKi45bjdadSw5MWYnWEZCNGBpK0NwVkBmZGZcJCRYbjYkXiNsZ05hJXA+NDIm
REM+Rz8hbCk0SjpPXU0lRUc5N2ZSSDQrcm4oXEIkNiFfXVhBa2wpODhraCpfWjhhUkIKJVFuNCk1
MytHdS0uYnI1Y1I2N2tIOz9DaT5Za3JQOmsyYzlwWTBQSmtybVY/XnMsP1NOQ1gnbj9EbWBLRE5U
O0dvZnImbEhqLXIiSVxHZGloQkssOStTcCRcdXMrXV1IYW1bOUoKJVNXMz8nW01xbi5nRVIxYSws
JCUzcDptXzNSKUMvcCIrJUhAREdjZ1RNZ1RRP1QqKVFdMmtJIWBRU1FtUzI0PkNJMiE3WHQ0LFQt
XVpvcWMuWD8zMFZtQ0AyT25OR2wyMi4qTGwKJVdgNyQtYUhfJTZnUmlDTVo7Tj9iOCRVOU8taEws
cUNYZD1NPycyWnI8Mz91ST85Q0Q7YEhBImlackQwM2ppT2RhYjRvWVQtQzU+cCokNmFTWURlYnBs
PUw9NSozUDUsUkMnP0sKJS5PbS0xKFxcWHQ9LFklJGV0XEgpaVInY0tkKzFecDZHPFJJbSlwMWwu
XG5bYEJsIVtSajJHYyxmSiFHU2UzQCtoQiFjX25NVXAnYFtUNHRkLk1XPSk6TSg/NVJoNkRhM0ND
R3QKJTUwP2dCWHIuSUImKTAxUSdYNUZESGlbXyMrTG5RVFwjNFZBJks9KidCTUczP0IxSWFJXjcm
NXRyUE90WyInOkUyO1AjTTIoYENFLV9QPXU5WS9ySzNpRT5cRV4sZyJOZjJkK1UKJV8hMyJTb04+
OFNjLGMuSGJgb19dJDBJLVA/JzBRSlBNTio1ZmVnanJVZiRQcylLaE49XXViITw5ZFZnMExBNWIs
MWk8JUhcPTZHSmo1VmhmXzMmMkxXJlRvJEZfdUxZPztSTHEKJTMmVSw9VylXTE5AckloZFdRZFRE
cWktbEdoNlI3K29iY0lOOjVrW0xCcyI6cyZpVjliJy9SN0tVVUJWaXJZa2c0Sks1UkVITDwmY1pU
WnE0ZXMnK0YvWmo1TDswOXNeJldOXCUKJVc+RzVUVU1pKmAsIjdMXixwZTFZQyxQcUVBOFBkLCtu
c0YlQS9RLDpWK1ReaCRfdCs9cVV0LSdnPzxLcSshPkNVXi0pcnM+SC0nKVBpaCYvWWk1NSoiJDFZ
a0tqdCRSMUdjbkUKJT1aaEwmQyRZOG5GXzQqZyZJYGlDPDJzbDcmcCJCW05NIS9FcjshJGlWKTEy
MGUwRHI+UHFOcUNsM10jKW9WUT1QNUpqPUVSJSxvLFgpJipvYGVyUkBaWFFCb19xbFFoblAtaWYK
JUVpdWwjUVdYOj1gRkdYKDZUOEBfSHNpbXBaM2E1OGo8bl8jQCtLQEY5OVtuIkZEN2koKD46YilY
RnUuW1NmPjYycDdBNCVIRURbM21ncihnZiRtQGolKkRoWEhuNEpUXkBnMDIKJUtERW8wREpwU2Fq
dUxIaUkhWjJORHRfMCxGKDtsJyNIVWxuUkExaWxVZ3JFdGdbUmlFTEVWckcoKzkqOT9oTjFfaFRs
WSJETDIjOG1zLmk6byw4XyRodCQtNW1XZUdBL3BaLTYKJU40L3UjLmNjQDgwL1FDQ2g2VyZDWSgs
PGJxOipMKjMvcTElWkJka1Nuby5sZytaXWdJay8sJUYiWFttMmMrUmNAJDc0dWcyZmgwUVk/bjFp
JyQjUjFWTUFLVjw9Pzp1T00iU0sKJWJGcTdCY0oxM28qNEpdW1goJlZCV05HOGJxSj5jKT5ySCZD
Qjh0P1I3T0IvWW9YZmlRJThaSVYuNCJ1dE9BMCQwI1crVWZFcHRFZ08qPnFdNV86QGQoallgYTRR
S3JeVjpyMGgKJTdrSENZbU9ZTylBP2pxUGBMUTdDUnA9InNUVEoiTWkscUJfXSRmPiJxQVVyXF5Z
dE5uKztHMVdgYHBSMCFLa2cjM2IwNTJuVVJsIyVpQlgvI0whWGxoWlcuW1Q5cDNPKiZDY1kKJUtD
MDwiXXRoJj0oPldsajUnYkFMTmE5LTlqLXRgYmcyQDA3aWBkcm0yYEYvTTJFKzlXJz8nJClqMHFq
ImhBblUvYWBZNm1tRUhVb00uRzVTL0ZiPFdwTyk6NkxsVzswbCExYjIKJWhdZE9oNXNyJ043cVc9
Kz1hMmA8cDk7cyJJMlUzYSRoJ05nInVBIlhAZDtXYGZsOTpTMDFLO1BmVStqVG4kZSYuR3VJKiFx
SklLYDJATl8uR3FQcmMrPFI0KGJJcE5hVD5jZTsKJSVZO3EzbXM7RitlJmBhJS9iM1RGLDtULWJU
bVNCW11mLmhEXV9cNSReJmFqc0lKZ2xJbjNhUjpEP14iajlJQW1BcF02cXNoJW82XSY6P1xibFVp
U0dWcylzZCROL2k6KXVUMiIKJSU2YkkpOyJkKj8hPFNtLFNlRzlVT3BuUW0vK0I7bTI+RmlcJl8i
NyNwJV1UN0tgcD1uXUc6NTNSaTZaOGtTO2JhZGA2Z1UoQWVqRE5aR0xoSEU8UztzNGdGP2NWakxk
WmgyQkIKJVBQRmA3QmJoVzVvVylnWjRSOkxiazA+cUJGZicuWS1cPE9gSEA8Zy9XbkA3Q2Q4RTx1
Xz9hcElEWi1FdTlkTilLXHRjM0ArZ1VMQi1VXl1nM2IoVj8vWCVTV0RvIixTSldlSjEKJTppYl1N
NEBzR1tbZD43TC1cMzVUQ0htZ10xK2phVEJDPGpvJiFiPkUtMVU1JGdJKVVmSl08Jm1ETy9nRmdd
ai9XQ2BiJVhpczZKYzFDdC02P0NDY0E/NT8vOkM/IURNZkBsNS8KJUVsN1FxU3BUTFNzL2ZpM2sr
NWw7XCxKQXRabF1BcSRydWxLaU9Ob2xobzw1SGA2WzYmVUZXNlNKRyo1Z0E9XD8qTjMqTG0nNDpa
TCVuOldZQVUjb0slRjhZSV8zal9RaGEyNTMKJVtvIm0nYC1NV1RSIlo1az90K0BEUi05IiIsMnQx
cl5zLD9lXjwqayFyRUFXJl49SVtrVUlZMjc+cS1VOjEoZ2RwUiVHM0VbZD41TCgqIWNYLT5VKVVl
WmI9Wlk+bShjQDxqS3AKJTtgQEguRWRZMD9LZDZlSjk5b0VDJzVBUFsvNm1hVVYoLSQ2UVcyW0hX
aklJbEAwZnFGYkNtblVeM0liZSlwS0Q2RmopYWFSIjBsIjc5RlRmbUdfLVxGViljIjNCK10lRDwv
cToKJTwrZEZXPSkoRmltLUQkQDk0UzVFN1dSPiErOEhBbF5TTk9EaFY+VltTWihJKC9wLzBhS1RZ
X08vIzB1KzBnV0kjJ01OY2QzOWltRjdQKDovIS1NK3UuYyIsMV85SzVjSD80JCcKJWdXLipoZ0E+
cWhRVFlTVm8mJzlbUCNKLWBnKkRkSHImQllHW28rXjswN1RIMGRiWWMzW2NuMFE2L21rRVAxVWUz
Tm1fTFhWMXRGL0YzSFdlcExvUiY/KWVyXk5ZWT5cPHQ9QGMKJVFSNDNEQ2REPClRKnExOGtXL2Rh
UiZKQiRLI2RiNE5nZWpALFErKiNPXjs4M2hiIWRTKXQ4JiNIMmxNK002WW1cYms1Iz9lS3V0ZWVE
IjIkPWwmXUMvME0lSTc6RUJpNUVycyEKJWNhZkUoY0FlYWtGLjYoT0N0SC5QLkhVPzYzaTpEREBX
QGw+XV5gRUxoZXBCLGdFUCNuayNmVjJaJDgwX2pWKywwTW1pcC84KTw7V0RBLm4zNVVhKGUzQzQ+
NTZwZ0U1X0BiVigKJXFfJjtNYFtbPVVVUitHQVpoXEhfby1NLVlwKk9oNClEWWI4cGM7Jm1eJWRf
SjhYPUZPYHFmLyxsXihRLExhMjJFZGQxREQtb2RiQ2RoWCQ4MTM2cE4ucmQ8RzF1MyIrRjVbUnMK
JVxNblhKS0sxSzpTNz1gP0YlS1EtKlM7V0E+NDViI1ssLV5FTXNIIjUucy9cW2lkLnUkRF5ocjBI
SVJcYl4sc150QV1YM1dUPWxcKlNENi5Lb1s/WCpULkVdQTg9ZE1KVUxXPTEKJUY4aVZwIy4sPG80
ZnMsPWI5dGw+J0lOayhMM29SdU4+aXBVLGgkRU07Jz5NWmIKJV1YSEUvT2VqMy0pPz5mUU5qNXQi
I2EicixgMGRmIkNzXU4mSlpwZHBMN1cpMUcuPDtzTTBeYTg/ckVdaEteRDFGU0EoSC4zSGtRJTYx
OElgXzFbVmcjNUosLS5aWWBcbzNDOFYKJU4zQ1MkMGlyRW5ZZyxfQF5KaGkiXjcyRnFdVl5rdV0z
OjMuOT86Ly8rKUI3by1EPCg+IVxCLmJsSl8hdGpmOmlKRlVZYl09JS9xOG1gaUIwN0p1UmlhNnBs
cy5pYTs/TkcsK0EKJTFGYUkzYjAjZTxxRkdiP1Fac0ROWSZIXko/QHVrIjttRWZyX0ZRJTheSXJh
R2E0WSpcNitNazIiaDZuPVBTKUltYFI/YSFrbVhyUD0/Yi9HQ2k1Xj1VNEtabF9NVEguKlcsZFEK
JUw0QlloXXFoYDk7dFJzQFNTPSlAV0RUXWw6bENrMD5NclRFcjY/XUFkNDZvQ2UhMFg6KUMsLHAh
Vmk+I2duIiIxLD5cL1drIWpSckJdYGwpUU05YSsyL1JwPz1iLjxcJ0EvZ2EKJUpoImZjUzosS29o
dEByWV9rUkQzS0RMN0gkYmtmUDNHZVMtOFo4WUA6Xz5rbF5FaUhhK0xIQywpWitpR2xIUTBPZjto
bUpZSUlzP0htaDgnJVduT3BIIW1qY0grZClzUiNkTFgKJUNDVVRcRkIyQEgkK2ohaSI5cGlySTdd
QTk/MEdAaSFzOikobEtkbCFCT09rWTAjXms5UGg9ZldmM2cvLWc/XFZkWU4pNTA6NVlpVz8xMWVH
I0txdHFFYz1sSWxaSkFEKk9MTWcKJTc4IUZyT1hPKlFTUURhPyZnXkpNPz0zWTxYQUpCYyswODsq
Zz8zKWY6OWQsa141Z28wMWZRTmZAMU9SMCIjVmljNFtkTmYjSnFOTT5PT0VHbDUlb2hWYjUwaFsr
TSxmYjUlM2gKJVM1O1NeLz81a3NBLEc4Ti5RJiI3VFdTNE8zaiEnSDJjKSNjci9hODRNOW9NNVxA
cW42Km1PaFRIanF1Wi5tYWk+PFY9TFE9N0tDWWY/azovKixkKUQ5MHNGSixtVmldWVtgZkYKJS9H
KjxHPlg3Ti9cMEB0am1AR051YEdfUUE7JGAuM0g0N2VQJlJzc21faltfL0Y8ZURGZGsuXy9KUC9L
UlA3T0NyPkJgJF4hWlktVl02Lz5IL2JYOSVLc2szMDJyMm4hMSJwTnUKJW1WVE9oQzwyWG5EVmEs
aiRZSjkyZ01ZaidGQSNgbC0vM1YvbVxYMW8yNUpgRmxnQ1dTKUMlYCtYaDEnP1RmcyZcRDc9P1Uy
Vml0MTB1XlMqRztuPVpEYyhEYUFcNUc5KEA4JnIKJWZRUVRLVkhbRlpiJitXVURScCNNSUM2YyZl
MlEyRTdYTTtIaDdDNV8oKVZzTWdUQSlXRFhlX2RZSSosc11CTCp0VzVaMSNQIy85dXIhP2I0JjJn
bC4vP09UKC4iby9jPk5VSGoKJUE6bE5haiFucVQ+YzwjK1xMTDNhOCdnazdpXFMkPS80PFArNTZl
aktQbXQxZD4vIklHNFo6U14tbEw3WjA7biNhaEAkVUUtVDxFXFM4bGVyLS5BaTBVMz9zNSxERVxo
ZGdWU24KJT8mXHEobjkzcmlJblwoclwzUlUiNVA7VHNFdUpVS0B0ISZQcHA6OSoiRlpwS0lPcyFz
QW5uUl1iSyY1PXFLaXMuN0dcbzZHJ3FxcVxXbFY4UWE2IVZWVzpoIUNGQHBrTEVLYyoKJSlDJlVJ
IypuTyciX005S2QzOGZKTzhON19USG9JTS8rSmRDUz9gZF9CbFxsVFgrZWFba1BXJjhiU1xRMiJM
bTczUzlkJ2tacWRUXmpRWTsmZGtNUj0yRDApMzpfRFJHJyk7PEIKJUhiLjBMI1QyOCYuakBnJyNW
PCYnOmpySlNkZiNnZFdURmslIWFjKWQ9PktpbGFwVDFKaCVxcnAtWyZgU25DV3M0LmQuKSxyPilv
L0ZvayVNMGs4R1ZjbVMwVzIyIXUwajRaITgKJUJaS3RWUitqakxfXyopTTghNVBncE0mZiFmKidn
KkZ1WHMwW1plK2BCb1haLmwwMi1kYkYlbEhEYUlQPk9kJDVFOmhnaFtEOUYtRzJCNExqTkZBUF4j
cERRRFJARSlEIWVkVGYKJUxEYFVya08lUGE/U2tXYU4zWEwjMEQuQ0JlcmphKzdXPjFiZmNPVUde
b1k6QDlrTyJTX1ZpMW1YaUJzbnJOQDE8KF1Ga3VzKjlGSTVKbGpvKzhrVkNAQWtvVUhpSkBCcnNI
TFoKJUEnW1NdcFtpaj9UPEVcS2ddLjgyXmU7cF1pQjlqNk11NylSbT1CVlpzMy0hM0c3V0k8XjlZ
TlVbTjsqR1ZmRF9dXz5mdShkMl1zZmFZaWtmUyZYaCgibDRgWjFbcSg+NTU1NW8KJWk0bFk9YnUi
XTVyXCsmLiI5OC9xa1B0OktKJWdFT0lpaHRqXkcxKzQ3NzI9VGNARjc4Y1BtP1JfZFQyI2hzdG0y
N3NMT2YwLFAyWEZaWSFQMD1eJDUzIWhlTWBvcUp1aCNbIS8KJV9GUiNzX2JSQmcnXUFFZExSayI6
aT1TMzdHaWVRTW4/XlRPK1E7dGk2QGFSKyNDNjkkMVskLTRyN2F0Sl8hZDovblRQdVlFZEVnbEEw
XT5xQG0qJ3U2SDElRCVDJ1JIZl9IVHUKJVlXU0Y+P1xlXEghSVAlTV5CQmwvVl5MZzg6QzZpZmkv
LUVCbj46LllsaiRWLVJzY0ghK1UkWChCJklvQlYiJmE6Qj5FbUpndUlwVSNEVSZaNjlfOys5UnJU
cUlWNkZ1aFw/SlgKJUkzKGVKZiplYldkRUIlOCJgVmBQJSMsJTRUPyQyOSIjYkgzNU4mRk00PGJW
XV0pP007Ni4iakBMYk87XjE9a0stIzZQKllOSVZRL3FaJVImREouVCgwN10vZitFXylvPlI2LEQK
JWNiUSM7NShFJ29jT0luWyU9LUAmcTpuOXFUK0NHPycqdDg5S2AjNWpgc1tFOiwzMSQpSUxncGhj
ZzstcUk6O1c0Si5sbCg7IWVdUXI+bmBHXW8zNU5nJG85OypZNzNtbltpYSMKJSJVaVFoYlJHImAp
S1BdMW0rTWRAIWUqKmYqXkNTY19bWUphcE9eZT5LJExTOmJgISZyLFkzSDBIK2AnQERELUpDXUcr
JUBmRj00bGhaXGBRbyhkbiQ4S2drMG5EPVwwKGRNJj4KJURiRkNMISVHN2daUlpDMGs2JldtcW5x
V0VeRitzR29SWVdrK0JxU2RxS3VaXGQqJzdvIzVnbmtKcm8lKTN1PEZfITtzdFtrLSJmbyJNRSNW
MzU+dDRtWShfUmNnQ0I8a0xXJS4KJS0lTVY8KSQ8KlIua3AxLnFZWFs5ZTo7LGdwVkVpZlZfODBq
IyllMk0tUUkvUUhwZHNzNVFqNy4wayVpRzk9VlpLR1xGK25jYkw2SzY5c1VXSzBfMjcwS19UYyZG
PCZvPTNkdU0KJSZqLjVWM1ptOU0hJW1OMCspaiJJNkVeXzAqazZDMCYucDpPJSJoPnBLMyY2X3JY
Ll1xcGZaNWcnJjNYTWlzJSJPNDonLnBZaz5kSCVpRGtfTldPVWkkYyQ/QG9uKjohNGhgX1gKJV8h
Mj0oXVlyTDJNY28rUlIvbCxpSHBQW1hwaW1vMCQrQ1FaI1NtbjkhODlnMCQlc3I7cClUQk1uPHQv
ckJgYnEwXl5eZ1ZHPGBZUyRvdS1Kb0VnJDRGYmRlZF9VVSxcKisvZ1sKJTdLZSJRZ1ZMI3BmPiVs
OkxFSmVeMlpSNERgIlYoJERMX0haUk5MczktbDBVX3FoU1ZrPzNCMyovLENcK0VGP2YhQkBmT11L
ajlfaD89cFI9NGF0LVZhQUZXOHBgSVtEPSo7OUsKJXMzQ1YzLj5mKUc5ajdMYSUoNjNwInJSMz4k
MyllY3JcWVdbUGxeJURnamhlczowQVk2XzRoImMjPW9vTjAtJnVUJ25rMlNGcHRuPj4oV0l1KGto
MEdvJU4/O0UxX2Y4XzZELUAKJStLN2BCK00pYGo3SHRmYz5DTCFHKSZFX1YiOHIlXylSdHVZSVw1
ZV4tQkgxYiNeQUlGbyo8WTdVLmA8ZmtqL3ElUFheNzMidGlFaj9mUypnVTZ1ISo7ISZbaFMtXHFO
ZilVMEMKJVk1dDEvN048Kj4oMlE7K2M+dWE2SkdBbThjW0g4KiFLQT1GOiU2bU5Wa2dKOEEpZlFH
TmcrWCxYOWloUmBJWkE5OmVwY2xZXUgsISYwMV02RiRYO2JbSltAT1wsInJvRWU+TlUKJVwiLF0p
RGFYOiNCXjo0bG0vSDEiLWojMk8hL2hQPilEIThoaHNvY1dmWyYrSENBWm9La0dVakQ9YjElYTZY
VkRnKXBTSnNHW2tHQVMoK20hcSNdMytaZWFSPjomWVROSW9sTmAKJUleMDE4cjBxT040N0xiQV5Z
Uk4sNU9dJFRhbWpPXkZoVFQhcnNvT3ViOkV+PgolQUk5X1ByaXZhdGVEYXRhRW5kClwgTm8gbmV3
bGluZSBhdCBlbmQgb2YgZmlsZQpkaWZmIC0tZ2l0IGEvZG9jcy94ZW4tYXBpL3hlbmFwaS1jb3Zl
cnNoZWV0LnRleCBiL2RvY3MveGVuLWFwaS94ZW5hcGktY292ZXJzaGVldC50ZXgKZGVsZXRlZCBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDUzNGExMjQuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBp
L3hlbmFwaS1jb3ZlcnNoZWV0LnRleAorKysgL2Rldi9udWxsCkBAIC0xLDM4ICswLDAgQEAKLSUK
LSUgQ29weXJpZ2h0IChjKSAyMDA2LTIwMDcgWGVuU291cmNlLCBJbmMuCi0lCi0lIFBlcm1pc3Np
b24gaXMgZ3JhbnRlZCB0byBjb3B5LCBkaXN0cmlidXRlIGFuZC9vciBtb2RpZnkgdGhpcyBkb2N1
bWVudCB1bmRlcgotJSB0aGUgdGVybXMgb2YgdGhlIEdOVSBGcmVlIERvY3VtZW50YXRpb24gTGlj
ZW5zZSwgVmVyc2lvbiAxLjIgb3IgYW55IGxhdGVyCi0lIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRo
ZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb247IHdpdGggbm8gSW52YXJpYW50Ci0lIFNlY3Rpb25z
LCBubyBGcm9udC1Db3ZlciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4gIEEgY29weSBv
ZiB0aGUKLSUgbGljZW5zZSBpcyBpbmNsdWRlZCBpbiB0aGUgc2VjdGlvbiBlbnRpdGxlZAotJSAi
R05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlIiBvciB0aGUgZmlsZSBmZGwudGV4LgotJQot
JSBBdXRob3JzOiBFd2FuIE1lbGxvciwgUmljaGFyZCBTaGFycCwgRGF2ZSBTY290dCwgSm9uIEhh
cnJvcC4KLSUKLQotJSUgRG9jdW1lbnQgdGl0bGUKLVxuZXdjb21tYW5ke1xkb2N0aXRsZX17WGVu
IE1hbmFnZW1lbnQgQVBJfQotCi1cbmV3Y29tbWFuZHtcY292ZXJzaGVldGxvZ299e3hlbi5lcHN9
Ci0KLSUlIERvY3VtZW50IGRhdGUKLVxuZXdjb21tYW5ke1xkYXRlc3RyaW5nfXsxMHRoIEphbnVh
cnkgMjAxMH0KLQotXG5ld2NvbW1hbmR7XHJlbGVhc2VzdGF0ZW1lbnR9e1N0YWJsZSBSZWxlYXNl
fQotCi0lJSBEb2N1bWVudCByZXZpc2lvbgotXG5ld2NvbW1hbmR7XHJldnN0cmluZ317QVBJIFJl
dmlzaW9uIDEuMC4xMH0KLQotJSUgRG9jdW1lbnQgYXV0aG9ycwotXG5ld2NvbW1hbmR7XGRvY2F1
dGhvcnN9ewotRXdhbiBNZWxsb3I6ICYge1x0dCBld2FuQHhlbnNvdXJjZS5jb219IFxcCi1SaWNo
YXJkIFNoYXJwOiAmIHtcdHQgcmljaGFyZC5zaGFycEB4ZW5zb3VyY2UuY29tfSBcXAotRGF2aWQg
U2NvdHQ6ICYge1x0dCBkYXZpZC5zY290dEB4ZW5zb3VyY2UuY29tfX0KLVxuZXdjb21tYW5ke1xs
ZWdhbG5vdGljZX17Q29weXJpZ2h0IFxjb3B5cmlnaHR7fSAyMDA2LTIwMDcgWGVuU291cmNlLCBJ
bmMuXFwgXFwKLVBlcm1pc3Npb24gaXMgZ3JhbnRlZCB0byBjb3B5LCBkaXN0cmlidXRlIGFuZC9v
ciBtb2RpZnkgdGhpcyBkb2N1bWVudCB1bmRlcgotdGhlIHRlcm1zIG9mIHRoZSBHTlUgRnJlZSBE
b2N1bWVudGF0aW9uIExpY2Vuc2UsIFZlcnNpb24gMS4yIG9yIGFueSBsYXRlcgotdmVyc2lvbiBw
dWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlh
bnQgU2VjdGlvbnMsCi1ubyBGcm9udC1Db3ZlciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0
cy4gIEEgY29weSBvZiB0aGUgbGljZW5zZSBpcwotaW5jbHVkZWQgaW4gdGhlIHNlY3Rpb24gZW50
aXRsZWQgIkdOVSBGcmVlIERvY3VtZW50YXRpb24gTGljZW5zZSIuCi19CmRpZmYgLS1naXQgYS9k
b2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QgYi9kb2NzL3hlbi1hcGkveGVu
YXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IGQz
NGVkMzkuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBpL3hlbmFwaS1kYXRhbW9kZWwtZ3JhcGgu
ZG90CisrKyAvZGV2L251bGwKQEAgLTEsNTcgKzAsMCBAQAotIwotIyBDb3B5cmlnaHQgKGMpIDIw
MDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSMKLSMgUGVybWlzc2lvbiBpcyBncmFudGVkIHRvIGNv
cHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50IHVuZGVyCi0jIHRoZSB0
ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlLCBWZXJzaW9uIDEuMiBv
ciBhbnkgbGF0ZXIKLSMgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSMgU2VjdGlvbnMsIG5vIEZyb250LUNvdmVyIFRl
eHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRoZQotIyBsaWNlbnNlIGlz
IGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0jICJHTlUgRnJlZSBEb2N1bWVudGF0
aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0jCi0KLWRpZ3JhcGggIlhlbi1BUEkg
Q2xhc3MgRGlhZ3JhbSIgewotZm9udG5hbWU9IlZlcmRhbmEiOwotCi1ub2RlIFsgc2hhcGU9Ym94
IF07IHNlc3Npb24gVk0gaG9zdCBuZXR3b3JrIFZJRiBQSUYgU1IgVkRJIFZCRCBQQkQgdXNlcjsK
LW5vZGUgWyBzaGFwZT1ib3ggXTsgWFNQb2xpY3kgQUNNUG9saWN5IERQQ0kgUFBDSSBob3N0X2Nw
dSBjb25zb2xlIFZUUE07Ci1ub2RlIFsgc2hhcGU9Ym94IF07IERTQ1NJIFBTQ1NJIERTQ1NJX0hC
QSBQU0NTSV9IQkEgY3B1X3Bvb2w7Ci1ub2RlIFsgc2hhcGU9ZWxsaXBzZSBdOyBWTV9tZXRyaWNz
IFZNX2d1ZXN0X21ldHJpY3MgaG9zdF9tZXRyaWNzOwotbm9kZSBbIHNoYXBlPWVsbGlwc2UgXTsg
UElGX21ldHJpY3MgVklGX21ldHJpY3MgVkJEX21ldHJpY3MgUEJEX21ldHJpY3M7Ci1zZXNzaW9u
IC0+IGhvc3QgWyBhcnJvd2hlYWQ9Im5vbmUiIF0KLXNlc3Npb24gLT4gdXNlciBbIGFycm93aGVh
ZD0ibm9uZSIgXQotVk0gLT4gVk1fbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQotVk0gLT4g
Vk1fZ3Vlc3RfbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQotVk0gLT4gY29uc29sZSBbIGFy
cm93aGVhZD0iY3JvdyIgXQotaG9zdCAtPiBQQkQgWyBhcnJvd2hlYWQ9ImNyb3ciLCBhcnJvd3Rh
aWw9Im5vbmUiIF0KLWhvc3QgLT4gaG9zdF9tZXRyaWNzIFsgYXJyb3doZWFkPSJub25lIiBdCi1o
b3N0IC0+IGhvc3RfY3B1IFsgYXJyb3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1W
SUYgLT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUiLCBhcnJvd3RhaWw9ImNyb3ciIF0KLVZJRiAtPiBu
ZXR3b3JrIFsgYXJyb3doZWFkPSJub25lIiwgYXJyb3d0YWlsPSJjcm93IiBdCi1WSUYgLT4gVklG
X21ldHJpY3MgWyBhcnJvd2hlYWQ9Im5vbmUiIF0KLVBJRiAtPiBob3N0IFsgYXJyb3doZWFkPSJu
b25lIiwgYXJyb3d0YWlsPSJjcm93IiBdCi1QSUYgLT4gbmV0d29yayBbIGFycm93aGVhZD0ibm9u
ZSIsIGFycm93dGFpbD0iY3JvdyIgXQotUElGIC0+IFBJRl9tZXRyaWNzIFsgYXJyb3doZWFkPSJu
b25lIiBdCi1TUiAtPiBQQkQgWyBhcnJvd2hlYWQ9ImNyb3ciLCBhcnJvd3RhaWw9Im5vbmUiIF0K
LVBCRCAtPiBQQkRfbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQotU1IgLT4gVkRJIFsgYXJy
b3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1WREkgLT4gVkJEIFsgYXJyb3doZWFk
PSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1WQkQgLT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUi
LCBhcnJvd3RhaWw9ImNyb3ciIF0KLVZUUE0gLT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUiLCBhcnJv
d3RhaWw9ImNyb3ciIF0KLVZCRCAtPiBWQkRfbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQot
WFNQb2xpY3kgLT4gaG9zdCBbIGFycm93aGVhZD0ibm9uZSIgXQotWFNQb2xpY3kgLT4gQUNNUG9s
aWN5IFsgYXJyb3doZWFkPSJub25lIiBdCi1EUENJIC0+IFZNIFsgYXJyb3doZWFkPSJub25lIiwg
YXJyb3d0YWlsPSJjcm93IiBdCi1EUENJIC0+IFBQQ0kgWyBhcnJvd2hlYWQ9Im5vbmUiIF0KLVBQ
Q0kgLT4gaG9zdCBbIGFycm93aGVhZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotRFNDU0kg
LT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUiLCBhcnJvd3RhaWw9ImNyb3ciIF0KLURTQ1NJX0hCQSAt
PiBWTSBbIGFycm93aGVhZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotRFNDU0kgLT4gRFND
U0lfSEJBIFsgYXJyb3doZWFkPSJub25lIiwgYXJyb3d0YWlsPSJjcm93IiBdCi1EU0NTSSAtPiBQ
U0NTSSBbIGFycm93aGVhZD0ibm9uZSIgXQotRFNDU0lfSEJBIC0+IFBTQ1NJX0hCQSBbIGFycm93
aGVhZD0iY3JvdyIsIGFycm93dGFpbD0ibm9uZSIgXQotUFNDU0kgLT4gaG9zdCBbIGFycm93aGVh
ZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotUFNDU0lfSEJBIC0+IGhvc3QgWyBhcnJvd2hl
YWQ9Im5vbmUiLCBhcnJvd3RhaWw9ImNyb3ciIF0KLVBTQ1NJIC0+IFBTQ1NJX0hCQSBbIGFycm93
aGVhZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotY3B1X3Bvb2wgLT4gaG9zdF9jcHUgWyBh
cnJvd2hlYWQ9ImNyb3ciLCBhcnJvd3RhaWw9Im5vbmUiIF0KLWNwdV9wb29sIC0+IFZNIFsgYXJy
b3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1ob3N0IC0+IGNwdV9wb29sIFsgYXJy
b3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi19CmRpZmYgLS1naXQgYS9kb2NzL3hl
bi1hcGkveGVuYXBpLWRhdGFtb2RlbC50ZXggYi9kb2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2Rl
bC50ZXgKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDRmOWNlMjAuLjAwMDAwMDAKLS0t
IGEvZG9jcy94ZW4tYXBpL3hlbmFwaS1kYXRhbW9kZWwudGV4CisrKyAvZGV2L251bGwKQEAgLTEs
MjAyNDUgKzAsMCBAQAotJQotJSBDb3B5cmlnaHQgKGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIElu
Yy4KLSUgQ29weXJpZ2h0IChjKSAyMDA5IGZsb25hdGVsIEdtYkggJiBDby4gS0cKLSUKLSUgUGVy
bWlzc2lvbiBpcyBncmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlz
IGRvY3VtZW50IHVuZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlv
biBMaWNlbnNlLCBWZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQg
YnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2Vj
dGlvbnMsIG5vIEZyb250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBj
b3B5IG9mIHRoZQotJSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVk
Ci0lICJHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXgu
Ci0lCi0lIEF1dGhvcnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBK
b24gSGFycm9wLgotJSBDb250cmlidXRvcjogQW5kcmVhcyBGbG9yYXRoCi0lCi0KLVxjaGFwdGVy
e0FQSSBSZWZlcmVuY2V9Ci1cbGFiZWx7YXBpLXJlZmVyZW5jZX0KLQotCi1cc2VjdGlvbntDbGFz
c2VzfQotVGhlIGZvbGxvd2luZyBjbGFzc2VzIGFyZSBkZWZpbmVkOgotCi1cYmVnaW57Y2VudGVy
fVxiZWdpbnt0YWJ1bGFyfXt8bHB7MTBjbX18fQotXGhsaW5lCi1OYW1lICYgRGVzY3JpcHRpb24g
XFwKLVxobGluZQote1x0dCBzZXNzaW9ufSAmIEEgc2Vzc2lvbiBcXAote1x0dCB0YXNrfSAmIEEg
bG9uZy1ydW5uaW5nIGFzeW5jaHJvbm91cyB0YXNrIFxcCi17XHR0IGV2ZW50fSAmIEFzeW5jaHJv
bm91cyBldmVudCByZWdpc3RyYXRpb24gYW5kIGhhbmRsaW5nIFxcCi17XHR0IFZNfSAmIEEgdmly
dHVhbCBtYWNoaW5lIChvciAnZ3Vlc3QnKSBcXAote1x0dCBWTVxfbWV0cmljc30gJiBUaGUgbWV0
cmljcyBhc3NvY2lhdGVkIHdpdGggYSBWTSBcXAote1x0dCBWTVxfZ3Vlc3RcX21ldHJpY3N9ICYg
VGhlIG1ldHJpY3MgcmVwb3J0ZWQgYnkgdGhlIGd1ZXN0IChhcyBvcHBvc2VkIHRvIGluZmVycmVk
IGZyb20gb3V0c2lkZSkgXFwKLXtcdHQgaG9zdH0gJiBBIHBoeXNpY2FsIGhvc3QgXFwKLXtcdHQg
aG9zdFxfbWV0cmljc30gJiBUaGUgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggYSBob3N0IFxcCi17
XHR0IGhvc3RcX2NwdX0gJiBBIHBoeXNpY2FsIENQVSBcXAote1x0dCBuZXR3b3JrfSAmIEEgdmly
dHVhbCBuZXR3b3JrIFxcCi17XHR0IFZJRn0gJiBBIHZpcnR1YWwgbmV0d29yayBpbnRlcmZhY2Ug
XFwKLXtcdHQgVklGXF9tZXRyaWNzfSAmIFRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCBhIHZp
cnR1YWwgbmV0d29yayBkZXZpY2UgXFwKLXtcdHQgUElGfSAmIEEgcGh5c2ljYWwgbmV0d29yayBp
bnRlcmZhY2UgKG5vdGUgc2VwYXJhdGUgVkxBTnMgYXJlIHJlcHJlc2VudGVkIGFzIHNldmVyYWwg
UElGcykgXFwKLXtcdHQgUElGXF9tZXRyaWNzfSAmIFRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0
aCBhIHBoeXNpY2FsIG5ldHdvcmsgaW50ZXJmYWNlIFxcCi17XHR0IFNSfSAmIEEgc3RvcmFnZSBy
ZXBvc2l0b3J5IFxcCi17XHR0IFZESX0gJiBBIHZpcnR1YWwgZGlzayBpbWFnZSBcXAote1x0dCBW
QkR9ICYgQSB2aXJ0dWFsIGJsb2NrIGRldmljZSBcXAote1x0dCBWQkRcX21ldHJpY3N9ICYgVGhl
IG1ldHJpY3MgYXNzb2NpYXRlZCB3aXRoIGEgdmlydHVhbCBibG9jayBkZXZpY2UgXFwKLXtcdHQg
UEJEfSAmIFRoZSBwaHlzaWNhbCBibG9jayBkZXZpY2VzIHRocm91Z2ggd2hpY2ggaG9zdHMgYWNj
ZXNzIFNScyBcXAote1x0dCBjcmFzaGR1bXB9ICYgQSBWTSBjcmFzaGR1bXAgXFwKLXtcdHQgVlRQ
TX0gJiBBIHZpcnR1YWwgVFBNIGRldmljZSBcXAote1x0dCBjb25zb2xlfSAmIEEgY29uc29sZSBc
XAote1x0dCBEUENJfSAmIEEgcGFzcy10aHJvdWdoIFBDSSBkZXZpY2UgXFwKLXtcdHQgUFBDSX0g
JiBBIHBoeXNpY2FsIFBDSSBkZXZpY2UgXFwKLXtcdHQgRFNDU0l9ICYgQSBoYWxmLXZpcnR1YWxp
emVkIFNDU0kgZGV2aWNlIFxcCi17XHR0IERTQ1NJXF9IQkF9ICYgQSBoYWxmLXZpcnR1YWxpemVk
IFNDU0kgaG9zdCBidXMgYWRhcHRlciBcXAote1x0dCBQU0NTSX0gJiBBIHBoeXNpY2FsIFNDU0kg
ZGV2aWNlIFxcCi17XHR0IFBTQ1NJXF9IQkF9ICYgQSBwaHlzaWNhbCBTQ1NJIGhvc3QgYnVzIGFk
YXB0ZXIgXFwKLXtcdHQgdXNlcn0gJiBBIHVzZXIgb2YgdGhlIHN5c3RlbSBcXAote1x0dCBkZWJ1
Z30gJiBBIGJhc2ljIGNsYXNzIGZvciB0ZXN0aW5nIFxcCi17XHR0IFhTUG9saWN5fSAmIEEgY2xh
c3MgZm9yIGhhbmRsaW5nIFhlbiBTZWN1cml0eSBQb2xpY2llcyBcXAote1x0dCBBQ01Qb2xpY3l9
ICYgQSBjbGFzcyBmb3IgaGFuZGxpbmcgQUNNLXR5cGUgcG9saWNpZXMgXFwKLXtcdHQgY3B1XF9w
b29sfSAmIEEgY29udGFpbmVyIGZvciBWTXMgd2hpY2ggc2hvdWxkIHNoYXJlZCB0aGUgc2FtZSBo
b3N0XF9jcHUocykgXFwKLVxobGluZQotXGVuZHt0YWJ1bGFyfVxlbmR7Y2VudGVyfQotXHNlY3Rp
b257UmVsYXRpb25zaGlwcyBCZXR3ZWVuIENsYXNzZXN9Ci1GaWVsZHMgdGhhdCBhcmUgYm91bmQg
dG9nZXRoZXIgYXJlIHNob3duIGluIHRoZSBmb2xsb3dpbmcgdGFibGU6IAotXGJlZ2lue2NlbnRl
cn1cYmVnaW57dGFidWxhcn17fGxsfGx8fQotXGhsaW5lCi17XGVtIG9iamVjdC5maWVsZH0gJiB7
XGVtIG9iamVjdC5maWVsZH0gJiB7XGVtIHJlbGF0aW9uc2hpcH0gXFwKLQotXGhsaW5lCi1ob3N0
LlBCRHMgJiBQQkQuaG9zdCAmIG1hbnktdG8tb25lXFwKLVNSLlBCRHMgJiBQQkQuU1IgJiBtYW55
LXRvLW9uZVxcCi1WREkuVkJEcyAmIFZCRC5WREkgJiBtYW55LXRvLW9uZVxcCi1WREkuY3Jhc2hc
X2R1bXBzICYgY3Jhc2hkdW1wLlZESSAmIG1hbnktdG8tb25lXFwKLVZCRC5WTSAmIFZNLlZCRHMg
JiBvbmUtdG8tbWFueVxcCi1jcmFzaGR1bXAuVk0gJiBWTS5jcmFzaFxfZHVtcHMgJiBvbmUtdG8t
bWFueVxcCi1WSUYuVk0gJiBWTS5WSUZzICYgb25lLXRvLW1hbnlcXAotVklGLm5ldHdvcmsgJiBu
ZXR3b3JrLlZJRnMgJiBvbmUtdG8tbWFueVxcCi1QSUYuaG9zdCAmIGhvc3QuUElGcyAmIG9uZS10
by1tYW55XFwKLVBJRi5uZXR3b3JrICYgbmV0d29yay5QSUZzICYgb25lLXRvLW1hbnlcXAotU1Iu
VkRJcyAmIFZESS5TUiAmIG1hbnktdG8tb25lXFwKLVZUUE0uVk0gJiBWTS5WVFBNcyAmIG9uZS10
by1tYW55XFwKLWNvbnNvbGUuVk0gJiBWTS5jb25zb2xlcyAmIG9uZS10by1tYW55XFwKLURQQ0ku
Vk0gJiBWTS5EUENJcyAmIG9uZS10by1tYW55XFwKLVBQQ0kuaG9zdCAmIGhvc3QuUFBDSXMgJiBv
bmUtdG8tbWFueVxcCi1EU0NTSS5WTSAmIFZNLkRTQ1NJcyAmIG9uZS10by1tYW55XFwKLURTQ1NJ
LkhCQSAmIERTQ1NJXF9IQkEuRFNDU0lzICYgb25lLXRvLW1hbnlcXAotRFNDU0lcX0hCQS5WTSAm
IFZNLkRTQ1NJXF9IQkFzICYgb25lLXRvLW1hbnlcXAotUFNDU0kuaG9zdCAmIGhvc3QuUFNDU0lz
ICYgb25lLXRvLW1hbnlcXAotUFNDU0kuSEJBICYgUFNDU0lcX0hCQS5QU0NTSXMgJiBvbmUtdG8t
bWFueVxcCi1QU0NTSVxfSEJBLmhvc3QgJiBob3N0LlBTQ1NJXF9IQkFzICYgb25lLXRvLW1hbnlc
XAotaG9zdC5yZXNpZGVudFxfVk1zICYgVk0ucmVzaWRlbnRcX29uICYgbWFueS10by1vbmVcXAot
aG9zdC5ob3N0XF9DUFVzICYgaG9zdFxfY3B1Lmhvc3QgJiBtYW55LXRvLW9uZVxcCi1ob3N0LnJl
c2lkZW50XF9jcHVcX3Bvb2xzICYgY3B1XF9wb29sLnJlc2lkZW50XF9vbiAmIG1hbnktdG8tb25l
XFwKLWNwdVxfcG9vbC5zdGFydGVkXF9WTXMgJiBWTS5jcHVcX3Bvb2wgJiBtYW55LXRvLW9uZVxc
Ci1jcHVcX3Bvb2wuaG9zdFxfQ1BVcyAmIGhvc3RcX2NwdS5jcHVcX3Bvb2wgJiBtYW55LXRvLW9u
ZVxcCi1caGxpbmUKLVxlbmR7dGFidWxhcn1cZW5ke2NlbnRlcn0KLQotVGhlIGZvbGxvd2luZyBy
ZXByZXNlbnRzIGJvdW5kIGZpZWxkcyAoYXMgc3BlY2lmaWVkIGFib3ZlKSBkaWFncmFtbWF0aWNh
bGx5LCB1c2luZyBjcm93cy1mb290IG5vdGF0aW9uIHRvIHNwZWNpZnkgb25lLXRvLW9uZSwgb25l
LXRvLW1hbnkgb3IgbWFueS10by1tYW55Ci0gICAgICAgICAgICAgICAgICAgcmVsYXRpb25zaGlw
czoKLQotXGJlZ2lue2NlbnRlcn1ccmVzaXplYm94ezAuOFx0ZXh0d2lkdGh9eyF9ewotXGluY2x1
ZGVncmFwaGljc3t4ZW5hcGktZGF0YW1vZGVsLWdyYXBofQotfVxlbmR7Y2VudGVyfQotXAotXHN1
YnNlY3Rpb257TGlzdCBvZiBib3VuZCBmaWVsZHN9Ci1cc2VjdGlvbntUeXBlc30KLVxzdWJzZWN0
aW9ue1ByaW1pdGl2ZXN9Ci1UaGUgZm9sbG93aW5nIHByaW1pdGl2ZSB0eXBlcyBhcmUgdXNlZCB0
byBzcGVjaWZ5IG1ldGhvZHMgYW5kIGZpZWxkcyBpbiB0aGUgQVBJIFJlZmVyZW5jZToKLQotXGJl
Z2lue2NlbnRlcn1cYmVnaW57dGFidWxhcn17fGxsfH0KLVxobGluZQotVHlwZSAmIERlc2NyaXB0
aW9uIFxcCi1caGxpbmUKLVN0cmluZyAmIHRleHQgc3RyaW5ncyBcXAotSW50ICAgICYgNjQtYml0
IGludGVnZXJzIFxcCi1GbG9hdCAmIElFRUUgZG91YmxlLXByZWNpc2lvbiBmbG9hdGluZy1wb2lu
dCBudW1iZXJzIFxcCi1Cb29sICAgJiBib29sZWFuIFxcCi1EYXRlVGltZSAmIGRhdGUgYW5kIHRp
bWVzdGFtcCBcXAotUmVmIChvYmplY3QgbmFtZSkgJiByZWZlcmVuY2UgdG8gYW4gb2JqZWN0IG9m
IGNsYXNzIG5hbWUgXFwKLVxobGluZQotXGVuZHt0YWJ1bGFyfVxlbmR7Y2VudGVyfQotXHN1YnNl
Y3Rpb257SGlnaGVyIG9yZGVyIHR5cGVzfQotVGhlIGZvbGxvd2luZyB0eXBlIGNvbnN0cnVjdG9y
cyBhcmUgdXNlZDoKLQotXGJlZ2lue2NlbnRlcn1cYmVnaW57dGFidWxhcn17fGxsfH0KLVxobGlu
ZQotVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLUxpc3QgKHQpICYgYW4gYXJiaXRyYXJ5
LWxlbmd0aCBsaXN0IG9mIGVsZW1lbnRzIG9mIHR5cGUgdCBcXAotTWFwIChhICRccmlnaHRhcnJv
dyQgYikgJiBhIHRhYmxlIG1hcHBpbmcgdmFsdWVzIG9mIHR5cGUgYSB0byB2YWx1ZXMgb2YgdHlw
ZSBiIFxcCi1caGxpbmUKLVxlbmR7dGFidWxhcn1cZW5ke2NlbnRlcn0KLVxzdWJzZWN0aW9ue0Vu
dW1lcmF0aW9uIHR5cGVzfQotVGhlIGZvbGxvd2luZyBlbnVtZXJhdGlvbiB0eXBlcyBhcmUgdXNl
ZDoKLQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0KLVxobGluZQote1x0dCBlbnVtIGV2ZW50XF9v
cGVyYXRpb259ICYgXFwKLVxobGluZQotXGhzcGFjZXswLjVjbX17XHR0IGFkZH0gJiBBbiBvYmpl
Y3QgaGFzIGJlZW4gY3JlYXRlZCBcXAotXGhzcGFjZXswLjVjbX17XHR0IGRlbH0gJiBBbiBvYmpl
Y3QgaGFzIGJlZW4gZGVsZXRlZCBcXAotXGhzcGFjZXswLjVjbX17XHR0IG1vZH0gJiBBbiBvYmpl
Y3QgaGFzIGJlZW4gbW9kaWZpZWQgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci0KLVx2c3Bh
Y2V7MWNtfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0KLVxobGluZQote1x0dCBlbnVtIGNvbnNv
bGVcX3Byb3RvY29sfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCB2dDEwMH0gJiBW
VDEwMCB0ZXJtaW5hbCBcXAotXGhzcGFjZXswLjVjbX17XHR0IHJmYn0gJiBSZW1vdGUgRnJhbWVC
dWZmZXIgcHJvdG9jb2wgKGFzIHVzZWQgaW4gVk5DKSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHJk
cH0gJiBSZW1vdGUgRGVza3RvcCBQcm90b2NvbCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0K
LQotXHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhsaW5lCi17XHR0IGVu
dW0gdmRpXF90eXBlfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBzeXN0ZW19ICYg
YSBkaXNrIHRoYXQgbWF5IGJlIHJlcGxhY2VkIG9uIHVwZ3JhZGUgXFwKLVxoc3BhY2V7MC41Y219
e1x0dCB1c2VyfSAmIGEgZGlzayB0aGF0IGlzIGFsd2F5cyBwcmVzZXJ2ZWQgb24gdXBncmFkZSBc
XAotXGhzcGFjZXswLjVjbX17XHR0IGVwaGVtZXJhbH0gJiBhIGRpc2sgdGhhdCBtYXkgYmUgcmVm
b3JtYXR0ZWQgb24gdXBncmFkZSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHN1c3BlbmR9ICYgYSBk
aXNrIHRoYXQgc3RvcmVzIGEgc3VzcGVuZCBpbWFnZSBcXAotXGhzcGFjZXswLjVjbX17XHR0IGNy
YXNoZHVtcH0gJiBhIGRpc2sgdGhhdCBzdG9yZXMgVk0gY3Jhc2hkdW1wIGluZm9ybWF0aW9uIFxc
Ci1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotCi1cdnNwYWNlezFjbX0KLVxiZWdpbntsb25ndGFi
bGV9e3xsbHx9Ci1caGxpbmUKLXtcdHQgZW51bSB2bVxfcG93ZXJcX3N0YXRlfSAmIFxcCi1caGxp
bmUKLVxoc3BhY2V7MC41Y219e1x0dCBIYWx0ZWR9ICYgSGFsdGVkIFxcCi1caHNwYWNlezAuNWNt
fXtcdHQgUGF1c2VkfSAmIFBhdXNlZCBcXAotXGhzcGFjZXswLjVjbX17XHR0IFJ1bm5pbmd9ICYg
UnVubmluZyBcXAotXGhzcGFjZXswLjVjbX17XHR0IFN1c3BlbmRlZH0gJiBTdXNwZW5kZWQgXFwK
LVxoc3BhY2V7MC41Y219e1x0dCBDcmFzaGVkfSAmIENyYXNoZWQgXFwKLVxoc3BhY2V7MC41Y219
e1x0dCBVbmtub3dufSAmIFNvbWUgb3RoZXIgdW5rbm93biBzdGF0ZSBcXAotXGhsaW5lCi1cZW5k
e2xvbmd0YWJsZX0KLQotXHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhs
aW5lCi17XHR0IGVudW0gdGFza1xfYWxsb3dlZFxfb3BlcmF0aW9uc30gJiBcXAotXGhsaW5lCi1c
aHNwYWNlezAuNWNtfXtcdHQgQ2FuY2VsfSAmIENhbmNlbCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0
YWJsZX0KLQotXHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhsaW5lCi17
XHR0IGVudW0gdGFza1xfc3RhdHVzXF90eXBlfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219
e1x0dCBwZW5kaW5nfSAmIHRhc2sgaXMgaW4gcHJvZ3Jlc3MgXFwKLVxoc3BhY2V7MC41Y219e1x0
dCBzdWNjZXNzfSAmIHRhc2sgd2FzIGNvbXBsZXRlZCBzdWNjZXNzZnVsbHkgXFwKLVxoc3BhY2V7
MC41Y219e1x0dCBmYWlsdXJlfSAmIHRhc2sgaGFzIGZhaWxlZCBcXAotXGhzcGFjZXswLjVjbX17
XHR0IGNhbmNlbGxpbmd9ICYgdGFzayBpcyBiZWluZyBjYW5jZWxsZWQgXFwKLVxoc3BhY2V7MC41
Y219e1x0dCBjYW5jZWxsZWR9ICYgdGFzayBoYXMgYmVlbiBjYW5jZWxsZWQgXFwKLVxobGluZQot
XGVuZHtsb25ndGFibGV9Ci0KLVx2c3BhY2V7MWNtfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0K
LVxobGluZQote1x0dCBlbnVtIG9uXF9ub3JtYWxcX2V4aXR9ICYgXFwKLVxobGluZQotXGhzcGFj
ZXswLjVjbX17XHR0IGRlc3Ryb3l9ICYgZGVzdHJveSB0aGUgVk0gc3RhdGUgXFwKLVxoc3BhY2V7
MC41Y219e1x0dCByZXN0YXJ0fSAmIHJlc3RhcnQgdGhlIFZNIFxcCi1caGxpbmUKLVxlbmR7bG9u
Z3RhYmxlfQotCi1cdnNwYWNlezFjbX0KLVxiZWdpbntsb25ndGFibGV9e3xsbHx9Ci1caGxpbmUK
LXtcdHQgZW51bSBvblxfY3Jhc2hcX2JlaGF2aW91cn0gJiBcXAotXGhsaW5lCi1caHNwYWNlezAu
NWNtfXtcdHQgZGVzdHJveX0gJiBkZXN0cm95IHRoZSBWTSBzdGF0ZSBcXAotXGhzcGFjZXswLjVj
bX17XHR0IGNvcmVkdW1wXF9hbmRcX2Rlc3Ryb3l9ICYgcmVjb3JkIGEgY29yZWR1bXAgYW5kIHRo
ZW4gZGVzdHJveSB0aGUgVk0gc3RhdGUgXFwKLVxoc3BhY2V7MC41Y219e1x0dCByZXN0YXJ0fSAm
IHJlc3RhcnQgdGhlIFZNIFxcCi1caHNwYWNlezAuNWNtfXtcdHQgY29yZWR1bXBcX2FuZFxfcmVz
dGFydH0gJiByZWNvcmQgYSBjb3JlZHVtcCBhbmQgdGhlbiByZXN0YXJ0IHRoZSBWTSBcXAotXGhz
cGFjZXswLjVjbX17XHR0IHByZXNlcnZlfSAmIGxlYXZlIHRoZSBjcmFzaGVkIFZNIGFzLWlzIFxc
Ci1caHNwYWNlezAuNWNtfXtcdHQgcmVuYW1lXF9yZXN0YXJ0fSAmIHJlbmFtZSB0aGUgY3Jhc2hl
ZCBWTSBhbmQgc3RhcnQgYSBuZXcgY29weSBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLQot
XHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhsaW5lCi17XHR0IGVudW0g
dmJkXF9tb2RlfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBST30gJiBkaXNrIGlz
IG1vdW50ZWQgcmVhZC1vbmx5IFxcCi1caHNwYWNlezAuNWNtfXtcdHQgUld9ICYgZGlzayBpcyBt
b3VudGVkIHJlYWQtd3JpdGUgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci0KLVx2c3BhY2V7
MWNtfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0KLVxobGluZQote1x0dCBlbnVtIHZiZFxfdHlw
ZX0gJiBcXAotXGhsaW5lCi1caHNwYWNlezAuNWNtfXtcdHQgQ0R9ICYgVkJEIHdpbGwgYXBwZWFy
IHRvIGd1ZXN0IGFzIENEIFxcCi1caHNwYWNlezAuNWNtfXtcdHQgRGlza30gJiBWQkQgd2lsbCBh
cHBlYXIgdG8gZ3Vlc3QgYXMgZGlzayBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLQotXHZz
cGFjZXsxY219Ci1cbmV3cGFnZQotCi1cc2VjdGlvbntFcnJvciBIYW5kbGluZ30KLVdoZW4gYSBs
b3ctbGV2ZWwgdHJhbnNwb3J0IGVycm9yIG9jY3Vycywgb3IgYSByZXF1ZXN0IGlzIG1hbGZvcm1l
ZCBhdCB0aGUgSFRUUAotb3IgWE1MLVJQQyBsZXZlbCwgdGhlIHNlcnZlciBtYXkgc2VuZCBhbiBY
TUwtUlBDIEZhdWx0IHJlc3BvbnNlLCBvciB0aGUgY2xpZW50Ci1tYXkgc2ltdWxhdGUgdGhlIHNh
bWUuICBUaGUgY2xpZW50IG11c3QgYmUgcHJlcGFyZWQgdG8gaGFuZGxlIHRoZXNlIGVycm9ycywK
LXRob3VnaCB0aGV5IG1heSBiZSB0cmVhdGVkIGFzIGZhdGFsLiAgT24gdGhlIHdpcmUsIHRoZXNl
IGFyZSB0cmFuc21pdHRlZCBpbiBhCi1mb3JtIHNpbWlsYXIgdG8gdGhpczoKLQotXGJlZ2lue3Zl
cmJhdGltfQotICAgIDxtZXRob2RSZXNwb25zZT4KLSAgICAgIDxmYXVsdD4KLSAgICAgICAgPHZh
bHVlPgotICAgICAgICAgIDxzdHJ1Y3Q+Ci0gICAgICAgICAgICA8bWVtYmVyPgotICAgICAgICAg
ICAgICAgIDxuYW1lPmZhdWx0Q29kZTwvbmFtZT4KLSAgICAgICAgICAgICAgICA8dmFsdWU+PGlu
dD4tMTwvaW50PjwvdmFsdWU+Ci0gICAgICAgICAgICAgIDwvbWVtYmVyPgotICAgICAgICAgICAg
ICA8bWVtYmVyPgotICAgICAgICAgICAgICAgIDxuYW1lPmZhdWx0U3RyaW5nPC9uYW1lPgotICAg
ICAgICAgICAgICAgIDx2YWx1ZT48c3RyaW5nPk1hbGZvcm1lZCByZXF1ZXN0PC9zdHJpbmc+PC92
YWx1ZT4KLSAgICAgICAgICAgIDwvbWVtYmVyPgotICAgICAgICAgIDwvc3RydWN0PgotICAgICAg
ICA8L3ZhbHVlPgotICAgICAgPC9mYXVsdD4KLSAgICA8L21ldGhvZFJlc3BvbnNlPgotXGVuZHt2
ZXJiYXRpbX0KLQotQWxsIG90aGVyIGZhaWx1cmVzIGFyZSByZXBvcnRlZCB3aXRoIGEgbW9yZSBz
dHJ1Y3R1cmVkIGVycm9yIHJlc3BvbnNlLCB0bwotYWxsb3cgYmV0dGVyIGF1dG9tYXRpYyByZXNw
b25zZSB0byBmYWlsdXJlcywgcHJvcGVyIGludGVybmF0aW9uYWxpc2F0aW9uIG9mCi1hbnkgZXJy
b3IgbWVzc2FnZSwgYW5kIGVhc2llciBkZWJ1Z2dpbmcuICBPbiB0aGUgd2lyZSwgdGhlc2UgYXJl
IHRyYW5zbWl0dGVkCi1saWtlIHRoaXM6Ci0KLVxiZWdpbnt2ZXJiYXRpbX0KLSAgICA8c3RydWN0
PgotICAgICAgPG1lbWJlcj4KLSAgICAgICAgPG5hbWU+U3RhdHVzPC9uYW1lPgotICAgICAgICA8
dmFsdWU+RmFpbHVyZTwvdmFsdWU+Ci0gICAgICA8L21lbWJlcj4KLSAgICAgIDxtZW1iZXI+Ci0g
ICAgICAgIDxuYW1lPkVycm9yRGVzY3JpcHRpb248L25hbWU+Ci0gICAgICAgIDx2YWx1ZT4KLSAg
ICAgICAgICA8YXJyYXk+Ci0gICAgICAgICAgICA8ZGF0YT4KLSAgICAgICAgICAgICAgPHZhbHVl
Pk1BUF9EVVBMSUNBVEVfS0VZPC92YWx1ZT4KLSAgICAgICAgICAgICAgPHZhbHVlPkN1c3RvbWVy
PC92YWx1ZT4KLSAgICAgICAgICAgICAgPHZhbHVlPmVTcGVpbCBJbmMuPC92YWx1ZT4KLSAgICAg
ICAgICAgICAgPHZhbHVlPmVTcGVpbCBJbmNvcnBvcmF0ZWQ8L3ZhbHVlPgotICAgICAgICAgICAg
PC9kYXRhPgotICAgICAgICAgIDwvYXJyYXk+Ci0gICAgICAgIDwvdmFsdWU+Ci0gICAgICA8L21l
bWJlcj4KLSAgICA8L3N0cnVjdD4KLVxlbmR7dmVyYmF0aW19Ci0KLU5vdGUgdGhhdCB7XHR0IEVy
cm9yRGVzY3JpcHRpb259IHZhbHVlIGlzIGFuIGFycmF5IG9mIHN0cmluZyB2YWx1ZXMuIFRoZQot
Zmlyc3QgZWxlbWVudCBvZiB0aGUgYXJyYXkgaXMgYW4gZXJyb3IgY29kZTsgdGhlIHJlbWFpbmRl
ciBvZiB0aGUgYXJyYXkgYXJlCi1zdHJpbmdzIHJlcHJlc2VudGluZyBlcnJvciBwYXJhbWV0ZXJz
IHJlbGF0aW5nIHRvIHRoYXQgY29kZS4gIEluIHRoaXMgY2FzZSwKLXRoZSBjbGllbnQgaGFzIGF0
dGVtcHRlZCB0byBhZGQgdGhlIG1hcHBpbmcge1x0dCBDdXN0b21lciAkXHJpZ2h0YXJyb3ckCi1l
U3BpZWwgSW5jb3Jwb3JhdGVkfSB0byBhIE1hcCwgYnV0IGl0IGFscmVhZHkgY29udGFpbnMgdGhl
IG1hcHBpbmcKLXtcdHQgQ3VzdG9tZXIgJFxyaWdodGFycm93JCBlU3BpZWwgSW5jLn0sIGFuZCBz
byB0aGUgcmVxdWVzdCBoYXMgZmFpbGVkLgotCi1UaGUgcmVmZXJlbmNlIGJlbG93IGxpc3RzIGVh
Y2ggcG9zc2libGUgZXJyb3IgcmV0dXJuZWQgYnkgZWFjaCBtZXRob2QuCi1BcyB3ZWxsIGFzIHRo
ZSBlcnJvcnMgZXhwbGljaXRseSBsaXN0ZWQsIGFueSBtZXRob2QgbWF5IHJldHVybiBsb3ctbGV2
ZWwKLWVycm9ycyBhcyBkZXNjcmliZWQgYWJvdmUsIG9yIGFueSBvZiB0aGUgZm9sbG93aW5nIGdl
bmVyaWMgZXJyb3JzOgotCi1cYmVnaW57aXRlbWl6ZX0KLVxpdGVtIEhBTkRMRVxfSU5WQUxJRAot
XGl0ZW0gSU5URVJOQUxcX0VSUk9SCi1caXRlbSBNQVBcX0RVUExJQ0FURVxfS0VZCi1caXRlbSBN
RVNTQUdFXF9NRVRIT0RcX1VOS05PV04KLVxpdGVtIE1FU1NBR0VcX1BBUkFNRVRFUlxfQ09VTlRc
X01JU01BVENICi1caXRlbSBPUEVSQVRJT05cX05PVFxfQUxMT1dFRAotXGl0ZW0gUEVSTUlTU0lP
TlxfREVOSUVECi1caXRlbSBTRVNTSU9OXF9JTlZBTElECi1cZW5ke2l0ZW1pemV9Ci0KLUVhY2gg
cG9zc2libGUgZXJyb3IgY29kZSBpcyBkb2N1bWVudGVkIGluIHRoZSBmb2xsb3dpbmcgc2VjdGlv
bi4KLQotXHN1YnNlY3Rpb257RXJyb3IgQ29kZXN9Ci0KLVxzdWJzdWJzZWN0aW9ue0hBTkRMRVxf
SU5WQUxJRH0KLQotWW91IGdhdmUgYW4gaW52YWxpZCBoYW5kbGUuICBUaGUgb2JqZWN0IG1heSBo
YXZlIHJlY2VudGx5IGJlZW4gZGVsZXRlZC4gCi1UaGUgY2xhc3MgcGFyYW1ldGVyIGdpdmVzIHRo
ZSB0eXBlIG9mIHJlZmVyZW5jZSBnaXZlbiwgYW5kIHRoZSBoYW5kbGUKLXBhcmFtZXRlciBlY2hv
ZXMgdGhlIGJhZCB2YWx1ZSBnaXZlbi4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJl
On0KLVxiZWdpbnt2ZXJiYXRpbX1IQU5ETEVfSU5WQUxJRChjbGFzcywgaGFuZGxlKVxlbmR7dmVy
YmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1c
c3Vic3Vic2VjdGlvbntJTlRFUk5BTFxfRVJST1J9Ci0KLVRoZSBzZXJ2ZXIgZmFpbGVkIHRvIGhh
bmRsZSB5b3VyIHJlcXVlc3QsIGR1ZSB0byBhbiBpbnRlcm5hbCBlcnJvci4gIFRoZQotZ2l2ZW4g
bWVzc2FnZSBtYXkgZ2l2ZSBkZXRhaWxzIHVzZWZ1bCBmb3IgZGVidWdnaW5nIHRoZSBwcm9ibGVt
LgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfUlO
VEVSTkFMX0VSUk9SKG1lc3NhZ2UpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7
MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue01BUFxfRFVQTElDQVRF
XF9LRVl9Ci0KLVlvdSB0cmllZCB0byBhZGQgYSBrZXktdmFsdWUgcGFpciB0byBhIG1hcCwgYnV0
IHRoYXQga2V5IGlzIGFscmVhZHkgdGhlcmUuIAotVGhlIGtleSwgY3VycmVudCB2YWx1ZSwgYW5k
IHRoZSBuZXcgdmFsdWUgdGhhdCB5b3UgdHJpZWQgdG8gc2V0IGFyZSBhbGwKLWVjaG9lZC4KLQot
XHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1NQVBfRFVQ
TElDQVRFX0tFWShrZXksIGN1cnJlbnQgdmFsdWUsIG5ldyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQot
XGJlZ2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1cZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNl
Y3Rpb257TUVTU0FHRVxfTUVUSE9EXF9VTktOT1dOfQotCi1Zb3UgdHJpZWQgdG8gY2FsbCBhIG1l
dGhvZCB0aGF0IGRvZXMgbm90IGV4aXN0LiAgVGhlIG1ldGhvZCBuYW1lIHRoYXQgeW91Ci11c2Vk
IGlzIGVjaG9lZC4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2
ZXJiYXRpbX1NRVNTQUdFX01FVEhPRF9VTktOT1dOKG1ldGhvZClcZW5ke3ZlcmJhdGltfQotXGJl
Z2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1cZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rp
b257TUVTU0FHRVxfUEFSQU1FVEVSXF9DT1VOVFxfTUlTTUFUQ0h9Ci0KLVlvdSB0cmllZCB0byBj
YWxsIGEgbWV0aG9kIHdpdGggdGhlIGluY29ycmVjdCBudW1iZXIgb2YgcGFyYW1ldGVycy4gIFRo
ZQotZnVsbHktcXVhbGlmaWVkIG1ldGhvZCBuYW1lIHRoYXQgeW91IHVzZWQsIGFuZCB0aGUgbnVt
YmVyIG9mIHJlY2VpdmVkIGFuZAotZXhwZWN0ZWQgcGFyYW1ldGVycyBhcmUgcmV0dXJuZWQuCi0K
LVx2c3BhY2V7MC4zY219Ci17XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19TUVTU0FH
RV9QQVJBTUVURVJfQ09VTlRfTUlTTUFUQ0gobWV0aG9kLCBleHBlY3RlZCwgcmVjZWl2ZWQpXGVu
ZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9
Ci0KLVxzdWJzdWJzZWN0aW9ue05FVFdPUktcX0FMUkVBRFlcX0NPTk5FQ1RFRH0KLQotWW91IHRy
aWVkIHRvIGNyZWF0ZSBhIFBJRiwgYnV0IHRoZSBuZXR3b3JrIHlvdSB0cmllZCB0byBhdHRhY2gg
aXQgdG8gaXMKLWFscmVhZHkgYXR0YWNoZWQgdG8gc29tZSBvdGhlciBQSUYsIGFuZCBzbyB0aGUg
Y3JlYXRpb24gZmFpbGVkLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJl
Z2lue3ZlcmJhdGltfU5FVFdPUktfQUxSRUFEWV9DT05ORUNURUQobmV0d29yaywgY29ubmVjdGVk
IFBJRilcZW5ke3ZlcmJhdGltfQotXGJlZ2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1cZW5k
e2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rpb257T1BFUkFUSU9OXF9OT1RcX0FMTE9XRUR9Ci0KLVlv
dSBhdHRlbXB0ZWQgYW4gb3BlcmF0aW9uIHRoYXQgd2FzIG5vdCBhbGxvd2VkLgotCi1cdnNwYWNl
ezAuM2NtfQotTm8gcGFyYW1ldGVycy4KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9
XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1BFUk1JU1NJT05cX0RFTklFRH0KLQotWW91
IGRvIG5vdCBoYXZlIHRoZSByZXF1aXJlZCBwZXJtaXNzaW9ucyB0byBwZXJmb3JtIHRoZSBvcGVy
YXRpb24uCi0KLVx2c3BhY2V7MC4zY219Ci1ObyBwYXJhbWV0ZXJzLgotXGJlZ2lue2NlbnRlcn1c
cnVsZXsxMGVtfXswLjFwdH1cZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rpb257UElGXF9JU1xf
UEhZU0lDQUx9Ci0KLVlvdSB0cmllZCB0byBkZXN0cm95IGEgUElGLCBidXQgaXQgcmVwcmVzZW50
cyBhbiBhc3BlY3Qgb2YgdGhlIHBoeXNpY2FsCi1ob3N0IGNvbmZpZ3VyYXRpb24sIGFuZCBzbyBj
YW5ub3QgYmUgZGVzdHJveWVkLiAgVGhlIHBhcmFtZXRlciBlY2hvZXMgdGhlCi1QSUYgaGFuZGxl
IHlvdSBnYXZlLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3Zl
cmJhdGltfVBJRl9JU19QSFlTSUNBTChQSUYpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9
XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1NFU1NJT05c
X0FVVEhFTlRJQ0FUSU9OXF9GQUlMRUR9Ci0KLVRoZSBjcmVkZW50aWFscyBnaXZlbiBieSB0aGUg
dXNlciBhcmUgaW5jb3JyZWN0LCBzbyBhY2Nlc3MgaGFzIGJlZW4gZGVuaWVkLAotYW5kIHlvdSBo
YXZlIG5vdCBiZWVuIGlzc3VlZCBhIHNlc3Npb24gaGFuZGxlLgotCi1cdnNwYWNlezAuM2NtfQot
Tm8gcGFyYW1ldGVycy4KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50
ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1NFU1NJT05cX0lOVkFMSUR9Ci0KLVlvdSBnYXZlIGFuIGlu
dmFsaWQgc2Vzc2lvbiBoYW5kbGUuICBJdCBtYXkgaGF2ZSBiZWVuIGludmFsaWRhdGVkIGJ5IGEK
LXNlcnZlciByZXN0YXJ0LCBvciB0aW1lZCBvdXQuICBZb3Ugc2hvdWxkIGdldCBhIG5ldyBzZXNz
aW9uIGhhbmRsZSwgdXNpbmcKLW9uZSBvZiB0aGUgc2Vzc2lvbi5sb2dpblxfIGNhbGxzLiAgVGhp
cyBlcnJvciBkb2VzIG5vdCBpbnZhbGlkYXRlIHRoZQotY3VycmVudCBjb25uZWN0aW9uLiAgVGhl
IGhhbmRsZSBwYXJhbWV0ZXIgZWNob2VzIHRoZSBiYWQgdmFsdWUgZ2l2ZW4uCi0KLVx2c3BhY2V7
MC4zY219Ci17XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19U0VTU0lPTl9JTlZBTElE
KGhhbmRsZSlcZW5ke3ZlcmJhdGltfQotXGJlZ2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1c
ZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rpb257U0VTU0lPTlxfTk9UXF9SRUdJU1RFUkVEfQot
Ci1UaGlzIHNlc3Npb24gaXMgbm90IHJlZ2lzdGVyZWQgdG8gcmVjZWl2ZSBldmVudHMuICBZb3Ug
bXVzdCBjYWxsCi1ldmVudC5yZWdpc3RlciBiZWZvcmUgZXZlbnQubmV4dC4gIFRoZSBzZXNzaW9u
IGhhbmRsZSB5b3UgYXJlIHVzaW5nIGlzCi1lY2hvZWQuCi0KLVx2c3BhY2V7MC4zY219Ci17XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19U0VTU0lPTl9OT1RfUkVHSVNURVJFRChoYW5k
bGUpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtj
ZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1ZBTFVFXF9OT1RcX1NVUFBPUlRFRH0KLQotWW91IGF0
dGVtcHRlZCB0byBzZXQgYSB2YWx1ZSB0aGF0IGlzIG5vdCBzdXBwb3J0ZWQgYnkgdGhpcyBpbXBs
ZW1lbnRhdGlvbi4gCi1UaGUgZnVsbHktcXVhbGlmaWVkIGZpZWxkIG5hbWUgYW5kIHRoZSB2YWx1
ZSB0aGF0IHlvdSB0cmllZCB0byBzZXQgYXJlCi1yZXR1cm5lZC4gIEFsc28gcmV0dXJuZWQgaXMg
YSBkZXZlbG9wZXItb25seSBkaWFnbm9zdGljIHJlYXNvbi4KLQotXHZzcGFjZXswLjNjbX0KLXtc
YmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1WQUxVRV9OT1RfU1VQUE9SVEVEKGZpZWxk
LCB2YWx1ZSwgcmVhc29uKVxlbmR7dmVyYmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19
ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1cc3Vic3Vic2VjdGlvbntWTEFOXF9UQUdcX0lOVkFMSUR9
Ci0KLVlvdSB0cmllZCB0byBjcmVhdGUgYSBWTEFOLCBidXQgdGhlIHRhZyB5b3UgZ2F2ZSB3YXMg
aW52YWxpZCAtLSBpdCBtdXN0IGJlCi1iZXR3ZWVuIDAgYW5kIDQwOTUuICBUaGUgcGFyYW1ldGVy
IGVjaG9lcyB0aGUgVkxBTiB0YWcgeW91IGdhdmUuCi0KLVx2c3BhY2V7MC4zY219Ci17XGJmIFNp
Z25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19VkxBTl9UQUdfSU5WQUxJRChWTEFOKVxlbmR7dmVy
YmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1c
c3Vic3Vic2VjdGlvbntWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVlvdSBhdHRlbXB0ZWQgYW4g
b3BlcmF0aW9uIG9uIGEgVk0gdGhhdCB3YXMgbm90IGluIGFuIGFwcHJvcHJpYXRlIHBvd2VyCi1z
dGF0ZSBhdCB0aGUgdGltZTsgZm9yIGV4YW1wbGUsIHlvdSBhdHRlbXB0ZWQgdG8gc3RhcnQgYSBW
TSB0aGF0IHdhcwotYWxyZWFkeSBydW5uaW5nLiAgVGhlIHBhcmFtZXRlcnMgcmV0dXJuZWQgYXJl
IHRoZSBWTSdzIGhhbmRsZSwgYW5kIHRoZQotZXhwZWN0ZWQgYW5kIGFjdHVhbCBWTSBzdGF0ZSBh
dCB0aGUgdGltZSBvZiB0aGUgY2FsbC4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJl
On0KLVxiZWdpbnt2ZXJiYXRpbX1WTV9CQURfUE9XRVJfU1RBVEUodm0sIGV4cGVjdGVkLCBhY3R1
YWwpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtj
ZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1ZNXF9IVk1cX1JFUVVJUkVEfQotCi1IVk0gaXMgcmVx
dWlyZWQgZm9yIHRoaXMgb3BlcmF0aW9uCi0KLVx2c3BhY2V7MC4zY219Ci17XGJmIFNpZ25hdHVy
ZTp9Ci1cYmVnaW57dmVyYmF0aW19Vk1fSFZNX1JFUVVJUkVEKHZtKVxlbmR7dmVyYmF0aW19Ci1c
YmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1cc3Vic3Vic2Vj
dGlvbntTRUNVUklUWVxfRVJST1J9Ci0KLUEgc2VjdXJpdHkgZXJyb3Igb2NjdXJyZWQuIFRoZSBw
YXJhbWV0ZXIgcHJvdmlkZXMgdGhlIHhlbiBzZWN1cml0eQotZXJyb3IgY29kZSBhbmQgYSBtZXNz
YWdlIGRlc2NyaWJpbmcgdGhlIGVycm9yLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfVNFQ1VSSVRZX0VSUk9SKHhzZXJyLCBtZXNzYWdlKVxlbmR7
dmVyYmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQot
Ci1cc3Vic3Vic2VjdGlvbntQT09MXF9CQURcX1NUQVRFfQotCi1Zb3UgYXR0ZW1wdGVkIGFuIG9w
ZXJhdGlvbiBvbiBhIHBvb2wgdGhhdCB3YXMgbm90IGluIGFuIGFwcHJvcHJpYXRlIHN0YXRlCi1h
dCB0aGUgdGltZTsgZm9yIGV4YW1wbGUsIHlvdSBhdHRlbXB0ZWQgdG8gYWN0aXZhdGUgYSBwb29s
IHRoYXQgd2FzCi1hbHJlYWR5IGFjdGl2YXRlZC4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2ln
bmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1QT09MX0JBRF9TVEFURShjdXJyZW50IHBvb2wgc3Rh
dGUpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtj
ZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue0lOU1VGRklDSUVOVFxfQ1BVU30KLQotWW91IGF0dGVt
cHRlZCB0byBhY3RpdmF0ZSBhIGNwdVxfcG9vbCBidXQgdGhlcmUgYXJlIG5vdCBlbm91Z2gKLXVu
YWxsb2NhdGVkIENQVXMgdG8gc2F0aXNmeSB0aGUgcmVxdWVzdC4KLQotXHZzcGFjZXswLjNjbX0K
LXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1JTlNVRkZJQ0lFTlRfQ1BVUyhuZWVk
ZWQgY3B1IGNvdW50LCBhdmFpbGFibGUgY3B1IGNvdW50KVxlbmR7dmVyYmF0aW19Ci1cYmVnaW57
Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1cc3Vic3Vic2VjdGlvbntV
TktPV05cX1NDSEVEXF9QT0xJQ1l9Ci0KLVRoZSBzcGVjaWZpZWQgc2NoZWR1bGVyIHBvbGljeSBp
cyB1bmtvd24gdG8gdGhlIGhvc3QuCi0KLVx2c3BhY2V7MC4zY219Ci17XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19VU5LT1dOX1NDSEVEX1BPTElDWSgpXGVuZHt2ZXJiYXRpbX0KLVxi
ZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0
aW9ue0lOVkFMSURcX0NQVX0KLQotWW91IHRyaWVkIHRvIHJlY29uZmlndXJlIGEgY3B1XF9wb29s
IHdpdGggYSBDUFUgdGhhdCBpcyB1bmtvd24gdG8gdGhlIGhvc3QKLW9yIGhhcyBhIHdyb25nIHN0
YXRlLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGlt
fUlOVkFMSURfQ1BVKG1lc3NhZ2UpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7
MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue0xBU1RcX0NQVVxfTk9U
XF9SRU1PVkVBQkxFfQotCi1Zb3UgdHJpZWQgdG8gcmVtb3ZlIHRoZSBsYXN0IENQVSBmcm9tIGEg
Y3B1XF9wb29sIHRoYXQgaGFzIG9uZSBvciBtb3JlCi1hY3RpdmUgZG9tYWlucy4KLQotXHZzcGFj
ZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1MQVNUX0NQVV9OT1Rf
UkVNT1ZFQUJMRShtZXNzYWdlKVxlbmR7dmVyYmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEw
ZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFzczogc2Vz
c2lvbn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IHNlc3Npb259Ci1cYmVnaW57bG9u
Z3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17
fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgc2Vzc2lvbn0gXFwKLVxtdWx0aWNv
bHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezEx
Y219e1xlbSBBCi1zZXNzaW9uLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYg
RGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdGhpc1xfaG9zdH0gJiBob3N0IHJl
ZiAmIEN1cnJlbnRseSBjb25uZWN0ZWQgaG9zdCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCB0aGlzXF91c2VyfSAmIHVzZXIgcmVmICYgQ3VycmVudGx5IGNvbm5lY3RlZCB1
c2VyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGxhc3RcX2FjdGl2ZX0g
JiBpbnQgJiBUaW1lc3RhbXAgZm9yIGxhc3QgdGltZSBzZXNzaW9uIHdhcyBhY3RpdmUgXFwKLVxo
bGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBj
bGFzczogc2Vzc2lvbn0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5sb2dpblxfd2l0aFxfcGFz
c3dvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUF0dGVtcHQgdG8gYXV0aGVudGljYXRlIHRoZSB1
c2VyLCByZXR1cm5pbmcgYSBzZXNzaW9uXF9pZCBpZiBzdWNjZXNzZnVsLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChzZXNzaW9uIHJlZikgbG9naW5f
d2l0aF9wYXNzd29yZCAoc3RyaW5nIHVuYW1lLCBzdHJpbmcgcHdkKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdW5hbWUgJiBV
c2VybmFtZSBmb3IgbG9naW4uIFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHB3ZCAmIFBh
c3N3b3JkIGZvciBsb2dpbi4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXNlc3Npb24g
cmVmCi19Ci0KLQotSUQgb2YgbmV3bHkgY3JlYXRlZCBzZXNzaW9uCi0KLVx2c3BhY2V7MC4zY219
Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFNFU1NJT05cX0FV
VEhFTlRJQ0FUSU9OXF9GQUlMRUR9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5sb2dvdXR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUxvZyBvdXQgb2YgYSBzZXNzaW9uLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgbG9nb3V0IChzZXNzaW9uX2lkIHMpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlk
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIHNl
c3Npb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
c3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIHNlc3Npb24gcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc2Vzc2lvbiByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF90aGlzXF9ob3N0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHRoaXNcX2hv
c3QgZmllbGQgb2YgdGhlIGdpdmVuIHNlc3Npb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3QgcmVmKSBnZXRfdGhpc19ob3N0IChzZXNzaW9u
X2lkIHMsIHNlc3Npb24gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgc2Vzc2lvbiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdCByZWYKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3RoaXNcX3VzZXJ9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdGhpc1xfdXNlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
c2Vzc2lvbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSAodXNlciByZWYpIGdldF90aGlzX3VzZXIgKHNlc3Npb25faWQgcywgc2Vzc2lvbiByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBz
ZXNzaW9uIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi11c2VyIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfbGFzdFxfYWN0aXZlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1H
ZXQgdGhlIGxhc3RcX2FjdGl2ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gc2Vzc2lvbi4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X2xhc3RfYWN0
aXZlIChzZXNzaW9uX2lkIHMsIHNlc3Npb24gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc2Vzc2lvbiByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVp
ZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBzZXNzaW9uIGlu
c3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChzZXNzaW9uIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Np
b25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0
dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zZXNzaW9uIHJlZgotfQotCi0KLXJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRl
IG9mIHRoZSBnaXZlbiBzZXNzaW9uLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IChzZXNzaW9uIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBz
LCBzZXNzaW9uIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IHNlc3Npb24gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXNlc3Npb24gcmVjb3JkCi19Ci0K
LQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257
Q2xhc3M6IHRhc2t9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiB0YXNrfQotXGJlZ2lu
e2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1u
ezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIHRhc2t9IFxcCi1cbXVsdGlj
b2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsx
MWNtfXtcZW0gQQotbG9uZy1ydW5uaW5nIGFzeW5jaHJvbm91cyB0YXNrLn19IFxcCi1caGxpbmUK
LVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlm
aWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgbmFtZS9sYWJlbH0gJiBzdHJpbmcgJiBhIGh1bWFuLXJlYWRhYmxlIG5hbWUgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbmFtZS9kZXNjcmlwdGlvbn0gJiBzdHJpbmcg
JiBhIG5vdGVzIGZpZWxkIGNvbnRhaW5nIGh1bWFuLXJlYWRhYmxlIGRlc2NyaXB0aW9uIFxcCi0k
XG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0YXR1c30gJiB0YXNrXF9zdGF0dXNc
X3R5cGUgJiBjdXJyZW50IHN0YXR1cyBvZiB0aGUgdGFzayBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBzZXNzaW9ufSAmIHNlc3Npb24gcmVmICYgdGhlIHNlc3Npb24gdGhh
dCBjcmVhdGVkIHRoZSB0YXNrIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0
IHByb2dyZXNzfSAmIGludCAmIGlmIHRoZSB0YXNrIGlzIHN0aWxsIHBlbmRpbmcsIHRoaXMgZmll
bGQgY29udGFpbnMgdGhlIGVzdGltYXRlZCBwZXJjZW50YWdlIGNvbXBsZXRlICgwLTEwMCkuIElm
IHRhc2sgaGFzIGNvbXBsZXRlZCAoc3VjY2Vzc2Z1bGx5IG9yIHVuc3VjY2Vzc2Z1bGx5KSB0aGlz
IHNob3VsZCBiZSAxMDAuIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHR5
cGV9ICYgc3RyaW5nICYgaWYgdGhlIHRhc2sgaGFzIGNvbXBsZXRlZCBzdWNjZXNzZnVsbHksIHRo
aXMgZmllbGQgY29udGFpbnMgdGhlIHR5cGUgb2YgdGhlIGVuY29kZWQgcmVzdWx0IChpLmUuIG5h
bWUgb2YgdGhlIGNsYXNzIHdob3NlIHJlZmVyZW5jZSBpcyBpbiB0aGUgcmVzdWx0IGZpZWxkKS4g
VW5kZWZpbmVkIG90aGVyd2lzZS4gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgcmVzdWx0fSAmIHN0cmluZyAmIGlmIHRoZSB0YXNrIGhhcyBjb21wbGV0ZWQgc3VjY2Vzc2Z1
bGx5LCB0aGlzIGZpZWxkIGNvbnRhaW5zIHRoZSByZXN1bHQgdmFsdWUgKGVpdGhlciBWb2lkIG9y
IGFuIG9iamVjdCByZWZlcmVuY2UpLiBVbmRlZmluZWQgb3RoZXJ3aXNlLiBcXAotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBlcnJvclxfaW5mb30gJiBzdHJpbmcgU2V0ICYgaWYg
dGhlIHRhc2sgaGFzIGZhaWxlZCwgdGhpcyBmaWVsZCBjb250YWlucyB0aGUgc2V0IG9mIGFzc29j
aWF0ZWQgZXJyb3Igc3RyaW5ncy4gVW5kZWZpbmVkIG90aGVyd2lzZS4gXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgYWxsb3dlZFxfb3BlcmF0aW9uc30gJiAodGFza1xfYWxs
b3dlZFxfb3BlcmF0aW9ucykgU2V0ICYgT3BlcmF0aW9ucyBhbGxvd2VkIG9uIHRoaXMgdGFzayBc
XAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3
aXRoIGNsYXNzOiB0YXNrfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNhbmNlbH0KLQote1xi
ZiBPdmVydmlldzp9IAotQ2FuY2VsIHRoaXMgdGFzay4gIElmIHRhc2suYWxsb3dlZFxfb3BlcmF0
aW9ucyBkb2VzIG5vdCBjb250YWluIENhbmNlbCwKLXRoZW4gdGhpcyB3aWxsIGZhaWwgd2l0aCBP
UEVSQVRJT05cX05PVFxfQUxMT1dFRC4gIFRoZSB0YXNrIHdpbGwgc2hvdyB0aGUKLXN0YXR1cyAn
Y2FuY2VsbGluZycsIGFuZCB5b3Ugc2hvdWxkIGNvbnRpbnVlIHRvIGNoZWNrIGl0cyBzdGF0dXMg
dW50aWwgaXQKLXNob3dzICdjYW5jZWxsZWQnLiAgVGhlcmUgaXMgbm8gZ3VhcmFudGVlIGFzIHRv
IHRoZSB0aW1lIHdpdGhpbiB3aGljaCB0aGlzCi10YXNrIHdpbGwgYmUgY2FuY2VsbGVkLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgY2FuY2Vs
IChzZXNzaW9uX2lkIHMsIHRhc2sgcmVmIHRhc2spXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdGFzayByZWYgfSAmIHRhc2sgJiBUaGUgdGFzayBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLQotXG5vaW5kZW50e1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtcdHQgT1BFUkFUSU9O
XF9OT1RcX0FMTE9XRUR9Ci0KLVx2c3BhY2V7MC42Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRo
ZSB0YXNrcyBrbm93biB0byB0aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19ICgodGFzayByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9p
ZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotKHRhc2sgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2Vz
IHRvIGFsbCBvYmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIHRhc2suCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChz
ZXNzaW9uX2lkIHMsIHRhc2sgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdGFzayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9sYWJlbH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiB0
YXNrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0
cmluZyBnZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCB0YXNrIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHRhc2sgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfbmFtZVxfZGVzY3JpcHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFt
ZS9kZXNjcmlwdGlvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gdGFzay4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfZGVzY3JpcHRp
b24gKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0K
LQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N0YXR1c30KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdGF0dXMgZmllbGQgb2YgdGhlIGdpdmVuIHRhc2su
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHRhc2tf
c3RhdHVzX3R5cGUpIGdldF9zdGF0dXMgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi10YXNrXF9zdGF0dXNcX3R5cGUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Nlc3Npb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0
aGUgc2Vzc2lvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gdGFzay4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoc2Vzc2lvbiByZWYpIGdldF9zZXNzaW9uIChz
ZXNzaW9uX2lkIHMsIHRhc2sgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdGFzayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc2Vzc2lvbiByZWYKLX0K
LQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Byb2dyZXNzfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHByb2dyZXNzIGZpZWxkIG9mIHRoZSBnaXZlbiB0
YXNrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGlu
dCBnZXRfcHJvZ3Jlc3MgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3R5
cGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdHlwZSBmaWVsZCBvZiB0aGUgZ2l2ZW4g
dGFzay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X3Jlc3VsdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSByZXN1bHQgZmllbGQgb2YgdGhl
IGdpdmVuIHRhc2suCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF9yZXN1bHQgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX2Vycm9yXF9pbmZvfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGVycm9y
XF9pbmZvIGZpZWxkIG9mIHRoZSBnaXZlbiB0YXNrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChzdHJpbmcgU2V0KSBnZXRfZXJyb3JfaW5mbyAoc2Vz
c2lvbl9pZCBzLCB0YXNrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IHRhc2sgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZyBTZXQKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbG93ZWRcX29wZXJh
dGlvbnN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgYWxsb3dlZFxfb3BlcmF0aW9ucyBm
aWVsZCBvZiB0aGUgZ2l2ZW4gdGFzay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKHRhc2tfYWxsb3dlZF9vcGVyYXRpb25zKSBTZXQpIGdldF9hbGxv
d2VkX29wZXJhdGlvbnMgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci0odGFza1xfYWxsb3dlZFxfb3BlcmF0aW9ucykgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IGEgcmVmZXJlbmNlIHRvIHRoZSB0YXNrIGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBV
VUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICh0
YXNrIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1
dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi10YXNrIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250
YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiB0YXNrLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICh0YXNrIHJlY29yZCkgZ2V0X3Jl
Y29yZCAoc2Vzc2lvbl9pZCBzLCB0YXNrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHRhc2sgcmVmIH0gJiBzZWxmICYgcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXRhc2sgcmVj
b3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX2J5XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGFsbCB0aGUgdGFz
ayBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKCh0YXNrIHJlZikgU2V0KSBnZXRfYnlfbmFtZV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgbGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBsYWJlbCAmIGxhYmVsIG9m
IG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLSh0YXNrIHJl
ZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBvYmplY3RzIHdpdGggbWF0Y2ggbmFtZXMKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsx
Y219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IGV2ZW50fQotXHN1YnNlY3Rpb257RmllbGRz
IGZvciBjbGFzczogZXZlbnR9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0
aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9
e2x8fXtcYmYgZXZlbnR9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxt
dWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLUFzeW5jaHJvbm91cyBldmVudCBy
ZWdpc3RyYXRpb24gYW5kIGhhbmRsaW5nLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBU
eXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zfSQg
JiAge1x0dCBpZH0gJiBpbnQgJiBBbiBJRCwgbW9ub3RvbmljYWxseSBpbmNyZWFzaW5nLCBhbmQg
bG9jYWwgdG8gdGhlIGN1cnJlbnQgc2Vzc2lvbiBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5z
fSQgJiAge1x0dCB0aW1lc3RhbXB9ICYgZGF0ZXRpbWUgJiBUaGUgdGltZSBhdCB3aGljaCB0aGUg
ZXZlbnQgb2NjdXJyZWQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgY2xh
c3N9ICYgc3RyaW5nICYgVGhlIG5hbWUgb2YgdGhlIGNsYXNzIG9mIHRoZSBvYmplY3QgdGhhdCBj
aGFuZ2VkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IG9wZXJhdGlvbn0g
JiBldmVudFxfb3BlcmF0aW9uICYgVGhlIG9wZXJhdGlvbiB0aGF0IHdhcyBwZXJmb3JtZWQgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgcmVmfSAmIHN0cmluZyAmIEEgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgdGhhdCBjaGFuZ2VkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtpbnN9JCAmICB7XHR0IG9ialxfdXVpZH0gJiBzdHJpbmcgJiBUaGUgdXVpZCBvZiB0aGUgb2Jq
ZWN0IHRoYXQgY2hhbmdlZCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9u
e1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBldmVudH0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5yZWdpc3Rlcn0KLQote1xiZiBPdmVydmlldzp9IAotUmVnaXN0ZXJzIHRoaXMgc2Vzc2lv
biB3aXRoIHRoZSBldmVudCBzeXN0ZW0uICBTcGVjaWZ5aW5nIHRoZSBlbXB0eSBsaXN0Ci13aWxs
IHJlZ2lzdGVyIGZvciBhbGwgY2xhc3Nlcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHJlZ2lzdGVyIChzZXNzaW9uX2lkIHMsIHN0cmluZyBT
ZXQgY2xhc3NlcylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBzdHJpbmcgU2V0IH0gJiBjbGFzc2VzICYgcmVnaXN0ZXIgZm9yIGV2ZW50cyBmb3Ig
dGhlIGluZGljYXRlZCBjbGFzc2VzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lk
Ci19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+dW5yZWdpc3Rlcn0KLQote1xiZiBPdmVydmlldzp9
IAotVW5yZWdpc3RlcnMgdGhpcyBzZXNzaW9uIHdpdGggdGhlIGV2ZW50IHN5c3RlbS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHVucmVnaXN0
ZXIgKHNlc3Npb25faWQgcywgc3RyaW5nIFNldCBjbGFzc2VzKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyBTZXQgfSAmIGNsYXNzZXMg
JiByZW1vdmUgdGhpcyBzZXNzaW9uJ3MgcmVnaXN0cmF0aW9uIGZvciB0aGUgaW5kaWNhdGVkIGNs
YXNzZXMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5uZXh0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1CbG9ja2luZyBjYWxsIHdoaWNo
IHJldHVybnMgYSAocG9zc2libHkgZW1wdHkpIGJhdGNoIG9mIGV2ZW50cy4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKGV2ZW50IHJlY29yZCkgU2V0
KSBuZXh0IChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oZXZlbnQgcmVjb3JkKSBT
ZXQKLX0KLQotCi10aGUgYmF0Y2ggb2YgZXZlbnRzCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRl
bnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBTRVNTSU9OXF9OT1RcX1JFR0lTVEVS
RUR9Ci0KLVx2c3BhY2V7MC42Y219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9u
e0NsYXNzOiBWTX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZNfQotXGJlZ2lue2xv
bmd0YWJsZX17fGxscHswLjIxXHRleHR3aWR0aH1wezAuMzNcdGV4dHdpZHRofXx9Ci1caGxpbmUK
LVxtdWx0aWNvbHVtbnsxfXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWTX0g
XFwKLVxtdWx0aWNvbHVtbns0fXt8bHx9e1xwYXJib3h7MTFjbX17XGVtIERlc2NyaXB0aW9uOiBB
Ci12aXJ0dWFsIG1hY2hpbmUgKG9yICdndWVzdCcpLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmll
bGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7
cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCBy
ZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcG93ZXJcX3N0
YXRlfSAmIHZtXF9wb3dlclxfc3RhdGUgJiBDdXJyZW50IHBvd2VyIHN0YXRlIG9mIHRoZSBtYWNo
aW5lIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgbmFtZS9sYWJlbH0gJiBzdHJpbmcgJiBhIGh1
bWFuLXJlYWRhYmxlIG5hbWUgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBuYW1lL2Rlc2NyaXB0
aW9ufSAmIHN0cmluZyAmIGEgbm90ZXMgZmllbGQgY29udGFpbmcgaHVtYW4tcmVhZGFibGUgZGVz
Y3JpcHRpb24gXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCB1c2VyXF92ZXJzaW9ufSAmIGludCAm
IGEgdXNlciB2ZXJzaW9uIG51bWJlciBmb3IgdGhpcyBtYWNoaW5lIFxcCi0kXG1hdGhpdHtSV30k
ICYgIHtcdHQgaXNcX2FcX3RlbXBsYXRlfSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgaXMgYSB0ZW1w
bGF0ZS4gVGVtcGxhdGUgVk1zIGNhbiBuZXZlciBiZSBzdGFydGVkLCB0aGV5IGFyZSB1c2VkIG9u
bHkgZm9yIGNsb25pbmcgb3RoZXIgVk1zIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgYXV0b1xf
cG93ZXJcX29ufSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgVk0gc2hvdWxkIGJlIHN0YXJ0ZWQgYXV0
b21hdGljYWxseSBhZnRlciBob3N0IGJvb3QgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgc3VzcGVuZFxfVkRJfSAmIFZESSByZWYgJiBUaGUgVkRJIHRoYXQgYSBzdXNwZW5k
IGltYWdlIGlzIHN0b3JlZCBvbi4gKE9ubHkgaGFzIG1lYW5pbmcgaWYgVk0gaXMgY3VycmVudGx5
IHN1c3BlbmRlZCkgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcmVzaWRl
bnRcX29ufSAmIGhvc3QgcmVmICYgdGhlIGhvc3QgdGhlIFZNIGlzIGN1cnJlbnRseSByZXNpZGVu
dCBvbiBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG1lbW9yeS9zdGF0aWNcX21heH0gJiBpbnQg
JiBTdGF0aWNhbGx5LXNldCAoaS5lLiBhYnNvbHV0ZSkgbWF4aW11bSAoYnl0ZXMpIFxcCi0kXG1h
dGhpdHtSV30kICYgIHtcdHQgbWVtb3J5L2R5bmFtaWNcX21heH0gJiBpbnQgJiBEeW5hbWljIG1h
eGltdW0gKGJ5dGVzKSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG1lbW9yeS9keW5hbWljXF9t
aW59ICYgaW50ICYgRHluYW1pYyBtaW5pbXVtIChieXRlcykgXFwKLSRcbWF0aGl0e1JXfSQgJiAg
e1x0dCBtZW1vcnkvc3RhdGljXF9taW59ICYgaW50ICYgU3RhdGljYWxseS1zZXQgKGkuZS4gYWJz
b2x1dGUpIG1pbmltdW0gKGJ5dGVzKSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IFZDUFVzL3Bh
cmFtc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBjb25maWd1cmF0aW9u
IHBhcmFtZXRlcnMgZm9yIHRoZSBzZWxlY3RlZCBWQ1BVIHBvbGljeSBcXAotJFxtYXRoaXR7Uld9
JCAmICB7XHR0IFZDUFVzL21heH0gJiBpbnQgJiBNYXggbnVtYmVyIG9mIFZDUFVzIFxcCi0kXG1h
dGhpdHtSV30kICYgIHtcdHQgVkNQVXMvYXRcX3N0YXJ0dXB9ICYgaW50ICYgQm9vdCBudW1iZXIg
b2YgVkNQVXMgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBhY3Rpb25zL2FmdGVyXF9zaHV0ZG93
bn0gJiBvblxfbm9ybWFsXF9leGl0ICYgYWN0aW9uIHRvIHRha2UgYWZ0ZXIgdGhlIGd1ZXN0IGhh
cyBzaHV0ZG93biBpdHNlbGYgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBhY3Rpb25zL2FmdGVy
XF9yZWJvb3R9ICYgb25cX25vcm1hbFxfZXhpdCAmIGFjdGlvbiB0byB0YWtlIGFmdGVyIHRoZSBn
dWVzdCBoYXMgcmVib290ZWQgaXRzZWxmIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgYWN0aW9u
cy9hZnRlclxfY3Jhc2h9ICYgb25cX2NyYXNoXF9iZWhhdmlvdXIgJiBhY3Rpb24gdG8gdGFrZSBp
ZiB0aGUgZ3Vlc3QgY3Jhc2hlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBjb25zb2xlc30gJiAoY29uc29sZSByZWYpIFNldCAmIHZpcnR1YWwgY29uc29sZSBkZXZpY2Vz
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFZJRnN9ICYgKFZJRiByZWYp
IFNldCAmIHZpcnR1YWwgbmV0d29yayBpbnRlcmZhY2VzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtydW59JCAmICB7XHR0IFZCRHN9ICYgKFZCRCByZWYpIFNldCAmIHZpcnR1YWwgYmxvY2sgZGV2
aWNlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBjcmFzaFxfZHVtcHN9
ICYgKGNyYXNoZHVtcCByZWYpIFNldCAmIGNyYXNoIGR1bXBzIGFzc29jaWF0ZWQgd2l0aCB0aGlz
IFZNIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFZUUE1zfSAmIChWVFBN
IHJlZikgU2V0ICYgdmlydHVhbCBUUE1zIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IERQQ0lzfSAmIChEUENJIHJlZikgU2V0ICYgcGFzcy10aHJvdWdoIFBDSSBkZXZpY2Vz
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IERTQ1NJc30gJiAoRFNDU0kg
cmVmKSBTZXQgJiBoYWxmLXZpcnR1YWxpemVkIFNDU0kgZGV2aWNlcyBcXAotJFxtYXRoaXR7Uk99
X1xtYXRoaXR7cnVufSQgJiAge1x0dCBEU0NTSVxfSEJBc30gJiAoRFNDU0lcX0hCQSByZWYpIFNl
dCAmIGhhbGYtdmlydHVhbGl6ZWQgU0NTSSBob3N0IGJ1cyBhZGFwdGVycyBcXAotJFxtYXRoaXR7
Uld9JCAmICB7XHR0IFBWL2Jvb3Rsb2FkZXJ9ICYgc3RyaW5nICYgbmFtZSBvZiBvciBwYXRoIHRv
IGJvb3Rsb2FkZXIgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBQVi9rZXJuZWx9ICYgc3RyaW5n
ICYgVVJJIG9mIGtlcm5lbCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IFBWL3JhbWRpc2t9ICYg
c3RyaW5nICYgVVJJIG9mIGluaXRyZCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IFBWL2FyZ3N9
ICYgc3RyaW5nICYga2VybmVsIGNvbW1hbmQtbGluZSBhcmd1bWVudHMgXFwKLSRcbWF0aGl0e1JX
fSQgJiAge1x0dCBQVi9ib290bG9hZGVyXF9hcmdzfSAmIHN0cmluZyAmIG1pc2NlbGxhbmVvdXMg
YXJndW1lbnRzIGZvciB0aGUgYm9vdGxvYWRlciBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IEhW
TS9ib290XF9wb2xpY3l9ICYgc3RyaW5nICYgSFZNIGJvb3QgcG9saWN5IFxcCi0kXG1hdGhpdHtS
V30kICYgIHtcdHQgSFZNL2Jvb3RcX3BhcmFtc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3Ry
aW5nKSBNYXAgJiBIVk0gYm9vdCBwYXJhbXMgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBwbGF0
Zm9ybX0gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBwbGF0Zm9ybS1zcGVj
aWZpYyBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgUENJXF9idXN9ICYg
c3RyaW5nICYgUENJIGJ1cyBwYXRoIGZvciBwYXNzLXRocm91Z2ggZGV2aWNlcyBcXAotJFxtYXRo
aXR7Uld9JCAmICB7XHR0IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0
cmluZykgTWFwICYgYWRkaXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IGRvbWlkfSAmIGludCAmIGRvbWFpbiBJRCAoaWYgYXZhaWxhYmxl
LCAtMSBvdGhlcndpc2UpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGlz
XF9jb250cm9sXF9kb21haW59ICYgYm9vbCAmIHRydWUgaWYgdGhpcyBpcyBhIGNvbnRyb2wgZG9t
YWluIChkb21haW4gMCBvciBhIGRyaXZlciBkb21haW4pIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtydW59JCAmICB7XHR0IG1ldHJpY3N9ICYgVk1cX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3Nv
Y2lhdGVkIHdpdGggdGhpcyBWTSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBndWVzdFxfbWV0cmljc30gJiBWTVxfZ3Vlc3RcX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3Nv
Y2lhdGVkIHdpdGggdGhlIHJ1bm5pbmcgZ3Vlc3QgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgc2VjdXJpdHkvbGFiZWx9ICYgc3RyaW5nICYgdGhlIFZNJ3Mgc2VjdXJpdHkg
bGFiZWwgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntQYXJhbWV0ZXIg
RGV0YWlsc30KLVxzdWJzdWJzZWN0aW9ue1BWL2tlcm5lbCBhbmQgUFYvcmFtZGlza30KLVRoZSBc
dGV4dHR0e1BWL2tlcm5lbH0gYW5kIFx0ZXh0dHR7UFYvcmFtZGlza30gcGFyYW1ldGVycyBzaG91
bGQgYmUKLXNwZWNpZmllZCBhcyBVUklzIHdpdGggZWl0aGVyIGEgXHRleHR0dHtmaWxlfSBvciBc
dGV4dHR0e2RhdGF9IHNjaGVtZS4KLQotVGhlIFx0ZXh0dHR7ZmlsZX0gc2NoZW1lIG11c3QgYmUg
dXNlZCB3aGVuIGEgZmlsZSBvbiB0aGUgcmVtb3RlIGRvbTAKLXNob3VsZCBiZSB1c2VkLiAgVGhl
IHJlbW90ZSBkb20wIGlzIHRoZSBvbmUgd2hlcmUgdGhlIGd1ZXN0IHN5c3RlbQotc2hvdWxkIGJl
IHN0YXJ0ZWQgb24uIE9ubHkgYWJzb2x1dGUgZmlsZW5hbWVzIGFyZSBzdXBwb3J0ZWQsIGkuZS4g
dGhlCi1zdHJpbmcgbXVzdCBzdGFydCB3aXRoIFx0ZXh0dHR7ZmlsZTovL30gYXBwZW5kZWQgd2l0
aCB0aGUgYWJzb2x1dGUKLXBhdGguICBUaGlzIGlzIHR5cGljYWxseSB1c2VkIHdoZW4gdGhlIGd1
ZXN0IHN5c3RlbSB1c2UgdGhlIHNhbWUKLW9wZXJhdGluZyBzeXN0ZW1zIGFzIHRoZSBkb20wIG9y
IHRoZXJlIGlzIHNvbWUga2luZCBvZiBzaGFyZWQgc3RvcmFnZQotZm9yIHRoZSBpbWFnZXMgaW5z
aWRlIHRoZSBkb20wcy4KLQotTm90ZSB0aGF0IGZvciBjb21wYXRpYmlsaXR5IHJlYXNvbnMgaXQg
aXMgcG9zc2libGUgLS0tIGJ1dCBub3QKLXJlY29tbWVuZGVkIC0tLSB0byBsZWF2ZSBvdXQgdGhl
IHNjaGVtZSBzcGVjaWZpY2F0aW9uIGZvcgotXHRleHR0dHtmaWxlfSwgaS5lLiBcdGV4dHR0e2Zp
bGU6Ly8vc29tZS9wYXRofSBhbmQgXHRleHR0dHsvc29tZS9wYXRofQotaXMgZXF1aXZhbGVudC4K
LQotRXhhbXBsZXMgKGluIHB5dGhvbik6Ci0KLVVzZSBrZXJuZWwgaW1hZ2Ugd2hpY2ggcmVzaWRl
cyBpbiB0aGUgXHRleHR0dHsvYm9vdH0gZGlyZWN0b3J5OgotXGJlZ2lue3ZlcmJhdGltfQoteGVu
YXBpLlZNLmNyZWF0ZSh7IC4uLgotICAgJ1BWX2tlcm5lbCc6ICdmaWxlOi8vL2Jvb3Qvdm1saW51
ei0yLjYuMjYtMi14ZW4tNjg2JywKLSAgIC4uLiB9KQotXGVuZHt2ZXJiYXRpbX0KLQotVXNlIHJh
bWRpc2sgaW1hZ2Ugd2hpY2ggcmVzaWRlcyBvbiBhIChzaGFyZWQpIG5mcyBkaXJlY3Rvcnk6Ci1c
YmVnaW57dmVyYmF0aW19Ci14ZW5hcGkuVk0uY3JlYXRlKHsgLi4uCi0gICAnUFZfcmFtZGlzayc6
ICdmaWxlOi8vL25mcy94ZW4vZGViaWFuLzUuMC4xL2luaXRyZC5pbWctMi42LjI2LTIteGVuLTY4
NicKLSAgIC4uLiB9KQotXGVuZHt2ZXJiYXRpbX0KLQotV2hlbiBhbiBpbWFnZSBzaG91bGQgYmUg
dXNlZCB3aGljaCByZXNpZGVzIG9uIHRoZSBsb2NhbCBzeXN0ZW0sCi1pLmUuIHRoZSBzeXN0ZW0g
d2hlcmUgdGhlIFhlbkFQSSBjYWxsIGlzIHNlbmQgZnJvbSwgaXQgaXMgcG9zc2libGUgdG8KLXVz
ZSB0aGUgXHRleHR0dHtkYXRhfSBVUkkgc2NoZW1lIGFzIGRlc2NyaWJlZCBpbiBcY2l0ZXtSRkMy
Mzk3fS4gIFRoZQotbWVkaWEtdHlwZSBtdXN0IGJlIHNldCB0byBcdGV4dHR0e2FwcGxpY2F0aW9u
L29jdGV0LXN0cmVhbX0uCi1DdXJyZW50bHkgb25seSBiYXNlNjQgZW5jb2RpbmcgaXMgc3VwcG9y
dGVkLiAgVGhlIFVSSSBtdXN0IHRoZXJlZm9yZQotc3RhcnQgd2l0aCBcdGV4dHR0e2RhdGE6YXBw
bGljYXRpb24vb2N0ZXQtc3RyZWFtO2Jhc2U2NCx9IGZvbGxvd2VkIGJ5Ci10aGUgYmFzZTY0IGVu
Y29kZWQgaW1hZ2UuCi0KLVRoZSBcdGV4dHR0e3hlbi91dGlsL2ZpbGV1cmkucHl9IHByb3ZpZGVz
IGEgaGVscGVyIGZ1bmN0aW9uIHdoaWNoCi10YWtlcyBhIGxvY2FsIGZpbGVuYW1lIGFzIHBhcmFt
ZXRlciBhbmQgYnVpbGQgdXAgdGhlIGNvcnJlY3QgVVJJIGZyb20KLXRoaXMuCi0KLUV4YW1wbGVz
IChpbiBweXRob24pOgotCi1Vc2Uga2VybmVsIGltYWdlIHNwZWNpZmllZCBpbmxpbmU6Ci1cYmVn
aW57dmVyYmF0aW19Ci14ZW5hcGkuVk0uY3JlYXRlKHsgLi4uCi0gICAnUFZfa2VybmVsJzogJ2Rh
dGE6YXBwbGljYXRpb24vb2N0ZXQtc3RyZWFtO2Jhc2U2NCxINFp1Li4uLicKLSAgICAgICMgbW9z
dCBvZiBiYXNlNjQgZW5jb2RlZCBkYXRhIGlzIG9taXR0ZWQgCi0gICAuLi4gfSkKLVxlbmR7dmVy
YmF0aW19Ci0KLVVzaW5nIHRoZSB1dGlsaXR5IGZ1bmN0aW9uOgotXGJlZ2lue3ZlcmJhdGltfQot
ZnJvbSB4ZW4udXRpbC5maWxldXJpIGltcG9ydCBzY2hlbWVfZGF0YQoteGVuYXBpLlZNLmNyZWF0
ZSh7IC4uLgotICAgJ1BWX2tlcm5lbCc6IHNjaGVtZV9kYXRhLmNyZWF0ZV9mcm9tX2ZpbGUoCi0g
ICAgICAgIi94ZW4vZ3Vlc3RzL2ltYWdlcy9kZWJpYW4vNS4wLjEvdm1saW51ei0yLjYuMjYtMi14
ZW4tNjg2IiksCi0gICAuLi4gfSkKLVxlbmR7dmVyYmF0aW19Ci0KLUN1cnJlbnRseSB3aGVuIHVz
aW5nIHRoZSBcdGV4dHR0e2RhdGF9IFVSSSBzY2hlbWUsIGEgdGVtcG9yYXJ5IGZpbGUgaXMKLWNy
ZWF0ZWQgb24gdGhlIHJlbW90ZSBkb20wIGluIHRoZSBkaXJlY3RvcnkKLVx0ZXh0dHR7L3Zhci9y
dW4veGVuZC9ib290fSB3aGljaCBpcyB0aGVuIHVzZWQgZm9yIGJvb3RpbmcuIFdoZW4gbm90Ci11
c2VkIGFueSBsb25nZXIgdGhlIGZpbGUgaXMgZGVsZXRlZC4gIChUaGVyZWZvcmUgcmVhZGluZyBv
ZiB0aGUKLVx0ZXh0dHR7UFYva2VybmVsfSBvciBcdGV4dHR0e1BWL3JhbWRpc2t9IHBhcmFtZXRl
cnMgd2hlbiBjcmVhdGVkIHdpdGgKLWEgXHRleHR0dHtkYXRhfSBVUkkgc2NoZW1lIHJldHVybnMg
YSBmaWxlbmFtZSB0byBhIHRlbXBvcmFyeSBmaWxlIC0tLQotd2hpY2ggbWlnaHQgZXZlbiBub3Qg
ZXhpc3RzIHdoZW4gcXVlcnlpbmcuKSAgVGhpcyBpbXBsZW1lbnRhdGlvbiBtaWdodAotY2hhbmdl
IGluIHRoZSB3YXkgdGhhdCB0aGUgZGF0YSBpcyBkaXJlY3RseSB1c2VkIC0tLSB3aXRob3V0IHRo
ZQotaW5kaXJlY3Rpb24gdXNpbmcgYSBmaWxlLiAgVGhlcmVmb3JlIGRvIG5vdCByZWx5IG9uIHRo
ZSBkYXRhIHJlc3VsdGluZwotZnJvbSBhIHJlYWQgb2YgYSB2YXJpYWJsZXMgd2hpY2ggd2FzIHNl
dCB1c2luZyB0aGUgXHRleHR0dHtkYXRhfQotc2NoZW1lLgotCi1Ob3RlOiBhIG1peCBvZiBkaWZm
ZXJlbnQgc2NoZW1lcyBmb3IgdGhlIHBhcmFtZXRlcnMgaXMgcG9zc2libGU7IGUuZy4KLXRoZSBr
ZXJuZWwgY2FuIGJlIHNwZWNpZmllZCB3aXRoIGEgXHRleHR0dHtmaWxlfSBhbmQgdGhlIHJhbWRp
c2sgd2l0aAotdGhlIFx0ZXh0dHR7ZGF0YX0gVVJJIHNjaGVtZS4KLQotXHN1YnNlY3Rpb257UlBD
cyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFZNfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNs
b25lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1DbG9uZXMgdGhlIHNwZWNpZmllZCBWTSwgbWFraW5n
IGEgbmV3IFZNLiBDbG9uZSBhdXRvbWF0aWNhbGx5IGV4cGxvaXRzIHRoZQotY2FwYWJpbGl0aWVz
IG9mIHRoZSB1bmRlcmx5aW5nIHN0b3JhZ2UgcmVwb3NpdG9yeSBpbiB3aGljaCB0aGUgVk0ncyBk
aXNrCi1pbWFnZXMgYXJlIHN0b3JlZCAoZS5nLiBDb3B5IG9uIFdyaXRlKS4gICBUaGlzIGZ1bmN0
aW9uIGNhbiBvbmx5IGJlIGNhbGxlZAotd2hlbiB0aGUgVk0gaXMgaW4gdGhlIEhhbHRlZCBTdGF0
ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0g
cmVmKSBjbG9uZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgdm0sIHN0cmluZyBuZXdfbmFtZSlcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYg
fSAmIHZtICYgVGhlIFZNIHRvIGJlIGNsb25lZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0g
JiBuZXdcX25hbWUgJiBUaGUgbmFtZSBvZiB0aGUgY2xvbmVkIFZNIFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1WTSByZWYKLX0KLQotCi1UaGUgSUQgb2YgdGhlIG5ld2x5IGNyZWF0ZWQg
Vk0uCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVz
On0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42Y219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+c3RhcnR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVN0YXJ0IHRoZSBz
cGVjaWZpZWQgVk0uICBUaGlzIGZ1bmN0aW9uIGNhbiBvbmx5IGJlIGNhbGxlZCB3aXRoIHRoZSBW
TSBpcyBpbgotdGhlIEhhbHRlZCBTdGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHN0YXJ0IChzZXNzaW9uX2lkIHMsIFZNIHJlZiB2bSwg
Ym9vbCBzdGFydF9wYXVzZWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiB2bSAmIFRoZSBWTSB0byBzdGFydCBcXCBcaGxpbmUg
Ci0KLXtcdHQgYm9vbCB9ICYgc3RhcnRcX3BhdXNlZCAmIEluc3RhbnRpYXRlIFZNIGluIHBhdXNl
ZCBzdGF0ZSBpZiBzZXQgdG8gdHJ1ZS4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0sIHtcdHQKLVZNXF9IVk1cX1JF
UVVJUkVEfQotCi1cdnNwYWNlezAuNmNtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnBhdXNl
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1QYXVzZSB0aGUgc3BlY2lmaWVkIFZNLiBUaGlzIGNhbiBv
bmx5IGJlIGNhbGxlZCB3aGVuIHRoZSBzcGVjaWZpZWQgVk0gaXMgaW4KLXRoZSBSdW5uaW5nIHN0
YXRlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZv
aWQgcGF1c2UgKHNlc3Npb25faWQgcywgVk0gcmVmIHZtKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0gdG8g
cGF1c2UgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZN
XF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn51bnBhdXNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXN1bWUgdGhlIHNwZWNpZmll
ZCBWTS4gVGhpcyBjYW4gb25seSBiZSBjYWxsZWQgd2hlbiB0aGUgc3BlY2lmaWVkIFZNIGlzCi1p
biB0aGUgUGF1c2VkIHN0YXRlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IHZvaWQgdW5wYXVzZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgdm0pXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiB2bSAmIFRoZSBWTSB0byB1bnBhdXNlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12
b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVy
cm9yIENvZGVzOn0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42Y219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y2xlYW5cX3NodXRkb3dufQotCi17XGJmIE92ZXJ2
aWV3On0gCi1BdHRlbXB0IHRvIGNsZWFubHkgc2h1dGRvd24gdGhlIHNwZWNpZmllZCBWTS4gKE5v
dGU6IHRoaXMgbWF5IG5vdCBiZQotc3VwcG9ydGVkLS0tZS5nLiBpZiBhIGd1ZXN0IGFnZW50IGlz
IG5vdCBpbnN0YWxsZWQpLgotCi1PbmNlIHNodXRkb3duIGhhcyBiZWVuIGNvbXBsZXRlZCBwZXJm
b3JtIHBvd2Vyb2ZmIGFjdGlvbiBzcGVjaWZpZWQgaW4gZ3Vlc3QKLWNvbmZpZ3VyYXRpb24uCi0K
LVRoaXMgY2FuIG9ubHkgYmUgY2FsbGVkIHdoZW4gdGhlIHNwZWNpZmllZCBWTSBpcyBpbiB0aGUg
UnVubmluZyBzdGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSB2b2lkIGNsZWFuX3NodXRkb3duIChzZXNzaW9uX2lkIHMsIFZNIHJlZiB2bSlcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYg
fSAmIHZtICYgVGhlIFZNIHRvIHNodXRkb3duIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxl
IEVycm9yIENvZGVzOn0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y2xlYW5cX3JlYm9vdH0KLQote1xiZiBPdmVy
dmlldzp9IAotQXR0ZW1wdCB0byBjbGVhbmx5IHNodXRkb3duIHRoZSBzcGVjaWZpZWQgVk0gKE5v
dGU6IHRoaXMgbWF5IG5vdCBiZQotc3VwcG9ydGVkLS0tZS5nLiBpZiBhIGd1ZXN0IGFnZW50IGlz
IG5vdCBpbnN0YWxsZWQpLgotCi1PbmNlIHNodXRkb3duIGhhcyBiZWVuIGNvbXBsZXRlZCBwZXJm
b3JtIHJlYm9vdCBhY3Rpb24gc3BlY2lmaWVkIGluIGd1ZXN0Ci1jb25maWd1cmF0aW9uLgotCi1U
aGlzIGNhbiBvbmx5IGJlIGNhbGxlZCB3aGVuIHRoZSBzcGVjaWZpZWQgVk0gaXMgaW4gdGhlIFJ1
bm5pbmcgc3RhdGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gdm9pZCBjbGVhbl9yZWJvb3QgKHNlc3Npb25faWQgcywgVk0gcmVmIHZtKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYg
dm0gJiBUaGUgVk0gdG8gc2h1dGRvd24gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjZjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5oYXJkXF9zaHV0ZG93bn0KLQote1xiZiBPdmVydmll
dzp9IAotU3RvcCBleGVjdXRpbmcgdGhlIHNwZWNpZmllZCBWTSB3aXRob3V0IGF0dGVtcHRpbmcg
YSBjbGVhbiBzaHV0ZG93bi4gVGhlbgotcGVyZm9ybSBwb3dlcm9mZiBhY3Rpb24gc3BlY2lmaWVk
IGluIFZNIGNvbmZpZ3VyYXRpb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gdm9pZCBoYXJkX3NodXRkb3duIChzZXNzaW9uX2lkIHMsIFZNIHJlZiB2
bSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
TSByZWYgfSAmIHZtICYgVGhlIFZNIHRvIGRlc3Ryb3kgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9z
c2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFj
ZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5oYXJkXF9yZWJvb3R9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVN0b3AgZXhlY3V0aW5nIHRoZSBzcGVjaWZpZWQgVk0gd2l0aG91dCBhdHRl
bXB0aW5nIGEgY2xlYW4gc2h1dGRvd24uIFRoZW4KLXBlcmZvcm0gcmVib290IGFjdGlvbiBzcGVj
aWZpZWQgaW4gVk0gY29uZmlndXJhdGlvbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGhhcmRfcmVib290IChzZXNzaW9uX2lkIHMsIFZNIHJl
ZiB2bSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTSByZWYgfSAmIHZtICYgVGhlIFZNIHRvIHJlYm9vdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnN1c3BlbmR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVN1c3BlbmQgdGhlIHNwZWNpZmllZCBWTSB0byBkaXNrLiAgVGhpcyBjYW4g
b25seSBiZSBjYWxsZWQgd2hlbiB0aGUKLXNwZWNpZmllZCBWTSBpcyBpbiB0aGUgUnVubmluZyBz
dGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2
b2lkIHN1c3BlbmQgKHNlc3Npb25faWQgcywgVk0gcmVmIHZtKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0g
dG8gc3VzcGVuZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50e1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtc
dHQgVk1cX0JBRFxfUE9XRVJcX1NUQVRFfQotCi1cdnNwYWNlezAuNmNtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnJlc3VtZX0KLQote1xiZiBPdmVydmlldzp9IAotQXdha2VuIHRoZSBzcGVj
aWZpZWQgVk0gYW5kIHJlc3VtZSBpdC4gIFRoaXMgY2FuIG9ubHkgYmUgY2FsbGVkIHdoZW4gdGhl
Ci1zcGVjaWZpZWQgVk0gaXMgaW4gdGhlIFN1c3BlbmRlZCBzdGF0ZS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHJlc3VtZSAoc2Vzc2lvbl9p
ZCBzLCBWTSByZWYgdm0sIGJvb2wgc3RhcnRfcGF1c2VkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0gdG8g
cmVzdW1lIFxcIFxobGluZSAKLQote1x0dCBib29sIH0gJiBzdGFydFxfcGF1c2VkICYgUmVzdW1l
IFZNIGluIHBhdXNlZCBzdGF0ZSBpZiBzZXQgdG8gdHJ1ZS4gXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYg
UG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZz
cGFjZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX1ZDUFVzXF9udW1iZXJc
X2xpdmV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGlzIFZNJ3MgVkNQVXMvYXRcX3N0YXJ0
dXAgdmFsdWUsIGFuZCBzZXQgdGhlIHNhbWUgdmFsdWUgb24gdGhlIFZNLCBpZgotcnVubmluZy4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNl
dF9WQ1BVc19udW1iZXJfbGl2ZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgaW50IG52Y3B1
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIFRoZSBWTSBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiBudmNwdSAm
IFRoZSBudW1iZXIgb2YgVkNQVXMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9WQ1BVc1xfcGFyYW1zXF9saXZlfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1BZGQgdGhlIGdpdmVuIGtleS12YWx1ZSBwYWlyIHRvIFZNLlZD
UFVzXF9wYXJhbXMsIGFuZCBhcHBseSB0aGF0IHZhbHVlIG9uCi10aGUgcnVubmluZyBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGFkZF90
b19WQ1BVc19wYXJhbXNfbGl2ZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIGtl
eSwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIFRoZSBWTSBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiBrZXkgJiBUaGUga2V5IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHZh
bHVlICYgVGhlIHZhbHVlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0K
LQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9tZW1vcnlcX2R5bmFtaWNcX21heFxfbGl2ZX0KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IG1lbW9yeVxfZHluYW1pY1xfbWF4IGluIGRhdGFiYXNlIGFu
ZCBvbiBydW5uaW5nIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgc2V0X21lbW9yeV9keW5hbWljX21heF9saXZlIChzZXNzaW9uX2lkIHMs
IFZNIHJlZiBzZWxmLCBpbnQgbWF4KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIFRoZSBWTSBcXCBcaGxpbmUgCi0K
LXtcdHQgaW50IH0gJiBtYXggJiBUaGUgbWVtb3J5XF9keW5hbWljXF9tYXggdmFsdWUgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5z
ZXRcX21lbW9yeVxfZHluYW1pY1xfbWluXF9saXZlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQg
bWVtb3J5XF9keW5hbWljXF9taW4gaW4gZGF0YWJhc2UgYW5kIG9uIHJ1bm5pbmcgVk0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbWVt
b3J5X2R5bmFtaWNfbWluX2xpdmUgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGludCBtaW4p
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0g
cmVmIH0gJiBzZWxmICYgVGhlIFZNIFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIG1pbiAmIFRo
ZSBtZW1vcnlcX2R5bmFtaWNcX21pbiB2YWx1ZSBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNlbmRcX3N5c3JxfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1TZW5kIHRoZSBnaXZlbiBrZXkgYXMgYSBzeXNycSB0byB0aGlzIFZNLiAgVGhl
IGtleSBpcyBzcGVjaWZpZWQgYXMgYSBzaW5nbGUKLWNoYXJhY3RlciAoYSBTdHJpbmcgb2YgbGVu
Z3RoIDEpLiAgVGhpcyBjYW4gb25seSBiZSBjYWxsZWQgd2hlbiB0aGUKLXNwZWNpZmllZCBWTSBp
cyBpbiB0aGUgUnVubmluZyBzdGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNlbmRfc3lzcnEgKHNlc3Npb25faWQgcywgVk0gcmVmIHZt
LCBzdHJpbmcga2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0gXFwgXGhsaW5lIAotCi17XHR0IHN0cmlu
ZyB9ICYga2V5ICYgVGhlIGtleSB0byBzZW5kIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxl
IEVycm9yIENvZGVzOn0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2VuZFxfdHJpZ2dlcn0KLQote1xiZiBPdmVy
dmlldzp9IAotU2VuZCB0aGUgbmFtZWQgdHJpZ2dlciB0byB0aGlzIFZNLiAgVGhpcyBjYW4gb25s
eSBiZSBjYWxsZWQgd2hlbiB0aGUKLXNwZWNpZmllZCBWTSBpcyBpbiB0aGUgUnVubmluZyBzdGF0
ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lk
IHNlbmRfdHJpZ2dlciAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgdm0sIHN0cmluZyB0cmlnZ2VyKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJl
ZiB9ICYgdm0gJiBUaGUgVk0gXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdHJpZ2dlciAm
IFRoZSB0cmlnZ2VyIHRvIHNlbmQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3Ig
Q29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjZjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5taWdyYXRlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1NaWdy
YXRlIHRoZSBWTSB0byBhbm90aGVyIGhvc3QuICBUaGlzIGNhbiBvbmx5IGJlIGNhbGxlZCB3aGVu
IHRoZSBzcGVjaWZpZWQKLVZNIGlzIGluIHRoZSBSdW5uaW5nIHN0YXRlLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgbWlncmF0ZSAoc2Vzc2lv
bl9pZCBzLCBWTSByZWYgdm0sIHN0cmluZyBkZXN0LCBib29sIGxpdmUsIChzdHJpbmcgLT4gc3Ry
aW5nKSBNYXAgb3B0aW9ucylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBWTSByZWYgfSAmIHZtICYgVGhlIFZNIFxcIFxobGluZSAKLQote1x0dCBz
dHJpbmcgfSAmIGRlc3QgJiBUaGUgZGVzdGluYXRpb24gaG9zdCBcXCBcaGxpbmUgCi0KLXtcdHQg
Ym9vbCB9ICYgbGl2ZSAmIExpdmUgbWlncmF0aW9uIFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5n
ICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgfSAmIG9wdGlvbnMgJiBPdGhlciBwYXJhbWV0ZXJz
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBWTVxfQkFE
XF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRo
ZSBWTXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZNIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi0oVk0gcmVmKSBTZXQKLX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRo
ZSBJRHMgb2YgYWxsIHRoZSBWTXMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlk
IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Bvd2VyXF9zdGF0ZX0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBwb3dlclxfc3RhdGUgZmllbGQgb2YgdGhlIGdpdmVu
IFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICh2
bV9wb3dlcl9zdGF0ZSkgZ2V0X3Bvd2VyX3N0YXRlIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi12bVxfcG93ZXJcX3N0YXRlCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfbmFtZVxfbGFiZWx9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbmFtZS9sYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0u
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBz
dHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfZGVzY3Jp
cHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFtZS9kZXNjcmlwdGlvbiBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lkIHMsIFZN
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGlu
ZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2Rlc2NyaXB0aW9ufQotCi17XGJmIE92ZXJ2aWV3
On0gCi1TZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24gZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X25h
bWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0cmluZyB2YWx1ZSlc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3VzZXJcX3ZlcnNp
b259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXNlclxfdmVyc2lvbiBmaWVsZCBvZiB0
aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gaW50IGdldF91c2VyX3ZlcnNpb24gKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnNldFxfdXNlclxfdmVyc2lvbn0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSB1c2VyXF92
ZXJzaW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF91c2VyX3ZlcnNpb24gKHNlc3Npb25faWQg
cywgVk0gcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBz
ZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX2lzXF9hXF90ZW1wbGF0ZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBpc1xfYVxfdGVtcGxhdGUgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2lzX2FfdGVtcGxhdGUg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWJvb2wKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2lzXF9hXF90ZW1wbGF0ZX0KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBpc1xfYVxfdGVtcGxhdGUgZmllbGQgb2YgdGhlIGdp
dmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHZvaWQgc2V0X2lzX2FfdGVtcGxhdGUgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGJvb2wg
dmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAK
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci17XHR0IGJvb2wgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYXV0b1xf
cG93ZXJcX29ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGF1dG9cX3Bvd2VyXF9vbiBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gYm9vbCBnZXRfYXV0b19wb3dlcl9vbiAoc2Vzc2lvbl9pZCBzLCBWTSBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotYm9vbAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnNldFxfYXV0b1xfcG93ZXJcX29ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIGF1dG9cX3Bvd2VyXF9vbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfYXV0b19wb3dl
cl9vbiAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgYm9vbCB2YWx1ZSlcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgYm9vbCB9ICYgdmFs
dWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lk
Ci19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9zdXNwZW5kXF9WREl9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgc3VzcGVuZFxfVkRJIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkRJIHJlZikg
Z2V0X3N1c3BlbmRfVkRJIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1W
REkgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9y
ZXNpZGVudFxfb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgcmVzaWRlbnRcX29uIGZp
ZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9yZXNpZGVudF9vbiAoc2Vzc2lvbl9pZCBzLCBW
TSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdCByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfc3RhdGljXF9tYXh9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgbWVtb3J5L3N0YXRpY1xfbWF4IGZpZWxkIG9mIHRoZSBnaXZlbiBW
TS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQg
Z2V0X21lbW9yeV9zdGF0aWNfbWF4IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRc
X21lbW9yeVxfc3RhdGljXF9tYXh9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbWVtb3J5
L3N0YXRpY1xfbWF4IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9tZW1vcnlfc3RhdGljX21heCAo
c2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgaW50IHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3
IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWVtb3J5XF9keW5hbWljXF9tYXh9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgbWVtb3J5L2R5bmFtaWNcX21heCBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50
IGdldF9tZW1vcnlfZHluYW1pY19tYXggKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
dFxfbWVtb3J5XF9keW5hbWljXF9tYXh9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbWVt
b3J5L2R5bmFtaWNcX21heCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbWVtb3J5X2R5bmFtaWNf
bWF4IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGludCB9ICYgdmFsdWUg
JiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tZW1vcnlcX2R5bmFtaWNcX21pbn0KLQote1xi
ZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZW1vcnkvZHluYW1pY1xfbWluIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSBpbnQgZ2V0X21lbW9yeV9keW5hbWljX21pbiAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+c2V0XF9tZW1vcnlcX2R5bmFtaWNcX21pbn0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRo
ZSBtZW1vcnkvZHluYW1pY1xfbWluIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9tZW1vcnlfZHlu
YW1pY19taW4gKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiB2
YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfc3RhdGljXF9taW59Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbWVtb3J5L3N0YXRpY1xfbWluIGZpZWxkIG9mIHRo
ZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSBpbnQgZ2V0X21lbW9yeV9zdGF0aWNfbWluIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5zZXRcX21lbW9yeVxfc3RhdGljXF9taW59Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0
aGUgbWVtb3J5L3N0YXRpY1xfbWluIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9tZW1vcnlfc3Rh
dGljX21pbiAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgaW50IHZhbHVlKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZh
bHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9p
ZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfVkNQVXNcX3BhcmFtc30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBWQ1BVcy9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5n
IC0+IHN0cmluZykgTWFwKSBnZXRfVkNQVXNfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX1ZDUFVzXF9wYXJhbXN9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgVkNQVXMvcGFyYW1zIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHNldF9WQ1BVc19wYXJhbXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIChzdHJp
bmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1h
cCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+YWRkXF90b1xfVkNQVXNcX3BhcmFt
c30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0
aGUgVkNQVXMvcGFyYW1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGFkZF90b19WQ1BVc19wYXJhbXMg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3Ry
aW5nIH0gJiBrZXkgJiBLZXkgdG8gYWRkIFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHZh
bHVlICYgVmFsdWUgdG8gYWRkIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+cmVtb3ZlXF9mcm9tXF9WQ1BVc1xfcGFyYW1zfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1SZW1vdmUgdGhlIGdpdmVuIGtleSBhbmQgaXRzIGNvcnJlc3BvbmRp
bmcgdmFsdWUgZnJvbSB0aGUgVkNQVXMvcGFyYW1zCi1maWVsZCBvZiB0aGUgZ2l2ZW4gVk0uICBJ
ZiB0aGUga2V5IGlzIG5vdCBpbiB0aGF0IE1hcCwgdGhlbiBkbyBub3RoaW5nLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zyb21f
VkNQVXNfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcga2V5KVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJp
bmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZDUFVzXF9tYXh9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgVkNQVXMvbWF4IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X1ZD
UFVzX21heCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9WQ1BVc1xfbWF4fQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIFZDUFVzL21heCBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfVkNQVXNfbWF4IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGlu
dCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQ1BVc1xfYXRcX3N0YXJ0
dXB9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVkNQVXMvYXRcX3N0YXJ0dXAgZmllbGQg
b2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IGludCBnZXRfVkNQVXNfYXRfc3RhcnR1cCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+c2V0XF9WQ1BVc1xfYXRcX3N0YXJ0dXB9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNl
dCB0aGUgVkNQVXMvYXRcX3N0YXJ0dXAgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X1ZDUFVzX2F0
X3N0YXJ0dXAgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiB2
YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FjdGlvbnNcX2FmdGVyXF9zaHV0ZG93
bn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBhY3Rpb25zL2FmdGVyXF9zaHV0ZG93biBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKG9uX25vcm1hbF9leGl0KSBnZXRfYWN0aW9uc19hZnRlcl9zaHV0ZG93
biAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotb25cX25vcm1hbFxfZXhp
dAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfYWN0aW9u
c1xfYWZ0ZXJcX3NodXRkb3dufQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIGFjdGlvbnMv
YWZ0ZXJcX3NodXRkb3duIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9hY3Rpb25zX2FmdGVyX3No
dXRkb3duIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBvbl9ub3JtYWxfZXhpdCB2YWx1ZSlc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
b25cX25vcm1hbFxfZXhpdCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9h
Y3Rpb25zXF9hZnRlclxfcmVib290fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGFjdGlv
bnMvYWZ0ZXJcX3JlYm9vdCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKG9uX25vcm1hbF9leGl0KSBnZXRfYWN0
aW9uc19hZnRlcl9yZWJvb3QgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LW9uXF9ub3JtYWxcX2V4aXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5zZXRcX2FjdGlvbnNcX2FmdGVyXF9yZWJvb3R9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNl
dCB0aGUgYWN0aW9ucy9hZnRlclxfcmVib290IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9hY3Rp
b25zX2FmdGVyX3JlYm9vdCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgb25fbm9ybWFsX2V4
aXQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi17XHR0IG9uXF9ub3JtYWxcX2V4aXQgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfYWN0aW9uc1xfYWZ0ZXJcX2NyYXNofQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQg
dGhlIGFjdGlvbnMvYWZ0ZXJcX2NyYXNoIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAob25fY3Jhc2hfYmVoYXZp
b3VyKSBnZXRfYWN0aW9uc19hZnRlcl9jcmFzaCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotb25cX2NyYXNoXF9iZWhhdmlvdXIKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2FjdGlvbnNcX2FmdGVyXF9jcmFzaH0KLQote1xiZiBP
dmVydmlldzp9IAotU2V0IHRoZSBhY3Rpb25zL2FmdGVyXF9jcmFzaCBmaWVsZCBvZiB0aGUgZ2l2
ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
dm9pZCBzZXRfYWN0aW9uc19hZnRlcl9jcmFzaCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwg
b25fY3Jhc2hfYmVoYXZpb3VyIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBvblxfY3Jhc2hcX2JlaGF2aW91ciB9ICYgdmFsdWUg
JiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9jb25zb2xlc30KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBjb25zb2xlcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChjb25zb2xlIHJlZikgU2V0KSBn
ZXRfY29uc29sZXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShjb25z
b2xlIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9WSUZzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZJRnMgZmllbGQgb2YgdGhl
IGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoVklGIHJlZikgU2V0KSBnZXRfVklGcyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotKFZJRiByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfVkJEc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBWQkRz
IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKFZCRCByZWYpIFNldCkgZ2V0X1ZCRHMgKHNlc3Npb25faWQgcywg
Vk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShWQkQgcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2NyYXNoXF9kdW1wc30KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBjcmFzaFxfZHVtcHMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoY3Jhc2hkdW1w
IHJlZikgU2V0KSBnZXRfY3Jhc2hfZHVtcHMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLShjcmFzaGR1bXAgcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZUUE1zfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IFZUUE1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZUUE0gcmVmKSBTZXQpIGdldF9WVFBNcyAoc2Vzc2lv
bl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZUUE0gcmVmKSBTZXQKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX0RQQ0lzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIERQQ0lzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKERQQ0kgcmVmKSBTZXQp
IGdldF9EUENJcyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKERQQ0kg
cmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X0RTQ1NJc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBEU0NTSXMgZmllbGQgb2YgdGhl
IGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoRFNDU0kgcmVmKSBTZXQpIGdldF9EU0NTSXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
TSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotKERTQ1NJIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9EU0NTSVxfSEJBc30KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBEU0NTSVxfSEJBcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChEU0NTSV9IQkEgcmVmKSBTZXQp
IGdldF9EU0NTSV9IQkFzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShE
U0NTSVxfSEJBIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9QVlxfYm9vdGxvYWRlcn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQ
Vi9ib290bG9hZGVyIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X1BWX2Jvb3Rsb2FkZXIgKHNl
c3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfUFZcX2Jvb3Rsb2FkZXJ9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgUFYvYm9vdGxvYWRlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfUFZfYm9vdGxvYWRlciAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIHZh
bHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
e1x0dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUFZcX2tl
cm5lbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQVi9rZXJuZWwgZmllbGQgb2YgdGhl
IGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHN0cmluZyBnZXRfUFZfa2VybmVsIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5zZXRcX1BWXF9rZXJuZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgUFYva2VybmVs
IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9QVl9rZXJuZWwgKHNlc3Npb25faWQgcywgVk0gcmVm
IHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQg
XFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX1BWXF9yYW1kaXNrfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBWL3Jh
bWRpc2sgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfUFZfcmFtZGlzayAoc2Vzc2lvbl9pZCBz
LCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9QVlxfcmFtZGlza30KLQote1xiZiBPdmVydmlldzp9
IAotU2V0IHRoZSBQVi9yYW1kaXNrIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9QVl9yYW1kaXNr
IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFs
dWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lk
Ci19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9QVlxfYXJnc30KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBQVi9hcmdzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X1BWX2FyZ3Mg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfUFZcX2FyZ3N9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVNldCB0aGUgUFYvYXJncyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfUFZf
YXJncyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAm
IHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
dm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUFZcX2Jvb3Rsb2FkZXJcX2FyZ3N9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgUFYvYm9vdGxvYWRlclxfYXJncyBmaWVsZCBv
ZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2
ZXJiYXRpbX0gc3RyaW5nIGdldF9QVl9ib290bG9hZGVyX2FyZ3MgKHNlc3Npb25faWQgcywgVk0g
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnNldFxfUFZcX2Jvb3Rsb2FkZXJcX2FyZ3N9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLVNldCB0aGUgUFYvYm9vdGxvYWRlclxfYXJncyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0u
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfUFZfYm9vdGxvYWRlcl9hcmdzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcg
dmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAK
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9IVk1c
X2Jvb3RcX3BvbGljeX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBIVk0vYm9vdFxfcG9s
aWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X0hWTV9ib290X3BvbGljeSAoc2Vzc2lvbl9p
ZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9IVk1cX2Jvb3RcX3BvbGljeX0KLQote1xiZiBP
dmVydmlldzp9IAotU2V0IHRoZSBIVk0vYm9vdFxfcG9saWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBW
TS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lk
IHNldF9IVk1fYm9vdF9wb2xpY3kgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0cmluZyB2
YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX0hWTVxf
Ym9vdFxfcGFyYW1zfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIEhWTS9ib290XF9wYXJh
bXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfSFZNX2Jvb3Rf
cGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRc
cmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5zZXRcX0hWTVxfYm9vdFxfcGFyYW1zfQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIEhWTS9ib290XF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X0hWTV9ib290
X3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1h
cCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2YWx1ZSAmIE5l
dyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9IVk1cX2Jvb3RcX3BhcmFtc30KLQote1xiZiBP
dmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgSFZNL2Jvb3Rc
X3BhcmFtcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBhZGRfdG9fSFZNX2Jvb3RfcGFyYW1zIChzZXNz
aW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9
ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAm
IFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxfSFZNXF9ib290XF9wYXJhbXN9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0aGUgZ2l2ZW4ga2V5IGFuZCBpdHMgY29ycmVzcG9uZGlu
ZyB2YWx1ZSBmcm9tIHRoZSBIVk0vYm9vdFxfcGFyYW1zCi1maWVsZCBvZiB0aGUgZ2l2ZW4gVk0u
ICBJZiB0aGUga2V5IGlzIG5vdCBpbiB0aGF0IE1hcCwgdGhlbiBkbyBub3RoaW5nLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zy
b21fSFZNX2Jvb3RfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcga2V5
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0
dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3BsYXRmb3JtfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBsYXRmb3JtIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmlu
ZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3BsYXRmb3JtIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3BsYXRmb3JtfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1TZXQgdGhlIHBsYXRmb3JtIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9w
bGF0Zm9ybSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1h
cCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2YWx1ZSAmIE5l
dyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9wbGF0Zm9ybX0KLQote1xiZiBPdmVydmlldzp9
IAotQWRkIHRoZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgcGxhdGZvcm0gZmllbGQgb2Yg
dGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgYWRkX3RvX3BsYXRmb3JtIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBz
dHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92
ZVxfZnJvbVxfcGxhdGZvcm19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0aGUgZ2l2ZW4g
a2V5IGFuZCBpdHMgY29ycmVzcG9uZGluZyB2YWx1ZSBmcm9tIHRoZSBwbGF0Zm9ybSBmaWVsZCBv
ZgotdGhlIGdpdmVuIFZNLiAgSWYgdGhlIGtleSBpcyBub3QgaW4gdGhhdCBNYXAsIHRoZW4gZG8g
bm90aGluZy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHJlbW92ZV9mcm9tX3BsYXRmb3JtIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBz
dHJpbmcga2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGlu
ZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BDSVxf
YnVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBDSVxfYnVzIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSBzdHJpbmcgZ2V0X1BDSV9idXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
dFxfUENJXF9idXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgUENJXF9idXMgZmllbGQg
b2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgc2V0X1BDSV9idXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0
cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgb3RoZXJcX2NvbmZp
ZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3RyaW5nKSBNYXApIGdldF9vdGhlcl9jb25m
aWcgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdo
dGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fnNldFxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBv
dGhlclxfY29uZmlnIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9vdGhlcl9jb25maWcgKHNlc3Np
b25faWQgcywgVk0gcmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJp
bmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+YWRkXF90b1xfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRo
ZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUg
Z2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gdm9pZCBhZGRfdG9fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBz
dHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92
ZVxfZnJvbVxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBn
aXZlbiBrZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIG90aGVyXF9jb25m
aWcKLWZpZWxkIG9mIHRoZSBnaXZlbiBWTS4gIElmIHRoZSBrZXkgaXMgbm90IGluIHRoYXQgTWFw
LCB0aGVuIGRvIG5vdGhpbmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gdm9pZCByZW1vdmVfZnJvbV9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywg
Vk0gcmVmIHNlbGYsIHN0cmluZyBrZXkpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIHJlbW92ZSBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfZG9taWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZG9taWQgZmllbGQg
b2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IGludCBnZXRfZG9taWQgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfaXNcX2NvbnRyb2xcX2RvbWFpbn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBpc1xf
Y29udHJvbFxfZG9tYWluIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdldF9pc19jb250cm9sX2RvbWFp
biAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotYm9vbAotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWV0cmljc30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBtZXRyaWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fbWV0cmljcyBy
ZWYpIGdldF9tZXRyaWNzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1W
TVxfbWV0cmljcyByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5nZXRcX2d1ZXN0XF9tZXRyaWNzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGd1ZXN0
XF9tZXRyaWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fZ3Vlc3RfbWV0cmljcyByZWYpIGdldF9ndWVz
dF9tZXRyaWNzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTVxfZ3Vl
c3RcX21ldHJpY3MgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9zZWN1cml0eVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRoZSBzZWN1
cml0eSBsYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uIFJlZmVyIHRvIHRoZSBYU1BvbGljeSBj
bGFzcwotZm9yIHRoZSBmb3JtYXQgb2YgdGhlIHNlY3VyaXR5IGxhYmVsLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9zZWN1cml0eV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3NlY3VyaXR5XF9sYWJlbH0K
LQote1xiZiBPdmVydmlldzp9Ci1TZXQgdGhlIHNlY3VyaXR5IGxhYmVsIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4gUmVmZXIgdG8gdGhlIFhTUG9saWN5IGNsYXNzCi1mb3IgdGhlIGZvcm1hdCBvZiB0
aGUgc2VjdXJpdHkgbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lu
e3ZlcmJhdGltfSBpbnQgc2V0X3NlY3VyaXR5X2xhYmVsIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBz
ZWxmLCBzdHJpbmcKLXNlY3VyaXR5X2xhYmVsLCBzdHJpbmcgb2xkX2xhYmVsKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgc2Vj
dXJpdHlcX2xhYmVsICYgc2VjdXJpdHkgbGFiZWwgZm9yIHRoZSBWTSBcXCBcaGxpbmUKLXtcdHQg
c3RyaW5nIH0gJiBvbGRcX2xhYmVsICYgTGFiZWwgdmFsdWUgdGhhdCB0aGUgc2VjdXJpdHkgbGFi
ZWwgXFwKLSYgJiBtdXN0IGN1cnJlbnRseSBoYXZlIGZvciB0aGUgY2hhbmdlIHRvIHN1Y2NlZWQu
XFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLWludAotfQotCi0KLVJldHVybnMgdGhlIHNzaWRy
ZWYgaW4gY2FzZSBvZiBhbiBWTSB0aGF0IGlzIGN1cnJlbnRseSBydW5uaW5nIG9yCi1wYXVzZWQs
IHplcm8gaW4gY2FzZSBvZiBhIGRvcm1hbnQgVk0gKGhhbHRlZCwgc3VzcGVuZGVkKS4KLQotXHZz
cGFjZXswLjNjbX0KLQotXG5vaW5kZW50e1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtcdHQg
U0VDVVJJVFlcX0VSUk9SfQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1DcmVhdGUgYSBuZXcgVk0gaW5zdGFuY2UsIGFuZCByZXR1cm4gaXRzIGhhbmRsZS4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVm
KSBjcmVhdGUgKHNlc3Npb25faWQgcywgVk0gcmVjb3JkIGFyZ3MpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVjb3JkIH0gJiBhcmdzICYg
QWxsIGNvbnN0cnVjdG9yIGFyZ3VtZW50cyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
Vk0gcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVjdAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ryb3kgdGhlIHNw
ZWNpZmllZCBWTS4gIFRoZSBWTSBpcyBjb21wbGV0ZWx5IHJlbW92ZWQgZnJvbSB0aGUgc3lzdGVt
LiAKLVRoaXMgZnVuY3Rpb24gY2FuIG9ubHkgYmUgY2FsbGVkIHdoZW4gdGhlIFZNIGlzIGluIHRo
ZSBIYWx0ZWQgU3RhdGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2
ZXJiYXRpbX0gdm9pZCBkZXN0cm95IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xi
ZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBWTSBpbnN0YW5jZSB3aXRoIHRo
ZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSAoVk0gcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVp
ZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBz
dHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLVZNIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJl
Y29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVjb3JkKSBn
ZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTSByZWNv
cmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYnlcX25hbWVcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYWxsIHRoZSBWTSBp
bnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWTSByZWYpIFNldCkgZ2V0X2J5X25hbWVfbGFiZWwg
KHNlc3Npb25faWQgcywgc3RyaW5nIGxhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgbGFiZWwgJiBsYWJlbCBvZiBvYmpl
Y3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVk0gcmVmKSBTZXQK
LX0KLQotCi1yZWZlcmVuY2VzIHRvIG9iamVjdHMgd2l0aCBtYXRjaCBuYW1lcwotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfY3B1XF9wb29sfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgY3B1XF9w
b29sIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19ICgoY3B1X3Bvb2wgcmVmKSBTZXQpIGdldF9jcHVfcG9vbCAoc2Vz
c2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLVxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci0oY3B1XF9wb29sIHJlZikgU2V0Ci19Ci0KLQot
cmVmZXJlbmNlcyB0byBjcHVcX3Bvb2wgb2JqZWN0cy4KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Bv
b2xcX25hbWV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRoZSBwb29sXF9uYW1lIGZpZWxkIG9m
IHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVy
YmF0aW19IHN0cmluZyBnZXRfY3B1X3Bvb2wgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0
dAotc3RyaW5nCi19Ci0KLQotbmFtZSBvZiBjcHUgcG9vbCB0byB1c2UKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5jcHVcX3Bvb2xcX21pZ3JhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotTWlncmF0ZSB0aGUgVk0g
dG8gYW5vdGhlciBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJl
Z2lue3ZlcmJhdGltfSB2b2lkIGNwdV9wb29sX21pZ3JhdGUgKHNlc3Npb25faWQgcywgVk0gcmVm
IHNlbGYsIGNwdV9wb29sIHJlZiBwb29sKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZn0gJiBwb29sICYgcmVmZXJlbmNlIHRv
IG5ldyBjcHVcX3Bvb2wgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFBP
T0xcX0JBRFxfU1RBVEUsIFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnNldFxfcG9vbFxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9Ci1TZXQgY3B1IHBvb2wgbmFtZSB0
byB1c2UgZm9yIG5leHQgYWN0aXZhdGlvbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X3Bvb2xfbmFtZSAoc2Vzc2lvbl9pZCBzLCBWTSBy
ZWYgc2VsZiwgc3RyaW5nIHBvb2xcX25hbWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZQote1x0dCBzdHJpbmd9ICYgcG9vbFxfbmFtZSAmIE5ldyBwb29sIG5h
bWUgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQotCi0KLQotXHZzcGFjZXsxY219Ci1cbmV3
cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZNXF9tZXRyaWNzfQotXHN1YnNlY3Rpb257RmllbGRzIGZv
ciBjbGFzczogVk1cX21ldHJpY3N9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3
aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1u
ezN9e2x8fXtcYmYgVk1cX21ldHJpY3N9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0
aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBtZXRyaWNz
IGFzc29jaWF0ZWQgd2l0aCBhIFZNLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBl
ICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAg
e1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2Ug
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbWVtb3J5L2FjdHVhbH0gJiBp
bnQgJiBHdWVzdCdzIGFjdHVhbCBtZW1vcnkgKGJ5dGVzKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBWQ1BVcy9udW1iZXJ9ICYgaW50ICYgQ3VycmVudCBudW1iZXIgb2Yg
VkNQVXMgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgVkNQVXMvdXRpbGlz
YXRpb259ICYgKGludCAkXHJpZ2h0YXJyb3ckIGZsb2F0KSBNYXAgJiBVdGlsaXNhdGlvbiBmb3Ig
YWxsIG9mIGd1ZXN0J3MgY3VycmVudCBWQ1BVcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCBWQ1BVcy9DUFV9ICYgKGludCAkXHJpZ2h0YXJyb3ckIGludCkgTWFwICYgVkNQ
VSB0byBQQ1BVIG1hcCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBWQ1BV
cy9wYXJhbXN9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgVGhlIGxpdmUg
ZXF1aXZhbGVudCB0byBWTS5WQ1BVc1xfcGFyYW1zIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IFZDUFVzL2ZsYWdzfSAmIChpbnQgJFxyaWdodGFycm93JCBzdHJpbmcgU2V0
KSBNYXAgJiBDUFUgZmxhZ3MgKGJsb2NrZWQsb25saW5lLHJ1bm5pbmcpIFxcCi0kXG1hdGhpdHtS
T31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0YXRlfSAmIHN0cmluZyBTZXQgJiBUaGUgc3RhdGUg
b2YgdGhlIGd1ZXN0LCBlZyBibG9ja2VkLCBkeWluZyBldGMgXFwKLSRcbWF0aGl0e1JPfV9cbWF0
aGl0e3J1bn0kICYgIHtcdHQgc3RhcnRcX3RpbWV9ICYgZGF0ZXRpbWUgJiBUaW1lIGF0IHdoaWNo
IHRoaXMgVk0gd2FzIGxhc3QgYm9vdGVkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IGxhc3RcX3VwZGF0ZWR9ICYgZGF0ZXRpbWUgJiBUaW1lIGF0IHdoaWNoIHRoaXMgaW5m
b3JtYXRpb24gd2FzIGxhc3QgdXBkYXRlZCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxz
dWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBWTVxfbWV0cmljc30KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJu
IGEgbGlzdCBvZiBhbGwgdGhlIFZNXF9tZXRyaWNzIGluc3RhbmNlcyBrbm93biB0byB0aGUgc3lz
dGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgo
Vk1fbWV0cmljcyByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotKFZNXF9tZXRyaWNzIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2Jq
ZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3V1aWQgKHNlc3Np
b25faWQgcywgVk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5n
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tZW1vcnlc
X2FjdHVhbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZW1vcnkvYWN0dWFsIGZpZWxk
IG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X21lbW9yeV9hY3R1YWwgKHNlc3Npb25faWQgcywg
Vk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFs
dWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQ1BVc1xfbnVtYmVyfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZDUFVzL251bWJlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk1cX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gaW50IGdldF9WQ1BVc19udW1iZXIgKHNlc3Npb25faWQgcywgVk1fbWV0cmljcyByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTVxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQ1BVc1xfdXRpbGlzYXRpb259Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgVkNQVXMvdXRpbGlzYXRpb24gZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9t
ZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgoaW50IC0+IGZsb2F0KSBNYXApIGdldF9WQ1BVc191dGlsaXNhdGlvbiAoc2Vzc2lvbl9pZCBz
LCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oaW50ICRccmlnaHRh
cnJvdyQgZmxvYXQpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfVkNQVXNcX0NQVX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBWQ1BVcy9D
UFUgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoaW50IC0+IGludCkgTWFwKSBnZXRfVkNQVXNf
Q1BVIChzZXNzaW9uX2lkIHMsIFZNX21ldHJpY3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX21ldHJpY3MgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLShpbnQgJFxyaWdodGFycm93JCBpbnQpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfVkNQVXNcX3BhcmFtc30KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBWQ1BVcy9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9tZXRyaWNzLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5n
IC0+IHN0cmluZykgTWFwKSBnZXRfVkNQVXNfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNX21ldHJp
Y3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgVk1cX21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBz
dHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfVkNQVXNcX2ZsYWdzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZDUFVzL2ZsYWdz
IGZpZWxkIG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKGludCAtPiBzdHJpbmcgU2V0KSBNYXApIGdldF9W
Q1BVc19mbGFncyAoc2Vzc2lvbl9pZCBzLCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNzIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oaW50ICRccmlnaHRhcnJvdyQgc3RyaW5nIFNldCkgTWFwCi19Ci0KLQotdmFs
dWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9zdGF0ZX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBzdGF0ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX21ldHJpY3MuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBT
ZXQpIGdldF9zdGF0ZSAoc2Vzc2lvbl9pZCBzLCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNz
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1zdHJpbmcgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9zdGFydFxfdGltZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBzdGFydFxfdGltZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX21ldHJpY3MuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZGF0ZXRpbWUgZ2V0X3N0YXJ0
X3RpbWUgKHNlc3Npb25faWQgcywgVk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmljcyByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotZGF0ZXRpbWUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5nZXRcX2xhc3RcX3VwZGF0ZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbGFzdFxf
dXBkYXRlZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZGF0ZXRpbWUgZ2V0X2xhc3RfdXBkYXRl
ZCAoc2Vzc2lvbl9pZCBzLCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNzIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1kYXRldGltZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUg
Vk1cX21ldHJpY3MgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZNX21ldHJpY3MgcmVmKSBn
ZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlE
IG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZNXF9t
ZXRyaWNzIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWlu
aW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fbWV0cmljcyByZWNv
cmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmlj
cyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotVk1cX21ldHJpY3MgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9t
IHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZNXF9ndWVzdFxf
bWV0cmljc30KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZNXF9ndWVzdFxfbWV0cmlj
c30KLVxiZWdpbntsb25ndGFibGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxt
dWx0aWNvbHVtbnsxfXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWTVxfZ3Vl
c3RcX21ldHJpY3N9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0
aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBtZXRyaWNzIHJlcG9ydGVkIGJ5
IHRoZSBndWVzdCAoYXMgb3Bwb3NlZCB0byBpbmZlcnJlZCBmcm9tIG91dHNpZGUpLn19IFxcCi1c
aGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBp
ZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgb3NcX3ZlcnNpb259ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFw
ICYgdmVyc2lvbiBvZiB0aGUgT1MgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgUFZcX2RyaXZlcnNcX3ZlcnNpb259ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykg
TWFwICYgdmVyc2lvbiBvZiB0aGUgUFYgZHJpdmVycyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7
cnVufSQgJiAge1x0dCBtZW1vcnl9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFw
ICYgZnJlZS91c2VkL3RvdGFsIG1lbW9yeSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBkaXNrc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBkaXNr
IGNvbmZpZ3VyYXRpb24vZnJlZSBzcGFjZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBuZXR3b3Jrc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBu
ZXR3b3JrIGNvbmZpZ3VyYXRpb24gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgb3RoZXJ9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYW55dGhpbmcg
ZWxzZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBsYXN0XF91cGRhdGVk
fSAmIGRhdGV0aW1lICYgVGltZSBhdCB3aGljaCB0aGlzIGluZm9ybWF0aW9uIHdhcyBsYXN0IHVw
ZGF0ZWQgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29j
aWF0ZWQgd2l0aCBjbGFzczogVk1cX2d1ZXN0XF9tZXRyaWNzfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFs
bCB0aGUgVk1cX2d1ZXN0XF9tZXRyaWNzIGluc3RhbmNlcyBrbm93biB0byB0aGUgc3lzdGVtLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoVk1fZ3Vl
c3RfbWV0cmljcyByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotKFZNXF9ndWVzdFxfbWV0cmljcyByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8g
YWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0XF9tZXRyaWNzLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBn
ZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9ndWVzdFxf
bWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9vc1xfdmVyc2lvbn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBvc1xfdmVyc2lvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0XF9tZXRyaWNzLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5n
IC0+IHN0cmluZykgTWFwKSBnZXRfb3NfdmVyc2lvbiAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9t
ZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZNXF9ndWVzdFxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmluZyAkXHJp
Z2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9QVlxfZHJpdmVyc1xfdmVyc2lvbn0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBQVlxfZHJpdmVyc1xfdmVyc2lvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0
XF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfUFZfZHJpdmVyc192ZXJzaW9uIChzZXNz
aW9uX2lkIHMsIFZNX2d1ZXN0X21ldHJpY3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX2d1ZXN0XF9tZXRyaWNzIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeX0KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBtZW1vcnkgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9ndWVzdFxfbWV0cmlj
cy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0
cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X21lbW9yeSAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9t
ZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZNXF9ndWVzdFxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmluZyAkXHJp
Z2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9kaXNrc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBkaXNrcyBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0XF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYg
U2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBn
ZXRfZGlza3MgKHNlc3Npb25faWQgcywgVk1fZ3Vlc3RfbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfZ3Vlc3Rc
X21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmV0d29ya3N9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmV0d29ya3MgZmllbGQgb2YgdGhlIGdpdmVu
IFZNXF9ndWVzdFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X25ldHdvcmtzIChzZXNz
aW9uX2lkIHMsIFZNX2d1ZXN0X21ldHJpY3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX2d1ZXN0XF9tZXRyaWNzIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX290aGVyfQotCi17XGJmIE92ZXJ2aWV3
On0gCi1HZXQgdGhlIG90aGVyIGZpZWxkIG9mIHRoZSBnaXZlbiBWTVxfZ3Vlc3RcX21ldHJpY3Mu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJp
bmcgLT4gc3RyaW5nKSBNYXApIGdldF9vdGhlciAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9tZXRy
aWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZNXF9ndWVzdFxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmluZyAkXHJpZ2h0
YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9sYXN0XF91cGRhdGVkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGxh
c3RcX3VwZGF0ZWQgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9ndWVzdFxfbWV0cmljcy4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRf
bGFzdF91cGRhdGVkIChzZXNzaW9uX2lkIHMsIFZNX2d1ZXN0X21ldHJpY3MgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX2d1
ZXN0XF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1kYXRldGltZQotfQotCi0KLXZhbHVlIG9mIHRoZSBm
aWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCBhIHJlZmVyZW5jZSB0byB0aGUgVk1cX2d1ZXN0XF9tZXRyaWNzIGluc3RhbmNlIHdpdGgg
dGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IChWTV9ndWVzdF9tZXRyaWNzIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25f
aWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJu
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTVxfZ3Vlc3RcX21ldHJpY3MgcmVmCi19
Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29y
ZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJl
bnQgc3RhdGUgb2YgdGhlIGdpdmVuIFZNXF9ndWVzdFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fZ3Vlc3RfbWV0cmljcyByZWNv
cmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVk1fZ3Vlc3RfbWV0cmljcyByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxf
Z3Vlc3RcX21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZNXF9ndWVzdFxfbWV0cmljcyByZWNvcmQKLX0K
LQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlv
bntDbGFzczogaG9zdH0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IGhvc3R9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgaG9zdH0gXFwKLVxtdWx0
aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94
ezExY219e1xlbSBBCi1waHlzaWNhbCBob3N0Ln19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQg
JiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZl
cmVuY2UgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBuYW1lL2xhYmVsfSAmIHN0cmluZyAmIGEg
aHVtYW4tcmVhZGFibGUgbmFtZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG5hbWUvZGVzY3Jp
cHRpb259ICYgc3RyaW5nICYgYSBub3RlcyBmaWVsZCBjb250YWluZyBodW1hbi1yZWFkYWJsZSBk
ZXNjcmlwdGlvbiBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBBUElcX3Zl
cnNpb24vbWFqb3J9ICYgaW50ICYgbWFqb3IgdmVyc2lvbiBudW1iZXIgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgQVBJXF92ZXJzaW9uL21pbm9yfSAmIGludCAmIG1pbm9y
IHZlcnNpb24gbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IEFQ
SVxfdmVyc2lvbi92ZW5kb3J9ICYgc3RyaW5nICYgaWRlbnRpZmljYXRpb24gb2YgdmVuZG9yIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IEFQSVxfdmVyc2lvbi92ZW5kb3Jc
X2ltcGxlbWVudGF0aW9ufSAmIChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcCAmIGRl
dGFpbHMgb2YgdmVuZG9yIGltcGxlbWVudGF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IGVuYWJsZWR9ICYgYm9vbCAmIFRydWUgaWYgdGhlIGhvc3QgaXMgY3VycmVu
dGx5IGVuYWJsZWQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc29mdHdh
cmVcX3ZlcnNpb259ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgdmVyc2lv
biBzdHJpbmdzIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgb3RoZXJcX2NvbmZpZ30gJiAoc3Ry
aW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBhZGRpdGlvbmFsIGNvbmZpZ3VyYXRpb24g
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgY2FwYWJpbGl0aWVzfSAmIHN0
cmluZyBTZXQgJiBYZW4gY2FwYWJpbGl0aWVzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IGNwdVxfY29uZmlndXJhdGlvbn0gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3Ry
aW5nKSBNYXAgJiBUaGUgQ1BVIGNvbmZpZ3VyYXRpb24gb24gdGhpcyBob3N0LiAgTWF5IGNvbnRh
aW4ga2V5cyBzdWNoIGFzIGBgbnJcX25vZGVzJycsIGBgc29ja2V0c1xfcGVyXF9ub2RlJycsIGBg
Y29yZXNcX3Blclxfc29ja2V0JycsIG9yIGBgdGhyZWFkc1xfcGVyXF9jb3JlJycgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc2NoZWRcX3BvbGljeX0gJiBzdHJpbmcgJiBT
Y2hlZHVsZXIgcG9saWN5IGN1cnJlbnRseSBpbiBmb3JjZSBvbiB0aGlzIGhvc3QgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc3VwcG9ydGVkXF9ib290bG9hZGVyc30gJiBz
dHJpbmcgU2V0ICYgYSBsaXN0IG9mIHRoZSBib290bG9hZGVycyBpbnN0YWxsZWQgb24gdGhlIG1h
Y2hpbmUgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcmVzaWRlbnRcX1ZN
c30gJiAoVk0gcmVmKSBTZXQgJiBsaXN0IG9mIFZNcyBjdXJyZW50bHkgcmVzaWRlbnQgb24gaG9z
dCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IGxvZ2dpbmd9ICYgKHN0cmluZyAkXHJpZ2h0YXJy
b3ckIHN0cmluZykgTWFwICYgbG9nZ2luZyBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IFBJRnN9ICYgKFBJRiByZWYpIFNldCAmIHBoeXNpY2FsIG5l
dHdvcmsgaW50ZXJmYWNlcyBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IHN1c3BlbmRcX2ltYWdl
XF9zcn0gJiBTUiByZWYgJiBUaGUgU1IgaW4gd2hpY2ggVkRJcyBmb3Igc3VzcGVuZCBpbWFnZXMg
YXJlIGNyZWF0ZWQgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBjcmFzaFxfZHVtcFxfc3J9ICYg
U1IgcmVmICYgVGhlIFNSIGluIHdoaWNoIFZESXMgZm9yIGNyYXNoIGR1bXBzIGFyZSBjcmVhdGVk
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFBCRHN9ICYgKFBCRCByZWYp
IFNldCAmIHBoeXNpY2FsIGJsb2NrZGV2aWNlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCBQUENJc30gJiAoUFBDSSByZWYpIFNldCAmIHBoeXNpY2FsIFBDSSBkZXZpY2Vz
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFBTQ1NJc30gJiAoUFNDU0kg
cmVmKSBTZXQgJiBwaHlzaWNhbCBTQ1NJIGRldmljZXMgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0
e3J1bn0kICYgIHtcdHQgUFNDU0lcX0hCQXN9ICYgKFBTQ1NJXF9IQkEgcmVmKSBTZXQgJiBwaHlz
aWNhbCBTQ1NJIGhvc3QgYnVzIGFkYXB0ZXJzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IGhvc3RcX0NQVXN9ICYgKGhvc3RcX2NwdSByZWYpIFNldCAmIFRoZSBwaHlzaWNh
bCBDUFVzIG9uIHRoaXMgaG9zdCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBtZXRyaWNzfSAmIGhvc3RcX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGgg
dGhpcyBob3N0IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHJlc2lkZW50
XF9jcHVcX3Bvb2xzfSAmIChjcHVcX3Bvb2wgcmVmKSBTZXQgJiBsaXN0IG9mIGNwdVxfcG9vbHMg
Y3VycmVudGx5IHJlc2lkZW50IG9uIHRoZSBob3N0IFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxl
fQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IGhvc3R9Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+ZGlzYWJsZX0KLQote1xiZiBPdmVydmlldzp9IAotUHV0cyB0aGUg
aG9zdCBpbnRvIGEgc3RhdGUgaW4gd2hpY2ggbm8gbmV3IFZNcyBjYW4gYmUgc3RhcnRlZC4gQ3Vy
cmVudGx5Ci1hY3RpdmUgVk1zIG9uIHRoZSBob3N0IGNvbnRpbnVlIHRvIGV4ZWN1dGUuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkaXNhYmxl
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0
byBkaXNhYmxlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+ZW5hYmxlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1QdXRzIHRoZSBob3N0
IGludG8gYSBzdGF0ZSBpbiB3aGljaCBuZXcgVk1zIGNhbiBiZSBzdGFydGVkLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZW5hYmxlIChzZXNz
aW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0byBlbmFi
bGUgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5zaHV0ZG93bn0KLQote1xiZiBPdmVydmlldzp9IAotU2h1dGRvd24gdGhlIGhvc3Qu
IChUaGlzIGZ1bmN0aW9uIGNhbiBvbmx5IGJlIGNhbGxlZCBpZiB0aGVyZSBhcmUgbm8KLWN1cnJl
bnRseSBydW5uaW5nIFZNcyBvbiB0aGUgaG9zdCBhbmQgaXQgaXMgZGlzYWJsZWQuKS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNodXRkb3du
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0
byBzaHV0ZG93biBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnJlYm9vdH0KLQote1xiZiBPdmVydmlldzp9IAotUmVib290IHRoZSBo
b3N0LiAoVGhpcyBmdW5jdGlvbiBjYW4gb25seSBiZSBjYWxsZWQgaWYgdGhlcmUgYXJlIG5vCi1j
dXJyZW50bHkgcnVubmluZyBWTXMgb24gdGhlIGhvc3QgYW5kIGl0IGlzIGRpc2FibGVkLikuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCByZWJv
b3QgKHNlc3Npb25faWQgcywgaG9zdCByZWYgaG9zdClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgaG9zdCAmIFRoZSBIb3N0
IHRvIHJlYm9vdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmRtZXNnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGhvc3Qg
eGVuIGRtZXNnLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHN0cmluZyBkbWVzZyAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBob3N0KVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBo
b3N0ICYgVGhlIEhvc3QgdG8gcXVlcnkgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZwotfQotCi0KLWRtZXNnIHN0cmluZwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRtZXNnXF9jbGVhcn0K
LQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBob3N0IHhlbiBkbWVzZywgYW5kIGNsZWFyIHRo
ZSBidWZmZXIuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gc3RyaW5nIGRtZXNnX2NsZWFyIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYg
fSAmIGhvc3QgJiBUaGUgSG9zdCB0byBxdWVyeSBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotc3RyaW5nCi19Ci0KLQotZG1lc2cgc3RyaW5nCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9sb2d9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgaG9zdCdzIGxvZyBmaWxlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfbG9nIChz
ZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0byBx
dWVyeSBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotVGhlIGNv
bnRlbnRzIG9mIHRoZSBob3N0J3MgcHJpbWFyeSBsb2cgZmlsZQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
bmRcX2RlYnVnXF9rZXlzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1JbmplY3QgdGhlIGdpdmVuIHN0
cmluZyBhcyBkZWJ1Z2dpbmcga2V5cyBpbnRvIFhlbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNlbmRfZGVidWdfa2V5cyAoc2Vzc2lvbl9p
ZCBzLCBob3N0IHJlZiBob3N0LCBzdHJpbmcga2V5cylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgaG9zdCAmIFRoZSBob3N0
IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleXMgJiBUaGUga2V5cyB0byBzZW5kIFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+bGlzdFxfbWV0aG9kc30KLQote1xiZiBPdmVydmlldzp9IAotTGlzdCBhbGwgc3VwcG9ydGVk
IG1ldGhvZHMuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKHN0cmluZyBTZXQpIGxpc3RfbWV0aG9kcyAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotc3RyaW5nIFNldAotfQotCi0KLVRoZSBuYW1lIG9mIGV2ZXJ5IHN1cHBvcnRlZCBtZXRo
b2QuCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVy
biBhIGxpc3Qgb2YgYWxsIHRoZSBob3N0cyBrbm93biB0byB0aGUgc3lzdGVtLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoaG9zdCByZWYpIFNldCkg
Z2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3QgcmVmKSBTZXQK
LX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRoZSBJRHMgb2YgYWxsIHRoZSBob3N0cwotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxk
IG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhv
c3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUg
bmFtZS9sYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Np
b25faWQgcywgaG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2xhYmVsfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1TZXQgdGhlIG5hbWUvbGFiZWwgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3Qu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9z
dCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtc
dHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX25hbWVcX2Rl
c2NyaXB0aW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24g
ZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lk
IHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2Yg
dGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uYW1lXF9kZXNjcmlwdGlvbn0KLQote1xi
ZiBPdmVydmlldzp9IAotU2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxkIG9mIHRoZSBnaXZl
biBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHZvaWQgc2V0X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwg
c3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9BUElcX3ZlcnNpb25cX21ham9yfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIEFQ
SVxfdmVyc2lvbi9tYWpvciBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X0FQSV92ZXJzaW9uX21h
am9yIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9BUElcX3ZlcnNpb25c
X21pbm9yfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIEFQSVxfdmVyc2lvbi9taW5vciBm
aWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X0FQSV92ZXJzaW9uX21pbm9yIChzZXNzaW9uX2lkIHMs
IGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9BUElcX3ZlcnNpb25cX3ZlbmRvcn0KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBBUElcX3ZlcnNpb24vdmVuZG9yIGZpZWxkIG9mIHRoZSBnaXZl
biBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHN0cmluZyBnZXRfQVBJX3ZlcnNpb25fdmVuZG9yIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
aG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9BUElcX3ZlcnNpb25cX3ZlbmRvclxfaW1wbGVtZW50YXRpb259Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgQVBJXF92ZXJzaW9uL3ZlbmRvclxfaW1wbGVtZW50
YXRpb24gZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3RyaW5nKSBNYXApIGdldF9BUElf
dmVyc2lvbl92ZW5kb3JfaW1wbGVtZW50YXRpb24gKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBo
b3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2VuYWJsZWR9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZW5hYmxlZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdl
dF9lbmFibGVkIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotYm9v
bAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc29mdHdh
cmVcX3ZlcnNpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgc29mdHdhcmVcX3ZlcnNp
b24gZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3RyaW5nKSBNYXApIGdldF9zb2Z0d2Fy
ZV92ZXJzaW9uIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0
cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1HZXQgdGhlIG90aGVyXF9jb25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3Ry
aW5nKSBNYXApIGdldF9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX290aGVyXF9jb25maWd9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2
ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHNldF9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgKHN0
cmluZyAtPiBzdHJpbmcpIE1hcCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5n
KSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmFkZFxfdG9cX290aGVyXF9j
b25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUFkZCB0aGUgZ2l2ZW4ga2V5LXZhbHVlIHBhaXIg
dG8gdGhlIG90aGVyXF9jb25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBhZGRfdG9fb3RoZXJf
Y29uZmlnIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2
YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byBhZGQgXFwgXGhsaW5lIAotCi17XHR0IHN0
cmluZyB9ICYgdmFsdWUgJiBWYWx1ZSB0byBhZGQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5yZW1vdmVcX2Zyb21cX290aGVyXF9j
b25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0aGUgZ2l2ZW4ga2V5IGFuZCBpdHMg
Y29ycmVzcG9uZGluZyB2YWx1ZSBmcm9tIHRoZSBvdGhlclxfY29uZmlnCi1maWVsZCBvZiB0aGUg
Z2l2ZW4gaG9zdC4gIElmIHRoZSBrZXkgaXMgbm90IGluIHRoYXQgTWFwLCB0aGVuIGRvIG5vdGhp
bmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCByZW1vdmVfZnJvbV9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwg
c3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Nh
cGFiaWxpdGllc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBjYXBhYmlsaXRpZXMgZmll
bGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQpIGdldF9jYXBhYmlsaXRpZXMgKHNlc3Npb25faWQg
cywgaG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcgU2V0Ci19Ci0KLQotdmFsdWUg
b2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9jcHVcX2NvbmZpZ3VyYXRpb259Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgY3B1XF9jb25maWd1cmF0aW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfY3B1X2NvbmZpZ3VyYXRpb24gKHNl
c3Npb25faWQgcywgaG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRh
cnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3NjaGVkXF9wb2xpY3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgc2No
ZWRcX3BvbGljeSBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3NjaGVkX3BvbGljeSAoc2Vz
c2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc3VwcG9ydGVkXF9ib290bG9h
ZGVyc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdXBwb3J0ZWRcX2Jvb3Rsb2FkZXJz
IGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IChzdHJpbmcgU2V0KSBnZXRfc3VwcG9ydGVkX2Jvb3Rsb2FkZXJz
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nIFNldAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVzaWRlbnRc
X1ZNc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSByZXNpZGVudFxfVk1zIGZpZWxkIG9m
IHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19ICgoVk0gcmVmKSBTZXQpIGdldF9yZXNpZGVudF9WTXMgKHNlc3Npb25faWQgcywg
aG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVk0gcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2xvZ2dpbmd9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgbG9nZ2luZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJp
bmcpIE1hcCkgZ2V0X2xvZ2dpbmcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2xvZ2dpbmd9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLVNldCB0aGUgbG9nZ2luZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9sb2dnaW5n
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFs
dWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
aG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2YWx1ZSAmIE5ldyB2
YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9sb2dnaW5nfQotCi17XGJmIE92ZXJ2aWV3On0gCi1B
ZGQgdGhlIGdpdmVuIGtleS12YWx1ZSBwYWlyIHRvIHRoZSBsb2dnaW5nIGZpZWxkIG9mIHRoZSBn
aXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHZvaWQgYWRkX3RvX2xvZ2dpbmcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgc3Ry
aW5nIGtleSwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92
ZVxfZnJvbVxfbG9nZ2luZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBr
ZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIGxvZ2dpbmcgZmllbGQgb2YK
LXRoZSBnaXZlbiBob3N0LiAgSWYgdGhlIGtleSBpcyBub3QgaW4gdGhhdCBNYXAsIHRoZW4gZG8g
bm90aGluZy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHJlbW92ZV9mcm9tX2xvZ2dpbmcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwg
c3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BJ
RnN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgUElGcyBmaWVsZCBvZiB0aGUgZ2l2ZW4g
aG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAo
KFBJRiByZWYpIFNldCkgZ2V0X1BJRnMgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oUElGIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9zdXNwZW5kXF9pbWFnZVxfc3J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgc3VzcGVuZFxfaW1hZ2VcX3NyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChTUiByZWYpIGdl
dF9zdXNwZW5kX2ltYWdlX3NyIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotU1IgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
c2V0XF9zdXNwZW5kXF9pbWFnZVxfc3J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgc3Vz
cGVuZFxfaW1hZ2VcX3NyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X3N1c3BlbmRfaW1hZ2Vf
c3IgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgU1IgcmVmIHZhbHVlKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IFNSIHJlZiB9
ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9jcmFzaFxfZHVtcFxfc3J9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgY3Jhc2hcX2R1bXBcX3NyIGZpZWxkIG9mIHRoZSBn
aXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IChTUiByZWYpIGdldF9jcmFzaF9kdW1wX3NyIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
aG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotU1IgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+c2V0XF9jcmFzaFxfZHVtcFxfc3J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNl
dCB0aGUgY3Jhc2hcX2R1bXBcX3NyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2NyYXNoX2R1
bXBfc3IgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgU1IgcmVmIHZhbHVlKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IFNSIHJl
ZiB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9QQkRzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIFBCRHMgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChQQkQgcmVmKSBTZXQp
IGdldF9QQkRzIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBC
RCByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfUFBDSXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgUFBDSXMgZmllbGQgb2YgdGhl
IGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKChQUENJIHJlZikgU2V0KSBnZXRfUFBDSXMgKHNlc3Npb25faWQgcywgaG9zdCByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi0oUFBDSSByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBm
aWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUFNDU0lzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1H
ZXQgdGhlIFBTQ1NJcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFBTQ1NJIHJlZikgU2V0KSBnZXRfUFND
U0lzIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oUFNDU0kgcmVm
KSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BT
Q1NJXF9IQkFzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBTQ1NJXF9IQkFzIGZpZWxk
IG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19ICgoUFNDU0lfSEJBIHJlZikgU2V0KSBnZXRfUFNDU0lfSEJBcyAoc2Vzc2lv
bl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBTQ1NJXF9IQkEgcmVmKSBTZXQK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2hvc3RcX0NQ
VXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgaG9zdFxfQ1BVcyBmaWVsZCBvZiB0aGUg
Z2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSAoKGhvc3RfY3B1IHJlZikgU2V0KSBnZXRfaG9zdF9DUFVzIChzZXNzaW9uX2lkIHMsIGhv
c3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3RcX2NwdSByZWYpIFNldAotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWV0cmljc30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBtZXRyaWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0X21ldHJp
Y3MgcmVmKSBnZXRfbWV0cmljcyAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWhvc3RcX21ldHJpY3MgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBob3N0IGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0IHJlZikg
Z2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJ
RCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ob3N0
IHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRo
ZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0IHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vz
c2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVjb3JkCi19Ci0K
LQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF9u
YW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGFsbCB0aGUgaG9zdCBpbnN0YW5j
ZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKChob3N0IHJlZikgU2V0KSBnZXRfYnlfbmFtZV9sYWJlbCAoc2Vz
c2lvbl9pZCBzLCBzdHJpbmcgbGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBsYWJlbCAmIGxhYmVsIG9mIG9iamVjdCB0
byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShob3N0IHJlZikgU2V0Ci19
Ci0KLQotcmVmZXJlbmNlcyB0byBvYmplY3RzIHdpdGggbWF0Y2ggbmFtZXMKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3Jlc2lkZW50XF9jcHVcX3Bvb2xzfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0
aGUgcmVzaWRlbnRcX2NwdVxfcG9vbHMgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoKGNwdV9wb29sIHJlZikg
U2V0KSBnZXRfcmVzaWRlbnRfY3B1X3Bvb2xzIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fQote1x0dAotKGNwdVxfcG9vbCByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIGtu
b3duIGNwdVxfcG9vbHMuCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IGhvc3Rc
X21ldHJpY3N9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBob3N0XF9tZXRyaWNzfQot
XGJlZ2lue2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRp
Y29sdW1uezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIGhvc3RcX21ldHJp
Y3N9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnsz
fXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCBhIGhv
c3QufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAot
XGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5n
ICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCBtZW1vcnkvdG90YWx9ICYgaW50ICYgSG9zdCdzIHRvdGFsIG1l
bW9yeSAoYnl0ZXMpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IG1lbW9y
eS9mcmVlfSAmIGludCAmIEhvc3QncyBmcmVlIG1lbW9yeSAoYnl0ZXMpIFxcCi0kXG1hdGhpdHtS
T31fXG1hdGhpdHtydW59JCAmICB7XHR0IGxhc3RcX3VwZGF0ZWR9ICYgZGF0ZXRpbWUgJiBUaW1l
IGF0IHdoaWNoIHRoaXMgaW5mb3JtYXRpb24gd2FzIGxhc3QgdXBkYXRlZCBcXAotXGhsaW5lCi1c
ZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBo
b3N0XF9tZXRyaWNzfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgaG9zdFxfbWV0cmljcyBpbnN0
YW5jZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoKGhvc3RfbWV0cmljcyByZWYpIFNldCkgZ2V0X2FsbCAoc2Vz
c2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3RcX21ldHJpY3MgcmVmKSBTZXQKLX0K
LQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlk
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIGhv
c3RcX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBo
b3N0XF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfdG90YWx9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgbWVtb3J5L3RvdGFsIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9tZXRyaWNz
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGludCBn
ZXRfbWVtb3J5X3RvdGFsIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9t
ZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfZnJlZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBtZW1vcnkvZnJlZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdFxfbWV0cmljcy4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X21lbW9yeV9m
cmVlIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9tZXRyaWNzIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX2xhc3RcX3VwZGF0ZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbGFzdFxfdXBk
YXRlZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRfbGFzdF91cGRhdGVk
IChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9tZXRyaWNzIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi1kYXRldGltZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0
aGUgaG9zdFxfbWV0cmljcyBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdF9tZXRyaWNz
IHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlk
ICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1ob3N0XF9tZXRyaWNzIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29y
ZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBob3N0XF9tZXRyaWNz
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0
X21ldHJpY3MgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBob3N0XF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ob3N0XF9tZXRyaWNzIHJlY29yZAotfQot
Ci0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9u
e0NsYXNzOiBob3N0XF9jcHV9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBob3N0XF9j
cHV9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1c
bXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgaG9zdFxf
Y3B1fSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57
M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEgcGh5c2ljYWwgQ1BVfX0gXFwKLVxobGluZQotUXVh
bHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIv
b2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBo
b3N0fSAmIGhvc3QgcmVmICYgdGhlIGhvc3QgdGhlIENQVSBpcyBpbiBcXAotJFxtYXRoaXR7Uk99
X1xtYXRoaXR7cnVufSQgJiAge1x0dCBudW1iZXJ9ICYgaW50ICYgdGhlIG51bWJlciBvZiB0aGUg
cGh5c2ljYWwgQ1BVIHdpdGhpbiB0aGUgaG9zdCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCB2ZW5kb3J9ICYgc3RyaW5nICYgdGhlIHZlbmRvciBvZiB0aGUgcGh5c2ljYWwg
Q1BVIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHNwZWVkfSAmIGludCAm
IHRoZSBzcGVlZCBvZiB0aGUgcGh5c2ljYWwgQ1BVIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IG1vZGVsbmFtZX0gJiBzdHJpbmcgJiB0aGUgbW9kZWwgbmFtZSBvZiB0aGUg
cGh5c2ljYWwgQ1BVIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0ZXBw
aW5nfSAmIHN0cmluZyAmIHRoZSBzdGVwcGluZyBvZiB0aGUgcGh5c2ljYWwgQ1BVIFxcCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGZsYWdzfSAmIHN0cmluZyAmIHRoZSBmbGFn
cyBvZiB0aGUgcGh5c2ljYWwgQ1BVIChhIGRlY29kZWQgdmVyc2lvbiBvZiB0aGUgZmVhdHVyZXMg
ZmllbGQpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGZlYXR1cmVzfSAm
IHN0cmluZyAmIHRoZSBwaHlzaWNhbCBDUFUgZmVhdHVyZSBiaXRtYXAgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXRpbGlzYXRpb259ICYgZmxvYXQgJiB0aGUgY3VycmVu
dCBDUFUgdXRpbGlzYXRpb24gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQg
Y3B1XF9wb29sfSAmIChjcHVcX3Bvb2wgcmVmKSBTZXQgJiByZWZlcmVuY2UgdG8gY3B1XF9wb29s
IHRoZSBjcHUgYmVsb25ncyB0byBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0
aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBob3N0XF9jcHV9Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qg
b2YgYWxsIHRoZSBob3N0XF9jcHVzIGtub3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChob3N0X2NwdSByZWYpIFNldCkg
Z2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3RcX2NwdSByZWYp
IFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUg
Z2l2ZW4gaG9zdFxfY3B1LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBob3N0X2NwdSByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBo
b3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBo
b3N0IGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3QgcmVmKSBnZXRfaG9zdCAoc2Vzc2lvbl9p
ZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9udW1iZXJ9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbnVtYmVyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9j
cHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50
IGdldF9udW1iZXIgKHNlc3Npb25faWQgcywgaG9zdF9jcHUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3ZlbmRvcn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2ZW5kb3IgZmllbGQgb2Yg
dGhlIGdpdmVuIGhvc3RcX2NwdS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3ZlbmRvciAoc2Vzc2lvbl9pZCBzLCBob3N0X2NwdSBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBm
aWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc3BlZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCB0aGUgc3BlZWQgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3RcX2NwdS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3NwZWVkIChzZXNzaW9u
X2lkIHMsIGhvc3RfY3B1IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3RcX2NwdSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tb2RlbG5hbWV9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbW9kZWxuYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0
XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
c3RyaW5nIGdldF9tb2RlbG5hbWUgKHNlc3Npb25faWQgcywgaG9zdF9jcHUgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdFxf
Y3B1IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3N0ZXBwaW5nfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN0
ZXBwaW5nIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBT
aWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9zdGVwcGluZyAoc2Vzc2lv
bl9pZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfZmxhZ3N9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZmxhZ3MgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3RcX2Nw
dS4gIEFzIG9mIHRoaXMgdmVyc2lvbiBvZiB0aGUKLUFQSSwgdGhlIHNlbWFudGljcyBvZiB0aGUg
cmV0dXJuZWQgc3RyaW5nIGFyZSBleHBsaWNpdGx5IHVuc3BlY2lmaWVkLAotYW5kIG1heSBjaGFu
Z2UgaW4gdGhlIGZ1dHVyZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2ZsYWdzIChzZXNzaW9uX2lkIHMsIGhvc3RfY3B1IHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IGhvc3RcX2NwdSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9mZWF0dXJlc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBmZWF0dXJlcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdFxfY3B1LiBBcyBvZiB0aGlzIHZl
cnNpb24gb2YgdGhlCi1BUEksIHRoZSBzZW1hbnRpY3Mgb2YgdGhlIHJldHVybmVkIHN0cmluZyBh
cmUgZXhwbGljaXRseSB1bnNwZWNpZmllZCwKLWFuZCBtYXkgY2hhbmdlIGluIHRoZSBmdXR1cmUu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5n
IGdldF9mZWF0dXJlcyAoc2Vzc2lvbl9pZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfdXRpbGlzYXRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXRpbGlz
YXRpb24gZmllbGQgb2YgdGhlIGdpdmVuIGhvc3RcX2NwdS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBmbG9hdCBnZXRfdXRpbGlzYXRpb24gKHNlc3Np
b25faWQgcywgaG9zdF9jcHUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1mbG9hdAotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUgaG9zdFxfY3B1IGluc3Rh
bmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0X2NwdSByZWYpIGdldF9ieV91dWlkIChzZXNzaW9u
X2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVy
biBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdFxfY3B1IHJlZgotfQotCi0KLXJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRl
IG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3RfY3B1IHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9p
ZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3RcX2NwdSByZWNv
cmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfY3B1XF9wb29sfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgY3B1XF9wb29sIGZpZWxk
IG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQot
XGJlZ2lue3ZlcmJhdGltfSAoKGNwdV9wb29sKSBTZXQpIGdldF9jcHVfcG9vbCAoc2Vzc2lvbl9p
ZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IGhvc3RcX2NwdSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKGNwdVxfcG9vbCkgU2V0Ci19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91bmFzc2lnbmVk
XF9jcHVzfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCBhIHJlZmVyZW5jZSB0byBhbGwgY3B1cyB0
aGF0IGFyZSBub3QgYXNzaWdlbmQgdG8gYW55IGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19ICgoaG9zdF9jcHUpIFNldCkgZ2V0X3VuYXNz
aWduZWRfY3B1cyAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLShob3N0XF9jcHUgcmVmKSBTZXQKLX0KLQot
Ci1TZXQgb2YgZnJlZSAobm90IGFzc2lnbmVkKSBjcHVzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1c
c2VjdGlvbntDbGFzczogbmV0d29ya30KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IG5l
dHdvcmt9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5l
Ci1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgbmV0
d29ya30gXFwKLVxtdWx0aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1u
ezN9e2x8fXtccGFyYm94ezExY219e1xlbSBBCi12aXJ0dWFsIG5ldHdvcmsufX0gXFwKLVxobGlu
ZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50
aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG5hbWUvbGFi
ZWx9ICYgc3RyaW5nICYgYSBodW1hbi1yZWFkYWJsZSBuYW1lIFxcCi0kXG1hdGhpdHtSV30kICYg
IHtcdHQgbmFtZS9kZXNjcmlwdGlvbn0gJiBzdHJpbmcgJiBhIG5vdGVzIGZpZWxkIGNvbnRhaW5n
IGh1bWFuLXJlYWRhYmxlIGRlc2NyaXB0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IFZJRnN9ICYgKFZJRiByZWYpIFNldCAmIGxpc3Qgb2YgY29ubmVjdGVkIHZpZnMg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgUElGc30gJiAoUElGIHJlZikg
U2V0ICYgbGlzdCBvZiBjb25uZWN0ZWQgcGlmcyBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IGRl
ZmF1bHRcX2dhdGV3YXl9ICYgc3RyaW5nICYgZGVmYXVsdCBnYXRld2F5IFxcCi0kXG1hdGhpdHtS
V30kICYgIHtcdHQgZGVmYXVsdFxfbmV0bWFza30gJiBzdHJpbmcgJiBkZWZhdWx0IG5ldG1hc2sg
XFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBvdGhlclxfY29uZmlnfSAmIChzdHJpbmcgJFxyaWdo
dGFycm93JCBzdHJpbmcpIE1hcCAmIGFkZGl0aW9uYWwgY29uZmlndXJhdGlvbiBcXAotXGhsaW5l
Ci1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNz
OiBuZXR3b3JrfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgbmV0d29ya3Mga25vd24gdG8gdGhl
IHN5c3RlbQotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgobmV0d29yayByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotKG5ldHdvcmsgcmVmKSBTZXQKLX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRoZSBJRHMgb2Yg
YWxsIHRoZSBuZXR3b3JrcwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVp
ZCAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmlu
ZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxf
bGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFtZS9sYWJlbCBmaWVsZCBvZiB0
aGUgZ2l2ZW4gbmV0d29yay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Npb25faWQgcywgbmV0d29yayBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBuZXR3b3JrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1TZXQgdGhlIG5hbWUvbGFiZWwgZmllbGQgb2YgdGhlIGdpdmVuIG5ldHdvcmsuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbmFtZV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX25hbWVcX2Rlc2Ny
aXB0aW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24gZmll
bGQgb2YgdGhlIGdpdmVuIG5ldHdvcmsuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lk
IHMsIG5ldHdvcmsgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFs
dWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uYW1lXF9kZXNjcmlwdGlvbn0K
LQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgc2V0X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgbmV0d29y
ayByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFs
dWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9WSUZzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZJ
RnMgZmllbGQgb2YgdGhlIGdpdmVuIG5ldHdvcmsuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWSUYgcmVmKSBTZXQpIGdldF9WSUZzIChzZXNzaW9u
X2lkIHMsIG5ldHdvcmsgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZJRiByZWYpIFNldAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUElGc30KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQSUZzIGZpZWxkIG9mIHRoZSBnaXZlbiBuZXR3b3Jr
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoUElG
IHJlZikgU2V0KSBnZXRfUElGcyAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLShQSUYgcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX2RlZmF1bHRcX2dhdGV3YXl9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgZGVmYXVsdFxfZ2F0ZXdheSBmaWVsZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0
X2RlZmF1bHRfZ2F0ZXdheSAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fnNldFxfZGVmYXVsdFxfZ2F0ZXdheX0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBk
ZWZhdWx0XF9nYXRld2F5IGZpZWxkIG9mIHRoZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2RlZmF1bHRfZ2F0
ZXdheSAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RlZmF1bHRcX25l
dG1hc2t9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZGVmYXVsdFxfbmV0bWFzayBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2RlZmF1bHRfbmV0bWFzayAoc2Vzc2lvbl9pZCBz
LCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfZGVmYXVsdFxfbmV0bWFza30KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBkZWZhdWx0XF9uZXRtYXNrIGZpZWxkIG9mIHRoZSBn
aXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgc2V0X2RlZmF1bHRfbmV0bWFzayAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJl
ZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0
byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0
aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcp
IE1hcCkgZ2V0X290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdv
cmsgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0K
LXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfb3RoZXJcX2NvbmZpZ30K
LQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBvdGhlclxfY29uZmlnIGZpZWxkIG9mIHRoZSBn
aXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgc2V0X290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBz
ZWxmLCAoc3RyaW5nIC0+IHN0cmluZykgTWFwIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJpbmcgJFxyaWdodGFy
cm93JCBzdHJpbmcpIE1hcCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+YWRkXF90
b1xfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXkt
dmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4KLW5ldHdv
cmsuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBhZGRfdG9fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIG5ldHdvcmsgcmVmIHNlbGYsIHN0
cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBuZXR3b3JrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byBhZGQg
XFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBWYWx1ZSB0byBhZGQgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5y
ZW1vdmVcX2Zyb21cX290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0
aGUgZ2l2ZW4ga2V5IGFuZCBpdHMgY29ycmVzcG9uZGluZyB2YWx1ZSBmcm9tIHRoZSBvdGhlclxf
Y29uZmlnCi1maWVsZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4gIElmIHRoZSBrZXkgaXMgbm90IGlu
IHRoYXQgTWFwLCB0aGVuIGRvCi1ub3RoaW5nLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zyb21fb3RoZXJfY29uZmlnIChzZXNz
aW9uX2lkIHMsIG5ldHdvcmsgcmVmIHNlbGYsIHN0cmluZyBrZXkpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiBr
ZXkgJiBLZXkgdG8gcmVtb3ZlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1DcmVh
dGUgYSBuZXcgbmV0d29yayBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFuZGxlLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChuZXR3b3JrIHJlZikg
Y3JlYXRlIChzZXNzaW9uX2lkIHMsIG5ldHdvcmsgcmVjb3JkIGFyZ3MpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWNvcmQgfSAm
IGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1uZXR3b3JrIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkgY3JlYXRl
ZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0gCi1E
ZXN0cm95IHRoZSBzcGVjaWZpZWQgbmV0d29yayBpbnN0YW5jZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQg
cywgbmV0d29yayByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBuZXR3b3JrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBuZXR3b3JrIGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChuZXR3b3Jr
IHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlk
ICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1uZXR3b3JrIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250
YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChuZXR3b3JrIHJlY29yZCkg
Z2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLW5ldHdvcmsgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IGFsbCB0aGUgbmV0d29yayBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChuZXR3b3JrIHJl
ZikgU2V0KSBnZXRfYnlfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgbGFiZWwpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
IH0gJiBsYWJlbCAmIGxhYmVsIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLShuZXR3b3JrIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBvYmpl
Y3RzIHdpdGggbWF0Y2ggbmFtZXMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZJ
Rn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZJRn0KLVxiZWdpbntsb25ndGFibGV9
e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17TmFt
ZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWSUZ9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9
e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0gQQot
dmlydHVhbCBuZXR3b3JrIGludGVyZmFjZS59fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYg
VHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9vYmplY3QgcmVmZXJl
bmNlIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgZGV2aWNlfSAmIHN0cmluZyAmIG5hbWUgb2Yg
bmV0d29yayBkZXZpY2UgYXMgZXhwb3NlZCB0byBndWVzdCBlLmcuIGV0aDAgXFwKLSRcbWF0aGl0
e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgbmV0d29ya30gJiBuZXR3b3JrIHJlZiAmIHZpcnR1
YWwgbmV0d29yayB0byB3aGljaCB0aGlzIHZpZiBpcyBjb25uZWN0ZWQgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e2luc30kICYgIHtcdHQgVk19ICYgVk0gcmVmICYgdmlydHVhbCBtYWNoaW5lIHRv
IHdoaWNoIHRoaXMgdmlmIGlzIGNvbm5lY3RlZCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IE1B
Q30gJiBzdHJpbmcgJiBldGhlcm5ldCBNQUMgYWRkcmVzcyBvZiB2aXJ0dWFsIGludGVyZmFjZSwg
YXMgZXhwb3NlZCB0byBndWVzdCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IE1UVX0gJiBpbnQg
JiBNVFUgaW4gb2N0ZXRzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGN1
cnJlbnRseVxfYXR0YWNoZWR9ICYgYm9vbCAmIGlzIHRoZSBkZXZpY2UgY3VycmVudGx5IGF0dGFj
aGVkIChlcmFzZWQgb24gcmVib290KSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAg
e1x0dCBzdGF0dXNcX2NvZGV9ICYgaW50ICYgZXJyb3Ivc3VjY2VzcyBjb2RlIGFzc29jaWF0ZWQg
d2l0aCBsYXN0IGF0dGFjaC1vcGVyYXRpb24gKGVyYXNlZCBvbiByZWJvb3QpIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0YXR1c1xfZGV0YWlsfSAmIHN0cmluZyAmIGVy
cm9yL3N1Y2Nlc3MgaW5mb3JtYXRpb24gYXNzb2NpYXRlZCB3aXRoIGxhc3QgYXR0YWNoLW9wZXJh
dGlvbiBzdGF0dXMgKGVyYXNlZCBvbiByZWJvb3QpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IHJ1bnRpbWVcX3Byb3BlcnRpZXN9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ck
IHN0cmluZykgTWFwICYgRGV2aWNlIHJ1bnRpbWUgcHJvcGVydGllcyBcXAotJFxtYXRoaXR7Uld9
JCAmICB7XHR0IHFvcy9hbGdvcml0aG1cX3R5cGV9ICYgc3RyaW5nICYgUW9TIGFsZ29yaXRobSB0
byB1c2UgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBxb3MvYWxnb3JpdGhtXF9wYXJhbXN9ICYg
KHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgcGFyYW1ldGVycyBmb3IgY2hvc2Vu
IFFvUyBhbGdvcml0aG0gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcW9z
L3N1cHBvcnRlZFxfYWxnb3JpdGhtc30gJiBzdHJpbmcgU2V0ICYgc3VwcG9ydGVkIFFvUyBhbGdv
cml0aG1zIGZvciB0aGlzIFZJRiBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBtZXRyaWNzfSAmIFZJRlxfbWV0cmljcyByZWYgJiBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCB0
aGlzIFZJRiBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNz
b2NpYXRlZCB3aXRoIGNsYXNzOiBWSUZ9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+cGx1Z30K
LQote1xiZiBPdmVydmlldzp9IAotSG90cGx1ZyB0aGUgc3BlY2lmaWVkIFZJRiwgZHluYW1pY2Fs
bHkgYXR0YWNoaW5nIGl0IHRvIHRoZSBydW5uaW5nIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcGx1ZyAoc2Vzc2lvbl9pZCBzLCBWSUYg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIFRoZSBWSUYgdG8gaG90cGx1ZyBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnVucGx1Z30K
LQote1xiZiBPdmVydmlldzp9IAotSG90LXVucGx1ZyB0aGUgc3BlY2lmaWVkIFZJRiwgZHluYW1p
Y2FsbHkgdW5hdHRhY2hpbmcgaXQgZnJvbSB0aGUgcnVubmluZwotVk0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCB1bnBsdWcgKHNlc3Npb25f
aWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiBUaGUgVklGIHRvIGhvdC11bnBsdWcg
XFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwg
dGhlIFZJRnMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZJRiByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9p
ZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZJRiByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMg
dG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVklGLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vz
c2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RldmljZX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBkZXZpY2UgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2Rldmlj
ZSAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2RldmljZX0KLQote1xi
ZiBPdmVydmlldzp9IAotU2V0IHRoZSBkZXZpY2UgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9k
ZXZpY2UgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcg
fSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmV0d29ya30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBuZXR3b3JrIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKG5ldHdvcmsgcmVm
KSBnZXRfbmV0d29yayAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1u
ZXR3b3JrIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2YgdGhlIGdpdmVu
IFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAo
Vk0gcmVmKSBnZXRfVk0gKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
Vk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9N
QUN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBW
SUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3Ry
aW5nIGdldF9NQUMgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3Ry
aW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9NQUN9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfTUFDIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5n
IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX01UVX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBNVFUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X01UVSAoc2Vzc2lv
bl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX01UVX0KLQote1xiZiBPdmVydmlldzp9IAot
U2V0IHRoZSBNVFUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9NVFUgKHNlc3Npb25faWQgcywg
VklGIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNl
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfY3VycmVudGx5XF9hdHRhY2hlZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBjdXJyZW50bHlcX2F0dGFjaGVkIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gYm9vbCBnZXRfY3VycmVu
dGx5X2F0dGFjaGVkIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWJv
b2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N0YXR1
c1xfY29kZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdGF0dXNcX2NvZGUgZmllbGQg
b2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSBpbnQgZ2V0X3N0YXR1c19jb2RlIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
SUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfc3RhdHVzXF9kZXRhaWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUg
c3RhdHVzXF9kZXRhaWwgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3N0YXR1c19kZXRhaWwg
KHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ydW50aW1lXF9wcm9wZXJ0
aWVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJ1bnRpbWVcX3Byb3BlcnRpZXMgZmll
bGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3J1bnRpbWVfcHJvcGVy
dGllcyAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRc
cmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1HZXQgdGhlIHFvcy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0
X3Fvc19hbGdvcml0aG1fdHlwZSAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5z
ZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIHFv
cy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9xb3NfYWxnb3JpdGht
X3R5cGUgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcg
fSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcW9zXF9hbGdvcml0aG1cX3Bh
cmFtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBxb3MvYWxnb3JpdGhtXF9wYXJhbXMg
ZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3Fvc19hbGdvcml0
aG1fcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJp
bmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnNldFxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQote1xiZiBPdmVy
dmlldzp9IAotU2V0IHRoZSBxb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVu
IFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2
b2lkIHNldF9xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYs
IChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3Ry
aW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmFkZFxfdG9cX3Fvc1xf
YWxnb3JpdGhtXF9wYXJhbXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUFkZCB0aGUgZ2l2ZW4ga2V5
LXZhbHVlIHBhaXIgdG8gdGhlIHFvcy9hbGdvcml0aG1cX3BhcmFtcyBmaWVsZCBvZiB0aGUKLWdp
dmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIGFkZF90b19xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVm
IHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRv
IGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fnJlbW92ZVxfZnJvbVxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQote1xiZiBPdmVydmll
dzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZy
b20gdGhlCi1xb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4gIElm
IHRoZSBrZXkgaXMgbm90IGluIHRoYXQKLU1hcCwgdGhlbiBkbyBub3RoaW5nLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zyb21f
cW9zX2FsZ29yaXRobV9wYXJhbXMgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmLCBzdHJpbmcg
a2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LXtcdHQgc3RyaW5nIH0gJiBrZXkgJiBLZXkgdG8gcmVtb3ZlIFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9xb3NcX3N1cHBv
cnRlZFxfYWxnb3JpdGhtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBxb3Mvc3VwcG9y
dGVkXF9hbGdvcml0aG1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQpIGdldF9xb3Nfc3Vw
cG9ydGVkX2FsZ29yaXRobXMgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotc3RyaW5nIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfbWV0cmljc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZXRyaWNzIGZpZWxk
IG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gKFZJRl9tZXRyaWNzIHJlZikgZ2V0X21ldHJpY3MgKHNlc3Npb25faWQgcywg
VklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVklGXF9tZXRyaWNzIHJlZgotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfc2VjdXJpdHlcX2xhYmVsfQotCi17
XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUgc2VjdXJpdHkgbGFiZWwgb2YgdGhlIGdpdmVuIFZJRi4g
UmVmZXIgdG8gdGhlIFhTUG9saWN5IGNsYXNzCi1mb3IgdGhlIGZvcm1hdCBvZiB0aGUgc2VjdXJp
dHkgbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHNldF9zZWN1cml0eV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYsIHN0
cmluZwotc2VjdXJpdHlfbGFiZWwsIHN0cmluZyBvbGRfbGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLXtcdHQgc3RyaW5nIH0gJiBzZWN1cml0
eVxfbGFiZWwgJiBOZXcgdmFsdWUgb2YgdGhlIHNlY3VyaXR5IGxhYmVsIFxcIFxobGluZQote1x0
dCBzdHJpbmcgfSAmIG9sZFxfbGFiZWwgJiBMYWJlbCB2YWx1ZSB0aGF0IHRoZSBzZWN1cml0eSBs
YWJlbCBcXAotJiAmIG11c3QgY3VycmVudGx5IGhhdmUgZm9yIHRoZSBjaGFuZ2UgdG8gc3VjY2Vl
ZC5cXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotCi1cdnNwYWNlezAuM2NtfQot
Ci1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBTRUNVUklUWVxfRVJS
T1J9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3NlY3VyaXR5XF9sYWJlbH0KLQote1xiZiBPdmVy
dmlldzp9Ci1HZXQgdGhlIHNlY3VyaXR5IGxhYmVsIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3NlY3Vy
aXR5X2xhYmVsIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZ2l2ZW4gZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5ldyBWSUYgaW5zdGFuY2UsIGFuZCByZXR1
cm4gaXRzIGhhbmRsZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoVklGIHJlZikgY3JlYXRlIChzZXNzaW9uX2lkIHMsIFZJRiByZWNvcmQgYXJncylc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYg
cmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9yIGFyZ3VtZW50cyBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotVklGIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkg
Y3JlYXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3
On0gCi1EZXN0cm95IHRoZSBzcGVjaWZpZWQgVklGIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9p
ZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNl
IHRvIHRoZSBWSUYgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZJRiByZWYpIGdldF9ieV91
dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2Jq
ZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVklGIHJlZgotfQot
Ci0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50
IHN0YXRlIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKFZJRiByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywg
VklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVklGIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
ZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBWSUZcX21l
dHJpY3N9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBWSUZcX21ldHJpY3N9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgVklGXF9tZXRyaWNzfSBc
XAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9
e1xwYXJib3h7MTFjbX17XGVtCi1UaGUgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggYSB2aXJ0dWFs
IG5ldHdvcmsgZGV2aWNlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVz
Y3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1
dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRc
bWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgaW8vcmVhZFxfa2JzfSAmIGZsb2F0ICYg
UmVhZCBiYW5kd2lkdGggKEtpQi9zKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAg
e1x0dCBpby93cml0ZVxfa2JzfSAmIGZsb2F0ICYgV3JpdGUgYmFuZHdpZHRoIChLaUIvcykgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbGFzdFxfdXBkYXRlZH0gJiBkYXRl
dGltZSAmIFRpbWUgYXQgd2hpY2ggdGhpcyBpbmZvcm1hdGlvbiB3YXMgbGFzdCB1cGRhdGVkIFxc
Ci1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdp
dGggY2xhc3M6IFZJRlxfbWV0cmljc30KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Fs
bH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFZJRlxfbWV0
cmljcyBpbnN0YW5jZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZJRl9tZXRyaWNzIHJlZikgU2V0KSBnZXRf
YWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVklGXF9tZXRyaWNzIHJlZikg
U2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBn
aXZlbiBWSUZcX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFZJRl9tZXRyaWNzIHJl
ZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IFZJRlxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9pb1xfcmVhZFxfa2JzfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIGlvL3JlYWRcX2ticyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVklGXF9tZXRy
aWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGZs
b2F0IGdldF9pb19yZWFkX2ticyAoc2Vzc2lvbl9pZCBzLCBWSUZfbWV0cmljcyByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUZc
X21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9pb1xfd3JpdGVcX2tic30KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBpby93cml0ZVxfa2JzIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUZcX21ldHJpY3MuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZmxvYXQgZ2V0
X2lvX3dyaXRlX2ticyAoc2Vzc2lvbl9pZCBzLCBWSUZfbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUZcX21ldHJp
Y3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9sYXN0XF91cGRhdGVkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGxhc3RcX3VwZGF0ZWQgZmllbGQgb2YgdGhlIGdpdmVuIFZJRlxfbWV0cmljcy4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRfbGFz
dF91cGRhdGVkIChzZXNzaW9uX2lkIHMsIFZJRl9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRlxfbWV0cmljcyBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotZGF0ZXRpbWUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVu
Y2UgdG8gdGhlIFZJRlxfbWV0cmljcyBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVklGX21l
dHJpY3MgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAm
IHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLVZJRlxfbWV0cmljcyByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSBy
ZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gVklGXF9tZXRy
aWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChW
SUZfbWV0cmljcyByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVklGX21ldHJpY3Mg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVklGXF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WSUZcX21ldHJpY3MgcmVjb3JkCi19Ci0K
LQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257
Q2xhc3M6IFBJRn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFBJRn0KLVxiZWdpbnts
b25ndGFibGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsx
fXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBQSUZ9IFxcCi1cbXVsdGljb2x1
bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNt
fXtcZW0gQQotcGh5c2ljYWwgbmV0d29yayBpbnRlcmZhY2UgKG5vdGUgc2VwYXJhdGUgVkxBTnMg
YXJlIHJlcHJlc2VudGVkIGFzIHNldmVyYWwKLVBJRnMpLn19IFxcCi1caGxpbmUKLVF1YWxzICYg
RmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVj
dCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBkZXZpY2V9ICYgc3RyaW5nICYg
bWFjaGluZS1yZWFkYWJsZSBuYW1lIG9mIHRoZSBpbnRlcmZhY2UgKGUuZy4gZXRoMCkgXFwKLSRc
bWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgbmV0d29ya30gJiBuZXR3b3JrIHJlZiAm
IHZpcnR1YWwgbmV0d29yayB0byB3aGljaCB0aGlzIHBpZiBpcyBjb25uZWN0ZWQgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgaG9zdH0gJiBob3N0IHJlZiAmIHBoeXNpY2Fs
IG1hY2hpbmUgdG8gd2hpY2ggdGhpcyBwaWYgaXMgY29ubmVjdGVkIFxcCi0kXG1hdGhpdHtSV30k
ICYgIHtcdHQgTUFDfSAmIHN0cmluZyAmIGV0aGVybmV0IE1BQyBhZGRyZXNzIG9mIHBoeXNpY2Fs
IGludGVyZmFjZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IE1UVX0gJiBpbnQgJiBNVFUgaW4g
b2N0ZXRzIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgVkxBTn0gJiBpbnQgJiBWTEFOIHRhZyBm
b3IgYWxsIHRyYWZmaWMgcGFzc2luZyB0aHJvdWdoIHRoaXMgaW50ZXJmYWNlIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IG1ldHJpY3N9ICYgUElGXF9tZXRyaWNzIHJlZiAm
IG1ldHJpY3MgYXNzb2NpYXRlZCB3aXRoIHRoaXMgUElGIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3Rh
YmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFBJRn0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGVcX1ZMQU59Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNy
ZWF0ZSBhIFZMQU4gaW50ZXJmYWNlIGZyb20gYW4gZXhpc3RpbmcgcGh5c2ljYWwgaW50ZXJmYWNl
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChQSUYg
cmVmKSBjcmVhdGVfVkxBTiAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgZGV2aWNlLCBuZXR3b3JrIHJl
ZiBuZXR3b3JrLCBob3N0IHJlZiBob3N0LCBpbnQgVkxBTilcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIGRldmljZSAmIHBoeXNp
Y2FsIGludGVyZmFjZSBvbiB3aGljaCB0byBjcmF0ZSB0aGUgVkxBTiBpbnRlcmZhY2UgXFwgXGhs
aW5lIAotCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBuZXR3b3JrICYgbmV0d29yayB0byB3aGljaCB0
aGlzIGludGVyZmFjZSBzaG91bGQgYmUgY29ubmVjdGVkIFxcIFxobGluZSAKLQote1x0dCBob3N0
IHJlZiB9ICYgaG9zdCAmIHBoeXNpY2FsIG1hY2hpbmUgdG8gd2hpY2ggdGhpcyBQSUYgaXMgY29u
bmVjdGVkIFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIFZMQU4gJiBWTEFOIHRhZyBmb3IgdGhl
IG5ldyBpbnRlcmZhY2UgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVBJRiByZWYKLX0K
LQotCi1UaGUgcmVmZXJlbmNlIG9mIHRoZSBjcmVhdGVkIFBJRiBvYmplY3QKLVx2c3BhY2V7MC4z
Y219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZMQU5cX1RB
R1xfSU5WQUxJRH0KLQotXHZzcGFjZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5k
ZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0gCi1EZXN0cm95IHRoZSBpbnRlcmZhY2UgKHByb3Zp
ZGVkIGl0IGlzIGEgc3ludGhldGljIGludGVyZmFjZSBsaWtlIGEgVkxBTjsKLWZhaWwgaWYgaXQg
aXMgYSBwaHlzaWNhbCBpbnRlcmZhY2UpLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBQSUYgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UElGIHJlZiB9ICYgc2VsZiAmIHRoZSBQSUYgb2JqZWN0IHRvIGRlc3Ryb3kgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2lu
ZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFBJRlxfSVNcX1BIWVNJQ0FMfQot
Ci1cdnNwYWNlezAuNmNtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgUElGcyBrbm93biB0byB0
aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoUElGIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi0oUElGIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlk
IGZpZWxkIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFBJRiByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGRl
dmljZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUElGLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfZGV2aWNlIChzZXNzaW9uX2lkIHMsIFBJ
RiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQg
dGhlIGRldmljZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUElGLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2RldmljZSAoc2Vzc2lvbl9pZCBz
LCBQSUYgcmVmIHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFs
dWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9uZXR3b3JrfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IG5ldHdvcmsgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAobmV0d29yayByZWYpIGdldF9uZXR3b3JrIChzZXNz
aW9uX2lkIHMsIFBJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLW5ldHdvcmsgcmVmCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ob3N0fQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIGhvc3QgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9o
b3N0IChzZXNzaW9uX2lkIHMsIFBJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9NQUN9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9N
QUMgKHNlc3Npb25faWQgcywgUElGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9NQUN9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVNldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfTUFDIChz
ZXNzaW9uX2lkIHMsIFBJRiByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBJRiByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1
ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX01UVX0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBNVFUgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X01UVSAoc2Vzc2lvbl9pZCBzLCBQ
SUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgUElGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5zZXRcX01UVX0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBN
VFUgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9NVFUgKHNlc3Npb25faWQgcywgUElGIHJlZiBz
ZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgUElGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfVkxBTn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBWTEFOIGZpZWxkIG9mIHRoZSBn
aXZlbiBQSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gaW50IGdldF9WTEFOIChzZXNzaW9uX2lkIHMsIFBJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfVkxB
Tn0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBWTEFOIGZpZWxkIG9mIHRoZSBnaXZlbiBQ
SUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfVkxBTiAoc2Vzc2lvbl9pZCBzLCBQSUYgcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGludCB9
ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tZXRyaWNzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIG1ldHJpY3MgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUElGX21ldHJpY3Mg
cmVmKSBnZXRfbWV0cmljcyAoc2Vzc2lvbl9pZCBzLCBQSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUElGIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1QSUZcX21ldHJpY3MgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNl
IHRvIHRoZSBQSUYgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFBJRiByZWYpIGdldF9ieV91
dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2Jq
ZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUElGIHJlZgotfQot
Ci0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50
IHN0YXRlIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKFBJRiByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywg
UElGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFBJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUElGIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
ZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBQSUZcX21l
dHJpY3N9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBQSUZcX21ldHJpY3N9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgUElGXF9tZXRyaWNzfSBc
XAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9
e1xwYXJib3h7MTFjbX17XGVtCi1UaGUgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggYSBwaHlzaWNh
bCBuZXR3b3JrIGludGVyZmFjZS59fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAm
IERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9vYmplY3QgcmVmZXJlbmNlIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGlvL3JlYWRcX2tic30gJiBmbG9h
dCAmIFJlYWQgYmFuZHdpZHRoIChLaUIvcykgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgaW8vd3JpdGVcX2tic30gJiBmbG9hdCAmIFdyaXRlIGJhbmR3aWR0aCAoS2lCL3Mp
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGxhc3RcX3VwZGF0ZWR9ICYg
ZGF0ZXRpbWUgJiBUaW1lIGF0IHdoaWNoIHRoaXMgaW5mb3JtYXRpb24gd2FzIGxhc3QgdXBkYXRl
ZCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRl
ZCB3aXRoIGNsYXNzOiBQSUZcX21ldHJpY3N9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBQSUZc
X21ldHJpY3MgaW5zdGFuY2VzIGtub3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChQSUZfbWV0cmljcyByZWYpIFNldCkg
Z2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBJRlxfbWV0cmljcyBy
ZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0
aGUgZ2l2ZW4gUElGXF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBQSUZfbWV0cmlj
cyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQSUZcX21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9m
IHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaW9cX3JlYWRcX2tic30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBpby9yZWFkXF9rYnMgZmllbGQgb2YgdGhlIGdpdmVuIFBJRlxf
bWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSBmbG9hdCBnZXRfaW9fcmVhZF9rYnMgKHNlc3Npb25faWQgcywgUElGX21ldHJpY3MgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UElGXF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1mbG9hdAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaW9cX3dyaXRlXF9rYnN9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgaW8vd3JpdGVcX2ticyBmaWVsZCBvZiB0aGUgZ2l2ZW4gUElGXF9tZXRyaWNz
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGZsb2F0
IGdldF9pb193cml0ZV9rYnMgKHNlc3Npb25faWQgcywgUElGX21ldHJpY3MgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUElGXF9t
ZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1mbG9hdAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfbGFzdFxfdXBkYXRlZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBsYXN0XF91cGRhdGVkIGZpZWxkIG9mIHRoZSBnaXZlbiBQSUZcX21ldHJpY3MuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZGF0ZXRpbWUgZ2V0
X2xhc3RfdXBkYXRlZCAoc2Vzc2lvbl9pZCBzLCBQSUZfbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUZcX21ldHJp
Y3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLWRhdGV0aW1lCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBQSUZcX21ldHJpY3MgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVV
SUQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFBJ
Rl9tZXRyaWNzIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
IH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1QSUZcX21ldHJpY3MgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhlIGdpdmVuIFBJRlxf
bWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSAoUElGX21ldHJpY3MgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFBJRl9tZXRy
aWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFBJRlxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUElGXF9tZXRyaWNzIHJlY29yZAot
fQotCi0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0
aW9ue0NsYXNzOiBTUn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFNSfQotXGJlZ2lu
e2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1u
ezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFNSfSBcXAotXG11bHRpY29s
dW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFj
bX17XGVtIEEKLXN0b3JhZ2UgcmVwb3NpdG9yeS59fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxk
ICYgVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9vYmplY3QgcmVm
ZXJlbmNlIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgbmFtZS9sYWJlbH0gJiBzdHJpbmcgJiBh
IGh1bWFuLXJlYWRhYmxlIG5hbWUgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBuYW1lL2Rlc2Ny
aXB0aW9ufSAmIHN0cmluZyAmIGEgbm90ZXMgZmllbGQgY29udGFpbmcgaHVtYW4tcmVhZGFibGUg
ZGVzY3JpcHRpb24gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgVkRJc30g
JiAoVkRJIHJlZikgU2V0ICYgbWFuYWdlZCB2aXJ0dWFsIGRpc2tzIFxcCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IFBCRHN9ICYgKFBCRCByZWYpIFNldCAmIHBoeXNpY2FsIGJs
b2NrZGV2aWNlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFs
XF9hbGxvY2F0aW9ufSAmIGludCAmIHN1bSBvZiB2aXJ0dWFsXF9zaXplcyBvZiBhbGwgVkRJcyBp
biB0aGlzIHN0b3JhZ2UgcmVwb3NpdG9yeSAoaW4gYnl0ZXMpIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IHBoeXNpY2FsXF91dGlsaXNhdGlvbn0gJiBpbnQgJiBwaHlzaWNh
bCBzcGFjZSBjdXJyZW50bHkgdXRpbGlzZWQgb24gdGhpcyBzdG9yYWdlIHJlcG9zaXRvcnkgKGlu
IGJ5dGVzKS4gTm90ZSB0aGF0IGZvciBzcGFyc2UgZGlzayBmb3JtYXRzLCBwaHlzaWNhbFxfdXRp
bGlzYXRpb24gbWF5IGJlIGxlc3MgdGhhbiB2aXJ0dWFsXF9hbGxvY2F0aW9uIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IHBoeXNpY2FsXF9zaXplfSAmIGludCAmIHRvdGFs
IHBoeXNpY2FsIHNpemUgb2YgdGhlIHJlcG9zaXRvcnkgKGluIGJ5dGVzKSBcXAotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0dCB0eXBlfSAmIHN0cmluZyAmIHR5cGUgb2YgdGhlIHN0
b3JhZ2UgcmVwb3NpdG9yeSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0dCBj
b250ZW50XF90eXBlfSAmIHN0cmluZyAmIHRoZSB0eXBlIG9mIHRoZSBTUidzIGNvbnRlbnQsIGlm
IHJlcXVpcmVkIChlLmcuIElTT3MpIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNl
Y3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFNSfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfc3VwcG9ydGVkXF90eXBlc30KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJu
IGEgc2V0IG9mIGFsbCB0aGUgU1IgdHlwZXMgc3VwcG9ydGVkIGJ5IHRoZSBzeXN0ZW0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQp
IGdldF9zdXBwb3J0ZWRfdHlwZXMgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZyBTZXQKLX0KLQotCi10aGUgc3VwcG9ydGVkIFNSIHR5cGVzCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBT
UnMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKFNSIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oU1IgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBv
YmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1H
ZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIFNSLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBz
LCBTUiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBTUiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFi
ZWwgKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgU1IgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0K
LXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfbmFtZVxfbGFiZWx9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbmFtZS9sYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4g
U1IuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBTUiByZWYgc2VsZiwgc3RyaW5nIHZhbHVl
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFNS
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0
dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfZGVz
Y3JpcHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFtZS9kZXNjcmlwdGlvbiBm
aWVsZCBvZiB0aGUgZ2l2ZW4gU1IuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lkIHMs
IFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2Rlc2NyaXB0aW9ufQotCi17XGJmIE92ZXJ2
aWV3On0gCi1TZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24gZmllbGQgb2YgdGhlIGdpdmVuIFNSLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0
X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYsIHN0cmluZyB2YWx1
ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBT
UiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtc
dHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZESXN9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVkRJcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gU1IuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWREkgcmVm
KSBTZXQpIGdldF9WRElzIChzZXNzaW9uX2lkIHMsIFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
VkRJIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9QQkRzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBCRHMgZmllbGQgb2YgdGhl
IGdpdmVuIFNSLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoUEJEIHJlZikgU2V0KSBnZXRfUEJEcyAoc2Vzc2lvbl9pZCBzLCBTUiByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBTUiBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotKFBCRCByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfdmlydHVhbFxfYWxsb2NhdGlvbn0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSB2aXJ0dWFsXF9hbGxvY2F0aW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Zp
cnR1YWxfYWxsb2NhdGlvbiAoc2Vzc2lvbl9pZCBzLCBTUiByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBTUiByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
aW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9waHlz
aWNhbFxfdXRpbGlzYXRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgcGh5c2ljYWxc
X3V0aWxpc2F0aW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3BoeXNpY2FsX3V0aWxpc2F0aW9u
IChzZXNzaW9uX2lkIHMsIFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3BoeXNpY2FsXF9zaXplfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF9zaXplIGZpZWxkIG9mIHRoZSBnaXZl
biBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBp
bnQgZ2V0X3BoeXNpY2FsX3NpemUgKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgU1IgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
dHlwZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB0eXBlIGZpZWxkIG9mIHRoZSBnaXZl
biBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgU1IgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfY29u
dGVudFxfdHlwZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBjb250ZW50XF90eXBlIGZp
ZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2NvbnRlbnRfdHlwZSAoc2Vzc2lvbl9pZCBzLCBTUiBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBTUiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEg
cmVmZXJlbmNlIHRvIHRoZSBTUiBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVmKSBn
ZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlE
IG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVNSIHJl
ZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9y
ZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBj
dXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lk
IHMsIFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1TUiByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRz
IGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX25hbWVcX2xhYmVsfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYWxsIHRoZSBTUiBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2
ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKChTUiByZWYpIFNldCkgZ2V0X2J5X25hbWVfbGFiZWwgKHNlc3Npb25faWQgcywgc3RyaW5n
IGxhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IHN0cmluZyB9ICYgbGFiZWwgJiBsYWJlbCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGlu
ZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oU1IgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRv
IG9iamVjdHMgd2l0aCBtYXRjaCBuYW1lcwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFz
czogVkRJfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczogVkRJfQotXGJlZ2lue2xvbmd0
YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xs
fXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFZESX0gXFwKLVxtdWx0aWNvbHVtbnsx
fXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezExY219e1xl
bSBBCi12aXJ0dWFsIGRpc2sgaW1hZ2UufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5
cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5j
ZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG5hbWUvbGFiZWx9ICYgc3RyaW5nICYgYSBodW1h
bi1yZWFkYWJsZSBuYW1lIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgbmFtZS9kZXNjcmlwdGlv
bn0gJiBzdHJpbmcgJiBhIG5vdGVzIGZpZWxkIGNvbnRhaW5nIGh1bWFuLXJlYWRhYmxlIGRlc2Ny
aXB0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IFNSfSAmIFNSIHJl
ZiAmIHN0b3JhZ2UgcmVwb3NpdG9yeSBpbiB3aGljaCB0aGUgVkRJIHJlc2lkZXMgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgVkJEc30gJiAoVkJEIHJlZikgU2V0ICYgbGlz
dCBvZiB2YmRzIHRoYXQgcmVmZXIgdG8gdGhpcyBkaXNrIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtydW59JCAmICB7XHR0IGNyYXNoXF9kdW1wc30gJiAoY3Jhc2hkdW1wIHJlZikgU2V0ICYgbGlz
dCBvZiBjcmFzaCBkdW1wcyB0aGF0IHJlZmVyIHRvIHRoaXMgZGlzayBcXAotJFxtYXRoaXR7Uld9
JCAmICB7XHR0IHZpcnR1YWxcX3NpemV9ICYgaW50ICYgc2l6ZSBvZiBkaXNrIGFzIHByZXNlbnRl
ZCB0byB0aGUgZ3Vlc3QgKGluIGJ5dGVzKS4gTm90ZSB0aGF0LCBkZXBlbmRpbmcgb24gc3RvcmFn
ZSBiYWNrZW5kIHR5cGUsIHJlcXVlc3RlZCBzaXplIG1heSBub3QgYmUgcmVzcGVjdGVkIGV4YWN0
bHkgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcGh5c2ljYWxcX3V0aWxp
c2F0aW9ufSAmIGludCAmIGFtb3VudCBvZiBwaHlzaWNhbCBzcGFjZSB0aGF0IHRoZSBkaXNrIGlt
YWdlIGlzIGN1cnJlbnRseSB0YWtpbmcgdXAgb24gdGhlIHN0b3JhZ2UgcmVwb3NpdG9yeSAoaW4g
Ynl0ZXMpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IHR5cGV9ICYgdmRp
XF90eXBlICYgdHlwZSBvZiB0aGUgVkRJIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgc2hhcmFi
bGV9ICYgYm9vbCAmIHRydWUgaWYgdGhpcyBkaXNrIG1heSBiZSBzaGFyZWQgXFwKLSRcbWF0aGl0
e1JXfSQgJiAge1x0dCByZWFkXF9vbmx5fSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgZGlzayBtYXkg
T05MWSBiZSBtb3VudGVkIHJlYWQtb25seSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG90aGVy
XF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYWRkaXRpb25h
bCBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHNl
Y3VyaXR5L2xhYmVsfSAmIHN0cmluZyAmIHRoZSBWTSdzIHNlY3VyaXR5IGxhYmVsIFxcCi1caGxp
bmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xh
c3M6IFZESX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVy
dmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFZESXMga25vd24gdG8gdGhlIHN5c3Rl
bS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZE
SSByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZE
SSByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBv
ZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX25hbWVcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG5hbWUv
bGFiZWwgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Npb25faWQg
cywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmll
dzp9IAotU2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbmFtZV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9
ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9kZXNjcmlwdGlvbn0K
LQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lkIHMsIFZESSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnNldFxfbmFtZVxfZGVzY3JpcHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LVNldCB0aGUgbmFtZS9kZXNjcmlwdGlvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X25hbWVf
ZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBz
dHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfU1J9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgU1IgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVmKSBnZXRfU1Ig
KHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotU1IgcmVmCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQkRzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIFZCRHMgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZCRCByZWYpIFNldCkg
Z2V0X1ZCRHMgKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZCRCBy
ZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
Y3Jhc2hcX2R1bXBzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGNyYXNoXF9kdW1wcyBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19ICgoY3Jhc2hkdW1wIHJlZikgU2V0KSBnZXRfY3Jhc2hfZHVtcHMgKHNl
c3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGNyYXNoZHVtcCByZWYpIFNl
dAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmlydHVh
bFxfc2l6ZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2aXJ0dWFsXF9zaXplIGZpZWxk
IG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gaW50IGdldF92aXJ0dWFsX3NpemUgKHNlc3Npb25faWQgcywgVkRJIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+c2V0XF92aXJ0dWFsXF9zaXplfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhl
IHZpcnR1YWxcX3NpemUgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF92aXJ0dWFsX3NpemUgKHNl
c3Npb25faWQgcywgVkRJIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3
IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcGh5c2ljYWxcX3V0aWxpc2F0aW9ufQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF91dGlsaXNhdGlvbiBmaWVsZCBvZiB0aGUgZ2l2
ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IGludCBnZXRfcGh5c2ljYWxfdXRpbGlzYXRpb24gKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZE
SSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHR5cGUgZmllbGQg
b2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSAodmRpX3R5cGUpIGdldF90eXBlIChzZXNzaW9uX2lkIHMsIFZESSByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
REkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZkaVxfdHlwZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfc2hhcmFibGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUg
c2hhcmFibGUgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdldF9zaGFyYWJsZSAoc2Vzc2lvbl9pZCBz
LCBWREkgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgVkRJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ib29sCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9zaGFyYWJsZX0KLQote1xiZiBPdmVydmlldzp9IAot
U2V0IHRoZSBzaGFyYWJsZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X3NoYXJhYmxlIChzZXNz
aW9uX2lkIHMsIFZESSByZWYgc2VsZiwgYm9vbCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGJvb2wgfSAmIHZhbHVlICYgTmV3
IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVhZFxfb25seX0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSByZWFkXF9vbmx5IGZpZWxkIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gYm9vbCBnZXRfcmVhZF9vbmx5IChz
ZXNzaW9uX2lkIHMsIFZESSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWJvb2wKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3JlYWRcX29ubHl9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVNldCB0aGUgcmVhZFxfb25seSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0
X3JlYWRfb25seSAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYsIGJvb2wgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBib29s
IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX290aGVyXF9jb25maWd9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2
ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMs
IFZESSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcp
IE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfb3Ro
ZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBvdGhlclxfY29uZmlnIGZp
ZWxkIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIFZESSBy
ZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1hcCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJpbmcgJFxyaWdodGFy
cm93JCBzdHJpbmcpIE1hcCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+YWRkXF90
b1xfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXkt
dmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgYWRk
X3RvX290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYsIHN0cmluZyBrZXks
IHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0K
LXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxf
b3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkg
YW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIG90aGVyXF9jb25maWcKLWZpZWxk
IG9mIHRoZSBnaXZlbiBWREkuICBJZiB0aGUga2V5IGlzIG5vdCBpbiB0aGF0IE1hcCwgdGhlbiBk
byBub3RoaW5nLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHZvaWQgcmVtb3ZlX2Zyb21fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIFZESSByZWYg
c2VsZiwgc3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3Qg
XFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIHJlbW92ZSBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
dFxfc2VjdXJpdHlcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUgc2VjdXJpdHkg
bGFiZWwgb2YgdGhlIGdpdmVuIFZESS4gUmVmZXIgdG8gdGhlIFhTUG9saWN5IGNsYXNzCi1mb3Ig
dGhlIGZvcm1hdCBvZiB0aGUgc2VjdXJpdHkgbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9zZWN1cml0eV9sYWJlbCAoc2Vzc2lv
bl9pZCBzLCBWREkgcmVmIHNlbGYsIHN0cmluZwotc2VjdXJpdHlfbGFiZWwsIHN0cmluZyBvbGRf
bGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0K
LXtcdHQgc3RyaW5nIH0gJiBzZWN1cml0eVxfbGFiZWwgJiBOZXcgdmFsdWUgb2YgdGhlIHNlY3Vy
aXR5IGxhYmVsIFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIG9sZFxfbGFiZWwgJiBMYWJlbCB2
YWx1ZSB0aGF0IHRoZSBzZWN1cml0eSBsYWJlbCBcXAotJiAmIG11c3QgY3VycmVudGx5IGhhdmUg
Zm9yIHRoZSBjaGFuZ2UgdG8gc3VjY2VlZC5cXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQK
LX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENv
ZGVzOn0ge1x0dCBTRUNVUklUWVxfRVJST1J9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3NlY3Vy
aXR5XF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIHNlY3VyaXR5IGxhYmVsIG9m
IHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3Zl
cmJhdGltfSBzdHJpbmcgZ2V0X3NlY3VyaXR5X2xhYmVsIChzZXNzaW9uX2lkIHMsIFZESSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgVkRJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZ2l2ZW4gZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5l
dyBWREkgaW5zdGFuY2UsIGFuZCByZXR1cm4gaXRzIGhhbmRsZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkRJIHJlZikgY3JlYXRlIChzZXNzaW9u
X2lkIHMsIFZESSByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBWREkgcmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9y
IGFyZ3VtZW50cyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVkRJIHJlZgotfQotCi0K
LXJlZmVyZW5jZSB0byB0aGUgbmV3bHkgY3JlYXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5k
ZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0gCi1EZXN0cm95IHRoZSBzcGVjaWZpZWQgVkRJIGlu
c3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12
b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBWREkgaW5zdGFuY2Ugd2l0aCB0aGUgc3Bl
Y2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKFZESSByZWYpIGdldF9ieV91dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmlu
ZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotVkRJIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29y
ZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZESSByZWNvcmQpIGdl
dF9yZWNvcmQgKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVkRJIHJl
Y29yZAotfQotCi0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9ieVxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhbGwgdGhlIFZE
SSBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWREkgcmVmKSBTZXQpIGdldF9ieV9uYW1lX2xh
YmVsIChzZXNzaW9uX2lkIHMsIHN0cmluZyBsYWJlbClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIGxhYmVsICYgbGFiZWwgb2Yg
b2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZESSByZWYp
IFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gb2JqZWN0cyB3aXRoIG1hdGNoIG5hbWVzCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNt
fQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBWQkR9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9y
IGNsYXNzOiBWQkR9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQot
XGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtc
YmYgVkJEfSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1
bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEKLXZpcnR1YWwgYmxvY2sgZGV2aWNlLn19IFxc
Ci1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQot
JFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1
ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2lu
c30kICYgIHtcdHQgVk19ICYgVk0gcmVmICYgdGhlIHZpcnR1YWwgbWFjaGluZSBcXAotJFxtYXRo
aXR7Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0dCBWREl9ICYgVkRJIHJlZiAmIHRoZSB2aXJ0dWFs
IGRpc2sgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBkZXZpY2V9ICYgc3RyaW5nICYgZGV2aWNl
IHNlZW4gYnkgdGhlIGd1ZXN0IGUuZy4gaGRhMSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IGJv
b3RhYmxlfSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgVkJEIGlzIGJvb3RhYmxlIFxcCi0kXG1hdGhp
dHtSV30kICYgIHtcdHQgbW9kZX0gJiB2YmRcX21vZGUgJiB0aGUgbW9kZSB0aGUgVkJEIHNob3Vs
ZCBiZSBtb3VudGVkIHdpdGggXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCB0eXBlfSAmIHZiZFxf
dHlwZSAmIGhvdyB0aGUgVkJEIHdpbGwgYXBwZWFyIHRvIHRoZSBndWVzdCAoZS5nLiBkaXNrIG9y
IENEKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBjdXJyZW50bHlcX2F0
dGFjaGVkfSAmIGJvb2wgJiBpcyB0aGUgZGV2aWNlIGN1cnJlbnRseSBhdHRhY2hlZCAoZXJhc2Vk
IG9uIHJlYm9vdCkgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc3RhdHVz
XF9jb2RlfSAmIGludCAmIGVycm9yL3N1Y2Nlc3MgY29kZSBhc3NvY2lhdGVkIHdpdGggbGFzdCBh
dHRhY2gtb3BlcmF0aW9uIChlcmFzZWQgb24gcmVib290KSBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBzdGF0dXNcX2RldGFpbH0gJiBzdHJpbmcgJiBlcnJvci9zdWNjZXNz
IGluZm9ybWF0aW9uIGFzc29jaWF0ZWQgd2l0aCBsYXN0IGF0dGFjaC1vcGVyYXRpb24gc3RhdHVz
IChlcmFzZWQgb24gcmVib290KSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBydW50aW1lXF9wcm9wZXJ0aWVzfSAmIChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1h
cCAmIERldmljZSBydW50aW1lIHByb3BlcnRpZXMgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBx
b3MvYWxnb3JpdGhtXF90eXBlfSAmIHN0cmluZyAmIFFvUyBhbGdvcml0aG0gdG8gdXNlIFxcCi0k
XG1hdGhpdHtSV30kICYgIHtcdHQgcW9zL2FsZ29yaXRobVxfcGFyYW1zfSAmIChzdHJpbmcgJFxy
aWdodGFycm93JCBzdHJpbmcpIE1hcCAmIHBhcmFtZXRlcnMgZm9yIGNob3NlbiBRb1MgYWxnb3Jp
dGhtIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHFvcy9zdXBwb3J0ZWRc
X2FsZ29yaXRobXN9ICYgc3RyaW5nIFNldCAmIHN1cHBvcnRlZCBRb1MgYWxnb3JpdGhtcyBmb3Ig
dGhpcyBWQkQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbWV0cmljc30g
JiBWQkRcX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggdGhpcyBWQkQgXFwK
LVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0
aCBjbGFzczogVkJEfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fm1lZGlhXF9jaGFuZ2V9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUNoYW5nZSB0aGUgbWVkaWEgaW4gdGhlIGRldmljZSBmb3IgQ0RS
T00tbGlrZSBkZXZpY2VzIG9ubHkuIEZvciBvdGhlcgotZGV2aWNlcywgZGV0YWNoIHRoZSBWQkQg
YW5kIGF0dGFjaCBhIG5ldyBvbmUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gdm9pZCBtZWRpYV9jaGFuZ2UgKHNlc3Npb25faWQgcywgVkJEIHJlZiB2
YmQsIFZESSByZWYgdmRpKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHZiZCAmIFRoZSB2YmQgcmVwcmVzZW50aW5nIHRoZSBD
RFJPTS1saWtlIGRldmljZSBcXCBcaGxpbmUgCi0KLXtcdHQgVkRJIHJlZiB9ICYgdmRpICYgVGhl
IG5ldyBWREkgdG8gJ2luc2VydCcgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5wbHVnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1Ib3Rw
bHVnIHRoZSBzcGVjaWZpZWQgVkJELCBkeW5hbWljYWxseSBhdHRhY2hpbmcgaXQgdG8gdGhlIHJ1
bm5pbmcgVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gdm9pZCBwbHVnIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYg
VGhlIFZCRCB0byBob3RwbHVnIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+dW5wbHVnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1Ib3Qt
dW5wbHVnIHRoZSBzcGVjaWZpZWQgVkJELCBkeW5hbWljYWxseSB1bmF0dGFjaGluZyBpdCBmcm9t
IHRoZSBydW5uaW5nCi1WTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSB2b2lkIHVucGx1ZyAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9
ICYgc2VsZiAmIFRoZSBWQkQgdG8gaG90LXVucGx1ZyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgVkJEcyBrbm93biB0byB0aGUgc3lz
dGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgo
VkJEIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
VkJEIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxk
IG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2Yg
dGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoVk0gcmVmKSBnZXRfVk0gKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotVk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9WREl9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVkRJIGZpZWxkIG9mIHRo
ZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKFZESSByZWYpIGdldF9WREkgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotVkRJIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGRldmljZSBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IHN0cmluZyBnZXRfZGV2aWNlIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
QkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fnNldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIGRldmlj
ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2RldmljZSAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVm
IHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9ib290YWJsZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBib290YWJs
ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2Jvb3RhYmxlIChzZXNzaW9uX2lkIHMsIFZCRCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLWJvb2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5zZXRcX2Jvb3RhYmxlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhl
IGJvb3RhYmxlIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfYm9vdGFibGUgKHNlc3Npb25faWQg
cywgVkJEIHJlZiBzZWxmLCBib29sIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgYm9vbCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUg
dG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9tb2RlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG1vZGUg
ZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAodmJkX21vZGUpIGdldF9tb2RlIChzZXNzaW9uX2lkIHMsIFZCRCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXZiZFxfbW9kZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfbW9kZX0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRo
ZSBtb2RlIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbW9kZSAoc2Vzc2lvbl9pZCBzLCBWQkQg
cmVmIHNlbGYsIHZiZF9tb2RlIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgdmJkXF9tb2RlIH0gJiB2YWx1ZSAmIE5ldyB2YWx1
ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX3R5cGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdHlw
ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICh2YmRfdHlwZSkgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgVkJE
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdmJkXF90eXBlCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQg
dGhlIHR5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF90eXBlIChzZXNzaW9uX2lkIHMsIFZC
RCByZWYgc2VsZiwgdmJkX3R5cGUgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCB2YmRcX3R5cGUgfSAmIHZhbHVlICYgTmV3IHZh
bHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfY3VycmVudGx5XF9hdHRhY2hlZH0KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBjdXJyZW50bHlcX2F0dGFjaGVkIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gYm9vbCBn
ZXRfY3VycmVudGx5X2F0dGFjaGVkIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWJvb2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3N0YXR1c1xfY29kZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdGF0dXNcX2Nv
ZGUgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3N0YXR1c19jb2RlIChzZXNzaW9uX2lkIHMsIFZC
RCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfc3RhdHVzXF9kZXRhaWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgc3RhdHVzXF9kZXRhaWwgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3N0YXR1
c19kZXRhaWwgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5n
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ydW50aW1l
XF9wcm9wZXJ0aWVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJ1bnRpbWVcX3Byb3Bl
cnRpZXMgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3J1bnRp
bWVfcHJvcGVydGllcyAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
c3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIHFvcy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVu
IFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X3Fvc19hbGdvcml0aG1fdHlwZSAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNlbGYp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJE
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5zZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIHFvcy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9xb3Nf
YWxnb3JpdGhtX3R5cGUgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJE
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0
dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcW9zXF9hbGdv
cml0aG1cX3BhcmFtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBxb3MvYWxnb3JpdGht
XF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3Fv
c19hbGdvcml0aG1fcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBxb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2Yg
dGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSB2b2lkIHNldF9xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWQkQg
cmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5nICRccmlnaHRh
cnJvdyQgc3RyaW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmFkZFxf
dG9cX3Fvc1xfYWxnb3JpdGhtXF9wYXJhbXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUFkZCB0aGUg
Z2l2ZW4ga2V5LXZhbHVlIHBhaXIgdG8gdGhlIHFvcy9hbGdvcml0aG1cX3BhcmFtcyBmaWVsZCBv
ZiB0aGUKLWdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSB2b2lkIGFkZF90b19xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBz
LCBWQkQgcmVmIHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5
ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVl
IHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQote1xi
ZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5n
IHZhbHVlIGZyb20gdGhlCi1xb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVu
IFZCRC4gIElmIHRoZSBrZXkgaXMgbm90IGluIHRoYXQKLU1hcCwgdGhlbiBkbyBub3RoaW5nLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVt
b3ZlX2Zyb21fcW9zX2FsZ29yaXRobV9wYXJhbXMgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxm
LCBzdHJpbmcga2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiBrZXkgJiBLZXkgdG8gcmVtb3ZlIFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9x
b3NcX3N1cHBvcnRlZFxfYWxnb3JpdGhtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBx
b3Mvc3VwcG9ydGVkXF9hbGdvcml0aG1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQpIGdl
dF9xb3Nfc3VwcG9ydGVkX2FsZ29yaXRobXMgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotc3RyaW5nIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfbWV0cmljc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZXRy
aWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZCRF9tZXRyaWNzIHJlZikgZ2V0X21ldHJpY3MgKHNlc3Np
b25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVkJEXF9tZXRyaWNzIHJlZgotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNyZWF0ZX0KLQote1xiZiBP
dmVydmlldzp9IAotQ3JlYXRlIGEgbmV3IFZCRCBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFu
ZGxlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChW
QkQgcmVmKSBjcmVhdGUgKHNlc3Npb25faWQgcywgVkJEIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWNvcmQgfSAm
IGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1WQkQgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9i
amVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ry
b3kgdGhlIHNwZWNpZmllZCBWQkQgaW5zdGFuY2UuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkZXN0cm95IChzZXNzaW9uX2lkIHMsIFZCRCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIFZC
RCBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkJEIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Np
b25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0
dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WQkQgcmVmCi19Ci0KLQotcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2Yg
dGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoVkJEIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
VkJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi1WQkQgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBv
YmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQot
XHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZCRFxfbWV0cmljc30KLVxz
dWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZCRFxfbWV0cmljc30KLVxiZWdpbntsb25ndGFi
bGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17
TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWQkRcX21ldHJpY3N9IFxcCi1cbXVsdGlj
b2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsx
MWNtfXtcZW0KLVRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCBhIHZpcnR1YWwgYmxvY2sgZGV2
aWNlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwK
LVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmlu
ZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9c
bWF0aGl0e3J1bn0kICYgIHtcdHQgaW8vcmVhZFxfa2JzfSAmIGZsb2F0ICYgUmVhZCBiYW5kd2lk
dGggKEtpQi9zKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBpby93cml0
ZVxfa2JzfSAmIGZsb2F0ICYgV3JpdGUgYmFuZHdpZHRoIChLaUIvcykgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbGFzdFxfdXBkYXRlZH0gJiBkYXRldGltZSAmIFRpbWUg
YXQgd2hpY2ggdGhpcyBpbmZvcm1hdGlvbiB3YXMgbGFzdCB1cGRhdGVkIFxcCi1caGxpbmUKLVxl
bmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFZC
RFxfbWV0cmljc30KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBP
dmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFZCRFxfbWV0cmljcyBpbnN0YW5j
ZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKFZCRF9tZXRyaWNzIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9u
X2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVkJEXF9tZXRyaWNzIHJlZikgU2V0Ci19Ci0KLQot
cmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkRcX21l
dHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
c3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFZCRF9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRFxfbWV0
cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9pb1xfcmVhZFxfa2JzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQg
dGhlIGlvL3JlYWRcX2ticyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJEXF9tZXRyaWNzLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGZsb2F0IGdldF9pb19y
ZWFkX2ticyAoc2Vzc2lvbl9pZCBzLCBWQkRfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkRcX21ldHJpY3MgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9pb1xfd3JpdGVcX2tic30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBpby93
cml0ZVxfa2JzIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkRcX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZmxvYXQgZ2V0X2lvX3dyaXRlX2ti
cyAoc2Vzc2lvbl9pZCBzLCBWQkRfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkRcX21ldHJpY3MgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF9sYXN0XF91cGRhdGVkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGxhc3RcX3VwZGF0
ZWQgZmllbGQgb2YgdGhlIGdpdmVuIFZCRFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRfbGFzdF91cGRhdGVkIChz
ZXNzaW9uX2lkIHMsIFZCRF9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRFxfbWV0cmljcyByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
ZGF0ZXRpbWUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIFZC
RFxfbWV0cmljcyBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkJEX21ldHJpY3MgcmVmKSBn
ZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlE
IG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZCRFxf
bWV0cmljcyByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFp
bmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gVkJEXF9tZXRyaWNzLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChWQkRfbWV0cmljcyBy
ZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVkJEX21ldHJpY3MgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEXF9t
ZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1WQkRcX21ldHJpY3MgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxk
cyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFBCRH0K
LVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFBCRH0KLVxiZWdpbntsb25ndGFibGV9e3xs
bGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17TmFtZX0g
JiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBQQkR9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rl
c2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBw
aHlzaWNhbCBibG9jayBkZXZpY2VzIHRocm91Z2ggd2hpY2ggaG9zdHMgYWNjZXNzIFNScy59fSBc
XAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlx
dWUgaWRlbnRpZmllci9vYmplY3QgcmVmZXJlbmNlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtp
bnN9JCAmICB7XHR0IGhvc3R9ICYgaG9zdCByZWYgJiBwaHlzaWNhbCBtYWNoaW5lIG9uIHdoaWNo
IHRoZSBwYmQgaXMgYXZhaWxhYmxlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7
XHR0IFNSfSAmIFNSIHJlZiAmIHRoZSBzdG9yYWdlIHJlcG9zaXRvcnkgdGhhdCB0aGUgcGJkIHJl
YWxpc2VzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IGRldmljZVxfY29u
ZmlnfSAmIChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcCAmIGEgY29uZmlnIHN0cmlu
ZyB0byBzdHJpbmcgbWFwIHRoYXQgaXMgcHJvdmlkZWQgdG8gdGhlIGhvc3QncyBTUi1iYWNrZW5k
LWRyaXZlciBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBjdXJyZW50bHlc
X2F0dGFjaGVkfSAmIGJvb2wgJiBpcyB0aGUgU1IgY3VycmVudGx5IGF0dGFjaGVkIG9uIHRoaXMg
aG9zdD8gXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29j
aWF0ZWQgd2l0aCBjbGFzczogUEJEfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxs
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgUEJEcyBrbm93
biB0byB0aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19ICgoUEJEIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oUEJEIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0
cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBQQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFBC
RCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBob3N0IGZpZWxkIG9mIHRoZSBnaXZlbiBQQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3QgcmVmKSBnZXRfaG9zdCAoc2Vzc2lvbl9pZCBz
LCBQQkQgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgUEJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ob3N0IHJlZgotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfU1J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCB0aGUgU1IgZmllbGQgb2YgdGhlIGdpdmVuIFBCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVmKSBnZXRfU1IgKHNlc3Npb25faWQgcywg
UEJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFBCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotU1IgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9kZXZpY2VcX2NvbmZpZ30KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBkZXZpY2VcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gUEJELgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+
IHN0cmluZykgTWFwKSBnZXRfZGV2aWNlX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBQQkQgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UEJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2N1cnJlbnRseVxfYXR0
YWNoZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgY3VycmVudGx5XF9hdHRhY2hlZCBm
aWVsZCBvZiB0aGUgZ2l2ZW4gUEJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2N1cnJlbnRseV9hdHRhY2hlZCAoc2Vzc2lvbl9pZCBz
LCBQQkQgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgUEJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ib29sCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1DcmVhdGUg
YSBuZXcgUEJEIGluc3RhbmNlLCBhbmQgcmV0dXJuIGl0cyBoYW5kbGUuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFBCRCByZWYpIGNyZWF0ZSAoc2Vz
c2lvbl9pZCBzLCBQQkQgcmVjb3JkIGFyZ3MpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUEJEIHJlY29yZCB9ICYgYXJncyAmIEFsbCBjb25zdHJ1
Y3RvciBhcmd1bWVudHMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVBCRCByZWYKLX0K
LQotCi1yZWZlcmVuY2UgdG8gdGhlIG5ld2x5IGNyZWF0ZWQgb2JqZWN0Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+ZGVzdHJveX0KLQote1xiZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUgc3BlY2lmaWVkIFBC
RCBpbnN0YW5jZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQgcywgUEJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBCRCByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUgUEJEIGluc3RhbmNlIHdpdGggdGhl
IHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IChQQkQgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVp
ZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBz
dHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLVBCRCByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSBy
ZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gUEJELgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChQQkQgcmVjb3Jk
KSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFBCRCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQQkQgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVBC
RCByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdw
YWdlCi1cc2VjdGlvbntDbGFzczogY3Jhc2hkdW1wfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBj
bGFzczogY3Jhc2hkdW1wfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9
fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXts
fH17XGJmIGNyYXNoZHVtcH0gXFwKLVxtdWx0aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYg
XG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezExY219e1xlbSBBCi1WTSBjcmFzaGR1bXAufX0g
XFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5l
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5p
cXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7
aW5zfSQgJiAge1x0dCBWTX0gJiBWTSByZWYgJiB0aGUgdmlydHVhbCBtYWNoaW5lIFxcCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IFZESX0gJiBWREkgcmVmICYgdGhlIHZpcnR1
YWwgZGlzayBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNz
b2NpYXRlZCB3aXRoIGNsYXNzOiBjcmFzaGR1bXB9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
ZGVzdHJveX0KLQote1xiZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUgc3BlY2lmaWVkIGNyYXNo
ZHVtcC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2
b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQgcywgY3Jhc2hkdW1wIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNyYXNoZHVtcCByZWYg
fSAmIHNlbGYgJiBUaGUgY3Jhc2hkdW1wIHRvIGRlc3Ryb3kgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xi
ZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIGNyYXNoZHVtcHMga25vd24g
dG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoKGNyYXNoZHVtcCByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7
dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotKGNyYXNoZHVtcCByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8g
YWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3Jhc2hkdW1wLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAo
c2Vzc2lvbl9pZCBzLCBjcmFzaGR1bXAgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3Jhc2hkdW1wIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJp
bmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZNfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZNIGZpZWxkIG9mIHRoZSBnaXZlbiBjcmFzaGR1
bXAuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZN
IHJlZikgZ2V0X1ZNIChzZXNzaW9uX2lkIHMsIGNyYXNoZHVtcCByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcmFzaGR1bXAgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLVZNIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfVkRJfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZESSBmaWVsZCBvZiB0
aGUgZ2l2ZW4gY3Jhc2hkdW1wLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IChWREkgcmVmKSBnZXRfVkRJIChzZXNzaW9uX2lkIHMsIGNyYXNoZHVtcCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBjcmFzaGR1bXAgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZESSByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUg
ZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIGNyYXNoZHVtcCBpbnN0YW5jZSB3aXRoIHRoZSBzcGVj
aWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSAoY3Jhc2hkdW1wIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1
aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
c3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi1jcmFzaGR1bXAgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhlIGdpdmVuIGNy
YXNoZHVtcC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSAoY3Jhc2hkdW1wIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBjcmFzaGR1bXAg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgY3Jhc2hkdW1wIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1jcmFzaGR1bXAgcmVjb3JkCi19Ci0KLQotYWxs
IGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6
IFZUUE19Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBWVFBNfQotXGJlZ2lue2xvbmd0
YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xs
fXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFZUUE19IFxcCi1cbXVsdGljb2x1bW57
MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtc
ZW0gQQotdmlydHVhbCBUUE0gZGV2aWNlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBU
eXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVu
Y2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgVk19ICYgVk0gcmVmICYg
dGhlIHZpcnR1YWwgbWFjaGluZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0
dCBiYWNrZW5kfSAmIFZNIHJlZiAmIHRoZSBkb21haW4gd2hlcmUgdGhlIGJhY2tlbmQgaXMgbG9j
YXRlZCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAk
XHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYWRkaXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi1c
aGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGgg
Y2xhc3M6IFZUUE19Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91
dWlkIChzZXNzaW9uX2lkIHMsIFZUUE0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVlRQTSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WTX0KLQote1xi
ZiBPdmVydmlldzp9IAotR2V0IHRoZSBWTSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVlRQTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVmKSBnZXRf
Vk0gKHNlc3Npb25faWQgcywgVlRQTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWVFBNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTSByZWYKLX0K
LQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2JhY2tlbmR9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgYmFja2VuZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVlRQ
TS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0g
cmVmKSBnZXRfYmFja2VuZCAoc2Vzc2lvbl9pZCBzLCBWVFBNIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZUUE0gcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLVZNIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIG90aGVyXF9jb25m
aWcgZmllbGQgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fQotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X290aGVyX2Nv
bmZpZyAoc2Vzc2lvbl9pZCBzLCBWVFBNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVlRQTSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKHN0cmluZyAkXHJp
Z2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+c2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUg
b3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVlRQTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X290aGVyX2NvbmZpZyAoc2Vz
c2lvbl9pZCBzLCBWVFBNIHJlZiBzZWxmLCAoc3RyaW5nIC0+IHN0cmluZykgTWFwIHZhbHVlKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVlRQTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQote1x0dCAo
c3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRv
IHNldCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfcnVudGltZVxfcHJvcGVydGllc30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQg
dGhlIHJ1bnRpbWVcX3Byb3BlcnRpZXMgZmllbGQgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLVxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmlu
ZykgTWFwKSBnZXRfcnVudGltZV9wcm9wZXJ0aWVzIChzZXNzaW9uX2lkIHMsIFZUUE0gcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
VFBNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9Ci17XHR0Ci0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUNyZWF0ZSBhIG5ldyBWVFBNIGluc3RhbmNlLCBhbmQgcmV0dXJuIGl0cyBoYW5kbGUu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZUUE0g
cmVmKSBjcmVhdGUgKHNlc3Npb25faWQgcywgVlRQTSByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWVFBNIHJlY29yZCB9ICYg
YXJncyAmIEFsbCBjb25zdHJ1Y3RvciBhcmd1bWVudHMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLVZUUE0gcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9i
amVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ry
b3kgdGhlIHNwZWNpZmllZCBWVFBNIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBWVFBN
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZUUE0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhl
IFZUUE0gaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZUUE0gcmVmKSBnZXRfYnlfdXVpZCAo
c2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0
byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZUUE0gcmVmCi19Ci0KLQot
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3Rh
dGUgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKFZUUE0gcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFZU
UE0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgVlRQTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVlRQTSByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRz
IGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFzczogY29uc29s
ZX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IGNvbnNvbGV9Ci1cYmVnaW57bG9uZ3Rh
YmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9
e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgY29uc29sZX0gXFwKLVxtdWx0aWNvbHVt
bnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezExY219
e1xlbSBBCi1jb25zb2xlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVz
Y3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1
dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRc
bWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcHJvdG9jb2x9ICYgY29uc29sZVxfcHJv
dG9jb2wgJiB0aGUgcHJvdG9jb2wgdXNlZCBieSB0aGlzIGNvbnNvbGUgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbG9jYXRpb259ICYgc3RyaW5nICYgVVJJIGZvciB0aGUg
Y29uc29sZSBzZXJ2aWNlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFZN
fSAmIFZNIHJlZiAmIFZNIHRvIHdoaWNoIHRoaXMgY29uc29sZSBpcyBhdHRhY2hlZCBcXAotJFxt
YXRoaXR7Uld9JCAmICB7XHR0IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ck
IHN0cmluZykgTWFwICYgYWRkaXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi1caGxpbmUKLVxlbmR7
bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IGNvbnNv
bGV9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBjb25zb2xlcyBrbm93biB0byB0aGUgc3lzdGVt
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoY29u
c29sZSByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
KGNvbnNvbGUgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQg
ZmllbGQgb2YgdGhlIGdpdmVuIGNvbnNvbGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIGNvbnNv
bGUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9wcm90b2NvbH0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBwcm90b2NvbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gY29uc29sZS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoY29uc29sZV9wcm90b2Nv
bCkgZ2V0X3Byb3RvY29sIChzZXNzaW9uX2lkIHMsIGNvbnNvbGUgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY29uc29sZSByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotY29uc29sZVxfcHJvdG9jb2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX2xvY2F0aW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGxvY2F0aW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBjb25zb2xlLgotCi0gXG5vaW5kZW50IHtcYmYg
U2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfbG9jYXRpb24gKHNlc3Np
b25faWQgcywgY29uc29sZSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBjb25zb2xlIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZNfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIFZNIGZpZWxkIG9mIHRoZSBnaXZlbiBjb25zb2xlLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChWTSByZWYpIGdldF9W
TSAoc2Vzc2lvbl9pZCBzLCBjb25zb2xlIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNvbnNvbGUgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZNIHJl
ZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfb3RoZXJc
X2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBvdGhlclxfY29uZmlnIGZpZWxk
IG9mIHRoZSBnaXZlbiBjb25zb2xlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfb3RoZXJfY29uZmln
IChzZXNzaW9uX2lkIHMsIGNvbnNvbGUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmlu
ZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+c2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIG90aGVyXF9jb25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGNvbnNvbGUuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfb3RoZXJf
Y29uZmlnIChzZXNzaW9uX2lkIHMsIGNvbnNvbGUgcmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5n
KSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2
YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9vdGhlclxfY29uZmlnfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1BZGQgdGhlIGdpdmVuIGtleS12YWx1ZSBwYWlyIHRvIHRoZSBvdGhl
clxfY29uZmlnIGZpZWxkIG9mIHRoZSBnaXZlbgotY29uc29sZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGFkZF90b19vdGhlcl9jb25maWcg
KHNlc3Npb25faWQgcywgY29uc29sZSByZWYgc2VsZiwgc3RyaW5nIGtleSwgc3RyaW5nIHZhbHVl
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNv
bnNvbGUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3Ry
aW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxfb3RoZXJcX2Nv
bmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkgYW5kIGl0cyBj
b3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIG90aGVyXF9jb25maWcKLWZpZWxkIG9mIHRoZSBn
aXZlbiBjb25zb2xlLiAgSWYgdGhlIGtleSBpcyBub3QgaW4gdGhhdCBNYXAsIHRoZW4gZG8KLW5v
dGhpbmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
dm9pZCByZW1vdmVfZnJvbV9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgY29uc29sZSByZWYg
c2VsZiwgc3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBjb25zb2xlIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5ldyBjb25zb2xlIGluc3Rh
bmNlLCBhbmQgcmV0dXJuIGl0cyBoYW5kbGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGNvbnNvbGUgcmVmKSBjcmVhdGUgKHNlc3Npb25faWQgcywg
Y29uc29sZSByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBjb25zb2xlIHJlY29yZCB9ICYgYXJncyAmIEFsbCBjb25zdHJ1Y3Rv
ciBhcmd1bWVudHMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWNvbnNvbGUgcmVmCi19
Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ryb3kgdGhlIHNwZWNpZmllZCBj
b25zb2xlIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBjb25zb2xlIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNvbnNv
bGUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF91
dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIGNvbnNvbGUg
aW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGNvbnNvbGUgcmVmKSBnZXRfYnlfdXVpZCAoc2Vz
c2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byBy
ZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWNvbnNvbGUgcmVmCi19Ci0KLQot
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3Rh
dGUgb2YgdGhlIGdpdmVuIGNvbnNvbGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKGNvbnNvbGUgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lk
IHMsIGNvbnNvbGUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotY29uc29sZSByZWNvcmQKLX0K
LQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlv
bntDbGFzczogRFBDSX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IERQQ0l9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgRFBDSX0gXFwKLVxtdWx0
aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94
ezExY219e1xlbSBBCi1wYXNzLXRocm91Z2ggUENJIGRldmljZS59fSBcXAotXGhsaW5lCi1RdWFs
cyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9c
bWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9v
YmplY3QgcmVmZXJlbmNlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN0fSQgJiAge1x0dCBW
TX0gJiBWTSByZWYgJiB0aGUgdmlydHVhbCBtYWNoaW5lIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtpbnN0fSQgJiAge1x0dCBQUENJfSAmIFBQQ0kgcmVmICYgdGhlIHBoeXNpY2FsIFBDSSBkZXZp
Y2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc3R9JCAmICB7XHR0IGhvdHBsdWdcX3Nsb3R9
ICYgaW50ICYgdGhlIHNsb3QgbnVtYmVyIHRvIHdoaWNoIHRoaXMgUENJIGRldmljZSBpcyBpbnNl
cnRlZCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9kb21h
aW59ICYgaW50ICYgdGhlIHZpcnR1YWwgZG9tYWluIG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9idXN9ICYgaW50ICYgdGhlIHZpcnR1YWwgYnVz
IG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9z
bG90fSAmIGludCAmIHRoZSB2aXJ0dWFsIHNsb3QgbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IHZpcnR1YWxcX2Z1bmN9ICYgaW50ICYgdGhlIHZpcnR1YWwgZnVu
YyBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdmlydHVhbFxf
bmFtZX0gJiBzdHJpbmcgJiB0aGUgdmlydHVhbCBQQ0kgbmFtZSBcXAotXGhsaW5lCi1cZW5ke2xv
bmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBEUENJfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1S
ZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgRFBDSXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKERQQ0kgcmVmKSBT
ZXQpIGdldF9hbGwgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShEUENJIHJlZikg
U2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBn
aXZlbiBEUENJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBEUENJIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERQQ0kgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2YgdGhlIGdp
dmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKFZNIHJlZikgZ2V0X1ZNIChzZXNzaW9uX2lkIHMsIERQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFBDSSByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotVk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9QUENJfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBQQ0kgZmllbGQgb2YgdGhl
IGdpdmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKFBQQ0kgcmVmKSBnZXRfUFBDSSAoc2Vzc2lvbl9pZCBzLCBEUENJIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERQQ0kg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLVBQQ0kgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9ob3RwbHVnXF9zbG90fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGhvdHBsdWdcX3Nsb3QgZmllbGQgb2YgdGhlIGdpdmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9ob3RwbHVnX3Nsb3QgKHNl
c3Npb25faWQgcywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1YWxcX2RvbWFpbn0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2aXJ0dWFsXF9kb21haW4gZmllbGQgb2YgdGhlIGdp
dmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gaW50IGdldF92aXJ0dWFsX2RvbWFpbiAoc2Vzc2lvbl9pZCBzLCBEUENJIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERQQ0kg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfdmlydHVhbFxfYnVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHZpcnR1
YWxcX2J1cyBmaWVsZCBvZiB0aGUgZ2l2ZW4gRFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfYnVzIChzZXNzaW9uX2lk
IHMsIERQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgRFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92aXJ0dWFsXF9zbG90fQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHZpcnR1YWxcX3Nsb3QgZmllbGQgb2YgdGhlIGdpdmVuIERQQ0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF92
aXJ0dWFsX3Nsb3QgKHNlc3Npb25faWQgcywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1p
bnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1
YWxcX2Z1bmN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdmlydHVhbFxfZnVuYyBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfZnVuYyAoc2Vzc2lvbl9pZCBzLCBEUENJIHJl
ZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IERQQ0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfdmlydHVhbFxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSB2aXJ0dWFsXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBEUENJLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdmlydHVhbF9u
YW1lIChzZXNzaW9uX2lkIHMsIERQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1DcmVhdGUgYSBuZXcgRFBDSSBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMg
aGFuZGxlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IChEUENJIHJlZikgY3JlYXRlIChzZXNzaW9uX2lkIHMsIERQQ0kgcmVjb3JkIGFyZ3MpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFBDSSByZWNv
cmQgfSAmIGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1EUENJIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkgY3Jl
YXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1EZXN0cm95IHRoZSBzcGVjaWZpZWQgRFBDSSBpbnN0YW5jZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQg
cywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0
byB0aGUgRFBDSSBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoRFBDSSByZWYpIGdldF9ieV91
dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2Jq
ZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotRFBDSSByZWYKLX0K
LQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3Jk
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVu
dCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gRFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoRFBDSSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQg
cywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1EUENJIHJlY29yZAotfQotCi0KLWFsbCBm
aWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBQ
UENJfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczogUFBDSX0KLVxiZWdpbntsb25ndGFi
bGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17
TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBQUENJfSBcXAotXG11bHRpY29sdW1uezF9
e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVt
IEEKLXBoeXNpY2FsIFBDSSBkZXZpY2UufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5
cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5j
ZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBob3N0fSAmIGhvc3QgcmVm
ICYgIHRoZSBwaHlzaWNhbCBtYWNoaW5lIHRvIHdoaWNoIHRoaXMgUFBDSSBpcyBjb25uZWN0ZWQg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgZG9tYWlufSAmIGludCAmIHRo
ZSBkb21haW4gbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGJ1
c30gJiBpbnQgJiB0aGUgYnVzIG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBzbG90fSAmIGludCAmIHRoZSBzbG90IG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCBmdW5jfSAmIGludCAmIHRoZSBmdW5jIG51bWJlciBcXAotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBuYW1lfSAmIHN0cmluZyAmIHRoZSBQQ0kg
bmFtZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2ZW5kb3JcX2lkfSAm
IGludCAmIHRoZSB2ZW5kb3IgSUQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgdmVuZG9yXF9uYW1lfSAmIHN0cmluZyAmIHRoZSB2ZW5kb3IgbmFtZSBcXAotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBkZXZpY2VcX2lkfSAmIGludCAmIHRoZSBkZXZpY2Ug
SUQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgZGV2aWNlXF9uYW1lfSAm
IHN0cmluZyAmIHRoZSBkZXZpY2UgbmFtZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCByZXZpc2lvblxfaWR9ICYgaW50ICYgdGhlIHJldmlzaW9uIElEIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGNsYXNzXF9jb2RlfSAmIGludCAmIHRoZSBjbGFz
cyBjb2RlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGNsYXNzXF9uYW1l
fSAmIHN0cmluZyAmIHRoZSBjbGFzcyBuYW1lIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IHN1YnN5c3RlbVxfdmVuZG9yXF9pZH0gJiBpbnQgJiB0aGUgc3Vic3lzdGVtIHZl
bmRvciBJRCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBzdWJzeXN0ZW1c
X3ZlbmRvclxfbmFtZX0gJiBzdHJpbmcgJiB0aGUgc3Vic3lzdGVtIHZlbmRvciBuYW1lIFxcCi0k
XG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN1YnN5c3RlbVxfaWR9ICYgaW50ICYg
dGhlIHN1YnN5c3RlbSBJRCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBz
dWJzeXN0ZW1cX25hbWV9ICYgc3RyaW5nICYgdGhlIHN1YnN5c3RlbSBuYW1lIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGRyaXZlcn0gJiBzdHJpbmcgJiB0aGUgZHJpdmVy
IG5hbWUgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29j
aWF0ZWQgd2l0aCBjbGFzczogUFBDSX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Fs
bH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFBQQ0lzIGtu
b3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gKChQUENJIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi0oUFBDSSByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9i
amVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3V1aWQgKHNlc3Npb25faWQg
cywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2hvc3R9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgaG9zdCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9ob3N0IChzZXNz
aW9uX2lkIHMsIFBQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdCByZWYKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RvbWFpbn0KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBkb21haW4gZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9kb21h
aW4gKHNlc3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J1c30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBidXMgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9idXMgKHNl
c3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Nsb3R9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgc2xvdCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Nsb3QgKHNlc3Np
b25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Z1bmN9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgZnVuYyBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X2Z1bmMgKHNlc3Npb25f
aWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX25hbWV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgbmFtZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWUgKHNlc3Npb25f
aWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZlbmRvclxfaWR9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgdmVuZG9yXF9pZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Zl
bmRvcl9pZCAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmVuZG9yXF9u
YW1lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHZlbmRvclxfbmFtZSBmaWVsZCBvZiB0
aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSBzdHJpbmcgZ2V0X3ZlbmRvcl9uYW1lIChzZXNzaW9uX2lkIHMsIFBQQ0kgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9kZXZpY2VcX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGRldmljZVxfaWQgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBT
aWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9kZXZpY2VfaWQgKHNlc3Npb25f
aWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RldmljZVxfbmFtZX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBkZXZpY2VcX25hbWUgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdl
dF9kZXZpY2VfbmFtZSAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
cmV2aXNpb25cX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJldmlzaW9uXF9pZCBm
aWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3JldmlzaW9uX2lkIChzZXNzaW9uX2lkIHMsIFBQQ0kg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgUFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9jbGFzc1xfY29kZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBjbGFzc1xfY29kZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X2NsYXNzX2NvZGUgKHNl
c3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2NsYXNzXF9uYW1lfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1HZXQgdGhlIGNsYXNzXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBQUENJ
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmlu
ZyBnZXRfY2xhc3NfbmFtZSAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfc3Vic3lzdGVtXF92ZW5kb3JcX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN1
YnN5c3RlbVxfdmVuZG9yXF9pZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3N1YnN5c3RlbV92
ZW5kb3JfaWQgKHNlc3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N1YnN5c3Rl
bVxfdmVuZG9yXF9uYW1lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN1YnN5c3RlbVxf
dmVuZG9yXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBQUENJLgotCi0gXG5vaW5kZW50IHtcYmYg
U2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfc3Vic3lzdGVtX3ZlbmRv
cl9uYW1lIChzZXNzaW9uX2lkIHMsIFBQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFBDSSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5n
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9zdWJzeXN0
ZW1cX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN1YnN5c3RlbVxfaWQgZmllbGQg
b2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gaW50IGdldF9zdWJzeXN0ZW1faWQgKHNlc3Npb25faWQgcywgUFBDSSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3N1YnN5c3RlbVxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBzdWJzeXN0ZW1cX25hbWUgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9zdWJzeXN0
ZW1fbmFtZSAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmlu
ZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfZHJpdmVy
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGRyaXZlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
UFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X2RyaXZlciAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUg
UFBDSSBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUFBDSSByZWYpIGdldF9ieV91dWlkIChz
ZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRv
IHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUFBDSSByZWYKLX0KLQotCi1y
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0
ZSBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoUFBDSSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgUFBD
SSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1QUENJIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
ZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBEU0NTSX0K
LVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IERTQ1NJfQotXGJlZ2lue2xvbmd0YWJsZX17
fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xsfXtOYW1l
fSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIERTQ1NJfSBcXAotXG11bHRpY29sdW1uezF9e3xs
fXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEK
LWhhbGYtdmlydHVhbGl6ZWQgU0NTSSBkZXZpY2UufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVs
ZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJl
ZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0kICYgIHtcdHQgVk19ICYgVk0g
cmVmICYgdGhlIHZpcnR1YWwgbWFjaGluZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0k
ICYgIHtcdHQgUFNDU0l9ICYgUFNDU0kgcmVmICYgdGhlIHBoeXNpY2FsIFNDU0kgZGV2aWNlIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IEhCQX0gJiBEU0NTSVxfSEJBIHJl
ZiAmIHRoZSBoYWxmLXZpcnR1YWxpemVkIFNDU0kgaG9zdCBidXMgYWRhcHRlciBcXAotJFxtYXRo
aXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9ob3N0fSAmIGludCAmIHRoZSB2
aXJ0dWFsIGhvc3QgbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0
IHZpcnR1YWxcX2NoYW5uZWx9ICYgaW50ICYgdGhlIHZpcnR1YWwgY2hhbm5lbCBudW1iZXIgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdmlydHVhbFxfdGFyZ2V0fSAmIGlu
dCAmIHRoZSB2aXJ0dWFsIHRhcmdldCBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgdmlydHVhbFxfbHVufSAmIGludCAmIHRoZSB2aXJ0dWFsIGxvZ2ljYWwgdW5p
dCBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc3R9JCAmICB7XHR0IHZpcnR1YWxc
X0hDVEx9ICYgc3RyaW5nICYgdGhlIHZpcnR1YWwgSENUTCBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBydW50aW1lXF9wcm9wZXJ0aWVzfSAmIChzdHJpbmcgJFxyaWdodGFy
cm93JCBzdHJpbmcpIE1hcCAmIERldmljZSBydW50aW1lIHByb3BlcnRpZXMgXFwKLVxobGluZQot
XGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBjbGFzczog
RFNDU0l9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBEU0NTSXMga25vd24gdG8gdGhlIHN5c3Rl
bS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKERT
Q1NJIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
RFNDU0kgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmll
bGQgb2YgdGhlIGdpdmVuIERTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmll
bGQgb2YgdGhlIGdpdmVuIERTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IChWTSByZWYpIGdldF9WTSAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1WTSByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX1BTQ1NJfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBT
Q1NJIGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUFNDU0kgcmVmKSBnZXRfUFNDU0kgKHNlc3Npb25faWQg
cywgRFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUFNDU0kgcmVmCi19Ci0KLQotdmFsdWUg
b2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9IQkF9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgSEJBIGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoRFNDU0lfSEJBIHJlZikgZ2V0X0hC
QSAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1EU0NTSVxfSEJB
IHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmly
dHVhbFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2aXJ0dWFsXF9ob3N0IGZp
ZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBEU0NT
SSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1YWxcX2NoYW5uZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdmlydHVhbFxfY2hhbm5lbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gRFNDU0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF92
aXJ0dWFsX2NoYW5uZWwgKHNlc3Npb25faWQgcywgRFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBEU0NTSSByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92
aXJ0dWFsXF90YXJnZXR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdmlydHVhbFxfdGFy
Z2V0IGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfdGFyZ2V0IChzZXNzaW9uX2lk
IHMsIERTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgRFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmlydHVhbFxfbHVufQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHZpcnR1YWxcX2x1biBmaWVsZCBvZiB0aGUgZ2l2ZW4gRFNDU0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF92
aXJ0dWFsX2x1biAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1p
bnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1
YWxcX0hDVEx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdmlydHVhbFxfSENUTCBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFNDU0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF92aXJ0dWFsX0hDVEwgKHNlc3Npb25faWQgcywgRFND
U0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBEU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ydW50aW1lXF9wcm9wZXJ0aWVzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIHJ1bnRpbWVcX3Byb3BlcnRpZXMgZmllbGQgb2YgdGhlIGdpdmVu
IERTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfcnVudGltZV9wcm9wZXJ0aWVzIChzZXNzaW9u
X2lkIHMsIERTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJv
dyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5ldyBEU0NTSSBpbnN0YW5j
ZSwgYW5kIGNyZWF0ZSBhIG5ldyBEU0NTSVxfSEJBIGluc3RhbmNlIGFzIG5lZWRlZAotdGhhdCB0
aGUgbmV3IERTQ1NJIGluc3RhbmNlIGNvbm5lY3RzIHRvLCBhbmQgcmV0dXJuIHRoZSBoYW5kbGUg
b2YgdGhlIG5ldwotRFNDU0kgaW5zdGFuY2UuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKERTQ1NJIHJlZikgY3JlYXRlIChzZXNzaW9uX2lkIHMsIERT
Q1NJIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgRFNDU0kgcmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9yIGFyZ3Vt
ZW50cyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotRFNDU0kgcmVmCi19Ci0KLQotcmVm
ZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ry
b3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ryb3kgdGhlIHNwZWNpZmllZCBEU0NTSSBpbnN0
YW5jZSwgYW5kIGRlc3Ryb3kgYSBEU0NTSVxfSEJBIGluc3RhbmNlIGFzCi1uZWVkZWQgdGhhdCB0
aGUgc3BlY2lmaWVkIERTQ1NJIGluc3RhbmNlIGNvbm5lY3RzIHRvLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9p
ZCBzLCBEU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5j
ZSB0byB0aGUgRFNDU0kgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKERTQ1NJIHJlZikgZ2V0
X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9m
IG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLURTQ1NJIHJl
ZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9y
ZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBj
dXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoRFNDU0kgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNz
aW9uX2lkIHMsIERTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLURTQ1NJIHJlY29yZAotfQot
Ci0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9u
e0NsYXNzOiBEU0NTSVxfSEJBfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczogRFNDU0lc
X0hCQX0KLVxiZWdpbntsb25ndGFibGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUK
LVxtdWx0aWNvbHVtbnsxfXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBEU0NT
SVxfSEJBfSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1
bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEKLWhhbGYtdmlydHVhbGl6ZWQgU0NTSSBob3N0
IGJ1cyBhZGFwdGVyLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3Jp
cHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlk
fSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e2luc3R9JCAmICB7XHR0IFZNfSAmIFZNIHJlZiAmIHRoZSB2aXJ0dWFs
IG1hY2hpbmUgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgUFNDU0lcX0hC
QXN9ICYgKFBTQ1NJXF9IQkEgcmVmKSBTZXQgJiB0aGUgcGh5c2ljYWwgU0NTSSBIQkFzIFxcCi0k
XG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IERTQ1NJc30gJiAoRFNDU0kgcmVmKSBT
ZXQgJiB0aGUgaGFsZi12aXJ0dWFsaXplZCBTQ1NJIGRldmljZXMgd2hpY2ggYXJlIGNvbm5lY3Rl
ZCB0byB0aGlzIERTQ1NJIEhCQSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0kICYgIHtc
dHQgdmlydHVhbFxfaG9zdH0gJiBpbnQgJiB0aGUgdmlydHVhbCBob3N0IG51bWJlciBcXAotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0kICYgIHtcdHQgYXNzaWdubWVudFxfbW9kZX0gJiBzdHJp
bmcgJiB0aGUgYXNzaWdubWVudCBtb2RlIG9mIHRoZSBoYWxmLXZpcnR1YWxpemVkIFNDU0kgZGV2
aWNlcyB3aGljaCBhcmUgY29ubmVjdGVkIHRvIHRoaXMgRFNDU0kgSEJBIFxcCi1caGxpbmUKLVxl
bmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IERT
Q1NJXF9IQkF9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBEU0NTSSBIQkFzIGtub3duIHRvIHRo
ZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKChEU0NTSV9IQkEgcmVmKSBTZXQpIGdldF9hbGwgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJh
dGltfQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLShEU0NTSVxfSEJBIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwg
b2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSSBIQkEuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNz
aW9uX2lkIHMsIERTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQKLXN0cmluZwotfQotCi0K
LXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfVk19Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2YgdGhlIGdpdmVuIERTQ1NJIEhCQS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVmKSBnZXRf
Vk0gKHNlc3Npb25faWQgcywgRFNDU0lfSEJBIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0lcX0hCQSByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
Vk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9Q
U0NTSVxfSEJBc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQU0NTSVxfSEJBcyBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICgoUFNDU0lfSEJBIHJlZikgU2V0KSBnZXRfUFNDU0lfSEJBcyAo
c2Vzc2lvbl9pZCBzLCBEU0NTSV9IQkEgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBEU0NTSVxfSEJBIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oUFND
U0lcX0hCQSByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfRFNDU0lzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIERTQ1NJcyBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICgoRFNDU0kgcmVmKSBTZXQpIGdldF9EU0NTSXMgKHNlc3Npb25f
aWQgcywgRFNDU0lfSEJBIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0lcX0hCQSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKERTQ1NJIHJlZikg
U2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92aXJ0
dWFsXF9ob3N0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHZpcnR1YWxcX2hvc3QgZmll
bGQgb2YgdGhlIGdpdmVuIERTQ1NJIEhCQS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBE
U0NTSV9IQkEgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEU0NTSVxfSEJBIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Fzc2lnbm1lbnRcX21vZGV9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgYXNzaWdubWVudFxfbW9kZSBmaWVsZCBvZiB0aGUgZ2l2
ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHN0cmluZyBnZXRfYXNzaWdubWVudF9tb2RlIChzZXNzaW9uX2lkIHMsIERTQ1NJX0hC
QSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IERTQ1NJXF9IQkEgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3Qg
XFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNyZWF0ZX0KLQote1xiZiBPdmVydmlldzp9IAotQ3Jl
YXRlIGEgbmV3IERTQ1NJXF9IQkEgaW5zdGFuY2UsIGFuZCBjcmVhdGUgbmV3IERTQ1NJIGluc3Rh
bmNlcyBvZgotaGFsZi12aXJ0dWFsaXplZCBTQ1NJIGRldmljZXMgd2hpY2ggYXJlIGNvbm5lY3Rl
ZCB0byB0aGUgaGFsZi12aXJ0dWFsaXplZAotU0NTSSBob3N0IGJ1cyBhZGFwdGVyLCBhbmQgcmV0
dXJuIHRoZSBoYW5kbGUgb2YgdGhlIG5ldyBEU0NTSVxfSEJBIGluc3RhbmNlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChEU0NTSV9IQkEgcmVmKSBj
cmVhdGUgKHNlc3Npb25faWQgcywgRFNDU0lfSEJBIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0lcX0hCQSByZWNvcmQg
fSAmIGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1EU0NTSVxfSEJBIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkg
Y3JlYXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3
On0gCi1EZXN0cm95IHRoZSBzcGVjaWZpZWQgRFNDU0lcX0hCQSBpbnN0YW5jZSwgYW5kIGRlc3Ry
b3kgRFNDU0kgaW5zdGFuY2VzIG9mCi1oYWxmLXZpcnR1YWxpemVkIFNDU0kgZGV2aWNlcyB3aGlj
aCBhcmUgY29ubmVjdGVkIHRvIHRoZSBoYWxmLXZpcnR1YWxpemVkIFNDU0kKLWhvc3QgYnVzIGFk
YXB0ZXIuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
dm9pZCBkZXN0cm95IChzZXNzaW9uX2lkIHMsIERTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJXF9IQkEgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBEU0NTSVxfSEJBIGluc3Rh
bmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IChEU0NTSV9IQkEgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lv
bl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVy
biBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotRFNDU0lcX0hCQSByZWYKLX0KLQotCi1y
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0
ZSBvZiB0aGUgZ2l2ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IChEU0NTSV9IQkEgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9u
X2lkIHMsIERTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJXF9IQkEgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLURTQ1NJXF9IQkEg
cmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFn
ZQotXHNlY3Rpb257Q2xhc3M6IFBTQ1NJfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczog
UFNDU0l9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5l
Ci1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgUFND
U0l9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnsz
fXtsfH17XHBhcmJveHsxMWNtfXtcZW0gQQotcGh5c2ljYWwgU0NTSSBkZXZpY2UufX0gXFwKLVxo
bGluZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlk
ZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBob3N0fSAmIGhvc3QgcmVmICYgIHRoZSBwaHlzaWNhbCBtYWNoaW5lIHRvIHdoaWNo
IHRoaXMgUFNDU0kgaXMgY29ubmVjdGVkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IEhCQX0gJiBQU0NTSVxfSEJBIHJlZiAmICB0aGUgcGh5c2ljYWwgU0NTSSBob3N0IGJ1
cyBhZGFwdGVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHBoeXNpY2Fs
XF9ob3N0fSAmIGludCAmIHRoZSBwaHlzaWNhbCBob3N0IG51bWJlciBcXAotJFxtYXRoaXR7Uk99
X1xtYXRoaXR7cnVufSQgJiAge1x0dCBwaHlzaWNhbFxfY2hhbm5lbH0gJiBpbnQgJiB0aGUgcGh5
c2ljYWwgY2hhbm5lbCBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgcGh5c2ljYWxcX3RhcmdldH0gJiBpbnQgJiB0aGUgcGh5c2ljYWwgdGFyZ2V0IG51bWJlciBc
XAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBwaHlzaWNhbFxfbHVufSAmIGlu
dCAmIHRoZSBwaHlzaWNhbCBsb2dpY2FsIHVuaXQgbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IHBoeXNpY2FsXF9IQ1RMfSAmIHN0cmluZyAmIHRoZSBwaHlzaWNh
bCBIQ1RMIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHZlbmRvclxfbmFt
ZX0gJiBzdHJpbmcgJiB0aGUgdmVuZG9yIG5hbWUgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgbW9kZWx9ICYgc3RyaW5nICYgdGhlIG1vZGVsIFxcCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IHR5cGVcX2lkfSAmIGludCAmIHRoZSBTQ1NJIHR5cGUgSUQg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdHlwZX0gJiBzdHJpbmcgJiAg
dGhlIFNDU0kgdHlwZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBkZXZc
X25hbWV9ICYgc3RyaW5nICYgdGhlIFNDU0kgZGV2aWNlIG5hbWUgKGUuZy4gc2RhIG9yIHN0MCkg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc2dcX25hbWV9ICYgc3RyaW5n
ICYgdGhlIFNDU0kgZ2VuZXJpYyBkZXZpY2UgbmFtZSAoZS5nLiBzZzApIFxcCi0kXG1hdGhpdHtS
T31fXG1hdGhpdHtydW59JCAmICB7XHR0IHJldmlzaW9ufSAmIHN0cmluZyAmIHRoZSByZXZpc2lv
biBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBzY3NpXF9pZH0gJiBzdHJp
bmcgJiB0aGUgU0NTSSBJRCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBz
Y3NpXF9sZXZlbH0gJiBpbnQgJiB0aGUgU0NTSSBsZXZlbCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0
YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBQU0NTSX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0
dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFBTQ1NJcyBrbm93biB0byB0aGUgc3lzdGVtLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoUFNDU0kgcmVmKSBT
ZXQpIGdldF9hbGwgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShQU0NTSSByZWYp
IFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUg
Z2l2ZW4gUFNDU0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFNDU0kgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBob3N0IGZpZWxkIG9m
IHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9ob3N0IChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9IQkF9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgSEJB
IGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoUFNDU0lfSEJBIHJlZikgZ2V0X0hCQSAoc2Vzc2lvbl9pZCBz
LCBQU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFBTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1QU0NTSVxfSEJBIHJlZgotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcGh5c2ljYWxcX2hvc3R9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgcGh5c2ljYWxcX2hvc3QgZmllbGQgb2YgdGhlIGdp
dmVuIFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IGludCBnZXRfcGh5c2ljYWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBQU0NTSSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJ
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3BoeXNpY2FsXF9jaGFubmVsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IHBoeXNpY2FsXF9jaGFubmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3BoeXNpY2FsX2No
YW5uZWwgKHNlc3Npb25faWQgcywgUFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQU0NTSSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9waHlzaWNhbFxf
dGFyZ2V0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF90YXJnZXQgZmll
bGQgb2YgdGhlIGdpdmVuIFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IGludCBnZXRfcGh5c2ljYWxfdGFyZ2V0IChzZXNzaW9uX2lkIHMsIFBT
Q1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgUFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
[base64-encoded MIME attachment fragment: part of a patch against the LaTeX Xen-API reference. Decoded, this chunk is a unified diff in which every line is a deletion, removing the following documentation sections:

 - Class PSCSI (continuation) -- RPCs: get_physical_lun, get_physical_HCTL,
   get_vendor_name, get_model, get_type_id, get_type, get_dev_name,
   get_sg_name, get_revision, get_scsi_id, get_scsi_level, get_by_uuid,
   get_record
 - Class PSCSI_HBA ("A physical SCSI host bus adapter") -- fields: uuid,
   host, physical_host, PSCSIs; RPCs: get_all, get_uuid, get_host,
   get_physical_host, get_PSCSIs, get_by_uuid, get_record
 - Class user ("A user of the system") -- fields: uuid, short_name,
   fullname; RPCs: get_uuid, get_short_name, get_fullname, set_fullname,
   create, destroy, get_by_uuid, get_record
 - Class XSPolicy ("A Xen Security Policy") -- fields: uuid, repr
   (XML representation), type (xs_type), flags (xs_instantiationflags);
   class semantics; datatypes xs_type (XS_POLICY_ACM),
   xs_instantiationflags (XS_INST_NONE, XS_INST_BOOT, XS_INST_LOAD) and
   xs_policystate (xserr, xs_ref, repr, type, flags, version, errors);
   security-label format (policy type:policy name:label, e.g.
   ACM:xm-test:blue); RPCs: get_xstype, set_xspolicy, reset_xspolicy,
   get_xspolicy, rm_xsbootpolicy, get_labeled_resources,
   set_resource_label, get_resource_label, can_run (chunk ends mid-RPC)]
IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGNhbl9ydW4gKHNlc3Npb25f
aWQgcywgc3RyaW5nIHNlY3VyaXR5X2xhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBzZWN1cml0eVxfbGFiZWwgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotaW50Ci19Ci0K
LQotRXJyb3IgY29kZSBpbmRpY2F0aW5nIHdoZXRoZXIgYSBWTSB3aXRoIHRoZSBnaXZlbiBzZWN1
cml0eSBsYWJlbCBjb3VsZCBydW4uCi1JZiB6ZXJvLCBpdCBjYW4gcnVuLgotCi1cdnNwYWNlezAu
M2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBTRUNVUklU
WVxfRVJST1J9Ci0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBP
dmVydmlldzp9Ci1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgWFNQb2xpY2llcyBrbm93biB0byB0
aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRp
bX0gKChYU1BvbGljeSByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0
aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0K
LXtcdHQKLShYU1BvbGljeSByZWYpIFNldAotfQotCi0KLUEgbGlzdCBvZiBhbGwgdGhlIElEcyBv
ZiBhbGwgdGhlIFhTUG9saWNpZXMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fQotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBYU1BvbGljeS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRf
dXVpZCAoc2Vzc2lvbl9pZCBzLCBYU1BvbGljeSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFhTUG9saWN5IHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJp
bmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29y
ZH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgYSByZWNvcmQgb2YgdGhlIHJlZmVyZW5jZWQgWFNQ
b2xpY3kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAo
WFNQb2xpY3kgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIHhzX3JlZiB4c3BvbGlj
eSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHhz
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9Ci17XHR0Ci1YU1BvbGljeSByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9i
amVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXG5l
d3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBBQ01Qb2xpY3l9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9y
IGNsYXNzOiBBQ01Qb2xpY3l9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0
aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9
e2x8fXtcYmYgQUNNUG9saWN5fSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0g
JiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEFuIEFDTSBTZWN1cml0eSBQ
b2xpY3l9fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0aW9uIFxc
Ci1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJp
bmcgJiB1bmlxdWUgaWRlbnRpZmllciAvIG9iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JX
fSQgICAgICAgICAgICAgICYgIHtcdHQgcmVwcn0gJiBzdHJpbmcgJiByZXByZXNlbnRhdGlvbiBv
ZiBwb2xpY3ksIGluIFhNTCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB0
eXBlfSAmIHhzXF90eXBlICYgdHlwZSBvZiB0aGUgcG9saWN5IFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmIHtcdHQgZmxhZ3N9ICYgeHNcX2luc3RhbnRpYXRpb25mbGFncyAmIHBvbGlj
eQotc3RhdHVzIGZsYWdzIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotCi1cc3Vic2VjdGlv
bntTdHJ1Y3R1cmUgYW5kIGRhdGF0eXBlcyBvZiBjbGFzczogQUNNUG9saWN5fQotCi1cdnNwYWNl
ezAuNWNtfQotVGhlIGZvbGxvd2luZyBkYXRhIHN0cnVjdHVyZXMgYXJlIHVzZWQ6Ci0KLVxiZWdp
bntsb25ndGFibGV9e3xsfGx8bHx9Ci1caGxpbmUKLXtcdHQgUklQIGFjbVxfcG9saWN5aGVhZGVy
fSAmIHR5cGUgJiBtZWFuaW5nIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBwb2xpY3lu
YW1lfSAgICYgc3RyaW5nICYgbmFtZSBvZiB0aGUgcG9saWN5IFxcCi1caHNwYWNlezAuNWNtfXtc
dHQgcG9saWN5dXJsIH0gICAmIHN0cmluZyAmIFVSTCBvZiB0aGUgcG9saWN5IFxcCi1caHNwYWNl
ezAuNWNtfXtcdHQgZGF0ZX0gICAgICAgICAmIHN0cmluZyAmIGRhdGEgb2YgdGhlIHBvbGljeSBc
XAotXGhzcGFjZXswLjVjbX17XHR0IHJlZmVyZW5jZX0gICAgJiBzdHJpbmcgJiByZWZlcmVuY2Ug
b2YgdGhlIHBvbGljeSBcXAotXGhzcGFjZXswLjVjbX17XHR0IG5hbWVzcGFjZXVybH0gJiBzdHJp
bmcgJiBuYW1lc3BhY2V1cmwgb2YgdGhlIHBvbGljeSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHZl
cnNpb259ICAgICAgJiBzdHJpbmcgJiB2ZXJzaW9uIG9mIHRoZSBwb2xpY3kgXFwKLVxobGluZQot
XGVuZHtsb25ndGFibGV9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2hlYWRlcn0KLQote1xiZiBP
dmVydmlldzp9Ci1HZXQgdGhlIHJlZmVyZW5jZWQgcG9saWN5J3MgaGVhZGVyIGluZm9ybWF0aW9u
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gYWNtX3Bv
bGljeWhlYWRlciBnZXRfaGVhZGVyIChzZXNzaW9uX2lkIHMsIHhzIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgeHMgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LWFjbVxfcG9saWN5aGVhZGVyCi19Ci0KLQotVGhlIHBvbGljeSdzIGhlYWRlciBpbmZvcm1hdGlv
bi4KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3htbH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhl
IFhNTCByZXByZXNlbnRhdGlvbiBvZiB0aGUgZ2l2ZW4gcG9saWN5LgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9YTUwgKHNlc3Npb25f
aWQgcywgeHMgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCB4cyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotc3RyaW5nCi19Ci0KLQotWE1MIHJlcHJlc2VudGF0
aW9uIG9mIHRoZSByZWZlcmVuY2VkIHBvbGljeQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWFwfQot
Ci17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgbWFwcGluZyBpbmZvcm1hdGlvbiBvZiB0aGUgZ2l2
ZW4gcG9saWN5LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRp
bX0gc3RyaW5nIGdldF9tYXAgKHNlc3Npb25faWQgcywgeHMgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB4cyByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotc3Ry
aW5nCi19Ci0KLQotTWFwcGluZyBpbmZvcm1hdGlvbiBvZiB0aGUgcmVmZXJlbmNlZCBwb2xpY3ku
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9iaW5hcnl9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRo
ZSBiaW5hcnkgcG9saWN5IHJlcHJlc2VudGF0aW9uIG9mIHRoZSByZWZlcmVuY2VkIHBvbGljeS4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBn
ZXRfYmluYXJ5IChzZXNzaW9uX2lkIHMsIHhzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgeHMgcmVmIH0gJiBzZWxmICYgcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQot
Ci0KLUJhc2U2NC1lbmNvZGVkIHJlcHJlc2VudGF0aW9uIG9mIHRoZSBiaW5hcnkgcG9saWN5Lgot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfZW5mb3JjZWRcX2JpbmFyeX0KLQote1xiZiBPdmVydmlldzp9
Ci1HZXQgdGhlIGJpbmFyeSBwb2xpY3kgcmVwcmVzZW50YXRpb24gb2YgdGhlIGN1cnJlbnRseSBl
bmZvcmNlZCBBQ00gcG9saWN5LgotSW4gY2FzZSB0aGUgZGVmYXVsdCBwb2xpY3kgaXMgbG9hZGVk
IGluIHRoZSBoeXBlcnZpc29yLCBhIHBvbGljeSBtYXkgYmUKLW1hbmFnZWQgYnkgeGVuZCB0aGF0
IGlzIG5vdCB5ZXQgbG9hZGVkIGludG8gdGhlIGh5cGVydmlzb3IuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2VuZm9yY2VkX2JpbmFy
eSAoc2Vzc2lvbl9pZCBzLCB4cyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHhzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcKLX0KLQotCi1CYXNl
NjQtZW5jb2RlZCByZXByZXNlbnRhdGlvbiBvZiB0aGUgYmluYXJ5IHBvbGljeS4KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX1ZNXF9zc2lkcmVmfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgQUNN
IHNzaWRyZWYgb2YgdGhlIGdpdmVuIHZpcnR1YWwgbWFjaGluZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfVk1fc3NpZHJlZiAoc2Vz
c2lvbl9pZCBzLCB2bSByZWYgdm0pXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCB2bSByZWYgfSAmIHZtICYgcmVmZXJlbmNlIHRvIGEgdmFsaWQgVk0g
XFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLWludAotfQotCi0KLVRoZSBzc2lkcmVmIG9mIHRo
ZSBnaXZlbiB2aXJ0dWFsIG1hY2hpbmUuCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtc
YmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICB7XHR0IEhBTkRMRVxfSU5WQUxJRCwgVk1cX0JB
RFxfUE9XRVJcX1NUQVRFLCBTRUNVUklUWVxfRVJST1J9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X2FsbH0KLQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgQUNNUG9s
aWNpZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19ICgoQUNNUG9saWN5IHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9u
X2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fQote1x0dAotKEFDTVBvbGljeSByZWYpIFNldAotfQotCi0KLUEgbGlz
dCBvZiBhbGwgdGhlIElEcyBvZiBhbGwgdGhlIEFDTVBvbGljaWVzCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUg
Z2l2ZW4gQUNNUG9saWN5LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2
ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIEFDTVBvbGljeSByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IEFD
TVBvbGljeSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fQote1x0dAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IGEgcmVjb3Jk
IG9mIHRoZSByZWZlcmVuY2VkIEFDTVBvbGljeS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9Ci1cYmVnaW57dmVyYmF0aW19IChYU1BvbGljeSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Np
b25faWQgcywgeHNfcmVmIHhzcG9saWN5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgeHMgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLVhTUG9saWN5IHJlY29yZAotfQotCi0K
LWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFzczog
ZGVidWd9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBkZWJ1Z30KLXtcYmYgQ2xhc3Mg
ZGVidWcgaGFzIG5vIGZpZWxkcy59Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBj
bGFzczogZGVidWd9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBkZWJ1ZyByZWNvcmRzIGtub3du
IHRvIHRoZSBzeXN0ZW0KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoKGRlYnVnIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oZGVidWcgcmVmKSBTZXQKLX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRoZSBJRHMg
b2YgYWxsIHRoZSBkZWJ1ZyByZWNvcmRzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+cmV0dXJuXF9mYWlsdXJl
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYW4gQVBJICdzdWNjZXNzZnVsJyBmYWlsdXJl
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQg
cmV0dXJuX2ZhaWx1cmUgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0K
LQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0
ZSBhIG5ldyBkZWJ1ZyBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFuZGxlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChkZWJ1ZyByZWYpIGNyZWF0
ZSAoc2Vzc2lvbl9pZCBzLCBkZWJ1ZyByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBkZWJ1ZyByZWNvcmQgfSAmIGFyZ3MgJiBB
bGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1k
ZWJ1ZyByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG5ld2x5IGNyZWF0ZWQgb2JqZWN0Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+ZGVzdHJveX0KLQote1xiZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUg
c3BlY2lmaWVkIGRlYnVnIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBkZWJ1ZyByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBkZWJ1ZyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
YnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUgZGVi
dWcgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xiZiBT
aWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGRlYnVnIHJlZikgZ2V0X2J5X3V1aWQgKHNl
c3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8g
cmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1kZWJ1ZyByZWYKLX0KLQotCi1y
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0
ZSBvZiB0aGUgZ2l2ZW4gZGVidWcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKGRlYnVnIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBk
ZWJ1ZyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBkZWJ1ZyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotZGVidWcgcmVjb3JkCi19Ci0KLQotYWxsIGZp
ZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBjcHVcX3Bvb2x9Ci1cc3Vi
c2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBjcHVcX3Bvb2x9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8
bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9
ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgY3B1XF9wb29sfSBcXAotXG11bHRpY29sdW1uezF9
e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVt
IEEgQ1BVIHBvb2x9fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0
aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0g
JiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllciAvIG9iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0
aGl0e1JXfSQgICAgICAgICAgICAgICYgIHtcdHQgbmFtZVxfbGFiZWx9ICYgc3RyaW5nICYgbmFt
ZSBvZiBjcHVcX3Bvb2wgXFwKLSRcbWF0aGl0e1JXfSQgICAgICAgICAgICAgICYgIHtcdHQgbmFt
ZVxfZGVzY3JpcHRpb259ICYgc3RyaW5nICYgY3B1XF9wb29sIGRlc2NyaXB0aW9uIFxcCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHJlc2lkZW50XF9vbn0gJiBob3N0IHJlZiAm
IHRoZSBob3N0IHRoZSBjcHVcX3Bvb2wgaXMgY3VycmVudGx5IHJlc2lkZW50IG9uIFxcCi0kXG1h
dGhpdHtSV30kICAgICAgICAgICAgICAmICB7XHR0IGF1dG9cX3Bvd2VyXF9vbn0gJiBib29sICYg
VHJ1ZSBpZiB0aGlzIGNwdVxfcG9vbCBzaG91bGQgYmUgYWN0aXZhdGVkIGF1dG9tYXRpY2FsbHkg
YWZ0ZXIgaG9zdCBib290IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0
YXJ0ZWRcX1ZNc30gJiAoVk0gcmVmKSBTZXQgJiBsaXN0IG9mIFZNcyBjdXJyZW50bHkgc3RhcnRl
ZCBpbiB0aGlzIGNwdVxfcG9vbCBcXAotJFxtYXRoaXR7Uld9JCAgICAgICAgICAgICAgJiAge1x0
dCBuY3B1fSAmIGludGVnZXIgJiBudW1iZXIgb2YgaG9zdFxfQ1BVcyByZXF1ZXN0ZWQgZm9yIHRo
aXMgY3B1XF9wb29sIGF0IG5leHQgc3RhcnQgXFwKLSRcbWF0aGl0e1JXfSQgICAgICAgICAgICAg
ICYgIHtcdHQgc2NoZWRcX3BvbGljeX0gJiBzdHJpbmcgJiBzY2hlZHVsZXIgcG9saWN5IG9uIHRo
aXMgY3B1XF9wb29sIFxcCi0kXG1hdGhpdHtSV30kICAgICAgICAgICAgICAmICB7XHR0IHByb3Bv
c2VkXF9DUFVzfSAmIChzdHJpbmcpIFNldCAmIGxpc3Qgb2YgcHJvcG9zZWQgaG9zdFxfQ1BVcyB0
byBhc3NpZ24gYXQgbmV4dCBhY3RpdmF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IGhvc3RcX0NQVXN9ICYgKFZNIHJlZikgU2V0ICYgbGlzdCBvZiBob3N0XF9jcHVz
IGN1cnJlbnRseSBhc3NpZ25lZCB0byB0aGlzIGNwdVxfcG9vbCBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCBhY3RpdmF0ZWR9ICYgYm9vbCAmIFRydWUgaWYgdGhpcyBjcHVc
X3Bvb2wgaXMgYWN0aXZhdGVkIFxcCi0kXG1hdGhpdHtSV30kICAgICAgICAgICAgICAmICB7XHR0
IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYWRk
aXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNl
Y3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IGNwdVxfcG9vbH0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5hY3RpdmF0ZX0KLQote1xiZiBPdmVydmlldzp9Ci1BY3RpdmF0ZSB0aGUg
Y3B1XF9wb29sIGFuZCBhc3NpZ24gdGhlIGdpdmVuIENQVXMgdG8gaXQuCi1DUFVzIHNwZWNpZmll
ZCBpbiBmaWVsZCBwcm9wb3NlZFxfQ1BVcywgdGhhdCBhcmUgbm90IGV4aXN0aW5nIG9yIG5vdCBm
cmVlLCBhcmUKLWlnbm9yZWQuIElmIHZhbHVlIG9mIG5jcHUgaXMgZ3JlYXRlciB0aGFuIHRoZSBu
dW1iZXIgb2YgQ1BVcyBpbiBmaWVsZAotcHJvcG9zZWRcX0NQVXMsIGFkZGl0aW9uYWwgZnJlZSBD
UFVzIGFyZSBhc3NpZ25lZCB0byB0aGUgY3B1XF9wb29sLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBhY3RpdmF0ZSAoc2Vzc2lvbl9pZCBzLCBj
cHVfcG9vbCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0K
LQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxf
QkFEXF9TVEFURSwgSU5TVUZGSUNJRU5UXF9DUFVTLCBVTktPV05cX1NDSEVEXF9QT0xJQ1l9Ci0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotQ3JlYXRlIGEgbmV3
IGNwdVxfcG9vbCBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFuZGxlLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gKGNwdV9wb29sIHJlZikgY3JlYXRl
IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlY29yZCB9ICYgYXJn
cyAmIEFsbCBjb25zdHJ1Y3RvciBhcmd1bWVudHMgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1j
cHVcX3Bvb2wgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVj
dAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmRlYWN0aXZhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotRGVhY3Rp
dmF0ZSB0aGUgY3B1XF9wb29sIGFuZCByZWxlYXNlIGFsbCBDUFVzIGFzc2lnbmVkIHRvIGl0Lgot
VGhpcyBmdW5jdGlvbiBjYW4gb25seSBiZSBjYWxsZWQgaWYgdGhlcmUgYXJlIG5vIGRvbWFpbnMg
YWN0aXZlIGluIHRoZQotY3B1XF9wb29sLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0K
LVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkZWFjdGl2YXRlIChzZXNzaW9uX2lkIHMsIGNwdV9wb29s
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9p
bmRlbnQge1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtcdHQgUE9PTFxfQkFEXF9TVEFURX0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotRGVzdHJveSB0
aGUgc3BlY2lmaWVkIGNwdVxfcG9vbC4gVGhlIGNwdVxfcG9vbCBpcyBjb21wbGV0ZWx5IHJlbW92
ZWQgZnJvbSB0aGUKLXN5c3RlbS4KLVRoaXMgZnVuY3Rpb24gY2FuIG9ubHkgYmUgY2FsbGVkIGlm
IHRoZSBjcHVcX3Bvb2wgaXMgZGVhY3RpdmF0ZWQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQgcywgY3B1X3Bv
b2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3Qg
XFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci0KLVx2c3BhY2V7MC4zY219Ci0KLVxu
b2luZGVudCB7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBQT09MXF9CQURcX1NUQVRF
fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX2hvc3RcX0NQVVxfbGl2ZX0KLQotCi17XGJmIE92
ZXJ2aWV3On0KLUFkZCBhIGFkZGl0aW9uYWwgQ1BVIGltbWVkaWF0bHkgdG8gdGhlIGNwdVxfcG9v
bC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQg
YWRkX2hvc3RfQ1BVX2xpdmUgKHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYsIGhvc3Rf
Y3B1IHJlZiBob3N0X2NwdSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJlZiB9ICYgaG9zdFxfY3B1ICYgQ1BVIHRv
IGFkZCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0K
LQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxf
QkFEXF9TVEFURSwgSU5WQUxJRFxfQ1BVfQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5yZW1vdmVcX2hv
c3RcX0NQVVxfbGl2ZX0KLQotCi17XGJmIE92ZXJ2aWV3On0KLVJlbW92ZSBhIENQVSBpbW1lZGlh
dGx5IGZyb20gdGhlIGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1c
YmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2hvc3RfQ1BVX2xpdmUgKHNlc3Npb25faWQgcywg
Y3B1X3Bvb2wgcmVmIHNlbGYsIGhvc3RfY3B1IHJlZiBob3N0X2NwdSlcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJl
ZiB9ICYgaG9zdFxfY3B1ICYgQ1BVIHRvIHJlbW92ZSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxfQkFEXF9TVEFURSwgSU5WQUxJRFxfQ1BVLCBMQVNU
XF9DUFVcX05PVFxfUkVNT1ZFQUJMRX0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0K
LQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgY3B1IHBvb2xzIGtu
b3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lu
e3ZlcmJhdGltfSAoKGNwdV9wb29sIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVu
ZHt2ZXJiYXRpbX0KLQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKGNw
dVxfcG9vbCByZWYpIFNldAotfQotQSBsaXN0IG9mIGFsbCB0aGUgSURzIG9mIHRoZSBjcHUgcG9v
bHMuCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbFxfcmVjb3Jkc30KLQotCi17XGJmIE92ZXJ2
aWV3On0KLVJldHVybiBhIG1hcCBvZiBhbGwgdGhlIGNwdSBwb29sIHJlY29yZHMga25vd24gdG8g
dGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0
aW19ICgoKGNwdV9wb29sIHJlZikgLT4gKGNwdV9wb29sIHJlY29yZCkpIE1hcCkgZ2V0X2FsbF9y
ZWNvcmRzIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fQote1x0dAotKChjcHVcX3Bvb2wgcmVmKSAkXHJpZ2h0YXJyb3ckIChjcHVc
X3Bvb2wgcmVjb3JkKSkgTWFwCi19Ci1BIG1hcCBvZiBhbGwgdGhlIGNwdSBwb29sIHJlY29yZHMg
aW5kZXhlZCBieSBjcHUgcG9vbCByZWYuCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF9uYW1l
XF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgYWxsIHRoZSBjcHVcX3Bvb2wgaW5zdGFu
Y2VzIHdpdGggdGhlIGdpdmVuIGxhYmVsLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0K
LVxiZWdpbnt2ZXJiYXRpbX0gKChjcHVfcG9vbCByZWYpIFNldCkgZ2V0X2J5X25hbWVfbGFiZWwg
KHNlc3Npb25faWQgcywgc3RyaW5nIGxhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBsYWJlbCAmIGxhYmVsIG9mIG9iamVj
dCB0byByZXR1cm4gXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLShjcHVcX3Bvb2wgcmVmKSBT
ZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIG9iamVjdHMgd2l0aCBtYXRjaGluZyBuYW1lcwotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBjcHVcX3Bvb2wgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoY3B1X3Bv
b2wgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVp
ZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAot
Y3B1XF9wb29sIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9hY3RpdmF0ZWR9Ci0KLQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gdGhlIGFj
dGl2YXRpb24gc3RhdGUgb2YgdGhlIGNwdVxfcG9vbCBvYmplY3QuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdldF9hY3RpdmF0ZWQgKHNlc3Np
b25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1ib29sCi19Ci1SZXR1cm5z
IHtcYmYgdHJ1ZX0gaWYgY3B1XF9wb29sIGlzIGFjdGl2ZS4KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYXV0b1xfcG93ZXJcX29ufQotCi0KLXtcYmYgT3ZlcnZpZXc6fQotUmV0dXJuIHRoZSBhdXRv
IHBvd2VyIGF0dHJpYnV0ZSBvZiB0aGUgY3B1XF9wb29sIG9iamVjdC4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2F1dG9fcG93ZXJfb24g
KHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1ib29sCi19Ci1S
ZXR1cm5zIHtcYmYgdHJ1ZX0gaWYgY3B1XF9wb29sIGhhcyB0byBiZSBhY3RpdmF0ZWQgb24geGVu
ZCBzdGFydC4KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaG9zdFxfQ1BVc30KLQotCi17XGJmIE92
ZXJ2aWV3On0KLVJldHVybiB0aGUgbGlzdCBvZiBob3N0XF9jcHUgcmVmcyBhc3NpZ25lZCB0byB0
aGUgY3B1XF9wb29sIG9iamVjdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVn
aW57dmVyYmF0aW19ICgoaG9zdF9jcHUgcmVmKSBTZXQpIGdldF9ob3N0X0NQVXMgKHNlc3Npb25f
aWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci0oaG9zdFxfY3B1IHJlZikgU2V0
Ci19Ci1SZXR1cm5zIGEgbGlzdCBvZiByZWZlcmVuY2VzIG9mIGFsbCBob3N0IGNwdXMgYXNzaWdu
ZWQgdG8gdGhlIGNwdVxfcG9vbC4KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfZGVzY3Jp
cHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxk
IG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQot
XGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQg
cywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYg
T3ZlcnZpZXc6fQotR2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bv
b2wuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBzdHJp
bmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0KLXtcdHQKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfbmNwdX0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIG5jcHUgZmllbGQgb2Yg
dGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVn
aW57dmVyYmF0aW19IGludCBnZXRfbmNwdSAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNw
dVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fQote1x0dAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9wcm9wb3NlZFxfQ1BVc30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhl
IHByb3Bvc2VkXF9DUFVzIGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZykgU2V0KSBnZXRf
cHJvcG9zZWRfQ1BVcyAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQot
e1x0dAotKHN0cmluZykgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgb3Ro
ZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi1cbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkg
Z2V0X290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQot
e1x0dAotKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2Yg
dGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6
fQotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhlIGdpdmVu
IGNwdVxfcG9vbC4KLQotXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRp
bX0gKGNwdV9wb29sIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fQote1x0dAotY3B1XF9wb29sIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
b2YgdGhlIG9iamVjdC4KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Jlc2lkZW50XF9vbn0KLQote1xi
ZiBPdmVydmlldzp9Ci1HZXQgdGhlIHJlc2lkZW50XF9vbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1
XF9wb29sLgotCi1cbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAo
aG9zdCByZWYpIGdldF9yZXNpZGVudF9vbiAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNw
dVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUK
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fQote1x0dAotaG9zdCByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3NjaGVkXF9wb2xpY3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRo
ZSBzY2hlZFxfcG9saWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLVxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfc2NoZWRfcG9s
aWN5IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi1cbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N0YXJ0ZWRc
X1ZNc30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIHN0YXJ0ZWRcX1ZNcyBmaWVsZCBvZiB0
aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi1cbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lu
e3ZlcmJhdGltfSAoKFZNIHJlZikgU2V0KSBnZXRfc3RhcnRlZF9WTXMgKHNlc3Npb25faWQgcywg
Y3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLShWTSByZWYpIFNldAotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmll
dzp9Ci1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAo
c2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfYXV0b1xfcG93ZXJc
X29ufQotCi17XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUgYXV0b1xfcG93ZXJcX29uIGZpZWxkIG9m
IHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLVxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVn
aW57dmVyYmF0aW19IHZvaWQgc2V0X2F1dG9fcG93ZXJfb24gKHNlc3Npb25faWQgcywgY3B1X3Bv
b2wgcmVmIHNlbGYsIGJvb2wgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lCi17XHR0IGJvb2wgfSAmIHZhbHVlICYgbmV3IGF1dG9cX3Bv
d2VyXF9vbiB2YWx1ZSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fnNldFxfcHJvcG9zZWRcX0NQVXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotU2V0IHRoZSBw
cm9wb3NlZFxfQ1BVcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi1cbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9wcm9wb3NlZF9DUFVz
IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmLCBzdHJpbmcgU2V0IGNwdXMpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi17XHR0IHN0
cmluZyBTZXQgfSAmIGNwdXMgJiBTZXQgb2YgcHJlZmVycmVkIENQVSAobnVtYmVycykgdG8gdXNl
IFxcIFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci1cdnNwYWNlezAuM2NtfQotCi1cbm9p
bmRlbnQge1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9Ci0gICAge1x0dCBQT09MXF9CQURcX1NU
QVRFfQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTphZGRcX3RvXF9wcm9wb3NlZFxfQ1BVc30KLQote1xiZiBP
dmVydmlldzp9Ci1BZGQgYSBDUFUgKG51bWJlcikgdG8gdGhlIHByb3Bvc2VkXF9DUFVzIGZpZWxk
IG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLVxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1c
YmVnaW57dmVyYmF0aW19IHZvaWQgYWRkX3RvX3Byb3Bvc2VkX0NQVXMgKHNlc3Npb25faWQgcywg
Y3B1X3Bvb2wgcmVmIHNlbGYsIGludGVnZXIgY3B1KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0dCBpbnRlZ2VyIH0gJiBjcHUgJiBOdW1i
ZXIgb2YgQ1BVIHRvIGFkZCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotXHZzcGFj
ZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtc
dHQgUE9PTFxfQkFEXF9TVEFURX0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6cmVtb3ZlXF9mcm9tXF9wcm9w
b3NlZFxfQ1BVc30KLQote1xiZiBPdmVydmlldzp9Ci1SZW1vdmUgYSBDUFUgKG51bWJlcikgZnJv
bSB0aGUgcHJvcG9zZWRcX0NQVXMgZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCByZW1vdmVfZnJv
bV9wcm9wb3NlZF9DUFVzIChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmLCBpbnRlZ2Vy
IGNwdSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
Y3B1XF9wb29sIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGlu
ZQote1x0dCBpbnRlZ2VyIH0gJiBjcHUgJiBOdW1iZXIgb2YgQ1BVIHRvIHJlbW92ZSBcXCBcaGxp
bmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtc
YmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxfQkFEXF9TVEFURX0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnNldFxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotU2V0
IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9uYW1lX2xhYmVs
IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12
b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uY3B1fQotCi17XGJmIE92ZXJ2aWV3
On0KLVNldCB0aGUgbmNwdSBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbmNwdSAoc2Vz
c2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZiwgaW50ZWdlciB2YWx1ZSlcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0dCBpbnRlZ2VyIH0g
JiB2YWx1ZSAmIE51bWJlciBvZiBjcHVzIHRvIHVzZSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxfQkFEXF9TVEFURX0KLQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnNldFxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9Ci1TZXQgdGhlIG90aGVyXF9j
b25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X290aGVyX2NvbmZpZyAoc2Vzc2lv
bl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1hcCB2YWx1ZSlc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9w
b29sIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0
dCAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVl
IHRvIHNldCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmFkZFxfdG9cX290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotQWRkIHRo
ZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUg
Z2l2ZW4gY3B1XF9wb29sLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2
ZXJiYXRpbX0gdm9pZCBhZGRfdG9fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIGNwdV9wb29s
IHJlZiBzZWxmLCBzdHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBrZXkgJiBL
ZXkgdG8gYWRkIFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHZhbHVlICYgVmFsdWUgdG8gYWRk
IFxcIFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnJlbW92ZVxfZnJvbVxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9Ci1SZW1vdmUg
dGhlIGdpdmVuIGtleSBhbmQgaXRzIGNvcnJlc3BvbmRpbmcgdmFsdWUgZnJvbSB0aGUgb3RoZXJc
X2NvbmZpZwotZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4gIElmIHRoZSBrZXkgaXMgbm90
IGluIHRoYXQgTWFwLCB0aGVuIGRvIG5vdGhpbmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHJlbW92ZV9mcm9tX290aGVyX2NvbmZpZyAoc2Vz
c2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZiwgc3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIGtl
eSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLVxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3NjaGVkXF9wb2xpY3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fQot
U2V0IHRoZSBzY2hlZFxfcG9saWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9zY2hl
ZF9wb2xpY3kgKHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYsIHN0cmluZyBuZXdfc2No
ZWRfcG9saWN5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lCi17XHR0IHN0cmluZyB9ICYgbmV3XF9zY2hlZFxfcG9saWN5ICYgTmV3IHZhbHVlIHRv
IHNldCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotCmRpZmYgLS1naXQgYS9kb2Nz
L3hlbi1hcGkveGVuYXBpLnRleCBiL2RvY3MveGVuLWFwaS94ZW5hcGkudGV4CmRlbGV0ZWQgZmls
ZSBtb2RlIDEwMDY0NAppbmRleCBiNTliNzA2Li4wMDAwMDAwCi0tLSBhL2RvY3MveGVuLWFwaS94
ZW5hcGkudGV4CisrKyAvZGV2L251bGwKQEAgLTEsNjAgKzAsMCBAQAotJQotJSBDb3B5cmlnaHQg
KGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSUKLSUgUGVybWlzc2lvbiBpcyBncmFudGVk
IHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50IHVuZGVyCi0l
IHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlLCBWZXJzaW9u
IDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdh
cmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2VjdGlvbnMsIG5vIEZyb250LUNv
dmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRoZQotJSBsaWNl
bnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0lICJHTlUgRnJlZSBEb2N1
bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0lCi0lIEF1dGhvcnM6IEV3
YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBKb24gSGFycm9wLgotJQotCi1c
ZG9jdW1lbnRjbGFzc3tyZXBvcnR9Ci0KLVx1c2VwYWNrYWdle2E0fQotXHVzZXBhY2thZ2V7Z3Jh
cGhpY3N9Ci1cdXNlcGFja2FnZXtsb25ndGFibGV9Ci1cdXNlcGFja2FnZXtmYW5jeWhkcn0KLVx1
c2VwYWNrYWdle2h5cGVycmVmfQotXHVzZXBhY2thZ2V7YXJyYXl9Ci0KLVxzZXRsZW5ndGhcdG9w
c2tpcHswY219Ci1cc2V0bGVuZ3RoXHRvcG1hcmdpbnswY219Ci1cc2V0bGVuZ3RoXG9kZHNpZGVt
YXJnaW57MGNtfQotXHNldGxlbmd0aFxldmVuc2lkZW1hcmdpbnswY219Ci1cc2V0bGVuZ3RoXHBh
cmluZGVudHswcHR9Ci0KLSUlIFBhcmFtZXRlcnMgZm9yIGNvdmVyc2hlZXQ6Ci1caW5wdXR7eGVu
YXBpLWNvdmVyc2hlZXR9Ci0KLVxiZWdpbntkb2N1bWVudH0KLQotJSBUaGUgY292ZXJzaGVldCBp
dHNlbGYKLVxpbmNsdWRle2NvdmVyc2hlZXR9Ci0KLSUgVGhlIHJldmlzaW9uIGhpc3RvcnkKLVxp
bmNsdWRle3JldmlzaW9uLWhpc3Rvcnl9Ci0KLSUgVGFibGUgb2YgY29udGVudHMKLVx0YWJsZW9m
Y29udGVudHMKLQotCi0lIC4uLiBhbmQgb2ZmIHdlIGdvIQotCi1cY2hhcHRlcntJbnRyb2R1Y3Rp
b259Ci0KLVRoaXMgZG9jdW1lbnQgY29udGFpbnMgYSBkZXNjcmlwdGlvbiBvZiB0aGUgWGVuIE1h
bmFnZW1lbnQgQVBJLS0tYW4gaW50ZXJmYWNlIGZvcgotcmVtb3RlbHkgY29uZmlndXJpbmcgYW5k
IGNvbnRyb2xsaW5nIHZpcnR1YWxpc2VkIGd1ZXN0cyBydW5uaW5nIG9uIGEKLVhlbi1lbmFibGVk
IGhvc3QuIAotCi1caW5wdXR7cHJlc2VudGF0aW9ufQotCi1caW5jbHVkZXt3aXJlLXByb3RvY29s
fQotXGluY2x1ZGV7dm0tbGlmZWN5Y2xlfQotXGluY2x1ZGV7eGVuYXBpLWRhdGFtb2RlbH0KLVxp
bmNsdWRle2ZkbH0KLVxpbmNsdWRle2JpYmxpb2dyYXBoeX0KLQotXGVuZHtkb2N1bWVudH0KLS0g
CjEuNy4yLjUKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
XwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9s
aXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti35a-0000Yz-Ry; Mon, 10 Dec 2012 13:08:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TgzMZ-0007yn-1e
	for xen-devel@lists.xen.org; Fri, 07 Dec 2012 14:57:16 +0000
Received: from [85.158.137.99:18653] by server-13.bemta-3.messagelabs.com id
	C3/87-24887-5C302C05; Fri, 07 Dec 2012 14:57:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1354892163!15225862!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=Mail larger than max spam size
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4650 invoked from network); 7 Dec 2012 14:56:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Dec 2012 14:56:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,239,1355097600"; d="scan'208";a="216786685"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	07 Dec 2012 14:55:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 7 Dec 2012 09:55:10 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TgzKY-0004T4-Bs;
	Fri, 07 Dec 2012 14:55:10 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Dec 2012 14:55:08 +0000
Message-ID: <1354892110-31108-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
References: <1354892091.31710.80.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 10 Dec 2012 13:08:04 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_1/3=5D_docs=3A_Remove_xen-api_docs?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBkb2N1bWVudCBpcyBhYm91dCBhbiBvbGQgdW5tYWludGFpbmVkIHZlcnNpb24gb2YgdGhl
IFhlbkFQSSwKd2hpY2ggYmVhcnMgbGl0dGxlIHRvIG5vIHJlbGF0aW9uIHRvIHdoYXQgaXMgaW1w
bGVtZW50ZWQgaW4geGFwaSBhbmQKd2hpY2ggaXMgb25seSBwYXJ0aWFsbHkgaW1wbGVtZW50ZWQg
aW4geGVuZCAod2hpY2ggaXMgZGVwcmVjYXRlZCkuIFRoZQpkb2MgaGFzbid0IHNlZW4gbXVjaCBp
biB0aGUgd2F5IG9mIHVwZGF0ZXMgc2luY2UgMjAwOS4KCkFueW9uZSB3aG8gaXMgYWN0dWFsbHkg
aW50ZXJlc3RlZCBjYW4gY29udGludWUgdG8gdXNlIHRoZSB2ZXJzaW9uCndoaWNoIHdhcyBpbiA0
LjIuCgpTaWduZWQtb2ZmLWJ5OiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29t
PgotLS0KIGRvY3MvRG9jcy5tayAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgIDUgLQog
ZG9jcy9NYWtlZmlsZSAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgNyAtCiBkb2NzL3hl
bi1hcGkvTWFrZWZpbGUgICAgICAgICAgICAgICAgICAgfCAgIDQ0IC0KIGRvY3MveGVuLWFwaS9i
aWJsaW9ncmFwaHkudGV4ICAgICAgICAgICB8ICAgIDUgLQogZG9jcy94ZW4tYXBpL2NvdmVyc2hl
ZXQudGV4ICAgICAgICAgICAgIHwgICA2NSAtCiBkb2NzL3hlbi1hcGkvZmRsLnRleCAgICAgICAg
ICAgICAgICAgICAgfCAgNDg4IC0KIGRvY3MveGVuLWFwaS9wcmVzZW50YXRpb24udGV4ICAgICAg
ICAgICB8ICAxNDYgLQogZG9jcy94ZW4tYXBpL3JldmlzaW9uLWhpc3RvcnkudGV4ICAgICAgIHwg
ICA2MSAtCiBkb2NzL3hlbi1hcGkvdG9kby50ZXggICAgICAgICAgICAgICAgICAgfCAgMTM1IC0K
IGRvY3MveGVuLWFwaS92bS1saWZlY3ljbGUudGV4ICAgICAgICAgICB8ICAgNDMgLQogZG9jcy94
ZW4tYXBpL3ZtX2xpZmVjeWNsZS5kb3QgICAgICAgICAgIHwgICAxNyAtCiBkb2NzL3hlbi1hcGkv
d2lyZS1wcm90b2NvbC50ZXggICAgICAgICAgfCAgMzgzIC0KIGRvY3MveGVuLWFwaS94ZW4uZXBz
ICAgICAgICAgICAgICAgICAgICB8ICAgNDQgLQogZG9jcy94ZW4tYXBpL3hlbmFwaS1jb3ZlcnNo
ZWV0LnRleCAgICAgIHwgICAzOCAtCiBkb2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC1ncmFw
aC5kb3QgfCAgIDU3IC0KIGRvY3MveGVuLWFwaS94ZW5hcGktZGF0YW1vZGVsLnRleCAgICAgICB8
MjAyNDUgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogZG9jcy94ZW4tYXBpL3hlbmFw
aS50ZXggICAgICAgICAgICAgICAgIHwgICA2MCAtCiAxNyBmaWxlcyBjaGFuZ2VkLCAwIGluc2Vy
dGlvbnMoKyksIDIxODQzIGRlbGV0aW9ucygtKQogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVu
LWFwaS9NYWtlZmlsZQogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVuLWFwaS9iaWJsaW9ncmFw
aHkudGV4CiBkZWxldGUgbW9kZSAxMDA2NDQgZG9jcy94ZW4tYXBpL2NvdmVyc2hlZXQudGV4CiBk
ZWxldGUgbW9kZSAxMDA2NDQgZG9jcy94ZW4tYXBpL2ZkbC50ZXgKIGRlbGV0ZSBtb2RlIDEwMDY0
NCBkb2NzL3hlbi1hcGkvcHJlc2VudGF0aW9uLnRleAogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3Mv
eGVuLWFwaS9yZXZpc2lvbi1oaXN0b3J5LnRleAogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVu
LWFwaS90b2RvLnRleAogZGVsZXRlIG1vZGUgMTAwNjQ0IGRvY3MveGVuLWFwaS92bS1saWZlY3lj
bGUudGV4CiBkZWxldGUgbW9kZSAxMDA2NDQgZG9jcy94ZW4tYXBpL3ZtX2xpZmVjeWNsZS5kb3QK
IGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2NzL3hlbi1hcGkvd2lyZS1wcm90b2NvbC50ZXgKIGRlbGV0
ZSBtb2RlIDEwMDY0NCBkb2NzL3hlbi1hcGkveGVuLmVwcwogZGVsZXRlIG1vZGUgMTAwNjQ0IGRv
Y3MveGVuLWFwaS94ZW5hcGktY292ZXJzaGVldC50ZXgKIGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2Nz
L3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QKIGRlbGV0ZSBtb2RlIDEwMDY0NCBk
b2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC50ZXgKIGRlbGV0ZSBtb2RlIDEwMDY0NCBkb2Nz
L3hlbi1hcGkveGVuYXBpLnRleAoKZGlmZiAtLWdpdCBhL2RvY3MvRG9jcy5tayBiL2RvY3MvRG9j
cy5tawppbmRleCBhYTY1M2QzLi5kY2M4YTIxIDEwMDY0NAotLS0gYS9kb2NzL0RvY3MubWsKKysr
IGIvZG9jcy9Eb2NzLm1rCkBAIC0xLDEyICsxLDcgQEAKLVBTMlBERgkJOj0gcHMycGRmCi1EVklQ
UwkJOj0gZHZpcHMKLUxBVEVYCQk6PSBsYXRleAogRklHMkRFVgkJOj0gZmlnMmRldgogTEFURVgy
SFRNTAk6PSBsYXRleDJodG1sCiBET1hZR0VOCQk6PSBkb3h5Z2VuCiBQT0QyTUFOCQk6PSBwb2Qy
bWFuCiBQT0QySFRNTAk6PSBwb2QyaHRtbAogUE9EMlRFWFQJOj0gcG9kMnRleHQKLURPVAkJOj0g
ZG90Ci1ORUFUTwkJOj0gbmVhdG8KIE1BUktET1dOCTo9IG1hcmtkb3duCmRpZmYgLS1naXQgYS9k
b2NzL01ha2VmaWxlIGIvZG9jcy9NYWtlZmlsZQppbmRleCAwM2YxNDFhLi42MjBhMjk2IDEwMDY0
NAotLS0gYS9kb2NzL01ha2VmaWxlCisrKyBiL2RvY3MvTWFrZWZpbGUKQEAgLTI2LDEwICsyNiw2
IEBAIGFsbDogYnVpbGQKIAogLlBIT05ZOiBidWlsZAogYnVpbGQ6IGh0bWwgdHh0IG1hbi1wYWdl
cyBmaWdzCi0JQGlmIHdoaWNoICQoRE9UKSAxPi9kZXYvbnVsbCAyPi9kZXYvbnVsbCA7IHRoZW4g
ICAgICAgICAgICAgIFwKLQkkKE1BS0UpIC1DIHhlbi1hcGkgYnVpbGQgOyBlbHNlICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgXAotICAgICAgICBlY2hvICJHcmFwaHZpeiAoZG90KSBub3Qg
aW5zdGFsbGVkOyBza2lwcGluZyB4ZW4tYXBpLiIgOyBmaQotCXJtIC1mICouYXV4ICouZHZpICou
YmJsICouYmxnICouZ2xvICouaWR4ICouaWxnICoubG9nICouaW5kICoudG9jCiAKIC5QSE9OWTog
ZGV2LWRvY3MKIGRldi1kb2NzOiBweXRob24tZGV2LWRvY3MKQEAgLTc2LDcgKzcyLDYgQEAgbWFu
NS8lLjU6IG1hbi8lLnBvZC41IE1ha2VmaWxlCiAKIC5QSE9OWTogY2xlYW4KIGNsZWFuOgotCSQo
TUFLRSkgLUMgeGVuLWFwaSBjbGVhbgogCSQoTUFLRSkgLUMgZmlncyBjbGVhbgogCXJtIC1yZiAu
d29yZF9jb3VudCAqLmF1eCAqLmR2aSAqLmJibCAqLmJsZyAqLmdsbyAqLmlkeCAqfiAKIAlybSAt
cmYgKi5pbGcgKi5sb2cgKi5pbmQgKi50b2MgKi5iYWsgY29yZQpAQCAtOTMsOCArODgsNiBAQCBp
bnN0YWxsOiBhbGwKIAlybSAtcmYgJChERVNURElSKSQoRE9DRElSKQogCSQoSU5TVEFMTF9ESVIp
ICQoREVTVERJUikkKERPQ0RJUikKIAotCSQoTUFLRSkgLUMgeGVuLWFwaSBpbnN0YWxsCi0KIAkk
KElOU1RBTExfRElSKSAkKERFU1RESVIpJChNQU5ESVIpCiAJY3AgLWRSIG1hbjEgJChERVNURElS
KSQoTUFORElSKQogCWNwIC1kUiBtYW41ICQoREVTVERJUikkKE1BTkRJUikKZGlmZiAtLWdpdCBh
L2RvY3MveGVuLWFwaS9NYWtlZmlsZSBiL2RvY3MveGVuLWFwaS9NYWtlZmlsZQpkZWxldGVkIGZp
bGUgbW9kZSAxMDA2NDQKaW5kZXggNzdhMDExNy4uMDAwMDAwMAotLS0gYS9kb2NzL3hlbi1hcGkv
TWFrZWZpbGUKKysrIC9kZXYvbnVsbApAQCAtMSw0NCArMCwwIEBACi0jIS91c3IvYmluL21ha2Ug
LWYKLQotWEVOX1JPT1Q9JChDVVJESVIpLy4uLy4uCi1pbmNsdWRlICQoWEVOX1JPT1QpL0NvbmZp
Zy5tawotaW5jbHVkZSAkKFhFTl9ST09UKS9kb2NzL0RvY3MubWsKLQotCi1URVggOj0gJCh3aWxk
Y2FyZCAqLnRleCkKLUVQUyA6PSAkKHdpbGRjYXJkICouZXBzKQotRVBTRE9UIDo9ICQocGF0c3Vi
c3QgJS5kb3QsJS5lcHMsJCh3aWxkY2FyZCAqLmRvdCkpCi0KLS5QSE9OWTogYWxsCi1hbGw6IGJ1
aWxkCi0KLS5QSE9OWTogYnVpbGQKLWJ1aWxkOiB4ZW5hcGkucGRmIHhlbmFwaS5wcwotCi1pbnN0
YWxsOgotCSQoSU5TVEFMTF9ESVIpICQoREVTVERJUikkKERPQ0RJUikvcHMKLQkkKElOU1RBTExf
RElSKSAkKERFU1RESVIpJChET0NESVIpL3BkZgotCi0JWyAtZSB4ZW5hcGkucHMgXSAmJiBjcCB4
ZW5hcGkucHMgJChERVNURElSKSQoRE9DRElSKS9wcyB8fCB0cnVlCi0JWyAtZSB4ZW5hcGkucGRm
IF0gJiYgY3AgeGVuYXBpLnBkZiAkKERFU1RESVIpJChET0NESVIpL3BkZiB8fCB0cnVlCi0KLXhl
bmFwaS5kdmk6ICQoVEVYKSAkKEVQUykgJChFUFNET1QpCi0JJChMQVRFWCkgeGVuYXBpLnRleAot
CSQoTEFURVgpIHhlbmFwaS50ZXgKLQlybSAtZiAqLmF1eCAqLmxvZwotCi0lLnBkZjogJS5wcwot
CSQoUFMyUERGKSAkPCAkQAotCi0lLnBzOiAlLmR2aQotCSQoRFZJUFMpICQ8IC1vICRACi0KLSUu
ZXBzOiAlLmRvdAotCSQoRE9UKSAtVHBzICQ8ID4kQAotCi14ZW5hcGktZGF0YW1vZGVsLWdyYXBo
LmVwczogeGVuYXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QKLQkkKE5FQVRPKSAtR292ZXJsYXA9ZmFs
c2UgLVRwcyAkPCA+JEAKLQotLlBIT05ZOiBjbGVhbgotY2xlYW46Ci0Jcm0gLWYgKi5wZGYgKi5w
cyAqLmR2aSAqLmF1eCAqLmxvZyAqLm91dCAkKEVQU0RPVCkKZGlmZiAtLWdpdCBhL2RvY3MveGVu
LWFwaS9iaWJsaW9ncmFwaHkudGV4IGIvZG9jcy94ZW4tYXBpL2JpYmxpb2dyYXBoeS50ZXgKZGVs
ZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDMwZGYzODcuLjAwMDAwMDAKLS0tIGEvZG9jcy94
ZW4tYXBpL2JpYmxpb2dyYXBoeS50ZXgKKysrIC9kZXYvbnVsbApAQCAtMSw1ICswLDAgQEAKLVxi
ZWdpbnt0aGViaWJsaW9ncmFwaHl9ezl9Ci1cYmliaXRlbVtSRkMyMzk3XXtSRkMyMzk3fQotTWFz
aW50ZXIgTC4sIFx0ZXh0YmZ7VGhlICJkYXRhIiBVUkwgc2NoZW1lfSwgUkZDIDIzOTcsIEF1Z3Vz
dCAxOTk4LAotTmV0d29yayBXb3JraW5nIEdyb3VwLCBodHRwOi8vd3d3LmlldGYub3JnL3JmYy9y
ZmMyMzk3LnR4dAotXGVuZHt0aGViaWJsaW9ncmFwaHl9CmRpZmYgLS1naXQgYS9kb2NzL3hlbi1h
cGkvY292ZXJzaGVldC50ZXggYi9kb2NzL3hlbi1hcGkvY292ZXJzaGVldC50ZXgKZGVsZXRlZCBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDJkNTY4YzUuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBp
L2NvdmVyc2hlZXQudGV4CisrKyAvZGV2L251bGwKQEAgLTEsNjUgKzAsMCBAQAotJQotJSBDb3B5
cmlnaHQgKGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSUKLSUgUGVybWlzc2lvbiBpcyBn
cmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50IHVu
ZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlLCBW
ZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUg
U29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2VjdGlvbnMsIG5vIEZy
b250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRoZQot
JSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0lICJHTlUgRnJl
ZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0lCi0lIEF1dGhv
cnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBKb24gSGFycm9wLgot
JQotCi1ccGFnZXN0eWxle2VtcHR5fQotCi1cZG9jdGl0bGV7fSBcaGZpbGwgXHJldnN0cmluZ3t9
Ci0KLVx2c3BhY2V7MWNtfQotCi1cYmVnaW57Y2VudGVyfQotXHJlc2l6ZWJveHs4Y219eyF9e1xp
bmNsdWRlZ3JhcGhpY3N7XGNvdmVyc2hlZXRsb2dvfX0KLQotXHZzcGFjZXsyY219Ci0KLVxiZWdp
bntIdWdlfQotICBcZG9jdGl0bGV7fQotXGVuZHtIdWdlfQotCi1cdnNwYWNlezFjbX0KLVxiZWdp
bntMYXJnZX0KLVZlcnNpb246IFxyZXZzdHJpbmd7fVxcCi1EYXRlOiBcZGF0ZXN0cmluZ3t9Ci1c
XAotXHJlbGVhc2VzdGF0ZW1lbnR7fQotCi1cdnNwYWNlezFjbX0KLVxiZWdpbnt0YWJ1bGFyfXty
bH0KLVxkb2NhdXRob3Jze30KLVxlbmR7dGFidWxhcn0KLVxlbmR7TGFyZ2V9Ci1cZW5ke2NlbnRl
cn0KLVx2c3BhY2V7LjVjbX0KLVxiZWdpbntsYXJnZX0KLVx0ZXh0YmZ7Q29udHJpYnV0b3JzOn0g
XFwKLVxcCi1cYmVnaW57dGFidWxhcn17cHswLjVcdGV4dHdpZHRofWx9Ci1TdGVmYW4gQmVyZ2Vy
LCBJQk0gJiBWaW5jZW50IEhhbnF1ZXosIFhlblNvdXJjZSBcXAotRGFuaWVsIEJlcnJhbmdcJ2Us
IFJlZCBIYXQgJiBKb2huIExldm9uLCBTdW4gTWljcm9zeXN0ZW1zIFxcCi1HYXJldGggQmVzdG9y
LCBJQk0gJiBKb24gTHVkbGFtLCBYZW5Tb3VyY2UgXFwKLUhvbGxpcyBCbGFuY2hhcmQsIElCTSAm
IEFsYXN0YWlyIFRzZSwgWGVuU291cmNlIFxcCi1NaWtlIERheSwgSUJNICYgRGFuaWVsIFZlaWxs
YXJkLCBSZWQgSGF0IFxcCi1KaW0gRmVobGlnLCBOb3ZlbGwgJiBUb20gV2lsa2llLCBVbml2ZXJz
aXR5IG9mIENhbWJyaWRnZSBcXAotSm9uIEhhcnJvcCwgWGVuU291cmNlICYgWW9zdWtlIEl3YW1h
dHN1LCBORUMgXFwKLU1hc2FraSBLYW5ubywgRlVKSVRTVSBcXAotTHV0eiBEdWJlLCBGVUpJVFNV
IFRFQ0hOT0xPR1kgU09MVVRJT05TIFxcCi1cZW5ke3RhYnVsYXJ9Ci1cZW5ke2xhcmdlfQotCi1c
dmZpbGwKLQotXG5vaW5kZW50Ci1cbGVnYWxub3RpY2V7fQotCi1cbmV3cGFnZQotXHBhZ2VzdHls
ZXtmYW5jeX0KZGlmZiAtLWdpdCBhL2RvY3MveGVuLWFwaS9mZGwudGV4IGIvZG9jcy94ZW4tYXBp
L2ZkbC50ZXgKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IGQ4MjE0NTcuLjAwMDAwMDAK
LS0tIGEvZG9jcy94ZW4tYXBpL2ZkbC50ZXgKKysrIC9kZXYvbnVsbApAQCAtMSw0ODggKzAsMCBA
QAotXGNoYXB0ZXJ7R05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlfQotJVxsYWJlbHtsYWJl
bF9mZGx9Ci0KLSBcYmVnaW57Y2VudGVyfQotCi0gICAgICAgVmVyc2lvbiAxLjIsIE5vdmVtYmVy
IDIwMDIKLQotCi0gQ29weXJpZ2h0IFxjb3B5cmlnaHQgMjAwMCwyMDAxLDIwMDIgIEZyZWUgU29m
dHdhcmUgRm91bmRhdGlvbiwgSW5jLgotIAotIFxiaWdza2lwCi0gCi0gICAgIDUxIEZyYW5rbGlu
IFN0LCBGaWZ0aCBGbG9vciwgQm9zdG9uLCBNQSAgMDIxMTAtMTMwMSAgVVNBCi0gIAotIFxiaWdz
a2lwCi0gCi0gRXZlcnlvbmUgaXMgcGVybWl0dGVkIHRvIGNvcHkgYW5kIGRpc3RyaWJ1dGUgdmVy
YmF0aW0gY29waWVzCi0gb2YgdGhpcyBsaWNlbnNlIGRvY3VtZW50LCBidXQgY2hhbmdpbmcgaXQg
aXMgbm90IGFsbG93ZWQuCi1cZW5ke2NlbnRlcn0KLQotCi1cYmVnaW57Y2VudGVyfQote1xiZlxs
YXJnZSBQcmVhbWJsZX0KLVxlbmR7Y2VudGVyfQotCi1UaGUgcHVycG9zZSBvZiB0aGlzIExpY2Vu
c2UgaXMgdG8gbWFrZSBhIG1hbnVhbCwgdGV4dGJvb2ssIG9yIG90aGVyCi1mdW5jdGlvbmFsIGFu
ZCB1c2VmdWwgZG9jdW1lbnQgImZyZWUiIGluIHRoZSBzZW5zZSBvZiBmcmVlZG9tOiB0bwotYXNz
dXJlIGV2ZXJ5b25lIHRoZSBlZmZlY3RpdmUgZnJlZWRvbSB0byBjb3B5IGFuZCByZWRpc3RyaWJ1
dGUgaXQsCi13aXRoIG9yIHdpdGhvdXQgbW9kaWZ5aW5nIGl0LCBlaXRoZXIgY29tbWVyY2lhbGx5
IG9yIG5vbmNvbW1lcmNpYWxseS4KLVNlY29uZGFyaWx5LCB0aGlzIExpY2Vuc2UgcHJlc2VydmVz
IGZvciB0aGUgYXV0aG9yIGFuZCBwdWJsaXNoZXIgYSB3YXkKLXRvIGdldCBjcmVkaXQgZm9yIHRo
ZWlyIHdvcmssIHdoaWxlIG5vdCBiZWluZyBjb25zaWRlcmVkIHJlc3BvbnNpYmxlCi1mb3IgbW9k
aWZpY2F0aW9ucyBtYWRlIGJ5IG90aGVycy4KLQotVGhpcyBMaWNlbnNlIGlzIGEga2luZCBvZiAi
Y29weWxlZnQiLCB3aGljaCBtZWFucyB0aGF0IGRlcml2YXRpdmUKLXdvcmtzIG9mIHRoZSBkb2N1
bWVudCBtdXN0IHRoZW1zZWx2ZXMgYmUgZnJlZSBpbiB0aGUgc2FtZSBzZW5zZS4gIEl0Ci1jb21w
bGVtZW50cyB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UsIHdoaWNoIGlzIGEgY29weWxl
ZnQKLWxpY2Vuc2UgZGVzaWduZWQgZm9yIGZyZWUgc29mdHdhcmUuCi0KLVdlIGhhdmUgZGVzaWdu
ZWQgdGhpcyBMaWNlbnNlIGluIG9yZGVyIHRvIHVzZSBpdCBmb3IgbWFudWFscyBmb3IgZnJlZQot
c29mdHdhcmUsIGJlY2F1c2UgZnJlZSBzb2Z0d2FyZSBuZWVkcyBmcmVlIGRvY3VtZW50YXRpb246
IGEgZnJlZQotcHJvZ3JhbSBzaG91bGQgY29tZSB3aXRoIG1hbnVhbHMgcHJvdmlkaW5nIHRoZSBz
YW1lIGZyZWVkb21zIHRoYXQgdGhlCi1zb2Z0d2FyZSBkb2VzLiAgQnV0IHRoaXMgTGljZW5zZSBp
cyBub3QgbGltaXRlZCB0byBzb2Z0d2FyZSBtYW51YWxzOwotaXQgY2FuIGJlIHVzZWQgZm9yIGFu
eSB0ZXh0dWFsIHdvcmssIHJlZ2FyZGxlc3Mgb2Ygc3ViamVjdCBtYXR0ZXIgb3IKLXdoZXRoZXIg
aXQgaXMgcHVibGlzaGVkIGFzIGEgcHJpbnRlZCBib29rLiAgV2UgcmVjb21tZW5kIHRoaXMgTGlj
ZW5zZQotcHJpbmNpcGFsbHkgZm9yIHdvcmtzIHdob3NlIHB1cnBvc2UgaXMgaW5zdHJ1Y3Rpb24g
b3IgcmVmZXJlbmNlLgotCi0KLVxiZWdpbntjZW50ZXJ9Ci17XExhcmdlXGJmIDEuIEFQUExJQ0FC
SUxJVFkgQU5EIERFRklOSVRJT05TfQotXGFkZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rpb259ezEu
IEFQUExJQ0FCSUxJVFkgQU5EIERFRklOSVRJT05TfQotXGVuZHtjZW50ZXJ9Ci0KLVRoaXMgTGlj
ZW5zZSBhcHBsaWVzIHRvIGFueSBtYW51YWwgb3Igb3RoZXIgd29yaywgaW4gYW55IG1lZGl1bSwg
dGhhdAotY29udGFpbnMgYSBub3RpY2UgcGxhY2VkIGJ5IHRoZSBjb3B5cmlnaHQgaG9sZGVyIHNh
eWluZyBpdCBjYW4gYmUKLWRpc3RyaWJ1dGVkIHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGlzIExpY2Vu
c2UuICBTdWNoIGEgbm90aWNlIGdyYW50cyBhCi13b3JsZC13aWRlLCByb3lhbHR5LWZyZWUgbGlj
ZW5zZSwgdW5saW1pdGVkIGluIGR1cmF0aW9uLCB0byB1c2UgdGhhdAotd29yayB1bmRlciB0aGUg
Y29uZGl0aW9ucyBzdGF0ZWQgaGVyZWluLiAgVGhlIFx0ZXh0YmZ7IkRvY3VtZW50In0sIGJlbG93
LAotcmVmZXJzIHRvIGFueSBzdWNoIG1hbnVhbCBvciB3b3JrLiAgQW55IG1lbWJlciBvZiB0aGUg
cHVibGljIGlzIGEKLWxpY2Vuc2VlLCBhbmQgaXMgYWRkcmVzc2VkIGFzIFx0ZXh0YmZ7InlvdSJ9
LiAgWW91IGFjY2VwdCB0aGUgbGljZW5zZSBpZiB5b3UKLWNvcHksIG1vZGlmeSBvciBkaXN0cmli
dXRlIHRoZSB3b3JrIGluIGEgd2F5IHJlcXVpcmluZyBwZXJtaXNzaW9uCi11bmRlciBjb3B5cmln
aHQgbGF3LgotCi1BIFx0ZXh0YmZ7Ik1vZGlmaWVkIFZlcnNpb24ifSBvZiB0aGUgRG9jdW1lbnQg
bWVhbnMgYW55IHdvcmsgY29udGFpbmluZyB0aGUKLURvY3VtZW50IG9yIGEgcG9ydGlvbiBvZiBp
dCwgZWl0aGVyIGNvcGllZCB2ZXJiYXRpbSwgb3Igd2l0aAotbW9kaWZpY2F0aW9ucyBhbmQvb3Ig
dHJhbnNsYXRlZCBpbnRvIGFub3RoZXIgbGFuZ3VhZ2UuCi0KLUEgXHRleHRiZnsiU2Vjb25kYXJ5
IFNlY3Rpb24ifSBpcyBhIG5hbWVkIGFwcGVuZGl4IG9yIGEgZnJvbnQtbWF0dGVyIHNlY3Rpb24g
b2YKLXRoZSBEb2N1bWVudCB0aGF0IGRlYWxzIGV4Y2x1c2l2ZWx5IHdpdGggdGhlIHJlbGF0aW9u
c2hpcCBvZiB0aGUKLXB1Ymxpc2hlcnMgb3IgYXV0aG9ycyBvZiB0aGUgRG9jdW1lbnQgdG8gdGhl
IERvY3VtZW50J3Mgb3ZlcmFsbCBzdWJqZWN0Ci0ob3IgdG8gcmVsYXRlZCBtYXR0ZXJzKSBhbmQg
Y29udGFpbnMgbm90aGluZyB0aGF0IGNvdWxkIGZhbGwgZGlyZWN0bHkKLXdpdGhpbiB0aGF0IG92
ZXJhbGwgc3ViamVjdC4gIChUaHVzLCBpZiB0aGUgRG9jdW1lbnQgaXMgaW4gcGFydCBhCi10ZXh0
Ym9vayBvZiBtYXRoZW1hdGljcywgYSBTZWNvbmRhcnkgU2VjdGlvbiBtYXkgbm90IGV4cGxhaW4g
YW55Ci1tYXRoZW1hdGljcy4pICBUaGUgcmVsYXRpb25zaGlwIGNvdWxkIGJlIGEgbWF0dGVyIG9m
IGhpc3RvcmljYWwKLWNvbm5lY3Rpb24gd2l0aCB0aGUgc3ViamVjdCBvciB3aXRoIHJlbGF0ZWQg
bWF0dGVycywgb3Igb2YgbGVnYWwsCi1jb21tZXJjaWFsLCBwaGlsb3NvcGhpY2FsLCBldGhpY2Fs
IG9yIHBvbGl0aWNhbCBwb3NpdGlvbiByZWdhcmRpbmcKLXRoZW0uCi0KLVRoZSBcdGV4dGJmeyJJ
bnZhcmlhbnQgU2VjdGlvbnMifSBhcmUgY2VydGFpbiBTZWNvbmRhcnkgU2VjdGlvbnMgd2hvc2Ug
dGl0bGVzCi1hcmUgZGVzaWduYXRlZCwgYXMgYmVpbmcgdGhvc2Ugb2YgSW52YXJpYW50IFNlY3Rp
b25zLCBpbiB0aGUgbm90aWNlCi10aGF0IHNheXMgdGhhdCB0aGUgRG9jdW1lbnQgaXMgcmVsZWFz
ZWQgdW5kZXIgdGhpcyBMaWNlbnNlLiAgSWYgYQotc2VjdGlvbiBkb2VzIG5vdCBmaXQgdGhlIGFi
b3ZlIGRlZmluaXRpb24gb2YgU2Vjb25kYXJ5IHRoZW4gaXQgaXMgbm90Ci1hbGxvd2VkIHRvIGJl
IGRlc2lnbmF0ZWQgYXMgSW52YXJpYW50LiAgVGhlIERvY3VtZW50IG1heSBjb250YWluIHplcm8K
LUludmFyaWFudCBTZWN0aW9ucy4gIElmIHRoZSBEb2N1bWVudCBkb2VzIG5vdCBpZGVudGlmeSBh
bnkgSW52YXJpYW50Ci1TZWN0aW9ucyB0aGVuIHRoZXJlIGFyZSBub25lLgotCi1UaGUgXHRleHRi
ZnsiQ292ZXIgVGV4dHMifSBhcmUgY2VydGFpbiBzaG9ydCBwYXNzYWdlcyBvZiB0ZXh0IHRoYXQg
YXJlIGxpc3RlZCwKLWFzIEZyb250LUNvdmVyIFRleHRzIG9yIEJhY2stQ292ZXIgVGV4dHMsIGlu
IHRoZSBub3RpY2UgdGhhdCBzYXlzIHRoYXQKLXRoZSBEb2N1bWVudCBpcyByZWxlYXNlZCB1bmRl
ciB0aGlzIExpY2Vuc2UuICBBIEZyb250LUNvdmVyIFRleHQgbWF5Ci1iZSBhdCBtb3N0IDUgd29y
ZHMsIGFuZCBhIEJhY2stQ292ZXIgVGV4dCBtYXkgYmUgYXQgbW9zdCAyNSB3b3Jkcy4KLQotQSBc
dGV4dGJmeyJUcmFuc3BhcmVudCJ9IGNvcHkgb2YgdGhlIERvY3VtZW50IG1lYW5zIGEgbWFjaGlu
ZS1yZWFkYWJsZSBjb3B5LAotcmVwcmVzZW50ZWQgaW4gYSBmb3JtYXQgd2hvc2Ugc3BlY2lmaWNh
dGlvbiBpcyBhdmFpbGFibGUgdG8gdGhlCi1nZW5lcmFsIHB1YmxpYywgdGhhdCBpcyBzdWl0YWJs
ZSBmb3IgcmV2aXNpbmcgdGhlIGRvY3VtZW50Ci1zdHJhaWdodGZvcndhcmRseSB3aXRoIGdlbmVy
aWMgdGV4dCBlZGl0b3JzIG9yIChmb3IgaW1hZ2VzIGNvbXBvc2VkIG9mCi1waXhlbHMpIGdlbmVy
aWMgcGFpbnQgcHJvZ3JhbXMgb3IgKGZvciBkcmF3aW5ncykgc29tZSB3aWRlbHkgYXZhaWxhYmxl
Ci1kcmF3aW5nIGVkaXRvciwgYW5kIHRoYXQgaXMgc3VpdGFibGUgZm9yIGlucHV0IHRvIHRleHQg
Zm9ybWF0dGVycyBvcgotZm9yIGF1dG9tYXRpYyB0cmFuc2xhdGlvbiB0byBhIHZhcmlldHkgb2Yg
Zm9ybWF0cyBzdWl0YWJsZSBmb3IgaW5wdXQKLXRvIHRleHQgZm9ybWF0dGVycy4gIEEgY29weSBt
YWRlIGluIGFuIG90aGVyd2lzZSBUcmFuc3BhcmVudCBmaWxlCi1mb3JtYXQgd2hvc2UgbWFya3Vw
LCBvciBhYnNlbmNlIG9mIG1hcmt1cCwgaGFzIGJlZW4gYXJyYW5nZWQgdG8gdGh3YXJ0Ci1vciBk
aXNjb3VyYWdlIHN1YnNlcXVlbnQgbW9kaWZpY2F0aW9uIGJ5IHJlYWRlcnMgaXMgbm90IFRyYW5z
cGFyZW50LgotQW4gaW1hZ2UgZm9ybWF0IGlzIG5vdCBUcmFuc3BhcmVudCBpZiB1c2VkIGZvciBh
bnkgc3Vic3RhbnRpYWwgYW1vdW50Ci1vZiB0ZXh0LiAgQSBjb3B5IHRoYXQgaXMgbm90ICJUcmFu
c3BhcmVudCIgaXMgY2FsbGVkIFx0ZXh0YmZ7Ik9wYXF1ZSJ9LgotCi1FeGFtcGxlcyBvZiBzdWl0
YWJsZSBmb3JtYXRzIGZvciBUcmFuc3BhcmVudCBjb3BpZXMgaW5jbHVkZSBwbGFpbgotQVNDSUkg
d2l0aG91dCBtYXJrdXAsIFRleGluZm8gaW5wdXQgZm9ybWF0LCBMYVRlWCBpbnB1dCBmb3JtYXQs
IFNHTUwKLW9yIFhNTCB1c2luZyBhIHB1YmxpY2x5IGF2YWlsYWJsZSBEVEQsIGFuZCBzdGFuZGFy
ZC1jb25mb3JtaW5nIHNpbXBsZQotSFRNTCwgUG9zdFNjcmlwdCBvciBQREYgZGVzaWduZWQgZm9y
IGh1bWFuIG1vZGlmaWNhdGlvbi4gIEV4YW1wbGVzIG9mCi10cmFuc3BhcmVudCBpbWFnZSBmb3Jt
YXRzIGluY2x1ZGUgUE5HLCBYQ0YgYW5kIEpQRy4gIE9wYXF1ZSBmb3JtYXRzCi1pbmNsdWRlIHBy
b3ByaWV0YXJ5IGZvcm1hdHMgdGhhdCBjYW4gYmUgcmVhZCBhbmQgZWRpdGVkIG9ubHkgYnkKLXBy
b3ByaWV0YXJ5IHdvcmQgcHJvY2Vzc29ycywgU0dNTCBvciBYTUwgZm9yIHdoaWNoIHRoZSBEVEQg
YW5kL29yCi1wcm9jZXNzaW5nIHRvb2xzIGFyZSBub3QgZ2VuZXJhbGx5IGF2YWlsYWJsZSwgYW5k
IHRoZQotbWFjaGluZS1nZW5lcmF0ZWQgSFRNTCwgUG9zdFNjcmlwdCBvciBQREYgcHJvZHVjZWQg
Ynkgc29tZSB3b3JkCi1wcm9jZXNzb3JzIGZvciBvdXRwdXQgcHVycG9zZXMgb25seS4KLQotVGhl
IFx0ZXh0YmZ7IlRpdGxlIFBhZ2UifSBtZWFucywgZm9yIGEgcHJpbnRlZCBib29rLCB0aGUgdGl0
bGUgcGFnZSBpdHNlbGYsCi1wbHVzIHN1Y2ggZm9sbG93aW5nIHBhZ2VzIGFzIGFyZSBuZWVkZWQg
dG8gaG9sZCwgbGVnaWJseSwgdGhlIG1hdGVyaWFsCi10aGlzIExpY2Vuc2UgcmVxdWlyZXMgdG8g
YXBwZWFyIGluIHRoZSB0aXRsZSBwYWdlLiAgRm9yIHdvcmtzIGluCi1mb3JtYXRzIHdoaWNoIGRv
IG5vdCBoYXZlIGFueSB0aXRsZSBwYWdlIGFzIHN1Y2gsICJUaXRsZSBQYWdlIiBtZWFucwotdGhl
IHRleHQgbmVhciB0aGUgbW9zdCBwcm9taW5lbnQgYXBwZWFyYW5jZSBvZiB0aGUgd29yaydzIHRp
dGxlLAotcHJlY2VkaW5nIHRoZSBiZWdpbm5pbmcgb2YgdGhlIGJvZHkgb2YgdGhlIHRleHQuCi0K
LUEgc2VjdGlvbiBcdGV4dGJmeyJFbnRpdGxlZCBYWVoifSBtZWFucyBhIG5hbWVkIHN1YnVuaXQg
b2YgdGhlIERvY3VtZW50IHdob3NlCi10aXRsZSBlaXRoZXIgaXMgcHJlY2lzZWx5IFhZWiBvciBj
b250YWlucyBYWVogaW4gcGFyZW50aGVzZXMgZm9sbG93aW5nCi10ZXh0IHRoYXQgdHJhbnNsYXRl
cyBYWVogaW4gYW5vdGhlciBsYW5ndWFnZS4gIChIZXJlIFhZWiBzdGFuZHMgZm9yIGEKLXNwZWNp
ZmljIHNlY3Rpb24gbmFtZSBtZW50aW9uZWQgYmVsb3csIHN1Y2ggYXMgXHRleHRiZnsiQWNrbm93
bGVkZ2VtZW50cyJ9LAotXHRleHRiZnsiRGVkaWNhdGlvbnMifSwgXHRleHRiZnsiRW5kb3JzZW1l
bnRzIn0sIG9yIFx0ZXh0YmZ7Ikhpc3RvcnkifS4pICAKLVRvIFx0ZXh0YmZ7IlByZXNlcnZlIHRo
ZSBUaXRsZSJ9Ci1vZiBzdWNoIGEgc2VjdGlvbiB3aGVuIHlvdSBtb2RpZnkgdGhlIERvY3VtZW50
IG1lYW5zIHRoYXQgaXQgcmVtYWlucyBhCi1zZWN0aW9uICJFbnRpdGxlZCBYWVoiIGFjY29yZGlu
ZyB0byB0aGlzIGRlZmluaXRpb24uCi0KLVRoZSBEb2N1bWVudCBtYXkgaW5jbHVkZSBXYXJyYW50
eSBEaXNjbGFpbWVycyBuZXh0IHRvIHRoZSBub3RpY2Ugd2hpY2gKLXN0YXRlcyB0aGF0IHRoaXMg
TGljZW5zZSBhcHBsaWVzIHRvIHRoZSBEb2N1bWVudC4gIFRoZXNlIFdhcnJhbnR5Ci1EaXNjbGFp
bWVycyBhcmUgY29uc2lkZXJlZCB0byBiZSBpbmNsdWRlZCBieSByZWZlcmVuY2UgaW4gdGhpcwot
TGljZW5zZSwgYnV0IG9ubHkgYXMgcmVnYXJkcyBkaXNjbGFpbWluZyB3YXJyYW50aWVzOiBhbnkg
b3RoZXIKLWltcGxpY2F0aW9uIHRoYXQgdGhlc2UgV2FycmFudHkgRGlzY2xhaW1lcnMgbWF5IGhh
dmUgaXMgdm9pZCBhbmQgaGFzCi1ubyBlZmZlY3Qgb24gdGhlIG1lYW5pbmcgb2YgdGhpcyBMaWNl
bnNlLgotCi0KLVxiZWdpbntjZW50ZXJ9Ci17XExhcmdlXGJmIDIuIFZFUkJBVElNIENPUFlJTkd9
Ci1cYWRkY29udGVudHNsaW5le3RvY317c2VjdGlvbn17Mi4gVkVSQkFUSU0gQ09QWUlOR30KLVxl
bmR7Y2VudGVyfQotCi1Zb3UgbWF5IGNvcHkgYW5kIGRpc3RyaWJ1dGUgdGhlIERvY3VtZW50IGlu
IGFueSBtZWRpdW0sIGVpdGhlcgotY29tbWVyY2lhbGx5IG9yIG5vbmNvbW1lcmNpYWxseSwgcHJv
dmlkZWQgdGhhdCB0aGlzIExpY2Vuc2UsIHRoZQotY29weXJpZ2h0IG5vdGljZXMsIGFuZCB0aGUg
bGljZW5zZSBub3RpY2Ugc2F5aW5nIHRoaXMgTGljZW5zZSBhcHBsaWVzCi10byB0aGUgRG9jdW1l
bnQgYXJlIHJlcHJvZHVjZWQgaW4gYWxsIGNvcGllcywgYW5kIHRoYXQgeW91IGFkZCBubyBvdGhl
cgotY29uZGl0aW9ucyB3aGF0c29ldmVyIHRvIHRob3NlIG9mIHRoaXMgTGljZW5zZS4gIFlvdSBt
YXkgbm90IHVzZQotdGVjaG5pY2FsIG1lYXN1cmVzIHRvIG9ic3RydWN0IG9yIGNvbnRyb2wgdGhl
IHJlYWRpbmcgb3IgZnVydGhlcgotY29weWluZyBvZiB0aGUgY29waWVzIHlvdSBtYWtlIG9yIGRp
c3RyaWJ1dGUuICBIb3dldmVyLCB5b3UgbWF5IGFjY2VwdAotY29tcGVuc2F0aW9uIGluIGV4Y2hh
bmdlIGZvciBjb3BpZXMuICBJZiB5b3UgZGlzdHJpYnV0ZSBhIGxhcmdlIGVub3VnaAotbnVtYmVy
IG9mIGNvcGllcyB5b3UgbXVzdCBhbHNvIGZvbGxvdyB0aGUgY29uZGl0aW9ucyBpbiBzZWN0aW9u
IDMuCi0KLVlvdSBtYXkgYWxzbyBsZW5kIGNvcGllcywgdW5kZXIgdGhlIHNhbWUgY29uZGl0aW9u
cyBzdGF0ZWQgYWJvdmUsIGFuZAoteW91IG1heSBwdWJsaWNseSBkaXNwbGF5IGNvcGllcy4KLQot
Ci1cYmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiAzLiBDT1BZSU5HIElOIFFVQU5USVRZfQotXGFk
ZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rpb259ezMuIENPUFlJTkcgSU4gUVVBTlRJVFl9Ci1cZW5k
e2NlbnRlcn0KLQotCi1JZiB5b3UgcHVibGlzaCBwcmludGVkIGNvcGllcyAob3IgY29waWVzIGlu
IG1lZGlhIHRoYXQgY29tbW9ubHkgaGF2ZQotcHJpbnRlZCBjb3ZlcnMpIG9mIHRoZSBEb2N1bWVu
dCwgbnVtYmVyaW5nIG1vcmUgdGhhbiAxMDAsIGFuZCB0aGUKLURvY3VtZW50J3MgbGljZW5zZSBu
b3RpY2UgcmVxdWlyZXMgQ292ZXIgVGV4dHMsIHlvdSBtdXN0IGVuY2xvc2UgdGhlCi1jb3BpZXMg
aW4gY292ZXJzIHRoYXQgY2FycnksIGNsZWFybHkgYW5kIGxlZ2libHksIGFsbCB0aGVzZSBDb3Zl
cgotVGV4dHM6IEZyb250LUNvdmVyIFRleHRzIG9uIHRoZSBmcm9udCBjb3ZlciwgYW5kIEJhY2st
Q292ZXIgVGV4dHMgb24KLXRoZSBiYWNrIGNvdmVyLiAgQm90aCBjb3ZlcnMgbXVzdCBhbHNvIGNs
ZWFybHkgYW5kIGxlZ2libHkgaWRlbnRpZnkKLXlvdSBhcyB0aGUgcHVibGlzaGVyIG9mIHRoZXNl
IGNvcGllcy4gIFRoZSBmcm9udCBjb3ZlciBtdXN0IHByZXNlbnQKLXRoZSBmdWxsIHRpdGxlIHdp
dGggYWxsIHdvcmRzIG9mIHRoZSB0aXRsZSBlcXVhbGx5IHByb21pbmVudCBhbmQKLXZpc2libGUu
ICBZb3UgbWF5IGFkZCBvdGhlciBtYXRlcmlhbCBvbiB0aGUgY292ZXJzIGluIGFkZGl0aW9uLgot
Q29weWluZyB3aXRoIGNoYW5nZXMgbGltaXRlZCB0byB0aGUgY292ZXJzLCBhcyBsb25nIGFzIHRo
ZXkgcHJlc2VydmUKLXRoZSB0aXRsZSBvZiB0aGUgRG9jdW1lbnQgYW5kIHNhdGlzZnkgdGhlc2Ug
Y29uZGl0aW9ucywgY2FuIGJlIHRyZWF0ZWQKLWFzIHZlcmJhdGltIGNvcHlpbmcgaW4gb3RoZXIg
cmVzcGVjdHMuCi0KLUlmIHRoZSByZXF1aXJlZCB0ZXh0cyBmb3IgZWl0aGVyIGNvdmVyIGFyZSB0
b28gdm9sdW1pbm91cyB0byBmaXQKLWxlZ2libHksIHlvdSBzaG91bGQgcHV0IHRoZSBmaXJzdCBv
bmVzIGxpc3RlZCAoYXMgbWFueSBhcyBmaXQKLXJlYXNvbmFibHkpIG9uIHRoZSBhY3R1YWwgY292
ZXIsIGFuZCBjb250aW51ZSB0aGUgcmVzdCBvbnRvIGFkamFjZW50Ci1wYWdlcy4KLQotSWYgeW91
IHB1Ymxpc2ggb3IgZGlzdHJpYnV0ZSBPcGFxdWUgY29waWVzIG9mIHRoZSBEb2N1bWVudCBudW1i
ZXJpbmcKLW1vcmUgdGhhbiAxMDAsIHlvdSBtdXN0IGVpdGhlciBpbmNsdWRlIGEgbWFjaGluZS1y
ZWFkYWJsZSBUcmFuc3BhcmVudAotY29weSBhbG9uZyB3aXRoIGVhY2ggT3BhcXVlIGNvcHksIG9y
IHN0YXRlIGluIG9yIHdpdGggZWFjaCBPcGFxdWUgY29weQotYSBjb21wdXRlci1uZXR3b3JrIGxv
Y2F0aW9uIGZyb20gd2hpY2ggdGhlIGdlbmVyYWwgbmV0d29yay11c2luZwotcHVibGljIGhhcyBh
Y2Nlc3MgdG8gZG93bmxvYWQgdXNpbmcgcHVibGljLXN0YW5kYXJkIG5ldHdvcmsgcHJvdG9jb2xz
Ci1hIGNvbXBsZXRlIFRyYW5zcGFyZW50IGNvcHkgb2YgdGhlIERvY3VtZW50LCBmcmVlIG9mIGFk
ZGVkIG1hdGVyaWFsLgotSWYgeW91IHVzZSB0aGUgbGF0dGVyIG9wdGlvbiwgeW91IG11c3QgdGFr
ZSByZWFzb25hYmx5IHBydWRlbnQgc3RlcHMsCi13aGVuIHlvdSBiZWdpbiBkaXN0cmlidXRpb24g
b2YgT3BhcXVlIGNvcGllcyBpbiBxdWFudGl0eSwgdG8gZW5zdXJlCi10aGF0IHRoaXMgVHJhbnNw
YXJlbnQgY29weSB3aWxsIHJlbWFpbiB0aHVzIGFjY2Vzc2libGUgYXQgdGhlIHN0YXRlZAotbG9j
YXRpb24gdW50aWwgYXQgbGVhc3Qgb25lIHllYXIgYWZ0ZXIgdGhlIGxhc3QgdGltZSB5b3UgZGlz
dHJpYnV0ZSBhbgotT3BhcXVlIGNvcHkgKGRpcmVjdGx5IG9yIHRocm91Z2ggeW91ciBhZ2VudHMg
b3IgcmV0YWlsZXJzKSBvZiB0aGF0Ci1lZGl0aW9uIHRvIHRoZSBwdWJsaWMuCi0KLUl0IGlzIHJl
cXVlc3RlZCwgYnV0IG5vdCByZXF1aXJlZCwgdGhhdCB5b3UgY29udGFjdCB0aGUgYXV0aG9ycyBv
ZiB0aGUKLURvY3VtZW50IHdlbGwgYmVmb3JlIHJlZGlzdHJpYnV0aW5nIGFueSBsYXJnZSBudW1i
ZXIgb2YgY29waWVzLCB0byBnaXZlCi10aGVtIGEgY2hhbmNlIHRvIHByb3ZpZGUgeW91IHdpdGgg
YW4gdXBkYXRlZCB2ZXJzaW9uIG9mIHRoZSBEb2N1bWVudC4KLQotCi1cYmVnaW57Y2VudGVyfQot
e1xMYXJnZVxiZiA0LiBNT0RJRklDQVRJT05TfQotXGFkZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rp
b259ezQuIE1PRElGSUNBVElPTlN9Ci1cZW5ke2NlbnRlcn0KLQotWW91IG1heSBjb3B5IGFuZCBk
aXN0cmlidXRlIGEgTW9kaWZpZWQgVmVyc2lvbiBvZiB0aGUgRG9jdW1lbnQgdW5kZXIKLXRoZSBj
b25kaXRpb25zIG9mIHNlY3Rpb25zIDIgYW5kIDMgYWJvdmUsIHByb3ZpZGVkIHRoYXQgeW91IHJl
bGVhc2UKLXRoZSBNb2RpZmllZCBWZXJzaW9uIHVuZGVyIHByZWNpc2VseSB0aGlzIExpY2Vuc2Us
IHdpdGggdGhlIE1vZGlmaWVkCi1WZXJzaW9uIGZpbGxpbmcgdGhlIHJvbGUgb2YgdGhlIERvY3Vt
ZW50LCB0aHVzIGxpY2Vuc2luZyBkaXN0cmlidXRpb24KLWFuZCBtb2RpZmljYXRpb24gb2YgdGhl
IE1vZGlmaWVkIFZlcnNpb24gdG8gd2hvZXZlciBwb3NzZXNzZXMgYSBjb3B5Ci1vZiBpdC4gIElu
IGFkZGl0aW9uLCB5b3UgbXVzdCBkbyB0aGVzZSB0aGluZ3MgaW4gdGhlIE1vZGlmaWVkIFZlcnNp
b246Ci0KLVxiZWdpbntpdGVtaXplfQotXGl0ZW1bQS5dIAotICAgVXNlIGluIHRoZSBUaXRsZSBQ
YWdlIChhbmQgb24gdGhlIGNvdmVycywgaWYgYW55KSBhIHRpdGxlIGRpc3RpbmN0Ci0gICBmcm9t
IHRoYXQgb2YgdGhlIERvY3VtZW50LCBhbmQgZnJvbSB0aG9zZSBvZiBwcmV2aW91cyB2ZXJzaW9u
cwotICAgKHdoaWNoIHNob3VsZCwgaWYgdGhlcmUgd2VyZSBhbnksIGJlIGxpc3RlZCBpbiB0aGUg
SGlzdG9yeSBzZWN0aW9uCi0gICBvZiB0aGUgRG9jdW1lbnQpLiAgWW91IG1heSB1c2UgdGhlIHNh
bWUgdGl0bGUgYXMgYSBwcmV2aW91cyB2ZXJzaW9uCi0gICBpZiB0aGUgb3JpZ2luYWwgcHVibGlz
aGVyIG9mIHRoYXQgdmVyc2lvbiBnaXZlcyBwZXJtaXNzaW9uLgotICAgCi1caXRlbVtCLl0KLSAg
IExpc3Qgb24gdGhlIFRpdGxlIFBhZ2UsIGFzIGF1dGhvcnMsIG9uZSBvciBtb3JlIHBlcnNvbnMg
b3IgZW50aXRpZXMKLSAgIHJlc3BvbnNpYmxlIGZvciBhdXRob3JzaGlwIG9mIHRoZSBtb2RpZmlj
YXRpb25zIGluIHRoZSBNb2RpZmllZAotICAgVmVyc2lvbiwgdG9nZXRoZXIgd2l0aCBhdCBsZWFz
dCBmaXZlIG9mIHRoZSBwcmluY2lwYWwgYXV0aG9ycyBvZiB0aGUKLSAgIERvY3VtZW50IChhbGwg
b2YgaXRzIHByaW5jaXBhbCBhdXRob3JzLCBpZiBpdCBoYXMgZmV3ZXIgdGhhbiBmaXZlKSwKLSAg
IHVubGVzcyB0aGV5IHJlbGVhc2UgeW91IGZyb20gdGhpcyByZXF1aXJlbWVudC4KLSAgIAotXGl0
ZW1bQy5dCi0gICBTdGF0ZSBvbiB0aGUgVGl0bGUgcGFnZSB0aGUgbmFtZSBvZiB0aGUgcHVibGlz
aGVyIG9mIHRoZQotICAgTW9kaWZpZWQgVmVyc2lvbiwgYXMgdGhlIHB1Ymxpc2hlci4KLSAgIAot
XGl0ZW1bRC5dCi0gICBQcmVzZXJ2ZSBhbGwgdGhlIGNvcHlyaWdodCBub3RpY2VzIG9mIHRoZSBE
b2N1bWVudC4KLSAgIAotXGl0ZW1bRS5dCi0gICBBZGQgYW4gYXBwcm9wcmlhdGUgY29weXJpZ2h0
IG5vdGljZSBmb3IgeW91ciBtb2RpZmljYXRpb25zCi0gICBhZGphY2VudCB0byB0aGUgb3RoZXIg
Y29weXJpZ2h0IG5vdGljZXMuCi0gICAKLVxpdGVtW0YuXQotICAgSW5jbHVkZSwgaW1tZWRpYXRl
bHkgYWZ0ZXIgdGhlIGNvcHlyaWdodCBub3RpY2VzLCBhIGxpY2Vuc2Ugbm90aWNlCi0gICBnaXZp
bmcgdGhlIHB1YmxpYyBwZXJtaXNzaW9uIHRvIHVzZSB0aGUgTW9kaWZpZWQgVmVyc2lvbiB1bmRl
ciB0aGUKLSAgIHRlcm1zIG9mIHRoaXMgTGljZW5zZSwgaW4gdGhlIGZvcm0gc2hvd24gaW4gdGhl
IEFkZGVuZHVtIGJlbG93LgotICAgCi1caXRlbVtHLl0KLSAgIFByZXNlcnZlIGluIHRoYXQgbGlj
ZW5zZSBub3RpY2UgdGhlIGZ1bGwgbGlzdHMgb2YgSW52YXJpYW50IFNlY3Rpb25zCi0gICBhbmQg
cmVxdWlyZWQgQ292ZXIgVGV4dHMgZ2l2ZW4gaW4gdGhlIERvY3VtZW50J3MgbGljZW5zZSBub3Rp
Y2UuCi0gICAKLVxpdGVtW0guXQotICAgSW5jbHVkZSBhbiB1bmFsdGVyZWQgY29weSBvZiB0aGlz
IExpY2Vuc2UuCi0gICAKLVxpdGVtW0kuXQotICAgUHJlc2VydmUgdGhlIHNlY3Rpb24gRW50aXRs
ZWQgIkhpc3RvcnkiLCBQcmVzZXJ2ZSBpdHMgVGl0bGUsIGFuZCBhZGQKLSAgIHRvIGl0IGFuIGl0
ZW0gc3RhdGluZyBhdCBsZWFzdCB0aGUgdGl0bGUsIHllYXIsIG5ldyBhdXRob3JzLCBhbmQKLSAg
IHB1Ymxpc2hlciBvZiB0aGUgTW9kaWZpZWQgVmVyc2lvbiBhcyBnaXZlbiBvbiB0aGUgVGl0bGUg
UGFnZS4gIElmCi0gICB0aGVyZSBpcyBubyBzZWN0aW9uIEVudGl0bGVkICJIaXN0b3J5IiBpbiB0
aGUgRG9jdW1lbnQsIGNyZWF0ZSBvbmUKLSAgIHN0YXRpbmcgdGhlIHRpdGxlLCB5ZWFyLCBhdXRo
b3JzLCBhbmQgcHVibGlzaGVyIG9mIHRoZSBEb2N1bWVudCBhcwotICAgZ2l2ZW4gb24gaXRzIFRp
dGxlIFBhZ2UsIHRoZW4gYWRkIGFuIGl0ZW0gZGVzY3JpYmluZyB0aGUgTW9kaWZpZWQKLSAgIFZl
cnNpb24gYXMgc3RhdGVkIGluIHRoZSBwcmV2aW91cyBzZW50ZW5jZS4KLSAgIAotXGl0ZW1bSi5d
Ci0gICBQcmVzZXJ2ZSB0aGUgbmV0d29yayBsb2NhdGlvbiwgaWYgYW55LCBnaXZlbiBpbiB0aGUg
RG9jdW1lbnQgZm9yCi0gICBwdWJsaWMgYWNjZXNzIHRvIGEgVHJhbnNwYXJlbnQgY29weSBvZiB0
aGUgRG9jdW1lbnQsIGFuZCBsaWtld2lzZQotICAgdGhlIG5ldHdvcmsgbG9jYXRpb25zIGdpdmVu
IGluIHRoZSBEb2N1bWVudCBmb3IgcHJldmlvdXMgdmVyc2lvbnMKLSAgIGl0IHdhcyBiYXNlZCBv
bi4gIFRoZXNlIG1heSBiZSBwbGFjZWQgaW4gdGhlICJIaXN0b3J5IiBzZWN0aW9uLgotICAgWW91
IG1heSBvbWl0IGEgbmV0d29yayBsb2NhdGlvbiBmb3IgYSB3b3JrIHRoYXQgd2FzIHB1Ymxpc2hl
ZCBhdAotICAgbGVhc3QgZm91ciB5ZWFycyBiZWZvcmUgdGhlIERvY3VtZW50IGl0c2VsZiwgb3Ig
aWYgdGhlIG9yaWdpbmFsCi0gICBwdWJsaXNoZXIgb2YgdGhlIHZlcnNpb24gaXQgcmVmZXJzIHRv
IGdpdmVzIHBlcm1pc3Npb24uCi0gICAKLVxpdGVtW0suXQotICAgRm9yIGFueSBzZWN0aW9uIEVu
dGl0bGVkICJBY2tub3dsZWRnZW1lbnRzIiBvciAiRGVkaWNhdGlvbnMiLAotICAgUHJlc2VydmUg
dGhlIFRpdGxlIG9mIHRoZSBzZWN0aW9uLCBhbmQgcHJlc2VydmUgaW4gdGhlIHNlY3Rpb24gYWxs
Ci0gICB0aGUgc3Vic3RhbmNlIGFuZCB0b25lIG9mIGVhY2ggb2YgdGhlIGNvbnRyaWJ1dG9yIGFj
a25vd2xlZGdlbWVudHMKLSAgIGFuZC9vciBkZWRpY2F0aW9ucyBnaXZlbiB0aGVyZWluLgotICAg
Ci1caXRlbVtMLl0KLSAgIFByZXNlcnZlIGFsbCB0aGUgSW52YXJpYW50IFNlY3Rpb25zIG9mIHRo
ZSBEb2N1bWVudCwKLSAgIHVuYWx0ZXJlZCBpbiB0aGVpciB0ZXh0IGFuZCBpbiB0aGVpciB0aXRs
ZXMuICBTZWN0aW9uIG51bWJlcnMKLSAgIG9yIHRoZSBlcXVpdmFsZW50IGFyZSBub3QgY29uc2lk
ZXJlZCBwYXJ0IG9mIHRoZSBzZWN0aW9uIHRpdGxlcy4KLSAgIAotXGl0ZW1bTS5dCi0gICBEZWxl
dGUgYW55IHNlY3Rpb24gRW50aXRsZWQgIkVuZG9yc2VtZW50cyIuICBTdWNoIGEgc2VjdGlvbgot
ICAgbWF5IG5vdCBiZSBpbmNsdWRlZCBpbiB0aGUgTW9kaWZpZWQgVmVyc2lvbi4KLSAgIAotXGl0
ZW1bTi5dCi0gICBEbyBub3QgcmV0aXRsZSBhbnkgZXhpc3Rpbmcgc2VjdGlvbiB0byBiZSBFbnRp
dGxlZCAiRW5kb3JzZW1lbnRzIgotICAgb3IgdG8gY29uZmxpY3QgaW4gdGl0bGUgd2l0aCBhbnkg
SW52YXJpYW50IFNlY3Rpb24uCi0gICAKLVxpdGVtW08uXQotICAgUHJlc2VydmUgYW55IFdhcnJh
bnR5IERpc2NsYWltZXJzLgotXGVuZHtpdGVtaXplfQotCi1JZiB0aGUgTW9kaWZpZWQgVmVyc2lv
biBpbmNsdWRlcyBuZXcgZnJvbnQtbWF0dGVyIHNlY3Rpb25zIG9yCi1hcHBlbmRpY2VzIHRoYXQg
cXVhbGlmeSBhcyBTZWNvbmRhcnkgU2VjdGlvbnMgYW5kIGNvbnRhaW4gbm8gbWF0ZXJpYWwKLWNv
cGllZCBmcm9tIHRoZSBEb2N1bWVudCwgeW91IG1heSBhdCB5b3VyIG9wdGlvbiBkZXNpZ25hdGUg
c29tZSBvciBhbGwKLW9mIHRoZXNlIHNlY3Rpb25zIGFzIGludmFyaWFudC4gIFRvIGRvIHRoaXMs
IGFkZCB0aGVpciB0aXRsZXMgdG8gdGhlCi1saXN0IG9mIEludmFyaWFudCBTZWN0aW9ucyBpbiB0
aGUgTW9kaWZpZWQgVmVyc2lvbidzIGxpY2Vuc2Ugbm90aWNlLgotVGhlc2UgdGl0bGVzIG11c3Qg
YmUgZGlzdGluY3QgZnJvbSBhbnkgb3RoZXIgc2VjdGlvbiB0aXRsZXMuCi0KLVlvdSBtYXkgYWRk
IGEgc2VjdGlvbiBFbnRpdGxlZCAiRW5kb3JzZW1lbnRzIiwgcHJvdmlkZWQgaXQgY29udGFpbnMK
LW5vdGhpbmcgYnV0IGVuZG9yc2VtZW50cyBvZiB5b3VyIE1vZGlmaWVkIFZlcnNpb24gYnkgdmFy
aW91cwotcGFydGllcy0tZm9yIGV4YW1wbGUsIHN0YXRlbWVudHMgb2YgcGVlciByZXZpZXcgb3Ig
dGhhdCB0aGUgdGV4dCBoYXMKLWJlZW4gYXBwcm92ZWQgYnkgYW4gb3JnYW5pemF0aW9uIGFzIHRo
ZSBhdXRob3JpdGF0aXZlIGRlZmluaXRpb24gb2YgYQotc3RhbmRhcmQuCi0KLVlvdSBtYXkgYWRk
IGEgcGFzc2FnZSBvZiB1cCB0byBmaXZlIHdvcmRzIGFzIGEgRnJvbnQtQ292ZXIgVGV4dCwgYW5k
IGEKLXBhc3NhZ2Ugb2YgdXAgdG8gMjUgd29yZHMgYXMgYSBCYWNrLUNvdmVyIFRleHQsIHRvIHRo
ZSBlbmQgb2YgdGhlIGxpc3QKLW9mIENvdmVyIFRleHRzIGluIHRoZSBNb2RpZmllZCBWZXJzaW9u
LiAgT25seSBvbmUgcGFzc2FnZSBvZgotRnJvbnQtQ292ZXIgVGV4dCBhbmQgb25lIG9mIEJhY2st
Q292ZXIgVGV4dCBtYXkgYmUgYWRkZWQgYnkgKG9yCi10aHJvdWdoIGFycmFuZ2VtZW50cyBtYWRl
IGJ5KSBhbnkgb25lIGVudGl0eS4gIElmIHRoZSBEb2N1bWVudCBhbHJlYWR5Ci1pbmNsdWRlcyBh
IGNvdmVyIHRleHQgZm9yIHRoZSBzYW1lIGNvdmVyLCBwcmV2aW91c2x5IGFkZGVkIGJ5IHlvdSBv
cgotYnkgYXJyYW5nZW1lbnQgbWFkZSBieSB0aGUgc2FtZSBlbnRpdHkgeW91IGFyZSBhY3Rpbmcg
b24gYmVoYWxmIG9mLAoteW91IG1heSBub3QgYWRkIGFub3RoZXI7IGJ1dCB5b3UgbWF5IHJlcGxh
Y2UgdGhlIG9sZCBvbmUsIG9uIGV4cGxpY2l0Ci1wZXJtaXNzaW9uIGZyb20gdGhlIHByZXZpb3Vz
IHB1Ymxpc2hlciB0aGF0IGFkZGVkIHRoZSBvbGQgb25lLgotCi1UaGUgYXV0aG9yKHMpIGFuZCBw
dWJsaXNoZXIocykgb2YgdGhlIERvY3VtZW50IGRvIG5vdCBieSB0aGlzIExpY2Vuc2UKLWdpdmUg
cGVybWlzc2lvbiB0byB1c2UgdGhlaXIgbmFtZXMgZm9yIHB1YmxpY2l0eSBmb3Igb3IgdG8gYXNz
ZXJ0IG9yCi1pbXBseSBlbmRvcnNlbWVudCBvZiBhbnkgTW9kaWZpZWQgVmVyc2lvbi4KLQotCi1c
YmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiA1LiBDT01CSU5JTkcgRE9DVU1FTlRTfQotXGFkZGNv
bnRlbnRzbGluZXt0b2N9e3NlY3Rpb259ezUuIENPTUJJTklORyBET0NVTUVOVFN9Ci1cZW5ke2Nl
bnRlcn0KLQotCi1Zb3UgbWF5IGNvbWJpbmUgdGhlIERvY3VtZW50IHdpdGggb3RoZXIgZG9jdW1l
bnRzIHJlbGVhc2VkIHVuZGVyIHRoaXMKLUxpY2Vuc2UsIHVuZGVyIHRoZSB0ZXJtcyBkZWZpbmVk
IGluIHNlY3Rpb24gNCBhYm92ZSBmb3IgbW9kaWZpZWQKLXZlcnNpb25zLCBwcm92aWRlZCB0aGF0
IHlvdSBpbmNsdWRlIGluIHRoZSBjb21iaW5hdGlvbiBhbGwgb2YgdGhlCi1JbnZhcmlhbnQgU2Vj
dGlvbnMgb2YgYWxsIG9mIHRoZSBvcmlnaW5hbCBkb2N1bWVudHMsIHVubW9kaWZpZWQsIGFuZAot
bGlzdCB0aGVtIGFsbCBhcyBJbnZhcmlhbnQgU2VjdGlvbnMgb2YgeW91ciBjb21iaW5lZCB3b3Jr
IGluIGl0cwotbGljZW5zZSBub3RpY2UsIGFuZCB0aGF0IHlvdSBwcmVzZXJ2ZSBhbGwgdGhlaXIg
V2FycmFudHkgRGlzY2xhaW1lcnMuCi0KLVRoZSBjb21iaW5lZCB3b3JrIG5lZWQgb25seSBjb250
YWluIG9uZSBjb3B5IG9mIHRoaXMgTGljZW5zZSwgYW5kCi1tdWx0aXBsZSBpZGVudGljYWwgSW52
YXJpYW50IFNlY3Rpb25zIG1heSBiZSByZXBsYWNlZCB3aXRoIGEgc2luZ2xlCi1jb3B5LiAgSWYg
dGhlcmUgYXJlIG11bHRpcGxlIEludmFyaWFudCBTZWN0aW9ucyB3aXRoIHRoZSBzYW1lIG5hbWUg
YnV0Ci1kaWZmZXJlbnQgY29udGVudHMsIG1ha2UgdGhlIHRpdGxlIG9mIGVhY2ggc3VjaCBzZWN0
aW9uIHVuaXF1ZSBieQotYWRkaW5nIGF0IHRoZSBlbmQgb2YgaXQsIGluIHBhcmVudGhlc2VzLCB0
aGUgbmFtZSBvZiB0aGUgb3JpZ2luYWwKLWF1dGhvciBvciBwdWJsaXNoZXIgb2YgdGhhdCBzZWN0
aW9uIGlmIGtub3duLCBvciBlbHNlIGEgdW5pcXVlIG51bWJlci4KLU1ha2UgdGhlIHNhbWUgYWRq
dXN0bWVudCB0byB0aGUgc2VjdGlvbiB0aXRsZXMgaW4gdGhlIGxpc3Qgb2YKLUludmFyaWFudCBT
ZWN0aW9ucyBpbiB0aGUgbGljZW5zZSBub3RpY2Ugb2YgdGhlIGNvbWJpbmVkIHdvcmsuCi0KLUlu
IHRoZSBjb21iaW5hdGlvbiwgeW91IG11c3QgY29tYmluZSBhbnkgc2VjdGlvbnMgRW50aXRsZWQg
Ikhpc3RvcnkiCi1pbiB0aGUgdmFyaW91cyBvcmlnaW5hbCBkb2N1bWVudHMsIGZvcm1pbmcgb25l
IHNlY3Rpb24gRW50aXRsZWQKLSJIaXN0b3J5IjsgbGlrZXdpc2UgY29tYmluZSBhbnkgc2VjdGlv
bnMgRW50aXRsZWQgIkFja25vd2xlZGdlbWVudHMiLAotYW5kIGFueSBzZWN0aW9ucyBFbnRpdGxl
ZCAiRGVkaWNhdGlvbnMiLiAgWW91IG11c3QgZGVsZXRlIGFsbCBzZWN0aW9ucwotRW50aXRsZWQg
IkVuZG9yc2VtZW50cyIuCi0KLVxiZWdpbntjZW50ZXJ9Ci17XExhcmdlXGJmIDYuIENPTExFQ1RJ
T05TIE9GIERPQ1VNRU5UU30KLVxhZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0aW9ufXs2LiBDT0xM
RUNUSU9OUyBPRiBET0NVTUVOVFN9Ci1cZW5ke2NlbnRlcn0KLQotWW91IG1heSBtYWtlIGEgY29s
bGVjdGlvbiBjb25zaXN0aW5nIG9mIHRoZSBEb2N1bWVudCBhbmQgb3RoZXIgZG9jdW1lbnRzCi1y
ZWxlYXNlZCB1bmRlciB0aGlzIExpY2Vuc2UsIGFuZCByZXBsYWNlIHRoZSBpbmRpdmlkdWFsIGNv
cGllcyBvZiB0aGlzCi1MaWNlbnNlIGluIHRoZSB2YXJpb3VzIGRvY3VtZW50cyB3aXRoIGEgc2lu
Z2xlIGNvcHkgdGhhdCBpcyBpbmNsdWRlZCBpbgotdGhlIGNvbGxlY3Rpb24sIHByb3ZpZGVkIHRo
YXQgeW91IGZvbGxvdyB0aGUgcnVsZXMgb2YgdGhpcyBMaWNlbnNlIGZvcgotdmVyYmF0aW0gY29w
eWluZyBvZiBlYWNoIG9mIHRoZSBkb2N1bWVudHMgaW4gYWxsIG90aGVyIHJlc3BlY3RzLgotCi1Z
b3UgbWF5IGV4dHJhY3QgYSBzaW5nbGUgZG9jdW1lbnQgZnJvbSBzdWNoIGEgY29sbGVjdGlvbiwg
YW5kIGRpc3RyaWJ1dGUKLWl0IGluZGl2aWR1YWxseSB1bmRlciB0aGlzIExpY2Vuc2UsIHByb3Zp
ZGVkIHlvdSBpbnNlcnQgYSBjb3B5IG9mIHRoaXMKLUxpY2Vuc2UgaW50byB0aGUgZXh0cmFjdGVk
IGRvY3VtZW50LCBhbmQgZm9sbG93IHRoaXMgTGljZW5zZSBpbiBhbGwKLW90aGVyIHJlc3BlY3Rz
IHJlZ2FyZGluZyB2ZXJiYXRpbSBjb3B5aW5nIG9mIHRoYXQgZG9jdW1lbnQuCi0KLQotXGJlZ2lu
e2NlbnRlcn0KLXtcTGFyZ2VcYmYgNy4gQUdHUkVHQVRJT04gV0lUSCBJTkRFUEVOREVOVCBXT1JL
U30KLVxhZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0aW9ufXs3LiBBR0dSRUdBVElPTiBXSVRIIElO
REVQRU5ERU5UIFdPUktTfQotXGVuZHtjZW50ZXJ9Ci0KLQotQSBjb21waWxhdGlvbiBvZiB0aGUg
RG9jdW1lbnQgb3IgaXRzIGRlcml2YXRpdmVzIHdpdGggb3RoZXIgc2VwYXJhdGUKLWFuZCBpbmRl
cGVuZGVudCBkb2N1bWVudHMgb3Igd29ya3MsIGluIG9yIG9uIGEgdm9sdW1lIG9mIGEgc3RvcmFn
ZSBvcgotZGlzdHJpYnV0aW9uIG1lZGl1bSwgaXMgY2FsbGVkIGFuICJhZ2dyZWdhdGUiIGlmIHRo
ZSBjb3B5cmlnaHQKLXJlc3VsdGluZyBmcm9tIHRoZSBjb21waWxhdGlvbiBpcyBub3QgdXNlZCB0
byBsaW1pdCB0aGUgbGVnYWwgcmlnaHRzCi1vZiB0aGUgY29tcGlsYXRpb24ncyB1c2VycyBiZXlv
bmQgd2hhdCB0aGUgaW5kaXZpZHVhbCB3b3JrcyBwZXJtaXQuCi1XaGVuIHRoZSBEb2N1bWVudCBp
cyBpbmNsdWRlZCBpbiBhbiBhZ2dyZWdhdGUsIHRoaXMgTGljZW5zZSBkb2VzIG5vdAotYXBwbHkg
dG8gdGhlIG90aGVyIHdvcmtzIGluIHRoZSBhZ2dyZWdhdGUgd2hpY2ggYXJlIG5vdCB0aGVtc2Vs
dmVzCi1kZXJpdmF0aXZlIHdvcmtzIG9mIHRoZSBEb2N1bWVudC4KLQotSWYgdGhlIENvdmVyIFRl
eHQgcmVxdWlyZW1lbnQgb2Ygc2VjdGlvbiAzIGlzIGFwcGxpY2FibGUgdG8gdGhlc2UKLWNvcGll
cyBvZiB0aGUgRG9jdW1lbnQsIHRoZW4gaWYgdGhlIERvY3VtZW50IGlzIGxlc3MgdGhhbiBvbmUg
aGFsZiBvZgotdGhlIGVudGlyZSBhZ2dyZWdhdGUsIHRoZSBEb2N1bWVudCdzIENvdmVyIFRleHRz
IG1heSBiZSBwbGFjZWQgb24KLWNvdmVycyB0aGF0IGJyYWNrZXQgdGhlIERvY3VtZW50IHdpdGhp
biB0aGUgYWdncmVnYXRlLCBvciB0aGUKLWVsZWN0cm9uaWMgZXF1aXZhbGVudCBvZiBjb3ZlcnMg
aWYgdGhlIERvY3VtZW50IGlzIGluIGVsZWN0cm9uaWMgZm9ybS4KLU90aGVyd2lzZSB0aGV5IG11
c3QgYXBwZWFyIG9uIHByaW50ZWQgY292ZXJzIHRoYXQgYnJhY2tldCB0aGUgd2hvbGUKLWFnZ3Jl
Z2F0ZS4KLQotCi1cYmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiA4LiBUUkFOU0xBVElPTn0KLVxh
ZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0aW9ufXs4LiBUUkFOU0xBVElPTn0KLVxlbmR7Y2VudGVy
fQotCi0KLVRyYW5zbGF0aW9uIGlzIGNvbnNpZGVyZWQgYSBraW5kIG9mIG1vZGlmaWNhdGlvbiwg
c28geW91IG1heQotZGlzdHJpYnV0ZSB0cmFuc2xhdGlvbnMgb2YgdGhlIERvY3VtZW50IHVuZGVy
IHRoZSB0ZXJtcyBvZiBzZWN0aW9uIDQuCi1SZXBsYWNpbmcgSW52YXJpYW50IFNlY3Rpb25zIHdp
dGggdHJhbnNsYXRpb25zIHJlcXVpcmVzIHNwZWNpYWwKLXBlcm1pc3Npb24gZnJvbSB0aGVpciBj
b3B5cmlnaHQgaG9sZGVycywgYnV0IHlvdSBtYXkgaW5jbHVkZQotdHJhbnNsYXRpb25zIG9mIHNv
bWUgb3IgYWxsIEludmFyaWFudCBTZWN0aW9ucyBpbiBhZGRpdGlvbiB0byB0aGUKLW9yaWdpbmFs
IHZlcnNpb25zIG9mIHRoZXNlIEludmFyaWFudCBTZWN0aW9ucy4gIFlvdSBtYXkgaW5jbHVkZSBh
Ci10cmFuc2xhdGlvbiBvZiB0aGlzIExpY2Vuc2UsIGFuZCBhbGwgdGhlIGxpY2Vuc2Ugbm90aWNl
cyBpbiB0aGUKLURvY3VtZW50LCBhbmQgYW55IFdhcnJhbnR5IERpc2NsYWltZXJzLCBwcm92aWRl
ZCB0aGF0IHlvdSBhbHNvIGluY2x1ZGUKLXRoZSBvcmlnaW5hbCBFbmdsaXNoIHZlcnNpb24gb2Yg
dGhpcyBMaWNlbnNlIGFuZCB0aGUgb3JpZ2luYWwgdmVyc2lvbnMKLW9mIHRob3NlIG5vdGljZXMg
YW5kIGRpc2NsYWltZXJzLiAgSW4gY2FzZSBvZiBhIGRpc2FncmVlbWVudCBiZXR3ZWVuCi10aGUg
dHJhbnNsYXRpb24gYW5kIHRoZSBvcmlnaW5hbCB2ZXJzaW9uIG9mIHRoaXMgTGljZW5zZSBvciBh
IG5vdGljZQotb3IgZGlzY2xhaW1lciwgdGhlIG9yaWdpbmFsIHZlcnNpb24gd2lsbCBwcmV2YWls
LgotCi1JZiBhIHNlY3Rpb24gaW4gdGhlIERvY3VtZW50IGlzIEVudGl0bGVkICJBY2tub3dsZWRn
ZW1lbnRzIiwKLSJEZWRpY2F0aW9ucyIsIG9yICJIaXN0b3J5IiwgdGhlIHJlcXVpcmVtZW50IChz
ZWN0aW9uIDQpIHRvIFByZXNlcnZlCi1pdHMgVGl0bGUgKHNlY3Rpb24gMSkgd2lsbCB0eXBpY2Fs
bHkgcmVxdWlyZSBjaGFuZ2luZyB0aGUgYWN0dWFsCi10aXRsZS4KLQotCi1cYmVnaW57Y2VudGVy
fQote1xMYXJnZVxiZiA5LiBURVJNSU5BVElPTn0KLVxhZGRjb250ZW50c2xpbmV7dG9jfXtzZWN0
aW9ufXs5LiBURVJNSU5BVElPTn0KLVxlbmR7Y2VudGVyfQotCi0KLVlvdSBtYXkgbm90IGNvcHks
IG1vZGlmeSwgc3VibGljZW5zZSwgb3IgZGlzdHJpYnV0ZSB0aGUgRG9jdW1lbnQgZXhjZXB0Ci1h
cyBleHByZXNzbHkgcHJvdmlkZWQgZm9yIHVuZGVyIHRoaXMgTGljZW5zZS4gIEFueSBvdGhlciBh
dHRlbXB0IHRvCi1jb3B5LCBtb2RpZnksIHN1YmxpY2Vuc2Ugb3IgZGlzdHJpYnV0ZSB0aGUgRG9j
dW1lbnQgaXMgdm9pZCwgYW5kIHdpbGwKLWF1dG9tYXRpY2FsbHkgdGVybWluYXRlIHlvdXIgcmln
aHRzIHVuZGVyIHRoaXMgTGljZW5zZS4gIEhvd2V2ZXIsCi1wYXJ0aWVzIHdobyBoYXZlIHJlY2Vp
dmVkIGNvcGllcywgb3IgcmlnaHRzLCBmcm9tIHlvdSB1bmRlciB0aGlzCi1MaWNlbnNlIHdpbGwg
bm90IGhhdmUgdGhlaXIgbGljZW5zZXMgdGVybWluYXRlZCBzbyBsb25nIGFzIHN1Y2gKLXBhcnRp
ZXMgcmVtYWluIGluIGZ1bGwgY29tcGxpYW5jZS4KLQotCi1cYmVnaW57Y2VudGVyfQote1xMYXJn
ZVxiZiAxMC4gRlVUVVJFIFJFVklTSU9OUyBPRiBUSElTIExJQ0VOU0V9Ci1cYWRkY29udGVudHNs
aW5le3RvY317c2VjdGlvbn17MTAuIEZVVFVSRSBSRVZJU0lPTlMgT0YgVEhJUyBMSUNFTlNFfQot
XGVuZHtjZW50ZXJ9Ci0KLQotVGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiBtYXkgcHVibGlz
aCBuZXcsIHJldmlzZWQgdmVyc2lvbnMKLW9mIHRoZSBHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExp
Y2Vuc2UgZnJvbSB0aW1lIHRvIHRpbWUuICBTdWNoIG5ldwotdmVyc2lvbnMgd2lsbCBiZSBzaW1p
bGFyIGluIHNwaXJpdCB0byB0aGUgcHJlc2VudCB2ZXJzaW9uLCBidXQgbWF5Ci1kaWZmZXIgaW4g
ZGV0YWlsIHRvIGFkZHJlc3MgbmV3IHByb2JsZW1zIG9yIGNvbmNlcm5zLiAgU2VlCi1odHRwOi8v
d3d3LmdudS5vcmcvY29weWxlZnQvLgotCi1FYWNoIHZlcnNpb24gb2YgdGhlIExpY2Vuc2UgaXMg
Z2l2ZW4gYSBkaXN0aW5ndWlzaGluZyB2ZXJzaW9uIG51bWJlci4KLUlmIHRoZSBEb2N1bWVudCBz
cGVjaWZpZXMgdGhhdCBhIHBhcnRpY3VsYXIgbnVtYmVyZWQgdmVyc2lvbiBvZiB0aGlzCi1MaWNl
bnNlICJvciBhbnkgbGF0ZXIgdmVyc2lvbiIgYXBwbGllcyB0byBpdCwgeW91IGhhdmUgdGhlIG9w
dGlvbiBvZgotZm9sbG93aW5nIHRoZSB0ZXJtcyBhbmQgY29uZGl0aW9ucyBlaXRoZXIgb2YgdGhh
dCBzcGVjaWZpZWQgdmVyc2lvbiBvcgotb2YgYW55IGxhdGVyIHZlcnNpb24gdGhhdCBoYXMgYmVl
biBwdWJsaXNoZWQgKG5vdCBhcyBhIGRyYWZ0KSBieSB0aGUKLUZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbi4gIElmIHRoZSBEb2N1bWVudCBkb2VzIG5vdCBzcGVjaWZ5IGEgdmVyc2lvbgotbnVtYmVy
IG9mIHRoaXMgTGljZW5zZSwgeW91IG1heSBjaG9vc2UgYW55IHZlcnNpb24gZXZlciBwdWJsaXNo
ZWQgKG5vdAotYXMgYSBkcmFmdCkgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4KLQot
Ci1cYmVnaW57Y2VudGVyfQote1xMYXJnZVxiZiBBRERFTkRVTTogSG93IHRvIHVzZSB0aGlzIExp
Y2Vuc2UgZm9yIHlvdXIgZG9jdW1lbnRzfQotXGFkZGNvbnRlbnRzbGluZXt0b2N9e3NlY3Rpb259
e0FEREVORFVNOiBIb3cgdG8gdXNlIHRoaXMgTGljZW5zZSBmb3IgeW91ciBkb2N1bWVudHN9Ci1c
ZW5ke2NlbnRlcn0KLQotVG8gdXNlIHRoaXMgTGljZW5zZSBpbiBhIGRvY3VtZW50IHlvdSBoYXZl
IHdyaXR0ZW4sIGluY2x1ZGUgYSBjb3B5IG9mCi10aGUgTGljZW5zZSBpbiB0aGUgZG9jdW1lbnQg
YW5kIHB1dCB0aGUgZm9sbG93aW5nIGNvcHlyaWdodCBhbmQKLWxpY2Vuc2Ugbm90aWNlcyBqdXN0
IGFmdGVyIHRoZSB0aXRsZSBwYWdlOgotCi1cYmlnc2tpcAotXGJlZ2lue3F1b3RlfQotICAgIENv
cHlyaWdodCBcY29weXJpZ2h0ICBZRUFSICBZT1VSIE5BTUUuCi0gICAgUGVybWlzc2lvbiBpcyBn
cmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50Ci0g
ICAgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2Us
IFZlcnNpb24gMS4yCi0gICAgb3IgYW55IGxhdGVyIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRoZSBG
cmVlIFNvZnR3YXJlIEZvdW5kYXRpb247Ci0gICAgd2l0aCBubyBJbnZhcmlhbnQgU2VjdGlvbnMs
IG5vIEZyb250LUNvdmVyIFRleHRzLCBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4KLSAgICBBIGNv
cHkgb2YgdGhlIGxpY2Vuc2UgaXMgaW5jbHVkZWQgaW4gdGhlIHNlY3Rpb24gZW50aXRsZWQgIkdO
VQotICAgIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlIi4KLVxlbmR7cXVvdGV9Ci1cYmlnc2tp
cAotICAgIAotSWYgeW91IGhhdmUgSW52YXJpYW50IFNlY3Rpb25zLCBGcm9udC1Db3ZlciBUZXh0
cyBhbmQgQmFjay1Db3ZlciBUZXh0cywKLXJlcGxhY2UgdGhlICJ3aXRoLi4uVGV4dHMuIiBsaW5l
IHdpdGggdGhpczoKLQotXGJpZ3NraXAKLVxiZWdpbntxdW90ZX0KLSAgICB3aXRoIHRoZSBJbnZh
cmlhbnQgU2VjdGlvbnMgYmVpbmcgTElTVCBUSEVJUiBUSVRMRVMsIHdpdGggdGhlCi0gICAgRnJv
bnQtQ292ZXIgVGV4dHMgYmVpbmcgTElTVCwgYW5kIHdpdGggdGhlIEJhY2stQ292ZXIgVGV4dHMg
YmVpbmcgTElTVC4KLVxlbmR7cXVvdGV9Ci1cYmlnc2tpcAotICAgIAotSWYgeW91IGhhdmUgSW52
YXJpYW50IFNlY3Rpb25zIHdpdGhvdXQgQ292ZXIgVGV4dHMsIG9yIHNvbWUgb3RoZXIKLWNvbWJp
bmF0aW9uIG9mIHRoZSB0aHJlZSwgbWVyZ2UgdGhvc2UgdHdvIGFsdGVybmF0aXZlcyB0byBzdWl0
IHRoZQotc2l0dWF0aW9uLgotCi1JZiB5b3VyIGRvY3VtZW50IGNvbnRhaW5zIG5vbnRyaXZpYWwg
ZXhhbXBsZXMgb2YgcHJvZ3JhbSBjb2RlLCB3ZQotcmVjb21tZW5kIHJlbGVhc2luZyB0aGVzZSBl
eGFtcGxlcyBpbiBwYXJhbGxlbCB1bmRlciB5b3VyIGNob2ljZSBvZgotZnJlZSBzb2Z0d2FyZSBs
aWNlbnNlLCBzdWNoIGFzIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSwKLXRvIHBlcm1p
dCB0aGVpciB1c2UgaW4gZnJlZSBzb2Z0d2FyZS4KZGlmZiAtLWdpdCBhL2RvY3MveGVuLWFwaS9w
cmVzZW50YXRpb24udGV4IGIvZG9jcy94ZW4tYXBpL3ByZXNlbnRhdGlvbi50ZXgKZGVsZXRlZCBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDE3ZmUzYzUuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBp
L3ByZXNlbnRhdGlvbi50ZXgKKysrIC9kZXYvbnVsbApAQCAtMSwxNDYgKzAsMCBAQAotJQotJSBD
b3B5cmlnaHQgKGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSUKLSUgUGVybWlzc2lvbiBp
cyBncmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50
IHVuZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNl
LCBWZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZy
ZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2VjdGlvbnMsIG5v
IEZyb250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRo
ZQotJSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0lICJHTlUg
RnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0lCi0lIEF1
dGhvcnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBKb24gSGFycm9w
LgotJQotCi1UaGUgQVBJIGlzIHByZXNlbnRlZCBoZXJlIGFzIGEgc2V0IG9mIFJlbW90ZSBQcm9j
ZWR1cmUgQ2FsbHMsIHdpdGggYSB3aXJlCi1mb3JtYXQgYmFzZWQgdXBvbiBYTUwtUlBDLiBObyBz
cGVjaWZpYyBsYW5ndWFnZSBiaW5kaW5ncyBhcmUgcHJlc2NyaWJlZCwKLWFsdGhvdWdoIGV4YW1w
bGVzIHdpbGwgYmUgZ2l2ZW4gaW4gdGhlIHB5dGhvbiBwcm9ncmFtbWluZyBsYW5ndWFnZS4KLSAK
LUFsdGhvdWdoIHdlIGFkb3B0IHNvbWUgdGVybWlub2xvZ3kgZnJvbSBvYmplY3Qtb3JpZW50ZWQg
cHJvZ3JhbW1pbmcsIAotZnV0dXJlIGNsaWVudCBsYW5ndWFnZSBiaW5kaW5ncyBtYXkgb3IgbWF5
IG5vdCBiZSBvYmplY3Qgb3JpZW50ZWQuCi1UaGUgQVBJIHJlZmVyZW5jZSB1c2VzIHRoZSB0ZXJt
aW5vbG9neSB7XGVtIGNsYXNzZXNcL30gYW5kIHtcZW0gb2JqZWN0c1wvfS4KLUZvciBvdXIgcHVy
cG9zZXMgYSB7XGVtIGNsYXNzXC99IGlzIHNpbXBseSBhIGhpZXJhcmNoaWNhbCBuYW1lc3BhY2U7
Ci1hbiB7XGVtIG9iamVjdFwvfSBpcyBhbiBpbnN0YW5jZSBvZiBhIGNsYXNzIHdpdGggaXRzIGZp
ZWxkcyBzZXQgdG8KLXNwZWNpZmljIHZhbHVlcy4gT2JqZWN0cyBhcmUgcGVyc2lzdGVudCBhbmQg
ZXhpc3Qgb24gdGhlIHNlcnZlci1zaWRlLgotQ2xpZW50cyBtYXkgb2J0YWluIG9wYXF1ZSByZWZl
cmVuY2VzIHRvIHRoZXNlIHNlcnZlci1zaWRlIG9iamVjdHMgYW5kIHRoZW4KLWFjY2VzcyB0aGVp
ciBmaWVsZHMgdmlhIGdldC9zZXQgUlBDcy4KLQotJUluIGVhY2ggY2xhc3MgdGhlcmUgaXMgYSAk
XG1hdGhpdHt1aWR9JCBmaWVsZCB0aGF0IGFzc2lnbnMgYW4gaW5kZW50aWZpZXIKLSV0byBlYWNo
IG9iamVjdC4gVGhpcyAkXG1hdGhpdHt1aWR9JCBzZXJ2ZXMgYXMgYW4gb2JqZWN0IHJlZmVyZW5j
ZQotJW9uIGJvdGggY2xpZW50LSBhbmQgc2VydmVyLXNpZGUsIGFuZCBpcyBvZnRlbiBpbmNsdWRl
ZCBhcyBhbiBhcmd1bWVudCBpbgotJVJQQyBtZXNzYWdlcy4KLQotRm9yIGVhY2ggY2xhc3Mgd2Ug
c3BlY2lmeSBhIGxpc3Qgb2YKLWZpZWxkcyBhbG9uZyB3aXRoIHRoZWlyIHtcZW0gdHlwZXNcL30g
YW5kIHtcZW0gcXVhbGlmaWVyc1wvfS4gIEEKLXF1YWxpZmllciBpcyBvbmUgb2Y6Ci1cYmVnaW57
aXRlbWl6ZX0KLSAgXGl0ZW0gJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQ6IHRoZSBmaWVsZCBp
cyBSZWFkCi1Pbmx5LiBGdXJ0aGVybW9yZSwgaXRzIHZhbHVlIGlzIGF1dG9tYXRpY2FsbHkgY29t
cHV0ZWQgYXQgcnVudGltZS4KLUZvciBleGFtcGxlOiBjdXJyZW50IENQVSBsb2FkIGFuZCBkaXNr
IElPIHRocm91Z2hwdXQuCi0gIFxpdGVtICRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kOiB0aGUg
ZmllbGQgbXVzdCBiZSBtYW51YWxseSBzZXQKLXdoZW4gYSBuZXcgb2JqZWN0IGlzIGNyZWF0ZWQs
IGJ1dCBpcyB0aGVuIFJlYWQgT25seSBmb3IKLXRoZSBkdXJhdGlvbiBvZiB0aGUgb2JqZWN0J3Mg
bGlmZS4KLUZvciBleGFtcGxlLCB0aGUgbWF4aW11bSBtZW1vcnkgYWRkcmVzc2FibGUgYnkgYSBn
dWVzdCBpcyBzZXQgCi1iZWZvcmUgdGhlIGd1ZXN0IGJvb3RzLgotICBcaXRlbSAkXG1hdGhpdHtS
V30kOiB0aGUgZmllbGQgaXMgUmVhZC9Xcml0ZS4gRm9yIGV4YW1wbGUsIHRoZSBuYW1lCi1vZiBh
IFZNLgotXGVuZHtpdGVtaXplfQotCi1BIGZ1bGwgbGlzdCBvZiB0eXBlcyBpcyBnaXZlbiBpbiBD
aGFwdGVyflxyZWZ7YXBpLXJlZmVyZW5jZX0uIEhvd2V2ZXIsCi10aGVyZSBhcmUgdGhyZWUgdHlw
ZXMgdGhhdCByZXF1aXJlIGV4cGxpY2l0IG1lbnRpb246Ci1cYmVnaW57aXRlbWl6ZX0KLSAgXGl0
ZW0gJHR+XG1hdGhpdHtSZWZ9JDogc2lnbmlmaWVzIGEgcmVmZXJlbmNlIHRvIGFuIG9iamVjdAot
b2YgdHlwZSAkdCQuCi0gIFxpdGVtICR0flxtYXRoaXR7U2V0fSQ6IHNpZ25pZmllcyBhIHNldCBj
b250YWluaW5nCi12YWx1ZXMgb2YgdHlwZSAkdCQuCi0gIFxpdGVtICQodF8xLCB0XzIpflxtYXRo
aXR7TWFwfSQ6IHNpZ25pZmllcyBhIG1hcHBpbmcgZnJvbSB2YWx1ZXMgb2YKLXR5cGUgJHRfMSQg
dG8gdmFsdWVzIG9mIHR5cGUgJHRfMiQuCi1cZW5ke2l0ZW1pemV9Ci0KLU5vdGUgdGhhdCB0aGVy
ZSBhcmUgYSBudW1iZXIgb2YgY2FzZXMgd2hlcmUge1xlbSBSZWZ9cyBhcmUge1xlbSBkb3VibHkK
LWxpbmtlZFwvfS0tLWUuZy5cIGEgVk0gaGFzIGEgZmllbGQgY2FsbGVkIHtcdHQgVklGc30gb2Yg
dHlwZQotJChcbWF0aGl0e1ZJRn1+XG1hdGhpdHtSZWZ9KX5cbWF0aGl0e1NldH0kOyB0aGlzIGZp
ZWxkIGxpc3RzCi10aGUgbmV0d29yayBpbnRlcmZhY2VzIGF0dGFjaGVkIHRvIGEgcGFydGljdWxh
ciBWTS4gU2ltaWxhcmx5LCB0aGUgVklGCi1jbGFzcyBoYXMgYSBmaWVsZCBjYWxsZWQge1x0dCBW
TX0gb2YgdHlwZSAkKFxtYXRoaXR7Vk19fntcbWF0aGl0Ci1SZWZ9KSQgd2hpY2ggcmVmZXJlbmNl
cyB0aGUgVk0gdG8gd2hpY2ggdGhlIGludGVyZmFjZSBpcyBjb25uZWN0ZWQuCi1UaGVzZSB0d28g
ZmllbGRzIGFyZSB7XGVtIGJvdW5kIHRvZ2V0aGVyXC99LCBpbiB0aGUgc2Vuc2UgdGhhdAotY3Jl
YXRpbmcgYSBuZXcgVklGIGNhdXNlcyB0aGUge1x0dCBWSUZzfSBmaWVsZCBvZiB0aGUgY29ycmVz
cG9uZGluZwotVk0gb2JqZWN0IHRvIGJlIHVwZGF0ZWQgYXV0b21hdGljYWxseS4KLQotVGhlIEFQ
SSByZWZlcmVuY2UgZXhwbGljaXRseSBsaXN0cyB0aGUgZmllbGRzIHRoYXQgYXJlCi1ib3VuZCB0
b2dldGhlciBpbiB0aGlzIHdheS4gSXQgYWxzbyBjb250YWlucyBhIGRpYWdyYW0gdGhhdCBzaG93
cwotcmVsYXRpb25zaGlwcyBiZXR3ZWVuIGNsYXNzZXMuIEluIHRoaXMgZGlhZ3JhbSBhbiBlZGdl
IHNpZ25pZmllcyB0aGUKLWV4aXN0ZW5jZSBvZiBhIHBhaXIgb2YgZmllbGRzIHRoYXQgYXJlIGJv
dW5kIHRvZ2V0aGVyLCB1c2luZyBzdGFuZGFyZAotY3Jvd3MtZm9vdCBub3RhdGlvbiB0byBzaWdu
aWZ5IHRoZSB0eXBlIG9mIHJlbGF0aW9uc2hpcCAoZS5nLlwKLW9uZS1tYW55LCBtYW55LW1hbnkp
LgotCi1cc2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBmaWVsZHN9Ci0KLUVhY2ggZmllbGQs
IHtcdHQgZn0sIGhhcyBhbiBSUEMgYWNjZXNzb3IgYXNzb2NpYXRlZCB3aXRoIGl0Ci10aGF0IHJl
dHVybnMge1x0dCBmfSdzIHZhbHVlOgotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBgYHtcdHQgZ2V0
XF9mKFJlZiB4KX0nJzogdGFrZXMgYQote1x0dCBSZWZ9IHRoYXQgcmVmZXJzIHRvIGFuIG9iamVj
dCBhbmQgcmV0dXJucyB0aGUgdmFsdWUgb2Yge1x0dCBmfS4KLVxlbmR7aXRlbWl6ZX0KLQotRWFj
aCBmaWVsZCwge1x0dCBmfSwgd2l0aCBhdHRyaWJ1dGUKLXtcZW0gUld9IGFuZCB3aG9zZSBvdXRl
cm1vc3QgdHlwZSBpcyB7XGVtIFNldFwvfSBoYXMgdGhlIGZvbGxvd2luZwotYWRkaXRpb25hbCBS
UENzIGFzc29jaWF0ZWQgd2l0aCBpdDoKLVxiZWdpbntpdGVtaXplfQotXGl0ZW0gYW4gYGB7XHR0
IGFkZFxfdG9cX2YoUmVmIHgsIHYpfScnIFJQQyBhZGRzIGEgbmV3IGVsZW1lbnQgdiB0byB0aGUg
c2V0XGZvb3Rub3RlewotJQotU2luY2Ugc2V0cyBjYW5ub3QgY29udGFpbiBkdXBsaWNhdGUgdmFs
dWVzIHRoaXMgb3BlcmF0aW9uIGhhcyBubyBhY3Rpb24gaW4gdGhlIGNhc2UKLXRoYXQge1x0dCB2
fSB3YXMgYWxyZWFkeSBpbiB0aGUgc2V0LgotJQotfTsKLVxpdGVtIGEgYGB7XHR0IHJlbW92ZVxf
ZnJvbVxfZihSZWYgeCwgdil9JycgUlBDIHJlbW92ZXMgZWxlbWVudCB7XHR0IHZ9IGZyb20gdGhl
IHNldDsKLVxlbmR7aXRlbWl6ZX0KLQotRWFjaCBmaWVsZCwge1x0dCBmfSwgd2l0aCBhdHRyaWJ1
dGUKLXtcZW0gUld9IGFuZCB3aG9zZSBvdXRlcm1vc3QgdHlwZSBpcyB7XGVtIE1hcFwvfSBoYXMg
dGhlIGZvbGxvd2luZwotYWRkaXRpb25hbCBSUENzIGFzc29jaWF0ZWQgd2l0aCBpdDoKLVxiZWdp
bntpdGVtaXplfQotXGl0ZW0gYW4gYGB7XHR0IGFkZFxfdG9cX2YoUmVmIHgsIGssIHYpfScnIFJQ
QyBhZGRzIG5ldyBwYWlyIHtcdHQgKGssIHYpfQotdG8gdGhlIG1hcHBpbmcgc3RvcmVkIGluIHtc
dHQgZn0gaW4gb2JqZWN0IHtcdHQgeH0uIEFkZGluZyBhIG5ldyBwYWlyIGZvciBkdXBsaWNhdGUK
LWtleSwge1x0dCBrfSwgb3ZlcndyaXRlcyBhbnkgcHJldmlvdXMgbWFwcGluZyBmb3Ige1x0dCBr
fS4KLVxpdGVtIGEgYGB7XHR0IHJlbW92ZVxfZnJvbVxfZihSZWYgeCwgayl9JycgUlBDIHJlbW92
ZXMgdGhlIHBhaXIgd2l0aCBrZXkge1x0dCBrfQotZnJvbSB0aGUgbWFwcGluZyBzdG9yZWQgaW4g
e1x0dCBmfSBpbiBvYmplY3Qge1x0dCB4fS4KLVxlbmR7aXRlbWl6ZX0KLQotRWFjaCBmaWVsZCB3
aG9zZSBvdXRlcm1vc3QgdHlwZSBpcyBuZWl0aGVyIHtcZW0gU2V0XC99IG5vciB7XGVtIE1hcFwv
fSwgCi1idXQgd2hvc2UgYXR0cmlidXRlIGlzIHtcZW0gUld9IGhhcyBhbiBSUEMgYWNlc3NvciBh
c3NvY2lhdGVkIHdpdGggaXQKLXRoYXQgc2V0cyBpdHMgdmFsdWU6Ci1cYmVnaW57aXRlbWl6ZX0K
LVxpdGVtIEZvciB7XGVtIFJXXC99ICh7XGVtIFJcL31lYWQve1xlbQotV1wvfXJpdGUpLCBhIGBg
e1x0dCBzZXRcX2YoUmVmIHgsIHYpfScnIFJQQyBmdW5jdGlvbiBpcyBhbHNvIHByb3ZpZGVkLgot
VGhpcyBzZXRzIGZpZWxkIHtcdHQgZn0gb24gb2JqZWN0IHtcdHQgeH0gdG8gdmFsdWUge1x0dCB2
fS4KLVxlbmR7aXRlbWl6ZX0KLQotXHNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3Nl
c30KLQotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBFYWNoIGNsYXNzIGhhcyBhIHtcZW0gY29uc3Ry
dWN0b3JcL30gUlBDIG5hbWVkIGBge1x0dCBjcmVhdGV9JycgdGhhdAotdGFrZXMgYXMgcGFyYW1l
dGVycyBhbGwgZmllbGRzIG1hcmtlZCB7XGVtIFJXXC99IGFuZAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7aW5zfSQuIFRoZSByZXN1bHQgb2YgdGhpcyBSUEMgaXMgdGhhdCBhIG5ldyB7XGVtCi1wZXJz
aXN0ZW50XC99IG9iamVjdCBpcyBjcmVhdGVkIG9uIHRoZSBzZXJ2ZXItc2lkZSB3aXRoIHRoZSBz
cGVjaWZpZWQgZmllbGQKLXZhbHVlcy4KLQotXGl0ZW0gRWFjaCBjbGFzcyBoYXMgYSB7XHR0IGdl
dFxfYnlcX3V1aWQodXVpZCl9IFJQQyB0aGF0IHJldHVybnMgdGhlIG9iamVjdAotb2YgdGhhdCBj
bGFzcyB0aGF0IGhhcyB0aGUgc3BlY2lmaWVkIHtcdHQgdXVpZH0uCi0KLVxpdGVtIEVhY2ggY2xh
c3MgdGhhdCBoYXMgYSB7XHR0IG5hbWVcX2xhYmVsfSBmaWVsZCBoYXMgYQotYGB7XHR0IGdldFxf
YnlcX25hbWVcX2xhYmVsKG5hbWUpfScnIFJQQyB0aGF0IHJldHVybnMgYSBzZXQgb2Ygb2JqZWN0
cyBvZiB0aGF0Ci1jbGFzcyB0aGF0IGhhdmUgdGhlIHNwZWNpZmllZCB7XHR0IGxhYmVsfS4KLQot
XGl0ZW0gRWFjaCBjbGFzcyBoYXMgYSBgYHtcdHQgZGVzdHJveShSZWYgeCl9JycgUlBDIHRoYXQg
ZXhwbGljaXRseSBkZWxldGVzCi10aGUgcGVyc2lzdGVudCBvYmplY3Qgc3BlY2lmaWVkIGJ5IHtc
dHQgeH0gZnJvbSB0aGUgc3lzdGVtLiAgVGhpcyBpcyBhCi1ub24tY2FzY2FkaW5nIGRlbGV0ZSAt
LSBpZiB0aGUgb2JqZWN0IGJlaW5nIHJlbW92ZWQgaXMgcmVmZXJlbmNlZCBieSBhbm90aGVyCi1v
YmplY3QgdGhlbiB0aGUge1x0dCBkZXN0cm95fSBjYWxsIHdpbGwgZmFpbC4KLQotXGVuZHtpdGVt
aXplfQotCi1cc3Vic2VjdGlvbntBZGRpdGlvbmFsIFJQQ3N9Ci0KLUFzIHdlbGwgYXMgdGhlIFJQ
Q3MgZW51bWVyYXRlZCBhYm92ZSwgc29tZSBjbGFzc2VzIGhhdmUgYWRkaXRpb25hbCBSUENzCi1h
c3NvY2lhdGVkIHdpdGggdGhlbS4gRm9yIGV4YW1wbGUsIHRoZSB7XHR0IFZNfSBjbGFzcyBoYXMg
UlBDcyBmb3IgY2xvbmluZywKLXN1c3BlbmRpbmcsIHN0YXJ0aW5nIGV0Yy4gU3VjaCBhZGRpdGlv
bmFsIFJQQ3MgYXJlIGRlc2NyaWJlZCBleHBsaWNpdGx5Ci1pbiB0aGUgQVBJIHJlZmVyZW5jZS4K
ZGlmZiAtLWdpdCBhL2RvY3MveGVuLWFwaS9yZXZpc2lvbi1oaXN0b3J5LnRleCBiL2RvY3MveGVu
LWFwaS9yZXZpc2lvbi1oaXN0b3J5LnRleApkZWxldGVkIGZpbGUgbW9kZSAxMDA2NDQKaW5kZXgg
M2ZhODMyOS4uMDAwMDAwMAotLS0gYS9kb2NzL3hlbi1hcGkvcmV2aXNpb24taGlzdG9yeS50ZXgK
KysrIC9kZXYvbnVsbApAQCAtMSw2MSArMCwwIEBACi17IFxiZiBSZXZpc2lvbiBIaXN0b3J5fQot
Ci0lIFBsZWFzZSBkbyBub3QgdXNlIG1pbmlwYWdlcyBpbiBhIHRhYnVsYXIgZW52aXJvbm1lbnQ7
IHRoaXMgcmVzdWx0cwotJSBpbiBiYWQgdmVydGljYWwgYWxpZ25tZW50LiAKLQotXGJlZ2lue2Zs
dXNobGVmdH0KLVxiZWdpbntjZW50ZXJ9Ci0gXGJlZ2lue3RhYnVsYXJ9e3xsfGx8bHw+e1xyYWdn
ZWRyaWdodH1wezdjbX18fQotICBcaGxpbmUKLSAgMS4wLjAgJiAyN3RoIEFwcmlsIDA3ICYgWGVu
c291cmNlIGV0IGFsLiAmCi0gICAgIEluaXRpYWwgUmV2aXNpb25cdGFidWxhcm5ld2xpbmUKLSAg
XGhsaW5lCi0gIDEuMC4xICYgMTB0aCBEZWMuIDA3ICYgUy4gQmVyZ2VyICYKLSAgICAgQWRkZWQg
WFNQb2xpY3kucmVzZXRcX3hzcG9saWN5LCBWVFBNLmdldFxfb3RoZXJcX2NvbmZpZywKLSAgICAg
VlRQTS5zZXRcX290aGVyY29uZmlnLiBBQ01Qb2xpY3kuZ2V0XF9lbmZvcmNlZFxfYmluYXJ5IG1l
dGhvZHMuXHRhYnVsYXJuZXdsaW5lCi0gIFxobGluZQotICAxLjAuMiAmIDI1dGggSmFuLiAwOCAm
IEouIEZlaGxpZyAmCi0gICAgIEFkZGVkIENyYXNoZWQgVk0gcG93ZXIgc3RhdGUuXHRhYnVsYXJu
ZXdsaW5lCi0gIFxobGluZQotICAxLjAuMyAmIDExdGggRmViLiAwOCAmIFMuIEJlcmdlciAmCi0g
ICAgIEFkZGVkIHRhYmxlIG9mIGNvbnRlbnRzIGFuZCBoeXBlcmxpbmsgY3Jvc3MgcmVmZXJlbmNl
Llx0YWJ1bGFybmV3bGluZQotICBcaGxpbmUKLSAgMS4wLjQgJiAyM3JkIE1hcmNoIDA4ICYgUy4g
QmVyZ2VyICYKLSAgICAgQWRkZWQgWFNQb2xpY3kuY2FuXF9ydW5cdGFidWxhcm5ld2xpbmUKLSAg
XGhsaW5lCi0gIDEuMC41ICYgMTd0aCBBcHIuIDA4ICYgUy4gQmVyZ2VyICYKLSAgICAgQWRkZWQg
dW5kb2N1bWVudGVkIGZpZWxkcyBhbmQgbWV0aG9kcyBmb3IgZGVmYXVsdFxfbmV0bWFzayBhbmQK
LSAgICAgZGVmYXVsdFxfZ2F0ZXdheSB0byB0aGUgTmV0d29yayBjbGFzcy4gUmVtb3ZlZCBhbiB1
bmltcGxlbWVudGVkCi0gICAgIG1ldGhvZCBmcm9tIHRoZSBYU1BvbGljeSBjbGFzcyBhbmQgcmVt
b3ZlZCB0aGUgJ29wdGlvbmFsJyBmcm9tCi0gICAgICdvbGRsYWJlbCcgcGFyYW1ldGVycy5cdGFi
dWxhcm5ld2xpbmUKLSAgXGhsaW5lCi0gIDEuMC42ICYgMjR0aCBKdWwuIDA4ICYgWS4gSXdhbWF0
c3UgJgotICAgICBBZGRlZCBkZWZpbml0aW9ucyBvZiBuZXcgY2xhc3NlcyBEUENJIGFuZCBQUENJ
LiBVcGRhdGVkIHRoZSB0YWJsZQotICAgICBhbmQgdGhlIGRpYWdyYW0gcmVwcmVzZW50aW5nIHJl
bGF0aW9uc2hpcHMgYmV0d2VlbiBjbGFzc2VzLgotICAgICBBZGRlZCBob3N0LlBQQ0lzIGFuZCBW
TS5EUENJcyBmaWVsZHMuXHRhYnVsYXJuZXdsaW5lCi0gIFxobGluZQotICAxLjAuNyAmIDIwdGgg
T2N0LiAwOCAmIE0uIEthbm5vICYKLSAgICAgQWRkZWQgZGVmaW5pdGlvbnMgb2YgbmV3IGNsYXNz
ZXMgRFNDU0kgYW5kIFBTQ1NJLiBVcGRhdGVkIHRoZSB0YWJsZQotICAgICBhbmQgdGhlIGRpYWdy
YW0gcmVwcmVzZW50aW5nIHJlbGF0aW9uc2hpcHMgYmV0d2VlbiBjbGFzc2VzLgotICAgICBBZGRl
ZCBob3N0LlBTQ1NJcyBhbmQgVk0uRFNDU0lzIGZpZWxkcy5cdGFidWxhcm5ld2xpbmUKLSAgXGhs
aW5lCi0gIDEuMC44ICYgMTd0aCBKdW4uIDA5ICYgQS4gRmxvcmF0aCAmCi0gICAgIFVwZGF0ZWQg
aW50ZXJhY3RpdmUgc2Vzc2lvbiBleGFtcGxlLgotICAgICBBZGRlZCBkZXNjcmlwdGlvbiBmb3Ig
XHRleHR0dHtQVi9rZXJuZWx9IGFuZCBcdGV4dHR0e1BWL3JhbWRpc2t9Ci0gICAgIHBhcmFtZXRl
cnMgdXNpbmcgVVJJcy5cdGFidWxhcm5ld2xpbmUKLSAgXGhsaW5lCi0gIDEuMC45ICYgMjB0aCBO
b3YuIDA5ICYgTS4gS2Fubm8gJgotICAgICBBZGRlZCBkZWZpbml0aW9ucyBvZiBuZXcgY2xhc3Nl
cyBEU0NTSVxfSEJBIGFuZCBQU0NTSVxfSEJBLgotICAgICBVcGRhdGVkIHRoZSB0YWJsZSBhbmQg
dGhlIGRpYWdyYW0gcmVwcmVzZW50aW5nIHJlbGF0aW9uc2hpcHMKLSAgICAgYmV0d2VlbiBjbGFz
c2VzLiBBZGRlZCBob3N0LlBTQ1NJXF9IQkFzIGFuZCBWTS5EU0NTSVxfSEJBcwotICAgICBmaWVs
ZHMuXHRhYnVsYXJuZXdsaW5lCi0gIFxobGluZQotICAxLjAuMTAgJiAxMHRoIEphbi4gMTAgJiBM
LiBEdWJlICYKLSAgICAgQWRkZWQgZGVmaW5pdGlvbnMgb2YgbmV3IGNsYXNzZXMgY3B1XF9wb29s
LiBVcGRhdGVkIHRoZSB0YWJsZQotICAgICBhbmQgdGhlIGRpYWdyYW0gcmVwcmVzZW50aW5nIHJl
bGF0aW9uc2hpcHMgYmV0d2VlbiBjbGFzc2VzLgotICAgICBBZGRlZCBmaWVsZHMgaG9zdC5yZXNp
ZGVudFxfY3B1XF9wb29scywgVk0uY3B1XF9wb29sIGFuZAotICAgICBob3N0XF9jcHUuY3B1XF9w
b29sLlx0YWJ1bGFybmV3bGluZQotICBcaGxpbmUKLSBcZW5ke3RhYnVsYXJ9Ci1cZW5ke2NlbnRl
cn0KLVxlbmR7Zmx1c2hsZWZ0fQpkaWZmIC0tZ2l0IGEvZG9jcy94ZW4tYXBpL3RvZG8udGV4IGIv
ZG9jcy94ZW4tYXBpL3RvZG8udGV4CmRlbGV0ZWQgZmlsZSBtb2RlIDEwMDY0NAppbmRleCA2MTVh
NWU1Li4wMDAwMDAwCi0tLSBhL2RvY3MveGVuLWFwaS90b2RvLnRleAorKysgL2Rldi9udWxsCkBA
IC0xLDEzNSArMCwwIEBACi0lCi0lIENvcHlyaWdodCAoYykgMjAwNiBYZW5Tb3VyY2UsIEluYy4K
LSUKLSUgUGVybWlzc2lvbiBpcyBncmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1v
ZGlmeSB0aGlzIGRvY3VtZW50IHVuZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9j
dW1lbnRhdGlvbiBMaWNlbnNlLCBWZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBw
dWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlh
bnQKLSUgU2VjdGlvbnMsIG5vIEZyb250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRl
eHRzLiAgQSBjb3B5IG9mIHRoZQotJSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9u
IGVudGl0bGVkCi0lICJHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxl
IGZkbC50ZXguCi0lCi0lIEF1dGhvcnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZl
IFNjb3R0LCBKb24gSGFycm9wLgotJQotCi1cc2VjdGlvbntUby1Eb30KLQotTG90cyBhbmQgbG90
cyEgSW5jbHVkaW5nOgotCi1cc3Vic2VjdGlvbntDbGFyaXR5fQotCi1cYmVnaW57aXRlbWl6ZX0K
LQotXGl0ZW0gUm9sbCBjb25zdHJ1Y3RvcnMgYW5kIGdldFxfYnlcX3V1aWQgZXRjIChzZWN0aW9u
IDEuMikgaW50byBzZWN0aW9uIDIgc28KLXRoYXQgaXQgaXMgY2xlYXJlciB0aGF0IGVhY2ggY2xh
c3MgaGFzIHRoZXNlLgotCi1caXRlbSBFbXBoYXNpc2UgdGhhdCBlbnVtcyBhcmUgc3RyaW5ncyBv
biB0aGUgd2lyZSwgYW5kIHNvIGFyZSBub3QgcmVzdHJpY3RlZAotdG8gYSBjZXJ0YWluIG51bWJl
ciBvZiBiaXRzLgotCi1caXRlbSBDbGFyaWZ5IHJldHVybiB2YWx1ZXMsIGluIHBhcnRpY3VsYXIg
dGhhdCB2b2lkIG1lYW5zIHJldHVybiBhIHN0YXR1cwotY29kZSwgcG90ZW50aWFsIGVycm9yIGRl
c2NyaXB0aW9uLCBidXQgb3RoZXJ3aXNlIG5vIHZhbHVlLgotCi1caXRlbSBUYWxrIGFib3V0IFVV
SUQgZ2VuZXJhdGlvbi4KLQotXGl0ZW0gQ2xhcmlmeSBzZXNzaW9uIGJlaGF2aW91ciB3cnQgdGlt
ZW91dHMgYW5kIGRpc2Nvbm5lY3RzLgotCi1caXRlbSBDbGFyaWZ5IGJlaGF2aW91ciBvZiBwcm9n
cmVzcyBmaWVsZCBvbiBhc3luY2hyb25vdXMgcmVxdWVzdCBwb2xsaW5nIHdoZW4KLXRoYXQgcmVx
dWVzdCBmYWlscy4KLQotXGl0ZW0gQ2xhcmlmeSB3aGljaCBjYWxscyBoYXZlIGFzeW5jaHJvbm91
cyBjb3VudGVycGFydHMgYnkgbWFya2luZyB0aGVtIGFzIHN1Y2ggaW4gdGhlIHJlZmVyZW5jZS4g
KEluZGl2aWR1YWwgZ2V0dGVycyBhbmQgc2V0dGVycyBhcmUgdG9vIHNtYWxsIGFuZCBxdWljayB0
byBqdXN0aWZ5IGhhdmluZyBhc3luYyB2ZXJzaW9ucykKLQotXGVuZHtpdGVtaXplfQotCi1cc3Vi
c2VjdGlvbntDb250ZW50fQotCi1cc3Vic3Vic2VjdGlvbntNb2RlbH0KLQotXGJlZ2lue2l0ZW1p
emV9Ci0KLVxpdGVtIEltcHJvdmUgdGhlIHNldCBvZiBhdmFpbGFibGUgcG93ZXJcX3N0YXRlcyBh
bmQgY29ycmVzcG9uZGluZyBsaWZlY3ljbGUKLXNlbWFudGljcy4gIFJlbmFtZSBwb3dlclxfc3Rh
dGUsIG1heWJlLgotCi1caXRlbSBTcGVjaWZ5IHRoZSBDUFUgc2NoZWR1bGVyIGNvbmZpZ3VyYXRp
b24gcHJvcGVybHksIGluYyBDUFUgYWZmaW5pdHksCi13ZWlnaHRzLCBldGMuCi0KLVxpdGVtIEFk
ZCBWbS5hcmNoaXRlY3R1cmUgYW5kIEhvc3QuY29tcGF0aWJsZVxfYXJjaGl0ZWN0dXJlIGZpZWxk
cy4KLQotXGl0ZW0gQWRkIG1pZ3JhdGlvbiBjYWxscywgaW5jbHVkaW5nIHRoZSBhYmlsaXR5IHRv
IHRlc3Qgd2hldGhlciBhIG1pZ3JhdGlvbgotd2lsbCBzdWNjZWVkLCBhbmQgYXV0aGVudGljYXRp
b24gdG9rZW4gZXhjaGFuZ2UuCi0KLVxpdGVtIEltcHJvdmUgYXN5bmNocm9ub3VzIHRhc2sgaGFu
ZGxpbmcsIHdpdGggYSByZWdpc3RyYXRpb24gY2FsbCwgYQotYGBibG9ja2luZyBwb2xsJycgY2Fs
bCwgYW5kIGFuIGV4cGxpY2l0IG5vdGlmaWNhdGlvbiBkZXN0aW5hdGlvbi4gIFJlZ2lzdHJhdGlv
bgotZm9yIGBgcG93ZXJcX3N0YXRlJycgaXMgdXNlZnVsLgotCi1caXRlbSBTcGVjaWZ5IHRoYXQg
c2Vzc2lvbiBrZXlzIG91dGxpdmUgdGhlIEhUVFAgc2Vzc2lvbiwgYW5kIGFkZCBhIHRpbWVvdXQK
LWZvciB0aGVtIChjb25maWd1cmFibGUgaW4gdGhlIHRvb2xzKS4KLQotXGl0ZW0gQWRkIHBsYWNl
cyBmb3IgcGVvcGxlIHRvIHN0b3JlIGV4dHJhIGRhdGEgKGBgb3RoZXJDb25maWcnJyBwZXJoYXBz
KQotCi1caXRlbSBTcGVjaWZ5IGhvdyBoYXJkd2FyZSBVVUlEcyBhcmUgdXNlZCAvIGFjY2Vzc2Vk
LgotCi1caXRlbSBNYXJraW5nIFZESXMgYXMgZXhjbHVzaXZlIC8gc2hhcmVhYmxlIChsb2NraW5n
PykKLQotXGl0ZW0gQ29uc2lkZXIgaG93IHRvIHJlcHJlc2VudCBDRFJPTXMgKGFzIFZESXM/KQot
Ci1caXRlbSBEZWZpbmUgbGlzdHMgb2YgZXhjZXB0aW9ucyB3aGljaCBtYXkgYmUgdGhyb3duIGJ5
IGVhY2ggUlBDLCBpbmNsdWRpbmcKLWVycm9yIGNvZGVzIGFuZCBwYXJhbWV0ZXJzLgotCi1caXRl
bSBIb3N0IGNoYXJhY3RlcmlzdGljczogbWluaW11bSBhbW91bnQgb2YgbWVtb3J5LCBUUE0sIG5l
dHdvcmsgYmFuZHdpZHRoLAotYW1vdW50IG9mIGhvc3QgbWVtb3J5LCBhbW91bnQgY29uc3VtZWQg
YnkgVk1zLCBtYXggYW1vdW50IGF2YWlsYWJsZSBmb3IgbmV3Ci1WTXM/Ci0KLVxpdGVtIENvb2tl
ZCByZXNvdXJjZSBtb25pdG9yaW5nIGludGVyZmFjZS4KLQotXGl0ZW0gTmV0d29yayBuZWVkcyBh
ZGRpdGlvbmFsIGF0dHJpYnV0ZXMgdGhhdCBwcm92aWRlIG1lZGlhIGNoYXJhY3RlcmlzdGljcwot
b2YgdGhlIE5JQzoKLQotXGJlZ2lue2l0ZW1pemV9Ci0KLVxpdGVtIFJPIGJhbmR3aWR0aCBpbnRl
Z2VyIEJhbmR3aWR0aCBpbiBtYnBzCi1caXRlbSBSTyBsYXRlbmN5IGludGVnZXIgdGltZSBpbiBt
cyBmb3IgYW4gaWNtcCByb3VuZHRyaXAgdG8gYSBob3N0IG9uIHRoZQotc2FtZSBzdWJuZXQuCi0K
LVxlbmR7aXRlbWl6ZX0KLQotXGl0ZW0gQUNNCi1cYmVnaW57aXRlbWl6ZX0KLQotXGl0ZW0gQSBY
ZW4gc3lzdGVtIGNhbiBiZSBydW5uaW5nIGFuIGFjY2VzcyBjb250cm9sIHBvbGljeSB3aGVyZSBl
YWNoCi1WTSdzIHJ1bi10aW1lIGFjY2VzcyB0byByZXNvdXJjZXMgaXMgcmVzdHJpY3RlZCBieSB0
aGUgbGFiZWwgaXQgaGFzIGJlZW4gZ2l2ZW4KLWNvbXBhcmVkIHRvIHRob3NlIG9mIHRoZSByZXNv
dXJjZXMuIEN1cnJlbnRseSBhIFZNJ3MgY29uZmlndXJhdGlvbiBmaWxlIG1heQotY29udGFpbiBh
IGxpbmUgbGlrZSBhY2Nlc3NcX2NvbnRyb2xbcG9saWN5PSckPCRuYW1lIG9mIHRoZSBzeXN0ZW0n
cwotcG9saWN5JD4kJyxsYWJlbD0nJDwkbGFiZWwgZ2l2ZW4gdG8gVk0kPiQnXS4gIEkgdGhpbmsg
dGhlIGlkZW50aWZpZXJzICdwb2xpY3knCi1hbmQgJ2xhYmVsJyBzaG91bGQgYWxzbyBiZSBwYXJ0
IG9mIHRoZSBWTSBjbGFzcyBlaXRoZXIgZGlyZWN0bHkgaW4gdGhlIGZvcm0KLSdhY2Nlc3NcX2Nv
bnRyb2wvcG9saWN5JyBvciBpbmRpcmVjdGx5IGluIGFuIGFjY2Vzc1xfY29udHJvbCBjbGFzcy4K
LQotXGVuZHtpdGVtaXplfQotCi1caXRlbSBNaWtlIERheSdzIFZtLnByb2ZpbGUgZmllbGQ/Ci0K
LVxpdGVtIENsb25lIGN1c3RvbWlzYXRpb24/Ci0KLVxpdGVtIE5JQyB0ZWFtaW5nPyAgVGhlIE5J
QyBmaWVsZCBvZiB0aGUgTmV0d29yayBjbGFzcyBzaG91bGQgYmUgYSBsaXN0IChTZXQpCi1zbyB0
aGF0IHdlIGNhbiBzaWduaWZ5IE5JQyB0ZWFtaW5nLiAoQ29tYmluaW5nIHBoeXNpY2FsIE5JQ3Mg
aW4gYSBzaW5nbGUgaG9zdAotaW50ZXJmYWNlIHRvIGFjaGlldmUgZ3JlYXRlciBiYW5kd2lkdGgp
LgotCi1cZW5ke2l0ZW1pemV9Ci0KLVxzdWJzdWJzZWN0aW9ue1RyYW5zcG9ydH0KLQotXGJlZ2lu
e2l0ZW1pemV9Ci0KLVxpdGVtIEFsbG93IG5vbi1IVFRQIHRyYW5zcG9ydHMuICBFeHBsaWNpdGx5
IGFsbG93IHN0ZGlvIHRyYW5zcG9ydCwgZm9yIFNTSC4KLQotXGVuZHtpdGVtaXplfQotCi1cc3Vi
c3Vic2VjdGlvbntBdXRoZW50aWNhdGlvbn0KLQotXGJlZ2lue2l0ZW1pemV9Ci0KLVxpdGVtIERl
bGVnYXRpb24gdG8gdGhlIHRyYW5zcG9ydCBsYXllci4KLQotXGl0ZW0gRXh0ZW5kIFBBTSBleGNo
YW5nZSBhY3Jvc3MgdGhlIHdpcmUuCi0KLVxpdGVtIEZpbmUtZ3JhaW5lZCBhY2Nlc3MgY29udHJv
bC4KLQotXGVuZHtpdGVtaXplfQpkaWZmIC0tZ2l0IGEvZG9jcy94ZW4tYXBpL3ZtLWxpZmVjeWNs
ZS50ZXggYi9kb2NzL3hlbi1hcGkvdm0tbGlmZWN5Y2xlLnRleApkZWxldGVkIGZpbGUgbW9kZSAx
MDA2NDQKaW5kZXggYzU4NGI2Ny4uMDAwMDAwMAotLS0gYS9kb2NzL3hlbi1hcGkvdm0tbGlmZWN5
Y2xlLnRleAorKysgL2Rldi9udWxsCkBAIC0xLDQzICswLDAgQEAKLSUKLSUgQ29weXJpZ2h0IChj
KSAyMDA2LTIwMDcgWGVuU291cmNlLCBJbmMuCi0lCi0lIFBlcm1pc3Npb24gaXMgZ3JhbnRlZCB0
byBjb3B5LCBkaXN0cmlidXRlIGFuZC9vciBtb2RpZnkgdGhpcyBkb2N1bWVudCB1bmRlcgotJSB0
aGUgdGVybXMgb2YgdGhlIEdOVSBGcmVlIERvY3VtZW50YXRpb24gTGljZW5zZSwgVmVyc2lvbiAx
LjIgb3IgYW55IGxhdGVyCi0lIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJl
IEZvdW5kYXRpb247IHdpdGggbm8gSW52YXJpYW50Ci0lIFNlY3Rpb25zLCBubyBGcm9udC1Db3Zl
ciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4gIEEgY29weSBvZiB0aGUKLSUgbGljZW5z
ZSBpcyBpbmNsdWRlZCBpbiB0aGUgc2VjdGlvbiBlbnRpdGxlZAotJSAiR05VIEZyZWUgRG9jdW1l
bnRhdGlvbiBMaWNlbnNlIiBvciB0aGUgZmlsZSBmZGwudGV4LgotJQotJSBBdXRob3JzOiBFd2Fu
IE1lbGxvciwgUmljaGFyZCBTaGFycCwgRGF2ZSBTY290dCwgSm9uIEhhcnJvcC4KLSUKLQotXHNl
Y3Rpb257Vk0gTGlmZWN5Y2xlfQotCi1cYmVnaW57ZmlndXJlfQotXGNlbnRlcmluZwotXHJlc2l6
ZWJveHswLjlcdGV4dHdpZHRofXshfXtcaW5jbHVkZWdyYXBoaWNze3ZtX2xpZmVjeWNsZX19Ci1c
Y2FwdGlvbntWTSBMaWZlY3ljbGV9Ci1cbGFiZWx7ZmlnLXZtLWxpZmVjeWNsZX0KLVxlbmR7Zmln
dXJlfQotCi1GaWd1cmV+XHJlZntmaWctdm0tbGlmZWN5Y2xlfSBzaG93cyB0aGUgc3RhdGVzIHRo
YXQgYSBWTSBjYW4gYmUgaW4KLWFuZCB0aGUgQVBJIGNhbGxzIHRoYXQgY2FuIGJlIHVzZWQgdG8g
bW92ZSB0aGUgVk0gYmV0d2VlbiB0aGVzZSBzdGF0ZXMuICBUaGUgY3Jhc2hlZAotc3RhdGUgaW5k
aWNhdGVzIHRoYXQgdGhlIGd1ZXN0IE9TIHJ1bm5pbmcgd2l0aGluIHRoZSBWTSBoYXMgY3Jhc2hl
ZC4gIFRoZXJlIGlzIG5vCi1BUEkgdG8gZXhwbGljaXRseSBtb3ZlIHRvIHRoZSBjcmFzaGVkIHN0
YXRlLCBob3dldmVyIGEgaGFyZFNodXRkb3duIHdpbGwgbW92ZSB0aGUKLVZNIHRvIHRoZSBwb3dl
cmVkIGRvd24gc3RhdGUuCi0KLVxzZWN0aW9ue1ZNIGJvb3QgcGFyYW1ldGVyc30KLQotVGhlIFZN
IGNsYXNzIGNvbnRhaW5zIGEgbnVtYmVyIG9mIGZpZWxkcyB0aGF0IGNvbnRyb2wgdGhlIHdheSBp
biB3aGljaCB0aGUgVk0gaXMgYm9vdGVkLgotV2l0aCByZWZlcmVuY2UgdG8gdGhlIGZpZWxkcyBk
ZWZpbmVkIGluIHRoZSBWTSBjbGFzcyAoc2VlIGxhdGVyIGluIHRoaXMgZG9jdW1lbnQpLAotdGhp
cyBzZWN0aW9uIG91dGxpbmVzIHRoZSBib290IG9wdGlvbnMgYXZhaWxhYmxlIGFuZCB0aGUgbWVj
aGFuaXNtcyBwcm92aWRlZCBmb3IgY29udHJvbGxpbmcgdGhlbS4KLQotVk0gYm9vdGluZyBpcyBj
b250cm9sbGVkIGJ5IHNldHRpbmcgb25lIG9mIHRoZSB0d28gbXV0dWFsbHkgZXhjbHVzaXZlIGdy
b3VwczogYGBQVicnLCBhbmQgYGBIVk0nJy4gIElmIEhWTS5ib290XF9wb2xpY3kgaXMgdGhlIGVt
cHR5IHN0cmluZywgdGhlbiBwYXJhdmlydHVhbCBkb21haW4gYnVpbGRpbmcgYW5kIGJvb3Rpbmcg
d2lsbCBiZSB1c2VkOyBvdGhlcndpc2UgdGhlIFZNIHdpbGwgYmUgbG9hZGVkIGFzIGFuIEhWTSBk
b21haW4sIGFuZCBib290ZWQgdXNpbmcgYW4gZW11bGF0ZWQgQklPUy4KLQotV2hlbiBwYXJhdmly
dHVhbCBib290aW5nIGlzIGluIHVzZSwgdGhlIFBWL2Jvb3Rsb2FkZXIgZmllbGQgaW5kaWNhdGVz
IHRoZSBib290bG9hZGVyIHRvIHVzZS4gIEl0IG1heSBiZSBgYHB5Z3J1YicnLCBpbiB3aGljaCBj
YXNlIHRoZSBwbGF0Zm9ybSdzIGRlZmF1bHQgaW5zdGFsbGF0aW9uIG9mIHB5Z3J1YiB3aWxsIGJl
IHVzZWQsIG9yIGEgZnVsbCBwYXRoIHdpdGhpbiB0aGUgY29udHJvbCBkb21haW4gdG8gc29tZSBv
dGhlciBib290bG9hZGVyLiAgVGhlIG90aGVyIGZpZWxkcywgUFYva2VybmVsLCBQVi9yYW1kaXNr
LCBQVi9hcmdzIGFuZCBQVi9ib290bG9hZGVyXF9hcmdzIHdpbGwgYmUgcGFzc2VkIHRvIHRoZSBi
b290bG9hZGVyIHVubW9kaWZpZWQsIGFuZCBpbnRlcnByZXRhdGlvbiBvZiB0aG9zZSBmaWVsZHMg
aXMgdGhlbiBzcGVjaWZpYyB0byB0aGUgYm9vdGxvYWRlciBpdHNlbGYsIGluY2x1ZGluZyB0aGUg
cG9zc2liaWxpdHkgdGhhdCB0aGUgYm9vdGxvYWRlciB3aWxsIGlnbm9yZSBzb21lIG9yIGFsbCBv
ZiB0aG9zZSBnaXZlbiB2YWx1ZXMuIEZpbmFsbHkgdGhlIHBhdGhzIG9mIGFsbCBib290YWJsZSBk
aXNrcyBhcmUgYWRkZWQgdG8gdGhlIGJvb3Rsb2FkZXIgY29tbWFuZGxpbmUgKGEgZGlzayBpcyBi
b290YWJsZSBpZiBpdHMgVkJEIGhhcyB0aGUgYm9vdGFibGUgZmxhZyBzZXQpLiBUaGVyZSBtYXkg
YmUgemVybywgb25lIG9yIG1hbnkgYm9vdGFibGUgZGlza3M7IHRoZSBib290bG9hZGVyIGRlY2lk
ZXMgd2hpY2ggZGlzayAoaWYgYW55KSB0byBib290IGZyb20uCi0KLUlmIHRoZSBib290bG9hZGVy
IGlzIHB5Z3J1YiwgdGhlbiB0aGUgbWVudS5sc3QgaXMgcGFyc2VkIGlmIHByZXNlbnQgaW4gdGhl
IGd1ZXN0J3MgZmlsZXN5c3RlbSwgb3RoZXJ3aXNlIHRoZSBzcGVjaWZpZWQga2VybmVsIGFuZCBy
YW1kaXNrIGFyZSB1c2VkLCBvciBhbiBhdXRvZGV0ZWN0ZWQga2VybmVsIGlzIHVzZWQgaWYgbm90
aGluZyBpcyBzcGVjaWZpZWQgYW5kIGF1dG9kZXRlY3Rpb24gaXMgcG9zc2libGUuICBQVi9hcmdz
IGlzIGFwcGVuZGVkIHRvIHRoZSBrZXJuZWwgY29tbWFuZCBsaW5lLCBubyBtYXR0ZXIgd2hpY2gg
bWVjaGFuaXNtIGlzIHVzZWQgZm9yIGZpbmRpbmcgdGhlIGtlcm5lbC4KLQotSWYgUFYvYm9vdGxv
YWRlciBpcyBlbXB0eSBidXQgUFYva2VybmVsIGlzIHNwZWNpZmllZCwgdGhlbiB0aGUga2VybmVs
IGFuZCByYW1kaXNrIHZhbHVlcyB3aWxsIGJlIHRyZWF0ZWQgYXMgcGF0aHMgd2l0aGluIHRoZSBj
b250cm9sIGRvbWFpbi4gIElmIGJvdGggUFYvYm9vdGxvYWRlciBhbmQgUFYva2VybmVsIGFyZSBl
bXB0eSwgdGhlbiB0aGUgYmVoYXZpb3VyIGlzIGFzIGlmIFBWL2Jvb3Rsb2FkZXIgd2FzIHNwZWNp
ZmllZCBhcyBgYHB5Z3J1YicnLgotCi1XaGVuIHVzaW5nIEhWTSBib290aW5nLCBIVk0vYm9vdFxf
cG9saWN5IGFuZCBIVk0vYm9vdFxfcGFyYW1zIHNwZWNpZnkgdGhlIGJvb3QgaGFuZGxpbmcuICBP
bmx5IG9uZSBwb2xpY3kgaXMgY3VycmVudGx5IGRlZmluZWQ6IGBgQklPUyBvcmRlcicnLiAgSW4g
dGhpcyBjYXNlLCBIVk0vYm9vdFxfcGFyYW1zIHNob3VsZCBjb250YWluIG9uZSBrZXktdmFsdWUg
cGFpciBgYG9yZGVyJycgPSBgYE4nJyB3aGVyZSBOIGlzIHRoZSBzdHJpbmcgdGhhdCB3aWxsIGJl
IHBhc3NlZCB0byBRRU1VLgpcIE5vIG5ld2xpbmUgYXQgZW5kIG9mIGZpbGUKZGlmZiAtLWdpdCBh
L2RvY3MveGVuLWFwaS92bV9saWZlY3ljbGUuZG90IGIvZG9jcy94ZW4tYXBpL3ZtX2xpZmVjeWNs
ZS5kb3QKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDJjMDYyZjkuLjAwMDAwMDAKLS0t
IGEvZG9jcy94ZW4tYXBpL3ZtX2xpZmVjeWNsZS5kb3QKKysrIC9kZXYvbnVsbApAQCAtMSwxNyAr
MCwwIEBACi1kaWdyYXBoIGd7Ci0KLW5vZGUgW3NoYXBlPWJveF07ICJwb3dlcmVkIGRvd24iIHBh
dXNlZCBydW5uaW5nIHN1c3BlbmRlZCBjcmFzaGVkOwotCi0icG93ZXJlZCBkb3duIiAtPiBwYXVz
ZWQgW2xhYmVsPSJzdGFydChwYXVzZWQ9dHJ1ZSkiXTsKLSJwb3dlcmVkIGRvd24iIC0+IHJ1bm5p
bmcgW2xhYmVsPSJzdGFydChwYXVzZWQ9ZmFsc2UpIl07Ci1ydW5uaW5nIC0+IHN1c3BlbmRlZCBb
bGFiZWw9InN1c3BlbmQiXTsKLXN1c3BlbmRlZCAtPiBydW5uaW5nIFtsYWJlbD0icmVzdW1lKHBh
dXNlZD1mYWxzZSkiXTsKLXN1c3BlbmRlZCAtPiBwYXVzZWQgW2xhYmVsPSJyZXN1bWUocGF1c2Vk
PXRydWUpIl07Ci1wYXVzZWQgLT4gc3VzcGVuZGVkIFtsYWJlbD0ic3VzcGVuZCJdOwotcGF1c2Vk
IC0+IHJ1bm5pbmcgW2xhYmVsPSJyZXN1bWUiXTsKLXJ1bm5pbmcgLT4gInBvd2VyZWQgZG93biIg
W2xhYmVsPSJjbGVhblNodXRkb3duIC9cbmhhcmRTaHV0ZG93biJdOwotcnVubmluZyAtPiBwYXVz
ZWQgW2xhYmVsPSJwYXVzZSJdOwotcnVubmluZyAtPiBjcmFzaGVkIFtsYWJlbD0iZ3Vlc3QgT1Mg
Y3Jhc2giXQotY3Jhc2hlZCAtPiAicG93ZXJlZCBkb3duIiBbbGFiZWw9ImhhcmRTaHV0ZG93biJd
Ci0KLX0KXCBObyBuZXdsaW5lIGF0IGVuZCBvZiBmaWxlCmRpZmYgLS1naXQgYS9kb2NzL3hlbi1h
cGkvd2lyZS1wcm90b2NvbC50ZXggYi9kb2NzL3hlbi1hcGkvd2lyZS1wcm90b2NvbC50ZXgKZGVs
ZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IGRjYjFhMWMuLjAwMDAwMDAKLS0tIGEvZG9jcy94
ZW4tYXBpL3dpcmUtcHJvdG9jb2wudGV4CisrKyAvZGV2L251bGwKQEAgLTEsMzgzICswLDAgQEAK
LSUKLSUgQ29weXJpZ2h0IChjKSAyMDA2LTIwMDcgWGVuU291cmNlLCBJbmMuCi0lIENvcHlyaWdo
dCAoYykgMjAwOSBmbG9uYXRlbCBHbWJIICYgQ28uIEtHCi0lCi0lIFBlcm1pc3Npb24gaXMgZ3Jh
bnRlZCB0byBjb3B5LCBkaXN0cmlidXRlIGFuZC9vciBtb2RpZnkgdGhpcyBkb2N1bWVudCB1bmRl
cgotJSB0aGUgdGVybXMgb2YgdGhlIEdOVSBGcmVlIERvY3VtZW50YXRpb24gTGljZW5zZSwgVmVy
c2lvbiAxLjIgb3IgYW55IGxhdGVyCi0lIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNv
ZnR3YXJlIEZvdW5kYXRpb247IHdpdGggbm8gSW52YXJpYW50Ci0lIFNlY3Rpb25zLCBubyBGcm9u
dC1Db3ZlciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4gIEEgY29weSBvZiB0aGUKLSUg
bGljZW5zZSBpcyBpbmNsdWRlZCBpbiB0aGUgc2VjdGlvbiBlbnRpdGxlZAotJSAiR05VIEZyZWUg
RG9jdW1lbnRhdGlvbiBMaWNlbnNlIiBvciB0aGUgZmlsZSBmZGwudGV4LgotJQotJSBBdXRob3Jz
OiBFd2FuIE1lbGxvciwgUmljaGFyZCBTaGFycCwgRGF2ZSBTY290dCwgSm9uIEhhcnJvcC4KLSUg
Q29udHJpYnV0b3I6IEFuZHJlYXMgRmxvcmF0aAotJQotCi1cc2VjdGlvbntXaXJlIFByb3RvY29s
IGZvciBSZW1vdGUgQVBJIENhbGxzfQotCi1BUEkgY2FsbHMgYXJlIHNlbnQgb3ZlciBhIG5ldHdv
cmsgdG8gYSBYZW4tZW5hYmxlZCBob3N0IHVzaW5nCi10aGUgWE1MLVJQQyBwcm90b2NvbC4gSW4g
dGhpcyBTZWN0aW9uIHdlIGRlc2NyaWJlIGhvdyB0aGUKLWhpZ2hlci1sZXZlbCB0eXBlcyB1c2Vk
IGluIG91ciBBUEkgUmVmZXJlbmNlIGFyZSBtYXBwZWQgdG8KLXByaW1pdGl2ZSBYTUwtUlBDIHR5
cGVzLgotCi1JbiBvdXIgQVBJIFJlZmVyZW5jZSB3ZSBzcGVjaWZ5IHRoZSBzaWduYXR1cmVzIG9m
IEFQSSBmdW5jdGlvbnMgaW4gdGhlIGZvbGxvd2luZwotc3R5bGU6Ci1cYmVnaW57dmVyYmF0aW19
Ci0gICAgKHJlZl92bSBTZXQpICAgVk0uZ2V0X2FsbCgpCi1cZW5ke3ZlcmJhdGltfQotVGhpcyBz
cGVjaWZpZXMgdGhhdCB0aGUgZnVuY3Rpb24gd2l0aCBuYW1lIHtcdHQgVk0uZ2V0XF9hbGx9IHRh
a2VzCi1ubyBwYXJhbWV0ZXJzIGFuZCByZXR1cm5zIGEgU2V0IG9mIHtcdHQgcmVmXF92bX1zLgot
VGhlc2UgdHlwZXMgYXJlIG1hcHBlZCBvbnRvIFhNTC1SUEMgdHlwZXMgaW4gYSBzdHJhaWdodC1m
b3J3YXJkIG1hbm5lcjoKLVxiZWdpbntpdGVtaXplfQotICBcaXRlbSBGbG9hdHMsIEJvb2xzLCBE
YXRlVGltZXMgYW5kIFN0cmluZ3MgbWFwIGRpcmVjdGx5IHRvIHRoZSBYTUwtUlBDIHtcdHQKLSAg
ZG91YmxlfSwge1x0dCBib29sZWFufSwge1x0dCBkYXRlVGltZS5pc284NjAxfSwgYW5kIHtcdHQg
c3RyaW5nfSBlbGVtZW50cy4KLQotICBcaXRlbSBhbGwgYGB7XHR0IHJlZlxffScnIHR5cGVzIGFy
ZSBvcGFxdWUgcmVmZXJlbmNlcywgZW5jb2RlZCBhcyB0aGUKLSAgWE1MLVJQQydzIHtcdHQgU3Ry
aW5nfSB0eXBlLiBVc2VycyBvZiB0aGUgQVBJIHNob3VsZCBub3QgbWFrZSBhc3N1bXB0aW9ucwot
ICBhYm91dCB0aGUgY29uY3JldGUgZm9ybSBvZiB0aGVzZSBzdHJpbmdzIGFuZCBzaG91bGQgbm90
IGV4cGVjdCB0aGVtIHRvCi0gIHJlbWFpbiB2YWxpZCBhZnRlciB0aGUgY2xpZW50J3Mgc2Vzc2lv
biB3aXRoIHRoZSBzZXJ2ZXIgaGFzIHRlcm1pbmF0ZWQuCi0KLSAgXGl0ZW0gZmllbGRzIG5hbWVk
IGBge1x0dCB1dWlkfScnIG9mIHR5cGUgYGB7XHR0IFN0cmluZ30nJyBhcmUgbWFwcGVkIHRvCi0g
IHRoZSBYTUwtUlBDIHtcdHQgU3RyaW5nfSB0eXBlLiBUaGUgc3RyaW5nIGl0c2VsZiBpcyB0aGUg
T1NGCi0gIERDRSBVVUlEIHByZXNlbnRhdGlvbiBmb3JtYXQgKGFzIG91dHB1dCBieSB7XHR0IHV1
aWRnZW59LCBldGMpLgotCi0gIFxpdGVtIGludHMgYXJlIGFsbCBhc3N1bWVkIHRvIGJlIDY0LWJp
dCBpbiBvdXIgQVBJIGFuZCBhcmUgZW5jb2RlZCBhcyBhCi0gIHN0cmluZyBvZiBkZWNpbWFsIGRp
Z2l0cyAocmF0aGVyIHRoYW4gdXNpbmcgWE1MLVJQQydzIGJ1aWx0LWluIDMyLWJpdCB7XHR0Ci0g
IGk0fSB0eXBlKS4KLQotICBcaXRlbSB2YWx1ZXMgb2YgZW51bSB0eXBlcyBhcmUgZW5jb2RlZCBh
cyBzdHJpbmdzLiBGb3IgZXhhbXBsZSwgYSB2YWx1ZSBvZgotICB7XHR0IGRlc3Ryb3l9IG9mIHR5
cGUge1x0dCBvblxfbm9ybWFsXF9leGl0fSwgd291bGQgYmUgY29udmV5ZWQgYXM6Ci0gIFxiZWdp
bnt2ZXJiYXRpbX0KLSAgICA8dmFsdWU+PHN0cmluZz5kZXN0cm95PC9zdHJpbmc+PC92YWx1ZT4K
LSAgXGVuZHt2ZXJiYXRpbX0KLQotICBcaXRlbSBmb3IgYWxsIG91ciB0eXBlcywge1x0dCB0fSwg
b3VyIHR5cGUge1x0dCB0IFNldH0gc2ltcGx5IG1hcHMgdG8KLSAgWE1MLVJQQydzIHtcdHQgQXJy
YXl9IHR5cGUsIHNvIGZvciBleGFtcGxlIGEgdmFsdWUgb2YgdHlwZSB7XHR0IGNwdVxfZmVhdHVy
ZQotICBTZXR9IHdvdWxkIGJlIHRyYW5zbWl0dGVkIGxpa2UgdGhpczoKLQotICBcYmVnaW57dmVy
YmF0aW19Ci08YXJyYXk+Ci0gIDxkYXRhPgotICAgIDx2YWx1ZT48c3RyaW5nPkNYODwvc3RyaW5n
PjwvdmFsdWU+Ci0gICAgPHZhbHVlPjxzdHJpbmc+UFNFMzY8L3N0cmluZz48L3ZhbHVlPgotICAg
IDx2YWx1ZT48c3RyaW5nPkZQVTwvc3RyaW5nPjwvdmFsdWU+Ci0gIDwvZGF0YT4KLTwvYXJyYXk+
IAotICBcZW5ke3ZlcmJhdGltfQotCi0gIFxpdGVtIGZvciB0eXBlcyB7XHR0IGt9IGFuZCB7XHR0
IHZ9LCBvdXIgdHlwZSB7XHR0IChrLCB2KSBNYXB9IG1hcHMgb250byBhbgotICBYTUwtUlBDIHN0
cnVjdCwgd2l0aCB0aGUga2V5IGFzIHRoZSBuYW1lIG9mIHRoZSBzdHJ1Y3QuICBOb3RlIHRoYXQg
dGhlIHtcdHQKLSAgKGssIHYpIE1hcH0gdHlwZSBpcyBvbmx5IHZhbGlkIHdoZW4ge1x0dCBrfSBp
cyBhIHtcdHQgU3RyaW5nfSwge1x0dCBSZWZ9LCBvcgotICB7XHR0IEludH0sIGFuZCBpbiBlYWNo
IGNhc2UgdGhlIGtleXMgb2YgdGhlIG1hcHMgYXJlIHN0cmluZ2lmaWVkIGFzCi0gIGFib3ZlLiBG
b3IgZXhhbXBsZSwgdGhlIHtcdHQgKFN0cmluZywgZG91YmxlKSBNYXB9IGNvbnRhaW5pbmcgYSB0
aGUgbWFwcGluZ3MKLSAgTWlrZSAkXHJpZ2h0YXJyb3ckIDIuMyBhbmQgSm9obiAkXHJpZ2h0YXJy
b3ckIDEuMiB3b3VsZCBiZSByZXByZXNlbnRlZCBhczoKLQotICBcYmVnaW57dmVyYmF0aW19Ci08
dmFsdWU+Ci0gIDxzdHJ1Y3Q+Ci0gICAgPG1lbWJlcj4KLSAgICAgIDxuYW1lPk1pa2U8L25hbWU+
Ci0gICAgICA8dmFsdWU+PGRvdWJsZT4yLjM8L2RvdWJsZT48L3ZhbHVlPgotICAgIDwvbWVtYmVy
PgotICAgIDxtZW1iZXI+Ci0gICAgICA8bmFtZT5Kb2huPC9uYW1lPgotICAgICAgPHZhbHVlPjxk
b3VibGU+MS4yPC9kb3VibGU+PC92YWx1ZT4KLSAgICA8L21lbWJlcj4KLSAgPC9zdHJ1Y3Q+Ci08
L3ZhbHVlPgotICBcZW5ke3ZlcmJhdGltfQotCi0gIFxpdGVtIG91ciB7XHR0IFZvaWR9IHR5cGUg
aXMgdHJhbnNtaXR0ZWQgYXMgYW4gZW1wdHkgc3RyaW5nLgotCi1cZW5ke2l0ZW1pemV9Ci0KLVxz
dWJzZWN0aW9ue05vdGUgb24gUmVmZXJlbmNlcyB2cyBVVUlEc30KLQotUmVmZXJlbmNlcyBhcmUg
b3BhcXVlIHR5cGVzIC0tLSBlbmNvZGVkIGFzIFhNTC1SUEMgc3RyaW5ncyBvbiB0aGUgd2lyZSAt
LS0gdW5kZXJzdG9vZAotb25seSBieSB0aGUgcGFydGljdWxhciBzZXJ2ZXIgd2hpY2ggZ2VuZXJh
dGVkIHRoZW0uIFNlcnZlcnMgYXJlIGZyZWUgdG8gY2hvb3NlCi1hbnkgY29uY3JldGUgcmVwcmVz
ZW50YXRpb24gdGhleSBmaW5kIGNvbnZlbmllbnQ7IGNsaWVudHMgc2hvdWxkIG5vdCBtYWtlIGFu
eSAKLWFzc3VtcHRpb25zIG9yIGF0dGVtcHQgdG8gcGFyc2UgdGhlIHN0cmluZyBjb250ZW50cy4g
UmVmZXJlbmNlcyBhcmUgbm90IGd1YXJhbnRlZWQKLXRvIGJlIHBlcm1hbmVudCBpZGVudGlmaWVy
cyBmb3Igb2JqZWN0czsgY2xpZW50cyBzaG91bGQgbm90IGFzc3VtZSB0aGF0IHJlZmVyZW5jZXMg
Ci1nZW5lcmF0ZWQgZHVyaW5nIG9uZSBzZXNzaW9uIGFyZSB2YWxpZCBmb3IgYW55IGZ1dHVyZSBz
ZXNzaW9uLiBSZWZlcmVuY2VzIGRvIG5vdAotYWxsb3cgb2JqZWN0cyB0byBiZSBjb21wYXJlZCBm
b3IgZXF1YWxpdHkuIFR3byByZWZlcmVuY2VzIHRvIHRoZSBzYW1lIG9iamVjdCBhcmUKLW5vdCBn
dWFyYW50ZWVkIHRvIGJlIHRleHR1YWxseSBpZGVudGljYWwuCi0KLVVVSURzIGFyZSBpbnRlbmRl
ZCB0byBiZSBwZXJtYW5lbnQgbmFtZXMgZm9yIG9iamVjdHMuIFRoZXkgYXJlCi1ndWFyYW50ZWVk
IHRvIGJlIGluIHRoZSBPU0YgRENFIFVVSUQgcHJlc2VudGF0aW9uIGZvcm1hdCAoYXMgb3V0cHV0
IGJ5IHtcdHQgdXVpZGdlbn0uCi1DbGllbnRzIG1heSBzdG9yZSBVVUlEcyBvbiBkaXNrIGFuZCB1
c2UgdGhlbSB0byBsb29rdXAgb2JqZWN0cyBpbiBzdWJzZXF1ZW50IHNlc3Npb25zCi13aXRoIHRo
ZSBzZXJ2ZXIuIENsaWVudHMgbWF5IGFsc28gdGVzdCBlcXVhbGl0eSBvbiBvYmplY3RzIGJ5IGNv
bXBhcmluZyBVVUlEIHN0cmluZ3MuCi0KLVRoZSBBUEkgcHJvdmlkZXMgbWVjaGFuaXNtcwotZm9y
IHRyYW5zbGF0aW5nIGJldHdlZW4gVVVJRHMgYW5kIG9wYXF1ZSByZWZlcmVuY2VzLiBFYWNoIGNs
YXNzIHRoYXQgY29udGFpbnMgYSBVVUlECi1maWVsZCBwcm92aWRlczoKLVxiZWdpbntpdGVtaXpl
fQotXGl0ZW0gIEEgYGB7XHR0IGdldFxfYnlcX3V1aWR9JycgbWV0aG9kIHRoYXQgdGFrZXMgYSBV
VUlELCAkdSQsIGFuZCByZXR1cm5zIGFuIG9wYXF1ZSByZWZlcmVuY2UKLXRvIHRoZSBzZXJ2ZXIt
c2lkZSBvYmplY3QgdGhhdCBoYXMgVVVJRD0kdSQ7IAotXGl0ZW0gQSB7XHR0IGdldFxfdXVpZH0g
ZnVuY3Rpb24gKGEgcmVndWxhciBgYGZpZWxkIGdldHRlcicnIFJQQykgdGhhdCB0YWtlcyBhbiBv
cGFxdWUgcmVmZXJlbmNlLAotJHIkLCBhbmQgcmV0dXJucyB0aGUgVVVJRCBvZiB0aGUgc2VydmVy
LXNpZGUgb2JqZWN0IHRoYXQgaXMgcmVmZXJlbmNlZCBieSAkciQuCi1cZW5ke2l0ZW1pemV9Ci0K
LVxzdWJzZWN0aW9ue1JldHVybiBWYWx1ZXMvU3RhdHVzIENvZGVzfQotXGxhYmVse3N5bmNocm9u
b3VzLXJlc3VsdH0KLQotVGhlIHJldHVybiB2YWx1ZSBvZiBhbiBSUEMgY2FsbCBpcyBhbiBYTUwt
UlBDIHtcdHQgU3RydWN0fS4KLQotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBUaGUgZmlyc3QgZWxl
bWVudCBvZiB0aGUgc3RydWN0IGlzIG5hbWVkIHtcdHQgU3RhdHVzfTsgaXQKLWNvbnRhaW5zIGEg
c3RyaW5nIHZhbHVlIGluZGljYXRpbmcgd2hldGhlciB0aGUgcmVzdWx0IG9mIHRoZSBjYWxsIHdh
cwotYSBgYHtcdHQgU3VjY2Vzc30nJyBvciBhIGBge1x0dCBGYWlsdXJlfScnLgotXGVuZHtpdGVt
aXplfQotCi1JZiB7XHR0IFN0YXR1c30gd2FzIHNldCB0byB7XHR0IFN1Y2Nlc3N9IHRoZW4gdGhl
IFN0cnVjdCBjb250YWlucyBhIHNlY29uZAotZWxlbWVudCBuYW1lZCB7XHR0IFZhbHVlfToKLVxi
ZWdpbntpdGVtaXplfQotXGl0ZW0gVGhlIGVsZW1lbnQgb2YgdGhlIHN0cnVjdCBuYW1lZCB7XHR0
IFZhbHVlfSBjb250YWlucyB0aGUgZnVuY3Rpb24ncyByZXR1cm4gdmFsdWUuCi1cZW5ke2l0ZW1p
emV9Ci0KLUluIHRoZSBjYXNlIHdoZXJlIHtcdHQgU3RhdHVzfSBpcyBzZXQgdG8ge1x0dCBGYWls
dXJlfSB0aGVuCi10aGUgc3RydWN0IGNvbnRhaW5zIGEgc2Vjb25kIGVsZW1lbnQgbmFtZWQge1x0
dCBFcnJvckRlc2NyaXB0aW9ufToKLVxiZWdpbntpdGVtaXplfQotXGl0ZW0gVGhlIGVsZW1lbnQg
b2YgdGhlIHN0cnVjdCBuYW1lZCB7XHR0IEVycm9yRGVzY3JpcHRpb259IGNvbnRhaW5zCi1hbiBh
cnJheSBvZiBzdHJpbmcgdmFsdWVzLiBUaGUgZmlyc3QgZWxlbWVudCBvZiB0aGUgYXJyYXkgaXMg
YW4gZXJyb3IgY29kZTsKLXRoZSByZW1haW5kZXIgb2YgdGhlIGFycmF5IGFyZSBzdHJpbmdzIHJl
cHJlc2VudGluZyBlcnJvciBwYXJhbWV0ZXJzIHJlbGF0aW5nCi10byB0aGF0IGNvZGUuCi1cZW5k
e2l0ZW1pemV9Ci0KLUZvciBleGFtcGxlLCBhbiBYTUwtUlBDIHJldHVybiB2YWx1ZSBmcm9tIHRo
ZSB7XHR0IGhvc3QuZ2V0XF9yZXNpZGVudFxfVk1zfQotZnVuY3Rpb24gYWJvdmUKLW1heSBsb29r
IGxpa2UgdGhpczoKLVxiZWdpbnt2ZXJiYXRpbX0KLSAgICA8c3RydWN0PgotICAgICAgIDxtZW1i
ZXI+Ci0gICAgICAgICA8bmFtZT5TdGF0dXM8L25hbWU+Ci0gICAgICAgICA8dmFsdWU+U3VjY2Vz
czwvdmFsdWU+Ci0gICAgICAgPC9tZW1iZXI+Ci0gICAgICAgPG1lbWJlcj4KLSAgICAgICAgICA8
bmFtZT5WYWx1ZTwvbmFtZT4KLSAgICAgICAgICA8dmFsdWU+Ci0gICAgICAgICAgICA8YXJyYXk+
Ci0gICAgICAgICAgICAgICA8ZGF0YT4KLSAgICAgICAgICAgICAgICAgPHZhbHVlPjgxNTQ3YTM1
LTIwNWMtYTU1MS1jNTc3LTAwYjk4MmM1ZmUwMDwvdmFsdWU+Ci0gICAgICAgICAgICAgICAgIDx2
YWx1ZT42MWM4NWEyMi0wNWRhLWI4YTItMmU1NS0wNmIwODQ3ZGE1MDM8L3ZhbHVlPgotICAgICAg
ICAgICAgICAgICA8dmFsdWU+MWQ0MDFlYzQtM2MxNy0zNWE2LWZjNzktY2VlNmJkOTgxMWZlPC92
YWx1ZT4KLSAgICAgICAgICAgICAgIDwvZGF0YT4KLSAgICAgICAgICAgIDwvYXJyYXk+Ci0gICAg
ICAgICA8L3ZhbHVlPgotICAgICAgIDwvbWVtYmVyPgotICAgIDwvc3RydWN0PgotXGVuZHt2ZXJi
YXRpbX0KLQotXHNlY3Rpb257TWFraW5nIFhNTC1SUEMgQ2FsbHN9Ci0KLVxzdWJzZWN0aW9ue1Ry
YW5zcG9ydCBMYXllcn0KLQotVGhlIGZvbGxvd2luZyB0cmFuc3BvcnQgbGF5ZXJzIGFyZSBjdXJy
ZW50bHkgc3VwcG9ydGVkOgotXGJlZ2lue2l0ZW1pemV9Ci1caXRlbSBIVFRQL1MgZm9yIHJlbW90
ZSBhZG1pbmlzdHJhdGlvbgotXGl0ZW0gSFRUUCBvdmVyIFVuaXggZG9tYWluIHNvY2tldHMgZm9y
IGxvY2FsIGFkbWluaXN0cmF0aW9uCi1cZW5ke2l0ZW1pemV9Ci0KLVxzdWJzZWN0aW9ue1Nlc3Np
b24gTGF5ZXJ9Ci0KLVRoZSBYTUwtUlBDIGludGVyZmFjZSBpcyBzZXNzaW9uLWJhc2VkOyBiZWZv
cmUgeW91IGNhbiBtYWtlIGFyYml0cmFyeSBSUEMgY2FsbHMKLXlvdSBtdXN0IGxvZ2luIGFuZCBp
bml0aWF0ZSBhIHNlc3Npb24uIEZvciBleGFtcGxlOgotXGJlZ2lue3ZlcmJhdGltfQotICAgc2Vz
c2lvbl9pZCAgICBzZXNzaW9uLmxvZ2luX3dpdGhfcGFzc3dvcmQoc3RyaW5nIHVuYW1lLCBzdHJp
bmcgcHdkKQotXGVuZHt2ZXJiYXRpbX0KLVdoZXJlIHtcdHQgdW5hbWV9IGFuZCB7XHR0IHBhc3N3
b3JkfSByZWZlciB0byB5b3VyIHVzZXJuYW1lIGFuZCBwYXNzd29yZAotcmVzcGVjdGl2ZWx5LCBh
cyBkZWZpbmVkIGJ5IHRoZSBYZW4gYWRtaW5pc3RyYXRvci4KLVRoZSB7XHR0IHNlc3Npb25cX2lk
fSByZXR1cm5lZCBieSB7XHR0IHNlc3Npb24ubG9naW5cX3dpdGhcX3Bhc3N3b3JkfSBpcyBwYXNz
ZWQKLXRvIHN1YnNlcXVlbnQgUlBDIGNhbGxzIGFzIGFuIGF1dGhlbnRpY2F0aW9uIHRva2VuLgot
Ci1BIHNlc3Npb24gY2FuIGJlIHRlcm1pbmF0ZWQgd2l0aCB0aGUge1x0dCBzZXNzaW9uLmxvZ291
dH0gZnVuY3Rpb246Ci1cYmVnaW57dmVyYmF0aW19Ci0gICB2b2lkICAgICAgICAgIHNlc3Npb24u
bG9nb3V0KHNlc3Npb25faWQgc2Vzc2lvbikKLVxlbmR7dmVyYmF0aW19Ci0KLVxzdWJzZWN0aW9u
e1N5bmNocm9ub3VzIGFuZCBBc3luY2hyb25vdXMgaW52b2NhdGlvbn0KLQotRWFjaCBtZXRob2Qg
Y2FsbCAoYXBhcnQgZnJvbSBtZXRob2RzIG9uIGBgU2Vzc2lvbicnIGFuZCBgYFRhc2snJyBvYmpl
Y3RzIAotYW5kIGBgZ2V0dGVycycnIGFuZCBgYHNldHRlcnMnJyBkZXJpdmVkIGZyb20gZmllbGRz
KQotY2FuIGJlIG1hZGUgZWl0aGVyIHN5bmNocm9ub3VzbHkgb3IgYXN5bmNocm9ub3VzbHkuCi1B
IHN5bmNocm9ub3VzIFJQQyBjYWxsIGJsb2NrcyB1bnRpbCB0aGUKLXJldHVybiB2YWx1ZSBpcyBy
ZWNlaXZlZDsgdGhlIHJldHVybiB2YWx1ZSBvZiBhIHN5bmNocm9ub3VzIFJQQyBjYWxsIGlzCi1l
eGFjdGx5IGFzIHNwZWNpZmllZCBpbiBTZWN0aW9uflxyZWZ7c3luY2hyb25vdXMtcmVzdWx0fS4K
LQotT25seSBzeW5jaHJvbm91cyBBUEkgY2FsbHMgYXJlIGxpc3RlZCBleHBsaWNpdGx5IGluIHRo
aXMgZG9jdW1lbnQuIAotQWxsIGFzeW5jaHJvbm91cyB2ZXJzaW9ucyBhcmUgaW4gdGhlIHNwZWNp
YWwge1x0dCBBc3luY30gbmFtZXNwYWNlLgotRm9yIGV4YW1wbGUsIHN5bmNocm9ub3VzIGNhbGwg
e1x0dCBWTS5jbG9uZSguLi4pfQotKGRlc2NyaWJlZCBpbiBDaGFwdGVyflxyZWZ7YXBpLXJlZmVy
ZW5jZX0pCi1oYXMgYW4gYXN5bmNocm9ub3VzIGNvdW50ZXJwYXJ0LCB7XHR0Ci1Bc3luYy5WTS5j
bG9uZSguLi4pfSwgdGhhdCBpcyBub24tYmxvY2tpbmcuCi0KLUluc3RlYWQgb2YgcmV0dXJuaW5n
IGl0cyByZXN1bHQgZGlyZWN0bHksIGFuIGFzeW5jaHJvbm91cyBSUEMgY2FsbAotcmV0dXJucyBh
IHtcdHQgdGFzay1pZH07IHRoaXMgaWRlbnRpZmllciBpcyBzdWJzZXF1ZW50bHkgdXNlZAotdG8g
dHJhY2sgdGhlIHN0YXR1cyBvZiBhIHJ1bm5pbmcgYXN5bmNocm9ub3VzIFJQQy4gTm90ZSB0aGF0
IGFuIGFzeW5jaHJvbm91cwotY2FsbCBtYXkgZmFpbCBpbW1lZGlhdGVseSwgYmVmb3JlIGEge1x0
dCB0YXNrLWlkfSBoYXMgZXZlbiBiZWVuIGNyZWF0ZWQtLS10bwotcmVwcmVzZW50IHRoaXMgZXZl
bnR1YWxpdHksIHRoZSByZXR1cm5lZCB7XHR0IHRhc2staWR9Ci1pcyB3cmFwcGVkIGluIGFuIFhN
TC1SUEMgc3RydWN0IHdpdGggYSB7XHR0IFN0YXR1c30sIHtcdHQgRXJyb3JEZXNjcmlwdGlvbn0g
YW5kCi17XHR0IFZhbHVlfSBmaWVsZHMsIGV4YWN0bHkgYXMgc3BlY2lmaWVkIGluIFNlY3Rpb25+
XHJlZntzeW5jaHJvbm91cy1yZXN1bHR9LgotCi1UaGUge1x0dCB0YXNrLWlkfSBpcyBwcm92aWRl
ZCBpbiB0aGUge1x0dCBWYWx1ZX0gZmllbGQgaWYge1x0dCBTdGF0dXN9IGlzIHNldCB0bwote1x0
dCBTdWNjZXNzfS4KLQotVGhlIFJQQyBjYWxsCi1cYmVnaW57dmVyYmF0aW19Ci0gICAgKHJlZl90
YXNrIFNldCkgICBUYXNrLmdldF9hbGwoc2Vzc2lvbl9pZCBzKQotXGVuZHt2ZXJiYXRpbX0gCi1y
ZXR1cm5zIGEgc2V0IG9mIGFsbCB0YXNrIElEcyBrbm93biB0byB0aGUgc3lzdGVtLiBUaGUgc3Rh
dHVzIChpbmNsdWRpbmcgYW55Ci1yZXR1cm5lZCByZXN1bHQgYW5kIGVycm9yIGNvZGVzKSBvZiB0
aGVzZSB0YXNrcwotY2FuIHRoZW4gYmUgcXVlcmllZCBieSBhY2Nlc3NpbmcgdGhlIGZpZWxkcyBv
ZiB0aGUgVGFzayBvYmplY3QgaW4gdGhlIHVzdWFsIHdheS4gCi1Ob3RlIHRoYXQsIGluIG9yZGVy
IHRvIGdldCBhIGNvbnNpc3RlbnQgc25hcHNob3Qgb2YgYSB0YXNrJ3Mgc3RhdGUsIGl0IGlzIGFk
dmlzYWJsZSB0byBjYWxsIHRoZSBgYGdldFxfcmVjb3JkJycgZnVuY3Rpb24uCi0KLVxzZWN0aW9u
e0V4YW1wbGUgaW50ZXJhY3RpdmUgc2Vzc2lvbn0KLVRoaXMgc2VjdGlvbiBkZXNjcmliZXMgaG93
IGFuIGludGVyYWN0aXZlIHNlc3Npb24gbWlnaHQgbG9vaywgdXNpbmcKLXRoZSBweXRob24gQVBJ
LiAgQWxsIHB5dGhvbiB2ZXJzaW9ucyBzdGFydGluZyBmcm9tIDIuNCBzaG91bGQgd29yay4KLQot
VGhlIGV4YW1wbGVzIGluIHRoaXMgc2VjdGlvbiB1c2UgYSByZW1vdGUgWGVuIGhvc3Qgd2l0aCB0
aGUgaXAgYWRkcmVzcwotb2YgXHRleHR0dHsxOTIuMTY4LjcuMjB9IGFuZCB0aGUgeG1scnBjIHBv
cnQgXHRleHR0dHs5MzYzfS4gIE5vCi1hdXRoZW50aWNhdGlvbiBpcyB1c2VkLgotCi1Ob3RlIHRo
YXQgdGhlIHJlbW90ZSBzZXJ2ZXIgbXVzdCBiZSBjb25maWd1cmVkIGluIHRoZSB3YXksIHRoYXQg
aXQKLWFjY2VwdHMgcmVtb3RlIGNvbm5lY3Rpb25zLiAgU29tZSBsaW5lcyBtdXN0IGJlIGFkZGVk
IHRvIHRoZQoteGVuZC1jb25maWcuc3hwIGNvbmZpZ3VyYXRpb24gZmlsZToKLVxiZWdpbnt2ZXJi
YXRpbX0KLSh4ZW4tYXBpLXNlcnZlciAoKDkzNjMgbm9uZSkKLSAgICAgICAgICAgICAgICAgKHVu
aXggbm9uZSkpKQotKHhlbmQtdGNwLXhtbHJwYy1zZXJ2ZXIgeWVzKQotXGVuZHt2ZXJiYXRpbX0K
LVRoZSB4ZW5kIG11c3QgYmUgcmVzdGFydGVkIGFmdGVyIGNoYW5naW5nIHRoZSBjb25maWd1cmF0
aW9uLgotCi1CZWZvcmUgc3RhcnRpbmcgcHl0aG9uLCB0aGUgXHRleHR0dHtQWVRIT05QQVRIfSBt
dXN0IGJlIHNldCB0aGF0IHRoZQotXHRleHR0dHtYZW5BUEkucHl9IGNhbiBiZSBmb3VuZC4gIFR5
cGljYWxseSB0aGUgXHRleHR0dHtYZW5BUEkucHl9IGlzCi1pbnN0YWxsZWQgd2l0aCBvbmUgb2Yg
dGhlIFhlbiBoZWxwZXIgcGFja2FnZXMgd2hpY2ggdGhlIGxhc3QgcGFydCBvZgotdGhlIHBhdGgg
aXMgXHRleHR0dHt4ZW4veG0vWGVuQVBJLnB5fS4KLQotRXhhbXBsZTogVW5kZXIgRGViaWFuIDUu
MCB0aGUgcGFja2FnZSB3aGljaCBjb250YWlucyB0aGUKLVx0ZXh0dHR7WGVuQVBJLnB5fSBpcyBc
dGV4dHR0e3hlbi11dGlscy0zLjItMX0uIFx0ZXh0dHR7WGVuQVBJLnB5fSBpcwotbG9jYXRlZCBp
biBcdGV4dHR0ey91c3IvbGliL3hlbi0zLjItMS9saWIvcHl0aG9uL3hlbi94bX0uIFRoZQotZm9s
bG93aW5nIGNvbW1hbmQgd2lsbCBzZXQgdGhlIFx0ZXh0dHR7UFlUSE9OUEFUSH0gZW52aXJvbm1l
bnQKLXZhcmlhYmxlIGluIGEgYmFzaDoKLQotXGJlZ2lue3ZlcmJhdGltfQotJCBleHBvcnQgUFlU
SE9OUEFUSD0vdXNyL2xpYi94ZW4tMy4yLTEvbGliL3B5dGhvbgotXGVuZHt2ZXJiYXRpbX0KLQot
VGhlbiBweXRob24gY2FuIGJlIHN0YXJ0ZWQgYW5kIHRoZSBYZW5BUEkgbXVzdCBiZSBpbXBvcnRl
ZDoKLQotXGJlZ2lue3ZlcmJhdGltfQotJCBweXRob24KLS4uLgotPj4+IGltcG9ydCB4ZW4ueG0u
WGVuQVBJCi1cZW5ke3ZlcmJhdGltfQotCi1UbyBjcmVhdGUgYSBzZXNzaW9uIHRvIHRoZSByZW1v
dGUgc2VydmVyLCB0aGUKLVx0ZXh0dHR7eGVuLnhtLlhlbkFQSS5TZXNzaW9ufSBjb25zdHJ1Y3Rv
ciBpcyB1c2VkOgotXGJlZ2lue3ZlcmJhdGltfQotPj4+IHNlc3Npb24gPSB4ZW4ueG0uWGVuQVBJ
LlNlc3Npb24oImh0dHA6Ly8xOTIuMTY4LjcuMjA6OTM2MyIpCi1cZW5ke3ZlcmJhdGltfQotCi1G
b3IgYXV0aGVudGljYXRpb24gd2l0aCBhIHVzZXJuYW1lIGFuZCBwYXNzd29yZCB0aGUKLVx0ZXh0
dHR7bG9naW5cX3dpdGhcX3Bhc3N3b3JkfSBpcyB1c2VkOgotXGJlZ2lue3ZlcmJhdGltfQotPj4+
IHNlc3Npb24ubG9naW5fd2l0aF9wYXNzd29yZCgiIiwgIiIpCi1cZW5ke3ZlcmJhdGltfQotCi1X
aGVuIHNlcmlhbGlzZWQsIHRoaXMgY2FsbCBsb29rcyBsaWtlOgotXGJlZ2lue3ZlcmJhdGltfQot
UE9TVCAvUlBDMiBIVFRQLzEuMAotSG9zdDogMTkyLjE2OC43LjIwOjkzNjMKLVVzZXItQWdlbnQ6
IHhtbHJwY2xpYi5weS8xLjAuMSAoYnkgd3d3LnB5dGhvbndhcmUuY29tKQotQ29udGVudC1UeXBl
OiB0ZXh0L3htbAotQ29udGVudC1MZW5ndGg6IDIyMQotCi08P3htbCB2ZXJzaW9uPScxLjAnPz4K
LTxtZXRob2RDYWxsPgotPG1ldGhvZE5hbWU+c2Vzc2lvbi5sb2dpbl93aXRoX3Bhc3N3b3JkPC9t
ZXRob2ROYW1lPgotPHBhcmFtcz4KLTxwYXJhbT4KLTx2YWx1ZT48c3RyaW5nPjwvc3RyaW5nPjwv
dmFsdWU+Ci08L3BhcmFtPgotPHBhcmFtPgotPHZhbHVlPjxzdHJpbmc+PC9zdHJpbmc+PC92YWx1
ZT4KLTwvcGFyYW0+Ci08L3BhcmFtcz4KLTwvbWV0aG9kQ2FsbD4KLVxlbmR7dmVyYmF0aW19Ci0K
LUFuZCB0aGUgcmVzcG9uc2U6Ci1cYmVnaW57dmVyYmF0aW19Ci1IVFRQLzEuMSAyMDAgT0sKLVNl
cnZlcjogQmFzZUhUVFAvMC4zIFB5dGhvbi8yLjUuMgotRGF0ZTogRnJpLCAxMCBKdWwgMjAwOSAw
OTowMToyNyBHTVQKLUNvbnRlbnQtVHlwZTogdGV4dC94bWwKLUNvbnRlbnQtTGVuZ3RoOiAzMTMK
LQotPD94bWwgdmVyc2lvbj0nMS4wJz8+Ci08bWV0aG9kUmVzcG9uc2U+Ci08cGFyYW1zPgotPHBh
cmFtPgotPHZhbHVlPjxzdHJ1Y3Q+Ci08bWVtYmVyPgotPG5hbWU+U3RhdHVzPC9uYW1lPgotPHZh
bHVlPjxzdHJpbmc+U3VjY2Vzczwvc3RyaW5nPjwvdmFsdWU+Ci08L21lbWJlcj4KLTxtZW1iZXI+
Ci08bmFtZT5WYWx1ZTwvbmFtZT4KLTx2YWx1ZT48c3RyaW5nPjY4ZTNhMDA5LTAyNDktNzI1Yi0y
NDZiLTdmYzQzY2Y0ZjE1NDwvc3RyaW5nPjwvdmFsdWU+Ci08L21lbWJlcj4KLTwvc3RydWN0Pjwv
dmFsdWU+Ci08L3BhcmFtPgotPC9wYXJhbXM+Ci08L21ldGhvZFJlc3BvbnNlPgotXGVuZHt2ZXJi
YXRpbX0KLQotTmV4dCwgdGhlIHVzZXIgbWF5IGFjcXVpcmUgYSBsaXN0IG9mIGFsbCB0aGUgVk1z
IGtub3duIHRvIHRoZSBob3N0OgotCi1cYmVnaW57dmVyYmF0aW19Ci0+Pj4gdm1zID0gc2Vzc2lv
bi54ZW5hcGkuVk0uZ2V0X2FsbCgpCi0+Pj4gdm1zCi1bJzAwMDAwMDAwLTAwMDAtMDAwMC0wMDAw
LTAwMDAwMDAwMDAwMCcsICdiMjhlNGVlMy0yMTZmLWZhODUtOWNhZS02MTVlOTU0ZGJiZTcnXQot
XGVuZHt2ZXJiYXRpbX0KLQotVGhlIFZNIHJlZmVyZW5jZXMgaGVyZSBoYXZlIHRoZSBmb3JtIG9m
IGFuIHV1aWQsIHRob3VnaCB0aGV5IG1heQotY2hhbmdlIGluIHRoZSBmdXR1cmUsIGFuZCB0aGV5
IHNob3VsZCBiZSB0cmVhdGVkIGFzIG9wYXF1ZSBzdHJpbmdzLgotCi1Tb21lIGV4YW1wbGVzIG9m
IHVzaW5nIGFjY2Vzc29ycyBmb3Igb2JqZWN0IGZpZWxkczoKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+
PiBzZXNzaW9uLnhlbmFwaS5WTS5nZXRfbmFtZV9sYWJlbCh2bXNbMV0pCi0nZ3Vlc3QwMDInCi0+
Pj4gc2Vzc2lvbi54ZW5hcGkuVk0uZ2V0X2FjdGlvbnNfYWZ0ZXJfcmVib290KHZtc1sxXSkKLSdy
ZXN0YXJ0JwotXGVuZHt2ZXJiYXRpbX0KLQotR3JhYiB0aGUgYWN0dWFsIG1lbW9yeSBhbmQgY3B1
IHV0aWxpc2F0aW9uIG9mIG9uZSB2bToKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+PiBtID0gc2Vzc2lv
bi54ZW5hcGkuVk0uZ2V0X21ldHJpY3Modm1zWzFdKQotPj4+IHNlc3Npb24ueGVuYXBpLlZNX21l
dHJpY3MuZ2V0X21lbW9yeV9hY3R1YWwobSkKLScyNjg0MzU0NTYnCi0+Pj4gc2Vzc2lvbi54ZW5h
cGkuVk1fbWV0cmljcy5nZXRfVkNQVXNfdXRpbGlzYXRpb24obSkKLXsnMCc6IDAuMDAwNDE3NTk5
NTU2MzI5MzUzNjJ9Ci1cZW5ke3ZlcmJhdGltfQotKFRoZSB2aXJ0dWFsIG1hY2hpbmUgaGFzIGFi
b3V0IDI1NiBNQnl0ZSBSQU0gYW5kIGlzIGlkbGUuKQotCi1QYXVzaW5nIGFuZCB1bnBhdXNpbmcg
YSB2bToKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+PiBzZXNzaW9uLnhlbmFwaS5WTS5wYXVzZSh2bXNb
MV0pCi0nJwotPj4+IHNlc3Npb24ueGVuYXBpLlZNLnVucGF1c2Uodm1zWzFdKQotJycKLVxlbmR7
dmVyYmF0aW19Ci0KLVRyeWluZyB0byBzdGFydCBhbiB2bToKLVxiZWdpbnt2ZXJiYXRpbX0KLT4+
PiBzZXNzaW9uLnhlbmFwaS5WTS5zdGFydCh2bXNbMV0sIEZhbHNlKQotLi4uCi06IFhlbi1BUEkg
ZmFpbHVyZTogWydWTV9CQURfUE9XRVJfU1RBVEUnLCBcCi0gICAgJ2IyOGU0ZWUzLTIxNmYtZmE4
NS05Y2FlLTYxNWU5NTRkYmJlNycsICdIYWx0ZWQnLCAnUnVubmluZyddCi1cZW5ke3ZlcmJhdGlt
fQotCi1JbiB0aGlzIGNhc2UgdGhlIHtcdHQgc3RhcnR9IG1lc3NhZ2UgaGFzIGJlZW4gcmVqZWN0
ZWQsIGJlY2F1c2UgdGhlIFZNIGlzCi1hbHJlYWR5IHJ1bm5pbmcsIGFuZCBzbyBhbiBlcnJvciBy
ZXNwb25zZSBoYXMgYmVlbiByZXR1cm5lZC4gIFRoZXNlIGhpZ2gtbGV2ZWwKLWVycm9ycyBhcmUg
cmV0dXJuZWQgYXMgc3RydWN0dXJlZCBkYXRhIChyYXRoZXIgdGhhbiBhcyBYTUwtUlBDIGZhdWx0
cyksCi1hbGxvd2luZyB0aGVtIHRvIGJlIGludGVybmF0aW9uYWxpc2VkLiAgCmRpZmYgLS1naXQg
YS9kb2NzL3hlbi1hcGkveGVuLmVwcyBiL2RvY3MveGVuLWFwaS94ZW4uZXBzCmRlbGV0ZWQgZmls
ZSBtb2RlIDEwMDY0NAppbmRleCBkYTE0ZmU5Li4wMDAwMDAwCi0tLSBhL2RvY3MveGVuLWFwaS94
ZW4uZXBzCisrKyAvZGV2L251bGwKQEAgLTEsNDQgKzAsMCBAQAotJSFQUy1BZG9iZS0zLjEgRVBT
Ri0zLjAKJSVUaXRsZTogeGVuMy0xLjAuZXBzCiUlQ3JlYXRvcjogQWRvYmUgSWxsdXN0cmF0b3Io
UikgMTEKJSVBSThfQ3JlYXRvclZlcnNpb246IDExLjAuMAolQUk5X1ByaW50aW5nRGF0YUJlZ2lu
CiUlRm9yOiBSaWNoIFF1YXJsZXMKJSVDcmVhdGlvbkRhdGU6IDYvMjYvMDYKJSVCb3VuZGluZ0Jv
eDogMCAwIDIxNSA5NAolJUhpUmVzQm91bmRpbmdCb3g6IDAgMCAyMTQuMTY0NiA5My41MTk2CiUl
Q3JvcEJveDogMCAwIDIxNC4xNjQ2IDkzLjUxOTYKJSVMYW5ndWFnZUxldmVsOiAyCiUlRG9jdW1l
bnREYXRhOiBDbGVhbjdCaXQKJSVQYWdlczogMQolJURvY3VtZW50TmVlZGVkUmVzb3VyY2VzOiAK
JSVEb2N1bWVudFN1cHBsaWVkUmVzb3VyY2VzOiBwcm9jc2V0IEFkb2JlX0FHTV9JbWFnZSAoMS4w
IDApCiUlKyBwcm9jc2V0IEFkb2JlX0Nvb2xUeXBlX1V0aWxpdHlfVDQyICgxLjAgMCkKJSUrIHBy
b2NzZXQgQWRvYmVfQ29vbFR5cGVfVXRpbGl0eV9NQUtFT0NGICgxLjE5IDApCiUlKyBwcm9jc2V0
IEFkb2JlX0Nvb2xUeXBlX0NvcmUgKDIuMjMgMCkKJSUrIHByb2NzZXQgQWRvYmVfQUdNX0NvcmUg
KDIuMCAwKQolJSsgcHJvY3NldCBBZG9iZV9BR01fVXRpbHMgKDEuMCAwKQolJURvY3VtZW50Rm9u
dHM6IAolJURvY3VtZW50TmVlZGVkRm9udHM6IAolJURvY3VtZW50TmVlZGVkRmVhdHVyZXM6IAol
JURvY3VtZW50U3VwcGxpZWRGZWF0dXJlczogCiUlRG9jdW1lbnRQcm9jZXNzQ29sb3JzOiAgQmxh
Y2sKJSVEb2N1bWVudEN1c3RvbUNvbG9yczogCiUlQ01ZS0N1c3RvbUNvbG9yOiAKJSVSR0JDdXN0
b21Db2xvcjogCiVBRE9fQ29udGFpbnNYTVA6IE1haW5GaXJzdAolQUk3X1RodW1ibmFpbDogMTI4
IDU2IDgKJSVCZWdpbkRhdGE6IDYyNjYgSGV4IEJ5dGVzCiUwMDAwMzMwMDAwNjYwMDAwOTkwMDAw
Q0MwMDMzMDAwMDMzMzMwMDMzNjYwMDMzOTkwMDMzQ0MwMDMzRkYKJTAwNjYwMDAwNjYzMzAwNjY2
NjAwNjY5OTAwNjZDQzAwNjZGRjAwOTkwMDAwOTkzMzAwOTk2NjAwOTk5OQolMDA5OUNDMDA5OUZG
MDBDQzAwMDBDQzMzMDBDQzY2MDBDQzk5MDBDQ0NDMDBDQ0ZGMDBGRjMzMDBGRjY2CiUwMEZGOTkw
MEZGQ0MzMzAwMDAzMzAwMzMzMzAwNjYzMzAwOTkzMzAwQ0MzMzAwRkYzMzMzMDAzMzMzMzMKJTMz
MzM2NjMzMzM5OTMzMzNDQzMzMzNGRjMzNjYwMDMzNjYzMzMzNjY2NjMzNjY5OTMzNjZDQzMzNjZG
RgolMzM5OTAwMzM5OTMzMzM5OTY2MzM5OTk5MzM5OUNDMzM5OUZGMzNDQzAwMzNDQzMzMzNDQzY2
MzNDQzk5CiUzM0NDQ0MzM0NDRkYzM0ZGMDAzM0ZGMzMzM0ZGNjYzM0ZGOTkzM0ZGQ0MzM0ZGRkY2
NjAwMDA2NjAwMzMKJTY2MDA2NjY2MDA5OTY2MDBDQzY2MDBGRjY2MzMwMDY2MzMzMzY2MzM2NjY2
MzM5OTY2MzNDQzY2MzNGRgolNjY2NjAwNjY2NjMzNjY2NjY2NjY2Njk5NjY2NkNDNjY2NkZGNjY5
OTAwNjY5OTMzNjY5OTY2NjY5OTk5CiU2Njk5Q0M2Njk5RkY2NkNDMDA2NkNDMzM2NkNDNjY2NkND
OTk2NkNDQ0M2NkNDRkY2NkZGMDA2NkZGMzMKJTY2RkY2NjY2RkY5OTY2RkZDQzY2RkZGRjk5MDAw
MDk5MDAzMzk5MDA2Njk5MDA5OTk5MDBDQzk5MDBGRgolOTkzMzAwOTkzMzMzOTkzMzY2OTkzMzk5
OTkzM0NDOTkzM0ZGOTk2NjAwOTk2NjMzOTk2NjY2OTk2Njk5CiU5OTY2Q0M5OTY2RkY5OTk5MDA5
OTk5MzM5OTk5NjY5OTk5OTk5OTk5Q0M5OTk5RkY5OUNDMDA5OUNDMzMKJTk5Q0M2Njk5Q0M5OTk5
Q0NDQzk5Q0NGRjk5RkYwMDk5RkYzMzk5RkY2Njk5RkY5OTk5RkZDQzk5RkZGRgolQ0MwMDAwQ0Mw
MDMzQ0MwMDY2Q0MwMDk5Q0MwMENDQ0MwMEZGQ0MzMzAwQ0MzMzMzQ0MzMzY2Q0MzMzk5CiVDQzMz
Q0NDQzMzRkZDQzY2MDBDQzY2MzNDQzY2NjZDQzY2OTlDQzY2Q0NDQzY2RkZDQzk5MDBDQzk5MzMK
JUNDOTk2NkNDOTk5OUNDOTlDQ0NDOTlGRkNDQ0MwMENDQ0MzM0NDQ0M2NkNDQ0M5OUNDQ0NDQ0ND
Q0NGRgolQ0NGRjAwQ0NGRjMzQ0NGRjY2Q0NGRjk5Q0NGRkNDQ0NGRkZGRkYwMDMzRkYwMDY2RkYw
MDk5RkYwMENDCiVGRjMzMDBGRjMzMzNGRjMzNjZGRjMzOTlGRjMzQ0NGRjMzRkZGRjY2MDBGRjY2
MzNGRjY2NjZGRjY2OTkKJUZGNjZDQ0ZGNjZGRkZGOTkwMEZGOTkzM0ZGOTk2NkZGOTk5OUZGOTlD
Q0ZGOTlGRkZGQ0MwMEZGQ0MzMwolRkZDQzY2RkZDQzk5RkZDQ0NDRkZDQ0ZGRkZGRjMzRkZGRjY2
RkZGRjk5RkZGRkNDMTEwMDAwMDAxMTAwCiUwMDAwMTExMTExMTEyMjAwMDAwMDIyMDAwMDAwMjIy
MjIyMjI0NDAwMDAwMDQ0MDAwMDAwNDQ0NDQ0NDQKJTU1MDAwMDAwNTUwMDAwMDA1NTU1NTU1NTc3
MDAwMDAwNzcwMDAwMDA3Nzc3Nzc3Nzg4MDAwMDAwODgwMAolMDAwMDg4ODg4ODg4QUEwMDAwMDBB
QTAwMDAwMEFBQUFBQUFBQkIwMDAwMDBCQjAwMDAwMEJCQkJCQkJCCiVERDAwMDAwMEREMDAwMDAw
RERERERERERFRTAwMDAwMEVFMDAwMDAwRUVFRUVFRUUwMDAwMDAwMDAwRkYKJTAwRkYwMDAwRkZG
RkZGMDAwMEZGMDBGRkZGRkYwMEZGRkZGRgolNTI0QzQ1RkQxOUZGQThBODdEQThGRDA3N0RBOEE4
RkQ3MEZGN0Q3RDUyN0Q1MjdENTI3RDdEN0Q1MjdECiU1MjdENTJGRDA0N0RGRDZBRkZBODdENTI3
RDdEN0Q1MkZEMEI3RDUyRkQwNDdEQThBOEZENjVGRkE4N0QKJTdENTI3RDUyRkQwNDdERkQwOUE4
N0Q3RDUyN0Q1MjdENTI3RDdERkQ2M0ZGQThGRDA1N0RBOEE4RkZBOAolRkZBOEZGQThGRkE4RkZB
OEZGQThGRkE4RkZBOEE4RkQwNTdEQThGRDVGRkZBODdENTI3RDUyN0Q3REE4CiVBOEZGQThBOEE4
RkZBOEE4QThGRkE4QThBOEZGQThBOEE4RkZBOEE4N0Q3RDUyN0Q1MjdEQThGRDVDRkYKJUE4N0Q1
MjdEN0RBOEE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRgol
QThBODdEN0Q1MjdEQThGRDVBRkY3RDdENTI3RDdERkQwNEE4RkZBOEE4QThGRkE4QThBOEZGQThB
OEE4CiVGRkE4QThBOEZGQThBOEE4RkZGRDA0QTg1MjdENTI3REE4RkQ1OEZGRkQwNTdERkZBOEZG
QThGRkE4RkYKJUE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThB
OEZEMDQ3REE4RkQ1NgolRkY3RDdENTI3RDdEQThBOEZGQThBOEE4RkZBOEE4QThGRkE4QThBOEZG
QThBOEE4RkZBOEE4QThGRkE4CiVBOEE4RkZBOEE4QThGRkE4QThGRDA0N0RBOEZENTRGRkE4RkQw
NDdERkZBOEZGQThGRkE4RkZBOEZGQTgKJUZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4
RkZBOEZGQThGRkE4RkZBOEZGRkQwNDdEQThGRAolNTJGRkE4N0Q1MjdEN0RGRkE4RkZBOEZGQThG
RkE4RkZBOEZGQThGRkE4RkZBOEE4QThGRkE4QThBOEZGCiVBOEE4QThGRkZEMDVBOEZGQThGRkE4
RkZGRDA0N0RBOEZENTFGRjdEN0Q3RDI3RkQwRTUyN0RBOEZGQTgKJUZGQThGRkE4RkZBOEZGQThG
RkE4RkZBOEZGRkZBODI3RkQwNTUyMjcyNzI3RkQwOTUyN0RGRDQ3RkZBOAolNTI3RDdENTJGRDBG
Rjg1MkE4QThBOEZGQThBOEE4RkZBOEE4QThGRkE4RkZBODdERkQxMkY4NTJGRDQ4CiVGRjdEN0Q3
REE4QTgyN0ZEMEZGODdERkZGRkE4RkZBOEZGQThGRkE4RkZBOEZGRkY3REZEMTFGODI3N0QKJUZE
NDhGRjdEN0Q1MjdEQThGRkE4MjdGRDBGRjhGRDA0QThGRkE4QThBOEZGQThGRkE4NTJGRDExRjgy
NwolQThGRDQ5RkY3RDdEN0RBOEZGQThGRjdERkQwRkY4MjdGRkE4RkZBOEZGQThGRkE4RkZBODI3
RkQxMUY4CiU1MkZENEFGRkE4NTI3RDdERkZBOEE4QThGRjUyRkQwRkY4NTJGRkE4RkZBOEE4QThG
RjdERkQxMkY4N0QKJUZENEJGRjdEN0Q3REE4QThGRkE4RkZBOEZGNTJGRDBGRjg3REZGQThGRkE4
RkY3REZEMTJGODUyQThGRAolNEFGRkE4N0Q1MjdEQThBOEE4RkZBOEE4QThGRjI3RkQwRUY4MjdB
OEZGQThGRjUyRkQxMUY4Mjc3RDdECiU3REE4RkQwN0ZGQThGRkE4RkZBOEZEMjNGRkE4RkZBOEZE
MTdGRkE4NTI3RDdERkZBOEZGQThGRkE4RkYKJUZGRkZGRDBGRjg1MkZGRkYyN0ZEMTFGODUyRkY3
RDdEN0RGRkE4N0Q1MjUyMjcyN0Y4MjdGODI3RjgyNwolMjc1MjdERkQwQkZGRkQwNEE4N0RBOEE4
QTg3REZEMDRBOEZGRkZGRkE4N0Q1MjI3RjgyN0Y4MjcyNzUyCiU3REZEMTRGRjdEN0Q1MkZEMDRB
OEZGQThBOEE4RkZBOEE4RkQwRkY4NTIyN0ZEMTBGODI3N0RGRkE4QTgKJTUyMjdGRDExRjgyNzUy
RkQwN0ZGQThGRDBDRjhGRkZGQTgyN0ZEMENGODdERkZGRkZGQThGOEY4Rjg3RAolMjdGODUyN0RG
OEY4RkQwNEZGN0Q3RDdEQThGRkE4RkZBOEZGQThGRkE4RkZGRjdERkQxRkY4MjdBOEZGCiVGRjdE
MjdGRDE2RjhBOEZEMDVGRkE4RkQwQkY4MjdGRjdERkQwRkY4N0RGRkZGRkZBOEY4N0RGRjI3RjgK
JTI3MjdGOEY4RkZGRkZGQTg1MjdEN0RBOEE4RkZBOEE4QThGRkE4QThBOEZGQTg1MkZEMURGODUy
QThGRgolQTg1MkZEMTlGODdERkQwNEZGNTJGRDBCRjgyNzI3RkQxMUY4QThGRkZGQThGODUyRkZG
ODI3RjhGODUyCiVGOEZGRkZGRkE4N0Q3REE4QThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQTg1MkZE
MUJGOEE4RkZGRkE4MjcKJUZEMEJGODdEQThGRjdENTJGRDBCRjhGRDA0RkY1MkZEMUVGODUyRkZG
RkZGNTJBOEZGRkQwNDdEQTg1MgolRkZGRkZGQTg1MjdEN0RGRkE4RkZBOEE4QThGRkE4QThBOEZG
QThGRkE4MjdGRDE4RjgyN0E4QThGRkE4CiUyN0ZEMEFGODI3RkQwNkZGMjdGRDBBRjgyN0ZGRkZG
RkZEMEZGODI3MjdGRDBFRjg1MkZEMEZGRkE4N0QKJTUyQThBOEZGQThGRkE4RkZBOEZGQThGRkE4
RkZBOEZGQTgyN0ZEMTZGODUyRkZGRkZGQTg1MkZEMEJGOAolRkQwN0ZGNTJGRDBBRjgyN0ZGRkZB
OEZEMERGODI3QThGRkZGRkY1MkZEMENGODUyRkQwRkZGQTg1MjdECiU3REZGQThBOEE4RkZBOEE4
QThGRkE4QThBOEZGQThGRjdERkQxNUY4NTJGRkE4QThBODdERkQwQkY4NTIKJUZEMDRGRkE4RkZG
RjUyRkQwQkY4QThGRjdERkQwQ0Y4MjdGRDA1RkZBOEZEMENGODUyRkQwRkZGQTg3RAolN0RBOEE4
RkZBOEZGQThGRkE4RkZBOEZGQThGRkE4RkZBOEZGNTJGRDEyRjgyN0E4RkZBOEZGQThGRjI3CiVG
RDBDRjhGRDA3MjdGRDBDRjhBOEZGNTJGRDBDRjhGRDA3RkZGRDBDRjg3REZEMEZGRkE4NTI3RDdE
RkYKJUE4QThBOEZGQThBOEE4RkZBOEE4QThGRkE4RkY1MkZEMTJGODUyQThGRkE4QThBOEZGN0RG
RDIwRjhGRgolRkYyN0ZEMEJGODUyRkQwNkZGN0RGRDBDRjhBOEZEMEZGRkE4N0Q3REE4QThGRkE4
RkZBOEZGQThGRkE4CiVGRkE4RkZGRkE4MjdGRDEyRjgyN0E4RkZBOEZGQThGRkE4N0RGRDFGRjgy
N0ZGRkZGRDBDRjg3REZEMDYKJUZGN0RGRDBCRjgyN0ZEMTBGRkE4NTI3RDdEQThBOEZGQThBOEE4
RkZBOEE4QThGRkE4N0RGRDE1Rjg1MgolQThBOEE4RkZBOEZGNTJGRDBDRjgyN0Y4MjdGODI3Rjgy
N0Y4MjdGODI3RjgyN0Y4MjdGODI3RjgyNzUyCiVGRjdERkQwQ0Y4QThGRDA2RkYyN0ZEMEJGODI3
RkQxMEZGQTg3RDdEN0RBOEZGQThGRkE4RkZBOEZGQTgKJUZGRkY3REZEMTdGODdERkZGRkE4RkZB
ODUyRkQwQkY4QThGRDE1RkY3REZEMEJGODI3RkQwN0ZGMjdGRAolMEJGODdERkQxMUZGN0Q3RDdE
QThBOEZGQThBOEE4RkZBOEZGQTg1MkZEMTlGODdEQThGRkE4RkY1MkZECiUwQkY4N0RGRDE1RkYy
N0ZEMEJGODI3RkQwNkZGQThGRDBDRjhBOEZEMTFGRkE4NTI3REE4RkZBOEZGQTgKJUZGQThGRkE4
MjdGRDFCRjhBOEE4RkZGRjdERkQwQkY4N0RGRDA2RkY3RDI3Mjc1MjI3NTIyNzUyMjc1MgolMjcy
NzdERkZGRjI3RkQwQkY4N0RGRDA2RkY3REZEMENGOEZEMTJGRjdEN0Q1MkZEMDZBOEZGNTJGRDFE
CiVGODI3RkZBOEZGQThGRDBDRjhBOEZEMDRGRjUyRkQwQ0Y4QThGRkE4RkQwQ0Y4N0RGRDA2RkY1
MkZEMEIKJUY4NTJGRDEzRkY3RDdEN0RGRkE4RkZGRkZGNTJGRDFGRjg1MkZGQThGRjdERkQwQ0Y4
NTI1MjUyRkQwRAolRjhBOEZGRkY3REZEMENGOEZEMDdGRjI3RkQwQkY4NTJGRDEzRkY3RDUyN0RB
OEZGQThBODI3RkQxMUY4CiUyNzI3RkQwRUY4NTJGRkE4RkY1MkZEMTlGODI3QThGRkZGRkY1MkZE
MEJGODUyRkQwNkZGQThGRDBDRjgKJUE4RkQxM0ZGQTg3RDUyQThGRjdERkQxMkY4NTJGRjUyRkQw
RkY4N0RGRkE4RkY3RDI3RkQxNUY4Mjc3RAolRkQwNUZGRkQwQ0Y4NTJGRDA2RkY3REZEMENGOEZE
MTVGRjUyN0Q3RDUyRkQxMkY4N0RGRkE4RkYyN0ZECiUwRUY4MjdBOEZGQThGRkE4N0QyNzI3RkQw
RkY4NTI1MkE4RkQwNkZGQTg1MjI3MjcyNzUyMjcyNzI3NTIKJTI3MjcyN0E4RkQwNkZGNTJGRDA0
Mjc1MjI3MjcyNzUyMjcyNzUyRkQxNUZGQTg1MjI3RkQxMUY4MjdBOAolRkZBOEZGQThBOEZEMEZG
ODUyRkZGRkE4RkY3RDdEN0RBOEE4QTg3RDdENTI3RDUyRkQwNDdEQThBOEZECiU0MEZGN0RGRDEy
Rjg1MkE4RkZBOEZGQThBOEE4N0RGRDBGRjg3REZGRkY3RDdENTI3REZENERGRjdERkQKJTEyRjg3
REE4RkZBOEZGQThGRkE4RkZBODUyRkQwRkY4QThBODdEN0Q3REE4RkQ0Q0ZGNTJGRDEyRjhGRAol
MDRBOEZGQThBOEE4RkZBOEZGN0RGRDEwRjhGRDA0N0RGRDRDRkYyN0ZEMTFGODI3RkZGRkZGQThG
RkE4CiVGRkE4RkZBOEZGQThGRjdERkQwRkY4Mjc3RDdERkQ0QkZGQThGRDEyRjg1MkZGQThGRkE4
QThBOEZGQTgKJUE4QThGRkE4QThBOEZGMjdGRDBGRjgyN0ZENEJGRjdERkQwRkY4MjdGODI3N0RG
RkE4RkZBOEZGQThGRgolQThGRkE4RkZBOEZGQThGRkE4QThGRDEwRjg3REZENEFGRjdERkQwQUE4
N0Q1MjUyNTI3RDdEQThBOEZGCiVBOEZGQThBOEE4RkZBOEE4QThGRkE4QThBOEZGRkQwNEE4N0RB
ODdEQTg3REE4N0RBODdERkQwNDUyQTgKJUE4QThGRDU2RkZBODdEN0Q3REE4QThGRkE4RkZBOEZG
QThGRkE4RkZBOEZGQThGRkE4RkZBOEZGQThGRgolQThGRkE4RkQwNUZGQThGRDA0N0RGRDVCRkZB
ODUyN0Q1MjdEN0RBOEE4RkZBOEE4QThGRkE4QThBOEZGCiVBOEE4QThGRkE4QThBOEZGQThBOEE4
RkZBOEE4RkQwNDdENTJGRDVFRkZGRDA1N0RBOEE4RkZBOEZGQTgKJUZGQThGRkE4RkZBOEZGQThG
RkE4RkZBOEZGQThGRkE4QThGRDA1N0RGRDYwRkY3RDdENTI3RDUyN0Q3RAolRkQwNEE4RkZBOEE4
QThGRkE4QThBOEZGRkQwNEE4N0Q3RDUyN0Q1MjdEN0RGRDYyRkZBOEE4RkQwNzdECiVGRDA0QThG
RkZEMDZBOEZEMDc3REE4RkQ2N0ZGN0Q3RDUyN0Q1MjdENTI3RDUyN0Q1MjdEN0Q3RDUyN0QKJTUy
N0Q1MjdENTI3RDdERkQ2QkZGQThGRDA0N0Q1MjdEN0Q3RDUyN0Q3RDdENTJGRDA0N0RGRDcwRkZG
RAolMDRBOEZEMDc3REE4QThGRkE4RkQ1OEZGRkYKJSVFbmREYXRhCiUlRW5kQ29tbWVudHMKJSVC
ZWdpbkRlZmF1bHRzCiUlVmlld2luZ09yaWVudGF0aW9uOiAxIDAgMCAxCiUlRW5kRGVmYXVsdHMK
JSVCZWdpblByb2xvZwolJUJlZ2luUmVzb3VyY2U6IHByb2NzZXQgQWRvYmVfQUdNX1V0aWxzIDEu
MCAwCiUlVmVyc2lvbjogMS4wIDAKJSVDb3B5cmlnaHQ6IENvcHlyaWdodCAoQykgMjAwMC0yMDAz
IEFkb2JlIFN5c3RlbXMsIEluYy4gIEFsbCBSaWdodHMgUmVzZXJ2ZWQuCnN5c3RlbWRpY3QgL3Nl
dHBhY2tpbmcga25vd24KewoJY3VycmVudHBhY2tpbmcKCXRydWUgc2V0cGFja2luZwp9IGlmCnVz
ZXJkaWN0IC9BZG9iZV9BR01fVXRpbHMgNjggZGljdCBkdXAgYmVnaW4gcHV0Ci9iZGYKewoJYmlu
ZCBkZWYKfSBiaW5kIGRlZgovbmR7CgludWxsIGRlZgp9YmRmCi94ZGYKewoJZXhjaCBkZWYKfWJk
ZgovbGRmIAp7Cglsb2FkIGRlZgp9YmRmCi9kZGYKewoJcHV0Cn1iZGYJCi94ZGRmCnsKCTMgLTEg
cm9sbCBwdXQKfWJkZgkKL3hwdAp7CglleGNoIHB1dAp9YmRmCi9uZGYKeyAKCWV4Y2ggZHVwIHdo
ZXJlewoJCXBvcCBwb3AgcG9wCgl9ewoJCXhkZgoJfWlmZWxzZQp9ZGVmCi9jZG5kZgp7CglleGNo
IGR1cCBjdXJyZW50ZGljdCBleGNoIGtub3duewoJCXBvcCBwb3AKCX17CgkJZXhjaCBkZWYKCX1p
ZmVsc2UKfWRlZgovYmRpY3QKewoJbWFyawp9YmRmCi9lZGljdAp7Cgljb3VudHRvbWFyayAyIGlk
aXYgZHVwIGRpY3QgYmVnaW4ge2RlZn0gcmVwZWF0IHBvcCBjdXJyZW50ZGljdCBlbmQKfWRlZgov
cHNfbGV2ZWwKCS9sYW5ndWFnZWxldmVsIHdoZXJlewoJCXBvcCBzeXN0ZW1kaWN0IC9sYW5ndWFn
ZWxldmVsIGdldCBleGVjCgl9ewoJCTEKCX1pZmVsc2UKZGVmCi9sZXZlbDIgCglwc19sZXZlbCAy
IGdlCmRlZgovbGV2ZWwzIAoJcHNfbGV2ZWwgMyBnZQpkZWYKL3BzX3ZlcnNpb24KCXt2ZXJzaW9u
IGN2cn0gc3RvcHBlZCB7CgkJLTEKCX1pZgpkZWYKL21ha2VyZWFkb25seWFycmF5CnsKCS9wYWNr
ZWRhcnJheSB3aGVyZXsKCQlwb3AgcGFja2VkYXJyYXkKCX17CgkJYXJyYXkgYXN0b3JlIHJlYWRv
bmx5Cgl9aWZlbHNlCn1iZGYKL21hcF9yZXNlcnZlZF9pbmtfbmFtZQp7CglkdXAgdHlwZSAvc3Ry
aW5ndHlwZSBlcXsKCQlkdXAgL1JlZCBlcXsKCQkJcG9wIChfUmVkXykKCQl9ewoJCQlkdXAgL0dy
ZWVuIGVxewoJCQkJcG9wIChfR3JlZW5fKQoJCQl9ewoJCQkJZHVwIC9CbHVlIGVxewoJCQkJCXBv
cCAoX0JsdWVfKQoJCQkJfXsKCQkJCQlkdXAgKCkgY3ZuIGVxewoJCQkJCQlwb3AgKFByb2Nlc3Mp
CgkJCQkJfWlmCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgl9aWYKfWJkZgovQUdN
VVRJTF9HU1RBVEUgMjIgZGljdCBkZWYKL2dldF9nc3RhdGUKewoJQUdNVVRJTF9HU1RBVEUgYmVn
aW4KCS9BR01VVElMX0dTVEFURV9jbHJfc3BjIGN1cnJlbnRjb2xvcnNwYWNlIGRlZgoJL0FHTVVU
SUxfR1NUQVRFX2Nscl9pbmR4IDAgZGVmCgkvQUdNVVRJTF9HU1RBVEVfY2xyX2NvbXBzIDEyIGFy
cmF5IGRlZgoJbWFyayBjdXJyZW50Y29sb3IgY291bnR0b21hcmsKCQl7QUdNVVRJTF9HU1RBVEVf
Y2xyX2NvbXBzIEFHTVVUSUxfR1NUQVRFX2Nscl9pbmR4IDMgLTEgcm9sbCBwdXQKCQkvQUdNVVRJ
TF9HU1RBVEVfY2xyX2luZHggQUdNVVRJTF9HU1RBVEVfY2xyX2luZHggMSBhZGQgZGVmfSByZXBl
YXQgcG9wCgkvQUdNVVRJTF9HU1RBVEVfZm50IHJvb3Rmb250IGRlZgoJL0FHTVVUSUxfR1NUQVRF
X2x3IGN1cnJlbnRsaW5ld2lkdGggZGVmCgkvQUdNVVRJTF9HU1RBVEVfbGMgY3VycmVudGxpbmVj
YXAgZGVmCgkvQUdNVVRJTF9HU1RBVEVfbGogY3VycmVudGxpbmVqb2luIGRlZgoJL0FHTVVUSUxf
R1NUQVRFX21sIGN1cnJlbnRtaXRlcmxpbWl0IGRlZgoJY3VycmVudGRhc2ggL0FHTVVUSUxfR1NU
QVRFX2RvIHhkZiAvQUdNVVRJTF9HU1RBVEVfZGEgeGRmCgkvQUdNVVRJTF9HU1RBVEVfc2EgY3Vy
cmVudHN0cm9rZWFkanVzdCBkZWYKCS9BR01VVElMX0dTVEFURV9jbHJfcm5kIGN1cnJlbnRjb2xv
cnJlbmRlcmluZyBkZWYKCS9BR01VVElMX0dTVEFURV9vcCBjdXJyZW50b3ZlcnByaW50IGRlZgoJ
L0FHTVVUSUxfR1NUQVRFX2JnIGN1cnJlbnRibGFja2dlbmVyYXRpb24gY3ZsaXQgZGVmCgkvQUdN
VVRJTF9HU1RBVEVfdWNyIGN1cnJlbnR1bmRlcmNvbG9ycmVtb3ZhbCBjdmxpdCBkZWYKCWN1cnJl
bnRjb2xvcnRyYW5zZmVyIGN2bGl0IC9BR01VVElMX0dTVEFURV9neV94ZmVyIHhkZiBjdmxpdCAv
QUdNVVRJTF9HU1RBVEVfYl94ZmVyIHhkZgoJCWN2bGl0IC9BR01VVElMX0dTVEFURV9nX3hmZXIg
eGRmIGN2bGl0IC9BR01VVElMX0dTVEFURV9yX3hmZXIgeGRmCgkvQUdNVVRJTF9HU1RBVEVfaHQg
Y3VycmVudGhhbGZ0b25lIGRlZgoJL0FHTVVUSUxfR1NUQVRFX2ZsdCBjdXJyZW50ZmxhdCBkZWYK
CWVuZAp9ZGVmCi9zZXRfZ3N0YXRlCnsKCUFHTVVUSUxfR1NUQVRFIGJlZ2luCglBR01VVElMX0dT
VEFURV9jbHJfc3BjIHNldGNvbG9yc3BhY2UKCUFHTVVUSUxfR1NUQVRFX2Nscl9pbmR4IHtBR01V
VElMX0dTVEFURV9jbHJfY29tcHMgQUdNVVRJTF9HU1RBVEVfY2xyX2luZHggMSBzdWIgZ2V0Cgkv
QUdNVVRJTF9HU1RBVEVfY2xyX2luZHggQUdNVVRJTF9HU1RBVEVfY2xyX2luZHggMSBzdWIgZGVm
fSByZXBlYXQgc2V0Y29sb3IKCUFHTVVUSUxfR1NUQVRFX2ZudCBzZXRmb250CglBR01VVElMX0dT
VEFURV9sdyBzZXRsaW5ld2lkdGgKCUFHTVVUSUxfR1NUQVRFX2xjIHNldGxpbmVjYXAKCUFHTVVU
SUxfR1NUQVRFX2xqIHNldGxpbmVqb2luCglBR01VVElMX0dTVEFURV9tbCBzZXRtaXRlcmxpbWl0
CglBR01VVElMX0dTVEFURV9kYSBBR01VVElMX0dTVEFURV9kbyBzZXRkYXNoCglBR01VVElMX0dT
VEFURV9zYSBzZXRzdHJva2VhZGp1c3QKCUFHTVVUSUxfR1NUQVRFX2Nscl9ybmQgc2V0Y29sb3Jy
ZW5kZXJpbmcKCUFHTVVUSUxfR1NUQVRFX29wIHNldG92ZXJwcmludAoJQUdNVVRJTF9HU1RBVEVf
YmcgY3Z4IHNldGJsYWNrZ2VuZXJhdGlvbgoJQUdNVVRJTF9HU1RBVEVfdWNyIGN2eCBzZXR1bmRl
cmNvbG9ycmVtb3ZhbAoJQUdNVVRJTF9HU1RBVEVfcl94ZmVyIGN2eCBBR01VVElMX0dTVEFURV9n
X3hmZXIgY3Z4IEFHTVVUSUxfR1NUQVRFX2JfeGZlciBjdngKCQlBR01VVElMX0dTVEFURV9neV94
ZmVyIGN2eCBzZXRjb2xvcnRyYW5zZmVyCglBR01VVElMX0dTVEFURV9odCAvSGFsZnRvbmVUeXBl
IGdldCBkdXAgOSBlcSBleGNoIDEwMCBlcSBvcgoJCXsKCQljdXJyZW50aGFsZnRvbmUgL0hhbGZ0
b25lVHlwZSBnZXQgQUdNVVRJTF9HU1RBVEVfaHQgL0hhbGZ0b25lVHlwZSBnZXQgbmUKCQkJewoJ
CQkgIG1hcmsgQUdNVVRJTF9HU1RBVEVfaHQge3NldGhhbGZ0b25lfSBzdG9wcGVkIGNsZWFydG9t
YXJrCgkJCX0gaWYKCQl9ewoJCUFHTVVUSUxfR1NUQVRFX2h0IHNldGhhbGZ0b25lCgkJfSBpZmVs
c2UKCUFHTVVUSUxfR1NUQVRFX2ZsdCBzZXRmbGF0CgllbmQKfWRlZgovZ2V0X2dzdGF0ZV9hbmRf
bWF0cml4CnsKCUFHTVVUSUxfR1NUQVRFIGJlZ2luCgkvQUdNVVRJTF9HU1RBVEVfY3RtIG1hdHJp
eCBjdXJyZW50bWF0cml4IGRlZgoJZW5kCglnZXRfZ3N0YXRlCn1kZWYKL3NldF9nc3RhdGVfYW5k
X21hdHJpeAp7CglzZXRfZ3N0YXRlCglBR01VVElMX0dTVEFURSBiZWdpbgoJQUdNVVRJTF9HU1RB
VEVfY3RtIHNldG1hdHJpeAoJZW5kCn1kZWYKL0FHTVVUSUxfc3RyMjU2IDI1NiBzdHJpbmcgZGVm
Ci9BR01VVElMX3NyYzI1NiAyNTYgc3RyaW5nIGRlZgovQUdNVVRJTF9kc3Q2NCA2NCBzdHJpbmcg
ZGVmCi9BR01VVElMX3NyY0xlbiBuZAovQUdNVVRJTF9uZHggbmQKL2FnbV9zZXRoYWxmdG9uZQp7
IAoJZHVwCgliZWdpbgoJCS9fRGF0YSBsb2FkCgkJL1RocmVzaG9sZHMgeGRmCgllbmQKCWxldmVs
MyAKCXsgc2V0aGFsZnRvbmUgfXsKCQlkdXAgL0hhbGZ0b25lVHlwZSBnZXQgMyBlcSB7CgkJCXNl
dGhhbGZ0b25lCgkJfSB7cG9wfSBpZmVsc2UKCX1pZmVsc2UKfSBkZWYgCi9yZGNtbnRsaW5lCnsK
CWN1cnJlbnRmaWxlIEFHTVVUSUxfc3RyMjU2IHJlYWRsaW5lIHBvcAoJKCUpIGFuY2hvcnNlYXJj
aCB7cG9wfSBpZgp9IGJkZgovZmlsdGVyX2NteWsKewkKCWR1cCB0eXBlIC9maWxldHlwZSBuZXsK
CQlleGNoICgpIC9TdWJGaWxlRGVjb2RlIGZpbHRlcgoJfQoJewoJCWV4Y2ggcG9wCgl9CglpZmVs
c2UKCVsKCWV4Y2gKCXsKCQlBR01VVElMX3NyYzI1NiByZWFkc3RyaW5nIHBvcAoJCWR1cCBsZW5n
dGggL0FHTVVUSUxfc3JjTGVuIGV4Y2ggZGVmCgkJL0FHTVVUSUxfbmR4IDAgZGVmCgkJQUdNQ09S
RV9wbGF0ZV9uZHggNCBBR01VVElMX3NyY0xlbiAxIHN1YnsKCQkJMSBpbmRleCBleGNoIGdldAoJ
CQlBR01VVElMX2RzdDY0IEFHTVVUSUxfbmR4IDMgLTEgcm9sbCBwdXQKCQkJL0FHTVVUSUxfbmR4
IEFHTVVUSUxfbmR4IDEgYWRkIGRlZgoJCX1mb3IKCQlwb3AKCQlBR01VVElMX2RzdDY0IDAgQUdN
VVRJTF9uZHggZ2V0aW50ZXJ2YWwKCX0KCWJpbmQKCS9leGVjIGN2eAoJXSBjdngKfSBiZGYKL2Zp
bHRlcl9pbmRleGVkX2Rldm4KewoJY3ZpIE5hbWVzIGxlbmd0aCBtdWwgbmFtZXNfaW5kZXggYWRk
IExvb2t1cCBleGNoIGdldAp9IGJkZgovZmlsdGVyX2Rldm4KewkKCTQgZGljdCBiZWdpbgoJL3Ny
Y1N0ciB4ZGYKCS9kc3RTdHIgeGRmCglkdXAgdHlwZSAvZmlsZXR5cGUgbmV7CgkJMCAoKSAvU3Vi
RmlsZURlY29kZSBmaWx0ZXIKCX1pZgoJWwoJZXhjaAoJCVsKCQkJL2RldmljZW5fY29sb3JzcGFj
ZV9kaWN0IC9BR01DT1JFX2dnZXQgY3Z4IC9iZWdpbiBjdngKCQkJY3VycmVudGRpY3QgL3NyY1N0
ciBnZXQgL3JlYWRzdHJpbmcgY3Z4IC9wb3AgY3Z4CgkJCS9kdXAgY3Z4IC9sZW5ndGggY3Z4IDAg
L2d0IGN2eCBbCgkJCQlBZG9iZV9BR01fVXRpbHMgL0FHTVVUSUxfbmR4IDAgL2RkZiBjdngKCQkJ
CW5hbWVzX2luZGV4IE5hbWVzIGxlbmd0aCBjdXJyZW50ZGljdCAvc3JjU3RyIGdldCBsZW5ndGgg
MSBzdWIgewoJCQkJCTEgL2luZGV4IGN2eCAvZXhjaCBjdnggL2dldCBjdngKCQkJCQljdXJyZW50
ZGljdCAvZHN0U3RyIGdldCAvQUdNVVRJTF9uZHggL2xvYWQgY3Z4IDMgLTEgL3JvbGwgY3Z4IC9w
dXQgY3Z4CgkJCQkJQWRvYmVfQUdNX1V0aWxzIC9BR01VVElMX25keCAvQUdNVVRJTF9uZHggL2xv
YWQgY3Z4IDEgL2FkZCBjdnggL2RkZiBjdngKCQkJCX0gZm9yCgkJCQljdXJyZW50ZGljdCAvZHN0
U3RyIGdldCAwIC9BR01VVElMX25keCAvbG9hZCBjdnggL2dldGludGVydmFsIGN2eAoJCQldIGN2
eCAvaWYgY3Z4CgkJCS9lbmQgY3Z4CgkJXSBjdngKCQliaW5kCgkJL2V4ZWMgY3Z4CgldIGN2eAoJ
ZW5kCn0gYmRmCi9BR01VVElMX2ltYWdlZmlsZSBuZAovcmVhZF9pbWFnZV9maWxlCnsKCUFHTVVU
SUxfaW1hZ2VmaWxlIDAgc2V0ZmlsZXBvc2l0aW9uCgkxMCBkaWN0IGJlZ2luCgkvaW1hZ2VEaWN0
IHhkZgoJL2ltYnVmTGVuIFdpZHRoIEJpdHNQZXJDb21wb25lbnQgbXVsIDcgYWRkIDggaWRpdiBk
ZWYKCS9pbWJ1ZklkeCAwIGRlZgoJL29yaWdEYXRhU291cmNlIGltYWdlRGljdCAvRGF0YVNvdXJj
ZSBnZXQgZGVmCgkvb3JpZ011bHRpcGxlRGF0YVNvdXJjZXMgaW1hZ2VEaWN0IC9NdWx0aXBsZURh
dGFTb3VyY2VzIGdldCBkZWYKCS9vcmlnRGVjb2RlIGltYWdlRGljdCAvRGVjb2RlIGdldCBkZWYK
CS9kc3REYXRhU3RyIGltYWdlRGljdCAvV2lkdGggZ2V0IGNvbG9yU3BhY2VFbGVtQ250IG11bCBz
dHJpbmcgZGVmCgkvc3JjRGF0YVN0cnMgWyBpbWFnZURpY3QgYmVnaW4KCQljdXJyZW50ZGljdCAv
TXVsdGlwbGVEYXRhU291cmNlcyBrbm93biB7TXVsdGlwbGVEYXRhU291cmNlcyB7RGF0YVNvdXJj
ZSBsZW5ndGh9ezF9aWZlbHNlfXsxfSBpZmVsc2UKCQl7CgkJCVdpZHRoIERlY29kZSBsZW5ndGgg
MiBkaXYgbXVsIGN2aSBzdHJpbmcKCQl9IHJlcGVhdAoJCWVuZCBdIGRlZgoJaW1hZ2VEaWN0IC9N
dWx0aXBsZURhdGFTb3VyY2VzIGtub3duIHtNdWx0aXBsZURhdGFTb3VyY2VzfXtmYWxzZX0gaWZl
bHNlCgl7CgkJL2ltYnVmQ250IGltYWdlRGljdCAvRGF0YVNvdXJjZSBnZXQgbGVuZ3RoIGRlZgoJ
CS9pbWJ1ZnMgaW1idWZDbnQgYXJyYXkgZGVmCgkJMCAxIGltYnVmQ250IDEgc3ViIHsKCQkJL2lt
YnVmSWR4IHhkZgoJCQlpbWJ1ZnMgaW1idWZJZHggaW1idWZMZW4gc3RyaW5nIHB1dAoJCQlpbWFn
ZURpY3QgL0RhdGFTb3VyY2UgZ2V0IGltYnVmSWR4IFsgQUdNVVRJTF9pbWFnZWZpbGUgaW1idWZz
IGltYnVmSWR4IGdldCAvcmVhZHN0cmluZyBjdnggL3BvcCBjdnggXSBjdnggcHV0CgkJfSBmb3IK
CQlEZXZpY2VOX1BTMiB7CgkJCWltYWdlRGljdCBiZWdpbgoJCSAJL0RhdGFTb3VyY2UgWyBEYXRh
U291cmNlIC9kZXZuX3NlcF9kYXRhc291cmNlIGN2eCBdIGN2eCBkZWYKCQkJL011bHRpcGxlRGF0
YVNvdXJjZXMgZmFsc2UgZGVmCgkJCS9EZWNvZGUgWzAgMV0gZGVmCgkJCWVuZAoJCX0gaWYKCX17
CgkJL2ltYnVmIGltYnVmTGVuIHN0cmluZyBkZWYKCQlJbmRleGVkX0RldmljZU4gbGV2ZWwzIG5v
dCBhbmQgRGV2aWNlTl9Ob25lTmFtZSBvciB7CgkJCWltYWdlRGljdCBiZWdpbgoJCSAJL0RhdGFT
b3VyY2UgW0FHTVVUSUxfaW1hZ2VmaWxlIERlY29kZSBCaXRzUGVyQ29tcG9uZW50IGZhbHNlIDEg
L2ZpbHRlcl9pbmRleGVkX2Rldm4gbG9hZCBkc3REYXRhU3RyIHNyY0RhdGFTdHJzIGRldm5fYWx0
X2RhdGFzb3VyY2UgL2V4ZWMgY3Z4XSBjdnggZGVmCgkJCS9EZWNvZGUgWzAgMV0gZGVmCgkJCWVu
ZAoJCX17CgkJCWltYWdlRGljdCAvRGF0YVNvdXJjZSB7QUdNVVRJTF9pbWFnZWZpbGUgaW1idWYg
cmVhZHN0cmluZyBwb3B9IHB1dAoJCX0gaWZlbHNlCgl9IGlmZWxzZQoJaW1hZ2VEaWN0IGV4Y2gK
CWxvYWQgZXhlYwoJaW1hZ2VEaWN0IC9EYXRhU291cmNlIG9yaWdEYXRhU291cmNlIHB1dAoJaW1h
Z2VEaWN0IC9NdWx0aXBsZURhdGFTb3VyY2VzIG9yaWdNdWx0aXBsZURhdGFTb3VyY2VzIHB1dAoJ
aW1hZ2VEaWN0IC9EZWNvZGUgb3JpZ0RlY29kZSBwdXQJCgllbmQKfSBiZGYKL3dyaXRlX2ltYWdl
X2ZpbGUKewoJYmVnaW4KCXsgKEFHTVVUSUxfaW1hZ2VmaWxlKSAodyspIGZpbGUgfSBzdG9wcGVk
ewoJCWZhbHNlCgl9ewoJCUFkb2JlX0FHTV9VdGlscy9BR01VVElMX2ltYWdlZmlsZSB4ZGRmIAoJ
CTIgZGljdCBiZWdpbgoJCS9pbWJ1ZkxlbiBXaWR0aCBCaXRzUGVyQ29tcG9uZW50IG11bCA3IGFk
ZCA4IGlkaXYgZGVmCgkJTXVsdGlwbGVEYXRhU291cmNlcyB7RGF0YVNvdXJjZSAwIGdldH17RGF0
YVNvdXJjZX1pZmVsc2UgdHlwZSAvZmlsZXR5cGUgZXEgewoJCQkvaW1idWYgaW1idWZMZW4gc3Ry
aW5nIGRlZgoJCX1pZgoJCTEgMSBIZWlnaHQgeyAKCQkJcG9wCgkJCU11bHRpcGxlRGF0YVNvdXJj
ZXMgewoJCQkgCTAgMSBEYXRhU291cmNlIGxlbmd0aCAxIHN1YiB7CgkJCQkJRGF0YVNvdXJjZSB0
eXBlIGR1cAoJCQkJCS9hcnJheXR5cGUgZXEgewoJCQkJCQlwb3AgRGF0YVNvdXJjZSBleGNoIGdl
dCBleGVjCgkJCQkJfXsKCQkJCQkJL2ZpbGV0eXBlIGVxIHsKCQkJCQkJCURhdGFTb3VyY2UgZXhj
aCBnZXQgaW1idWYgcmVhZHN0cmluZyBwb3AKCQkJCQkJfXsKCQkJCQkJCURhdGFTb3VyY2UgZXhj
aCBnZXQKCQkJCQkJfSBpZmVsc2UKCQkJCQl9IGlmZWxzZQoJCQkJCUFHTVVUSUxfaW1hZ2VmaWxl
IGV4Y2ggd3JpdGVzdHJpbmcKCQkJCX0gZm9yCgkJCX17CgkJCQlEYXRhU291cmNlIHR5cGUgZHVw
CgkJCQkvYXJyYXl0eXBlIGVxIHsKCQkJCQlwb3AgRGF0YVNvdXJjZSBleGVjCgkJCQl9ewoJCQkJ
CS9maWxldHlwZSBlcSB7CgkJCQkJCURhdGFTb3VyY2UgaW1idWYgcmVhZHN0cmluZyBwb3AKCQkJ
CQl9ewoJCQkJCQlEYXRhU291cmNlCgkJCQkJfSBpZmVsc2UKCQkJCX0gaWZlbHNlCgkJCQlBR01V
VElMX2ltYWdlZmlsZSBleGNoIHdyaXRlc3RyaW5nCgkJCX0gaWZlbHNlCgkJfWZvcgoJCWVuZAoJ
CXRydWUKCX1pZmVsc2UKCWVuZAp9IGJkZgovY2xvc2VfaW1hZ2VfZmlsZQp7CglBR01VVElMX2lt
YWdlZmlsZSBjbG9zZWZpbGUgKEFHTVVUSUxfaW1hZ2VmaWxlKSBkZWxldGVmaWxlCn1kZWYKc3Rh
dHVzZGljdCAvcHJvZHVjdCBrbm93biB1c2VyZGljdCAvQUdNUF9jdXJyZW50X3Nob3cga25vd24g
bm90IGFuZHsKCS9wc3RyIHN0YXR1c2RpY3QgL3Byb2R1Y3QgZ2V0IGRlZgoJcHN0ciAoSFAgTGFz
ZXJKZXQgMjIwMCkgZXEgCQoJcHN0ciAoSFAgTGFzZXJKZXQgNDAwMCBTZXJpZXMpIGVxIG9yCglw
c3RyIChIUCBMYXNlckpldCA0MDUwIFNlcmllcyApIGVxIG9yCglwc3RyIChIUCBMYXNlckpldCA4
MDAwIFNlcmllcykgZXEgb3IKCXBzdHIgKEhQIExhc2VySmV0IDgxMDAgU2VyaWVzKSBlcSBvcgoJ
cHN0ciAoSFAgTGFzZXJKZXQgODE1MCBTZXJpZXMpIGVxIG9yCglwc3RyIChIUCBMYXNlckpldCA1
MDAwIFNlcmllcykgZXEgb3IKCXBzdHIgKEhQIExhc2VySmV0IDUxMDAgU2VyaWVzKSBlcSBvcgoJ
cHN0ciAoSFAgQ29sb3IgTGFzZXJKZXQgNDUwMCkgZXEgb3IKCXBzdHIgKEhQIENvbG9yIExhc2Vy
SmV0IDQ2MDApIGVxIG9yCglwc3RyIChIUCBMYXNlckpldCA1U2kpIGVxIG9yCglwc3RyIChIUCBM
YXNlckpldCAxMjAwIFNlcmllcykgZXEgb3IKCXBzdHIgKEhQIExhc2VySmV0IDEzMDAgU2VyaWVz
KSBlcSBvcgoJcHN0ciAoSFAgTGFzZXJKZXQgNDEwMCBTZXJpZXMpIGVxIG9yIAoJewogCQl1c2Vy
ZGljdCAvQUdNUF9jdXJyZW50X3Nob3cgL3Nob3cgbG9hZCBwdXQKCQl1c2VyZGljdCAvc2hvdyB7
CgkJICBjdXJyZW50Y29sb3JzcGFjZSAwIGdldAoJCSAgL1BhdHRlcm4gZXEKCQkgIHtmYWxzZSBj
aGFycGF0aCBmfQoJCSAge0FHTVBfY3VycmVudF9zaG93fSBpZmVsc2UKCQl9IHB1dAoJfWlmCglj
dXJyZW50ZGljdCAvcHN0ciB1bmRlZgp9IGlmCi9jb25zdW1laW1hZ2VkYXRhCnsKCWJlZ2luCglj
dXJyZW50ZGljdCAvTXVsdGlwbGVEYXRhU291cmNlcyBrbm93biBub3QKCQl7L011bHRpcGxlRGF0
YVNvdXJjZXMgZmFsc2UgZGVmfSBpZgoJTXVsdGlwbGVEYXRhU291cmNlcwoJCXsKCQkxIGRpY3Qg
YmVnaW4KCQkvZmx1c2hidWZmZXIgV2lkdGggY3ZpIHN0cmluZyBkZWYKCQkxIDEgSGVpZ2h0IGN2
aQoJCQl7CgkJCXBvcAoJCQkwIDEgRGF0YVNvdXJjZSBsZW5ndGggMSBzdWIKCQkJCXsKCQkJCURh
dGFTb3VyY2UgZXhjaCBnZXQKCQkJCWR1cCB0eXBlIGR1cCAKCQkJCS9maWxldHlwZSBlcQoJCQkJ
CXsKCQkJCQlleGNoIGZsdXNoYnVmZmVyIHJlYWRzdHJpbmcgcG9wIHBvcAoJCQkJCX1pZgoJCQkJ
L2FycmF5dHlwZSBlcQoJCQkJCXsKCQkJCQlleGVjIHBvcAoJCQkJCX1pZgoJCQkJfWZvcgoJCQl9
Zm9yCgkJZW5kCgkJfQoJCXsKCQkvRGF0YVNvdXJjZSBsb2FkIHR5cGUgZHVwIAoJCS9maWxldHlw
ZSBlcQoJCQl7CgkJCTEgZGljdCBiZWdpbgoJCQkvZmx1c2hidWZmZXIgV2lkdGggRGVjb2RlIGxl
bmd0aCAyIGRpdiBtdWwgY3ZpIHN0cmluZyBkZWYKCQkJMSAxIEhlaWdodCB7IHBvcCBEYXRhU291
cmNlIGZsdXNoYnVmZmVyIHJlYWRzdHJpbmcgcG9wIHBvcH0gZm9yCgkJCWVuZAoJCQl9aWYKCQkv
YXJyYXl0eXBlIGVxCgkJCXsKCQkJMSAxIEhlaWdodCB7IHBvcCBEYXRhU291cmNlIHBvcCB9IGZv
cgoJCQl9aWYKCQl9aWZlbHNlCgllbmQKfWJkZgovYWRkcHJvY3MKewoJICAyey9leGVjIGxvYWR9
cmVwZWF0CgkgIDMgMSByb2xsCgkgIFsgNSAxIHJvbGwgXSBiaW5kIGN2eAp9ZGVmCi9tb2RpZnlf
aGFsZnRvbmVfeGZlcgp7CgljdXJyZW50aGFsZnRvbmUgZHVwIGxlbmd0aCBkaWN0IGNvcHkgYmVn
aW4KCSBjdXJyZW50ZGljdCAyIGluZGV4IGtub3duewoJIAkxIGluZGV4IGxvYWQgZHVwIGxlbmd0
aCBkaWN0IGNvcHkgYmVnaW4KCQljdXJyZW50ZGljdC9UcmFuc2ZlckZ1bmN0aW9uIGtub3duewoJ
CQkvVHJhbnNmZXJGdW5jdGlvbiBsb2FkCgkJfXsKCQkJY3VycmVudHRyYW5zZmVyCgkJfWlmZWxz
ZQoJCSBhZGRwcm9jcyAvVHJhbnNmZXJGdW5jdGlvbiB4ZGYgCgkJIGN1cnJlbnRkaWN0IGVuZCBk
ZWYKCQljdXJyZW50ZGljdCBlbmQgc2V0aGFsZnRvbmUKCX17IAoJCWN1cnJlbnRkaWN0L1RyYW5z
ZmVyRnVuY3Rpb24ga25vd257CgkJCS9UcmFuc2ZlckZ1bmN0aW9uIGxvYWQgCgkJfXsKCQkJY3Vy
cmVudHRyYW5zZmVyCgkJfWlmZWxzZQoJCWFkZHByb2NzIC9UcmFuc2ZlckZ1bmN0aW9uIHhkZgoJ
CWN1cnJlbnRkaWN0IGVuZCBzZXRoYWxmdG9uZQkJCgkJcG9wCgl9aWZlbHNlCn1kZWYKL2Nsb25l
YXJyYXkKewoJZHVwIHhjaGVjayBleGNoCglkdXAgbGVuZ3RoIGFycmF5IGV4Y2gKCUFkb2JlX0FH
TV9Db3JlL0FHTUNPUkVfdG1wIC0xIGRkZiAKCXsKCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfdG1w
IEFHTUNPUkVfdG1wIDEgYWRkIGRkZiAKCWR1cCB0eXBlIC9kaWN0dHlwZSBlcQoJCXsKCQkJQUdN
Q09SRV90bXAKCQkJZXhjaAoJCQljbG9uZWRpY3QKCQkJQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV90
bXAgNCAtMSByb2xsIGRkZiAKCQl9IGlmCglkdXAgdHlwZSAvYXJyYXl0eXBlIGVxCgkJewoJCQlB
R01DT1JFX3RtcCBleGNoCgkJCWNsb25lYXJyYXkKCQkJQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV90
bXAgNCAtMSByb2xsIGRkZiAKCQl9IGlmCglleGNoIGR1cAoJQUdNQ09SRV90bXAgNCAtMSByb2xs
IHB1dAoJfWZvcmFsbAoJZXhjaCB7Y3Z4fSBpZgp9YmRmCi9jbG9uZWRpY3QKewoJZHVwIGxlbmd0
aCBkaWN0CgliZWdpbgoJCXsKCQlkdXAgdHlwZSAvZGljdHR5cGUgZXEKCQkJewoJCQkJY2xvbmVk
aWN0CgkJCX0gaWYKCQlkdXAgdHlwZSAvYXJyYXl0eXBlIGVxCgkJCXsKCQkJCWNsb25lYXJyYXkK
CQkJfSBpZgoJCWRlZgoJCX1mb3JhbGwKCWN1cnJlbnRkaWN0CgllbmQKfWJkZgovRGV2aWNlTl9Q
UzIKewoJL2N1cnJlbnRjb2xvcnNwYWNlIEFHTUNPUkVfZ2dldCAwIGdldCAvRGV2aWNlTiBlcSBs
ZXZlbDMgbm90IGFuZAp9IGJkZgovSW5kZXhlZF9EZXZpY2VOCnsKCS9pbmRleGVkX2NvbG9yc3Bh
Y2VfZGljdCBBR01DT1JFX2dnZXQgZHVwIG51bGwgbmUgewoJCS9DU0Qga25vd24KCX17CgkJcG9w
IGZhbHNlCgl9IGlmZWxzZQp9IGJkZgovRGV2aWNlTl9Ob25lTmFtZQp7CQoJL05hbWVzIHdoZXJl
IHsKCQlwb3AKCQlmYWxzZSBOYW1lcwoJCXsKCQkJKE5vbmUpIGVxIG9yCgkJfSBmb3JhbGwKCX17
CgkJZmFsc2UKCX1pZmVsc2UKfSBiZGYKL0RldmljZU5fUFMyX2luUmlwX3NlcHMKewoJL0FHTUNP
UkVfaW5fcmlwX3NlcCB3aGVyZQoJewoJCXBvcCBkdXAgdHlwZSBkdXAgL2FycmF5dHlwZSBlcSBl
eGNoIC9wYWNrZWRhcnJheXR5cGUgZXEgb3IKCQl7CgkJCWR1cCAwIGdldCAvRGV2aWNlTiBlcSBs
ZXZlbDMgbm90IGFuZCBBR01DT1JFX2luX3JpcF9zZXAgYW5kCgkJCXsKCQkJCS9jdXJyZW50Y29s
b3JzcGFjZSBleGNoIEFHTUNPUkVfZ3B1dAoJCQkJZmFsc2UKCQkJfQoJCQl7CgkJCQl0cnVlCgkJ
CX1pZmVsc2UKCQl9CgkJewoJCQl0cnVlCgkJfSBpZmVsc2UKCX0KCXsKCQl0cnVlCgl9IGlmZWxz
ZQp9IGJkZgovYmFzZV9jb2xvcnNwYWNlX3R5cGUKewoJZHVwIHR5cGUgL2FycmF5dHlwZSBlcSB7
MCBnZXR9IGlmCn0gYmRmCi9kb2Nfc2V0dXB7CglBZG9iZV9BR01fVXRpbHMgYmVnaW4KfWJkZgov
ZG9jX3RyYWlsZXJ7CgljdXJyZW50ZGljdCBBZG9iZV9BR01fVXRpbHMgZXF7CgkJZW5kCgl9aWYK
fWJkZgpzeXN0ZW1kaWN0IC9zZXRwYWNraW5nIGtub3duCnsKCXNldHBhY2tpbmcKfSBpZgolJUVu
ZFJlc291cmNlCiUlQmVnaW5SZXNvdXJjZTogcHJvY3NldCBBZG9iZV9BR01fQ29yZSAyLjAgMAol
JVZlcnNpb246IDIuMCAwCiUlQ29weXJpZ2h0OiBDb3B5cmlnaHQgKEMpIDE5OTctMjAwMyBBZG9i
ZSBTeXN0ZW1zLCBJbmMuICBBbGwgUmlnaHRzIFJlc2VydmVkLgpzeXN0ZW1kaWN0IC9zZXRwYWNr
aW5nIGtub3duCnsKCWN1cnJlbnRwYWNraW5nCgl0cnVlIHNldHBhY2tpbmcKfSBpZgp1c2VyZGlj
dCAvQWRvYmVfQUdNX0NvcmUgMjE2IGRpY3QgZHVwIGJlZ2luIHB1dAovbmR7CgludWxsIGRlZgp9
YmluZCBkZWYKL0Fkb2JlX0FHTV9Db3JlX0lkIC9BZG9iZV9BR01fQ29yZV8yLjBfMCBkZWYKL0FH
TUNPUkVfc3RyMjU2IDI1NiBzdHJpbmcgZGVmCi9BR01DT1JFX3NhdmUgbmQKL0FHTUNPUkVfZ3Jh
cGhpY3NhdmUgbmQKL0FHTUNPUkVfYyAwIGRlZgovQUdNQ09SRV9tIDAgZGVmCi9BR01DT1JFX3kg
MCBkZWYKL0FHTUNPUkVfayAwIGRlZgovQUdNQ09SRV9jbXlrYnVmIDQgYXJyYXkgZGVmCi9BR01D
T1JFX3NjcmVlbiBbY3VycmVudHNjcmVlbl0gY3Z4IGRlZgovQUdNQ09SRV90bXAgMCBkZWYKL0FH
TUNPUkVfJnNldGdyYXkgbmQKL0FHTUNPUkVfJnNldGNvbG9yIG5kCi9BR01DT1JFXyZzZXRjb2xv
cnNwYWNlIG5kCi9BR01DT1JFXyZzZXRjbXlrY29sb3IgbmQKL0FHTUNPUkVfY3lhbl9wbGF0ZSBu
ZAovQUdNQ09SRV9tYWdlbnRhX3BsYXRlIG5kCi9BR01DT1JFX3llbGxvd19wbGF0ZSBuZAovQUdN
Q09SRV9ibGFja19wbGF0ZSBuZAovQUdNQ09SRV9wbGF0ZV9uZHggbmQKL0FHTUNPUkVfZ2V0X2lu
a19kYXRhIG5kCi9BR01DT1JFX2lzX2NteWtfc2VwIG5kCi9BR01DT1JFX2hvc3Rfc2VwIG5kCi9B
R01DT1JFX2F2b2lkX0wyX3NlcF9zcGFjZSBuZAovQUdNQ09SRV9kaXN0aWxsaW5nIG5kCi9BR01D
T1JFX2NvbXBvc2l0ZV9qb2IgbmQKL0FHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbmQKL0FHTUNPUkVf
cHNfbGV2ZWwgLTEgZGVmCi9BR01DT1JFX3BzX3ZlcnNpb24gLTEgZGVmCi9BR01DT1JFX2Vudmly
b25fb2sgbmQKL0FHTUNPUkVfQ1NBX2NhY2hlIDAgZGljdCBkZWYKL0FHTUNPUkVfQ1NEX2NhY2hl
IDAgZGljdCBkZWYKL0FHTUNPUkVfcGF0dGVybl9jYWNoZSAwIGRpY3QgZGVmCi9BR01DT1JFX2N1
cnJlbnRvdmVycHJpbnQgZmFsc2UgZGVmCi9BR01DT1JFX2RlbHRhWCBuZAovQUdNQ09SRV9kZWx0
YVkgbmQKL0FHTUNPUkVfbmFtZSBuZAovQUdNQ09SRV9zZXBfc3BlY2lhbCBuZAovQUdNQ09SRV9l
cnJfc3RyaW5ncyA0IGRpY3QgZGVmCi9BR01DT1JFX2N1cl9lcnIgbmQKL0FHTUNPUkVfb3ZwIG5k
Ci9BR01DT1JFX2N1cnJlbnRfc3BvdF9hbGlhcyBmYWxzZSBkZWYKL0FHTUNPUkVfaW52ZXJ0aW5n
IGZhbHNlIGRlZgovQUdNQ09SRV9mZWF0dXJlX2RpY3RDb3VudCBuZAovQUdNQ09SRV9mZWF0dXJl
X29wQ291bnQgbmQKL0FHTUNPUkVfZmVhdHVyZV9jdG0gbmQKL0FHTUNPUkVfQ29udmVydFRvUHJv
Y2VzcyBmYWxzZSBkZWYKL0FHTUNPUkVfRGVmYXVsdF9DVE0gbWF0cml4IGRlZgovQUdNQ09SRV9E
ZWZhdWx0X1BhZ2VTaXplIG5kCi9BR01DT1JFX2N1cnJlbnRiZyBuZAovQUdNQ09SRV9jdXJyZW50
dWNyIG5kCi9BR01DT1JFX2dyYWRpZW50Y2FjaGUgMzIgZGljdCBkZWYKL0FHTUNPUkVfaW5fcGF0
dGVybiBmYWxzZSBkZWYKL2tub2Nrb3V0X3VuaXRzcSBuZAovQUdNQ09SRV9DUkRfY2FjaGUgd2hl
cmV7Cglwb3AKfXsKCS9BR01DT1JFX0NSRF9jYWNoZSAwIGRpY3QgZGVmCn1pZmVsc2UKL0FHTUNP
UkVfa2V5X2tub3duCnsKCXdoZXJlewoJCS9BZG9iZV9BR01fQ29yZV9JZCBrbm93bgoJfXsKCQlm
YWxzZQoJfWlmZWxzZQp9bmRmCi9mbHVzaGlucHV0CnsKCXNhdmUKCTIgZGljdCBiZWdpbgoJL0Nv
bXBhcmVCdWZmZXIgMyAtMSByb2xsIGRlZgoJL3JlYWRidWZmZXIgMjU2IHN0cmluZyBkZWYKCW1h
cmsKCXsKCWN1cnJlbnRmaWxlIHJlYWRidWZmZXIge3JlYWRsaW5lfSBzdG9wcGVkCgkJe2NsZWFy
dG9tYXJrIG1hcmt9CgkJewoJCW5vdAoJCQl7cG9wIGV4aXR9CgkJaWYKCQlDb21wYXJlQnVmZmVy
IGVxCgkJCXtleGl0fQoJCWlmCgkJfWlmZWxzZQoJfWxvb3AKCWNsZWFydG9tYXJrCgllbmQKCXJl
c3RvcmUKfWJkZgovZ2V0c3BvdGZ1bmN0aW9uCnsKCUFHTUNPUkVfc2NyZWVuIGV4Y2ggcG9wIGV4
Y2ggcG9wCglkdXAgdHlwZSAvZGljdHR5cGUgZXF7CgkJZHVwIC9IYWxmdG9uZVR5cGUgZ2V0IDEg
ZXF7CgkJCS9TcG90RnVuY3Rpb24gZ2V0CgkJfXsKCQkJZHVwIC9IYWxmdG9uZVR5cGUgZ2V0IDIg
ZXF7CgkJCQkvR3JheVNwb3RGdW5jdGlvbiBnZXQKCQkJfXsgCgkJCQlwb3AKCQkJCXsKCQkJCQlh
YnMgZXhjaCBhYnMgMiBjb3B5IGFkZCAxIGd0ewoJCQkJCQkxIHN1YiBkdXAgbXVsIGV4Y2ggMSBz
dWIgZHVwIG11bCBhZGQgMSBzdWIKCQkJCQl9ewoJCQkJCQlkdXAgbXVsIGV4Y2ggZHVwIG11bCBh
ZGQgMSBleGNoIHN1YgoJCQkJCX1pZmVsc2UKCQkJCX1iaW5kCgkJCX1pZmVsc2UKCQl9aWZlbHNl
Cgl9aWYKfSBkZWYKL2NscF9ucHRoCnsKCWNsaXAgbmV3cGF0aAp9IGRlZgovZW9jbHBfbnB0aAp7
Cgllb2NsaXAgbmV3cGF0aAp9IGRlZgovbnB0aF9jbHAKewoJbmV3cGF0aCBjbGlwCn0gZGVmCi9h
ZGRfZ3JhZAp7CglBR01DT1JFX2dyYWRpZW50Y2FjaGUgMyAxIHJvbGwgcHV0Cn1iZGYKL2V4ZWNf
Z3JhZAp7CglBR01DT1JFX2dyYWRpZW50Y2FjaGUgZXhjaCBnZXQgZXhlYwp9YmRmCi9ncmFwaGlj
X3NldHVwCnsKCS9BR01DT1JFX2dyYXBoaWNzYXZlIHNhdmUgZGVmCgljb25jYXQKCTAgc2V0Z3Jh
eQoJMCBzZXRsaW5lY2FwCgkwIHNldGxpbmVqb2luCgkxIHNldGxpbmV3aWR0aAoJW10gMCBzZXRk
YXNoCgkxMCBzZXRtaXRlcmxpbWl0CgluZXdwYXRoCglmYWxzZSBzZXRvdmVycHJpbnQKCWZhbHNl
IHNldHN0cm9rZWFkanVzdAoJQWRvYmVfQUdNX0NvcmUvc3BvdF9hbGlhcyBnZXQgZXhlYwoJL0Fk
b2JlX0FHTV9JbWFnZSB3aGVyZSB7CgkJcG9wCgkJQWRvYmVfQUdNX0ltYWdlL3Nwb3RfYWxpYXMg
MiBjb3B5IGtub3duewoJCQlnZXQgZXhlYwoJCX17CgkJCXBvcCBwb3AKCQl9aWZlbHNlCgl9IGlm
CgkxMDAgZGljdCBiZWdpbgoJL2RpY3RzdGFja2NvdW50IGNvdW50ZGljdHN0YWNrIGRlZgoJL3No
b3dwYWdlIHt9IGRlZgoJbWFyawp9IGRlZgovZ3JhcGhpY19jbGVhbnVwCnsKCWNsZWFydG9tYXJr
CglkaWN0c3RhY2tjb3VudCAxIGNvdW50ZGljdHN0YWNrIDEgc3ViIHtlbmR9Zm9yCgllbmQKCUFH
TUNPUkVfZ3JhcGhpY3NhdmUgcmVzdG9yZQp9IGRlZgovY29tcG9zZV9lcnJvcl9tc2cKewoJZ3Jl
c3RvcmVhbGwgaW5pdGdyYXBoaWNzCQoJL0hlbHZldGljYSBmaW5kZm9udCAxMCBzY2FsZWZvbnQg
c2V0Zm9udAoJL0FHTUNPUkVfZGVsdGFZIDEwMCBkZWYKCS9BR01DT1JFX2RlbHRhWCAzMTAgZGVm
CgljbGlwcGF0aCBwYXRoYmJveCBuZXdwYXRoIHBvcCBwb3AgMzYgYWRkIGV4Y2ggMzYgYWRkIGV4
Y2ggbW92ZXRvCgkwIEFHTUNPUkVfZGVsdGFZIHJsaW5ldG8gQUdNQ09SRV9kZWx0YVggMCBybGlu
ZXRvCgkwIEFHTUNPUkVfZGVsdGFZIG5lZyBybGluZXRvIEFHTUNPUkVfZGVsdGFYIG5lZyAwIHJs
aW5ldG8gY2xvc2VwYXRoCgkwIEFHTUNPUkVfJnNldGdyYXkKCWdzYXZlIDEgQUdNQ09SRV8mc2V0
Z3JheSBmaWxsIGdyZXN0b3JlIAoJMSBzZXRsaW5ld2lkdGggZ3NhdmUgc3Ryb2tlIGdyZXN0b3Jl
CgljdXJyZW50cG9pbnQgQUdNQ09SRV9kZWx0YVkgMTUgc3ViIGFkZCBleGNoIDggYWRkIGV4Y2gg
bW92ZXRvCgkvQUdNQ09SRV9kZWx0YVkgMTIgZGVmCgkvQUdNQ09SRV90bXAgMCBkZWYKCUFHTUNP
UkVfZXJyX3N0cmluZ3MgZXhjaCBnZXQKCQl7CgkJZHVwIDMyIGVxCgkJCXsKCQkJcG9wCgkJCUFH
TUNPUkVfc3RyMjU2IDAgQUdNQ09SRV90bXAgZ2V0aW50ZXJ2YWwKCQkJc3RyaW5nd2lkdGggcG9w
IGN1cnJlbnRwb2ludCBwb3AgYWRkIEFHTUNPUkVfZGVsdGFYIDI4IGFkZCBndAoJCQkJewoJCQkJ
Y3VycmVudHBvaW50IEFHTUNPUkVfZGVsdGFZIHN1YiBleGNoIHBvcAoJCQkJY2xpcHBhdGggcGF0
aGJib3ggcG9wIHBvcCBwb3AgNDQgYWRkIGV4Y2ggbW92ZXRvCgkJCQl9IGlmCgkJCUFHTUNPUkVf
c3RyMjU2IDAgQUdNQ09SRV90bXAgZ2V0aW50ZXJ2YWwgc2hvdyAoICkgc2hvdwoJCQkwIDEgQUdN
Q09SRV9zdHIyNTYgbGVuZ3RoIDEgc3ViCgkJCQl7CgkJCQlBR01DT1JFX3N0cjI1NiBleGNoIDAg
cHV0CgkJCQl9Zm9yCgkJCS9BR01DT1JFX3RtcCAwIGRlZgoJCQl9CgkJCXsKCQkJCUFHTUNPUkVf
c3RyMjU2IGV4Y2ggQUdNQ09SRV90bXAgeHB0CgkJCQkvQUdNQ09SRV90bXAgQUdNQ09SRV90bXAg
MSBhZGQgZGVmCgkJCX0gaWZlbHNlCgkJfSBmb3JhbGwKfSBiZGYKL2RvY19zZXR1cHsKCUFkb2Jl
X0FHTV9Db3JlIGJlZ2luCgkvQUdNQ09SRV9wc192ZXJzaW9uIHhkZgoJL0FHTUNPUkVfcHNfbGV2
ZWwgeGRmCgllcnJvcmRpY3QgL0FHTV9oYW5kbGVlcnJvciBrbm93biBub3R7CgkJZXJyb3JkaWN0
IC9BR01faGFuZGxlZXJyb3IgZXJyb3JkaWN0IC9oYW5kbGVlcnJvciBnZXQgcHV0CgkJZXJyb3Jk
aWN0IC9oYW5kbGVlcnJvciB7CgkJCUFkb2JlX0FHTV9Db3JlIGJlZ2luCgkJCSRlcnJvciAvbmV3
ZXJyb3IgZ2V0IEFHTUNPUkVfY3VyX2VyciBudWxsIG5lIGFuZHsKCQkJCSRlcnJvciAvbmV3ZXJy
b3IgZmFsc2UgcHV0CgkJCQlBR01DT1JFX2N1cl9lcnIgY29tcG9zZV9lcnJvcl9tc2cKCQkJfWlm
CgkJCSRlcnJvciAvbmV3ZXJyb3IgdHJ1ZSBwdXQKCQkJZW5kCgkJCWVycm9yZGljdCAvQUdNX2hh
bmRsZWVycm9yIGdldCBleGVjCgkJCX0gYmluZCBwdXQKCQl9aWYKCS9BR01DT1JFX2Vudmlyb25f
b2sgCgkJcHNfbGV2ZWwgQUdNQ09SRV9wc19sZXZlbCBnZQoJCXBzX3ZlcnNpb24gQUdNQ09SRV9w
c192ZXJzaW9uIGdlIGFuZCAKCQlBR01DT1JFX3BzX2xldmVsIC0xIGVxIG9yCglkZWYKCUFHTUNP
UkVfZW52aXJvbl9vayBub3QKCQl7L0FHTUNPUkVfY3VyX2VyciAvQUdNQ09SRV9iYWRfZW52aXJv
biBkZWZ9IGlmCgkvQUdNQ09SRV8mc2V0Z3JheSBzeXN0ZW1kaWN0L3NldGdyYXkgZ2V0IGRlZgoJ
bGV2ZWwyewoJCS9BR01DT1JFXyZzZXRjb2xvciBzeXN0ZW1kaWN0L3NldGNvbG9yIGdldCBkZWYK
CQkvQUdNQ09SRV8mc2V0Y29sb3JzcGFjZSBzeXN0ZW1kaWN0L3NldGNvbG9yc3BhY2UgZ2V0IGRl
ZgoJfWlmCgkvQUdNQ09SRV9jdXJyZW50YmcgY3VycmVudGJsYWNrZ2VuZXJhdGlvbiBkZWYKCS9B
R01DT1JFX2N1cnJlbnR1Y3IgY3VycmVudHVuZGVyY29sb3JyZW1vdmFsIGRlZgoJL0FHTUNPUkVf
ZGlzdGlsbGluZwoJCS9wcm9kdWN0IHdoZXJlewoJCQlwb3Agc3lzdGVtZGljdC9zZXRkaXN0aWxs
ZXJwYXJhbXMga25vd24gcHJvZHVjdCAoQWRvYmUgUG9zdFNjcmlwdCBQYXJzZXIpIG5lIGFuZAoJ
CX17CgkJCWZhbHNlCgkJfWlmZWxzZQoJZGVmCglsZXZlbDIgbm90ewoJCS94cHV0ewoJCQlkdXAg
bG9hZCBkdXAgbGVuZ3RoIGV4Y2ggbWF4bGVuZ3RoIGVxewoJCQkJZHVwIGR1cCBsb2FkIGR1cAoJ
CQkJbGVuZ3RoIGR1cCAwIGVxIHtwb3AgMX0gaWYgMiBtdWwgZGljdCBjb3B5IGRlZgoJCQl9aWYK
CQkJbG9hZCBiZWdpbgoJCQkJZGVmCiAJCQllbmQKCQl9ZGVmCgl9ewoJCS94cHV0ewoJCQlsb2Fk
IDMgMSByb2xsIHB1dAoJCX1kZWYKCX1pZmVsc2UKCS9BR01DT1JFX0dTVEFURSBBR01DT1JFX2tl
eV9rbm93biBub3R7CgkJL0FHTUNPUkVfR1NUQVRFIDIxIGRpY3QgZGVmCgkJL0FHTUNPUkVfdG1w
bWF0cml4IG1hdHJpeCBkZWYKCQkvQUdNQ09SRV9nc3RhY2sgMzIgYXJyYXkgZGVmCgkJL0FHTUNP
UkVfZ3N0YWNrcHRyIDAgZGVmCgkJL0FHTUNPUkVfZ3N0YWNrc2F2ZXB0ciAwIGRlZgoJCS9BR01D
T1JFX2dzdGFja2ZyYW1la2V5cyAxMCBkZWYKCQkvQUdNQ09SRV8mZ3NhdmUgL2dzYXZlIGxkZgoJ
CS9BR01DT1JFXyZncmVzdG9yZSAvZ3Jlc3RvcmUgbGRmCgkJL0FHTUNPUkVfJmdyZXN0b3JlYWxs
IC9ncmVzdG9yZWFsbCBsZGYKCQkvQUdNQ09SRV8mc2F2ZSAvc2F2ZSBsZGYKCQkvQUdNQ09SRV9n
ZGljdGNvcHkgewoJCQliZWdpbgoJCQl7IGRlZiB9IGZvcmFsbAoJCQllbmQKCQl9ZGVmCgkJL0FH
TUNPUkVfZ3B1dCB7CgkJCUFHTUNPUkVfZ3N0YWNrIEFHTUNPUkVfZ3N0YWNrcHRyIGdldAoJCQkz
IDEgcm9sbAoJCQlwdXQKCQl9ZGVmCgkJL0FHTUNPUkVfZ2dldCB7CgkJCUFHTUNPUkVfZ3N0YWNr
IEFHTUNPUkVfZ3N0YWNrcHRyIGdldAoJCQlleGNoCgkJCWdldAoJCX1kZWYKCQkvZ3NhdmUgewoJ
CQlBR01DT1JFXyZnc2F2ZQoJCQlBR01DT1JFX2dzdGFjayBBR01DT1JFX2dzdGFja3B0ciBnZXQK
CQkJQUdNQ09SRV9nc3RhY2twdHIgMSBhZGQKCQkJZHVwIDMyIGdlIHtsaW1pdGNoZWNrfSBpZgoJ
CQlBZG9iZV9BR01fQ29yZSBleGNoCgkJCS9BR01DT1JFX2dzdGFja3B0ciB4cHQKCQkJQUdNQ09S
RV9nc3RhY2sgQUdNQ09SRV9nc3RhY2twdHIgZ2V0CgkJCUFHTUNPUkVfZ2RpY3Rjb3B5CgkJfWRl
ZgoJCS9ncmVzdG9yZSB7CgkJCUFHTUNPUkVfJmdyZXN0b3JlCgkJCUFHTUNPUkVfZ3N0YWNrcHRy
IDEgc3ViCgkJCWR1cCBBR01DT1JFX2dzdGFja3NhdmVwdHIgbHQgezEgYWRkfSBpZgoJCQlBZG9i
ZV9BR01fQ29yZSBleGNoCgkJCS9BR01DT1JFX2dzdGFja3B0ciB4cHQKCQl9ZGVmCgkJL2dyZXN0
b3JlYWxsIHsKCQkJQUdNQ09SRV8mZ3Jlc3RvcmVhbGwKCQkJQWRvYmVfQUdNX0NvcmUKCQkJL0FH
TUNPUkVfZ3N0YWNrcHRyIEFHTUNPUkVfZ3N0YWNrc2F2ZXB0ciBwdXQgCgkJfWRlZgoJCS9zYXZl
IHsKCQkJQUdNQ09SRV8mc2F2ZQoJCQlBR01DT1JFX2dzdGFjayBBR01DT1JFX2dzdGFja3B0ciBn
ZXQKCQkJQUdNQ09SRV9nc3RhY2twdHIgMSBhZGQKCQkJZHVwIDMyIGdlIHtsaW1pdGNoZWNrfSBp
ZgoJCQlBZG9iZV9BR01fQ29yZSBiZWdpbgoJCQkJL0FHTUNPUkVfZ3N0YWNrcHRyIGV4Y2ggZGVm
CgkJCQkvQUdNQ09SRV9nc3RhY2tzYXZlcHRyIEFHTUNPUkVfZ3N0YWNrcHRyIGRlZgoJCQllbmQK
CQkJQUdNQ09SRV9nc3RhY2sgQUdNQ09SRV9nc3RhY2twdHIgZ2V0CgkJCUFHTUNPUkVfZ2RpY3Rj
b3B5CgkJfWRlZgoJCTAgMSBBR01DT1JFX2dzdGFjayBsZW5ndGggMSBzdWIgewoJCQkJQUdNQ09S
RV9nc3RhY2sgZXhjaCBBR01DT1JFX2dzdGFja2ZyYW1la2V5cyBkaWN0IHB1dAoJCX0gZm9yCgl9
aWYKCWxldmVsMyAvQUdNQ09SRV8mc3lzc2hmaWxsIEFHTUNPUkVfa2V5X2tub3duIG5vdCBhbmQK
CXsKCQkvQUdNQ09SRV8mc3lzc2hmaWxsIHN5c3RlbWRpY3Qvc2hmaWxsIGdldCBkZWYKCQkvQUdN
Q09SRV8mdXNyc2hmaWxsIC9zaGZpbGwgbG9hZCBkZWYKCQkvQUdNQ09SRV8mc3lzbWFrZXBhdHRl
cm4gc3lzdGVtZGljdC9tYWtlcGF0dGVybiBnZXQgZGVmCgkJL0FHTUNPUkVfJnVzcm1ha2VwYXR0
ZXJuIC9tYWtlcGF0dGVybiBsb2FkIGRlZgoJfWlmCgkvY3VycmVudGNteWtjb2xvciBbMCAwIDAg
MF0gQUdNQ09SRV9ncHV0CgkvY3VycmVudHN0cm9rZWFkanVzdCBmYWxzZSBBR01DT1JFX2dwdXQK
CS9jdXJyZW50Y29sb3JzcGFjZSBbL0RldmljZUdyYXldIEFHTUNPUkVfZ3B1dAoJL3NlcF90aW50
IDAgQUdNQ09SRV9ncHV0CgkvZGV2aWNlbl90aW50cyBbMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAg
MCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwIDAgMCAwXSBBR01DT1JFX2dwdXQKCS9z
ZXBfY29sb3JzcGFjZV9kaWN0IG51bGwgQUdNQ09SRV9ncHV0CgkvZGV2aWNlbl9jb2xvcnNwYWNl
X2RpY3QgbnVsbCBBR01DT1JFX2dwdXQKCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBudWxsIEFH
TUNPUkVfZ3B1dAoJL2N1cnJlbnRjb2xvcl9pbnRlbnQgKCkgQUdNQ09SRV9ncHV0CgkvY3VzdG9t
Y29sb3JfdGludCAxIEFHTUNPUkVfZ3B1dAoJPDwKCS9NYXhQYXR0ZXJuSXRlbSBjdXJyZW50c3lz
dGVtcGFyYW1zIC9NYXhQYXR0ZXJuQ2FjaGUgZ2V0Cgk+PgoJc2V0dXNlcnBhcmFtcwoJZW5kCn1k
ZWYKL3BhZ2Vfc2V0dXAKewoJL3NldGNteWtjb2xvciB3aGVyZXsKCQlwb3AKCQlBZG9iZV9BR01f
Q29yZS9BR01DT1JFXyZzZXRjbXlrY29sb3IgL3NldGNteWtjb2xvciBsb2FkIHB1dAoJfWlmCglB
ZG9iZV9BR01fQ29yZSBiZWdpbgoJL3NldGNteWtjb2xvcgoJewoJCTQgY29weSBBR01DT1JFX2Nt
eWtidWYgYXN0b3JlIC9jdXJyZW50Y215a2NvbG9yIGV4Y2ggQUdNQ09SRV9ncHV0CgkJMSBzdWIg
NCAxIHJvbGwKCQkzIHsKCQkJMyBpbmRleCBhZGQgbmVnIGR1cCAwIGx0IHsKCQkJCXBvcCAwCgkJ
CX0gaWYKCQkJMyAxIHJvbGwKCQl9IHJlcGVhdAoJCXNldHJnYmNvbG9yIHBvcAoJfW5kZgoJL2N1
cnJlbnRjbXlrY29sb3IKCXsKCQkvY3VycmVudGNteWtjb2xvciBBR01DT1JFX2dnZXQgYWxvYWQg
cG9wCgl9bmRmCgkvc2V0b3ZlcnByaW50Cgl7CgkJcG9wCgl9bmRmCgkvY3VycmVudG92ZXJwcmlu
dAoJewoJCWZhbHNlCgl9bmRmCgkvQUdNQ09SRV9kZXZpY2VEUEkgNzIgMCBtYXRyaXggZGVmYXVs
dG1hdHJpeCBkdHJhbnNmb3JtIGR1cCBtdWwgZXhjaCBkdXAgbXVsIGFkZCBzcXJ0IGRlZgoJL0FH
TUNPUkVfY3lhbl9wbGF0ZSAxIDAgMCAwIHRlc3RfY215a19jb2xvcl9wbGF0ZSBkZWYKCS9BR01D
T1JFX21hZ2VudGFfcGxhdGUgMCAxIDAgMCB0ZXN0X2NteWtfY29sb3JfcGxhdGUgZGVmCgkvQUdN
Q09SRV95ZWxsb3dfcGxhdGUgMCAwIDEgMCB0ZXN0X2NteWtfY29sb3JfcGxhdGUgZGVmCgkvQUdN
Q09SRV9ibGFja19wbGF0ZSAwIDAgMCAxIHRlc3RfY215a19jb2xvcl9wbGF0ZSBkZWYKCS9BR01D
T1JFX3BsYXRlX25keCAKCQlBR01DT1JFX2N5YW5fcGxhdGV7IAoJCQkwCgkJfXsKCQkJQUdNQ09S
RV9tYWdlbnRhX3BsYXRlewoJCQkJMQoJCQl9ewoJCQkJQUdNQ09SRV95ZWxsb3dfcGxhdGV7CgkJ
CQkJMgoJCQkJfXsKCQkJCQlBR01DT1JFX2JsYWNrX3BsYXRlewoJCQkJCQkzCgkJCQkJfXsKCQkJ
CQkJNAoJCQkJCX1pZmVsc2UKCQkJCX1pZmVsc2UKCQkJfWlmZWxzZQoJCX1pZmVsc2UKCQlkZWYK
CS9BR01DT1JFX2hhdmVfcmVwb3J0ZWRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UgZmFsc2UgZGVm
CgkvQUdNQ09SRV9yZXBvcnRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UKCXsKCQlBR01DT1JFX2hh
dmVfcmVwb3J0ZWRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UgZmFsc2UgZXEKCQl7CgkJCShXYXJu
aW5nOiBKb2IgY29udGFpbnMgY29udGVudCB0aGF0IGNhbm5vdCBiZSBzZXBhcmF0ZWQgd2l0aCBv
bi1ob3N0IG1ldGhvZHMuIFRoaXMgY29udGVudCBhcHBlYXJzIG9uIHRoZSBibGFjayBwbGF0ZSwg
YW5kIGtub2NrcyBvdXQgYWxsIG90aGVyIHBsYXRlcy4pID09CgkJCUFkb2JlX0FHTV9Db3JlIC9B
R01DT1JFX2hhdmVfcmVwb3J0ZWRfdW5zdXBwb3J0ZWRfY29sb3Jfc3BhY2UgdHJ1ZSBkZGYKCQl9
IGlmCgl9ZGVmCgkvQUdNQ09SRV9jb21wb3NpdGVfam9iCgkJQUdNQ09SRV9jeWFuX3BsYXRlIEFH
TUNPUkVfbWFnZW50YV9wbGF0ZSBhbmQgQUdNQ09SRV95ZWxsb3dfcGxhdGUgYW5kIEFHTUNPUkVf
YmxhY2tfcGxhdGUgYW5kIGRlZgoJL0FHTUNPUkVfaW5fcmlwX3NlcAoJCS9BR01DT1JFX2luX3Jp
cF9zZXAgd2hlcmV7CgkJCXBvcCBBR01DT1JFX2luX3JpcF9zZXAKCQl9ewoJCQlBR01DT1JFX2Rp
c3RpbGxpbmcgCgkJCXsKCQkJCWZhbHNlCgkJCX17CgkJCQl1c2VyZGljdC9BZG9iZV9BR01fT25I
b3N0X1NlcHMga25vd257CgkJCQkJZmFsc2UKCQkJCX17CgkJCQkJbGV2ZWwyewoJCQkJCQljdXJy
ZW50cGFnZWRldmljZS9TZXBhcmF0aW9ucyAyIGNvcHkga25vd257CgkJCQkJCQlnZXQKCQkJCQkJ
fXsKCQkJCQkJCXBvcCBwb3AgZmFsc2UKCQkJCQkJfWlmZWxzZQoJCQkJCX17CgkJCQkJCWZhbHNl
CgkJCQkJfWlmZWxzZQoJCQkJfWlmZWxzZQoJCQl9aWZlbHNlCgkJfWlmZWxzZQoJZGVmCgkvQUdN
Q09SRV9wcm9kdWNpbmdfc2VwcyBBR01DT1JFX2NvbXBvc2l0ZV9qb2Igbm90IEFHTUNPUkVfaW5f
cmlwX3NlcCBvciBkZWYKCS9BR01DT1JFX2hvc3Rfc2VwIEFHTUNPUkVfcHJvZHVjaW5nX3NlcHMg
QUdNQ09SRV9pbl9yaXBfc2VwIG5vdCBhbmQgZGVmCgkvQUdNX3ByZXNlcnZlX3Nwb3RzIAoJCS9B
R01fcHJlc2VydmVfc3BvdHMgd2hlcmV7CgkJCXBvcCBBR01fcHJlc2VydmVfc3BvdHMKCQl9ewoJ
CQlBR01DT1JFX2Rpc3RpbGxpbmcgQUdNQ09SRV9wcm9kdWNpbmdfc2VwcyBvcgoJCX1pZmVsc2UK
CWRlZgoJL0FHTV9pc19kaXN0aWxsZXJfcHJlc2VydmluZ19zcG90aW1hZ2VzCgl7CgkJY3VycmVu
dGRpc3RpbGxlcnBhcmFtcy9QcmVzZXJ2ZU92ZXJwcmludFNldHRpbmdzIGtub3duCgkJewoJCQlj
dXJyZW50ZGlzdGlsbGVycGFyYW1zL1ByZXNlcnZlT3ZlcnByaW50U2V0dGluZ3MgZ2V0CgkJCQl7
CgkJCQkJY3VycmVudGRpc3RpbGxlcnBhcmFtcy9Db2xvckNvbnZlcnNpb25TdHJhdGVneSBrbm93
bgoJCQkJCXsKCQkJCQkJY3VycmVudGRpc3RpbGxlcnBhcmFtcy9Db2xvckNvbnZlcnNpb25TdHJh
dGVneSBnZXQKCQkJCQkJL0xlYXZlQ29sb3JVbmNoYW5nZWQgZXEKCQkJCQl9ewoJCQkJCQl0cnVl
CgkJCQkJfWlmZWxzZQoJCQkJfXsKCQkJCQlmYWxzZQoJCQkJfWlmZWxzZQoJCX17CgkJCWZhbHNl
CgkJfWlmZWxzZQoJfWRlZgoJL2NvbnZlcnRfc3BvdF90b19wcm9jZXNzIHdoZXJlIHtwb3B9ewoJ
CS9jb252ZXJ0X3Nwb3RfdG9fcHJvY2VzcwoJCXsKCQkJZHVwIG1hcF9hbGlhcyB7CgkJCQkvTmFt
ZSBnZXQgZXhjaCBwb3AKCQkJfSBpZgoJCQlkdXAgZHVwIChOb25lKSBlcSBleGNoIChBbGwpIGVx
IG9yCgkJCQl7CgkJCQlwb3AgZmFsc2UKCQkJCX17CgkJCQlBR01DT1JFX2hvc3Rfc2VwCgkJCQl7
IAoJCQkJCWdzYXZlCgkJCQkJMSAwIDAgMCBzZXRjbXlrY29sb3IgY3VycmVudGdyYXkgMSBleGNo
IHN1YgoJCQkJCTAgMSAwIDAgc2V0Y215a2NvbG9yIGN1cnJlbnRncmF5IDEgZXhjaCBzdWIKCQkJ
CQkwIDAgMSAwIHNldGNteWtjb2xvciBjdXJyZW50Z3JheSAxIGV4Y2ggc3ViCgkJCQkJMCAwIDAg
MSBzZXRjbXlrY29sb3IgY3VycmVudGdyYXkgMSBleGNoIHN1YgoJCQkJCWFkZCBhZGQgYWRkIDAg
ZXEKCQkJCQl7CgkJCQkJCXBvcCBmYWxzZQoJCQkJCX17CgkJCQkJCWZhbHNlIHNldG92ZXJwcmlu
dAoJCQkJCQkxIDEgMSAxIDUgLTEgcm9sbCBmaW5kY215a2N1c3RvbWNvbG9yIDEgc2V0Y3VzdG9t
Y29sb3IKCQkJCQkJY3VycmVudGdyYXkgMCBlcQoJCQkJCX1pZmVsc2UKCQkJCQlncmVzdG9yZQoJ
CQkJfXsKCQkJCQlBR01DT1JFX2Rpc3RpbGxpbmcKCQkJCQl7CgkJCQkJCXBvcCBBR01faXNfZGlz
dGlsbGVyX3ByZXNlcnZpbmdfc3BvdGltYWdlcyBub3QKCQkJCQl9ewoJCQkJCQlBZG9iZV9BR01f
Q29yZS9BR01DT1JFX25hbWUgeGRkZgoJCQkJCQlmYWxzZQoJCQkJCQlBZG9iZV9BR01fQ29yZS9B
R01DT1JFX2luX3BhdHRlcm4ga25vd24ge0Fkb2JlX0FHTV9Db3JlL0FHTUNPUkVfaW5fcGF0dGVy
biBnZXR9e2ZhbHNlfSBpZmVsc2UKCQkJCQkJbm90IGN1cnJlbnRwYWdlZGV2aWNlL092ZXJyaWRl
U2VwYXJhdGlvbnMga25vd24gYW5kCgkJCQkJCQl7CgkJCQkJCQljdXJyZW50cGFnZWRldmljZS9P
dmVycmlkZVNlcGFyYXRpb25zIGdldAoJCQkJCQkJCXsKCQkJCQkJCQkvSHFuU3BvdHMgL1Byb2NT
ZXQgcmVzb3VyY2VzdGF0dXMKCQkJCQkJCQkJewoJCQkJCQkJCQlwb3AgcG9wIHBvcCB0cnVlCgkJ
CQkJCQkJCX1pZgoJCQkJCQkJCX1pZgoJCQkJCQkJfWlmCQkJCQkKCQkJCQkJCXsKCQkJCQkJCUFH
TUNPUkVfbmFtZSAvSHFuU3BvdHMgL1Byb2NTZXQgZmluZHJlc291cmNlIC9UZXN0U3BvdCBnZXQg
ZXhlYyBub3QKCQkJCQkJCX17CgkJCQkJCQlnc2F2ZQoJCQkJCQkJWy9TZXBhcmF0aW9uIEFHTUNP
UkVfbmFtZSAvRGV2aWNlR3JheSB7fV1zZXRjb2xvcnNwYWNlCgkJCQkJCQlmYWxzZQoJCQkJCQkJ
Y3VycmVudHBhZ2VkZXZpY2UvU2VwYXJhdGlvbkNvbG9yTmFtZXMgMiBjb3B5IGtub3duCgkJCQkJ
CQl7CgkJCQkJCQkJZ2V0CgkJCQkJCQkJeyBBR01DT1JFX25hbWUgZXEgb3J9Zm9yYWxsCgkJCQkJ
CQlub3QKCQkJCQkJCX17CgkJCQkJCQkJcG9wIHBvcCBwb3AgdHJ1ZQoJCQkJCQkJfWlmZWxzZQoJ
CQkJCQkJZ3Jlc3RvcmUKCQkJCQkJfWlmZWxzZQoJCQkJCX1pZmVsc2UKCQkJCX1pZmVsc2UKCQkJ
fWlmZWxzZQoJCX1kZWYKCX1pZmVsc2UKCS9jb252ZXJ0X3RvX3Byb2Nlc3Mgd2hlcmUge3BvcH17
CgkJL2NvbnZlcnRfdG9fcHJvY2VzcwoJCXsKCQkJZHVwIGxlbmd0aCAwIGVxCgkJCQl7CgkJCQlw
b3AgZmFsc2UKCQkJCX17CgkJCQlBR01DT1JFX2hvc3Rfc2VwCgkJCQl7IAoJCQkJZHVwIHRydWUg
ZXhjaAoJCQkJCXsKCQkJCQlkdXAgKEN5YW4pIGVxIGV4Y2gKCQkJCQlkdXAgKE1hZ2VudGEpIGVx
IDMgLTEgcm9sbCBvciBleGNoCgkJCQkJZHVwIChZZWxsb3cpIGVxIDMgLTEgcm9sbCBvciBleGNo
CgkJCQkJZHVwIChCbGFjaykgZXEgMyAtMSByb2xsIG9yCgkJCQkJCXtwb3B9CgkJCQkJCXtjb252
ZXJ0X3Nwb3RfdG9fcHJvY2VzcyBhbmR9aWZlbHNlCgkJCQkJfQoJCQkJZm9yYWxsCgkJCQkJewoJ
CQkJCXRydWUgZXhjaAoJCQkJCQl7CgkJCQkJCWR1cCAoQ3lhbikgZXEgZXhjaAoJCQkJCQlkdXAg
KE1hZ2VudGEpIGVxIDMgLTEgcm9sbCBvciBleGNoCgkJCQkJCWR1cCAoWWVsbG93KSBlcSAzIC0x
IHJvbGwgb3IgZXhjaAoJCQkJCQkoQmxhY2spIGVxIG9yIGFuZAoJCQkJCQl9Zm9yYWxsCgkJCQkJ
CW5vdAoJCQkJCX17cG9wIGZhbHNlfWlmZWxzZQoJCQkJfXsKCQkJCWZhbHNlIGV4Y2gKCQkJCQl7
CgkJCQkJZHVwIChDeWFuKSBlcSBleGNoCgkJCQkJZHVwIChNYWdlbnRhKSBlcSAzIC0xIHJvbGwg
b3IgZXhjaAoJCQkJCWR1cCAoWWVsbG93KSBlcSAzIC0xIHJvbGwgb3IgZXhjaAoJCQkJCWR1cCAo
QmxhY2spIGVxIDMgLTEgcm9sbCBvcgoJCQkJCXtwb3B9CgkJCQkJe2NvbnZlcnRfc3BvdF90b19w
cm9jZXNzIG9yfWlmZWxzZQoJCQkJCX0KCQkJCWZvcmFsbAoJCQkJfWlmZWxzZQoJCQl9aWZlbHNl
CgkJfWRlZgoJfWlmZWxzZQkKCS9BR01DT1JFX2F2b2lkX0wyX3NlcF9zcGFjZSAgCgkJdmVyc2lv
biBjdnIgMjAxMiBsdCAKCQlsZXZlbDIgYW5kIAoJCUFHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbm90
IGFuZAoJZGVmCgkvQUdNQ09SRV9pc19jbXlrX3NlcAoJCUFHTUNPUkVfY3lhbl9wbGF0ZSBBR01D
T1JFX21hZ2VudGFfcGxhdGUgb3IgQUdNQ09SRV95ZWxsb3dfcGxhdGUgb3IgQUdNQ09SRV9ibGFj
a19wbGF0ZSBvcgoJZGVmCgkvQUdNX2F2b2lkXzBfY215ayB3aGVyZXsKCQlwb3AgQUdNX2F2b2lk
XzBfY215awoJfXsKCQlBR01fcHJlc2VydmVfc3BvdHMgCgkJdXNlcmRpY3QvQWRvYmVfQUdNX09u
SG9zdF9TZXBzIGtub3duIAoJCXVzZXJkaWN0L0Fkb2JlX0FHTV9JblJpcF9TZXBzIGtub3duIG9y
CgkJbm90IGFuZAoJfWlmZWxzZQoJewoJCS9zZXRjbXlrY29sb3JbCgkJCXsKCQkJCTQgY29weSBh
ZGQgYWRkIGFkZCAwIGVxIGN1cnJlbnRvdmVycHJpbnQgYW5kewoJCQkJCXBvcCAwLjAwMDUKCQkJ
CX1pZgoJCQl9L2V4ZWMgY3Z4CgkJCS9BR01DT1JFXyZzZXRjbXlrY29sb3IgbG9hZCBkdXAgdHlw
ZS9vcGVyYXRvcnR5cGUgbmV7CgkJCQkvZXhlYyBjdngKCQkJfWlmCgkJXWN2eCBkZWYKCX1pZgoJ
QUdNQ09SRV9ob3N0X3NlcHsKCQkvc2V0Y29sb3J0cmFuc2ZlcgoJCXsgCgkJCUFHTUNPUkVfY3lh
bl9wbGF0ZXsKCQkJCXBvcCBwb3AgcG9wCgkJCX17CgkJCSAgCUFHTUNPUkVfbWFnZW50YV9wbGF0
ZXsKCQkJICAJCTQgMyByb2xsIHBvcCBwb3AgcG9wCgkJCSAgCX17CgkJCSAgCQlBR01DT1JFX3ll
bGxvd19wbGF0ZXsKCQkJICAJCQk0IDIgcm9sbCBwb3AgcG9wIHBvcAoJCQkgIAkJfXsKCQkJICAJ
CQk0IDEgcm9sbCBwb3AgcG9wIHBvcAoJCQkgIAkJfWlmZWxzZQoJCQkgIAl9aWZlbHNlCgkJCX1p
ZmVsc2UKCQkJc2V0dHJhbnNmZXIgIAoJCX0JCgkJZGVmCgkJL0FHTUNPUkVfZ2V0X2lua19kYXRh
CgkJCUFHTUNPUkVfY3lhbl9wbGF0ZXsKCQkJCXtwb3AgcG9wIHBvcH0KCQkJfXsKCQkJICAJQUdN
Q09SRV9tYWdlbnRhX3BsYXRlewoJCQkgIAkJezQgMyByb2xsIHBvcCBwb3AgcG9wfQoJCQkgIAl9
ewoJCQkgIAkJQUdNQ09SRV95ZWxsb3dfcGxhdGV7CgkJCSAgCQkJezQgMiByb2xsIHBvcCBwb3Ag
cG9wfQoJCQkgIAkJfXsKCQkJICAJCQl7NCAxIHJvbGwgcG9wIHBvcCBwb3B9CgkJCSAgCQl9aWZl
bHNlCgkJCSAgCX1pZmVsc2UKCQkJfWlmZWxzZQoJCWRlZgoJCS9BR01DT1JFX1JlbW92ZVByb2Nl
c3NDb2xvck5hbWVzCgkJCXsKCQkJMSBkaWN0IGJlZ2luCgkJCS9maWx0ZXJuYW1lCgkJCQl7CgkJ
CQlkdXAgL0N5YW4gZXEgMSBpbmRleCAoQ3lhbikgZXEgb3IKCQkJCQl7cG9wIChfY3lhbl8pfWlm
CgkJCQlkdXAgL01hZ2VudGEgZXEgMSBpbmRleCAoTWFnZW50YSkgZXEgb3IKCQkJCQl7cG9wIChf
bWFnZW50YV8pfWlmCgkJCQlkdXAgL1llbGxvdyBlcSAxIGluZGV4IChZZWxsb3cpIGVxIG9yCgkJ
CQkJe3BvcCAoX3llbGxvd18pfWlmCgkJCQlkdXAgL0JsYWNrIGVxIDEgaW5kZXggKEJsYWNrKSBl
cSBvcgoJCQkJCXtwb3AgKF9ibGFja18pfWlmCgkJCQl9ZGVmCgkJCWR1cCB0eXBlIC9hcnJheXR5
cGUgZXEKCQkJCXtbZXhjaCB7ZmlsdGVybmFtZX1mb3JhbGxdfQoJCQkJe2ZpbHRlcm5hbWV9aWZl
bHNlCgkJCWVuZAoJCQl9ZGVmCgkJL0FHTUNPUkVfSXNTZXBhcmF0aW9uQVByb2Nlc3NDb2xvcgoJ
CQl7CgkJCWR1cCAoQ3lhbikgZXEgZXhjaCBkdXAgKE1hZ2VudGEpIGVxIGV4Y2ggZHVwIChZZWxs
b3cpIGVxIGV4Y2ggKEJsYWNrKSBlcSBvciBvciBvcgoJCQl9ZGVmCgkJbGV2ZWwzIHsKCQkJL0FH
TUNPUkVfSXNDdXJyZW50Q29sb3IKCQkJCXsKCQkJCWdzYXZlCgkJCQlmYWxzZSBzZXRvdmVycHJp
bnQKCQkJCTEgMSAxIDEgNSAtMSByb2xsIGZpbmRjbXlrY3VzdG9tY29sb3IgMSBzZXRjdXN0b21j
b2xvcgoJCQkJY3VycmVudGdyYXkgMCBlcSAKCQkJCWdyZXN0b3JlCgkJCQl9ZGVmCgkJCS9BR01D
T1JFX2ZpbHRlcl9mdW5jdGlvbmRhdGFzb3VyY2UKCQkJCXsJCgkJCQk1IGRpY3QgYmVnaW4KCQkJ
CS9kYXRhX2luIHhkZgoJCQkJZGF0YV9pbiB0eXBlIC9zdHJpbmd0eXBlIGVxCgkJCQkJewoJCQkJ
CS9uY29tcCB4ZGYKCQkJCQkvY29tcCB4ZGYKCQkJCQkvc3RyaW5nX291dCBkYXRhX2luIGxlbmd0
aCBuY29tcCBpZGl2IHN0cmluZyBkZWYKCQkJCQkwIG5jb21wIGRhdGFfaW4gbGVuZ3RoIDEgc3Vi
CgkJCQkJCXsKCQkJCQkJc3RyaW5nX291dCBleGNoIGR1cCBuY29tcCBpZGl2IGV4Y2ggZGF0YV9p
biBleGNoIG5jb21wIGdldGludGVydmFsIGNvbXAgZ2V0IDI1NSBleGNoIHN1YiBwdXQKCQkJCQkJ
fWZvcgoJCQkJCXN0cmluZ19vdXQKCQkJCQl9ewoJCQkJCXN0cmluZyAvc3RyaW5nX2luIHhkZgoJ
CQkJCS9zdHJpbmdfb3V0IDEgc3RyaW5nIGRlZgoJCQkJCS9jb21wb25lbnQgeGRmCgkJCQkJWwoJ
CQkJCWRhdGFfaW4gc3RyaW5nX2luIC9yZWFkc3RyaW5nIGN2eAoJCQkJCQlbY29tcG9uZW50IC9n
ZXQgY3Z4IDI1NSAvZXhjaCBjdnggL3N1YiBjdnggc3RyaW5nX291dCAvZXhjaCBjdnggMCAvZXhj
aCBjdnggL3B1dCBjdnggc3RyaW5nX291dF1jdngKCQkJCQkJWy9wb3AgY3Z4ICgpXWN2eCAvaWZl
bHNlIGN2eAoJCQkJCV1jdnggL1JldXNhYmxlU3RyZWFtRGVjb2RlIGZpbHRlcgoJCQkJfWlmZWxz
ZQoJCQkJZW5kCgkJCQl9ZGVmCgkJCS9BR01DT1JFX3NlcGFyYXRlU2hhZGluZ0Z1bmN0aW9uCgkJ
CQl7CgkJCQkyIGRpY3QgYmVnaW4KCQkJCS9wYWludD8geGRmCgkJCQkvY2hhbm5lbCB4ZGYKCQkJ
CQliZWdpbgoJCQkJCUZ1bmN0aW9uVHlwZSAwIGVxCgkJCQkJCXsKCQkJCQkJL0RhdGFTb3VyY2Ug
Y2hhbm5lbCBSYW5nZSBsZW5ndGggMiBpZGl2IERhdGFTb3VyY2UgQUdNQ09SRV9maWx0ZXJfZnVu
Y3Rpb25kYXRhc291cmNlIGRlZgoJCQkJCQljdXJyZW50ZGljdCAvRGVjb2RlIGtub3duCgkJCQkJ
CQl7L0RlY29kZSBEZWNvZGUgY2hhbm5lbCAyIG11bCAyIGdldGludGVydmFsIGRlZn1pZgoJCQkJ
CQlwYWludD8gbm90CgkJCQkJCQl7L0RlY29kZSBbMSAxXWRlZn1pZgoJCQkJCQl9aWYKCQkJCQlG
dW5jdGlvblR5cGUgMiBlcQoJCQkJCQl7CgkJCQkJCXBhaW50PwoJCQkJCQkJewoJCQkJCQkJL0Mw
IFtDMCBjaGFubmVsIGdldCAxIGV4Y2ggc3ViXSBkZWYKCQkJCQkJCS9DMSBbQzEgY2hhbm5lbCBn
ZXQgMSBleGNoIHN1Yl0gZGVmCgkJCQkJCQl9ewoJCQkJCQkJL0MwIFsxXSBkZWYKCQkJCQkJCS9D
MSBbMV0gZGVmCgkJCQkJCQl9aWZlbHNlCQkJCgkJCQkJCX1pZgoJCQkJCUZ1bmN0aW9uVHlwZSAz
IGVxCgkJCQkJCXsKCQkJCQkJL0Z1bmN0aW9ucyBbRnVuY3Rpb25zIHtjaGFubmVsIHBhaW50PyBB
R01DT1JFX3NlcGFyYXRlU2hhZGluZ0Z1bmN0aW9ufSBmb3JhbGxdIGRlZgkJCQoJCQkJCQl9aWYK
CQkJCQljdXJyZW50ZGljdCAvUmFuZ2Uga25vd24KCQkJCQkJey9SYW5nZSBbMCAxXSBkZWZ9aWYK
CQkJCQljdXJyZW50ZGljdAoJCQkJCWVuZAoJCQkJZW5kCgkJCQl9ZGVmCgkJCS9BR01DT1JFX3Nl
cGFyYXRlU2hhZGluZwoJCQkJewoJCQkJMyAtMSByb2xsIGJlZ2luCgkJCQljdXJyZW50ZGljdCAv
RnVuY3Rpb24ga25vd24KCQkJCQl7CgkJCQkJY3VycmVudGRpY3QgL0JhY2tncm91bmQga25vd24K
CQkJCQkJe1sxIGluZGV4e0JhY2tncm91bmQgMyBpbmRleCBnZXQgMSBleGNoIHN1Yn17MX1pZmVs
c2VdL0JhY2tncm91bmQgeGRmfWlmCgkJCQkJRnVuY3Rpb24gMyAxIHJvbGwgQUdNQ09SRV9zZXBh
cmF0ZVNoYWRpbmdGdW5jdGlvbiAvRnVuY3Rpb24geGRmCgkJCQkJL0NvbG9yU3BhY2UgWy9EZXZp
Y2VHcmF5XSBkZWYKCQkJCQl9ewoJCQkJCUNvbG9yU3BhY2UgZHVwIHR5cGUgL2FycmF5dHlwZSBl
cSB7MCBnZXR9aWYgL0RldmljZUNNWUsgZXEKCQkJCQkJewoJCQkJCQkvQ29sb3JTcGFjZSBbL0Rl
dmljZU4gWy9fY3lhbl8gL19tYWdlbnRhXyAvX3llbGxvd18gL19ibGFja19dIC9EZXZpY2VDTVlL
IHt9XSBkZWYKCQkJCQkJfXsKCQkJCQkJQ29sb3JTcGFjZSBkdXAgMSBnZXQgQUdNQ09SRV9SZW1v
dmVQcm9jZXNzQ29sb3JOYW1lcyAxIGV4Y2ggcHV0CgkJCQkJCX1pZmVsc2UKCQkJCQlDb2xvclNw
YWNlIDAgZ2V0IC9TZXBhcmF0aW9uIGVxCgkJCQkJCXsKCQkJCQkJCXsKCQkJCQkJCQlbMSAvZXhj
aCBjdnggL3N1YiBjdnhdY3Z4CgkJCQkJCQl9ewoJCQkJCQkJCVsvcG9wIGN2eCAxXWN2eAoJCQkJ
CQkJfWlmZWxzZQoJCQkJCQkJQ29sb3JTcGFjZSAzIDMgLTEgcm9sbCBwdXQKCQkJCQkJCXBvcAoJ
CQkJCQl9ewoJCQkJCQkJewoJCQkJCQkJCVtleGNoIENvbG9yU3BhY2UgMSBnZXQgbGVuZ3RoIDEg
c3ViIGV4Y2ggc3ViIC9pbmRleCBjdnggMSAvZXhjaCBjdnggL3N1YiBjdnggQ29sb3JTcGFjZSAx
IGdldCBsZW5ndGggMSBhZGQgMSAvcm9sbCBjdnggQ29sb3JTcGFjZSAxIGdldCBsZW5ndGh7L3Bv
cCBjdnh9IHJlcGVhdF1jdngKCQkJCQkJCX17CgkJCQkJCQkJcG9wIFtDb2xvclNwYWNlIDEgZ2V0
IGxlbmd0aCB7L3BvcCBjdnh9IHJlcGVhdCBjdnggMV1jdngKCQkJCQkJCX1pZmVsc2UKCQkJCQkJ
CUNvbG9yU3BhY2UgMyAzIC0xIHJvbGwgYmluZCBwdXQKCQkJCQkJfWlmZWxzZQoJCQkJCUNvbG9y
U3BhY2UgMiAvRGV2aWNlR3JheSBwdXQJCQkJCQkJCQkJCQkJCQkJCQkKCQkJCQl9aWZlbHNlCgkJ
CQllbmQKCQkJCX1kZWYKCQkJL0FHTUNPUkVfc2VwYXJhdGVTaGFkaW5nRGljdAoJCQkJewoJCQkJ
ZHVwIC9Db2xvclNwYWNlIGdldAoJCQkJZHVwIHR5cGUgL2FycmF5dHlwZSBuZQoJCQkJCXtbZXhj
aF19aWYKCQkJCWR1cCAwIGdldCAvRGV2aWNlQ01ZSyBlcQoJCQkJCXsKCQkJCQlleGNoIGJlZ2lu
IAoJCQkJCWN1cnJlbnRkaWN0CgkJCQkJQUdNQ09SRV9jeWFuX3BsYXRlCgkJCQkJCXswIHRydWV9
aWYKCQkJCQlBR01DT1JFX21hZ2VudGFfcGxhdGUKCQkJCQkJezEgdHJ1ZX1pZgoJCQkJCUFHTUNP
UkVfeWVsbG93X3BsYXRlCgkJCQkJCXsyIHRydWV9aWYKCQkJCQlBR01DT1JFX2JsYWNrX3BsYXRl
CgkJCQkJCXszIHRydWV9aWYKCQkJCQlBR01DT1JFX3BsYXRlX25keCA0IGVxCgkJCQkJCXswIGZh
bHNlfWlmCQkKCQkJCQlkdXAgbm90IGN1cnJlbnRvdmVycHJpbnQgYW5kCgkJCQkJCXsvQUdNQ09S
RV9pZ25vcmVzaGFkZSB0cnVlIGRlZn1pZgoJCQkJCUFHTUNPUkVfc2VwYXJhdGVTaGFkaW5nCgkJ
CQkJY3VycmVudGRpY3QKCQkJCQllbmQgZXhjaAoJCQkJCX1pZgoJCQkJZHVwIDAgZ2V0IC9TZXBh
cmF0aW9uIGVxCgkJCQkJewoJCQkJCWV4Y2ggYmVnaW4KCQkJCQlDb2xvclNwYWNlIDEgZ2V0IGR1
cCAvTm9uZSBuZSBleGNoIC9BbGwgbmUgYW5kCgkJCQkJCXsKCQkJCQkJQ29sb3JTcGFjZSAxIGdl
dCBBR01DT1JFX0lzQ3VycmVudENvbG9yIEFHTUNPUkVfcGxhdGVfbmR4IDQgbHQgYW5kIENvbG9y
U3BhY2UgMSBnZXQgQUdNQ09SRV9Jc1NlcGFyYXRpb25BUHJvY2Vzc0NvbG9yIG5vdCBhbmQKCQkJ
CQkJCXsKCQkJCQkJCUNvbG9yU3BhY2UgMiBnZXQgZHVwIHR5cGUgL2FycmF5dHlwZSBlcSB7MCBn
ZXR9aWYgL0RldmljZUNNWUsgZXEgCgkJCQkJCQkJewoJCQkJCQkJCS9Db2xvclNwYWNlCgkJCQkJ
CQkJCVsKCQkJCQkJCQkJL1NlcGFyYXRpb24KCQkJCQkJCQkJQ29sb3JTcGFjZSAxIGdldAoJCQkJ
CQkJCQkvRGV2aWNlR3JheQoJCQkJCQkJCQkJWwoJCQkJCQkJCQkJQ29sb3JTcGFjZSAzIGdldCAv
ZXhlYyBjdngKCQkJCQkJCQkJCTQgQUdNQ09SRV9wbGF0ZV9uZHggc3ViIC0xIC9yb2xsIGN2eAoJ
CQkJCQkJCQkJNCAxIC9yb2xsIGN2eAoJCQkJCQkJCQkJMyBbL3BvcCBjdnhdY3Z4IC9yZXBlYXQg
Y3Z4CgkJCQkJCQkJCQkxIC9leGNoIGN2eCAvc3ViIGN2eAoJCQkJCQkJCQkJXWN2eAkJCQkJCQkJ
CQoJCQkJCQkJCQldZGVmCgkJCQkJCQkJfXsKCQkJCQkJCQlBR01DT1JFX3JlcG9ydF91bnN1cHBv
cnRlZF9jb2xvcl9zcGFjZQoJCQkJCQkJCUFHTUNPUkVfYmxhY2tfcGxhdGUgbm90CgkJCQkJCQkJ
CXsKCQkJCQkJCQkJY3VycmVudGRpY3QgMCBmYWxzZSBBR01DT1JFX3NlcGFyYXRlU2hhZGluZwoJ
CQkJCQkJCQl9aWYKCQkJCQkJCQl9aWZlbHNlCgkJCQkJCQl9ewoJCQkJCQkJY3VycmVudGRpY3Qg
Q29sb3JTcGFjZSAxIGdldCBBR01DT1JFX0lzQ3VycmVudENvbG9yCgkJCQkJCQkwIGV4Y2ggCgkJ
CQkJCQlkdXAgbm90IGN1cnJlbnRvdmVycHJpbnQgYW5kCgkJCQkJCQkJey9BR01DT1JFX2lnbm9y
ZXNoYWRlIHRydWUgZGVmfWlmCgkJCQkJCQlBR01DT1JFX3NlcGFyYXRlU2hhZGluZwoJCQkJCQkJ
fWlmZWxzZQkKCQkJCQkJfWlmCQkJCgkJCQkJY3VycmVudGRpY3QKCQkJCQllbmQgZXhjaAoJCQkJ
CX1pZgoJCQkJZHVwIDAgZ2V0IC9EZXZpY2VOIGVxCgkJCQkJewoJCQkJCWV4Y2ggYmVnaW4KCQkJ
CQlDb2xvclNwYWNlIDEgZ2V0IGNvbnZlcnRfdG9fcHJvY2VzcwoJCQkJCQl7CgkJCQkJCUNvbG9y
U3BhY2UgMiBnZXQgZHVwIHR5cGUgL2FycmF5dHlwZSBlcSB7MCBnZXR9aWYgL0RldmljZUNNWUsg
ZXEgCgkJCQkJCQl7CgkJCQkJCQkvQ29sb3JTcGFjZQoJCQkJCQkJCVsKCQkJCQkJCQkvRGV2aWNl
TgoJCQkJCQkJCUNvbG9yU3BhY2UgMSBnZXQKCQkJCQkJCQkvRGV2aWNlR3JheQoJCQkJCQkJCQlb
CgkJCQkJCQkJCUNvbG9yU3BhY2UgMyBnZXQgL2V4ZWMgY3Z4CgkJCQkJCQkJCTQgQUdNQ09SRV9w
bGF0ZV9uZHggc3ViIC0xIC9yb2xsIGN2eAoJCQkJCQkJCQk0IDEgL3JvbGwgY3Z4CgkJCQkJCQkJ
CTMgWy9wb3AgY3Z4XWN2eCAvcmVwZWF0IGN2eAoJCQkJCQkJCQkxIC9leGNoIGN2eCAvc3ViIGN2
eAoJCQkJCQkJCQldY3Z4CQkJCQkJCQkJCgkJCQkJCQkJXWRlZgoJCQkJCQkJfXsKCQkJCQkJCUFH
TUNPUkVfcmVwb3J0X3Vuc3VwcG9ydGVkX2NvbG9yX3NwYWNlCgkJCQkJCQlBR01DT1JFX2JsYWNr
X3BsYXRlIG5vdAoJCQkJCQkJCXsKCQkJCQkJCQljdXJyZW50ZGljdCAwIGZhbHNlIEFHTUNPUkVf
c2VwYXJhdGVTaGFkaW5nCgkJCQkJCQkJL0NvbG9yU3BhY2UgWy9EZXZpY2VHcmF5XSBkZWYKCQkJ
CQkJCQl9aWYKCQkJCQkJCX1pZmVsc2UKCQkJCQkJfXsKCQkJCQkJY3VycmVudGRpY3QKCQkJCQkJ
ZmFsc2UgLTEgQ29sb3JTcGFjZSAxIGdldAoJCQkJCQkJewoJCQkJCQkJQUdNQ09SRV9Jc0N1cnJl
bnRDb2xvcgoJCQkJCQkJCXsKCQkJCQkJCQkxIGFkZAoJCQkJCQkJCWV4Y2ggcG9wIHRydWUgZXhj
aCBleGl0CgkJCQkJCQkJfWlmCgkJCQkJCQkxIGFkZAoJCQkJCQkJfWZvcmFsbAoJCQkJCQlleGNo
IAoJCQkJCQlkdXAgbm90IGN1cnJlbnRvdmVycHJpbnQgYW5kCgkJCQkJCQl7L0FHTUNPUkVfaWdu
b3Jlc2hhZGUgdHJ1ZSBkZWZ9aWYKCQkJCQkJQUdNQ09SRV9zZXBhcmF0ZVNoYWRpbmcKCQkJCQkJ
fWlmZWxzZQoJCQkJCWN1cnJlbnRkaWN0CgkJCQkJZW5kIGV4Y2gKCQkJCQl9aWYKCQkJCWR1cCAw
IGdldCBkdXAgL0RldmljZUNNWUsgZXEgZXhjaCBkdXAgL1NlcGFyYXRpb24gZXEgZXhjaCAvRGV2
aWNlTiBlcSBvciBvciBub3QKCQkJCQl7CgkJCQkJZXhjaCBiZWdpbgoJCQkJCUNvbG9yU3BhY2Ug
ZHVwIHR5cGUgL2FycmF5dHlwZSBlcQoJCQkJCQl7MCBnZXR9aWYKCQkJCQkvRGV2aWNlR3JheSBu
ZQoJCQkJCQl7CgkJCQkJCUFHTUNPUkVfcmVwb3J0X3Vuc3VwcG9ydGVkX2NvbG9yX3NwYWNlCgkJ
CQkJCUFHTUNPUkVfYmxhY2tfcGxhdGUgbm90CgkJCQkJCQl7CgkJCQkJCQlDb2xvclNwYWNlIDAg
Z2V0IC9DSUVCYXNlZEEgZXEKCQkJCQkJCQl7CgkJCQkJCQkJL0NvbG9yU3BhY2UgWy9TZXBhcmF0
aW9uIC9fY2llYmFzZWRhXyAvRGV2aWNlR3JheSB7fV0gZGVmCgkJCQkJCQkJfWlmCgkJCQkJCQlD
b2xvclNwYWNlIDAgZ2V0IGR1cCAvQ0lFQmFzZWRBQkMgZXEgZXhjaCBkdXAgL0NJRUJhc2VkREVG
IGVxIGV4Y2ggL0RldmljZVJHQiBlcSBvciBvcgoJCQkJCQkJCXsKCQkJCQkJCQkvQ29sb3JTcGFj
ZSBbL0RldmljZU4gWy9fcmVkXyAvX2dyZWVuXyAvX2JsdWVfXSAvRGV2aWNlUkdCIHt9XSBkZWYK
CQkJCQkJCQl9aWYKCQkJCQkJCUNvbG9yU3BhY2UgMCBnZXQgL0NJRUJhc2VkREVGRyBlcQoJCQkJ
CQkJCXsKCQkJCQkJCQkvQ29sb3JTcGFjZSBbL0RldmljZU4gWy9fY3lhbl8gL19tYWdlbnRhXyAv
X3llbGxvd18gL19ibGFja19dIC9EZXZpY2VDTVlLIHt9XQoJCQkJCQkJCX1pZgoJCQkJCQkJY3Vy
cmVudGRpY3QgMCBmYWxzZSBBR01DT1JFX3NlcGFyYXRlU2hhZGluZwoJCQkJCQkJfWlmCgkJCQkJ
CX1pZgoJCQkJCWN1cnJlbnRkaWN0CgkJCQkJZW5kIGV4Y2gKCQkJCQl9aWYKCQkJCXBvcAoJCQkJ
ZHVwIC9BR01DT1JFX2lnbm9yZXNoYWRlIGtub3duCgkJCQkJewoJCQkJCWJlZ2luCgkJCQkJL0Nv
bG9yU3BhY2UgWy9TZXBhcmF0aW9uIChOb25lKSAvRGV2aWNlR3JheSB7fV0gZGVmCgkJCQkJY3Vy
cmVudGRpY3QgZW5kCgkJCQkJfWlmCgkJCQl9ZGVmCgkJCS9zaGZpbGwKCQkJCXsKCQkJCWNsb25l
ZGljdAoJCQkJQUdNQ09SRV9zZXBhcmF0ZVNoYWRpbmdEaWN0IAoJCQkJZHVwIC9BR01DT1JFX2ln
bm9yZXNoYWRlIGtub3duCgkJCQkJe3BvcH0KCQkJCQl7QUdNQ09SRV8mc3lzc2hmaWxsfWlmZWxz
ZQoJCQkJfWRlZgoJCQkvbWFrZXBhdHRlcm4KCQkJCXsKCQkJCWV4Y2gKCQkJCWR1cCAvUGF0dGVy
blR5cGUgZ2V0IDIgZXEKCQkJCQl7CgkJCQkJY2xvbmVkaWN0CgkJCQkJYmVnaW4KCQkJCQkvU2hh
ZGluZyBTaGFkaW5nIEFHTUNPUkVfc2VwYXJhdGVTaGFkaW5nRGljdCBkZWYKCQkJCQljdXJyZW50
ZGljdCBlbmQKCQkJCQlleGNoIEFHTUNPUkVfJnN5c21ha2VwYXR0ZXJuCgkJCQkJfXsKCQkJCQll
eGNoIEFHTUNPUkVfJnVzcm1ha2VwYXR0ZXJuCgkJCQkJfWlmZWxzZQoJCQkJfWRlZgoJCX1pZgoJ
fWlmCglBR01DT1JFX2luX3JpcF9zZXB7CgkJL3NldGN1c3RvbWNvbG9yCgkJewoJCQlleGNoIGFs
b2FkIHBvcAoJCQlkdXAgNyAxIHJvbGwgaW5SaXBfc3BvdF9oYXNfaW5rIG5vdAl7IAoJCQkJNCB7
NCBpbmRleCBtdWwgNCAxIHJvbGx9CgkJCQlyZXBlYXQKCQkJCS9EZXZpY2VDTVlLIHNldGNvbG9y
c3BhY2UKCQkJCTYgLTIgcm9sbCBwb3AgcG9wCgkJCX17IAoJCQkJQWRvYmVfQUdNX0NvcmUgYmVn
aW4KCQkJCQkvQUdNQ09SRV9rIHhkZiAvQUdNQ09SRV95IHhkZiAvQUdNQ09SRV9tIHhkZiAvQUdN
Q09SRV9jIHhkZgoJCQkJZW5kCgkJCQlbL1NlcGFyYXRpb24gNCAtMSByb2xsIC9EZXZpY2VDTVlL
CgkJCQl7ZHVwIEFHTUNPUkVfYyBtdWwgZXhjaCBkdXAgQUdNQ09SRV9tIG11bCBleGNoIGR1cCBB
R01DT1JFX3kgbXVsIGV4Y2ggQUdNQ09SRV9rIG11bH0KCQkJCV0KCQkJCXNldGNvbG9yc3BhY2UK
CQkJfWlmZWxzZQoJCQlzZXRjb2xvcgoJCX1uZGYKCQkvc2V0c2VwYXJhdGlvbmdyYXkKCQl7CgkJ
CVsvU2VwYXJhdGlvbiAoQWxsKSAvRGV2aWNlR3JheSB7fV0gc2V0Y29sb3JzcGFjZV9vcHQKCQkJ
MSBleGNoIHN1YiBzZXRjb2xvcgoJCX1uZGYKCX17CgkJL3NldHNlcGFyYXRpb25ncmF5CgkJewoJ
CQlBR01DT1JFXyZzZXRncmF5CgkJfW5kZgoJfWlmZWxzZQoJL2ZpbmRjbXlrY3VzdG9tY29sb3IK
CXsKCQk1IG1ha2VyZWFkb25seWFycmF5Cgl9bmRmCgkvc2V0Y3VzdG9tY29sb3IKCXsKCQlleGNo
IGFsb2FkIHBvcCBwb3AKCQk0IHs0IGluZGV4IG11bCA0IDEgcm9sbH0gcmVwZWF0CgkJc2V0Y215
a2NvbG9yIHBvcAoJfW5kZgoJL2hhc19jb2xvcgoJCS9jb2xvcmltYWdlIHdoZXJlewoJCQlBR01D
T1JFX3Byb2R1Y2luZ19zZXBzewoJCQkJcG9wIHRydWUKCQkJfXsKCQkJCXN5c3RlbWRpY3QgZXEK
CQkJfWlmZWxzZQoJCX17CgkJCWZhbHNlCgkJfWlmZWxzZQoJZGVmCgkvbWFwX2luZGV4Cgl7CgkJ
MSBpbmRleCBtdWwgZXhjaCBnZXRpbnRlcnZhbCB7MjU1IGRpdn0gZm9yYWxsCgl9IGJkZgoJL21h
cF9pbmRleGVkX2Rldm4KCXsKCQlMb29rdXAgTmFtZXMgbGVuZ3RoIDMgLTEgcm9sbCBjdmkgbWFw
X2luZGV4Cgl9IGJkZgoJL25fY29sb3JfY29tcG9uZW50cwoJewoJCWJhc2VfY29sb3JzcGFjZV90
eXBlCgkJZHVwIC9EZXZpY2VHcmF5IGVxewoJCQlwb3AgMQoJCX17CgkJCS9EZXZpY2VDTVlLIGVx
ewoJCQkJNAoJCQl9ewoJCQkJMwoJCQl9aWZlbHNlCgkJfWlmZWxzZQoJfWJkZgoJbGV2ZWwyewoJ
CS9tbyAvbW92ZXRvIGxkZgoJCS9saSAvbGluZXRvIGxkZgoJCS9jdiAvY3VydmV0byBsZGYKCQkv
a25vY2tvdXRfdW5pdHNxCgkJewoJCQkxIHNldGdyYXkKCQkJMCAwIDEgMSByZWN0ZmlsbAoJCX1k
ZWYKCQkvbGV2ZWwyU2NyZWVuRnJlcXsKCQkJYmVnaW4KCQkJNjAKCQkJSGFsZnRvbmVUeXBlIDEg
ZXF7CgkJCQlwb3AgRnJlcXVlbmN5CgkJCX1pZgoJCQlIYWxmdG9uZVR5cGUgMiBlcXsKCQkJCXBv
cCBHcmF5RnJlcXVlbmN5CgkJCX1pZgoJCQlIYWxmdG9uZVR5cGUgNSBlcXsKCQkJCXBvcCBEZWZh
dWx0IGxldmVsMlNjcmVlbkZyZXEKCQkJfWlmCgkJCSBlbmQKCQl9ZGVmCgkJL2N1cnJlbnRTY3Jl
ZW5GcmVxewoJCQljdXJyZW50aGFsZnRvbmUgbGV2ZWwyU2NyZWVuRnJlcQoJCX1kZWYKCQlsZXZl
bDIgL3NldGNvbG9yc3BhY2UgQUdNQ09SRV9rZXlfa25vd24gbm90IGFuZHsKCQkJL0FHTUNPUkVf
JiYmc2V0Y29sb3JzcGFjZSAvc2V0Y29sb3JzcGFjZSBsZGYKCQkJL0FHTUNPUkVfUmVwbGFjZU1h
cHBlZENvbG9yCgkJCXsKCQkJCWR1cCB0eXBlIGR1cCAvYXJyYXl0eXBlIGVxIGV4Y2ggL3BhY2tl
ZGFycmF5dHlwZSBlcSBvcgoJCQkJewoJCQkJCWR1cCAwIGdldCBkdXAgL1NlcGFyYXRpb24gZXEK
CQkJCQl7CgkJCQkJCXBvcAoJCQkJCQlkdXAgbGVuZ3RoIGFycmF5IGNvcHkKCQkJCQkJZHVwIGR1
cCAxIGdldAoJCQkJCQljdXJyZW50X3Nwb3RfYWxpYXMKCQkJCQkJewoJCQkJCQkJZHVwIG1hcF9h
bGlhcwoJCQkJCQkJewoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJL3NlcF9jb2xvcnNwYWNlX2RpY3Qg
Y3VycmVudGRpY3QgQUdNQ09SRV9ncHV0CgkJCQkJCQkJcG9wIHBvcAlwb3AKCQkJCQkJCQlbIAoJ
CQkJCQkJCQkvU2VwYXJhdGlvbiBOYW1lIAoJCQkJCQkJCQlDU0EgbWFwX2NzYQoJCQkJCQkJCQlk
dXAgL01hcHBlZENTQSB4ZGYgCgkJCQkJCQkJCS9zZXBfY29sb3JzcGFjZV9wcm9jIGxvYWQKCQkJ
CQkJCQldCgkJCQkJCQkJZHVwIE5hbWUKCQkJCQkJCQllbmQKCQkJCQkJCX1pZgoJCQkJCQl9aWYK
CQkJCQkJbWFwX3Jlc2VydmVkX2lua19uYW1lIDEgeHB0CgkJCQkJfXsKCQkJCQkJL0RldmljZU4g
ZXEgCgkJCQkJCXsKCQkJCQkJCWR1cCBsZW5ndGggYXJyYXkgY29weQoJCQkJCQkJZHVwIGR1cCAx
IGdldCBbIAoJCQkJCQkJCWV4Y2ggewoJCQkJCQkJCQljdXJyZW50X3Nwb3RfYWxpYXN7CgkJCQkJ
CQkJCQlkdXAgbWFwX2FsaWFzewoJCQkJCQkJCQkJCS9OYW1lIGdldCBleGNoIHBvcAoJCQkJCQkJ
CQkJfWlmCgkJCQkJCQkJCX1pZgoJCQkJCQkJCQltYXBfcmVzZXJ2ZWRfaW5rX25hbWUKCQkJCQkJ
CQl9IGZvcmFsbCAKCQkJCQkJCV0gMSB4cHQKCQkJCQkJfWlmCgkJCQkJfWlmZWxzZQoJCQkJfWlm
CgkJCX1kZWYKCQkJL3NldGNvbG9yc3BhY2UKCQkJewoJCQkJZHVwIHR5cGUgZHVwIC9hcnJheXR5
cGUgZXEgZXhjaCAvcGFja2VkYXJyYXl0eXBlIGVxIG9yCgkJCQl7CgkJCQkJZHVwIDAgZ2V0IC9J
bmRleGVkIGVxCgkJCQkJewoJCQkJCQlBR01DT1JFX2Rpc3RpbGxpbmcKCQkJCQkJewoJCQkJCQkJ
L1Bob3Rvc2hvcER1b3RvbmVMaXN0IHdoZXJlCgkJCQkJCQl7CgkJCQkJCQkJcG9wIGZhbHNlCgkJ
CQkJCQl9ewoJCQkJCQkJCXRydWUKCQkJCQkJCX1pZmVsc2UKCQkJCQkJfXsKCQkJCQkJCXRydWUK
CQkJCQkJfWlmZWxzZQoJCQkJCQl7CgkJCQkJCQlhbG9hZCBwb3AgMyAtMSByb2xsCgkJCQkJCQlB
R01DT1JFX1JlcGxhY2VNYXBwZWRDb2xvcgoJCQkJCQkJMyAxIHJvbGwgNCBhcnJheSBhc3RvcmUK
CQkJCQkJfWlmCgkJCQkJfXsKCQkJCQkJQUdNQ09SRV9SZXBsYWNlTWFwcGVkQ29sb3IKCQkJCQl9
aWZlbHNlCgkJCQl9aWYKCQkJCURldmljZU5fUFMyX2luUmlwX3NlcHMge0FHTUNPUkVfJiYmc2V0
Y29sb3JzcGFjZX0gaWYKCQkJfWRlZgoJCX1pZgkKCX17CgkJL2FkagoJCXsKCQkJY3VycmVudHN0
cm9rZWFkanVzdHsKCQkJCXRyYW5zZm9ybQoJCQkJMC4yNSBzdWIgcm91bmQgMC4yNSBhZGQgZXhj
aAoJCQkJMC4yNSBzdWIgcm91bmQgMC4yNSBhZGQgZXhjaAoJCQkJaXRyYW5zZm9ybQoJCQl9aWYK
CQl9ZGVmCgkJL21vewoJCQlhZGogbW92ZXRvCgkJfWRlZgoJCS9saXsKCQkJYWRqIGxpbmV0bwoJ
CX1kZWYKCQkvY3Z7CgkJCTYgMiByb2xsIGFkagoJCQk2IDIgcm9sbCBhZGoKCQkJNiAyIHJvbGwg
YWRqIGN1cnZldG8KCQl9ZGVmCgkJL2tub2Nrb3V0X3VuaXRzcQoJCXsKCQkJMSBzZXRncmF5CgkJ
CTggOCAxIFs4IDAgMCA4IDAgMF0gezxmZmZmZmZmZmZmZmZmZmZmPn0gaW1hZ2UKCQl9ZGVmCgkJ
L2N1cnJlbnRzdHJva2VhZGp1c3R7CgkJCS9jdXJyZW50c3Ryb2tlYWRqdXN0IEFHTUNPUkVfZ2dl
dAoJCX1kZWYKCQkvc2V0c3Ryb2tlYWRqdXN0ewoJCQkvY3VycmVudHN0cm9rZWFkanVzdCBleGNo
IEFHTUNPUkVfZ3B1dAoJCX1kZWYKCQkvY3VycmVudFNjcmVlbkZyZXF7CgkJCWN1cnJlbnRzY3Jl
ZW4gcG9wIHBvcAoJCX1kZWYKCQkvc2V0Y29sb3JzcGFjZQoJCXsKCQkJL2N1cnJlbnRjb2xvcnNw
YWNlIGV4Y2ggQUdNQ09SRV9ncHV0CgkJfSBkZWYKCQkvY3VycmVudGNvbG9yc3BhY2UKCQl7CgkJ
CS9jdXJyZW50Y29sb3JzcGFjZSBBR01DT1JFX2dnZXQKCQl9IGRlZgoJCS9zZXRjb2xvcl9kZXZp
Y2Vjb2xvcgoJCXsKCQkJYmFzZV9jb2xvcnNwYWNlX3R5cGUKCQkJZHVwIC9EZXZpY2VHcmF5IGVx
ewoJCQkJcG9wIHNldGdyYXkKCQkJfXsKCQkJCS9EZXZpY2VDTVlLIGVxewoJCQkJCXNldGNteWtj
b2xvcgoJCQkJfXsKCQkJCQlzZXRyZ2Jjb2xvcgoJCQkJfWlmZWxzZQoJCQl9aWZlbHNlCgkJfWRl
ZgoJCS9zZXRjb2xvcgoJCXsKCQkJY3VycmVudGNvbG9yc3BhY2UgMCBnZXQKCQkJZHVwIC9EZXZp
Y2VHcmF5IG5lewoJCQkJZHVwIC9EZXZpY2VDTVlLIG5lewoJCQkJCWR1cCAvRGV2aWNlUkdCIG5l
ewoJCQkJCQlkdXAgL1NlcGFyYXRpb24gZXF7CgkJCQkJCQlwb3AKCQkJCQkJCWN1cnJlbnRjb2xv
cnNwYWNlIDMgZ2V0IGV4ZWMKCQkJCQkJCWN1cnJlbnRjb2xvcnNwYWNlIDIgZ2V0CgkJCQkJCX17
CgkJCQkJCQlkdXAgL0luZGV4ZWQgZXF7CgkJCQkJCQkJcG9wCgkJCQkJCQkJY3VycmVudGNvbG9y
c3BhY2UgMyBnZXQgZHVwIHR5cGUgL3N0cmluZ3R5cGUgZXF7CgkJCQkJCQkJCWN1cnJlbnRjb2xv
cnNwYWNlIDEgZ2V0IG5fY29sb3JfY29tcG9uZW50cwoJCQkJCQkJCQkzIC0xIHJvbGwgbWFwX2lu
ZGV4CgkJCQkJCQkJfXsKCQkJCQkJCQkJZXhlYwoJCQkJCQkJCX1pZmVsc2UKCQkJCQkJCQljdXJy
ZW50Y29sb3JzcGFjZSAxIGdldAoJCQkJCQkJfXsKCQkJCQkJCQkvQUdNQ09SRV9jdXJfZXJyIC9B
R01DT1JFX2ludmFsaWRfY29sb3Jfc3BhY2UgZGVmCgkJCQkJCQkJQUdNQ09SRV9pbnZhbGlkX2Nv
bG9yX3NwYWNlCgkJCQkJCQl9aWZlbHNlCgkJCQkJCX1pZmVsc2UKCQkJCQl9aWYKCQkJCX1pZgoJ
CQl9aWYKCQkJc2V0Y29sb3JfZGV2aWNlY29sb3IKCQl9IGRlZgoJfWlmZWxzZQoJL3NvcCAvc2V0
b3ZlcnByaW50IGxkZgoJL2x3IC9zZXRsaW5ld2lkdGggbGRmCgkvbGMgL3NldGxpbmVjYXAgbGRm
CgkvbGogL3NldGxpbmVqb2luIGxkZgoJL21sIC9zZXRtaXRlcmxpbWl0IGxkZgoJL2RzaCAvc2V0
ZGFzaCBsZGYKCS9zYWRqIC9zZXRzdHJva2VhZGp1c3QgbGRmCgkvZ3J5IC9zZXRncmF5IGxkZgoJ
L3JnYiAvc2V0cmdiY29sb3IgbGRmCgkvY215ayAvc2V0Y215a2NvbG9yIGxkZgoJL3NlcCAvc2V0
c2VwY29sb3IgbGRmCgkvZGV2biAvc2V0ZGV2aWNlbmNvbG9yIGxkZgoJL2lkeCAvc2V0aW5kZXhl
ZGNvbG9yIGxkZgoJL2NvbHIgL3NldGNvbG9yIGxkZgoJL2NzYWNyZCAvc2V0X2NzYV9jcmQgbGRm
Cgkvc2VwY3MgL3NldHNlcGNvbG9yc3BhY2UgbGRmCgkvZGV2bmNzIC9zZXRkZXZpY2VuY29sb3Jz
cGFjZSBsZGYKCS9pZHhjcyAvc2V0aW5kZXhlZGNvbG9yc3BhY2UgbGRmCgkvY3AgL2Nsb3NlcGF0
aCBsZGYKCS9jbHAgL2NscF9ucHRoIGxkZgoJL2VjbHAgL2VvY2xwX25wdGggbGRmCgkvZiAvZmls
bCBsZGYKCS9lZiAvZW9maWxsIGxkZgoJL0AgL3N0cm9rZSBsZGYKCS9uY2xwIC9ucHRoX2NscCBs
ZGYKCS9nc2V0IC9ncmFwaGljX3NldHVwIGxkZgoJL2djbG4gL2dyYXBoaWNfY2xlYW51cCBsZGYK
CWN1cnJlbnRkaWN0ewoJCWR1cCB4Y2hlY2sgMSBpbmRleCB0eXBlIGR1cCAvYXJyYXl0eXBlIGVx
IGV4Y2ggL3BhY2tlZGFycmF5dHlwZSBlcSBvciBhbmQgewoJCQliaW5kCgkJfWlmCgkJZGVmCgl9
Zm9yYWxsCgkvY3VycmVudHBhZ2VkZXZpY2UgY3VycmVudHBhZ2VkZXZpY2UgZGVmCi9nZXRyYW1w
Y29sb3IgewovaW5keCBleGNoIGRlZgowIDEgTnVtQ29tcCAxIHN1YiB7CmR1cApTYW1wbGVzIGV4
Y2ggZ2V0CmR1cCB0eXBlIC9zdHJpbmd0eXBlIGVxIHsgaW5keCBnZXQgfSBpZgpleGNoClNjYWxp
bmcgZXhjaCBnZXQgYWxvYWQgcG9wCjMgMSByb2xsCm11bCBhZGQKfSBmb3IKQ29sb3JTcGFjZUZh
bWlseSAvU2VwYXJhdGlvbiBlcQoJewoJc2VwCgl9Cgl7CglDb2xvclNwYWNlRmFtaWx5IC9EZXZp
Y2VOIGVxCgkJewoJCWRldm4KCQl9CgkJewoJCXNldGNvbG9yCgkJfWlmZWxzZQoJfWlmZWxzZQp9
IGJpbmQgZGVmCi9zc3NldGJhY2tncm91bmQgeyBhbG9hZCBwb3Agc2V0Y29sb3IgfSBiaW5kIGRl
ZgovUmFkaWFsU2hhZGUgewo0MCBkaWN0IGJlZ2luCi9Db2xvclNwYWNlRmFtaWx5IGV4Y2ggZGVm
Ci9iYWNrZ3JvdW5kIGV4Y2ggZGVmCi9leHQxIGV4Y2ggZGVmCi9leHQwIGV4Y2ggZGVmCi9CQm94
IGV4Y2ggZGVmCi9yMiBleGNoIGRlZgovYzJ5IGV4Y2ggZGVmCi9jMnggZXhjaCBkZWYKL3IxIGV4
Y2ggZGVmCi9jMXkgZXhjaCBkZWYKL2MxeCBleGNoIGRlZgovcmFtcGRpY3QgZXhjaCBkZWYKL3Nl
dGlua292ZXJwcmludCB3aGVyZSB7cG9wIC9zZXRpbmtvdmVycHJpbnR7cG9wfWRlZn1pZgpnc2F2
ZQpCQm94IGxlbmd0aCAwIGd0IHsKbmV3cGF0aApCQm94IDAgZ2V0IEJCb3ggMSBnZXQgbW92ZXRv
CkJCb3ggMiBnZXQgQkJveCAwIGdldCBzdWIgMCBybGluZXRvCjAgQkJveCAzIGdldCBCQm94IDEg
Z2V0IHN1YiBybGluZXRvCkJCb3ggMiBnZXQgQkJveCAwIGdldCBzdWIgbmVnIDAgcmxpbmV0bwpj
bG9zZXBhdGgKY2xpcApuZXdwYXRoCn0gaWYKYzF4IGMyeCBlcQp7CmMxeSBjMnkgbHQgey90aGV0
YSA5MCBkZWZ9ey90aGV0YSAyNzAgZGVmfSBpZmVsc2UKfQp7Ci9zbG9wZSBjMnkgYzF5IHN1YiBj
MnggYzF4IHN1YiBkaXYgZGVmCi90aGV0YSBzbG9wZSAxIGF0YW4gZGVmCmMyeCBjMXggbHQgYzJ5
IGMxeSBnZSBhbmQgeyAvdGhldGEgdGhldGEgMTgwIHN1YiBkZWZ9IGlmCmMyeCBjMXggbHQgYzJ5
IGMxeSBsdCBhbmQgeyAvdGhldGEgdGhldGEgMTgwIGFkZCBkZWZ9IGlmCn0KaWZlbHNlCmdzYXZl
CmNsaXBwYXRoCmMxeCBjMXkgdHJhbnNsYXRlCnRoZXRhIHJvdGF0ZQotOTAgcm90YXRlCnsgcGF0
aGJib3ggfSBzdG9wcGVkCnsgMCAwIDAgMCB9IGlmCi95TWF4IGV4Y2ggZGVmCi94TWF4IGV4Y2gg
ZGVmCi95TWluIGV4Y2ggZGVmCi94TWluIGV4Y2ggZGVmCmdyZXN0b3JlCnhNYXggeE1pbiBlcSB5
TWF4IHlNaW4gZXEgb3IKewpncmVzdG9yZQplbmQKfQp7Ci9tYXggeyAyIGNvcHkgZ3QgeyBwb3Ag
fSB7ZXhjaCBwb3B9IGlmZWxzZSB9IGJpbmQgZGVmCi9taW4geyAyIGNvcHkgbHQgeyBwb3AgfSB7
ZXhjaCBwb3B9IGlmZWxzZSB9IGJpbmQgZGVmCnJhbXBkaWN0IGJlZ2luCjQwIGRpY3QgYmVnaW4K
YmFja2dyb3VuZCBsZW5ndGggMCBndCB7IGJhY2tncm91bmQgc3NzZXRiYWNrZ3JvdW5kIGdzYXZl
IGNsaXBwYXRoIGZpbGwgZ3Jlc3RvcmUgfSBpZgpnc2F2ZQpjMXggYzF5IHRyYW5zbGF0ZQp0aGV0
YSByb3RhdGUKLTkwIHJvdGF0ZQovYzJ5IGMxeCBjMnggc3ViIGR1cCBtdWwgYzF5IGMyeSBzdWIg
ZHVwIG11bCBhZGQgc3FydCBkZWYKL2MxeSAwIGRlZgovYzF4IDAgZGVmCi9jMnggMCBkZWYKZXh0
MCB7CjAgZ2V0cmFtcGNvbG9yCmMyeSByMiBhZGQgcjEgc3ViIDAuMDAwMSBsdAp7CmMxeCBjMXkg
cjEgMzYwIDAgYXJjbgpwYXRoYmJveAovYXltYXggZXhjaCBkZWYKL2F4bWF4IGV4Y2ggZGVmCi9h
eW1pbiBleGNoIGRlZgovYXhtaW4gZXhjaCBkZWYKL2J4TWluIHhNaW4gYXhtaW4gbWluIGRlZgov
YnlNaW4geU1pbiBheW1pbiBtaW4gZGVmCi9ieE1heCB4TWF4IGF4bWF4IG1heCBkZWYKL2J5TWF4
IHlNYXggYXltYXggbWF4IGRlZgpieE1pbiBieU1pbiBtb3ZldG8KYnhNYXggYnlNaW4gbGluZXRv
CmJ4TWF4IGJ5TWF4IGxpbmV0bwpieE1pbiBieU1heCBsaW5ldG8KYnhNaW4gYnlNaW4gbGluZXRv
CmVvZmlsbAp9CnsKYzJ5IHIxIGFkZCByMiBsZQp7CmMxeCBjMXkgcjEgMCAzNjAgYXJjCmZpbGwK
fQp7CmMyeCBjMnkgcjIgMCAzNjAgYXJjIGZpbGwKcjEgcjIgZXEKewovcDF4IHIxIG5lZyBkZWYK
L3AxeSBjMXkgZGVmCi9wMnggcjEgZGVmCi9wMnkgYzF5IGRlZgpwMXggcDF5IG1vdmV0byBwMngg
cDJ5IGxpbmV0byBwMnggeU1pbiBsaW5ldG8gcDF4IHlNaW4gbGluZXRvCmZpbGwKfQp7Ci9BQSBy
MiByMSBzdWIgYzJ5IGRpdiBkZWYKL3RoZXRhIEFBIDEgQUEgZHVwIG11bCBzdWIgc3FydCBkaXYg
MSBhdGFuIGRlZgovU1MxIDkwIHRoZXRhIGFkZCBkdXAgc2luIGV4Y2ggY29zIGRpdiBkZWYKL3Ax
eCByMSBTUzEgU1MxIG11bCBTUzEgU1MxIG11bCAxIGFkZCBkaXYgc3FydCBtdWwgbmVnIGRlZgov
cDF5IHAxeCBTUzEgZGl2IG5lZyBkZWYKL1NTMiA5MCB0aGV0YSBzdWIgZHVwIHNpbiBleGNoIGNv
cyBkaXYgZGVmCi9wMnggcjEgU1MyIFNTMiBtdWwgU1MyIFNTMiBtdWwgMSBhZGQgZGl2IHNxcnQg
bXVsIGRlZgovcDJ5IHAyeCBTUzIgZGl2IG5lZyBkZWYKcjEgcjIgZ3QKewovTDFtYXhYIHAxeCB5
TWluIHAxeSBzdWIgU1MxIGRpdiBhZGQgZGVmCi9MMm1heFggcDJ4IHlNaW4gcDJ5IHN1YiBTUzIg
ZGl2IGFkZCBkZWYKfQp7Ci9MMW1heFggMCBkZWYKL0wybWF4WCAwIGRlZgp9aWZlbHNlCnAxeCBw
MXkgbW92ZXRvIHAyeCBwMnkgbGluZXRvIEwybWF4WCBMMm1heFggcDJ4IHN1YiBTUzIgbXVsIHAy
eSBhZGQgbGluZXRvCkwxbWF4WCBMMW1heFggcDF4IHN1YiBTUzEgbXVsIHAxeSBhZGQgbGluZXRv
CmZpbGwKfQppZmVsc2UKfQppZmVsc2UKfSBpZmVsc2UKfSBpZgpjMXggYzJ4IHN1YiBkdXAgbXVs
CmMxeSBjMnkgc3ViIGR1cCBtdWwKYWRkIDAuNSBleHAKMCBkdHJhbnNmb3JtCmR1cCBtdWwgZXhj
aCBkdXAgbXVsIGFkZCAwLjUgZXhwIDcyIGRpdgowIDcyIG1hdHJpeCBkZWZhdWx0bWF0cml4IGR0
cmFuc2Zvcm0gZHVwIG11bCBleGNoIGR1cCBtdWwgYWRkIHNxcnQKNzIgMCBtYXRyaXggZGVmYXVs
dG1hdHJpeCBkdHJhbnNmb3JtIGR1cCBtdWwgZXhjaCBkdXAgbXVsIGFkZCBzcXJ0CjEgaW5kZXgg
MSBpbmRleCBsdCB7IGV4Y2ggfSBpZiBwb3AKL2hpcmVzIGV4Y2ggZGVmCmhpcmVzIG11bAovbnVt
cGl4IGV4Y2ggZGVmCi9udW1zdGVwcyBOdW1TYW1wbGVzIGRlZgovcmFtcEluZHhJbmMgMSBkZWYK
L3N1YnNhbXBsaW5nIGZhbHNlIGRlZgpudW1waXggMCBuZQp7Ck51bVNhbXBsZXMgbnVtcGl4IGRp
diAwLjUgZ3QKewovbnVtc3RlcHMgbnVtcGl4IDIgZGl2IHJvdW5kIGN2aSBkdXAgMSBsZSB7IHBv
cCAyIH0gaWYgZGVmCi9yYW1wSW5keEluYyBOdW1TYW1wbGVzIDEgc3ViIG51bXN0ZXBzIGRpdiBk
ZWYKL3N1YnNhbXBsaW5nIHRydWUgZGVmCn0gaWYKfSBpZgoveEluYyBjMnggYzF4IHN1YiBudW1z
dGVwcyBkaXYgZGVmCi95SW5jIGMyeSBjMXkgc3ViIG51bXN0ZXBzIGRpdiBkZWYKL3JJbmMgcjIg
cjEgc3ViIG51bXN0ZXBzIGRpdiBkZWYKL2N4IGMxeCBkZWYKL2N5IGMxeSBkZWYKL3JhZGl1cyBy
MSBkZWYKbmV3cGF0aAp4SW5jIDAgZXEgeUluYyAwIGVxIHJJbmMgMCBlcSBhbmQgYW5kCnsKMCBn
ZXRyYW1wY29sb3IKY3ggY3kgcmFkaXVzIDAgMzYwIGFyYwpzdHJva2UKTnVtU2FtcGxlcyAxIHN1
YiBnZXRyYW1wY29sb3IKY3ggY3kgcmFkaXVzIDcyIGhpcmVzIGRpdiBhZGQgMCAzNjAgYXJjCjAg
c2V0bGluZXdpZHRoCnN0cm9rZQp9CnsKMApudW1zdGVwcwp7CmR1cApzdWJzYW1wbGluZyB7IHJv
dW5kIGN2aSB9IGlmCmdldHJhbXBjb2xvcgpjeCBjeSByYWRpdXMgMCAzNjAgYXJjCi9jeCBjeCB4
SW5jIGFkZCBkZWYKL2N5IGN5IHlJbmMgYWRkIGRlZgovcmFkaXVzIHJhZGl1cyBySW5jIGFkZCBk
ZWYKY3ggY3kgcmFkaXVzIDM2MCAwIGFyY24KZW9maWxsCnJhbXBJbmR4SW5jIGFkZAp9CnJlcGVh
dApwb3AKfSBpZmVsc2UKZXh0MSB7CmMyeSByMiBhZGQgcjEgbHQKewpjMnggYzJ5IHIyIDAgMzYw
IGFyYwpmaWxsCn0KewpjMnkgcjEgYWRkIHIyIHN1YiAwLjAwMDEgbGUKewpjMnggYzJ5IHIyIDM2
MCAwIGFyY24KcGF0aGJib3gKL2F5bWF4IGV4Y2ggZGVmCi9heG1heCBleGNoIGRlZgovYXltaW4g
ZXhjaCBkZWYKL2F4bWluIGV4Y2ggZGVmCi9ieE1pbiB4TWluIGF4bWluIG1pbiBkZWYKL2J5TWlu
IHlNaW4gYXltaW4gbWluIGRlZgovYnhNYXggeE1heCBheG1heCBtYXggZGVmCi9ieU1heCB5TWF4
IGF5bWF4IG1heCBkZWYKYnhNaW4gYnlNaW4gbW92ZXRvCmJ4TWF4IGJ5TWluIGxpbmV0bwpieE1h
eCBieU1heCBsaW5ldG8KYnhNaW4gYnlNYXggbGluZXRvCmJ4TWluIGJ5TWluIGxpbmV0bwplb2Zp
bGwKfQp7CmMyeCBjMnkgcjIgMCAzNjAgYXJjIGZpbGwKcjEgcjIgZXEKewovcDF4IHIyIG5lZyBk
ZWYKL3AxeSBjMnkgZGVmCi9wMnggcjIgZGVmCi9wMnkgYzJ5IGRlZgpwMXggcDF5IG1vdmV0byBw
MnggcDJ5IGxpbmV0byBwMnggeU1heCBsaW5ldG8gcDF4IHlNYXggbGluZXRvCmZpbGwKfQp7Ci9B
QSByMiByMSBzdWIgYzJ5IGRpdiBkZWYKL3RoZXRhIEFBIDEgQUEgZHVwIG11bCBzdWIgc3FydCBk
aXYgMSBhdGFuIGRlZgovU1MxIDkwIHRoZXRhIGFkZCBkdXAgc2luIGV4Y2ggY29zIGRpdiBkZWYK
L3AxeCByMiBTUzEgU1MxIG11bCBTUzEgU1MxIG11bCAxIGFkZCBkaXYgc3FydCBtdWwgbmVnIGRl
ZgovcDF5IGMyeSBwMXggU1MxIGRpdiBzdWIgZGVmCi9TUzIgOTAgdGhldGEgc3ViIGR1cCBzaW4g
ZXhjaCBjb3MgZGl2IGRlZgovcDJ4IHIyIFNTMiBTUzIgbXVsIFNTMiBTUzIgbXVsIDEgYWRkIGRp
diBzcXJ0IG11bCBkZWYKL3AyeSBjMnkgcDJ4IFNTMiBkaXYgc3ViIGRlZgpyMSByMiBsdAp7Ci9M
MW1heFggcDF4IHlNYXggcDF5IHN1YiBTUzEgZGl2IGFkZCBkZWYKL0wybWF4WCBwMnggeU1heCBw
Mnkgc3ViIFNTMiBkaXYgYWRkIGRlZgp9CnsKL0wxbWF4WCAwIGRlZgovTDJtYXhYIDAgZGVmCn1p
ZmVsc2UKcDF4IHAxeSBtb3ZldG8gcDJ4IHAyeSBsaW5ldG8gTDJtYXhYIEwybWF4WCBwMnggc3Vi
IFNTMiBtdWwgcDJ5IGFkZCBsaW5ldG8KTDFtYXhYIEwxbWF4WCBwMXggc3ViIFNTMSBtdWwgcDF5
IGFkZCBsaW5ldG8KZmlsbAp9CmlmZWxzZQp9CmlmZWxzZQp9IGlmZWxzZQp9IGlmCmdyZXN0b3Jl
CmdyZXN0b3JlCmVuZAplbmQKZW5kCn0gaWZlbHNlCn0gYmluZCBkZWYKL0dlblN0cmlwcyB7CjQw
IGRpY3QgYmVnaW4KL0NvbG9yU3BhY2VGYW1pbHkgZXhjaCBkZWYKL2JhY2tncm91bmQgZXhjaCBk
ZWYKL2V4dDEgZXhjaCBkZWYKL2V4dDAgZXhjaCBkZWYKL0JCb3ggZXhjaCBkZWYKL3kyIGV4Y2gg
ZGVmCi94MiBleGNoIGRlZgoveTEgZXhjaCBkZWYKL3gxIGV4Y2ggZGVmCi9yYW1wZGljdCBleGNo
IGRlZgovc2V0aW5rb3ZlcnByaW50IHdoZXJlIHtwb3AgL3NldGlua292ZXJwcmludHtwb3B9ZGVm
fWlmCmdzYXZlCkJCb3ggbGVuZ3RoIDAgZ3QgewpuZXdwYXRoCkJCb3ggMCBnZXQgQkJveCAxIGdl
dCBtb3ZldG8KQkJveCAyIGdldCBCQm94IDAgZ2V0IHN1YiAwIHJsaW5ldG8KMCBCQm94IDMgZ2V0
IEJCb3ggMSBnZXQgc3ViIHJsaW5ldG8KQkJveCAyIGdldCBCQm94IDAgZ2V0IHN1YiBuZWcgMCBy
bGluZXRvCmNsb3NlcGF0aApjbGlwCm5ld3BhdGgKfSBpZgp4MSB4MiBlcQp7CnkxIHkyIGx0IHsv
dGhldGEgOTAgZGVmfXsvdGhldGEgMjcwIGRlZn0gaWZlbHNlCn0Kewovc2xvcGUgeTIgeTEgc3Vi
IHgyIHgxIHN1YiBkaXYgZGVmCi90aGV0YSBzbG9wZSAxIGF0YW4gZGVmCngyIHgxIGx0IHkyIHkx
IGdlIGFuZCB7IC90aGV0YSB0aGV0YSAxODAgc3ViIGRlZn0gaWYKeDIgeDEgbHQgeTIgeTEgbHQg
YW5kIHsgL3RoZXRhIHRoZXRhIDE4MCBhZGQgZGVmfSBpZgp9CmlmZWxzZQpnc2F2ZQpjbGlwcGF0
aAp4MSB5MSB0cmFuc2xhdGUKdGhldGEgcm90YXRlCnsgcGF0aGJib3ggfSBzdG9wcGVkCnsgMCAw
IDAgMCB9IGlmCi95TWF4IGV4Y2ggZGVmCi94TWF4IGV4Y2ggZGVmCi95TWluIGV4Y2ggZGVmCi94
TWluIGV4Y2ggZGVmCmdyZXN0b3JlCnhNYXggeE1pbiBlcSB5TWF4IHlNaW4gZXEgb3IKewpncmVz
dG9yZQplbmQKfQp7CnJhbXBkaWN0IGJlZ2luCjIwIGRpY3QgYmVnaW4KYmFja2dyb3VuZCBsZW5n
dGggMCBndCB7IGJhY2tncm91bmQgc3NzZXRiYWNrZ3JvdW5kIGdzYXZlIGNsaXBwYXRoIGZpbGwg
Z3Jlc3RvcmUgfSBpZgpnc2F2ZQp4MSB5MSB0cmFuc2xhdGUKdGhldGEgcm90YXRlCi94U3RhcnQg
MCBkZWYKL3hFbmQgeDIgeDEgc3ViIGR1cCBtdWwgeTIgeTEgc3ViIGR1cCBtdWwgYWRkIDAuNSBl
eHAgZGVmCi95U3BhbiB5TWF4IHlNaW4gc3ViIGRlZgovbnVtc3RlcHMgTnVtU2FtcGxlcyBkZWYK
L3JhbXBJbmR4SW5jIDEgZGVmCi9zdWJzYW1wbGluZyBmYWxzZSBkZWYKeFN0YXJ0IDAgdHJhbnNm
b3JtCnhFbmQgMCB0cmFuc2Zvcm0KMyAtMSByb2xsCnN1YiBkdXAgbXVsCjMgMSByb2xsCnN1YiBk
dXAgbXVsCmFkZCAwLjUgZXhwIDcyIGRpdgowIDcyIG1hdHJpeCBkZWZhdWx0bWF0cml4IGR0cmFu
c2Zvcm0gZHVwIG11bCBleGNoIGR1cCBtdWwgYWRkIHNxcnQKNzIgMCBtYXRyaXggZGVmYXVsdG1h
dHJpeCBkdHJhbnNmb3JtIGR1cCBtdWwgZXhjaCBkdXAgbXVsIGFkZCBzcXJ0CjEgaW5kZXggMSBp
bmRleCBsdCB7IGV4Y2ggfSBpZiBwb3AKbXVsCi9udW1waXggZXhjaCBkZWYKbnVtcGl4IDAgbmUK
ewpOdW1TYW1wbGVzIG51bXBpeCBkaXYgMC41IGd0CnsKL251bXN0ZXBzIG51bXBpeCAyIGRpdiBy
b3VuZCBjdmkgZHVwIDEgbGUgeyBwb3AgMiB9IGlmIGRlZgovcmFtcEluZHhJbmMgTnVtU2FtcGxl
cyAxIHN1YiBudW1zdGVwcyBkaXYgZGVmCi9zdWJzYW1wbGluZyB0cnVlIGRlZgp9IGlmCn0gaWYK
ZXh0MCB7CjAgZ2V0cmFtcGNvbG9yCnhNaW4geFN0YXJ0IGx0CnsgeE1pbiB5TWluIHhNaW4gbmVn
IHlTcGFuIHJlY3RmaWxsIH0gaWYKfSBpZgoveEluYyB4RW5kIHhTdGFydCBzdWIgbnVtc3RlcHMg
ZGl2IGRlZgoveCB4U3RhcnQgZGVmCjAKbnVtc3RlcHMKewpkdXAKc3Vic2FtcGxpbmcgeyByb3Vu
ZCBjdmkgfSBpZgpnZXRyYW1wY29sb3IKeCB5TWluIHhJbmMgeVNwYW4gcmVjdGZpbGwKL3ggeCB4
SW5jIGFkZCBkZWYKcmFtcEluZHhJbmMgYWRkCn0KcmVwZWF0CnBvcApleHQxIHsKeE1heCB4RW5k
IGd0CnsgeEVuZCB5TWluIHhNYXggeEVuZCBzdWIgeVNwYW4gcmVjdGZpbGwgfSBpZgp9IGlmCmdy
ZXN0b3JlCmdyZXN0b3JlCmVuZAplbmQKZW5kCn0gaWZlbHNlCn0gYmluZCBkZWYKfWRlZgovcGFn
ZV90cmFpbGVyCnsKCWVuZAp9ZGVmCi9kb2NfdHJhaWxlcnsKfWRlZgpzeXN0ZW1kaWN0IC9maW5k
Y29sb3JyZW5kZXJpbmcga25vd257CgkvZmluZGNvbG9ycmVuZGVyaW5nIHN5c3RlbWRpY3QgL2Zp
bmRjb2xvcnJlbmRlcmluZyBnZXQgZGVmCn1pZgpzeXN0ZW1kaWN0IC9zZXRjb2xvcnJlbmRlcmlu
ZyBrbm93bnsKCS9zZXRjb2xvcnJlbmRlcmluZyBzeXN0ZW1kaWN0IC9zZXRjb2xvcnJlbmRlcmlu
ZyBnZXQgZGVmCn1pZgovdGVzdF9jbXlrX2NvbG9yX3BsYXRlCnsKCWdzYXZlCglzZXRjbXlrY29s
b3IgY3VycmVudGdyYXkgMSBuZQoJZ3Jlc3RvcmUKfWRlZgovaW5SaXBfc3BvdF9oYXNfaW5rCnsK
CWR1cCBBZG9iZV9BR01fQ29yZS9BR01DT1JFX25hbWUgeGRkZgoJY29udmVydF9zcG90X3RvX3By
b2Nlc3Mgbm90Cn1kZWYKL21hcDI1NV90b19yYW5nZQp7CgkxIGluZGV4IHN1YgoJMyAtMSByb2xs
IDI1NSBkaXYgbXVsIGFkZAp9ZGVmCi9zZXRfY3NhX2NyZAp7Cgkvc2VwX2NvbG9yc3BhY2VfZGlj
dCBudWxsIEFHTUNPUkVfZ3B1dAoJYmVnaW4KCQlDU0EgbWFwX2NzYSBzZXRjb2xvcnNwYWNlX29w
dAoJCXNldF9jcmQKCWVuZAp9CmRlZgovc2V0c2VwY29sb3IKeyAKCS9zZXBfY29sb3JzcGFjZV9k
aWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJCWR1cCAvc2VwX3RpbnQgZXhjaCBBR01DT1JFX2dwdXQK
CQlUaW50UHJvYwoJZW5kCn0gZGVmCi9zZXRkZXZpY2VuY29sb3IKeyAKCS9kZXZpY2VuX2NvbG9y
c3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQlOYW1lcyBsZW5ndGggY29weQoJCU5hbWVz
IGxlbmd0aCAxIHN1YiAtMSAwCgkJewoJCQkvZGV2aWNlbl90aW50cyBBR01DT1JFX2dnZXQgMyAx
IHJvbGwgeHB0CgkJfSBmb3IKCQlUaW50UHJvYwoJZW5kCn0gZGVmCi9zZXBfY29sb3JzcGFjZV9w
cm9jCnsKCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfdG1wIHhkZGYKCS9zZXBfY29sb3JzcGFjZV9k
aWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJY3VycmVudGRpY3QvQ29tcG9uZW50cyBrbm93bnsKCQlD
b21wb25lbnRzIGFsb2FkIHBvcCAKCQlUaW50TWV0aG9kL0xhYiBlcXsKCQkJMiB7QUdNQ09SRV90
bXAgbXVsIE5Db21wb25lbnRzIDEgcm9sbH0gcmVwZWF0CgkJCUxNYXggc3ViIEFHTUNPUkVfdG1w
IG11bCBMTWF4IGFkZCAgTkNvbXBvbmVudHMgMSByb2xsCgkJfXsKCQkJVGludE1ldGhvZC9TdWJ0
cmFjdGl2ZSBlcXsKCQkJCU5Db21wb25lbnRzewoJCQkJCUFHTUNPUkVfdG1wIG11bCBOQ29tcG9u
ZW50cyAxIHJvbGwKCQkJCX1yZXBlYXQKCQkJfXsKCQkJCU5Db21wb25lbnRzewoJCQkJCTEgc3Vi
IEFHTUNPUkVfdG1wIG11bCAxIGFkZCAgTkNvbXBvbmVudHMgMSByb2xsCgkJCQl9IHJlcGVhdAoJ
CQl9aWZlbHNlCgkJfWlmZWxzZQoJfXsKCQlDb2xvckxvb2t1cCBBR01DT1JFX3RtcCBDb2xvckxv
b2t1cCBsZW5ndGggMSBzdWIgbXVsIHJvdW5kIGN2aSBnZXQKCQlhbG9hZCBwb3AKCX1pZmVsc2UK
CWVuZAp9IGRlZgovc2VwX2NvbG9yc3BhY2VfZ3JheV9wcm9jCnsKCUFkb2JlX0FHTV9Db3JlL0FH
TUNPUkVfdG1wIHhkZGYKCS9zZXBfY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJ
R3JheUxvb2t1cCBBR01DT1JFX3RtcCBHcmF5TG9va3VwIGxlbmd0aCAxIHN1YiBtdWwgcm91bmQg
Y3ZpIGdldAoJZW5kCn0gZGVmCi9zZXBfcHJvY19uYW1lCnsKCWR1cCAwIGdldCAKCWR1cCAvRGV2
aWNlUkdCIGVxIGV4Y2ggL0RldmljZUNNWUsgZXEgb3IgbGV2ZWwyIG5vdCBhbmQgaGFzX2NvbG9y
IG5vdCBhbmR7CgkJcG9wIFsvRGV2aWNlR3JheV0KCQkvc2VwX2NvbG9yc3BhY2VfZ3JheV9wcm9j
Cgl9ewoJCS9zZXBfY29sb3JzcGFjZV9wcm9jCgl9aWZlbHNlCn0gZGVmCi9zZXRzZXBjb2xvcnNw
YWNlCnsgCgljdXJyZW50X3Nwb3RfYWxpYXN7CgkJZHVwIGJlZ2luCgkJCU5hbWUgbWFwX2FsaWFz
ewoJCQkJZXhjaCBwb3AKCQkJfWlmCgkJZW5kCgl9aWYKCWR1cCAvc2VwX2NvbG9yc3BhY2VfZGlj
dCBleGNoIEFHTUNPUkVfZ3B1dAoJYmVnaW4KCS9NYXBwZWRDU0EgQ1NBIG1hcF9jc2EgZGVmCglB
ZG9iZV9BR01fQ29yZS9BR01DT1JFX3NlcF9zcGVjaWFsIE5hbWUgZHVwICgpIGVxIGV4Y2ggKEFs
bCkgZXEgb3IgZGRmCglBR01DT1JFX2F2b2lkX0wyX3NlcF9zcGFjZXsKCQlbL0luZGV4ZWQgTWFw
cGVkQ1NBIHNlcF9wcm9jX25hbWUgMjU1IGV4Y2ggCgkJCXsgMjU1IGRpdiB9IC9leGVjIGN2eCAz
IC0xIHJvbGwgWyA0IDEgcm9sbCBsb2FkIC9leGVjIGN2eCBdIGN2eCAKCQldIHNldGNvbG9yc3Bh
Y2Vfb3B0CgkJL1RpbnRQcm9jIHsKCQkJMjU1IG11bCByb3VuZCBjdmkgc2V0Y29sb3IKCQl9YmRm
Cgl9ewoJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcSAKCQljdXJyZW50ZGljdC9Db21w
b25lbnRzIGtub3duIGFuZCAKCQlBR01DT1JFX3NlcF9zcGVjaWFsIG5vdCBhbmR7CgkJCS9UaW50
UHJvYyBbCgkJCQlDb21wb25lbnRzIGFsb2FkIHBvcCBOYW1lIGZpbmRjbXlrY3VzdG9tY29sb3Ig
CgkJCQkvZXhjaCBjdnggL3NldGN1c3RvbWNvbG9yIGN2eAoJCQldIGN2eCBiZGYKCQl9ewogCQkJ
QUdNQ09SRV9ob3N0X3NlcCBOYW1lIChBbGwpIGVxIGFuZHsKIAkJCQkvVGludFByb2MgeyAKCQkJ
CQkxIGV4Y2ggc3ViIHNldHNlcGFyYXRpb25ncmF5IAoJCQkJfWJkZgogCQkJfXsKCQkJCUFHTUNP
UkVfaW5fcmlwX3NlcCBNYXBwZWRDU0EgMCBnZXQgL0RldmljZUNNWUsgZXEgYW5kIAoJCQkJQUdN
Q09SRV9ob3N0X3NlcCBvcgoJCQkJTmFtZSAoKSBlcSBhbmR7CgkJCQkJL1RpbnRQcm9jIFsKCQkJ
CQkJTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCAwIGdldCAvRGV2aWNlQ01ZSyBlcXsKCQkJ
CQkJCWN2eCAvc2V0Y215a2NvbG9yIGN2eAoJCQkJCQl9ewoJCQkJCQkJY3Z4IC9zZXRncmF5IGN2
eAoJCQkJCQl9aWZlbHNlCgkJCQkJXSBjdnggYmRmCgkJCQl9ewoJCQkJCUFHTUNPUkVfcHJvZHVj
aW5nX3NlcHMgTWFwcGVkQ1NBIDAgZ2V0IGR1cCAvRGV2aWNlQ01ZSyBlcSBleGNoIC9EZXZpY2VH
cmF5IGVxIG9yIGFuZCBBR01DT1JFX3NlcF9zcGVjaWFsIG5vdCBhbmR7CgkgCQkJCQkvVGludFBy
b2MgWwoJCQkJCQkJL2R1cCBjdngKCQkJCQkJCU1hcHBlZENTQSBzZXBfcHJvY19uYW1lIGN2eCBl
eGNoCgkJCQkJCQkwIGdldCAvRGV2aWNlR3JheSBlcXsKCQkJCQkJCQkxIC9leGNoIGN2eCAvc3Vi
IGN2eCAwIDAgMCA0IC0xIC9yb2xsIGN2eAoJCQkJCQkJfWlmCgkJCQkJCQkvTmFtZSBjdnggL2Zp
bmRjbXlrY3VzdG9tY29sb3IgY3Z4IC9leGNoIGN2eAoJCQkJCQkJQUdNQ09SRV9ob3N0X3NlcHsK
CQkJCQkJCQlBR01DT1JFX2lzX2NteWtfc2VwCgkJCQkJCQkJL05hbWUgY3Z4IAoJCQkJCQkJCS9B
R01DT1JFX0lzU2VwYXJhdGlvbkFQcm9jZXNzQ29sb3IgbG9hZCAvZXhlYyBjdngKCQkJCQkJCQkv
bm90IGN2eCAvYW5kIGN2eCAKCQkJCQkJCX17CgkJCQkJCQkJTmFtZSBpblJpcF9zcG90X2hhc19p
bmsgbm90CgkJCQkJCQl9aWZlbHNlCgkJCQkJCQlbCgkJIAkJCQkJCS9wb3AgY3Z4IDEKCQkJCQkJ
CV0gY3Z4IC9pZiBjdngKCQkJCQkJCS9zZXRjdXN0b21jb2xvciBjdngKCQkJCQkJXSBjdnggYmRm
CiAJCQkJCX17IAoJCQkJCQkvVGludFByb2MgL3NldGNvbG9yIGxkZgoJCQkJCQlbL1NlcGFyYXRp
b24gTmFtZSBNYXBwZWRDU0Egc2VwX3Byb2NfbmFtZSBsb2FkIF0gc2V0Y29sb3JzcGFjZV9vcHQK
CQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgl9aWZlbHNlCglz
ZXRfY3JkCglzZXRzZXBjb2xvcgoJZW5kCn0gZGVmCi9hZGRpdGl2ZV9ibGVuZAp7CiAgCTMgZGlj
dCBiZWdpbgogIAkvbnVtYXJyYXlzIHhkZgogIAkvbnVtY29sb3JzIHhkZgogIAkwIDEgbnVtY29s
b3JzIDEgc3ViCiAgCQl7CiAgCQkvYzEgeGRmCiAgCQkxCiAgCQkwIDEgbnVtYXJyYXlzIDEgc3Vi
CiAgCQkJewoJCQkxIGV4Y2ggYWRkIC9pbmRleCBjdngKICAJCQljMSAvZ2V0IGN2eCAvbXVsIGN2
eAogIAkJCX1mb3IKIAkJbnVtYXJyYXlzIDEgYWRkIDEgL3JvbGwgY3Z4IAogIAkJfWZvcgogCW51
bWFycmF5cyBbL3BvcCBjdnhdIGN2eCAvcmVwZWF0IGN2eAogIAllbmQKfWRlZgovc3VidHJhY3Rp
dmVfYmxlbmQKewoJMyBkaWN0IGJlZ2luCgkvbnVtYXJyYXlzIHhkZgoJL251bWNvbG9ycyB4ZGYK
CTAgMSBudW1jb2xvcnMgMSBzdWIKCQl7CgkJL2MxIHhkZgoJCTEgMQoJCTAgMSBudW1hcnJheXMg
MSBzdWIKCQkJewoJCQkxIDMgMyAtMSByb2xsIGFkZCAvaW5kZXggY3Z4ICAKCQkJYzEgL2dldCBj
dnggL3N1YiBjdnggL211bCBjdngKCQkJfWZvcgoJCS9zdWIgY3Z4CgkJbnVtYXJyYXlzIDEgYWRk
IDEgL3JvbGwgY3Z4CgkJfWZvcgoJbnVtYXJyYXlzIFsvcG9wIGN2eF0gY3Z4IC9yZXBlYXQgY3Z4
CgllbmQKfWRlZgovZXhlY190aW50X3RyYW5zZm9ybQp7CgkvVGludFByb2MgWwoJCS9UaW50VHJh
bnNmb3JtIGN2eCAvc2V0Y29sb3IgY3Z4CgldIGN2eCBiZGYKCU1hcHBlZENTQSBzZXRjb2xvcnNw
YWNlX29wdAp9IGJkZgovZGV2bl9tYWtlY3VzdG9tY29sb3IKewoJMiBkaWN0IGJlZ2luCgkvbmFt
ZXNfaW5kZXggeGRmCgkvTmFtZXMgeGRmCgkxIDEgMSAxIE5hbWVzIG5hbWVzX2luZGV4IGdldCBm
aW5kY215a2N1c3RvbWNvbG9yCgkvZGV2aWNlbl90aW50cyBBR01DT1JFX2dnZXQgbmFtZXNfaW5k
ZXggZ2V0IHNldGN1c3RvbWNvbG9yCglOYW1lcyBsZW5ndGgge3BvcH0gcmVwZWF0CgllbmQKfSBi
ZGYKL3NldGRldmljZW5jb2xvcnNwYWNlCnsgCglkdXAgL0FsaWFzZWRDb2xvcmFudHMga25vd24g
e2ZhbHNlfXt0cnVlfWlmZWxzZSAKCWN1cnJlbnRfc3BvdF9hbGlhcyBhbmQgewoJCTYgZGljdCBi
ZWdpbgoJCS9uYW1lc19pbmRleCAwIGRlZgoJCWR1cCAvbmFtZXNfbGVuIGV4Y2ggL05hbWVzIGdl
dCBsZW5ndGggZGVmCgkJL25ld19uYW1lcyBuYW1lc19sZW4gYXJyYXkgZGVmCgkJL25ld19Mb29r
dXBUYWJsZXMgbmFtZXNfbGVuIGFycmF5IGRlZgoJCS9hbGlhc19jbnQgMCBkZWYKCQlkdXAgL05h
bWVzIGdldAoJCXsKCQkJZHVwIG1hcF9hbGlhcyB7CgkJCQlleGNoIHBvcAoJCQkJZHVwIC9Db2xv
ckxvb2t1cCBrbm93biB7CgkJCQkJZHVwIGJlZ2luCgkJCQkJbmV3X0xvb2t1cFRhYmxlcyBuYW1l
c19pbmRleCBDb2xvckxvb2t1cCBwdXQKCQkJCQllbmQKCQkJCX17CgkJCQkJZHVwIC9Db21wb25l
bnRzIGtub3duIHsKCQkJCQkJZHVwIGJlZ2luCgkJCQkJCW5ld19Mb29rdXBUYWJsZXMgbmFtZXNf
aW5kZXggQ29tcG9uZW50cyBwdXQKCQkJCQkJZW5kCgkJCQkJfXsKCQkJCQkJZHVwIGJlZ2luCgkJ
CQkJCW5ld19Mb29rdXBUYWJsZXMgbmFtZXNfaW5kZXggW251bGwgbnVsbCBudWxsIG51bGxdIHB1
dAoJCQkJCQllbmQKCQkJCQl9IGlmZWxzZQoJCQkJfSBpZmVsc2UKCQkJCW5ld19uYW1lcyBuYW1l
c19pbmRleCAzIC0xIHJvbGwgL05hbWUgZ2V0IHB1dAoJCQkJL2FsaWFzX2NudCBhbGlhc19jbnQg
MSBhZGQgZGVmIAoJCQl9ewoJCQkJL25hbWUgeGRmCQkJCQoJCQkJbmV3X25hbWVzIG5hbWVzX2lu
ZGV4IG5hbWUgcHV0CgkJCQlkdXAgL0xvb2t1cFRhYmxlcyBrbm93biB7CgkJCQkJZHVwIGJlZ2lu
CgkJCQkJbmV3X0xvb2t1cFRhYmxlcyBuYW1lc19pbmRleCBMb29rdXBUYWJsZXMgbmFtZXNfaW5k
ZXggZ2V0IHB1dAoJCQkJCWVuZAoJCQkJfXsKCQkJCQlkdXAgYmVnaW4KCQkJCQluZXdfTG9va3Vw
VGFibGVzIG5hbWVzX2luZGV4IFtudWxsIG51bGwgbnVsbCBudWxsXSBwdXQKCQkJCQllbmQKCQkJ
CX0gaWZlbHNlCgkJCX0gaWZlbHNlCgkJCS9uYW1lc19pbmRleCBuYW1lc19pbmRleCAxIGFkZCBk
ZWYgCgkJfSBmb3JhbGwKCQlhbGlhc19jbnQgMCBndCB7CgkJCS9BbGlhc2VkQ29sb3JhbnRzIHRy
dWUgZGVmCgkJCTAgMSBuYW1lc19sZW4gMSBzdWIgewoJCQkJL25hbWVzX2luZGV4IHhkZgoJCQkJ
bmV3X0xvb2t1cFRhYmxlcyBuYW1lc19pbmRleCBnZXQgMCBnZXQgbnVsbCBlcSB7CgkJCQkJZHVw
IC9OYW1lcyBnZXQgbmFtZXNfaW5kZXggZ2V0IC9uYW1lIHhkZgoJCQkJCW5hbWUgKEN5YW4pIGVx
IG5hbWUgKE1hZ2VudGEpIGVxIG5hbWUgKFllbGxvdykgZXEgbmFtZSAoQmxhY2spIGVxCgkJCQkJ
b3Igb3Igb3Igbm90IHsKCQkJCQkJL0FsaWFzZWRDb2xvcmFudHMgZmFsc2UgZGVmCgkJCQkJCWV4
aXQKCQkJCQl9IGlmCgkJCQl9IGlmCgkJCX0gZm9yCgkJCUFsaWFzZWRDb2xvcmFudHMgewoJCQkJ
ZHVwIGJlZ2luCgkJCQkvTmFtZXMgbmV3X25hbWVzIGRlZgoJCQkJL0FsaWFzZWRDb2xvcmFudHMg
dHJ1ZSBkZWYKCQkJCS9Mb29rdXBUYWJsZXMgbmV3X0xvb2t1cFRhYmxlcyBkZWYKCQkJCWN1cnJl
bnRkaWN0IC9UVFRhYmxlc0lkeCBrbm93biBub3QgewoJCQkJCS9UVFRhYmxlc0lkeCAtMSBkZWYK
CQkJCX0gaWYKCQkJCWN1cnJlbnRkaWN0IC9OQ29tcG9uZW50cyBrbm93biBub3QgewoJCQkJCS9O
Q29tcG9uZW50cyBUaW50TWV0aG9kIC9TdWJ0cmFjdGl2ZSBlcSB7NH17M31pZmVsc2UgZGVmCgkJ
CQl9IGlmCgkJCQllbmQKCQkJfSBpZgoJCX1pZgoJCWVuZAoJfSBpZgoJZHVwIC9kZXZpY2VuX2Nv
bG9yc3BhY2VfZGljdCBleGNoIEFHTUNPUkVfZ3B1dAoJYmVnaW4KCS9NYXBwZWRDU0EgQ1NBIG1h
cF9jc2EgZGVmCgljdXJyZW50ZGljdCAvQWxpYXNlZENvbG9yYW50cyBrbm93biB7CgkJQWxpYXNl
ZENvbG9yYW50cwoJfXsKCQlmYWxzZQoJfSBpZmVsc2UKCS9UaW50VHJhbnNmb3JtIGxvYWQgdHlw
ZSAvbnVsbHR5cGUgZXEgb3IgewoJCS9UaW50VHJhbnNmb3JtIFsKCQkJMCAxIE5hbWVzIGxlbmd0
aCAxIHN1YgoJCQkJewoJCQkJL1RUVGFibGVzSWR4IFRUVGFibGVzSWR4IDEgYWRkIGRlZgoJCQkJ
ZHVwIExvb2t1cFRhYmxlcyBleGNoIGdldCBkdXAgMCBnZXQgbnVsbCBlcQoJCQkJCXsKCQkJCQkx
IGluZGV4CgkJCQkJTmFtZXMgZXhjaCBnZXQKCQkJCQlkdXAgKEN5YW4pIGVxCgkJCQkJCXsKCQkJ
CQkJcG9wIGV4Y2gKCQkJCQkJTG9va3VwVGFibGVzIGxlbmd0aCBleGNoIHN1YgoJCQkJCQkvaW5k
ZXggY3Z4CgkJCQkJCTAgMCAwCgkJCQkJCX0KCQkJCQkJewoJCQkJCQlkdXAgKE1hZ2VudGEpIGVx
CgkJCQkJCQl7CgkJCQkJCQlwb3AgZXhjaAoJCQkJCQkJTG9va3VwVGFibGVzIGxlbmd0aCBleGNo
IHN1YgoJCQkJCQkJL2luZGV4IGN2eAoJCQkJCQkJMCAvZXhjaCBjdnggMCAwCgkJCQkJCQl9CgkJ
CQkJCQl7CgkJCQkJCQkoWWVsbG93KSBlcQoJCQkJCQkJCXsKCQkJCQkJCQlleGNoCgkJCQkJCQkJ
TG9va3VwVGFibGVzIGxlbmd0aCBleGNoIHN1YgoJCQkJCQkJCS9pbmRleCBjdngKCQkJCQkJCQkw
IDAgMyAtMSAvcm9sbCBjdnggMAoJCQkJCQkJCX0KCQkJCQkJCQl7CgkJCQkJCQkJZXhjaAoJCQkJ
CQkJCUxvb2t1cFRhYmxlcyBsZW5ndGggZXhjaCBzdWIKCQkJCQkJCQkvaW5kZXggY3Z4CgkJCQkJ
CQkJMCAwIDAgNCAtMSAvcm9sbCBjdngKCQkJCQkJCQl9IGlmZWxzZQoJCQkJCQkJfSBpZmVsc2UK
CQkJCQkJfSBpZmVsc2UKCQkJCQk1IC0xIC9yb2xsIGN2eCAvYXN0b3JlIGN2eAoJCQkJCX0KCQkJ
CQl7CgkJCQkJZHVwIGxlbmd0aCAxIHN1YgoJCQkJCUxvb2t1cFRhYmxlcyBsZW5ndGggNCAtMSBy
b2xsIHN1YiAxIGFkZAoJCQkJCS9pbmRleCBjdnggL211bCBjdnggL3JvdW5kIGN2eCAvY3ZpIGN2
eCAvZ2V0IGN2eAoJCQkJCX0gaWZlbHNlCgkJCQkJTmFtZXMgbGVuZ3RoIFRUVGFibGVzSWR4IGFk
ZCAxIGFkZCAxIC9yb2xsIGN2eAoJCQkJfSBmb3IKCQkJTmFtZXMgbGVuZ3RoIFsvcG9wIGN2eF0g
Y3Z4IC9yZXBlYXQgY3Z4CgkJCU5Db21wb25lbnRzIE5hbWVzIGxlbmd0aAogIAkJCVRpbnRNZXRo
b2QgL1N1YnRyYWN0aXZlIGVxCiAgCQkJCXsKICAJCQkJc3VidHJhY3RpdmVfYmxlbmQKICAJCQkJ
fQogIAkJCQl7CiAgCQkJCWFkZGl0aXZlX2JsZW5kCiAgCQkJCX0gaWZlbHNlCgkJXSBjdnggYmRm
Cgl9IGlmCglBR01DT1JFX2hvc3Rfc2VwIHsKCQlOYW1lcyBjb252ZXJ0X3RvX3Byb2Nlc3MgewoJ
CQlleGVjX3RpbnRfdHJhbnNmb3JtCgkJfQoJCXsJCgkJCWN1cnJlbnRkaWN0IC9BbGlhc2VkQ29s
b3JhbnRzIGtub3duIHsKCQkJCUFsaWFzZWRDb2xvcmFudHMgbm90CgkJCX17CgkJCQlmYWxzZQoJ
CQl9IGlmZWxzZQoJCQk1IGRpY3QgYmVnaW4KCQkJL0F2b2lkQWxpYXNlZENvbG9yYW50cyB4ZGYK
CQkJL3BhaW50ZWQ/IGZhbHNlIGRlZgoJCQkvbmFtZXNfaW5kZXggMCBkZWYKCQkJL25hbWVzX2xl
biBOYW1lcyBsZW5ndGggZGVmCgkJCU5hbWVzIHsKCQkJCUF2b2lkQWxpYXNlZENvbG9yYW50cyB7
CgkJCQkJL2N1cnJlbnRzcG90YWxpYXMgY3VycmVudF9zcG90X2FsaWFzIGRlZgoJCQkJCWZhbHNl
IHNldF9zcG90X2FsaWFzCgkJCQl9IGlmCgkJCQlBR01DT1JFX2lzX2NteWtfc2VwIHsKCQkJCQlk
dXAgKEN5YW4pIGVxIEFHTUNPUkVfY3lhbl9wbGF0ZSBhbmQgZXhjaAoJCQkJCWR1cCAoTWFnZW50
YSkgZXEgQUdNQ09SRV9tYWdlbnRhX3BsYXRlIGFuZCBleGNoCgkJCQkJZHVwIChZZWxsb3cpIGVx
IEFHTUNPUkVfeWVsbG93X3BsYXRlIGFuZCBleGNoCgkJCQkJKEJsYWNrKSBlcSBBR01DT1JFX2Js
YWNrX3BsYXRlIGFuZCBvciBvciBvciB7CgkJCQkJCS9kZXZpY2VuX2NvbG9yc3BhY2VfZGljdCBB
R01DT1JFX2dnZXQgL1RpbnRQcm9jIFsKCQkJCQkJCU5hbWVzIG5hbWVzX2luZGV4IC9kZXZuX21h
a2VjdXN0b21jb2xvciBjdngKCQkJCQkJXSBjdnggZGRmCgkJCQkJCS9wYWludGVkPyB0cnVlIGRl
ZgoJCQkJCX0gaWYKCQkJCQlwYWludGVkPyB7ZXhpdH0gaWYKCQkJCX17CgkJCQkJMCAwIDAgMCA1
IC0xIHJvbGwgZmluZGNteWtjdXN0b21jb2xvciAxIHNldGN1c3RvbWNvbG9yIGN1cnJlbnRncmF5
IDAgZXEgewoJCQkJCS9kZXZpY2VuX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgL1RpbnRQ
cm9jIFsKCQkJCQkJTmFtZXMgbmFtZXNfaW5kZXggL2Rldm5fbWFrZWN1c3RvbWNvbG9yIGN2eAoJ
CQkJCV0gY3Z4IGRkZgoJCQkJCS9wYWludGVkPyB0cnVlIGRlZgoJCQkJCWV4aXQKCQkJCQl9IGlm
CgkJCQl9IGlmZWxzZQoJCQkJQXZvaWRBbGlhc2VkQ29sb3JhbnRzIHsKCQkJCQljdXJyZW50c3Bv
dGFsaWFzIHNldF9zcG90X2FsaWFzCgkJCQl9IGlmCgkJCQkvbmFtZXNfaW5kZXggbmFtZXNfaW5k
ZXggMSBhZGQgZGVmCgkJCX0gZm9yYWxsCgkJCXBhaW50ZWQ/IHsKCQkJCS9kZXZpY2VuX2NvbG9y
c3BhY2VfZGljdCBBR01DT1JFX2dnZXQgL25hbWVzX2luZGV4IG5hbWVzX2luZGV4IHB1dAoJCQl9
ewoJCQkJL2RldmljZW5fY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvVGludFByb2MgWwoJ
CQkJCW5hbWVzX2xlbiBbL3BvcCBjdnhdIGN2eCAvcmVwZWF0IGN2eCAxIC9zZXRzZXBhcmF0aW9u
Z3JheSBjdngKCQkJCQkwIDAgMCAwICgpIC9maW5kY215a2N1c3RvbWNvbG9yIGN2eCAwIC9zZXRj
dXN0b21jb2xvciBjdngKCQkJCV0gY3Z4IGRkZgoJCQl9IGlmZWxzZQoJCQllbmQKCQl9IGlmZWxz
ZQoJfQoJewoJCUFHTUNPUkVfaW5fcmlwX3NlcCB7CgkJCU5hbWVzIGNvbnZlcnRfdG9fcHJvY2Vz
cyBub3QKCQl9ewoJCQlsZXZlbDMKCQl9IGlmZWxzZQoJCXsKCQkJWy9EZXZpY2VOIE5hbWVzIE1h
cHBlZENTQSAvVGludFRyYW5zZm9ybSBsb2FkXSBzZXRjb2xvcnNwYWNlX29wdAoJCQkvVGludFBy
b2MgbGV2ZWwzIG5vdCBBR01DT1JFX2luX3JpcF9zZXAgYW5kIHsKCQkJCVsKCQkJCQlOYW1lcyAv
bGVuZ3RoIGN2eCBbL3BvcCBjdnhdIGN2eCAvcmVwZWF0IGN2eAoJCQkJXSBjdnggYmRmCgkJCX17
CgkJCQkvc2V0Y29sb3IgbGRmCgkJCX0gaWZlbHNlCgkJfXsKCQkJZXhlY190aW50X3RyYW5zZm9y
bQoJCX0gaWZlbHNlCgl9IGlmZWxzZQoJc2V0X2NyZAoJL0FsaWFzZWRDb2xvcmFudHMgZmFsc2Ug
ZGVmCgllbmQKfSBkZWYKL3NldGluZGV4ZWRjb2xvcnNwYWNlCnsKCWR1cCAvaW5kZXhlZF9jb2xv
cnNwYWNlX2RpY3QgZXhjaCBBR01DT1JFX2dwdXQKCWJlZ2luCgkJY3VycmVudGRpY3QgL0NTRCBr
bm93biB7CgkJCUNTRCBnZXRfY3NkIC9OYW1lcyBrbm93biB7CgkJCQlDU0QgZ2V0X2NzZCBiZWdp
bgoJCQkJY3VycmVudGRpY3QgZGV2bmNzCgkJCQlBR01DT1JFX2hvc3Rfc2VwewoJCQkJCTQgZGlj
dCBiZWdpbgoJCQkJCS9kZXZuQ29tcENudCBOYW1lcyBsZW5ndGggZGVmCgkJCQkJL05ld0xvb2t1
cCBIaVZhbCAxIGFkZCBzdHJpbmcgZGVmCgkJCQkJMCAxIEhpVmFsIHsKCQkJCQkJL3RhYmxlSW5k
ZXggeGRmCgkJCQkJCUxvb2t1cCBkdXAgdHlwZSAvc3RyaW5ndHlwZSBlcSB7CgkJCQkJCQlkZXZu
Q29tcENudCB0YWJsZUluZGV4IG1hcF9pbmRleAoJCQkJCQl9ewoJCQkJCQkJZXhlYwoJCQkJCQl9
IGlmZWxzZQoJCQkJCQlzZXRkZXZpY2VuY29sb3IKCQkJCQkJY3VycmVudGdyYXkKCQkJCQkJdGFi
bGVJbmRleCBleGNoCgkJCQkJCUhpVmFsIG11bCBjdmkgCgkJCQkJCU5ld0xvb2t1cCAzIDEgcm9s
bCBwdXQKCQkJCQl9IGZvcgoJCQkJCVsvSW5kZXhlZCBjdXJyZW50Y29sb3JzcGFjZSBIaVZhbCBO
ZXdMb29rdXBdIHNldGNvbG9yc3BhY2Vfb3B0CgkJCQkJZW5kCgkJCQl9ewoJCQkJCWxldmVsMwoJ
CQkJCXsKCQkJCQlbL0luZGV4ZWQgWy9EZXZpY2VOIE5hbWVzIE1hcHBlZENTQSAvVGludFRyYW5z
Zm9ybSBsb2FkXSBIaVZhbCBMb29rdXBdIHNldGNvbG9yc3BhY2Vfb3B0CgkJCQkJfXsKCQkJCQlb
L0luZGV4ZWQgTWFwcGVkQ1NBIEhpVmFsCgkJCQkJCVsKCQkJCQkJTG9va3VwIGR1cCB0eXBlIC9z
dHJpbmd0eXBlIGVxCgkJCQkJCQl7L2V4Y2ggY3Z4IENTRCBnZXRfY3NkIC9OYW1lcyBnZXQgbGVu
Z3RoIGR1cCAvbXVsIGN2eCBleGNoIC9nZXRpbnRlcnZhbCBjdnggezI1NSBkaXZ9IC9mb3JhbGwg
Y3Z4fQoJCQkJCQkJey9leGVjIGN2eH1pZmVsc2UKCQkJCQkJCS9UaW50VHJhbnNmb3JtIGxvYWQg
L2V4ZWMgY3Z4CgkJCQkJCV1jdngKCQkJCQldc2V0Y29sb3JzcGFjZV9vcHQKCQkJCQl9aWZlbHNl
CgkJCQl9IGlmZWxzZQoJCQkJZW5kCgkJCX17CgkJCX0gaWZlbHNlCgkJCXNldF9jcmQKCQl9CgkJ
ewoJCQkvTWFwcGVkQ1NBIENTQSBtYXBfY3NhIGRlZgoJCQlBR01DT1JFX2hvc3Rfc2VwIGxldmVs
MiBub3QgYW5kewoJCQkJMCAwIDAgMCBzZXRjbXlrY29sb3IKCQkJfXsKCQkJCVsvSW5kZXhlZCBN
YXBwZWRDU0EgCgkJCQlsZXZlbDIgbm90IGhhc19jb2xvciBub3QgYW5kewoJCQkJCWR1cCAwIGdl
dCBkdXAgL0RldmljZVJHQiBlcSBleGNoIC9EZXZpY2VDTVlLIGVxIG9yewoJCQkJCQlwb3AgWy9E
ZXZpY2VHcmF5XQoJCQkJCX1pZgoJCQkJCUhpVmFsIEdyYXlMb29rdXAKCQkJCX17CgkJCQkJSGlW
YWwgCgkJCQkJY3VycmVudGRpY3QvUmFuZ2VBcnJheSBrbm93bnsKCQkJCQkJeyAKCQkJCQkJCS9p
bmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQkJCQkJCUxvb2t1cCBl
eGNoIAoJCQkJCQkJZHVwIEhpVmFsIGd0ewoJCQkJCQkJCXBvcCBIaVZhbAoJCQkJCQkJfWlmCgkJ
CQkJCQlOQ29tcG9uZW50cyBtdWwgTkNvbXBvbmVudHMgZ2V0aW50ZXJ2YWwge30gZm9yYWxsCgkJ
CQkJCQlOQ29tcG9uZW50cyAxIHN1YiAtMSAwewoJCQkJCQkJCVJhbmdlQXJyYXkgZXhjaCAyIG11
bCAyIGdldGludGVydmFsIGFsb2FkIHBvcCBtYXAyNTVfdG9fcmFuZ2UKCQkJCQkJCQlOQ29tcG9u
ZW50cyAxIHJvbGwKCQkJCQkJCX1mb3IKCQkJCQkJCWVuZAoJCQkJCQl9IGJpbmQKCQkJCQl9ewoJ
CQkJCQlMb29rdXAKCQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCgkJCQldIHNldGNvbG9yc3BhY2Vf
b3B0CgkJCQlzZXRfY3JkCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgllbmQKfWRlZgovc2V0aW5kZXhl
ZGNvbG9yCnsKCUFHTUNPUkVfaG9zdF9zZXAgewoJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBB
R01DT1JFX2dnZXQgZHVwIC9DU0Qga25vd24gewoJCQliZWdpbgoJCQlDU0QgZ2V0X2NzZCBiZWdp
bgoJCQltYXBfaW5kZXhlZF9kZXZuCgkJCWRldm4KCQkJZW5kCgkJCWVuZAoJCX17CgkJCUFHTUNP
UkVfZ2dldC9Mb29rdXAgZ2V0IDQgMyAtMSByb2xsIG1hcF9pbmRleAoJCQlwb3Agc2V0Y215a2Nv
bG9yCgkJfSBpZmVsc2UKCX17CgkJbGV2ZWwzIG5vdCBBR01DT1JFX2luX3JpcF9zZXAgYW5kIC9p
bmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgL0NTRCBrbm93biBhbmQgewoJCQkv
aW5kZXhlZF9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0IC9DU0QgZ2V0IGdldF9jc2QgYmVn
aW4KCQkJbWFwX2luZGV4ZWRfZGV2bgoJCQlkZXZuCgkJCWVuZAoJCX0KCQl7CgkJCXNldGNvbG9y
CgkJfSBpZmVsc2UKCX1pZmVsc2UKfSBkZWYKL2lnbm9yZWltYWdlZGF0YQp7CgljdXJyZW50b3Zl
cnByaW50IG5vdHsKCQlnc2F2ZQoJCWR1cCBjbG9uZWRpY3QgYmVnaW4KCQkxIHNldGdyYXkKCQkv
RGVjb2RlIFswIDFdIGRlZgoJCS9EYXRhU291cmNlIDxGRj4gZGVmCgkJL011bHRpcGxlRGF0YVNv
dXJjZXMgZmFsc2UgZGVmCgkJL0JpdHNQZXJDb21wb25lbnQgOCBkZWYKCQljdXJyZW50ZGljdCBl
bmQKCQlzeXN0ZW1kaWN0IC9pbWFnZSBnZXQgZXhlYwoJCWdyZXN0b3JlCgkJfWlmCgljb25zdW1l
aW1hZ2VkYXRhCn1kZWYKL2FkZF9jc2EKewoJQWRvYmVfQUdNX0NvcmUgYmVnaW4KCQkJL0FHTUNP
UkVfQ1NBX2NhY2hlIHhwdXQKCWVuZAp9ZGVmCi9nZXRfY3NhX2J5X25hbWUKewoJZHVwIHR5cGUg
ZHVwIC9uYW1ldHlwZSBlcSBleGNoIC9zdHJpbmd0eXBlIGVxIG9yewoJCUFkb2JlX0FHTV9Db3Jl
IGJlZ2luCgkJMSBkaWN0IGJlZ2luCgkJL25hbWUgeGRmCgkJQUdNQ09SRV9DU0FfY2FjaGUKCQl7
CgkJCTAgZ2V0IG5hbWUgZXEgewoJCQkJZXhpdAoJCQl9ewoJCQkJcG9wCgkJCX0gaWZlbHNlCgkJ
fWZvcmFsbAoJCWVuZAoJCWVuZAoJfXsKCQlwb3AKCX0gaWZlbHNlCn1kZWYKL21hcF9jc2EKewoJ
ZHVwIHR5cGUgL25hbWV0eXBlIGVxewoJCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfQ1NBX2NhY2hl
IGdldCBleGNoIGdldAoJfWlmCn1kZWYKL2FkZF9jc2QKewoJQWRvYmVfQUdNX0NvcmUgYmVnaW4K
CQkvQUdNQ09SRV9DU0RfY2FjaGUgeHB1dAoJZW5kCn1kZWYKL2dldF9jc2QKewoJZHVwIHR5cGUg
L25hbWV0eXBlIGVxewoJCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfQ1NEX2NhY2hlIGdldCBleGNo
IGdldAoJfWlmCn1kZWYKL3BhdHRlcm5fYnVmX2luaXQKewoJL2NvdW50IGdldCAwIDAgcHV0Cn0g
ZGVmCi9wYXR0ZXJuX2J1Zl9uZXh0CnsKCWR1cCAvY291bnQgZ2V0IGR1cCAwIGdldAoJZHVwIDMg
MSByb2xsCgkxIGFkZCAwIHhwdAoJZ2V0CQkJCQp9IGRlZgovY2FjaGVwYXR0ZXJuX2NvbXByZXNz
CnsKCTUgZGljdCBiZWdpbgoJY3VycmVudGZpbGUgZXhjaCAwIGV4Y2ggL1N1YkZpbGVEZWNvZGUg
ZmlsdGVyIC9SZWFkRmlsdGVyIGV4Y2ggZGVmCgkvcGF0YXJyYXkgMjAgZGljdCBkZWYKCS9zdHJp
bmdfc2l6ZSAxNjAwMCBkZWYKCS9yZWFkYnVmZmVyIHN0cmluZ19zaXplIHN0cmluZyBkZWYKCWN1
cnJlbnRnbG9iYWwgdHJ1ZSBzZXRnbG9iYWwgCglwYXRhcnJheSAxIGFycmF5IGR1cCAwIDEgcHV0
IC9jb3VudCB4cHQKCXNldGdsb2JhbAoJL0xaV0ZpbHRlciAKCXsKCQlleGNoCgkJZHVwIGxlbmd0
aCAwIGVxIHsKCQkJcG9wCgkJfXsKCQkJcGF0YXJyYXkgZHVwIGxlbmd0aCAxIHN1YiAzIC0xIHJv
bGwgcHV0CgkJfSBpZmVsc2UKCQl7c3RyaW5nX3NpemV9ezB9aWZlbHNlIHN0cmluZwoJfSAvTFpX
RW5jb2RlIGZpbHRlciBkZWYKCXsgCQkKCQlSZWFkRmlsdGVyIHJlYWRidWZmZXIgcmVhZHN0cmlu
ZwoJCWV4Y2ggTFpXRmlsdGVyIGV4Y2ggd3JpdGVzdHJpbmcKCQlub3Qge2V4aXR9IGlmCgl9IGxv
b3AKCUxaV0ZpbHRlciBjbG9zZWZpbGUKCXBhdGFycmF5CQkJCQoJZW5kCn1kZWYKL2NhY2hlcGF0
dGVybgp7CgkyIGRpY3QgYmVnaW4KCWN1cnJlbnRmaWxlIGV4Y2ggMCBleGNoIC9TdWJGaWxlRGVj
b2RlIGZpbHRlciAvUmVhZEZpbHRlciBleGNoIGRlZgoJL3BhdGFycmF5IDIwIGRpY3QgZGVmCglj
dXJyZW50Z2xvYmFsIHRydWUgc2V0Z2xvYmFsIAoJcGF0YXJyYXkgMSBhcnJheSBkdXAgMCAxIHB1
dCAvY291bnQgeHB0CglzZXRnbG9iYWwKCXsKCQlSZWFkRmlsdGVyIDE2MDAwIHN0cmluZyByZWFk
c3RyaW5nIGV4Y2gKCQlwYXRhcnJheSBkdXAgbGVuZ3RoIDEgc3ViIDMgLTEgcm9sbCBwdXQKCQlu
b3Qge2V4aXR9IGlmCgl9IGxvb3AKCXBhdGFycmF5IGR1cCBkdXAgbGVuZ3RoIDEgc3ViICgpIHB1
dAkJCQkJCgllbmQJCn1kZWYKL2FkZF9wYXR0ZXJuCnsKCUFkb2JlX0FHTV9Db3JlIGJlZ2luCgkJ
L0FHTUNPUkVfcGF0dGVybl9jYWNoZSB4cHV0CgllbmQKfWRlZgovZ2V0X3BhdHRlcm4KewoJZHVw
IHR5cGUgL25hbWV0eXBlIGVxewoJCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfcGF0dGVybl9jYWNo
ZSBnZXQgZXhjaCBnZXQKCQlkdXAgd3JhcF9wYWludHByb2MKCX1pZgp9ZGVmCi93cmFwX3BhaW50
cHJvYwp7IAogIHN0YXR1c2RpY3QgL2N1cnJlbnRmaWxlbmFtZWV4dGVuZCBrbm93bnsKCSAgYmVn
aW4KCQkvT2xkUGFpbnRQcm9jIC9QYWludFByb2MgbG9hZCBkZWYKCQkvUGFpbnRQcm9jCgkJewoJ
CSAgbWFyayBleGNoCgkJICBkdXAgL09sZFBhaW50UHJvYyBnZXQgc3RvcHBlZAoJCSAge2Nsb3Nl
ZmlsZSByZXN0b3JlIGVuZH0gaWYKCQkgIGNsZWFydG9tYXJrCgkJfSAgZGVmCgkgIGVuZAogIH0g
e3BvcH0gaWZlbHNlCn0gZGVmCi9tYWtlX3BhdHRlcm4KewoJZHVwIG1hdHJpeCBjdXJyZW50bWF0
cml4IG1hdHJpeCBjb25jYXRtYXRyaXggMCAwIDMgMiByb2xsIGl0cmFuc2Zvcm0KCWV4Y2ggMyBp
bmRleCAvWFN0ZXAgZ2V0IDEgaW5kZXggZXhjaCAyIGNvcHkgZGl2IGN2aSBtdWwgc3ViIHN1YgoJ
ZXhjaCAzIGluZGV4IC9ZU3RlcCBnZXQgMSBpbmRleCBleGNoIDIgY29weSBkaXYgY3ZpIG11bCBz
dWIgc3ViCgltYXRyaXggdHJhbnNsYXRlIGV4Y2ggbWF0cml4IGNvbmNhdG1hdHJpeAoJCQkgIDEg
aW5kZXggYmVnaW4KCQlCQm94IDAgZ2V0IFhTdGVwIGRpdiBjdmkgWFN0ZXAgbXVsIC94c2hpZnQg
ZXhjaCBuZWcgZGVmCgkJQkJveCAxIGdldCBZU3RlcCBkaXYgY3ZpIFlTdGVwIG11bCAveXNoaWZ0
IGV4Y2ggbmVnIGRlZgoJCUJCb3ggMCBnZXQgeHNoaWZ0IGFkZAoJCUJCb3ggMSBnZXQgeXNoaWZ0
IGFkZAoJCUJCb3ggMiBnZXQgeHNoaWZ0IGFkZAoJCUJCb3ggMyBnZXQgeXNoaWZ0IGFkZAoJCTQg
YXJyYXkgYXN0b3JlCgkJL0JCb3ggZXhjaCBkZWYKCQlbIHhzaGlmdCB5c2hpZnQgL3RyYW5zbGF0
ZSBsb2FkIG51bGwgL2V4ZWMgbG9hZCBdIGR1cAoJCTMgL1BhaW50UHJvYyBsb2FkIHB1dCBjdngg
L1BhaW50UHJvYyBleGNoIGRlZgoJCWVuZAoJZ3NhdmUgMCBzZXRncmF5CgltYWtlcGF0dGVybgoJ
Z3Jlc3RvcmUKfWRlZgovc2V0X3BhdHRlcm4KewoJZHVwIC9QYXR0ZXJuVHlwZSBnZXQgMSBlcXsK
CQlkdXAgL1BhaW50VHlwZSBnZXQgMSBlcXsKCQkJY3VycmVudG92ZXJwcmludCBzb3AgWy9EZXZp
Y2VHcmF5XSBzZXRjb2xvcnNwYWNlIDAgc2V0Z3JheQoJCX1pZgoJfWlmCglzZXRwYXR0ZXJuCn1k
ZWYKL3NldGNvbG9yc3BhY2Vfb3B0CnsKCWR1cCBjdXJyZW50Y29sb3JzcGFjZSBlcXsKCQlwb3AK
CX17CgkJc2V0Y29sb3JzcGFjZQoJfWlmZWxzZQp9ZGVmCi91cGRhdGVjb2xvcnJlbmRlcmluZwp7
CgljdXJyZW50Y29sb3JyZW5kZXJpbmcvSW50ZW50IGtub3duewoJCWN1cnJlbnRjb2xvcnJlbmRl
cmluZy9JbnRlbnQgZ2V0Cgl9ewoJCW51bGwKCX1pZmVsc2UKCUludGVudCBuZXsKCQlmYWxzZSAg
CgkJSW50ZW50CgkJQUdNQ09SRV9DUkRfY2FjaGUgewoJCQlleGNoIHBvcCAKCQkJYmVnaW4KCQkJ
CWR1cCBJbnRlbnQgZXF7CgkJCQkJY3VycmVudGRpY3Qgc2V0Y29sb3JyZW5kZXJpbmdfb3B0CgkJ
CQkJZW5kIAoJCQkJCWV4Y2ggcG9wIHRydWUgZXhjaAkKCQkJCQlleGl0CgkJCQl9aWYKCQkJZW5k
CgkJfSBmb3JhbGwKCQlwb3AKCQlub3R7CgkJCXN5c3RlbWRpY3QgL2ZpbmRjb2xvcnJlbmRlcmlu
ZyBrbm93bnsKCQkJCUludGVudCBmaW5kY29sb3JyZW5kZXJpbmcgcG9wCgkJCQkvQ29sb3JSZW5k
ZXJpbmcgZmluZHJlc291cmNlIAoJCQkJZHVwIGxlbmd0aCBkaWN0IGNvcHkKCQkJCXNldGNvbG9y
cmVuZGVyaW5nX29wdAoJCQl9aWYKCQl9aWYKCX1pZgp9IGRlZgovYWRkX2NyZAp7CglBR01DT1JF
X0NSRF9jYWNoZSAzIDEgcm9sbCBwdXQKfWRlZgovc2V0X2NyZAp7CglBR01DT1JFX2hvc3Rfc2Vw
IG5vdCBsZXZlbDIgYW5kewoJCWN1cnJlbnRkaWN0L0NSRCBrbm93bnsKCQkJQUdNQ09SRV9DUkRf
Y2FjaGUgQ1JEIGdldCBkdXAgbnVsbCBuZXsKCQkJCXNldGNvbG9ycmVuZGVyaW5nX29wdAoJCQl9
ewoJCQkJcG9wCgkJCX1pZmVsc2UKCQl9ewoJCQljdXJyZW50ZGljdC9JbnRlbnQga25vd257CgkJ
CQl1cGRhdGVjb2xvcnJlbmRlcmluZwoJCQl9aWYKCQl9aWZlbHNlCgkJY3VycmVudGNvbG9yc3Bh
Y2UgZHVwIHR5cGUgL2FycmF5dHlwZSBlcQoJCQl7MCBnZXR9aWYKCQkvRGV2aWNlUkdCIGVxCgkJ
CXsKCQkJY3VycmVudGRpY3QvVUNSIGtub3duCgkJCQl7L1VDUn17L0FHTUNPUkVfY3VycmVudHVj
cn1pZmVsc2UKCQkJbG9hZCBzZXR1bmRlcmNvbG9ycmVtb3ZhbAoJCQljdXJyZW50ZGljdC9CRyBr
bm93biAKCQkJCXsvQkd9ey9BR01DT1JFX2N1cnJlbnRiZ31pZmVsc2UKCQkJbG9hZCBzZXRibGFj
a2dlbmVyYXRpb24KCQkJfWlmCgl9aWYKfWRlZgovc2V0Y29sb3JyZW5kZXJpbmdfb3B0CnsKCWR1
cCBjdXJyZW50Y29sb3JyZW5kZXJpbmcgZXF7CgkJcG9wCgl9ewoJCWJlZ2luCgkJCS9JbnRlbnQg
SW50ZW50IGRlZgoJCQljdXJyZW50ZGljdAoJCWVuZAoJCXNldGNvbG9ycmVuZGVyaW5nCgl9aWZl
bHNlCn1kZWYKL2NwYWludF9nY29tcAp7Cgljb252ZXJ0X3RvX3Byb2Nlc3MgQWRvYmVfQUdNX0Nv
cmUvQUdNQ09SRV9Db252ZXJ0VG9Qcm9jZXNzIHhkZGYKCUFkb2JlX0FHTV9Db3JlL0FHTUNPUkVf
Q29udmVydFRvUHJvY2VzcyBnZXQgbm90Cgl7CgkJKCVlbmRfY3BhaW50X2djb21wKSBmbHVzaGlu
cHV0Cgl9aWYKfWRlZgovY3BhaW50X2dzZXAKewoJQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV9Db252
ZXJ0VG9Qcm9jZXNzIGdldAoJewkKCQkoJWVuZF9jcGFpbnRfZ3NlcCkgZmx1c2hpbnB1dAoJfWlm
Cn1kZWYKL2NwYWludF9nZW5kCnsKCW5ld3BhdGgKfWRlZgovcGF0aF9yZXoKewoJZHVwIDAgbmV7
CgkJQUdNQ09SRV9kZXZpY2VEUEkgZXhjaCBkaXYgCgkJZHVwIDEgbHR7CgkJCXBvcCAxCgkJfWlm
CgkJc2V0ZmxhdAoJfXsKCQlwb3AKCX1pZmVsc2UgCQp9ZGVmCi9zZXRfc3BvdF9hbGlhc19hcnkK
ewoJL0FHTUNPUkVfU3BvdEFsaWFzQXJ5IHdoZXJlewoJCXBvcCBwb3AKCX17CgkJQWRvYmVfQUdN
X0NvcmUvQUdNQ09SRV9TcG90QWxpYXNBcnkgeGRkZgoJCXRydWUgc2V0X3Nwb3RfYWxpYXMKCX1p
ZmVsc2UKfWRlZgovc2V0X3Nwb3RfYWxpYXMKewoJL0FHTUNPUkVfU3BvdEFsaWFzQXJ5IHdoZXJl
ewoJCS9BR01DT1JFX2N1cnJlbnRfc3BvdF9hbGlhcyAzIC0xIHJvbGwgcHV0Cgl9ewoJCXBvcAoJ
fWlmZWxzZQp9ZGVmCi9jdXJyZW50X3Nwb3RfYWxpYXMKewoJL0FHTUNPUkVfU3BvdEFsaWFzQXJ5
IHdoZXJlewoJCS9BR01DT1JFX2N1cnJlbnRfc3BvdF9hbGlhcyBnZXQKCX17CgkJZmFsc2UKCX1p
ZmVsc2UKfWRlZgovbWFwX2FsaWFzCnsKCS9BR01DT1JFX1Nwb3RBbGlhc0FyeSB3aGVyZXsKCQli
ZWdpbgoJCQkvQUdNQ09SRV9uYW1lIHhkZgoJCQlmYWxzZQkKCQkJQUdNQ09SRV9TcG90QWxpYXNB
cnl7CgkJCQlkdXAvTmFtZSBnZXQgQUdNQ09SRV9uYW1lIGVxewoJCQkJCXNhdmUgZXhjaAoJCQkJ
CS9BZG9iZV9BR01fQ29yZSBjdXJyZW50ZGljdCBkZWYKCQkJCQkvQ1NEIGdldCBnZXRfY3NkCgkJ
CQkJZXhjaCByZXN0b3JlCgkJCQkJZXhjaCBwb3AgdHJ1ZQoJCQkJCWV4aXQKCQkJCX17CgkJCQkJ
cG9wCgkJCQl9aWZlbHNlCgkJCX1mb3JhbGwKCQllbmQKCX17CgkJcG9wIGZhbHNlCgl9aWZlbHNl
Cn1iZGYKL3Nwb3RfYWxpYXMKewoJdHJ1ZSBzZXRfc3BvdF9hbGlhcwoJL0FHTUNPUkVfJnNldGN1
c3RvbWNvbG9yIEFHTUNPUkVfa2V5X2tub3duIG5vdCB7CgkJQWRvYmVfQUdNX0NvcmUvQUdNQ09S
RV8mc2V0Y3VzdG9tY29sb3IgL3NldGN1c3RvbWNvbG9yIGxvYWQgcHV0Cgl9IGlmCgkvY3VzdG9t
Y29sb3JfdGludCAxIEFHTUNPUkVfZ3B1dAoJQWRvYmVfQUdNX0NvcmUgYmVnaW4KCS9zZXRjdXN0
b21jb2xvcgoJewoJCWR1cCAvY3VzdG9tY29sb3JfdGludCBleGNoIEFHTUNPUkVfZ3B1dAoJCWN1
cnJlbnRfc3BvdF9hbGlhc3sKCQkJMSBpbmRleCA0IGdldCBtYXBfYWxpYXN7CgkJCQltYXJrIDMg
MSByb2xsCgkJCQlzZXRzZXBjb2xvcnNwYWNlCgkJCQljb3VudHRvbWFyayAwIG5lewoJCQkJCXNl
dHNlcGNvbG9yCgkJCQl9aWYKCQkJCXBvcAoJCQkJcG9wCgkJCX17CgkJCQlBR01DT1JFXyZzZXRj
dXN0b21jb2xvcgoJCQl9aWZlbHNlCgkJfXsKCQkJQUdNQ09SRV8mc2V0Y3VzdG9tY29sb3IKCQl9
aWZlbHNlCgl9YmRmCgllbmQKfWRlZgovYmVnaW5fZmVhdHVyZQp7CglBZG9iZV9BR01fQ29yZS9B
R01DT1JFX2ZlYXR1cmVfZGljdENvdW50IGNvdW50ZGljdHN0YWNrIHB1dAoJY291bnQgQWRvYmVf
QUdNX0NvcmUvQUdNQ09SRV9mZWF0dXJlX29wQ291bnQgMyAtMSByb2xsIHB1dAoJe0Fkb2JlX0FH
TV9Db3JlL0FHTUNPUkVfZmVhdHVyZV9jdG0gbWF0cml4IGN1cnJlbnRtYXRyaXggcHV0fWlmCn1k
ZWYKL2VuZF9mZWF0dXJlCnsKCTIgZGljdCBiZWdpbgoJL3NwZCAvc2V0cGFnZWRldmljZSBsb2Fk
IGRlZgoJL3NldHBhZ2VkZXZpY2UgeyBnZXRfZ3N0YXRlIHNwZCBzZXRfZ3N0YXRlIH0gZGVmCglz
dG9wcGVkeyRlcnJvci9uZXdlcnJvciBmYWxzZSBwdXR9aWYKCWVuZAoJY291bnQgQWRvYmVfQUdN
X0NvcmUvQUdNQ09SRV9mZWF0dXJlX29wQ291bnQgZ2V0IHN1YiBkdXAgMCBndHt7cG9wfXJlcGVh
dH17cG9wfWlmZWxzZQoJY291bnRkaWN0c3RhY2sgQWRvYmVfQUdNX0NvcmUvQUdNQ09SRV9mZWF0
dXJlX2RpY3RDb3VudCBnZXQgc3ViIGR1cCAwIGd0e3tlbmR9cmVwZWF0fXtwb3B9aWZlbHNlCgl7
QWRvYmVfQUdNX0NvcmUvQUdNQ09SRV9mZWF0dXJlX2N0bSBnZXQgc2V0bWF0cml4fWlmCn1kZWYK
L3NldF9uZWdhdGl2ZQp7CglBZG9iZV9BR01fQ29yZSBiZWdpbgoJL0FHTUNPUkVfaW52ZXJ0aW5n
IGV4Y2ggZGVmCglsZXZlbDJ7CgkJY3VycmVudHBhZ2VkZXZpY2UvTmVnYXRpdmVQcmludCBrbm93
bnsKCQkJY3VycmVudHBhZ2VkZXZpY2UvTmVnYXRpdmVQcmludCBnZXQgQWRvYmVfQUdNX0NvcmUv
QUdNQ09SRV9pbnZlcnRpbmcgZ2V0IG5lewoJCQkJdHJ1ZSBiZWdpbl9mZWF0dXJlIHRydWV7CgkJ
CQkJCWJkaWN0IC9OZWdhdGl2ZVByaW50IEFkb2JlX0FHTV9Db3JlL0FHTUNPUkVfaW52ZXJ0aW5n
IGdldCBlZGljdCBzZXRwYWdlZGV2aWNlCgkJCQl9ZW5kX2ZlYXR1cmUKCQkJfWlmCgkJCS9BR01D
T1JFX2ludmVydGluZyBmYWxzZSBkZWYKCQl9aWYKCX1pZgoJQUdNQ09SRV9pbnZlcnRpbmd7CgkJ
W3sxIGV4Y2ggc3VifS9leGVjIGxvYWQgZHVwIGN1cnJlbnR0cmFuc2ZlciBleGNoXWN2eCBiaW5k
IHNldHRyYW5zZmVyCgkJZ3NhdmUgbmV3cGF0aCBjbGlwcGF0aCAxIC9zZXRzZXBhcmF0aW9uZ3Jh
eSB3aGVyZXtwb3Agc2V0c2VwYXJhdGlvbmdyYXl9e3NldGdyYXl9aWZlbHNlIAoJCS9BR01JUlNf
JmZpbGwgd2hlcmUge3BvcCBBR01JUlNfJmZpbGx9e2ZpbGx9IGlmZWxzZSBncmVzdG9yZQoJfWlm
CgllbmQKfWRlZgovbHdfc2F2ZV9yZXN0b3JlX292ZXJyaWRlIHsKCS9tZCB3aGVyZSB7CgkJcG9w
CgkJbWQgYmVnaW4KCQlpbml0aWFsaXplcGFnZQoJCS9pbml0aWFsaXplcGFnZXt9ZGVmCgkJL3Bt
U1ZzZXR1cHt9IGRlZgoJCS9lbmRwe31kZWYKCQkvcHNle31kZWYKCQkvcHNie31kZWYKCQkvb3Jp
Z19zaG93cGFnZSB3aGVyZQoJCQl7cG9wfQoJCQl7L29yaWdfc2hvd3BhZ2UgL3Nob3dwYWdlIGxv
YWQgZGVmfQoJCWlmZWxzZQoJCS9zaG93cGFnZSB7b3JpZ19zaG93cGFnZSBnUn0gZGVmCgkJZW5k
Cgl9aWYKfWRlZgovcHNjcmlwdF9zaG93cGFnZV9vdmVycmlkZSB7CgkvTlRQU09jdDk1IHdoZXJl
Cgl7CgkJYmVnaW4KCQlzaG93cGFnZQoJCXNhdmUKCQkvc2hvd3BhZ2UgL3Jlc3RvcmUgbG9hZCBk
ZWYKCQkvcmVzdG9yZSB7ZXhjaCBwb3B9ZGVmCgkJZW5kCgl9aWYKfWRlZgovZHJpdmVyX21lZGlh
X292ZXJyaWRlCnsKCS9tZCB3aGVyZSB7CgkJcG9wCgkJbWQgL2luaXRpYWxpemVwYWdlIGtub3du
IHsKCQkJbWQgL2luaXRpYWxpemVwYWdlIHt9IHB1dAoJCX0gaWYKCQltZCAvckMga25vd24gewoJ
CQltZCAvckMgezR7cG9wfXJlcGVhdH0gcHV0CgkJfSBpZgoJfWlmCgkvbXlzZXR1cCB3aGVyZSB7
CgkJL215c2V0dXAgWzEgMCAwIDEgMCAwXSBwdXQKCX1pZgoJQWRvYmVfQUdNX0NvcmUgL0FHTUNP
UkVfRGVmYXVsdF9DVE0gbWF0cml4IGN1cnJlbnRtYXRyaXggcHV0CglsZXZlbDIKCQl7QWRvYmVf
QUdNX0NvcmUgL0FHTUNPUkVfRGVmYXVsdF9QYWdlU2l6ZSBjdXJyZW50cGFnZWRldmljZS9QYWdl
U2l6ZSBnZXQgcHV0fWlmCn1kZWYKL2RyaXZlcl9jaGVja19tZWRpYV9vdmVycmlkZQp7CgkvUHJl
cHNEaWN0IHdoZXJlCgkJe3BvcH0KCQl7CgkJQWRvYmVfQUdNX0NvcmUgL0FHTUNPUkVfRGVmYXVs
dF9DVE0gZ2V0IG1hdHJpeCBjdXJyZW50bWF0cml4IG5lCgkJQWRvYmVfQUdNX0NvcmUgL0FHTUNP
UkVfRGVmYXVsdF9QYWdlU2l6ZSBnZXQgdHlwZSAvYXJyYXl0eXBlIGVxCgkJCXsKCQkJQWRvYmVf
QUdNX0NvcmUgL0FHTUNPUkVfRGVmYXVsdF9QYWdlU2l6ZSBnZXQgMCBnZXQgY3VycmVudHBhZ2Vk
ZXZpY2UvUGFnZVNpemUgZ2V0IDAgZ2V0IGVxIGFuZAoJCQlBZG9iZV9BR01fQ29yZSAvQUdNQ09S
RV9EZWZhdWx0X1BhZ2VTaXplIGdldCAxIGdldCBjdXJyZW50cGFnZWRldmljZS9QYWdlU2l6ZSBn
ZXQgMSBnZXQgZXEgYW5kCgkJCX1pZgoJCQl7CgkJCUFkb2JlX0FHTV9Db3JlIC9BR01DT1JFX0Rl
ZmF1bHRfQ1RNIGdldCBzZXRtYXRyaXgKCQkJfWlmCgkJfWlmZWxzZQp9ZGVmCkFHTUNPUkVfZXJy
X3N0cmluZ3MgYmVnaW4KCS9BR01DT1JFX2JhZF9lbnZpcm9uIChFbnZpcm9ubWVudCBub3Qgc2F0
aXNmYWN0b3J5IGZvciB0aGlzIGpvYi4gRW5zdXJlIHRoYXQgdGhlIFBQRCBpcyBjb3JyZWN0IG9y
IHRoYXQgdGhlIFBvc3RTY3JpcHQgbGV2ZWwgcmVxdWVzdGVkIGlzIHN1cHBvcnRlZCBieSB0aGlz
IHByaW50ZXIuICkgZGVmCgkvQUdNQ09SRV9jb2xvcl9zcGFjZV9vbmhvc3Rfc2VwcyAoVGhpcyBq
b2IgY29udGFpbnMgY29sb3JzIHRoYXQgd2lsbCBub3Qgc2VwYXJhdGUgd2l0aCBvbi1ob3N0IG1l
dGhvZHMuICkgZGVmCgkvQUdNQ09SRV9pbnZhbGlkX2NvbG9yX3NwYWNlIChUaGlzIGpvYiBjb250
YWlucyBhbiBpbnZhbGlkIGNvbG9yIHNwYWNlLiApIGRlZgplbmQKZW5kCnN5c3RlbWRpY3QgL3Nl
dHBhY2tpbmcga25vd24KewoJc2V0cGFja2luZwp9IGlmCiUlRW5kUmVzb3VyY2UKJSVCZWdpblJl
c291cmNlOiBwcm9jc2V0IEFkb2JlX0Nvb2xUeXBlX0NvcmUgMi4yMyAwCiUlQ29weXJpZ2h0OiBD
b3B5cmlnaHQgMTk5Ny0yMDAzIEFkb2JlIFN5c3RlbXMgSW5jb3Jwb3JhdGVkLiAgQWxsIFJpZ2h0
cyBSZXNlcnZlZC4KJSVWZXJzaW9uOiAyLjIzIDAKMTAgZGljdCBiZWdpbgovQWRvYmVfQ29vbFR5
cGVfUGFzc3RocnUgY3VycmVudGRpY3QgZGVmCi9BZG9iZV9Db29sVHlwZV9Db3JlX0RlZmluZWQg
dXNlcmRpY3QgL0Fkb2JlX0Nvb2xUeXBlX0NvcmUga25vd24gZGVmCkFkb2JlX0Nvb2xUeXBlX0Nv
cmVfRGVmaW5lZAoJeyAvQWRvYmVfQ29vbFR5cGVfQ29yZSB1c2VyZGljdCAvQWRvYmVfQ29vbFR5
cGVfQ29yZSBnZXQgZGVmIH0KaWYKdXNlcmRpY3QgL0Fkb2JlX0Nvb2xUeXBlX0NvcmUgNjAgZGlj
dCBkdXAgYmVnaW4gcHV0Ci9BZG9iZV9Db29sVHlwZV9WZXJzaW9uIDIuMjMgZGVmCi9MZXZlbDI/
CglzeXN0ZW1kaWN0IC9sYW5ndWFnZWxldmVsIGtub3duIGR1cAoJCXsgcG9wIHN5c3RlbWRpY3Qg
L2xhbmd1YWdlbGV2ZWwgZ2V0IDIgZ2UgfQoJaWYgZGVmCkxldmVsMj8gbm90Cgl7CgkvY3VycmVu
dGdsb2JhbCBmYWxzZSBkZWYKCS9zZXRnbG9iYWwgL3BvcCBsb2FkIGRlZgoJL2djaGVjayB7IHBv
cCBmYWxzZSB9IGJpbmQgZGVmCgkvY3VycmVudHBhY2tpbmcgZmFsc2UgZGVmCgkvc2V0cGFja2lu
ZyAvcG9wIGxvYWQgZGVmCgkvU2hhcmVkRm9udERpcmVjdG9yeSAwIGRpY3QgZGVmCgl9CmlmCmN1
cnJlbnRwYWNraW5nCnRydWUgc2V0cGFja2luZwovQF9TYXZlU3RhY2tMZXZlbHMKCXsKCUFkb2Jl
X0Nvb2xUeXBlX0RhdGEKCQliZWdpbgoJCUBvcFN0YWNrQ291bnRCeUxldmVsIEBvcFN0YWNrTGV2
ZWwKCQkyIGNvcHkga25vd24gbm90CgkJCXsgMiBjb3B5IDMgZGljdCBkdXAgL2FyZ3MgNyBpbmRl
eCA1IGFkZCBhcnJheSBwdXQgcHV0IGdldCB9CgkJCXsKCQkJZ2V0IGR1cCAvYXJncyBnZXQgZHVw
IGxlbmd0aCAzIGluZGV4IGx0CgkJCQl7CgkJCQlkdXAgbGVuZ3RoIDUgYWRkIGFycmF5IGV4Y2gK
CQkJCTEgaW5kZXggZXhjaCAwIGV4Y2ggcHV0aW50ZXJ2YWwKCQkJCTEgaW5kZXggZXhjaCAvYXJn
cyBleGNoIHB1dAoJCQkJfQoJCQkJeyBwb3AgfQoJCQlpZmVsc2UKCQkJfQoJCWlmZWxzZQoJCQli
ZWdpbgoJCQljb3VudCAyIHN1YiAxIGluZGV4IGx0CgkJCQl7IHBvcCBjb3VudCAxIHN1YiB9CgkJ
CWlmCgkJCWR1cCAvYXJnQ291bnQgZXhjaCBkZWYKCQkJZHVwIDAgZ3QKCQkJCXsKCQkJCWV4Y2gg
MSBpbmRleCAyIGFkZCAxIHJvbGwKCQkJCWFyZ3MgZXhjaCAwIGV4Y2ggZ2V0aW50ZXJ2YWwgCgkJ
CWFzdG9yZSBwb3AKCQkJCX0KCQkJCXsgcG9wIH0KCQkJaWZlbHNlCgkJCWNvdW50IDEgc3ViIC9y
ZXN0Q291bnQgZXhjaCBkZWYKCQkJZW5kCgkJL0BvcFN0YWNrTGV2ZWwgQG9wU3RhY2tMZXZlbCAx
IGFkZCBkZWYKCQljb3VudGRpY3RzdGFjayAxIHN1YgoJCUBkaWN0U3RhY2tDb3VudEJ5TGV2ZWwg
ZXhjaCBAZGljdFN0YWNrTGV2ZWwgZXhjaCBwdXQKCQkvQGRpY3RTdGFja0xldmVsIEBkaWN0U3Rh
Y2tMZXZlbCAxIGFkZCBkZWYKCQllbmQKCX0gYmluZCBkZWYKL0BfUmVzdG9yZVN0YWNrTGV2ZWxz
Cgl7CglBZG9iZV9Db29sVHlwZV9EYXRhCgkJYmVnaW4KCQkvQG9wU3RhY2tMZXZlbCBAb3BTdGFj
a0xldmVsIDEgc3ViIGRlZgoJCUBvcFN0YWNrQ291bnRCeUxldmVsIEBvcFN0YWNrTGV2ZWwgZ2V0
CgkJCWJlZ2luCgkJCWNvdW50IHJlc3RDb3VudCBzdWIgZHVwIDAgZ3QKCQkJCXsgeyBwb3AgfSBy
ZXBlYXQgfQoJCQkJeyBwb3AgfQoJCQlpZmVsc2UKCQkJYXJncyAwIGFyZ0NvdW50IGdldGludGVy
dmFsIHt9IGZvcmFsbAoJCQllbmQKCQkvQGRpY3RTdGFja0xldmVsIEBkaWN0U3RhY2tMZXZlbCAx
IHN1YiBkZWYKCQlAZGljdFN0YWNrQ291bnRCeUxldmVsIEBkaWN0U3RhY2tMZXZlbCBnZXQKCQll
bmQKCWNvdW50ZGljdHN0YWNrIGV4Y2ggc3ViIGR1cCAwIGd0CgkJeyB7IGVuZCB9IHJlcGVhdCB9
CgkJeyBwb3AgfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi9AX1BvcFN0YWNrTGV2ZWxzCgl7CglBZG9i
ZV9Db29sVHlwZV9EYXRhCgkJYmVnaW4KCQkvQG9wU3RhY2tMZXZlbCBAb3BTdGFja0xldmVsIDEg
c3ViIGRlZgoJCS9AZGljdFN0YWNrTGV2ZWwgQGRpY3RTdGFja0xldmVsIDEgc3ViIGRlZgoJCWVu
ZAoJfSBiaW5kIGRlZgovQFJhaXNlCgl7CglleGNoIGN2eCBleGNoIGVycm9yZGljdCBleGNoIGdl
dCBleGVjCglzdG9wCgl9IGJpbmQgZGVmCi9AUmVSYWlzZQoJewoJY3Z4ICRlcnJvciAvZXJyb3Ju
YW1lIGdldCBlcnJvcmRpY3QgZXhjaCBnZXQgZXhlYwoJc3RvcAoJfSBiaW5kIGRlZgovQFN0b3Bw
ZWQKCXsKCTAgQCNTdG9wcGVkCgl9IGJpbmQgZGVmCi9AI1N0b3BwZWQKCXsKCUBfU2F2ZVN0YWNr
TGV2ZWxzCglzdG9wcGVkCgkJeyBAX1Jlc3RvcmVTdGFja0xldmVscyB0cnVlIH0KCQl7IEBfUG9w
U3RhY2tMZXZlbHMgZmFsc2UgfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi9AQXJnCgl7CglBZG9iZV9D
b29sVHlwZV9EYXRhCgkJYmVnaW4KCQlAb3BTdGFja0NvdW50QnlMZXZlbCBAb3BTdGFja0xldmVs
IDEgc3ViIGdldCAvYXJncyBnZXQgZXhjaCBnZXQKCQllbmQKCX0gYmluZCBkZWYKY3VycmVudGds
b2JhbCB0cnVlIHNldGdsb2JhbAovQ1RIYXNSZXNvdXJjZUZvckFsbEJ1ZwoJTGV2ZWwyPwoJCXsK
CQkxIGRpY3QgZHVwIGJlZ2luCgkJbWFyawoJCQl7CgkJCQkoKikgeyBwb3Agc3RvcCB9IDEyOCBz
dHJpbmcgL0NhdGVnb3J5CgkJCXJlc291cmNlZm9yYWxsCgkJCX0KCQlzdG9wcGVkCgkJY2xlYXJ0
b21hcmsKCQljdXJyZW50ZGljdCBlcSBkdXAKCQkJeyBlbmQgfQoJCWlmCgkJbm90CgkJfQoJCXsg
ZmFsc2UgfQoJaWZlbHNlCglkZWYKL0NUSGFzUmVzb3VyY2VTdGF0dXNCdWcKCUxldmVsMj8KCQl7
CgkJbWFyawoJCQl7IC9zdGV2ZWFtZXJpZ2UgL0NhdGVnb3J5IHJlc291cmNlc3RhdHVzIH0KCQlz
dG9wcGVkCgkJCXsgY2xlYXJ0b21hcmsgdHJ1ZSB9CgkJCXsgY2xlYXJ0b21hcmsgY3VycmVudGds
b2JhbCBub3QgfQoJCWlmZWxzZQoJCX0KCQl7IGZhbHNlIH0KCWlmZWxzZQoJZGVmCnNldGdsb2Jh
bAovQ1RSZXNvdXJjZVN0YXR1cwoJCXsKCQltYXJrIDMgMSByb2xsCgkJL0NhdGVnb3J5IGZpbmRy
ZXNvdXJjZQoJCQliZWdpbgoJCQkoe1Jlc291cmNlU3RhdHVzfSBzdG9wcGVkKSAwICgpIC9TdWJG
aWxlRGVjb2RlIGZpbHRlciBjdnggZXhlYwoJCQkJeyBjbGVhcnRvbWFyayBmYWxzZSB9CgkJCQl7
IHsgMyAyIHJvbGwgcG9wIHRydWUgfSB7IGNsZWFydG9tYXJrIGZhbHNlIH0gaWZlbHNlIH0KCQkJ
aWZlbHNlCgkJCWVuZAoJCX0gYmluZCBkZWYKL0NUV29ya0Fyb3VuZEJ1Z3MKCXsKCUxldmVsMj8K
CQl7CgkJL2NpZF9QcmVMb2FkIC9Qcm9jU2V0IHJlc291cmNlc3RhdHVzCgkJCXsKCQkJcG9wIHBv
cAoJCQljdXJyZW50Z2xvYmFsCgkJCW1hcmsKCQkJCXsKCQkJCSgqKQoJCQkJCXsKCQkJCQlkdXAg
L0NNYXAgQ1RIYXNSZXNvdXJjZVN0YXR1c0J1ZwoJCQkJCQl7IENUUmVzb3VyY2VTdGF0dXMgfQoJ
CQkJCQl7IHJlc291cmNlc3RhdHVzIH0KCQkJCQlpZmVsc2UKCQkJCQkJewoJCQkJCQlwb3AgZHVw
IDAgZXEgZXhjaCAxIGVxIG9yCgkJCQkJCQl7CgkJCQkJCQlkdXAgL0NNYXAgZmluZHJlc291cmNl
IGdjaGVjayBzZXRnbG9iYWwKCQkJCQkJCS9DTWFwIHVuZGVmaW5lcmVzb3VyY2UKCQkJCQkJCX0K
CQkJCQkJCXsKCQkJCQkJCXBvcCBDVEhhc1Jlc291cmNlRm9yQWxsQnVnCgkJCQkJCQkJeyBleGl0
IH0KCQkJCQkJCQl7IHN0b3AgfQoJCQkJCQkJaWZlbHNlCgkJCQkJCQl9CgkJCQkJCWlmZWxzZQoJ
CQkJCQl9CgkJCQkJCXsgcG9wIH0KCQkJCQlpZmVsc2UKCQkJCQl9CgkJCQkxMjggc3RyaW5nIC9D
TWFwIHJlc291cmNlZm9yYWxsCgkJCQl9CgkJCXN0b3BwZWQKCQkJCXsgY2xlYXJ0b21hcmsgfQoJ
CQlzdG9wcGVkIHBvcAoJCQlzZXRnbG9iYWwKCQkJfQoJCWlmCgkJfQoJaWYKCX0gYmluZCBkZWYK
L2RvY19zZXR1cAoJewoJQWRvYmVfQ29vbFR5cGVfQ29yZQoJCWJlZ2luCgkJQ1RXb3JrQXJvdW5k
QnVncwoJCS9tb3YgL21vdmV0byBsb2FkIGRlZgoJCS9uZm50IC9uZXdlbmNvZGVkZm9udCBsb2Fk
IGRlZgoJCS9tZm50IC9tYWtlZm9udCBsb2FkIGRlZgoJCS9zZm50IC9zZXRmb250IGxvYWQgZGVm
CgkJL3VmbnQgL3VuZGVmaW5lZm9udCBsb2FkIGRlZgoJCS9jaHAgL2NoYXJwYXRoIGxvYWQgZGVm
CgkJL2F3c2ggL2F3aWR0aHNob3cgbG9hZCBkZWYKCQkvd3NoIC93aWR0aHNob3cgbG9hZCBkZWYK
CQkvYXNoIC9hc2hvdyBsb2FkIGRlZgoJCS9zaCAvc2hvdyBsb2FkIGRlZgoJCWVuZAoJdXNlcmRp
Y3QgL0Fkb2JlX0Nvb2xUeXBlX0RhdGEgMTAgZGljdCBkdXAKCQliZWdpbgoJCS9BZGRXaWR0aHM/
IGZhbHNlIGRlZgoJCS9DQyAwIGRlZgoJCS9jaGFyY29kZSAyIHN0cmluZyBkZWYKCQkvQG9wU3Rh
Y2tDb3VudEJ5TGV2ZWwgMzIgZGljdCBkZWYKCQkvQG9wU3RhY2tMZXZlbCAwIGRlZgoJCS9AZGlj
dFN0YWNrQ291bnRCeUxldmVsIDMyIGRpY3QgZGVmCgkJL0BkaWN0U3RhY2tMZXZlbCAwIGRlZgoJ
CS9JblZNRm9udHNCeUNNYXAgMTAgZGljdCBkZWYKCQkvSW5WTURlZXBDb3BpZWRGb250cyAxMCBk
aWN0IGRlZgoJCWVuZCBwdXQKCX0gYmluZCBkZWYKL2RvY190cmFpbGVyCgl7CgljdXJyZW50ZGlj
dCBBZG9iZV9Db29sVHlwZV9Db3JlIGVxCgkJeyBlbmQgfQoJaWYKCX0gYmluZCBkZWYKL3BhZ2Vf
c2V0dXAKCXsKCUFkb2JlX0Nvb2xUeXBlX0NvcmUgYmVnaW4KCX0gYmluZCBkZWYKL3BhZ2VfdHJh
aWxlcgoJewoJZW5kCgl9IGJpbmQgZGVmCi91bmxvYWQKCXsKCXN5c3RlbWRpY3QgL2xhbmd1YWdl
bGV2ZWwga25vd24KCQl7CgkJc3lzdGVtZGljdC9sYW5ndWFnZWxldmVsIGdldCAyIGdlCgkJCXsK
CQkJdXNlcmRpY3QvQWRvYmVfQ29vbFR5cGVfQ29yZSAyIGNvcHkga25vd24KCQkJCXsgdW5kZWYg
fQoJCQkJeyBwb3AgcG9wIH0KCQkJaWZlbHNlCgkJCX0KCQlpZgoJCX0KCWlmCgl9IGJpbmQgZGVm
Ci9uZGYKCXsKCTEgaW5kZXggd2hlcmUKCQl7IHBvcCBwb3AgcG9wIH0KCQl7IGR1cCB4Y2hlY2sg
eyBiaW5kIH0gaWYgZGVmIH0KCWlmZWxzZQoJfSBkZWYKL2ZpbmRmb250IHN5c3RlbWRpY3QKCWJl
Z2luCgl1c2VyZGljdAoJCWJlZ2luCgkJL2dsb2JhbGRpY3Qgd2hlcmUgeyAvZ2xvYmFsZGljdCBn
ZXQgYmVnaW4gfSBpZgoJCQlkdXAgd2hlcmUgcG9wIGV4Y2ggZ2V0CgkJL2dsb2JhbGRpY3Qgd2hl
cmUgeyBwb3AgZW5kIH0gaWYKCQllbmQKCWVuZApBZG9iZV9Db29sVHlwZV9Db3JlX0RlZmluZWQK
CXsgL3N5c3RlbWZpbmRmb250IGV4Y2ggZGVmIH0KCXsKCS9maW5kZm9udCAxIGluZGV4IGRlZgoJ
L3N5c3RlbWZpbmRmb250IGV4Y2ggZGVmCgl9CmlmZWxzZQovdW5kZWZpbmVmb250Cgl7IHBvcCB9
IG5kZgovY29weWZvbnQKCXsKCWN1cnJlbnRnbG9iYWwgMyAxIHJvbGwKCTEgaW5kZXggZ2NoZWNr
IHNldGdsb2JhbAoJZHVwIG51bGwgZXEgeyAwIH0geyBkdXAgbGVuZ3RoIH0gaWZlbHNlCgkyIGlu
ZGV4IGxlbmd0aCBhZGQgMSBhZGQgZGljdAoJCWJlZ2luCgkJZXhjaAoJCQl7CgkJCTEgaW5kZXgg
L0ZJRCBlcQoJCQkJeyBwb3AgcG9wIH0KCQkJCXsgZGVmIH0KCQkJaWZlbHNlCgkJCX0KCQlmb3Jh
bGwKCQlkdXAgbnVsbCBlcQoJCQl7IHBvcCB9CgkJCXsgeyBkZWYgfSBmb3JhbGwgfQoJCWlmZWxz
ZQoJCWN1cnJlbnRkaWN0CgkJZW5kCglleGNoIHNldGdsb2JhbAoJfSBiaW5kIGRlZgovY29weWFy
cmF5Cgl7CgljdXJyZW50Z2xvYmFsIGV4Y2gKCWR1cCBnY2hlY2sgc2V0Z2xvYmFsCglkdXAgbGVu
Z3RoIGFycmF5IGNvcHkKCWV4Y2ggc2V0Z2xvYmFsCgl9IGJpbmQgZGVmCi9uZXdlbmNvZGVkZm9u
dAoJewoJY3VycmVudGdsb2JhbAoJCXsKCQlTaGFyZWRGb250RGlyZWN0b3J5IDMgaW5kZXggIGtu
b3duCgkJCXsgU2hhcmVkRm9udERpcmVjdG9yeSAzIGluZGV4IGdldCAvRm9udFJlZmVyZW5jZWQg
a25vd24gfQoJCQl7IGZhbHNlIH0KCQlpZmVsc2UKCQl9CgkJewoJCUZvbnREaXJlY3RvcnkgMyBp
bmRleCBrbm93bgoJCQl7IEZvbnREaXJlY3RvcnkgMyBpbmRleCBnZXQgL0ZvbnRSZWZlcmVuY2Vk
IGtub3duIH0KCQkJewoJCQlTaGFyZWRGb250RGlyZWN0b3J5IDMgaW5kZXgga25vd24KCQkJCXsg
U2hhcmVkRm9udERpcmVjdG9yeSAzIGluZGV4IGdldCAvRm9udFJlZmVyZW5jZWQga25vd24gfQoJ
CQkJeyBmYWxzZSB9CgkJCWlmZWxzZQoJCQl9CgkJaWZlbHNlCgkJfQoJaWZlbHNlCglkdXAKCQl7
CgkJMyBpbmRleCBmaW5kZm9udCAvRm9udFJlZmVyZW5jZWQgZ2V0CgkJMiBpbmRleCBkdXAgdHlw
ZSAvbmFtZXR5cGUgZXEKCQkJe2ZpbmRmb250fQoJCWlmIG5lCgkJCXsgcG9wIGZhbHNlIH0KCQlp
ZgoJCX0KCWlmCgkJewoJCXBvcAoJCTEgaW5kZXggZmluZGZvbnQKCQkvRW5jb2RpbmcgZ2V0IGV4
Y2gKCQkwIDEgMjU1CgkJCXsgMiBjb3B5IGdldCAzIGluZGV4IDMgMSByb2xsIHB1dCB9CgkJZm9y
CgkJcG9wIHBvcCBwb3AKCQl9CgkJewoJCWR1cCB0eXBlIC9uYW1ldHlwZSBlcQoJCSAgeyBmaW5k
Zm9udCB9CgkgIGlmCgkJZHVwIGR1cCBtYXhsZW5ndGggMiBhZGQgZGljdAoJCQliZWdpbgoJCQll
eGNoCgkJCQl7CgkJCQkxIGluZGV4IC9GSUQgbmUKCQkJCQl7ZGVmfQoJCQkJCXtwb3AgcG9wfQoJ
CQkJaWZlbHNlCgkJCQl9CgkJCWZvcmFsbAoJCQkvRm9udFJlZmVyZW5jZWQgZXhjaCBkZWYKCQkJ
L0VuY29kaW5nIGV4Y2ggZHVwIGxlbmd0aCBhcnJheSBjb3B5IGRlZgoJCQkvRm9udE5hbWUgMSBp
bmRleCBkdXAgdHlwZSAvc3RyaW5ndHlwZSBlcSB7IGN2biB9IGlmIGRlZiBkdXAKCQkJY3VycmVu
dGRpY3QKCQkJZW5kCgkJZGVmaW5lZm9udCBkZWYKCQl9CglpZmVsc2UKCX0gYmluZCBkZWYKL1Nl
dFN1YnN0aXR1dGVTdHJhdGVneQoJewoJJFN1YnN0aXR1dGVGb250CgkJYmVnaW4KCQlkdXAgdHlw
ZSAvZGljdHR5cGUgbmUKCQkJeyAwIGRpY3QgfQoJCWlmCgkJY3VycmVudGRpY3QgLyRTdHJhdGVn
aWVzIGtub3duCgkJCXsKCQkJZXhjaCAkU3RyYXRlZ2llcyBleGNoIAoJCQkyIGNvcHkga25vd24K
CQkJCXsKCQkJCWdldAoJCQkJMiBjb3B5IG1heGxlbmd0aCBleGNoIG1heGxlbmd0aCBhZGQgZGlj
dAoJCQkJCWJlZ2luCgkJCQkJeyBkZWYgfSBmb3JhbGwKCQkJCQl7IGRlZiB9IGZvcmFsbAoJCQkJ
CWN1cnJlbnRkaWN0CgkJCQkJZHVwIC8kSW5pdCBrbm93bgoJCQkJCQl7IGR1cCAvJEluaXQgZ2V0
IGV4ZWMgfQoJCQkJCWlmCgkJCQkJZW5kCgkJCQkvJFN0cmF0ZWd5IGV4Y2ggZGVmCgkJCQl9CgkJ
CQl7IHBvcCBwb3AgcG9wIH0KCQkJaWZlbHNlCgkJCX0KCQkJeyBwb3AgcG9wIH0KCQlpZmVsc2UK
CQllbmQKCX0gYmluZCBkZWYKL3NjZmYKCXsKCSRTdWJzdGl0dXRlRm9udAoJCWJlZ2luCgkJZHVw
IHR5cGUgL3N0cmluZ3R5cGUgZXEKCQkJeyBkdXAgbGVuZ3RoIGV4Y2ggfQoJCQl7IG51bGwgfQoJ
CWlmZWxzZQoJCS8kc25hbWUgZXhjaCBkZWYKCQkvJHNsZW4gZXhjaCBkZWYKCQkvJGluVk1JbmRl
eAoJCQkkc25hbWUgbnVsbCBlcQoJCQkJewoJCQkJMSBpbmRleCAkc3RyIGN2cwoJCQkJZHVwIGxl
bmd0aCAkc2xlbiBzdWIgJHNsZW4gZ2V0aW50ZXJ2YWwgY3ZuCgkJCQl9CgkJCQl7ICRzbmFtZSB9
CgkJCWlmZWxzZSBkZWYKCQllbmQKCQl7IGZpbmRmb250IH0KCUBTdG9wcGVkCgkJewoJCWR1cCBs
ZW5ndGggOCBhZGQgc3RyaW5nIGV4Y2gKCQkxIGluZGV4IDAgKEJhZEZvbnQ6KSBwdXRpbnRlcnZh
bAoJCTEgaW5kZXggZXhjaCA4IGV4Y2ggZHVwIGxlbmd0aCBzdHJpbmcgY3ZzIHB1dGludGVydmFs
IGN2bgoJCQl7IGZpbmRmb250IH0KCQlAU3RvcHBlZAoJCQl7IHBvcCAvQ291cmllciBmaW5kZm9u
dCB9CgkJaWYKCQl9CglpZgoJJFN1YnN0aXR1dGVGb250CgkJYmVnaW4KCQkvJHNuYW1lIG51bGwg
ZGVmCgkJLyRzbGVuIDAgZGVmCgkJLyRpblZNSW5kZXggbnVsbCBkZWYKCQllbmQKCX0gYmluZCBk
ZWYKL2lzV2lkdGhzT25seUZvbnQKCXsKCWR1cCAvV2lkdGhzT25seSBrbm93bgoJCXsgcG9wIHBv
cCB0cnVlIH0KCQl7CgkJZHVwIC9GRGVwVmVjdG9yIGtub3duCgkJCXsgL0ZEZXBWZWN0b3IgZ2V0
IHsgaXNXaWR0aHNPbmx5Rm9udCBkdXAgeyBleGl0IH0gaWYgfSBmb3JhbGwgfQoJCQl7CgkJCWR1
cCAvRkRBcnJheSBrbm93bgoJCQkJeyAvRkRBcnJheSBnZXQgeyBpc1dpZHRoc09ubHlGb250IGR1
cCB7IGV4aXQgfSBpZiB9IGZvcmFsbCB9CgkJCQl7IHBvcCB9CgkJCWlmZWxzZQoJCQl9CgkJaWZl
bHNlCgkJfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi8/c3RyMSAyNTYgc3RyaW5nIGRlZgovP3NldAoJ
ewoJJFN1YnN0aXR1dGVGb250CgkJYmVnaW4KCQkvJHN1YnN0aXR1dGVGb3VuZCBmYWxzZSBkZWYK
CQkvJGZvbnRuYW1lIDQgaW5kZXggZGVmCgkJLyRkb1NtYXJ0U3ViIGZhbHNlIGRlZgoJCWVuZAoJ
MyBpbmRleAoJY3VycmVudGdsb2JhbCBmYWxzZSBzZXRnbG9iYWwgZXhjaAoJL0NvbXBhdGlibGVG
b250cyAvUHJvY1NldCByZXNvdXJjZXN0YXR1cwoJCXsKCQlwb3AgcG9wCgkJL0NvbXBhdGlibGVG
b250cyAvUHJvY1NldCBmaW5kcmVzb3VyY2UKCQkJYmVnaW4KCQkJZHVwIC9Db21wYXRpYmxlRm9u
dCBjdXJyZW50ZXhjZXB0aW9uCgkJCTEgaW5kZXggL0NvbXBhdGlibGVGb250IHRydWUgc2V0ZXhj
ZXB0aW9uCgkJCTEgaW5kZXggL0ZvbnQgcmVzb3VyY2VzdGF0dXMKCQkJCXsKCQkJCXBvcCBwb3AK
CQkJCTMgMiByb2xsIHNldGdsb2JhbAoJCQkJZW5kCgkJCQlleGNoCgkJCQlkdXAgZmluZGZvbnQK
CQkJCS9Db21wYXRpYmxlRm9udHMgL1Byb2NTZXQgZmluZHJlc291cmNlCgkJCQkJYmVnaW4KCQkJ
CQkzIDEgcm9sbCBleGNoIC9Db21wYXRpYmxlRm9udCBleGNoIHNldGV4Y2VwdGlvbgoJCQkJCWVu
ZAoJCQkJfQoJCQkJewoJCQkJMyAyIHJvbGwgc2V0Z2xvYmFsCgkJCQkxIGluZGV4IGV4Y2ggL0Nv
bXBhdGlibGVGb250IGV4Y2ggc2V0ZXhjZXB0aW9uCgkJCQllbmQKCQkJCWZpbmRmb250CgkJCQkk
U3Vic3RpdHV0ZUZvbnQgLyRzdWJzdGl0dXRlRm91bmQgdHJ1ZSBwdXQKCQkJCX0KCQkJaWZlbHNl
CgkJfQoJCXsgZXhjaCBzZXRnbG9iYWwgZmluZGZvbnQgfQoJaWZlbHNlCgkkU3Vic3RpdHV0ZUZv
bnQKCQliZWdpbgoJCSRzdWJzdGl0dXRlRm91bmQKCQkJewoJCSBmYWxzZQoJCSAoJSVbVXNpbmcg
ZW1iZWRkZWQgZm9udCApIHByaW50CgkJIDUgaW5kZXggP3N0cjEgY3ZzIHByaW50CgkJICggdG8g
YXZvaWQgdGhlIGZvbnQgc3Vic3RpdHV0aW9uIHByb2JsZW0gbm90ZWQgZWFybGllci5dJSVcbikg
cHJpbnQKCQkgfQoJCQl7CgkJCWR1cCAvRm9udE5hbWUga25vd24KCQkJCXsKCQkJCWR1cCAvRm9u
dE5hbWUgZ2V0ICRmb250bmFtZSBlcQoJCQkJMSBpbmRleCAvRGlzdGlsbGVyRmF1eEZvbnQga25v
d24gbm90IGFuZAoJCQkJL2N1cnJlbnRkaXN0aWxsZXJwYXJhbXMgd2hlcmUKCQkJCQl7IHBvcCBm
YWxzZSAyIGluZGV4IGlzV2lkdGhzT25seUZvbnQgbm90IGFuZCB9CgkJCQlpZgoJCQkJfQoJCQkJ
eyBmYWxzZSB9CgkJCWlmZWxzZQoJCQl9CgkJaWZlbHNlCgkJZXhjaCBwb3AKCQkvJGRvU21hcnRT
dWIgdHJ1ZSBkZWYKCQllbmQKCQl7CgkJZXhjaCBwb3AgZXhjaCBwb3AgZXhjaAoJCTIgZGljdCBk
dXAgL0ZvdW5kIDMgaW5kZXggcHV0CgkJZXhjaCBmaW5kZm9udCBleGNoCgkJfQoJCXsKCQlleGNo
IGV4ZWMKCQlleGNoIGR1cCBmaW5kZm9udAoJCWR1cCAvRm9udFR5cGUgZ2V0IDMgZXEKCSAgewoJ
CSAgZXhjaCA/c3RyMSBjdnMKCQkgIGR1cCBsZW5ndGggMSBzdWIKCQkgIC0xIDAKCQl7CgkJCSAg
ZXhjaCBkdXAgMiBpbmRleCBnZXQgNDIgZXEKCQkJewoJCQkJIGV4Y2ggMCBleGNoIGdldGludGVy
dmFsIGN2biA0IDEgcm9sbCAzIDIgcm9sbCBwb3AKCQkJCSBleGl0CgkJCSAgfQoJCQkgIHtleGNo
IHBvcH0gaWZlbHNlCgkJICB9Zm9yCgkJfQoJCXsKCQkgZXhjaCBwb3AKCSAgfSBpZmVsc2UKCQky
IGRpY3QgZHVwIC9Eb3dubG9hZGVkIDYgNSByb2xsIHB1dAoJCX0KCWlmZWxzZQoJZHVwIC9Gb250
TmFtZSA0IGluZGV4IHB1dCBjb3B5Zm9udCBkZWZpbmVmb250IHBvcAoJfSBiaW5kIGRlZgovP3N0
cjIgMjU2IHN0cmluZyBkZWYKLz9hZGQKCXsKCTEgaW5kZXggdHlwZSAvaW50ZWdlcnR5cGUgZXEK
CQl7IGV4Y2ggdHJ1ZSA0IDIgfQoJCXsgZmFsc2UgMyAxIH0KCWlmZWxzZQoJcm9sbAoJMSBpbmRl
eCBmaW5kZm9udAoJZHVwIC9XaWR0aHMga25vd24KCQl7CgkJQWRvYmVfQ29vbFR5cGVfRGF0YSAv
QWRkV2lkdGhzPyB0cnVlIHB1dAoJCWdzYXZlIGR1cCAxMDAwIHNjYWxlZm9udCBzZXRmb250CgkJ
fQoJaWYKCS9Eb3dubG9hZGVkIGtub3duCgkJewoJCWV4ZWMKCQlleGNoCgkJCXsKCQkJZXhjaCA/
c3RyMiBjdnMgZXhjaAoJCQlmaW5kZm9udCAvRG93bmxvYWRlZCBnZXQgMSBkaWN0IGJlZ2luIC9E
b3dubG9hZGVkIDEgaW5kZXggZGVmID9zdHIxIGN2cyBsZW5ndGgKCQkJP3N0cjEgMSBpbmRleCAx
IGFkZCAzIGluZGV4IHB1dGludGVydmFsCgkJCWV4Y2ggbGVuZ3RoIDEgYWRkIDEgaW5kZXggYWRk
CgkJCT9zdHIxIDIgaW5kZXggKCopIHB1dGludGVydmFsCgkJCT9zdHIxIDAgMiBpbmRleCBnZXRp
bnRlcnZhbCBjdm4gZmluZGZvbnQgCgkJCT9zdHIxIDMgaW5kZXggKCspIHB1dGludGVydmFsCgkJ
CTIgZGljdCBkdXAgL0ZvbnROYW1lID9zdHIxIDAgNiBpbmRleCBnZXRpbnRlcnZhbCBjdm4gcHV0
CgkJCWR1cCAvRG93bmxvYWRlZCBEb3dubG9hZGVkIHB1dCBlbmQgY29weWZvbnQKCQkJZHVwIC9G
b250TmFtZSBnZXQgZXhjaCBkZWZpbmVmb250IHBvcCBwb3AgcG9wCgkJCX0KCQkJewoJCQlwb3AK
CQkJfQoJCWlmZWxzZQoJCX0KCQl7CgkJcG9wCgkJZXhjaAoJCQl7CgkJCWZpbmRmb250CgkJCWR1
cCAvRm91bmQgZ2V0CgkJCWR1cCBsZW5ndGggZXhjaCA/c3RyMSBjdnMgcG9wCgkJCT9zdHIxIDEg
aW5kZXggKCspIHB1dGludGVydmFsCgkJCT9zdHIxIDEgaW5kZXggMSBhZGQgNCBpbmRleCA/c3Ry
MiBjdnMgcHV0aW50ZXJ2YWwKCQkJP3N0cjEgZXhjaCAwIGV4Y2ggNSA0IHJvbGwgP3N0cjIgY3Zz
IGxlbmd0aCAxIGFkZCBhZGQgZ2V0aW50ZXJ2YWwgY3ZuCgkJCTEgZGljdCBleGNoIDEgaW5kZXgg
ZXhjaCAvRm9udE5hbWUgZXhjaCBwdXQgY29weWZvbnQKCQkJZHVwIC9Gb250TmFtZSBnZXQgZXhj
aCBkZWZpbmVmb250IHBvcAoJCQl9CgkJCXsKCQkJcG9wCgkJCX0KCQlpZmVsc2UKCQl9CglpZmVs
c2UKCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0FkZFdpZHRocz8gZ2V0CgkJeyBncmVzdG9yZSBBZG9i
ZV9Db29sVHlwZV9EYXRhIC9BZGRXaWR0aHM/IGZhbHNlIHB1dCB9CglpZgoJfSBiaW5kIGRlZgov
P3NoCgl7CgljdXJyZW50Zm9udCAvRG93bmxvYWRlZCBrbm93biB7IGV4Y2ggfSBpZiBwb3AKCX0g
YmluZCBkZWYKLz9jaHAKCXsKCWN1cnJlbnRmb250IC9Eb3dubG9hZGVkIGtub3duIHsgcG9wIH0g
eyBmYWxzZSBjaHAgfSBpZmVsc2UKCX0gYmluZCBkZWYKLz9tdiAKCXsKCWN1cnJlbnRmb250IC9E
b3dubG9hZGVkIGtub3duIHsgbW92ZXRvIHBvcCBwb3AgfSB7IHBvcCBwb3AgbW92ZXRvIH0gaWZl
bHNlCgl9IGJpbmQgZGVmCnNldHBhY2tpbmcKdXNlcmRpY3QgLyRTdWJzdGl0dXRlRm9udCAyNSBk
aWN0IHB1dAoxIGRpY3QKCWJlZ2luCgkvU3Vic3RpdHV0ZUZvbnQKCQlkdXAgJGVycm9yIGV4Y2gg
MiBjb3B5IGtub3duCgkJCXsgZ2V0IH0KCQkJeyBwb3AgcG9wIHsgcG9wIC9Db3VyaWVyIH0gYmlu
ZCB9CgkJaWZlbHNlIGRlZgoJL2N1cnJlbnRkaXN0aWxsZXJwYXJhbXMgd2hlcmUgZHVwCgkJewoJ
CXBvcCBwb3AKCQljdXJyZW50ZGlzdGlsbGVycGFyYW1zIC9DYW5ub3RFbWJlZEZvbnRQb2xpY3kg
MiBjb3B5IGtub3duCgkJCXsgZ2V0IC9FcnJvciBlcSB9CgkJCXsgcG9wIHBvcCBmYWxzZSB9CgkJ
aWZlbHNlCgkJfQoJaWYgbm90CgkJewoJCWNvdW50ZGljdHN0YWNrIGFycmF5IGRpY3RzdGFjayAw
IGdldAoJCQliZWdpbgoJCQl1c2VyZGljdAoJCQkJYmVnaW4KCQkJCSRTdWJzdGl0dXRlRm9udAoJ
CQkJCWJlZ2luCgkJCQkJLyRzdHIgMTI4IHN0cmluZyBkZWYKCQkJCQkvJGZvbnRwYXQgMTI4IHN0
cmluZyBkZWYKCQkJCQkvJHNsZW4gMCBkZWYKCQkJCQkvJHNuYW1lIG51bGwgZGVmCgkJCQkJLyRt
YXRjaCBmYWxzZSBkZWYKCQkJCQkvJGZvbnRuYW1lIG51bGwgZGVmCgkJCQkJLyRzdWJzdGl0dXRl
Rm91bmQgZmFsc2UgZGVmCgkJCQkJLyRpblZNSW5kZXggbnVsbCBkZWYKCQkJCQkvJGRvU21hcnRT
dWIgdHJ1ZSBkZWYKCQkJCQkvJGRlcHRoIDAgZGVmCgkJCQkJLyRmb250bmFtZSBudWxsIGRlZgoJ
CQkJCS8kaXRhbGljYW5nbGUgMjYuNSBkZWYKCQkJCQkvJGRzdGFjayBudWxsIGRlZgoJCQkJCS8k
U3RyYXRlZ2llcyAxMCBkaWN0IGR1cAoJCQkJCQliZWdpbgoJCQkJCQkvJFR5cGUzVW5kZXJwcmlu
dAoJCQkJCQkJewoJCQkJCQkJY3VycmVudGdsb2JhbCBleGNoIGZhbHNlIHNldGdsb2JhbAoJCQkJ
CQkJMTEgZGljdAoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJL1VzZUZvbnQgZXhjaAoJCQkJCQkJCQkk
V01vZGUgMCBuZQoJCQkJCQkJCQkJewoJCQkJCQkJCQkJZHVwIGxlbmd0aCBkaWN0IGNvcHkKCQkJ
CQkJCQkJCWR1cCAvV01vZGUgJFdNb2RlIHB1dAoJCQkJCQkJCQkJL1VzZUZvbnQgZXhjaCBkZWZp
bmVmb250CgkJCQkJCQkJCQl9CgkJCQkJCQkJCWlmIGRlZgoJCQkJCQkJCS9Gb250TmFtZSAkZm9u
dG5hbWUgZHVwIHR5cGUgL3N0cmluZ3R5cGUgZXEgeyBjdm4gfSBpZiBkZWYKCQkJCQkJCQkvRm9u
dFR5cGUgMyBkZWYKCQkJCQkJCQkvRm9udE1hdHJpeCBbIC4wMDEgMCAwIC4wMDEgMCAwIF0gZGVm
CgkJCQkJCQkJL0VuY29kaW5nIDI1NiBhcnJheSBkdXAgMCAxIDI1NSB7IC8ubm90ZGVmIHB1dCBk
dXAgfSBmb3IgcG9wIGRlZgoJCQkJCQkJCS9Gb250QkJveCBbIDAgMCAwIDAgXSBkZWYKCQkJCQkJ
CQkvQ0NJbmZvIDcgZGljdCBkdXAKCQkJCQkJCQkJYmVnaW4KCQkJCQkJCQkJL2NjIG51bGwgZGVm
CgkJCQkJCQkJCS94IDAgZGVmCgkJCQkJCQkJCS95IDAgZGVmCgkJCQkJCQkJCWVuZCBkZWYKCQkJ
CQkJCQkvQnVpbGRDaGFyCgkJCQkJCQkJCXsKCQkJCQkJCQkJZXhjaAoJCQkJCQkJCQkJYmVnaW4K
CQkJCQkJCQkJCUNDSW5mbwoJCQkJCQkJCQkJCWJlZ2luCgkJCQkJCQkJCQkJMSBzdHJpbmcgZHVw
IDAgMyBpbmRleCBwdXQgZXhjaCBwb3AKCQkJCQkJCQkJCQkvY2MgZXhjaCBkZWYKCQkJCQkJCQkJ
CQlVc2VGb250IDEwMDAgc2NhbGVmb250IHNldGZvbnQKCQkJCQkJCQkJCQljYyBzdHJpbmd3aWR0
aCAveSBleGNoIGRlZiAveCBleGNoIGRlZgoJCQkJCQkJCQkJCXggeSBzZXRjaGFyd2lkdGgKCQkJ
CQkJCQkJCQkkU3Vic3RpdHV0ZUZvbnQgLyRTdHJhdGVneSBnZXQgLyRVbmRlcnByaW50IGdldCBl
eGVjCgkJCQkJCQkJCQkJMCAwIG1vdmV0byBjYyBzaG93CgkJCQkJCQkJCQkJeCB5IG1vdmV0bwoJ
CQkJCQkJCQkJCWVuZAoJCQkJCQkJCQkJZW5kCgkJCQkJCQkJCX0gYmluZCBkZWYKCQkJCQkJCQlj
dXJyZW50ZGljdAoJCQkJCQkJCWVuZAoJCQkJCQkJZXhjaCBzZXRnbG9iYWwKCQkJCQkJCX0gYmlu
ZCBkZWYKCQkJCQkJLyRHZXRhVGludAoJCQkJCQkJMiBkaWN0IGR1cAoJCQkJCQkJCWJlZ2luCgkJ
CQkJCQkJLyRCdWlsZEZvbnQKCQkJCQkJCQkJewoJCQkJCQkJCQlkdXAgL1dNb2RlIGtub3duCgkJ
CQkJCQkJCQl7IGR1cCAvV01vZGUgZ2V0IH0KCQkJCQkJCQkJCXsgMCB9CgkJCQkJCQkJCWlmZWxz
ZQoJCQkJCQkJCQkvJFdNb2RlIGV4Y2ggZGVmCgkJCQkJCQkJCSRmb250bmFtZSBleGNoCgkJCQkJ
CQkJCWR1cCAvRm9udE5hbWUga25vd24KCQkJCQkJCQkJCXsKCQkJCQkJCQkJCWR1cCAvRm9udE5h
bWUgZ2V0CgkJCQkJCQkJCQlkdXAgdHlwZSAvc3RyaW5ndHlwZSBlcSB7IGN2biB9IGlmCgkJCQkJ
CQkJCQl9CgkJCQkJCQkJCQl7IC91bm5hbWVkZm9udCB9CgkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJ
CQlleGNoCgkJCQkJCQkJCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0luVk1EZWVwQ29waWVkRm9udHMg
Z2V0CgkJCQkJCQkJCTEgaW5kZXggL0ZvbnROYW1lIGdldCBrbm93bgoJCQkJCQkJCQkJewoJCQkJ
CQkJCQkJcG9wCgkJCQkJCQkJCQlBZG9iZV9Db29sVHlwZV9EYXRhIC9JblZNRGVlcENvcGllZEZv
bnRzIGdldAoJCQkJCQkJCQkJMSBpbmRleCBnZXQKCQkJCQkJCQkJCW51bGwgY29weWZvbnQKCQkJ
CQkJCQkJCX0KCQkJCQkJCQkJCXsgJGRlZXBjb3B5Zm9udCB9CgkJCQkJCQkJCWlmZWxzZQoJCQkJ
CQkJCQlleGNoIDEgaW5kZXggZXhjaCAvRm9udEJhc2VkT24gZXhjaCBwdXQKCQkJCQkJCQkJZHVw
IC9Gb250TmFtZSAkZm9udG5hbWUgZHVwIHR5cGUgL3N0cmluZ3R5cGUgZXEgeyBjdm4gfSBpZiBw
dXQKCQkJCQkJCQkJZGVmaW5lZm9udAoJCQkJCQkJCQlBZG9iZV9Db29sVHlwZV9EYXRhIC9JblZN
RGVlcENvcGllZEZvbnRzIGdldAoJCQkJCQkJCQkJYmVnaW4KCQkJCQkJCQkJCWR1cCAvRm9udEJh
c2VkT24gZ2V0IDEgaW5kZXggZGVmCgkJCQkJCQkJCQllbmQKCQkJCQkJCQkJfSBiaW5kIGRlZgoJ
CQkJCQkJCS8kVW5kZXJwcmludAoJCQkJCQkJCQl7CgkJCQkJCQkJCWdzYXZlCgkJCQkJCQkJCXgg
YWJzIHkgYWJzIGd0CgkJCQkJCQkJCQl7IC95IDEwMDAgZGVmIH0KCQkJCQkJCQkJCXsgL3ggLTEw
MDAgZGVmIDUwMCAxMjAgdHJhbnNsYXRlIH0KCQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCUxldmVs
Mj8KCQkJCQkJCQkJCXsKCQkJCQkJCQkJCVsgL1NlcGFyYXRpb24gKEFsbCkgL0RldmljZUNNWUsg
eyAwIDAgMCAxIHBvcCB9IF0KCQkJCQkJCQkJCXNldGNvbG9yc3BhY2UKCQkJCQkJCQkJCX0KCQkJ
CQkJCQkJCXsgMCBzZXRncmF5IH0KCQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCTEwIHNldGxpbmV3
aWR0aAoJCQkJCQkJCQl4IC44IG11bAoJCQkJCQkJCQlbIDcgMyBdCgkJCQkJCQkJCQl7CgkJCQkJ
CQkJCQl5IG11bCA4IGRpdiAxMjAgc3ViIHggMTAgZGl2IGV4Y2ggbW92ZXRvCgkJCQkJCQkJCQkw
IHkgNCBkaXYgbmVnIHJsaW5ldG8KCQkJCQkJCQkJCWR1cCAwIHJsaW5ldG8KCQkJCQkJCQkJCTAg
eSA0IGRpdiBybGluZXRvCgkJCQkJCQkJCQljbG9zZXBhdGgKCQkJCQkJCQkJCWdzYXZlCgkJCQkJ
CQkJCQlMZXZlbDI/CgkJCQkJCQkJCQkJeyAuMiBzZXRjb2xvciB9CgkJCQkJCQkJCQkJeyAuOCBz
ZXRncmF5IH0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJZmlsbCBncmVzdG9yZQoJCQkJCQkJ
CQkJc3Ryb2tlCgkJCQkJCQkJCQl9CgkJCQkJCQkJCWZvcmFsbAoJCQkJCQkJCQlwb3AKCQkJCQkJ
CQkJZ3Jlc3RvcmUKCQkJCQkJCQkJfSBiaW5kIGRlZgoJCQkJCQkJCWVuZCBkZWYKCQkJCQkJLyRP
YmxpcXVlCgkJCQkJCQkxIGRpY3QgZHVwCgkJCQkJCQkJYmVnaW4KCQkJCQkJCQkvJEJ1aWxkRm9u
dAoJCQkJCQkJCQl7CgkJCQkJCQkJCWN1cnJlbnRnbG9iYWwgZXhjaCBkdXAgZ2NoZWNrIHNldGds
b2JhbAoJCQkJCQkJCQludWxsIGNvcHlmb250CgkJCQkJCQkJCQliZWdpbgoJCQkJCQkJCQkJL0Zv
bnRCYXNlZE9uCgkJCQkJCQkJCQljdXJyZW50ZGljdCAvRm9udE5hbWUga25vd24KCQkJCQkJCQkJ
CQl7CgkJCQkJCQkJCQkJRm9udE5hbWUKCQkJCQkJCQkJCQlkdXAgdHlwZSAvc3RyaW5ndHlwZSBl
cSB7IGN2biB9IGlmCgkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJCXsgL3VubmFtZWRmb250IH0KCQkJ
CQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJZGVmCgkJCQkJCQkJCQkvRm9udE5hbWUgJGZvbnRuYW1l
IGR1cCB0eXBlIC9zdHJpbmd0eXBlIGVxIHsgY3ZuIH0gaWYgZGVmCgkJCQkJCQkJCQkvY3VycmVu
dGRpc3RpbGxlcnBhcmFtcyB3aGVyZQoJCQkJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCQkJCQl7CgkJ
CQkJCQkJCQkJL0ZvbnRJbmZvIGN1cnJlbnRkaWN0IC9Gb250SW5mbyBrbm93bgoJCQkJCQkJCQkJ
CQl7IEZvbnRJbmZvIG51bGwgY29weWZvbnQgfQoJCQkJCQkJCQkJCQl7IDIgZGljdCB9CgkJCQkJ
CQkJCQkJaWZlbHNlCgkJCQkJCQkJCQkJZHVwCgkJCQkJCQkJCQkJCWJlZ2luCgkJCQkJCQkJCQkJ
CS9JdGFsaWNBbmdsZSAkaXRhbGljYW5nbGUgZGVmCgkJCQkJCQkJCQkJCS9Gb250TWF0cml4IEZv
bnRNYXRyaXgKCQkJCQkJCQkJCQkJWyAxIDAgSXRhbGljQW5nbGUgZHVwIHNpbiBleGNoIGNvcyBk
aXYgMSAwIDAgXQoJCQkJCQkJCQkJCQltYXRyaXggY29uY2F0bWF0cml4IHJlYWRvbmx5CgkJCQkJ
CQkJCQkJCWVuZAoJCQkJCQkJCQkJCTQgMiByb2xsIGRlZgoJCQkJCQkJCQkJCWRlZgoJCQkJCQkJ
CQkJCX0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJRm9udE5hbWUgY3VycmVudGRpY3QKCQkJ
CQkJCQkJCWVuZAoJCQkJCQkJCQlkZWZpbmVmb250CgkJCQkJCQkJCWV4Y2ggc2V0Z2xvYmFsCgkJ
CQkJCQkJCX0gYmluZCBkZWYKCQkJCQkJCQllbmQgZGVmCgkJCQkJCS8kTm9uZQoJCQkJCQkJMSBk
aWN0IGR1cAoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJLyRCdWlsZEZvbnQge30gYmluZCBkZWYKCQkJ
CQkJCQllbmQgZGVmCgkJCQkJCWVuZCBkZWYKCQkJCQkvJE9ibGlxdWUgU2V0U3Vic3RpdHV0ZVN0
cmF0ZWd5CgkJCQkJLyRmaW5kZm9udEJ5RW51bQoJCQkJCQl7CgkJCQkJCWR1cCB0eXBlIC9zdHJp
bmd0eXBlIGVxIHsgY3ZuIH0gaWYKCQkJCQkJZHVwIC8kZm9udG5hbWUgZXhjaCBkZWYKCQkJCQkJ
JHNuYW1lIG51bGwgZXEKCQkJCQkJCXsgJHN0ciBjdnMgZHVwIGxlbmd0aCAkc2xlbiBzdWIgJHNs
ZW4gZ2V0aW50ZXJ2YWwgfQoJCQkJCQkJeyBwb3AgJHNuYW1lIH0KCQkJCQkJaWZlbHNlCgkJCQkJ
CSRmb250cGF0IGR1cCAwIChmb250cy8qKSBwdXRpbnRlcnZhbCBleGNoIDcgZXhjaCBwdXRpbnRl
cnZhbAoJCQkJCQkvJG1hdGNoIGZhbHNlIGRlZgoJCQkJCQkkU3Vic3RpdHV0ZUZvbnQgLyRkc3Rh
Y2sgY291bnRkaWN0c3RhY2sgYXJyYXkgZGljdHN0YWNrIHB1dAoJCQkJCQltYXJrCgkJCQkJCQl7
CgkJCQkJCQkkZm9udHBhdCAwICRzbGVuIDcgYWRkIGdldGludGVydmFsCgkJCQkJCQkJeyAvJG1h
dGNoIGV4Y2ggZGVmIGV4aXQgfQoJCQkJCQkJJHN0ciBmaWxlbmFtZWZvcmFsbAoJCQkJCQkJfQoJ
CQkJCQlzdG9wcGVkCgkJCQkJCQl7CgkJCQkJCQljbGVhcmRpY3RzdGFjawoJCQkJCQkJY3VycmVu
dGRpY3QKCQkJCQkJCXRydWUKCQkJCQkJCSRTdWJzdGl0dXRlRm9udCAvJGRzdGFjayBnZXQKCQkJ
CQkJCQl7CgkJCQkJCQkJZXhjaAoJCQkJCQkJCQl7CgkJCQkJCQkJCTEgaW5kZXggZXEKCQkJCQkJ
CQkJCXsgcG9wIGZhbHNlIH0KCQkJCQkJCQkJCXsgdHJ1ZSB9CgkJCQkJCQkJCWlmZWxzZQoJCQkJ
CQkJCQl9CgkJCQkJCQkJCXsgYmVnaW4gZmFsc2UgfQoJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCX0K
CQkJCQkJCWZvcmFsbAoJCQkJCQkJcG9wCgkJCQkJCQl9CgkJCQkJCWlmCgkJCQkJCWNsZWFydG9t
YXJrCgkJCQkJCS8kc2xlbiAwIGRlZgoJCQkJCQkkbWF0Y2ggZmFsc2UgbmUKCQkJCQkJCXsgJG1h
dGNoIChmb250cy8pIGFuY2hvcnNlYXJjaCBwb3AgcG9wIGN2biB9CgkJCQkJCQl7IC9Db3VyaWVy
IH0KCQkJCQkJaWZlbHNlCgkJCQkJCX0gYmluZCBkZWYKCQkJCQkvJFJPUyAxIGRpY3QgZHVwCgkJ
CQkJCWJlZ2luCgkJCQkJCS9BZG9iZSA0IGRpY3QgZHVwCgkJCQkJCQliZWdpbgoJCQkJCQkJL0ph
cGFuMSAgWyAvUnl1bWluLUxpZ2h0IC9IZWlzZWlNaW4tVzMKCQkJCQkJCQkJCSAgL0dvdGhpY0JC
Qi1NZWRpdW0gL0hlaXNlaUtha3VHby1XNQoJCQkJCQkJCQkJICAvSGVpc2VpTWFydUdvLVc0IC9K
dW4xMDEtTGlnaHQgXSBkZWYKCQkJCQkJCS9Lb3JlYTEgIFsgL0hZU015ZW9uZ0pvLU1lZGl1bSAv
SFlHb1RoaWMtTWVkaXVtIF0gZGVmCgkJCQkJCQkvR0IxCSAgWyAvU1RTb25nLUxpZ2h0IC9TVEhl
aXRpLVJlZ3VsYXIgXSBkZWYKCQkJCQkJCS9DTlMxCSBbIC9NS2FpLU1lZGl1bSAvTUhlaS1NZWRp
dW0gXSBkZWYKCQkJCQkJCWVuZCBkZWYKCQkJCQkJZW5kIGRlZgoJCQkJCS8kY21hcG5hbWUgbnVs
bCBkZWYKCQkJCQkvJGRlZXBjb3B5Zm9udAoJCQkJCQl7CgkJCQkJCWR1cCAvRm9udFR5cGUgZ2V0
IDAgZXEKCQkJCQkJCXsKCQkJCQkJCTEgZGljdCBkdXAgL0ZvbnROYW1lIC9jb3BpZWQgcHV0IGNv
cHlmb250CgkJCQkJCQkJYmVnaW4KCQkJCQkJCQkvRkRlcFZlY3RvciBGRGVwVmVjdG9yIGNvcHlh
cnJheQoJCQkJCQkJCTAgMSAyIGluZGV4IGxlbmd0aCAxIHN1YgoJCQkJCQkJCQl7CgkJCQkJCQkJ
CTIgY29weSBnZXQgJGRlZXBjb3B5Zm9udAoJCQkJCQkJCQlkdXAgL0ZvbnROYW1lIC9jb3BpZWQg
cHV0CgkJCQkJCQkJCS9jb3BpZWQgZXhjaCBkZWZpbmVmb250CgkJCQkJCQkJCTMgY29weSBwdXQg
cG9wIHBvcAoJCQkJCQkJCQl9CgkJCQkJCQkJZm9yCgkJCQkJCQkJZGVmCgkJCQkJCQkJY3VycmVu
dGRpY3QKCQkJCQkJCQllbmQKCQkJCQkJCX0KCQkJCQkJCXsgJFN0cmF0ZWdpZXMgLyRUeXBlM1Vu
ZGVycHJpbnQgZ2V0IGV4ZWMgfQoJCQkJCQlpZmVsc2UKCQkJCQkJfSBiaW5kIGRlZgoJCQkJCS8k
YnVpbGRmb250bmFtZQoJCQkJCQl7CgkJCQkJCWR1cCAvQ0lERm9udCBmaW5kcmVzb3VyY2UgL0NJ
RFN5c3RlbUluZm8gZ2V0CgkJCQkJCQliZWdpbgoJCQkJCQkJUmVnaXN0cnkgbGVuZ3RoIE9yZGVy
aW5nIGxlbmd0aCBTdXBwbGVtZW50IDggc3RyaW5nIGN2cwoJCQkJCQkJMyBjb3B5IGxlbmd0aCAy
IGFkZCBhZGQgYWRkIHN0cmluZwoJCQkJCQkJZHVwIDUgMSByb2xsIGR1cCAwIFJlZ2lzdHJ5IHB1
dGludGVydmFsCgkJCQkJCQlkdXAgNCBpbmRleCAoLSkgcHV0aW50ZXJ2YWwKCQkJCQkJCWR1cCA0
IGluZGV4IDEgYWRkIE9yZGVyaW5nIHB1dGludGVydmFsCgkJCQkJCQk0IDIgcm9sbCBhZGQgMSBh
ZGQgMiBjb3B5ICgtKSBwdXRpbnRlcnZhbAoJCQkJCQkJZW5kCgkJCQkJCTEgYWRkIDIgY29weSAw
IGV4Y2ggZ2V0aW50ZXJ2YWwgJGNtYXBuYW1lICRmb250cGF0IGN2cyBleGNoCgkJCQkJCWFuY2hv
cnNlYXJjaAoJCQkJCQkJeyBwb3AgcG9wIDMgMiByb2xsIHB1dGludGVydmFsIGN2biAvJGNtYXBu
YW1lIGV4Y2ggZGVmIH0KCQkJCQkJCXsgcG9wIHBvcCBwb3AgcG9wIHBvcCB9CgkJCQkJCWlmZWxz
ZQoJCQkJCQlsZW5ndGgKCQkJCQkJJHN0ciAxIGluZGV4ICgtKSBwdXRpbnRlcnZhbCAxIGFkZAoJ
CQkJCQkkc3RyIDEgaW5kZXggJGNtYXBuYW1lICRmb250cGF0IGN2cyBwdXRpbnRlcnZhbAoJCQkJ
CQkkY21hcG5hbWUgbGVuZ3RoIGFkZAoJCQkJCQkkc3RyIGV4Y2ggMCBleGNoIGdldGludGVydmFs
IGN2bgoJCQkJCQl9IGJpbmQgZGVmCgkJCQkJLyRmaW5kZm9udEJ5Uk9TCgkJCQkJCXsKCQkJCQkJ
LyRmb250bmFtZSBleGNoIGRlZgoJCQkJCQkkUk9TIFJlZ2lzdHJ5IDIgY29weSBrbm93bgoJCQkJ
CQkJewoJCQkJCQkJZ2V0IE9yZGVyaW5nIDIgY29weSBrbm93bgoJCQkJCQkJCXsgZ2V0IH0KCQkJ
CQkJCQl7IHBvcCBwb3AgW10gfQoJCQkJCQkJaWZlbHNlCgkJCQkJCQl9CgkJCQkJCQl7IHBvcCBw
b3AgW10gfQoJCQkJCQlpZmVsc2UKCQkJCQkJZmFsc2UgZXhjaAoJCQkJCQkJewoJCQkJCQkJZHVw
IC9DSURGb250IHJlc291cmNlc3RhdHVzCgkJCQkJCQkJewoJCQkJCQkJCXBvcCBwb3AKCQkJCQkJ
CQlzYXZlCgkJCQkJCQkJMSBpbmRleCAvQ0lERm9udCBmaW5kcmVzb3VyY2UKCQkJCQkJCQlkdXAg
L1dpZHRoc09ubHkga25vd24KCQkJCQkJCQkJeyBkdXAgL1dpZHRoc09ubHkgZ2V0IH0KCQkJCQkJ
CQkJeyBmYWxzZSB9CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJZXhjaCBwb3AKCQkJCQkJCQlleGNo
IHJlc3RvcmUKCQkJCQkJCQkJeyBwb3AgfQoJCQkJCQkJCQl7IGV4Y2ggcG9wIHRydWUgZXhpdCB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJfQoJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCWlmZWxzZQoJ
CQkJCQkJfQoJCQkJCQlmb3JhbGwKCQkJCQkJCXsgJHN0ciBjdnMgJGJ1aWxkZm9udG5hbWUgfQoJ
CQkJCQkJewoJCQkJCQkJZmFsc2UgKCopCgkJCQkJCQkJewoJCQkJCQkJCXNhdmUgZXhjaAoJCQkJ
CQkJCWR1cCAvQ0lERm9udCBmaW5kcmVzb3VyY2UKCQkJCQkJCQlkdXAgL1dpZHRoc09ubHkga25v
d24KCQkJCQkJCQkJeyBkdXAgL1dpZHRoc09ubHkgZ2V0IG5vdCB9CgkJCQkJCQkJCXsgdHJ1ZSB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJZXhjaCAvQ0lEU3lzdGVtSW5mbyBnZXQKCQkJCQkJCQlk
dXAgL1JlZ2lzdHJ5IGdldCBSZWdpc3RyeSBlcQoJCQkJCQkJCWV4Y2ggL09yZGVyaW5nIGdldCBP
cmRlcmluZyBlcSBhbmQgYW5kCgkJCQkJCQkJCXsgZXhjaCByZXN0b3JlIGV4Y2ggcG9wIHRydWUg
ZXhpdCB9CgkJCQkJCQkJCXsgcG9wIHJlc3RvcmUgfQoJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCX0K
CQkJCQkJCSRzdHIgL0NJREZvbnQgcmVzb3VyY2Vmb3JhbGwKCQkJCQkJCQl7ICRidWlsZGZvbnRu
YW1lIH0KCQkJCQkJCQl7ICRmb250bmFtZSAkZmluZGZvbnRCeUVudW0gfQoJCQkJCQkJaWZlbHNl
CgkJCQkJCQl9CgkJCQkJCWlmZWxzZQoJCQkJCQl9IGJpbmQgZGVmCgkJCQkJZW5kCgkJCQllbmQK
CQkJCWN1cnJlbnRkaWN0IC8kZXJyb3Iga25vd24gY3VycmVudGRpY3QgL2xhbmd1YWdlbGV2ZWwg
a25vd24gYW5kIGR1cAoJCQkJCXsgcG9wICRlcnJvciAvU3Vic3RpdHV0ZUZvbnQga25vd24gfQoJ
CQkJaWYKCQkJCWR1cAoJCQkJCXsgJGVycm9yIH0KCQkJCQl7IEFkb2JlX0Nvb2xUeXBlX0NvcmUg
fQoJCQkJaWZlbHNlCgkJCQliZWdpbgoJCQkJCXsKCQkJCQkvU3Vic3RpdHV0ZUZvbnQKCQkJCQkv
Q01hcCAvQ2F0ZWdvcnkgcmVzb3VyY2VzdGF0dXMKCQkJCQkJewoJCQkJCQlwb3AgcG9wCgkJCQkJ
CXsKCQkJCQkJJFN1YnN0aXR1dGVGb250CgkJCQkJCQliZWdpbgoJCQkJCQkJLyRzdWJzdGl0dXRl
Rm91bmQgdHJ1ZSBkZWYKCQkJCQkJCWR1cCBsZW5ndGggJHNsZW4gZ3QKCQkJCQkJCSRzbmFtZSBu
dWxsIG5lIG9yCgkJCQkJCQkkc2xlbiAwIGd0IGFuZAoJCQkJCQkJCXsKCQkJCQkJCQkkc25hbWUg
bnVsbCBlcQoJCQkJCQkJCQl7IGR1cCAkc3RyIGN2cyBkdXAgbGVuZ3RoICRzbGVuIHN1YiAkc2xl
biBnZXRpbnRlcnZhbCBjdm4gfQoJCQkJCQkJCQl7ICRzbmFtZSB9CgkJCQkJCQkJaWZlbHNlCgkJ
CQkJCQkJQWRvYmVfQ29vbFR5cGVfRGF0YSAvSW5WTUZvbnRzQnlDTWFwIGdldAoJCQkJCQkJCTEg
aW5kZXggMiBjb3B5IGtub3duCgkJCQkJCQkJCXsKCQkJCQkJCQkJZ2V0CgkJCQkJCQkJCWZhbHNl
IGV4Y2gKCQkJCQkJCQkJCXsKCQkJCQkJCQkJCXBvcAoJCQkJCQkJCQkJY3VycmVudGdsb2JhbAoJ
CQkJCQkJCQkJCXsKCQkJCQkJCQkJCQlHbG9iYWxGb250RGlyZWN0b3J5IDEgaW5kZXgga25vd24K
CQkJCQkJCQkJCQkJeyBleGNoIHBvcCB0cnVlIGV4aXQgfQoJCQkJCQkJCQkJCQl7IHBvcCB9CgkJ
CQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJCXsKCQkJCQkJCQkJCQlGb250
RGlyZWN0b3J5IDEgaW5kZXgga25vd24KCQkJCQkJCQkJCQkJeyBleGNoIHBvcCB0cnVlIGV4aXQg
fQoJCQkJCQkJCQkJCQl7CgkJCQkJCQkJCQkJCUdsb2JhbEZvbnREaXJlY3RvcnkgMSBpbmRleCBr
bm93bgoJCQkJCQkJCQkJCQkJeyBleGNoIHBvcCB0cnVlIGV4aXQgfQoJCQkJCQkJCQkJCQkJeyBw
b3AgfQoJCQkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJCWlmZWxzZQoJ
CQkJCQkJCQkJCX0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJfQoJCQkJCQkJCQlmb3JhbGwK
CQkJCQkJCQkJfQoJCQkJCQkJCQl7IHBvcCBwb3AgZmFsc2UgfQoJCQkJCQkJCWlmZWxzZQoJCQkJ
CQkJCQl7CgkJCQkJCQkJCWV4Y2ggcG9wIGV4Y2ggcG9wCgkJCQkJCQkJCX0KCQkJCQkJCQkJewoJ
CQkJCQkJCQlkdXAgL0NNYXAgcmVzb3VyY2VzdGF0dXMKCQkJCQkJCQkJCXsKCQkJCQkJCQkJCXBv
cCBwb3AKCQkJCQkJCQkJCWR1cCAvJGNtYXBuYW1lIGV4Y2ggZGVmCgkJCQkJCQkJCQkvQ01hcCBm
aW5kcmVzb3VyY2UgL0NJRFN5c3RlbUluZm8gZ2V0IHsgZGVmIH0gZm9yYWxsCgkJCQkJCQkJCQkk
ZmluZGZvbnRCeVJPUwoJCQkJCQkJCQkJfQoJCQkJCQkJCQkJewoJCQkJCQkJCQkJMTI4IHN0cmlu
ZyBjdnMKCQkJCQkJCQkJCWR1cCAoLSkgc2VhcmNoCgkJCQkJCQkJCQkJewoJCQkJCQkJCQkJCTMg
MSByb2xsIHNlYXJjaAoJCQkJCQkJCQkJCQl7CgkJCQkJCQkJCQkJCTMgMSByb2xsIHBvcAoJCQkJ
CQkJCQkJCQkJeyBkdXAgY3ZpIH0KCQkJCQkJCQkJCQkJc3RvcHBlZAoJCQkJCQkJCQkJCQkJeyBw
b3AgcG9wIHBvcCBwb3AgcG9wICRmaW5kZm9udEJ5RW51bSB9CgkJCQkJCQkJCQkJCQl7CgkJCQkJ
CQkJCQkJCQk0IDIgcm9sbCBwb3AgcG9wCgkJCQkJCQkJCQkJCQlleGNoIGxlbmd0aAoJCQkJCQkJ
CQkJCQkJZXhjaAoJCQkJCQkJCQkJCQkJMiBpbmRleCBsZW5ndGgKCQkJCQkJCQkJCQkJCTIgaW5k
ZXgKCQkJCQkJCQkJCQkJCXN1YgoJCQkJCQkJCQkJCQkJZXhjaCAxIHN1YiAtMSAwCgkJCQkJCQkJ
CQkJCQkJewoJCQkJCQkJCQkJCQkJCSRzdHIgY3ZzIGR1cCBsZW5ndGgKCQkJCQkJCQkJCQkJCQk0
IGluZGV4CgkJCQkJCQkJCQkJCQkJMAoJCQkJCQkJCQkJCQkJCTQgaW5kZXgKCQkJCQkJCQkJCQkJ
CQk0IDMgcm9sbCBhZGQKCQkJCQkJCQkJCQkJCQlnZXRpbnRlcnZhbAoJCQkJCQkJCQkJCQkJCWV4
Y2ggMSBpbmRleCBleGNoIDMgaW5kZXggZXhjaAoJCQkJCQkJCQkJCQkJCXB1dGludGVydmFsCgkJ
CQkJCQkJCQkJCQkJZHVwIC9DTWFwIHJlc291cmNlc3RhdHVzCgkJCQkJCQkJCQkJCQkJCXsKCQkJ
CQkJCQkJCQkJCQkJcG9wIHBvcAoJCQkJCQkJCQkJCQkJCQk0IDEgcm9sbCBwb3AgcG9wIHBvcAoJ
CQkJCQkJCQkJCQkJCQlkdXAgLyRjbWFwbmFtZSBleGNoIGRlZgoJCQkJCQkJCQkJCQkJCQkvQ01h
cCBmaW5kcmVzb3VyY2UgL0NJRFN5c3RlbUluZm8gZ2V0IHsgZGVmIH0gZm9yYWxsCgkJCQkJCQkJ
CQkJCQkJCSRmaW5kZm9udEJ5Uk9TCgkJCQkJCQkJCQkJCQkJCXRydWUgZXhpdAoJCQkJCQkJCQkJ
CQkJCQl9CgkJCQkJCQkJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJ
CQkJCQkJCQl9CgkJCQkJCQkJCQkJCQlmb3IKCQkJCQkJCQkJCQkJCWR1cCB0eXBlIC9ib29sZWFu
dHlwZSBlcQoJCQkJCQkJCQkJCQkJCXsgcG9wIH0KCQkJCQkJCQkJCQkJCQl7IHBvcCBwb3AgcG9w
ICRmaW5kZm9udEJ5RW51bSB9CgkJCQkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJCQkJCX0KCQkJ
CQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCQkJCX0KCQkJCQkJCQkJCQkJeyBwb3AgcG9wIHBvcCAk
ZmluZGZvbnRCeUVudW0gfQoJCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJCX0KCQkJCQkJCQkJ
CQl7IHBvcCBwb3AgJGZpbmRmb250QnlFbnVtIH0KCQkJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCQkJ
fQoJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJfQoJCQkJCQkJCWlmZWxzZQoJCQkJCQkJCX0KCQkJ
CQkJCQl7IC8vU3Vic3RpdHV0ZUZvbnQgZXhlYyB9CgkJCQkJCQlpZmVsc2UKCQkJCQkJCS8kc2xl
biAwIGRlZgoJCQkJCQkJZW5kCgkJCQkJCX0KCQkJCQkJfQoJCQkJCQl7CgkJCQkJCXsKCQkJCQkJ
JFN1YnN0aXR1dGVGb250CgkJCQkJCQliZWdpbgoJCQkJCQkJLyRzdWJzdGl0dXRlRm91bmQgdHJ1
ZSBkZWYKCQkJCQkJCWR1cCBsZW5ndGggJHNsZW4gZ3QKCQkJCQkJCSRzbmFtZSBudWxsIG5lIG9y
CgkJCQkJCQkkc2xlbiAwIGd0IGFuZAoJCQkJCQkJCXsgJGZpbmRmb250QnlFbnVtIH0KCQkJCQkJ
CQl7IC8vU3Vic3RpdHV0ZUZvbnQgZXhlYyB9CgkJCQkJCQlpZmVsc2UKCQkJCQkJCWVuZAoJCQkJ
CQl9CgkJCQkJCX0KCQkJCQlpZmVsc2UKCQkJCQliaW5kIHJlYWRvbmx5IGRlZgoJCQkJCUFkb2Jl
X0Nvb2xUeXBlX0NvcmUgL3NjZmluZGZvbnQgL3N5c3RlbWZpbmRmb250IGxvYWQgcHV0CgkJCQkJ
fQoJCQkJCXsKCQkJCQkvc2NmaW5kZm9udAoJCQkJCQl7CgkJCQkJCSRTdWJzdGl0dXRlRm9udAoJ
CQkJCQkJYmVnaW4KCQkJCQkJCWR1cCBzeXN0ZW1maW5kZm9udAoJCQkJCQkJZHVwIC9Gb250TmFt
ZSBrbm93bgoJCQkJCQkJCXsgZHVwIC9Gb250TmFtZSBnZXQgZHVwIDMgaW5kZXggbmUgfQoJCQkJ
CQkJCXsgL25vbmFtZSB0cnVlIH0KCQkJCQkJCWlmZWxzZQoJCQkJCQkJZHVwCgkJCQkJCQkJewoJ
CQkJCQkJCS8kb3JpZ2ZvbnRuYW1lZm91bmQgMiBpbmRleCBkZWYKCQkJCQkJCQkvJG9yaWdmb250
bmFtZSA0IGluZGV4IGRlZiAvJHN1YnN0aXR1dGVGb3VuZCB0cnVlIGRlZgoJCQkJCQkJCX0KCQkJ
CQkJCWlmCgkJCQkJCQlleGNoIHBvcAoJCQkJCQkJCXsKCQkJCQkJCQkkc2xlbiAwIGd0CgkJCQkJ
CQkJJHNuYW1lIG51bGwgbmUKCQkJCQkJCQkzIGluZGV4IGxlbmd0aCAkc2xlbiBndCBvciBhbmQK
CQkJCQkJCQkJewoJCQkJCQkJCQlwb3AgZHVwICRmaW5kZm9udEJ5RW51bSBmaW5kZm9udAoJCQkJ
CQkJCQlkdXAgbWF4bGVuZ3RoIDEgYWRkIGRpY3QKCQkJCQkJCQkJCWJlZ2luCgkJCQkJCQkJCQkJ
eyAxIGluZGV4IC9GSUQgZXEgeyBwb3AgcG9wIH0geyBkZWYgfSBpZmVsc2UgfQoJCQkJCQkJCQkJ
Zm9yYWxsCgkJCQkJCQkJCQljdXJyZW50ZGljdAoJCQkJCQkJCQkJZW5kCgkJCQkJCQkJCWRlZmlu
ZWZvbnQKCQkJCQkJCQkJZHVwIC9Gb250TmFtZSBrbm93biB7IGR1cCAvRm9udE5hbWUgZ2V0IH0g
eyBudWxsIH0gaWZlbHNlCgkJCQkJCQkJCSRvcmlnZm9udG5hbWVmb3VuZCBuZQoJCQkJCQkJCQkJ
ewoJCQkJCQkJCQkJJG9yaWdmb250bmFtZSAkc3RyIGN2cyBwcmludAoJCQkJCQkJCQkJKCBzdWJz
dGl0dXRpb24gcmV2aXNlZCwgdXNpbmcgKSBwcmludAoJCQkJCQkJCQkJZHVwIC9Gb250TmFtZSBr
bm93bgoJCQkJCQkJCQkJCXsgZHVwIC9Gb250TmFtZSBnZXQgfSB7ICh1bnNwZWNpZmllZCBmb250
KSB9CgkJCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQkJCSRzdHIgY3ZzIHByaW50ICguXG4pIHByaW50
CgkJCQkJCQkJCQl9CgkJCQkJCQkJCWlmCgkJCQkJCQkJCX0KCQkJCQkJCQkJeyBleGNoIHBvcCB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJfQoJCQkJCQkJCXsgZXhjaCBwb3AgfQoJCQkJCQkJaWZl
bHNlCgkJCQkJCQllbmQKCQkJCQkJfSBiaW5kIGRlZgoJCQkJCX0KCQkJCWlmZWxzZQoJCQkJZW5k
CgkJCWVuZAoJCUFkb2JlX0Nvb2xUeXBlX0NvcmVfRGVmaW5lZCBub3QKCQkJewoJCQlBZG9iZV9D
b29sVHlwZV9Db3JlIC9maW5kZm9udAoJCQkJewoJCQkJJFN1YnN0aXR1dGVGb250CgkJCQkJYmVn
aW4KCQkJCQkkZGVwdGggMCBlcQoJCQkJCQl7CgkJCQkJCS8kZm9udG5hbWUgMSBpbmRleCBkdXAg
dHlwZSAvc3RyaW5ndHlwZSBuZSB7ICRzdHIgY3ZzIH0gaWYgZGVmCgkJCQkJCS8kc3Vic3RpdHV0
ZUZvdW5kIGZhbHNlIGRlZgoJCQkJCQl9CgkJCQkJaWYKCQkJCQkvJGRlcHRoICRkZXB0aCAxIGFk
ZCBkZWYKCQkJCQllbmQKCQkJCXNjZmluZGZvbnQKCQkJCSRTdWJzdGl0dXRlRm9udAoJCQkJCWJl
Z2luCgkJCQkJLyRkZXB0aCAkZGVwdGggMSBzdWIgZGVmCgkJCQkJJHN1YnN0aXR1dGVGb3VuZCAk
ZGVwdGggMCBlcSBhbmQKCQkJCQkJewoJCQkJCQkkaW5WTUluZGV4IG51bGwgbmUKCQkJCQkJCXsg
ZHVwICRpblZNSW5kZXggJEFkZEluVk1Gb250IH0KCQkJCQkJaWYKCQkJCQkJJGRvU21hcnRTdWIK
CQkJCQkJCXsKCQkJCQkJCWN1cnJlbnRkaWN0IC8kU3RyYXRlZ3kga25vd24KCQkJCQkJCQl7ICRT
dHJhdGVneSAvJEJ1aWxkRm9udCBnZXQgZXhlYyB9CgkJCQkJCQlpZgoJCQkJCQkJfQoJCQkJCQlp
ZgoJCQkJCQl9CgkJCQkJaWYKCQkJCQllbmQKCQkJCX0gYmluZCBwdXQKCQkJfQoJCWlmCgkJfQoJ
aWYKCWVuZAovJEFkZEluVk1Gb250Cgl7CglleGNoIC9Gb250TmFtZSAyIGNvcHkga25vd24KCQl7
CgkJZ2V0CgkJMSBkaWN0IGR1cCBiZWdpbiBleGNoIDEgaW5kZXggZ2NoZWNrIGRlZiBlbmQgZXhj
aAoJCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0luVk1Gb250c0J5Q01hcCBnZXQgZXhjaAoJCSREaWN0
QWRkCgkJfQoJCXsgcG9wIHBvcCBwb3AgfQoJaWZlbHNlCgl9IGJpbmQgZGVmCi8kRGljdEFkZAoJ
ewoJMiBjb3B5IGtub3duIG5vdAoJCXsgMiBjb3B5IDQgaW5kZXggbGVuZ3RoIGRpY3QgcHV0IH0K
CWlmCglMZXZlbDI/IG5vdAoJCXsKCQkyIGNvcHkgZ2V0IGR1cCBtYXhsZW5ndGggZXhjaCBsZW5n
dGggNCBpbmRleCBsZW5ndGggYWRkIGx0CgkJMiBjb3B5IGdldCBkdXAgbGVuZ3RoIDQgaW5kZXgg
bGVuZ3RoIGFkZCBleGNoIG1heGxlbmd0aCAxIGluZGV4IGx0CgkJCXsKCQkJMiBtdWwgZGljdAoJ
CQkJYmVnaW4KCQkJCTIgY29weSBnZXQgeyBmb3JhbGwgfSBkZWYKCQkJCTIgY29weSBjdXJyZW50
ZGljdCBwdXQKCQkJCWVuZAoJCQl9CgkJCXsgcG9wIH0KCQlpZmVsc2UKCQl9CglpZgoJZ2V0CgkJ
YmVnaW4KCQkJeyBkZWYgfQoJCWZvcmFsbAoJCWVuZAoJfSBiaW5kIGRlZgplbmQKZW5kCiUlRW5k
UmVzb3VyY2UKJSVCZWdpblJlc291cmNlOiBwcm9jc2V0IEFkb2JlX0Nvb2xUeXBlX1V0aWxpdHlf
TUFLRU9DRiAxLjE5IDAKJSVDb3B5cmlnaHQ6IENvcHlyaWdodCAxOTg3LTIwMDMgQWRvYmUgU3lz
dGVtcyBJbmNvcnBvcmF0ZWQuCiUlVmVyc2lvbjogMS4xOSAwCnN5c3RlbWRpY3QgL2xhbmd1YWdl
bGV2ZWwga25vd24gZHVwCgl7IGN1cnJlbnRnbG9iYWwgZmFsc2Ugc2V0Z2xvYmFsIH0KCXsgZmFs
c2UgfQppZmVsc2UKZXhjaAp1c2VyZGljdCAvQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAyIGNvcHkg
a25vd24KCXsgMiBjb3B5IGdldCBkdXAgbWF4bGVuZ3RoIDI1IGFkZCBkaWN0IGNvcHkgfQoJeyAy
NSBkaWN0IH0KaWZlbHNlIHB1dApBZG9iZV9Db29sVHlwZV9VdGlsaXR5CgliZWdpbgoJL2N0X0xl
dmVsMj8gZXhjaCBkZWYKCS9jdF9DbG9uZT8gMTE4MzYxNTg2OSBpbnRlcm5hbGRpY3QgZHVwCgkJ
CS9DQ1J1biBrbm93biBub3QKCQkJZXhjaCAvZUNDUnVuIGtub3duIG5vdAoJCQljdF9MZXZlbDI/
IGFuZCBvciBkZWYKY3RfTGV2ZWwyPwoJeyBnbG9iYWxkaWN0IGJlZ2luIGN1cnJlbnRnbG9iYWwg
dHJ1ZSBzZXRnbG9iYWwgfQppZgoJL2N0X0FkZFN0ZENJRE1hcAoJCWN0X0xldmVsMj8KCQkJeyB7
CgkJCSgoSGV4KSA1NyBTdGFydERhdGEKCQkJMDYxNSAxZTI3IDJjMzkgMWM2MCBkOGE4IGNjMzEg
ZmUyYiBmNmUwCgkJCTdhYTMgZTU0MSBlMjFjIDYwZDggYThjOSBjM2QwIDZkOWUgMWM2MAoJCQlk
OGE4IGM5YzIgMDJkNyA5YTFjIDYwZDggYTg0OSAxYzYwIGQ4YTgKCQkJY2MzNiA3NGY0IDExNDQg
YjEzYiA3NykgMCAoKSAvU3ViRmlsZURlY29kZSBmaWx0ZXIgY3Z4IGV4ZWMKCQkJfSB9CgkJCXsg
ewoJCQk8QkFCNDMxRUEwN0YyMDlFQjhDNDM0ODMxMTQ4MUQ5RDNGNzZFM0QxNTI0NjU1NTU3N0Q4
N0JDNTEwRUQ1NEUKCQkgMTE4QzM5Njk3RkE5RjZEQjU4MTI4RTYwRUI4QTEyRkEyNEQ3Q0REMkZB
OTREMjIxRkE5RUM4REEzRTVFNkExQwoJCQk0QUNFQ0M4QzJEMzlDNTRFN0M5NDYwMzFERDE1NkMz
QTZCNEEwOUFEMjlFMTg2N0E+IGVleGVjCgkJCX0gfQoJCWlmZWxzZSBiaW5kIGRlZgp1c2VyZGlj
dCAvY2lkX2V4dGVuc2lvbnMga25vd24KZHVwIHsgY2lkX2V4dGVuc2lvbnMgL2NpZF9VcGRhdGVE
QiBrbm93biBhbmQgfSBpZgoJIHsKCSBjaWRfZXh0ZW5zaW9ucwoJIGJlZ2luCgkgL2NpZF9HZXRD
SURTeXN0ZW1JbmZvCgkJIHsKCQkgMSBpbmRleCB0eXBlIC9zdHJpbmd0eXBlIGVxCgkJCSB7IGV4
Y2ggY3ZuIGV4Y2ggfQoJCSBpZgoJCSBjaWRfZXh0ZW5zaW9ucwoJCQkgYmVnaW4KCQkJIGR1cCBs
b2FkIDIgaW5kZXgga25vd24KCQkJCSB7CgkJCQkgMiBjb3B5CgkJCQkgY2lkX0dldFN0YXR1c0lu
Zm8KCQkJCSBkdXAgbnVsbCBuZQoJCQkJCSB7CgkJCQkJIDEgaW5kZXggbG9hZAoJCQkJCSAzIGlu
ZGV4IGdldAoJCQkJCSBkdXAgbnVsbCBlcQoJCQkJCQkgIHsgcG9wIHBvcCBjaWRfVXBkYXRlREIg
fQoJCQkJCQkgIHsKCQkJCQkJICBleGNoCgkJCQkJCSAgMSBpbmRleCAvQ3JlYXRlZCBnZXQgZXEK
CQkJCQkJCSAgeyBleGNoIHBvcCBleGNoIHBvcCB9CgkJCQkJCQkgIHsgcG9wIGNpZF9VcGRhdGVE
QiB9CgkJCQkJCSAgaWZlbHNlCgkJCQkJCSAgfQoJCQkJCSBpZmVsc2UKCQkJCQkgfQoJCQkJCSB7
IHBvcCBjaWRfVXBkYXRlREIgfQoJCQkJIGlmZWxzZQoJCQkJIH0KCQkJCSB7IGNpZF9VcGRhdGVE
QiB9CgkJCSBpZmVsc2UKCQkJIGVuZAoJCSB9IGJpbmQgZGVmCgkgZW5kCgkgfQppZgpjdF9MZXZl
bDI/Cgl7IGVuZCBzZXRnbG9iYWwgfQppZgoJL2N0X1VzZU5hdGl2ZUNhcGFiaWxpdHk/ICBzeXN0
ZW1kaWN0IC9jb21wb3NlZm9udCBrbm93biBkZWYKCS9jdF9NYWtlT0NGIDM1IGRpY3QgZGVmCgkv
Y3RfVmFycyAyNSBkaWN0IGRlZgoJL2N0X0dseXBoRGlyUHJvY3MgNiBkaWN0IGRlZgoJL2N0X0J1
aWxkQ2hhckRpY3QgMTUgZGljdCBkdXAKCQliZWdpbgoJCS9jaGFyY29kZSAyIHN0cmluZyBkZWYK
CQkvZHN0X3N0cmluZyAxNTAwIHN0cmluZyBkZWYKCQkvbnVsbHN0cmluZyAoKSBkZWYKCQkvdXNl
d2lkdGhzPyB0cnVlIGRlZgoJCWVuZCBkZWYKCWN0X0xldmVsMj8geyBzZXRnbG9iYWwgfSB7IHBv
cCB9IGlmZWxzZQoJY3RfR2x5cGhEaXJQcm9jcwoJCWJlZ2luCgkJL0dldEdseXBoRGlyZWN0b3J5
CgkJCXsKCQkJc3lzdGVtZGljdCAvbGFuZ3VhZ2VsZXZlbCBrbm93bgoJCQkJeyBwb3AgL0NJREZv
bnQgZmluZHJlc291cmNlIC9HbHlwaERpcmVjdG9yeSBnZXQgfQoJCQkJewoJCQkJMSBpbmRleCAv
Q0lERm9udCBmaW5kcmVzb3VyY2UgL0dseXBoRGlyZWN0b3J5CgkJCQlnZXQgZHVwIHR5cGUgL2Rp
Y3R0eXBlIGVxCgkJCQkJewoJCQkJCWR1cCBkdXAgbWF4bGVuZ3RoIGV4Y2ggbGVuZ3RoIHN1YiAy
IGluZGV4IGx0CgkJCQkJCXsKCQkJCQkJZHVwIGxlbmd0aCAyIGluZGV4IGFkZCBkaWN0IGNvcHkg
MiBpbmRleAoJCQkJCQkvQ0lERm9udCBmaW5kcmVzb3VyY2UvR2x5cGhEaXJlY3RvcnkgMiBpbmRl
eCBwdXQKCQkJCQkJfQoJCQkJCWlmCgkJCQkJfQoJCQkJaWYKCQkJCWV4Y2ggcG9wIGV4Y2ggcG9w
CgkJCQl9CgkJCWlmZWxzZQoJCQkrCgkJCX0gZGVmCgkJLysKCQkJewoJCQlzeXN0ZW1kaWN0IC9s
YW5ndWFnZWxldmVsIGtub3duCgkJCQl7CgkJCQljdXJyZW50Z2xvYmFsIGZhbHNlIHNldGdsb2Jh
bAoJCQkJMyBkaWN0IGJlZ2luCgkJCQkJL3ZtIGV4Y2ggZGVmCgkJCQl9CgkJCQl7IDEgZGljdCBi
ZWdpbiB9CgkJCWlmZWxzZQoJCQkvJCBleGNoIGRlZgoJCQlzeXN0ZW1kaWN0IC9sYW5ndWFnZWxl
dmVsIGtub3duCgkJCQl7CgkJCQl2bSBzZXRnbG9iYWwKCQkJCS9ndm0gY3VycmVudGdsb2JhbCBk
ZWYKCQkJCSQgZ2NoZWNrIHNldGdsb2JhbAoJCQkJfQoJCQlpZgoJCQk/IHsgJCBiZWdpbiB9IGlm
CgkJCX0gZGVmCgkJLz8geyAkIHR5cGUgL2RpY3R0eXBlIGVxIH0gZGVmCgkJL3wgewoJCQl1c2Vy
ZGljdCAvQWRvYmVfQ29vbFR5cGVfRGF0YSBrbm93bgoJCQkJewoJCQlBZG9iZV9Db29sVHlwZV9E
YXRhIC9BZGRXaWR0aHM/IGtub3duCgkJCQl7CgkJCQkgY3VycmVudGRpY3QgQWRvYmVfQ29vbFR5
cGVfRGF0YQoJCQkJCWJlZ2luCgkJCQkJICBiZWdpbgoJCQkJCQlBZGRXaWR0aHM/CgkJCQkJCQkJ
ewoJCQkJCQkJCUFkb2JlX0Nvb2xUeXBlX0RhdGEgL0NDIDMgaW5kZXggcHV0CgkJCQkJCQkJPyB7
IGRlZiB9IHsgJCAzIDEgcm9sbCBwdXQgfSBpZmVsc2UKCQkJCQkJCQlDQyBjaGFyY29kZSBleGNo
IDEgaW5kZXggMCAyIGluZGV4IDI1NiBpZGl2IHB1dAoJCQkJCQkJCTEgaW5kZXggZXhjaCAxIGV4
Y2ggMjU2IG1vZCBwdXQKCQkJCQkJCQlzdHJpbmd3aWR0aCAyIGFycmF5IGFzdG9yZQoJCQkJCQkJ
CWN1cnJlbnRmb250IC9XaWR0aHMgZ2V0IGV4Y2ggQ0MgZXhjaCBwdXQKCQkJCQkJCQl9CgkJCQkJ
CQkJeyA/IHsgZGVmIH0geyAkIDMgMSByb2xsIHB1dCB9IGlmZWxzZSB9CgkJCQkJCQlpZmVsc2UK
CQkJCQllbmQKCQkJCWVuZAoJCQkJfQoJCQkJeyA/IHsgZGVmIH0geyAkIDMgMSByb2xsIHB1dCB9
IGlmZWxzZSB9CWlmZWxzZQoJCQkJfQoJCQkJeyA/IHsgZGVmIH0geyAkIDMgMSByb2xsIHB1dCB9
IGlmZWxzZSB9CgkJCWlmZWxzZQoJCQl9IGRlZgoJCS8hCgkJCXsKCQkJPyB7IGVuZCB9IGlmCgkJ
CXN5c3RlbWRpY3QgL2xhbmd1YWdlbGV2ZWwga25vd24KCQkJCXsgZ3ZtIHNldGdsb2JhbCB9CgkJ
CWlmCgkJCWVuZAoJCQl9IGRlZgoJCS86IHsgc3RyaW5nIGN1cnJlbnRmaWxlIGV4Y2ggcmVhZHN0
cmluZyBwb3AgfSBleGVjdXRlb25seSBkZWYKCQllbmQKCWN0X01ha2VPQ0YKCQliZWdpbgoJCS9j
dF9jSGV4RW5jb2RpbmcKCQlbL2MwMC9jMDEvYzAyL2MwMy9jMDQvYzA1L2MwNi9jMDcvYzA4L2Mw
OS9jMEEvYzBCL2MwQy9jMEQvYzBFL2MwRi9jMTAvYzExL2MxMgoJCSAvYzEzL2MxNC9jMTUvYzE2
L2MxNy9jMTgvYzE5L2MxQS9jMUIvYzFDL2MxRC9jMUUvYzFGL2MyMC9jMjEvYzIyL2MyMy9jMjQv
YzI1CgkJIC9jMjYvYzI3L2MyOC9jMjkvYzJBL2MyQi9jMkMvYzJEL2MyRS9jMkYvYzMwL2MzMS9j
MzIvYzMzL2MzNC9jMzUvYzM2L2MzNy9jMzgKCQkgL2MzOS9jM0EvYzNCL2MzQy9jM0QvYzNFL2Mz
Ri9jNDAvYzQxL2M0Mi9jNDMvYzQ0L2M0NS9jNDYvYzQ3L2M0OC9jNDkvYzRBL2M0QgoJCSAvYzRD
L2M0RC9jNEUvYzRGL2M1MC9jNTEvYzUyL2M1My9jNTQvYzU1L2M1Ni9jNTcvYzU4L2M1OS9jNUEv
YzVCL2M1Qy9jNUQvYzVFCgkJIC9jNUYvYzYwL2M2MS9jNjIvYzYzL2M2NC9jNjUvYzY2L2M2Ny9j
NjgvYzY5L2M2QS9jNkIvYzZDL2M2RC9jNkUvYzZGL2M3MC9jNzEKCQkgL2M3Mi9jNzMvYzc0L2M3
NS9jNzYvYzc3L2M3OC9jNzkvYzdBL2M3Qi9jN0MvYzdEL2M3RS9jN0YvYzgwL2M4MS9jODIvYzgz
L2M4NAoJCSAvYzg1L2M4Ni9jODcvYzg4L2M4OS9jOEEvYzhCL2M4Qy9jOEQvYzhFL2M4Ri9jOTAv
YzkxL2M5Mi9jOTMvYzk0L2M5NS9jOTYvYzk3CgkJIC9jOTgvYzk5L2M5QS9jOUIvYzlDL2M5RC9j
OUUvYzlGL2NBMC9jQTEvY0EyL2NBMy9jQTQvY0E1L2NBNi9jQTcvY0E4L2NBOS9jQUEKCQkgL2NB
Qi9jQUMvY0FEL2NBRS9jQUYvY0IwL2NCMS9jQjIvY0IzL2NCNC9jQjUvY0I2L2NCNy9jQjgvY0I5
L2NCQS9jQkIvY0JDL2NCRAoJCSAvY0JFL2NCRi9jQzAvY0MxL2NDMi9jQzMvY0M0L2NDNS9jQzYv
Y0M3L2NDOC9jQzkvY0NBL2NDQi9jQ0MvY0NEL2NDRS9jQ0YvY0QwCgkJIC9jRDEvY0QyL2NEMy9j
RDQvY0Q1L2NENi9jRDcvY0Q4L2NEOS9jREEvY0RCL2NEQy9jREQvY0RFL2NERi9jRTAvY0UxL2NF
Mi9jRTMKCQkgL2NFNC9jRTUvY0U2L2NFNy9jRTgvY0U5L2NFQS9jRUIvY0VDL2NFRC9jRUUvY0VG
L2NGMC9jRjEvY0YyL2NGMy9jRjQvY0Y1L2NGNgoJCSAvY0Y3L2NGOC9jRjkvY0ZBL2NGQi9jRkMv
Y0ZEL2NGRS9jRkZdIGRlZgoJCS9jdF9DSURfU1RSX1NJWkUgODAwMCBkZWYKCQkvY3RfbWtvY2ZT
dHIxMDAgMTAwIHN0cmluZyBkZWYKCQkvY3RfZGVmYXVsdEZvbnRNdHggWy4wMDEgMCAwIC4wMDEg
MCAwXSBkZWYKCQkvY3RfMTAwME10eCBbMTAwMCAwIDAgMTAwMCAwIDBdIGRlZgoJCS9jdF9yYWlz
ZSB7ZXhjaCBjdnggZXhjaCBlcnJvcmRpY3QgZXhjaCBnZXQgZXhlYyBzdG9wfSBiaW5kIGRlZgoJ
CS9jdF9yZXJhaXNlCgkJCXsgY3Z4ICRlcnJvciAvZXJyb3JuYW1lIGdldCAoRXJyb3I6ICkgcHJp
bnQgZHVwICgJCQkJCQkgICkgY3ZzIHByaW50CgkJCQkJZXJyb3JkaWN0IGV4Y2ggZ2V0IGV4ZWMg
c3RvcAoJCQl9IGJpbmQgZGVmCgkJL2N0X2N2bnNpCgkJCXsKCQkJMSBpbmRleCBhZGQgMSBzdWIg
MSBleGNoIDAgNCAxIHJvbGwKCQkJCXsKCQkJCTIgaW5kZXggZXhjaCBnZXQKCQkJCWV4Y2ggOCBi
aXRzaGlmdAoJCQkJYWRkCgkJCQl9CgkJCWZvcgoJCQlleGNoIHBvcAoJCQl9IGJpbmQgZGVmCgkJ
L2N0X0dldEludGVydmFsCgkJCXsKCQkJQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfQnVpbGRD
aGFyRGljdCBnZXQKCQkJCWJlZ2luCgkJCQkvZHN0X2luZGV4IDAgZGVmCgkJCQlkdXAgZHN0X3N0
cmluZyBsZW5ndGggZ3QKCQkJCQl7IGR1cCBzdHJpbmcgL2RzdF9zdHJpbmcgZXhjaCBkZWYgfQoJ
CQkJaWYKCQkJCTEgaW5kZXggY3RfQ0lEX1NUUl9TSVpFIGlkaXYKCQkJCS9hcnJheUluZGV4IGV4
Y2ggZGVmCgkJCQkyIGluZGV4IGFycmF5SW5kZXggIGdldAoJCQkJMiBpbmRleAoJCQkJYXJyYXlJ
bmRleCBjdF9DSURfU1RSX1NJWkUgbXVsCgkJCQlzdWIKCQkJCQl7CgkJCQkJZHVwIDMgaW5kZXgg
YWRkIDIgaW5kZXggbGVuZ3RoIGxlCgkJCQkJCXsKCQkJCQkJMiBpbmRleCBnZXRpbnRlcnZhbAoJ
CQkJCQlkc3Rfc3RyaW5nICBkc3RfaW5kZXggMiBpbmRleCBwdXRpbnRlcnZhbAoJCQkJCQlsZW5n
dGggZHN0X2luZGV4IGFkZCAvZHN0X2luZGV4IGV4Y2ggZGVmCgkJCQkJCWV4aXQKCQkJCQkJfQoJ
CQkJCQl7CgkJCQkJCTEgaW5kZXggbGVuZ3RoIDEgaW5kZXggc3ViCgkJCQkJCWR1cCA0IDEgcm9s
bAoJCQkJCQlnZXRpbnRlcnZhbAoJCQkJCQlkc3Rfc3RyaW5nICBkc3RfaW5kZXggMiBpbmRleCBw
dXRpbnRlcnZhbAoJCQkJCQlwb3AgZHVwIGRzdF9pbmRleCBhZGQgL2RzdF9pbmRleCBleGNoIGRl
ZgoJCQkJCQlzdWIKCQkJCQkJL2FycmF5SW5kZXggYXJyYXlJbmRleCAxIGFkZCBkZWYKCQkJCQkJ
MiBpbmRleCBkdXAgbGVuZ3RoIGFycmF5SW5kZXggZ3QKCQkJCQkJCSAgeyBhcnJheUluZGV4IGdl
dCB9CgkJCQkJCQkgIHsKCQkJCQkJCSAgcG9wCgkJCQkJCQkgIGV4aXQKCQkJCQkJCSAgfQoJCQkJ
CQlpZmVsc2UKCQkJCQkJMAoJCQkJCQl9CgkJCQkJaWZlbHNlCgkJCQkJfQoJCQkJbG9vcAoJCQkJ
cG9wIHBvcCBwb3AKCQkJCWRzdF9zdHJpbmcgMCBkc3RfaW5kZXggZ2V0aW50ZXJ2YWwKCQkJCWVu
ZAoJCQl9IGJpbmQgZGVmCgkJY3RfTGV2ZWwyPwoJCQl7CgkJCS9jdF9yZXNvdXJjZXN0YXR1cwoJ
CQljdXJyZW50Z2xvYmFsIG1hcmsgdHJ1ZSBzZXRnbG9iYWwKCQkJCXsgL3Vua25vd25pbnN0YW5j
ZW5hbWUgL0NhdGVnb3J5IHJlc291cmNlc3RhdHVzIH0KCQkJc3RvcHBlZAoJCQkJeyBjbGVhcnRv
bWFyayBzZXRnbG9iYWwgdHJ1ZSB9CgkJCQl7IGNsZWFydG9tYXJrIGN1cnJlbnRnbG9iYWwgbm90
IGV4Y2ggc2V0Z2xvYmFsIH0KCQkJaWZlbHNlCgkJCQl7CgkJCQkJewoJCQkJCW1hcmsgMyAxIHJv
bGwgL0NhdGVnb3J5IGZpbmRyZXNvdXJjZQoJCQkJCQliZWdpbgoJCQkJCQljdF9WYXJzIC92bSBj
dXJyZW50Z2xvYmFsIHB1dAoJCQkJCQkoe1Jlc291cmNlU3RhdHVzfSBzdG9wcGVkKSAwICgpIC9T
dWJGaWxlRGVjb2RlIGZpbHRlciBjdnggZXhlYwoJCQkJCQkJeyBjbGVhcnRvbWFyayBmYWxzZSB9
CgkJCQkJCQl7IHsgMyAyIHJvbGwgcG9wIHRydWUgfSB7IGNsZWFydG9tYXJrIGZhbHNlIH0gaWZl
bHNlIH0KCQkJCQkJaWZlbHNlCgkJCQkJCWN0X1ZhcnMgL3ZtIGdldCBzZXRnbG9iYWwKCQkJCQkJ
ZW5kCgkJCQkJfQoJCQkJfQoJCQkJeyB7IHJlc291cmNlc3RhdHVzIH0gfQoJCQlpZmVsc2UgYmlu
ZCBkZWYKCQkJL0NJREZvbnQgL0NhdGVnb3J5IGN0X3Jlc291cmNlc3RhdHVzCgkJCQl7IHBvcCBw
b3AgfQoJCQkJewoJCQkJY3VycmVudGdsb2JhbCAgdHJ1ZSBzZXRnbG9iYWwKCQkJCS9HZW5lcmlj
IC9DYXRlZ29yeSBmaW5kcmVzb3VyY2UKCQkJCWR1cCBsZW5ndGggZGljdCBjb3B5CgkJCQlkdXAg
L0luc3RhbmNlVHlwZSAvZGljdHR5cGUgcHV0CgkJCQkvQ0lERm9udCBleGNoIC9DYXRlZ29yeSBk
ZWZpbmVyZXNvdXJjZSBwb3AKCQkJCXNldGdsb2JhbAoJCQkJfQoJCQlpZmVsc2UKCQkJY3RfVXNl
TmF0aXZlQ2FwYWJpbGl0eT8KCQkJCXsKCQkJCS9DSURJbml0IC9Qcm9jU2V0IGZpbmRyZXNvdXJj
ZSBiZWdpbgoJCQkJMTIgZGljdCBiZWdpbgoJCQkJYmVnaW5jbWFwCgkJCQkvQ0lEU3lzdGVtSW5m
byAzIGRpY3QgZHVwIGJlZ2luCgkJCQkgIC9SZWdpc3RyeSAoQWRvYmUpIGRlZgoJCQkJICAvT3Jk
ZXJpbmcgKElkZW50aXR5KSBkZWYKCQkJCSAgL1N1cHBsZW1lbnQgMCBkZWYKCQkJCWVuZCBkZWYK
CQkJCS9DTWFwTmFtZSAvSWRlbnRpdHktSCBkZWYKCQkJCS9DTWFwVmVyc2lvbiAxLjAwMCBkZWYK
CQkJCS9DTWFwVHlwZSAxIGRlZgoJCQkJMSBiZWdpbmNvZGVzcGFjZXJhbmdlCgkJCQk8MDAwMD4g
PEZGRkY+CgkJCQllbmRjb2Rlc3BhY2VyYW5nZQoJCQkJMSBiZWdpbmNpZHJhbmdlCgkJCQk8MDAw
MD4gPEZGRkY+IDAKCQkJCWVuZGNpZHJhbmdlCgkJCQllbmRjbWFwCgkJCQlDTWFwTmFtZSBjdXJy
ZW50ZGljdCAvQ01hcCBkZWZpbmVyZXNvdXJjZSBwb3AKCQkJCWVuZAoJCQkJZW5kCgkJCQl9CgkJ
CWlmCgkJCX0KCQkJewoJCQkvY3RfQ2F0ZWdvcnkgMiBkaWN0IGJlZ2luCgkJCS9DSURGb250ICAx
MCBkaWN0IGRlZgoJCQkvUHJvY1NldAkyIGRpY3QgZGVmCgkJCWN1cnJlbnRkaWN0CgkJCWVuZAoJ
CQlkZWYKCQkJL2RlZmluZXJlc291cmNlCgkJCQl7CgkJCQljdF9DYXRlZ29yeSAxIGluZGV4IDIg
Y29weSBrbm93bgoJCQkJCXsKCQkJCQlnZXQKCQkJCQlkdXAgZHVwIG1heGxlbmd0aCBleGNoIGxl
bmd0aCBlcQoJCQkJCQl7CgkJCQkJCWR1cCBsZW5ndGggMTAgYWRkIGRpY3QgY29weQoJCQkJCQlj
dF9DYXRlZ29yeSAyIGluZGV4IDIgaW5kZXggcHV0CgkJCQkJCX0KCQkJCQlpZgoJCQkJCTMgaW5k
ZXggMyBpbmRleCBwdXQKCQkJCQlwb3AgZXhjaCBwb3AKCQkJCQl9CgkJCQkJeyBwb3AgcG9wIC9k
ZWZpbmVyZXNvdXJjZSAvdW5kZWZpbmVkIGN0X3JhaXNlIH0KCQkJCWlmZWxzZQoJCQkJfSBiaW5k
IGRlZgoJCQkvZmluZHJlc291cmNlCgkJCQl7CgkJCQljdF9DYXRlZ29yeSAxIGluZGV4IDIgY29w
eSBrbm93bgoJCQkJCXsKCQkJCQlnZXQKCQkJCQkyIGluZGV4IDIgY29weSBrbm93bgoJCQkJCQl7
IGdldCAzIDEgcm9sbCBwb3AgcG9wfQoJCQkJCQl7IHBvcCBwb3AgL2ZpbmRyZXNvdXJjZSAvdW5k
ZWZpbmVkcmVzb3VyY2UgY3RfcmFpc2UgfQoJCQkJCWlmZWxzZQoJCQkJCX0KCQkJCQl7IHBvcCBw
b3AgL2ZpbmRyZXNvdXJjZSAvdW5kZWZpbmVkIGN0X3JhaXNlIH0KCQkJCWlmZWxzZQoJCQkJfSBi
aW5kIGRlZgoJCQkvcmVzb3VyY2VzdGF0dXMKCQkJCXsKCQkJCWN0X0NhdGVnb3J5IDEgaW5kZXgg
MiBjb3B5IGtub3duCgkJCQkJewoJCQkJCWdldAoJCQkJCTIgaW5kZXgga25vd24KCQkJCQlleGNo
IHBvcCBleGNoIHBvcAoJCQkJCQl7CgkJCQkJCTAgLTEgdHJ1ZQoJCQkJCQl9CgkJCQkJCXsKCQkJ
CQkJZmFsc2UKCQkJCQkJfQoJCQkJCWlmZWxzZQoJCQkJCX0KCQkJCQl7IHBvcCBwb3AgL2ZpbmRy
ZXNvdXJjZSAvdW5kZWZpbmVkIGN0X3JhaXNlIH0KCQkJCWlmZWxzZQoJCQkJfSBiaW5kIGRlZgoJ
CQkvY3RfcmVzb3VyY2VzdGF0dXMgL3Jlc291cmNlc3RhdHVzIGxvYWQgZGVmCgkJCX0KCQlpZmVs
c2UKCQkvY3RfQ0lESW5pdCAyIGRpY3QKCQkJYmVnaW4KCQkJL2N0X2NpZGZvbnRfc3RyZWFtX2lu
aXQKCQkJCXsKCQkJCQl7CgkJCQkJZHVwIChCaW5hcnkpIGVxCgkJCQkJCXsKCQkJCQkJcG9wCgkJ
CQkJCW51bGwKCQkJCQkJY3VycmVudGZpbGUKCQkJCQkJY3RfTGV2ZWwyPwoJCQkJCQkJewoJCQkJ
CQkJCXsgY2lkX0JZVEVfQ09VTlQgKCkgL1N1YkZpbGVEZWNvZGUgZmlsdGVyIH0KCQkJCQkJCXN0
b3BwZWQKCQkJCQkJCQl7IHBvcCBwb3AgcG9wIH0KCQkJCQkJCWlmCgkJCQkJCQl9CgkJCQkJCWlm
CgkJCQkJCS9yZWFkc3RyaW5nIGxvYWQKCQkJCQkJZXhpdAoJCQkJCQl9CgkJCQkJaWYKCQkJCQlk
dXAgKEhleCkgZXEKCQkJCQkJewoJCQkJCQlwb3AKCQkJCQkJY3VycmVudGZpbGUKCQkJCQkJY3Rf
TGV2ZWwyPwoJCQkJCQkJewoJCQkJCQkJCXsgbnVsbCBleGNoIC9BU0NJSUhleERlY29kZSBmaWx0
ZXIgL3JlYWRzdHJpbmcgfQoJCQkJCQkJc3RvcHBlZAoJCQkJCQkJCXsgcG9wIGV4Y2ggcG9wICg+
KSBleGNoIC9yZWFkaGV4c3RyaW5nIH0KCQkJCQkJCWlmCgkJCQkJCQl9CgkJCQkJCQl7ICg+KSBl
eGNoIC9yZWFkaGV4c3RyaW5nIH0KCQkJCQkJaWZlbHNlCgkJCQkJCWxvYWQKCQkJCQkJZXhpdAoJ
CQkJCQl9CgkJCQkJaWYKCQkJCQkvU3RhcnREYXRhIC90eXBlY2hlY2sgY3RfcmFpc2UKCQkJCQl9
CgkJCQlsb29wCgkJCQljaWRfQllURV9DT1VOVCBjdF9DSURfU1RSX1NJWkUgbGUKCQkJCQl7CgkJ
CQkJMiBjb3B5IGNpZF9CWVRFX0NPVU5UIHN0cmluZyBleGNoIGV4ZWMKCQkJCQlwb3AKCQkJCQkx
IGFycmF5IGR1cAoJCQkJCTMgLTEgcm9sbAoJCQkJCTAgZXhjaCBwdXQKCQkJCQl9CgkJCQkJewoJ
CQkJCWNpZF9CWVRFX0NPVU5UIGN0X0NJRF9TVFJfU0laRSBkaXYgY2VpbGluZyBjdmkKCQkJCQlk
dXAgYXJyYXkgZXhjaCAyIHN1YiAwIGV4Y2ggMSBleGNoCgkJCQkJCXsKCQkJCQkJMiBjb3B5CgkJ
CQkJCTUgaW5kZXgKCQkJCQkJY3RfQ0lEX1NUUl9TSVpFCgkJCQkJCXN0cmluZwoJCQkJCQk2IGlu
ZGV4IGV4ZWMKCQkJCQkJcG9wCgkJCQkJCXB1dAoJCQkJCQlwb3AKCQkJCQkJfQoJCQkJCWZvcgoJ
CQkJCTIgaW5kZXgKCQkJCQljaWRfQllURV9DT1VOVCBjdF9DSURfU1RSX1NJWkUgbW9kIHN0cmlu
ZwoJCQkJCTMgaW5kZXggZXhlYwoJCQkJCXBvcAoJCQkJCTEgaW5kZXggZXhjaAoJCQkJCTEgaW5k
ZXggbGVuZ3RoIDEgc3ViCgkJCQkJZXhjaCBwdXQKCQkJCQl9CgkJCQlpZmVsc2UKCQkJCWNpZF9D
SURGT05UIGV4Y2ggL0dseXBoRGF0YSBleGNoIHB1dAoJCQkJMiBpbmRleCBudWxsIGVxCgkJCQkJ
ewoJCQkJCXBvcCBwb3AgcG9wCgkJCQkJfQoJCQkJCXsKCQkJCQlwb3AgL3JlYWRzdHJpbmcgbG9h
ZAoJCQkJCTEgc3RyaW5nIGV4Y2gKCQkJCQkJewoJCQkJCQkzIGNvcHkgZXhlYwoJCQkJCQlwb3AK
CQkJCQkJZHVwIGxlbmd0aCAwIGVxCgkJCQkJCQl7CgkJCQkJCQlwb3AgcG9wIHBvcCBwb3AgcG9w
CgkJCQkJCQl0cnVlIGV4aXQKCQkJCQkJCX0KCQkJCQkJaWYKCQkJCQkJNCBpbmRleAoJCQkJCQll
cQoJCQkJCQkJewoJCQkJCQkJcG9wIHBvcCBwb3AgcG9wCgkJCQkJCQlmYWxzZSBleGl0CgkJCQkJ
CQl9CgkJCQkJCWlmCgkJCQkJCX0KCQkJCQlsb29wCgkJCQkJcG9wCgkJCQkJfQoJCQkJaWZlbHNl
CgkJCQl9IGJpbmQgZGVmCgkJCS9TdGFydERhdGEKCQkJCXsKCQkJCW1hcmsKCQkJCQl7CgkJCQkJ
Y3VycmVudGRpY3QKCQkJCQlkdXAgL0ZEQXJyYXkgZ2V0IDAgZ2V0IC9Gb250TWF0cml4IGdldAoJ
CQkJCTAgZ2V0IDAuMDAxIGVxCgkJCQkJCXsKCQkJCQkJZHVwIC9DRGV2UHJvYyBrbm93biBub3QK
CQkJCQkJCXsKCQkJCQkJCS9DRGV2UHJvYyAxMTgzNjE1ODY5IGludGVybmFsZGljdCAvc3RkQ0Rl
dlByb2MgMiBjb3B5IGtub3duCgkJCQkJCQkJeyBnZXQgfQoJCQkJCQkJCXsKCQkJCQkJCQlwb3Ag
cG9wCgkJCQkJCQkJeyBwb3AgcG9wIHBvcCBwb3AgcG9wIDAgLTEwMDAgNyBpbmRleCAyIGRpdiA4
ODAgfQoJCQkJCQkJCX0KCQkJCQkJCWlmZWxzZQoJCQkJCQkJZGVmCgkJCQkJCQl9CgkJCQkJCWlm
CgkJCQkJCX0KCQkJCQkJewoJCQkJCQkgL0NEZXZQcm9jCgkJCQkJCQkgewoJCQkJCQkJIHBvcCBw
b3AgcG9wIHBvcCBwb3AKCQkJCQkJCSAwCgkJCQkJCQkgMSBjaWRfdGVtcCAvY2lkX0NJREZPTlQg
Z2V0CgkJCQkJCQkgL0ZEQXJyYXkgZ2V0IDAgZ2V0CgkJCQkJCQkgL0ZvbnRNYXRyaXggZ2V0IDAg
Z2V0IGRpdgoJCQkJCQkJIDcgaW5kZXggMiBkaXYKCQkJCQkJCSAxIGluZGV4IDAuODggbXVsCgkJ
CQkJCQkgfSBkZWYKCQkJCQkJfQoJCQkJCWlmZWxzZQoJCQkJCS9jaWRfdGVtcCAxNSBkaWN0IGRl
ZgoJCQkJCWNpZF90ZW1wCgkJCQkJCWJlZ2luCgkJCQkJCS9jaWRfQ0lERk9OVCBleGNoIGRlZgoJ
CQkJCQkzIGNvcHkgcG9wCgkJCQkJCWR1cCAvY2lkX0JZVEVfQ09VTlQgZXhjaCBkZWYgMCBndAoJ
CQkJCQkJewoJCQkJCQkJY3RfY2lkZm9udF9zdHJlYW1faW5pdAoJCQkJCQkJRkRBcnJheQoJCQkJ
CQkJCXsKCQkJCQkJCQkvUHJpdmF0ZSBnZXQKCQkJCQkJCQlkdXAgL1N1YnJNYXBPZmZzZXQga25v
d24KCQkJCQkJCQkJewoJCQkJCQkJCQliZWdpbgoJCQkJCQkJCQkvU3VicnMgU3VickNvdW50IGFy
cmF5IGRlZgoJCQkJCQkJCQlTdWJycwoJCQkJCQkJCQlTdWJyTWFwT2Zmc2V0CgkJCQkJCQkJCVN1
YnJDb3VudAoJCQkJCQkJCQlTREJ5dGVzCgkJCQkJCQkJCWN0X0xldmVsMj8KCQkJCQkJCQkJCXsK
CQkJCQkJCQkJCWN1cnJlbnRkaWN0IGR1cCAvU3Vick1hcE9mZnNldCB1bmRlZgoJCQkJCQkJCQkJ
ZHVwIC9TdWJyQ291bnQgdW5kZWYKCQkJCQkJCQkJCS9TREJ5dGVzIHVuZGVmCgkJCQkJCQkJCQl9
CgkJCQkJCQkJCWlmCgkJCQkJCQkJCWVuZAoJCQkJCQkJCQkvY2lkX1NEX0JZVEVTIGV4Y2ggZGVm
CgkJCQkJCQkJCS9jaWRfU1VCUl9DT1VOVCBleGNoIGRlZgoJCQkJCQkJCQkvY2lkX1NVQlJfTUFQ
X09GRlNFVCBleGNoIGRlZgoJCQkJCQkJCQkvY2lkX1NVQlJTIGV4Y2ggZGVmCgkJCQkJCQkJCWNp
ZF9TVUJSX0NPVU5UIDAgZ3QKCQkJCQkJCQkJCXsKCQkJCQkJCQkJCUdseXBoRGF0YSBjaWRfU1VC
Ul9NQVBfT0ZGU0VUIGNpZF9TRF9CWVRFUyBjdF9HZXRJbnRlcnZhbAoJCQkJCQkJCQkJMCBjaWRf
U0RfQllURVMgY3RfY3Zuc2kKCQkJCQkJCQkJCTAgMSBjaWRfU1VCUl9DT1VOVCAxIHN1YgoJCQkJ
CQkJCQkJCXsKCQkJCQkJCQkJCQlleGNoIDEgaW5kZXgKCQkJCQkJCQkJCQkxIGFkZAoJCQkJCQkJ
CQkJCWNpZF9TRF9CWVRFUyBtdWwgY2lkX1NVQlJfTUFQX09GRlNFVCBhZGQKCQkJCQkJCQkJCQlH
bHlwaERhdGEgZXhjaCBjaWRfU0RfQllURVMgY3RfR2V0SW50ZXJ2YWwKCQkJCQkJCQkJCQkwIGNp
ZF9TRF9CWVRFUyBjdF9jdm5zaQoJCQkJCQkJCQkJCWNpZF9TVUJSUyA0IDIgcm9sbAoJCQkJCQkJ
CQkJCUdseXBoRGF0YSBleGNoCgkJCQkJCQkJCQkJNCBpbmRleAoJCQkJCQkJCQkJCTEgaW5kZXgK
CQkJCQkJCQkJCQlzdWIKCQkJCQkJCQkJCQljdF9HZXRJbnRlcnZhbAoJCQkJCQkJCQkJCWR1cCBs
ZW5ndGggc3RyaW5nIGNvcHkgcHV0CgkJCQkJCQkJCQkJfQoJCQkJCQkJCQkJZm9yCgkJCQkJCQkJ
CQlwb3AKCQkJCQkJCQkJCX0KCQkJCQkJCQkJaWYKCQkJCQkJCQkJfQoJCQkJCQkJCQl7IHBvcCB9
CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJfQoJCQkJCQkJZm9yYWxsCgkJCQkJCQl9CgkJCQkJCWlm
CgkJCQkJCWNsZWFydG9tYXJrIHBvcCBwb3AKCQkJCQkJZW5kCgkJCQkJQ0lERm9udE5hbWUgY3Vy
cmVudGRpY3QgL0NJREZvbnQgZGVmaW5lcmVzb3VyY2UgcG9wCgkJCQkJZW5kIGVuZAoJCQkJCX0K
CQkJCXN0b3BwZWQKCQkJCQl7IGNsZWFydG9tYXJrIC9TdGFydERhdGEgY3RfcmVyYWlzZSB9CgkJ
CQlpZgoJCQkJfSBiaW5kIGRlZgoJCQljdXJyZW50ZGljdAoJCQllbmQgZGVmCgkJL2N0X3NhdmVD
SURJbml0CgkJCXsKCQkJL0NJREluaXQgL1Byb2NTZXQgY3RfcmVzb3VyY2VzdGF0dXMKCQkJCXsg
dHJ1ZSB9CgkJCQl7IC9DSURJbml0QyAvUHJvY1NldCBjdF9yZXNvdXJjZXN0YXR1cyB9CgkJCWlm
ZWxzZQoJCQkJewoJCQkJcG9wIHBvcAoJCQkJL0NJREluaXQgL1Byb2NTZXQgZmluZHJlc291cmNl
CgkJCQljdF9Vc2VOYXRpdmVDYXBhYmlsaXR5PwoJCQkJCXsgcG9wIG51bGwgfQoJCQkJCXsgL0NJ
REluaXQgY3RfQ0lESW5pdCAvUHJvY1NldCBkZWZpbmVyZXNvdXJjZSBwb3AgfQoJCQkJaWZlbHNl
CgkJCQl9CgkJCQl7IC9DSURJbml0IGN0X0NJREluaXQgL1Byb2NTZXQgZGVmaW5lcmVzb3VyY2Ug
cG9wIG51bGwgfQoJCQlpZmVsc2UKCQkJY3RfVmFycyBleGNoIC9jdF9vbGRDSURJbml0IGV4Y2gg
cHV0CgkJCX0gYmluZCBkZWYKCQkvY3RfcmVzdG9yZUNJREluaXQKCQkJewoJCQljdF9WYXJzIC9j
dF9vbGRDSURJbml0IGdldCBkdXAgbnVsbCBuZQoJCQkJeyAvQ0lESW5pdCBleGNoIC9Qcm9jU2V0
IGRlZmluZXJlc291cmNlIHBvcCB9CgkJCQl7IHBvcCB9CgkJCWlmZWxzZQoJCQl9IGJpbmQgZGVm
CgkJL2N0X0J1aWxkQ2hhclNldFVwCgkJCXsKCQkJMSBpbmRleAoJCQkJYmVnaW4KCQkJCUNJREZv
bnQKCQkJCQliZWdpbgoJCQkJCUFkb2JlX0Nvb2xUeXBlX1V0aWxpdHkgL2N0X0J1aWxkQ2hhckRp
Y3QgZ2V0CgkJCQkJCWJlZ2luCgkJCQkJCS9jdF9kZkNoYXJDb2RlIGV4Y2ggZGVmCgkJCQkJCS9j
dF9kZkRpY3QgZXhjaCBkZWYKCQkJCQkJQ0lERmlyc3RCeXRlIGN0X2RmQ2hhckNvZGUgYWRkCgkJ
CQkJCWR1cCBDSURDb3VudCBnZQoJCQkJCQkJeyBwb3AgMCB9CgkJCQkJCWlmCgkJCQkJCS9jaWQg
ZXhjaCBkZWYKCQkJCQkJCXsKCQkJCQkJCUdseXBoRGlyZWN0b3J5IGNpZCAyIGNvcHkga25vd24K
CQkJCQkJCQl7IGdldCB9CgkJCQkJCQkJeyBwb3AgcG9wIG51bGxzdHJpbmcgfQoJCQkJCQkJaWZl
bHNlCgkJCQkJCQlkdXAgbGVuZ3RoIEZEQnl0ZXMgc3ViIDAgZ3QKCQkJCQkJCQl7CgkJCQkJCQkJ
ZHVwCgkJCQkJCQkJRkRCeXRlcyAwIG5lCgkJCQkJCQkJCXsgMCBGREJ5dGVzIGN0X2N2bnNpIH0K
CQkJCQkJCQkJeyBwb3AgMCB9CgkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJL2ZkSW5kZXggZXhjaCBk
ZWYKCQkJCQkJCQlkdXAgbGVuZ3RoIEZEQnl0ZXMgc3ViIEZEQnl0ZXMgZXhjaCBnZXRpbnRlcnZh
bAoJCQkJCQkJCS9jaGFyc3RyaW5nIGV4Y2ggZGVmCgkJCQkJCQkJZXhpdAoJCQkJCQkJCX0KCQkJ
CQkJCQl7CgkJCQkJCQkJcG9wCgkJCQkJCQkJY2lkIDAgZXEKCQkJCQkJCQkJeyAvY2hhcnN0cmlu
ZyBudWxsc3RyaW5nIGRlZiBleGl0IH0KCQkJCQkJCQlpZgoJCQkJCQkJCS9jaWQgMCBkZWYKCQkJ
CQkJCQl9CgkJCQkJCQlpZmVsc2UKCQkJCQkJCX0KCQkJCQkJbG9vcAoJCQl9IGRlZgoJCS9jdF9T
ZXRDYWNoZURldmljZQoJCQl7CgkJCTAgMCBtb3ZldG8KCQkJZHVwIHN0cmluZ3dpZHRoCgkJCTMg
LTEgcm9sbAoJCQl0cnVlIGNoYXJwYXRoCgkJCXBhdGhiYm94CgkJCTAgLTEwMDAKCQkJNyBpbmRl
eCAyIGRpdiA4ODAKCQkJc2V0Y2FjaGVkZXZpY2UyCgkJCTAgMCBtb3ZldG8KCQkJfSBkZWYKCQkv
Y3RfQ2xvbmVTZXRDYWNoZVByb2MKCQkJewoJCQkxIGVxCgkJCQl7CgkJCQlzdHJpbmd3aWR0aAoJ
CQkJcG9wIC0yIGRpdiAtODgwCgkJCQkwIC0xMDAwIHNldGNoYXJ3aWR0aAoJCQkJbW92ZXRvCgkJ
CQl9CgkJCQl7CgkJCQl1c2V3aWR0aHM/CgkJCQkJewoJCQkJCWN1cnJlbnRmb250IC9XaWR0aHMg
Z2V0IGNpZAoJCQkJCTIgY29weSBrbm93bgoJCQkJCQl7IGdldCBleGNoIHBvcCBhbG9hZCBwb3Ag
fQoJCQkJCQl7IHBvcCBwb3Agc3RyaW5nd2lkdGggfQoJCQkJCWlmZWxzZQoJCQkJCX0KCQkJCQl7
IHN0cmluZ3dpZHRoIH0KCQkJCWlmZWxzZQoJCQkJc2V0Y2hhcndpZHRoCgkJCQkwIDAgbW92ZXRv
CgkJCQl9CgkJCWlmZWxzZQoJCQl9IGRlZgoJCS9jdF9UeXBlM1Nob3dDaGFyU3RyaW5nCgkJCXsK
CQkJY3RfRkREaWN0IGZkSW5kZXggMiBjb3B5IGtub3duCgkJCQl7IGdldCB9CgkJCQl7CgkJCQlj
dXJyZW50Z2xvYmFsIDMgMSByb2xsCgkJCQkxIGluZGV4IGdjaGVjayBzZXRnbG9iYWwKCQkJCWN0
X1R5cGUxRm9udFRlbXBsYXRlIGR1cCBtYXhsZW5ndGggZGljdCBjb3B5CgkJCQkJYmVnaW4KCQkJ
CQlGREFycmF5IGZkSW5kZXggZ2V0CgkJCQkJZHVwIC9Gb250TWF0cml4IDIgY29weSBrbm93bgoJ
CQkJCQl7IGdldCB9CgkJCQkJCXsgcG9wIHBvcCBjdF9kZWZhdWx0Rm9udE10eCB9CgkJCQkJaWZl
bHNlCgkJCQkJL0ZvbnRNYXRyaXggZXhjaCBkdXAgbGVuZ3RoIGFycmF5IGNvcHkgZGVmCgkJCQkJ
L1ByaXZhdGUgZ2V0CgkJCQkJL1ByaXZhdGUgZXhjaCBkZWYKCQkJCQkvV2lkdGhzIHJvb3Rmb250
IC9XaWR0aHMgZ2V0IGRlZgoJCQkJCS9DaGFyU3RyaW5ncyAxIGRpY3QgZHVwIC8ubm90ZGVmCgkJ
CQkJCTxkODQxMjcyY2YxOGY1NGZjMTM+IGR1cCBsZW5ndGggc3RyaW5nIGNvcHkgcHV0IGRlZgoJ
CQkJCWN1cnJlbnRkaWN0CgkJCQkJZW5kCgkJCQkvY3RfVHlwZTFGb250IGV4Y2ggZGVmaW5lZm9u
dAoJCQkJZHVwIDUgMSByb2xsIHB1dAoJCQkJc2V0Z2xvYmFsCgkJCQl9CgkJCWlmZWxzZQoJCQlk
dXAgL0NoYXJTdHJpbmdzIGdldCAxIGluZGV4IC9FbmNvZGluZyBnZXQKCQkJY3RfZGZDaGFyQ29k
ZSBnZXQgY2hhcnN0cmluZyBwdXQKCQkJcm9vdGZvbnQgL1dNb2RlIDIgY29weSBrbm93bgoJCQkJ
eyBnZXQgfQoJCQkJeyBwb3AgcG9wIDAgfQoJCQlpZmVsc2UKCQkJZXhjaAoJCQkxMDAwIHNjYWxl
Zm9udCBzZXRmb250CgkJCWN0X3N0cjEgMCBjdF9kZkNoYXJDb2RlIHB1dAoJCQljdF9zdHIxIGV4
Y2ggY3RfZGZTZXRDYWNoZVByb2MKCQkJY3RfU3ludGhldGljQm9sZAoJCQkJewoJCQkJY3VycmVu
dHBvaW50CgkJCQljdF9zdHIxIHNob3cKCQkJCW5ld3BhdGgKCQkJCW1vdmV0bwoJCQkJY3Rfc3Ry
MSB0cnVlIGNoYXJwYXRoCgkJCQljdF9TdHJva2VXaWR0aCBzZXRsaW5ld2lkdGgKCQkJCXN0cm9r
ZQoJCQkJfQoJCQkJeyBjdF9zdHIxIHNob3cgfQoJCQlpZmVsc2UKCQkJfSBkZWYKCQkvY3RfVHlw
ZTRTaG93Q2hhclN0cmluZwoJCQl7CgkJCWN0X2RmRGljdCBjdF9kZkNoYXJDb2RlIGNoYXJzdHJp
bmcKCQkJRkRBcnJheSBmZEluZGV4IGdldAoJCQlkdXAgL0ZvbnRNYXRyaXggZ2V0IGR1cCBjdF9k
ZWZhdWx0Rm9udE10eCBjdF9tYXRyaXhlcSBub3QKCQkJCXsgY3RfMTAwME10eCBtYXRyaXggY29u
Y2F0bWF0cml4IGNvbmNhdCB9CgkJCQl7IHBvcCB9CgkJCWlmZWxzZQoJCQkvUHJpdmF0ZSBnZXQK
CQkJQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfTGV2ZWwyPyBnZXQgbm90CgkJCQl7CgkJCQlj
dF9kZkRpY3QgL1ByaXZhdGUKCQkJCTMgLTEgcm9sbAoJCQkJCXsgcHV0IH0KCQkJCTExODM2MTU4
NjkgaW50ZXJuYWxkaWN0IC9zdXBlcmV4ZWMgZ2V0IGV4ZWMKCQkJCX0KCQkJaWYKCQkJMTE4MzYx
NTg2OSBpbnRlcm5hbGRpY3QKCQkJQWRvYmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfTGV2ZWwyPyBn
ZXQKCQkJCXsgMSBpbmRleCB9CgkJCQl7IDMgaW5kZXggL1ByaXZhdGUgZ2V0IG1hcmsgNiAxIHJv
bGwgfQoJCQlpZmVsc2UKCQkJZHVwIC9SdW5JbnQga25vd24KCQkJCXsgL1J1bkludCBnZXQgfQoJ
CQkJeyBwb3AgL0NDUnVuIH0KCQkJaWZlbHNlCgkJCWdldCBleGVjCgkJCUFkb2JlX0Nvb2xUeXBl
X1V0aWxpdHkgL2N0X0xldmVsMj8gZ2V0IG5vdAoJCQkJeyBjbGVhcnRvbWFyayB9CgkJCWlmCgkJ
CX0gYmluZCBkZWYKCQkvY3RfQnVpbGRDaGFySW5jcmVtZW50YWwKCQkJewoJCQkJewoJCQkJQWRv
YmVfQ29vbFR5cGVfVXRpbGl0eSAvY3RfTWFrZU9DRiBnZXQgYmVnaW4KCQkJCWN0X0J1aWxkQ2hh
clNldFVwCgkJCQljdF9TaG93Q2hhclN0cmluZwoJCQkJfQoJCQlzdG9wcGVkCgkJCQl7IHN0b3Ag
fQoJCQlpZgoJCQllbmQKCQkJZW5kCgkJCWVuZAoJCQllbmQKCQkJfSBiaW5kIGRlZgoJCS9CYXNl
Rm9udE5hbWVTdHIgKEJGMDApIGRlZgoJCS9jdF9UeXBlMUZvbnRUZW1wbGF0ZSAxNCBkaWN0CgkJ
CWJlZ2luCgkJCS9Gb250VHlwZSAxIGRlZgoJCQkvRm9udE1hdHJpeCAgWzAuMDAxIDAgMCAwLjAw
MSAwIDBdIGRlZgoJCQkvRm9udEJCb3ggIFstMjUwIC0yNTAgMTI1MCAxMjUwXSBkZWYKCQkJL0Vu
Y29kaW5nIGN0X2NIZXhFbmNvZGluZyBkZWYKCQkJL1BhaW50VHlwZSAwIGRlZgoJCQljdXJyZW50
ZGljdAoJCQllbmQgZGVmCgkJL0Jhc2VGb250VGVtcGxhdGUgMTEgZGljdAoJCQliZWdpbgoJCQkv
Rm9udE1hdHJpeCAgWzAuMDAxIDAgMCAwLjAwMSAwIDBdIGRlZgoJCQkvRm9udEJCb3ggIFstMjUw
IC0yNTAgMTI1MCAxMjUwXSBkZWYKCQkJL0VuY29kaW5nIGN0X2NIZXhFbmNvZGluZyBkZWYKCQkJ
L0J1aWxkQ2hhciAvY3RfQnVpbGRDaGFySW5jcmVtZW50YWwgbG9hZCBkZWYKCQkJY3RfQ2xvbmU/
CgkJCQl7CgkJCQkvRm9udFR5cGUgMyBkZWYKCQkJCS9jdF9TaG93Q2hhclN0cmluZyAvY3RfVHlw
ZTNTaG93Q2hhclN0cmluZyBsb2FkIGRlZgoJCQkJL2N0X2RmU2V0Q2FjaGVQcm9jIC9jdF9DbG9u
ZVNldENhY2hlUHJvYyBsb2FkIGRlZgoJCQkJL2N0X1N5bnRoZXRpY0JvbGQgZmFsc2UgZGVmCgkJ
CQkvY3RfU3Ryb2tlV2lkdGggMSBkZWYKCQkJCX0KCQkJCXsKCQkJCS9Gb250VHlwZSA0IGRlZgoJ
CQkJL1ByaXZhdGUgMSBkaWN0IGR1cCAvbGVuSVYgNCBwdXQgZGVmCgkJCQkvQ2hhclN0cmluZ3Mg
MSBkaWN0IGR1cCAvLm5vdGRlZiA8ZDg0MTI3MmNmMThmNTRmYzEzPiBwdXQgZGVmCgkJCQkvUGFp
bnRUeXBlIDAgZGVmCgkJCQkvY3RfU2hvd0NoYXJTdHJpbmcgL2N0X1R5cGU0U2hvd0NoYXJTdHJp
bmcgbG9hZCBkZWYKCQkJCX0KCQkJaWZlbHNlCgkJCS9jdF9zdHIxIDEgc3RyaW5nIGRlZgoJCQlj
dXJyZW50ZGljdAoJCQllbmQgZGVmCgkJL0Jhc2VGb250RGljdFNpemUgQmFzZUZvbnRUZW1wbGF0
ZSBsZW5ndGggNSBhZGQgZGVmCgkJL2N0X21hdHJpeGVxCgkJCXsKCQkJdHJ1ZSAwIDEgNQoJCQkJ
ewoJCQkJZHVwIDQgaW5kZXggZXhjaCBnZXQgZXhjaCAzIGluZGV4IGV4Y2ggZ2V0IGVxIGFuZAoJ
CQkJZHVwIG5vdAoJCQkJCXsgZXhpdCB9CgkJCQlpZgoJCQkJfQoJCQlmb3IKCQkJZXhjaCBwb3Ag
ZXhjaCBwb3AKCQkJfSBiaW5kIGRlZgoJCS9jdF9tYWtlb2NmCgkJCXsKCQkJMTUgZGljdAoJCQkJ
YmVnaW4KCQkJCWV4Y2ggL1dNb2RlIGV4Y2ggZGVmCgkJCQlleGNoIC9Gb250TmFtZSBleGNoIGRl
ZgoJCQkJL0ZvbnRUeXBlIDAgZGVmCgkJCQkvRk1hcFR5cGUgMiBkZWYKCQkJZHVwIC9Gb250TWF0
cml4IGtub3duCgkJCQl7IGR1cCAvRm9udE1hdHJpeCBnZXQgL0ZvbnRNYXRyaXggZXhjaCBkZWYg
fQoJCQkJeyAvRm9udE1hdHJpeCBtYXRyaXggZGVmIH0KCQkJaWZlbHNlCgkJCQkvYmZDb3VudCAx
IGluZGV4IC9DSURDb3VudCBnZXQgMjU2IGlkaXYgMSBhZGQKCQkJCQlkdXAgMjU2IGd0IHsgcG9w
IDI1Nn0gaWYgZGVmCgkJCQkvRW5jb2RpbmcKCQkJCQkyNTYgYXJyYXkgMCAxIGJmQ291bnQgMSBz
dWIgeyAyIGNvcHkgZHVwIHB1dCBwb3AgfSBmb3IKCQkJCQliZkNvdW50IDEgMjU1IHsgMiBjb3B5
IGJmQ291bnQgcHV0IHBvcCB9IGZvcgoJCQkJCWRlZgoJCQkJL0ZEZXBWZWN0b3IgYmZDb3VudCBk
dXAgMjU2IGx0IHsgMSBhZGQgfSBpZiBhcnJheSBkZWYKCQkJCUJhc2VGb250VGVtcGxhdGUgQmFz
ZUZvbnREaWN0U2l6ZSBkaWN0IGNvcHkKCQkJCQliZWdpbgoJCQkJCS9DSURGb250IGV4Y2ggZGVm
CgkJCQkJQ0lERm9udCAvRm9udEJCb3gga25vd24KCQkJCQkJeyBDSURGb250IC9Gb250QkJveCBn
ZXQgL0ZvbnRCQm94IGV4Y2ggZGVmIH0KCQkJCQlpZgoJCQkJCUNJREZvbnQgL0NEZXZQcm9jIGtu
b3duCgkJCQkJCXsgQ0lERm9udCAvQ0RldlByb2MgZ2V0IC9DRGV2UHJvYyBleGNoIGRlZiB9CgkJ
CQkJaWYKCQkJCQljdXJyZW50ZGljdAoJCQkJCWVuZAoJCQkJQmFzZUZvbnROYW1lU3RyIDMgKDAp
IHB1dGludGVydmFsCgkJCQkwIDEgYmZDb3VudCBkdXAgMjU2IGVxIHsgMSBzdWIgfSBpZgoJCQkJ
CXsKCQkJCQlGRGVwVmVjdG9yIGV4Y2gKCQkJCQkyIGluZGV4IEJhc2VGb250RGljdFNpemUgZGlj
dCBjb3B5CgkJCQkJCWJlZ2luCgkJCQkJCWR1cCAvQ0lERmlyc3RCeXRlIGV4Y2ggMjU2IG11bCBk
ZWYKCQkJCQkJRm9udFR5cGUgMyBlcQoJCQkJCQkJeyAvY3RfRkREaWN0IDIgZGljdCBkZWYgfQoJ
CQkJCQlpZgoJCQkJCQljdXJyZW50ZGljdAoJCQkJCQllbmQKCQkJCQkxIGluZGV4ICAxNgoJCQkJ
CUJhc2VGb250TmFtZVN0ciAgMiAyIGdldGludGVydmFsIGN2cnMgcG9wCgkJCQkJQmFzZUZvbnRO
YW1lU3RyIGV4Y2ggZGVmaW5lZm9udAoJCQkJCXB1dAoJCQkJCX0KCQkJCWZvcgoJCQkJY3RfQ2xv
bmU/CgkJCQkJeyAvV2lkdGhzIDEgaW5kZXggL0NJREZvbnQgZ2V0IC9HbHlwaERpcmVjdG9yeSBn
ZXQgbGVuZ3RoIGRpY3QgZGVmIH0KCQkJCWlmCgkJCQlGb250TmFtZQoJCQkJY3VycmVudGRpY3QK
CQkJCWVuZAoJCQlkZWZpbmVmb250CgkJCWN0X0Nsb25lPwoJCQkJewoJCQkJZ3NhdmUKCQkJCWR1
cCAxMDAwIHNjYWxlZm9udCBzZXRmb250CgkJCQljdF9CdWlsZENoYXJEaWN0CgkJCQkJYmVnaW4K
CQkJCQkvdXNld2lkdGhzPyBmYWxzZSBkZWYKCQkJCQljdXJyZW50Zm9udCAvV2lkdGhzIGdldAoJ
CQkJCQliZWdpbgoJCQkJCQlleGNoIC9DSURGb250IGdldCAvR2x5cGhEaXJlY3RvcnkgZ2V0CgkJ
CQkJCQl7CgkJCQkJCQlwb3AKCQkJCQkJCWR1cCBjaGFyY29kZSBleGNoIDEgaW5kZXggMCAyIGlu
ZGV4IDI1NiBpZGl2IHB1dAoJCQkJCQkJMSBpbmRleCBleGNoIDEgZXhjaCAyNTYgbW9kIHB1dAoJ
CQkJCQkJc3RyaW5nd2lkdGggMiBhcnJheSBhc3RvcmUgZGVmCgkJCQkJCQl9CgkJCQkJCWZvcmFs
bAoJCQkJCQllbmQKCQkJCQkvdXNld2lkdGhzPyB0cnVlIGRlZgoJCQkJCWVuZAoJCQkJZ3Jlc3Rv
cmUKCQkJCX0KCQkJCXsgZXhjaCBwb3AgfQoJCQlpZmVsc2UKCQkJfSBiaW5kIGRlZgoJCS9jdF9D
b21wb3NlRm9udAoJCQl7CgkJCWN0X1VzZU5hdGl2ZUNhcGFiaWxpdHk/CgkJCQl7CgkJCQkyIGlu
ZGV4IC9DTWFwIGN0X3Jlc291cmNlc3RhdHVzCgkJCQkJeyBwb3AgcG9wIGV4Y2ggcG9wIH0KCQkJ
CQl7CgkJCQkJL0NJREluaXQgL1Byb2NTZXQgZmluZHJlc291cmNlCgkJCQkJCWJlZ2luCgkJCQkJ
CTEyIGRpY3QKCQkJCQkJCWJlZ2luCgkJCQkJCQliZWdpbmNtYXAKCQkJCQkJCS9DTWFwTmFtZSAz
IGluZGV4IGRlZgoJCQkJCQkJL0NNYXBWZXJzaW9uIDEuMDAwIGRlZgoJCQkJCQkJL0NNYXBUeXBl
IDEgZGVmCgkJCQkJCQlleGNoIC9XTW9kZSBleGNoIGRlZgoJCQkJCQkJL0NJRFN5c3RlbUluZm8g
MyBkaWN0IGR1cAoJCQkJCQkJCWJlZ2luCgkJCQkJCQkJL1JlZ2lzdHJ5IChBZG9iZSkgZGVmCgkJ
CQkJCQkJL09yZGVyaW5nCgkJCQkJCQkJQ01hcE5hbWUgY3RfbWtvY2ZTdHIxMDAgY3ZzCgkJCQkJ
CQkJKEFkb2JlLSkgc2VhcmNoCgkJCQkJCQkJCXsKCQkJCQkJCQkJcG9wIHBvcAoJCQkJCQkJCQko
LSkgc2VhcmNoCgkJCQkJCQkJCQl7CgkJCQkJCQkJCQlkdXAgbGVuZ3RoIHN0cmluZyBjb3B5CgkJ
CQkJCQkJCQlleGNoIHBvcCBleGNoIHBvcAoJCQkJCQkJCQkJfQoJCQkJCQkJCQkJeyBwb3AgKElk
ZW50aXR5KX0KCQkJCQkJCQkJaWZlbHNlCgkJCQkJCQkJCX0KCQkJCQkJCQkJeyBwb3AgIChJZGVu
dGl0eSkgIH0KCQkJCQkJCQlpZmVsc2UKCQkJCQkJCQlkZWYKCQkJCQkJCQkvU3VwcGxlbWVudCAw
IGRlZgoJCQkJCQkJCWVuZCBkZWYKCQkJCQkJCTEgYmVnaW5jb2Rlc3BhY2VyYW5nZQoJCQkJCQkJ
PDAwMDA+IDxGRkZGPgoJCQkJCQkJZW5kY29kZXNwYWNlcmFuZ2UKCQkJCQkJCTEgYmVnaW5jaWRy
YW5nZQoJCQkJCQkJPDAwMDA+IDxGRkZGPiAwCgkJCQkJCQllbmRjaWRyYW5nZQoJCQkJCQkJZW5k
Y21hcAoJCQkJCQkJQ01hcE5hbWUgY3VycmVudGRpY3QgL0NNYXAgZGVmaW5lcmVzb3VyY2UgcG9w
CgkJCQkJCQllbmQKCQkJCQkJZW5kCgkJCQkJfQoJCQkJaWZlbHNlCgkJCQljb21wb3NlZm9udAoJ
CQkJfQoJCQkJewoJCQkJMyAyIHJvbGwgcG9wCgkJCQkwIGdldCAvQ0lERm9udCBmaW5kcmVzb3Vy
Y2UKCQkJCWN0X21ha2VvY2YKCQkJCX0KCQkJaWZlbHNlCgkJCX0gYmluZCBkZWYKCQkvY3RfTWFr
ZUlkZW50aXR5CgkJCXsKCQkJY3RfVXNlTmF0aXZlQ2FwYWJpbGl0eT8KCQkJCXsKCQkJCTEgaW5k
ZXggL0NNYXAgY3RfcmVzb3VyY2VzdGF0dXMKCQkJCQl7IHBvcCBwb3AgfQoJCQkJCXsKCQkJCQkv
Q0lESW5pdCAvUHJvY1NldCBmaW5kcmVzb3VyY2UgYmVnaW4KCQkJCQkxMiBkaWN0IGJlZ2luCgkJ
CQkJYmVnaW5jbWFwCgkJCQkJL0NNYXBOYW1lIDIgaW5kZXggZGVmCgkJCQkJL0NNYXBWZXJzaW9u
IDEuMDAwIGRlZgoJCQkJCS9DTWFwVHlwZSAxIGRlZgoJCQkJCS9DSURTeXN0ZW1JbmZvIDMgZGlj
dCBkdXAKCQkJCQkJYmVnaW4KCQkJCQkJL1JlZ2lzdHJ5IChBZG9iZSkgZGVmCgkJCQkJCS9PcmRl
cmluZwoJCQkJCQlDTWFwTmFtZSBjdF9ta29jZlN0cjEwMCBjdnMKCQkJCQkJKEFkb2JlLSkgc2Vh
cmNoCgkJCQkJCQl7CgkJCQkJCQlwb3AgcG9wCgkJCQkJCQkoLSkgc2VhcmNoCgkJCQkJCQkJeyBk
dXAgbGVuZ3RoIHN0cmluZyBjb3B5IGV4Y2ggcG9wIGV4Y2ggcG9wIH0KCQkJCQkJCQl7IHBvcCAo
SWRlbnRpdHkpIH0KCQkJCQkJCWlmZWxzZQoJCQkJCQkJfQoJCQkJCQkJeyBwb3AgKElkZW50aXR5
KSB9CgkJCQkJCWlmZWxzZQoJCQkJCQlkZWYKCQkJCQkJL1N1cHBsZW1lbnQgMCBkZWYKCQkJCQkJ
ZW5kIGRlZgoJCQkJCTEgYmVnaW5jb2Rlc3BhY2VyYW5nZQoJCQkJCTwwMDAwPiA8RkZGRj4KCQkJ
CQllbmRjb2Rlc3BhY2VyYW5nZQoJCQkJCTEgYmVnaW5jaWRyYW5nZQoJCQkJCTwwMDAwPiA8RkZG
Rj4gMAoJCQkJCWVuZGNpZHJhbmdlCgkJCQkJZW5kY21hcAoJCQkJCUNNYXBOYW1lIGN1cnJlbnRk
aWN0IC9DTWFwIGRlZmluZXJlc291cmNlIHBvcAoJCQkJCWVuZAoJCQkJCWVuZAoJCQkJCX0KCQkJ
CWlmZWxzZQoJCQkJY29tcG9zZWZvbnQKCQkJCX0KCQkJCXsKCQkJCWV4Y2ggcG9wCgkJCQkwIGdl
dCAvQ0lERm9udCBmaW5kcmVzb3VyY2UKCQkJCWN0X21ha2VvY2YKCQkJCX0KCQkJaWZlbHNlCgkJ
CX0gYmluZCBkZWYKCQljdXJyZW50ZGljdCByZWFkb25seSBwb3AKCQllbmQKCWVuZAolJUVuZFJl
c291cmNlCiUlQmVnaW5SZXNvdXJjZTogcHJvY3NldCBBZG9iZV9Db29sVHlwZV9VdGlsaXR5X1Q0
MiAxLjAgMAolJUNvcHlyaWdodDogQ29weXJpZ2h0IDE5ODctMjAwMyBBZG9iZSBTeXN0ZW1zIElu
Y29ycG9yYXRlZC4KJSVWZXJzaW9uOiAxLjAgMAp1c2VyZGljdCAvY3RfVDQyRGljdCAxNSBkaWN0
IHB1dApjdF9UNDJEaWN0IGJlZ2luCi9JczIwMTU/CnsKICB2ZXJzaW9uCiAgY3ZpCiAgMjAxNQog
IGdlCn0gYmluZCBkZWYKL0FsbG9jR2x5cGhTdG9yYWdlCnsKICBJczIwMTU/CiAgewkKCQlwb3AK
ICB9IAogIHsgCgkJe3N0cmluZ30gZm9yYWxsCiAgfSBpZmVsc2UKfSBiaW5kIGRlZgovVHlwZTQy
RGljdEJlZ2luCnsKCTI1IGRpY3QgYmVnaW4KICAvRm9udE5hbWUgZXhjaCBkZWYKICAvQ2hhclN0
cmluZ3MgMjU2IGRpY3QgCgliZWdpbgoJCSAgLy5ub3RkZWYgMCBkZWYKCQkgIGN1cnJlbnRkaWN0
IAoJZW5kIGRlZgogIC9FbmNvZGluZyBleGNoIGRlZgogIC9QYWludFR5cGUgMCBkZWYKICAvRm9u
dFR5cGUgNDIgZGVmCiAgL0ZvbnRNYXRyaXggWzEgMCAwIDEgMCAwXSBkZWYKICA0IGFycmF5ICBh
c3RvcmUgY3Z4IC9Gb250QkJveCBleGNoIGRlZgogIC9zZm50cwp9IGJpbmQgZGVmCi9UeXBlNDJE
aWN0RW5kICAKewoJIGN1cnJlbnRkaWN0IGR1cCAvRm9udE5hbWUgZ2V0IGV4Y2ggZGVmaW5lZm9u
dCBlbmQKCWN0X1Q0MkRpY3QgZXhjaAoJZHVwIC9Gb250TmFtZSBnZXQgZXhjaCBwdXQKfSBiaW5k
IGRlZgovUkQge3N0cmluZyBjdXJyZW50ZmlsZSBleGNoIHJlYWRzdHJpbmcgcG9wfSBleGVjdXRl
b25seSBkZWYKL1ByZXBGb3IyMDE1CnsKCUlzMjAxNT8KCXsJCSAgCgkJIC9HbHlwaERpcmVjdG9y
eSAKCQkgMTYKCQkgZGljdCBkZWYKCQkgc2ZudHMgMCBnZXQKCQkgZHVwCgkJIDIgaW5kZXgKCQkg
KGdseXgpCgkJIHB1dGludGVydmFsCgkJIDIgaW5kZXggIAoJCSAobG9jeCkKCQkgcHV0aW50ZXJ2
YWwKCQkgcG9wCgkJIHBvcAoJfQoJewoJCSBwb3AKCQkgcG9wCgl9IGlmZWxzZQkJCQp9IGJpbmQg
ZGVmCi9BZGRUNDJDaGFyCnsKCUlzMjAxNT8KCXsKCQkvR2x5cGhEaXJlY3RvcnkgZ2V0IAoJCWJl
Z2luCgkJZGVmCgkJZW5kCgkJcG9wCgkJcG9wCgl9Cgl7CgkJL3NmbnRzIGdldAoJCTQgaW5kZXgK
CQlnZXQKCQkzIGluZGV4CgkgIDIgaW5kZXgKCQlwdXRpbnRlcnZhbAoJCXBvcAoJCXBvcAoJCXBv
cAoJCXBvcAoJfSBpZmVsc2UKfSBiaW5kIGRlZgplbmQKJSVFbmRSZXNvdXJjZQpBZG9iZV9Db29s
VHlwZV9Db3JlIGJlZ2luIC8kT2JsaXF1ZSBTZXRTdWJzdGl0dXRlU3RyYXRlZ3kgZW5kCiUlQmVn
aW5SZXNvdXJjZTogcHJvY3NldCBBZG9iZV9BR01fSW1hZ2UgMS4wIDAKJSVWZXJzaW9uOiAxLjAg
MAolJUNvcHlyaWdodDogQ29weXJpZ2h0IChDKSAyMDAwLTIwMDMgQWRvYmUgU3lzdGVtcywgSW5j
LiAgQWxsIFJpZ2h0cyBSZXNlcnZlZC4Kc3lzdGVtZGljdCAvc2V0cGFja2luZyBrbm93bgp7Cglj
dXJyZW50cGFja2luZwoJdHJ1ZSBzZXRwYWNraW5nCn0gaWYKdXNlcmRpY3QgL0Fkb2JlX0FHTV9J
bWFnZSA3NSBkaWN0IGR1cCBiZWdpbiBwdXQKL0Fkb2JlX0FHTV9JbWFnZV9JZCAvQWRvYmVfQUdN
X0ltYWdlXzEuMF8wIGRlZgovbmR7CgludWxsIGRlZgp9YmluZCBkZWYKL0FHTUlNR18maW1hZ2Ug
bmQKL0FHTUlNR18mY29sb3JpbWFnZSBuZAovQUdNSU1HXyZpbWFnZW1hc2sgbmQKL0FHTUlNR19t
YnVmICgpIGRlZgovQUdNSU1HX3lidWYgKCkgZGVmCi9BR01JTUdfa2J1ZiAoKSBkZWYKL0FHTUlN
R19jIDAgZGVmCi9BR01JTUdfbSAwIGRlZgovQUdNSU1HX3kgMCBkZWYKL0FHTUlNR19rIDAgZGVm
Ci9BR01JTUdfdG1wIG5kCi9BR01JTUdfaW1hZ2VzdHJpbmcwIG5kCi9BR01JTUdfaW1hZ2VzdHJp
bmcxIG5kCi9BR01JTUdfaW1hZ2VzdHJpbmcyIG5kCi9BR01JTUdfaW1hZ2VzdHJpbmczIG5kCi9B
R01JTUdfaW1hZ2VzdHJpbmc0IG5kCi9BR01JTUdfaW1hZ2VzdHJpbmc1IG5kCi9BR01JTUdfY250
IG5kCi9BR01JTUdfZnNhdmUgbmQKL0FHTUlNR19jb2xvckFyeSBuZAovQUdNSU1HX292ZXJyaWRl
IG5kCi9BR01JTUdfbmFtZSBuZAovQUdNSU1HX21hc2tTb3VyY2UgbmQKL2ludmVydF9pbWFnZV9z
YW1wbGVzIG5kCi9rbm9ja291dF9pbWFnZV9zYW1wbGVzCW5kCi9pbWcgbmQKL3NlcGltZyBuZAov
ZGV2bmltZyBuZAovaWR4aW1nIG5kCi9kb2Nfc2V0dXAKeyAKCUFkb2JlX0FHTV9Db3JlIGJlZ2lu
CglBZG9iZV9BR01fSW1hZ2UgYmVnaW4KCS9BR01JTUdfJmltYWdlIHN5c3RlbWRpY3QvaW1hZ2Ug
Z2V0IGRlZgoJL0FHTUlNR18maW1hZ2VtYXNrIHN5c3RlbWRpY3QvaW1hZ2VtYXNrIGdldCBkZWYK
CS9jb2xvcmltYWdlIHdoZXJlewoJCXBvcAoJCS9BR01JTUdfJmNvbG9yaW1hZ2UgL2NvbG9yaW1h
Z2UgbGRmCgl9aWYKCWVuZAoJZW5kCn1kZWYKL3BhZ2Vfc2V0dXAKewoJQWRvYmVfQUdNX0ltYWdl
IGJlZ2luCgkvQUdNSU1HX2NjaW1hZ2VfZXhpc3RzIHsvY3VzdG9tY29sb3JpbWFnZSB3aGVyZSAK
CQl7CgkJCXBvcAoJCQkvQWRvYmVfQUdNX09uSG9zdF9TZXBzIHdoZXJlCgkJCXsKCQkJcG9wIGZh
bHNlCgkJCX17CgkJCS9BZG9iZV9BR01fSW5SaXBfU2VwcyB3aGVyZQoJCQkJewoJCQkJcG9wIGZh
bHNlCgkJCQl9ewoJCQkJCXRydWUKCQkJCSB9aWZlbHNlCgkJCSB9aWZlbHNlCgkJCX17CgkJCWZh
bHNlCgkJfWlmZWxzZSAKCX1iZGYKCWxldmVsMnsKCQkvaW52ZXJ0X2ltYWdlX3NhbXBsZXMKCQl7
CgkJCUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfdG1wIERlY29kZSBsZW5ndGggZGRmCgkJCS9EZWNv
ZGUgWyBEZWNvZGUgMSBnZXQgRGVjb2RlIDAgZ2V0XSBkZWYKCQl9ZGVmCgkJL2tub2Nrb3V0X2lt
YWdlX3NhbXBsZXMKCQl7CgkJCU9wZXJhdG9yL2ltYWdlbWFzayBuZXsKCQkJCS9EZWNvZGUgWzEg
MV0gZGVmCgkJCX1pZgoJCX1kZWYKCX17CQoJCS9pbnZlcnRfaW1hZ2Vfc2FtcGxlcwoJCXsKCQkJ
ezEgZXhjaCBzdWJ9IGN1cnJlbnR0cmFuc2ZlciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCX1kZWYK
CQkva25vY2tvdXRfaW1hZ2Vfc2FtcGxlcwoJCXsKCQkJeyBwb3AgMSB9IGN1cnJlbnR0cmFuc2Zl
ciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCX1kZWYKCX1pZmVsc2UKCS9pbWcgL2ltYWdlb3JtYXNr
IGxkZgoJL3NlcGltZyAvc2VwX2ltYWdlb3JtYXNrIGxkZgoJL2Rldm5pbWcgL2Rldm5faW1hZ2Vv
cm1hc2sgbGRmCgkvaWR4aW1nIC9pbmRleGVkX2ltYWdlb3JtYXNrIGxkZgoJL19jdHlwZSA3IGRl
ZgoJY3VycmVudGRpY3R7CgkJZHVwIHhjaGVjayAxIGluZGV4IHR5cGUgZHVwIC9hcnJheXR5cGUg
ZXEgZXhjaCAvcGFja2VkYXJyYXl0eXBlIGVxIG9yIGFuZHsKCQkJYmluZAoJCX1pZgoJCWRlZgoJ
fWZvcmFsbAp9ZGVmCi9wYWdlX3RyYWlsZXIKewoJZW5kCn1kZWYKL2RvY190cmFpbGVyCnsKfWRl
ZgovaW1hZ2Vvcm1hc2tfc3lzCnsKCWJlZ2luCgkJc2F2ZSBtYXJrCgkJbGV2ZWwyewoJCQljdXJy
ZW50ZGljdAoJCQlPcGVyYXRvciAvaW1hZ2VtYXNrIGVxewoJCQkJQUdNSU1HXyZpbWFnZW1hc2sK
CQkJfXsKCQkJCXVzZV9tYXNrIHsKCQkJCQlsZXZlbDMge3Byb2Nlc3NfbWFza19MMyBBR01JTUdf
JmltYWdlfXttYXNrZWRfaW1hZ2Vfc2ltdWxhdGlvbn1pZmVsc2UKCQkJCX17CgkJCQkJQUdNSU1H
XyZpbWFnZQoJCQkJfWlmZWxzZQoJCQl9aWZlbHNlCgkJfXsKCQkJV2lkdGggSGVpZ2h0CgkJCU9w
ZXJhdG9yIC9pbWFnZW1hc2sgZXF7CgkJCQlEZWNvZGUgMCBnZXQgMSBlcSBEZWNvZGUgMSBnZXQg
MCBlcQlhbmQKCQkJCUltYWdlTWF0cml4IC9EYXRhU291cmNlIGxvYWQKCQkJCUFHTUlNR18maW1h
Z2VtYXNrCgkJCX17CgkJCQlCaXRzUGVyQ29tcG9uZW50IEltYWdlTWF0cml4IC9EYXRhU291cmNl
IGxvYWQKCQkJCUFHTUlNR18maW1hZ2UKCQkJfWlmZWxzZQoJCX1pZmVsc2UKCQljbGVhcnRvbWFy
ayByZXN0b3JlCgllbmQKfWRlZgovb3ZlcnByaW50X3BsYXRlCnsKCWN1cnJlbnRvdmVycHJpbnQg
ewoJCTAgZ2V0IGR1cCB0eXBlIC9uYW1ldHlwZSBlcSB7CgkJCWR1cCAvRGV2aWNlR3JheSBlcXsK
CQkJCXBvcCBBR01DT1JFX2JsYWNrX3BsYXRlIG5vdAoJCQl9ewoJCQkJL0RldmljZUNNWUsgZXF7
CgkJCQkJQUdNQ09SRV9pc19jbXlrX3NlcCBub3QKCQkJCX1pZgoJCQl9aWZlbHNlCgkJfXsKCQkJ
ZmFsc2UgZXhjaAoJCQl7CgkJCQkgQUdNT0hTX3NlcGluayBlcSBvcgoJCQl9IGZvcmFsbAoJCQlu
b3QKCQl9IGlmZWxzZQoJfXsKCQlwb3AgZmFsc2UKCX1pZmVsc2UKfWRlZgovcHJvY2Vzc19tYXNr
X0wzCnsKCWR1cCBiZWdpbgoJL0ltYWdlVHlwZSAxIGRlZgoJZW5kCgk0IGRpY3QgYmVnaW4KCQkv
RGF0YURpY3QgZXhjaCBkZWYKCQkvSW1hZ2VUeXBlIDMgZGVmCgkJL0ludGVybGVhdmVUeXBlIDMg
ZGVmCgkJL01hc2tEaWN0IDkgZGljdCBiZWdpbgoJCQkvSW1hZ2VUeXBlIDEgZGVmCgkJCS9XaWR0
aCBEYXRhRGljdCBkdXAgL01hc2tXaWR0aCBrbm93biB7L01hc2tXaWR0aH17L1dpZHRofSBpZmVs
c2UgZ2V0IGRlZgoJCQkvSGVpZ2h0IERhdGFEaWN0IGR1cCAvTWFza0hlaWdodCBrbm93biB7L01h
c2tIZWlnaHR9ey9IZWlnaHR9IGlmZWxzZSBnZXQgZGVmCgkJCS9JbWFnZU1hdHJpeCBbV2lkdGgg
MCAwIEhlaWdodCBuZWcgMCBIZWlnaHRdIGRlZgoJCQkvTkNvbXBvbmVudHMgMSBkZWYKCQkJL0Jp
dHNQZXJDb21wb25lbnQgMSBkZWYKCQkJL0RlY29kZSBbMCAxXSBkZWYKCQkJL0RhdGFTb3VyY2Ug
QUdNSU1HX21hc2tTb3VyY2UgZGVmCgkJY3VycmVudGRpY3QgZW5kIGRlZgoJY3VycmVudGRpY3Qg
ZW5kCn1kZWYKL3VzZV9tYXNrCnsKCWR1cCB0eXBlIC9kaWN0dHlwZSBlcQoJewoJCWR1cCAvTWFz
ayBrbm93bgl7CgkJCWR1cCAvTWFzayBnZXQgewoJCQkJbGV2ZWwzCgkJCQl7dHJ1ZX0KCQkJCXsK
CQkJCQlkdXAgL01hc2tXaWR0aCBrbm93biB7ZHVwIC9NYXNrV2lkdGggZ2V0IDEgaW5kZXggL1dp
ZHRoIGdldCBlcX17dHJ1ZX1pZmVsc2UgZXhjaAoJCQkJCWR1cCAvTWFza0hlaWdodCBrbm93biB7
ZHVwIC9NYXNrSGVpZ2h0IGdldCAxIGluZGV4IC9IZWlnaHQgZ2V0IGVxfXt0cnVlfWlmZWxzZQoJ
CQkJCTMgLTEgcm9sbCBhbmQKCQkJCX0gaWZlbHNlCgkJCX0KCQkJe2ZhbHNlfSBpZmVsc2UKCQl9
CgkJe2ZhbHNlfSBpZmVsc2UKCX0KCXtmYWxzZX0gaWZlbHNlCn1kZWYKL21ha2VfbGluZV9zb3Vy
Y2UKewoJYmVnaW4KCU11bHRpcGxlRGF0YVNvdXJjZXMgewoJCVsKCQlEZWNvZGUgbGVuZ3RoIDIg
ZGl2IGN2aSB7V2lkdGggc3RyaW5nfSByZXBlYXQKCQldCgl9ewoJCVdpZHRoIERlY29kZSBsZW5n
dGggMiBkaXYgbXVsIGN2aSBzdHJpbmcKCX1pZmVsc2UKCWVuZAp9ZGVmCi9kYXRhc291cmNlX3Rv
X3N0cgp7CglleGNoIGR1cCB0eXBlCglkdXAgL2ZpbGV0eXBlIGVxIHsKCQlwb3AgZXhjaCByZWFk
c3RyaW5nCgl9ewoJCS9hcnJheXR5cGUgZXEgewoJCQlleGVjIGV4Y2ggY29weQoJCX17CgkJCXBv
cAoJCX1pZmVsc2UKCX1pZmVsc2UKCXBvcAp9ZGVmCi9tYXNrZWRfaW1hZ2Vfc2ltdWxhdGlvbgp7
CgkzIGRpY3QgYmVnaW4KCWR1cCBtYWtlX2xpbmVfc291cmNlIC9saW5lX3NvdXJjZSB4ZGYKCS9t
YXNrX3NvdXJjZSBBR01JTUdfbWFza1NvdXJjZSAvTFpXRGVjb2RlIGZpbHRlciBkZWYKCWR1cCAv
V2lkdGggZ2V0IDggZGl2IGNlaWxpbmcgY3ZpIHN0cmluZyAvbWFza19zdHIgeGRmCgliZWdpbgoJ
Z3NhdmUKCTAgMSB0cmFuc2xhdGUgMSAtMSBIZWlnaHQgZGl2IHNjYWxlCgkxIDEgSGVpZ2h0IHsK
CQlwb3AKCQlnc2F2ZQoJCU11bHRpcGxlRGF0YVNvdXJjZXMgewoJCQkwIDEgRGF0YVNvdXJjZSBs
ZW5ndGggMSBzdWIgewoJCQkJZHVwIERhdGFTb3VyY2UgZXhjaCBnZXQKCQkJCWV4Y2ggbGluZV9z
b3VyY2UgZXhjaCBnZXQKCQkJCWRhdGFzb3VyY2VfdG9fc3RyCgkJCX0gZm9yCgkJfXsKCQkJRGF0
YVNvdXJjZSBsaW5lX3NvdXJjZSBkYXRhc291cmNlX3RvX3N0cgoJCX0gaWZlbHNlCgkJPDwKCQkJ
L1BhdHRlcm5UeXBlIDEKCQkJL1BhaW50UHJvYyBbCgkJCQkvcG9wIGN2eAoJCQkJPDwKCQkJCQkv
SW1hZ2VUeXBlIDEKCQkJCQkvV2lkdGggV2lkdGgKCQkJCQkvSGVpZ2h0IDEKCQkJCQkvSW1hZ2VN
YXRyaXggV2lkdGggMS4wIHN1YiAxIG1hdHJpeCBzY2FsZSAwLjUgMCBtYXRyaXggdHJhbnNsYXRl
IG1hdHJpeCBjb25jYXRtYXRyaXgKCQkJCQkvTXVsdGlwbGVEYXRhU291cmNlcyBNdWx0aXBsZURh
dGFTb3VyY2VzCgkJCQkJL0RhdGFTb3VyY2UgbGluZV9zb3VyY2UKCQkJCQkvQml0c1BlckNvbXBv
bmVudCBCaXRzUGVyQ29tcG9uZW50CgkJCQkJL0RlY29kZSBEZWNvZGUKCQkJCT4+CgkJCQkvaW1h
Z2UgY3Z4CgkJCV0gY3Z4CgkJCS9CQm94IFswIDAgV2lkdGggMV0KCQkJL1hTdGVwIFdpZHRoCgkJ
CS9ZU3RlcCAxCgkJCS9QYWludFR5cGUgMQoJCQkvVGlsaW5nVHlwZSAyCgkJPj4KCQltYXRyaXgg
bWFrZXBhdHRlcm4gc2V0X3BhdHRlcm4KCQk8PAoJCQkvSW1hZ2VUeXBlIDEKCQkJL1dpZHRoIFdp
ZHRoCgkJCS9IZWlnaHQgMQoJCQkvSW1hZ2VNYXRyaXggV2lkdGggMSBtYXRyaXggc2NhbGUKCQkJ
L011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UKCQkJL0RhdGFTb3VyY2UgbWFza19zb3VyY2UgbWFz
a19zdHIgcmVhZHN0cmluZyBwb3AKCQkJL0JpdHNQZXJDb21wb25lbnQgMQoJCQkvRGVjb2RlIFsw
IDFdCgkJPj4KCQlpbWFnZW1hc2sKCQlncmVzdG9yZQoJCTAgMSB0cmFuc2xhdGUKCX0gZm9yCgln
cmVzdG9yZQoJZW5kCgllbmQKfWRlZgovaW1hZ2Vvcm1hc2sKewoJYmVnaW4KCQlTa2lwSW1hZ2VQ
cm9jIHsKCQkJY3VycmVudGRpY3QgY29uc3VtZWltYWdlZGF0YQoJCX0KCQl7CgkJCXNhdmUgbWFy
awoJCQlsZXZlbDIgQUdNQ09SRV9ob3N0X3NlcCBub3QgYW5kewoJCQkJY3VycmVudGRpY3QKCQkJ
CU9wZXJhdG9yIC9pbWFnZW1hc2sgZXEgRGV2aWNlTl9QUzIgbm90IGFuZCB7CgkJCQkJaW1hZ2Vt
YXNrCgkJCQl9ewoJCQkJCUFHTUNPUkVfaW5fcmlwX3NlcCBjdXJyZW50b3ZlcnByaW50IGFuZCBj
dXJyZW50Y29sb3JzcGFjZSAwIGdldCAvRGV2aWNlR3JheSBlcSBhbmR7CgkJCQkJCVsvU2VwYXJh
dGlvbiAvQmxhY2sgL0RldmljZUdyYXkge31dIHNldGNvbG9yc3BhY2UKCQkJCQkJL0RlY29kZSBb
IERlY29kZSAxIGdldCBEZWNvZGUgMCBnZXQgXSBkZWYKCQkJCQl9aWYKCQkJCQl1c2VfbWFzayB7
CgkJCQkJCWxldmVsMyB7cHJvY2Vzc19tYXNrX0wzIGltYWdlfXttYXNrZWRfaW1hZ2Vfc2ltdWxh
dGlvbn1pZmVsc2UKCQkJCQl9ewoJCQkJCQlEZXZpY2VOX05vbmVOYW1lIERldmljZU5fUFMyIElu
ZGV4ZWRfRGV2aWNlTiBsZXZlbDMgbm90IGFuZCBvciBvciBBR01DT1JFX2luX3JpcF9zZXAgYW5k
IAoJCQkJCQl7CgkJCQkJCQlOYW1lcyBjb252ZXJ0X3RvX3Byb2Nlc3Mgbm90IHsKCQkJCQkJCQky
IGRpY3QgYmVnaW4KCQkJCQkJCQkvaW1hZ2VEaWN0IHhkZgoJCQkJCQkJCS9uYW1lc19pbmRleCAw
IGRlZgoJCQkJCQkJCWdzYXZlCgkJCQkJCQkJaW1hZ2VEaWN0IHdyaXRlX2ltYWdlX2ZpbGUgewoJ
CQkJCQkJCQlOYW1lcyB7CgkJCQkJCQkJCQlkdXAgKE5vbmUpIG5lIHsKCQkJCQkJCQkJCQlbL1Nl
cGFyYXRpb24gMyAtMSByb2xsIC9EZXZpY2VHcmF5IHsxIGV4Y2ggc3VifV0gc2V0Y29sb3JzcGFj
ZQoJCQkJCQkJCQkJCU9wZXJhdG9yIGltYWdlRGljdCByZWFkX2ltYWdlX2ZpbGUKCQkJCQkJCQkJ
CQluYW1lc19pbmRleCAwIGVxIHt0cnVlIHNldG92ZXJwcmludH0gaWYKCQkJCQkJCQkJCQkvbmFt
ZXNfaW5kZXggbmFtZXNfaW5kZXggMSBhZGQgZGVmCgkJCQkJCQkJCQl9ewoJCQkJCQkJCQkJCXBv
cAoJCQkJCQkJCQkJfSBpZmVsc2UKCQkJCQkJCQkJfSBmb3JhbGwKCQkJCQkJCQkJY2xvc2VfaW1h
Z2VfZmlsZQoJCQkJCQkJCX0gaWYKCQkJCQkJCQlncmVzdG9yZQoJCQkJCQkJCWVuZAoJCQkJCQkJ
fXsKCQkJCQkJCQlPcGVyYXRvciAvaW1hZ2VtYXNrIGVxIHsKCQkJCQkJCQkJaW1hZ2VtYXNrCgkJ
CQkJCQkJfXsKCQkJCQkJCQkJaW1hZ2UKCQkJCQkJCQl9IGlmZWxzZQoJCQkJCQkJfSBpZmVsc2UK
CQkJCQkJfXsKCQkJCQkJCU9wZXJhdG9yIC9pbWFnZW1hc2sgZXEgewoJCQkJCQkJCWltYWdlbWFz
awoJCQkJCQkJfXsKCQkJCQkJCQlpbWFnZQoJCQkJCQkJfSBpZmVsc2UKCQkJCQkJfSBpZmVsc2UK
CQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCgkJCX17CgkJCQlXaWR0aCBIZWlnaHQKCQkJCU9wZXJh
dG9yIC9pbWFnZW1hc2sgZXF7CgkJCQkJRGVjb2RlIDAgZ2V0IDEgZXEgRGVjb2RlIDEgZ2V0IDAg
ZXEJYW5kCgkJCQkJSW1hZ2VNYXRyaXggL0RhdGFTb3VyY2UgbG9hZAoJCQkJCS9BZG9iZV9BR01f
T25Ib3N0X1NlcHMgd2hlcmUgewoJCQkJCQlwb3AgaW1hZ2VtYXNrCgkJCQkJfXsKCQkJCQkJY3Vy
cmVudGdyYXkgMSBuZXsKCQkJCQkJCWN1cnJlbnRkaWN0IGltYWdlb3JtYXNrX3N5cwoJCQkJCQl9
ewoJCQkJCQkJY3VycmVudG92ZXJwcmludCBub3R7CgkJCQkJCQkJMSBBR01DT1JFXyZzZXRncmF5
CgkJCQkJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2tfc3lzCgkJCQkJCQl9ewoJCQkJCQkJCWN1
cnJlbnRkaWN0IGlnbm9yZWltYWdlZGF0YQoJCQkJCQkJfWlmZWxzZQkJCQkgCQkKCQkJCQkJfWlm
ZWxzZQoJCQkJCX1pZmVsc2UKCQkJCX17CgkJCQkJQml0c1BlckNvbXBvbmVudCBJbWFnZU1hdHJp
eCAKCQkJCQlNdWx0aXBsZURhdGFTb3VyY2VzewoJCQkJCQkwIDEgTkNvbXBvbmVudHMgMSBzdWJ7
CgkJCQkJCQlEYXRhU291cmNlIGV4Y2ggZ2V0CgkJCQkJCX1mb3IKCQkJCQl9ewoJCQkJCQkvRGF0
YVNvdXJjZSBsb2FkCgkJCQkJfWlmZWxzZQoJCQkJCU9wZXJhdG9yIC9jb2xvcmltYWdlIGVxewoJ
CQkJCQlBR01DT1JFX2hvc3Rfc2VwewoJCQkJCQkJTXVsdGlwbGVEYXRhU291cmNlcyBsZXZlbDIg
b3IgTkNvbXBvbmVudHMgNCBlcSBhbmR7CgkJCQkJCQkJQUdNQ09SRV9pc19jbXlrX3NlcHsKCQkJ
CQkJCQkJTXVsdGlwbGVEYXRhU291cmNlc3sKCQkJCQkJCQkJCS9EYXRhU291cmNlIFsKCQkJCQkJ
CQkJCQlEYXRhU291cmNlIDAgZ2V0IC9leGVjIGN2eAoJCQkJCQkJCQkJCURhdGFTb3VyY2UgMSBn
ZXQgL2V4ZWMgY3Z4CgkJCQkJCQkJCQkJRGF0YVNvdXJjZSAyIGdldCAvZXhlYyBjdngKCQkJCQkJ
CQkJCQlEYXRhU291cmNlIDMgZ2V0IC9leGVjIGN2eAoJCQkJCQkJCQkJCS9BR01DT1JFX2dldF9p
bmtfZGF0YSBjdngKCQkJCQkJCQkJCV0gY3Z4IGRlZgoJCQkJCQkJCQl9ewoJCQkJCQkJCQkJL0Rh
dGFTb3VyY2UgCgkJCQkJCQkJCQlXaWR0aCBCaXRzUGVyQ29tcG9uZW50IG11bCA3IGFkZCA4IGlk
aXYgSGVpZ2h0IG11bCA0IG11bCAKCQkJCQkJCQkJCS9EYXRhU291cmNlIGxvYWQKCQkJCQkJCQkJ
CWZpbHRlcl9jbXlrIDAgKCkgL1N1YkZpbGVEZWNvZGUgZmlsdGVyIGRlZgoJCQkJCQkJCQl9aWZl
bHNlCgkJCQkJCQkJCS9EZWNvZGUgWyBEZWNvZGUgMCBnZXQgRGVjb2RlIDEgZ2V0IF0gZGVmCgkJ
CQkJCQkJCS9NdWx0aXBsZURhdGFTb3VyY2VzIGZhbHNlIGRlZgoJCQkJCQkJCQkvTkNvbXBvbmVu
dHMgMSBkZWYKCQkJCQkJCQkJL09wZXJhdG9yIC9pbWFnZSBkZWYKCQkJCQkJCQkJaW52ZXJ0X2lt
YWdlX3NhbXBsZXMKCQkJCQkJIAkJCTEgQUdNQ09SRV8mc2V0Z3JheQoJCQkJCQkJCQljdXJyZW50
ZGljdCBpbWFnZW9ybWFza19zeXMKCQkJCQkJCQl9ewoJCQkJCQkJCQljdXJyZW50b3ZlcnByaW50
IG5vdCBPcGVyYXRvci9pbWFnZW1hc2sgZXEgYW5kewogIAkJCSAJCQkJCQkJMSBBR01DT1JFXyZz
ZXRncmF5CiAgCQkJIAkJCQkJCQljdXJyZW50ZGljdCBpbWFnZW9ybWFza19zeXMKICAJCQkgCQkJ
CQkJfXsKICAJCQkgCQkJCQkJCWN1cnJlbnRkaWN0IGlnbm9yZWltYWdlZGF0YQogIAkJCSAJCQkJ
CQl9aWZlbHNlCgkJCQkJCQkJfWlmZWxzZQoJCQkJCQkJfXsJCgkJCQkJCQkJTXVsdGlwbGVEYXRh
U291cmNlcyBOQ29tcG9uZW50cyBBR01JTUdfJmNvbG9yaW1hZ2UJCQkJCQkKCQkJCQkJCX1pZmVs
c2UKCQkJCQkJfXsKCQkJCQkJCXRydWUgTkNvbXBvbmVudHMgY29sb3JpbWFnZQoJCQkJCQl9aWZl
bHNlCgkJCQkJfXsKCQkJCQkJT3BlcmF0b3IgL2ltYWdlIGVxewoJCQkJCQkJQUdNQ09SRV9ob3N0
X3NlcHsKCQkJCQkJCQkvRG9JbWFnZSB0cnVlIGRlZgoJCQkJCQkJCUhvc3RTZXBDb2xvckltYWdl
ewoJCQkJCQkJCQlpbnZlcnRfaW1hZ2Vfc2FtcGxlcwoJCQkJCQkJCX17CgkJCQkJCQkJCUFHTUNP
UkVfYmxhY2tfcGxhdGUgbm90IE9wZXJhdG9yL2ltYWdlbWFzayBuZSBhbmR7CgkJCQkJCQkJCQkv
RG9JbWFnZSBmYWxzZSBkZWYKCQkJCQkJCQkJCWN1cnJlbnRkaWN0IGlnbm9yZWltYWdlZGF0YQoJ
CQkJCSAJCQkJfWlmCgkJCQkJCQkJfWlmZWxzZQoJCQkJCQkgCQkxIEFHTUNPUkVfJnNldGdyYXkK
CQkJCQkJCQlEb0ltYWdlCgkJCQkJCQkJCXtjdXJyZW50ZGljdCBpbWFnZW9ybWFza19zeXN9IGlm
CgkJCQkJCQl9ewoJCQkJCQkJCXVzZV9tYXNrIHsKCQkJCQkJCQkJbGV2ZWwzIHtwcm9jZXNzX21h
c2tfTDMgaW1hZ2V9e21hc2tlZF9pbWFnZV9zaW11bGF0aW9ufWlmZWxzZQoJCQkJCQkJCX17CgkJ
CQkJCQkJCWltYWdlCgkJCQkJCQkJfWlmZWxzZQoJCQkJCQkJfWlmZWxzZQoJCQkJCQl9ewoJCQkJ
CQkJT3BlcmF0b3Iva25vY2tvdXQgZXF7CgkJCQkJCQkJcG9wIHBvcCBwb3AgcG9wIHBvcAoJCQkJ
CQkJCWN1cnJlbnRjb2xvcnNwYWNlIG92ZXJwcmludF9wbGF0ZSBub3R7CgkJCQkJCQkJCWtub2Nr
b3V0X3VuaXRzcQoJCQkJCQkJCX1pZgoJCQkJCQkJfWlmCgkJCQkJCX1pZmVsc2UKCQkJCQl9aWZl
bHNlCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQkJY2xlYXJ0b21hcmsgcmVzdG9yZQoJCX1pZmVs
c2UKCWVuZAp9ZGVmCi9zZXBfaW1hZ2Vvcm1hc2sKewogCS9zZXBfY29sb3JzcGFjZV9kaWN0IEFH
TUNPUkVfZ2dldCBiZWdpbgoJL01hcHBlZENTQSBDU0EgbWFwX2NzYSBkZWYKCWJlZ2luCglTa2lw
SW1hZ2VQcm9jIHsKCQljdXJyZW50ZGljdCBjb25zdW1laW1hZ2VkYXRhCgl9Cgl7CgkJc2F2ZSBt
YXJrIAoJCUFHTUNPUkVfYXZvaWRfTDJfc2VwX3NwYWNlewoJCQkvRGVjb2RlIFsgRGVjb2RlIDAg
Z2V0IDI1NSBtdWwgRGVjb2RlIDEgZ2V0IDI1NSBtdWwgXSBkZWYKCQl9aWYKIAkJQUdNSU1HX2Nj
aW1hZ2VfZXhpc3RzIAoJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcSBhbmQKCQljdXJy
ZW50ZGljdC9Db21wb25lbnRzIGtub3duIGFuZCAKCQlOYW1lICgpIG5lIGFuZCAKCQlOYW1lIChB
bGwpIG5lIGFuZCAKCQlPcGVyYXRvciAvaW1hZ2UgZXEgYW5kCgkJQUdNQ09SRV9wcm9kdWNpbmdf
c2VwcyBub3QgYW5kCgkJbGV2ZWwyIG5vdCBhbmQKCQl7CgkJCVdpZHRoIEhlaWdodCBCaXRzUGVy
Q29tcG9uZW50IEltYWdlTWF0cml4IAoJCQlbCgkJCS9EYXRhU291cmNlIGxvYWQgL2V4ZWMgY3Z4
CgkJCXsKCQkJCTAgMSAyIGluZGV4IGxlbmd0aCAxIHN1YnsKCQkJCQkxIGluZGV4IGV4Y2gKCQkJ
CQkyIGNvcHkgZ2V0IDI1NSB4b3IgcHV0CgkJCQl9Zm9yCgkJCX0gL2V4ZWMgY3Z4CgkJCV0gY3Z4
IGJpbmQKCQkJTWFwcGVkQ1NBIDAgZ2V0IC9EZXZpY2VDTVlLIGVxewoJCQkJQ29tcG9uZW50cyBh
bG9hZCBwb3AKCQkJfXsKCQkJCTAgMCAwIENvbXBvbmVudHMgYWxvYWQgcG9wIDEgZXhjaCBzdWIK
CQkJfWlmZWxzZQoJCQlOYW1lIGZpbmRjbXlrY3VzdG9tY29sb3IKCQkJY3VzdG9tY29sb3JpbWFn
ZQoJCX17CgkJCUFHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbm90ewoJCQkJbGV2ZWwyewoJCQkJCUFH
TUNPUkVfYXZvaWRfTDJfc2VwX3NwYWNlIG5vdCBjdXJyZW50Y29sb3JzcGFjZSAwIGdldCAvU2Vw
YXJhdGlvbiBuZSBhbmR7CgkJCQkJCVsvU2VwYXJhdGlvbiBOYW1lIE1hcHBlZENTQSBzZXBfcHJv
Y19uYW1lIGV4Y2ggMCBnZXQgZXhjaCBsb2FkIF0gc2V0Y29sb3JzcGFjZV9vcHQKCQkJCQkJL3Nl
cF90aW50IEFHTUNPUkVfZ2dldCBzZXRjb2xvcgoJCQkJCX1pZgoJCQkJCWN1cnJlbnRkaWN0IGlt
YWdlb3JtYXNrCgkJCQl9eyAKCQkJCQljdXJyZW50ZGljdAoJCQkJCU9wZXJhdG9yIC9pbWFnZW1h
c2sgZXF7CgkJCQkJCWltYWdlb3JtYXNrCgkJCQkJfXsKCQkJCQkJc2VwX2ltYWdlb3JtYXNrX2xl
djEKCQkJCQl9aWZlbHNlCgkJCQl9aWZlbHNlCiAJCQl9ewoJCQkJQUdNQ09SRV9ob3N0X3NlcHsK
CQkJCQlPcGVyYXRvci9rbm9ja291dCBlcXsKCQkJCQkJY3VycmVudGRpY3QvSW1hZ2VNYXRyaXgg
Z2V0IGNvbmNhdAoJCQkJCQlrbm9ja291dF91bml0c3EKCQkJCQl9ewoJCQkJCQljdXJyZW50Z3Jh
eSAxIG5lewogCQkJCQkJCUFHTUNPUkVfaXNfY215a19zZXAgTmFtZSAoQWxsKSBuZSBhbmR7CiAJ
CQkJCQkJCWxldmVsMnsKCSAJCQkJCQkJCVsgL1NlcGFyYXRpb24gTmFtZSBbL0RldmljZUdyYXld
CgkgCQkJCQkJCQl7IAoJIAkJCQkJCQkJCXNlcF9jb2xvcnNwYWNlX3Byb2MgQUdNQ09SRV9nZXRf
aW5rX2RhdGEKCQkJCQkJCQkJCTEgZXhjaCBzdWIKCSAJCQkJCQkJCX0gYmluZAoJCQkJCQkJCQld
IEFHTUNPUkVfJnNldGNvbG9yc3BhY2UKCQkJCQkJCQkJL3NlcF90aW50IEFHTUNPUkVfZ2dldCBB
R01DT1JFXyZzZXRjb2xvcgogCQkJCQkJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2tfc3lzCgkg
CQkJCQkJCX17CgkgCQkJCQkJCQljdXJyZW50ZGljdAoJCQkJCQkJCQlPcGVyYXRvciAvaW1hZ2Vt
YXNrIGVxewoJCQkJCQkJCQkJaW1hZ2Vvcm1hc2tfc3lzCgkJCQkJCQkJCX17CgkJCQkJCQkJCQlz
ZXBfaW1hZ2VfbGV2MV9zZXAKCQkJCQkJCQkJfWlmZWxzZQoJIAkJCQkJCQl9aWZlbHNlCiAJCQkJ
CQkJfXsKIAkJCQkJCQkJT3BlcmF0b3IvaW1hZ2VtYXNrIG5lewoJCQkJCQkJCQlpbnZlcnRfaW1h
Z2Vfc2FtcGxlcwogCQkJCQkJCQl9aWYKCQkgCQkJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2tf
c3lzCiAJCQkJCQkJfWlmZWxzZQogCQkJCQkJfXsKIAkJCQkJCQljdXJyZW50b3ZlcnByaW50IG5v
dCBOYW1lIChBbGwpIGVxIG9yIE9wZXJhdG9yL2ltYWdlbWFzayBlcSBhbmR7CgkJCQkJCQkJY3Vy
cmVudGRpY3QgaW1hZ2Vvcm1hc2tfc3lzIAoJCQkJCQkJCX17CgkJCQkJCQkJY3VycmVudG92ZXJw
cmludCBub3QKCQkJCQkJCQkJewogCQkJCQkJCQkJZ3NhdmUgCiAJCQkJCQkJCQlrbm9ja291dF91
bml0c3EKIAkJCQkJCQkJCWdyZXN0b3JlCgkJCQkJCQkJCX1pZgoJCQkJCQkJCWN1cnJlbnRkaWN0
IGNvbnN1bWVpbWFnZWRhdGEgCgkJIAkJCQkJfWlmZWxzZQogCQkJCQkJfWlmZWxzZQoJCSAJCQl9
aWZlbHNlCiAJCQkJfXsKCQkJCQljdXJyZW50Y29sb3JzcGFjZSAwIGdldCAvU2VwYXJhdGlvbiBu
ZXsKCQkJCQkJWy9TZXBhcmF0aW9uIE5hbWUgTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCAw
IGdldCBleGNoIGxvYWQgXSBzZXRjb2xvcnNwYWNlX29wdAoJCQkJCQkvc2VwX3RpbnQgQUdNQ09S
RV9nZ2V0IHNldGNvbG9yCgkJCQkJfWlmCgkJCQkJY3VycmVudG92ZXJwcmludCAKCQkJCQlNYXBw
ZWRDU0EgMCBnZXQgL0RldmljZUNNWUsgZXEgYW5kIAoJCQkJCU5hbWUgaW5SaXBfc3BvdF9oYXNf
aW5rIG5vdCBhbmQgCgkJCQkJTmFtZSAoQWxsKSBuZSBhbmQgewoJCQkJCQlpbWFnZW9ybWFza19s
Ml9vdmVycHJpbnQKCQkJCQl9ewoJCQkJCQljdXJyZW50ZGljdCBpbWFnZW9ybWFzawogCQkJCQl9
aWZlbHNlCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UKCQl9aWZlbHNlCgkJY2xlYXJ0b21hcmsgcmVz
dG9yZQoJfWlmZWxzZQoJZW5kCgllbmQKfWRlZgovZGVjb2RlX2ltYWdlX3NhbXBsZQp7Cgk0IDEg
cm9sbCBleGNoIGR1cCA1IDEgcm9sbAoJc3ViIDIgNCAtMSByb2xsIGV4cCAxIHN1YiBkaXYgbXVs
IGFkZAp9IGJkZgovY29sb3JTcGFjZUVsZW1DbnQKewoJY3VycmVudGNvbG9yc3BhY2UgMCBnZXQg
ZHVwIC9EZXZpY2VDTVlLIGVxIHsKCQlwb3AgNAoJfQoJewoJCS9EZXZpY2VSR0IgZXEgewoJCQlw
b3AgMwoJCX17CgkJCTEKCQl9IGlmZWxzZQoJfSBpZmVsc2UKfSBiZGYKL2Rldm5fc2VwX2RhdGFz
b3VyY2UKewoJMSBkaWN0IGJlZ2luCgkvZGF0YVNvdXJjZSB4ZGYKCVsKCQkwIDEgZGF0YVNvdXJj
ZSBsZW5ndGggMSBzdWIgewoJCQlkdXAgY3VycmVudGRpY3QgL2RhdGFTb3VyY2UgZ2V0IC9leGNo
IGN2eCAvZ2V0IGN2eCAvZXhlYyBjdngKCQkJL2V4Y2ggY3Z4IG5hbWVzX2luZGV4IC9uZSBjdngg
WyAvcG9wIGN2eCBdIGN2eCAvaWYgY3Z4CgkJfSBmb3IKCV0gY3Z4IGJpbmQKCWVuZAp9IGJkZgkJ
Ci9kZXZuX2FsdF9kYXRhc291cmNlCnsKCTExIGRpY3QgYmVnaW4KCS9zcmNEYXRhU3RycyB4ZGYK
CS9kc3REYXRhU3RyIHhkZgoJL2NvbnZQcm9jIHhkZgoJL29yaWdjb2xvclNwYWNlRWxlbUNudCB4
ZGYKCS9vcmlnTXVsdGlwbGVEYXRhU291cmNlcyB4ZGYKCS9vcmlnQml0c1BlckNvbXBvbmVudCB4
ZGYKCS9vcmlnRGVjb2RlIHhkZgoJL29yaWdEYXRhU291cmNlIHhkZgoJL2RzQ250IG9yaWdNdWx0
aXBsZURhdGFTb3VyY2VzIHtvcmlnRGF0YVNvdXJjZSBsZW5ndGh9ezF9aWZlbHNlIGRlZgoJL3Nh
bXBsZXNOZWVkRGVjb2RpbmcKCQkwIDAgMSBvcmlnRGVjb2RlIGxlbmd0aCAxIHN1YiB7CgkJCW9y
aWdEZWNvZGUgZXhjaCBnZXQgYWRkCgkJfSBmb3IKCQlvcmlnRGVjb2RlIGxlbmd0aCAyIGRpdiBk
aXYKCQlkdXAgMSBlcSB7CgkJCS9kZWNvZGVEaXZpc29yIDIgb3JpZ0JpdHNQZXJDb21wb25lbnQg
ZXhwIDEgc3ViIGRlZgoJCX0gaWYKCQkyIG9yaWdCaXRzUGVyQ29tcG9uZW50IGV4cCAxIHN1YiBu
ZQoJZGVmCglbCgkJMCAxIGRzQ250IDEgc3ViIFsKCQkJY3VycmVudGRpY3QgL29yaWdNdWx0aXBs
ZURhdGFTb3VyY2VzIGdldCB7CgkJCQlkdXAgY3VycmVudGRpY3QgL29yaWdEYXRhU291cmNlIGdl
dCBleGNoIGdldCBkdXAgdHlwZQoJCQl9ewoJCQkJY3VycmVudGRpY3QgL29yaWdEYXRhU291cmNl
IGdldCBkdXAgdHlwZQoJCQl9IGlmZWxzZQoJCQlkdXAgL2ZpbGV0eXBlIGVxIHsKCQkJCXBvcCBj
dXJyZW50ZGljdCAvc3JjRGF0YVN0cnMgZ2V0IDMgLTEgL3JvbGwgY3Z4IC9nZXQgY3Z4IC9yZWFk
c3RyaW5nIGN2eCAvcG9wIGN2eAoJCQl9ewoJCQkJL3N0cmluZ3R5cGUgbmUgewoJCQkJCS9leGVj
IGN2eAoJCQkJfSBpZgoJCQkJY3VycmVudGRpY3QgL3NyY0RhdGFTdHJzIGdldCAvZXhjaCBjdngg
MyAtMSAvcm9sbCBjdnggL3hwdCBjdngKCQkJfSBpZmVsc2UKCQldIGN2eCAvZm9yIGN2eAoJCWN1
cnJlbnRkaWN0IC9zcmNEYXRhU3RycyBnZXQgMCAvZ2V0IGN2eCAvbGVuZ3RoIGN2eCAwIC9uZSBj
dnggWwoJCQkwIDEgV2lkdGggMSBzdWIgWwoJCQkJQWRvYmVfQUdNX1V0aWxzIC9BR01VVElMX25k
eCAveGRkZiBjdngKCQkJCWN1cnJlbnRkaWN0IC9vcmlnTXVsdGlwbGVEYXRhU291cmNlcyBnZXQg
ewoJCQkJCTAgMSBkc0NudCAxIHN1YiBbCgkJCQkJCUFkb2JlX0FHTV9VdGlscyAvQUdNVVRJTF9u
ZHgxIC94ZGRmIGN2eAoJCQkJCQljdXJyZW50ZGljdCAvc3JjRGF0YVN0cnMgZ2V0IC9BR01VVElM
X25keDEgL2xvYWQgY3Z4IC9nZXQgY3Z4IC9BR01VVElMX25keCAvbG9hZCBjdnggL2dldCBjdngK
CQkJCQkJc2FtcGxlc05lZWREZWNvZGluZyB7CgkJCQkJCQljdXJyZW50ZGljdCAvZGVjb2RlRGl2
aXNvciBrbm93biB7CgkJCQkJCQkJY3VycmVudGRpY3QgL2RlY29kZURpdmlzb3IgZ2V0IC9kaXYg
Y3Z4CgkJCQkJCQl9ewoJCQkJCQkJCWN1cnJlbnRkaWN0IC9vcmlnRGVjb2RlIGdldCAvQUdNVVRJ
TF9uZHgxIC9sb2FkIGN2eCAyIC9tdWwgY3Z4IDIgL2dldGludGVydmFsIGN2eCAvYWxvYWQgY3Z4
IC9wb3AgY3Z4cwoJCQkJCQkJCUJpdHNQZXJDb21wb25lbnQgL2RlY29kZV9pbWFnZV9zYW1wbGUg
bG9hZCAvZXhlYyBjdngKCQkJCQkJCX0gaWZlbHNlCgkJCQkJCX0gaWYKCQkJCQldIGN2eCAvZm9y
IGN2eAoJCQkJfXsKCQkJCQlBZG9iZV9BR01fVXRpbHMgL0FHTVVUSUxfbmR4MSAwIC9kZGYgY3Z4
CgkJCQkJY3VycmVudGRpY3QgL3NyY0RhdGFTdHJzIGdldCAwIC9nZXQgY3Z4IC9BR01VVElMX25k
eCAvbG9hZCBjdngJCQoJCQkJCWN1cnJlbnRkaWN0IC9vcmlnRGVjb2RlIGdldCBsZW5ndGggMiBp
ZGl2IGR1cCAzIDEgL3JvbGwgY3Z4IC9tdWwgY3Z4IC9leGNoIGN2eCAvZ2V0aW50ZXJ2YWwgY3Z4
IAoJCQkJCVsKCQkJCQkJc2FtcGxlc05lZWREZWNvZGluZyB7CgkJCQkJCQljdXJyZW50ZGljdCAv
ZGVjb2RlRGl2aXNvciBrbm93biB7CgkJCQkJCQkJY3VycmVudGRpY3QgL2RlY29kZURpdmlzb3Ig
Z2V0IC9kaXYgY3Z4CgkJCQkJCQl9ewoJCQkJCQkJCWN1cnJlbnRkaWN0IC9vcmlnRGVjb2RlIGdl
dCAvQUdNVVRJTF9uZHgxIC9sb2FkIGN2eCAyIC9tdWwgY3Z4IDIgL2dldGludGVydmFsIGN2eCAv
YWxvYWQgY3Z4IC9wb3AgY3Z4CgkJCQkJCQkJQml0c1BlckNvbXBvbmVudCAvZGVjb2RlX2ltYWdl
X3NhbXBsZSBsb2FkIC9leGVjIGN2eAoJCQkJCQkJCUFkb2JlX0FHTV9VdGlscyAvQUdNVVRJTF9u
ZHgxIC9BR01VVElMX25keDEgL2xvYWQgY3Z4IDEgL2FkZCBjdnggL2RkZiBjdngKCQkJCQkJCX0g
aWZlbHNlCgkJCQkJCX0gaWYKCQkJCQldIGN2eCAvZm9yYWxsIGN2eAoJCQkJfSBpZmVsc2UKCQkJ
CWN1cnJlbnRkaWN0IC9jb252UHJvYyBnZXQgL2V4ZWMgY3Z4CgkJCQljdXJyZW50ZGljdCAvb3Jp
Z2NvbG9yU3BhY2VFbGVtQ250IGdldCAxIHN1YiAtMSAwIFsKCQkJCQljdXJyZW50ZGljdCAvZHN0
RGF0YVN0ciBnZXQgMyAxIC9yb2xsIGN2eCAvQUdNVVRJTF9uZHggL2xvYWQgY3Z4IGN1cnJlbnRk
aWN0IC9vcmlnY29sb3JTcGFjZUVsZW1DbnQgZ2V0IC9tdWwgY3Z4IC9hZGQgY3Z4IC9leGNoIGN2
eAoJCQkJCWN1cnJlbnRkaWN0IC9jb252UHJvYyBnZXQgL2ZpbHRlcl9pbmRleGVkX2Rldm4gbG9h
ZCBuZSB7CgkJCQkJCTI1NSAvbXVsIGN2eCAvY3ZpIGN2eCAKCQkJCQl9IGlmCgkJCQkJL3B1dCBj
dnggCgkJCQldIGN2eCAvZm9yIGN2eAoJCQldIGN2eCAvZm9yIGN2eAoJCQljdXJyZW50ZGljdCAv
ZHN0RGF0YVN0ciBnZXQKCQldIGN2eCAvaWYgY3Z4CgldIGN2eCBiaW5kCgllbmQKfSBiZGYKL2Rl
dm5faW1hZ2Vvcm1hc2sKewogCS9kZXZpY2VuX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQg
YmVnaW4KCS9NYXBwZWRDU0EgQ1NBIG1hcF9jc2EgZGVmCgkyIGRpY3QgYmVnaW4KCWR1cCBkdXAK
CS9kc3REYXRhU3RyIGV4Y2ggL1dpZHRoIGdldCBjb2xvclNwYWNlRWxlbUNudCBtdWwgc3RyaW5n
IGRlZgoJL3NyY0RhdGFTdHJzIFsgMyAtMSByb2xsIGJlZ2luCgkJY3VycmVudGRpY3QgL011bHRp
cGxlRGF0YVNvdXJjZXMga25vd24ge011bHRpcGxlRGF0YVNvdXJjZXMge0RhdGFTb3VyY2UgbGVu
Z3RofXsxfWlmZWxzZX17MX0gaWZlbHNlCgkJewoJCQlXaWR0aCBEZWNvZGUgbGVuZ3RoIDIgZGl2
IG11bCBjdmkgc3RyaW5nCgkJfSByZXBlYXQKCQllbmQgXSBkZWYKCWJlZ2luCglTa2lwSW1hZ2VQ
cm9jIHsKCQljdXJyZW50ZGljdCBjb25zdW1laW1hZ2VkYXRhCgl9Cgl7CgkJc2F2ZSBtYXJrIAoJ
CUFHTUNPUkVfcHJvZHVjaW5nX3NlcHMgbm90IHsKCQkJbGV2ZWwzIG5vdCB7CgkJCQlPcGVyYXRv
ciAvaW1hZ2VtYXNrIG5lIHsKCQkJCQkvRGF0YVNvdXJjZSBbCgkJCQkJCURhdGFTb3VyY2UgRGVj
b2RlIEJpdHNQZXJDb21wb25lbnQgY3VycmVudGRpY3QgL011bHRpcGxlRGF0YVNvdXJjZXMga25v
d24ge011bHRpcGxlRGF0YVNvdXJjZXN9e2ZhbHNlfSBpZmVsc2UKCQkJCQkJY29sb3JTcGFjZUVs
ZW1DbnQgL2RldmljZW5fY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvVGludFRyYW5zZm9y
bSBnZXQgCgkJCQkJCWRzdERhdGFTdHIgc3JjRGF0YVN0cnMgZGV2bl9hbHRfZGF0YXNvdXJjZSAv
ZXhlYyBjdngKCQkJCQkJXSBjdnggMCAoKSAvU3ViRmlsZURlY29kZSBmaWx0ZXIgZGVmCgkJCQkJ
L011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UgZGVmCgkJCQkJL0RlY29kZSBjb2xvclNwYWNlRWxl
bUNudCBbIGV4Y2ggezAgMX0gcmVwZWF0IF0gZGVmCgkJCQl9IGlmCgkJCX1pZgoJCQljdXJyZW50
ZGljdCBpbWFnZW9ybWFzawogCQl9ewoJCQlBR01DT1JFX2hvc3Rfc2VwewoJCQkJTmFtZXMgY29u
dmVydF90b19wcm9jZXNzIHsKCQkJCQlDU0EgbWFwX2NzYSAwIGdldCAvRGV2aWNlQ01ZSyBlcSB7
CgkJCQkJCS9EYXRhU291cmNlCgkJCQkJCQlXaWR0aCBCaXRzUGVyQ29tcG9uZW50IG11bCA3IGFk
ZCA4IGlkaXYgSGVpZ2h0IG11bCA0IG11bCAKCQkJCQkJCVsKCQkJCQkJCURhdGFTb3VyY2UgRGVj
b2RlIEJpdHNQZXJDb21wb25lbnQgY3VycmVudGRpY3QgL011bHRpcGxlRGF0YVNvdXJjZXMga25v
d24ge011bHRpcGxlRGF0YVNvdXJjZXN9e2ZhbHNlfSBpZmVsc2UKCQkJCQkJCTQgL2RldmljZW5f
Y29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvVGludFRyYW5zZm9ybSBnZXQgCgkJCQkJCQlk
c3REYXRhU3RyIHNyY0RhdGFTdHJzIGRldm5fYWx0X2RhdGFzb3VyY2UgL2V4ZWMgY3Z4CgkJCQkJ
CQldIGN2eAoJCQkJCQlmaWx0ZXJfY215ayAwICgpIC9TdWJGaWxlRGVjb2RlIGZpbHRlciBkZWYK
CQkJCQkJL011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UgZGVmCgkJCQkJCS9EZWNvZGUgWzEgMF0g
ZGVmCgkJCQkJCS9EZXZpY2VHcmF5IHNldGNvbG9yc3BhY2UKCQkJIAkJCWN1cnJlbnRkaWN0IGlt
YWdlb3JtYXNrX3N5cwogCQkJCQl9ewoJCQkJCQlBR01DT1JFX3JlcG9ydF91bnN1cHBvcnRlZF9j
b2xvcl9zcGFjZQoJCQkJCQlBR01DT1JFX2JsYWNrX3BsYXRlIHsKCQkJCQkJCS9EYXRhU291cmNl
IFsKCQkJCQkJCQlEYXRhU291cmNlIERlY29kZSBCaXRzUGVyQ29tcG9uZW50IGN1cnJlbnRkaWN0
IC9NdWx0aXBsZURhdGFTb3VyY2VzIGtub3duIHtNdWx0aXBsZURhdGFTb3VyY2VzfXtmYWxzZX0g
aWZlbHNlCgkJCQkJCQkJQ1NBIG1hcF9jc2EgMCBnZXQgL0RldmljZVJHQiBlcXszfXsxfWlmZWxz
ZSAvZGV2aWNlbl9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0IC9UaW50VHJhbnNmb3JtIGdl
dAoJCQkJCQkJCWRzdERhdGFTdHIgc3JjRGF0YVN0cnMgZGV2bl9hbHRfZGF0YXNvdXJjZSAvZXhl
YyBjdngKCQkJCQkJCQldIGN2eCAwICgpIC9TdWJGaWxlRGVjb2RlIGZpbHRlciBkZWYKCQkJCQkJ
CS9NdWx0aXBsZURhdGFTb3VyY2VzIGZhbHNlIGRlZgoJCQkJCQkJL0RlY29kZSBjb2xvclNwYWNl
RWxlbUNudCBbIGV4Y2ggezAgMX0gcmVwZWF0IF0gZGVmCgkJCQkgCQkJY3VycmVudGRpY3QgaW1h
Z2Vvcm1hc2tfc3lzCgkJCQkgCQl9CgkJCQkJCXsKCSAJCQkJCQlnc2F2ZSAKCSAJCQkJCQlrbm9j
a291dF91bml0c3EKCSAJCQkJCQlncmVzdG9yZQoJCQkJCQkJY3VycmVudGRpY3QgY29uc3VtZWlt
YWdlZGF0YSAKCQkJCQkJfSBpZmVsc2UKIAkJCQkJfSBpZmVsc2UKCQkJCX0KCQkJCXsJCgkJCQkJ
L2RldmljZW5fY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCAvbmFtZXNfaW5kZXgga25vd24g
ewoJIAkJCQkJT3BlcmF0b3IvaW1hZ2VtYXNrIG5lewoJIAkJCQkJCU11bHRpcGxlRGF0YVNvdXJj
ZXMgewoJCSAJCQkJCQkvRGF0YVNvdXJjZSBbIERhdGFTb3VyY2UgZGV2bl9zZXBfZGF0YXNvdXJj
ZSAvZXhlYyBjdnggXSBjdnggZGVmCgkJCQkJCQkJL011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2Ug
ZGVmCgkgCQkJCQkJfXsKCQkJCQkJCQkvRGF0YVNvdXJjZSAvRGF0YVNvdXJjZSBsb2FkIGRzdERh
dGFTdHIgc3JjRGF0YVN0cnMgMCBnZXQgZmlsdGVyX2Rldm4gZGVmCgkgCQkJCQkJfSBpZmVsc2UK
CQkJCQkJCWludmVydF9pbWFnZV9zYW1wbGVzCgkgCQkJCQl9IGlmCgkJCSAJCQljdXJyZW50ZGlj
dCBpbWFnZW9ybWFza19zeXMKCSAJCQkJfXsKCSAJCQkJCWN1cnJlbnRvdmVycHJpbnQgbm90IE9w
ZXJhdG9yL2ltYWdlbWFzayBlcSBhbmR7CgkJCQkJCQljdXJyZW50ZGljdCBpbWFnZW9ybWFza19z
eXMgCgkJCQkJCQl9ewoJCQkJCQkJY3VycmVudG92ZXJwcmludCBub3QKCQkJCQkJCQl7CgkgCQkJ
CQkJCWdzYXZlIAoJIAkJCQkJCQlrbm9ja291dF91bml0c3EKCSAJCQkJCQkJZ3Jlc3RvcmUKCQkJ
CQkJCQl9aWYKCQkJCQkJCWN1cnJlbnRkaWN0IGNvbnN1bWVpbWFnZWRhdGEgCgkJCSAJCQl9aWZl
bHNlCgkgCQkJCX1pZmVsc2UKCSAJCQl9aWZlbHNlCiAJCQl9ewoJCQkJY3VycmVudGRpY3QgaW1h
Z2Vvcm1hc2sKCQkJfWlmZWxzZQoJCX1pZmVsc2UKCQljbGVhcnRvbWFyayByZXN0b3JlCgl9aWZl
bHNlCgllbmQKCWVuZAoJZW5kCn1kZWYKL2ltYWdlb3JtYXNrX2wyX292ZXJwcmludAp7CgljdXJy
ZW50ZGljdAoJY3VycmVudGNteWtjb2xvciBhZGQgYWRkIGFkZCAwIGVxewoJCWN1cnJlbnRkaWN0
IGNvbnN1bWVpbWFnZWRhdGEKCX17CgkJbGV2ZWwzeyAJCQkKCQkJY3VycmVudGNteWtjb2xvciAK
CQkJL0FHTUlNR19rIHhkZiAKCQkJL0FHTUlNR195IHhkZiAKCQkJL0FHTUlNR19tIHhkZiAKCQkJ
L0FHTUlNR19jIHhkZgoJCQlPcGVyYXRvci9pbWFnZW1hc2sgZXF7CgkJCQlbL0RldmljZU4gWwoJ
CQkJQUdNSU1HX2MgMCBuZSB7L0N5YW59IGlmCgkJCQlBR01JTUdfbSAwIG5lIHsvTWFnZW50YX0g
aWYKCQkJCUFHTUlNR195IDAgbmUgey9ZZWxsb3d9IGlmCgkJCQlBR01JTUdfayAwIG5lIHsvQmxh
Y2t9IGlmCgkJCQldIC9EZXZpY2VDTVlLIHt9XSBzZXRjb2xvcnNwYWNlCgkJCQlBR01JTUdfYyAw
IG5lIHtBR01JTUdfY30gaWYKCQkJCUFHTUlNR19tIDAgbmUge0FHTUlNR19tfSBpZgoJCQkJQUdN
SU1HX3kgMCBuZSB7QUdNSU1HX3l9IGlmCgkJCQlBR01JTUdfayAwIG5lIHtBR01JTUdfa30gaWYK
CQkJCXNldGNvbG9yCQkJCgkJCX17CQoJCQkJL0RlY29kZSBbIERlY29kZSAwIGdldCAyNTUgbXVs
IERlY29kZSAxIGdldCAyNTUgbXVsIF0gZGVmCgkJCQlbL0luZGV4ZWQgCQkJCQoJCQkJCVsKCQkJ
CQkJL0RldmljZU4gWwoJCQkJCQkJQUdNSU1HX2MgMCBuZSB7L0N5YW59IGlmCgkJCQkJCQlBR01J
TUdfbSAwIG5lIHsvTWFnZW50YX0gaWYKCQkJCQkJCUFHTUlNR195IDAgbmUgey9ZZWxsb3d9IGlm
CgkJCQkJCQlBR01JTUdfayAwIG5lIHsvQmxhY2t9IGlmCgkJCQkJCV0gCgkJCQkJCS9EZXZpY2VD
TVlLIHsKCQkJCQkJCUFHTUlNR19rIDAgZXEgezB9IGlmCgkJCQkJCQlBR01JTUdfeSAwIGVxIHsw
IGV4Y2h9IGlmCgkJCQkJCQlBR01JTUdfbSAwIGVxIHswIDMgMSByb2xsfSBpZgoJCQkJCQkJQUdN
SU1HX2MgMCBlcSB7MCA0IDEgcm9sbH0gaWYJCQkJCQkKCQkJCQkJfQoJCQkJCV0KCQkJCQkyNTUK
CQkJCQl7CgkJCQkJCTI1NSBkaXYgCgkJCQkJCW1hcmsgZXhjaAoJCQkJCQlkdXAJZHVwIGR1cAoJ
CQkJCQlBR01JTUdfayAwIG5lewoJCQkJCQkJL3NlcF90aW50IEFHTUNPUkVfZ2dldCBtdWwgTWFw
cGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCBwb3AgbG9hZCBleGVjIDQgMSByb2xsIHBvcCBwb3Ag
cG9wCQkKCQkJCQkJCWNvdW50dG9tYXJrIDEgcm9sbAoJCQkJCQl9ewoJCQkJCQkJcG9wCgkJCQkJ
CX1pZmVsc2UKCQkJCQkJQUdNSU1HX3kgMCBuZXsKCQkJCQkJCS9zZXBfdGludCBBR01DT1JFX2dn
ZXQgbXVsIE1hcHBlZENTQSBzZXBfcHJvY19uYW1lIGV4Y2ggcG9wIGxvYWQgZXhlYyA0IDIgcm9s
bCBwb3AgcG9wIHBvcAkJCgkJCQkJCQljb3VudHRvbWFyayAxIHJvbGwKCQkJCQkJfXsKCQkJCQkJ
CXBvcAoJCQkJCQl9aWZlbHNlCgkJCQkJCUFHTUlNR19tIDAgbmV7CgkJCQkJCQkvc2VwX3RpbnQg
QUdNQ09SRV9nZ2V0IG11bCBNYXBwZWRDU0Egc2VwX3Byb2NfbmFtZSBleGNoIHBvcCBsb2FkIGV4
ZWMgNCAzIHJvbGwgcG9wIHBvcCBwb3AJCQoJCQkJCQkJY291bnR0b21hcmsgMSByb2xsCgkJCQkJ
CX17CgkJCQkJCQlwb3AKCQkJCQkJfWlmZWxzZQoJCQkJCQlBR01JTUdfYyAwIG5lewoJCQkJCQkJ
L3NlcF90aW50IEFHTUNPUkVfZ2dldCBtdWwgTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgZXhjaCBw
b3AgbG9hZCBleGVjIHBvcCBwb3AgcG9wCQkKCQkJCQkJCWNvdW50dG9tYXJrIDEgcm9sbAoJCQkJ
CQl9ewoJCQkJCQkJcG9wCgkJCQkJCX1pZmVsc2UKCQkJCQkJY291bnR0b21hcmsgMSBhZGQgLTEg
cm9sbCBwb3AKCQkJCQl9CgkJCQldIHNldGNvbG9yc3BhY2UKCQkJfWlmZWxzZQoJCQlpbWFnZW9y
bWFza19zeXMKCQl9ewoJd3JpdGVfaW1hZ2VfZmlsZXsKCQljdXJyZW50Y215a2NvbG9yCgkJMCBu
ZXsKCQkJWy9TZXBhcmF0aW9uIC9CbGFjayAvRGV2aWNlR3JheSB7fV0gc2V0Y29sb3JzcGFjZQoJ
CQlnc2F2ZQoJCQkvQmxhY2sKCQkJW3sxIGV4Y2ggc3ViIC9zZXBfdGludCBBR01DT1JFX2dnZXQg
bXVsfSAvZXhlYyBjdnggTWFwcGVkQ1NBIHNlcF9wcm9jX25hbWUgY3Z4IGV4Y2ggcG9wIHs0IDEg
cm9sbCBwb3AgcG9wIHBvcCAxIGV4Y2ggc3VifSAvZXhlYyBjdnhdCgkJCWN2eCBtb2RpZnlfaGFs
ZnRvbmVfeGZlcgoJCQlPcGVyYXRvciBjdXJyZW50ZGljdCByZWFkX2ltYWdlX2ZpbGUKCQkJZ3Jl
c3RvcmUKCQl9aWYKCQkwIG5lewoJCQlbL1NlcGFyYXRpb24gL1llbGxvdyAvRGV2aWNlR3JheSB7
fV0gc2V0Y29sb3JzcGFjZQoJCQlnc2F2ZQoJCQkvWWVsbG93CgkJCVt7MSBleGNoIHN1YiAvc2Vw
X3RpbnQgQUdNQ09SRV9nZ2V0IG11bH0gL2V4ZWMgY3Z4IE1hcHBlZENTQSBzZXBfcHJvY19uYW1l
IGN2eCBleGNoIHBvcCB7NCAyIHJvbGwgcG9wIHBvcCBwb3AgMSBleGNoIHN1Yn0gL2V4ZWMgY3Z4
XQoJCQljdnggbW9kaWZ5X2hhbGZ0b25lX3hmZXIKCQkJT3BlcmF0b3IgY3VycmVudGRpY3QgcmVh
ZF9pbWFnZV9maWxlCgkJCWdyZXN0b3JlCgkJfWlmCgkJMCBuZXsKCQkJWy9TZXBhcmF0aW9uIC9N
YWdlbnRhIC9EZXZpY2VHcmF5IHt9XSBzZXRjb2xvcnNwYWNlCgkJCWdzYXZlCgkJCS9NYWdlbnRh
CgkJCVt7MSBleGNoIHN1YiAvc2VwX3RpbnQgQUdNQ09SRV9nZ2V0IG11bH0gL2V4ZWMgY3Z4IE1h
cHBlZENTQSBzZXBfcHJvY19uYW1lIGN2eCBleGNoIHBvcCB7NCAzIHJvbGwgcG9wIHBvcCBwb3Ag
MSBleGNoIHN1Yn0gL2V4ZWMgY3Z4XQoJCQljdnggbW9kaWZ5X2hhbGZ0b25lX3hmZXIKCQkJT3Bl
cmF0b3IgY3VycmVudGRpY3QgcmVhZF9pbWFnZV9maWxlCgkJCWdyZXN0b3JlCgkJfWlmCgkJMCBu
ZXsKCQkJWy9TZXBhcmF0aW9uIC9DeWFuIC9EZXZpY2VHcmF5IHt9XSBzZXRjb2xvcnNwYWNlCgkJ
CWdzYXZlCgkJCS9DeWFuIAoJCQlbezEgZXhjaCBzdWIgL3NlcF90aW50IEFHTUNPUkVfZ2dldCBt
dWx9IC9leGVjIGN2eCBNYXBwZWRDU0Egc2VwX3Byb2NfbmFtZSBjdnggZXhjaCBwb3Age3BvcCBw
b3AgcG9wIDEgZXhjaCBzdWJ9IC9leGVjIGN2eF0KCQkJY3Z4IG1vZGlmeV9oYWxmdG9uZV94ZmVy
CgkJCU9wZXJhdG9yIGN1cnJlbnRkaWN0IHJlYWRfaW1hZ2VfZmlsZQoJCQlncmVzdG9yZQoJCX0g
aWYKCQkJCWNsb3NlX2ltYWdlX2ZpbGUKCQkJfXsKCQkJCWltYWdlb3JtYXNrCgkJCX1pZmVsc2UK
CQl9aWZlbHNlCgl9aWZlbHNlCn0gZGVmCi9pbmRleGVkX2ltYWdlb3JtYXNrCnsKCWJlZ2luCgkJ
c2F2ZSBtYXJrIAogCQljdXJyZW50ZGljdAogCQlBR01DT1JFX2hvc3Rfc2VwewoJCQlPcGVyYXRv
ci9rbm9ja291dCBlcXsKCQkJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQg
ZHVwIC9DU0Ega25vd24gewoJCQkJCS9DU0EgZ2V0IG1hcF9jc2EKCQkJCX17CgkJCQkJL0NTRCBn
ZXQgZ2V0X2NzZCAvTmFtZXMgZ2V0CgkJCQl9IGlmZWxzZQoJCQkJb3ZlcnByaW50X3BsYXRlIG5v
dHsKCQkJCQlrbm9ja291dF91bml0c3EKCQkJCX1pZgoJCQl9ewoJCQkJSW5kZXhlZF9EZXZpY2VO
IHsKCQkJCQkvZGV2aWNlbl9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0IC9uYW1lc19pbmRl
eCBrbm93biB7CgkJCSAJCQlpbmRleGVkX2ltYWdlX2xldjJfc2VwCgkJCQkJfXsKCQkJCQkJY3Vy
cmVudG92ZXJwcmludCBub3R7CgkJCQkJCQlrbm9ja291dF91bml0c3EKCQkJIAkJCX1pZgoJCQkg
CQkJY3VycmVudGRpY3QgY29uc3VtZWltYWdlZGF0YQoJCQkJCX0gaWZlbHNlCgkJCQl9ewoJCSAJ
CQlBR01DT1JFX2lzX2NteWtfc2VwewoJCQkJCQlPcGVyYXRvciAvaW1hZ2VtYXNrIGVxewoJCQkJ
CQkJaW1hZ2Vvcm1hc2tfc3lzCgkJCQkJCX17CgkJCQkJCQlsZXZlbDJ7CgkJCQkJCQkJaW5kZXhl
ZF9pbWFnZV9sZXYyX3NlcAoJCQkJCQkJfXsKCQkJCQkJCQlpbmRleGVkX2ltYWdlX2xldjFfc2Vw
CgkJCQkJCQl9aWZlbHNlCgkJCQkJCX1pZmVsc2UKCQkJCQl9ewoJCQkJCQljdXJyZW50b3ZlcnBy
aW50IG5vdHsKCQkJCQkJCWtub2Nrb3V0X3VuaXRzcQoJCQkgCQkJfWlmCgkJCSAJCQljdXJyZW50
ZGljdCBjb25zdW1laW1hZ2VkYXRhCgkJCQkJfWlmZWxzZQoJCQkJfWlmZWxzZQoJCQl9aWZlbHNl
CiAJCX17CgkJCWxldmVsMnsKCQkJCUluZGV4ZWRfRGV2aWNlTiB7CgkJCQkJL2luZGV4ZWRfY29s
b3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldCBiZWdpbgoJCQkJCUNTRCBnZXRfY3NkIGJlZ2luCgkJ
CQl9ewoJCQkJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQkJ
CQlDU0EgbWFwX2NzYSAwIGdldCAvRGV2aWNlQ01ZSyBlcSBwc19sZXZlbCAzIGdlIGFuZCBwc192
ZXJzaW9uIDMwMTUuMDA3IGx0IGFuZCB7CgkJCQkJCVsvSW5kZXhlZCBbL0RldmljZU4gWy9DeWFu
IC9NYWdlbnRhIC9ZZWxsb3cgL0JsYWNrXSAvRGV2aWNlQ01ZSyB7fV0gSGlWYWwgTG9va3VwXQoJ
CQkJCQlzZXRjb2xvcnNwYWNlCgkJCQkJfSBpZgoJCQkJCWVuZAoJCQkJfSBpZmVsc2UKCQkJCWlt
YWdlb3JtYXNrCgkJCQlJbmRleGVkX0RldmljZU4gewoJCQkJCWVuZAoJCQkJCWVuZAoJCQkJfSBp
ZgoJCQl9eyAKCQkJCU9wZXJhdG9yIC9pbWFnZW1hc2sgZXF7CgkJCQkJaW1hZ2Vvcm1hc2sKCQkJ
CX17CgkJCQkJaW5kZXhlZF9pbWFnZW9ybWFza19sZXYxCgkJCQl9aWZlbHNlCgkJCX1pZmVsc2UK
IAkJfWlmZWxzZQoJCWNsZWFydG9tYXJrIHJlc3RvcmUKCWVuZAp9ZGVmCi9pbmRleGVkX2ltYWdl
X2xldjJfc2VwCnsKCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4K
CWJlZ2luCgkJSW5kZXhlZF9EZXZpY2VOIG5vdCB7CgkJCWN1cnJlbnRjb2xvcnNwYWNlIAoJCQlk
dXAgMSAvRGV2aWNlR3JheSBwdXQKCQkJZHVwIDMKCQkJY3VycmVudGNvbG9yc3BhY2UgMiBnZXQg
MSBhZGQgc3RyaW5nCgkJCTAgMSAyIDMgQUdNQ09SRV9nZXRfaW5rX2RhdGEgNCBjdXJyZW50Y29s
b3JzcGFjZSAzIGdldCBsZW5ndGggMSBzdWIKCQkJewoJCQlkdXAgNCBpZGl2IGV4Y2ggY3VycmVu
dGNvbG9yc3BhY2UgMyBnZXQgZXhjaCBnZXQgMjU1IGV4Y2ggc3ViIDIgaW5kZXggMyAxIHJvbGwg
cHV0CgkJCX1mb3IgCgkJCXB1dAlzZXRjb2xvcnNwYWNlCgkJfSBpZgoJCWN1cnJlbnRkaWN0IAoJ
CU9wZXJhdG9yIC9pbWFnZW1hc2sgZXF7CgkJCUFHTUlNR18maW1hZ2VtYXNrCgkJfXsKCQkJdXNl
X21hc2sgewoJCQkJbGV2ZWwzIHtwcm9jZXNzX21hc2tfTDMgQUdNSU1HXyZpbWFnZX17bWFza2Vk
X2ltYWdlX3NpbXVsYXRpb259aWZlbHNlCgkJCX17CgkJCQlBR01JTUdfJmltYWdlCgkJCX1pZmVs
c2UKCQl9aWZlbHNlCgllbmQgZW5kCn1kZWYKICAvT1BJaW1hZ2UKICB7CiAgCWR1cCB0eXBlIC9k
aWN0dHlwZSBuZXsKICAJCTEwIGRpY3QgYmVnaW4KICAJCQkvRGF0YVNvdXJjZSB4ZGYKICAJCQkv
SW1hZ2VNYXRyaXggeGRmCiAgCQkJL0JpdHNQZXJDb21wb25lbnQgeGRmCiAgCQkJL0hlaWdodCB4
ZGYKICAJCQkvV2lkdGggeGRmCiAgCQkJL0ltYWdlVHlwZSAxIGRlZgogIAkJCS9EZWNvZGUgWzAg
MSBkZWZdCiAgCQkJY3VycmVudGRpY3QKICAJCWVuZAogIAl9aWYKICAJZHVwIGJlZ2luCiAgCQkv
TkNvbXBvbmVudHMgMSBjZG5kZgogIAkJL011bHRpcGxlRGF0YVNvdXJjZXMgZmFsc2UgY2RuZGYK
ICAJCS9Ta2lwSW1hZ2VQcm9jIHtmYWxzZX0gY2RuZGYKICAJCS9Ib3N0U2VwQ29sb3JJbWFnZSBm
YWxzZSBjZG5kZgogIAkJL0RlY29kZSBbCiAgCQkJCTAgCiAgCQkJCWN1cnJlbnRjb2xvcnNwYWNl
IDAgZ2V0IC9JbmRleGVkIGVxewogIAkJCQkJMiBCaXRzUGVyQ29tcG9uZW50IGV4cCAxIHN1Ygog
IAkJCQl9ewogIAkJCQkJMQogIAkJCQl9aWZlbHNlCiAgCQldIGNkbmRmCiAgCQkvT3BlcmF0b3Ig
L2ltYWdlIGNkbmRmCiAgCWVuZAogIAkvc2VwX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQg
bnVsbCBlcXsKICAJCWltYWdlb3JtYXNrCiAgCX17CiAgCQlnc2F2ZQogIAkJZHVwIGJlZ2luIGlu
dmVydF9pbWFnZV9zYW1wbGVzIGVuZAogIAkJc2VwX2ltYWdlb3JtYXNrCiAgCQlncmVzdG9yZQog
IAl9aWZlbHNlCiAgfWRlZgovY2FjaGVtYXNrX2xldmVsMgp7CgkzIGRpY3QgYmVnaW4KCS9MWldF
bmNvZGUgZmlsdGVyIC9Xcml0ZUZpbHRlciB4ZGYKCS9yZWFkQnVmZmVyIDI1NiBzdHJpbmcgZGVm
CgkvUmVhZEZpbHRlcgoJCWN1cnJlbnRmaWxlCgkJMCAoJUVuZE1hc2spIC9TdWJGaWxlRGVjb2Rl
IGZpbHRlcgoJCS9BU0NJSTg1RGVjb2RlIGZpbHRlcgoJCS9SdW5MZW5ndGhEZWNvZGUgZmlsdGVy
CglkZWYKCXsKCQlSZWFkRmlsdGVyIHJlYWRCdWZmZXIgcmVhZHN0cmluZyBleGNoCgkJV3JpdGVG
aWx0ZXIgZXhjaCB3cml0ZXN0cmluZwoJCW5vdCB7ZXhpdH0gaWYKCX1sb29wCglXcml0ZUZpbHRl
ciBjbG9zZWZpbGUKCWVuZAp9ZGVmCi9jYWNoZW1hc2tfbGV2ZWwzCnsKCWN1cnJlbnRmaWxlCgk8
PAoJCS9GaWx0ZXIgWyAvU3ViRmlsZURlY29kZSAvQVNDSUk4NURlY29kZSAvUnVuTGVuZ3RoRGVj
b2RlIF0KCQkvRGVjb2RlUGFybXMgWyA8PCAvRU9EQ291bnQgMCAvRU9EU3RyaW5nICglRW5kTWFz
aykgPj4gbnVsbCBudWxsIF0KCQkvSW50ZW50IDEKCT4+CgkvUmV1c2FibGVTdHJlYW1EZWNvZGUg
ZmlsdGVyCn1kZWYKL3Nwb3RfYWxpYXMKewoJL21hcHRvX3NlcF9pbWFnZW9ybWFzayAKCXsKCQlk
dXAgdHlwZSAvZGljdHR5cGUgbmV7CgkJCTEyIGRpY3QgYmVnaW4KCQkJCS9JbWFnZVR5cGUgMSBk
ZWYKCQkJCS9EYXRhU291cmNlIHhkZgoJCQkJL0ltYWdlTWF0cml4IHhkZgoJCQkJL0JpdHNQZXJD
b21wb25lbnQgeGRmCgkJCQkvSGVpZ2h0IHhkZgoJCQkJL1dpZHRoIHhkZgoJCQkJL011bHRpcGxl
RGF0YVNvdXJjZXMgZmFsc2UgZGVmCgkJfXsKCQkJYmVnaW4KCQl9aWZlbHNlCgkJCQkvRGVjb2Rl
IFsvY3VzdG9tY29sb3JfdGludCBBR01DT1JFX2dnZXQgMF0gZGVmCgkJCQkvT3BlcmF0b3IgL2lt
YWdlIGRlZgoJCQkJL0hvc3RTZXBDb2xvckltYWdlIGZhbHNlIGRlZgoJCQkJL1NraXBJbWFnZVBy
b2Mge2ZhbHNlfSBkZWYKCQkJCWN1cnJlbnRkaWN0IAoJCQllbmQKCQlzZXBfaW1hZ2Vvcm1hc2sK
CX1iZGYKCS9jdXN0b21jb2xvcmltYWdlCgl7CgkJQWRvYmVfQUdNX0ltYWdlL0FHTUlNR19jb2xv
ckFyeSB4ZGRmCgkJL2N1c3RvbWNvbG9yX3RpbnQgQUdNQ09SRV9nZ2V0CgkJYmRpY3QKCQkJL05h
bWUgQUdNSU1HX2NvbG9yQXJ5IDQgZ2V0CgkJCS9DU0EgWyAvRGV2aWNlQ01ZSyBdIAoJCQkvVGlu
dE1ldGhvZCAvU3VidHJhY3RpdmUKCQkJL1RpbnRQcm9jIG51bGwKCQkJL01hcHBlZENTQSBudWxs
CgkJCS9OQ29tcG9uZW50cyA0IAoJCQkvQ29tcG9uZW50cyBbIEFHTUlNR19jb2xvckFyeSBhbG9h
ZCBwb3AgcG9wIF0gCgkJZWRpY3QKCQlzZXRzZXBjb2xvcnNwYWNlCgkJbWFwdG9fc2VwX2ltYWdl
b3JtYXNrCgl9bmRmCglBZG9iZV9BR01fSW1hZ2UvQUdNSU1HXyZjdXN0b21jb2xvcmltYWdlIC9j
dXN0b21jb2xvcmltYWdlIGxvYWQgcHV0CgkvY3VzdG9tY29sb3JpbWFnZQoJewoJCUFkb2JlX0FH
TV9JbWFnZS9BR01JTUdfb3ZlcnJpZGUgZmFsc2UgcHV0CgkJZHVwIDQgZ2V0IG1hcF9hbGlhc3sK
CQkJL2N1c3RvbWNvbG9yX3RpbnQgQUdNQ09SRV9nZ2V0IGV4Y2ggc2V0c2VwY29sb3JzcGFjZQoJ
CQlwb3AKCQkJbWFwdG9fc2VwX2ltYWdlb3JtYXNrCgkJfXsKCQkJQUdNSU1HXyZjdXN0b21jb2xv
cmltYWdlCgkJfWlmZWxzZQkJCQoJfWJkZgp9ZGVmCi9zbmFwX3RvX2RldmljZQp7Cgk2IGRpY3Qg
YmVnaW4KCW1hdHJpeCBjdXJyZW50bWF0cml4CglkdXAgMCBnZXQgMCBlcSAxIGluZGV4IDMgZ2V0
IDAgZXEgYW5kCgkxIGluZGV4IDEgZ2V0IDAgZXEgMiBpbmRleCAyIGdldCAwIGVxIGFuZCBvciBl
eGNoIHBvcAoJewoJCTEgMSBkdHJhbnNmb3JtIDAgZ3QgZXhjaCAwIGd0IC9BR01JTUdfeFNpZ24/
IGV4Y2ggZGVmIC9BR01JTUdfeVNpZ24/IGV4Y2ggZGVmCgkJMCAwIHRyYW5zZm9ybQoJCUFHTUlN
R195U2lnbj8ge2Zsb29yIDAuMSBzdWJ9e2NlaWxpbmcgMC4xIGFkZH0gaWZlbHNlIGV4Y2gKCQlB
R01JTUdfeFNpZ24/IHtmbG9vciAwLjEgc3VifXtjZWlsaW5nIDAuMSBhZGR9IGlmZWxzZSBleGNo
CgkJaXRyYW5zZm9ybSAvQUdNSU1HX2xsWSBleGNoIGRlZiAvQUdNSU1HX2xsWCBleGNoIGRlZgoJ
CTEgMSB0cmFuc2Zvcm0KCQlBR01JTUdfeVNpZ24/IHtjZWlsaW5nIDAuMSBhZGR9e2Zsb29yIDAu
MSBzdWJ9IGlmZWxzZSBleGNoCgkJQUdNSU1HX3hTaWduPyB7Y2VpbGluZyAwLjEgYWRkfXtmbG9v
ciAwLjEgc3VifSBpZmVsc2UgZXhjaAoJCWl0cmFuc2Zvcm0gL0FHTUlNR191clkgZXhjaCBkZWYg
L0FHTUlNR191clggZXhjaCBkZWYJCQkKCQlbQUdNSU1HX3VyWCBBR01JTUdfbGxYIHN1YiAwIDAg
QUdNSU1HX3VyWSBBR01JTUdfbGxZIHN1YiAgQUdNSU1HX2xsWCBBR01JTUdfbGxZXSBjb25jYXQK
CX17Cgl9aWZlbHNlCgllbmQKfSBkZWYKbGV2ZWwyIG5vdHsKCS9jb2xvcmJ1ZgoJewoJCTAgMSAy
IGluZGV4IGxlbmd0aCAxIHN1YnsKCQkJZHVwIDIgaW5kZXggZXhjaCBnZXQgCgkJCTI1NSBleGNo
IHN1YiAKCQkJMiBpbmRleCAKCQkJMyAxIHJvbGwgCgkJCXB1dAoJCX1mb3IKCX1kZWYKCS90aW50
X2ltYWdlX3RvX2NvbG9yCgl7CgkJYmVnaW4KCQkJV2lkdGggSGVpZ2h0IEJpdHNQZXJDb21wb25l
bnQgSW1hZ2VNYXRyaXggCgkJCS9EYXRhU291cmNlIGxvYWQKCQllbmQKCQlBZG9iZV9BR01fSW1h
Z2UgYmVnaW4KCQkJL0FHTUlNR19tYnVmIDAgc3RyaW5nIGRlZgoJCQkvQUdNSU1HX3lidWYgMCBz
dHJpbmcgZGVmCgkJCS9BR01JTUdfa2J1ZiAwIHN0cmluZyBkZWYKCQkJewoJCQkJY29sb3JidWYg
ZHVwIGxlbmd0aCBBR01JTUdfbWJ1ZiBsZW5ndGggbmUKCQkJCQl7CgkJCQkJZHVwIGxlbmd0aCBk
dXAgZHVwCgkJCQkJL0FHTUlNR19tYnVmIGV4Y2ggc3RyaW5nIGRlZgoJCQkJCS9BR01JTUdfeWJ1
ZiBleGNoIHN0cmluZyBkZWYKCQkJCQkvQUdNSU1HX2tidWYgZXhjaCBzdHJpbmcgZGVmCgkJCQkJ
fSBpZgoJCQkJZHVwIEFHTUlNR19tYnVmIGNvcHkgQUdNSU1HX3lidWYgY29weSBBR01JTUdfa2J1
ZiBjb3B5IHBvcAoJCQl9CgkJCWFkZHByb2NzCgkJCXtBR01JTUdfbWJ1Zn17QUdNSU1HX3lidWZ9
e0FHTUlNR19rYnVmfSB0cnVlIDQgY29sb3JpbWFnZQkKCQllbmQKCX0gZGVmCQkJCgkvc2VwX2lt
YWdlb3JtYXNrX2xldjEKCXsKCQliZWdpbgoJCQlNYXBwZWRDU0EgMCBnZXQgZHVwIC9EZXZpY2VS
R0IgZXEgZXhjaCAvRGV2aWNlQ01ZSyBlcSBvciBoYXNfY29sb3Igbm90IGFuZHsKCQkJCXsKCQkJ
CQkyNTUgbXVsIHJvdW5kIGN2aSBHcmF5TG9va3VwIGV4Y2ggZ2V0CgkJCQl9IGN1cnJlbnR0cmFu
c2ZlciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCQkJY3VycmVudGRpY3QgaW1hZ2Vvcm1hc2sKCQkJ
fXsKCQkJCS9zZXBfY29sb3JzcGFjZV9kaWN0IEFHTUNPUkVfZ2dldC9Db21wb25lbnRzIGtub3du
ewoJCQkJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcXsKCQkJCQkJQ29tcG9uZW50cyBh
bG9hZCBwb3AKCQkJCQl9ewoJCQkJCQkwIDAgMCBDb21wb25lbnRzIGFsb2FkIHBvcCAxIGV4Y2gg
c3ViCgkJCQkJfWlmZWxzZQoJCQkJCUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfayB4ZGRmIAoJCQkJ
CUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfeSB4ZGRmIAoJCQkJCUFkb2JlX0FHTV9JbWFnZS9BR01J
TUdfbSB4ZGRmIAoJCQkJCUFkb2JlX0FHTV9JbWFnZS9BR01JTUdfYyB4ZGRmIAoJCQkJCUFHTUlN
R195IDAuMCBlcSBBR01JTUdfbSAwLjAgZXEgYW5kIEFHTUlNR19jIDAuMCBlcSBhbmR7CgkJCQkJ
CXtBR01JTUdfayBtdWwgMSBleGNoIHN1Yn0gY3VycmVudHRyYW5zZmVyIGFkZHByb2NzIHNldHRy
YW5zZmVyCgkJCQkJCWN1cnJlbnRkaWN0IGltYWdlb3JtYXNrCgkJCQkJfXsgCgkJCQkJCWN1cnJl
bnRjb2xvcnRyYW5zZmVyCgkJCQkJCXtBR01JTUdfayBtdWwgMSBleGNoIHN1Yn0gZXhjaCBhZGRw
cm9jcyA0IDEgcm9sbAoJCQkJCQl7QUdNSU1HX3kgbXVsIDEgZXhjaCBzdWJ9IGV4Y2ggYWRkcHJv
Y3MgNCAxIHJvbGwKCQkJCQkJe0FHTUlNR19tIG11bCAxIGV4Y2ggc3VifSBleGNoIGFkZHByb2Nz
IDQgMSByb2xsCgkJCQkJCXtBR01JTUdfYyBtdWwgMSBleGNoIHN1Yn0gZXhjaCBhZGRwcm9jcyA0
IDEgcm9sbAoJCQkJCQlzZXRjb2xvcnRyYW5zZmVyCgkJCQkJCWN1cnJlbnRkaWN0IHRpbnRfaW1h
Z2VfdG9fY29sb3IKCQkJCQl9aWZlbHNlCgkJCQl9ewoJCQkJCU1hcHBlZENTQSAwIGdldCAvRGV2
aWNlR3JheSBlcSB7CgkJCQkJCXsyNTUgbXVsIHJvdW5kIGN2aSBDb2xvckxvb2t1cCBleGNoIGdl
dCAwIGdldH0gY3VycmVudHRyYW5zZmVyIGFkZHByb2NzIHNldHRyYW5zZmVyCgkJCQkJCWN1cnJl
bnRkaWN0IGltYWdlb3JtYXNrCgkJCQkJfXsKCQkJCQkJTWFwcGVkQ1NBIDAgZ2V0IC9EZXZpY2VD
TVlLIGVxIHsKCQkJCQkJCWN1cnJlbnRjb2xvcnRyYW5zZmVyCgkJCQkJCQl7MjU1IG11bCByb3Vu
ZCBjdmkgQ29sb3JMb29rdXAgZXhjaCBnZXQgMyBnZXQgMSBleGNoIHN1Yn0gZXhjaCBhZGRwcm9j
cyA0IDEgcm9sbAoJCQkJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9va3VwIGV4Y2ggZ2V0
IDIgZ2V0IDEgZXhjaCBzdWJ9IGV4Y2ggYWRkcHJvY3MgNCAxIHJvbGwKCQkJCQkJCXsyNTUgbXVs
IHJvdW5kIGN2aSBDb2xvckxvb2t1cCBleGNoIGdldCAxIGdldCAxIGV4Y2ggc3VifSBleGNoIGFk
ZHByb2NzIDQgMSByb2xsCgkJCQkJCQl7MjU1IG11bCByb3VuZCBjdmkgQ29sb3JMb29rdXAgZXhj
aCBnZXQgMCBnZXQgMSBleGNoIHN1Yn0gZXhjaCBhZGRwcm9jcyA0IDEgcm9sbAoJCQkJCQkJc2V0
Y29sb3J0cmFuc2ZlciAKCQkJCQkJCWN1cnJlbnRkaWN0IHRpbnRfaW1hZ2VfdG9fY29sb3IKCQkJ
CQkJfXsgCgkJCQkJCQljdXJyZW50Y29sb3J0cmFuc2ZlcgoJCQkJCQkJe3BvcCAxfSBleGNoIGFk
ZHByb2NzIDQgMSByb2xsCgkJCQkJCQl7MjU1IG11bCByb3VuZCBjdmkgQ29sb3JMb29rdXAgZXhj
aCBnZXQgMiBnZXR9IGV4Y2ggYWRkcHJvY3MgNCAxIHJvbGwKCQkJCQkJCXsyNTUgbXVsIHJvdW5k
IGN2aSBDb2xvckxvb2t1cCBleGNoIGdldCAxIGdldH0gZXhjaCBhZGRwcm9jcyA0IDEgcm9sbAoJ
CQkJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9va3VwIGV4Y2ggZ2V0IDAgZ2V0fSBleGNo
IGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCQlzZXRjb2xvcnRyYW5zZmVyIAoJCQkJCQkJY3VycmVu
dGRpY3QgdGludF9pbWFnZV90b19jb2xvcgoJCQkJCQl9aWZlbHNlCgkJCQkJfWlmZWxzZQoJCQkJ
fWlmZWxzZQoJCQl9aWZlbHNlCgkJZW5kCgl9ZGVmCgkvc2VwX2ltYWdlX2xldjFfc2VwCgl7CgkJ
YmVnaW4KCQkJL3NlcF9jb2xvcnNwYWNlX2RpY3QgQUdNQ09SRV9nZ2V0L0NvbXBvbmVudHMga25v
d257CgkJCQlDb21wb25lbnRzIGFsb2FkIHBvcAoJCQkJQWRvYmVfQUdNX0ltYWdlL0FHTUlNR19r
IHhkZGYgCgkJCQlBZG9iZV9BR01fSW1hZ2UvQUdNSU1HX3kgeGRkZiAKCQkJCUFkb2JlX0FHTV9J
bWFnZS9BR01JTUdfbSB4ZGRmIAoJCQkJQWRvYmVfQUdNX0ltYWdlL0FHTUlNR19jIHhkZGYgCgkJ
CQl7QUdNSU1HX2MgbXVsIDEgZXhjaCBzdWJ9CgkJCQl7QUdNSU1HX20gbXVsIDEgZXhjaCBzdWJ9
CgkJCQl7QUdNSU1HX3kgbXVsIDEgZXhjaCBzdWJ9CgkJCQl7QUdNSU1HX2sgbXVsIDEgZXhjaCBz
dWJ9CgkJCX17IAoJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9va3VwIGV4Y2ggZ2V0IDAg
Z2V0IDEgZXhjaCBzdWJ9CgkJCQl7MjU1IG11bCByb3VuZCBjdmkgQ29sb3JMb29rdXAgZXhjaCBn
ZXQgMSBnZXQgMSBleGNoIHN1Yn0KCQkJCXsyNTUgbXVsIHJvdW5kIGN2aSBDb2xvckxvb2t1cCBl
eGNoIGdldCAyIGdldCAxIGV4Y2ggc3VifQoJCQkJezI1NSBtdWwgcm91bmQgY3ZpIENvbG9yTG9v
a3VwIGV4Y2ggZ2V0IDMgZ2V0IDEgZXhjaCBzdWJ9CgkJCX1pZmVsc2UKCQkJQUdNQ09SRV9nZXRf
aW5rX2RhdGEgY3VycmVudHRyYW5zZmVyIGFkZHByb2NzIHNldHRyYW5zZmVyCgkJCWN1cnJlbnRk
aWN0IGltYWdlb3JtYXNrX3N5cwoJCWVuZAoJfWRlZgoJL2luZGV4ZWRfaW1hZ2Vvcm1hc2tfbGV2
MQoJewoJCS9pbmRleGVkX2NvbG9yc3BhY2VfZGljdCBBR01DT1JFX2dnZXQgYmVnaW4KCQliZWdp
bgoJCQljdXJyZW50ZGljdAoJCQlNYXBwZWRDU0EgMCBnZXQgZHVwIC9EZXZpY2VSR0IgZXEgZXhj
aCAvRGV2aWNlQ01ZSyBlcSBvciBoYXNfY29sb3Igbm90IGFuZHsKCQkJCXtIaVZhbCBtdWwgcm91
bmQgY3ZpIEdyYXlMb29rdXAgZXhjaCBnZXQgSGlWYWwgZGl2fSBjdXJyZW50dHJhbnNmZXIgYWRk
cHJvY3Mgc2V0dHJhbnNmZXIKCQkJCWltYWdlb3JtYXNrCgkJCX17CgkJCQlNYXBwZWRDU0EgMCBn
ZXQgL0RldmljZUdyYXkgZXEgewoJCQkJCXtIaVZhbCBtdWwgcm91bmQgY3ZpIExvb2t1cCBleGNo
IGdldCBIaVZhbCBkaXZ9IGN1cnJlbnR0cmFuc2ZlciBhZGRwcm9jcyBzZXR0cmFuc2ZlcgoJCQkJ
CWltYWdlb3JtYXNrCgkJCQl9ewoJCQkJCU1hcHBlZENTQSAwIGdldCAvRGV2aWNlQ01ZSyBlcSB7
CgkJCQkJCWN1cnJlbnRjb2xvcnRyYW5zZmVyCgkJCQkJCXs0IG11bCBIaVZhbCBtdWwgcm91bmQg
Y3ZpIDMgYWRkIExvb2t1cCBleGNoIGdldCBIaVZhbCBkaXYgMSBleGNoIHN1Yn0gZXhjaCBhZGRw
cm9jcyA0IDEgcm9sbAoJCQkJCQl7NCBtdWwgSGlWYWwgbXVsIHJvdW5kIGN2aSAyIGFkZCBMb29r
dXAgZXhjaCBnZXQgSGlWYWwgZGl2IDEgZXhjaCBzdWJ9IGV4Y2ggYWRkcHJvY3MgNCAxIHJvbGwK
CQkJCQkJezQgbXVsIEhpVmFsIG11bCByb3VuZCBjdmkgMSBhZGQgTG9va3VwIGV4Y2ggZ2V0IEhp
VmFsIGRpdiAxIGV4Y2ggc3VifSBleGNoIGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCXs0IG11bCBI
aVZhbCBtdWwgcm91bmQgY3ZpCQkgTG9va3VwIGV4Y2ggZ2V0IEhpVmFsIGRpdiAxIGV4Y2ggc3Vi
fSBleGNoIGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCXNldGNvbG9ydHJhbnNmZXIgCgkJCQkJCXRp
bnRfaW1hZ2VfdG9fY29sb3IKCQkJCQl9eyAKCQkJCQkJY3VycmVudGNvbG9ydHJhbnNmZXIKCQkJ
CQkJe3BvcCAxfSBleGNoIGFkZHByb2NzIDQgMSByb2xsCgkJCQkJCXszIG11bCBIaVZhbCBtdWwg
cm91bmQgY3ZpIDIgYWRkIExvb2t1cCBleGNoIGdldCBIaVZhbCBkaXZ9IGV4Y2ggYWRkcHJvY3Mg
NCAxIHJvbGwKCQkJCQkJezMgbXVsIEhpVmFsIG11bCByb3VuZCBjdmkgMSBhZGQgTG9va3VwIGV4
Y2ggZ2V0IEhpVmFsIGRpdn0gZXhjaCBhZGRwcm9jcyA0IDEgcm9sbAoJCQkJCQl7MyBtdWwgSGlW
YWwgbXVsIHJvdW5kIGN2aSAJCUxvb2t1cCBleGNoIGdldCBIaVZhbCBkaXZ9IGV4Y2ggYWRkcHJv
Y3MgNCAxIHJvbGwKCQkJCQkJc2V0Y29sb3J0cmFuc2ZlciAKCQkJCQkJdGludF9pbWFnZV90b19j
b2xvcgoJCQkJCX1pZmVsc2UKCQkJCX1pZmVsc2UKCQkJfWlmZWxzZQoJCWVuZCBlbmQKCX1kZWYK
CS9pbmRleGVkX2ltYWdlX2xldjFfc2VwCgl7CgkJL2luZGV4ZWRfY29sb3JzcGFjZV9kaWN0IEFH
TUNPUkVfZ2dldCBiZWdpbgoJCWJlZ2luCgkJCXs0IG11bCBIaVZhbCBtdWwgcm91bmQgY3ZpCQkg
TG9va3VwIGV4Y2ggZ2V0IEhpVmFsIGRpdiAxIGV4Y2ggc3VifQoJCQl7NCBtdWwgSGlWYWwgbXVs
IHJvdW5kIGN2aSAxIGFkZCBMb29rdXAgZXhjaCBnZXQgSGlWYWwgZGl2IDEgZXhjaCBzdWJ9CgkJ
CXs0IG11bCBIaVZhbCBtdWwgcm91bmQgY3ZpIDIgYWRkIExvb2t1cCBleGNoIGdldCBIaVZhbCBk
aXYgMSBleGNoIHN1Yn0KCQkJezQgbXVsIEhpVmFsIG11bCByb3VuZCBjdmkgMyBhZGQgTG9va3Vw
IGV4Y2ggZ2V0IEhpVmFsIGRpdiAxIGV4Y2ggc3VifQoJCQlBR01DT1JFX2dldF9pbmtfZGF0YSBj
dXJyZW50dHJhbnNmZXIgYWRkcHJvY3Mgc2V0dHJhbnNmZXIKCQkJY3VycmVudGRpY3QgaW1hZ2Vv
cm1hc2tfc3lzCgkJZW5kIGVuZAoJfWRlZgp9aWYKZW5kCnN5c3RlbWRpY3QgL3NldHBhY2tpbmcg
a25vd24KewoJc2V0cGFja2luZwp9IGlmCiUlRW5kUmVzb3VyY2UKY3VycmVudGRpY3QgQWRvYmVf
QUdNX1V0aWxzIGVxIHtlbmR9IGlmCiUlRW5kUHJvbG9nCiUlQmVnaW5TZXR1cApBZG9iZV9BR01f
VXRpbHMgYmVnaW4KMiAyMDEwIEFkb2JlX0FHTV9Db3JlL2RvY19zZXR1cCBnZXQgZXhlYwpBZG9i
ZV9Db29sVHlwZV9Db3JlL2RvY19zZXR1cCBnZXQgZXhlYwpBZG9iZV9BR01fSW1hZ2UvZG9jX3Nl
dHVwIGdldCBleGVjCmN1cnJlbnRkaWN0IEFkb2JlX0FHTV9VdGlscyBlcSB7ZW5kfSBpZgolJUVu
ZFNldHVwCiUlUGFnZTogeGVuMy0xLjAuZXBzIDEKJSVFbmRQYWdlQ29tbWVudHMKJSVCZWdpblBh
Z2VTZXR1cAovY3VycmVudGRpc3RpbGxlcnBhcmFtcyB3aGVyZQp7cG9wIGN1cnJlbnRkaXN0aWxs
ZXJwYXJhbXMgL0NvcmVEaXN0VmVyc2lvbiBnZXQgNTAwMCBsdH0ge3RydWV9IGlmZWxzZQp7IHVz
ZXJkaWN0IC9BSTExX1BERk1hcms1IC9jbGVhcnRvbWFyayBsb2FkIHB1dAp1c2VyZGljdCAvQUkx
MV9SZWFkTWV0YWRhdGFfUERGTWFyazUge2ZsdXNoZmlsZSBjbGVhcnRvbWFyayB9IGJpbmQgcHV0
fQp7IHVzZXJkaWN0IC9BSTExX1BERk1hcms1IC9wZGZtYXJrIGxvYWQgcHV0CnVzZXJkaWN0IC9B
STExX1JlYWRNZXRhZGF0YV9QREZNYXJrNSB7L1BVVCBwZGZtYXJrfSBiaW5kIHB1dCB9IGlmZWxz
ZQpbL05hbWVzcGFjZVB1c2ggQUkxMV9QREZNYXJrNQpbL19vYmpkZWYge2FpX21ldGFkYXRhX3N0
cmVhbV8xMjN9IC90eXBlIC9zdHJlYW0gL09CSiBBSTExX1BERk1hcms1Clt7YWlfbWV0YWRhdGFf
c3RyZWFtXzEyM30KY3VycmVudGZpbGUgMCAoJSAgJiZlbmQgWE1QIHBhY2tldCBtYXJrZXImJikK
L1N1YkZpbGVEZWNvZGUgZmlsdGVyIEFJMTFfUmVhZE1ldGFkYXRhX1BERk1hcms1Cjw/eHBhY2tl
dCBiZWdpbj0n77u/JyBpZD0nVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkJz8+PHg6eG1wbWV0YSB4
bWxuczp4PSdhZG9iZTpuczptZXRhLycgeDp4bXB0az0nWE1QIHRvb2xraXQgMy4wLTI5LCBmcmFt
ZXdvcmsgMS42Jz4KLTxyZGY6UkRGIHhtbG5zOnJkZj0naHR0cDovL3d3dy53My5vcmcvMTk5OS8w
Mi8yMi1yZGYtc3ludGF4LW5zIycgeG1sbnM6aVg9J2h0dHA6Ly9ucy5hZG9iZS5jb20vaVgvMS4w
Lyc+Ci0KLSA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTEx
ZGEtOGYxYS0wMDBkOTNhZmViYjInCi0gIHhtbG5zOnBkZj0naHR0cDovL25zLmFkb2JlLmNvbS9w
ZGYvMS4zLyc+Ci0gIDxwZGY6UHJvZHVjZXI+QWRvYmUgUERGIGxpYnJhcnkgNi42NjwvcGRmOlBy
b2R1Y2VyPgotIDwvcmRmOkRlc2NyaXB0aW9uPgotCi0gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJv
dXQ9J3V1aWQ6YmFjZjQyMzUtZTQzNS0xMWRhLThmMWEtMDAwZDkzYWZlYmIyJwotICB4bWxuczp0
aWZmPSdodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyc+Ci0gPC9yZGY6RGVzY3JpcHRpb24+
Ci0KLSA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTExZGEt
OGYxYS0wMDBkOTNhZmViYjInCi0gIHhtbG5zOnhhcD0naHR0cDovL25zLmFkb2JlLmNvbS94YXAv
MS4wLycKLSAgeG1sbnM6eGFwR0ltZz0naHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL2cvaW1n
Lyc+Ci0gIDx4YXA6Q3JlYXRlRGF0ZT4yMDA2LTA1LTE0VDA5OjM0OjE0LTA3OjAwPC94YXA6Q3Jl
YXRlRGF0ZT4KLSAgPHhhcDpNb2RpZnlEYXRlPjIwMDYtMDYtMjZUMTg6MDM6MTlaPC94YXA6TW9k
aWZ5RGF0ZT4KLSAgPHhhcDpDcmVhdG9yVG9vbD5JbGx1c3RyYXRvcjwveGFwOkNyZWF0b3JUb29s
PgotICA8eGFwOk1ldGFkYXRhRGF0ZT4yMDA2LTA1LTE0VDA5OjM0OjE0LTA3OjAwPC94YXA6TWV0
YWRhdGFEYXRlPgotICA8eGFwOlRodW1ibmFpbHM+Ci0gICA8cmRmOkFsdD4KLSAgICA8cmRmOmxp
IHJkZjpwYXJzZVR5cGU9J1Jlc291cmNlJz4KLSAgICAgPHhhcEdJbWc6Zm9ybWF0PkpQRUc8L3hh
cEdJbWc6Zm9ybWF0PgotICAgICA8eGFwR0ltZzp3aWR0aD4yNTY8L3hhcEdJbWc6d2lkdGg+Ci0g
ICAgIDx4YXBHSW1nOmhlaWdodD4xMTI8L3hhcEdJbWc6aGVpZ2h0PgotICAgICA8eGFwR0ltZzpp
bWFnZT4vOWovNEFBUVNrWkpSZ0FCQWdFQVNBQklBQUQvN1FBc1VHaHZkRzl6YUc5d0lETXVNQUE0
UWtsTkErMEFBQUFBQUJBQVNBQUFBQUVBJiN4QTtBUUJJQUFBQUFRQUIvKzRBRGtGa2IySmxBR1RB
QUFBQUFmL2JBSVFBQmdRRUJBVUVCZ1VGQmdrR0JRWUpDd2dHQmdnTERBb0tDd29LJiN4QTtEQkFN
REF3TURBd1FEQTRQRUE4T0RCTVRGQlFURXh3Ykd4c2NIeDhmSHg4Zkh4OGZId0VIQndjTkRBMFlF
QkFZR2hVUkZSb2ZIeDhmJiN4QTtIeDhmSHg4Zkh4OGZIeDhmSHg4Zkh4OGZIeDhmSHg4Zkh4OGZI
eDhmSHg4Zkh4OGZIeDhmSHg4Zkh4OGYvOEFBRVFnQWNBRUFBd0VSJiN4QTtBQUlSQVFNUkFmL0VB
YUlBQUFBSEFRRUJBUUVBQUFBQUFBQUFBQVFGQXdJR0FRQUhDQWtLQ3dFQUFnSURBUUVCQVFFQUFB
QUFBQUFBJiN4QTtBUUFDQXdRRkJnY0lDUW9MRUFBQ0FRTURBZ1FDQmdjREJBSUdBbk1CQWdNUkJB
QUZJUkl4UVZFR0UyRWljWUVVTXBHaEJ4V3hRaVBCJiN4QTtVdEhoTXhaaThDUnlndkVsUXpSVGtx
S3lZM1BDTlVRbms2T3pOaGRVWkhURDB1SUlKb01KQ2hnWmhKUkZScVMwVnROVktCcnk0L1BFJiN4
QTsxT1QwWlhXRmxhVzF4ZFhsOVdaMmhwYW10c2JXNXZZM1IxZG5kNGVYcDdmSDErZjNPRWhZYUhp
SW1LaTR5TmpvK0NrNVNWbHBlWW1aJiN4QTtxYm5KMmVuNUtqcEtXbXA2aXBxcXVzcmE2dm9SQUFJ
Q0FRSURCUVVFQlFZRUNBTURiUUVBQWhFREJDRVNNVUVGVVJOaElnWnhnWkV5JiN4QTtvYkh3Rk1I
UjRTTkNGVkppY3ZFekpEUkRnaGFTVXlXaVk3TENCM1BTTmVKRWd4ZFVrd2dKQ2hnWkpqWkZHaWRr
ZEZVMzhxT3p3eWdwJiN4QTswK1B6aEpTa3RNVFU1UFJsZFlXVnBiWEYxZVgxUmxabWRvYVdwcmJH
MXViMlIxZG5kNGVYcDdmSDErZjNPRWhZYUhpSW1LaTR5TmpvJiN4QTsrRGxKV1dsNWlabXB1Y25a
NmZrcU9rcGFhbnFLbXFxNnl0cnErdi9hQUF3REFRQUNFUU1SQUQ4QTlVNHE3RlhZcTdGVW4xM3pa
b2VpJiN4QTtML3BzNDllbFZ0by9pbFAreDdmTnFZcTgvd0JXL05yVnB5eWFiYnBaeDlwSC9leWZQ
ZjRCOXh4VklYMVR6cHF4NWZXTHk0UnY1QzZ4JiN4QTsvY3RFeFZUL0FNSitacHZpYTFaajR2SWdQ
L0ROaXJZOHYrYXJUNG80Sm95TndZWEJQL0NNY1ZWN2Z6ZjV6MHB3ajNjNHAvdXE2QmVvJiN4QTs4
UDNvSkgwWXF5dlJmemNpZGxpMWkxOU91eHViZXBYNW1NMVAzRS9MRldmYWZxVmhxTnVMbXhuUzRo
UDdhR3RENEVkUWZZNHFpY1ZkJiN4QTtpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZkaXJzVmRpcnNW
ZGlyc1ZkaXJzVmRpcnNWZGlyc1ZlY2VjdnpMTWJTYWZvYmd1S3JOZmRRJiN4QTtEM0VYai9yZmQ0
NHF4RFNmTE9xNnpJYnFabWpoa1BKcm1Xck01N2xRZDIrZUtzejAzeXJvMWdBVmdFMG8vd0IyelVj
MTloMEgwREZVJiN4QTszeFYyS3V4VlpMREZNaGpsUlpFUFZIQVlINkRpckhkVjhqYWJjaG5zejlV
bTZnRGVNbjNYdDlHS3NXams4dytWdFJFa2JOYnkvd0F3JiN4QTsrS0tWUjJQWmg4K21LdlZQS0hu
ZXgxK0wwbkF0OVNRVmt0NjdNUDVvNjlSN2R2eHhWazJLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYy
JiN4QTtLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYyS3ZOUHpJODZ1SGswUFRwT05QaHZwMU81OFln
ZitKZmQ0NHFrM2xUeWtzeXBmNmd0WWp2JiN4QTtCYmtmYUhabTl2QWQ4Vlp1QUFBQUtBYkFERlc4
VmRpcnNWZGlyc1ZkaXFoZTJOcmUyN1c5ekdKSW43SHFENGc5amlyem5XZEh2dkwrJiN4QTtveHoy
OGpDTU56dGJsZGlDdTlEL0FKUS9IRlhxM2tyelpGNWcwNnNsRTFDQ2d1b2hzRDRPdnMzNFlxeVBG
WFlxN0ZYWXE3RlhZcTdGJiN4QTtYWXE3RlhZcTdGWFlxN0ZYWXE3RlhZcTdGWFlxN0ZXUCtkL01Z
MFBSSkpveVByay83cTFIZ3hHNy93Q3hHL3pwaXJ4YVB5UEo1eHNMJiN4QTs2MWx2SjdGU2hLWDhE
TXNxM0JCS01DQ09RQjNZVjNHM2V1S3ZtZnpaZi9tYjVWMSs3MExWdGIxS0s4dEc0a2k4dU9Eb2Qw
a2pKWVZSJiN4QTtodU1WU2ovSFBuYi9BS21EVXY4QXBNbi9BT2E4VmQvam56dC8xTUdwZjlKay93
RHpYaXFjK1dmemgvTVBRdFlzdFFYWGIrOWd0SkF6JiN4QTs2ZmRYVTh0dktoMmRIamRtWDRnVHZT
b080M3hWOXRlVVBOZWtlYS9MMW5ydWxTYzdTN1RseFAyNDNHenh1T3pJMngvcGlxYzRxN0ZYJiN4
QTtsLzV2L25ub2ZrU0I3QzA0YWo1bWRmM2RpRDhFSVlWRWx3UjBGTndnK0krdzN4VjhtNnArWm41
Z2FucUU5L2RlWUw4VDNERjNXSzRsJiN4QTtoakhza2NiS2lxT3dBeFZCUytjL09FeWNKdGQxQ1JP
dkY3dWRoWDVGOFZXMi9uRHpiYk9aTGJXNytDUWloZU82bVEwOEtxd3hWRzIvJiN4QTtucjh4cm1l
TzN0L01Pc1RYRXpCSW9ZN3k2ZDNkalJWVlE1SkpQUURGWDFiK1IvNUkrZkxjMi9tTDh3dk1HcVBP
T010bjVlK3YzQlZDJiN4QTtOMWE3SWVqSC9pb2JmelYzVUt2b1BGWFlxN0ZYWXE3RlhZcTdGWFlx
N0ZYWXE3RlhZcTdGWFlxN0ZYWXE3RlhqWDVtNnU5LzVrTm9oJiN4QTs1UTJDaUZGSGVScU01K2Rh
TDlHS3NxMExUVjA3UzRMVUQ0d09VeDhYYmR2Nllxd1A4Ny95anR2UHVnZXRacWtYbVRUMUxhZmNH
ZzlSJiN4QTtlcHQ1Ry9sYjlrL3N0N0U0cStKN3UxdWJPNm10THFKb0xtM2RvcDRaQVZkSFE4V1Zn
ZWhCR0txV0t1eFY2cCtRZjV0UDVJOHcvVWRSJiN4QTtsUDhBaHZWSEMzaW5jUVM3S3R3QjdkSHAx
WC9WR0t2dEpaRWRCSWpCbzJISlhCcUNEdUNEaXI1Ky9Pai9BSnlSZzAzMS9MM2txWlo5JiN4QTtS
Rlk3dldWbzBjSkd4U0N0VmQvRi9zanRVOUZYeTljWEZ4YzNFbHhjeXZOY1RNWkpwcEdMdTdzYXN6
TWFra25xVGlxbmlyc1ZUUFFQJiN4QTtMbXM2L2ZDejBxMmU0bEFEU3NCOEVhRWdjNUg2S3RUMVB5
NjRxK3hmK2NhUHlyOHFlWFlMcS9taFM5ODFRa0JyK1FWRVVVaTA0MjZuJiN4QTs3RzRZTS8yaVBB
R21LdmZNVmRpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZXU3lKRkU4cm1p
UnFXWSt3RlRpJiN4QTtyd2pRMWZWUE5FVXMyN1N6UGN5MThRVElhL000cTlPeFYyS3ZBZjhBbkpI
OG1mMHZheStjOUFnLzNLMnFWMWExakc5eENnL3ZsQTZ5JiN4QTtSZ2IvQU15KzY3cXZsWEZYWXE3
RldjeS9uTjU3ZnlKYitTMXZURnBzSEtOcDBxTGg3Yy9adDJrci9kcnZzTzN3L1oyeFZnMkt1eFYy
JiN4QTtLdlJ2eXAvSlR6SjUrdVZ1RkJzUEw4YjhialU1RjJhaCtKSUZOT2IvQUlEdjRZcTkxODBl
YnZ5bS9LSHl6UDVUMHVINnpxa2tmN3l6JiN4QTtnS3ZjTktSVlpieWNpaW51QjFwOWxlT0t2RDV2
K2NndnpFaG1uYlE3dE5Gam5YMDNGdWl2SVVxRHZKS0hvYWpxb1hGV0thdCtZUG56JiN4QTtWMkxh
cDVpMUs4NWJjWnJ1WjFvZXdVdHhBOWdNVlNLV2FhWnpKTTdTU0hxN2tzZHZjNHFpTFBWdFZzaURa
WGs5c1ZxVk1NcngwSjYwJiN4QTs0a1lxelh5MytmZjV1K1g1Rk5sNW12SjRsTzhGOC8xeU1qK1ds
eDZoVWY2cEdLdnBQOG0vK2NyZEo4MTMxdm9IbXkzajBqVzdobGl0JiN4QTtMdUV0OVR1SldORlNq
Rm1oZGlmaEJZZytJTkJpcjZCeFY4UGZuTCtldjVrV241bytaTEhRL01OMVk2WFkzaldrRnJDeThF
TnNvaGtwJiN4QTtWZThpTVQ3NHF3ei9BSlg5K2NuL0FGTmw5L3dTL3dETk9LdS81WDkrY24vVTJY
My9BQVMvODA0cTl4MHovbktDTHloK1ZtaXg2aGNQJiN4QTs1bzgrWDhVbHpjUnlTRDA3ZEpabjlE
NnhJdGFIMHVCRWFpdmp4cUNWWGlYbTc4Ly9BTTJmTkUwalhtdjNGbGJPVFN5MDVtdElWVS9zJiN4
QTswaUlkeC9yczJLc0J1Ynk3dTVUTGRUeVR5bXRaSldaMjNOVHV4SjY0cW1taCtkZk9HZ3lwTG91
dFh1bnNsT0l0N2lTTmRoUUFxRzRrJiN4QTtVN0VZcStxUCtjZC8rY2x0UjgwYXRENVE4NUZIMWE0
Qi9SbXFvaXhpZGtVczBVeUxSRmZpUGhaUUFlbEswcXF6WC9uSTdYUFBubGZ5JiN4QTtkSjVyOHIr
WW0wcExBeFF6YWNMUzF1Rm5hZVpVRG1XZEpHVGlHNkFiNHE4NDg4K2Yvd0E3TkI4ai9sOWVhYjVy
YS8xdnpxVW5GZFBzJiN4QTtJK0gxbTN0bWl0bEhwTXJjWkptK09nSnJpckRkZS81eVovTnZVZERz
ZFM4dmFwOVJnMHF4dExiekJNYlcwY3phbE04dzVqMUlYVmZVJiN4QTtqaDVCVW9vM3hWOWllWkpQ
VDh1Nm80TkN0cE9RVDQrbTFNVmVSZVFFRGEzSVQreEF4SHo1S1A0NHE5RHhWMkt1eFY4ay93RE9S
ZjVNJiN4QTtmNGR2bjgxNkRCVFFieVQvQUU2MmpIdzJrN25xQU9rVWhPM1pXMjZGY1ZlRzRxN0ZY
WXE3RlcxVm1ZS29KWW1nQTNKSnhWOUNmazkvJiN4QTt6alBjYWdJTmQ4OFJ0YjJKcEpiYUxVck5L
T29hNElvWTEveUI4Ujc4ZTZyMFA4K2Z6UWovQUM5OHRXdWhlWFZqdHRadjR6SFpyRXFxJiN4QTts
cGFwOEprVkFLQS9zeGlsT3AvWm9WWHgxTk5OUE04MHp0TE5LeGVTUnlXWm1ZMUxNVHVTVDFPS3JN
VmU5LzhBT0wvNUhhWjU0dTdyJiN4QTt6SjVqaU0zbC9USlJCQlpWS3JjM1hFT3djZ2crbkVyS1NQ
MmlSMnFDcSt3b1BLUGxTM3NQMGZCb3RqRllVNC9WRXRvVmlwNGNBdkh0JiN4QTs0WXErSC84QW5L
SHlQNWY4b2ZtZWJYUVlFdExIVUxLTFVEWlJiUnd5U1NTeE9pTCt5cDlIa0Y2Q3UyMU1WZVJZcTJD
UWFqWWpGWDZHJiN4QTsvazE1OGZXL3lZMG56UHE4cGFhMXM1bDFLZC90TWJGbmplUmllcGRZdVor
ZUt2ejcxWFVKOVMxTzgxR2YrL3ZaNUxpWHY4Y3JsMi9FJiN4QTs0cWhjVmRpcjNQOEFLci9uRkx6
VjV6MGVEWGRXdjAwRFNidFJKWkJvalBjelJuY1NDTGxFcUk0K3l4YXA2OGFVSlZhL09iL25HRy8v
JiN4QTtBQys4dG56Slk2d05YMHlHU09LOGplRDZ2TEY2cDRJNG84aXVwY2hUMElxT3UrS3ZEY1Zk
aXJKZnkwa3VJL3pHOHJQYmxoT05Yc2ZUJiN4QTs0YnR5TnlnQUE3MXhWK2tlcmFOcEdzV0wyR3Iy
TnZxTmpJVk1scGR4SlBFeFU4bEpqa0RLYUVWRzJLb2VieXQ1WW5UVFVuMGl5bFRSJiN4QTtpaDBk
WHRvbUZtWStQcG0yQlg5enc0THg0VXBRZUdLb0wvbFhmNWYvQUZPYXkvd3hwUDFLNWxXZTR0dnFO
dDZVa3FBaFpIVGh4WjFEJiN4QTt0UmlLN25GVXc4eHhtVHk5cWlLS3MxcE9GSHVZMnBpcnlIeUE0
WFc1RlA3Y0RBZk1NcC9oaXIwUEZYWXFwWE56Yld0dkpjM01xUVc4JiN4QTtLbDVwcEdDSWlxS2xt
WnFBQUR1Y1ZmTC9BT2RuL09Sc2VzV3Q1NVg4b2hXMHVkV2d2OVZrU3BtUnRtU0JISHdvUisyUlU5
cWRTcStmJiN4QTtNVmRpcnNWZGlyNmcvd0NjWHZJZmtHNjBmL0ZITWFsNWt0NURITkJPb0FzV3Fl
QmpqcWFsMUhJU241Q2hEWXEraU1WZkN2NTdlWUp0JiN4QTtiL05UWDVYWW1PeXVHMCtCRDBSTFQ5
MHdIemtWbStuRldBNHE3RlV3dG90ZmppSDFaTHRJVytKZlRFZ1UxSFVjZHQ4VlZmOEFuYWYrJiN4
QTtYNy9rdGlxaE5ZNjVPL09hM3VaWHBUazZTTWFmTWpGVm42SjFYL2xqbi81RnYvVEZYZm9uVmY4
QWxqbi9BT1JiL3dCTVZmVjBkeGNlJiN4QTtUUDhBbkRMalB5aXZ0V3RwWUVpZllrYW5kc0NvSC9N
TTViRlh5Tmlyc1ZaVCtWM2xkUE5YNWgrWDlBa1V2YjMxNUd0MG82bTNRK3BQJiN4QTswLzRxUnNW
ZnBUSEhIRkdzY2FoSTBBVkVVQUtxZ1VBQUhRREZYaG4vQURtTHI2NmYrVkthWUcvZTZ6ZndRbFAr
SzRLM0RINkhqVDc4JiN4QTtWZkVHS3V4VjZuL3pqSjVkR3Qvbk5vUWRPY0dtbVhVWnZiNnVoTVIv
NUhHUEZYNkE0cTdGWFlxdGxqU1NONDNGVWNGV0hzUlE0cThIJiN4QTswVm4wcnpSSEZOc1laMnRw
YTdkU1l6OXgzeFY2ZGlxU2ViL09mbDN5am84bXJhN2RyYld5YlJwMWtsZnRIRW5WMlA4QWFkc1Zm
SG41JiN4QTtzL25oNWk4K1hEMmtaYlR2TGlOV0hUVWJlVGlhcTl3dysyM2ZqOWxlMis1VmVhNHE3
RlhZcTdGWFlxeTM4c3Z6RDFYeUo1bmcxaXpyJiN4QTtKYk5TTFViS3RGbmdKK0pmOVplcUhzZmF1
S3Z1dnkvcjJsZVlOR3ROWjBxY1hGaGV4aVdDVWVCMktzT3pLZG1IWTdZcStGZnphMFc1JiN4QTsw
Yjh5dk1kbE9oU3QvUFBEeTd3M0RtYUp2ZXFPTVZZamlyc1ZmcForV0d1NlByZjVmNkJmYVJNa3Rt
YkczaW9oSDd0NG9sUjRtQSt5JiN4QTswYkRpUmlxMzh4L3pJOHRlUVBMczJzNjFNQVFDdG5aS3c5
ZTVsN1J4S2Y4QWhtNktOemlyeFQvb2Q3eXQvd0JTMWZmOGpvY1ZkLzBPJiN4QTs5NVcvNmxxKy93
Q1IwT0twMTVOLzV5eTB2emI1bjAveTlwWGxhK2U4MUNaWWxiMW9pc2E5WGxlbjdNYUFzM3NNVlkv
L0FNNXUrWXZTJiN4QTswVHkzNWNqZmU3dUpyK2RCMlczUVJSMTltTTcvQUhZcStSc1ZkaXIzei9u
RFh5NytrUHpOdXRZa1NzV2kyRWp4di9MUGNzSVVIMHhHJiN4QTtYRlgyeGlyNDgvNXpaOHhmV2ZO
MmcrWDBlcWFiWnZkeXFPZ2t1NU9ORDdoTGNINmNWZk4yS3V4VjlSLzg0UStYZVY3NW04eVNKL2RS
JiN4QTt3YWRiU2VQcU1acGg5SHB4WXErc2NWZGlyc1ZkaXJ4ejh6OUlheDh4RzhSYVEzNmlWU09n
a1dpdVAxTjlPS3BENTUvUHp5MzVTMENGJiN4QTttSXYvQURITkhTUFM0Mm9WY2JlcE93cjZhSHFP
NTdlSVZmSmZuUHp4NWw4NDZ3K3E2OWRtNG1OUkRDUGhoaFFtdnB3cDBWZnhQVWtuJiN4QTtmRlVn
eFYyS3JvNDNrZFk0MUx5T1FxSW9xU1RzQUFNVmU1K1ZmK2NXUE1lbytVcnpVOVhuT25hMUxCejBm
U2pTdk1mRUJkRS9ZNWpiJiN4QTtpTjFyVnVoWEZYaDkxYlhGcmN5MnR6RzBOeEE3UlRST0tNam9l
TEt3UFFnaWh4VlN4VjJLdlgvK2NmZnpnYnliclA2RjFlYW5sblU1JiN4QTtCemRqdGF6dFJSTVA4
aHRoSjkvYmRWNzcrY1A1TGFSK1lkbkZlVzhxV1htQzJUamEzOU9VY2tmMmhGTngzSzFOVllicjc5
TVZmSjNtJiN4QTs3OHIvQUR2NVR2SHR0WDAyUUJSeUZ6QisvaFpha0J1YVY0MXAwYWg5c1ZZcGlx
WWFWNWcxN1NDNTBuVXJyVHpKL2VHMW5raDVVL205JiN4QTtObHJpcWhmNmxxT28zTFhXb1hVMTVj
dHMwOXhJMHNoSHV6a25GWFdHbWFqcU00dDlQdFpyeTRiN01Odkcwcm12K1NnSnhWNm41TC81JiN4
QTt4ZS9ObnpKTEcxenB2NkJzV29YdXRUckM0SFdndHhXYXZ6VUQzeFY5WmZsSCtSM2xMOHRyTm1z
QTE5cmR3bkM4MWVjQVNNdXhNY1NpJiN4QTtvampxSzhRU1QzSm9NVmZOWC9PVjYrWXRlL05xZUsw
MDI3dUxQU2JTM3M0WllvSlhqWXNwdUhJWlZvZmluNG41WXE4Yi93QUorYWYrJiN4QTtyTmZmOUkw
My9OT0t1L3duNXAvNnMxOS8walRmODA0cSt2UCtjTnZLTjVvL2t2V3RWdjdXUzB1OVV2bGlFY3lN
am1HMGorQnVMQUg3JiN4QTtjMG1Ldm9QRlh3QitmY1BtYnpIK2JubVRVSWRMdlpiV081K3AyenJi
eXNoanRGRUFaQ0ZvVll4bGg4OFZlZjhBK0UvTlAvVm12djhBJiN4QTtwR20vNXB4VjMrRS9OUDhB
MVpyNy9wR20vd0NhY1ZmY1AvT0xIbFc0OHY4QTVSV1J1NEd0NzNWTGk0dnJpS1JTanJWdlJqNUE3
N3h3JiN4QTtxZnB4VjY5aXJzVmRpclJJVUVrMEEzSlBRREZYeTEvemtWL3prYm9Nc0xlV1BLQlMv
dm9KS3o2NHBEUVFzQVZaTGM3aVZ0OTIreU8zJiN4QTtMc3ErVmJpNG51Sm5udUpHbW5sSmFTVnlX
Wm1QVWtuY25GVlBGWFlxbVhsM3k1cmZtUFZvTkowVzBlOHY3ZzBTS01kQjNaaWRsVmU3JiN4QTtI
WVlxK3ZmeWYvSVBSUEpNY2VxYXB3MUx6TXdyOVlJckRiVkc2d0EvdGVNaDM4S2IxVmVzNHErWi93
RG5LUDhBS29Sdi9qdlNJYUk1JiN4QTtXUFhZVUhSalJZN21nOGRrZjNvZTVPS3ZtL0ZYWXE3Rlgw
ei9BTTQ4Zm5oRU5ML3dqNWptSnVMS01uUkxoalV5eHFQOTVTZjVrL1lQJiN4QTs4dTNZVlZlaDZU
cDk5NXA4eGlOaWF6dDZseElPa2NTOWFmSWJMaXIyQzc4bitVN3lPT085MGF4dTBpVlVRWEZ0RkxS
VkZGSHhxM1FZJiN4QTtxbG4vQUNxZjhyUCtwTjBQL3VHMm4vVlBGVmUxL0xYOHViUnVWcjVWMGUz
YW9ibEZZV3FHbzZINFl4MHhWUHJXenRMU0lRMnNFZHZDJiN4QTtPa2NTcWlqYW5SUUIyeFZXeFYy
S3V4VjJLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYyS3Ztajg3N24vbklmenFaOUM4dWVWN3pTdkt4
JiN4QTtxa3A5ZTJXNXZGNkgxU3Mzd1JuL0FIMkR2KzBUMENydzMvb1cvd0RPMy9xVnAvOEFrZGJm
OVZjVlVybi9BSngzL09pMmdlZWJ5dmNDJiN4QTtLTUZuSWt0MklBNm1peUU0cWtpL2xaNStaZ28w
aDZuWVZraEErOHZpckp2THYvT09INW82cnFkdmJYZW5EUzdLVDRwdFFua2llTkU2JiN4QTsxQ3h1
ek9UK3lCMThRTjhWZlZuNWUvbHI1WThpYVQ5UjBlQ3M4Z0J2TCtRQXp6c083dDJVZnNxTmg4Nm5G
V1Y0cTdGVkc5czdXOXM1JiN4QTs3TzdpV2UxdVkyaG5oY1ZWNDNCVmxZZUJCcGlyNDY4Ni93RE9P
SG4vQUU3ekxlMjNsN1M1TlQwWG56c0xwWklRZlNmY0k0ZDFQSlBzJiN4QTtrMDM2OThWU1AvbFFQ
NXYvQVBVdHpmOEFJMjMvQU9xbUtxRjUrUi81cVdjUHJYV2dTUXg5T1RUVys1OEFQVTN4VkQ2ZitW
WDVseVhzJiN4QTtDMkdqem04NWcyL3BQSHo1ZzFCV2o5dXVLdnQ3OGp0Tzh3V25rNVQ1bTBlVFN2
TWZNeDM1bGFKeE1FL3U1SXpFemdJVk80UDdWZTFNJiN4QTtWZWlZcTdGWFlxN0ZYWXE3RlhZcTdG
WFlxN0ZYWXE3RlhZcTdGWFlxN0ZYWXE3RlhZcTdGWFlxODg4NS9scUxsNU5RMFJRa3pWYWF5JiN4
QTs2S3g2a3g5Z2Y4bnBpckN0TDh4YXZvY3h0WlVab296U1MwbXFwVSsxZDF4Vm1XbStiZEd2Z0Y5
YjZ2TWVzYzN3Nyt6ZlpQMzRxbklJJiN4QTtJQkJxRDBJeFZ2RlhZcXB6M0Z2YnhtU2VSWW94MVoy
Q2o3emlyRzlWODkyRnVDbGl2MXFYK2MxV01IOVorajc4Vll6YjIzbUx6VnFQJiN4QTtHTld1SkIx
YjdNVVNueFBSUitKOThWZXIrVXZKZGg1Zmc1N1hHb1NDa3R5UjBIOHNmZ3Y2OFZaSGlyc1ZkaXJz
VmRpcnNWZGlyc1ZkJiN4QTtpcnNWZGlyc1ZkaXJzVmRpcnNWZGlyc1ZkaXJzVmRpcnNWZGlxVmE1
NVkwWFdvK04vYmhwQUtKT253eXI4bUg2anRpckFOVy9LUFVZJiN4QTttTDZYZEpjeDlvcHYzY2c5
cWlxbjhNVlk4K2dlZGRLSkMydDVDQjFNSEowKytJc3VLcWY2ZTgyUmZBMDA0STdNbFQrSzF4VnNh
bjV3JiN4QTt1L2hqZTdrSjJwRWpBLzhBQ0FZcWlMWHlQNXkxS1FQSmFTcFhyTGROd0krWWM4L3d4
Vmx1amZsSGFSRlpOWHVUY01OemJ3VlJQcGMvJiN4QTtFZm9BeFZuZGpZV1ZqYnJiV2NLUVFMMFJC
UWZNK0o5OFZSR0t1eFYyS3V4VjJLdXhWMkt1eFYyS3V4VjJLdXhWMkt1eFYyS3V4VjJLJiN4QTt1
eFYyS3V4Vi85az08L3hhcEdJbWc6aW1hZ2U+Ci0gICAgPC9yZGY6bGk+Ci0gICA8L3JkZjpBbHQ+
Ci0gIDwveGFwOlRodW1ibmFpbHM+Ci0gPC9yZGY6RGVzY3JpcHRpb24+Ci0KLSA8cmRmOkRlc2Ny
aXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTExZGEtOGYxYS0wMDBkOTNhZmVi
YjInCi0gIHhtbG5zOnhhcE1NPSdodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvbW0vJz4KLSAg
PHhhcE1NOkRvY3VtZW50SUQ+dXVpZDo2NWFkNGUwZS1lMzY3LTExZGEtOGYxYS0wMDBkOTNhZmVi
YjI8L3hhcE1NOkRvY3VtZW50SUQ+Ci0gPC9yZGY6RGVzY3JpcHRpb24+Ci0KLSA8cmRmOkRlc2Ny
aXB0aW9uIHJkZjphYm91dD0ndXVpZDpiYWNmNDIzNS1lNDM1LTExZGEtOGYxYS0wMDBkOTNhZmVi
YjInCi0gIHhtbG5zOmRjPSdodHRwOi8vcHVybC5vcmcvZGMvZWxlbWVudHMvMS4xLyc+Ci0gIDxk
Yzpmb3JtYXQ+YXBwbGljYXRpb24vcG9zdHNjcmlwdDwvZGM6Zm9ybWF0PgotIDwvcmRmOkRlc2Ny
aXB0aW9uPgotCi08L3JkZjpSREY+Ci08L3g6eG1wbWV0YT4KLSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPD94cGFja2V0IGVuZD0ndyc/PgolICAm
JmVuZCBYTVAgcGFja2V0IG1hcmtlciYmClt7YWlfbWV0YWRhdGFfc3RyZWFtXzEyM30KPDwvVHlw
ZSAvTWV0YWRhdGEgL1N1YnR5cGUgL1hNTD4+Ci9QVVQgQUkxMV9QREZNYXJrNQpbL0RvY3VtZW50
CjEgZGljdCBiZWdpbiAvTWV0YWRhdGEge2FpX21ldGFkYXRhX3N0cmVhbV8xMjN9IGRlZgpjdXJy
ZW50ZGljdCBlbmQgL0JEQyBBSTExX1BERk1hcms1CkFkb2JlX0FHTV9VdGlscyBiZWdpbgpBZG9i
ZV9BR01fQ29yZS9wYWdlX3NldHVwIGdldCBleGVjCkFkb2JlX0Nvb2xUeXBlX0NvcmUvcGFnZV9z
ZXR1cCBnZXQgZXhlYwpBZG9iZV9BR01fSW1hZ2UvcGFnZV9zZXR1cCBnZXQgZXhlYwolJUVuZFBh
Z2VTZXR1cApBZG9iZV9BR01fQ29yZS9BR01DT1JFX3NhdmUgc2F2ZSBkZGYKMSAtMSBzY2FsZSAw
IC05My41MTk2IHRyYW5zbGF0ZQpbMSAwIDAgMSAwIDAgXSAgY29uY2F0CiUgcGFnZSBjbGlwCmdz
YXZlCm5ld3BhdGgKZ3NhdmUgJSBQU0dTdGF0ZQowIDAgbW8KMCA5My41MTk2IGxpCjIxNC4xNjUg
OTMuNTE5NiBsaQoyMTQuMTY1IDAgbGkKY2xwClsxIDAgMCAxIDAgMCBdIGNvbmNhdAo4LjI1ODc5
IDQ2Ljc1NzkgbW8KOC4yNTg3OSAyMi4zMTY1IDI4LjA3ODIgMi41IDUyLjUyMSAyLjUgY3YKNzYu
OTYzNCAyLjUgOTYuNzc2OSAyMi4zMTY1IDk2Ljc3NjkgNDYuNzU3OSBjdgo5Ni43NzY5IDcxLjIw
MzIgNzYuOTYzNCA5MS4wMTk2IDUyLjUyMSA5MS4wMTk2IGN2CjI4LjA3ODIgOTEuMDE5NiA4LjI1
ODc5IDcxLjIwMzIgOC4yNTg3OSA0Ni43NTc5IGN2CmZhbHNlIHNvcAovMCAKWy9EZXZpY2VHcmF5
XSBhZGRfY3NhCjAuODcwNiBncnkKZgo1IGx3CjAgbGMKMCBsago0IG1sCltdIDAgZHNoCnRydWUg
c2Fkago4LjI1ODc5IDQ2Ljc1NzkgbW8KOC4yNTg3OSAyMi4zMTY1IDI4LjA3ODIgMi41IDUyLjUy
MSAyLjUgY3YKNzYuOTYzNCAyLjUgOTYuNzc2OSAyMi4zMTY1IDk2Ljc3NjkgNDYuNzU3OSBjdgo5
Ni43NzY5IDcxLjIwMzIgNzYuOTYzNCA5MS4wMTk2IDUyLjUyMSA5MS4wMTk2IGN2CjI4LjA3ODIg
OTEuMDE5NiA4LjI1ODc5IDcxLjIwMzIgOC4yNTg3OSA0Ni43NTc5IGN2CmNwCjAuNTY0NyBncnkK
QAoxMTYuMTE2IDQ3LjEwNTUgbW8KMTE3LjA3NSA0Mi45OTgxIDExNS41NTUgNDAuMjc5MyAxMTAu
ODk2IDQwLjI3OTMgY3YKMTA2LjUxNiA0MC4yNzkzIDEwMy40NiA0Mi45MzU2IDEwMi40ODMgNDcu
MTA1NSBjdgoxMTYuMTE2IDQ3LjEwNTUgbGkKY3AKMTAxLjA2MyA1My4xNyBtbwo5OS44MDUyIDU4
LjU0MTEgMTAxLjU5NSA2MS4wMDQgMTA2LjI1NiA2MS4wMDQgY3YKMTEwLjIyIDYxLjAwNCAxMTIu
MjA1IDU5LjM1OTQgMTEzLjIzMyA1Ny4zMzc5IGN2CjEzMy4yNjYgNTcuMzM3OSBsaQoxMzEuMzk3
IDYyLjY0NjUgMTIzLjA1IDY3LjcwMTIgMTA1LjAzOCA2Ny43MDEyIGN2Cjg4LjY5MSA2Ny43MDEy
IDc4Ljc2OTEgNjIuODM0IDgxLjU3OTYgNTAuODMyMSBjdgo4NC40NjA1IDM4LjUxMzcgOTcuMDIy
IDMzLjU4NiAxMTIuNDY2IDMzLjU4NiBjdgoxMjUuODIgMzMuNTg2IDEzOC40MSAzNy40Mzk1IDEz
NS4xMjcgNTEuNDYyOSBjdgoxMzQuNzI4IDUzLjE3IGxpCjEwMS4wNjMgNTMuMTcgbGkKLzEgClsv
RGV2aWNlQ01ZS10gYWRkX2NzYQowIDAgMCAxIGNteWsKZgoxMzkuODcxIDQ3LjIzMjUgbW8KMTQw
Ljg2IDQyLjk5ODEgMTQxLjc2NiAzOC44MjgyIDE0Mi4zNjUgMzQuNzg3MiBjdgoxNjIuNTM2IDM0
Ljc4NzIgbGkKMTYxLjUxMiA0MC4zNDU4IGxpCjE2MS42NDggNDAuMzQ1OCBsaQoxNjYuMDQgMzUu
ODU3NSAxNzIuMDY4IDMzLjU4NiAxNzkuMTYgMzMuNTg2IGN2CjE4NS40MjMgMzMuNTg2IDE5NS43
NTggMzUuNDgwNSAxOTIuOTM2IDQ3LjU0NjkgY3YKMTg4LjQ5OCA2Ni40OTgxIGxpCjE2OC4wNTQg
NjYuNDk4MSBsaQoxNzIuMDI2IDQ5LjUwNTkgbGkKMTczLjEyMiA0NC44MzAxIDE3MS40OSA0My4x
ODc2IDE2Ny45NDEgNDMuMTg3NiBjdgoxNjMuMjA5IDQzLjE4NzYgMTYwLjY0NCA0NS44NDE4IDE1
OS41MDcgNTAuNzA1MSBjdgoxNTUuODA3IDY2LjQ5ODEgbGkKMTM1LjM1OCA2Ni40OTgxIGxpCjEz
OS44NzEgNDcuMjMyNSBsaQpmCjM5Ljc2MTggNDcuODM2IG1vCjE3Ljg3NzUgMjAuODczMSBsaQo0
NC42OTM0IDIwLjkyIGxpCjU2LjMwMjMgMzYuNjM2OCBsaQo3NS42MTkyIDIwLjg3MzEgbGkKMTA2
LjY0NiAyMC44NzMxIGxpCjY3LjkxMDcgNTAuNjExNCBsaQo4OS42OTU4IDc4LjY0MDcgbGkKNjIu
NjczOSA3OC42NDA3IGxpCjUxLjU3NzcgNjIuMzI0MyBsaQozMC45MjU4IDc4LjY0MDcgbGkKMCA3
OC42NDA3IGxpCjM5Ljc2MTggNDcuODM2IGxpCmYKMTk5LjA2MSAzNi41OTkyIG1vCjE5Ny4xNjUg
MzYuNTk5MiBsaQoxOTcuMTY1IDM1LjE5MTkgbGkKMjAzLjM4OSAzNS4xOTE5IGxpCjIwMy4zODkg
MzYuNTk5MiBsaQoyMDEuNDkzIDM2LjU5OTIgbGkKMjAxLjQ5MyA0MC45NjczIGxpCjE5OS4wNjEg
NDAuOTY3MyBsaQoxOTkuMDYxIDM2LjU5OTIgbGkKZgoyMDQuMzgxIDM1LjE5MTkgbW8KMjA4LjA2
OSAzNS4xOTE5IGxpCjIwOS4yNzYgMzkuMDYzIGxpCjIwOS4yOTIgMzkuMDYzIGxpCjIxMC40NiAz
NS4xOTE5IGxpCjIxNC4xNjUgMzUuMTkxOSBsaQoyMTQuMTY1IDQwLjk2NzMgbGkKMjExLjkwOSA0
MC45NjczIGxpCjIxMS45MDkgMzYuNjIzNiBsaQoyMTEuODkzIDM2LjYyMzYgbGkKMjEwLjMwOSA0
MC45NjczIGxpCjIwNy45NTYgNDAuOTY3MyBsaQoyMDYuNDY5IDM2LjYyMzYgbGkKMjA2LjQ0NCAz
Ni42MjM2IGxpCjIwNi40NDQgNDAuOTY3MyBsaQoyMDQuMzgxIDQwLjk2NzMgbGkKMjA0LjM4MSAz
NS4xOTE5IGxpCmYKJUFET0JlZ2luQ2xpZW50SW5qZWN0aW9uOiBFbmRQYWdlQ29udGVudCAiQUkx
MUVQUyIKdXNlcmRpY3QgL2Fubm90YXRlcGFnZSAyIGNvcHkga25vd24ge2dldCBleGVjfXtwb3Ag
cG9wfSBpZmVsc2UKCiVBRE9FbmRDbGllbnRJbmplY3Rpb246IEVuZFBhZ2VDb250ZW50ICJBSTEx
RVBTIgolIHBhZ2UgY2xpcApncmVzdG9yZQpncmVzdG9yZSAlIFBTR1N0YXRlCkFkb2JlX0FHTV9D
b3JlL0FHTUNPUkVfc2F2ZSBnZXQgcmVzdG9yZQolJVBhZ2VUcmFpbGVyClsvRU1DIEFJMTFfUERG
TWFyazUKWy9OYW1lc3BhY2VQb3AgQUkxMV9QREZNYXJrNQpBZG9iZV9BR01fSW1hZ2UvcGFnZV90
cmFpbGVyIGdldCBleGVjCkFkb2JlX0Nvb2xUeXBlX0NvcmUvcGFnZV90cmFpbGVyIGdldCBleGVj
CkFkb2JlX0FHTV9Db3JlL3BhZ2VfdHJhaWxlciBnZXQgZXhlYwpjdXJyZW50ZGljdCBBZG9iZV9B
R01fVXRpbHMgZXEge2VuZH0gaWYKJSVUcmFpbGVyCkFkb2JlX0FHTV9JbWFnZS9kb2NfdHJhaWxl
ciBnZXQgZXhlYwpBZG9iZV9Db29sVHlwZV9Db3JlL2RvY190cmFpbGVyIGdldCBleGVjCkFkb2Jl
X0FHTV9Db3JlL2RvY190cmFpbGVyIGdldCBleGVjCiUlRU9GCiVBSTlfUHJpbnRpbmdEYXRhRW5k
Cgp1c2VyZGljdCAvQUk5X3JlYWRfYnVmZmVyIDI1NiBzdHJpbmcgcHV0CnVzZXJkaWN0IGJlZ2lu
Ci9haTlfc2tpcF9kYXRhCnsKCW1hcmsKCXsKCQljdXJyZW50ZmlsZSBBSTlfcmVhZF9idWZmZXIg
eyByZWFkbGluZSB9IHN0b3BwZWQKCQl7CgkJfQoJCXsKCQkJbm90CgkJCXsKCQkJCWV4aXQKCQkJ
fSBpZgoJCQkoJUFJOV9Qcml2YXRlRGF0YUVuZCkgZXEKCQkJewoJCQkJZXhpdAoJCQl9IGlmCgkJ
fSBpZmVsc2UKCX0gbG9vcAoJY2xlYXJ0b21hcmsKfSBkZWYKZW5kCnVzZXJkaWN0IC9haTlfc2tp
cF9kYXRhIGdldCBleGVjCiVBSTlfUHJpdmF0ZURhdGFCZWdpbgolIVBTLUFkb2JlLTMuMCBFUFNG
LTMuMAolJUNyZWF0b3I6IEFkb2JlIElsbHVzdHJhdG9yKFIpIDExLjAKJSVBSThfQ3JlYXRvclZl
cnNpb246IDExLjAuMAolJUZvcjogKFJpY2ggUXVhcmxlcykgKGdsYXNzQ2Fub3B5LCBMTEMpCiUl
VGl0bGU6ICh4ZW4uZXBzKQolJUNyZWF0aW9uRGF0ZTogNi8yNi8wNiAxMTowMyBBTQolQUk5X0Rh
dGFTdHJlYW0KJUdiIi02bCNKJmtGWTxGbHJYU19QQXRtTl1MamJ1PUE8ckA/V1FSQEtdOF1QQS5o
MWxMKWQlZjNAWWIoaFclKFpIVms6PCwrOD45RyU7PCRyK2hcNEpBayUlP0hNamwlcTtSPXMKJWwx
PHM0cW9BMCZWdVFQMjxFMyIiRl4mKi5BVWk1ZU5WaF5ucylKKG5dOHUyNmZDZE1BR2k8ZEY8dC9F
aj8rOCc8XmIsNSVoYz1eTnI6S0ZLTFYoVCY/VGdXZV1ETUFDcXQwQ00KJXI6KFMpZSYxUmI1Rlxr
TjpRTS4raidWaVVJIjIlbWtKKUhBRTtPPV9yVkgtW2hgb29SXiFDI21xLjlISnJyMWtvczdINkpw
PSEjaz8vVEFnMnRYJytKJVlhSm1NQXNzMmgxYksKJWEqLyZkaHFyaGMlQ2djYmI3Ris4OCxUK11M
QXEyS21hRVFJKyQ/P1BrQVBrS1lRKXJCbz5hWCYrL2pjdT5TWWpaZVtQPVMqdWFvM1Bca25NPmxE
UUFTKT0vM2JNVlpoYi1KXzcKJXJfMmFlPyV1Ul9QWlJuU2NPV2ckXVJBXU1eZT1cXFcyRkIncHQ4
SlxmKyQ+OygsMWhycCVKKzY0MXVKIyhnQmdzKlctaGZUdS5ZZUZvVkFlR1BBJFBRR1pQWFk2SmMk
KF1FVj8KJVN1aWVkRmJybjBiOSFdJ2kuJ1IsWGhYU106W2U6KVg0S3UsbGBuPFFhLzZkXjc4XS1G
bmg2VlIrMjgxOC9RdXQqPzImS2tKP1NDNG4sSUcucHVkRkU6aWY+Zz0oTjFUJCE2cGcKJS5TRlU3
YUxyTVY7T0U3ZTIibkM0PSpqT0FlaTkkYzQ1YGc6Rj5JZSJmNj1yPToiMCFDbUdrdFxvKy9RaDRB
SCgrOztYN0VcYUVobipaZ3BQLCslUl9zNUNHMmhxYGVbb1ItUSMKJXJvcnRRci9AISJuKldBZXMp
QGwiTmNuNileamBob2B0ai9aXlpBJ2lkclo9YDFBa3EuJltpQilNOCYuO3BvOlI1bWYqMWFtT3Rl
KURWczg3SVg6a2MtZitpMWg3cXUlWj9jSEEKJXAmMEJXP0tlJmZJXVc9bmhnVSRhSGlGISg0YDJl
KDJeQDVSbF9qSyQmJHVaKGhZaz5EbUwmPmRUblAvcGg8amZYR0ZzQ040P1wpM2hZdU1MLi0uR2RJ
L2BXOWdFNWpUNT4jZVEKJTJkXyNgcFw7QHVzJGEnZ0d1W2oobGgpITdNPm51NCFDRFJSJEwyPC5X
ZkVZVDwyYnNNUWg1ajklLXRkIW9GWDVbXyxtayVeTzNjcW1mKV9tZF9Qak9JdCMrXSs4MGpnRm42
Li8KJSIwSG5KNU8qYDkvRy9oZ1ZuNUg1SS9oSVtycS0kKCshNUZpRUglcjpXcFFycywvMEZQVkI+
IVU6VWAkajxTKE9UTzFZRyVSWFkkYlVuclRDPy9zODZfazV0RjVJWGtxWycmcUMKJT8vckQ6TlNp
O0VKJU9VdWgwYDtVaURqMD1uJSciNGw4bFNYbGg1XC1FdWFfPUVQdVdeR2lASz1MZl1RXD9BM21m
OlY6R1NgTi1GT11ROl4oRG1pbF9CInVCKXBcViwyXUQkW00KJUlfWSZtXEFkNjBEQFxzb15Wcjst
cW1wclhpKHB1bXFoLjMoLmRXQW1QZVkqV2lENylqSFw9VmdoYDtCdDBENV4laV1ISz1ya3Q3TVNA
RTF0JmNUXnRqKT1pMHFoMztWZEgnS04KJTxTL3UmTTpfK2dwbUtEPFtbYkMqOCEtOkpKK0IvY0Yp
Ri5NU19NLlthLVEkMmtuWC01WDBAKksnNVFTRmpCaEclNENlLipZSlU7JyUuZ0ZlXiI+UklNPTda
bTRURzI/PkU1PzYKJTpWNyYzTEhnJiJIJ1JtL28oMkMuXGQjRCNccGViSFw6JEwpR1NROi9OWFBU
Tz5FOkIhRzRCWGFBdDpUaDY9NHNIcC5lWzlwKCRRUj9MdFlZNVEkQD5uYlU5KW5YL2dacF8wakkK
JWhSUDpdQWFBNDNPbEloWWxJSWhvQ0lNO0RbW15EK1k2RV91M2k+Ky5nYz9MIUlQLVsjW1RtSDVx
bzNLblg3WVhdNDVcQkFvOD9odTQtTmAlY1QuJHFmPEQuaFlNWlM9cSZdOWEKJW1hZVZLRkleNSVr
TEheIzUtLXVdNDAwVV1sIV1tI1BmI1VrTWZeNjdtUCNSNTxjLCQoUWc8YHBiSGRjV3JhcjA0Yixf
UTtTKyluUzl1SEc7RGdvNU1nNk9yMF5cUUZTcicoJkAKJWYuY15rQEYvV3RvJUlwVFMrKW5TZSY6
Xm1EZ281TWsqXEY/XlxQOmxuYl9nRj9cTnRhNGliK05UMDA9VD9Kazlra1sya1ZScyUjXzBRcnBv
MVxDOU5mWyVwOCk7amlFKmo0a20KJUlWdTRVP2YwbyJwX1IybW9eZFAiMkhZYUojLzZNQidqUDgj
bmw+KWNYXG5PPzdmNUFzKD5NZEkqVW5KLUZoZyhEKCZWLyRKJTBlaF9ySC1GXDQ0LDhZPDJJNFtZ
SlVtXWJHZGwKJT0uXUM1bWdmImdTWFY9UTFfLmNWWDE3J1cyZ2UtRjkpXU1vQC85LzE+ImRvLjg9
XmJiNDErVlNmWXRWOWhzaVkmcC4iMlM4X1xYOEktOk5iZzJXdENbPF1GQnAsQkhjQzJFSzsKJUJN
MEQ5SV1lImxGPS9rYVV1aEI4WSZFJ3BsRVQ4Il5qR2ZzZSs7RE5AYk4uKjRqPEtuI11nTWIwMzxC
PWUxZypGblNEKGk6Xj9dRFhaXmNVXFZGcz9MTE05NGNNOHJsQVt0UysKJTU8YkcmVm1TRUFAcVlt
TTZTIVBGPTYvKGNbSCUzQkBybyE4M2QmYUM4UlgxMHE5bHVNYjl1Jz1uLW0iYlkoQT9CXU5RNEBh
alAxa3E4TideKUtqOGtZMGZrJXFfcVQhcHEsblEKJUliPmA2S1w/KlxUODBpNzQ3ODA8TlJuU3VW
IzVGREpDbVByVmpOXSRCRmIvVWZtQVNubV04czc2Sz9HVk45O1VPW1oiQmlFMWNxRlMyImsyZiZm
SjEwIi0sPWYhaTNsbCEjVnMKJUM9aiMycW5RdHQ+clMwb10lOipDRlxZYSRwLVNgPmZiOlBHPyQ1
a19wYnRBT1Q4Z1ZbSVZdS20zMUI7PTxYVmIwcUlKQykoczxoZUdbLE1eRkxUR3AlZkFGL289XlFT
X3JoPWgKJUFJPFM9SS5NRShQaztvRW5ZPkRvXHJROW5WLTNIV1c1IVdDQkQxSCZqL21FX0FvazxB
PW1dYTRuR0BXOWBCcitiKlZkYUo6T2Y2UExUJUgmVm1gM2YhO2Y6Ym1VX3BlJzU+MG4KJT5uYyIn
Pz5uZkViZFVcaFAzWGQnWSNbITFoYFsxSShsXi40NlgvViNnSyk8XWZBQUxScEYvMyxjITRnbyRy
XDNucDZPak1mPWc2WlJYQGxqXjZRKUNjIyxNXUVkY01PbmdebiQKJVs0Om1VYTY4cVRvVmAtXCg6
bkhtJUk1SixrQjUwNjNQNSE2ckllSl9rTV5WQDtQJ1lxMXNEWlxHOFJCSkVqI0IoPm5aaDokZl9m
UFo6Om0+TyY5LVhwdUY6XyNNVS1oJXFjSygKJSF1RjJsKG1KVFhST3E6ajs8YFcpZGJOWnVEOG5k
J2xEM1UwM2hGTTRQUExyPSMyNis4by4mXGlfdT07O2I1S2o8TFswKWpRQmFJRzNrbCRscjNsTWgp
I0daKFREX04janRYaWUKJVdMaGg4UiI2SEdMM00iUWkrWWYtZSREYjVyTjxHQVQ1K3FzQWJoWEpo
TSVMWDAnSkpPOChuTDlSRUFeWkIyPUdRVUtkND9Fb3NjXD9VTUQ3VyhvSi8vXzZUVGMoJ3FwKDBX
PCUKJUdZZG5DbEE8aiImW3RmT2VhVCgqOW07QVxCXj9LM1A/WTowLz9gSTU+MlYpVlZoQSZnVihq
SnQhbCViXTRJZUZSaWZkX2IhPC1zIkhlXUUiJylELDduJ2leQD9lOSlyWGVcZnUKJUU1QyVlQF4i
MmdoJT11IXEyYkpQMkAtck05N2s3JkRNQmtGbldRay05JzZQMjFcXjRtSFRqJVpjY0xlMlhGZzFG
Wl1RLmFiJig9LkhXWWswUzdoYUE1S0VTXlFAaGtGQyQyYSIKJUVSRj5yYjBpOFlQPlxZUS1jO1dW
OyFaIW9EMElIc2dRaWtINlQ6YGRmWyFpdDQtOlBQUTByZTUnJFdyLkNbQ0hFNjdrVyUiNSEzPS9e
OWBtYG1kMitkQmNtWW1OOFchJT9PclsKJTVyMW9RUCtEXERvU19QY0hhSEI2P2VfSWhCSUVZVGVy
JFomVCxzVDxqNnVwJ29aYyEzVDBAYm5ybyEpM2syLz9lSUNmPCNzN2tFZmg6OjA2SS5yJGNtcnJJ
PlxDbk8wXk8oXVIKJTUyTCtJaVRtISNdPVtoMyVoSC0uXUElUCU1TWIvbmswczpIcW4mPU5vZUxh
X0hbaidmP1pKZkloOyNgVkl0JS5mX21eLyNzKUUqIVkyVDdyV1ViNzpxJlM7VURYUjtONCowX1kK
JV5cKzIhXlpURCNiciw3cFJSYUM3aCl0NTg6T01QblxiPS1QcW51NSliYTNlQDkwXDIpaDlCU15t
SkdaLF46U2ErR24xQHNdPVc9bmoqZ09oZShzVHRwPlksZzQjRHJRcjowQnIKJV9xaDVbbXIraDwi
TVhvYiVHXkw0cEA4KScrKDE+KWFeYltiXXRcamVvX2Y8SHBZWXI2ZSc0ZlJnXDVuNkRQLXJWX2Vu
bihdNywvKG9gIjo1UydDQmBiRSExQys5MUYiOClIT1kKJXJxUElPblQyMzJKLGFLXkdrOW04cVFY
NzBWblxqcWhnXXRYPFNdbXJpU1BfcDVKSSRtSjhiKDM+UE1IcHJibkZ0NSU1bnM1UTBeNmszYklq
NGt0TFo1QEpdal5Vbm9FKXVPIiEKJTUhNltCL3Q7Q2VCOi50ciVvOGY9bV84VzZoWE8ubDRacU1M
TyQzTzxyVSU1L1JnaTA1N01fS2tnPittTSlMNl1AOXRJbjQ1ZWd0VV9bb2JDcU0rPTNHTiFXQm5H
Vz0jaHI9ZS4KJUtCQzVQP2k9P2g1MllwaW1RTmoiOytrVyRwaWIrSldtOUZKSE9oLV9qVC8va3Em
Wi4hb0giZVNjI3FaT0olWWApazhRczFpNmtETkhNW09PUUFFMCtRbUs9IklyQlJDbSFSaToKJTQ+
YWg2Y1tUWXFsUyYzKl5aOjxlakdVYC9va0QxVWo9Pl0qPTheb1VvNl5LJTRqTVF0Tzd0VHFET3VY
cFwqb28uXWcoRUVBW2gsQTZoJEUoKlVSYkRvX0okaGdaUW0pKmM9MmwKJUpwbUZkR2teQmpHSU1P
YHFxWzkrXV4+YXNrJyVuM0olNUJwZVNqQT0yTDYlKWctKi9zNj01c2E+Xl4qJjY9VERGNU1TWTZo
RihHTjRvJz0lMDM4TCY6b2g3X11SVGU8aD0jZ2YKJXJtKEo5bHE1c1c9KFBhcl5wcjF1Z1pSO3JH
Z2tMKDVDTyI7aW1TRU1cKUNIOSJbRlgtREswamlOQFRiczJdWmAyOlxWPS9uKmUkaXAnKlNOSlFy
OnBeNDFUWHM3bVtzS1I5WmYKJScsITcmSU1kIjtyLEREZG0/SzNCcCcpb0ZEUDQlTmx1LWRWTEUk
NFs9dHBMR29BLjxeZ1QuPmpBQ1xQN3FwMm1wbipbP1lWU0oqMz9sYWFnb2FgL0RxIjQoIyVLQDQw
NGE0VSQKJW5iYFUycEZEVU5bI3F1WEkjbS5uaGdHK2QpOD50MWtHUVlgczc1XjctaGUtLjEwN2hW
Oz5nN1tiPUQ/ZjpLUSQ6a0R0LChxVkJXTF9wPkxGJGZpaEAiaSNIYUE/UzJsJShMSm8KJXJfViEy
RmAyMkxPOl8pdUohWzMibz1dWSFnOltfNVw6OyM6XidmKTE+UT1UWVNSOS4sVSEwbyJybFM8Nlxg
PC9tQk0lIS9yIig+OVQ3anQ/WFUoO20jR0FYTUk1VXBcMiFlPDwKJXMvbUZiWSYoaUxwP0RDKlI5
S0gvcFw9MyQlbzspUEBKJC9sRDdvYCZcJ05zODxLIXUzWG5qKS5qbTE3ZVRDMFFhNUNOO0pJRzFq
PHJtVTkvcSFtPzloQj4tKG0vSGVVY09YaV4KJVdrNF1AYzFoNnNSaG1MMTA8WTpfXkFBMmNyOUFk
SWRKJzgvTyRAXk0jTlksKV1hKl1jKythQmoyYyhoTGdOblVbSzxmRCYtbWIpYjhkdHIyL1gtYDsl
cVRFRC1sWFFKUFYhUlwKJVdnWzUiRjc8XHNtK1ZTNiNtRm1xKUczQW9kdUkkMkFVbE82Zmcoc3VD
b1BXVEcjXDA7XSMmU1k9JlEjamU1V0BhPCM8O3EuS15JXDpYSChcN3FLRmFuR2ZCQDwpQD1DL2NO
Y3EKJUcrNSlHWVNqLWowbTltVzEyaD8qNG9WJlwuLTRGZUlnO05cbCRpcjdUKj9fKFslW05LMWYp
az8yc21PQVtiLW1CLk1yWl45PU87XiRySDkiU1tSOz9ERURjNmI+P0VaSzk8PEUKJWAzP2wwNE5v
JTdjK0tTcDIiPG9jbXR0QldWOENwPDI6KmRxS1MuIlQ4RyNeXy10PFFoLyJxT2BlLz9FM2dtQG88
VyhKZ11mLl9EI05ZdENGNlQlM2A8KVd0Jlg2Ky51bUheSyEKJWxgLipuPyxIZlMwV11bY2N0X0wo
WWJNcTBkRzIyZmxOPkUsNnVyZyQ5VSFYNU1jMltiMXI5QVZTMkhiPEwqaWtmYGJrbj1bZEE3b14u
NzlyJTRlQUdLL2xXQTZrYCViPkxRc3IKJTIjSy4nYDw7QVtfQSlZcz5xSmNXIm9YLShLckxXYG01
VnI7TlQyX2RAZ0Q9dSkvSkdHMz5UTV1mWFlFM1tQMUBEQDJ0I3FMTlV1U2J0bnIkOyJYWDYjPyZq
VVRBXHJZWyVXOF8KJWRtQXQtQk5hIiloJmI5MTZZZSZfMj5USzBkImcxPEI1NlguZm4lb2JNZlhX
RTo9LyM1UT5tSDZEZGFXIyhsdHNTQC85OGk7dE1ITVRya1NSSl0kazFfQ3VIUlIoLmBQKGc+XiIK
JSdsQ0BRaEYjdUQtSlpIYGkzalY2OGQ6WSI+a1A0T0Nba0VAYDBHSUJtQGlsUzEyISEmJCEwKVsx
JVokUyleaT1RbCVfbz8rZD5lMT5UMGgzQW1EIjU/dVkoYE5gSyJRXTtwVzQKJUAoZDo4OSwsPmRh
cHV1UFhELTttWV9lNyIzYFMhYVBrdCUzPGtrWGUua1otUCpcNVU6azNAa0NWSXVGJzRqRUFTTF8s
Y0ZNLlYkIzFIY0F1bERca15rO19jOmMkVzY9JUxIR1IKJVZWVFk0RGkkbVFoIT09bXIzbydyR24l
XW0rS1BaSiN1JnVUPjNkMzFxNDwkRm9VZ1ZuWmxBWiNMQW9vUjAhUldsQltQXDFUKzY+LjxsMUpN
azMiQWdGLWxnc21zJDVLVlZSUToKJSRySzpCS2ovW2hnMS50XDhqbElzKlk6YV9IOltPbWZKMydm
V0JlLjdcIlUrYDlpTyRKTjkySyheMUwjbCNVcWRTbSc1SyNrJiUqcDJWa2RQOE9xSSxdS0lkNVBi
JF10VU1OTWwKJXA4NSFfQ3JSSzQ3YyciSUQ4aSpUK2BHO0ApcFlmIkFMIz5dS1VrWGNVcEVhT2p0
KCZMb1JzT3FkUSZOQSI+Vm1RNWphPS5DOGNZYDFwQkEwK0YhdT5NQEMtNC9KLGU0KVtkZEkKJSdi
SkdVKD0rKnJAblgvK0ItYSswU1hiVWMsMGszRWlebTg0JlArTCwwanFeXUdtP0dmW19iPFkoVTlh
M11fPnMxb0dlJisnWnNMbXE7T01YVGFYQ2Q0LCk+ckFNb2suQCE0bj8KJSVVcUpkaz44bUY+Ikct
NDA+bCQ6Y0VyWGY+bUdoQ188OWNZO1YyVUJDa08mWDs2Njp0T1YtK2RsQTUnRS5IRSRaJydcSlFb
JCZRc1Y1RkFiQFIzWyJOZUFdPm4yWXJZPSxxQVUKJWVxKDZSPCdOTEM2bSI8bDUxZ1FjPCVDJ3Iw
MEkkQD5RQXRjRzE+RDRPaFsmXUpYVj1gPVROKDwyVC0iLEckbFtLNXNaTzJLOzBkLVRZWS05KCYi
OkooMUVsWFwncipXbCZSM2EKJWZMVkoxS1RVQTJQTE4xTEUwPlo6YkQ7OWtOXk1Ka1VyOXFbSVNy
Mks7K1MtOVFIRUpfTnU5b09yPmZtS0kqVVxZbkE8LjlsX29KbilaPChLMWY4VScobzxrI2YhQikt
VXEkWWkKJVE7MS5NTFVVJSdRNEVVLypxRCc8ND1ROTY3ZCdcW1hSa21oY3I4ZGFONDRhJ0NgPThT
ZydmVHBrc2UuJmBeYjljTVYuRSFmUGAqPCleJjQ6QCg/SGNaO2JURDdQJSc9T0UrJmYKJVxWUDNp
OVhuYj5qSklQJkNka2htcy1oTTk5OWpKXFY2V0NlKi9QS2UmNWdCZ1xdaSxDbVxLNDZIMiw8RVNt
K2RjVzxFYE5SNFhRSjo8bTMxSTdPPFlGQDk5bFVLNV1CNFkmcl4KJVUmUT0zXyNLUnJFKE5gQDoi
MSRFRVQ7bExtNmtkVExgN0NrNmpNWlMzVFtAWj90U0RqcE4uVXI6WlNIQ1Ftbi03Ry5AIXRBZz5M
REBMV1omL0k0OTJaUjZvb0xDVHVNX2tQRkUKJV9UaUA9Kj5OajMmOjQtJDAucVVSRihQP3NSKEo4
OjcqQlZaYVlyTz9xRCdBUlw6aDYvYClYcFsnJV5bXyQ/O1ZuWkw3S2VFTHN0M2leZy50PWEuND9r
SExdYFovJ2FuPk8ubCgKJTY9TS9XTjZdYnRsc2VlY1w5Q3BbLydYQSNLYCYpV1tHTSlIMUlLaUlG
OilkYVFKOFhhL15zZmhDWm9sQ1RKKFBTbT1BJiJJVkM7cGVkK0AzaG5gP0o6Iz9MQllCSGI7JzVu
RTUKJWMsQChjWWgzbnAmJFZVZVVNR0YmZj5RWDVCUW8yXzEsJD9nUChJXSJvZnNhYFJQXHRtPCI9
WVlLUys/LC51QDxXNnUtVGEmWSVCXiYuTVchPDpNWEpMYWpVY0RJRnRramhyOGoKJVdpVy9JVTZj
WyFAS1xgYiwlRDAmNiM6V2cyXjRNOWtxSDMnOzw6PDUrQj0uKTtQX0h0LlM1ZFNCcj9ZNEtyNzxk
S1ZoKmBLO1YkYEp1OFZyXzk7bj9zNTdTY0Ztc1w/SC9OaHQKJUgtZzlIQVw8L21IKXU6ZV1hNCU0
Nk8oIkRcO09uZW5rcG8hI2lZMWMuSyI6SmhPY0FGR0tiTWotVzFXSi8uXmgmY1NPQWlNZCI7Tz0y
XiU0SWIoZGNjdXFxKFRCS1xEa29XO2wKJWVQVHNfaWNdLCY7Rk1NLEshWW9qJHAmWU9GRUJ0VFsv
dCJjSVgyI28yJVFgI1c6TTNILChxU2U6YTpkUjtJWjxVRz90YidoMVIiKkwiJGNCNihvY2MvXWdK
XlcpLUtKLlAiXHMKJVN1Zi4tPWZFa3A1J142WnJNIkNsQFNbK2UoUnVIU1xbJjpYLlVnL0soSlBH
QjQ2ZXBZQUk7O1VXIW9VO1RaVmE0T0tlOmUnWVIwKjhCLlNyamcjUC5ATTA4XjBWWDNgPE4kWU4K
JXJ0TjxUU2InSVBJWz01Q3BzVHEoNmA6OiJZLT51X2E3azprWmFnPmdWWG9xVTQwUTA1N242bWVw
MGQhIyJack9tZUxyQzkyUGcsPUpKdDxDW3RLYDtQTGQyRiZpNVdwRzlaXyMKJU5xQyNgISw1ajBH
SXVdNGVLN0FWWXBDXCNXXDZ1LyJdLUhHLllCQGxeYl9fNVJDXkpBNWwwTk1KVl0sMSx0LnNIP0d1
LzkxZy5OPVV1JGtdOEhHalNNIkhQLFNCMFlXZ2NMaioKJU1DPDdOPyVgWmYqOFpqKE1qXyNoXEFf
W1xNbXArO20uQiheRHJxIj5zN141XV9oJ0dQVzMiSUhvaXFGO05nLFEpNEF0X0U9O2BxcSheT0w3
S1gnKFsyZT9sMkA7YDFOSlFaNyQKJWo7S3BYSCFGRik9KUFmMS0qKGdAJSdwTkZJQTBtNjdMKFRp
SElkOW9SWUFqcydqdFA4OEVnQzsxK0t0N1M2MEw8J1BUWlNNXnBGayNFMmJYQzJZXFxEJ19JTDAt
Zl9uMmhKTG0KJWw3P205aW1sZUJgRlNcciR1N150blRsQV9gMUVqWSRKMnNRP1BXPktNLzhhNl5A
OysoLDJPaTgkUyMrbnIzXWldI0tSOnI5LkouLE05IXJkalhrcVdbKzZoTjA5WzInN2djIVQKJTdE
LyEuSEo9NztNL2w8WkBQYHEySF1pPms6KmBUOGdFMCNGKTw9SiJxNk1AXkRfNGsjN0hCOi9kNjM9
UTNzcHQkQShKSjkzZyFDO0VwNF4lJ01HPlVCb0gxZmFcbGtGJStlZnIKJTw+Om0kP0gnM2ksWDg5
ZGc2M01RaWJRRk1EZG1PRkY7P0ZKXi5FUkk4cEhZKTBNNnUtP2VodFdISCdJLW0kRjAkWzdkOE40
XEdUXUhcOFxTZDIvLEhqLFVYbUFfIkdsbDo+bTsKJThaSltqU2xrJzAiOmhtYkVVOldwQT5HNj5k
Jzw3MWEkQlA6PjpKS3AtJEdqalBWakxvOWBFMkpbc3VNSFFbVE0lKU9KL1QlUFc5Lj5mNl5zYklZ
NnE6Jl47Oz5fWTFDRTJDRjQKJWwyU1RoUEM8ImMzN3UoKU9SZlZxW3JVazJtR2AwSSdmbl5gZGJI
RnNtIWwoRDkpaWZTYkVaODdVcHVATyo3JG4qYDdUJDlSXClIY2dVTHFAbT51TT1AK3Q5YGBIJkRn
S2JkRy8KJTFqZnAycGZRNEZoMzo2ZWFFO2JWUTQlVlpHL0Yub1ZNJlRcNWxGIiJlT0kqaSIzcmVU
TjIlVSI8XFZeQV5APikvWkhKXHVHQEhIP15rVjJyN1YpJWBAZk5EWiZHNGZDIWpwLUQKJWQsSGlt
QUVaclxCR1xlO0BNcm1tPykvLVxMcTgjPjRfLW9TVDlyRF5ZKE4rL11mOlRZbDN1ZzI2IiFgZSZe
UlxJZkVoLVxAaXBQaUhGamQ3SzNwRy0qIjJYPU0hbkc7IiZZMT0KJVRrNU9aKGJuKWpRXykzL246
InBmKVByOVtDOSdURGx0OydKN1R0NStxMXVebjdaNlshL0ljOF1fcUVNIyVWITBbJU1vTHMkYEV0
PjF1XWY5SiFEXj4sUihHZyp1LVFwOyZRP0wKJTg/Im4tYUYyJlQzbjstcCUnX0JrN1xIM2BmaV5k
MmErP2dWMUxtL1NVKkNuWGozLk1mK0dHQFJLPTE6aFYqSjZYMU4mSjxTaGdKSUssPWVqTEppbWAl
LV9HUT1YTklYaWxoXFMKJV1DOF0lPFQva0c0Tk5XTVReMERqXzA5SnEzYXBgYipsOzFPQ2klPWNr
Y0FUIXJMLVlKPV5eT29pLzZpdGtSJGNxQ3J1Yj5mKmlDJ2dzbGhmPHJiTmM6cVoobi9ac1YvQEtn
LE4KJTA+PiEwSyc9TEJdamVMZDosIShbODkhZV5uZkRjXVRhbnAtNmJHUm5cU0pCQUAvYEojPztf
KjshazpZIiEqRWFCS2pZT3NsYUZePjJacztfY1I+aiZqVSwwSSI7Zm9qY1BGPTYKJUpoW11IMVIj
KjJwaSFhTldOUDNVa2VjcjpcOy1dbU9HK0VaKzE/a25DUDopRSNmaU8wLGs5c1VnWyJeSUNxQSVX
byY7LiciV2Y5LSRhNSlQOyhoTUJsVzZtbiMzaUMjMCwhaCUKJTJNWU1gO2ZgLzVKJWRlSXFxTi5M
QyIpRVNeIiRVU24mNTFqcXRLUHRJcGQoYHFaYFdnWDdKNnNQVGNfPy5SZStqT3FPcGhLV09zVGQ3
JmgjT0JQWSU8UVVNNUYmV1FCJE07Q1cKJWRaZm5CSFEpKi4/cmJvQjspKW4+bj1qNzNhdUhZXTEy
R1cqWiYmUmA3I1ZiYThZKihWYCNnUU9QK0QsNi9yOy9IVj9nPEtBO0g8Zldka1dGYD5yTV8qIl9b
KmFTZGddX2poOC8KJTRuWVtwakhoVGEoV01KWypUaEkqRC4jIzpqRGdSKSQyYjZSMl0yOXFfaCQ+
T1FXUTYoSTslaXRYWkE0RUE0MVBmOD5eJGY6dS8tZkhDP2pRS1taU0xNLGQyN1onODA6PFE5NWsK
JW1oYiVjNFclRyI4ZF5IT1MuXU9LNDtzTz4sTElKazsmc3IkOy0lRDwwbVxiU1tdJEptJW9RJTJo
S3MvVlwlVTVibUxRUUwzbXBCbjtULkVES1NdLkQrW1osIycuSUNhSE9fNlEKJSQ0blVMKi1YVVZa
OiFfQUhvT1dIWWdQbiMqQDEzbVI4JkM6MjI+RSJWQ0lQPEQzLCRdWFk9a25GI1FUQkIwUDFTRS40
Vm1jIV43KmptUCdiZSdjSSk5IiQ1LE5qOEwmOTtrLiIKJTlrRSVcPUZfX09jaihgTydeLE1PYW9E
JT4yOm8qNitWPTAxWFw2c289a0xnNjVNPWhmU3JnQClkNUtqcldZI2JHL1krZ25GY1EpbV4mUTJk
KW1yWlRpaVlAOEA6UlUmZ1Q+JzUKJS1tWDBiR3NSb3BsYzwsJzxbc24ybE04UlQkK245dGRYL18h
SS0uZSw5MmZGJFdfPkMqWlxqPVdjL2RqVlA7RGUtQTFKQV1iOTdWdCxvVXJOOEIkJTdMPjxrSloh
VlBMXUNRY2QKJUxEXik5Wj8jZTAjL0hlKlNrYlIiXUwsYkZKbDpDcVFbSFZbcHVAU3JKQDQ/OnBr
LVovJ18pSHA5Pzo1Jj5cJTx0WGArWkMuIlNaRDZZcCtNQUQjUnRVNFFCNmxuR3NuWzFrXHMKJXA3
ZXAnZj8mYV0+KlYnRFE/N1pEXWM2KSJhRCNsM0Rvck8iJ0cnJnVaNGYxbkw2Y1tORUtvTmg4XXU+
SD5xQSFqI2h1UydpYkxQSD5aK0FTZF1OOkwyV1JNOmQvbUQsOTNnVW0KJWhDbCJuP2xUTUdJNlx1
IjhRP3NyUTdSK08+cW9tXiJtPV5dZE48bTdgTCIlXj9eRkRvcjAzK0IuVjktQChULDY1MSNWPHEm
VFJkMiMjaz04cFJFSSJyb1dXJT0yMj5Zcj4rSGIKJXMhUT1XVis9U3VQOkVMRkJkZ281TmwoJHRg
UlE+VEhhMj8pNWQlR3ROMUk4JlpwdG9bSCNISUlkRFc2QHM2UF9TczNMUSFxKjJPTGVXVGdhT1ty
SnQ5ZWtzajxDYV4lK2w7VVcKJVdbcExLYUQnYTQzK1QnSi0tb0ouMFY7Nmc5QlJuJzJtYmNDcDZP
UFNNWE5bVFZpZilgS2RCPEhhaGNSKjxAXUJiRk1LPUNUXDwwTU9OWCI/b0IxWCYxU2toSkhvc2tF
TDUuKz0KJSpYOUgsZjBHN3I4LnI2NTxgJWIqJEJjcDA3Ijx0ZERcK1tLUkVlITJhcChaOTUkaHVd
YDM9aFFgPzlvNytRZkc8PlM7Xy0rck5KKUIhZW4kSVB0JmBAczk4cChsOkM8W1AhXTAKJTc7PDwr
Q2F0W0omMkpSUkBlK1FXbSxLOkEmIV8sVUYlUGAvR2A4cClUZj41b2Y8YCEiSGBqLjZUJEUmWipd
cmlOXUZiV2FuPzEkT0tmRUwvYXVHUVFTb3AwZl8+dEZwamByN1wKJSlqPypyZTcpIjpGZHFjbmRb
QilAKnA6RUtEMy8xc1c2WEs9JjVCS0FqNUtUYTI4VkxtL2osKEhPVWlPSmYtJ2AnOiUlK0NjazRq
Zz1VP0gnQ1VaZyclblAuRUY+MWRYW15QNWUKJVc0X0o/SD8nNjBPLitVLCw9Y1dlYWdtJFIzWG0l
QlExRkpzXFVZKFY1TmJsQWxNOWxIbUFIbEZYNSs7MUUzTWZNaUdgWitFIT4zcVZPYjJJNGswUVZX
ZGtmImR0JlkuZ0ZfY1IKJSpMcDsoQ00tdSRJUGpuYlFOX0csKkhQIzooM1U6RFhpcUwuW2RnOTlp
KkJHZ3A9LSEyXFQhRj5LRUlial0ndEonXDAxdTVLTVtzcE1KIWQ9NE9PIVJML0xBcHAtKyI4JyZk
O1kKJVtXXltVKCgoWDBHIz4nUEJRNFlpJmgiaEcnSmErZi04SGFfLHVlMWpUbkNHLj1WT0FAZHRz
bTdQODJLaVUiQkdGKWg9LGNXWDNGbzY5Rm1bcjchRTk1WVlXIVIqcClKaD9fQiIKJVQvXGhmNz81
SE1iJkpvVUtsXlFsO0JwazonLjYiUzh1Py5NKiYxdGREa2RING9VXjA0NHVAdE09STtPSjhJXEFB
aTIiS2o4S0U1QCQ7WU9eaCd0PHRXKFE8LDxIUFlPQ3I0YksKJWdrOlh0YXJSO00hV3BoYjY0SDRY
Y3AvQFApTzYmYGkiQyU6OShXUD1cLyY3JEcwVy8kKEdjV08mXktuSUFwbkshMzYjcUhWST9ySlZL
big0Zy43MDEkXU8hIiRdYGRjJXVUdVAKJT0jT2NXRSJSNzAmKi9eN0xrQ1ZYM3Iua1crL3VjZzEw
WVZAOD1MR15KZi8pVyRMcE0pbWBgIjgyYlBlK0IxbSVkcDMlOWNnK1hmN2hyQ2EzWEZLRV05SSly
RClQNEk0UlRuRjkKJUhhbV9rMW9jOWBBdFd1dT5lTDxsWHM4TS0hNGQ6dFE1NiVqT0VtREVoJShv
QExcdEZMMT9mNS4nXFVCYSo7Z0FaKiU8Ll5lLWVccnJKb1YsMUxJTzpDLDFAMmkrKksoSTBoUnMK
JTttW2plP2tWKyZPRHBJYCVSX2hhR19sYC1LSVtXWjkqaHRaJy4xPE1KMTZQJj9yMS90WTdlJksn
XjJBazVRX08vWVtcXG5uLUtLKjsnbz47MiNkNzdJNlMkR0pWIlU+XlxiblEKJU5vJD08aiNGa1Ji
LHRObT9nJnMjYzJ1YFtfLEpMSSIxTjktVmooREgyOUo/XmtlKVY7Z0RbNlBMNVUvJiJwO2RIMUFh
NmpgKCFNP2NrN0Q3Xm5WXVpgLm5lQ1E/ZFRvW0BtL1UKJSlsP1thMydvLmJMXylTTksmQEVSXmFm
W2hZKjg6XSsrRTNsbElBVUg2blYmLiRbKlNBK0JGJ1cxPTFeZFE5c2k0ZkBGRjlONls/YFpYVDFr
b1JCVFhDK2ZebmknWDpHJklFLSsKJWA3O2Q7JlItPUIyM2JTI25KMissYFdSZVlPWjxLYVMmXktv
ODhqSi1sXGI/TmQsImdOYTQ9WDpHaVQ9IjtPcD0pTVw0aUtDUV5TJV9PNTFNZHAlYmtJQGBXWlQy
SCIuVzFCZ2cKJUckLTROZkxcP2FiLUkrQjMvXz9GM1BRdVJCTWx0cEBVYmxoYCghUUgkaz1dJ0Nq
WlJOb2RRUihja0E7Jk5ZMSdhWE9mbVhdSWo4dCtpUSRgIWdgK29fZCk4UzoqTi9bUzRfQUwKJSgz
Y0BkZDZOTzcoVVtYJmdodTNncjcjNiw3XWJDdDcxJ01ePS9wNnEmM1YxYC9xTVswKTwvKy5JRTMp
O2ROMypUSGAkPTY0OVcqKiVsSl4wbUA+RD8mYD1eVzkuaGYiYS5jJDUKJSlSWig5P3MnbVIqUmkr
SVhjLUpoRCZvbSFYaUZfVytIKzVUWTAuUE0jSFZrM0ZpLVEiYGEmZFdZXG5sMUNNTUpyVEZqSm4m
VzNYISJJMmhtJ2NzJlZZPEsobmZPc15dKi1IJDEKJSQlRHQ8WUYjOD0jYz5JSC9gTDEqZDElUjxV
ZD5IMlldI0soWFZWSW1pM2lcKVtkaWhsVUw2ajU2b2kqQmxVOl9nK3NmVXFDJ0xHSiNbZHBxY190
ZGcvOF1VVkNFO10tTks7RikKJUIqM2lBL2wzSyUwVFA7Ok0qRkdFTlFYJ05iMitYYj1ZM1QpN1dL
YjFXXGA5Ly1JYk5lKEdzXXVHNWpmNCVJWDFQQ11sT0tXI0g+VzMzV2JhJyg3KFJaNGNqQ24iXyEk
WXRdPEsKJWcnRmArZyRsPSZFRUxTSWYnYi06OFB1OnEzLVQ7OS9OKm1JTDhwYS83LGM+dEc0TSt0
MlAwTkdTNUR1dVEsRDBrNmYhWDIpMEo8aFFJTm41ciRpI2JQUDshQ2Q1a0dOIUJrRG0KJWdRYFMv
Q2coSyE0TUMyZ11HTF4tPyElQi8/TnJTOmcuP05iSkBBcFknb2NRLEA6LiUrMUlOYTZYPFA4UlRO
dXBoNCpYN2M5YVNrYSRqUjFeL2hJPXFfNGUwMmlya1szVVwuVTcKJUhLKkpuYSUndVsxL203dSpp
SUxpQlgnWTwnbldDUyopWWI0a1VxXzNXQz1aQVhsXEtrPCptZkNGRXJtZ1ErQHMwYFA9O1QvOzcn
UWNfN21RZyg7XjowQnAqRDg8UHUxXlE6bVcKJVsodF81QGI/bSEzPGJNXVlTN0ZJJEJcYWU2IzpN
NFYrLiJYbFlDWFdPWmByQ1UzMGdHQCwqWSNqRlk2WiFKcWFvPWd1YkRDYSxDO0xwV1UqWmw6SGdY
IksjMj1cJTIwLHA2Xy4KJSZvdDkxSUhzMUFySkZcMDNrMCdaRydjOjxtSCJnWU4xaDBmN15tWkph
cD9gcFxdUk5GIkFuKFxRUiVBZ0spZ3MrKmJvNktLZSthIz5OJVoiaGNuKG0+P1srJ11oXTVdSyw+
WnIKJWBaZzNYXEZAZGhhTHJFQGdgaEcmV2E3MmpTKTBxJzlxbzpnIiZRS1AqRFlea19FLjJuKyss
J1w8bicsa00+XG5tV0RlMCVuNlVfKWFNI0ZjcGIidUgwNzYjZCw5QEFkJilkbDMKJT5JNC9pR1Va
TU5bOFw2XCZdJTExXCVNakZqcSdALSRfOlYoZVFXKzIkJWRtalE3aWUrQE5YX1lMLjg5Jyc6IlZf
QlJKclVuViooRzFhay0wb25nLSdIJCpKb240U0szZUBSYGsKJXFQPj91JGU4VWFyKEdSaUJySWhA
ITFkMGxWTFYkSl5nbj89UkNePS1CU2M4IlBHbzc3NlJVYzVfZ2JiIWlRSXMvaCRIVk9fMltFMk47
P09pNmE/Q2cxQSRuUTowMSMzKzMoTVsKJUUycGA4JCpdUSspTTN0S3IqKTAiRHNZWClwYjs3bEVL
bUVEN3RJRHVqSz8zUEVIcFsnJC10LTRdPW1aX3B1IS0pS0FZUEI1UTozOzAwYU9XMDt0YUgoQmEr
ZVVmUDpAPUVAdT8KJT1wS1xiYzxLUVpnLCcmVVAvKVgrJzZTOGY8WDNBPjJSL3VqRTIkdGgvWCxD
Yy00cEx0XW4+Ljs9Zks9aFYrK3RzUG1DS3AsanR0Li1HOyFmIWUlbEwmMUtrVSEwbU9WaVZvOD4K
JUIpdGw+SVVaRWdCSjghTFlFKyZcTWdkNUwrXjwpPy5DI1liTUQ/Xi5kTXQiMl9SX1BQOjdOWjJt
OzdlXG0sSyh1UzxqTnVBbi0tKyIoMUlZPS11SnNAIyldaCQyaWUnXVlfUmkKJUtrZjcvZEU4XlNI
PitKSXE4XkhDLCs2cC43QVZxUyk8Ml4nXydNJzFTUSRKRlldTjdvcCMzTWoiW0UkcFdgRj4lOytq
QVBVJ1RXZGpMNTZFbm4sKiUza3JQYExOISglWDcwTXMKJVJUQ24tLyRyLmhzNi4qMFdATyFaODNP
OHJjOj5SJFwvMS1RPWI7T0hbNFlYYSlPXDxyKGw5UT0+ODc9cWZKKExrZEAvKjwwR0E4VUp0MCVg
XywyYCk7W0JGYWRGMDliJV83J2MKJTVSZU9rT05HSl9wQmYyPF51RXV0JDw9IypvXiFddGdKPEFV
N1FOc1Y2JStZWU9WSFhzOGxPYWNvS0hBcFMpYVUvXihGXFhwQXRrNUklPCcrPUlWQj5ycyVPbVxL
Z1tUWWY6VEUKJTc9UGFvZmFjPDMpVT5OLWNFN0lgNVVsRTUsc0IyJCFeMWFxQHF0JGBCVVVVWChn
LnFWVlFDXD5DTiFnVS4uUDlKRSZmM3NSWjhMIk09cHR0OihFazdtIkBgPVVyXG9EKkIoO0oKJSNd
Li9tVDItN2U+UE1hJkVzMzQ3MFM3KTJCYnRAZkZqPmNSZTonKCpaX0tBU1RRPUdUXCwlMyw0ZVY5
SERbOyheNiZiWWRFMGdWV1hHJ2BlcCVWLXQlNWQ3JT9TJldaTVw2XUYKJSIlOl9ePWpXUVJiXV9T
YFYtXUMkWGduJi08Y0cyO1g/b25aPzRoV1gvPmhKYUIlV1VrZmhiUzVTPmZfRjQtYDEqbmZWNFg2
SEdoREZWMzF0TjtJJmZwPCZvMyNENjBNYSVZazAKJUo2KGkhWzlBRWBJUlZtbyMqKGEnV2JcaFRb
aStPOEY1WCVoTz5AYkxFXUhtPktbcltwLTc3TFtkOEguKF5uYTV1I2EwOVJGVHJmLkdAXURRVDRC
Xj1VTnE3SSpLbmc3OFwsOVcKJXJsQFdHRzMiUURQLUd0aiRpOylDQFZLVjpaMyFVU2lqPlldIVha
cXUlTGpGVSYjOjVHVydGUSlqZiZuTiJWTTFnJ2hnJEM5SkRUKiZlUF8zS0IsPCJsTytnSkRvMVkr
WVNoNSsKJV1kcXFkbmhRNUhWKTtqQixqWEQ6WzNDWDVMUCUqJFtpUzo0ZTkjPywjJEQ+XThoYUE5
MCJWVls4c2lYcUxpbWBTYi8qPFQxcnErVC5sR2k6LWwyOFRrI1c5M2NCOWhlPyFxclkKJW41VXJL
KkhkclphVSRMZGVXYiZsRyovUyotRTpQcDw4QW50aSJgVWwoUyUwRGFOR24lamRiZmMvJmBQLnFJ
aiI4bkJZVkReRkpodGlUazAySlcibWgxWi0sRSdKa18hbmQzQVUKJWEpU1hqVF0pNyEzKDNdKCZY
cCtPNFZMWzYnbDtSU1Eva0U0PDBXJChNbT1ZSjxUTGM3WVMoL21LIidMcEgndD9TP1ZMbHU6QShF
aFM2IU1zTCd0XCsvdDVCcUZfLkBWKC4nYWQKJTcnYS1lNjpvKkcmVnBoNCUsPTs0aUJXS1U+N2NT
OChxTk1oNTJoLCZTO0MvUmo8NShpO2I7VS5JLWs0RTkzYkFJNUFEUWozUTpZbmYpOyk+cSdLbFw9
Z1A+by4wWT9YTyk2YF0KJWhRcVQxQWhaUEhcWDVMNjdZXVZkbSwram8sWWNsISg0IzkpQVcvSDg4
RUMmN1JfY2ltXVwocFFnXzA8Ni5eY3FrYGJjWyJHKGlCTSgoY2RvSjY8YEM+Qzw2KCJYTmk5Q1Vt
M1sKJVA4UDgpYEV1XjFGZ2ZHZlFfKjROcCVALyRhM1Y0Kj0yMkZxcyJiSik2UGg4NkQ1TWxbKUZW
X2NhZFRdL1NXYTxJKlApaDFqW202QkIoKWZQcGgzRV8xUmhXKFVmRmMuJ2M0dCsKJTdDY0pjPD8j
VTohT1NMZy1objowalZHcjEqW2pob2ArXTQ3XGQpLWEiKTVaajxtazRnKmRjV0MuNGRARSg+YUI9
bjFBdG5jNG9HUVRVcTFxTUA/TFg/PFtxOiReU0dabzQzUmgKJUBmaUNdIiNbKDllPSVFTVg5W0hb
XzknKGljPyJeQkEtVWFsODY8L0M3YlIzWE8vNmokO1tqaTlnbiRvOENVakFjKT4xRGtZZExfbE0k
NGxlSm9ONXMsPmxSZjUvQm8vNWtRVEwKJU4kWFMzIW1dQWZLK2kzVUROJlE0JTU3ZWIqKWRlOSRb
NlIqRkUkL083cStVTi89YTNPazk3QipAYmg8Y1w6YXNCM05zW3Q5Ry01ZmRlYi5pazRpOzw1TWQl
c2BrWXI4Jms+OzMKJScmSlltSjgsNCcjajZGN2grIzpoaidfYClFcGcnWTRqaFRYUVE3QWZrXzJq
K0NRKmpMInNeaVVKb0hLcTVlUjp0PTolPWI4MSQ+QiV0cVQ+NS4xNThfMUVZPi9ybyppSSpaWjYK
JUdxUlRwTGRLb0YrNmptS2gsdDdjMyFrbjQqLyg0VFNNMEo0RnRwTGFlW1s3VV9NcVhNIUd0RVJC
OVhJP2tWXVokUVVcYEQvTkc0bWpDcTkySWlocVI8XiNtOU4hVW0pVTlJWmYKJVlxUzY9TEhuWz4l
QzhAK2FAKCpZNSlAJVxLRnEvSE0jMUtAJEVGOCtndDxOXlZ1LlotP2JSKD4zIysqOnBRXi9cJGol
aGAzMGMvRj5vMklpTERdck5SV1VPUklnW2pccSdkNDYKJWZTR2BZcWQ5ZSYhQ250UGxQUDFePUJH
dEA0PFwpQGtPWjFaSSdYZSxoP2IxXzBiM3VzUzVtbCheYztsQUghb2NiZVE4ZEZdV1hLIkk0PmBp
NW9TOExcT2knYzlEKiNPTkhYSlQKJTBnPi9dOixoJy1Ea1MwMTtac1AhRSUjVmcrcVMzUDpBMG8m
V15wOVMuWV4nbUMqKVJvOjclS2tLXmRYZlo4VVtnWitwcFg4RTgzazxiSC1TUTwlb0FGJlJMJU5D
UTFQTSgoWCgKJU9CVzBrPERQMHBAXUdrQCFjTiRJPSE/OSlOUy9aYEM9JlI7L2g/ZytRQDRva0Zn
bENtI1ArI2dTMzNVcDk9JFEwRkNeME07TS9SK1YhVlduNlw8M2NsOkB0OWJINUghXldTTmAKJVhm
RnJMTlAralBPOCskYFhzQCM0XDIxSzxwL0c5azkmZS5DUG9ScV9rZW0yTiQvMVE4LmxpaWFLTic4
MVMoPkVQWyFgbCxtMkNhIUw2ZC5eTl1XJyI0X0FkdDdyNjY2Qkp1LVAKJURUQT5gcC5VQXFgczZg
SjddVmEsTWokaFlbJ2BRWG42VHIhYV1XdUdiNyRSLU5tIyxbaTAzXzo2ZkQ/ZG8jXF0kXUZMPCdC
VFJRXTBVbSUuXENVYmYjdEt1JWFdWGZqQzgucSQKJTxEPTMkaC1wOmUlUFFvQCItUCFNUGwtJyVc
clhWdCFKMlo9aSl0OVdlOnMuU0AnJU1yISdTZzBIQUAyYjUzamdtK1A5V2Q2VjlCVjwtQjhOLiUz
RU1PamJwIUFMKzBqakJqYnEKJS5DaUxPSzpVPlgoTVAwa15aZDpcaWhzRSNMVEJlUSYsMi5SbiIq
Q0soazcnMy45KjM5Z1AuWUk3Qi1eSXI5Sm1VQUlfUGlIKiRZYERsT1lfQihmRlpbR2I1aClVSFNl
JiNlR2EKJThRYUBSajAtSy9scnM7TUpYU1YxSTFpLUghJTxOMmo6IV87ITpVNyoqYlBSNV0rYyFW
KDpdMlk0UiMlai9uLU4yazheQSs5NiIqOVJRImlwPVxcMnJFNVkyTC9WSC4kQCJqckUKJWFVKm9K
RDZbQjsjcGhfRW0yRWNHcEJgTzEjOjRANXBVMFNCNFE4Uy8+M1QqWy5fWE5ILHBXJDRuQHE5LF1s
NGJXcC4iNVEhLExzV1c8XWlvNz1eRz1oRk91a05gISNdJGVMaSIKJU5nVkc/Ti9Pck4zMzAiLiVM
KENFIlxrT2dlQXE0YDBiam9KUkdOOnBnNGVOZ2VdJz09QmkjWmQ6YFgyJyJVOVxuPTAvdWtLczYq
NkNRcioyIzQva04qVSE9SylWYyctS2VNQ2MKJVs0P0MzIkMuZEBcLVIoc1lIWE8sVWtIIWZRb1Mz
YiI9OSNTXyQiRkhQIyxHUUpMJC5GZ2I2WHMyLUpKS19RKWFNSWwvZi9aSldzY2huPW1QUiNBTkRb
bXBCNCldZEYrXTtZJ0AKJT8iL3BJayI1V2w8MmddTVtFImVbWkVQXXBWb1chamwzJXJNJT0vdFpd
MV5NW2RyXCJfSTddODZuKEdyWFonZSpMJCtpLyRnJUBfalUrSUFzRztFXz8lRGEkPV5eZ0pVSytu
MT0KJUAncy5rIXRrW1cqRDltNlFSMHIjXSdPYW1EJjVQSVMiPnQ3MCc+ODkmdDJfLT8pSDYlQi9C
PnEwKTA0OG5YXmNsSU9sW2tUc0FsMic6JEhWTklgMTIrW10uYE5JdSpcKDtScjEKJVNpW1VZKEhY
UlMyVm5YaHEmSWJEOi1POC5HK3JiVUM3QThzOWxEJ0w0NyQoIjktNCQ0blJEbW1YMUdRaSJJWUFx
I0MhcyVGJE43OEViZTRib09BNCQkMExWIVpPKWNlPlxYcEkKJScuQC5qR1kvOjpQKD5cKFIoKis2
UCJMJ2YoTEE8YExQL1BMXzwxaGhWbj0uVXE7S2NJViErcCVeUWpkZVJgQkJUaEgqU2QsXi0rIk9c
NTNeLTBuJm8vcC07NDpZMnBAZywraC0KJWg4WiguRiozaUVcYUo/OXBcM0gnSWo0WU1hZGFaZjBX
PlU/VjRUKShtUjsjLSxwZFJoYGViO0s8MjIzQlJMbzElZVsqRkdvNC0nIztyY0hcVyZgdXNoNWso
aj5CUkM6UGB1I3EKJS9zSDUpN0kiTnBlM0VoQFRIVjRIYVclS0IhdHBkVzImU1EpNWBoPGpdLXEw
Jzg5QidcJEpPVjQpTidiSldRPjhLSjIkSVEuMmYkL2orTHMrTm0tWjFqJU8iOSZuTlZdV00hbGIK
JU8nSG9YUixcVDRaczJnaG5iY2olZTE4TzRUT1EkRixXV0Q7MSRcPEEnNXJtMSlgQmwqR2IxLCM4
LSxPQFVybDpQXHAnPWtcS2o0PydAI1ZeTSVZLVI+Mm4zJywzM1pHIT50ImIKJWYhQyQ0N3VpKj0q
QF9GczxNWU8jImIuWz9YUWpecWhwVztdQC9xQ1pLbF91c2FLX2wyNXUuNlAjWShJNUQqQyYuU2Bt
WS1fSSE4YT02SjFuJCtIaWhpPitgUERqQz5PRWtLNUsKJUc2ck5sYDJqOEBRUjQhZiVjMCs1Zmln
UXFTJnFjKGM5WiFlQk1AJzglbGlPcTtZPj80WW5yUVJnOkVfVzxQKT5AKWRVKSVHaXQjZVAwJVVd
RkAzQDQsVXUwXi1JWjQ/QnN0YVcKJVtQMFc7R18vOS5uZC9RZU4naCQ8WF1GOGhvSURZZixxRDA8
cytSKixRaTxnVGo7SlFuL24jczdvK3VHI0NDSG9LS0hvITQ+Zj5MJEVIZ2dgcDVXP3QrZzg+b1dq
dGNJMjdAMSkKJUdsNGI5cnBHczBzL2RVI1EzOkZmRUBealJTVT4tI08pM290WHFlQDMjTS1sYjZR
cCY9TyYtcSFrYjshNDNfLkZ0KUdYNVY/L0EyNk4pJCxoal45KUBsPi5eLmxmOWo3YmVqQUgKJTQ6
WSItZmlqYVhYRnFhLVFFXW48K1ZjQV0+YVMxNl9ATVEzKWRmSTJmblNMPl8pdG5HOFkraXJPRVZW
YVwuKVFcSkxQVVwwUjZdbzFvaSdAT3RuQVpiTjwiJFI4dTJzV18nVXUKJThjTSxBQ2hLSlFIZnI6
VzZgQCk0QVY9SUVKRUljcmY1ODA6Zm1FXzk3QTclay81T21uS3BmW1EhamUqWnFKZkBqXFxgYUBF
VzprUSV1UjFvN0g/bTdTIk4rNjkvMm4yU1A6YG0KJUlEa1IoYjNsZ1k5LWo6aGA1Li82am8paGpc
a1NoSi9URkE/RjIrKT40LkNDVz02QHMsbCEwXnVwQT5mUGM8LDVvOk5JbClccyJaYzhnOVViSShu
NGo9cVs+dWVEPiMwQkhNTl4KJU4sLWZyKWZvK01IVllrOW1vWWc2OHApOiJlWjlgQCt0T08nKU04
czo1NE1zam1wIS1ZTE0xKmlhZy4hO0RAR2A/XUhsZDxbKDQlWyslUThtPF1yWDQ0cyxwPUxcNVc0
VjNHIVsKJSYrb3A6L0lOWWJVYy5EbCY2XG1eXXJ0XSI/IjsvW3EsWC01RksqSFIsSEk7NihBPWFe
TjlrSklCW0RTRlg3VmZQcEpKQHNTJD5yOWBvSmBtXXElZlJvQkFxOWwjUG0mIzJhJV4KJUQkTCUm
MU1IOHJISEBjU2lTPCZyM0NLXygmJm80bz1fclxqajlyUTFRXDhCIUZsMFs5V0FhYUduUydbUlY8
YjpIX21vIU9DSFFiTDJHTlVEXSEmNkdaRjdDNVMkRUYyVWdKKWQKJTBSN2NoP0draWxYNCZgM1wn
azVkXWNMQXNFWUVwdGYvP1NvSWd0VDFHWkNTJ2NAM3NRMylXZ1g/UGRrQFJzOkZgJGJANmJLIStD
XUw6KVxxYzNbXE1yU3VyKkxWKjpeaG9TQVEKJUYzQldLP0ImMD9dai5WUFNASjtXYV9AX1cvYl9I
LFYuUixPSHJlRWNLJkZbYGotJT9IU0QoN08qaiJxQy9OJCsuOTI3QzZqZyQsPylwV1QqZmUsNUlP
WWNnP1FXY2BQYkpMMG0KJVhDUDZlNHFULXJEP2lhUWteaDUoIjUvUSFAW1I/R1dfIy5GcUguLldW
SV4nYjU6LGIiKC5KY1dBKWkhLWpMbjJRcDZYRjBMV2FFPzpXcTdxTCoxNG5baDszNi8vXEBpcmlV
J1cKJXFzI1BsT20jPDhFN0lPIyVHLFhGIUFhPTolNyZWdWtBOHNAZTB0ajpMNUoka1VGa2VMOnQi
I3BfQSJPYEZkLSR0P2dMaEpHcWhnW0M5YiJUZGskM2NUUj02RE03JVMjMjlsVFgKJVhYSGlMa2Ri
O2M0YF9IWFUoV0wyYVUrXWZXRlw3bz5nNDFYLz9FYnBjQFhhSmxVMHNPPFZUcHJfWi8/Pm5WZGYu
OHVoIzpjUCg/MiVPZytFOFQ7LUUmL1xcIiwyZFxfNmtnOkkKJSJbdHNJS0VkLzZIRUQvZzkhbmNn
bVMpLFxRNj9yVGh0U3QwUUhjXG85JUxCallnM2IvRnRzWjVvInBbYFpCXl1VSm5NPzJiZEYxMCQ7
aTk2V0syJmZgKE4mYWwuMENhKDExQFEKJVImUy1yOVU3SlhmcSU9cClpYGwxSFRjTXNcMDgzWTtf
OkthOktBWHFdXjN0VlBCMWg9QVdqQGg/RlpjVU5EPFs/PyNqMD5wL1IxMmcrUzQ6MzlfPk9oN11z
QGs8NCEkWmkhLisKJTowPTtNUW1JRXEvPi1rXmcuWmsvWCo0a1NQTkhcO0khcSZvaTpDVk0lKVIp
cDJMWkI0LGZVSTpTR1ZNRGtUcTwsP3IwOTtDO0JZXCE8ajp1ISZrQCxAdGI7az5aQnQpQyk5cGAK
JS89XTZPbCFKSEAhJ3E5bi9jYF1vNWh1RUw3ZlpONCZCcU5YIi9wc087UHNNdVc8cDszXylYZEpz
K0A5QD1pSzhaMHUsYGJmcjcldS5JMWQ8aCNfYWBYNDExL285M2o0ZTU6YGkKJW9oSDRLQzk3YXBu
XSJHREwwIlEvaFQ3U2c2VDJwNCtAME1bLTRvbiNtVTEmJS9rMmQjMClbJjBcQzZWXj1XLTE2OWtN
U2tHRj0tMENVMz9sMTF1XEgyZXJLXTdWKD5GXmAlWisKJXEwPlI5Zl8nRWJhSCljZy9nUVJbSUBM
bDhGKF1OYktvQlo+aHNoamhXMWZiaE1kVjIwXVRCQGs0WyM7IV8hSEZXJlNXZ2JLLyVdM2QjM3A6
MicqMlE1JWU+dT1EXGUhVkchVDAKJU5ZMSRTVWU5PDpgIj02JzEkTkBDcEpUSTs3SEIoLGFQTixB
NVg6UWUhImo1MUxnRyNQUE9aLXNBOT0lYCdVTDQoWFs7VzA9dTJxcydZLiZaYCd1bDhDPG00VmlQ
SEsxcDJZPl8KJVdKdElkMlNdJmQpW1tWPC90Q0l0W0o1LUAuKnBSOShvKSRRTE4mYToqNTk0X2wr
VjNjcDFCPHA+PFpGUjcxKVxzTi9yaEA6Wmk2YEhbJmtFMUIqaGFwai5oNFtkPFdlaUVNI3UKJSdk
JlBnSDYmZFdpPU5TcTtOWURSRiFac2BFXkVuW18kKFVuYidVQmk/aiRTQC5OKFlGSldzLCFmSWsq
cE5FYUkvM051KEphR3BOMGNIJC8+WkdEQ0dJSk1wUFU3Wm1nUVlELDUKJSUnMk4tXzEsXixaM108
SlpSaTNaLzAnQ0VZai0oVjxMQitWUGI6PEY7NHRrTSc8YTY0N1VIY1k1UV1ZbyJeXWk8P0csYks1
VldDL2lyay4yKCNEPVZsM0YoOT0kSktwamsraHAKJVRgQGwqITMuIUEtUCtyJDBsZTBmYz1hdWwv
OTJkZW1SLDhicSc7QCRgcFlhPFsjXj8rLCRaPC42NiRbRm9vdWEzQVluc2JvNTlEWWAiSD9gKzBH
SW03UDdAQmEyS3EhYFJvU24KJUVkM01UUnVFdS8xRzNRWlkpaExeVzMzSEY+R09QNWA/R2tpRUtU
L0ZpNThETWBgW0tKai5sYk1sUztMdVVTVGskJENLUTEhaUVtOUNzQl8ucVBDLG9NNXI0XGwuOGRP
WGhyIm8KJU43IXVFQ2MwRXAiUFZSKz44cUNqI2UrKERnb2NGZi9pL3BqWD9eWWQvXlQ9M1BMaWV0
PFdqLUQxXzRSWSdnNlQqKTxmJClmcEdRTEVZKmJBWFE2XCRDRilcTTIuIm0vNT9wKGAKJXAhQG9C
WjFqbDNRYWcnOURHcSg0ZTQmRnRFO0QuPkVCOFdEYVM2ZTZWc3IyKj5uO3NlOjwqajtqUUxVKmwn
JGJRY0phJSVaJGlGWGpGXT5OVVFga15HZ15ucCsrQyM+TEomLSoKJS4sak5RVVMkXj5EbWQqX18q
K2koM2VpdSlidGQyWmNFVEdPLjtMaUBNcCRWQSQ/T2xdY1JwWHA8Tl5STSc8ZWIoRWo0cjpgJHNQ
NEBBT1hgbGYoU1BMOmFRMyNFSGw0PyIhTFwKJTg9W15qRjZjKSJMKyxoUVYsVFlKZSUyVW9wWlou
SkZtPD8nPFMnS2tkMEVNaE1KSnJiLUtjVlJbWXBmVSZASDc/WWlkRkEuWlFHMDMrZzkicl5XQy47
c11WZHFGQ2c/YCs6KV8KJTFkZDNHa29bZ2oyW0NccyEhSi1JZTBicm5PPEZAO1ZFS2NDLnFUJHFw
O05saSxvJUtNbiU7Q1U8b01eTF4wZDxPPl1UdUJQLDlvdD09RidQPV9aKDxbLlM/YlktTzFBcC1S
Vz0KJU8+MlBIIk0kc2FoTjU9bFZYNTpMREIoUmwuKi5tbl1FdVs2RE90LmJJKypxbzxfWU1hS183
Nk00M2EvLl1mIlF1QyRNNE00TkgyYjlfS0h1KkM6PzFwUT44cCRwTzouXT9MU2IKJWRibSpXR0Zj
NHIpPDFRJio8SHFCWmszbXBCYVM0OSIjLnUxQyxoYU1uJ28hOCMjKiUtJllcIjUodGFQakJEMSUj
OWhBOlpQTCUvRzdgakRaXmIiWkRaPj5QaS4+bDYpQDhcWFUKJWIlOlptOmhXKyQnOmorXWNHdWhm
SjRBQ2I1ZWJCUzt1IjsrO3BUYENNTTZCbWVJJlZ0LkRxNlo7VG0oaSkuKC4iJFNeJlQlJUNmQ0Vk
QC0lYl1idEJjKGlBYl8hSXAuZEJCZzQKJS5USzhJXEVsWGBSa25MIiReREVjJCVZLjs3bFMzOFBl
R0RRRkdFT0craWlmNEVpU3BcTShaJlZEZyIkZ2w1OUlfL2o/VnVnIz9dNDwmI1BmZHJCZ29ha29T
UGdHVEIiTjx1UjMKJVZVIU9lOTJWK1dTZ2Bgay0rQlskcUVbIi0hPXExRHJPTG9hWS1ldStIYS5H
dE8iMigpMCVvZS1hPlklT2VKNi5kIjMnW24tWS1gWTRwJ15WJ1tATUFbZDlgMDczLiE2WEAmJ2YK
JW9gc1oiNnVCKiMzOytoXl44ZU5bYUBMVkAzKEJFJjc/UzM1NEBSUDpQPChHU1AwO3JrSDpiQ2dw
S0RoIUg+cEdfUj5qMm9dK2A2IWVAQDVVbWpsQEpbRWdpLjc7ZFEsbmFkXDUKJSRYXzMsMD1MJTRg
XVdsTVlaQUVoXyhVISEzc0NtSmAqOydOWjBBQT0rb2p1bmA0IzVYUUNnNUYvdF4sWTFGVmxJZTU+
MF5vc0tMTT9wVydma0gwPmJxLmJuOT0vLyI1YlJia3AKJVoiW1wlKVhvIS9OWzZOO2spNjk6TTtK
WlUoOTVaSDg3ZWxgW3M0L2gwPSZsb0FPVltwSCgxcF9HJDIjUDNfRm07O3JGcWtSdWBAV2I4MzdU
Ri9MOUBFLDAsa1BfOmpjIkIyaC4KJShgPkxKKEg3Lm0uQ0xiJ0JwKmhPLWBKXDM/SF83cycsLzwx
U0lEKHNaRk1bW0A2Yy8jKCNSRmAlcD0hSUBYYW07XkZBWyExMj49dTpnMExSTlIrTnIpTiEtMEIy
LmxcMDIvanMKJTRFWSxmXEMrVm09SzhGKVxGMDNqXUxsKGVqWHVXNz5VKFI4LmtrYT5FQlVbblYk
bG1JMilYK3NpLkZIMlojYWZHSjVhVjg8SEdgKzJHOmk7Xy1cZCw0RFhlZTs6RVluRGRkSHUKJT1S
JTNBbzk3Myk+NktsbGVoRnJwaW1qUCVYbGJLaGFdY1tpRlQvQnFaX0NFJS5uXUlqUl5SPmhiKTlT
RSdqZ2YhYVswckQzRihOMy1BVVdxKlVDckwkMHEoay47NzxBRk09OFcKJU02XkMzK1puMlBRRGhY
KzgrSCVIQ2hHKzVoaTpwO1ZSPkRhXj1dI15BbEgtUzYuZGVQXXA7cGBpUHUqQiRxNkw2MVJYSThR
Xl9SUTtCM05LVk1qUmdYUjJvY2lTVE5HKkBuPFkKJV9OIzg8bzE3TD9oPz5NVDYkWFlvQ3B0ZUgw
Oio2UGJeLWlYQUsyb0gwLS9sc1hBV2hfRElIMS5TbzBaMlowMiQvK1hWU288LyVULFMnPUc+aidc
a09RIU4qZWNARyJaXFxrLz8KJSx0XzdCRSIuaEM1cit0I183YEpSKWNPUmwqUVpzVUwyU0AoYTAr
cDBiOz5iWkk0Zi0jJWIkYVhdUFdzNkIjVWglWCdsW249cjciYlkjLTEuW2tANlI/LFJacWBTaTV0
ZkxdTGAKJT1xZGZTLFA+OFgvLm8hZEVlO29ba1FsT1BVLEZMaGA2cCxEPlNrSS5APV9lPUc2JXUu
MUkuSnNdSi1IMV5wWTspZ1M7UDIiR01ZZyxBTnM7Ny1PdVBPcVNRQnJERms6bVxBQWIKJVRjWlRf
cD5nXy0qRzphMEY/NChVR29US2BiIVVVXm1GNSdoUEY2ZSspUEM9WzM+OmFmQl48M3VMTnRbb2Fa
LSMkUGoxVWIjKkNoV3AvXi1WS2cxRCRzJGpacFI0IldgRkByQzEKJS9OaUZNNGlGa2IoO0BgdF4v
XUVvbF9pZ0ZHLVB1PWcicD1WXjFGMyhMaSkvMmRMVjlZX2Nkb2YnXzZJaFBxSDldMGduYEA+aVtb
IT5wa1pKMEhAbDJFNGMmTD1HOFVITCJNU1cKJSRVc1ghT2coSFlLczBLYEhCOkJzayVdNGpJUWhR
QTAiN29EXGxYcjBjIiZvSUJLKikyXzxmYC8iKSU9cjkvXXRiXzRuJUhqVURdXStIOS89PmpqMCpo
K0xFMlpJakxfZSImSWQKJVdOVzo1J3QvVzlsNHJDWTowZSZbLFRxaERXL3B1ayZtVTFWYTpFRj5P
PEUqaFBzJmNdMVRMNXI2dGVvTkVNP1NxJk5UQldhbmFBZEhJRkY4Qm5IQkxwZFcsTGxqQTRWUUsv
LEQKJUY0YFNRPmZxKmhgQXNMQ1g7cSFxLl5QcCpsIlAlVk1kJyNFKixRIkhWPW0rPTxDMFoubEZU
NGZVOVVPQyFJdVNtLTVmXG8nLCZzOC90c1NKN0AqT1doZyFiYktEcFdZaispQmMKJSRYUUxeTGdb
YEtAXTInbF1AKm8uJUQxI1wnK01TI1RFPlk3Y18wazxdJD9zQUhJXDw4XCEvM0lGNil1Uy9sUm41
JSZYVVVsQWNpcDhxSTs6I1NJRC9KMTsrI0dELFtCMydaVkoKJT0yTEpETWJQbz8/LiIwRkJtZWco
WHUmTVxtUjlNI1lsYkdcU204YEE3WEk7KExJcl5CLEsyWChEYkFKaipERmJub2puVVtEOlpQIXI+
QGJMKikkXGhCNTdhNUMpUlJkbW9CUFgKJTxtV2clOmprITozMSQlQCwtQTszJHUhP0tSTF1aOTJD
RkZtWmpYVWgrWU83TSxVXVIiRWdfWTNQOSZkXigrRjpGRjJqSi9tbkJwJkdwZDNbRURfPClbP0lS
T1Zuc3RmOC1RdC4KJSNgJmVVIiw9anNHIT1BSjdrM3NuRSJHQTQwUXFPaC40aFlMKFgmLG9pNFVC
Qmo0biUkaXNxdTsnN1gzKiZgXmYiUzBnZXVII190IWVudCFbOyNFJUNJYDNuY2ZARWc/Y0JhTTcK
JUwkNSRQLG5sRV1UIVtmZ1tTbSRDW005TDtENihXbSJsV24lZDowYUQzYEBAVyZOITluJEInVztS
IypOYyclJk5OQ21RcEpSLGRUKkxcMUdaSVkpbioxKE5EOWssM1lqYTdqYC4KJXBTSUoubE1faGRe
cGJwOV9HTEBNbE5aNXVVPDE4RCohSENfNS5rLC1jaycpZWgvIiVoa2k5MVlgNiRUNEguYWc/Q3Ql
MS5SSCxwXyk9J3Mnb21ENFMvOlIpN0g1NzwuQi90SmMKJUNmI2Q6TDJgKHE0PFRBdERkV208MzVS
LC1FI3FbKVRGJjQ5Ui4rNSEhTlhSZyE9KWc8MyxmYUNpR2VsSlFAYXFzOCN1aGEvLUAxNkpWYnNl
YDo/YDYtN0MkRjJCMmRvODpDQGoKJVljTCVGclpEYmw7cmtgTTFJZCFTIjM0dGJRVzlPI1gxWzpb
ZkdiLUQsQFBMOW0jP1wzT2ckYldwLXVwVThEKWdVS0ksP2c7WTVKSUFSPkA/ZS48c3FfbEM5OEBC
JD8vKDtnOy8KJWUrU2MoRDtVMC5BM00qbl9BUDJGXEAqIi9xLSZaPDlNRVc1ak49aTdZTSYidGp0
XSdeRFZoSXRuQ1NvQURpNnRfRUhQSUhPaGAkZWpUKzU0Y2xJTjY3KU1NZTg6Z0A+KzohUDcKJVI0
LEAsLjlJUklfYSw/SUxWZTQpJFg0ZzhvO0VSLExYbzJiYmopai1JLGBFOSpUV1BhZSQzXDlFL1ow
Uy9YTzwuTVVBa3U9QmMwLy9JU29II1xUJmw4TkhCPScxSXJVZF5VIj0KJUZuMGRbJy0yajRZQywo
MldpS089M2hPQXJFUHQwRkYqPzI5QUNqa2Q+bUhHWFdxYGBcYEhRRy9pYC82TSJtRUBpWlRTPCUi
dUopbnA8JExAIzRMKikjJzpaNF42W2goPmdmJEAKJSlwIz1OMzAuQU1QZV5XMHFpcURQY3NJZnE9
WmljbCtuT11OTFlxRV5pakM6YW9iSy8xQTYoWVhfSzxAbjtjO0NiUVE+UlA9dV4pYEU3KTZpZVp1
XkZRUiFtLT46N0NYNUE8WXEKJTYiNydDLTEzVmo5LlhEK0hPW2InbFhKZ05ebGskTiZAZDZWNVhY
Z2pRLjxORWRfR3MxVUVqW2o4TUE/czBQKVNAZT9tOjNWL2pkWE9xQ2o6LS0tT1FoOVpOWjguXUc5
SVkvaCgKJWJZX0QlT19DNmoxZmdEVCQ4YVsocEtDVFlPXnFdLkEwTms6YWw9Py0nRSMuOk5yIjk0
MlohVGsxJ2A3MmQtMz1cUzAiL2BqcTopMzYnYmwhQC85PVRvLFIpVWUidFpRVz8tYloKJV1wISQ6
VSJSaj89NDVsQz80OydGNldRJlM4PTljMl1IWlE0I3Rnb2BjUkBKU0tRT2JVSUM9Ym9XRixmUFo2
YS1xU2REcy5nLDBCJiU6YDxwS15Hbl9oQCxsIjBXUVlZOWIjMUMKJWQ1WCQiMDxpbmxPUnMwIzxT
WV9NLDtjSSo4RkAlV0A6MSRDZ0suNF1hZzVNNC11Ly0wVGpEdTlOR3ByWyhfcC1rQjxZNjtfNTg6
V1EoRkRuIkNrc19cTShoZixHRFhwMHRAa0MKJUdVPGlwTj9kRGstVypzTy4kWidvX0RvTFBsQEI8
UGNsb2ZLVDhzK0hJUVxrWkg/JWNHIkJBXjk+cmBhT002aCwzJXVtRzUqIlBfKE8yMSd1OW4hajgs
J10nKiwjVTUsN3RrIyIKJUUoUElAZXM0Py5zLG91NjZUZWAxPC4uRSEtOkFOQ0kuNDBRJjxpZVNJ
aDp1L2RUUFlKcWQxVTlBZUBUYytOXlxdTWovI1YsIkwjPE8qSFdVMGNXU0Y9PkM+RzRRbk5aLCFi
OlwKJWEyZzo/LCFHZ10ka1o9Im9iTCxvVywyc05AUDJ0O0FzTk8pMk1TZSVXQl8vbGZtOExuNHNI
LEFMbDRBYFhGRWRkNDJndU1lRF89PkRCXD9gaWwqODsjcTxQYDhGLXJKckJWJy8KJSZGMFppREJM
UyUiJ1BKTi9EV09sbFxCV1srUC1uXkk2UTBALEJFS1szLW0sXTQ8aXF1RzQzTDRGZltURU4ydGVT
KS4ybSddPmJfOyhvdUwxUVlRKCUqZ2dpLUQpSXIkRlRKUzIKJUdXJ2xfN15pQl5OP05WJydrIyFc
PytyK0RkQT5mWiIlUEl1ay5OcFhhV0ZXS2hbKlo/L0cvX0RZazVzWWI9XitPYzdWUjwxRS0wWWx1
Jm88TyZZaithNyQxcm1GKUtmPGFqPCgKJXFvWFdqYk1uU2BnQTQxaVFcMV9qOXRhZkM2cG5fVEdy
YSRaQ209S2srXSJZTS5saXFyKktkQVtmQCk9QFBTSDJpakkyT04/LkgiIT1tKGVuZFo8V2tRPjVn
MjdyTVcwRDJaL0cKJW5bQytMW2AuLjRwdEhTYXFxXjg0ZDRALikuL0YkaGtBWiFdPitdYSNJWSJY
RlJtYUQvQGVlRF5XL0Y7Kmx1WyE6bGpCbXRBZ1liXzg/Q2NOVlUvIUM5JnQ5Sio1SGVrPSduLyEK
JWgmOVxgSkk6IWw1ci5oPWVSOi1zMDI5Z0UhMCVuUE88dTFfJmt0cmpRM1FPUWdtcEFoMmxSLDss
J1VYNidEKi5IQlpda2FHcWddMCcwUihubEgoQiE5KFFIZTwpN0pxS2ZpLyIKJWpiWWBUM1FaV2k9
WURYbDcyO1onZCUkR2RCWjtMM2AxQEtaMGFjWk9UVUhKIjE/UlBXKWNUUE5TTTprW1okPTJsVSIz
XGNvaVhZK0ZLMSlOVGQ/ODxTM14rcUxwbDpiZjk9MFsKJVpBQ1c3OmZQblNibk5eakJnaUVYXWJH
UyhmOiQkMWBgK2FRJV0ibEouI2YiVkFkPGYuXTxKMiFlc29uWVluSlx1RnNkU2Q3clxYZHFSISJa
VUBiOlgqXnFFUGpsRTBXRUEiIjMKJStQZFQlLElwOCxPUG4+LCdyYypjbyxhMytoQ05ZQWFYQkxV
WTA+Xk9dKl1jYHAoM3Q+LjQlOC44T0pZSj5SamlGJWEpIlpudXNYMlExSydNL01gKFpVTypGYF0h
XDNDZi5gbiYKJSRiQ1pvJz11R0xqXEFMMUZUUlJmPk0uXjMiLk02US0uWHJZWGcpZURRYU1iNy9F
Ok9jOlRXVTVoY0QnNChWQlVKNDhsPllyJEdhYm1ZS01YKEhlZS5EclUkSU43YWdqTEQ7Z0sKJWkv
IydtWGVfWGdMLkUwLE1cZVtoOnIsSUUkQlpYSSUmJmQ7WiJdRShKZ0ZQSFlDWF8zTksoMEdjdS5I
Vjo0WW05UFcjQVRnJVZkbFBPZkJGQ11QTSVfX1lrNTVZTCQjPGwxYFsKJT1WVDJzb2puUFxTUnE7
K2osaEVPLSdaXmRnITJvWlwwPnNkPDBRKGJKUHQ7T2BIVGFlKTlsYk80bGVgWUhLUmw5IV9tckEo
TC8+JSldSVhpXDBxZChjPi0tQG1fWVZUaU4scGoKJScxLnJqQDY9K1FUVj9DX2pPP040MEZaMGdT
PiVTZHBrK2YhOTZcR2xANlNxZU1LRlVuISYkRXNJcTg5Mj5rYmduTUg0dF5QTU41O0hKSUZbRidF
LkZSTTFJPj0kKENoV18rPmsKJU9EPCVNLWVcRUpFSF0tWjpNPSciKU5dKCkxLGVGPTZMRSVIJEdi
RGNiWCpZV1UyN1FkTCRWVCwpVGMmNGs+ckxJSjteQ088WDBdK19QOWpnJTM9MDlhX25jXUxAXFE9
US46OU0KJUxoMEYiamkxTlFIZ2QxXU1CZkcnK0tHYCZjWUZmbCMiZEZoXFhedF9LSllAQVo6Z1Ar
QUs0KkJTJjxsajBGbGNvPE9BO2tROj9JQGBuWCIpV10+MDBCbU9rTS5uT1pjcEY1TjQKJSFuRCtk
V09sVydGXVN1c24kWiFgXG9GZzpDM1FINjtRN1BYY1EzPFVdcC1lNTlWK201VFVzSGlWOyk1JT8i
LTkzKHFNYktJdEgoZERvWCQ9PitCZjNibVBoY1FbZnNQJHBWVjwKJUZEcVlQZ0pfSTteSmY4XDIx
O2xXbmpTZ0dmLkEqRkdmcj83OGxAckJyT0ZRLTBlOjAlPlplaGMzMVZWSTouN0UwNTRqUytUJlFL
VCJWcFZYNz1BJj5cVy1OaDNZIVc3ZCxvTiEKJSUyTnFZNVsyMFwtMmJfbjg7a1lyUFVkZF9LY209
MWRFYW9SZ1BNWVokTF1oZDUxbVk6bnFLLUlXL08lPjw/cXIxOCRFX1w3MSJ0b0FGVnQ2JTZBIitX
JFNgJmAhYjp0XzdibUkKJUFFc0NMYzg4KT1nRlNrZmReJz5uN2cnbUE7bCJ0bzIlKCctNilbYk8/
cS0pXGJgSmVtUTwqJHNuRlhcYFtpbEg1TChLZzRQViZTPS9TcF5iMVxEPzlJOC5jRSMsaTRXUGEq
SmQKJXE+bzU5IkszdXE2YFZUTFg7UiVfUCEkIXEzXGI2cFFfTDNyLFVdV1UzRUw/VjVlai4qSyJR
TFBrczYyOl4oZlxsNWM7SS89T1RsSUk3TV9yYyNhWkxGOlxrXGheJWMiUEdgPTAKJSxec2N1Tj5H
bEsvK04zYTkoMWBaXjNtIXJCOHVpLzdvN0htXkZmbHJWPmotSCk/NGssRkUvc1gxPGxgLSRxMnM1
SSguSjsrMU0uZk4uNmQrKipLSyUva1U9U2wuOmVLRGhccUoKJUdjV2xzLTJWWUw6KzUwSG1lRS1d
VG89b1JpJiY9MzBVb3Q9TlxXXT9tWWxgSzIyKWt1NiglM2Q9Qk0oMTJnQSwhNHE9Vyphcy09UlNX
Q19LQldPS0RaOGBuQCxHOCI/QWhJXFAKJWJgXUllWUNcRFEidW4kaD1vSTFfQ2ZNa0pwWU10LjhI
SjkqQCNHYGdsc2Z1ZUJsSSZ1ayZQMFQmI2RGLC5gPlNUMSZmLSEwa1dAUSZTI2BqSVswY1k9OjE+
bDxWaGFpNllkbGoKJSwsISw4TUM+PzIiYlxIZWVxVnNKQU9ta082LmUiVyZdXCpaS2dELjwyJT9t
XmZZWmEnNV4vQjdMc3EmazE+NzlwLEhTVkFlJ1IqU19HckUhXFJ0L21gX0leSl5OMmZNLm8zU1UK
JTQzdEA4Rlw+YDppQm1BWnI5RXIwUWE0JT06RiI/aGA8VylPZzM1YE5CUWdiRDJNT0hKUiVjZShj
dHVUViopST45TUM+Ijk3MWh0OUpubUtyP0xXR1dZTmFUIVUvP000SE5LZEwKJWpmJnNXcy9cVXI7
bUIlKUNLU0tgcVBmNUhnVHBwKE5VblhQZmRnTzpDalsrcWRZR10+VjorQ1s9cDopSz9uYlAuMTVz
SE9EMj9mXS1lK3EvNVQ6b0kiXCdcRyo/bmVxRzQ5Kj4KJUw6Sm1pJmBLWjRbQWNQNFJuNnNEZFZL
O05KWE4jV0ZELW9cX0ZdQERfNj4nbSlIKFpRRykqP1A3LDszImQ3UHErazkoMFVaPFcqVkh1Pkwh
QmMyVl4iP0tGTWxLIydLLFMrNDQKJWdeKVk5I2BDWSNgYEVtcC51SWMjXWxrVEgrbk83bV9jQEcz
LUpEZWc6WStNOm5kdS1mbXNyZmMxbkJJXSNYTSJYVmZUNlAwJ0ViUGdcPCRkXTMlNVZAZEhaOEtE
QjtyIksqOj0KJU1rVWVFajBwNl9GSEtVVl1HYCdIQCRsYXVERVFQNCUoZDdBWE1AZ2ItUF5rUm1o
aXNyWnJiWUtLdWQnYVJXXjJEVVlUXz5YWEkzSkNZR21zUiIoUSE5cFEmQy8jP04jUj9zK04KJXJg
Zz1BYi4oPypab2F1TltTNUFbPnEnaSJTM1tfLVFgTVpdZUs6XTJNQD5dNUM4RGFnU2xOLjMqOWZA
WWUtPTJnZDpYMkAyQF5ySGU2LUc0VWh1JkJBQEdDXiJwUUdQY11XPiIKJVpUP0NtSWgwbT1KYTYy
JVNuOFNSQkpWPUg1RyxxQSZXQlktcEpIXGZASF5udFZdW1ojTU91I181Tk1fJV1JMXFdXyVKayxK
cDI6OF1sT1JuIU8za0dVbFhoJWkzN2EqJVBgSjgKJWhkIUZoOWk/YjVHVTk3NyswUVg5bDlSWi1Q
Iko4NDJLMWNkYnNqaCdnOUg3MWldQShkUypWaC46cjxvXTMwbGxVPztMbEJrcFFUMTI8QkZWJjRB
Z2hrMWFXSnIxVGolVHEiN0YKJWhBPkc9Wjo7bnIiK2I7T1YrOTFlMiZuaGI9QTAvRDFEPFQ8IVoz
TSJaUlJwaltXIzo+RShWNkNbM2BqQGpKP1RHXWtxL1ZbOkFUV0pQdTI1QF1rLXAyM043RVMtMFJU
bnBGQGYKJT5hKm4yS3JkQHVdYUAtVjJeV1EuamRDL2U7SlRQTlM7SS02TjVmJURAX21bVVxGJ1Ih
aDRQJFAyLUI7a0YqYT1ja1NoKzc2cmZGUGN0cThlWGoxcTI5Yl1JITkkV1gqWDRBODgKJWNTXj1y
SUpwU0Zmcjh1JCxZa2ZcLz5GQiowaURybSRaMysxTzo3OzcxTT4sKyxbLVlzRXM/bzohKTY0Qm0j
NSJbTiwiVGdRITBtOTMxJTR0XlhSUnMiczs9UEFaRWRHQGlTM3EKJVghbyZOY0QxKTRoV0lrTzo9
VSxoRzQ3P0o8R0BJN29nMllDPCZXMypoS3VPQU8mVkc3ZmRpP0tJLjlqMFFjV0ZEO2FLXG0sa3Ft
WmIxaF9ANiR0Ik4lMXAwIUNXXTt0UCRZVUIKJTdHSGJWLTdANzRERTVOOmcmJl9JLVQ0MjBDTWdf
IkNJXTtTXydVRUIycUtXLWIxSVNFPSRoUDReaEc5LzJsPUZsMjhYMmJXNFstN01nWDtyRktFWGdg
TVJYWERlb0FpbUQjZSUKJVwkYzdFcEFbNFZyUkotT1ZSXTAhQ0QkSy9dY24rTDxWUCljWFFNRywu
WC9fUGpJLkRHZkpHIzBacCZtJiY7IU08Lj1FZydJMF4qXnM2P0UsXEFjQUFhdGpSSTMzPkNyIU5l
TV0KJUBuL25kZ2hJdGBUYSlTZERSRCZEW2pmY21lc1dCcElbVipKUCtiR1MxZWk/VThab0NtPSZt
TUZrWjJcdSNNI2goPStiNiZRKWcqbTMzZXEmPmVvZWYlNSg3ZD1ITmxXOk02VysKJTxdM2duNiwr
YzVmNktHWCR1TW85XkVOQXIkIz5sTSZwJF0hWiw/dGFKRjBvTmdlTDRoOSo5MUNHSyhzZU1gMCM7
VjYiNy5wR2gkI0dPc0UhZ1QvdCcxZ1gtPkNXM0UqRmdQWyQKJUZfTTRiTWBdXkZQcW9IWGtEdUZI
MSo3cDRRT0UhMU00J0tgX286I2EoSVBedC8rN0cqXjckN0g4cEhVOl4vQls8aFlcKm1wXyM2KVIt
c05NbVFCVVcvVSZpRlxiIllpT2M6S2wKJSY0QV87OCdEJmZZbi0xOnA0OEgwaHRZLTJsNmlcXCFY
XHNPX1IqT1VoNFBjLD5KXGBIPiRaaD9gYjcnPGkjWElmNWxtRmJuXWpEKSljNWwmJ2A8SVNTPiNB
VG8yImBQQHBPVzcKJSQtOV5bblZdTF5jcVc2KF84UDxtYlhPMjNuPFRDJydsTSwyO2VySm1YOjVo
ZlwxNVxvL0A7P01fVTlGYWdmQV1wLzxvJCFvNjVsMDNhPk9CTyc/MG42T1FdS0lTWEdnMUxsZzkK
JVVGaiFnX1RRMFhhUik6U2ZjMW0uZEQqKEgsVHFmLk5zNTBgODE+TT41PEJQQmhhRnVPTFNGXnMj
KzE4PEdBQ1hkUU5HRG4rNl0+TXEoSks/Um5UQFAxPFApYy1cT2gxOFgrXTwKJTdKVWpJYigiIm1f
KGs7UExxYzdOb2A/Zzk7MmYmcVg/SVg1NT5JQjRbMWtcPXBlQTdjTllAY1M5dDNRTic6OEpnXDNs
OGlRaFg6YjxDa3BSaipfciNsJm4+SzZXUGtbMkpJdSEKJVFYbVkzTElOc2lWL0FxXUFhbClSIj1L
XUMtV2deNyZ0X0YpQTItYTtRVllqViwjNGw0MClNSF4qYS9mcC1bNFdaYTtHP3IqZSxXbWJTcCtH
cjRpRHFGNGloaSdiME5JXFchTywKJTxvRGxxQlc7ZFJmbSpWL01QQCYkQS9zWj5RJjEsRUNuVFdC
RDNeJDBoRXNRP0dJQj5abSgmP1wrYjNpQmYzSyN0WSg+bVhaWmNxbCdsJl09YDJDNiRMOFdXZV5h
K1QyIz0sQCUKJVFRQEAhKXAxV1ZLZkYvaktxV0ZEMms6bDI2aElvSlBVQGdLP2xWK008T2snbWhK
XVNFMVRuZVwyTVJ0WUNedXNcSyM+M0JaXlcwcC4mNGVwKz49Vz4tUU5CKFRNMllUVFUxQzgKJU9j
YyJCQTNsUmBASjBla081UCo4OC1CPmtCPlsiU2dvWl5oQCw6akNKbWd1KFMhIThAMU9pOEpPNl4r
MyRnJnNxWClQcFQtcWdmUlZbVj9LTiM0QVc2TjMhIW51R15hTEgoWkEKJUFnUCY7ZyhnPm9UbCwo
S0RlRydBQz5Ebl1OYlhjLScjcWBxPSRtW1NccWlXRWlgLVlHU2QrXyIpb1NlalonNUNDLy8+Nzw6
X2EqX1MpUGNeOWM/ZChCQW5fK2BkLzlhLWMoQkkKJWdKRUo6QUo4SS0+WDBKJ1FCWEdJKyZfYSkl
RTcnQENQbShbJHA3TClKPURgLlxQcWlEW0M8J3BYPTdBRkEqNUVtNzVWRWdNQHNfUl8xKEJcLVxj
T1JJYU8iU0teQyUiKzUoc20KJWFIKWBBcWktQy9dQWsldCM6XTBNXGw9JSlIaiQ3SC9ePHVpQWly
PV5mX2g2cG9mVyZxNV9AIUFXPltrQE9eT0cibTQ/NlRDSlFIakIzJ2VoJW5AIVFcN1I1bWBYa0hT
TmlpXUQKJTlUJ1lVM1dXMDRtITNoLWpBblVZJU8wc1ZrTW1KdClNaEhbUi9hQWxQLG9YbFFcLDlT
QTBwR0NrZSg2Q2BbOlxQQ3I8KlFLcGpoN1RiNm5xWD5AZz8hdURfJzlDUHJGXjlmN08KJTcrPEt0
VDJBbVpFJExvJThbY1IiME1iUlonNTFBaz5OL0EjJmdDP2lJajFnLGNLJjNeKWlcTi1PQShMX1Yi
PiJXL2tXNWdEV2FkZVNwVnIlR1UyVDlCOzZsa0YnK11zV1A9JEcKJTdxNiM9YjUjTytHVW5gOT1i
NGsmQE4xMmtcVlhrRyo0aldrakdZcE5YYFkwJW5URjQzQ0BYMWplLWFYIkhYLlgra01zYTs5alhj
UGxDRV5rZkxEQkFdYFFAaEBucDRHOF1lb2gKJTopSzZBQSpTbCQobDxPaicnIUJQTClvZHM+Ti4u
a19FSkEwWStsMjdAc0FjcW8/TmhyLVNaSDlJWjBsLlM1c2xtPiUhVSYnKkMwODRFXE5PNlJScmRR
S19POyZhXGMyOCNLLW8KJUN0M2xeaWJUanQtKl5TWFgwbV9fVkRGJFJQUTRoIStWcDxbbFNPYEch
M1JbNCcsTWdhOUxfTDQvX007TG9GWTw5TWk/PUw4TmlFM1htXWIlbFdXQ21iKk8hYjs5JTg+RTov
aj4KJWIhTUBhNVFRTSokLEBBXSdVN3RNVDEndGRxJlUjLC4qRWJaNjosT2ZHSDdeR2xMbCxFOTc+
QGFmMER0SW85PT1hJmcua2kuJDpUMHJmMGtxZUAqdEtQLGQhNmtJTlwnVHFjRDYKJStrT3UoITNT
MGdOJ19BczUzNk1oJERHTlFqLzlnXzJkUl8/Mks5ZkQwNy9nQztVdWVkTTZXUlhxPVJlSWxXLzgo
YEIlTUQ1JDFQaFlLLWBXQTRmVEdIYDQiX2hAPWsyKk9AVnAKJUNEXTFRVGRxUzA2QyQ9RmVfXzo6
Rz5kQzYvVnRRKkFrRWduRCg+KkpbYiRecFMpUiprW1NWLTskRUhca2Q8ZWBZM3FHN2FPQzteMkda
Q1dqXiwyXHVwXHBBQU1yI29sWUoxXFYKJV09Pl5WV1NTNDJrIy5YayZJSFhKZGFmZztjQTdhSUtH
WWBXQU4jJFU/Uz1BJUViIVtobW1GcSEnUjJ1RDBTVj9PRkdyKyhqNDojO1BpQlg/XVQ+dFRkWCtU
PzRINkpaPHNbRmIKJSl0LHRlZko5NkZFOG5LOWVCQDM8Ul9RKmRmTG1fdCouc1RDTW8kVzppbWdU
ZCI9YzJNQU0vUz4yNXRFZjdgJ1dAJEBgYklGIWVlSFM3dCUiKjNCZzZpMVFANS1ccU1UJSwjVSMK
JW9IVkM4Ik10YGEmNDhRYlI3WispKUdrZFIjNk1ic2YwRFk8SCFIbzwpK2xqXSpkOGJTKEo2NWYu
JGc2ImJgIyxRJENHOWcrcCY2U2w/YnNxIT5XT0IiL10lK0UnQ1tGXm4/U1gKJUAlQyRUUlY5UG5T
bGgkJkAmYT8uJTlSRWtqXDJmLTZKPjdKPGY5Uk07Z3NnOlpVJk9kTDZ1Z0tWX3NWcU1fZylETHU6
XCRYLTBjQiVxS0QsOzVXaGRsc0UranBZKj9EWWRYdHMKJS5mO0ItbjhIS2NeW01NV2t1IyRMLEdB
bEU9Ky5iXnBnOmJQaDEiTl5OW2QpcD5adFBnbmI/byZEYmklQF9vOGEqRUktY0FQQiYsbmNLb1gk
V0RwV3VXJDJgZWVuLk1mKFsiLkUKJUQ1ZTtJTihPXkBBblQ7RHErWTQ5Jj1YLFBaYSJOR2ZLTiZG
cyRqa1NxQSchcCNmTi8uaDQ7Lkk6WWIxPkVobzg0Um5xKj1GKGluJkRjUEdZRFJpOV4hOVVgT0Vd
WmBwZEhOaVoKJUZOcU1PY29bbC1ZYFclKltBU2RaNz44NFBuKVw5RUp1Rl04UEYxJVJjb3I6VkhV
bUw0QzJkZiY+KzwyalhtWWUobFFYZ1RrWTpOTmxeKWhsJyhjImVjMHNyZyxycVwzX19yQGgKJWUi
Q10nJjYnSV42Yi9yKTJ1RlUkcWxxW0w7KDBdbl1ZdGNvRUslSzEmc1k7dWlyTj5IK1hAcExPcmpE
LyxDVzNGO1IwX2VMLmBzYjY3c2dvOC1xXlksX1NRRmo4MWkqajw4ZC0KJT1DN1NfMFxFMEopYnRU
aDhWNldULjJLdTotPFdILiFLNTdoPyVjS08hbV5IT05xJUQ6Ni5jSlgzVyJHaCM0L282OkpnYG0q
WEdVX11nME9BPC5NLHVNLCxRKGpFZmhsZ190VCkKJS8rV0oqYFM3XkQ6WDlaTkhSOVApRlJbNixP
YjBnOzJqQEpLU04wPzFQQVJRUkw4JDE6JjBxYzVydVYlMF1caT8lXTslJVtCMEBAV2MxUDJIWGIj
Tk4iQDoiTDlDPWYjcFJVcEUKJUlANV5aKDxFaF1OSmdAP2dfRjBkOXNMRG89UCZyJzw4dC5xZ2Fn
LUxGYSRaJkhhVTtlTCc8WlhYMV1SK2BuRyFmXEUwdUg6TkYhZlgkMms5ZEIqPktsSFAxbkFsTEZh
RWdiQXIKJUNxRXRLNWZmRFZKQVo4JFUhMmU0UnU3SWNhbG44ZEs5OVJnbmNoI0hUKEwoKTRsLTon
NWdBaT1fLWhFVyJOL2FpPy8qNUhlTmRFXFNAZGNiUGYkXmlfcVMoNj43LDpBaUdDI1MKJUEiVChi
O2hTO0ZxXyFPIVFaLExrRGRVRWJvZjNlZDJ1L148ZDNZQUJrUE5hVGBMViM9VHVVJ2lDblRuQkA6
ay06MGQ+Y2JMcSFWdFAvTi9lIiU4RnUrcXU8Pk9ZO3FSJzdVJTwKJU8pU2tOQTAhXDtxOyNOU1Q5
PUQvOEg5UzVjX2YqOWFkaGYlbWBaUmNDT0orLkd0a1UhRTsjQXVXNkhrOToqZychLkouQTJbJGBl
cCIzaihYJDIhMkhiX11yJk03cSdxSEAoSEUKJWFha1s/R1hOXFNSTT4nQUtmdHNYLS1rZTUkLmFP
NzVWWTAuUGFuI2Y2VnVYZmxEYmApLnAvY0pxTGpjRVFgTFlZOihGbTxnYVVudGk2SiViQ2AoLmVw
Vz45UFtmbF9HLjRfRy4KJWE2PCdtVkI7N3NLUipSMTA7PlU6PGwsakFbaWdEQVJIMEU0ZUJgLChi
QmdHKUwkUkUia2dkMzQ9QTpzJ0BQOVAnND4pJjxWPTZQKzhYPVthRDdxbm0sRzN0U2s/KmhoWF5u
VDQKJT5yXzw0Jy8pQjs+YS8zWTpWQV87O0s7SnQjbXBGJ19gQms0VjliMUBGSFknViYhKWkiLE1a
MEg2TDQyIl5OakFnUiNXXFBTNiZJKyZqSVwqalxVYmpFU1lfLj9GdD9IOWE3WFkKJWpRVEJBSHNQ
Z28qQiRyXm5hUltGInErYm5oRDVoYUpaPFw6IyZDTjYvMjM/V29OOXNBUnNjdD5BKm4oaj1yIVBc
JCYpdDljSWZ1JVo9UFNVW3A9IiRsXG8raE0xXGtWRlEsXmcKJWtPaCF1WkBOXDBfQl1VTjNQMnFS
RiomLFhwLltOMD4vLCI+P3VkU0osLEVSWzJHYihvQEgvSl5BJGJUcU49Lz0jQyJPbmI6OjdIVyhf
QEZhW0JndTcwZkRBaVtPXjJhNFQ3RDUKJUdXXyQkUy9hbj0kcC9GRTwmdTY3LmxxSF5iJCFsNlZY
PTA5ZkczP2teLkNoQyVvNS5bWUJVUGZLbjJJbE0rRSZIO0VNViVpJmJra1wjMCZca25EaDtITlQ2
Pk91JT1qNGEsNGEKJW4xYi1wOC1PKURaLFJfVUkqYlA1XGdgdW45WlFzSjxAYW4sLkZMX0syTiFk
SSIoYV01MWsybS1tLCs8NlooXmc7UWlWMyRDdS9pI1k5YS8sTDJuQVZePz9SYWJQbEZtLytpTHQK
JWg2QzI9J3BuOUpCU1gqVkYnW1okLSlbKlFsayglUFAoZWZlRSxJXEM5PWg/I1NEPlVNTkZiJCRl
O3MnXlNWOFw7T1Y3RE0lNEJgYiM1dFo9RGNxLUM0NSk2YTxDRUIlQyhxJnUKJSdybFZXVVYyaTQo
QWJIaWpScEMnRVNPQGcqMSpwYT5QVEVJWW5Ra2ViOHFrdCVTOUhZb2M0KlxgN1pgSjRBJy5GIjxY
aTtmSCwkXVA/OjJtcT4mUz4vb2csNm5bbF5dLj42VjwKJU1wPlYlcWQuKVAkQT46SjEqVnBSXUJW
Ryk9LUNgJV0mazlfOWlJQ1NMVFdERGwzbScxUkQrIWdsRG9dbSpkQjAkU18jREUzOHVuLjZINk0h
T0FmKElSSGFuTzQtU3FsZXRSJV4KJTY1OmEoJkpfPE1BMC9zLzsuRlJVUz8laVRWUmcjR200aEsz
Rj1BTWdsXT4xMWpRKURtVCMjKFdTPSwuNytkW2dlVyJyIm48NzBNXlJcTidwRyZQI2crYjItMV1k
IkdiRl01REkKJS1UWnN0KS1EU2FGbDtISSN1MjA6aDI5WVEnWDk8bGcyUyJEY0kvWXE/IShPJTZT
SD4hb1YkRSVmNGhQMyhPXm1dWiwiJFlETGxEI3JuYG5vVnM0OmAnRjYpTSNaIkEyXSVlVnEKJWJi
I21nQDw7QCNrST4iY19LJDAoQUJxPmY8XVchTT5HSEFXP108blpSaGBzPVctSVBCSHVpJyJOMSM4
Oz5CJjNxbDU5LD45TGswYkBbRjw4XE8jWC1rJW5uLUFaL2xzZiRSPGwKJUs6UiRrXThTajozZzR1
c05Fa2dbcFxOIm1AWjM9UGIkXEozajJtT3NlSzosL05bbk5SPmRCOUBhLTdmRHE7RC9uOCJ1XF1Q
bWMyZC1PMU1xUVUpRS5VNmwtQGdzSSVkIidbXiUKJU9UUTk6Z2o4LTdpK15kQ0M9YVNDQ0lyaCFT
aG5bKGM3RUM9L1BwOW07bkUxNDRVPiZHQCd0bGtJZ1Q7WzVoNDtXTVlKcDZDWU9NQGYnRi8tNmcm
RU5MZURkbiIsNl1kIVhxXUIKJTZhdDdzLDtoWTAqUkI7XEpockVwNHUoa0szTHFFRiQyTmdCSk80
PG5nLzNXYjI5Il84WCQscEhCTXNKZjgvZU5TcUVBJ2YhMS81UyRrWCNOQSMxbSQzYFpZKFA8cWVQ
PTZnOVAKJTw1PVdyVTYwMk1WXTpiXUBTMD9PODNCajVsUmZPTCFPUUI4O2cnVk1KJERZS2BDPkY4
LFJkTmklQyoxUyYzTGFdLmxJZV8tR2lybiVYSyRrTEw5MiZdNiJnMC1UXjtNSmBlaFwKJVxGKFpH
XD9SRDZGWjpJYkFYakcqIi1PVDkpQUtYdFEtP2I+R1VbOVxgYXJHQG8tOUxpPGptWSEqRFdCVW5m
RjdpMFo2YD9kRTZFZE9iZj5vXUtrSk04M0k7bzBVdUpzblMjamkKJWdaYGE4TEkyOixPU1lhbFlN
NTVxckRuImdpO3VvUWU/RjBvcCFIaykmZForUCRcIV5LLjdcXjhNWSUyTGJRLGM4RmstK2E6bzpt
dSVWZUFXRE1cZikxWTtRLUBgUVRSaU1KLEcKJVNAbCJvMHIpdCduR1tLbjNXSkY1XUEnQ05pVWJt
T1NHPSouZT5JMVMqJV1FPXErLEB0VGFSUkpTNSpcZS46bVo3Z2NHU09HNmtSWS46YEtRXSZdJFMs
WGUvWzpKSEpfbk5HXGkKJSFNPCMwblZlNkVyIy8xb0hbJUEyK08pPFBNZTlIVzRATFNZWl0vLmIy
Wj1GOmVZVHUmPD01ZmY2Ol10RitXblNILV5gZzs9IzFnXC0oUixmR1ozKC9tY24nMjlsVzNsUV9U
RUsKJUU2NiFfcm0zPFU7X2FtL2lpRExtTGY/QzFfdEEkRURAcyI2NCFOLUBkTjRTWSpIVk08U2ss
UW9Ta2pDIi1ZXWJYWSJiTkQ4V2snNFJZL2xrKz9aYTU5LHRmaSEiSldqZjQ9R3EKJXFvRVYlNjs2
YltIYV0pRFRiZkZTS2Qkc1phJlBxJ1MvUTplYlpVMWxiX2xuUFI0XjpxLClpLWlGUVAiNFpBQShB
Oz00ZDNrSCU1aCheLmBwVkNZcnFzKGNwOjlsQilsUmxTVy4KJVtbIyZOKCNldDJQU1QwVTtEPipt
VlBAZkJnITs1L2Y4KVlzOCVMYzRocCpaJWNhaEZuRUc+blZOJXQxXVpIYWA3Li9oRiMjcixuQG0q
MW9faG86IlFedGVfMENpb2ZhTG1saiIKJVw1WCtUIk9uXl1Lay5qI0BWXEZuWj1fInM/bEdHXFtk
cjJLOTEnRnRsJls9JzY5Xk5IN0JvOyVMRSgrZSkpT2hBOnJMNHRbdWYndCRSNmF0TFA4J29OLGxk
RGQ0Si0yOHBmNz8KJUM6aVdqcE80V0pWQjpHPm49VTxlYSIqZCY9YS1pSV9obVBbN3UsWzZIJyhZ
K2RgZFhoUCtHTCVYaHFAKXB0Vm0tXk9jNzZGczdwUENPKjBWTDMrbipMPz01IVldWz1NKmMmZ00K
JS9BL05QKlFyImc9XVI8RlU8JWluXG5oZEdtOFhAbDVOKCVhSzs4YiYnVlYpPV1QPGcySSt1ZkAl
KWlpVW48b1BfNSd1TUVSallecVVJY1c/OUtGOVJJVWJXMT1eP19MZHE1IyEKJShXPEY6bWcqbmVY
UW9GVl9zK0Y3bCdaL3BQUU1AZ19YLFZCRTpXSllqOj5vU3FHVj83Uzo7Mz5LWTdybl1ydU88TGN1
SidyXSo5YnA8XjMhYllcYj1ULGBlWSJjKGU7TEtrLysKJWcnL21zSypCYD9QXHNtI0MoNGM9JHBZ
UFBPNG9mVTJQUls1alpfPl85cldPLmdlKl1BQF5jRmo0ZmwxKygwaChsOSFOKE05W2o6L1hhaCgw
SjxkP28nNydhUkcoXy51XzZWT00KJSY0YGdJP08sI2lmXTZCS0MlJUEwXlQ7LS0kTGAoMCJFXkVb
WlBxRFYiJykqMDxSWS1oZiVcKk8iMTNMRlRbaklBZy1FOHIzMUMwMiI9RGNfLSQ+dDUuZD4jaDE6
LyYuIXU3Zi4KJUpoWERWKFs/YUZWYENecjQmKz5EIkFUSEYtMCwmZ3MyLEQ+NGZtYGQpRXBNdUA+
MktUUXA/bSY0SDBoL2JuZkhKSmlzOTEjaEUjW101SVw4WEZOR3BeZmFiPyw7OjlhN0taUkcKJSI2
ak4ubUVMNlMwdSteM2tTXmJccTlcMG5FcVZNUFttSi5vNSpwN19aQmJWJC1KJ1I5Qy5ZUGkiZlla
PC5SVSlnLls7SW1NKmRlVDlpNik1VCQvZHNgZzZrMTQtUU4yUF5yR20KJSFQXDZLUzFsTSJrdCom
SW9bZ1VrQ3NfRDk5MWEnPVxaO2tgRiNBX1k5JTg0PEwrNiEvUUtzY2tlQCQ3bkZjNmdbaiJGSGlS
JjZJIl1sITNEMV9NWiVfYSMqWGA6bnVkTSVsVj0KJURYM1k2NU5DPjA3X28nKHJtVWhEXURxYkNI
RlNDdGtGXTxbL18wWSRGN11aWG9FS0MvIyYqUl5EPWIxTWhObCU1O1JkLnRrXjNtJiZwVTQxXD1U
YllfVDArciQxanBhXi1Pb3IKJW5OXyJUY2w5Qj40OEBadV9WN3Q1Pjcsa3A6bGsrXjJuc1dhcjlG
K1duXklIc0c8Yyw8QD8nVWltUGolYDJgNipWZWE5ODFHTCk+YzFNcmBQRFFlTmRxKV91S3IxRGMl
SylUKm4KJV5ycSxxYyJDZF5bZDdATVRYWHE3cnF1ZlNfblRHNj0yMktHX3VEMGdoVUNaZm5sdEEl
I1BhO08nTmAmNF5TRj5GVD4nP2QqcUBDUmMxZ2tYUV9cKm5WQCMyK1hBZy5kPD0/TE0KJS1yK1lm
bkYwPVBfcFFscz9aKDlyLnI/ZG8hWms/PlBPcmVKTGJgW1JHVFlkamxOYCQyR0Ujaio1VWUoYjRE
NSphU2AuL3VIVihFX2ZRNVFKRWsjaz5sWWBYXypoTj9MUT0vckEKJS5tIjEyQ1txQCJUUVtgVkRZ
ayRbKy8tbUlabjotZVRDcitiIVZPbjxWVy4/cGFSPzB1aGo4XmBMW0ZCM2c3aT0jSGppK1AwJ1RP
WlowOmBWMydiKzEoNDAhTTRvS00vcCdsSCQKJTYiZiwoZi9AXl08ZjghV15ZWmZHPD0nSm5YYVVE
WmRhbW5aZSdhaldcPzxBV2k3VUkuP2NYZm5VXDJgOW4rYmw2MGA9bydKbz5HZ10hYEY0X2BsWzk7
UDBCckw4cSU1OzRqP0sKJUhMMFU5M1w5ZFcldVpCTyItWV1SWzUsJlteOTdDdGo4Mzhkb2VBOWo6
ZWB0RDFiOko5JTxjQXRXI3RSWEZoIyNjTW5cZVQ8QTlrQTBqYEI0X0IzciYoJXUxOllOazhdV1FM
PkUKJTs4WU9bMXBRSlA+RyxlNllWKVNJRmtyLFJWXHIzViNKS09EXDVyJV5XcDlqaVAtOz9JL2Uk
MDZJPEpbdUdKdU9KQyJyUkYkWiNAKVhTOl9jQDQ2VFFVVDsiOWFtLztLZXBZZWwKJVBuIS5QXlZq
cCZnV0MrP1pFTm5xJEonJ05UQ2hjUj1XJk0wNHAhPGsiWUkyP11qRFlqPm9DIjtvdW0+XCE6aW0n
YkhdJVw0aWVgO14/VEs+YVdPajhhdCIlak9dRS4wRGRiTDQKJS8jdCspRVFOZms8Ym8iKm0zNT4t
XlldZ3NMNUMrUV0lLDYiXEAsVjg6TWczY2VEbGhmKTRpYnNyMTpwKVheRihFVjs9UkxMUyctLkdi
bWJYO1xlSEJYOU11Mioxa3EkWjJmXlcKJUM8U0c/OzttLCNqVWU8Yks9bUVEOktCc19cP21qMlZJ
bUZyNjt1ZXVWTG50N1dkcm5zSXRoW1FhVyRuRU5jbSkzIjNaQkk1XnJPKTkuWDd0bmFfWGpZJCc8
RkVwKUN0ZmpIKCsKJVVgLjVfTFE3RSopOmpCTSkpQDwkR1JpckY+YHUiKjkxJyVWYDxdWlpxViJ0
KCtYOTFKbWcmWV0rI21SYWojJSw7amNzJ11tNl1IaS8jYSR0Z3BhU1RUVmAzYzFKbEYuW05kPnMK
JSlUU2VqYGdlNVU2Q1E/VixxV0hrWVFqOnNOYDYsMl8zN001MWFJPFcnaG9LP0hcWSteZ11LKEFX
Tl4qSVNaazYyaWoxOlo0QSQuQyg4KTJCLlYmTmVuSTdBUk5lVnAjQXJkdFUKJT4+QUhKZG9NOHBw
WG8iUl5gQ2tdKiY7QjRsdDNKLTouSmZSLTxbcGZVaGUvbWdLZGxMNmRvOSEzRG00QW5ecShAZHVK
SEEuZ1lJaUhfQSxAJTxRNW9aX2Q6P1ZFX1ZaOFM/QEMKJUNwZ2k3XSJUP1lfTGpHUFUtY0JIIWdk
TzlEMnUhJiszWTtrbVswVEM2Zj8hMypPWl1vJkNDdFQ8YDQ4KCN1TzxpMS4xcytgUVZuO2VJP3RJ
XyZHTCtpKD8qaVJLOzpZJFxDLzkKJSxNTk5bSjwtTS42NkM9ckNeby9LRmI9VlVIS11wPys7YEdX
NEgkXmRDQUEsTmxIJShua1VpbVhPWjBwYTQtNV4ycVVRaE03JTouWDE0bFZXXTtrTEEjPHVKYThX
Vlg1WSZgVCgKJVRcMkpLMDMuJiNaRyYrZykwLm1FVSdCVi9USmcyOE9PRVtwPW0tcjNpbmpSUFpo
LSpzX1hcIXFaZktScUM8cmAoOmkwaj9gWFwlWGIyPFMsWy5rRGkoTz9IZS9BQ047W1VLLkcKJVtv
QHUlXycwJm8lRlxDPDRGTVgkcU47LXAxRVo5N0Y+KidqKnRdQl4xPDtcLVFIOkwzYG90ZGs+cV50
Z2EvKkYpJm5NN0E2T2dwZnIuLCxmO0RBUGo7QytPRGo/KThtOGVyQ2EKJVNaXjNIPT5IW1xYSjpO
KXA7ZC4mbUk2U3VQMjswYGwlUyMqUSUrbTQmYXQ/Y1FSb1JBX11PI0dAUkZ1OWY3bFwtUGhybHMj
ZyFPcywxJmdeaGJganQqZV9KVVQpMlFiUnJIYCYKJTBFUCNxLl0vcU46PkQoXGIoYnVXJWJEbDNQ
RCxrRDc5NGZuRm9FSD9ALzctbGBSbydtVkJDOmxUIyRhTDcjQSxpTDc0TypMbTAzPTAiW0k9V2VH
YDpaWUgwI2cjOTs9WjtmbHEKJWhDP3MkO0MhQy5qWj5sMz1BO11rPVEnVWlAJ0VqRyxKJGRdMyV1
KnJMcFw2VSJDTzwxcnVyQmAubS4saChcOFA6a0FXNmo7Tl9UW20qNVpIRTVYZGVqRkJqbzxiUyhg
N1ZiSVwKJUw/O1JNK2pIcjM2XmVuYmI3QiFYIyRUa1xvOS1PcEhQXVBGUi1dJ1RbJ2J0alEjQ0xB
bmlPOkBvKj1wQzlLYSFfQThZcVdAYFRfOTZJQS1wXnJxX1VTPVotaHF1c1BSTFA1OzgKJV8wRGlJ
bCU0ISQwWG1JYFwoSTI3cmwmciMpIkJDPVtwQHNySWAyJmBxZixsOTpiYEU7LXNYLVJQTjxrU0wj
bFhPUzAoWEw8bjBtaDU0QWdqNCZOaF5MWF01M049JkcpJmJeZ2gKJVhmSVssV0dzcjdaXD1tSmxZ
P1hsPG5fPz4yZl1QWDgraU4mIVBSbD4uLVJrLihaJ3RoJ0BbNSZBJCtldWhVXG5URmYzX0hQQ0la
JzZZZjRnNmdebkNZLm9rSiUvaCZHIT9qc10KJVY0OkBuM01BM0k2R2JKIS1kIzVZLCwtXjBYKW9L
JVpzOkhgOE9KbXRXWERdI248bU9AZzxlNSg8KkYtQDJZZVY/MiFNYWI5LzloQSZKXmEqPSM/SnI3
IU1AVko1P10iPkJSNTMKJUtcTSQ8Yis9dERJLi5EOElWa1NEIWssJF8jQmhwUkBDMmIhS2JicS5i
O1BxbkB0Z2slVCZlRkpdW2NfI1gyXk9mKkc6PC5nL2E2VDApV1xzYUQ7aSQ+a0NuPkNlYC9tQ1pw
SGAKJSpJOWU4NyMqc1hjM1Zna1wkNSlVU2BWcFZVWC9UYTs2JzReRmkkZVowaUN1XlVhblNzVj4h
Rj8uNFJKTU5aLiNKa1ZGTzduZWFJbUYvbkEoQHU2U002NF9HOD9WYTopOmYqIVMKJTprY2okMXMt
QiFjNSpAWTo8biJYNypiSjpdY00sZUIoZjdsY09zdC86UVYoUjIvSShLOUtqMitQZkhYXilZTiNX
Q20iL3RNXUhxImY3ZVhPLFBELVZNaF1bOD86bWFXNjplRSMKJTN1WT5KMWp1dUJgWGpIYHIiLDVU
bTdIKS01NjxVblBvMScpKVxVI00pMzJublUzKzw6XjBHS1NBQ0lDUDEwJTtuSEhgNFs7bitGL1xL
VVZbP0VBXTFRS1lUZz1FRiMkOUdWSFcKJVJUIUNzVE4iUWY3UCVSTC1aMnIzI3R1PUBLQFw6Ljky
XiNJLSo+X3RjZUk0QT4qIUlQQDhDayFpT11EKGhcNz4uVW88Jy8iVUJpXDYlWDNLXC5KajBVWU9z
UWQoPF8uZ1szVVIKJWBtOkdPMCghUS4+JzU0NDo0OlBzUC9vcTBFY1YvPFxdSExuW3VEb1Jvb2db
UU1NTDhdRk5tMG9VbjtbcGpLQjAzNCsvVmJzN3BabWhCZiVEZEBIdFJJN09UOGVlMkEtPT4uPSwK
JV1yc09vRTFWLWNcSjBvbyxLKWBnM1Q2JmxqYVhuLF5Mb0xVbl5CP09bVXEzYzUtOUxXVypPVmxu
dVpFNnA8cytZYi1lP1hdZjtIVjApZlNCIWFVITpjWlxPbkxMIzQlKDhgOioKJWgxOj0vRGY/X08k
UzxgL0lZMGZOS1FaMiQpPjpuUD84RSZiWWctXFM+SFhCNFNGOGJSZVYsOUtPREZhOlwjYHJEcjY4
NkliTjRdcmksKC8kOWZgSjc1Mm9lbyMkYyUqXVhzYEUKJV4/MGdNUEM5dHVuOG1LJDwrZjg/Wydr
OUUpWzBoPHAkQl4/PyY+bmNuXGwxO20mXzc+YXVrZlgtPGxYYXEwR2I9WUZcdVQ0NFVvIzBIKmk/
cGg1J1FsTCRMcGpDI1RJWWRDSyIKJVdBU2BDTGcrWlckRDxHODFfXVVrbzhlJEhsYUVhNkVETEZY
KzZeZDtUcE8+JyQ2U1d1bl9MUGtWcmZnV1QtajtDQFwjJ1NZOW1HdGVnVCplV05LUCQhaVhzKk0+
KFhYZy1PNi8KJUJzV1ptYWVDVXJTPVYoYzBNMUchciV1ZUVpJVstViorKUsmW11dWE5mJ05rQ1df
WyteTzY4TFglaG1zPTMmZDhuVlBFYlVNJ0g8PCE3JkpdIlpOdGUkTFhQXm5VUmJDMy1eUCsKJTxY
QnFRVGIrSDBAP0xvJFFHP0U7blhtUHFGbWhHNFw9XytITzduLjhNLFAzM0g+KkRYQW4+S19VQVpQ
SFdMNlwsPz9AWTtvKFNzLDlGTGdCcFNPS29mVG9rPFhMPWdeUykjS0cKJUIqbk1kXm40Wl47Nk50
KCcvSCxyUHRZNUpyJz9rV0J1U24kRFtFMHNJVT9aSEpyIl5jQjdjKXNHLEdIdVlBb2FEaj5vazBd
ZVl1aSMhOlVKX3NJRWdbXEleaihqOmQmMyk1WS4KJTdCZXJeaGwwK0VMRjRwZ0tNX144N2oobmRy
PkIpKEMkIz9Gb2RTN3NqO0AiOys9cy0xQmVPKDIpPnBmNmQzQTZdNFtQJzgwRHF0QzElPWUpR2FM
RydILDA8aWNSWWtDbzBBZC0KJW0lNT9OJlxlaDBaJVotSmtNQUFqPDI1LDotMFI5dG9xI0AwTCJb
RSJHYilWWExdLSc7cEpVWkg/Ml5zVWlQbHMmZVQnOFg0aC9lRiUsMyUsT14xNzEkSFVwa1hmIisn
TCI6PCYKJUNzW0srQzBGRjJQRGRMc0JcV3RJcCFTInVjRjc5O1I8I2xDWm8zWGsqOVA4PUUlcms4
cGRsO1xUZFhiTFFBL01CZkNfIzYuMjQpTjpLWV9SW2BIRDdtXT5ZQzU3QVg3VkBpUlIKJSM8VEBq
XE4sb0JSckNGdGJCPUFBMmljJkVBO29rdCNYYiFLVFFmRiZZQk9AKlNpUWRPMEJkZDYiUG5iYihZ
J2FUMVZhXCdjVTBOWXJDcC4/O3BhSEArZyY+cCZEJW5yT0JzWWoKJVokLSJWTD85LG5bLHIiR0kn
Q0BbVlg1MjtwR1xFLjBWLU9VcCRZci5ZZmMqLnBwPHBOaktFbmVSaDFFTC8wKnQlOTcqJGllNGNk
Pm9bTzgpMlZrN2BwYlpEJCUiIz4+biY3OCIKJUZfWDo7bTwjQy0wSixQOHI1U0BoKStRKCdrcW9w
IW5BITItVlxzJTBeVk4qM05bRl8uQiEkMyg/QDhAcFZVPEdmOiwqNHE/PTgpMkVqVjwwPGc8bjRo
cHRIYFdTNlo/aDhWODsKJTk+ZFZhbTpiTnQycCd1XDVnUm1VPENnNzNLN09Nai1hWVZDRjAzLVQr
JzZYYU5NU1tOQSxWIiw/U0BzU3FVSFY/UyZjVHJuZ0xZUj1vS285VD0rajkoTTZFWGtoRlpGb0gm
XHUKJVUpQ2NwUFhBOkEuanAjQy8+ITEraDJncU9jWHBWW1hMYUAjZS4oPGYsQVZaXWtnb09fUzlT
MC1Ec0YvcChzSD9WKVk1XWRbbyVLaWo3UDcrWlF0dGNlW0ldSiYnYyxJLms6Y04KJVlra18vMWEx
aT0zVEBJaWlyZnNDMihkRU1ZUEVPUnByLWVNY2xgN19hN2BHS0lQalZrSltwIUNtTSpfXW5cJWxa
bXBoVUlDVyslJ1JrYjJOLzpIU0VsXFtIL25jZnVXPlRALkIKJU10MWMiRjRfUlQ/Oy1xP0BASURw
WHQsYUJxQmU9NnFHdTQoQlxVa2peYDktQEskImQ6MGExczRmOi4hSGMtalovMlEnXnAvYF0hVzFb
YGZeP2toUE4zKUZAK15DQ2gtYlFlXEMKJTcocj0pOjYoRlg6JUM+YmEsMGM+SEkuVWM5Y3B0PVNV
OWdyVFQ/L1JyTG91UUlCQC5vPS8iRmtwUyI/SzdwPTIoTFo/J2kyUDJcLFk5UyZeS1dFWSljTSg7
c2UhKXFPJmIkTWYKJWNbKXIocGhEK2JPNjxaOyQ/WT9yK0Y4MVY4IzI5c2VFPDI9ZCdjVGowdTto
YjNAO2UuJzgubiFSKUM0RSw9MVBEU1JXOVNDT107ZiwwKHRfZ1crNSZULUNSKXFBYlhqIUQ9ZSwK
JSFrQVJyJHRMVUAwW1Q4X10ubFkyPkROayJrNWtHLHElYUdeNU4zYU5oTUFyVWhubFkzZzFaRSk9
c0AvWEphNigsLlJWMmNVP2VAcighZjlLPGtQbkNaSydcWmNBVC09YWQ2PloKJUgxLUY1ZTxYSVkp
PUVrJ2huIzZwLjQ8MD9FVCMhPTJ0O14xQ01OLTVEXnA/MVFMOWYxSD4/MGVXR3FaPi5WXicyP0Je
Qm1PVVksM2dyQSJJTzBgUGxFViJrTUJCcUM+cWFpdXQKJXJqRkF0IloqPFlRX2NVQmA6JF9XO1hw
OSomVWZdK01ubC9iJklAX0lGWFdwVTp1MTBhYyViLEpAPEUrUFFgYWZxVmpUOGdtXE1LZzltdUJS
WkpRTjFHYkhvN2xCJmY0LnFedV8KJTFRaDRzYzFJRiVmSml0Z2tySSZGREg7aElyVmJySidEaT9K
SjMxYG1CZCMmTyNYODplJV9JK09JJ1tkPidxVlUlIWVETkhGR2c5aT4kSFNQKkFVNiVEMkNYTTlJ
KCoyOj5PT0gKJUJcXVsxcCt1RnNAT1Y/WmEmMyUuU0ojbUZJOy8hNEhMJWIlbEM0ckBVI291a21i
MzI9Rzk2U1lANlFWWD9iUWptQjk5bGw5dGdJTlEjRWppKHVbQEpDaFpCMzosQW47UlNnY0kKJU44
ZzZyMVx0XzJjI1JFXDZWZGVKOlBzJihlMz47L0MyVzw8Iz03Wk5mQChVaEw8VS5dMnRDQ0ErZTxr
R2pOXT89QW0rZDZtMD83XG9wNSpBPj1gU1dQSmA8LVhxIigvKWExPmMKJVFXMVRZRSEqMXM1NG1y
QT9LMzVLJXNGNShDI1haQVE+M15jXEwoRz1MbXMzMi9yN2NZW000Tz09Nk88Pk5mU05rOitgMSdn
Zm03cTNPWEVVM0NSVC5HM2BCVzphXkAhN0NdRkMKJSZGVHBybEknWCFYJzI/V29XOSJlSGBWXDJZ
JmtfNUdSKENBIkVdQl1bLCk2dUxCX3NaQFQyYWxJYTxRIjhCaThAITFmcyk2cGVvWDInKlZRVl5I
OFBadUM+RE5iZyhiYWx1UUUKJT9iPGBFcV1oMWEuIz9PXjwiSXNXXXFIVGopXiJzTkgwYWdUJys5
IyZCVkgqUG81WV4oZSxKW21PbHQzND0qTShsOipCLTdqbjAxOytNUHBpaTpVRFsicE0yazBnX1Vh
InUuXy4KJUIrV1BfRlJdKiNScSYpKW9Xa3RLKEI5U0tKLTlbIy4sP2FKOl9qL2hWZjJBJmdbWztt
U1VKMGIzKEJWNGEhR0xlSW9scj5jTydQWTNMWU5hS0cnRiRKNydTbmZlKEMvR19DcF4KJSFNQUZx
LlZER04qXi1bQj1Xa3VrRi08R2pWSz5KXlAhVGxJLm9WM2U7SVIvTiJkTWlMS2BtbyIkW3FANVNm
P29oTyVYQCswPTFHIWg/dWRZN3RlSzReLmFMQ1YsLD4mUSpkLloKJTFqNkQzVWVRUXNqM3UyOChO
SyVzY2hlPHVaTnBoLm8tLE0uMGV1Q2pxLTVZRjQncVU/MUtmQD0zPjk8TFVRYik1JithZTRaMUVY
P2s1LVw5KltKXlcxZy9kVE1pYjRPcFkzOWYKJU9LcDZicTdcSilbS3Fib0cvYDcvZlJZOkMtYDIp
ME9ycF9oO0E8VTVbaUMqRSUtdD5hREZsRkdhIj0/J0gkMCtBO1YrZ2RlaFAxPl06U100UUJjKDRa
NmNWdXBaUG9LVGdWPnAKJTxwM1pPWGNUWls2SW9eSkttciZkWVgxRlRsNVVgOUpKUWd0ZEVzXzg2
O3JdKVYtJyUzQFI1Z1A4QC9aLSdxI2h1ajgoUCloNlwjbG1eYkgyY3A/Lm00KyRLQT4qaCVQVTwo
VGoKJXF0ZFE1ZW45UmQ2WlBYZTJyPXUsKD9sXXQwWVRnLk5QWVplZjlgYi8/KDlROnJUQ04oVkRa
Y25SVkNSZGxmcXAzbUhePDxAM08uay40Zy4sI1JST3AvT1o+RkFyT1tHO1ozXjIKJWphQW1WTSI0
KSsjdU1iYy1yMSYzLV9KNkE7Z0BqcHBfYm9qcyVFbUBvNkFDWnEnL2tEKXRMMyNndDJiZy8xQCws
KkhkJyxFIWZsWl0yZV1FPDlaSzo9UWslRSdsJS0kYE9xPysKJTRzRm9VT2NMLnA9XEhnUmtLcXN1
ZlxGclptYmxZYWNuKG9fZixxRGdhTVNAJUIiUEk9RDpFMS9iZSIyKChTaS1CaXJeJ2ZdLidHOUc3
QXViYSliVFhHcjFEWjo3MXVCSWRhV0AKJXFJdSs/R3R1ZjNELmRILzldPTZEWVtnTTtjNTBGQ280
Ij9SXzJHc24oMmJOaCZaSzwzV0opZ00tZGo/SDJpLEZFQzxkMnI9dGQnMGxNXUFrajEnYEReWiU7
K21MTV1DbTNVY1cKJUNbLVE/QEFuZidbZiRlVlhZJmJOLUNvUF1pOT9bNUEjLVdNQSUzTSpVcHJP
VE8yQGs1M0Y6WkROI1hPWVdDbWdiJj5IZW0pLlxMQzZyI0ZvYktmdDNEVjo3dVViUiNVYVVOT14K
JTZTXHJWTFYqPUAob0RJOWssNDldNSlxQDg/Kzt0cF1jVm8zcE5TJytcKz9sdTpBIVwjU1ImVltc
OGE1T09lRUdAKCldLmZZXlBrXyUiQDdjYDgjO002OztsZ19MW2ZKQ1pHLGoKJV9cI2gwVVtYKXJO
OjhZKGNYU2ZDUDooU0dqR2ouNVZtcC5la2g5TkdiRiNrbWxUKDYpZWhQZFlTalAhX1VYPF9fKjU+
W0NeR0EnXzFwPiNNclM5Nm89NSg6dVQsJnM6cVRATjQKJTFsK1deQl81LVcxXFJDL2gwRCYwRmBP
M05cVHBlMjoib24/aC8oL1oxJWxARkhvVzRJLCFeOzxkXDRDXVxCUydiZmtOR1RuaF5rYFZUSTpk
RWVJXC5FKV5ZI3EuTFdjOlRPIWAKJSh1RDZhcDUwOFxdTzpEY2tAPzY1bTBRUjVeKWI2VWNlNUtz
JVgiOGZcND8zMmZYLzhNNDVWPEc/YztVZ2I/bE1xKk9Ca1ExKmxCdDJWXTMuWSsxK0lHOEs3MSw3
IUlJQVxLOCwKJTQxRV4kcls8N2RuTj10XihXJChwcFJoLWBwKmZMJVprKylJM2hcNW8xYURkS1pE
STdmVCZHPSsqW0xQOFtcSTYiVjtBTnMxUT42akhwYHQnK2RLNG0xIjVRKWg3amNIcWhXKWQKJVtB
Ymw2MVUlPjI0Qy9QLjNfUTVOPigsLzNHQkQ/JywxY2FvPklNOGJQbzhRVF1dXm1qRVIiOkc9TEpn
cyo+PkZgMkBgLyNkT1ZENDcvIlQwbHJZMVREYUwtKzFMWUpqbF5YKTYKJSgwWUshUzEuTzRrcEUr
Q1s2SWxSTWZjQnRYLm0pSThEJmJOMiVrKEE8O1pJXz1OdXMzJlc+STlobyJyZ05wOEhPWT4xaGU8
algjKiIvOz83J1JCPHNaP3I9TGNMLHRiZCowbjQKJUYqLVhgaSZBXlZddTEuPmB0L19aQVkpSWha
ZCM0JztDRm0oMkQiXD1NUGQoaiMkRzgmJDo5UU84NCEjX0xyWDNgOlFxXVUxYDxRMixQT0UyI09i
QEpVRkgvWEwxX2Vdbi1eTmsKJW4tYkIsaXRsL2c8Y2wtcGprYU0/VG8vXTtBVDlLcWJwTDYoOF9E
QTxHb0FuQUZVUW5vUWo+dHMjbnJBdWJmR2RaNS5XZUhDb2pyUEZ1IWZjSmYlV3NUY2NnamZDcWJc
JWxEZCIKJWMic2kkOVNTcypnV2VOblc9TWYnSHM4bi09KyRCOmp1bUQxVE8pZkZbXkQrTGhDKz1p
LGtbJzxKISdyaWpzKGBNMUx0Mzc9X0hlJ1tFUSpIL1c5N28tYjhyXUppWzl0STtzI0sKJWVDckNi
UCttX29PUyRkdTEyZiwzRFVfPjs4TmFhWW1yKztUZEpySkFmMWY5VkpwYF9kWldOXS5oUSRlI0Bn
Wko+SWJ1P1hOMmtidS1YUHVcQipyQjxYdFIoc1E/Q082MjJ1PC8KJWdkQTlCMCRmPjlUXzxpT2Nv
ST81KF1uQzZmXDRzSD5AYzE7OE44KmdDbUdrJmFmKFVbIkYnZEVUOGYpN0xGQGVBMzhIMz9HRSsn
PF4obFtPTltUL1trQlZwWEgmLE5lUV40Qm0KJUphbkZoIjcqXE9tMGdabiNPNDItSGNHLE40TXVe
XjlxJVJLZT4jI3BAaC5Nbi00S2FRM1ZEO0NSOWFVJ2tINEAhckIoUiZlQVI0aVkzO21sXEEhKTZd
SkNaQGJvWFhOMmUzaGUKJWtOVWZnYGwkZihTTzBbM2tlIUciJzxlXUdXRWpFTUVQI2R0bTs+KDJb
NUdFQCcqcisrREFXMDc6ciVXITEjajdKSWxgVSZbbSwoWj1QSV9DbWNxJGNoJmFNQk11RD9NNydZ
R1IKJVpMPWZHUFcvVTUyTFNoM2FtbFReIkFwU3UsVlNbQHIlODUkWVElX1lBIl9EYWhqNyI1OmM8
ckVabHRvP1pZc0o3Y1kycXNScU9ZVFJhMiJsQ2hEQmQuW3VrYk9SL0M/Ijc5b0YKJU4jN1tubnIi
P2lHMjc5YnBiP0doMEQ2TnUiUEBIXUQiKHFWcEBaazFwKWdNaFJbODpNSmBfLCc3ST8tKW5aQXFC
JDFbP3RCPF5oJj5GYE9iTDchNTVgYkI+SWtGTEtQbmNPPzIKJURNR2RbUDlUWCkmPzNwOW9gYmw5
NDxSWW8rMUxxXHJuMmlxWHExZFZCPHJZXyVRN3FdNVdWa0tgPG5bPzJhaDtPWjhFdEVtN1Y2LmJj
NXNORmQkajtYX2V0UCc3PzJHZSdkRnIKJWMrS001QU4kYS1MSzBkSU0pKD5JXT86NVM0SFRPNTRX
W2xcLnEwaFwsSm5fZT47dURuNTpQdEA8WSMtaGs1Qio1NHVdaVlEVzUuYixrRSMqXE5BJE5EYVhX
Tmcuczp0cSJddF8KJVsjSl0/VnAuPzNcbG5iXypGa1s5MU8sO0RtbWxeMWc0SG4kbC50KVIyNV9Y
KDBwMj8lVDA7WW82ISQuVnBsazxvckVOQ29GZjFmYmc9NWMqJldFL2EjN05qbGRXPzg1WlM2aTcK
JT4tcj1POyVnYEZhRDVQTlFGXSlNY2trVG4qVnBEcDR0X1w7Ri41LCdCbEEqZDcuYC8/VCxFLmtx
WDNrRDdTU29vWlMsK1hePXRGbjpXVWU0Q0RMYTRaYWFvSzwpS110JCVSOF4KJTIpaG82Tk9GSWZJ
JmUhdG8hPUNYRSMoRj41XkhIX2BfZF1cSCg0cCg3bCY/NmVRK01MQyRiP3FhLEpPV2pZUFxlQCUk
KiQrSSI0MG9jaGEoaS1vJilLV2hYVF48SitSPFpXRygKJWdbdTEnLkl0Y0BnbWdjWUYxTy9fZUVP
KFVvOTssMGMnSFM2I29pRHE6Omdob1YiOzY7W11NV2dBQCQiK18lZDZEbSY5U2IxSCYxNmFhcC1Y
WGVCM0BZOFNDaGUwS0xsPypzZmIKJTM5PUtQLWlZYF8rODkxS21NPUY/Rj5lRTtvcVAzZW84USdS
KFByJCcwXWRTPGQ5bzpfZGMwLFJdPyQ0LUYzY291RkRHbD04ZFxTNGxJNTVzIiREUWxmOlQ9Syo8
YDNvbDBNaD4KJT1oWzY9VEZIYVVkPl01aCFSV2gtVSJjSnU7dGpSNjtFaz1WNGdSO0IrU2xOISJS
MUdcZj9nYzVHTWNSY2cqNkgtNl9DOFpyWTtvPkZZS3FcUVhzXDNIKD1VZjdpJDdWRGlFZWkKJUst
dHVmSCNvUFAiJG1iX1hoM0o4M2MjYnA9LkNsPlpmSyMsb0JBQzY7Vjw2Im4jP11gPl5uUF0yU3VO
IjNnak4mNVU1MGtEIkJqaExXKTw4MFJsW2glLSRWQmZZK0FhODBfM08KJV0kISo1cnAtLiZFWFIw
KG1VQkRjVy4mWEgrJ0txWVxTI3NkLk1ULW5uWEksdCVjbUk/OClAOTpWTEVySVNFTTY/TWsjJnE9
QUlkZHJcYz0tNUZnRCRiQyNmIVs5cEMlPTstUnEKJTM1PDYmXCpuaDIsKzpCc3BLOkgoX1c3IjxJ
Py9GI1VcQC1BTFUkJHIkWSxpR0pcK1dSMkFOc2tVOWgtZGxgdV9TNDBQbEQoJjUiKmYyZSpwMW4z
TDlyYnV1MUc7PGlwO1BPWWkKJUtcU0UlTFsiaUUzbzNxZ05cXGEvXHFPci0kL0wwa09EcHUrX0gt
J3NOYWBATXBuSl5DWGczVUtWXD81UGpiNThSTWhQWUIrNzJVJ0xJPy1pQGB0SilAYEVXSjY8MlJM
ZjEvbVgKJWUyOUtvM28maSVDOkNBZy9pZGVWU05aXT5VSztxW1lRISNhLk1EWWBqXW47RzFdciI2
Yk89Jjg5J3U2blZPIywwWEBCQHRxJlg+J2FPXT9YRjpmTjIzXVFiW0puO2djKWNYJ14KJSxSJyol
Mk84czY8STZmcTstWUMlbDojXVBuMUMrIiNPIkdfSmhMQ0hSMVpeXztjPjJRUmVZQmgxSDMsVFdG
QChPSVNXJlY9bilCckhJK1h0XloqUEcmPmolMFlgU0xtPnUrbFsKJVFDOCFxPCpWXGUvKCo4WytH
NCElKmRUU0dJX1U4S1tMU0FMaiZiOF1nZm0xSkxzZUwyQCdhKG0kZVhmQEtyV1gwW21DcVxFVFJl
XEtzTE9QZycwckhoIkJcTU1FRjdBYWg3YVwKJUg6PktCZlsqOkBwc3QzIVsnWC4tZyojbGstKS5H
JEgjRytma0htL1ZOVzJOYUNBbFFIbG9XZnVMWjU+bDxaUmM1Y20xIkQxJUtrN0tcJFpEWUE6ZVMi
QU5iMmRKc11oXEBeP0MKJUNvKDRjM187I15MZlhsPT0sI2U1JlpdaFY1MykyUD01UmhwUzRvP1Nb
U0NtTD4ucihwJVZnJTJIJFlJVkRrPEUkRjgmXHViVF5hNj4sSkwzQzZkJU4qTGJURG5HJls1IW9l
ZW0KJSp0PkRQJ009M0ErW3JlVy5LOSJAMi5YXTVgPy0qSmpEYW9IaV1ZXjFFTnFzTkAnSmZhYDFT
a1cqaD86Q1RCX3JLWVF1Tzo2YWtNRDhcN0RaXlxWOG9rWkhuXlEvJk4obFdedDkKJVY4Xm1IamE0
OVguRSMoX24xSjhVLDUhaFk7QjtKUT9NaS8hPyJ1UkhlSV9MKD00PC9HWFdQMWgmO3RlLUlRVmM/
Y1lWXHFsJiVCOE5hMWknZmk7JiFHdC9IcmRGSDk5czZRXzYKJW9qbEFZYFMnRTZxXXJxRyRONGUv
LWUkT3RNcFNDSiRUJDEkJWFYVTskSUVdIVVYLWMvcS9KL2I1Kz00UWA3O3N1ZDcsbllrSDIuU2dz
aWEmMXFAUitrPDZicD4uN1VVQ0tIXz8KJUA6UiRqT0tWWCYoYGorRW8+SStHazs3Q1RWdFhqcypB
K0NnKT49IzJmX00kVDxpNmFsZDVTSTJVVWVoZklAN29JWlRMZHJTUmVyYj80OCs9alJbZ0FXLFhO
U18xJWFeWUQwTy4KJS8yblo+KkU5VTIxTVV1LWwwP0tsVmthKEJKWyFWdDNUSi5pckkhbVtxWV0t
ZVFTLHJFbydKLWBIJls7cmxfaUteYGRPaURbJitqXGghSEYzciQlaEluYm9jQSoqZyc5OTRpNmEK
JVdTYFExbWpvPl0zUk1KdDpzJCwtRjFiJkxwVlVEJWwwaG5kaD9cakFAX2NDZFJeNUQxaFdDYThG
Qm9uQEUrITVXak04b21obyE+MGJsY0YpRG5UX0hLVlAxW1Y8RWhuST9EPEAKJU1rPkUkcFtCMyVR
aVoiazZfREBhVEJGL0FdUWwuPXAyTz5NIj1KXyxBbWpyQWdxKmRvQWVAJV1jdTJEPjRoXDVQPUpO
NkY3JSJhZT0vS2xnbnRNVlZcclFGKjFmIVpmXWYsSDYKJWE4UCM9X2ZrVF1qdVNOImojTixzaihb
KzRldSg+R0NTRj00VXQvTDwqPkkwXDE6QkQ7ZmVNZy9PK0NlPD1AaWl0TTdRPCdlUHFlR2wnPyhA
QFBucG0tJlU0WWteKyItZG42T1oKJVw/R25fKkJaN2JmMC5iSlBkQjE4TCsxczVoP0wtLVtlPj5Y
W1NeciZdJDFeamdnZTpnT0U7ZVhSUDBuSWhuVjA7NEg3LyorQUVHdUYraUk9JkUuT2tJR1ljZ0Eh
aFdvN1BQJEoKJTFwcF0pZEY/TkxVOGFqIjRJT3FOYlAjS0AqZlQqQmBWZF1Kcmo3S1lGWihualZr
bzlVTz5hJzhAYjFubiosNy5naGNFTTdNPWRXYSleakIiKEVfWytUMWpMOU0vUEdzKFk/OCsKJUQ1
dCwmPEc1PWcpKigyQ2tbWXJlKHRXM1trR2R1Lkk5XlY6VWQpMi5sZVxIO2UoVC5RMHFDQiRWJy1e
clF1TW9BVWNNVVEtSCF0LlBqZGRKVjpXdWVASUcnbFZVbytTRC8mJFIKJTJiTiUzNykpQGAtU19I
LDstKGNiLW5BaidrSCxBXUYza19gRl05YFtRSzdxI2wuZjFLIkBmKEg4RTsuPGg5NUBsZChhQiRx
bCNJPTE/YEhkKCoscSVJTGQ6cWYoQ0YsYzwlS20KJUctWkIpaDFOZTtqT29JJEhQMjFhKGVtQm1n
KmBxS2RBS2g6U2VoOU9NV20mSk4qa2xoMTpELT9QJ1AhO1wraWY3YyhhTFhUIlU9XG8rNlE/UzxU
czRvdHM/ZGY+JGFMVzthNmsKJWwwQEg4ZiciUD5HRHQ7RDZbSGRkaGAyREcwanNaM2RVOCklcTlk
JlgzXylJJCQ6QHBjPWVlJ2ombzQhXW9vNVVDTlQ/X2BRQiElKilVWzo1bylTR0MxKFJlLDtkQDlV
Rm4rN1IKJUc6JCUiVEExOylSJTdsLEVrMEc0I05fKztzMEY0bDRuU2JrYUJ1VzJQOidvM1BJc1pI
RUdFN2siVEUoTnE6TWBcPTVII0kqaGMqVW9RJyNrQWVLbkU3bD9KcTpYVTY5bSxAY0IKJS03S0RN
RGAoKyNwLTI5XChQMikrTjB0N2A/ZDRyVWlTOmokQlQqcHA8Y0ZrNSVUNT1iY1IpIzokNXNxL05s
ZltoSnJWSDItamdeUWYiU1c2cDpZc0pZNSJeMzBrbD4xKjNTLCwKJTFCUWo6MCtoIyZJR05LVU1M
Y1NPZS50N2M4Lk90ajdRV149MGQ+UidMXzhzVUIyPSttMkpWaC9PQ21OKGJwbideLy80M21gSCM7
TnAqNWEhOSQ+ajk/K3VjVllDJWsrWmY5WmIKJVlQXCg3KSknXFQnLy8nIWMtPy1pbyhyTiU3KS4v
XnBuaUA1cWppPVpbI08sMEcpZlRDOUpiYUNPL15dNnFqamxEJz1zVy9BLkZXdW9cKSRLRzMqSDMl
OGxmLyd0IzhSbEpuc2wKJVI8ZFZhUzdaUStgb0AxREJFLFYkXGpcRVlWUkZfMigmNS4uU1RHSGxN
NTYnbkNMZEo8R1hVZ25ALTU5LCJTXUVLVV00QUxlU3RpRGJbXU1TTD5kMyU0NkBrUkc9Ky4zNyc7
WnIKJTpnOU1cNT40Z2FuPyM1bkxoQGctYy0rVyVCJixRaUEiSl06R2IhWFphMk46ITQ4VkAzaE40
JCZJYmtFN2JPJUBaZ29kTjtYYjBAYWNXVUs0XSxdZlFRbUVua01xa0QyQ1JvJiUKJS85NUJjcjRL
Z1kmI2M/M05cJFtSO0doMXFfcmJkMnBuUDBYaW47QSswbE0oJygtKF1DJVs7YmozXWIxV1RDRDpa
K2BISUZlTldlTmtrNUI2OldVZj9DRExqN0YnMyhYOVQwc0oKJTdDPUVXZV5xYTYzW1BXXVpdQW1B
KC1BailIPClgPjs0SWk5UWlaWjIpOXBFS0cyKFpHV1EsKVwnMzZiXzUvVE9AUVZtVTxVMW9vdFpX
TVIpcWdKRSFyOUMhNGNxLmZgM2EmXTgKJWdBTDchZzRFSlFOXUFaLSYpKldySXJMaiRYPnVlND9E
cy9uQFooIURKTW1bW01uP15GKj9zbjBGPSJgREhUOV0yNDdWOnJNVFxGOCZQRloncGsjSjVrOG9Z
XG9AQlFZUkUyajgKJTszY2lma3NYamM7LyM9YG5lWVQnMCJgKlo+Ok9UM3IwWW1GYTZLY0hYUmlU
ailPV2goUHRvam9mVTtXYzJuIU02ZSY1Rm5gcjshK11IQic0cSZTKnFjPnA+LWFOI0ZeXVdSWzoK
JSs7NERBJHJgXkFZQFRjKjtnNDs3KCZxPDQ8WnBuLmJDRj0vRVciWmRBIz5QVXAnJXVEVnAia29G
XWRnMF49UmctYz0nRUdsTjM6NUsxSlhhUkxVamNASFFPTWoqUCE2Pj1NJT0KJThVSkJNZTthb1Eu
LVJuOyxRJDVYKm1bP2tgPHNAXUU2QlZ1KFxvVGJTY2piPldDSXVQQzZYJSJEMDIiVC9aJEBMcEFI
ZFtqSSkiRWpVVl9yJVZzOTtgZzVKKTc0dCZ0W1pmME4KJThjKFFtTWxkS1wsMEVmIlc/aVhXWSov
VWpyYSNjZi1ubGBSV2ZwUlZeUnBSUWJwKiNfcisjJDo3c1txXlxdQWdWcWpMTXEsM1A5YlV1RGpG
XTxzUT9RO1I/PGlBNHVyXy07XDcKJXBgUF03QzktdCFgcjonY2MvXFNzJlgtT09iZC09RlA3Sjs8
VktMYz9nYTpdLmFHcmohYlU4IzRdXlRtNVlZayFLRGMnVW86XCxKZjlZR15wXjxHR2gkMyUxJD9V
VmRfNHNJTDEKJVVsI2FoPG1BMk0sVUQ6IV5Jako7MlM6YGJyVUkjKGZcSW1oNGhaZFBYSCI5Uz1T
N2ZQJj9FTjZISmJBUGY7biJQYkBuLiNEU0ZhdSVZcDYyUGUtQGlMWEs4aFRHKidQNG1uJCEKJTRd
S1JDWy5oMSUsXTg+LEY9aldBWDZRWDYqTVdcNUM4QGwqbV8jUCJZa0MtJiM/YkUpanFYOlZeKG1W
PmRMLjhAVE80JUpCRyVPIjErYmduNGNcZWlvJGJNb3FERUUsTExWRUcKJT49W0siVj9vXCc8Y2tB
QCtkdUwsXWVBP3NKJWpEWmhCR3JFYDdtNClOXissTTBKbiU+OiJSXiRJMjJEU1hRSickIyRSPkM/
c0EsZ11nKDM1YVZcZnFXUD0wZydsYThmOmwwUl8KJU9NSHVTJmVdR3A9U3JwYl0zRGZ0STtOPEhv
QkxmPyZCMjttQjtEV2pdaC1IYVAwY2VGcllbK3JyRl9lNTRVSUI8OV5BL0RpNytHdCVfZyotZjJe
dUJWcEd1SUMpVERWMj5COksKJW5DWlIhZFkzMCVbKFlyREZvLDlPay85RyFcTi1wVTRjSUpaXFJK
SnJeSSNJJiFwRVdMXXJqUDNAWUdGbkRDKTA9OGteK2ZRbGEtX2hNMzlMK0ZQZiVfUFYsLGgiR1pe
YjkyP3MKJWlBPCQ+LSwxTVhoVEkmXUIxREAzJG1kPCxSQHRsaGZHdT9XakQ7OHBYSTpkZGpBaixc
TGBxTCNMJlRRPFBHQjdKLmdvczcxISxWXmpOTUNJPXNGUiRrbCVJOTl1UE4qK1EvdEIKJWB1U3VH
UiJDYkNTU14hXmFcKlZpcD02Z28mTiFjVEw+PkM5V0ZkSyQ+Il5fL3EzLUBYMTp0RTgobiJhaT8w
bl51Wl9IUnVYIjQrKG9XZl5MOVVNUypYcTFmTHA/KmhgUUgibFcKJV8mIiE+UGtIaENMbitNQlNP
WjE+a1ZKNVQva3BERy43ND05KmVkaD9HVUk2TW8hWCIsXCYnOk0zYyNtcygvRTI0T20rOlpgPSku
RW5VPkYlNTlWP1I7XDNEXWAncSFbKCFTOSIKJT50OT5GJCs3QklyWCQ6TmljdGckTDF0WlljWWhu
TmNaSjYzaD1XNCtTOXVpQVonSStVUChfOGQlaGllKyZjXSxFW2FvTGpSMmRiPlpeSCVyLEVHbWZm
YS5PY1VqY0s9ZHBlZnUKJWltVzNySmpxVzhHcEkmWy9FMSUyUzopYyxHRmhOOUolQVBzZ1BkKCFq
Qzk9TFA7LXI1ZWtLZi04Oyp1cSEiYXVgZUhII2dbQVpmI1woTjgqMXFybyZRdE9MaC85Oi5fQyF0
SVYKJWUiVzxhckZsXmlTbFhGLVw5UGkhX1dpJ0JsIVBVI2JRZ3BfXilhc01jZShkQlNwLWw6TWdG
J2tnPyg7MkklcmZRJl84VkByKDE6cTg1c0hKN3MvT2Y4K2dKb081cVBDTVdDT0kKJVZZWzM1cSpB
S0NlNUJNRjgrQVteanNTIiQ4Ykg2SV5NWEVTRGk0Tl5IQls1SEBGOV1hajdlSCREZEFFNWwxRjRv
U1dGbihHVj9OUVUlVF0lVCZlbGY3YWhwX3AxWUowIyY+OUQKJSM1YCEzQW4nRkwkSWRWPEwoI1oo
TDFEQDFrS1NHaTdeSz5xW1YnTG5XRDMrNHIvMGZCXC5OdVNbOTc3WkUkS0Jicm9oL2M9TFxyVGdl
WHJrWDlfTkYxK3R1Q0dSUFkiNXFuUEMKJTBGM0QyPW1EWiQ3TlBeZk1gYGUyWER1b0ZdQyh1Qk1C
L0glcExIVTRZImlmJE9mSy1XQyc6Ri5fXSViTVooTTJEUSlAM0suIVQpakFKLjZlU29OJCQ2ckg6
VytpOzpzRyIscToKJW9ZbFlNZEZDYV5kaTpARClIXSRCLGojWlIzQ05BS1I0dWpOR3JAaDFcRDBe
QDpdTS9oJkFVRzBKVTFzOVtMJEcjNUw4S1ZEYlNlc1ZcUEEuQWdlcVBiXlROMWlYKVMmQ1doPC4K
JSd0PEEtI2lDTkBMbGVXdFBBZCVjZSw0V2NpaElmSlpGIURecSJJZTxmYlZwX0RKJDY6Q3Q+K2dp
QT1EPWs0LW8qR0grJmVnLjdKNzs+XFU3TlBkYzlwPSJcPjBOJCtCYlNTYF8KJSc+LVkhTiEtPzIm
PihMJVQtcCJLLmRDUG5SQjg4QGhrL1p1ZFlFVSxeOSstVFgxXzpdbExwbFpNXTI7bUJsYjRQXi5C
QmxfS3FPL1k3IzRnMnU2JzRuaVFNZFpVcl1FUlJMW14KJVcrSmdcbT1RK1pCYXF1YjElLEZkTD0s
JDhNVSxHNjU0QThWSiVEZUMobUk+NE1hWEl0aG8iYVhtOCVyI0BHZGdySk4xZnE7NCxYbHAvTVln
OEg/SWBQZVUrY25pLTVVKjsiTU8KJTRPJz9TO3A6aVFGW2c8T3BnallJQHJickJoJUtqclM9Sl4p
MmwkJVhbYTdBUVo4Njh1YGkoVCNrJmZFKEU2MGIjVTA3Qz9kXCNObVZkUCc5NV5BO3EqZVRzRmFR
R1E7cURpYC4KJUNTKi82SzlecUJmcFRhJz8zMWszIipWODU7Xj9xNSRiSzJXZ1ZgcCFCNzZsLyoo
O1NQPVpnKFJfPmNXMHI4bEZoXFhWSVskYlxGLipoNEldQjw/NmROQTZLIl1CUGwtLlRhTXAKJWFe
ITZdRGQ8X21qSk1VcFQiWypBQXNvLTtVSF4/LSNOc0ciKy5FLXVjWCRGV2ZEaiw8czVJRGJwaWxj
JG9ybkNDckdWWilZUHQ8OjhvRCRCPzRkNiZedEw8Sm5rVj9TYSFZZyQKJW9kdG5GVGhkLFs9LmQq
ZmlZUzJKLyo5L2VfWHMiVGIra2YqTmliaShxXUM2IjxlLHBoPDFnU1hkJlxAQzZaKCk4Omk2Ul5s
UGtJWGhpJT45aDVYaChFSlwtc1B0UnBNKVMpbm4KJWpKR0FZP1o4KyhQJD9uJS8/I1hFKzQ+b1tU
RHRTSDRiNUd0WCFcMzs3MXUsclUsWzFUIlFQWDg6VV1SJXJTM01MXkxHQGY2UFVASkZgTF9FUCJY
OXAoWlAzYzFiRlBhR3U8PVoKJVAwPlVHRnA9SmREJEBXQUpdKWIrblxWSkdTdEZWNFYqLTBDTTki
X2gwJy0hdUliSFwzPW9TIlBeWmoxclBjRkUnJillZz07I0JcVzU0aihKNTJ1SHFKcjstO19YYnQ7
P15KU04KJTEmbUgtYlAxWEllKGlFRXMkYFRobixNJmZycj9AOT4+MEdCQjhzanJJYitPJWsnWidW
VCI7UFBIb0YhLUVIQjhlZEA8RjVXcVMrOVNDQlljUFxCO3FtYi1vL141dS11UylnVWEKJTFdSWcu
XnFMUGVsUilDIjJuST49ZzRcb1xcZ0IvPyN1PkY5N3NwJllmdShLJURfbkZdV2FjZToxN2JeVUpZ
JitqXVdeX2k3bS1EKyZGLXMzRyJPX3RXJihGV0pOYjctbW8jaWIKJV1LdVJYNT90SylOPmxAQjpD
JCdSW3MzLEwqWk1XaDFDYTlCNy9ZRCQkW0xRJEs7Yl1sPFs4bCNJPkR0UUhpS3VbcEFGJSdLS1dc
PE0lZWVCWSgmK2xaR2FMSj03VnI3cmhyUUwKJW89ZiJAYFxaTSNqM0s0SUBzKkg8TzQ9Z2Q1Pz5n
c0RmJi9KaFUmWlEmT0xzVldQPFVzcDxVI2lgXHJoUDg+Um1DOTdKVUdANGhFUTxsM0xyPjsnXjZe
Y09pYEpsI3U/ZVs2MzgKJUNWWTpKUVtuZjNZQ0xXbDVQL1onTWA2Oys1QXBjXW89WjZhYCNrb1VK
I2MvJSxnWVFqbzwqWydFbkwtMSs5Si1YOGV0bkBXZVkyKlU5YkYlRltCcmBaZzJUUG1FYl8hSVF1
LTgKJURRUHJUKktnPW1dSyc8NzBDL0RVRnJgIyosNWJVQTM+cmIvVmBzYEpdMEJhYmRQLiMrPycq
W2krWyw7ODEtTCo/RXRRPV0mZiNLKiE2LSxKQkFCM3AzWmRAOWhxQzNCSENLRGsKJWI7Py0rUUNH
RTY1M1o5ZFNmZD1SZyxaYy8/aVRyZytoNFxCWW1nK1JNdFVjJmZ0LzVIczJfXENvdURLWURzImRq
bXRwZUlEIzwhZihBM2FKTzY7VCRjPjBvaCkiRWRKTl5AIm8KJUdpQjRyKUlHOjwoIzpbL2hfPCM8
OCQjQV1uZSlgLktLW1k/WTJOMUpPMT04L05xUy4/PFBxX0BMbT0jciV0Q2xqXGxKXEJyKipJZVYv
YmBxY01jc2NBbWRUI1wmTmlMbTItRWgKJWdaJTNjWWghWlNSZGo9L2MqJGppXFBRXTEiWFFjNTBC
PEE7WzhAPDNMMTlKUG04blYiXkZbOldpdHApQ15FRDdeW0puaTdubytSI1lRKzdQck4jczRxZ1xW
PGh1M0hQb3JuQUQKJUosOEtHcyNYTmQldEZPInJvV1pza0ouJmUwN05vJHM3bD48b0FfTSdlKTEy
TUosSlZecG02SyFYdVJMNkFtJUlvZz1TMSNZVi82P2MxVEVfbiUvPSVJZEI7dWs5J2BVczFTQCoK
JXBVcE9wclxtUChAYSVuLjQwM1EoZE8pNm9eT09qIWAkXiRhMTAuN0tFSF89LCxQQ29uaHUwU1wl
Q1o5MjA9aSlSYmtUZkk/TjlqMF48S2w5WjlAdEBWRTtjT2M/UXEtVmI7QFQKJXBvYVY0b3NvUUol
SClISnJWdWxBaHUzNDRyOlEtTnJHVls5RSJDUEM6ZF4lbXJyXFs+MEZlJzZhU2IzSy9gaVZDSXNn
UzZJNCU1Sk1lLHJgWi9NOihybGBLc0BgNFFTcnNTQUEKJUVYdDBWZ3AscHVtJWVlYydGQ0xVMkgw
LD9PZnJdV28kSElMW1FIREFbcipzKDc5QzgvWjgtMXQ2QlUkIXJrQzVlITBcUjNKNmtDdU0hc2hE
RCdaV2RZTVFQOWYqNmtYQV9RQSQKJVRIWk5WSWVMdGBaVUg9cV5IQi47cFxyIWdCLE84P1NQc0xA
WmxDQChsUmZrbjVOQmQiSTVrUys/IWEuUCdlYksraTJMITlMNldRTlhkOXE9aGhqPDlWXV5IXzlE
Ok10bSRIPjUKJVs/JWgxcVFsUy1OImIlXE5KdWM2ZjcpKmtBSEMyNlQ+Zy1VMVk3bGsnLWgpUSZP
UCNeYCh1YSksK1pBSCcuQ1xAUD1aaXQpXCw/Q1ZkZ24hLHNVdCwqXWBnXTxfJS9YalwrPzMKJWE6
VzU5YT9JQjVyYlE+IkRXdG9SYm1EO2hYVjZVNE9zLk1UNEU8NmhxRUsyODcKJUNoUD0jTFprIkc3
M2UvQjpaQD45bmlPYm89R1FEJDk6c0oiWS0oImluN2dqY0BsWUZGP1s1RipVX0xQJVptTSI9XiIi
V0E1cEBSYk1IPiVaOyZZdTkqX2s5OTtwImEuJTBEdDsKJSw0MV5uaEMrLmpGRCZlWU81QGxuWXM/
Y0M2RUluQV81O2NvV1ouN1JAUlFSb1NBOTkmRVIyclsnQE08LEEpM3JJW0xOVDJPbyVrcT5LRF5s
QE1udEsiLGtoaEVKVjBDRWkqayUKJTg6ITI1SVJ0YUZtYWMtU10jSmtkRChVOFI2TTQsNSNTQyFU
N01HL0U0TjhZJEs7WTxQQVVaaytaTkA8MThuR0hgX15MLGk3OVozJTtoOnFVWjQ1TDpUO1dsLWYy
Ik5EPjVLSicKJSFELGRBWjc4IURDO3BzailhL25YNnJrLUNRcikpaFU3MD1NaSMkITlwZzo5bG5W
U2ZZXyg5N09DY0tAOVBHITxFUjxSZT1XNEUjTVtsXVczNlo1IlI3a0clTSNyM3VUYy5RIXIKJT1s
RU5ULylbK29KZzlOQCRwOEwrVHRKRGtZOm4uRWxcZVBMVjkkLVUtXEhFSDdwalcjZmphO2Q9Yy5y
NiEtWFQlOmtkPDNVRldWdGwqLTZNW1BMVjpJJjIwdVcuY2AncXVbYz4KJT1eUFo7TSxcTCtvL3Aw
SUw1P3N0Z1VzODtAamMlJExcbkhkYXA/WTtwdE5KLE9vZihkTl51ZCQzWS4qNzhYTmdlbCdPKktY
ITBoTHA0R2ZrPy1uPE04ZCQrImQoTTlsWFdOIjEKJSZ1ZiRWRj1TP046MiYzWU9gQWQlcjh1VzlU
UVkrcGEpPzBSWjssQkxwLisoZixyckI3I2FyKF5ZWUBSTEsicjRzVlJBNDYuJzRxJy4hPkNHS2Bz
Sm1EXnRXNzZVO0YsJ25IdWAKJW1lIyc0OiEtLDwlUkJcKGE6KzRcT2deWSpaOWpkKGJwRXV1XD9t
P0gzZkovVUpIVkVNPF9IO1JRWmZ1cCYzYzZHQmRTNDc2SGN0U0JMSGUxNVdXRVo4Mi9JVytIZGtt
ZnRYOXUKJVdgQy4yO2xwNzlGZGBBaEwtZDpNQDohOE1WUExeMltkPk9fYHFZdXFhPUdhZz8sV3Ei
XGtoVXEnMy1raGQ/MkxWTFlzb0xuJUMqN2RPNGRsPUNOT0lKbFhdcjNVZVZoOlQuVV8KJVpddTon
VHVmWStfNVY1UiQ8Ri0uV0tRcUBmIjYoRC1oOSdsSVlBV1dAZE8vQl9ZYVU1O0RfI19eXldpIzdd
Tk5sZHI0dEtVS2kmNmRqMkpnWGxmdWIrbSUpZEVxJ2MwOGt1N20KJU4tUVpEVT4wMChiVjorWlVY
TFxnZHRYIiVWI1ohUy9nUjRfYlVkaShMciNiK11jWlQyLSRdaEdRQls9YFYsdSNgUjdOKF1YSXMr
S25AZ1NlVEZYOnBmLj5RTSU8cSxraGpbNWgKJWhDJllnUWhYNSwiW3UyTVpHK1IhI2BKZCNodV00
bzIjVDRWVEcrQmw7ZjJuRywpPFQ2TDloYXBAdEUkbFJqXzJDamhSUjQiNSghISZFTDZxOlYjaGQi
X0lwbCgrIzhjSjY4TTMKJVZdUCYyJCtrRDQqTWFyZVUjbktkJFFLXFM9I2ArMjhobFAsO0wzLGEm
QG4zaEl1M2dtVjwsRldiNWNyLjFQckY7Z15RXU9QTmsickM7PHVWMC8xQ0MnQFheays1ayYsRi4q
OVoKJUtnNWExTFUnKm4ja1RIJCp0cmlWb0YlTWVXJXEtKThyUmZDKyckN0hdTzMwOCRrLTNQIz5H
W0EwRz1pJGRzLkdoOiI6JFI/R007VVZJOzJFMUhfXm9RNWhkIlliOCVBNz1UIjEKJUZOZiJFbHE0
c0VLNS9QP0giamZCLG5zTXBoJkRIVUAzXT45W0QlIXFeX1o3RCg5OF8mXkRGWkAkISFHXTUsVXUh
bz9saVZlS2VyKVdfWWFfXFJEXDo5MVFTaSttTy9jLFQtJzoKJTQja1JEYjFwVS4pMicmRigvQnJb
M1NYWlFJSE4rXC9lXz9XVilDYXI6KGZXJEtIVS9AaFRyNkMhciNIKjkmI19NIyQpKklMUz89OzI/
YyhbPFVSbTxnUkVqLFNTaG1KSSZdTykKJVY1OnFrJzdiUE5BRThaWEctQiJSMkFfb3E8b0ovOWJa
VyhlcGgrW2VkREotTUZIPnBST1QpPUFiV2NfIkdUKVooRSUucW45XithWlkkREhCV143VG9yalhX
TFlHSitOOyhRL10KJW9GRzFsJF8tVmUjXCxBRTo1dWs0bWw4RXBCN10saWhZRV89XiVZWW1JRShy
XiM1Zmc7SCVIQic8Vi86OTFjI0cuKlxtckg4UCIsJFhgLHJccjIlSXVIanItUShZTitmUDRrRU8K
JV1zLDRKXChpOG87YiFwPl0kSk1fSXJFSWlEVCxzdFc6NFBjb3JNUnNFI2koWDxhOGlgUjZsUCFW
Ji5hcHFlUW0sWGVBVW8kV0pCNz0sZ0czKi1AT2hfJiVZaDIrOi1jUlRkYlwKJTZNVEFQJF81W21R
XipyTkZCOVYzIVlWSiErcElVT2okRjxiOjxkZ2g2YGJMcStuRzFTO1lTVlw/XSoiJEdNbF8iPjVB
ciFaT1k5ZiUuay89YWNiVWguNyJmY1U6WV84SisiPEEKJVZEdFsuOU5iRXVKSnMkPlcncmVDMUl0
T0VmTzVFaVFnQGxrU2BmZ08nZWRHLSk+SUs6Pm02XS9cPkgiUTRPPEoiQiFbcEMhdDgmJl5QJSM9
YV48YFM+UFpdc2pkXS07UG46SHUKJVV1V1xJL01OV19PcD00YVZvK04qMCR0cjopJDdYNCchbyVP
ckNaalROXU9ETD4ldWVPWEBhSVFALkEqPmtkXlhbPCJIZXVzLD43RmtPMlwzJVNQNmhQc2o3P1dj
WlInXWZJaFcKJWN1Wj9gLVpbOzZpaUVkYkRXX288RGYvN1ctRUl0Yzg/SUwiTEw+MEwxIzRSaWdV
NVg2SzMnJidYJzcmTzhTJnUuND5uXyFwVXFpJWJYdE05ZDU2S0ErJmFIbUMrNGxHJSZCdEwKJTg5
OD5wSDg5XjVYbmZAUUc3PkUoSCshIi1BVEUrZ2U0cEpOLkJlODpTRV1ybmYxPFUiQjY8Q0c5UGA8
Xz1WayJqSnQ8NywxRWtuYVc/amhPL091JjFgZy8yRjltO10xOjQ3WVgKJSVAX0I8ZUo6LDUwWDVP
TVUiVU1HJkd1WiJyQEI/VFE9T15hWWwrPi5bMColNWQ5cz05cXVjJSY+KkBNIWw9LydzL3I+NFgz
NC5sbENpaHVmcEdWdGQ5SCVpO1IxbDNxUC9SYy8KJTMiUkAiM0FsNV9MdTA8MUUnUSNSUitwQ25p
ZigkOmVlSiIuRD1PMkpkZC9BbUwhO1dSPVRCNFkvNHMxdGNCb0cmLUpPNk83SU1RVXE0dDxzOXAq
Xk4oOFRZKEtpOitUNGk8I1cKJU1WaiVoIVM4LFZaR0I4TyNnbTQxPGVpLF04LTlxdVxMVV9WIl9g
MjhnZ1xUUj0jK3NKLShwMVQlMCQqM186NUFgXWdeKC07KV5tM0tJUnVgIjIoKWExUmYwU0UzJCY3
OFNUNkYKJSRJWk9JLiphNyQwXDspIS1dNXFrTTBKZEUoYmZvMWlAMixOTlluOlgrdVVQWC5cSUld
NldccidIRURMNlUicXNHXk8vITZVP1ZOQDpdKC43KEo4cUYtJVhTYmM3TEJuXT1zPlEKJTxIRGxo
KWlFIUs7Sy5rTFhFcmwjWiksQUZMaW0qPkchaVA2XXN0LVonWkhsUUZeLFApXztVQ25VUiVdUDZI
TmcnSXVqbDJSQTptNSsjJF1wZW0+ZyM/J1kyOjAmQSI8STgqMlkKJVQ+J2RGPUomLFUwRjIlLSFk
PChJVHBeXGckRlUqVEUsSTg8Q1wtUDErWVI4bilBaT9WUy0jbDMkLHRlRHFpKC8oIV9SRF4mUWtv
LVAvT2o2VHIrZkBvYml1YStMdCw4bjYmVHMKJTMmYE11MFgqNWdLQUkmUlgwSTsibGQhRl1UWk5A
TUlDUCRGSWNkY1s0c1FQMkNEcnE+cj5LLT1DVmJmcSRvOjUwT0cnVG5RSigkRSM2LE1waHI5PHBB
RD1aOkJSJ1tgOCk6TCQKJVhDNzJzSl9wbCljUjpaUmtxajlZNkVzZChjW19aXlBLZF5SMj9vQEtP
YEkuIjlPYXEhIyNLQ2NXXiosU2JTR1IlYGBuQExxbjpeTjc0Jzo0RlMxU0lZMTpkKGpsVSQsVi06
ZG0KJWMqXFMhIjtFOlshSEgjI0pJXisjZl5dVV8ySydaLTJGcFNdQmoma2pXXyVkTnJja2E9T1k3
LCZULjRrK2ZTVCo3VzpAXTdncldUQGxTdXBUb1ZjQEFFaUVBMG5uUEAoYV47cCwKJWlpOWZKaUhL
XHNQdTFwSjZlMCVmImtldSkxL25iS2FcQSpuKjJmR0IjNUxCaihkNks0ITpTSlFBOC0iXEhsPVQm
UlAicFkjUltsT2MhWCMqNzpobVpMXWguU2otaCwkKVJnT20KJWlWM1xjKGZpbnQ0Q00oIUksaD9G
NHNPdUtqPGk+J1g8aCYvQDk9QV5MYUJbUExfUEVNMkEhcjlnREtyVS4/Tz42R21acD1GW1YvODll
LiFBIj8uUldxNVdLOUUlV1gmJSxxZGcKJTExPkkwOTlfcm5WOFc0MToiaFNuJCxhOSojT09FOi8y
NGQ+MXA0JmUmNVY8NEllaCxqUyxJNDY5NjJKMjlYXDdwRCFdITYpZSV1SzBfUURfU1xxOTVmR1Vy
b09za29LOTA8QSQKJSI9MFhpbzowRz5VOz9sJj1OJ1ZyJFI4NFZSPChOLFlERlAzWCEkRCFMZlFl
cGFLKGJvQiVjLnUzcjpmUztSXVI/VClnLTolLjJXISFJRl5nPkRtbyFWbTZSVDYjLSg1N04ibyUK
JSkiQy9HM289KlAkYTNVbHA/RyxHO2wqN0FbclZtTVZOZGsnLlZuXmQoaVouLiNKWz9CajBRP1kj
cClncGY9RCtiTT5rKi06aFMyKDxbY3RAJ3QkPyJOY15OTkxmQkFbNWJmS1sKJSU9OE9lS0BlR14y
OzBHPS4tamFcRFRMdFJoYVJrJmdYQl1XMGIoOS1Zb1YzMjcqUywuZXNeNkk5L3JaWFJLWG9IU3BF
PWApbD4yNVpBKVpZRV8vU2I7JlVCbVNMUyhWQVErUSEKJSwsI0AoKjs0JS9NVkA2UkRzMUslJCRR
JVMmYk9PMmhBWV9vSEJWPzRHPVsvIkxOUzUoM15TPEFbbTQiazFvWnBTX208P0FWQzNzP2NINDNS
XF5CVjs8TXIsSkw3M2ZzVFlIRC0KJVJRY2RsamQ5WTxoS15vTDBGLWIjMlpbQEZAN0E2WE07XSNU
bF0wdUZCN1JDVmI/XjdocXRjXl5Sc2RiMGRQc1VLMCE3SnVWWSxZc0toLCcmJVRSNVRickZGWChZ
PF9nLyw0UyoKJSU6WkRgOiFDXGNsOkUrMDRedFNlNztzRzAhNTg1RGBeaE4hQFEjQ0pKUF1eXz82
L2BsJl9gT0wuV3Q4JUJKX11NL0w1W01eQFRvaFQrSS1kaEAkZjNfUCxJTEdAYktmRU1MWGoKJSpm
PlJSVTRma29BXz8xOVU3S0RjN1xPSmFtU1ZpO0JxclQ6VlRrKUVYYDQvYTNUIVdrP0M2OCE7bzA4
OVkrXFVTPFxKL2o8cFB0P1JcPUQkSGoySSZRWlFRPj00LW5YaFYuPmoKJUNJTHVlX1tuaVwlUEdV
QEI4UjFdXEJhb2ZLTkdpZCI6KThvcnFnb2NFQ05zK1lgRV4vJ0hxJGRda0ZzRTJxdClXTGptVnUh
am9TLTItcFtMbSpKSjBSYmpsM0cjWCUqV2g/a2sKJVI6WmYtVFskUmFnJ3QwRUx1SVokSVAnYz1L
biglRi8+QS5Wb2EhJCFARmllOz5hKVZbQF4lMyxUKDpkcEFzQEAoQS03cllbaSZbPVtFR2NHRTBV
SWpyNkA7LVEiNTUyUnJSdFoKJSNTX0QjQ0ppcjk4Q3J1RltFPEVIRUpBclZsJm1HQ2M5Tml0ZDBi
ImxodGJxcioyXFNFJDJWVGssOjJrOWs9JEtfbTI/b0phbm8xTChwa2E9Rmo/czlaaENrRF1yVGo7
K045Li0KJUlrOyFfUDNjRDBnKUwsZDMpTjhNKS5SXCgrbi1uYj8iO2hgTG88cnRPIS1icV9gXUVe
IjZBJTtTV1AxcEhNNU8raEU5Ilg3KXU8WzwoUVNpOVZQIjw4Xlhmc3AhW2s9VWpdZlAKJVBuVHNu
TmRJSGc8MFgwZlpMYUExMSVudTdLJlQkM2srJ0tvRmJWa3M1VikuQ2YtLnJkNHNVSytpV2lMUWsi
N0tiQDl1YiE2YFZEQF5dTjQ6PXNqKHBjVTlwYFpOYD5zL05XYFIKJV1sX0ZdViEma2doazwwRl9i
MylKNkMsQU4nRjE4KGNtTHFkM2dja080QHJxdU9EUT9dM1slazwkQnU2Tl1DJEUmcVxaZkQsSVgr
WkRmWTAkSCJvPTEsLlVpJjA4NmcrLkhtaysKJTRkRmFxKDk5ZVtKVVVxUTJbJD5iRWt0QXQuUE1T
J0xTNW4uNkJgVExrM2gzNzElKShuRl81LUZBPSxPVkxTODZEJUtWb3RCIW5ZQWBDKypxWiVzX14k
JFF1UCFrTzhTZ1JtS1cKJT5CTVZRJT1ARTo6UDhfSyIsVFM0TCRPXFYjNG1bPzg2Y1FGK0c1TSZj
IXAuWzgrW2F0UCRJNDEpQkF1cy5VbWokYFxEWlJcOi1tRG5LSCUtUTdJXTxQdUxSRlpTY0ghRSdl
LG0KJS1WbSlDNVVyRVRfP1MtbEVzPS4oJUlTYkRMYS9IXW85ZFJeWCkhWDhuV0NEQVFQbUtEOzVb
XCYubVxYcFI9TjslLkM3QmQiKjBdJVMpYjROPEhXIWZiUj9JN1lLX0NaVWVjWGAKJTNIJnE9XVdC
P0AsZjI3XzhQLUhbMCYtYDRVVGIlMlBVUjQ7KjYiS2RRa2lrPCdIKDwwQmRGSyxmUVY3YTc2cWhO
KyFaa2Q2SEhRM2giYU8nS1YvZy5MRHI7KSxWayFaRERsTEEKJS8mNmRaP2tgaVI2Qm07IkZzKGJX
JlVCKkwjYDA0cy9qXTxKSDlFN2JVVS8rWD4vOkwlIm5HOyckbnFnPzJERl9UPltKaC9bUVY8dClA
QDxwUzZbZGJIZ0BaZylzWE1qZEwoPmwKJWg6Ki5VNyhldDhWQDJAXVB0bkFqYl0lR0U1QVpUXDxw
S2pQTEA+UF1DaFszKUckWTthRT5LYzphLE9eZS80Slc/OkM+czE1bmQwMEpvOltxNkBIJ0BFITMi
UyhtZFhUZFBQSjoKJT0wNnIlTCIjUiZrMDY8PFpJbkFAPVYjXWYzQyQscmAqI29OUzxvR2Q8Tz86
TUVITVZKInIyOygvREJrPFZza0hZbSdUXS9rYnE6MTxpXE8rb1tXUG4vZGQqIj1hYlovJ3V1VykK
JVFONihjUC5dYWlFQW5JU2gtSE0lUWlqY0piJlQucT9IKzArPDE9LCo0MDxeNWFqWEpVaEkrPjlH
Ny9LISV1REdKQjBqQyYsQDMyJTtPR14hQSY1anVVaFJNWHI8bTs7KyYqMGQKJUEza1knUUlDTTNG
JEZOWjlHP0E0Nm0ndU5GTyZub1pTJ2guZEBJMy9UWnArSFE+LEwyR1x1JjowNz4lOTlDREw+YS1a
K0ZIXG9vK2J0OWxva3NdQ3VmUEchMEA0PUhPL2ViY3AKJTtmYmoyXV5nQ2FZZlcnKUBVPGVhTTlJ
I0VCTW5VOD1RI1pYXVU9MTlOJG1zRCJQP1AxWklVSFBpITlPT1A/VkNoTGU5Sz1xOGQ0IzxcX2x1
UFlndCc3QClVclguLXApP1pjaWUKJS8pXWpeMiJXWSQhMk0tOD5PN0hGZlJzJ1c9J3VbQ1NAb2lX
ZW5nNVgyWEBLL0lXIzMoXT9RU21CQCcnOmBNUVpzVWFpQVpNdV91ZS4nN188VUxuWmhGOVojSVd0
JGw7ZzovMXIKJUZeUW1HTmlhWDtqPj4kbFcoVyNLJ01IT01TXWgvcy5VNGBsYWVAXW0vNCM+NSgz
SkhcWjoxZStVNkZqUD9nTSE+SmZZcDRRL1l1PkM8OEdaJj9CYDMwczZzNzlLKiVMIm9vbVUKJS9p
cTc2X3ItPStEKEFyOls1cStsYmFtMmtdSm8vWUswQm1OLU01JmxCNEhwbDQsSE5OV2xQby1oU2B0
Wlc2XFZiImhQWW5pJyhWMDtSO2AiIVAxI2RNV20wKGUmJ0otXTg0J14KJVZJSilNVjFBMCMvXT5u
R1pccUEkbDdTZ0NJXjcrJS03PHRmaktdYGVcdVpZKTAnV01CWkk2ITtxZChsKHFKNDsyNEhlNHMi
TD8xMyw+LE9OLVZzNkUqP0UpaiF1QT5KKmhpUFkKJTF0KGdMPzVyUmc3REQmSiVZazlPQiNJXjQq
J0wpaD5LIyYzR1UqOEpubz9TIVpBVj4+XkJaMHJHbFQoVk1cL0xiXG9dZT0kQl1YLixmak9SKUMh
RGFfKDxTQTZJIS5wXiw1Ul4KJSQnJEYkZixRc0BmVDI3ZzYiQGAmSzxbMUNDUTU4LU0tZWw8R2Ft
Ukg/IVskJWNEYDRsSkNoQDgzM0cwYVcwLDdwPHUka19qL1RDTW0iJnMsWVsrKlguRkMvWjltYzMo
aF1KKicKJWBlXiNrODg1QChTcSlAOVhCQVpcUlBgSChSOlpRdFZeYWFfOThcUyRRPVBvPFI0ZFtS
IWEuJl9lUCUodEBzaHRYMEhlNldJQlspYFxXXkUiSllCZC9YXlo2IT0sZysnUThMJzcKJWIrJyxr
YDZnSnBRP1xCMmMrOGNGIk5mMi0vYl1rITdCbWJpLG9gaTQhXFxMa1lkSDFPXE82bTpwX0AoRys7
K2ojPCI7Jl9nQHIhUERcOT41VClyRD05USJcVStoVG4wX1wkLlwKJUo1SjgnZClAQzQsclZDTCU+
Z05kNlA/JjdXTk5HNyZ1Y2BYTU8lXTFKJyExYSsjLkw/I1JtYW0qQF0kMj1QQChQVlo6R1EzSz5n
Z3ElZEcqRicjODFfNDF1ZUNZU0tOal1oLXUKJUtoOF8tMkE1Zk9QLFpDWVk8IzBWPDMkP1ttT1En
b3ErPEJMKipPQHEnQ1BeR0FsZFpOYSMsbEJhOm43I1teVjpuckZsTUwoY0ZYUC8jKHNgQC5NT2As
alZuS2QwSlw1YDctLUIKJWs+YydTZTVlQi0mbFM+al9qWT9oMFhuaCpOZFdMRmwhaDFMYlQxTDJg
Uk5CUyVuL0MoXy1SWDYwJHE7XUcrJlBXYTtwYikrMV5VXlUxRVh0YjgkJ3JBL2VcaU41RCVeXnRo
cykKJUtuUUxmIytnRUo8VEZRQlxKVU88KD4wM1suRGpebC9IcWMiSVIyUS1xcUEtXDU1UDNoQy1z
ZV9sST1UVlhPb2NLJTR1TD9KdCF1Uj5oWmJTQ09zWC5MOHFWI0FdcHJFQkBgPlcKJUMyb0grLk5y
bGZkPmMjbS43VzQiUyVBWUlgP2pndVVvSU1MKD8lPDg3S2A7dSc6RDFTTW9vTU9jWWtGX2pzQlFd
QUFrKXE8RDJbSm1OQVlRQlFiKUdlZnBEWzNrcE04VTouLk8KJU9uVSRjVCUzP14uRGhaIUNqW3JB
czUnaGE5PDJITUAwKkJtI04nQWslakJ0ZDRPM0xaPjE+P0A+OyNtJjMvT1lxQXNKOlFCUzEkPTtO
Iys1RShJQFNJQydNTTs+cy9BKlhoVWEKJUAmVkY/I2tFYjRtSnQqKDdOTG9JYD4hczYkLiRTZCdt
R0NbTU1oJmUsWkJHYlk9c1M9KXBiX1U3VVghaClTcG0nL2VXaD1bN2tlKiVAPGNbZlJ1UDI0XVVC
SDIwI0BCSlJQRkIKJUxBYnAsTElVbS4uL2ArYDdXXTNiXVsvckk+OkpiZzsrWD4hRi08SGNTXGEs
PCdXJ2Q2OFs1MjVob3NSYi1tRTQkNSFDK0NOTE1PL0tkKj83NS11aCFLUnA3XFE5YihgJlE5LjoK
JUBNc25EIlw2RSJRQyMiTnEnZ1cxO0gxOjNHXTBYKmgpZ0sxJT40OURbQXEoMEIvIj9hPEFmZ048
Zm90PDc6Q2cnWF1KQDRwU0NsLlorPF5bVGcjO3AtdCpdWisha2NvWThLcUYKJUFMQ1E2NnI/K1FW
cEQjJGs7XCxDNlpSQ2A/amFOKCReTytNLyUvSzA9STRUV00oSWFoLCM4V1hwOGtwKEJhb1NPJmsu
ZFpLIVBhcG8yIiY9Y1YnaWpHaFlxKGBIR3NhXyU4TzAKJU1uYk8kIUdcQ0BhXSY8WCUyRGNcbkRO
NydaKmQ5Lk40b0ItOU5PTS5nbUVyN2BYQVouJDxDJjlcQyFpVEQuLkUrZlZsOmJcOEE4JChVWF08
ZV9QUEc9Py84QWVmOG47aHQrNycKJSQtOHRccSEzZWBcVE86YyMtXXBfM3RsJGJXNkNJPjdkMWki
PEJHWW9gImUwbi9obFcnM1dUUE0kIyU4ITJHJ3NGZWZkJ1FnbkdzbVdvUj4hMko7PXE4UFpdKF9b
Oi0hN01Dby8KJSE9LCZMLD1IczsqVkpaRF42WVx0S2BFaig2aSYkRCpAJE10OFxeVT48VG85KVk7
STg8S11XWDgzQ0g/PUNzRy1HUFMycUM6OVJKdSQpMEZVUT8lamQrIWNOQCVdMiliSU5vXT8KJSQz
KTFvWiE1OE9YXWU1SVYkVUtqViljbm4qZSsnWjAtZ2NyN2g1ImxLIlolPmltcFV0Kiw2by41SDhF
bV0hPD9PLlVNSURVVm48XC9IWmJWRCMuUElLSEljY2YmZShWPlJlVCYKJS5hTnNPal8mSGM0PSdq
Km1zUStvVGg3ME5uYjleY0pONi88WWVZKFhOKSpnUFtAbUhLclc/NydURzVmZTtWKHJBN3BcdVgk
V11SJV8wdUxvJ2Y4cTpMWk8/K244UD9CYW1na0YKJSIsV1oqZ0A3RjhCJyo6R14yb1NgQ0YkP01c
WVRJcmN0V3VDXDhnSGxwPydBJ11kPyE8MF5ZYVlYaCYoPT9iWV09WiNLKHVLVV5JJlZjJltpQUFR
MFxNaUtMU2JKZXJuYmlTOEcKJSovRF85TFUnSXBDZGhscy9wKlBRTS8wTC5VSkhyWitXU1VnTXBu
VzRLS2clRkkrZWNuWzpsIVFJUUNxLWFIPVNYMitvXE9gbStIdF4oPmZRK0AhKWNQJHE+a3E4ZUYy
OicmLiMKJTtdLjAhKXEpaic/LFNxW0ZWKDtUWFRkOHVwYWprdUwuUUAjLzgtVFlPSzFGRUBwREQo
IkQwclE0UDs9cWNhZD9KJGBbV2tpZ21zQj1GbihHP00qZUs/T1FjVTxlMXRFNSIlMCEKJWJVXV1i
aVQ6SkRpaEB0NTlmMixDVEwtKi9KN2Q4KG5VXzAqbFZgS1ddUENaJ3FGY19bW2UkRmJPY0NKcmcs
X2ZJMnVvaGwmYTp1KVxuUkg1cVVAPilRMyhdViEvVWVZXnRsal4KJUkvXVdoIl5FWUZOL0YjO1ZH
MHQhIjg6X0tnJmchT1VLTjkmJDxlMyZARVVXXCJxOzBlZk9XVTBBbzROTS8iRGw/MHM5YG9lbjVm
b1s3Q0U4QGdGRy9QJWEhJCg3SXRwPjdIZE0KJSdqXT1DVmEqXjFkRiQuSEhtdWlrPFBDOmAnaS9S
UUgxMENPJlJaRl9ibWxFOEZEMkEvSTtWIiY1VVJXI0ZQJTo4PiUkYGIuSFZsLGxyZCxIW2pjJlhs
cSRTTydrXEBLZCZLXzkKJV9tNSk8LChwXyNcKCM3blwuTmo0NyZGWEM2OjhTdUllZktlNzJEZCRp
PC5BcG1DYihmJFVsYi0xalo1LSoqbFxvTUNxMF1WNWBWPkxGYmltVEpdYCNDY1FPYmBuLiJYNEEx
bkoKJUchXV9KJTh0I0c5KkZHaWFmQyhaZChEP19fUVFxRmNyQzA3KWUjMGZTanBwRFBkbkFSYipW
KzQ7XiZfTDZvWGtDLSRhUkkwYEJ1TGZcbTBTby1RVkxRSzFBY0NpKEphcGQlTUgKJVEsPWAkYG11
cT5qJWlYLFJtbERQZWhZRUhqSUVabSc2YzVwYDloI19bQzZGSUEvdWs2TGU6QEhST0wtYlpROFZt
UGwzbUghTUNjOChmTGtiYDY1Vz4tLlh1PVhoYHImRCMsXU4KJSk4ZEZNPyJmL0RbOVJVYzE4dXVj
NjRFQ2hNLHQyJkM4cmFZOiQ6aHNPUztrSlteVlJCQTA3Yz1pSFFhNyhtQF8oRS1ydEItWjZ1IXAz
ajhxPThDcVxKdEBSJGc3ViU6KDROXlUKJUBXRCJUNFtxaVBIUyRzVGpOR1NuL0whKGUwbkFwJThD
PiV0LUJLJVJCInM4N0JgTkVSXidENlI8WlYocko6UUA+b2FPUmMpMC5EXkhTQW1ATkhNWWhBIk9x
Z0ZqYD5hRUFkLioKJSxqImBuO2ZPb1U3QCZdcHJzQV43JyctN0dlPzguOTtmZWwhcUZfPylGT2Bm
O1QlUzlbOydca1FjIm9sRWpOVlFXJmxBdGFuWXBkdClxVzYsbCVVMnI/OyhzRCJLIi4mMltcTFEK
JU1pPTwoUCpUZ1xOREFSa1w2LVYhOmthOS1gXnNJa1tKMFUnOCctYjIwZVZOUCxjIkc3bDVnPDtF
LmRRaiZwSiEhO01mV3NhRiVNbkhjOW9oIWBBNG9SVzJpO0dWLmxdTikkb04KJVJdSkJNK1oqbCU4
LWw+ZTVhLiQsUEpLIU8lJnRxVC9tclVASDk9InJkPCpuKEg+XmouS3NMUHA9K19LIz1ma1dUakQ6
Yk1KaFZtS0tjOUEsPlNOPSguXydPJlBjP0Ancm82MScKJS0hZDZATHMuJF9Ub18kdC5eNl9RYlJ0
W2MnRSxLbilAQCpiMjYudT42PyRSLyg/IyE9XihlTk5vJ2lXMmRDV0heZWZNXE4sWU48Y201cjdr
UiwmPlNRaXA4JGAlMGZ1JkNIV10KJUZOMyxLIVR1amMrWUlPXUsvTUI0NiNibiEmckBrN09APXM9
VV9aUCJjPT8pZm5uVW1aNjhxVSg1Mzw1YDlITjQuKUZsSTxvIVM2bShKR1hgU0QrYTU7ZjYsaDA+
WldPOGROOFcKJTowaTM4YC49P2A2aytrYyZKVkJ1Ym8hbiIqO241PWpFPWZPYE5GIlgjbyNaZ2A7
VixBJW0ycUVSYTdSSC47OU5aWzgyV1szMzBlPzNWVykhP10vMk05Z1c0LiZVUkZkNlguNjoKJVox
YT4mKnVHVC9RUFhYTlFAOi1BRigiNDM8Z0c9RyVbJyk/SzFLLiguMjd1SilKW1lRV2ckbWpWQyNb
ZmdjRCEwUk4lIilULkUtazg1M1xGV1o1Qzw2RHAzcWUhR0cqQltzSi8KJTthMlVDWyRcMENHaF1Q
WilxZGVfXDs9QVxYWjhfYWclKkoxYS5VNHNDUTVBKWNqV3V1TkQkOUU5YEA6czFbMW91I2RANmAr
WEVXZFFYNE83YDk3cVIhPlckKzlpbTdILFhLZ2IKJTUzP1c4WC86Sm5YO1ZBLldGXTg5JD9WSy4y
Ly8oOUZNZUczWFI7a1lhakY5TlU6ciRvXGNoXSxVMzVtIT9RI2A6TCo7MXJOPyJyUGY+S3Nya045
PDhfQEpwJy9ka3FtbDchZ2QKJSRCUEhlQTw5P3BJKjouJDxEQ0ldW1s/MiRsKEtAVDFzZGh0Nj9Y
NTk9USJYUEBEa0pgMFA7JjNBRClaMDs6OywtRjAqJUo5OHAtJG9iZzY8TVVzREBoLyFnKD5YYl5b
I11hTmcKJSVdS0BzTm5gV3M2KCtfODpcZD5hV1gyWF4zLUguOj5vczw6Py1LbVZPPl8xdFdrdGdW
P2ttMiJXRlQpIU1VWTxjVkxuQGYyWl45JTZyIW4oPCRMOVlBNjc6dUJGPGZvUEtxRnIKJU5Jamtg
XThkX0dPV20hS1YuUXRCNkFAJFxJSyNKNzdCM0FOJWJZJ0VnNF5lcWchcSE2Zkc5UFwhSXFgQzRz
R1FgKkMuaFwlInJGTWphUTlNVFVqLkMwPzFZTk09Ij9dMjRHbG4KJSgkKj1ATkNmN1NtUTpyQiJb
V1VrSDw+S0taK24tOkEscUdebmdSb1E7Y09SQEIyYDNRQ0YtdUxPSWAmM2xFMERdWmZMLz8mME86
TD49PCpQTlpWZTZAaDlXPiJTOSdpPTBgK2cKJVdiKkYsKV9gaWhOXE5FMDpIKWdwJ0w9RC0uUmQi
KT4ocEVeXz5SWiguUj9yQGtiamJjcSwzVFtkUTVNdEdaOyMqOHBwbDouZkNhW2NHSmElcWZuPWNS
aERzQmhKRUcxSlJLQ20KJSYtMi46MFkjPXBTbSFGOlFWc0FQbzkjX0RHKUsrPFgmS3JEQykqXl88
TC5mcU5COj9CalM7MFcnVkRCNCIyWkMhNW9xS3EzWE5HR14kTi1RU04mJyM5N1R0ZHBxVl9mUDNh
YFoKJS48Qi86WlIvX0xWTVFANm5tP1lISG9cTF03ZjY+a1Y8SXRTZ2E1YkhMdFtNXCZ1JVJzLW5Y
YD8/JEdpOi0hcSFUJ2FuZyRYMTZ0U2BEaSUyPUcwbUY3YUFIKm4uOWhYPCcvPkYKJVdBRSteJlIn
cVFadFhRSFpIVnRRbTJYLF0hOGxIQEZHUjlwZkk9WmVFTjdjaU1mRClOLDRYKyUsSC89LFUuVF9Z
Y2txJCxRRVB0Mk1SUW9QRV0mXlc4JS8yTC9WQWw1PE5xbCIKJUgqNClza1VIJ0VVNkQpU1ZETClS
IlZPPGteTihRci9ddHFcWVIzNW89Xls7IiVdNiVDOT1cNVtiR1pDdWpnMF08UTtoamVWZ1FMXjpc
ZTJYVyZQTys8JkxPckAwSi1GRT1oUF0KJTBTS1IvI2NLNDk5cCpvX0RHVT4sIlBeIkdNWktyMjpL
P0tRWlZobzw1Vi5XUEZaUi42T1dcLz8vPCthMWJLdXViQXMmLCpfZixTWEFAREAuJ11cYCxbc1Uq
WmooYE5hVWsoMEwKJVZRITg7aEtHMCs7Vz1kV01TIzhDXl5fRywhRGxdUUg+YDRwTXNLUyRFRWNa
W1Y+cUIpI18zay85OWVZN1kjO3BvNWFnQ1FfImtOLGpHP1JQSCJCJl05Mk9kNiYlVURoYkhOJ0gK
JWdoY0BuZW1xUSZYWkhobDZOdDA4PDlzNlliKj4vIjxgaFBgV11fJ3Q2PEliOUZ0PnNcISgyX2gx
OkBWWz9uJD9VKFBOIUxXLV8oJiJZZzhXZ1RhXCNmN1dRODk5a0NVbGxSIjkKJUctNV9xSFZUblcn
ZTQkTVZxWGtkKTNWKkkuXmJJUil1LDE3Nzo7InFuND8hS1FyIjNeaVteLWpxZF1WQyFlMUtsaHUu
JC1samZKJCIydCRDXmVJWl4wazhoZzEtPXFEXkU4XCoKJSZmZiFcVyY6R3JaOnJwZyMjQkMkLSRk
bigoZ0ppYjEiJUpTSygoW1pxcjVyQ1BBUSorXDlMWlNFRytBUlZGL1dqQyImTjpNY2ldYTthOGRE
akRdXyMpUjFEaFJZUnJiMDdyZ0gKJWpxUWxQIjNzbXU2IlxYZj5RWD5waDNDKk4qQUI1R09BSkRM
XmhPNCheP15kI1tnKEEzTXFOLWE/ZDBTJDZXZnNsODI7LV1PSnQzW1E6ODdGJyFMSCRsNkZHcm00
TmQqP2c+QzsKJWFwJkRGXzRaUFQ8Slwwbzc4OUs5OUM/LmM4SFo6J2E+JFInVzFGKmttRHM5Yz1Z
ZUdvXyNQKUpaODBcZTNnWWRsLiE0LiI9QlFEJjxTPFRWZWFLQk5rXENpI2UhMzlHOXAtbSwKJTlN
SDs/PGhjNGZAaDtzJUk3SC9cS15rLSheM1VtRWNBR1NaZjF1bVRCIVI1PFZBKFEvPiokRiZtY0gx
J2hBKTdlUkpjNkc1XWlBblxJNEguRnUrQT0qWENMTSQsQmV0QEBdaSsKJTkzU0dnXDpgJzZKSmJv
QWFJNHBqRD1eMUU7KF1YXmJSLk81QCZGJz0rXkNISig5ZT5RMGZxN2lQM0Y7S1IzTXRIVEU9OWdI
SCVRVm9gNT0tQHNJKzxAclJKVCtpQkU5WUImPTwKJWRiLUdoaTw4NzhHUihLcUsoJlQjb0VDLlU4
WHVsRmZjaGhCO21iXGBUOzI5WjtdIW44ZHE/VHNKSHU/TTAyUipuZGFZQiQiPSFuWl8tQUlPcEZS
SjhUSSVxJyM+R0ltNi1DZjsKJSNGO25EZTw3b1pPaUZsJmhWYUA6WHFHWlNuSmxLUHJSLWBpIz5v
PmhFPGZkQ2dWRCJjLjE3Y2Q8SiM6WyZYbCJFci91ciYwKj1vbGVeV1RlKidtYFxNSG5BYUc0JCRJ
TU9GW0oKJSJHMUZKSjhTKDRRaCo2YW5Ka1JJX09qdDFKLDR1dUFMLEVwZFpBMFRXN1piPyMlRSg3
W1BZbXM6TWlrTmdOLzljMURLIl1KLWMmVHEjI3JLSyhmUVImMCI+dWUkPkxqI0BVLHIKJSwmIS5z
OmBPIjE6JUQ1NFM7bktoLTdqNjVdUENYbj5TU2RiO0xlZWFLUGNFQ1A0MmxmU1xkLFtjdSQ5OEBV
I1J1RFIjMi1mOmJZVjppITdtQkhQXV4zb0RYPGI2RDxsbGNSIloKJVUkRDlQNVVVS2RTZ2NYK1Y/
RylIV2MiZmgoUik/PyVzZEgtVWk+ZD0jTzRiOWsiSlxqPTZFYjtrLTdMM1BRYFYvXEhOTUI6Li5K
OSktS189I0VgLUlkRlA/WmRVcj1nVzAqc1cKJUM9JT1ON1ozSkdBb0xRLz0qJXVoLTNfVSpNWydt
b2tcYUxnKjk8SW9HW2BEUCI6aWtjYEhpaURPdFAxMEVaKmZAPCljZj5ePCU7OUhiOSZgLiZEYGBc
QjtcVVRPSko4V2tKaFoKJSEqbkpcPDxObCwrQjBFIlxtZCcrPFMsbVVaKEVxTEhXM0U4X2hoc04+
UG0pK1dbXCY4MiJuLFwlVmhOViwpcTJVKkExX2o7bGxfQyczOCgzYERFMmBVcE1bVmdZQWFMYFBf
aU0KJUA6UjlXYGAvbUwvTHJrIlBlVmpAZiFVMFgkJTEkNj5XdGwnWElAaUBoR18yL10lYDQjOEZm
MCNKL2ozQVJxOSN1XWZndG9QO0ZMPCxYQ2YyNj9WKjcmbHIjXDY7PCctYTtUXEkKJVdwLFJRVHBl
T2FSOj1YbmROKGlYKCdvLmUuN2c5WDBhdSktXDxDW1QiO2pJITZTMFU/TGxrIlJePCNWZXJLbkch
ME1YPzJrN0VCJUhjbzxlKklwWEJZSWY7ZlIhaWNRVzRCbDEKJSFCbEhCSSlzLzpaST9uWWIjTmRs
XHRrOF1DOE5ONmN1LEoyYSYiTiMoX29GKEhXJiFyLT1LZDwqRkwpcnAocElYY3FkYl4uQlY/PUlQ
XVA0UEVKXExtN0AuMFZTTlI1ciYrQ2oKJVYvWmF0QVFbKz5hM0FgbTo0M0ZdV1hqOkhvIVFsPjBp
OVwyanUncSduPEs6VDw1JDVHR1k+QT1NMiFEb2goMjdnRihhSHJabyQ5b2xpSD4/JjlccGFtU0VO
NCFBRDxyV3NaMGgKJU1IaE4+Tk5lITo8RypkXF9VNXRBM2Ija0MxPWZhM0o1WituV0QpSChfNSUt
Sk9bUDtrQkpkXkdpYWRkRSpzNSloaG5GUGcxXmo5Iy9hPzQkWkw5dFo3alNedDNKVDJLRztLX04K
JUomV1I4PSlxaCEyUiJDNF9EO1BLRyo5MDVOJDE6cGtYUWpkNSk0cVBCbFkwZWohWkA4RV5dPi0w
cEhgTUpwQVtIRm1LZCgmMCo7PDNaQjlkISRUYnMmLVRiYHEkLEoyV2c8LUgKJTk5WWNjS2k/a2FT
NWssJSJkTS1zRFMsPEkhbGtpcSM5IWhlUWR0MnBgQiFGJCJ0cS1ISlxSaUlqXkJvVm1hUWwjIy4r
Y0JiKXJeOT51ajInQWdHTWFdaFxPPTwicF1HbTFUcigKJUE4LTI5TjxkOSldUGxkQFJDb0RtPlo0
T05qKkFnb1A9NEhCN1RFMmZuQkRTdElvckY1SW5ZQFJWa0ZaM0EyYnQpMWsuLloqRiFKTE4pYSE/
J0k1P2tlIT1obmFJZ0Y8bHNkZS4KJW06T2shOSs0LDxKVWoyJCpPNDlfJSMnUURWQVlvOUBIdHND
aCVbI1o8VipCOC5PT1taPl9BUGhFdUxTKkg1PzFfb1lNIixUPEt1XWxSWlY1V2JUb3NlMC8pIkxW
M3M8SFYpMjYKJWInV0xCVHIoYmYrOF8mXm9rRyJyMG8nTSNLJFhiQlkhJmBPUStSYl8wZD1TQUk7
UzBFKitlXWJZNVolT2BcQGRMKi5IU3RkVGZwUmphPmVvS2YtJU9cJDkuXThyJ2EwRz5ZKVMKJVIx
MDFoJEInLD9YTWBYRDE1T2ltJkhrUF5kIUtIZDZWJkZWRi4vcl1jQVQhPFZRayVxNV9TKlprWjpr
aUJmR1Jkay1RQzRhMzFuXllJT1A7KUNKTkImSzZuYl9JSV5jPk1yOmMKJT89bGY/LFBtZS1vOV9O
bixjTmtgUlsycVpRcj0kL2k7JyVBT0hCY0Qtb0RsTVIvaUhzQG5TLWppaUoub2xtXFBMKV1PUCMi
L2NbIig0LE8qS19WcDBbPUBKUzxiLTtuUG9VOmAKJUUwTj8/blBvMiFpZTVIVFtRa3BJaWMkZm5a
Oj9IXF81cUplaDUsM3RQS2pvU0BRSGlbTU1JL2JOV21cPThsSV1wUW5VOlM3NiNtOj1PclMsOCNW
LC5QJWFLKWE+PjQ1ZGFERmEKJVhVLFUnQS9vPylFMiV0ZTVgSSxsaTpKZExjXyttaSFWWUV0IS5P
Wi5DJE1SKWgnK3RFRUhlb01PU1BjK1Q9alteL044cShMbSReWzcmR2stUXQiJU9ncT5RJjE/K2Jc
PFxaJVQKJVZ1XFVPQ1BGSF8naHVZTldhTzcmJVk1SWApNlliUUxiUlJkM2RqNUBia1hrQ0UwQWle
V2oxb1lYP10uQTU5KVAibV8pa0VHJUBXLm5jVl1HXHVJcktTX1QnQUAlXG5NTjdFKm0KJSxHY0pU
bDJpZ0FsZjUyQzEnUk9kPF5EX1tRLFlIIS0uMERsciRRIlNrRzotWW1dVmtdJSpaM1goT1AoT0Fn
b0RZIk9WUkVvKTRQKzlTP0dHWVJvaDxAQW0qV2RHXGFjKHItc2gKJVBmUmQ3clxWY3U9XjNdI1on
IVJVaVIxXFBQXlcxNkVEMyM/MVlRNi4nOThYLmJDZTNKWjRLKVQ8O2docWJkcEc0Jm4/UFIzM3Q+
U14vUCE+TDNiMixZb0FQUGlwJ0xeZ1UrL18KJV0rQkJiZkclVlZTTyRBTy9tRzBeTkwmUmttaWk1
cGpiMEtrPDNbRixiXTVGMyxsWVtGND9AMDNMaltDZ09tc3ByVmFJN3JqOyNlNDgvMDwsXmxMXiZj
TSk5VDwoS2ohTT9ySnIKJTkkbm1rJllzaEg8RmxGZ1E5SSlBW0Unay9aZjhAVz1COlRBSj8iJS0m
bFlJXmsjZTtqPls3W2FYSjNrc0A9ZyI2TTZUQF80UjxfbUI3Yk5WUklETklhJldKQ1ZfMEFnUVk4
VEoKJUonYlQsSDJLT05GXjhYXEpCaTdtMFtrcFtAWnJeYzBcLEJ1NE1iaVhNOy9NdS03VFRkYmNJ
dSRWSUQyRj1YXiQjZEppMHFGJylPYSZRaWk+YVUibjBKN0k7cTdCRE08YjxwRy0KJUROXVpmVW9Y
cD5IOzg9V3IwdWg9ZDEwJE1nMD5BOVc9JWEtUmdBaCYwIXQzdTooczJvZFM7MUxuZWU/Y3AxUW90
YjdHOkEhR2xCSUxXMz8nNldGayRcSks/J0duc2VyQik1TSIKJTYhMVQ4ISRTLU1tZkJHJ2V1aitN
R0Q3MVY4UTFtRDFPL0JNXW9MJl4wNWliOyhfNWc5Pis2XkdNSUVedEsiLGg5WEZVZHEidU1ANDgj
YClVK1smaDFiTi0nNmM0UkM9cTM0bjEKJSNNWVBxVyQ6Yy9ETW9eIjt0UE5SUzpdOjpMZ1tidUBP
OXJNLWwsbCdqLl5MOT8+ViNaYl9kI2M7YkVXYDs7ZEFLPDAtczhbQEVURWZVT1Y2SVlqL0AxJGws
OEBQTEcoZVxpV0kKJSs9XHIoQWI4Nl4qUkc8RCZAUWU9KnVHKmJLUDBTcWQ8I0hjVDZvWGZROiFg
QCsxTSlqOjxaSFRTIl46TGdhO2k4bG8oclNubyNoMnBZRzZMT1ZEZCouWk85Vl9aaHQrPVIoRC0K
JT1pPXNYcT9tPSZHWj1fREZDOihfMFg3Ii0yU1I7OnE7R0MhMGRYW29DIScxXXEwQCRITCJrW005
JV0pWSQ1UkRUNm9Tbj0/dXIwJTJiJWp0IWw3QEJQYWQwW1B1NjlSOiE5SXAKJUBjclBkVG1RODFq
RkpLL3I3IjQwZTxKUylndVZAJC1RY09uKC1fWFtAW3BoRiIrYEQvYjBZMTsjPF0tLSpwcT5KL183
akozOEpML2cjMFFCLDN0KCRnQ3FzUUEjbDEmMkNBYV0KJThoKj8pZD1JSVxGXD5yY1doUmItJ2RM
bjRKXjAsclpXLEApN0NJZz1DaD44YyhyI2VfOTwpdW5PPEcvaUphITRATVIjJDZEMDM9K2ZtWjJN
JURkWTBWMztLUSMmSikwPSEzZ2wKJWQ6MWsvbFhGKlIoWXJJIUNIQDRHS1FIXzhJR205OSJKIigk
MTc4QUVTSUFRJlhaPDZOQCRzSUFXayk7NDdtdSlkWGxsYVByJS1pTFgkTWglW0xPLF0zMHI/M19t
ZFk4Y1s8aGUKJU5vPD4qNTdoZmdiVGE2XDNXPTJjRmUlLT44bEFXK1xLc21nJGkjWS1iSkNuKlhi
KHBxPTIrWG05aj9ZYj4ubidcRVNoXS9OU2QrVFRdUjZnVzluPiYoOTRQYjtHaHJJOVQ8VmgKJUkr
MzZtS1EtSVI1YnNSX1pOMTs1amJIdDRxJU9xaShtUzdVZjpXYjlYOXI3TzcrYVI0JUknPHFfXWV1
SkNAcnBhZU5Lc2khUWFQUmFkcDVcLTFmaCdOVCphb28+STVpSDVAN24KJS5hOk5YMVU9cDcxR0oy
XWFXVkEuUCh0P3QsQFo5MnJvckBsXzRBRydtWiZyaCtlLiVzI1RbPldkL0c/O2ondXJZO1w6OCFb
XS5JZGEjPURUaSc0dVdWLF5Zc044VDFhO19oOUIKJUheR09UPTpbRVU6IzVObCM5VkUrbCFZYHVB
SVc6VHJfUitnVTorJmllbyMjO2o3Uk9EcSNcMjRAaDVwdGowODhDWF5NVTk5LGo0Nm87OHQ9TUhH
RzpPXD4mKi5GdV1MJSgkbV0KJS5tWlxkTFg2KlJCOjRVSCRmKiVJVC1iLXIrYFhzLksmRCJbaCVN
TVMoLU8nTUwsQ2E7VWkwIjBSUWtPWi8tNmomYjNHWDlaKGBoaztxSyFDNiZRZjwkO0MlMSFZP1By
PUlJOl8KJURjcUVuai89QyFWRjdLQTluJyY7TlRUcF8xPy8oSFE+IT9wT0FdJl9NX3EmKm0rK1g7
UnBEOUZiWXUhSl1ONVFeXU8yUyspJCdDXiw0ajd0QyxJRnNNKWBqS0kjREkmPVJlK0cKJUBuVV1W
JCY5NEQzQjUnOlJgLlstciFaTTBGcHVgbEZWcXBGLjlcOGNfXCUjTkdYbGVEaTBraVIpQzQiOC1z
NkhrZWpSV2dlVDg8JVAlc1VHLSs+amNpL2NqYFxvKGNOcVtkY1MKJTBTJmk/S21NJ14maFM1L0dR
I2ZmaDNnNFBoPChHTmxdWCRjZkcoOU0mOGxcWVBOJnJEUC1eR11SNCQ3VSVwYVlyJ3ItNko2ZWFl
cGRtalIkLm1ZNzwkNHJyNVFWX1kxcGopMicKJVphdDM6ODs6UCxgTDtjZi5QcyksSzY0UUNCX25e
c21cLSInYz5TLz9XMmZHNG5KMD0zKSNUJEFbXWJnMmA5RDQqTUctY0ZeLi5jJW0qQm9lQyJIdStE
YG8vPGg+LW42LWpdSj8KJUkvZTVvMU5iblVEKldtT0lUW25pRGd0LWZJSig3XG9hImhAaEVJNi4x
NFxaXmZPJTRgTCExRjpNS1Y2cEVKJDFKcjI2Z2xTJlxWZWZuL0IqNGo5IlsjMGs0SEAsLnQrQGVt
W0wKJUEuYlFASUlmdUBRTGhRUFdgQD0nWXU+RlAjVzFsQyE8ZEAmMWNhSTFKX0gpaEpTIV5iNkpV
QjdCNDA8PkFRSSVwLCooLlZePjclMyRINmdgOHQ3PS5oRWBlKEYtIWRCJl5GTSkKJVtObD5ARW41
J2ZuLWZuIlYlcTdHO05eU2xCIys+XjMkSiZKJFFMS1c9ZUJFKERnT2xXVTJnNyJrbzU6aDNMcS5V
IjlsW1dfakdsLD9wSnUxJDhfZ19TUi8lT2BWSXBmNTxAKkIKJSdQaWdFPD41alFNWVFAMV5cbDNS
O0hlR20yR2dlUSZpcWddJytXKiFXQ20pTS4lRzosPFtlJWAzYEV1SVAyY0RccmtVYjhdQVN0SzJH
Q09iY1VwIlsmNWItUWtLcyUrTGJOOTcKJVM2PCY3VTs+SVVdUiUhLl8qVFYhPWw1VEAhTW4jZEBX
WjI+KGZeUmpnIz9cZm07Vmo2SDIibSxhaz4xKjlSLlVnK1YqQGZuNnVCUjpoZmpWJFFPZHMwSE0t
ZWFubUEjbDphNXIKJW5ua1Y8JEJYPmc3VExwdF85cyFHL1JmIilSJWVUOlo9JWNjY0ROOmdHX2ZY
LDlrMikmZG08PS1qc2w6Sk1cU2sxX3BIZ0RwYCNxUjY4J2hRKzI9OkRMZGpZZDdvPDdtVWRHIWgK
JVotV2NgQD9DZ2hccFBTSjRvRmVpJDwiZkw4NU1CaUtaJGlCUV9IcUFnNGlcYFAucTFWKVpbL3Ax
NW9MdTxzIV5CbmE4S0wuYiQib2FSN1QhLiRhdUNTUlI7cEFsP1NiO2MvJnUKJTY9QEdKIklcS2Ez
KjltbGBoUm8jbzU4I2YzMyJbTzxOckEsNkFiaUwpdDZBS0ZzL01yJ0BdQDw3OD9dJThKNEsjMGZF
U1FRVTs/PFhVcEo1V2Ikc0RnZi9gPlVxTFA5PzJfYE8KJTtEQ2hbX1pyXD1jS1BDZVUkcCdFYUtT
SjgsIlxjWmwtTyZvOT9DTjdGL0MzM0UiSEgzWmBuKyI7Y25RVjg8UiQkY2dQa05dI2FFNWhYPCtX
Vm5vKzAuYzg9QUZeXyVTYUpPRigKJWcrMnJZQHBGZWlfJEpeL1ktL0FJMj0pXUooJyZvPDFVcExn
Yy5gUiVtN20/UShAbG85JShEO0RKaS9pQF9pP0xqQTFbWldKPWIxQV0yXmxQVnAhNENiR2pDL2JM
dEdINFQibWwKJT1zTUAjbzxmYU5WQjRXa2E3bGE4JXMrWFNIVCxSKm0zYlQiU1krNFZeNHVkLF07
azcrRlZwMUhdP2VoLWJMcCpfSExeMXMqP3JiN0krdTtvRUFjPy5gKmgibkI/PWdhVXFjKDkKJWNR
SzgqJSNRPmgiYmA1LiZqZVFoTnE7ajFKPk5qNmlIRGVnLi1vV15yWlJWVHIkbTo5JVdFL3RLTz05
SyZeYWJOWzQhZVRuYTcvX1MiIWVqWUplRmw1NWRNJGxIRCMuSy41NHMKJURUVV0vRVBuN15TUSlk
WDclKj4hNmluRWVGakk3NlM1LkRnU0laRjFSUzx0JGIkcU9QYzA3Uy8rZihCSTQ4RikmbyU5SkBs
Y05UWGovLVY0Kj5JJTJCLSUyW0Q1aCE9XSdkUXUKJXJ0KVlFPSQ4dUMyX0w0JWBzI1pRTDZvKiJP
OEJHKVRBVFhlP1tWYjpzLTpAdW9lNkAkXlxYU2ZzOEs1JXJdZ0EtSiw3MzhyMHIzMkhpPHBjcm9Y
N1lnVjxLRnM4TGNmcHBZYkwKJV0+KzwmP2g+Rj5PbXQ6NSdkcDVsNTQ4Jj1nMXBIKG9eL2peKzkp
MHNPKzcuOU8rW0sqPlBuLDReXHIqTjBuJnBwNFQ8dDRUKVZBdWdEImtqSylMKVdxazlmSlJmRTla
cjpkdHQKJU9rWioib105O2hIOzglXD1iWmduOUYuWWVnS0lbVU1WQ184JyleL1xGJ3BpZlA6RF0j
JDFMJmMnTTgtRTMrcU04blQ2JiYlZXIhNlg8RjBdO1REWCREW0s+VWVhSzBFVDRcVEoKJW47X2Zy
Ik9zQzBTS19XRDlKU2pHQDFNW0g4L0onOi04XDcqKEU5Ykw9P3JPdVZBUCYhYkUwaGxQXypMZUIv
NmlNPTNvKnE1Q2BbIzpkcj5ILiRZM2s3Wkk/MTw4bW1faUJFJTUKJTBiYmEscyhsI1NzL2BPWVJl
XTQjK09IU21LZCFgLGR1X3JGV05aJVlBPyNnSTJLNDB0NlMzLDJhWkUmU0NLcSxvI1YhK0lcdU1O
QDNncm5ZajJfOzNUWWRGVkxlK2NqTl47W1MKJTBRaGBLUU1SRV1jR0VcSm9LWykxUmxfaTsrW0Ez
K1RkYVJVIlJILCJUV2hEW1xLTiMhTmJGcXRTIz5QUiZWT1tQWC5LWGdSTlNPMCdCM11TV1lXVEhB
RC0iJWBTJDAjOUdZdGUKJUdAUmoiLmtXPUEpcTgtMUYlQCNhVSphJG5tS2RxPy4kQjEqWWhpQD47
JmolVUQsSTBzcm1WKHUxZExIRkZRTjUpQUIodSVqOkwqUUM/MG9pLVorOm9iJCtcXTRBSEhaNlpq
OWIKJWwwcnRbaTUuV1pgODJkUGQkIVUuOmtUU0YoQzQ8UFVoTkYlSzdzOz0oRm5rcmE9I0UmNlhH
QzVyWHVPTytXUztrQWYlbVdKcW5wcFY/T1lXMVdCOGk5OUU4NTVEX0lNclZAMmkKJUNJYXFTYXRp
RUgtMy9aS1NmalFYMiwwQTFXKjM/Jj4taCFpTV1VZGdhSEpHMTslZVQkbERoMy49NTo5OyVcZTco
JWdLaWU6QmA3QS0wQ2lyLW9bXy01bFwwakpnY1Q3PDFpSksKJUJnaCpoZFZWTE9LM0lda11PXXMp
Z0gkMDw8a2orXkVSJnFtQEtyM0tVbkRHJWlQXGhAPjkyZSwxQztIIlFwVEJIYyNIOCgmOjEsJWk4
MVgkM3QxI2MnTDVWOGAsPF08JFlMVDsKJTRwaDpQNjEuQVw8XSlbOkctYmJKT2s7WlRVPysoQT1K
Rl8rXm1yOW01ITcuRzV1S3IhTGtLbV4+UVIlViY9aXErMHNCSVZtPU9tT2ddRUJwJyNsYVgjOHVa
XmI5PGNIbUtlQmkKJTAiI1khQXFhKVJhPExTWzFrWlhDJmlnInFHTTpAZE5FV3EkQ0I0cz0mNnBJ
TS9gWmslWFdlSmg5LCpTTzZjMHApU1AuI3UhJVxkL2gtMTo4PW11YTdPVmhObF4xPFk5VG80PSEK
JSYuP2VlSm07WnQ1TUYqMjRzaDUxVF5WRlsnSjxSWCE8QCVZZWg6JiYsN0A1Z1VpZ1pLIUc4VURy
MzU1ZmBpb2FXPUUwIy8vI2E2VFwnKW8yQUU9YlAiZCI1UShkdVZoXU41cCQKJWFTMjw2cCIxWVQs
aGI5PFg1by4sK3I8W2xcZTBmNEpFIzJmTzsrMCIwODxhX11CRi4jXCtwT1xYIkxNLDZfb29bZDs1
UidtRElXU0lRRlwsclRWaWJBI11TbSNEREVVck1SYCkKJWplLDEyImYmKUVRY1JqXXA5PEAwXSNh
MT1MQStIJE46PSc+PEg0LVI4aCRFTzJTRVglLWonOSI2az9pO1ZIPm0vcjFQRUlNTDovZihnYSFu
ST5oU01JQVxsazMjNl5wTi5wXEEKJTZQZTJYZWgsPHVubzZuJDg5PD8tZU1cWTdaTTRKbUhXJlNX
XzkpO3VrMkZFPTE1LkFxYGNLcEtTIk8qa04rM0tMTHBFNj5kRSs/MVpTL25eJmBnRldhWFlpJDh0
LURhMDhBPTkKJWRURlpEbi42VG5qT1FsOCxFJmgiN1dKJEcsYFRPXWZPRktvT1BSQiplN3ROSC0/
JCs+P0hAKCsvWzk+IiJxKHJOPlkxVjk9cUZ0UT1cIUY9K0RwZm5ZQ2o5MURKIWU4JjFtaEcKJSxh
LE9EazxrI3IqTDszbUFuWl1COW5PX0hFP1VCQU4lYzJFPEpOUCUhLmlSVlAjO2JaRzBRV0VKdGYi
MChrW14+cTBcLFszWS5uYjAxXzxcRGVfVDxoNmkqUlAiaFluPC5JYUkKJWhzZGc7W1o7S2djaCtR
QScwZ1RtUmhkP25iOVM8K1ZNbTVvblJYbWowYzhCNlI9KTkmYXNbUDJqPiEnYFYmb2FtOidBXUIw
Vl9BRi1CJTw7TENkTj1qRWRXaiFuJ2IzKXVWZUsKJTkuJ2xFV2YwKkExaXRWWT9lXGE0ZS5nTWhh
YSVGbidKNEpmZFRJaUwtLGddXTEsVUUrWyxLKlVcbHJkbVlkb3VdRStlLWMmVkhyTCdlQ1ZkPy5X
TUBKbUFccDAsbGZQPCdQR04KJShXTChaLz9IM0AiS01XaTkiMi5GUF09QUpYO1cjS1tyXEY3WDAs
NSc3dVNZU2YzLWtVUGo/OlFTZ3NKJTAycTtCQUVCIlRcWXFXWChcLEQzPkV0JCE1bjo8ZClmNFYw
N1ZVIScKJStiTS81XV0sVjZpSXEpMlEzPTcvLTJ1W1xGJ0ZWbmVwOTNHSW0sTW1MX2pXX0tQVT9G
WURSOGhvZyIlM1s3ZyddNFYnUyQ0VWBAUkY7I3FHLDxiaD9ZKjVAYlw6IlFeRk4vczoKJVBJQXRn
REM5azonL1tWdUk0NC9abWx0MVFRYm5YWUg4MGgobGk2bV4mTHNhZVBrLl80LTBJO3NWVz9QLEd0
PHA0IyxmJCZSOlVxcTEwOUE2TFhUak9QJkdMdSEkI280Z1lWOWEKJU9HWkFvYExFQSNiMmVFXEsx
QjFgUU5eK0hUZDpnW105V2hmaVprb1YhY0NkKy9CJUpSSk50ZXFoV0xxTjprVCpVbWt1UnMlSmU3
PkNgXHRrPCtpWCYlTThcXj49Y2I3O3IpJUUKJVw/Jys+VmI9YzJqZERkNlpuPSRDPEIxRmc+Q2VA
MGhVal0oTyw3dG0odVxgNGVFbk9GLCc/Kjs+NVBdVyJhR09JZl9HQlYwaHQtLSkjajZTRmo/VWte
NjVEakJEKixebk9pNigKJVo6RE9CXExUKmRqZSc8MCRBQDcjLENuS0dDSTROL2I7T0FscV51aSZa
aERxXVQjOnRYVFA9PkhxSENEbm5JWW9ZTjsxIigkJFlYIiovaDgkQ2djbWQ+NVtWQUZqJjY6bVE0
K0gKJW1QLFxlYlMmJXExXz1XW01TXE5ISmxjbGU0Ok5ndERFajFHY3VELlNXPWRrLGBOQnVvKF9N
a2tkMi9Sa0gwLjdhTVBUSVRJST8lK2V0L2NWX1gkYWYwYl4kREt0JGw4YSkzUVQKJTBybkxuVT5d
aFdoXU0vMWluZWReTzxDYkBmWV5nMFJZO3NSYjdfJF9FL08rYihMRjtXTkZTKFk3RyVFP3BpTiou
OV87R288MzMsOzg/cVsmZE1wNTlBXFMyQTFxZUJnXlRJJUUKJSY6L2VNLWItb1dsa2BIcUghPlBU
Y15XN29LanAqaDNfcWdUZTRaaTQ6JGQoIlZeNSlzQ1xpIzllNEsjR0YpNSslQGU1JEVoNVlwKm02
UyI6aG5MT15xdGdiUzsvZFZlQDY2VEIKJUIra0JQWm9qQl1KakpQbDJsbV87RlxMN29nU2k+Yjhf
TF9oJz89Ri9zKkQjQW1TYzdXLzNHXGNCMDVIPDJvQWdHM2oiMD4vQ1FNJF9zY2lzSElWMClNQGFD
XGRzNSdGJCxJJWkKJThYbEtCLDJsR1JCWUFuSC9SJ2llKm1xRUZtcillT3E3WnVPPShpcSJrIVk8
JUk9RVtiajB0PlFWYGFrS1InJDVeWnFYPWgvMzNBcElHalJnW2ZIMUtPPCE0UUo0XmdRMileP2UK
JUQzUTQ8T3VWVHBMNVBaK1NcNkcjczB1b0lrOFw+UG5LdSJXSGVlUiJAckZjaSU3YGZtXCM/Y0Ml
VSRESy0tJlZXZmpqYHE1Lk10QC0ydSJjbVMkXDU/Z2c0ZVBKJztpbUE7Vz4KJT8sbFlSVC8nU1Yv
aCE6OUdWT1FAMGJpLVJyZ1UsUm9PbTFvaVUrVycqIkhzbnMkSiQiNTRjPikvcC5ScDQ1S1goX0Nh
cyMnPkJvQTQrRTxgVnJsaDhrSFFvRXIlclVDIVFRbSsKJSwxKGtsK1xBbkomYnFuOyVaUEUyYGtG
YVVOK1UiL3A+Sj5eP09UKTg1P15JMz9KXGxgVkhDPkZKOCg+JUhYMzI1SDRtVHJMYidbQzREPF9z
YElaQlUmUGw4b1cwdTNgSmsvUj0KJVBrcz5Dal8uVypZb1M+KHBGMytyayFCbkpAVUBHXSZKXjVb
Z2RrU2E1Y14tdCwtSFtINiM4OEJEdGtAO1RhOHI7YVQzQjxmTmpIMSNrLGAnMV4nKkZeaiglQCNO
TTxbK2laWm8KJWFAS2cqMG8iXyJtUmEjOTtHZUs0WyxTU2VOOi4lUU9GQ1lmSFdfTTBuX0pPaXFE
aT5OVjtZYypdOlFlOU8yJCZkUTVVLl0rS0Z1MUcvU14hMkwoV18jLG1qMkA3TiVHMklkWCYKJXFl
OkAyViFHNiNhaDRFSVsnYXQwZyUoTGRAZDZHNGBgJmo6ayhmMmBYIkkrN0tjTUJrLT8nLEZmVyI/
SzxfcChhMTZeNjdSLDtIak5UKkEiREYyajJzJXA5NDhZS2RpPiFwWEQKJWZjcV4/Oz5JP0JESClm
dV8+O1FrXk07WFEuMCtCLjhSY25EWDwzbD0+aSkkQ25CN11YSkE1aHA7P09cVTE4JlJBPDxuPjsi
NGVhbXBNVmc4PTlvbmUhMXArMyhKJmhaPHJEaz0KJUYuPCFYMyFSWVJQY0A1OiMsbkY8SktkVDEh
TXIsMGBOKS5fS1Y4LTUqKi00JCInQ2RhOmhcNlJrLEZHQjpTXk5DP2wrUjFcIS04YiEqWicrYV1q
VyI1SSk0NDBhclxHYDsrZ1cKJScqN2NhMSlGc2VZaXRRZSgtcyhkMUAscSNXQ0M9YyYmMmRYMkk5
TywkUmVyJClCJiQ2OGRwYEA6bVlqSWlrVnBDR3MoREI4Ok5mM0xVdDR1SkgjY0xYW1ppKS89YHQs
ZXM2aEUKJV1sUGorVk9KQiNKayc4YEk3QlIxQU1vIkdXcHApX1I0YUQ3KC1BTVRPMi0zckJSQWhg
ZyhvKUpwUUtTOU1EUzoiUiZhSUVCU14zaG1YbF9tZkI4N0ZaRS1iMCkzPzpFTz1HIjsKJVFTL2U5
KUEpZV1XPkI0IVVcb2pGRyNKO1NjWkNuZDZyIWBcJVFvdDUyYid0UEVJZkIqTSNqcS5ubFtNc0w+
UTk4SEoiRTc3WENoPzhVKG0oVHNHPk0sREphWVdZWix1JjpDPUIKJWg6NSs1OXFxa1dJSVtWIldM
NF1EYDxFN1lYXUkqYU5mY19FZVgrVXMhPkFRSz1OKjVzL2dsVHQwKF5YRSNsRnUwIypOUS1KPi9L
MyhMbVcnPS00bU9sXjUyMlRgZ2kiKmBKNDAKJWJiUSZfMSJJSmsuXWM+bTEqVCcpNV1WcjciVWVQ
WiwuLktiJSRNK1c6ZlNOKV5TRGUvSjg2Py1MPDpBPmJScCMoPDdfP0I5b2MsPFZ0QnUlMGZsT1la
XGFIWFklRDdBOWlIXmUKJTcsXlIzVk1IKEVtUi86VjFBV0hVaSNZOFQ2PWshVEpjbzElXmYmZGpX
ZSQ5bG1RbC8mY0MzJ3IldGA9M2hQLzovJTBeLGMqbG03WmUiOl5lODE0bSRhWHAnKSYzOmI2Kmo4
cGwKJU85TlkxO1A3WU5TaHVUR2kraCwnRFBSPSs2bCtJakBgbi1hcTM0OzsnSSttM1xuWURDJzBw
VGY8dSFFcmpgK0IqOGpIWihrIkUsO1Q6OkZEUyoyclBhSTlzYmVWMXBpNlJvK0oKJWpdOlExQC49
NWdQLSdoJioiQWE1PDQyUG9gUjNTIUomLiYxLVFTLDAxU0ovW11TQiRvQHBPVl4jWGo5LE90PlhW
MFVvIWEpL09BSy9GWFNtZFRGWWtEcG9FPTVwLEpKVzEjT18KJWFMMEkrYWAuTChJX1cjLTlVTThN
MlBVPFooV1M2W1YoXzoiPEgsQTEhXyNlZTtxMmtEXTVHMT8vWmQqVm1zQktcI3JIc0NMU1BZKGBF
UWIuTiEnPi9bSCYlSWxyJjUqMEdrKWYKJVtMQDlvMyJJYmknJV9oaylvK11xNE5TNmxXdVBpZWNE
c11dLUJYYj9mbkcnP0MxJl1OV1BDQmFFV00zO0MuY0QpSHNEMFNpSjNpNSEiYVpSVk1iJHE1Wl5a
XW8ob1VHLU48K1YKJUdWcjVcaVBHZjlUckRmJUF0YGhDVXRUVCJFWyR0VjIjcE5HOmBzX0BAbVgs
TEgpVUJZSD0qZFNMSjt0cUFJbFNjYHNbS0I0XHJwRjA2ZF1bRDMlJmJNcTtlV0lwKG5naEpZMCkK
JTUtZWsvXU50NXMlYk1oLFU8VUxbI2BFT1Nsa108TTxkMzxYby5SXThbTCVAMTlMYixKVmdZWD1z
ODZsaVNOYDU3KlxwTyMyQDVoTWhrXD5bMj5uUkw3QF0/MF0oYTEoaGRmVCQKJW07OkU4aFEiYVVm
LUZQN0tPbGMxPnNNMU9dVGphUkxta19gZGpSWC4oTVI/QTEwcyNTVnNHJTdUSlM9cU4vNUo3Zyhf
Q1UkdU5nPWAqVyI5WkpMR0MlYztwY2ZMXjYyJGpQaUEKJV01aCFaLVI8SDZjbUMiRmQ7I2JdPjI1
QGVANEAzPVlmNW9PUzUiKkNHVE82SStTXCdiamI/bCs6KzJuOVZKYS1cSHJmIUgqYytrWklfZ21O
V3QvMGc4OGtfYTozK0duO3FVUWMKJTxaSytLLFwuRTMhaTdAWSNGcVgqQz8sXUA0KFlMP0JMWE4v
X0dkQ3FOOyw5Zzg7VU0oYDVFaVQiZDFxU2pxUkEtZFNUKiNRdVtHKkQqQ0FQWzVzai4zKGhpOE87
dCFLI21VM1MKJUFzRVgpUlJpJF1CWzZtOVkzVEEnIUpuXzZOZW4vZ2FPUHF0KDVrPShuM0xQL0dw
cDdZLG1nPS9ZTjAoKDhZMGsoJj5QPmg5ZG5bPzVWSVRbLlc5WHVCMkQhLmptK1UoKDFrWmgKJTgo
VWJKSi9sYlhfS2M8Z1srSz1vNzk2Pms2WkU1P1lKO105cTVFPW4vRUlrTiM3cTtHTHM/XkslNFBU
cEBFczBcZiJwTmRsdWVTYk8kZExmYnM0U3UrXTNLczZtMz5QTFYiMDMKJU5mV3FmSyYlSnI7TyVj
ZTRoQjJqZ2dsPSQqK0U4ImZvbWtKIkwsTz1JNT85UTkvR1E4JWQpUisnNGUoRTZmTjxYOWRlQVVR
UFBDQzM+X2JVXk9qMWooKW4nOy88PFgvUm1eUU8KJVJpNz1xNSNyWGsiMSRJQyFMRm03TTMoPTo4
SCFiQ11qQCFOP3RjNlNuZ1MmbTVdcHA6MCVzWnNhSVBgaiJTIS1AZy0nJko8I0I7Rm0jLytRQ0Qr
TE4+SWNebU0vTWAyaWQkJVEKJWNKc2E1MGklbz9maio8bkBtckBmNkcqbmVTbFQ8bkYxOUVHZmVi
YT5hRTgndCFaImc4SVhrJEtWS2AyPSFOVUZKSmYiWCIjVkMyYVtSaW49PktbMWRpbEk+c0dqXEkv
RTYqLzIKJUlQUT9xZ2JmM19nYGZMMCUrJCFVTitUQTVOL2ZOITwyKW4pTjZkNFFsMERoVUo+N0JE
JjJCcVQjSEEqJFolZTdbZCI3RjZsPCk4ME1AOzZoTmJcPERBbDBCVFskPmMuO0s3J2kKJS11U2sy
Nl5YMFZHQWF1UW1BO19HXW9ydWdZbkMxJVwrPz1vMlEnLGpoLT4lOFtQS1JAOEtWVlhaNSE5Ok5B
MlBGWzY/bmxEa1IzWD5mQlJBN0luSkpqSmtrYjhGWD0mJDZMPXQKJUA4aF9XaUYoZVBfRHNEOltF
MSpqWTZiQklJOWYqME1EKlpFI0Q1ZTE5UC9IM1g1dSdVSnM6ST1bJGpfJDViQEUpOnNHZm9xOiFB
UHMkQl4rMkVBO2svYTNaTixUOmI2Q2pJM1IKJVZxLiYpMWsmQFpCRzQkUiVXSyNcP0NqaypfNlEt
PzNtLlN1RS8/allbQjFMQiJbXytQXW9hS2QnQCpGbjJBdW1PW2UrPyNccUUmXWVuNypPT15AS1du
Q3FAIzZKRXBwaCRMQ1kKJVM6Q0xdLWZiJnE+RzFaWHJwckFSbT5qTDFoNyNXSmh1RT5ISiwjTjVu
ZV0oPUZAOVxFNmR1Vm5DJjZxLEZeJTApQTVoSzprR01nPltXMmI5TnRgYjpGRC8vIm5zUVkubTIi
L2UKJXFtRWx1Rix1WiVuYUdTQT1CWjtQaGlTVUsrW3NNL1xzNTt1IipUKGhsW09UbHI2W1FkU2FH
MF1QcWFOYlJqUU1bcXM+NSJOP1omay5ERkQoTFo2SWMmKzUwXnMkY0NVR0hvKGwKJTxQcCl1USlY
KShDYzojVmM0c1wiZy5lOyleTWtrZjcqaXVhI0RYTlRkdVRPLlYvSlRVbUlrLmxePEk7Q1gmM2ZZ
PjIzZ3Q9RiNIYUpFIyxBPDUzTzsmQUkkM09mNXU3NDohWVkKJSshJ3NMRGEzLkhIRjlpdUxoaE9J
XU1FO2gvam08UC9OYmBwb2tJbT82JFQ/Y11RdFpoRygnbTdxNlMtJGc6JVVcR0ouOHJEK2cndGBK
L3VmRmFEPzdELWdsKSE/QiNnaG0tWUgKJURvWkhOP1ZcMiZlTlEjLj0yJyolXCs8ZVxVNmxTbmor
Rjw3WilDdVFMQTNiZVkyTVZnZEpFPy9cXTM0ZGg7WjZyTStKXWBVTyNWUFosTWxHIiRgJmRyU1VM
V3E9X1lcNTJHS10KJUVrPXVqSGJVUypsKzIuQS5GQUJVaWxcX0BtZTQmXjkhNWckcVlzUSJPNkFd
ZUpQWWE+Wmw3dUBkWyhJU2BLYj9gaCFKbDVdTlkqI2pdcHBnTyg2UHFfX0hkSHJwNm8icD9kTTAK
JUk7KjdEQiRdaSkiXUlCOk5lKS4oNHMuYUMnWGRfbz1CdGFzXz5TNylwUjFFZkFTRWohQE9AVVRq
IzY3ZiMxSkBKYXUnWjltaF1uRGpJazVbSCwjQW0kcTxvOWU1Olg5YlpET1cKJUZLWz8/Oj1tYiZU
KGRbU1dVJSxXLTUoaFkrISdcIVJtLGBxQ0VqYisqSmNKKFs8VEhIK0Q/R2JPJURhNltMXERvYi4i
IzJvc0llTSlYVig7STw5TmFdKSo/LyV1YnRSJS9QJWMKJWluQ2pQbWU+dEJQT3FUMj1oNlxUVGJd
XlIqT0NNVU1AWWBkWztSL0JuLEhuQEBZaDVLO28mPFs1Pz11ODpiOUJXaU5bSkRoQ25VYlJvVTIy
NCFjJVJXI15gNHJTXihKN05KMW4KJTU5OTkqRGEzLkRkdT1qZCVDXGd0OV5oIURGXkI5SC9dWEJ1
I0tNLydqTzlbNjJGTHVbQUhrbFdnYlBrZzphTTM5KkJPYmFrRHRaIWReUGs7cDllJDRQRHE9I2ky
Jy1xTFdPJGsKJV1ybj02V0lPbkw+bDQ2MnJQI11TRExGWDhLZ1lSXmEzKSVZKzMjOihqSi00KW1G
cW0/cXE9NSZFbkFPZm5KVjpGTidhO1toQlVxVjpJakVVW2glbUFEdF9NIklmSzI4czdSUTYKJW9R
TGg8ZkRmNllIbjxoSHAwb1o9Ni4kM3JaaCE5IWhxLEtTaGk+WzhbOHN0KW1lTE8kczdYOGwwKmJX
JEM9OSV1OHVucipjSTFrI1dwczMxOVtwWk5tR0lpWVstPko5T3E+cUoKJU5rM1YsN2lpbCQ6XVIo
W1sqdDBAQE1XKXJMImpQX0osRyNBVVI+KFQ1WF9UXjczQ0AsRmNHb0UuS1hMQSpLYFlrMEJcVzQ7
cGBpIVpZJFlJI1NJRHE9LWxOJiFsaWEtL3FlY2YKJUpBPnRZcHEmPVJJbjBuTiNYTCRGNXFVSDlp
MGpfW2ExMDolTDJzYkMjZVxiWFhIJjcoaihrcSc8RVUoTT9ccy9XZzglcWFjKzxRWCkkZz5STmNZ
X0FHV10xIWlSY0osRWJdYE8KJSE3O2NBWyZYNGRiXTdwJSdVWVNrWTRjPUZMXkMmKjRLMUk3RldH
PTJpZXMySEFTKHVAU2gxUFM3KTlwQF8lSHA8MEE5WlNKPDBwS3BCMiluWD50TzNKTihERD5aaEdM
ciQ3NGUKJUEqZTlkNkM4MEs/Ol1XJUJzbzlqYExQTmtFdVpUJiY0PD5rQVopZ1VQNzwkQkZUSj4i
VWBWR1JhW2teVDJfNUtdRzsoOUsuMzUoOCE1QCpPNmljYGpyWElDUV5ecyRnZzcrSS8KJUouTSYs
LyNdPUc0M15rcV0xQz5WPnNgYVcoS0hxcVsxc1VXL3JrTS5CZWUoJDlcVS4vMDI3c1ViYUpYXiFs
MSNsQVoxczQzNEthNEM9MFA3QDdocj9NSixSViI/TVp1JiVvTFcKJW43NHVAY2JTOi0+Q3BxQ2pw
SkwjYUksIjZySmVZSVIxX0lFJSZlOD4zc1ZBSCghKWFQWHMmcFI7QFhXOUw9aDtKV15nQUlnbUgz
aTtILjYiNlJJNFNVcUJxPls1PSRnJHJMQXAKJSZOLWUpITFQUzMicUcwPlE8b2BiLj1jKDNnO3NA
J0lDLWVdQSxzKVphU0JBNFBqIjxmNjg/KjE9XFo2aDU7Wks+QWZubDxkTE5aLyY2UFFXOkJSQjRf
VDVhdWxyTidPZzM2SVgKJSVLV0hkLHIyLkJRcW0uMU9HamNXJzF0NGs8Qmw9PWJpSkkwaFxcJSpc
TUxpZV5pXitWPWwsMVxlalFAUSdmbEouRkE7NlUlaio5VCZQLWs3JUdUTU1POzxCS2ZrSTFYJG1u
dUEKJVAwSFgtYGs4YFYmMUlvXDpNUWktKGYqXmxxayZdTVRaUGhBT1pGSztfYjteZlczLTBKJUdC
Mj86VlwnW0pTYVdvJnMpLHM/MSlLI3BnbSokR2A0LCtxOTVgXUotdURTRkw8a1gKJSdjVyFrVGg1
VlwsL0w8KFE9biUwXzg5PjghVnAiTFMoUFRJUDV1PGpLKDwpSk1OMSI8IWhtXC9tVHEwX0UoQVAt
VXEvYGxtLTpaNyc4O0BtWU4tYiZUXnRLMGsmRWE4Yi1EWlYKJUhcI0VpMT45Z3FDbnNMVi1qL0dt
IVljYnMyUilvLEJVZEQpMlEiRSNybmEsaERHbkRIT3VUXmdNKV9cUCdvNygyXyVpSVo5THNkJ1hY
Wk5fJCNUL3NlIiUkLk5CJThNJGQoSFkKJVhtQEtFNGpzOGxbNDAndU5YSmYtVT05PiRLSFYpS1Zr
WzpkOU9VV2hwPSEvVVYyWCEsO1k0XXFIQ0s9Sz9rYFNmLkgwVyFiT2BOaVBgJGhQayMnaj1TMkRD
SDZHXlR0PEJvTzkKJVJlUDo6Kzg+KGksbzEsJmNfUEpcR3BGMkdkWD1rKj4mWTRDIkhEXC9ZQFRo
Q3A7VC1nV1VlPFE6KClAMGw4UDwkRWtZIjgpTU47VFpIMSldYUs4dXFnUm9zOiMySztGU11wUlcK
JUQ5ZiZVQy06J0cnNG06IVtHK20mMlRqZjNPZjFJJ2ArMF11TWtjUTleQi00VVx1MzpaVmtlajtT
ZSZnY1Y3KD9XMyFrbXJeLFY8J0NnRk4/UjQxbDpNYkphRCtWQjszMklyMjoKJVYjbE1gWD9UNVgh
Xi9XN01XS3RmYj9obzU4amFBRFJXVE5fYElfbDJRVCgzWVVXMHNiUW0iNm00OklydGZcXzRoNmZW
KWUpVDxLZTQpWnVRLToxckc3K05OZ1ppcCsoJ2JkbksKJVF1ZGRYSj8+ZUM6bXAncEQ6JDw1Mmwn
OFU4YUZpRk81PGxaInAydXFvXWZqKTo8NyhpL0VaQlE2LnFNPyFARXA5Mk9LQldCMmVAXUVoOTdl
SDY0XkkndVFXPUpkbm9FLlE8RikKJTdgY0kpRUJtdFJlJy5VIlQhKWl1QmsjLWo8dCJKb0NSZWpb
VSsvIlxDWW5hWlxNbihOUko5Ny8nYGVOZ0VjOFxWTDU9QmNPbkQoQjFGTHFfY1Y5RUsjYVMnbWw+
YWtsK2ZDWycKJThTMyRyV1RxTU0kbDA8JCZkKVBKaVpcKUM1Wy1UbSROQHRmUyZfaWRAZEREMSI1
OHIzQ3RDJ1BgXV1EMy1cbEg4P0pGJFMmYUxjSCk3OzRDZzUvU2ojKTpDISgtPSpIWmsxJEcKJVRf
W1MoSC9CbEVcJm5nRFs2JUJoZFFdJyUsWm5hRzMxUDssIzZnYktnNCI0TmFoO1YmcydfYD5gWmcw
MDEnOV5KTSZcJE9tI1BmTSwyOmtqQWtLUmpRSmVtJUpnRzlfZyZvbzEKJVEqMTdsJjJVXy4vVz4r
QSM+S0tqT2ZENGxVZksjKmo/NlgtInBGTmQpWS1qIWpDUV48WEZpLW8kOVdoVEsnWWxfbnNuOURX
bz5CdTYhNF0wLisrMypvXlMqa2Y0NFZiSk5kNTUKJSZsUDNpSFNQJVZncUU3N1EtUFhzXyNZKWw2
RVNII2FRKkxFZ2RGbmsrZFUkJCk5XmlZMWJBS3IrQGtvKG8vTVRWPnJHNjk/JF5aUmVrSypNTXFF
P01TOEZtbzNbLXBJUUhHX3QKJS5NJ0lGXlhBcCk7P0FLTDd0NVNMbVtvZDJwPD4/I1MjUmBkLldw
Mmo6STJrNW5zKD1NbVdmLyNEVUx0Z2dWWSwwPCpCU2JjXnFGbSs/MmNscFsqPSg1Vm9WbUloU0Ah
NkY3LTsKJV05bDpdWFAqLFclXltcZ1JcPmZQSlRhbzUwZ1p0KmU3NlEsZXBvRzJabHQqczJrWkol
VSVGT089U2s3KVBtIW9pKFRGJ2srOFtLXSVjVygqS2hILlYsW2NdVTIwLltiQSooUj4KJSEwdSIo
Nm46WCIjaipbM2VHMWpbN2RrLnNeQHJwNmY7TjJVKVFjRV9JR20zcVg7V0BMSlxRdWoxVkluR1Ai
YCsnZFNVZy05RlJwLlpJVDE3Ym1COVs9aUROKE48SFA/Jm8xRGwKJWNMYD45X1EyZFg7MykjcFxN
XTZpRVlGLV8qZSRySUkmbHE3bV9lW2lsbilUa19zSTI8TUJRIickXzRwUHBfZVdOVkpjWDRFNFE5
WUlQLXVqKEVtVWxdPm9YZj9ycD45UjFrXzwKJSdhKVlvP2dRZDdCQ2w6OVw5QUwkXmU8P2FoRk8x
NzZiQGBgOnUsJkBYP09PQWJDXTlEIk8tXjklVFdbYkwmRC1wMUM7RDY2MHFjQ2J0bCZWLXRUOysy
O15VUl47MEBnJFZoQDkKJSNzXSdHSUwyb0VRaT02YCdxLkxyZnBQcVZda0I2Zz1SLzxpcUg3WFND
YkVyNEcla1VOSjxFWT8qR1clMy9dUyxrJSwiZCIiIywuTjQ9bVNPPjczaGVXS0lSYjFzdGwtVVpX
JSEKJSJKR1FiW3NeJy5AVC90Wy1KOG1tYTVZWXI5QUVPQ0wyXGkwZ2QnO2pTSllRPlNMZFkhcFxX
PCdsaVhcWDdTWDlHSE9oK2tEVSU6PTBsRz8tbi1FTFUrPTUiQ2s+OzxgW01mJlkKJUs7UDgtNUVE
NEFWVExdKkNqT3UvOUtIWztaP19MQjFrbEEmV1BVbFtPVEgoT0xjJlw4VE44RSonIS0nc2pqRXJS
SDBjJmJKaUg0clUoK0ZTTCdUPCRUI00iR1AjLHM4X2gzLjIKJW1hcFopVS1YMkcqKmxQSjBmRyp0
V3BQYFIsYyJbJig/OyxbO2AuI0osJWRkL00qVi5qSDYwT0ElOmQpZiNmZChiNFM1WFxwNk9DWDM6
WSE0NEpkTm5nQ2ZKQ0ExN11XblNYOl4KJVVxTW1USDdFcVdNZSRhbDhtaUtyM1tCbm5CMl5laydp
Uy5zPFs0YGBqL08/alZicCo5WV1tWUhYaipePU1GdSU2TFlHNFciQStbV0tqPyksYVY9XkA/RzQ2
ai9gTGNZZ1lfVTMKJSw7JGp0KDhZRy4xQ2tGNCdjSFllOUojKUlJWGhEPSwxZUclZXUrK14nOVtM
cjkuJV5QNjdwZysvITEoJysnTFc4Oi5bQ3A8UlIsOTQuS0VDUF45Mi0uVFNta0dOKiR1SG80LSkK
JVJGLmJfLSMrIXFKTWA3dU1uWl0/ZkFFOkRjPFYrNmE6akhxNzhkciwoTColNCFjOkk4bmdxIXU7
ZzpLXltNLWpAIyVcRyRmKSFMZTE/YUVvPD1LanUuYTskLipRLDsncGpOVmgKJV88VkprTzolOXM+
MFJkSEs+U0NVRVMrRlNQK1VETTtvNzQsQ0VTPkAyKWQ2THFpa2RcQkwqXSErP2ElPXBZWixBP2l1
U1cuPUdSa09xXF5jLGVYKC8jWiEiMz4hI05BbDA0XGwKJTtVNCZZKyYrUys6YGchTU11YSY4N3BY
TlQmS1JMKSJYZEtfPVJRKiU+aTVeMUhaJVQ4KCZHPCgmQHMpKShBZl0/cm1uOz08WzVOciU5b10k
WSJ1cVJlR0lhdWwiWCcsLydOOFcKJVsqJFMrMmVTbitxYF8tPSE6JS1TLS0xNWs7Yk0ub0A4bTxL
U3RQXl06Ty5GW2Fva0Z1VHQ1KTdfOWJncF8+Vz40OnRXJDYkTjFoLCR0RSxVJnVDVCI2aWFfaVpO
PFs2bDlhIVkKJUwxLFdhK0tLYm8nPzhhVjxJSk1eNGhKaTJnPVRlPE4nU1suW2UvRjY9YC1SZCdb
cFc8KDxqQzM4bW5KMixBb2ZkXyJzQC00PmUrWVRzXU9XNDwlPTBBZCU2MUlBNDdWVjZlaT0KJVM5
TWRCMkZpaTpCNGU6NGI6NipSajZTZTI3JyIkK0dCK0kzcyMuNE9uY3FvRXIzSCxwO1s2JFoxMmlr
VDFgPnBTRjBcP3VoOkRKQCItaC9xLio9N24zKzBpYyVSYCg/UzdJPzEKJSIjVm5XT3FqM2EsSGhx
JSZwUDF0T1lmNS0tSGtnWDtZWSRETzAoUV86PG9xdSchMz9IT0VeKi5YREM+JSo5blVtLCJbXSkq
TjtAOkRvP1wpW0tuaWZsbCZIXzRbXk5kOk5XNTcKJUBSLTppbzspSCUhJzJiQU5eMTklaXRvKiFj
LCoyMkJBdC41RChtKCNLQDJkKEdSdDAtS2tFZCMnXnUtQl47OyVIYSUyKl84UDBkQz1HSllILitQ
PS1xR0pOQTteZGpfNEkwdEUKJVdEbkthSykxQ3JeMCRBaGtzQXFyQ0InQ3QpT1tta0ZIZ1E1bkow
XScmRzk/Pi5McjNZJjJwNUIqUThZJzJCNz8+LDhKJS1BLzNdYi1ES0tfKCExKm5EbT1LRy9JdC4/
M0M2VkwKJV01RDhtPT5jYSYyX0wtJTMoOlsoKzskJzVELlkxbTFRVjJRMSksa0ZyW0MsNltxZW1a
UXJUK004KjZBYC5eJFd1UCVIUSlLTT1NLSVtLFJFYy9XPyxjcFNeVXAqQG5kJWtvPGwKJT4kV0ln
ZnBAWDVIbHVzW1Z0SmNrRnFmXVJMTzRlQi5fKjxuSlJMMzVWNGA+clM8RUtAKldSI1FiXU5CVmFz
N3QlTHInZWFLPGNvTW0iT1ksZF1PaDZNV3AwJ0pAZERaWSpGcGIKJW9WcUhEVWc0JUszbSYkXzJu
Zy5IU0JQYW1UOHE1JUwmITtsS2MoOmRFXFxhS0U1N0lwInE/PGspX0FZKUUya1t0LUk8XGpGYGIy
Mz5UTW8mN2E7JEM0XzEpcWVxTmU3XEtFdUAKJSVNIToubT9HZSNbdWBoTmMtX1lIRU4kK18qXXFO
PUspQkBdP3RZKlg/QlRGVitTbldzO3NEZF5sW0RqMiNvb2xXbnUlOGNfKlZVLjkkPiRWKWk8cUcw
XG4zcFlXYV1ROERPYSkKJXExWlxSLEwlcCVKUm9gLi5UZEA8amE0KDNaLC85aVBfVnM9P19gc1dY
YzUjPjs3WkAwMWBMIm1WTW1UV2s5KztmP2pkMDxRI21vSygqXTZzO0BEKWdPOzcoaTNIRCg6LGM4
ViwKJSYyRlRPJWpIKSxYRGM7K0NGUztOY3UjbSJHUXJKKGZvY25TMGtxPltBISdrPElcRnJyK3JC
QjxTP0NYcCdJa1E6KUsnO19HaE00UWpxXWtlJ1NzKGhPZF9hV2NtXDszWURPcVoKJS9ONio5OFNY
Iy9nZ2RfWC0iWzgvY1A4a1U6RyhyVUZrUGc6LFxLXXNcOUVYOTxlYFVGX21CYzArZ1ozJyhadCdb
W1BOdCg7VCxEblxPUC9vZTViWmo/dFM4WWFidD4jci89Y2kKJVI4NlViZiM5I1VFLzxTLGY4NiIr
N09ZPy0rOlAuKEpEXj1jcUIvZ1FfQSNEaGpQQixCPUwrNWtYMihLaFVkMDhBVWgsJ1BSZmI0P2cm
PEI9Qk0zV1JDSTx1Mi1vaDc2W1tWQGoKJSpqPD8zNEBKMmVJX0VqIyxaaGtNSUtWMmgiPD07SURi
J1UqUW0xPGM6c2NWTjw2V0hFN1o7b0NhXFgpQkFpal9AOSVub0Vwc1FHaz1fR2w/ND9iZF1KT09l
Y1xnV0FQaEhrYT0KJSVbdXBEYSE5X0ssLVsjR00xa0thbmVGXWRFKDk9Mlk5SWY8RCRSTVNHKVto
Yy45Y0hmIitcY0RRRGBhMlVdSk9mMkVcTyZMP2M9bC9nbiFFcXBXQyYqQUJHNFw5LTM+YXN0OGwK
JUVdQ0NzPCVSV1cwPSczQEVjblknQTYrS0pIJDEkTE87ZyI9RmMycUBSPzliR3AmdGI8TmpDQW1V
XmVVcDI/cHJOJDNSRmEzczRMUkVySm8jXXVHPVxWV2tYNSZkdU9bS09cX2wKJSliMUEsb09xYFpu
Vm1Bc0hodXVlO0wtSTslPXBgcT0qSk9RQVhCXU9eKXF1KldHMG1JNmtRYG9Yay0tQiYjbHUtVjNl
Y0gtTHRdY05FU2RBJHJCVChLYV5yUSE1PXFHUSJHLkgKJUdVZVIzcWFuMmU1Tz8uLGdAQVZvaSZc
ZylDVUE7Ly9rPmZYOyJ1ayxGKktuPCNEYEItUiw+alxDY14yX0w8Imonb25nZSNPZUg/aF5JK2o6
bSp1UF1CKiktOlA8dU1FW3BrVlgKJUVUVzkxX188Oi9pIXE1KEJVU1tPNWE7cVgxUi1eXVMyL25z
a0ArS29gWltvakgzOz5HbSpbKkssNDhGXEkpaVtnI1ZhMDZwbCw7L0BpLidwOSJdbGMtY2ExIj9T
N0VYX3IiWjIKJUlSS1NJZT07YHFoQVA0LktLPmA7Uz0vczRYV0YsUVpmdTZHVXRKWyU0ZVFER1c+
JmNeTGMrNUFARWYmMCUvMidTaStaU29jJmJnTnI6KDdVYk01by8uald0JiM4Xz1aYzsqTS4KJStW
aCY+N2NPRDRjQnUnbzNBPilBQyY3biRFRUwiSzJzOltwYWZKJStjSUE4WXJCLWpiUGhdKFhTNT1f
Yl8qKCRTI3QoKmlZMz8nbFQhWl00Q25iJWRtKzViTFp0JXNpJmgwO1YKJWsjIidTZnBQPVlOPyw+
Mk8iKUtwM089T2JsMGAwczRhSiRkN25iKEYmTiFRTGB0VVQpWEkpOCw1YWpLVyY0XipuKmInOTko
NlJ0T2JeK3FZOGB0ZFVNXixaOGFdPnFSW1RmYEoKJSYpc3VuQyg2KlNpUyhGNG5JIWkuVEZkZiNe
ZD4xPzQ0cTxNcnUvPUBrVD4lOzBQKEBeI0heMTE1b11TaC9qMl9LIjhAUj0jXkpINXAvYVhbRlxt
UVRtZjMjZzAka110WDw1TSoKJTtQcikzIXFCQCNPQmFhIVAoNWIuLGloVkM5bUZHMlNNZjJAQl8r
RHFeU1s5Sy9wKE0iW2FuYnByP3NJdSlGa0JHLzdSQ1BZc1Y2cEhhIVgwQjhTUSw8YDMyOkZbQENB
MD02KzUKJSw6M0BwPVNpJ2c6XUtwRkora21Pbk9yZTRxIl5tNUtoIyhXMDBrYUI8OkNGMXA2U1Mq
QFImXHJlWidDKTNbRjhMJWwrNG8wRHNiVG1EPTRIcDs+Qk0sdDFfLltLVSQ0byZJZl0KJVRaQDJq
WEBobVtWRCVbbjciX0lMIV88UCdlaGUxISRwTFNwX0YiODVEQ1dAO28kLDtTRk46YV1FWVtWOj9l
U1BsIUJNP1QrTXNpRChjKmNzcTIiWGBWWCVgM2hFOXNZZDBPay4KJTVKIS83NDtSdSdTODRWTy9h
R1FJKmhValJtPSphL1ZWXVk+R1YqPHRFRis6MFImUChQKkxqZE8/L0wzWDUxbV9VXTpSY1dLYEZn
N2s0NjhIRzBVMVhjLFwqY15hZEcmcCQ6QEkKJXBOXzMoMy0nRShWWWdDWSg1NyFvbFtBKjBtWCFk
TG9CUF9xPGtBVUNaLmlnITlzLnVER0puMmopYFAwJzpKWkQ+cmxNV2JmQk0mPS1BWTElY1ZAaStc
RFI4O24/b2ImIU0mbDQKJUgvNk1ZQHF0KWZuYkc1I0osQT5AWFkjXFsoIk1iUCN0JjlpTTVdOSVz
MCM9bDVPJWhnbkU5YzxrcDBdY2l1IjpXNTBjbUdrcUxFMFQ9Ni8qZjNHZC5Ybi1GOSJucnJVKHU8
JyYKJW1NOjZFXDZXPjtEUlxiX1wmaTo8XS4xQztoN0NiZDc8QGw4K3UpTy9EKXIoOV9nLl5dLEU4
JWA/ZEZpJVwqOVBxaiNXNSxWVVYzLzlAMHMzW3IwWTxtYDRxN0FOXTk5IlsqbWMKJTRqJVYmWyVG
NTpuOU9cTWJZJmZKV2MtQDtySGBzU0JRbC9WZ25tR14pckxQZFZrTWktKSRBSmlnOEU5XkQqI1VT
WGdfYFFJaUExVF90M0hiUWw+ZEdJKGtYIkNIRENdaDE4Nz4KJSNVWDFNXD4nKkVgckwsSlFNNXMs
S2ZFRzNbcSFQQ3EtJ0NcKCxhSGMsJi1Rb01hWVY+bUZdZWFkLGd0J0YsbE5gbz5JXWVUJ2pYKSdM
RCwtRD1Nb0tGdEVOXU1cN0ZlQkIxNUkKJSc1Ui1cMy8hKiJXKzdgN2Z0Jz5TcWVsO3FRcykwIT4v
NEsibVNFTWBVWWBeIztFcEc+UTBVT1c3XSZeUnJIOk46NTkzV21ncmdIPXE8aUonb24uXkFxclE5
V2snPj51ZlNHO1IKJW9haTppTy4xS0NMZzd1YGNoIlI9VGdcYVdIRmdKL0xnOCEzTEo8NTRhUigt
a0R0Kio6RVpHTkpMQU5ZVTU0QD9aYW1XSlBfYElbY2xnOjgnSGU3WGpnLFg/Vk5wVCxRcyxRQkkK
JVxGL1BLKSMqNGk+L1tJWmNXS1ZocWMiaSFuKzI+WEU6V2ckTFFGQklROFVhNVRDMkA7IzxkRDxh
ajIxJkotW0BZIlJwITJHPz9GMiFHRWo0TkBfLWA7ajowSF02PSEhUzY0XCkKJVtlQT11PlZjXztb
Uz51TzlWRzxTM0BQX0hCZSs8JFZWay8mXCkwRVc7KSFQQmEmKF8jUC4iczhqQm89MlApUUcnMGJG
JSVOWTBxX2ExQSslWlNYUEVhX2NSb1dzSiZ1Pk48YmwKJT50bTMvOztcSURvcl8zWjErPllYWzc0
ZyhBJ3FedVhgQyVBT2tJXS5gO15DJDd0YFNbXHIqdS9AOSg+OUMsTF4sXzVBW09jQTxNZ2JTclwy
WUlwSCledE9IZT1FL2tSInNHTWQKJSktQ2RiMSdmL3Nxc0lHajV1IV8xRS4vQV1AKi47ZmE6PXBG
RShGQ2VUXD5nYzRTQjZiVlY9cWorbWFzMz5pby9vRlpBTV1iNHNKXycnQi47cyNBU2FhQkcpK2xI
clA/KE9CYCgKJS8uWCgoRGBqQkFNUzVyPiFhWiFbRlkqcE5WK0w0NSg7TnQ9Kyk9WywnI3UjX1U+
PkkvcWdlR18iKEdxLytAMWk4M0dQc1BAJkElXjksQ246VThpNlpRbkxdbFRQZEIqPGAnMiwKJVpO
LDRcY0BBMHRfQDQtZkxnIzFiWFNjM0Qza1FhdWtoc2pAW3VaQzUpTFhrMzRXXChaQXUrIWBfKk5k
Kj5jKUpRa2U/TCgqb21hcnFfMkZrXlhuSV5yO1JjWUE8PF4sMFJIUnEKJSwoY2NATWE/cFZQMjt1
TyUnb2NcMyhIck80Y1BKSSdySSdhOzpkZSwtXGlTYyFTQSlPInJrPy81a1g/JklhaDJOIVc1Szs/
ST1wKCZCJWEtb1xKJVcjSS1JVy81TmdIOD9FSm4KJT9GY2hNPVcrMC1RbihXMkBfamJJMEozc09L
YnRMLiRpOU0nNDBRS2pNKVlUVCMrazs5OWslb11SIzVCQjhpbCRuKihlUi9BTGlwb1NrYS9BKi01
LVM4amRuOj5haS5NXyx1OmwKJSUjWUpcXG1nNWlpKEhRODQxX3BZRCNlKmYoPmZCQkhlakBaaG1K
KEMvUkdmS1A0IyxJQVlwPTxqbGA5UV8tMmdrISRrNzxoaVFvKkIrWkRAXW1GKnI2VSpealcxKXA7
ZkUlcjwKJUBxQlp0IU5nQDsqJVdZUCdpPT9XKCJxWmhRc0NxKEAlSW0hInVtcWExKzhNTFYqbWYk
Wi8mQXQ7SWdJYkdnLD4tTkQ/UnMvZywpWj1obi8oO0NkJVhjPi42PGckaThdKzJkc18KJTd0WlJr
UUorI19PMmYyJktacFpKTj4nSTEybmozS0o5JDBAKE07dUQzLTJMR0BYVCRHRjcjUSgiTlYlI2Yo
M2wlQSpJPVpbclZLczsmLWM0UkkvcyJjayJzczl0OlVDInUnOV4KJVVZc01PPlswbF8xKGAwNE1t
PktKWlRxL1c+SHFDaDZUKlVMTjslOUc+cCI4OEkuMVNcU3MnVyQiVjZJSSpcVUsjTS42c1xWdXJr
dE5rO29qRzUzUihOKVskTUBfbWQwaEY1YDEKJW9iVzhAREhfOyMwRj8/alhVOi4lKUpLSiExY1sn
Wk5nNkNIXiledUchXnRjOCNtWWVEaTl0JS1BdTkvUVEzaDFsV1xeaFxoKnNTRUJgTnFKJj5faGJG
a2UoV2MtUUlXSklQbEkKJT1rbFMmYiUkKSIrJlg8I1I5RSIqJyI/NWlOOGZ1KkpbVGtFai5ccidO
cmghOygoS2kjOEYmWXFub2wlVi1wVjxqQ1FmP2JaXl42SU0qKSRFLE4pNz40V19EOzZQJ15IZlJR
PkAKJVUnLzBMcWRBKy9DaSEtQk1JVVRdWVRhWVFQWjI1JjMwcSdRSEU4ZitgL3BKLC9FaT4oUGNx
KFshISc5QUdRdTFxWlhtQkdUWl9vdSM3dT0+UkdNUHUnXTpAVWhKNWgqbydEPCwKJW9VY3VKNk0o
MmVKJ1ZOdWQoM1I0MT9uW007XidgcU5nMyZlJiIqZXVTNV4mc186N2dNPmwqbWIhSE4wIilqM1xE
N1BMUXEjMiFbMlQxZlVobnFuYHAncUBhbD9tLWhmUGA7QjAKJSZHPEZoZjNHSTsyR3RINi0hVl5b
ITVTWShuLnUqOVBnTkgxVUYkaTdtV2NURWc2TidZZSUrYiNWaUhocW1lQmJVcEVYOT1tVVkrKjBJ
KWQnQDo2a3BFJTc0TCxaTVh0bE9ydUoKJVlLbTklVyJTR3NGZjJtbHI3UW8qOzs1dUVIUVB0NmlZ
WF41KSFGbWUkL2FrMjpgcF5wIioiYi1QJGltJ1VaTDZHR0NGQy0pPmAuJlNyP09oKm9JbE8zY0le
bFtiLEdqLEg2U1gKJTJraWFdcCJMc2lOKztkZkNcW2QuU01VImgzZW9oM2gzPy1mQlVwYE1pU2JC
RV0lRV1yYT1RW1k1KidlPDZkJCQyZCdfYCsuNS03VkNkOWtOJ1wjakhOLWNhZThAWWcmKUNSYyQK
JSNrKj8nIXAjU1VMPnVoSj8yRyVPajgwZmBZQ1kiKlttMyhoOU49W2ZEVWxwViJEUVM4YyJKJE40
bk9ULDJNUjlcZXVRWCw3aVlFbGVebD5gZlZgUC03Vz5WczlFQDQ7SVhpIU0KJWA3dTdwS1RvcWdC
ZiZHUl0pNj1KbFRDQzEpRiooaipsaG1eS2dMOzRuWkQmMCFHayJlbHRJaGdIZiNqLCtvI0opSyJV
JjAmLTUqLjBVI2IhPnRkQitoX20rNU9FLVU3Ly4/XU0KJVEtOTlyPz91ZiRlRm1TN2AucjdrLSRt
XD0/P1RnW0ZWSEZVOWxuSTBLa2diblRDRENHP3UuZzxMKFMiMSJJPWwkKlxBciROYW48UGxsOXUl
RnE6XTJoKEEnQl9XV3QnJ2AibnIKJWJOX01PQGhdcmJnM0c8b1FhaSlvVTNbNFlKRGNPRVZEcmA4
ZT9LT0VMJHU9Nj0/X0NQTT1ISDs9QkZZcEVcMEZzbi8lKElVJSQ9VUUrc0xUVmtmTWxUdF5kISdl
RnJpISxtJSUKJU4hcF0rbD1IOTg/c3VAYFY7IiNLbiNoX1NYZSNdUlBKLF5bVk1TTHQpQD4wSWhA
MEhAP0A4IWouOGtHQ1IxL1FsQjxTPE0hdSRLKjRWJiEyVFIyaFhMKEQ5UGxNNSE3YVxOO2QKJVV0
QV4zJFY7TDE6X2pGKm8uMFFOVmhdVS1gcl89ZFo+SV40SmlDKy9fRFxLaThiOiRFOW9EWVQmWkpI
XFdJZCotQ0pyYSFGUiQ8NSZANnA8THJJQFtrVVtWM2kyP2RJNUJaLisKJS8sSHJZJD5fYjooa2Ys
dDEpZDJDWWtrajJsW1Y6Z1FYYl01a1pXX2UqJm0jaTc8I3VUVWBjbVVqRHI0WHEkYklAN2RTYDlT
Vi9nZ1V1Ii4xPS5Ja3NFVSlLOFQpTzU9SF5BIzIKJVAzImFvJGdKZ1MsIVsnTiZIO0E/OGRQczxP
XHVQZ1ImcmJEJjsyNVJaJTh0bGZPUDxlcFYjMXI0P1c8dEgkQ2pLLTNBSTJoPGJxKSpwMlZJayU7
ZiZUbjpqKi5nUUdmWDAwM0MKJVZfZisyNzozKGFgUi9qcnAqPVMuKlVDPDA+VHUiSWg9VjB1W3Is
XUFfPGkrXUlFKkRgZkN0VHM0MGlGWWo1ViUuUUYjPW89XVYhI1c/XysyVkw9WUJvNiM+SjJxbys2
Z0tucyMKJTNHUjEnUUYrWl9dVG9xZkkmMk0rZl8hdGwwI18tRy8pQmQtWS4rYUhsLEwoL11vWixw
W3JNXT1WVE4rRENUNmxybCswamlIcTBJSjJsXHVYK3FAVkVuIzE4UjghVnUtR0A2QmUKJTs8QEQv
LmR1WU5XSCo/XGIkLUVHO3MiIj0uXEgxLm8oZTpxW2w6PGxEYkRpYkgrbS87XVcmQT9xNm40T2hU
akFWPmNLcERXLGdTUkQ2LGBpbiM1ZilVV1Y/PmZNb0RyOzw7KFAKJTAzKW9ZVzpLYVJQckhEUTha
T3NZa3VQN1EhTyw3I1NdMDI7SmUlJEVkVTduSy8zTl5eK0ssLjNxbypeMD1PSXNmMmNZXyMmLydp
cyU2SithOmBAclpsViEzS0YkK1hwSEBgSE8KJSlRV2tsOWw2Iy9uVEBLKVJickRqZ0Q0NWFjLHFz
Yk9jSC4zJUgiPyheW250Oi9RLSMpbV5aV2Q9QWhsSkZpY0RwWD5kUztiM0hMdF1eQF9lbF5bW2NX
SFQ7Pm1vWmpzayRdOCgKJT8pWkY+ZnA8KHVJVjY4R2UqXTYtYVkuMjloNjZRcWg+MCVcJDl1V0g3
MC9fM1YqZUQiJHJqZWhlUjZFTzVZZ1VAKUUzcS9fLC0yU0U3XlMtZkBxY0VIIUUuWygpNEUhQ2gk
JUQKJUlwMylqJUxbRlY7Uk49ZlUxMU11LGMsdEcmI0NLJGo+QFNdQnFFV2hnPDVaSzIqMnRFW1Y9
N2VOY29WXmtjV2FsWmRXZ0dGJytWWk49UUlYRF0yZl1hayhbLD1oJ0piQ0xibjUKJTs8aTs9RVJY
QClSYlspNEc0LXJgMCJHNHNJQkUhMyMxTUpBQ002byc/MVBVPEhIaktRXldXP0dDW0VISV5iZFVX
aDRoMUQqamlVLzw9TSpxZ0AhUSU+TElyZFZOWFpkaGhnWS4KJV9eNTZET2dWKUdPNjQpQVluJVNe
MTFsMWshPTw1KnEqbFdOTVw2KCJYVjQ9IlQzLlBDNmQ/ZU4sdSdWIUU+bnMicU1PXkY4aEUtViIt
WF44WlNlXWNqV29ZL2VNdUtoJVcsT2oKJTtjSHQuWWFdKlNGJkpXaS1mXT1sTzgkR1A1ZTBVbzxK
J2BMLjlvODpKXURrSlpYJE5gVlkjWWJnVCJ0TF4yIkRGSGZDVTdkRShUYFN0Sm4xLDIrYm5WJGpr
LytNMjghO15jdSEKJV07bGYwKzRXaTIjb2lmZiVUQUtvXTI0ckw6Jm45bFhGNCM9KEtVPEAyJ1sv
MVg6JmgyIWc/LjQ1VS1AbnFuL1ZTSzdPUDAobk42SlFQPSJEbFYjR2BVW1kxVW1jO25cUm81O1cK
JSJdVW4mX08wcmosV05WUSE3LT5OKDFRLkUobjNlbFlXaHVzbSkjLzszWE8iZjFGbTFVJGRORj1N
Yj1WcF9DYmxGNipVQSdpQSJRJjBuYjosOD8uJl0jKyw6XC5xcydsXydSczYKJS4tOjRcUmtKQ0pW
dHImVUtsK0U7XSQsREo8aWNvY3I9QlVXLURfMywsS0g3QywxTCkmS2laL3I/KVMuNGQ1VF1mSi02
K2lFITIuVyQnbVRmO3BGNTQ3M3BqJGdcZyciVGlONy8KJURyP3VgcWQ6SFViWDohMC88XUFaSi8u
J1E9PDFCNVhJb3IxJFIlXUUjPCNqcWNWJzNDVGBzKU8za0txYEUjZjU2UDZjPDE2PzxvbiUnIzhN
USlYRFssLFQhNEonSmE5WC0iYD4KJTJdNjZsU0VwW1wvVVNydHJpbjR0bHJTdDRFWWwmQEB1LShw
Mjc2bEdCQ0U3XGtaIlRYM2Qkb0ErTyNLVD9yYWVhLGBwWVQ+REI0IlEnIUZpITM+dVdGcT5ZMy1t
OE1iOlReMjEKJTVAT0lGSjAjc0xKPTU9LSklW1huSFJzJjxJVy87bj1QU0NnV2JCWUhNQVs6PS5b
T3VyKmVpLG1vOEthWUdJTkE0XnUwOmM5KyFfMjVRUVxSUCY9RD1GYGdJXTUjbWUqXmhFM1YKJUxV
YGZDTktGUVlPL1lES1glYzlMJzA/TFpRbGcmSTJqUnBdMGQmbXAwa2RmVCpLUz06TV42aFEsXEtF
byNXanMra1Mtai1ALm0+SVplSzwxUT9HOCkobVxpIycyRlcsVD9QXSgKJSo9Wi5rTU8sZWJkamIi
c1IzaSwuJFBTRFFVOiZQZ2REZ05sLj9uKjAyZCgyLnA1JiJAWlwuMF1RIWwlQGQ3PWJGZ1dAJHNx
ZUAwSywySi1cS0ZOTXUhYFAoc19rJUlZPF10QG8KJUFfOV48L2gmLT5UdXIxJGojclskOXFvYzk0
aklAImU/NFZPNGQzdE02Ok9CYSQyLlY3PFQpIkUkJVNwNmBJaCwpLzUoJXM+JE9yL3JDXVUhXzxA
KSZWPUhRJ25gaG9PVWhAPG8KJWowMjgnXzE0UW4nXjI0USQlPVVGNSpgW3Ffb2M1TTcoK3QwKnNK
WmFiNUFMSjpjWkh1PUFARTFeY0Q6QkVpYmQ3RyJudWdSV1Y8V19KTUJlTy81QztoSmxDXlNNSkZT
Xl5PRmcKJVNEImVFOSMvODluLiFwS00+ODVTJj5zYjkhI3I3QiY5ViNJQWpLM2dbWUksO2lmRyIr
KyhabCkhSmRKcENUI25IOWUkb09bPVJqNSpBPWQxWypULyc/YmR0QEpYZzBvLjUjWzMKJUl1TCpT
JSpVZHBILTo6QkNlSFpJL1pYUGtZPU9VS1ZlaWpHUzoiZms9cnFpOjNTQmhWakFtbj5HYWxsZjgp
QTdUVl5qPi9qYWZQXSxZPXBIJUVWNWIoVW43OSdHOUlUXF0oZy0KJTFpMWheQShZPF9lOGUwPWJk
PE1aMlJTKTZTTy1hI1wmUUJeNGFnXk5cbiZzW19ROmY/YiVoVjFibkRCIVF1JlIwWWR0dWxuclln
K19LPjYvcS4uWCJRVmtfSEdedD91cj0hTGIKJU9tNEE3b0oyTD4ock42cW5PZ2hMKT1QdDxhc2tB
cyZpQ1VWVDZGcz1XM1NcPTBOJUUlNkcqXFE4cUk7WDtkRTU5PVA3XlJdIjteSVgiTCFLSjpsJUNA
VHQ7RHE4KU1DQEdeLzwKJTxCKysvajNMRllQXVN0SjRYKGFJIWFFPFhkT0g0ZyRYXUlCK0U2OVFG
dVBWRDdUJW9UTlpbZ1JuSmZEO2lLIl1FNmlmQFQhcylvS3AqSmxOLWRSLS5wKTRJayMjcDNxYzRM
XmgKJWtyWWdrRUQpcyM/al4oI21xKmBIXWc7dGtsbSVQImJFP2UzMXEwaUNdJTpVOlFnRyclXWIk
LGxFLVVFMT8xakdAM29PZmovTjVebnJhQDxDRFdlNUw+c08+TitGRmZBJCw5SmMKJWRjcERfK0xD
My8kWFNPXTVEOVw3OHEqX2YzVzYiVmFhRFkrOU5cJS5qY1o7VDRAUGxPPSJeWVsta1hrQkQxXVRz
TjZpJTFBZlQiLEwnNHNPckwlPWYzaFxvPis8S2k8WFtAakYKJTxcTChjOjgmVzJqaytPMHAwSXE7
LCVqXDxZOW5pPWxqWjxUU21aTDMxT1QyMVBzUXBNVUw8OFZZJTgkVlcwSENqKl1WbkRmJCk5Jkxi
Y3VNKlA4I2omSihMZFdIIU5RZkteay8KJTdNQT5wWG9YTTBpRiUsKUAnND1nP0NKKyJlZGJpYVZM
Qz5CaFU1PDUjK1cwUC5VOExuJSp0SE4vXnUzSm8/XDBRYVJoVGswLT0nUUFeKWZFakAzLHEhcFdx
SCgrUCxmLjY7TVkKJWFPYy4sLSxoVGsubW9ccl9YaEVXI29uXmJxTGBILkkiP3QzMSVOZnRsTjNe
a0JBY0osP3JLL1lSVj9XbjYhKVJzIlZfZCIvaS09QjFYMVM1YG8sQ0BqOiIiIjcoTVAiMmRybmcK
JS9gQU5FZHRBSEUqVURoQ0pxRic9KkkoZE42YCI/VVkvI0IxWz4pdEdHYm8kPTdDQWdfJFokdTVG
NlhbSHBuWSRaVllZIkU1KE47aFFZKmlLK2ohNDNTSzs/Mjk3YjtEY0ohODMKJSgydCgmI3MrKzJO
Pz8xRCc3OCZ0SmwlKEpdU2pkIj1VU2pWJj1aJjlNZ1NjOyMwMS9hJF0oUWBvXis2Rl44Lz9iNGph
aUY2QictMUZLRzEnMnVvPGZAYVA/bUAmMDdhUzFgJi0KJUczai9XOnE+QWRgPDtIX3FcbztZXUNT
ZGRwa1hkIyg1RUIjKFVGTjJKIlRlc19rcjUhaiVKJG9sdFFZSShOQE4+SjRwaCpiZCgtJiNwWyhl
ZCZGXGcsZTRFNFptNlZPcUNxKCYKJSYvTUdnZlYpPjNJJzlGJyNPKkdqaSFtVyQxXylDS2cqTG5f
NyIvVFxfO29iXFcmVjhQNTVYbGcrY0RFcTxXNW1pKlEnMW8pUlFkYj86MGxQYmZGOS1yMk5FTjw+
PlY+RUFBTEoKJS1gLCtLaj90Nl8lKksxLUBZSjckKDo0b2lBVklER0tPc3JSWFldUnU9bFZHVFhR
IU9aPnA7NF09Oi84YlM7JW4rX1hGZz41Q0VNZTlHdUBBNmo0VWJIV11HXU02RmRIPzw3aSEKJT0+
ZHMzOlAtYSFSW3NZLTohLTZrSnVpODBxckc9KilWSVJhNWg0ZyslZFsrKjMmbE9RLjolKF1gMy9a
RGY/X0pGSDs5ajpYdEkzJSo6V19XJ1VmLDIiUlkqVjVrM1pnJWglUXIKJXFEJ0tyOVNFIiItX0tq
SzZCaUVtL0ZeJCs9RCtPU0BXO1IwL1RPT0FjO2AnTSVuIkdCLW1AMS0mI1AiNC9aLUoiZTdiQyc0
KFN0KFw7VGwlX2BMOUo4bXBnYSZqSHRxI0AmLC0KJVJqOm9hUFcwWytbUzBaRXE0KVY7RSIrPSJN
Q0wkQ10xSjl1SVM2QnQncllRYTFOJ2BRKSFpJElbaUNoQWpUZ1lwOGZyNXNWaDQtZzN0MCU4KC4i
YkNKYGFyYTNmOlhaYmUvYmIKJTN0Li9OaE1bVSNQbXFsOSFTOy41YTFqJ11mZ0RpKDMnaUVGT1lx
TzQ7TD9ZIiUwUGNoVGpZV14+JE5Qb29hNkwiKyNqckdxJ000OztpNTo/QypTZSNibHVQdGNmOygw
QD5xPWUKJWNOPXReKEhXbz9SWyNdJzFyWmdJcTkiQEVaVi0uXD89XXAmLUooT09ca0pcNCpBQ29T
OkEtS2owcjpvJk9XISs3KCMsdGlKODo8TzJObFciRDJGLE9sRDJyc2Vyaj1hPy8tTTgKJUYpKDdf
N24jTWdrcyUySTM0YiU9QSZZKENgOU8zRkNoJlQtTGloMDRgOUlFMS0kN2MoLDFjXTtUZ05oUSch
NHU3NyZiQDA9R0xrWS00YkNVQFl1bj5XTkJRPlpVNCRzO3FXVCoKJWErcGc+TCkmblIiPSUwN1px
MVtgcjFQUkgsOmpzJiZvai5JU3U9aGZlam9SJ0RQWWhAcjgoTjZxKVkrP2FMYiVaKSlwZ2plclch
bllpZjtSWTY+MCNGY0lVU11DVVI8NCgoTGoKJWAmJmJUYyZqbStTMHBnbFQwcyoyX25kVFEmJm9B
ZT9bXjFtRj5FU21UWmdnYk1ubV1hWzxGVV1wOUJuYkc3LXFeUm0tYjZVb1lMYTQxZydoTjE2bj9A
RUhMSV5EUGdsPyVDU2AKJTpFP2QpbDw2O1xgUFxSJ2ZHL19CPFJtPl5iIVF0OWA9Ol0lWm4pbkMq
JVIiTyszMDxSUDYnP1VxK1pray9WTkNTPiNoal8+XThpRE5uZjhWOFhkOTNdRldRbUFIL1AmcSpW
YzEKJT5mMkAzQCteOUFLOWQ/XydwTlwuW2siLCNQRGtGPzgwdW5fSE8+cGFcQENHX1tjSEovLktg
clY8W1Nicl03OjJOWUlJM14hI0UnNHFzb28laVglR3I9bitlIV1dZl9rYWM4P1YKJTwjJTpGLGFA
P01sJmBINEFqNWEkMEpXXCtpbjotQUleVTJQZG1jNTEjSW4rWEc7dGdLX2RnYyNmQjQ+T0I5RUMt
RE1wcllNcitMODFlMyNicV8zIi4hKz8vaSEkQCYkV1RON1kKJW5XSSlDQT5pMGNkczhxR3ItL2RX
QDNeJFwtXEVmMj1yIzhLVT4nT2lxZ2htXSssPDMvUyoxVmYkOG1lKSVULEIjQzFfXVpiZjMhUClf
JkMhOTksJVVTU1xoWDRjQGQzZUdTbF4KJVRsLnRnYW1xUG9oZDUhMz1ZP289MTsyIlY+WWtwI0tj
aExAIW0+Tzs1aW8lZ0stYzUhJCgmUUZHOVo+NVhEMyFZXUknLENHaUhkLkwtZTZaV1opR2JVc1st
TmA6UDNoT2xDVUMKJUY6XGlAOmlGPjgiQ2goS1IzcTwvcW1dUzxMS0g+Z1dCdDVrMjxocVFPalxU
Vik9VTQ4SWI8RU85aTswdDN1MT0uZ0spbiVqNFFzUGM1Z1EnKzdkZyhSKTlfMU0pUiVRYlIpQ1cK
JWtYN0dcJldZWypkQV8uOTUkRytrTjMyZ2Nsc3BkLFZbOXFIcV43Q0k1RkNnRFdBMEc8JllrYUxV
VE8zSmciKTFSRk1cMycmTiNIRmI8X0xVNVg3O0RPVkc3KFsuOTxcQVg6MzIKJSdtXkYlODhNaFU8
Z0AsbGNGNWZfJGtbIVxYVDhtOTNkMC0mWihFJ0RJY0RKTSFMVzA9Z2BNRVBFO0NmQW1PQGk9aExP
UXBjQDdVSmBIcChmTUZiSzA9KmhNTCVcPSYlXkZ1O0gKJU1SY0Q5RC9jMz1QRXJDJkljMEhcKVxw
cjhGYTVTZkUlND1OaWpYUTsuNUgwZjZIUV4iS2ZmPilvYChfUDpJW0Fza0tudS5XJEVwOjFLIjtj
YDhZLEY3UUVcJSwpRzwmUlhXZ14KJWc0P29WZzlqP2FUcGFSIVpUNlQsSGhfaF9taVdYTyctTVBc
TU9QUSlKaCUsSUBuXSpmMmI2PEJsU1d0Ly9RPFwrPyRPRW9hcnM/WTY2dW5LM2VVTlZtPEtxPWtV
RSZGaGVaPykKJWBCKGc6SDcwT0FLbEBVJFg6bj9hbjVeWTdndS5QInI+QTw9b182X1tOZ1lhakBo
JkNiX2ooOjdHdCRQXjQwQ0xuR2ZOZl1gSyRSdG1SZCZVPFhMLStQMitdZkJhZ1FyazNiYToKJSMy
V1EoXDY5TSJFRHAwIkUjSzVAUFw0J0VwYFs/cVN1b0FUZGZPZjxbOnFANE1FLC9eV0phNl0tOjJj
WCJXcj88YTxcYk8wdVolPlBhJio5Ny0kSk1uL0Y4O0lgRjkzbXJNWioKJVJwSisoSFEmPTA4ZGt1
TSlCIkQ0NCQhMCI0Wmw/amZeLW1IVFFjI3RxJG4iLl9dMzIzQnREXlE3LGJERTAtXzciYEYrWW1C
KTNbKSRUaFppKEYmLHVkN2xgREJeYlhuR2NpV20KJVpALz1IUWFjSzkvNj1YcjJaPFZyLlVfLlUs
Wl4oWWtWQ0lIKSY7WWxpMHFLWkRIJWlbbj0wL1pIcTthZkAlKz42QC1aXy9JRCZ1VCwnXUlkNms7
UDUyLGpXKmAqMGBRJVhWbHIKJS5acDNeSk5ISTpjWjQ1dStYYSw3QHFPWzNTKWtYUUgrJyM3QGNL
ZkUjWnMuQyw1QFg5ZkFYPScvRDorRTJsQEBYbTBeST47b1hNI2tgM2lLXVk9IUljSlU1XSNzMyZr
UTdNKWIKJW88Skk9Z0MwS3BSWjk4NURiaGhOOkUyO145SDloRTZVWFBmYmkhaVksWzsrSz5UaGMw
LTZyLSJUTEdBcThVYj9AMHBVT01CcTJJcUt0b0hRLl47ZjRQbUBcKi00PkgxaC9NQ2EKJVdwcFpH
PVg4RXUyJ3U0Kkg8LT0nVz5uKk4sLychPSokUTNQbVMoaExmPEBEdURZYlIkRmREImI5Lic2IV8x
TGwpUzYyNmsqWEd0cy01azFMOVRrKzFCcmZFSFslXThhVSs9bzUKJTYoVVpmMWlpQmg1RkJRYmgw
cnA6VlA3KUonY0dAXkFTO0hIbXAkLXJCUTpjP0xfZyhTUi4laW1qOFM1P1FRbFs9I1hwQk5nJzc0
LzM3X2otSlgwa0EuTD4tclJiZEczS2g5SkMKJStbPjtTS0wqY0cqZV0rS1xrUT47JEklciFBLnB1
XWFlZiI7X1trWjEhXk1PIjYqKnRpRkNcSTlHXUYqOWFqMEhkM0tYSDsuaExJKS5hR0BYNUlURk1W
LUReXTZlaDJIM2hqNmAKJTFpRj9KXW1AO15ATElbV050OWZVbCYtSSFqXlM9MV9YciVuajw7NlxP
Q0ZDPVAtXTFCM1VGLCc4OHBib2ldY0YiL0srWitgSDVOLHFrITpwV2I4IUFqJ2RkOVlNOHFaZyNl
bjAKJWI6Pyc+WHFFTUxibUZWSTwoL1lnJ2diVTFtOFZeUlFLYSNlPmxgLXNoND48aD9DMl9OM0Q5
XywoMlk4JWJ1TVIoPm86bEFHRkxbJ2chSjlbV2FNP3Q9bWAqTSRvciVIPz1DPkoKJVJqVlltQCV0
STJGQipcVFREJzgkI0VpRis3OCIhLDpeaWQ9SmQzQFs1NyU/Yz9dXC9PRSg9JXIyaGlFOUs0Nkct
Wm1tNi5RJzI5YWoqVklTW3RWR21hOmEmY2RAcEYzTnFMZUoKJUJIQXFcaDEtUiwhVSpqNlA2cUNi
KiJnVUVUX0EpJTA0R2RKIU9DSCMjUyk8SEVIU0MuUDAxV2tbIzkkXUViUmpNM0pCKzQ2NFIvNkAr
SmtdV2I3MlZNQUJmajw+NEhfXjtfI1IKJUA/IiVwPm8oYidicFx0TFgvbmpLUzBJWjY1aCw6QCxT
JG07cXMwITloKjo/YzRBX0FfZl9dOGUlYllLUjIuYFZpNmpIcGVsOjNuKExPNDdKbD5YZlxBYFM9
KDVkQUhsOXFKU0kKJVdBRCMxUVAuMlJXO1NkbWM1R1JEWFZxR1AjTnQqUlM1LWJwP3EiQ1M0WVdk
MFREJWo4SF41PylnZlFxLic+cmFlcGQjPUU5WDRVdGlrTmUvaT9bdWVxYVxQK0ZcIUYrW01fdGYK
JVBcM1NMPWU4Jk4/M1BqM09zIm1YX3MvU1VhQVtrKFksbm05WyNDJTxtWj0kMDpCL2VLcUZNa0E3
Uzo3NV5EN1QhOmJcY2g4Q2pFa191ai5EXTBzPk5bXTFpJE4/XHRlUV9hQjkKJUUrQC5kImEtLklf
MGBmJCojQlNkUHQxMWEraVRtQCs7PTdJLkc5NzgsZVJWblksYFdqUCpBK0tuNSNbYyNuKFRxRUpp
J1JmJExWdTM4QiMjaW9lJj4hYFxJPCZzZ2wkXTRSMToKJSZTIkxwNz9oc0RFX1omTWFXaXEsLUg3
SHI3TE1kPDFobTRhJC1BMGhBOVwtJjdGSCNoZjpcMU1RV0FoI0stRV9yW1A5I0tkKHEvL1k7aUhC
Xi1LOWNMXkYuRyFUSydhXzlWaVIKJUdxVztOSEdfdVg5STRwV1NqM1hTbVlpWjI1dFFibD5wSWtc
XyJXLGYhN1RiXDsvYGFAWjZaRUxDSFBpNCxgXUlRVlRjVkxeL11DdXIhZVpyakJDaVVyXlJScFAs
UHFJUDpeYGAKJT1JUD1qJyIzYGJxW28oUzlxZzxpOi80czpJOFpzLXBJOkhdJUxhcmVdTz10bihd
WnQubjJsZnI8LSdzJ206U0VPcipJQDpUJmZZWmAuWj8/XGlEazpHdUAwOSNsRlsvTj8jYCUKJWU/
RiMtWlhEIVtHZGYmSSRsN0JtVV5XLTNKQUMpIk1CNiJBb1hvTUIiPWcqTDlnUWRWRSQodDxPUVQv
YGt0U2gwOD5LUCg0b2MlYW42Y2oqUTNINkdLSWdvZShgQmArYWFmLWoKJT5vbmVzcD09ZCk8cilf
Uk9hTjZCRlI6Rz4hVTM4Y08vRVBGSmZxQHIqJVwsOUYxIkopJ0A0bCQhMUpjQi9CNCwuOkRmJFcq
bHFedXJMTyhTPigiMkRSIloiZS1BVz4qJE5QWywKJSVMUmRaT2dlPzs8NFdFRkFZaVs4TVpxb1Y9
dWUsWWdJN15gS1BxTWE0clJ1Vl9aYXFacignZyRWbTgwdDgmTCJtPzVOaDsmK1BHaDpIRDVzXGNj
QnRqbk4kMV1KcTRwPykvI1cKJS9WIkU2JWsxSl1KL0cyT0A2I2NnOUM/RCgmNipYISUqZHBLPylK
PC9CSm5VLTVZU2p0VipkQVojMV1DRGwmOV5BX2s+K0Jcbj1cXkhDJU5XOSNyZiUhOl5da09rJDRC
VDlRRzwKJWQ2OSFORFhHTVs7b1FWN2JYaXA6JzotYFokRj5rZCdWJzUoLU9UWzMyUUU2YHBmV1Yh
WGIudU5TZ1w1W0JPUV84KG9uLVoiYnVAU10kQXU2LmxAbDRkO0slJ284UzZSWGNVcE4KJW1pYEgi
SDMtIiU7S1o+cVxMUWFoS0tiaFg8Tzk3MmghWSNaVjwjZ2NEYipKZCxwQj02ZmVjRTxcTUYvXzBR
Y1o7ViducGQybU8uRCwpRUBhYykpKj9CWGIhTS0pWixLYWk1Rl0KJUVFVm9kOVVJUGRCIm5QaGVW
TTJOWT0uTWcwTW1hcFAiIkkiLGUhbGM/cj0lTFdHRUtJYnBKXGFSbGBcMDRDakJGXS5FTVtLIldr
OCJOL1g/ZiRyJ0Y2Tk5WSCFrVWJrWWE7KHMKJSolTitjO0wtQlhvNUMnKS5lZm9aXjhiTWM9bl0m
VUNfNmg3PEo6ZSQlQzxCMEVAaTwnP2oiYl8zIXE8REo8NVd1XkpYM11TNy48dSREcXQ+PEJcTE85
cHVERiJuc19HSVtcQWQKJVI1UU5yLUlJTUojZilPSDNpUmZTVy5JYyVUPiZTPU1wRCc9aWhHNjlN
bixcU0doTyRbajpwTXVSQStYRDxIZUUtUXBrITUwVkVfczBnQj5FJ0JwKjQ3bnNlMz4kJSk1K3JF
Yi4KJWpkPCcnR2MxLW40S01xbC5RV20/ci4qXFguPU9iKmQpZi1AS2c2US5fIUZGZyZzLj4xI0Aw
LEVyQWk5cjo1KTlzQTBZNSxTZHI8VGZcQmtVbkwkMkohYicnRlcvV1hPLjlndD4KJTVtSkw0MV9r
LzhyMmA/NG1ZZSZaVlp1Mm9NZHIzRmNdcyUtVDplX0UsXGc1UmE+ZDpTMCVcV1U7LFdZJSdrRj5p
UDYmQ0pvaWJBXmFxYyRiS1xlJ1JbPFRLLy9AT0RPTkMpUHQKJWd0QWJQYF9pYidwcWJWPG0sIzBv
T0FvX1hQbzZEKVFSTVhiOU1NdUkrT2FHOGBdOWBEWmciTDJGImpSRDIhKjoxIV5vcyU1aDRdb10s
MmcnTiouVCFwLSFvTF9OW1ktcW5YczcKJXJRKSUmLCVWWDhDQyNpUGQzZ1oqSSRgTitbLFg/QFpA
cGJaUlxpSSpVVC1eRDFxXF89UjlnS14zNnRUdWM7cFdgaT9sXVRgOy0pTyNkVE9ZcG48YFtgIWgh
NC5QQ2cyTT1JKygKJXJHM1BSKzZYIiMiT0liM2FyZXBiXDc9VWtdWTUtaVJEcC9xYyw9XTs0a0dz
XVAxXzg7cVlzLDVFUjlJSj90cGEhQGRyOChScW0nR0RlcD5NWWIiVF9XW21AMFMxZj1dSG4nS2IK
JTxDcl1nIWsoTkQrSk1HS0g4PWpuPTpNSlBkISxnSi9RRjZPQTY7aDtmaDlDSCUhcChPUyEiZFNX
c1g3c0JPcjdFQjE1RHAiRWsyRiVsJ1BPMHE9OmlXIXJZWGc4Mlk9LEpqYCwKJSU6PG8lJDNsQk4k
LW9ZWzcvOC83MkJrOEQsVWgjW3BlPGAsXEo4Mkk1aS4nPEtjYHAxZF1vZlkqbnBoNF4qRFYvUykp
NjxsOV1YPmg/UTtvPS1VLUBCRydqZ090czFVKDpnTi8KJUVNJSw5ITUmcjBBTUI0YDEscC0qLDdg
MSQtZmJ1YyZhLFNiZWhsb0BlKmZQO0IxKC1uTlJGcWZMaCQ8ai0vc3UtIyotUixjO1tlSz1fQl0p
Kz5tXVFTQ2lwZDxpZ0Z1JkdoVyYKJVZWLzo9OT1sKDQrM01lbzVMLVc2KnQ1V1FYYEBKNFpJRjRc
JGZGSyZpPUs0K1wuayluS25zaGxyZmFtRl0sPiI8I2BbOU9XWD5jbSt1TT9kQC5cdE5qI1ZqQVAq
YT4obGZzQ0MKJTQ1XVwoT2REcVleUEVNQFVoNzUwK2RnRlEuJk4mSiwjblQzNmFdLCpdRTU2PTBx
U0lALkEuOShHTCs6Q191LilwUGdNIidCODJpT1ZZdTFZSl5oWCFsYmE8W10mQWpfZ186bksKJShr
bF5oM2ZgU20lamxLNjAjbFhUXylHcUVYN2xwOjQnO18wbSdXJiRGbk86KiMyMWhkNmVVIjUiN3Ne
MWowJE8oXVEpLTxxQGgzXVo7InBYTDE0J01nPEJVK2NAbikuOmdsNGIKJS9wWVZhK2FMISRqMD5X
LTpWdExgIl4lMyJIVGdbYCdUbltISiJHN25wRD1bSmpMYzNXUmxxMGlWZjhWa1pFVnIkKFokRjgi
aWNJIiQ1aGRNTiItbjJHL3ReUzEvcXV1LThfPUsKJWsjSTFvJGozJzFnLWRORUZrVktdayY2LEE4
IVApZ19DO0NLOC9LJzklVysyJ2E+W2REcEczR19uImk4VG1HVmlDXGs/JS1dVG4xVk5Hc0NjV2tN
cEwiVCFnbUNiKEYqXzZOQz4KJXJzX3BkV0RTNClWS1ZAIihvKlpcWltSLGttUmtiZ09CYUUoYUxS
YFk8K01IJm5SKzsyKyo9TUUyVlNhQlQ9VFNnMVgvWzs6Qk1mOjEtSDhZWEZMRGcuZUJQRUliQVJs
XzJXPlcKJWVlXCsqZnRIXUtnPi5KP2pEYjxXKkE3S1hqYSFcZUJfdEAmLUxkSW88bWZEOyFrNXNd
anQhWFdHUnIjNlFoTyksOFNXOWZuSmc0OlVgSFEtbjQjLG9XT1k4Vy5rJDYrTFwpUWgKJVooPks+
VShyRCgtYXMtbCVsKFxMUjlhXSU7PFhGPmNINW5mJmgzP3VDRmRAKiwyVzEkIVYtam1aZ2ZNZTpN
aiwkcVdWOEEhWCMnbyNwdDlxMm9HYE5XRzpNNi1MTzZBb3EwPVAKJTolXi9QVV9eXFAyZEQoWUFe
OGFzcFInQSYwckFJK1RldWtYJWE9RzYqNUNwKydPZVNVSz9lR1Myc0cidEFrQUUzOTlrL1hTYjtM
ZmEtb1ZHKk4qcU46Xj9vXiMiRmhfbiNPWnMKJWIjbG5gLFZXLnE4PGBiQi1IVVY7OSh1R3AhTGQl
W1glPGEzY1lOKHVmRFpXYiMmI1FrT3UrZFZnX1JwZ0NKNy8sT1VgYj5nMCFKQy0wb0ZmYUZJZmE1
O1pATUZdUm1pb0tlOW8KJU9GOkBiO3QrOVA/XUotP1lSVXBLVmtLOzk6Sm5bKUs6YCpyJU5wSiJN
Ri5UVFluWkdjPDJGM1RHbmtHTiwsYmpNY3RNRl4uImxaSEEoOilhSilISW1PX0VHU2stNzoqXm43
aWwKJTdCaFwwOl9iVXVdWF1NKlAickcsNUlsKUJfRD5YbEAiUTkyQEAoVTRvRjJrVSFMTzJIVycj
RF4qdUVHR2dmQi5cTkNjZkhBWGJTJXFyX21DWFpoTFQ4OHFHO0BVZTZuIkttIUUKJSYqXCp1Z2hg
TydmZTJTRkYuZFE0L1NFNEVGXigidS1pWytEYyk1IWFvSS9NMjBqcm5SJnBEbkJbXUNERjllbUk0
UkkxKEErKmdUOjokNUtpZWRybylZWyNiWENmSWwxJSRldWYKJVFRJCFsSWInR2gubEw/Y2ReL0da
TGdXRXJRWkgxYUlpKVhjR0VgRyc9ZGZVa1BNKlZZIXFXLlpuXTAvNERlSSM+QE5aLjRbRkouQC1O
JmNEaUZnO0ImVmZub0ZtY1VzOypZTU8KJWU4R0pQa3UxSD5EaS11RjskWE1eI0Uqb0UiVVBxMSFy
IV1CSTxDRE8yL1ZdRSxMdUJFMG8tRV8uMDxgSjNdIl5eOSFeYHRDMionUEQ0OHJGZUk+RE89Ny8z
W01wXWQkcUtqV1AKJTcsaStWX2opSXNQO3ItUSwjUlEhP1ZLIlUiIiI8QDY6PkgtQ0ktc0YmTzpy
bTc/Y0luXkgzMlBZV0BSNWFFPTRIKkdtPjY6K2I3SlFyMnFbL0koPmpZdTFuMl4hRlxVWlRoQ0wK
JURqb2g8WSs+XW5jVGMiPUdzI3VhNUZjYHM0Z2pwKEsxYmhPLDBUS1k6RzB1PjZcLkh1MF1TXzlP
SkBCUUIwRVshXl1JKSdtTWNKPzdKWjkyOUpmYk88JVUhcE4rITs+RFBjJjkKJVdtVDZeNGo5WiZq
PG8mRWFuODEkNDgoUUpDKWMiIWknMG9zJzBYK2NZR1JoLS83bUl1LGUpVVkkWStfRiUzXDIwVi9S
amtZUHQ8XVY+IlFaIl9PLCxuS1BXPjc2dUQ5IztbJ3QKJTRcYShmaUhIZUE4Uy0pPDw9a0tyZVtz
Z2s1Z1g+cStudCdCcTdJREJRX2R1XE1hNVxOOm1oN0owLEZmZCYzZ2M9PXAjVGVMZVVpVVpZLDFX
SlJaQEctV1ZVSEFrb0c5L15IJ0MKJTBfNVtIZipgYCNvJzVgRVApRXQ1YVFVbl4uPF9TOEkpZz1g
YDFdbSNgWWNXQF1FcVk9b2ZaaWApOWdPNS9dLHExUl5tL00vSUBJTVJWQS1aLTBGSk5EY3QxKFg5
KTo6OUpSbU8KJTgtUTNLa11FQmFnaWlMPTExJTt0RCQ6SUdgaCQ+WS1rPzJwO2FDLz4xWjI2bTFm
Kk4hRnIwVSUzPjcsSTk0LWU6MFxBL0EuQVBsM001QEkzQGxPQVRtYEtsNCc3ayU5T1NvMHAKJWRD
LDwvRksmPmNqI15tKWJiTCs4Vkx1WHAsQDRJUExlKVpYYCdbWTVeZCYhNU9ZTi1FIltGWFolTmI+
JVomVXUyQDNJV2Y6PGk0YjtKMmoqNlp0IkxuK09LIityNVNtO0AzSXEKJUNMaGFjK2FcNT9rVTha
J2NrI0wrU15mcDUzVmQ+czJKaiolO3M4dDY6VDkhdWM+aUtAIm9yJnJTRzE5VywzZlQ2IS1zczY4
UjcpVG8pLGE8SGFzJlNIMEhJbU5UcT1JVTBoMUQKJTgrXylOYyRQWXBMZSFlLSFiPU1oNFc5QlRY
RDUnQEZoYTpvVV00cU0/VCRGUEdjU2Y8PixbSnJnWS0uNVdpMkJyXkdDLjchKFs5PzZWaEwnUihF
O0koIXB0UWRTMlddYnRRITUKJTM1V1ApanE5PzBkcy49NChFJnM5VFU3MDhETFUoIS10U2pIQW8/
ZXJoUzw4TTIhKCdQZiNHaWxZL0A1VGgrNGtGU3NsJC9HVSw9MDFyRCMmVTRKW3BPRHRcLyZCPEUr
UlZvWGIKJWZKR1JqSyM7RDpySlYiV0Y1Q0grYGUoJD9KMjMzSVknR2VwPHUzckFtVlBMXGR1Mzkh
VWJFRVUwIlolailiJD9eRTRHPFU2Mik5MENMVzIzNFZsRlpgOT11JGVzdTNpJyEhRkUKJV9yIWp0
N3JoKmoyZ0NSNlNxV1FcSXNjX1ghcU05SkdjcV9DN1RUU0RTPCkyRUtfYDV0I0EsJVs1XDFtRWsn
KUhsMWZKQzE2VV5rYDYrVy1dcnBuQChdWj5KTjMxTVxrJSReUyIKJWpAcC1zZjEhUl9FIllVYXAt
Jlwia0whNGVPXjJaI1kjQyVocWZhcG9bWF1ANnBcWE8sTWQybkkxQmk/Uz9zMGcvSUZlUmczcEg/
NEklJFUnTVtaZjIkbCVcVVJ0PCRVQHItU2AKJT1FYChtIiFqKi1CKXRUVERCSWNiIVgkcDNtZD9H
ST5pMUw4cFhAbEdjcWpnYEM3SDVuRCl0I25dMWY3Q2xodCw1cSw5WHJPLVZETVRVbVgiZlozSmU1
MFclNU9LJCYoQGkrZVQKJSdjM2ZIZWdtLGRha3Vub14xZFVpcUNXM21aMyYwS2o8aDJVIjM8WWZz
M0k+L1o3XWJpMCtXJ1U9Wi9ZckZuXlsiRS1TVlJgbyJjUCFgKSxDQFNPPGQ1aGxxYzEsISZoX0Au
SyoKJWclYStPTlBscUhSKWMnJz9hWV1YMnItbyQiTStvRC80Xz9mK19BYlpycEIrKUtQOkFdblhv
RiU1UTtPbHJrbmQzQHIkNFM4PzMoPmBCXVNtcjova0BKTz9TRzArLSdAODpUOXEKJVgnZ24sPy1l
YDIvX2loKkNbVyYsa29GKiI9dTVOTmRhVis3LlxfWzxyQD1oOnAlQ28rOnNSSjw3Vk11Km0xQkJc
XSxpL1RoR0snJmxlKGE7Lk1VTjBnWEE2c0E9c3BjSl5GUXAKJU1fOU5kXGlXSk0yMG8rRkFoYkM0
OCJRQWFyK0RDUztuVCRIPWNWcERAMVsiKE4pIllQUm5mRD5tajgrZSFxLW9ta3A4UEo5MnUrMDJD
JSslNDdGLzFCMmxEUChsZD8+MTg1PD8KJSg9bCZrWGo/JD9tSW9JS2lZImYyQWVUOzotViEvVzpG
TDk/XG1IYytGSG82K29WZz9rQERILWErI2BRQVJubjknPk45RVUqOGRybmg3Yl9OaFAiRUAlIiMn
TmFdU0JhXGkzM28KJUc7Q3BBJGpJIigwUGtqMj1VN0VJWkIkRkNqX2ZhI0Q3O1oyQUJIZjYoUyFN
bmhrLlhAOkRrNWYxYCx1N2hlZCRkVS5QSWJFbyJNaT1xQEssTzBDUzA0Z1RTZFtdSE1lck9zNUcK
JUwlaE1Ebl4+JlpMW0xFaWUhbmdVMSVmXWVjXkNUMWEjPlJeZS5zKjY9R1psXl5UYF8vRVE8PVw9
TWgnNm1lYW8pKF1PZVcxKDRPZS5wVGhKblFrUUFVUWtQSVchVWlUSnU5WXUKJS9eUilOZldFNz1e
WWNXQWVBJiVURFUyTGlyRGpCYUhgO3M1VXJSb3MkaC4hLDpEdWEtOCVcc2JNW2h0RC5qJT8pU2Ul
KVw3InNMSGJJai4hNlhBNXFYQFlqWnBIOWpVUz4sdS8KJVhJR3EnPD00ZE1qUFxHKkQxMURkYW8t
QDxUJUUmNThPMTJqMVpFaV9VL2FZIjw1aTo8PS47alMiRXNjJEddWlg2RmYrZUY2LT9DUnIkTSQy
RTwnT21kZmFHXnBPOkJeXzF0O3MKJTVcWDxRX0pnS1M5cT5bK01HVSkkPDZTLz1nNyUvTyo1Vz5E
NEkuZlYzI2JVLyNoVUZaYi5zX3BtJFBCXDs5LFQ/P0xAQFhXQFZUT1xWajxPXUgqTT9mJDhCRiFi
ZVVcXT1uL0MKJV9YVEdKMSxEW19DOiNnLGFwOkApUVYtPUNMQUlYPkEtIkwtZisjc3FwJjdydWwx
WlBVIkRdQmNiLSsnQzs7PCxmcmxCPjJcMiFKRmRubTllRjJIcHAxZmhSIVpSMlllQmkxKkMKJWMj
OSgiWldoNUA8UU86YTVnIiZKPC0kbFVrQF02P0c0RD9aLEBZSmY3WygjJzFfSkpcYDs7LHUraF81
T1EyYG1TWmdTZjJvcE8rKTBNQjpuZGdvYl0xLlAhTEMtMT9GKz0iPDIKJU9CUjY4RDtBJSssUC4/
UGRcPkA4L3NadFtAWjUuazpuMT5PO0pKYWBvPTIxbW9YMjkoOyYlc2o7QVojY2xlT2dTQFdIaCVp
QCJzZFNXM3FhNSlCRl9GcTBCWj9AZj4qPUNxa2UKJT5WM1Q2bjA2NW5gUFQ+SEo3PmNPclRRYiZA
LGBpR0dEUF1dM3FJXjY3OllGP0FvczRZKzlvMnEzUnVkNTYnSDZhQXE5LEIjP0lGZ3BuQVlNM3A+
VDFjLVMwXilqNyshJmFuVDUKJUpbdCFhWV1vZ2E2IWgjUSZpQ0wzN01eJC5BIjkmcFtjKW5dN2lp
VD07JyUxdGpWRj1ZWiVqPF1laDNPT1E6ZCMxU24iLjZZXVssKydcPDZnJ05XVWkzWWlPJURgL3Jt
SlhAVF0KJUhuSEs+cjskQCtsbVYlZiQoaU06LSFWOnJqRCJSZ20ndGMhK0E1MiZoNGJgW3Anck9X
RGpra25FRyk1NSVtKzkhOi5oZmRubVBfNmcvJFslT0RVJmBAQWUpQ1tQN1RiMFtuUVEKJUdRPUBo
WmhZYXQ1NUQ7KmBWMDYqKydJJC5lcT9Ba2pWJChqS0hIOyFedTBfRyhJL3JbLy9TaTxOJDxcOSRr
dC1gIVsxS247R29qQ2JUU0BdJU5OWFUlKEJbUEEtbUpYLTVxMEoKJTR0V15EN1NSbi00OkJWIjRJ
YC8nS1ImQG9EOHMjO0JYWTVRT3NNVEcsQ0dpc21mXChqIz9cciY7TSs6RShYUykrMlQ9WiRGYVFf
ITsqaSc1KUlvUDw3NnVTM2VcVC9dTEAvYCwKJWppcjcnTF9WKlJPNERAZk4tNFlwWE9vLWBhNm1f
XF5VJkhMLGNQZzZVMSIkLmBpb1tCcWtQdVNub2g0UChrXjQrVDsxZkpvZS50TC1lNFw9Ty9hXmI3
PzhvK2lSTXVgSisjL2gKJT1FZHAzIzRNYVJwJGU7dW11QyllRSZUWU1nZmZrX1UhKl9WL19kLTs5
XmkzQldbWlVvJyM4RlpBWFIucG48WilLWlJEXk9TKkhsV2AjKS9hYGouKSU3N29fTS5QNEAxOldE
JDgKJVUnOTF1XCtTZEVrK1M2aW5GJD9ubFVhT0gzMixxWlAhXDlWNjQvIlBaYCFlXVYrIWArS2ZW
MmNAMSYwcFBKJEY5LHE6M3EtNmdDNVdgVElBczUhKmBbXV9wS0Q5bjtkITsySjEKJTkkMSg1WDYr
JVlKPGVnPE5QLGczbUFJMF46TixtQXBWJTptNUJZZmFVMXBcYicqNlVdVVAicz9eU1FvaSNqbD1L
OCYxXzhUbmJOY0lROCtYL3NVSSYsMWs6TUZycitrRjtPMm4KJWlJTnMhTiQwSzojPksrY1hvUTNb
bE9LNShKUT81QCM7RFlSWGJaXkNaSid1LDV1bl4oYDlqUEktJnVSb1A1cVpRLW06aGErbWplYVpd
Z01zO1k0ZEpvX0guVi8qM1ZrZT5xYEcKJTxQckNpODQwIilQUkxfPFtAKXRWYFB0O2Y2dEVJLGFn
bUxQVC9uMzVZUEZYNUBEITxcbmUnQ0kqYjQjOVJHV2ZxZzItKjBtREcuNl5AJ21LS2debSQrLScu
YSQ4M291T0k/KE8KJW1nb0wmN0o1MDgzSzBqLUlIZDJBWWZgUj4sIU5kKGEvOklAPV5WRVFDbVQ0
KSJUSkhdKW9pVGY0LDguM2xdWSg8ZEpjX0giI1lBSVpUM1cjQzlkO25gO1QvdWNsXUlLN1IwN1sK
JV8iXXUjOEtJKDddYlkmKCYuVlBsakM5YCZFJlZXKkxIOWIrcGtAL25fYDw9ZCpKViVqYWhdMW5F
OzojckxxRTZgOFQrZHBPOzs5WnJCNEtzJ1Q2RzYkZU09QjBtdFVJUFlvL3QKJSoqKGhxcDAhNz0h
bFdhT08wW01dKSdSOFlBXTtQMC0jYVAhZyo8bVo2QV5vQ2Ekb0syPWw+ViElSkI3KyU8NF9fJUgl
OWppdCQtWmZTTWZBMT48LmFWdVcuVUA8OWshVENpdU4KJTNJJz1KYnI1XGcjPmFQYCY7Z1EzY0tP
WDNEYHMvcjokLFlQL2x1Ykc3UUBTPkdkZnRIODMsY1xdUXNjUyhOYFdEKCcjPkciVCs/Li4pbyxB
bFcyLGVGc05XTCFRVURsNy0xYFYKJWc2PUVTZTUuSGkqPWgxOCxlZVA7VUYpKGk+T0BqLmIyTycl
LFQhNWM1ZidScChmImJwP2RHSylMMSFrYVhbKVM3YChFSDdqXlEpWjpTKUBFZkkkYllCSy9SMTI3
dTchSTVXLygKJVBcSSkuS1FuPiFKVy8zTmMsWmVAayk7OWZZIkRRVDJDSiRtLUkrXz9GLy41QkFX
Pih1YG5IOyNYXEZDaVFoKEg6L0BlXVQ5QiwnL1chP149SiVxZFIpOEdnIyI7NjtSbnV1byIKJUE/
ZGltUEpMR1JzISciS1ReQUYiaiFDTCY8QmpySDA1QzA4M1hGOF4oJFRzQG1nZy0xNzNnNSssSyQr
THBlaGpsUVhbPmpyQicic01eWXFEcGk8XkIsI2lQXW9JJC00OG5KLmIKJUs2K14rQm5wa1MvMmFl
NFxWZCdVckouPWAlPj9xXjs4W2QuNkFcPiNpaT5PW0VAKDFGNFhPJDc8cUoyKj5FXiU7JSQ7aTRD
bV5TVzJFU1gkYiVsZDcvRmJKNjI3dS9QVTZJSmMKJXArbk0iPCprXGU+dTlwITZHOV87RCQ3XTQo
N0A1RVdoXCZHRU9wZEhgYElYIS41QylTLG05ZjhdRiMoRDQkPkJbTGdbVTRBQ0QpMU5NVjwyXSIh
PXBaS2Y+Wj9sSj9sREk8KzgKJTRDIVooIURuXCRtRzQhbDhibUBDQnVsR0JmJj0sbV8zLyFFbHBG
US1JO09sbSVTZWxpVjUyQzgtVmo+dTwqQF0xSTQnPG08R2RkNjlmJ09bT2lwTF1IbDhQTV9yJl9K
RiJkZTUKJUEvTj5vPkJ1cjxbPU02Y09xXicuSWUyYUMmL2wiZjFJRGhFamx0bDZnLiZfTVMoVzEj
WzsnR2NVNHUqYzg/TSknKV1dO21gKmVHMjRHNWwwbkVoblNqcjhPRVFgbixybCtAS1IKJVU1QEcn
UW02Rl05bmpTZWsyOnIsWj0pTGowUUZaVi5RLCRzUiVlLytlZTJpVzZxXkVvJWpkPyxuRVs8Omhg
cGY6QSskTSVjInRwNmZDOVJhSzRIXGdQIjQ9cmgtJDpGOzVTb1gKJVU/UTRHJiFbYyxpdGRnSWVY
dHI6LHRXXWFBa0R1MGUzImFcO24lVEUsZ21TOFQucj02bjBGRmttPCljR2AtKV9CaTlcUEhwdEpy
Z0AjUnRxIWdNTk9kJWpibyctQ0NqLy5VKHAKJSpwVlNJNm1HYDJMKU1mdSdEcTcnLCEjV1gyRV4y
PSMjPktqWS9OW0AjJUw4dFNtTSptXmlGSS44SmcpUGRrdW5aRCNbJ3VcOERjbTZVODkwcVZJbiha
Wl5QNzUqV0p1SDRDdUcKJSIka24mL00nNDxxQ00nQjBtKVgnKidhKyFGUFdFajsqO08wSStWWyND
RlZNa0EuQU5ZNi5zQWIxWk87LnMsIUxnPm1DY1BfQWsjTSxBYmx0X2g3aSVtX09bMjxOIWZMTyRh
QG8KJUNWWSUsKFhfISthL0BPOGMhM0tBcGlKQCNpSDRxXjsiSHBEWWZqTlk7XHAjaHBhVzFNLWpI
S3UqbmtBXkV0YCgmQzRWJiFMZG4rQStfXDczLFYramhkODZsbCI7bG8hOCNOVF4KJTxAJ0NLJTc1
NTRwYSo9YDRbbmYjLkY1ImMwLk5OJFBiPipYX0gjdCQ1KzIjVThFY1M5KiEpK1FMWnI4Uys4N1Vq
V25CQEJqcXFKK0BWLEgkLSJiVCVXQlNhK2lkQGBLYFclXXUKJTNRJEhMYVUuVTYvQFxYJWVgKGI0
UEpHOkwlUCU0RmVTT2MzO04yTUg4Z2xWY0pNTDBZYyZiTUZlUFEiSjZEYysmRCpyX2FWN1hwamNI
Pz5XQjUuYFc2K2VmYUQ9VyVCVk9HVEYKJTZbdWpAPG06KSJyP3NgJjw+KFhzPVFmXD4rLmNsc09W
JD5tZ2FhJGBVZDNiOFU1RmIxVCMqLFdhT2lNaTFQTmE0L1pINjpdMnI2QCdvMUsoPSxGV25RTkA5
ZkxFdFIpKmUuRSgKJW9hVnVwblZma1k7Kms2WD1BOFd0ZGBVVF9XKWpVNyIsPW5wVGM9NyRuNUsu
VkkkLGZXZ29FIztfKlU4bSwzOiJYRSJeQkorRiFbXl5IIztSY1k+YjI4al5vcykiWD9VWCozPicK
JSZrUjRRMiZRPzdePEIvQF1RU10wMVxXYkBhMV03SnFoXS1sVS5RKkJpbFMzYUFIayg+cEAhKSRH
Q0RZSiFHV0snRStnRCRAXD1qIVdeOCtrMyYuL00yRy5WbGtkaCpJMHI5MWUKJV5fUVdeN2dzUDwr
alI6QzxXJ2JfcEVCTG8nXTRPZCkuI2EkaVhuT28uUm1UYlM6TDlOY2xzLE1pIidbIVcsYjgzZktr
SS1uV09UZ3BpYkhxa01Ob25jRDIkaCRJME1WQyZwKl8KJUxEaiZwUCcuOWxNXzhAZGZfJEVPQE1h
Jkc0WiJmUDNCaUdiQWhpcDclSnBWcjdAbTVibSIsP10sM1NTZzMwWDQtTVFQREJhcSFVZkRSMVtT
QWQiVzphW2glNjErLG9DWzFZaUEKJTtlaWxNTWhaNV8sIyJuaGFRLzlDYC0lQEVZalhzVTZmU1Y1
SGE3NUlIM3E+MWdHZWJZRCQ1N2YlWjZdT1tHajtrbGAra1I+dDwiVGYwJ3NvbCNCb2shQSJzN2BW
XS1nK0NBPjAKJVZzXThrWStgOWw4KUAlZCRoJzQ9OTI7Qz5kXSlMUj5wWDIrLDQ1NnRGckZnVkoh
OztGV1dHS1onblFOTTdDdGElPzhTYEsoNkQnZ29uNDsnRWVgPFhPJFpaTDtdL2huYUksUCIKJWcx
REUuOFAyODk6bnRnaGhuSXE2bC4nKl1tU0txIUNCJFxAaS0sXGQxbUoyX3BuRFw+M1JzQGRnXlI1
UyFpcF9JYFV1PVUsaEc/cEEwJW0laT81YC1faE89M1Q2MztkL1RrS0YKJT1gMi5SPy5RRSsyUDRb
K0xbJkpsQlpkLkMxWzlQJzNKazlRXCNWcjRja0xcKExhMDwsYWZOaWhHKisqTGBDVVwwO3BeJ2k4
S1AhU3FgVFAyKGhqRyY9WTRCXSM3Oz9HImUnbzEKJUo9WURaZmQ1bW09O1hPX3AmJDA2ZkQ7Ji9C
KThXXW5VQilXI1VfLCdKPmlDJkgtSE9EQGdwQ2E6XkRfMFRrXk5eWzUrU1omZSEpbD04b006L086
PGVWIiFgQy1cOyNuXmg4JycKJVRGU086Ul5zJjksN2hmQz9WKm9YYjA/cFNsVWpzV1BwSDI3Q2ZU
K2RBYTxXbiQvK2BIYCciOXVWbGVaVjktRXNpPUEybzsnZV1MU2BCYUJhInEyaVxMNVEoYm5wPUlA
TGhPKyEKJU9SbjldWWk3L0xDIzpiWG9QSTJhRTtpXEdFTjEiSyxobnAoVyhMJ0IwN1BeN0RlPW1r
JzVeaGEwbkg2YEZCSlVncWByaiVTMHFyRzMuSmw5WCs9LStsIVs7QlZoXlslOysuL1MKJTczTmdB
O0MuRGMhTj4+U0I6MDFWKkI0RE1ZLSYmZyNtXz0mJEclZSRCOVM+Slx0TGstbTMuSTgzWE8wdU1c
KkZnNGBjRGskZnRpYzlKMDtVYT1nPGFoUVxzcjw6USQzWTAsJigKJXJAREVWb2o8NFczPGVqPT5Y
ZEtuRkVqN0pmKWpRI05IKFAqaEU5XHFAUHJBbVFnTUM+TClsQVdxNGBmJz5AdT8uYGBRZWo3XSww
cUZkb2BtLlBCZHJiQidOPi9MP15uZzBraT8KJTc3SjNzJkAxKS46REhIYEwoUmhdP289L0tUc2lA
cTAqZjNdKmE1SSQjVEZaV1dRNzAqMy9OImpGU2wzM1VFT2JAb1FKPz5objhuR1tiXTg7J0BjKzdt
Ql0oWCdYI3M8WmZGPCkKJU1SYWBRSnVWLUwtbjksITVOQmUtUV83U3BwRFVSQ2VdV1ksN0QlKGRh
OHBaMTg1YyY1PUIkPiw6aV5UI2E9aEk9bCpXWyckNGsvSUJXJSlNRmZjTl5CXllBaUdCN2g4J0Y8
ZGAKJTs2RUFBaSIuJnNVO0chSSJiXXJnaFEvPEBFWGhSOihPX0hAbU1OXiklTnRaU2cjQGZrPDdY
bzgmYE1ZTlxTKSdCUTZxZFpfNkxxZTAmOzJWYigrdVVOVz1rOF9gTTlYXGdwPG0KJUM4TFUoTHUo
PStRWEwkNzFCJzE0ZkhLSlU4XS9dWzUuJDY9aWpoJEtrRTI5cHEiQGJeZjBvVlYqXjxCUEcpL11e
cWZfa3NiM1RQL2JdPUtGMjA0PG5SQDoralpZQVIhcXFcRVMKJSwsbk1nMGJsbmJTTFo+OFk2KSdd
KF9dN25SOysvImVOXDxXVTdRJTU8V0I4IWpFW2hhV3JRInRCTzgmQCZVXzxtRTckNldScitTVWFl
VVBYPjhILVIsclxXXWQkLzk3OTchdDsKJT01JG8jZ1oyZ181PlEwSzUpay5ZNnNfZDomRm5oRT8+
cCw/X0cwa0tacG8vVDhncWAiJ2puNDosbiQtQkJiPixhZEZlUEwsRGQuZGt1S3AxVmxoLjExbSg9
TGY+SFQuLmsyRVgKJUxzLzRLTiRZX1YzRmZARFhbX203WGZGbHU8PURoP0JYNCInbiVyNC1QMDd1
J1s/LCZtQ2ovJWo4QDNFKyZscTpVXGIvPCRmOz1CJDptR250MjdBQk41ZG1WNWAlJ1VZUV5OIlsK
JVE0R1wqWjl1ak5WckVvKHM4I15cU1VGL0E6VmhzJSxdM0BRVjxJPWtEPlV1akIzIis4bHIkQjNS
JCpjYVMobjpkNl5kV21nVGFSY0NBamxlN01fbiJSQ05TcDtxWFFvTXV0cHMKJThGU0tWWi5iPEkm
clYyPmpNN0hmZV9LYWRJaywkc2FXbU1KOm1acz83dGIoaGw6OHJPSmI2YEplXUdxJmlgcidBbm8n
LUs2PVotX1MxYFxuRlJMU1RoWDxZSCVPcm1dPU8ia3IKJUVPYy4zTzJFWURBKUlqdC1DRSVzX1xn
OV8wQkdmXDQ+aURgZk1Gby1EbD5EOzJSQSYiXEdkODtaYGEyNWFNZS1MIiJBRzhxSmBPSlY8JkRz
aElEM0FfUFxsJzVOaGNcZ2csQ2sKJW9OYUZkKzRqVy1iQjtYT1QiRWppKT9kYSdEYC5KVylRcSZr
PDBLSk4oOEl0c1JickRqZ1E6cWhWUl02YSUpRF0pMHRuIzVSKCRsMXByLio/Iy8ralgkVT9nLSk/
X0RSQykjbDsKJWMlIyJZSGE5WllOSFlGdEVkYjtUOCwjcFNAUSkkVk9Cc0FDYWFeVS1sU1kkLWQ9
TnBQPkBVVG5dPW4ndW9zPVBCXFYuMTYzMFloVW5sXyleLmRCSFQ5bTxaX1ZWRm0iXDVmaTAKJT4v
WTBWbnM9JEIzYUpjcUQ5VCNUJXAtLUpEIy8+Ljt1ZCttWEc3VC1iL2lLQDo8N3QsNE1TbU9nZGYj
SGFxaVpPYjkrLkFXT3FIL1ItYl41NUdlKGk4JWxUUD9JTSpJUTA/N0QKJT03UTIsMEA/aGNZdFBo
SEJCSl1LQVRBa0ZeREN0ZDMuJWdTTjsnaXMvJjwxK0BwZzN0XlxcYSlxVTxRQEJIdVc2ci0nX1BI
VGQ7WltBTStaQzchbFVeJkpkNXBzMyNRSik+Q1AKJSopZ2hcOz1mbXFBUCo6P006am4tUVgrRlJV
YjpiMmRAWzpVNiRLWls8QmRRRi5bWUQ0NkwzRkglZmJxT2BiYihaUC5eMyFBKWo7aGMjcF0tWUtK
aFJUT0dkUHJtM082SitWUmIKJSZrOiJbK147RD5oZGhKLTgiPihKYiYvMEksbjAtZyJSXihuUU4m
O3VAZSVaOFpDdFVyXGlvYlBINjkuaTZYc21iPGIhYkNJVmdSST0xcEk3QGpuPG9YaWNDODc7WXUl
XUBzclsKJT5jQjtdWnUkLz9UM0opRC5sXUs4cHRhOGNbKVZdO2ZsbFI7NUw3ZCM9K2Rxa2RwUD9w
R2lsKkQpS1hlXUNbOmVuL1I0RjpPaFRJKTtdOWp0QSZuSCNDVE9XJ3BBOFE/PlY0aVIKJT8xM3JY
XzowTjZYWiRcUVhqcz0sPUJdJidTXnVgbSxZJ1tNSCE5NGhHYmBkLTk2KSsoVGJiNztpRG8lZWZl
QjQrKVEnJDVOVl1OZyppYlw2SFNYP1FfN1xDJlMjN11xXCgqMGMKJWBiViY1aD1ZWiJuMmtVNVRr
RldBI1BEPVllUmhaYklSQ0pFN2RXaWI+TGk6XilpUCNpRWdEYmRTVSgwQi9HJzBTP1pYSiVnXmdu
S2s6YmxtIVM6TSkpViZOZ2c2S1AxQGs9cEwKJTdgKDw2WWhsSE5sZT9sMzhvWlIkLkUqYCw1VEIo
OF4iZUhRalBANjBXRUYscUcmOG0wITgkNSgqYnNgRDNHUFgmUVpaKT5oYFVBLVZzXjBSPDN0JCwt
JzBnXnFcXS9kJ24oTDAKJUg9Y1BwVyIpMzpaYENNO1YzZkkpb11SJDlxIU5UPUNqciUjQTRIW1Qm
cyppMHMmRiJNUUFgLW5MS0ZwO10rJlZLNmFkRD1dOkZATEk8XVInWVJlXWlRW2VRL1VJK1IyK2Fu
TEgKJSJEL2tdTCE0PScwOVM3M0hKOElYLGopYzlSPGFYTlNLYVtJR3EmS2E7ci4mUjpLJHBnZVts
JF5WbztSIz1tUU0qUWosNEQ5QEhXZGwySjBUSEw/ciQyPEwmSVtWOl5tJ0MnRTwKJWBCPFQ+bUkz
MzhyRkJLPWEySCFjTCFINm9EVkxWdFsydG43XmRdNiRzNitXRzhnUWdoa2E5NlU5JiJeSm1Fcj85
cmFeM2RKMWxFV209Rj0+M2RtL1IxZ2YsISk9bmNrblAhV3UKJShoQzNvJDRlQy09JDgwYypgMldW
YGc6bkBdL3BRZ2Y1XF1vaUpLTE9BYWhVYVY5QS44Z3I+ZltWOkZiNy5AM1tMUUQhLWJuRmszPkZx
QmBmQSw/OWkmISJHS0VgPTExWVA6QlQKJVxnXy48YClFQTtMU2c9bV0iTkRvMDcrXDRrWF45KWNQ
ODZUUWVwS3FQJU9JXlt0ZUUoQVdNVT81TkNQYzovTCk+Pk80XzJBVE5jWGZYQzRvaDhaVmAqOjYp
ZVYuRC40PXAjcSIKJS9SRzJyaElNbGw8W2A9PWBQK2dwLmZaZnJYalByWCgwXDlKW1ZrUi5rJD5q
UGdvX1NnLyo+KT4+SSYhJlI7LjM6KDtuOG5OazNKXD9BM1prTV9LazhKODgyVTRHcFhEa2BHSSYK
JW89TSFbWmBedVYqa0lWSlRxZzRtbiEhKDEoaj5JdCNxcHRFXSwwdUxRVTJgZG45bStBblZsOm9e
SlhqanBVYFZ0VTk/Qy5iYC00ZnMoSysqUyRQOjosc3FWQm9LZTs6QV0zWycKJVs4UkBvZ1hqJDtt
YUtjI1shX21VUUJnOV06czhBI1dNRF8ocUdfKFlTailpQEVqKmFSQnBuayFeQC0sIVtLVVJqbUYm
STRrJSY/ST0wc09cUyZONGhrUEtgc0tgOUktZnNYYT8KJWxKITNicVxGVU5dTT90MlZlaGk5ZD0o
VFUnUTItMmY1cG9YaVZUSj1EXVBJaUZfYmlkWCQsQWVLIjlgTCJWYkRuYV4lP0tpQiRqU2I4K2BO
cl9zN3VbUnJtNGJzbXNVRy5nT2wKJS0vWjhoMERTLl9ERl9VQzVxQkcxJWM/YCRrOSxgO0stZnUu
M1QlIU9gcFVPJ3JROGtuWig0Y2NJPSIiLS1HKiU1W1VsRShESlw7T0FwUFA2bmUjP2ZxZmM4Z25f
RUhBZStgMTsKJS1LdSpJPjBqbEBxdDo0U3JSQUFwWHMmPGdXRDVCPlxnc0VQRnFaOTUvSDREKU9q
XUhBXlVmZFcxZS1tMyplKWM1Q3NTcSxdL1UxI0M9OVlMXW5zOShUc2pTVVtTTERKYDQ+OGcKJUxG
TDxhW0wyMCIoOXNVbnBFMDswP0lwX0NNMDssVkUyRShnSUU/c3REXyMkajVJWVFXUDBvMSxMITxt
b2YrTS5Fay5IK2ZuVERJWSdKbSltNi45PnFIQiMyP2J1UzM6Rm1LSlsKJSg+KjVrREEhNlQsXyNj
dV5NZEB1WS5xTGNiOHFJbmdmZCpbckVcNHBRTComNERcS2FBP00tWlBGZyEtTlFaMUU+RjtwZyk5
QztUaG1YQUhDYWtPYTlgVkMpWFVRPTteZkdqPiMKJWxeQ0pLUUkrUERnSU91JlRBRyJeYTdPInNy
UDZBcztmaTwoPCE5Qjo2WVNIZS5XK14hYTUyQWUxaDVFb3Fbcjc9aC5tODRyUTN0TClJUy4+YlYs
MmwsaDw/I2RJWDM4a2RMaUUKJTMuJyMmSjA7KkBrSCVENl02MTEjNTsvWmgoXjY4YUdBWz1vWGZy
Z0xAXGBEUm1EQTVfRzQ8bFdwK1xaIXFxb0RlIlZtcyRJVTNgSDBxMnBzZyY2K3JaRmQ4YF05OldR
TktuX0UKJT0zOTEnaExXSEFuQGpVT3BVXEFcX0tfdHNnNUZUbFYhJEpyKDFaWUxmNihYNEZaZGMo
KCxVVV1DbV01LSlucCpIUHA/SkZrJy47bS0yMmUzW0pTZ2lGJk5JT0QoSkUrOV88Yj4KJVVKNW1e
NDhoJDM+XC0pLldELUMoXj5vJDswPm1GW2VcP0FKYGonWVY0JXMoLU1DLTBzWSIqQEgiO1ZbJUhf
MW10IVFYPVxrMHU2V1lnajRaOS5RalFJQW0zLVwiVmFLaGtiTEMKJWo3VnMyISchRk5nREFzcTAh
VjM0b0wyLGBnaWdVXD1bZmNKa3BTI0xyZzEpUWg2cGE5VVwiSmMqT2Q9cUI0Xzx1UCxwcjlHXXBk
WkAhYj0/SHEqSiYnOCEjSUg7dEE6TWdwNnQKJWAqJG5ZXmUtaiM8RTEhdS8zIzk5ZTcvXFZlOnJS
VzNRITdGSyo3NyY8LTxGNSxuZHAzYD1vMnQ2YU40S2hqY0A4MXROVmddbm5xajxsSzxvcjdSdWRm
a2ZTQjFxNCVCXig9USwKJTJjbVNLVkI3J01SPj0mVGhnVWIzPmorZ1tDVF1EQnIvUnRsNkNrOjBl
PDNLT242cSJuQFdJZ1BxKDFvRiI1YkUpVVRjIzZRdDFsVyNhaWNSRmRgU1QrVWZCTmtHUStmRE9Y
L2kKJVphNmolXk9QKlUzLFptMFw5NkE3TENSYkkiQ0ZFZEBXZWksVlpQKDw9YHI5PFxcMjBROUdA
VnM7Y1lMcDhMIiNvVG8rMTspal1QI2RkYnFQKCN1Zj9ATmttX0pdUTVeOV0sOi8KJWcrNkRjcyMh
RztaTTdoKzxoKj0rcjtING5VSipFZ1lLPzA2VSZxVHQiQW0+Oj9JYDYxVTEvK1gnJi5QUHElI19k
QVZSLWNsQlFJbG0pa1BbKjojUlpWXCVBUW9lSkRnJHArajEKJUxRY2BqbUJddGo1YnM2N1JsTUQr
M2YwQzJDUEE6L0tQQmlTSUstYFdRZXNsRi5ZLGYmJ3RtKmFYME5WJWBEazRkLFJHImYrYHMkV1lM
Wzc7YWFvNTdkLGJhWE9lL21aKjwkdWAKJURbRDskck5WXCZdbUdRVSJFTFIwRzxjaikpc2hUWm5o
ZmdNKmhtdExUUT1iXXA6VDhcOCE8YCVlTWRbISpwTGcyTzopVC5UVnRDTEMsPjcxMicocCwwSlBi
XlNZWkxqW0IoaSoKJUtvSyQ7Jl0+Z2VSOF1NOzJRI3VhWj5zQnFNVlg3JU9VUDJKbVMuTiwjLl50
cD1iIylTVlUvUnA+XiVRYmQ8PTNBclBMTkIhQEgnJCdxXzQtSC1BbiRKdCZEMiYkIjUtLS5rcT8K
JWtgUVM1J0dncldYJilkWEVqaDM3UV4ndSshJ0NgcE4/I2U4Sig1OSpBL110KzVjVFNkWDZEIlQ4
V05SSkFrK0YhPUMxOFZaV2JvP203RHVOaS8uWyoyNXU2JUI9WCRgREVWPTAKJTtFM2g/TUZCT246
UzdCZU5CNTAkOCxLVCI+SzdWQDhzI2BnXyNuIzhVQE8wLiY7RDxbSSZeZ1NfLSV1cmJeKkllVTtZ
QnQuRlhsbG1hT2BMK05YazQ3IU9PbEtCVHJLJ3NhcUQKJW83RlE/O2FWJ21vYF1qWTFgRCxoI0ZW
UGVjOCVTM2g5L2phLFNITz0sY2hdamJMQDo9ZmFRZyNfLy5gaSxZVyYqKC86QUVta0VNJyx0NGNy
K1YsPlFCci1tMkRQZ2RtZF9ocUEKJWpzO0hzOS03P2ZlXC8kcEBFR1I+VGNaVFFxcDI2KkxINVVU
OVMjNC9XNiIvZG8wMCwuai06V3VEWUBqTS5hdUkhaTNqNzVBWU49K28xYiElSExxXS8hLmk6TCdN
PyRjXktLSTUKJUksKVIqJEdXTi9uPTZYN2FNREh1WVVuX3JMSTtmZjwqJG9cVmNTO29wLDVwJF01
RWtqTkVkL2IiOlZacixFS1JRV251TnEvRyInNkUvR0hzJzowOyxqLDtwKWwiLD1QODJMMzkKJTJM
XiUicF1tVzQhVWAqJ21bQENWJS9WYV4/STRpcW5EXDZgSGIpS18zPEdONTUxR04qMzpfRSRwQzI0
JnA2bGloV1EjczIsQ1BpWz86QEVMIk4wN0ZgSiNMRVU0JmQ0NmxiR0cKJW80YWJTOUZKQ1csLyYm
Ij4vO2FtO0BhZW9JaSpUSVwiUSdGa3A3PVBbcFpCISkzL0ldQSNoNStRYUpjODdQMVNFWzg1QEI/
aCFIYjZzZz5zRmpcX1o8dF1FRFAhWF1bb01NaGsKJW9ubjQoYT5UXjlMREo+JGdXZkd1cl9LVl9Z
IiRsMShsQCpOSy04QycpQidXIiVjZ1lSRkdwV1llIk8rdWtNKlxRR1BdZUkvZl8pNTY7dXFHUkBr
TnQhZjVwZV86RmZzPzJbQzoKJSIjbm1TI2Y5OyMwVSRnaz9DVjQ+bTBkOmI9czpbZ2NGMkI5Yz4i
T08nLykxTDxIUWItcmE/ci1bTlJDZFtlJUBZMWZfSHFnMzI6JnJic2A2WDFtbG02ayp1NVhLZlEm
JF5HaDoKJVg0QUF1MlljTktvQmQ+V0lKNCpfKDE2SGAmTEsxJG8yYDY0cTVTSWQnYHRfa3AqJS0i
Z3A2WmAkdTknJlw/OCYkNWE6OS9bTzknOkROdSNoJC86JlNGJ1ozUSY9amw3RGA2SSQKJWRGMzxT
VTRaQUteRGRfcEwoMl9ncCduYkBgWEgxX1ojNS9mUEVFVmI/MD8rclctQWVIQDdPSV9MJSFXKT9h
Z15iSlM3NWteTCsyW2VFcV5mIz1hZTk/MTVnSG9GOiIjaHBKbSQKJUw+WmR1SlBGc3VrSXNxMlRy
Ql4mRF8nLjpFblIpXixlb18oaUpmXCs0Uy9HNmV1bzNPR0c6QU1dNSpBNUtJQDVcYGAqJlItIXE2
VTRpTDBEaC8+bmxpLFQ4PEc7bmY4I0BiIysKJT8kWGRzWG9KTk03TTNjWDU5OUckNXNyOShLJT1h
Q1YkWlsvRTJ1PS5jU0JgUlRxYmtJLCsyTDtuaG8iJTxzcGJFRVlGWEMycjJMN2tBNiFoREIqZlFu
KW9HWUMmP0RJMT8+bycKJSZRPmZUciZZYHNxImFhXnIiMldRZ04zclpPNStZUF8vRF00QVhlUkgk
c3FVazojSEBUNjFqPFcjJVBuXFk2TllHUyREJiM1SGlbcEAvKUE2KDlecHExIkM/KVNgVFpjcVtH
a0cKJXJNYGo+ZXAzIypZR25KTFUyKF42YGcxZExNY3VCLF1vQlNcLSE4TkJeTTdVbWRvXil1Yk9s
bkdJWDxPW2dSMi1yRl49VkBoNERmNSZQanAhYyxVJVBCWiJsSWF0YGF0YFwnOTQKJUdRLjtNSVVE
VyVKcF8hM0w5K0JqcXUzU1ZTTT8lNVlCPVNsRCsyRTdjVkgjZTJlNUw5XiJyKyM0S2ZJL0ppNUIq
UyI/aG1xcFw1ZGcsPz9lXzQlTV1xYWUxTmoqMSZwZ0pyKigKJVQyYldHb25pRnQlNSFcKVFJVSxL
PkM8OjdcWkp0XjJBIiJoKy4kNWU8R3VmVkgqIWloYWEoVmxbbmxWbUpsUCpSRyMpUl8lX0MhKGMo
biU4OGZ1KyxRVF1IckM2IyhGUl4iNlwKJVsnaCtvZGVEITk0P1lMV3JeWFY4Uj1xXDJTQkJhTlJx
Uks5NzFURTFjcVlpIXFXJEBkW1dNakdhakVVTS5DKGJNZyc/JGEvJCNMN1wzSVczLT82aWdFTmFO
XTdKajdpQUFSOkcKJUFDPSRcLHJfRUk4O3Rxa0tjMVFVRTdna1JoSyQ6X1pWXF1yZyEnUCRjLThQ
cFZESFJiVSMkbWU7SmdBM1ZbNiorWzBMK1w6UitMWzJfbFs4M15kOnFeT0ZGRUhYWlRBTkYtRzMK
JVhPT0omMlVXcU9nXEs7MzVOciJBcmshPG0oYWFMQCtjJk1FK08tcVhebyRyTyRjTl4pSVRNMmRT
Pl5uO1ZZMi5eSmZoXFdjUkQ7bGRDOU5vKyUmL2wsVy9ZU081Ol8vQipLLkgKJSMlajhzU1x1JCc4
JTpWSD9BbD5eRWUwM1lfLm49QzxrUzROTjFzX2o+YCY1XTZcPEVlRTFeW1Y5dFN0Wz1zXEo/QyVf
InFdJHRgNzldZnMhYmdbJkQpZmU1TmcmQj9LSkUtPj0KJTFwRyJVQTA/YjleUCE5NC5lQjVxRCom
W0JmVz8qQiYpbSw8RGFdSi9hXmxkPz5CZWtpKUxEZV1gV2AxKEZ1bm5HZTdRQUpGbyouW21KJDF1
YD88aUQtSXMiKiZUPkdAQ1RNZDYKJVxuXjpDXGsiLiNSSGBbT29SLVVUaW11Zl9BOis2PFVkQlQ3
RCpqYGc+RFZmMiVKJl9oSl9oM0dGLzg7YEJuPmVoYVJhaGpZJyVTMFFUK1lDNDVgM00zSGVCZWhX
IScvYi46U0oKJUZaVy06JFBVSiMocUcwRU5UbFhOIV9yUT9cKTlSSGA0UDxLKXRtOmQiYjVnMmUv
Uk5TRlM1TlJoWEs0KSc8ck5yZ2pQI2JZMkEtPVNTdTk6Zm5sXzUnRUhEYCFyXClHYSl0UCQKJXMn
NnMmczdGcyRiYCRcWmxULkJUbFcuSyloQmQqSitBSnRWITNRZXVlcCtsPHFxVXRSXU00KF5oX2Y9
TiZgUFVtN1E3RDQ4cDEtW1JaVFJYMFA6b05jSlI1MShlbjwtRDtFNScKJTxrS2NoO3AsQEtsVUsr
S1o7ImQlMVtqWXNWTTpHYHArbmRPT3VIQ3EnaCVCVVEhMUJibD1gV24qcSRUW1FRRUZZOjddSnFS
Iko9Kjw0P2Q/YiI/LlJkY0c3dEkrUjhtITVUblkKJT5HTm5KKlVoPS4/V0VBJFdpKyFbWz5gZ25C
J2pJdGFeWTZpSVg7IjtmRDhuSVslPWUzUXRVL3FRLU5sZWdlcyYhRj1JMSEnUllVLFJMY2tNYSZ0
XkJwUz0rWWxeImUpOTU/Lz4KJSRXRmMmKEsoVkMiSW88NldWSUdNUl0jViVOPElpLV1OVkkxYFxr
M0YndU9lUihBMnE4X3JLa1BrMVhOVGYvRV5eJGJaaWtUIkNKQmdcWkUnJlYpM15Vc1tnWU10V2ZH
cTBDY2gKJUBNUywiZ0ZIa0A0O05UN1hPSmctaW5ZWFpLJ0lQSksiPEgzbGA5QVhQbEckLmReZDRE
QmFZTyIqOGRAT1E7NzVzRWVsT0JBTnBYLGJJXWpwbTNsbyxnJ2k1OUtnXTBeUytnPScKJS8qSEtj
bEVpKFtkWjgjaW05NiZdR2lsOjhCYSNuZEJnUVhEQCQrUSQ+czYxPltPVV4wZGdiLVNdMGtdMSNw
ZGRsZ08rXkhHL3FlM2h1RURJJ3BpM1RgUHVHbD01MCplajFhMzEKJSh1K140a0dQUU9xSE1TRGAl
aF0scy5PNSZcNFNpUV85JFxoazhGJz5dQlBIXWtmcl5HXkxULlNcU2NTRCYwPj5UNj9FVSlsWTUi
Jmk7L1BRZj51U2JsVSsuS1Y8bDUwMGpSRXUKJW1XSUZMRFhbKEc/ISdEKTZdXyZHZVk0PThcbkYz
WEEpZT9HQ04+TlBRRlgvPiZNYE1tZ0tKOnVtZDd1Rlk1VEtXIyktPCdfV0pMVklONGZjK14wSCY0
am9hWlxgcywrVmdxRlYKJXIxVzt1VWlKbj9bNk44Q2Y4NjY4JywjX1ZnLlZFPW84dD0+PSQ4VCJW
JiFSczZsVzxrWT49PGlTPT9hIl9YOmtxck8xPTU1IUFQNTFXMkdBbD5ERmE+Wj4zcEgsRXUkVEww
R3QKJV9uaCJjMD5TVEotRTZhQ1BxPnRTbHFSMG86c0x0cUg0Pi4zLnM+PCc3XlJuNWZgVnFiSiVC
JXEkMzpDRGwnZC8lcDJbPnM4T0Y/PEckSDBlK1ZvTTA5IzNIRSVNTERVa1FjQmkKJSNPbC0vP0hB
U0Q4al9VIVF1ZEliZSpbVDc3IiJsTVAzdWpgMSpJcGJXdXEjdE0hMU9dTkIiSSRkYEJgSSREVmpJ
PjZgZ1JFQD0qJDhiKlFGOnNFbkVlSERgUGVmdUdAYzsxSmQKJW5lO11FYUVaWF47MGlCSUEpbXFv
b1EwQCNITG1SLVtBRHJ0RC5gMGAnJVVpWHFsbixALVpWJykiRT4mUiJDWkddVUhnJVsyODVjMTlP
VkgjRWVnLCQlXkJLKTFRLz8qJ2RYYiQKJV9ONkYvSjpBLF5KRHNZa1JcSXUiTUR1LGhULl43KCYm
SilENDAjaSM6SlY2OzBmJnI2ITMwUjpqUHVJUE9mQSdAUypTMC9IPkE8ZjFWNkBbJk0rU1E9RDFu
RiVoQSdyXE44KU4KJVc3bmYnTm5ST0NsMkUjXDltIXErREZJPiZnb0IqLTQ7Sk1kTGBoaSNdaF45
YEY5TWszM1FfSFYtaFMlRCYkbDlIZW90YlpFRzQsUlA/ZGNLPTEpMzZZZnM4ZFFqbGxtUlRRIkAK
JVFBN01sTi5sNUItUTxOTVZCX1tPWVptIkxeOF4qRzNgVEhUZyFWLmcxZCllYygsX1dmbGFIXEg8
ZDQ+XjhJVio9UGVXYEosa14rNTpsI0JBR0tycztSV3BgQFgoNl5ebycnOysKJSstYk1VcCFkUSZW
X10wZyNvdU8xWWBZJE06NzJxQ2RfIz82PGQxOztDXWksPUE7M3NASk9zSi5WT0BdKmt0MEQiJGsi
cktoVGs/KGpLO0svIStqVk81NTdqRW9tKnEsVyRadUsKJSRxLFRZO0JQUW4jbj4wWWFQZ2FTUilE
UUZPakg0WSNESEQkS29LM2Y5K2BAKCsjLyQ9UlFBM2lsbWVEaS42PCNDYW0oYTFVSFlvMFJxTk1p
NycyQVYia1hcVF5BKls4bClORiwKJSxlRDhtUCdxKUYjZ09KOGZjZFVCbFhUJ1ZvXVdOKUZHVUU6
MT4sTlQ0NkdRVUs+X05BQy80RiYwUjd1bTlUW0Y3JDRecGxiYEZuLTVqKkVxSDE9anJvOkNyU0ov
MT51blVnazIKJV87QThoTkpPJDlNRVZbMFdrMHIrSDtXTCpwbV0yajVXZTZsVEdwXGRxbCE8UFRH
Z2lpIUhHMzJXLCNAISE4QSJPSj4oWyVpZ2ltMm89USFZNSVOTlRUSS90QDYsVzk/QVhFc0wKJVBW
YD1QXS91PXQ8XiQpQ1VQSy1vaGk6VEJBLlBPdVA8bl1nW09SNkNdLz4tKSpVU1NJL1Y6RGpXNTww
QHAqaURFLG9xc0dmZy0oMWBfcUdiUENDN2tCcyVLc1lBZFM4NCUzJ0EKJUxxXTlCUV9JTDsmSEQw
XnIqRWxmQV9WLlM7NytnLSpsbkE0VEgmWSVLMXUuZlwscllbX2JAdHUiQlRuLG84US4scVtONFtn
OVFnNlxxL3A9UmgudUZIND9IY0ctQV0hTD9RJ0kKJTFKKFNNbi46MikhOj5VNyEqNys0OGZrR0Nd
ZE5IJ11lUUw0VV1pLSo8QV9jPEIlRSQnKy9jXUFQSSIqXzEnPlEmXStnVidkL2c3XmttO2gzaj5K
KkBhXkQjQDpQT0o8NmFaWWIKJW8odHVGTWI6MTEiOC1DSTlodS4tRENeVUhETWo3WCo2b0JrVz4l
Mjs2LHIhUl1bLklyRDNDVmMudGImLGpxVy5FalFCWyxQaVA7Q15WZiZdOT5NVikuNiQzRVxKWWsk
P2Y1Q0YKJVRjQ1JyLk5BSjFtR0c7SDBjTFROSDRRWUdsLl1PJiknbF1LL1AuNkZwOjdGXDVWQSRS
P3VIQkdKc2JBdEVZWFwwIkgqVjZOaUszRSYuZ09TOiZYQUgrbC0+VmAvQjNhJzJnOFgKJT1gTE9q
NW44JmBOPVlcZ1BSXUgjIXNmI2slWEc3NTMwQCRSVSg9LV4qcTApdEZHNjJKcmFddGNBQCMoSCRj
OG90JzdkNFcyZkZnUW5YVkpaWihNSUE5RWgqK0BIVSRLPjQhNjYKJTtackxebk5CPS9oMWBbOFFh
ayhgK1kjRyxRUG4wYUtuPl0sYEtbK3JXams1Sk10Rk1pbCpxMDsubSgoQmYmbksuLjZQSWIkOS8p
LV5EYFJETCduXihNLlVaU204XypLcilOX2oKJSZsK01IWWNiVm1iL2ZVXVQuajZHZl4lS2NVOjpp
MDZdZ01hLyQ8bVBBLCM0ZHJcaDw9ciEuJVFKOiMncS1kRldeVFErR2ZRMzAsTjsmcygpQWZGbDxE
UWY4RS1YJUdkWSpwRzkKJStUKV1hYlxlTW4/KSFGczg0W1pWNDJiMlAyYDxHLyo1U1xyJnF1SWNv
SUBsIkBsSjJpXVtcMSJNNzpXSShqKilvTF9INygxXi5iOygvckYxb0cwK2hWdVI8YnBHcHBqWVEr
T1IKJSFQQiI5PlxSXl9yMnBRQ1xMaFwwUFFoN14nOnM6WXFlRi5rRT1WMUxfRXIwSkMkTl5KITY5
cF5eJmBbbzhfMClnRSU0JFMuTi5NMiJNJGRTXTxFTl1iayw/XzE4aWY8Rj89YToKJUFtbikmUG8i
I2wrMD9zWGVxWGRZaXBCYTY5YmEvLkJTcFg+Q0ZCTic6PidLNFRkVG4tajdoJyo7UVJhMVkydVMj
SSZMZmonU2pdbz9rNHRCJDs5NlMwTjMnNmtXazlkKUE3Mm8KJTduQiI0KjBHJiFmUmEwKlAvWylp
O3RYVWxGXGxQPV1fZFFZbCVwNG1oT3QhJycoPWEkRU10OzMnRT9eLjNHbiRxcm4oTUxbXG8oWkF0
RTwsRSMzVTRSV11QWSdnUUt1OkZIU2kKJTtxMUJbJUNvbTolYDBWSlREMzVlYEcrSTtKc20sWzNF
OW8oLzFCWkcnXFJ0MjNfKzVRUEY9OXAzVDA9dTlha2JgYWEoJWxLXj5PcUl1ZkInZT9hZk0zX0Y9
XT82NFAyYyskciMKJVU4ZzM4aDFubExaKFhVbm1Gc0I+NlQpRFhZV11zRidqMVhURyNgbUw2Zk03
UD1AMUVlPTVOODNWOm9GP15XZyRNa085NT1MQSRfWnEoPW1hXVIjQC07NCZvZURBTSJGXyVlJDMK
JVdTVEkxJGsvWlAuNGFSLlFVMDoxZi9CWl4odCZaSzpZQ1xdSj1BWWxEVkVjJ0o+RFE9VCpybjhM
WCNwZjUwWCI9XFAtIy9BWiRscWtISl42SXVBbithWTpcTS0hYG0pUU48REoKJV04Zz1jNTsrPlFL
SSVMVlxyYlY8N0lmK0cjIXQsLXJIUnExbmZmJTM0aG1bbVFUZ2B0RDtxWm9gNWIlPVRIOUUjQ1kx
aUNTKiNJZiFgSSRkVEJHTDlBZF9fKEJOS3FYQSYzSE0KJWZrVWdjJHRndXNpRyNDLDRlbXUoblAy
P2xUU2olS2dNKCxTO15KNi5HbG0xIk43S3JLQS0iW2ArOU9fbj1cc2UiajVAcmMrVVU9OCJySUEm
Nm8sY0xKZlg0X005TkMiRS0rSSYKJTwrbixyY25QMzdWQyglVTInRTs9O2FXOGNWbk9ZalQmIT4o
LWZDOkMiWnQ3TkxbNDguYHMzSzRYSEguKzlVby5GRSlXIV5fJWM1ZmoxUCpjQFdESiZCVUYhXDhf
ciNubSIxJTEKJTZBLmRFUT5cU0xkPjtBdU9IQTpOYycqPVQ9YTBMODtvQV1YYVI9R0Q2WjZ0J0FS
TEJsWUBdK0ZAJFFMRlw2Llc5cytwWSxOYHEoPDJiZ3BTJ2pTdU4oQyo5aDoyZGVdPlBRRCgKJU9O
MSdoKU4zX3VyMWAhdENtckNzRkVabGpQPVRSLiMvZVdgX2MjWz9VQ1ZeX0g5LytSMDduY0VsOkww
Qi0taHN1Z188OFVPIXNDIilaZEVvZ0s1VF9uMmhXWDNuPmkvYDZFQEYKJSQlODgyKEQkJyRsQlVP
dFk8dHA4Kk8rXDBtX1tXZytqNEt0cWtlV2IlaGFjWyVzSTZ1cSZEVzhUcWRIVVMySGNKbEk2bDdN
OF4mNENTOjJacStbO1tUI14laiova286UGklOWIKJW0kVVNra19wJllcaTQucCckKTVacWA6L1El
RltxOTI5UnJMWm1xOS1lVzs5alBFaUVNKUMzTS4pVnIrTz1FdDY+TjVqQkJgLHQqMjpcOypVIURQ
QTlFVTxBVWJVKz8xNDQpJToKJVU1OXNlLk9AUzJVJV0mUm0qa0c5Vi5zJmM+JCNFNV8xLk4yUjRC
US8/ayR0UjNkRERPPypwQCE+SFRtLmY/SVZgRywlZTpWMlhgOjZzZ1FbZVQiJk5NRWlENVgvXkM/
YSUtP04KJV5GbixQM1UjQmFEXl9cMFNZbD1PbFZUPGhDYHMzblRKbW8hOD89YV42ZXBickwiLi0j
aWlHQGpAbz8rJUdnVGZyUEVSRVw7KGxvTVRSN0NYR1xkdDA+KGUnIVw7NF1tPWVmVCMKJUFHSDRa
SVRfPTRibWUuNi1RV09WLUtdTkBdMWxuVldoNW1UKUE+RChDQjx1NyI7VysjVGw7UjMhTSVlb0RH
JWJlQFldTmlFSGxdQk1wSnNQNjAqZkMhO0cnN28sbFNaNSUnJVcKJW5uc0w6akNyPnVHcyFbPzBE
ajtDTkNDMXMmWUhzZVlJX2RcUC1dKzM0bVBaQWozWkZUMFtNZThsVzEyXChqZCFzP0suNUBjL01l
K0tiRVghUiQsWXQoYVM0XW4uV3BqLy1gOksKJTBaJWcobVU/XU5ST1dHOltAY3VhP0huZmpbQ2s/
dSJuXyw/JzJlSEcxO1h0P0VbQFw9ajR0TyZbY2swUiVyXE49OWhvayc+dSxjXG05dVcjYj9FZ1Uw
KUxDcEZXcSI8OUZCJWwKJStcPTA3YUc1P3Q4SElIVDB0YHBNajtTUnBITC0waFkiKm81YCh1aXFo
L1wkTz5xXj9FImpNTmUxJm9FTShJWCMkXlJTMHVIbEZqY1JqaC08TzBKW19VMG5cTU0nXjteLCFQ
KzAKJSFhSVRfby9wQ2dSbFMtaUcoTDFeVyZJLCIiSkJqKWtsVjc/aFMuQUZrLD9DcU1mNWp0QFc1
dC4rLEc9QG9zNSRDKW8hNypGU2NKX0RFKy1laT0rPD0jXGooJC81NyRdVjssWygKJSFNMSNGKUIp
IWY0cUgtYlkoO0M6REFzMmhCPjk7JS1zRjA+LiheNSg2XVdMQColXVRPUy80JjdQRT1dRCRyJi5k
bE40XjdbZV4uYCtJXikoamVcUFtvJU0rW1xFVE5ybC4/LDQKJU1OOFRBWXJNSSZvdDBPcyo1PHIp
LTwoK28kIl1ERldPbGFYVixyMiJdLyEqSFlAUik5JlRHYFxUczlEX01ULFduaCdgJDA8VDVAX0w+
MnVbKC9oZU4+Y15VKGlbPTpYPTk4TyMKJSEiL2g+JFk6Pi8+P2VbKXFOUFdMXV1LQj47N2JgS1Jz
NW4jPnFsTyM0YEZCMXA1bGRhNGxQOTYwL0d0NC5XLEFFZkhFUmEyUz1fdCs1YmhoIT5wcjxnSmZf
TkFEKU9KQVlUJF8KJWhJOCZacmQwdGJcWmtNbTdFVj9vMEdAUD4yQS5YYk5DTGxAXmlbUGRUcG1o
Q1lSNE5aLyQpUUtpb0Y1aDAwMVonSUIqNU1HLW8+ZWZtVCokN01AWiJBKVohJFZMVFwqaVhPOSgK
JVdfK2c/bmZsZkNwIlJhRXBWLV9PKFJaUHNSKWtlciNBZEY9TFtEdCozWi5ETSxLSTBROyYscjQ9
Xy9BZFVBWlEuZ1A4bClTQDw0M3FOSD4yV1czLnIua2pwXF5GYmsiTExQOEoKJWdnTUg8NTNaSjMs
a11RYVYkXVVKZ21HcScoXElzMUAyQHAvb0c5TnIoJiQ9LG0vTyQ9IiNwPWJKMTlwaytGTV1zYDo/
YElUcEtYUGhkISlgMipraWdOQ3VEZ0VFSnJGbTphX2cKJVhQWyRSNk1tQypZbGouIlNPdCRBJyZo
byw9JG9tSiNCTjhtMiYlTFhtOSM7Kj48R1I/N0UnLTNNRyNsKV5qUnRRZWNWT0dmaixaIjNnZEpX
QEw7QGFMV01uRSFfRydlWyZhYV8KJVtCWl8yOmVSUSNsZk9kRV5QYDBdcXFsQDw8SGM+U1snRTk2
QHBGcTIpbGxAUjkvLjwhaWNAaytXX2I9KGdkRXNdQGxNUWo4TGJnUyUtNiFsOVpAKmEuVzFHYi1Z
OGBGUyhbPFAKJVBEbUFmRSlIYmBoIjBYOzFvJipxSUJCZDVBVFpgLDxfY0hGYGUqQElkLzFBQjVp
bidpayx1NlQmUmE/Ui9KakdgUyxJZWU+REVANUdGVSM/Oj1uQW1QbSZZU2VvVDBeMFZ0XFAKJVo4
bGpDayg/OT1OUWpuZ1piTmlMQ1kvTlooJ2o4P1xjLFhial9dVlcpaWtFMDMlZDJyMixEcmdMXl83
SFgyZFE1SUU3NGBwYHNjaVtwS2RDR21BJ1shTHFbTWY8PUdGWShtcHQKJUlmJVgnZTg1MF86cFpY
azplWEk/PmtlIV01QlAvXz5uXW1xR3IqMmlJOltiMzlMNkBPaDdpLEIrKzdDYTZIOkNHT04rLGE3
Xi5PQkJLSWVaT21FRGtQMD5pZyNzVWVDRU5Fc04KJWdDZ05xQ0whOEZwXENbPmY9VUhDbWtVV2w0
JDdyP1d0PjwjKUxVR0U4RFI8Zj9SdE1JVjEhSiZmWi4lZUsmYk42aCo0UjcwL3IzWS9WKi85PSQt
OmgnRE5Oc3JERStkTDYyOVwKJTZDWF4nLD8iWGxIPnFDMDInUmcyblpYSWBmVygjQzRTMEVaNmMx
Sz8+U05AWUVwSj9yIk9aNmtvKi1xNjMmZ25AXnA0QGckJjYkK2d1JVM+aSFvKltdKzE3WCNhUjFV
RDEzYTsKJSZHLV8hXjArYT1KSD8kNTkoZyIqZEtVWG88PT5TVSdLVDAvSkkyNlRZOVhzMzoyK0pl
MkM/JUEwSW0ySFJsayglZU8/O3RQNVQ6O2c9O0gkMyEkZ1U1Yl9oXzFYajQxTWpfZkwKJV41S0JL
Smllbz8hQU9Wcj8rVzAxNURlQy0jWkZXdSUiIVJca0dRW1FoIj5YSW86YW88RW9FXVBVOFZsTzhM
PjxyNEgvIXFRKXU4S2JxaDZuRW0rT2RTM0QzVSpKb2xDVTtOLz0KJUloJDIuQj9ZKz9pMSwqXEMr
M29PVHFham07N01mNVlTNClrR04jLmxJXVEvJGEqc3M8IWczbGhLYXUuK2k4W05NWUQnaVZhR1tZ
UTtDOHFfVk5YZCJqJVBLWCE6Xl5cNCVINj4KJVRwbFloOWZfR0tMQy0vZkxsJV0uVjpxZGhMSEs5
LjtacD0pYmIvRVxMKy1TRFtFNTJGbDg4WXU1Yz9yQiJMOEowW2k9WykjaDc3RWVkc2chTTpQdDI2
Sl5PMWIrb2FkKDlqMGoKJSJWOW5kajciPU8oQ2UzW2QtQVtnUU8tUjNYO3U6XDxBSzddSHNLLjBQ
J15MLCpoXFFeSi5rKVJQTmdYZCYiPGZlV0NeZGojIWpHVzFTaWg1VVE4bjlLVG9ebEpMcitDImw6
V3AKJVBWSXFnNihYKXEwaS5qS0oydVUxZyE1a18jJ3VFVkMvLzI7WVN0KWViNio/SCohWF1cOUov
UGRqNVhCJU1qazk1I2RIK1NLTXJDUD5gZiMobEdjMEdtPF9fbWpadD85R05kKkoKJWVlJFgiJ2Am
RE5McktqYjFEODZqQEExdFNSMCQ5YSklRGt0RSZJQWokc1lNXj4wV2xVI1U5RkljRU5oU29vK2k/
ZUgmY21BNlFvWzlQSi1rVCFAcjJLPFJzbDo3YG9WcTEiVUoKJWJJTSUualsmRVhOKWA/dGVBUilz
P1lsIUBeJmhHKDFSJFFPLEQ2c0U1RWFkKTpxVGEiU0khVnE0VFFDZUY9XiRWTDMiK3NXX1RyNWBn
MUMiJXFKMjcxLz85cUZkYWckZCEiPWoKJWkzWHFzY0slVTk0VD9raC1ERF8lOEY9OTs2bVc+a0Ey
bTZ1QDYmTGZJJiohND45YVY3T0tyWGcxO0cxKVAmanVoNFVfZzAvS2M8UnMrPmlUbWQiX0pMYm8s
UTZucTdAJyZKL0UKJV89XkRWcmZsJ0NZXj0pWDVsYmdpKltDTlpSbi5kMCM8MF1VM0o1OWdEMitN
Zmg7dTJrN1xVMDhrZlg3JVNjWnNPTDtbW2FrPl9tYkRSWTYlYkIuVHEtNS9mak4hQHMwOS5qWGQK
JTQpRFUyQ15oV24raEw9ISl0ajwia3FEI05AJUtbSEJKZlAmSC9WdGluIkhvTUx1QUN1J08kSD9O
ISpEYSw1a24uX1QldVwwYl8/OWo/IyMhMlhmSjlHSVdZP2M0IlhtP2YnMWUKJVFkLFM1TTNacnNh
b0Y7bVtcK2hwcStHMSQ6OSRqSGA0dWdMUy5KPG9JPkMlMFUtXTtfW0hsUUpFX2hZJG8raWRMYj49
ViwnMmdSbVAoQidzTS0qUTtgM2I/UE9FTGIrLlo+QHEKJU9eRCdWMF9YVlEjSkY0RGQkOSVFLy5O
JE9FKCswU1pTYHU4YT1YVmpQOWhya0tANGIiViRGSU8+U189Il8jbzRybDoqV3ImOS5GNilkQWJi
NllIYkZja2xmKyxZQVo/JVA9LlcKJVMvOChTMD9rWE9WVkE8UG8oSUU6PD8wUW1PXVozPUlfYkRB
LCpoSXI2dU5sM14jVWVhYWdCYVddYTtTSDNwalIkMV1DPzkjZ1NrNVVDbW9kXHRNJGVcQEEiQ19D
QF06WUNWbVoKJS91LFxkbW1sUDE3ciwwSyhDWEw1ZCFSYHNbMkktLFQjPmNGT1dOUUxwclUrPmwu
QTxUTFc0Uy8lQ21VUD9MLjxvOkRNNl1iNGxBbjwqQTw5ZVJOYiNELWdgPFx1QWI2L2AlMVMKJUJi
J2ZhWzpATGtEKDciNFU5V2ppXiJ0PFJyO3FZLVQqLydEP2BuPT45NU41RysrRGFmXW1lUUVdLD1r
PyZUQ1BlZGRxZyFCKEMsV0NlOk1ZUWxfYE1QbHA1Y2tWalVaYSJHPywKJT1cY3JeZ1hmVjVwVig1
bEIpKCNNcEpgQWVScyEzNUdWJFNuXSYvXFVeOiRpZztrIm4/Nz1hQzBeMmo6RT5oXlNxMC9IRUZn
IlFzIT9wM0VpZWdQXG1IS1lDYS40JHVMTzVDVSsKJXJVdS8xYl88SWJmPWVNJVNWOV5BKUVVTGRg
Pi9iZCE+WU8rXCwjRl5CTlFVSmVvZjFoYEgvZixyUUgmZUohRTUjOHFrX2MmLiRQKEddKF0iJyxv
K1BoQT1tTDQ9IXAjYjhhQCMKJV40aTNVXUkiVWBYYSldVW1nQ19NPkdfXz08KUdbTmw1Y0dMPi1R
ViQ6Wih1MTxMYypeITNtQERgbUBYLEgoL2giaU5Ra0NGImVQR15MZi5dbmRtOUE7NjooWixGZW8z
OkIybFYKJWtfI0I9N1MhYDklJlE5SSFCY11wcTVdMjJuX0BqaVpuIz9jWjVoOHImdUUzKi5tUism
KkZTSSNQVmopdTdYXjBvWVE+R0VBVlhqM1teblMuMUQ9TEtXWGJUaVddMUtNOUlIc2AKJWVeWlVg
WHI1VCEoQTZhcGIqL2s4KnA7WlAkSz44aSE6SipfVHU0TlNPRStPXEwrY1RjO3AsbTZbaFEhb1Bo
U1pWUEFoJS9vW15VYFNhRmthaGRhISlUaDlfJFA6RjVaZWdRZmwKJUBEbzQmP1U2YVZEMGhOJysx
YXBMMW1QQFYpJ0pORSlRJl1KZVcuI3RxTz0lN1UmJTBzNU4hMGFLLiQqVmMuODkkWT4rW2o/aFFf
VWckLVtBMEUlPm5EV1ZOJXA6OlYyTyssXVEKJUdTLzhpPmhzNDI+NU8yOThQajoxakVDNDAkMis4
KiorWS1nU1JtKkBcKiY2TyJjczoyNEBuKUVGcUtHNSNYVGAoXDlxU2c1NDBDYVlcMG1BKT5eUWs/
IVchNGRbMG5MTkV0Z28KJTUpJCNxLVRJNmNrUF4+X19KdW1XQV9QWz80MWUlNUhQLUFTI1Q+OE9v
KEVxUyp0b0wwX09NZS1zJzdPNk9RXFlXU0VhLkRiZSFnU2hUUzRvIVdoSjdxRi1Oc2xSWjFYKzou
aiEKJShLLWZHKzQ2XmMobUNaIU9mdT9KVlpjW0dfaE0+LCgwMCNnJVAkaC8zKiQ+Z2Y7KUFXXlBV
RU1mMCxfKz9vU1lxIUkncGcsXmdHMF8yRG9lXEg0M2Q9Mi9dUDBmSnRqVUVhJiQKJSY9ci9GZmpM
KmhFMmleZ0VmNVA6KS11PlUtZDBXUkBbJT5LSmdQXilsPm0rJlM8blBmcyVCMyhOMTlcSjlSYC9c
LFNLKihabzpgb2FaMi1GaScjcWQzPSdjVVpBJjNrVmooMmwKJThRNWIxODkkPTFZcWgyZU46Ljc2
JidtMkBAMCxnVUpVbzpfXj1paW5RRF8hRiJWNCxeSC9qbVEuVlA5TklRY05SXj48Kl4xbVE5PXBP
M2BSOUdMO1dNR2AjWjp1U1I1XmMibzkKJTxjJCI7YUktYS9HX2VPNTQ0Ni08JTZiOWdbOXVkdU1Z
K2U7IVBFRCUuTVIjVFF0TV5qViM5SWszYVNObGIhbWorUEw9X29mZz9QJDtmaShQOkxNSiZKKiEp
OG4xNXRtUG1ZM3EKJT44MXU6LClWRSNHVUE5ZGknITFJVlBrK0BGJDFMIUxDbGEqWzAzVjxSWVgx
MldwUkAhRSxESnE1Q2ctcG0rUkAhND1JdD9oWkkjOlBIW1Q7WnRXOmNPJlQ2ZWAyWU5jQidya08K
JUQ9RGtdV1dqVSNiLm1DU0pQc3VxUXJNQ003TEI3KSFmYWdhV1BkdCJIJXNFSzM+SFZSam9jN0cu
O0IiMzBaMyZSL0lgR2JuISE8LjBKciZzWjBrZSVDUWZPKFNJW0Q5ZDdgLUIKJS5jOVRobzw2ZiMy
LUElI2g8KG9OSyhlYFYuXGosUV0xT0llZkNicj88WW9cOGUmWiYiZj9wMD1cJmldOEppNzsibTBo
SlFbMFc9N0lnZ05FX0MjJy0kTDAjND9vSGdRQENKdDkKJS9scGFOUEFqbHM4LjtGciJCcGEhZkE0
TSo0JVVrQ24/YCRQbWA3YVEiUTQyJDxQVkVPUSE4R2dMLGpDcDVnZjo4PSU0WHA9XnBBbWZMRW9e
P1kuMjIqQTxcJUFjb2BxYG9gO1AKJVNuLytkXVtDLktcYTshKD1aQmxYVW8nRElHOUU8TDJZUCkz
OTZmKypcbWpXT11CaDZOSTtqMyJrUTswNUo5XnBUclQwYVFpLGhKbzQzc3EuJSRwa1VjPSM7N0dQ
LzMuX1lKTTgKJSc2OiwvVHRWdU9cZVsqNG1eUzFbYGAjUD1kPWpNJ0AwKlpXMCkqXGxrQXEuSFZx
NSZUM0UwT0guNTFUJ1NrWnViJEtgZydqKXNpVjVRRGVeM0ROUmJkaWd1OiNbJ0M8Yj1CYkkKJTpw
LUJyYVozaTw9Y25TcCVNTSJdU0hbMHRZLzEnWXEpVWBvRCUjdSxjO1dAXilSOFhuampUYXE1LSw+
VSdhMk0lV0syWzVqWjJBYCckVz1pZ21iUjNEbzwtVCJ1M0RMTTdhPkgKJW9NXztDYmxAUW0zSjc7
KFg5TnNqK0BSIWlBcEEkbkZYTk9dZjo4QFpRV2QsTyZ0VWVrZ0poNDY3QTRnaEFyPmE4TFdWMSw6
KSRRQmdoLE5hZ21zcDkrRypPRCF0ZlliUFEhJkgKJTN1SnNZVyo8SmhBb28sRUxCWUtRRzU9K3VE
TV5Vc0khVkNCTzZia2JKSCUnMiVzbzkpaFBuNW4/W11kXy8tS08lMlMkbSQmMUljWGFYPSFKZ0op
TEExTyE6Ni05IkA/OXNCNloKJUFhPmAhXlpaS1U6J0NMKkNRPHRvI3RBJEFVRy8wOSJKTURRWmJe
XFpfSEBsSlFgN1hoPS5fVyZCOWcrbEFCQU8yOj8qRTlXY0NmL0VoZmlsUDYvXipOb3NSP1cibmFa
YEZGZVYKJT1HMlxzL09KIy4tYVxsQlJdNElnZiNBTFwhRkQqRmRVISM0cE83Qy1mbSxbSk9oUFdC
NycqRilCL1IpaWg3QERKSzJLdFxNOTVeQVwsSTdJQnRBJTZrQFFNKW11NSE9b04nXzAKJWxkSDws
QXA5SSk6TiddaG02IzwpbVtaQlRZNGNdK0c5XT51TSwuNyIkdEBMZk5aTGY+IXU6XVJSKms7PVwj
NzRuZ2NOXldvZTZAQVFbZV1eczBxUWtycTVgMG9DKVxNNVEmRlYKJXJyKTxcaD0xOytyUy5BOHBP
RHIzSityaClyOCRpQW9ybj9lX3VLIj9zNkpTQF5Baip1cnA+M1VeUj1wV3M2QihRbmMvUStFTSVh
W2Y+JUBiVD4xRW5oZ0EmcmlnOF1TXk81dEIKJTVRJFJMcnBmTi5SdVtKdE8hIkUhaislIWFPLmxV
RFdJRlwmVERsJW9HXlQ/LD5sM3U+cTVcaGciKyxqM1AsJWRLMDdXWklsVz8xKFlMLCM0ZD4lamU2
JiNEQ0osZUJYcCItdFUKJVY+Y05faENLZikoZD9SUGVfajsoM05CUSVnQTxuUyEzLTZbQz8tTiUt
VDBQdUppXG8xUHFqOCxSLkBhP1FzTW4ySTgvPWY8Xjs7OiVSNWhmayZsPFZRSlJcXVhScy4wPktz
bUoKJTlWXTw3NiNbPiljazg5Rl1xYiJiVyRCIltsbydWQGVvXCJQIWc1NiRtXWxaUWdQVTU4RlVK
PUREXEk0OCZrUj5cREdWbz5TLmxpV25oPC1ybkBqVHJAaCJxIW1ncU5WSmlOJiwKJTB1KnBYJVxS
LylxYmNuc0RhTz48JVRkbVUqWV45Y0peXGgjUyomYz5vPTpfJzYuXTomTi5uQkxIIlBbazZNUyxJ
QSNIXl0mLU0jNU4sSWtdUixZQHEtSzhJcWJHVFAhbS4ybjEKJWBITzpdcm8sRlZXSlhQaDcuXHBs
ajdGRGAhXW1qZmUlLE1zKmhNOyVVbHIpMUtnSVk0TUFiaTIydSYoUVZOTjtuTk5yaVRybCdxZmYn
MU4lJXUnVDRYT0IsUWs+SlZFNElfWVQKJWFGO18jUy5yOHNgRkZpRztZOyxERl4tUy1rdT4zK0VS
UXBxL1ErZU9tJkJnYFFpaTlgQCs3S2JUQFpJbiVhRG4wRlxVcSxvayRGZ1ZtW3FYWS0wdSM7Ijo0
OTVkWTApS1UucnIKJSs5ZVp1Y09mJ3EzLVAuJUIpNEFFPjI1ZSc7TnI6Pm8naSdLIzomdUlHanRY
LzIzXlVXJUpDQzdNJmcrKFtBX3AwVFpBRl1HWjM9UE5mcHJTcWhCXD9KMUUoJypbRztGWEdWSCQK
JTh0cGVlRVNoRGtfQHRCLz9jRVlvSGNnYWNyS0BmLWEpRzwmIXV1IVVdUTRQVXJwYiNWczRaa0Nm
QVE2V28kcDM5PiEjYEJxOyZzOztPSGVBIyo5bk5mOS45KUtBXk0nK0Q2QTIKJWBmYEJOMjVeK2JF
IUxfbnFcRV1XZURVITBBNmRSQF9kRmkkVkFYLmMqSDczOV9HSWFGS0JKMTRVZlFLb05uNCs3aj5X
KTxXOl5ZKDoiOVMrXmhBcjBeO003MllcLyFuKmwtWDAKJWR0PlZEJ1FBLk9tTCpWNUlJSWVwTlVD
JCMxXCEzJGFDQzE3aS9qXW9iQ1Uxbks4KF1KcG5SUUloPi0kSmVgLmNuYjQqJEo3PzBpYS0/JjYq
S1Y2Y09AckxyWDJfSFJlV0UwcC8KJTciK2I8UisnTTxpY3VUOHFNNjxeSDEobydVTiQyYjZmN1NL
VW02QC1gNUQrVzVnazc3a19rMyxQIS5pPyppIWljcl9GJDAtLkJMI048XiluNS4pKFRdODxdPGpP
JUJKQSRGWkoKJSYubiVraGAuPClAT3NzSVVcYjZIRFlVTWxbMWtiREtEQm4tMF5YTVleKyUqTiZQ
aXBTZi1VbzJgXGkjPDBSclQ5WitIcSRLLiNXbThZXV89MFc6dTZUW3JYRD1RVDBfQmpFQzMKJUNI
OCguUUM1XTdNOm0iUmZyLWhfaSJsTCwpdEpATlZHZmBmXik8cU0mTGNjMUgnVSFRVT5WIi0nbyFn
NSoxJkVhZzc/MzhabDlvL05mXGwmPWVYMFU2LEA4JF9xSyotK1lodFoKJTBiYHAtP0BDZEIkUyhL
MkNPcWhrKFJXLTBDMDNaNTU+TnA1aGRSISdaNT1JMztlXzYzIjxCTz1hKmdEKk5hXj8/bG49KVpU
Ij1rKTxFbVlPXF5DMSs3KDtxRkxBV1VBRiolcVkKJVxjdGdNKlghJkhaUjo/LUZcK11OXURIYyNe
MWNMS1gqMywjMWBGPE0yTXNjYVM9VlxNVEFfME1BYFlMQUdeS2VgXjNlYmBmKXNdU0pNKUBjczQk
WW8pJzE3LlIyVy5RVDVmWVgKJXI6SmlfMDpVSlNANFFeSjFAW01eY1A0QTpUUDRpRl0tLTstJWM5
JysrJ0o+bjpqYE0mRTtWNmk6JVNkNktNP2Y+WU5wbic1Jj50YXMyTU1GbU8jN1VvSiRDY2pKcmtQ
aVw0R0EKJUNhZHNVQks6Q09nc2I8I2UvVylHKy1GdFo+TjdqWD1uJlgmN2dQYT8+a3MkZDgwSGFI
Omp1V1ctREFmYWVGO11bcCkiZCVQJVA5KFtpYTZAXkRGSEQwcS1bWThZLjgsPCo3R1cKJVZpT249
K2NlRl1rTkBKbEBJTkUmbT9HZUYwSjM4ckdHVllgT0lwZmRiISh0MTxUMEdEY2FkZEZmUlxoTDJe
Llc4IVEnaFQpdS0oYkdTZzNxS08nbEU3X2wuSEAqLyZaRTddXk4KJUg1SyxoQWJycl5LO3NScCJF
b1MyL1goNjpXbT9QcEg7ITAvP3JcUmZoNnJwKlFbMWU7O1dndXBBKzVgK1FJSExLSik/S15CPzdv
PDNoRFFuPFlXZCxAKGBGSSdDa2otZTsuNWIKJVsmcS5pJz5oc19PKj00ZWdzWzxZUXUoSlhIdTBo
MkU3OTxYPHNYIlozWWltREFGMFs6TTFjTTYyMCltcE5UaGQpOS9CdGIoUD9tLDEiXEFFS1VNakJj
NUtVZmFXJkFKWkdZIVEKJTtpLGQibWlaPjFVKCNEIWM0XyZkM1VMKSomcVVdVGgmPkBuO0QpaDRo
Qk9eRmZPL0MpY2A/aG1IJXJjOig+JStvWGkoIWBpKCtaNmJBWXNRbSdUUitlcV5Zb1swQTwvVnA8
MFoKJTVCJCMhQ0tscyxMT0RKZTZiTmFKJChrLFAwRDYnKWo5SFhZWVRdY1UmciVQZCVTPCZtRnBo
dS02ZFhfOzdEcCNmRjc2TVtNMz8nTTdYKEM6Mi04KG5IJl4iJyZbQTVmclIyQzsKJS4uZWBJJWFK
bnRDJDpbOEQyNktbZnFNIVYmZE1rI1lvX1paXy8mIT06LWQqRSJqSWNENFBsPzVxK3BKX2BvLTw3
KU0iNkJfX1ldKzhUVyYwNzRvSjMmPyQ5O3JSWjJGSmxlZD4KJU85bz9kSUtiJG8qSl5rTTpFInFo
U3AiQ0ZuRVttJy02WlY/bV4rIjJYakI+dHJPUF9VXnJkVCozLUlOPiw4Tzk/TVNoOFEmaygrSFZF
J1ZPaHFoYjEzTUkwVmFqLFQpVkRpPnEKJSNXIUNnMStUbFpVaUtdNzI7cVQvP2NBbyIyKnNQLHA6
PTtEYU8hMl9jNihiPCNiOm5zTXBMXVxxLDkwOjdNcWgiZ05zUENjWFsiV1c1IzxJUkRHbF9rdUZe
XUpMR20saWxrSzcKJS02NGlfLVJqUlhFTV5xVzkpPGQ0NkZpTmJuJkp0Il83U0pyYihcQzpjaik/
ckNZIk1ZSTorMWZtIkkrKm1xPypQTU9pbEtfU1VGUWcjZjQsO29zOHRaPS8rNWYqcjc9NiZfaWYK
JTdIUUZcJzIqZFlYZlxLSzFULysuQHBqRnRZIVFWI1NYQnIqRVYlLktJIWNdWTdXNFE/clY6JSFx
LUQpJCQ2T2dGbFZRPzJLNUpiakE7XTFbWWAwWzQhdTViaSNZLUM4Z0tWKTUKJSpPKXQvNlJjLzVk
PjFsQzY4bWReXTk/UShqNj5iaSRMJUxJTGcwIzVzNz9JYjA7Tk40YEFkOzgtNi9qRCNobDU1KmBe
ZHBDQyJHPVxTdVUsU2VQJVg2ZE4kbmhrcU5sVm8za1kKJT9oVEJiakgrQUAlLU1oamwkbzVeR15u
YyZFRWZzTSFjIkRgcmxWLmRiME1gMTRnbSpDNmB1Kj5JI1IpMGg9VEAuOz40Z09Rbjc+cCRDZDxG
KHRUS2AoPW10MG9dV0pncGhzR2sKJVhwQ0I6SF87bnE7dFNfW0EoJ0M+aFUoIzJGKWJCIkZQcDwq
TFxAZDxOVkBMTz5uYVtkUzZRcTYjTWIhaz9hPWFHbV0pYlxTTS9hTWZuTmhmXUhDOC1rJE9gSjN0
OGA+X1FvX0UKJVo9YUZiKC1sTjVSbDZAKDA3LlljNWVYdWpaPidMLiFOZztTLT0wUCQ4MClbPkxH
JVxoR0hfKm9aP0U8b1ckQV1IPUslN2xqdFYuJitlbW9UPFJDcCYjV0IhJVVPZGZfM25EQC0KJUgx
PmUvKlduJl8qRT8+bmlGUFRgMD9uUWJYTmdqTVMmU0JZSXM0QmRfQFlrZCthITMpKFQ3RzZJPCVl
UDFeRyJpWlkqWk49RSpLOGwjO209Wk9TSCI4cEAjTVpfXkxrNExcY1AKJWEvJk88b0E0Ui8wQzdS
MkhdSlkmMD80ZGRkV3Q/TEZJbDNsX0BpYnVRYFM7IWZ0a2YtVGBQIXI0dURjLiNCajtLZVhxPzY1
UTtcc01gQUtZYF9cRkdmSzckcC5NLjkhPjt0KkYKJUE/dT5zZT5GTEQlMVZNNFFOSnNdJicnN1cu
aU1cMj9KWilkUGFlZmojQmM+aT4kMVhaUmlDaUddRjE8J11HSmsjPXBRRT49cyRMWDpPS19gb2JI
bS83anUzX2hoIWZoNGVTbTgKJU5BbjhUQCZfT2tvWk41WSZHP0QjSEhIUExJNzNfUCpYbExlbEQk
MFdiMkNTcidFUE91RWdKSWk8KGgpaylKdVRhWVI4KF0oNVUzUDxnbHREV0o0SDNHbzYyVDBNLTxj
ZUJrOCsKJSFSN0NUIy0sbisqNWs8ZXFsQy40Rk9PJyNnaFdNW1smSkJnKGc2QDJaQFQuKlZZVF1H
IUY1bHMjXz1bMG9HJzNaUXMoPmEuI1lxO1ZSR2Z1UVFZKHNeP0Bpbk8mI3R0MGloLykKJSdyZE0+
SEBsQ1pXKUJhLSRUO29bVE88bD9dL2VmLz4lMz51P08uNlhMXiNbRCpcRjE7SURGXmNZbmFlNU5j
VjEiM25XVmtaS3NDSishLmRGO3RhW2BubFgrayZMKWxebTY3KEwKJSldMmRGIk9ESDFlPmYuPSk3
RGdAS0hvPjE+QGdTWEsqYzZOTilpI0FTNVVHPGMiRk9qLy80QC0qVV1cJDhFSVpNNVEpUUA7PDYq
ZWYoTlJbXSZKbHFQQE8vK0puTj5JQm85cWsKJSwpYz1jVSVFJmE7dSFgO3Etam04X1Q1anVfOlYl
SXFwRltJIyU0MUtjRUNGWF1lbEYsJFo3QSYocjpPJyZZTW05QkMnQTxTaU8vPjVFRDtYXiw3MjZF
KEpfLFNvQjoiR3NjTnQKJV1MKionW0NyczoiPy1DRE9vJ25JQlhrbUsuO1NlRE8zUk8/bk4xTj43
PVViWS47WV1sO2srXGtJUz9BYEhyTCtsbjchW01fZC9SaWZEP1F0KidhY2smblNbPitaSCE+ISUk
OUMKJVFOZVg7YShvSFk2ST1NL1FzYWhEbidIOm0oKm8/SVhcZnBKYWBHKEFARFlgSUA+SmFPTzIl
bkJINGFRaztkJiNJWmtqNkBGOTY7LDVZSjBicFtjKDcoR0BVXUc5WjVzPjhgV0sKJU1JRzZeKlZZ
UCYuJSFIWGZBWVsnY1BBJzsnOTdZbGg1OjJOcUs6LyU9cVxLYm83V2dsMXA5c2JBLShLN2MsdWYk
UXAoPE02M1JJPkA9NixhPjlVU24kZGwyMF8rJjQ8bjVCUzsKJVBRVmlATFRYQCk9QTU1RGBFY3Nx
WzsmJkdOW2tkU1kzaipCKkgrbTlrPFFFLzQ2TDttMkdjbUNDQFlcUyR0SlA3SmVlPSZZTDdXUGVQ
Km0ma0w/Pz1JWjJbQkU3aV90K18nUEgKJTVYIlEpPEJGTFJpNixgVnJjOFQwUW07MiQ+K2VKI1FD
RWBpKjZZJiI3Ty0kRlQtN0l1RDBDZSVgZ0Qxckl0LTksYGtuXFNxRiJeRi5qL0poKyRlKFpiMis4
PHI8KSg6S0ZNYyMKJUlmJEcsbWkpK3RgVD4pTCNHMydaKjRITjU9RSxDWktXJkplZWwtSClxUVZL
UGk+XyNbZChwNWVJWDQuOUhEM25rW0UmR3FMUjxYP0xgdCFRO01dMS1fSG5rUkc7YDksVVVMTmAK
JV1ZL2I3LTZzWkFLbUVTJDo0I2JPOUdCajRvby0yMjphak9qNmFQPzxSL2Y/MlIkXSEiUDBGYlxW
ZjQ4c3E8QGhwVFNjLVQ1RlNcNEc9alNSZChWaGtpLXVGMSxxSV1AR1chI24KJWs7c1NRRUNDTGQj
OWEoRy0qZ11dRmF1OjIvbUxAX2giSCxOKHVAZT5GUmRJIiJFXEpnQSpXYURDJGBHcXIxKzs8cmhb
NkM2MGshXUgvJ0llNFpIYUVlMiJXPylwJDprYWQudG8KJTRiLCxpMWEvT0RfIUIoRTNOKiclb01G
a2U8Pms+cmBPdSZVUXFnOlViXXBeOVknMVFSNm5zKWEtNEU2bTNFRV5YYURKWzorbVYzLkxnbzJM
NDJpLVUmR0QoWUktU09sRDFNZWQKJW0sbEBJKC5jOklxbCJvPilQWGNeZnAsRCJMa10wdUwjJzsv
Qk0qMHNFajBPVkBIbklTRUdVL11cIl0pM1ooaHRGMVRZLWFJMihAMEFpX1cza3NUcT9MWDFQbzBL
WEozMj80YSgKJXEpRmdgS1dXby0hPUNoSDgoIkJGRT1fW1UvaTtsWyxHXHU5OjotQHFOcEVDU29a
UXRwbCczMTchIy5gOl5zQ0tGaD9kUVRiI1FbMC4pXElLPyZmVCQmKmcnSFlXL2ZxNlkvKGQKJV09
QWVWcFQmK04oKzVjLix1VCc9IippRWomOFQjTUtFSUk+LW5PQ2k+bT5zRi9uRDBFPT5AOWo/NnVk
UThqTjA3aHBta3A5Kk5DNjtuYHJxWktVXVBvbjBfPXE3QGQyOmp0YUAKJS1pPGFTZVh1NnVcP1kj
S1gqL0gmNCJPVmZNXEkqQSI3aid1LTk7P1ovSUsmWDJHayRHTDdEbi41LyRBXigjbDE8VzYlXDRU
bC9SXC9NVE0uQFtaVmQ2JW5hW0cnKTshW3NQOj0KJSMoKiZVLTI5KixpJHErOnFtNkErZjQ/Mj1Z
YDxBVExLOTZcMUtZUzorTjRzbmhCZiRgVEZoJCRrVk1wN0BxdWNScUFCdTRwLitCSVwwIWxOXT9x
LTBAKmJwMUJuTmtIYyNlQiMKJUdZJ2NIOFN1PyM1OldvYlsqNiNrMk0vUSMiQUs1RDZQc3MtVCN1
YWhTJ083LSNFPWFwOTc9SWxFQkIuITIrWXM1WCYiM0BxPVVmaEpnSzJhY2BhNzRSMC9vOCFjVVNZ
VFNZVnIKJT1MdURPJ01qQkAnKzEzLS9ZNFBYODFaN15wUzIlV3FXNT9BTTVtVWZgSishYT41VWlF
T1EuSlNLW0A4YkBjTE0zSi4jW0xqN047TkF1VztWXCFURTQ5P2tfRnBCJl9ZRUlkPlEKJS91XD1J
ZCo/QWQhIzdGWSZyUC8wJTdjSjkmYDBpYFFaLltVOFk4KGBYZSQlYjdaMW5UR2NOPGIxNS5FI1No
XmpIN1ouKVlaO2tKdVpmYWBWZzBGY1llb2xfPVtWVCgjUEtvbVoKJSwnKjJgWXNUVSM0QDRVNC1f
NF5bKWslPV9gIjwrYmhxOjhjQlU6LE1EXW1WbGtIXSI1Ri9DVnVDVjkhTnI8IjpHI2xiXkRvIjc/
JjZLVlNnODw+OlsnciFMPFxAWkE6K1EpXikKJVdoXmJ0MUxcJTtJQFUmX0s5ZUltMiZZZ3NjQ21O
WVBYMChCVTdsO0JZXHRjVz8yW2FyJF1JNVZdLS0kZjVwMVdQLmtdMi0ic3MtIiInISxBVkxDYCkr
U09LRTVEQVRTV0o8Sm8KJWdtaThuMEguNDNoVS9LdSI4Si9aTXI+NFwucFNZOFhLNilOMj1tdFMl
aSZDMFZuJUxAZCFkPDJgMzVfWlUjRyc+TzkqKlA6KEVGXS9UOnNUMy0xc11ZX2h1SUdsYD9GYjUv
SicKJVsnQmcpODQtPEtgUTBhYloySUQsJFxhTklRbyRQTVNxNjslYzgkckNOTjI2JGA+TGZwXUgh
UiNCOiMzU09yOVd0I1clWW4jZD8lUDVqN2V0YTloJGJNKHJqJyRKOyMmP0ljaSUKJVguT11XOz9b
ODRwQDZDJCdkMmpmYGxUNWZLXDhLJEA1MU1JbTxgJGpwMlNCLSZpNWJMaTpXV2dTPEFUXVJPOy9f
P1BzYWUqOlZIYD82KUshNVgiVDttaEJcWDxhbkNkSVAwQDwKJV9NJEphcyEkX3NIcSs4TGdDYkhY
UlJccUFCYjAvNDpoWHVNKyUiJDpXSV9eZVovcC89XTEhZ2BvLV4mTjlmbyRKJygjJDAvXmIrNlRP
RSInXztFXURramtjJSg5aSlYWEFKc18KJSNpQS49Xz5pIzltaSQ5dC4zbVYnKi1CK1ZkU05Ec2RJ
S2A5azk0cyQta2czSHByPFBdSEFHVyYzKmI4Ni82aiJCITpbNk4kWi8nL1pjYFI6aCZeUWZpRlFz
cyRuXkIiOlovXi0KJStiZTQtJGEram5vRDA8NiZtVnBfTCVAT1gyPlkuRTpyXSwyNGdHUz1wX11d
czNAWywvKz4hV2hNWj1DXUtYKzgjKjVHOS4kJSlbVE9dTzRXcnIqS3AuZm5xXWxbWHNYJlBrS0cK
JWtBMV1NR0RyMzxAQ3FaQWlhUS5fWldyLWk8YFAudD5bSmA4bm5kNmhKKEUpVExnKTBpM2FvJy5i
QS1KOkAzI1pDX15PMTQtaWtidDhQXVk/OU0/PEw5YWdLX11MXFBvaSo0IjAKJSlLQlVoYlAoPSle
aSVaRDokSCtMSU8rNCxfV2VdNUJBaEgwXWMjMzgkZiZXLTJXW3MwXDMnL2FHc1FPUllCWCQmU2tg
TS9HNyNQJi1nPyY2PzshKys0bG5KZUwxNF4mS00mPU0KJWpUcTdjQHFhSlNBZiItbykwWyhXRmdD
PFtTSy41al9eVlIpYjFTV15nMkpOSjFiVU09TTlSJVtYMUo/cjE2amFeVE04Wl5XdEJPaEltMF9E
OShtQj49SSo0SFJxZEs8UVIqIk4KJVosL2o+aG1KMDEhWl1yMCJRPzdKaW1pRFgibmpLSjJWOmM2
QCgwaUQ7Ui1FUysrcD1HITQrUicqSjE5VlpBJmNSS29dPUorcTVeIit1XDRFXjdTV1A1XEFMXDpc
bU1LJlgzZGEKJVtIM0xrRGJxbW0waDJyNjwyJ1QoaVtOY0ItIyppPEhFTUNHbjlCMkBfSDJLOEpt
UTBFLnA0YSI4P2lZRGBuQERHKUsiKEIzNyYqVltlVSkySCs0MSs4YyxTODtuKT9IWj1hOToKJT9s
PzluRm11YUpIRHUvKDRHNkMoPiQmWWduajc7QEB0dWpBSzhUMlhqS0syNWtRUlBlbV1xOlVOX1Rp
cGJzI1EkTSc0SV9FLm49X0BtVy5iL1AoXEElN14oZFFuJkRYJD1sLVgKJT4/akBpXV8pOUw/Uklw
YjdNYGlPMXNlPTI1XEkvKUU/S1VZRSk7QTAwcXF1IUhiYiRrJlFPb2RCTmBROT43VmVCTE4iR0RL
cV1YOUo7KUFFSUJESTI+dWZ0Oj06PWJHcmkuYGIKJSlKQ21UV0FYKz4vVE0mLVZsTCdjIW8tcVsm
c0hTRStfSmNsPmRRUnVmOTVJc0JYXmZoblhALi1RWVpOXzI7OnNlXHFDKWRZb1s7Mk4lPS42Um4m
JU0+IXEjOi0qLExcLyU/RCgKJWo0cFYpLmBsKixCOjY+aUtxNExUNFRxSV8tYyguXXFASUlOOSt0
bSJlTWM6ajN0JnJHLyprNkA4NydOUGFKXitpZFk4VElcRkJzcHFkJCZqTmhAM3UrbGQ0cSRNYSI3
cVtcRSIKJUdiIks/IlE/Y0Y8RWMxaipvZ2dDSVFFWVlBWDBhZzZ1Nkw/Qyk2MWdhKHFvMWEvWiNa
IWs3Wy1cZUJiOyxLLiFMWXNkMmdAS3VpKlU2YVNYRVQtPChLSVo7JzhKRUBJX2c1S0MKJVpeV3Qr
XjphIjhjJUhLYW1EQElhL20tJ0U3Y1ttKHBZOiZTKG8qdU9FWD51cS1RI3FmJXEvTCJsJzF1WGlt
JDVrbVk0NnJmVTFYJyQ2ZWY8MUJlOy9kQGlbVDJDPSpqOzhKYzEKJSJLNF8oP0ItPEpUQlFoTVVD
LnFQMnJ1L1woPEFSPVYiQUdrXS1IVjxMMl5mLTRCM2thYSQrdUhiIVo+JCVuKVszUC5MUW8rcSc/
TCduY11qSVkvRmpuZnBoRWdxUWAnbGldMGgKJWM8ciYwIy5UJ2UiZCJNYnFXIks4ZXVkS0YkIWQ0
Rkc+QVskPDE0Vy8jQDJgJDJGIz0/K0BgZGhWXnFOVWM9cVMyXkwxVWlTXnBGNjRHJiksb0FrJXUn
OyQ4NV9NKyFmLUA/Oy4KJT9aSCovS11cb0JBXERUXDVwUEs/UVchS1xsM1xjRyQlZC5IIilbJz4v
Pm06KE1rST9KOmYtc0FPb0hGYzN1Pz5XTEJBMStgRFtIL0I4R041bEckPHMqaixeclg8R11UP0Ey
WCIKJWVXOVNRInNpaFYoUUxtKkFnQl5UTz85Wz40Y09YTjVRZjwlYS5bQyRTJShPK1ZPRlMnL1dh
RkYoZythL2d1cWlVTGV0SDxvTCQmWFFJUmpCWSU1TjloQDstWDtUUHJ1R110PEYKJTpZSUlqX3JN
UEshSSZOYDRXWW1OVk9pMyUoKnMzOGxnNHIoJm1pMCUvLXA+dVRfUk82UmokOGJvS2U5RnAyN0Q2
KzJUMDNXXFpiXSNUajlmSEd0OC9ocTsiV0EsPzJAPCNuJzEKJUBLa0RhX2E6Q2pCIjU+UkImJzhb
RmZcVV9BIWU4TzZtSFxeL1tASVclSV40WWM9QV0nYTM6LlowXEdGLGk+ZCNcPFlLKS9pbUMuJFN0
S19SQWNtMkA0QT1jVGo7JGBGJUI0Y2MKJTFhWWVFLlRmZllfUUhXJm1eUkNpNkVMOzJDXltpZ0Qx
cDkoaDFcanJUS1dvITBnWjpnVFhjRXBJYTpCRmArXSxzcE5YXGoyNXNfXy07azpTYD5jYDdrPjI+
ZmY6MGpTNzc0QSkKJSNUSyJdKD91JVdTUWo4VjplbEFqaywoZkxldGdqLHFXU2AzbDUibjQkaiFS
R2ZXbElzbV1CLihHcDlVVE4jOCxPI1FYSUU8OWtHYEBnRy4pa14lJEBQOjhBWlk7XzlgMWNVTzgK
JUtSZmVDcl9Ucz9GXGI+Kzo5MWJXQy1YQUUmZGNvTWghNy1fP1ktSStsNFxeZUNKTCIuNV8xSktS
KTlOZ0tSWDdaQyg5KWIudStqTStyS2A3LGNnOFwlQ2NPSDMnVnJ1NlA4LHQKJUZNM0Q8YGMhUFIz
YiZgSmVZPjolMWY/MHFZUUc7IipYcWAwWTJoOmdIKHAuYjNSQmNANUg6YVZiIVhwLDFsL2dSa210
QjNjW2snXCtfMVI1MU5zUWA1SVJcbklHc2lUOTFYUlAKJWsnSEJpa1xsS0xNJD5OP1lYJERoWkNR
Klsnayc7LWVPXStKTUYsdXQuRmB1cD1lUjoxKUlaUmdETCxFQCxRZmo4JWFqVEhhakNUNSNOPSRx
JVU/b1whYyloNlAjOW5jcCsxVD0KJUxXQGBiZDZxRlxtYj4uR0ssRmRIWiE3VFUybFkjM1A1RkAp
VlJcPzxgbVI7YVlmMTdCX3FRJTwqKjBRWj9lMVteXWJhSG81S1VqLz09Y2wwYnU7dFxHPVI+JyFx
VUdTNWAraSsKJTFnaWsnVE5ibz0sJVRmQlJfTEJgMjApbHUxblsvQEY4VkwhUG8zVllxYzJwOFhU
QyktJlVUPEpEXyImVzBuNy4zQD9aXWgicllZSj82TGdmUnNWKDk0Wk1iOz4wNi5kSylAMGsKJU48
QFZBM2g6TElaI1hzKm07ITheajRBTm5KalNraTpsVCIlcmsqKkJGPC4xSSMmdEg5K1VXLjlxciFm
ZUNnRV09PTFzRCw7ZipjXCkrcVxATDdlIk9qRmdDMEFXZ25fZjQ3J20KJS9UdT1qTmNmNXBNNGVW
a19OOVdgO185bXMnWzVrTF4kPUktZG0wbHBwKGs7U1QpKSdIQUhTb3IkJ0FEKitWLFdVZWooUC4y
ajorVzFFRmhyNmBKNGIkWiMoXFNtMFJrKiFYTUQKJUdPZmNpQipJaEwrVWhcUUknRywmbm5mXHRn
Jl9xRzRoTFRoNGBKaTw7bjk1JWMvNzlgXExKT1guMjY5MUcpTCZrUUEuOSJYUTJeUC41UjFNYm1s
MFhOaUBzWEZHaklSJydHdUEKJS4iX10oMGFsJSZwa0ZrWzpkMF9XNmN1NkhNRExOdS5nVj1gYm9x
YVdFLV1nKGxmOScwYiN0LUNMST0lKVciXWJrSV5lRWsnIlJKUy9VZjYjKkNrVylaK0xEb1NjW1I/
Yy9WO14KJT1RVjxAKGZvVz9iIUM0bjZINVMiOyhZKVUnaD1RTENiOSokK0QkaitoPylkP3FwKlZc
Lyw+JTtPJFdGQ14+J1wzZUlTa0BsQ1txQlRvIyUxXS5cSS5sSyFwZVFrbDIsVUB1NzYKJVlCXiNb
R3EuWjcxQzE0OFBydCklLiVON3A/b1hVXDRvIm4+WThWQFJNOGAmMTsqKWxFS3MwMGBxL09GPyIq
Z0g4ISg6ZDZXZ043PjxMViVHT3FQSGYoOEY7MGw+Xz8yMWtZVkwKJXAyY0Q5OWszSjdpM0lEW25R
UjopSCstSUxsVFVfcGZjcnUza2dxYWw1OWdFZldXMFIjNW0lTVM6S0FjOCdAbjhpQWYkUz5PPFY3
MUUyaHEpQ14vOFdiIjwtLF0oXzJbKEonLFgKJS07dSdnJXBLN2JvKldObScuTGtFUCRtbnJeJ1pc
J0ZDUFp0JE48MDkpXVwnZTZKQnJJO0ZEcEMwNGwzJS41RkAtYyZEakknOWdxSzBHMyZtOVxNK1Y+
QEo9ZGohL0VzM24/XycKJV9NQ05WO0psVSQ+VipkVlgzWWBNbmo6MzUmLVhlVjU9c2prWHBwVDNm
dS02LkBFWUgrSl9oTkgzTDBjJV8nPjhzSVN0RjhjOTc5Ij4pRm0rKzlpPSQ/ZWVSalhyYXJnVD1R
cWAKJU9kU3IpM2BJXTRIbkVmIU4oT0VlO0cvamAlNlZNRU5nRjw0OVNMKD1aRWQ0bmZGJ0lSWTFf
amoqSEE0WDEvXnExYFpYM1ZxVWpkNDgqJTwzRmhTZlJUMFVpPXBoVjNtcW4lKkkKJTJFQGpWXGU4
U3JxbTVAPmxFSTpaKXRQPzklcCpYNW1IK2xwWz5UU1tbQWNfa2dvSkxQWWpcbTVQNytRakYxW1Jh
TFowZWQ9c00hNk84SCImXzQ0VDRgWnVedW08ZHRyY0RqNkwKJVNQdVEqJGZlVWZBclVkXGpMY0Jb
OGQuODReI1dGbUtHRzcpUl9ANzljPVJKaHE1M1lCIjJYQGUnOUBHTD4nSik4PXRGay0uNnN1ai5M
bSVQJ0x1ZkpiUnIvJCZERkkkUkVFbFwKJUkwZjZsR0JbY3JtcTVXYUlJNiNoQVxydTZjKzAiLmJn
W1JLcWpKSG4+Piw8ZXFiaWc1WkdHL04oRlplakh0c2ZtLzkiNjBHUC9mYEM4ZkVmLDZRK2phKzow
blxkNjdhTSRzZCUKJV84bnVMOUpRIkRUTj9JbzwnPE0hKScmXl5xQ25dTTNHRXBBTCUnSVVZKzEs
MVsmPUtvbm0vPHRJOSdGRWA5cS1TYFU9VTosND5MR1g+KDskTzUhMVtZbkxpUmxnaWkhPT9BYDcK
JUFbLlJyKk9VRE1OPFM0NyskR05XJSpQdTRiNkI/bSlvTi1aZ19BOyIrV05VPyREOyNpTCdbVWpa
LF8lPixWKExuLkkwPkZMbjMsLiQpTSVrYGg5Yl0lNEs4MkFNcCwiK0dpOjQKJWlVXy1wPz5YXHUx
LFtvSFkvOkdzN1pxOHNlZVVzK0htTz91VTdfKTJHaHJvayJiOTA2YT1UU1UlMzRKVC9CJjcuI2Zw
aGxpPDc0RytMQ2xJOUl1ZHAjQ0wuXC0qOytcRmxUZEMKJSVORyEiKzgnKUY/Lj1TPUQ0KHNSZD1I
ZS5NaSJJXGtndCI3XnQiTGAmdCMzTDUlRmY8aSs/TiskbiFDbDFMIXNtY3JQWkFBL0hxYEQnIz0l
TTJVX1hfLHA8TDwraUt0UjUhOF0KJTlkbE9ATS0lUzgyU2x1XzY9ckE/OUYmR1pbLS8xaCssQjst
KkFaaysxQSVAbCM6QWZNbiIiUW0lMXE8XTc1V0JwUW9MW2ZkREIqKDRwLjQhX2xFRW45WXReIi0/
NSM1JzlZPT0KJUhabVREIyF1bSddTEkoQGtcIUNjUj9gYFBSZ1ZMckNYNyRSJ11qL25YN0NkdE8y
XVpUJXVETF5ZY1tDa1A6YzY1ZC0pZ1ticzdQcUJdJ2pDPVZRM1IqPTYvPmVoVTk4NjwkRGUKJT1o
NiIsYSs3bSFfXkFwUEVjT2hKQ1ZSUFhTcT5UWFcxZHR0cEhdTzhYNT4vZjp1PCk8YkZmUmIkSTMk
bGxdcy4qbSNMZ3RHTF5oZykjbSp1KyYqMWBmT0ksR2U3PWNpSC1VW2EKJSIjYENTTV4vYUdnaWVP
L15wN1dmISQkVGhFPWNYRSY+MypNTUdkZ3E+ZkklX01MXmRoZjsySTAvK3JCVnBBaTFrUVxxXVdf
YT5wKCVqT2ozNjcoJUxGZzMpcTcwOzhlKWteKSUKJSI0P2lQTFhudE1dNTFAcSszYTBqbTxbR0dJ
IkhYZEFhazYnZWgoLHM1NHAjLERRIzxIP2hwJGYwVllpaSJxbGkzImQ5UDtaJ2BHT2MwT2AlK3E4
Xk1VLF42WmsnX21VMy9oXFkKJTNtT21ATCRxXWhOIylSNXMuXVtwckElOCluQFMwJDJpUzYhXiVa
OiQ9PSMyMmtmVCs0NSwhR1VyXkFmOU9jXUddRXNsSDdLRSxyO15IY2o1T19vXClqPkdTLFNuSys/
Kk9ScykKJUwzaGg1IywqTV5lS0wzT1ZwWWwvUDAscy9BJSVUWzIhL2RWMzFRLWtSLnJIVWtaNW9o
cmVoNFRsK1ZdLUZYJSQ6RkZENWY9Y21BOyUwM0FXZUBcPTFtLDxKUlc8I2wmbmFlI2wKJTNMRSdu
cCshW14+c1I9UyZNRVllP05jLiwiU0g2XyxEK0QxRmYmdTpSLmkncEBfMHE1aDRzZVEtViIwT0hV
RVwnVC4pQWFSR05OKW0qZCRBbUZuQGckby5MKkE1WW04XmROcUUKJTljcSksWVI6dUdDTTQnUSNF
JjFhJSlbYzNqdCpvISc/Zy50aUwpREU6JjUhW2pQUlFeNVNDKmxLVG5BLmleUmpSYVByX1dUaWo8
MjViMGBjazdTLiczVWMyXmwuJCNBXD5fJGMKJWREalFkNmBrXDJhSlhUc2AmUiZaYCQ8dHNXaz5S
TzVQMSRAUShzclVNLWBGRElCQyo4LlFjUS00Kks5Lmo+XDptY0ZDQ1EwSj0qUigxMmVCX0lfI1lf
W2poKFMtQyNRIyM/WnMKJVQ3KCYmXmVqVmBoPmMzUiZBZ1I5ZHRYdEdOQGxTNmNSZG5oPiZIXGE6
YXRtRnImczo3Jz5YM2Y4aCFQbmtKJjRESlRzZnIuW2lRPzo2ZTktQU89QGRdXDliX2pocXBDKEEq
ZTQKJVIqVyRZKUNVXmBgcywtWlQpdXEuKkZROGpUcmtRJnFiWC5aXFg7bWA6cVoickBiPC9yM18y
I2lCLDJfb1hAWEdbNGc8XXI3LFFsKUonWm5aJ0heUDxbKmNROz9fVHFqJTtVRVQKJTByMGVPcCxy
a0ZVLS5CNStcNWJTR2gqYCczY20vKVEzcENQMkE2LiZGXFRkZl1jKXIsX0ZvYkdhUixBTk4oc3Mh
QCZOWClvWzRzS0N1RSlBZ181OWw/QDUiM3BHTG5Qa1UiYyUKJTtVQSZgMFZaR25rcDxXOT49PjhB
XWBwS28kI2c7LScrW008SCRFdEhNZGpPTzQqOygpQkoiXCo8QDkuYTonZEZbaEcmbSVgWlpWP2Eo
KGxqczEsPEwsMkxnIWkpZ0taWiRmVmEKJUUzI0lDXl8maEMhVUVoVGY8a2xbS21DVS0hYkUwWFZi
LDFeMTY6Kk82bVFsRnAqT1RpTm9BSyk9X1BjIm5FUCoxU0BGVV9Qa2c/I0JeSVknIyo1NEgsJVxI
PEJmV1soLUZqRDwKJTIlcCFYMEIqc0RwS0JKMihXV3QiXmFiJjZuWyt1WTIiSzVgRCZPYD1gTCc1
KzZlZltAY2suLCg1LixMaXAzT0skO2xvaUNaYGs7cmljLyc/VTtoLlokUHVeZkJNJlJSVGNiVEoK
JThKbiJpXmk0VmtFLXRBcChaYT9bcUwhQVRfSzJ0LT5gPE1pYzFEUVpBdC0yVjs5WTxJMVRhZUs0
UitrY0FVMjJCXzc6XmgsKHBPbG9PbCEpZ09mI1xMK0FtXzswVHNOMyFebzIKJWs/Tz5zRCI6Nkcq
Oj0lT1F0I2JMI1IjR3FRQU4mUjNVXilETkhILkVCQCZCXGV0ciw6YydqLSEhSU10biNHcUIoWVhq
b0w3aUNjKz5nVj5eIks6KEdCMEU8Zm1Qa20uNGM1cVoKJUZjS3BKSGtpWVhLdCU1ZVB1TydnInJE
XyNpcENQZTNvK3RXLCxDPyosK1VZXSxhJFBAVExhcjxWUWskRlppbl09YD5ZJ1phLUFuYiImaF83
Rl5ORidqOWIqTzUlQ3NbbEpTPFYKJTo7YystKShTJzpWXT9JdU0/OzxPOUxtQGZgR19PW1MpM0U2
TlJBK1AnWnJbIVU/LC9lKVZNJCFOLzp1T0w9dTsqQnApZDUwIzYkTTdXcW9FWmJmOWEoam9oOko7
WVlFLWZTQiIKJVg7QD9GUVNDTSo/QSZKSnMpRGdwUik+aWsjNF5gSkMqMG9VM3UoTVJaRi1kQzZd
PE9UIlQmKjA9WT9Wak5XbkB0KEhrInVlbSFpJkxsUkxYOzF0S1g3XGEjYmM2OGIpKlBrZXQKJVQ9
XXNWMVteNElkaDVhUGozYlpcaENiXWFZNmNeJzZTdEJuZiknVGlLbERwdVBQTGdwWl1ILF8lQlFS
bD1YOj5IMk5xVilLV24nIyhcR1AyNlJMLHRaNlMmMzxoO0VWJDJfQ2kKJS5UbG9qWkVfZ3VlNTJz
XF4vYGVsTDVnImRwS2AjSm5RcktWVSRZOTFZViFoYU1zLiFTPCJeP208Rm9mOzRMbDFpW2lgO2Nl
LlMyJEFnOHV1VEBqYU9iTFZCLDZjNTkiJ19BMScKJWhSV3RLNSgjVzFVXDlzaD9gaEtXOEJTci5Y
ZCFvTW51aDRUOFExaklfRUxNRCFNLzw8YS5BdVpQVGliZm1CWXFrJmlHKVFpXkhKdS9kcm1xU186
SzNHMW9aJywlSnF1PEZKRzgKJUtBKEMzSEBZSDpyayoscC42TWt1S0xGW25GMFJWXz4lMkw2ZG0u
OypqZFAwQF84JVB1SU5ia2VEdEBnVVZCYSdqMUl0PENWYC1vPi9AITBySmZYaWNRSUcsQWRRLGw2
Zys9ZFUKJUdwN1NlSzFALzxHaUA9aSQySDQxP2NANG4hcmgsPHJqUiFFKVBWJmQub0tlPDNdWkxG
b0g1X20scVEkWltPYEppbSRTKEpxbERnZ1ZaXWk+RTcrS3MrQCdkXmRQOV5HITgwVVEKJSRBW1Rb
PHBtMTdOL0wvM0xVW0QtOjc4OHRXVFBHQ0A7SVlyYmFHQmNYPWQ+NCsmIyUwSVBBX2tXTHUnY0JM
cm1cLTU9XGo8N0dXWz5ZbDQxcCQzWTVFJTIpZjpSPmpSbVh1Ry8KJUJNZnUjQGxFNk1DWlVFc1xk
K2I/OlNPPD1IdFNgKGE4cSwsK2dbUi1gRVNSXFJFYExmQFZWcEtrVTtnUzAlZVY9Ik9FM0pbTyJk
ZmZnMzBvYiMvUSZLYTwjdVhJU0ZoWTpMPCYKJUpLSVxDVyxAbiEuLGc1NVVOaUVwZUpvM0NwTGQ/
TWVaSkVyWDQjWmc9P19kUmg3Iy1lck1GSmBGSigoXEdKUGs+ZzAhTEwoMlVEbmkjJFEkKGJONFdw
YzwkKi1gTVE7WltRQFUKJSZMPCg3Y29sSzBQSWQiUFA2Lj9JInUzPF8nW10xJ0hmZSVXQ1VeSGNX
OElwbCJAXWBSJjtKSUBmJVBSW0JcJE5hWzhVQyxKM0lnYEZnKXI8VWI2cGk7RTM+c0deUFJqMlYn
MFoKJSNhMGU2SlUlRToobltyJSVWNm0rb0pNOVhfOj9xLUIrMCtXYHNrRjJDXj5cM29uN1I9X1Zw
XClgS1RzcSUsYCJrMElHIlNmZThWNlxpJC04P3QibCY9Qi1xYTNNTD80V3Q9PmkKJSNPNWQrV3A+
blMhTEtEaDpmTGQpRkFBQT4vK28iN1U5bzRQUjJWNys6RGVfLCFtRiNDZy5mNXVuN25XXzJpJWJC
Lz42ITcvLi0pJStCKHNWVS4zQ2JRakMhSSdHWUJfTVQhKTUKJWJlZjw8ZV0lWTBUKFJYLGRILS5v
STxXKEo7NWhocDpwJ1lUbmo/ZW9ldUk2MCosMC1iclA9dE0wSzs3dE1CYiRZbm9cZSMmPSpnPnM1
UVwmUS4kSVFYSmZWOUticylOYFxeNz0KJSlWR2g0YXJiZyhRWU1BXTZBUTVVWCtzLCI5NWdlJCkv
KElUK0pPRTRLZlA6VF1JInBhQXReaiM8V1pDO0klJy9tKj8nLVNTaDVUSSVkSFNyLjJ0J0tgUUhA
USdiLzRpKUpCOTYKJXBqQylGYlF1SVtLPjRLXD0pRm1WPVFbVGtnaTBsazZdJS1eKGduNEpRZk50
OkxbWnVrLWIqPzxKYnVFQUZYYl1LbmEwcVlrZVBrXWA6Zk9DPEw6SEttLSFKLCJQTEllRShDWk0K
JVlXXG5pJ0FWUFguSjYuNzdrbW1mXXRCcCs9KkROXUVVTTttIl0rVFUyN2UkIy9hTzhta2MuU25X
Y0JeNCRwIkJNPmJJPiMvclgjPlhBQjEzXSwlUFdxbjtQMGFjQyFUOmNWQSgKJVM1TSMtZ2NvP1dj
ITgrW2JnMzMkOytPP20mRSEnblQ5Mm5PVzRjbko4cDE5TjYtUVsvS0p1b2picmQtW1BlTFEsMWYv
IWdSSjo/TCk7WkBna1pTYz84ZiRCPTU/ZWJDOUcpWXIKJVY0QXFeRjJgL0UpZylOSyVOYVZhQHVG
M2VkVCorNmU8MkxvazVGWzhDamM5MSxtZ2xdQi4nZmZZRjE6clB1SGQsWiRjUzdNVD5YTEVvS3Fy
cWs+VWJMNCFQMlNwN10tPVFrYCIKJT4vO2JgZFVgUyNQVTtxJStFOkUvOFIvaW9LOFVrVS1JZSRg
QmVZaGclT1R0I01hXy5sRSxRSl0oRzlNMiIhKi0vTFI/R3ImZ2VfT2VoQUpOJDpcM2Y2PV42O3Fb
WGhXJzFXLykKJTZyN0hwPnVjXDEubTgzLUglXXAqXiUucy9hKSVfWTdBaT1PM1ZRUGk6XDQ1XzJk
M09FRUgsNj04Vy51Uj08ISgzNShsI0AsVD86KGduP2QhWUlKT2NRVyZVJ2oxU2Apb1QiUCMKJXA4
U3JAVnUuQkRMW3FYVWJOPlhtRERPOnRIUEpxPWx0JkxYKj9TQ28vYzYnTGJnSm0uVDdSPWs3UFdA
OT1FNVJRPUJUMC5fXlwpQ0A5TFlDTW0/LCVeKFQ6MSonXSola3FLUHEKJT5cSFJbNmIvPE9WMjQ/
RCJFYWNQVkUsLmAhI29FX0VRPSZRallkPjtcMEUhTSs9Oyc7VyUqIzU9OCZsbFNKUGtxMGUmJTRN
SlhjPFIhLDgzNC1NM3VDVihpKCReRTVoVkJoayIKJSRlYCphXjFwLWFcUnJTdTleQSEvZ2pWRilJ
Z3IrYChHWylATS5cYmM7KjM7Tj8xXD1wJSk1O3RdP1AnaFExNUFBbzthPm8rV2FJTFd0ODM0W0VW
QyRVVzglP2AiUUdAMnUuLGYKJWVJdHFjUENUKkAnPTRgXignQEpNNGc6WG1cQ2Q5JlxNTEleQz9w
dVJUPlo0UikidXNENT9KXGtpQjFdcU1xLGRQWERFKjhiU1AxZyVjTUxZV3FTamdac1RadDkiR3I2
SDdVNCUKJWJkOic8SiNaWUVZSlAvNCxIL2VoVTw3M20mMilmYCRDWFwrMGwrJmVIWnMrMWVxLnNo
RWwhc0xdRixUbSU0bFs7Zkw8Zzk4YCZBO1JDWFFzRTNYTlxaZz0mZFVsPFNzXDtrXlkKJSw7L1ov
RXVzSm5MYlhpZU1jbTJNYGJ0JiNjQm9EUWVaXGo6QWoqW2ZscWxVI2MiXEFqWmVNcEhYVV9eW1Yo
OShFWTQ4QjRYMTRQXktRI0JUVGdqYmVfVDVFXUo7W3RnMjdhM2UKJUhvSjBAKTpJXEsiQU00TTYq
SVlbZWBzL14nJTpKWGY+ZSk8bmUpW2dJQCtibVBiVV50X3UtPmpRQio3XiZ0KT0vW3VdL0IwME9w
J2JTZzs2KENiNSksN3BmR0ZiPXJIIj8qKTEKJU8hcXJGUmcpTC4sZmFBbihYa0RONEFFOVtVM1s2
P0g/NTFiUnEoYlQxaENDXFd0KkAkTDZCam4uL0pIY2MlPmppbzk5Qlc4clpzM1xgWl5JRCEiS2BO
JnRgblhJL2xhWjdNZXMKJVoxWEc3IlRPNidIZGRSXT5hJGN0QSk6Q2ouN1ZRaz9KSEZNJk4+PjZv
L1wkQTFFXTtVT1lcNE9hQStKPFo7KkRIJF9yZHFidW9BRy0nRiZPSTllVW5vVDUqXDM8ZFtLMT9M
Kj8KJUBUSj1Gb1k9S1ctXSpNITw3WEBII2szTENiSlopPm9ZMS5KJWE2J0ZeaEoya2xeRzdaZFps
XjhbaU8/K0RoWEh1Ll9LSmA+ZD1qP1dJOis4OWVpc0lvP1EoOS1uRnBwTl5sJW4KJWklQDNFWicm
WHBHUzNxRiM4Z19MVz1cXV9mK2wqYlNfbGEiI0BWJ25GOWRuOz5xRV5pYiU0UV9DI2k9Qi1lSixx
Mms/UC47Rkt1Y24rLWljQSJzOk1wYnByKVk8JE40ZilAX0cKJWBOK0NOPGQnQVY0XW8hb29lQidJ
Rl87cGM7Wl1FOz0iUDNJRktNYT8pZVhbUGchXkA8KCtkQHFoJ2dqdFUuRC1kWDFFUkZhYnFBNi9N
SEdFTlpxMjpoajNsXiVaU0s2RXNwT0MKJTlmUTtGWFU/bXQlOzlLVTgrPSVEZW5NTmFBXURmayFG
LGslJG5vbkdbUDduTTpgIzcpZ0Q7NzxrWjElYm1QWU0pKk1YMShiPz1OT2dMXUAyQHBzbUslNz5B
Qy1zM2hMW1pwVVMKJVZnImhBZk5rQWRiJixTRjc8c2Q/PjZocjBaQiROXioyY1hCNTJDK2tfXWlT
Q3FxPzFuVDBkX2VdJ0NSU2huLEJUblYiOW4wby9Ybzs9c2FNV289SW8wRloxYjAmbkVzLHAyKVIK
JVg9V091VypuJmwxPmM1K2tASD9eYGdYI09kPC9HT0BNI2loWC1FOTtIZCMldFVSNGhfMnU8Xytc
M2NqWmhxTWkva1VmRyZvSVwtaj1mYT1EYjk1OixMKWUiIVk1aj83IjhcR2EKJWBvPGJHSzVGUVZb
czdJWzQnJWUpTydFWTo/NStGJ0UwSC11Xi9eUjo3MSNcWGkoK1s5QFRLISxsIjEoKF9nXGUyRUkw
TmFOTFUuYjI0bm8haC8iKTlSa1kkakZ0PkUqMlc7OTkKJTUlXyQ4cG1hVEZmTSRKN0dIQTUiPF5Q
JFE1SEdYci90OiZJR2c8KEVTJVckIi5sRkMiXGVgR2YkdDE0XDZKUGFIJ0BUIj1JKmwtIzc5KjRT
YDlNVyRHOk9hUEJwSikzZy5eaXUKJWNrcSE0M19PciZYRFFpJS0pREBdaVUnOWw5IU5DWEs6QGpX
clZsMDtnZjRzUFsxOi0rT10vJ2FaMyxXYj05QmNMZ0soTCVcQD9VQ0I7ZkVLWjhsMi5qbjpHTVlq
VVklSy9kZisKJVAvbSduPD9bU0lgRzZgUmJAbyc6QiZgazpkKFRhL1gqRWJBbUFSPlRbTVQnXVA4
XT5WXy9DQWJSTCVXQjorJC1ANDEmZDEkKDdLbjg8YFgkJyhLZkhbO1BEQl5ITk5zX2ssMUcKJW1X
T1lLQExTPkFuPl5gcltnck1ZL2AyYClKJFFvbVIwO2k/SFltJmEjbzVhMl1GWWVRVVZDVEZLVkNw
KzpzPEAnP1NbJ289SCJyLW9LKF5VJSdWNERgVmVZYj5KXVI+JWxpaHEKJW9DRDNGWnBVZ11GXk0v
K10wSTYlbWlZLmgmXzdXJT00dTQ+VkRMNSpVTGpTJlw4REAwbUY1W0dVO1M7YGdHZjZxW3JjPTJY
clk1QiRxKkhILSgxNU1TdGZTYkpLO2ReTnFMWXEKJTgnP0otTVRyMlReaWpTcFRDTVlAWEJQWVkh
SnFwMl9TQiJ1cGYqT0BAR3U3Ykc6S3FeMTBGQ2UiYlw8RXFjI2VTSk9bXDMuaEtSOm8nL25aU002
XV9yLTE8T1ovYlNMK2pRVFEKJUVOMmRUREVcMjZEMzFKQCIpOE1aJHNGTUQiODZUW2tFU2k/XC0l
XDhxbTJxYFBNS2BEZzhHNGFfYkZPM2koWzAyXWwlJi9XV0IrM04hLlBGTy5TYWRiJ09jQ2dDSGFD
UC1hL1kKJWV0TWlSbmc7aj1iOUQ7aWA5dUNmTTYycUlEZD0yb0NEU1AhPk5JTFhFNmVQPF0rVGZu
VnQuUitgITNfaSooMVMzbCQmT2lqXkFtKUpbJ1E5LkRNdHJoL0JXLTJCYWpvTVlhUEsKJTI6Vydi
M25VSEZVLlQkIi9nSWJmS1laQnNVKGRMJWpXLyNxT3M3Y18jKklhXyE9Skk3byg1KzBFLC1UZFRP
Xkk2ZFY5P2xnMltVKGhTXDhwM3JIQE5DUWlGSUZfMj89TGNyVlYKJTwnIVMzNjAwKV5yJmxSLW1y
NzQpYUwnLFhqVj9KPSVATktjJzw6UmJmOjFoYCopRDBMNG5EWilEU2ZmTVdpWUs9bFItZyNaTG1T
NTZlcnJsMXNLMTlZdWBxVWYjSyRdaWdGRmkKJVowSzZmI009IkhSVy1YKW5mN0g6WkI0TzQ0SF0p
R2kwPCdhQVspOWU8Rj1cTUZzLj1HRl9QIUVtXHN1WGk0RkZULj81WGcxLChZYkVAWmlaUkFsWUdX
Y3QvRlZWVGsjMFlgT1YKJVNpPjhiJTJfXmpUJHVDNFY8WiVIREpxJzFwZChOMlomNTAjTSNhRXVB
S0BqRStdTG0kXkhqPiohJjRWWnI5bEVzN21aQnVFSF9TYkQ8aDhnRCFAKkRCV2NIckYsYzNkYCU7
bTYKJTBmSDc8L1IsTC5kJUcrLTslIyUxRnErSmVtb3RuUD1gOy1cQC81c29ocmNOUWVsRig5My8m
UlFCI1tkXV9fVzZKUD5UMWksTkUlYVM2Ul45KVJpWiYvR2YkSl1MSD9lZypWNSMKJWstMzxqMztX
SkIvQF5vdENvLFRTbkclWEwwWjhyZnAnLm1WT20pO0MoUSRYLyJcV1lQXy0sdVQpUk5wJnIjVlVF
cy4qKSVJYUxEKC1Hc0YyMitZRXBEckktUFReLFUvP1xmTTgKJSc7QWlnVTVhJS8tUEAoJFEoXkFX
ZjBGckFhLSwhR1dsZSNyUy1dUl9uPmlga21aYCFRSFJjP1pZPWlba1tYXFEwSCFpLEw6LFhlbyVk
VGVrYisrXFs1NzZxY0RWSl0iRVdFSzQKJT5FTFA7OyRsUlktIVNJQzphXVApTi5yMjo3LHFdIjMx
XE9BQ29TXEUkbDlyMHJHc0pMXFckMDEqRG8yaFIsOk1EMi8rVW9QKjVsPk8haDhITT5PMmduUzBx
Pk06NlQ4UUQsYCMKJTY0MTBKJkVdYWBoJklCK1dlbFdEVUpyNyhNXVpwak5bQihBLjZqME5pNzVu
Vl5pTEonQGRjOTwnTWNTVFR1MDg7RXVzYkM5aVpVdGtnI2RtPDFSLWE9WmYzWV9RbWoyaiU2ZzEK
JS9HP3NjSmQ5K0s9T3I2SlNiWyw/SHQuZXJFRSlFV0dIc1U9NlsmMHMlTlckVk1lMjpxP3AuX2M7
L0JZKGtbPUQ0P3M2RCFHKjxcVkBGP0JjUWJTPF5jND9tLilRaEUwaGBHWnQKJVxyJ2dqLm84Jipn
J1BycWlFNzppckJTTydYPSFpPlVoKEd0J3JoSCg4M1tiQDxVJGYiXHAwYm1OSTNfOTlWckJYRUZZ
IlAhblZTKzBdWEdQKiVfWVk+ajcvK0xHOkdeUjshKD4KJT5WOUEtUkRyWSQsYzsjUzFaZkRfQClZ
b2dSPnFQKWNNN2pvUC9FMCI/Xz9jQUk9aUFRX3RmUGxGdHUmRDpRYm82WVdbXmc2IkFubzksYipk
Pm44O0JobSVvKT5JNnBjUztadEQKJW9DaWQvJ29zIS5DSTZnVC0lXG9nNjdGWCEjL1JHbUpxbV5O
aE5mJkcsRV9XTlFlb21GbHNnWGFiM1pkIVE/VUVyMUpjRFUmVyJYQlpDXWpdSDBbQSpEUWM/I249
SzpyQTpxaVUKJVBIJDJDK2w/b0YhLV5tYiVpXUFgQGJHMGksa1B1M1pKJ08tPFUvZWNIclVwMitY
Pm9HTGp0cUNaIjpvSjhEN0pXOERXP1BSOj0nTCU8KUBxL0svSUlhcDBUaWFrRUUnU0VQYCEKJTQn
ZDdZVWgrLmxlNTJCc2RtPWVhP05haVg9LGNkMy9tT0t0QDpNa09aQVAlXTIzaUtgcypkSD8iP0xd
VFUsLl04JXFSQzs+YiJtOTVcbiVhUUxJKCMnW0lNXy40LWI/PCklRVQKJUBNUVUuSEpQSitbVSs7
PjkpRUYqNFM2UGlTYDZkWl5vWjp1bUswTyQkT2dJQEUxZE9dVD5fXz5IO1xQQ0NAXkVOWF48TyJt
WisqXkBuXG0yR3MiaSlsREEibFVTb04uPy9KUCIKJToxXWFvaEtHYFsyNEMpKidDWCdwaEY1MjYy
MzpwTkM4YGJBLjpmSFNsWmNNRUByNTNnZSZkISNVQlpZJE8xUlReUVNyS2s3VV1rUkhBPXRhVTFl
JDpwdCYlPC1hcCkyTSxASzoKJU1KJCgrQSI5cDwtS3BTPy9MbFJgXl4+ST0zVC1rMkQsdT40YEVW
J0c0Ik0vV1BvOEE8OCpUQ15AW1pRZlI1UVxrK2dXZSNbaCd0aiw3MG5xPCpqXGVwbHNsSSovaWpB
PUNkXDMKJTJnPzddKC5iQCRybkFWYUQ8SD9cVmFlVi1oZG9OZTphPnByRFleTVZSYC8jODdQS0ok
MDc4OSptYj4uVGYyXiZKRi9AX0JMNWBeZztTQy9zQyooW2BnM0hGNy0ucy1rInM2OUYKJTgjS1om
OmU5NC9GMyJGL0ExO24hIVc9VmVORylkcjpRbmxMVnA7JnRwUiVybU1QSysjLlJXNmFHJ0dHJUJ1
SXJGKiJTbzo8MGxrdE5QcTJWIVoyTlRlPUUvQVY/Zys7ViRDLksKJVxmaTZoXFJYdUdkb2l1Tmpg
Y05LcEw5OTg2Mm5WRk86XEIvWDZRZ0grRT88P1lyImYuOUpcRnRJNG1TaFM+V1JTZFpnJi8zNjAt
SDpoWUktTVhvVD9oXDU/aSNwSFRiUydxOioKJVRSPzc2ZzI7ITxdNG4sJGI9XjBlIyo9XXQ2P2Bs
STRmLUM2ZUtOcyRvOzRybWIwN0gzSjJicTxgZEYsPkglckZIUmg9Q2ZNJHEzPiE6VHNhNz1HImhL
QGV0Z1xHNSw9R2w3PCwKJUBNTFlyZSc8VjtMLSk1Oy9VPTApVDZOaUduaDFib29PX1puPV9KZHBW
R2UkKWFyaFFMOy5qNlQ5JFwpXFxHMCQjQUxXT1gkOCU5V0QyNDRbMi0rZmcmO19FaFprKm9GXC07
LjgKJS1WXCRNKSM9TTYqJGE8M2kpdCJaZkdBKG0hdVglRCpub3I+WSJeYWA0XjhOZCY5RChLMyJk
QiZhW0FxMXItTnFhbi1rN0BBVVp1WixXND9gTGo+UGs5bUwiUiguWlJPQywxZUAKJSdgPUZtKUIy
YltqQVJcUCpnTEB1WCUzSWdWTCVsajNKJmc6WiFxQzI7OnJCUlIwdHNMKkByWjwlakBOKmBSbGBc
JjRjRixRR05oL0pGYUw5bktFUmBfRSdRdGtyayhWN25VS10KJVFyV2dGJkM5JnJhOlwqN2lzO2Ve
I3FSbiUlR3VaMFFqQ0kmR11WP0k6PCVwNDMwTERWLlh1cDxKcWFoXCg7ckRrISIpQmRmPG5xTTRD
OmxmOTM6ODBBTSJbdFtBNG5HUTF1VF0KJWhBVl1bOVReNCpWJHRbKz8kOnQnMFNGIk1SO0pFSUxU
MCJQIzd0WkJOcDVvbGIlSmNLNyEhcXNfLVI7I19NbVtlKWBXUWdkPys+WUBnTTBfQyZlOkJZKy0s
VzVEMDdnOm5laVIKJWsiNT9KRylMbm4oXkFaYSUsPF5JU3IyaUZCcztoRlpoYHQwWlxtQGc8Uz5H
Pj49b2o4cSdsWUdlXl47UShoQW84WiQqTmA9WGhARDY8NW4/YE9fbidwP2prSyIzYmQqKFN0aWoK
JWlZaVwscy85KV1GJG5XTWYsMG48TUZWXTBSTTthKEVUYmxSXVdFPzcyWik6TzFXcFlASiJWQFdc
M3NLMmJYUzszO0NvQDBsNjNcJChSOmBmISNyLFI7QkBTSygzRlEuZUBVJW4KJUBAbz9cbVklO1Mw
MXIxbmZjbjsoZXMzOkcmMC9MMV1CWUVmcGdiOFA9ITFOcGxhY1xFOkxTPFRvZ3BJSENrIj4hIko0
I1ddYmk8JEVCdDljK1lWJ1RhaUpfQEBBKSJqQDZNbGoKJVlxPk1WMGk9P0VtLnBTazpSSTohYUw+
Ll1IYyhNbDNlJ2krRUhxdE8pNjwjKyxSckQ7Uy9zTFxnPCZlJyZMTSJUXkY/SFswI3RDZzNCaSsq
Mj1RLE8naVwsXUlcNWhKbmU/S0EKJVJzQl0zaGlOTTY9PDFZPGJiRVtXZFEobmotKzZHY0ZlLmAj
PlApQCUtMk5cITFNJVQsVmojZkswITJAWyRSZTo8Pk1JY2RCMjYvMVJyKksuPlpxI2NuWVFxSTtC
OVNkLSJMISYKJT9rbUdibCtnXGE0aENyai0oKj0oP28xIUJfLCEkbSxTYlIyOCROUTRpLnEjLyFb
QUJPQ2VxcDxoXUUqc0oxXGNOK2VPbiEiOVohaVM4bGpxJXJOM1BnLykyTVBqWD9dYDora0MKJTZv
TVRGa1FNJFM5Vy82WldPP1IkITJZMjdtJ2IlNTBqZFVeO3UlPjltWm9jLFc6JWhHRWNJZ2xSVytv
Q01NRDtrbmxILkpZJC0uaz1iWTYiRU9gSlYkPytCSUROZVdDNS87VmEKJWtOUTduI1MmIkRQTklT
V0JRWzopaHIwK11uXkU3WjlTX1RfPGxcYUs2bmJtMm1dWmNTX0BFKj9mVnJTKlokJS9TNWFzZUY8
bEY7SS82cSteXXM6cSxKa2RNKEQ2c1toRDVzIW4KJSJkVER0XGN1YyYkdUBdJWY3biQwKjJlZm9g
NTFuTmZoX283YE4mP2wrPHRWUWV1VyllYTo1YV0jak9DUl9ILGJDNnEwOVZYTiRvN1A8PkNqbzBR
ZDtlPUQiYismLyJeJixiJ1cKJS01dUw2OWcmMmJJV2VPXUk8NDkjPUxsVj4+RF4tUkIkaEthMXRn
QnRcPj1qIkkiXWRWLUhwYlUpVEgyOEZEYUtPRW9XJGxYXjRnbCpHVT1daUpfV2FaZU5TSVIqI25Q
PEk/b2UKJS9EQkshTSxrPDhpcFUiTmRCRyZaSGpXL2lTbk0rUGpSIm9PRnJGUjFYKEs2VF0vI3No
UzdKUj8/N1VXZE5MU15dZ2ZmLzxaQkNZdDMlXUhWVFJuKEE7PlJgWktkUyRmPlVxWXEKJVlUP1pO
JT9FRC5GW1p1LFEhPD48WSJdP0ZQczlpYGFQYD4/YkZrXyU4dTAxaHAzRXMsbFcsKGMjXiRzIzoq
SzU/V3JxOEtXWmlwX2J0Sj8tKEJBci1NXiUkKTBTOD1fY1RSKFkKJV9sLEouMyc9az4rWU9fLDRH
PDQ2NTktITAjIloqVSdGXzFjal9rSGZTVCVdOlRDZz1bSVIuVW5LVmNjXVprU2sjZFU0LllSLTJa
OmgwLXJJcFcvQlViYEkuTGEjWVRmRl1MUTMKJSk3REhTMF9nIisuUy4lTWpJLF9iaU5nXE0pL1FT
cE4zaiFUV1ZvMFNkM0BWSj50KitENjI/JDo6PSllYE5aUiNoOz8wSzY7Vm5qbFFtPi00V0M9ak0v
MF1hVzxVQC9Nck5OIS0KJVhCNEYlUy9lInMmQlZtcVorLm5zJ3AyUmlpKDoiJWUnImk4WiptVEM5
SSpmcjdgSlBFZlglQkZPZSYnayVvTUw9bFpcbGFyclFXPlFsbEAvKVJ0LjAnJFYyTWJHOlAiOl9m
KyEKJSlhWWJHcUYsX2lBJEJuLVNmRV84YW0jZ1tDIVhZbUQ/Wj1vKms7a21DX25ZSXFFKSooV05b
PXMiSTwuX3IlOyM8QzAwN3RDS0dobDxBMjk8MFFQaGpxKFtPKy1YQCpLUEleYDsKJUZSQE1TXDZz
JipeSSQ2TCsqIm5oOjhUOkpMODUxUDQxRllAKEFxJVc1aTFrbTdJJUVPZy90KyY6XkcxdWVySzEu
NzJzXzhwbk1RPSxfNzwyQFVvJ3BKK2kpRClvOjhFPlpXY1AKJVw8bEw+Oi82QEtZVC8vRGEpRVg+
U2ZGOktkS085I09qLTluRT5IZSInc18vMTxTR0NFcGhbT2lWSlVBTitPZzMuaWFfRiU6IzheNUU6
YGEkX1BMVipcWl5oUUUkc1NpTlZmYyoKJSNwXCozJjJvR2xxdWhLOVU4LihpLiZZcnAqYy1TQzRt
PE0wKjY4NEJwYXVFZDQzJV9fWldXSD9OTjUvQ1pJW0gjJk9ec1c0JzticzJmQ1NiNyZnWiQjYyNt
TD8rSCJIV2NbTTYKJT8wLCIyJi4/TGElJjZQbStGNywrb2M1JFZKZWolWGVlSk1OIyRbQVRhIyFs
S3JpcUo9Nzk+ZDxTIz9ZXzMzbD9Eb0leRlI1IjNtWy9yRmgpVyhLaTNeUz0ob0VDLz9iVzFDMnIK
JSM1ajQ1IzBhXiNSL1RFVUxETGU6SnBLJi0rQl09bGNIKURTIiw1STRgbGhVJU1gc3VqWGVqLS9h
VWdATFdSJ0BFOThUKzdQVHEzYUpoQEplXT49SFZKSXM/XitmI1lFISxxXjMKJTRtSSU3IlM9OUxM
TU9rRydyYUJzO3I+MSsmW3VCVCJrcTAzWGdBQkxoZGhKTSQwZVQuQmhJOSc9LSlSPEApRGFtbDpC
X0Zhb1Nna0YrKk4jWSZXQFQlbD9PNHFAaFBnNjtvWU0KJSNaSiR0PF1IbztqYldBM2s4MEdPKytO
WTBFVT5JNV9aSnRhcT5YJyNmSUR1bl1KdHErWDNvZTdiM1AtdDctLnBeMjdrTDFERHVYKVlJXmBK
OVU1N1M0Wjpzck9NXSV0alZbTCwKJXBSYyJOJ1lZNzBKMCxAaW4/TDZAKXUrI2xZaytlSkdYZ2tG
NUNoQjQiMktxN240aC09UCJ0bGNDUmUpVTMkQzpwNkFSIjFPcGJNJ1kqUGU5V1FaZiRWZDdAZTxp
KjNNPGF0JVcKJWdRaTtVNk0scWRlPHJENzZAVTZEXDEkYzMwR1RpNjxpTjQyLjJIXWZJKEluM2Qt
VlpPJkozOmBHYytabmhaVHBKOV9ET2ZDQG9salN1XWdQNGRAcFpddENBPD5YR14lXSgjVV8KJURB
Zko3WUM9TzVWaWE1XFUyQmdAbWd0W2dYNSthaEQsS0QoQDdhdDZKLkpZY0cqbixVbGo/LW9OLmlI
TS5YYC9hazhsbUhhU1c4cDlSZ050XSJoLTs/LUgmQ1cpPVQ2SUgwTUMKJWVGM3JGUlhcTG0oMzhX
VVRBSzo9P0tCJC0rXHQ0WERDaGIiO1w0J3JzM01yMUxDclNlQ0NeND1OV3RrZFtja29XOS1JQVhN
XE5JLWpoZV4/NDdDRCtKaSdqRk50LW9JMjs2MDAKJVAkOyVQR3EuY2I/JzRfQCJvWHBBWlxYcWdE
UDAkOnIlbk9oTDc2USo0SFlcJzZuMD43KlpTVEQsNDA+JDpqY05TWHNncnJEQXM2ZU5ba25bWDJJ
Tz9WNDtXbSkiJlEwZChgUmgKJSxgbnFhL0g9YyJLNiROWi5OakJcKkhgT3BpX3AnZjloYi1CNmtR
Z2wtRCNmVitKNCgnRT1wLD4iV1o8TnE1VDIvYG1VbyIxJjtbWFlINVAkVFYkRmFlImpMNGAsUFhV
WEFoVWcKJVdpJ0JrOGFvXFlXQShkWklFXktLaWRsJ1ddPlU2JVg5TUZaXV1ZPGdXUTBTYSRYMmVk
ZVJSYjgscitYRWdbck0ta00lPjokLzEyaDpLXGp0Zz07LiJBNillY2A0N0F1XypCX00KJURgLHV0
R2paRHFDYC9EKipyPGlTaCpRbGtxLmNvXEQsNiZpVFs1Ll9nXWVaVVFkRSxPKFBoJShOKTZvTGch
R0VWSHNDPDFIZWV0Pl4nLmg4WW1aJmBvcmw/LClFQG5QOSZROmcKJVBhX2wzUkc/YGcsbHRndDIr
VVMmailaR1hUbD5wbChkJ0Y4aiQlWmU6Zio5S0ldOD1OVFInNiFmY2BXNlZlMzMhPi5LTi5cLU1h
S2lgV2NGUnRxJzZAak1JYGVnN0ouSTxFJEUKJWdadTxET1w7TXJGdGReSGR0OmVqWGk9bWxiTDZd
RGxJPGtSUmtEQyUoRVwoXV5mJzE8LkdhZ1QiLSxtWkQqaCRDQiVLXi1VSzA2aURDUklSTyRHLCNc
MTo4XiowOm5LTzxWVV0KJVNNWStQOTw5TjJXPDwxUlNuL2R0PTNrIjZAW3VeYTpKUUFYNyQ7ckE5
YkBdUGg2WzFFTjtgJ0pIZnVeTytMNChIX2ZlUnA4Iy1dcXByYUcqKjZwKz0jWGFSZ2taQEE/YWI2
LS0KJUk1SDZNYmFnV0drZUxbOGNASzRdT1Q5K19TQ1R0OV1NNz1qXjo5Ik5jIz8nVl9DcDsrUEdU
TD8wUm9LIW9RR25ZVFsrcktBR11RKGVRI3FvWzRgRGYuTSkyZEJGZkVAJ2tnWGQKJTdjKitYbERT
ajAiPilNWUpBTmRfWjleNy1bU1xBUCxvIkktR3VyLzVXJzFjWD0oMGwxQkdkYUM5XSk7JkoibkVa
O1koam0mTT40YlQ+LG9LcD0vWHBgJTgiJVBgYU5tImUyUlQKJUVDKlltKFM0TygjdDQsNFZJKFZN
KCkhST07PGRfbFtyRSZfITlULTVaXyQzTjFELyJGKmVhajtbYz00SWhkLzg0VkUqSFw+STFJYURW
PVdMMjdKRnVsTlIyIS9oSFdwWjVUb0oKJUkpQlM5cnU2Rz9PJFVfa2w9SFo/O0xFcm4mJ2AwLlgv
XDVfM0AiK1BNPW9rWUxkWFhOJmRBKzhBWz8kSU9CJ1o5SSQuKDlsQyRLXj9bcGs1UWcvP2s6Mzx0
bk5rVDZTMmw5cloKJVA+PmpZZUo6Ol9zOCQjRE1OZzgpVD0odU8wTUQ3KXBPSFBVJ01cLy4qY1An
L0peVz9TSVkzbG1DTzhJbmhkLzUyVlYtTHNnbTQ1OVVjQkFaIT9OI1ZTczolXjpBKjNhWVBcbEcK
JXA6R1h1Qlk1Ji8qREhbZWpcb3AxY0BcXzFKWUojU2c3WFJyZl9EVT1XIVdvazRGYi1hWnFoLVBl
PXVBOm5CcEpzYW9dQzVEIXNKNWJmZTooTXNkInBDOXJkUy5nbiRZJkkjckcKJVleXzdDTHFvNW0j
ZzdHJ1MtSnEwPz46RUc1cSJdJD9pI2hPaF9hRjEwVksrISl1JTxNI00/QS9JQmY3UTQ+WjIjTCJj
MFJWcEVhaDYwNzFIQnJPI0A7az8sWmAhOydYQk5kZSsKJVM1bzZbWlQ7NSM0Xy9RKy9AJ2E3IUZW
UDNIUDJaMzYxVj90bmlPaT88az5JMTBfVmtgX2JCRGpSMiI4ZEVMOnFzJ1NVVGQ4ZXJsZ2tnUl9o
V3Unc3FvTEliZnJIMTdONGZwT3AKJVE7aDNYUzNGRipsMEN0JVQ3TEt0TFI9YU0pJG5iKDtJPnJv
bFhtayVSL2tPUVU1NHMkaE9caz9RW09oUF1eLFs+VSoqVW5iOWUpTk5DZVE3PTsnXjokZSY+ZzNw
MHNAYUlhRjEKJUklLjFCNGwoN1BjJnNmNmE3LTxUUDAnMzg2VHJUSzdvZGtNRTFGRGhEPWFTUjdA
Y2A4Y1ZHYkJBSlRgLEFFJks6N3BuWSE3SiZ0aG5zcixsSyRrdEJhNS9fcjhdVyVQYWJlNXMKJS5G
QzI8Wz1jQyVwWTkmJWE0czQ6TjcoXUBhM2ldIy8sJDhtVHItKF88RUxUXV50ZUVEcWxIXztRJi5H
OCI5TF9AW2ona1FJIVEoIUMzTF0kaG1uZ1ZpYDAvQEI1JyFaXzdQYS8KJTspMychaVNzMnBGZFxg
c0wwYltaOmxpLmBiciM+Uk9fY2kqPElLZFpASExIX0BnXDtOKV4nN15ZWXJdQSolPjgsNlBwRUso
XWtKM1Mtcl46cyJZOU1SMWg7KVQ3b1I2YC01SDUKJShZXmRNLWpSRDgsJkMiTmQlUEZOTDxzNC1a
KSRYLjJROE9wbj40SGZOSVMiSC06SFxBKCFeSWs0QTNlbTk/RGpgOm90XlIxbmJjUV0iJFdFT2Vq
NHVKaGkvYXAjQzQxPDVLQGEKJTQjZi1ZcCJwRlBVazxEYW8mK2VRVFtURSUqXG5uQCdqRiVwKzZH
W0FDYG9abWljTUEvaC9UaldaTlhiLFxaW2cqWzpDb2NAJTYtdV1RJkc7MkhsXVAsakVoRzE6MVNr
J3M7bjUKJUVPN0JOSlc5NHNcY1giakZeSDtmPltAX101KEM5NmNCU3RcTzQpL3JaU180OzFwM009
WHBmXyVvX1NXRFJTW2VDQEJAam8hRDpZVi5zKmxlVScxIWFHNC9sSGMwLDEyJStkZmsKJVQ2PGBz
Q1AlSmE9Iy1OOUNbLTozYFI8QU5jQ00mIjJlclVgYGk0OSFfPnRJOy5NSVRhLm1DcVQndVhsXigi
bDJnOy1FTDJLTWl0dERJIUY0UjhSTVItLW1lUUtyYj0qNnQ9bysKJTImNGE5KVk3Y1FMXmFAXjdq
Mlt1cSlmNz0+b2Y1XistTC9oMjxlTEM+LEdIKC9yIm1EXzZMU0UjakZyMT1eQEhcOnI/YVtnL1NF
aCE0ZkE5R1M+dTkxKkxrTFdyaCk4cSxtci0KJVElTkokbUgyQUs7J1hULjxOZ19iIi9QIllQLVc5
Qi5xRmB1YkQxXGVEZDkhJEIwRHIsYjZlSSVsY0EoVS5rOnVoPyVrVV1PKiRYISgnJXJYVGtUTTsw
SkdJOiRjSG91Qz9GKS0KJShDTD51LTwuQkY5Z2huJzRIJV8qXGkhS29ZRkg4VzxXVTsxSC1TNzNs
SyxVTClcbiosZEpLYV0hOFsmPCpkbS1qZyQuKGFQS1UqYChiblU0Wm9KalZeM0xDaj1kSihtPi9Y
WGkKJXBXTDpcTT5tbTs3QVE2aF1pRUZlMD90LWhVWklWIjYxKT8kMlA6RlRvQk5raCVZWm5aIypd
YihPOzEsO1klcm9OXlkpSU8zZl47VyRVOTtTLzl0RUQyVSRfRSotIjYsQGImRFQKJUxIaVo0RiEs
TmNyLDUiajxJXj4zbUNCXSpxJCY3OXJEST82P1U4N04jdC1aVjxhYVFSSlImJDlCKWhdWEcyb11a
YFBzWmRyW3JeQFQ9WF0kaDt1SlxVWDBlUmJOYyUhbnMoQ0IKJUw5aShtazMjM29yZDIuKjRZTVFk
MDNLYyosZk4sSCxoPC5TPG4kMk5raV8nK15MNDs7YXI4VzM2cWhKUUEwZUtGXl0zWjQuPXVKJjZJ
aT5bXWMwRCw6MTRlSG5kQTQ0SiJQSzIKJT0pWzdQKEssaV5aXm1Bc0MtW1IkL1UoZllOUyFiMVhS
QUBLS2RsVFdeN3RdaVYzNz9sNC1SYjEpcVx1YjtcLjBOOjdJcks2ZDtrbzQ6Py9YIl08WzkrUEg6
c1tkczkqSCNydWQKJWVWWEZvRD9uSUNCMHVDOFVNUDdBRkReOkhtZU5SL2wsO1Y4NnU3MyVQYWxm
J2hcJ2ZmXjotI11nL2lmaDRAK206RjU8Y10jXC5db0JEYTlFMVZDczU1biFIJTxuUmZXaDEjQCkK
JT1YRWJZRiw1ckxdOUNiKmMyOC06MjBLZCIxX1dyZFNdJG9DP2E8bFlsWDcoKTdVVDpPSk9OMCo6
aDRhXVVUMCJwbV1MKTxGKWM8PF9MPEInQD1ydU5UPF1MZVMudE1qJnE5P10KJSg+KVlyTy1HbSFd
JCNrdGBbIl1xN0NsVSJoOWhzOEJLJFBUSSFSLUcrNzNiaTVgTzsqWW5TLXRYVGU5PURQcyZGSj1W
Oyk0J2FZVGVtXGAwJz5ERT8pO2MpMThmUyE1Y0g4MUAKJV9CODJFJG1sOmNxSiZzJlwhYUwlIzU2
Vks3U0pdQFU2aEE8XmJXND5rcVJmIi4pX1w9WnBPbE4uJVVRTktjTGFsZUo0KlpMVGZGTjEjVW1r
bD1oVmRAcm0vO0FKLHJtMmFfRFIKJV8pcUFATzRXMnRyTkwoQmZGKTlDLDY/JTlaO0A2L0txUnUh
Y0xHLkZKM2omRllSQk00aVhIPl9rRVA5PjZHTlNhXGFSXFkjTm9XKS9QJDciOWolUlRMR2E/LnJN
akxhcCNELkMKJSJ1YE46XT0qWmJKYGVDMHFwYDxNZ2tnKyNZXj9AJ0leKDJfZSIqYG84V0BuS1lA
cXIubkYwZm1IYiokSmNdK0lHNTtnQDBhZmZpYTQyZiFDXmFFaiUsMzA8VjswSCJebCMjQ2wKJWw/
JSxYXm84K0pnSkY6WmlsSSJlUzZVRTY7SjRYQ2ZXNTAobU9pPipSJUJkMVQ/VyVTRkslQzc/VE5D
LyJmWVNKS0snQz0zMFFfY0dSLkgqLnVBUUpxL3RURk8mK2ZbSVJMMzQKJTAqWCxdUXVMSFFUSDFP
Kkl1TlFkRlQ4Ty5qTWVNKic8RUc+YWxLQjdvKyJeZWU3OlFYVFNYbUJgT1U9dUN0NzY5NWZPdUZE
X0lVQi5rPkUnM1JLRC5ZTFA4ODg0UTdKLHNKNi0KJVM6XUZ0TU8xbGNSYXMqTGVpNGlFS0YmI3VH
VUouUk47SVosVyQtXmtpVUpSOT90QSs7LT5AbSomOVA9aTowZSxKWj9aTEQ4Lk0nLmw6Yjg7MS8v
SENIM2chNGlKYkghOU9hPlAKJTFlJmVyNUJCW0oySlhGRTgoSy9LR1VtcExkIi8nVyRKczZsOCg1
QyZlXT1fJGAxUVs5MjN0SmpjQTlsdV5eMTFlZlBXT2hTYyhKMD9fODdAJEtaP09nOjQ/ayRFaTYu
UE5RUSoKJXBUPllka1VfaDBIIzZCQldqcXFlWkpeSj9FTC0xaW0hZklfMF1qOnNWMkcqSEdsX2tx
MXJdblpsNWFvbU5ZNnVrYDBrNFQoUjEuTk1CSVtpNjBRLS8lNSQuRG9oQyQtOypfKE4KJXEwcVQv
SGFYW0UyWEBVZ1hoSkc2QWBUXSJiXHRNPiMncGxJXS4xblBXYmhJPFElNGJfU281MTQxNyNrWz1z
I0xrYjZmXmI1QmZkYTdMK21Mb0RSOnVRLE9dX0ZQRjIsYjxLXG8KJSsuQmhIQWZoTCYrcS4yWVFt
ImlOa0klTC5PV0wxbWlXQz1DalpaLEZrLyc9N0M3UDpMNGxHUThyLGpGK0IjTUReLzcwbVM4SWx0
LWRUXUlVbzw2J0RaRDRTa0c7S2o2OmBsaiQKJUdtMF5PKidPJUhsS0RWSS9ebVU0TCxNKEclQzxz
OSh1Y2g9WjApZTNuNkNWVkQxJ19caE5ncXQ4PzYpUyEodE9sRmBLcWEmcWc2JzpTYV89Jj8vWmY2
aT9UQ04vaTFhaV1oSSUKJWRdNEdlMypsRjYxOnI6aEInK1JrQzNhVWpLSF1tdCpYWDN0NVhqO0tb
al0hXCVOKUAsMD9zUHI4TFw4J2AvO0Vsa1E8ckw5VUwpZycqJVVRbXNgdFRzMkFARXItLzFxKE91
OV8KJUosTVU1SitxayNpPy8nXU8rNy9EczNMYCxwT0UlZzVROjBgK29JZ1pyOmc2aExDYENacHRb
JWtuR0RzMHIuZ25jclFDaF1jW1lvZ2hyIkNRVERuY2BEdSlLb1NHXi1PcFJgTFYKJXEmXl0ucHFJ
bEVzN2dbNHExJjg6SixfYV1jaTdEIXJgZkRxbmQ+cG5oZHVuTGhrWFxScSFFXG5VLVpXZV8lcEEh
WStQYTFIaEJXJydZUkU1SWZKcVxYUnNfQzBXMi8oL1JIXGMKJTlhMm4nbCJaQl5nYFY+Xi06ZXE9
QG5LLlU0aGxwRWcvSWFaI0ldTnJBSkJMXjwnX15qYkpmcnFXJkMvLzlaNEVdMUEzYk1qPFBadTgw
JFU0XHVpJGBfZSVAZDNTVl9oL1tfVHAKJTcyITl1R0YoI2AtRCxxJCw2XGJLT2l1LipsQVRBRGty
WS8kQyItKDoxSmY/N0xHNzBpK2xnXihIRmNJdSRRbyhgI0VHOWxXUkFGWD9HL11nT0tgMVJTJzxw
OD8nNFBPPENfJ14KJU9XYDc7KmtJJHVYRl1xdFlkI1xWRE9Dcl1sRztgPVZqPyNEIypaYFQ6JkdJ
OkRPLXU3Vy9ZNiYmWGYhWWlvRG44QXFpPDloLSsqKEBiIWBiNVpoLSxeNVswSVhsWSo8PjBUTF4K
JTJLZkklQT9RMWJpIzwyRC5hV0paPFVcZmcjJElTYWQjRWBIPVQ/alspbyxbTDxJO2tcckBzMGxa
PzNfVU8+JmsxMWRKRzNhRTFCR0tFUmkrYC8lVWxrUTI/UiVGYy0sVW0rRi4KJVQsbyJIU11TXFU1
YHAoZUw/TVBTP3AuVzVWNCc0YENeVkJvUExWcSRaQFExMClLNk8jI2EmSFNnM0k+WjFMKEE5bS5J
L1JaOjlUYW5SWVw+OGs1SCNWdXROaVdnJTxCUktkWWIKJUU+PTNKUkU5a0g/Km44azRdV01ZVlZt
X2AodHA3LSZJU3U7WEhzPE5eLykvWDlyK0lhQWs0MGtTV2ghSE5rOFxuNVpcUD5fOnBjSDpgKSpT
YCFrMlsocyIlLWBeUzZPOFZYcVsKJUViSGduSicjMUtdKTcpdTFzPlpuUERCZlJOYHJyIT9CLi1d
JTU4VDJeMTBqMzw/VzooUHNlLGMyNjRnS0xiYkpva1VeYlBVIkQ5PlJlXFlVTyppWFotcFpmNCtl
LSFYVXI5LF8KJUdHcWRfPFNlPWFQWEZUYW5EcV1LRjxJU3UqcTdwYUUnOHIiO2cuVCZbb3IpTlQh
YUIxOThlVl02VCUtXzhyI0FdbGJMLVQhPzdSaV0qNlwvP0dfWDs9cCRkW0U5WCYpP00yMzIKJTso
ZDd0X2UyaF5YPkEkPl4ncTF0cVk4TU1TLUxUSkAocXJQIWAqbmVSbWczXTpIV29caFdmPFQ3biR0
ajQ2aGAuS2RgYDA1ai0lSyNiNz0yLWxSIT4/cW4xKU0wTUpTWmFQTEUKJTpoaV1iTGd1cVo4L1ok
dE5rLEM5NlkvbG5nMCR0QFNXcjUxYGBiST8tYjV1K0A1UEhmNU1UaTllNiFXWTNNZWsmTjRFOkI8
Wk5IXilNYUAkOVNDRVAwaiY5WCtyKkk7VT1yTjUKJVtraWFwKCRCVDYoQkZpZ2sxc3RyPWxnbGpG
LUJTRy5gYGtVKmdPQF1gKTxAUD9lZ2FmUkwzaiVTQHIsXl9dRGoxLUJuJlBEbGFNNVtwayJMMy8i
MVo4TCZjKFVrPideLyQiY1kKJShda0FXLUokXC8yRD4mKlVQLGhQI0ZKK3JEJyUxN3A8OSUqKjJu
dCJnTzBgT1RKVWdtb3M3V2ZjJlZzM2ZpKUZPWFQsTVZfIjVJQzEkNkhZXyg0JCdtXlsmJio/Wjdb
Ry4mQ3EKJS5QKnI9ZCo8PmxZbGtXQydyKE4xQ3VlP2MpMFQ9JEdZQWhSXFYxaTRSYT9AVUJsVTJK
Jy4jPnJkRUprR2pkNjwsS3IwWj9Hb1IzY2QjRyx1MTlvKzRXVGcxTiMsTj07ImxFLTcKJSxkcTpr
I0hBKjJpKCg7bDZQVkJ1XWguPD9KNkI0OXMxc1MmISdGR01KU2NsMWVeJkg3SyhpKGEuTlZtOmYu
VVFsUi4oNGxiSDREVSZRTEEoZTU2XWEsZDAxXSZRTSxsP3JIITQKJV08RSlGVmEmV2NMTmonS0BG
YmEqLGoyaWVVW2wmT1FtTzljWnEsWls7UkxSWFxbO2wvM25UNHUyXEU5LkZmXnE5PnFNZFgxQ1E2
JWVcMiYiNC5zcHJXVlBwczIxKDgsXSNPK0IKJStVQmBZbDRNNyhcKkJyaiM/LTxIaypERHRkdE0p
RzVhRitfXkgjOC0hZ1FhRVo0QStYaHQnaGNjQ2UnQ1dbWydTJjtnaUVLaVlaM2UkN2MmZyhdMSox
UmQ0PkFzLVhTJmkkaGcKJUZkND9XcEZTWFo1ZSpLaGJXIVFKWWExMnEwVSZwY2NjODMuPCtaUjpQ
SzBDJTd1LiJrUEk6V2ovZHBqR0w6LG9eQFE2UyRRaSN0aTQnYExmTzgkcDlyPipLJUc3ZGhYXFRE
aiIKJUtFSWxaM3NMaT4uWExrOklhNmJPczJ0LWBlZkRjMz5XPCtabmRmIlZPalJwJipDOixiRmtm
UUxPJCFvUitmN1E/IWxHbzpMQ1dhSighYElrSTNdJkc6JjEvWVZlWDJLXSQwblEKJT5USS5dZDlm
Z19IbyRmT1NTRl5eVDMnTUsrYUhBKF8yM2IyRDNAUyRYcHVoRzs+XmReajg0YDVfRl5WW1dXblY5
XiIkPEU2JSFAR2lmND1pNkFTRlZlX0soQ01kMDZGN1hmWjoKJU9IZWIncF89SUg8I3M+PGg5aUJG
bCckIzsyYjo9MTZuN1Z0Sy4rPVxEIWczN2EyRSorT2VDaT4jU0hgSzBmUlJBWm1yTVIyUkJmOyc9
QG11LjV0MystWGY5WVFtaVInYWomMTMKJUk8IlEqUE1BQS04amtbUSNVVU5PRS5aK1JrX1VrNzFQ
LV5hYE1oMkUkT200JkFmYE1DYTRZcEciXF5VclNFWk9kOStwSG0+KU8zXUcyWjhqYVhOMzkjWFVu
Ym5QdE9vZ3RKN2YKJVJrVTMwSC9MZXUxSlo+bm0nKUg0ISswX1sxOTBzTjBoX0Vda2tiX1VOYDI5
TUY5KCdxQVdaTW0yT2toUyNMOT9cJzZhQktlIWlEK25FTipZTW08Tl0xRyRtXiZeLkVyYkJSOmUK
JT1wZy5yRD1ONUUuJD8xU1BnXUo7TloqIiElVW1baVhVSD4nZz4oSG9GLkchazo1TCMjMVoqR2Eo
SGJuKS1oXFonaWVZZV4yMmxNKFNAPj1LIWB1WWJcOz9VVlw6NTcpU0JNNFsKJUpDPCt1a0V0b149
WHMuQmghImtBU15RUW0iJlFYPmBsdWZ1OUEsJEYyQm1eX2tqbGpMZlMmIiklOD1iYi9QZF9AWi1z
UCI6LUAuUFhURixqVjNsTnIoWT80bmRpZSI8KjUlJDoKJWA2PU8yYjBXXT5tUW9JRSxgZmdLYG9e
WycuYztZJ1EmUXMiMTlTRC4jW0xOLFhhaTY0KXFxLSlXX1Y+Q0ldLXA2VS1SOj1ZdSVxWl1oN1to
JzI5Pl9jVFhGMkoxT10sQzEvI0wKJS5lVGNYI2JNaFAmOilaTWtpajcvR1dHaTpfOU9HXXA0aEEz
RyxUcWA/SDg7dS9JZ049JDRyUmBuazk+RmJpSilZMyhfbFxNYmQnW2FeTV8wQmMhNlpsI14jOzxT
LUtVaGU1WlQKJVskXCI5XVkvNERtPTduTTIpUilnXDUvRD1ybmFLaDtOXTshUEQnV0NFZThcTFE6
dFM8RE9mYkhJc18iIjJhS1srSkQyLCdANF8lSUZcK0lHTGI7LTFOcmwtQTdgKFtrUC5ESWkKJU89
Nlo2UlRhOD4xSV82IlxIQ2BDUV9mXzZTPmw/KiRKWzJOZVpCYkJrMDlcckAqVE9ALVdAclgkWWsy
aFJsQj5yciZyLkRPdTE3WHBPPTJAcWxicENOREhnYnMzYlF1TzUoTDAKJU0lPlo4Xk8+MzpyOldF
PSUpLHI6UGozL0kkaDtjJ0whQlZeY1Y+REFkQmdtbUdvU3BwVi51VShhMWVobnFNLil1ZCtBbS8r
NDJyRV9EWzxpSDE+L3FQMHJARDswRDUwRSpcRywKJStbUFR1Q2hjZ2tYSl1JUi9TPC5vJCI1b2ZY
PTEsWFYvZkxFUCpOX0xmO19abCw9ZHRZQWpcdClUK2ZeTkQqK0VIbEQ5MShJXHU5M1RmTE5hWVot
U1JIQlIkPDkiQjtKJiw6YFEKJVV0NSJxb1ckVWxYLC85J0RQQ2FdWEhoKmFlSDBeRzFnMW9xVm9e
LCFvJVE8Vj9mV2tGbm41PFJoTTpiM0Y8MEJFLlxxYD5yN1ZcWlNmV18vPCstcVxtRldGPD8/Q11t
OmtQIywKJTt1KixmPF8iPjFAbzwnPi9ZQHBKNERyMS0rKlpiRTV0dFIuV1JCJ0tVLjs0UG9GRGxY
WFJcUVYuWnBzKFpqZzkoTWVVNlpFQDVLLGlTSEpzYyI7XjEhSFhwOjI9XTtPWls2P0EKJWUldDxj
WFlzb29ZMEpoQyxdQTg+akhWaGBmXShLWygxKScsTio/RW5JaXRbV2Mzb0NDZDZVTl9EYzhyLCRW
PWwsNFdpW04pQDw8ajZdVlFiXiIpSSRGQWgkNFlkZF9MWWNOVlMKJTNZaG03MW1pS2ZBPjFAZ11F
W1JaS1dILz1xcUNuXF9PN2dcNGwuKT83My9PSlxyIlhYN0pGTlEpdVZMTVNYOXRoWzQyTCJLYUhY
SS8rIVJrTys7JFtiPVo7cUstS202aXMkPEMKJVA0KitSLkohWVVBcFVLMjA5XkxHRjwjUyYyZW1H
biZbczopQEV1YmNRTVs0Xm9XNkRcLCMoOWg9c2I1WnMyLzBTbm0rUGpkImZHMEVYVVRyKWskPGUy
aV5XQFhXXEckR0lGOjcKJWgjOFRZVnBlLGA9IyJxJnBhSnM6UkMlYUgpbWszMFgvX0VnP20kQjdF
UUpvSSY+T11qTU5nTkNOOFVcR2cvSi8oaihBL0EmKG02cFE4MV0vMChENkVNcHFtZTE/MiVMZClX
Ri8KJTBub0s5ZV5eR18+RTBVaGdjOVNVXzZPOGpcTVM5TzxfNzhPRmRPbUZXPFBlT3EyLTY7OW1H
MFIqT2BTWyJfWUEuYlJtNGdZWF1ocDVYOExjPGBtZWhGTXR0azdEQW5kOFg8XWkKJVttM2snXyJq
WjJLbkR0aWNBQlBhLEhWRk5WTDNRO0RUR15eYkZsYkpNMypoYCxHbFwwRDdJV0ZJPDFgcmwmZUl0
NnF1JCpnQFFQKUBIbV91cixHLydcWVo1Wz9DU2FbZDhXQk8KJVIsTy5HZGs6T0VEWVArJkhMNGBG
SyFgYUpWKiFPOllcV21SJzlSYTpKVykucyksMGBJLGhXcyhARjUwciJoNSVwbD8oPSsoOmElOkVs
Z0hMRUJwPlduVS1vZitYMFJrR0p0WEsKJV1dLD85UDoyOiVNI140NG1lQzMxXkonSkU5XTYjWV5w
QTBJLGliVGQwVTNMXFM1RFJqVkRGIzBfMEZxJEFcS0RicF90J05hOz9YWTxAdD1FVXEtZl9AZlhh
RzYmVkpELz8lNmoKJSNvZV9tZGtcXj8oKWJnZ0BJbUJzY1ZiM0ZvNVVXRV81XTxCTmgkcThycGMj
WmZSOFoxQiEjIUUiOmhxT05iV1BVZTdIdDpVTj45c0htTkMlTG9TdXVMSUNeRXEpMVE2bixrb0cK
JTVnP0FYRW42YXEuQyxVLU5cZ148L0Q/J2E/QzkuVW4rSWtfYGtYJ0EyVXNiZW4hRFNYNilidC9n
ZjBTXkEnJDAoLGBWSkxiaGxNP2RWN2M9TWBwNVlZMDRiJl9wTDR1U0xnYXQKJUVNRk5dcixfRDo6
QERZLDBuW0RdbkQjbDNXbGUhaltIcGoxSEAxL1ZEaztfOV4lV2kvMiRfSm4yQSFMQzVwMThcKUlt
MyQpLyQ9NykwRDxnTipmJzxlWS8tQGNbKFsrS1IjNWQKJWc3NClhIzkkLXMyJWdjU282QSFyK0Fl
OG5YVz1faXM0K21dZjczYj41US4oJHJwY245clhYTVJyPyknZ2NpPEIib0c/QzkpYHROb3M3NT8m
clAvP3JPOG8tNUosZitJcjY9LXMKJW9vQzJFSWZIbldpRDc0dEJFLlBjczVyazRKK3JeWyk2YVI8
Vlh1c0o8Rk0mOEw8YUQpK0c/ZixXOyJRN1EwRipeLDBybEVVRjMrOUJMZHJEI1syPDNUcUU2XCks
XFVkJmAuTU0KJWREanEsLV9DUmM8YjJqZDRGYi5iMip1ZmM0KlcnSGQidVUrZj4kNS0vKnJDYj4v
UHJKXkBdRWRgIlMqRDc/VFlaPG01QChIMztWSFp0Qz9cS0VRSkpCLnRMQjxbUVVBYEhmZj0KJVJy
QWlgYk5Ba2FpRk8tYmFmN2dINi4lSC0mMDVEbCtWZDlLQGcwdC4pYi9YI1ZkOXRbY0RpOSdsKS5K
KWEnbFNHLjlYVDBqXzZJYl9PYjwtIjppQEdaSGFwVlNwbGtXIzBOdG0KJStOU25rJzg8JmppUSNq
bmJIWmBiQV1nXT5QLlxOPmRVbXJmYEtDanRhbkVeXjZBNClfclYnXzEvJVpJLmNwaVw3R1Q6XzBM
aVxlSSpOJTxFIVwxWVFHRTQtLW1pKlkpNWMkSVMKJTJcOjUhYXUtSzs/O0Q3RVtoLmZMZD1bWTk7
VlsxODY7NDhBX1A0akY+cC0mTy1RUWhaSm9MckBGaG9PN2hiRlFNYV4sYW9LdUk+O2VlWz49YXBB
YztGW0Yqb3JPIjRHXlM6bG0KJWteNzRVMjs0WENyJ0xac183TUo+aElDTDVYVE9PUWdUX110TlRc
aShDX08uWTEzdFctbiUhZSM5aDpjK0tJKnFFRjZHbWsuWkIkNG9ydUBRU2Ric2YkKDplYF1ULkhB
VUJwL2kKJTNVI3NvS283dUBVRylTPzEvX11BZ1ItJUY0SV5lUz1SYUAjSmJLKFFjbG4tNUNjXCIs
clAhJSpgcChYMz9UW1xMWVpIYGI4ZVFfb2lDKmNHQ18jOm8+SnFYRiYjTCJ0KUFecmkKJSdeKUVl
M0IiYDEtNyJaQGZFZ2E7K1wlNj45MDc4ST4yMzA1LGgyMTReXGBXIldCKE1BWDZIOSNXT3BXXUFc
OzVpOWhzXEEkTUVIYml0OSsjQlksJShTLyQ3TClkaTVOQmZwOC8KJTBoPzFtW0Y6N2NUN2wxLy5k
cGQqODQqKj4mNG5lRW5pLUJOPlNJQHMkPEg8JktfOk8+Q2QnTUVoc15WLyJlLig0RCU8QGNZKVl1
R08kNXErKTJnZEFwWUBEJkxZPzpFaCdtbjYKJS8rRlByIS1cXiZMZVxkLyIkKShnU25obERyJzFA
ZDFKaDJuLCt0bjxCakBmJVA3Nk9lb2wqJWUyXCdwOzUxLGw3WCMzKC5gMm8uOkUkT1BpYlVtcHNP
PDU7WFc4SyNaRV4tOWQKJU1bL2taQTYmUSlMUkhvOE1pVFdpYzRIbWBoKj9VWlBPS086bkVPSG9b
LTRwVzlOdSVXNGVQcXQkcUBPbCNyZDdXX2BiOk9EISorUUc0VFVKIzQiOWhycl1qbzY9ckoiXkxE
JXIKJUAnRlU3VUpaWUVWb2E2Q045XnU9Mj9UMidxJzdNUHMyXGhDVmEucDdCT0grZHBYLTNbXlAr
bVsmZ3JlPmU0QGQlLWUnYkM7IjJpXldBO1ZmKSwoX2JjRUMrP0JUNDdkajRENT4KJW5kcWQlL1FO
LEtET2JTR0hrMCU0cTpVTFxoZCMtSDZHI0BiL0FqTjg0WGFXSV8lIUs1WC8/NyRdNVpfSkY9OF49
LlpvR1E3bCowNSRiVXEuaSk8W207SjJYa10kLCFWWylrZ0gKJTRrZ1skKW9YNW8xPyMpJWVmXyNc
WCFtI0YwS1pLWzBJRGopTFgmMixmRnJHOipzY2w1LFFPOTw1PU0pXik/RnNJUiRhW28uQCdHZ2Yj
ZU06X0VBVmI6Mi5RcU9GbSJMWktGJlAKJXAzZE8/MC40RGUiMiJpUExpRFQ+RltPJSIvbTZnREFs
Yz5vSnBtXixqKTBKXFJ1bVtVaF5FU2FqW0E8MUppLkxQY09RMEheU1BsMWVXblFCNkxDQWsrJEho
RVs5R2NFQywhTEgKJU47Im9lZnNuNkQ2U2I0KWw1dWFTayglLmJOYSQ7KzE0XF0wUlI3b09UXSFp
c11UMXFrIys8NT5FLWJTITFsMk5FZmNhWHNOM3JbbW9fXzJSYktoLy0iRThTdUJzYTxvTGA5KVcK
JStgXFc+LCNAUTZrPUkkRi1TNkI3TUBnXnI7OzQyPCRSdHA5Kk1nSSMuPUh1N2UnQVFUbWddQ2By
cF5MYE5TYzVMciZOcDswOS8kPDdIZl9WTksnVWdtWkJOPSw9aCdeXjN1NSwKJVg8amg6bWo+YVtX
c3JXUzUyKjdpRV0sVShzNzRnXV91PSs/KXJuNmZlLFNyPyE+XT90XVxRLnRdPWhmR2sxKVEzVCZM
aSRkdEFoMzJCUl0rLjJWaFgnTEs5cjZGX1Y6ZiM4XSIKJU4hP2tCNTRHUXBjZ3FyaU11QzVrUV84
XEVZbTNbJmx1O0EoJVEhdWZrYz8zZDVOJDx1YCV0NlApI2xAPmk2YyVVJzBPOGZeOSVUWWEyUl5r
T0AhYWhiYEhbalM1RkVfbGwkY1IKJS4+NkBDbFFMLV1sI0dcWE4sOC43JkRDLCdoXEtgX25iXkIh
NTxcO3BAMGgzbFVTJURfKWlkWExXLWxoTyojUCxib2YrcD8/IS1KJkxkdGFfRi5rREhMOVFFQ2lv
OEdLKz9iXWIKJS9QSEBXQXVkIjhuPS5fOEYnY2lOMUpeLXJJNCE2WkU6bm4yTUspJENuc041PUMi
a2dwRmFPdSRsQVpbUW05MC10ZWlmbzwpbGhnZjpLUypwKFQtdFJALXVXKDMhajAjby49QlkKJVVI
NlQhWVErVzRBM0NJRGBOc1BsKnU/WC1bYSQ/XTojJz03WCovSU5FPlZfNjRWa1lHaWQ0cjNxI0Mz
Pi9ERlMvbCMlWXFVKC0oZEkkWm0sJjs4Ij1QLy9LN0tWJGBZNktXVnQKJVtKIiFORjdEVTlAcTBg
aDU6dTUtU2NuYHVbVTlvLDBZQ21KcCknNlZLOSxGRGxsPSozY007PnFxczdAYVI3SSkyNm5CJTEw
cWVkMGspKjQ2N11VXihQbFk9ZzA7RGgxL00/YFAKJWJNRmFuK2hHbT0vJFlwYF9cXVE0QyE5YmEu
bG42UEJsV1QsZUMzUlk6YjdoRF9gQywkSCYsTkkjKTpkXyJbZShqYElwPDMjYEhOJEBUOEZTbEpo
RjdTOS9gczFcRGVeQmgmV2AKJVw7UTpPZVJeci5cZFA5OUs/ZVhHKF5bUDFqdSFEJWl1N2Q9UUtK
WywiODlYVF0nKTs0J1BGYi1AcEUnWlctUUlDY2kzSXJKKyZHZWNbNS50MmRmQWRrOEdHT2ZlZmlg
QG47U0sKJSFNKzdOYklzWlYtUDpOcTgqIXFNZlhoTy5EImBWbGdJUmg/VVBdPFxKOCluRWA5O2Yk
KnQ5S0VtPmBZMW5TSCRlQjFgKUppY0NVLltUc18pL0taVWtSdVNraTUuYyYpIl4yTU8KJUxUcD9G
ZzklZlU/XU9sITNnW1BoQGlTPCdlXHImXyNOUC0oNlVVJ2deKDtvPm1ZZTo3XTQwRWYnUWpcJCpC
QjlfX19ePXAiXzxdKDFdbzIlSl4jNyNZJjszVUVHTjY8Qks9XUYKJUc4NS82cTcyYiJcO0JgaiUi
PypoSHAjXW1LWFVoIUxwUGtQa2VQZTFrSl5bRCoxcD1FZGduQilZVCNcVFE2XixWPls3MEFpWCZk
cyhVZDB0XVE2PXQjMW1SKXJBOSU3aGQwcDEKJUpJbF1WJjhuZS9jaT8lYj48IUBmVXVpIztRS20i
dHE8VGRqXyU6cDAhdUxDXm4/J1g9NUsjLVUvYipZST4pKSknSDFQVF01WmVIViZBaiM2JyZrUDZn
M1JvcERXLFxqPkdrOyMKJUVMQFpaMFFUQj1BMFpiTShBLCcnZHQ9V1JNayYpVm1HS0UtWzEpKi0j
K3FIWz8uZmReOk1bLDhmO0k9VjFJNGJXOiFVdXNBPVNeLHJWRGdATyNAT0hcWmtnWiMoaiYvS3NO
PVoKJS4yWCgxXUdETmMpOV02a0hTInFGPmoxWHFIRE1zJWNQQS45XUdfJUBia0VGXVUnRnRWKCJq
ci9ZISMhJF9samtCZFBCVTkwOCQtRDA+aHQ4ajQoQWA1SWsraVR1WnJeVCVyckwKJVJNVXNMS1ly
QiNbaisqKmdeRz9KMkhfa2U4SnVMTFNRa0ZjKVZBKjExcWtFNlQ5YzUvbkZdWEBoMVl0LkBSY2VZ
b1k7PDpmbjg1VEUxbU1ENmcvVzcvcCYzTWU3Kmo/WiNRJUwKJVswcEVALmRzZiVLT3JKJjFLRlVm
bmY4ZmZVKHJmOEZEPyIsNCJpS04nXVM7YVNpTyk0QEMuU0U8aFVPbV4qbipiJS9UWyohdFhjaV9w
LG9daic+ZDc8IV83Pm5YOV0wOm4ycUwKJTAzY1xkInF1PCJgSFtkUy1OPWstREA+VDpfVyxTYWBa
TExrXVdFaypNTy1cRDpHOW9dYDQ7bkJlWVFuK2xyPDs6SCJwVG9XT0EyKFIicy9gXlhnN1tba1d0
LSMxTGhQTXFjNDsKJW0jUmRRR2s+Ujo/YmNWMD9YSjk0UVFzOy4/L2BGVChrWCFscVhuTS1PNCNn
Qj0qdUg6PklpQXU3VkhuZGBBPkQzbjkidEI5XDdJRUk5SGNnXG1SXVJZLnQjJFgrZz1oYWFVTS0K
JVhrZ15xUVk+K2YiOEhcSWNUUmdiUzksZz4nQStcRihtJFlZMVghJzJsY2RHX25wa1w9XD4sTTNu
LyRjJjtZN1UmYkBjQSU0IUk6LzY6OlhqWi8jTjZyIidiUzo1ZVQrRVcxN2QKJUlGMSpENWU5KDJf
bERuPUVVWXIsQDshbiNaKXFTZGIrODs2J0NcJSFWcEg/S2hbQ09CTERvLWdqV3RtbiNedU9GK0tj
SjVPLC88SjUpWE40WD9pYzRqbUQhZTkoPmg1JipdcmsKJU0mc15MaDJobWUwJi5RInIhRltRVy5z
OyE0QDpuMi5rSEhKMk4uIV8lNT5iMjclQl9aRG1DZEtKLEFHXEBwSmZxT1RMakRwajlFbTdwZior
O2QhSmM6UGFTMjBvay1MV0FoTTAKJTs2UWktJE4/Yjs2P2FaI2s/QUJLcidUN0xMWUNJTVsuNTJH
J01hPWUsTm9tUDpyPWZJPzxXUElbQ2NvWChvci4zZ2MvJmo4byZYOWklRi8oWTtubyhUZCNkOl82
NFtXTVxqPioKJWZqUUpCWVlyRnFObEZBKSpQQld1WD0mPjBVIzw7Oj0tWWVIWS4jK1o8KGlwXTNh
YlNAXmcsM1VmUmpwWWtIYEhdYHJvUmFNXTlJNDcsZl5MMy5vMEdXWWhTK2RCWF8oKGFLZDQKJSJo
aWo3SjRYTG1bZXBFOmM2YkFfPVF0WWldZiNxV1BibllaP0BkVW5DMlRyPGJHPDAoQTJBPjw8ZGNc
cTZXMXNmbytcXVIpLmM+KU1eZjBPYlh0QThFYTZJMDpyYW5eXUZvVjcKJWk/K01KVlRRZytWXVZa
ZElSL3VuPXI0QFcjSGtCXENxTTg9JE9YQTQlRnVOY0pGOV4zbSNfW2gjJ3BRU3E0cXRcS3M1YEg/
JD0haFVLdG81XSs/P2pOc2lUUVJqWTk0UVQhQUYKJS1dLHRXO3BCPSVbMkBPQSlwWWVRRlooQDJf
OFU4NEJHLF91PGxxLGNvQD1obkJlKGAlJ0FEakVqLzRRJnJNJWdxSExTNUwwPkhpNyhuYlY7OUtF
KElUT1BjOD1RcXM/KmtLXjoKJWBVRTxwZDRqNkVbdD9DLHBUNG5pZk85NVw9LiNpWTB1NTZCIiZm
bE0lM2wycFVxWEQ2T15pPz5YdEhsXkMtRCMqJTg/JFstbHMhZjdNTjptIiJwSV5dZCdVISItX2cy
UDZaOGIKJS5oVlg0VytxdUopSSpQWy9PcWtOQlFlU1E0dVImZE1WdS1kYlxsOVlNcHA7XD5TKChF
ST5tT01pYFY/JWEwU1NVTms8bU5uXVk+Wm4pSERRcTN1X0hwREwrTyMmWktZXC9kLFQKJUMtXUFh
MSgpLUFZPDQ6YXMtbjpMbSZXImAkWiVKQDJaNyRhRkIrUE42Yj1Nc1wuaCJcbWZATycmKzVuKy8q
QWhfJSFLVz5mImwrTnA5XiNoQSMsZkgqJU0kbSYqbDtNTnU6ITMKJTVDTEUjbmFjXlNpVjxxKFlo
L281RTRBMzQ3aCgwbj8nUmp1clljQFE1OCkwOypKPG0iclVeS2ZAYEM5WGk3VlM/TCgwV0RBSz4y
dSJNUlRBcjpIJFVLNm5aIT9nXighVG0nLCwKJTQmQHN1a1pzIXA5UShMaCQpKFQlMCdwcyg5VCRX
OjhOaWlvMycnYlFDXzQjXD5uZV5tLUVJQz1jc2ZNNmxSLSQ6XDQ2ZUouWC8jRjYqKFpZLiNUZm9L
P2AoKCklRVtfZEM9RkkKJUIhXmwtcVFMQmwnVlVHP1hFTT9ucUApYTpJVytGPTVIMEpMSTBjbjAt
JTwwPW06dSJXPVlqUGstSlhhS3FARlNQPCFdNzBfKl8xU0c1V1JoVyJlTDUmVmlxOl5EX1E7N1lH
XFIKJWoqaWZoQkc0UXVMN15uX0xXcz9yNm1RJFlSS1hpVSlcJUo3aU5naGA3bGwnPTFxKkAzak5G
MDRNPSZMLGhzKUBwVyUmLis1KG5WNV1rTj5qX2NPaEg0OVtRTFdVNF1KJWpyVXEKJVFhSWZZT3RR
KzVLWz1pPU5bTDZycVtoQVg9UidBSlNMMlMwPSxWJ1lbInM+bzxOS3VMQXVDMjRgbC81Umc5WWQl
Y2RqOmctc0g6Kl9JVy4lPi1JKSomL3FUIjNtXSZjLEcnPCMKJXBORlBcIytiQFpxTkoocjRiVWlF
PVM3IkcqO1E/bjBJQGNqR11tTjUqJkxrbiN1Ql0/JGQkPU1xZUAtJyUqVDhsPyFIO0llclhKQExP
XDEoQGRnOSMqaWZzTCdpPUY2K0xyLVcKJSY0M2VxPHUmMkIqXSNuPzQ6T1EyKSw8b1RuLVBdJWk9
aDVlY0crLyxkY3E1Z0BxY0ArNilcJy8iZThrI1gocjdLNWw7MmkuZTFJYUBTJVswJ2xGKFdubiku
Yz8iKStQOWRAV2cKJWw9aTozUy05N1FNamtmVGgrSzs8XV0kbHI0KiZMZiUoRD5bMk9cITAlPSJE
c2BCRVJzbCY8VCQzJVVsI1xJZD1yMmIzJCtKRGokOTN1Z1M8WiM8KHBgLyJjO1c+IUg0RzUjQnQK
JTUvXFxJRGhBQkUxZjNGbG1UamZrJzhHXGUxQGReNiktYHVYXmNbP3JBJGRlWV8/UU1TL1ZMN3I4
XChLZmxsPGJtTnEvJls3KTdQLWYnZVVjXXBCZyVwOFleOClZISwtYTBsJ3IKJVVPZnMwIy9ScCk9
KUZKaCpfcClGYzQwKUsjUTAhMDJFSi1DMEo0KTFuYyh1M108MGVtLFsiU2BFO25oLyNjKjZAcSMw
ZDleIVZKTigodTM3JT8xcTxXUjs1VjUuJW44b180ZWQKJTJNUUxZMjlnPVxRKT4wXmpqKC9oJ2Yz
X0UwNlM0Qz1ATkJkMj0jPkxrOyJKWDIoamZjVzQ/XTk5PixBOFJiTG5RMVwubT5FTCtWTiszVzsz
STMiLUpCI14rSE8qXVp1Oi0ja2cKJVY+LDJDcVc8NkMoX2VeaWFJLzBFR1tZT1lLakIuMyEzNCVN
RikuT2pXWWw3UCs7PFcrNjhcLWVsYm5yWkw3Z0VhW100JTFZQTpjNyxWNiZpZSluWihcMkVeJG8p
UlplNmJRUVwKJSRJRGohMihTP2khU2dCdTxkc2ZkPS9NR048Zz9fKltJL2VKOyoqK1JjUVhINiNo
VEo2RjtTaC9XP1IoZ11cIzBePUxhdT1AdE4pZ2opQipVX0hvcz1nOT1UWHEoVzIkUzpkLzoKJTJt
cWo5Uy0xRGdmUyU5Vyw8WnU4TFtUOkJqLWs0N29sYl1mOyVvbDNJTVBzTTolakZmI2pEX0YvaFcx
KF5lN3VASlssUHVqLVpyNCN0NShlMkI9Z24wQ09PKk1lMUU6Lig2bGoKJTY7NT4tKWRibVBSISsq
byszRUFVKF8hWi07YidZLyNHdWcjZTNUJlQyZ2JkVkY6MFowM2lnO1JDOTNwLlJfcSVtMiE7bjFK
ZFlPSC8iQUVgKCEka0BxSTsqREg/dXVOWU1bLC4KJT5VIkxyLXE+O1orQSc7UC0yNGpSbTtVJT05
VEItX1VMJztzZWFaN1RiI0k6Q29jbVFCOHUyKVAvTnNRSypMKFUvW1JGT1puS0lKLVFDTyhPVFZR
WitWMiJvUjlKPDMpSFNMWkQKJSVzPTVIPzIyVTE2JGxcLmM1c1UnKm5QZENcakNqPWRKLko7PGAw
ZVVjSSplZT5ycyhdZGYkWClLU1YwZV51Nk5ATEE8PDtaLEtkYVhUJ2llPyJxIUtyT3NTIjhZVlVx
aTkia2gKJTs5c19DPl45UERUPitbYEVObjpRNCUkTjlLX3MjKkRuPjpGXzJgYG0qUStGKWdpTU5T
M1NfZzw+c0NMcURKKU9fbEdOcCxFMGMzVWhXOUQhOlhlZ2ddcSFML0glJWRMNlNCVScKJS1uXysz
YENnTSNSX1xbLnBkcGxSM0FHYGg8JTBbdTohSlhPRz8pX2EpRC0kNm80JTNcYTswWjxxZDkqO2gl
RCtDWW1uXnIuX0RwKCZyaExcQEBEZSU/IUJCOGM+W1AvXCsrRlQKJV5EbWBNSy1XRXMwI3VIJi1q
cE51YjVpSiVWVyJlLDNNSjFJcT44IVhFPXRUbiwnNXNMMEJOKWY9dTY3WlVgXyYtUl01Tl1dLVJK
NyxpJj1vJiwrVmFPOEpiLUZXdUs7anRhV0UKJVIyQVpPQlB0a0srQzZYZCEkaGhOL24qIU4iOkUj
QjBYLyttQDo/cWVMMz91QihsIXVYRSNGRChAREVlOVtiITJtJSU1I2szWGlzRVRLTl9QaiVSNjdC
YjcnVi91YVhcRCVTKlMKJVkvdWJaRT4jLU5fSz0hR2FBYyVyKlFiQCFpWjJiOjg2KEsvPG8ia11o
NVl1KTE/MSYtQlRIYXBOYEhzXG5xSV91OzJzOS9JVm8mP1AuUWx0VGJ0K1FfLmorLVwydDNnMzJx
Ml8KJSc4Z1I+XTo3bFYvPzsiQ11oViNlTlNBbzsnIWVAamgiO0I9Wj5WI0I6ZkVRbigkSylQV19M
UCs4dTNASyxLPVpTKkdSTjBkMXAuPWFGI2dncmFfOSIkSjFHYkNcJWFnZmh1WiIKJUBuMjNeInI5
RyVebjVsXlA0aFNuRl9YOSpdbipRLC5JJkM/YDFJKUxWVGI0J2VJTStJW11MZCJfYzElMS84LmEk
JWNcTUxCR3IuJzdOVFFZKF4pUSdfTWouXFRDRSEjRjZKNmUKJT84XUlqYm1DLkY8aTB0KGFkL24/
S0VhVmsmRGxlMlBOT2tzSmYnJ1FkY1A4UE5qVU1KU011ZyxaUSY4b2I3PUYrLDFEVmhaRHUmWUhQ
cmBGKFNtYSMxISxRRSk+bTVlMVBuPSgKJWNWcyltTDhqdSgvZ1M3Ois0SUo0OkQnOio+YjgiXkJT
ZyxQXy9uXUJMKkUsaGVsY21cJC0ubmRqaytJUjVBIlwrTTEoQS03OWxfQjloa2FpPi1UInREVDZF
TCRBQlwyKFUoZUoKJTwiMG5sZWdhVz9LL2FsTS0vU25fOitFMTBmPUdyQDF0cGFFNzYmJHJbRVhP
TW0lIW8tQWZfc0pILDMjX15rND1ub2glcldnQzxaN1tOLS9ncEldTHVmWyFGZT1XZlElOW5QIm8K
JXFlay43cm5hTjJacDF1dDo3YzRUXmhXVVM0aFJVIVB0dHBmOCZVb2omOjc8TWxAYlE7UF9fMSsj
UWklMSlgRFhmMEliNC5uUktjNzxtP0gjY1Q5TjoqQUZbYWlHNUReNWREUHAKJWsjVC4yX1lSSGpP
LGU8OzY8YjJpRTlWMF0qRnNkJmdZV2dvLGlsa3FgcV9COSs+KFArY0w+UldnJFdUMSM1MjcvNDky
Pm1BUiU1PTwyUUgkUHBOXixHQ15MVy9MRj4pYHBzW3AKJTk4JmtaWzQjQm9lPkRrQW0+MUQuPlRT
UWgsP3FoZDdYcCNUZi0wcD5iYi0jbSwoSyJBM1hwUFc1cGcpIVInNF5ZXFo9T08xKCwnKDxmMGcj
VixObG4iM1g0WUloX2dENkRkaysKJSNKc0NsLSgyaSI+USJxTWlcZC81Uy09VCkrOjVlOUVeLEc6
bjp0I2FhWTQ0VFIpKzNfbEQsUyo3VkNjalw7aUliP3ApMShDWS9QIUs0RU9QUjBEO3I7P21SdTxJ
a05sWV9hNC8KJUIvXj03XmBZZDxxIXA8NCJjL2RvYmF1TmBxJWk2P1NpPmNnQ0YuN1EtVzBRZU9L
aVEtbVozKl1fSllfTDddP0tPLmlITzEnTTVaX1lmQy1lYCFSb21jSkkmKitwJi9BJ3IvZWwKJV8k
OWdjRV0rKSQ6akUvZlwoTC5iVUY4NT9FW1A+VyIhZ2NfJ1deKDBZM1xpci1uSEI9OFlKbWtdNjhI
IzInM0NLaVs5aVYxNVNvTmtQMzVcbkxMbi4/dXNIRzYsbypNSGtxTm0KJV4tRitNVCcvcmYqNFZB
NFY6YmpeLj9cVj0pIVArRExGSElYLTtcXC9DTyg0P0UybzNMcSRbRylnREtzOlImbWMiSkpPR0lJ
SCUsQGgmayxQLzwoTDtkbmg8IT1JWHI7ZEZoRTMKJUcmRkdfJi4jaispX2FdcmJsbS1wQVUtdC5e
Vkx0ZkZZXU8yUmZeQFlKPD8uO0F1VC5dPXVKOkZBTWZhbGEtTSZsSjlAOFdOdURRLiZEWEFQWCRx
S1xNUm03SmgwM20kJCxpYm8KJUgwa2g2Sjc6W2pfQFYzW0BbTl91X2xPT1BKNzVAb1IjKE8+USZJ
PjJeLFUoSzApYzElWXViRlowLiJxY0s9KlA6L2NOMmheXUt1MHMpJFtvYDdVJWAib2c0VkJKNFki
YzhFZlQKJTd0ZGJEIkBVLFI8ZCprKFpBQGgwXC0tUkgkXE9oYCFCZEQ3NCM/J1w1NnBjPnA/L0dW
OTY3UWc9SHRfYGtSUWNIYEk7UT4uMVdaNlAvaFFnPDloKVE/cDBPdWlYZTpGLkYqcTcKJVA1NG05
O21dNl8uKmM9OyFbWzVpbTc/VCs+IUZvMmZhZFFNOWIsVjxsY2pVJyMqYWlIMHE6UGBVVTJmM1om
VEUqUUNiZihAMihIJU9wQzxrImI7LTUnYCxbc2xQdV8jMDRqJHAKJUM1LlZcaEghXHBrWzVkWCdB
NiVdWmcmITFONE9tJjhFN14oXmQmQGQzZmpmXy4xL1BrbjZlcF1pJ0Q5TF51JC5XNDAzKFkpMkNS
KENnUUM3Ui5FQkwiYGssO1pwY2ZEMjNsW0kKJSovZCJdbTRIYSlkcE0jVTJJR1RcV1BXWFcmUyhb
YzU0IXRIJ1RJc10lUSpgUzFIO1tNcGVFNykpNjgxPEpfJGY4ZS8pKC9qPFp1KCdZZmEpUiNrZERp
VU5TLSFQVWhhXFcmRnMKJT1DTiRQajFtSmAsVS1KTyJLSiNpPDJhZyMtNSwhNlxwcDRTXDFkRy1P
V1Y4O09ZQ0JwbUJtJGNeaC9pTEFvaSNXNWNgJWxbOVwhZD9qNU1lQidlZVskSXNaNl06Wy82Mz1W
UmsKJStARDFVR29IaE8iOkNVYyNmWklKR2x1Pk9fOFlZdDxITjpuUCU+U2w1YWlGamReVGpOS2pR
K2VBKC5fJC5qUEBcQVUtVEpga0hAMkxEPnRgSDdhMm8pcmVQR0E+VycqQGwsI2cKJTo/XkkwOj9p
Sm1CZUAmQjkpYzJQU3MyRnFFK2tMJERZWihCK3VvZTQoc1skMF9TQWNNQGJPTWEuYzguLF5fNT0y
YzlxRDgtN1tDUjlOYSpMWFNbKW5AJE1lZGlUUCNjXmh0ak4KJV5VKFJYQnEtcmhHPEEwLk8zLEFR
QzMmJlZVMU0rY1xIN19uLiwkYFtOVW9bWDNHa0gsVVwkbWI2KzpMNmlnSChtYDdTZityLC4pZjNM
cm5oNVdwSmlXSWo8T2ZbOWtiZTpFPS8KJXFAdF10WCtMSypLP1otbyxAUiFbVWVGPnVCRENENW1z
dTRmOUpvKCVscnMyZmchVWkvQDJqQ21uQU1bN29WPz1bMUtkaS4nNkQ7aTk6OUNCRS9rS0pCaWA9
XTNoajptVypRalUKJUs6Xz5dIWRFK0dLKUM+aEpzSFwuaXRFTShwTkMoXmM8ZmRdOy8zYCxHM3NR
UDtpXTVMXDhTMj5QKjlVN0ZKR0AiXWcuNj1Qc1snTSM5XE8jI0E3I147LC9pOSVOMF0tb2drPSgK
JSZdRmNGa2spa05iM0goNmlhJEBrWS1uLD5VKWU0bjwyMTVecGBbIkRWbHQxPiRjdTYpRkEwOi0k
cXMuYWxyQ0pmSks7JzE/TiZpNjg7cm5IIzdybEIyYERPb1RqXCQ0OUc4PDgKJXItZSsyIU5Ab0VX
aF40bU08bDgpNmhBLW03T0JxQm4xa1R1UVloJipGLypsWylSVjVaPz9kNV4rJS9fOUE4TkRdYk4+
NGs9OS8yRUBxKipTLDlDPElZKTcoVEEyOmlhQlxQQDsKJSslcmJBTlEsLTU9bmgyQ2YpLG1cWWsh
bFdTKmFbSi9bZ11hYGE1Uz88MzxBZVUvSjQtLFY0Lko5YVZBPkRqM0FhaztFSCJKV2ZrXTwzM1dL
USdIZ0spI0AmcVJOY1plPCcnY1gKJVhzL1wzMDlpX0pbViE3OyUhSnM/OmxHQ05tLE86USUyI2gm
UVlNcm9QNCYoLSReWnQ9b2pfQGEwJF5RXDNPWEUnZ1ZtTWhtQGUxLTc9c1REb1lAY1ZePjMxWllx
S11ebnFqQ1QKJTNwMm80RDQhQTxEPjI8W1A9aiYsQTNRXG5lVS5pI2A+WV9lZzZGK3U8bGg0UU9T
OlJmZkloMnNfWC8rMikuRy83ZGpVM3QiYkhSRUpNQmhdbl0vRkNQVmszKER0Ilo/Yz40bHQKJSRF
Q25zQzctZC9HK0BcLzpJVEdvQGtZPXJXc24zNmNbNW80bU0kKDJScyprS0VSNVpXTGo2bktSJCkk
JUZzWGBKSTdxOzxDK2xeMzYvNzhUWjYsLk4xXUEiKiZiMzYwK0RcdFQKJXBecGdlZU8hcldFVFdt
MVp1MWFMbC5pVC1Jb0xLdVQ7LENFL0VXSlYkI1xAKSgxcjUjRmFIUSQ9bl1ZNiJMYm1vMCQ+M0xy
Qi4kKE4/XSpiI21Ba0QsJ2FPNGlBTjx1IkYrV04KJSwhZlxlNjZaRERVJUtbJjVvRidEXS8iQExT
azUjIUdlc0kvKk5TIWxDazRGalMjWGEhXVxATCkhYzVZXi9aQGxKKjNcKGBwJyRmRyhTI0V0Vl9g
OUxfOCtHYldgdEZ0YGI4O1EKJWM3cVJzIWg4XWs/RSEna0BiWDdbZWZzNCxBYFFDKF1laE5gNW9Q
KnBPNCIwVzgnLyEyYmUyXDxCSTI/ITtOWFdyLkNwaVpGYDVodWBNYHFjSGBFOmc7KFI7Om5aRztM
XF8jSSMKJVhfYjMrbWZQZTknR0VIa0BLcy8+NV1gZWFWVEVWQExfYjdLaVFBQjoxaVUwc0BbaVFv
a3Q6KEZROzFKbFNGZGtrUUgkazhrKismVlBkRl89SihpSU1PZExlVFdrSG8yK0BDclkKJUsncDEn
MScxVyRZYEhAWGBMJTo5USdaOU5tXnAham0vYDMwQVtEVCEuWXN0TWFJPkpUN1FTYlUmdHVoK2Zx
MFg/NTtkPGtIUWVVdWBvYnUqZSRBXUBbYixUPD07ZSY2aCo/cT0KJTg1WDwnKitsRVEySnBWOydq
VGZtT0psTV1iU1UzayxBcVZlQFBROGhXalx1UWtXVk84SEI+ZVdZR2U4KCJwZm9wUE8mTEg2Sypg
TlJST1dKYyFoPDVnMEAyVSdyLFRhSWohOE8KJUw6V2MsVE9jSiFEVCRmS2d1P1VSbFFNb0ZcSEk0
Iy5Xa1Reb24sRD9KJVZwJXFyQSFjbU0mOEBuXSFyPTc+TUY9QnEiMUk/JFhGNF0yK1hLQCRoaC5T
YUJyQk9sWl5HRyoqUTkKJVhmRC4jI29rSFclYmtVLyJ1T19WLFtkRys7dHBQLEpsIz1aYCtVX2dG
TCU7YkFvRDZBQXU2OVBsZ19ca0RzRV5FZSJVWTchV3NVW0U0NHNvPE0tTTVxTlRWQU1FOz9QOW49
NF0KJWBWTW0hT2pYOTFPIiRyRCtkYC5RcDJbMC44JWhhQGFAJVUncTVNZWYhdURfMEZZUy9GK08n
XTApO0ZTQkZbdUloOVY6akRbMlVJWVknWjolL2dzLjI/RkhOQCRCLGJSTWlETzkKJUphTydRWlxO
MFtMO1QpSEdeQUpeJWlHMVI+IS1cISZGXjI4akdAIUVgRUckc2U2aE1tWWZgX3VfRGVOZWdJKWVI
NDAkOFtcJltHUygiNm1uOS0jQ2ZyRz8sRjgnT2tAUEhtZiQKJSw3OnVIPi4iQDNuYUM4SSo3UCFU
YChLK3VIMipCWC8nSjE+T0olMDYuSFhiczw2X1FaVmk7aENcMSU6W09GbCspamYwbWttKkhkYGU8
WGtYKytPb2ttSV5qNEpiL2h1OWE3KScKJVBqSFNSPFVYSCtDK2dgQ0AoK0BERUFIW0tSNFRMMTdj
XiExQDFEQTAyQScuS1dIS1YvNWREKz4/XjQvP0JnbEsiODpmI2daOjBiLGhzYTpwSU1laGBKVTxy
QUomIiJnKV9CKisKJUI9c0xZMloqQSk9QFIvXD9tbDJzQVB1Q1ZIKktGJTNhQzUvcEZbdU1PZzZu
LyJSRkVlP007bU0rWzs5WlMtPmdxRy9oVDljYCxQKV4rQSY9MSJOLXQ5bEhyZC4vKm5wNm0/OWUK
JTdaJWRoMyEkMyQ2cj9mJiRiVi9XWkNaKUBjcl9nWm0qdUFdQVxrTU5gTTxobSJrWmZZaWM1PTM7
czYhJC4sTFYiVDwlKFA+ZWlpOFc5c0Q+Kz8uLE9ERCRtJ1heRUdyXWRHTWEKJWA9ck9PIWhjKGwt
QjBEXzhhYCJnMkhJKjBsWyduI0ZKbzVbYldOKiRwJXVaWkk5S1wvLzBGT08pOmVnNEY9JzRZSk1v
Y1lCQm4qaVZoUD1BOCpRLWwhNDxTZj1LQWpuWzVjaEwKJW1RcVhOJS0hRDFbIkk6ZFwyPiQsSCYs
OWNSXmBDR1BRRV1ER1c4cFdXXSpjSUxsTEA7TiwsIWJYSXRzZkUocVV1QktrTz1oW0QvLExUaWBU
OmY+NTQ/J0pFcCRBdVgkIS9dc24KJWRTRyU9bVglRF1cPjEhPjhVcDUzLGpqQF0pKVcwKCIja2JT
Kj8ucnAlVnVnaCRyOCZeO2pzQicqNU1TKlhGMD5hZ1FJdFZYUSJSUStJc2dETUlGb2NYdE1JS1hG
dUs5VjVHUlwKJVsxKXU4L1o4RW5EPUojSjpuZTcxYjMzNzsuWSMjZklNN0E1LiNiKmdSSCZBcUtw
PG9HLUM6cjdjLTxqTiJ0ZltKLGlnZC9cJUlnL1JxOzFbPCpjImZZNUA6dVs/LlstY204WjkKJWIh
WV9JOm80bXFcakpfYWYzJz4yQms+LmE9XyxUU1JhYDpgX2x0ZUNPdSp1KGwlTV1NImpVU25pcVRs
Ii9CXCReR3JuNlpIMy9kNUpWblA9W1wtXzhgYnMtMHA2KT1KZUghLykKJUY9P09aUUQlNzJwaFdi
ZWQnVDp1VE5QYVFiaVVsNyZJIjhuYFkrWUpdb2MjKC5NTnR0XSdJZExfLHAxbUwsS1xpKDZSK1Ar
X1QwT24oJnA5ZiQjX3RNSWc2IUQtXnM7OGQzaDwKJVdaKyNpMEwkQ1xlcExfXk1IQDJRSy9pJGZr
WE1gZGgtPFosMjQ+Wz1nZyZIKU4iOS9YMlEtRHVIJnU/UGkmRmAnXXA1K09fJjs8XVleMm9ZZU0v
YlE8Lm8/IVdqS1AncUI0Y24KJTEuYjU8ZSg9US4xQktyO0ApdCw0TGJJWEcoXVBaZyVTO1BRNyNW
VEknNSMpayZPS29fKU4lMjFlUTE0Z2c0R2wsbm9OJDJnTDx1RS9UIS5XIzd1WlFSaTNsXTpCdEg9
PDRkYlcKJTI5LmBmZUdEY0kkaihqcWFGX3RzM2YhXCsxMjFZV25sdFpsMlFeREAiIkJeUEo8LG0t
XlVoc1VEMjIpZFY0MUUkW0IqXFdKTzdiP1JZMlVEVyhKWWNQXzk6T0lOSnBJTDRrZD4KJTI0LE1z
NWRiQnVHVXRNYE0icy9NK2shbl1YR0VbaGQyWitCLzVPU0QqWmguP1FJWUJBZldFUkZrUlM/dSct
ZlZWckw2UVxIKUZXTFozWThVJEF0Uj42TkRxWGU/PCdWLVRKQV8KJSopTiZqT3EsM1ZQbCQiZDMy
LiNmUjBOZzhqRGh1cCNhVmtoQjYubzBdb1BYcVRVX0lsSGNeaC9Dbls0OFRiQkNbNm9gUSFndUg+
NkBoQlMtS2ghTEokQj8+RU9GYjFhQCQ5YycKJWorJzdNSk0nVzYqLkpAYVR1LzU8R01sclBNL0k5
Nl8vYm8hLVZvakVXNSJhNShyMVFnYTA6KGtmbjlZXFRcW2FcM29FLVdBcCNmSlYjSF4qK3B0bjos
TyEwR187Ni1OPCZqbmkKJUIuZTRMPiI0R2hwIkEpJ3JEbm5DbG4xcS5mYVNpNVVna0RlOlFGZkQx
bCs1TVEpU2loVis7MT9xME5QRVtZXEQsYFAkJEA2IXU7ZEBJRiMnSTQjdT5XJmA+Q1JYUD9pN2hj
M14KJSIjX0A4RlNFTz9lbCVTWzxcMGFTcF4lbWokRGRdMlkkakd1PFUvW1g/Sj5oSCJFWlYiLD5c
TyZCK21rQSRrRDNoPjNFdVAiXSU/JTZXUlFwLGIoKTRQQ1hSVk5EN1xwIUBROT4KJWk4YkYjRmg/
UCk7TnIsUUZoQiE5ZUAxO181YjEwRCJfJjA9ajkpKGtUaU1dIkljSmcvTVs3UGJLYWc+LSduYSI3
bz4wNFJtMDpKVkRmdTNtaXExaENROTRyaStGSUBFPDZiUUMKJWhedCxXX1xaQXVoTTdwVC8zJnRE
ISJoWzUrZFJNQU0wczoxUTIqXiM+QyopKjdFTDdqLSItcFY7amcodTNELjtKX1dkS1JPNGsvXVtd
cTpMQm5kKV9CS2tIRnE1bkwzWmMwPk4KJVVubjBIcHAuTWdLIl4nK0gkKUI1VlQ1ZCYwLVEpazNt
ciFtPV1fKlFWYi8vOlMqWTlDYDEsMVxVW25MRUdoZ15cYXE/RzgzJnQzUzdtVVhlZlAxNmdaXERM
RjpwWV5dO0lXP2kKJU4hQUktWENGLC1BUztRVWVoVUNvYDBcIlVQXlZbNlA0ImlqNjpfbjtQNjc8
KV5sKltiQ0onJDw5KWJTJiE9K2EoJmBuWitBWl5nV2RFTD9zbjA3VDEqXzthQWdcNzpVai9DKVgK
JUNoOF84NUxGQXElPEhEXyE7MUV0YCw9WSc9ZmxGSS47RyhKPFMnMSViVmRJMDFTO1UoQjBROE9N
Z0I2OTA+JkozPmt1YCVGUWU+cyppWklgJ1ZfcFpkLmJycyJjWG5YLEJdUToKJT89NF1vTFMxbl0z
KTNEJl9rLmQlRSZgU1cqbU4oU2crQCFTOGFtY1tSODY5Oig4aDJ1T0daNVZFYXJSOUFpJ1UqZz9e
MF1WNEslR283WUksIi5RdHAiczY/XjZdQFFFL001S0sKJSNUV2I9aGI9I2psZCojRWdQPWxtQUxr
Il4sYWFHNzNcKzJyYVZlIkMwQk42QD4wWFB1P0cvJTA8dUVSSU8kalpyTE0sWHVWTCJSIis/V3Fx
PC9UKFdyWzRVK1Jfbj1QQFdiSG0KJTNARFEiMzhOW1ljUkNlVCddODhAXjoqL0BiZFw+ImRjS3Nq
T2M2XUpKMmktTWYyVF5sITJWJmc2ckllRS4tdWlwXlUkYlYtR2A4SENSTT10Vm9XaiYmVUMrWVhW
SlolUD4hYzoKJSwyWzg8IlYrWyosP1pIbiVcPG0iLEdnK14hU2xvNGlLZ0ppPDRgZmxBXnIyYDlJ
SUQhZ0M5Sjs+NnVpPldLLzcnLDNtSTRXLjwrWWJva0Q7Xm4qRlwzWVoxXl8rLyRrIkUlNW4KJSs7
ZUhxZiMsMk1OWChtTUk7R0VPb2VgJWtAIWdRNEBrZ2ItXj1abVQ1PTxRSydcMVtbaltbcSxpP0hD
bCRiZktRP0s3IzA6ZXNRc0NTSyUzSE0iIzteL0JpaVRSREw1W3FoZkkKJVEpIy8nQ0JDWS5lLm5J
XmlTJ1BaJS9uaTw4Wy9uNDFZSkEjaTlYSXVEUXFDSEhrI0dnV3VkPGljLDQocFgtX1IvIXNaP11l
XjFnPDcpNWJARjlpc1pudVcjZ1k6Plw7QGpCOW8KJW5oYUxxU1ojVFRdT08iVVRXTyhNS09wUHNu
WEtxMCpAPzxuQ3JnYnNDTGI9X00/c2pRJ2JxWD46OGlWZSwpSEVcZyxhMGVBOyIkbl9RT2dYN00k
Zz8nOCgiSjBmbDcqQTA/ayIKJS9TVFVeRTMhZDZNSzksSFlSI0FSYV4lQUhYRmsuN2YiWEpoYSko
dGQiZjhqJzQ6aThoOVUpaVMnVDRbXU1bPXVKIz1CUkdjI050KjBuUCcwKS9pLjYkWF1mMSk0QixS
T2xUV1cKJVBeQElHVWlxW29ZNyctTzltWlZXXEsqQysiPzRxclo/M1otOV0mM3A9JkdlPyFNazFm
RyFDJ0NuS15GQyMjaC8iRUJQXj81aWJKNiJCMzA6b000ZFVtVko6bVMnMTcyOTVua2gKJVlhOHM3
LzFkRTo+TTtRUGxdPGcnISU/TUlpYWA0RENEazkiND1VbjMsZnNZUEAiQ3FWcys2PmAoXShLW21V
bzJMLjI8LVIlZ0FnUmRsTGQrRWphJVBsVTJYajVHTVdbZChjWkYKJXBFaCpgKCwqcGtWVFImbFRq
aSxQYmRvcmZaPi89SkMtQ2lMMG8lMHA4PmNYLmg1JzIncTVCSnRNOyVTSydcK1tKKnQhPjRxKGA/
JEhPPihQcGkvMjQpcj8qYFJnbGY+NURFXygKJWJgNVM+UllqSUc7NDQ3V2kkOTVZLS9WNWIjMWNi
cWBGJFRvL1dBLydWO1dyNTdBP2REQUlOKiJHWW0wUSc8XWhcKFFMJFAjLDZjLE9ka29HT0cvdWUh
UkBFLTddLk0wZDoiPmgKJWtAL3BqJUxrRWxvdV8xLkU6IzJuYCFtNyhjRitibnA9RnE7LytRVFIy
TXFwSF0+UF9vaG8zU0xsNlsvZ2hwOytjU19rXUMoWV9Ka3IsIT47S2FDXFMtM29yT0c+WS9RVUo0
I2gKJVtocjNMQT8uKytCampPQm8yRHRoKW5sST8qSExOcSgiOCVKMnA0TjJML1NBX1koPGtCa21G
ZnVoUSZWQEwrLzhnIi5EdD1ecCNLWlg9SCJFNnJsZHNHaz5gPSEtLV4lTGlFcVYKJVhgQTxAOFY+
REZabUBCQ0phcG1DWig2VHUzVmg3XydQbDpfY2k/NlRuWlFzLDFoJjojRT0qLFdAO1MlWDkvWyw9
Y3JCSzhDbUUjaU8kbXQhSVs8QUQ0Pi8ubSczWmBxUCEvPj4KJW9VaHQwYmstXSluMmJDZF0sPlRM
YC0qVUtQYFtMTDZucyJaOilxdC5LYjpjcTJZYm4+LEFTVjtjVGQ4NFEvc3BWIj1CUGsrc1pKQ0pb
bjxPZWtdLjZSN2JPXDg7NV1kWW0haSgKJT5IRU9MZnFlJEQ7U2ZlZl5EU0NpQHVxcz1nNWRGJG5U
QDdyalVBOmMtRkpdZmNXImoqZShgZW87amlUM2lwN2lKKTFgU1QkSzZWci4+ZlUhYSpvT0thcER0
YiFRQCIoPFpZXVMKJWgrLm5EUFBQKF9BPWg8R09ZdUViYGJvN1dYOiNSYFhOI2soZGBHZS8/WUtT
NWdSK0BmWlNgWUgpPHFKUmZTVzhba3Uwa0xhRSw+c3IyInJYJDtGJCtqIm5zYkUubHRRQEFcOG4K
JUopRixFQnBZdSFQPjtcLzokRiJlOERgVEdqTWtjPFoqdDZ0YC1xLmMrOUZVLmxFRUQ0cTBMXnNe
PUIySEBLYnIiM2QnSC1cLXByNyZXUTglcF4+YysrY2U+OiovPUMwUilzU10KJTRiYUJlT0BzbFg/
JXM/RE9OaSZaSUtoKEZPQTElMlNxXGZIT08mNzdeJzZeUU9BVUA5aGhGaFxKQ0FsKHMrWDUrSjVM
XSYrLDxQT09PSlU5NT1cdUNPQiRePUAjJlJjT09rTG8KJV5kR2hcYlVlZkhrVE89M19KRzkobjAp
LiZAYi9SRnIwWFs7QEo1SSshMjNqZVRJYic0KzouV0k2PWtkRyJYLWk9I2RrRE0zWGk5KjZfcyxd
LlJBYGYmW0F1Oi80OihiPFdxXCcKJShGPiM6MideVyZcOCQjcmFgYEA2Kl1HWzQ6Y0lJMDBRTCM4
JCdcKExOS1tIUyx1ZFZHJ05RaGkvPjZyUyxrLGolSnBDRlckLldVKThpMzVWOXJNdU9kRD8zNEYk
YlNPTV0wbzEKJVxMNW8lJDImZm1sKENSVmtjLilkLWw3UlU3b3FqTDgzLzNIPU0hXU5MIl04aSlH
Xy8sVyl1JjBlPG1pMjsyKl8oVHV0blMmTSRpLCQ/YGQwUlR0dSY9L2JfNWQwJ2pNX1pIWz4KJU1e
bUNLXyg9KHNdQkd1QylKWXUiZy1xRF08LGk8PWhRR1JPSlc9ITIzI0BNJWBCJjE5aW5bVFQwMyRA
LTIiclghSGwiQkw1a1hUPyYvX0RSTDpXIVJOUFw2MzBbcjd1QkkmPVkKJU1OUkpoN0FdT09rU1Mm
IUwsdEc1OiEkNFsxL3Q3XjxAcFQ6YkRURDVXMmdkY0UqaCRUMD9UbDsjJigoUylKWTJNLlxtbSJk
WWtcTlcuUSFbVk09QjlUYWtpaGZjJG0mTCx1T1QKJWMuR2oiMkEyYUQyLz5SLWdLNGRWVTcqI05t
IlNdVihaIXE/Ikg+aEA9R2tmcUsoMmAlMmdGdThAXFMpczRGK2NVYyVWOS01aFxzNzhQMSk+QDs0
VWJcbEpZSGZyTCEkVTsqU1QKJXJhIXVpXldaVmZeXGp1MUZQaD9kVC5sJWpPY1dqaEBBOkQ1PFda
bVZLYHVPK2JmJjsqTGtRTjVGW0JcOWE4dVxkNlFyciZfRicyMDkqKU5kX0xpMVU7XiZkPF9TY2Rh
PjlVY0kKJV8/R2oyQGovVlJfRjlBckNFXklbX00sVmtrIUZmLmA1VzFRSFNOUzBfP1okdUstQTsp
YiF1bE1NYDZyM2AvJ0FvUDlKKylBN1UjVzxuV3RYMDMtSy1cIzcqXVw4Yiwucm9uV2kKJWxAbUQi
bm00XDxtJDE5YTEyZDhUbVkvdCs7MUNybm46YVljaT8jYj1QVjghWCRpO3JZOFZfXVlPRWhqZVpu
SGdwJGhsX25NKzw2aU9GJiUyZTFaNENPU14qKWo9Y0pnT0Y4NSIKJW9IUWoqTlZzc2EiQFNbW1oy
UiJlPCxiXGxcMHJJJj1wWiFOS2khQ0Y8LDh1bDQrbydwVGpHWis+TjJBRSZMXltiZTFHUTEobmtQ
MlhLTjRORUoxPjRUdSc8dSJxcDBUPlZub1oKJTRkJlhQakokWi0kKTMqcS4yUzg3Om0hJmVEUmxe
WSJGO2JxRFRrT1otSVNYcVEidDdVMmg7bEJXMS5rWWZFcTRxIXAtI2NiImosYydJWDJsPEpDUURV
LzhKY1UycSNgLCksO1IKJShTckZXTXN0U08uTVxfVjtIVidXXUdpPz86SDQ4cyZBVjErXnAmXV43
bENTJUcqOGwkMCJlOGRtPEplWy9rNmdSSjlZZDpmMFQ7LFg0bCZsUjQ/XmokTUNQJ0FaVWZBPGlF
SEwKJTs8QTNDXUwnPDhNYCtcOSw1Ly91KCtQTmUpSjY4MGZnUWMxZTUtUEdlbycyWFRvKidNcF9L
bUlLZlg1amMrL2ByPylWOlg7OCE+UG5qNHFBTXNhb0VkMi9oWT8nUEA1T2hhZGYKJW49aSRpOygy
cUgkbVkvVzphW3R1PC9lZDRNYF5UaTllTWMxcSJZKzdRPl5mKnA9R1JoZGNDaWBNSCRKKyRVOylx
PWEvdU07ISU3UiVlSEViR3VOQ1RvZionQGM7PFtbPCE5bjUKJUxQQUVyT1gwMGteJT5VbzFmUWxW
V2kmPFFYNyhxSVRsdDRRbnA4XTklQ0tcNVNfOG9GbTdrOmY/ME84c2A0YkJ0P0Y0a1g8U2lYY21t
WlYmZS01aCgqdXE5aExPLW5ERyZhSSUKJSoiIWxzWz9FdEdqXFYhdWRWXVA7U2pQczFYKlhFXV9y
MyYnbWpCUSlgLzwjZjpuZG11TCpXVVhXWyVqQTU7MyhPRkJWZDpNVEsyXydxZiM/OE9VV3FhMVo2
bT9vdCNwWXNJV2kKJXFBRzMpPSxIcjVCP3RUMW9PNSFjTlFgbmEmRDRvXCk6YShTOzlVPUBYWyRr
IzM4LEApVEoyZDI8YTQuWFdDTUgsNmYkR2o8RyxpU2ElKXBPYCgyJkBHM0lnRWUiUWBwUzR0LCEK
JSxyOj5HJE09cSRCWWdXNWVsZE1AR211JT1RODgrbFAqOCRmKUJbQktNPUhoY15rQ28vUD9QTCMw
TylvJkowNz1TN11dPD5AJUZTWmkqXjtLInB0WSFrXllMP19RJzYwYyUlSiYKJUtNXXJqZDFZQlBP
SGorNE5pWnRIJ0AvXk8/aUMwajcoWDM1P3BuVk1GVWNqMU9SPjBfV01NI2JOYk1KIURoSFA7KSYx
bDFkRTd1S00qWzhfZjpYZTksPWRoXCY8PC9rKylTclgKJVdqJmQpQTtPWy1cbWVSMzk3RjxQRDBA
V3A+Kl5wYTJkaW4wVHEubS5rbydjLjBbSlJuNidjW0BvbGQsQG0qZmUuQXFDaShLNFxMTVZCIUYz
ckBFVj5lT0NVVSQwYjExZGQoMEQKJWpbbnJJQT1hYGM2bVlKR2k4Nlxfak1hPG4oPnJGNDcpRE9c
UFxNWDslKi4iJXBbQU5qLz81Ryg+WjxhRmcvOktdUGY7bzU+QThqYGdJbWFKUElpJj1dLC0oTEJG
UCRdWDdNLkEKJWxQWyJQOUw9RygsWydZKUhwdUtoOy4lVURjJWJfSWNKWW9MXVBUMDNfU29jLyIi
USRfTyw9NW81Sy1nN1w/ZkFkPCY4OHBsTTtGXztpS1hgMyE2a10jTl91XWFZMGJaJCxTYz4KJS08
XFJgQDJNLEVdaC9nXXJkYS4pWjZzLGYhKGI7aWJvZG0kZm80Jm9kdVUvKW1RVjhKMSZHOT4jXUBi
Mzo7JFo/ZGUuIUJkT0AoJ1swUkBARVdBViNtU1lCLXE3TTtaVlotLUgKJU9UXG1ETHVBJVxbMk1o
NU5SIWptMDQpPFlbTFkncGY3RE9fODBdIjVkZUhgOFh0TSNtKE5TWCo5RmApNlRabFB0JDdzQjUu
aG5kKU48RyhRbT5YX0c7LyRRSzBXNWglYi91JEIKJTFwP1lJM05xJTppYVkhRTZkMyFULFg3RGFu
YDRVTDFidTMnQDsuLHRUNz4vISszMUJsTyhYS0I+JFJVYi51NlNyUWgyKGRBWUM2QD0+UEkoJChB
dVo9UEFFYj8kIyMyQS1YSj4KJV1RQD4qKGJWbWdmbzolc1RAYm0hYGVzX0NBIi1ZSXFOUTlpQ0Zy
dGZHLWFSNEFeI0wyOUhMayw5JzE1PiY+V15JT01dKTlrayY7P2olNCskO2U/UkhRc2krcU4wP1ho
YzhdciUKJTBlLVohQk4rS0ttKmRkTT5iXTB1cmomKmBMVzdjMjQxPj5aNmhaYWlxVkY5WEQpK25q
IzxvVClic20+WV08dC02QU1LNy9ZX2FLSyZhRD1LQ2djSk4jVFJfUC1RLHInITVXRikKJWRCMERG
OFdxb1dQKnVaakNIO29kYEN1aVBsbiVVKUxiSjFhJi8+cDZXLmpAJDdaPWNURSRlQUpNVVBPOCxc
Rm06K0VdYldZT3VKVSRmVlcyYGA6LD8oRyJHPjtFU0hUU0VuVVMKJSlJXW1CIU5eSDg2ODsxYT5L
Yz9tZmQtbnI+NDBwUGlqLEVsXDppPW0vWGFOZmFVTkYmRW9aWFVGdUBwLz04XC5eVzJJKVglP0A5
ZyE5VTYmTnFkVkpSMlpcbjZoOkU+ZzxxQDsKJW8zaChtaDNgcFQzUyQ9YzhoSUMmb1wnO144TDdV
aFFjcjIpWmFRTHNtL14nYF1qIixkYjk+SzoyRzlxdE9bLStIZnVCcGZGb1gmJE88Ql9ARFdub0lC
JF1RdW1ZU05eXmw+Y20KJU1HX1VFL3NMLkZWaXA4NDJbPVs5Pz5hXyltNjJxM1pPKWVbOTwkRk89
RC82ZlRoSHJXSydCQWhyLT49QSNsPFdkM24tKkRLLz01NFNjVGlKW18/bDstUjxDZDxCckxMOiYs
ZXAKJVw5J1o3UF9GTmErVDJkXzhSbVJoPmBzNFY2IzxhPlZyJmUxXSVhXUQlKmJOI0ZALFBAMEUy
Lmg1YDhVMSNNQXMpZlNSUFs/QnJNNmdpUFwhLz9RR2JHWixUbVo5O11wZC00NEwKJV5waTNOSiNC
I1NTMERbbyxkWS9KUnIqYEQkamw6OHJhWUNDViZrLjFMNWlDTj0ja10wakxaOlVXQ1ImcSZidUlQ
QW4/KzNoM2FSUTByTm1kZyhVUzgyYFBhXkxdV2hUaTNuRV4KJUA2KDFZQXBPbVpbIixjRkU+Wych
UzhiL0IrZypUNT9JZy1yO2BeRGw5PkZ1L0BQSkxDS0xgRWhGTjs8LiE3OkQ6N09SaUQyU0poc3I7
TDdrK0QiXWxfR1dBSC1kNVEvbEouK28KJTlYQnFMVWFNXU4yZVd1RTZJN3NoIycoQ21QKGZRImct
RzljMzdpWnRBWipoUUM5VVRSOU5eLyhiVEsvSixcXk9URFI6SElAWTg/akE/SzBVWT4lZUcqW2x0
bnJmZyZEYiNqVG4KJS91VDdjaXBTUCg1InU5QSNiPz8+Ik9HVVQ0Vl1UMjVzZDxlbWptJ2g1JDQj
czlwM3InOl9UQ1tGUjVrcmFaJkZDO01xImkwTmZgYmNOUS5eMF0rN19HP3Q9ImQ1cEwnQD5eMT4K
JVplczorMVZaNTdnIkJRLipCZEJPUGEmUmc0MHRsZT4wMSM3S3BLb0cubkw4UmBtIUVPT0NCOWo9
a2FYaEwwMXA1V0YwMWxBbihrRjklZUpTMEY8azk8YEosP1tIJVFOXDtuQk4KJXJvOEYoWCJcTXQ0
XkZ1ZC1IZlBKIXVHWE47KVNUJCNVYS83Oy9ccUItX2ZZXlRKOTBSJ18kN1VUODQoLWVTUjtPMUda
R09ORC4vLSY/a0ZILlkxLEtLNCwjcTJGNiRGbFcwMTEKJWJKNjJxXEprZj9wa2pON2BqXTw+LUNX
YitxPygkcUlVSFIpckUuOzM6TWs7VWdwZWQ9Z2cjVF1QS1RzPkNrPChZJ2pfX0VNLzJtalphbF9f
Mj9oU0JxIkx0Z2A2Zz5kWnAlS2AKJVEyc2tbRHIzOCRkMCghYC9bb0lOZ2NrQStZaURoSlMnSGYp
PVczOERXNzIiYiZNKC1lakI/NkAnZkhGdVouMlEnJFI0NDxPZXQ0UWw6WCwnRmdzSl8nSGNabkxR
UDBFLDBwc0sKJTUjT2dVY0NTNFppTy1xZyljST5LJ2lVQSZbSCdNRzxEYWdmPEFaU1xdKCEtMm5i
OGZWPS9mPTZOKyJRaVA9IiUsP21aQD9XQj1MLig0XC5PMGFjKUZZXEVIQm9AQUpzaCgta3MKJWlP
TFhgW106OjtvY1J0XzZFczxMPVMxZz5QW05mMmNZUFBlYiJdbTpCa0lCNFouTz1UOVFFUiZmRjxe
LCc2WXJbWGprL3Upb3JZdFhYLVJXOjRoUUddLyVJKjRyL2ZHKWhFdSgKJS5gOTU7IUszKzxPWDRE
TUl1Nk1XQlt1c1QpWEhVRF5XRT4rJUdrKWIsTypgXFZEWE5ZJUFbPmEsLWRpN0NiLl9yT2Y/I2Jn
NG5VclRob1ZUV1owK0w/RE1tI1pHKiI0bjNubycKJTlpYTY8KzIxOFxMVStwbiE6aVEwcScpbVs4
aGBiVWMoVkxjLy06NTNONHQqQExHSUBMWGRYYS9JMFRTKi9HaCFLRHEyKiMya1hRRilCU1ItWyI2
SE06UloubzhRLGhvYSt0PVQKJTxSRW9STy8qVnM3NzdeYiJpWS8oUilqTiZdSHIlOyJYNClHXVZl
P2dYSE5cNk81KShYZkcjP1dyUSouPVUrVG1uOEdfRV1OT2pWbyVDMGJfREcoZTUmS1s4TVA+WVRa
YVAybGMKJSwuVU1mMyw9Ti9lQzlXbGtzNGhEKjxBS29oaE5tJmtJJWo8PjtuZUtqJXVcP1tyKTg9
QW41Qk0yPTZqcSUpQS1tTGslPSYwZEg7PFxvWU5tVW9dRm0nNjMwJ1o4LzFKMmxrVigKJSI1UW9u
QmhVNFRHY087VEtHbnJwR20hN1I4bC5uRz9UIy5OYkshbmNtXFdIYFlVRiRwO1FfbUhrMDIiUj1m
LDc4PnNmV2kmYzcuLXIlK2lCZzRESCNUbVVWUCx0TEcpWi9PazsKJUpwUkZzXyxvP11iSDFsJ25C
IWRuMkZuYj9AKUxWUUMvQ0pTWi9KODk4PUEtSiU1X08yIjttJ108KEUnRl5aTnFcUzZNQU9dNENK
YVhDJT46XV1LakVdXTE7Kikua1tnM1VgN1sKJVYnW2poZCJncVgnKE1RVmNYaTtYImtiT1VTLGIh
OjMnTmQkLFJCN2pTIW5Aa3MkQUdHJVhgO2YxLDYoaFFSTEBhX2VLTG1Tck9BWklUV2cqWkdDNUVc
UFYkU0lbOUQxWF0taGIKJWZmQDpmPzhsIl1mbzQjKk0zMXVeUVY7IllvYUxHZWYzQmVGUSdtbkpW
L0ZNb0ExWzlfbiNQQXBXRT4hIkBsNWFRajpnWmVuNHVvalVWKEA0U1lOYlFNOjwxaVJHT1VwXzly
QVkKJU5vSzdvSmIsP1kuNXVkP1gvTmBCNShQWFJPX19jVSdKb0xlM0tbOmtnPGlUQkhcTFpIVnVM
a0ooJW1EJSZRPlhhTVVpanJVPiE/Skg3IT0wWW51WWVXV2ZpdElcPCxRVHQ3cjIKJWdSKkAiaiMx
WVcpSTU/aFgxMmFFbiVINnJTVWAxZWxSdSopMlBbPEYkN1snIUFPbS90JkwpdWM4M2ZtQS4xZGsr
SFlEcFUvRFpnVTYkakxiN1o+LSFsKlhQVTA3dUIlcTAiQSEKJT44ZjQhWGs5dFFSVjVacmY8cUcz
bEZMRnBkRThbPGkwLDZmYER0TV5AS0JpQSxDInBUUkI+I20xVDtKJCQqQGtqXEdhJGY8YV4uNSFN
Q0dVaFxvRyEpVTdnYGJSQjpUOGcxZlkKJTEpOWwrRGwhM25LZ2hQQjRlcyxVI1toPDxSLkEicEUh
SSJ0RDtUIWFITz4lU2U9PSQlWzV0Lio9SDVNZSRcLytGO29nOidYO2FKRlddLyhXTkMldTVSXDEu
VU9VL25YLW1MajYKJVJiXSJrQUdIOVRoNnQtYjlGSXNManNLbXA0LT9vck1qJUwxPEBUSl85SyY7
TGtmdSJTaXUwWElncWBhUUQtNitvKy5WKVRmN0xnaDdcQ0NkQmFwaTcuX3FNMU9Ma2o5YWZGOzcK
JSRjampKZFAvL2hCL1pRREQvS0h0ZGlTTSVQaFAqUmI3ZCxWVUZeRy1nMlEiMDBqNUpkbVY1c2ZS
L3VwbzFNWCUnO2xOWlI1V09dM1glPFVWVyhtcCpKNWZMO3A1VTVsKUpjYFkKJWRJSjhGJj5bZnFU
cyQkazlwaihyV2RaO0ghQV49TylDSDgoOTo+VjJkZkMiLFA5PG8rJysjYyYtR15yZD5mZEdzQilL
IyRfZXUtcl5YKkMkNWM4UmcoZik1Z0oxQilvVyFUVHQKJUYmVj0vQkZJP3FEJFZRN0xdaS5sJW9A
SGpiU21MUUMqYHVjTSdJIzY7OFgzUzhFQ1kkMCMoXi80MTstISolZSpfLEJcTCYxIy0uVidQTFsv
OTY6ZTRBOV49RlRpbGhIQmBybV0KJSwwVU07UEYtJXNaYWhWcjRLQjU4RWhzYWAlIyxkPEcoV0tB
KnVUXSpiUlZeT1dxJz84KF1aNmknUFVcVyckc0ZXKV1SRFFNK0FiYm9IYEVWTXVlYD5XSDdKdVRQ
RD5kRydJLigKJUJLKi5cVzNJMjk3Y1EhV29MXGgvMmgzXjlKQmtKVWo9c2AsRVpoIXQ8NVAjU1t1
VzUyIUdMIlxYRl5SMWlrUElwK1ZHJmVRaTw1dUkoVyssP0wtbi5McCJdZlEpYCIpViQ5OUMKJSUz
OnUlPGg3YEBbO00hXDdPXkkuT08qQTMqcydjcWc0JSpEUHVvcVFcPy1oQmshZEZzKGNVOU1pV0FR
OGFNbyFeZlArWUJdY1FnPjtcNk1OKEtWaSc5MHJtR2lwOz5ZOzwxJnQKJWNJVTVsJFFaYThaaE5T
dExAL1BqZCFuPDVJa190KkpgW1IsO29Pbm9kdUBeMW9LVDInKikqS1xobGJFXlM/VW5KcCgnamFB
WFs2JWs3bCosX1w3KyEhNXFyQXEyMTghMVckZCoKJSEtdFtiS24wY1I9Zmc4VWJNJTJeb1xVSlNr
QVN0aypjO0AoPktAPW0rWlJXJTZRYlY+I1koLixlR21KRzkiO1xcZlZsP3UtaFdcQS5JdWk2KHFy
PytTZk41X2JYN1oicSVJP3MKJTVBMTpRRChSQyFcSkZPPE1DR1RbbF1GI2M/QzVWSD9CLnUybUVR
SGdBTGRLX19gJ0staVNzO0hsImxUZUtoSFxwVFNKWElrcjpTQnAkO25VaUslMis8PztMSSlePGlD
VThhKiUKJVoxX0BISG1oRUAoL0NGV3JTI0Q9UUowOkc2aGkjZDdoZTgnOEtyaD5CYWlOSGtrSFFz
QyhuP0lTRix0QmhAWl5Gais7YzpFIlo1QyJmXiYuJzVHS2pbUmlhcHEnVGAoOkopPjIKJVtTcTE2
RkAhUEFuMEJYcFBqN25kX1hBXVxUZWJJWSZWZUh0a3E8YmQ5L0B1REAnSi8oQDFvNSJAOmAmMF9A
QjMlIkNOcCVXTFsqQElWXixBSVI/UC8mLzUvcmllI0AwK1dUckkKJU5yVHBjY2Qob1pGb2k1UylP
QU1hRihzQS1VOVEwUUwhRCFrcS9TbHArRC5TLmlDbyVyNVMtdTFQb2lBJVgtNkkqRW5rL3EkNlku
KS5QUEsxOnV1JHVVdUBSWHA1QG9zPUdAKGgKJSg0UytycylwQU1yY1t0XTY9MkB0Ris6U15MSEc9
YD8ya0dqSkI1XVdlNERhLlc4YFRZRlNfW0pFJWtMI1MwKi00V2kzK2VNN2FCRTFfLGMzcW1YbCZu
MTlTcW1CPl5NZ0YwZ2AKJXBuJzZZX2AibWkkRGBpRTFJW0ZMcGU0OW9rNHMqZTVjKlRDZURnI2Ml
WjNjOiZxaiNjNCE2OSpYUVgrUChyUiMjX1QoZ01oKSRoSVxOQUZKO1YwW2srUDVjaVVlXippaTkm
Yl0KJV9sYy4vby48TUQiPmpRTm8raVtkTzBtN3QzLXEjb1wrO2FHKitpa1lNNlopX0NHKis6U1Ns
MickLFxQVDNhWVxaVGIsYzJdT1FvcUU5SltCVE4nY0kmX2ZQRkBbL1lNOm9lPiEKJWBpTTR1cWdB
YkwiQzhEL1VbQFpeP1E9XldKVSwtYF4xLypMJ11dXVc4Tl8ubW8oTl48QmVEXyI3TFFlZEhab3Rv
OiQsXHVWUU8haDFFQEUrNENUUzwhLj4rPiMpPDM4bC9UbW4KJSdLI3Fza01kTXArNz0nbFpUaGsu
KWY1ZzNcTSo/XjA8TXNAVlZtZEZpZzQ5OzstMz11SXNLRSs0RF1uKGxebzYqPXFAKy4/UTZLW2Mv
LUBIbjxYcF1AQ3U/PjEmW3IqSUciJTwKJVpkPmpUNm1FIltNLFsnXzM9VEdvYlwiITBQZjcuaT5Q
PT1TKSpuVmFsLTw9KyJLVXFyMmxIczdmLVtsJjc9WyRXT1Fka05IXjBSV2pSdEAhUicjJ29GSE9H
VDpQbVNtJDxhVXEKJTkhTzhvVlpwRkpKOUtbSD1KKSNuWSwnaT1gaWpPQjAjPTc/PHM0Q0lwSlhj
R2snS0BmQzVeQGhDYjNdNFVuITRVOjVaVCdhY1FBYDtXdEVyOjJyIVtiJD1zUVBZO0w3VicyVGMK
JUc+WlorZjBYLENUU05JVnI6dF1TcXElSC1gKW1uRVJ1QDB1N00rciZEaz9rU2JcMj0jQih0QStX
ZiVKRjQ1OGNlO2dATnVHc14wX0IjI1xQJUEvK2ZeWiIzU1IqPzMhKTwhaEoKJWJjI24tMktPVHVK
aXVAZEY5LXE+SC4kUCMpMyMtSTpmJygjLGZGQm5bO1t1LFdPPUFlUD8ucUspPTxRM0szJklXRjJR
OzEqOztYYWJJQFllNlMqSjQpU1tLP0s9TDRbVHRObTkKJStoaTYrJXUibz4iPzsjQy9pWiNYSEFb
KixpRmR1RHFZNj1aQ1QtUzxDOz5STVosUTloTDhUWVBVdWtCMEU4WUQqK11oXTg+IUYiNVdMUkos
a1dqLG0/UGYuVG1AX1MsMmozVSwKJSY/OGpsOWY8KTheJV5qcCwySkI9bXQySTpGU0E/LmVbSj5B
KCRdQnJEST9fKUdBVUw8TV9tZGVFNERYVGxsaTNjNmVqNT0+VV41LiZET2xGRUhILSdbdVdVPS9k
QmRRX3FZXiUKJW1pajdFOkJTLyg1WUc+P2gsVSQvQDhqMlpHJVl1XEpeTTxBJTdJbCdUbzZkNixS
YzVCLCFOSmlvb2k3OlRTXks0Mzc/KE0hPVZIYC5sRCk+RC9sc19JXThoQFQpNyQuT3BORUkKJWtT
LF5EZDd0RT1yLi00NFE1PllEO1swSlNFVC1KbllqOSE7Nk0tbChGVSNINE0qPkBGSGUwNl1WOTBi
bFRLbCFnTCUoczZrR1k5RU8sNS1zV1hRRC8yUE4wTlw8L2lrXj4kYFwKJS9xL0hHRTknU3Nkak0p
NWoxL1s3XkklRl07ZlA6ck1EXC0mI3JBazQqUyRHZGVJQUVgWyZnN11MRlM2XSIqWWVkcjk5MiMi
QmdUZ3AtaVshLzdMP14/XFBNV2gqOyNrV2hoQ0YKJSxZXWpyRCYwMHMyUDFLLkU4cFRjYnIlVm04
VlZZXG9YX0hMO25Yaj08aHQ5bz5SQ2BtV2klU2w/YDtZVFIxKGtsckNzIWlWKCNdRjBVNTpoTTQq
cFZPLlIlTnEyRGwhK0YjRlMKJTtfKyNmPFlOKjxNXSZwI187XHI8VCNZISglYCJaPDBfVlcvLU44
MkdHbCQpOWJ1bm9CYj4mNCo/NGFAR3JgJ0QrX1N0XF1JZVU0KmxVK1xKZEg9VWwoR0tIalRPcjop
P2xeVCgKJUNXa3EmNXFtR2NEMmQ4OGhWUCMnUEYqZnBOOjooWmVmOUVeKjxjJHUtX280WHJUUD1y
LEk4SixwMmRLXFU+LTEkXnVaUm1xSXI1Sj9QKEshS3NDYTxwdXNCckhoLm5UKVUkISUKJURvVSom
SkpuKjlLS3IuVGJyNHIwVmFfWEUsNXB1YS1ROXNGJDwtdDlwZSRtLEFTI146YHNOVyZbKmxGRT5X
MXJwLWNbJ2EwKURGOC5sL2soXysyNUZQdUhgaERRXigzSUhzKlwKJTVicyQ8LWZVPEJFOWZkdWoj
TmchXStGOCcoQCRfOWZRb09SJlAhWWglTFMwJVVKP2hgVGg2TlVrImJuTWRxTmtUayFSJFZpYGQm
M2ByJmNpS0lxSkZQMWMwdDdbQzk1OiJcRTgKJV5MS1ouU0xaOXBCMkVUMU0wUTlzXWtIUUpaNWQ5
NW06Q0FAUnEpQHEvP2t1YyU6TCFFPjw1Z25bODdebDg9dGM1ZlZzJWRpKjwrc1VQay9iKC5jXTAh
a2lIQWVJOHEiZSRSdWAKJSlMXV5ZNV88RGM8RXNKIUQ2NV0vLF1QWjxxI2RmY1B0XSRvJSxbSkcz
SF9vL11NSVNvOWA1dDtxWyUnNFdPPTw2YlFmKkAuPFFXb2csQzpbNSVUXSwlSXEsNS9GXiVDViRc
VD0KJTgkXEtVOk4+Pm03OG05Zj5OZVFQaiNmVyRxNlxwM0pQPjF0Qjdic0hbK0owcXIzWCwkUklk
L1chbEA9TElzKG4xRVNlZ0tAVlQ0UnEzVmYpYmo7XjhLTUwocGpQZkxcN3UvcT8KJURqZyxrUiNy
Pk8vR19WTElXRFdUNmouJFBLWFlhUWpCSDc9TUBVRj1iLnNUalFSbjdBZ3AxVSZLWGNuTUAuSWM5
SF4nYGFgTzclOiREOSsiPz9JWjcqTCJMUE46M3NVX2RPVEgKJWF1TS0oUWFXREtBPHR0XEtWSXQo
Yi1wJjVGbDY6bCNcc3EtbFAmMD5ldV5TczpmZi5zPTdRPFQ1VShgPGYoNTcuMiFpNW9iZ1wjYlRk
M1ZkPzUzY1BLcV9cQ0krQ2AkLyFULywKJSNeRCY0P11jdG43NmlDPCluUUI+IlZQRWtcWy1Lb1px
KlVQLC5KREhaTlZPaihWLiM1MWdERlFHN2c+UWA5QlQrXTt0IXUvPDpTU2whNl5PNkBsaklhdEso
WiRSR2lWMHU4ZTsKJWknakZxUFZdc0MrLDJQLVRxR2pDXikuMjdMNkBRNXE4XENtQTEvb0ZrdGFj
QFEoYiM4PFc0Yj5hcF5dciI5JEo4V1xZQF5dXWFqIjtQQmVba1hCa1UnSWpQOzllWig4ZVlWTl0K
JWwrWXMuVDMsRlBUKDc6SGROQUVxQyVZZS4+RSRdJ21pITUiOjEyOjIxIVtkMGdCMUdtJD0+MzZt
UzBDMVZ1TitMZFZBTWhDNk1eMkBDJG1tZnBKKkRdby9faF02QlxnP3NWYykKJWYvYlBGQjhOJSZj
J3FTJHA4WkYpbylMS1FDXENNO21aT1JFW1pWI1REc09sTGBiPmopKCFKU0tLTD9QaD90O0ZRZTBg
dS02XD5mVGpcLztCalFuaTlTTS9hLHJCKzJEPE81REcKJVI2OTBOLUVgbERgYFgkJHB1NWFWamc5
YlQpLy1PbCxLQC1yN0MnT19UUDonYCI6XWg6Sjc8b14+QjtIXmcydF8waCxpPSolbj0xIj01X3Ns
SlFKaSktNCdbXmdYV3RgbChgLSQKJUFJL183ajZuRW1TVytEPilhSTFgVVpJVGlGSWVrdWZncmQm
RyNqUTAjZSNxQFNLPl8valRGXVpPYWBoJTlSKTNiJ1VaLl8tPVRzJ0kwKHI4JGw6dVw4OE83Lydc
WztpT1hgamsKJWZOR19yTSVyTz8rRiF1OCtmYiFAbygtZmBmXT9OXi9nUEsvUzpHRmk7K1xYVmZ0
T0Q4Vk1SKzFNYixyV0VSclY3V0VNLWVAcGwjcU1BamxJOSVWZVpKWURLQVFmJzopXzhKKCQKJU0l
cyolIXBXLyplIW83cWVVdWctISZmZSdSZlBAQzkrLU9mLiYpIlBfc1AhbSUvK0QkcS5Gc11oL1Zy
YG0qVmpxZHRYQFVAP200WnFjUUk6KDFXKTJIWjokam0hIkc6Z3UxSF0KJSozUjRdJGptPk9BZjpl
XURiQ2IoPCEjL2lFLUNeOFAxZS9VTjhROmcpUStWRlpbM1FMPjdZJlA2MV49VEVFckVYbWFQaztE
KGZXSmVcc0FZMSZOVChjbWRiaTVXMUYpK1ghST0KJUFnRjAuQ1lMLXMkXjQvZi8xWTtsLFkvJV9N
OiVYXkUhaCtGSSdPZFtmWTAtZWIkMFxsY3JAIi4uUkZ1M0wkVkNlYFNuZChtOnQjRHBXMG07UFFi
VSVWIiY8NEM7c3IhMCowYHAKJSQxaytsXj1ZbkchIUU5bU5WV0FKZHVgUUs2TyUtQC9rWGBdIjBv
ZTAxWUw5NCxWWnBaM0xFMyxlZz5GOl5gLm9TXmdpXD09KGJYPCFKQjJOZ0VPJidWXEsmO1I8Z0M+
MjgmTzkKJTdddSZGIlIpPk9iKFAkPGo7WmlOWSNQWiJLUkFKVGY8bXNHZF9JN0hTJEJhcy0tTyJE
RWs4RGlUazdQZDE7OS1TYDBvRj1jaXQnOlBTYSg0cUJXS3RxIktzUktVQVpMIi1jSy4KJWgwVTxH
IVg/JT4jPXIyP0FQa15UWGRbYWZoIlpFIl11PmQ5XzJrQjVWRDBzWGVLJD4haT80YFctQyZEPk9B
PTxcJyFLOE5HQWk8XzVGKi5iLCc0SGVHUEJPVltwdWtPcE5oJEQKJT1JRmQpa1VwaEw4VF1hMUsv
ME1UVmE9OHQpJWVoZz9ebUw1VG1aTGkkWlFQWlgmIyplYW1TKCZTcThjJyk5UF5POFxMIiZiZFJh
TkZ1MzVRQGs7OVFYXChHXVNOJCRSV3A5XTMKJWUpczI/YW9mP3VlNmpCRGBoNl9pZU1GZ15FSWts
S25JNy4jQURoRzNoV2FlJ2RTNlE5cmpUV0A6OzR0WHFMPm5YXkFWUW9fZnEydFEuLiREOC42Tkw4
cz1yYUF0ZS85NzMkLyoKJTJPdFIkZmI8V1QrQE1LODUlPGBrYHVTYichWVItRCJmLV1LbnJKU1wx
YSFHKzc8UzUyVWQ+MnNfN1tNbStiKHVIWUFSZig9SCcwaUNVcT05OUQ/b2FFMmp0Kl8vN2pTPzIo
T10KJUNUNFVkI01PaihTLk5GKyM1LmYoMVA+ZVVOPGgjOlxhMz45bztrKHNpUCdkZlhJYnB1N1JA
aywiYUo0Xl1iVDNkV2VuPyM0ZSNGbFVmI00rRkBPTGhQI3BuYD8sIW1rLThKXDsKJUZQUlsrPzJF
NFpacicnUSh0bDxzWmw6SmBUVEFyUSMoZXJiXWM2WTZlM3BgZVUpQmtFO18sdE5XMFFHTGptXkEk
aVNxUXJPPC9fbjE8cW9pXyxaJkRBTFpcXjdCPl11R2xzJWUKJVs8Q3RRciFiYWtlWFEudTA3K101
bUdXTEwyNCg6OlxMT0o9WmheTUYxYjFmOS41U1IzMEdKIThhZyJeLkEla2Y3PHJBQFhtdSpXM0Vq
ZE1rMiQ8S1csY2YjVzoiXkxNbilZbVEKJXIrVGBdbVpVYmM1QGE5TW1wI1RfLUBSLEw5P2BlRkwh
c0tEJFo1O0NMbEY3LWpjPkZcQU1bOWlUI1ZJR21FIVVWZ0ptXC1dbHAoa2UsVWU4L1pQYCZjRilc
OUUnXFY1RlcrT1IKJTVkXywiaz9xZSNSRi43J0xZbFE2Sm9saVxLY1xOZ0MvWl5aaGo+ISpuRkFq
bDdfYHI6Jl8rckIqMkk5UWxgMis4XEVtIlQ6VSItSFZoci90MmlwdW9BYSRQZ0NvYSNNNm8rJkIK
JSkzNmIqLlJgUzBAVyRJTCRQW1BrW3QjPTdGb2NXP2YxYVhmSSIpc1czSlVBYTAhSFMkL2xORz0y
J0VZMTctWnQuTTczKUA1VUpSKVRRXTJdJmkvT1VETGdbVz5eU2dcSWc3UCkKJXJMVigsXl8vT0NJ
Il8hMT5zPCpiJTUmPVYyUyNRPShlaVcrb1dHbmEvYlZTMDBXJl1obS9SSkg7cl1XbV8/b3JwM1Qh
XHREYGNWN0RqUy1pZyhRLjpCPXVdSjQqJWlYa0pvW2UKJSJcYzxqPTkiSDI7ZWB0a15YaV88Zm1l
KURIWkJDS2U4Ml8yV1UzPlYwV0U4JUM8JmpBckM0JCRYPFRdViMtI1JwK05bTSRgQzlpYjdZTEk4
Sz1PI1BaVjlVaCxxdDJLVFIrO20KJUgjPStiQihiS3BNaTpGKEk+aFBnJzRIOnIhISVcTjRGbmdG
ZjtyWVptRCc+bytdPWslXVczQzRAMENfJClzPlJVKlMkb28kKywzYlhGPzZeZjxxMDRbVVxzMHEs
O200V3BgVzMKJWVxYWI6JWJpXGgwTVhnSUpOSENPVUoqZCRfSyVCKjtzZTlsbj9UYipJX21CTi9o
JzVJaERbWS4hLG1NO1QuREplaDMlZkduNTU4NElTYTo8VXA1SU9QWydrPV45S1IrYkA3bmMKJUoq
TVUwVHV0WW1OJCZ0KWJTUE1VZHBkLGhMXVt0cTZQampsImM7bDM9aDxPXC9sOEpQMTNHbGFGbS9Y
Oz0tUXRTbmJadUlZRiRvZyJJQmpER185JnROSTFzbERDXTYvOWlgI2MKJW1XcXJdV2pXcyhoWFkn
Pik0TytKajxmOkpOMCZoR1dfcFcvLjQqVyVjbk0zWmJwNWU3PjNTbXBOWC9NPFhAYS8tX04wTjhQ
OkFOIXBecnVRQiw3a3QuTGY8bFVVL05yQl9QLEoKJUhyOWBQOz5QSkwpRDN0US1IWlwsIWNobkUl
KSdMcWMiZ0FPaS9eX05GNSNTXGBhVikuZ0g6J0skRCdcMFcpTUpXXnBiLXNhLzJZYTlOUClnaUIl
VmA5ITclc0o3OkBGQlNXKjEKJT1fJ2xjNEBHbV1XXz0+dDM9bE1SNExRMEhRSF9yMFlcRCc9bVxT
L0YhJVFqPy5ERUkhbFYzRnAsZWg1NTU6cl5IYTIoXWlNKTZUMjYhL106SC4tVWpHV04ya1ZXSFY1
XmotLFgKJXAlKUAqXDFHWVtqKyRbYDA0JyxKKlgmMCkoajxndThlQjRaJGQpM1smN2g/Ui1pQmRO
blY2W05RPUtEQVojL0YtPkpITi8yYl9eaWswZTVyQ2ViWCRaKzlTPzYuKjFDPSgiTSIKJU1xX0lQ
TzRdbCY9S1J0b0ZZOFJbJFNGYzhgKDhuYytzVT89cWYmPDQtQ1xGJiZDOl0oXSotRTtkTyVgVUFZ
LzRDJD4kWl4mNVNkMmZhVHUoXixJRWhNMWRDJGVlWjZaWTM8UzIKJSQkYThwP0xXWEwoLDgzLz80
aTQ8YD9gYyZpMDlMXkB0SmE3QlglPidFMV1MLj5dXSJYO2MtJVc2QWpmZi0sKHBBIicwbjdaREZU
O1NZOVdCTztWRGBVTjBwSytZNjIrUkczbFMKJSZuaWpEPEQlMlFHbF08cUNAIzYhWTBfaT02XGZj
Uyw/KCViRT1aPFVIYCcwOFhQVkcmUCxDbDMlPzhPLiJnbXFfLTM8PkUsJEBOdFJtZmNSWyRdNyhr
PEZSLyFmPnUoOUd1PksKJTwoXzVuSFlvZnFYO2Z1Xi4sY1A6V0UnPGZMVTUtcmlpbFNKVHBdbUpU
J2FmclFlJCwwZDFOVHNkT2lhSjdFXjhgPlYjIVg1XlczIkhJKVtkKz8qdHFdZGJYIWlKVW8lZ0xb
U1gKJTlcLU9rMjUrQmgoOzhzUixMb1MmSi8pUiNgX2Vdc3IzVV9JWXFLOFNFRS4rZWRJQ2VPbV5z
VDowVCMsO2JQUmhYZi1WbyZvMjVYZFVINGhTcUwqVClfby5RTmIrJG0yP05VOz0KJWAoYWZcJ3Q4
O0JhPF82Zm5rNWxeaiRYOio5Jzo0TTVDKzdRNkNPcThhWzs5bytKS2cwSVVwcENQVnNyNkVtMDpB
OFk1OFBlUDljTjQ8LGIyTGUjUj5CW1pVaUZsaSZEPWBMNSsKJSE+UHBya2NTIz4zQiktR09RNTM8
Ok0nVVQsaCI5MVxbJzowN1xiSVRdXS1SRi1OREMySkB0UmVJKkBqbUBbW3Q8ajlbL25jIy9cR0co
WXVoQkY6LFpYXSJrT1RIbkNNXD5YcjsKJUtyYVBMNS4wKCk4UkIrUFE0KC4zWyxlaDdaJGgzcFts
QD5ISVRKbzkiNT9AUScrZXAkXF1VNSdXPEcmLmIiMEdJYVlwM1dmRjgmXjo1WlNqY1B1ImVeaD8w
cTFnSnRHKlEjRFMKJVlHNSJQIlQ5TlRHQWJaYjxZWkMwRUM0a3IhSi0vJnE5MU9aXilicUJXQzc+
TzxCXTpGLS9TPUlqJ2AuYlk9UmQuWHQ8LCNsNCYoU10uPGpHbmEkOz80RyNJUzpfRk8wVSwxIUgK
JUpHXipqRmlqSGpHa2JmVVglZVZgKnJaP0s8QFE2aFwzPVMybTMiNWNxXSJDLkYoVjtWVks1bllH
b1dyVTdSKSYlLjkmdEpvIWJCQWxDQSRrRlcnclZJaCo8RzRJLV8kZV45JCsKJVJXSmVDOSQzRyQ6
T2IhQ28uJTUpbjpzWkgiRF5QLkBBYl4hRVNuImEoIVRAW29XY0ZTJ2dSXyhALCRHO2ooMDVHYEcp
Vy41TiFBI25RUWMnZWcxKyQpNSxKcWNtaWgoOEZsNS0KJUorJmZcS2Y0PVEtPTxtRHAkUzNeR008
Yys4bnBRbTBEUHFHLDcrT0VybUsjQ1pMXFs0VlU/TUdPbFFNKmROP2E3PTAkIzdBXCdIc2hEU2pj
U25ncHBXLSpcM1VCaChtJyJmLnEKJSRuPjxMQDwudXA5JVQ4S25jZyFlZFckMzFVKyssKz51aWZe
MFlwVjFuV2tzWDwvNl1YJDI4KDAvNEdaYD5aL3MtNzImODtjLV5PMXEydDdmWk4mW2UqJXVSajZR
PVBBX0g0cFQKJSVfSToiMk9PZSNFJjFLLFxJYnI0bCFFZydgbG8tODBLY15ma2UhLyo7U2lFa1dP
PCUjREBGOUdGNjczLUIjaDxLLnVqc0RGJXJzWTxsJXNpW2k5RmhSdTZbT25gbDBnJWc2OlkKJVwv
XVBNI0Y/aFotW0xoRSchJmdFNiRLXlIxKzhbOUQ8VFpnW3IxY14xX0VjZj80PDloNE1XY243U0FL
JTFvI0Y5XWc0X2RsIjZuZ29SJFpPZTxFc0IoW3VsPy8nJDF1bEJ0I2UKJSNcUi00QFRFcScqbjxJ
OHAoYiEoOF9tKVFscSoucVtkQSErQWA0Tl9VNmhhN0ZhOlVaSypHNW1TazFgVi5wST9fNipYc3Qo
Z2lhTT8vUyUnUmphaE1MM0pWUWQhYUJ0L3FuYz8KJWA/KUYoJixXdVJvbidaZkdKcTZLOy1VTUg7
Yjk3cGUqYTpnaSdjPF4zW2kvLzhoTDVWRz5HIiRfK1wqaGwiSCIsOUxMVkszU1IzQyFgMWFpQXNa
aUVvbWo6bXJkTSMuIjQvLmMKJTRcZT9eXzg6Zmk2YTk/UTtMOXFfImc0UmtvTi9Tc2Iobmc5NE5D
YW5VTClbcU9KbD5xL0MkRmsqIzduLV1DRj4vaEUmOWpiNz1PVlh0Wlg4V1U7T3UqWlQ6LWtSdF4x
MSdbKjAKJS5CQlVSXkdiVmJiRyYsK21TJEsoTzgxXlxFa0IuVls2MmVwXk0nPHJdJzVwKnFGLURC
VT4tZildWjUnXlRsUE9aMGQjXS5MWUREO1EyPSlgNUxkc2g8VkJvZ3JdbiVaZUctZlEKJSFzajJY
S08iZWIqPTc+YyJfIiRVOTJDNUZJbVZlJWlONFdKJiw4VjkhTUd0akxGRmdkS3FMR18wZis3Rm5n
KW9qQGRxaW1COFZtOVJacyliLixDNlFUbUR0O0g3aSxXMGRIV2AKJTpGV3UoRzxzN1pSOlVzT2Vw
YzNKOCgmNnI7Sz8uRSFMa2xsTi8hMyZbQDxUJj9WRzVDNkJuMV4xO0siaWNQVW0/MExfMERhSXAo
R2RvKitaXSlrNkZpZCctbkgqXV9TIigzOXAKJWEwZG0sOD4pMSoycG9rSk42UDRYJmg9b11HbHA7
ZGReQVNDQStmUVptYyFVMFo2ISpaOFUjPSJgWzZZZTkvX1JnVS05Uyk8SlFhZyhccWk0MSlKYDNX
TkdTOkQwW1ZIcTZeRksKJThbI3BRKmdNaCFCUzRSKm9ONGctWGstYjklSzg2MWc/IkMnbGliOyIl
bzorJl1BQiczUVRNTD8iW2hOP0xGIyVdVi5VOnNbcVlhZjVaTjghRlVtP1kmXkw0W2FvN1U4SW5s
PGsKJVVfbE4+RV1gLic5LCJZUFckaTVCYi9oZi4mUz9KdFJfXmBoKE9ScDpCJUZsSTRwdWMjSz5w
bm5hbEYtNSU4P1JmIk1PaGczSXBATEpHcVkqNU11aE5LUyNTPEUxbl1gRkc+blAKJSY1WFZyLmBJ
KmJFTUkzI0E8JSZmLEldSVYoOV47Mj9fZEQ7PTNkMjBSbF00YktgKGNeO3RKNzsiTnBMOik7OGoz
SmZxLnJVVkdaSElrT042UjE7S1I7PFlNcmE9USQvJ1E7S1oKJVtuNi1bJWYxaTk7OUIjPW9oNWxN
WFdIIjc4bzxQcmFUZXAhN2hrUD1wbTtfP2soYV1aQS4lNSk7Pmc8QTdNUVE9MCcnKnFhOyVVZTAo
Y0BVUHE0Mk9LYG5mRmJdXWJwYD4ja0UKJTopPVEzL2dOVTxUaTtyXXIjSllyKUohQVpsYDJLXDYn
QGRzSTVualRVZVxUIShPZGY0KEdgVXFXaioqc1tfKlIhRkNGaEU4UTJxJTI0ckpzST84Wm5iazhF
MUBmMipzSGA/UEEKJUQrXyxxPz00R0YvT2s4LmpDa1tWYktvL1htY0xPdG1jJzM+U2c4QEZTZnMv
YEhqQ0Q/I0lqKDxaQylhS0VfVSIyYjokSi1lISIyITZjaylVJCQvPEZARDVRXWdwYTVYbkEuMzoK
JVsqajMjJ2UkWlduUm1uLWBfOC04W2NscykxY2EkUEQmJSxxQS9NWjVVXCZdPGNxKW8zRThYOHUw
JzktUE4zSUBQZF4yZ2FKMEk1LjNzaVU4SWwnXTdON1VqbTZbITplVUNCNW0KJStbPl88akYtNGhM
QCFpYzdzczg4TFs9KWBgSSkvWDgqb0gkN3QnPyReUVl1ayVla1xHOWUrLkMyLHVKbG5rUnU6S3BJ
R2ZJNlYxNSkoSmthN2duIjciPyUsQWFIbTA1Y0NDUSEKJS9SLSdbOCp0I0FTUkBuJ2hmcGk1RCNz
XmctSD0/TEpjSCEyR291LEBjKGNSZjlcJ19RWWpgPHInJzI3SGZwLj9uWiFnP2hHZ2xiLnFiSkA3
OUlsPkZkcVlTZ1dMMih1PlBMaUsKJVxiN1kjQERoXkY1KDRbUylzQEVbTVNgRm08KE5bLlZjcW5q
PWtJS202Z1JfNEkyYD9sWW0tbClDLC1aWTYyJiVIbUJnUEYoLWUvdURTJ1NtVE8mbTBlPiooYFN0
dUAhPilhUWUKJWcjUFFlZlBEdG5mdXJMRSlhczJiX1EoIjdaVWZob2k4S3JkYEd0TlVdPk9aJyw6
PTBZRGBHUzwrMVAiLFFWI2IhQWVXQj9uaGNbMWY+JGUoWCpwMi9yT0MmJ2BlKFwpMyVkSj4KJSFy
QjgmPjVEM2E5LDU5S0BlMU50Sk1FNS5mW1QjVTo2ZmU1YWwnNWxvam4nZz5bWTJtMyFYM0VUZT51
Ll9caG5hNlk5bE1VQF40KFIiO0Uqbzw7J0dSJWhLOlpRUl1ASURhS18KJV9FOTRDOHUhPyNHM0pJ
MyZgYTMzVCdgRDxCZllFVk1TL0RQNDB0WE4uKihmRylCQGZBWlUjPmE1LmEhJTNfWiQwIz09VjYk
Q2giLitsO2V1LGxFS3A7KmUsK0h0IzdYcjVpOkcKJWZqaUJVS1Q7K0RvW1E0a0Z0IVMxXWMwZzlB
Ry1VVCFVTCImRF1pUi1ndDJsS2shSGsnXEY6WS5tUjUuJSNUK0hKSnU1KG0lWk5zX0g/Qk1RNTtD
U1FHPU9CXiY+R11Qbk9rPVMKJU1zUCF0T0VoR2UlIyRXLU05W1g0NUdTTlA0RyZaODBIPXJETyY1
Rm1ZaltaTHF1dT0uK0BxL2JNQ1wpUmphKDxGXDFhajU8TTo2T0xARlJUN1dAQkhoOkdHaVJnRU5x
bSY5aUsKJU9vIV9gb1ZCYTFXN0MiIzJbaWE3XVEkP1pjRkpzMSlNaVFHRjwobGwjVVIwLGFvIzsv
JzNsKSUsRDwsOUEqRGBXKSVrWyIjKXMqWWU2ZDI5RyhcRCFmYjkxcDt1a3RvUChQOWIKJW9wbXJs
QihAWj5tamE5U2ZCNE8uRitdcyUhJ0FAXXJVOE8jSkIpWTxJIl5qSjpvPzZPb0Y5PCJoZlc5Nlg1
VlBPcklnU1JMQjoldEFuYmdIK0U/Yi04XCswSVA8O1pEPE0jMVcKJVErNjhFPy4tMSllKmgwKGIt
cSlZRmonMWApNFA1MCFcbUdLbS0pUjxpI2VZK2xHbTE7MixuJXVpNDkiIUdRXGVzaTYiQ3FIQUdW
OitQVDZ0SSMrLShALV1JKnIiLSxfNjIuXj4KJWhZJ0YjXS9WLVsnNGhhIVUxZ1I/LksiWTNnbWlI
LiM8Rj5lYWpNIkpNQ2tubi1gPSN1VjwiT0UyRmIrbTBtTCFUUjdaSk5cXjxSNWdSMzJmU2UjS2op
amdwbDAmWS04VnNacnMKJVMjcFs7ayU5XXNFPlZQaTo2ajZ1Wj8+YzRsQFQwczRMLmA8WzhGOTI1
NzInO0ZLS0Qwb3FEMC0nOCVnLmcyPSw5KjciaC9RPzcwNk0pJFZycTJ0QihzLmlBQzQwOi88cltA
Lj0KJTlQLEdMUyRxP1hbdDJIaSM1UygrSj1fOERUWyc9Qz4hbCRGUEY0Wm9fdTtpKGwzNUoxamJl
KGoiMjxmSC5dPUJJTTMoLioiI25sWjlbMzhaKnQkaSNTOjVYKGVVNG1SZmxIaVwKJSJJODVsU207
Z1tIUCdJPzp1JkVUTWIkJF9LPy8nWDg2N0RJQkhrJ1NPU2kvJSVSLj48TzNCUEsiN3UjXU9tK0pJ
LUBBXmxLNDdAdGciYG5hKlZwdGFFbj9tKmdIbVVdOEJYJGsKJTQ+U2AscydbcUsrQTdOTEEyYzVa
VjJ1UUFjZC0mXi0lRCoqYjNbSEhWITBgbWlVYk4icXJTcFBJLFo9P1hfPDpSLkshJ01STilOQjsw
TipyYDlzWGBuMjluOVFoYmxRb0pJOiEKJSJeQlBpJEghLVksQiI+VTFyX0dwN2VZbVc/TVsjNCsy
LjZKJV9IX0ZcM3VtKSc+b1VebSNTQ2FgKD9oXy0pRVltSF8kM0lXcTxFYkZUOC9GcFVfOlQ1OTpW
RSZVXj9QYyNIWl0KJVptQyg6MEEzbklJOFVnJlVeS0xcIVJQSVdrbj0+b2FcLUpTYTg3T2oqYmwu
QEZEKGo5QVg9SiZoVjdZIyVPW1RBUEc3cnIsc21SWSMoSSNTRVpxQ0wkR1slYkg2S0JZJDM6Um4K
JUpnJUFmJDo1LiRNXW9JdCRBL2EwUDlJRmsjJUU9W2JvIztfSlYmV1xkMUJcKEBvNG5gZUleWWIh
USwsV2ZhcVY+SmBETDNoJThYbyJVJ1Y9aTtkSjtKTCJVXGI5PjRtIV8la0UKJWw1NDhhSlJzVEpt
TFxgNSFKWkBQbmRGN04hX2oyNmxpSCZDIWZkZUJvSEU5WiEtIT9rWiFJR1psaUo8WVJPXkNMY2tW
biJEQTJfOCJuRSRJLzdhZUYvQC1pS21AQ2opWko3J2MKJVw8JmNnWktzMyZdVEJmQ28pKCwxXmxV
YnQxKD5uPGAhXzRmTlRrXUg2cTcqck9EL3VNQVZMVz1SLUpzKkt0XDNrNmAwKyVQbztEWyxIQydg
J2NSYHU2YDA3KVBvREsnNmEjbTMKJShFQElPIjFJYFJSPSZFdExPQTg4MmBnOj4nRC5McXFUbGc+
Vk5sPHBwJipHL1g+Nj8kZjouUS1ZRDlQNjdfZlZiOy8sTE1pNWBpT1xpYVVpMSRFXkZdT3ArVE4x
SmRiOTRfMVUKJTFxaCkiUWxEWko2Q1EvQzpuOz87RCFdUztAXSRtJTszIkQoXDkxZT9YXmkqTj9J
OFxnclpHcCI8akZnXjg7cmlPbCxYKiEsOGNFR0dUUTMjVGRiXmxRcmRrNiZHZ1RScjMmZGIKJSJK
RFBORUJTKFBDJUVkX1wsMUU8WDdKLCdbYWBHRkxCWiNCLjtaM0c5RGY0Z2A5JSYzZ0VlZyViRzQ2
PVk3Uz9aaFEwMkZkVFtwIWxxIypjZmNVRS1vWi05JnAqSTxaSW08L2kKJVAkSyExcVtUQnBncidW
MiF1RXUsYGwocipTbS8+Z2RBTjIiKV1EKyRtVVcvdV5uTG8oZWMuWlJCM2tPKFIhWV1wTVYqbTdX
JVhBRD9BNF4rSisyK15JV1o8TE1mSnJDUlEjUDwKJSsyKSc8TWpGaDUrX0B1J0ZgM15aTWdTbVtb
PXVmSU5aST03L00+M0lOVF9SXEhSJWEwWUhRVmJyODY4RVdtUD1cOWcoPC4vT2NdayIrRXBpMnJJ
VV82SkZnOUglU0xxZW9MUzAKJTBjKzptWmtRPTFMOkVMRCcpXkpxNnNaWEg/ZG9QTHByUyV1OjVn
bj4nPE82UVUsZFBrQVliVzsyJz0uaFUndWxiKSgxLyZTVnJdQjRjKzEqO2xpMUNVJChfJyRUL2Bf
USRpZ20KJUpiS3Q6aFFpOU4sOSxAcEpzNDdoKUo7UTU/W0pvXWZmTTgqJCZqPVo+RkQmbm84Mk1x
SUckSyJvdEc7MFBiUUY1cG4kMj5RKF90UFxJWi1MMSNqdE9bbGNxdSUjRFszNjNSMjEKJVVwXDk0
JDIncCU5ImZJRkQoOTg+Zyc5O2BrUygvYGpNam5rMVEpJHJSZSMxJk1xN2BQQyFqPGBacSZvY205
bT49ZnUmXVUpLXRCZDY6c2o6XGtTaDprVkRVRiYtNWslbW0tMmkKJVZOVj85OSVWaEgyUFtVZD04
LWE+KG5sZDRIOVcpT2U5M3IpaTxKL2Y4LHMxZWNwSDZWRlIxbTgiRG8sV2NKS2o2MS1NNmo1Zyoh
IUc+SUs3WG1GTmw1c1FpLjojNWpncjY7azkKJTAkQFtXNmgnOSM8cnVBTXBARV02TzomQ21tXW9m
VW9KWThnamVgVCRSUkdDKiVoUUNBTHVqTWM3XWU7KTUlR0ltbzcpZD1zNEU3QyRdXVlQXVVZbSVV
IV4+Uy82UXNbJ1EjKCcKJTlRYVRDUFtkTkQvXltycDNJTD9UKS9pbUpNST9yTFhCJlhXUytGSDhL
XERnIWkoOHE2MzpSZkRxMXI4dFpMQk4rck5YRi5SXlFAdUVLVW1ePXNyRihuQSsiLUReYmN1R3I9
KyMKJUNdTSYpOU1jWFRxLzUkYi49dCp1JyprcTRUbUVySycqUHAlSVFsKz0xSTNvYiJUZXVTLXQl
Lj9mOiNgXmkxcWItRSZdZXVcUFNrTSdYQSo3bixBUE9xXyNxMGFvKmpKKFEvLkgKJSJUXyFcXmQp
TiQ7WUloaGFGZHN1QVYjTW0uOXEmVCFZJjFOXEUlUlNLSyY7ODhFSz44Jk1fXmRxbU06aFRGamUs
PmY4SGNpSmdbMCxFaFouODpTUTUrZWJSTVMrbTgtNWU8QEYKJUI0QFtiRF5UKl0qWGcmKTszOjtg
IVYwaTZAIU50MDxMcGJHTklbYXRRXTooN2s8dXIyZ0ovT1dgVmxoLF9TM08yKiklYiNFdE9mSCJf
ZzxmUlojQmEuI0UhV2NPIztLVkMxQjcKJU1waCNaSVozJ3RVP1A1Mmp1bnJXZHJiRlM9WitRVSRD
Lj48V2AjTWw8TiZuNGI/QD05PUo4WEQ3OShqO2kvNG5RMCJQQCViUl1eXiFVXzhFRHJXP0dpYFQ7
V0wzN001SkwhTiEKJT5nZl5VNCJqP0ZkIyNvZ2Q9VGRqImhwcUo1Y1U9ZWRqO3BsR3JoJzJAY1ZS
bmBdSVdOUjliKk02aTg3QE4wTzRfLyM8Uk0pM0knI20sakhHNzBmWT5wO1c+M1tnUkdjYmEjZysK
JXAzLEoucklHZ2QqZjY3XWNCMmdsUS1zcS8/WS4mPyQqZDxDLy5hLlRjUj9GXGshKV5fJC8ibm8m
LSQhaFppQmBkbik9aEwwYFYvKm1MV1xDaERCVjFuNnM1ZCRNNXBFRzF0aUwKJThEJFshUFYhMUU6
K2NcOmwjKzpJYF8/QFFITG5GMitBcl1WOyNaLC0tWFxvdEAoaVwhWUZKLERJOS1Kc2pnITNEXTZi
PyVvcVtzQ2VUY050L2QhUCJaU1I8XyNNUScyZzI9a00KJVxxM3NoUWFVN2YoMUNXLVl0TyxeTz07
LEdDLiVeXkZLND5yUU1BblovVWdOSSpTLlZiS1siX2ojUTonLWBlT2p1UnVuV1YiJ20raGBhS2BT
OVhoYlEtcSdkc0xXWS4vU0pzXk4KJW4iJlhvMy1CNVJRWlI3NnFWaDYiMGItQUk6Sz9ZKjRubzA5
PWQ7QFk9WmZba1pGISEzRUI1JClKbT1nSGgkcTk9XkwpX11IKmk5bGhxSFIvakphT2omTlsocWtt
UlJkSjdhNE4KJUQ2UiIrPUgvIkRkMi4tNUJxUClrImNWXFdmT2NyX0hyZzsjUyEvSzQxJiVEXnFd
cWJWKDkmMjA5Sm4uXC9YNUZgNXNLI2MkNWA6Q1phYkBbNkxCQ3BUMW50bDFENWh0JW1YMF4KJSR0
cDxGXzFyayk4PFA+U2QmQVktNj1DOkY3WDBQcDgyKDRtP2s+YThFb0A1UFEmc08lM0FASnJfND8s
RDFSXltIXShcNCdWZTEkSm5bSm1TS1tNISpOKD4/V0VZMFtVRkEsJEMKJVBEX0NtLiY6a3E+bV1S
cFknTFxCWnBeaVluWVRDWGtEN1gjYnE6SjZLSlA/TVpeOks5PVVYMmckITVkSzRmVlkibGcqI29I
TVBiWypOayooZUhAQlJQMScsSyhqdD0rXzQ+SyMKJW4zX0A+UGdDXTEhVGAzNiRSNCNZI1NlXVND
KHJ0V2RdZzpIViVJSXAxYzkqUXFVVVBzIzg2SUxdLERmVj5jUkVsSW9fbnApLi5xKlM5YWVxbGl1
IipuJSojQ2pLWWZYUktQJyEKJWRYUTguXUgnL28pMElzYy9uXE85OG1ZOUJwWDsmYFZmLTNvSS9K
Yks2S2hAJHJeWjFPIkNCP0ZpR2NNXD4/NTdOR25BOGdFLT9xMEpcUUM9XiwtWjpSaDpFbU1yUDtm
a2MvJD8KJV5VdW9oSlIpcC1XajJZdDtLOW1rW1JBZVcmTTpuQy1uViUoQDdHcnVgLlVPPEk5NXI8
cF1NX2thcTRhImgxLkRGKTwmOyNqNiJWXTVKJ25APU8kdTFuQUI0IzhrOjdAVi4xR2EKJTo7XiJb
SURPO0otZT1tb24xdTpOREBCRWdqSitGJ1ZsXlszcVttZylhOlFnMCgzJDswQUxYRDFpRDJKcGhW
W1xrXCpHR3FqKWxCc0FsOipmQiE6VWRuPipsSS9bcURbOCtBQGsKJURyOV1MLkhELyJRLEtzVChD
Pz0mL2dccUdcT3MpJnAyPjQmPzJpXlBVPyZCXSc2ZUMzPEBBOnVKQGljQ2s/SGRDXWVbUDQrUTZs
RGglajgqMW8iViUwKT04YSZGVWlQO3RbQm4KJXFWUD45Uj4pOy1dWV9sLDg9PEdHSFhPQUdVZCFm
QGdDVkoqSS9IPVIsMT0sW2o9ciJPSmdxWEpsWTNxZksjZUJ1Z1pCXD0nTTkkRTcjV2FNN29YLk46
cywsUkE+UFEwMEQuNVQKJUonSnIsXiZnSywjIUgwU0oicGZwLSNDJy4tPiVWb0xwYVsqVmIiL2xN
KFJcRk8zZTxPQ0otWVltbUBnOWNidWFuKVRMPTxNdXBjcCguMVg9TTFnbE04S1thXlRFY2RwIi5l
RU8KJTs6JWA2WiZjTC5lNGhiTDtPPDhwVzU7W09iZFhXcDkuOUJFJEdwJmRAaDtda1VNdDJyT1om
R3BAQUBzUVtYVjQ+KyllQlJmWytIPFQ6NFF1TlhIQTlESEQtITNvM2BLZm90SjIKJURkWTJkTUtJ
WicvYmxLYFVRWytXbEZFa2lLRjB0cmptb2onY3AmdWpSWFlPbEAwLGgtYGw9N0EwWzdHY09BNDVS
XkgtMVRcV1xZYWpcc2E0WDlEMSpHc0FJPlhGcUJcaEBJaSYKJV5OPk9QKkAvN1s8QV90RUYpMk07
VWRzcTgoUyJnPT1fSjlvPGQqak0/TTtlN0VOSWtGI19DLEpnPDRuTmlsUGhfaj07Oi9FaV5mJm1n
bC5VRkY0KDIwdDdgWixzXWFHXC09LmEKJWRISDcqV2RlO0xkYFxVL1NCbCs4ZyVXSmxzJEMpWkk8
ZzZNSDNaSytPbEZoWSpVOUxJJ2wkS0xvVnBeITtSbCJwJFQodS9kQSJERCYpZ2gtbVEsOEplPVVg
LCQvajBMQ1FjZ1AKJUx0YTwkbSFoKExdTjw4bFI7T2VNRkFYWE1qJllvMSQ3Jl1YZ0dgOWo6ZDRW
J1syWzlaTCcjKU9mUGAvSCoocFFFakcqOSQyUUZrTzFNLl9HQWFQc2FcKXNPY01VcGVvSD1eODwK
JS0icmtrYTFlZkdqak1YVF9cNzlFRWRdYF45dU1ecyRtUD1xUylLYE5ZWkcuYUFVJmVvYV4jOFQp
bGtVXl4lUzY2WEQqT1Q5b1Z1dVxwYWFzQyVUSD1dY10rYGJIImN0ZTtQRXUKJUVISSNGKWcoNz9U
JmNTRUg2RiFOUy5HNlVPNUpQWlEmUy5lPENRWDA+OUw1SEVqOy83NnVRcDQmPzN0PlZQXV0vPjAl
cmo4TTIyZSk2L0EvJ1UoXltxP2YpOCdvIVY7OFpiQ08KJTc/SGYzbmpxUWwpL15fXj5rbStEQ28k
YTpDLUEnKGUnVTQqUyQtVnFOcDJkI1lLS3AsXnQ/MzA8KWU9TllERWRRPVBQYWg9PkJfTTYpJEMo
Pko4PSlHNTwyQTpdWS9NJGpbSDwKJUNAcFdccy0/czpyXEVZQ2QsZCk9UjNmJTkzc0RHam5RWVpJ
NSNfbXBxaTBDSDIoa10yWiZhIkxjbFZIXzhRbS8hTyciQylqL1Q/dE1YY0E6NGBQOSFZUXNiWVI0
QjUtOC5kQToKJWM/Yl0tWkc0LjJWW2hIYExGb11uLD5ZY0xQaklHMF9QRTIhVyI9Ty5SLDEpaWVW
Y2dhcDpNMmVeSTB1KSM0VjM3MmB0aDAqcWdlbW9JWiwtUzs0ZilodGo1OWh0UENGV05MNGwKJSsv
bW83WCgoY3FNS1YpT11hYXU6RWgxY2xjISJwXCwhLXJcNEpdU2cyNEZpY1ZMdUMvOTZUdVxrcjIv
OUJSbVR1RzV0XE87O0AyZj9sM1k0NDxlbi9tQCVcKVs+OjJMKE47KGQKJUpQTSxrTzBDRVRCIkQ+
JChbMixsOiI5UyQ3P1BSNFo1WmFSLD91YUVDPF5NL0hmSFhrZVJSczRhamU9Wj9sSzklO1M0ckcm
OCZCKzpoY3FHP1U5XT1yZmhJbVZeZEkhTEMwK2sKJTBuS00uOlFYYy5pQzxlb2toMyE+VGRzTWFC
QlctW2MvRFw9biJObyxRIlo+aTxrXSRlamtCNCxoayZjUGNsQEpTbyw6TyI6TTtLXkc5RGUjO2Ui
V1FgJzdaPm91OUAxbmlwRiEKJXIoZlYjXlZgOXEkXmpeb0AyWSdXI10paWZtNDhWViJkUE9nOXM3
OklAPTpfX0Y6QV46XE5vRTttQUg5aztgRTlwSC5BQ0ExIiY+ODk0bFxtR1QhZElbOkQyOkRDMSo6
NElCMUkKJSJyaV4lVFU7TlA6LEMpblZYUD9YXUFzRyNjPVdFNldcS0FqZ3VNQCNAVEsnIzpYcXVn
O2xrJyMiJVliMkZsTm9zbCNdPkEzWkY6a0JxImdxRFlVLlBwa2Z1JzAjUEA7Mkk7LVoKJWpvXy1B
Pl48cUlNcTJiZ1deI1tlSjQmMGVoYW07XGJxImJKKlNeQERANVZsXVYyb0s/bmIwUi1KdTEsO1Un
W08rX1BOU3BkLmNfTyttQD9LLi1zJjFwKmRORTFNVCtOOm1iM28KJVY4SDJ1Y09uXiZiJ19LTVhy
XUhXZi0vQjcwJT1gK0ZfTk9jQihRXTpNO1I5X0RwPyRcQDFGMzA3NGxKMm1EbjNlQVJTTkQ5KjtR
Kj1eJ0hZbUYkUzxDdEJGXEcuVFNeJFFJKj4KJVhsWl9NTUJLOz0tOio1S1BMaFBvLWciUmtxKm1h
XVpuJyokIVZZKGRfQE5iMFpEcytWLDAmNUZFOCIzIURuPD5aKm5IaTBeSSFMTWp0UEdXSUJmLDdN
X1NWWixycyVJSjVIS2MKJVZOcjkqRlFkLmsmWXNZR1I1RjE/JlMhZ3A9WzZuJXBCL2Y5KG1yTkJe
UW9JWmVBZ3BLTkZpW1RnMEdXVHJWPV8vaEQlPjViXGMhYC4tc25JcCwkZS0zUVtiYzh0a1ZeVyQ9
Jk8KJThUMEY+Jj1qZCJHVj5IQSYwRGw5TT5ubnRtMztcTSZYclZOI0stTiJnNCovcFZFWTRibz1h
a0A3UVVTP2dAJFtJJTwpKlZwQUI8P0orbTFEPytmVV9QUSZuQ3Mqb09JSixcdUkKJW5cIWVcaVA1
NEpzNSFQL3JuRzNlajFrUFJzN3VdajVRQFgycGVVcVNURG1rWGMxbDVXbDJVT25ycWFWMz9pUmFR
cnEuP3NzNjFWVD02bzwvczUqYUVmNTpRNGxbU3M1SXNJJFQKJXIobS9QNUctb0xKLFBgNnFkb1VX
XlVOLCNMWXItTHMtZUU6OCVKSkdldGZzK0FZRltYVUxEcj1sMCROW1hoc2kocjgidT1KK2NybE47
ckUsSE9QQ29Jc0JoUilHNiZUJGImQWgKJVBiaSEtVCpEXHVyZ1Frc0lnMSxscT4oTEFiNDExKCwq
YiFPZzNsK19mcF9ZQmBDY21pQDg9Z0lQJF5VYUo8cGwtSTEqZCdoLGUvWC0oX0Ynbi41RyUyOyFY
R1JtQiUsSlc1IiYKJUdJOTMpKUtEVUw+b21iLDBCV3FuTWRvSExnLCguQyltbm0rOmNiMnVqbV40
ZE1VLVFUV1kuODVFaW5AIiVbP3Nva0ldKSpnW2JYM1RQTnAsZz0tMUFhPW1cQVUxN3U4Mig6XmEK
JUdqKXUqbzM5O0g9dGhvYzFXalkrXG5HVCQxUFNDYDp1cj9YWnJhRCgkMUJkT1VTSHUmbkppTFQ0
SUw9al4hQl1YIXVhOzwldENyQG9CJD1CX24ucj8jSiM6OkVoTk0hKEFdZmoKJVorRiFyXjQ0Rjol
RD86JVZyYS0vZ01fKFpgJDZuKF5CT3VjMj5dYko+ZmFmXktrTjhHLiZrNTYzLjUuLyVOYEw5Pi1e
VTUndTQvZCs5LWV0cDNaUTcqMWZhbSswWVpzKSZlOyQKJUlhLl1XUUdtJlUwRSRlNlpeKmc6JnQ8
LyNBRWpOdUF0Lk44Ol1mLSRrbDE9cWA4NShmK1E0YDg9cT9nTlNANipLLGgtQWUvRCdRXFw+JGZV
bT1FYyFuW009Nlg7LGEzLU1AaSEKJTpfaVM4KCFBcDpdMEdnamZsSFtuIywsLD5yZSRFLm5BakFp
cDtjJ1okUyJZVkIlKS44UVdwdStaWmgyKD9sVURgKGpcc1InXHU6IjpiPm9mXlJQX01wSjA4SXBF
dDw1YipaIm8KJTQ4aGFXKktSZUFGKjcqK0haMDo5MmJmXScnKFYwW2g2clhFb15jWWJNY0dZanBB
P2NDb3RhLl9gRUItVS9nUEIsIy90QmNDak9eVD5uTj9NQU5hc1w0KGkpXUghdW1kLl9QUWoKJTxl
cUMwcic7NkkvXTBFdC5qS11DcmFIakFuTlNkKDouOSxMZFFEQGFqZUpuIkRNUUwyOVo0OlY+R0gw
dFYwZUQuLEEsKFhMbDlcaCojZjpSbTBXOHFVMUtxXyg/TiZiNjdhMycKJWlGP1EjbjBjR3FFNj11
czJDNkFVSG5kUC1odVNcJFk9WVtLWkskL1A0Q10iQko6LGc0L11OYUJuaCtjRzFdPFNHWW40bzdR
aCg/NjVNIiZAKEYnSTRISDdzUS9wMjMjWVpKX1MKJUJsc0lLQEUrRVdga1wia0BZUzlCU2NYYWRL
a1JsN3BHakosMkxDR2pwXWtwcVxiImVnNmFvXklkajE+XjBxJmksRFcwTnNGTUBEUDMqdTFBczFP
MWBQKVoocDlnQktKRFY+T2oKJVdIVUomMkVHb1NDKEouc284V2o+aCg/IjdSUGc2PlBsZztCYFBk
WU9hbjFzM05OOC5EQT9nMG0oNEdvLis5KD8/QHJjXSFDdUsjTyVyPFdBOy40SEM4cz46c2I/USFJ
I1VQcj0KJUtYbjBqR1QmRHJzKzw8X1pkTDg9QzxSRilfPW1Jb2JdVkZVLjpZZ147Nyo8U1ZDV3FU
VEJZTGcsOztzSEluLGVLbEklPlxcO1BVY0pNZ0Yqa141UFpVPyUnOUpGR00oUz1OdGkKJXEtJ2o3
bVJsXl1xJ0c+JE1BX3A3MHInQ01gbHRAP04wOWtsXVUwUExgI3UwMlg9dSNjTE4zYHI8T1hxNlxb
JV4kXyFJTyZwRUFbP0kmQEU2S3I4IUcvVzg0ZEdBOGd0ay0qLSoKJW5jX1tNOzFndWBaJ0FsTVRQ
MiQ+bUZvIVlTMWprWWRbWmA6aDsocTM1QSpwJ14rSVpKVl9IWklDSDY1RVZwL2IhL1wvPWUlPk4x
P0ZObVxXaFYpYlk1NjJqIywob3Q1UWw7Qi0KJWMsSCM8YSYoMnMuYjtvOVFmIk91YzJENSpfQWBw
VDRMN08rRG5nZG4ub2lgdUtzcFFnPjM6Ll1MJT1pRVlJP3RIaStCa0xucTVsR2BWOFpXTzdEVyNs
b0EkTFo3YGVIUCs9ZE4KJTo9VFNfSEdiTlNgRWdoI3Fkay5saCM0TDNzMWYpam41YSV0YXJBIitt
Lzw8JkFkJGxRXFI8QWIwU1c4WDFBdENvai43LWJjVDo8TzM9YSIiREhFWkVPZj0+MClWbG8vNEIv
TSsKJWRiIitQWyZUVEJwQlkhc0NOajo0RUdBKz1oOHJHUWVLQ1AiS3BmTVMma1tqSlNoJkNjZipQ
LjZYQHNGLid0c0U4LVEiaz5RYSUnTi8uLz9gNS0pTks0aFhzbzhdW1ctbG05JFcKJTZGUClBQ0lQ
V1NsIl8qJGBFY0RXKSR1WiM4OmJmP3JBZSowNWtiTCJSS15DNUZcMWNcRzZBWFdyWWsnPD42TWQq
TFpUOFlGNTFRbU1MVVxBJ0MrPVVEbkdCZCUhbVM5VVpwOUwKJVAtN0llKDJfRklRWiUpJThpKyNj
Y0k2NCwoUWJMQUQ3WXFRV0FjZyZNaTE9ZW5wJENsNiZITWlYJlhlQ0U6XFVDaVppOXFoO1lnN04v
WmdpSiNsXy1lR1RUVDJvJTMuKWckclUKJWtvNmlrOmlXKk07PW0zIStJSG9hZVZSWSVQQCJcOy9p
Q0xRY1IqLz9eWG5YMlJOJ1JOJmBwM2AuQlFjbS5HRGsmQ05yYT5jdCY1ZCstRkNGTTcpIkY2Zm5y
WzRYaU5MKW0zaEMKJW0nTy9wWDg/XicpXUg/V1pBYTdUUGNAXzJyY0hYVSRqXSk8LyJpLykubDkk
I2stTF8zQ20sMztta2o1TzAtSFAjX2VSIVYnQltwJVRNKi1sLitabmE3XCQ9MmljRE9qUXNDbmEK
JW07QjVUIidiPnFkLy5mR2xxNlcnOi86KjIsZkc4RCE8ZVtXP1gkNFkha1hAKk4rSFoxJFlhJyZv
SyUyN2draEkpSDI2N2cqLWQ+Rmk9L1ZkT19bN0VQLk1uUS0mMlZBcUE/PHEKJT8lajJEI11NOF0m
YlJKNkhbTDxSYmw3K0hBO2ZQcEVhJG1eYyRmMWxCQk5VNiNQM1UjWikqPFgmLFtAcFEvaj1kcGgs
SmRcNV1sPi05cFxVLE0ibF8+V2xYZlVQQWw7YCNwZjQKJTNPMyVZQEZlQS83WGhGY1FzNVBKLFkt
MVA/ZlA6QzUwTz9NWyFRXztlSFRyNj1PRjlCNykxVEtfKThUZDNKZEI4SFhzbi1YR2k/N0MoYjBR
b3JsKUxpLC1nTUgkY25vUk00U1wKJUFVaCpHRSs8VmxNX1EkXG4pRkJncltmNF9aX2U9IWJ0KFpj
LGZkRWtCUV1aRlFFYkQ3J2JrcEZVSig2ZyRPP0VFSCEoJidWX3RZOGooPDdTMiNgNzJDJU1MayNx
Tl9kbmlSK2UKJU1XRUwxWVMwWFVwSGU5MV88PkwpVVtxJnVAOGBZVSFrT1hmYnNuaSpNcT9DWkpI
Iipga2A6KjRkVkVTUyVhSUJzWjYvVVhyUFNkc2hpImdAcSovOWxBNVwxKEIxMkBgN1Y5bUAKJU1x
cSpPPExlbSNScz1YPW4rSDtPISQiVik0dG5rYCRNMFBjVUdSPjA0OV5GUycyb0QkKiZGXjtIZzYm
Ik5GKS1qJUE5Iz9oJF9QdEdmWT0xcTZSNj5AZCFbTmpDO19kWGI/UDcKJTAoXC1WWUkhaDVIa1Vt
ZzQ6TGdwYzwoK2Y1J04xLlFWXCEpST5NcDNlUnRfbSIkczNJOVFhTnVdP3ApNlklRU0+NiRARm5e
ZTM7WiRkTidyPCc4YUNhLnJZUEZNOSgiL2hXKnMKJTghX0BRbzpqPihNOUstUFM/NHAsWjN1PiMl
LTpiZzEtJlFCbkE5ZmhHQj4tJENyP1JoNjFdRGorTz49Y2Y/cVE2IltJYUJKLy8/USE0OmhKMl5p
aU5oTEgyRjB0LEY2PilSVz4KJSUiUC9PYmlCZWk8PkpQUV86Zk5vcCpacGgkYGxUPnE8WmtMRWYj
OnU4XGRdUzlmZSVIOk5kOmcsbC1iRXJlQVU1WCsqVy4iaGdjMEBaOGRFQipqSFpGTXAuJWxMcmRf
VmlfJjAKJSpHL1ExJVhBdExBITVYKUJXYyc6OEZZKSQkUW49JSlyQUljKSQ0R3I/QlI3b29UX1Jp
bSgrMWRrdCpQWyQ7OzVBZmUiPzYscDEqaTk2c1k4W2kxNCE/MGo3akkwcjJuQmVlay8KJUJ1Xlwu
UUsoaGhnLURSQF5EamRGQWdHW2hhIWNycylcZChtWSU4alYlMjMpQmo5J1guVythPDJiJjFkJkQ6
WksyJ2MyPUVDZ3BlOmFSVEo3PF5xWFdHJzI/LD5WN2xRbF8/WDQKJVopKF0iUmVtTCZEa09sclJw
YVVYJlMlXSk4LSpsIyZdQmwyNUwnZVAhXm9xLi9UKmE8Py8xSzNlV2JOQ1MnZTJePmtlSChFIz1f
OTF1VjpJKnUhNHFhQCNmOypQLkV0T1g9LGoKJV8rUE1vYWNtMWJjKSkxJFROM2ZOXi1dZXQhRGoh
ZTpEZGtWNGtrMjEvcFwtJDBiblB0NTwwLCRCXkZIOj5oN0NjbiNMaWVvbFElZWZAMEc4JWFsRD0/
UFk7N2I8YyhSSCk+RWYKJWhlcEc9a1RaVzclcTxPIjstZkFXQTNXNG9qPCY+Y05NJkhtP0FGOj1l
SjFpVS4qJGBIXXIoSiUmb1slPTIxNUo3RTdOTHNgMGFYVlIwRkxVNCQ3TG5rPWgtWHEjWkpXYC1q
NFEKJSpYcDYjRScpRHVjXSU1IW9sYV5BaSQsamgzQ2l0XzlyLEIqQGNlRUlbRTRJVVtkNzE2cSIs
ajktc0VtaVVKTVg/ckRNYEUhaXFuLyZgJ09MbnVva21lQTxKM209KF4hYT1iYzQKJUVEOEMhJiVc
SFRYXUwwVVVyODU6LnI5JF9YI0pzJ2tnWSVzbztRMClGXk51Plgqa0QsRkcrPzpEQGNTI0RpPTQn
QCpzN24sWW9OdWZDRy5GaTYjZS4yaFVASFlcbUFSMScmXGgKJURSLk0rMWZZN2ZvLDBxLEhHcy1X
LydpL005LzkxdSknUjskMysoIV9OcTwiaStRNSdZbStyc2otWFFATnJXXiIrWEQzSGxIa0JacCtX
WTZEWFtxUW0zXFwvTighZER0UENVJTYKJTU7cDQmWTFub1oyKVdzSVsyMUFEKCwnL1hKJ1clOElJ
IzZRcTYiT18rZyQ+Q0YxPSo+JjhxMkBQNXEzXFBSXyJMNUovOjpHazpDMzdBW1BdT3A6bEFKWHU8
K1JZMVMxVkJERUgKJS43RU4jPiZvN11cRyVWYTFfRE4vIl8rVG89Lm5Gc1U8QXMlPGFXIWBYbm0i
W18nSFsxPzsuJiwxN241N14tWC9qZWJNMm8jTSQnQSxCTCswRTFXdWprYDddWEJLVVkoVDwpXDkK
JUpdIzYtVzpzaz8ic2xKOyhYbWZoNFJAVSMoLVYlTjAxJlp0R18+OCIpOldRc1dKajNDOj5ZK0sj
SEZgQkNqclYkbT8ubTAtSXFYUDw+SWYtNFZhaENGZXRdYS1kUW8vaU5EMjcKJUheb0dsYml1PG4l
Vz4pSVkjOFkmSXJHcnNbSl9XLV9EN15rcT1MazFnalFdZkBhJTROOjNxVjQldEkkQmo5SFNrcnU5
JnQuaERAZzE/T1dLJVpmJ2ErbGgubkVZKlNSU2EmSS0KJWRIQ0orcF5CSnVbPStoZzUxblYoWGF0
LTlSXUA8JU0yKCs7K2E8dUBmcSl0YzNcOyFnN3REVzRabEUvZWFdY2hzLm0rL2pkZDMvJCZ1YFxy
Qm5ZI20xIl0vLWJwPjFuQWpCPlYKJTt1KWBDXUgsWWpeQ0BAP11qXCIlVzRlSlxuOzk2cl4oQy1m
Zkw0Pz5rMDNdJkViJVY+XDIoMz5lNlhRPyRmUXJgXjlwREAlc1kxN0s9XF1Cb1tyJjBAOztyTVom
IyU2bWk/M0kKJV5tdTtRL0ZFJ3JQJiN1Jm07NV5kKjc+IjpWUEApRG9HbDtTInRqTSpPQ3VQIWwi
UTNCaWcyMW0hSjEpcl4+KjpibSctOmlBUHRCYU9ZQ0Myb0dlR3JNTGFkRjBAKEZKQzBxKXEKJTc9
V1JvUDptLkNlT09RUVpDY3JWY0pJUDEuIip1bTFOSU5yPFZjQmkmTCwvQiM9WzspcXFIdD80bmE0
TCY4SDViUG8mcj5rVixKKExTVkprZi0zJVVAUS82QVVSbidlXykhZjQKJTUpT0c3RnE1VF5JNT0p
U0YjOmg4UChkPmlNIkE1akNOc1lTRTZoQSlLazMtWmZVOCMvXihWakVATUxNImkwXElgJjBCV10y
OUI9Oi9JdCszKDtLN05vYXExNmpdPlQwPlRWKjMKJWZEWzJeVyswSkxtQ2wrNUxob3FbMDQmLXJX
YFlfJnFlc2xnV1BZUjEmKi1rKCZdP1ZaQzsnI21WPEIwV0IpbTxrQmJLbDomNUJcN15dUSFmQmcm
NXNnPEhQJ2NOIjI8SD84c1cKJTRKWWM4ZT1VJz9LakxKTj1TOG5lWTtOQy1PRjc6TkoxNDpAW04+
bm5VIiZWUk8wIkEtaF1kPGQ4bl9YXV4ubzI8WCZZRjdfRnVOJk0tbE9SNmFSTFRpRjhtWFEvJ0Nf
U0lhM3UKJTtebGlQJiFeaU1PU2ZlK1Y2N1g9azYhV1toKj0jaEIuYWE9WV5WIjxRSnMzN2FVY2kv
WnUzKGRtYylhdFJGTlQhWDdJImRiQmgxKTxobVhNTV9vOSUwZm1uKDQvI2ElJShnam8KJWsmXyhr
bUxmLU44SXFWTmpUXlhgZzZYMHBFaHVyZDc1TE5LQl1kT05rL2tZKVpvZE5WSU5GRlVUbUVnPFIs
KkpNI2tBK15fLnRXLy49ZCZyOWQ0amg9MXJWLi4/Zjg4YXJvX0QKJWVaLWtdKl5EOTc4W1Eqa0ox
WkRVMi0raixcaWVSR25NWzQuRGZFL1gvcSFmMilDSkVGZGVDQzhpWUhkLGROPVNdNUQ4dDUsWzA+
aFVLZm5FPERrM0IxbGotTlJrVDduPWFlKkMKJS0zZyknTklKYCtSQFojKGtZOHVUKiJ0VjVEKTI7
O2dtYEsiWDBaV0ErIjVJbm1lSyUqR1hZZmslXE1TOD1hRy4yXWJxJV8wWT5mQytpa0cwYTVfcERX
KmhSVjE3VVsuLidFNzIKJTgoIzA8bSczWmNUPXBgNU4kOmM1PnBEKFUtLkJMbkkvZExmRHFvMSRA
NzZiOFJeUm5scjdfbkNkRz9mPlo1XTpGLENtaSFWSDEzWDRFZTxSXkd1QW9GVyJGJ2dDYiFkb2Bd
WXEKJUQoWENAaDtTRCFtc1lJUVQ5UzRbY2k8IlZwKS4qc2tNczkxSEBHOl9FdWVUYXBwI1ZFUnM0
a0hAZmdFTUZFb1JhZi9QdCM2RzBnNzw1Nl0/T1IxLlRqLXEwJydPcEVAOm9ncXMKJSdCWUsyPFdM
Kjc/TDdhNSFTbWk0U25QZkxQJm5DbyhVJ1NQailqMm8iaWREcy8oI2VHK05SP0loI1YscTdYR1py
RiQ8TkpsZT9qWEs7Jz9YTGJXN1FLO05ETVI7VWhFL08qOyEKJSMqcVArKlRFLFRpcTZSLUYmXjxM
QydvYj1AKzw2OyJqLCE8VHBiPy1FQl5DNEg4KlRnaG1Sams7SWYhOFRScXNqYmpvYEQlJS4+Ol4h
SXAwTiwqUV42aTFqcVFkJUBqMEs6RUcKJV5KNjRSNyNBUSVeWUZNVjs+TzJUQmkrJiNuRVBdMTUz
dFMwaSIkQCI9ZzJFcTM1cWguVmpTRHVAPXFJXCxDLlxLPic1V1s/dTRfKyZsRXU3cU9EKXQwKFNV
bk8vOHRLb0VOYGwKJVtlYyJ1OiQvZWwhY24oXCVvMU0rRVwtO09hNTRFXGY5Jz4kSmtwb0BeYFEi
ZCplQz5tYiJKIV1rKGEoQiVTb29mRTZDOl85TUJRJDJwbEIxIXRvSTRDbCpXO2FgW0RbWSxpJ2gK
JTVqVk9YOEZBSC0nKWB0dDxhSmsxcWVFKjQ7X14xMk1uNUV1PGRKUm1nS2hzR1VwNU1LaEMuVjJI
W2o3VVBvWDdmTTtlYW5UR0RhRF1xXT5KMXFCXV9lM1hwU0tPamZBInRrK1sKJSIzUG0wZCIxPF1w
RHImQi8nLyVpQylBMElRJmAsWEMpRWxERG1PI2tIXjFUZDpIMllCZU9mOCdOa0o7UkFvWyghZC9u
WWspLzJHIj8wSW1BcEkrTGk5MCZTQV5nXVRHOyJkZmwKJSwwPD5NRzpDLjtsWlNIQVYnY0VfVmhJ
NURQYEtPYDRkW0BNOmRRMTQsOkJTRGtrOGZBX1QubidCPHE7KjlESFcuJmQrQmBLOWYxOjAiTWBE
M20lbjNUTShoLElnZFxiMmFTN2QKJXIoRSw6XUBkVCNkTG0talVXYDZvLDhiOCxWaTlrOFZFRy1l
NyRHImBSWXQqSyNPN1U4K2dSPXJmKTBCQ3FHXkBXRztkJ2RiQTdKJGYjYiJgUz46VCYzXENHQ05y
SGdkI0k9VSoKJTcrPCFDMz9dJXM1Wz5HUl08UCYkSzBvbF8oUnE0QE8/Pk5sLFVUQVJSW1EiLDgt
XU1qPHJQYysxVSopSSwvYTRxIShSY3EuKVghVWArIWkyN0ghN0Y9SDBKTmpcXzR1QnMqRVUKJXI3
YzoscCFkMl5pYlVWP1U7XV5RU148WlNAUkJbW0VPSE1PXFJvJ1hyLVdOPEdMRDQ/PDw0UDxfKEox
YisuamBAWSFHXGhObHQlJE06JTBEYygwViZBLU5UT14zb2MzUDYpTkEKJVwwW1A0RS8/T3AkY1xm
Lj1CYUNRP0ZGdTVgRkMua2RCWExtTSxGMS9pYjZWOyg+WidyYG1tZi9mRmBhaUdyXDA1MHFYNCxW
YDt0P0ErJzBUakFZJGpnbCsoNUw0MFlOZFY0KEUKJUc2VSNiN2xtVStNR1J1JEEwLyFfZC5gcGJc
aEo5KCw9TSVTJitGbmFoTUZOZSoxSDcjNlArOis3blEkKVI+P1s7N0ZdbyxualFJY25UI04vUm10
VEExSS9FWkVRdCRHVjgwU0MKJWVPQnM8YCE/UUs2Km1uZCF1PyNAOldFI1paZ0NqSUZyPTlCJz9P
KDg1V2BWSm02JjhrLkE2U2lTVjYtQiJiZCtrWlZiWi8rTWwkMFJPOEItLGFfYW86Ky5lUlgxUWAi
S0MraSYKJUdYSkJWJSsrVUYyZzg1XVRQXiomPyQjPFJlW1tXSW9GRS9qVF1oXFVRSWdEKDxRVmM9
KT5uYElZN2lUbS4lLXIhK0UxS01xRkExKiJJZlw/LSZkTSUyJChPamk4JFwmJHFnL1AKJWUqdS9U
SDpRVy88ZmYpXUUhNEFYWydMYFxbUlhrKS5jcXBHSC5eKEFjNjJtT2pxPnQpcjNxSklrX2tSZVxT
XiFQaCVHTTpTJzJhJWZxU1cyP2tkTkgiWD9KSjdYUVdmQktxb28KJTgjTVF1Rk0hTC0xa0htJG9o
UFc5b0InX2AlImFbY1RYJE4+ZUFCM2JjcGJSNzlwSGRJJk8sM1EuInJrMGIkPS1GRG5hQWhiOUJG
RVs1ZjMlPDdbQXIhXUhkNTIuNSlZKiE7QDoKJT1GOzNBRy1XWzRGckFRaydNcTtPJnUvXC0jUF1Q
WF5HPV9ZSy5lb2trcScnW183LG5MJSUtayMoakQ4cy1GPTFQRCZrL1ErXXMiI2M5IzctSWYxWyNq
cF5jPFgmPkVjUTxOKEkKJWZjX2IqV0pVLEtHLEZDVzRHbVhFRWlmR05jZlgiSkksJjEuLi1hdSs+
XTNQJGwjaVxBYDkmOHNdXD5HZlpWPD5rNm11W24pRCk6VTZ1XzIyYVtgWy0wS1ZMTzVOTkZoTF1f
NCkKJVohMGYhK11VSFU4J0JocjcrbkM/bChvU00tNF9wNlNKJnE3LC1aP3VUWWZjPFd0OkU2K0c6
OGtCSFhAXEtsVCpBNltJOUQwQ3JfV1UwTy4qNVpCV2VxMzchU2hGamAyREFDTVIKJU5GUzo+S0Zq
QiI8Sy8rRVsiK15VL1ljXF9wbERgNz4kP3EiW18sVj4vMXEmQWBfcXRkUG87VFJkXjJPTDItQnRP
IkppSkNAM0YrRjhCS3NeMjloTDlAIjhRTCZlIyQ3bVZrQjcKJS49P1kyISZHTSM+aW4zIjRJKUVF
VkkpJDZeRmArLFlZOyd0XHElJTZBY08tJV9Ya20tblFbcFhDcTo/KlZgbyNaK1BAbicxKG5bSC8r
byVpUHBsSFIuKW4+YCFZOW1QQzcyYz4KJVNjXm1OOnNBNmwkT1ZPOz1LMEYpckI/W01CRSokaSoz
NVljTlxdVTpAYkYycmtsQywvWEQuMWwzUDVZOWdaUXQpSlQ+Jj1tYV1uaWZKRCtCalcsJDYicSVa
VCdRNC1uOCQ4VkMKJUxPN0dATypPJkwoQCMnZEojVSpASnNVc1EqYlRBK2RNVCInRjBBdHNAOkBx
NV83OiJMTWUlR00/dGJxY2A2amBFb1AiQkdII2dJLHBBPDNeUVk2ZlIsZnVmLGNgXkYwWmFdOkIK
JUtxaDZKKUpPWj9bOTkwWFY0XlRCMHJZcU8zU2NPKEAqJy9lKl05by49JDlHc0hvMHVUJEdOSXUs
ZTlvX1YobTNsWm9EXSdEP1tzLytvKVRhLD8nNHFQWD1pZSFOIUFgLktjck8KJS1nUylVSywwSDFV
JFFjNjZxdVlFSCxeVGFpJEsjU2NWNCYjIWs7bEtZUV0zRTRCaShWaV0kPWIkZFkoQDhKJm50Pmcx
TU4zW1o7LChHJCRjUmEyXVgzSjwsOlZVVU9oNSxdWzYKJS0nc0slX2hWUEE/ZGVDXzcnYWJPYXFA
Qk9ZcW9WLlBaK0FpJFJncVk3bS9lLGtVPUFNNWNrWVY3dGRYTGMlJTdcTVBgbmdRSjNRVGVEcU0h
M2VTS1BiOk9pPUdIRUFJKkAqKGkKJTVbUkUrNj5nZVdkQ2REcmtlYEZjZlNEMyonU25kNW1pZGco
cVVNNi8hM1hGaFo4JFYuLDJHIUhdVXBHWU1mZG1QO1QpXm49c2M/aCg0cVtEMUR0Yi1VZXNqJWBD
IUJJSTgpSjEKJWJiJ2dVMj5GYCE4U3Q8bykrbClOTk1IVSMwJ1tBbnFbLDMkNVZkYD9LIyYtbmBP
Nm02WThdVSlrdDtvLTJPLCVwbHRqUDU/PnVvJ1xjZEIyUFY8amJccUY3VVA8UWV1WmNXJjkKJV0q
SUlXJU9ZUjRwKShSYitSOSxwYzZ1TlBdNSFPKGRbM0ZUVFFkUSxCJTgsWTUqYiddYlIsLy4wTmlF
S2dSVE5gPEFIQWxgPF0ta0lsJ3FMLkxRaU5cLS89Pi9uU1tSYGYqbi8KJVdwSnMvJSNNaDBHUWhJ
X11UOiI8cFZQJ2k4QT5xVCMsL2VUNCNKbV81LGVyL1FPNHRjbSxILFY3aC9iIzAkUUotbiphSko+
Il8iXmlSdV5vTHAhSDItNWcsOUUiL1AjbyZeOysKJSwtJElHZ1ZqMWg/RHQuY0NeNzV0OUA5PkU5
OWklZWRrbyVCUStfYldjLGIuRkpuLj88YyFXK0QiTyM9MGJyX25gKDVDM1hSUE1IRWZGbGdLJHVd
MGQjQ2ltaUhOTmY1OkVZSzIKJS9OPypFKV9FQmlcai48Z2oodUJvVXBlcE1jVU44YyJbcXVwMWhA
Tz5GJVBUKTM+Xi9uWEk3KSZXQDFCRytLOSQ0TTEoc18zPzlSbjNLNicoVlAjVC1Ca1ktXkhkK2NG
bE5DOjEKJS5vJlwxJSghOypGM1RpcShKcGpLTSFAMGlgLSJrZCZiVE8pY3VbWTlDcnIxSEhMXkkm
ZzFoTHAnSiM+U247X2VZS1AsZCYpUm4hZWdCM2ZMZVNUPGFacVdtO2VKcElmMiVfPzMKJWcscS1q
ImVlPTwrWzc6aT1dPGRnUCshVHNOX3VzLyZxPCo1ZzBCPzlUUGEobmMnRikuTUpLOEMxImcnKkNp
KmNgU1EqbnRcWT87XjdDL2Y+ZVVcclIxOEBWb0BgYXNsNDFkJ0gKJV4hcVljIW1VbXFkJmJOX1Zq
R0R1JjhKLHVuWzNLUCIiYCNucVtHNk4hcWpENzQ9PDZHRSUlSGxFZ2RAalIzbmdfVzIyS05GUVIt
aD50VD02LmhjWkA2IS5SWTVyUT9FMGlkMSoKJWBpX1dYLHJDOCs2PV9fI0JYRVBNQlptJFVEMCk3
NmFfaVIyXC81SUtgLzFQaSk5Wm5hMyMqJGZDLGlTOWw3UENEOUclcUI3Jk5aTThuJ01QITdbIiYt
ND1dS19DPVdwRVMiNjQKJTptOGlfXUhDRCZTaWoiVCc+PTM2UyEoYlhjXWRgXyZsWkM+SVBNalE5
Lko3KyJqZ28jR1hIKi4pK0lfMkFITS5bYDZWYDkqRzdhJEo5WU1ER1dpTStTYyF1WVtqOm5xW2lH
UScKJVVxWVlLZU07UmwxZWhpLFxuU1VoQkwyTGxlPjg6dXFVNFxoajRGWCEqTE1QRUVSNVxWJygn
aVBpSjVKYT1YSyQuSWBCX0dBIWsxOy4zTSQ3akVyXVUjUE5QMUU7UWVmaDNWIyQKJSZic29GRGRO
RDRwP0wxdFgmamQ2M3RtdTtEPlFQZnI7SmVTOSxRX2M/aUtsWUlYPzNDXS9mXGByVV1ZZzo0RSMq
aGBfIzxjJjlkQWNfI14paSczZTNvI2l1Ik8yJTpjcjlsO0IKJT9lKW9eZD01JEFtWEk6cmBRP3Fj
clZrZ1EvY1JpJHJwbnJSXiQ1UmxrMXQ3WltwVC4icEBkTSVxUDBDLEhALFBeZ1shJGFdPz9pKWRw
S0pzbiVRS0hJVzlgMU1aPFtGWDUwOEcKJVJvXkU4bGdbT1c1PCxnPF5OakpzMlRHOFcpbCUxSjpX
TWtRZjVLUE9eOD5OOCYoW01GYTZpIVNcKD5PPjFRdVwiIVRXUVtuYidsZWJNVi9fcnFHMFlmXkJs
TSRtZjM4YGFpLGYKJW1jWDYtbUU3YFxoSFlHKFhhYlJeaiMxTjAjTWAsOm9wYCFAXTVSXmtGb0Nm
Nz9KVSJOTk9MWD1EVyY7KVRyLFMsOzZUU1xRZiRXM14zWytXaWNYcVpUbCdTcWszX1IobyYiPTwK
JV5BSVY9W3VKKTJIRjVxOz0wTSRScSY6U2E0UDUsXDJyQG90P0cqZ29rXDdfW103L2BnWy4tPiEo
I0pnUTUicjs0W1opVGNVWEZZbD1AVzthb3VSL1ZwIz01RUFDYTNsTDk2REEKJUxZTC82Z1dRWTRJ
ZXQ4XlpDT1FwXUlRJjppaDc3ZmxKS0FRZWIrLkQ/QDNLcjBiNTJtMENCQWFycSJbalJnNVNcUyQy
a2RqbUxCc0RyLzNTOyZmSzFrLldBPCdAM1lJXyUuMGcKJT9pSzNZUydzJE4hbjMyXzBSXCJoOyZS
SUBocXFcUDUpTiVnJnQoWEVHXl5vcC47aVdSN3U/SVkxJlc+KGwoYzE9UGFGWiwmZmAzcCVsTz9d
YCdiNmU3XDlJT2AzX2dlbC8rLE0KJV40KEtRbmBuYFhEaCFZKl5HaDopUz4+XUpyTS84J1YiJVJE
TT05YWctRjFyWF1tZlM0MmpgRS8/LXIlKScnWiw0bnBCb2RKLCYvcWY5WyFnXUZBNSkpXW8iaT5q
SUkzQFZrNEIKJW4pJnA7JiRoaUlcSGFlXT1PN2lwbnQ5MilxVlUzJWlkVV5aam1NTy9gVS5hK1Jk
MCpkbmdGKmsyY20zPjRGW0twcSpDdCJLMUdYOkVrXUdxTlFscyZhbHQ/bkIoMS9hZSxNTmkKJXBp
L0FncyQmYF5fI0NqIWxaS25VSXM/PElaZkErWixPZl5BWHJrIzw+ViNKMGdZSCxsaCM+TFNeOmRs
KHEzMURsNFQqYSJoVk5eJlorS3BTR09iMVAyV0ZQXiVzXiNRSURjJ1cKJW4pKXM2XHBYL1ZBXTcu
JFpvQEVia0IyN0ZsTGdYKTgnR2FRZHFaZiJGT14vWiZRLkU3LGJBUDdmViJQZ28uYSdoRWdRWSFs
SExEKFkzci4nKzNDWWE8MzIqIVVKPVYhZiwobnQKJURuXlhGcS47LkhpVT03SmNpOk9PXzlvPydn
UiVzTUY+YF5mcG5IIkI0bVdXK3FGR2lwKVgociIlcG40cyUiM05MXCpTPSpbdCI/KmhRXS9WOSlu
Jmk/ZE9TY1NaTz9uWU1SbUoKJW5XQyRUW15fWW9YVCJwTnBkX2o4UztnYkNFZDMyOUc7RWBZVTc0
ajNiNGxLZVBNKy8qJVdJWC1wI1lxN2YzYHVSbmAmXExnQm9WL3EjIV40R0I8WjtbbDtudElzbS5G
Q2ZpPUgKJUxpMjtmRnNWRjgvR1w7anFJMXN1TFxwL1JMRkxHSltVSU1ebWU1WVplXzo9N24ldURq
NnMhbS5uJk9PPSFYKjNeaFdSYF04NlwrU1QvZGVDYT02c0phakEpcjpBRktPcHUzWV8KJUE4R2NO
Rz8+JWpeOiwlYXFZYz9MWWxFYjo8a0pecV4jc2YsN005VmdQNStARk9UJD1ITmQpZGlxMnNabTJy
W1IpbCxPQ05rQmpcJzgpbCQvaEY7RChLMCRPOFVbYWhTZkFCa0AKJWxuMlhPXjNdQ19Nc24rTEVd
Zk5BaElpXEAiTU9jNDpdSj1tK0liQ2pxYFotYFxAV1w/M1clJWNdQ1Rxbz9HM3FfcnBvWFZFcV1q
NXMmW3NCYVNyREZCIVYwJGVwaExHPjVRM0QKJV4iU1A7alBJc1xIT1VlWnBOUzE1YWZZUGQoVV9Y
WEcwXD4+KGJeQVBvSDAtLVluNzw4YVxVX14wIzxfZ11vQixscmFXYkhHWWhdYDpAXD9acFxsJkFJ
cjhiZlIrNk9cV2gyNzwKJUpILERgWC9oVF0vKUVLPUpjK1pxcVlGVjBiaVJzZENkZSJBajZjVGwp
Wj82dHIuJy5iX3BTS0tuV1sobD9fLHMrUyFQNFJdOz8iRSs5JWIhTyRyRyciJ21AMCshX0xhOlxS
YDsKJVQzPGNDRyl1Km5YUEU1ZjJcNWFRcnF1V1hlYEZiPjQ/cVNmPTguJ05IaSpDJCYiInU/aXA1
LGFrYGQxXEMsUl9PPmVSXUVya1ovIz9FdCZRLExgbmBuKyE4bi9vRGQpNyhXQlAKJWBPaCs3KCl1
ZzpxK1cubCpiOktuMSdcaWRMQyVlc0g2WF40YWNhWUVcRmxhailCMmlWSTdgSUlaNzleRUBKUGtn
LmRdLiIiZVZKLWApYDMnOFQjKzIyVEMyZktAbXNkNEMpJSEKJWdQQkcpPUZvYXRbczkyWl43dXBr
N1w+Wzw0RiVtVm4iNmQuYCw2Z05RKUYyIilhPTMyNCRtY0dwY2xfMEBPaWhoa1FsMVNacCNkK0xs
STMsN244QEdeJytgSShFNjY/NG8pKTMKJSRHaCx0VVVRQW5zM1ZmMF9VL3MpV3VVTHJgOW8hW287
dThfVmQvaFFYU3E9IktmSjUsSDojP0Jib0dSZm1JblQ0bVtHcTpJVi5vW1dONjhZcV5fVFMqWTQ7
O3I3c3RPVzJKLiwKJWZaO0YzUXJiaFRBYSlHUl1PTCVfMnFQaCRPOyUsZyFvR19CZzVsNS5JTURV
ZTVHXyU8TyVZaEcmVC0vRlBOQklXXFU0aGFyPS81WDx1WTsiQG5PY2RKTj1EbFJuO0pWLGRvdDIK
JTJSNCNlIWQsYmIjOD0mWmdSRU9JNHMoTWZnJ0g9IjYwWms1ZFVqUXJWXiRJSlAsR0ZFVmlDMFMp
XiVvcT1jJjMpNERpb1ZpSStESyl0PUFeMEBUUFhrJWFac3JHIVMjRWtFUDsKJXJlP1lTKi8xTk5P
R3RnN2EqQyQ5bmhgTT1RUzM6RjllM0siMWQtbkBfJG03I01mQ11jV1cpWDI+NiIxX3JISjR0MEJz
V1Nba1lHVUZudClMNnVwJEFqNS9uTTtAOS1nLGYwamwKJThXN1hTLUNpXUY7KGVBVTldTUtIVl5I
PCQ5SzonZDMtV11iMT8la1BuX048R1MpNj4lXCNzJk5cR1lMU2VjIiVFSis9VmJKVWBELHJFaks/
IzQzZENaTC87YkwsUUBJTFxeW18KJTZsJWkhXl00NW9qYGBZZW4jJm8mXzRCRmskQGQ8NE9VQi0p
Oi0jXFBDW21XM29NJkxScD5YSUFDXTpoZl0wO25UajdCdDopaTpBOkVbXWYsQltkJkFIcko4MFBP
SjdibUZAXEUKJT9YQ0oiJEU6Vl9gcWIsUDFKJ0RoQmNAX1lKWmFgVmhsWV9pZCk0X2VuKzVWbT4m
NVBxRj5gW3FZT1BZSjUvLTNMRnBwaWc4aGZRKD9JPFRvbDkzOShwO2E1Sz9YRWQmcVZwPTQKJT4/
VC1GNT5aJ0RFcWBjcT4yb0o9MWk7O3JFRlYxOS86VShBcFxAb0JxKFMjV0VaRGdHZChxbkwuSkdn
JlhWYFhNb2ZBKzZiKCZvJk5gTzxPVVF0QmxbTmU5bWpZbGhrQlZqTSQKJWtXYSw9al9xPDlxPCUp
MWJORypyVWVdcXUhbmwiTDJCcE1bYk5uSmBeTz5ZKGdXUmNBP2R0UWpxKmZfPFk1V2I9YFYzRm1m
OV5FaVxAbTpbYFFkZWhYKllTNysubUBrWGlMQlYKJTd1O2ApMHMtJzNVZ2xeLlVPW0cjT0gnQ2o7
KmF0IyFuKSJpbDsjJz5vREIyV2paV3I7RipWJytuLy8jXERhWGlxIy8kWzVzKVtsKG0zPkw/YzVX
MS1OVkNJJzBtZkc3cWErRV8KJUQ2MV0obCttJWRdWHQtcFwzbz4qSGkkTmNncVM1MVEmZ25NIXEo
aVZAWzpUNWNZbSxFN0ssaSoyWUtWUmE0byslbztGN3BhRnEsOnBRS2xlcjNbLzZba1hsVGFQVm5F
cSF1Jj8KJT8wL3FBTzRvb0c9NTBJXCgiJ0lnbEBrS1U3cj9GSGdOKGZQbGxFSS5bY04tO2tHVzVY
NF9lMl1nNlZKQkJIRUhHWUNCc0QrJ14hUC5aNkotW0pkPlBVPVVsaENhUFY5O2NGVUkKJU8xbG9m
Ijo+cWJSWGI7NUdFaUpdS1o3UlJGNHFULWg9R3MiLT49XF1BOk5CaVcqY1BtXHJDREVVak1uP1Uz
OGE9ZzpYdFFJLmVfcWEscDMwSUNaUGJoWWNwJ2xKPUpeLEl1S3MKJTlOOlA4XT1mZUtoKF1VbjQ3
Vlo1I2ZZXFc0bWpibXBmZ25vLVxVSnRBYSZTJU1aNCFyQUsqWik6T1VIcW9jdWNkKlNWXjdJPTAz
cjZeaiRMQU5sOWRDYiVrO1ZOcHJgZGBRQ20KJXAmJCg1YjFxM1k1Lkk8UlFyPCk9UXI8JzpVal45
O207OW9TKWNSPE4taDBCT2M1V0lMIzZbKmZpTD1hbD9aKzpsakRlOFxMPCEqQXFZJ0whTDxlPklC
VjBlRTptJnUoZTlJYT4KJVkoaEJJPTBUNi9zOD0/PXI4VkBGWFx0LUsjU2l1XTUmQWUmcC8vdFQ/
W2RAbGlvWTEiNEpEZWIzTlt0R1VFIkI/Szh0ISFGcF9PdGplMGs8MW5VWmMxaUhtZjpRXjo8MUEl
PV0KJUZIaEU8RW1VXG1odWglZCVvVjhqT0ZGV3QibCxsZCYhR2Y6TjtgNTJaMmE0VmA/VHVSaWlu
UjhsWXVfXUkhYltGVzV0VGRCZSxDX0debzZvOXUnNkInMVlkNCJ1UUYySEAsUF4KJWdbISRhcFAu
LT1lI2hETiprLl9Zamg0UVJJIXRQXW8hYylwczhCQnRLRFY3ay9TLSgjazRbcyFLPEJJRjJBL0ou
S0MoJlVrMnRYXUReOVpmJTxXc1o1K0cuJVIvWjI7U10zRF8KJTkyQmtwbCFKblZvSWxoJm5zJm4y
JjdpIjBdSCNoYT9MNk5SPmxYZmNDK0dFZWYmTStYUSQ0L1orbXJBVTVERlU+KmFjLGlGWyleKERk
V0w7U3Q+Qm1GcTM+IzxXdF9JWzA+Z1sKJWMwaj1OPyFUWldSOSk9U0VvViIxVz5rWDA9dFh1Jlor
b2A0N1FtXDJmWko9UU5QVHUqSWxBQWdwa1VgNFEkIzs8SStaRlEqaERmWnI9QHFgb1M3PzsyQThs
MmUvaiksZztBQ0cKJVFpPy1AKTE0OCczbUQxU0Y4L1ZCb1xuOC5kbDY4Xm5gYixaVCtBRUBTSTly
LihJKjFvblI3PVJaWV84MHF0alQ2QWBRWVpbTVpHLnJeND1NMkReMGxdSWIkWHI3LTVLMk5CUUEK
JTsuZlwiQVlyOEZaRVlEc2w6NnUqMkEzYiMsNy4nYFJFXFRCM1hTU09VJDk5byovMThiY3VddGpO
XkQ0MjZJU0M8Xi40bDJLMG4kQUAvWnU3MkVwKytbSihuLmdZJUJFYiJdV0AKJV8jNl8qVWdxND1r
IUVdRmBJUmlAbitGcDlTWWNwSFpdQnNsMk9IMlZcKDxQdCYiVGExZSI3J2FOUDoyLzRSP3ByU0xO
QGUpU15YLyRLM3BjTDg9N2teTzFIWFoyNmFtUy1qQjAKJWg4U2dJYytBSllOKyxoY29CQnJkNTBt
O1Y5Mzo3KUM8V2dKYSFbYi9IcDlbaVJeMzBmRHMxbEpZUFw/PFViOj5BWGI/TTZDRk1IMFpKL1Q/
NjkqUVAlYiFuM1UqJG81cDgvMCwKJUlcKjssVlJHQ201NWRYJUwyWi1KckQ6YVg/MHQuS2YmIkUq
cC1zSXM4J14vW1xvMihGaDIoc01dTy1ybl5PPF9eWWtXazlLUl1zOW84WUJRQDRddGZObCRwcjNj
XD02cmpHaTcKJW8yRl9VNHUiQWZCQkVTJHJuWWkiYj0/TSMpa2UvSGoxXWtIKnVYZ1w1TzlXOG5r
IU11NThpMWZcVlY7U0UmWGAncmsnbFhbYFlPdWc0IUhAaHNHOHBcVzU0dVtvNjMnJC5bcm0KJV9B
PkQ0KVozVzpJRyxZOkszKjpKNlwiIS5uQWkiUyxCKThEQig7WzBDa0ldP25MPDMkQVU7M2xbWyFP
TmlAM08oLyxZJ3VDYiVsJmhsI2dDS14vcixha1FLRkhja2MrXV9lcVAKJWtQb0xaZCFQLTowZ1dw
SSZzUE1nKWs1Y2NoXl9GQ1ZqPyhXLnNBJGo1PlRDIlBgZC1bO3AqRDxeVXBHKCpmQF9kLzFsPkg2
X2pacFc6N1JQJlxTKURaaSk1LG5BQDFvQ1FIVE8KJUVlPVZkMFo/MHFvYCpiPzAxMzAmOFBKQl5E
WjZQcm42Z1VrYltDXlklOEdtbm4iTz1PSjwvby9yKmNBW3Jfa0prcDFDVVs2UVFgUTVNTSEpbnJz
RlhaXS81VHBDPiU+Ij4hbjoKJSJoRl9hZy5aaC9WciRIbkVPbnEkSXIxSVpJX0xoR1QqQENDRHR1
a1NBPW4xbVtuIVk0QHFgYlxhdVVUWmRkZklqJWRBSlNdbTA1SFJoKFRtbiRkNEJtSl5VUnBbXiwu
VkRmO3UKJURyb05cOUA6XHFxdCRUNClycDBQMXM0WEFoXzIqMDRcPlpzNEBwWGssWCFaVV9zJ1gm
RW1SdGNwJGpTXW4/V2ZtN15EWHNBLU4hMlRCUDA9aC82Ykk1Ok0wUG9AImI6SykrRmsKJTo+PEF0
IzZdPGgvPjIqR2VLLF8vJChwLnJbUkhxXUBzUERvNURlTEhyNzFLNTJWK1E2ZixvMmcqayQhP1hq
aTtTcyxuVl0pPmIpKk5oaWU5cyxgcyJTIWgxcTc3W0AvYy07MF0KJWhIayRbPEJPOyUzVzdzdUI+
PTw7MFJXUHNZP2s+bSZWOlc9P2lCdWRlSz0rOTMhW0JpXnFbVl1sZFtOY0xRZ1ZGVmdcJyxeM29c
R2xeUjRPZ1VAdSs/YlpPLjNxXF1xb04xXU0KJUosLmpAZ1xPLWEzI1M9LC5FJ11vPVVISixzLWAy
JlZOTVZrOldjPy9UPFU7UXJ0Qyx1NTVpW0Q4VFFXX2JJZFYjcicwaXJzMTpVSTQscC9RNDhqWy9e
WiJlZ0ZvTk8yQjRZRT8KJVpQO0c/M3AyKG48TDIrIydqY1hLb3MtdTtRWTVuSFNlO2cxYyNMaldO
UkpSWlNFYTFSNyxbdG8vOVI/Q2cyMHM/NnVbSFFRSipwQWUkRzJGRz5LWUE6XDNoI0VTTCYjOkBi
T1oKJUNtW1w+WEpAVnBcRTlkOTkwUDdoYlVxI0U8NVFRcT5PJ1BPMS41aFwoUFxGMSFLOEkhVmBF
QWpPXFZnRTdLVVZDSClicj1sIVMiKUM+NipIY2RGZCIqYyxbam5rPEYzNWdDVSwKJWlFOT5SIzBY
P0NsPjM7TFxodWlgR2NsQ1lkRDdPIjc2WGJIWE1FXC1tblVUaipwMz0yMFQ8VCVUbWxDKkchKExF
bURKXjhiVjxUPTBzQUZgZmVRQDEic00mO2I8Q1NsOUFNNVYKJWEtYiNLcCxYYHI3UFUzJilgYzU7
KlBvVEVWZThBUVkoTSNIckpkImkvQ21PVjhObyZPM002Oz1nWF9XIl1ZJVp0PG0hTzViJTIlYW5t
blBLQEBCWSc1bz00OEEjU3UqTCNER08KJWEsclNlUVxQT15NcCRtNV5ldENuLkA/RUlQZGxAVmM9
XCxUXnJzVVhRR0BmcFAwQGBJYG1bJCQpXUxOai9DaGAkJ0RQSmFFUVhKdCdxRTAqMy90M0wkRCsj
RlxgYmRXRU5wIk4KJWE5T1pUSTdpJ1ZSV1EmWk4zLmBDTidWV20/NkVDP1xlLXFNckUsIkUySV1H
UzQ9MzM3M1RXaSErXCdbbFF1WiJXZS4lV11bRiJYOFsiRDhDOnNOckE+UE08WkZNcWo0bFpwVU0K
JUlfPms4NzZASGdBdCUyS21JMltZJzxwYmVXKWpIR181Jjsob3NnOzJhQ3BwJ1ViMUVbLTEhNVlD
PChFSGJQM1V1YkdfPidXIzlNYlI7LjRsYTsxSU8iSTxcL1QhSV1OKU8jXVgKJWRDLTxnTidfRkVQ
M1BjMVpYL0s3Tl9jTSdcO145aFpJSDtCU0xAPl5XY3JVRHFjPlFAP05pZTcoRUw9aUw9MlZgPWU2
Y0VhX1FbQDtZaTUyckArcSY/IjA1UTJra0oiVktxSEYKJTpSc2ViTG5WUnJGaCZoRm4sPylFal1O
VTRQKFBqXEExUkhzVUwqUzwuLzxhVVxeK2gvRV1vJnVVZjlHJW9vLVJgXC0qc0wnamJMRCJiQzEv
IWBmQjshMlFxUWkpbCRUbW1CaW0KJV5kNFRLME9lUlhsNzo3ZmxLdXVdZVwuIjZtYU8qbGRbJzNO
VW84MzJoMGYkIUlBPVlBYjtTaW9fIydMNSssLEcjUSdyRVQycUshKyJzcEdGIlpfazBSXSJTNEdX
PyJPJmItXjYKJVRZIiYjUFBTQFUyUGFeU21uNWZLQVouaVloUDBrQ0t0bSVfWGgjaUg4OXQwTkFg
PS9WNS1fVVdcUGgwPWt0a2tcVmclUEQ/KUUrI0hQMCknNlFPXkNwI0huYUJIXy05SFdlczgKJS83
WGhGPD9IPk4vJ29SYVo1ZC8/UClxcSJsTiRhVG5HcEkzRHRaOkNDQ2hpXEAmdHMuMEZLaWsiLTVM
bUJucmRqUUJ0R2QlXz50VC5DcW9pV207UWI6RjNiMzF1J0QnZFIxMSgKJS0zZGJjK2pXM1A7TmRX
NlM7TFhmOmBsXDxxX2hHaGNZYCI2Vk8ocDZBNWlZakFeTSRKRl9UdCNvWS9XQXJBcHI4bzxHOSE7
IUMzalRHITciLF82RjU6R3BEaDonXEpHRktZRTMKJTlnZjVESSImMHNVa1A6OixwWi1AIWBpSyo4
ZUVjPVVydE0vXWhhTEs5OiNIM1ZZWChQM0hKSzYzS0RoMFldJ2o/O2JYdU8iQjBQJ1VtWDplTVRo
RDw8ZCYtOkxTLDlOVHViSzoKJXEsdVtzTz9lN2RHUT1fJy07PDMzVykkLD9IOnAnbVpcJWpcYSdt
Sl0qdUI2ZFdJa246XipJaF5fYjpSQTlsKkVWbC4lNTZSQnEyc0w5MEkpVCQqT0pYWipqJ0lnVTNK
LzVST3IKJVs6ZmVmTi5xIlYzZihZRmRLbWIvO2klXnI2QSJNSjQjLDJjXTtqPGNiKCxGM2goNl9x
Tiljckw3VFIuWC0lLkonPVcsNE5ROFZdWz1DOU4nbnFlTUIrTmM9ZklqdSY2JXMrO0wKJUhpJzYt
KHAiakQ1NkIiUGpHMjwtLUheNWElU3ArODtyPiJyT0lVUixgNHM7LDwvR0JtRjkzSlFAX0c2WV01
akonXmpBX1ZaXmNZPzVkUk8tRUlNJzM7UUJBP3EkRWQyZGtwQnAKJScicEs4MShVLGkvKEppM3A1
PyolK0dWMmhoRzFDXlx0TC9NKjNRQTpyVyFdcU8jV1lPamxdNHQ3M11hOVBjaTprU0ZqI29fP0pN
azQ7UWUrPDsvUWotKSFdXj8/bFEkXV1dWGQKJT1IbG1mIixQKVZuRVZgLDw/PkNvSlZgQ1JGS09p
PzM2QSk8cU9IZTUuKlZfPVlqMWJKRUIuV1YoLmFsIzNuSTc5Q0ZXMz5OLypwMGxAYUhJKWJPYGNg
UG1DS04lO19OVkgkaDUKJWFjbnE6b2RVPiI7MFRAKktAODBmUVI6MidfWE1JKS8+QnBIO0o0Syos
YFtsRlhCOXJmZHFUPS1IOSpGOUxnalktVkU3RXNkP09oYks5T2IuaStWXHQtbVZ0ITRhTyM8bT9d
KyYKJSkqbnFNYFxgVFE4T0RDIUsiYFhVKFQkI180cyZBR3FJRjQ6PUdhQEo/TTJEU3A0RDQqYDxI
ckpCIWUsbDs5clssIVQ5LWZsVkpJPlI7U3NpM1FxZDdSO1s9XTJzXVViRzMjKFwKJS9CJSc1QSko
PyMmcE5yMS1lWDghMmR0W0NFMTVuOlI/MUdARTpPL2ErP1VbZ2ZENTRMLTk1cTI8LkxMWUpPdE9j
X0ZnVjtxaDRINDMvYitfIUA0Yi4rPjtubi86P3VEY150dS8KJVY3YF4iZUVbMklNNDYsKDhJREFL
JkpYQHQobkJZRjwwLkVYSjBdNlpiaipYK0xLcmNKNVVwPmtfYlg5XEYsSFJOSDRoUytWYVxSX0tg
aHN0OXBPcT8mQDdzZz1TVC8mZSNKQUEKJSojW2BLUkFTVFQoZS9wUlE7SShOT0JhWHRAbyRzPi4t
amMkPD9FRmkoWlBQXXFcJzRKJHRERmI9XF9LSy9KSDxLPERvYiMhTEs8MEFkJ1BDV2puKSxUSihC
ImIhTzNdRi4iYWkKJWxIQERfJWYkbjFodXFXZzpXdEs9LTUmaGdeRis5SlFpWUgjNWBxPlw5X1cy
Pm1yUVgnYEhwYFoiXys7UCxQLm9NPFxAaSlpYFdmdT1ZMXFWRmNjOUpOTUd0XjZXKTZDV1JTREsK
JXIpI0o3TSQ6LzxTc2hXTEgjMU5YY1hza1JvazRaUCc4NVA0JWQ2YixvanQ9I0hmREswPXNBZV1l
WlAna1xUSylHIT4lNUBsV2Q/KUJJaCtXUCpBPFYmSFEyPkg4VUBfVFpJc1AKJUsqbW9RJEckSiNy
I1o/Si1BRC9vJVZRSk8uPFc9QlEsMkwlOVsoVi83QkQ4M2NZVDFBSHI+ODcvMXU/ZWlqWE0yLDNy
KT1NJmpPSCNhJCFcLyoiKCdraVM6VzBrOzZTLipTY1MKJWAuQz9gaUwiLmk4PS01MWRzZidgSWAi
LEwpOyRcc0VrXmoqWmFDaDozVSQ6QVpsZllsKDp0XVM/KENeQ0E0UVM2VjVYbFxHWGwudUhwSiNg
VStBKzxfYShoPFFWUSVWVHEzLTYKJVY8QjBtNTJdLyk2dGFNUmpsZWo5ZUhBUDNaMDZoPDwvTU85
VV5GQSgvZG07QWgoZzkzV2hpTk9LOy5WQzxhXSZHSj5QZT1BRWU9QVBBOSI/RDQtJm0uXDEvci1D
LjZnQT05PlsKJT00V1NFTy9IMmsjNCVabStjNWpQOHVIZnE3Im4yW2cjUzNKJ1pEa2ZNWmNDbSJG
MHRcMUxrZzlLTDZ0UTpySz43XywnIjNuRVBgX1g6LlEwTmssay43P2VGXDxvTmU9YzRSZmEKJWtL
cFhDUmdYSF8yIm9hWDBqNGdtbkFqYT9gWWwoI0BdYDBhYDpbYCMjLilUJUE2cVdkQ2U6VzwtRWRi
SjBrYWRJXFlNMSs2I0tDW0VBSlxfay9aSlM2ZFgiRU4pcGpKP0BBLG4KJSdKLC8/TmUpKjs6NzNz
JDVfUkVuOyVsbyo/ZCUxJzFhSWxCVT9cTzNYSl0jWGluQFU7JlMsWD5JY1ZHTEF1OSZwNWdnMUNL
ZCdyOlhzL2gxYCFgLlo7RTYuYDctc1pgSFVGajUKJW8tMz1COHBVc1xQVV9yS01oW1w/MjI3aiw1
Jm9sISNrXDlMYUIjJVZjT2hmMz1PTDtZblxRSVkmTlooPFxSMHFFJVZSIisvYl89cTRFcVVESTwt
NFJDXyhnKSUjXSF1S150MkUKJTQsTUhqKzF1Oydma180LkpQP0lTTWVbcltkTF5QM3FIV003ITdf
UyosJUpxZykrKD5LQipQJiUuWUMsbmBIaUVkOGAvY2lOSFE2SlFCaSRyWUJMaUxITG4nSjEmOTdc
OjdYb3AKJUZIVGVEMEQpYzNFKE1nPDBTcjMuTGM3OnAsbnVecTxOPCo5PFo2cjAuVF09NDsrPUVY
aUhXWDY1Ij0+IVdtJksqaShRYUBiOmosKGxnTSheNylOc11sWEU4TSkhJmIpZXBzak8KJVA2RF85
Lj5nKHRMJE4hOmFWcC0mKz1pMmo/a0A8YnBdKVRQPD0wXmY/MEo3Km9lUC1JTSlkKEozMyw0aWM+
ImpSRSJaMy9vRlwqUy1JIyVEJTVOSFsjU0paLz1FdVpZbEI9MVkKJWVqN2NyYCdxYzY9PEQnImg/
bjkuLEFBPFNvNW5qOGNBIzI/Jl9kMztfJlNAImlpImRlY3QvXktoP3FhM2VCOG5EJUpXZzU4QWAk
RGpFNlsoVFBJKWU0XVZOXmBeQUlXQDhgZFAKJSI/SCNMZyxkW0UlQU0hTDhFbjZSJ2tYISVcSkxI
LidYaXFZLSZWVW8oOEZfWEFHKFgmOW1BPSguTDslKlcsY0tjPjonc1dacmFNJ2koXUNJXm1ySCZq
TlJAPyQjN2FxbUorbUYKJUAhdWRgPD9SJCQqXllDJW9SX3JwWG1KRylEcTlHWFZ0J0UpJ2JIP15a
LWk2c1IyLHJxZDdCUUJSOT1CZ3EhInNwIm5aRGJsKj8qSy9BYnVKU3U0VCFrS2JJZWcyZnJYVGgq
UFsKJV8nSS9ET3EmZlY+MkImT1g7OjIyMFB0IVUyY0BGNktVaCpZT2hhSm1bNSJSOzdoSydxLkFM
ZjUmOCMjZFRnRDc4Z1I3a29MIihzImBbXEdPNWIxR2JdZVRyWFRnV2F1RFxlaSIKJTcmWltdXVVD
WFVKVigyLiI+PnRdK3RQbmAzKC9sYWJ1ZFpRV2Q5I2hYXDF0XFczKmU8IkFlcl8mbSxzXS4mYi9C
QmBSYG4rOjNyXSFnNkBhbTZHcyY4Zm05P1VQInJXWE1VNDkKJSE8ZzdOYSJnW0gkPE5bNEpzVihT
NSldTyNsNTVAaE1XVWRzaU4vKzQ5ODBdaTctLGY4Xi03XEw/VEpcJHAhRTUsKjw4c2FCbjgqLXFY
VVM8MWxYciI4JSM8UGFTJDYsWyl1N00KJWM7Ul8yWzdgME9hblAqWkxmVGZuUW0wby1NMjxbRGBt
NGxxcDtXK08qQCpkPlBBMUIhRDlCMiZZISw1ZzlRbWlPPj1dJ2lMbXFHLCUxQmdIK3MtLVRTaD1m
Sz9yM1BgUy8rWXUKJUQkOUFRQW1lR2QvZVpMOiExXWc5Q2JsXDhATz01aztjIi9wPzZVRllgM0k6
IScjND0tJERga1dnNS01WT9pWGkxKT1hWiIoOC9dT2BMQUApbEVxa1ZANmIoOSE3IjgkOFAsTEIK
JShvcz1eTTxRI2U9VTdmWDdAQElGVkZLIWI0YEdLdSwmOWc6TSFCMSM0XiwySWpeYXEtQF0sQShU
WSRTJzM8W01WUCdUZkZnKjRUaGttSCg5LkQmbSNLLkEwOTBpa0M+PlkwKVMKJWVSU1JVK1I0VCol
Z1FNSCQ+RCJiZDRyXiosY3MzY245ITBcajplQDUkYFw1S2UxKl5RallBNy9XVFBRQj8+OTc0UTRF
K3VRPWtSZFRuK0NhNzBWOThQWSFxWlF1dHEmSzU9VUMKJUEjPGFJbGgrcCRfQjRwOkBwM2lZbF1c
OjE8VmNbK1NCaVwsZGxyQmlaOig7RyFfYWRUYjFCLiZfa2NnKEg4Q140UDguVj5YVjFSRidQKSd1
Mz9GLyJGSiVyIV89OjRhKGRkK10KJV9iYEtvbS1CPCJYNDFIRkxxTCpDZCUpX2JHaEJjXnFQJl46
J3NlMU9QcTNVOUdjPmEvJlBVcCkiYCJyVCVVO1swNydFW1NeIXM8NHJfKkY0ZG0/TVlcWEkiYC1G
Rj4tJjp1XnQKJTNGWG1OaGRdUFNXLjg0XCdtN0cnIXNZPGo9YC0sUEQ6LUVJXWxFNTgsbl07VV4t
WUheZlJLY3NhWGVwKjk3IzgvciVOMTlrR2o7SldDPm1dT01uJlMkdHQ5NlNwOFFTPS0sIz8KJXMq
PTEtLVU3TG5RYlUrW2RsRyk+bSZSbyFodD1iYHEvLSk2WmIlM0w/TGYwSkZGQjwpJVdRclxkMmg0
X0wxcmNxaCtVdW9qRFpHNTUvY1JMQjQkJ2NzJElRRSptVkpRUEFcWjMKJWhGdVJrIjhwMzhNaD0u
NVhHXVVFT0BlYmpdWkJUWSI4cDVOSGQ7J01EOE86NGpQQGFbKjlRLjNpPnBqP0FtSU9ULmgsY0hj
N19tcSs2OGlFL2U3QUZaZCU+I1FkWCxBYltaT0IKJTM2NWApIycjYGdSV1MoJTJgNTt0T0pBN1xi
VFgtKEQtcDxSOmJRWk8lV0h1Xj8kWCFwUFYyMy8vby9uUytcbGgkX2RoTlJWIkdta0NZNGBIXHVz
X2BQJXBTS1ZRL2VFX1JzXTcKJS1uSk5qKzRRIVI6UWJaYVknJyVXPVIzTio8aDg3LCNMQE5MX0Bi
VmdhPGxCcmFIX2FzMzZbZGJdJUQ0PidEcnUuMlgwMyRRU3BETmUiLDNnTFArQzlcO1M2RWdVTi9v
PmhjY0QKJV9qb0tCRnJmWiMjIiFmaUs6QHRqSzNbMmpNZ2AxI1MkPFQlR0I6KVxuIzs9YkdlM1dT
aDtmKE8zUDNDUTNRMF9fI09eZ2REdFxpSWIybEAzR0NcLV08cF1tcjJxY1czKjhmZ2IKJSpUWyYk
JDFlL1syaiJzZ0VkczkqKnBXZTIrbllnTzM6PmNbLVRMQU0rPyk5SmE/YmkqT1NTV1VpUldMJ0po
bkQtb0Uzbi1BLyM4L2AoOmMjXEFaS0UrNjQrYFJzZ0ImKi1aQ2kKJUBpVF00T0Q+ajFFMCwuMm9Q
Zm49Ki1aQ2lLOkklJ09HYiomRUw7clttKU1EMEVjU0FvRWcyYj8qcFk6J1FmNTNwRXAzQiNHPCpu
OiEvRVQhaVpPUkpVRVdLWFJLRy0nXmhqUGwKJXAmVTFwP3RsOCQzWD1kMiFSPmIyKSVMVlgxSlE2
RipVKGo9L24yVz9AVHNGNjJpMSQuXF40J2dFa2c2WEl1JVVTKzd0Z0RKJTAmZGlCPDZTOERcXXJG
XldIKUVpb0twR150LF0KJVptUSk8ZkVrSyhMbDM7K2w5LUxWUDdYPlJCI2UzNGpfLSUkbU5aJClV
bUsrX2dJXl9PXSVdZ0JWYFdcQW9tO1lAU1s6UHVrbWYyTSI1Tjw8TDNeQWoqQTZqX1knN0MsOk5z
TzoKJWBCOzY/JWQzRWAjPF5FNCo5O1xWKCM4bWNHYlcsO2E6MXEwT1RURU1HPCteKip0RT4zSzBl
TXRMSE9oXCY9NUxBVkdERVlLLSdoYGEjczpZITdzPnEzdD87VlA6ZD5iY11dQ1sKJUVSOW1qJVdd
cjFHNVdPMjRga1tvUyI9X0FPSjxqbGA8ISxmRzNcTDlmPkFdJW1dXCgmNE5NKTBcaUBLPjpIRWEz
Z1JeIkNQOSdET2VcOHFIOl0+PGhxayFHIU85OypcXyRATz8KJU0oIWg3JzsnQWRKL2IsXmAkKC5n
XDVIK0ZqYiU9MWwtPVo5RD9tXUM4Sl9VbG9MVGlwRzNQM18rNjpwYU8zUS8wcVFMRzczOzIlVV4h
IXFzWltCPzIrNVcsLTQ4PjRFamxuQ0gKJWQtVmgyajlOcnAhSCpAbGQsYUIhU0dyR1k6QjBuImxH
Kj5vKmsvbU1GNjY+S1xeTVFGUzo6QSNgVGNdLDA+PiVcSUU8QDlJbzBFdTs/J1NwcydQYy9xSHIj
OXI6L2RrUycwUGYKJUlzQ2tRcVdjVm1EZjlPLnI5ajo8aDt0YlNEXWNJZEdFVE4pTmEtYkJeNChF
U3FSYDZPbiVRTCJaSXJxOSksPXBaYEEkUVZdaks0ITh0OSNKNFFoaSNcbCNmXDcrRDhsUztVTE0K
JSwhRUhTXkxwRnFQS2NYTyYudGxKQTMsYm9bPVBqZ0pzaGY5MEpMNyw/cmVMaixsPjU+LHU7TC9T
Z0hAamZALmctS1lWbjFgYF4mYGFbVC1oRDghJ0pGSERfcS0kVGktYUtJRzQKJSplVlJfQ15rRWdr
OEw0cS03NTBIKmcyRCpLdU45UGlnTVgpNHNgUS5jUyYyL043cFEtaVkjKWY1dDYqXThiOGE4Mlpk
LWptX0wvNCIlXUNnKzE1ODBPcVFJVSdkWVE2JjQ4JWcKJU8hbURpZSdjJjgjRlBwM25jMD1XUTtR
RFIqbEJZdWAzKyI9Kil1OyFiamRCVUFsVkhfQzNhQyIxRlRqck9IYz8oSlUicSU+OTFNWGM0Q0Jn
WHVUVUY0YjphVk9DVWtrUzdkKyFaYShDZ0NlLWVVb1hkczNaSWklImBaODdJPUZjViRAZEE0M28r
OSFkTTY2ZChiUVwKJShBdGNSZ3FgMyVMOUVAXT8+dWMkRjpkUy5RJFFFR21iU2A0YjhvQWkiT2Iq
SStFKzYzP1AhM2hXbnQwTE9dbkZNXmUxXk1nRTwwKVNZPElyWkV0SGhtKFhaTlxRSE5dMjZAPUgK
JUpmMzZiIkhOWiwsUUU+Jy48Vm4rNTZZQnRdXjVhZkhqWGFfPUpLPWFmXFIjPW8ycm5UQSkrQSk/
VW89IVQ0MlI9VSlEI2JfI1JobDA1VigiKWQ8T3BIPCtwYVo/aDdbW0YzQygKJVRFbnIySFwpcUM/
QTRXdWdTV1FPNiwranJJPm1ZOCdwL3NcL2xqWUJFITxSKDVrXWB0NkpiXVdZWEZNJEo3MDMnTlll
WGNcKDxaSz8kKiRFW3JeLGFNT1ZicCdpcnRuJjQsVV0KJTdbOEJ0VE9gVlM/TmorU243Okg2KyMp
SnA2MyUwb2FiaWxXWW0pc1JkUykvIW0wN11lMHBCa0VgWGQkJjJOVy1VIy4+U2NEOU1aZztRQyVP
aDRmRk1TTiVWZ0lQNj4kXzU+LWkKJUtRM1RiZ0VKNDpfIkIpayRdLyJPW2o1Z28/clcyUS1WU0FZ
KHFNKzxvXzQjY2dILkRxTkNaNiIkT0JXMTFAYj1KbFM+aVxLITJBZV1IWXFhb2xtX1Q/RE1RMVtf
RSRXQixYNSwKJVA/QUFiUVFLdFUnaD0uczhHYW50ZUhBJCs2SVteY21DXSlhPVJfPik/MUBVSCZo
cWNdbDdBRG4xbmxqR1cpRV5AI2ZuUio9Yj5fa1I0My9wT1tTXFNPU2VCTSNmZ2giUjh0ZVgKJT0v
ZWkoIzJGOlNVK0pPZ2hndENjT2UlXlQhaihcMjdWYWo/KWwya2lTbHUvXTBbOS40JD4qUD44Nm1G
ci0sOmE3RzBOS0BaImlUcThUMDhQOm8ialQ0WGJSIjhkUCFsIzFnc1kKJWE7dUk/XEA4U3FhWFxK
RkA9M2NcVihKTiVFMCpabV9wRkEjIVlcSDZWYy5vV0xXcEYjaFJiKWw4bTc5Lj90RjVXbmFkbCJW
P3FKVF0lIXBFJlM/Y19cTk0sWnIxNVxkZ2pETloKJWJLTmZxS0taOUIvLzA7QUkxJHJOLSQqMC1J
IkorYSpuSi0qbT9vKUQ6YllmRG10Q2QxcFZjbyRLakQxLkZpRUFVQVVOcUtkXVprI24tclZYbUVS
bkRBM3NROlE/Nj4tSzVYPDQKJSglRl51Xl9RITEkOiVFOmk9UWIpIl1qP0pjJXA/KkJcU3EkLl9a
I2A7KSY3XkR0KFknVzwtKjJRSFU9Ym5DdERCP0dkVi1dYVVYQi9QcjVZZlFxTEQ1dEU5KVJrbUZI
RDliKGgKJWdMMk83Tk8jKiomSz9kPVs3YTotOnNMKVApdVoiQGsjVT0sZlolJyFMUFAhXzEmJnBm
VzhNR0ZwOFVbXFouX0EsInExdG1VSVtJPDtDL0MtTT9JL00laGhiOTRYKW9ULSolKTAKJVROQE1B
XWtoOzZLckdHT2U0JmU/IiFiIyVwMmZYM1ZONmhQYjlPcz5URV4rYF5wZGdATlNYLTVqYU1aVEVl
VU5IQ1JLY0s2ZmFgPSxSUUQqcjJWWEEsOExCbmkyJCM/JDBvdUoKJT5PXEEmVHVqai5JMm0zWywv
VjQiIk5dKipAPyc5O04xLmtORzdsLV8/L1hPbFgyVm00W0xbKzVBLlJEJShlRzpBWUg1NiJdMSFw
SlxuYE9FX05eRU5JaVd0aSI2ISUjPGVBXzcKJStkaDNzZzdEcz0pXXRaJy9tdCpnOCdwImJDNlhF
M0tUazw7Uz42VFRLQVpKPGVrXTBrLG9fQWMxRXE7TzwyaVdJb19Ibm5NbV4iLSFEQkZEaXNvU0BH
N142MlFeWERDWzA8Qm4KJW1EXUxpTSEtXz8sRTxCVUokKEBYLlhAPyQhLTVXImlURU1sL2RxWWIo
KytUT18kQjMoPiRxVz88LmRRIVw2Zj8ua0dhImstPzRBTDYrIUhDSmU6c2ppNDR1JSVaNiEpOGVk
NnAKJTZgXUdvb0gvSztkP1g/NFdFM2ZDITwra0pHJUomNF1yWWI1LiZDLkVLIlFTXGZNOTJkSVlZ
UUZrU0g9XENtO2RcVEtgMm5UJ08vZlVvMCNHL3RBbWppO0IsWCs/STlzbWRtTCoKJTclYmR1MTpN
MFw5UDQ9ajhcbCI8PytSbTpoP1ctbkxROTVpZ0tuOzFtckpLZWZDL3JJPkNLV1NmNFI6TGdQWiZj
bzElRCVpMllRL2k4QHJhRDJIbGBdWTldYSgyPzNaR0lGP2wKJW9oaz9iQ1kqUzhVUUBFXWxZVEhS
MXQtdWtpM1YhaVhaMlRiaklvdUk2LTVcXipqKVxJVUl0XEJgcHRYbSJFQlZYViNjTVU6W21zRSEp
NWs+T2o7ZmJba1ohPFhjWVwuJVVwPF8KJW8kXUciVW9LXDEjPF0sJCw9am5jSj9bK0NJPm1ZWi9d
MlAwQTZaKFVlST9wXj5cMys0Ol1yMjYlK1JER2RiNEsoITw7aStbWSpBRTRvImxjaFhZKGFqSUIv
ZEJQVThhQzI7IUkKJTwvX1FkM1hRTz4nM1EvWjw2VzVIJzNzaVlJJFYnMGJLalYxcllkaFlbX0lQ
bFUlUFR0SUNAXUdNZ0Mxal1TVztwXWE2K0FJLVxgYDlIW1dWcFRIO0hcXmxCN1YkQDE0KmtaM2YK
JSIuPCYmLShsUEs2bGI3UGJFLTljbW9BOVgiYmFUUjdNbywmXGVIKSNnTlM3QTAqOj5LOGZedV0x
cFgkTzs3cmtZUE5uPG9tSEEpUy1bakpeTiVjNFJERmRKZSo3cyg/RlkvKFcKJTplaiJnZm8lRD1x
SiJHXC4rQDMjbiM/aywzPyhDOi5TPWx0L2BHUWdHTXJDaDxMMSFMbShRZS9EUyNaXFpnJXAnZCtK
TF43WWAlQF5bc1VsKmp0NnU6YFtgN2pWUmUoUFRKbysKJVosLj91RVh0O11UNWY4X1tgaGcvVzY7
M0JpTnFpXCZJaSVTLUl1TG5fJFxHRikyYDk3SltMO0hFaCo6WTtcZ15kKVJwaGNbcTcmckI3bzVm
bWdlb0VSPl9Xb2dRcEkoQWdiRUQKJTBjQVxjPlpkXyUjQCJuMCU9Vi5VazAxZnFEWzo7WTZbRHMz
UWwxIkFnMktmQSNNSUFFbCFOMzA5UzgxMTYxbVRdZCkmc1ptKk9XdEE+YDZcQSItR25dTUMuXFgp
NTJYRFQuOkwKJSE+MUNxIXE5dS02KjJgVDI8b0NFVExXKEIhWUM6O2tQQGY+UTRFYVwzdWVIWF1A
O11AT21lQEhRWjwhSVF0MSRWRSMqX2BMQGlSKS5PSVtoKzopYGMvI1oicVg8TyVtMUNDV1wKJVYh
bnAzPEQlaFw6Yj5dS0Y8ODhlS2gkZyRFWlBwUjlZOmpdU104RiE4VSRLRlV1bVVBVy4/dChMPTEk
YztKblEtZnMsXkZua2g3KiJAcDQubFxbTS9oLUkyWVRSQ2FoK3IsU2UKJXM2Izs/LSQ6OSVtMEc9
PFhsc15mbGxTS3NhY0FxblAwRVtqSDQtPFVOVlxGPWtcWnMiITY3WlRQIzhFND42LV9wZysxXE5Q
MVFoLDpWJ1A2KWFvQyReaF9YX08/TCsyUm8wRkYKJVReVjJiSGUzUGFCbEgvM2hyaSQ+M0MkM2hW
b0RibCUlXXJVIl9jZDFWSGdNRFdSXUVFZ3VqVEdENyNgPztSbm1pbydrYEJPZ1lVb2U1VDM/LzVU
Rk8mPHA9ak1wQi4kcnE/W2cKJUNpJ0o6VzxLXk9QUTdNPWgiRW0sSU1yUTxeYF4jMVE3YkdWMj8+
QChwJ2suJGQ8cjlIUFxiIldvakg8Lm0vX0FfJENJVS5PZ19bOT4tQD9AZlc1T3BrIkB1XmA7KWFB
b3E/NVIKJWZyWT5HVzZeTVdcJGplOSMoPDlHSDRkc1tbbG1ETTpWXCNIT0I5IVFeXilvWW0rOi1D
ImIqYE0nYWViYUpZaFAqW2wsWltVYlhzXyUhMnFANVt1TFM1amEndVQtKlpDczY3RTsKJWNodU4z
bCQ4Nj4lTkMnKUVTbnQyTzRBajU5b2pjPEsqXjI+WSsuZUlHc2ppWSNBcjJyImJ0Kk5UZVtaV11U
S109J3BUNHJjRiQqTVpWNkM3ZCFEZkcnLzUxWmdGaU8zVyElQm4KJWJ1cCl0XFNERjBFcHMuIyE/
TVpiRDteOi48ZW9pP3ItZWxFIXU/ImRgZmhpR1NIX1BfTnROTzw8OXQjb3BhS2t1VStKV2dVa2Nx
RTwuMyx1U29MZmY9JjlgOkotb0oxYzFNRjgKJTlIdUFSTUNmMDhYJTMmWVQjdDowXSo6R1hDalpB
T0c+TCtwT3Q+NiZRKDxNOkgyLjFiSFhxY0ZITDc4LEs7YG8/LEFUW2RQRTdbaUMoKjpIRFA0NHRj
W3UkXGdHcGAlKVJyO2MKJTFQcDpNIzkuRyIzITM8OzV0QmRPOzZrS1ZuZUo2PjZKbFBJWkNST1Vj
Xz9NYGdOWkMwOGAlNz8zaylcSGYrLlI6OidzOEtYJklFa15kdSpEbnE2YDk4YTdFa1xtZFlXUkNw
Kj0KJThRY01YVVdTTVZOXEtNQF9rP2BIJUVubiNrI0w1J2NTXmg4X0VHLEUvPUFNc3MkT1Q9TylT
XVNUdVpYTGlBNClySCI/Q1tUQVo9Kl5ISVtGQio2NSVbUGJfPVBtXVZvcFpnJTcKJW51SFBAIzNA
blxALSEhcztqTT9CP0xzLFQiZlhvcG9JX1AobSVlWT5xUldqSChgUVByMWJYZmcxSjU0ZWZYSyht
RTldSzMxRXEiMUdJNSYmN3JSPVY0cygvc11hcT5COi1sVlsKJWkoaFFoWWllJ1djTTJfIm5VcGxa
Ml1QIjJwYmcxdTQ2YFJyJUwrbkRpLFQ0VUpIQkgxYSk3a1I9RikkXVNPJ09wU04rZDxrdStFIlsl
NkJtbmBPSEVoVCxgWEVOOjRMaF5Yc3IKJWlMbEc/MktROmhQYWwzOUs8OjQhb0Y9Q0JaU1JRQ2Zs
bD5nRSgyRWkjQTxINC1iJlcrQF08KV8/U1Jraj9zWCtYLSgmIXQmIkhvajlZSSVhNTpTPj4qZkxe
bCgzNF9iYnMsLjsKJTRZQzJWPDxsai8vUGtaYzYwSEJfbUokOTlCYSNTXERJRjwiRWolJTM2J2JR
MDZGZldtN2ZNMlQjYVVdNlkvO1lARWg9VWNbc1wzOVIlR0FSUDtkQTMhRWFvbWBlSylFKlxodGwK
JTsvPW08YjNxTyInVERVcz1rZmtuLihqS1wpTj5qTiw/azNzYEk1JXQ+W3NBazBdVm4mPGllKGci
Z0tZLDIkdG9RUXVvImBFSyhGWnBxT1RXRmU5KWxaV0AuLTVtLTNgL245XWYKJSE6bE9dJCxTOihe
Pj1JN0lOI3NmPFFQaVckKWVuRU9XJy0pUlRTW2s4Uk4rUilqKWdEIVg4RWFxJEtYMWdyKVY3Sy1m
L1laNDw6XDUuNSo6JzpDWWRGJ2gyVGpYLUFZXyVzVFEKJTxwKy1WKThVUERgPWM8YyU0bUM4YmMp
VDlWPnNYZzg+aW5LK0tOLGslU1FtLjkidVdXUk1DMlFwWE1dNTU3OEZCQFZUNGgzcURHVjlmZj5S
WFReTTIvYV5cUSdjKlBbWFNpOGgKJV5hYGwtajwrIS9XWU5ITClacjBjWWYiNyVvZWRTNjxka1xe
WyUtbk89WD5WUiRoXi1WR0tQUCZyNFQqY2ZeMm5eKClSbjEvQlskOzRJVS10WkJZLls2ciFXSyxN
WGooRFEjKmUKJTFlSnJrLkBdRG9gYildbiRAIikvQldET0dNcCVWOjZ1SFVMSFlwOSZAaDNhJW1c
IXItImdESzo+Z0BxLlUoMypxNkBoYTVBS2MsZ1tNVGVwV3Era2YlSS4rITYmb2U6Z11zNzsKJTtC
dFslYFIoS2JBNE5uTUgqW3UlM0VuVyQvYE9UM11fLitNXEFjWkdJTztlcihAL2UmMSlcWGcsXHBB
XEBPLCtuTjJbaD9HOmxpJClhVjVfSUlSY0BxPzVkSUBDYHFNWDkkS0cKJUssZkltMHBpIyM8WjEq
REhhJjIpK1ciVilvIiswZydnbjQnKTknUjxeQFZFcFtmVkwnLCJVN2tGQ1RxcGc9Y1o4R1F1SlZf
OVwxV0MmOCpeSC46SyElR2hiWjZDakYzZlFJY0oKJUlhO19ZIWN1NSFBViNuPiw0NiVeVDssSDY0
KWNIPGR1ZTNYL1swIU9edWNUYzlMX1wjQUA7WkI8QkdNL1g6bHVbOUVFKkc8SlMtczloRilNP11x
UmBJY0w+RzdjOzNKWnN1YWgKJUMxNl4zMUI6RDg1XkBCRzghZD0yKzRxQlpdY1FENjInSy82LEoz
PmNROSk6NmJNTCUxVHQuTD1PJDNKR1xISXEoYTBYPkE2aThxRnAjUVBicEpiTFA9KW9TLiRXdSVy
Qy9kOEYKJW42VCxTYWNsZ0JgRHIyU0ViTTB1UjA/aENfN1B0SSVQclxDITQuTjFXVCw9ImY5QyJj
J1ZcVjBbakouVWI2P2pLST1AKzU2dT9GbEp0Iio/PmdMTnM/W0BtYiRUOVJjbEFCXXEKJVtETTgx
M0FPSVcjKSc3SWpEaE9IWERIKCMuMVNCYi5kSjR0LykyOHQ4MnJsaSI3LUdqXHBcNUBxJyRVLFxz
dV9HQiI+dWNiazcuUEwrJGIkV1tscFpFPixeWENLVU5lO0A/YDQKJTQtRDYqX21yMy5XcWNQNFlA
dW1ZWVJqcV9vdVouNFNYM0ElKS1uLCQ0WTpGUTg0Uyc2STNqUE85aihmSm9NXSdNJignR2Esbyxl
TFlqJTpjLlI5cnVYP0lbSmYkJytfVVJzMUAKJV5mbUtEQ0VWMWU2JmtzcW1iMGZNMzgrNmtOZWQj
ZEBDQ08/OjtYOmc5cEhLIyNRaC0mQjxzI2BRJks+V0VsKj9eTj4lSEdaVWZDUUhIWUJMb3FCcDtB
TSkkKlwlblBdUUY5bygKJSNCTSpXZFMoJGRENl1MJTc7aTQ3LGwhP1tVUUdWX2tASDhUYCVDP1ps
cSVDbjlsYEo7by5aIUtLLDFuQytqb2A9L3BFNF88UCY4cyQ3SW1TbkIrWW48RVc2KUc1KT5aPjkl
cVoKJUFgUEBKKz0tNmpHTzo8UEd1JlNjWlsmRD9AVjAqMC9TO0ZgPCslRGdSRyNGV0FYQEEhK1JD
ND1uX0ApODw/VWE6PFtydUw1ZThnVT8hSywkbThnSTNiPTY/Wz4pQWktXT44bCoKJWAvTCVnUXIt
LTA3MTk2Wi81YkpFXWElPG0yRDQpUU5UNGwobitDInAoV11LND5LTVNTXyg4Sz09bENdS0hBWkUn
QTFfTjhjIigySjJFTzcsLWBdMkZOLEFZZl8+cyc9azJQa1YKJTM2WklcMUs1bkpYXmMkR0sjbzUx
ImBzM11oaSM5LDY4Ukk3Vi1NMk9oMEAuaURvKC47aSc3a2cwPXJiREcibDIhTm0oKkk2ajlbVmNC
MVZKSzhVTjZXI0Zra3InQlJhQCQoY1cKJStKJyJkWk04LDAyJzZvQEEmSDAhW1RnY2FXNDxUSV9B
TkEzPSMubUdJPm1bMEBGOkkyN2stR1EjMERmbTk/dEdCPyNAUyQ4YytnZEdCR0Feb2FxQVdaMy1N
Ilp0MyFZRyYlMi4KJTw5I3V0Rjg3ay5KXSZFSUpTPUZJIjNHLXFRKEsuLj5MV1pHPmNPJmxoUF9N
MVZLPVJhTUxhLCs6QF90UlEuc1E1ZlY7JXIzTkFVTylsXG5OXGRtL3M3QClUN0VLYVQ0O1cuWk8K
JTM/NShdRD5eRGptP3BARzQxQ1dmY2F1Ky0yMDY1X3FpQStJRilsZyxySSYkQV5KI3Bxbjw+YyhB
X1UqOkZtT1JVREREKVshUCVwODRfRWVNKWBMOFtRVGIndTJYcFZINEcnT1YKJS5zT2FjPT1bOlBv
RG1KR202VFVOXThqIjQuTWhbYSFuTytISVhbNWMuZnFbYVUqQTIzW1c/a2BGRE1oNmNZczBaYz5T
VyEkOGhnUDRVWEk+QENZUUQmPmU7KCtnKSheKj9dOjgKJSRDOFs4ckddN3FoY2xxaVFIRFw5NV1z
LDU4ZU8oTWFqYzJCQyVuSUZec1tPWVBJRUhYXDY7QDdnITB0KS9tNnJHKCo6Ny1bdXRrIT1OMWUv
QmVKSTlwZlRbRUROX1NIWyJEKz4KJWhwPjpaTU1TO1lTVEFFY2FTPWddaVFdRWw/Lk1BRkZWODpn
RFpmVSQ7cGFcVSJZXl4/Yj1LWzFGYy5nPz8/QHQ2WExPRyxPaWVQbktvPF1DZ0V0NlUjRU5vQipp
TF5aMF9XYTkKJTgqZTU1PCZzLDhWKzFEIUI/MS8tLFtmaFUxV14wRGZCTTMzYCdSUSNSczZFSDNV
azdrRzY6LFhDV3NYM08odGwla01WcGw3amRXVWBaP09hQyhMYStwJG9cJEZLN1FdZGQxRT4KJUtR
MFRYK0omNVpLQzZvIXFGL3BFUEVvK1M0KDZdPkwkKWpnITdfcis4RFFsQWJjISEvXEpoOitXPyo+
RFZ0UDgiaj5IMWgqQ01oblFlX11WQmlqPHBOblc7WT81cUdfW1c5RWgKJUczOkA0MyFONFU3XCxb
NyUjaDNBMVxvbCpANnNCKUA1L15JSjg4JSUvMEJIc044X0pVRl05IVQpQW4xTSM0VDM5Sj1jI0dP
Ji5OUW9fJ3ROImgkcEg5KyVWUVYxPGxvIyJFLk0KJStGcTlYM1hxbSVEJVJaKVBST1ouRSNtdWFP
alFAMCt1MW8la0JtUCNjJjU2KTpiPTszNSZPMldLIVd0QiUxTDdmUk9zOCstYWQ0TTVxODkiTCs5
dWNuRmIyZks2O04lUXQqQHAKJTtUQ0hxPkdPb0chNyU+S20uXDQ/X0IvQ1YjZmtzdSVnaUxnXWZ0
Zjk9akhbWVxuLl5TQlJJL1Zcbjo7LWJdZzxIXl9xQi5ETjlaLEZ1OG9jP29VSjhCRjpDMCQ5SzZB
KFRvdUgKJXBvWDwvSC5CNFQiNGlIVykmSW5XLE5NX0ohbVkuL25vLyV1LGIldT9xS1BBLDNOV2lR
cyJ0WydBJzFzcS9jblBGWGNEPkYwXyIvLlFvaUxvaEY3NyxSdSZeaFA0UmNWMmJ0OzsKJT1MVlJm
Z2wyWlRtcUVJaUIwcihNQzYkR24nWW9hUmBHaDQhKTtDMUUrNk0+ZVlnWmVsOjNiJnAvX2U7R0Qr
NipcYlA1SFcnRGktcSZOImFxZiFUJGZsJ2FeOSksPWhUKTFLP10KJS9GazdPRDAqUnFWdXJRLGdI
NmgoUSNtLnVobjdRJzRzR0twXz49NU1tWVNOKCgyQjlSPUd1amJGc15rW01kJkk4YzxzWjdFWDwj
RitsRVgsSSIwXVpcIW9vNj1UUnNMX2hcUyEKJT4/ckJQJk9RXixwXkhvciFrUDVDaE5cLWQrS2w2
KksoSFI2aSJraHMnSkxjPUclMzhoUyxZTyQ7MWltI1lRX2UhX2wxbW5pNExdZ0ckUm8zYU9tM21F
WlQ0RChuOGYuUy1jVikKJUNqRkllVzEiTW9ibk0ibTBmSS1mK18+SSxoZU0zNDcvayVCbG9SOlov
ITJYPV0pQylqMmlMJm4mW25eWmonTkQvK2AiMXUubFk4JlZKNGtRQDJgT1heK2JWNlA5WjpPSTQi
NEcKJTdTaTo5OzpWMy0xNjleN1tRJHI5S19eOTdnPW1oLjpLQE8va0xacFBBQz5vLzdaR1hxZm42
NmJrNCJAXGVXV1xzRitxNERHXUorQ1ZSTjY/PFRVdVZdJDI7Ql42Ik0nLD1hSzAKJT5UdVkuZWAl
Vis6O2ptPGYxXjlLa1UmKik1RThJUjpdMSd0PDhJPWwnR1UyVHBPY1VQaj49LUdHQFg9X1BTPTE6
bD0mOG80Z1VRbXBNL0kuUjJlM0YxISFZQHFxcD5VK1lCb1YKJVFqMC9WK2g5JjRsOzpALl8hMG5h
RXRwVSpcbnNuQS83IUApTDk5TiszLiptdG1zO0IpLERSQ01gb15Ja3MzKWM2aSx0Mi9nLUgodTYz
L2UyQlIlaTNTMVpAXGU5NGdWYltSMzkKJSMhOz9HaWVhc0MnIzZPJSpsdU42K2F1Yi5RUTBoMmZq
X21wYTdeXkIhakhIO15sRjJQKSo7YjdOc2REI0hZKGY2JFZzYmVKJFRDLDZJdDMhcDY9QmtEJVki
RkhKbHU5IW5jNmcKJV9PJFovVEJYbDM+Wzpza0FGLyZiVWwkXmdjYj1LQUpjRy0tWU8qLFRxImFI
I2M8V0EzbGtfMkRTbDYpJCgpV0I8QVssNERDWWY2ci51TnRFTCtgWG0zckhHUiM1NSxGW0lwMj0K
JVJrdDJTYG9dKzJiOihYTlJoXV5UQDdFLlIuQW9qV0JjdC8iIlFacjdTNzRzJ2dEXGBcb0g1JGFD
Pk0+SFVdPkdiOlIrTG0tY1EhXzRULjNKJCFtLi44RW5YXmJcXW1pQFhURCYKJWovLUROayddYVVM
bmZAR0wmMDYjYyoscDZaMiUvaG4qK3Eta2UhL0xZV1tMJTU1akheOzNcZSknKnVcXmouWU03R0VD
SFAyZ01ROGpOM1s+J14xQUNhYSZoPFlJPyc8RUZwVzYKJSdjUVQuJWZda09HSCltR0NFOi8raygv
bmxDQTNvQlRQWzUrQyxdQEdwPEVUTUo4W1w7X2ZFKT0/MSxqOEEqNWA3WjhAWjg2T2VENUFvYSpN
Vz5nWWlYI0InTz1nWi8kV14hRiwKJSlkcFZOXDprITspRGZQMGVqXU07VDMjTDpXbWNROmB1YiRL
NHVbaElENCpjVkZTdVxDXjFsNzIjbGJAVCpYaSwqKD9YYlcsLXNZSmhSIiJyPEhVUWFyY21rc0ZL
OzZwZ1VXXjkKJS80b1hhbCRTbm9DLXVwUydlaF9RP3MvXC5sUElgQ0Q6L2RSRzVmMSpbaCM2Jjlq
VSxGTU8qUUtqWSRFIVY5KGpHXmZnbEYuSFxYYFAmcmpEZnF0dWUzKFQzJnEwb2hLSGpxbEYKJTxT
ZTRvL1o2KGI8NTYrb2YqRFdGZkY0V21yUXQkWEl0IT4xPytVUUFMU0tFW2dDRURCXDREb1NSYC10
bWtQT1E0UG01aVkiXz9DWCIiUzksInRIb1lORF41KCk1Yk1nSjQoSFUKJUgjNzpOI0JiT0puLGBk
WFRQN2VmPy5AYFZnMjItP1BlVU43KUA3Nm9raER0UV89ST9EV0UqQHVObGgtcFozQDdcOSIkdSZt
RmhVbColISU1N106RV89b0o7dWhAQ2BEUXRwIUYKJSVRIVtQRmMvQm9GVGMoWjVYaSpmIU83K0Zo
OGk5TkVmaTQzNF5xaGgmP3BLZk84UVxIZG1xaz1lIkdhRjltOiYyTUVNQEtoT1FjOEs5Z3MsMFo0
J0FJSXIpbUo7NytLKW90YCkKJV83a1hdJjI9XVhOLWFqSmpjIzJiRipOPEVOZmBeTTwhazBwSzBB
LVlqO3NUVD5AOW01ayElSz1gTzM2SmkyO04sN1N1Uk1YbzU+VXI7IiNyPmhLbmNNa11IZmtgLjYz
cVJNYHEKJVx1R1pbY21aKGQqXSltYyhhV0wzSjppaV4qIU5RUkFPPVBROyQtQF1iRE5LMGhmXkZl
TlosIy4lQk8tZiJXNGxualBVOSRLbCkwLk0rWU1BZ15APnI2MSZWKFtdKysiUC1UOEAKJSgsOFw7
VT1QWSxIJW90cVthaTZgclBqQGY0LTswVDAvODIpbUEwL1kvYWdUbXByIWEvUj9rb0UoXkUwWjQy
ITVZUyReN1NSSV8wdDoyNlIlI1NoPF1UVGk6SGhTcDI7Ui5bQzkKJSpHMmVoRmUrWT0/NSRZPm9B
YzkoMGtQcDxdNzhuKEg+bjlyKzJgRHE8TE1bUFIkNVZtJGs0bjBeLEFUc24pNEYiM2FeSFNGZ11l
Si04cnBiUFFLS3RVMEBGa3BBQ0M3VVIjbj0KJWo/QS85YScnTnVLWTEwWS8jbyY5cjxNVEtRRUM3
Ryo7UCxAbz83Xl5FQGRFXyEldVwoUCY0OyhhOVNYZzZKZSFaKEVKXjBebGsuW15IVGRaOmQ8OUJQ
TVMmTjtwKXMwa3M4bjoKJXA7NkknIzAoJi05VUgtYU9HMHVJQk0zdVs3M3IpWV1PZzlBTU9AbjRx
NUVkdSxvOysoK1BmcVlUTiNnLlFDWks1ak8mKHI2XlVCQyVYdCxXaT5Ja1JTcjFhKiFaOS5OTzVf
IXEKJUJWWWxsWEhdZl0lVkNsaWEkPSlHbUApZFtSKFImQ3BEOmJyVEo7JDM5P0w0b1FpcDAiaWBM
cF8mW11TL11bT1Y4b2xqRy0+bzk8LGlTO1hUcXRiISJoY0lmSEJcUlcrR1RPdE0KJVg5LFs0YVNC
Nj8+ZUMsdSZqR1o7TC5ZY2VfX2E/NWUjVi1aVzFqTzxLXyxOKUBXLkZGMUNwQk1bYzxoTS8tRS9A
MlpfTWNpPiZlPC9jTEI6PXMsYjZZQDQsYGYqMC1UWDdRJzsKJTAxdTxMKzA6IWorTy1TcERxYDVT
VUppXSE7O3RdI0EyRVBKcEhbSmohIUxQXSYiWGhLZi5KWEI1X2U6YC5yN01rY2lyZHNGbS0jOig2
ZmFLVU1bSlw7NkdUalJoRjdIT0BzUnIKJTxqXVIoal4zWSNpUTtraCEpXl5kKzpnayk3Ql02JToo
bm8lVHNyT3AiPmkkPldLZCFuZkdRNEFlZ24nJmphcl5lT3BTQV5cLmZMKnE1OEBPOT8xNVM3MzZa
UVZESDosKi9xMm4KJXM2JUdwZ09LJzVfNy5XR3JZJHQlUk04OGU7cW5YMVJGSCNUTnN1Ml1kNGNV
Ul06Wy9nN05kQT5ePmEqK2w0Kz5RTk4yT1JoSWUuMTY2IT1LQSZfV0ZZJnElYShvUFExZXM7VzcK
JV9YJD1RT2xfO0pAIixYQWo/Ojo3XzVtLC1CLWNebj1gMmtraTVnVmEsbGJsQTVjcmQ2JWkySClm
YmJuZEhISChsOGFXWmEwNj4oS0ksWGZzZV5FMzhoVmRZYl4qKUo2QWhCbFAKJT8lYzErIiMtUVZV
aF1HR1BtUVctJTxLPDMjZFBnRE0xTD51NTcsQl1CSjpaOj0oV2VNMmFbXUZIUTxMIm1XOD1zIzNX
XXNecmcxdG0ocnQwKTIoMEZpOkghREkvTWBLYGQsZSEKJUZHZ1JINUI0dClBVExNZTNxR3BaT141
UDBJP2dPT2RUZ08+M1EqbihJWF00KSYyPj5LR1VaYFY9aVEtZTloVFltZz5Fa3QhWGMjVzJhXEMn
Zj81UyEtQGlvT0lGbFszJUdgMGAKJUVaaDRYcWFbIi1nNEVsIj4sIU0tI1xJOT4yWmNVNFBaUCo6
aGZVbVhxWWtVRjZwL2QkKmEqdE1VSDJScykvNTVrKyEyXlldayw1Jj1QXXU1OWNobSIvdT0kZ2Fr
UUhEbjAxInEKJUFMXjdeJ1U3M0RDLDBZTEdZOSdcQGM2T0xDOG9qKyFhaFpWal1UaUlpbysscUFR
VChHIWghZD05TixeJlNZVUNuMCVgSCU6Z2MlNXIhMXJlZTo3TENnUDwmQyQuYUgzbCZXJFsKJWpv
KnFXVjQwT1UjV15hWDxPUm5jcyJFUEFDWysxW3IxQU1xZlx1PUswcUpWY3JFK3NfWkc3Y1w/OVZl
Kk04NkxaUkxBVlBKZ0c5V2QiJkV0KG9cTEclPF5udHAzXSdrX01vcnMKJVlGTS5fWUM3IiwnXnVs
W1drTCwnR2VzdStOKTQubVYicTVGQ2wxSUxHNVpyX0tATzdWIyxhWVAoKl4zY1RibGJsM01ARkVZ
W1NQRlpgJEVzcjZgcHQmLFdoT0ZTUWYlZDRoKCsKJU5jUmcuRT5sWmFCXlhTKV9NJURgW1cpSj1r
I2YnViNCWXRhUlY2KVIrWyExWjAzTilUaSpgWkU9dFNKK2xQMTFLNiZNZjRaNT8naFBkJU5Acllj
VHQrKTd1a2JKQExaVlExZygKJSJ0NVk1JV89SDkwX1YpYjIjLWFrKnNbI25kaVZfY2NwVyNEbHVf
ZyYjLXFfPWEsWl9cTnJAXyhsM2NIZCIlNiMociYuOnVHZCF0MidqY18vNE9nSz9uNVg/JGA+WmdZ
SFxrczYKJUhCPjQlQzJKOU9CXVRsSSgnRkUwXzFfXFs5L3UxWSFYcm5MK0E1PzNfXHUiYVxyOzR0
bkJOTCwhbWVNUWFLRjQ0TG1NcEc7JitzPDRBUzlfV3FxUG5mVjIoZCtZKUlAYDFRYywKJS4yQ1c0
P1FVTlo5KWMiW2xZKEtLbDghV1hsKiU9J0o8K0FjMSgnaWotRTM1RVQ0ITBJcVdQTWpPTV1UJDJO
VkcnIlFWazYqKyViK20qaGYuYFxjNW4hVkYlQiE6ZCw5Mm89dFMKJXEkdTJkKUguWUolbTZWUTsk
TGVwMEtnOVZMSDQsRCg6XHI4VWstWCktNTBXYz5gUE0wM1BHOz9qSG45R2MwTjVOJjQ9V0A7Ylxv
RT1TaElbUW5KVHEhTjgqNkciXixvRG5tcFQKJVdIWEhmZj9dTi89PTI3PU4pUkAnPFY5P1U1bi5W
RFdkXjpLZzF0Oy5ET3NwYE5NWyVeVWo5Ny9FbjtAWyFTbF4qMFUkdWFLLTxiZGQjPDsqW2lpOyo4
cTwqZmJpNTVmRWVxXDMKJWglJEZDa2k3TSUjT29IdEsrR0Y1ZU5EcU8jdT9cPS0zJ2VrKzk7SDdV
MChzQUkncEBpUzlvMywnLSdmb1haVCQwIyRaZUhmLSRnTFwvRUJSZiIsY0IoVGleYWJWLi43Z1w1
LjAKJVEzTDN1Q19vcGtZLWZpWFVfXklRTSpAQCQjP2BYbiJoQUBRITxwWmEkSSFDKV9RIyZnT05Y
bzxJMSpuZzM9PFAuZiVVdC1oLDcnVSttZj0sXSYvI2wmclpuRWdKU3IkUnIuYkkKJSsxdD5GUk49
NUxvKVMsIjJrTUZpMlA4RVtqXUV1WT4+RHNrRVt1cFcqZCUuamMlJStMWUUpTlhcVD9dI29KRXRB
NUNxZnBCJigjSWkoLSFJIy0sY0AlI2lCOVhyLF1QcWE3dVIKJVBVNS9TL2M6JTUsXC5bYSVIUipd
TmxsP0lPcCMjPidpO15tKTlMVzg5YmVNYU9mazBQQmUobzdfdTlDOUlWMjk6JVxGMEYydVBDczso
P0RfPzZZcFFqSjpdOk9MRFJdZWBCZGgKJV1naD9xZGg2bFlULSVRWTtCajMrREJSUF0/QV9DOm0h
WlluM3BuOEFXPy5HaWpAR2JVcDQuLEpKUjgmWFY7MUI6WS9CV19JS3RhQVRgZVZjcyY2b1RZcTIr
V1ZbOkZTNkpFdVcKJTRdJkIiXEdmJmJsUDt1KygpJTAlSFtBYTA9KUwmPlVqXTUtRCw+SSs+N2Nj
WGJzOTM1NEpHPGEzL0MoaVksS1lwRldVcyw0ZkM7JGFbdDhSbigmVFpZalFBY0spQilVN11nQi8K
JTtZaTQoRC1SRlpbdG5CIjd1RmIqMD1LX19wczo0QGNUJWoyRCM6T2ghR1dyM1VTTFNFNEcqSEZh
RmRLalNVciRjYSFyL2dwY3JyY1YrJihNXENvPCdwNFdsLU45QkA8S1A6I0cKJVRKYlsuaVNAP2Iu
clomcXE+TyltUStBWiZvXDVCQFNAYXI5RHI4YkRFVVlYU1teYUtTIU8iZVJOVCtOalMnJUhwZC8o
Pj9MMjVANkJKWVRHbClcaHVTWS1JUT5MaEpScXAhOlIKJVY+J1lnRWNScSI9ZjIzO1tlQDAtLkEx
TWwnUUFfIkolMjJRTTdeM3IjMWZzPCJfLEpXLjglUE5bdUhPWUlHJHBLRTRdXGpxSmFWXlBsVT07
JU4+IyZvWF9VUEFbbE5xP3EyLD8KJUZTYFpiYlJcJTxWN0VRX0IxNSc8OzRWYTxeLFdVY29PQCYq
ZmZSVzdYKHVcYmBmc1liP1NKUSlMbEpnJiRtWmliMVBNJD9BXkdpNCgzTiMyOGtsbUw/PDQmQmJz
WmgsW2k+IjkKJSs2biNiT1dgYkAnKHJCRGtYNmQ1RSNCSyorV3JZZU1JQDFzWidabzRCWkFBXjo5
cTgkQFoscjNzMjFVVG1dOys+Mmk8TElPa1dEJ11uOmZeaF9vYiU0LD1Ga0soNiZSLiRGSWcKJVY6
XDA5TCk6U103WlxGYkU7Ri5ZJGE5TG4hQCtDM1g5JDlPZ2Qva0hkRTRiYy4rS1FeJWZ0V2JrV1Fe
LW5kPkwxNTooTnMjMUBZQTVlaDhlXidKdTRbZTY8L1xtTlg6ZVEnM1IKJTxpISQ0SFxFQixcazov
ZVNzZjpRT19PSDlIcFJnYmQmY2xaUy9kUnE8dGZZQyNaL3VSRUJJRW0kbzJSRyI3ZjpoVUJfWkEt
V1hGWiJjUEVWWW1xTzxVXFp0aCRIJSMoYl44anIKJT11KzVWSTJXYWtwYDwpN1wpbD8hZjNmVCRW
VyxwYVFMR05sVVMrUjg2Mjx1OFs0SUE6MT0zIW5XbjkjdSI0aWpWWWZOTEpcaVE8NDhiLy0kTzVH
TFlBbSUjT01hbDc4TzlvOSUKJT47N3VpQUJFQj9CTXNWUjlgYSNsZz9SW04wNzIoM1kvMHFWXFF0
TG1HcVc8QDIkX2tkZTVxYGVlUTZFLDY7MU84TFAmXUptVFUlMCpWSFVzWiIoL09iJUdONTVJb0tG
KWs3a0kKJUsiYiteb2VKQUdSOU8lIkpjJnRsLlZsWFRyUiVxRFhmTiNeXzVuOUdgc1UsKDMuUVU+
OmkwZGNBXVc+YEIyb3NcZVshSzNHTll0Z3JuUy9ra0dLQk9LciVZS3B1OUpvPCs1JGsKJTtlLV5T
RmYia10jMk4qV3E+JGJjTHBEWlNIQVBPdUJWX2RfLjg1WVsjXFtWRjpyOj5WIWc0L20pZzFJYCYk
aTlGSihQXj4pLyVMJk1ASl5NWV5yTic7RGZHPyYuRGRmV2csJzMKJSMqPXEhMD9tU0o9VVdOVVU0
UllqPClCYFRXQVVgNlNqJG5za1lyOz1HJnUjblI8YSJVTV80NU1yRmUtRjI+QXJJWCwiVlAndEA1
JmY9RCM6PXNURitELCE5PENGZDh1Lkt1aGAKJXBsXiJ1YjQqcS1LL0dGZTcsOmczO1VNPUpEO1JI
a2Mib0llJnInZixxbEVyND5lbTFsTlRzZjkjU2U6USVTSkByaEEwWj1KJGRySDVebC9xPFdsQj9R
YEFQPkYoOjFYRGJlNFkKJTczXmg8QS1iUi4zPighX0JObFIyVC0lUy8qI2VDaiUzVXVINVY9ZUhl
S2NEODVVbFZ1TWlqV28hZyRSJ2lmPj4qa3UlPGFWb1EkaDhjMmlyXE8kRlhNKWJKdTZNOjJNJ1o3
b20KJW0uNF8nPVg6OmsrLSpJSUdXOVRAQV06VU0/VVgkKmQyb0dCPWw7JlprXzwpbVdePys9Skcu
bVBfTkw0YydNPVw7ZUNrXFpmY3FTZ1BwNSJjWGo/XkxSMW9NTTdqI084bTwpOlgKJSpiOSg6QixY
OkBnQTh0VURjVmNtNS4wTGdWQjAhVVNZTlB1SiRWL2Y4SDdtZGZBWVxobGBjLEAwWyRqTWZXK3Um
SyYsbkItSkg8TFliWUdrZ0ZRWjhYLWlCKW9hUS43ZztbTj0KJWZyWXRKNHEoTitKSUYqZHAkNitA
Y2d0W086QURCYFg7MUFZVi5wLW1haVk2Z0B1cklUa1EjJUVpRSlmVCxTS0Q3YiJPMVIjLSNUNyRG
Wm9QbDouKGNUSXVfYC9jLS9SL18yRDEKJWNXVD5tPippTlRxQigkK0M4VCYiY0dmRy41YWlgNS4m
QD5NQyJKLTpdcWc6YjVdZCVhWylMaW9AWVVeKjlUUippNVdpRnM6RDRDZCIzWHElWTk0P1M6XXFK
Nz9DRTMiJFZMPUEKJWtiZTxwZ2BfITolLz9xXjBtSzEmYiZsSic2bDMhRlNaKS4mVmM7dV1nY3J1
Li5LWiY8JEBqbzEuTmZKcF0nS0tvYWI2TiJIISRyX1FFPTRkPyFEdSdBXXEqTTI5ZHEnVigkQjMK
JStzXi9jXHMuPDlWXnFxUT9vYE0ucDJfXElvYTRbSHM2QDsiI3FCam86Ki1DQFVzaF0yKFIvb0Zn
U1JKR1djN0pDNjlxRjxmJz5wREUuJ0dpMThpXUtXQydSbSt0WGNzPFJLRyMKJWBBVD4rWEE2bClY
UDFdZkInKipRQEckSyZuVSRRODosQC5gPEwvTmY0RCs4MkZ1W09IYFspRkhyczdsSmQjWDpWUFs1
NTM5J1FsSVpNPjcyYFFLSypRIWVeN1hVM0kiYiptRDUKJTwwTmEqVXBCIWozbSZAN1Q5cj0rIzRi
VWtCbCZJKTtpZEY1NjBYUzpaLyZEWnJaNjcpPFxeL2tjX1FqIlMxUkdRXFNcXFlAPkFSMy9oRTJy
KVRsXWo4W01wPk8rV2ojRFtbcXQKJURONGNiQWMySENnWkRBKlJqSjpJcCxVWWFYMChUYFxKX2s8
RmtRYFNyW2giS3JwYjJMI0l0bCldOyNXQ0tkVm9rbTs4RC05aFIuaVVwVmQuRmZQU3VsXiQ+O0NS
NyVaMj1LaF0KJSchcVtsIl46P0o5YFFpc2RLQV1saiIhQzRyJigjMVU9Izs7bi4iU2MzOT0mc1lW
SmQvUHA6TmoqW01iamFWQkpsOiRudDc7QltcZk4sJj8haCtnUVNUamJoNVxfQFU/JiZxUEgKJW1V
Km0jY2VrZSxjcFxsWDFbVGZUU2FZTTJDLyhCLFpLVlhkJGtALlc6dTNMc0w+KUdDO25pMmlNPCpy
IW80OlA2QTZCLykiNFktb2NMNHI7PHBoMz8oWCwwLTI9PDFDVFhJMXQKJVVzci88RywsPE0uMGZP
USM0YlcuKkU6J19nQi10OVVuQkIkRT0/QShlZ2lTPUk5ajZzWEw/QjU3c2NhTChnYE1AKjAxYlk9
PDIlW3E6PHFEayFcL2RhVi4rblddVk0rK2NtWzcKJTxBYC8jLltsQjQrL0tYLTAhQUdVQz5Hb2kx
XzpzM2VMVztrWjc5QmNiK04/Y1hxXWVwaGdDMjQoMkwpMTJcPG8hIj1YPGNdWk9hNSJnJGxgXmZo
cnAzZkg9VSVTP3MzaV5vYWEKJUR1OTNDR2QuWjxkIWhoV1kwMFBTPSs7cj1NXTFHWC5faWdIWiVD
bEFJMSg/UGA7IjJ1RU1iZDwiJCZCW1MjLVAsND1tRDkzbEE2SzBSUWo7cVBrWlZcXilqMm47TVFV
Z3NKU10KJUI+bnIkW1ZuSUJxaXIqPkQxQCJuSToyOUloIz5IQVE6Z1lVWUpua1hnYm0jQVZjXTNm
V1ZeP0o1TzomZnBRTVkvXGQybG5fOm5cW2Q8OyctSj0lbWldI2g/az1KJmNfT2FfZXUKJWZMWGgr
OjVcU2ktP0RVQVI7PCY8Nl5ScWJDV1VtSjQnUSZtZidAWC0sOVQvImcvOHBeLlE/MkNdIjZAb1kn
WW91ZVRINFAhbl1GMGhnSGAzMHVzdWFxP2dBTTs1SWxlN2VZYVIKJSFgNzcrbjBmRGQ2IUpaTjYx
YzpGQUxcYGwuVT8pKkwoTiFOUltGbCplQCZOWlo6Y0wiRFFzRU1iOkI1OEFxLE8xZzVfTidUV2U1
XjVUPGk1QlxRaSJCTGJaODhSLFpyQmRHXSwKJVZiQzlhM1g+aFdjRkNEVUJfQFQkQHU1ZU9eZWpP
ZldTVEgzL3FuQ1UkVjUlRG9ucTA1MT8pS2dCXD1HPDRmYCE6RzZuSjFFZ3FYIVZeckNiYyc1JWli
cTQmbT0kO0dkME5HVXMKJVVLc2JzUmA2K1s9a0lpYCIjYWg+J1VtJGJtODxdZ1tJW2tFZiwlLm9T
LjpDZ0k+dDNDQWBVMV4iXlhFTi4kNk1FXG1cMz5SJzEnOldqUEdicTEoSzNlOzBzV0shQG5aLHUr
VG4KJURETUJqSztnWlkoZ3Bhay45Jj9QYyptSikzbldbI2wjOyppZF9pUU9WN28pZjhpPW9YIz5b
IlpJZGsiLWtWTnE3KDdoWFNFVSVDTDVZdDk/cGk3NzdNZTpaLmpAWWt1JVdDdWwKJUJoZVBPV0BO
KiZISVhFb0Q8OkpwaTVsQTZKayVqKTk9KkMvMCYqU1BxakUsOEFTcCtbNlQmWUtWbkhZK1piMC9n
NzxXYUdTWUg1JCosJDJzPWgiPlNcVFEndV9oV0I2QDRSdU8KJVU+JEVvV0MxOV9UYGBXPTFDYy5h
cV07ISYsS04zJ0BUM0MnImE7KE9JaCw0Lms0MG5mN3JmQjNrRSxnMGlLLUg4UWpRQ0lqaEFjdSRr
VS1LSmVWJSNeLV9cLlQsbjlzLlhcN0UKJSUrO2htIVx1cik/X1E4aSEjYjgrYSopQippLy5TMSVH
WnVFSmQraGJwLDY0QipKQTQhKzxCNS9HSVAkUU40MGlCMyozNWombDwnXG1kXnU1XjNSQi5rSnU2
K0hncD8yQ245QS8KJSkkKSh0WFhHOERWPEQiYzVqXyVgSSwnQCkxOWlcO2EqUmhoK1ZkMmswU0NL
ZWZtZUkwOXA+UTkiNEpfLjVUN1hDJmwyO2xRKlRtWmpIRkNWJ21pQz0nPktVTllkcVhcS1J0LTcK
JU9PNyxNNnBmKiJTbE9NblBAa19AbmFjcVhla0xnUEJJI19dXzBGanBVNkUkTio7M1Z1XCIrWjVe
XnFbWiNtIV5DV15CNS9RQmVfK0FfKGE8YGg2Pj87P09gTTVgZS06PkRZV3IKJVdUPmdxcE9bX2Fw
IUpPYTY0bXVFT2lHcWNEV2BqVzMnPjxiV2pcKF5WUCMyY0FlTDpTSlFCU0IyZGlYZSpSNmRLSWZR
MjhOPDRidVhoI2Y9SWU9JV1hWydHY1UlREBqMSIxOnIKJTdUOHE4U09mImc9LmBNWy83NDUnXjVR
SUdmXnNXXkl1QWU7TCMtQj9qWyQvMCdUIyYqNjU9b0ZFMW9LJEZWSXVAPlJcTnFCYFo0aSQ6MFJg
ay42UiwmbVVTLEBcRFVdWUVmVCoKJW5AaFNIYVhca0MxLitBTUtwTScrcUhALTEjLUs8cSMsVzc/
UVstPEEjRzZKblRIOUNeVyw8Y05rNDwqTEQkbUxwTXJiZjRiSGEpR21vWzVTPFFKTTdUc2syVzZS
UCxAakp0IV4KJSk4JDN0VSg2cldpLFBpZVY9WEw7YjBBODJEMzY5cl8nQ1FlZlU5WidzNW0xKDsy
OEhQTUBBOkNgbSMyPDpgLCRVIyJDMjxVKl9ZXlJWajkqcW5nXFhmUGNoIz8/ImEjIy0xPWMKJVcn
NEombShKIm0oTTMpVlBNSV5ETy1NTlVuMHJULWptbVE3V1MsIzlyTD5GZlorWVxeXGZFUC1pXjBN
XWRQRFxhKDJhIkRSU0cya0tgSGhDQDpcO0ZTWmtqNCIwSTQ0T29pKCcKJTt0P0NYVUMrMyMiO3Il
VkdadFlfTFhTOnFGUC5JSEtQQGxBbVBycj8/Ym8/TEYibzs0STgsWWM+P0hLXFE9OTcraDgsX2Iv
dSg4IV1hJWNrT01ZXiE4Nl1LQFJqYSknQ0RuNyYKJT9mTUlfbzpvSUBlQDpXQTo6XDM9SCoqUUBk
Vj5ZZ29gRjRaP0E4Y0MyPUdMbS9ZY0FMQnVGXTpyQSUqU1teOW5JPG1uaFY7S2ApdHA/LGknOERn
MEYuQCdMJk49dXAqWENkLlgKJSg5PkNzMipNPG5VMmk4XSZLV3B0Lk9eKWkhIzVKU1I2O3NDYksp
ZWoyXkQ1LzBPLC5uN3UxRURodT1cMSx1c0ZSPXE/RCtWLiI+byEySy0oYzVzNVQ8KGplI1FOYTU9
KzNMXDQKJWc9ISZIN1lxMWQmS0JOb11acENPTE5YS0xYdTY7cG1ubVJeSktkQHEsbXVpWWxVaF0x
L2dGOTsjaDNiTGcsPG0sKTtydDApS0dFOz42LEU3KnI5PDQ+N19nTSg1Tj04Qkp0ci0KJWMwUyRq
MCFRRStBQSN1ISo3SjkhaVw5QSYvKmNCKWZFYlRWKShVaWVIPzNQY0pfRCZEKSEnN2wmOENxK2Ym
b1BBZ0lzZDppOWlgbl4qLmptQy5McXRvc3VvYW9zK1IzYWEyXi0KJVZDWyl1ayVFK1ctNU4qSTMr
O3Q9IidxRChvdVVjb0kwOCZqRUZVJG5lR01vWD9XZGxSIUYkKVdiKllnaTVvTWhSa3UkIytYYk86
VlxPXCNnV2c1SS0hYUwhKlY6closMFs6KmAKJWRPN3BgQic6cTMpJi9URFRXV25iLkchbi9hMGE9
SjlZNUlTZUo0PiwrWykiLGdnLTY9OVhgPlk6LDhyX0Q0XyUwM2dLPDZZJjklMjMpYjg3OywnWTlC
UlVAPig2U3Jycl1wIjgKJVEwWFEtSG1yJl46RyhDbSJmR2o2Xz1XO0ZEVitnSjxsOE1sX21IaiMy
NTdoRSFlb1ExImMvWVNvTGoxSjBAKDNDRU83M1NQXUhwS0ZuJDI3WW5YJWtJYWUjQjdvdCdvNiFp
PCcKJU5pZXVPTVNCPnJtNys8cyltNjFXWUgjcUhoRGAhV25QMyIyQ11PLyJMPmVGIkNxKjZpamld
LmYwI05aQEQ/dFFcUGZwaiREMFtGbk8vNGQ9SDJZPTsyNXUmUHFzMSMiWE0jRDsKJTo7dCtlaSxW
UDpzITVfOytcN1cvKk5PbmVgLC8+Mz1OS1IqayNnWUU9RU4rQEAlUF1AS0NXI3FqJyhPa1JDLUBZ
NzZvXC5kNE80aSRPNGVfaFIrOyk7Tm4hVGdVOyU6aURJOCkKJV9aSjM+WS1Ec107MFskRFBzJVNV
NVpVVkQsRSIjbzNmJFpaN1A8WSohKFNwdEhgMG9DXzlsXCkqJUlVUl5XRTZTUkNOK1M9alRrajtH
IlIiMEgwXSYqNGtMQkFyMDkkJldnZCIKJS8idCZHPz0oK0RwL1JjXlwnM1BgWkdgYVlNXyU9NE8/
NytENio9a2ZKSTdLVm5QSzhrNlVaRTBbWnBvQTIkUXIyOl4nQyQ1UiN1a3BUS3BsQ0hfIys1anVP
KURtS2AtMmRgWiMKJVhnZEFdaVVrMnBQaEojJzAnWFJSaThYT1JaYXVvTEojSitUVjQsOTc/MjRp
b21KNlkmQnBmXHNqVHA7aGYhcSFMLXNPS2dYZDROQC8tYWpuJkRxSFEoVUsrMFhpYjBVaDgmXVsK
JWRSZz1qR1AvQTQ3VVElPjhxQytjJ3U5ZUJoZ2pJcV1HJzRCI0lFaiM6KTQ1PFw0WmhiRUdKZTpI
Lz0wY0BuNTRfRHJZN3RcMU8tOzVjNGhhbDk0V1xdX0NBbj1fQ2hgTDpyUCoKJSMuNTBvMGliL1M0
c0phQVRJQTdyYkpRczspXEBGby9yR2BQbGYncyhRYzdhazJxSTVZYUw5aklFRG0lUl8nIUFmY1Ui
NSZEMyMlaGFmJUxXWydDaVdSWyhNZjchKVxhcilPQW8KJWAkcT9lTStdPHAhaDZdYWNMUT5ILmw3
Om5mI2I7UTAlRjd1SztGXyIjYDhXOzRmZlMhRVBAKixJRm4wRWZRcihLOi90VmU5MlBxJldcZkBl
UlYtUlgrQl0+K1BhV2JCQmJmMVgKJW5ibGx0TCVpRTRmZzRFTm1eaipZLG5BLCJQS1BqcCpkIUts
STRoO2w7JC1Oc104WVZDWWQ0QklsVlQ6bScwIkNNS0g9X0pBPStbSFdzMm5eX1NFamhULHFVZ1Jw
VWRGJV5VKGoKJWg+Vl0wcTE3LDZZV2oxSF49WG9QMStyPFZrNk4rK0k9dUVMPklDaE1iPyJdcVx0
KDAxIi1XZUVrKFVeOWgsY1ouZGhJcGAoVS9UTFVAWmJNUz45VCNEKVJLcV4xNWxsSnArVyMKJSkh
bzQvZWA8LThlU0xDIz9OaEckZEJoTDg5TD81amJvXSQ2JTVALiYjNEpLdWRGdGxZN1ZkOHVzJ1db
azJCb3NBMEZWNG5GdDcrR21HUSlZVWgpX0dFJVBRVkEnIy1rRUtSV0QKJSkwaDdETzEpWDtYNCZB
YThAOFghMmRkSW0nLSlBR2cobFpDXUxgNShFXzY/WXBFNTBhcDBhTmRxckpQdDVXPkpbPFhYI0oi
Kzg9OScxWjFoTDkyS2YqIVhnYzRiTU4mPGgmRWEKJWkxXHE/Il5nbSJnQGR1PlZ0UidZXCVwaVRC
Y0hTYWhndVh0WkpRITNcNV8haCJrV1JNJEhsaiRbZGdxKl8ndDNwQ2ddZ2Y4XiQ2TSVyIzZCXDFe
W0hoNmUzL1BzRDclXytGTiwKJT5MK0FWVnNlWXEzYDlhZ1ssZElxPUdiXGFfaTtlcF9vJ0JwUihH
O2RzKT1WV2dzOlhsXmFnb1AxWmslZkxybXNOb1ckNUUwQCpHWiFNM1gmWTM0QDIlJ2pHLyEiR05l
NDI7TjoKJVNSNEtGJUdcRylyMDs4Vi1lbzcrPjglPzExV1o+U0g0NHA7M05jIz0maFA8MjNgTCJc
JERyaDMzX3NJUmhnaGskOzNyZVhBJWNFJFIiMC0vJy0rWzxwK2ppSS4idW5ZUjtLLUIKJV1TNSdR
MjI4KWFVcmgxWzA1JkRsX05vVlwjQjNcSEZePm0vTTgsc3JPKSJZdFkzYVUjZkFbQkBHW1hMYj8/
SlpcWU5xKkxtRlNxPShbUWg8bTEtIltQLltwaTkxZGVFRGhcQG8KJW9yNSpLQE9bM3MtciFsJUoo
XSJlKGE7SCdKRSNkSjdjai1OKll0amQhSUVaNG1gSlkoJU8wVl8rOWZEcHFjPCIvSzxMPUduIkZM
cUxIbzUybShuK15QPS1yPjErbS5EQlY6aFIKJWBcQj0laEtIVXFsal5JWS48MGBoMTllaSRwXWJp
WCZQXSRWS0A3Mj5gM11lNmAuc1RhWmg5dWNROFVjNDQuJm5cLzttUG05N3JKXUA1aXFEbFkmayRu
RDczKVF1PU9GQzUiNz8KJVQkMi07NiZmXUsibCF1aCYlKF8oWUo/OUsuZl5wUjsqWiFHQUxgVltb
akY3QWBaUT0qOFIyYlMpOl85KUh1R0Vham5mdD5nNWxZUjBpJSpPUy5GWSMlY1pqM2ZhPzgvUkcz
LycKJW9PIjJTaUtzMU1yNlRzYEpnIUIoNWk4cGBjVTUpQ3BrXiVTRikxOS8jVWsuYilWL05bPSNk
c2ckNHI8VDtYOEZXRjJFOyw7UTpZJVYkQ1I+WT80U2s3NV9DLiREPig2YF8zclkKJSk2VlwjNi9o
MEkibWVqajhoLlgmTzBOUSlVXldYWGg7WTNaQURRMiEzOWpvXV1ULUtJZm9CZ19BYWNaQGFGNzBO
NjsuJTFmcz5WTWxGUiRWJ15CNjFFWF5nNS9NTjUzKCNcIjAKJUxoLlwuQjJEQFNISiJwMmY2bENt
XnA6TzEhRClWKVlFNVcrUkU7LWpOLCk5cEVjLCZXYF1kKVZfIlwnbmozcVBxWWBgK1U1NVNYL3Jn
Xl00bCpFMSU+JEJMO1ZcalZJPE4tXG8KJVppOj80VCU9OVVuI2FvYkxNPihrUUQlWSIqISZcb0o1
Yk5FMy81VycwNDVJL28rbXBMWUUnOGJMNV5eWUM0WFxdXlxXY2ZwWUw/JFxtJ21qVyNyQyU7PHJI
N0BMVTdQK0RwRmAKJUReM0ckI1hzKjE1R0VZWm9JR0VSYmdXVVo1ISlIIVEtPDUvb2hrOGVEcWZe
Y1Bvc0NWbSIjU1RLKE9dO1VZOGRvJmInUXFTPTNbbU4/cWYyWU1fdUUuIlRYLjRsK2pjKzdeMUgK
JTZOXzlFKUBIOTQhbi5yYywzWHIiQ1Ena0BkQV0wbDJiWVBobFdERThLO2k5ImNJI1ZtRGFGRDhZ
T2FbLGxHViNSYm9NQEYwXGYlZkZsKlsmSWBDTzNDVHIyVnFXJ1hAMDQvb2MKJTFZTjZmIVAnOFBW
KWcqXC5XXWopcHBrUWpNOStDTj1JKllBPk5XIWJsWE4/L2xbRVxTaz5uLixGXF8tbD4zQWJyPFVM
aidqLz5uY185dTczNTExZyxQITVMUTZoVSg3N1Ekaz0KJVJRPykwLzwiTUNmVTRKWSU+T1dTZEU5
YWdIV2JmcC9hUVUiM09TaCJYPWpmRmplJ1ghPSZSRy8hcXVSNDJuM2UlKD9maTFbME84ISdAYiIs
ckI/WHNYSi0lZ15uViowKW5hKjoKJSZkJzMlUDtQZ0tTJTEuaUVhOjIjXj81YW5mTzZLVCQwNlIo
cSFtcVBcUi4kR2RJTGkiY0lFX2dILklWVWRENGtSITpDXylFRVEzKCJkQFJDZ1hjTls7MSRaQTAh
NyItYD4kLyYKJSVcRSUiSnAvJ1IiRkkhTjwrYSssQERmN0JQRkVfNSxKYW9qVDd1JmheOF1gdSlq
azIhJUdwTWVIImZwVTRQOk0rKFs3cSY3ZmZgL0pUUGZoWV5HXDBZLis/OUdgRUpKMHFeRmQKJUIl
W2g9bzVbUXRFOV04NTFuMT5DQzhCRWNNckQ+MkBzRVk2RjpcJ05xVm48JEk+U25iT19YXl8vQXVE
cDkkPHU7XHUoIWpIRSsxJV5iXkEtLElnRFxDdThCWGZPL3BNXHRwQl8KJWQwP2JvW1dZI2IkOEN0
bkxmQGtJbm1dbC4vTVFcSUI1TjczP2MuKi4hMSxSVU9mTTprPVohPGBNXFpRZUZgJWYvMG5OODJm
MEUsaiVmbSRkWlFeTXNgSUZSdTw1KSU9WiU2Wm4KJUg/VERVWmMlbmw+L25JW1hWb3FeMl9ZYElI
VFQzS3I1WmhgOUE9ImdMTzAtKD5BTVo5Wz4tTDQ6LV9JZCRdWDBJXCtTSW8iXGYiJF9hLmtsaVQy
VHBPS1QmXTw8L1FJJDlndUMKJSovclBdMzckUVFMTFdDSzRGPzMqbypfX2chMktPa1w9ZkhHbkA6
NkZRYWk7al5fTVVlY0c3VTIjKUVNcEszLjQqU2ZbMEVYOERRTlQiUTpQZl5rViFcQW9pb1pISDhF
LFE3QSYKJW9HPnUiPzhCW05cVT0qUjRLU2ZPLDhYYyFvUSNDSV5wQCpjPTA3Ji5GSzsvbl5ha3E2
bE1PVUBdci9RUS8kKVlOTVo7OGNBTjlhbzBfJCJrSkNBUUo0a00hcjY9JXNoZWV1IW8KJWxQQjhj
LzIxP0pbSlQ3Z2dXPiZYWXNeMDpkdC9rYzRecGM2Ul1TYE1pUVhMRSM8VEwpcnBzV1FpR0okK3I6
MSdnUVMzPEoiNCo5alxGVG8mV2hpQzYlQUZVQEAuOD8zRXNpaUsKJUQlVHN1Vi5MUXJfOic3LG4v
Y0RMT0Zbc11qRTpSMCVdV2EyLFU6Sl9BMTVJLGksUV10JGBPaFovYU11dVlbQyl1KDJRLF4zMSRx
JkxhTUFJRXEmWmhKQCRuaVVdOkhnNWBCT2wKJVpjMCE+W0wvbl1COypQLixBPVxkRS07V2tmWD5Y
MGdOaTRtblZtVFxtc09odVBxYE5ma2EicWBRT0ZMQE1sPEFCSj1qaT5ecCw5ITY9JDhqSUE+WiFp
Ml5iXEVcOCslcDEsLycKJUAzMDYqQUNrN1RpRjE6bUMvUWtNQ2NvXVRNNUclJ0hkJFZqY1UhUGhW
UFM6RmEtUHNgLSk9QFw7XkdKcmxmI3JpLl9mal5kKCpDS11TJVwmZCZhMDY5alNhWmk6UkY3aFFp
IUgKJShhZFlJVCRwNHQhO1o1LTw4ZG8sVFNWK0knKV1cL1lCL1QuMXIpb3FeSnVHYExlKUktVy8/
aC5pI2I8amQ7O2xqQUhgUGc4JT5IazwrQStuP2o7M0BFJVMmX2AkWTZjIVM+JHAKJTs1Xi1YMmwz
LmhIK080cC89LjFXbytJaC1MODpMdGZYJSomJGRcX01YdFg4Wik3b0cqLDklW2lbKm9Rai8lNlQt
YU0mXDZvJGNAVSdzQmxVPHI1aGRIRFUvX0VeT1ZaJCElVVwKJV0xbD49WzpQWmtAMUhyUFNkWHJi
Pjc1Q2FpJ0lySjZYVDtnTVVjYlZqM3VubkhtYzBgcXQmcChOTCNsRVtrWURUZ2tRdEElKm1NcURz
UkgxY0dnczFzJkxqNkhvZyJfJ1RUX00KJXBEQEpCKzBqNlpeXyErJTo8b0twZTZoI0VlNjxdVTEq
PEBPJGtyKGdjYGJFXW5QTUxULm5KZEhSXUNtQTJNIXE8MSkyb1M1R1ErWCU6by45InMzQjVpOS8v
NXA8ZCY1cWkyTUEKJWw7OEkmLFFib0tVbkFaanJjaS5IUURmclNaZSg3TTlQVlQ9WnBxJU5wRE4r
T2hOYkowVkxiaUVdcTAvTzloVnJiUUBWZmRRWURAJjM4ITV0RCRiJiVUMEVVMy9adVEvPU1cIXAK
JS9TWHM2TSwoXjltZ290IXMhOk1LZXVINC8jTTFJUTZVbFF0b0kvbW1VLW5eMmdAaSsuPlVlNFJn
NWklLls3ZDhuNDosQzxQK1VwQjxhVzEuNWtVR29EUVtvTTN1OiE0QWc4PlIKJVM5VUVebDQ7M1BV
MTdaNVghUEU8MUtVPGtsRCQxVGZqQjI3ZG06aFJsO0w6SDZGQjVSX0gzM2UnJTExImJBbitYPSld
SWRgJzpccUJXdWcxKC0sRSotI0lHJTc7PlBzLTl1IyUKJTY4bTIjNGJlQVM2VTlwRiZtLyRgZi0x
aSlcSXQ+JCU+Xz9GaTM5PXUrQVBXP00lP1VQQ2U7X0c/QlIhSjkyTDUrJ19EQydFT2YsdVwzLk1N
WnRgUilMRmg9VCk4bCJOZXMtLDwKJWVrR04zbGonU1NjNy04aEotLF0rNEsoWTdwZi5dLkdpcV9g
LT9bNSFaM2lyYi40Mm8hPG84ZVZyMSglYFpscWsmV1JQczMubVtBbDNKLC4ta1xVXk9PXUFVbzBV
cjU7XTE1RVUKJXEvUi81Q2tPZ1A4LmpLZCElSE9yTWRpVkxxaiZVOlRVJTtdZkRaXU45OUU8KnJb
UlJqOGhIPyRwUyhiX004KXJTWDQ0aV4qYj4yXSwlOTRCRWpZI0kwQ3IjdV1QYjZdUT9jYlYKJVYl
P2Q4Y05jLkFuc3BSamcoal1rZzozQ0ovPCdrVSU4SkdOSlZxWGlpZl5fUVRsRFh0N2hXaGc4ZVU3
dFc9LGUqTWE4bWcqNkNzbiptRjRAM1wrKmRiLWlrVmFhKGNGQ24zOG0KJTlMP19XPE9TWWwrQkoz
OitxUTkiUW5UNUBYWS5HLW4uYVRMWkZDN1xPVnF0cnFaPkdPUCkoJykqVFY7SEwiODEqRDdkb1Bv
STwrUF5wXz9dZS0uK2U7PT5AY1tMQE9ZSCZzdDkKJSZ1bEFmYVppNTNyPDQ1dFlZK2oqX25DIy5R
L09uMy09MSwxMlJtVU8oYk9lTjMnPTRxaCRtcnJXNFlHIV9lZTEjXVpSc2hdaEswN1laZzlCJ1dV
SS9LRGhjI011Ii8sLWtoSGQKJVw+RVtkSmc/MjlBTE90TUE4TigvWzVQLWU1YSxaUGRbKF1QWCU3
X2krR043VCxkayRrYD0oZSI0PWooKG9NS0g/ZHRRRS0uW1gqQUNkI2lLOjEjMFJicTkhY1ZVY1JQ
MEdxO10KJS4sP0FQPks0KSRePXJncT43SVwuYikxXnRdWCJaIlFdbGlZQCJtYCNcTD50KzQrRk1l
YWEhckxZPkIxVWdePDBaP1xdJk1LP3JMRS5nRTlBb1w2NmcpXCpgaEQyLzRNRzIlMjcKJVYqUytC
NnM9R11xWTxZN0Eja2MsNXQxZlYoMDQ9amQ6Tl1jQmkwYlFXYEtAMi5TKWhpYEhIQDw/PF1UbC48
bmMwOGtSUkREMD9JLy0sRG1wJ1Y7VzhWV3NrSi50ISFGOnJwcmMKJWJUYk4xJltoSlRBXGRWS1Rn
Mjg1a2x1SicicEdeYi5iNCpvIy1xRzU1PGhXKTIxdGxHYlc+KygjOC5VPltUZS40aT4jYGxQVlVq
YkpKUkZPPlc+UDNEUWxvSV1mPHVvYTEnODMKJVc3TnE2UXIpJnM4JVVJPyFcRCs8aWRQblNXSWUu
UE1IWDxOXHJtaGVDK289UFx1cGsnP1p1ZU1qcF9NIV1zKE1bXVdVTDlfYTBgWD9qIm0kOSNmclY/
IllGNEdGJiM6TWdTOVwKJS41IUs9IWFtPC45RmxBc29ERSpITm8lbSJgcSkuZEdnRFRbLDVMVTdK
JyltYmNfLVksY0MqYiYiN01EV0pePGsrPmxhV1c7N08ndFhhU2I7a1JmVTcobjlANz03XjhcRjFN
QFwKJWpqPyxVPWtYKGlcbWBdYVNwPmhCU3RHaVhuWFMtIUxBPiVdZGg0QU0yZmsqJ0NjP2stSzA+
RURWSE5cRj0yM286XVljbypKP3E9YiRzW1gpczcrTlxgPmlnU1o6cUlbQUJOcFsKJSI8PGM/Lmdg
WyQ6aUhfRV0pbmI6MzxtL0tfKSVIJjdlIldQL3MwcFFENWBmTEs0aksrYjsicWw1ZVs7LkVzZFY0
N0tpKSIhOnI8ZiYqLFJBTjZlM3RKOiFwXkxqZzpJRFVWZTwKJWpgJTk7MGdFXHNiJzYpQEQ9UD0i
PWdsKWNJPD02KGFTIyQ7Qj1gdWAubmpdTSNNZSllMU91Vk4nX21PXF4pUmUxPD5dMlVSMzBbOlYi
OWk4YExEOksnQzVgLUNPTDtsRzU5QnAKJWNBWWtoJy8/S3JnYnJhaUREcS9nYCxAM1cyTms8TSFO
LTFwSEVeIj5CMVo8TFg9XD5vW0gzc2Q5YStgLSpyJUpUXXJcOkYxbWpIS0pSSyk5WW03dVpYTnJ1
OkJIUHErTlwsNCIKJVtOWjdcY0pdNjZjWTtKcWciUS0jUVJVJCY6ZXBHKGU/Zi9JZ1EmSyw+Vzs2
SlA2WTo3Lz8xbjlpPjNMMkRnUUpvNWs7KGReTWE3JzY5QkYpVzllXUlQa0o2UjJsSSIsZTRKak4K
JW9zTnM3W2YrTzVaKXRuRVQnZm1oLSpydVJbXlkkPFpyQjVVWE9xaSQiOFQ2YGNCVy4jWmojNSxl
aiw4MyZhbDkyJSQndGViPVhoRElrTXBmK3BvX1hba11IamlyS2tMcEExUTIKJU9vajhqZyVLLmxF
UCE6LmwmYDwzJ3JsVWglPyppW0JvIWY9OTJUJmk0cS9Qa3A3VCkjI0AoUzoiZTZHTGRHUiw2JEMn
IjRfKTNHdD1zJFgtSTBDNVBIcjhMI1pLMm1mImgwXSgKJWRuO3RDbkFoY1xVP3NVTWkzc1hvIzgs
RnVUPlM0QFRIRU9lIjFyXyg5OEg3USZRJHJPLjFBKXAsK3EtclpPPiRPSWBdXzVxIjwmL1I7XmZO
WWZhRD9CK2djVlkxMjBHOTcuR2gKJW0zJChEXWh1Ul8+MlFdMm9kRyUxN3VdaWBjJyNWPktbO2pc
XGVNOlBnSEAnRi9IYlRAYVBrYVdcTltJWTtESE8yMl89MlpwLE88XypNIkJMb1lLXE0pI0MjcG9b
dDlqZzQnImEKJSk0cENJbiw2WEwnLlU0bHI7JkUmTmIudVJxZj5gbHJEUGxrcGhlX3RoXUYwK1M7
MkNdSj5dX0hXcU8hbCs+aU9DRz1RPmhmVG9wSThbQzBJYSNZXTIlSF5RKkAmXmEpQDlhREoKJSo9
SGk+UV47OC1ZZTRpW0dpZ144JShmWEplalgzPWZcO2tBXSxgRXJoZ29SbyZxNTg4QlI7YmNHa1Rf
KGRyKGtqIm8zVVstO0ZBYnJINGAsRTgmQDlUXHN0PzdmQy1LZ3RRMFcKJTxoS19zKmxvX1xoRmNO
L0xQQUw2Qz4kVTdNYD9pLUcwY2MpQ1xLNyhqT21HZzRcNzYvQDhzVEA0cVU2Pl5gQFhkZzY7a0Ah
NE5dUmFsUi00O1kpSStwVlVhKCRnLGQ8SWQiSSQKJWNYUixjMCE5QCpybSlcZGJRJU9VclpEMT1K
LEg1RXFYbSlQbGk2az5yaCc1aWY1TF1EIjh1VSxxdCM9O3AtNTJMWmk8Mic/Q3IyNUkuMWxdKW9c
SEJMUDYrUz8lWj4oNXAiKTcKJSRSJ2NTME8ubGE5KnNiXlhpL0kwIUJ1OnRjTGkwQGYiZ1Vlb1Fu
KVlAblJEQiRGKWY4OEFlXE4oYkwtZCtfKzlfNCJfMFxAQUpuMy49VldtTmRRcyFUKTRAaSM3PUln
R1puVVYKJWhfO2lfKEA4UmBDXm4nYyRAPjluTFlFblZLTzxASm1idUxJcUMrQyJUXnJYdExQJ3Iq
QjdXMF5eYSlsJ3BBN1NNL09eV19mIytyLk0uaEJIcEVtKypmY1E0ZlttP01gJzopL0YKJW49RFgn
YFUnMEBbWyIuWFJgYHMkPjtwV1pyZSQnbyE+InQmMktSSlpib0NVdVRSXlsxazZhUWJnaUgyXVE0
JixeKXFJV1YiQyJjV1I6YW0nSypvYkVkdUxLN0s9STMkSUUpWEgKJSVDR1tBJ1g9LFZDNVk7bShS
M0pxWz1aLyNPITNOTWxNWT1jJXN1MCtNQVomS1RDbXRUYElRZjhsPE1CQE4uXmJRKFBpIU1iZU9j
dT81JEJGRDIxTGZvYEsqXGBiZ29zVU9rLC0KJTctbDo2YGNXQjlAcFk/TGdCSmtIYEg4MiZfMCMz
UDhyMi1FP2FjWEVYSFIoUV41NnAnJk1VQzkhXyZJbVBVQTgkWVQxPFw+Yi5XJS1kQlQpaCZTIVtF
P1pkYWNhczpKZUIoQkkKJWdOVlJxZmk8bFBJcXNvMUApWmRnO0UrZDEpdUNMJUVTX189J04nP2ZV
bVsnIVcyJmlXQiZKJ3EqOytHSCZeUkA5KE1yUDgwYCQ9XnJhL2AzUFwpbU9CPVZpKSdLW1FbV0cw
WXQKJUdTcFUtND1gIV1gPyRgM2Npb2tOW3JzL00iQ2w9YHJyWTpNTl0iaytjUm1kUVQhO0Y6Y05m
a1pxM2dTP2xFUkBtTGZxJ1ckYlk5Z3FnOmk/b0hXYipCIk9GKk9tcV86WVoqOzUKJW9RZjtJSSc5
cGpMX2JvOHBHWSE7USYnWmcwKzFIY2tERzUlLS1oQUprO2NsWlxSQS8jIShcY2c6cTtuaUg8KFxs
XkBIajYjQ0Y2clRkQSVMW1MiS1BaRi0jaHJrV11kMVZhOzcKJSdGJDJlOGhla2FIJlI9VmY1TGpd
NU4mZmsmQmImQWJNPk4lanNdT2QhRztzbkhKPV4lVC9XU0hDcEdcZEwzW01wQ1VaJFpzMSctOSIo
bVdIXThbZ0BoaWpudFNKL3FuYXVDWjoKJWlQcFs8XnFPOjk1LFsmV29cOS9vcjRGS2MqXUcwajVW
ZkM6b2g/cmBIdFJpRF1raHJQQVpnKzE+JlAzZjA6V1pAblQ7Iy5aODlSMlNiWD41LzViYFBGV0hd
MEU/cGlzXzlhaloKJW1YKC4qOEowOUFGLFs5S2VLNDU0NjdtUC1sJEZXXFxuYXJCTiQ9Qi1fTGZj
KUhvJEIsJzpxXUVUIj8wNWQpMkspUzhaaEQrbF4sMGReU0o9NmBNVildTTMzalJmYEk2PnJHKlAK
JVRUNjZAT0xyZlQpalpDTVpaSHU7NHFQQkQ3U0M8NSwydUkhZlFzOS1eKFBfI2oxSlxgQS47SlNn
RG82NGZ1UyhlUVVBXE1ILVU9YVU1LysiJyNDWnNrOiZMKUkuKksjIzJjOy8KJWR1OSVJXScqb0w0
OFEwXHJnNUBWYUY9OlhVQSpVSkJzMzxUPCQyI0ROXig9TkFZQkAmNC4/TTFecT0zSlFcYEUkOklj
JlIrNS1LYms1M21RcTNRSlQ1NEVGX3JTJ0k1MmBeZ1sKJWlZcilQW3AvcWVdRS9mO0w1IlA7P2Jt
XCFLK1JQXic+ZHUhS1JtTGgxSVxyZkEsJzliTDksZTcsLyxGV0dcKWhRWCpqcDkvSGB0PVkpN2Ve
VS1vRFAoZEhwVWlAU2licEtTXEYKJURhXzZIajdtc2ImO2teRCVzYz5TbyNhTiomY1krKG5yRHFQ
bztNSFVcKlBQXkd1ZTYyPVhgOEsjKyU4UjAqPF41ImhgclhILTghIkU3MHUkW2dZMDVcOEdWdEVA
PjloXScoUlcKJU5XPkQrZlZZUk44SEZkQ2AhX2dqLHNvLSMuZiY4W0dBcHQ6XUAlJ086bDBHPy0/
UGRwRWZzNlRfOHFTRExhIloyVUpsS0RaW2VnWS9rMllyRC1IRChpZEJeMFU1NDAtcSQ0RVYKJSU6
JTVNLCZXOkpwRGpkbmUnJWMxYmpsIj80IVRaQCVFckAoXilsPlhKJll0Ykw4RC5ZRDooTE47ZmNq
JjozVWhnUzIzNGY4QiooNChwZElxLXFrK1cuYCVISiY8bmI8Y0YnYloKJSxgbmU7Ri0tSC4/NGVh
bTZrcmFsIXR1UV1TZjhwbGRcdGs9MmNxZkFtRmhYVjspUi5balxQZ1dJck87c0ReMVhbNFwqSCwn
V0MmNVVcMj1gOC8rX05IbjZvbih0R28qYkcnSWwKJW42QV8/YVpQO0xAQjN1aVVXbkxARE5vKi9K
UTtqYiorQGQxUDhFWEpDL0xIRVN1ZDlMSW0ncDAvWFlMZVMzMCVhamhOV1c3KUg6aSVROyRsQkRv
MWAoRVtCR15ATFRUTGo7ZDoKJS9LIShKPmMuXmdaSW1qbkBPTkZHTEVhMFosMV48SylyPElCWFtW
WUNPW2cjMDkicldJJ3RoVTUzUjlNUChHP0NTPkRgJV8vUGlwVT1SMTwkQ0tHWl1vXCxmRmM8IyRs
TFEwLzAKJVZnQi1iaTU8PVJYUlxeUUk5MFwqKWo4RFomYyEnanFIRFNLM0FdJURRKVE/OmpDUSRo
NylzY21JV00mN2BxSSE+PUk6ZGVeLzA+WGA4OzJeXjgyVD4qXkdUQjsjcC0zO2JmOy4KJSg8dEpr
MUJsRkJbVm91bWtdXSleb3AuaTcoVi1pIVkoSjApY1hXZUhib0JjQm9qNTJKbWInSklGMTtkJi9c
USxIUEtRLjIzKUxSWUBLQ1heOTxTS0c3ZzFAS0dBJVpiIi1mMDcKJVk1NT4wYlVXZGRRXl0wcW1i
OmdkOW1aTGJjUigsOGsvUTVtV09nMlNCUFZDcG5DJFddX1QqPmY7bmE7VT8yKDJGIUNfQjcodV08
PEtxWzknN0QoNHRuNGFkVVUjXFJ0R1osYUUKJUQoMT0uWjRkckIlbi50VFJST0UsNWxrbl5BQHNM
bG5QWFtjLC8sTVVuWz43NjlESypVR1p1YFtELTwhZiZnU1hbRDhEQlI4dDNKWDVtXC1BTWwobCM/
S0ohdCRVcV1baCIlbmwKJWI+OFxHNyVBJjJeV2kwPUBzPW1FZ14tX1lDNCJpQEF1Ok01RyxJRGJJ
X21fRmxjLFlbY1JvWDJbLFk3X1MxLHVFKjdsPlhmJDQuVD8zNGQibzlNbmEsKypMbT9MJSlYYDEm
NT8KJURzSTs2Vz05T2NeOFdDTVtjWXEwSTNPUkRWT0dvZVxqbyxJLGA2aTQqMjNhTEorVTxGI2xe
NS84YHMqSlNGXFdqSVVEbGBTKVw1VU1bNTdBQmdRKi1WV2w8ZGRSQGZHcklLTiUKJS5qRzo7MU4o
TXRqKkc0WUhkO1JcSVdDPzJCVWlBSENsMTVfSUdoWVxGc25TLEYsLGBcW2JRZF4wL2pAO21gITtN
cWQrLiJoO1FUTHI0Mm07WWB0VikwUGhPMCpYSFlPKSpwaCsKJShGISpyYGBqQGssQT8zWWQrLGVv
YExcNTJHTzQvXl5dbipfOj8sYVQsI2trNDRiRD5wQSY2bCNCaDhwMCNDMnRvQi1fUEFkLj1ENEtJ
PT8iUFg+U1cpQWskS21NKFIwWkYjIm4KJWgsLFl0clNgTDciRnA0Qm5jckJSOyRfInJaVGU9Wjk7
ayMtbnIxTWxRZ1MiREAkRi9ja01aMlhwQzoqUUFEayVTYVNyRUhAQjg4SyYlOi82PlV1PSNHQF1Q
cCRJUTB0ME9mMHEKJUNRJyM6LVRMI2IzRF4wJTEjKCEqMypLQC4+RTUpZF4qXjxOcj9DcmovTisx
S25YWC1vL2s2Y1YsPVhYImVMXyQ+Y2RObkg1PiQvcVZkZnFVU0hvKXJhSzxwL2pPX0VFWyQsIzEK
JUJoZkI0QktgU1paZjlqRTUiQGJMazI3aDplRDdKWjU8SU9xLEBVOnQjdW0uPD05O2E8RipkWXUh
MSw0U19NREE+JjxQayNmW2pyUS9kMEslVmo2MktNP2xtIUBrTSFUNVZqZnEKJTMvOCVeQSlPLVQ+
VzwmWVpYLi9PXWYrMF1GKF06YFlHV2clRCNwZHU1clhwakxoL15tZ0pAYkpULyMtUkhCLC4yPjM8
aGpMSjRmNCNlNE5vRihNITIiSWdSTzpdMToyLzM5PG8KJXAnNTVJO1UqIm5OPSNbNlxmPXUtP1hO
JURfVUgnLihDMnRpPzxkbCYoOHVxcmEkLFReQ2JGVj9eVzpTbTIoTFtOQCg8Mmw2bGFEUiMkcnRs
PmBncEUsK146Yjwkcz5fJ1o1ZSgKJW1XW01fQGM0U09tKCtrO01zcTc4aCYoVicrLyYmbiVmZGdi
SE47W1gnXyMkaD5kMjpVZj9CZyMycVxkbDhVcCtqbGs7ckJRQGo7OTF0RCI2JU1ITiYyLC1jJk1g
YjU8L09SR1cKJWguJj5Ka19Aa0hsJVtIcks2dC9hXXVNMGBpRUNOXmVVNlNZJzE+Zkg9TnFSYyw0
cVVVNi51JCNKLl9DP2kjZ1c1J0g2dVJDPC47UVQ9aFRwJDNFS21mKVlQPFQhdEowXy4paCsKJTJL
blxeSzYoOWMqRidFOVstaT9HWyouV2hTIU5MZ3I8NDo2QDYxUC1kJyRMQ25iQGxQLVVHaldxXUlo
NzQsQUtGMmw3VHQvRmtoOU8sLGZPZnNBPy5QWydpIT9SWE1ARjpqKDQKJVQmU1hWZWZQQ1k0RTV1
XGZpUFc9RXAqbltYXz49KSlQNlhCJUcuKWQzNWhCSFJAVk8jb2NAIiExUGFUdTY4ZyZqNG9VXWNj
MyolNkltSV4oRSs4SlEnSiRWOllETjJVaXFNc3AKJUNWRT89RkQnbyZhOWk/RyRiUy1ZYzd0JFE2
bkonQCZhUW1mYGQrYUEoci5ZSVJyNEslbzEyLWwkJDBiKk9JYDw8S1tcO2FUUlxmZTZzW1xULlkz
I0YjUzRgPzVOSiEhJjRHSmIKJTBfY0xLT1whOnRGYCFzUjVHLidiWTg8QShEXClCTzhVNUk3IlFx
PHJcPF1BK2VGSTVqNk5mcjklNlVQbVM2SjBDYlkxc25ZTjI3c3BfQ1xcN1IhMytsLzlebkckPz8w
aideJWYKJWJ1Oyh0PlsuTkRVU3VhSGA2Yms9b2kwRkRbVG8xR0QhLlowPEA7clgtOTxTVGU4TWVt
YWdRY3VsLkBuR2JGVFxIQU9Xc2pwZiomNmJiNUdKOihnNidDSklyaG9FZWx1SlEoZ28KJV9dUm1A
ZjBUOy5Jbjhsb2A9UTo4MnJMIls7JU1IRDtyQ1shOmMtbUhVVFNrLzIlMTwmcVRlM1w+MD4zXjdN
V2kkSmllPjchYWFUZThITSNqUjBJOSVQIUJbZztGVlBmWSxXXDQKJXFZK2c+L0poIyMsMCVbPnEu
QXI8VTU4SituOD9dPGY0Nm0uYzB0cy1UVFdgJCxdSmRDajJRMkknYmlnI1FwaklOZityX1ZQa3As
Y2RnIyo6L0xwSFNcQVQtYVgnWCooXmdRYD0KJWFuZVtLNDpBWF8jNlg5WGpxMltDSy1JInMvTSle
XTNbVDBGLz5IcnFrI1lWUy5ac089WV5iJi9lXDIoN1VOaidRPmYocGEqSSspaV1uSEFuWGspalg3
MzYmWCQoQGRHITJoXkYKJWJWU3FIMm8kXlBiIUZwPFhZTiFHazdRYCpIYDBIJGpyWV1pYGxHbj9f
KGIhaEdmbyxfaXBoOjgwYD0lZTlfRnQsKXVXcDVKZixEWkRtTCpVNER0YiYuUlxzZ3E1aURxVD5l
YG4KJV5WLl44aSEjNXRzNz1KV0FtVW5mKGBDWjw3bDc3ZycmaUBpLl9jQzUoaispaE9nNm9fMEVJ
b29JWVNGbEoySzdHUlAxZzhxQihhWmk4ZmQvYTVXQTlYQTNENTpfLz9eaStAbVcKJT47dC5fVVhu
aS4oOixjMmVRWUVaR2JOaTNFL0BBIkFHSVRLQEo/TGVGNGdHNltPbjw6M1lwInVsI1taYShuW3Qk
YD83YCFwYztqSDBUVjI5X0peLVk4RXI7SWA1Rm5pPWNVSDAKJStqNVYpKSdvP0JYR0ZnbWAhL05N
OiRdLSUnUHBwLV43MychVHFdXykzWSNzJEJGb3MrUFlMQEhfMzBwYSMpZVUqN24xRmEoY1k8P1Yp
Y2dsImZrUWRLI1Z0cDFyTV1BInJDKnEKJUo4aW4/YGM1SDJQTjtvUiZLVzRXWic7NElcM1w/VEkk
IylxKChRcFtsYnRkQiVxalw3L0g3NnU6UUAjK01FRFRaNWBsVixKI0JRb1tqSHFObCE3Z2UjPD5K
SzduPCZbJ146Ui0KJToyb0BWMiE5PlpVcjZcN19PSGxqZk9CRHVSOmw0M3FQWmpXNjZFVEtodChJ
aEkhIUonUF9tSG5wIiYoSlwzM1hhIlIlc1khQk9ZYUtDXyIjNTJ0R0NrVDRVdVZMI2M+ZkthXCsK
JXJGJjItYy01LUwta1NGbnEkMCM5XEAnJFFjWDcqUU0jKWJZTDdvP1EsYThDKlNbSlooOz4iQ2hh
XjU5Ziw2T1ItOUlIR2doPFkySSRwYU1xS3BRRmc0XlhqYDY8VWkhZjNRMj8KJS1fRztjJV9GZy42
UUI+WU9BJnJpXVhXXGVrLEVFKTo/X3JTOGMtRTw7UUU3WU9GXSo4MyNLa1lHO0hmKisjXTNZKkNp
ODI3YURSSWRLXCFrLzI/QURzJ1JRODU+MTBfRGNNbTcKJWk9dGolMDUhJlBfSydNT1M9Rkg7KWwp
Y18mLFFpZ1ssbGhcblVQN01XU0prMEVZWFQzQypkPkxVMnAsc2staiJhLnAmcmI0V3BWWz8tWyVp
Kl5yRiY7dD9HIU9tRitIXHQqTV8KJSowZmFkOzFYUVVJYD8odSouQy1EJW5qal1POWRQVTgtQkxI
XnRUOj4nPDFyY1AiIlBucmRpV1EiMmlEbD90PWdKXD82WVQ0bykwQ2JhYnU2V2xtT1tcRCc4LUwn
OEVOJyZMIz0KJT5STSxbIjZZVWIwQUxPT2lca0RVSTs8WT5kdFBxLDopbkt1aWw/clMhK1klTm1y
NytBQigmanNgZj4hYDNbLWpLcDRyLGJmSDReMSNIKjJjIXIoNzREMVFPb1ljdDVBODZIUUwKJSdk
YG5MX2dRTksiRHI5OCZla3U+NDchUVwxVWMuKi8lTXNlR0FsXmdIZ3RwQ0A0XGFMQmpAZGJyIW09
MSZiNTNzZ09wQzVpcFMvcEVGUDYpRDMrW2FZIiFWVmVmVEBOQGhSPm0KJSJlUnIlTTpWciZmbEpF
PGYwYiZNKzhDYloyO28jOk9LRTc8aSxsRC9rTkdYSXJSZl5nZDNnX0o3cWduckFXY0cnclspXHJI
JWouQUNtMGRxYk5VbU43OnBINkZoKS5mcDlQYV4KJWwtcExaYm0tQilvQyJzPUJ0ZkhRIW0kNkEm
LT5CWXBUQXQrQCo9RlI1ZmM5UzVbJlNnMFQ/RkdAZyxlXWZzKV9TcC8kUiFaKkQvI0tON05NLyZK
OihMX11rY0lwYTZwNz1QUzUKJUNJT05cUExhMj5TbzQiO2RmKXFuLSYwKDxBZDcqbVNAMjA8Zl0k
KkZnXDMzaVttRlZZOSRscUJaWW9yQjkxWTtsQDxQRydaQy09TFdXIlQnak5NSGAtamo8TU9KSyop
Sy5PImYKJTdQciR0SDsmIVtHQCVeMk8wMVUzVkJCJmlkTG9Xa0EkNzxMJSRFK0I1OFJiRTxJPCgn
Pj9MXkBSYGhZOS8lZz0yNDRqK0kjcU4zal5kPDNHS25ZNjZZQTJcWzY6Mj4mKWVxcjAKJWJKSWBc
TF90VFYqUV05MUZIKSMhcDUjaTE5M0JfbUxOMis0SkVEOXFRQm09PSguLlRVK1V1b1xcZHIsKChk
c1olaTZRYzphLyNFZlczcCZQJTNjcyFiP1ReIjotXVwkQSdWYzIKJSdgailZWzNjIWdHdS9uN1lt
VjlAVXRqME8mWDNQOyRddFtibnNTMUtBaC9gVWZrN0ksPV1fIWlaLE9cUyt0TmNMUHJpY0BQYlE/
OyhORkIjJ1xSIkhGQUdLUF81U2Q0J1V1WW4KJWFea0YlYHNOLFUhN3F1R1RCMz0jK3NMIT1BI2cs
THA1SVdZW1ohMlc9QENFJTMub3RJLDNDXGFHOD9pcE9XO2FYK2dJV0FLQ21ydDhuOWxMXTk0VDAl
aFg1NFhSbGpFaWwvUmgKJXJvOCk8IUReKy4lPztfOVw0YWImOlowSl1wLGJxPDZzUXBIQ1FhPnFo
cklvIXBfIjA7cVpAbkxvNz9dXWNhOjcpTTxeYz4nTicndG87Om5jcDhrPUpcdWFzM0kyI1Y5Y21c
L00KJUBPL08nUCFNSFM+MG1JPjZpX0pzZCFeTWBGXidsITZwMUBgTkJfcllnOypOPUxiKkBkPW1c
JzRoXT88OCtzc0NHUjVQNlNoM1QsbV1VOCwvVCpKUFdTXyFmUFdnM05rNDEiVS4KJVo4SCpdOWVM
VXUzUHFzbUtYNHRcKnVPJ2YpRmAvR1hRUF4tRFJpZDFLZDA2Lms/XEgmKF00VCskLTBvKUdwKDY8
K1NcP2FDOmw8ZVQwRkc7YF5ESiQtQCleRCQkYFlFRWhQVDUKJT1gOzg9KWInYG0jbjQxISYvL0hv
JnNTdExgaUpbMVdMLCVXTS1UXDxYbGxLdWNUNVIlXmo7X2ZSY0VeYGFPVXEzJyxiViskcickMTdk
cmZxUU80cFtaUzVDMEFQJTJUSWNkVWIKJSxHQjFQJzojUDVKJS5GIjA/WmhWZT0/ZV9qJSNQTEVn
bXFdN09LTW9sPCYmNmwvSGk2YyYtOiwzKj5PWV4uOSUjJlIsTT5KOCctbydGaGo5M0osbmdwKmVk
cyZvS1U2VmVnYioKJSRSTD1RM1FvRSg9SjwmS1pgQ0cqOlxobWhiOmFMWEdQUy1RNEFZSD4/WlF1
RjM2bG1mQlxFWlZPZVNkK1NAJipIU1Vyc1VWXjlqPlFwP1lUZE86SE1gN2BuRiwrYytDLC9FI0YK
JTcyTzVXTkw/aFVpLXFEb1FaPVZgWmEzZ3FLRDthVltDXEA7NEtDNFxmVmxFVkQpY0VSPShVU1pn
O3QsV1JgKmcsLnM5Jzk2TjJHL0k/RWssIUgtQUZiNXVDSEtLbStWPWJIaHIKJU1ddEY3WVRMdTkr
SSk3cjJddEwmXzBrOlAtVSVFNz1oRFkraHNYbFNkYVMwcVFbdEdwVzldb2U5IypsViw0bERVNUJk
WXRpWFRqJ3A2UXUjZC5IVEUpV2I4Nm8qKzIha1RVTWMKJSQ3OSJoKT4ubXRNXik2QkRUJ2JYWTo5
bFA8NWJKZFdoWmUzblImKElGY2BLKDM8QzNNaV5jL1E5VFM6SClCMTRiRjgyKU8iSnMnTFE0cCN1
UXFHO2xPX2lXLTRFK21MIjhpIzYKJWZmX0hESCtgVFFERi1KSjUnIjdhL0NXLmFacV1pbDVbb11X
TVFqPkYkKyNIXlsvQ1BHKXExbkRaRitcKDkxRzNGK1xrdTUiKWFoO2M6P2E1QjRCaFppIkdpWCN1
YSJ1bSFgUWAKJVxuPjlzLWBnUl5DVFVeIWM0QGNVbSpyWiY3TikqLkpdaiI4K2pCRjdtXkEmMzs1
UW5dSy5DYlRTSTFgJTE6SCEtXjJhJUpKLjVCUz1tbUtLM0BuTlgsTXQ7YUBDUzFhK20uXGQKJTNQ
V1MmZy82NzpsakdEOF9sXktMaUNGJFhBX05mQ28iX1JcSUJuPF0tNlYlYltjdW9yQFpyXShkV0FT
Ly1VPFVaJj9uUS0mVG0yOC11ZzpJUisyNFxWIyNBWypQRkdUM1VENl8KJShdPDUhcWtTZitbYyhn
Il07TGk8N11yQj45Uz0yI2Ymak47RllQaGQqUy5KJz5dS25HR0dzUWZPbEpAP18vNU5nMCFPLDxd
bF9Ec1lPTiNRPipFcEtXJDY1PmZuNiItaSI3cEIKJV4hVWZjKDFrNHRhWWUhViNqbXErRSEiaCdm
UWQuSVI2S0cmayxjTVFJYGBhJUUoOk1nIyZZMGRUbW0nYzhxPTkjXy1sZk5xNFdDXzxFVDApNysr
YkNlSCteYkYzWVEvQE1rMGkKJSkvamAqZW5wNVE1X15iOz90WHNYYldtclI5MSdRTjkjOzBhX0sz
b1tCKlJjLD1UWnBiWWY8JUtZaC1QWTU7NCdjM0toOHFxUDM5a2FMajdhWURDbz5XLlFNQjhKLm4m
QFhrblIKJUNgTyEjJXBZTHRFYkcvWDpAKVU+aVg1WUxiZkRAQDJja1lxVXVFVF4nZS4tPiVOMFVC
JFcoMzBfRWFYXy5yOW5KKGIkJ2kjSG1cJ1pGbFFUTGV0KG1hWEUhXF4jT2pwOGUwPmoKJVhFbm4k
JmM/Qklxb244L191Ii4scXNeLHNQJU0nci9fJEdHZTlWOCtbNnRjW18xMGRKUUpsMFosakgzYEla
MFhPTiclQF9XYm89ZGNHQmVeKCQmXz8+NF1WclM/RktyazgqZGwKJXBNNEMhZVBRV2pITnJNVmxg
Zjl1ZDtpZUZNaDYiYC83M1pgZUs7R20/b3RacT08UTdBVXVlPFwnXm1pYEk/dGpfOnVYTCYxaTI8
W1wyKz1gMWAiOFk+T2RJb0s6Pi1oQTA8J3AKJUFOLXQtXXJkSXJDUC5kWk8nQUlBUTgsTGIoPW8v
PlpwM2ZbMnI5O0wtTkErO2NFZTFXVlpVNC0qNC8qVkciSl1nMk5YXz8hXmM+LElvSFI6WHU9WHBr
P15TZlEuSDk0XnU2ZzAKJWNza1JFI0tnUiZfKmkzaWtOQW4zNmktMV5cJ1cyTFhXbTY9M2o1NkBL
N0pFcDVpZ2hLT2w5QGNndThyXFo3blhqQ0RRYmlOZWRaTzpvIzQoXGpZSWcuMFw5Ky85NzljW1pk
WXQKJTdaTzZyMmAhNU9QKUZxRFtvdFBxWydRNURORSU2P1BNX2NeTUE+XTRANzBhQWQ0IkJMJlUj
JFgpQ1M4aCMhdFQjYlQybyUxSS0oKD00VTpGO2xvLSInJ1Y1RzBNOCkkbSE0QmgKJVxeTDJBUWxf
WmUxZilMW0hsZVRaNyNEWGxmZFhZRCU0QTgqMzE0TENrSz1zLSlVSFtPRkk8KXFWcGNFODBlOl9a
IjojQihrKF0xMHJXN2FhNUZhNTxQc1UvWENkV0k5JFRoT1MKJVBPXTUpWyJBWVZZPWhcZTNwO1gk
I2xWWnFlZCU1OyMkdFI8YUFiO0xKPGBZQCs/NWdLWlpWMlhbNlRsblQsSThYTEA7dWY5MW4oakdM
SSE9PkZLaTg+ZUQyYEtBYjozbTc8LisKJUskWCpaTl9ZKkQ3QFovTFI3TVk5TTA7TyNlQThaQSgy
UyNXJmY8PmM6KSg+bitRcTNBazEsXjsqTSdnWjElL00zbGNvb28xZkU8cDtTJyE9N1RCPHAjb0A9
KlM1IiNpaihrJkoKJTdnVk4hZUpLSCM3SHBBOkoyOixFMm9gYHJrZE1tM1JjSFsoOSYiLytyXyRP
WTQoJWNURHVQamJpYnMrYD9UPzZwaFdycCJyKmQ5YEZbInVaLC1aST42PyhpJzMpdDhmXk0nZl0K
JUFFS0coXCRBXj5pPSQ3YERuUG9lUzNOWUIsIlBTLiszTUQ1VXM6TCFmW21EJ1hGVy0uYk8pM3FC
N1c0aGlRbFBnQ14xNExrJHQuQ18zNyknVXA+QGxCZjxkJiZHSF9pMVNqczgKJVF1Z0kxYTQkJGdV
ZGI5WjtxYmBYKE5YMGM6XTgyUTU+NjA0RCVRYmRpMGxqNzdpUCtqbD9tMl4+RjQtVWpML0JCSWIy
OSplYkAnNWsudCZJZWZVNDlvbTZWQVNMZiFpVVU1UnAKJUJKVj5PNjlPSilyJixsR1UvTlVsIUxI
XzhEN3JGMGkwKGYvMDJhcTgvQSNdNSlxbm5kbXRhX01KKUB0bkhsWEcnZlFfOj8yNFkvIlJPbEtW
SGQrZi9uRU9EZC9AWV87ZlZdJyEKJTo4LzVGPjBuVHVHcS4wKk4qV0NNSChJY3VvbCg8NzFcImo3
cDciVEJpPEVTZT8iTkdOODxZYERTaSFVWW5nQTg2cWpEW1chV0tuIVRdR280PUw7cExPNUpmOlhg
a18oUF1aaCgKJV5TKk89RlVPJigoNylVPl1TITxsVSxQajgiXEJOYi0sSXRWZkNUQjQwbl1qLzU0
b29OLjA/c1ZJTTNfI0pFVj1GYDY5aGlEcnAoRGo8WileKjxxPGFsT1dVNDU9ZFgvXUhEbzkKJV5H
NzdyXigySi5bJkQ5PjYubyNHUkU2TXIvY1pLdEJFTm9yZTlPVXJic09lYGgiZ3VUcFQ/VEgqKjc9
SSdEK0I5Okp1QDVFUEVMUnIqXThEJGxMcGcrYU1kWChrdDlqZ0NmTEsKJS06KmREaVMxIVtIYHNo
JV4uWFVFPE0jPFgmMlkiW09daDNjN2BLIkhlbT9QVVNMYCVhazJgUj84SkcnZ2xgMiZwY00xQjRZ
Nk1ZLDFxIyxlJVZAVjpqbG5PU1x0dVxPOE9bYi8KJT1XcSNARiZMMjVdJmNoMEUsQWxHVEEhZGlx
W1pgMWRqbSlXWG5XWC0rRTdAczA7J0ErJytwUiMsVTQsT1NNWldyZlhUOS9kZUJDRUgwVSQrJidM
UzpDODYiSGdLVklhWGUtMDEKJSYjNTw7XTxkOHMvNCNzLXFQMFVwI2lBT0Usaio/IVRBRnRjaGlF
OEYydWlBVmpiRidTXF03a0tZZExaZjE9UC1ZYWs4VVNuKGVWZUhVQFNMP2F0QUVgM20lJkVkVHEk
Lls+K2IKJVZrLFxeKFhVZT9wWFxWbmExQWJSUWpfWUFBcDBfa1spKTthPSEobkQ6YCIkOV9uJEUk
RVRzYVddb2EiKmRbM1NrW3JZZkc+WW9zUjI1OV8yJEQqUydIYztFOCw3PS5eZTpVJD4KJW8mKD1j
M2AoKiFpN1VXTCVoYGM1Zz1Ucjo9Lil0R2ZcSHFeOzk3UWFscT03RjhQaWlQa2UzUU5fWS9BRDYq
MUMsOnJeOW9VUjJLSSlXazNBT2RmUWgmTjhMTDAkR1klXXQ2YiIKJSdKYzsqNGdfWkRfXDp0SlJZ
TS9NLjhWOWRXamwkYm0+Ql5hIig9TG8+Uk5ZNVsjNm1MNWQ8cT44KVBXWC8iYydjWEtOJmU+Pikk
cVBGPGFyPi9BMU1xcCRvViJdK1lAaS1vQ0wKJVBoaiFISCpXQlFuMmFDMHFWZy5eJUswZnMhKGlu
IjdvX14oLFdbUUZTQGpIOVUzRyE3bWBsQmJwYzI7Um5ARyJyVzxbLD9XJFhSV3FyTmBQRG9vakVG
Wi1Yc0xGWC9Sa1xbMCkKJS5MYypWYmRvbzM/RV9rSl1LOzRvKykyaFVGSDheJ3MyU1hNVk5dbklx
aXFYYCNIOW4wMSwlTm9zLFwkWmxqckRLRms2OD5gSzZYOWBMNEgrYDRAbFszKDQ+YShMWi0nXkgm
WCoKJTs6WVhRKWElRG0kWmFQdV5MMWtlN1h0aWRUYjcjYChQKGdhV0RdaSpJImsoYGZpXG9qTkow
S209IiRRIiZWJ0owZ2YoYFlvbCZdOkorTGVGXlBVcTNyZ1NbXV5cUWplPSNZWCkKJW85XEBLXW05
RDhYMStfWHI7NU5XcylAbzFwcUpLXWg9M15VRHU/ZUdJZktCJG4pIio8TFtYLEtyVmwxS0llMkNh
Xl0hVERUQUclK2g8dVxBb19qUkZxIzc+T3M3R2ckRHIwSVEKJUosL2U2P2JdK1MtU0tzb3B1QXVx
Ml9YKCZyVWBPZ3E+QyRWKE9wbCdfc3JlOzM8JmcwKFxLLTMiVD9YXzU5MTQ1cT1wPFZJNUxvcVZW
TSp1YzJbUjJpVDtaZE1yLWZpaEQ0cEcKJSE+Q0F1N0BGRCMmZ1BKO1cmbDZjWjspKFduQEomLGw+
XDxaMEVuTVJOaHJDKFNkYjRTX0B0JyhANF5vM2JdUUQ5QF89RXJEa1tFM2BqQllcLVJtSGlnbjcs
WDhDZkdKIzFNUjUKJWg5cC01Ijs+YGA0al1ZSEdUV2gzS1hDZW1cZW90cChHR2dgLCUoQ1hycVQh
cXE7ZTEqXl0zbGBtOGw1cnFYMlAxXjQoRVZzIlFeJmJGZD1TKS9eXnJJazNrPDp1c0IyNy42Kj4K
JW1WXyliUFVXK3BSPjFAWG5vJWVKOTQlVl5gTjtyRSNWV0VESE82cnI7VWJJOklfbG9eaihpVUwq
VCkmM25qK1AzRGRML2o1IV9DRnBpLz5RT1xdbkVyVlEmdXF0ZzhgREVgQEEKJW8oMmJVKyEsbipj
MWJTZlhaIz9rXiZSJzlZUSlkKWh1RS0zckFXVl5GaEZTVnJuK15VcCReYWRxPGpsaDRGWmkjZG4w
VClvQUJIbk43TkxVP2YtOCpAJGBTJ08jP0lYMmgtNVAKJUxYMUgrcyk+V2ZuJVxuZWg6ay9xa0xn
Vj4zOUtURT9fKVYtb1JVQWY8ZTo1PDEiblslJUJEJkpDWk9wbGlRMkhObGBmOiVgaD1YVVBWalMn
P2w5JVlmP2w6S0dgJ1BuUERKSUIKJU1JR086OS5JLjtwR0hvPCVAaG0rVDo+aUVJLzZKOWBLJ1Zn
Wy1BMCgnS29ASUBTa2ZJLl8/PidSc1tPazcnVyI/P1F0K1xxM3A3ak4xMSQtJ0BCNjY7QVdrbigx
b3IpSVdnQTkKJWdbcGdcLDRlbmZCVTZrSTFfbyxLP1F1OlNSQEstMTt1LVhWSy9NKy0pQ28/ZGlN
JWJfLCNEXGE2Z3U4al5aXmQ6Yi5fcFlBY0hLP2ZaclMjaVswSVlFVyNFTWpyQSFRaVE0V0UKJTct
WUwwYDhnSEYuM15pRlBqL19iIzc0USljKWpDLy86TXNBSyldVC9pJDcuQURuUVhgSiE/UWklNmpl
LV4mIz44bmpVJm4sVnVkbjcuVmg4OSJPIUwlSEwyc1JkIWhPZU5tXEMKJT43dChvNCMtcTNbMV47
QGM2PjNzTHU6M0QqPGgxIVc4aSxyXHM8dFBSQTlmV2FOJE1PNUYjJ18wN0o+Lm00ZVRDZ3JBdDFU
USl1R3BraDxqLjc/OS9JaUJVKlpSaUMoP2RMKyoKJV8sKVdCZGxYJTEmOW9RbmQ1PSM6PUtNWmQr
X2E/K1khXmZRPUQ7bzk1JyIxSGRFNys6bmVIVjozSEsnMWBxbFouUDlgNFRcQjFJWkU0P0wzQU4+
NTVDcWBJNGlBZFswRzNATVQKJTwzXVRQWE1hXkJIQTIidFAlWFJMaUl0YFRnUHFsOU8haFIxKEJZ
O2FxR05MPUlzTSghNVArSDRJbiJIZjBzXWsxbEI+VDNGNU1mS2xpUTklJTEiaGoiYi9MS1BwLzlp
MyFHVV0KJVBeLzBmNiZQZUA6bEBmdHJbV2U1cE5wT2IlVitnSTI8IlhRb0NjMzZYJGU6V0ZYayZp
L0IuLGFAJiVuTCpkNS8+XiIxZCNgPF5rNG1OOT1PKG5MSillQWkmWklmdDhUZlhHaicKJT5DIWBp
J2gvKGRQS2NXQCJWbCw/TVBrcz5SYVVwZVVhXkY/UlhsUDQyVUA5RkAmZ0djKkwzQSc9I19uPydz
WzNKJHJUZWomTEBTVEtQclAlaS8oR1hlTlM8UXElZjo7PC5qVkMKJTdgTUdPI0dfcEgrP1RgJi9b
U1VgRGtBZk0/V2RlRCpARS5VL1BKdToubEIuSVspYFxWNkdwV2pCMklQcDplc2R0LThbJGU8IXAv
MyEwc3RTIkxYIVpSO2kzOjhSTVpKSy4nMC4KJXElNy5PNGElX09vNFVcMFpsYWNjTF1PUWdHNi9y
TD5wJlBKUmFpLFkpQVp0Q2I1UExoJC9sbi9BJ25dVV4iPG5OMWU+UjVvZWhJW1orTk5oVGFJYWYs
PzQqPT5abkVRa11fUEcKJTthNT8jUjp0WyRIN3NvKlBsWUFcXzQnSi5YN1tcRyFbZllsPGAhbGAl
ND1DLTFgXiskMiVSVEJnZmJTUyZnaltDKC41SVZGYGA6MiJyMF5kMUpUYkovOmk+VE1nZzsiLD1u
TWoKJTskO2Y5PCpsNU9lJCY3Xm9BSjk3O04oUFtrQ0lPMms4cyRKcFtrQk5sMDdCJFcqQWVZIm9Q
PSRxIktzbSpDJHFpb1ErKSxsN0IlNzhMLCl1aD5WQklYSyYnU2ZjIWdWbUhqOT4KJUZCSG4lVC1Z
QU1lVUZER0wiM0xePSFOM2RKSTQ1cEE9QmJOU1o6Q0BjX25QKDdQN2IpbD87V0JXTkZkcSFJMFgm
LTJzXz1WO1hCRFgnIlJfTyZJMms8Uj5VOWNKX3U5MW1MPjgKJTZsYE1KSkJsR0w1NS1fPzxQUmBM
SUUzKiNnTi9NJlNLVyI6L0pCYjwpM1dVVUs/KCRfcXQtLTpZdSEkQVYtbydVLW1pdF5OR002V2wj
K2MrKy1zNDtKcWUpMGdbYD0wIkE6QSkKJTwua1Njayg9ZTNLSz41OSkhQChfRyopPnFDN1xyWixn
ZGtrQWBDNkZCPTBTXSpqNzknQDdbYFouYSFOVEpcWGtZTltuVy88MyZrT0ZONEFxImE7X29qS1E5
IkpVRytRbDxQUEgKJWEuZVU2JCdLRisycWUyTU5ANmFsWCwvNUkjXTAtNmRrMENyMzpDQVQ5Ly1Q
SWwvUTBDY109V0ZqOXRwZ1FjJytIbzAzbVNycWg6bGxATEdJMSQiQyYkRjZRZzs+TmtNNWZtJmkK
JS9fI1FEUlNWIyJiWzxAL2hjSVtpTFZvPW5hSio+UTFybGlTYlFtcC1lYGFdYiE8PkFXa2UwbTBG
QFczMEJbM3UkaFdpbW5aSGFDNio5LCRQOGNnQUs9UTUrZGw7ITxSMCxSY08KJWtVQlg0LF5BSCZG
PmtRbnI4Ym8iMkg0aDUuRFI6bGA/bkVIWGljTkBqTVRQY1FAZ1M2IT4+cms8MDEpUjdQO2E+PWI1
V19ZcGs3bmNoIltsZmIsR0RQYjwiRztQXGNPWEtpVlsKJUMoblAjWkc5K0lgPXVHNVooJC9yVTBp
MTkuJCE9LnM3SS9GYTFmcFhIbmRiQDtKSUlAPG1layhLNCxVRzY2L1lSMjllMyxDYz44OjVyKmYj
MmpFb24pQi5hW0UpaE1cKTVQTSQKJS5FUFtGQkQ0XWE+XDAtU1QtcS9WN0Q3REcwVTouKi1HWlg8
L2VjSHUjJEZXX14lNSYzaFlASTArZFk1QWpPRmYiNF5gQGFodUVcXEo4RWVdQ19aJDNBJ0BCLjdH
Xk1sNTRLZz4KJUhSaVldJmpbKDssXz1EIyRFYSRsR1FKIyIzLUBSXTxAKmgzLDVCLTByMS0uKFo3
cE1vRFdVSFgiQypQJSw4WjhzXXRBaktjYjk3XFQ5RkByQkQmPz4zTk5NUlVXWUE2ZSlaakoKJSNh
anBqZDZzSDwlJVJpO0NqN3BUWSIvdFoqYXVBbD1ZZlRhRyokSGAvaU1RMkM9S0FzR0o/KCUiYDNM
RzI8IS90YVZOTCVhV0xnaklFTTMrcT1OanRBcTA1RWw/QCwhWyg8cEAKJW5oVChiTFJEJU0+IlVP
aDRqKnFWVi4sOGclQT9FZVE1NjFIT1xOSlZicGQuWlEnMF5AUFUqUilDNlQ0QjNwSWhGSkFKTnBk
O1BTPj9tbWJCbVg3SS0yLShvJFMzYylRMHNLJiEKJWgjXG5PQ0AoYGVpNUQyOCZYUGkiJC5yNyVb
SiVjRUddX246LTpNZCs4Sm81dC9rVEddLUJHdCtLPFRDT2o9LSVPK3JHX0lSa1BhUSdNdCslSkZU
SiU8KVUjc2xQVWZaJFklOlIKJW5CIStYQDZpKDVvKDx1VkxMIVRiZ1lBKD05byhtVnMxM0AjZzhJ
YF0yLUNQbVUyOzpEcUZTSSFucTJHRDJZdEYuQ2RqTTlhUlZNKWAqRTlQISNSb2I0OGIhWVtUPUtR
UEleOisKJS1gYGVCVEhHKW1rPEgzKDg7c29kKVJIbWptQUZEKCRCN2YiIkEuRW9EVWpCJmUubCxW
LDFBJzAiPUckYEE7Jz5UPSs7KDtmMk9IcEAiaWZLN2Iyb1suI18nYDo0JU0vMXRXSkEKJTxxazZe
QCxUPms8SWkrXytpMGxZRm5mVmQiN1gtYlo2XkErV0g4XlsvO1ZFTSNgY2Y8UlJLclVSbzBBQ09j
R1FnWW5uUUlHbmtuZk10bUdmMFZQWyNncks7bkhwb0w1NFZoUmoKJV5xKGQtJWJrMHAqaj5QLGJm
Kyw5TldXVFtDP08pRm1qVUtzbmNPQT4vSyg9MlAnRjtBbmgyUl87KThWNEwkKVtSKyNFUWhRYG45
bCk5T2I5PSxHbXEqKiUjLzEzcUVjTzxDWFQKJTRqRT9pWTlOYU1iM1Fwcy8xWEhhIUBqcEg9XkQs
TDcrKFMzPiRpIytRaDckLj5Hc1A5SjVyaG4jXWJWQzMsVj9rImpnLihYNycpKT4hdW5lSnNnRGNr
W0dVJmBrMDtySjdoTDoKJVBHQlQhbW1TSj1hTT0lPlxQV10kV0lAXTVIKzlEJzJPMGNRRjtlSzU9
OSdHVEY8SEYqKyMuLzM8Wz4uWm8tQkRMOW5XS21kSihfTF5IOlNlcj8rZicpNkU5MzFtXW9zY1Zn
SysKJTwtVDk6WjVlMm1dPS1uZGBdQzYiKGBXX3AzOV90VzlUXGZacTJRSEojUzE4NlxyOUwiS0xQ
JHFhXzwoS240NzIvZVZbdS0/XnMkYDxLKjRxaTxbYygnY1JAVmAzMEw9VkR0bzIKJS5bK09jLjlr
QjQnKSRuOTtsbEMzTGJTLTYzUkVfPFRQLC9wRVFILGtMKGNKPTcmbGpsV2NhWkdyPWNudENPXy9s
LG9RMCwoXVQ6RTJ0TV4lVjAmWC9KWFZcNHFKJlM5bCVvdGkKJS82NSg3LDo5WG8kMHRnLSJ0Uls2
VkslTEpYVnBJNVpBKipkUGBobS5mTkAtclIzblIwIUcvLmJVUCEqMXAwcjpPZjdiZCsqODloN1Uq
PFp1Iipbay9gXWc7XlZDLGFPKXJ0PG8KJUVNcCQtPWReQ11fVFcrO1JvVS8nYi8kSVpDbXJKc0sh
LHE/LkZEVThQOGZMRCRAK0deZ0FnRjZSKHJNJVRNVWAmak9iWSZULUZjL0BTZz43IiFmXVJiPHI0
LSI/bXRTLixsOWUKJTJaX2VNUGlRO0JiUDljJD1fMjZYZVRyNidgKldBPCNlbTdrT2QrYSE5bEFj
T20yJyxZaSVJUGI3YiUzNU0iPFliRC1hKU5NQ2ZVQ1BJSHNMYUMuJnBlP0RJWWsoJCFVbmdPSG0K
JUQyOmYzZVJdYWhsY3JWM0EvZFFucU4iX2lOOzk6clFPJzAnMVUpa1s3SEctZTRdTSlmbWEyJU5P
KDlMWms5NFMrYylUNlVhaWlGQipNcyg3XHEsZWNiKzpDK0ZZJD8yTzJWM2sKJVpNK0xJPldCUVNG
Uj1lQS4rPDg8QVJrcTRuS1R0KjtoI0hpVCFkZj5DZW9pXC5RNV5OZj5tZkcxS1ZLcVY6XGxJbVxV
UC5xaE1TVmopaFVXalV1KkBbVFJFcVZdXSxcVyZpaDkKJUlTOCpMZShcZ0ZWbEhoZkVZJk9TT2oq
OWVSTzwsMipfM2w5ZGJDJE81J0VNcy85M2hgLiFBN2Q9bSxQbkllRGZYcmkxRyE9VF5SUytrWUBR
MkBdbz9SLVYpPF9pT2EjSWwkdWsKJUs9QVcwSGEwMF08VSk9Ji0lYkE1MzwyQio7SlYiaCt1dT5G
JllZbydBN2c0aXAoU0RpPEkjIzQ2QjZiKVomOFNgZDBiYDdrOjRwcG4xNS47OFMrSWJtbVs4cGpA
MUQ0IitPMjMKJWxKUU1pLENALFNkZEI1MiFAbE1rNCItWFw+OCJmME5WJiotckpaWWtvM3Fpbz8/
REVjYVlYaSlpTjROciU5XzlEZlByMWA7KENZKSYpMTFoaixTRCxIbGVERDteTF49L0tNIz8KJSIi
MnMtQk4iTlZJI1FoPCh0VDloUWA1ZjFTQnBxcDU1TDk8ZWdaNDZlZk9kPDVzMC1RUy8zODFBTSlc
TEA9ZkBVO0VdLkk7NC1CVWlEQl08RV02W1JOaEs+Y3JkPmdiOUBQJj8KJVhJMnFyODs5KmdbMDUh
czJAPDwmOmtjSExTMTwiW1NTVlcxQzI0cUBkQUpSSUM/NTErISdsKmNZVEZoM1k9OSdnWHUuWSJY
aShgUCFMNDQxbktwcTw6S0BKTiVMMnM2IiY2YiYKJThdWEAsVGQ6XUBbWzknRCJHPycvPXVxLSpq
ZSVfa2M7Qk1NPi1KLnFhZ0IwRisqO2BtIW5xSSJBVmksb0NJJVYxYVxkbyouOkclVjlwXiFBXUts
M21tK1gmaS5rWkk4SSVTMUgKJS5lcmRsblYpT2kxWEhbQSZ1KWgyK0M7OTxOL0w9RSNTaEhQbjll
TCQsdV49UW9JW1ZOSDlULmknYkkzJFE7bypZLF1jZ0dKWG1ucF5MM3NyIl1fQFsvcmBEVjMicSRg
PDFfKjcKJUJ1VixAM2BKKHU4aGwyXm0/VVdbNjNDQU1nYkRwRWUhOE83XG8jcUJeVEoqRTtzLytp
LHJLNl08VSokbEUxO1dFVUUhJF9xRExbPDQvWktPUCpGY0xkMmBkbFNaInBpRDJuIUIKJV1INy1G
QlNTKXVUOmFGKWE7PVxXMDg0JFg3WG5iTisyJjYyRDUqKHUiMUpvOk9MMj5eLFwiTkZVPmdBKlVK
UFZONmU0bUVXZGtVXUdadC1dIXJgP3NVMjM4TCQ4LDgqLUNiImAKJVsuR29zIUlqIilkYyQ1YEJz
KDptTGpZM01nW29FVmVRUUExLFpcMk8kXT05SjtwPjlEZV5bP2cxQEY5KWduUmJVNyJyMEIzODUu
QVEoMWdpWCVhNjI6KlFqJy9BNSFIMyxKZ1UKJSgzUmUta2cjXiU2ImFKUlVpTydlS0JIPys8TWxk
aDVDNVBfNDB0aThdJTNaXl5RSm8/O0dkM2BVVUhII2pSSm83VlYtUl4vJ2UpLnM3T3NbOCYhPSxI
N2FOV0BJc2czY1BRV00KJW5OWGViNiMzOnFvWVQ1YzArcV8pSWMyaChbS08qKSxTQmsrbWYpLURV
dHUhTkEwYjstVDpWKE0oXTpMamo7Wm5RVGRgUENoTTNAJmFeNDNPTFNEJyUxUUdkSWpDRkRlcDY6
UkMKJSJEI14rPV0uIm5JM3JvIT5vbmt1LiI2L2Q/OU4mLFNCQFdtXi0nOTkqcWtxNF80bnFJPXBk
aGRQK1lLKCF0ZzRJPDRgbVlaT20yT2h0Yl1zJ04zSmhBQi06T2RBbUlnW2s5cVYKJUlaOk1MZldR
cCw/KjcpQD5iZDUpRF8lKFJXb1lWZyIxNWkuQmUqRmZNOTgybFM+cjdLbHRRJ29UckQwcCdhJUdV
V0dBay5EYmdYJk9SJ1NYWWVYOG5DRjcsNGg3X0hyVGdsJDcKJVBYK0k3KCIlPDdWSiw4KSY2TGkx
T2RLRydzMi1WTmNCSi5UPmUicUdTODovbG1tSFspSHAoSFVSZXB1RXFqSjloUkMmYiZGNV5eSSJv
T2taK09HQV5EZWFtXEskNi5XYDJiMTcKJWNFRWBObEFxNHQoImFuZ0hwZ1IzN1lYNTtYYjdUWkEj
KmQlXEgpI0pYZW8nSnJhVkhVWkhkT2FeaW1zVj4lOSYzbzpdITcoLiU3O2ROW20kPE5JIlpJWSFK
MVotZGBQQ05cLi0KJSRSWjxPaU9MY2xGWCErOGY+WWFoSz4tJ0JaMVlvbDtRNmNXVFFoWSRZJzty
TFxXXEcsIVFBVy5sQUxMKC9qUSRWXFRpWClGJGg2Vz0uX0EkM1tsRi5IckkpKEMuXCU9U110TSIK
JUA2Nj4ta2M8J0RAYEx1S2g1IlFBIVo3KXEnO11CMTdSMFtuIm40QDo0TCk9RDpIKldWY0MsXl1K
UkZIYGwtIXFmXnNkSz1ANW1ELi83NzxXL1ByMk1bNCcxOT9qYGpMRFwuYD0KJV4jMWhPakZuJEVE
JS50YEpxa3VzWktBO11oWWJEPllgU2ZQJU1Lc09NSFBbI21oS0RxI2dIREhJRCNmJklZKHQqUUok
MW1Rby4xJTAuZ3BhWWRAQGtnaSdCOmM4W1FEJHRQbkgKJVhMTEhbXVxYJTE6LTY6aEpbMi0xXS5a
bE9ANzJmSTZUSC8iUG4kbEQzVjdkKGU3JSpnXUBzbC5cJ1xsLiF0Y0grRWhRJEBmbigwT089PURs
WkZocTZLREwsT0ZmcVsrSDdjcnUKJTZLY082YCsrWF5NXjZAJktOQDxGIXIyJy9FTy40clAnYW8i
SixYPFM5Y2UyaG5VJUlyclU/L0tkaEJtRz9TMGpOVDE5JWAhWSZecUldVjBiPEZkKmtjdGw8QURX
Rm9wK2g6UUYKJUFebCVbVylgdDctaGxjJ0A7REEtMGk/SnEicHU8bjM0OmBfZmJgLD00aDduLTYm
SXMhZmhhdFotWSYhTSQ5M0E5IyltQGotIzFbJEclWW0xJCYwcD9VQDIqQkIvUGtNYmdhNTQKJUhg
KiUrOiZvaDYhLjFJXC1aKWtzWXVxSyJcKFplNTcwWEtRVSUyLFNfNjRxR1dFPEpAI0tfV1w6JG5o
dFYhSTZbODNEUWglSygnaVlEJ1hZcCZtZXNQMGokKGA4WDM+WFVQWGEKJVIqaCRnTkVbWHQzWDFM
JlxUX3BVMk1vN1FOUFBVKTclL1tlN2NAIXNibDFrJ1UycTZPO0xnbnMkN2gnQ2cscSs+Qm8qKC9i
KyIhKks7dS9ZQUAhbTcoTGIwX0shbG5FOzRLcEYKJTx0Ll5wPVguVE5abU5kRGFKMUpmJCUhPitU
JGYwSVE9YEsiLkxqX1JXTko9P1hWYmFsInQ8bz1xPy9nYjlRQFsnR0I0LEEiTFQpamVnQmZSKjtL
LVUuV2RXR1dqc2VqZVY0Wi0KJTk0S2VVNWkpPWxOKCwsImhxTTdGXFRRVSs3UWJCIUpTWSdeQitI
KydZSFY/Yjk4US51Z1h1UzJPYDg6QjUxK2lKUk1dODQ+Tj5cY1gqSyVxXylrIT9GTTFiVS1KRWkq
Iyx0RWIKJVlSVFY0W2A5ay02QkFHbGMoRlczRitqME46Ql8rdEldUVlkPFYuZnNwUSlbKjwlNkxq
SjInYCghS0BSZ1NZX3Q4SFMhSldtQk0yPjREZ1RoJicydXVjK1NNRUM+L2RWLWBwK2AKJTEuaFxU
KlRzVlxpYTJfJ00lVTEmW0NdNCovXHNcJUg3ZUZsJl9UdWBNcUNWNCdkI2FwIks1V0VEJk5jci8u
KCFANigyXWZwQGZiQkIjNWdZSDlWM2YjckYzcFpjNlw0SlM0RikKJV5eQkVHLDVbWlY5TDRPPDpU
O1lZQHImRmlPLyJZVzsvR3Q6Zj1iTShETjROLG0qNChFKitGbCpkWy1DYTBwLWQ7aGFPbWY3czdj
WFUpPi11XkBHQy1NRmJUaUVvKz1uLF5YVFYKJVYoJENbOVt1SCw4RWBzc0MzWDZwSXArNSUnU0tH
bVJBUjslSnN1JkpOMkBEMF10c3RQXV8uNl8kSFozOiZwJ2hFNSJVa1pwMCtXWydbIjQwVTYxNi0x
X1FuZ2hXKGs3Sjo5PzwKJS5AJHBlNXUrK08tPFFzMlpVaEc1YmxJUy1UTV9DVkFGTDhBYGFKXnVy
Lz48VFI+QDVuSEo4KS89Zl4mLW02SV83Mk4/dT1UQTFyREwkYGhgVDwlTUNwVUI5b0cvbTFYKVRy
UCUKJTpWXUdISyZMajNnVikmQ2klM1gwODpudCs/TkcxaGN0WCNdWk8zIyNyMkJaVjA4XFNzOic0
JkUiOzw6LXFcOmsnP3NTcy8tdG0mYDlML2xOUldQL0kzWmNHcGtjYlZfTDk1ZUsKJVZxISNzW05N
IXEpRl5dXWE8a2lWRjoiJS9scjVQWlcqQiciR1UndHNjQ0VAMSlucTVyJXAnKWZVZ1InK1c/TWZs
NiQnWHFmSyVhN0tAWlRYXCwkPituTlRzWVBaX2YiRyNGajQKJUcuI3RWVHQrXUljNEpCdShuckdY
LkdMN183TitZLjFhZFAiQyk8bTZlWkdmRDlOPGtfNyFFSFhqdF1fNilTOSFPPmQ9JDFCbjpwTU5M
RF05S1BWblQ6KVwicG9AcnAqTTtbVFsKJVhkSilXVlJLXnFFX14yLygvT1BwRDUuV2BEUz50VTQq
cUIpJzRILiltUF1nJkRIaGFCJE91MVBpUm9kTS1mPEBvS0ghX1dXS0orbl1qcSg6JCQpSTomYFdF
PkhAcT9ZVkRWaGMKJU1oTHJKbiMyPG9ELHUnX0kwY3IrPENVclRPPFVgLzJkb1xhVTIhS2g7aSNG
bVtFOictTD9bamNPNVZONT4sIWBRMjg7NVEvXVRbb0NtNE5DYz9pRUdNK1xPWGhaNy8sJy5rZWcK
JVNlMltcTklSXFJhSltSLUshTlVmOFNLRlZqcVZGJWduZWQkJ3NPVykyJ2JFdFROLj5WIW8tV1M3
VEo5SGN0dFBOX0UiSkFJRVFIND5XWWJiO0lpK2haWGZYO0o1JnV0MS8nKWUKJT5YIi5eWjQ+RCFA
JSdwZmEkUE8hTixPSSFTTXJNOU5RPWsiOGJMQyQ7XVpUQSReJnMpV3Q/WEwoSS5XdWNWYVg+KztY
WCMkMz9wNmQ7YkIhbyp0cDFdZENCJDgyKW8/KkdjJywKJTRuJlowJVctQiJCYlVIYE1rQG4uVy9G
a29MPC0oWUpFQzgqNV4ibj1uPixiKjpVRVI2YiJSW2xXWzRQSzxFQFhFLi0xJnQuXmpyUD8mb19B
aE9tLTlFXW9PY2tDRFBgPE9cVVcKJSxCMGJvKyNYcE1lISVdb2ZxW1FRLVVdZEtFYi9bKmFmSllM
TSopRWBvI29ZKjRAcCJIKSJla2AyYVpVVD5mMCNURTEkVmFgK1BCNEdsYFxtKDdGQkFhTyhFRiw4
bklGI3VKYDMKJU4/cFA1KTl0MW1KKnViJjNCSDxcZmxMdURqYSVDUzlETj04Y1FQY0BSQGk1VFcy
MUJzcS9jJmduL09fVlc9RFFrWGFBQEY8WjxyNk5UJy9NJT80TU4yR0Q5XEctQ0VLMEpnMHEKJWty
RG1UTmVsXkBmXl1oQ0xcW3NWNDJZLk9uPT5nckZWRWRrYlZQZjRKTSdbPERDODwjSSJTdEBsPEw+
XT4yYWw1I2snZ2dxVFxdKmoqNVlPcSV1WTQtNkVjWFI5XDZHKU5nKUQKJWtWYUEtQSUjaURSZm90
ZyRSbVBbcUdFcSYuXzJzYF5vZ2VoNXBBOEBiM09eMSgsZ2IkQitxXEI6NiRVaStHOnBBclM9IShe
UlAuaS04PiIyXDksV3IqJDUjblNlbV1YTWphNygKJUI0Q3BXNWEsMilGZG5UTDh0cFk4XD9fRkUm
NSFJJEI4JyFnQSdRUDFKQUhRTl1vbHJJIzs3OWoqM0RXdUJzKGhAUmBxJShyTmtOJlVBZFtZO20n
cm4la2spJ29sbEg7JFdtJm8KJWY4bz4hPSs5UXBiXy1ATkpCQT02I1EhNyQ4LG9FSm5HNiI3aXB0
LHJAcz5JYytjaWgoYkdGXylgUlRaYHBaTmc1ciZIbVtqZyJRIXJLK1s9b1JzaD0lOjs+R0VhSl9i
VCZYaHIKJUk9L047Z2FaW0dcQFNzJTRoOnIscm5QSjZuM2hSIm9YUihGY0NUWG5jLlIvU0xYUDsx
QD90NWdjNEFPTm0oNWNJajlwPjJvcVNEWWpfby48PmtlW2tqJyxjNSM1M100RFQoQWsKJSpsbDcn
aUB1c28sVmgkREQuJVJLbHVjQWZXbjM7TEM2dV4vZygyISo7dFQlcidETSc+SVhWL0deTyxTcV5B
TCE6TmFEcDY/MWVyIVJiXWQqNmdXZzNxJnVWVFQ6QCgxJVxBcTEKJSwkc0ZNPVgqMWs9YTs0V2lg
VlpuNS9mPHU3Pj4mbTxqcCknUkBnJygsJ05HPiknXUB0NSE8T2g+cydXUVVnNikxRGUna04tWmBm
XTQ9YlZ0cCgxJEEqbmFYWi50O1khWGUoXEEKJWloVWZnJWB1Oyw8WWllWGwqS3FPPmordS8sUzsq
bW9OVyJATUdNUEY9ViZqJC1fNjc3O19ncko2cDJjbTZvPEBLWjpZN0M0WDcvSyhPOEpeYWd1XnRE
TV4lO045aDBPSm5PLzIKJS10KXEyJSRPJy5Tc2ovMFdTaltlUjEoMjJYPzcubkRHU1A3WiVIXU1S
XU1FW0FYZUw/SFViYGtccC11aWFtSjtpaysiRHM6WDspVDkwPWBuVWc3Nkk0THA6RWRyUz03Vjdt
Pz8KJTVIT1ErI0QtWSg1ITlQKEpyWmJMX1YpaGpVWiJDKWYtNk4oaUU3bS5wQUtyS1YwMj0wXDBb
NDBCMStBPWBxYCdrTUFrWyhhXjRNRUkhWWRMY2deUUAqZmxcJCxrOmtOXmc+cDQKJUk9TGthXCk0
bl1mMy8uQEFQRGtEZCZwOFUhbm1CWitfN1Y/TVRXMzFZQTtxQzlHTyYtOWtTaSRNbGZ1c1cyRWwr
bSRPTj9YUyYjWiE4M2Q7VUMtXi9TUE8sTT5dO08qMFFTL1sKJV02LXJQO0dhLCNYUilIRFMjakYs
[base64-encoded binary attachment data omitted]
czMiJD8yJGU5ZydjdTk7KmxfUSZIJEs2OjYKJXBddFNMOFpISW5mYjJhRyNeSGVZOiI5XCtocmFE
O2QnSTAjOTNeaDU0VzQmIl1KRU1CMC4tVydkUVBabCZoOjxuQC4/V1UnOChybE9dWVoiLUloNkEq
S0k2N0tFWGJEJExwdSwKJVIvPXJgTFBbbmpfRidLYTtkJFVJWW44SSlPWToxYGFJWCY4aWc8N1Zq
S1JUYlpxaCYmMDNXXFVVdThPKy9kcjxSTTJBI1MiV0xVaGI6UkQ+NmknKWgqNHNGZCk6QzVaY01p
Rk4KJS4wKGVxaClkMVspcGVucD1PQjU+cUEzakRqaFIpRmpXVyVtVz5wPDhBVikkcWRxOlt0MF8/
ZSU/aDQoKkwmc24xJ0Zmb2ZQPyctLkxRXyRBSlVKVS9KNjMicyFNUyczKTIoSVgKJVJnazxKUCE8
V2wvSSpUISU6bl8jTkozLVNJWUt0QCJDdDhYZzdedXA5Y11PM1AqJVYoWT1rQ3VNPT5PVkEtKjMi
WUFqV2Q2PEFvWyslIz8iOigzXWM3KzAxUGI2IlNKcGBcXVIKJUxCSWhlPV8xJlZfbDNrdSZqXmVm
Y1hSTyVkMV45Jmw2SyhsW20yLjAnVl0maTpQbCkjIj0kJ0VJW2Q5MDZHIjFoRilYai86WS4tMT5y
PkpnPCxLOFpqWFpSZ1I5SWpHTkc6dW0KJSpSN18/USZBJjgoMzo2LDBtU1FzOEEzYiVaOyFsImhB
XkZpUkpcXF5Xc3IjT1NrJiNyJytybCNYcylFQVZBTD45a1xHNF41W0U7RXJLc0FnXDFFPCwrPW8n
QiZMcEBkOWVrdDQKJS85Jj9HXTo3VytRdUJjbyNWQjBiLm9scDcjbCYhWVxoZTA4PWIqJWwlMWFv
VzljQUxmTEAjKkNKOHRBWktXVTAoQ0s3MXM0W19sXE0/K0A7U3Msa2wqWUdqRSglaCxSRiVhI04K
JWRUW1VDJUlPX05gOClLUmFpMXVMN2doYz5mN09yNjJCNzNeVC1DQEddYFFhaSw9PDhWb3IodUJA
aygkX0BqNjFzPXMiNnFWJi9JZkkiRTpWKVpedEZWaTt0UVApPjcyJkNlT1oKJSw4Qjsrb1lETFs9
W2QvZl9uZmE9YTYvOD9fZC9FXU1AYzM8Skt1SGc9QTtQX1c7b0tWYGIlS3RrKC41MjRxISlMVTNr
PyYvWGJoJy8vV1BpRiJwK11Sajh1S2NlPHBKVkkhdTQKJS46dFE6Y1JYVFgkcHJcXGYrKl82PEE3
ZXM8dVEzUCQ5bmZoUTxdIyQmT2lyNlBmKUZ0ZC0hZXElZ0s6a09PbFVcRFFyM3REOSlmSyo1cG06
QHIqOydpMHRqRi9cMy1ITEdkOyQKJUZWWkAvXUZzT0czTGdTSUFlOD5GXyUyLk9OUU9XaGglKGJC
OEQiVC1oPmhzYGI1KSstMi9uL1dTSDpjJzArPzoyKVhwLTxQImk5NktfaGs/YGlVUCtWXWspKF9S
OU1VUT5AMUYKJU0xSUJTVmsiX0tnaCIlIUtTOmRxSjZkPm9QISdZbk4mOHA1N2xeT1RxMzRgNDFJ
SGNTYCwqanVFX0w2IUNJazAmQnNvWUxpcyVnKTE1PVFtMlFrX2ZMJygwUmJVNHJaXC9TIzgKJWBT
RDU2anRTUFpqRDJoN1F1JjctLlk9I2tTKVRfVz1rRDtVbCxuZyxCSVpFV0xXJFtEPW5qUHAtWyQh
JENlXWs4K3BabDZGJDhIITpZb3FEalc6W1xXPT1eWj5UPnIybXA9b1MKJTtfK29eJU9jRURvTFtk
UDYycUw7OWxeYWpINHFCZm84WDRUU2UzNyc9JFkpJV42LzFjJV9BayEzcUJ1cTNrQis0XTc+J1xK
bXV1OWNdWWAvJSE9ZGdYbjBMZGA2bm9TKDBEcl0KJTwzYExTM1RUZ1M6JW9ycSNRSk9XJHEmZ1Qn
Il9nJzRkdCNFX14mW1VdQ1YsX1M3QjIrOl5BYFhsJE5EKzgwT0dsJzd1PXU0Q2AiMUpzRl4pRTpO
dWIqSS5iOzM8XlxWbCheO2UKJSNcRGUxaXFybi01Xjg5Mz1vdSU0YDxaWjM8c19rXEVWcExUayVm
VVtRTUFaLiZvQmY9aW1Tbz1qUzFfJSxVZW5lLldvSk9jMmZBJzpuQj1zNyRFSDJbKnBIJEpeOFJU
aWwvaWQKJW4mQTsvYDE6TT04Ojw3YDgxNCQmT0JQQllKUV9FQjBfJWFHXWpDNSwkZislTGJRYyo/
YkZoSlgoLCMvZWwwVXJWb05KaygpblRpNjxhV2IsOylyJjkmMDA4NlBtKVlPN2wmbjkKJUlEUzsz
ZmlTYWRLMEZDISU6MjQ6TEVVTExMaWxDMSgpWCZpUE1DYkxuLSMhWUZvJm1TSnRtQDJfIksrOklA
Q0VTY3NqVD9VYV8nP2k7Pzk6S2dlUzdyJSxsRDM2IWNZOWY7S20KJTI6OkRES3IsLDhvMmpCXTlk
ZyVhUyNFbT8tXm1xIy9WOCFFJzkoZTIwVSZAcSZTVTdpOVcnXi0uYGtuMylNTlVBMDNLdUNkTTg7
NlJPWFZuVTAyamoyT09yYUhBQylTUmVqaXAKJWJkIyprS2dMdT86ZU0tVGk2P2NjQmBbYzxMNWZI
PzsiLTAnWC5VZ0Q+WUpIQkJiXz07J2NNRnRPKWxvMGMublVLWlFUWmkvUGUhT1kwLmA6VGNbVzQ5
ZSQ9OUYkX0ZLQE5DaDgKJT00ZFZSWVkhOTFSVDhAUFFtTkw/LUYrYWUyOiRTPilvZ0tjQzhKPF9l
NV1lX0cuPEJXJ0wsR3RPPGwzN0owKCYvQ2JDNnFLR04xIiRvQiU0QnBGTkZlc3JUQVZYLUtSNi01
QFYKJS4qZmdISCsoIi09XWQ6Z0kwcnRnJDZraEZXbUhgXz1qMUYwIXJnbD0vN0JPQzYxMlJWYEBN
XjZbQUsmWjI0UGJPJ1RvXT4hTWpHWFlgWClqYDpOaUpQRGpdSUJtMylDZzFnQW8KJVVdP1pcJDZM
TUFpYXBEMzQtcWZLUEFMPVBCO1cuQ2Bcby9xIVdGOGhBSlUrRVBiXT5zbjZHJkVeZEpSKEdzWlsn
UFtuNV1DVV89bUBASixrSSl1bk5nRVZjSjwtSUBjZ0tBRWsKJWA4KWNoLys/KDNOXCpVMVwoUCFu
LShoPldsJE5aNztnVWFVY0poRG5EQTsmT2dOLkslaC5URVIqdDJCMS9fS3QmVSVyPW48YyJEZ2RS
Q2dNZytsL2tGLDMmKixlLCZsZDRBVmAKJW50TzE6RFA6YidCbVRvRiZQYlpwYWwvJW5LKTorI15s
ckM8WnFgdTFdV1xtZGZPNVEsPD9cKEAwZSVbNCl0NmtXTVNPcT86NTkjMixdc1VWMHJsKWY1XU43
UDBTaWA7OW9CTzwKJSJQc1g7NEBAMDRDL1gnNDlXQVNUNm0zUC5pTTgjMiU1Oic/SSJra3NhT0RR
cTMiKFFlPD5JdDUsZyVWSms0ImEyZTE6bWRhZGhEN0cqJD0mM15PS0VmRSNXOUxNXDV0ImA9JEoK
JToqXmAtZ11FbWNyUFtOYGApcDZMWCRKMjFcOUc0bWFGLEUnLUt0UmtGYFxUdUFGSGVRUz9zITVe
TW1mIyJvZEY2SiNATlhAQm9zbiYpbltJUVwuO21OTDVuWE05UjpcYWA8VTcKJUNxUSdRYDlSU2Bw
Pj8mM1srPTl0ajROMThjbERwNFtQVFxkOGpPYFNxQTg+VG1UQUIpWmdrRG9gJGFmU0lWM0JqYXJc
XDFvWj9OO0A4TCYnanVmcyIkREVNLyRiN1RuX3A9VzcKJURWQCZSXVZOYHFVUTkvUiNzY0hMK0lz
TV9NOnUwK1c/bConKztRZTZOWlVTNENHUVFLSzMvLC1qdCxhWTkkVkMpXnNbR1xJIydzOC1VKVRE
ZjJjcGBRJltQODtIPThgRD4lI2UKJVtEWj1SPl1KSz9UVV4yWilQVUphTSVyTkAiTFtub0duLjZJ
OV9mUD5sXmxKQSUsQ1YtJDJPRktfcz07QD1nMztkP1tbQF5WMl0tSDE2blZIW2c3Yic4NEpAb1ln
PUFlKGZfW2UKJSlYMjtoVkxGWHJALDhAO2RfLVFLK1pYIWg2VjRoKic4RFpqbm5Ob0JFWC1CN1Am
X1RrJG1rIk1lZE5rZFBoJDZoamhhY1tZMCxfKik3bXNKWHBhO181QzIrNjxfZGtgZk4uRWMKJWJx
KXJBci5HWmM+bFU4N1k8VzBDayEyMz4mbUdZRzVnVkRjJEU5XDhbMW9iZVlxXTAqI10xTlluLk9I
MDpTQjlPZ0kkXDZjby4ncFtAcUchJ3VoSyc4UUNRWEQrZ3E2LE1NOV0KJStcYFNSQVZFZSkyViY1
T1UyQF5daEJmRjg9NkxLPEhgcUJGbDVXJz9EQSVeLyREPS5oKF9pLSc0LD9fQ1Q+XCFnNjQlQF0q
IiNLOypKWic4OzwlKDVAP24zcCg1NkwkQi8wZXEKJVByUVMsISlyMG03dFhqQUstOlxbJ2dHMTk/
c0RwQzhIWDQ6aiFiOT1NXVpNRV5iS3V1VipXPlQhTFgtX2RnamlLbjx1SWFVST0sKzdiOmMvNSZs
cltlbDQhI1gsSmpRUWRYUVsKJTMtVDIwcD1dWF1jSylTblU/Zz0+LE0mIyRKY19aUDsvNmopXnU7
LkgvXSpSK1FxNG5SIXQ8PCVnUVUkMVtjLihfNGRja0AnVmotWFFETmp1RWI/Nm4wbTBxVTUuRjAs
M3V0NVMKJWFvXlNUZl8vbWNvLUA9JiZGWGNgQVxlcHVkSSIiaC9JUmIlcDxyR1s3TW5cMTsrRXFZ
NF9FWWNLcF9MaWArcXFxTStmTVY9clVaTSpNbEYlL3FbTz1HWE80VEwkL3BcVnUzIT4KJVZcbkFO
SmJVY0ojTERFTWo9Sz9ESSRVQ1QoWVYoNSVMIiFZLSkpTGA2bFY7Wj9bPWxwUTFVL2MpdHA3bjBy
N211YTVDJShlYihyJ2N0LjUyQmY4UGonRzlpMWFaW0w/YXRbOWoKJThLXXBWJD41NzUwMW9SWVRR
S2srZTwzcCg9XDdVPEtgKmZTUS4sJm09NDFXZlNXR1skPjJRNy8kYDZWJS9UJlU8bUwjTzM4WXVw
UD1sQU1wKCtuU0ohSSM8R2QkLCkpYT9fRl0KJS9vcj5XXzMjYCMnZT1tdFlXcD9COTBLQm1INS5h
PC1APlxwYjkmSmAqL1hQSU5lSz1XT0NEP1wjYz4+RUYkUEZsZzBAJT9kaE9UbipbZkNlPUljOkNc
cU9RUSVZcV9XWEtiTSEKJSJANEotQWUlNWZaWkEnO3JCZS9SRVdvLDdDRWtZXVM8PTpvOGFEY0k/
V3RtUypKOCppZW1mLTtSOm86LVYvPWZAKCUsdDlrbSsiNnBzUDc7RDEhWERpJGomSk5VMjhYbSsq
P3QKJUBpLnNUZFpRUk0qLSpoUnFAOUlKTiM1PURoPFNiI0EwbmE7TVZoLkA2akVxJ2NLZzdDbmQi
KEEjVE9EXUQ7YUQ3SkBHZDM6Y2Vhc1I6cG4tVFRlRF0hWWxhbiNFJFpZJClqSUEKJUAmNSxSNWMn
Uz4jYmRQIixpR1xaTU5rYiwpJSxXTmZmKTBiZiUwRElCXFIrWSVqQl1aWWZGJ3E6ckpbUzNTdS0p
QjVEVnRYRGVwdTpNaytoTGItQ3MvSU5UM0M/Wjo1JUkyKjQKJUo4VDIhSzQ1Sy9kPGk3UTNZUFUx
XjcvJCIxXm8uKS9zPmtNUEJeYTBrJklLJmQnVVxPbDhHSEFRLyFgPjBzUTAkI2xuS1ExMiUmYG1m
WSxdZFpzbSpBOUFGdWpCNDVHTydYNScKJTZCZnNIWCFpdVY2NE5kZUBUa0EiYEw2ZUVVPiNtRzpV
V0JbJi1TTXRLVDtgclMiaEViPVRnTTZnZ0VbPGpNWFxZYVFkQ2kkO1cwNCcyX14vWlM0OCxET0E0
WUVAb1l0WlxqciQKJVJUKGgtV1kmblU+bE0xTj9vT1BnPnRWRTEhWGJJIV87SkgrbSI1XjU2SkdY
VF8iIj9hcm1rJEg0VitvOzdVMGwrKUhuL1RKLkNXPUNhZlVFMypfMkhDPi5cVnJVWWksJ01jWTEK
JU0rUCczP21VPVooaF8hSDQ8YWs2YnA/NloiTTM0OGRDajROYSJXPlNtPDkvK0x0ZzdfPjNJaSZq
KidQJi8nXFxEJktLSG9GQWxUMiNeIzxNYCs5ZFM1MyI3WSpCUGdMK08jRjoKJXE7QVgtb2JibmdF
XCswTG47WTxdZHVJTj5HcipvRjVSc0wyM0VZZXI8aiUwYDdKOCNRV3NJbmBUcE5qQSw9MlpgVjhf
ZEE2OUk5QUozR1BFSD5SLDE8Tl5McXFtbD5nIWNBIlkKJTpHZiVILF8zLz46S0JpZEBYblBUb3M8
Nj8jPFxpP291YEI1VTE6a3EsYzJwVCxSU3I9Jm1cTT03dGpBISo6JXJrb1c1KUxQVU1AIm0rMjlW
PTFhOyRPQiltb09wUGcnVmFiL2UKJTZHdGlAJkIuMXIpcWshMyZlYiJoZzY7XEJwXydjK0RHT1s7
XCVWUSlRQiYsbVAvWVYjYFFhNG05aU9OYVYrRURgYWksS1VaOEVMWVg1ZTFPYWVUbkhtaj1UVXFE
b2V0TGxQYCkKJVVyJS1SSEU0UyNXTSkvQURCLioqIUonISJNaGJhVUxDS1dlbFMtTzYpTFsrVkZW
T25RX1c1akknQzM5J1YiN2oiInVMIl9OQWBoYyVaXGIhPVU3WF44YD4icjc1SkcpRVBKMDkKJSNP
Lk4zSDdES2tsJTNSRk9tQURJQklkQmNFQ19dJmU+RUNuI3JXP01xXCRcMFpkZltCQFYrNCEybkB0
Z202S1daMmU/Nlc8J3E+aGFPbzJiYiRWW1A5c1RpYTpILWY2NFxKVTUKJWBnZlpCSjhQWCpyJz4+
PEI3cG0yR1g4W0tlKFwiST05KSUiSz0lYkYhMmBrSUkqM15hLjltXSFPSVJcR1I5OmNhYFtRcD1R
RW1zJUEzSjtLQHQ6YStpQ15NJycwMDA4TyFgOG0KJW5TNGtcLlxEdTZIIyk1PnJda2pUWVYwVW4i
QCxyaiZ1XVMiLFJSViZZcFZETzwzUGFtRVt1JyNCTk46UShxYT5wa1YsQUFjSFM9MDlNdTkyMC5b
NEcuMTpiQSdENG08PTk/S1MKJUJhNVlOQ3Rga0hUSVppaCFxI1YpQTFAKDdNWixYIm8jZkNkQCM+
X2w9QE03YEAsIyg2KiU7PSlxQjY7SSlCY2JdOjAmVmpcbVhnJDFEa201SEFST0M0WkEscEtiR3Ry
XD1GK2oKJWktPEgpZFkqRUYnOW8ybmAkVGdITCNFNj0nKmZRUzlObjZGXk0tYVdrKVJYKzhaJHJq
MVRzJjZtb2s5UG5Ia1Y6RGQlN3EmQmM3cE8qRHJhIStUIyNRX28wRV8kNjpqZlVUXycKJTwyOl5h
L2VML08zb01WQSRnJ0RxJ2I4MDtMKG5kVCNIUCddPT1kLV1rWVdpUmNyWEVHSys1REJTOnVwTFZJ
UyN0ODgyKUleZ2VzIjBpUCNTWjdvJTBAYygtJ0ExLy80ZSIyLmcKJTQiUFpPbmJybDxHS1dfKEsi
OkFaZFsoJSpCTDlhN1U/TW4zKyFRdXArVV4oVyYzSmlHWURCMilKVzdkNFR0KVdZREc5NlFJLzVN
U2BmLjRjLEY5NkBnUGB0VklBcztbPVleKUQKJURqMU5FNmgxQ19MKGEyUF9kMDE4VyJVUnIkL08y
JmdwKmtqWS4qbS5bOCtuLjAwalxBRSpNb0FAPi5mMzFibytJYlYjLGpwbiNGS0xkWzxWOi5SSyMj
MmUiMUcuImFgRCVdYS8KJTpnXz8rIi05aCNgQixyJC1QcSEyV2ArISVGJlpcM2xEPic+VXJjQSs0
UmZVSCdKVF9tS2k5Mi1IKXJ1TSFZaypiQyUxXT9AIXRsNkEoM1FOTVpkI3RsT2c9cjpcJGsyJnAo
KkcKJUg1OCRaNCUvb2tBZlVeUVtLOC8jcislVShnYlxWMzdnRVkxTyZiLU8pQyIiJz5zMiFdM2U4
bHUiajwpQyQibSwuQ1c5UHJYKFdgXSkzYlk0Il1cIUxLVjBtNGhUbFIiNVk7LiYKJT84JVReUFBq
KVxeaHFSbjg3XmgkYUMsNDtiZWRzXltyIUgtTTAqdSlDQ29dYD9tUy9KNzM0NmM2YSd1QiJcalFJ
Om46ZUtiImRnL19XPCJaISglIjVrcERLR2VkdT09PEAsVzMKJS5VYFsjOTRmPUpvaWVGLyxwZ0Ek
QS9ZX0FKTV1CQ1tMRWNPWEdgOUMlREBlTUUrN1VVMSZzWmU6REM7RiklSXJkJz5kSScuXFs5VGpD
LnM6cSE4T2NCYUpDWCZGaTQ1VEZ0Z1wKJT4kciQ5aDZTdT4jUyd1WCFvWEJtL1RXVHRjR0FHVzZw
MztvbzdXPEEvXnM1cCFIQmstRXBrdUhKQENiZzlkNGgkX1dzNy8rSjsuaTRBcEFnUi51XHBZRkNB
bmBjUyI7RCcuY0kKJUcxcFZjLy07LWpNLSEnSlUubUFCIWhBQU5ZJW1RSyEkRicpSVRIPkdKUSRY
P2hVSWZdW25tbD5aXVZDJk9EXkFLRzRRLStaWiRRM0B0QWFOcFBhXkZNMUJLKFBkPF9IOl1TKTMK
JTRkckhAVlNgbE1rMGtUcGxxaTsnL1ZZZDIvZjBzKlZvOzQyQStrImIrP1lsMm9Vbms8R09jOzY1
Uk4+czRccyI7IVskZlMxKC9mM3AoWnBVPGZwKVRKN0w1WGgqaEQnVydfYCEKJV0pakAuNFFbMWkk
TVo7XlFsMzAvJF5VcF0+R2xyRXE5VmlTZXAldSYhUmBLXWRCUSYlNzYsTjJfWzNoUzByOlVXPXVM
Umg+OCRDdVZRPVxcOUtZdGtfNl5bZkhnSDg7QTBNTmMKJWk+VytUbVY9ajdQUCpsU2UvXy0oK2Nz
P3NuUVAtNlBccl9qI1YkJ2JBL0BsdStiOWMkQig/WU82XVplUiVWaWZjUiVlJm8oME1KMEZNaj1f
MmhqdGhOcT9kTzpMWUxiLFEubC8KJS5qXXNdKjMpUTRiO2NGVDs4XEAsa2tMKztJKG49SVE1QUk2
Ok8mW0QiVk0sVTE2MGFANEonWHNqaixqJm9KcC0jXkRZZD4uSVhgQTBQN14uVWZBV0xDYG1YW2l1
WEsjaCpDcF0KJS9gOTZvIkA+YEFjbidhXlBjS1pXYTk+XUxUYGVPSzRzdUVOYGwrR2YnMWk2Migv
PyhZTlVWPCotNS4pdFM0K2o0K1JFQjs0bFZfUGtYRT9OJjNwXTgjYT1cMkRvcTYqNV8yKksKJVUq
WV8yPjw2SmBLdVQia19JWF5iQDJkU1wwVVVDS2pYRU5cPSZnKiYhKUlcP1Rsb1NALzNJV2lSJTBQ
QjgwIUBPKGA1MDonRmAkKShua1k4MkRKY2YpODY2JnFTWCpqYFo6dWoKJVpRak1LUmdJanA5SyxJ
bUJLV1MzQSRgVG0tJSQubk1Mc2IrNCJaQFkyOkI7UyclIlE0WlUuW1Y1WjQvN05VMWdNLFRcYGkt
cldkLV44Rm9ZZD82I2NPUlEvZGgkP0FRVGhAYjAKJS9yYWwqZDUyTldVQUIrWilbITJoQzY8XG5W
cjI6JXEsJW1XZCZMVj4kSWtQNER0WUtZPEgsV0owazQ8Uy1jQmwhL0lzYENsSFEmPkgiU0spRFMu
UC9yQVkpNjZDKXElX0ZJJDEKJS5MOTE+cGondEVTOlxoTTtUaSxtLSl1dWE4bFY5ZCVETjFcQSF1
SSZfVT85bDEwJFNvN2BLSVYmN2xoPTkxVktnLD46PDVZIzAmKiZIVi05KCsoQ1VsQ2gzVV9nPy5d
LFdbajUKJTxCcHFHLCphTWkwYStcJ1JOXyssTSV1PVA1IiksOk1OKUY4TGZeUiNIOzQ+aiVAYUVJ
PEQoSz1MPyVCMWtdZ0k6REh1U0RENzg/JW1HIydlUmJiVmpgc3NpND9mcTVDbzFBP1gKJSlRIjNr
RDo+QmA5JVJhLyZhYGdsUVpqUkA3I1pjRk4tXzJAaVxWaVEuSGBuRS1EaGdqLnROSkVYLzhWajg0
Wi1XRjQ4KlkqXGZlYyhsUjxpMiQ2JGYoKjcoTkA/dWIlK2U3J0IKJVI1W2RLZS9ZYT1AXjZPUihQ
ZnNmK3BGW0FKYERxUEthMWczLz84cyg1O10wTGlzVG5sK2ZeLUUwazFBPF9wNkgoTTxzY1IxYFEj
TiU0W3RObCllMTJVKk0hXkwjVzddTl5NUXMKJUdTOmBWPD90JFE+KGoxL1JXXUM0cD9BZzhbdUdW
PUtJYmFlMyE1OGknN1AwO0tzbEFTbjE7a1AsQkpDXSJRMENvViZfXEFfQFVCa2dDO01XTjotQi5c
MV89clU6Z09uK0YqLTYKJWQkLiYqQDQzXiZiKzw+Ol1IZlAkQ1FiUyhfZy9IbDZBWig3L2cicU4h
RlkjRnA6dWpQYFRRXzFPdCJINEo6XlNcMUxSQCJwKDtGOE5jcjxKQWFzPl5UWnRPPyFVdFpnVzFG
T0AKJSQ0XC44bT8hIWA3SikoPGc1Y0tHT1V1PUduXmA6LExlaTgjbUcqaU0zLFJwPEo/SkBnZGZy
RCVhX3V1bF9JUHFYS2JiXCZJISRROiw1Z0dIW2ReUChlNGU1ZSRXIkdCZExML3IKJTlbVFRcaj9m
NW0xKVo1JS1bLlZPITdASFBQcG08aC9ELzJlJzFFUEQhSyM9TzJQYUhKKGhlTGBjQ3VhS0xDNmFk
ISFQZmA/J0BVaSYuW1FtVmNjaiQkIWA3Ill0cS9MN10xdWMKJVk6MlVOSjAvZUNNVzVOI0xdXClV
T049Wy1jVDt0ZiVCZidFQGRIYXA5OV4+OCMwMDJyOmluc0VwO2RAJ1pZcDhyPGA8Xyw9XEtzclJP
VlIqJmt1VWhAWW5pQyYvbF9VaCNMI1EKJT9cU1ZPKV81Pj0pXS9nO21nJiddN0J1a0QqWGJJQiVZ
cVNlI1tvLUhJVjooTywlPk1YKjQpZipxKTZrWig3SSEoVF0yW04kcEVhPTpFOVk4YjhjMEczSyFq
Zy1Ta2g8K1xOU2IKJU0nNT5gMHBPWm5CMERqVWRJTUs4PS5aY2VMQjNCWiVnaFtiVlppMlU9RSRZ
LyJTM29sZF1PL3BvcFdrVSJmdSttKGchYisrSmRYTTciaT1ASSIvZjpALXRFKWczLkpzTSEoSEUK
JVNfM1xhSDZPJi87OV1tO0NMUHBsQURpa1g2RzNTY18oJFpLVEw1QyRoODxNLDU0PmM3LGAoJWE/
cXRCXi0nZjJmaz1oIyc1MElwKl0sWycmTSMnPFtPJzJRTGJfYnUuPDxpOiYKJVYvQjlgTyo3UmNK
O1FCbSVXcjRGWyk6TnFrX0E7bCgnQCMvZWVUI1ZSTj9MOU04MExPaGRxQ0dQXDorayMpQGQ9KnQm
JVorWWpfaFNbPTtwL0hTMUhbTm42KDVnTFg+MTcpKUcKJT8sb0ovYmViYlFDUGQ+UDBiZD9wMyZq
Ol1IUmlYU2cnZWBuUVtCQFVTKUppNEo/NU85QEEnPipNRmlMKkAvNzA0QUg4I2xrYi0+ST5zTylB
USJZPkpAQSt1SSE2dVNiZGM6YGwKJT8hJSklXVU4NVFpJ2IjLzhub1lVaHRRIypKUCkwSkNscG9i
czE/KTxRLm9wTj00PTdabUAtTipJODksajFSQCt0cyheOGQ8TWAxPG05PVpKMkpVQG5qaVVFZVdy
STp0cDkhajQKJVo7KVZYW1JvNW45Y0ZkIihILidWOmJCOlBYLzY/VUNHZCVIQ3NROVomaFJlcUtH
SiptSHI1QlMvRUFwMG9CLlZdbEhpSCxBSlRPKTcsLC9QJDVCTW1VLnJoLjlzQi8iLHAsXEsKJSxE
NF1lOnFCKXJWRCVGJ0tRNnJ0TkIwSFNIQl0sVyxjWDska2NbJT9QUT11UDs0JEpbTT1dKEpSZG5g
NCY9IXExJE09YW5kSDw8XFspZlVsSVcmIz5ISUM6P2hhIjZXJ0UtNy0KJU9xbmJOL0NjL1xkMGct
JVczPydsZUxMO0hXUDtlWnJOQlQhPywsSURycUEpQFkoLTtnP2dfNSJvI3BeXUlzTE83P2lPJG1x
SyY/XkosIl8ocCVyRmlJY3FTbz0yQUEtTUE/JiMKJT9NLzotUldZVjdKLENVIW9nJHBZL1F0JVxv
QFBfZFkzMFQvbUZuQzplTyUkZFd1YG5OV3FXMDhmcm1SNSRiWFoiZlE2aU9UcSx1dCFxMUgyRXFc
NiQtNEs/Qmo2PDM5PnUiR1gKJW5eQWkiaDdpZj41UHFLJldnR3FZZzw5aipSZWY4R1gmO0NPb2M/
Pj1xV1ApczcrVl8lKGQqKWBVXUdDdXI6dWlPXytHXzw6N1hGJFZ0L2s0cWw8VktDMzpVXUFVTkJd
Y2khXkUKJU1FWStGRzomP29IMCNpZTxVMERWNFwoSlBkY1NOQG9kKy0uJ3M/IyZmVm5kJElia1M3
Ji0hZE9vNW90UCI1aU9xYDsqUW5vaFlUNXJwckNDUUchS0VEZS9tTG4rbm9wMS9AZEMKJV1mSipe
X241bVQ8cEcrSnMtPEB0VHJLXCtGZj1kLlNGPz0hSytJcG1wdUJdPmxFdE5ZVF9TSjJoWUBbSGZk
KztIaTAuXUknZTgrRkgsSHBgakVdb1FyYCJnTmhrKVpyVVQ0NUgKJVc3SVZqVjhATjdxQENAO00o
YTUqSCw0bFlyZE8obWw/MCw/cjBLUDdyVEUpOVVtS25VJ2VmIXJNPmNabzIvPzk2bmhmJiFuUiRk
J3EwZyJcWjxTaE9HKSY5YzxGO3E4LUxdWF4KJWZEa1FqNVE0WzYvIVZTIUoqc2QkaU1lU1QpVi1Q
OkJPOkg1ZHIwYTFAcihkTl0xZ2FpJ2dhTTxxZDZHWFVmajYkIm5JISFoc1VObl5dKGBFM0cjOEFY
Qk1xIW4kS0g3LlE7NHIKJVtVSWslZiNxRyFsLVcjaXMwaipBV2NIWCo8QF9xK10jKCg0MXNZQD1q
YylsLmptZ140NTVPVmU6WWtXPi49LCFSZyw3JSxNMFp0OmRzOTxwckImTk1YK05QKHIhUC5ucTl0
Q00KJTxCOTZnTT1hOmZHOHJlbGIzMG1oVCRSPmlEa0FbW3JZPjEhYmIlLFFBWktIaTxAViJVaDVH
VGwoLzVuI1UwLVE9cDJvUGZbQiJ1VG4/VXBqVj1TcTNiJWErZG5fLFdzMCxnPFkKJW9AWmZ1KCtl
KDFIJ0BOJUwqdVQmU2dhK1ldXXQlVjU8W1M6ZUFrZS5xPCJhPmUsLWVEV0o8VEJeMWhjMig5Y2xU
QCIkc2UuXmEmWEc7R1NmYi00XDlTaUNmWGxTWiNbXlJBVF8KJXBaR1c3a2wrVzdWUFdpZFlzMThh
Q1ksVElBWkFhUFBvbVUuXmtNKVVjWzwnNjZHInFobW1VSVIpYEZwMW9tQ1dfJF5ILlJJWjldYlJh
OTBqXiYlVkFcW2c8ZkRJOyYycVNWUjQKJVFZY11xXDVCcFhqTmUqWFROTFRfTCknPE1ARVQwY0hn
Rz9eRFhQTEJiPT1FXnMiJzVca1Y3ODdLUm4lVXMqXUBRPE1YciFlZFk6NmQ9Uz5zS2I8V0dna3Nz
MmpqN1hNWmRNb00KJSxBWjVlM1JgX2svczJjTTRdSms4VmlIYWc0RmFPa3FUST9zYE9mMDFYQV44
L2tITjRQZyRkMCZnKjV0b1sqLFcwWzlxNkFEITdtWmdPKG8qPFVRXl4qdEtjaHJwcVhHLUcyOWUK
JXFPanFmKG01RDBsbkR1QmkmMzdMR3NndWA0ImU9XGNMJCJvOURWcGllZi1TaWlJV0xzZV1ILDsu
cFRqWFVgJjJSUWNTTGVCPGk/QV5DbjZGYHBrXS9BailSVW8/Mic0SCNONSIKJUBsbV03Y3BRODdc
KWtcVFFjcD1MM05xPTUkczdmOSJ1UDtRTzU8TGQrO0xwLkomXDUlWlxDQmFvLEhtL2xPMkMwNUA0
PDFlQCZDIXBYWV85a3RLU2FEUCNRRVhbVXNSLXI/YmAKJXEiWiFfUitTMDtKTlAiUmh1QkFKajVt
RmJlbyc9OWRFJScxOkdFMyM9UmMhW1cqNU11Ly9xXERtRk4uWSYiaWE4I0JQWDVPXzY+Q1NIclFA
SFMzc2BKLGVZYklGRTdoWDVfbEwKJVRGI1AvckcjQjZbIURaKEwnKSQ/cS5PMjlYWlZaYklcZzdQ
LS9NaDJvQFUkcCllTzw4ZTgoW2wxTT1gPTs3M1BLPzhYXkUwU085N1QiOEljZFEpTGc+bFNvPERK
NT0lXUNiRXEKJVFkNmNLSSk2Zz8/PDFzYk1TaU1qRz9ybCJET11eV25eTT47VkNZbFRJLjttOFEx
TUJKSWUucEJjNzhBOGRWZ2JuXHA4YylNRD0/T2QnS2llRGBccDJOOCg2aF1NZy1IZ0xJRjEKJVNi
MGJgW0VJX0NdIy5bZUlKPGJzcDkxMlIoQFI3JmxIT0JzNHMhbFxcTTA2aV1dKCY1bGZQSFBeIStF
NTA9TiYnaVZIOjc1RlU3KC0nYWtRUFB0NDYjSVIlblpLSCppI0BtR1YKJXA0W0ZNOFFLV1g9KGJw
dF4lKSdMZiNdQ1NORUdjNU5wWi1KJSIpPiQjbGpBU1FMK0AsNGYyNkdNKy8vbTMhIWc3ZVJLUDJe
RFsmXmpUW0EmI11pSydOTUZbQ2BwPDsyb3RLbUMKJVokNjVOZV5MajM8aUtmZl5IV0tdOGlQcysl
cDhPcmFSSmM6Mi5JVGlVWzRDTDFVaXMxQz89OzVcVUhxb2hZI1N0P0c2KlNpRWFWckJgbXJLZGU2
NF0uZUs3RiMiUktxNCsuKGAKJWdIVSlhQDZfRzhRZlYtK2srOkhiWlgxO2VJQCc6W0czN0c7YFAi
bURgN1ZqP2ZEWCN1Ol1KJyNuI0BVNisxV18lbEslKTFEM1c8QDw6ZClZOj5OajQ+czVSLVtyLSJF
ZDxPSSMKJUtCRm5ZTGRubTRfPmVUMSg/cG4rcTNGMmU/TDUrNk8nIjhbWThnWSlYSCZnb010Wyks
UnVAJ0FCXHFBcnFiOnBWTkknKl87cUoxRWcsdUBDUUlCZCNHIkQ/anFiZi5RMVUzTmYKJTNwRFE0
Zj5WQV9aaS9dNG5jWXU4YUwrT0FlJTlZKCpGMW4rUTBMQF9wXE5oNiRiKCZIZmZZdD48aXQ5RGJz
YCoyRXFxSm5BbWA0IUhjX2okWVtXQWdOTW9JSzpVY2k/YFxDYmMKJSYkMj5LN2E9SCFxQ2FvQSVE
M2RrOXMxSmRxM00/ZEVhU0JDTnUsISNaWFRTJG4+QDFiPzJNPjFAXmpYI1RBL0tHZm5CNVRhXUlU
XF5AOXQ0a3JJayllITF1ZW8pI0IyKFxjLWYKJVc6S29bX2JXRnU6RDdcMTtaKnBsJTowS0NhMSIr
UmdPQWkzSV4vIU48KWUyQFpidERGXTVoZ3E0TGVnK15cWlQsYXBfUihvRUtFKk83Yi1zT1ByQm1m
SDooXktqKWAqQVFALz4KJTpyVU1hJ3RzTHFXUWVBMiRhXWclYFxiXDhicToqb1xwLj5DR1ppXy9m
LFRfKkZtSlxebC03Xm9wb24sMlw8U2JWTWM9Vl9oN2NgaFtISjxCMjA4QFw9UXNsW0lePmlKUWpC
Y08KJWRsM28xUG9JPzwqb0hlQFckVl9eJE9VcTldK29yS1c8Ul5LRWpaNydJIiwsIm5mKGxHQi9f
PDtxRj9zJmQ7MD9IPnNHVyxLV21WUEZ1LlMnUSRfb0pZKWMoME5nPjw8X3ElJ2wKJTdOWm08ciU9
U0JmPzYjaWVwZlE5RCwoJThaPiVwPklCbVc7KXJwMiZqJCI3XXFZJlVFZSFuL2IxS21LT2lxU01P
XTVrKklnQGFGT1xjbCRvaHU2O080THMoKERqXUNNRD4/REwKJVs5QnFPcUopLXBgTFFkMSM3YzBf
WGBiY1JGRjgjSTNVI0U8cEZPQzo2MlA/T09YZ3JLOFxRX0xnNTJzI0ZZPWJQUFpBaj43YC9jXXBf
R3RGNUA6YGtrRj8vR0BTPmw/MjZtSUgKJSo3aGBJLkZrakFMOTsuYj9lTjMkQi8pcSpkPSkiUzFX
YCtGSVF0PmNAZiowYkxAUztoKWNrVz0xcF0+Tls4InQyU0shakBrTSQ2YlstQl9mYmhIRTAyZTpA
Q0RuYGpMNEZJLzIKJWJQSStcb1ZucHNULGtHK3A5JihIPVpFV09lSywqQVQ7Uj9gSFRyQktzIWMr
IkFhZEo2NCkzSGNmO1JHUGA6dSVoPipXI11lay4vTkxNaE0sZC43MExFdSsoKEdMTmd0QmVSKi4K
JW4pTTZMO21qVm5hV29mYmwoKmIvY21uP21vYm9hJERzT11UP1tTZmhSbVkxayo3Om1LMSw1Qmhc
N2pjMkdoN2ZScG9ZdVolcEg6IWVGaG43WHJMXDFVZzBvZmhiU0wxYWg7azYKJWosRFJBTU06QTwr
Iy5BUFwoV1MmaEooVmhvWWUkNT5BMVgiUlprMU0oJW47RlYyQClMcDxpOmFhZDY5cl1RLVEyZT9j
OCRvNzFKc2JNVEg+PGBgQiklb3BtbTUmb3E+QiFHKE0KJSxGKDp1KSpfUDNhJUU/RD9RJC4xKVA+
Rk9sO3JqQE1EUTNhQ3NTTTthMWxlQmBMdFlZPDk8KT5FIjdsMVMjPGEvZ0pyP288NypVQjpSWEQ/
QSonJ1tnaHRkQVNWYStnZVElVDIKJVxDM3FRWU45KmRwN3BLYDNRWiJzQz4lSU1uWCFORVw/JyZF
WCpMRGZJLXIjWiNjbyJHOko9LG5vMEpTPE82W1A8LCFCMGtxc0VvYCM7ayYwY0UlMyw/RmJMS2xj
IS4vUmFHcnEKJW51cEJZVlN0KENJYUM1bVU7dVNcPWEsZnRFViY3Z1I7S3AhMCUtRU1gIzA2YzJp
XyovRFVyJVxOI1U/QVpKJThxPlBSJD4+Tl40VENTX24uRy1jQ1lKK0o8bFo6ZlElQ0NOM14KJSo+
YS9WTFZzXzhrJyVETGlabSFiTy5BSUtAKzJWT3AxVF9dQEgwbFBXXDU9PW0uJz83ZjVMVmxENXQ9
TzdzITRqPFZuR0oyKUciKXAqbURmMDMtJjFPTTBdNmYxVUdKIzpsQi8KJUVsdT1POWJVIy4/Rilw
Mi00WG9CWGJgdWUyT3FMKHFxVGJaWUp0SSQ+KVpbNmkzSVtcXExsakE7VkA/WGk8aXJnayhCOSNR
IVw0cnA7Q3RMWjRicCZhNk47ZDRFSVVaXXBecz4KJV46PUFMTkRFW0U8aWAyPU5pUTJDXTZPZyU2
Q1RQZUZDIUk1RUhlY11LOWgvMU5wYmA9TSEnbU1oZTByXDNDREZZV0FULCxiUUwuSFNXTVZRcSFI
dHBGJkIrV2g5XEwlbGhtcFsKJTNhcjNlL0klLlRmKEhIM2NFVmNzcEspYCdWam1JWUczcl1nYDZz
JF5nU0BtYXBTRDE5XSlLKTdAY3QzclhzJF4xaENGWGFGXlpUNWpaRjlia1ZtbGNdQ0pPRDhXP1Rq
W1ZULywKJURkXyZHQ0goTkFxKHFAPFFaZ1Z1L2J1UEtScTlZJE4uOildR0ZSVVAnPVhMLFpzK29t
KGZMbF4vX1s1XmQnb0dpJWshcmEyZ2FTOVVCZThYW1VAXDFwUWZfXEhyMVVWbEYuLywKJWRiamBe
RVZaYVY3b3AzUiMqblRJYVQ4ISRZOS1uKSJEP1ZJLnNEZ2lVaD9JYDhRYE8+VEMrcGZqbW9ybGhz
VD0sczJbM2hFLEhfJGUlXVNVY0I9cV1FWGJFJHBvcnAzWWc7MEsKJU5aN103Rz4hOWo8Sm87bjJJ
b0BmXHIhUiVENkM9PUpxOEZDbEstMmplTkBxXmNobmMyczgtJ2NIQHFQKWQrNE1QQkQ2ZC9WWWBH
SkdGayoyPy8uNkYncS8nMG89aSt0RktWJWEKJUJwS2pKcic6KXMmIkM7WEYjbWUxSUFVXEdiLWE7
QFIxVG0/ckptQjElclY6KFdycU5hcTByKlImcnVGOVg2LiZYSDpXZ0tuWkRvQ2ItOGhJX0sqInBy
a1BWZVdgPVBePypEKT4KJVZfMi9kclZNTV5Acz9fQkcyX2VBWCc0NjE5ay5tYjJpOHJyXzYlKCU9
IkYsTiksKGlkPnM1VC5wJiNcay1kUzFbblJsM0hoZ2I5UkNST1dpZjlxV2VpOm5xXUlgLkotSzo5
RW8KJUM0MTRjQSRxc0kwLSZgXUNdIWdyaF0rViVsTVFvKF9fKGhFbDZYSUVQQkhhc1NpcFhNVVVd
P1dyNHBOYE1sck9OL3RQOlduYWEoLFwrQD0oXTsiYT9WT3VpZ2ExbylrZzA2RVYKJWQ5bFE/YjhG
T1RYT2InbGQ9Lm5takkudChyRCxWSWhDWDB0PTFcRFpoZCpzQypyM0xQKnJfczIxZW9PbUlKOidq
YkBib0xWU0REcVZHZkNFXVFzNjZucEIxVl03Imp0QlIlai0KJVtzZS89UjJvZXFMbVBcK3JxYlgx
WmNOL2hyVjJdM0lFJF1Gb3QwXGVZRms+c1g7YjFbW2VgT2BpVi9QPUNGQWpvcVdjZzYoWD0sVTBY
Wz0vbSFIJjZabjVxXWY+STAtaDdSay8KJV9VWS1iWUZqVDAwdCdqUFNgXUNPRTsuRCZiXVdnTFFB
Ty9hRzg5R01vXDcsaTA4czMiW25XaXFeVDE/NjpDaj1AJE48KjxWOzhLRGxMMCUxaTo5NlpTYD4x
W0ZuT0s4ZyVyUFMKJS5nKDc3R01QNmFXa1t0KC4sIidRIiRdaitoRUpWZVhFS20zVmItYzdZLmck
RkVVV2dBbUdRWEFJVyppYWVFPVpuPVpyalxISWloVFZKXFRyPSEjUVVccFxKOl1KYHNeVyYza0kK
JV9gcTVjYUx1bjVqZSl0N2d0UVxuVCg+MEpgR1Y3QkNBKy0zczdLQEJLUVRlNltZZT9TWV5BaUVB
XkhlMVxUJGAiXlojOzFdNUYuazFxUUJhSChJcDxyaHBSaHJgaWIhZicrMzUKJSNMdWJOO0orXEo/
UT9WL0VWISwzbFIwLm4qaCFoQl9yIj9UaHFyYlhCNmkxNCRPPjgvbDE7S2NoWHRFTStcVDlCKiYi
cExpVUghSyw/Z0RfYzBRVmBMRyU4K29bNypXa01PIiIKJWsxXj1ROCRsJSxKSTQ9U1xtWFFsJFwt
XmBHbDxDQjQkVDJqWVkhNjw7bUFsYWhRdGYkNDdoJkNCQGVkVmg2ViJIUXFrJklVc2NEdGpaNio1
RlFfJjBKIUklaVxwIikqaENrMGcKJWxJRUM0anAyOWtnJGpdJm1RNV5ZVm5WUWxORCFtb21wPyUx
YCpVXFxqbG9XaV4wXjktVDU9TXBvXTtfOVtbWVo2R2xMakcoSmEyTVYiQzNzYzM2aiJtbCtiMFFN
aTJWP2k7Xl8KJV03JzspVi5kb19EPSxiU19uJ0hGXUdtVFY6dFxtJUFwaDciY182UWMwM3U4P2o2
IUtBJT4zRUVZJFxrYXJsZzU6cW1OVjIzZFhodVFCZEJqS2dhQ25CP1MhWWV0ZSFNXj4vOCcKJUdE
RDUhNClzUVAmIXNHKF5OJlNqbT5QdHNJSl9UVENSOmcqSSIxKEQqTGtXXj0+RlomY14lUmhqWj9B
NjshQjw0I0VQQGZraTpkKmV1XGFoOT0xSUlHRmghSC1OLHVScVcqTC4KJUxcQyxqWnU0REFNcyFi
UjNPJkIiPEYzREtZYGFTa1FEc2pRa01ILkIlMzdXWV9YLDdiPGlvWjpyZWlHRWdsVDFfcjo1ZD1D
SGwlM0FDZTcmS0RUaCs9Z01TOihcPyp0bDBmIjsKJVFSWXFCSS0kZyFdRCFDJkNyRkNsVUdIL003
Wko8RFw2a1I8MihGbkBxOC1KXmAxdFplLHRqUlQ9MiUidVgqal9gQVI/bkZyclJgc0okVHMwUEo1
SkIjdDB0RENpITVhS29PTWsKJStEZy9gRU9JWDZbV28/ZTVqPT0va1BvV1EhME9gRmw1TCNWUSVW
JlkjY0UpL2hJTFI4Vi03dFNlQDY1YGtmVnRMWF1pKUlmOmwhRzZ1b2dyM0RKN0BvQSknVyFdM0hc
KUltQSYKJUllXzRUMmhFLUxNJHRpWTg9aWJJTSE+KkRNIVo/STQ+PmhdaSZmKV8hKC43RikkOUM4
T3RdNC1Ic2hWamsoJ0RfNFx0QCtHVExVLyNbLktcaCIpTGJmTDtlRlNXbEUycUI5aHMKJV1oPjQn
ST9kKlxwUzokMFpvOyUkLy0uWGFbTCdxZ1QyYGlvK05uMCw9MkJmVzZdOyI3XmswbXE7KmFcUV9k
PlQqU09KJFNhUkxMVEFMTDpWZ0NcZnUmRV86aUFqQFJkODMhP0wKJTBeIkBXOidPVCJVb2N0aCJ0
I1JVOz1rJE03Nz9TZVlfOE1zV1c1PFcmR1ZycEtnNzQ9K1ozXT05TkdrXCtVT2NwWEBWT3VEKyo7
Ij80Z1BtOGpvOkUjU1ZRPlY2bmsmVCRMaV4KJSVPOy9baTxBLTBqL2omWjVWX3BWalAnc3Ercy0+
L1csLW5BTm40WF1MYyVQb2NqNUxHP0pvTGhKMjg8ZyZIMnI7MD80a2tWKFY4YVVJNzRgW05INkRN
OEhrcT1sTWpsN00oPCEKJVFEPlEqJDNGYWFMXztqW0MmZUdyJ1Y6P10xUG9lYiRPYiooIU9KUG9A
TSxqPzBgQGAtU2JJcVpicV87M2tUPCpAJHA0KHJPbDQzZC9SaCsnIVtNNF8sXyUmQ2gwOV1KXyJq
XlQKJVA+a0plOF1mLi5AWnRRNy89J3ExRSFrKCZHY2ttLkFWbE8jSlMsJSpPREtRcidKX2BjOmZO
Z0pDZGQ9biJGXyc1QWQqbywmK3UqW0RANUFVM0YjWipBV0QvMlRRVDhJQyxIOUkKJWZgSlFjSjNE
PTgscDxmQFtmYVpGP3FFZzBNdEslTDc9PCMmN0pjS1ItTkNrK0BRTEJJJ244ITZHdD9Za24vKWRd
IS9IT1ovSi89SjhHSGJpKiJsLU5LZkdRSCFAZ09ZJzQnMEAKJUxhVDVAIXIvNlk5SmhWcFY2VkxI
JjlcR1ktX0RLKVJCJUEnQG91aD5MaFFwYmhKcWZRKDZjYzA8Ly9fJy1zN2BgVSonc3EvMyZuY2s/
XnNWTTluZCdBRUpQIyU2JCNYSkRjTCsKJSdLJWNYPiskZVtDXVlkXSc8RmtOcEs2MTgzWzlKZFI9
XkAoZSgoQE03KmU2LigvUTU1RDUsIlRFQjkoQ1omLT5jS29yZSFHRjtPKCgoLV8+LU1laVA7VzRT
ZmtqOVtcIjw6c3EKJS0jdVo8NEw4dXUxMEJKS01GQ2MiOVsmZENeXWJTXDA4Tl1HWzp0WVE5UHI5
ZSJfRFEqalQ4UFsoQyY+K2FePi4zKUtGUTUnYTcuW1IpRm5UUHNvTzxXIk9xTCNPMVNTIVMjVyIK
JUFZQEBIZzpfUCJkR0cqcWE8QXNpN0hHLy4wYCQvNjdVb2AqWWJERmtBNzo2XTtiVDgsM1hoWiRA
WS1OXmYrUVVsK0lYO25XPG1PLEZeOjxSSkspaiFGRG9ickRKWF5XN2xoQW4KJVI/M1hARTQuYlRX
by8tR2h1cFMnZ1FiLjNCP1dBKlFBS0guPGxRc1BrY1s9OjtxTSkuLjNPS2QnLFpmXi8kcVZnUDs8
WF1eLFVvYFBcPjY0KkROOzBKTEo+S1BYW1BeRDopKG4KJUdfcSsnNWtdcStLWFFzPkBtPnI1LWY5
WFQ0LXJfQVw5LDR0Y2xpaUJvRTsvJkgzSVFLNVxNWTRTQGljbkZNJkxrcy9uSUU6LEtgYCVGRjI2
bGBdYUxNMCgwM2g2KSk3VG1RUisKJWFYSE09cFVYb2dyZCghRWhcPFtQRkQsZlhKYC9LXyU9Si0y
U3U9R1tDNi0iUEFobFApbWVwIT9tWENYS2VUYmErVWtcWDlgL0Q9bDovL1VtbXJDOSo1SEY3ZEwk
VFdFMmcmZmEKJV1BS0ptYnNpNUhHNSZXQFg2dVhQalxpaz48czBAR0ZoNSdRO05LclhQPG1mPWU9
WTsyM1JSU0duaC1ecEonYWtHVjpyPS1Sbj5obFFWUlJKTDZGJUMjIiNQY2I6PitMWUozMnAKJV1x
XDFNXml0JDdtZURxbVs8UWJqNzgjbTNeL1FXL1RXNzMqP3RwOVBlTip1T2hGaWpvazhoaUxZYUMx
MC8oOm5lNWpmQSRkJik4YDxATSpDal02KjJQVGZoPGJJKWMpNjY2WV4KJUZWTksscU50aDklNHJx
KzIhU1gyYVcvOTAyTDg8cV1DT3U/bTlBMXBybyNlUj0kSSJgX24zTVJROiVoKHFRaCFJbytUTHJY
WVBWZl1mKCpgSytvaVszTFpHTFwjOS8nNkNVSDQKJT9JSiYuR08hWUwvUVE2KGMyRk4rcjVpajEx
J0AsO0pbU0xTMW5oXEFnaStKc0dALDpdQStfYG9SKE1RczUlRiNNb09sQG5ZTj1wN24iWT80ZiIj
WFBmcmsvK1Y4SzkiPCpqZi0KJTJ0dSg5ZThNYCFvWltwXWY0PGE8PWNNWTY2L1VoZSpaVWklQ1w1
b04zcEJHaVMkZEVSLWksb09sY241RlVYYTtNaz1jWzddQUxQdS1vJCxCa0hiSDA7LltsRmdyaT46
Ul4nYCoKJWNQRiFBNTk+ZCglXl1CTXJKT09MbmFFcHRrTUcoRlNxJCI7azNMQDM1OTtIaEhfU0Jh
RD5RTmJKOkRkcykpXmtacVAhVjw4J1lyZSJZUV9YXWEtN1RcYCpHQWRrS2RUayxONC0KJTVDQSJL
O2d1bHVbcjFUcyRAYjdBTmlpPF1ASVMxZDVCMU1ZKWppPkJAZF02I3BqTGtGSU8tcHNdNkhnaFtG
ImpeK3ImMkc1Qy5jT0JDWUZvQ1oyPmcmcTpGTlJlayMuMGtpa00KJXEoS1phNFJCXSdIaCJdV2dt
PU0iSFpURGc9byNyV0VjWl80LiVLcTxKcSpgJWJeOU5kaGAqQShyaSNpR3BnbCoySmFlQCdScjpk
IilbT1IzUFopdTY/VDdJXm1cUHU9QDkkbjQKJSZgTDdxWjskUUNwY3JgYltJcDg2OUBrVzQlaHBL
PjdlaXFCXFtjKS9VZituOCxANTRXMnUmYjtUaGA8LD9kaWA7OHA6JURbL2csLEk3NXREJVNvXik9
QS4kKURbKiU5bTVFaEYKJV4mOnFpXEVoWnREdDtbcDIvV0YqcFkkKGMqVnMuX0EjWU1fLyI1YjYy
YExLVCtySlg3YSp0LS5oPVBIW3A8aU8sbD0sRUcuWXVSSlA/cGlqNTkuQFAwXGZaUG5CX0MzSkRZ
ZVUKJSEyWSYvOSJKR1NwSVNAaUY+IT1hUVZkLGVIVlBXdT8sIyRDXCJlTE9vbiwmVyNIaU80MlV1
OV1RLyM9MEA5VFpnSEhwSiFsU2EkJDwmPnRFaGslZCxTVHFWYWUzdHVyaF5ebFsKJU1lUjgwJmlm
TltbcTpUJFBIKy0vV1BlbHQyS3FgaFo5Qzl0SDhZPlZVQWB1P2hORDJCQSg4MD0yLF5SLj9WZjJL
a2UoVGcvTWhKQlkkJywsQ1E9ZklKWkRYZG0tTUc+VjdRIikKJVtzZSJ1JVdePzNxLDJUbj9FO11N
TF05OSIwWGo5ZWIxMWlPSSs2LF5tIipuZDovLy5RTl9eWGArXCQ9aEFZNGYsKUdRVUdKajRMTiZZ
bCh0bExkJGlGdGFoWU1oRz0zaVs1U1cKJVFHJT1eSHBHcTVUL11GZV8tXC1CXUUoN2VoOCEiNzI1
NCU6bT5KYCZxO1VUSkJXSjpXaE5sPmhqVDM4Z1ZhUEhaSEleQUFHPlohbFgzI25YWmZoMU1tOEhO
OHEvIjplZyFhPGwKJTJRI1hGV05KPkFgbWBDN1A+NyFdXkpEVikzWFQ4dUBqZlBCPWVMQHBSWjJV
KTddUGkmbkgiXVI0SChvKDlsN2dlTlJvbmRraDY1JClsNEk2KzVcTD5sOjItREBqTDZWMDFVL2EK
JVQjdDVfPigzKWxLQENRNWshW1xKUWtQPF9wJjIlWW88cj1cbUlVWTNHMDY4W0NqXnJkYUNEZCZs
YypNTFg1MlFzYnRsaVtda0twTWVGW2ZyOkYqKTE0QHVEQmMuSUFQcFg2cycKJUhRYCgiLDIkLlBw
VCM0aWlPWlIrZzg9aUJodS5gLltXVV5bcUw0cEstWydYcS1PKFI3aFwxQHQmaklfVkA1Qy1xTV1V
ZyZTYzxuVXIkbj5jTXIiPExkIVNrPUc6MXFSZzx0QDgKJUVjLGZsZVtzUnJJKG5VXylSMl0uYjA3
dD85QkA0ZU1uUHE2N3FfR2ZebDwyQVVYYFJqMTROKWlTNTMwb00wRi4rJGZaTC4kZV9SZVMlXjZs
Xm1mKyRhL1lRJGstU3RpaCFLW1kKJUhSUCRcWUdzYE1cZFBiMGc3QmFNJTs+XUM/XytwS1IoQnBk
cSo2YmtwMzspKV49L0tpJ1xBQUsrJ1U4azVdZyJQRjNrMEhkTkNVJmNRMWEyamgrP1IxLT9cKj1T
QyoxbGFROTgKJVpZLUpPcD9eWUREPmczZjRrbysxbC9TNzpHT0Y9byhMOTI5R0FgcltBR0pac15I
OFEtOj9yZmlYZD11IUQqJENbPy4oXEVrSisuQDJkY2hSV0MhblpNLEQyIiUyKVAxSG1vNmUKJVUj
Oy9zaEQtKTg+QiE6MCFYLlJxIXMpQC9yUEJadDlVWlxbSGhlZjNvQFouX2lSJ1ozMjJTPVNvSVdh
OTZTa1F0KmRCNWw8OEsvPVJVSzxrLFw5NkBdQVQyZSN0cFVsLUllWVkKJUUiZlNuS1YiblMuZCRq
WVBORWpwMjlHWSFyXkUmSyEub1ZgQC9XcSlJP0wxVyVmTVFcIUQ8ITEqbS1qRGoyVkZaW1wpYFVV
Q1tzZTZXXlknI0FRcWUvLGA+SkkkbygoPVhaVWcKJS9QQnRdRSxwWjY9dDBCJkhzZVZHJGRgZlok
NVolYCcwaTZuLGtOKSVlWCMzOWluTGIrYzo+RGM8Zl1cQVUxWlFgQGsycS5RdFJbTDU6RCk2T0Zo
LnUhJ1p1QVdiNmxRVEAoSEcKJS01K0dkIWRGSmhkZlArJVRoQFMpLitwbjMiPVJCJnJbOjxaIXFK
UTJDXVU+ITZEInFxVU1hSl0kcXVIPEcoX3NlJ2FzdGE1MDBNb2xtZnVYI2o7P1M7aUY+KW1RXVRw
K3NUc08KJVtBOSVlTTNTLzAoX1NhNE1iIyJKIS8pQylaPSQrTGY0UTpXK2BXSXAyJ2JIdTY8QFRv
RkAvWmAkZiFbVygzM3VhMF9pZSwhZHRtcWInakhvXHBtUENmalNGb0JQUGo5ZVNYX0AKJStTOEQ3
T2Y1LmMpY0Vna0hBVmZmJmBqP1otdEVucTpeKitrXDtzSGcuRFF1LThtYSE0IkRURlU7K29ALzJH
Lz8pKS09TzBFKCImYWVnOUZoXnVbOU4rJlwrIk9TVDw8SzFyWWAKJWEjWkBZKG0lTm9XJ3VsNTgl
UG1YcWhkMEA4aUZQKzd1RzxcISRPQW85VSVwS0ZZaFZtZlNtVExpOFtbRy85WzMpKF8tT3RAbzwr
WCRqMzpZIWlcO0NXaVZSXV4tKFc7VWo8ZiwKJTUtSUVDQSlUNV02bSdyQjY/JFZbIUZZRmMjanVR
SHBrWTdDcEdbbVg9ayFFdXBIVDl1MkhlVSZMZDt0JHFPYVUjTXJXT25hQzplSigrKUxoJU0wSSM2
bTUzZENsPDViVWMtMWUKJUlHbGJISSNSWDwkTydhVk0/YlBWYVI5WFU3Ym5wMiEpXjNuY2ptIyU9
OWN0NT1MTjA7Sm1NOjsta00sZTVTV0chZk4nLHNsNkVBMlBiTF8/MXVRSj0sZ1xrTF9QKCdJRyJl
OEIKJTloK008JTpFLC8wNkxlL1pqT0UhY04jKmQ2WTNaOT49L00mNjksWjhMRGJyRFNpOWIuU0py
Pk1gclRvaUBdUTAlJnArcUEtJUtBV19MazVdal1OU1dlXTIyOVo9LztROTwpNWMKJU00V2ctLm4r
bFUpQG9lZjhbUCNbV1VaMDVXJSYiKF9fYDNkalRIJ1FKJzNgZ1g8X0FtM3FHTmAsPm1jMmtRY2px
Sk50TiNVVmtlZUpRLFlOb2JyVlhGYVIocSZUWXJAalFBIVEKJTlDViRePCUlZE9ARWInMG9hMkNG
YjxhTnNuT1heKDciKGFORD1WbFw0ZlNMXFs2IXNVUzpMNkJWJzc9OC4yWChaOElhL3M5ZW00XmRN
T2lSVF43YGQ7IlNVTVlMS0I/OEM1cm0KJVVQSClmKz0iZV0rTDNPaC09JlVYWXVnTzshP1BDLTVF
J2dxKEUzcGtpPSdOUStKI0ZvRilga2s+VyZNP0JVKDhWPXAoTzc7XFBQMj9qZHVcSz4zYUEsMGQh
SlJnIyIwRCcjZWAKJTBGKkNFInBQUj5pZGdKUWA9Vk5UVjxAJzlXP2Y4aU9pPkR0XnViQzcxVSkn
OUVraTtOJU1ISTpLbzpaIVFbZiZCKjJEbEBWSD0qaV9ULmU9Yi8yPl4rPF1RUmZqPCtYIlV1YzcK
JUZTK1pXKm5hRmhePTVJWClIaCwmSytHcXFNTF9QRUFzJihvMkhrVj0vMS1mNjtzaF07OiVUNjEq
SSczRl1CaSJjMiUsK0w1W1VtSXFPRDYqLTgzLks3QC5XZU1dPCU6OGpMazIKJWU5OT1QJDJiJCxX
YDgkZyZhODIqal9DMSkkc1swKXFoXj1LO1BORXU2anE2TGY+Rkc4I0k+X2AhIS1uJltMaFlzVTdO
UiVGVVghYk9zVDZPUk9KaCsmTGZKKGFoSHBfPFxzZTYKJT4nYEM+OGpXYDphYChEO087V14mLTBR
NCtyJU82NTkyVyRPPEtCYVskKnNrSWI5UzMsJ1szQjhwLlBjLix1I2lQZ25xcElUQCpZVmAiK0E7
SCEuNEFMPCg6Y1hPK0JbJ1lNZjQKJUwmRCtcKWNrUGEvREAqUExgLGgqUE5DXFVXImN0aTczRUI8
QlwncCIvPloiRmhJPV1LLDhxaGcqQ3UxWFcxVC9UPUNfTjxrY0krbmZyWFRMOD9uPFwpN0lZW00m
T2Q7WTVpX3MKJW1LaXJvKj05V1AyIilYQyUrTzYuUDFnWlpGQSdcUFBaXWopZzRcZVFdY2hjIydC
YTY4YS4wKHBQWUJOUjY6b2dOa1siWStHLUhFZSNxbiM7VG8/J1A/P0RJL2siNFVOImM1TzcKJWJW
LXRpSmsyODtQNDw0PEQnM1RPXSJIXSZlL2YlMEZNUSZAQE1haigsMGApdEVlUzxENjQkPjxbVGEk
ayQ6KGElIzg1N1lVbi1dTklMRz0lP2MvLz9ePi4/QS9mVFNPQj9JcVcKJWxQRnQ7IkRDLjFMTVRs
KDJtPmFGTjNZM1AsZSwrOC1Vb11WJFRvbTEpXGxObiExXFRSI2w+dWEsOzhbLDNHdVs7TiJTaTY5
K0hmPU9SNzIrT0NTKVY4NEppNyctTSQsRjEnQCEKJUxfOnA3ZCVrLihCV2E+a2BcKzlQQjhUUkhL
ISRkIzZUNi1ZN2puQ15qSltxb0BjWDdPSl01UDk/dFVbdCU5Lm47PUQyWTBUSVAyImZnZjxtMWti
RkNQaWZtJTwwQkc+KGksaUcKJV1TKm08TyphcTc5dTgubU1FL3RVOzlvQ086YWF0SkMiQiFQSGBo
VVtpdXBgKCcta1ttNFJYJ1VbLjtaSW0rQl9qTSZKVzk7M1lgY0xgT0RCKG5zb2dNJ2BYXDZeJFlT
JSxSRkEKJVNPZlw+OCFSKyFMVGsqWD8ibGU0MjsnWz1XPV5eR2dXLWM9KD5zWnNAakBNKTw+SWE5
UCk7WjRqUlFpOTpoaU9FUFhyLT06Y2hIZnExNio8TG9KSVUkaiM2Y2BoPiltVTFAQykKJSosLnEm
LCVKTVMlKTJkbWA9XXIvTTdiZEFZSD9VZ2o1MUs9LnBJazdfQ2VNYV1YRUlVPEk/dHNJNSI5KiFS
OHUmRkVFYV04PjtKalJjSz5ULEopP1VjR0cqPWN0Z08wMTJoalIKJS9dKlRkQEF0JWNVYWo1NFRo
dDczJlgpQz8kN2RDVitqJz06Q2Q/VypEMFElQ2wzVlNtQk5FazMjVlNNMjZWOFs/Pj1eI11GLUop
KTlhR0VpSUYmMj5XQzVZSyUsJFpQInQ/bTEKJV5wInNubDY9V04+L0JWNm1nTzMuYFltLDAiWGtp
KVc1aV9cKCRYQCEpTidCQ0oyUmVzQk8yKjgxNEdYKUNzWVhGTGMiLlhjZ2M6LSNgMF5CcVAzIXJw
VyZwSkQkQzFSY0hudSwKJUBGTzdCSzU0JWRZdC46WTljS2VtUWlaRWAjMUJOVjMhNU4wJlhTUCNL
aSMwJ0ZyNStJUTMoWTBvYHVwUV9bTU4vNVVZIV1rPz5PPSZWQUgpcHAvPC4sOW5LV2ZnSVFBUXAq
V1gKJTA6MTcmSlJBMS1acSgzI2ZVIW08ODM3Q0wtLywqclpJYTJPLG90ZjdWP1o2aGRxS1ojLWRe
WVMmYjBHOj81UyxWZUk7cVZOW2JwTkNyLVpMX1dfLm1JUlchJl1hRzZfNkBxU0cKJSIrRzVjLldA
XVQ9O1dwbE0/W1JTNiFQPlE1Z1l1akghNj9vbys+anNUblpNdShbWW5uX1MmRTFiKCo2OktfVlY2
KD0zNWEibVhqPTpNXU5YJnQjKWtnMkoiYigwYy91OD1eTUMKJTxnc14yY2VyJlIvKWQzQ150ak1i
SVVJIzdILUNyT1IvdCE2TyUtJENuRldZLipFLF05RFRFOFhfb01ocWU+NyNTKiFMa2drOXFuLy9i
ZkYoPm5RPzhAaGIsTzRXUDUpVD5JRFcKJStscVxAXmRnRTxQISJjUGZSbD1JWzMqVSZHXidJK0BY
Kj5RM2pJXy5Zby8zP2lYNU0wcSg2RSJFS186SipIOClIYThuK2EuTkttMnJMYXBoLjVpV0RtSExc
Yz86TitwMC1gZyUKJVhRVDE1PVdHOmswdStoRzJkbzIpVzt0LmhvWDNYaCVeaUNZcC9iZW0kRS0p
NVZOKitGMmxRXnJuM2lLaypuWipaOUpcdSE4NEdFVTNqRE5iP2RWQURGLld0OTc2I2xMVkooYnMK
JSNZZSdYSyI0MmZRUmI+WSgxTS5BYDopLk1WaD0jYERGQkNSKjhsLnNsZ1hzLStSXVZqMyZoKl9U
VEdRQjkybzs8IkheR0kqPiJrJlw5KjA3b2YkLDpfKjtHWUA0dUtlS1tXSj4KJSglPG1bSStHPkU+
QS8wPW81cSxLNkdTSyMpPkZbSGtRUjlJLCNVZkRuWUQsMVpZMkkqK2JfUCdHM0RnSk1WWDM+LC0m
QiVSclFlcVFMYV4/bT8oIiYqQltfaS90TEkpVEFTYCkKJSQ9Q2ZCXzg6J0NXISItSDZvSD09JEk5
PEkqMC5tXmdqcllSL050X0BqJD1LUkxRL1suKThRSiQ9IUBgdXFLczQ5MTMsZ1BXVCpFJSRnI2Fa
JkJ1MC5aaixHRjYqc1ZgXUpwLzUKJSg4MUktTnI8WHQhaGhnbT8wTDw8SkJcdDtoZ0hGKFF0VHQ3
Si0wQ0o8OSE6aVY/ciU6Q187REBMKjM/OjM5JGdhS1s9NDkrR0FFLjtRQVRpSzBdZEtKSkZYMEld
Z2Q9JjxTaTcKJS5pc0w3Vi5HKTxkLSYyJFQuRmgzIUxYRyUmVHBoJCM2UmBebz41VXFCMyUxdD0i
TkFyK1pAPS9VSjs+dDxeTmkuUU5mLEdVOiZZV0c5azRgI100LStnLzVZQlZeODEyXzpLPW0KJWhQ
YWRPJzFWViRMY1RXY0RNWGI7Vk49M3VoSFFmTENxc1pPLWBUPmVYXVlWUFlxI1FfZnRZNC5RJTNA
QCUrJmRNXGkpdDFBLXVAb2hbJklrMV1JRFtLMmwxKiYsbGNiPElXL1cKJWBhOFtUWHJqPnNRJDMq
VHMvSko0PzRUOGErLCxrPj9PclBubnMkUShwW3RWL1t0b2pQP1ZfQTFXXllfP29pUzMhKS8rLVtu
WmpYMGFdaUYzXlImJGBWMGhaSGFUb1thP1Y6OSMKJUAsKCxuLzUrU0xwaVZnMmQ5PUxmLjhVQklP
KmNzJWUhMz9lNUcxdFZlUGxrZWttdW5VNmlUb0JYMUE4ZjtsTkJbSGRqUEwnUV09IVQ9PGBjcjsr
a2VSTXN1WzpJW0BxKiZfWiQKJTpfQSQjOGx0SCZGP05VLVRPKVU9WiM7PmhVVWNwaExgOlUsL3FJ
S2FSS1snJm9GYUFjJ0xGWEZjWDRHYz51W2lFO0pwZllxbE4lUHIiMm0/TUd1MkUucmBvJCYubUIj
YFU5ITMKJVVKYipqcUg+WiRsZXM0NUQzRURkJiNFMU1eInNxSyczMWhvLm1VdV1naWQvXV5mNz0x
VVJYN1ZPV11ePUROK1MnaSsoMERiaXUxT0BBSE08KToxLihDOj0mYVhdKTloVixGRk4KJW9oJEBu
XVEwVkNjXExuZGMpTThEMygvZmBYIytxN3FdaiteInUsLywyNT5yTlI2QDNqLzw/Xi9lO1Q2Qidu
M0RhLlRMOTZdWkcoMUVeIm9dZ28vUW4qQjhvYkxSL00+ZnNKOEUKJXA+VyMkXkpNXEoqamJJWjku
ckkxJlQ4JWhGPXMjLllSalIuTy87XGplcloqWUdMKCJIKzhGazxVNC1DSj1XTyVgbnNQaTs4KEpY
WmhPJW5FV19iK1RLMm5dOm5GW140cCRDREgKJTlbIUZybzFxYSM3I2EtODYkQ1MhMEoiLSdUQTJU
QG4rZ0ZTVVkya01xdEJKITtVRU1AUDwlZEZZOUJeU01mcCxyOEMrKENiXldzbk04Sz9kRFxpV29w
SylfOFguUUNGRVNXU0oKJV5ScC9RTDM3azgmMSkhRWo4QiE+bGdYTCE3cC9uMjE4UmRvVzMhc0VO
L0hfQCpeP1oyLmIvU2ZUZT9yJG11I09KSWE2SF5aXk9DNmU9TyFsby4zJnFgZUciMURvL1NsQCFA
JjsKJS1Hc1NuKD0kXy9RJXFXPlFXPnE8R3A5S1FbNConaEQmZWU/aFcoQzA1OSslZTtUbSIpW0Yz
Tyc8cUFRP1FSXmpiXG5MVmdaZSRnYCExJ2txXE5NWWpsRytVLS02Rjs+cVNTPygKJS5zQEpvU1U1
S0hOZ0BlTmpeME8pbEcpVDxjYzhgdU1LR1Q7NU1XUktKaDpTRFVSW3BHOGQtREhXbDlpO0ouREY1
T0xhJkhLYlskcEMjJCxSZywlPjZJRDk4SCMxS3FeRmxMbzoKJW5cWl5AXylBajxJOyVwbiwiQTxz
VmNxdDwySFBlQ3JINS1vY3FmWmducUc9bzBwQiFdRjA/V0wvOXVVckllQCJYVSRQVGZTZWJgKF90
Iy0wTzVKaWVIRC9xXy9HUzFkV0RwN28KJTJJMlRgRytcUTojN2YjazdvXEtFWzQ6JT01XzNVVWJz
XlhqZVAzcmYpYGpyKk1cPDhXJzIrZVFjIXVnN2AhYDM6Y1ZrWyw9NkQuQHJPb2peamdfNkY5aFpc
J15SXiYuWVAhYDEKJSdUT2kvX2w3Ji4mbEpEJ1xzVj8mb0I8KlM7SDRALm5CNVYpOWA9ZS8jPylx
Z0klcTg7KFVqZzozXzshUydSRD5ESVhaLVxwSmNvIkVNMmZwKTNuKWdNQUsjRj9FMyEiMCgjKVEK
JU9oRk9qSGJEN1hkXlNyP1k9SFUwYGxEUXNZJjNOcThzUT1yVlY4Mjo7IWErUFZQbmFlVFFVYyFN
dGBMKWgiWidXclx1aDRndT4/NSw1JUZqODlOXzMsczhZUlo4TkAxQ09mQEgKJU0hcWAxYScxRnFr
Vz5iQ2ZvRFVRVWNMNSRzNV9uPUhKRTgnUTZMLkFuUnErRytTJVFSL3NVYTJkIiNqZFA5OEJKTjZV
KSxoQVhjZlZxXVNLYVxYKVhdIUQwbDwyV21abEQvSS8KJUBVLXMpOzVyMkpHT2klPkNVYClRSGg2
JElmck1sNFtySkdIN1lKYkxrbkRDPWcmJTk1KjUhZVdwVSwpKiVQO09cQy49S0ZkNjQnbXA0alNH
RCNHdC4rU28iWTIsJzo1ITJvM2kKJUBzTSNsXTQwMSdcKSxlYEdQKGxrZnFoV0grVTljNW8mPGpx
WGIrWCdqQjBOQ0k2MGk+YVc5NDA5UFZvL1tjY1wqV1JVRyttdWcsKCQlc1Z0YS5TWiZJSiQ6Ji9q
NDdGbkdKJD8KJS48a1gqR1REYyk5NT1CRW5iPCM7cVU0OEtxNzU7ODgmK3VbZ1VpNiE+aHFKZWZb
TzFkKnFnKXElPiUnQV5eUDA8P2M5bEE6RTtISEghRllMVkkjaEFaXT01M3E4QERGYCk+REUKJW1D
V0FUci5uI1YlbzM2IWhtPD5LN1d0XTNxMXBkdT9BRzhZbThkMDBVczMhPiNmYlJzUD5eTChHI1Ql
XWpeb0xEcWE4S2tmJ1xyYSgvTGteMUotXEVwXnNVMlwnQ1BzcEwsIysKJThWUGhmbiw/LjxsZzIz
cz10RmxxOWc9TGJxczlkLj4rQmohM0hrdVE5PkBYc0lucVo3ZD9YXCs/L29fS1kzWHRuZUAxZzo4
WFY6cG1QJTRGMCNzUW9tMVY1SGVZaE9XNEBtKSUKJW0lJ0c7LFNCLVkwQVBKRSYmIkhoTjhoODBI
aChPPSliMyo6Y0IoT3FRSEBuTSlATUFRW0JrNEclUCFfdVA+bnJwQ0NhZGJiSko4YmRgTUdMMmdk
SGI5Y2VSSmheRWpKV3BQU2UKJT9kMUMlWCUwViVJV25WdHAnaltYUSVkO1Y3RjBZZjJLJTlTRjY+
MGVHKDRyMG9WSD1eRHIvLVVqaWk5dUY4RThER2MwYSMuYzdVXTo/TzZPRm1bbmJmQUxxdGY1S0hX
SCxKQDQKJTtxZ0U2LCglcUI5VHNZUTM/PkhcQiYrdHNhb0IwTHBaU1BERkIkOFsoK1QhTCJlTW9g
aCoqO11KTzduVTJlcHNPNychMiVSVCstZ2g3UXVBSHUjRF9ILztINDk9cSc4N0N1ZSUKJSd0QUko
O24rL2pWYXRKX1NxdCxfXkYtaUFsaTgmOm0oPS8mQUIuJEc6Q11EUkk6UFFhNE4+QnFWVShXOjVU
ZVlsNjokRFIoUyVUMSI7OEZpX15FR1wlIzRodFdGbXNpVnMuOykKJV03LFNTZTE+JzNTdTVpVidL
Zm5eSmVIcC87KEAsPypHPGNobGk8L2wvaXIuKSExW2tlazcwNyk0U0kvMEE0ZD5vUi8lNjMiKnIk
MiRFMGJecSMiQj00RyZkSCxrUVZAP3Ioc08KJSQqSD9lMSFgNSJMYG1kLDslJEteQVZPR0FoXWM8
XCE5P15RNFtPXSZDYDFYbGkkPlhyTy0iV2A2TkVvYUtJZFE2TkhDMSllJT5aKjcwcWgoa11LLjtd
LWFsODstQTxTR0JCPiYKJUhZUXEyJDJWLzokQGFjYTMuaCdVYDpSX1E8Iy5SOyI6JCFabjVYUU9I
L250PSVSRE8oKjZFdVAwJlZqIkQyMC1uSWNyLG0hbFRrZVZcKkRDZnIrLVxqcWFXMDlGOypZMTUl
LzsKJTg3b3QkbkIkKS0hOU5SM0E1QW5bLiRJY1AhMTZAKTElYy1nVzEzbDlDR0MwZCRPQS4zJHBZ
cHEnWEFQTTUhJypnTEk6Yy8iRykpLVEyUiUlQSdfXGhncDNHJU49YkxfLTM2K0QKJV1GR08+NSVA
TG9lUlA+XkpBQz9vLiU8aj8iUz1BZDYkLC1PS1NGIkZaJCQqcTczakwuMzxwM2xedGRuTD5fN1wz
PmVqMmZSVCkpcFAwQz5sZClST0EoVywpaU9PLUY0IyFhbisKJWwsJEYzYmltTiZZIi1gKmklNzQs
KUZTQmpCYS1gOTtfWXAlX0IrLFZBZXFub087VUQ7TidkVXNPUkZtKWNfSmlfcUQnJXVCSGtiVU84
TiooRSFjW2UuT1RcSyg+OW5GXDdJIVYKJVpjLVY/QlQ8WU46ayJnYkVHcDBQKUEzWXUyNFM2YTgh
YkRZXFI/JmYoajEwQTs3QWBgcSRMJ01VLzEvLl9dRjVfZUhdPyFeXUA9RTFNU3IoRzo4RTRLOllH
O18vXzZGI2Q9clUKJSNVSkJDXycrdVtlPm4nPWVNLEE1LXVNcGBKUUUhbUB1ITxkbWBQbV0sal5j
ISw2K0NKOW9eaz0iPGlFcVleNlNPVXE/bU9WXz5rVidWWWNpS0V1Jmo4QFRQWWtEOjVvRW9NPS8K
JTpQTSYiS1dyXShgZzE8T0cyMjN0NkZAckNdNGw6TlZlcTxKJFErQFxhaDtLJSVSRzE+IlowZyZQ
bjkmc0VXSGo0R2duYDopPVJaO2JMaFkyPCFUJU5RcWJKVDpFLT04QydmVXIKJTZ1KnQ9PHNfJihk
PktqSCpRSGZpLXBjPCxXIzZoVSF1XU1wK0VHSl4/dGcydVhgWWhLVVVTYU89KVcsckdhS1xrbmYv
QywtcnNnWVhgUFA+am9CTGlNZU01YV89JkBSLFRxUisKJSpjSCIrIm9NWWgyN21uNkZwI19jJGQ6
alFUL143c2EpREN1QjQ+aEkqc0VuYig8MHAsUiVFSWBgT1xdTTRjJU9tbSFwRSIpLiF0M0JcMC9E
ZUUkMyVpaScnXVtVQmJxOERvcVgKJScpMjVWMSIqMXEoSy8mKExsWiRpRUEtQ0xQJ05gJWk5J2hM
ZERccz0mOFVHNlktdVAuYjFbLmdNTj9yOitgYXVJMW47SFM8KyxnM0MhLW5kJVA7YztyNVAoIys5
NjcoW1Y3PjUKJU05SUBYSTNRc1NUVSZbLD9fMXVjUlcyOy9LSUJ1cCEibmVyX0AtaF1tI1RDRSZC
b2M/Ml5mJy5vYFhsNWtqKWFaTildT09ZVCZRVEZgMihiTGlXKUA5RjUwUT5gUjotR11EJkIKJStK
YVAhZ1Vxbl8oMlBYM1prPE1nJjszOG8vRltXPFp0L0JgOkpobi1HQjlpKV4hKWBBRzRmUGwuKXIv
NTdbcCtCXU1gZmw7YV88MTBXWmdaJk5eSWxac3Q6S0xCPToqalVWajIKJWZZQWRASjE4KjFZLSVq
IjZZZGVRTzRRIy5KNUNnJUBJJE9cO2Q4RS1AS14xRjReaykrazhDInAzWGMtWkA7SDNGU1FHayxu
Pk4pW2klWXQnOEhJREhqcTVUbTptIk5oS2ZAJSgKJUEtZyNGLW4sR0pWLSJYb2o1MXV0US5xQyYo
X0JKWTBfO24oNmJFR1gkKSw0PEplNiQ0SjRuW24lMy1XYFghMz1qLEBEVUY1KCY1OE8qXFFuQ0M3
MClVJDBQNDZIckFBZClaRlsKJVdlYlwiNmopQ01LUXFxT0ApWTM1TjM3Z1ZIUG4nc0w8LTZDWitu
NVQhI1RuZmonRmtXN0VcUTU5LCkiLCcxIlVaKSlyb0k9dEddVko4MmwuLmpnImxfIURENGA1M1x1
YmxzbzAKJTgvLjFpcF5ybmw1aTMsPEAmU142PlJMU2ZmUzEoYFU8Tm1tTEEvQSgzTUVHIzdXL2oz
JnFyYz1PRmwlJlM6UUlYP2MoZ24lI2VPLSIsLU9QQ2NCYVgyREk8NENMQ1JFXkQiajwKJSE9TUlh
UFJeMHVCbkkiVkU3XmchUGgkcixmV2hoXls0REdxZmVRYVZgKF9wUlhWaGlNJ0oxIjVxWkxtckw1
SiwscytpODhFKThlQyRePkU6N0VeUFdYT3FVciZKUjQ/WyxpP0wKJVEjUm5uIWlDOFskIV1HWD1X
XFNVKjcuUyNWLCZkISouaXFrUj5fRVAnTlRhNzZIXE84RSIhKSVpU1lwY2I5QjhzIlMqVF1gL2sn
PUNmPUpnMk0wUSJfJGI8TDhGbmI1XVw2dU0KJSZEODZeZCdUalwvXUJsRkcrK1VKOVc9UllqPE5Z
QiIiWDVqJDp1LWZRclg3OiFIKlsrakk2QHRSJzwxIlc9N0kyK2lnXkA7RlFmRTpQPW1DMCRYRFVK
MkckY3BwVU1PRltITjoKJVVqZGwuLGhydHNNRGtrQldPTmlHczZhXmRWWCs3aCQ2VyZKXlY5UnNH
SWRdbEltMmJTKFMoISheT1ErTTRGVG4tXldAYCFjT1pxRklTTlxbKW5EP1YtaTo3RzlaZXVqTV5A
TUYKJU9oRURobnEvUyFXRTNFVlw+IktJVGAwIzZVJHIiZS5jWE9BRFFQaycxY1d1UGNmb19uU3BF
KCRrXFFacWg3PkptVFA+IT4zZCNZKiJTYmQ9QiRSL0xHMGZPLEZHc0AyOWlxKEcKJStOQ3RqPmFV
P2Nnakg2QyltQHIiUFMzb0gxRz10QT1da15ncT06XXBdXXUtZ1crQjJHOSU+W01hKTRPUTxOQldx
M2o3KTdrZi1SMVQ7WWokXmtlJWM4b21iQGstKFpgRWhCQVwKJV4pPl4+QkI8WixXPDA9ODwzNU1n
OlNmKl48XkQyTms4WDBePzlPXUQuZVJQS1RqPmo9O0RlWGViNXI1ZiZabE5AM2QoRE4sQHQxM2FV
K2ZVTnU9KzM6VSZpbzNVW0hbUUtuLDEKJUklWFJtP0dUSCRwOE9NUlVbUzRsSzAhOTJiKEhSUFdM
JzJGY3VcLl5DUjdfI1xUPmdOKlxIWmREIiolMkdxRSlacVVgJnI+MV1LJzsuOERMYWomO0A4Mlpx
L2VXSm1JOlJqV0oKJW91XkZUXS9MNG86SSdIVFZkSFA0YmVbOj9abSl1VGY0RmZbaEFNXlk/UVFg
VmpPTUdvcDQlU01XdDB1biJyLE5SIj5nUmdRXERWSDBySXIlNjZrKSM1JWVEXFJcWnQuZ05XQSwK
JVdwLS1uPldcZGVHZj9GSyU6P1Y7N0Y7RGxrMmIkSDpBMyIxbFBWTCpeUV1uJmhPajdBZWgsRWVu
bSJNbiE2P14mUUU9M3JiP1ZILT9zRVVAKGRONiRSNV9pZ0wrR0E9PnFyM2MKJWxEO2UkSFIkJmRu
XStPXWItZ1RrWS5FaDI3XGREcShQdFcxUENUNlBXVjtwLCNHdWlQOSs6XmVdbHFlUTk2VFhZPitl
aWRIOT4zNV9tJitlJmJLNS9GbFREdHMnZkthWFpMXnIKJW0na0U1U1A8XkU0QUUxW1VTLmRAIjhF
SjtAJmhBYikmKVEzXiFOVmlKQDF0bykmXyI0bSg7MylwTmUpcSQhbkxfSSlZU2FuNWs3STtuPkMy
SmItYXJXRzk/LlQrKik+M0dtWHAKJSE7U2hRJEBVLCZQRiZKQWcsLEJpXTs9N19GOmohW1MkRDFK
M1twdElkSEUoaFQiZkxRT01QIT81IkJfOWEhdCcpcnEwMG03QSMmai5gTi0vRjcuWE1lZmVtRlZv
PjpSa2dBVj0KJTJfLkBnVztWcWtoXTNSNGQ6MXRUXGAwMFZsO3UhaVVbM2Z1bFRhVS5JRyhwZEck
VmJkJUc0dDBHJ0pOKFZkbkcmKWBAaUdWVCFsLD5JSidFWzhjJjBHOmFJLlNDNVtKJlhdS1wKJWgi
QDo4Zis9b0FsUnE4XCpnY3B1Y3JKZ1pjWThcWzFqOCFlSjAxLUZeXEgiJiRSYCxCSVZ0V181SGBf
PytpLzJXcEtQNiFqJD09NXBZJXE2cCJcMV1GaSpddC1dYkROPCU6bWgKJWdvJGQpRFxCYWZfVzZe
Ul5ZPHR0RzQja2taVms1PmVubzNucilSZ2soTFBCb2lvVmA+cEJlVjVjLW4kbmpsVy46JkBNSltf
cSsyLTMicj1MQV8iOEYlaiY8UFJzWEdMXyplJ0EKJUNTNjIpSUZpPUE1XFE0QSxKVltDKCNLQ1Q+
UEA/Vy1hbyVsVk88YS4ySmxQWUk3S09CTTJkRV9cJ2lYQjZhdUxeTF45b0pJWEc+PiRRTDVqTS0x
cGteMmBHdWtUcnMkMC00PVoKJWVVUkldZ2UydTFwQ0tKODxNakpjL3EkMyozZFUhJywmY3FhL1ZZ
XmNvSTQwJDkjI11zcFxPI2MkPlZoU0Y+T2U3U2JxZ01QOGY2RDdzTTpJZjlzZkBDdTlVN19XP0RD
WV1tYzkKJXFhYlZrZy02JHM2Pm1zZ2MyRE5yV2xpXC5oNFdIVUlfOW4lOUU1YT9kLFkoYWhrLlhu
XD85bVctUidGV2YhNUc+amlaLlw7PV5BLk1dNyM/Xkg2cUgoY2VFXjBFNDlrZkBITSEKJVFfNUk2
XjcqVW0hUlJ0Ii04P2sxZkNsRDVSTDQjRClvcV1UKmZUIlVORzI8W2BFVEg0KHMnKzsvKVI9bEg7
WVJqWGswRU45SjFlI3FZZ0FLVyclRFZPS1ItaypcNFA7aEIyZl8KJShPKzBoPT4nN18xaXNYYUVe
I21bRFhMTk1wPDVqNFBSVnUiKnM9ImFjSiwhMElUWlZGWyQzXT5pa2tXOV1HJ080YVFxPWltS1JV
UlIrRV9eVlZRSTQpbnRNOiY8SCFJZSZqZGwKJWhdOSM0b0VpRktoY1IrIzYyT1ddKzVEOVwqMSFa
RWJyVkUoaEVkN3VbTTRtTG88WCksLzhnc3IjSyxkW0ViTGVVODxbRVBQbzFLLT83MihOaUdDYGQ/
K3RKa3JWVVRtSUZzME4KJVs1cC5fYFtIMUhTQ3JgcEhBXFlSRzV1aTAiJD07YGVXPV5nP0ElIlw3
K2dKaTQvLmVdbHJaQl8/aU5cWkY+V0BTW0EzVS9Rb0s6cClcVGUzMnMzYCdjYmpuY2d1ODRoKksu
Zm8KJTA3WCNIW1M9cmI4MGFoZ2czXFR0amtgMjlwVlMnI2pzck9lU2o/WjBGbHBtK2coYVBPalBC
UDNgWS1uIipUTFopVDo0NkReLjcsPGpaUU5UIyQ2ZHVIZ2BERWJCcCtfUjtCKE8KJVw8aDVAZ2Ng
JlcxJ10zdElfZmEtWSs0OS4zSjYiUDpNQUpUMVouVFFbWFJLMTRKW1YwOllMLVtOcCtkQ1FdPnJn
MFhQSmpbWi0vVWM2PCVtVVIoOjBHMl8wIj5JUUQmQ2dUMzoKJUhnUm4wR0BSMHJmdD5xPDpZMEk7
WkxvX2ZEMllSYTQqTCdTSE4qSzNGNU8tSXBUN0otbXVpNzM4bllRVGNLV2lIWTphK2xvYyxyMGps
YDZoU1hCQjlPXnRGanEnTjRKMm4xKEwKJW1YUDY0NCwzKWwwbC1yKk1PO00lVTpZKTVOI2ArYjhr
RT1wO3NHRGM1VVlXXiJScEghYFdUdF8hV24hN2JadExVPUdBMUA7Wl9hIkotMWllOlNxIi1bPWpt
Y1ExdHFeJHFQZCIKJVBcXkg7LE4+SiZjVURfMGsybj9vXTJDV144OEMlaTlGLi8kUmo7YCMoY1JP
LWFwIzpXYUosM0MhV2QwXWEoQnU/RT5BV3MlXCw3MF5lLV45P3VELnM2MUw+UTtKPU9EOCsmVnMK
JSYxU1MlKnUyPEVyKk9HdXBkaTo3ZVxJayohNV0vRCU7UltCNlo/cE5GLD42XF1HRCgjUjhRa0NX
TlpeW1ouampUTGoyI189Ykg+Ul51clwkJCVWQS5hdSd0YU0nXz1PQltdZ0AKJV8mTlUuM2pKcWwl
OVtAI2VOKzlzMydJczdFcz4kQSJmTGMwYWdhQ1VIMC80bFhWKFVxTHMwakRDZVsrTS1NQVFpPjlb
XFswVCUqb1NTTGI0Im42KEhVczo3aT8tTCltaTBPZ24KJWJeaThza3NYOkJtPWNxOFtgOy1NP2cw
KkgmZFciXDdnPmFtVSM1dFE6RGJVWktDWjgyJlFnQWpuPUBDT11WbnJWPjY6dXBtTWFcQjxzLTRb
NWJdODk5OCQlQT9OQFdDPlksMiwKJTNtQiFkQVdvV0BUY1NmMSNtaTdKUEhjSHBtQ0I3JEtvJEFx
K1syUHFFYmtfY1s7I0c/VC4+ak9kSEckRU8vYDkuNm0lJG05Oy8vMCIvMEIwKjtLbSdUaS5TYmJU
alcxJWAoSTcKJU1ASVtePT1rSUcvOmVIZkN0LzJhKislLSlxLT43Qm1zPChDTzVMMFtUPkdTPmdy
Wz5WLEJfY1ZJQ1IqbS8mN0ZnOV5OI3MsVStoKCxcLkN1SjtBWFVNJDtBIiYoIls5Si04VW0KJXFO
TXNkT0k1N28mQ2dRWjhPKk06QTxcJnFJMCc5SGtOSldEIkxdYTBTcTBQQTNuc2ojbl04dWJVc0Nz
bDM1bzIqKF9rXk09Jz1JZjRGOHAlNDwwcm8yc3M9PkZpUWRmIWV0SEMKJWtUWTRVRkxgLyZRakQ6
WyI+Qz4lb0VsN1MqbkdsKlhyJzRYLFEhPHU9Sig4OmBMKm4+IV5IR2dVXzpzW2BYVyFcL2c7bVNr
bVlvUCJAI2prKD0zXmNoP1NROyQlUHUuIjFEPGMKJUlMcCFoNVsuSzRpOHJgMWNiYFhsS2VQJVs6
W0BbdTA3WkdtQT1iNVRKNFooMS0lSjIqOmdCa0U1cVJiQlJRXktLOEtfYXBHXTsmRTk+RDF1O146
XjRRYXQ9Rk5OSCYjVnVkbjIKJUwtXS1dOS4kKy1AOCtbIiFAT2lcOk5mKXBbcV9IQStCU2NJb0ZU
YXRjVGtRNUxNPCEvREBQVEZwUDk7P3JqO2FXQV1XRUY8SypRQUFeYkRdZEAoMlNEWzc2YzVXRCNg
RVlxVycKJWJgWTQwTUIpb2IuXUYzYjdoPSFZXSQsRigqLTYuWEpmNWhxJ0w4SDdpXjM7YWNgVm1L
IycpPiNeOSxYYW0lNVdARUdhNEJrPFNnU0xZckRcOkNGJTk2dHFGcGZJPktdVGVRaioKJSRFZTkh
aztea0NuNklgLSl0XmZdOSpOOFFgJz1hPEssLkQocVNbLlghI0osOl5vMFI1ITtMLjReJlxjXCI6
KUsjTyVwZjZPSideSC5FbiwjUUMtLUc9aU82JmtYQj48JkZeLDMKJW5iQzFtL2IncD1XZjNlcVZL
YC8nQi4zb00uUWRnXEJZMHVMRVZWYSYtW0EmdClkb09zJGszaW5ROHU3Uj9Ecy0+NHVcJzMqPWxs
U25QNzdtaEg3YTwtXWZbSyhyaFNtJi1PJFgKJVU6XSowaV1tLFNXYV49RCpYJ2ouVVwiZTFMNSN0
ayJHVj42Z3BDVWRSWikpZiFzYmxsY0YsJCdGITE2JTZkMWgrPUAiMjhUcEUuLChDPFEtSjsvZCM1
UUtkYS03QzEhR1F1PlUKJTVBMSQ9XiJBPmcxcVNCUlRVQjJjNkksbUk+NGhYXjdLWD9aREF1O0Ao
KEJfSCVQcUItWnJSMk07VDshJlEzbzYhOyplJkVyVm87OzVlXDMtRFs/ZzBaJjpqM009aDNEZSNG
J24KJWBXRihzJ05iP1QxXUlEW1lsTTREcydMWmZhTztOL24qOz8iOF9qY1Q6Wmd1WkxbS1VlYHA3
IUFwNWYsQnAqV2w2IWZ1MSVeOlorIXA7YGY6ZHNaTEheWEM9KVVqcWo/OlZaaXEKJS5lVC9zP1Rk
biJkPGklOj9RTWI3U2wxU3BeTzBXbkM6azA5Om5tbjFHay1vcFBrQnBQKWBUXV1OVmlcNXFSPkBf
bycuJWRdSDtBNiQ2QF1zXz5eY208PyJ0Tjw1TlJBSyc7cikKJUBtYERAVFBhZy1gVCdBZSQxTGlO
JWVdVkEwL291TTQ5SCw5Olw3b15RW3JXKzA4VlQ1LGVEcWAhQmAmPl1wNmleXVssMDFxXWFTYyNt
bGA9VTFWUzw7Pl0vS0NxUnFUcSphTHMKJSIvLSlzTFwtXlRKdSllM0VxZy8xNzZLVSZQWDNSbjZl
WSZQIlxnIW5vOCRhJWIhQzdwP2c5WFhXUTpFTm1BNmBfbkVfcmdTWktaR0twKkhHYVkrQ1xcPi44
V2QwRCdGRWk6cC8KJWVoRUxacTc5ImkqV2o4VU8tVjk/WUFDcGxlKFRyJztebz4hSDI0RUhRN1gs
YmI7NFs/LXBNaDE0QHUvRVo9U3FCTFNvTjVyS2pRPS1EXFBKbWdndHBNT3BoRVk8Jl4/JF9qaGgK
JTttVVhKXUAkKCNSP0VPT1RMPyplX3FJUmtKJG9MOHBJXTksK1VtIkdqak0uVGQydGdCWyZBUGk9
LzQuXmkpPXMpZyMyKiZRW0xYJCYiXlg+KlAkRjZdQVNwI20kNExsNidFWTcKJTBTPCFoOlpnZWc1
OSQybWdvX01FJHMyXUE8T0A0LTheImtOMVtoR0xkJWMxT0FvQEdEbjYzWm47bkRNYm9mSTg0OFRN
Yj9bcDcsJDRoaFcuQys9Q0MtT3VaO2xxckI8VDFbb2cKJVgxcWNoKV1QTChIWEMuKl10Ujs7Ul1D
TkxvXy1YPG9zKHUtUWozXGRmL0o2aEdFbz5haWxvJ1hjcmpqSEAhZjo5KytINllxYS45JTQySFBo
clkkRDdXXFxWTFIiPE1dR0JWWz8KJUtFU2MuIVVRTGRULGM4Uz9EIkg0PiFyJzBsb2ZQMjsnN1xC
JV04Xk9YLkxtQGsuaCNSUydgb0E0V1JnQHFxbThwOlVhYk5NQFwzRWBxIj8kQ1lOXCcpIk9XNyxN
bmlYTERLWEoKJWRdSUFnbmlZWDNMaVZfM2ZNbm1EZTRFM2RkcjxVcSs1ajE9XUpIImQqJyxpLGNB
Y2Mpb2tnU0IoPj5vXTxlXWpkVk1uZmhJTmM6bT9fPiNPOWVVbjFVXj8pYW0nPDFxJ0YvNFAKJT5p
Zj5tUjwlTXFPZG1VUzopdUU7Q15uTXJRXm1qJGg/QU11Vm90ZTxYUHBhS0V0RjVcSiFBdXRfXjxY
MUVjIW8uU0lGLm5rLmpMQ1gvNmRmaCJmSi9yMFkvRDdFN0BjVTQ1P1kKJUVDYXJ1bGI6MTtVb1FX
RV8hUD89W09uLFhwPzpWRT9fWDRZLXFYR1huPUFkODA6KFUrZVJYQyUjc0dhZ3BjdGouI1ZQPnE9
M2EpTipNM1FXVT5aSSRjI0lIN2UndVFEb20hVmgKJTFKO0FrXD9Wa2gsa2Zvb3EuUXNLa29bPXMh
cSFFUFgrbFpUZT49V3FVXEBPPlVPLjAwZUZaSiRgVFdeQkk1UTtlbD03RWtraE9IcGonLCc6b2JQ
X1xcIj90aUlhWnBRYWlqP1UKJS1QSixDJzVJaEdlNVledVokVHFkZE9WNGpoVkopJnJQRTBIOTtR
QzlKJ1dLYUIwUko3O1RnNUtaaTNeW2wxRjtta3E+UUZoLGBYWENzQW1LWispRC09P1xTK3BUJCZw
SFtbIVAKJTJwTkxNNUdNTEZJbVdoVVd0JklHNCRoL29iTVJmV0BWV3NTQyFYWyVrcUhWbFNlSUlZ
aTlRR01lcXI0VGMwWUJ1XFAxPi1ePWhlUlxMVFctM0hGKF43T0YlXyhRTmwnWUFdPVwKJU1XNzBv
MXM+LEIwMj8jXkNcUUg8U04oXWVhT2FXczolK0g2W3BiNjUhXz1rZyVbc2U5blZob1djJHQ+c1la
LzteSitrdSZeWjBDWkEoTjM9VjlhNXQ3QkBeNF5LTDQvR2JjZWoKJTAqRilDJCxLNjlqVVwwZSFM
MyR1QUhfMmpbWVYrJFJYPTcxJmwoZENkNmknK090JF5nKz07P1JPWGAsQ3BYUDVqcmJwTElSZnVG
Q1pnIjJZPDM0OlY6RHEtRTEnZ0JZKDdBLiEKJW5yVFY9UEhNYkJpKFYxaCs+T144IWY4LlxtSzUo
XiZsYVFgPjAwME0iZUkuXiNGbCMnWlQ9MScwYSpRNCcrU09XVWkpNVxVMD5xUENnNSl1W0tALmgr
SisnRj1VVSRRSz5YXUUKJUtqXD1IMSdWUSslLi1YcCI2MXMnI1Q/Pj1SKzdMSEIrLjtSKHE+bGpF
VDglZSI2KShqY0Q/PEw7JjJMQWIsLTZeUm1qTj5jbHE0OmdUWTZHUjJvLC0mb2ppckFJYTo5QD1A
M28KJShaQ0Q8UmdxUVsmPUZ1T09MK0ZNTEVtMl1KLklCP2NRdGpfKSYhNztVK1QlNiMvNzorXyUw
WnFKY2gia11VP0tnS2dOWURCTmxmQlY7blVrNnBvJ1M4MUhvTWdHImM+W0BxSWoKJSN0YjRfTElw
Yj02SD1oZSEzLSQiP20mYFw4LjA4TVdyVy1mUmUjVT5LaCpyTUVdRSw1RkRRdTdqRClhQjczIzg2
U0tzZSguKGhZOnM0TWA0bkA1WGUwXEdBW000YzhaVSxHWF8KJTtJJSFAaTdmJ04hXlZEVzg8PHJq
OydKRjwjIiZJTz9qPCYjLitBLVlJL1VBJTstaDRKXjtUT1M5OFk9QU5vUzJVbDZ1VmUvOVNcSEVF
YWV0cy8mUFRmUFRWYTojX3NPQ3JdLT0KJVRDb29XMU4uZEU4O2BNUmQ+Nmw9R01nKy5nakFKImtG
TkZfTEcmZCErdCZKTjxuJmNzPCg3KjRaNmssV2NvIl41YWJebDFTMk51MVNhWFohWjtKbDRPPlRW
IUpNLGYxMzM9Sj0KJUJ1Mi1sKzFkIUMmbWc3Pmw9cHRFVnAmYzlFSjU8cjA8cTtkK2JeVkdkZC9D
NVpqRVxvZUwiYWhePFNHPldZME04b1FiSU9DO1YxQDFgTjY9UDFkZF4pJGYxaW8rbiZtXm0iXS4K
JVE+ODZUTVMocUIzIi9UZEJkL0tlYVwqVF0nYW1JbTkiQkQhOWdmaDZQLF5KZ2R1QXJeJkNyVD8u
JThbY0EhKSNQIz9HTVk5S0s3Q0pvLDRDRyFaP1dPSiFdc1ZJa11aKDErRWAKJSFIX2EyK1hBSy8h
JmdvazBVPlwiIVRtPm5eSmpiWEFyUzwlcTRVKjdfTWZCXSpaWEgtOiIlVDBGWTZNU1Imb3V1bmZk
JnNhVDREYXBSM2gwb1RmPSZZVDBANmhUIWRfVW5Ca1wKJUNBVyszZFkjVlNiKDdMTj5jOzpOPHE5
aj4rc1lXWkYlWVl0ak8pSUJHLGpkQjhaQTBsPURKUzxsKWxmVUBsR15dZWc9V3RSMCMtNGleLFhJ
T15QXzsmcEFxO1t1ZEc/RCVJSkgKJV9EblwkcStzNDgic2RWbyZFW3RtTiVwIkcuV2doOlBhZkhu
M2U+SmJZVHFHWkw2S19ebjdbciMnO05EZ05EdEctNGtTLDhSdCtLXVlSdEZtLDgtI2ciIWUibUJI
bDxranNDbUsKJUo7MCVdJ1RiUm4oPzJTTiRcQ20xbFo3T2lnIzhzITMiRiZccV5LbEcvOWpTWDBy
KGZTYUdFJVAkLSg9W0ljLihNYz9JLHJaLFA8PF0saSNfTGZeVSZhMkk7MEErVFZTXTxtVE0KJUJt
al90LEpiW1VJYmBKQkMoW3VcLCxQQUYtJCdgLzVtXzI2RzJmaWZILDsiLiFcZHVNXWFiLnJcMVkm
aUpLcyRnM1g/L2JLazphRUNEZDo0KUYmcGhNJVFCdSpXZF89LC48LjEKJV5wRyIsMyM2ckUkNj49
UDNWRmIqNmZAX1prdEssO2tWViRrR0BFbyMwdCI0ZjokV1NCW1lQMm4sMVxYZylAJ14oTDs6YkYl
M3BhL1peYm83M0g4bToxRnBtXiFPKGFNS2pcbCgKJVlwYTFVQiZaIU9dSCM4ITU2ZFFjUTU8Wyhh
V1ZeXGJib0YkUz47N3VLOHBbIjJGSVtobUJBYz06J0lsbVwuN0pfLmxsKlIrV1VvNztYLFpYYVgy
ZD5YLDBlc104WkdzNVNwYDYKJTZCJW47OWUlXk9qV2hyV2xVXC9yRTI5PSRndFIxPi1LR2VqJjUp
KDI5M0huJ0puM3JKUEErXzVQMzM4UGF1L28tbnVlI0Iwc01XYF9FKGwqZTBHdDsqPFFMIytvPGol
QXBGOiUKJSZlVDpWK0VVWk8pYTJIKk4rQ1g/UCpNaVkkUmNnXFZbOVJlJFpWajgrMWpMLylnVU4p
ZWchUD9BRlZZMSlQWD1vTkZpUGUzXUotOkgydE9OcWtnPjFbX0NnRT8wMjkmVUpkLUsKJWYwQ3As
Yl0pRl5tODwzTmQmXjIhNGJURTg+KilwKDNJbGluQlM8L3BrTmBrS2M4bDg9THFEOSVxaWpLKGFA
dF1tOWs1TXViPWlNN09cVXErI2gkOHVPc1pAOW1wck9OWW0tJioKJTtYUVNuN2Zgb1JPci5CI2Zk
N041Q1BCVm4kRScxJTBOIjpTI0dFQUQ5LzRjTjY0UXRNS2RMKXQiRmE1STJKR0s1PClSSGZQRG1y
UTctVHEtZydxViw6Il1PYk87PkEwMVU3TCYKJUExNkxjMEowZFxbcyx0KGNVNzpgYTxJNVJHWkEv
XEVdRW41PihQVGFsU1lYJj5CSXFgVkpPQkZSKURxQWVFXElXVXJGWyQzPCNPLkpiVyE2Mz9xVSI6
YnIvV1NobWRubSMoNSoKJVhzVXNuZm9AdV05WTx0RyJSYShjNjdaSkxjbUJcOS07QDtmN1ExOWlf
RWAlPFovXERhYDYkWW87UjlARzJcQyFNPGheZ0pIcSQraypUaGJvaFBpUXUjay1TaVE6TkZHPCQ+
VEIKJS0jMDleaCZQOSRVai1ZMCI+WDI/OFkvRGo2TzR0WCh0cGgzNm9aTmNdJ0gwUyhiYWlRbWRZ
IWUnQkZFT1AjUiV1QWBhWlY9aitsTVJbTChcM01pXmwrOUk8YE5eZWNVTUlYJicKJSRbRERUQzBG
dV5yREMjdT84SWVsU1JnVzw/KWBoUjhmVFFPLUhlJWA9X01Tcz5mVFhkS0kkUWlPP3JQXlA7TzBt
U1Asbk1qQEs9ZDJwIiNkMyktZi8oJGZoQTg1QlNgYUc6JmkKJUpdRyIpODQrRSJoJTgwYFUlWEtd
TTJkYkFtWUZhPioza1VyK3UpOiVqNCJsZC80KChfMDIzRT5Ic2teUT87YzxcUWpYIm5wKFNrI1tr
blImbnVDQ0gjKlxTclZNZitYKEUpKUgKJUZiQyoxSWxrKUZtQCM2O24zJSdlKkZuP1w+ZXJRXGJK
bGxwZGImPjkobDUuLVJIcy08JGc6IkZMXzFMSVImQEU8OT0lTEU8QzpaPDYoaTQ4UzAwRFhpJF8s
cSpiZ0VcU1JPIWgKJVNPZk9yOXAnSVRMb1Z1LToxYVgjWENzYUpCTEBtbTljUj9TTWA7K0M4VU5T
bzBOPUVPLywocGhZJkE6Yzs/NnBGM28qJ0xZXFZIQ0JZcS8kYWZURytQU3FUckZWR0dsJ0ptOW0K
JWNsT3FuRlg6bjxPUkoiMmg3SS1HQnI9P3VJWmoxPG4jTlVKRVlOOyI5ako9O1FCJ1ZOXnVgcTUj
cDtLRkFIUV1qOlA9YmctMlNgJi8rXCFRRl8qalJPb2c4TSpwL3NkSywuPVAKJWNDKkIpLEwsN1JV
PScxZiFPNUlRZjNySHRlZFBJM1xyVilROT89K3VKcEFQPkZzdGBZJ2ZFPmFAMmZAOkMkWEdHNDUz
NWdnZEpmRkJ0ckQxbFstcSxldTQpSCNvPzJsZWosQ28KJUdEWjk1Vy5gX3RBRW5XdF1kTV9cX2U5
KTpdQCk7LDBkZGlfVydVOy9ZI1ZKKlJsMWFSSGYqSTczdVgiLU0+PVNUVHRrSi09XTBIajZyXTJt
TUdYUkZaWi88OSFzUGBCL3BdSlwKJV9BPmkmNT1JTDs2QUFvMD8vSGQ8YVNTKltTXk9TY2tuUmNl
LSZScT9eVyw2ajc3XCtBci4ma0JVMHNEPEBiOTVfSzgrLCU7TWFhIyNxUC10JEVFJ2duT2hrNVRr
REAwLS5XYmQKJT5UKEUqYCxGYCtXImVyMVUwcTJbYWhXZCJBMFEyJDozbUlGaUtfUS84Syk8bFhE
TGpMMmJaWStBdWRGNCZhV1Y/bldBQE9KXkZtN1RrPWEjRFhMNzpgUkMpIWFOSTRwWG1qK08KJVtJ
dUYrWXQ5ZTI5byo8XG4jci9tMzJQcjdlP3RsXU9tTEZycyJdYXM6YSQhSFkuRyhaXXUlUztWJmo0
XT01LmRqcGJJZzlsQj9LT1FQJVw7OUgzKWVta2xbIj83X2RJLHM9QmYKJVNbbTEjbFFyLUAvMzQ0
aFEkb1k5RUNAW1thXjJvOjc9YmBkbCYobj4pWDQpPTxHMzNua05FYz0rQz03VjdwVXVsbkklXT0o
TjhQb2UpIy1aSlQyYTovMiUkZVNEL1xza1s+USsKJV5jLyxuT2ZPJjo3Q1tsJSQrIz9zQGBPYkRQ
bmBZSVxLU01AT3BiNT9BV0VOIjAyMVJWUGZxO2ZNMy5MOmBRJWo0aUJza0hqU2FZYjtnVjRQUC02
clRwRCM8PSpPISJWbltRRyYKJSdFVFNEKCtYUV5eLzghW0pfPSVIPjQrcCFiTClmXDxKViZsK0hl
K3FaJkV1SCpQO0hqT1tzU1s+U3NjWSEyVi0qNFk7VkA7KkxnVlc3WEtYTG1sQ1EuITdFOjMqaEFj
P1xQWS8KJWMoTExUJlpzc25RW05fLWYrY09kPHVobE0mRT9BXl8nVkVGOyM2bF9AUmlAXjQmTV9e
J00vZDlNQ2FqZjZAQGdvTTY3Vj8wI2ksZ0JZO29cZ3RdcltJSSchMSk2IkwiWEtkYC8KJU1cPCx1
YFZQZTheXl8hOlEpLWQjWiMpTCkiNiVBUzohTjkvamxzZU0wK3NwXkAnOkglUzpMclNpblhXJzpq
PVNvOFBdNjonKXJnI1tGMXRabXFBQkpRbWkybl1BdEIzKHFgbWkKJU5UNi48IlZPM1o6PTJwZF5o
UURmXmxmRFtnaj5AXmZMM2EmVypxaGVqTU5udCpONldqLihqJiVSWVdQLVFmNDBNJ01mQ0FVbEVE
cUs/JHMtMWEqbSdIPHFVcixrRiRycEg5OXEKJUBxIT1HTCRecHVSOy5OOlRiVWE/KExTRSwvJSdl
RmNIMW0yWDczWzRaYG8waFYvVnJqVidGUSpDNzoiT2M4P3RqTiMsSTk4WVFfdD1INCJ1P14tTWJf
THQydClDQ1ZKR0RTZyIKJThIQ0UkU1EmUGEuIzZsRz1CaEwiO2RvdSVnODNobiRSSjtrTisiXkta
UnNQREsiXCoyIyxlNiRYKm9hNTNSaERNcjoqI1VJWyEvX2QwNjtQX0ltLzBEaD5aMytXajNWLXAn
PyoKJSFSWVhOMGRpI1FSWVcubCppKSxkOS1GJl8yUGM8PkQ6LERJI1dDJzZfKkRRRy05P0ZSJWNz
J042Oi5QLi9QKEtDLiNBWlslNE5bcVQ0PWkqUUFEIXM1XDhKNENQXjFBaGUhYSgKJSh0STs0YDVs
a3JHJEhDME83THA8cTNUNCsmO2t1UyktNz1kUUs2dGo2aj1MTzRdWGIhW00pNWxAYVVQVjZfWzFz
X1ZNIWtoZj5SJSMoc29eJU9rL1JCKENHdWgqczZgI28xQWAKJU5cOm86JU1KLVRlajtBTFJaakJp
cVBUOjouOEImLGNDIzBdQGhyb0Q+VmokQ2pUP28uUFM1RmFGJS5OMDlXOipNLmNROVtMc2MuPWdi
bDlqL3VUJUo6cVtVPktWXUdzO0ZtOT0KJUAlZUFASEhPcjIiNERdYjhkanFPaixBW21aWlhLWkds
WzJdPUhGckY/dVFCM2ZgUUFTREBXLGtDS2c1W0JxRWE7TGdgblNJcWdzT1xWODp0US5wSlw0RnNB
UltackBCYGw5Tm0KJWpsJ0dSby81WEwuYDk0ZC5tVU4+NChSKmRvK01lVkMzYTMtVSw2JStwL1Ms
Rmo0NCg4QFBxMENiUWtTOUw1Oks2YGVHcmhxK1kwcjtVVGk8Wjh1O0FqcChHLUA7J0xWTFIlMGsK
JVpiQDUoMlRTKD4mIypyWDJaPEFcXjxzYSJWLkNkTlJ1P15gVGJxMTJOPC4hNUU/M2dlNnU+J0pK
biE9dT9JKjBxNygzOklGQ0dydDZwZz5kblwnbyZgOCp0Py1JPFl0KjdlcGUKJTlWRUolIlRiXi1X
PlReIz80RC89V0BgX2FdbDU2UCE4OWY9XSRONW5XXD9jRU0vTV4sQ2RwZVw0IilRRWEncEVeTVxp
Q08jYERxbGg7c0tibzMiRFNfKzhmS0VcIypwN0hlXjYKJSNuSkc8I1M/RVEkLl9uXywxcWgwQSpm
LktfPzRuOEJCUzEpWXIvPT5gb2hQW1FjUiw/YCY8PUBXaD0/REFSYSFyNyZDT0lKPlg/MEg4OF4t
Qi9HSm9HNEUqcVYtRlZHTGRyQS0KJT5dLzF1IV5YYWBJT3BMOTZqJmxPTG4qM2dgXUI+OS9SSjRz
cHNQKFxUMUVhbSosNmQ6TEc5PVlKZmBrcmkkMG1Sa2RzL2phOk11WEBSRSw4V3AxbT8jVW9uKDVO
Ol1kTC0kMmYKJTxFNDhIRVlPWWQ7aCZ1NnI+aDYtSFVoOWhjVkAsQEUnOidGTilNXzMiRzpJclpK
XVsqZitFWHVVYFgjQ1IiXFMkM05cRkA5QkkpRj1KIms7ZSNLN0hsRiYxYCNeZE1XU15iJ2wKJUJI
OlNlZUkyTV4rLz85T04jNXIxbzFiIVIqSjs/MCZnQU4/IjNMSjchPypBJydoY0xpMjBRZEMiS2sk
RVN0XC44OGkuX1JFXlUnXHBPOk9mImdNL0pQMyFXIUg8KSQ6KkB0R0gKJW8lRmV0VSZsb0cxajFi
OFNXL0BgRWgzN3IpUDhCNmBCTlRsal8wa0Yna2RZKzEwcGgzQS9CLFYiVHRhOEJJb1JBOFdLXm0z
YTYiPlloVUxUSkFeVj4wS1hyYSFdYV5mcTxYPCEKJUNnTlE6JCJmOzZxXGtiKC0nUVBMLUt1czVd
OUU7R19BUShYJ2AySkgsK11YWyRoLU1AYUpISkxgRDEwOiMiVU9VaGxcQHQ9XSgtb0Yqc19mOWFb
J2c6YDFYLjk1S2hiMlBpX1IKJVA2cGRJLV80SFlXIkM0b29TTmZpQlQpQ21VYWVzS1BTNkQhPkFE
bS9LcTkhaVJcdSslUklKMGpQJyU/LzFpLDJhVyNDRllBVS1LLkVrNlxrOGxSQHJNKjBYQFRkP1FK
XjAzIyUKJWVoX1VIKipGMTIxdVZWb2BVYGVlai1aMlowZS1TSlcuKzIpJ1Rzci9aUltlVkQxIzhA
K1s6RldQUWlFamRYVy1SLiQ5PjE/UUQ7WzY7bmxxJl5HSlVkIzBYM2VkM3RfV19idTkKJVEqdEly
X0EhTGxfLDZvRjVzZ2l1RDdBVj0rcjIjWClsYmssS3AlSmEmPThSRjhUPyQsM2lJTSdnQ1tddVI+
IitZVShzSXEqPEc7V0w9ZGVfJlZIZ2c7J1Eyajc4biQhI1cmJkkKJUlkRkBzTXNyNTYobFloc0JT
QGYhN0cwSTtfZWhBYVUxW20jayQ2JFRLT28mXl01M3JeZDwyWmlRZSNZXWBuMi5IUGpxb05jQHJw
c1lWJylWTU4xZVJcbTM7WSEjRUJDRCZKWmwKJT41cUQpKFlQNjRaW25OLkk3S0RVRjsyPmU/XVcw
ZENSIlQ7M1okL01XXyZASjFGVTxebEgrXkxOUSJwTVw6ZjdTTl91SmgtdXFBU0JBPkk4OUZQcVMt
Y3VLWyUlNihaXFcsVmUKJU50RmJVYT83OnFLWSZNSTtJJ0M4NDJyYHRhNXVsJDIpM1NQTC5OUUpd
ODpqI1BVOC1KWWdARUQkXjdaLmU1PTo5QmZML1taWlsyXWBSWiNtbHQxZjY4TktMRDNqVVRFLSM7
WFYKJWhAZDNsJGxuQikvTUBaPDNrTmUkLSYqLkw3UGBqVWkhM19vQThCJS02OHVyTjQ/JF1hN1Ez
NklkXE10Vic1N1FfRCksWWxBSThlY2dSIlIrJyFoalYtIl9YV09SLERdJilqKEcKJWYvPCZXJDRd
RXNgSVMvRTtXZV8+Y2pNW0Q4TWYxK20mLnNoOGFxTmtKSjw1PSJ0TkE8TmAkKl1TZ0Bwbm9iakNT
IUNoKmJIJUFpdUZCIkFpMjx0N1ZJTGFpUG9vKGg6aC4yIWoKJWowKS5sPUA5QDRNQThNXkYpdC4w
OVovcV1EQFYsQTA0TFpkLiZhPm1iXEtTSFdQVGUuNVovUmAjKyhOU2oiT3NHPjAlcEJVSjk5NWko
Q0UyNDlsI3MwT3B1XU5cZWBASGo1YlMKJSM7NU9LKlc3aDY2SVJwTihFLmdcTm1eQkVVLj5eLi1s
YThySWU4ZSJmLWM+SkZWK3RTPCsxbUREPXFqK0thJDAvXikrPGhGaldqdCxeJjBJNWQ+dCYsPz4+
Ois6Y1pIJjRSZzcKJXJgSyZOQWFEXUphX1xxTCUzI2FZM3VPbjxKL11rQGpNU1BMSklvJ00tN09R
bk5ocUA/U3JEZmEyNUsjWG4mLyknPVJOSC4vXi0hZ2ZRJixoclNnOjgxNEM4RiZkLmVgaXBeVkAK
JT4+Ris1ak5XSm4mZFk6cGhbZWRdWC9oLlIxcSlLRmRNJ0QmTkY5bjxCISw1KVN1cDROJlZYQ01C
R0tgZmk2S09TQUlBckZLbUpRY1tOQEhYRl0mInIwTmMlKyFOJDJtU11NPGQKJWA9Wj1uZjxsTiQp
SzcyYSFcSUFrTU82cjFsXzgkTE1JdENsKE41KlxqWCdwVzNDRTk7JnJFbmYtJEsmRSw2OTUrJy5n
TDJuXks6XTYjZD5TW1BidXJLMmFfOyRHNltqTkFxV1cKJTdfYCR1XiJpPFg1TycuLExhI3BRNmQ1
cHNiSktgKXAwIUpjOD0pZClAZElIIkNISVpJIVlMSjQ4NlAvM19ITiRdV000cDE3QT8/ZFItWmlz
PCtYNmE9cW9EcEA6Xk9ZRHFJU2QKJUtUX3VcNGVYLF4iKlJTME88V1lcSE9ZWzxlTyUmXEFfQWZA
M15tXWE6N3ImcDVWKyhtSDc7Xk1HOSdXNjtLI1NlUFJXN2MrKjVoOzYnbUNrLjpqYkxKXFciaGZV
Y2VcLTdqMScKJVI3WVNjaVk7KDpjQjhDI2Ftaz9NMFcvcWNjQk88SVIwVj1lKlQ4Pk1cVl01SGMo
Xi05SXExLWxLazEqMStzKG1lX01gISohQVonc1x1b0k4Ki8+REdmN2BtNigwa1NmQlhfSyoKJWdp
ZUxSTzhlM2spXTdIZUctL1NJYSJnPyY7cFBjSG1SYzNQRmtyOWElczokWi0rLEN0ckc+K1hBWnQu
LmAoYUxfTk4oZShHdWYsOUMnIm9fbl50K3IrISNSJSJLRWc7THNSK2MKJVg+MiNbUl4pV1goNGc9
cWU3XzxyJDIrcCFTWXRyT1hpTHIxJlpYWTpPaD1NWV1pXFFPOENQbUE8dCVoUWBJLW08NW87cz9W
IW8/VVBwcGYjQURfJSM6K1JaRkpJNFF0YmJdKl0KJWFYLTZySS5sMDI0aVZPOWxrIm8lNmpUcmNP
OUQlMkpQYlU9Oj4lY2hkXiwhZDhbQz9KTDFtJlxMbEktVDFpY0stWl87ZmMjOjEvZk1GcTwtLUgp
bDFAV2FmITF1TUlhNDpFIS8KJTdSNWpRJUZ1am1FPVJkTTxAJ0xRWS45QWFeNEhDS1ZET2RORkAx
PmZEKD9EOG8xIVtEX1s7LjxOXWNWV1dfSCNIRElgcj1oVzhXY2VAKWwzUUY/KyNOKSVNXEROKSk3
YmU+TWwKJSInZyI4bFhHXiQ5ODtNTFRrN2ttQ1JoQ3FTU11xIm5CX2ZBKkdQJGFqa1wiXFI4NVRL
a1twQXJnaztHWjQjLUs7Q2hYLWY4TXMuIWpVTigqZjVlZTlfPU5pb1lxXDlxN3BwO2QKJS4kbGos
RDJXVWlcaUY/V2JhJSpsYTUoIWlSL1FGLWtiTWpANUdvajZEV3EjVl0nZWVFRS0hIyhIbU1RJS4y
NWotZjlmaVArYnFtRXA7QmFDbnNacz0xOylyJV08TGpOY0teQmIKJU5hPTs8RisnPiVCYDwlSWcz
PFRURzAiM2UuOV9QVDlPMFdjPWAmcFYxRmdyciV0Z0QrNV1NYExma0VnSTlKTl1oa3M/cEZgWyI2
JWJVLCRYRChLI0BVLCcjMktLOlVaMWViQyEKJSVaOzEkPjljQU01PTxGdD9eOFpCX3JVWWtsNjw6
Rk89ZVosQSZDL0lcUSlCR0U1WlAjJDQqYiFIalt0bztoSDdKK1ljVTFiLUpoaWsmcTZWPWpddUJa
P3I5NTloJXFGLGV1KVgKJTUnXkErXCJncUo3TldLUVlHXlN0LzM9Z0g6Yzs/MUtuWVFgXHNBMlRp
S0NFJDg0SWAlYDldNStjRlMiaGZzMzdDJkJIWV9Lai9RYCJqRV1kbSNrWltRa151VCk3WCFyYl9m
T04KJU1VZF88UC8lMSlBZ1hbTTZuP1lRUjVIKkldcEtWKiM0TlpMZmQ+M21YSFsmL0QwLz5sKSpX
XmkyUy50PC9pKmlKaT5hciQjXSpfKWQ3ayZEIkhANS0uWyhTQ2lbN1snOCVDSSYKJWlsI2R1QiV0
NjI4bC0iQ2RpXzgnXWBTcShGV0Z0UGg3T0sqUC1RJyUtOEQ3IW1GLCpnXFxmR0Q5SGYvOTdRbj8j
Tyk0X19nQ08kb1ozLUJWLVNLYlg3J18oWSw2TjUtUElKJj4KJSpGcDBXRnMzMVBwazFVSmcoO1xl
WV5LUS8xJDY/RU51ND9DIjowX3JwTyR1JDxSRWViSWEhM1hoKiVvRTchXmItPCRSVSJfQ1RfPFBQ
R0ZPZCZOIi9jcXQxSyZWdGdzXDcpLnMKJW83MDJfVU1lOV5sTUpGYGlEKDVHbzA7IXVVKVQ4KiJt
WXQsTV88L1lgITg9NFFBX0c7ZCIhI25SSD9HQlNsaC48IWksIyI2bl9pQVFOVyFeYFVHTEdUWTNL
a2QiWTcvWl0hJyIKJTZHRUldNSJXVDE3IzEsVSlXPSZZME9WSjolYlk2cD1uNVEhMz1fRExKM3NY
QW5VJTU0RUJzTUI0RzxmPSpFLmNHPUAzQktyRjo+Xl8iM1ErNU5cajtCWHVNOVFJNFA2SzFJR3QK
JWQsXjFRJF85XyM2ZlQxYDZUKFJeNFtqPFQyX1VEIUM0dT47KHMnTTFbLStcPVZ1YSpcIThOKVBP
TkBYUCFYYVopPkZ1LT5NQCxaZCtcM0xdQTM0N1xWSTRDUVJDWV5xSFglUy4KJTtOLyg9YyoyZERz
M1FsaVMmSF9LKitfWXRfaVlbSmRMaTdNLCNeZkwvY08zRGxWOkRGcSdIWjEnRVVvdShmQUBabl43
bSRyOTM2M3FZMSdYMU1oKGlPVTckU1MlTTonOVhyN2MKJSRNNl86S0kwMmpMI2I7XTJcLnRzUTFd
XmFRTkQ3OCd0NWlUPG9tODUqRVI4MkJlRzwnLDBZJShsQ19RYWdqaz9sPlsnV2oxciNWNGEuP3I3
ckV1aUExLDZaXVE1X1E0NW1fN3AKJSIkN1Q+RnA6KHVxPTRzPFI8Vyw0OTxVZUlQQTtLN2NbOC9Z
ZjN0akdGI25PU0pbJExnTWUwPUtgXzA4WjxtSURZZWFSc0dnIV82Qk5bLDpSJzlrMCFHXFhmKjdJ
YFhXMzElMCoKJVhLdVJxYzZJcDw6RSg7RUMuREFTQ15yJEU3TFdaczctOGEhInB0RjMwIU51WjtN
VUAwU1c4NTZxaGEmVW5KM2FfNiZ0TUAmSko4R1lNNTJtRF1aNGhZKFc4JjY5RixpODhNIVQKJT8m
QlhhTiZaQmopMyQ9WlE7PC8rKi1SQ0cwPTpOIkleUW1CPzhSO11XVUolP2MvKGo0Q21SJTJIRGsi
Xkg7YiZOLXE8a0g5N2I8SiZHXC5hN0ojIkVmXFFxSlp1JnVGLHQ7QjIKJSxfTUkuXFlQIkhKOERQ
cGQ4YDN0Nl1WQGpTdVsvQGdob0NwJFgscTJZNkZta0NjWCNtL10sbzI+PFo+cT8vRGNqS2VRRHI6
Sz4paWNTKytXJl50Ojoyb0hNUlloMmdqISU7MWAKJT1rKV0xLj1WdDdaby5BJU4mPC9VKWAnKyNg
aWxJaEBjW0kpUEs3Mm4zSTUsU0hzXmlcQWQuaClNcSdUbjU7XkZsa11aLEtiW2xpbEM3K2ZvaktC
TzhYQytqTVMrS0Y+WkdaOFoKJTdATEZVNzUyWGloU20oSTg2IicwPVc1UWglRDJELjZuXzxQNm9B
Iy4hKD5sXDxyWTUnTidiMz5fZTtAS0hENklWbExGWSojYFVbYzdbIy1VNzFrYUc8JFo6IUJTb1E/
X0UrWFoKJWguS09HPVEwRmZiUUhMNDsuJDhRLio+RyQ4UGxpMTNnJ0xPKXJSSkk0VEhrSG41Ky1T
VjNTYnUycTEpW19tLTNmPDVWcWNEMipZJWE5YmpkWCZrV0U9PycqWCdzVVVPQlNmZG0KJVJFSnEm
aSFHPmZYSSJWKmlKRVI/PzZJN3NaNk5MO04hQU8mcjR0ZTwwb0xXc0I4aWBGUD4nVkckVV4jJSpV
USNPRDdtOGw9OUIzQCNDVGwlJCM+VzVgQnJuaGFIWmArKVBcZlAKJSxvcT9rPkFBLEVKWk9QRUZg
P0IyIjhlcl1fL1NxYTcoanA6R0IoRkQ+SitYLFNOXU9zZCxoWk88Sj86WEpGTGppIklHKC0xdXI9
N1hQY2AuTS4qPmFBJyZiRFc5Pz0qQmAxb2UKJU4zPDJQLSVaW2grVzIjIzNLMHBpXTt1UXFaWTMy
REgpckYjWyVkRS4wUzI3a1VcWVNcVitMMTU6SyE9O3IlOElaRFxtR21Vc29nRyJQSGRsQUFjWCJH
UEVBTDJRLEchKyV1NmMKJSlBPWdEJT85Llg2cWlHUERiSlo8KjFkUytpYFFyYV1wXT5mJSsucWBO
aUZXXjJqPTs4RFg4WiFETkMibF1dWidrUFw3Tyktcz8iUC1pOVQ+OmYtcEFlXiZ1VDA9I21xNT1H
b18KJWQ5KC4uKnNaXjpKSTo6RT0nckYpLHQjV3E3NzpWcDYoYUM3U2cpS0wuQiZKRVcyM2deLiR1
ImBNQmtVNSZPVDp1PzEmYCRKUVZNVitePiM1J1BkW2lkVDsrXT45R2lAJWEkSnEKJTp0SSFNPUtW
MltXc1hnKGAuOUxNPD08MEgzPSdtdE4oT1thJCEuXktHYSI2VFU7KTozPzRfOSZHJXNiSF9BaV9n
JSVtSS0/UllvRlxsKmU+UDklN1ROMEhpKzspNk90PzFRSicKJUxiMCtiJzBEJjYzJiVwbjQwZUtF
akRaQTBXS0ctVzFxS2JkXmxaW0NfJjRpc1BWal1OY1FbZztUIkEkajFyJz0nakAoVWxIKmwiMDlM
J0g0OmUnY08nNzkpQVtGXS9AbF41JlcKJVVjJlFLQHEuOTssLk1TSEUyOCZAJlZUO2k7cl4rdFxT
N1g+VlRCPipgXlBcJCs6Q0olOUpkSlIuaWpqblx1PkRNP3FTOiwuPkJGUmpNbGs6QWQ0YEZWVio8
bkA6NChjJVFtXWcKJUprQkUwLF02R3NpTVwwYihNYVJeZzxyQ29dcmtWaTZiWC0nV1Q7MVQjMz4k
bFltNC5dN0ZtRyE3Y2IyKicoVV0nQU1gSCY8J19nNmNWbSo4XmVhX2xvSFdCWDI1Vyx0W247ci8K
JUtWRSJQJFtoO1RlM00jbkExVG0jIixlJ0UwOVZlY2psWVNSKXE1LSIxKzFLWidpPltJPTxzUUJc
MWlaMjNddDkqKSozT0pjazdCRSpIVy5ZQyxjaTc2ZlZWN29vQGoxblEnNT4KJVU/I0tQIzc+J2xQ
P1UwYiZrUD5lLERkWiFdWGolViplbTNCKGRmLic2UWxtN1heRlAtcjYjazdAQkJpdDRxXy5IQ3NY
cGFnXVk3SFU3dGVRIjtga3VgIywmSj4/PTg1JzFpSUoKJWY3W1hqNFVuPEQiQU1oQTtQNThRMSg8
aj5XW2VgKC5aUXVATURiPGFZRyYwNDhTbCdmLFwrVyhGWCJHMz9tXWg6bDJZWi1adGRmajslYDFM
PnNWNmZWRWNRciU7WzJKJyJzUmkKJWxhW3EyO1phQE1FQDxFUUI8VzgoPjMrRW05LF9bP1ptVixg
J0VwViwnUiFZU0lDdCh0cytTPWZnWjBuODEuQ2NNYlpwUFRhLzVPWlk5LUIsOmlhIVVGJF46QidU
NVhpUFAxLWYKJSRIWkBjLGpNKS1dLW9OSTM0KkQ8UCY7blpdTF9CQlBZSkpcL0tKTDllNTM+IVcv
WXBnSlpsWSskNTJVR2w6Qj8vRWhzLy9uTmRxaStCUF5CXEFxRGRtWkNxJG02STI1OyxPQmEKJTNR
aGRqQXVoYzsuKSFhQClEP251VzQ3aWwlPzhWNyxucXRpZWA4MkpkOEZlMzI6Ll9DWiQ2QVU9LmNc
PFtQP1dHLFFXbF4+YVVNZmU0P10oNkYiMG1uTkkxUiVUO0kyJEFDSmkKJVgoQSJrOnJbUGw0Q0xq
KCg9blYxI24nQEE7JnBFNXFYY2InVzQiP2ZRZT1bTDskLk1zSXU1J1M3OVVCQyYuLEdgLDcvNk4w
Qi0zTWFDNmtQLyVMO21GXkQ1XFc+IS05Kl9XPUoKJWtbR00kKTwlIiE/Lj4sbSFlJCcrTlY4V2I2
RDNDRWgnSmRsY0wtNVg4STxLWSQibSJSMShKM0U2XCYkayJHST1CVGcoN0JlaS5VPC5IMmpOKls4
MEBDTDgnbSNBKGhrXm5ER3IKJVo4ImRyX28nUkVKL1YvUmcqVTxpZFZnSGRXKERkbm9TNls/L19m
RTlEZUtJKix0aiRwTCJrUVxMcElhZllcamJQNilpcnBINl9oJDVpUUtlUXJPZWgxYmNDNjgrKllz
UTw9OG8KJT4laDk5T2daXjtNRm87PDRjUl5RQ3FVNGdac2JEWFtScW9MbCU4WTc1Qj1zWGk0KE0/
NklVSFA4N0hqLkQlKk1qTCkjTzBVQCw/QipgbGtsUjY2RDkmNWpRbF4tZGVeOkohKmEKJUs0Vzkt
THI7WCZJMVw0ajZEOE1BNm9yRlJYMGRET1tgIzQwOEpILUNhak5lSCdIQER1TXJnK19cQWkuZ2tu
TkNEN3MpT0BKLWBDNzE+QE48Y0ZvNWEiSywvXiozK1kmRCUqWSxKTEw6LFlFP2BxTG1oZ3Q9cSFv
OCNATUBbPSohayV0c0NDMEBSLjkyNycrTkk3aHQKJV9jK2FsbTFtI0sxI2tvIio0NjMycWJWPWEt
aTgrU1VtRT9uPURdckIuZjZETS8rUyYhKU82VDBeJFdgMUllMGdqXlVJNDpwWjpDN2tMMUdyVyRg
KUBMMl9OZV89UV9zSTtrQDcKJSIhT1NNSyttYGFvRD9bVysvZE9lLmRCTiNibmMvMUlOJTZYQiJs
ckVDdCchSC5JaENCUjBCWHNQU0JQdEBhUUVfKThSZEsnZVRsb3E0a1BjUD9fXSY0MC9cbmA9NDop
X01Xa1IKJTE5IlxRaidKOE49I09TNzcyM0IuPlBLKlNCQl1uTSw0a1lHVj5bZTctLS8wN1UzWSJj
UzVaNGJIMStcVlNtbWY/WjdtOz1uUWJPKEM1XVBNVzYibmNoQl0vcWspLTg8JWVHbzAKJVshS0BC
ZSNBK2dcNkJgUlJxV25YamRzSHRkRjBQSEtDalYtKlAkLUgiJkEkci9abyE7XSpBOXBpQUNnK1hj
aiQvJTZJLCFhUWgpbFJNbidDLDwxdSc2dEVhaTYxcShpPUxLKGEKJT84WGkib1QpP0U7OkllVDtJ
VFtYVitlIlVaTCw6Tz9JSytNXlBDTGxkI2tnQGdWMVpYciozTzxZTjBfcmlFa1g3LzRXO0xwTUNG
aFZhS2A1TlVjU2JMXjMrZjNwJ0IzTTNtc3IKJT8kS2wsMSNKQDo/QC9ycyFaUFxHWihAPitLRV8r
bCtfQS1wTGkiTyc2UiEnJW9lakprVnU/Zz41aS5tUi8uOk5tMz1FWmZHYW9XM2ZqYD8/Nz9QKjE/
Rz9kZDNkZydxKmJZRCUKJUxYQkVYITZEXnI/dUJSVlgncSomLlNSUnJhcm9vKUxjRj4rOSZlVDRi
PXJFJVxQb1tSQm9JbyEoc2RhNG8mUSdTIyJEZ2g/JU9xZ05RNmNlbSY2cU5DNWRjN19nWHReXVtV
UkMKJVlxUDMqJFxgMTpfQUVMdCU7W1hHbkBQaE1AZTUhXTU1UktTWl06VTtSalslbWYrJ2ooTTAx
KmJfXTYrQnJdXWNcYUJTLjM6L2Y5QVBVallFSClqXTZFbkg2VVVWK0F1VCkzdGcKJTlWTzRMLlpZ
TF0jY0hTbC1nKWVAOUdecjcjLSxgVyM6N1J0QC1hQmNJVGpWSVU7YjMiJ2VXU1Ajb2Y0YExaLHFS
JC5VVm1vKmZRWlspL2pVIz04cTNaPjxIOkRpKGdxRTAyVTQKJSMsKzBFPjldY0NeZChrO0duJ0cr
PEtPPCM1a2wrVEtickZYNCNWbm5ZajtfNm1LcyZtLFsvQ2ouanJfQkNlJTsxOkRIWyJbSHMnLj1h
XkdQMVNtOSNfXWxZUG8nUEBsJWswJWsKJWNAMmVCO1hlYjRAWyZJIkExP0BlKilpSm5PZmwnSE0l
bmFkTlpvKVNDdUdWMDczUDZpUjxLWEZQYz8iaVVJK01rUEt1TCFwUDVTZzAwQidPckF1OSpFcjIz
NkJENSRKTzoyR3MKJW5vM3FQYjBbMyxZIzMsZkREPFFCLWFpOU1JXXNISTZLSydVSkFGW21DRiJZ
O0x1OD0tRyFaREwxSltOSVQnTWFVSSpBUTYuImNqXFFWRzZzOG4rM1JQJzM9Kj8hZU1WPjdXV1YK
JSdBS3JYaFpyYClAUyRSW1I3dFY/MEAmX0E7WiM2NDA/UiI+ZUJTR25hZjkpXm4yXlRMNXFAbEI8
XFFqXDpOSWs9PGBWW0ZDIi11ZTldSy1zYiM+Y3BXRC8jcycqczVVUy4sUmcKJVdMLyo1Yj1XaSlG
PC1SRGcraEwiQ28jUSdPKl9UbF9WNismckpPYmViLHFxNEJlUi4wQWY0VCRhL1B1WmBVJTRrImU7
LWswKUVNIT9acDMrM0A8XUtoczxKb2onamxxMVdOPWcKJXAtVWZvLVtMI2E5UHVUT2kzJ2BCZG9T
WSVIJzIvKGFLZTM+LTNVU0UmayFGMDZ0KzFQJSRHay1tSj5bKHBUUzNBUFZDNCQ7KTBAOip0cnFQ
OG84SXM7dSY9MWg7KGIwYDFTaDcKJTQ1ajcrZCRXOlhfMV1wSmAqVD5raV5CNC9uKkteRjtta2E4
R1RKXF9FO1JMJGlPPVgyaENSUG01c0QvZiJmJjFoXDEjbTI2ZFlCVSZLaERLTDZdbTZsTjBFT0Uo
KmZgQTlULiYKJV1tJ15kTjFuUVNRV3NAcCF1ODhTLy1AMU1mVzVjSjteRE9sQ18pTWY2WUNeYzQj
blpOR0JYPlsqKk1hUD9hcTJlaUhkNlRPL15qaGkvLFQiMV51VzBkaCkmKj45KUZPSUtDS2YKJUo3
UlVgVSUuRyRQW01QYS4jZEk0JU51Sl0mXHAjWSZKQjU5JTRYbDI5Ris/XVtoSWB1alVSV1omP2FO
b1NaNk9sbEVtI044T05namBQaD1UYFVFQ1coLEg9SywqIWRMJW8uNlsKJUZ0ODdASGdwOW9mOFpl
WlUqPS1OQ2IjWDI9SiNHTUZqKmAnMUZlLGQwcDBoXDxYPTljbXU/Z1FTQHRmOUs0aihFX1RfaVFP
PTZWQzpzOVIhXVFELmNBTDNtWGlHclddQ1NBWFAKJWQiPEFUKUJXblg2OzRAV0NpbkxfLGxsQXEu
VyRLST1OU0VKWCo8LzE/UWIlRWRMQCYhL1RKUnNJM2w+XGsscE9DbkUjI2U3JyY4K15yRyNUTUE7
OTEoSmFcS0NKN2BAIkg+MWAKJShIckVRanQhRlVmc2IyZklENy5BZm9tJCdBKCo0RE5mUjdgLWJW
MWZKYChtTGZKJEdfaDhFS0YtUTM7PV1LLSFbPW86PSpTZFcvS082ZE9WbkVYOi0+TmFNbCdMR2JQ
QFRaT18KJSFpcFxPV18jTDZwYUJRZ19ccUptRWxRQ2A1ZFk9Y0UyMTVvSyt1JkRbRzImVW9mZjZn
Jyo5Z29gJkZNQ207XygtYXQ5OHNhMl5jIyprbl1dPFBWKE00ITFyT0EubHFXWyc6YXMKJVVLal0t
OztYdSo5Wl4wNlc1VEdvLUdXcjVIJyhaPCxDbGRsYlgmIUpqLG41XURrJlwpND85RCs0OVI1NWE0
I2NmQS8uQy5bV15pOE0/JUdUWys6UVVgTEhfQCx1a3RDVEVubCwKJVlBY1dnVj0/XksyXF8yQl41
KmllXVotMTM3X25iL2QuOUBZU1g2Lk5RTTdoJ11xQnBWcGRoZSQqKD44QGlxYi0+QCZXMCNNQSZv
PyVzMHRZUlE3J0ZeNG0+Tk4qNkYlbzA5Kz0KJSk/TVIhOzlQXVpQPWs/KylxRkxKS0VULFgrL0cy
LV0mSj8nOkVqQyJXRmI7XlgkXz07W10qPE5mRmMqTls8YUcuQUErXycyKnJyJzkuZm5IbD5gSEVA
bF9SMlgvZjljX0w1YjMKJUE4cF1Da0w6XSddY1QhKyppZXI4S2csY0hdb1VTNillLU5daScma1hu
X1AlckxmWGZUYWk0KWZeKW4wZU0yaENbSEVNVVpwVilNV2Y8X0Y1RSRwUD5TVHR0XmI1L29QKyRo
ZCkKJSNLPDpQLlNFPlZuNjticUpLUWBtaGxvQFg9VV1NYUA3QlBORjAuclZULFs0Jl9ocEdAJEJp
XS1NUzUmVT1dWEFJNkBQKkpbT10vR3FkQipTZ1RIKFVxbEpOMkFdJTA2bG00MF8KJWAuMiJvbz8z
Tkc5bixjT01wanVCUVVhckMudSMwNzlOTkJIXHRDY20qP2RwS250OWA1Tj09VGU6cG9NU0M1cWFa
JV1VOWI7aU4oJml0Ji4qaDU8IixhWytJT0EjY19uamxWMnEKJUFdT0FnSztxQVw8NVZIRklySjVs
ZTRKUFk1SCVhUlNBdD1PIz9CTyhPL1c9bnEyOW0qN0R0bDptLEpWVigqLT4lYDRlRDBUUERrL2Va
X1s1JWg+VFRdYkkmdXI8RVItNlEtKkoKJTwxZmxsTyQwSDw+bCtkT247Zl0xPiZFQiohZlUlPlRQ
QjxXVTs0KEZvOlRoWWlsIy45bVFvQzUhOmgtT0xBLy0vV15ZN0dtKyYqIUVvQSE7WFU7dChHPlBe
PDtJUlV0SThEZFoKJUFHUS5qQ0FFMEtncjFYZDpwaz5XamgxbiE5LyczOVVpMk87ajQ+LlNIWmln
VGJKYDNaLyNiPUpVXyVfJSUobGs6LiE1aG1FLW5lKVpRR18hT2IlLlRgOWYwUFBeWyQ/NDxDI1AK
JUhUSGRXQWErPlZbW0omNyRwPUQmTl1qN05YcENQRzpLRV40QjBXJ1MicDpIVjVuXVdRcXIkLUVp
K3RFNmgmKk1eIUc8S01QXlsuYUtoSEJuYlhrOT1pN1phP01LY25DMGQ6QTwKJT1XVnM4Mj9sLjFo
aiQuPFpTQThHbHJgYzlnIjwzZ2haXz5sUXBgWWc+TWIqM25PLGg0VTZIRWwkLXNCZz11Z1FxUSRM
MyJCPC5EZDQ1ZWVUPldnPWk0Q2hiNFheQEAsM2kxY2YKJXBSZjZdIjQrdEAyWV40PURNRGU2Sj9z
c19LUjhqTG8mL2xoXHNmOC1aUz5RMj0uY1BxU3JFY3I8aDB1PUtQK2RbTk1ZcTo8NVBzcl50QidD
aUdkYC4odVhjUFdnZkw5ZmYqdEcKJS0lXSMsUz85LyowT1ZwPD4kP1BNUFkxU0VMbm10V21Ic0Uw
OTlMUmNgcFxYcEBqOjctS1ApZmZaXD5nPEUzJ15taT83aGJWKW4oOGNIX2FAZDRJSUQpXXUsMG1I
ZVlqbz5JbkIKJWopVGBDS1gmQj86Wl45XzhzVCsyM0JOX15hTzZ1IWhgTWlqQjRrXnVkXyJpM0Nh
ZmJjKU5HYFpVcSU+Rm0rTFQwR2A+ckJKYG1BLk5Wcl9vaWQwbjhjQmRoSU05UzpKNk8oPC8KJV9V
ZXJIJV9JLHEwU11kTnJxdFhzYGFxMz00cj8jNyZWWVEzQGYtaWhxQTRzLUgmYiQjMSZjXDVMRFlo
ZWdeXSw2K2ldM249Kz51QmEjVlEqSWAvYGNVIz4saE0iL1hxVTsmc1AKJVQrSDEvQCxqYSNgZCtB
ISY5RlcmSmYpOHEkcC9UZm5gcTBBJ09pJzUtZjlyVF9GOUVBRjBDKEcobWRZQTU5KUQ8aGkkRDZg
OnU+Qm1CNyldZjRtZXRMRFgrQ0hna1hjXEVtcDQKJSVuMkZBIyspLT0oZnU4XVxjWFs5OyQ2aEdN
IihfTzBhdSIxIyUuOGFKSl5VVUB0MDRNPSomIVBtPixiTTFbPy1BKGU1VDQkL0ZuNFhiXnRWbmFh
b3FhIThTR0tYIkU/TiFpWkEKJTR1PnRrQFtyR05hZEc4UDZCJjglTCVlR1xQQ11VMWExXj5dKG1n
N2c0O29zUlwyQUd0YkoqPStXIlBQciZpJE9kamdzNikoYDJyNXFBW0FfSSZHT2MsMCFGQlZeTGox
Q0ghQlUKJW4qK3RJTnU2SlYqLWNuSFojTW42KiE9VzAjQF5zKz5iOHRbYGlsU045Pm4+REkucGNm
K05adG0qZXRTZkI1Z01vOTBYMUhjQ1UvRUoqa2cwYDQvcGRyIWQrP1VFIyw4IStQak8KJUA2P1Ur
O2I6bW1OOE44LDhnUnNFOHFjOFZMaUhnXlQubztUSmYlIW09b0pQYzdqPnRLJzBvTmUncyY8Lk5m
YjMpIStLM005RFNiLzMxPjd1MzpXYVRpclteZERTWilRYDsiMGkKJSViMmNDPEkqUl8sSzskVTcv
UidwKiRMKCVPJCJCRy9GKDghYFxnNk0sT1FZMyheRlAyQEUvPFcwZVNrM0dyVDknY1x1N2UnbFE/
clVVaVlCLE9RWS0oZElFZFViaGRpTGlKLEAKJXFaZTphPFY3SEcpKTNibUoqXTQqci9zaVcmRT08
QCVhWCk0LnMqYjMlMHE3W2NkOllLST83cmdLYiE7N2RocilwKklgRGZPJFxXbnBNI1M5TENXTGBb
Wy9AaTU3U15aQSZFVEEKJV51TmkkNEU1IzMwTCRCcE5lYV5OKEVHK3BmKUg9Qz5TMS09JjlLbEJk
ZnFgMVQ6cy9KRCJwbEJxdW8tTztbSkI4IUhARE9kLERXR2FFXkAsSmUiLUpKVV00Mz5cKGJNJk5Y
LzoKJTt1LkdUcHU+LGxdN3BnMSFBQVR1M3JKOi1WJTVxS29AV09PQzNtLzRpVltYQDpYKytxM2FB
RFJxW18nNE9xbGRgN3UtXE1DXiQ/Sj8wSDVWMEJscC9uMEBDdEUjKnFSJWQwN1IKJTstWm8rVVZl
YD1dUlBNQ2EvXjtvKEVHLTAsNT02QXBhLDUkMEA4XDBDKHFiRiVUZ2dCNHBFJVFBJkIlXUQ9OWZI
YkFsbVJrXHFSXk5gLnJjLmxwcTRITFNjN0UqaC47QlQ+O0MKJUxcJEg/b2k2NWIzaC5pbWRNVnVv
KkEwRj5MU0RcM0NxK19MNFZBYmZEbzRbTSpBNDItKmZnZkcxK3MvK0xcJzBmYTNLVXVkRzkjak1p
KzpBWUJZRz1VVmxvTFFFXjAsTCg8QW8KJSlzNSo9JDY/OiZpZDsucXI2LHVTMU5ZVFc/TnAyMmk5
T1daVicnWmslOiM4aigpMDtRTjVsUWZqW0IoSj9gTnEkclcoPEhtLCJARypYMVdEbSZJbTtIViJT
VU46LiM7IlEySzYKJSNwV09DO1VfREZQYzNfNmdfRiQuUGswSzouXFdBUSo1aC9mVSEwZGJjRyhp
VEcrQig6ZXFJMG8nJDtLbllmYCltY1YnVD48cTpCXVpAI09WLCFHTyg3Nz50WWBEJXU0S2hjJHIK
JWBIdShIcixBR0pbNiY/J0IvW01YVl8yb1BhM0Y2V2NJMSRycjpJVHNAODorRGN0IkNkKm5wQGNU
Vkg8TkUnVzk2O2dfQmYmTm5DPGlhYGhEVE9WIlxrXWpjTjxkbHJ1PCxFTlcKJTAuRDZQOT9NJWtI
QVJmaHFpMVlALjdCMGMmWTBUTDAkO3FwKSg5UDltaEglNS5HVTUjVFdBUylfPFhfNEBGdSFBRGg2
aGgtJUNPLFNWMmBeVyM0QDhZJGcvIkxAPylkZm1qbGsKJT8wN3FwcytbUGhncXVxZGI8PlBlIiFB
bUIyWU0iN0I+MzciP0ItSFVwa0tacEtdcVkqU1dUQC4oI1NjO04/YVQzOSJoJl5bOjhWczEhWGxJ
WkgkRiJqQFpkJUdsPnFKOyw+VGUKJTFVKj8tP0pFcGFkQTshYmBbWW9rKHAjW3FEQS41RVxSaDNS
al5WPWpXTVUvLF1ncDY5Z1snWlRkczpUaWhRZSxKW1FLNFFnKjlxamM5Z2VZN00tTnBNNF1tLkhZ
VS5yM1BOLioKJUN1W3VLTTZdViZaVExpTDUocCohUFFQW00iI1NTMSQ1T2FAPGF0K0dnaTQ3cEtz
M2osKVBbcUIoZjA9JGY9NUpWLEE2aWJFT2ItKDdNLTNDSWJfVmclLj9uPG0xK0clJTNAP24KJTpp
JV0lLyVgYXE7alxfR3I9UmowPU9kM0MoO0AyVyMiI0hIKzFLOD5DMHJGR2JQTnNWTSpwVmVWUDg9
SnE/cGw/OUg0V0tIRV5mNGIpMjRPclReUyY0IWFLNUNlMTdiKFJeL2gKJVY7UjJOMEchZm9YKTJ1
OTdaOF1lQFQwRClzJmZnN28uQ24oXE5MYDllMEtjcGUpOVIlNVc9PiVwZ2FMJFFULD9xIS0hOzI/
dCEhOUFFZ0RvaShgXkdxV14rIm9rYT06VSJJcDgKJWgnWiZeKTZKcT4/cko2SHFBOmNzPTIuI01c
aFovWUdFIyRFJDcqUFVVaF5cTjEyIzAyYUEkWHMiOVUmcjhyUCFfSS1mO19waDVwRShBNElhOEs9
WCkuY3FqNlFGYShzPV1aM1MKJVF0UCNbQj9idShxcDZDLk9ARkJrKm90WDtZLGdpU0ZXSDlkNyte
O09VZDsuTzheY1d0NTNgITJrXyVDXFBuOkVSO3EhOUhBa0RZbVBSPE9eKmVqdW4uJHNwcD4lOSRj
N0xYRCcKJVw9P2A/bjdsKWw+JTRNbmNLO11BPGhXaylcZ0MwTztiQVhib0g9N0FPMVhrImUyIm5T
JU5jNydKU15OTWJFb1E9R14pMFVWOWEnQzhYXEBZUXMsNzZDUzZ0OGNwYS1lWF4+KU8KJVduKDlM
WiwpJ08+LyMvbHJTdUR1Nzc3YGI/MSJxNCk6OC89ai1raFNaTVtnbmIlX2U1XF1UN2pdcmUjO0Zg
SFFTMTRHZD1bLDE3ZyswPjJdQiYmK1JgajhAdSdtVF1HU1pOW1IKJTRuNUQsNVJAKlA9bCwiUCs7
aCVpQU1qZWFsY09QciNKaSMpcCQ2PXE3SyVzbTZPOWlBaWNuPjhDIXMhPSxKbEYhOVpyLSpPUUoz
bUwoIS9aXC9MPHQjLUdNVE1BTFtUVi51MV0KJSNWMz4vbFJkJkM1KWMwI3BfayU9XDhySXBIJSFY
JWFKXl00ZFw3WyEvdERELygvQ0IsTkFuczAvUWNwb24yXVVcPGQtSyRCRmkpbGo0XyRUa01HTk9g
JFA6RU4vWT02cWJBL1MKJVw7MkJqQSJmMixcamY/YEdOO2tncmwoUVpkSy9RM2U3JyYwLVhvO1kq
UURpPzNzcTtkQnFoTUBNaGZNOGc0ZSE8IiMjTCRoSy07IkkhYm1ZXlJgLS1gRzBCVC5EL045T09m
VkMKJUQzUURBL0NlYGxuTVNPTFYrY3UtJUo9RmlhOyEyRzMqaE4pKSMyJEksJEhLPSs/aGgiRWJK
MSY8LmVxIWNTbnBeKl5wOm5lPlN1aEQ3QEhEZTY2TT5aSEhbOFIoU0VRVENgQSkKJV08QTtPRlZu
ZU46Uk9kJkNsTm9dZyxhKG9EcGFfQWBNST91InNTZTo2VmVZTkldSjEjYWssNVlXInE7KWk2U25l
OURuRWheW2RTJzM7aExQRCFHJEgzXjlgcVEwNCkiWyZcR00KJXBYMGd1WUV1ViFeMENVKltxM1lz
U1NGKyE+PEIrSSlRVyJ1bUZIKCJsO2orNmJOTGozNzRtb0RXLDwhdE1XVTdiQDlKQnRyUTZxXE5s
PytwRCoxX0U+LmEkblFEIShNW2NrZiEKJUUmNnFXTWpwXCcoakIvdUg/SC5DP0dFcidILyxZZkko
XU1vXFUxV3MlXDtIQzMncFs9WlI1bCEjTlAtRyo1UjZrM3UhVyRvQCtzZFtOWSx1KGhwKVZCVUZg
cSZlLFgmbDVcSEUKJWUvMEs5L20uN29ZaCQ4UFEvViZCcjZnUTlLO1dgNmsvZD0iNWwyNSZXQ1pX
KmlVS3BfaEA1cmtpXSRBLDBrcVoySCgra24lWGs3a0MsLlw/TC87dFgpLCZNJW1cc1I8NTVXM28K
JUBLO0BFZFpRSz4wRmUtUzI7PklcZD1CLj0nQEg3MkskS2UvRUhGNmJYNmBVM0s+JFlpUUlFSSZF
XnJuMF04NmliPyxpQGphTFc8KDdJbCJnLzhlQVpkKjxMXWppWzJEbU0yZ0UKJVRHTDNsSVQncD1h
KTAzVW8nNSUqL0c/QDA9NV4xYWslYVdeSyI1UWI0bTBeMm9rWkE/WF4xKF5bPTxUYEBEQGBzQjFj
cW0yT1tnVUEiXEhbL29wR1c8OWtbPy0qPidycUxtPmkKJVUzNkMiNCdMQ1spMkBNSVBFZj43LjVj
MztLQlYiKUw9XFMjZEZkcXFkRS0lYyRWOTAwKTJxMDBLQUQmNmRrM2Y6RFhITSo8UzJrIjIpYXNY
UXE6MVo/L2YnYS9zPmdXYk4hZDcKJXFCZ09gZXBBXTFwQFpSb1ddKWBsaywmWCRjZG1tPGNQIXE4
Zl5BSFxsQWE8YVhXKGA0akMrLVBtbFxNSk00WkdQT1piUT9sKWZlVW1LW1MqSSIlLUgtOSNxc2xQ
IU0kR1k6JWAKJU1jW0NOYVlHKkFVU2Uyc1tZRikuQz83ITM2ZSlgXWphWmxZYUJTM3NlVF03Mk0t
L1BHVW8/XC8iTk0vKShuI2pBRCwmVUYyP0IsVXJMJzdnPkhrNSJmUnEwTjVQPmJxSjlEPTAKJUwj
bURuKTcoUnE8R00oJGpGQGJmTEdTMW1JUHFjXzxALy5VYi4iai4wXWxCbTZWZV5FRGIwSG5JOixw
M3BbQzBwSEozMz1XQzo6VEtoayw0K0RwXThAcyFNQEQ7R2Y1JWU+RiMKJVllMXI5LTw3LiYocDFL
ZzBLbVRMNS1TQ0RRRz9iRW09NWJNM1BGbWBMTD0oVSstYW1YSy5SWmVCT1A3UlguSVciMUk6TUde
JTpTTmssNzgjY3MmTCEqTCEhUTR0KicnZjdLPSgKJUVERSJDakhoKVFjS0xoPStAIjFrbitYLkw1
bEImP1VbPyYnQSdTVmtPXVVpOzYiZE0xJzdWU3AsP2Q8Mms1Qk1BXjVrN1wqKzRrPCpJZ19UKlZm
VVYvKDwxO1QiNT9EZTRCVFEKJTgkO3UoITxjTWxlPjxtZlg7OEUiVFQ6cTFXS0JAKzMjZ3AvIUZB
Q3NVPGpLSixQPiE6XEExL2hDZ2V1SyxhKiQ6UFxQamdnPj02bTY+QDcmJ09fIyNOLjtxJV5BIWlc
O1FhczMKJVVMdD9JIUdbZ0VfMTsrRG9MX0NIMVVHMnI9VF9VbjssMXA8YlMrKGVYaCo4ZmVWLl0i
Z0Q+aHImX0xVQkFeaXUyW0IxO2A3UEk/cXFFI0E7R0wnUFJAQG8qTDViYk1McVVPdDkKJVsyKSJh
Mj4/MSRhJjFxRkI9b2pscTtsTGZMNkcjUnFWRDJSb3IxM0E+QTYkKiMkUE04MFM+WCVRMERmZCdD
bV9IW0hOIT0+SktyQmswQls3MlU+dWpdY1VvJmhAbixyYDZiNmIKJSRlbTBwPyM8cU5fPCRZVzoi
NjdNWSIsQ0Y7QShWOyViVVl0RjBhQDMmbi5dLVQkN003VDhwJXUiL14hbmNuTztMYG1PUWlyQ1Bd
KW08cTZsSS1ZKCInSCQ7VF1Zazk7YWs6JTIKJT9nOztVU2FMUFhUNXI9OExOdUZGIV5TMiRbcV4q
JVtidXExZkxPJiZhWisnKkY0QC8pYC06R1llJCQ1Tz9xZip1RShhJj1lUjlqUU1wSy4wOjBOU2A0
L3VQN0hgJz9KTEpNZW8KJTZHS2plMmAzLUZUZTJYVSMzKDJHM2dtI0NwPXA7JVtoZCkqX3FxJmFq
XXJCKGlISFgxZmMiUSQ/aXJwR0cwbUJYYlM2S0Q1LnEnMHIrOjhcTXUsS2pCbWpEOG09Qi09LG1v
IXIKJUI8T2d1KS1bbGYlN2Y9N0E+aEIkXHI7QyVlPE8iSG9bcGU5b05GKypBV2dFOjQ7WD88RVhZ
S09WP1peIjM6UW5lOFdyUD5CSFElMyxuMFYoSkE5MyckWS08KVNCWjRcNV1TUl4KJS4xO3RBcms5
W1w/UExyZC9kXitQKyNXUCMzRkg9XE8jLkpYa1pbJWkoN2R0WFQ6bV8vX0lNXmQrYT8vc1EnOyUv
LEFnK3VeL01wUWIwTjlcYT4pc0NjbEIuUUJUaD47WE88PDAKJWBJJUk2VmdhRzdtODgiOlNtXUkm
MHN0dCRDciNmMD4vN1AxJCo4MlQkTEUzNCxpMGhLP1ovdWptbEdwOlIpZSkrV1BRQGxJTDBZRCgh
Ryo5VlopZTNWTywpNVleRGFOQjRYSygKJURwTCQ2LGE/VTFqUXJxdSI4NFNJLzg7cGkiTW8lIlJX
TGZmXilJXjRlWnFXV20wQzhlLFwkW19yTj5MKTIuY1c8X1w2QXJeZ0grS2x0KkxrZFBAWChrJG1n
L0BUakAtP2suLD8KJSQubiQrZ1ZTREglL19XQTk9VDAyMztCNThVQ1ciKil1L0RRY0BjWGxFTyko
X0I5NjcyWD8xTW1WSSkuYFxQVj1URlJpV0xnK1lkWyFLcjosQz1JXSUuXjklcz0pODdnalcpUTMK
JSgwQ1FDMDhgbElJOW84Wy9Kb0tjKk1XRi8ySlpcPUdsQWFyO2FYJiFNc2NLdTFHN2xvXVhjV19c
QnJtL0pdI250IkVWO1YrYz9LcmlXPUpBUjcoWDxmXEpfSSY+UF4sMSwzNnUKJTUtczZxRyhtZGBm
OUVCKWYqPShnW1kzI2U2ZT5eYT5CbVJIcXBTK0c6IkVZNVxgN1AyN2UjMFAxODxWODpNKSNsMUEo
MztwUU8hK2RTOnE+M3Q5Z1BCWiQzWWhlJ1prUSgqLToKJVRvPCJwZlgjSi1iNicqaCJbIjguISpz
PC5jaCdKSzVOVF1ZREc3aW5CcG0pY0BWbmkhSVB0VkJwUSxtW1ddPHNOP28tSGk1ajBTO0JVLDZm
Nkkxc1BEMm9IY0w1VUZER1VTXmcKJWBPJWVtOkZEJmk6Wys1XEQkYypmJ1NlKSFbQ0kwOmpTZjJi
T21cLUo2dGNaUSpgJTk5LmxicS4jPzVlVXEvdTcmcUlZS3A9b3E3Y1pnbnVXI21kWnNTRnNjIipc
Z2UqQ3FVRGEKJUY+W1RZZltwIyFgOV8/UF9wSistOXFYQihQdXJZblMlQTRAWWFSZVBDXXUyRmhi
bHApOlxERC89KkRUPCk0R3R1ZUg1cT5GOlYoJjJmM2ZVTiZvQS5dNEZgczJTKElmMCs+RTQKJUBr
I1lJW1sxQWJQTy88RzxkTjFZWyRmZU1DUVhFLGBrZ21JKj1zWFQoRjI6Q1Q7dSoiPUNISyslME0h
TDVgaFpVRyNuUlhORDBUTXE+RSt1PklfJXM4MUlkZV9wUS1XMzU+LyUKJVlXLTUiTDBSNDdhLj5m
LVwrOydOMjlzKGQuO2hKJzNzSiNbXSM+M2FkIjRDZSI0KjBcOURxN2clYj0oOS1WWWc8O0ZvR2ci
UlQ9U2dwcEopbTU5Lms4Xzg+TGFFZCtVUyk3Xk8KJWM0V1BGMjRsJidIWlI4PDJhOFpQVUlsWlBy
c1puIWVibm5yUTQzM2NubGxkUDYwJ29mbDJcRkpqdVYyMyknRUAiW2RGTClqSixEV2xCQ2dGYm5v
MEc8Wj4/bllSMl1INm9TNTsKJSsuLFw9YiRRZ2pVIiY8b3AxQkQoSmk1YmQvYzgqMTFHPVc+NWxX
LnVJUV5QL2o5JFI8Q1c9WyFUajI8K2c4bWk9bzJHaS5oTT88OXBiMERrUj9oZyw9TF03NDo4WWhQ
RHJycHMKJWo+VjVOWmBYdV0hVFdTN25aRSRgJDhbRj5gIVtRMiZhOStMTmo1cmBfIjo4XD9CQy5A
ZXJrX1piMi5QaFRxLih0citvXC1rQCFEcVsjIzZrOlZhNmZhKCpXTSRxM08iY2RnTkkKJT9uO1ss
N2ZKUSkmNWo5YykiVTUzJSxlVXVeLXEmVC1WVjQ9ZW9OT2RMbChkVEZcLEpkVy5vVmpWMGM1WV1v
UF84TDslLldEPGR1NmJJKXRmbUNfPywqX2tHRUQoSyJJJk9sMlIKJUMjNEoxVT9JJEBIU3NMPldE
Zm1sLS1bKVslJWZFRW9PWiwqZVdJUj1OPi0+KkM5Ni5QZigqWDMvazlYZFwlRVAmaDw+Y3JhQXRY
TU8mXXVkWThGQCcmZExSQWRHO3E/R0pGQlAKJVNdNjUxKzZsI3BIMkJdX1RAanAoLVpHbWAjMHVc
QUwrOjlAak0oXUcsPzpUQjdSZm5dUl0+aHNEcWo2OG1Ib2EkQ2UjJ042UzxoMWQsYjkoS3MzLC1U
VVQnS0kiNDw5N05kb2QKJTVwMEZyMDtRJjc6LFBDNUE7PldOXCoyR0FoalBDJzMrLDw8Q29dR14p
OCMmKSNhRDAmJiImXHNJWk9fSmRiMEF0XEVvWyFyUnRddVxsWVx1Y0UnMW9OX2QoMHBDIj0tUD88
M00KJTtrcCZANVluO1MwPSJWQ0tURHVHXXFdWCxYLkpsPiNsKmFmMmQrdTg8Z2g6NWxNdTtlb0do
Qm8qVnA/WXB0ZyNrTUJMTFJObUZiay8xbW5ZLTk4KE9OdVx0UTAuR1pBQztkLWoKJVpZdT8mPjla
KFdWSk9aJVZ1OEZJKCEpQ1kyMUNPWEY1cThaI2ArQHJMY1FiMGlPcWszLVZvb0YtcF5nKlFEZ2kw
LSJLVHEocjplRl08MmFycUBJSSJUZ3ExakV0YXROIjNsciIKJSNsOlc2VkdFOyZZLloocUdRWXJi
QnVRSUdLLW5yU2tIPiVCU1tuUEE3Sk51S0hYZjYkV3RmQWAnNyJeUkhAXEgmZ0NUQGpxUVU0REBW
YnI0OFxQb3BEOikibVkpRXQhbm5JIXAKJSFhaVI7OklLQWQoJHI4aUs1QFszMHFEbVVqQGIxWTtz
Pj1mI0tzRTxeMTQkJTEvKHRSJlE/TU0saFMhb2EwWUkwVjldOTdkW29cJyRaPD01cXB0IWtPWWZG
YCRWOjhUalghQEYKJVtPKyliW01aXVk/Nl5vWDdGRykiLiQlOnAnSFRpJEc3ZEVuO0dvZzFOP2sz
Qlo3SmwvaFIrKUwwOS49UF46Wl9CL3Jua2g0VzE2LGNdMUcub3RyQToyUGYsJF1CazgnPkdFNF0K
JTpIUmluQm8iQ3U/Tm11QV8xbVZzMWNHL1glXm9WTEcrW011ZDpKJCZyVWA5NTVxYkJqclROSUlh
Tl0vZDRIUSg/OF5SWGAjMz1BcmZRc2RdbVU7U1xHK0hOQygjYmwqNFt1LWQKJVkyaWUmJ2xbJyNN
QVFDMHBnUi5tUnFQcFU7TC9dSGpEIjotUmFdPiU4TSwxZ2dfUl1ZISUxczQqInRPMFJUPS4xWTw9
ZToyY1EqXVZcSDZPLDsqb3JXI104Xj0hNz4jMlFZRE8KJW9mKkBfTzNEYlZrZzwmIltPXjhRWjNF
KSdlayNBOkJraC4/STRYXyU/QlVRVERQaW1oLjwkJmgnWDNhI0lJaTY+PWRHW3JbImhBbCpKdD5d
YlRvNVtBYnUoQW8kNi0jJ0JvclIKJV51NmxNUGA+YD05WVBLZlo+MjREO2YnODJgMHBVJ0lgKUMi
XFlya2tBZU0vdVooQnIhZWQpLTMyKFlGT15jREluTTk9Pz9WOUFkRUA3IW1dUDhRTG9sP3VVPFQ4
OmAmXG1wMi8KJWl0MnUvU0UxSCVDaE5qaExWTnQ7Wj5jYFJYdTplO3F0Yz1qMXVKO1tZbFAkdFk5
YERXWDpaXiFfNU0sO2EwYl4wOiVpIV4/WWlkQjlcSlFnIl0pVEFlXFlLTFFDYThdPiVNQmoKJXBl
PWpETjcmLVFwbEd0YTNpRjQ4UTovKVxmajBUNSxicyR1XkNLJWFxIXRORyVQXlZrZDhuXzM2Uyow
cDI4ZEhgZkdKWmtCSDtbYk8qJ0lJNjRyWnAvWic+LTQ5LiIjbkwicE4KJW1ScmNob1BLPkZMK1pV
TFksPXRRNFleVUdpR1AiVi5vP2U/YztRMClDM1dqKWlFNDdcTS9HSU5dZkwjSVxpb2I4OkhIIypi
XkRnUCkjLiZCQ3RWVm1BI3U3P3EobDNvRVxJW0wKJSJKcy41OlFmKTFuZkJza1AxS0doTkF1Syku
dWM4ZHBvVGFyN1dEXkZaVkZSLm9wV1BQUzgyRkxjbFcpTjNUUkc4aFNOVSloZ0lyJEU5dTsnZms6
ViRmXylUNiRfPlI8YUAzQioKJTxsJV1bclJwQmdaKFpJaidHbEFsNHBwKipuOEUoKltNLV0wIlpj
JGREcVFsTEBhc3VRZzRBYyU5XjMzREVHMjReW2RrXCgjby1sMnEtRiIvPSlCbWBMamhDOCwtNXBd
Q2piUywKJWwyJChMVSY/dHAuISE7QjckbldeWGYuLUo8KF0vPkZARz5hXEgvXmM9a2ZXRz48OidH
Wl8iNzlsXVFPWUJca0dlZCQmPG9LYmI2VCtAUShaJkspWCpyQj5PKi9DdGBCWUUjKycKJSMpWysm
ZjdyP1kyUEJeIk9obSRSKz1FMFQzQ1ZrVzshOyhbOzFKP2sxajttNCI2Rl0lcCdJKjBaPCwrXkhN
ZTZHZV86OXQncUE7YTA1WEFRLnFoKV1sKWxjJ2NyXltpR0wjL1cKJV9rJ09HIydTKz8zSC5RIWtI
OS1ELSpfdU9aUSopVGhmYTdNSzpRTG9WSmAuK1FAUz9CTlIkWHQvZjE2RyMuUUlzYE9yKFkrVUlt
U147ZDZUPyJ0VVglJ1AxYTJSO1dKcmRnY2EKJUNBPC5WckJMQy8ySzZhWE1FXXBQO2c/SkkvKVdQ
Y2VzPFIpTkh1dUIwTC4+PjpzR0FlOz8hOzYmLCYyVnIpPjEnY0JqNzE0VzQodFNycUZzQzNzWjtY
bVRISmpwMXAkUiNncF4KJScmKnU7OydRRyZuPGRIXyo3c2hKJTRDJDBQJD80M1hEWV09Z2VXVikj
MSdJVGJiV0U1YltaI2o7KFdha3BZbixoW0teaElZS1UqIytWMVJZaSlWVGByXStmcidHc15ZNWo1
dGcKJTNrKFc+bDwwVmBYKWdlLCgwNTAkOGFrTmohLTloPz9qaHUpJS9aczgtRVQtLV9yOmZsMF8r
LT8pMXFAKjBsXDBxcTNGLl5fMzstNF09cScvRi5yTUcsJydEYl5hWSxYT1pHLiwKJUJldEtTLjdB
Nk1iMlNmcVc1IV8kYGA5PDhhXjdoJWo8XDxwMWFMVD4pJkJNZDVZV1I0VVxEdF1INFFMP1tJJmZB
MVRHKkpLWEpbKSFOWlooQSRfR0JnTjBfM2dvYXJzXTR0ZEUKJSJuYFElUXFvImNedTxsRW0tJ0Uo
PSJidEQ2SFZPW1gxLCZuRTk/NUMuMEJETVA1UC9DVjhYXlpZU0JkVyVsUGNXLT9BJkgzPWpAQk5z
YWxfTzdlb0VJV1tIUmh1PShrZlUtIVgKJSpSYidONT9mc2IvOW9zVVJqT1ZhZj4uZ100c2QqW0Jg
ZyZaYlpWQy5VNmFRYzsnTFEjZE49QyIyVXE8YmBJRitjIkwnTUdKXSVNT3IzRWEnYS9UaF8kKk5C
NEUxM1lVYmglLiYKJW5zRD0ybCtlOitrPE5wJEIuMTUkJEkpcSlSP3FuQGFkNSwwLWE8Xy9XK0E4
STkyOytvOGhmU0xlc09EQiNKLSRLLTllZVFvKCxoTCZ0Uk1mTkRuUDImRz87WFcmNldURGtWX3UK
JUdRTUJLXlNoJVthSnVAXzFTIm5RO1wjcWlyVjRjRUBFYSMrT2o4RG87VEVTP01ELGtVP0I6JiRB
Nz4oSTg8IVpkNT83KCVCNy1mcVhHRioyLHVVTXElPFFkQG0uJyZtby5tOWwKJXE0IlgsOjtnKVw6
WjZaRFZQZ3NzWkdxYypONWxePjxjOzBXOU5ERzVkLTdYPyM6NDVOZSpdJGBmc1EyNVQjND42RG5N
Q3FlKDxVbSY2J1MkOGNRPDMhQSpbWU5uOm9aUXFfZnUKJXJkb2kkSiotJ287RkFwXSNdRmFSWSQn
MjBRYS5fQylSQ0MyaGU0JSNiKkxzYTExbXRSTG9IXHBbK2sqM2pfSzRzVG9AaSJLXTI6NG8jVWw/
IUU1azVLOC9uTDlfaCtQQ1tqWG4KJWFGSzkraWlGXzVtUG4sPGc/OURIcDNndTBhbjxHTTBTJEQ2
JDgkWVFjT0JoLjw5bT9pbEdxbUkoJlRyNmJpYjM0WmJSXDM7TiRVYFJUZzNFQERTTShlUylFaGla
RmNgXj9HY14KJVdyWjhiUmd1OkJpKSdiUUBBRiguVTxySyNPZW9ELVYtYEBnSGozcChyMVtESWRF
cm5YOSZRSmllZTBpcCZJVyVQPlt0Z2tVYmRaJlxMW00jXDF0bzZBXTgvYTtJR3VtLU0ldUMKJWMv
WDc/VitiLlI5akIyK1cyI2tPSVpOZFFPYGJaVE0kRERPQnA1XUQpQjNqLSlpZGsoWCcjcGk4LilL
ZGAvYVxmVD4kPWBvcXFuZTRvNldGYkYlPWgxdGg3U1AyPkJVJGNiMEAKJUtcNGFrbmQ0V1E1LGsw
S2Jva01lPWAiVT4oaW0kNVk0QGlSYS9aKzYqU05gKDM7UzxKOEJqVl4lci9KV1BoXFpzLjcpNG0o
PWhxPy8lXz9lTyl0IiYpZ2tMP1ZYLmE3Qi1RV0QKJSluPFFKbzFNSz1xPztkQ0B1TGtvZGc2X2sj
SGdWXFUrK2dcIj88UVc4R3UxQV4hRkdvYENpcFA/a2pMZSZlPEBEXkZwSE9sIyZSPks9Qic1cDIr
JlRVSE9xQDNgSSQ1Wy9bS1EKJVpWdTM4WENBMyYvQDsnKiZRVlYnTWJaNlAmbjEhLT0vc3NcXkFc
K1xaI3NTSmVyb0sjQyhcbUw6LTI4V0c/WjQmPytxU0ZbN2ExJGpncEtwUVtNaTteZkEvUj5VSnMp
Q2w8SyQKJS80IXFhPVBTU0ldSTJWLV0sWzQ6QGApT1NaOTk1aiUvVCFjPDVIdCJaXTcsUUFmdCUu
P2knX2tTbGRMKE1CRVpcUUVzKDtZJ1FkREZlcWhXWj91VidvZTlhRlYvdVQ9SjxiSy0KJWpSNyc3
azs8TzdoXjlHZDBqLThqTUpRUT85QG1WYFspdWVIJUZkZD1KKC5iQSI2b10hKFhSUyNIbUxtbC5e
O1BiQGJlTTRtJSYhWGxPSXFfPi1aWyE6XDltIS5SP0wpN3VAa1YKJV8+MyNmUTwuQVVLW08sTVo9
VV45MihLVSpuXXBSYjw5PF5uPDlkb09GUj1pOGlGLU4jcEZkSWZqViY6QSdHOUgsUD1eLk5DZWxv
WTs0bjE1Tmw5NUtGRC1WSSxORz5pTEc7dTUKJVpnSUhFLCRwWD9TJDk9Jj9mOlFHL0smPC4qOiYt
KyZpWj5IZjhiOFoyVlUoODtcdWhhWDdqQztuNEVNcl81Nz1gK1M3MFlldT47ai4tSCknKi9mRSMv
LnNbQjE8JCt1IUlgOD0KJW1McVwkWDxFJG5MUDNlXC9MRm4tNWUocjIwWGZVNzk4KVk6Ly0zTzYq
aWEyNTRxIjdGLmlBREpUQFhaVmckYTNZOnFqQzg1aD0jQVEuTTlzazEhZTNoYnVgPUNiO1NJblts
WSoKJUhMZFgoIVEsb3Bma1ouXGtVVGRhTiZPXWJsIk1MT190c1pdV1BoayhDSDc/cnFrLVhHQE9q
Q2k4PkhhXlFgbyQrTG1tTGE3Oy5CUGxyVGRTRUtfOzI/QyMhakdxLTE6TzooUUMKJTYnJypPVyZf
cys+XEpiIjBeVmgvRlNLT2QpVlBMX1dFSHNGJElyWWJPbjQrMWpzYTNKPFpNLC1NXkI4SnE+WjBI
VSM4LjpqZmsrZ2htVy4wcjpjZ2gnQGpWNlhOMHIzNCdqWXIKJWFlVWlqVDA5aysqTDtvSHJQLUU8
KEtqSDZbMTZgc0JKbEIzZ1gubzVKMlBzLiFLMWhQalp0amcsIV1eSmljdWhhNDBXTidrbW8ub0ss
NFpiM3NeOWpLNmEqVnBnUV5xUGpyPlMKJVJwT1VaTEspI2otOyIqPFZdMVZCbDYwPXE1LEBDUVZV
SV1SbFdPXm8jTXVzRy40UFVLKmc2NmlDRyhfW21wOD5LPU1yN15nLTIyLDtxayglOXQtTmtOSGxr
QzdvSyFLSk8zSS0KJXMvaiNKJ0MoJTwnV0xYdWhwPWZnTGVJa01Gc2EqVFs5Iypza2VDU2xHTk9E
Sy4mNmpMPHEsIU9kNzFVY2VsXVY4MTUhQmNxVnRSYCJpK29EMi8hP2FPNmNYSUBSZCspPCwnSk8K
JTNsIUQ+XXFANHUyP1c+I08yOF06VnMqMWo6a0owNUovMVpWMyE7bDkvMiZZcjI8Sl1hRGA3c19O
SGtUUSwtWG5qMiE1aj9gO2diR0ZHPl8hSC8qS1hRLUkqOD5rMkJGRVtLXHIKJSwjW0tLXy5ZYjxD
WjAjI1RzZDYvOkhmR0EvWCNrPD5cRUkyKFwzOU9QLmcwb0g4Ij5FKm1BV2wndWVVWlFXQGsyYW9F
NlFdZGZXPmc/JEg6aTlGTlNlS0QoTFtqKjhYXyFJMWMKJVxISFVoWUcoPldHPD1xWEVgK0wmP1s9
PlYxOExxN2xfPzEoTU0jcj4nZW9QWjJpP29QLDYkQjpKQW41UE5VWVpNcjU9c20qVHAjODM+b0VY
RnBVYy5VJjFZZTttcSguXDwwMmYKJVJLX1krNCIiZDkoKStpUz1NQ29cPlFyXWMoYCU/cV8iXWpu
OmVAP0w+JWR0UWQvQVM+LlpqSDc0Qy1DTSQ6MkkoISpVTjZeYnNwIydBTj8oRVs2XyRkVE5ITDQr
PCxLT2Rkc0MKJWN0SG9fWTc3KjZOcWE5bGtjR0NFVUdzTicjbjchTWxvRGtvQnU0RnMobUNlRC5T
JFJJUTZhLURQSlB1KjBdbTJYRWdpTEJMVmUnV1pIL2JrXipSOC8qLjxedW5BR2dXZDhlVGYKJU9K
OThnPmJFYUExRlt0OlghKiZTYTtvXGlOcTNaPGRlMElRYT1CO2hjJCNTJEg4MCZjW2t0cyNIPWs+
Mj1KLHJsZ0oyJ04vMl9iPGxHO11POWJCcl4/XkJJNDNKP3JfVz82M0MKJShbVyQ1XSxSJTxORmc2
TTMpbWJZOUAnSGBJST1gcFQsRForKWFYLDlYMnNzIihtJzopaikyQWFlSzs1W1VnXSQ1a0hZKmJp
RToiUTg+aj9PNklVUWxtUUQvQSs4M0AqJSpcSF4KJUtEL0c4ZDlMR0ZTNjx0XkJOZFlFPnVXdTJI
Ui5EYEJGZEVGTlFmTj8wJzVHQzAkKytrV05bKD5wO3QyOms9dFpIWWVvR2VbPEtBQm5GSCFvLS9H
aUROcV08ITdWNi8+MClbRDcKJUhdTid0UnRCSiQpPyM6V0RhNl8yJlxNYzgrPWEjWlBdMk8qWE5d
OFBMPl8jIS1jNGZMYWlaT0RdPFxtZUlnXmpJLWcrNUMiYk0rNEAqZUUqX1Anc15lYEIiOiU0MXMj
U0M4YDcKJWBXUzEucGFZLCdYI2RkUmlCX3VxKzBdJlNQKWBvMTo8MV9hTl9DR25nWzFjRUpTTTYu
OGkqWDxZTmk6LFxHZDpAOy86VCU7WDdEN1o1a0w4VytrPF8qVkRVV1YuUyRuY1ErLioKJS9AT0BJ
SF4hIm5IaEQoblksPTxLR0xCWHMqSyRFJ1NvT05vOjJBYWdociM7ak0jQTwvbURXWyxDWWUzYD9z
XGBIO2lKcShLazJXT1txS1AkVWEnblo9blxJNDNoayhsSEglNEIKJU0yb0EtbzIqPy5NdSxKJTdz
R2g/N2dibDZCJW4vbCJfRTAoU25oOCNeU1diL2FuJFpGTldORyMvO1RRKGhiRlk5YTUxSFEwIzwh
JjVgNFgkKUEuVFBQKjxjYHFBQnFcXS1YQ2YKJVgsbjVcYHRSTjcrVj1vWW9vPSpqW2U3ImVLaGtV
MigzZ2ZQTSdGOmFDc29Pb19pKEs0UkcyJU8uZ1gmPl5EcE9cYFc9ZC0zSjwiLzRGYnI8UWdVTm04
PmZbSytvKikvWS8hQDMKJWtoMVwxaE8pdGRxVCVqcGArRCgtNG5eImYydHAoVSZeb0dVWXNJaG43
RmlAXUNUOElZTzNqRlE/KGg8KD1DKUsmOV0tai46MFojMjNyTkpDNGlbbmtPKVxvSioiVSQ6KT1X
SkgKJSFpN0pcVHJuJyE6SHBlOGFUTiVJOksqUypfJG9QXGcrTHBJbHFMIVAzQWktbXJwTWtLOmo1
JEUhQm5iK2E9UWxmcStwcjUwblpWME1oTFJILChtYGNwIWtePVcmYFBxVmhYPGEKJWFlbiJTK1xc
WzIhby1FKDA4JVVvXmozOEYrRGRna2leSVNSPEE7dTBvSDVdLiVXUVVQUGVBdHQjcTFFcjwuamtV
ZnMsPWI5dGw+J0lOayhMM29SdU4+aXBVLGgkRU07Jz5NWmIKJV1YSEUvT2VqMy0pPz5mUU5qNXQi
I2EicixgMGRmIkNzXU4mSlpwZHBMN1cpMUcuPDtzTTBeYTg/ckVdaEteRDFGU0EoSC4zSGtRJTYx
OElgXzFbVmcjNUosLS5aWWBcbzNDOFYKJU4zQ1MkMGlyRW5ZZyxfQF5KaGkiXjcyRnFdVl5rdV0z
OjMuOT86Ly8rKUI3by1EPCg+IVxCLmJsSl8hdGpmOmlKRlVZYl09JS9xOG1gaUIwN0p1UmlhNnBs
cy5pYTs/TkcsK0EKJTFGYUkzYjAjZTxxRkdiP1Fac0ROWSZIXko/QHVrIjttRWZyX0ZRJTheSXJh
R2E0WSpcNitNazIiaDZuPVBTKUltYFI/YSFrbVhyUD0/Yi9HQ2k1Xj1VNEtabF9NVEguKlcsZFEK
JUw0QlloXXFoYDk7dFJzQFNTPSlAV0RUXWw6bENrMD5NclRFcjY/XUFkNDZvQ2UhMFg6KUMsLHAh
Vmk+I2duIiIxLD5cL1drIWpSckJdYGwpUU05YSsyL1JwPz1iLjxcJ0EvZ2EKJUpoImZjUzosS29o
dEByWV9rUkQzS0RMN0gkYmtmUDNHZVMtOFo4WUA6Xz5rbF5FaUhhK0xIQywpWitpR2xIUTBPZjto
bUpZSUlzP0htaDgnJVduT3BIIW1qY0grZClzUiNkTFgKJUNDVVRcRkIyQEgkK2ohaSI5cGlySTdd
QTk/MEdAaSFzOikobEtkbCFCT09rWTAjXms5UGg9ZldmM2cvLWc/XFZkWU4pNTA6NVlpVz8xMWVH
I0txdHFFYz1sSWxaSkFEKk9MTWcKJTc4IUZyT1hPKlFTUURhPyZnXkpNPz0zWTxYQUpCYyswODsq
Zz8zKWY6OWQsa141Z28wMWZRTmZAMU9SMCIjVmljNFtkTmYjSnFOTT5PT0VHbDUlb2hWYjUwaFsr
TSxmYjUlM2gKJVM1O1NeLz81a3NBLEc4Ti5RJiI3VFdTNE8zaiEnSDJjKSNjci9hODRNOW9NNVxA
cW42Km1PaFRIanF1Wi5tYWk+PFY9TFE9N0tDWWY/azovKixkKUQ5MHNGSixtVmldWVtgZkYKJS9H
KjxHPlg3Ti9cMEB0am1AR051YEdfUUE7JGAuM0g0N2VQJlJzc21faltfL0Y8ZURGZGsuXy9KUC9L
UlA3T0NyPkJgJF4hWlktVl02Lz5IL2JYOSVLc2szMDJyMm4hMSJwTnUKJW1WVE9oQzwyWG5EVmEs
aiRZSjkyZ01ZaidGQSNgbC0vM1YvbVxYMW8yNUpgRmxnQ1dTKUMlYCtYaDEnP1RmcyZcRDc9P1Uy
Vml0MTB1XlMqRztuPVpEYyhEYUFcNUc5KEA4JnIKJWZRUVRLVkhbRlpiJitXVURScCNNSUM2YyZl
MlEyRTdYTTtIaDdDNV8oKVZzTWdUQSlXRFhlX2RZSSosc11CTCp0VzVaMSNQIy85dXIhP2I0JjJn
bC4vP09UKC4iby9jPk5VSGoKJUE6bE5haiFucVQ+YzwjK1xMTDNhOCdnazdpXFMkPS80PFArNTZl
aktQbXQxZD4vIklHNFo6U14tbEw3WjA7biNhaEAkVUUtVDxFXFM4bGVyLS5BaTBVMz9zNSxERVxo
ZGdWU24KJT8mXHEobjkzcmlJblwoclwzUlUiNVA7VHNFdUpVS0B0ISZQcHA6OSoiRlpwS0lPcyFz
QW5uUl1iSyY1PXFLaXMuN0dcbzZHJ3FxcVxXbFY4UWE2IVZWVzpoIUNGQHBrTEVLYyoKJSlDJlVJ
IypuTyciX005S2QzOGZKTzhON19USG9JTS8rSmRDUz9gZF9CbFxsVFgrZWFba1BXJjhiU1xRMiJM
bTczUzlkJ2tacWRUXmpRWTsmZGtNUj0yRDApMzpfRFJHJyk7PEIKJUhiLjBMI1QyOCYuakBnJyNW
PCYnOmpySlNkZiNnZFdURmslIWFjKWQ9PktpbGFwVDFKaCVxcnAtWyZgU25DV3M0LmQuKSxyPilv
L0ZvayVNMGs4R1ZjbVMwVzIyIXUwajRaITgKJUJaS3RWUitqakxfXyopTTghNVBncE0mZiFmKidn
KkZ1WHMwW1plK2BCb1haLmwwMi1kYkYlbEhEYUlQPk9kJDVFOmhnaFtEOUYtRzJCNExqTkZBUF4j
cERRRFJARSlEIWVkVGYKJUxEYFVya08lUGE/U2tXYU4zWEwjMEQuQ0JlcmphKzdXPjFiZmNPVUde
b1k6QDlrTyJTX1ZpMW1YaUJzbnJOQDE8KF1Ga3VzKjlGSTVKbGpvKzhrVkNAQWtvVUhpSkBCcnNI
TFoKJUEnW1NdcFtpaj9UPEVcS2ddLjgyXmU7cF1pQjlqNk11NylSbT1CVlpzMy0hM0c3V0k8XjlZ
TlVbTjsqR1ZmRF9dXz5mdShkMl1zZmFZaWtmUyZYaCgibDRgWjFbcSg+NTU1NW8KJWk0bFk9YnUi
XTVyXCsmLiI5OC9xa1B0OktKJWdFT0lpaHRqXkcxKzQ3NzI9VGNARjc4Y1BtP1JfZFQyI2hzdG0y
N3NMT2YwLFAyWEZaWSFQMD1eJDUzIWhlTWBvcUp1aCNbIS8KJV9GUiNzX2JSQmcnXUFFZExSayI6
aT1TMzdHaWVRTW4/XlRPK1E7dGk2QGFSKyNDNjkkMVskLTRyN2F0Sl8hZDovblRQdVlFZEVnbEEw
XT5xQG0qJ3U2SDElRCVDJ1JIZl9IVHUKJVlXU0Y+P1xlXEghSVAlTV5CQmwvVl5MZzg6QzZpZmkv
LUVCbj46LllsaiRWLVJzY0ghK1UkWChCJklvQlYiJmE6Qj5FbUpndUlwVSNEVSZaNjlfOys5UnJU
cUlWNkZ1aFw/SlgKJUkzKGVKZiplYldkRUIlOCJgVmBQJSMsJTRUPyQyOSIjYkgzNU4mRk00PGJW
XV0pP007Ni4iakBMYk87XjE9a0stIzZQKllOSVZRL3FaJVImREouVCgwN10vZitFXylvPlI2LEQK
JWNiUSM7NShFJ29jT0luWyU9LUAmcTpuOXFUK0NHPycqdDg5S2AjNWpgc1tFOiwzMSQpSUxncGhj
ZzstcUk6O1c0Si5sbCg7IWVdUXI+bmBHXW8zNU5nJG85OypZNzNtbltpYSMKJSJVaVFoYlJHImAp
S1BdMW0rTWRAIWUqKmYqXkNTY19bWUphcE9eZT5LJExTOmJgISZyLFkzSDBIK2AnQERELUpDXUcr
JUBmRj00bGhaXGBRbyhkbiQ4S2drMG5EPVwwKGRNJj4KJURiRkNMISVHN2daUlpDMGs2JldtcW5x
V0VeRitzR29SWVdrK0JxU2RxS3VaXGQqJzdvIzVnbmtKcm8lKTN1PEZfITtzdFtrLSJmbyJNRSNW
MzU+dDRtWShfUmNnQ0I8a0xXJS4KJS0lTVY8KSQ8KlIua3AxLnFZWFs5ZTo7LGdwVkVpZlZfODBq
IyllMk0tUUkvUUhwZHNzNVFqNy4wayVpRzk9VlpLR1xGK25jYkw2SzY5c1VXSzBfMjcwS19UYyZG
PCZvPTNkdU0KJSZqLjVWM1ptOU0hJW1OMCspaiJJNkVeXzAqazZDMCYucDpPJSJoPnBLMyY2X3JY
Ll1xcGZaNWcnJjNYTWlzJSJPNDonLnBZaz5kSCVpRGtfTldPVWkkYyQ/QG9uKjohNGhgX1gKJV8h
Mj0oXVlyTDJNY28rUlIvbCxpSHBQW1hwaW1vMCQrQ1FaI1NtbjkhODlnMCQlc3I7cClUQk1uPHQv
ckJgYnEwXl5eZ1ZHPGBZUyRvdS1Kb0VnJDRGYmRlZF9VVSxcKisvZ1sKJTdLZSJRZ1ZMI3BmPiVs
OkxFSmVeMlpSNERgIlYoJERMX0haUk5MczktbDBVX3FoU1ZrPzNCMyovLENcK0VGP2YhQkBmT11L
ajlfaD89cFI9NGF0LVZhQUZXOHBgSVtEPSo7OUsKJXMzQ1YzLj5mKUc5ajdMYSUoNjNwInJSMz4k
MyllY3JcWVdbUGxeJURnamhlczowQVk2XzRoImMjPW9vTjAtJnVUJ25rMlNGcHRuPj4oV0l1KGto
MEdvJU4/O0UxX2Y4XzZELUAKJStLN2BCK00pYGo3SHRmYz5DTCFHKSZFX1YiOHIlXylSdHVZSVw1
ZV4tQkgxYiNeQUlGbyo8WTdVLmA8ZmtqL3ElUFheNzMidGlFaj9mUypnVTZ1ISo7ISZbaFMtXHFO
ZilVMEMKJVk1dDEvN048Kj4oMlE7K2M+dWE2SkdBbThjW0g4KiFLQT1GOiU2bU5Wa2dKOEEpZlFH
TmcrWCxYOWloUmBJWkE5OmVwY2xZXUgsISYwMV02RiRYO2JbSltAT1wsInJvRWU+TlUKJVwiLF0p
RGFYOiNCXjo0bG0vSDEiLWojMk8hL2hQPilEIThoaHNvY1dmWyYrSENBWm9La0dVakQ9YjElYTZY
VkRnKXBTSnNHW2tHQVMoK20hcSNdMytaZWFSPjomWVROSW9sTmAKJUleMDE4cjBxT040N0xiQV5Z
Uk4sNU9dJFRhbWpPXkZoVFQhcnNvT3ViOkV+PgolQUk5X1ByaXZhdGVEYXRhRW5kClwgTm8gbmV3
bGluZSBhdCBlbmQgb2YgZmlsZQpkaWZmIC0tZ2l0IGEvZG9jcy94ZW4tYXBpL3hlbmFwaS1jb3Zl
cnNoZWV0LnRleCBiL2RvY3MveGVuLWFwaS94ZW5hcGktY292ZXJzaGVldC50ZXgKZGVsZXRlZCBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDUzNGExMjQuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBp
L3hlbmFwaS1jb3ZlcnNoZWV0LnRleAorKysgL2Rldi9udWxsCkBAIC0xLDM4ICswLDAgQEAKLSUK
LSUgQ29weXJpZ2h0IChjKSAyMDA2LTIwMDcgWGVuU291cmNlLCBJbmMuCi0lCi0lIFBlcm1pc3Np
b24gaXMgZ3JhbnRlZCB0byBjb3B5LCBkaXN0cmlidXRlIGFuZC9vciBtb2RpZnkgdGhpcyBkb2N1
bWVudCB1bmRlcgotJSB0aGUgdGVybXMgb2YgdGhlIEdOVSBGcmVlIERvY3VtZW50YXRpb24gTGlj
ZW5zZSwgVmVyc2lvbiAxLjIgb3IgYW55IGxhdGVyCi0lIHZlcnNpb24gcHVibGlzaGVkIGJ5IHRo
ZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb247IHdpdGggbm8gSW52YXJpYW50Ci0lIFNlY3Rpb25z
LCBubyBGcm9udC1Db3ZlciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0cy4gIEEgY29weSBv
ZiB0aGUKLSUgbGljZW5zZSBpcyBpbmNsdWRlZCBpbiB0aGUgc2VjdGlvbiBlbnRpdGxlZAotJSAi
R05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlIiBvciB0aGUgZmlsZSBmZGwudGV4LgotJQot
JSBBdXRob3JzOiBFd2FuIE1lbGxvciwgUmljaGFyZCBTaGFycCwgRGF2ZSBTY290dCwgSm9uIEhh
cnJvcC4KLSUKLQotJSUgRG9jdW1lbnQgdGl0bGUKLVxuZXdjb21tYW5ke1xkb2N0aXRsZX17WGVu
IE1hbmFnZW1lbnQgQVBJfQotCi1cbmV3Y29tbWFuZHtcY292ZXJzaGVldGxvZ299e3hlbi5lcHN9
Ci0KLSUlIERvY3VtZW50IGRhdGUKLVxuZXdjb21tYW5ke1xkYXRlc3RyaW5nfXsxMHRoIEphbnVh
cnkgMjAxMH0KLQotXG5ld2NvbW1hbmR7XHJlbGVhc2VzdGF0ZW1lbnR9e1N0YWJsZSBSZWxlYXNl
fQotCi0lJSBEb2N1bWVudCByZXZpc2lvbgotXG5ld2NvbW1hbmR7XHJldnN0cmluZ317QVBJIFJl
dmlzaW9uIDEuMC4xMH0KLQotJSUgRG9jdW1lbnQgYXV0aG9ycwotXG5ld2NvbW1hbmR7XGRvY2F1
dGhvcnN9ewotRXdhbiBNZWxsb3I6ICYge1x0dCBld2FuQHhlbnNvdXJjZS5jb219IFxcCi1SaWNo
YXJkIFNoYXJwOiAmIHtcdHQgcmljaGFyZC5zaGFycEB4ZW5zb3VyY2UuY29tfSBcXAotRGF2aWQg
U2NvdHQ6ICYge1x0dCBkYXZpZC5zY290dEB4ZW5zb3VyY2UuY29tfX0KLVxuZXdjb21tYW5ke1xs
ZWdhbG5vdGljZX17Q29weXJpZ2h0IFxjb3B5cmlnaHR7fSAyMDA2LTIwMDcgWGVuU291cmNlLCBJ
bmMuXFwgXFwKLVBlcm1pc3Npb24gaXMgZ3JhbnRlZCB0byBjb3B5LCBkaXN0cmlidXRlIGFuZC9v
ciBtb2RpZnkgdGhpcyBkb2N1bWVudCB1bmRlcgotdGhlIHRlcm1zIG9mIHRoZSBHTlUgRnJlZSBE
b2N1bWVudGF0aW9uIExpY2Vuc2UsIFZlcnNpb24gMS4yIG9yIGFueSBsYXRlcgotdmVyc2lvbiBw
dWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlh
bnQgU2VjdGlvbnMsCi1ubyBGcm9udC1Db3ZlciBUZXh0cyBhbmQgbm8gQmFjay1Db3ZlciBUZXh0
cy4gIEEgY29weSBvZiB0aGUgbGljZW5zZSBpcwotaW5jbHVkZWQgaW4gdGhlIHNlY3Rpb24gZW50
aXRsZWQgIkdOVSBGcmVlIERvY3VtZW50YXRpb24gTGljZW5zZSIuCi19CmRpZmYgLS1naXQgYS9k
b2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QgYi9kb2NzL3hlbi1hcGkveGVu
YXBpLWRhdGFtb2RlbC1ncmFwaC5kb3QKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IGQz
NGVkMzkuLjAwMDAwMDAKLS0tIGEvZG9jcy94ZW4tYXBpL3hlbmFwaS1kYXRhbW9kZWwtZ3JhcGgu
ZG90CisrKyAvZGV2L251bGwKQEAgLTEsNTcgKzAsMCBAQAotIwotIyBDb3B5cmlnaHQgKGMpIDIw
MDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSMKLSMgUGVybWlzc2lvbiBpcyBncmFudGVkIHRvIGNv
cHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50IHVuZGVyCi0jIHRoZSB0
ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlLCBWZXJzaW9uIDEuMiBv
ciBhbnkgbGF0ZXIKLSMgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSMgU2VjdGlvbnMsIG5vIEZyb250LUNvdmVyIFRl
eHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRoZQotIyBsaWNlbnNlIGlz
IGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0jICJHTlUgRnJlZSBEb2N1bWVudGF0
aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0jCi0KLWRpZ3JhcGggIlhlbi1BUEkg
Q2xhc3MgRGlhZ3JhbSIgewotZm9udG5hbWU9IlZlcmRhbmEiOwotCi1ub2RlIFsgc2hhcGU9Ym94
IF07IHNlc3Npb24gVk0gaG9zdCBuZXR3b3JrIFZJRiBQSUYgU1IgVkRJIFZCRCBQQkQgdXNlcjsK
LW5vZGUgWyBzaGFwZT1ib3ggXTsgWFNQb2xpY3kgQUNNUG9saWN5IERQQ0kgUFBDSSBob3N0X2Nw
dSBjb25zb2xlIFZUUE07Ci1ub2RlIFsgc2hhcGU9Ym94IF07IERTQ1NJIFBTQ1NJIERTQ1NJX0hC
QSBQU0NTSV9IQkEgY3B1X3Bvb2w7Ci1ub2RlIFsgc2hhcGU9ZWxsaXBzZSBdOyBWTV9tZXRyaWNz
IFZNX2d1ZXN0X21ldHJpY3MgaG9zdF9tZXRyaWNzOwotbm9kZSBbIHNoYXBlPWVsbGlwc2UgXTsg
UElGX21ldHJpY3MgVklGX21ldHJpY3MgVkJEX21ldHJpY3MgUEJEX21ldHJpY3M7Ci1zZXNzaW9u
IC0+IGhvc3QgWyBhcnJvd2hlYWQ9Im5vbmUiIF0KLXNlc3Npb24gLT4gdXNlciBbIGFycm93aGVh
ZD0ibm9uZSIgXQotVk0gLT4gVk1fbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQotVk0gLT4g
Vk1fZ3Vlc3RfbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQotVk0gLT4gY29uc29sZSBbIGFy
cm93aGVhZD0iY3JvdyIgXQotaG9zdCAtPiBQQkQgWyBhcnJvd2hlYWQ9ImNyb3ciLCBhcnJvd3Rh
aWw9Im5vbmUiIF0KLWhvc3QgLT4gaG9zdF9tZXRyaWNzIFsgYXJyb3doZWFkPSJub25lIiBdCi1o
b3N0IC0+IGhvc3RfY3B1IFsgYXJyb3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1W
SUYgLT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUiLCBhcnJvd3RhaWw9ImNyb3ciIF0KLVZJRiAtPiBu
ZXR3b3JrIFsgYXJyb3doZWFkPSJub25lIiwgYXJyb3d0YWlsPSJjcm93IiBdCi1WSUYgLT4gVklG
X21ldHJpY3MgWyBhcnJvd2hlYWQ9Im5vbmUiIF0KLVBJRiAtPiBob3N0IFsgYXJyb3doZWFkPSJu
b25lIiwgYXJyb3d0YWlsPSJjcm93IiBdCi1QSUYgLT4gbmV0d29yayBbIGFycm93aGVhZD0ibm9u
ZSIsIGFycm93dGFpbD0iY3JvdyIgXQotUElGIC0+IFBJRl9tZXRyaWNzIFsgYXJyb3doZWFkPSJu
b25lIiBdCi1TUiAtPiBQQkQgWyBhcnJvd2hlYWQ9ImNyb3ciLCBhcnJvd3RhaWw9Im5vbmUiIF0K
LVBCRCAtPiBQQkRfbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQotU1IgLT4gVkRJIFsgYXJy
b3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1WREkgLT4gVkJEIFsgYXJyb3doZWFk
PSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1WQkQgLT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUi
LCBhcnJvd3RhaWw9ImNyb3ciIF0KLVZUUE0gLT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUiLCBhcnJv
d3RhaWw9ImNyb3ciIF0KLVZCRCAtPiBWQkRfbWV0cmljcyBbIGFycm93aGVhZD0ibm9uZSIgXQot
WFNQb2xpY3kgLT4gaG9zdCBbIGFycm93aGVhZD0ibm9uZSIgXQotWFNQb2xpY3kgLT4gQUNNUG9s
aWN5IFsgYXJyb3doZWFkPSJub25lIiBdCi1EUENJIC0+IFZNIFsgYXJyb3doZWFkPSJub25lIiwg
YXJyb3d0YWlsPSJjcm93IiBdCi1EUENJIC0+IFBQQ0kgWyBhcnJvd2hlYWQ9Im5vbmUiIF0KLVBQ
Q0kgLT4gaG9zdCBbIGFycm93aGVhZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotRFNDU0kg
LT4gVk0gWyBhcnJvd2hlYWQ9Im5vbmUiLCBhcnJvd3RhaWw9ImNyb3ciIF0KLURTQ1NJX0hCQSAt
PiBWTSBbIGFycm93aGVhZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotRFNDU0kgLT4gRFND
U0lfSEJBIFsgYXJyb3doZWFkPSJub25lIiwgYXJyb3d0YWlsPSJjcm93IiBdCi1EU0NTSSAtPiBQ
U0NTSSBbIGFycm93aGVhZD0ibm9uZSIgXQotRFNDU0lfSEJBIC0+IFBTQ1NJX0hCQSBbIGFycm93
aGVhZD0iY3JvdyIsIGFycm93dGFpbD0ibm9uZSIgXQotUFNDU0kgLT4gaG9zdCBbIGFycm93aGVh
ZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotUFNDU0lfSEJBIC0+IGhvc3QgWyBhcnJvd2hl
YWQ9Im5vbmUiLCBhcnJvd3RhaWw9ImNyb3ciIF0KLVBTQ1NJIC0+IFBTQ1NJX0hCQSBbIGFycm93
aGVhZD0ibm9uZSIsIGFycm93dGFpbD0iY3JvdyIgXQotY3B1X3Bvb2wgLT4gaG9zdF9jcHUgWyBh
cnJvd2hlYWQ9ImNyb3ciLCBhcnJvd3RhaWw9Im5vbmUiIF0KLWNwdV9wb29sIC0+IFZNIFsgYXJy
b3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi1ob3N0IC0+IGNwdV9wb29sIFsgYXJy
b3doZWFkPSJjcm93IiwgYXJyb3d0YWlsPSJub25lIiBdCi19CmRpZmYgLS1naXQgYS9kb2NzL3hl
bi1hcGkveGVuYXBpLWRhdGFtb2RlbC50ZXggYi9kb2NzL3hlbi1hcGkveGVuYXBpLWRhdGFtb2Rl
bC50ZXgKZGVsZXRlZCBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDRmOWNlMjAuLjAwMDAwMDAKLS0t
IGEvZG9jcy94ZW4tYXBpL3hlbmFwaS1kYXRhbW9kZWwudGV4CisrKyAvZGV2L251bGwKQEAgLTEs
MjAyNDUgKzAsMCBAQAotJQotJSBDb3B5cmlnaHQgKGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIElu
Yy4KLSUgQ29weXJpZ2h0IChjKSAyMDA5IGZsb25hdGVsIEdtYkggJiBDby4gS0cKLSUKLSUgUGVy
bWlzc2lvbiBpcyBncmFudGVkIHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlz
IGRvY3VtZW50IHVuZGVyCi0lIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlv
biBMaWNlbnNlLCBWZXJzaW9uIDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQg
YnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2Vj
dGlvbnMsIG5vIEZyb250LUNvdmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBj
b3B5IG9mIHRoZQotJSBsaWNlbnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVk
Ci0lICJHTlUgRnJlZSBEb2N1bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXgu
Ci0lCi0lIEF1dGhvcnM6IEV3YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBK
b24gSGFycm9wLgotJSBDb250cmlidXRvcjogQW5kcmVhcyBGbG9yYXRoCi0lCi0KLVxjaGFwdGVy
e0FQSSBSZWZlcmVuY2V9Ci1cbGFiZWx7YXBpLXJlZmVyZW5jZX0KLQotCi1cc2VjdGlvbntDbGFz
c2VzfQotVGhlIGZvbGxvd2luZyBjbGFzc2VzIGFyZSBkZWZpbmVkOgotCi1cYmVnaW57Y2VudGVy
fVxiZWdpbnt0YWJ1bGFyfXt8bHB7MTBjbX18fQotXGhsaW5lCi1OYW1lICYgRGVzY3JpcHRpb24g
XFwKLVxobGluZQote1x0dCBzZXNzaW9ufSAmIEEgc2Vzc2lvbiBcXAote1x0dCB0YXNrfSAmIEEg
bG9uZy1ydW5uaW5nIGFzeW5jaHJvbm91cyB0YXNrIFxcCi17XHR0IGV2ZW50fSAmIEFzeW5jaHJv
bm91cyBldmVudCByZWdpc3RyYXRpb24gYW5kIGhhbmRsaW5nIFxcCi17XHR0IFZNfSAmIEEgdmly
dHVhbCBtYWNoaW5lIChvciAnZ3Vlc3QnKSBcXAote1x0dCBWTVxfbWV0cmljc30gJiBUaGUgbWV0
cmljcyBhc3NvY2lhdGVkIHdpdGggYSBWTSBcXAote1x0dCBWTVxfZ3Vlc3RcX21ldHJpY3N9ICYg
VGhlIG1ldHJpY3MgcmVwb3J0ZWQgYnkgdGhlIGd1ZXN0IChhcyBvcHBvc2VkIHRvIGluZmVycmVk
IGZyb20gb3V0c2lkZSkgXFwKLXtcdHQgaG9zdH0gJiBBIHBoeXNpY2FsIGhvc3QgXFwKLXtcdHQg
aG9zdFxfbWV0cmljc30gJiBUaGUgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggYSBob3N0IFxcCi17
XHR0IGhvc3RcX2NwdX0gJiBBIHBoeXNpY2FsIENQVSBcXAote1x0dCBuZXR3b3JrfSAmIEEgdmly
dHVhbCBuZXR3b3JrIFxcCi17XHR0IFZJRn0gJiBBIHZpcnR1YWwgbmV0d29yayBpbnRlcmZhY2Ug
XFwKLXtcdHQgVklGXF9tZXRyaWNzfSAmIFRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCBhIHZp
cnR1YWwgbmV0d29yayBkZXZpY2UgXFwKLXtcdHQgUElGfSAmIEEgcGh5c2ljYWwgbmV0d29yayBp
bnRlcmZhY2UgKG5vdGUgc2VwYXJhdGUgVkxBTnMgYXJlIHJlcHJlc2VudGVkIGFzIHNldmVyYWwg
UElGcykgXFwKLXtcdHQgUElGXF9tZXRyaWNzfSAmIFRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0
aCBhIHBoeXNpY2FsIG5ldHdvcmsgaW50ZXJmYWNlIFxcCi17XHR0IFNSfSAmIEEgc3RvcmFnZSBy
ZXBvc2l0b3J5IFxcCi17XHR0IFZESX0gJiBBIHZpcnR1YWwgZGlzayBpbWFnZSBcXAote1x0dCBW
QkR9ICYgQSB2aXJ0dWFsIGJsb2NrIGRldmljZSBcXAote1x0dCBWQkRcX21ldHJpY3N9ICYgVGhl
IG1ldHJpY3MgYXNzb2NpYXRlZCB3aXRoIGEgdmlydHVhbCBibG9jayBkZXZpY2UgXFwKLXtcdHQg
UEJEfSAmIFRoZSBwaHlzaWNhbCBibG9jayBkZXZpY2VzIHRocm91Z2ggd2hpY2ggaG9zdHMgYWNj
ZXNzIFNScyBcXAote1x0dCBjcmFzaGR1bXB9ICYgQSBWTSBjcmFzaGR1bXAgXFwKLXtcdHQgVlRQ
TX0gJiBBIHZpcnR1YWwgVFBNIGRldmljZSBcXAote1x0dCBjb25zb2xlfSAmIEEgY29uc29sZSBc
XAote1x0dCBEUENJfSAmIEEgcGFzcy10aHJvdWdoIFBDSSBkZXZpY2UgXFwKLXtcdHQgUFBDSX0g
JiBBIHBoeXNpY2FsIFBDSSBkZXZpY2UgXFwKLXtcdHQgRFNDU0l9ICYgQSBoYWxmLXZpcnR1YWxp
emVkIFNDU0kgZGV2aWNlIFxcCi17XHR0IERTQ1NJXF9IQkF9ICYgQSBoYWxmLXZpcnR1YWxpemVk
IFNDU0kgaG9zdCBidXMgYWRhcHRlciBcXAote1x0dCBQU0NTSX0gJiBBIHBoeXNpY2FsIFNDU0kg
ZGV2aWNlIFxcCi17XHR0IFBTQ1NJXF9IQkF9ICYgQSBwaHlzaWNhbCBTQ1NJIGhvc3QgYnVzIGFk
YXB0ZXIgXFwKLXtcdHQgdXNlcn0gJiBBIHVzZXIgb2YgdGhlIHN5c3RlbSBcXAote1x0dCBkZWJ1
Z30gJiBBIGJhc2ljIGNsYXNzIGZvciB0ZXN0aW5nIFxcCi17XHR0IFhTUG9saWN5fSAmIEEgY2xh
c3MgZm9yIGhhbmRsaW5nIFhlbiBTZWN1cml0eSBQb2xpY2llcyBcXAote1x0dCBBQ01Qb2xpY3l9
ICYgQSBjbGFzcyBmb3IgaGFuZGxpbmcgQUNNLXR5cGUgcG9saWNpZXMgXFwKLXtcdHQgY3B1XF9w
b29sfSAmIEEgY29udGFpbmVyIGZvciBWTXMgd2hpY2ggc2hvdWxkIHNoYXJlZCB0aGUgc2FtZSBo
b3N0XF9jcHUocykgXFwKLVxobGluZQotXGVuZHt0YWJ1bGFyfVxlbmR7Y2VudGVyfQotXHNlY3Rp
b257UmVsYXRpb25zaGlwcyBCZXR3ZWVuIENsYXNzZXN9Ci1GaWVsZHMgdGhhdCBhcmUgYm91bmQg
dG9nZXRoZXIgYXJlIHNob3duIGluIHRoZSBmb2xsb3dpbmcgdGFibGU6IAotXGJlZ2lue2NlbnRl
cn1cYmVnaW57dGFidWxhcn17fGxsfGx8fQotXGhsaW5lCi17XGVtIG9iamVjdC5maWVsZH0gJiB7
XGVtIG9iamVjdC5maWVsZH0gJiB7XGVtIHJlbGF0aW9uc2hpcH0gXFwKLQotXGhsaW5lCi1ob3N0
LlBCRHMgJiBQQkQuaG9zdCAmIG1hbnktdG8tb25lXFwKLVNSLlBCRHMgJiBQQkQuU1IgJiBtYW55
LXRvLW9uZVxcCi1WREkuVkJEcyAmIFZCRC5WREkgJiBtYW55LXRvLW9uZVxcCi1WREkuY3Jhc2hc
X2R1bXBzICYgY3Jhc2hkdW1wLlZESSAmIG1hbnktdG8tb25lXFwKLVZCRC5WTSAmIFZNLlZCRHMg
JiBvbmUtdG8tbWFueVxcCi1jcmFzaGR1bXAuVk0gJiBWTS5jcmFzaFxfZHVtcHMgJiBvbmUtdG8t
bWFueVxcCi1WSUYuVk0gJiBWTS5WSUZzICYgb25lLXRvLW1hbnlcXAotVklGLm5ldHdvcmsgJiBu
ZXR3b3JrLlZJRnMgJiBvbmUtdG8tbWFueVxcCi1QSUYuaG9zdCAmIGhvc3QuUElGcyAmIG9uZS10
by1tYW55XFwKLVBJRi5uZXR3b3JrICYgbmV0d29yay5QSUZzICYgb25lLXRvLW1hbnlcXAotU1Iu
VkRJcyAmIFZESS5TUiAmIG1hbnktdG8tb25lXFwKLVZUUE0uVk0gJiBWTS5WVFBNcyAmIG9uZS10
by1tYW55XFwKLWNvbnNvbGUuVk0gJiBWTS5jb25zb2xlcyAmIG9uZS10by1tYW55XFwKLURQQ0ku
Vk0gJiBWTS5EUENJcyAmIG9uZS10by1tYW55XFwKLVBQQ0kuaG9zdCAmIGhvc3QuUFBDSXMgJiBv
bmUtdG8tbWFueVxcCi1EU0NTSS5WTSAmIFZNLkRTQ1NJcyAmIG9uZS10by1tYW55XFwKLURTQ1NJ
LkhCQSAmIERTQ1NJXF9IQkEuRFNDU0lzICYgb25lLXRvLW1hbnlcXAotRFNDU0lcX0hCQS5WTSAm
IFZNLkRTQ1NJXF9IQkFzICYgb25lLXRvLW1hbnlcXAotUFNDU0kuaG9zdCAmIGhvc3QuUFNDU0lz
ICYgb25lLXRvLW1hbnlcXAotUFNDU0kuSEJBICYgUFNDU0lcX0hCQS5QU0NTSXMgJiBvbmUtdG8t
bWFueVxcCi1QU0NTSVxfSEJBLmhvc3QgJiBob3N0LlBTQ1NJXF9IQkFzICYgb25lLXRvLW1hbnlc
XAotaG9zdC5yZXNpZGVudFxfVk1zICYgVk0ucmVzaWRlbnRcX29uICYgbWFueS10by1vbmVcXAot
aG9zdC5ob3N0XF9DUFVzICYgaG9zdFxfY3B1Lmhvc3QgJiBtYW55LXRvLW9uZVxcCi1ob3N0LnJl
c2lkZW50XF9jcHVcX3Bvb2xzICYgY3B1XF9wb29sLnJlc2lkZW50XF9vbiAmIG1hbnktdG8tb25l
XFwKLWNwdVxfcG9vbC5zdGFydGVkXF9WTXMgJiBWTS5jcHVcX3Bvb2wgJiBtYW55LXRvLW9uZVxc
Ci1jcHVcX3Bvb2wuaG9zdFxfQ1BVcyAmIGhvc3RcX2NwdS5jcHVcX3Bvb2wgJiBtYW55LXRvLW9u
ZVxcCi1caGxpbmUKLVxlbmR7dGFidWxhcn1cZW5ke2NlbnRlcn0KLQotVGhlIGZvbGxvd2luZyBy
ZXByZXNlbnRzIGJvdW5kIGZpZWxkcyAoYXMgc3BlY2lmaWVkIGFib3ZlKSBkaWFncmFtbWF0aWNh
bGx5LCB1c2luZyBjcm93cy1mb290IG5vdGF0aW9uIHRvIHNwZWNpZnkgb25lLXRvLW9uZSwgb25l
LXRvLW1hbnkgb3IgbWFueS10by1tYW55Ci0gICAgICAgICAgICAgICAgICAgcmVsYXRpb25zaGlw
czoKLQotXGJlZ2lue2NlbnRlcn1ccmVzaXplYm94ezAuOFx0ZXh0d2lkdGh9eyF9ewotXGluY2x1
ZGVncmFwaGljc3t4ZW5hcGktZGF0YW1vZGVsLWdyYXBofQotfVxlbmR7Y2VudGVyfQotXAotXHN1
YnNlY3Rpb257TGlzdCBvZiBib3VuZCBmaWVsZHN9Ci1cc2VjdGlvbntUeXBlc30KLVxzdWJzZWN0
aW9ue1ByaW1pdGl2ZXN9Ci1UaGUgZm9sbG93aW5nIHByaW1pdGl2ZSB0eXBlcyBhcmUgdXNlZCB0
byBzcGVjaWZ5IG1ldGhvZHMgYW5kIGZpZWxkcyBpbiB0aGUgQVBJIFJlZmVyZW5jZToKLQotXGJl
Z2lue2NlbnRlcn1cYmVnaW57dGFidWxhcn17fGxsfH0KLVxobGluZQotVHlwZSAmIERlc2NyaXB0
aW9uIFxcCi1caGxpbmUKLVN0cmluZyAmIHRleHQgc3RyaW5ncyBcXAotSW50ICAgICYgNjQtYml0
IGludGVnZXJzIFxcCi1GbG9hdCAmIElFRUUgZG91YmxlLXByZWNpc2lvbiBmbG9hdGluZy1wb2lu
dCBudW1iZXJzIFxcCi1Cb29sICAgJiBib29sZWFuIFxcCi1EYXRlVGltZSAmIGRhdGUgYW5kIHRp
bWVzdGFtcCBcXAotUmVmIChvYmplY3QgbmFtZSkgJiByZWZlcmVuY2UgdG8gYW4gb2JqZWN0IG9m
IGNsYXNzIG5hbWUgXFwKLVxobGluZQotXGVuZHt0YWJ1bGFyfVxlbmR7Y2VudGVyfQotXHN1YnNl
Y3Rpb257SGlnaGVyIG9yZGVyIHR5cGVzfQotVGhlIGZvbGxvd2luZyB0eXBlIGNvbnN0cnVjdG9y
cyBhcmUgdXNlZDoKLQotXGJlZ2lue2NlbnRlcn1cYmVnaW57dGFidWxhcn17fGxsfH0KLVxobGlu
ZQotVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLUxpc3QgKHQpICYgYW4gYXJiaXRyYXJ5
LWxlbmd0aCBsaXN0IG9mIGVsZW1lbnRzIG9mIHR5cGUgdCBcXAotTWFwIChhICRccmlnaHRhcnJv
dyQgYikgJiBhIHRhYmxlIG1hcHBpbmcgdmFsdWVzIG9mIHR5cGUgYSB0byB2YWx1ZXMgb2YgdHlw
ZSBiIFxcCi1caGxpbmUKLVxlbmR7dGFidWxhcn1cZW5ke2NlbnRlcn0KLVxzdWJzZWN0aW9ue0Vu
dW1lcmF0aW9uIHR5cGVzfQotVGhlIGZvbGxvd2luZyBlbnVtZXJhdGlvbiB0eXBlcyBhcmUgdXNl
ZDoKLQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0KLVxobGluZQote1x0dCBlbnVtIGV2ZW50XF9v
cGVyYXRpb259ICYgXFwKLVxobGluZQotXGhzcGFjZXswLjVjbX17XHR0IGFkZH0gJiBBbiBvYmpl
Y3QgaGFzIGJlZW4gY3JlYXRlZCBcXAotXGhzcGFjZXswLjVjbX17XHR0IGRlbH0gJiBBbiBvYmpl
Y3QgaGFzIGJlZW4gZGVsZXRlZCBcXAotXGhzcGFjZXswLjVjbX17XHR0IG1vZH0gJiBBbiBvYmpl
Y3QgaGFzIGJlZW4gbW9kaWZpZWQgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci0KLVx2c3Bh
Y2V7MWNtfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0KLVxobGluZQote1x0dCBlbnVtIGNvbnNv
bGVcX3Byb3RvY29sfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCB2dDEwMH0gJiBW
VDEwMCB0ZXJtaW5hbCBcXAotXGhzcGFjZXswLjVjbX17XHR0IHJmYn0gJiBSZW1vdGUgRnJhbWVC
dWZmZXIgcHJvdG9jb2wgKGFzIHVzZWQgaW4gVk5DKSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHJk
cH0gJiBSZW1vdGUgRGVza3RvcCBQcm90b2NvbCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0K
LQotXHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhsaW5lCi17XHR0IGVu
dW0gdmRpXF90eXBlfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBzeXN0ZW19ICYg
YSBkaXNrIHRoYXQgbWF5IGJlIHJlcGxhY2VkIG9uIHVwZ3JhZGUgXFwKLVxoc3BhY2V7MC41Y219
e1x0dCB1c2VyfSAmIGEgZGlzayB0aGF0IGlzIGFsd2F5cyBwcmVzZXJ2ZWQgb24gdXBncmFkZSBc
XAotXGhzcGFjZXswLjVjbX17XHR0IGVwaGVtZXJhbH0gJiBhIGRpc2sgdGhhdCBtYXkgYmUgcmVm
b3JtYXR0ZWQgb24gdXBncmFkZSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHN1c3BlbmR9ICYgYSBk
aXNrIHRoYXQgc3RvcmVzIGEgc3VzcGVuZCBpbWFnZSBcXAotXGhzcGFjZXswLjVjbX17XHR0IGNy
YXNoZHVtcH0gJiBhIGRpc2sgdGhhdCBzdG9yZXMgVk0gY3Jhc2hkdW1wIGluZm9ybWF0aW9uIFxc
Ci1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotCi1cdnNwYWNlezFjbX0KLVxiZWdpbntsb25ndGFi
bGV9e3xsbHx9Ci1caGxpbmUKLXtcdHQgZW51bSB2bVxfcG93ZXJcX3N0YXRlfSAmIFxcCi1caGxp
bmUKLVxoc3BhY2V7MC41Y219e1x0dCBIYWx0ZWR9ICYgSGFsdGVkIFxcCi1caHNwYWNlezAuNWNt
fXtcdHQgUGF1c2VkfSAmIFBhdXNlZCBcXAotXGhzcGFjZXswLjVjbX17XHR0IFJ1bm5pbmd9ICYg
UnVubmluZyBcXAotXGhzcGFjZXswLjVjbX17XHR0IFN1c3BlbmRlZH0gJiBTdXNwZW5kZWQgXFwK
LVxoc3BhY2V7MC41Y219e1x0dCBDcmFzaGVkfSAmIENyYXNoZWQgXFwKLVxoc3BhY2V7MC41Y219
e1x0dCBVbmtub3dufSAmIFNvbWUgb3RoZXIgdW5rbm93biBzdGF0ZSBcXAotXGhsaW5lCi1cZW5k
e2xvbmd0YWJsZX0KLQotXHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhs
aW5lCi17XHR0IGVudW0gdGFza1xfYWxsb3dlZFxfb3BlcmF0aW9uc30gJiBcXAotXGhsaW5lCi1c
aHNwYWNlezAuNWNtfXtcdHQgQ2FuY2VsfSAmIENhbmNlbCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0
YWJsZX0KLQotXHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhsaW5lCi17
XHR0IGVudW0gdGFza1xfc3RhdHVzXF90eXBlfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219
e1x0dCBwZW5kaW5nfSAmIHRhc2sgaXMgaW4gcHJvZ3Jlc3MgXFwKLVxoc3BhY2V7MC41Y219e1x0
dCBzdWNjZXNzfSAmIHRhc2sgd2FzIGNvbXBsZXRlZCBzdWNjZXNzZnVsbHkgXFwKLVxoc3BhY2V7
MC41Y219e1x0dCBmYWlsdXJlfSAmIHRhc2sgaGFzIGZhaWxlZCBcXAotXGhzcGFjZXswLjVjbX17
XHR0IGNhbmNlbGxpbmd9ICYgdGFzayBpcyBiZWluZyBjYW5jZWxsZWQgXFwKLVxoc3BhY2V7MC41
Y219e1x0dCBjYW5jZWxsZWR9ICYgdGFzayBoYXMgYmVlbiBjYW5jZWxsZWQgXFwKLVxobGluZQot
XGVuZHtsb25ndGFibGV9Ci0KLVx2c3BhY2V7MWNtfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0K
LVxobGluZQote1x0dCBlbnVtIG9uXF9ub3JtYWxcX2V4aXR9ICYgXFwKLVxobGluZQotXGhzcGFj
ZXswLjVjbX17XHR0IGRlc3Ryb3l9ICYgZGVzdHJveSB0aGUgVk0gc3RhdGUgXFwKLVxoc3BhY2V7
MC41Y219e1x0dCByZXN0YXJ0fSAmIHJlc3RhcnQgdGhlIFZNIFxcCi1caGxpbmUKLVxlbmR7bG9u
Z3RhYmxlfQotCi1cdnNwYWNlezFjbX0KLVxiZWdpbntsb25ndGFibGV9e3xsbHx9Ci1caGxpbmUK
LXtcdHQgZW51bSBvblxfY3Jhc2hcX2JlaGF2aW91cn0gJiBcXAotXGhsaW5lCi1caHNwYWNlezAu
NWNtfXtcdHQgZGVzdHJveX0gJiBkZXN0cm95IHRoZSBWTSBzdGF0ZSBcXAotXGhzcGFjZXswLjVj
bX17XHR0IGNvcmVkdW1wXF9hbmRcX2Rlc3Ryb3l9ICYgcmVjb3JkIGEgY29yZWR1bXAgYW5kIHRo
ZW4gZGVzdHJveSB0aGUgVk0gc3RhdGUgXFwKLVxoc3BhY2V7MC41Y219e1x0dCByZXN0YXJ0fSAm
IHJlc3RhcnQgdGhlIFZNIFxcCi1caHNwYWNlezAuNWNtfXtcdHQgY29yZWR1bXBcX2FuZFxfcmVz
dGFydH0gJiByZWNvcmQgYSBjb3JlZHVtcCBhbmQgdGhlbiByZXN0YXJ0IHRoZSBWTSBcXAotXGhz
cGFjZXswLjVjbX17XHR0IHByZXNlcnZlfSAmIGxlYXZlIHRoZSBjcmFzaGVkIFZNIGFzLWlzIFxc
Ci1caHNwYWNlezAuNWNtfXtcdHQgcmVuYW1lXF9yZXN0YXJ0fSAmIHJlbmFtZSB0aGUgY3Jhc2hl
ZCBWTSBhbmQgc3RhcnQgYSBuZXcgY29weSBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLQot
XHZzcGFjZXsxY219Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGx8fQotXGhsaW5lCi17XHR0IGVudW0g
dmJkXF9tb2RlfSAmIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBST30gJiBkaXNrIGlz
IG1vdW50ZWQgcmVhZC1vbmx5IFxcCi1caHNwYWNlezAuNWNtfXtcdHQgUld9ICYgZGlzayBpcyBt
b3VudGVkIHJlYWQtd3JpdGUgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci0KLVx2c3BhY2V7
MWNtfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsfH0KLVxobGluZQote1x0dCBlbnVtIHZiZFxfdHlw
ZX0gJiBcXAotXGhsaW5lCi1caHNwYWNlezAuNWNtfXtcdHQgQ0R9ICYgVkJEIHdpbGwgYXBwZWFy
IHRvIGd1ZXN0IGFzIENEIFxcCi1caHNwYWNlezAuNWNtfXtcdHQgRGlza30gJiBWQkQgd2lsbCBh
cHBlYXIgdG8gZ3Vlc3QgYXMgZGlzayBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLQotXHZz
cGFjZXsxY219Ci1cbmV3cGFnZQotCi1cc2VjdGlvbntFcnJvciBIYW5kbGluZ30KLVdoZW4gYSBs
b3ctbGV2ZWwgdHJhbnNwb3J0IGVycm9yIG9jY3Vycywgb3IgYSByZXF1ZXN0IGlzIG1hbGZvcm1l
ZCBhdCB0aGUgSFRUUAotb3IgWE1MLVJQQyBsZXZlbCwgdGhlIHNlcnZlciBtYXkgc2VuZCBhbiBY
TUwtUlBDIEZhdWx0IHJlc3BvbnNlLCBvciB0aGUgY2xpZW50Ci1tYXkgc2ltdWxhdGUgdGhlIHNh
bWUuICBUaGUgY2xpZW50IG11c3QgYmUgcHJlcGFyZWQgdG8gaGFuZGxlIHRoZXNlIGVycm9ycywK
LXRob3VnaCB0aGV5IG1heSBiZSB0cmVhdGVkIGFzIGZhdGFsLiAgT24gdGhlIHdpcmUsIHRoZXNl
IGFyZSB0cmFuc21pdHRlZCBpbiBhCi1mb3JtIHNpbWlsYXIgdG8gdGhpczoKLQotXGJlZ2lue3Zl
cmJhdGltfQotICAgIDxtZXRob2RSZXNwb25zZT4KLSAgICAgIDxmYXVsdD4KLSAgICAgICAgPHZh
bHVlPgotICAgICAgICAgIDxzdHJ1Y3Q+Ci0gICAgICAgICAgICA8bWVtYmVyPgotICAgICAgICAg
ICAgICAgIDxuYW1lPmZhdWx0Q29kZTwvbmFtZT4KLSAgICAgICAgICAgICAgICA8dmFsdWU+PGlu
dD4tMTwvaW50PjwvdmFsdWU+Ci0gICAgICAgICAgICAgIDwvbWVtYmVyPgotICAgICAgICAgICAg
ICA8bWVtYmVyPgotICAgICAgICAgICAgICAgIDxuYW1lPmZhdWx0U3RyaW5nPC9uYW1lPgotICAg
ICAgICAgICAgICAgIDx2YWx1ZT48c3RyaW5nPk1hbGZvcm1lZCByZXF1ZXN0PC9zdHJpbmc+PC92
YWx1ZT4KLSAgICAgICAgICAgIDwvbWVtYmVyPgotICAgICAgICAgIDwvc3RydWN0PgotICAgICAg
ICA8L3ZhbHVlPgotICAgICAgPC9mYXVsdD4KLSAgICA8L21ldGhvZFJlc3BvbnNlPgotXGVuZHt2
ZXJiYXRpbX0KLQotQWxsIG90aGVyIGZhaWx1cmVzIGFyZSByZXBvcnRlZCB3aXRoIGEgbW9yZSBz
dHJ1Y3R1cmVkIGVycm9yIHJlc3BvbnNlLCB0bwotYWxsb3cgYmV0dGVyIGF1dG9tYXRpYyByZXNw
b25zZSB0byBmYWlsdXJlcywgcHJvcGVyIGludGVybmF0aW9uYWxpc2F0aW9uIG9mCi1hbnkgZXJy
b3IgbWVzc2FnZSwgYW5kIGVhc2llciBkZWJ1Z2dpbmcuICBPbiB0aGUgd2lyZSwgdGhlc2UgYXJl
IHRyYW5zbWl0dGVkCi1saWtlIHRoaXM6Ci0KLVxiZWdpbnt2ZXJiYXRpbX0KLSAgICA8c3RydWN0
PgotICAgICAgPG1lbWJlcj4KLSAgICAgICAgPG5hbWU+U3RhdHVzPC9uYW1lPgotICAgICAgICA8
dmFsdWU+RmFpbHVyZTwvdmFsdWU+Ci0gICAgICA8L21lbWJlcj4KLSAgICAgIDxtZW1iZXI+Ci0g
ICAgICAgIDxuYW1lPkVycm9yRGVzY3JpcHRpb248L25hbWU+Ci0gICAgICAgIDx2YWx1ZT4KLSAg
ICAgICAgICA8YXJyYXk+Ci0gICAgICAgICAgICA8ZGF0YT4KLSAgICAgICAgICAgICAgPHZhbHVl
Pk1BUF9EVVBMSUNBVEVfS0VZPC92YWx1ZT4KLSAgICAgICAgICAgICAgPHZhbHVlPkN1c3RvbWVy
PC92YWx1ZT4KLSAgICAgICAgICAgICAgPHZhbHVlPmVTcGVpbCBJbmMuPC92YWx1ZT4KLSAgICAg
ICAgICAgICAgPHZhbHVlPmVTcGVpbCBJbmNvcnBvcmF0ZWQ8L3ZhbHVlPgotICAgICAgICAgICAg
PC9kYXRhPgotICAgICAgICAgIDwvYXJyYXk+Ci0gICAgICAgIDwvdmFsdWU+Ci0gICAgICA8L21l
bWJlcj4KLSAgICA8L3N0cnVjdD4KLVxlbmR7dmVyYmF0aW19Ci0KLU5vdGUgdGhhdCB7XHR0IEVy
cm9yRGVzY3JpcHRpb259IHZhbHVlIGlzIGFuIGFycmF5IG9mIHN0cmluZyB2YWx1ZXMuIFRoZQot
Zmlyc3QgZWxlbWVudCBvZiB0aGUgYXJyYXkgaXMgYW4gZXJyb3IgY29kZTsgdGhlIHJlbWFpbmRl
ciBvZiB0aGUgYXJyYXkgYXJlCi1zdHJpbmdzIHJlcHJlc2VudGluZyBlcnJvciBwYXJhbWV0ZXJz
IHJlbGF0aW5nIHRvIHRoYXQgY29kZS4gIEluIHRoaXMgY2FzZSwKLXRoZSBjbGllbnQgaGFzIGF0
dGVtcHRlZCB0byBhZGQgdGhlIG1hcHBpbmcge1x0dCBDdXN0b21lciAkXHJpZ2h0YXJyb3ckCi1l
U3BpZWwgSW5jb3Jwb3JhdGVkfSB0byBhIE1hcCwgYnV0IGl0IGFscmVhZHkgY29udGFpbnMgdGhl
IG1hcHBpbmcKLXtcdHQgQ3VzdG9tZXIgJFxyaWdodGFycm93JCBlU3BpZWwgSW5jLn0sIGFuZCBz
byB0aGUgcmVxdWVzdCBoYXMgZmFpbGVkLgotCi1UaGUgcmVmZXJlbmNlIGJlbG93IGxpc3RzIGVh
Y2ggcG9zc2libGUgZXJyb3IgcmV0dXJuZWQgYnkgZWFjaCBtZXRob2QuCi1BcyB3ZWxsIGFzIHRo
ZSBlcnJvcnMgZXhwbGljaXRseSBsaXN0ZWQsIGFueSBtZXRob2QgbWF5IHJldHVybiBsb3ctbGV2
ZWwKLWVycm9ycyBhcyBkZXNjcmliZWQgYWJvdmUsIG9yIGFueSBvZiB0aGUgZm9sbG93aW5nIGdl
bmVyaWMgZXJyb3JzOgotCi1cYmVnaW57aXRlbWl6ZX0KLVxpdGVtIEhBTkRMRVxfSU5WQUxJRAot
XGl0ZW0gSU5URVJOQUxcX0VSUk9SCi1caXRlbSBNQVBcX0RVUExJQ0FURVxfS0VZCi1caXRlbSBN
RVNTQUdFXF9NRVRIT0RcX1VOS05PV04KLVxpdGVtIE1FU1NBR0VcX1BBUkFNRVRFUlxfQ09VTlRc
X01JU01BVENICi1caXRlbSBPUEVSQVRJT05cX05PVFxfQUxMT1dFRAotXGl0ZW0gUEVSTUlTU0lP
TlxfREVOSUVECi1caXRlbSBTRVNTSU9OXF9JTlZBTElECi1cZW5ke2l0ZW1pemV9Ci0KLUVhY2gg
cG9zc2libGUgZXJyb3IgY29kZSBpcyBkb2N1bWVudGVkIGluIHRoZSBmb2xsb3dpbmcgc2VjdGlv
bi4KLQotXHN1YnNlY3Rpb257RXJyb3IgQ29kZXN9Ci0KLVxzdWJzdWJzZWN0aW9ue0hBTkRMRVxf
SU5WQUxJRH0KLQotWW91IGdhdmUgYW4gaW52YWxpZCBoYW5kbGUuICBUaGUgb2JqZWN0IG1heSBo
YXZlIHJlY2VudGx5IGJlZW4gZGVsZXRlZC4gCi1UaGUgY2xhc3MgcGFyYW1ldGVyIGdpdmVzIHRo
ZSB0eXBlIG9mIHJlZmVyZW5jZSBnaXZlbiwgYW5kIHRoZSBoYW5kbGUKLXBhcmFtZXRlciBlY2hv
ZXMgdGhlIGJhZCB2YWx1ZSBnaXZlbi4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJl
On0KLVxiZWdpbnt2ZXJiYXRpbX1IQU5ETEVfSU5WQUxJRChjbGFzcywgaGFuZGxlKVxlbmR7dmVy
YmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1c
c3Vic3Vic2VjdGlvbntJTlRFUk5BTFxfRVJST1J9Ci0KLVRoZSBzZXJ2ZXIgZmFpbGVkIHRvIGhh
bmRsZSB5b3VyIHJlcXVlc3QsIGR1ZSB0byBhbiBpbnRlcm5hbCBlcnJvci4gIFRoZQotZ2l2ZW4g
bWVzc2FnZSBtYXkgZ2l2ZSBkZXRhaWxzIHVzZWZ1bCBmb3IgZGVidWdnaW5nIHRoZSBwcm9ibGVt
LgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfUlO
VEVSTkFMX0VSUk9SKG1lc3NhZ2UpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7
MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue01BUFxfRFVQTElDQVRF
XF9LRVl9Ci0KLVlvdSB0cmllZCB0byBhZGQgYSBrZXktdmFsdWUgcGFpciB0byBhIG1hcCwgYnV0
IHRoYXQga2V5IGlzIGFscmVhZHkgdGhlcmUuIAotVGhlIGtleSwgY3VycmVudCB2YWx1ZSwgYW5k
IHRoZSBuZXcgdmFsdWUgdGhhdCB5b3UgdHJpZWQgdG8gc2V0IGFyZSBhbGwKLWVjaG9lZC4KLQot
XHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1NQVBfRFVQ
TElDQVRFX0tFWShrZXksIGN1cnJlbnQgdmFsdWUsIG5ldyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQot
XGJlZ2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1cZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNl
Y3Rpb257TUVTU0FHRVxfTUVUSE9EXF9VTktOT1dOfQotCi1Zb3UgdHJpZWQgdG8gY2FsbCBhIG1l
dGhvZCB0aGF0IGRvZXMgbm90IGV4aXN0LiAgVGhlIG1ldGhvZCBuYW1lIHRoYXQgeW91Ci11c2Vk
IGlzIGVjaG9lZC4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2
ZXJiYXRpbX1NRVNTQUdFX01FVEhPRF9VTktOT1dOKG1ldGhvZClcZW5ke3ZlcmJhdGltfQotXGJl
Z2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1cZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rp
b257TUVTU0FHRVxfUEFSQU1FVEVSXF9DT1VOVFxfTUlTTUFUQ0h9Ci0KLVlvdSB0cmllZCB0byBj
YWxsIGEgbWV0aG9kIHdpdGggdGhlIGluY29ycmVjdCBudW1iZXIgb2YgcGFyYW1ldGVycy4gIFRo
ZQotZnVsbHktcXVhbGlmaWVkIG1ldGhvZCBuYW1lIHRoYXQgeW91IHVzZWQsIGFuZCB0aGUgbnVt
YmVyIG9mIHJlY2VpdmVkIGFuZAotZXhwZWN0ZWQgcGFyYW1ldGVycyBhcmUgcmV0dXJuZWQuCi0K
LVx2c3BhY2V7MC4zY219Ci17XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19TUVTU0FH
RV9QQVJBTUVURVJfQ09VTlRfTUlTTUFUQ0gobWV0aG9kLCBleHBlY3RlZCwgcmVjZWl2ZWQpXGVu
ZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9
Ci0KLVxzdWJzdWJzZWN0aW9ue05FVFdPUktcX0FMUkVBRFlcX0NPTk5FQ1RFRH0KLQotWW91IHRy
aWVkIHRvIGNyZWF0ZSBhIFBJRiwgYnV0IHRoZSBuZXR3b3JrIHlvdSB0cmllZCB0byBhdHRhY2gg
aXQgdG8gaXMKLWFscmVhZHkgYXR0YWNoZWQgdG8gc29tZSBvdGhlciBQSUYsIGFuZCBzbyB0aGUg
Y3JlYXRpb24gZmFpbGVkLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJl
Z2lue3ZlcmJhdGltfU5FVFdPUktfQUxSRUFEWV9DT05ORUNURUQobmV0d29yaywgY29ubmVjdGVk
IFBJRilcZW5ke3ZlcmJhdGltfQotXGJlZ2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1cZW5k
e2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rpb257T1BFUkFUSU9OXF9OT1RcX0FMTE9XRUR9Ci0KLVlv
dSBhdHRlbXB0ZWQgYW4gb3BlcmF0aW9uIHRoYXQgd2FzIG5vdCBhbGxvd2VkLgotCi1cdnNwYWNl
ezAuM2NtfQotTm8gcGFyYW1ldGVycy4KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9
XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1BFUk1JU1NJT05cX0RFTklFRH0KLQotWW91
IGRvIG5vdCBoYXZlIHRoZSByZXF1aXJlZCBwZXJtaXNzaW9ucyB0byBwZXJmb3JtIHRoZSBvcGVy
YXRpb24uCi0KLVx2c3BhY2V7MC4zY219Ci1ObyBwYXJhbWV0ZXJzLgotXGJlZ2lue2NlbnRlcn1c
cnVsZXsxMGVtfXswLjFwdH1cZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rpb257UElGXF9JU1xf
UEhZU0lDQUx9Ci0KLVlvdSB0cmllZCB0byBkZXN0cm95IGEgUElGLCBidXQgaXQgcmVwcmVzZW50
cyBhbiBhc3BlY3Qgb2YgdGhlIHBoeXNpY2FsCi1ob3N0IGNvbmZpZ3VyYXRpb24sIGFuZCBzbyBj
YW5ub3QgYmUgZGVzdHJveWVkLiAgVGhlIHBhcmFtZXRlciBlY2hvZXMgdGhlCi1QSUYgaGFuZGxl
IHlvdSBnYXZlLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3Zl
cmJhdGltfVBJRl9JU19QSFlTSUNBTChQSUYpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9
XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1NFU1NJT05c
X0FVVEhFTlRJQ0FUSU9OXF9GQUlMRUR9Ci0KLVRoZSBjcmVkZW50aWFscyBnaXZlbiBieSB0aGUg
dXNlciBhcmUgaW5jb3JyZWN0LCBzbyBhY2Nlc3MgaGFzIGJlZW4gZGVuaWVkLAotYW5kIHlvdSBo
YXZlIG5vdCBiZWVuIGlzc3VlZCBhIHNlc3Npb24gaGFuZGxlLgotCi1cdnNwYWNlezAuM2NtfQot
Tm8gcGFyYW1ldGVycy4KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50
ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1NFU1NJT05cX0lOVkFMSUR9Ci0KLVlvdSBnYXZlIGFuIGlu
dmFsaWQgc2Vzc2lvbiBoYW5kbGUuICBJdCBtYXkgaGF2ZSBiZWVuIGludmFsaWRhdGVkIGJ5IGEK
LXNlcnZlciByZXN0YXJ0LCBvciB0aW1lZCBvdXQuICBZb3Ugc2hvdWxkIGdldCBhIG5ldyBzZXNz
aW9uIGhhbmRsZSwgdXNpbmcKLW9uZSBvZiB0aGUgc2Vzc2lvbi5sb2dpblxfIGNhbGxzLiAgVGhp
cyBlcnJvciBkb2VzIG5vdCBpbnZhbGlkYXRlIHRoZQotY3VycmVudCBjb25uZWN0aW9uLiAgVGhl
IGhhbmRsZSBwYXJhbWV0ZXIgZWNob2VzIHRoZSBiYWQgdmFsdWUgZ2l2ZW4uCi0KLVx2c3BhY2V7
MC4zY219Ci17XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19U0VTU0lPTl9JTlZBTElE
KGhhbmRsZSlcZW5ke3ZlcmJhdGltfQotXGJlZ2lue2NlbnRlcn1ccnVsZXsxMGVtfXswLjFwdH1c
ZW5ke2NlbnRlcn0KLQotXHN1YnN1YnNlY3Rpb257U0VTU0lPTlxfTk9UXF9SRUdJU1RFUkVEfQot
Ci1UaGlzIHNlc3Npb24gaXMgbm90IHJlZ2lzdGVyZWQgdG8gcmVjZWl2ZSBldmVudHMuICBZb3Ug
bXVzdCBjYWxsCi1ldmVudC5yZWdpc3RlciBiZWZvcmUgZXZlbnQubmV4dC4gIFRoZSBzZXNzaW9u
IGhhbmRsZSB5b3UgYXJlIHVzaW5nIGlzCi1lY2hvZWQuCi0KLVx2c3BhY2V7MC4zY219Ci17XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19U0VTU0lPTl9OT1RfUkVHSVNURVJFRChoYW5k
bGUpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtj
ZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1ZBTFVFXF9OT1RcX1NVUFBPUlRFRH0KLQotWW91IGF0
dGVtcHRlZCB0byBzZXQgYSB2YWx1ZSB0aGF0IGlzIG5vdCBzdXBwb3J0ZWQgYnkgdGhpcyBpbXBs
ZW1lbnRhdGlvbi4gCi1UaGUgZnVsbHktcXVhbGlmaWVkIGZpZWxkIG5hbWUgYW5kIHRoZSB2YWx1
ZSB0aGF0IHlvdSB0cmllZCB0byBzZXQgYXJlCi1yZXR1cm5lZC4gIEFsc28gcmV0dXJuZWQgaXMg
YSBkZXZlbG9wZXItb25seSBkaWFnbm9zdGljIHJlYXNvbi4KLQotXHZzcGFjZXswLjNjbX0KLXtc
YmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1WQUxVRV9OT1RfU1VQUE9SVEVEKGZpZWxk
LCB2YWx1ZSwgcmVhc29uKVxlbmR7dmVyYmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19
ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1cc3Vic3Vic2VjdGlvbntWTEFOXF9UQUdcX0lOVkFMSUR9
Ci0KLVlvdSB0cmllZCB0byBjcmVhdGUgYSBWTEFOLCBidXQgdGhlIHRhZyB5b3UgZ2F2ZSB3YXMg
aW52YWxpZCAtLSBpdCBtdXN0IGJlCi1iZXR3ZWVuIDAgYW5kIDQwOTUuICBUaGUgcGFyYW1ldGVy
IGVjaG9lcyB0aGUgVkxBTiB0YWcgeW91IGdhdmUuCi0KLVx2c3BhY2V7MC4zY219Ci17XGJmIFNp
Z25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19VkxBTl9UQUdfSU5WQUxJRChWTEFOKVxlbmR7dmVy
YmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1c
c3Vic3Vic2VjdGlvbntWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVlvdSBhdHRlbXB0ZWQgYW4g
b3BlcmF0aW9uIG9uIGEgVk0gdGhhdCB3YXMgbm90IGluIGFuIGFwcHJvcHJpYXRlIHBvd2VyCi1z
dGF0ZSBhdCB0aGUgdGltZTsgZm9yIGV4YW1wbGUsIHlvdSBhdHRlbXB0ZWQgdG8gc3RhcnQgYSBW
TSB0aGF0IHdhcwotYWxyZWFkeSBydW5uaW5nLiAgVGhlIHBhcmFtZXRlcnMgcmV0dXJuZWQgYXJl
IHRoZSBWTSdzIGhhbmRsZSwgYW5kIHRoZQotZXhwZWN0ZWQgYW5kIGFjdHVhbCBWTSBzdGF0ZSBh
dCB0aGUgdGltZSBvZiB0aGUgY2FsbC4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJl
On0KLVxiZWdpbnt2ZXJiYXRpbX1WTV9CQURfUE9XRVJfU1RBVEUodm0sIGV4cGVjdGVkLCBhY3R1
YWwpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtj
ZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue1ZNXF9IVk1cX1JFUVVJUkVEfQotCi1IVk0gaXMgcmVx
dWlyZWQgZm9yIHRoaXMgb3BlcmF0aW9uCi0KLVx2c3BhY2V7MC4zY219Ci17XGJmIFNpZ25hdHVy
ZTp9Ci1cYmVnaW57dmVyYmF0aW19Vk1fSFZNX1JFUVVJUkVEKHZtKVxlbmR7dmVyYmF0aW19Ci1c
YmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1cc3Vic3Vic2Vj
dGlvbntTRUNVUklUWVxfRVJST1J9Ci0KLUEgc2VjdXJpdHkgZXJyb3Igb2NjdXJyZWQuIFRoZSBw
YXJhbWV0ZXIgcHJvdmlkZXMgdGhlIHhlbiBzZWN1cml0eQotZXJyb3IgY29kZSBhbmQgYSBtZXNz
YWdlIGRlc2NyaWJpbmcgdGhlIGVycm9yLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfVNFQ1VSSVRZX0VSUk9SKHhzZXJyLCBtZXNzYWdlKVxlbmR7
dmVyYmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQot
Ci1cc3Vic3Vic2VjdGlvbntQT09MXF9CQURcX1NUQVRFfQotCi1Zb3UgYXR0ZW1wdGVkIGFuIG9w
ZXJhdGlvbiBvbiBhIHBvb2wgdGhhdCB3YXMgbm90IGluIGFuIGFwcHJvcHJpYXRlIHN0YXRlCi1h
dCB0aGUgdGltZTsgZm9yIGV4YW1wbGUsIHlvdSBhdHRlbXB0ZWQgdG8gYWN0aXZhdGUgYSBwb29s
IHRoYXQgd2FzCi1hbHJlYWR5IGFjdGl2YXRlZC4KLQotXHZzcGFjZXswLjNjbX0KLXtcYmYgU2ln
bmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1QT09MX0JBRF9TVEFURShjdXJyZW50IHBvb2wgc3Rh
dGUpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtj
ZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue0lOU1VGRklDSUVOVFxfQ1BVU30KLQotWW91IGF0dGVt
cHRlZCB0byBhY3RpdmF0ZSBhIGNwdVxfcG9vbCBidXQgdGhlcmUgYXJlIG5vdCBlbm91Z2gKLXVu
YWxsb2NhdGVkIENQVXMgdG8gc2F0aXNmeSB0aGUgcmVxdWVzdC4KLQotXHZzcGFjZXswLjNjbX0K
LXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1JTlNVRkZJQ0lFTlRfQ1BVUyhuZWVk
ZWQgY3B1IGNvdW50LCBhdmFpbGFibGUgY3B1IGNvdW50KVxlbmR7dmVyYmF0aW19Ci1cYmVnaW57
Y2VudGVyfVxydWxlezEwZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi1cc3Vic3Vic2VjdGlvbntV
TktPV05cX1NDSEVEXF9QT0xJQ1l9Ci0KLVRoZSBzcGVjaWZpZWQgc2NoZWR1bGVyIHBvbGljeSBp
cyB1bmtvd24gdG8gdGhlIGhvc3QuCi0KLVx2c3BhY2V7MC4zY219Ci17XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19VU5LT1dOX1NDSEVEX1BPTElDWSgpXGVuZHt2ZXJiYXRpbX0KLVxi
ZWdpbntjZW50ZXJ9XHJ1bGV7MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0
aW9ue0lOVkFMSURcX0NQVX0KLQotWW91IHRyaWVkIHRvIHJlY29uZmlndXJlIGEgY3B1XF9wb29s
IHdpdGggYSBDUFUgdGhhdCBpcyB1bmtvd24gdG8gdGhlIGhvc3QKLW9yIGhhcyBhIHdyb25nIHN0
YXRlLgotCi1cdnNwYWNlezAuM2NtfQote1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGlt
fUlOVkFMSURfQ1BVKG1lc3NhZ2UpXGVuZHt2ZXJiYXRpbX0KLVxiZWdpbntjZW50ZXJ9XHJ1bGV7
MTBlbX17MC4xcHR9XGVuZHtjZW50ZXJ9Ci0KLVxzdWJzdWJzZWN0aW9ue0xBU1RcX0NQVVxfTk9U
XF9SRU1PVkVBQkxFfQotCi1Zb3UgdHJpZWQgdG8gcmVtb3ZlIHRoZSBsYXN0IENQVSBmcm9tIGEg
Y3B1XF9wb29sIHRoYXQgaGFzIG9uZSBvciBtb3JlCi1hY3RpdmUgZG9tYWlucy4KLQotXHZzcGFj
ZXswLjNjbX0KLXtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX1MQVNUX0NQVV9OT1Rf
UkVNT1ZFQUJMRShtZXNzYWdlKVxlbmR7dmVyYmF0aW19Ci1cYmVnaW57Y2VudGVyfVxydWxlezEw
ZW19ezAuMXB0fVxlbmR7Y2VudGVyfQotCi0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFzczogc2Vz
c2lvbn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IHNlc3Npb259Ci1cYmVnaW57bG9u
Z3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17
fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgc2Vzc2lvbn0gXFwKLVxtdWx0aWNv
bHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezEx
Y219e1xlbSBBCi1zZXNzaW9uLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYg
RGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdGhpc1xfaG9zdH0gJiBob3N0IHJl
ZiAmIEN1cnJlbnRseSBjb25uZWN0ZWQgaG9zdCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCB0aGlzXF91c2VyfSAmIHVzZXIgcmVmICYgQ3VycmVudGx5IGNvbm5lY3RlZCB1
c2VyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGxhc3RcX2FjdGl2ZX0g
JiBpbnQgJiBUaW1lc3RhbXAgZm9yIGxhc3QgdGltZSBzZXNzaW9uIHdhcyBhY3RpdmUgXFwKLVxo
bGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBj
bGFzczogc2Vzc2lvbn0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5sb2dpblxfd2l0aFxfcGFz
c3dvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUF0dGVtcHQgdG8gYXV0aGVudGljYXRlIHRoZSB1
c2VyLCByZXR1cm5pbmcgYSBzZXNzaW9uXF9pZCBpZiBzdWNjZXNzZnVsLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChzZXNzaW9uIHJlZikgbG9naW5f
d2l0aF9wYXNzd29yZCAoc3RyaW5nIHVuYW1lLCBzdHJpbmcgcHdkKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdW5hbWUgJiBV
c2VybmFtZSBmb3IgbG9naW4uIFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHB3ZCAmIFBh
c3N3b3JkIGZvciBsb2dpbi4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXNlc3Npb24g
cmVmCi19Ci0KLQotSUQgb2YgbmV3bHkgY3JlYXRlZCBzZXNzaW9uCi0KLVx2c3BhY2V7MC4zY219
Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFNFU1NJT05cX0FV
VEhFTlRJQ0FUSU9OXF9GQUlMRUR9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5sb2dvdXR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUxvZyBvdXQgb2YgYSBzZXNzaW9uLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgbG9nb3V0IChzZXNzaW9uX2lkIHMpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlk
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIHNl
c3Npb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
c3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIHNlc3Npb24gcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc2Vzc2lvbiByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF90aGlzXF9ob3N0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHRoaXNcX2hv
c3QgZmllbGQgb2YgdGhlIGdpdmVuIHNlc3Npb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3QgcmVmKSBnZXRfdGhpc19ob3N0IChzZXNzaW9u
X2lkIHMsIHNlc3Npb24gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgc2Vzc2lvbiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdCByZWYKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3RoaXNcX3VzZXJ9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdGhpc1xfdXNlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
c2Vzc2lvbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSAodXNlciByZWYpIGdldF90aGlzX3VzZXIgKHNlc3Npb25faWQgcywgc2Vzc2lvbiByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBz
ZXNzaW9uIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi11c2VyIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfbGFzdFxfYWN0aXZlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1H
ZXQgdGhlIGxhc3RcX2FjdGl2ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gc2Vzc2lvbi4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X2xhc3RfYWN0
aXZlIChzZXNzaW9uX2lkIHMsIHNlc3Npb24gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc2Vzc2lvbiByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVp
ZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBzZXNzaW9uIGlu
c3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChzZXNzaW9uIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Np
b25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0
dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zZXNzaW9uIHJlZgotfQotCi0KLXJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRl
IG9mIHRoZSBnaXZlbiBzZXNzaW9uLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IChzZXNzaW9uIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBz
LCBzZXNzaW9uIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IHNlc3Npb24gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXNlc3Npb24gcmVjb3JkCi19Ci0K
LQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257
Q2xhc3M6IHRhc2t9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiB0YXNrfQotXGJlZ2lu
e2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1u
ezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIHRhc2t9IFxcCi1cbXVsdGlj
b2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsx
MWNtfXtcZW0gQQotbG9uZy1ydW5uaW5nIGFzeW5jaHJvbm91cyB0YXNrLn19IFxcCi1caGxpbmUK
LVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlm
aWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgbmFtZS9sYWJlbH0gJiBzdHJpbmcgJiBhIGh1bWFuLXJlYWRhYmxlIG5hbWUgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbmFtZS9kZXNjcmlwdGlvbn0gJiBzdHJpbmcg
JiBhIG5vdGVzIGZpZWxkIGNvbnRhaW5nIGh1bWFuLXJlYWRhYmxlIGRlc2NyaXB0aW9uIFxcCi0k
XG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0YXR1c30gJiB0YXNrXF9zdGF0dXNc
X3R5cGUgJiBjdXJyZW50IHN0YXR1cyBvZiB0aGUgdGFzayBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBzZXNzaW9ufSAmIHNlc3Npb24gcmVmICYgdGhlIHNlc3Npb24gdGhh
dCBjcmVhdGVkIHRoZSB0YXNrIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0
IHByb2dyZXNzfSAmIGludCAmIGlmIHRoZSB0YXNrIGlzIHN0aWxsIHBlbmRpbmcsIHRoaXMgZmll
bGQgY29udGFpbnMgdGhlIGVzdGltYXRlZCBwZXJjZW50YWdlIGNvbXBsZXRlICgwLTEwMCkuIElm
IHRhc2sgaGFzIGNvbXBsZXRlZCAoc3VjY2Vzc2Z1bGx5IG9yIHVuc3VjY2Vzc2Z1bGx5KSB0aGlz
IHNob3VsZCBiZSAxMDAuIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHR5
cGV9ICYgc3RyaW5nICYgaWYgdGhlIHRhc2sgaGFzIGNvbXBsZXRlZCBzdWNjZXNzZnVsbHksIHRo
aXMgZmllbGQgY29udGFpbnMgdGhlIHR5cGUgb2YgdGhlIGVuY29kZWQgcmVzdWx0IChpLmUuIG5h
bWUgb2YgdGhlIGNsYXNzIHdob3NlIHJlZmVyZW5jZSBpcyBpbiB0aGUgcmVzdWx0IGZpZWxkKS4g
VW5kZWZpbmVkIG90aGVyd2lzZS4gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgcmVzdWx0fSAmIHN0cmluZyAmIGlmIHRoZSB0YXNrIGhhcyBjb21wbGV0ZWQgc3VjY2Vzc2Z1
bGx5LCB0aGlzIGZpZWxkIGNvbnRhaW5zIHRoZSByZXN1bHQgdmFsdWUgKGVpdGhlciBWb2lkIG9y
IGFuIG9iamVjdCByZWZlcmVuY2UpLiBVbmRlZmluZWQgb3RoZXJ3aXNlLiBcXAotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBlcnJvclxfaW5mb30gJiBzdHJpbmcgU2V0ICYgaWYg
dGhlIHRhc2sgaGFzIGZhaWxlZCwgdGhpcyBmaWVsZCBjb250YWlucyB0aGUgc2V0IG9mIGFzc29j
aWF0ZWQgZXJyb3Igc3RyaW5ncy4gVW5kZWZpbmVkIG90aGVyd2lzZS4gXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgYWxsb3dlZFxfb3BlcmF0aW9uc30gJiAodGFza1xfYWxs
b3dlZFxfb3BlcmF0aW9ucykgU2V0ICYgT3BlcmF0aW9ucyBhbGxvd2VkIG9uIHRoaXMgdGFzayBc
XAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3
aXRoIGNsYXNzOiB0YXNrfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNhbmNlbH0KLQote1xi
ZiBPdmVydmlldzp9IAotQ2FuY2VsIHRoaXMgdGFzay4gIElmIHRhc2suYWxsb3dlZFxfb3BlcmF0
aW9ucyBkb2VzIG5vdCBjb250YWluIENhbmNlbCwKLXRoZW4gdGhpcyB3aWxsIGZhaWwgd2l0aCBP
UEVSQVRJT05cX05PVFxfQUxMT1dFRC4gIFRoZSB0YXNrIHdpbGwgc2hvdyB0aGUKLXN0YXR1cyAn
Y2FuY2VsbGluZycsIGFuZCB5b3Ugc2hvdWxkIGNvbnRpbnVlIHRvIGNoZWNrIGl0cyBzdGF0dXMg
dW50aWwgaXQKLXNob3dzICdjYW5jZWxsZWQnLiAgVGhlcmUgaXMgbm8gZ3VhcmFudGVlIGFzIHRv
IHRoZSB0aW1lIHdpdGhpbiB3aGljaCB0aGlzCi10YXNrIHdpbGwgYmUgY2FuY2VsbGVkLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgY2FuY2Vs
IChzZXNzaW9uX2lkIHMsIHRhc2sgcmVmIHRhc2spXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdGFzayByZWYgfSAmIHRhc2sgJiBUaGUgdGFzayBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLQotXG5vaW5kZW50e1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtcdHQgT1BFUkFUSU9O
XF9OT1RcX0FMTE9XRUR9Ci0KLVx2c3BhY2V7MC42Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRo
ZSB0YXNrcyBrbm93biB0byB0aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19ICgodGFzayByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9p
ZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotKHRhc2sgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2Vz
IHRvIGFsbCBvYmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIHRhc2suCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChz
ZXNzaW9uX2lkIHMsIHRhc2sgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdGFzayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9sYWJlbH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiB0
YXNrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0
cmluZyBnZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCB0YXNrIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHRhc2sgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfbmFtZVxfZGVzY3JpcHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFt
ZS9kZXNjcmlwdGlvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gdGFzay4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfZGVzY3JpcHRp
b24gKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0K
LQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N0YXR1c30KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdGF0dXMgZmllbGQgb2YgdGhlIGdpdmVuIHRhc2su
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHRhc2tf
c3RhdHVzX3R5cGUpIGdldF9zdGF0dXMgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi10YXNrXF9zdGF0dXNcX3R5cGUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Nlc3Npb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0
aGUgc2Vzc2lvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gdGFzay4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoc2Vzc2lvbiByZWYpIGdldF9zZXNzaW9uIChz
ZXNzaW9uX2lkIHMsIHRhc2sgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdGFzayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc2Vzc2lvbiByZWYKLX0K
LQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Byb2dyZXNzfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHByb2dyZXNzIGZpZWxkIG9mIHRoZSBnaXZlbiB0
YXNrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGlu
dCBnZXRfcHJvZ3Jlc3MgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3R5
cGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdHlwZSBmaWVsZCBvZiB0aGUgZ2l2ZW4g
dGFzay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X3Jlc3VsdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSByZXN1bHQgZmllbGQgb2YgdGhl
IGdpdmVuIHRhc2suCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF9yZXN1bHQgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX2Vycm9yXF9pbmZvfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGVycm9y
XF9pbmZvIGZpZWxkIG9mIHRoZSBnaXZlbiB0YXNrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChzdHJpbmcgU2V0KSBnZXRfZXJyb3JfaW5mbyAoc2Vz
c2lvbl9pZCBzLCB0YXNrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IHRhc2sgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZyBTZXQKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbG93ZWRcX29wZXJh
dGlvbnN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgYWxsb3dlZFxfb3BlcmF0aW9ucyBm
aWVsZCBvZiB0aGUgZ2l2ZW4gdGFzay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKHRhc2tfYWxsb3dlZF9vcGVyYXRpb25zKSBTZXQpIGdldF9hbGxv
d2VkX29wZXJhdGlvbnMgKHNlc3Npb25faWQgcywgdGFzayByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB0YXNrIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci0odGFza1xfYWxsb3dlZFxfb3BlcmF0aW9ucykgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IGEgcmVmZXJlbmNlIHRvIHRoZSB0YXNrIGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBV
VUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICh0
YXNrIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1
dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi10YXNrIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250
YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiB0YXNrLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICh0YXNrIHJlY29yZCkgZ2V0X3Jl
Y29yZCAoc2Vzc2lvbl9pZCBzLCB0YXNrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHRhc2sgcmVmIH0gJiBzZWxmICYgcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXRhc2sgcmVj
b3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX2J5XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGFsbCB0aGUgdGFz
ayBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKCh0YXNrIHJlZikgU2V0KSBnZXRfYnlfbmFtZV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgbGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBsYWJlbCAmIGxhYmVsIG9m
IG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLSh0YXNrIHJl
ZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBvYmplY3RzIHdpdGggbWF0Y2ggbmFtZXMKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsx
Y219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IGV2ZW50fQotXHN1YnNlY3Rpb257RmllbGRz
IGZvciBjbGFzczogZXZlbnR9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0
aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9
e2x8fXtcYmYgZXZlbnR9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxt
dWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLUFzeW5jaHJvbm91cyBldmVudCBy
ZWdpc3RyYXRpb24gYW5kIGhhbmRsaW5nLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBU
eXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zfSQg
JiAge1x0dCBpZH0gJiBpbnQgJiBBbiBJRCwgbW9ub3RvbmljYWxseSBpbmNyZWFzaW5nLCBhbmQg
bG9jYWwgdG8gdGhlIGN1cnJlbnQgc2Vzc2lvbiBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5z
fSQgJiAge1x0dCB0aW1lc3RhbXB9ICYgZGF0ZXRpbWUgJiBUaGUgdGltZSBhdCB3aGljaCB0aGUg
ZXZlbnQgb2NjdXJyZWQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgY2xh
c3N9ICYgc3RyaW5nICYgVGhlIG5hbWUgb2YgdGhlIGNsYXNzIG9mIHRoZSBvYmplY3QgdGhhdCBj
aGFuZ2VkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IG9wZXJhdGlvbn0g
JiBldmVudFxfb3BlcmF0aW9uICYgVGhlIG9wZXJhdGlvbiB0aGF0IHdhcyBwZXJmb3JtZWQgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgcmVmfSAmIHN0cmluZyAmIEEgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgdGhhdCBjaGFuZ2VkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtpbnN9JCAmICB7XHR0IG9ialxfdXVpZH0gJiBzdHJpbmcgJiBUaGUgdXVpZCBvZiB0aGUgb2Jq
ZWN0IHRoYXQgY2hhbmdlZCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9u
e1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBldmVudH0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5yZWdpc3Rlcn0KLQote1xiZiBPdmVydmlldzp9IAotUmVnaXN0ZXJzIHRoaXMgc2Vzc2lv
biB3aXRoIHRoZSBldmVudCBzeXN0ZW0uICBTcGVjaWZ5aW5nIHRoZSBlbXB0eSBsaXN0Ci13aWxs
IHJlZ2lzdGVyIGZvciBhbGwgY2xhc3Nlcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHJlZ2lzdGVyIChzZXNzaW9uX2lkIHMsIHN0cmluZyBT
ZXQgY2xhc3NlcylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBzdHJpbmcgU2V0IH0gJiBjbGFzc2VzICYgcmVnaXN0ZXIgZm9yIGV2ZW50cyBmb3Ig
dGhlIGluZGljYXRlZCBjbGFzc2VzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lk
Ci19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+dW5yZWdpc3Rlcn0KLQote1xiZiBPdmVydmlldzp9
IAotVW5yZWdpc3RlcnMgdGhpcyBzZXNzaW9uIHdpdGggdGhlIGV2ZW50IHN5c3RlbS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHVucmVnaXN0
ZXIgKHNlc3Npb25faWQgcywgc3RyaW5nIFNldCBjbGFzc2VzKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyBTZXQgfSAmIGNsYXNzZXMg
JiByZW1vdmUgdGhpcyBzZXNzaW9uJ3MgcmVnaXN0cmF0aW9uIGZvciB0aGUgaW5kaWNhdGVkIGNs
YXNzZXMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5uZXh0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1CbG9ja2luZyBjYWxsIHdoaWNo
IHJldHVybnMgYSAocG9zc2libHkgZW1wdHkpIGJhdGNoIG9mIGV2ZW50cy4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKGV2ZW50IHJlY29yZCkgU2V0
KSBuZXh0IChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oZXZlbnQgcmVjb3JkKSBT
ZXQKLX0KLQotCi10aGUgYmF0Y2ggb2YgZXZlbnRzCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRl
bnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBTRVNTSU9OXF9OT1RcX1JFR0lTVEVS
RUR9Ci0KLVx2c3BhY2V7MC42Y219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9u
e0NsYXNzOiBWTX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZNfQotXGJlZ2lue2xv
bmd0YWJsZX17fGxscHswLjIxXHRleHR3aWR0aH1wezAuMzNcdGV4dHdpZHRofXx9Ci1caGxpbmUK
LVxtdWx0aWNvbHVtbnsxfXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWTX0g
XFwKLVxtdWx0aWNvbHVtbns0fXt8bHx9e1xwYXJib3h7MTFjbX17XGVtIERlc2NyaXB0aW9uOiBB
Ci12aXJ0dWFsIG1hY2hpbmUgKG9yICdndWVzdCcpLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmll
bGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7
cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCBy
ZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcG93ZXJcX3N0
YXRlfSAmIHZtXF9wb3dlclxfc3RhdGUgJiBDdXJyZW50IHBvd2VyIHN0YXRlIG9mIHRoZSBtYWNo
aW5lIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgbmFtZS9sYWJlbH0gJiBzdHJpbmcgJiBhIGh1
bWFuLXJlYWRhYmxlIG5hbWUgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBuYW1lL2Rlc2NyaXB0
aW9ufSAmIHN0cmluZyAmIGEgbm90ZXMgZmllbGQgY29udGFpbmcgaHVtYW4tcmVhZGFibGUgZGVz
Y3JpcHRpb24gXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCB1c2VyXF92ZXJzaW9ufSAmIGludCAm
IGEgdXNlciB2ZXJzaW9uIG51bWJlciBmb3IgdGhpcyBtYWNoaW5lIFxcCi0kXG1hdGhpdHtSV30k
ICYgIHtcdHQgaXNcX2FcX3RlbXBsYXRlfSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgaXMgYSB0ZW1w
bGF0ZS4gVGVtcGxhdGUgVk1zIGNhbiBuZXZlciBiZSBzdGFydGVkLCB0aGV5IGFyZSB1c2VkIG9u
bHkgZm9yIGNsb25pbmcgb3RoZXIgVk1zIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgYXV0b1xf
cG93ZXJcX29ufSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgVk0gc2hvdWxkIGJlIHN0YXJ0ZWQgYXV0
b21hdGljYWxseSBhZnRlciBob3N0IGJvb3QgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgc3VzcGVuZFxfVkRJfSAmIFZESSByZWYgJiBUaGUgVkRJIHRoYXQgYSBzdXNwZW5k
IGltYWdlIGlzIHN0b3JlZCBvbi4gKE9ubHkgaGFzIG1lYW5pbmcgaWYgVk0gaXMgY3VycmVudGx5
IHN1c3BlbmRlZCkgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcmVzaWRl
bnRcX29ufSAmIGhvc3QgcmVmICYgdGhlIGhvc3QgdGhlIFZNIGlzIGN1cnJlbnRseSByZXNpZGVu
dCBvbiBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG1lbW9yeS9zdGF0aWNcX21heH0gJiBpbnQg
JiBTdGF0aWNhbGx5LXNldCAoaS5lLiBhYnNvbHV0ZSkgbWF4aW11bSAoYnl0ZXMpIFxcCi0kXG1h
dGhpdHtSV30kICYgIHtcdHQgbWVtb3J5L2R5bmFtaWNcX21heH0gJiBpbnQgJiBEeW5hbWljIG1h
eGltdW0gKGJ5dGVzKSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG1lbW9yeS9keW5hbWljXF9t
aW59ICYgaW50ICYgRHluYW1pYyBtaW5pbXVtIChieXRlcykgXFwKLSRcbWF0aGl0e1JXfSQgJiAg
e1x0dCBtZW1vcnkvc3RhdGljXF9taW59ICYgaW50ICYgU3RhdGljYWxseS1zZXQgKGkuZS4gYWJz
b2x1dGUpIG1pbmltdW0gKGJ5dGVzKSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IFZDUFVzL3Bh
cmFtc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBjb25maWd1cmF0aW9u
IHBhcmFtZXRlcnMgZm9yIHRoZSBzZWxlY3RlZCBWQ1BVIHBvbGljeSBcXAotJFxtYXRoaXR7Uld9
JCAmICB7XHR0IFZDUFVzL21heH0gJiBpbnQgJiBNYXggbnVtYmVyIG9mIFZDUFVzIFxcCi0kXG1h
dGhpdHtSV30kICYgIHtcdHQgVkNQVXMvYXRcX3N0YXJ0dXB9ICYgaW50ICYgQm9vdCBudW1iZXIg
b2YgVkNQVXMgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBhY3Rpb25zL2FmdGVyXF9zaHV0ZG93
bn0gJiBvblxfbm9ybWFsXF9leGl0ICYgYWN0aW9uIHRvIHRha2UgYWZ0ZXIgdGhlIGd1ZXN0IGhh
cyBzaHV0ZG93biBpdHNlbGYgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBhY3Rpb25zL2FmdGVy
XF9yZWJvb3R9ICYgb25cX25vcm1hbFxfZXhpdCAmIGFjdGlvbiB0byB0YWtlIGFmdGVyIHRoZSBn
dWVzdCBoYXMgcmVib290ZWQgaXRzZWxmIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgYWN0aW9u
cy9hZnRlclxfY3Jhc2h9ICYgb25cX2NyYXNoXF9iZWhhdmlvdXIgJiBhY3Rpb24gdG8gdGFrZSBp
ZiB0aGUgZ3Vlc3QgY3Jhc2hlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBjb25zb2xlc30gJiAoY29uc29sZSByZWYpIFNldCAmIHZpcnR1YWwgY29uc29sZSBkZXZpY2Vz
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFZJRnN9ICYgKFZJRiByZWYp
IFNldCAmIHZpcnR1YWwgbmV0d29yayBpbnRlcmZhY2VzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtydW59JCAmICB7XHR0IFZCRHN9ICYgKFZCRCByZWYpIFNldCAmIHZpcnR1YWwgYmxvY2sgZGV2
aWNlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBjcmFzaFxfZHVtcHN9
ICYgKGNyYXNoZHVtcCByZWYpIFNldCAmIGNyYXNoIGR1bXBzIGFzc29jaWF0ZWQgd2l0aCB0aGlz
IFZNIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFZUUE1zfSAmIChWVFBN
IHJlZikgU2V0ICYgdmlydHVhbCBUUE1zIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IERQQ0lzfSAmIChEUENJIHJlZikgU2V0ICYgcGFzcy10aHJvdWdoIFBDSSBkZXZpY2Vz
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IERTQ1NJc30gJiAoRFNDU0kg
cmVmKSBTZXQgJiBoYWxmLXZpcnR1YWxpemVkIFNDU0kgZGV2aWNlcyBcXAotJFxtYXRoaXR7Uk99
X1xtYXRoaXR7cnVufSQgJiAge1x0dCBEU0NTSVxfSEJBc30gJiAoRFNDU0lcX0hCQSByZWYpIFNl
dCAmIGhhbGYtdmlydHVhbGl6ZWQgU0NTSSBob3N0IGJ1cyBhZGFwdGVycyBcXAotJFxtYXRoaXR7
Uld9JCAmICB7XHR0IFBWL2Jvb3Rsb2FkZXJ9ICYgc3RyaW5nICYgbmFtZSBvZiBvciBwYXRoIHRv
IGJvb3Rsb2FkZXIgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBQVi9rZXJuZWx9ICYgc3RyaW5n
ICYgVVJJIG9mIGtlcm5lbCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IFBWL3JhbWRpc2t9ICYg
c3RyaW5nICYgVVJJIG9mIGluaXRyZCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IFBWL2FyZ3N9
ICYgc3RyaW5nICYga2VybmVsIGNvbW1hbmQtbGluZSBhcmd1bWVudHMgXFwKLSRcbWF0aGl0e1JX
fSQgJiAge1x0dCBQVi9ib290bG9hZGVyXF9hcmdzfSAmIHN0cmluZyAmIG1pc2NlbGxhbmVvdXMg
YXJndW1lbnRzIGZvciB0aGUgYm9vdGxvYWRlciBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IEhW
TS9ib290XF9wb2xpY3l9ICYgc3RyaW5nICYgSFZNIGJvb3QgcG9saWN5IFxcCi0kXG1hdGhpdHtS
V30kICYgIHtcdHQgSFZNL2Jvb3RcX3BhcmFtc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3Ry
aW5nKSBNYXAgJiBIVk0gYm9vdCBwYXJhbXMgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBwbGF0
Zm9ybX0gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBwbGF0Zm9ybS1zcGVj
aWZpYyBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgUENJXF9idXN9ICYg
c3RyaW5nICYgUENJIGJ1cyBwYXRoIGZvciBwYXNzLXRocm91Z2ggZGV2aWNlcyBcXAotJFxtYXRo
aXR7Uld9JCAmICB7XHR0IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0
cmluZykgTWFwICYgYWRkaXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IGRvbWlkfSAmIGludCAmIGRvbWFpbiBJRCAoaWYgYXZhaWxhYmxl
LCAtMSBvdGhlcndpc2UpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGlz
XF9jb250cm9sXF9kb21haW59ICYgYm9vbCAmIHRydWUgaWYgdGhpcyBpcyBhIGNvbnRyb2wgZG9t
YWluIChkb21haW4gMCBvciBhIGRyaXZlciBkb21haW4pIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtydW59JCAmICB7XHR0IG1ldHJpY3N9ICYgVk1cX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3Nv
Y2lhdGVkIHdpdGggdGhpcyBWTSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBndWVzdFxfbWV0cmljc30gJiBWTVxfZ3Vlc3RcX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3Nv
Y2lhdGVkIHdpdGggdGhlIHJ1bm5pbmcgZ3Vlc3QgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgc2VjdXJpdHkvbGFiZWx9ICYgc3RyaW5nICYgdGhlIFZNJ3Mgc2VjdXJpdHkg
bGFiZWwgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntQYXJhbWV0ZXIg
RGV0YWlsc30KLVxzdWJzdWJzZWN0aW9ue1BWL2tlcm5lbCBhbmQgUFYvcmFtZGlza30KLVRoZSBc
dGV4dHR0e1BWL2tlcm5lbH0gYW5kIFx0ZXh0dHR7UFYvcmFtZGlza30gcGFyYW1ldGVycyBzaG91
bGQgYmUKLXNwZWNpZmllZCBhcyBVUklzIHdpdGggZWl0aGVyIGEgXHRleHR0dHtmaWxlfSBvciBc
dGV4dHR0e2RhdGF9IHNjaGVtZS4KLQotVGhlIFx0ZXh0dHR7ZmlsZX0gc2NoZW1lIG11c3QgYmUg
dXNlZCB3aGVuIGEgZmlsZSBvbiB0aGUgcmVtb3RlIGRvbTAKLXNob3VsZCBiZSB1c2VkLiAgVGhl
IHJlbW90ZSBkb20wIGlzIHRoZSBvbmUgd2hlcmUgdGhlIGd1ZXN0IHN5c3RlbQotc2hvdWxkIGJl
IHN0YXJ0ZWQgb24uIE9ubHkgYWJzb2x1dGUgZmlsZW5hbWVzIGFyZSBzdXBwb3J0ZWQsIGkuZS4g
dGhlCi1zdHJpbmcgbXVzdCBzdGFydCB3aXRoIFx0ZXh0dHR7ZmlsZTovL30gYXBwZW5kZWQgd2l0
aCB0aGUgYWJzb2x1dGUKLXBhdGguICBUaGlzIGlzIHR5cGljYWxseSB1c2VkIHdoZW4gdGhlIGd1
ZXN0IHN5c3RlbSB1c2UgdGhlIHNhbWUKLW9wZXJhdGluZyBzeXN0ZW1zIGFzIHRoZSBkb20wIG9y
IHRoZXJlIGlzIHNvbWUga2luZCBvZiBzaGFyZWQgc3RvcmFnZQotZm9yIHRoZSBpbWFnZXMgaW5z
aWRlIHRoZSBkb20wcy4KLQotTm90ZSB0aGF0IGZvciBjb21wYXRpYmlsaXR5IHJlYXNvbnMgaXQg
aXMgcG9zc2libGUgLS0tIGJ1dCBub3QKLXJlY29tbWVuZGVkIC0tLSB0byBsZWF2ZSBvdXQgdGhl
IHNjaGVtZSBzcGVjaWZpY2F0aW9uIGZvcgotXHRleHR0dHtmaWxlfSwgaS5lLiBcdGV4dHR0e2Zp
bGU6Ly8vc29tZS9wYXRofSBhbmQgXHRleHR0dHsvc29tZS9wYXRofQotaXMgZXF1aXZhbGVudC4K
LQotRXhhbXBsZXMgKGluIHB5dGhvbik6Ci0KLVVzZSBrZXJuZWwgaW1hZ2Ugd2hpY2ggcmVzaWRl
cyBpbiB0aGUgXHRleHR0dHsvYm9vdH0gZGlyZWN0b3J5OgotXGJlZ2lue3ZlcmJhdGltfQoteGVu
YXBpLlZNLmNyZWF0ZSh7IC4uLgotICAgJ1BWX2tlcm5lbCc6ICdmaWxlOi8vL2Jvb3Qvdm1saW51
ei0yLjYuMjYtMi14ZW4tNjg2JywKLSAgIC4uLiB9KQotXGVuZHt2ZXJiYXRpbX0KLQotVXNlIHJh
bWRpc2sgaW1hZ2Ugd2hpY2ggcmVzaWRlcyBvbiBhIChzaGFyZWQpIG5mcyBkaXJlY3Rvcnk6Ci1c
YmVnaW57dmVyYmF0aW19Ci14ZW5hcGkuVk0uY3JlYXRlKHsgLi4uCi0gICAnUFZfcmFtZGlzayc6
ICdmaWxlOi8vL25mcy94ZW4vZGViaWFuLzUuMC4xL2luaXRyZC5pbWctMi42LjI2LTIteGVuLTY4
NicKLSAgIC4uLiB9KQotXGVuZHt2ZXJiYXRpbX0KLQotV2hlbiBhbiBpbWFnZSBzaG91bGQgYmUg
dXNlZCB3aGljaCByZXNpZGVzIG9uIHRoZSBsb2NhbCBzeXN0ZW0sCi1pLmUuIHRoZSBzeXN0ZW0g
d2hlcmUgdGhlIFhlbkFQSSBjYWxsIGlzIHNlbmQgZnJvbSwgaXQgaXMgcG9zc2libGUgdG8KLXVz
ZSB0aGUgXHRleHR0dHtkYXRhfSBVUkkgc2NoZW1lIGFzIGRlc2NyaWJlZCBpbiBcY2l0ZXtSRkMy
Mzk3fS4gIFRoZQotbWVkaWEtdHlwZSBtdXN0IGJlIHNldCB0byBcdGV4dHR0e2FwcGxpY2F0aW9u
L29jdGV0LXN0cmVhbX0uCi1DdXJyZW50bHkgb25seSBiYXNlNjQgZW5jb2RpbmcgaXMgc3VwcG9y
dGVkLiAgVGhlIFVSSSBtdXN0IHRoZXJlZm9yZQotc3RhcnQgd2l0aCBcdGV4dHR0e2RhdGE6YXBw
bGljYXRpb24vb2N0ZXQtc3RyZWFtO2Jhc2U2NCx9IGZvbGxvd2VkIGJ5Ci10aGUgYmFzZTY0IGVu
Y29kZWQgaW1hZ2UuCi0KLVRoZSBcdGV4dHR0e3hlbi91dGlsL2ZpbGV1cmkucHl9IHByb3ZpZGVz
IGEgaGVscGVyIGZ1bmN0aW9uIHdoaWNoCi10YWtlcyBhIGxvY2FsIGZpbGVuYW1lIGFzIHBhcmFt
ZXRlciBhbmQgYnVpbGQgdXAgdGhlIGNvcnJlY3QgVVJJIGZyb20KLXRoaXMuCi0KLUV4YW1wbGVz
IChpbiBweXRob24pOgotCi1Vc2Uga2VybmVsIGltYWdlIHNwZWNpZmllZCBpbmxpbmU6Ci1cYmVn
aW57dmVyYmF0aW19Ci14ZW5hcGkuVk0uY3JlYXRlKHsgLi4uCi0gICAnUFZfa2VybmVsJzogJ2Rh
dGE6YXBwbGljYXRpb24vb2N0ZXQtc3RyZWFtO2Jhc2U2NCxINFp1Li4uLicKLSAgICAgICMgbW9z
dCBvZiBiYXNlNjQgZW5jb2RlZCBkYXRhIGlzIG9taXR0ZWQgCi0gICAuLi4gfSkKLVxlbmR7dmVy
YmF0aW19Ci0KLVVzaW5nIHRoZSB1dGlsaXR5IGZ1bmN0aW9uOgotXGJlZ2lue3ZlcmJhdGltfQot
ZnJvbSB4ZW4udXRpbC5maWxldXJpIGltcG9ydCBzY2hlbWVfZGF0YQoteGVuYXBpLlZNLmNyZWF0
ZSh7IC4uLgotICAgJ1BWX2tlcm5lbCc6IHNjaGVtZV9kYXRhLmNyZWF0ZV9mcm9tX2ZpbGUoCi0g
ICAgICAgIi94ZW4vZ3Vlc3RzL2ltYWdlcy9kZWJpYW4vNS4wLjEvdm1saW51ei0yLjYuMjYtMi14
ZW4tNjg2IiksCi0gICAuLi4gfSkKLVxlbmR7dmVyYmF0aW19Ci0KLUN1cnJlbnRseSB3aGVuIHVz
aW5nIHRoZSBcdGV4dHR0e2RhdGF9IFVSSSBzY2hlbWUsIGEgdGVtcG9yYXJ5IGZpbGUgaXMKLWNy
ZWF0ZWQgb24gdGhlIHJlbW90ZSBkb20wIGluIHRoZSBkaXJlY3RvcnkKLVx0ZXh0dHR7L3Zhci9y
dW4veGVuZC9ib290fSB3aGljaCBpcyB0aGVuIHVzZWQgZm9yIGJvb3RpbmcuIFdoZW4gbm90Ci11
c2VkIGFueSBsb25nZXIgdGhlIGZpbGUgaXMgZGVsZXRlZC4gIChUaGVyZWZvcmUgcmVhZGluZyBv
ZiB0aGUKLVx0ZXh0dHR7UFYva2VybmVsfSBvciBcdGV4dHR0e1BWL3JhbWRpc2t9IHBhcmFtZXRl
cnMgd2hlbiBjcmVhdGVkIHdpdGgKLWEgXHRleHR0dHtkYXRhfSBVUkkgc2NoZW1lIHJldHVybnMg
YSBmaWxlbmFtZSB0byBhIHRlbXBvcmFyeSBmaWxlIC0tLQotd2hpY2ggbWlnaHQgZXZlbiBub3Qg
ZXhpc3RzIHdoZW4gcXVlcnlpbmcuKSAgVGhpcyBpbXBsZW1lbnRhdGlvbiBtaWdodAotY2hhbmdl
IGluIHRoZSB3YXkgdGhhdCB0aGUgZGF0YSBpcyBkaXJlY3RseSB1c2VkIC0tLSB3aXRob3V0IHRo
ZQotaW5kaXJlY3Rpb24gdXNpbmcgYSBmaWxlLiAgVGhlcmVmb3JlIGRvIG5vdCByZWx5IG9uIHRo
ZSBkYXRhIHJlc3VsdGluZwotZnJvbSBhIHJlYWQgb2YgYSB2YXJpYWJsZXMgd2hpY2ggd2FzIHNl
dCB1c2luZyB0aGUgXHRleHR0dHtkYXRhfQotc2NoZW1lLgotCi1Ob3RlOiBhIG1peCBvZiBkaWZm
ZXJlbnQgc2NoZW1lcyBmb3IgdGhlIHBhcmFtZXRlcnMgaXMgcG9zc2libGU7IGUuZy4KLXRoZSBr
ZXJuZWwgY2FuIGJlIHNwZWNpZmllZCB3aXRoIGEgXHRleHR0dHtmaWxlfSBhbmQgdGhlIHJhbWRp
c2sgd2l0aAotdGhlIFx0ZXh0dHR7ZGF0YX0gVVJJIHNjaGVtZS4KLQotXHN1YnNlY3Rpb257UlBD
cyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFZNfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNs
b25lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1DbG9uZXMgdGhlIHNwZWNpZmllZCBWTSwgbWFraW5n
IGEgbmV3IFZNLiBDbG9uZSBhdXRvbWF0aWNhbGx5IGV4cGxvaXRzIHRoZQotY2FwYWJpbGl0aWVz
IG9mIHRoZSB1bmRlcmx5aW5nIHN0b3JhZ2UgcmVwb3NpdG9yeSBpbiB3aGljaCB0aGUgVk0ncyBk
aXNrCi1pbWFnZXMgYXJlIHN0b3JlZCAoZS5nLiBDb3B5IG9uIFdyaXRlKS4gICBUaGlzIGZ1bmN0
aW9uIGNhbiBvbmx5IGJlIGNhbGxlZAotd2hlbiB0aGUgVk0gaXMgaW4gdGhlIEhhbHRlZCBTdGF0
ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0g
cmVmKSBjbG9uZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgdm0sIHN0cmluZyBuZXdfbmFtZSlcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYg
fSAmIHZtICYgVGhlIFZNIHRvIGJlIGNsb25lZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0g
JiBuZXdcX25hbWUgJiBUaGUgbmFtZSBvZiB0aGUgY2xvbmVkIFZNIFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1WTSByZWYKLX0KLQotCi1UaGUgSUQgb2YgdGhlIG5ld2x5IGNyZWF0ZWQg
Vk0uCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVz
On0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42Y219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+c3RhcnR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVN0YXJ0IHRoZSBz
cGVjaWZpZWQgVk0uICBUaGlzIGZ1bmN0aW9uIGNhbiBvbmx5IGJlIGNhbGxlZCB3aXRoIHRoZSBW
TSBpcyBpbgotdGhlIEhhbHRlZCBTdGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHN0YXJ0IChzZXNzaW9uX2lkIHMsIFZNIHJlZiB2bSwg
Ym9vbCBzdGFydF9wYXVzZWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiB2bSAmIFRoZSBWTSB0byBzdGFydCBcXCBcaGxpbmUg
Ci0KLXtcdHQgYm9vbCB9ICYgc3RhcnRcX3BhdXNlZCAmIEluc3RhbnRpYXRlIFZNIGluIHBhdXNl
ZCBzdGF0ZSBpZiBzZXQgdG8gdHJ1ZS4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0sIHtcdHQKLVZNXF9IVk1cX1JF
UVVJUkVEfQotCi1cdnNwYWNlezAuNmNtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnBhdXNl
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1QYXVzZSB0aGUgc3BlY2lmaWVkIFZNLiBUaGlzIGNhbiBv
bmx5IGJlIGNhbGxlZCB3aGVuIHRoZSBzcGVjaWZpZWQgVk0gaXMgaW4KLXRoZSBSdW5uaW5nIHN0
YXRlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZv
aWQgcGF1c2UgKHNlc3Npb25faWQgcywgVk0gcmVmIHZtKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0gdG8g
cGF1c2UgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZN
XF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn51bnBhdXNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXN1bWUgdGhlIHNwZWNpZmll
ZCBWTS4gVGhpcyBjYW4gb25seSBiZSBjYWxsZWQgd2hlbiB0aGUgc3BlY2lmaWVkIFZNIGlzCi1p
biB0aGUgUGF1c2VkIHN0YXRlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IHZvaWQgdW5wYXVzZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgdm0pXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiB2bSAmIFRoZSBWTSB0byB1bnBhdXNlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12
b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVy
cm9yIENvZGVzOn0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42Y219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y2xlYW5cX3NodXRkb3dufQotCi17XGJmIE92ZXJ2
aWV3On0gCi1BdHRlbXB0IHRvIGNsZWFubHkgc2h1dGRvd24gdGhlIHNwZWNpZmllZCBWTS4gKE5v
dGU6IHRoaXMgbWF5IG5vdCBiZQotc3VwcG9ydGVkLS0tZS5nLiBpZiBhIGd1ZXN0IGFnZW50IGlz
IG5vdCBpbnN0YWxsZWQpLgotCi1PbmNlIHNodXRkb3duIGhhcyBiZWVuIGNvbXBsZXRlZCBwZXJm
b3JtIHBvd2Vyb2ZmIGFjdGlvbiBzcGVjaWZpZWQgaW4gZ3Vlc3QKLWNvbmZpZ3VyYXRpb24uCi0K
LVRoaXMgY2FuIG9ubHkgYmUgY2FsbGVkIHdoZW4gdGhlIHNwZWNpZmllZCBWTSBpcyBpbiB0aGUg
UnVubmluZyBzdGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSB2b2lkIGNsZWFuX3NodXRkb3duIChzZXNzaW9uX2lkIHMsIFZNIHJlZiB2bSlcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYg
fSAmIHZtICYgVGhlIFZNIHRvIHNodXRkb3duIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxl
IEVycm9yIENvZGVzOn0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y2xlYW5cX3JlYm9vdH0KLQote1xiZiBPdmVy
dmlldzp9IAotQXR0ZW1wdCB0byBjbGVhbmx5IHNodXRkb3duIHRoZSBzcGVjaWZpZWQgVk0gKE5v
dGU6IHRoaXMgbWF5IG5vdCBiZQotc3VwcG9ydGVkLS0tZS5nLiBpZiBhIGd1ZXN0IGFnZW50IGlz
IG5vdCBpbnN0YWxsZWQpLgotCi1PbmNlIHNodXRkb3duIGhhcyBiZWVuIGNvbXBsZXRlZCBwZXJm
b3JtIHJlYm9vdCBhY3Rpb24gc3BlY2lmaWVkIGluIGd1ZXN0Ci1jb25maWd1cmF0aW9uLgotCi1U
aGlzIGNhbiBvbmx5IGJlIGNhbGxlZCB3aGVuIHRoZSBzcGVjaWZpZWQgVk0gaXMgaW4gdGhlIFJ1
bm5pbmcgc3RhdGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gdm9pZCBjbGVhbl9yZWJvb3QgKHNlc3Npb25faWQgcywgVk0gcmVmIHZtKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYg
dm0gJiBUaGUgVk0gdG8gc2h1dGRvd24gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjZjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5oYXJkXF9zaHV0ZG93bn0KLQote1xiZiBPdmVydmll
dzp9IAotU3RvcCBleGVjdXRpbmcgdGhlIHNwZWNpZmllZCBWTSB3aXRob3V0IGF0dGVtcHRpbmcg
YSBjbGVhbiBzaHV0ZG93bi4gVGhlbgotcGVyZm9ybSBwb3dlcm9mZiBhY3Rpb24gc3BlY2lmaWVk
IGluIFZNIGNvbmZpZ3VyYXRpb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gdm9pZCBoYXJkX3NodXRkb3duIChzZXNzaW9uX2lkIHMsIFZNIHJlZiB2
bSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
TSByZWYgfSAmIHZtICYgVGhlIFZNIHRvIGRlc3Ryb3kgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9z
c2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFj
ZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5oYXJkXF9yZWJvb3R9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVN0b3AgZXhlY3V0aW5nIHRoZSBzcGVjaWZpZWQgVk0gd2l0aG91dCBhdHRl
bXB0aW5nIGEgY2xlYW4gc2h1dGRvd24uIFRoZW4KLXBlcmZvcm0gcmVib290IGFjdGlvbiBzcGVj
aWZpZWQgaW4gVk0gY29uZmlndXJhdGlvbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGhhcmRfcmVib290IChzZXNzaW9uX2lkIHMsIFZNIHJl
ZiB2bSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTSByZWYgfSAmIHZtICYgVGhlIFZNIHRvIHJlYm9vdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnN1c3BlbmR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVN1c3BlbmQgdGhlIHNwZWNpZmllZCBWTSB0byBkaXNrLiAgVGhpcyBjYW4g
b25seSBiZSBjYWxsZWQgd2hlbiB0aGUKLXNwZWNpZmllZCBWTSBpcyBpbiB0aGUgUnVubmluZyBz
dGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2
b2lkIHN1c3BlbmQgKHNlc3Npb25faWQgcywgVk0gcmVmIHZtKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0g
dG8gc3VzcGVuZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50e1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtc
dHQgVk1cX0JBRFxfUE9XRVJcX1NUQVRFfQotCi1cdnNwYWNlezAuNmNtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnJlc3VtZX0KLQote1xiZiBPdmVydmlldzp9IAotQXdha2VuIHRoZSBzcGVj
aWZpZWQgVk0gYW5kIHJlc3VtZSBpdC4gIFRoaXMgY2FuIG9ubHkgYmUgY2FsbGVkIHdoZW4gdGhl
Ci1zcGVjaWZpZWQgVk0gaXMgaW4gdGhlIFN1c3BlbmRlZCBzdGF0ZS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHJlc3VtZSAoc2Vzc2lvbl9p
ZCBzLCBWTSByZWYgdm0sIGJvb2wgc3RhcnRfcGF1c2VkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0gdG8g
cmVzdW1lIFxcIFxobGluZSAKLQote1x0dCBib29sIH0gJiBzdGFydFxfcGF1c2VkICYgUmVzdW1l
IFZNIGluIHBhdXNlZCBzdGF0ZSBpZiBzZXQgdG8gdHJ1ZS4gXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYg
UG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZz
cGFjZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX1ZDUFVzXF9udW1iZXJc
X2xpdmV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGlzIFZNJ3MgVkNQVXMvYXRcX3N0YXJ0
dXAgdmFsdWUsIGFuZCBzZXQgdGhlIHNhbWUgdmFsdWUgb24gdGhlIFZNLCBpZgotcnVubmluZy4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNl
dF9WQ1BVc19udW1iZXJfbGl2ZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgaW50IG52Y3B1
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIFRoZSBWTSBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiBudmNwdSAm
IFRoZSBudW1iZXIgb2YgVkNQVXMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9WQ1BVc1xfcGFyYW1zXF9saXZlfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1BZGQgdGhlIGdpdmVuIGtleS12YWx1ZSBwYWlyIHRvIFZNLlZD
UFVzXF9wYXJhbXMsIGFuZCBhcHBseSB0aGF0IHZhbHVlIG9uCi10aGUgcnVubmluZyBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGFkZF90
b19WQ1BVc19wYXJhbXNfbGl2ZSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIGtl
eSwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIFRoZSBWTSBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiBrZXkgJiBUaGUga2V5IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHZh
bHVlICYgVGhlIHZhbHVlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0K
LQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9tZW1vcnlcX2R5bmFtaWNcX21heFxfbGl2ZX0KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IG1lbW9yeVxfZHluYW1pY1xfbWF4IGluIGRhdGFiYXNlIGFu
ZCBvbiBydW5uaW5nIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgc2V0X21lbW9yeV9keW5hbWljX21heF9saXZlIChzZXNzaW9uX2lkIHMs
IFZNIHJlZiBzZWxmLCBpbnQgbWF4KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIFRoZSBWTSBcXCBcaGxpbmUgCi0K
LXtcdHQgaW50IH0gJiBtYXggJiBUaGUgbWVtb3J5XF9keW5hbWljXF9tYXggdmFsdWUgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5z
ZXRcX21lbW9yeVxfZHluYW1pY1xfbWluXF9saXZlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQg
bWVtb3J5XF9keW5hbWljXF9taW4gaW4gZGF0YWJhc2UgYW5kIG9uIHJ1bm5pbmcgVk0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbWVt
b3J5X2R5bmFtaWNfbWluX2xpdmUgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGludCBtaW4p
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0g
cmVmIH0gJiBzZWxmICYgVGhlIFZNIFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIG1pbiAmIFRo
ZSBtZW1vcnlcX2R5bmFtaWNcX21pbiB2YWx1ZSBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNlbmRcX3N5c3JxfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1TZW5kIHRoZSBnaXZlbiBrZXkgYXMgYSBzeXNycSB0byB0aGlzIFZNLiAgVGhl
IGtleSBpcyBzcGVjaWZpZWQgYXMgYSBzaW5nbGUKLWNoYXJhY3RlciAoYSBTdHJpbmcgb2YgbGVu
Z3RoIDEpLiAgVGhpcyBjYW4gb25seSBiZSBjYWxsZWQgd2hlbiB0aGUKLXNwZWNpZmllZCBWTSBp
cyBpbiB0aGUgUnVubmluZyBzdGF0ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNlbmRfc3lzcnEgKHNlc3Npb25faWQgcywgVk0gcmVmIHZt
LCBzdHJpbmcga2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZNIHJlZiB9ICYgdm0gJiBUaGUgVk0gXFwgXGhsaW5lIAotCi17XHR0IHN0cmlu
ZyB9ICYga2V5ICYgVGhlIGtleSB0byBzZW5kIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxl
IEVycm9yIENvZGVzOn0ge1x0dCBWTVxfQkFEXF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2VuZFxfdHJpZ2dlcn0KLQote1xiZiBPdmVy
dmlldzp9IAotU2VuZCB0aGUgbmFtZWQgdHJpZ2dlciB0byB0aGlzIFZNLiAgVGhpcyBjYW4gb25s
eSBiZSBjYWxsZWQgd2hlbiB0aGUKLXNwZWNpZmllZCBWTSBpcyBpbiB0aGUgUnVubmluZyBzdGF0
ZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lk
IHNlbmRfdHJpZ2dlciAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgdm0sIHN0cmluZyB0cmlnZ2VyKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJl
ZiB9ICYgdm0gJiBUaGUgVk0gXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdHJpZ2dlciAm
IFRoZSB0cmlnZ2VyIHRvIHNlbmQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3Ig
Q29kZXM6fSB7XHR0IFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjZjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5taWdyYXRlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1NaWdy
YXRlIHRoZSBWTSB0byBhbm90aGVyIGhvc3QuICBUaGlzIGNhbiBvbmx5IGJlIGNhbGxlZCB3aGVu
IHRoZSBzcGVjaWZpZWQKLVZNIGlzIGluIHRoZSBSdW5uaW5nIHN0YXRlLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgbWlncmF0ZSAoc2Vzc2lv
bl9pZCBzLCBWTSByZWYgdm0sIHN0cmluZyBkZXN0LCBib29sIGxpdmUsIChzdHJpbmcgLT4gc3Ry
aW5nKSBNYXAgb3B0aW9ucylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBWTSByZWYgfSAmIHZtICYgVGhlIFZNIFxcIFxobGluZSAKLQote1x0dCBz
dHJpbmcgfSAmIGRlc3QgJiBUaGUgZGVzdGluYXRpb24gaG9zdCBcXCBcaGxpbmUgCi0KLXtcdHQg
Ym9vbCB9ICYgbGl2ZSAmIExpdmUgbWlncmF0aW9uIFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5n
ICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgfSAmIG9wdGlvbnMgJiBPdGhlciBwYXJhbWV0ZXJz
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBWTVxfQkFE
XF9QT1dFUlxfU1RBVEV9Ci0KLVx2c3BhY2V7MC42Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRo
ZSBWTXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZNIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi0oVk0gcmVmKSBTZXQKLX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRo
ZSBJRHMgb2YgYWxsIHRoZSBWTXMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlk
IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Bvd2VyXF9zdGF0ZX0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBwb3dlclxfc3RhdGUgZmllbGQgb2YgdGhlIGdpdmVu
IFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICh2
bV9wb3dlcl9zdGF0ZSkgZ2V0X3Bvd2VyX3N0YXRlIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi12bVxfcG93ZXJcX3N0YXRlCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfbmFtZVxfbGFiZWx9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbmFtZS9sYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0u
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBz
dHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfZGVzY3Jp
cHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFtZS9kZXNjcmlwdGlvbiBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lkIHMsIFZN
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGlu
ZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2Rlc2NyaXB0aW9ufQotCi17XGJmIE92ZXJ2aWV3
On0gCi1TZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24gZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X25h
bWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0cmluZyB2YWx1ZSlc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3VzZXJcX3ZlcnNp
b259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXNlclxfdmVyc2lvbiBmaWVsZCBvZiB0
aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gaW50IGdldF91c2VyX3ZlcnNpb24gKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnNldFxfdXNlclxfdmVyc2lvbn0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSB1c2VyXF92
ZXJzaW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF91c2VyX3ZlcnNpb24gKHNlc3Npb25faWQg
cywgVk0gcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBz
ZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX2lzXF9hXF90ZW1wbGF0ZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBpc1xfYVxfdGVtcGxhdGUgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2lzX2FfdGVtcGxhdGUg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWJvb2wKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2lzXF9hXF90ZW1wbGF0ZX0KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBpc1xfYVxfdGVtcGxhdGUgZmllbGQgb2YgdGhlIGdp
dmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHZvaWQgc2V0X2lzX2FfdGVtcGxhdGUgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGJvb2wg
dmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAK
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci17XHR0IGJvb2wgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYXV0b1xf
cG93ZXJcX29ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGF1dG9cX3Bvd2VyXF9vbiBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gYm9vbCBnZXRfYXV0b19wb3dlcl9vbiAoc2Vzc2lvbl9pZCBzLCBWTSBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotYm9vbAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnNldFxfYXV0b1xfcG93ZXJcX29ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIGF1dG9cX3Bvd2VyXF9vbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfYXV0b19wb3dl
cl9vbiAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgYm9vbCB2YWx1ZSlcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgYm9vbCB9ICYgdmFs
dWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lk
Ci19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9zdXNwZW5kXF9WREl9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgc3VzcGVuZFxfVkRJIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkRJIHJlZikg
Z2V0X3N1c3BlbmRfVkRJIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1W
REkgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9y
ZXNpZGVudFxfb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgcmVzaWRlbnRcX29uIGZp
ZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9yZXNpZGVudF9vbiAoc2Vzc2lvbl9pZCBzLCBW
TSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdCByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfc3RhdGljXF9tYXh9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgbWVtb3J5L3N0YXRpY1xfbWF4IGZpZWxkIG9mIHRoZSBnaXZlbiBW
TS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQg
Z2V0X21lbW9yeV9zdGF0aWNfbWF4IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRc
X21lbW9yeVxfc3RhdGljXF9tYXh9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbWVtb3J5
L3N0YXRpY1xfbWF4IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9tZW1vcnlfc3RhdGljX21heCAo
c2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgaW50IHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3
IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWVtb3J5XF9keW5hbWljXF9tYXh9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgbWVtb3J5L2R5bmFtaWNcX21heCBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50
IGdldF9tZW1vcnlfZHluYW1pY19tYXggKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
dFxfbWVtb3J5XF9keW5hbWljXF9tYXh9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbWVt
b3J5L2R5bmFtaWNcX21heCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbWVtb3J5X2R5bmFtaWNf
bWF4IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGludCB9ICYgdmFsdWUg
JiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tZW1vcnlcX2R5bmFtaWNcX21pbn0KLQote1xi
ZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZW1vcnkvZHluYW1pY1xfbWluIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSBpbnQgZ2V0X21lbW9yeV9keW5hbWljX21pbiAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+c2V0XF9tZW1vcnlcX2R5bmFtaWNcX21pbn0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRo
ZSBtZW1vcnkvZHluYW1pY1xfbWluIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9tZW1vcnlfZHlu
YW1pY19taW4gKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiB2
YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfc3RhdGljXF9taW59Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbWVtb3J5L3N0YXRpY1xfbWluIGZpZWxkIG9mIHRo
ZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSBpbnQgZ2V0X21lbW9yeV9zdGF0aWNfbWluIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5zZXRcX21lbW9yeVxfc3RhdGljXF9taW59Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0
aGUgbWVtb3J5L3N0YXRpY1xfbWluIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9tZW1vcnlfc3Rh
dGljX21pbiAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgaW50IHZhbHVlKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZh
bHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9p
ZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfVkNQVXNcX3BhcmFtc30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBWQ1BVcy9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5n
IC0+IHN0cmluZykgTWFwKSBnZXRfVkNQVXNfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX1ZDUFVzXF9wYXJhbXN9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgVkNQVXMvcGFyYW1zIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHNldF9WQ1BVc19wYXJhbXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIChzdHJp
bmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1h
cCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+YWRkXF90b1xfVkNQVXNcX3BhcmFt
c30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0
aGUgVkNQVXMvcGFyYW1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGFkZF90b19WQ1BVc19wYXJhbXMg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3Ry
aW5nIH0gJiBrZXkgJiBLZXkgdG8gYWRkIFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHZh
bHVlICYgVmFsdWUgdG8gYWRkIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+cmVtb3ZlXF9mcm9tXF9WQ1BVc1xfcGFyYW1zfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1SZW1vdmUgdGhlIGdpdmVuIGtleSBhbmQgaXRzIGNvcnJlc3BvbmRp
bmcgdmFsdWUgZnJvbSB0aGUgVkNQVXMvcGFyYW1zCi1maWVsZCBvZiB0aGUgZ2l2ZW4gVk0uICBJ
ZiB0aGUga2V5IGlzIG5vdCBpbiB0aGF0IE1hcCwgdGhlbiBkbyBub3RoaW5nLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zyb21f
VkNQVXNfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcga2V5KVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJp
bmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZDUFVzXF9tYXh9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgVkNQVXMvbWF4IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X1ZD
UFVzX21heCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9WQ1BVc1xfbWF4fQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIFZDUFVzL21heCBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfVkNQVXNfbWF4IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGlu
dCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQ1BVc1xfYXRcX3N0YXJ0
dXB9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVkNQVXMvYXRcX3N0YXJ0dXAgZmllbGQg
b2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IGludCBnZXRfVkNQVXNfYXRfc3RhcnR1cCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+c2V0XF9WQ1BVc1xfYXRcX3N0YXJ0dXB9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNl
dCB0aGUgVkNQVXMvYXRcX3N0YXJ0dXAgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X1ZDUFVzX2F0
X3N0YXJ0dXAgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgaW50IH0gJiB2
YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FjdGlvbnNcX2FmdGVyXF9zaHV0ZG93
bn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBhY3Rpb25zL2FmdGVyXF9zaHV0ZG93biBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKG9uX25vcm1hbF9leGl0KSBnZXRfYWN0aW9uc19hZnRlcl9zaHV0ZG93
biAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotb25cX25vcm1hbFxfZXhp
dAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfYWN0aW9u
c1xfYWZ0ZXJcX3NodXRkb3dufQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIGFjdGlvbnMv
YWZ0ZXJcX3NodXRkb3duIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9hY3Rpb25zX2FmdGVyX3No
dXRkb3duIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBvbl9ub3JtYWxfZXhpdCB2YWx1ZSlc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
b25cX25vcm1hbFxfZXhpdCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9h
Y3Rpb25zXF9hZnRlclxfcmVib290fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGFjdGlv
bnMvYWZ0ZXJcX3JlYm9vdCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKG9uX25vcm1hbF9leGl0KSBnZXRfYWN0
aW9uc19hZnRlcl9yZWJvb3QgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LW9uXF9ub3JtYWxcX2V4aXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5zZXRcX2FjdGlvbnNcX2FmdGVyXF9yZWJvb3R9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNl
dCB0aGUgYWN0aW9ucy9hZnRlclxfcmVib290IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9hY3Rp
b25zX2FmdGVyX3JlYm9vdCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgb25fbm9ybWFsX2V4
aXQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi17XHR0IG9uXF9ub3JtYWxcX2V4aXQgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfYWN0aW9uc1xfYWZ0ZXJcX2NyYXNofQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQg
dGhlIGFjdGlvbnMvYWZ0ZXJcX2NyYXNoIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAob25fY3Jhc2hfYmVoYXZp
b3VyKSBnZXRfYWN0aW9uc19hZnRlcl9jcmFzaCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotb25cX2NyYXNoXF9iZWhhdmlvdXIKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2FjdGlvbnNcX2FmdGVyXF9jcmFzaH0KLQote1xiZiBP
dmVydmlldzp9IAotU2V0IHRoZSBhY3Rpb25zL2FmdGVyXF9jcmFzaCBmaWVsZCBvZiB0aGUgZ2l2
ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
dm9pZCBzZXRfYWN0aW9uc19hZnRlcl9jcmFzaCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwg
b25fY3Jhc2hfYmVoYXZpb3VyIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBvblxfY3Jhc2hcX2JlaGF2aW91ciB9ICYgdmFsdWUg
JiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9jb25zb2xlc30KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBjb25zb2xlcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChjb25zb2xlIHJlZikgU2V0KSBn
ZXRfY29uc29sZXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShjb25z
b2xlIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9WSUZzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZJRnMgZmllbGQgb2YgdGhl
IGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoVklGIHJlZikgU2V0KSBnZXRfVklGcyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotKFZJRiByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfVkJEc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBWQkRz
IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKFZCRCByZWYpIFNldCkgZ2V0X1ZCRHMgKHNlc3Npb25faWQgcywg
Vk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShWQkQgcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2NyYXNoXF9kdW1wc30KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBjcmFzaFxfZHVtcHMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoY3Jhc2hkdW1w
IHJlZikgU2V0KSBnZXRfY3Jhc2hfZHVtcHMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLShjcmFzaGR1bXAgcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZUUE1zfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IFZUUE1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZUUE0gcmVmKSBTZXQpIGdldF9WVFBNcyAoc2Vzc2lv
bl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZUUE0gcmVmKSBTZXQKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX0RQQ0lzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIERQQ0lzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKERQQ0kgcmVmKSBTZXQp
IGdldF9EUENJcyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKERQQ0kg
cmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X0RTQ1NJc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBEU0NTSXMgZmllbGQgb2YgdGhl
IGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoRFNDU0kgcmVmKSBTZXQpIGdldF9EU0NTSXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
TSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotKERTQ1NJIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9EU0NTSVxfSEJBc30KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBEU0NTSVxfSEJBcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChEU0NTSV9IQkEgcmVmKSBTZXQp
IGdldF9EU0NTSV9IQkFzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShE
U0NTSVxfSEJBIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9QVlxfYm9vdGxvYWRlcn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQ
Vi9ib290bG9hZGVyIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X1BWX2Jvb3Rsb2FkZXIgKHNl
c3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfUFZcX2Jvb3Rsb2FkZXJ9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgUFYvYm9vdGxvYWRlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfUFZfYm9vdGxvYWRlciAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIHZh
bHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
e1x0dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUFZcX2tl
cm5lbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQVi9rZXJuZWwgZmllbGQgb2YgdGhl
IGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHN0cmluZyBnZXRfUFZfa2VybmVsIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5zZXRcX1BWXF9rZXJuZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgUFYva2VybmVs
IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9QVl9rZXJuZWwgKHNlc3Npb25faWQgcywgVk0gcmVm
IHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQg
XFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX1BWXF9yYW1kaXNrfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBWL3Jh
bWRpc2sgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfUFZfcmFtZGlzayAoc2Vzc2lvbl9pZCBz
LCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9QVlxfcmFtZGlza30KLQote1xiZiBPdmVydmlldzp9
IAotU2V0IHRoZSBQVi9yYW1kaXNrIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9QVl9yYW1kaXNr
IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFs
dWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lk
Ci19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9QVlxfYXJnc30KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBQVi9hcmdzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X1BWX2FyZ3Mg
KHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfUFZcX2FyZ3N9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVNldCB0aGUgUFYvYXJncyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfUFZf
YXJncyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAm
IHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
dm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUFZcX2Jvb3Rsb2FkZXJcX2FyZ3N9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgUFYvYm9vdGxvYWRlclxfYXJncyBmaWVsZCBv
ZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2
ZXJiYXRpbX0gc3RyaW5nIGdldF9QVl9ib290bG9hZGVyX2FyZ3MgKHNlc3Npb25faWQgcywgVk0g
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnNldFxfUFZcX2Jvb3Rsb2FkZXJcX2FyZ3N9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLVNldCB0aGUgUFYvYm9vdGxvYWRlclxfYXJncyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0u
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfUFZfYm9vdGxvYWRlcl9hcmdzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcg
dmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAK
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9IVk1c
X2Jvb3RcX3BvbGljeX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBIVk0vYm9vdFxfcG9s
aWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X0hWTV9ib290X3BvbGljeSAoc2Vzc2lvbl9p
ZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9IVk1cX2Jvb3RcX3BvbGljeX0KLQote1xiZiBP
dmVydmlldzp9IAotU2V0IHRoZSBIVk0vYm9vdFxfcG9saWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBW
TS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lk
IHNldF9IVk1fYm9vdF9wb2xpY3kgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0cmluZyB2
YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX0hWTVxf
Ym9vdFxfcGFyYW1zfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIEhWTS9ib290XF9wYXJh
bXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfSFZNX2Jvb3Rf
cGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRc
cmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5zZXRcX0hWTVxfYm9vdFxfcGFyYW1zfQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIEhWTS9ib290XF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X0hWTV9ib290
X3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1h
cCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2YWx1ZSAmIE5l
dyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9IVk1cX2Jvb3RcX3BhcmFtc30KLQote1xiZiBP
dmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgSFZNL2Jvb3Rc
X3BhcmFtcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBhZGRfdG9fSFZNX2Jvb3RfcGFyYW1zIChzZXNz
aW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9
ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAm
IFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxfSFZNXF9ib290XF9wYXJhbXN9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0aGUgZ2l2ZW4ga2V5IGFuZCBpdHMgY29ycmVzcG9uZGlu
ZyB2YWx1ZSBmcm9tIHRoZSBIVk0vYm9vdFxfcGFyYW1zCi1maWVsZCBvZiB0aGUgZ2l2ZW4gVk0u
ICBJZiB0aGUga2V5IGlzIG5vdCBpbiB0aGF0IE1hcCwgdGhlbiBkbyBub3RoaW5nLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zy
b21fSFZNX2Jvb3RfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBzdHJpbmcga2V5
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0
dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3BsYXRmb3JtfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBsYXRmb3JtIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmlu
ZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3BsYXRmb3JtIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZN
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3BsYXRmb3JtfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1TZXQgdGhlIHBsYXRmb3JtIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9w
bGF0Zm9ybSAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1h
cCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2YWx1ZSAmIE5l
dyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9wbGF0Zm9ybX0KLQote1xiZiBPdmVydmlldzp9
IAotQWRkIHRoZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgcGxhdGZvcm0gZmllbGQgb2Yg
dGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgYWRkX3RvX3BsYXRmb3JtIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBz
dHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92
ZVxfZnJvbVxfcGxhdGZvcm19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0aGUgZ2l2ZW4g
a2V5IGFuZCBpdHMgY29ycmVzcG9uZGluZyB2YWx1ZSBmcm9tIHRoZSBwbGF0Zm9ybSBmaWVsZCBv
ZgotdGhlIGdpdmVuIFZNLiAgSWYgdGhlIGtleSBpcyBub3QgaW4gdGhhdCBNYXAsIHRoZW4gZG8g
bm90aGluZy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHJlbW92ZV9mcm9tX3BsYXRmb3JtIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBz
dHJpbmcga2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGlu
ZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BDSVxf
YnVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBDSVxfYnVzIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSBzdHJpbmcgZ2V0X1BDSV9idXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
dFxfUENJXF9idXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgUENJXF9idXMgZmllbGQg
b2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgc2V0X1BDSV9idXMgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYsIHN0
cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgb3RoZXJcX2NvbmZp
ZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3RyaW5nKSBNYXApIGdldF9vdGhlcl9jb25m
aWcgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdo
dGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fnNldFxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBv
dGhlclxfY29uZmlnIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9vdGhlcl9jb25maWcgKHNlc3Np
b25faWQgcywgVk0gcmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJp
bmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+YWRkXF90b1xfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRo
ZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUg
Z2l2ZW4gVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gdm9pZCBhZGRfdG9fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmLCBz
dHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92
ZVxfZnJvbVxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBn
aXZlbiBrZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIG90aGVyXF9jb25m
aWcKLWZpZWxkIG9mIHRoZSBnaXZlbiBWTS4gIElmIHRoZSBrZXkgaXMgbm90IGluIHRoYXQgTWFw
LCB0aGVuIGRvIG5vdGhpbmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gdm9pZCByZW1vdmVfZnJvbV9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywg
Vk0gcmVmIHNlbGYsIHN0cmluZyBrZXkpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIHJlbW92ZSBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfZG9taWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZG9taWQgZmllbGQg
b2YgdGhlIGdpdmVuIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IGludCBnZXRfZG9taWQgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfaXNcX2NvbnRyb2xcX2RvbWFpbn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBpc1xf
Y29udHJvbFxfZG9tYWluIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdldF9pc19jb250cm9sX2RvbWFp
biAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotYm9vbAotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWV0cmljc30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBtZXRyaWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fbWV0cmljcyBy
ZWYpIGdldF9tZXRyaWNzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1W
TVxfbWV0cmljcyByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5nZXRcX2d1ZXN0XF9tZXRyaWNzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGd1ZXN0
XF9tZXRyaWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fZ3Vlc3RfbWV0cmljcyByZWYpIGdldF9ndWVz
dF9tZXRyaWNzIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTVxfZ3Vl
c3RcX21ldHJpY3MgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9zZWN1cml0eVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRoZSBzZWN1
cml0eSBsYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk0uIFJlZmVyIHRvIHRoZSBYU1BvbGljeSBj
bGFzcwotZm9yIHRoZSBmb3JtYXQgb2YgdGhlIHNlY3VyaXR5IGxhYmVsLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9zZWN1cml0eV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3NlY3VyaXR5XF9sYWJlbH0K
LQote1xiZiBPdmVydmlldzp9Ci1TZXQgdGhlIHNlY3VyaXR5IGxhYmVsIGZpZWxkIG9mIHRoZSBn
aXZlbiBWTS4gUmVmZXIgdG8gdGhlIFhTUG9saWN5IGNsYXNzCi1mb3IgdGhlIGZvcm1hdCBvZiB0
aGUgc2VjdXJpdHkgbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lu
e3ZlcmJhdGltfSBpbnQgc2V0X3NlY3VyaXR5X2xhYmVsIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBz
ZWxmLCBzdHJpbmcKLXNlY3VyaXR5X2xhYmVsLCBzdHJpbmcgb2xkX2xhYmVsKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgc2Vj
dXJpdHlcX2xhYmVsICYgc2VjdXJpdHkgbGFiZWwgZm9yIHRoZSBWTSBcXCBcaGxpbmUKLXtcdHQg
c3RyaW5nIH0gJiBvbGRcX2xhYmVsICYgTGFiZWwgdmFsdWUgdGhhdCB0aGUgc2VjdXJpdHkgbGFi
ZWwgXFwKLSYgJiBtdXN0IGN1cnJlbnRseSBoYXZlIGZvciB0aGUgY2hhbmdlIHRvIHN1Y2NlZWQu
XFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLWludAotfQotCi0KLVJldHVybnMgdGhlIHNzaWRy
ZWYgaW4gY2FzZSBvZiBhbiBWTSB0aGF0IGlzIGN1cnJlbnRseSBydW5uaW5nIG9yCi1wYXVzZWQs
IHplcm8gaW4gY2FzZSBvZiBhIGRvcm1hbnQgVk0gKGhhbHRlZCwgc3VzcGVuZGVkKS4KLQotXHZz
cGFjZXswLjNjbX0KLQotXG5vaW5kZW50e1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtcdHQg
U0VDVVJJVFlcX0VSUk9SfQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1DcmVhdGUgYSBuZXcgVk0gaW5zdGFuY2UsIGFuZCByZXR1cm4gaXRzIGhhbmRsZS4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVm
KSBjcmVhdGUgKHNlc3Npb25faWQgcywgVk0gcmVjb3JkIGFyZ3MpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVjb3JkIH0gJiBhcmdzICYg
QWxsIGNvbnN0cnVjdG9yIGFyZ3VtZW50cyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
Vk0gcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVjdAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ryb3kgdGhlIHNw
ZWNpZmllZCBWTS4gIFRoZSBWTSBpcyBjb21wbGV0ZWx5IHJlbW92ZWQgZnJvbSB0aGUgc3lzdGVt
LiAKLVRoaXMgZnVuY3Rpb24gY2FuIG9ubHkgYmUgY2FsbGVkIHdoZW4gdGhlIFZNIGlzIGluIHRo
ZSBIYWx0ZWQgU3RhdGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2
ZXJiYXRpbX0gdm9pZCBkZXN0cm95IChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xi
ZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBWTSBpbnN0YW5jZSB3aXRoIHRo
ZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSAoVk0gcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVp
ZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBz
dHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLVZNIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJl
Y29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVjb3JkKSBn
ZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFZNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTSByZWNv
cmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYnlcX25hbWVcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYWxsIHRoZSBWTSBp
bnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWTSByZWYpIFNldCkgZ2V0X2J5X25hbWVfbGFiZWwg
KHNlc3Npb25faWQgcywgc3RyaW5nIGxhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgbGFiZWwgJiBsYWJlbCBvZiBvYmpl
Y3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVk0gcmVmKSBTZXQK
LX0KLQotCi1yZWZlcmVuY2VzIHRvIG9iamVjdHMgd2l0aCBtYXRjaCBuYW1lcwotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfY3B1XF9wb29sfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgY3B1XF9w
b29sIGZpZWxkIG9mIHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19ICgoY3B1X3Bvb2wgcmVmKSBTZXQpIGdldF9jcHVfcG9vbCAoc2Vz
c2lvbl9pZCBzLCBWTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLVxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgVk0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci0oY3B1XF9wb29sIHJlZikgU2V0Ci19Ci0KLQot
cmVmZXJlbmNlcyB0byBjcHVcX3Bvb2wgb2JqZWN0cy4KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Bv
b2xcX25hbWV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRoZSBwb29sXF9uYW1lIGZpZWxkIG9m
IHRoZSBnaXZlbiBWTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVy
YmF0aW19IHN0cmluZyBnZXRfY3B1X3Bvb2wgKHNlc3Npb25faWQgcywgVk0gcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0
dAotc3RyaW5nCi19Ci0KLQotbmFtZSBvZiBjcHUgcG9vbCB0byB1c2UKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5jcHVcX3Bvb2xcX21pZ3JhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotTWlncmF0ZSB0aGUgVk0g
dG8gYW5vdGhlciBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJl
Z2lue3ZlcmJhdGltfSB2b2lkIGNwdV9wb29sX21pZ3JhdGUgKHNlc3Npb25faWQgcywgVk0gcmVm
IHNlbGYsIGNwdV9wb29sIHJlZiBwb29sKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBWTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZn0gJiBwb29sICYgcmVmZXJlbmNlIHRv
IG5ldyBjcHVcX3Bvb2wgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFBP
T0xcX0JBRFxfU1RBVEUsIFZNXF9CQURcX1BPV0VSXF9TVEFURX0KLQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnNldFxfcG9vbFxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9Ci1TZXQgY3B1IHBvb2wgbmFtZSB0
byB1c2UgZm9yIG5leHQgYWN0aXZhdGlvbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X3Bvb2xfbmFtZSAoc2Vzc2lvbl9pZCBzLCBWTSBy
ZWYgc2VsZiwgc3RyaW5nIHBvb2xcX25hbWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZQote1x0dCBzdHJpbmd9ICYgcG9vbFxfbmFtZSAmIE5ldyBwb29sIG5h
bWUgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQotCi0KLQotXHZzcGFjZXsxY219Ci1cbmV3
cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZNXF9tZXRyaWNzfQotXHN1YnNlY3Rpb257RmllbGRzIGZv
ciBjbGFzczogVk1cX21ldHJpY3N9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3
aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1u
ezN9e2x8fXtcYmYgVk1cX21ldHJpY3N9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0
aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBtZXRyaWNz
IGFzc29jaWF0ZWQgd2l0aCBhIFZNLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBl
ICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAg
e1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2Ug
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbWVtb3J5L2FjdHVhbH0gJiBp
bnQgJiBHdWVzdCdzIGFjdHVhbCBtZW1vcnkgKGJ5dGVzKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBWQ1BVcy9udW1iZXJ9ICYgaW50ICYgQ3VycmVudCBudW1iZXIgb2Yg
VkNQVXMgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgVkNQVXMvdXRpbGlz
YXRpb259ICYgKGludCAkXHJpZ2h0YXJyb3ckIGZsb2F0KSBNYXAgJiBVdGlsaXNhdGlvbiBmb3Ig
YWxsIG9mIGd1ZXN0J3MgY3VycmVudCBWQ1BVcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCBWQ1BVcy9DUFV9ICYgKGludCAkXHJpZ2h0YXJyb3ckIGludCkgTWFwICYgVkNQ
VSB0byBQQ1BVIG1hcCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBWQ1BV
cy9wYXJhbXN9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgVGhlIGxpdmUg
ZXF1aXZhbGVudCB0byBWTS5WQ1BVc1xfcGFyYW1zIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IFZDUFVzL2ZsYWdzfSAmIChpbnQgJFxyaWdodGFycm93JCBzdHJpbmcgU2V0
KSBNYXAgJiBDUFUgZmxhZ3MgKGJsb2NrZWQsb25saW5lLHJ1bm5pbmcpIFxcCi0kXG1hdGhpdHtS
T31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0YXRlfSAmIHN0cmluZyBTZXQgJiBUaGUgc3RhdGUg
b2YgdGhlIGd1ZXN0LCBlZyBibG9ja2VkLCBkeWluZyBldGMgXFwKLSRcbWF0aGl0e1JPfV9cbWF0
aGl0e3J1bn0kICYgIHtcdHQgc3RhcnRcX3RpbWV9ICYgZGF0ZXRpbWUgJiBUaW1lIGF0IHdoaWNo
IHRoaXMgVk0gd2FzIGxhc3QgYm9vdGVkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IGxhc3RcX3VwZGF0ZWR9ICYgZGF0ZXRpbWUgJiBUaW1lIGF0IHdoaWNoIHRoaXMgaW5m
b3JtYXRpb24gd2FzIGxhc3QgdXBkYXRlZCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxz
dWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBWTVxfbWV0cmljc30KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJu
IGEgbGlzdCBvZiBhbGwgdGhlIFZNXF9tZXRyaWNzIGluc3RhbmNlcyBrbm93biB0byB0aGUgc3lz
dGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgo
Vk1fbWV0cmljcyByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotKFZNXF9tZXRyaWNzIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2Jq
ZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3V1aWQgKHNlc3Np
b25faWQgcywgVk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5n
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tZW1vcnlc
X2FjdHVhbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZW1vcnkvYWN0dWFsIGZpZWxk
IG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X21lbW9yeV9hY3R1YWwgKHNlc3Npb25faWQgcywg
Vk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFs
dWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQ1BVc1xfbnVtYmVyfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZDUFVzL251bWJlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
Vk1cX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gaW50IGdldF9WQ1BVc19udW1iZXIgKHNlc3Npb25faWQgcywgVk1fbWV0cmljcyByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWTVxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQ1BVc1xfdXRpbGlzYXRpb259Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgVkNQVXMvdXRpbGlzYXRpb24gZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9t
ZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgoaW50IC0+IGZsb2F0KSBNYXApIGdldF9WQ1BVc191dGlsaXNhdGlvbiAoc2Vzc2lvbl9pZCBz
LCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oaW50ICRccmlnaHRh
cnJvdyQgZmxvYXQpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfVkNQVXNcX0NQVX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBWQ1BVcy9D
UFUgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoaW50IC0+IGludCkgTWFwKSBnZXRfVkNQVXNf
Q1BVIChzZXNzaW9uX2lkIHMsIFZNX21ldHJpY3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX21ldHJpY3MgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLShpbnQgJFxyaWdodGFycm93JCBpbnQpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfVkNQVXNcX3BhcmFtc30KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBWQ1BVcy9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9tZXRyaWNzLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5n
IC0+IHN0cmluZykgTWFwKSBnZXRfVkNQVXNfcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZNX21ldHJp
Y3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgVk1cX21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBz
dHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfVkNQVXNcX2ZsYWdzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZDUFVzL2ZsYWdz
IGZpZWxkIG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKGludCAtPiBzdHJpbmcgU2V0KSBNYXApIGdldF9W
Q1BVc19mbGFncyAoc2Vzc2lvbl9pZCBzLCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNzIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oaW50ICRccmlnaHRhcnJvdyQgc3RyaW5nIFNldCkgTWFwCi19Ci0KLQotdmFs
dWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9zdGF0ZX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBzdGF0ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX21ldHJpY3MuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBT
ZXQpIGdldF9zdGF0ZSAoc2Vzc2lvbl9pZCBzLCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNz
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1zdHJpbmcgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9zdGFydFxfdGltZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBzdGFydFxfdGltZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX21ldHJpY3MuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZGF0ZXRpbWUgZ2V0X3N0YXJ0
X3RpbWUgKHNlc3Npb25faWQgcywgVk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmljcyByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotZGF0ZXRpbWUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5nZXRcX2xhc3RcX3VwZGF0ZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbGFzdFxf
dXBkYXRlZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZGF0ZXRpbWUgZ2V0X2xhc3RfdXBkYXRl
ZCAoc2Vzc2lvbl9pZCBzLCBWTV9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9tZXRyaWNzIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1kYXRldGltZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUg
Vk1cX21ldHJpY3MgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZNX21ldHJpY3MgcmVmKSBn
ZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlE
IG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZNXF9t
ZXRyaWNzIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWlu
aW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBWTVxfbWV0cmljcy4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fbWV0cmljcyByZWNv
cmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVk1fbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfbWV0cmlj
cyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotVk1cX21ldHJpY3MgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9t
IHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZNXF9ndWVzdFxf
bWV0cmljc30KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZNXF9ndWVzdFxfbWV0cmlj
c30KLVxiZWdpbntsb25ndGFibGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxt
dWx0aWNvbHVtbnsxfXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWTVxfZ3Vl
c3RcX21ldHJpY3N9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0
aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBtZXRyaWNzIHJlcG9ydGVkIGJ5
IHRoZSBndWVzdCAoYXMgb3Bwb3NlZCB0byBpbmZlcnJlZCBmcm9tIG91dHNpZGUpLn19IFxcCi1c
aGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBp
ZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgb3NcX3ZlcnNpb259ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFw
ICYgdmVyc2lvbiBvZiB0aGUgT1MgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgUFZcX2RyaXZlcnNcX3ZlcnNpb259ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykg
TWFwICYgdmVyc2lvbiBvZiB0aGUgUFYgZHJpdmVycyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7
cnVufSQgJiAge1x0dCBtZW1vcnl9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFw
ICYgZnJlZS91c2VkL3RvdGFsIG1lbW9yeSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBkaXNrc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBkaXNr
IGNvbmZpZ3VyYXRpb24vZnJlZSBzcGFjZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBuZXR3b3Jrc30gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBu
ZXR3b3JrIGNvbmZpZ3VyYXRpb24gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgb3RoZXJ9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYW55dGhpbmcg
ZWxzZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBsYXN0XF91cGRhdGVk
fSAmIGRhdGV0aW1lICYgVGltZSBhdCB3aGljaCB0aGlzIGluZm9ybWF0aW9uIHdhcyBsYXN0IHVw
ZGF0ZWQgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29j
aWF0ZWQgd2l0aCBjbGFzczogVk1cX2d1ZXN0XF9tZXRyaWNzfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFs
bCB0aGUgVk1cX2d1ZXN0XF9tZXRyaWNzIGluc3RhbmNlcyBrbm93biB0byB0aGUgc3lzdGVtLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoVk1fZ3Vl
c3RfbWV0cmljcyByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotKFZNXF9ndWVzdFxfbWV0cmljcyByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8g
YWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0XF9tZXRyaWNzLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBn
ZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZNXF9ndWVzdFxf
bWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9vc1xfdmVyc2lvbn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBvc1xfdmVyc2lvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0XF9tZXRyaWNzLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5n
IC0+IHN0cmluZykgTWFwKSBnZXRfb3NfdmVyc2lvbiAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9t
ZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZNXF9ndWVzdFxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmluZyAkXHJp
Z2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9QVlxfZHJpdmVyc1xfdmVyc2lvbn0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBQVlxfZHJpdmVyc1xfdmVyc2lvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0
XF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfUFZfZHJpdmVyc192ZXJzaW9uIChzZXNz
aW9uX2lkIHMsIFZNX2d1ZXN0X21ldHJpY3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX2d1ZXN0XF9tZXRyaWNzIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeX0KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBtZW1vcnkgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9ndWVzdFxfbWV0cmlj
cy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0
cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X21lbW9yeSAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9t
ZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZNXF9ndWVzdFxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmluZyAkXHJp
Z2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9kaXNrc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBkaXNrcyBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVk1cX2d1ZXN0XF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYg
U2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBn
ZXRfZGlza3MgKHNlc3Npb25faWQgcywgVk1fZ3Vlc3RfbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxfZ3Vlc3Rc
X21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmV0d29ya3N9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmV0d29ya3MgZmllbGQgb2YgdGhlIGdpdmVu
IFZNXF9ndWVzdFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X25ldHdvcmtzIChzZXNz
aW9uX2lkIHMsIFZNX2d1ZXN0X21ldHJpY3MgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX2d1ZXN0XF9tZXRyaWNzIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX290aGVyfQotCi17XGJmIE92ZXJ2aWV3
On0gCi1HZXQgdGhlIG90aGVyIGZpZWxkIG9mIHRoZSBnaXZlbiBWTVxfZ3Vlc3RcX21ldHJpY3Mu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJp
bmcgLT4gc3RyaW5nKSBNYXApIGdldF9vdGhlciAoc2Vzc2lvbl9pZCBzLCBWTV9ndWVzdF9tZXRy
aWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZNXF9ndWVzdFxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmluZyAkXHJpZ2h0
YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9sYXN0XF91cGRhdGVkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGxh
c3RcX3VwZGF0ZWQgZmllbGQgb2YgdGhlIGdpdmVuIFZNXF9ndWVzdFxfbWV0cmljcy4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRf
bGFzdF91cGRhdGVkIChzZXNzaW9uX2lkIHMsIFZNX2d1ZXN0X21ldHJpY3MgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVk1cX2d1
ZXN0XF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1kYXRldGltZQotfQotCi0KLXZhbHVlIG9mIHRoZSBm
aWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCBhIHJlZmVyZW5jZSB0byB0aGUgVk1cX2d1ZXN0XF9tZXRyaWNzIGluc3RhbmNlIHdpdGgg
dGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IChWTV9ndWVzdF9tZXRyaWNzIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25f
aWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJu
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTVxfZ3Vlc3RcX21ldHJpY3MgcmVmCi19
Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29y
ZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJl
bnQgc3RhdGUgb2YgdGhlIGdpdmVuIFZNXF9ndWVzdFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk1fZ3Vlc3RfbWV0cmljcyByZWNv
cmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVk1fZ3Vlc3RfbWV0cmljcyByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWTVxf
Z3Vlc3RcX21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZNXF9ndWVzdFxfbWV0cmljcyByZWNvcmQKLX0K
LQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlv
bntDbGFzczogaG9zdH0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IGhvc3R9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgaG9zdH0gXFwKLVxtdWx0
aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94
ezExY219e1xlbSBBCi1waHlzaWNhbCBob3N0Ln19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQg
JiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZl
cmVuY2UgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBuYW1lL2xhYmVsfSAmIHN0cmluZyAmIGEg
aHVtYW4tcmVhZGFibGUgbmFtZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG5hbWUvZGVzY3Jp
cHRpb259ICYgc3RyaW5nICYgYSBub3RlcyBmaWVsZCBjb250YWluZyBodW1hbi1yZWFkYWJsZSBk
ZXNjcmlwdGlvbiBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBBUElcX3Zl
cnNpb24vbWFqb3J9ICYgaW50ICYgbWFqb3IgdmVyc2lvbiBudW1iZXIgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgQVBJXF92ZXJzaW9uL21pbm9yfSAmIGludCAmIG1pbm9y
IHZlcnNpb24gbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IEFQ
SVxfdmVyc2lvbi92ZW5kb3J9ICYgc3RyaW5nICYgaWRlbnRpZmljYXRpb24gb2YgdmVuZG9yIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IEFQSVxfdmVyc2lvbi92ZW5kb3Jc
X2ltcGxlbWVudGF0aW9ufSAmIChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcCAmIGRl
dGFpbHMgb2YgdmVuZG9yIGltcGxlbWVudGF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IGVuYWJsZWR9ICYgYm9vbCAmIFRydWUgaWYgdGhlIGhvc3QgaXMgY3VycmVu
dGx5IGVuYWJsZWQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc29mdHdh
cmVcX3ZlcnNpb259ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgdmVyc2lv
biBzdHJpbmdzIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgb3RoZXJcX2NvbmZpZ30gJiAoc3Ry
aW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgJiBhZGRpdGlvbmFsIGNvbmZpZ3VyYXRpb24g
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgY2FwYWJpbGl0aWVzfSAmIHN0
cmluZyBTZXQgJiBYZW4gY2FwYWJpbGl0aWVzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IGNwdVxfY29uZmlndXJhdGlvbn0gJiAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3Ry
aW5nKSBNYXAgJiBUaGUgQ1BVIGNvbmZpZ3VyYXRpb24gb24gdGhpcyBob3N0LiAgTWF5IGNvbnRh
aW4ga2V5cyBzdWNoIGFzIGBgbnJcX25vZGVzJycsIGBgc29ja2V0c1xfcGVyXF9ub2RlJycsIGBg
Y29yZXNcX3Blclxfc29ja2V0JycsIG9yIGBgdGhyZWFkc1xfcGVyXF9jb3JlJycgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc2NoZWRcX3BvbGljeX0gJiBzdHJpbmcgJiBT
Y2hlZHVsZXIgcG9saWN5IGN1cnJlbnRseSBpbiBmb3JjZSBvbiB0aGlzIGhvc3QgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc3VwcG9ydGVkXF9ib290bG9hZGVyc30gJiBz
dHJpbmcgU2V0ICYgYSBsaXN0IG9mIHRoZSBib290bG9hZGVycyBpbnN0YWxsZWQgb24gdGhlIG1h
Y2hpbmUgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcmVzaWRlbnRcX1ZN
c30gJiAoVk0gcmVmKSBTZXQgJiBsaXN0IG9mIFZNcyBjdXJyZW50bHkgcmVzaWRlbnQgb24gaG9z
dCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IGxvZ2dpbmd9ICYgKHN0cmluZyAkXHJpZ2h0YXJy
b3ckIHN0cmluZykgTWFwICYgbG9nZ2luZyBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IFBJRnN9ICYgKFBJRiByZWYpIFNldCAmIHBoeXNpY2FsIG5l
dHdvcmsgaW50ZXJmYWNlcyBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IHN1c3BlbmRcX2ltYWdl
XF9zcn0gJiBTUiByZWYgJiBUaGUgU1IgaW4gd2hpY2ggVkRJcyBmb3Igc3VzcGVuZCBpbWFnZXMg
YXJlIGNyZWF0ZWQgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBjcmFzaFxfZHVtcFxfc3J9ICYg
U1IgcmVmICYgVGhlIFNSIGluIHdoaWNoIFZESXMgZm9yIGNyYXNoIGR1bXBzIGFyZSBjcmVhdGVk
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFBCRHN9ICYgKFBCRCByZWYp
IFNldCAmIHBoeXNpY2FsIGJsb2NrZGV2aWNlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCBQUENJc30gJiAoUFBDSSByZWYpIFNldCAmIHBoeXNpY2FsIFBDSSBkZXZpY2Vz
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFBTQ1NJc30gJiAoUFNDU0kg
cmVmKSBTZXQgJiBwaHlzaWNhbCBTQ1NJIGRldmljZXMgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0
e3J1bn0kICYgIHtcdHQgUFNDU0lcX0hCQXN9ICYgKFBTQ1NJXF9IQkEgcmVmKSBTZXQgJiBwaHlz
aWNhbCBTQ1NJIGhvc3QgYnVzIGFkYXB0ZXJzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IGhvc3RcX0NQVXN9ICYgKGhvc3RcX2NwdSByZWYpIFNldCAmIFRoZSBwaHlzaWNh
bCBDUFVzIG9uIHRoaXMgaG9zdCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBtZXRyaWNzfSAmIGhvc3RcX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGgg
dGhpcyBob3N0IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHJlc2lkZW50
XF9jcHVcX3Bvb2xzfSAmIChjcHVcX3Bvb2wgcmVmKSBTZXQgJiBsaXN0IG9mIGNwdVxfcG9vbHMg
Y3VycmVudGx5IHJlc2lkZW50IG9uIHRoZSBob3N0IFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxl
fQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IGhvc3R9Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+ZGlzYWJsZX0KLQote1xiZiBPdmVydmlldzp9IAotUHV0cyB0aGUg
aG9zdCBpbnRvIGEgc3RhdGUgaW4gd2hpY2ggbm8gbmV3IFZNcyBjYW4gYmUgc3RhcnRlZC4gQ3Vy
cmVudGx5Ci1hY3RpdmUgVk1zIG9uIHRoZSBob3N0IGNvbnRpbnVlIHRvIGV4ZWN1dGUuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkaXNhYmxl
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0
byBkaXNhYmxlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+ZW5hYmxlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1QdXRzIHRoZSBob3N0
IGludG8gYSBzdGF0ZSBpbiB3aGljaCBuZXcgVk1zIGNhbiBiZSBzdGFydGVkLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZW5hYmxlIChzZXNz
aW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0byBlbmFi
bGUgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5zaHV0ZG93bn0KLQote1xiZiBPdmVydmlldzp9IAotU2h1dGRvd24gdGhlIGhvc3Qu
IChUaGlzIGZ1bmN0aW9uIGNhbiBvbmx5IGJlIGNhbGxlZCBpZiB0aGVyZSBhcmUgbm8KLWN1cnJl
bnRseSBydW5uaW5nIFZNcyBvbiB0aGUgaG9zdCBhbmQgaXQgaXMgZGlzYWJsZWQuKS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNodXRkb3du
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0
byBzaHV0ZG93biBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnJlYm9vdH0KLQote1xiZiBPdmVydmlldzp9IAotUmVib290IHRoZSBo
b3N0LiAoVGhpcyBmdW5jdGlvbiBjYW4gb25seSBiZSBjYWxsZWQgaWYgdGhlcmUgYXJlIG5vCi1j
dXJyZW50bHkgcnVubmluZyBWTXMgb24gdGhlIGhvc3QgYW5kIGl0IGlzIGRpc2FibGVkLikuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCByZWJv
b3QgKHNlc3Npb25faWQgcywgaG9zdCByZWYgaG9zdClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgaG9zdCAmIFRoZSBIb3N0
IHRvIHJlYm9vdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmRtZXNnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGhvc3Qg
eGVuIGRtZXNnLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHN0cmluZyBkbWVzZyAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBob3N0KVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBo
b3N0ICYgVGhlIEhvc3QgdG8gcXVlcnkgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZwotfQotCi0KLWRtZXNnIHN0cmluZwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRtZXNnXF9jbGVhcn0K
LQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBob3N0IHhlbiBkbWVzZywgYW5kIGNsZWFyIHRo
ZSBidWZmZXIuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gc3RyaW5nIGRtZXNnX2NsZWFyIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYg
fSAmIGhvc3QgJiBUaGUgSG9zdCB0byBxdWVyeSBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotc3RyaW5nCi19Ci0KLQotZG1lc2cgc3RyaW5nCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9sb2d9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgaG9zdCdzIGxvZyBmaWxlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfbG9nIChz
ZXNzaW9uX2lkIHMsIGhvc3QgcmVmIGhvc3QpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIGhvc3QgJiBUaGUgSG9zdCB0byBx
dWVyeSBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotVGhlIGNv
bnRlbnRzIG9mIHRoZSBob3N0J3MgcHJpbWFyeSBsb2cgZmlsZQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
bmRcX2RlYnVnXF9rZXlzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1JbmplY3QgdGhlIGdpdmVuIHN0
cmluZyBhcyBkZWJ1Z2dpbmcga2V5cyBpbnRvIFhlbi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNlbmRfZGVidWdfa2V5cyAoc2Vzc2lvbl9p
ZCBzLCBob3N0IHJlZiBob3N0LCBzdHJpbmcga2V5cylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgaG9zdCAmIFRoZSBob3N0
IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleXMgJiBUaGUga2V5cyB0byBzZW5kIFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+bGlzdFxfbWV0aG9kc30KLQote1xiZiBPdmVydmlldzp9IAotTGlzdCBhbGwgc3VwcG9ydGVk
IG1ldGhvZHMuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKHN0cmluZyBTZXQpIGxpc3RfbWV0aG9kcyAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotc3RyaW5nIFNldAotfQotCi0KLVRoZSBuYW1lIG9mIGV2ZXJ5IHN1cHBvcnRlZCBtZXRo
b2QuCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVy
biBhIGxpc3Qgb2YgYWxsIHRoZSBob3N0cyBrbm93biB0byB0aGUgc3lzdGVtLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoaG9zdCByZWYpIFNldCkg
Z2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3QgcmVmKSBTZXQK
LX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRoZSBJRHMgb2YgYWxsIHRoZSBob3N0cwotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxk
IG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhv
c3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUg
bmFtZS9sYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Np
b25faWQgcywgaG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2xhYmVsfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1TZXQgdGhlIG5hbWUvbGFiZWwgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3Qu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9z
dCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtc
dHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX25hbWVcX2Rl
c2NyaXB0aW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24g
ZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lk
IHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2Yg
dGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uYW1lXF9kZXNjcmlwdGlvbn0KLQote1xi
ZiBPdmVydmlldzp9IAotU2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxkIG9mIHRoZSBnaXZl
biBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHZvaWQgc2V0X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwg
c3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9BUElcX3ZlcnNpb25cX21ham9yfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIEFQ
SVxfdmVyc2lvbi9tYWpvciBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X0FQSV92ZXJzaW9uX21h
am9yIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9BUElcX3ZlcnNpb25c
X21pbm9yfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIEFQSVxfdmVyc2lvbi9taW5vciBm
aWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X0FQSV92ZXJzaW9uX21pbm9yIChzZXNzaW9uX2lkIHMs
IGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9BUElcX3ZlcnNpb25cX3ZlbmRvcn0KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBBUElcX3ZlcnNpb24vdmVuZG9yIGZpZWxkIG9mIHRoZSBnaXZl
biBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHN0cmluZyBnZXRfQVBJX3ZlcnNpb25fdmVuZG9yIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
aG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9BUElcX3ZlcnNpb25cX3ZlbmRvclxfaW1wbGVtZW50YXRpb259Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgQVBJXF92ZXJzaW9uL3ZlbmRvclxfaW1wbGVtZW50
YXRpb24gZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3RyaW5nKSBNYXApIGdldF9BUElf
dmVyc2lvbl92ZW5kb3JfaW1wbGVtZW50YXRpb24gKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBo
b3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2VuYWJsZWR9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZW5hYmxlZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdl
dF9lbmFibGVkIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotYm9v
bAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc29mdHdh
cmVcX3ZlcnNpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgc29mdHdhcmVcX3ZlcnNp
b24gZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3RyaW5nKSBNYXApIGdldF9zb2Z0d2Fy
ZV92ZXJzaW9uIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0
cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1HZXQgdGhlIG90aGVyXF9jb25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChzdHJpbmcgLT4gc3Ry
aW5nKSBNYXApIGdldF9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX290aGVyXF9jb25maWd9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2
ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHNldF9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgKHN0
cmluZyAtPiBzdHJpbmcpIE1hcCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5n
KSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmFkZFxfdG9cX290aGVyXF9j
b25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUFkZCB0aGUgZ2l2ZW4ga2V5LXZhbHVlIHBhaXIg
dG8gdGhlIG90aGVyXF9jb25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBhZGRfdG9fb3RoZXJf
Y29uZmlnIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2
YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byBhZGQgXFwgXGhsaW5lIAotCi17XHR0IHN0
cmluZyB9ICYgdmFsdWUgJiBWYWx1ZSB0byBhZGQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5yZW1vdmVcX2Zyb21cX290aGVyXF9j
b25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0aGUgZ2l2ZW4ga2V5IGFuZCBpdHMg
Y29ycmVzcG9uZGluZyB2YWx1ZSBmcm9tIHRoZSBvdGhlclxfY29uZmlnCi1maWVsZCBvZiB0aGUg
Z2l2ZW4gaG9zdC4gIElmIHRoZSBrZXkgaXMgbm90IGluIHRoYXQgTWFwLCB0aGVuIGRvIG5vdGhp
bmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCByZW1vdmVfZnJvbV9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwg
c3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Nh
cGFiaWxpdGllc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBjYXBhYmlsaXRpZXMgZmll
bGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQpIGdldF9jYXBhYmlsaXRpZXMgKHNlc3Npb25faWQg
cywgaG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcgU2V0Ci19Ci0KLQotdmFsdWUg
b2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9jcHVcX2NvbmZpZ3VyYXRpb259Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgY3B1XF9jb25maWd1cmF0aW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfY3B1X2NvbmZpZ3VyYXRpb24gKHNl
c3Npb25faWQgcywgaG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRh
cnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3NjaGVkXF9wb2xpY3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgc2No
ZWRcX3BvbGljeSBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3NjaGVkX3BvbGljeSAoc2Vz
c2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc3VwcG9ydGVkXF9ib290bG9h
ZGVyc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdXBwb3J0ZWRcX2Jvb3Rsb2FkZXJz
IGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IChzdHJpbmcgU2V0KSBnZXRfc3VwcG9ydGVkX2Jvb3Rsb2FkZXJz
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nIFNldAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVzaWRlbnRc
X1ZNc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSByZXNpZGVudFxfVk1zIGZpZWxkIG9m
IHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19ICgoVk0gcmVmKSBTZXQpIGdldF9yZXNpZGVudF9WTXMgKHNlc3Npb25faWQgcywg
aG9zdCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVk0gcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2xvZ2dpbmd9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgbG9nZ2luZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJp
bmcpIE1hcCkgZ2V0X2xvZ2dpbmcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2xvZ2dpbmd9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLVNldCB0aGUgbG9nZ2luZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9sb2dnaW5n
IChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFs
dWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
aG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2YWx1ZSAmIE5ldyB2
YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9sb2dnaW5nfQotCi17XGJmIE92ZXJ2aWV3On0gCi1B
ZGQgdGhlIGdpdmVuIGtleS12YWx1ZSBwYWlyIHRvIHRoZSBsb2dnaW5nIGZpZWxkIG9mIHRoZSBn
aXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHZvaWQgYWRkX3RvX2xvZ2dpbmcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgc3Ry
aW5nIGtleSwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92
ZVxfZnJvbVxfbG9nZ2luZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBr
ZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIGxvZ2dpbmcgZmllbGQgb2YK
LXRoZSBnaXZlbiBob3N0LiAgSWYgdGhlIGtleSBpcyBub3QgaW4gdGhhdCBNYXAsIHRoZW4gZG8g
bm90aGluZy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHJlbW92ZV9mcm9tX2xvZ2dpbmcgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwg
c3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BJ
RnN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgUElGcyBmaWVsZCBvZiB0aGUgZ2l2ZW4g
aG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAo
KFBJRiByZWYpIFNldCkgZ2V0X1BJRnMgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oUElGIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9zdXNwZW5kXF9pbWFnZVxfc3J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgc3VzcGVuZFxfaW1hZ2VcX3NyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChTUiByZWYpIGdl
dF9zdXNwZW5kX2ltYWdlX3NyIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotU1IgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
c2V0XF9zdXNwZW5kXF9pbWFnZVxfc3J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgc3Vz
cGVuZFxfaW1hZ2VcX3NyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X3N1c3BlbmRfaW1hZ2Vf
c3IgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgU1IgcmVmIHZhbHVlKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IFNSIHJlZiB9
ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9jcmFzaFxfZHVtcFxfc3J9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgY3Jhc2hcX2R1bXBcX3NyIGZpZWxkIG9mIHRoZSBn
aXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IChTUiByZWYpIGdldF9jcmFzaF9kdW1wX3NyIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
aG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotU1IgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+c2V0XF9jcmFzaFxfZHVtcFxfc3J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNl
dCB0aGUgY3Jhc2hcX2R1bXBcX3NyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2NyYXNoX2R1
bXBfc3IgKHNlc3Npb25faWQgcywgaG9zdCByZWYgc2VsZiwgU1IgcmVmIHZhbHVlKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IFNSIHJl
ZiB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9QQkRzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIFBCRHMgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChQQkQgcmVmKSBTZXQp
IGdldF9QQkRzIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBC
RCByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfUFBDSXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgUFBDSXMgZmllbGQgb2YgdGhl
IGdpdmVuIGhvc3QuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKChQUENJIHJlZikgU2V0KSBnZXRfUFBDSXMgKHNlc3Npb25faWQgcywgaG9zdCByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi0oUFBDSSByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBm
aWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUFNDU0lzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1H
ZXQgdGhlIFBTQ1NJcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFBTQ1NJIHJlZikgU2V0KSBnZXRfUFND
U0lzIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oUFNDU0kgcmVm
KSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BT
Q1NJXF9IQkFzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBTQ1NJXF9IQkFzIGZpZWxk
IG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19ICgoUFNDU0lfSEJBIHJlZikgU2V0KSBnZXRfUFNDU0lfSEJBcyAoc2Vzc2lv
bl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBTQ1NJXF9IQkEgcmVmKSBTZXQK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2hvc3RcX0NQ
VXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgaG9zdFxfQ1BVcyBmaWVsZCBvZiB0aGUg
Z2l2ZW4gaG9zdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSAoKGhvc3RfY3B1IHJlZikgU2V0KSBnZXRfaG9zdF9DUFVzIChzZXNzaW9uX2lkIHMsIGhv
c3QgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgaG9zdCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3RcX2NwdSByZWYpIFNldAotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWV0cmljc30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBtZXRyaWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0LgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0X21ldHJp
Y3MgcmVmKSBnZXRfbWV0cmljcyAoc2Vzc2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWhvc3RcX21ldHJpY3MgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBob3N0IGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0IHJlZikg
Z2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJ
RCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ob3N0
IHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRo
ZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBob3N0LgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0IHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vz
c2lvbl9pZCBzLCBob3N0IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3QgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVjb3JkCi19Ci0K
LQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF9u
YW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGFsbCB0aGUgaG9zdCBpbnN0YW5j
ZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKChob3N0IHJlZikgU2V0KSBnZXRfYnlfbmFtZV9sYWJlbCAoc2Vz
c2lvbl9pZCBzLCBzdHJpbmcgbGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBsYWJlbCAmIGxhYmVsIG9mIG9iamVjdCB0
byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShob3N0IHJlZikgU2V0Ci19
Ci0KLQotcmVmZXJlbmNlcyB0byBvYmplY3RzIHdpdGggbWF0Y2ggbmFtZXMKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3Jlc2lkZW50XF9jcHVcX3Bvb2xzfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0
aGUgcmVzaWRlbnRcX2NwdVxfcG9vbHMgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3QuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoKGNwdV9wb29sIHJlZikg
U2V0KSBnZXRfcmVzaWRlbnRfY3B1X3Bvb2xzIChzZXNzaW9uX2lkIHMsIGhvc3QgcmVmIHNlbGYp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fQote1x0dAotKGNwdVxfcG9vbCByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIGtu
b3duIGNwdVxfcG9vbHMuCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IGhvc3Rc
X21ldHJpY3N9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBob3N0XF9tZXRyaWNzfQot
XGJlZ2lue2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRp
Y29sdW1uezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIGhvc3RcX21ldHJp
Y3N9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnsz
fXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCBhIGhv
c3QufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAot
XGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5n
ICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCBtZW1vcnkvdG90YWx9ICYgaW50ICYgSG9zdCdzIHRvdGFsIG1l
bW9yeSAoYnl0ZXMpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IG1lbW9y
eS9mcmVlfSAmIGludCAmIEhvc3QncyBmcmVlIG1lbW9yeSAoYnl0ZXMpIFxcCi0kXG1hdGhpdHtS
T31fXG1hdGhpdHtydW59JCAmICB7XHR0IGxhc3RcX3VwZGF0ZWR9ICYgZGF0ZXRpbWUgJiBUaW1l
IGF0IHdoaWNoIHRoaXMgaW5mb3JtYXRpb24gd2FzIGxhc3QgdXBkYXRlZCBcXAotXGhsaW5lCi1c
ZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBo
b3N0XF9tZXRyaWNzfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgaG9zdFxfbWV0cmljcyBpbnN0
YW5jZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoKGhvc3RfbWV0cmljcyByZWYpIFNldCkgZ2V0X2FsbCAoc2Vz
c2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3RcX21ldHJpY3MgcmVmKSBTZXQKLX0K
LQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlk
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIGhv
c3RcX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBo
b3N0XF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfdG90YWx9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgbWVtb3J5L3RvdGFsIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9tZXRyaWNz
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGludCBn
ZXRfbWVtb3J5X3RvdGFsIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXsw
LjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9
ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9t
ZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX21lbW9yeVxfZnJlZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBtZW1vcnkvZnJlZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdFxfbWV0cmljcy4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X21lbW9yeV9m
cmVlIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9tZXRyaWNzIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX2xhc3RcX3VwZGF0ZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbGFzdFxfdXBk
YXRlZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRfbGFzdF91cGRhdGVk
IChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9tZXRyaWNzIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi1kYXRldGltZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0
aGUgaG9zdFxfbWV0cmljcyBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdF9tZXRyaWNz
IHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlk
ICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1ob3N0XF9tZXRyaWNzIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29y
ZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBob3N0XF9tZXRyaWNz
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0
X21ldHJpY3MgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIGhvc3RfbWV0cmljcyBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBob3N0XF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ob3N0XF9tZXRyaWNzIHJlY29yZAotfQot
Ci0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9u
e0NsYXNzOiBob3N0XF9jcHV9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBob3N0XF9j
cHV9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1c
bXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgaG9zdFxf
Y3B1fSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57
M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEgcGh5c2ljYWwgQ1BVfX0gXFwKLVxobGluZQotUXVh
bHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIv
b2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBo
b3N0fSAmIGhvc3QgcmVmICYgdGhlIGhvc3QgdGhlIENQVSBpcyBpbiBcXAotJFxtYXRoaXR7Uk99
X1xtYXRoaXR7cnVufSQgJiAge1x0dCBudW1iZXJ9ICYgaW50ICYgdGhlIG51bWJlciBvZiB0aGUg
cGh5c2ljYWwgQ1BVIHdpdGhpbiB0aGUgaG9zdCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiAge1x0dCB2ZW5kb3J9ICYgc3RyaW5nICYgdGhlIHZlbmRvciBvZiB0aGUgcGh5c2ljYWwg
Q1BVIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHNwZWVkfSAmIGludCAm
IHRoZSBzcGVlZCBvZiB0aGUgcGh5c2ljYWwgQ1BVIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IG1vZGVsbmFtZX0gJiBzdHJpbmcgJiB0aGUgbW9kZWwgbmFtZSBvZiB0aGUg
cGh5c2ljYWwgQ1BVIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0ZXBw
aW5nfSAmIHN0cmluZyAmIHRoZSBzdGVwcGluZyBvZiB0aGUgcGh5c2ljYWwgQ1BVIFxcCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGZsYWdzfSAmIHN0cmluZyAmIHRoZSBmbGFn
cyBvZiB0aGUgcGh5c2ljYWwgQ1BVIChhIGRlY29kZWQgdmVyc2lvbiBvZiB0aGUgZmVhdHVyZXMg
ZmllbGQpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGZlYXR1cmVzfSAm
IHN0cmluZyAmIHRoZSBwaHlzaWNhbCBDUFUgZmVhdHVyZSBiaXRtYXAgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXRpbGlzYXRpb259ICYgZmxvYXQgJiB0aGUgY3VycmVu
dCBDUFUgdXRpbGlzYXRpb24gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQg
Y3B1XF9wb29sfSAmIChjcHVcX3Bvb2wgcmVmKSBTZXQgJiByZWZlcmVuY2UgdG8gY3B1XF9wb29s
IHRoZSBjcHUgYmVsb25ncyB0byBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0
aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBob3N0XF9jcHV9Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qg
b2YgYWxsIHRoZSBob3N0XF9jcHVzIGtub3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChob3N0X2NwdSByZWYpIFNldCkg
Z2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGhvc3RcX2NwdSByZWYp
IFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUg
Z2l2ZW4gaG9zdFxfY3B1LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBob3N0X2NwdSByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBo
b3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBo
b3N0IGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3QgcmVmKSBnZXRfaG9zdCAoc2Vzc2lvbl9p
ZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9udW1iZXJ9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbnVtYmVyIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9j
cHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50
IGdldF9udW1iZXIgKHNlc3Npb25faWQgcywgaG9zdF9jcHUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3ZlbmRvcn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2ZW5kb3IgZmllbGQgb2Yg
dGhlIGdpdmVuIGhvc3RcX2NwdS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3ZlbmRvciAoc2Vzc2lvbl9pZCBzLCBob3N0X2NwdSBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBm
aWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc3BlZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCB0aGUgc3BlZWQgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3RcX2NwdS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3NwZWVkIChzZXNzaW9u
X2lkIHMsIGhvc3RfY3B1IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IGhvc3RcX2NwdSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tb2RlbG5hbWV9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbW9kZWxuYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0
XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
c3RyaW5nIGdldF9tb2RlbG5hbWUgKHNlc3Npb25faWQgcywgaG9zdF9jcHUgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdFxf
Y3B1IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3N0ZXBwaW5nfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN0
ZXBwaW5nIGZpZWxkIG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBT
aWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9zdGVwcGluZyAoc2Vzc2lv
bl9pZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfZmxhZ3N9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZmxhZ3MgZmllbGQgb2YgdGhlIGdpdmVuIGhvc3RcX2Nw
dS4gIEFzIG9mIHRoaXMgdmVyc2lvbiBvZiB0aGUKLUFQSSwgdGhlIHNlbWFudGljcyBvZiB0aGUg
cmV0dXJuZWQgc3RyaW5nIGFyZSBleHBsaWNpdGx5IHVuc3BlY2lmaWVkLAotYW5kIG1heSBjaGFu
Z2UgaW4gdGhlIGZ1dHVyZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2ZsYWdzIChzZXNzaW9uX2lkIHMsIGhvc3RfY3B1IHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IGhvc3RcX2NwdSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxk
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9mZWF0dXJlc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBmZWF0dXJlcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gaG9zdFxfY3B1LiBBcyBvZiB0aGlzIHZl
cnNpb24gb2YgdGhlCi1BUEksIHRoZSBzZW1hbnRpY3Mgb2YgdGhlIHJldHVybmVkIHN0cmluZyBh
cmUgZXhwbGljaXRseSB1bnNwZWNpZmllZCwKLWFuZCBtYXkgY2hhbmdlIGluIHRoZSBmdXR1cmUu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5n
IGdldF9mZWF0dXJlcyAoc2Vzc2lvbl9pZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfdXRpbGlzYXRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXRpbGlz
YXRpb24gZmllbGQgb2YgdGhlIGdpdmVuIGhvc3RcX2NwdS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBmbG9hdCBnZXRfdXRpbGlzYXRpb24gKHNlc3Np
b25faWQgcywgaG9zdF9jcHUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1mbG9hdAotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUgaG9zdFxfY3B1IGluc3Rh
bmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IChob3N0X2NwdSByZWYpIGdldF9ieV91dWlkIChzZXNzaW9u
X2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVy
biBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdFxfY3B1IHJlZgotfQotCi0KLXJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRl
IG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3RfY3B1IHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9p
ZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBob3N0XF9jcHUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3RcX2NwdSByZWNv
cmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfY3B1XF9wb29sfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgY3B1XF9wb29sIGZpZWxk
IG9mIHRoZSBnaXZlbiBob3N0XF9jcHUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQot
XGJlZ2lue3ZlcmJhdGltfSAoKGNwdV9wb29sKSBTZXQpIGdldF9jcHVfcG9vbCAoc2Vzc2lvbl9p
ZCBzLCBob3N0X2NwdSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IGhvc3RcX2NwdSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKGNwdVxfcG9vbCkgU2V0Ci19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91bmFzc2lnbmVk
XF9jcHVzfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCBhIHJlZmVyZW5jZSB0byBhbGwgY3B1cyB0
aGF0IGFyZSBub3QgYXNzaWdlbmQgdG8gYW55IGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19ICgoaG9zdF9jcHUpIFNldCkgZ2V0X3VuYXNz
aWduZWRfY3B1cyAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLShob3N0XF9jcHUgcmVmKSBTZXQKLX0KLQot
Ci1TZXQgb2YgZnJlZSAobm90IGFzc2lnbmVkKSBjcHVzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1c
c2VjdGlvbntDbGFzczogbmV0d29ya30KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IG5l
dHdvcmt9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5l
Ci1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgbmV0
d29ya30gXFwKLVxtdWx0aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1u
ezN9e2x8fXtccGFyYm94ezExY219e1xlbSBBCi12aXJ0dWFsIG5ldHdvcmsufX0gXFwKLVxobGlu
ZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50
aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG5hbWUvbGFi
ZWx9ICYgc3RyaW5nICYgYSBodW1hbi1yZWFkYWJsZSBuYW1lIFxcCi0kXG1hdGhpdHtSV30kICYg
IHtcdHQgbmFtZS9kZXNjcmlwdGlvbn0gJiBzdHJpbmcgJiBhIG5vdGVzIGZpZWxkIGNvbnRhaW5n
IGh1bWFuLXJlYWRhYmxlIGRlc2NyaXB0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IFZJRnN9ICYgKFZJRiByZWYpIFNldCAmIGxpc3Qgb2YgY29ubmVjdGVkIHZpZnMg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgUElGc30gJiAoUElGIHJlZikg
U2V0ICYgbGlzdCBvZiBjb25uZWN0ZWQgcGlmcyBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IGRl
ZmF1bHRcX2dhdGV3YXl9ICYgc3RyaW5nICYgZGVmYXVsdCBnYXRld2F5IFxcCi0kXG1hdGhpdHtS
V30kICYgIHtcdHQgZGVmYXVsdFxfbmV0bWFza30gJiBzdHJpbmcgJiBkZWZhdWx0IG5ldG1hc2sg
XFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBvdGhlclxfY29uZmlnfSAmIChzdHJpbmcgJFxyaWdo
dGFycm93JCBzdHJpbmcpIE1hcCAmIGFkZGl0aW9uYWwgY29uZmlndXJhdGlvbiBcXAotXGhsaW5l
Ci1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNz
OiBuZXR3b3JrfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgbmV0d29ya3Mga25vd24gdG8gdGhl
IHN5c3RlbQotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgobmV0d29yayByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotKG5ldHdvcmsgcmVmKSBTZXQKLX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRoZSBJRHMgb2Yg
YWxsIHRoZSBuZXR3b3JrcwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVp
ZCAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmlu
ZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxf
bGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFtZS9sYWJlbCBmaWVsZCBvZiB0
aGUgZ2l2ZW4gbmV0d29yay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Npb25faWQgcywgbmV0d29yayBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBuZXR3b3JrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1TZXQgdGhlIG5hbWUvbGFiZWwgZmllbGQgb2YgdGhlIGdpdmVuIG5ldHdvcmsuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbmFtZV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX25hbWVcX2Rlc2Ny
aXB0aW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24gZmll
bGQgb2YgdGhlIGdpdmVuIG5ldHdvcmsuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lk
IHMsIG5ldHdvcmsgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFs
dWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uYW1lXF9kZXNjcmlwdGlvbn0K
LQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgc2V0X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgbmV0d29y
ayByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFs
dWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9WSUZzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZJ
RnMgZmllbGQgb2YgdGhlIGdpdmVuIG5ldHdvcmsuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWSUYgcmVmKSBTZXQpIGdldF9WSUZzIChzZXNzaW9u
X2lkIHMsIG5ldHdvcmsgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZJRiByZWYpIFNldAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfUElGc30KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQSUZzIGZpZWxkIG9mIHRoZSBnaXZlbiBuZXR3b3Jr
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoUElG
IHJlZikgU2V0KSBnZXRfUElGcyAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLShQSUYgcmVmKSBTZXQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX2RlZmF1bHRcX2dhdGV3YXl9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgZGVmYXVsdFxfZ2F0ZXdheSBmaWVsZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0
X2RlZmF1bHRfZ2F0ZXdheSAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fnNldFxfZGVmYXVsdFxfZ2F0ZXdheX0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBk
ZWZhdWx0XF9nYXRld2F5IGZpZWxkIG9mIHRoZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2RlZmF1bHRfZ2F0
ZXdheSAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQg
c3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RlZmF1bHRcX25l
dG1hc2t9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgZGVmYXVsdFxfbmV0bWFzayBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2RlZmF1bHRfbmV0bWFzayAoc2Vzc2lvbl9pZCBz
LCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfZGVmYXVsdFxfbmV0bWFza30KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBkZWZhdWx0XF9uZXRtYXNrIGZpZWxkIG9mIHRoZSBn
aXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgc2V0X2RlZmF1bHRfbmV0bWFzayAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJl
ZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0
byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0
aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcp
IE1hcCkgZ2V0X290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdv
cmsgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0K
LXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfb3RoZXJcX2NvbmZpZ30K
LQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBvdGhlclxfY29uZmlnIGZpZWxkIG9mIHRoZSBn
aXZlbiBuZXR3b3JrLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgc2V0X290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBz
ZWxmLCAoc3RyaW5nIC0+IHN0cmluZykgTWFwIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJpbmcgJFxyaWdodGFy
cm93JCBzdHJpbmcpIE1hcCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+YWRkXF90
b1xfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXkt
dmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4KLW5ldHdv
cmsuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBhZGRfdG9fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIG5ldHdvcmsgcmVmIHNlbGYsIHN0
cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBuZXR3b3JrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byBhZGQg
XFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBWYWx1ZSB0byBhZGQgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5y
ZW1vdmVcX2Zyb21cX290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJlbW92ZSB0
aGUgZ2l2ZW4ga2V5IGFuZCBpdHMgY29ycmVzcG9uZGluZyB2YWx1ZSBmcm9tIHRoZSBvdGhlclxf
Y29uZmlnCi1maWVsZCBvZiB0aGUgZ2l2ZW4gbmV0d29yay4gIElmIHRoZSBrZXkgaXMgbm90IGlu
IHRoYXQgTWFwLCB0aGVuIGRvCi1ub3RoaW5nLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zyb21fb3RoZXJfY29uZmlnIChzZXNz
aW9uX2lkIHMsIG5ldHdvcmsgcmVmIHNlbGYsIHN0cmluZyBrZXkpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiBr
ZXkgJiBLZXkgdG8gcmVtb3ZlIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1DcmVh
dGUgYSBuZXcgbmV0d29yayBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFuZGxlLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChuZXR3b3JrIHJlZikg
Y3JlYXRlIChzZXNzaW9uX2lkIHMsIG5ldHdvcmsgcmVjb3JkIGFyZ3MpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgbmV0d29yayByZWNvcmQgfSAm
IGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1uZXR3b3JrIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkgY3JlYXRl
ZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0gCi1E
ZXN0cm95IHRoZSBzcGVjaWZpZWQgbmV0d29yayBpbnN0YW5jZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQg
cywgbmV0d29yayByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBuZXR3b3JrIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBuZXR3b3JrIGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChuZXR3b3Jr
IHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlk
ICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1uZXR3b3JrIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250
YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBuZXR3b3JrLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChuZXR3b3JrIHJlY29yZCkg
Z2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBuZXR3b3JrIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLW5ldHdvcmsgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IGFsbCB0aGUgbmV0d29yayBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChuZXR3b3JrIHJl
ZikgU2V0KSBnZXRfYnlfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgbGFiZWwpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
IH0gJiBsYWJlbCAmIGxhYmVsIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLShuZXR3b3JrIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBvYmpl
Y3RzIHdpdGggbWF0Y2ggbmFtZXMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZJ
Rn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZJRn0KLVxiZWdpbntsb25ndGFibGV9
e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17TmFt
ZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWSUZ9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9
e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0gQQot
dmlydHVhbCBuZXR3b3JrIGludGVyZmFjZS59fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYg
VHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9vYmplY3QgcmVmZXJl
bmNlIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgZGV2aWNlfSAmIHN0cmluZyAmIG5hbWUgb2Yg
bmV0d29yayBkZXZpY2UgYXMgZXhwb3NlZCB0byBndWVzdCBlLmcuIGV0aDAgXFwKLSRcbWF0aGl0
e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgbmV0d29ya30gJiBuZXR3b3JrIHJlZiAmIHZpcnR1
YWwgbmV0d29yayB0byB3aGljaCB0aGlzIHZpZiBpcyBjb25uZWN0ZWQgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e2luc30kICYgIHtcdHQgVk19ICYgVk0gcmVmICYgdmlydHVhbCBtYWNoaW5lIHRv
IHdoaWNoIHRoaXMgdmlmIGlzIGNvbm5lY3RlZCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IE1B
Q30gJiBzdHJpbmcgJiBldGhlcm5ldCBNQUMgYWRkcmVzcyBvZiB2aXJ0dWFsIGludGVyZmFjZSwg
YXMgZXhwb3NlZCB0byBndWVzdCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IE1UVX0gJiBpbnQg
JiBNVFUgaW4gb2N0ZXRzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGN1
cnJlbnRseVxfYXR0YWNoZWR9ICYgYm9vbCAmIGlzIHRoZSBkZXZpY2UgY3VycmVudGx5IGF0dGFj
aGVkIChlcmFzZWQgb24gcmVib290KSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAg
e1x0dCBzdGF0dXNcX2NvZGV9ICYgaW50ICYgZXJyb3Ivc3VjY2VzcyBjb2RlIGFzc29jaWF0ZWQg
d2l0aCBsYXN0IGF0dGFjaC1vcGVyYXRpb24gKGVyYXNlZCBvbiByZWJvb3QpIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0YXR1c1xfZGV0YWlsfSAmIHN0cmluZyAmIGVy
cm9yL3N1Y2Nlc3MgaW5mb3JtYXRpb24gYXNzb2NpYXRlZCB3aXRoIGxhc3QgYXR0YWNoLW9wZXJh
dGlvbiBzdGF0dXMgKGVyYXNlZCBvbiByZWJvb3QpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IHJ1bnRpbWVcX3Byb3BlcnRpZXN9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ck
IHN0cmluZykgTWFwICYgRGV2aWNlIHJ1bnRpbWUgcHJvcGVydGllcyBcXAotJFxtYXRoaXR7Uld9
JCAmICB7XHR0IHFvcy9hbGdvcml0aG1cX3R5cGV9ICYgc3RyaW5nICYgUW9TIGFsZ29yaXRobSB0
byB1c2UgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBxb3MvYWxnb3JpdGhtXF9wYXJhbXN9ICYg
KHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgcGFyYW1ldGVycyBmb3IgY2hvc2Vu
IFFvUyBhbGdvcml0aG0gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcW9z
L3N1cHBvcnRlZFxfYWxnb3JpdGhtc30gJiBzdHJpbmcgU2V0ICYgc3VwcG9ydGVkIFFvUyBhbGdv
cml0aG1zIGZvciB0aGlzIFZJRiBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBtZXRyaWNzfSAmIFZJRlxfbWV0cmljcyByZWYgJiBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCB0
aGlzIFZJRiBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNz
b2NpYXRlZCB3aXRoIGNsYXNzOiBWSUZ9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+cGx1Z30K
LQote1xiZiBPdmVydmlldzp9IAotSG90cGx1ZyB0aGUgc3BlY2lmaWVkIFZJRiwgZHluYW1pY2Fs
bHkgYXR0YWNoaW5nIGl0IHRvIHRoZSBydW5uaW5nIFZNLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcGx1ZyAoc2Vzc2lvbl9pZCBzLCBWSUYg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIFRoZSBWSUYgdG8gaG90cGx1ZyBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnVucGx1Z30K
LQote1xiZiBPdmVydmlldzp9IAotSG90LXVucGx1ZyB0aGUgc3BlY2lmaWVkIFZJRiwgZHluYW1p
Y2FsbHkgdW5hdHRhY2hpbmcgaXQgZnJvbSB0aGUgcnVubmluZwotVk0uCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCB1bnBsdWcgKHNlc3Npb25f
aWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiBUaGUgVklGIHRvIGhvdC11bnBsdWcg
XFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwg
dGhlIFZJRnMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZJRiByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9p
ZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZJRiByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMg
dG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVklGLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vz
c2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RldmljZX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBkZXZpY2UgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2Rldmlj
ZSAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2RldmljZX0KLQote1xi
ZiBPdmVydmlldzp9IAotU2V0IHRoZSBkZXZpY2UgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9k
ZXZpY2UgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcg
fSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmV0d29ya30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBuZXR3b3JrIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKG5ldHdvcmsgcmVm
KSBnZXRfbmV0d29yayAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1u
ZXR3b3JrIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2YgdGhlIGdpdmVu
IFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAo
Vk0gcmVmKSBnZXRfVk0gKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
Vk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9N
QUN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBW
SUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3Ry
aW5nIGdldF9NQUMgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3Ry
aW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9NQUN9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBz
ZXRfTUFDIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5n
IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX01UVX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBNVFUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X01UVSAoc2Vzc2lv
bl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX01UVX0KLQote1xiZiBPdmVydmlldzp9IAot
U2V0IHRoZSBNVFUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9NVFUgKHNlc3Npb25faWQgcywg
VklGIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNl
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfY3VycmVudGx5XF9hdHRhY2hlZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBjdXJyZW50bHlcX2F0dGFjaGVkIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gYm9vbCBnZXRfY3VycmVu
dGx5X2F0dGFjaGVkIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWJv
b2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N0YXR1
c1xfY29kZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdGF0dXNcX2NvZGUgZmllbGQg
b2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSBpbnQgZ2V0X3N0YXR1c19jb2RlIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
SUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfc3RhdHVzXF9kZXRhaWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUg
c3RhdHVzXF9kZXRhaWwgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3N0YXR1c19kZXRhaWwg
KHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ydW50aW1lXF9wcm9wZXJ0
aWVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJ1bnRpbWVcX3Byb3BlcnRpZXMgZmll
bGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3J1bnRpbWVfcHJvcGVy
dGllcyAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRc
cmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1HZXQgdGhlIHFvcy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0
X3Fvc19hbGdvcml0aG1fdHlwZSAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFy
fQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5z
ZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIHFv
cy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9xb3NfYWxnb3JpdGht
X3R5cGUgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcg
fSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcW9zXF9hbGdvcml0aG1cX3Bh
cmFtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBxb3MvYWxnb3JpdGhtXF9wYXJhbXMg
ZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3Fvc19hbGdvcml0
aG1fcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJp
bmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnNldFxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQote1xiZiBPdmVy
dmlldzp9IAotU2V0IHRoZSBxb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVu
IFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2
b2lkIHNldF9xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYs
IChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3Ry
aW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmFkZFxfdG9cX3Fvc1xf
YWxnb3JpdGhtXF9wYXJhbXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUFkZCB0aGUgZ2l2ZW4ga2V5
LXZhbHVlIHBhaXIgdG8gdGhlIHFvcy9hbGdvcml0aG1cX3BhcmFtcyBmaWVsZCBvZiB0aGUKLWdp
dmVuIFZJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIGFkZF90b19xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVm
IHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRv
IGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fnJlbW92ZVxfZnJvbVxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQote1xiZiBPdmVydmll
dzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZy
b20gdGhlCi1xb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZJRi4gIElm
IHRoZSBrZXkgaXMgbm90IGluIHRoYXQKLU1hcCwgdGhlbiBkbyBub3RoaW5nLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2Zyb21f
cW9zX2FsZ29yaXRobV9wYXJhbXMgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmLCBzdHJpbmcg
a2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LXtcdHQgc3RyaW5nIH0gJiBrZXkgJiBLZXkgdG8gcmVtb3ZlIFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9xb3NcX3N1cHBv
cnRlZFxfYWxnb3JpdGhtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBxb3Mvc3VwcG9y
dGVkXF9hbGdvcml0aG1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQpIGdldF9xb3Nfc3Vw
cG9ydGVkX2FsZ29yaXRobXMgKHNlc3Npb25faWQgcywgVklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRiByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotc3RyaW5nIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfbWV0cmljc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZXRyaWNzIGZpZWxk
IG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gKFZJRl9tZXRyaWNzIHJlZikgZ2V0X21ldHJpY3MgKHNlc3Npb25faWQgcywg
VklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVklGXF9tZXRyaWNzIHJlZgotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfc2VjdXJpdHlcX2xhYmVsfQotCi17
XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUgc2VjdXJpdHkgbGFiZWwgb2YgdGhlIGdpdmVuIFZJRi4g
UmVmZXIgdG8gdGhlIFhTUG9saWN5IGNsYXNzCi1mb3IgdGhlIGZvcm1hdCBvZiB0aGUgc2VjdXJp
dHkgbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHNldF9zZWN1cml0eV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBWSUYgcmVmIHNlbGYsIHN0
cmluZwotc2VjdXJpdHlfbGFiZWwsIHN0cmluZyBvbGRfbGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLXtcdHQgc3RyaW5nIH0gJiBzZWN1cml0
eVxfbGFiZWwgJiBOZXcgdmFsdWUgb2YgdGhlIHNlY3VyaXR5IGxhYmVsIFxcIFxobGluZQote1x0
dCBzdHJpbmcgfSAmIG9sZFxfbGFiZWwgJiBMYWJlbCB2YWx1ZSB0aGF0IHRoZSBzZWN1cml0eSBs
YWJlbCBcXAotJiAmIG11c3QgY3VycmVudGx5IGhhdmUgZm9yIHRoZSBjaGFuZ2UgdG8gc3VjY2Vl
ZC5cXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotCi1cdnNwYWNlezAuM2NtfQot
Ci1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBTRUNVUklUWVxfRVJS
T1J9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3NlY3VyaXR5XF9sYWJlbH0KLQote1xiZiBPdmVy
dmlldzp9Ci1HZXQgdGhlIHNlY3VyaXR5IGxhYmVsIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3NlY3Vy
aXR5X2xhYmVsIChzZXNzaW9uX2lkIHMsIFZJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZ2l2ZW4gZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5ldyBWSUYgaW5zdGFuY2UsIGFuZCByZXR1
cm4gaXRzIGhhbmRsZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoVklGIHJlZikgY3JlYXRlIChzZXNzaW9uX2lkIHMsIFZJRiByZWNvcmQgYXJncylc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUYg
cmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9yIGFyZ3VtZW50cyBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotVklGIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkg
Y3JlYXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3
On0gCi1EZXN0cm95IHRoZSBzcGVjaWZpZWQgVklGIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9p
ZCBzLCBWSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgVklGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNl
IHRvIHRoZSBWSUYgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZJRiByZWYpIGdldF9ieV91
dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2Jq
ZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVklGIHJlZgotfQot
Ci0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50
IHN0YXRlIG9mIHRoZSBnaXZlbiBWSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKFZJRiByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywg
VklGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVklGIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
ZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBWSUZcX21l
dHJpY3N9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBWSUZcX21ldHJpY3N9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgVklGXF9tZXRyaWNzfSBc
XAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9
e1xwYXJib3h7MTFjbX17XGVtCi1UaGUgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggYSB2aXJ0dWFs
IG5ldHdvcmsgZGV2aWNlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVz
Y3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1
dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRc
bWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgaW8vcmVhZFxfa2JzfSAmIGZsb2F0ICYg
UmVhZCBiYW5kd2lkdGggKEtpQi9zKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAg
e1x0dCBpby93cml0ZVxfa2JzfSAmIGZsb2F0ICYgV3JpdGUgYmFuZHdpZHRoIChLaUIvcykgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbGFzdFxfdXBkYXRlZH0gJiBkYXRl
dGltZSAmIFRpbWUgYXQgd2hpY2ggdGhpcyBpbmZvcm1hdGlvbiB3YXMgbGFzdCB1cGRhdGVkIFxc
Ci1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdp
dGggY2xhc3M6IFZJRlxfbWV0cmljc30KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Fs
bH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFZJRlxfbWV0
cmljcyBpbnN0YW5jZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZJRl9tZXRyaWNzIHJlZikgU2V0KSBnZXRf
YWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVklGXF9tZXRyaWNzIHJlZikg
U2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBn
aXZlbiBWSUZcX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFZJRl9tZXRyaWNzIHJl
ZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IFZJRlxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9pb1xfcmVhZFxfa2JzfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIGlvL3JlYWRcX2ticyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVklGXF9tZXRy
aWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGZs
b2F0IGdldF9pb19yZWFkX2ticyAoc2Vzc2lvbl9pZCBzLCBWSUZfbWV0cmljcyByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUZc
X21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9pb1xfd3JpdGVcX2tic30KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBpby93cml0ZVxfa2JzIGZpZWxkIG9mIHRoZSBnaXZlbiBWSUZcX21ldHJpY3MuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZmxvYXQgZ2V0
X2lvX3dyaXRlX2ticyAoc2Vzc2lvbl9pZCBzLCBWSUZfbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWSUZcX21ldHJp
Y3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9sYXN0XF91cGRhdGVkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGxhc3RcX3VwZGF0ZWQgZmllbGQgb2YgdGhlIGdpdmVuIFZJRlxfbWV0cmljcy4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRfbGFz
dF91cGRhdGVkIChzZXNzaW9uX2lkIHMsIFZJRl9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZJRlxfbWV0cmljcyBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotZGF0ZXRpbWUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVu
Y2UgdG8gdGhlIFZJRlxfbWV0cmljcyBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVklGX21l
dHJpY3MgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAm
IHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLVZJRlxfbWV0cmljcyByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSBy
ZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gVklGXF9tZXRy
aWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChW
SUZfbWV0cmljcyByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVklGX21ldHJpY3Mg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgVklGXF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WSUZcX21ldHJpY3MgcmVjb3JkCi19Ci0K
LQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257
Q2xhc3M6IFBJRn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFBJRn0KLVxiZWdpbnts
b25ndGFibGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsx
fXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBQSUZ9IFxcCi1cbXVsdGljb2x1
bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNt
fXtcZW0gQQotcGh5c2ljYWwgbmV0d29yayBpbnRlcmZhY2UgKG5vdGUgc2VwYXJhdGUgVkxBTnMg
YXJlIHJlcHJlc2VudGVkIGFzIHNldmVyYWwKLVBJRnMpLn19IFxcCi1caGxpbmUKLVF1YWxzICYg
RmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVj
dCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBkZXZpY2V9ICYgc3RyaW5nICYg
bWFjaGluZS1yZWFkYWJsZSBuYW1lIG9mIHRoZSBpbnRlcmZhY2UgKGUuZy4gZXRoMCkgXFwKLSRc
bWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgbmV0d29ya30gJiBuZXR3b3JrIHJlZiAm
IHZpcnR1YWwgbmV0d29yayB0byB3aGljaCB0aGlzIHBpZiBpcyBjb25uZWN0ZWQgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgaG9zdH0gJiBob3N0IHJlZiAmIHBoeXNpY2Fs
IG1hY2hpbmUgdG8gd2hpY2ggdGhpcyBwaWYgaXMgY29ubmVjdGVkIFxcCi0kXG1hdGhpdHtSV30k
ICYgIHtcdHQgTUFDfSAmIHN0cmluZyAmIGV0aGVybmV0IE1BQyBhZGRyZXNzIG9mIHBoeXNpY2Fs
IGludGVyZmFjZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IE1UVX0gJiBpbnQgJiBNVFUgaW4g
b2N0ZXRzIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgVkxBTn0gJiBpbnQgJiBWTEFOIHRhZyBm
b3IgYWxsIHRyYWZmaWMgcGFzc2luZyB0aHJvdWdoIHRoaXMgaW50ZXJmYWNlIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IG1ldHJpY3N9ICYgUElGXF9tZXRyaWNzIHJlZiAm
IG1ldHJpY3MgYXNzb2NpYXRlZCB3aXRoIHRoaXMgUElGIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3Rh
YmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFBJRn0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGVcX1ZMQU59Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNy
ZWF0ZSBhIFZMQU4gaW50ZXJmYWNlIGZyb20gYW4gZXhpc3RpbmcgcGh5c2ljYWwgaW50ZXJmYWNl
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChQSUYg
cmVmKSBjcmVhdGVfVkxBTiAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgZGV2aWNlLCBuZXR3b3JrIHJl
ZiBuZXR3b3JrLCBob3N0IHJlZiBob3N0LCBpbnQgVkxBTilcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIGRldmljZSAmIHBoeXNp
Y2FsIGludGVyZmFjZSBvbiB3aGljaCB0byBjcmF0ZSB0aGUgVkxBTiBpbnRlcmZhY2UgXFwgXGhs
aW5lIAotCi17XHR0IG5ldHdvcmsgcmVmIH0gJiBuZXR3b3JrICYgbmV0d29yayB0byB3aGljaCB0
aGlzIGludGVyZmFjZSBzaG91bGQgYmUgY29ubmVjdGVkIFxcIFxobGluZSAKLQote1x0dCBob3N0
IHJlZiB9ICYgaG9zdCAmIHBoeXNpY2FsIG1hY2hpbmUgdG8gd2hpY2ggdGhpcyBQSUYgaXMgY29u
bmVjdGVkIFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIFZMQU4gJiBWTEFOIHRhZyBmb3IgdGhl
IG5ldyBpbnRlcmZhY2UgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVBJRiByZWYKLX0K
LQotCi1UaGUgcmVmZXJlbmNlIG9mIHRoZSBjcmVhdGVkIFBJRiBvYmplY3QKLVx2c3BhY2V7MC4z
Y219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFZMQU5cX1RB
R1xfSU5WQUxJRH0KLQotXHZzcGFjZXswLjZjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5k
ZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0gCi1EZXN0cm95IHRoZSBpbnRlcmZhY2UgKHByb3Zp
ZGVkIGl0IGlzIGEgc3ludGhldGljIGludGVyZmFjZSBsaWtlIGEgVkxBTjsKLWZhaWwgaWYgaXQg
aXMgYSBwaHlzaWNhbCBpbnRlcmZhY2UpLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBQSUYgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UElGIHJlZiB9ICYgc2VsZiAmIHRoZSBQSUYgb2JqZWN0IHRvIGRlc3Ryb3kgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2lu
ZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFBJRlxfSVNcX1BIWVNJQ0FMfQot
Ci1cdnNwYWNlezAuNmNtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgUElGcyBrbm93biB0byB0
aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoUElGIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtc
dHQgCi0oUElGIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlk
IGZpZWxkIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFBJRiByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGRl
dmljZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUElGLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfZGV2aWNlIChzZXNzaW9uX2lkIHMsIFBJ
RiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQg
dGhlIGRldmljZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUElGLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2RldmljZSAoc2Vzc2lvbl9pZCBz
LCBQSUYgcmVmIHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8
Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVz
Y3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFs
dWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9uZXR3b3JrfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IG5ldHdvcmsgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAobmV0d29yayByZWYpIGdldF9uZXR3b3JrIChzZXNz
aW9uX2lkIHMsIFBJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLW5ldHdvcmsgcmVmCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ob3N0fQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIGhvc3QgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9o
b3N0IChzZXNzaW9uX2lkIHMsIFBJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9NQUN9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9N
QUMgKHNlc3Npb25faWQgcywgUElGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9NQUN9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVNldCB0aGUgTUFDIGZpZWxkIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfTUFDIChz
ZXNzaW9uX2lkIHMsIFBJRiByZWYgc2VsZiwgc3RyaW5nIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBJRiByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1
ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX01UVX0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBNVFUgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X01UVSAoc2Vzc2lvbl9pZCBzLCBQ
SUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgUElGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5zZXRcX01UVX0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBN
VFUgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9NVFUgKHNlc3Npb25faWQgcywgUElGIHJlZiBz
ZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgUElGIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfVkxBTn0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBWTEFOIGZpZWxkIG9mIHRoZSBn
aXZlbiBQSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gaW50IGdldF9WTEFOIChzZXNzaW9uX2lkIHMsIFBJRiByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfVkxB
Tn0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBWTEFOIGZpZWxkIG9mIHRoZSBnaXZlbiBQ
SUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfVkxBTiAoc2Vzc2lvbl9pZCBzLCBQSUYgcmVmIHNlbGYsIGludCB2YWx1ZSlcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUYgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGludCB9
ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9tZXRyaWNzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIG1ldHJpY3MgZmllbGQgb2YgdGhlIGdpdmVuIFBJRi4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUElGX21ldHJpY3Mg
cmVmKSBnZXRfbWV0cmljcyAoc2Vzc2lvbl9pZCBzLCBQSUYgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1c
YmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYg
bmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUElGIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci1QSUZcX21ldHJpY3MgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNl
IHRvIHRoZSBQSUYgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFBJRiByZWYpIGdldF9ieV91
dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2Jq
ZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUElGIHJlZgotfQot
Ci0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9
Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBjdXJyZW50
IHN0YXRlIG9mIHRoZSBnaXZlbiBQSUYuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKFBJRiByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywg
UElGIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFBJRiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUElGIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
ZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBQSUZcX21l
dHJpY3N9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBQSUZcX21ldHJpY3N9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgUElGXF9tZXRyaWNzfSBc
XAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9
e1xwYXJib3h7MTFjbX17XGVtCi1UaGUgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggYSBwaHlzaWNh
bCBuZXR3b3JrIGludGVyZmFjZS59fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAm
IERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9vYmplY3QgcmVmZXJlbmNlIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGlvL3JlYWRcX2tic30gJiBmbG9h
dCAmIFJlYWQgYmFuZHdpZHRoIChLaUIvcykgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0k
ICYgIHtcdHQgaW8vd3JpdGVcX2tic30gJiBmbG9hdCAmIFdyaXRlIGJhbmR3aWR0aCAoS2lCL3Mp
IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGxhc3RcX3VwZGF0ZWR9ICYg
ZGF0ZXRpbWUgJiBUaW1lIGF0IHdoaWNoIHRoaXMgaW5mb3JtYXRpb24gd2FzIGxhc3QgdXBkYXRl
ZCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRl
ZCB3aXRoIGNsYXNzOiBQSUZcX21ldHJpY3N9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBQSUZc
X21ldHJpY3MgaW5zdGFuY2VzIGtub3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChQSUZfbWV0cmljcyByZWYpIFNldCkg
Z2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBJRlxfbWV0cmljcyBy
ZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0
aGUgZ2l2ZW4gUElGXF9tZXRyaWNzLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBQSUZfbWV0cmlj
cyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQSUZcX21ldHJpY3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9m
IHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaW9cX3JlYWRcX2tic30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBpby9yZWFkXF9rYnMgZmllbGQgb2YgdGhlIGdpdmVuIFBJRlxf
bWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSBmbG9hdCBnZXRfaW9fcmVhZF9rYnMgKHNlc3Npb25faWQgcywgUElGX21ldHJpY3MgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UElGXF9tZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1mbG9hdAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaW9cX3dyaXRlXF9rYnN9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgaW8vd3JpdGVcX2ticyBmaWVsZCBvZiB0aGUgZ2l2ZW4gUElGXF9tZXRyaWNz
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGZsb2F0
IGdldF9pb193cml0ZV9rYnMgKHNlc3Npb25faWQgcywgUElGX21ldHJpY3MgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUElGXF9t
ZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1mbG9hdAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfbGFzdFxfdXBkYXRlZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBsYXN0XF91cGRhdGVkIGZpZWxkIG9mIHRoZSBnaXZlbiBQSUZcX21ldHJpY3MuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZGF0ZXRpbWUgZ2V0
X2xhc3RfdXBkYXRlZCAoc2Vzc2lvbl9pZCBzLCBQSUZfbWV0cmljcyByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQSUZcX21ldHJp
Y3MgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLWRhdGV0aW1lCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBQSUZcX21ldHJpY3MgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVV
SUQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFBJ
Rl9tZXRyaWNzIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
IH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1QSUZcX21ldHJpY3MgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhlIGdpdmVuIFBJRlxf
bWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSAoUElGX21ldHJpY3MgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFBJRl9tZXRy
aWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFBJRlxfbWV0cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUElGXF9tZXRyaWNzIHJlY29yZAot
fQotCi0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0
aW9ue0NsYXNzOiBTUn0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFNSfQotXGJlZ2lu
e2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1u
ezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFNSfSBcXAotXG11bHRpY29s
dW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFj
bX17XGVtIEEKLXN0b3JhZ2UgcmVwb3NpdG9yeS59fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxk
ICYgVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9vYmplY3QgcmVm
ZXJlbmNlIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgbmFtZS9sYWJlbH0gJiBzdHJpbmcgJiBh
IGh1bWFuLXJlYWRhYmxlIG5hbWUgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBuYW1lL2Rlc2Ny
aXB0aW9ufSAmIHN0cmluZyAmIGEgbm90ZXMgZmllbGQgY29udGFpbmcgaHVtYW4tcmVhZGFibGUg
ZGVzY3JpcHRpb24gXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgVkRJc30g
JiAoVkRJIHJlZikgU2V0ICYgbWFuYWdlZCB2aXJ0dWFsIGRpc2tzIFxcCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IFBCRHN9ICYgKFBCRCByZWYpIFNldCAmIHBoeXNpY2FsIGJs
b2NrZGV2aWNlcyBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFs
XF9hbGxvY2F0aW9ufSAmIGludCAmIHN1bSBvZiB2aXJ0dWFsXF9zaXplcyBvZiBhbGwgVkRJcyBp
biB0aGlzIHN0b3JhZ2UgcmVwb3NpdG9yeSAoaW4gYnl0ZXMpIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IHBoeXNpY2FsXF91dGlsaXNhdGlvbn0gJiBpbnQgJiBwaHlzaWNh
bCBzcGFjZSBjdXJyZW50bHkgdXRpbGlzZWQgb24gdGhpcyBzdG9yYWdlIHJlcG9zaXRvcnkgKGlu
IGJ5dGVzKS4gTm90ZSB0aGF0IGZvciBzcGFyc2UgZGlzayBmb3JtYXRzLCBwaHlzaWNhbFxfdXRp
bGlzYXRpb24gbWF5IGJlIGxlc3MgdGhhbiB2aXJ0dWFsXF9hbGxvY2F0aW9uIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IHBoeXNpY2FsXF9zaXplfSAmIGludCAmIHRvdGFs
IHBoeXNpY2FsIHNpemUgb2YgdGhlIHJlcG9zaXRvcnkgKGluIGJ5dGVzKSBcXAotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0dCB0eXBlfSAmIHN0cmluZyAmIHR5cGUgb2YgdGhlIHN0
b3JhZ2UgcmVwb3NpdG9yeSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0dCBj
b250ZW50XF90eXBlfSAmIHN0cmluZyAmIHRoZSB0eXBlIG9mIHRoZSBTUidzIGNvbnRlbnQsIGlm
IHJlcXVpcmVkIChlLmcuIElTT3MpIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNl
Y3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFNSfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfc3VwcG9ydGVkXF90eXBlc30KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJu
IGEgc2V0IG9mIGFsbCB0aGUgU1IgdHlwZXMgc3VwcG9ydGVkIGJ5IHRoZSBzeXN0ZW0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQp
IGdldF9zdXBwb3J0ZWRfdHlwZXMgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZyBTZXQKLX0KLQotCi10aGUgc3VwcG9ydGVkIFNSIHR5cGVzCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBT
UnMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKFNSIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oU1IgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBv
YmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1H
ZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIFNSLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBz
LCBTUiByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBTUiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFi
ZWwgKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgU1IgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0K
LXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfbmFtZVxfbGFiZWx9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLVNldCB0aGUgbmFtZS9sYWJlbCBmaWVsZCBvZiB0aGUgZ2l2ZW4g
U1IuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9p
ZCBzZXRfbmFtZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBTUiByZWYgc2VsZiwgc3RyaW5nIHZhbHVl
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFNS
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0
dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfZGVz
Y3JpcHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbmFtZS9kZXNjcmlwdGlvbiBm
aWVsZCBvZiB0aGUgZ2l2ZW4gU1IuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lkIHMs
IFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX25hbWVcX2Rlc2NyaXB0aW9ufQotCi17XGJmIE92ZXJ2
aWV3On0gCi1TZXQgdGhlIG5hbWUvZGVzY3JpcHRpb24gZmllbGQgb2YgdGhlIGdpdmVuIFNSLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0
X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYsIHN0cmluZyB2YWx1
ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBT
UiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtc
dHQgc3RyaW5nIH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZESXN9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVkRJcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gU1IuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWREkgcmVm
KSBTZXQpIGdldF9WRElzIChzZXNzaW9uX2lkIHMsIFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
VkRJIHJlZikgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9QQkRzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBCRHMgZmllbGQgb2YgdGhl
IGdpdmVuIFNSLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19ICgoUEJEIHJlZikgU2V0KSBnZXRfUEJEcyAoc2Vzc2lvbl9pZCBzLCBTUiByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBTUiBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotKFBCRCByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfdmlydHVhbFxfYWxsb2NhdGlvbn0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSB2aXJ0dWFsXF9hbGxvY2F0aW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Zp
cnR1YWxfYWxsb2NhdGlvbiAoc2Vzc2lvbl9pZCBzLCBTUiByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBTUiByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
aW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9waHlz
aWNhbFxfdXRpbGlzYXRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgcGh5c2ljYWxc
X3V0aWxpc2F0aW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3BoeXNpY2FsX3V0aWxpc2F0aW9u
IChzZXNzaW9uX2lkIHMsIFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3BoeXNpY2FsXF9zaXplfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF9zaXplIGZpZWxkIG9mIHRoZSBnaXZl
biBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBp
bnQgZ2V0X3BoeXNpY2FsX3NpemUgKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgU1IgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
dHlwZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB0eXBlIGZpZWxkIG9mIHRoZSBnaXZl
biBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgU1IgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgU1IgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfY29u
dGVudFxfdHlwZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBjb250ZW50XF90eXBlIGZp
ZWxkIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2NvbnRlbnRfdHlwZSAoc2Vzc2lvbl9pZCBzLCBTUiBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBTUiByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEg
cmVmZXJlbmNlIHRvIHRoZSBTUiBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVmKSBn
ZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlE
IG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVNSIHJl
ZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9y
ZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBj
dXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBTUi4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lk
IHMsIFNSIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFNSIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1TUiByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRz
IGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX25hbWVcX2xhYmVsfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYWxsIHRoZSBTUiBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2
ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKChTUiByZWYpIFNldCkgZ2V0X2J5X25hbWVfbGFiZWwgKHNlc3Npb25faWQgcywgc3RyaW5n
IGxhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IHN0cmluZyB9ICYgbGFiZWwgJiBsYWJlbCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGlu
ZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oU1IgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRv
IG9iamVjdHMgd2l0aCBtYXRjaCBuYW1lcwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFz
czogVkRJfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczogVkRJfQotXGJlZ2lue2xvbmd0
YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xs
fXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFZESX0gXFwKLVxtdWx0aWNvbHVtbnsx
fXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezExY219e1xl
bSBBCi12aXJ0dWFsIGRpc2sgaW1hZ2UufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5
cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5j
ZSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG5hbWUvbGFiZWx9ICYgc3RyaW5nICYgYSBodW1h
bi1yZWFkYWJsZSBuYW1lIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgbmFtZS9kZXNjcmlwdGlv
bn0gJiBzdHJpbmcgJiBhIG5vdGVzIGZpZWxkIGNvbnRhaW5nIGh1bWFuLXJlYWRhYmxlIGRlc2Ny
aXB0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IFNSfSAmIFNSIHJl
ZiAmIHN0b3JhZ2UgcmVwb3NpdG9yeSBpbiB3aGljaCB0aGUgVkRJIHJlc2lkZXMgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgVkJEc30gJiAoVkJEIHJlZikgU2V0ICYgbGlz
dCBvZiB2YmRzIHRoYXQgcmVmZXIgdG8gdGhpcyBkaXNrIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtydW59JCAmICB7XHR0IGNyYXNoXF9kdW1wc30gJiAoY3Jhc2hkdW1wIHJlZikgU2V0ICYgbGlz
dCBvZiBjcmFzaCBkdW1wcyB0aGF0IHJlZmVyIHRvIHRoaXMgZGlzayBcXAotJFxtYXRoaXR7Uld9
JCAmICB7XHR0IHZpcnR1YWxcX3NpemV9ICYgaW50ICYgc2l6ZSBvZiBkaXNrIGFzIHByZXNlbnRl
ZCB0byB0aGUgZ3Vlc3QgKGluIGJ5dGVzKS4gTm90ZSB0aGF0LCBkZXBlbmRpbmcgb24gc3RvcmFn
ZSBiYWNrZW5kIHR5cGUsIHJlcXVlc3RlZCBzaXplIG1heSBub3QgYmUgcmVzcGVjdGVkIGV4YWN0
bHkgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcGh5c2ljYWxcX3V0aWxp
c2F0aW9ufSAmIGludCAmIGFtb3VudCBvZiBwaHlzaWNhbCBzcGFjZSB0aGF0IHRoZSBkaXNrIGlt
YWdlIGlzIGN1cnJlbnRseSB0YWtpbmcgdXAgb24gdGhlIHN0b3JhZ2UgcmVwb3NpdG9yeSAoaW4g
Ynl0ZXMpIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IHR5cGV9ICYgdmRp
XF90eXBlICYgdHlwZSBvZiB0aGUgVkRJIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgc2hhcmFi
bGV9ICYgYm9vbCAmIHRydWUgaWYgdGhpcyBkaXNrIG1heSBiZSBzaGFyZWQgXFwKLSRcbWF0aGl0
e1JXfSQgJiAge1x0dCByZWFkXF9vbmx5fSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgZGlzayBtYXkg
T05MWSBiZSBtb3VudGVkIHJlYWQtb25seSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG90aGVy
XF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYWRkaXRpb25h
bCBjb25maWd1cmF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHNl
Y3VyaXR5L2xhYmVsfSAmIHN0cmluZyAmIHRoZSBWTSdzIHNlY3VyaXR5IGxhYmVsIFxcCi1caGxp
bmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xh
c3M6IFZESX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVy
dmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFZESXMga25vd24gdG8gdGhlIHN5c3Rl
bS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZE
SSByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZE
SSByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBv
ZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX25hbWVcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG5hbWUv
bGFiZWwgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Npb25faWQg
cywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uYW1lXF9sYWJlbH0KLQote1xiZiBPdmVydmll
dzp9IAotU2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbmFtZV9s
YWJlbCAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9
ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQg
Ci12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9uYW1lXF9kZXNjcmlwdGlvbn0K
LQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF9uYW1lX2Rlc2NyaXB0aW9uIChzZXNzaW9uX2lkIHMsIFZESSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnNldFxfbmFtZVxfZGVzY3JpcHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LVNldCB0aGUgbmFtZS9kZXNjcmlwdGlvbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X25hbWVf
ZGVzY3JpcHRpb24gKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJl
ZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBz
dHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfU1J9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgU1IgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVmKSBnZXRfU1Ig
KHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotU1IgcmVmCi19Ci0KLQot
dmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WQkRzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIFZCRHMgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFZCRCByZWYpIFNldCkg
Z2V0X1ZCRHMgKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZCRCBy
ZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
Y3Jhc2hcX2R1bXBzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGNyYXNoXF9kdW1wcyBm
aWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19ICgoY3Jhc2hkdW1wIHJlZikgU2V0KSBnZXRfY3Jhc2hfZHVtcHMgKHNl
c3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKGNyYXNoZHVtcCByZWYpIFNl
dAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmlydHVh
bFxfc2l6ZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2aXJ0dWFsXF9zaXplIGZpZWxk
IG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gaW50IGdldF92aXJ0dWFsX3NpemUgKHNlc3Npb25faWQgcywgVkRJIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IFZESSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+c2V0XF92aXJ0dWFsXF9zaXplfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhl
IHZpcnR1YWxcX3NpemUgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF92aXJ0dWFsX3NpemUgKHNl
c3Npb25faWQgcywgVkRJIHJlZiBzZWxmLCBpbnQgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBpbnQgfSAmIHZhbHVlICYgTmV3
IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcGh5c2ljYWxcX3V0aWxpc2F0aW9ufQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF91dGlsaXNhdGlvbiBmaWVsZCBvZiB0aGUgZ2l2
ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IGludCBnZXRfcGh5c2ljYWxfdXRpbGlzYXRpb24gKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZE
SSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxl
bmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHR5cGUgZmllbGQg
b2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSAodmRpX3R5cGUpIGdldF90eXBlIChzZXNzaW9uX2lkIHMsIFZESSByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
REkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZkaVxfdHlwZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfc2hhcmFibGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUg
c2hhcmFibGUgZmllbGQgb2YgdGhlIGdpdmVuIFZESS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdldF9zaGFyYWJsZSAoc2Vzc2lvbl9pZCBz
LCBWREkgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgVkRJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ib29sCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9zaGFyYWJsZX0KLQote1xiZiBPdmVydmlldzp9IAot
U2V0IHRoZSBzaGFyYWJsZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X3NoYXJhYmxlIChzZXNz
aW9uX2lkIHMsIFZESSByZWYgc2VsZiwgYm9vbCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IGJvb2wgfSAmIHZhbHVlICYgTmV3
IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVhZFxfb25seX0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSByZWFkXF9vbmx5IGZpZWxkIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gYm9vbCBnZXRfcmVhZF9vbmx5IChz
ZXNzaW9uX2lkIHMsIFZESSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWJvb2wKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3JlYWRcX29ubHl9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVNldCB0aGUgcmVhZFxfb25seSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0
X3JlYWRfb25seSAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYsIGJvb2wgdmFsdWUpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJlZiB9
ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCBib29s
IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX290aGVyXF9jb25maWd9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2
ZW4gVkRJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMs
IFZESSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcp
IE1hcAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfb3Ro
ZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBvdGhlclxfY29uZmlnIGZp
ZWxkIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIFZESSBy
ZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1hcCB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IChzdHJpbmcgJFxyaWdodGFy
cm93JCBzdHJpbmcpIE1hcCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+YWRkXF90
b1xfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotQWRkIHRoZSBnaXZlbiBrZXkt
dmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkRJLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgYWRk
X3RvX290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYsIHN0cmluZyBrZXks
IHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0K
LXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxf
b3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkg
YW5kIGl0cyBjb3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIG90aGVyXF9jb25maWcKLWZpZWxk
IG9mIHRoZSBnaXZlbiBWREkuICBJZiB0aGUga2V5IGlzIG5vdCBpbiB0aGF0IE1hcCwgdGhlbiBk
byBub3RoaW5nLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHZvaWQgcmVtb3ZlX2Zyb21fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIFZESSByZWYg
c2VsZiwgc3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3Qg
XFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIHJlbW92ZSBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNl
dFxfc2VjdXJpdHlcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUgc2VjdXJpdHkg
bGFiZWwgb2YgdGhlIGdpdmVuIFZESS4gUmVmZXIgdG8gdGhlIFhTUG9saWN5IGNsYXNzCi1mb3Ig
dGhlIGZvcm1hdCBvZiB0aGUgc2VjdXJpdHkgbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9zZWN1cml0eV9sYWJlbCAoc2Vzc2lv
bl9pZCBzLCBWREkgcmVmIHNlbGYsIHN0cmluZwotc2VjdXJpdHlfbGFiZWwsIHN0cmluZyBvbGRf
bGFiZWwpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBWREkgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0K
LXtcdHQgc3RyaW5nIH0gJiBzZWN1cml0eVxfbGFiZWwgJiBOZXcgdmFsdWUgb2YgdGhlIHNlY3Vy
aXR5IGxhYmVsIFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIG9sZFxfbGFiZWwgJiBMYWJlbCB2
YWx1ZSB0aGF0IHRoZSBzZWN1cml0eSBsYWJlbCBcXAotJiAmIG11c3QgY3VycmVudGx5IGhhdmUg
Zm9yIHRoZSBjaGFuZ2UgdG8gc3VjY2VlZC5cXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQK
LX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENv
ZGVzOn0ge1x0dCBTRUNVUklUWVxfRVJST1J9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3NlY3Vy
aXR5XF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIHNlY3VyaXR5IGxhYmVsIG9m
IHRoZSBnaXZlbiBWREkuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3Zl
cmJhdGltfSBzdHJpbmcgZ2V0X3NlY3VyaXR5X2xhYmVsIChzZXNzaW9uX2lkIHMsIFZESSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgVkRJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZ2l2ZW4gZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5l
dyBWREkgaW5zdGFuY2UsIGFuZCByZXR1cm4gaXRzIGhhbmRsZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkRJIHJlZikgY3JlYXRlIChzZXNzaW9u
X2lkIHMsIFZESSByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBWREkgcmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9y
IGFyZ3VtZW50cyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVkRJIHJlZgotfQotCi0K
LXJlZmVyZW5jZSB0byB0aGUgbmV3bHkgY3JlYXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5k
ZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0gCi1EZXN0cm95IHRoZSBzcGVjaWZpZWQgVkRJIGlu
c3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBWREkgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkRJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12
b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBWREkgaW5zdGFuY2Ugd2l0aCB0aGUgc3Bl
Y2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKFZESSByZWYpIGdldF9ieV91dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmlu
ZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotVkRJIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29y
ZCBjb250YWluaW5nIHRoZSBjdXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBWREkuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZESSByZWNvcmQpIGdl
dF9yZWNvcmQgKHNlc3Npb25faWQgcywgVkRJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZESSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVkRJIHJl
Y29yZAotfQotCi0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9ieVxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhbGwgdGhlIFZE
SSBpbnN0YW5jZXMgd2l0aCB0aGUgZ2l2ZW4gbGFiZWwuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKChWREkgcmVmKSBTZXQpIGdldF9ieV9uYW1lX2xh
YmVsIChzZXNzaW9uX2lkIHMsIHN0cmluZyBsYWJlbClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIGxhYmVsICYgbGFiZWwgb2Yg
b2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFZESSByZWYp
IFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gb2JqZWN0cyB3aXRoIG1hdGNoIG5hbWVzCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNt
fQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBWQkR9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9y
IGNsYXNzOiBWQkR9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQot
XGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtc
YmYgVkJEfSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1
bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEKLXZpcnR1YWwgYmxvY2sgZGV2aWNlLn19IFxc
Ci1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQot
JFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1
ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2lu
c30kICYgIHtcdHQgVk19ICYgVk0gcmVmICYgdGhlIHZpcnR1YWwgbWFjaGluZSBcXAotJFxtYXRo
aXR7Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0dCBWREl9ICYgVkRJIHJlZiAmIHRoZSB2aXJ0dWFs
IGRpc2sgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBkZXZpY2V9ICYgc3RyaW5nICYgZGV2aWNl
IHNlZW4gYnkgdGhlIGd1ZXN0IGUuZy4gaGRhMSBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IGJv
b3RhYmxlfSAmIGJvb2wgJiB0cnVlIGlmIHRoaXMgVkJEIGlzIGJvb3RhYmxlIFxcCi0kXG1hdGhp
dHtSV30kICYgIHtcdHQgbW9kZX0gJiB2YmRcX21vZGUgJiB0aGUgbW9kZSB0aGUgVkJEIHNob3Vs
ZCBiZSBtb3VudGVkIHdpdGggXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCB0eXBlfSAmIHZiZFxf
dHlwZSAmIGhvdyB0aGUgVkJEIHdpbGwgYXBwZWFyIHRvIHRoZSBndWVzdCAoZS5nLiBkaXNrIG9y
IENEKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBjdXJyZW50bHlcX2F0
dGFjaGVkfSAmIGJvb2wgJiBpcyB0aGUgZGV2aWNlIGN1cnJlbnRseSBhdHRhY2hlZCAoZXJhc2Vk
IG9uIHJlYm9vdCkgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc3RhdHVz
XF9jb2RlfSAmIGludCAmIGVycm9yL3N1Y2Nlc3MgY29kZSBhc3NvY2lhdGVkIHdpdGggbGFzdCBh
dHRhY2gtb3BlcmF0aW9uIChlcmFzZWQgb24gcmVib290KSBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBzdGF0dXNcX2RldGFpbH0gJiBzdHJpbmcgJiBlcnJvci9zdWNjZXNz
IGluZm9ybWF0aW9uIGFzc29jaWF0ZWQgd2l0aCBsYXN0IGF0dGFjaC1vcGVyYXRpb24gc3RhdHVz
IChlcmFzZWQgb24gcmVib290KSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0
dCBydW50aW1lXF9wcm9wZXJ0aWVzfSAmIChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1h
cCAmIERldmljZSBydW50aW1lIHByb3BlcnRpZXMgXFwKLSRcbWF0aGl0e1JXfSQgJiAge1x0dCBx
b3MvYWxnb3JpdGhtXF90eXBlfSAmIHN0cmluZyAmIFFvUyBhbGdvcml0aG0gdG8gdXNlIFxcCi0k
XG1hdGhpdHtSV30kICYgIHtcdHQgcW9zL2FsZ29yaXRobVxfcGFyYW1zfSAmIChzdHJpbmcgJFxy
aWdodGFycm93JCBzdHJpbmcpIE1hcCAmIHBhcmFtZXRlcnMgZm9yIGNob3NlbiBRb1MgYWxnb3Jp
dGhtIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHFvcy9zdXBwb3J0ZWRc
X2FsZ29yaXRobXN9ICYgc3RyaW5nIFNldCAmIHN1cHBvcnRlZCBRb1MgYWxnb3JpdGhtcyBmb3Ig
dGhpcyBWQkQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbWV0cmljc30g
JiBWQkRcX21ldHJpY3MgcmVmICYgbWV0cmljcyBhc3NvY2lhdGVkIHdpdGggdGhpcyBWQkQgXFwK
LVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0
aCBjbGFzczogVkJEfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fm1lZGlhXF9jaGFuZ2V9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUNoYW5nZSB0aGUgbWVkaWEgaW4gdGhlIGRldmljZSBmb3IgQ0RS
T00tbGlrZSBkZXZpY2VzIG9ubHkuIEZvciBvdGhlcgotZGV2aWNlcywgZGV0YWNoIHRoZSBWQkQg
YW5kIGF0dGFjaCBhIG5ldyBvbmUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gdm9pZCBtZWRpYV9jaGFuZ2UgKHNlc3Npb25faWQgcywgVkJEIHJlZiB2
YmQsIFZESSByZWYgdmRpKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHZiZCAmIFRoZSB2YmQgcmVwcmVzZW50aW5nIHRoZSBD
RFJPTS1saWtlIGRldmljZSBcXCBcaGxpbmUgCi0KLXtcdHQgVkRJIHJlZiB9ICYgdmRpICYgVGhl
IG5ldyBWREkgdG8gJ2luc2VydCcgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQK
LX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5wbHVnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1Ib3Rw
bHVnIHRoZSBzcGVjaWZpZWQgVkJELCBkeW5hbWljYWxseSBhdHRhY2hpbmcgaXQgdG8gdGhlIHJ1
bm5pbmcgVk0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gdm9pZCBwbHVnIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYg
VGhlIFZCRCB0byBob3RwbHVnIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19
Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+dW5wbHVnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1Ib3Qt
dW5wbHVnIHRoZSBzcGVjaWZpZWQgVkJELCBkeW5hbWljYWxseSB1bmF0dGFjaGluZyBpdCBmcm9t
IHRoZSBydW5uaW5nCi1WTS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSB2b2lkIHVucGx1ZyAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9
ICYgc2VsZiAmIFRoZSBWQkQgdG8gaG90LXVucGx1ZyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgVkJEcyBrbm93biB0byB0aGUgc3lz
dGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgo
VkJEIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
VkJEIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxk
IG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFj
ZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2Yg
dGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoVk0gcmVmKSBnZXRfVk0gKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotVk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9WREl9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVkRJIGZpZWxkIG9mIHRo
ZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKFZESSByZWYpIGdldF9WREkgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotVkRJIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGRldmljZSBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IHN0cmluZyBnZXRfZGV2aWNlIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
QkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fnNldFxfZGV2aWNlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIGRldmlj
ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X2RldmljZSAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVm
IHNlbGYsIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYgdmFsdWUgJiBOZXcgdmFsdWUgdG8gc2V0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9ib290YWJsZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBib290YWJs
ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2Jvb3RhYmxlIChzZXNzaW9uX2lkIHMsIFZCRCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLWJvb2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5zZXRcX2Jvb3RhYmxlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhl
IGJvb3RhYmxlIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfYm9vdGFibGUgKHNlc3Npb25faWQg
cywgVkJEIHJlZiBzZWxmLCBib29sIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8g
dGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgYm9vbCB9ICYgdmFsdWUgJiBOZXcgdmFsdWUg
dG8gc2V0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9tb2RlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIG1vZGUg
ZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAodmJkX21vZGUpIGdldF9tb2RlIChzZXNzaW9uX2lkIHMsIFZCRCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXZiZFxfbW9kZQotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfbW9kZX0KLQote1xiZiBPdmVydmlldzp9IAotU2V0IHRo
ZSBtb2RlIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbW9kZSAoc2Vzc2lvbl9pZCBzLCBWQkQg
cmVmIHNlbGYsIHZiZF9tb2RlIHZhbHVlKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLXtcdHQgdmJkXF9tb2RlIH0gJiB2YWx1ZSAmIE5ldyB2YWx1
ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX3R5cGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdHlw
ZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICh2YmRfdHlwZSkgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgVkJE
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdmJkXF90eXBlCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQg
dGhlIHR5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF90eXBlIChzZXNzaW9uX2lkIHMsIFZC
RCByZWYgc2VsZiwgdmJkX3R5cGUgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCB2YmRcX3R5cGUgfSAmIHZhbHVlICYgTmV3IHZh
bHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfY3VycmVudGx5XF9hdHRhY2hlZH0KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBjdXJyZW50bHlcX2F0dGFjaGVkIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gYm9vbCBn
ZXRfY3VycmVudGx5X2F0dGFjaGVkIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLWJvb2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3N0YXR1c1xfY29kZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBzdGF0dXNcX2Nv
ZGUgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3N0YXR1c19jb2RlIChzZXNzaW9uX2lkIHMsIFZC
RCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfc3RhdHVzXF9kZXRhaWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgc3RhdHVzXF9kZXRhaWwgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3N0YXR1
c19kZXRhaWwgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5n
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ydW50aW1l
XF9wcm9wZXJ0aWVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJ1bnRpbWVcX3Byb3Bl
cnRpZXMgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3J1bnRp
bWVfcHJvcGVydGllcyAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0K
LQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
c3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmll
bGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIHFvcy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVu
IFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X3Fvc19hbGdvcml0aG1fdHlwZSAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNlbGYp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJE
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5zZXRcX3Fvc1xfYWxnb3JpdGhtXF90eXBlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIHFvcy9hbGdvcml0aG1cX3R5cGUgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9xb3Nf
YWxnb3JpdGhtX3R5cGUgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUp
XGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJE
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0
dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcW9zXF9hbGdv
cml0aG1cX3BhcmFtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBxb3MvYWxnb3JpdGht
XF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X3Fv
c19hbGdvcml0aG1fcGFyYW1zIChzZXNzaW9uX2lkIHMsIFZCRCByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLShzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcAotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQot
e1xiZiBPdmVydmlldzp9IAotU2V0IHRoZSBxb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2Yg
dGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSB2b2lkIHNldF9xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBzLCBWQkQg
cmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5nKSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQote1x0dCAoc3RyaW5nICRccmlnaHRh
cnJvdyQgc3RyaW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmFkZFxf
dG9cX3Fvc1xfYWxnb3JpdGhtXF9wYXJhbXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUFkZCB0aGUg
Z2l2ZW4ga2V5LXZhbHVlIHBhaXIgdG8gdGhlIHFvcy9hbGdvcml0aG1cX3BhcmFtcyBmaWVsZCBv
ZiB0aGUKLWdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSB2b2lkIGFkZF90b19xb3NfYWxnb3JpdGhtX3BhcmFtcyAoc2Vzc2lvbl9pZCBz
LCBWQkQgcmVmIHNlbGYsIHN0cmluZyBrZXksIHN0cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkQgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi17XHR0IHN0cmluZyB9ICYga2V5
ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiB2YWx1ZSAmIFZhbHVl
IHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxfcW9zXF9hbGdvcml0aG1cX3BhcmFtc30KLQote1xi
ZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkgYW5kIGl0cyBjb3JyZXNwb25kaW5n
IHZhbHVlIGZyb20gdGhlCi1xb3MvYWxnb3JpdGhtXF9wYXJhbXMgZmllbGQgb2YgdGhlIGdpdmVu
IFZCRC4gIElmIHRoZSBrZXkgaXMgbm90IGluIHRoYXQKLU1hcCwgdGhlbiBkbyBub3RoaW5nLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgcmVt
b3ZlX2Zyb21fcW9zX2FsZ29yaXRobV9wYXJhbXMgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxm
LCBzdHJpbmcga2V5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLXtcdHQgc3RyaW5nIH0gJiBrZXkgJiBLZXkgdG8gcmVtb3ZlIFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9x
b3NcX3N1cHBvcnRlZFxfYWxnb3JpdGhtc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBx
b3Mvc3VwcG9ydGVkXF9hbGdvcml0aG1zIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKHN0cmluZyBTZXQpIGdl
dF9xb3Nfc3VwcG9ydGVkX2FsZ29yaXRobXMgKHNlc3Npb25faWQgcywgVkJEIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0gCi17XHR0IAotc3RyaW5nIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfbWV0cmljc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBtZXRy
aWNzIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZCRF9tZXRyaWNzIHJlZikgZ2V0X21ldHJpY3MgKHNlc3Np
b25faWQgcywgVkJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVkJEXF9tZXRyaWNzIHJlZgotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNyZWF0ZX0KLQote1xiZiBP
dmVydmlldzp9IAotQ3JlYXRlIGEgbmV3IFZCRCBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFu
ZGxlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChW
QkQgcmVmKSBjcmVhdGUgKHNlc3Npb25faWQgcywgVkJEIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRCByZWNvcmQgfSAm
IGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAK
LXtcdHQgCi1WQkQgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9i
amVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ry
b3kgdGhlIHNwZWNpZmllZCBWQkQgaW5zdGFuY2UuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkZXN0cm95IChzZXNzaW9uX2lkIHMsIFZCRCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBWQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIFZC
RCBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkJEIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Np
b25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0
dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WQkQgcmVmCi19Ci0KLQotcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2Yg
dGhlIGdpdmVuIFZCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoVkJEIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBWQkQgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
VkJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi1WQkQgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBv
YmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQot
XHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFZCRFxfbWV0cmljc30KLVxz
dWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFZCRFxfbWV0cmljc30KLVxiZWdpbntsb25ndGFi
bGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17
TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBWQkRcX21ldHJpY3N9IFxcCi1cbXVsdGlj
b2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsx
MWNtfXtcZW0KLVRoZSBtZXRyaWNzIGFzc29jaWF0ZWQgd2l0aCBhIHZpcnR1YWwgYmxvY2sgZGV2
aWNlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwK
LVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmlu
ZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9c
bWF0aGl0e3J1bn0kICYgIHtcdHQgaW8vcmVhZFxfa2JzfSAmIGZsb2F0ICYgUmVhZCBiYW5kd2lk
dGggKEtpQi9zKSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBpby93cml0
ZVxfa2JzfSAmIGZsb2F0ICYgV3JpdGUgYmFuZHdpZHRoIChLaUIvcykgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbGFzdFxfdXBkYXRlZH0gJiBkYXRldGltZSAmIFRpbWUg
YXQgd2hpY2ggdGhpcyBpbmZvcm1hdGlvbiB3YXMgbGFzdCB1cGRhdGVkIFxcCi1caGxpbmUKLVxl
bmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IFZC
RFxfbWV0cmljc30KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBP
dmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFZCRFxfbWV0cmljcyBpbnN0YW5j
ZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSAoKFZCRF9tZXRyaWNzIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9u
X2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oVkJEXF9tZXRyaWNzIHJlZikgU2V0Ci19Ci0KLQot
cmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkRcX21l
dHJpY3MuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
c3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFZCRF9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRFxfbWV0
cmljcyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9pb1xfcmVhZFxfa2JzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQg
dGhlIGlvL3JlYWRcX2ticyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVkJEXF9tZXRyaWNzLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IGZsb2F0IGdldF9pb19y
ZWFkX2ticyAoc2Vzc2lvbl9pZCBzLCBWQkRfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkRcX21ldHJpY3MgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+Z2V0XF9pb1xfd3JpdGVcX2tic30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBpby93
cml0ZVxfa2JzIGZpZWxkIG9mIHRoZSBnaXZlbiBWQkRcX21ldHJpY3MuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gZmxvYXQgZ2V0X2lvX3dyaXRlX2ti
cyAoc2Vzc2lvbl9pZCBzLCBWQkRfbWV0cmljcyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWQkRcX21ldHJpY3MgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLWZsb2F0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF9sYXN0XF91cGRhdGVkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGxhc3RcX3VwZGF0
ZWQgZmllbGQgb2YgdGhlIGdpdmVuIFZCRFxfbWV0cmljcy4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBkYXRldGltZSBnZXRfbGFzdF91cGRhdGVkIChz
ZXNzaW9uX2lkIHMsIFZCRF9tZXRyaWNzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZCRFxfbWV0cmljcyByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
ZGF0ZXRpbWUKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIFZC
RFxfbWV0cmljcyBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVkJEX21ldHJpY3MgcmVmKSBn
ZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlE
IG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZCRFxf
bWV0cmljcyByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFp
bmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gVkJEXF9tZXRyaWNzLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChWQkRfbWV0cmljcyBy
ZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgVkJEX21ldHJpY3MgcmVmIHNlbGYpXGVu
ZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVkJEXF9t
ZXRyaWNzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1WQkRcX21ldHJpY3MgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxk
cyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFBCRH0K
LVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFBCRH0KLVxiZWdpbntsb25ndGFibGV9e3xs
bGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17TmFtZX0g
JiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBQQkR9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rl
c2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0KLVRoZSBw
aHlzaWNhbCBibG9jayBkZXZpY2VzIHRocm91Z2ggd2hpY2ggaG9zdHMgYWNjZXNzIFNScy59fSBc
XAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlx
dWUgaWRlbnRpZmllci9vYmplY3QgcmVmZXJlbmNlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtp
bnN9JCAmICB7XHR0IGhvc3R9ICYgaG9zdCByZWYgJiBwaHlzaWNhbCBtYWNoaW5lIG9uIHdoaWNo
IHRoZSBwYmQgaXMgYXZhaWxhYmxlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7
XHR0IFNSfSAmIFNSIHJlZiAmIHRoZSBzdG9yYWdlIHJlcG9zaXRvcnkgdGhhdCB0aGUgcGJkIHJl
YWxpc2VzIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IGRldmljZVxfY29u
ZmlnfSAmIChzdHJpbmcgJFxyaWdodGFycm93JCBzdHJpbmcpIE1hcCAmIGEgY29uZmlnIHN0cmlu
ZyB0byBzdHJpbmcgbWFwIHRoYXQgaXMgcHJvdmlkZWQgdG8gdGhlIGhvc3QncyBTUi1iYWNrZW5k
LWRyaXZlciBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBjdXJyZW50bHlc
X2F0dGFjaGVkfSAmIGJvb2wgJiBpcyB0aGUgU1IgY3VycmVudGx5IGF0dGFjaGVkIG9uIHRoaXMg
aG9zdD8gXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29j
aWF0ZWQgd2l0aCBjbGFzczogUEJEfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxs
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgUEJEcyBrbm93
biB0byB0aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19ICgoUEJEIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oUEJEIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0
cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBQQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFBC
RCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQQkQgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBob3N0IGZpZWxkIG9mIHRoZSBnaXZlbiBQQkQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGhvc3QgcmVmKSBnZXRfaG9zdCAoc2Vzc2lvbl9pZCBz
LCBQQkQgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgUEJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ob3N0IHJlZgotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfU1J9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCB0aGUgU1IgZmllbGQgb2YgdGhlIGdpdmVuIFBCRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoU1IgcmVmKSBnZXRfU1IgKHNlc3Npb25faWQgcywg
UEJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBc
aGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhs
aW5lCi17XHR0IFBCRCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotU1IgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9kZXZpY2VcX2NvbmZpZ30KLQote1xiZiBPdmVydmll
dzp9IAotR2V0IHRoZSBkZXZpY2VcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gUEJELgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+
IHN0cmluZykgTWFwKSBnZXRfZGV2aWNlX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBQQkQgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UEJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2N1cnJlbnRseVxfYXR0
YWNoZWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgY3VycmVudGx5XF9hdHRhY2hlZCBm
aWVsZCBvZiB0aGUgZ2l2ZW4gUEJELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2N1cnJlbnRseV9hdHRhY2hlZCAoc2Vzc2lvbl9pZCBz
LCBQQkQgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50
czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgUEJEIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1ib29sCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJmIE92ZXJ2aWV3On0gCi1DcmVhdGUg
YSBuZXcgUEJEIGluc3RhbmNlLCBhbmQgcmV0dXJuIGl0cyBoYW5kbGUuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFBCRCByZWYpIGNyZWF0ZSAoc2Vz
c2lvbl9pZCBzLCBQQkQgcmVjb3JkIGFyZ3MpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUEJEIHJlY29yZCB9ICYgYXJncyAmIEFsbCBjb25zdHJ1
Y3RvciBhcmd1bWVudHMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVBCRCByZWYKLX0K
LQotCi1yZWZlcmVuY2UgdG8gdGhlIG5ld2x5IGNyZWF0ZWQgb2JqZWN0Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFt
ZTp+ZGVzdHJveX0KLQote1xiZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUgc3BlY2lmaWVkIFBC
RCBpbnN0YW5jZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQgcywgUEJEIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBCRCByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUgUEJEIGluc3RhbmNlIHdpdGggdGhl
IHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IChQQkQgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVp
ZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBz
dHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLVBCRCByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSBy
ZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gUEJELgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChQQkQgcmVjb3Jk
KSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFBCRCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQQkQgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVBC
RCByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdw
YWdlCi1cc2VjdGlvbntDbGFzczogY3Jhc2hkdW1wfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBj
bGFzczogY3Jhc2hkdW1wfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9
fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXts
fH17XGJmIGNyYXNoZHVtcH0gXFwKLVxtdWx0aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYg
XG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezExY219e1xlbSBBCi1WTSBjcmFzaGR1bXAufX0g
XFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5l
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5p
cXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7
aW5zfSQgJiAge1x0dCBWTX0gJiBWTSByZWYgJiB0aGUgdmlydHVhbCBtYWNoaW5lIFxcCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtpbnN9JCAmICB7XHR0IFZESX0gJiBWREkgcmVmICYgdGhlIHZpcnR1
YWwgZGlzayBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNz
b2NpYXRlZCB3aXRoIGNsYXNzOiBjcmFzaGR1bXB9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
ZGVzdHJveX0KLQote1xiZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUgc3BlY2lmaWVkIGNyYXNo
ZHVtcC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2
b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQgcywgY3Jhc2hkdW1wIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNyYXNoZHVtcCByZWYg
fSAmIHNlbGYgJiBUaGUgY3Jhc2hkdW1wIHRvIGRlc3Ryb3kgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xi
ZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIGNyYXNoZHVtcHMga25vd24g
dG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoKGNyYXNoZHVtcCByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7
dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBU
eXBlOn0gCi17XHR0IAotKGNyYXNoZHVtcCByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8g
YWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3Jhc2hkdW1wLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAo
c2Vzc2lvbl9pZCBzLCBjcmFzaGR1bXAgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3Jhc2hkdW1wIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJp
bmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZNfQot
Ci17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZNIGZpZWxkIG9mIHRoZSBnaXZlbiBjcmFzaGR1
bXAuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZN
IHJlZikgZ2V0X1ZNIChzZXNzaW9uX2lkIHMsIGNyYXNoZHVtcCByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcmFzaGR1bXAgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLVZNIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfVkRJfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFZESSBmaWVsZCBvZiB0
aGUgZ2l2ZW4gY3Jhc2hkdW1wLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IChWREkgcmVmKSBnZXRfVkRJIChzZXNzaW9uX2lkIHMsIGNyYXNoZHVtcCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
IAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBjcmFzaGR1bXAgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZESSByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUg
ZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIGNyYXNoZHVtcCBpbnN0YW5jZSB3aXRoIHRoZSBzcGVj
aWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJh
dGltfSAoY3Jhc2hkdW1wIHJlZikgZ2V0X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1
aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
c3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8gcmV0dXJuIFxcIFxobGluZSAKLQot
XGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fSAKLXtcdHQgCi1jcmFzaGR1bXAgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhlIGdpdmVuIGNy
YXNoZHVtcC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGlt
fSAoY3Jhc2hkdW1wIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBjcmFzaGR1bXAg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgY3Jhc2hkdW1wIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1jcmFzaGR1bXAgcmVjb3JkCi19Ci0KLQotYWxs
IGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6
IFZUUE19Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBWVFBNfQotXGJlZ2lue2xvbmd0
YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xs
fXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFZUUE19IFxcCi1cbXVsdGljb2x1bW57
MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtc
ZW0gQQotdmlydHVhbCBUUE0gZGV2aWNlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBU
eXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVu
Y2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgVk19ICYgVk0gcmVmICYg
dGhlIHZpcnR1YWwgbWFjaGluZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zfSQgJiAge1x0
dCBiYWNrZW5kfSAmIFZNIHJlZiAmIHRoZSBkb21haW4gd2hlcmUgdGhlIGJhY2tlbmQgaXMgbG9j
YXRlZCBcXAotJFxtYXRoaXR7Uld9JCAmICB7XHR0IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAk
XHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYWRkaXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi1c
aGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGgg
Y2xhc3M6IFZUUE19Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91
dWlkIChzZXNzaW9uX2lkIHMsIFZUUE0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVlRQTSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9WTX0KLQote1xi
ZiBPdmVydmlldzp9IAotR2V0IHRoZSBWTSBmaWVsZCBvZiB0aGUgZ2l2ZW4gVlRQTS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVmKSBnZXRf
Vk0gKHNlc3Npb25faWQgcywgVlRQTSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWVFBNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1WTSByZWYKLX0K
LQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2JhY2tlbmR9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgYmFja2VuZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gVlRQ
TS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0g
cmVmKSBnZXRfYmFja2VuZCAoc2Vzc2lvbl9pZCBzLCBWVFBNIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFZUUE0gcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLVZNIHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIG90aGVyXF9jb25m
aWcgZmllbGQgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fQotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X290aGVyX2Nv
bmZpZyAoc2Vzc2lvbl9pZCBzLCBWVFBNIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVlRQTSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKHN0cmluZyAkXHJp
Z2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+c2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUg
b3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gVlRQTS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X290aGVyX2NvbmZpZyAoc2Vz
c2lvbl9pZCBzLCBWVFBNIHJlZiBzZWxmLCAoc3RyaW5nIC0+IHN0cmluZykgTWFwIHZhbHVlKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7
MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgVlRQTSBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQote1x0dCAo
c3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRv
IHNldCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfcnVudGltZVxfcHJvcGVydGllc30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQg
dGhlIHJ1bnRpbWVcX3Byb3BlcnRpZXMgZmllbGQgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLVxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmlu
ZykgTWFwKSBnZXRfcnVudGltZV9wcm9wZXJ0aWVzIChzZXNzaW9uX2lkIHMsIFZUUE0gcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZz
cGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJm
IHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBW
VFBNIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9Ci17XHR0Ci0oc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUNyZWF0ZSBhIG5ldyBWVFBNIGluc3RhbmNlLCBhbmQgcmV0dXJuIGl0cyBoYW5kbGUu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZUUE0g
cmVmKSBjcmVhdGUgKHNlc3Npb25faWQgcywgVlRQTSByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBWVFBNIHJlY29yZCB9ICYg
YXJncyAmIEFsbCBjb25zdHJ1Y3RvciBhcmd1bWVudHMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLVZUUE0gcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9i
amVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1
YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ry
b3kgdGhlIHNwZWNpZmllZCBWVFBNIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBWVFBN
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IFZUUE0gcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhs
aW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xi
ZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX2J5XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhl
IFZUUE0gaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKFZUUE0gcmVmKSBnZXRfYnlfdXVpZCAo
c2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0
byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZUUE0gcmVmCi19Ci0KLQot
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3Rh
dGUgb2YgdGhlIGdpdmVuIFZUUE0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKFZUUE0gcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFZU
UE0gcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgVlRQTSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotVlRQTSByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRz
IGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFzczogY29uc29s
ZX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IGNvbnNvbGV9Ci1cYmVnaW57bG9uZ3Rh
YmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9
e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgY29uc29sZX0gXFwKLVxtdWx0aWNvbHVt
bnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94ezExY219
e1xlbSBBCi1jb25zb2xlLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVz
Y3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1
dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRc
bWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgcHJvdG9jb2x9ICYgY29uc29sZVxfcHJv
dG9jb2wgJiB0aGUgcHJvdG9jb2wgdXNlZCBieSB0aGlzIGNvbnNvbGUgXFwKLSRcbWF0aGl0e1JP
fV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgbG9jYXRpb259ICYgc3RyaW5nICYgVVJJIGZvciB0aGUg
Y29uc29sZSBzZXJ2aWNlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFZN
fSAmIFZNIHJlZiAmIFZNIHRvIHdoaWNoIHRoaXMgY29uc29sZSBpcyBhdHRhY2hlZCBcXAotJFxt
YXRoaXR7Uld9JCAmICB7XHR0IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ck
IHN0cmluZykgTWFwICYgYWRkaXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi1caGxpbmUKLVxlbmR7
bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IGNvbnNv
bGV9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBjb25zb2xlcyBrbm93biB0byB0aGUgc3lzdGVt
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoY29u
c29sZSByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
KGNvbnNvbGUgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQg
ZmllbGQgb2YgdGhlIGdpdmVuIGNvbnNvbGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIGNvbnNv
bGUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9wcm90b2NvbH0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBwcm90b2NvbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gY29uc29sZS4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoY29uc29sZV9wcm90b2Nv
bCkgZ2V0X3Byb3RvY29sIChzZXNzaW9uX2lkIHMsIGNvbnNvbGUgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY29uc29sZSByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0g
Ci17XHR0IAotY29uc29sZVxfcHJvdG9jb2wKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX2xvY2F0aW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGxvY2F0aW9uIGZpZWxkIG9mIHRoZSBnaXZlbiBjb25zb2xlLgotCi0gXG5vaW5kZW50IHtcYmYg
U2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfbG9jYXRpb24gKHNlc3Np
b25faWQgcywgY29uc29sZSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBjb25zb2xlIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1ZNfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIFZNIGZpZWxkIG9mIHRoZSBnaXZlbiBjb25zb2xlLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChWTSByZWYpIGdldF9W
TSAoc2Vzc2lvbl9pZCBzLCBjb25zb2xlIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5v
aW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVs
YXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNvbnNvbGUgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLVZNIHJl
ZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfb3RoZXJc
X2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBvdGhlclxfY29uZmlnIGZpZWxk
IG9mIHRoZSBnaXZlbiBjb25zb2xlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfb3RoZXJfY29uZmln
IChzZXNzaW9uX2lkIHMsIGNvbnNvbGUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKHN0cmlu
ZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+c2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0gCi1T
ZXQgdGhlIG90aGVyXF9jb25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGNvbnNvbGUuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfb3RoZXJf
Y29uZmlnIChzZXNzaW9uX2lkIHMsIGNvbnNvbGUgcmVmIHNlbGYsIChzdHJpbmcgLT4gc3RyaW5n
KSBNYXAgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLXtcdHQgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwIH0gJiB2
YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZv
aWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX3RvXF9vdGhlclxfY29uZmlnfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1BZGQgdGhlIGdpdmVuIGtleS12YWx1ZSBwYWlyIHRvIHRoZSBvdGhl
clxfY29uZmlnIGZpZWxkIG9mIHRoZSBnaXZlbgotY29uc29sZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGFkZF90b19vdGhlcl9jb25maWcg
KHNlc3Npb25faWQgcywgY29uc29sZSByZWYgc2VsZiwgc3RyaW5nIGtleSwgc3RyaW5nIHZhbHVl
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNv
bnNvbGUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci17XHR0IHN0cmluZyB9ICYga2V5ICYgS2V5IHRvIGFkZCBcXCBcaGxpbmUgCi0KLXtcdHQgc3Ry
aW5nIH0gJiB2YWx1ZSAmIFZhbHVlIHRvIGFkZCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJlbW92ZVxfZnJvbVxfb3RoZXJcX2Nv
bmZpZ30KLQote1xiZiBPdmVydmlldzp9IAotUmVtb3ZlIHRoZSBnaXZlbiBrZXkgYW5kIGl0cyBj
b3JyZXNwb25kaW5nIHZhbHVlIGZyb20gdGhlIG90aGVyXF9jb25maWcKLWZpZWxkIG9mIHRoZSBn
aXZlbiBjb25zb2xlLiAgSWYgdGhlIGtleSBpcyBub3QgaW4gdGhhdCBNYXAsIHRoZW4gZG8KLW5v
dGhpbmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
dm9pZCByZW1vdmVfZnJvbV9vdGhlcl9jb25maWcgKHNlc3Npb25faWQgcywgY29uc29sZSByZWYg
c2VsZiwgc3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBjb25zb2xlIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQote1x0dCBzdHJpbmcgfSAmIGtleSAmIEtleSB0byByZW1vdmUgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5ldyBjb25zb2xlIGluc3Rh
bmNlLCBhbmQgcmV0dXJuIGl0cyBoYW5kbGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGNvbnNvbGUgcmVmKSBjcmVhdGUgKHNlc3Npb25faWQgcywg
Y29uc29sZSByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBjb25zb2xlIHJlY29yZCB9ICYgYXJncyAmIEFsbCBjb25zdHJ1Y3Rv
ciBhcmd1bWVudHMgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWNvbnNvbGUgcmVmCi19
Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ryb3kgdGhlIHNwZWNpZmllZCBj
b25zb2xlIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57
dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBjb25zb2xlIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNvbnNv
bGUgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXZvaWQKLX0KLQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF91
dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWZlcmVuY2UgdG8gdGhlIGNvbnNvbGUg
aW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWdu
YXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGNvbnNvbGUgcmVmKSBnZXRfYnlfdXVpZCAoc2Vz
c2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9mIG9iamVjdCB0byBy
ZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBc
bm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWNvbnNvbGUgcmVmCi19Ci0KLQot
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3Rh
dGUgb2YgdGhlIGdpdmVuIGNvbnNvbGUuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAK
LVxiZWdpbnt2ZXJiYXRpbX0gKGNvbnNvbGUgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lk
IHMsIGNvbnNvbGUgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdj
bX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9u
fSBcXCBcaGxpbmUKLXtcdHQgY29uc29sZSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotY29uc29sZSByZWNvcmQKLX0K
LQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotCi1cdnNwYWNlezFjbX0KLVxuZXdwYWdlCi1cc2VjdGlv
bntDbGFzczogRFBDSX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IERQQ0l9Ci1cYmVn
aW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1
bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgRFBDSX0gXFwKLVxtdWx0
aWNvbHVtbnsxfXt8bH17RGVzY3JpcHRpb259ICYgXG11bHRpY29sdW1uezN9e2x8fXtccGFyYm94
ezExY219e1xlbSBBCi1wYXNzLXRocm91Z2ggUENJIGRldmljZS59fSBcXAotXGhsaW5lCi1RdWFs
cyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9c
bWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllci9v
YmplY3QgcmVmZXJlbmNlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtpbnN0fSQgJiAge1x0dCBW
TX0gJiBWTSByZWYgJiB0aGUgdmlydHVhbCBtYWNoaW5lIFxcCi0kXG1hdGhpdHtST31fXG1hdGhp
dHtpbnN0fSQgJiAge1x0dCBQUENJfSAmIFBQQ0kgcmVmICYgdGhlIHBoeXNpY2FsIFBDSSBkZXZp
Y2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc3R9JCAmICB7XHR0IGhvdHBsdWdcX3Nsb3R9
ICYgaW50ICYgdGhlIHNsb3QgbnVtYmVyIHRvIHdoaWNoIHRoaXMgUENJIGRldmljZSBpcyBpbnNl
cnRlZCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9kb21h
aW59ICYgaW50ICYgdGhlIHZpcnR1YWwgZG9tYWluIG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9idXN9ICYgaW50ICYgdGhlIHZpcnR1YWwgYnVz
IG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9z
bG90fSAmIGludCAmIHRoZSB2aXJ0dWFsIHNsb3QgbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IHZpcnR1YWxcX2Z1bmN9ICYgaW50ICYgdGhlIHZpcnR1YWwgZnVu
YyBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdmlydHVhbFxf
bmFtZX0gJiBzdHJpbmcgJiB0aGUgdmlydHVhbCBQQ0kgbmFtZSBcXAotXGhsaW5lCi1cZW5ke2xv
bmd0YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBEUENJfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYWxsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1S
ZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgRFBDSXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKERQQ0kgcmVmKSBT
ZXQpIGdldF9hbGwgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShEUENJIHJlZikg
U2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwgb2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBn
aXZlbiBEUENJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBEUENJIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERQQ0kgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fmdldFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2YgdGhlIGdp
dmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKFZNIHJlZikgZ2V0X1ZNIChzZXNzaW9uX2lkIHMsIERQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFBDSSByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotVk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF9QUENJfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBQQ0kgZmllbGQgb2YgdGhl
IGdpdmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gKFBQQ0kgcmVmKSBnZXRfUFBDSSAoc2Vzc2lvbl9pZCBzLCBEUENJIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERQQ0kg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLVBQQ0kgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9ob3RwbHVnXF9zbG90fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGhvdHBsdWdcX3Nsb3QgZmllbGQgb2YgdGhlIGdpdmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9ob3RwbHVnX3Nsb3QgKHNl
c3Npb25faWQgcywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1YWxcX2RvbWFpbn0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2aXJ0dWFsXF9kb21haW4gZmllbGQgb2YgdGhlIGdp
dmVuIERQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gaW50IGdldF92aXJ0dWFsX2RvbWFpbiAoc2Vzc2lvbl9pZCBzLCBEUENJIHJlZiBzZWxmKVxl
bmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERQQ0kg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfdmlydHVhbFxfYnVzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHZpcnR1
YWxcX2J1cyBmaWVsZCBvZiB0aGUgZ2l2ZW4gRFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfYnVzIChzZXNzaW9uX2lk
IHMsIERQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgRFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhl
IGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92aXJ0dWFsXF9zbG90fQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHZpcnR1YWxcX3Nsb3QgZmllbGQgb2YgdGhlIGdpdmVuIERQQ0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF92
aXJ0dWFsX3Nsb3QgKHNlc3Npb25faWQgcywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1p
bnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1
YWxcX2Z1bmN9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdmlydHVhbFxfZnVuYyBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfZnVuYyAoc2Vzc2lvbl9pZCBzLCBEUENJIHJl
ZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IERQQ0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5l
IAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfdmlydHVhbFxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSB2aXJ0dWFsXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBEUENJLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdmlydHVhbF9u
YW1lIChzZXNzaW9uX2lkIHMsIERQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxh
cn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJm
IGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVu
Y2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y3JlYXRlfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1DcmVhdGUgYSBuZXcgRFBDSSBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMg
aGFuZGxlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IChEUENJIHJlZikgY3JlYXRlIChzZXNzaW9uX2lkIHMsIERQQ0kgcmVjb3JkIGFyZ3MpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFBDSSByZWNv
cmQgfSAmIGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1EUENJIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkgY3Jl
YXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3On0g
Ci1EZXN0cm95IHRoZSBzcGVjaWZpZWQgRFBDSSBpbnN0YW5jZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQg
cywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0
byB0aGUgRFBDSSBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoRFBDSSByZWYpIGdldF9ieV91
dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2Jq
ZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotRFBDSSByZWYKLX0K
LQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3Jk
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVu
dCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gRFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoRFBDSSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQg
cywgRFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1EUENJIHJlY29yZAotfQotCi0KLWFsbCBm
aWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBQ
UENJfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczogUFBDSX0KLVxiZWdpbntsb25ndGFi
bGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUKLVxtdWx0aWNvbHVtbnsxfXt8bH17
TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBQUENJfSBcXAotXG11bHRpY29sdW1uezF9
e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVt
IEEKLXBoeXNpY2FsIFBDSSBkZXZpY2UufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVsZCAmIFR5
cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5j
ZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBob3N0fSAmIGhvc3QgcmVm
ICYgIHRoZSBwaHlzaWNhbCBtYWNoaW5lIHRvIHdoaWNoIHRoaXMgUFBDSSBpcyBjb25uZWN0ZWQg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgZG9tYWlufSAmIGludCAmIHRo
ZSBkb21haW4gbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGJ1
c30gJiBpbnQgJiB0aGUgYnVzIG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBzbG90fSAmIGludCAmIHRoZSBzbG90IG51bWJlciBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCBmdW5jfSAmIGludCAmIHRoZSBmdW5jIG51bWJlciBcXAotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBuYW1lfSAmIHN0cmluZyAmIHRoZSBQQ0kg
bmFtZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2ZW5kb3JcX2lkfSAm
IGludCAmIHRoZSB2ZW5kb3IgSUQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgdmVuZG9yXF9uYW1lfSAmIHN0cmluZyAmIHRoZSB2ZW5kb3IgbmFtZSBcXAotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBkZXZpY2VcX2lkfSAmIGludCAmIHRoZSBkZXZpY2Ug
SUQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgZGV2aWNlXF9uYW1lfSAm
IHN0cmluZyAmIHRoZSBkZXZpY2UgbmFtZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCByZXZpc2lvblxfaWR9ICYgaW50ICYgdGhlIHJldmlzaW9uIElEIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGNsYXNzXF9jb2RlfSAmIGludCAmIHRoZSBjbGFz
cyBjb2RlIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGNsYXNzXF9uYW1l
fSAmIHN0cmluZyAmIHRoZSBjbGFzcyBuYW1lIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IHN1YnN5c3RlbVxfdmVuZG9yXF9pZH0gJiBpbnQgJiB0aGUgc3Vic3lzdGVtIHZl
bmRvciBJRCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBzdWJzeXN0ZW1c
X3ZlbmRvclxfbmFtZX0gJiBzdHJpbmcgJiB0aGUgc3Vic3lzdGVtIHZlbmRvciBuYW1lIFxcCi0k
XG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN1YnN5c3RlbVxfaWR9ICYgaW50ICYg
dGhlIHN1YnN5c3RlbSBJRCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBz
dWJzeXN0ZW1cX25hbWV9ICYgc3RyaW5nICYgdGhlIHN1YnN5c3RlbSBuYW1lIFxcCi0kXG1hdGhp
dHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IGRyaXZlcn0gJiBzdHJpbmcgJiB0aGUgZHJpdmVy
IG5hbWUgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29j
aWF0ZWQgd2l0aCBjbGFzczogUFBDSX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Fs
bH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFBQQ0lzIGtu
b3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gKChQUENJIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi0oUFBDSSByZWYpIFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9i
amVjdHMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3V1aWQgKHNlc3Npb25faWQg
cywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2hvc3R9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgaG9zdCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9ob3N0IChzZXNz
aW9uX2lkIHMsIFBQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhl
IG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaG9zdCByZWYKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RvbWFpbn0KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBkb21haW4gZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9kb21h
aW4gKHNlc3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQot
Ci12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J1c30KLQote1xiZiBP
dmVydmlldzp9IAotR2V0IHRoZSBidXMgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9idXMgKHNl
c3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Nsb3R9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLUdldCB0aGUgc2xvdCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Nsb3QgKHNlc3Np
b25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Z1bmN9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgZnVuYyBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X2Z1bmMgKHNlc3Npb25f
aWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX25hbWV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAK
LUdldCB0aGUgbmFtZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWUgKHNlc3Npb25f
aWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZlbmRvclxfaWR9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgdmVuZG9yXF9pZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Zl
bmRvcl9pZCAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAot
fQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmVuZG9yXF9u
YW1lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHZlbmRvclxfbmFtZSBmaWVsZCBvZiB0
aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSBzdHJpbmcgZ2V0X3ZlbmRvcl9uYW1lIChzZXNzaW9uX2lkIHMsIFBQQ0kgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9kZXZpY2VcX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IGRldmljZVxfaWQgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBT
aWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF9kZXZpY2VfaWQgKHNlc3Npb25f
aWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2RldmljZVxfbmFtZX0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IHRoZSBkZXZpY2VcX25hbWUgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdl
dF9kZXZpY2VfbmFtZSAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBzZWxm
ICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAK
LXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
cmV2aXNpb25cX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJldmlzaW9uXF9pZCBm
aWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3JldmlzaW9uX2lkIChzZXNzaW9uX2lkIHMsIFBQQ0kg
cmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0K
LSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgUFBDSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9jbGFzc1xfY29kZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBjbGFzc1xfY29kZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X2NsYXNzX2NvZGUgKHNl
c3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1
ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2NsYXNzXF9uYW1lfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1HZXQgdGhlIGNsYXNzXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBQUENJ
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmlu
ZyBnZXRfY2xhc3NfbmFtZSAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfc3Vic3lzdGVtXF92ZW5kb3JcX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN1
YnN5c3RlbVxfdmVuZG9yXF9pZCBmaWVsZCBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3N1YnN5c3RlbV92
ZW5kb3JfaWQgKHNlc3Npb25faWQgcywgUFBDSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N1YnN5c3Rl
bVxfdmVuZG9yXF9uYW1lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN1YnN5c3RlbVxf
dmVuZG9yXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBQUENJLgotCi0gXG5vaW5kZW50IHtcYmYg
U2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfc3Vic3lzdGVtX3ZlbmRv
cl9uYW1lIChzZXNzaW9uX2lkIHMsIFBQQ0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFBDSSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5n
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9zdWJzeXN0
ZW1cX2lkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHN1YnN5c3RlbVxfaWQgZmllbGQg
b2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdp
bnt2ZXJiYXRpbX0gaW50IGdldF9zdWJzeXN0ZW1faWQgKHNlc3Npb25faWQgcywgUFBDSSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3N1YnN5c3RlbVxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0
IHRoZSBzdWJzeXN0ZW1cX25hbWUgZmllbGQgb2YgdGhlIGdpdmVuIFBQQ0kuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9zdWJzeXN0
ZW1fbmFtZSAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3Bh
Y2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmlu
ZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfZHJpdmVy
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGRyaXZlciBmaWVsZCBvZiB0aGUgZ2l2ZW4g
UFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBz
dHJpbmcgZ2V0X2RyaXZlciAoc2Vzc2lvbl9pZCBzLCBQUENJIHJlZiBzZWxmKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBQQ0kgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9
Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0
dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUg
UFBDSSBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUFBDSSByZWYpIGdldF9ieV91dWlkIChz
ZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRv
IHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUFBDSSByZWYKLX0KLQotCi1y
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0
ZSBvZiB0aGUgZ2l2ZW4gUFBDSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAoUFBDSSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgUFBD
SSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQUENJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1QUENJIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
ZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBEU0NTSX0K
LVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IERTQ1NJfQotXGJlZ2lue2xvbmd0YWJsZX17
fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xsfXtOYW1l
fSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIERTQ1NJfSBcXAotXG11bHRpY29sdW1uezF9e3xs
fXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEK
LWhhbGYtdmlydHVhbGl6ZWQgU0NTSSBkZXZpY2UufX0gXFwKLVxobGluZQotUXVhbHMgJiBGaWVs
ZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1hdGhpdHtST31fXG1hdGhpdHty
dW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlkZW50aWZpZXIvb2JqZWN0IHJl
ZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0kICYgIHtcdHQgVk19ICYgVk0g
cmVmICYgdGhlIHZpcnR1YWwgbWFjaGluZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0k
ICYgIHtcdHQgUFNDU0l9ICYgUFNDU0kgcmVmICYgdGhlIHBoeXNpY2FsIFNDU0kgZGV2aWNlIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IEhCQX0gJiBEU0NTSVxfSEJBIHJl
ZiAmIHRoZSBoYWxmLXZpcnR1YWxpemVkIFNDU0kgaG9zdCBidXMgYWRhcHRlciBcXAotJFxtYXRo
aXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB2aXJ0dWFsXF9ob3N0fSAmIGludCAmIHRoZSB2
aXJ0dWFsIGhvc3QgbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0
IHZpcnR1YWxcX2NoYW5uZWx9ICYgaW50ICYgdGhlIHZpcnR1YWwgY2hhbm5lbCBudW1iZXIgXFwK
LSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdmlydHVhbFxfdGFyZ2V0fSAmIGlu
dCAmIHRoZSB2aXJ0dWFsIHRhcmdldCBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgdmlydHVhbFxfbHVufSAmIGludCAmIHRoZSB2aXJ0dWFsIGxvZ2ljYWwgdW5p
dCBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e2luc3R9JCAmICB7XHR0IHZpcnR1YWxc
X0hDVEx9ICYgc3RyaW5nICYgdGhlIHZpcnR1YWwgSENUTCBcXAotJFxtYXRoaXR7Uk99X1xtYXRo
aXR7cnVufSQgJiAge1x0dCBydW50aW1lXF9wcm9wZXJ0aWVzfSAmIChzdHJpbmcgJFxyaWdodGFy
cm93JCBzdHJpbmcpIE1hcCAmIERldmljZSBydW50aW1lIHByb3BlcnRpZXMgXFwKLVxobGluZQot
XGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBjbGFzczog
RFNDU0l9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3ZlcnZp
ZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBEU0NTSXMga25vd24gdG8gdGhlIHN5c3Rl
bS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKERT
Q1NJIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0o
RFNDU0kgcmVmKSBTZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmll
bGQgb2YgdGhlIGdpdmVuIERTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfVk19Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgVk0gZmll
bGQgb2YgdGhlIGdpdmVuIERTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IChWTSByZWYpIGdldF9WTSAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtc
YmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0
IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fSAKLXtcdHQgCi1WTSByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5nZXRcX1BTQ1NJfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIFBT
Q1NJIGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUFNDU0kgcmVmKSBnZXRfUFNDU0kgKHNlc3Npb25faWQg
cywgRFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVj
dCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUFNDU0kgcmVmCi19Ci0KLQotdmFsdWUg
b2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9IQkF9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgSEJBIGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoRFNDU0lfSEJBIHJlZikgZ2V0X0hC
QSAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1EU0NTSVxfSEJB
IHJlZgotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmly
dHVhbFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB2aXJ0dWFsXF9ob3N0IGZp
ZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBEU0NT
SSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQK
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1YWxcX2NoYW5uZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdmlydHVhbFxfY2hhbm5lbCBmaWVsZCBvZiB0aGUgZ2l2ZW4gRFNDU0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF92
aXJ0dWFsX2NoYW5uZWwgKHNlc3Npb25faWQgcywgRFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBEU0NTSSByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92
aXJ0dWFsXF90YXJnZXR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdmlydHVhbFxfdGFy
Z2V0IGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfdGFyZ2V0IChzZXNzaW9uX2lk
IHMsIERTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1
bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18
fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBc
XCBcaGxpbmUKLXtcdHQgRFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmpl
Y3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9p
bmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdmlydHVhbFxfbHVufQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHZpcnR1YWxcX2x1biBmaWVsZCBvZiB0aGUgZ2l2ZW4gRFNDU0kuCi0K
LSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGdldF92
aXJ0dWFsX2x1biAoc2Vzc2lvbl9pZCBzLCBEU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1p
bnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3ZpcnR1
YWxcX0hDVEx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdmlydHVhbFxfSENUTCBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFNDU0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF92aXJ0dWFsX0hDVEwgKHNlc3Npb25faWQgcywgRFND
U0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBEU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ydW50aW1lXF9wcm9wZXJ0aWVzfQotCi17XGJmIE92
ZXJ2aWV3On0gCi1HZXQgdGhlIHJ1bnRpbWVcX3Byb3BlcnRpZXMgZmllbGQgb2YgdGhlIGdpdmVu
IERTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
ICgoc3RyaW5nIC0+IHN0cmluZykgTWFwKSBnZXRfcnVudGltZV9wcm9wZXJ0aWVzIChzZXNzaW9u
X2lkIHMsIERTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBB
cmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oc3RyaW5nICRccmlnaHRhcnJv
dyQgc3RyaW5nKSBNYXAKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1l
On5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0ZSBhIG5ldyBEU0NTSSBpbnN0YW5j
ZSwgYW5kIGNyZWF0ZSBhIG5ldyBEU0NTSVxfSEJBIGluc3RhbmNlIGFzIG5lZWRlZAotdGhhdCB0
aGUgbmV3IERTQ1NJIGluc3RhbmNlIGNvbm5lY3RzIHRvLCBhbmQgcmV0dXJuIHRoZSBoYW5kbGUg
b2YgdGhlIG5ldwotRFNDU0kgaW5zdGFuY2UuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKERTQ1NJIHJlZikgY3JlYXRlIChzZXNzaW9uX2lkIHMsIERT
Q1NJIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVu
dHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQot
IFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBc
aGxpbmUKLXtcdHQgRFNDU0kgcmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9yIGFyZ3Vt
ZW50cyBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotRFNDU0kgcmVmCi19Ci0KLQotcmVm
ZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ry
b3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLURlc3Ryb3kgdGhlIHNwZWNpZmllZCBEU0NTSSBpbnN0
YW5jZSwgYW5kIGRlc3Ryb3kgYSBEU0NTSVxfSEJBIGluc3RhbmNlIGFzCi1uZWVkZWQgdGhhdCB0
aGUgc3BlY2lmaWVkIERTQ1NJIGluc3RhbmNlIGNvbm5lY3RzIHRvLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9p
ZCBzLCBEU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2Jq
ZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi12b2lkCi19Ci0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5j
ZSB0byB0aGUgRFNDU0kgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9p
bmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKERTQ1NJIHJlZikgZ2V0
X2J5X3V1aWQgKHNlc3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHV1aWQgJiBVVUlEIG9m
IG9iamVjdCB0byByZXR1cm4gXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLURTQ1NJIHJl
ZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9y
ZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlY29yZCBjb250YWluaW5nIHRoZSBj
dXJyZW50IHN0YXRlIG9mIHRoZSBnaXZlbiBEU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoRFNDU0kgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNz
aW9uX2lkIHMsIERTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLURTQ1NJIHJlY29yZAotfQot
Ci0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxzZWN0aW9u
e0NsYXNzOiBEU0NTSVxfSEJBfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczogRFNDU0lc
X0hCQX0KLVxiZWdpbntsb25ndGFibGV9e3xsbGxwezAuMzhcdGV4dHdpZHRofXx9Ci1caGxpbmUK
LVxtdWx0aWNvbHVtbnsxfXt8bH17TmFtZX0gJiBcbXVsdGljb2x1bW57M317bHx9e1xiZiBEU0NT
SVxfSEJBfSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1
bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEKLWhhbGYtdmlydHVhbGl6ZWQgU0NTSSBob3N0
IGJ1cyBhZGFwdGVyLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3Jp
cHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlk
fSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e2luc3R9JCAmICB7XHR0IFZNfSAmIFZNIHJlZiAmIHRoZSB2aXJ0dWFs
IG1hY2hpbmUgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgUFNDU0lcX0hC
QXN9ICYgKFBTQ1NJXF9IQkEgcmVmKSBTZXQgJiB0aGUgcGh5c2ljYWwgU0NTSSBIQkFzIFxcCi0k
XG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IERTQ1NJc30gJiAoRFNDU0kgcmVmKSBT
ZXQgJiB0aGUgaGFsZi12aXJ0dWFsaXplZCBTQ1NJIGRldmljZXMgd2hpY2ggYXJlIGNvbm5lY3Rl
ZCB0byB0aGlzIERTQ1NJIEhCQSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0kICYgIHtc
dHQgdmlydHVhbFxfaG9zdH0gJiBpbnQgJiB0aGUgdmlydHVhbCBob3N0IG51bWJlciBcXAotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7aW5zdH0kICYgIHtcdHQgYXNzaWdubWVudFxfbW9kZX0gJiBzdHJp
bmcgJiB0aGUgYXNzaWdubWVudCBtb2RlIG9mIHRoZSBoYWxmLXZpcnR1YWxpemVkIFNDU0kgZGV2
aWNlcyB3aGljaCBhcmUgY29ubmVjdGVkIHRvIHRoaXMgRFNDU0kgSEJBIFxcCi1caGxpbmUKLVxl
bmR7bG9uZ3RhYmxlfQotXHN1YnNlY3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IERT
Q1NJXF9IQkF9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBEU0NTSSBIQkFzIGtub3duIHRvIHRo
ZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRp
bX0gKChEU0NTSV9IQkEgcmVmKSBTZXQpIGdldF9hbGwgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJh
dGltfQotCi0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLShEU0NTSVxfSEJBIHJlZikgU2V0Ci19Ci0KLQotcmVmZXJlbmNlcyB0byBhbGwg
b2JqZWN0cwotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBEU0NTSSBIQkEuCi0KLSBcbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNz
aW9uX2lkIHMsIERTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0
byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2Nt
fQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQKLXN0cmluZwotfQotCi0K
LXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfVk19Ci0KLXtcYmYgT3Zl
cnZpZXc6fSAKLUdldCB0aGUgVk0gZmllbGQgb2YgdGhlIGdpdmVuIERTQ1NJIEhCQS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoVk0gcmVmKSBnZXRf
Vk0gKHNlc3Npb25faWQgcywgRFNDU0lfSEJBIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0lcX0hCQSByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQot
XHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAot
Vk0gcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9Q
U0NTSVxfSEJBc30KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBQU0NTSVxfSEJBcyBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICgoUFNDU0lfSEJBIHJlZikgU2V0KSBnZXRfUFNDU0lfSEJBcyAo
c2Vzc2lvbl9pZCBzLCBEU0NTSV9IQkEgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBEU0NTSVxfSEJBIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi0oUFND
U0lcX0hCQSByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfRFNDU0lzfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIERTQ1NJcyBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19ICgoRFNDU0kgcmVmKSBTZXQpIGdldF9EU0NTSXMgKHNlc3Npb25f
aWQgcywgRFNDU0lfSEJBIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0lcX0hCQSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKERTQ1NJIHJlZikg
U2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92aXJ0
dWFsXF9ob3N0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHZpcnR1YWxcX2hvc3QgZmll
bGQgb2YgdGhlIGdpdmVuIERTQ1NJIEhCQS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3ZpcnR1YWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBE
U0NTSV9IQkEgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBEU0NTSVxfSEJBIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUg
b2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBv
ZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Fzc2lnbm1lbnRcX21vZGV9Ci0KLXtc
YmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgYXNzaWdubWVudFxfbW9kZSBmaWVsZCBvZiB0aGUgZ2l2
ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHN0cmluZyBnZXRfYXNzaWdubWVudF9tb2RlIChzZXNzaW9uX2lkIHMsIERTQ1NJX0hC
QSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxp
bmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5l
Ci17XHR0IERTQ1NJXF9IQkEgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3Qg
XFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRl
bnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRo
ZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNyZWF0ZX0KLQote1xiZiBPdmVydmlldzp9IAotQ3Jl
YXRlIGEgbmV3IERTQ1NJXF9IQkEgaW5zdGFuY2UsIGFuZCBjcmVhdGUgbmV3IERTQ1NJIGluc3Rh
bmNlcyBvZgotaGFsZi12aXJ0dWFsaXplZCBTQ1NJIGRldmljZXMgd2hpY2ggYXJlIGNvbm5lY3Rl
ZCB0byB0aGUgaGFsZi12aXJ0dWFsaXplZAotU0NTSSBob3N0IGJ1cyBhZGFwdGVyLCBhbmQgcmV0
dXJuIHRoZSBoYW5kbGUgb2YgdGhlIG5ldyBEU0NTSVxfSEJBIGluc3RhbmNlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChEU0NTSV9IQkEgcmVmKSBj
cmVhdGUgKHNlc3Npb25faWQgcywgRFNDU0lfSEJBIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0aW19
Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVn
aW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgRFNDU0lcX0hCQSByZWNvcmQg
fSAmIGFyZ3MgJiBBbGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0
YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi1EU0NTSVxfSEJBIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgbmV3bHkg
Y3JlYXRlZCBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5kZXN0cm95fQotCi17XGJmIE92ZXJ2aWV3
On0gCi1EZXN0cm95IHRoZSBzcGVjaWZpZWQgRFNDU0lcX0hCQSBpbnN0YW5jZSwgYW5kIGRlc3Ry
b3kgRFNDU0kgaW5zdGFuY2VzIG9mCi1oYWxmLXZpcnR1YWxpemVkIFNDU0kgZGV2aWNlcyB3aGlj
aCBhcmUgY29ubmVjdGVkIHRvIHRoZSBoYWxmLXZpcnR1YWxpemVkIFNDU0kKLWhvc3QgYnVzIGFk
YXB0ZXIuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0g
dm9pZCBkZXN0cm95IChzZXNzaW9uX2lkIHMsIERTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJXF9IQkEgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXZvaWQKLX0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVpZH0KLQot
e1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBEU0NTSVxfSEJBIGluc3Rh
bmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJl
On0gCi1cYmVnaW57dmVyYmF0aW19IChEU0NTSV9IQkEgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lv
bl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJn
dW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219
fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0g
XFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVy
biBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotRFNDU0lcX0hCQSByZWYKLX0KLQotCi1y
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0
ZSBvZiB0aGUgZ2l2ZW4gRFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IChEU0NTSV9IQkEgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9u
X2lkIHMsIERTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8
cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlw
dGlvbn0gXFwgXGhsaW5lCi17XHR0IERTQ1NJXF9IQkEgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLURTQ1NJXF9IQkEg
cmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFn
ZQotXHNlY3Rpb257Q2xhc3M6IFBTQ1NJfQotXHN1YnNlY3Rpb257RmllbGRzIGZvciBjbGFzczog
UFNDU0l9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5l
Ci1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgUFND
U0l9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnsz
fXtsfH17XHBhcmJveHsxMWNtfXtcZW0gQQotcGh5c2ljYWwgU0NTSSBkZXZpY2UufX0gXFwKLVxo
bGluZQotUXVhbHMgJiBGaWVsZCAmIFR5cGUgJiBEZXNjcmlwdGlvbiBcXAotXGhsaW5lCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHV1aWR9ICYgc3RyaW5nICYgdW5pcXVlIGlk
ZW50aWZpZXIvb2JqZWN0IHJlZmVyZW5jZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQg
JiAge1x0dCBob3N0fSAmIGhvc3QgcmVmICYgIHRoZSBwaHlzaWNhbCBtYWNoaW5lIHRvIHdoaWNo
IHRoaXMgUFNDU0kgaXMgY29ubmVjdGVkIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAm
ICB7XHR0IEhCQX0gJiBQU0NTSVxfSEJBIHJlZiAmICB0aGUgcGh5c2ljYWwgU0NTSSBob3N0IGJ1
cyBhZGFwdGVyIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHBoeXNpY2Fs
XF9ob3N0fSAmIGludCAmIHRoZSBwaHlzaWNhbCBob3N0IG51bWJlciBcXAotJFxtYXRoaXR7Uk99
X1xtYXRoaXR7cnVufSQgJiAge1x0dCBwaHlzaWNhbFxfY2hhbm5lbH0gJiBpbnQgJiB0aGUgcGh5
c2ljYWwgY2hhbm5lbCBudW1iZXIgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgcGh5c2ljYWxcX3RhcmdldH0gJiBpbnQgJiB0aGUgcGh5c2ljYWwgdGFyZ2V0IG51bWJlciBc
XAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBwaHlzaWNhbFxfbHVufSAmIGlu
dCAmIHRoZSBwaHlzaWNhbCBsb2dpY2FsIHVuaXQgbnVtYmVyIFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmICB7XHR0IHBoeXNpY2FsXF9IQ1RMfSAmIHN0cmluZyAmIHRoZSBwaHlzaWNh
bCBIQ1RMIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHZlbmRvclxfbmFt
ZX0gJiBzdHJpbmcgJiB0aGUgdmVuZG9yIG5hbWUgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1
bn0kICYgIHtcdHQgbW9kZWx9ICYgc3RyaW5nICYgdGhlIG1vZGVsIFxcCi0kXG1hdGhpdHtST31f
XG1hdGhpdHtydW59JCAmICB7XHR0IHR5cGVcX2lkfSAmIGludCAmIHRoZSBTQ1NJIHR5cGUgSUQg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdHlwZX0gJiBzdHJpbmcgJiAg
dGhlIFNDU0kgdHlwZSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBkZXZc
X25hbWV9ICYgc3RyaW5nICYgdGhlIFNDU0kgZGV2aWNlIG5hbWUgKGUuZy4gc2RhIG9yIHN0MCkg
XFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgc2dcX25hbWV9ICYgc3RyaW5n
ICYgdGhlIFNDU0kgZ2VuZXJpYyBkZXZpY2UgbmFtZSAoZS5nLiBzZzApIFxcCi0kXG1hdGhpdHtS
T31fXG1hdGhpdHtydW59JCAmICB7XHR0IHJldmlzaW9ufSAmIHN0cmluZyAmIHRoZSByZXZpc2lv
biBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBzY3NpXF9pZH0gJiBzdHJp
bmcgJiB0aGUgU0NTSSBJRCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCBz
Y3NpXF9sZXZlbH0gJiBpbnQgJiB0aGUgU0NTSSBsZXZlbCBcXAotXGhsaW5lCi1cZW5ke2xvbmd0
YWJsZX0KLVxzdWJzZWN0aW9ue1JQQ3MgYXNzb2NpYXRlZCB3aXRoIGNsYXNzOiBQU0NTSX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0
dXJuIGEgbGlzdCBvZiBhbGwgdGhlIFBTQ1NJcyBrbm93biB0byB0aGUgc3lzdGVtLgotCi0gXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19ICgoUFNDU0kgcmVmKSBT
ZXQpIGdldF9hbGwgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2c3BhY2V7MC4z
Y219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLShQU0NTSSByZWYp
IFNldAotfQotCi0KLXJlZmVyZW5jZXMgdG8gYWxsIG9iamVjdHMKLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5n
ZXRcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUg
Z2l2ZW4gUFNDU0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJi
YXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBzZWxmKVxlbmR7
dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4z
Y219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAm
IHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFNDU0kgcmVm
IH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3Rh
YnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9
IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfaG9zdH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBob3N0IGZpZWxkIG9m
IHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lu
e3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9ob3N0IChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9IQkF9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgSEJB
IGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
IAotXGJlZ2lue3ZlcmJhdGltfSAoUFNDU0lfSEJBIHJlZikgZ2V0X0hCQSAoc2Vzc2lvbl9pZCBz
LCBQU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFBTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1QU0NTSVxfSEJBIHJlZgotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcGh5c2ljYWxcX2hvc3R9Ci0K
LXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgcGh5c2ljYWxcX2hvc3QgZmllbGQgb2YgdGhlIGdp
dmVuIFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0
aW19IGludCBnZXRfcGh5c2ljYWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBQU0NTSSByZWYgc2VsZilc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlw
ZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJ
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVu
ZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5
cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBu
YW1lOn5nZXRcX3BoeXNpY2FsXF9jaGFubmVsfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhl
IHBoeXNpY2FsXF9jaGFubmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3BoeXNpY2FsX2No
YW5uZWwgKHNlc3Npb25faWQgcywgUFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQU0NTSSByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFj
ZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50Ci19
Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9waHlzaWNhbFxf
dGFyZ2V0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF90YXJnZXQgZmll
bGQgb2YgdGhlIGdpdmVuIFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1c
YmVnaW57dmVyYmF0aW19IGludCBnZXRfcGh5c2ljYWxfdGFyZ2V0IChzZXNzaW9uX2lkIHMsIFBT
Q1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6
fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxo
bGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxp
bmUKLXtcdHQgUFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVs
ZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcGh5c2ljYWxcX2x1bn0KLQote1xiZiBPdmVydmlldzp9
IAotR2V0IHRoZSBwaHlzaWNhbFxfbHVuIGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3BoeXNp
Y2FsX2x1biAoc2Vzc2lvbl9pZCBzLCBQU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJIHJlZiB9ICYgc2VsZiAmIHJl
ZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3BoeXNpY2Fs
XF9IQ1RMfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHBoeXNpY2FsXF9IQ1RMIGZpZWxk
IG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3BoeXNpY2FsX0hDVEwgKHNlc3Npb25faWQgcywgUFND
U0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZp
ZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vi
c3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF92ZW5kb3JcX25hbWV9Ci0KLXtcYmYgT3ZlcnZpZXc6
fSAKLUdldCB0aGUgdmVuZG9yXF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxu
b2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3Zl
bmRvcl9uYW1lIChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0K
LQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57
dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFNDU0kgcmVmIH0gJiBzZWxmICYg
cmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0
cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbW9k
ZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgbW9kZWwgZmllbGQgb2YgdGhlIGdpdmVu
IFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19
IHN0cmluZyBnZXRfbW9kZWwgKHNlc3Npb25faWQgcywgUFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xi
ZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQU0NTSSByZWYgfSAm
IHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxh
cn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17
XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF90eXBlXF9pZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB0eXBlXF9pZCBmaWVs
ZCBvZiB0aGUgZ2l2ZW4gUFNDU0kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gaW50IGdldF90eXBlX2lkIChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBz
ZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
UFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAot
Ci1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1
cm4gVHlwZTp9IAote1x0dCAKLWludAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfdHlwZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSB0eXBlIGZp
ZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAot
XGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3R5cGUgKHNlc3Npb25faWQgcywgUFNDU0kgcmVm
IHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBQU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9kZXZcX25hbWV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0
aGUgZGV2XF9uYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2Rldl9uYW1lIChzZXNz
aW9uX2lkIHMsIFBTQ1NJIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFNDU0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZh
bHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfc2dcX25hbWV9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLUdldCB0aGUgc2dcX25hbWUgZmllbGQgb2YgdGhlIGdpdmVuIFBTQ1NJLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBn
ZXRfc2dfbmFtZSAoc2Vzc2lvbl9pZCBzLCBQU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1z
dHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Jl
dmlzaW9ufQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHJldmlzaW9uIGZpZWxkIG9mIHRo
ZSBnaXZlbiBQU0NTSS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSBzdHJpbmcgZ2V0X3JldmlzaW9uIChzZXNzaW9uX2lkIHMsIFBTQ1NJIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFND
U0kgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfc2NzaVxfaWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCB0aGUgc2Nz
aVxfaWQgZmllbGQgb2YgdGhlIGdpdmVuIFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0
dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfc2NzaV9pZCAoc2Vzc2lvbl9pZCBz
LCBQU0NTSSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IFBTQ1NJIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0
aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Njc2lcX2xldmVsfQotCi17XGJmIE92ZXJ2
aWV3On0gCi1HZXQgdGhlIHNjc2lcX2xldmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBQU0NTSS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBpbnQgZ2V0X3Nj
c2lfbGV2ZWwgKHNlc3Npb25faWQgcywgUFNDU0kgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBQU0NTSSByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZz
cGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotaW50
Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9ieVxfdXVp
ZH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IGEgcmVmZXJlbmNlIHRvIHRoZSBQU0NTSSBpbnN0
YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJRC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoUFNDU0kgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9p
ZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVpZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotUFNDU0kgcmVmCi19Ci0KLQotcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29yZH0KLQote1xiZiBPdmVy
dmlldzp9IAotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhl
IGdpdmVuIFBTQ1NJLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IChQU0NTSSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Npb25faWQgcywgUFNDU0kgcmVm
IHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBQU0NTSSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotUFNDU0kgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9t
IHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IFBTQ1NJXF9IQkF9
Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBQU0NTSVxfSEJBfQotXGJlZ2lue2xvbmd0
YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xs
fXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFBTQ1NJXF9IQkF9IFxcCi1cbXVsdGlj
b2x1bW57MX17fGx9e0Rlc2NyaXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsx
MWNtfXtcZW0gQQotcGh5c2ljYWwgU0NTSSBob3N0IGJ1cyBhZGFwdGVyLn19IFxcCi1caGxpbmUK
LVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7
Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlm
aWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtc
dHQgaG9zdH0gJiBob3N0IHJlZiAmICB0aGUgcGh5c2ljYWwgbWFjaGluZSB0byB3aGljaCB0aGlz
IFBTQ1NJIEhCQSBpcyBjb25uZWN0ZWQgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYg
IHtcdHQgcGh5c2ljYWxcX2hvc3R9ICYgaW50ICYgdGhlIHBoeXNpY2FsIGhvc3QgbnVtYmVyIFxc
Ci0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IFBTQ1NJc30gJiAoUFNDU0kgcmVm
KSBTZXQgJiB0aGUgcGh5c2ljYWwgU0NTSSBkZXZpY2VzIHdoaWNoIGFyZSBjb25uZWN0ZWQgdG8g
dGhpcyBQU0NTSSBIQkEgXFwKLVxobGluZQotXGVuZHtsb25ndGFibGV9Ci1cc3Vic2VjdGlvbntS
UENzIGFzc29jaWF0ZWQgd2l0aCBjbGFzczogUFNDU0lcX0hCQX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBPdmVydmlldzp9IAotUmV0dXJuIGEgbGlzdCBvZiBh
bGwgdGhlIFBTQ1NJIEhCQXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFBTQ1NJX0hCQSByZWYpIFNldCkgZ2V0
X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotKFBTQ1NJXF9IQkEgcmVmKSBT
ZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIGFsbCBvYmplY3RzCi1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0
XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdp
dmVuIFBTQ1NJIEhCQS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSBzdHJpbmcgZ2V0X3V1aWQgKHNlc3Npb25faWQgcywgUFNDU0lfSEJBIHJlZiBzZWxm
KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3Bh
Y2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0
eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgUFND
U0lcX0hCQSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+Z2V0XF9ob3N0fQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIGhv
c3QgZmllbGQgb2YgdGhlIGdpdmVuIFBTQ1NJIEhCQS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25h
dHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoaG9zdCByZWYpIGdldF9ob3N0IChzZXNzaW9uX2lk
IHMsIFBTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYg
QXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3
Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlv
bn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJXF9IQkEgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLWhvc3QgcmVmCi19Ci0K
LQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9waHlzaWNhbFxfaG9z
dH0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRoZSBwaHlzaWNhbFxfaG9zdCBmaWVsZCBvZiB0
aGUgZ2l2ZW4gUFNDU0kgSEJBLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVn
aW57dmVyYmF0aW19IGludCBnZXRfcGh5c2ljYWxfaG9zdCAoc2Vzc2lvbl9pZCBzLCBQU0NTSV9I
QkEgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBQU0NTSVxfSEJBIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0
IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1pbnQKLX0KLQotCi12YWx1ZSBvZiB0aGUg
ZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX1BTQ1NJc30KLQote1xiZiBPdmVydmlldzp9IAot
R2V0IHRoZSBQU0NTSXMgZmllbGQgb2YgdGhlIGdpdmVuIFBTQ1NJIEhCQS4KLQotIFxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAoKFBTQ1NJIHJlZikgU2V0KSBn
ZXRfUFNDU0lzIChzZXNzaW9uX2lkIHMsIFBTQ1NJX0hCQSByZWYgc2VsZilcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5h
bWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJXF9IQkEgcmVmIH0g
JiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1cZW5ke3RhYnVs
YXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAot
e1x0dCAKLShQU0NTSSByZWYpIFNldAotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVy
ZW5jZSB0byB0aGUgUFNDU0kgSEJBIGluc3RhbmNlIHdpdGggdGhlIHNwZWNpZmllZCBVVUlELgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChQU0NTSV9I
QkEgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVp
ZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotUFNDU0lcX0hCQSByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFj
ZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257
UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQg
Y29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gUFNDU0kgSEJBLgotCi0g
XG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChQU0NTSV9IQkEg
cmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIFBTQ1NJX0hCQSByZWYgc2VsZilcZW5k
e3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAu
M2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0g
JiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFBTQ1NJXF9I
QkEgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lIAotCi1c
ZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4g
VHlwZTp9IAote1x0dCAKLVBTQ1NJXF9IQkEgcmVjb3JkCi19Ci0KLQotYWxsIGZpZWxkcyBmcm9t
IHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLQotXHZzcGFjZXsxY219Ci1cbmV3cGFnZQotXHNlY3Rpb257Q2xhc3M6IHVzZXJ9Ci1cc3Vi
c2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiB1c2VyfQotXGJlZ2lue2xvbmd0YWJsZX17fGxsbHB7
MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGluZQotXG11bHRpY29sdW1uezF9e3xsfXtOYW1lfSAmIFxt
dWx0aWNvbHVtbnszfXtsfH17XGJmIHVzZXJ9IFxcCi1cbXVsdGljb2x1bW57MX17fGx9e0Rlc2Ny
aXB0aW9ufSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XHBhcmJveHsxMWNtfXtcZW0gQQotdXNlciBv
ZiB0aGUgc3lzdGVtLn19IFxcCi1caGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3Jp
cHRpb24gXFwKLVxobGluZQotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlk
fSAmIHN0cmluZyAmIHVuaXF1ZSBpZGVudGlmaWVyL29iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0
aGl0e1JPfV9cbWF0aGl0e2luc30kICYgIHtcdHQgc2hvcnRcX25hbWV9ICYgc3RyaW5nICYgc2hv
cnQgbmFtZSAoZS5nLiB1c2VyaWQpIFxcCi0kXG1hdGhpdHtSV30kICYgIHtcdHQgZnVsbG5hbWV9
ICYgc3RyaW5nICYgZnVsbCBuYW1lIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNl
Y3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IHVzZXJ9Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgdGhlIHV1aWQgZmll
bGQgb2YgdGhlIGdpdmVuIHVzZXIuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIHVzZXIgcmVmIHNl
bGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2
c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
dXNlciByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0K
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVy
biBUeXBlOn0gCi17XHR0IAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9zaG9ydFxfbmFtZX0KLQote1xiZiBPdmVydmlldzp9IAotR2V0IHRo
ZSBzaG9ydFxfbmFtZSBmaWVsZCBvZiB0aGUgZ2l2ZW4gdXNlci4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X3Nob3J0X25hbWUgKHNl
c3Npb25faWQgcywgdXNlciByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtc
YmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCB1c2VyIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQot
Ci0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1zdHJpbmcKLX0KLQotCi12
YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2Z1bGxuYW1lfQotCi17XGJm
IE92ZXJ2aWV3On0gCi1HZXQgdGhlIGZ1bGxuYW1lIGZpZWxkIG9mIHRoZSBnaXZlbiB1c2VyLgot
Ci0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBn
ZXRfZnVsbG5hbWUgKHNlc3Npb25faWQgcywgdXNlciByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdp
bnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1l
fSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB1c2VyIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1z
dHJpbmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX2Z1
bGxuYW1lfQotCi17XGJmIE92ZXJ2aWV3On0gCi1TZXQgdGhlIGZ1bGxuYW1lIGZpZWxkIG9mIHRo
ZSBnaXZlbiB1c2VyLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVy
YmF0aW19IHZvaWQgc2V0X2Z1bGxuYW1lIChzZXNzaW9uX2lkIHMsIHVzZXIgcmVmIHNlbGYsIHN0
cmluZyB2YWx1ZSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0K
LQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCB1c2VyIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxo
bGluZSAKLQote1x0dCBzdHJpbmcgfSAmIHZhbHVlICYgTmV3IHZhbHVlIHRvIHNldCBcXCBcaGxp
bmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJm
IFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmNy
ZWF0ZX0KLQote1xiZiBPdmVydmlldzp9IAotQ3JlYXRlIGEgbmV3IHVzZXIgaW5zdGFuY2UsIGFu
ZCByZXR1cm4gaXRzIGhhbmRsZS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJl
Z2lue3ZlcmJhdGltfSAodXNlciByZWYpIGNyZWF0ZSAoc2Vzc2lvbl9pZCBzLCB1c2VyIHJlY29y
ZCBhcmdzKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0g
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IHVzZXIgcmVjb3JkIH0gJiBhcmdzICYgQWxsIGNvbnN0cnVjdG9yIGFyZ3VtZW50cyBcXCBc
aGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7
XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdXNlciByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8g
dGhlIG5ld2x5IGNyZWF0ZWQgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+ZGVzdHJveX0KLQote1xi
ZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUgc3BlY2lmaWVkIHVzZXIgaW5zdGFuY2UuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkZXN0cm95
IChzZXNzaW9uX2lkIHMsIHVzZXIgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRl
bnR7XGJmIEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgdXNlciByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2Ug
dG8gdGhlIG9iamVjdCBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdl
dCBhIHJlZmVyZW5jZSB0byB0aGUgdXNlciBpbnN0YW5jZSB3aXRoIHRoZSBzcGVjaWZpZWQgVVVJ
RC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAodXNl
ciByZWYpIGdldF9ieV91dWlkIChzZXNzaW9uX2lkIHMsIHN0cmluZyB1dWlkKVxlbmR7dmVyYmF0
aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0gCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVp
ZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0
IAotdXNlciByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfcmVjb3JkfQotCi17XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFp
bmluZyB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGUgZ2l2ZW4gdXNlci4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3ZlcmJhdGltfSAodXNlciByZWNvcmQpIGdldF9yZWNv
cmQgKHNlc3Npb25faWQgcywgdXNlciByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB1c2VyIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5j
ZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAu
M2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi11c2VyIHJlY29y
ZAotfQotCi0KLWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci0KLVx2c3BhY2V7MWNtfQotXG5ld3BhZ2UKLVxz
ZWN0aW9ue0NsYXNzOiBYU1BvbGljeX0KLVxzdWJzZWN0aW9ue0ZpZWxkcyBmb3IgY2xhc3M6IFhT
UG9saWN5fQotXGJlZ2lue2xvbmd0YWJsZX17fGxsbHB7MC4zOFx0ZXh0d2lkdGh9fH0KLVxobGlu
ZQotXG11bHRpY29sdW1uezF9e3xsfXtOYW1lfSAmIFxtdWx0aWNvbHVtbnszfXtsfH17XGJmIFhT
UG9saWN5fSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1
bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEEgWGVuIFNlY3VyaXR5IFBvbGljeX19IFxcCi1c
aGxpbmUKLVF1YWxzICYgRmllbGQgJiBUeXBlICYgRGVzY3JpcHRpb24gXFwKLVxobGluZQotJFxt
YXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB1dWlkfSAmIHN0cmluZyAgJiB1bmlxdWUg
aWRlbnRpZmllciAvIG9iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JXfSQgICAgICAgICAg
ICAgICYgIHtcdHQgcmVwcn0gJiBzdHJpbmcgICYgcmVwcmVzZW50YXRpb24gb2YgcG9saWN5LCBp
LmUuLCBYTUwgXFwKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdHlwZX0gJiB4
c1xfdHlwZSAmIHR5cGUgb2YgdGhlIHBvbGljeSBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVu
fSQgJiB7XHR0IGZsYWdzfSAmIHhzXF9pbnN0YW50aWF0aW9uZmxhZ3MgJiBwb2xpY3kKLXN0YXR1
cyBmbGFncyBcXAotXGhsaW5lCi1cZW5ke2xvbmd0YWJsZX0KLVxzdWJzZWN0aW9ue1NlbWFudGlj
cyBvZiB0aGUgY2xhc3M6IFhTUG9saWN5fQotCi1UaGUgWFNQb2xpY3kgY2xhc3MgaXMgdXNlZCBm
b3IgYWRtaW5pc3RlcmluZyBYZW4gU2VjdXJpdHkgcG9saWNpZXMuIFRocm91Z2gKLXRoaXMgY2xh
c3MgYSBuZXcgcG9saWN5IGNhbiBiZSB1cGxvYWRlZCB0byB0aGUgc3lzdGVtLCBsb2FkZWQgaW50
byB0aGUKLVhlbiBoeXBlcnZpc29yIGZvciBlbmZvcmNlbWVudCBhbmQgYmUgc2V0IGFzIHRoZSBw
b2xpY3kgdGhhdCB0aGUKLXN5c3RlbSBpcyBhdXRvbWF0aWNhbGx5IGxvYWRpbmcgd2hlbiB0aGUg
bWFjaGluZSBpcyBzdGFydGVkLgotCi1UaGlzIGNsYXNzIHJldHVybnMgaW5mb3JtYXRpb24gYWJv
dXQgdGhlIGN1cnJlbnRseSBhZG1pbmlzdGVyZWQgcG9saWN5LAotaW5jbHVkaW5nIGEgcmVmZXJl
bmNlIHRvIHRoZSBwb2xpY3kuIFRoaXMgcmVmZXJlbmNlIGNhbiB0aGVuIGJlIHVzZWQgd2l0aAot
cG9saWN5LXNwZWNpZmljIGNsYXNzZXMsIGkuZS4sIHRoZSBBQ01Qb2xpY3kgY2xhc3MsIHRvIGFs
bG93IHJldHJpZXZhbCBvZgotaW5mb3JtYXRpb24gb3IgY2hhbmdlcyB0byBiZSBtYWRlIHRvIGEg
cGFydGljdWxhciBwb2xpY3kuCi0KLVxzdWJzZWN0aW9ue1N0cnVjdHVyZSBhbmQgZGF0YXR5cGVz
IG9mIGNsYXNzOiBYU1BvbGljeX0KLQotRm9ybWF0IG9mIHRoZSBzZWN1cml0eSBsYWJlbDoKLQot
QSBzZWN1cml0eSBsYWJlbCBjb25zaXN0IG9mIHRoZSB0aHJlZSBkaWZmZXJlbnQgcGFydHMge1xp
dCBwb2xpY3kgdHlwZX0sCi17XGl0IHBvbGljeSBuYW1lfSBhbmQge1xpdCBsYWJlbH0gc2VwYXJh
dGVkIHdpdGggY29sb25zLiBUbyBzcGVjaWZ5Ci10aGUgdmlydHVhbCBtYWNoaW5lIGxhYmVsIGZv
ciBhbiBBQ00tdHlwZSBwb2xpY3kge1xpdCB4bS10ZXN0fSwgdGhlCi1zZWN1cml0eSBsYWJlbCBz
dHJpbmcgd291bGQgYmUge1xpdCBBQ006eG0tdGVzdDpibHVlfSwgd2hlcmUgYmx1ZQotZGVub3Rl
cyB0aGUgdmlydHVhbCBtYWNoaW5lJ3MgbGFiZWwuIFRoZSBmb3JtYXQgb2YgcmVzb3VyY2UgbGFi
ZWxzIGlzCi10aGUgc2FtZS5cXFswLjVjbV0KLVRoZSBmb2xsb3dpbmcgZmxhZ3MgYXJlIHVzZWQg
YnkgdGhpcyBjbGFzczoKLQotXGJlZ2lue2xvbmd0YWJsZX17fGx8bHxsfH0KLVxobGluZQote1x0
dCB4c1xfdHlwZX0gJiB2YWx1ZSAmIG1lYW5pbmcgXFwKLVxobGluZQotXGhzcGFjZXswLjVjbX17
XHR0IFhTXF9QT0xJQ1lcX0FDTX0gJiAoMSAkPDwkIDApICYgQUNNLXR5cGUgcG9saWN5IFxcCi1c
aGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotCi1cYmVnaW57bG9uZ3RhYmxlfXt8bHxsfGx8fQotXGhs
aW5lCi17XHR0IHhzXF9pbnN0YW50aWF0aW9uZmxhZ3N9ICYgdmFsdWUgJiBtZWFuaW5nIFxcCi1c
aGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBYU1xfSU5TVFxfTk9ORX0gJiAwICYgZG8gbm90aGlu
ZyBcXAotXGhzcGFjZXswLjVjbX17XHR0IFhTXF9JTlNUXF9CT09UfSAmICgxICQ8PCQgMCkgJiBt
YWtlIHN5c3RlbSBib290IHdpdGggdGhpcyBwb2xpY3kgXFwKLVxoc3BhY2V7MC41Y219e1x0dCBY
U1xfSU5TVFxfTE9BRH0gJiAoMSAkPDwkIDEpICYgbG9hZCBwb2xpY3kgaW1tZWRpYXRlbHkgXFwK
LVxobGluZQotXGVuZHtsb25ndGFibGV9Ci0KLVxiZWdpbntsb25ndGFibGV9e3xsfGx8bHx9Ci1c
aGxpbmUKLXtcdHQgeHNcX3BvbGljeXN0YXRlfSAmIHR5cGUgJiBtZWFuaW5nIFxcCi1caGxpbmUK
LVxoc3BhY2V7MC41Y219e1x0dCB4c2Vycn0gJiBpbnQgJiBFcnJvciBjb2RlIGZyb20gb3BlcmF0
aW9uIChpZiBhcHBsaWNhYmxlKSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHhzXF9yZWZ9ICAmIFhT
UG9saWN5IHJlZiAmIHJlZmVyZW5jZSB0byB0aGUgWFMgcG9saWN5IGFzIHJldHVybmVkIGJ5IHRo
ZSBBUEkgXFwKLVxoc3BhY2V7MC41Y219e1x0dCByZXByfSAmIHN0cmluZyAmIHJlcHJlc2VudGF0
aW9uIG9mIHRoZSBwb2xpY3ksIGkuZS4sIFhNTCBcXAotXGhzcGFjZXswLjVjbX17XHR0IHR5cGV9
ICYgeHNcX3R5cGUgJiB0aGUgdHlwZSBvZiB0aGUgcG9saWN5IFxcCi1caHNwYWNlezAuNWNtfXtc
dHQgZmxhZ3MgfSAmIHhzXF9pbnN0YW50aWF0aW9uZmxhZ3MgICYgaW5zdGFudGlhdGlvbiBmbGFn
cyBvZiB0aGUgcG9saWN5IFxcCi1caHNwYWNlezAuNWNtfXtcdHQgdmVyc2lvbn0gJiBzdHJpbmcg
JiB2ZXJzaW9uIG9mIHRoZSBwb2xpY3kgXFwKLVxoc3BhY2V7MC41Y219e1x0dCBlcnJvcnN9ICYg
c3RyaW5nICYgQmFzZTY0LWVuY29kZWQgc2VxdWVuY2Ugb2YgaW50ZWdlciB0dXBsZXMgY29uc2lz
dGluZyBcXAotJiAmIG9mIChlcnJvciBjb2RlLCBkZXRhaWwpOyB3aWxsIGJlIHJldHVybmVkIGFz
IHBhcnQgIFxcCi0mICYgb2YgdGhlIHhzXF9zZXRwb2xpY3kgZnVuY3Rpb24uIFxcCi1caGxpbmUK
LVxlbmR7bG9uZ3RhYmxlfQotCi1cc3Vic2VjdGlvbntBZGRpdGlvbmFsIFJQQ3MgYXNzb2NpYXRl
ZCB3aXRoIGNsYXNzOiBYU1BvbGljeX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3hz
dHlwZX0KLQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gdGhlIFhlbiBTZWN1cml0eSBQb2xpY3kg
dHlwZXMgc3VwcG9ydGVkIGJ5IHRoaXMgc3lzdGVtCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB4c190eXBlIGdldF94c3R5cGUgKHNlc3Npb25faWQgcylc
ZW5ke3ZlcmJhdGltfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAoteHNc
X3R5cGUKLX0KLQotZmxhZ3MgcmVwcmVzZW50aW5nIHRoZSBzdXBwb3J0ZWQgWGVuIHNlY3VyaXR5
IHBvbGljeSB0eXBlcwotIFx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3hzcG9saWN5fQotCi17XGJmIE92
ZXJ2aWV3On0KLVNldCB0aGUgY3VycmVudCBYU1BvbGljeS4gVGhpcyBmdW5jdGlvbiBjYW4gYWxz
byBiZSBiZSB1c2VkIGZvciB1cGRhdGluZyBvZgotYW4gZXhpc3RpbmcgcG9saWN5IHdob3NlIG5h
bWUgbXVzdCBiZSBlcXVpdmFsZW50IHRvIHRoZSBvbmUgb2YgdGhlCi1jdXJyZW50bHkgcnVubmlu
ZyBwb2xpY3kuCi0KLVxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19
IHhzX3BvbGljeXN0YXRlIHNldF94c3BvbGljeSAoc2Vzc2lvbl9pZCBzLCB4c190eXBlIHR5cGUs
IHN0cmluZyByZXByLAoteHNfaW5zdGFudGlhdGlvbmZsYWdzIGZsYWdzLCBib29sIG92ZXJ3cml0
ZSlcZW5ke3ZlcmJhdGltfQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5
cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB4c1xf
dHlwZSB9ICYgdHlwZSAmIHRoZSB0eXBlIG9mIHBvbGljeSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
fSAmIHJlcHIgJiByZXByZXNlbnRhdGlvbiBvZiB0aGUgcG9saWN5LCBpLmUuLCBYTUwgXFwgXGhs
aW5lCi17XHR0IHhzXF9pbnN0YW50aWF0aW9uZmxhZ3N9ICAgICYgZmxhZ3MgJiBmbGFncyBmb3Ig
dGhlIHNldHRpbmcgb2YgdGhlIHBvbGljeSBcXCBcaGxpbmUKLXtcdHQgYm9vbH0gICAmIG92ZXJ3
cml0ZSAmIHdoZXRoZXIgdG8gb3ZlcndyaXRlIGFuIGV4aXN0aW5nIHBvbGljeSBcXCBcaGxpbmUK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0KLSBcbm9pbmRlbnQge1xiZiBS
ZXR1cm4gVHlwZTp9Ci17XHR0Ci14c1xfcG9saWN5c3RhdGUKLX0KLQotCi1TdGF0ZSBpbmZvcm1h
dGlvbiBhYm91dCB0aGUgcG9saWN5LiBJbiBjYXNlIGFuIGVycm9yIG9jY3VycmVkLCB0aGUgJ3hz
XF9lcnInCi1maWVsZCBjb250YWlucyB0aGUgZXJyb3IgY29kZS4gVGhlICdlcnJvcnMnIG1heSBj
b250YWluIGZ1cnRoZXIgaW5mb3JtYXRpb24KLWFib3V0IHRoZSBlcnJvci4KLSBcdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+cmVzZXRcX3hzcG9saWN5fQotCi17XGJmIE92ZXJ2aWV3On0KLUF0dGVtcHQgdG8gcmVz
ZXQgdGhlIHN5c3RlbSdzIHBvbGljeSBieSBpbnN0YWxsaW5nIHRoZSBkZWZhdWx0IHBvbGljeS4K
LVNpbmNlIHRoaXMgZnVuY3Rpb24gaXMgaW1wbGVtZW50ZWQgYXMgYW4gdXBkYXRlIHRvIHRoZSBj
dXJyZW50IHBvbGljeSwgaXQKLXVuZGVybGllcyB0aGUgc2FtZSByZXN0cmljdGlvbnMuIFRoaXMg
ZnVuY3Rpb24gbWF5IGZhaWwgaWYgZm9yIGV4YW1wbGUKLW90aGVyIGRvbWFpbnMgdGhhbiBEb21h
aW4tMCBhcmUgcnVubmluZyBhbmQgdXNlIGEgZGlmZmVyZW50IGxhYmVsIHRoYW4KLURvbWFpbi0w
Ci0KLVxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHhzX3BvbGlj
eXN0YXRlIHJlc2V0X3hzcG9saWN5IChzZXNzaW9uX2lkIHMsIHhzX3R5cGUgdHlwZSkKLVxlbmR7
dmVyYmF0aW19Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotXHZzcGFjZXswLjNjbX0K
LQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHhzXF90eXBlIH0g
JiB0eXBlICYgdGhlIHR5cGUgb2YgcG9saWN5IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LXhzXF9wb2xpY3lzdGF0ZQotfQotCi0KLVN0YXRlIGluZm9ybWF0aW9uIGFib3V0IHRoZSBwb2xp
Y3kuIEluIGNhc2UgYW4gZXJyb3Igb2NjdXJyZWQsIHRoZSAneHNcX2VycicKLWZpZWxkIGNvbnRh
aW5zIHRoZSBlcnJvciBjb2RlLiBUaGUgJ2Vycm9ycycgbWF5IGNvbnRhaW4gZnVydGhlciBpbmZv
cm1hdGlvbgotYWJvdXQgdGhlIGVycm9yLgotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfeHNwb2xpY3l9
Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IGluZm9ybWF0aW9uIHJlZ2FyZGluZyB0aGUgY3VycmVu
dGx5IHNldCBYZW4gU2VjdXJpdHkgUG9saWN5Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6
fQotXGJlZ2lue3ZlcmJhdGltfSB4c19wb2xpY3lzdGF0ZSBnZXRfeHNwb2xpY3kgKHNlc3Npb25f
aWQgcylcZW5ke3ZlcmJhdGltfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fQote1x0dAoteHNcX3BvbGljeXN0YXRlCi19Ci0KLQotUG9saWN5IHN0YXRl
IGluZm9ybWF0aW9uLgotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnJtXF94c2Jvb3Rwb2xpY3l9Ci0KLXtcYmYg
T3ZlcnZpZXc6fQotUmVtb3ZlIGFueSBwb2xpY3kgZnJvbSB0aGUgZGVmYXVsdCBib290IGNvbmZp
Z3VyYXRpb24uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGlt
fSB2b2lkIHJtX3hzYm9vdHBvbGljeSAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0aW19Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0
IFNFQ1VSSVRZXF9FUlJPUn0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbGFiZWxlZFxfcmVzb3Vy
Y2VzfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCBhIGxpc3Qgb2YgcmVzb3VyY2VzIHRoYXQgaGF2
ZSBiZWVuIGxhYmVsZWQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3Zl
cmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkgZ2V0X2xhYmVsZWRfcmVzb3VyY2VzIChz
ZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5k
ZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmlu
ZykgTWFwCi19Ci0KLQotQSBtYXAgb2YgcmVzb3VyY2VzIHdpdGggdGhlaXIgbGFiZWxzLgotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fnNldFxfcmVzb3VyY2VcX2xhYmVsfQotCi17XGJmIE92ZXJ2aWV3On0KLUxh
YmVsIHRoZSBnaXZlbiByZXNvdXJjZSB3aXRoIHRoZSBnaXZlbiBsYWJlbC4gQW4gZW1wdHkgbGFi
ZWwgcmVtb3ZlcyBhbnkgbGFiZWwKLWZyb20gdGhlIHJlc291cmNlLgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfcmVzb3VyY2VfbGFiZWwg
KHNlc3Npb25faWQgcywgc3RyaW5nIHJlc291cmNlLCBzdHJpbmcKLWxhYmVsLCBzdHJpbmcgb2xk
X2xhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQot
e1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtc
dHQgc3RyaW5nIH0gJiByZXNvdXJjZSAmIHJlc291cmNlIHRvIGxhYmVsIFxcIFxobGluZQote1x0
dCBzdHJpbmcgfSAmIGxhYmVsICYgbGFiZWwgZm9yIHRoZSByZXNvdXJjZSBcXCBcaGxpbmUKLXtc
dHQgc3RyaW5nIH0gJiBvbGRcX2xhYmVsICYgT3B0aW9uYWwgbGFiZWwgdmFsdWUgdGhhdCB0aGUg
c2VjdXJpdHkgbGFiZWwgXFwKLSYgJiBtdXN0IGN1cnJlbnRseSBoYXZlIGZvciB0aGUgY2hhbmdl
IHRvIHN1Y2NlZWQuIFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLVxub2luZGVudHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fSB7XHR0IFNFQ1VSSVRZXF9F
UlJPUn0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQot
XHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVzb3VyY2VcX2xhYmVsfQotCi17XGJmIE92
ZXJ2aWV3On0KLUdldCB0aGUgbGFiZWwgb2YgdGhlIGdpdmVuIHJlc291cmNlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9yZXNvdXJj
ZV9sYWJlbCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgcmVzb3VyY2UpXGVuZHt2ZXJiYXRpbX0KLQot
Ci1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0
YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAm
IHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHJlc291cmNlICYg
cmVzb3VyY2UgdG8gbGFiZWwgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQot
Ci0KLVRoZSBsYWJlbCBvZiB0aGUgZ2l2ZW4gcmVzb3VyY2UuCi1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Y2Fu
XF9ydW59Ci0KLXtcYmYgT3ZlcnZpZXc6fQotQ2hlY2sgd2hldGhlciBhIFZNIHdpdGggdGhlIGdp
dmVuIHNlY3VyaXR5IGxhYmVsIGNvdWxkIHJ1biBvbiB0aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gaW50IGNhbl9ydW4gKHNlc3Npb25f
aWQgcywgc3RyaW5nIHNlY3VyaXR5X2xhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBzZWN1cml0eVxfbGFiZWwgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNl
ezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotaW50Ci19Ci0K
LQotRXJyb3IgY29kZSBpbmRpY2F0aW5nIHdoZXRoZXIgYSBWTSB3aXRoIHRoZSBnaXZlbiBzZWN1
cml0eSBsYWJlbCBjb3VsZCBydW4uCi1JZiB6ZXJvLCBpdCBjYW4gcnVuLgotCi1cdnNwYWNlezAu
M2NtfQotCi1cbm9pbmRlbnR7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBTRUNVUklU
WVxfRVJST1J9Ci0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbH0KLQote1xiZiBP
dmVydmlldzp9Ci1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgWFNQb2xpY2llcyBrbm93biB0byB0
aGUgc3lzdGVtLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRp
bX0gKChYU1BvbGljeSByZWYpIFNldCkgZ2V0X2FsbCAoc2Vzc2lvbl9pZCBzKVxlbmR7dmVyYmF0
aW19Ci0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0K
LXtcdHQKLShYU1BvbGljeSByZWYpIFNldAotfQotCi0KLUEgbGlzdCBvZiBhbGwgdGhlIElEcyBv
ZiBhbGwgdGhlIFhTUG9saWNpZXMKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3V1aWR9Ci0KLXtcYmYg
T3ZlcnZpZXc6fQotR2V0IHRoZSB1dWlkIGZpZWxkIG9mIHRoZSBnaXZlbiBYU1BvbGljeS4KLQot
IFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRf
dXVpZCAoc2Vzc2lvbl9pZCBzLCBYU1BvbGljeSByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0K
LVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3Rh
YnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYg
e1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IFhTUG9saWN5IHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2
c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJp
bmcKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3JlY29y
ZH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgYSByZWNvcmQgb2YgdGhlIHJlZmVyZW5jZWQgWFNQ
b2xpY3kuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAo
WFNQb2xpY3kgcmVjb3JkKSBnZXRfcmVjb3JkIChzZXNzaW9uX2lkIHMsIHhzX3JlZiB4c3BvbGlj
eSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHhz
IHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5k
e3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlw
ZTp9Ci17XHR0Ci1YU1BvbGljeSByZWNvcmQKLX0KLQotCi1hbGwgZmllbGRzIGZyb20gdGhlIG9i
amVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXG5l
d3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBBQ01Qb2xpY3l9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9y
IGNsYXNzOiBBQ01Qb2xpY3l9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8bGxscHswLjM4XHRleHR3aWR0
aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9ICYgXG11bHRpY29sdW1uezN9
e2x8fXtcYmYgQUNNUG9saWN5fSBcXAotXG11bHRpY29sdW1uezF9e3xsfXtEZXNjcmlwdGlvbn0g
JiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVtIEFuIEFDTSBTZWN1cml0eSBQ
b2xpY3l9fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0aW9uIFxc
Ci1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0gJiBzdHJp
bmcgJiB1bmlxdWUgaWRlbnRpZmllciAvIG9iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0aGl0e1JX
fSQgICAgICAgICAgICAgICYgIHtcdHQgcmVwcn0gJiBzdHJpbmcgJiByZXByZXNlbnRhdGlvbiBv
ZiBwb2xpY3ksIGluIFhNTCBcXAotJFxtYXRoaXR7Uk99X1xtYXRoaXR7cnVufSQgJiAge1x0dCB0
eXBlfSAmIHhzXF90eXBlICYgdHlwZSBvZiB0aGUgcG9saWN5IFxcCi0kXG1hdGhpdHtST31fXG1h
dGhpdHtydW59JCAmIHtcdHQgZmxhZ3N9ICYgeHNcX2luc3RhbnRpYXRpb25mbGFncyAmIHBvbGlj
eQotc3RhdHVzIGZsYWdzIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotCi1cc3Vic2VjdGlv
bntTdHJ1Y3R1cmUgYW5kIGRhdGF0eXBlcyBvZiBjbGFzczogQUNNUG9saWN5fQotCi1cdnNwYWNl
ezAuNWNtfQotVGhlIGZvbGxvd2luZyBkYXRhIHN0cnVjdHVyZXMgYXJlIHVzZWQ6Ci0KLVxiZWdp
bntsb25ndGFibGV9e3xsfGx8bHx9Ci1caGxpbmUKLXtcdHQgUklQIGFjbVxfcG9saWN5aGVhZGVy
fSAmIHR5cGUgJiBtZWFuaW5nIFxcCi1caGxpbmUKLVxoc3BhY2V7MC41Y219e1x0dCBwb2xpY3lu
YW1lfSAgICYgc3RyaW5nICYgbmFtZSBvZiB0aGUgcG9saWN5IFxcCi1caHNwYWNlezAuNWNtfXtc
dHQgcG9saWN5dXJsIH0gICAmIHN0cmluZyAmIFVSTCBvZiB0aGUgcG9saWN5IFxcCi1caHNwYWNl
ezAuNWNtfXtcdHQgZGF0ZX0gICAgICAgICAmIHN0cmluZyAmIGRhdGEgb2YgdGhlIHBvbGljeSBc
XAotXGhzcGFjZXswLjVjbX17XHR0IHJlZmVyZW5jZX0gICAgJiBzdHJpbmcgJiByZWZlcmVuY2Ug
b2YgdGhlIHBvbGljeSBcXAotXGhzcGFjZXswLjVjbX17XHR0IG5hbWVzcGFjZXVybH0gJiBzdHJp
bmcgJiBuYW1lc3BhY2V1cmwgb2YgdGhlIHBvbGljeSBcXAotXGhzcGFjZXswLjVjbX17XHR0IHZl
cnNpb259ICAgICAgJiBzdHJpbmcgJiB2ZXJzaW9uIG9mIHRoZSBwb2xpY3kgXFwKLVxobGluZQot
XGVuZHtsb25ndGFibGV9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFj
ZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2hlYWRlcn0KLQote1xiZiBP
dmVydmlldzp9Ci1HZXQgdGhlIHJlZmVyZW5jZWQgcG9saWN5J3MgaGVhZGVyIGluZm9ybWF0aW9u
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gYWNtX3Bv
bGljeWhlYWRlciBnZXRfaGVhZGVyIChzZXNzaW9uX2lkIHMsIHhzIHJlZiBzZWxmKVxlbmR7dmVy
YmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219
Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtc
YmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgeHMgcmVmIH0gJiBz
ZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LWFjbVxfcG9saWN5aGVhZGVyCi19Ci0KLQotVGhlIHBvbGljeSdzIGhlYWRlciBpbmZvcm1hdGlv
bi4KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJz
dWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3htbH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhl
IFhNTCByZXByZXNlbnRhdGlvbiBvZiB0aGUgZ2l2ZW4gcG9saWN5LgotCi0gXG5vaW5kZW50IHtc
YmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gc3RyaW5nIGdldF9YTUwgKHNlc3Npb25f
aWQgcywgeHMgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCB4cyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotc3RyaW5nCi19Ci0KLQotWE1MIHJlcHJlc2VudGF0
aW9uIG9mIHRoZSByZWZlcmVuY2VkIHBvbGljeQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbWFwfQot
Ci17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgbWFwcGluZyBpbmZvcm1hdGlvbiBvZiB0aGUgZ2l2
ZW4gcG9saWN5LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRp
bX0gc3RyaW5nIGdldF9tYXAgKHNlc3Npb25faWQgcywgeHMgcmVmIHNlbGYpXGVuZHt2ZXJiYXRp
bX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxi
ZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBu
YW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCB4cyByZWYgfSAmIHNlbGYg
JiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotc3Ry
aW5nCi19Ci0KLQotTWFwcGluZyBpbmZvcm1hdGlvbiBvZiB0aGUgcmVmZXJlbmNlZCBwb2xpY3ku
Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vi
c2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9iaW5hcnl9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRo
ZSBiaW5hcnkgcG9saWN5IHJlcHJlc2VudGF0aW9uIG9mIHRoZSByZWZlcmVuY2VkIHBvbGljeS4K
LQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBn
ZXRfYmluYXJ5IChzZXNzaW9uX2lkIHMsIHhzIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgeHMgcmVmIH0gJiBzZWxmICYgcmVmZXJl
bmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQot
Ci0KLUJhc2U2NC1lbmNvZGVkIHJlcHJlc2VudGF0aW9uIG9mIHRoZSBiaW5hcnkgcG9saWN5Lgot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fmdldFxfZW5mb3JjZWRcX2JpbmFyeX0KLQote1xiZiBPdmVydmlldzp9
Ci1HZXQgdGhlIGJpbmFyeSBwb2xpY3kgcmVwcmVzZW50YXRpb24gb2YgdGhlIGN1cnJlbnRseSBl
bmZvcmNlZCBBQ00gcG9saWN5LgotSW4gY2FzZSB0aGUgZGVmYXVsdCBwb2xpY3kgaXMgbG9hZGVk
IGluIHRoZSBoeXBlcnZpc29yLCBhIHBvbGljeSBtYXkgYmUKLW1hbmFnZWQgYnkgeGVuZCB0aGF0
IGlzIG5vdCB5ZXQgbG9hZGVkIGludG8gdGhlIGh5cGVydmlzb3IuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X2VuZm9yY2VkX2JpbmFy
eSAoc2Vzc2lvbl9pZCBzLCB4cyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVu
dHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xj
fGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNj
cmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHhzIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0
aGUgb2JqZWN0IFxcIFxobGluZQotCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcKLX0KLQotCi1CYXNl
NjQtZW5jb2RlZCByZXByZXNlbnRhdGlvbiBvZiB0aGUgYmluYXJ5IHBvbGljeS4KLVx2c3BhY2V7
MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQ
QyBuYW1lOn5nZXRcX1ZNXF9zc2lkcmVmfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgQUNN
IHNzaWRyZWYgb2YgdGhlIGdpdmVuIHZpcnR1YWwgbWFjaGluZS4KLQotIFxub2luZGVudCB7XGJm
IFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfVk1fc3NpZHJlZiAoc2Vz
c2lvbl9pZCBzLCB2bSByZWYgdm0pXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCB2bSByZWYgfSAmIHZtICYgcmVmZXJlbmNlIHRvIGEgdmFsaWQgVk0g
XFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLWludAotfQotCi0KLVRoZSBzc2lkcmVmIG9mIHRo
ZSBnaXZlbiB2aXJ0dWFsIG1hY2hpbmUuCi0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVudHtc
YmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICB7XHR0IEhBTkRMRVxfSU5WQUxJRCwgVk1cX0JB
RFxfUE9XRVJcX1NUQVRFLCBTRUNVUklUWVxfRVJST1J9Ci0KLVx2c3BhY2V7MC4zY219Ci1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRc
X2FsbH0KLQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgQUNNUG9s
aWNpZXMga25vd24gdG8gdGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9
Ci1cYmVnaW57dmVyYmF0aW19ICgoQUNNUG9saWN5IHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9u
X2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fQote1x0dAotKEFDTVBvbGljeSByZWYpIFNldAotfQotCi0KLUEgbGlz
dCBvZiBhbGwgdGhlIElEcyBvZiBhbGwgdGhlIEFDTVBvbGljaWVzCi1cdnNwYWNlezAuM2NtfQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+
Z2V0XF91dWlkfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgdXVpZCBmaWVsZCBvZiB0aGUg
Z2l2ZW4gQUNNUG9saWN5LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2
ZXJiYXRpbX0gc3RyaW5nIGdldF91dWlkIChzZXNzaW9uX2lkIHMsIEFDTVBvbGljeSByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IEFD
TVBvbGljeSByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fQote1x0dAotc3RyaW5nCi19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNw
YWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlv
bntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IGEgcmVjb3Jk
IG9mIHRoZSByZWZlcmVuY2VkIEFDTVBvbGljeS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVy
ZTp9Ci1cYmVnaW57dmVyYmF0aW19IChYU1BvbGljeSByZWNvcmQpIGdldF9yZWNvcmQgKHNlc3Np
b25faWQgcywgeHNfcmVmIHhzcG9saWN5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xi
ZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgeHMgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxu
b2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLVhTUG9saWN5IHJlY29yZAotfQotCi0K
LWFsbCBmaWVsZHMgZnJvbSB0aGUgb2JqZWN0Ci0KLVxuZXdwYWdlCi1cc2VjdGlvbntDbGFzczog
ZGVidWd9Ci1cc3Vic2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBkZWJ1Z30KLXtcYmYgQ2xhc3Mg
ZGVidWcgaGFzIG5vIGZpZWxkcy59Ci1cc3Vic2VjdGlvbntSUENzIGFzc29jaWF0ZWQgd2l0aCBj
bGFzczogZGVidWd9Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0KLXtcYmYg
T3ZlcnZpZXc6fSAKLVJldHVybiBhIGxpc3Qgb2YgYWxsIHRoZSBkZWJ1ZyByZWNvcmRzIGtub3du
IHRvIHRoZSBzeXN0ZW0KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9IAotXGJlZ2lue3Zl
cmJhdGltfSAoKGRlYnVnIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6
fSAKLXtcdHQgCi0oZGVidWcgcmVmKSBTZXQKLX0KLQotCi1BIGxpc3Qgb2YgYWxsIHRoZSBJRHMg
b2YgYWxsIHRoZSBkZWJ1ZyByZWNvcmRzCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+cmV0dXJuXF9mYWlsdXJl
fQotCi17XGJmIE92ZXJ2aWV3On0gCi1SZXR1cm4gYW4gQVBJICdzdWNjZXNzZnVsJyBmYWlsdXJl
LgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IHZvaWQg
cmV0dXJuX2ZhaWx1cmUgKHNlc3Npb25faWQgcylcZW5ke3ZlcmJhdGltfQotCi0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9IAote1x0dCAKLXZvaWQKLX0K
LQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUNyZWF0
ZSBhIG5ldyBkZWJ1ZyBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFuZGxlLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0gCi1cYmVnaW57dmVyYmF0aW19IChkZWJ1ZyByZWYpIGNyZWF0
ZSAoc2Vzc2lvbl9pZCBzLCBkZWJ1ZyByZWNvcmQgYXJncylcZW5ke3ZlcmJhdGltfQotCi0KLVxu
b2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtc
YmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBkZWJ1ZyByZWNvcmQgfSAmIGFyZ3MgJiBB
bGwgY29uc3RydWN0b3IgYXJndW1lbnRzIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1c
dnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1k
ZWJ1ZyByZWYKLX0KLQotCi1yZWZlcmVuY2UgdG8gdGhlIG5ld2x5IGNyZWF0ZWQgb2JqZWN0Ci1c
dnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2Vj
dGlvbntSUEMgbmFtZTp+ZGVzdHJveX0KLQote1xiZiBPdmVydmlldzp9IAotRGVzdHJveSB0aGUg
c3BlY2lmaWVkIGRlYnVnIGluc3RhbmNlLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0g
Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgZGVzdHJveSAoc2Vzc2lvbl9pZCBzLCBkZWJ1ZyByZWYg
c2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotIAot
XHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17
XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0
dCBkZWJ1ZyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUg
Ci0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJl
dHVybiBUeXBlOn0gCi17XHR0IAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxf
YnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fSAKLUdldCBhIHJlZmVyZW5jZSB0byB0aGUgZGVi
dWcgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQuCi0KLSBcbm9pbmRlbnQge1xiZiBT
aWduYXR1cmU6fSAKLVxiZWdpbnt2ZXJiYXRpbX0gKGRlYnVnIHJlZikgZ2V0X2J5X3V1aWQgKHNl
c3Npb25faWQgcywgc3RyaW5nIHV1aWQpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLSAKLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xw
ezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0
aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiB1dWlkICYgVVVJRCBvZiBvYmplY3QgdG8g
cmV0dXJuIFxcIFxobGluZSAKLQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0g
XG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fSAKLXtcdHQgCi1kZWJ1ZyByZWYKLX0KLQotCi1y
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfcmVjb3JkfQotCi17
XGJmIE92ZXJ2aWV3On0gCi1HZXQgYSByZWNvcmQgY29udGFpbmluZyB0aGUgY3VycmVudCBzdGF0
ZSBvZiB0aGUgZ2l2ZW4gZGVidWcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fSAKLVxi
ZWdpbnt2ZXJiYXRpbX0gKGRlYnVnIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBk
ZWJ1ZyByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRz
On0KLQotIAotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0g
XGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxo
bGluZQote1x0dCBkZWJ1ZyByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBc
XCBcaGxpbmUgCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0gCi17XHR0IAotZGVidWcgcmVjb3JkCi19Ci0KLQotYWxsIGZp
ZWxkcyBmcm9tIHRoZSBvYmplY3QKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZz
cGFjZXswLjNjbX0KLQotXG5ld3BhZ2UKLVxzZWN0aW9ue0NsYXNzOiBjcHVcX3Bvb2x9Ci1cc3Vi
c2VjdGlvbntGaWVsZHMgZm9yIGNsYXNzOiBjcHVcX3Bvb2x9Ci1cYmVnaW57bG9uZ3RhYmxlfXt8
bGxscHswLjM4XHRleHR3aWR0aH18fQotXGhsaW5lCi1cbXVsdGljb2x1bW57MX17fGx9e05hbWV9
ICYgXG11bHRpY29sdW1uezN9e2x8fXtcYmYgY3B1XF9wb29sfSBcXAotXG11bHRpY29sdW1uezF9
e3xsfXtEZXNjcmlwdGlvbn0gJiBcbXVsdGljb2x1bW57M317bHx9e1xwYXJib3h7MTFjbX17XGVt
IEEgQ1BVIHBvb2x9fSBcXAotXGhsaW5lCi1RdWFscyAmIEZpZWxkICYgVHlwZSAmIERlc2NyaXB0
aW9uIFxcCi1caGxpbmUKLSRcbWF0aGl0e1JPfV9cbWF0aGl0e3J1bn0kICYgIHtcdHQgdXVpZH0g
JiBzdHJpbmcgJiB1bmlxdWUgaWRlbnRpZmllciAvIG9iamVjdCByZWZlcmVuY2UgXFwKLSRcbWF0
aGl0e1JXfSQgICAgICAgICAgICAgICYgIHtcdHQgbmFtZVxfbGFiZWx9ICYgc3RyaW5nICYgbmFt
ZSBvZiBjcHVcX3Bvb2wgXFwKLSRcbWF0aGl0e1JXfSQgICAgICAgICAgICAgICYgIHtcdHQgbmFt
ZVxfZGVzY3JpcHRpb259ICYgc3RyaW5nICYgY3B1XF9wb29sIGRlc2NyaXB0aW9uIFxcCi0kXG1h
dGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHJlc2lkZW50XF9vbn0gJiBob3N0IHJlZiAm
IHRoZSBob3N0IHRoZSBjcHVcX3Bvb2wgaXMgY3VycmVudGx5IHJlc2lkZW50IG9uIFxcCi0kXG1h
dGhpdHtSV30kICAgICAgICAgICAgICAmICB7XHR0IGF1dG9cX3Bvd2VyXF9vbn0gJiBib29sICYg
VHJ1ZSBpZiB0aGlzIGNwdVxfcG9vbCBzaG91bGQgYmUgYWN0aXZhdGVkIGF1dG9tYXRpY2FsbHkg
YWZ0ZXIgaG9zdCBib290IFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59JCAmICB7XHR0IHN0
YXJ0ZWRcX1ZNc30gJiAoVk0gcmVmKSBTZXQgJiBsaXN0IG9mIFZNcyBjdXJyZW50bHkgc3RhcnRl
ZCBpbiB0aGlzIGNwdVxfcG9vbCBcXAotJFxtYXRoaXR7Uld9JCAgICAgICAgICAgICAgJiAge1x0
dCBuY3B1fSAmIGludGVnZXIgJiBudW1iZXIgb2YgaG9zdFxfQ1BVcyByZXF1ZXN0ZWQgZm9yIHRo
aXMgY3B1XF9wb29sIGF0IG5leHQgc3RhcnQgXFwKLSRcbWF0aGl0e1JXfSQgICAgICAgICAgICAg
ICYgIHtcdHQgc2NoZWRcX3BvbGljeX0gJiBzdHJpbmcgJiBzY2hlZHVsZXIgcG9saWN5IG9uIHRo
aXMgY3B1XF9wb29sIFxcCi0kXG1hdGhpdHtSV30kICAgICAgICAgICAgICAmICB7XHR0IHByb3Bv
c2VkXF9DUFVzfSAmIChzdHJpbmcpIFNldCAmIGxpc3Qgb2YgcHJvcG9zZWQgaG9zdFxfQ1BVcyB0
byBhc3NpZ24gYXQgbmV4dCBhY3RpdmF0aW9uIFxcCi0kXG1hdGhpdHtST31fXG1hdGhpdHtydW59
JCAmICB7XHR0IGhvc3RcX0NQVXN9ICYgKFZNIHJlZikgU2V0ICYgbGlzdCBvZiBob3N0XF9jcHVz
IGN1cnJlbnRseSBhc3NpZ25lZCB0byB0aGlzIGNwdVxfcG9vbCBcXAotJFxtYXRoaXR7Uk99X1xt
YXRoaXR7cnVufSQgJiAge1x0dCBhY3RpdmF0ZWR9ICYgYm9vbCAmIFRydWUgaWYgdGhpcyBjcHVc
X3Bvb2wgaXMgYWN0aXZhdGVkIFxcCi0kXG1hdGhpdHtSV30kICAgICAgICAgICAgICAmICB7XHR0
IG90aGVyXF9jb25maWd9ICYgKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwICYgYWRk
aXRpb25hbCBjb25maWd1cmF0aW9uIFxcCi1caGxpbmUKLVxlbmR7bG9uZ3RhYmxlfQotXHN1YnNl
Y3Rpb257UlBDcyBhc3NvY2lhdGVkIHdpdGggY2xhc3M6IGNwdVxfcG9vbH0KLVxzdWJzdWJzZWN0
aW9ue1JQQyBuYW1lOn5hY3RpdmF0ZX0KLQote1xiZiBPdmVydmlldzp9Ci1BY3RpdmF0ZSB0aGUg
Y3B1XF9wb29sIGFuZCBhc3NpZ24gdGhlIGdpdmVuIENQVXMgdG8gaXQuCi1DUFVzIHNwZWNpZmll
ZCBpbiBmaWVsZCBwcm9wb3NlZFxfQ1BVcywgdGhhdCBhcmUgbm90IGV4aXN0aW5nIG9yIG5vdCBm
cmVlLCBhcmUKLWlnbm9yZWQuIElmIHZhbHVlIG9mIG5jcHUgaXMgZ3JlYXRlciB0aGFuIHRoZSBu
dW1iZXIgb2YgQ1BVcyBpbiBmaWVsZAotcHJvcG9zZWRcX0NQVXMsIGFkZGl0aW9uYWwgZnJlZSBD
UFVzIGFyZSBhc3NpZ25lZCB0byB0aGUgY3B1XF9wb29sLgotCi0gXG5vaW5kZW50IHtcYmYgU2ln
bmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBhY3RpdmF0ZSAoc2Vzc2lvbl9pZCBzLCBj
cHVfcG9vbCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0K
LQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxf
QkFEXF9TVEFURSwgSU5TVUZGSUNJRU5UXF9DUFVTLCBVTktPV05cX1NDSEVEXF9QT0xJQ1l9Ci0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5jcmVhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotQ3JlYXRlIGEgbmV3
IGNwdVxfcG9vbCBpbnN0YW5jZSwgYW5kIHJldHVybiBpdHMgaGFuZGxlLgotCi0gXG5vaW5kZW50
IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gKGNwdV9wb29sIHJlZikgY3JlYXRl
IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlY29yZCBhcmdzKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlY29yZCB9ICYgYXJn
cyAmIEFsbCBjb25zdHJ1Y3RvciBhcmd1bWVudHMgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1j
cHVcX3Bvb2wgcmVmCi19Ci0KLQotcmVmZXJlbmNlIHRvIHRoZSBuZXdseSBjcmVhdGVkIG9iamVj
dAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmRlYWN0aXZhdGV9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotRGVhY3Rp
dmF0ZSB0aGUgY3B1XF9wb29sIGFuZCByZWxlYXNlIGFsbCBDUFVzIGFzc2lnbmVkIHRvIGl0Lgot
VGhpcyBmdW5jdGlvbiBjYW4gb25seSBiZSBjYWxsZWQgaWYgdGhlcmUgYXJlIG5vIGRvbWFpbnMg
YWN0aXZlIGluIHRoZQotY3B1XF9wb29sLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0K
LVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBkZWFjdGl2YXRlIChzZXNzaW9uX2lkIHMsIGNwdV9wb29s
IHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotIFxobGlu
ZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUK
LXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxc
IFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtc
YmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9p
bmRlbnQge1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9IHtcdHQgUE9PTFxfQkFEXF9TVEFURX0K
LQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1
YnNlY3Rpb257UlBDIG5hbWU6fmRlc3Ryb3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotRGVzdHJveSB0
aGUgc3BlY2lmaWVkIGNwdVxfcG9vbC4gVGhlIGNwdVxfcG9vbCBpcyBjb21wbGV0ZWx5IHJlbW92
ZWQgZnJvbSB0aGUKLXN5c3RlbS4KLVRoaXMgZnVuY3Rpb24gY2FuIG9ubHkgYmUgY2FsbGVkIGlm
IHRoZSBjcHVcX3Bvb2wgaXMgZGVhY3RpdmF0ZWQuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIGRlc3Ryb3kgKHNlc3Npb25faWQgcywgY3B1X3Bv
b2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9
Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhs
aW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGlu
ZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3Qg
XFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci0KLVx2c3BhY2V7MC4zY219Ci0KLVxu
b2luZGVudCB7XGJmIFBvc3NpYmxlIEVycm9yIENvZGVzOn0ge1x0dCBQT09MXF9CQURcX1NUQVRF
fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5hZGRcX2hvc3RcX0NQVVxfbGl2ZX0KLQotCi17XGJmIE92
ZXJ2aWV3On0KLUFkZCBhIGFkZGl0aW9uYWwgQ1BVIGltbWVkaWF0bHkgdG8gdGhlIGNwdVxfcG9v
bC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQg
YWRkX2hvc3RfQ1BVX2xpdmUgKHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYsIGhvc3Rf
Y3B1IHJlZiBob3N0X2NwdSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1l
bnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0K
LSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwg
XGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9i
amVjdCBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJlZiB9ICYgaG9zdFxfY3B1ICYgQ1BVIHRv
IGFkZCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0K
LQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxf
QkFEXF9TVEFURSwgSU5WQUxJRFxfQ1BVfQotCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5yZW1vdmVcX2hv
c3RcX0NQVVxfbGl2ZX0KLQotCi17XGJmIE92ZXJ2aWV3On0KLVJlbW92ZSBhIENQVSBpbW1lZGlh
dGx5IGZyb20gdGhlIGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1c
YmVnaW57dmVyYmF0aW19IHZvaWQgcmVtb3ZlX2hvc3RfQ1BVX2xpdmUgKHNlc3Npb25faWQgcywg
Y3B1X3Bvb2wgcmVmIHNlbGYsIGhvc3RfY3B1IHJlZiBob3N0X2NwdSlcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9
ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNl
bGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLXtcdHQgaG9zdFxfY3B1IHJl
ZiB9ICYgaG9zdFxfY3B1ICYgQ1BVIHRvIHJlbW92ZSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxfQkFEXF9TVEFURSwgSU5WQUxJRFxfQ1BVLCBMQVNU
XF9DUFVcX05PVFxfUkVNT1ZFQUJMRX0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9hbGx9Ci0K
LQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gYSBsaXN0IG9mIGFsbCB0aGUgY3B1IHBvb2xzIGtu
b3duIHRvIHRoZSBzeXN0ZW0uCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lu
e3ZlcmJhdGltfSAoKGNwdV9wb29sIHJlZikgU2V0KSBnZXRfYWxsIChzZXNzaW9uX2lkIHMpXGVu
ZHt2ZXJiYXRpbX0KLQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotKGNw
dVxfcG9vbCByZWYpIFNldAotfQotQSBsaXN0IG9mIGFsbCB0aGUgSURzIG9mIHRoZSBjcHUgcG9v
bHMuCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxz
dWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2FsbFxfcmVjb3Jkc30KLQotCi17XGJmIE92ZXJ2
aWV3On0KLVJldHVybiBhIG1hcCBvZiBhbGwgdGhlIGNwdSBwb29sIHJlY29yZHMga25vd24gdG8g
dGhlIHN5c3RlbS4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0
aW19ICgoKGNwdV9wb29sIHJlZikgLT4gKGNwdV9wb29sIHJlY29yZCkpIE1hcCkgZ2V0X2FsbF9y
ZWNvcmRzIChzZXNzaW9uX2lkIHMpXGVuZHt2ZXJiYXRpbX0KLQotCi0gXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fQote1x0dAotKChjcHVcX3Bvb2wgcmVmKSAkXHJpZ2h0YXJyb3ckIChjcHVc
X3Bvb2wgcmVjb3JkKSkgTWFwCi19Ci1BIG1hcCBvZiBhbGwgdGhlIGNwdSBwb29sIHJlY29yZHMg
aW5kZXhlZCBieSBjcHUgcG9vbCByZWYuCi0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX2J5XF9uYW1l
XF9sYWJlbH0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgYWxsIHRoZSBjcHVcX3Bvb2wgaW5zdGFu
Y2VzIHdpdGggdGhlIGdpdmVuIGxhYmVsLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0K
LVxiZWdpbnt2ZXJiYXRpbX0gKChjcHVfcG9vbCByZWYpIFNldCkgZ2V0X2J5X25hbWVfbGFiZWwg
KHNlc3Npb25faWQgcywgc3RyaW5nIGxhYmVsKVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50
e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8
Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2Ny
aXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBsYWJlbCAmIGxhYmVsIG9mIG9iamVj
dCB0byByZXR1cm4gXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLShjcHVcX3Bvb2wgcmVmKSBT
ZXQKLX0KLQotCi1yZWZlcmVuY2VzIHRvIG9iamVjdHMgd2l0aCBtYXRjaGluZyBuYW1lcwotXHZz
cGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rp
b257UlBDIG5hbWU6fmdldFxfYnlcX3V1aWR9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IGEgcmVm
ZXJlbmNlIHRvIHRoZSBjcHVcX3Bvb2wgaW5zdGFuY2Ugd2l0aCB0aGUgc3BlY2lmaWVkIFVVSUQu
Ci0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoY3B1X3Bv
b2wgcmVmKSBnZXRfYnlfdXVpZCAoc2Vzc2lvbl9pZCBzLCBzdHJpbmcgdXVpZClcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IHN0cmluZyB9ICYgdXVp
ZCAmIFVVSUQgb2Ygb2JqZWN0IHRvIHJldHVybiBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1bGFyfQot
Ci1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAot
Y3B1XF9wb29sIHJlZgotfQotCi0KLXJlZmVyZW5jZSB0byB0aGUgb2JqZWN0Ci1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9hY3RpdmF0ZWR9Ci0KLQote1xiZiBPdmVydmlldzp9Ci1SZXR1cm4gdGhlIGFj
dGl2YXRpb24gc3RhdGUgb2YgdGhlIGNwdVxfcG9vbCBvYmplY3QuCi0KLSBcbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBib29sIGdldF9hY3RpdmF0ZWQgKHNlc3Np
b25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7
XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xj
fHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3Jp
cHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNl
IHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219
Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1ib29sCi19Ci1SZXR1cm5z
IHtcYmYgdHJ1ZX0gaWYgY3B1XF9wb29sIGlzIGFjdGl2ZS4KLQotXHZzcGFjZXswLjNjbX0KLVx2
c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdl
dFxfYXV0b1xfcG93ZXJcX29ufQotCi0KLXtcYmYgT3ZlcnZpZXc6fQotUmV0dXJuIHRoZSBhdXRv
IHBvd2VyIGF0dHJpYnV0ZSBvZiB0aGUgY3B1XF9wb29sIG9iamVjdC4KLQotIFxub2luZGVudCB7
XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IGJvb2wgZ2V0X2F1dG9fcG93ZXJfb24g
KHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9p
bmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFy
fXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYg
ZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVm
ZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7
MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1ib29sCi19Ci1S
ZXR1cm5zIHtcYmYgdHJ1ZX0gaWYgY3B1XF9wb29sIGhhcyB0byBiZSBhY3RpdmF0ZWQgb24geGVu
ZCBzdGFydC4KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfaG9zdFxfQ1BVc30KLQotCi17XGJmIE92
ZXJ2aWV3On0KLVJldHVybiB0aGUgbGlzdCBvZiBob3N0XF9jcHUgcmVmcyBhc3NpZ25lZCB0byB0
aGUgY3B1XF9wb29sIG9iamVjdC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVn
aW57dmVyYmF0aW19ICgoaG9zdF9jcHUgcmVmKSBTZXQpIGdldF9ob3N0X0NQVXMgKHNlc3Npb25f
aWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0K
LSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci0oaG9zdFxfY3B1IHJlZikgU2V0
Ci19Ci1SZXR1cm5zIGEgbGlzdCBvZiByZWZlcmVuY2VzIG9mIGFsbCBob3N0IGNwdXMgYXNzaWdu
ZWQgdG8gdGhlIGNwdVxfcG9vbC4KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfZGVzY3Jp
cHRpb259Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRoZSBuYW1lL2Rlc2NyaXB0aW9uIGZpZWxk
IG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQot
XGJlZ2lue3ZlcmJhdGltfSBzdHJpbmcgZ2V0X25hbWVfZGVzY3JpcHRpb24gKHNlc3Npb25faWQg
cywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFy
Z3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2Nt
fXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259
IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRo
ZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQot
IFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYg
T3ZlcnZpZXc6fQotR2V0IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bv
b2wuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSBzdHJp
bmcgZ2V0X25hbWVfbGFiZWwgKHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi0KLVxlbmR7
dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBl
On0KLXtcdHQKLXN0cmluZwotfQotCi0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNj
bX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5h
bWU6fmdldFxfbmNwdX0KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIG5jcHUgZmllbGQgb2Yg
dGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVn
aW57dmVyYmF0aW19IGludCBnZXRfbmNwdSAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNw
dVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUK
LQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fQote1x0dAotaW50Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNl
ezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntS
UEMgbmFtZTp+Z2V0XF9wcm9wb3NlZFxfQ1BVc30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhl
IHByb3Bvc2VkXF9DUFVzIGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZykgU2V0KSBnZXRf
cHJvcG9zZWRfQ1BVcyAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZilcZW5ke3ZlcmJh
dGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQot
XGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLQotXGVuZHt0YWJ1
bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQot
e1x0dAotKHN0cmluZykgU2V0Ci19Ci0KLQotdmFsdWUgb2YgdGhlIGZpZWxkCi1cdnNwYWNlezAu
M2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cc3Vic3Vic2VjdGlvbntSUEMg
bmFtZTp+Z2V0XF9vdGhlclxfY29uZmlnfQotCi17XGJmIE92ZXJ2aWV3On0KLUdldCB0aGUgb3Ro
ZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi1cbm9pbmRlbnQge1xi
ZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAoKHN0cmluZyAtPiBzdHJpbmcpIE1hcCkg
Z2V0X290aGVyX2NvbmZpZyAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZilcZW5ke3Zl
cmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2Nt
fQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7
XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCBy
ZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLVxlbmR7dGFi
dWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQot
e1x0dAotKHN0cmluZyAkXHJpZ2h0YXJyb3ckIHN0cmluZykgTWFwCi19Ci0KLQotdmFsdWUgb2Yg
dGhlIGZpZWxkCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219
Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+Z2V0XF9yZWNvcmR9Ci0KLXtcYmYgT3ZlcnZpZXc6
fQotR2V0IGEgcmVjb3JkIGNvbnRhaW5pbmcgdGhlIGN1cnJlbnQgc3RhdGUgb2YgdGhlIGdpdmVu
IGNwdVxfcG9vbC4KLQotXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRp
bX0gKGNwdV9wb29sIHJlY29yZCkgZ2V0X3JlY29yZCAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCBy
ZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQot
Ci1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUK
LXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17
XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBc
aGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYg
UmV0dXJuIFR5cGU6fQote1x0dAotY3B1XF9wb29sIHJlY29yZAotfQotCi0KLWFsbCBmaWVsZHMg
b2YgdGhlIG9iamVjdC4KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXsw
LjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3Jlc2lkZW50XF9vbn0KLQote1xi
ZiBPdmVydmlldzp9Ci1HZXQgdGhlIHJlc2lkZW50XF9vbiBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1
XF9wb29sLgotCi1cbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSAo
aG9zdCByZWYpIGdldF9yZXNpZGVudF9vbiAoc2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2Vs
ZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNw
YWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYg
dHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNw
dVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUK
LVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUmV0dXJu
IFR5cGU6fQote1x0dAotaG9zdCByZWYKLX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3Bh
Y2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9u
e1JQQyBuYW1lOn5nZXRcX3NjaGVkXF9wb2xpY3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotR2V0IHRo
ZSBzY2hlZFxfcG9saWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLVxub2luZGVu
dCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfc2NoZWRfcG9s
aWN5IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmKVxlbmR7dmVyYmF0aW19Ci0KLQot
XG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFi
dWxhcn17fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7
XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2VsZiAm
IHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNw
YWNlezAuM2NtfQotCi1cbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci1zdHJpbmcK
LX0KLQotCi12YWx1ZSBvZiB0aGUgZmllbGQKLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2Nt
fQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJzZWN0aW9ue1JQQyBuYW1lOn5nZXRcX3N0YXJ0ZWRc
X1ZNc30KLQote1xiZiBPdmVydmlldzp9Ci1HZXQgdGhlIHN0YXJ0ZWRcX1ZNcyBmaWVsZCBvZiB0
aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi1cbm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lu
e3ZlcmJhdGltfSAoKFZNIHJlZikgU2V0KSBnZXRfc3RhcnRlZF9WTXMgKHNlc3Npb25faWQgcywg
Y3B1X3Bvb2wgcmVmIHNlbGYpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3Vt
ZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9
Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxc
IFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBv
YmplY3QgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2lu
ZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLShWTSByZWYpIFNldAotfQotCi0KLXZhbHVl
IG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAu
M2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fmdldFxfdXVpZH0KLQote1xiZiBPdmVydmll
dzp9Ci1HZXQgdGhlIHV1aWQgZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotIFxub2lu
ZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHN0cmluZyBnZXRfdXVpZCAo
c2Vzc2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZilcZW5ke3ZlcmJhdGltfQotCi0KLVxub2lu
ZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9
e3xjfGN8cHs3Y219fH0KLSBcaGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xiZiBk
ZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiByZWZl
cmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXsw
LjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXN0cmluZwotfQot
Ci0KLXZhbHVlIG9mIHRoZSBmaWVsZAotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6fnNldFxfYXV0b1xfcG93ZXJc
X29ufQotCi17XGJmIE92ZXJ2aWV3On0KLVNldCB0aGUgYXV0b1xfcG93ZXJcX29uIGZpZWxkIG9m
IHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLVxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1cYmVn
aW57dmVyYmF0aW19IHZvaWQgc2V0X2F1dG9fcG93ZXJfb24gKHNlc3Npb25faWQgcywgY3B1X3Bv
b2wgcmVmIHNlbGYsIGJvb2wgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJm
IEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7
N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRp
b259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRv
IHRoZSBvYmplY3QgXFwgXGhsaW5lCi17XHR0IGJvb2wgfSAmIHZhbHVlICYgbmV3IGF1dG9cX3Bv
d2VyXF9vbiB2YWx1ZSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0K
LQotXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fnNldFxfcHJvcG9zZWRcX0NQVXN9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotU2V0IHRoZSBw
cm9wb3NlZFxfQ1BVcyBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi1cbm9pbmRlbnQg
e1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9wcm9wb3NlZF9DUFVz
IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmLCBzdHJpbmcgU2V0IGNwdXMpXGVuZHt2
ZXJiYXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNj
bX0KLVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci0gXGhsaW5lCi17XGJmIHR5cGV9ICYg
e1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQote1x0dCBjcHVcX3Bvb2wg
cmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwgXGhsaW5lCi17XHR0IHN0
cmluZyBTZXQgfSAmIGNwdXMgJiBTZXQgb2YgcHJlZmVycmVkIENQVSAobnVtYmVycykgdG8gdXNl
IFxcIFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi1cbm9pbmRlbnQg
e1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12b2lkCi19Ci1cdnNwYWNlezAuM2NtfQotCi1cbm9p
bmRlbnQge1xiZiBQb3NzaWJsZSBFcnJvciBDb2Rlczp9Ci0gICAge1x0dCBQT09MXF9CQURcX1NU
QVRFfQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
c3Vic3Vic2VjdGlvbntSUEMgbmFtZTphZGRcX3RvXF9wcm9wb3NlZFxfQ1BVc30KLQote1xiZiBP
dmVydmlldzp9Ci1BZGQgYSBDUFUgKG51bWJlcikgdG8gdGhlIHByb3Bvc2VkXF9DUFVzIGZpZWxk
IG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLVxub2luZGVudCB7XGJmIFNpZ25hdHVyZTp9Ci1c
YmVnaW57dmVyYmF0aW19IHZvaWQgYWRkX3RvX3Byb3Bvc2VkX0NQVXMgKHNlc3Npb25faWQgcywg
Y3B1X3Bvb2wgcmVmIHNlbGYsIGludGVnZXIgY3B1KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5k
ZW50e1xiZiBBcmd1bWVudHM6fQotCi0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17
fGN8Y3xwezdjbX18fQotIFxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRl
c2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2VsZiAmIHJlZmVy
ZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0dCBpbnRlZ2VyIH0gJiBjcHUgJiBOdW1i
ZXIgb2YgQ1BVIHRvIGFkZCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNj
bX0KLQotXG5vaW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotXHZzcGFj
ZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtc
dHQgUE9PTFxfQkFEXF9TVEFURX0KLQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1c
dnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6cmVtb3ZlXF9mcm9tXF9wcm9w
b3NlZFxfQ1BVc30KLQote1xiZiBPdmVydmlldzp9Ci1SZW1vdmUgYSBDUFUgKG51bWJlcikgZnJv
bSB0aGUgcHJvcG9zZWRcX0NQVXMgZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotXG5v
aW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCByZW1vdmVfZnJv
bV9wcm9wb3NlZF9DUFVzIChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmLCBpbnRlZ2Vy
IGNwdSlcZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1c
dnNwYWNlezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xi
ZiB0eXBlfSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQg
Y3B1XF9wb29sIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGlu
ZQote1x0dCBpbnRlZ2VyIH0gJiBjcHUgJiBOdW1iZXIgb2YgQ1BVIHRvIHJlbW92ZSBcXCBcaGxp
bmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUmV0
dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtc
YmYgUG9zc2libGUgRXJyb3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxfQkFEXF9TVEFURX0KLQot
XHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNl
Y3Rpb257UlBDIG5hbWU6fnNldFxfbmFtZVxfbGFiZWx9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotU2V0
IHRoZSBuYW1lL2xhYmVsIGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBcbm9pbmRl
bnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9uYW1lX2xhYmVs
IChzZXNzaW9uX2lkIHMsIGNwdV9wb29sIHJlZiBzZWxmLCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJi
YXRpbX0KLQotCi1cbm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0K
LVxiZWdpbnt0YWJ1bGFyfXt8Y3xjfHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJm
IG5hbWV9ICYge1xiZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYg
fSAmIHNlbGYgJiByZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLXtcdHQgc3RyaW5n
IH0gJiB2YWx1ZSAmIE5ldyB2YWx1ZSB0byBzZXQgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0K
LVx2c3BhY2V7MC4zY219Ci0KLSBcbm9pbmRlbnQge1xiZiBSZXR1cm4gVHlwZTp9Ci17XHR0Ci12
b2lkCi19Ci0KLQotCi1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVx2c3BhY2V7MC4z
Y219Ci1cc3Vic3Vic2VjdGlvbntSUEMgbmFtZTp+c2V0XF9uY3B1fQotCi17XGJmIE92ZXJ2aWV3
On0KLVNldCB0aGUgbmNwdSBmaWVsZCBvZiB0aGUgZ2l2ZW4gY3B1XF9wb29sLgotCi0gXG5vaW5k
ZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2ZXJiYXRpbX0gdm9pZCBzZXRfbmNwdSAoc2Vz
c2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZiwgaW50ZWdlciB2YWx1ZSlcZW5ke3ZlcmJhdGlt
fQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJl
Z2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFt
ZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYg
c2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0dCBpbnRlZ2VyIH0g
JiB2YWx1ZSAmIE51bWJlciBvZiBjcHVzIHRvIHVzZSBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0K
LQotXHZzcGFjZXswLjNjbX0KLQotIFxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQK
LXZvaWQKLX0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5vaW5kZW50IHtcYmYgUG9zc2libGUgRXJy
b3IgQ29kZXM6fQotICAgIHtcdHQgUE9PTFxfQkFEXF9TVEFURX0KLQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnNldFxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9Ci1TZXQgdGhlIG90aGVyXF9j
b25maWcgZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4KLQotIFxub2luZGVudCB7XGJmIFNp
Z25hdHVyZTp9Ci1cYmVnaW57dmVyYmF0aW19IHZvaWQgc2V0X290aGVyX2NvbmZpZyAoc2Vzc2lv
bl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZiwgKHN0cmluZyAtPiBzdHJpbmcpIE1hcCB2YWx1ZSlc
ZW5ke3ZlcmJhdGltfQotCi0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNl
ezAuM2NtfQotXGJlZ2lue3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xiZiB0eXBl
fSAmIHtcYmYgbmFtZX0gJiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9w
b29sIHJlZiB9ICYgc2VsZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0
dCAoc3RyaW5nICRccmlnaHRhcnJvdyQgc3RyaW5nKSBNYXAgfSAmIHZhbHVlICYgTmV3IHZhbHVl
IHRvIHNldCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLQotXHZzcGFjZXswLjNjbX0KLQotXG5v
aW5kZW50IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi0KLQotXHZzcGFjZXsw
LjNjbX0KLVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBD
IG5hbWU6fmFkZFxfdG9cX290aGVyXF9jb25maWd9Ci0KLXtcYmYgT3ZlcnZpZXc6fQotQWRkIHRo
ZSBnaXZlbiBrZXktdmFsdWUgcGFpciB0byB0aGUgb3RoZXJcX2NvbmZpZyBmaWVsZCBvZiB0aGUg
Z2l2ZW4gY3B1XF9wb29sLgotCi0gXG5vaW5kZW50IHtcYmYgU2lnbmF0dXJlOn0KLVxiZWdpbnt2
ZXJiYXRpbX0gdm9pZCBhZGRfdG9fb3RoZXJfY29uZmlnIChzZXNzaW9uX2lkIHMsIGNwdV9wb29s
IHJlZiBzZWxmLCBzdHJpbmcga2V5LCBzdHJpbmcgdmFsdWUpXGVuZHt2ZXJiYXRpbX0KLQotCi1c
bm9pbmRlbnR7XGJmIEFyZ3VtZW50czp9Ci0KLQotXHZzcGFjZXswLjNjbX0KLVxiZWdpbnt0YWJ1
bGFyfXt8Y3xjfHB7N2NtfXx9Ci1caGxpbmUKLXtcYmYgdHlwZX0gJiB7XGJmIG5hbWV9ICYge1xi
ZiBkZXNjcmlwdGlvbn0gXFwgXGhsaW5lCi17XHR0IGNwdVxfcG9vbCByZWYgfSAmIHNlbGYgJiBy
ZWZlcmVuY2UgdG8gdGhlIG9iamVjdCBcXCBcaGxpbmUKLXtcdHQgc3RyaW5nIH0gJiBrZXkgJiBL
ZXkgdG8gYWRkIFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIHZhbHVlICYgVmFsdWUgdG8gYWRk
IFxcIFxobGluZQotXGVuZHt0YWJ1bGFyfQotCi1cdnNwYWNlezAuM2NtfQotCi0gXG5vaW5kZW50
IHtcYmYgUmV0dXJuIFR5cGU6fQote1x0dAotdm9pZAotfQotCi0KLQotXHZzcGFjZXswLjNjbX0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHN1YnN1YnNlY3Rpb257UlBDIG5hbWU6
fnJlbW92ZVxfZnJvbVxfb3RoZXJcX2NvbmZpZ30KLQote1xiZiBPdmVydmlldzp9Ci1SZW1vdmUg
dGhlIGdpdmVuIGtleSBhbmQgaXRzIGNvcnJlc3BvbmRpbmcgdmFsdWUgZnJvbSB0aGUgb3RoZXJc
X2NvbmZpZwotZmllbGQgb2YgdGhlIGdpdmVuIGNwdVxfcG9vbC4gIElmIHRoZSBrZXkgaXMgbm90
IGluIHRoYXQgTWFwLCB0aGVuIGRvIG5vdGhpbmcuCi0KLSBcbm9pbmRlbnQge1xiZiBTaWduYXR1
cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHJlbW92ZV9mcm9tX290aGVyX2NvbmZpZyAoc2Vz
c2lvbl9pZCBzLCBjcHVfcG9vbCByZWYgc2VsZiwgc3RyaW5nIGtleSlcZW5ke3ZlcmJhdGltfQot
Ci0KLVxub2luZGVudHtcYmYgQXJndW1lbnRzOn0KLQotCi1cdnNwYWNlezAuM2NtfQotXGJlZ2lu
e3RhYnVsYXJ9e3xjfGN8cHs3Y219fH0KLVxobGluZQote1xiZiB0eXBlfSAmIHtcYmYgbmFtZX0g
JiB7XGJmIGRlc2NyaXB0aW9ufSBcXCBcaGxpbmUKLXtcdHQgY3B1XF9wb29sIHJlZiB9ICYgc2Vs
ZiAmIHJlZmVyZW5jZSB0byB0aGUgb2JqZWN0IFxcIFxobGluZQote1x0dCBzdHJpbmcgfSAmIGtl
eSAmIEtleSB0byByZW1vdmUgXFwgXGhsaW5lCi1cZW5ke3RhYnVsYXJ9Ci0KLVx2c3BhY2V7MC4z
Y219Ci0KLVxub2luZGVudCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotCi0K
LVx2c3BhY2V7MC4zY219Ci1cdnNwYWNlezAuM2NtfQotXHZzcGFjZXswLjNjbX0KLVxzdWJzdWJz
ZWN0aW9ue1JQQyBuYW1lOn5zZXRcX3NjaGVkXF9wb2xpY3l9Ci0KLXtcYmYgT3ZlcnZpZXc6fQot
U2V0IHRoZSBzY2hlZFxfcG9saWN5IGZpZWxkIG9mIHRoZSBnaXZlbiBjcHVcX3Bvb2wuCi0KLSBc
bm9pbmRlbnQge1xiZiBTaWduYXR1cmU6fQotXGJlZ2lue3ZlcmJhdGltfSB2b2lkIHNldF9zY2hl
ZF9wb2xpY3kgKHNlc3Npb25faWQgcywgY3B1X3Bvb2wgcmVmIHNlbGYsIHN0cmluZyBuZXdfc2No
ZWRfcG9saWN5KVxlbmR7dmVyYmF0aW19Ci0KLQotXG5vaW5kZW50e1xiZiBBcmd1bWVudHM6fQot
Ci0KLVx2c3BhY2V7MC4zY219Ci1cYmVnaW57dGFidWxhcn17fGN8Y3xwezdjbX18fQotXGhsaW5l
Ci17XGJmIHR5cGV9ICYge1xiZiBuYW1lfSAmIHtcYmYgZGVzY3JpcHRpb259IFxcIFxobGluZQot
e1x0dCBjcHVcX3Bvb2wgcmVmIH0gJiBzZWxmICYgcmVmZXJlbmNlIHRvIHRoZSBvYmplY3QgXFwg
XGhsaW5lCi17XHR0IHN0cmluZyB9ICYgbmV3XF9zY2hlZFxfcG9saWN5ICYgTmV3IHZhbHVlIHRv
IHNldCBcXCBcaGxpbmUKLVxlbmR7dGFidWxhcn0KLVx2c3BhY2V7MC4zY219Ci0KLVxub2luZGVu
dCB7XGJmIFJldHVybiBUeXBlOn0KLXtcdHQKLXZvaWQKLX0KLQotCmRpZmYgLS1naXQgYS9kb2Nz
L3hlbi1hcGkveGVuYXBpLnRleCBiL2RvY3MveGVuLWFwaS94ZW5hcGkudGV4CmRlbGV0ZWQgZmls
ZSBtb2RlIDEwMDY0NAppbmRleCBiNTliNzA2Li4wMDAwMDAwCi0tLSBhL2RvY3MveGVuLWFwaS94
ZW5hcGkudGV4CisrKyAvZGV2L251bGwKQEAgLTEsNjAgKzAsMCBAQAotJQotJSBDb3B5cmlnaHQg
KGMpIDIwMDYtMjAwNyBYZW5Tb3VyY2UsIEluYy4KLSUKLSUgUGVybWlzc2lvbiBpcyBncmFudGVk
IHRvIGNvcHksIGRpc3RyaWJ1dGUgYW5kL29yIG1vZGlmeSB0aGlzIGRvY3VtZW50IHVuZGVyCi0l
IHRoZSB0ZXJtcyBvZiB0aGUgR05VIEZyZWUgRG9jdW1lbnRhdGlvbiBMaWNlbnNlLCBWZXJzaW9u
IDEuMiBvciBhbnkgbGF0ZXIKLSUgdmVyc2lvbiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdh
cmUgRm91bmRhdGlvbjsgd2l0aCBubyBJbnZhcmlhbnQKLSUgU2VjdGlvbnMsIG5vIEZyb250LUNv
dmVyIFRleHRzIGFuZCBubyBCYWNrLUNvdmVyIFRleHRzLiAgQSBjb3B5IG9mIHRoZQotJSBsaWNl
bnNlIGlzIGluY2x1ZGVkIGluIHRoZSBzZWN0aW9uIGVudGl0bGVkCi0lICJHTlUgRnJlZSBEb2N1
bWVudGF0aW9uIExpY2Vuc2UiIG9yIHRoZSBmaWxlIGZkbC50ZXguCi0lCi0lIEF1dGhvcnM6IEV3
YW4gTWVsbG9yLCBSaWNoYXJkIFNoYXJwLCBEYXZlIFNjb3R0LCBKb24gSGFycm9wLgotJQotCi1c
ZG9jdW1lbnRjbGFzc3tyZXBvcnR9Ci0KLVx1c2VwYWNrYWdle2E0fQotXHVzZXBhY2thZ2V7Z3Jh
cGhpY3N9Ci1cdXNlcGFja2FnZXtsb25ndGFibGV9Ci1cdXNlcGFja2FnZXtmYW5jeWhkcn0KLVx1
c2VwYWNrYWdle2h5cGVycmVmfQotXHVzZXBhY2thZ2V7YXJyYXl9Ci0KLVxzZXRsZW5ndGhcdG9w
c2tpcHswY219Ci1cc2V0bGVuZ3RoXHRvcG1hcmdpbnswY219Ci1cc2V0bGVuZ3RoXG9kZHNpZGVt
YXJnaW57MGNtfQotXHNldGxlbmd0aFxldmVuc2lkZW1hcmdpbnswY219Ci1cc2V0bGVuZ3RoXHBh
cmluZGVudHswcHR9Ci0KLSUlIFBhcmFtZXRlcnMgZm9yIGNvdmVyc2hlZXQ6Ci1caW5wdXR7eGVu
YXBpLWNvdmVyc2hlZXR9Ci0KLVxiZWdpbntkb2N1bWVudH0KLQotJSBUaGUgY292ZXJzaGVldCBp
dHNlbGYKLVxpbmNsdWRle2NvdmVyc2hlZXR9Ci0KLSUgVGhlIHJldmlzaW9uIGhpc3RvcnkKLVxp
bmNsdWRle3JldmlzaW9uLWhpc3Rvcnl9Ci0KLSUgVGFibGUgb2YgY29udGVudHMKLVx0YWJsZW9m
Y29udGVudHMKLQotCi0lIC4uLiBhbmQgb2ZmIHdlIGdvIQotCi1cY2hhcHRlcntJbnRyb2R1Y3Rp
b259Ci0KLVRoaXMgZG9jdW1lbnQgY29udGFpbnMgYSBkZXNjcmlwdGlvbiBvZiB0aGUgWGVuIE1h
bmFnZW1lbnQgQVBJLS0tYW4gaW50ZXJmYWNlIGZvcgotcmVtb3RlbHkgY29uZmlndXJpbmcgYW5k
IGNvbnRyb2xsaW5nIHZpcnR1YWxpc2VkIGd1ZXN0cyBydW5uaW5nIG9uIGEKLVhlbi1lbmFibGVk
IGhvc3QuIAotCi1caW5wdXR7cHJlc2VudGF0aW9ufQotCi1caW5jbHVkZXt3aXJlLXByb3RvY29s
fQotXGluY2x1ZGV7dm0tbGlmZWN5Y2xlfQotXGluY2x1ZGV7eGVuYXBpLWRhdGFtb2RlbH0KLVxp
bmNsdWRle2ZkbH0KLVxpbmNsdWRle2JpYmxpb2dyYXBoeX0KLQotXGVuZHtkb2N1bWVudH0KLS0g
CjEuNy4yLjUKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
XwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9s
aXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:11:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti38U-0000xA-Oy; Mon, 10 Dec 2012 13:11:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti38T-0000wv-02
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:11:05 +0000
Received: from [85.158.139.211:46762] by server-10.bemta-5.messagelabs.com id
	CA/C2-13383-86FD5C05; Mon, 10 Dec 2012 13:11:04 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355145062!18966250!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1826 invoked from network); 10 Dec 2012 13:11:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:11:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="97269"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 13:11:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 08:11:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti38P-0000ko-0a;
	Mon, 10 Dec 2012 13:11:01 +0000
Date: Mon, 10 Dec 2012 13:10:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355132533.31710.100.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212101305100.4633@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-9-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212071733450.8801@kaball.uk.xensource.com>
	<1355132533.31710.100.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 9/9] xen: strip /chosen/modules/module@<N>/*
 from dom0 device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Ian Campbell wrote:
> On Fri, 2012-12-07 at 17:35 +0000, Stefano Stabellini wrote:
> > On Thu, 6 Dec 2012, Ian Campbell wrote:
> > > These nodes are used by Xen to find the initial modules.
> > > 
> > > Only drop the "xen,multiboot-module" compatible nodes in case someone
> > > else has a similar idea.
> > > 
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > ---
> > > v4 - /chosen/modules/modules@N not /chosen/module@N
> > > v3 - use a helper to filter out DT elements which are not for dom0.
> > >      Better than an ad-hoc break in the middle of a loop.
> > > ---
> > >  xen/arch/arm/domain_build.c |   40 ++++++++++++++++++++++++++++++++++++++--
> > >  1 files changed, 38 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > index 7a964f7..27e02e4 100644
> > > --- a/xen/arch/arm/domain_build.c
> > > +++ b/xen/arch/arm/domain_build.c
> > > @@ -172,6 +172,40 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
> > >      return prop;
> > >  }
> > >  
> > > +/* Returns the next node in fdt (starting from offset) which should be
> > > + * passed through to dom0.
> > > + */
> > > +static int fdt_next_dom0_node(const void *fdt, int node,
> > > +                              int *depth_out,
> > > +                              int parents[DEVICE_TREE_MAX_DEPTH])
> > > +{
> > > +    int depth = *depth_out;
> > > +
> > > +    while ( (node = fdt_next_node(fdt, node, &depth)) &&
> > > +            node >= 0 && depth >= 0 )
> > > +    {
> > > +        if ( depth >= DEVICE_TREE_MAX_DEPTH )
> > > +            break;
> > > +
> > > +        parents[depth] = node;
> > > +
> > > +        /* Skip /chosen/modules/module@<N>/ and all subnodes */
> > > +        if ( depth >= 3 &&
> > > +             device_tree_node_matches(fdt, parents[1], "chosen") &&
> > > +             device_tree_node_matches(fdt, parents[2], "modules") &&
> > > +             device_tree_node_matches(fdt, parents[3], "module") &&
> > > +             fdt_node_check_compatible(fdt, parents[3],
> > > +                                       "xen,multiboot-module" ) == 0 )
> > > +            continue;
> > > +
> > > +        /* We've arrived at a node which dom0 is interested in. */
> > > +        break;
> > > +    }
> > > +
> > > +    *depth_out = depth;
> > > +    return node;
> > > +}
> > 
> > Can't we just skip the node if it is compatible with
> > "xen,multiboot-module", no matter where it lives?  This should simplify
> > this function greatly and you wouldn't need the parents parameter
> > anymore.
> 
> As well as my previous query about the meaning of the tree structure I
> think that even if we could get away with this in this particular case
> we are going to need this sort of infrastructure once we start doing
> proper filtering of dom0 vs xen nodes in the tree.

Maybe. However, in that case we could just have a generic
filter_device_tree_nodes function that takes an array of strings
(compatible strings? device tree paths? I would go for both in a
struct, but the former would probably suffice) and filters the DT
based on those. That would be very useful in the long run. This is
very ad-hoc for the /chosen/modules/module@<N> path.
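[A minimal sketch of the generic filter suggested above. The function name
node_is_filtered and the blocklist parameter are hypothetical, not part of the
posted patch; real code would fetch the node's "compatible" property from the
FDT blob with libfdt's fdt_getprop rather than receive it as a string.]

```c
#include <string.h>
#include <stddef.h>

/* Decide whether a node should be dropped from the dom0 device tree,
 * based purely on its "compatible" string and independent of where the
 * node lives in the tree.  A caller-supplied blocklist replaces the
 * hard-coded /chosen/modules/module@<N> path check. */
static int node_is_filtered(const char *compatible,
                            const char *const blocklist[],
                            size_t n)
{
    size_t i;

    if ( compatible == NULL )
        return 0; /* nodes without a compatible string pass through */

    for ( i = 0; i < n; i++ )
        if ( strcmp(compatible, blocklist[i]) == 0 )
            return 1; /* drop this node (and, by convention, its subtree) */

    return 0;
}
```

A caller would build the blocklist once, e.g. { "xen,multiboot-module" }, and
consult node_is_filtered from the tree-walking loop instead of matching each
path component by hand.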

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:15:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3CQ-0001H5-F3; Mon, 10 Dec 2012 13:15:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti3CP-0001Gy-9e
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:15:09 +0000
Received: from [85.158.143.35:64078] by server-2.bemta-4.messagelabs.com id
	88/A0-30861-C50E5C05; Mon, 10 Dec 2012 13:15:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355145298!15025212!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23594 invoked from network); 10 Dec 2012 13:14:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:14:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="33561"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 13:14:43 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 13:14:42 +0000
Message-ID: <1355145281.21160.21.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Dec 2012 13:14:41 +0000
In-Reply-To: <alpine.DEB.2.02.1212101300250.4633@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-2-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212071728350.8801@kaball.uk.xensource.com>
	<1355132370.31710.98.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212101300250.4633@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/9] xen: arm: parse modules from DT during
	early boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-10 at 13:04 +0000, Stefano Stabellini wrote:

> > > You have really written a lot of code here!
> > > I would have thought that just matching on the compatible string would
> > > be enough:
> > > 
> > > else if ( device_tree_node_matches(fdt, node, "linux-zimage") )
> > >      process_linuxzimage_node(fdt, node, name, address_cells, size_cells);
> > > else if ( device_tree_node_matches(fdt, node, "linux-initrd") )
> > >      process_linuxinitrd_node(fdt, node, name, address_cells, size_cells);
> > > 
> > > so that your process_linuxzimage_node and process_linuxinitrd_node start
> > > from the right node and have everything they need to parse it
> > 
> > Is the tree structure of Device Tree meaningless? I'd have thought that
> > a compatible node would only have meaning at a particular place in the
> > tree. Granted compatible nodes are often pretty specific and precise,
> > but is that inherent enough in DT that we can rely on it?
> 
> I don't know if it is entirely meaningless, but surely the compatible
> string is regarded as a much more reliable way to identify a node AFAIK.
> More often than not Linux drivers just use of_find_compatible_node to
> find their DT node.

Hrm, that sounds rather odd to me.

Anyway, there isn't much code needed to do it right -- there are only
about a dozen lines to do with actually walking the tree. The rest
(i.e. the majority of this patch) is the internals of
process_chosen_modules_node, the docs and the data structure, which
would be pretty much the same regardless of how the tree is walked.

Ian.
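[A toy model of the compatible-string lookup Stefano refers to. The struct
toy_node and find_compatible names are hypothetical; the kernel's
of_find_compatible_node and libfdt's fdt_node_offset_by_compatible walk a real
FDT blob rather than an array, but the matching rule is the same: a node is
identified by its compatible string, not by its path.]

```c
#include <string.h>
#include <stddef.h>

/* Flattened toy device tree: one entry per node. */
struct toy_node {
    const char *path;        /* e.g. "/chosen/modules/module@0" */
    const char *compatible;  /* may be NULL for nodes without one */
};

/* Return the first node whose compatible string matches, or NULL.
 * Note the path is never consulted. */
static const struct toy_node *
find_compatible(const struct toy_node *nodes, size_t n, const char *compat)
{
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( nodes[i].compatible &&
             strcmp(nodes[i].compatible, compat) == 0 )
            return &nodes[i];

    return NULL;
}
```

This illustrates the trade-off in the thread: matching only on the compatible
string keeps the walker tiny, at the cost of ignoring the tree structure that
the path-based check in fdt_next_dom0_node enforces.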


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > tree. Granted compatible nodes are often pretty specific and precise,
> > but is that inherent enough in DT that we can rely on it?
> 
> I don't know if it is entirely meaningless, but surely the compatible
> string is regarded as a much more reliable way to identify a node AFAIK.
> More often than not Linux drivers just use of_find_compatible_node to
> find their DT node.

Hrm, that sounds rather odd to me.

Anyway, there isn't much code needed to do it right -- there's only ~ a
dozen lines to do with actually walking the tree. The rest (i.e. the
majority of this patch) is the internals of process_chosen_modules_node,
the docs and the data structure which would be pretty much the same
regardless of walking the tree. 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:17:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:17:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3EN-0001No-6E; Mon, 10 Dec 2012 13:17:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti3EL-0001NY-QF
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:17:10 +0000
Received: from [85.158.139.83:38155] by server-4.bemta-5.messagelabs.com id
	B1/B3-14693-5D0E5C05; Mon, 10 Dec 2012 13:17:09 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355145428!21825895!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8466 invoked from network); 10 Dec 2012 13:17:08 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 13:17:08 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:58169 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti3IB-0002Ug-Ol; Mon, 10 Dec 2012 14:21:07 +0100
Date: Mon, 10 Dec 2012 14:17:04 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <76596674.20121210141704@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212101227490.4633@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
	<alpine.DEB.2.02.1212101227490.4633@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] qemu-xen-trad/pt_msi_disable: do not clear
	all MSI flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Monday, December 10, 2012, 1:36:41 PM, you wrote:

> "qemu-xen-trad: fix msi_translate with PV event delivery" added a
> pt_msi_disable() call into pt_msgctrl_reg_write, clearing the MSI flags
> as a consequence. MSIs get enabled again soon after by calling
> pt_msi_setup.
>
> However the MSI flags are only setup once in the pt_msgctrl_reg_init
> function, so from the QEMU point of view the device has lost some
> important properties, like for example PCI_MSI_FLAGS_64BIT.
>
> This patch fixes the bug by clearing only the MSI
> enabled/mapped/initialized flags in pt_msi_disable.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Tested-by: G.R. <firemeteor@users.sourceforge.net>
> Xen-devel: http://marc.info/?l=xen-devel&m=135489879503075
>
> diff --git a/hw/pt-msi.c b/hw/pt-msi.c
> index 73f737d..b03b989 100644
> --- a/hw/pt-msi.c
> +++ b/hw/pt-msi.c
> @@ -213,7 +213,7 @@ void pt_msi_disable(struct pt_dev *dev)
>  
>  out:
>      /* clear msi info */
> -    dev->msi->flags = 0;
> +    dev->msi->flags &= ~(MSI_FLAG_UNINIT | PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
>      dev->msi->pirq = -1;
>      dev->msi_trans_en = 0;
>  }

Seems this should be fixed for qemu-upstream as well?
I think since switching to qemu-upstream as default for xen-unstable / 4.3 seems around the corner,
it's perhaps wise for all patches to qemu-traditional to also check whether qemu-upstream needs the same fix (to prevent regressions after the switch)?

--
Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:18:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3F5-0001S3-Kp; Mon, 10 Dec 2012 13:17:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti3F4-0001Rq-C1
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:17:54 +0000
Received: from [85.158.143.35:16749] by server-2.bemta-4.messagelabs.com id
	75/64-30861-101E5C05; Mon, 10 Dec 2012 13:17:53 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-15.tower-21.messagelabs.com!1355145201!14008511!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20515 invoked from network); 10 Dec 2012 13:13:21 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-15.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 13:13:21 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:58165 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti3EY-0002Rm-G5; Mon, 10 Dec 2012 14:17:22 +0100
Date: Mon, 10 Dec 2012 14:13:19 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1884211891.20121210141319@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212101256080.4633@kaball.uk.xensource.com>
References: <272767244.20121206175454@eikelenboom.it>
	<alpine.DEB.2.02.1212071655460.8801@kaball.uk.xensource.com>
	<1877721103.20121207192546@eikelenboom.it>
	<alpine.DEB.2.02.1212101256080.4633@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Upstream qemu-xen,
	log verbosity and compile errors when enabling debug, filenaming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 1:59:51 PM, you wrote:

> On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
>> Friday, December 7, 2012, 6:01:43 PM, you wrote:
>> 
>> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> >> Hi All,
>> >> 
>> >> Yesterday I have tried building and using upstream qemu and seabios.
>> >> Config.mk:
>> >> QEMU_UPSTREAM_URL ?= git://git.qemu.org/qemu.git
>> >> QEMU_UPSTREAM_REVISION ?= master
>> >> 
>> >> SEABIOS_UPSTREAM_URL ?= git://git.qemu.org/seabios.git
>> >> SEABIOS_UPSTREAM_TAG ?= master
>> >> 
>> >> And i'm happy to say that it works quite ok, even with (secondary pci) pci-passthrough ( using an ATI gfx adapter and windows7 as guest os) :-).
>> >> 
>> >> But it seems to have an issue with a USB controller which is trying to use msi-X interrupts, which makes xl dmesg report:
>> >> (XEN) [2012-12-06 16:07:24] vmsi.c:108:d32767 Unsupported delivery mode 3
>> >> and when "pci_msitranslate=0" is set the error still occurs, only this time the correct domain number is reported, instead of the 32767.
>> >> 
>> >> 
>> >> However, when trying to debug, i noticed although making a debug build (make debug=y && make debug=y install), qemu-dm-<guestname>.log stays almost empty.
>> >> It seems all the defines related to debugging are not set.
>> >> 
>> >> - Would it be appropriate to enable them all when making a debug build ?
>> >> - Would it be wise to also have some more verbose logging when not running a debug build ?
>> 
>> > Yes and yes
>> 
>> >> - And if yes, what would be the nicest way to set the defines ?
>> 
>> > My guess is that it would be enough to turn on XEN_PT_LOGGING_ENABLED by
>> > default
>> 
>> >> - Should the naming of the debug defines be made more consistent ?
>> 
>> > Yes
>> 
>> 
>> 
>> >> 
>> >> When enabling these debug defines by hand:
>> >> 
>> >> xen-all.c
>> >> #define DEBUG_XEN
>> >> 
>> >> xen-mapcache.c
>> >> #define MAPCACHE_DEBUG
>> >> 
>> >> hw/xen-host-pci-device.c
>> >> #define XEN_HOST_PCI_DEVICE_DEBUG
>> >> 
>> >> hw/xen_platform.c
>> >> #define DEBUG_PLATFORM
>> >> 
>> >> hw/xen_pt.h
>> >> #define XEN_PT_LOGGING_ENABLED
>> >> #define XEN_PT_DEBUG_PCI_CONFIG_ACCESS
>> >> 
>> >> I get a lot of compile errors related to wrong types in the debug printf's.
>> 
>> > That's really bad. I would like upstream QEMU to have the same level of
>> > logging as qemu-xen-traditional by default. And they should compile.
>> 
>> 
>> >> Another thing that occurred to me was that the file naming doesn't seem to be overly consistent:
>> >> 
>> >> xen-all.c
>> >> xen-mapcache.c
>> >> xen-mapcache.h
>> >> xen-stub.c
>> >> xen_apic.c
>> >> hw/xen_backend.c
>> >> hw/xen_backend.h
>> >> hw/xen_blkif.h
>> >> hw/xen_common.h
>> >> hw/xen_console.c
>> >> hw/xen_devconfig.c
>> >> hw/xen_disk.c
>> >> hw/xen_domainbuild.c
>> >> hw/xen_domainbuild.h
>> >> hw/xenfb.c
>> >> hw/xen.h
>> >> hw/xen-host-pci-device.c
>> >> hw/xen-host-pci-device.h
>> >> hw/xen_machine_pv.c
>> >> hw/xen_nic.c
>> >> hw/xen_platform.c
>> >> hw/xen_pt.c
>> >> hw/xen_pt_config_init.c
>> >> hw/xen_pt.h
>> >> hw/xen_pt_msi.c
>> >> 
>> >> Would it be worthwhile to make it:
>> >> - consistent all underscore or all minus ?
>> >> - always xen_ (or xen- depending on the above) ?
>> 
>> > Yes, definitely.
>> > Given that the development window for QEMU 1.4 has just opened might
>> > even be the right time to make these changes.
>> 
>> > Are you volunteering? :)
>> 
>> Erhmm yes i think i should be able to accomplish this :-)
>> And yes i did notice the 1.3 release :-)
>> 
>> Patches would be directly against the qemu-upstream git-tree, first round to xen-devel and when acked send to qemu-list ?

> It is best to CC qemu-devel from the start, they are used to high levels
> of traffic anyway ;)

Ok

>> For the file renaming, the rest of the qemu sources seems to be mixed, but i think it would be more neat for the xen part to stick to one of the two .. but which one would be preferred ?
>> 1. all underscores
>> 2. all minus

> I would go for 1.
> However I would keep the renaming patch separate from the others,
> because it could start a flame war between underscores supporters
> and minuses supporters :)

Ok i thought about that :-)

> The other changes (better default debug levels, working debugs, debug
> functions naming) should all be non-controversial.

Ok will send those first


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:21:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3Ia-0001iQ-9X; Mon, 10 Dec 2012 13:21:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti3IX-0001iG-V7
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:21:30 +0000
Received: from [85.158.139.83:42364] by server-4.bemta-5.messagelabs.com id
	90/FA-14693-9D1E5C05; Mon, 10 Dec 2012 13:21:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355145626!24865624!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17681 invoked from network); 10 Dec 2012 13:20:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:20:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="33699"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 13:20:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 13:20:26 +0000
Message-ID: <1355145625.21160.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
From xen-devel-bounces@lists.xen.org Mon Dec 10 13:21:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3Ia-0001iQ-9X; Mon, 10 Dec 2012 13:21:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti3IX-0001iG-V7
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:21:30 +0000
Received: from [85.158.139.83:42364] by server-4.bemta-5.messagelabs.com id
	90/FA-14693-9D1E5C05; Mon, 10 Dec 2012 13:21:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355145626!24865624!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17681 invoked from network); 10 Dec 2012 13:20:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:20:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="33699"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 13:20:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 13:20:26 +0000
Message-ID: <1355145625.21160.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Dec 2012 13:20:25 +0000
In-Reply-To: <alpine.DEB.2.02.1212101305100.4633@kaball.uk.xensource.com>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-9-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212071733450.8801@kaball.uk.xensource.com>
	<1355132533.31710.100.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212101305100.4633@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 9/9] xen: strip /chosen/modules/module@<N>/*
 from dom0 device tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-10 at 13:10 +0000, Stefano Stabellini wrote:
> On Mon, 10 Dec 2012, Ian Campbell wrote:
> > On Fri, 2012-12-07 at 17:35 +0000, Stefano Stabellini wrote:
> > > On Thu, 6 Dec 2012, Ian Campbell wrote:
> > > > These nodes are used by Xen to find the initial modules.
> > > > 
> > > > Only drop the "xen,multiboot-module" compatible nodes in case someone
> > > > else has a similar idea.
> > > > 
> > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > > ---
> > > > v4 - /chosen/modules/modules@N not /chosen/module@N
> > > > v3 - use a helper to filter out DT elements which are not for dom0.
> > > >      Better than an ad-hoc break in the middle of a loop.
> > > > ---
> > > >  xen/arch/arm/domain_build.c |   40 ++++++++++++++++++++++++++++++++++++++--
> > > >  1 files changed, 38 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > > index 7a964f7..27e02e4 100644
> > > > --- a/xen/arch/arm/domain_build.c
> > > > +++ b/xen/arch/arm/domain_build.c
> > > > @@ -172,6 +172,40 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
> > > >      return prop;
> > > >  }
> > > >  
> > > > +/* Returns the next node in fdt (starting from offset) which should be
> > > > + * passed through to dom0.
> > > > + */
> > > > +static int fdt_next_dom0_node(const void *fdt, int node,
> > > > +                              int *depth_out,
> > > > +                              int parents[DEVICE_TREE_MAX_DEPTH])
> > > > +{
> > > > +    int depth = *depth_out;
> > > > +
> > > > +    while ( (node = fdt_next_node(fdt, node, &depth)) &&
> > > > +            node >= 0 && depth >= 0 )
> > > > +    {
> > > > +        if ( depth >= DEVICE_TREE_MAX_DEPTH )
> > > > +            break;
> > > > +
> > > > +        parents[depth] = node;
> > > > +
> > > > +        /* Skip /chosen/modules/module@<N>/ and all subnodes */
> > > > +        if ( depth >= 3 &&
> > > > +             device_tree_node_matches(fdt, parents[1], "chosen") &&
> > > > +             device_tree_node_matches(fdt, parents[2], "modules") &&
> > > > +             device_tree_node_matches(fdt, parents[3], "module") &&
> > > > +             fdt_node_check_compatible(fdt, parents[3],
> > > > +                                       "xen,multiboot-module" ) == 0 )
> > > > +            continue;
> > > > +
> > > > +        /* We've arrived at a node which dom0 is interested in. */
> > > > +        break;
> > > > +    }
> > > > +
> > > > +    *depth_out = depth;
> > > > +    return node;
> > > > +}
> > > 
> > > Can't we just skip the node if it is compatible with
> > > "xen,multiboot-module", no matter where it lives?  This should simplify
> > > this function greatly and you wouldn't need the parents parameter
> > > anymore.
> > 
> > As well as my previous query about the meaning of the tree structure I
> > think that even if we could get away with this in this particular case
> > we are going to need this sort of infrastructure once we start doing
> > proper filtering of dom0 vs xen nodes in the tree.
> 
> Maybe. However in that case we could just have a generic
> filter_device_tree_nodes function that takes an array of strings
> (compatible string? device tree paths? I would go for both in a struct,
> but the former would probably suffice), and filter the DT based on
> those. That would be very useful in the long run. This is very ad-hoc
> for the /chosen/modules/module@<N> path.

This function is precisely a filtering function as you are suggesting.
The only difference is that it does the filtering in code instead of
using a string/struct. There is nothing ad-hoc about it; it just happens
that the only user right now is the module@N stuff.

I think you'd find that the code to filter based on a struct containing
a path would be more complicated and would be a superset of this function,
since you would have to check the path against the same sort of parent
array.
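
Ian's point, that resolving a path requires tracking ancestors, can be sketched with a toy, self-contained model of fdt_next_node()'s depth-first visiting order. Everything below (the struct, names, and sizes) is illustrative, not libfdt's API:

```c
#include <assert.h>
#include <string.h>

#define MAX_DEPTH 16

/* Toy model of a flattened device tree: nodes listed in the
 * depth-first order in which fdt_next_node() would visit them. */
struct fake_node {
    const char *name;
    int depth;          /* root is depth 0 */
};

/* Is node 'target' at (or below) the given path?  Note that even a
 * string-based path filter has to maintain the same parents[]
 * bookkeeping as the patch's fdt_next_dom0_node(). */
static int matches_path(const struct fake_node *nodes, int n, int target,
                        const char **path, int path_len)
{
    int parents[MAX_DEPTH];
    int i;

    /* Record the most recent node seen at each depth up to 'target'. */
    for ( i = 0; i <= target && i < n; i++ )
        parents[nodes[i].depth] = i;

    if ( nodes[target].depth < path_len )
        return 0;
    for ( i = 0; i < path_len; i++ )
        if ( strcmp(nodes[parents[i + 1]].name, path[i]) != 0 )
            return 0;
    return 1;
}
```

A table-driven filter_device_tree_nodes would still carry this ancestor array internally; the only thing that moves into data is the list of paths/compatible strings.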

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:33:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3TV-0001xR-Pi; Mon, 10 Dec 2012 13:32:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yunhong.jiang@intel.com>) id 1Ti3TU-0001xM-5q
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:32:48 +0000
Received: from [85.158.137.99:47714] by server-5.bemta-3.messagelabs.com id
	37/F4-26311-F74E5C05; Mon, 10 Dec 2012 13:32:47 +0000
X-Env-Sender: yunhong.jiang@intel.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355146361!18395065!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM3NzUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6178 invoked from network); 10 Dec 2012 13:32:42 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-2.tower-217.messagelabs.com with SMTP;
	10 Dec 2012 13:32:42 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 10 Dec 2012 05:32:40 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,252,1355126400"; d="scan'208";a="178908930"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by AZSMGA002.ch.intel.com with ESMTP; 10 Dec 2012 05:32:39 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 05:32:39 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 05:32:39 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Mon, 10 Dec 2012 21:32:37 +0800
From: "Jiang, Yunhong" <yunhong.jiang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: frame table setup for memory hotplug
Thread-Index: AQHN1JYxVGteCELzMEuvG/T8mYXdW5gSCrDg
Date: Mon, 10 Dec 2012 13:32:37 +0000
Message-ID: <DDCAE26804250545B9934A2056554AA0371A49@SHSMSX101.ccr.corp.intel.com>
References: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
In-Reply-To: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] frame table setup for memory hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan, sorry for the slow response.

IIRC, the reason we do this is that in the memory hotplug situation there will be a very big hole between the addresses of the memory populated before hotplug and the hot-added memory (i.e. the added memory starts at a very high address). So instead of setting up the frame table for the whole address space, we expand the frame table dynamically after hotplug.
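
The trade-off can be made concrete with some back-of-the-envelope arithmetic. The sizes below are assumptions chosen for illustration (not Xen's actual page_info size): only the frame-table pages that cover the hot-added PFN window need populating, rather than everything from PFN zero up.

```c
#include <assert.h>

/* Illustrative sizes for the example only (not Xen's real values). */
#define EX_PAGE_SIZE  4096UL
#define EX_PGINFO     32UL    /* assumed bytes of metadata per frame */

/* First frame-table page needed to describe pfns starting at spfn. */
static unsigned long ft_first_page(unsigned long spfn)
{
    return spfn * EX_PGINFO / EX_PAGE_SIZE;
}

/* One past the last frame-table page needed for pfns below epfn. */
static unsigned long ft_end_page(unsigned long epfn)
{
    return (epfn * EX_PGINFO + EX_PAGE_SIZE - 1) / EX_PAGE_SIZE;
}
```

With these numbers, hot-adding 256MB at the 64GB mark needs only 512 frame-table pages, versus over 130,000 if the table were populated from address zero across the hole.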

We have a memory hotplug environment, so if you have a patch I'm glad to test it, or to have my colleagues help test it.

Thanks
--jyh

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Saturday, December 08, 2012 12:16 AM
> To: Jiang, Yunhong
> Cc: xen-devel
> Subject: frame table setup for memory hotplug
> 
> Yunhong,
> 
> in c/s 20617:283a5357d196 you modified init_frametable() to
> populate the frame table slightly differently for the hotplug
> case. I wonder why you did that, because (apart from the bug
> already fixed, and the off-by-one bugs I'm having a fix pending
> for) I fear you didn't pay attention to the fact that using
> pdx_to_page() on something that doesn't really represent the
> PDX for a valid page may return a value not validly usable here.
> 
> Do you happen to recall what it was that caused you to do that
> adjustment in the first place? If you don't, do you have an
> environment where you would be able to test an eventual
> change of mine (effectively undoing that part of said c/s)?
> 
> Thanks, Jan
> 



From xen-devel-bounces@lists.xen.org Mon Dec 10 13:34:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3UZ-000219-8b; Mon, 10 Dec 2012 13:33:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti3UW-00020s-7I
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:33:53 +0000
Received: from [85.158.143.35:56267] by server-2.bemta-4.messagelabs.com id
	23/C9-30861-FB4E5C05; Mon, 10 Dec 2012 13:33:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355146429!5820886!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32714 invoked from network); 10 Dec 2012 13:33:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:33:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="100285"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 13:33:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 08:33:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti3US-0001D6-Iu;
	Mon, 10 Dec 2012 13:33:48 +0000
Date: Mon, 10 Dec 2012 13:33:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355134031.31710.111.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212101318000.4633@kaball.uk.xensource.com>
References: <CAJ2v2mgkTjuq7cJWsgwbY-NJpAeEVRtai=GJV_EhrVzMuZ7QBA@mail.gmail.com>
	<1355134031.31710.111.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: asad raza <asadrupucit2006@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why we are doing this (below code) in __init
 setup_pagetables() method in ARM arch in setup.c.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Ian Campbell wrote:
> Please read http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions and
> consider re-asking your question with more context. 
> 
> e.g. include your actual goal, what steps have you taken to find the
> answer for yourself etc.

Yes, that would be very helpful, even if the context is just "I am
trying to learn the Xen ARM code".


> On Sat, 2012-12-08 at 13:02 +0000, asad raza wrote:
> >  /* Link in the fixmap pagetable */
> >     pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
> >                            >> PAGE_SHIFT);
> >     pte.pt.table = 1;
> >     write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
> >     /*
> >      * No flush required here. Individual flushes are done in
> >      * set_fixmap as entries are used.
> >      */

As the comment says, these lines hook the new copy of the fixmap
pagetable page into the new copy of the second-level pagetable page.

Keep in mind that at this point we have copied Xen to its new location
in memory, but the new copy of the pagetable, which is not in use yet,
still points at the old location.


> >     /* Break up the Xen mapping into 4k pages and protect them separately. */
> >     for ( i = 0; i < LPAE_ENTRIES; i++ )
> >     {
> >         unsigned long mfn = paddr_to_pfn(xen_paddr) + i;
> >         unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT);
> >         if ( !is_kernel(va) )
> >             break;
> >         pte = mfn_to_xen_entry(mfn);
> >         pte.pt.table = 1; /* 4k mappings always have this bit set */
> >         if ( is_kernel_text(va) || is_kernel_inittext(va) )
> >         {
> >             pte.pt.xn = 0;
> >             pte.pt.ro = 1;
> >         }
> >         if ( is_kernel_rodata(va) )
> >             pte.pt.ro = 1;
> >         write_pte(xen_xenmap + i, pte);
> >         /* No flush required here as page table is not hooked in yet. */
> >     }

This loop updates the Xen mappings in the new copy of the pagetable to
point to the physical address of the new copy of Xen (xen_paddr).
It also changes the way Xen is mapped: so far Xen was mapped using 2MB
mappings (only two pagetable levels); here we introduce a third level
(xen_xenmap) to map Xen using 4K pages.
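
To see why splitting the 2MB mapping requires a third level, consider the LPAE index arithmetic with a 4K granule, where each level resolves 9 bits of virtual address. This is a sketch with illustrative constants; the 2MB virtual base and the names below are assumptions for the example, not Xen's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

/* LPAE with a 4K granule: each level resolves 9 bits of VA. */
#define EX_PAGE_SHIFT   12
#define EX_LPAE_SHIFT   9
#define EX_LPAE_ENTRIES (1u << EX_LPAE_SHIFT)              /* 512 */
#define EX_SECOND_SHIFT (EX_PAGE_SHIFT + EX_LPAE_SHIFT)    /* 21 */

/* A second-level entry spans 1 << 21 = 2MB; splitting it means
 * pointing it at a third-level table of 512 4K entries instead. */
static unsigned int ex_second_offset(uint64_t va)
{
    return (va >> EX_SECOND_SHIFT) & (EX_LPAE_ENTRIES - 1);
}

static unsigned int ex_third_offset(uint64_t va)
{
    return (va >> EX_PAGE_SHIFT) & (EX_LPAE_ENTRIES - 1);
}
```

So one second-level entry for the Xen image is replaced by a table of up to 512 4K entries, which is exactly what the loop over LPAE_ENTRIES fills in.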


> >     pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
> >                            >> PAGE_SHIFT);
> >     pte.pt.table = 1;
> >     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
> >     /* Have changed a mapping used for .text. Flush everything for safety. */
> >     flush_xen_text_tlb();

Here we hook the third-level pagetable page, which maps Xen at its new
location using 4K mappings, into the new copy of the pagetable.


> >     /* From now on, no mapping may be both writable and executable. */
> >     WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);




From xen-devel-bounces@lists.xen.org Mon Dec 10 13:34:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3UZ-000219-8b; Mon, 10 Dec 2012 13:33:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti3UW-00020s-7I
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:33:53 +0000
Received: from [85.158.143.35:56267] by server-2.bemta-4.messagelabs.com id
	23/C9-30861-FB4E5C05; Mon, 10 Dec 2012 13:33:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355146429!5820886!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32714 invoked from network); 10 Dec 2012 13:33:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:33:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="100285"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 13:33:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 08:33:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti3US-0001D6-Iu;
	Mon, 10 Dec 2012 13:33:48 +0000
Date: Mon, 10 Dec 2012 13:33:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355134031.31710.111.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212101318000.4633@kaball.uk.xensource.com>
References: <CAJ2v2mgkTjuq7cJWsgwbY-NJpAeEVRtai=GJV_EhrVzMuZ7QBA@mail.gmail.com>
	<1355134031.31710.111.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: asad raza <asadrupucit2006@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why we are doing this (below code) in __init
 setup_pagetables() method in ARM arch in setup.c.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Ian Campbell wrote:
> Please read http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions and
> consider re-asking your question with more context. 
> 
> e.g. include your actual goal, what steps have you taken to find the
> answer for yourself etc.

Yes, that would be very helpful, even if the context is just "I am
trying to learn the Xen ARM code".


> On Sat, 2012-12-08 at 13:02 +0000, asad raza wrote:
> >  /* Link in the fixmap pagetable */
> >     pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
> >                            >> PAGE_SHIFT);
> >     pte.pt.table = 1;
> >     write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
> >     /*
> >      * No flush required here. Individual flushes are done in
> >      * set_fixmap as entries are used.
> >      */

As the comment say, these few lines hook the new copy of the fixmap
pagetable page to the new copy of the second level pagetable page.

Keep in mind that at this point we have already copied Xen to its new
location in memory, but the new copy of the pagetable, which is not yet
in use, still points at the old location.


> >     /* Break up the Xen mapping into 4k pages and protect them separately. */
> >     for ( i = 0; i < LPAE_ENTRIES; i++ )
> >     {
> >         unsigned long mfn = paddr_to_pfn(xen_paddr) + i;
> >         unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT);
> >         if ( !is_kernel(va) )
> >             break;
> >         pte = mfn_to_xen_entry(mfn);
> >         pte.pt.table = 1; /* 4k mappings always have this bit set */
> >         if ( is_kernel_text(va) || is_kernel_inittext(va) )
> >         {
> >             pte.pt.xn = 0;
> >             pte.pt.ro = 1;
> >         }
> >         if ( is_kernel_rodata(va) )
> >             pte.pt.ro = 1;
> >         write_pte(xen_xenmap + i, pte);
> >         /* No flush required here as page table is not hooked in yet. */
> >     }

This loop updates the Xen mappings in the new copy of the pagetable to
point to the physical address of the new copy of Xen (xen_paddr).
It also changes the way Xen is mapped: so far Xen was mapped using 2MB
mappings (only two pagetable levels); here we introduce a third level
(xen_xenmap) to map Xen using 4KB pages.


> >     pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
> >                            >> PAGE_SHIFT);
> >     pte.pt.table = 1;
> >     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
> >     /* Have changed a mapping used for .text. Flush everything for safety. */
> >     flush_xen_text_tlb();

We hook the third-level pagetable page, which maps Xen at its new
location using 4KB mappings, into the new copy of the pagetable.


> >     /* From now on, no mapping may be both writable and executable. */
> >     WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 13:38:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3Yl-0002La-Pf; Mon, 10 Dec 2012 13:38:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti3Yl-0002LR-2Q
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:38:15 +0000
Received: from [85.158.138.51:19228] by server-10.bemta-3.messagelabs.com id
	37/D8-19806-6C5E5C05; Mon, 10 Dec 2012 13:38:14 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355146683!28145403!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17474 invoked from network); 10 Dec 2012 13:38:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 13:38:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="100550"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 13:38:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 08:38:02 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti3YY-0001HU-3e;
	Mon, 10 Dec 2012 13:38:02 +0000
Date: Mon, 10 Dec 2012 13:38:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <76596674.20121210141704@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212101336440.4633@kaball.uk.xensource.com>
References: <CAKhsbWaCbg0cP1eUYS6JdeNjpHZenEegOQ-7w7KFHE=xEJkoTQ@mail.gmail.com>
	<alpine.DEB.2.02.1212101227490.4633@kaball.uk.xensource.com>
	<76596674.20121210141704@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-2003735875-1355146680=:4633"
Cc: xen-devel <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] qemu-xen-trad/pt_msi_disable: do not clear
 all MSI flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-2003735875-1355146680=:4633
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Mon, 10 Dec 2012, Sander Eikelenboom wrote:
> Monday, December 10, 2012, 1:36:41 PM, you wrote:
>
> > "qemu-xen-trad: fix msi_translate with PV event delivery" added a
> > pt_msi_disable() call into pt_msgctrl_reg_write, clearing the MSI flags
> > as a consequence. MSIs get enabled again soon after by calling
> > pt_msi_setup.
>
> > However the MSI flags are only setup once in the pt_msgctrl_reg_init
> > function, so from the QEMU point of view the device has lost some
> > important properties, like for example PCI_MSI_FLAGS_64BIT.
>
> > This patch fixes the bug by clearing only the MSI
> > enabled/mapped/initialized flags in pt_msi_disable.
>
>
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > Tested-by: G.R. <firemeteor@users.sourceforge.net>
> > Xen-devel: http://marc.info/?l=xen-devel&m=135489879503075
>
> > diff --git a/hw/pt-msi.c b/hw/pt-msi.c
> > index 73f737d..b03b989 100644
> > --- a/hw/pt-msi.c
> > +++ b/hw/pt-msi.c
> > @@ -213,7 +213,7 @@ void pt_msi_disable(struct pt_dev *dev)
> >
> >  out:
> >      /* clear msi info */
> > -    dev->msi->flags = 0;
> > +    dev->msi->flags &= ~(MSI_FLAG_UNINIT | PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
> >      dev->msi->pirq = -1;
> >      dev->msi_trans_en = 0;
> >  }
>
>
> Seems this should be fixed for qemu-upstream as well?
> I think since switching to qemu-upstream as default for xen-unstable / 4.3 seems around the corner,
> it's perhaps wise for all patches to qemu-traditional, to also check if qemu-upstream needs the same fix (to prevent regressions after the switch)?
>

That is true, and you are right to remind me, but this is one of the few
places where the PCI passthrough code in upstream QEMU and
qemu-xen-traditional differ: MSI translation is not implemented at all
in upstream QEMU, so there is no need for this patch there.
--1342847746-2003735875-1355146680=:4633
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-2003735875-1355146680=:4633--


From xen-devel-bounces@lists.xen.org Mon Dec 10 13:47:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 13:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3hs-0002hy-4h; Mon, 10 Dec 2012 13:47:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti3hr-0002ht-As
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 13:47:39 +0000
Received: from [193.109.254.147:25925] by server-3.bemta-14.messagelabs.com id
	27/EF-01317-AF7E5C05; Mon, 10 Dec 2012 13:47:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1355147257!10057471!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16945 invoked from network); 10 Dec 2012 13:47:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 13:47:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 13:47:37 +0000
Message-Id: <50C5F60602000078000AF672@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 13:47:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yunhong Jiang" <yunhong.jiang@intel.com>
References: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
	<DDCAE26804250545B9934A2056554AA0371A49@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DDCAE26804250545B9934A2056554AA0371A49@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] frame table setup for memory hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.12.12 at 14:32, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
> IIRC, the reason we do this is that in a memory hotplug situation there
> will be a very big hole between the addresses of the memory populated before
> hot-plug and the hot-added memory (i.e. the added memory starts at a very
> high-end address). So instead of setting up the frame table for the whole
> address space, we expand the frame table dynamically after hotplug.
> 
> We have a memory hotplug environment, so if you have any patch, I'm glad
> to test it, or have my colleagues help to test it.

I meanwhile decided to keep the code logically the same, but
testing the patch at
http://lists.xen.org/archives/html/xen-devel/2012-12/msg00793.html
(or the staging/normal trees once it went in/got pushed) would still be
much appreciated.

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 14:04:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 14:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti3xd-00030b-P3; Mon, 10 Dec 2012 14:03:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yunhong.jiang@intel.com>) id 1Ti3xc-00030W-GU
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 14:03:56 +0000
Received: from [85.158.139.211:44688] by server-14.bemta-5.messagelabs.com id
	88/6D-09538-BCBE5C05; Mon, 10 Dec 2012 14:03:55 +0000
X-Env-Sender: yunhong.jiang@intel.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355148233!19032486!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM3NzUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25521 invoked from network); 10 Dec 2012 14:03:54 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-15.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 14:03:54 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 10 Dec 2012 06:03:52 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,252,1355126400"; d="scan'208";a="178920821"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by AZSMGA002.ch.intel.com with ESMTP; 10 Dec 2012 06:03:52 -0800
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 06:03:52 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 06:03:51 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 10 Dec 2012 22:03:50 +0800
From: "Jiang, Yunhong" <yunhong.jiang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: frame table setup for memory hotplug
Thread-Index: AQHN1JYxVGteCELzMEuvG/T8mYXdW5gSCrDg//9/1wCAAIpyQA==
Date: Mon, 10 Dec 2012 14:03:50 +0000
Message-ID: <DDCAE26804250545B9934A2056554AA0371B28@SHSMSX101.ccr.corp.intel.com>
References: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
	<DDCAE26804250545B9934A2056554AA0371A49@SHSMSX101.ccr.corp.intel.com>
	<50C5F60602000078000AF672@nat28.tlf.novell.com>
In-Reply-To: <50C5F60602000078000AF672@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] frame table setup for memory hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

That hot-plug system is quite difficult to set up. I will let you know when the test is done.

--jyh

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, December 10, 2012 9:48 PM
> To: Jiang, Yunhong
> Cc: xen-devel
> Subject: RE: frame table setup for memory hotplug
> 
> >>> On 10.12.12 at 14:32, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
> > IIRC, the reason we do this is that in a memory hotplug situation there
> > will be a very big hole between the addresses of the memory populated
> > before hot-plug and the hot-added memory (i.e. the added memory starts
> > at a very high-end address). So instead of setting up the frame table
> > for the whole address space, we expand the frame table dynamically
> > after hotplug.
> >
> > We have a memory hotplug environment, so if you have any patch, I'm glad
> > to test it, or have my colleagues help to test it.
> 
> I meanwhile decided to keep the code logically the same, but
> testing the patch at
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00793.html
> (or the staging/normal trees once it went in/got pushed) would still be
> much appreciated.
> 
> Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 14:11:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 14:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti458-0003C3-MT; Mon, 10 Dec 2012 14:11:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Ti456-0003By-Dy
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 14:11:40 +0000
Received: from [85.158.138.51:14978] by server-13.bemta-3.messagelabs.com id
	7F/4E-24887-B9DE5C05; Mon, 10 Dec 2012 14:11:39 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355148698!28247497!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11873 invoked from network); 10 Dec 2012 14:11:38 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-11.tower-174.messagelabs.com with AES128-SHA encrypted
	SMTP; 10 Dec 2012 14:11:38 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Mon, 10 Dec 2012 15:11:37 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] VGA passthrough and AMD drivers
Thread-Index: Ac3Wwv/Y5tYt6VdAQMWXX8TGgN5fSA==
Date: Mon, 10 Dec 2012 14:11:36 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53B78@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/12/12 16:51, Aurélien MILLIAT wrote:
>>
>> >> Hi all,
>>
>> >>
>>
>> >> I have made some tests to find a good driver for the FirePro V8800 on
>>
>> >> Windows 7 64-bit HVM.
>>
>> >>
>>
>> >> I have been focused on "advanced features": quad buffer and active
>>
>> >> stereoscopy, synchronization ...
>>
>> >>
>>
>> >> The results, for all FirePro drivers (of this year): I can't get the
>>
>> >> quad buffer/active stereoscopy feature.
>>
>> >>
>>
>> >> But they work on a native installation.
>>
>> >>
>>
>> >Can you describe the setup a little more?
>>
>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>
>> It's a setup used in a CAVE system. I try (and it works, minus some
>> issues) to virtualize "virtual reality contexts" that need full
>> graphics card features.
>>
>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>
>> cores_per_socket : 4
>>
>> threads_per_core : 2
>>
>> cpu_mhz : 2660
>>
>> total_memory : 4079
>>
>> >How many graphic cards per guest?
>>
>> One card per guest.
>>
>> >How many guests? On how many hosts?
>>
>> One guest per computer.
>>
>
>And of course, I just thought of some other questions:
>What version of Xen are you using?
>What kernel are you using in Dom0?

release                : 2.6.32-5-xen-amd64
version                : #1 SMP Sun May 6 08:57:29 UTC 2012
machine                : x86_64
nr_cpus                : 8
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2660
hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 4079
free_cpus              : 0
xen_major              : 4
xen_minor              : 2
xen_extra              : -unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
xen_commandline        : placeholder
cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
xend_config_format     : 4

I will change to a newer version and use the xl toolstack once VGA passthrough is supported.

>And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?

Yes.

>> >>
>>
>> >> The only driver that allows this feature is a Radeon HD driver
>>
>> >> (Catalyst 12.10 WHQL).
>>
>> >>
>>
>> >> But this driver becomes unstable when an application using active
>>
>> >> stereo and synchronization is closed:
>>
>> >>
>>
>> >> -The synchronization between two computers is lost.
>>
>> >>
>>
>> >> -The CCC can crash when the synchronization is made again.
>>
>> >>
>>
>> >> Does anyone have any clues about this?
>>
>> >>
>>
>> >I don't know exactly how this works on AMD/ATI graphics cards, but I
>>
>> >have worked with synchronisation on other graphics cards about 7 years
>>
>> >ago, so I have some idea of how you solve the various problems.
>>
>> >
>>
>> >What I don't quite understand is why it would be different between a
>>
>> >virtual environment and the bare-metal ("native") install. My immediate
>>
>> >guess is that there is a timing difference, for one of three reasons:
>>
>> >1. IOMMU is adding extra delays to the graphics card reading system memory.
>>
>> >2. Interrupt delays due to the hypervisor.
>>
>> >3. Dom0 or other guest domains "stealing" CPU from the guest.
>>
>> >I don't think those are easy to work around (as they all have to
>>
>> >"happen" in a virtual system), but I also don't REALLY understand why
>>
>> >this should cause problems in the first place, as there isn't any
>>
>> >guarantee as to the timings of either memory reads, interrupt
>>
>> >latency/responsiveness or CPU availability in Windows, so the same
>>
>> >problem would appear in native systems as well, given "the right"
>>
>> >circumstances.
>>
>> >
>>
>> >
>>
>> >What exactly is the crash in CCC?
>>
>> >
>>
>> >(CCC stands for "Catalyst Control Center" - which I think is a Windows
>>
>> >"service" to handle certain requests from the driver that can't be done
>>
>> >in kernel mode [or shouldn't be done in the driver in general]).
>>
>> After the application is closed, I launch the Catalyst Control Center;
>> the synchronization state seems to be good, but there is no
>> synchronization.
>>
>> If I try to apply any modification of the synchronization (synchro server
>> or client), CCC freezes and I need to kill it manually.
>>
>> I can set the synchronization back after this.
>>
>
>This clearly sounds like a software issue in the CCC itself. I could be
>wrong, but that's what I think right now. It would be rather difficult to
>figure out what is going wrong without at least a repro environment.

I've run a bunch of tests this morning:

-CCC crashes when I've got two displays and set one to be the synchronization
server and the other a client at the same time. When I set the server, apply
that configuration, and then set the client, it doesn't crash.

-If my application (Virtools) crashes, the synchronization is reset.
-Eyes are sometimes inverted with the same trigger edge.

I've got all these behaviors with both the HVM and the native installation
under Windows 7 64-bit, so I think it's clearly a software issue.

Next step: Windows 7 32-bit.

>Whilst I'm all for using Xen for everything, there are sometimes situations
>when "not using Xen" may actually be the right choice. Can you explain why
>running your guests in Xen is of benefit? [If you'd like to answer "none of
>your business", that's fine, but it may help to understand what the
>"business case" is for this.]

The objective is to mutualize a graphical cluster for immersive systems.
Virtual Reality applications are sensitive to their configuration; it's a
pain to manage multiple users, and it's nearly impossible to have different
configurations for these users. Usually immersive systems are stuck in one
configuration (OS, drivers, applications ...), and only a few people are
allowed to change settings.

The idea is to use Xen and VGA passthrough to create personal environments
that allow every user to make their own configuration without impacting
others.


Being able to have VR configurations in virtual machines and to run them
with 3D features is a serious benefit for Virtual Reality users.

Aurélien
--
Mats
>
> I will try next week with other computers.
>
> Thanks for the reply,
>
> Aurelien
>
> --
>
> Mats
> > Thanks,
> > Aurelien


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 14:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 14:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4CR-0003Ms-Pf; Mon, 10 Dec 2012 14:19:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Ti4CQ-0003Mn-9x
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 14:19:14 +0000
Received: from [85.158.138.51:59328] by server-1.bemta-3.messagelabs.com id
	75/66-12169-16FE5C05; Mon, 10 Dec 2012 14:19:13 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355149151!28222523!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15063 invoked from network); 10 Dec 2012 14:19:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 14:19:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="105708"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 14:19:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 09:19:10 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Ti4CM-0001ww-B6;
	Mon, 10 Dec 2012 14:19:10 +0000
Message-ID: <50C5EF5D.5050307@eu.citrix.com>
Date: Mon, 10 Dec 2012 14:19:09 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: M A <vip2442@gmail.com>, xen-devel <xen-devel@lists.xen.org>
References: <CAPm+FyWYQ39m-ByTRLmph6e4HpoPe_TwjfDyK7FDrEng8pfqqg@mail.gmail.com>
In-Reply-To: <CAPm+FyWYQ39m-ByTRLmph6e4HpoPe_TwjfDyK7FDrEng8pfqqg@mail.gmail.com>
Subject: Re: [Xen-devel] how to make xenalyze continuously reading from
 xentrace file ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/12/12 06:02, M A wrote:
> Hi There,
>
> I'm a student at NYIT, New York, and I have a project on the Xen
> hypervisor. Currently I'm working with xentrace and xenalyze to monitor
> runstate events. What I'm trying to do now is: while I'm capturing with
> xentrace, I want to display the result on the screen using xenalyze. I
> tried to use a loop in the main function, but I got errors, and when I
> tried to solve those errors I got more errors.
>
> int main(int argc, char *argv[]) {
>      /* Start with warn at stderr. */
>
>      warn = stderr;
>
>      argp_parse(&parser_def, argc, argv, 0, NULL, NULL);
>
>      if (G.trace_file == NULL)
>          exit(1);
>
>      if ( (G.fd = open(G.trace_file, O_RDONLY|O_LARGEFILE)) < 0) {
>          perror("open");
>          error(ERR_SYSTEM, NULL);
>      } else {
>          struct stat64 s;
>          fstat64(G.fd, &s);
>          G.file_size = s.st_size;
>      }
>
>      if ( (G.mh = mread_init(G.fd)) == NULL )
>          perror("mread");
>
>      if (G.symbol_file != NULL)
>          parse_symbol_file(G.symbol_file);
>
>      if(opt.dump_all)
>          warn = stdout;
>
> for (int i=0;i<=1000;i++){
>      init_pcpus();
>
>
>      if(opt.progress)
>          progress_init();
>
>      process_records();
>
>      if(opt.interval_mode)
>          interval_tail();
>
>      if(opt.summary)
>          summary();
>
>      if(opt.report_pcpu)
>          report_pcpu();
> sleep(2);
>
> }
>      if(opt.progress)
>          progress_finish();
>
>      return 0;
> }
>
>
> *Error: vcpu_next_update: FATAL: p->current not NULL! (d32768v0,
> runstate running)*
>
>
> How can I make xenalyze continuously read from the xentrace output file?

(Adding xen-devel, since this is definitely a coding question)

So I take it what you're doing is this:
1. Start xentrace:
  # xentrace -e all /tmp/foo.trace &
2. Run your "looping" xenalyze on it:
  # xenalyze -s /tmp/foo.trace

Is that correct?

I don't know what your exact problem is here, but one problem you'll run
into eventually is that xenalyze expects a certain "finished" file format,
and there's nothing here to synchronize xenalyze's reading of the file with
xentrace's writing of it.  The result is that at some point you're bound to
read a file whose end is only half-written.
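One way around that race, as a sketch rather than anything xenalyze or xentrace provides, is to re-run the analysis only after the trace file has stopped growing for a while. `wait_until_stable` and its parameters are invented names for illustration:

```python
# Sketch: wait until a file's size has been stable for `settle` seconds
# before treating it as (probably) safe to parse.
import os
import time

def wait_until_stable(path, settle=2.0, poll=0.5, timeout=60.0):
    """Return True once `path` stops growing for `settle` seconds,
    or False if `timeout` seconds pass without that happening."""
    deadline = time.time() + timeout
    last_size, stable_since = -1, time.time()
    while time.time() < deadline:
        size = os.path.getsize(path)
        if size != last_size:
            # File grew (or this is the first check): restart the clock.
            last_size, stable_since = size, time.time()
        elif time.time() - stable_since >= settle:
            return True
        time.sleep(poll)
    return False
```

Once this returns True, running `xenalyze -s` on a copy of the file avoids parsing a tail that xentrace is still appending to. It is only a heuristic, though: a quiet trace buffer between flushes looks the same as a finished file.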

In any case, what you're doing here is functionally not really that 
different from just doing it in bash:
  # while xenalyze -s /tmp/foo.trace && sleep 2; do true; done

  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 14:29:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 14:29:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4MH-0003YE-TY; Mon, 10 Dec 2012 14:29:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Ti4MF-0003Y9-Nu
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 14:29:24 +0000
Received: from [193.109.254.147:24586] by server-16.bemta-14.messagelabs.com
	id 54/12-09215-3C1F5C05; Mon, 10 Dec 2012 14:29:23 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355149760!9592793!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4364 invoked from network); 10 Dec 2012 14:29:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 14:29:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="107041"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 14:29:20 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 09:29:19 -0500
Message-ID: <50C5F1BF.1060604@citrix.com>
Date: Mon, 10 Dec 2012 14:29:19 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <36774CA35642C143BCDE93BA0C68DC5702C53B78@dulac>
In-Reply-To: <36774CA35642C143BCDE93BA0C68DC5702C53B78@dulac>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/12 14:11, Aurélien MILLIAT wrote:
> On 07/12/12 16:51, Aurélien MILLIAT wrote:
>>>>> Hi all,
>>>>> I have made some tests to find a good driver for the FirePro V8800 on
>>>>> Windows 7 64-bit HVM.
>>>>> I have been focused on "advanced features": quad buffer and active
>>>>> stereoscopy, synchronization ...
>>>>> The result, for all FirePro drivers (of this year): I can't get
>>>>> the quad buffer/active stereoscopy feature.
>>>>> But they work on a native installation.
>>>> Can you describe the setup a little more?
>>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>>
>>> It's a setup used in a CAVE system; I try (and it works, minus some
>>> issues) to virtualize "virtual reality contexts" that need full
>>> graphics card features.
>>>
>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>
>>> cores_per_socket : 4
>>>
>>> threads_per_core : 2
>>>
>>> cpu_mhz : 2660
>>>
>>> total_memory : 4079
>>>
>>>> How many graphic cards per guest?
>>> One card per guest.
>>>
>>>> How many guests? On how many hosts?
>>> One guest per computer.
>>>
>> And of course, I just thought of some other questions:
>> What version of Xen are you using?
>> What kernel are you using in Dom0?
> release                : 2.6.32-5-xen-amd64
> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
> machine                : x86_64
> nr_cpus                : 8
> nr_nodes               : 1
> cores_per_socket       : 4
> threads_per_core       : 2
> cpu_mhz                : 2660
> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
> virt_caps              : hvm hvm_directio
> total_memory           : 4079
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 2
> xen_extra              : -unstable
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
> xen_commandline        : placeholder
> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
> xend_config_format     : 4
>
> I will change to a newer version and use the xl toolstack when VGA
> passthrough is supported.
>
>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest
>> on each machine?
> Yes
>
>>>>> The only driver that allows this feature is a Radeon HD driver
>>>>> (Catalyst 12.10 WHQL).
>>>>> But this driver becomes unstable when an application using active
>>>>> stereo and synchronization is closed:
>>>>> -The synchronization between two computers is lost.
>>>>> -The CCC can crash when the synchronization is made again.
>>>>> Does anyone have any clues about this?
>>>> I don't know exactly how this works on AMD/ATI graphics cards, but I
>>>> have worked with synchronisation on other graphics cards about 7
>>>> years
>>>> ago, so I have some idea of how you solve the various problems.
>>>> What I don't quite understand is why it would be different between a
>>>> virtual environment and the bare-metal ("native") install. My
>>>> immediate
>>>> guess is that there is a timing difference, for one of three reasons:
>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>> 2. Interrupt delays due to the hypervisor.
>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>> I don't think those are easy to work around (as they all have to
>>>> "happen" in a virtual system), but I also don't REALLY understand why
>>>> this should cause problems in the first place, as there isn't any
>>>> guarantee as to the timings of either memory reads, interrupt
>>>> latency/responsiveness or CPU availability in Windows, so the same
>>>> problem would appear in native systems as well, given "the right"
>>>> circumstances.
>>>> What exactly is the crash in CCC?
>>>> (CCC stands for "Catalyst Control Center" - which I think is a
>>>> Windows
>>>> "service" to handle certain requests from the driver that can't be
>>>> done
>>>> in kernel mode [or shouldn't be done in the driver in general]).
>>> After the application is closed, I launch the Catalyst Control Center,
>>> the synchronization state seems to be good. But there is no
>>> synchronization.
>>>
>>> If I try to apply any modifications of synchronization (synchro server
>>> or client), CCC freezes and I need to kill it manually.
>>>
>>> I can set the synchronization back after this.
>>>
>> This clearly sounds like a software issue in the CCC itself. I could
>> be wrong, but that's what I think right now. It would be rather
>> difficult to figure out what is going wrong without at least a repro
>> environment.
> I've made a bunch of tests this morning:
> -CCC crashes when I've got two displays: I set one to be the
> synchronization server and the other a client at the same time. When I
> set the server, apply this configuration and set the client after, it
> didn't crash.
> -If my application (Virtools) crashes, synchronization is reset.
> -Eyes are sometimes inverted with the same trigger edge.
I saw that problem with the product I was working on once or twice. It
makes things look really "confusing". This was a settings problem in my
case (because I wrote my own "controls", I could set almost every aspect
of everything that could possibly be changed, with a very basic command
line application that talked pretty much straight to the driver - with
the usual caveat of "make sure you know what you are doing" - while the
normal GUI control panel setup was much more "you can only set things
that make sense for you to set"). That is probably not really what your
problem is... but it could be a driver or application configuration
issue, of course.

>
> I've got all these behaviors with both HVM and native installations
> under 7 64-bit. So I think it's clearly a software issue.
>
> Next step: 7 32-bit.
So, this is not a Xen issue... Report it to the ATI/AMD folks!

>
>> Whilst I'm all for using Xen for everything, there are sometimes
>> situations when "not using Xen" may actually be the right choice. Can
>> you explain why running your guests in Xen is of benefit? [If you'd
>> like to answer "none of your business", that's fine, but it may help
>> to understand what the "business case" is for this.]
> The objective is to mutualize a graphical cluster for immersive
> systems. Virtual Reality applications are sensitive in their
> configurations; it's a pain to manage multiple users, and it's nearly
> impossible to have different configurations for these users. Usually
> immersive systems are stuck in one configuration (OS, drivers,
> applications ...), and only a few people are allowed to change
> settings.
> The idea is to use Xen and VGA passthrough to create personal
> environments that allow every user to make their own configuration
> without impact on the others.
>
> Being able to have VR configurations in virtual machines, and to run
> them with 3D features, is a serious benefit for Virtual Reality users.

Thanks for your explanation. It makes some sense; however, I feel that
it also makes things more complex - if the system is so sensitive, it
may get "upset" simply by the differences in system behaviour that you
automatically get from running on a virtual machine vs. "bare metal".
Don't let that stop you, I'm just saying there may be issues caused by
Xen (or other virtualisation products) not being quite as transparent
as they really should be.

--
Mats

>
> Aurélien
> --
> Mats
>> I will try next week with other computers.
>>
>> Thanks for the reply,
>>
>> Aurelien
>>
>> --
>>
>> Mats
>>> Thanks,
>>> Aurelien
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 14:57:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 14:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4nc-0004K2-7K; Mon, 10 Dec 2012 14:57:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <asadrupucit2006@gmail.com>) id 1Ti4nb-0004Ju-8a
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 14:57:39 +0000
Received: from [85.158.143.35:24315] by server-2.bemta-4.messagelabs.com id
	65/B4-30861-268F5C05; Mon, 10 Dec 2012 14:57:38 +0000
X-Env-Sender: asadrupucit2006@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1355148175!14015551!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27287 invoked from network); 10 Dec 2012 14:02:56 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 14:02:56 -0000
Received: by mail-ob0-f173.google.com with SMTP id xn12so2867032obc.32
	for <xen-devel@lists.xen.org>; Mon, 10 Dec 2012 06:02:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=HUmSDc7bGB5S3cAdTGffuCs6IyUjL1EEf9cJNQH5yE8=;
	b=tdT0kC4Yxahx5YoqKDYeqcHRv5WUIcGbDxgjZ8B+/G10xaF/o02Nq+YTGnGK79MYIi
	ggROVY1QgHrmWzxd7HBQTWxfJS+OKYaxOaCRNluEUvcAIYfPEAmgDFlF7FSmP76n4u6I
	jCiyCcPri+f7s1Hb1FzkgCkomI5zI+sdSzylKma2P9rvYJoDOcGavAIC47MVwmNLtCpG
	xFjtsGygeB99zl7sCzyeoMVKVdHygRX1haRsTD6Z1Yoi6SrOQDIwI0tZbDcqKwsOwtPz
	SqqhbXLmKbKte0d0389fxK6FLtPnukuMK0xjhf0hPCvIE7EuGBZ+eItU3UaeUXqINY9y
	2I8A==
MIME-Version: 1.0
Received: by 10.60.172.178 with SMTP id bd18mr7726580oec.133.1355148174722;
	Mon, 10 Dec 2012 06:02:54 -0800 (PST)
Received: by 10.60.20.3 with HTTP; Mon, 10 Dec 2012 06:02:54 -0800 (PST)
Date: Mon, 10 Dec 2012 19:02:54 +0500
Message-ID: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
From: asad raza <asadrupucit2006@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] why we need to convert "mfn_to_page(smfn)" in
	page_alloc.c?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

void init_domheap_pages(paddr_t ps, paddr_t pe)
{
    unsigned long smfn, emfn;

    ASSERT(!in_irq());

    smfn = round_pgup(ps) >> PAGE_SHIFT;
    emfn = round_pgdown(pe) >> PAGE_SHIFT;

    init_heap_pages(mfn_to_page(smfn), emfn - smfn);
}

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:03:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4tT-0004kD-0o; Mon, 10 Dec 2012 15:03:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti4tR-0004k2-FL
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:03:41 +0000
Received: from [85.158.138.51:22477] by server-6.bemta-3.messagelabs.com id
	5D/C9-28265-CC9F5C05; Mon, 10 Dec 2012 15:03:40 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355151819!20311773!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22052 invoked from network); 10 Dec 2012 15:03:39 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-12.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 15:03:39 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:58743 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti4xD-0003i8-UK; Mon, 10 Dec 2012 16:07:36 +0100
Date: Mon, 10 Dec 2012 16:03:32 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <101480918.20121210160332@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <341064135.20121209223602@eikelenboom.it>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
	<341064135.20121209223602@eikelenboom.it>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 9, 2012, 10:36:02 PM, you wrote:


> Sunday, December 9, 2012, 4:06:37 PM, you wrote:

>> Hi Ian,

>> I guess this issue is similar with this one 
>> http://comments.gmane.org/gmane.linux.network/236358. And netfront also 
>> needs to reserve some tail room for IP/TCP headers too?

> Hi Annie,
> Thanks for digging this up !

> That looks indeed remarkably similar. It's probably revealed by the other netfront/netback changes in 3.7, because I have never seen it before.
> It also seems to take some time before it gets triggered.
> Only the code in netfront.c is that much different; I don't think I'm able to determine the proper size and suggest a fix.

Hi Ian,

> Why is this being discussed in private mail? Please can you resend to
> xen-devel and/or netdev.

Sorry, I missed the CC to xen-devel on the first post; it wasn't meant to be private in any way.

> I have a vague recollection of a patch to set skb->truesize more
> accurately in xennet_poll (netfront), but I can't seem to find any
> reference to it now.

I tried to gain some extra info from net/core/skbuff.c around the warn.

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3f0636c..a7831b7 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3442,6 +3442,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
        }

        WARN_ON_ONCE(delta < len);
+       if (delta < len) {
+               net_warn_ratelimited("netfront: WARN delta(%d) < len(%d) truesize(%d) SKB_DATA_ALIGN(%d) SKB_TRUESIZE(%d)\n",
+                       delta, len, from->truesize, SKB_DATA_ALIGN(sizeof(struct sk_buff)), SKB_TRUESIZE(skb_end_offset(from)));
+       }

        memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
               skb_shinfo(from)->frags,



This results in:

[ 5557.333423] ------------[ cut here ]------------
[ 5557.333471] WARNING: at net/core/skbuff.c:3444 skb_try_coalesce+0x355/0x3d0()
[ 5557.333495] Modules linked in:
[ 5557.333519] Pid: 1872, comm: apache2 Not tainted 3.7.0-rc8-20121209-netdebug #1
[ 5557.333543] Call Trace:
[ 5557.333556]  <IRQ>  [<ffffffff8106752a>] warn_slowpath_common+0x7a/0xb0
[ 5557.333598]  [<ffffffff81067575>] warn_slowpath_null+0x15/0x20
[ 5557.333621]  [<ffffffff816a2b25>] skb_try_coalesce+0x355/0x3d0
[ 5557.333646]  [<ffffffff8175a049>] tcp_try_coalesce+0x69/0xc0
[ 5557.333669]  [<ffffffff8175a0f4>] tcp_queue_rcv+0x54/0x100
[ 5557.333691]  [<ffffffff8176023f>] ? tcp_transmit_skb+0x7ff/0x8d0
[ 5557.333714]  [<ffffffff8175ebbb>] tcp_rcv_established+0x2bb/0x6a0
[ 5557.333737]  [<ffffffff8176733f>] ? tcp_v4_rcv+0x6cf/0xb10
[ 5557.333758]  [<ffffffff81766925>] tcp_v4_do_rcv+0x135/0x480
[ 5557.333951]  [<ffffffff8180be02>] ? _raw_spin_lock_nested+0x42/0x50
[ 5557.333975]  [<ffffffff8176733f>] ? tcp_v4_rcv+0x6cf/0xb10
[ 5557.334009]  [<ffffffff817675cd>] tcp_v4_rcv+0x95d/0xb10
[ 5557.334032]  [<ffffffff810b21e8>] ? lock_acquire+0xd8/0x100
[ 5557.334055]  [<ffffffff81743d95>] ? ip_local_deliver_finish+0x45/0x230
[ 5557.334081]  [<ffffffff81743e6a>] ip_local_deliver_finish+0x11a/0x230
[ 5557.334105]  [<ffffffff81743d95>] ? ip_local_deliver_finish+0x45/0x230
[ 5557.334129]  [<ffffffff81743fb8>] ip_local_deliver+0x38/0x80
[ 5557.334152]  [<ffffffff8174357a>] ip_rcv_finish+0x15a/0x630
[ 5557.334175]  [<ffffffff81743c68>] ip_rcv+0x218/0x300
[ 5557.334197]  [<ffffffff816abf8d>] __netif_receive_skb+0x65d/0x8d0
[ 5557.334220]  [<ffffffff816aba75>] ? __netif_receive_skb+0x145/0x8d0
[ 5557.334244]  [<ffffffff810ae48d>] ? trace_hardirqs_on+0xd/0x10
[ 5557.334268]  [<ffffffff810fafc3>] ? free_hot_cold_page+0x1b3/0x1e0
[ 5557.334294]  [<ffffffff816ae4e8>] netif_receive_skb+0x28/0xf0
[ 5557.334315]  [<ffffffff816a4003>] ? __pskb_pull_tail+0x253/0x340
[ 5557.334343]  [<ffffffff814b3c75>] xennet_poll+0xad5/0xe10
[ 5557.334379]  [<ffffffff816af296>] net_rx_action+0x136/0x260
[ 5557.334403]  [<ffffffff8106f3c1>] ? __do_softirq+0x71/0x1a0
[ 5557.334426]  [<ffffffff8106f419>] __do_softirq+0xc9/0x1a0
[ 5557.334448]  [<ffffffff8180e97c>] call_softirq+0x1c/0x30
[ 5557.334470]  [<ffffffff8100fd95>] do_softirq+0x85/0xf0
[ 5557.334491]  [<ffffffff8106f28e>] irq_exit+0x9e/0xd0
[ 5557.334514]  [<ffffffff813463af>] xen_evtchn_do_upcall+0x2f/0x40
[ 5557.334537]  [<ffffffff8180e9de>] xen_do_hypervisor_callback+0x1e/0x30
[ 5557.334566]  <EOI>  [<ffffffff812b3d30>] ? copy_user_generic_string+0x30/0x40
[ 5557.334607]  [<ffffffff817543ea>] ? tcp_sendmsg+0xafa/0xe10
[ 5557.334632]  [<ffffffff8177a3c9>] ? inet_sendmsg+0xa9/0x100
[ 5557.334654]  [<ffffffff8177a320>] ? inet_autobind+0x70/0x70
[ 5557.334676]  [<ffffffff81697490>] ? sock_destroy_inode+0x40/0x40
[ 5557.334698]  [<ffffffff816975bd>] ? sock_aio_write+0x12d/0x140
[ 5557.334723]  [<ffffffff81144a0b>] ? do_sync_readv_writev+0x9b/0xe0
[ 5557.334750]  [<ffffffff811450bf>] ? do_readv_writev+0xcf/0x1d0
[ 5557.334785]  [<ffffffff811451fe>] ? vfs_writev+0x3e/0x60
[ 5557.334806]  [<ffffffff8114534a>] ? sys_writev+0x5a/0xc0
[ 5557.334826]  [<ffffffff812b537e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[ 5557.334856]  [<ffffffff8180d6e9>] ? system_call_fastpath+0x16/0x1b
[ 5557.334876] ---[ end trace 62036fd3c663553e ]---
[ 5557.335039] skbuff: netfront: WARN delta(18634) < len(18824) truesize(19530) SKB_DATA_ALIGN(256)  SKB_TRUESIZE(896)
[ 6195.800823] skbuff: netfront: WARN delta(18634) < len(18824) truesize(19530) SKB_DATA_ALIGN(256)  SKB_TRUESIZE(896)

I hope it gives some more insight.

--

Sander



> --
> Sander


>> Thanks
>> Annie

>> On 2012-12-9 4:14, Sander Eikelenboom wrote:
>>> Hi All,
>>>
>>> I still seem to hit some network warn in 3.7-rc8+.
>>> It only seems to appear in guests and not in dom0, it happens every once in a while and in multiple guests.
>>>
>>> [  778.846089] ------------[ cut here ]------------
>>> [  778.846107] WARNING: at net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
>>> [  778.846114] Modules linked in:
>>> [  778.846121] Pid: 0, comm: swapper/0 Not tainted 3.7.0-rc8-20121208 #1
>>> [  778.846127] Call Trace:
>>> [  778.846131]  <IRQ>  [<ffffffff8106752a>] warn_slowpath_common+0x7a/0xb0
>>> [  778.846143]  [<ffffffff81067575>] warn_slowpath_null+0x15/0x20
>>> [  778.846149]  [<ffffffff816a2b29>] skb_try_coalesce+0x359/0x390
>>> [  778.846157]  [<ffffffff8175a009>] tcp_try_coalesce+0x69/0xc0
>>> [  778.846163]  [<ffffffff8175a0b4>] tcp_queue_rcv+0x54/0x100
>>> [  778.846168]  [<ffffffff817602ff>] ? tcp_wfree+0x2f/0x140
>>> [  778.846174]  [<ffffffff8175eb7b>] tcp_rcv_established+0x2bb/0x6a0
>>> [  778.846180]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>>> [  778.846186]  [<ffffffff817668e5>] tcp_v4_do_rcv+0x135/0x480
>>> [  778.846192]  [<ffffffff8180bdc2>] ? _raw_spin_lock_nested+0x42/0x50
>>> [  778.846198]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>>> [  778.846203]  [<ffffffff8176758d>] tcp_v4_rcv+0x95d/0xb10
>>> [  778.846209]  [<ffffffff810b21e8>] ? lock_acquire+0xd8/0x100
>>> [  778.846216]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>>> [  778.846222]  [<ffffffff81743e2a>] ip_local_deliver_finish+0x11a/0x230
>>> [  778.846228]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>>> [  778.846284]  [<ffffffff81743f78>] ip_local_deliver+0x38/0x80
>>> [  778.846291]  [<ffffffff8174353a>] ip_rcv_finish+0x15a/0x630
>>> [  778.846297]  [<ffffffff81743c28>] ip_rcv+0x218/0x300
>>> [  778.846303]  [<ffffffff816abf4d>] __netif_receive_skb+0x65d/0x8d0
>>> [  778.846309]  [<ffffffff816aba35>] ? __netif_receive_skb+0x145/0x8d0
>>> [  778.846315]  [<ffffffff810ae48d>] ? trace_hardirqs_on+0xd/0x10
>>> [  778.846322]  [<ffffffff810fafc3>] ? free_hot_cold_page+0x1b3/0x1e0
>>> [  778.846329]  [<ffffffff816ae4a8>] netif_receive_skb+0x28/0xf0
>>> [  778.846334]  [<ffffffff816a3fc3>] ? __pskb_pull_tail+0x253/0x340
>>> [  778.846342]  [<ffffffff814b3c75>] xennet_poll+0xad5/0xe10
>>> [  778.846349]  [<ffffffff816af256>] net_rx_action+0x136/0x260
>>> [  778.846355]  [<ffffffff8106f419>] __do_softirq+0xc9/0x1a0
>>> [  778.846361]  [<ffffffff8180e93c>] call_softirq+0x1c/0x30
>>> [  778.846368]  [<ffffffff8100fd95>] do_softirq+0x85/0xf0
>>> [  778.846373]  [<ffffffff8106f28e>] irq_exit+0x9e/0xd0
>>> [  778.846380]  [<ffffffff813463af>] xen_evtchn_do_upcall+0x2f/0x40
>>> [  778.846386]  [<ffffffff8180e99e>] xen_do_hypervisor_callback+0x1e/0x30
>>> [  778.846391]  <EOI>  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>>> [  778.846401]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>>> [  778.846408]  [<ffffffff81008850>] ? xen_safe_halt+0x10/0x20
>>> [  778.846415]  [<ffffffff810170f0>] ? default_idle+0x40/0x90
>>> [  778.846421]  [<ffffffff810174a6>] ? cpu_idle+0x96/0xf0
>>> [  778.846428]  [<ffffffff817e4c0c>] ? rest_init+0xbc/0xd0
>>> [  778.846433]  [<ffffffff817e4b50>] ? csum_partial_copy_generic+0x170/0x170
>>> [  778.846441]  [<ffffffff81ee7be7>] ? start_kernel+0x390/0x39d
>>> [  778.846447]  [<ffffffff81ee7677>] ? repair_env_string+0x5b/0x5b
>>> [  778.846454]  [<ffffffff81ee7356>] ? x86_64_start_reservations+0x131/0x136
>>> [  778.846461]  [<ffffffff81eea915>] ? xen_start_kernel+0x54e/0x550
>>> [  778.846467] ---[ end trace d13d814dbabaca0e ]---
>>>
>>>
>>> --
>>> Sander
>>>
>>>   






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:03:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4tT-0004kD-0o; Mon, 10 Dec 2012 15:03:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti4tR-0004k2-FL
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:03:41 +0000
Received: from [85.158.138.51:22477] by server-6.bemta-3.messagelabs.com id
	5D/C9-28265-CC9F5C05; Mon, 10 Dec 2012 15:03:40 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355151819!20311773!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22052 invoked from network); 10 Dec 2012 15:03:39 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-12.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 15:03:39 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:58743 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti4xD-0003i8-UK; Mon, 10 Dec 2012 16:07:36 +0100
Date: Mon, 10 Dec 2012 16:03:32 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <101480918.20121210160332@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <341064135.20121209223602@eikelenboom.it>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
	<341064135.20121209223602@eikelenboom.it>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 9, 2012, 10:36:02 PM, you wrote:


> Sunday, December 9, 2012, 4:06:37 PM, you wrote:

>> Hi Ian,

>> I guess this issue is similar with this one 
>> http://comments.gmane.org/gmane.linux.network/236358. And netfront also 
>> needs to reserve some tail room for IP/TCP headers too?

> Hi Annie,
> Thanks for digging this up !

> That looks indeed remarkably similar. It's probably revealed by the other netfront/netback changes in 3.7, because I have never seen it before.
> It also seems to take some time before it gets triggered.
> But the code in netfront.c is so different that I don't think I'm able to determine the proper size and suggest a fix.

Hi Ian,

> Why is this being discussed in private mail? Please can you resend to
> xen-devel and/or netdev.

Sorry, I missed the CC to xen-devel on the first post; it wasn't meant to be private in any way.

> I have a vague recollection of a patch to set skb->truesize more
> accurately in xennet_poll (netfront), but I can't seem to find any
> reference to it now.

I tried to gain some extra info from net/core/skbuff.c around the warn.

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3f0636c..a7831b7 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3442,6 +3442,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
        }

        WARN_ON_ONCE(delta < len);
+       if (delta < len) {
+               net_warn_ratelimited("netfront: WARN delta(%d) < len(%d) truesize(%d) SKB_DATA_ALIGN(%d)  SKB_TRUESIZE(%d) \n",
+                               delta, len, from->truesize, SKB_DATA_ALIGN(sizeof(struct sk_buff)), SKB_TRUESIZE(skb_end_offset(from)));
+       }

        memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
               skb_shinfo(from)->frags,



This results in:

[ 5557.333423] ------------[ cut here ]------------
[ 5557.333471] WARNING: at net/core/skbuff.c:3444 skb_try_coalesce+0x355/0x3d0()
[ 5557.333495] Modules linked in:
[ 5557.333519] Pid: 1872, comm: apache2 Not tainted 3.7.0-rc8-20121209-netdebug #1
[ 5557.333543] Call Trace:
[ 5557.333556]  <IRQ>  [<ffffffff8106752a>] warn_slowpath_common+0x7a/0xb0
[ 5557.333598]  [<ffffffff81067575>] warn_slowpath_null+0x15/0x20
[ 5557.333621]  [<ffffffff816a2b25>] skb_try_coalesce+0x355/0x3d0
[ 5557.333646]  [<ffffffff8175a049>] tcp_try_coalesce+0x69/0xc0
[ 5557.333669]  [<ffffffff8175a0f4>] tcp_queue_rcv+0x54/0x100
[ 5557.333691]  [<ffffffff8176023f>] ? tcp_transmit_skb+0x7ff/0x8d0
[ 5557.333714]  [<ffffffff8175ebbb>] tcp_rcv_established+0x2bb/0x6a0
[ 5557.333737]  [<ffffffff8176733f>] ? tcp_v4_rcv+0x6cf/0xb10
[ 5557.333758]  [<ffffffff81766925>] tcp_v4_do_rcv+0x135/0x480
[ 5557.333951]  [<ffffffff8180be02>] ? _raw_spin_lock_nested+0x42/0x50
[ 5557.333975]  [<ffffffff8176733f>] ? tcp_v4_rcv+0x6cf/0xb10
[ 5557.334009]  [<ffffffff817675cd>] tcp_v4_rcv+0x95d/0xb10
[ 5557.334032]  [<ffffffff810b21e8>] ? lock_acquire+0xd8/0x100
[ 5557.334055]  [<ffffffff81743d95>] ? ip_local_deliver_finish+0x45/0x230
[ 5557.334081]  [<ffffffff81743e6a>] ip_local_deliver_finish+0x11a/0x230
[ 5557.334105]  [<ffffffff81743d95>] ? ip_local_deliver_finish+0x45/0x230
[ 5557.334129]  [<ffffffff81743fb8>] ip_local_deliver+0x38/0x80
[ 5557.334152]  [<ffffffff8174357a>] ip_rcv_finish+0x15a/0x630
[ 5557.334175]  [<ffffffff81743c68>] ip_rcv+0x218/0x300
[ 5557.334197]  [<ffffffff816abf8d>] __netif_receive_skb+0x65d/0x8d0
[ 5557.334220]  [<ffffffff816aba75>] ? __netif_receive_skb+0x145/0x8d0
[ 5557.334244]  [<ffffffff810ae48d>] ? trace_hardirqs_on+0xd/0x10
[ 5557.334268]  [<ffffffff810fafc3>] ? free_hot_cold_page+0x1b3/0x1e0
[ 5557.334294]  [<ffffffff816ae4e8>] netif_receive_skb+0x28/0xf0
[ 5557.334315]  [<ffffffff816a4003>] ? __pskb_pull_tail+0x253/0x340
[ 5557.334343]  [<ffffffff814b3c75>] xennet_poll+0xad5/0xe10
[ 5557.334379]  [<ffffffff816af296>] net_rx_action+0x136/0x260
[ 5557.334403]  [<ffffffff8106f3c1>] ? __do_softirq+0x71/0x1a0
[ 5557.334426]  [<ffffffff8106f419>] __do_softirq+0xc9/0x1a0
[ 5557.334448]  [<ffffffff8180e97c>] call_softirq+0x1c/0x30
[ 5557.334470]  [<ffffffff8100fd95>] do_softirq+0x85/0xf0
[ 5557.334491]  [<ffffffff8106f28e>] irq_exit+0x9e/0xd0
[ 5557.334514]  [<ffffffff813463af>] xen_evtchn_do_upcall+0x2f/0x40
[ 5557.334537]  [<ffffffff8180e9de>] xen_do_hypervisor_callback+0x1e/0x30
[ 5557.334566]  <EOI>  [<ffffffff812b3d30>] ? copy_user_generic_string+0x30/0x40
[ 5557.334607]  [<ffffffff817543ea>] ? tcp_sendmsg+0xafa/0xe10
[ 5557.334632]  [<ffffffff8177a3c9>] ? inet_sendmsg+0xa9/0x100
[ 5557.334654]  [<ffffffff8177a320>] ? inet_autobind+0x70/0x70
[ 5557.334676]  [<ffffffff81697490>] ? sock_destroy_inode+0x40/0x40
[ 5557.334698]  [<ffffffff816975bd>] ? sock_aio_write+0x12d/0x140
[ 5557.334723]  [<ffffffff81144a0b>] ? do_sync_readv_writev+0x9b/0xe0
[ 5557.334750]  [<ffffffff811450bf>] ? do_readv_writev+0xcf/0x1d0
[ 5557.334785]  [<ffffffff811451fe>] ? vfs_writev+0x3e/0x60
[ 5557.334806]  [<ffffffff8114534a>] ? sys_writev+0x5a/0xc0
[ 5557.334826]  [<ffffffff812b537e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[ 5557.334856]  [<ffffffff8180d6e9>] ? system_call_fastpath+0x16/0x1b
[ 5557.334876] ---[ end trace 62036fd3c663553e ]---
[ 5557.335039] skbuff: netfront: WARN delta(18634) < len(18824) truesize(19530) SKB_DATA_ALIGN(256)  SKB_TRUESIZE(896)
[ 6195.800823] skbuff: netfront: WARN delta(18634) < len(18824) truesize(19530) SKB_DATA_ALIGN(256)  SKB_TRUESIZE(896)

I hope this gives some more insight.

--

Sander



> --
> Sander


>> Thanks
>> Annie

>> On 2012-12-9 4:14, Sander Eikelenboom wrote:
>>> Hi All,
>>>
>>> I still seem to hit some network warning in 3.7-rc8+.
>>> It only seems to appear in guests and not in dom0; it happens every once in a while and in multiple guests.
>>>
>>> [  778.846089] ------------[ cut here ]------------
>>> [  778.846107] WARNING: at net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
>>> [  778.846114] Modules linked in:
>>> [  778.846121] Pid: 0, comm: swapper/0 Not tainted 3.7.0-rc8-20121208 #1
>>> [  778.846127] Call Trace:
>>> [  778.846131]  <IRQ>  [<ffffffff8106752a>] warn_slowpath_common+0x7a/0xb0
>>> [  778.846143]  [<ffffffff81067575>] warn_slowpath_null+0x15/0x20
>>> [  778.846149]  [<ffffffff816a2b29>] skb_try_coalesce+0x359/0x390
>>> [  778.846157]  [<ffffffff8175a009>] tcp_try_coalesce+0x69/0xc0
>>> [  778.846163]  [<ffffffff8175a0b4>] tcp_queue_rcv+0x54/0x100
>>> [  778.846168]  [<ffffffff817602ff>] ? tcp_wfree+0x2f/0x140
>>> [  778.846174]  [<ffffffff8175eb7b>] tcp_rcv_established+0x2bb/0x6a0
>>> [  778.846180]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>>> [  778.846186]  [<ffffffff817668e5>] tcp_v4_do_rcv+0x135/0x480
>>> [  778.846192]  [<ffffffff8180bdc2>] ? _raw_spin_lock_nested+0x42/0x50
>>> [  778.846198]  [<ffffffff817672ff>] ? tcp_v4_rcv+0x6cf/0xb10
>>> [  778.846203]  [<ffffffff8176758d>] tcp_v4_rcv+0x95d/0xb10
>>> [  778.846209]  [<ffffffff810b21e8>] ? lock_acquire+0xd8/0x100
>>> [  778.846216]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>>> [  778.846222]  [<ffffffff81743e2a>] ip_local_deliver_finish+0x11a/0x230
>>> [  778.846228]  [<ffffffff81743d55>] ? ip_local_deliver_finish+0x45/0x230
>>> [  778.846284]  [<ffffffff81743f78>] ip_local_deliver+0x38/0x80
>>> [  778.846291]  [<ffffffff8174353a>] ip_rcv_finish+0x15a/0x630
>>> [  778.846297]  [<ffffffff81743c28>] ip_rcv+0x218/0x300
>>> [  778.846303]  [<ffffffff816abf4d>] __netif_receive_skb+0x65d/0x8d0
>>> [  778.846309]  [<ffffffff816aba35>] ? __netif_receive_skb+0x145/0x8d0
>>> [  778.846315]  [<ffffffff810ae48d>] ? trace_hardirqs_on+0xd/0x10
>>> [  778.846322]  [<ffffffff810fafc3>] ? free_hot_cold_page+0x1b3/0x1e0
>>> [  778.846329]  [<ffffffff816ae4a8>] netif_receive_skb+0x28/0xf0
>>> [  778.846334]  [<ffffffff816a3fc3>] ? __pskb_pull_tail+0x253/0x340
>>> [  778.846342]  [<ffffffff814b3c75>] xennet_poll+0xad5/0xe10
>>> [  778.846349]  [<ffffffff816af256>] net_rx_action+0x136/0x260
>>> [  778.846355]  [<ffffffff8106f419>] __do_softirq+0xc9/0x1a0
>>> [  778.846361]  [<ffffffff8180e93c>] call_softirq+0x1c/0x30
>>> [  778.846368]  [<ffffffff8100fd95>] do_softirq+0x85/0xf0
>>> [  778.846373]  [<ffffffff8106f28e>] irq_exit+0x9e/0xd0
>>> [  778.846380]  [<ffffffff813463af>] xen_evtchn_do_upcall+0x2f/0x40
>>> [  778.846386]  [<ffffffff8180e99e>] xen_do_hypervisor_callback+0x1e/0x30
>>> [  778.846391]  <EOI>  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>>> [  778.846401]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
>>> [  778.846408]  [<ffffffff81008850>] ? xen_safe_halt+0x10/0x20
>>> [  778.846415]  [<ffffffff810170f0>] ? default_idle+0x40/0x90
>>> [  778.846421]  [<ffffffff810174a6>] ? cpu_idle+0x96/0xf0
>>> [  778.846428]  [<ffffffff817e4c0c>] ? rest_init+0xbc/0xd0
>>> [  778.846433]  [<ffffffff817e4b50>] ? csum_partial_copy_generic+0x170/0x170
>>> [  778.846441]  [<ffffffff81ee7be7>] ? start_kernel+0x390/0x39d
>>> [  778.846447]  [<ffffffff81ee7677>] ? repair_env_string+0x5b/0x5b
>>> [  778.846454]  [<ffffffff81ee7356>] ? x86_64_start_reservations+0x131/0x136
>>> [  778.846461]  [<ffffffff81eea915>] ? xen_start_kernel+0x54e/0x550
>>> [  778.846467] ---[ end trace d13d814dbabaca0e ]---
>>>
>>>
>>> --
>>> Sander
>>>
>>>   






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:04:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4tX-0004kj-Dq; Mon, 10 Dec 2012 15:03:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <matthew.fioravante@jhuapl.edu>) id 1Ti4tV-0004kU-LH
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:03:45 +0000
Received: from [193.109.254.147:26309] by server-2.bemta-14.messagelabs.com id
	AE/F7-20829-0D9F5C05; Mon, 10 Dec 2012 15:03:44 +0000
X-Env-Sender: matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355151807!1743815!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13232 invoked from network); 10 Dec 2012 15:03:31 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Dec 2012 15:03:31 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 5cab_08bd_110552c8_fdc9_42cf_adfc_9ca71d10421c;
	Mon, 10 Dec 2012 10:03:12 -0500
Message-ID: <50C5F9A6.7020408@jhuapl.edu>
Date: Mon, 10 Dec 2012 10:03:02 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50B4D060.9070403@jhuapl.edu>
	<1354029286-17652-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1354029286-17652-2-git-send-email-dgdegra@tycho.nsa.gov>
	<50B76DBB.90504@jhuapl.edu>
	<20121207212536.GE9664@phenom.dumpdata.com>
	<1355133507.31710.104.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355133507.31710.104.camel@zakaz.uk.xensource.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] stubdom: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5814882743302537504=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============5814882743302537504==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070105080003040709080003"

This is a cryptographically signed message in MIME format.

--------------ms070105080003040709080003
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Ok, why don't we go with backend/vtpm2? That seems to be the less
intrusive approach.

On 12/10/2012 04:58 AM, Ian Campbell wrote:
> On Fri, 2012-12-07 at 21:25 +0000, Konrad Rzeszutek Wilk wrote:
>>>> +   snprintf(path, 512, "backend/vtpm/%u/%u/feature-protocol-v2", (unsigned int) tpmif->domid, tpmif->handle);
>>>> +   if ((err = xenbus_write(XBT_NIL, path, "1")))
>>>> +   {
>>>> +      /* if we got an error here we should carefully remove the interface and then return */
>>>> +      TPMBACK_ERR("Unable to write feature-protocol-v2 node: %s\n", err);
>>>> +      free(err);
>>>> +      remove_tpmif(tpmif);
>>>> +      goto error_post_irq;
>>>> +   }
>>>> +
>>>> +
>>> My preference is still to do away with the versioning stuff since
>>> tpm is just getting released.
> It is present in the 2.6.18-xen tree and has made its way into distros,
> at least SLES11.
>
>>> It's not even in Linux yet, so there is
>>> no confusion. We can even merge the Linux patches together and
>>> resubmit as one if that's preferable. Konrad, Ian, your final votes
>>> on that?
>> I am up for just removing the versioning stuff - and if one really
>> wants to be fool-proof - rename the 'backend/vtpm' to 'backend/vtpm2'
>> Perhaps?
>>
>



--------------ms070105080003040709080003
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIxMDE1MDMwMlowIwYJKoZIhvcNAQkEMRYEFN7tV2RGTxr+rTXt
wLxoMDTGtDdWMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYAmygT6SIdG+OuPshdDCYIt2P+zuXNSTQIT
ktrhcH25JU6y/90n8qjtr7xZBDeVQeOo/S2i8iI3iqEtNVIEXvzl3Z3lZHcpPVpYSX7FPMiu
UL8O62kwBpidqxBiq5y0CKL2EuPC9/s4uojESIE7hcPOm5GBmoFZAaYxzHhV5JHz3gAAAAAA
AA==
--------------ms070105080003040709080003--


--===============5814882743302537504==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5814882743302537504==--


From xen-devel-bounces@lists.xen.org Mon Dec 10 15:07:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4wg-000596-TI; Mon, 10 Dec 2012 15:07:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti4we-00058o-Sx
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:07:00 +0000
Received: from [85.158.143.99:59933] by server-3.bemta-4.messagelabs.com id
	77/05-18211-49AF5C05; Mon, 10 Dec 2012 15:07:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355152019!19066267!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18800 invoked from network); 10 Dec 2012 15:06:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 15:06:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="36983"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 15:07:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 15:06:59 +0000
Message-ID: <1355152017.21160.34.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: asad raza <asadrupucit2006@gmail.com>
Date: Mon, 10 Dec 2012 15:06:57 +0000
In-Reply-To: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
References: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] why we need to convert "mfn_to_page(smfn)" in
 page_alloc.c?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Once again please read
http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions

I'm afraid that very few people are going to be able to spend the time
spoon-feeding you through the code without some indication as to why
they should invest that time in you.

Ian.

On Mon, 2012-12-10 at 14:02 +0000, asad raza wrote:
> void init_domheap_pages(paddr_t ps, paddr_t pe)
> {
>     unsigned long smfn, emfn;
> 
>     ASSERT(!in_irq());
> 
>     smfn = round_pgup(ps) >> PAGE_SHIFT;
>     emfn = round_pgdown(pe) >> PAGE_SHIFT;
> 
>     init_heap_pages(mfn_to_page(smfn), emfn - smfn);
> }
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 15:08:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4xt-0005J9-Cl; Mon, 10 Dec 2012 15:08:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti4xs-0005Iu-6y
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:08:16 +0000
Received: from [85.158.143.35:18355] by server-2.bemta-4.messagelabs.com id
	E3/55-30861-FDAF5C05; Mon, 10 Dec 2012 15:08:15 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355152093!15040855!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 310 invoked from network); 10 Dec 2012 15:08:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 15:08:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="112606"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 15:08:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 10:08:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti4xo-0002mi-6Z;
	Mon, 10 Dec 2012 15:08:12 +0000
Date: Mon, 10 Dec 2012 15:08:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: asad raza <asadrupucit2006@gmail.com>
In-Reply-To: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212101506350.1839@kaball.uk.xensource.com>
References: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] why we need to convert "mfn_to_page(smfn)" in
 page_alloc.c?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, asad raza wrote:
> void init_domheap_pages(paddr_t ps, paddr_t pe)
> {
>     unsigned long smfn, emfn;
> 
>     ASSERT(!in_irq());
> 
>     smfn = round_pgup(ps) >> PAGE_SHIFT;
>     emfn = round_pgdown(pe) >> PAGE_SHIFT;
> 
>     init_heap_pages(mfn_to_page(smfn), emfn - smfn);
> }

if you look at the definitions of init_heap_pages and mfn_to_page:

void init_heap_pages(struct page_info *pg, unsigned long nr_pages)
#define mfn_to_page(mfn)  (frame_table + (pfn_to_pdx(mfn) - frametable_base_mfn))

you should be able to understand why we need to call mfn_to_page.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 15:09:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti4yx-0005T8-S2; Mon, 10 Dec 2012 15:09:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Ti4yw-0005Sn-Ep
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 15:09:22 +0000
Received: from [85.158.138.51:12187] by server-14.bemta-3.messagelabs.com id
	16/69-31424-12BF5C05; Mon, 10 Dec 2012 15:09:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355152154!28206874!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTA2OTEy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 377 invoked from network); 10 Dec 2012 15:09:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Dec 2012 15:09:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBAF9CHx007270
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Dec 2012 15:09:12 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBAF9Biw015095
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Dec 2012 15:09:12 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBAF9Bmj005146; Mon, 10 Dec 2012 09:09:11 -0600
Received: from localhost.localdomain (/209.118.182.194)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Dec 2012 07:09:11 -0800
Date: Mon, 10 Dec 2012 10:08:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121210150849.GC6955@localhost.localdomain>
References: <20121207170556.GA6165@phenom.dumpdata.com>
	<1355131949.31710.95.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355131949.31710.95.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Linux Kernel Summit 2012 hallway talks - PV MMU, PVH,
 hpa, tglrx, stefano and me.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 10, 2012 at 09:32:29AM +0000, Ian Campbell wrote:
> On Fri, 2012-12-07 at 17:05 +0000, Konrad Rzeszutek Wilk wrote:
> 
> >    Anyhow, once PVH works - so it can do SMP guests, does
> >    interrupt delivery properly, etc. - we would obsolete the PV MMU
> >    mode in 5 years. This means that arch/x86/xen/p2m.c and arch/x86/xen/mmu.c
> >    along with a host of paravirt interfaces would be #ifdef-ed out.
> >    There would also be a note in the Documentation/deprecate-schedule
> >    pointing that out. If everything time-wise aligns itself, that
> >    means 2013 is when PVH has its debut and will have its kinks worked
> >    out. 2018 is when PV MMU would be obsoleted. The impact is that in
> >    2018 users would need an Intel VT-d or AMD-Vi IOMMU capable machine to run
> >    the latest Linux dom0 kernel with device drivers on x86.
> >    You would still be able to run the ancient PV kernels (like 2.6.18) as
> >    guests - just not as a dom0.
> 
> I'm not sure I follow -- why does this future change in mainline Linux
> have any impact on other kernel trees and their ability to run as dom0?

It does not. Thank you for catching that.
The last sentence should not have 'just not as dom0'.

> 
> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 15:12:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti51t-0005n8-FL; Mon, 10 Dec 2012 15:12:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti51s-0005mu-09
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:12:24 +0000
Received: from [85.158.143.35:40532] by server-1.bemta-4.messagelabs.com id
	27/51-28401-7DBF5C05; Mon, 10 Dec 2012 15:12:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355152339!12316610!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7787 invoked from network); 10 Dec 2012 15:12:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 15:12:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="37191"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 15:12:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 15:12:19 +0000
Message-ID: <1355152338.21160.37.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Mon, 10 Dec 2012 15:12:18 +0000
In-Reply-To: <101480918.20121210160332@eikelenboom.it>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com> <341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
 net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I wrote
> > I have a vague recollection of a patch to set skb->truesize more
> > accurately in xennet_poll (netfront), but I can't seem to find any
> > reference to it now.

I finally found the following in my git tree. Looks like I never sent it
out.

Does it help?

8<--------------------

>From 788ba317fa241512be7a8630b1b58e53faff83ed Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 22 Aug 2012 11:55:31 +0100
Subject: [PATCH] xen/netfront: improve truesize tracking

Fixes WARN_ON from skb_try_coalesce.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 drivers/net/xen-netfront.c |   15 +++++----------
 net/core/skbuff.c          |    2 +-
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index caa0110..b06ef81 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -971,17 +971,12 @@ err:
 		 * overheads. Here, we add the size of the data pulled
 		 * in xennet_fill_frags().
 		 *
-		 * We also adjust for any unused space in the main
-		 * data area by subtracting (RX_COPY_THRESHOLD -
-		 * len). This is especially important with drivers
-		 * which split incoming packets into header and data,
-		 * using only 66 bytes of the main data area (see the
-		 * e1000 driver for example.)  On such systems,
-		 * without this last adjustement, our achievable
-		 * receive throughout using the standard receive
-		 * buffer size was cut by 25%(!!!).
+		 * We also adjust for the __pskb_pull_tail done in
+		 * handle_incoming_queue which pulls data from the
+		 * frags into the head area, which is already
+		 * accounted in RX_COPY_THRESHOLD.
 		 */
-		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
+		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
 		skb->len += skb->data_len;
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6e04b1f..941a974 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3439,7 +3439,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 		delta = from->truesize - SKB_TRUESIZE(skb_end_offset(from));
 	}
 
-	WARN_ON_ONCE(delta < len);
+	WARN_ONCE(delta < len, "delta %d < len %d\n", delta, len);
 
 	memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
 	       skb_shinfo(from)->frags,
-- 
1.7.2.5




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:12:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti51t-0005n8-FL; Mon, 10 Dec 2012 15:12:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Ti51s-0005mu-09
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:12:24 +0000
Received: from [85.158.143.35:40532] by server-1.bemta-4.messagelabs.com id
	27/51-28401-7DBF5C05; Mon, 10 Dec 2012 15:12:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355152339!12316610!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7787 invoked from network); 10 Dec 2012 15:12:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 15:12:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="37191"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 15:12:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 15:12:19 +0000
Message-ID: <1355152338.21160.37.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Mon, 10 Dec 2012 15:12:18 +0000
In-Reply-To: <101480918.20121210160332@eikelenboom.it>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com> <341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
 net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I wrote
> > I have a vague recollection of a patch to set skb->truesize more
> > accurately in xennet_poll (netfront), but I can't seem to find any
> > reference to it now.

I finally found the following in my git tree. Looks like I never sent it
out.

Does it help?

8<--------------------

>From 788ba317fa241512be7a8630b1b58e53faff83ed Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 22 Aug 2012 11:55:31 +0100
Subject: [PATCH] xen/netfront: improve truesize tracking

Fixes WARN_ON from skb_try_coalesce.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 drivers/net/xen-netfront.c |   15 +++++----------
 net/core/skbuff.c          |    2 +-
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index caa0110..b06ef81 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -971,17 +971,12 @@ err:
 		 * overheads. Here, we add the size of the data pulled
 		 * in xennet_fill_frags().
 		 *
-		 * We also adjust for any unused space in the main
-		 * data area by subtracting (RX_COPY_THRESHOLD -
-		 * len). This is especially important with drivers
-		 * which split incoming packets into header and data,
-		 * using only 66 bytes of the main data area (see the
-		 * e1000 driver for example.)  On such systems,
-		 * without this last adjustement, our achievable
-		 * receive throughout using the standard receive
-		 * buffer size was cut by 25%(!!!).
+		 * We also adjust for the __pskb_pull_tail done in
+		 * handle_incoming_queue which pulls data from the
+		 * frags into the head area, which is already
+		 * accounted in RX_COPY_THRESHOLD.
 		 */
-		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
+		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
 		skb->len += skb->data_len;
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6e04b1f..941a974 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3439,7 +3439,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 		delta = from->truesize - SKB_TRUESIZE(skb_end_offset(from));
 	}
 
-	WARN_ON_ONCE(delta < len);
+	WARN_ONCE(delta < len, "delta %d < len %d\n", delta, len);
 
 	memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
 	       skb_shinfo(from)->frags,
-- 
1.7.2.5




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:14:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti549-0005zi-1B; Mon, 10 Dec 2012 15:14:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti547-0005zY-1S
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:14:43 +0000
Received: from [85.158.139.211:28667] by server-16.bemta-5.messagelabs.com id
	A0/57-09208-26CF5C05; Mon, 10 Dec 2012 15:14:42 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355152481!18341957!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25860 invoked from network); 10 Dec 2012 15:14:41 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-10.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 15:14:41 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:59031 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti57t-0003qd-Op; Mon, 10 Dec 2012 16:18:37 +0100
Date: Mon, 10 Dec 2012 16:14:34 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1779159784.20121210161434@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355152338.21160.37.camel@zakaz.uk.xensource.com>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
	<341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
	<1355152338.21160.37.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 4:12:18 PM, you wrote:

> I wrote
>> > I have a vague recollection of a patch to set skb->truesize more
>> > accurately in xennet_poll (netfront), but I can't seem to find any
>> > reference to it now.

> I finally found the following in my git tree. Looks like I never sent it
> out.

> Does it help?

I will give it a try, but I have no easy means of triggering it, so it could take some time to have some certainty.

Thx !

Sander

> 8<--------------------

> From 788ba317fa241512be7a8630b1b58e53faff83ed Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 22 Aug 2012 11:55:31 +0100
> Subject: [PATCH] xen/netfront: improve truesize tracking

> Fixes WARN_ON from skb_try_coalesce.

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  drivers/net/xen-netfront.c |   15 +++++----------
>  net/core/skbuff.c          |    2 +-
>  2 files changed, 6 insertions(+), 11 deletions(-)

> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index caa0110..b06ef81 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -971,17 +971,12 @@ err:
>                  * overheads. Here, we add the size of the data pulled
>                  * in xennet_fill_frags().
>                  *
> -                * We also adjust for any unused space in the main
> -                * data area by subtracting (RX_COPY_THRESHOLD -
> -                * len). This is especially important with drivers
> -                * which split incoming packets into header and data,
> -                * using only 66 bytes of the main data area (see the
> -                * e1000 driver for example.)  On such systems,
> -                * without this last adjustement, our achievable
> -                * receive throughout using the standard receive
> -                * buffer size was cut by 25%(!!!).
> +                * We also adjust for the __pskb_pull_tail done in
> +                * handle_incoming_queue which pulls data from the
> +                * frags into the head area, which is already
> +                * accounted in RX_COPY_THRESHOLD.
>                  */
> -               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> +               skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
>                 skb->len += skb->data_len;
>  
>                 if (rx->flags & XEN_NETRXF_csum_blank)
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 6e04b1f..941a974 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3439,7 +3439,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
>                 delta = from->truesize - SKB_TRUESIZE(skb_end_offset(from));
>         }
>  
> -       WARN_ON_ONCE(delta < len);
> +       WARN_ONCE(delta < len, "delta %d < len %d\n", delta, len);
>  
>         memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
>                skb_shinfo(from)->frags,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:15:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti55C-00065n-Ep; Mon, 10 Dec 2012 15:15:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Ti55C-00065f-1D
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:15:50 +0000
Received: from [85.158.139.211:48466] by server-2.bemta-5.messagelabs.com id
	F5/A2-16162-5ACF5C05; Mon, 10 Dec 2012 15:15:49 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355152546!18366175!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTA3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8132 invoked from network); 10 Dec 2012 15:15:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 15:15:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="106647"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 15:15:46 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 10:15:45 -0500
Message-ID: <50C5FCA0.9020403@citrix.com>
Date: Mon, 10 Dec 2012 15:15:44 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
In-Reply-To: <CAJ2v2mi8WyW7UiHy4auQbSa_Pyygw73abVK5MV_m+whFTGqGtA@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] why we need to convert "mfn_to_page(smfn)" in
	page_alloc.c?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/12 14:02, asad raza wrote:
> void init_domheap_pages(paddr_t ps, paddr_t pe)
> {
>      unsigned long smfn, emfn;
>
>      ASSERT(!in_irq());
>
>      smfn = round_pgup(ps) >> PAGE_SHIFT;
>      emfn = round_pgdown(pe) >> PAGE_SHIFT;
>
>      init_heap_pages(mfn_to_page(smfn), emfn - smfn);
> }
Because "init_heap_pages" takes a struct page_info rather than a page 
number?

init_heap_pages does convert the start page back to an mfn, but not all 
callers of init_heap_pages have an mfn in the first place, so I guess 
it's just a case of "we have to do this anyway".

--
Mats
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:16:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti55Y-00069F-Rc; Mon, 10 Dec 2012 15:16:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Ti55X-000694-Uq
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:16:12 +0000
Received: from [85.158.139.211:52360] by server-2.bemta-5.messagelabs.com id
	0B/73-16162-BBCF5C05; Mon, 10 Dec 2012 15:16:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355152569!17279897!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTA2OTEy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16540 invoked from network); 10 Dec 2012 15:16:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Dec 2012 15:16:10 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBAFFrXl015186
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Dec 2012 15:15:54 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBAFFqje022301
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Dec 2012 15:15:53 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBAFFp82030067; Mon, 10 Dec 2012 09:15:52 -0600
Received: from localhost.localdomain (/209.118.182.194)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Dec 2012 07:15:51 -0800
Date: Mon, 10 Dec 2012 10:15:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20121210151533.GE6955@localhost.localdomain>
References: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
	<1354630913-17287-2-git-send-email-roger.pau@citrix.com>
	<20121207202003.GA9462@phenom.dumpdata.com>
	<50C5D276.6090009@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C5D276.6090009@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "sfr@canb.auug.org.au" <sfr@canb.auug.org.au>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
 llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 10, 2012 at 01:15:50PM +0100, Roger Pau Monné wrote:
> On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
> > On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
> >> Implement a safe version of llist_for_each_entry, and use it in
> >> blkif_free. Previously grants were freed while iterating the list,
> >> which led to dereferences when trying to fetch the next item.
> >

> > Looks like xen-blkfront is the only user of this llist_for_each_entry.
> >

> > Would it be more prudent to put the macro in the llist.h file?
> =

> I'm not able to find out who the maintainer of llist is; should I just
> CC its author?

Sure. I CC-ed akpm here to solicit his input as well. Either way I am
OK with this being in xen-blkfront, but it just seems that it could
be useful in the llist file, since that is where the non-safe version
resides.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<20121207202003.GA9462@phenom.dumpdata.com>
	<50C5D276.6090009@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C5D276.6090009@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "sfr@canb.auug.org.au" <sfr@canb.auug.org.au>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
 llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 10, 2012 at 01:15:50PM +0100, Roger Pau Monné wrote:
> On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
> > On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
> >> Implement a safe version of llist_for_each_entry, and use it in
> >> blkif_free. Previously grants were freed while iterating the list,
> >> which led to dereferences when trying to fetch the next item.
> >
> > Looks like xen-blkfront is the only user of this llist_for_each_entry.
> >
> > Would it be more prudent to put the macro in the llist.h file?
>
> I'm not able to find out who the maintainer of llist is; should I just
> CC its author?

Sure. I CC-ed akpm here to solicit his input as well. Either way I am
OK with this being in xen-blkfront, but it just seems that it could
be useful in the llist file, since that is where the non-safe version
resides.
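For readers following along, the "safe" iteration pattern under discussion can be sketched as below. This is an illustrative, self-contained version, not kernel code: the `struct grant` type, member names, and the `free_all_grants()` helper are invented here for demonstration. The essential idea is that the next node is cached before the loop body runs, so the body may free the current entry without breaking the traversal.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-ins for the kernel's llist types (illustrative only). */
struct llist_node { struct llist_node *next; };
struct grant { int id; struct llist_node node; };

/* container_of-style accessor, in the spirit of the kernel's llist_entry(). */
#define llist_entry(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Safe variant: 'n' is loaded from pos->member.next BEFORE the body runs,
 * so freeing 'pos' inside the loop cannot break the traversal.
 * Uses the GNU __typeof__ extension, as the kernel does. */
#define llist_for_each_entry_safe(pos, n, head, member)                           \
    for ((pos) = (head) ? llist_entry((head), __typeof__(*(pos)), member) : NULL; \
         (pos) != NULL &&                                                         \
             (((n) = (pos)->member.next                                           \
                   ? llist_entry((pos)->member.next, __typeof__(*(pos)), member)  \
                   : NULL), 1);                                                   \
         (pos) = (n))

/* Frees every entry on the list, mimicking what blkif_free needs to do. */
static int free_all_grants(struct llist_node *head)
{
    struct grant *pos, *n;
    int freed = 0;
    llist_for_each_entry_safe(pos, n, head, node) {
        free(pos);   /* safe: the next pointer was already saved in 'n' */
        freed++;
    }
    return freed;
}
```

With the non-safe `llist_for_each_entry`, the equivalent loop would read `pos->member.next` after `free(pos)`, which is exactly the use-after-free the patch fixes.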

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:49:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:49:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti5b1-0006gp-To; Mon, 10 Dec 2012 15:48:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti5az-0006gk-Nb
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:48:41 +0000
Received: from [193.109.254.147:61795] by server-16.bemta-14.messagelabs.com
	id FF/1E-09215-95406C05; Mon, 10 Dec 2012 15:48:41 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355154515!9960225!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTA3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7135 invoked from network); 10 Dec 2012 15:48:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 15:48:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="111464"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 15:48:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 10:48:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti5as-0003Rd-B3;
	Mon, 10 Dec 2012 15:48:34 +0000
Date: Mon, 10 Dec 2012 15:48:32 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <183697744.20121207215104@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
	<183697744.20121207215104@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
 returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
> Friday, December 7, 2012, 6:24:10 PM, you wrote:
> 
> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
> >> Hi Stefano / Anthony,
> >> 
> >> With the debug output turned on I see some differences between qemu-traditional and qemu-upstream:
> >> 
> >> With the PCI passthrough device that fails with MSI-X, qemu-xen seems to get the same pirq back for every entry
> >> 
> >> in qemu-traditional:
> >> 
> >> pt_msix_update_one: pt_msix_update_one requested pirq = 87
> >> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
> >> pt_msix_update_one: pt_msix_update_one requested pirq = 86
> >> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
> >> pt_msix_update_one: pt_msix_update_one requested pirq = 85
> >> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
> >> pt_msix_update_one: pt_msix_update_one requested pirq = 84
> >> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
> >> 
> >> 
> >> in qemu-xen (upstream):
> >> 
> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)
> 
> > That is a good pointer, but unfortunately the code that parses those
> > entries looks exactly alike in both QEMU trees:
> 
> > qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one
> 
> > if (!gvec) {
> >         /* if gvec is 0, the guest is asking for a particular pirq that
> >          * is passed as dest_id */
> >         pirq = ((gaddr >> 32) & 0xffffff00) |
> >                (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);
> 
> 
> 
> > qemu-xen/hw/xen_pt_msi.c:msi_msix_setup
> 
> > if (gvec == 0) {
> >         /* if gvec is 0, the guest is asking for a particular pirq that
> >          * is passed as dest_id */
> >         *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);
> 
> > given how msi_ext_dest_id and msi_dest_id are defined, they should
> > behave the same way.
> 
> > Maybe adding a printk in msi_msix_setup to show addr would help
> > nonetheless...
> 
> Hi Stefano,
> 
> I have added some printks; attached I have:
> 
> - qemu-upstream.log           boot of the guest with qemu upstream, device not working
> - qemu-traditional.log        boot of the same guest with qemu traditional, device is working
> 
> - xl-dmesg-upstream.txt       part of xl-dmesg related to boot of guest with qemu-upstream
> - xl-dmesg-traditional.txt    part of xl-dmesg related to boot of the same guest with qemu-traditional
> - xl-dmesg.txt                complete xl-dmesg
> 
> - interrupts-dom0.txt         /proc/interrupts of dom0
> - interrupts-upstream.txt     /proc/interrupts of guest with qemu-upstream

Thank you very much!
Could it be that the error is just due to a typo?

---

diff --git a/hw/xen_pt_msi.c b/hw/xen_pt_msi.c
index 6807672..db757cd 100644
--- a/hw/xen_pt_msi.c
+++ b/hw/xen_pt_msi.c
@@ -321,7 +321,7 @@ static int xen_pt_msix_update_one(XenPCIPassthroughState *s, int entry_nr)
 
     pirq = entry->pirq;
 
-    rc = msi_msix_setup(s, entry->data, entry->data, &pirq, true, entry_nr,
+    rc = msi_msix_setup(s, entry->addr, entry->data, &pirq, true, entry_nr,
                         entry->pirq == XEN_PT_UNASSIGNED_PIRQ);
     if (rc) {
         return rc;
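As a sanity check of the equivalence Stefano describes, the two pirq computations quoted above can be compared directly. The sketch below re-implements both expressions standalone; the helper definitions are assumptions reconstructed from the quoted qemu-traditional code (MSI_TARGET_CPU_SHIFT = 12, destination ID in address bits 19:12, extended destination ID in the high word masked with 0xffffff00), not copied from either QEMU tree.

```c
#include <assert.h>
#include <stdint.h>

#define MSI_TARGET_CPU_SHIFT 12  /* MSI address dest-id field starts at bit 12 */

/* pirq extraction in the shape quoted from qemu-xen-traditional's
 * pt_msix_update_one(). */
static uint32_t pirq_traditional(uint64_t gaddr)
{
    return ((gaddr >> 32) & 0xffffff00) |
           (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);
}

/* Assumed shapes of upstream's helpers (hypothetical reconstructions). */
static uint32_t msi_dest_id(uint32_t addr_lo)
{
    return (addr_lo >> MSI_TARGET_CPU_SHIFT) & 0xff;
}

static uint32_t msi_ext_dest_id(uint32_t addr_hi)
{
    return addr_hi & 0xffffff00;
}

/* pirq extraction in the shape quoted from qemu-xen's msi_msix_setup(). */
static uint32_t pirq_upstream(uint64_t addr)
{
    return msi_ext_dest_id(addr >> 32) | msi_dest_id((uint32_t)addr);
}
```

Under these assumed definitions the two computations agree bit for bit for any 64-bit address, which supports the reading that the divergence is not in the parsing itself but in which value gets passed in as the address.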

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:53:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti5fT-0006tU-Ub; Mon, 10 Dec 2012 15:53:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti5fS-0006tM-NA
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:53:18 +0000
Received: from [85.158.137.99:40258] by server-14.bemta-3.messagelabs.com id
	AB/97-31424-96506C05; Mon, 10 Dec 2012 15:53:13 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-5.tower-217.messagelabs.com!1355154792!13844859!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21481 invoked from network); 10 Dec 2012 15:53:13 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-5.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 15:53:13 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:59708 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti5jH-0004Gm-BG; Mon, 10 Dec 2012 16:57:15 +0100
Date: Mon, 10 Dec 2012 16:53:12 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1717959545.20121210165312@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
	<183697744.20121207215104@eikelenboom.it>
	<alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
	returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 4:48:32 PM, you wrote:

> On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
>> Friday, December 7, 2012, 6:24:10 PM, you wrote:
>> 
>> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> >> Hi Stefano / Anthony,
>> >> 
>> >> With the debug output turned on I see some differences between qemu-traditional and qemu-upstream:
>> >> 
>> >> With the PCI passthrough device that fails with MSI-X, qemu-xen seems to get the same pirq back for every entry
>> >> 
>> >> in qemu-traditional:
>> >> 
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 87
>> >> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 86
>> >> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 85
>> >> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 84
>> >> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
>> >> 
>> >> 
>> >> in qemu-xen (upstream):
>> >> 
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)
>> 
>> > That is a good pointer, but unfortunately the code that parses those
>> > entries looks exactly alike in both QEMU trees:
>> 
>> > qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one
>> 
>> > if (!gvec) {
>> >         /* if gvec is 0, the guest is asking for a particular pirq that
>> >          * is passed as dest_id */
>> >         pirq = ((gaddr >> 32) & 0xffffff00) |
>> >                (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);
>> 
>> 
>> 
>> > qemu-xen/hw/xen_pt_msi.c:msi_msix_setup
>> 
>> > if (gvec == 0) {
>> >         /* if gvec is 0, the guest is asking for a particular pirq that
>> >          * is passed as dest_id */
>> >         *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);
>> 
>> > given how msi_ext_dest_id and msi_dest_id are defined, they should
>> > behave the same way.
>> 
>> > Maybe adding a printk in msi_msix_setup to show addr would help
>> > nonetheless...
>> 
>> Hi Stefano,
>> 
>> I have added some printks; attached I have:
>> 
>> - qemu-upstream.log           boot of the guest with qemu upstream, device not working
>> - qemu-traditional.log        boot of the same guest with qemu traditional, device is working
>> 
>> - xl-dmesg-upstream.txt       part of xl-dmesg related to boot of guest with qemu-upstream
>> - xl-dmesg-traditional.txt    part of xl-dmesg related to boot of the same guest with qemu-traditional
>> - xl-dmesg.txt                complete xl-dmesg
>> 
>> - interrupts-dom0.txt         /proc/interrupts of dom0
>> - interrupts-upstream.txt     /proc/interrupts of guest with qemu-upstream

> Thank you very much!
> Could it be that the error is just due to a typo?

Hmmm, I didn't notice that :-)
Must be it... will give it a go!

Thx!

> ---

> diff --git a/hw/xen_pt_msi.c b/hw/xen_pt_msi.c
> index 6807672..db757cd 100644
> --- a/hw/xen_pt_msi.c
> +++ b/hw/xen_pt_msi.c
> @@ -321,7 +321,7 @@ static int xen_pt_msix_update_one(XenPCIPassthroughState *s, int entry_nr)
>  
>      pirq = entry->pirq;
>  
> -    rc = msi_msix_setup(s, entry->data, entry->data, &pirq, true, entry_nr,
> +    rc = msi_msix_setup(s, entry->addr, entry->data, &pirq, true, entry_nr,
>                          entry->pirq == XEN_PT_UNASSIGNED_PIRQ);
>      if (rc) {
>          return rc;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 15:53:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 15:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti5fT-0006tU-Ub; Mon, 10 Dec 2012 15:53:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti5fS-0006tM-NA
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 15:53:18 +0000
Received: from [85.158.137.99:40258] by server-14.bemta-3.messagelabs.com id
	AB/97-31424-96506C05; Mon, 10 Dec 2012 15:53:13 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-5.tower-217.messagelabs.com!1355154792!13844859!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21481 invoked from network); 10 Dec 2012 15:53:13 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-5.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 15:53:13 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:59708 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti5jH-0004Gm-BG; Mon, 10 Dec 2012 16:57:15 +0100
Date: Mon, 10 Dec 2012 16:53:12 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1717959545.20121210165312@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
	<183697744.20121207215104@eikelenboom.it>
	<alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
	returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 4:48:32 PM, you wrote:

> On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
>> Friday, December 7, 2012, 6:24:10 PM, you wrote:
>> 
>> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> >> Hi Stefano / Anthony,
>> >> 
>> >> With the debug output turned on i see some difference between qemu-traditional and qemu-upstream:
>> >> 
>> >> With the pci passthroughed device that fails with msi-x, in qemu-xen it seems to get the same pirq back for every entry
>> >> 
>> >> in qemu-traditional:
>> >> 
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 87
>> >> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 86
>> >> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 85
>> >> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 84
>> >> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
>> >> 
>> >> 
>> >> in qemu-xen (upstream):
>> >> 
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)
>> 
>> > That is a good pointer, but unfortunately the code that parses those
>> > entries look exactly alike in both QEMU trees:
>> 
>> > qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one
>> 
>> > if (!gvec) {
>> >         /* if gvec is 0, the guest is asking for a particular pirq that
>> >          * is passed as dest_id */
>> >         pirq = ((gaddr >> 32) & 0xffffff00) |
>> >                (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);
>> 
>> 
>> 
>> > qemu-xen/hw/xen_pt_msi.c:msi_msix_setup
>> 
>> > if (gvec == 0) {
>> >         /* if gvec is 0, the guest is asking for a particular pirq that
>> >          * is passed as dest_id */
>> >         *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);
>> 
>> > given how msi_ext_dest_id and msi_dest_id are defined, they should
>> > behave the same way.
>> 
>> > Maybe adding a printk in msi_msix_setup to show addr would help
>> > nonetheless...
>> 
>> Hi Stefano,
>> 
>> I have added some printk's; attached I have:
>> 
>> - qemu-upstream.log           boot of the guest with qemu upstream, device not working
>> - qemu-traditional.log        boot of the same guest with qemu traditional, device is working
>> 
>> - xl-dmesg-upstream.txt       part of xl-dmesg related to boot of guest with qemu-upstream
>> - xl-dmesg-traditional.txt    part of xl-dmesg related to boot of same guest with qemu-traditional
>> - xl-dmesg.txt                complete xl-dmesg
>> 
>> - interrupts-dom0.txt         /proc/interrupts of dom0
>> - interrupts-upstream.txt     /proc/interrupts of guest with qemu-upstream

> Thank you very much!
> Could it be that the error is just due to a typo?

Hmm, I didn't notice that :-)
That must be it .. will give it a go!

Thx !

> ---

> diff --git a/hw/xen_pt_msi.c b/hw/xen_pt_msi.c
> index 6807672..db757cd 100644
> --- a/hw/xen_pt_msi.c
> +++ b/hw/xen_pt_msi.c
> @@ -321,7 +321,7 @@ static int xen_pt_msix_update_one(XenPCIPassthroughState *s, int entry_nr)
>  
>      pirq = entry->pirq;
>  
> -    rc = msi_msix_setup(s, entry->data, entry->data, &pirq, true, entry_nr,
> +    rc = msi_msix_setup(s, entry->addr, entry->data, &pirq, true, entry_nr,
>                          entry->pirq == XEN_PT_UNASSIGNED_PIRQ);
>      if (rc) {
>          return rc;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:01:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti5n7-0007aI-7z; Mon, 10 Dec 2012 16:01:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Ti5n5-0007aD-LH
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:01:11 +0000
Received: from [85.158.139.211:2537] by server-9.bemta-5.messagelabs.com id
	2A/EF-10690-64706C05; Mon, 10 Dec 2012 16:01:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355155269!15618552!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29961 invoked from network); 10 Dec 2012 16:01:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 16:01:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 16:01:08 +0000
Message-Id: <50C6155102000078000AF6FB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 16:01:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartA8993351.0__="
Cc: Charles Arnold <CARNOLD@suse.com>
Subject: [Xen-devel] [PATCH] x86/EFI: work around CFLAGS being passed in
 through environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=__PartA8993351.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Short of a solution to the problem described in
http://lists.xen.org/archives/html/xen-devel/2012-12/msg00648.html,
deal with the bad effect this together with c/s 25751:02b4d5fedb7b has
on the EFI build by filtering out the problematic command line items.

Signed-off-by: Charles Arnold <carnold@suse.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/efi/Makefile
+++ b/xen/arch/x86/efi/Makefile
@@ -5,7 +5,7 @@ obj-y += stub.o
 create = test -e $(1) || touch -t 199901010000 $(1)
 
 efi := $(filter y,$(x86_64)$(shell rm -f disabled))
-efi := $(if $(efi),$(shell $(CC) $(filter-out $(CFLAGS-y),$(CFLAGS)) -c check.c 2>disabled && echo y))
+efi := $(if $(efi),$(shell $(CC) $(filter-out $(CFLAGS-y) .%.d,$(CFLAGS)) -c check.c 2>disabled && echo y))
 efi := $(if $(efi),$(shell $(LD) -mi386pep --subsystem=10 -o check.efi check.o 2>disabled && echo y))
 efi := $(if $(efi),$(shell rm disabled)y,$(shell $(call create,boot.init.o); $(call create,runtime.o)))
 




--=__PartA8993351.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartA8993351.0__=--


From xen-devel-bounces@lists.xen.org Mon Dec 10 16:08:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti5uA-0007jS-5K; Mon, 10 Dec 2012 16:08:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti5u8-0007jN-Qt
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 16:08:29 +0000
Received: from [85.158.143.99:25725] by server-1.bemta-4.messagelabs.com id
	AB/5F-28401-CF806C05; Mon, 10 Dec 2012 16:08:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1355155705!28231135!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16499 invoked from network); 10 Dec 2012 16:08:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 16:08:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="122414"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 16:08:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 11:08:24 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti5u3-0003jf-Ui;
	Mon, 10 Dec 2012 16:08:23 +0000
Date: Mon, 10 Dec 2012 16:08:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
In-Reply-To: <20674.5586.142286.869968@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212101556590.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
	<20674.5586.142286.869968@mariner.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "dongxiao.xu@intel.com" <dongxiao.xu@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and
 cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Dec 2012, Ian Jackson wrote:
> Stefano Stabellini writes ("[Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and cpu_ioreq_move"):
> > after reviewing the patch "fix multiply issue for int and uint types"
> > with Ian Jackson, we realized that cpu_ioreq_pio and cpu_ioreq_move are
> > in much need of simplification, as well as removal of a possible
> > integer overflow.
> > 
> > This patch series tries to accomplish both switching to two new helper
> > functions and using more obvious arithmetic. In doing so, it should also
> > fix the original problem that Dongxiao was experiencing. The C language
> > can be a nasty backstabber when signed and unsigned integers are
> > involved.
> 
> I think the attached patch is better as it removes some formulaic
> code.  I don't think I have a guest which can repro the bug so I have
> only compile tested it.
> 
> Dongxiao, would you care to take a look ?
> 
> PS: I'm pretty sure the original overflows aren't security problems.
> 
> Thanks,
> Ian.
> 
> commit d19731e4e452e3415a5c03771d0406efc803baa9
> Author: Ian Jackson <ian.jackson@eu.citrix.com>
> Date:   Fri Dec 7 16:02:04 2012 +0000
> 
>     cpu_ioreq_pio, cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
>     
>     The current code compares i (int) with req->count (uint32_t) in a for
>     loop, risking an infinite loop if req->count is >INT_MAX.  It also
>     does the multiplication of req->size in a too-small type, leading to
>     integer overflows.
>     
>     Turn read_physical and write_physical into two different helper
>     functions, read_phys_req_item and write_phys_req_item, that take care
>     of adding or subtracting offset depending on sign.
>     
>     This moves the formulaic multiplication to a single place where the
>     integer overflows can be dealt with by casting to wide-enough unsigned
>     types.
>     
>     Reported-By: Dongxiao Xu <dongxiao.xu@intel.com>
>     Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>     Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
> index c6d049c..9b8552c 100644
> --- a/i386-dm/helper2.c
> +++ b/i386-dm/helper2.c
> @@ -339,21 +339,40 @@ static void do_outp(CPUState *env, unsigned long addr,
>      }
>  }
>  
> -static inline void read_physical(uint64_t addr, unsigned long size, void *val)
> +/*
> + * Helper functions which read/write an object from/to physical guest
> + * memory, as part of the implementation of an ioreq.
> + *
> + * Equivalent to
> + *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
> + *                          val, req->size, 0/1)
> + * except without the integer overflow problems.
> + */
> +static void rw_phys_req_item(target_phys_addr_t addr,
> +                             ioreq_t *req, uint32_t i, void *val, int rw)
>  {
> -    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 0);
> +    /* Do everything unsigned so overflow just results in a truncated result
> +     * and accesses to undesired parts of guest memory, which is up
> +     * to the guest */
> +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> +    if (req->df) addr -= offset;
> +    else addr -= offset;

This can't be right, can it?

The search/replace changes below look correct.

For the sake of consistency, could you please send a patch against
upstream QEMU to qemu-devel? The corresponding code is in xen-all.c
(cpu_ioreq_pio and cpu_ioreq_move).



> +    cpu_physical_memory_rw(addr, val, req->size, rw);
>  }
> -
> -static inline void write_physical(uint64_t addr, unsigned long size, void *val)
> +static inline void read_phys_req_item(target_phys_addr_t addr,
> +                                      ioreq_t *req, uint32_t i, void *val)
>  {
> -    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 1);
> +    rw_phys_req_item(addr, req, i, val, 0);
> +}
> +static inline void write_phys_req_item(target_phys_addr_t addr,
> +                                       ioreq_t *req, uint32_t i, void *val)
> +{
> +    rw_phys_req_item(addr, req, i, val, 1);
>  }
>  
>  static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>  {
> -    int i, sign;
> -
> -    sign = req->df ? -1 : 1;
> +    uint32_t i;
>  
>      if (req->dir == IOREQ_READ) {
>          if (!req->data_is_ptr) {
> @@ -363,9 +382,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>  
>              for (i = 0; i < req->count; i++) {
>                  tmp = do_inp(env, req->addr, req->size);
> -                write_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                write_phys_req_item((target_phys_addr_t) req->data, req, i, &tmp);
>              }
>          }
>      } else if (req->dir == IOREQ_WRITE) {
> @@ -375,9 +392,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>              for (i = 0; i < req->count; i++) {
>                  unsigned long tmp = 0;
>  
> -                read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->data, req, i, &tmp);
>                  do_outp(env, req->addr, req->size, tmp);
>              }
>          }
> @@ -386,22 +401,16 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>  
>  static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>  {
> -    int i, sign;
> -
> -    sign = req->df ? -1 : 1;
> +    uint32_t i;
>  
>      if (!req->data_is_ptr) {
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
> -                read_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &req->data);
> +                read_phys_req_item(req->addr, req, i, &req->data);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
> -                write_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &req->data);
> +                write_phys_req_item(req->addr, req, i, &req->data);
>              }
>          }
>      } else {
> @@ -409,21 +418,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>  
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
> -                read_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> -                write_physical((target_phys_addr_t )req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->addr, req, i, &tmp);
> +                write_phys_req_item(req->data, req, i, &tmp);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
> -                read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> -                write_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->data, req, i, &tmp);
> +                write_phys_req_item(req->addr, req, i, &tmp);
>              }
>          }
>      }
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> -                  req->size, &tmp);
> -                write_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->data, req, i, &tmp);
> +                write_phys_req_item(req->addr, req, i, &tmp);
>              }
>          }
>      }
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:32:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6HX-0007z4-EO; Mon, 10 Dec 2012 16:32:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Ti6HW-0007yz-6v
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:32:38 +0000
Received: from [85.158.143.99:40347] by server-3.bemta-4.messagelabs.com id
	DA/1B-18211-5AE06C05; Mon, 10 Dec 2012 16:32:37 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355157156!17648340!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17402 invoked from network); 10 Dec 2012 16:32:36 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Dec 2012 16:32:36 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:61095 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Ti6LO-0004ch-F0; Mon, 10 Dec 2012 17:36:38 +0100
Date: Mon, 10 Dec 2012 17:32:35 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <224588152.20121210173235@eikelenboom.it>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
	<183697744.20121207215104@eikelenboom.it>
	<alpine.DEB.2.02.1212101547560.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-unstable] [qemu-xen] pci passthrough msi-x
	returns same pirq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 4:48:32 PM, you wrote:

> On Fri, 7 Dec 2012, Sander Eikelenboom wrote:
>> Friday, December 7, 2012, 6:24:10 PM, you wrote:
>> 
>> > On Thu, 6 Dec 2012, Sander Eikelenboom wrote:
>> >> Hi Stefano / Anthony,
>> >> 
>> >> With the debug output turned on i see some difference between qemu-traditional and qemu-upstream:
>> >> 
>> >> With the pci passthroughed device that fails with msi-x, in qemu-xen it seems to get the same pirq back for every entry
>> >> 
>> >> in qemu-traditional:
>> >> 
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 87
>> >> pt_msix_update_one: Update msix entry 0 with pirq 57 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 86
>> >> pt_msix_update_one: Update msix entry 1 with pirq 56 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 85
>> >> pt_msix_update_one: Update msix entry 2 with pirq 55 gvec 0
>> >> pt_msix_update_one: pt_msix_update_one requested pirq = 84
>> >> pt_msix_update_one: Update msix entry 3 with pirq 54 gvec 0
>> >> 
>> >> 
>> >> in qemu-xen (upstream):
>> >> 
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3057 (entry: 0)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x1)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3056 (entry: 0x1)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x2)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3055 (entry: 0x2)
>> >> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0x3)
>> >> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3054 (entry: 0x3)
>> 
>> > That is a good pointer, but unfortunately the code that parses those
>> > entries look exactly alike in both QEMU trees:
>> 
>> > qemu-xen-traditional/hw/pt-msi.c:pt_msix_update_one
>> 
>> > if (!gvec) {
>> >         /* if gvec is 0, the guest is asking for a particular pirq that
>> >          * is passed as dest_id */
>> >         pirq = ((gaddr >> 32) & 0xffffff00) |
>> >                (((gaddr & 0xffffffff) >> MSI_TARGET_CPU_SHIFT) & 0xff);
>> 
>> 
>> 
>> > qemu-xen/hw/xen_pt_msi.c:msi_msix_setup
>> 
>> > if (gvec == 0) {
>> >         /* if gvec is 0, the guest is asking for a particular pirq that
>> >          * is passed as dest_id */
>> >         *ppirq = msi_ext_dest_id(addr >> 32) | msi_dest_id(addr);
>> 
>> > given how msi_ext_dest_id and msi_dest_id are defined, they should
>> > behave the same way.
>> 
>> > Maybe adding a printk in msi_msix_setup to show addr would help
>> > nonetheless...
>> 
>> Hi Stefano,
>> 
>> I have added some printk's attached i have:
>> 
>> - qemu-upstream.log           boot of the guest with qemu upstream, device not working
>> - qemu-traditional.log        boot of the same guest with qemu traditional, device is working
>> 
>> - xl-dmesg-upstream.txt       part of xl-dmesg related to boot of guest with qemu-upstream
>> - xl-dmesg-traditional.txt    part of xl-dmesg related to boot of same guest with qemu-traditional
>> - xl-dmesg.txt                complete xl-dmesg
>> 
>> - interrupts-dom0.txt         /proc/interrupts of dom0
>> - interrupts-upstream.txt     /proc/interrupts of guest with qemu-upstream

> Thank you very much!
> Could it be that the error is just due to a typo?

Tested-By .. yes it was that simple ;-)
Thx !


> ---

> diff --git a/hw/xen_pt_msi.c b/hw/xen_pt_msi.c
> index 6807672..db757cd 100644
> --- a/hw/xen_pt_msi.c
> +++ b/hw/xen_pt_msi.c
> @@ -321,7 +321,7 @@ static int xen_pt_msix_update_one(XenPCIPassthroughState *s, int entry_nr)
>  
>      pirq = entry->pirq;
>  
> -    rc = msi_msix_setup(s, entry->data, entry->data, &pirq, true, entry_nr,
> +    rc = msi_msix_setup(s, entry->addr, entry->data, &pirq, true, entry_nr,
>                          entry->pirq == XEN_PT_UNASSIGNED_PIRQ);
>      if (rc) {
>          return rc;
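The typo matters because, when gvec is 0, the requested pirq is decoded from the MSI *address* (the dest_id bits), not from the data word, so passing entry->data in both positions makes every entry decode the same value. A rough sketch of the extraction quoted earlier from pt_msix_update_one, assuming MSI_TARGET_CPU_SHIFT is 12 as on x86:

```c
#include <assert.h>
#include <stdint.h>

#define MSI_TARGET_CPU_SHIFT 12 /* x86 MSI address: dest_id in bits 19:12 */

/* Recover the pirq the guest encoded as the MSI destination id,
 * mirroring the decode quoted from pt_msix_update_one above. */
static uint32_t pirq_from_msi_addr(uint64_t gaddr)
{
    return ((gaddr >> 32) & 0xffffff00) |
           (((uint32_t)gaddr >> MSI_TARGET_CPU_SHIFT) & 0xff);
}
```

With the typo, msi_msix_setup decoded entry->data here instead of entry->addr, so the per-entry dest_id bits never reached this extraction.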



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:49:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6Y0-0008Di-Pn; Mon, 10 Dec 2012 16:49:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Ti6Xy-0008DT-JP
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:49:38 +0000
Received: from [85.158.139.211:62678] by server-1.bemta-5.messagelabs.com id
	0A/4A-12813-1A216C05; Mon, 10 Dec 2012 16:49:37 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355158158!19771708!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2542 invoked from network); 10 Dec 2012 16:49:26 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-5.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Dec 2012 16:49:26 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Mon, 10 Dec 2012 17:49:17 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] VGA passthrough and AMD drivers
Thread-Index: Ac3W6q3sZdM1jooZStCS7Hy3uj/+ag==
Date: Mon, 10 Dec 2012 16:49:16 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>>>> Hi all,
>>>>>> I have made some tests to find a good driver for FirePro V8800 on 
>>>>>> windows 7 64bit HVM.
>>>>>> I have been focused on "advanced features": quad buffer and active 
>>>>>> stereoscopy, synchronization ...
>>>>>> The result, for all FirePro drivers (of this year), is that I can't 
>>>>>> get the quad buffer/active stereoscopy feature.
>>>>>> But they work on a native installation.
>>>>> Can you describe the setup a little more?
>>>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>>>
>>>> It's a setup used in a CAVE system; I try (and it works, minus some
>>>> issues) to virtualize "virtual reality contexts" that need full 
>>>> graphics card features.
>>>>
>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>
>>>> cores_per_socket : 4
>>>>
>>>> threads_per_core : 2
>>>>
>>>> cpu_mhz : 2660
>>>>
>>>> total_memory : 4079
>>>>
>>>>> How many graphic cards per guest?
>>>> One card per guest.
>>>>
>>>>> How many guests? On how many hosts?
>>>> One guest per computer.
>>>>
>>> And of course, I just thought of some other questions:
>>> What version of Xen are you using?
>>> What kernel are you using in Dom0?
>> release                : 2.6.32-5-xen-amd64
>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>> machine                : x86_64
>> nr_cpus                : 8
>> nr_nodes               : 1
>> cores_per_socket       : 4
>> threads_per_core       : 2
>> cpu_mhz                : 2660
>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>> virt_caps              : hvm hvm_directio
>> total_memory           : 4079
>> free_cpus              : 0
>> xen_major              : 4
>> xen_minor              : 2
>> xen_extra              : -unstable
>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>> xen_scheduler          : credit
>> xen_pagesize           : 4096
>> platform_params        : virt_start=0xffff800000000000
>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>> xen_commandline        : placeholder
>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>> xend_config_format     : 4
>>
>> I will change to a newer version and use the xl toolstack once VGA passthrough is supported.
>>
>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>> Yes
>>
>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>> (Catalyst 12.10 WHQL).
>>>>>> But this driver becomes unstable when an application using active 
>>>>>> stereo and synchronization is closed:
>>>>>> -The synchronization between two computers is lost.
>>>>>> -The CCC can crash when the synchronization is made again.
>>>>>> Someone have any clues about this?
>>>>> I don't know exactly how this works on AMD/ATI graphics cards, but 
>>>>> I have worked with synchronisation on other graphics cards about 7 
>>>>> years ago, so I have some idea of how you solve the various 
>>>>> problems.
>>>>> What I don't quite understand is why it would be different between 
>>>>> a virtual environment and the bare-metal ("native") install. My 
>>>>> immediate guess is that there is a timing difference, for one of 
>>>>> three reasons:
>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>> 2. Interrupt delays due to hypervisor.
>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>> I don't think those are easy to work around (as they all have to 
>>>>> "happen" in a virtual system), but I also don't REALLY understand 
>>>>> why this should cause problems in the first place, as there isn't 
>>>>> any guarantee as to the timings of either memory reads, interrupt 
>>>>> latency/responsiveness or CPU availability in Windows, so the same 
>>>>> problem would appear in native systems as well, given "the right"
>>>>> circumstances.
>>>>> What exactly is the crash in CCC?
>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>> Windows "service" to handle certain requests from the driver that 
>>>>> can't be done in kernel mode [or shouldn't be done in the driver in 
>>>>> general]).
>>>> After the application is closed, I launch the Catalyst Control 
>>>> Center, the synchronization state seems to be good. But there is no 
>>>> synchronization.
>>>>
>>>> If I try to apply any modifications of synchronization (synchro 
>>>> server or client), CCC freeze and I need to kill it manually.
>>>>
>>>> I can set the synchronization back after this.
>>>>
>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>> I've made a bunch of tests this morning:
>> -CCC crash when I've got two displays: I set one to be the synchronization server and the other a client at the same time. When I set the server, apply this configuration and set the client after, it didn't crash.
>> -If my application (Virtools) crash, synchronization is reset.
>> -Eyes are sometimes inverted with the same trigger edge.
>I saw that problem with the product I was working on once or twice. 
>Makes it look really "confusing". This was a settings problem in my case (because I wrote my own "controls", I could set almost every aspect of everything that could possibly be changed, with a very basic command line application that interacted pretty straight down to the driver - with the usual caveat of "make sure you know what you are doing" - the normal GUI Control panel setup was much more "you can only set things that make sense for you to set"). That is probably not really what your problem is... But could be a configuration of driver or application issue, of course.
>
>>
>> I've got all this behaviors with both HVM and native installation under 7 64bits.  So I think it's clearly a software issue.
>>
>> Next step:  7 32bits.
>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>
Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.

>>> Whilst I'm all for using Xen for everything, there are sometimes situations when "not using Xen" may actually be the right choice. Can you explain why running your guests in Xen is of benefit? [If you'd like to answer "none of your business", that's fine, but it may help to understand what the "business case" is for this].
>> The objective is to mutualize a graphics cluster for immersive systems. Virtual Reality applications are sensitive in their configuration; it's a pain to manage multiple users and it's nearly impossible to have different configurations for these users. Usually immersive systems are stuck in one configuration (OS, drivers, applications ...), and only a few people are allowed to change settings.
>> The idea is to use Xen and VGA passthrough to create personal environments that allow every user to make their own configuration without impact on the others.
>>
>> Being able to have VR configurations in virtual machines, and to run them with 3D features, is a serious benefit for Virtual Reality users.
>
>Thanks for your explanation. Makes some sense; however, I feel that it also makes things more complex - if the system is so sensitive, it may get "upset" simply by having the differences in system behaviour that you automatically get from running on a virtual machine vs. "bare metal". Don't let that stop you, I'm just saying there may be issues caused by Xen (or other virtualisation products) not being quite as transparent as they really should be.
>

It's not the hardware configuration that is so sensitive but rather the software configuration and driver versions.
I've already made some demonstrations of Xen's capabilities in our use case; there has been no negative feedback. I think HVM behavior is perfect for our uses, except for these driver issues.

I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs.
My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough.
I think it's a bug due to my installation (Xen 4.2.0-unstable).

I just got a new test computer, a Dell Precision T7500 with a V9800 FirePro, maybe I will have the time to test something tomorrow!

Aurelien 
>--
>Mats
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:49:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6Y0-0008Di-Pn; Mon, 10 Dec 2012 16:49:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Ti6Xy-0008DT-JP
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:49:38 +0000
Received: from [85.158.139.211:62678] by server-1.bemta-5.messagelabs.com id
	0A/4A-12813-1A216C05; Mon, 10 Dec 2012 16:49:37 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355158158!19771708!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2542 invoked from network); 10 Dec 2012 16:49:26 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-5.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Dec 2012 16:49:26 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Mon, 10 Dec 2012 17:49:17 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] VGA passthrough and AMD drivers
Thread-Index: Ac3W6q3sZdM1jooZStCS7Hy3uj/+ag==
Date: Mon, 10 Dec 2012 16:49:16 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>>>> Hi all,
>>>>>> I have made some tests to find a good driver for FirePro V8800 on 
>>>>>> windows 7 64bit HVM.
>>>>>> I have been focused on ?advanced features?: quad buffer and active 
>>>>>> stereoscopy, synchronization ?
>>>>>> The results, for all FirePro drivers (of this year); I can?t get 
>>>>>> the quad buffer/active stereoscopy feature.
>>>>>> But they work on a native installation.
>>>>> Can you describe the setup a little more?
>>>> I?ve got 2 HP Z800 workstation with FirePro V8800, one per computer.
>>>>
>>>> It?s a setup used in CAVE system, I try (and its works, minus some
>>>> issues) to virtualize ?virtual reality contexts? that needs full 
>>>> graphics card features.
>>>>
>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>
>>>> cores_per_socket : 4
>>>>
>>>> threads_per_core : 2
>>>>
>>>> cpu_mhz : 2660
>>>>
>>>> total_memory : 4079
>>>>
>>>>> How many graphic cards per guest?
>>>> One card per guest.
>>>>
>>>>> How many guests? On how many hosts?
>>>> One guest per computer.
>>>>
>>> And of course, I just thought of some other questions:
>>> What version of Xen are you using?
>>> What kernel are you using in Dom0?
>> release                : 2.6.32-5-xen-amd64
>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>> machine                : x86_64
>> nr_cpus                : 8
>> nr_nodes               : 1
>> cores_per_socket       : 4
>> threads_per_core       : 2
>> cpu_mhz                : 2660
>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>> virt_caps              : hvm hvm_directio
>> total_memory           : 4079
>> free_cpus              : 0
>> xen_major              : 4
>> xen_minor              : 2
>> xen_extra              : -unstable
>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>> xen_scheduler          : credit
>> xen_pagesize           : 4096
>> platform_params        : virt_start=0xffff800000000000
>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>> xen_commandline        : placeholder
>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>> xend_config_format     : 4
>>
>> I will change to a newer version and use the xl toolstack once VGA passthrough is supported there.
>>
>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>> Yes
>>
>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>> (Catalyst 12.10 WHQL).
>>>>>> But this driver becomes unstable when an application using active 
>>>>>> stereo and synchronization is closed:
>>>>>> -The synchronization between the two computers is lost.
>>>>>> -The CCC can crash when the synchronization is re-established.
>>>>>> Does anyone have any clues about this?
>>>>> I don't know exactly how this works on AMD/ATI graphics cards, but 
>>>>> I have worked with synchronisation on other graphics cards about 7 
>>>>> years ago, so I have some idea of how you solve the various 
>>>>> problems.
>>>>> What I don't quite understand is why it would be different between 
>>>>> a virtual environment and the bare-metal ("native") install. My 
>>>>> immediate guess is that there is a timing difference, for one of 
>>>>> three reasons:
>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>> 2. Interrupt delays due to hypervisor.
>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>> I don't think those are easy to work around (as they all have to 
>>>>> "happen" in a virtual system), but I also don't REALLY understand 
>>>>> why this should cause problems in the first place, as there isn't 
>>>>> any guarantee as to the timings of either memory reads, interrupt 
>>>>> latency/responsiveness or CPU availability in Windows, so the same 
>>>>> problem would appear in native systems as well, given "the right"
>>>>> circumstances.
>>>>> What exactly is the crash in CCC?
>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>> Windows "service" to handle certain requests from the driver that 
>>>>> can't be done in kernel mode [or shouldn't be done in the driver in 
>>>>> general]).
>>>> After the application is closed, I launch the Catalyst Control 
>>>> Center; the synchronization state seems to be good. But there is no 
>>>> synchronization.
>>>>
>>>> If I try to apply any modification of the synchronization (synchro 
>>>> server or client), CCC freezes and I need to kill it manually.
>>>>
>>>> I can set the synchronization back up after this.
>>>>
>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>> I've made a bunch of tests this morning:
>> -CCC crashes when I've got two displays and I set one to be the synchronization server and the other a client at the same time. If I set the server first, apply that configuration, and set the client afterwards, it doesn't crash.
>> -If my application (Virtools) crashes, synchronization is reset.
>> -Eyes are sometimes inverted with the same trigger edge.
>I saw that problem with the product I was working on once or twice.
>Makes it look really "confusing". This was a settings problem in my case
>(because I wrote my own "controls", I could set almost every aspect of
>everything that could possibly be changed, with a very basic command-line
>application that interacted pretty much directly with the driver - with
>the usual caveat of "make sure you know what you are doing" - whereas the
>normal GUI control-panel setup was much more "you can only set things
>that make sense for you to set"). That is probably not really what your
>problem is... but it could be a driver-configuration or application
>issue, of course.
>
>>
>> I get all these behaviors with both the HVM and the native installation under Windows 7 64-bit, so I think it's clearly a software issue.
>>
>> Next step: Windows 7 32-bit.
>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>
Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.

>>> Whilst I'm all for using Xen for everything, there are sometimes
>>> situations when "not using Xen" may actually be the right choice. Can
>>> you explain why running your guests in Xen is of benefit? [If you'd
>>> like to answer "none of your business", that's fine, but it may help
>>> to understand what the "business case" is for this].
>> The objective is to share a graphics cluster between immersive systems.
>> Virtual reality applications are sensitive to their configuration; it's
>> a pain to manage multiple users and it's nearly impossible to maintain
>> different configurations for those users. Usually immersive systems are
>> stuck in one configuration (OS, drivers, applications ...), and only a
>> few people are allowed to change settings.
>> The idea is to use Xen and VGA passthrough to create personal
>> environments that allow every user to make their own configuration
>> without impacting the others.
>>
>> Being able to have VR configurations in virtual machines, and to run
>> them with 3D features, is a serious benefit for virtual reality users.
>
>Thanks for your explanation. Makes some sense; however, I feel that it
>also makes things more complex - if the system is so sensitive, it may
>get "upset" simply by the differences in system behaviour that you
>automatically get from running on a virtual machine vs. "bare metal".
>Don't let that stop you; I'm just saying there may be issues because Xen
>(and other virtualisation products) is not quite as transparent as it
>really should be.
>

It's not the hardware configuration that is so sensitive, but rather the software configuration and driver versions.
I've already given some demonstrations of Xen's capabilities in our use case, and there has been no negative feedback. I think the HVM behavior is perfect for our uses, except for these driver issues.

I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs.
My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough.
I think it's a bug due to my installation (Xen 4.2.0-unstable).

I just got a new test computer, a Dell Precision T7500 with a V9800 FirePro, maybe I will have the time to test something tomorrow!

Aurelien 
>--
>Mats
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6eN-000094-Qc; Mon, 10 Dec 2012 16:56:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti6eM-00008w-ON
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:56:15 +0000
Received: from [85.158.137.99:32310] by server-16.bemta-3.messagelabs.com id
	B7/9A-07461-92416C05; Mon, 10 Dec 2012 16:56:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355158569!12491424!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20037 invoked from network); 10 Dec 2012 16:56:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 16:56:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="39863"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 16:56:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 16:56:08 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti6eG-0005ip-RH; Mon, 10 Dec 2012 16:56:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti6eG-0002rL-Fe;
	Mon, 10 Dec 2012 16:56:08 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20678.5159.946248.90947@mariner.uk.xensource.com>
Date: Mon, 10 Dec 2012 16:56:07 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>, Bamvor Jian Zhang
	<bjzhang@suse.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"jfehlig@suse.com" <jfehlig@suse.com>
In-Reply-To: <20677.47995.298291.120095@mariner.uk.xensource.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
	<20674.16214.934271.479230@mariner.uk.xensource.com>
	<1355134766.31710.119.camel@zakaz.uk.xensource.com>
	<20677.47995.298291.120095@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
> I'm not surprised that the original patch makes Bamvor's symptoms go
> away.  Bamvor had one of the possible races (the fd-related one) but
> not the other.

Here (followups to this message, shortly) is v3 of my two-patch series
which after conversation with Ian C I think fully fixes the race, and
which I have tested now.

Bamvor, can you test this and let us know whether it fixes your problem?

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:57:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6fM-0000DA-8f; Mon, 10 Dec 2012 16:57:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti6fK-0000Cw-P4
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:57:15 +0000
Received: from [85.158.137.99:51968] by server-9.bemta-3.messagelabs.com id
	28/33-02388-56416C05; Mon, 10 Dec 2012 16:57:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355158629!18758782!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29832 invoked from network); 10 Dec 2012 16:57:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 16:57:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="39885"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 16:57:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 16:57:07 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti6fD-0005j7-QC; Mon, 10 Dec 2012 16:57:07 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti6fD-0002ua-K1;
	Mon, 10 Dec 2012 16:57:07 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Dec 2012 16:57:03 +0000
Message-ID: <1355158624-11163-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20678.5159.946248.90947@mariner.uk.xensource.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout and
..._fd, in a multithreaded program those calls may be arbitrarily
delayed in relation to other activities within the program.

libxl therefore needs to be prepared to receive very old event
callbacks.  Arrange for this to be the case for fd callbacks.

This requires a new layer of indirection through a "hook nexus" struct
which can outlive the libxl__ev_foo.  Allocation and deallocation of
these nexi is mostly handled in the OSEVENT macros which wrap up
the application's callbacks.

Document the problem and the solution in a comment in libxl_event.c
just before the definition of struct libxl__osevent_hook_nexus.

There is still a race relating to libxl__osevent_occurred_timeout;
this will be addressed in the following patch.
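The indirection described above can be sketched as a standalone illustration (this is not the libxl code itself; the names `nexus`, `nexus_alloc`, `nexus_release` and `demo` are invented for this sketch, and the real implementation keeps the idle list in the ctx under its lock):

```c
#include <stddef.h>
#include <stdlib.h>

/* Sketch of the "hook nexus" indirection: the application is handed a
 * nexus pointer as for_libxl, never the event itself.  Nexi are recycled
 * via an idle list and freed only with the whole context, so a stale
 * callback can always dereference its nexus safely and be discarded. */

typedef struct nexus {
    void *ev;            /* NULL, or the currently registered event */
    struct nexus *next;  /* idle-list link */
} nexus;

static nexus *idle_list; /* stands in for ctx->hook_..._nexi_idle */

static nexus *nexus_alloc(void *ev)
{
    nexus *n = idle_list;
    if (n) idle_list = n->next;            /* reuse an idle nexus */
    else n = calloc(1, sizeof(*n));        /* or allocate a fresh one */
    n->ev = ev;
    return n;
}

static void nexus_release(nexus *n)
{
    n->ev = NULL;                          /* mark any old for_libxl stale */
    n->next = idle_list;
    idle_list = n;
}

/* What a libxl_osevent_occurred_* entrypoint does with for_libxl: */
static void *ev_from_for_libxl(void *for_libxl)
{
    return ((nexus *)for_libxl)->ev;       /* NULL => stale, discard */
}

int demo(void)
{
    int ev1, ev2;
    nexus *n = nexus_alloc(&ev1);          /* register ev1 */
    void *for_libxl = n;                   /* handed to the application */
    nexus_release(n);                      /* deregister ev1 */
    int stale_discarded = (ev_from_for_libxl(for_libxl) == NULL);

    nexus *n2 = nexus_alloc(&ev2);         /* nexus recycled for ev2 */
    int recycled = (n2 == n) && (ev_from_for_libxl(n2) == &ev2);
    return stale_discarded && recycled;    /* 1 if both behaviours hold */
}
```

The key property is the last one in `demo()`: a recycled nexus may carry a different ev than the one a delayed callback expected, which is why the occurred functions must also re-check that the signalled condition really exists.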

Reported-by: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v2:
  - Prepare for fixing timeout race too
  - Break out osevent_release_nexus()
  - nexusop argument to OSEVENT* macros
  - Clarify OSEVENT* nexusop hooks
  - osevent_ev_from_hook_nexus takes a libxl__osevent_hook_nexus*
---
 tools/libxl/libxl_event.c    |  184 +++++++++++++++++++++++++++++++++++------
 tools/libxl/libxl_internal.h |    8 ++-
 2 files changed, 163 insertions(+), 29 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 72cb723..f1fe425 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -38,23 +38,131 @@
  * The application's registration hooks should be called ONLY via
  * these macros, with the ctx locked.  Likewise all the "occurred"
  * entrypoints from the application should assert(!in_hook);
+ *
+ * During the hook call - including while the arguments are being
+ * evaluated - ev->nexus is guaranteed to be valid and refer to the
+ * nexus which is being used for this event registration.  The
+ * arguments should specify ev->nexus for the for_libxl argument and
+ * ev->nexus->for_app_reg (or a pointer to it) for for_app_reg.
  */
-#define OSEVENT_HOOK_INTERN(retval, hookname, ...) do {                      \
-    if (CTX->osevent_hooks) {                                                \
-        CTX->osevent_in_hook++;                                              \
-        retval CTX->osevent_hooks->hookname(CTX->osevent_user, __VA_ARGS__); \
-        CTX->osevent_in_hook--;                                              \
-    }                                                                        \
+#define OSEVENT_HOOK_INTERN(retval, failedp, evkind, hookop, nexusop, ...) do { \
+    if (CTX->osevent_hooks) {                                           \
+        CTX->osevent_in_hook++;                                         \
+        libxl__osevent_hook_nexi *nexi = &CTX->hook_##evkind##_nexi_idle; \
+        osevent_hook_pre_##nexusop(gc, ev, nexi, &ev->nexus);            \
+        retval CTX->osevent_hooks->evkind##_##hookop                    \
+            (CTX->osevent_user, __VA_ARGS__);                           \
+        if ((failedp))                                                  \
+            osevent_hook_failed_##nexusop(gc, ev, nexi, &ev->nexus);     \
+        CTX->osevent_in_hook--;                                         \
+    }                                                                   \
 } while (0)
 
-#define OSEVENT_HOOK(hookname, ...) ({                                       \
-    int osevent_hook_rc = 0;                                                 \
-    OSEVENT_HOOK_INTERN(osevent_hook_rc = , hookname, __VA_ARGS__);          \
-    osevent_hook_rc;                                                         \
+#define OSEVENT_HOOK(evkind, hookop, nexusop, ...) ({                   \
+    int osevent_hook_rc = 0;                                    \
+    OSEVENT_HOOK_INTERN(osevent_hook_rc =, !!osevent_hook_rc,   \
+                        evkind, hookop, nexusop, __VA_ARGS__);          \
+    osevent_hook_rc;                                            \
 })
 
-#define OSEVENT_HOOK_VOID(hookname, ...) \
-    OSEVENT_HOOK_INTERN(/* void */, hookname, __VA_ARGS__)
+#define OSEVENT_HOOK_VOID(evkind, hookop, nexusop, ...)                         \
+    OSEVENT_HOOK_INTERN(/* void */, 0, evkind, hookop, nexusop, __VA_ARGS__)
+
+/*
+ * The application's calls to libxl_osevent_occurred_... may be
+ * indefinitely delayed with respect to the rest of the program (since
+ * they are not necessarily called with any lock held).  So the
+ * for_libxl value we receive may be (almost) arbitrarily old.  All we
+ * know is that it came from this ctx.
+ *
+ * Therefore we may not free the object referred to by any for_libxl
+ * value until we free the whole libxl_ctx.  And if we reuse it we
+ * must be able to tell when an old use turns up, and discard the
+ * stale event.
+ *
+ * Thus we cannot use the ev directly as the for_libxl value - we need
+ * a layer of indirection.
+ *
+ * We do this by keeping a pool of libxl__osevent_hook_nexus structs,
+ * and use pointers to them as for_libxl values.  In fact, there are
+ * two pools: one for fds and one for timeouts.  This ensures that we
+ * don't risk a type error when we upcast nexus->ev.  In each nexus
+ * the ev is either null or points to a valid libxl__ev_time or
+ * libxl__ev_fd, as applicable.
+ *
+ * We /do/ allow ourselves to reassociate an old nexus with a new ev
+ * as otherwise we would have to leak nexi.  (This reassociation
+ * might, of course, be an old ev being reused for a new purpose so
+ * simply comparing the ev pointer is not sufficient.)  Thus the
+ * libxl_osevent_occurred functions need to check that the condition
+ * allegedly signalled by this event actually exists.
+ *
+ * The nexi and the lists are all protected by the ctx lock.
+ */
+ 
+struct libxl__osevent_hook_nexus {
+    void *ev;
+    void *for_app_reg;
+    LIBXL_SLIST_ENTRY(libxl__osevent_hook_nexus) next;
+};
+
+static void *osevent_ev_from_hook_nexus(libxl_ctx *ctx,
+           libxl__osevent_hook_nexus *nexus /* pass  void *for_libxl */)
+{
+    return nexus->ev;
+}
+
+static void osevent_release_nexus(libxl__gc *gc,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus *nexus)
+{
+    nexus->ev = 0;
+    LIBXL_SLIST_INSERT_HEAD(nexi_idle, nexus, next);
+}
+
+/*----- OSEVENT* hook functions for nexusop "alloc" -----*/
+static void osevent_hook_pre_alloc(libxl__gc *gc, void *ev,
+                                   libxl__osevent_hook_nexi *nexi_idle,
+                                   libxl__osevent_hook_nexus **nexus_r)
+{
+    libxl__osevent_hook_nexus *nexus = LIBXL_SLIST_FIRST(nexi_idle);
+    if (nexus) {
+        LIBXL_SLIST_REMOVE_HEAD(nexi_idle, next);
+    } else {
+        nexus = libxl__zalloc(NOGC, sizeof(*nexus));
+    }
+    nexus->ev = ev;
+    *nexus_r = nexus;
+}
+static void osevent_hook_failed_alloc(libxl__gc *gc, void *ev,
+                                      libxl__osevent_hook_nexi *nexi_idle,
+                                      libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+
+/*----- OSEVENT* hook functions for nexusop "release" -----*/
+static void osevent_hook_pre_release(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+static void osevent_hook_failed_release(libxl__gc *gc, void *ev,
+                                        libxl__osevent_hook_nexi *nexi_idle,
+                                        libxl__osevent_hook_nexus **nexus)
+{
+    abort();
+}
+
+/*----- OSEVENT* hook functions for nexusop "noop" -----*/
+static void osevent_hook_pre_noop(libxl__gc *gc, void *ev,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus **nexus) { }
+static void osevent_hook_failed_noop(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus) { }
+
 
 /*
  * fd events
@@ -72,7 +180,8 @@ int libxl__ev_fd_register(libxl__gc *gc, libxl__ev_fd *ev,
 
     DBG("ev_fd=%p register fd=%d events=%x", ev, fd, events);
 
-    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
+    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
+                      events, ev->nexus);
     if (rc) goto out;
 
     ev->fd = fd;
@@ -97,7 +206,7 @@ int libxl__ev_fd_modify(libxl__gc *gc, libxl__ev_fd *ev, short events)
 
     DBG("ev_fd=%p modify fd=%d events=%x", ev, ev->fd, events);
 
-    rc = OSEVENT_HOOK(fd_modify, ev->fd, &ev->for_app_reg, events);
+    rc = OSEVENT_HOOK(fd,modify, noop, ev->fd, &ev->nexus->for_app_reg, events);
     if (rc) goto out;
 
     ev->events = events;
@@ -119,7 +228,7 @@ void libxl__ev_fd_deregister(libxl__gc *gc, libxl__ev_fd *ev)
 
     DBG("ev_fd=%p deregister fd=%d", ev, ev->fd);
 
-    OSEVENT_HOOK_VOID(fd_deregister, ev->fd, ev->for_app_reg);
+    OSEVENT_HOOK_VOID(fd,deregister, release, ev->fd, ev->nexus->for_app_reg);
     LIBXL_LIST_REMOVE(ev, entry);
     ev->fd = -1;
 
@@ -171,7 +280,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 {
     int rc;
 
-    rc = OSEVENT_HOOK(timeout_register, &ev->for_app_reg, absolute, ev);
+    rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
+                      absolute, ev->nexus);
     if (rc) return rc;
 
     ev->infinite = 0;
@@ -184,7 +294,7 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout_deregister, ev->for_app_reg);
+        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -270,7 +380,8 @@ int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
         rc = time_register_finite(gc, ev, absolute);
         if (rc) goto out;
     } else {
-        rc = OSEVENT_HOOK(timeout_modify, &ev->for_app_reg, absolute);
+        rc = OSEVENT_HOOK(timeout,modify, noop,
+                          &ev->nexus->for_app_reg, absolute);
         if (rc) goto out;
 
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
@@ -1010,35 +1121,54 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
 
 
 void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
-                               int fd, short events, short revents)
+                               int fd, short events_ign, short revents_ign)
 {
-    libxl__ev_fd *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    assert(fd == ev->fd);
-    revents &= ev->events;
-    if (revents)
-        ev->func(egc, ev, fd, ev->events, revents);
+    libxl__ev_fd *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
+    if (ev->fd != fd) goto out;
 
+    struct pollfd check;
+    for (;;) {
+        check.fd = fd;
+        check.events = ev->events;
+        int r = poll(&check, 1, 0);
+        if (!r)
+            goto out;
+        if (r==1)
+            break;
+        assert(r<0);
+        if (errno != EINTR) {
+            LIBXL__EVENT_DISASTER(egc, "failed poll to check for fd", errno, 0);
+            goto out;
+        }
+    }
+
+    if (check.revents)
+        ev->func(egc, ev, fd, ev->events, check.revents);
+
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
 
 void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 {
-    libxl__ev_time *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
     assert(!ev->infinite);
+
     LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     ev->func(egc, ev, &ev->abs);
 
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index cba3616..6484bcb 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -136,6 +136,8 @@ typedef struct libxl__gc libxl__gc;
 typedef struct libxl__egc libxl__egc;
 typedef struct libxl__ao libxl__ao;
 typedef struct libxl__aop_occurred libxl__aop_occurred;
+typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
+typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
 
 _hidden void libxl__alloc_failed(libxl_ctx *, const char *func,
                          size_t nmemb, size_t size) __attribute__((noreturn));
@@ -163,7 +165,7 @@ struct libxl__ev_fd {
     libxl__ev_fd_callback *func;
     /* remainder is private for libxl__ev_fd... */
     LIBXL_LIST_ENTRY(libxl__ev_fd) entry;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 
@@ -178,7 +180,7 @@ struct libxl__ev_time {
     int infinite; /* not registered in list or with app if infinite */
     LIBXL_TAILQ_ENTRY(libxl__ev_time) entry;
     struct timeval abs;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
@@ -329,6 +331,8 @@ struct libxl__ctx {
     libxl__poller poller_app; /* libxl_osevent_beforepoll and _afterpoll */
     LIBXL_LIST_HEAD(, libxl__poller) pollers_event, pollers_idle;
 
+    LIBXL_SLIST_HEAD(libxl__osevent_hook_nexi, libxl__osevent_hook_nexus)
+        hook_fd_nexi_idle, hook_timeout_nexi_idle;
     LIBXL_LIST_HEAD(, libxl__ev_fd) efds;
     LIBXL_TAILQ_HEAD(, libxl__ev_time) etimes;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 16:57:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6fM-0000DA-8f; Mon, 10 Dec 2012 16:57:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti6fK-0000Cw-P4
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:57:15 +0000
Received: from [85.158.137.99:51968] by server-9.bemta-3.messagelabs.com id
	28/33-02388-56416C05; Mon, 10 Dec 2012 16:57:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355158629!18758782!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29832 invoked from network); 10 Dec 2012 16:57:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 16:57:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="39885"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 16:57:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 16:57:07 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti6fD-0005j7-QC; Mon, 10 Dec 2012 16:57:07 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti6fD-0002ua-K1;
	Mon, 10 Dec 2012 16:57:07 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Dec 2012 16:57:03 +0000
Message-ID: <1355158624-11163-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20678.5159.946248.90947@mariner.uk.xensource.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout and
..._fd, in a multithreaded program those calls may be arbitrarily
delayed in relation to other activities within the program.

libxl therefore needs to be prepared to receive very old event
callbacks.  Arrange for this to be the case for fd callbacks.

This requires a new layer of indirection through a "hook nexus" struct
which can outlive the libxl__ev_foo.  Allocation and deallocation of
these nexi are mostly handled in the OSEVENT macros which wrap up
the application's callbacks.

Document the problem and the solution in a comment in libxl_event.c
just before the definition of struct libxl__osevent_hook_nexus.

There is still a race relating to libxl_osevent_occurred_timeout;
this will be addressed in the following patch.

Reported-by: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v2:
  - Prepare for fixing timeout race too
  - Break out osevent_release_nexus()
  - nexusop argument to OSEVENT* macros
  - Clarify OSEVENT* nexusop hooks
  - osevent_ev_from_hook_nexus takes a libxl__osevent_hook_nexus*
---
 tools/libxl/libxl_event.c    |  184 +++++++++++++++++++++++++++++++++++------
 tools/libxl/libxl_internal.h |    8 ++-
 2 files changed, 163 insertions(+), 29 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 72cb723..f1fe425 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -38,23 +38,131 @@
  * The application's registration hooks should be called ONLY via
  * these macros, with the ctx locked.  Likewise all the "occurred"
  * entrypoints from the application should assert(!in_hook);
+ *
+ * During the hook call - including while the arguments are being
+ * evaluated - ev->nexus is guaranteed to be valid and refer to the
+ * nexus which is being used for this event registration.  The
+ * arguments should specify ev->nexus for the for_libxl argument and
+ * ev->nexus->for_app_reg (or a pointer to it) for for_app_reg.
  */
-#define OSEVENT_HOOK_INTERN(retval, hookname, ...) do {                      \
-    if (CTX->osevent_hooks) {                                                \
-        CTX->osevent_in_hook++;                                              \
-        retval CTX->osevent_hooks->hookname(CTX->osevent_user, __VA_ARGS__); \
-        CTX->osevent_in_hook--;                                              \
-    }                                                                        \
+#define OSEVENT_HOOK_INTERN(retval, failedp, evkind, hookop, nexusop, ...) do { \
+    if (CTX->osevent_hooks) {                                           \
+        CTX->osevent_in_hook++;                                         \
+        libxl__osevent_hook_nexi *nexi = &CTX->hook_##evkind##_nexi_idle; \
+        osevent_hook_pre_##nexusop(gc, ev, nexi, &ev->nexus);            \
+        retval CTX->osevent_hooks->evkind##_##hookop                    \
+            (CTX->osevent_user, __VA_ARGS__);                           \
+        if ((failedp))                                                  \
+            osevent_hook_failed_##nexusop(gc, ev, nexi, &ev->nexus);     \
+        CTX->osevent_in_hook--;                                         \
+    }                                                                   \
 } while (0)
 
-#define OSEVENT_HOOK(hookname, ...) ({                                       \
-    int osevent_hook_rc = 0;                                                 \
-    OSEVENT_HOOK_INTERN(osevent_hook_rc = , hookname, __VA_ARGS__);          \
-    osevent_hook_rc;                                                         \
+#define OSEVENT_HOOK(evkind, hookop, nexusop, ...) ({                   \
+    int osevent_hook_rc = 0;                                    \
+    OSEVENT_HOOK_INTERN(osevent_hook_rc =, !!osevent_hook_rc,   \
+                        evkind, hookop, nexusop, __VA_ARGS__);          \
+    osevent_hook_rc;                                            \
 })
 
-#define OSEVENT_HOOK_VOID(hookname, ...) \
-    OSEVENT_HOOK_INTERN(/* void */, hookname, __VA_ARGS__)
+#define OSEVENT_HOOK_VOID(evkind, hookop, nexusop, ...)                         \
+    OSEVENT_HOOK_INTERN(/* void */, 0, evkind, hookop, nexusop, __VA_ARGS__)
+
+/*
+ * The application's calls to libxl_osevent_occurred_... may be
+ * indefinitely delayed with respect to the rest of the program (since
+ * they are not necessarily called with any lock held).  So the
+ * for_libxl value we receive may be (almost) arbitrarily old.  All we
+ * know is that it came from this ctx.
+ *
+ * Therefore we may not free the object referred to by any for_libxl
+ * value until we free the whole libxl_ctx.  And if we reuse it we
+ * must be able to tell when an old use turns up, and discard the
+ * stale event.
+ *
+ * Thus we cannot use the ev directly as the for_libxl value - we need
+ * a layer of indirection.
+ *
+ * We do this by keeping a pool of libxl__osevent_hook_nexus structs,
+ * and use pointers to them as for_libxl values.  In fact, there are
+ * two pools: one for fds and one for timeouts.  This ensures that we
+ * don't risk a type error when we upcast nexus->ev.  In each nexus
+ * the ev is either null or points to a valid libxl__ev_time or
+ * libxl__ev_fd, as applicable.
+ *
+ * We /do/ allow ourselves to reassociate an old nexus with a new ev
+ * as otherwise we would have to leak nexi.  (This reassociation
+ * might, of course, be an old ev being reused for a new purpose so
+ * simply comparing the ev pointer is not sufficient.)  Thus the
+ * libxl_osevent_occurred functions need to check that the condition
+ * allegedly signalled by this event actually exists.
+ *
+ * The nexi and the lists are all protected by the ctx lock.
+ */
+
+struct libxl__osevent_hook_nexus {
+    void *ev;
+    void *for_app_reg;
+    LIBXL_SLIST_ENTRY(libxl__osevent_hook_nexus) next;
+};
+
+static void *osevent_ev_from_hook_nexus(libxl_ctx *ctx,
+           libxl__osevent_hook_nexus *nexus /* pass  void *for_libxl */)
+{
+    return nexus->ev;
+}
+
+static void osevent_release_nexus(libxl__gc *gc,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus *nexus)
+{
+    nexus->ev = 0;
+    LIBXL_SLIST_INSERT_HEAD(nexi_idle, nexus, next);
+}
+
+/*----- OSEVENT* hook functions for nexusop "alloc" -----*/
+static void osevent_hook_pre_alloc(libxl__gc *gc, void *ev,
+                                   libxl__osevent_hook_nexi *nexi_idle,
+                                   libxl__osevent_hook_nexus **nexus_r)
+{
+    libxl__osevent_hook_nexus *nexus = LIBXL_SLIST_FIRST(nexi_idle);
+    if (nexus) {
+        LIBXL_SLIST_REMOVE_HEAD(nexi_idle, next);
+    } else {
+        nexus = libxl__zalloc(NOGC, sizeof(*nexus));
+    }
+    nexus->ev = ev;
+    *nexus_r = nexus;
+}
+static void osevent_hook_failed_alloc(libxl__gc *gc, void *ev,
+                                      libxl__osevent_hook_nexi *nexi_idle,
+                                      libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+
+/*----- OSEVENT* hook functions for nexusop "release" -----*/
+static void osevent_hook_pre_release(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus)
+{
+    osevent_release_nexus(gc, nexi_idle, *nexus);
+}
+static void osevent_hook_failed_release(libxl__gc *gc, void *ev,
+                                        libxl__osevent_hook_nexi *nexi_idle,
+                                        libxl__osevent_hook_nexus **nexus)
+{
+    abort();
+}
+
+/*----- OSEVENT* hook functions for nexusop "noop" -----*/
+static void osevent_hook_pre_noop(libxl__gc *gc, void *ev,
+                                  libxl__osevent_hook_nexi *nexi_idle,
+                                  libxl__osevent_hook_nexus **nexus) { }
+static void osevent_hook_failed_noop(libxl__gc *gc, void *ev,
+                                     libxl__osevent_hook_nexi *nexi_idle,
+                                     libxl__osevent_hook_nexus **nexus) { }
+
 
 /*
  * fd events
@@ -72,7 +180,8 @@ int libxl__ev_fd_register(libxl__gc *gc, libxl__ev_fd *ev,
 
     DBG("ev_fd=%p register fd=%d events=%x", ev, fd, events);
 
-    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
+    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
+                      events, ev->nexus);
     if (rc) goto out;
 
     ev->fd = fd;
@@ -97,7 +206,7 @@ int libxl__ev_fd_modify(libxl__gc *gc, libxl__ev_fd *ev, short events)
 
     DBG("ev_fd=%p modify fd=%d events=%x", ev, ev->fd, events);
 
-    rc = OSEVENT_HOOK(fd_modify, ev->fd, &ev->for_app_reg, events);
+    rc = OSEVENT_HOOK(fd,modify, noop, ev->fd, &ev->nexus->for_app_reg, events);
     if (rc) goto out;
 
     ev->events = events;
@@ -119,7 +228,7 @@ void libxl__ev_fd_deregister(libxl__gc *gc, libxl__ev_fd *ev)
 
     DBG("ev_fd=%p deregister fd=%d", ev, ev->fd);
 
-    OSEVENT_HOOK_VOID(fd_deregister, ev->fd, ev->for_app_reg);
+    OSEVENT_HOOK_VOID(fd,deregister, release, ev->fd, ev->nexus->for_app_reg);
     LIBXL_LIST_REMOVE(ev, entry);
     ev->fd = -1;
 
@@ -171,7 +280,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 {
     int rc;
 
-    rc = OSEVENT_HOOK(timeout_register, &ev->for_app_reg, absolute, ev);
+    rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
+                      absolute, ev->nexus);
     if (rc) return rc;
 
     ev->infinite = 0;
@@ -184,7 +294,7 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout_deregister, ev->for_app_reg);
+        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -270,7 +380,8 @@ int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
         rc = time_register_finite(gc, ev, absolute);
         if (rc) goto out;
     } else {
-        rc = OSEVENT_HOOK(timeout_modify, &ev->for_app_reg, absolute);
+        rc = OSEVENT_HOOK(timeout,modify, noop,
+                          &ev->nexus->for_app_reg, absolute);
         if (rc) goto out;
 
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
@@ -1010,35 +1121,54 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
 
 
 void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
-                               int fd, short events, short revents)
+                               int fd, short events_ign, short revents_ign)
 {
-    libxl__ev_fd *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    assert(fd == ev->fd);
-    revents &= ev->events;
-    if (revents)
-        ev->func(egc, ev, fd, ev->events, revents);
+    libxl__ev_fd *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
+    if (ev->fd != fd) goto out;
 
+    struct pollfd check;
+    for (;;) {
+        check.fd = fd;
+        check.events = ev->events;
+        int r = poll(&check, 1, 0);
+        if (!r)
+            goto out;
+        if (r==1)
+            break;
+        assert(r<0);
+        if (errno != EINTR) {
+            LIBXL__EVENT_DISASTER(egc, "failed poll to check for fd", errno, 0);
+            goto out;
+        }
+    }
+
+    if (check.revents)
+        ev->func(egc, ev, fd, ev->events, check.revents);
+
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
 
 void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 {
-    libxl__ev_time *ev = for_libxl;
-
     EGC_INIT(ctx);
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    if (!ev) goto out;
     assert(!ev->infinite);
+
     LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     ev->func(egc, ev, &ev->abs);
 
+ out:
     CTX_UNLOCK;
     EGC_FREE;
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index cba3616..6484bcb 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -136,6 +136,8 @@ typedef struct libxl__gc libxl__gc;
 typedef struct libxl__egc libxl__egc;
 typedef struct libxl__ao libxl__ao;
 typedef struct libxl__aop_occurred libxl__aop_occurred;
+typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
+typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
 
 _hidden void libxl__alloc_failed(libxl_ctx *, const char *func,
                          size_t nmemb, size_t size) __attribute__((noreturn));
@@ -163,7 +165,7 @@ struct libxl__ev_fd {
     libxl__ev_fd_callback *func;
     /* remainder is private for libxl__ev_fd... */
     LIBXL_LIST_ENTRY(libxl__ev_fd) entry;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 
@@ -178,7 +180,7 @@ struct libxl__ev_time {
     int infinite; /* not registered in list or with app if infinite */
     LIBXL_TAILQ_ENTRY(libxl__ev_time) entry;
     struct timeval abs;
-    void *for_app_reg;
+    libxl__osevent_hook_nexus *nexus;
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
@@ -329,6 +331,8 @@ struct libxl__ctx {
     libxl__poller poller_app; /* libxl_osevent_beforepoll and _afterpoll */
     LIBXL_LIST_HEAD(, libxl__poller) pollers_event, pollers_idle;
 
+    LIBXL_SLIST_HEAD(libxl__osevent_hook_nexi, libxl__osevent_hook_nexus)
+        hook_fd_nexi_idle, hook_timeout_nexi_idle;
     LIBXL_LIST_HEAD(, libxl__ev_fd) efds;
     LIBXL_TAILQ_HEAD(, libxl__ev_time) etimes;
 
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Dec 10 16:57:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 16:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6fM-0000DJ-Lo; Mon, 10 Dec 2012 16:57:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti6fL-0000Cw-HI
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 16:57:15 +0000
Received: from [85.158.137.99:41067] by server-9.bemta-3.messagelabs.com id
	0D/53-02388-76416C05; Mon, 10 Dec 2012 16:57:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355158629!18758782!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30040 invoked from network); 10 Dec 2012 16:57:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 16:57:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="39886"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 16:57:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 16:57:11 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti6fG-0005jA-V7; Mon, 10 Dec 2012 16:57:11 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti6fD-0002uc-Pn;
	Mon, 10 Dec 2012 16:57:07 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Dec 2012 16:57:04 +0000
Message-ID: <1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <20678.5159.946248.90947@mariner.uk.xensource.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout, in a
multithreaded program those calls may be arbitrarily delayed in
relation to other activities within the program.

Specifically this means when ->timeout_deregister returns, libxl does
not know whether it can safely dispose of the for_libxl value or
whether it needs to retain it in case of an in-progress call to
_occurred_timeout.

The interface could be fixed by requiring the application to make a
new call into libxl to say that the deregistration was complete.

However that new call would have to be threaded through the
application's event loop; this is complicated and some application
authors are likely not to implement it properly.  Furthermore the
easiest way to implement this facility in most event loops is to queue
up a time event for "now".

Shortcut all of this by having libxl always call timeout_modify
setting abs={0,0} (ie, ASAP) instead of timeout_deregister.  This will
cause the application to call _occurred_timeout.  When processing this
calldown we see that we were no longer actually interested and simply
throw it away.

Additionally, there is a race between _occurred_timeout and
->timeout_modify.  If libxl ever adjusts the deadline for a timeout
the application may already be in the process of calling _occurred, in
which case the situation with for_app's lifetime becomes very
complicated.  Therefore abolish libxl__ev_time_modify_{abs,rel} (which
have no callers) and promise to the application only ever to call
->timeout_modify with abs=={0,0}.  The application still needs to cope
with ->timeout_modify racing with its internal function which calls
_occurred_timeout.  Document this.

This is a forwards-compatible change for applications using the libxl
API, and will hopefully eliminate these races in callback-supplying
applications (such as libvirt) without the need for corresponding
changes to the application.

For clarity, fold the body of time_register_finite into its one
remaining call site.  This makes the semantics of ev->infinite
slightly clearer.

Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v3:
  - Fix null pointer dereference in case when hooks not supplied.
---
 tools/libxl/libxl_event.c |   89 +++++++--------------------------------------
 tools/libxl/libxl_event.h |   17 ++++++++-
 2 files changed, 29 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index f1fe425..f86c528 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -267,18 +267,11 @@ static int time_rel_to_abs(libxl__gc *gc, int ms, struct timeval *abs_out)
     return 0;
 }
 
-static void time_insert_finite(libxl__gc *gc, libxl__ev_time *ev)
-{
-    libxl__ev_time *evsearch;
-    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
-                              timercmp(&ev->abs, &evsearch->abs, >));
-    ev->infinite = 0;
-}
-
 static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
                                 struct timeval absolute)
 {
     int rc;
+    libxl__ev_time *evsearch;
 
     rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
                       absolute, ev->nexus);
@@ -286,7 +279,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 
     ev->infinite = 0;
     ev->abs = absolute;
-    time_insert_finite(gc, ev);
+    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
+                              timercmp(&ev->abs, &evsearch->abs, >));
 
     return 0;
 }
@@ -294,7 +288,12 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
+        struct timeval right_away = { 0, 0 };
+        if (ev->nexus) /* only set if app provided hooks */
+            ev->nexus->ev = 0;
+        OSEVENT_HOOK_VOID(timeout,modify,
+                          noop /* release nexus in _occurred_ */,
+                          ev->nexus->for_app_reg, right_away);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -364,70 +363,6 @@ int libxl__ev_time_register_rel(libxl__gc *gc, libxl__ev_time *ev,
     return rc;
 }
 
-int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
-                              struct timeval absolute)
-{
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify abs==%lu.%06lu",
-        ev, (unsigned long)absolute.tv_sec, (unsigned long)absolute.tv_usec);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (ev->infinite) {
-        rc = time_register_finite(gc, ev, absolute);
-        if (rc) goto out;
-    } else {
-        rc = OSEVENT_HOOK(timeout,modify, noop,
-                          &ev->nexus->for_app_reg, absolute);
-        if (rc) goto out;
-
-        LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
-        ev->abs = absolute;
-        time_insert_finite(gc, ev);
-    }
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
-int libxl__ev_time_modify_rel(libxl__gc *gc, libxl__ev_time *ev,
-                              int milliseconds)
-{
-    struct timeval absolute;
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify ms=%d", ev, milliseconds);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (milliseconds < 0) {
-        time_deregister(gc, ev);
-        ev->infinite = 1;
-        rc = 0;
-        goto out;
-    }
-
-    rc = time_rel_to_abs(gc, milliseconds, &absolute);
-    if (rc) goto out;
-
-    rc = libxl__ev_time_modify_abs(gc, ev, absolute);
-    if (rc) goto out;
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
 void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     CTX_LOCK;
@@ -1161,7 +1096,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    libxl__osevent_hook_nexus *nexus = for_libxl;
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, nexus);
+
+    osevent_release_nexus(gc, &CTX->hook_timeout_nexi_idle, nexus);
+
     if (!ev) goto out;
     assert(!ev->infinite);
 
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3bcb6d3..51f2721 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -287,8 +287,10 @@ typedef struct libxl_osevent_hooks {
   int (*timeout_register)(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl);
   int (*timeout_modify)(void *user, void **for_app_registration_update,
-                         struct timeval abs);
-  void (*timeout_deregister)(void *user, void *for_app_registration);
+                         struct timeval abs)
+      /* only ever called with abs={0,0}, meaning ASAP */;
+  void (*timeout_deregister)(void *user, void *for_app_registration)
+      /* will never be called */;
 } libxl_osevent_hooks;
 
 /* The application which calls register_fd_hooks promises to
@@ -337,6 +339,17 @@ typedef struct libxl_osevent_hooks {
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
+ * Note that the application must cope with a call from libxl to
+ * timeout_modify racing with its own call to
+ * libxl_osevent_occurred_timeout.  libxl guarantees that
+ * timeout_modify will only be called with abs={0,0} but the
+ * application must still ensure that libxl's attempt to cause the
+ * timeout to occur immediately is safely ignored even if the timeout is
+ * actually already in the process of occurring.
+ *
+ * timeout_deregister is not used because it forms part of a
+ * deprecated unsafe mode of use of the API.
+ *
  * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
-- 
1.7.2.5



Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because there is not necessarily any lock held at the point the
application (eg, libvirt) calls libxl_osevent_occurred_timeout, in a
multithreaded program those calls may be arbitrarily delayed in
relation to other activities within the program.

Specifically this means when ->timeout_deregister returns, libxl does
not know whether it can safely dispose of the for_libxl value or
whether it needs to retain it in case of an in-progress call to
_occurred_timeout.

The interface could be fixed by requiring the application to make a
new call into libxl to say that the deregistration was complete.

However that new call would have to be threaded through the
application's event loop; this is complicated and some application
authors are likely not to implement it properly.  Furthermore the
easiest way to implement this facility in most event loops is to queue
up a time event for "now".

Shortcut all of this by having libxl always call timeout_modify
setting abs={0,0} (ie, ASAP) instead of timeout_deregister.  This will
cause the application to call _occurred_timeout.  When processing this
calldown we see that we are no longer actually interested and simply
throw it away.

Additionally, there is a race between _occurred_timeout and
->timeout_modify.  If libxl ever adjusts the deadline for a timeout
the application may already be in the process of calling _occurred, in
which case the situation with for_app's lifetime becomes very
complicated.  Therefore abolish libxl__ev_time_modify_{abs,rel} (which
have no callers) and promise to the application only ever to call
->timeout_modify with abs=={0,0}.  The application still needs to cope
with ->timeout_modify racing with its internal function which calls
_occurred_timeout.  Document this.

This is a forwards-compatible change for applications using the libxl
API, and will hopefully eliminate these races in callback-supplying
applications (such as libvirt) without the need for corresponding
changes to the application.

For clarity, fold the body of time_register_finite into its one
remaining call site.  This makes the semantics of ev->infinite
slightly clearer.

Cc: Bamvor Jian Zhang <bjzhang@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v3:
  - Fix null pointer dereference when hooks are not supplied.
---
 tools/libxl/libxl_event.c |   89 +++++++--------------------------------------
 tools/libxl/libxl_event.h |   17 ++++++++-
 2 files changed, 29 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index f1fe425..f86c528 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -267,18 +267,11 @@ static int time_rel_to_abs(libxl__gc *gc, int ms, struct timeval *abs_out)
     return 0;
 }
 
-static void time_insert_finite(libxl__gc *gc, libxl__ev_time *ev)
-{
-    libxl__ev_time *evsearch;
-    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
-                              timercmp(&ev->abs, &evsearch->abs, >));
-    ev->infinite = 0;
-}
-
 static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
                                 struct timeval absolute)
 {
     int rc;
+    libxl__ev_time *evsearch;
 
     rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
                       absolute, ev->nexus);
@@ -286,7 +279,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 
     ev->infinite = 0;
     ev->abs = absolute;
-    time_insert_finite(gc, ev);
+    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
+                              timercmp(&ev->abs, &evsearch->abs, >));
 
     return 0;
 }
@@ -294,7 +288,12 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
 static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     if (!ev->infinite) {
-        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
+        struct timeval right_away = { 0, 0 };
+        if (ev->nexus) /* only set if app provided hooks */
+            ev->nexus->ev = 0;
+        OSEVENT_HOOK_VOID(timeout,modify,
+                          noop /* release nexus in _occurred_ */,
+                          ev->nexus->for_app_reg, right_away);
         LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
     }
 }
@@ -364,70 +363,6 @@ int libxl__ev_time_register_rel(libxl__gc *gc, libxl__ev_time *ev,
     return rc;
 }
 
-int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
-                              struct timeval absolute)
-{
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify abs==%lu.%06lu",
-        ev, (unsigned long)absolute.tv_sec, (unsigned long)absolute.tv_usec);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (ev->infinite) {
-        rc = time_register_finite(gc, ev, absolute);
-        if (rc) goto out;
-    } else {
-        rc = OSEVENT_HOOK(timeout,modify, noop,
-                          &ev->nexus->for_app_reg, absolute);
-        if (rc) goto out;
-
-        LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
-        ev->abs = absolute;
-        time_insert_finite(gc, ev);
-    }
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
-int libxl__ev_time_modify_rel(libxl__gc *gc, libxl__ev_time *ev,
-                              int milliseconds)
-{
-    struct timeval absolute;
-    int rc;
-
-    CTX_LOCK;
-
-    DBG("ev_time=%p modify ms=%d", ev, milliseconds);
-
-    assert(libxl__ev_time_isregistered(ev));
-
-    if (milliseconds < 0) {
-        time_deregister(gc, ev);
-        ev->infinite = 1;
-        rc = 0;
-        goto out;
-    }
-
-    rc = time_rel_to_abs(gc, milliseconds, &absolute);
-    if (rc) goto out;
-
-    rc = libxl__ev_time_modify_abs(gc, ev, absolute);
-    if (rc) goto out;
-
-    rc = 0;
- out:
-    time_done_debug(gc,__func__,ev,rc);
-    CTX_UNLOCK;
-    return rc;
-}
-
 void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
 {
     CTX_LOCK;
@@ -1161,7 +1096,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
     CTX_LOCK;
     assert(!CTX->osevent_in_hook);
 
-    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
+    libxl__osevent_hook_nexus *nexus = for_libxl;
+    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, nexus);
+
+    osevent_release_nexus(gc, &CTX->hook_timeout_nexi_idle, nexus);
+
     if (!ev) goto out;
     assert(!ev->infinite);
 
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3bcb6d3..51f2721 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -287,8 +287,10 @@ typedef struct libxl_osevent_hooks {
   int (*timeout_register)(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl);
   int (*timeout_modify)(void *user, void **for_app_registration_update,
-                         struct timeval abs);
-  void (*timeout_deregister)(void *user, void *for_app_registration);
+                         struct timeval abs)
+      /* only ever called with abs={0,0}, meaning ASAP */;
+  void (*timeout_deregister)(void *user, void *for_app_registration)
+      /* will never be called */;
 } libxl_osevent_hooks;
 
 /* The application which calls register_fd_hooks promises to
@@ -337,6 +339,17 @@ typedef struct libxl_osevent_hooks {
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
+ * Note that the application must cope with a call from libxl to
+ * timeout_modify racing with its own call to
+ * libxl__osevent_occurred_timeout.  libxl guarantees that
+ * timeout_modify will only be called with abs={0,0} but the
+ * application must still ensure that libxl's attempt to cause the
+ * timeout to occur immediately is safely ignored even if the timeout is
+ * actually already in the process of occurring.
+ *
+ * timeout_deregister is not used because it forms part of a
+ * deprecated unsafe mode of use of the API.
+ *
  * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:01:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6jK-0000dJ-JS; Mon, 10 Dec 2012 17:01:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti6jI-0000d3-Kj
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 17:01:20 +0000
Received: from [85.158.139.83:41037] by server-9.bemta-5.messagelabs.com id
	17/87-10690-F5516C05; Mon, 10 Dec 2012 17:01:19 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1355158878!21964876!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9792 invoked from network); 10 Dec 2012 17:01:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 17:01:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="39979"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 17:01:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 17:01:17 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti6jF-0005kc-ML; Mon, 10 Dec 2012 17:01:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti6jF-0002vC-IT;
	Mon, 10 Dec 2012 17:01:17 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20678.5469.347452.416307@mariner.uk.xensource.com>
Date: Mon, 10 Dec 2012 17:01:17 +0000
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1212101556590.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
	<20674.5586.142286.869968@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1212101556590.17523@kaball.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "dongxiao.xu@intel.com" <dongxiao.xu@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and
 cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini writes ("Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and cpu_ioreq_move"):
> On Fri, 7 Dec 2012, Ian Jackson wrote:
...
> > +    if (req->df) addr -= offset;
> > +    else addr -= offset;
> 
> This can't be right, can it?

Indeed not.  v2 has this fixed.

> The search/replace changes below look correct.

Thanks.

> For the sake of consistency, could you please send a patch against
> upstream QEMU to qemu-devel? The corresponding code is in xen-all.c
> (cpu_ioreq_pio and cpu_ioreq_move).

I will do that.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:05:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6nI-0000tj-9H; Mon, 10 Dec 2012 17:05:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Ti6nG-0000td-La
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 17:05:26 +0000
Received: from [85.158.139.211:51699] by server-12.bemta-5.messagelabs.com id
	2C/33-02275-65616C05; Mon, 10 Dec 2012 17:05:26 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1355159125!19897482!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6636 invoked from network); 10 Dec 2012 17:05:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 17:05:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="40055"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 17:05:25 +0000
Received: from [192.168.1.30] (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 17:05:24 +0000
Message-ID: <50C61653.70107@citrix.com>
Date: Mon, 10 Dec 2012 18:05:23 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
	<1354630913-17287-2-git-send-email-roger.pau@citrix.com>
	<20121207202003.GA9462@phenom.dumpdata.com>
	<50C5D276.6090009@citrix.com>
	<20121210151533.GE6955@localhost.localdomain>
In-Reply-To: <20121210151533.GE6955@localhost.localdomain>
Cc: "sfr@canb.auug.org.au" <sfr@canb.auug.org.au>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
 llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/12 16:15, Konrad Rzeszutek Wilk wrote:
> On Mon, Dec 10, 2012 at 01:15:50PM +0100, Roger Pau Monn=E9 wrote:
>> On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
>>>> Implement a safe version of llist_for_each_entry, and use it in
>>>> blkif_free. Previously grants were freed while iterating the list,
>>>> which led to dereferences when trying to fetch the next item.
>>>
>>> Looks like xen-blkfront is the only user of this llist_for_each_entry.
>>>
>>> Would it be more prudent to put the macro in the llist.h file?
>>
>> I'm not able to find out who is the maintainer of llist, should I just
>> CC its author?
> =

> Sure. I CC-ed akpm here to solicit his input as well. Either way I am
> OK with this being in xen-blkfront but it just seems that it could
> be useful in the llist file since that is where the non-safe version
> resides.

I'm going to resend this series with llist_for_each_entry_safe added to
llist.h in a separate patch. I wouldn't like to delay this series just
because we cannot get llist_for_each_entry_safe added to llist.h; that's
why I've added it to blkfront in the first place.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:15:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:15:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti6wf-00015Z-CX; Mon, 10 Dec 2012 17:15:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carnold@suse.com>) id 1Ti6we-00015R-0O
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 17:15:08 +0000
Received: from [193.109.254.147:41590] by server-8.bemta-14.messagelabs.com id
	C9/AA-05026-B9816C05; Mon, 10 Dec 2012 17:15:07 +0000
X-Env-Sender: carnold@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355159705!2318549!1
X-Originating-IP: [137.65.248.74]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12218 invoked from network); 10 Dec 2012 17:15:05 -0000
Received: from novprvoes0310.provo.novell.com (HELO
	novprvoes0310.provo.novell.com) (137.65.248.74)
	by server-7.tower-27.messagelabs.com with SMTP;
	10 Dec 2012 17:15:05 -0000
Received: from INET-PRV-MTA by novprvoes0310.provo.novell.com
	with Novell_GroupWise; Mon, 10 Dec 2012 10:15:04 -0700
Message-Id: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 10 Dec 2012 10:15:01 -0700
From: "Charles Arnold" <carnold@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ross.philipson@citrix.com
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I haven't seen any activity on this feature.  Is it still planned to be included in Xen 4.3?

- Charles

On Wed, 2012-05-23 at 14:37 +0000, Ross Philipson wrote:
> This patch series introduces support of loading external blocks of firmware
> into a guest. These blocks can currently contain SMBIOS and/or ACPI firmware
> information that is used by HVMLOADER to modify a guest's virtual firmware at
> startup. These modules are only used by HVMLOADER.
> 
> The domain building code in libxenguest is passed these firmware blocks
> in the xc_hvm_build_args structure and loads them into the new guest,
> returning the load address. The loading is done in what will become the 
> guest's
> low RAM area just behind the load location for HVMLOADER. After their use by
> HVMLOADER they are effectively discarded. It is the caller's job to load the
> base address and length values in xenstore using the paths defined in the 
> new
> hvm_defs.h header so HVMLOADER can located the blocks.
> 
> Currently two types of firmware information are recognized and processed
> in HVMLOADER, though this could be extended.
> 
> 1. SMBIOS: The SMBIOS table building code will attempt to retrieve (for a
> predefined set of structure types) any passed-in structures. If a match is
> found, the passed-in table will be used, overriding the default values. In
> addition, the SMBIOS code will also enumerate and load any vendor-defined
> structures (in the type range 128 - 255) that are passed in. See the
> hvm_defs.h header for information on the format of this block.
> 2. ACPI: Static and secondary descriptor tables can be added to the set of
> ACPI tables built by HVMLOADER. The ACPI builder code will enumerate
> passed-in tables and add them at the end of the secondary table list. See
> the hvm_defs.h header for information on the format of this block.
> 
> There are 4 patches in the series:
> 01 - Add HVM definitions header for firmware passthrough support.
> 02 - Xen control tools support for loading the firmware blocks.
> 03 - Passthrough support for SMBIOS.
> 04 - Passthrough support for ACPI.
> 
> Note this is version 3 of this patch set. Some of the differences:
>  - Generic module support removed; overall functionality was simplified.
>  - Use of xenstore to supply firmware passthrough information to HVMLOADER.
>  - Fixed issues pointed out in the SMBIOS processing code.
>  - Created defines for the SMBIOS handles in use and switched to using
>    the xenstore values in the new hvm_defs.h file.
> 
> Signed-off-by: Ross Philipson <ross.philipson@xxxxxxxxxx>
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:25:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti76W-0001Jd-W0; Mon, 10 Dec 2012 17:25:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Ti76V-0001JR-KE
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 17:25:19 +0000
Received: from [85.158.143.35:49332] by server-3.bemta-4.messagelabs.com id
	80/E4-18211-EFA16C05; Mon, 10 Dec 2012 17:25:18 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355160314!13250009!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2543 invoked from network); 10 Dec 2012 17:25:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 17:25:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="40445"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 17:25:09 +0000
Received: from mac.citrite.net (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 17:25:07 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 10 Dec 2012 18:24:48 +0100
Message-ID: <1355160290-22180-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, xen-devel@lists.xen.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/3] xen-blkback: implement safe iterator for
	the list of persistent grants
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the foreach_grant iterator to a safe version that allows freeing
the element while iterating. Also move the free code in
free_persistent_gnts to prevent freeing the element before the rb_next
call.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: xen-devel@lists.xen.org
---
 drivers/block/xen-blkback/blkback.c |   18 +++++++++++-------
 1 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 74374fb..5ac841f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -161,10 +161,12 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 static void make_response(struct xen_blkif *blkif, u64 id,
			  unsigned short op, int st);
 
-#define foreach_grant(pos, rbtree, node) \
-	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node); \
+#define foreach_grant_safe(pos, n, rbtree, node) \
+	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
+	     (n) = rb_next(&(pos)->node); \
 	     &(pos)->node != NULL; \
-	     (pos) = container_of(rb_next(&(pos)->node), typeof(*(pos)), node))
+	     (pos) = container_of(n, typeof(*(pos)), node), \
+	     (n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL)
 
 
 static void add_persistent_gnt(struct rb_root *root,
@@ -217,10 +219,11 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 	struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct persistent_gnt *persistent_gnt;
+	struct rb_node *n;
 	int ret = 0;
 	int segs_to_unmap = 0;
 
-	foreach_grant(persistent_gnt, root, node) {
+	foreach_grant_safe(persistent_gnt, n, root, node) {
 		BUG_ON(persistent_gnt->handle ==
 			BLKBACK_INVALID_HANDLE);
 		gnttab_set_unmap_op(&unmap[segs_to_unmap],
@@ -230,9 +233,6 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 			persistent_gnt->handle);
 
 		pages[segs_to_unmap] = persistent_gnt->page;
-		rb_erase(&persistent_gnt->node, root);
-		kfree(persistent_gnt);
-		num--;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
			!rb_next(&persistent_gnt->node)) {
@@ -241,6 +241,10 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 			BUG_ON(ret);
 			segs_to_unmap = 0;
 		}
+
+		rb_erase(&persistent_gnt->node, root);
+		kfree(persistent_gnt);
+		num--;
 	}
 	BUG_ON(num != 0);
 }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:25:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti76V-0001JS-Jl; Mon, 10 Dec 2012 17:25:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Ti76U-0001JM-7t
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 17:25:18 +0000
Received: from [85.158.143.35:49285] by server-2.bemta-4.messagelabs.com id
	D1/E1-30861-DFA16C05; Mon, 10 Dec 2012 17:25:17 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355160314!13250009!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2476 invoked from network); 10 Dec 2012 17:25:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 17:25:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="40447"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 17:25:10 +0000
Received: from mac.citrite.net (10.31.3.228) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 10 Dec 2012 17:25:08 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 10 Dec 2012 18:24:50 +0100
Message-ID: <1355160290-22180-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1355160290-22180-1-git-send-email-roger.pau@citrix.com>
References: <1355160290-22180-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, xen-devel@lists.xen.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/3] xen-blkfront: traverse list of
	persistent grants safely
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use llist_for_each_entry_safe in blkif_free. Previously grants were
freed while iterating the list, which led to dereferences when trying
to fetch the next item.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: xen-devel@lists.xen.org
---
 drivers/block/xen-blkfront.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 96e9b00..cfdb033 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -792,6 +792,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 {
 	struct llist_node *all_gnts;
 	struct grant *persistent_gnt;
+	struct llist_node *n;
 
 	/* Prevent new requests being issued until we fix things up. */
 	spin_lock_irq(&info->io_lock);
@@ -804,7 +805,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	/* Remove all persistent grants */
 	if (info->persistent_gnts_c) {
 		all_gnts = llist_del_all(&info->persistent_gnts);
-		llist_for_each_entry(persistent_gnt, all_gnts, node) {
+		llist_for_each_entry_safe(persistent_gnt, n, all_gnts, node) {
 			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
 			__free_page(pfn_to_page(persistent_gnt->pfn));
 			kfree(persistent_gnt);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:41:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7MK-0001eY-K6; Mon, 10 Dec 2012 17:41:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti7MJ-0001eS-NG
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 17:41:39 +0000
Received: from [85.158.143.35:46341] by server-3.bemta-4.messagelabs.com id
	E9/57-18211-2DE16C05; Mon, 10 Dec 2012 17:41:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1355158038!14037319!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTA3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9137 invoked from network); 10 Dec 2012 16:47:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 16:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="119876"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 16:47:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 11:47:17 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti6Vh-0004Nt-5d;
	Mon, 10 Dec 2012 16:47:17 +0000
Date: Mon, 10 Dec 2012 16:47:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <183697744.20121207215104@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212101644330.17523@kaball.uk.xensource.com>
References: <92707120.20121206223208@eikelenboom.it>
	<alpine.DEB.2.02.1212071702400.8801@kaball.uk.xensource.com>
	<183697744.20121207215104@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony.Perard@citrix.com, xen-devel <xen-devel@lists.xen.org>,
	qemu-devel@nongnu.org,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] xen: fix trivial PCI passthrough MSI-X bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are currently passing entry->data as address parameter. Pass
entry->addr instead.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
Xen-devel: http://marc.info/?l=xen-devel&m=135515462613715

diff --git a/hw/xen_pt_msi.c b/hw/xen_pt_msi.c
index 6807672..db757cd 100644
--- a/hw/xen_pt_msi.c
+++ b/hw/xen_pt_msi.c
@@ -321,7 +321,7 @@ static int xen_pt_msix_update_one(XenPCIPassthroughState *s, int entry_nr)
 
     pirq = entry->pirq;
 
-    rc = msi_msix_setup(s, entry->data, entry->data, &pirq, true, entry_nr,
+    rc = msi_msix_setup(s, entry->addr, entry->data, &pirq, true, entry_nr,
                         entry->pirq == XEN_PT_UNASSIGNED_PIRQ);
     if (rc) {
         return rc;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 17:50:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 17:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7Ux-0001o3-Kt; Mon, 10 Dec 2012 17:50:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti7Uw-0001ny-Kq
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 17:50:34 +0000
Received: from [85.158.139.83:23035] by server-10.bemta-5.messagelabs.com id
	89/D0-13383-9E026C05; Mon, 10 Dec 2012 17:50:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355161832!29229757!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3550 invoked from network); 10 Dec 2012 17:50:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 17:50:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="40824"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 17:50:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 17:50:32 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Ti7Ur-0005zX-5j;
	Mon, 10 Dec 2012 17:50:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Ti7Up-00068q-RO;
	Mon, 10 Dec 2012 17:50:28 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14662-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Dec 2012 17:50:27 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14662: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14662 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14662/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14566
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14566

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  309ff3ad9dcc
baseline version:
 xen                  a8a9e1c126ea

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=309ff3ad9dcc
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 309ff3ad9dcc
+ branch=xen-4.1-testing
+ revision=309ff3ad9dcc
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 309ff3ad9dcc ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 2 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:04:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:04:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7hz-0002CT-OR; Mon, 10 Dec 2012 18:04:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti7hy-0002CH-0Y
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 18:04:02 +0000
Received: from [85.158.139.211:17443] by server-10.bemta-5.messagelabs.com id
	B3/14-13383-11426C05; Mon, 10 Dec 2012 18:04:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355162638!15636609!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTA3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15203 invoked from network); 10 Dec 2012 18:04:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 18:04:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="130904"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 18:04:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 13:03:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti7Vy-0005No-62;
	Mon, 10 Dec 2012 17:51:38 +0000
Date: Mon, 10 Dec 2012 17:51:36 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50C5C32802000078000AF542@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212101741460.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C5C32802000078000AF542@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim
	\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Jan Beulich wrote:
> >>> On 07.12.12 at 19:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > Abstract away from vesa.c the functions to handle a linear framebuffer
> > and print characters to it.
> > The corresponding functions are going to be removed from vesa.c in the
> > next patch.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/drivers/video/Makefile |    1 +
> >  xen/drivers/video/fb.c     |  209 
> > ++++++++++++++++++++++++++++++++++++++++++++
> >  xen/drivers/video/fb.h     |   49 ++++++++++
> >  3 files changed, 259 insertions(+), 0 deletions(-)
> >  create mode 100644 xen/drivers/video/fb.c
> >  create mode 100644 xen/drivers/video/fb.h
> > 
> > diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
> > index 2993c39..3b3eb43 100644
> > --- a/xen/drivers/video/Makefile
> > +++ b/xen/drivers/video/Makefile
> > @@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
> >  obj-$(HAS_VIDEO) += font_8x14.o
> >  obj-$(HAS_VIDEO) += font_8x16.o
> >  obj-$(HAS_VIDEO) += font_8x8.o
> > +obj-$(HAS_VIDEO) += fb.o
> >  obj-$(HAS_VGA) += vesa.o
> > diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
> > new file mode 100644
> > index 0000000..282f97e
> > --- /dev/null
> > +++ b/xen/drivers/video/fb.c
> > @@ -0,0 +1,209 @@
> > +/******************************************************************************
> > + * fb.c
> > + *
> > + * linear frame buffer handling.
> > + */
> > +
> > +#include <xen/config.h>
> > +#include <xen/kernel.h>
> > +#include <xen/lib.h>
> > +#include <xen/errno.h>
> > +#include "fb.h"
> > +#include "font.h"
> > +
> > +#define MAX_XRES 1900
> > +#define MAX_YRES 1200
> > +#define MAX_BPP 4
> > +#define MAX_FONT_W 8
> > +#define MAX_FONT_H 16
> > +static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
> > +static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
> > +static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) * \
> > +                          (MAX_YRES / MAX_FONT_H)];
> > +
> > +struct fb_status {
> > +    struct fb_prop fbp;
> > +
> > +    unsigned char *lbuf, *text_buf;
> > +    unsigned int *line_len;
> > +    unsigned int xpos, ypos;
> > +};
> > +static struct fb_status fb;
> > +
> > +static void fb_show_line(
> > +    const unsigned char *text_line,
> > +    unsigned char *video_line,
> > +    unsigned int nr_chars,
> > +    unsigned int nr_cells)
> > +{
> > +    unsigned int i, j, b, bpp, pixel;
> > +
> > +    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
> > +
> > +    for ( i = 0; i < fb.fbp.font->height; i++ )
> > +    {
> > +        unsigned char *ptr = fb.lbuf;
> > +
> > +        for ( j = 0; j < nr_chars; j++ )
> > +        {
> > +            const unsigned char *bits = fb.fbp.font->data;
> > +            bits += ((text_line[j] * fb.fbp.font->height + i) *
> > +                     ((fb.fbp.font->width + 7) >> 3));
> > +            for ( b = fb.fbp.font->width; b--; )
> > +            {
> > +                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
> > +                memcpy(ptr, &pixel, bpp);
> > +                ptr += bpp;
> > +            }
> > +        }
> > +
> > +        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
> > +        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
> > +        video_line += fb.fbp.bytes_per_line;
> > +    }
> > +}
> > +
> > +/* Fast mode which redraws all modified parts of a 2D text buffer. */
> > +void fb_redraw_puts(const char *s)
> > +{
> > +    unsigned int i, min_redraw_y = fb.ypos;
> > +    char c;
> > +
> > +    /* Paste characters into text buffer. */
> > +    while ( (c = *s++) != '\0' )
> > +    {
> > +        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
> > +        {
> > +            if ( ++fb.ypos >= fb.fbp.text_rows )
> > +            {
> > +                min_redraw_y = 0;
> > +                fb.ypos = fb.fbp.text_rows - 1;
> > +                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
> > +                        fb.ypos * fb.fbp.text_columns);
> > +                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, fb.xpos);
> > +            }
> > +            fb.xpos = 0;
> > +        }
> > +
> > +        if ( c != '\n' )
> > +            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
> > +    }
> > +
> > +    /* Render modified section of text buffer to VESA linear framebuffer. */
> > +    for ( i = min_redraw_y; i <= fb.ypos; i++ )
> > +    {
> > +        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
> > +        unsigned int width;
> > +
> > +        for ( width = fb.fbp.text_columns; width; --width )
> > +            if ( line[width - 1] )
> > +                 break;
> > +        fb_show_line(line,
> > +                       fb.fbp.lfb + i * fb.fbp.font->height * fb.fbp.bytes_per_line,
> > +                       width, max(fb.line_len[i], width));
> > +        fb.line_len[i] = width;
> > +    }
> > +
> > +    fb.fbp.flush();
> > +}
> > +
> > +/* Slower line-based scroll mode which interacts better with dom0. */
> > +void fb_scroll_puts(const char *s)
> > +{
> > +    unsigned int i;
> > +    char c;
> > +
> > +    while ( (c = *s++) != '\0' )
> > +    {
> > +        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
> > +        {
> > +            unsigned int bytes = (fb.fbp.width *
> > +                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
> > +            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * fb.fbp.bytes_per_line;
> > +            unsigned char *dst = fb.fbp.lfb;
> > +
> > +            /* New line: scroll all previous rows up one line. */
> > +            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
> > +            {
> > +                memcpy(dst, src, bytes);
> > +                src += fb.fbp.bytes_per_line;
> > +                dst += fb.fbp.bytes_per_line;
> > +            }
> > +
> > +            /* Render new line. */
> > +            fb_show_line(
> > +                fb.text_buf,
> > +                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * fb.fbp.bytes_per_line,
> > +                fb.xpos, fb.fbp.text_columns);
> > +
> > +            fb.xpos = 0;
> > +        }
> > +
> > +        if ( c != '\n' )
> > +            fb.text_buf[fb.xpos++] = c;
> > +    }
> > +
> > +    fb.fbp.flush();
> > +}
> > +
> > +void fb_cr(void)
> > +{
> > +    fb.xpos = 0;
> > +}
> > +
> > +int __init fb_init(struct fb_prop fbp)
> > +{
> > +    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
> > +    {
> > +        printk("Couldn't initialize a %xx%x framebuffer early.\n",
> > +                        fbp.width, fbp.height);
> > +        return -EINVAL;
> > +    }
> > +
> > +    fb.fbp = fbp;
> > +    fb.lbuf = lbuf;
> > +    fb.text_buf = text_buf;
> > +    fb.line_len = line_len;
> > +    return 0;
> > +}
> > +
> > +int __init fb_alloc(void)
> > +{
> > +    fb.lbuf = NULL;
> > +    fb.text_buf = NULL;
> > +    fb.line_len = NULL;
> > +
> > +    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
> > +    if ( !fb.lbuf )
> > +        goto fail;
> > +
> > +    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
> > +    if ( !fb.text_buf )
> > +        goto fail;
> > +
> > +    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
> > +    if ( !fb.line_len )
> > +        goto fail;
> > +
> > +    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
> > +    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
> > +    memcpy(fb.line_len, line_len, fb.fbp.text_columns);
> > +
> > +    return 0;
> > +
> > +fail:
> > +    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
> > +                    "the framebuffer\n");
> > +    xfree(fb.lbuf);
> > +    xfree(fb.text_buf);
> > +    xfree(fb.line_len);
> > +
> > +    return -ENOMEM;
> > +}
> > +
> > +void fb_free(void)
> > +{
> > +    xfree(fb.lbuf);
> > +    xfree(fb.text_buf);
> > +    xfree(fb.line_len);
> > +}
> > diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
> > new file mode 100644
> > index 0000000..558d058
> > --- /dev/null
> > +++ b/xen/drivers/video/fb.h
> > @@ -0,0 +1,49 @@
> > +/*
> > + * xen/drivers/video/fb.h
> > + *
> > + * Cross-platform framebuffer library
> > + *
> > + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > + * Copyright (c) 2012 Citrix Systems.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + */
> > +
> > +#ifndef _XEN_FB_H
> > +#define _XEN_FB_H
> > +
> > +#include <xen/init.h>
> > +
> > +struct fb_prop {
> > +    const struct font_desc *font;
> > +    unsigned char *lfb;
> > +    unsigned int pixel_on;
> > +    uint16_t width, height;
> > +    uint16_t bytes_per_line;
> > +    uint16_t bits_per_pixel;
> > +    void (*flush)(void);
> > +
> > +    unsigned int text_columns;
> > +    unsigned int text_rows;
> > +};
> > +
> > +void fb_redraw_puts(const char *s);
> > +void fb_scroll_puts(const char *s);
> > +void fb_cr(void);
> 
> Please make this fb_create() or the like - "cr" alone could well be
> mistaken for "carriage return".

Actually it is supposed to be "cr": it is used by vesa_endboot to reset
the cursor to column 0.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:04:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7hz-0002CM-BS; Mon, 10 Dec 2012 18:04:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti7hx-0002CC-13
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 18:04:01 +0000
Received: from [85.158.139.211:17310] by server-12.bemta-5.messagelabs.com id
	ED/EC-02275-01426C05; Mon, 10 Dec 2012 18:04:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355162638!15636609!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTA3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15001 invoked from network); 10 Dec 2012 18:03:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 18:03:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="130875"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 18:03:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 13:03:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti7Yd-0005QP-Mf;
	Mon, 10 Dec 2012 17:54:23 +0000
Date: Mon, 10 Dec 2012 17:54:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50C5C39302000078000AF545@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212101752450.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C5C39302000078000AF545@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim
	\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Jan Beulich wrote:
> >>> On 07.12.12 at 19:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > Make use of the framebuffer functions previously introduced.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
> >  1 files changed, 26 insertions(+), 153 deletions(-)
> > 
> > diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
> > index aaf8b23..778cfdf 100644
> > --- a/xen/drivers/video/vesa.c
> > +++ b/xen/drivers/video/vesa.c
> > @@ -13,20 +13,15 @@
> >  #include <asm/io.h>
> >  #include <asm/page.h>
> >  #include "font.h"
> > +#include "fb.h"
> >  
> >  #define vlfb_info    vga_console_info.u.vesa_lfb
> > -#define text_columns (vlfb_info.width / font->width)
> > -#define text_rows    (vlfb_info.height / font->height)
> >  
> > -static void vesa_redraw_puts(const char *s);
> > -static void vesa_scroll_puts(const char *s);
> > +static void lfb_flush(void);
> >  
> > -static unsigned char *lfb, *lbuf, *text_buf;
> > -static unsigned int *__initdata line_len;
> > +static unsigned char *lfb;
> 
> What's the point of retaining this, when ...
> 
> >  static const struct font_desc *font;
> >  static bool_t vga_compat;
> > -static unsigned int pixel_on;
> > -static unsigned int xpos, ypos;
> >  
> >  static unsigned int vram_total;
> >  integer_param("vesa-ram", vram_total);
> > @@ -87,29 +82,26 @@ void __init vesa_early_init(void)
> >  
> >  void __init vesa_init(void)
> >  {
> > -    if ( !font )
> > -        goto fail;
> > -
> > -    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
> > -    if ( !lbuf )
> > -        goto fail;
> > +    struct fb_prop fbp;
> >  
> > -    text_buf = xzalloc_bytes(text_columns * text_rows);
> > -    if ( !text_buf )
> > -        goto fail;
> > +    if ( !font )
> > +        return;
> >  
> > -    line_len = xzalloc_array(unsigned int, text_columns);
> > -    if ( !line_len )
> > -        goto fail;
> > +    fbp.font = font;
> > +    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
> > +    fbp.bytes_per_line = vlfb_info.bytes_per_line;
> > +    fbp.width = vlfb_info.width;
> > +    fbp.height = vlfb_info.height;
> > +    fbp.flush = lfb_flush;
> > +    fbp.text_columns = vlfb_info.width / font->width;
> > +    fbp.text_rows = vlfb_info.height / font->height;
> >  
> > -    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
> > +    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);
> 
> ... you set up the consolidated field at the same time anyway?

It is used by vesa_mtrr_init and vesa_endboot.
At the moment I don't store fbp in a static variable, so after
vesa_init returns, vesa.c has no way to retrieve it.
Maybe I should introduce a "struct fb_prop fbp" static variable that
replaces lfb?
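
For illustration, a minimal standalone sketch of that suggestion (the
struct fields and mock_* names are hypothetical stand-ins, not the actual
Xen structures or entry points):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mock of the idea above: keep a static struct fb_prop
 * instead of the bare "unsigned char *lfb", so code that runs after
 * vesa_init() (vesa_mtrr_init and vesa_endboot in the real driver)
 * can still reach the mapped framebuffer. */
struct fb_prop {
    unsigned char *lfb;          /* mapped framebuffer */
    unsigned int width, height;  /* illustrative subset of the fields */
};

static struct fb_prop fbp;

/* Stand-in for the tail of vesa_init(): record the mapping once. */
static void mock_vesa_init(unsigned char *mapped, unsigned int w,
                           unsigned int h)
{
    fbp.lfb = mapped;
    fbp.width = w;
    fbp.height = h;
}

/* What vesa_endboot() would consult instead of the old lfb pointer. */
static unsigned char *mock_framebuffer(void)
{
    return fbp.lfb;
}
```
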

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:04:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7iY-0002GT-68; Mon, 10 Dec 2012 18:04:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti7iW-0002GG-PD
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 18:04:36 +0000
Received: from [85.158.137.99:34669] by server-9.bemta-3.messagelabs.com id
	3B/7A-02388-F2426C05; Mon, 10 Dec 2012 18:04:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355162670!13251108!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6999 invoked from network); 10 Dec 2012 18:04:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 18:04:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="41102"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 18:04:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 18:04:29 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti7iQ-00063r-1R; Mon, 10 Dec 2012 18:04:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti7iP-00071Z-R5;
	Mon, 10 Dec 2012 18:04:29 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: qemu-devel@nongnu.org,
	xen-devel@lists.xensource.com
Date: Mon, 10 Dec 2012 18:04:21 +0000
Message-ID: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH 0/2] cpu_ioreq_pio,
	cpu_ioreq_move: simply and fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two patches fix a potential infinite loop if an ioreq has a very
large count, and also simplify the code.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:05:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7ii-0002Hu-Jd; Mon, 10 Dec 2012 18:04:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti7ig-0002HZ-R6
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 18:04:47 +0000
Received: from [85.158.143.35:20117] by server-3.bemta-4.messagelabs.com id
	3C/D8-18211-E3426C05; Mon, 10 Dec 2012 18:04:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355162680!15784508!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15917 invoked from network); 10 Dec 2012 18:04:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 18:04:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="41109"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 18:04:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 18:04:39 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti7iZ-000641-MH; Mon, 10 Dec 2012 18:04:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti7iY-00071h-MA;
	Mon, 10 Dec 2012 18:04:38 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: qemu-devel@nongnu.org,
	xen-devel@lists.xensource.com
Date: Mon, 10 Dec 2012 18:04:23 +0000
Message-ID: <1355162663-26956-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Dongxiao Xu <dongxiao.xu@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2/2] cpu_ioreq_pio,
	cpu_ioreq_move: i should be uint32_t rather than int
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current code compares i (int) with req->count (uint32_t) in a for
loop, risking an infinite loop if req->count is equal to UINT_MAX.

Also, i is only used in comparisons or multiplications with unsigned
integers.
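
The pitfall can be shown in isolation; a small sketch (the helper names
are made up for the example, and the behaviour assumes a platform where
int is 32 bits, as in the affected code):

```c
#include <assert.h>
#include <stdint.h>

/* With the old declaration, comparing an int index against a uint32_t
 * bound converts the int to unsigned (usual arithmetic conversions),
 * so negative values compare as huge ones and the index can never be
 * seen as past a bound of UINT32_MAX. */
static int old_style_below(int i, uint32_t count)
{
    return i < count;           /* i is converted to uint32_t here */
}

/* The patched declaration: both operands unsigned, no conversion
 * surprise and no signed-overflow undefined behaviour on i++. */
static int new_style_below(uint32_t i, uint32_t count)
{
    return i < count;
}
```
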

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Dongxiao Xu <dongxiao.xu@intel.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 xen-all.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 97c8ef4..aabbb80 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -717,7 +717,7 @@ static inline void write_phys_req_item(hwaddr addr,
 
 static void cpu_ioreq_pio(ioreq_t *req)
 {
-    int i;
+    uint32_t i;
 
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
@@ -746,7 +746,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
 
 static void cpu_ioreq_move(ioreq_t *req)
 {
-    int i;
+    uint32_t i;
 
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:05:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7j1-0002LE-Hu; Mon, 10 Dec 2012 18:05:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti7iw-0002Kt-Ph
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 18:05:03 +0000
Received: from [85.158.137.99:39666] by server-8.bemta-3.messagelabs.com id
	23/37-07786-03426C05; Mon, 10 Dec 2012 18:04:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355162670!13251108!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7079 invoked from network); 10 Dec 2012 18:04:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 18:04:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="41104"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 18:04:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 18:04:31 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti7iR-00063u-Uv; Mon, 10 Dec 2012 18:04:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti7iR-00071d-OD;
	Mon, 10 Dec 2012 18:04:31 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <qemu-devel@nongnu.org>, <xen-devel@lists.xensource.com>
Date: Mon, 10 Dec 2012 18:04:22 +0000
Message-ID: <1355162663-26956-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Dongxiao Xu <dongxiao.xu@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 1/2] cpu_ioreq_pio,
	cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Replace a lot of formulaic multiplications (containing casts, no less)
with calls to a pair of functions.  This encapsulates in a single
place the operations which require care relating to integer overflow.
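
As a rough standalone sketch of the overflow-safe address computation
(hwaddr is approximated by uint64_t here, and the real helpers in the
patch also perform the memory access itself):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t hwaddr;   /* stand-in for QEMU's hwaddr */

/* Compute the address of item i of an ioreq, mirroring the helper in
 * the patch: the offset is built entirely in unsigned arithmetic, so
 * a large size * i product wraps (well-defined, and merely accesses
 * the "wrong" guest memory) instead of producing the signed-overflow
 * undefined behaviour the old int64_t casts risked.  df selects the
 * descending (1) or ascending (0) direction. */
static hwaddr req_item_addr(hwaddr base, uint32_t size, uint32_t i, int df)
{
    hwaddr offset = (hwaddr)size * i;
    return df ? base - offset : base + offset;
}
```
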

Cc: Dongxiao Xu <dongxiao.xu@intel.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 xen-all.c |   73 ++++++++++++++++++++++++++++++++++++-------------------------
 1 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 046cc2a..97c8ef4 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -682,11 +682,42 @@ static void do_outp(pio_addr_t addr,
     }
 }
 
-static void cpu_ioreq_pio(ioreq_t *req)
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(hwaddr addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
+{
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    hwaddr offset = (hwaddr)req->size * i;
+    if (req->df) addr -= offset;
+    else addr += offset;
+    cpu_physical_memory_rw(addr, val, req->size, rw);
+}
+
+static inline void read_phys_req_item(hwaddr addr,
+                                      ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(hwaddr addr,
+                                       ioreq_t *req, uint32_t i, void *val)
 {
-    int i, sign;
+    rw_phys_req_item(addr, req, i, val, 1);
+}
 
-    sign = req->df ? -1 : 1;
+
+static void cpu_ioreq_pio(ioreq_t *req)
+{
+    int i;
 
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
@@ -696,9 +727,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
 
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(req->addr, req->size);
-                cpu_physical_memory_write(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t *) &tmp, req->size);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         }
     } else if (req->dir == IOREQ_WRITE) {
@@ -708,9 +737,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 uint32_t tmp = 0;
 
-                cpu_physical_memory_read(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
+                read_phys_req_item(req->data, req, i, &tmp);
                 do_outp(req->addr, req->size, tmp);
             }
         }
@@ -719,22 +746,16 @@ static void cpu_ioreq_pio(ioreq_t *req)
 
 static void cpu_ioreq_move(ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    int i;
 
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_read(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t *) &req->data, req->size);
+                read_phys_req_item(req->addr, req, i, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_write(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t *) &req->data, req->size);
+                write_phys_req_item(req->addr, req, i, &req->data);
             }
         }
     } else {
@@ -742,21 +763,13 @@ static void cpu_ioreq_move(ioreq_t *req)
 
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_read(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
-                cpu_physical_memory_write(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_read(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
-                cpu_physical_memory_write(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
             }
         }
     }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:05:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7j1-0002LE-Hu; Mon, 10 Dec 2012 18:05:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Ti7iw-0002Kt-Ph
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 18:05:03 +0000
Received: from [85.158.137.99:39666] by server-8.bemta-3.messagelabs.com id
	23/37-07786-03426C05; Mon, 10 Dec 2012 18:04:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355162670!13251108!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDkzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7079 invoked from network); 10 Dec 2012 18:04:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 18:04:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="41104"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Dec 2012 18:04:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 10 Dec 2012 18:04:31 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Ti7iR-00063u-Uv; Mon, 10 Dec 2012 18:04:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Ti7iR-00071d-OD;
	Mon, 10 Dec 2012 18:04:31 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <qemu-devel@nongnu.org>, <xen-devel@lists.xensource.com>
Date: Mon, 10 Dec 2012 18:04:22 +0000
Message-ID: <1355162663-26956-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Dongxiao Xu <dongxiao.xu@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 1/2] cpu_ioreq_pio,
	cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Replace a lot of formulaic multiplications (containing casts, no less)
with calls to a pair of functions.  This encapsulates in a single
place the operations which require care relating to integer overflow.

Cc: Dongxiao Xu <dongxiao.xu@intel.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 xen-all.c |   73 ++++++++++++++++++++++++++++++++++++-------------------------
 1 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 046cc2a..97c8ef4 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -682,11 +682,42 @@ static void do_outp(pio_addr_t addr,
     }
 }
 
-static void cpu_ioreq_pio(ioreq_t *req)
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(hwaddr addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
+{
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    hwaddr offset = (hwaddr)req->size * i;
+    if (req->df) addr -= offset;
+    else addr += offset;
+    cpu_physical_memory_rw(addr, val, req->size, rw);
+}
+
+static inline void read_phys_req_item(hwaddr addr,
+                                      ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(hwaddr addr,
+                                       ioreq_t *req, uint32_t i, void *val)
 {
-    int i, sign;
+    rw_phys_req_item(addr, req, i, val, 1);
+}
 
-    sign = req->df ? -1 : 1;
+
+static void cpu_ioreq_pio(ioreq_t *req)
+{
+    int i;
 
     if (req->dir == IOREQ_READ) {
         if (!req->data_is_ptr) {
@@ -696,9 +727,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
 
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(req->addr, req->size);
-                cpu_physical_memory_write(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t *) &tmp, req->size);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         }
     } else if (req->dir == IOREQ_WRITE) {
@@ -708,9 +737,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 uint32_t tmp = 0;
 
-                cpu_physical_memory_read(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
+                read_phys_req_item(req->data, req, i, &tmp);
                 do_outp(req->addr, req->size, tmp);
             }
         }
@@ -719,22 +746,16 @@ static void cpu_ioreq_pio(ioreq_t *req)
 
 static void cpu_ioreq_move(ioreq_t *req)
 {
-    int i, sign;
-
-    sign = req->df ? -1 : 1;
+    int i;
 
     if (!req->data_is_ptr) {
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_read(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t *) &req->data, req->size);
+                read_phys_req_item(req->addr, req, i, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_write(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t *) &req->data, req->size);
+                write_phys_req_item(req->addr, req, i, &req->data);
             }
         }
     } else {
@@ -742,21 +763,13 @@ static void cpu_ioreq_move(ioreq_t *req)
 
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_read(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
-                cpu_physical_memory_write(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
-                cpu_physical_memory_read(
-                        req->data + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
-                cpu_physical_memory_write(
-                        req->addr + (sign * i * (int64_t)req->size),
-                        (uint8_t*) &tmp, req->size);
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
             }
         }
     }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 18:16:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 18:16:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti7tJ-0002x9-To; Mon, 10 Dec 2012 18:15:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <allen.m.kay@intel.com>) id 1Ti7tI-0002x4-EK
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 18:15:44 +0000
Received: from [85.158.138.51:23786] by server-5.bemta-3.messagelabs.com id
	98/47-26311-FC626C05; Mon, 10 Dec 2012 18:15:43 +0000
X-Env-Sender: allen.m.kay@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355163324!28260083!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM4MjY5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26600 invoked from network); 10 Dec 2012 18:15:25 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-4.tower-174.messagelabs.com with SMTP;
	10 Dec 2012 18:15:25 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 10 Dec 2012 10:15:20 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,252,1355126400"; d="scan'208";a="179021660"
Received: from orsmsx102.amr.corp.intel.com ([10.22.225.129])
	by AZSMGA002.ch.intel.com with ESMTP; 10 Dec 2012 10:15:19 -0800
Received: from orsmsx105.amr.corp.intel.com ([169.254.4.144]) by
	ORSMSX102.amr.corp.intel.com ([169.254.1.125]) with mapi id
	14.01.0355.002; Mon, 10 Dec 2012 10:15:19 -0800
From: "Kay, Allen M" <allen.m.kay@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, G.R.
	<firemeteor@users.sourceforge.net>
Thread-Topic: intel IGD driver intel_detect_pch() failure
Thread-Index: AQHN1tHDx1dAGYFBI0az61NZjDdUcJgSVTWg
Date: Mon, 10 Dec 2012 18:15:18 +0000
Message-ID: <003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.22.254.140]
MIME-Version: 1.0
Cc: "Xu, Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel" <xiantao.zhang@intel>, "Dong,
	Eddie" <eddie.dong@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

My understanding is that the i915 driver needs to look at the real PCH's device ID to apply HW workarounds.

One way to fix this is to make the device ID of the first PCH's (00:01.0) device ID reflect the device ID of 00:1f.0 on the host.  This way, i915 running as a guest will find the valid PCH device ID to make workaround decisions with.

I don't know why it would make a difference if i915 is built into the kernel or as a module though.

Allen

-----Original Message-----
From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
Sent: Monday, December 10, 2012 4:27 AM
To: G.R.
Cc: xen-devel; Stefano Stabellini; Kay, Allen M; xiantao.zhang@intel; Xu, Dongxiao; Dong, Eddie
Subject: Re: intel IGD driver intel_detect_pch() failure

CC'ing some engineers that could have some useful suggestions

On Mon, 10 Dec 2012, G.R. wrote:
> Hello, could anybody help?
> 
> On Sun, Dec 9, 2012 at 1:00 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>       I dug further and got confused.
>       The host ISA bridge 00:1f.0 is automatically passed-through as part of the gfx_passthru magic.
>       However, it is passed through as a PCI bridge:
>       On host:  00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>       On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
> 
>       This is both the case for pure HVM && PVHVM. And this one exists for both case too:
>       00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
> 
>       And the intel_detect_pch() function only check the first ISA bridge on the PCI bus:
>       pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
> 
>       Unless there are magic elsewhere, I can't imagine the code would behave differently on the two builds.
>       But what's the magic behind this?
> 
>       Also, is there anyway to get rid of the ISA bridge emulated by qemu?
>       I don't think this is ever required for most case...
> 
>       Thanks,
>       Timothy
> 
> 
>       On Sun, Dec 9, 2012 at 1:43 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> 
>             Hi all,
>             I'm debugging an issue that an HVM guest failed to produce any output with IGD passed through.
>             This is an pure HVM linux guest with i915 driver directly compiled in.
>             An PVHVM kernel with i915 driver compiled as module works without issue.
>             I'm not yet sure which factor is more important, pure HVM or the I915=y kernel config.
> 
>             The direct cause of no output is that the driver does not select Display PLL properly, which is in turn
>             due to fail to detect pch type properly.
> 
>             Strange enough, the intel_detect_pch() function works by checking the device ID of the ISA bridge
>             coming with the chipset:
> 
>             /* * The reason to probe ISA bridge instead of Dev31:Fun0 is to * make graphics device passthrough
>             fwork easy for VMM, that only * need to expose ISA bridge to let driver know the real hardware
>             underneath. This is a requirement from virtualization team. */
> 
>             I added some debug output in this function and find out that it obtained a strange device ID:
>             [ 1.005423] [drm] intel pch detect, found 00007000
> 
>             This looks like the ISA bridge provided by qemu:
>             00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>             00:01.0 0601: 8086:7000
> 
>             However, I can find the same device on a PVHVM linux guest, but the intel_detect_pch() is not fooled by
>             that. Is it due to I915=m config or some magic played by PVOPS? Any suggestion how to fix this?
> 
>             Thanks,
>             Timothy
> 
> 
> 
> 
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:15:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti8oE-0003PU-4J; Mon, 10 Dec 2012 19:14:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robert.phillips@citrix.com>) id 1Ti8oD-0003PP-4z
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:14:33 +0000
Received: from [85.158.139.83:52638] by server-3.bemta-5.messagelabs.com id
	1C/BF-25441-89436C05; Mon, 10 Dec 2012 19:14:32 +0000
X-Env-Sender: robert.phillips@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355166869!17970489!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7197 invoked from network); 10 Dec 2012 19:14:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 19:14:30 -0000
X-SBRS: None
X-MesageID: 151309
X-Ironport-Server: ftlpip01.citrite.net
X-Remote-IP: 75.150.106.249
X-Policy: $Relay
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="151309"
Received: from 75-150-106-249-newengland.hfc.comcastbusiness.net (HELO
	paine.oldroadcomputing.net) ([75.150.106.249])
	by SMTP.CITRIX.COM with ESMTP; 10 Dec 2012 19:14:28 +0000
From: Robert Phillips <robert.phillips@citrix.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Dec 2012 14:13:56 -0500
Message-Id: <1355166836-11338-1-git-send-email-robert.phillips@citrix.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Robert Phillips <robert.phillips@citrix.com>
Subject: [Xen-devel] [PATCH] Avoid race when guest switches between log
	dirty mode and dirty vram mode.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The previous code assumed the guest would be in one of three mutually exclusive
modes for tracking dirty pages: (1) shadow, (2) HAP using the log-dirty
bitmap to support functionality such as live migration, and (3) HAP using the
log-dirty bitmap to track dirty vram pages.
Races arose when a guest attempted to track dirty vram while performing
live migration: the dispatch table managed by paging_log_dirty_init() could
change in the middle of a log-dirty or vram-tracking function.

This change allows HAP log-dirty and HAP vram tracking to run concurrently.
Vram tracking no longer uses the log-dirty bitmap; instead it detects
dirty vram pages by examining their p2m type.  The log-dirty bitmap is now
used only by the log-dirty code.  Because the two operations use different
mechanisms, they are no longer mutually exclusive.

Signed-off-by: Robert Phillips <robert.phillips@citrix.com>
---
 xen/arch/x86/mm/hap/hap.c    |  191 ++++++++++++++++++------------------------
 xen/arch/x86/mm/paging.c     |  167 ++++++------------------------------
 xen/include/asm-x86/config.h |    1 +
 xen/include/asm-x86/paging.h |    8 +-
 4 files changed, 112 insertions(+), 255 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 78ed3ff..1e226ce 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -56,132 +56,114 @@
 /*          HAP VRAM TRACKING SUPPORT           */
 /************************************************/
 
-static int hap_enable_vram_tracking(struct domain *d)
-{
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
-
-    if ( !dirty_vram )
-        return -EINVAL;
-
-    /* turn on PG_log_dirty bit in paging mode */
-    paging_lock(d);
-    d->arch.paging.mode |= PG_log_dirty;
-    paging_unlock(d);
-
-    /* set l1e entries of P2M table to be read-only. */
-    p2m_change_type_range(d, dirty_vram->begin_pfn, dirty_vram->end_pfn, 
-                          p2m_ram_rw, p2m_ram_logdirty);
-
-    flush_tlb_mask(d->domain_dirty_cpumask);
-    return 0;
-}
-
-static int hap_disable_vram_tracking(struct domain *d)
-{
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
-
-    if ( !dirty_vram )
-        return -EINVAL;
-
-    paging_lock(d);
-    d->arch.paging.mode &= ~PG_log_dirty;
-    paging_unlock(d);
-
-    /* set l1e entries of P2M table with normal mode */
-    p2m_change_type_range(d, dirty_vram->begin_pfn, dirty_vram->end_pfn, 
-                          p2m_ram_logdirty, p2m_ram_rw);
-
-    flush_tlb_mask(d->domain_dirty_cpumask);
-    return 0;
-}
-
-static void hap_clean_vram_tracking(struct domain *d)
-{
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
-
-    if ( !dirty_vram )
-        return;
-
-    /* set l1e entries of P2M table to be read-only. */
-    p2m_change_type_range(d, dirty_vram->begin_pfn, dirty_vram->end_pfn, 
-                          p2m_ram_rw, p2m_ram_logdirty);
-
-    flush_tlb_mask(d->domain_dirty_cpumask);
-}
-
-static void hap_vram_tracking_init(struct domain *d)
-{
-    paging_log_dirty_init(d, hap_enable_vram_tracking,
-                          hap_disable_vram_tracking,
-                          hap_clean_vram_tracking);
-}
+/*
+ * hap_track_dirty_vram()
+ * Creates the domain's sh_dirty_vram struct on demand.
+ * Creates a dirty vram range on demand when some [begin_pfn:begin_pfn+nr] is
+ * first encountered.
+ * Collects the guest_dirty bitmask, a bit mask of the dirty vram pages, by
+ * calling paging_log_dirty_range(), which interrogates each vram
+ * page's p2m type looking for pages that have been made writable.
+ */
 
 int hap_track_dirty_vram(struct domain *d,
                          unsigned long begin_pfn,
                          unsigned long nr,
-                         XEN_GUEST_HANDLE_64(uint8) dirty_bitmap)
+                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
 {
     long rc = 0;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
+    struct sh_dirty_vram *dirty_vram;
+    uint8_t *dirty_bitmap = NULL;
 
     if ( nr )
     {
-        if ( paging_mode_log_dirty(d) && dirty_vram )
+        int size = ( nr + BITS_PER_BYTE - 1 ) / BITS_PER_BYTE;
+
+        if ( !paging_mode_log_dirty(d) )
         {
-            if ( begin_pfn != dirty_vram->begin_pfn ||
-                 begin_pfn + nr != dirty_vram->end_pfn )
-            {
-                paging_log_dirty_disable(d);
-                dirty_vram->begin_pfn = begin_pfn;
-                dirty_vram->end_pfn = begin_pfn + nr;
-                rc = paging_log_dirty_enable(d);
-                if (rc != 0)
-                    goto param_fail;
-            }
+            hap_logdirty_init(d);
+            rc = paging_log_dirty_enable(d);
+            if ( rc )
+                goto out;
         }
-        else if ( !paging_mode_log_dirty(d) && !dirty_vram )
+
+        rc = -ENOMEM;
+        dirty_bitmap = xzalloc_bytes( size );
+        if ( !dirty_bitmap )
+            goto out;
+
+        paging_lock(d);
+
+        dirty_vram = d->arch.hvm_domain.dirty_vram;
+        if ( !dirty_vram )
         {
             rc = -ENOMEM;
-            if ( (dirty_vram = xmalloc(struct sh_dirty_vram)) == NULL )
-                goto param_fail;
+            if ( (dirty_vram = xzalloc(struct sh_dirty_vram)) == NULL )
+            {
+                paging_unlock(d);
+                goto out;
+            }
 
+            d->arch.hvm_domain.dirty_vram = dirty_vram;
+        }
+
+        if ( begin_pfn != dirty_vram->begin_pfn ||
+             begin_pfn + nr != dirty_vram->end_pfn )
+        {
             dirty_vram->begin_pfn = begin_pfn;
             dirty_vram->end_pfn = begin_pfn + nr;
-            d->arch.hvm_domain.dirty_vram = dirty_vram;
-            hap_vram_tracking_init(d);
-            rc = paging_log_dirty_enable(d);
-            if (rc != 0)
-                goto param_fail;
+
+            paging_unlock(d);
+
+            /* set l1e entries of range within P2M table to be read-only. */
+            p2m_change_type_range(d, begin_pfn, begin_pfn + nr,
+                                  p2m_ram_rw, p2m_ram_logdirty);
+
+            flush_tlb_mask(d->domain_dirty_cpumask);
+
+            memset(dirty_bitmap, 0xff, size); /* consider all pages dirty */
         }
         else
         {
-            if ( !paging_mode_log_dirty(d) && dirty_vram )
-                rc = -EINVAL;
-            else
-                rc = -ENODATA;
-            goto param_fail;
+            paging_unlock(d);
+
+            domain_pause(d);
+
+            /* get the bitmap */
+            paging_log_dirty_range(d, begin_pfn, nr, dirty_bitmap);
+
+            domain_unpause(d);
+        }
+
+        rc = -EFAULT;
+        if ( copy_to_guest(guest_dirty_bitmap,
+                           dirty_bitmap,
+                           size) == 0 )
+        {
+            rc = 0;
         }
-        /* get the bitmap */
-        rc = paging_log_dirty_range(d, begin_pfn, nr, dirty_bitmap);
     }
-    else
-    {
-        if ( paging_mode_log_dirty(d) && dirty_vram ) {
-            rc = paging_log_dirty_disable(d);
+    else {
+        paging_lock(d);
+
+        dirty_vram = d->arch.hvm_domain.dirty_vram;
+        if ( dirty_vram )
+        {
+            /*
+             * If zero pages are specified while tracking dirty vram,
+             * stop tracking.
+             */
             xfree(dirty_vram);
-            dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
-        } else
-            rc = 0;
-    }
+            d->arch.hvm_domain.dirty_vram = NULL;
 
-    return rc;
+        }
 
-param_fail:
-    if ( dirty_vram )
-    {
-        xfree(dirty_vram);
-        dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
+        paging_unlock(d);
     }
+out:
+    if ( dirty_bitmap )
+        xfree(dirty_bitmap);
+
     return rc;
 }
 
@@ -223,13 +205,6 @@ static void hap_clean_dirty_bitmap(struct domain *d)
 
 void hap_logdirty_init(struct domain *d)
 {
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
-    if ( paging_mode_log_dirty(d) && dirty_vram )
-    {
-        paging_log_dirty_disable(d);
-        xfree(dirty_vram);
-        dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
-    }
 
     /* Reinitialize logdirty mechanism */
     paging_log_dirty_init(d, hap_enable_log_dirty,
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ea44e39..a5cdbd1 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -437,157 +437,38 @@ int paging_log_dirty_op(struct domain *d, struct xen_domctl_shadow_op *sc)
     return rv;
 }
 
-int paging_log_dirty_range(struct domain *d,
-                            unsigned long begin_pfn,
-                            unsigned long nr,
-                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap)
+void paging_log_dirty_range(struct domain *d,
+                           unsigned long begin_pfn,
+                           unsigned long nr,
+                           uint8_t *dirty_bitmap)
 {
-    int rv = 0;
-    unsigned long pages = 0;
-    mfn_t *l4, *l3, *l2;
-    unsigned long *l1;
-    int b1, b2, b3, b4;
-    int i2, i3, i4;
-
-    d->arch.paging.log_dirty.clean_dirty_bitmap(d);
-    paging_lock(d);
-
-    PAGING_DEBUG(LOGDIRTY, "log-dirty-range: dom %u faults=%u dirty=%u\n",
-                 d->domain_id,
-                 d->arch.paging.log_dirty.fault_count,
-                 d->arch.paging.log_dirty.dirty_count);
-
-    if ( unlikely(d->arch.paging.log_dirty.failed_allocs) ) {
-        printk("%s: %d failed page allocs while logging dirty pages\n",
-               __FUNCTION__, d->arch.paging.log_dirty.failed_allocs);
-        rv = -ENOMEM;
-        goto out;
-    }
-
-    if ( !d->arch.paging.log_dirty.fault_count &&
-         !d->arch.paging.log_dirty.dirty_count ) {
-        unsigned int size = BITS_TO_LONGS(nr);
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int i;
+    unsigned long pfn;
 
-        if ( clear_guest(dirty_bitmap, size * BYTES_PER_LONG) != 0 )
-            rv = -EFAULT;
-        goto out;
-    }
-    d->arch.paging.log_dirty.fault_count = 0;
-    d->arch.paging.log_dirty.dirty_count = 0;
+    /*
+     * Set l1e entries of P2M table to be read-only.
+     *
+     * On the guest's first write, a page fault occurs, the entry is changed
+     * to read-write, and on retry the write succeeds.
+     *
+     * We populate dirty_bitmap by looking for entries that have been
+     * switched to read-write.
+     */
 
-    b1 = L1_LOGDIRTY_IDX(begin_pfn);
-    b2 = L2_LOGDIRTY_IDX(begin_pfn);
-    b3 = L3_LOGDIRTY_IDX(begin_pfn);
-    b4 = L4_LOGDIRTY_IDX(begin_pfn);
-    l4 = paging_map_log_dirty_bitmap(d);
+    p2m_lock(p2m);
 
-    for ( i4 = b4;
-          (pages < nr) && (i4 < LOGDIRTY_NODE_ENTRIES);
-          i4++ )
+    for ( i = 0, pfn = begin_pfn; pfn < begin_pfn + nr; i++, pfn++ )
     {
-        l3 = (l4 && mfn_valid(l4[i4])) ? map_domain_page(mfn_x(l4[i4])) : NULL;
-        for ( i3 = b3;
-              (pages < nr) && (i3 < LOGDIRTY_NODE_ENTRIES);
-              i3++ )
-        {
-            l2 = ((l3 && mfn_valid(l3[i3])) ?
-                  map_domain_page(mfn_x(l3[i3])) : NULL);
-            for ( i2 = b2;
-                  (pages < nr) && (i2 < LOGDIRTY_NODE_ENTRIES);
-                  i2++ )
-            {
-                unsigned int bytes = PAGE_SIZE;
-                uint8_t *s;
-                l1 = ((l2 && mfn_valid(l2[i2])) ?
-                      map_domain_page(mfn_x(l2[i2])) : NULL);
-
-                s = ((uint8_t*)l1) + (b1 >> 3);
-                bytes -= b1 >> 3;
-
-                if ( likely(((nr - pages + 7) >> 3) < bytes) )
-                    bytes = (unsigned int)((nr - pages + 7) >> 3);
-
-                if ( !l1 )
-                {
-                    if ( clear_guest_offset(dirty_bitmap, pages >> 3,
-                                            bytes) != 0 )
-                    {
-                        rv = -EFAULT;
-                        goto out;
-                    }
-                }
-                /* begin_pfn is not 32K aligned, hence we have to bit
-                 * shift the bitmap */
-                else if ( b1 & 0x7 )
-                {
-                    int i, j;
-                    uint32_t *l = (uint32_t*) s;
-                    int bits = b1 & 0x7;
-                    int bitmask = (1 << bits) - 1;
-                    int size = (bytes + BYTES_PER_LONG - 1) / BYTES_PER_LONG;
-                    unsigned long bitmap[size];
-                    static unsigned long printed = 0;
-
-                    if ( printed != begin_pfn )
-                    {
-                        dprintk(XENLOG_DEBUG, "%s: begin_pfn %lx is not 32K aligned!\n",
-                                __FUNCTION__, begin_pfn);
-                        printed = begin_pfn;
-                    }
-
-                    for ( i = 0; i < size - 1; i++, l++ ) {
-                        bitmap[i] = ((*l) >> bits) |
-                            (((*((uint8_t*)(l + 1))) & bitmask) << (sizeof(*l) * 8 - bits));
-                    }
-                    s = (uint8_t*) l;
-                    size = BYTES_PER_LONG - ((b1 >> 3) & 0x3);
-                    bitmap[i] = 0;
-                    for ( j = 0; j < size; j++, s++ )
-                        bitmap[i] |= (*s) << (j * 8);
-                    bitmap[i] = (bitmap[i] >> bits) | (bitmask << (size * 8 - bits));
-                    if ( copy_to_guest_offset(dirty_bitmap, (pages >> 3),
-                                (uint8_t*) bitmap, bytes) != 0 )
-                    {
-                        rv = -EFAULT;
-                        goto out;
-                    }
-                }
-                else
-                {
-                    if ( copy_to_guest_offset(dirty_bitmap, pages >> 3,
-                                              s, bytes) != 0 )
-                    {
-                        rv = -EFAULT;
-                        goto out;
-                    }
-                }
-
-                pages += bytes << 3;
-                if ( l1 )
-                {
-                    clear_page(l1);
-                    unmap_domain_page(l1);
-                }
-                b1 = b1 & 0x7;
-            }
-            b2 = 0;
-            if ( l2 )
-                unmap_domain_page(l2);
-        }
-        b3 = 0;
-        if ( l3 )
-            unmap_domain_page(l3);
+        p2m_type_t pt;
+        pt = p2m_change_type(d, pfn, p2m_ram_rw, p2m_ram_logdirty);
+        if ( pt == p2m_ram_rw )
+            dirty_bitmap[i >> 3] |= (1 << (i & 7));
     }
-    if ( l4 )
-        unmap_domain_page(l4);
 
-    paging_unlock(d);
+    p2m_unlock(p2m);
 
-    return rv;
-
- out:
-    paging_unlock(d);
-    return rv;
+    flush_tlb_mask(d->domain_dirty_cpumask);
 }
 
 /* Note that this function takes three function pointers. Callers must supply
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 0c4868c..1c08633 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -12,6 +12,7 @@
 
 #define BYTES_PER_LONG (1 << LONG_BYTEORDER)
 #define BITS_PER_LONG (BYTES_PER_LONG << 3)
+#define BITS_PER_BYTE 8
 
 #define CONFIG_X86 1
 #define CONFIG_X86_HT 1
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 9a40f2c..c3a8848 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -137,10 +137,10 @@ struct paging_mode {
 void paging_free_log_dirty_bitmap(struct domain *d);
 
 /* get the dirty bitmap for a specific range of pfns */
-int paging_log_dirty_range(struct domain *d,
-                           unsigned long begin_pfn,
-                           unsigned long nr,
-                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
+void paging_log_dirty_range(struct domain *d,
+                            unsigned long begin_pfn,
+                            unsigned long nr,
+                            uint8_t *dirty_bitmap);
 
 /* enable log dirty */
 int paging_log_dirty_enable(struct domain *d);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-    paging_unlock(d);
-    return rv;
+    flush_tlb_mask(d->domain_dirty_cpumask);
 }
 
 /* Note that this function takes three function pointers. Callers must supply
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 0c4868c..1c08633 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -12,6 +12,7 @@
 
 #define BYTES_PER_LONG (1 << LONG_BYTEORDER)
 #define BITS_PER_LONG (BYTES_PER_LONG << 3)
+#define BITS_PER_BYTE 8
 
 #define CONFIG_X86 1
 #define CONFIG_X86_HT 1
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 9a40f2c..c3a8848 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -137,10 +137,10 @@ struct paging_mode {
 void paging_free_log_dirty_bitmap(struct domain *d);
 
 /* get the dirty bitmap for a specific range of pfns */
-int paging_log_dirty_range(struct domain *d,
-                           unsigned long begin_pfn,
-                           unsigned long nr,
-                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
+void paging_log_dirty_range(struct domain *d,
+                            unsigned long begin_pfn,
+                            unsigned long nr,
+                            uint8_t *dirty_bitmap);
 
 /* enable log dirty */
 int paging_log_dirty_enable(struct domain *d);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:30:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti93B-0003ah-OE; Mon, 10 Dec 2012 19:30:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Ti93A-0003ac-Gb
	for xen-devel@lists.xensource.com; Mon, 10 Dec 2012 19:30:00 +0000
Received: from [85.158.143.35:63867] by server-2.bemta-4.messagelabs.com id
	50/A3-30861-73836C05; Mon, 10 Dec 2012 19:29:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355167797!16761910!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODgzNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31327 invoked from network); 10 Dec 2012 19:29:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 19:29:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,252,1355097600"; 
   d="scan'208";a="153312"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	10 Dec 2012 19:29:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 10 Dec 2012 14:29:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Ti935-0006qV-Sj;
	Mon, 10 Dec 2012 19:29:55 +0000
Date: Mon, 10 Dec 2012 19:29:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Jackson <ian.jackson@eu.citrix.com>
In-Reply-To: <1355162663-26956-2-git-send-email-ian.jackson@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1212101928420.17523@kaball.uk.xensource.com>
References: <1355162663-26956-1-git-send-email-ian.jackson@eu.citrix.com>
	<1355162663-26956-2-git-send-email-ian.jackson@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Dongxiao Xu <dongxiao.xu@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/2] cpu_ioreq_pio,
 cpu_ioreq_move: introduce read_phys_req_item, write_phys_req_item
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012, Ian Jackson wrote:
> Replace a lot of formulaic multiplications (containing casts, no less)
> with calls to a pair of functions.  This encapsulates in a single
> place the operations which require care relating to integer overflow.
> 
> Cc: Dongxiao Xu <dongxiao.xu@intel.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
>  xen-all.c |   73 ++++++++++++++++++++++++++++++++++++-------------------------
>  1 files changed, 43 insertions(+), 30 deletions(-)
> 
> diff --git a/xen-all.c b/xen-all.c
> index 046cc2a..97c8ef4 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -682,11 +682,42 @@ static void do_outp(pio_addr_t addr,
>      }
>  }
>  
> -static void cpu_ioreq_pio(ioreq_t *req)
> +/*
> + * Helper functions which read/write an object from/to physical guest
> + * memory, as part of the implementation of an ioreq.
> + *
> + * Equivalent to
> + *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
> + *                          val, req->size, 0/1)
> + * except without the integer overflow problems.
> + */
> +static void rw_phys_req_item(hwaddr addr,
> +                             ioreq_t *req, uint32_t i, void *val, int rw)
> +{
> +    /* Do everything unsigned so overflow just results in a truncated result
> +     * and accesses to undesired parts of guest memory, which is up
> +     * to the guest */
> +    hwaddr offset = (hwaddr)req->size * i;
> +    if (req->df) addr -= offset;
> +    else addr += offset;
> +    cpu_physical_memory_rw(addr, val, req->size, rw);
> +}

QEMU's code style is

if (something) {

you can also run the patch through scripts/checkpatch.pl.

Aside from the code style issue:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> +static inline void read_phys_req_item(hwaddr addr,
> +                                      ioreq_t *req, uint32_t i, void *val)
> +{
> +    rw_phys_req_item(addr, req, i, val, 0);
> +}
> +static inline void write_phys_req_item(hwaddr addr,
> +                                       ioreq_t *req, uint32_t i, void *val)
>  {
> -    int i, sign;
> +    rw_phys_req_item(addr, req, i, val, 1);
> +}
>  
> -    sign = req->df ? -1 : 1;
> +
> +static void cpu_ioreq_pio(ioreq_t *req)
> +{
> +    int i;
>  
>      if (req->dir == IOREQ_READ) {
>          if (!req->data_is_ptr) {
> @@ -696,9 +727,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
>  
>              for (i = 0; i < req->count; i++) {
>                  tmp = do_inp(req->addr, req->size);
> -                cpu_physical_memory_write(
> -                        req->data + (sign * i * (int64_t)req->size),
> -                        (uint8_t *) &tmp, req->size);
> +                write_phys_req_item(req->data, req, i, &tmp);
>              }
>          }
>      } else if (req->dir == IOREQ_WRITE) {
> @@ -708,9 +737,7 @@ static void cpu_ioreq_pio(ioreq_t *req)
>              for (i = 0; i < req->count; i++) {
>                  uint32_t tmp = 0;
>  
> -                cpu_physical_memory_read(
> -                        req->data + (sign * i * (int64_t)req->size),
> -                        (uint8_t*) &tmp, req->size);
> +                read_phys_req_item(req->data, req, i, &tmp);
>                  do_outp(req->addr, req->size, tmp);
>              }
>          }
> @@ -719,22 +746,16 @@ static void cpu_ioreq_pio(ioreq_t *req)
>  
>  static void cpu_ioreq_move(ioreq_t *req)
>  {
> -    int i, sign;
> -
> -    sign = req->df ? -1 : 1;
> +    int i;
>  
>      if (!req->data_is_ptr) {
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
> -                cpu_physical_memory_read(
> -                        req->addr + (sign * i * (int64_t)req->size),
> -                        (uint8_t *) &req->data, req->size);
> +                read_phys_req_item(req->addr, req, i, &req->data);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
> -                cpu_physical_memory_write(
> -                        req->addr + (sign * i * (int64_t)req->size),
> -                        (uint8_t *) &req->data, req->size);
> +                write_phys_req_item(req->addr, req, i, &req->data);
>              }
>          }
>      } else {
> @@ -742,21 +763,13 @@ static void cpu_ioreq_move(ioreq_t *req)
>  
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
> -                cpu_physical_memory_read(
> -                        req->addr + (sign * i * (int64_t)req->size),
> -                        (uint8_t*) &tmp, req->size);
> -                cpu_physical_memory_write(
> -                        req->data + (sign * i * (int64_t)req->size),
> -                        (uint8_t*) &tmp, req->size);
> +                read_phys_req_item(req->addr, req, i, &tmp);
> +                write_phys_req_item(req->data, req, i, &tmp);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
> -                cpu_physical_memory_read(
> -                        req->data + (sign * i * (int64_t)req->size),
> -                        (uint8_t*) &tmp, req->size);
> -                cpu_physical_memory_write(
> -                        req->addr + (sign * i * (int64_t)req->size),
> -                        (uint8_t*) &tmp, req->size);
> +                read_phys_req_item(req->data, req, i, &tmp);
> +                write_phys_req_item(req->addr, req, i, &tmp);
>              }
>          }
>      }
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sb-0004C0-BB; Mon, 10 Dec 2012 19:56:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004An-6k
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.83:15115] by server-15.bemta-5.messagelabs.com id
	98/30-20523-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355169373!27502597!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31457 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-2.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fb9ee0000fdcf@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fb9ee0000fdcf ;
	Mon, 10 Dec 2012 14:56:16 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8u9007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:35 -0500
Message-Id: <1355169347-25917-3-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 02/14] stubdom/vtpm: correct the buffer size
	returned by TPM_CAP_PROP_INPUT_BUFFER
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vtpm2 ABI supports packets of up to 4088 bytes by default; expose
this property through the TPM's interface so clients do not attempt to
send larger packets.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/Makefile           |  1 +
 stubdom/vtpm-bufsize.patch | 13 +++++++++++++
 2 files changed, 14 insertions(+)
 create mode 100644 stubdom/vtpm-bufsize.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 709b71e..12f8a6f 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -207,6 +207,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	tar xzf $<
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
+	patch -d $@ -p1 < vtpm-bufsize.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-bufsize.patch b/stubdom/vtpm-bufsize.patch
new file mode 100644
index 0000000..9c9304c
--- /dev/null
+++ b/stubdom/vtpm-bufsize.patch
@@ -0,0 +1,13 @@
+diff --git a/config.h.in b/config.h.in
+index d16a997..8088a2a 100644
+--- a/config.h.in
++++ b/config.h.in
+@@ -27,7 +27,7 @@
+ #define TPM_STORAGE_NAME "${TPM_STORAGE_NAME}"
+ #define TPM_DEVICE_NAME  "${TPM_DEVICE_NAME}"
+ #define TPM_LOG_FILE     "${TPM_LOG_FILE}"
+-#define TPM_CMD_BUF_SIZE 4096
++#define TPM_CMD_BUF_SIZE 4088
+ 
+ #endif /* _CONFIG_H_ */
+ 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9SZ-0004B0-5q; Mon, 10 Dec 2012 19:56:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SY-0004An-9q
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:14 +0000
Received: from [85.158.139.211:40566] by server-15.bemta-5.messagelabs.com id
	D4/30-20523-D5E36C05; Mon, 10 Dec 2012 19:56:13 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355169371!19812186!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3991 invoked from network); 10 Dec 2012 19:56:12 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-14.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:12 -0000
X-TM-IMSS-Message-ID: <67fe26b40008022d@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe26b40008022d ;
	Mon, 10 Dec 2012 14:55:41 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uB007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:37 -0500
Message-Id: <1355169347-25917-5-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 04/14] stubdom/vtpm: Allow reopen of closed
	devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Allow the vtpm device to be disconnected and reconnected so that a
bootloader (like pv-grub) can submit measurements and return the vtpm
device to its initial state before booting the target kernel.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/tpmback.c  | 23 ++++++++++++++++++++++-
 extras/mini-os/tpmfront.c | 14 ++++++++++++--
 2 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 50f8a5d..69a7f2d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -608,6 +608,24 @@ error_post_map:
    return -1;
 }
 
+static void disconnect_fe(tpmif_t* tpmif)
+{
+   if (tpmif->status == CONNECTED) {
+      tpmif->status = DISCONNECTING;
+      mask_evtchn(tpmif->evtchn);
+
+      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1)) {
+	 TPMBACK_ERR("%u/%u Error occurred while trying to unmap shared page\n", (unsigned int) tpmif->domid, tpmif->handle);
+      }
+
+      unbind_evtchn(tpmif->evtchn);
+   }
+   tpmif->status = DISCONNECTED;
+   tpmif_change_state(tpmif, XenbusStateInitWait);
+
+   TPMBACK_LOG("Frontend %u/%u disconnected\n", (unsigned int) tpmif->domid, tpmif->handle);
+}
+
 static int frontend_changed(tpmif_t* tpmif)
 {
    int state = xenbus_read_integer(tpmif->fe_state_path);
@@ -634,8 +652,11 @@ static int frontend_changed(tpmif_t* tpmif)
 	 tpmif_change_state(tpmif, XenbusStateClosing);
 	 break;
 
-      case XenbusStateUnknown: /* keep it here */
       case XenbusStateClosed:
+         disconnect_fe(tpmif);
+	 break;
+
+      case XenbusStateUnknown: /* keep it here */
 	 free_tpmif(tpmif);
 	 break;
 
diff --git a/extras/mini-os/tpmfront.c b/extras/mini-os/tpmfront.c
index ac9ba42..1ef51cf 100644
--- a/extras/mini-os/tpmfront.c
+++ b/extras/mini-os/tpmfront.c
@@ -146,6 +146,9 @@ static int wait_for_backend_closed(xenbus_event_queue* events, char* path)
 	 case XenbusStateClosed:
 	    TPMFRONT_LOG("Backend Closed\n");
 	    return 0;
+	 case XenbusStateInitWait:
+	    TPMFRONT_LOG("Backend Closed (waiting for reconnect)\n");
+	    return 0;
 	 default:
 	    xenbus_wait_for_watch(events);
       }
@@ -306,10 +309,10 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
    TPMFRONT_LOG("Shutting down tpmfront\n");
    /* disconnect */
    if(dev->state == XenbusStateConnected) {
-      dev->state = XenbusStateClosing;
-      //FIXME: Transaction for this?
       /* Tell backend we are closing */
+      dev->state = XenbusStateClosing;
       if((err = xenbus_printf(XBT_NIL, dev->nodename, "state", "%u", (unsigned int) dev->state))) {
+	 TPMFRONT_ERR("Unable to write to %s, error was %s\n", dev->nodename, err);
 	 free(err);
       }
 
@@ -333,6 +336,13 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
       /* Wait for the backend to close and unmap shared pages, ignore any errors */
       wait_for_backend_state_changed(dev, XenbusStateClosed);
 
+      /* Prepare for a later reopen (possibly by a kexec'd kernel) */
+      dev->state = XenbusStateInitialising;
+      if((err = xenbus_printf(XBT_NIL, dev->nodename, "state", "%u", (unsigned int) dev->state))) {
+	 TPMFRONT_ERR("Unable to write to %s, error was %s\n", dev->nodename, err);
+	 free(err);
+      }
+
       /* Close event channel and unmap shared page */
       mask_evtchn(dev->evtchn);
       unbind_evtchn(dev->evtchn);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Se-0004Dy-7s; Mon, 10 Dec 2012 19:56:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sa-0004Bc-Gc
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:16 +0000
Received: from [85.158.139.83:15152] by server-12.bemta-5.messagelabs.com id
	8D/5C-02275-F5E36C05; Mon, 10 Dec 2012 19:56:15 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355169374!29242669!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8080 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-12.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fc8300000fddb@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc8300000fddb ;
	Mon, 10 Dec 2012 14:56:20 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uJ007869; 
	Mon, 10 Dec 2012 14:56:12 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:45 -0500
Message-Id: <1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 12/14] mini-os/tpmback: add
	tpmback_get_peercontext
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the XSM label of the TPM's client domain to be retrieved.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/events.c          | 22 ++++++++++++++++++++++
 extras/mini-os/include/events.h  |  1 +
 extras/mini-os/include/tpmback.h |  2 ++
 extras/mini-os/tpmback.c         | 11 +++++++++++
 4 files changed, 36 insertions(+)

diff --git a/extras/mini-os/events.c b/extras/mini-os/events.c
index 2f359a5..5327e14 100644
--- a/extras/mini-os/events.c
+++ b/extras/mini-os/events.c
@@ -21,6 +21,7 @@
 #include <mini-os/hypervisor.h>
 #include <mini-os/events.h>
 #include <mini-os/lib.h>
+#include <xen/xsm/flask_op.h>
 
 #define NR_EVS 1024
 
@@ -258,6 +259,27 @@ int evtchn_bind_interdomain(domid_t pal, evtchn_port_t remote_port,
     return rc;
 }
 
+int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size)
+{
+    int rc;
+    uint32_t sid;
+    struct xen_flask_op op;
+    op.cmd = FLASK_GET_PEER_SID;
+    op.interface_version = XEN_FLASK_INTERFACE_VERSION;
+    op.u.peersid.evtchn = local_port;
+    rc = _hypercall1(int, xsm_op, &op);
+    if (rc)
+        return rc;
+    sid = op.u.peersid.sid;
+    op.cmd = FLASK_SID_TO_CONTEXT;
+    op.u.sid_context.sid = sid;
+    op.u.sid_context.size = size;
+    set_xen_guest_handle(op.u.sid_context.context, ctx);
+    rc = _hypercall1(int, xsm_op, &op);
+    return rc;
+}
+
+
 /*
  * Local variables:
  * mode: C
diff --git a/extras/mini-os/include/events.h b/extras/mini-os/include/events.h
index 912e4cf..0e9d3a7 100644
--- a/extras/mini-os/include/events.h
+++ b/extras/mini-os/include/events.h
@@ -37,6 +37,7 @@ int evtchn_alloc_unbound(domid_t pal, evtchn_handler_t handler,
 int evtchn_bind_interdomain(domid_t pal, evtchn_port_t remote_port,
 							evtchn_handler_t handler, void *data,
 							evtchn_port_t *local_port);
+int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size);
 void unbind_all_ports(void);
 
 static inline int notify_remote_via_evtchn(evtchn_port_t port)
diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index a6cbbf1..4408986 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -99,4 +99,6 @@ void* tpmback_get_opaque(domid_t domid, unsigned int handle);
 /* Returns zero if successful, nonzero on failure (no such frontend) */
 int tpmback_set_opaque(domid_t domid, unsigned int handle, void* opaque);
 
+/* Get the XSM context of the given domain (using the tpmback event channel) */
+int tpmback_get_peercontext(domid_t domid, unsigned int handle, void* buffer, int buflen);
 #endif
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index cac07fc..ab69cb7 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -793,6 +793,17 @@ unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle)
    return tpmif->uuid;
 }
 
+int tpmback_get_peercontext(domid_t domid, unsigned int handle, void* buffer, int buflen)
+{
+   tpmif_t* tpmif;
+   if((tpmif = get_tpmif(domid, handle)) == NULL) {
+      TPMBACK_DEBUG("get_peercontext() failed, %u/%u is an invalid frontend\n", (unsigned int) domid, handle);
+      return -1;
+   }
+
+   return evtchn_get_peercontext(tpmif->evtchn, buffer, buflen);
+}
+
 static void event_listener(void)
 {
    const char* bepath = "backend/vtpm2";
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Se-0004Ec-Vu; Mon, 10 Dec 2012 19:56:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sb-0004Be-3R
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:17 +0000
Received: from [85.158.139.211:40638] by server-4.bemta-5.messagelabs.com id
	9F/00-14693-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355169374!19921266!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15511 invoked from network); 10 Dec 2012 19:56:15 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-16.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:15 -0000
X-TM-IMSS-Message-ID: <67fe325800080233@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe325800080233 ;
	Mon, 10 Dec 2012 14:55:44 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uI007869; 
	Mon, 10 Dec 2012 14:56:11 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:44 -0500
Message-Id: <1355169347-25917-12-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 11/14] mini-os/tpmback: Replace UUID field with
	opaque pointer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Instead of only recording the UUID field, which may not be of interest
to all tpmback implementations, provide a user-settable opaque pointer
associated with the tpmback instance.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/include/tpmback.h |  9 +++++++--
 extras/mini-os/tpmback.c         | 31 ++++++++++++++++++++++++++++---
 stubdom/vtpmmgr/init.c           |  8 +++++++-
 stubdom/vtpmmgr/vtpmmgr.c        |  2 +-
 4 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index 3c11c34..a6cbbf1 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -45,10 +45,10 @@ struct tpmcmd {
    domid_t domid;		/* Domid of the frontend */
    uint8_t locality;    /* Locality requested by the frontend */
    unsigned int handle;	/* Handle of the frontend */
-   unsigned char uuid[16];			/* uuid of the tpm interface */
+   void *opaque;        /* Opaque pointer taken from the tpmback instance */
 
-   unsigned int req_len;		/* Size of the command in buf - set by tpmback driver */
    uint8_t* req;			/* tpm command bits, allocated by driver, DON'T FREE IT */
+   unsigned int req_len;		/* Size of the command in buf - set by tpmback driver */
    unsigned int resp_len;	/* Size of the outgoing command,
 				   you set this before passing the cmd object to tpmback_resp */
    uint8_t* resp;		/* Buffer for response - YOU MUST ALLOCATE IT, YOU MUST ALSO FREE IT */
@@ -94,4 +94,9 @@ int tpmback_num_frontends(void);
  * The return value is internally allocated, so don't free it */
 unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle);
 
+/* Get and set the opaque pointer for a tpmback instance */
+void* tpmback_get_opaque(domid_t domid, unsigned int handle);
+/* Returns zero if successful, nonzero on failure (no such frontend) */
+int tpmback_set_opaque(domid_t domid, unsigned int handle, void* opaque);
+
 #endif
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 1c46e5d..cac07fc 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -92,6 +92,7 @@ struct tpmif {
    enum { DISCONNECTED, DISCONNECTING, CONNECTED } status;
 
    unsigned char uuid[16];
+   void* opaque;
 
    /* state flags */
    int flags;
@@ -380,6 +381,7 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
    tpmif->status = DISCONNECTED;
    tpmif->page = NULL;
    tpmif->flags = 0;
+   tpmif->opaque = NULL;
    memset(tpmif->uuid, 0, sizeof(tpmif->uuid));
    return tpmif;
 }
@@ -757,6 +759,29 @@ static void generate_backend_events(const char* path)
    return;
 }
 
+void* tpmback_get_opaque(domid_t domid, unsigned int handle)
+{
+   tpmif_t* tpmif;
+   if((tpmif = get_tpmif(domid, handle)) == NULL) {
+      TPMBACK_DEBUG("get_opaque() failed, %u/%u is an invalid frontend\n", (unsigned int) domid, handle);
+      return NULL;
+   }
+
+   return tpmif->opaque;
+}
+
+int tpmback_set_opaque(domid_t domid, unsigned int handle, void *opaque)
+{
+   tpmif_t* tpmif;
+   if((tpmif = get_tpmif(domid, handle)) == NULL) {
+      TPMBACK_DEBUG("set_opaque() failed, %u/%u is an invalid frontend\n", (unsigned int) domid, handle);
+      return -1;
+   }
+
+   tpmif->opaque = opaque;
+   return 0;
+}
+
 unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle)
 {
    tpmif_t* tpmif;
@@ -853,12 +878,12 @@ void shutdown_tpmback(void)
    schedule();
 }
 
-inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, unsigned char uuid[16])
+static void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, void *opaque)
 {
    tpmcmd->domid = domid;
    tpmcmd->locality = -1;
    tpmcmd->handle = handle;
-   memcpy(tpmcmd->uuid, uuid, sizeof(tpmcmd->uuid));
+   tpmcmd->opaque = opaque;
    tpmcmd->req = NULL;
    tpmcmd->req_len = 0;
    tpmcmd->resp = NULL;
@@ -880,7 +905,7 @@ tpmcmd_t* get_request(tpmif_t* tpmif) {
    if((cmd = malloc(sizeof(*cmd))) == NULL) {
       goto error;
    }
-   init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->uuid);
+   init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->opaque);
 
    shr = tpmif->page;
    cmd->req_len = shr->length;
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 00dd9f3..33ac152 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -436,6 +436,12 @@ egress:
    return status;
 }
 
+/* Set up the opaque field to contain a pointer to the UUID */
+static void set_opaque_to_uuid(domid_t domid, unsigned int handle)
+{
+   tpmback_set_opaque(domid, handle, tpmback_get_uuid(domid, handle));
+}
+
 TPM_RESULT vtpmmgr_init(int argc, char** argv) {
    TPM_RESULT status = TPM_SUCCESS;
 
@@ -462,7 +468,7 @@ TPM_RESULT vtpmmgr_init(int argc, char** argv) {
    }
 
    //Setup tpmback device
-   init_tpmback(NULL, NULL);
+   init_tpmback(set_opaque_to_uuid, NULL);
 
    //Setup tpm access
    switch(opts.tpmdriver) {
diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
index 563f4e8..270ca8a 100644
--- a/stubdom/vtpmmgr/vtpmmgr.c
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -61,7 +61,7 @@ void main_loop(void) {
       tpmcmd->resp = respbuf;
 
       /* Process the command */
-      vtpmmgr_handle_cmd(tpmcmd->uuid, tpmcmd);
+      vtpmmgr_handle_cmd(tpmcmd->opaque, tpmcmd);
 
       /* Send response */
       tpmback_resp(tpmcmd);
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sd-0004DX-Ou; Mon, 10 Dec 2012 19:56:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sa-0004Ax-GD
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:16 +0000
Received: from [85.158.139.83:31200] by server-7.bemta-5.messagelabs.com id
	FE/59-08009-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355169374!24924093!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21235 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-14.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fc38f0000fdd9@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc38f0000fdd9 ;
	Mon, 10 Dec 2012 14:56:18 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uH007869; 
	Mon, 10 Dec 2012 14:56:11 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:43 -0500
Message-Id: <1355169347-25917-11-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 10/14] mini-os/tpmback: set up callbacks before
	enumeration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The open/close callbacks in tpmback cannot be properly initialized in
order to catch the initial enumeration events because init_tpmback
clears the callbacks and then asynchronously starts the enumeration of
existing tpmback devices. Fix this by passing the callbacks to
init_tpmback so they can be installed before enumeration.

This also removes the unused callbacks for suspend and resume.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/include/tpmback.h | 12 +-----------
 extras/mini-os/tpmback.c         | 31 +++----------------------------
 stubdom/vtpm/vtpm.c              |  2 +-
 stubdom/vtpmmgr/init.c           |  2 +-
 4 files changed, 6 insertions(+), 41 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index ec9eda4..3c11c34 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -56,7 +56,7 @@ struct tpmcmd {
 typedef struct tpmcmd tpmcmd_t;
 
 /* Initialize the tpm backend driver */
-void init_tpmback(void);
+void init_tpmback(void (*open_cb)(domid_t, unsigned int), void (*close_cb)(domid_t, unsigned int));
 
 /* Shutdown tpm backend driver */
 void shutdown_tpmback(void);
@@ -94,14 +94,4 @@ int tpmback_num_frontends(void);
  * The return value is internally allocated, so don't free it */
 unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle);
 
-/* Specify a function to call when a new tpm device connects */
-void tpmback_set_open_callback(void (*cb)(domid_t, unsigned int));
-
-/* Specify a function to call when a tpm device disconnects */
-void tpmback_set_close_callback(void (*cb)(domid_t, unsigned int));
-
-//Not Implemented
-void tpmback_set_suspend_callback(void (*cb)(domid_t, unsigned int));
-void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int));
-
 #endif
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 69a7f2d..1c46e5d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -114,8 +114,6 @@ struct tpmback_dev {
    /* Callbacks */
    void (*open_callback)(domid_t, unsigned int);
    void (*close_callback)(domid_t, unsigned int);
-   void (*suspend_callback)(domid_t, unsigned int);
-   void (*resume_callback)(domid_t, unsigned int);
 };
 typedef struct tpmback_dev tpmback_dev_t;
 
@@ -131,8 +129,6 @@ static tpmback_dev_t gtpmdev = {
    .events = NULL,
    .open_callback = NULL,
    .close_callback = NULL,
-   .suspend_callback = NULL,
-   .resume_callback = NULL,
 };
 struct wait_queue_head waitq;
 int globalinit = 0;
@@ -772,23 +768,6 @@ unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle)
    return tpmif->uuid;
 }
 
-void tpmback_set_open_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.open_callback = cb;
-}
-void tpmback_set_close_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.close_callback = cb;
-}
-void tpmback_set_suspend_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.suspend_callback = cb;
-}
-void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.resume_callback = cb;
-}
-
 static void event_listener(void)
 {
    const char* bepath = "backend/vtpm2";
@@ -835,7 +814,7 @@ void event_thread(void* p) {
    event_listener();
 }
 
-void init_tpmback(void)
+void init_tpmback(void (*open_cb)(domid_t, unsigned int), void (*close_cb)(domid_t, unsigned int))
 {
    if(!globalinit) {
       init_waitqueue_head(&waitq);
@@ -847,8 +826,8 @@ void init_tpmback(void)
    gtpmdev.num_tpms = 0;
    gtpmdev.flags = 0;
 
-   gtpmdev.open_callback = gtpmdev.close_callback = NULL;
-   gtpmdev.suspend_callback = gtpmdev.resume_callback = NULL;
+   gtpmdev.open_callback = open_cb;
+   gtpmdev.close_callback = close_cb;
 
    eventthread = create_thread("tpmback-listener", event_thread, NULL);
 
@@ -856,10 +835,6 @@ void init_tpmback(void)
 
 void shutdown_tpmback(void)
 {
-   /* Disable callbacks */
-   gtpmdev.open_callback = gtpmdev.close_callback = NULL;
-   gtpmdev.suspend_callback = gtpmdev.resume_callback = NULL;
-
    TPMBACK_LOG("Shutting down tpm backend\n");
    /* Set the quit flag */
    gtpmdev.flags = TPMIF_CLOSED;
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index d576c8f..feb8aa3 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -357,7 +357,7 @@ int main(int argc, char **argv)
    }
 
    /* Initialize devices */
-   init_tpmback();
+   init_tpmback(NULL, NULL);
    if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
       error("Unable to initialize tpmfront device");
       goto abort_posttpmfront;
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index a158020..00dd9f3 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -462,7 +462,7 @@ TPM_RESULT vtpmmgr_init(int argc, char** argv) {
    }
 
    //Setup tpmback device
-   init_tpmback();
+   init_tpmback(NULL, NULL);
 
    //Setup tpm access
    switch(opts.tpmdriver) {
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sb-0004C0-BB; Mon, 10 Dec 2012 19:56:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004An-6k
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.83:15115] by server-15.bemta-5.messagelabs.com id
	98/30-20523-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355169373!27502597!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31457 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-2.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fb9ee0000fdcf@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fb9ee0000fdcf ;
	Mon, 10 Dec 2012 14:56:16 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8u9007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:35 -0500
Message-Id: <1355169347-25917-3-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 02/14] stubdom/vtpm: correct the buffer size
	returned by TPM_CAP_PROP_INPUT_BUFFER
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vtpm2 ABI supports packets of up to 4088 bytes by default; expose
this property through the TPM's interface so clients do not attempt to
send larger packets.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/Makefile           |  1 +
 stubdom/vtpm-bufsize.patch | 13 +++++++++++++
 2 files changed, 14 insertions(+)
 create mode 100644 stubdom/vtpm-bufsize.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 709b71e..12f8a6f 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -207,6 +207,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	tar xzf $<
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
+	patch -d $@ -p1 < vtpm-bufsize.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-bufsize.patch b/stubdom/vtpm-bufsize.patch
new file mode 100644
index 0000000..9c9304c
--- /dev/null
+++ b/stubdom/vtpm-bufsize.patch
@@ -0,0 +1,13 @@
+diff --git a/config.h.in b/config.h.in
+index d16a997..8088a2a 100644
+--- a/config.h.in
++++ b/config.h.in
+@@ -27,7 +27,7 @@
+ #define TPM_STORAGE_NAME "${TPM_STORAGE_NAME}"
+ #define TPM_DEVICE_NAME  "${TPM_DEVICE_NAME}"
+ #define TPM_LOG_FILE     "${TPM_LOG_FILE}"
+-#define TPM_CMD_BUF_SIZE 4096
++#define TPM_CMD_BUF_SIZE 4088
+ 
+ #endif /* _CONFIG_H_ */
+ 
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sf-0004F3-Bn; Mon, 10 Dec 2012 19:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sc-0004Az-Lq
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:18 +0000
Received: from [85.158.139.83:15295] by server-9.bemta-5.messagelabs.com id
	58/BA-10690-26E36C05; Mon, 10 Dec 2012 19:56:18 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355169375!29270011!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4717 invoked from network); 10 Dec 2012 19:56:16 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-5.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:16 -0000
X-TM-IMSS-Message-ID: <099fc8dc0000fddc@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc8dc0000fddc ;
	Mon, 10 Dec 2012 14:56:20 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uK007869; 
	Mon, 10 Dec 2012 14:56:12 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:46 -0500
Message-Id: <1355169347-25917-14-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 13/14] stubdom/vtpm: constrain locality by XSM
	label
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds the ability for a vTPM to constrain what localities a given
client domain can use based on its XSM label. For example:

  locality=user_1:vm_r:domU_t=0,1,2 locality=user_1:vm_r:watcher_t=5

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/vtpm/vtpm.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 69 insertions(+), 2 deletions(-)

diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index feb8aa3..3fbdc59 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -139,6 +139,30 @@ int check_ordinal(tpmcmd_t* tpmcmd) {
    return true;
 }
 
+struct locality_item {
+	char* lbl;
+	uint8_t mask[32];
+};
+#define MAX_CLIENT_LOCALITIES 16
+static struct locality_item client_locality[MAX_CLIENT_LOCALITIES];
+static int nr_client_localities = 0;
+
+static void *generate_locality_mask(domid_t domid, unsigned int handle)
+{
+   char label[512];
+   int i;
+   if (tpmback_get_peercontext(domid, handle, label, sizeof(label)))
+      BUG();
+   for(i=0; i < nr_client_localities; i++) {
+      if (!strcmp(client_locality[i].lbl, label))
+         goto found;
+   }
+   return NULL;
+ found:
+   tpmback_set_opaque(domid, handle, client_locality[i].mask);
+   return client_locality[i].mask;
+}
+
 static void main_loop(void) {
    tpmcmd_t* tpmcmd = NULL;
    int res = -1;
@@ -164,11 +188,24 @@ static void main_loop(void) {
    while(tpmcmd) {
       /* Handle the request */
       if(tpmcmd->req_len) {
+	 uint8_t* locality_mask = tpmcmd->opaque;
+	 uint8_t locality_bit = (1 << (tpmcmd->locality & 7));
+	 int locality_byte = tpmcmd->locality >> 3;
 	 tpmcmd->resp = NULL;
 	 tpmcmd->resp_len = 0;
 
-         /* First check for disabled ordinals */
-         if(!check_ordinal(tpmcmd)) {
+	 if (nr_client_localities && !locality_mask)
+	    locality_mask = generate_locality_mask(tpmcmd->domid, tpmcmd->handle);
+	 if (nr_client_localities && !locality_mask) {
+            error("Unknown client label in tpm_handle_command");
+            create_error_response(tpmcmd, TPM_FAIL);
+	 }
+	 else if (nr_client_localities && !(locality_mask[locality_byte] & locality_bit)) {
+            error("Invalid locality (%d) for client in tpm_handle_command", tpmcmd->locality);
+            create_error_response(tpmcmd, TPM_FAIL);
+	 }
+         /* Check for disabled ordinals */
+         else if(!check_ordinal(tpmcmd)) {
             create_error_response(tpmcmd, TPM_BAD_ORDINAL);
          }
          /* If not disabled, do the command */
@@ -273,6 +310,36 @@ int parse_cmd_line(int argc, char** argv)
             pch = strtok(NULL, ",");
          }
       }
+      else if(!strncmp(argv[i], "locality=", 9)) {
+        char *lbl = argv[i] + 9;
+	char *pch = strchr(lbl, '=');
+	uint8_t* locality_mask = client_locality[nr_client_localities].mask;
+	if (pch == NULL) {
+		 error("Invalid locality specification: %s", lbl);
+		 return -1;
+	}
+	if (nr_client_localities == MAX_CLIENT_LOCALITIES) {
+		error("Too many locality specifications");
+		return -1;
+	}
+	client_locality[nr_client_localities].lbl = lbl;
+	memset(locality_mask, 0, 32);
+	nr_client_localities++;
+	*pch = 0;
+	pch = strtok(pch + 1, ",");
+	while (pch != NULL) {
+		unsigned int loc;
+		if (sscanf(pch, "%u", &loc) == 1) {
+			uint8_t locality_bit = (1 << (loc & 7));
+			int locality_byte = loc >> 3;
+			locality_mask[locality_byte] |= locality_bit;
+		} else {
+			error("Invalid locality item: %s", pch);
+			return -1;
+		}
+		pch = strtok(NULL, ",");
+	}
+      }
       else {
 	 error("Invalid command line option `%s'", argv[i]);
       }
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9SZ-0004BJ-I8; Mon, 10 Dec 2012 19:56:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SY-0004Am-8i
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:14 +0000
Received: from [85.158.139.211:10125] by server-1.bemta-5.messagelabs.com id
	B2/7C-12813-D5E36C05; Mon, 10 Dec 2012 19:56:13 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355169371!19812185!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3990 invoked from network); 10 Dec 2012 19:56:12 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-14.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:12 -0000
X-TM-IMSS-Message-ID: <67fe22dd0008022a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe22dd0008022a ;
	Mon, 10 Dec 2012 14:55:40 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8u7007869; 
	Mon, 10 Dec 2012 14:56:08 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:33 -0500
Message-Id: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v3 00/14] vTPM new ABI, extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch queue goes on top of Matthew Fioravante's [VTPM v7 0/8]
series.  The xenbus device name has changed to "vtpm2", and some
documentation has been added about PCRs (those extended by pv-grub and
those added in locality 5).  A new Linux patch is also needed, and will
be posted as a reply to this email; the layout of the shared page has
changed slightly (length field changed from uint16_t to uint32_t).

Patches have been reordered a bit in an attempt to have the series make
the most sense possible if partially applied.  Patch #8 still breaks
automatic vTPM domain shutdown, so applying only #1-6 is useful if that
feature needs to keep working until the libxl-based shutdown request is
finished.

Patches 10-13 are new here; they allow localities to be restricted for
certain domains.  This is an important security feature when multiple
domains access the same vTPM; without it, the locality 5 PCRs
introduced by #7 are no different from the lower 24 defined in the TPM
specification.

Patch 14 is a build cleanup that fixes a failure on the third
consecutive build without an intervening "make clean", caused by
NEWLIB_STAMPFILE being touched after gmp is extracted.


New ABI patches:
    [PATCH 01/14] mini-os/tpm{back,front}: Change shared page ABI
    [PATCH 02/14] stubdom/vtpm: correct the buffer size returned by
    [PATCH 03/14] stubdom/vtpm: Support locality field

New vTPM features:
    [PATCH 04/14] stubdom/vtpm: Allow reopen of closed devices
    [PATCH 05/14] stubdom/vtpm: make state save operation atomic
    [PATCH 06/14] stubdom/grub: send kernel measurements to vTPM

Support for multiple client domains distinguished by locality:
    [PATCH 07/14] stubdom/vtpm: Add locality-5 PCRs
    [PATCH 08/14] stubdom/vtpm: support multiple backends
    [PATCH 09/14] stubdom/vtpm: Add PCR pass-through to hardware TPM
    [PATCH 10/14] mini-os/tpmback: set up callbacks before enumeration
    [PATCH 11/14] mini-os/tpmback: Replace UUID field with opaque
    [PATCH 12/14] mini-os/tpmback: add tpmback_get_peercontext
    [PATCH 13/14] stubdom/vtpm: constrain locality by XSM label

Other:
    [PATCH 14/14] stubdom/Makefile: Fix gmp extract rule


From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9SZ-0004B0-5q; Mon, 10 Dec 2012 19:56:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SY-0004An-9q
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:14 +0000
Received: from [85.158.139.211:40566] by server-15.bemta-5.messagelabs.com id
	D4/30-20523-D5E36C05; Mon, 10 Dec 2012 19:56:13 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355169371!19812186!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3991 invoked from network); 10 Dec 2012 19:56:12 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-14.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:12 -0000
X-TM-IMSS-Message-ID: <67fe26b40008022d@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe26b40008022d ;
	Mon, 10 Dec 2012 14:55:41 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uB007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:37 -0500
Message-Id: <1355169347-25917-5-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 04/14] stubdom/vtpm: Allow reopen of closed
	devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Allow the vtpm device to be disconnected and reconnected so that a
bootloader (like pv-grub) can submit measurements and return the vtpm
device to its initial state before booting the target kernel.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/tpmback.c  | 23 ++++++++++++++++++++++-
 extras/mini-os/tpmfront.c | 14 ++++++++++++--
 2 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 50f8a5d..69a7f2d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -608,6 +608,24 @@ error_post_map:
    return -1;
 }
 
+static void disconnect_fe(tpmif_t* tpmif)
+{
+   if (tpmif->status == CONNECTED) {
+      tpmif->status = DISCONNECTING;
+      mask_evtchn(tpmif->evtchn);
+
+      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1)) {
+	 TPMBACK_ERR("%u/%u Error occurred while trying to unmap shared page\n", (unsigned int) tpmif->domid, tpmif->handle);
+      }
+
+      unbind_evtchn(tpmif->evtchn);
+   }
+   tpmif->status = DISCONNECTED;
+   tpmif_change_state(tpmif, XenbusStateInitWait);
+
+   TPMBACK_LOG("Frontend %u/%u disconnected\n", (unsigned int) tpmif->domid, tpmif->handle);
+}
+
 static int frontend_changed(tpmif_t* tpmif)
 {
    int state = xenbus_read_integer(tpmif->fe_state_path);
@@ -634,8 +652,11 @@ static int frontend_changed(tpmif_t* tpmif)
 	 tpmif_change_state(tpmif, XenbusStateClosing);
 	 break;
 
-      case XenbusStateUnknown: /* keep it here */
       case XenbusStateClosed:
+         disconnect_fe(tpmif);
+	 break;
+
+      case XenbusStateUnknown: /* keep it here */
 	 free_tpmif(tpmif);
 	 break;
 
diff --git a/extras/mini-os/tpmfront.c b/extras/mini-os/tpmfront.c
index ac9ba42..1ef51cf 100644
--- a/extras/mini-os/tpmfront.c
+++ b/extras/mini-os/tpmfront.c
@@ -146,6 +146,9 @@ static int wait_for_backend_closed(xenbus_event_queue* events, char* path)
 	 case XenbusStateClosed:
 	    TPMFRONT_LOG("Backend Closed\n");
 	    return 0;
+	 case XenbusStateInitWait:
+	    TPMFRONT_LOG("Backend Closed (waiting for reconnect)\n");
+	    return 0;
 	 default:
 	    xenbus_wait_for_watch(events);
       }
@@ -306,10 +309,10 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
    TPMFRONT_LOG("Shutting down tpmfront\n");
    /* disconnect */
    if(dev->state == XenbusStateConnected) {
-      dev->state = XenbusStateClosing;
-      //FIXME: Transaction for this?
       /* Tell backend we are closing */
+      dev->state = XenbusStateClosing;
       if((err = xenbus_printf(XBT_NIL, dev->nodename, "state", "%u", (unsigned int) dev->state))) {
+	 TPMFRONT_ERR("Unable to write to %s, error was %s", dev->nodename, err);
 	 free(err);
       }
 
@@ -333,6 +336,13 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
       /* Wait for the backend to close and unmap shared pages, ignore any errors */
       wait_for_backend_state_changed(dev, XenbusStateClosed);
 
+      /* Prepare for a later reopen (possibly by a kexec'd kernel) */
+      dev->state = XenbusStateInitialising;
+      if((err = xenbus_printf(XBT_NIL, dev->nodename, "state", "%u", (unsigned int) dev->state))) {
+	 TPMFRONT_ERR("Unable to write to %s, error was %s", dev->nodename, err);
+	 free(err);
+      }
+
       /* Close event channel and unmap shared page */
       mask_evtchn(dev->evtchn);
       unbind_evtchn(dev->evtchn);
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sa-0004Bn-UM; Mon, 10 Dec 2012 19:56:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SY-0004Ao-Nz
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.211:40579] by server-14.bemta-5.messagelabs.com id
	38/98-09538-D5E36C05; Mon, 10 Dec 2012 19:56:13 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355169371!18378533!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16560 invoked from network); 10 Dec 2012 19:56:11 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-10.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:11 -0000
X-TM-IMSS-Message-ID: <67fe24250008022c@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe24250008022c ;
	Mon, 10 Dec 2012 14:55:40 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8u8007869; 
	Mon, 10 Dec 2012 14:56:08 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:34 -0500
Message-Id: <1355169347-25917-2-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 01/14] mini-os/tpm{back,
	front}: Change shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the vTPM shared page ABI from a copy of the Xen network
interface to a single-page interface that better reflects the expected
behavior of a TPM: only a single request packet can be sent at any given
time, and every packet sent generates a single response packet. This
protocol change should also increase efficiency as it avoids mapping and
unmapping grants when possible. The vtpm xenbus device has been renamed
to "vtpm2" to avoid conflicts with existing (xen-patched) kernels
supporting the old interface.

While the contents of the shared page have been defined to allow packets
larger than a single page (actually 4088 bytes) by allowing the client
to add extra grant references, the mapping of these extra references has
not been implemented; a feature node in xenstore may be used in the
future to indicate full support for the multi-page protocol. Most uses
of the TPM should not require this feature.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/include/tpmback.h  |   1 +
 extras/mini-os/include/tpmfront.h |   7 +-
 extras/mini-os/tpmback.c          | 135 ++++++++++++++------------------------
 extras/mini-os/tpmfront.c         | 119 +++++++++++++--------------------
 tools/libxl/libxl.c               |   8 +--
 xen/include/public/io/tpmif.h     |  45 +++----------
 6 files changed, 114 insertions(+), 201 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index ff86732..ec9eda4 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -43,6 +43,7 @@
 
 struct tpmcmd {
    domid_t domid;		/* Domid of the frontend */
+   uint8_t locality;    /* Locality requested by the frontend */
    unsigned int handle;	/* Handle of the frontend */
    unsigned char uuid[16];			/* uuid of the tpm interface */
 
diff --git a/extras/mini-os/include/tpmfront.h b/extras/mini-os/include/tpmfront.h
index fd2cb17..a0c7c4d 100644
--- a/extras/mini-os/include/tpmfront.h
+++ b/extras/mini-os/include/tpmfront.h
@@ -37,9 +37,7 @@ struct tpmfront_dev {
    grant_ref_t ring_ref;
    evtchn_port_t evtchn;
 
-   tpmif_tx_interface_t* tx;
-
-   void** pages;
+   vtpm_shared_page_t *page;
 
    domid_t bedomid;
    char* nodename;
@@ -77,6 +75,9 @@ void shutdown_tpmfront(struct tpmfront_dev* dev);
  * */
 int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t** resp, size_t* resplen);
 
+/* Set the locality used for communicating with a vTPM */
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality);
+
 #ifdef HAVE_LIBC
 #include <sys/stat.h>
 /* POSIX IO functions:
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 658fed1..50f8a5d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -86,10 +86,7 @@ struct tpmif {
    evtchn_port_t evtchn;
 
    /* Shared page */
-   tpmif_tx_interface_t* tx;
-
-   /* pointer to TPMIF_RX_RING_SIZE pages */
-   void** pages;
+   vtpm_shared_page_t *page;
 
    enum xenbus_state state;
    enum { DISCONNECTED, DISCONNECTING, CONNECTED } status;
@@ -312,7 +309,6 @@ int insert_tpmif(tpmif_t* tpmif)
       remove_tpmif(tpmif);
       goto error_post_irq;
    }
-
    return 0;
 error:
    local_irq_restore(flags);
@@ -336,7 +332,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    if (tpmif->state == state)
       return 0;
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
 
    if((err = xenbus_read(XBT_NIL, path, &value))) {
       TPMBACK_ERR("Unable to read backend state %s, error was %s\n", path, err);
@@ -362,7 +358,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    }
 
    /*update xenstore*/
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "state", "%u", state))) {
       TPMBACK_ERR("Error writing to xenstore %s, error was %s new state=%d\n", path, err, state);
       free(err);
@@ -386,8 +382,7 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
    tpmif->fe_state_path = NULL;
    tpmif->state = XenbusStateInitialising;
    tpmif->status = DISCONNECTED;
-   tpmif->tx = NULL;
-   tpmif->pages = NULL;
+   tpmif->page = NULL;
    tpmif->flags = 0;
    memset(tpmif->uuid, 0, sizeof(tpmif->uuid));
    return tpmif;
@@ -395,9 +390,6 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
 
 void __free_tpmif(tpmif_t* tpmif)
 {
-   if(tpmif->pages) {
-      free(tpmif->pages);
-   }
    if(tpmif->fe_path) {
       free(tpmif->fe_path);
    }
@@ -424,23 +416,17 @@ tpmif_t* new_tpmif(domid_t domid, unsigned int handle)
    tpmif = __init_tpmif(domid, handle);
 
    /* Get the uuid from xenstore */
-   snprintf(path, 512, "backend/vtpm/%u/%u/uuid", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/uuid", (unsigned int) domid, handle);
    if((!xenbus_read_uuid(path, tpmif->uuid))) {
       TPMBACK_ERR("Error reading %s\n", path);
       goto error;
    }
 
-   /* allocate pages to be used for shared mapping */
-   if((tpmif->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE)) == NULL) {
-      goto error;
-   }
-   memset(tpmif->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-
    if(tpmif_change_state(tpmif, XenbusStateInitWait)) {
       goto error;
    }
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/frontend", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/frontend", (unsigned int) domid, handle);
    if((err = xenbus_read(XBT_NIL, path, &tpmif->fe_path))) {
       TPMBACK_ERR("Error creating new tpm instance xenbus_read(%s), Error = %s", path, err);
       free(err);
@@ -486,7 +472,7 @@ void free_tpmif(tpmif_t* tpmif)
       tpmif->status = DISCONNECTING;
       mask_evtchn(tpmif->evtchn);
 
-      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1)) {
+      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1)) {
 	 TPMBACK_ERR("%u/%u Error occured while trying to unmap shared page\n", (unsigned int) tpmif->domid, tpmif->handle);
       }
 
@@ -508,7 +494,7 @@ void free_tpmif(tpmif_t* tpmif)
    schedule();
 
    /* Remove the old xenbus entries */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_rm(XBT_NIL, path))) {
       TPMBACK_ERR("Error cleaning up xenbus entries path=%s error=%s\n", path, err);
       free(err);
@@ -529,9 +515,10 @@ void free_tpmif(tpmif_t* tpmif)
 void tpmback_handler(evtchn_port_t port, struct pt_regs *regs, void *data)
 {
    tpmif_t* tpmif = (tpmif_t*) data;
-   tpmif_tx_request_t* tx = &tpmif->tx->ring[0].req;
-   /* Throw away 0 size events, these can trigger from event channel unmasking */
-   if(tx->size == 0)
+   vtpm_shared_page_t* pg = tpmif->page;
+
+   /* Only pay attention if the request is ready */
+   if (pg->state == 0)
       return;
 
    TPMBACK_DEBUG("EVENT CHANNEL FIRE %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
@@ -585,11 +572,10 @@ int connect_fe(tpmif_t* tpmif)
    free(value);
 
    domid = tpmif->domid;
-   if((tpmif->tx = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
+   if((tpmif->page = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
       TPMBACK_ERR("Failed to map grant reference %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
       return -1;
    }
-   memset(tpmif->tx, 0, PAGE_SIZE);
 
    /*Bind the event channel */
    if((evtchn_bind_interdomain(tpmif->domid, evtchn, tpmback_handler, tpmif, &tpmif->evtchn)))
@@ -600,7 +586,7 @@ int connect_fe(tpmif_t* tpmif)
    unmask_evtchn(tpmif->evtchn);
 
    /* Write the ready flag and change status to connected */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "ready", "%u", 1))) {
       TPMBACK_ERR("%u/%u Unable to write ready flag on connect_fe()\n", (unsigned int) tpmif->domid, tpmif->handle);
       free(err);
@@ -618,7 +604,7 @@ error_post_evtchn:
    mask_evtchn(tpmif->evtchn);
    unbind_evtchn(tpmif->evtchn);
 error_post_map:
-   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1);
+   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1);
    return -1;
 }
 
@@ -670,8 +656,8 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
    char* value;
    unsigned int udomid = 0;
    tpmif_t* tpmif;
-   /* First check for new frontends, this occurs when /backend/vtpm/<domid>/<handle> gets created. Note we what the sscanf to fail on the last %s */
-   if (sscanf(evstr, "backend/vtpm/%u/%u/%40s", &udomid, handle, cmd) == 2) {
+   /* First check for new frontends, this occurs when /backend/vtpm2/<domid>/<handle> gets created. Note we want the sscanf to fail on the last %s */
+   if (sscanf(evstr, "backend/vtpm2/%u/%u/%40s", &udomid, handle, cmd) == 2) {
       *domid = udomid;
       /* Make sure the entry exists, if this event triggers because the entry dissapeared then ignore it */
       if((err = xenbus_read(XBT_NIL, evstr, &value))) {
@@ -685,7 +671,7 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
 	 return EV_NONE;
       }
       return EV_NEWFE;
-   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm/%u/%40s", &udomid, handle, cmd)) == 3) {
+   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm2/%u/%40s", &udomid, handle, cmd)) == 3) {
       *domid = udomid;
       if (!strcmp(cmd, "state"))
 	 return EV_STCHNG;
@@ -784,7 +770,7 @@ void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int))
 
 static void event_listener(void)
 {
-   const char* bepath = "backend/vtpm";
+   const char* bepath = "backend/vtpm2";
    char **path;
    char* err;
 
@@ -874,6 +860,7 @@ void shutdown_tpmback(void)
 inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, unsigned char uuid[16])
 {
    tpmcmd->domid = domid;
+   tpmcmd->locality = -1;
    tpmcmd->handle = handle;
    memcpy(tpmcmd->uuid, uuid, sizeof(tpmcmd->uuid));
    tpmcmd->req = NULL;
@@ -884,12 +871,12 @@ inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, un
 
 tpmcmd_t* get_request(tpmif_t* tpmif) {
    tpmcmd_t* cmd;
-   tpmif_tx_request_t* tx;
-   int offset;
-   int tocopy;
-   int i;
-   uint32_t domid;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
@@ -899,35 +886,22 @@ tpmcmd_t* get_request(tpmif_t* tpmif) {
    }
    init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->uuid);
 
-   tx = &tpmif->tx->ring[0].req;
-   cmd->req_len = tx->size;
+   shr = tpmif->page;
+   cmd->req_len = shr->length;
+   cmd->locality = shr->locality;
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->req_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
+   }
    /* Allocate the buffer */
    if(cmd->req_len) {
       if((cmd->req = malloc(cmd->req_len)) == NULL) {
 	 goto error;
       }
    }
-   /* Copy the bits from the shared pages */
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->req_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_READ)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during read!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->req_len - offset, PAGE_SIZE);
-      memcpy(&cmd->req[offset], tpmif->pages[i], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
-
-   }
+   /* Copy the bits from the shared page(s) */
+   memcpy(cmd->req, offset + (uint8_t*)shr, cmd->req_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Received Tpm Command from %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->req_len);
@@ -958,38 +932,24 @@ error:
 
 void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 {
-   tpmif_tx_request_t* tx;
-   int offset;
-   int i;
-   uint32_t domid;
-   int tocopy;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
-   tx = &tpmif->tx->ring[0].req;
-   tx->size = cmd->resp_len;
-
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->resp_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_WRITE)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during write!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->resp_len - offset, PAGE_SIZE);
-      memcpy(tpmif->pages[i], &cmd->resp[offset], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
+   shr = tpmif->page;
+   shr->length = cmd->resp_len;
 
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->resp_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
    }
+   memcpy(offset + (uint8_t*)shr, cmd->resp, cmd->resp_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Sent response to %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->resp_len);
@@ -1003,6 +963,7 @@ void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 #endif
    /* clear the ready flag and send the event channel notice to the frontend */
    tpmif_req_finished(tpmif);
+   shr->state = 0;
    notify_remote_via_evtchn(tpmif->evtchn);
 error:
    local_irq_restore(flags);
diff --git a/extras/mini-os/tpmfront.c b/extras/mini-os/tpmfront.c
index 0218d7f..ac9ba42 100644
--- a/extras/mini-os/tpmfront.c
+++ b/extras/mini-os/tpmfront.c
@@ -176,7 +176,7 @@ static int wait_for_backend_state_changed(struct tpmfront_dev* dev, XenbusState
 	 ret = wait_for_backend_closed(&events, path);
 	 break;
       default:
-	 break;
+         TPMFRONT_ERR("Bad wait state %d, ignoring\n", state);
    }
 
    if((err = xenbus_unwatch_path_token(XBT_NIL, path, path))) {
@@ -190,13 +190,13 @@ static int tpmfront_connect(struct tpmfront_dev* dev)
 {
    char* err;
    /* Create shared page */
-   dev->tx = (tpmif_tx_interface_t*) alloc_page();
-   if(dev->tx == NULL) {
+   dev->page = (vtpm_shared_page_t*) alloc_page();
+   if(dev->page == NULL) {
       TPMFRONT_ERR("Unable to allocate page for shared memory\n");
       goto error;
    }
-   memset(dev->tx, 0, PAGE_SIZE);
-   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->tx), 0);
+   memset(dev->page, 0, PAGE_SIZE);
+   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->page), 0);
    TPMFRONT_DEBUG("grant ref is %lu\n", (unsigned long) dev->ring_ref);
 
    /*Create event channel */
@@ -228,7 +228,7 @@ error_postevtchn:
       unbind_evtchn(dev->evtchn);
 error_postmap:
       gnttab_end_access(dev->ring_ref);
-      free_page(dev->tx);
+      free_page(dev->page);
 error:
    return -1;
 }
@@ -240,7 +240,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    char path[512];
    char* value, *err;
    unsigned long long ival;
-   int i;
 
    printk("============= Init TPM Front ================\n");
 
@@ -251,7 +250,7 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    dev->fd = -1;
 #endif
 
-   nodename = _nodename ? _nodename : "device/vtpm/0";
+   nodename = _nodename ? _nodename : "device/vtpm2/0";
    dev->nodename = strdup(nodename);
 
    init_waitqueue_head(&dev->waitq);
@@ -289,19 +288,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
       goto error;
    }
 
-   /* Allocate pages that will contain the messages */
-   dev->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE);
-   if(dev->pages == NULL) {
-      goto error;
-   }
-   memset(dev->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-   for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-      dev->pages[i] = (void*)alloc_page();
-      if(dev->pages[i] == NULL) {
-	 goto error;
-      }
-   }
-
    TPMFRONT_LOG("Initialization Completed successfully\n");
 
    return dev;
@@ -314,8 +300,6 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 {
    char* err;
    char path[512];
-   int i;
-   tpmif_tx_request_t* tx;
    if(dev == NULL) {
       return;
    }
@@ -349,27 +333,12 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
       /* Wait for the backend to close and unmap shared pages, ignore any errors */
       wait_for_backend_state_changed(dev, XenbusStateClosed);
 
-      /* Cleanup any shared pages */
-      if(dev->pages) {
-	 for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-	    if(dev->pages[i]) {
-	       tx = &dev->tx->ring[i].req;
-	       if(tx->ref != 0) {
-		  gnttab_end_access(tx->ref);
-	       }
-	       free_page(dev->pages[i]);
-	    }
-	 }
-	 free(dev->pages);
-      }
-
       /* Close event channel and unmap shared page */
       mask_evtchn(dev->evtchn);
       unbind_evtchn(dev->evtchn);
       gnttab_end_access(dev->ring_ref);
 
-      free_page(dev->tx);
-
+      free_page(dev->page);
    }
 
    /* Cleanup memory usage */
@@ -387,13 +356,17 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 
 int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 {
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
    int i;
-   tpmif_tx_request_t* tx = NULL;
+#endif
    /* Error Checking */
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to send message through disconnected frontend\n");
       return -1;
    }
+   shr = dev->page;
 
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Sending Msg to backend size=%u", (unsigned int) length);
@@ -407,19 +380,16 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 #endif
 
    /* Copy to shared pages now */
-   for(i = 0; length > 0 && i < TPMIF_TX_RING_SIZE; ++i) {
-      /* Share the page */
-      tx = &dev->tx->ring[i].req;
-      tx->unused = 0;
-      tx->addr = virt_to_mach(dev->pages[i]);
-      tx->ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->pages[i]), 0);
-      /* Copy the bits to the page */
-      tx->size = length > PAGE_SIZE ? PAGE_SIZE : length;
-      memcpy(dev->pages[i], &msg[i * PAGE_SIZE], tx->size);
-
-      /* Update counters */
-      length -= tx->size;
+   offset = sizeof(*shr);
+   if (length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Message too long for shared page\n");
+      return -1;
    }
+   memcpy(offset + (uint8_t*)shr, msg, length);
+   shr->length = length;
+   barrier();
+   shr->state = 1;
+
    dev->waiting = 1;
    dev->resplen = 0;
 #ifdef HAVE_LIBC
@@ -434,44 +404,41 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 }
 int tpmfront_recv(struct tpmfront_dev* dev, uint8_t** msg, size_t *length)
 {
-   tpmif_tx_request_t* tx;
-   int i;
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
+   int i;
+#endif
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to receive message from disconnected frontend\n");
       return -1;
    }
    /*Wait for the response */
    wait_event(dev->waitq, (!dev->waiting));
+   shr = dev->page;
+   if (shr->state != 0)
+      goto quit;
 
    /* Initialize */
    *msg = NULL;
-   *length = 0;
+   *length = shr->length;
+   offset = sizeof(*shr);
 
-   /* special case, just quit */
-   tx = &dev->tx->ring[0].req;
-   if(tx->size == 0 ) {
-       goto quit;
-   }
-   /* Get the total size */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      *length += tx->size;
+   if (*length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Reply too long for shared page\n");
+      return -1;
    }
+
    /* Alloc the buffer */
    if(dev->respbuf) {
       free(dev->respbuf);
    }
    *msg = dev->respbuf = malloc(*length);
    dev->resplen = *length;
+
    /* Copy the bits */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      memcpy(&(*msg)[i * PAGE_SIZE], dev->pages[i], tx->size);
-      gnttab_end_access(tx->ref);
-      tx->ref = 0;
-   }
+   memcpy(*msg, offset + (uint8_t*)shr, *length);
+
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Received response from backend size=%u", (unsigned int) *length);
    for(i = 0; i < *length; ++i) {
@@ -504,6 +471,14 @@ int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t*
    return 0;
 }
 
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality)
+{
+   if (!dev || !dev->page)
+      return -1;
+   dev->page->locality = locality;
+   return 0;
+}
+
 #ifdef HAVE_LIBC
 #include <errno.h>
 int tpmfront_open(struct tpmfront_dev* dev)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8d921bc..7af5da3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1756,7 +1756,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
             goto out;
         }
         l = libxl__xs_directory(gc, XBT_NULL,
-              GCSPRINTF("%s/device/vtpm", dompath), &nb);
+              GCSPRINTF("%s/device/vtpm2", dompath), &nb);
         if(l == NULL || nb == 0) {
             vtpm->devid = 0;
         } else {
@@ -1815,7 +1815,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
 
     *num = 0;
 
-    fe_path = libxl__sprintf(gc, "%s/device/vtpm", libxl__xs_get_dompath(gc, domid));
+    fe_path = libxl__sprintf(gc, "%s/device/vtpm2", libxl__xs_get_dompath(gc, domid));
     dir = libxl__xs_directory(gc, XBT_NULL, fe_path, &ndirs);
     if(dir) {
        vtpms = malloc(sizeof(*vtpms) * ndirs);
@@ -1830,7 +1830,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
           vtpm->devid = atoi(*dir);
 
           tmp = libxl__xs_read(gc, XBT_NULL,
-                GCSPRINTF("%s/%s/backend_id",
+                GCSPRINTF("%s/%s/backend-id",
                    fe_path, *dir));
           vtpm->backend_domid = atoi(tmp);
 
@@ -1863,7 +1863,7 @@ int libxl_device_vtpm_getinfo(libxl_ctx *ctx,
     dompath = libxl__xs_get_dompath(gc, domid);
     vtpminfo->devid = vtpm->devid;
 
-    vtpmpath = GCSPRINTF("%s/device/vtpm/%d", dompath, vtpminfo->devid);
+    vtpmpath = GCSPRINTF("%s/device/vtpm2/%d", dompath, vtpminfo->devid);
     vtpminfo->backend = xs_read(ctx->xsh, XBT_NULL,
           GCSPRINTF("%s/backend", vtpmpath), NULL);
     if (!vtpminfo->backend) {
diff --git a/xen/include/public/io/tpmif.h b/xen/include/public/io/tpmif.h
index 02ccdab..7c96530 100644
--- a/xen/include/public/io/tpmif.h
+++ b/xen/include/public/io/tpmif.h
@@ -1,7 +1,7 @@
 /******************************************************************************
  * tpmif.h
  *
- * TPM I/O interface for Xen guest OSes.
+ * TPM I/O interface for Xen guest OSes, v2
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to
@@ -21,48 +21,23 @@
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  *
- * Copyright (c) 2005, IBM Corporation
- *
- * Author: Stefan Berger, stefanb@us.ibm.com
- * Grant table support: Mahadevan Gomathisankaran
- *
- * This code has been derived from tools/libxc/xen/io/netif.h
- *
- * Copyright (c) 2003-2004, Keir Fraser
  */
 
 #ifndef __XEN_PUBLIC_IO_TPMIF_H__
 #define __XEN_PUBLIC_IO_TPMIF_H__
 
-#include "../grant_table.h"
+struct vtpm_shared_page {
+    uint32_t length;         /* request/response length in bytes */
 
-struct tpmif_tx_request {
-    unsigned long addr;   /* Machine address of packet.   */
-    grant_ref_t ref;      /* grant table access reference */
-    uint16_t unused;
-    uint16_t size;        /* Packet size in bytes.        */
-};
-typedef struct tpmif_tx_request tpmif_tx_request_t;
-
-/*
- * The TPMIF_TX_RING_SIZE defines the number of pages the
- * front-end and backend can exchange (= size of array).
- */
-typedef uint32_t TPMIF_RING_IDX;
-
-#define TPMIF_TX_RING_SIZE 1
-
-/* This structure must fit in a memory page. */
-
-struct tpmif_ring {
-    struct tpmif_tx_request req;
-};
-typedef struct tpmif_ring tpmif_ring_t;
+    uint8_t state;           /* 0 - response ready / idle
+                              * 1 - request ready / working */
+    uint8_t locality;        /* for the current request */
+    uint8_t pad;
 
-struct tpmif_tx_interface {
-    struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
+    uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
+    uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
 };
-typedef struct tpmif_tx_interface tpmif_tx_interface_t;
+typedef struct vtpm_shared_page vtpm_shared_page_t;
 
 #endif
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sf-0004F3-Bn; Mon, 10 Dec 2012 19:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sc-0004Az-Lq
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:18 +0000
Received: from [85.158.139.83:15295] by server-9.bemta-5.messagelabs.com id
	58/BA-10690-26E36C05; Mon, 10 Dec 2012 19:56:18 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355169375!29270011!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4717 invoked from network); 10 Dec 2012 19:56:16 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-5.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:16 -0000
X-TM-IMSS-Message-ID: <099fc8dc0000fddc@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc8dc0000fddc ;
	Mon, 10 Dec 2012 14:56:20 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uK007869; 
	Mon, 10 Dec 2012 14:56:12 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:46 -0500
Message-Id: <1355169347-25917-14-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 13/14] stubdom/vtpm: constrain locality by XSM
	label
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds the ability for a vTPM to constrain what localities a given
client domain can use based on its XSM label. For example:

  locality=user_1:vm_r:domU_t=0,1,2 locality=user_1:vm_r:watcher_t=5

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/vtpm/vtpm.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 69 insertions(+), 2 deletions(-)

diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index feb8aa3..3fbdc59 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -139,6 +139,30 @@ int check_ordinal(tpmcmd_t* tpmcmd) {
    return true;
 }
 
+struct locality_item {
+	char* lbl;
+	uint8_t mask[32];
+};
+#define MAX_CLIENT_LOCALITIES 16
+static struct locality_item client_locality[MAX_CLIENT_LOCALITIES];
+static int nr_client_localities = 0;
+
+static void *generate_locality_mask(domid_t domid, unsigned int handle)
+{
+   char label[512];
+   int i;
+   if (tpmback_get_peercontext(domid, handle, label, sizeof(label)))
+      BUG();
+   for(i=0; i < nr_client_localities; i++) {
+      if (!strcmp(client_locality[i].lbl, label))
+         goto found;
+   }
+   return NULL;
+ found:
+   tpmback_set_opaque(domid, handle, client_locality[i].mask);
+   return client_locality[i].mask;
+}
+
 static void main_loop(void) {
    tpmcmd_t* tpmcmd = NULL;
    int res = -1;
@@ -164,11 +188,24 @@ static void main_loop(void) {
    while(tpmcmd) {
       /* Handle the request */
       if(tpmcmd->req_len) {
+	 uint8_t* locality_mask = tpmcmd->opaque;
+	 uint8_t locality_bit = (1 << (tpmcmd->locality & 7));
+	 int locality_byte = tpmcmd->locality >> 3;
 	 tpmcmd->resp = NULL;
 	 tpmcmd->resp_len = 0;
 
-         /* First check for disabled ordinals */
-         if(!check_ordinal(tpmcmd)) {
+	 if (nr_client_localities && !locality_mask)
+	    locality_mask = generate_locality_mask(tpmcmd->domid, tpmcmd->handle);
+	 if (nr_client_localities && !locality_mask) {
+            error("Unknown client label in tpm_handle_command");
+            create_error_response(tpmcmd, TPM_FAIL);
+	 }
+	 else if (nr_client_localities && !(locality_mask[locality_byte] & locality_bit)) {
+            error("Invalid locality (%d) for client in tpm_handle_command", tpmcmd->locality);
+            create_error_response(tpmcmd, TPM_FAIL);
+	 }
+         /* Check for disabled ordinals */
+         else if(!check_ordinal(tpmcmd)) {
             create_error_response(tpmcmd, TPM_BAD_ORDINAL);
          }
          /* If not disabled, do the command */
@@ -273,6 +310,36 @@ int parse_cmd_line(int argc, char** argv)
             pch = strtok(NULL, ",");
          }
       }
+      else if(!strncmp(argv[i], "locality=", 9)) {
+        char *lbl = argv[i] + 9;
+	char *pch = strchr(lbl, '=');
+	uint8_t* locality_mask = client_locality[nr_client_localities].mask;
+	if (pch == NULL) {
+		 error("Invalid locality specification: %s", lbl);
+		 return -1;
+	}
+	if (nr_client_localities == MAX_CLIENT_LOCALITIES) {
+		error("Too many locality specifications");
+		return -1;
+	}
+	client_locality[nr_client_localities].lbl = lbl;
+	memset(locality_mask, 0, 32);
+	nr_client_localities++;
+	*pch = 0;
+	pch = strtok(pch + 1, ",");
+	while (pch != NULL) {
+		int loc;
+		if (sscanf(pch, "%d", &loc) == 1) {
+			uint8_t locality_bit = (1 << (loc & 7));
+			int locality_byte = loc >> 3;
+			locality_mask[locality_byte] |= locality_bit;
+		} else {
+			error("Invalid locality item: %s", pch);
+			return -1;
+		}
+		pch = strtok(NULL, ",");
+	}
+      }
       else {
 	 error("Invalid command line option `%s'", argv[i]);
       }
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sb-0004CC-NJ; Mon, 10 Dec 2012 19:56:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Am-7N
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.83:15125] by server-1.bemta-5.messagelabs.com id
	C6/7C-12813-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355169373!27502596!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31450 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-2.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fc3020000fdd8@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc3020000fdd8 ;
	Mon, 10 Dec 2012 14:56:18 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uG007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:42 -0500
Message-Id: <1355169347-25917-10-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 09/14] stubdom/vtpm: Add PCR pass-through to
	hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the hardware TPM's PCRs to be accessed from a vTPM for
debugging and as a simple alternative to a deep quote in situations
where the integrity of the vTPM's own TCB is not in question.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/Makefile                   |  1 +
 stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
 3 files changed, 112 insertions(+)
 create mode 100644 stubdom/vtpm-pcr-passthrough.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index a657fd2..053fe18 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -210,6 +210,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	patch -d $@ -p1 < vtpm-bufsize.patch
 	patch -d $@ -p1 < vtpm-locality.patch
 	patch -d $@ -p1 < vtpm-locality5-pcrs.patch
+	patch -d $@ -p1 < vtpm-pcr-passthrough.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-pcr-passthrough.patch b/stubdom/vtpm-pcr-passthrough.patch
new file mode 100644
index 0000000..4e898a5
--- /dev/null
+++ b/stubdom/vtpm-pcr-passthrough.patch
@@ -0,0 +1,73 @@
+diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
+index f8f7f0f..885af52 100644
+--- a/tpm/tpm_capability.c
++++ b/tpm/tpm_capability.c
+@@ -72,7 +72,7 @@ static TPM_RESULT cap_property(UINT32 subCapSize, BYTE *subCap,
+   switch (property) {
+     case TPM_CAP_PROP_PCR:
+       debug("[TPM_CAP_PROP_PCR]");
+-      return return_UINT32(respSize, resp, TPM_NUM_PCR);
++      return return_UINT32(respSize, resp, TPM_NUM_PCR_V);
+ 
+     case TPM_CAP_PROP_DIR:
+       debug("[TPM_CAP_PROP_DIR]");
+diff --git a/tpm/tpm_emulator_extern.h b/tpm/tpm_emulator_extern.h
+index 36a32dd..77ed595 100644
+--- a/tpm/tpm_emulator_extern.h
++++ b/tpm/tpm_emulator_extern.h
+@@ -56,6 +56,7 @@ void (*tpm_free)(/*const*/ void *ptr);
+ /* random numbers */
+ 
+ void (*tpm_get_extern_random_bytes)(void *buf, size_t nbytes);
++void tpm_get_extern_pcr(int index, void *buf);
+ 
+ /* usec since last call */
+ 
+diff --git a/tpm/tpm_integrity.c b/tpm/tpm_integrity.c
+index 66ece83..f3c4196 100644
+--- a/tpm/tpm_integrity.c
++++ b/tpm/tpm_integrity.c
+@@ -56,8 +56,11 @@ TPM_RESULT TPM_Extend(TPM_PCRINDEX pcrNum, TPM_DIGEST *inDigest,
+ TPM_RESULT TPM_PCRRead(TPM_PCRINDEX pcrIndex, TPM_PCRVALUE *outDigest)
+ {
+   info("TPM_PCRRead()");
+-  if (pcrIndex >= TPM_NUM_PCR) return TPM_BADINDEX;
+-  memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
++  if (pcrIndex >= TPM_NUM_PCR_V) return TPM_BADINDEX;
++  if (pcrIndex >= TPM_NUM_PCR)
++	tpm_get_extern_pcr(pcrIndex - TPM_NUM_PCR, outDigest);
++  else
++    memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
+   return TPM_SUCCESS;
+ }
+ 
+@@ -138,12 +141,15 @@ TPM_RESULT tpm_compute_pcr_digest(TPM_PCR_SELECTION *pcrSelection,
+   BYTE *buf, *ptr;
+   info("tpm_compute_pcr_digest()");
+   /* create PCR composite */
+-  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR
++  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR_V
+       || pcrSelection->sizeOfSelect == 0) return TPM_INVALID_PCR_INFO;
+   for (i = 0, j = 0; i < pcrSelection->sizeOfSelect * 8; i++) {
+     /* is PCR number i selected ? */
+     if (pcrSelection->pcrSelect[i >> 3] & (1 << (i & 7))) {
+-      memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
++      if (i >= TPM_NUM_PCR)
++        tpm_get_extern_pcr(i - TPM_NUM_PCR, &comp.pcrValue[j++]);
++      else
++        memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
+     }
+   }
+   memcpy(&comp.select, pcrSelection, sizeof(TPM_PCR_SELECTION));
+diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
+index 08cef1e..8c97fc5 100644
+--- a/tpm/tpm_structures.h
++++ b/tpm/tpm_structures.h
+@@ -677,6 +677,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
+  * Number of PCRs of the TPM (must be a multiple of eight)
+  */
+ #define TPM_NUM_PCR 32
++#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)
+ 
+ /*
+  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
index 7eae98b..ed058fb 100644
--- a/stubdom/vtpm/vtpm_cmd.c
+++ b/stubdom/vtpm/vtpm_cmd.c
@@ -134,6 +134,44 @@ egress:
 
 }
 
+extern struct tpmfront_dev* tpmfront_dev;
+void tpm_get_extern_pcr(int index, void *buf) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Ask the real tpm for the PCR value */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm command */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, index));
+
+   /* Send cmd, wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
+      ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_PCRRead()");
+
+   //Get the PCR value out
+   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, buf, 20), ERR_MALFORMED);
+
+   goto egress;
+abort_egress:
+   memset(buf, 0x20, 20);
+egress:
+   free(cmdbuf);
+}
+
 TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
 {
    TPM_RESULT status = TPM_SUCCESS;
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sd-0004DX-Ou; Mon, 10 Dec 2012 19:56:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sa-0004Ax-GD
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:16 +0000
Received: from [85.158.139.83:31200] by server-7.bemta-5.messagelabs.com id
	FE/59-08009-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355169374!24924093!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21235 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-14.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fc38f0000fdd9@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc38f0000fdd9 ;
	Mon, 10 Dec 2012 14:56:18 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uH007869; 
	Mon, 10 Dec 2012 14:56:11 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:43 -0500
Message-Id: <1355169347-25917-11-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 10/14] mini-os/tpmback: set up callbacks before
	enumeration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The open/close callbacks in tpmback cannot be properly initialized in
order to catch the initial enumeration events because init_tpmback
clears the callbacks and then asynchronously starts the enumeration of
existing tpmback devices. Fix this by passing the callbacks to
init_tpmback so they can be installed before enumeration.

This also removes the unused callbacks for suspend and resume.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/include/tpmback.h | 12 +-----------
 extras/mini-os/tpmback.c         | 31 +++----------------------------
 stubdom/vtpm/vtpm.c              |  2 +-
 stubdom/vtpmmgr/init.c           |  2 +-
 4 files changed, 6 insertions(+), 41 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index ec9eda4..3c11c34 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -56,7 +56,7 @@ struct tpmcmd {
 typedef struct tpmcmd tpmcmd_t;
 
 /* Initialize the tpm backend driver */
-void init_tpmback(void);
+void init_tpmback(void (*open_cb)(domid_t, unsigned int), void (*close_cb)(domid_t, unsigned int));
 
 /* Shutdown tpm backend driver */
 void shutdown_tpmback(void);
@@ -94,14 +94,4 @@ int tpmback_num_frontends(void);
  * The return value is internally allocated, so don't free it */
 unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle);
 
-/* Specify a function to call when a new tpm device connects */
-void tpmback_set_open_callback(void (*cb)(domid_t, unsigned int));
-
-/* Specify a function to call when a tpm device disconnects */
-void tpmback_set_close_callback(void (*cb)(domid_t, unsigned int));
-
-//Not Implemented
-void tpmback_set_suspend_callback(void (*cb)(domid_t, unsigned int));
-void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int));
-
 #endif
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 69a7f2d..1c46e5d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -114,8 +114,6 @@ struct tpmback_dev {
    /* Callbacks */
    void (*open_callback)(domid_t, unsigned int);
    void (*close_callback)(domid_t, unsigned int);
-   void (*suspend_callback)(domid_t, unsigned int);
-   void (*resume_callback)(domid_t, unsigned int);
 };
 typedef struct tpmback_dev tpmback_dev_t;
 
@@ -131,8 +129,6 @@ static tpmback_dev_t gtpmdev = {
    .events = NULL,
    .open_callback = NULL,
    .close_callback = NULL,
-   .suspend_callback = NULL,
-   .resume_callback = NULL,
 };
 struct wait_queue_head waitq;
 int globalinit = 0;
@@ -772,23 +768,6 @@ unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle)
    return tpmif->uuid;
 }
 
-void tpmback_set_open_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.open_callback = cb;
-}
-void tpmback_set_close_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.close_callback = cb;
-}
-void tpmback_set_suspend_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.suspend_callback = cb;
-}
-void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int))
-{
-   gtpmdev.resume_callback = cb;
-}
-
 static void event_listener(void)
 {
    const char* bepath = "backend/vtpm2";
@@ -835,7 +814,7 @@ void event_thread(void* p) {
    event_listener();
 }
 
-void init_tpmback(void)
+void init_tpmback(void (*open_cb)(domid_t, unsigned int), void (*close_cb)(domid_t, unsigned int))
 {
    if(!globalinit) {
       init_waitqueue_head(&waitq);
@@ -847,8 +826,8 @@ void init_tpmback(void)
    gtpmdev.num_tpms = 0;
    gtpmdev.flags = 0;
 
-   gtpmdev.open_callback = gtpmdev.close_callback = NULL;
-   gtpmdev.suspend_callback = gtpmdev.resume_callback = NULL;
+   gtpmdev.open_callback = open_cb;
+   gtpmdev.close_callback = close_cb;
 
    eventthread = create_thread("tpmback-listener", event_thread, NULL);
 
@@ -856,10 +835,6 @@ void init_tpmback(void)
 
 void shutdown_tpmback(void)
 {
-   /* Disable callbacks */
-   gtpmdev.open_callback = gtpmdev.close_callback = NULL;
-   gtpmdev.suspend_callback = gtpmdev.resume_callback = NULL;
-
    TPMBACK_LOG("Shutting down tpm backend\n");
    /* Set the quit flag */
    gtpmdev.flags = TPMIF_CLOSED;
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index d576c8f..feb8aa3 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -357,7 +357,7 @@ int main(int argc, char **argv)
    }
 
    /* Initialize devices */
-   init_tpmback();
+   init_tpmback(NULL, NULL);
    if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
       error("Unable to initialize tpmfront device");
       goto abort_posttpmfront;
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index a158020..00dd9f3 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -462,7 +462,7 @@ TPM_RESULT vtpmmgr_init(int argc, char** argv) {
    }
 
    //Setup tpmback device
-   init_tpmback();
+   init_tpmback(NULL, NULL);
 
    //Setup tpm access
    switch(opts.tpmdriver) {
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Se-0004Ec-Vu; Mon, 10 Dec 2012 19:56:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sb-0004Be-3R
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:17 +0000
Received: from [85.158.139.211:40638] by server-4.bemta-5.messagelabs.com id
	9F/00-14693-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355169374!19921266!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15511 invoked from network); 10 Dec 2012 19:56:15 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-16.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:15 -0000
X-TM-IMSS-Message-ID: <67fe325800080233@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe325800080233 ;
	Mon, 10 Dec 2012 14:55:44 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uI007869; 
	Mon, 10 Dec 2012 14:56:11 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:44 -0500
Message-Id: <1355169347-25917-12-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 11/14] mini-os/tpmback: Replace UUID field with
	opaque pointer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Instead of only recording the UUID field, which may not be of interest
to all tpmback implementations, provide a user-settable opaque pointer
associated with the tpmback instance.
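The get/set accessor pair added below can be sketched with a standalone mock (the lookup and struct here are hypothetical stand-ins for `get_tpmif()` and `struct tpmif`; only the return conventions mirror the patch):

```c
#include <stddef.h>

typedef unsigned short domid_t;

/* Hypothetical stand-in for struct tpmif: one frontend instance. */
struct mock_tpmif {
    domid_t domid;
    unsigned int handle;
    void *opaque;
};

static struct mock_tpmif iface = { 5, 0, NULL };

/* Stand-in for get_tpmif(): NULL means no such frontend. */
static struct mock_tpmif *lookup(domid_t domid, unsigned int handle)
{
    if (domid == iface.domid && handle == iface.handle)
        return &iface;
    return NULL;
}

/* Mirrors tpmback_set_opaque(): zero on success, nonzero on failure. */
int mock_set_opaque(domid_t domid, unsigned int handle, void *opaque)
{
    struct mock_tpmif *t = lookup(domid, handle);
    if (t == NULL)
        return -1;
    t->opaque = opaque;
    return 0;
}

/* Mirrors tpmback_get_opaque(): NULL for an unknown frontend. */
void *mock_get_opaque(domid_t domid, unsigned int handle)
{
    struct mock_tpmif *t = lookup(domid, handle);
    return t ? t->opaque : NULL;
}
```

Because the pointer is opaque to the driver, a user such as vtpmmgr can hang any per-frontend state off it (in this series, a pointer to the UUID).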

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/include/tpmback.h |  9 +++++++--
 extras/mini-os/tpmback.c         | 31 ++++++++++++++++++++++++++++---
 stubdom/vtpmmgr/init.c           |  8 +++++++-
 stubdom/vtpmmgr/vtpmmgr.c        |  2 +-
 4 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index 3c11c34..a6cbbf1 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -45,10 +45,10 @@ struct tpmcmd {
    domid_t domid;		/* Domid of the frontend */
    uint8_t locality;    /* Locality requested by the frontend */
    unsigned int handle;	/* Handle of the frontend */
-   unsigned char uuid[16];			/* uuid of the tpm interface */
+   void *opaque;        /* Opaque pointer taken from the tpmback instance */
 
-   unsigned int req_len;		/* Size of the command in buf - set by tpmback driver */
    uint8_t* req;			/* tpm command bits, allocated by driver, DON'T FREE IT */
+   unsigned int req_len;		/* Size of the command in buf - set by tpmback driver */
    unsigned int resp_len;	/* Size of the outgoing command,
 				   you set this before passing the cmd object to tpmback_resp */
    uint8_t* resp;		/* Buffer for response - YOU MUST ALLOCATE IT, YOU MUST ALSO FREE IT */
@@ -94,4 +94,9 @@ int tpmback_num_frontends(void);
  * The return value is internally allocated, so don't free it */
 unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle);
 
+/* Get and set the opaque pointer for a tpmback instance */
+void* tpmback_get_opaque(domid_t domid, unsigned int handle);
+/* Returns zero if successful, nonzero on failure (no such frontend) */
+int tpmback_set_opaque(domid_t domid, unsigned int handle, void* opaque);
+
 #endif
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 1c46e5d..cac07fc 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -92,6 +92,7 @@ struct tpmif {
    enum { DISCONNECTED, DISCONNECTING, CONNECTED } status;
 
    unsigned char uuid[16];
+   void* opaque;
 
    /* state flags */
    int flags;
@@ -380,6 +381,7 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
    tpmif->status = DISCONNECTED;
    tpmif->page = NULL;
    tpmif->flags = 0;
+   tpmif->opaque = NULL;
    memset(tpmif->uuid, 0, sizeof(tpmif->uuid));
    return tpmif;
 }
@@ -757,6 +759,29 @@ static void generate_backend_events(const char* path)
    return;
 }
 
+void* tpmback_get_opaque(domid_t domid, unsigned int handle)
+{
+   tpmif_t* tpmif;
+   if((tpmif = get_tpmif(domid, handle)) == NULL) {
+      TPMBACK_DEBUG("get_opaque() failed, %u/%u is an invalid frontend\n", (unsigned int) domid, handle);
+      return NULL;
+   }
+
+   return tpmif->opaque;
+}
+
+int tpmback_set_opaque(domid_t domid, unsigned int handle, void *opaque)
+{
+   tpmif_t* tpmif;
+   if((tpmif = get_tpmif(domid, handle)) == NULL) {
+      TPMBACK_DEBUG("set_opaque() failed, %u/%u is an invalid frontend\n", (unsigned int) domid, handle);
+      return -1;
+   }
+
+   tpmif->opaque = opaque;
+   return 0;
+}
+
 unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle)
 {
    tpmif_t* tpmif;
@@ -853,12 +878,12 @@ void shutdown_tpmback(void)
    schedule();
 }
 
-inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, unsigned char uuid[16])
+static void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, void *opaque)
 {
    tpmcmd->domid = domid;
    tpmcmd->locality = -1;
    tpmcmd->handle = handle;
-   memcpy(tpmcmd->uuid, uuid, sizeof(tpmcmd->uuid));
+   tpmcmd->opaque = opaque;
    tpmcmd->req = NULL;
    tpmcmd->req_len = 0;
    tpmcmd->resp = NULL;
@@ -880,7 +905,7 @@ tpmcmd_t* get_request(tpmif_t* tpmif) {
    if((cmd = malloc(sizeof(*cmd))) == NULL) {
       goto error;
    }
-   init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->uuid);
+   init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->opaque);
 
    shr = tpmif->page;
    cmd->req_len = shr->length;
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 00dd9f3..33ac152 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -436,6 +436,12 @@ egress:
    return status;
 }
 
+/* Set up the opaque field to contain a pointer to the UUID */
+static void set_opaque_to_uuid(domid_t domid, unsigned int handle)
+{
+   tpmback_set_opaque(domid, handle, tpmback_get_uuid(domid, handle));
+}
+
 TPM_RESULT vtpmmgr_init(int argc, char** argv) {
    TPM_RESULT status = TPM_SUCCESS;
 
@@ -462,7 +468,7 @@ TPM_RESULT vtpmmgr_init(int argc, char** argv) {
    }
 
    //Setup tpmback device
-   init_tpmback(NULL, NULL);
+   init_tpmback(set_opaque_to_uuid, NULL);
 
    //Setup tpm access
    switch(opts.tpmdriver) {
diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
index 563f4e8..270ca8a 100644
--- a/stubdom/vtpmmgr/vtpmmgr.c
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -61,7 +61,7 @@ void main_loop(void) {
       tpmcmd->resp = respbuf;
 
       /* Process the command */
-      vtpmmgr_handle_cmd(tpmcmd->uuid, tpmcmd);
+      vtpmmgr_handle_cmd(tpmcmd->opaque, tpmcmd);
 
       /* Send response */
       tpmback_resp(tpmcmd);
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9SZ-0004BJ-I8; Mon, 10 Dec 2012 19:56:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SY-0004Am-8i
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:14 +0000
Received: from [85.158.139.211:10125] by server-1.bemta-5.messagelabs.com id
	B2/7C-12813-D5E36C05; Mon, 10 Dec 2012 19:56:13 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355169371!19812185!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3990 invoked from network); 10 Dec 2012 19:56:12 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-14.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:12 -0000
X-TM-IMSS-Message-ID: <67fe22dd0008022a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe22dd0008022a ;
	Mon, 10 Dec 2012 14:55:40 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8u7007869; 
	Mon, 10 Dec 2012 14:56:08 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:33 -0500
Message-Id: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v3 00/14] vTPM new ABI, extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch queue goes on top of Matthew Fioravante's [VTPM v7 0/8]
series.  The xenbus device name has changed to "vtpm2", and some
documentation has been added about PCRs (those extended by pv-grub and
those added in locality 5).  A new Linux patch is also needed, and will
be posted as a reply to this email; the layout of the shared page has
changed slightly (length field changed from uint16_t to uint32_t).

Patches have been reordered a bit in an attempt to have the series make
the most sense possible if partially applied.  Patch #8 still breaks
automatic vTPM domain shutdown, so applying only #1-6 is preferable if
that feature needs to keep working until the libxl-based shutdown
request is finished.

Patches 10-13 are new here; they allow localities to be restricted for
certain domains.  This is an important security feature if multiple
domains are accessing the same vTPM, and without this feature the
locality 5 PCRs introduced by #7 are no different from the lower 24
defined in the TPM specification.

Patch 14 is a build cleanup that fixes a failure in the third
consecutive build without an intervening "make clean", caused by
NEWLIB_STAMPFILE being touched after gmp is extracted.


New ABI patches:
    [PATCH 01/14] mini-os/tpm{back,front}: Change shared page ABI
    [PATCH 02/14] stubdom/vtpm: correct the buffer size returned by
    [PATCH 03/14] stubdom/vtpm: Support locality field

New vTPM features:
    [PATCH 04/14] stubdom/vtpm: Allow reopen of closed devices
    [PATCH 05/14] stubdom/vtpm: make state save operation atomic
    [PATCH 06/14] stubdom/grub: send kernel measurements to vTPM

Support for multiple client domains distinguished by locality:
    [PATCH 07/14] stubdom/vtpm: Add locality-5 PCRs
    [PATCH 08/14] stubdom/vtpm: support multiple backends
    [PATCH 09/14] stubdom/vtpm: Add PCR pass-through to hardware TPM
    [PATCH 10/14] mini-os/tpmback: set up callbacks before enumeration
    [PATCH 11/14] mini-os/tpmback: Replace UUID field with opaque
    [PATCH 12/14] mini-os/tpmback: add tpmback_get_peercontext
    [PATCH 13/14] stubdom/vtpm: constrain locality by XSM label

Other:
    [PATCH 14/14] stubdom/Makefile: Fix gmp extract rule


From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Se-0004Dy-7s; Mon, 10 Dec 2012 19:56:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sa-0004Bc-Gc
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:16 +0000
Received: from [85.158.139.83:15152] by server-12.bemta-5.messagelabs.com id
	8D/5C-02275-F5E36C05; Mon, 10 Dec 2012 19:56:15 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355169374!29242669!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8080 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-12.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fc8300000fddb@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc8300000fddb ;
	Mon, 10 Dec 2012 14:56:20 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uJ007869; 
	Mon, 10 Dec 2012 14:56:12 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:45 -0500
Message-Id: <1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 12/14] mini-os/tpmback: add
	tpmback_get_peercontext
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the XSM label of the TPM's client domain to be retrieved.
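The two-step FLASK lookup used by the new evtchn_get_peercontext() helper (resolve the peer's SID from the event channel, then convert the SID to a context string) can be sketched with the hypercalls stubbed out; all names below are illustrative:

```c
#include <string.h>

/* Stub for FLASK_GET_PEER_SID: map an event-channel port to a SID. */
static int mock_get_peer_sid(int port, unsigned int *sid)
{
    if (port != 7)
        return -1;          /* unbound port: first hypercall fails */
    *sid = 0x1234;
    return 0;
}

/* Stub for FLASK_SID_TO_CONTEXT: render the SID as a label string. */
static int mock_sid_to_context(unsigned int sid, char *ctx, int size)
{
    const char *label = (sid == 0x1234) ? "system_u:system_r:domU_t" : NULL;
    if (label == NULL || (int)strlen(label) + 1 > size)
        return -1;          /* unknown SID or caller buffer too small */
    strcpy(ctx, label);
    return 0;
}

/* Mirrors the control flow of evtchn_get_peercontext(). */
int mock_get_peercontext(int port, char *ctx, int size)
{
    unsigned int sid;
    int rc = mock_get_peer_sid(port, &sid);
    if (rc)
        return rc;
    return mock_sid_to_context(sid, ctx, size);
}
```

A later patch in this series uses the returned label to constrain which localities a client domain may use.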

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/events.c          | 22 ++++++++++++++++++++++
 extras/mini-os/include/events.h  |  1 +
 extras/mini-os/include/tpmback.h |  2 ++
 extras/mini-os/tpmback.c         | 11 +++++++++++
 4 files changed, 36 insertions(+)

diff --git a/extras/mini-os/events.c b/extras/mini-os/events.c
index 2f359a5..5327e14 100644
--- a/extras/mini-os/events.c
+++ b/extras/mini-os/events.c
@@ -21,6 +21,7 @@
 #include <mini-os/hypervisor.h>
 #include <mini-os/events.h>
 #include <mini-os/lib.h>
+#include <xen/xsm/flask_op.h>
 
 #define NR_EVS 1024
 
@@ -258,6 +259,27 @@ int evtchn_bind_interdomain(domid_t pal, evtchn_port_t remote_port,
     return rc;
 }
 
+int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size)
+{
+    int rc;
+    uint32_t sid;
+    struct xen_flask_op op;
+    op.cmd = FLASK_GET_PEER_SID;
+    op.interface_version = XEN_FLASK_INTERFACE_VERSION;
+    op.u.peersid.evtchn = local_port;
+    rc = _hypercall1(int, xsm_op, &op);
+    if (rc)
+        return rc;
+    sid = op.u.peersid.sid;
+    op.cmd = FLASK_SID_TO_CONTEXT;
+    op.u.sid_context.sid = sid;
+    op.u.sid_context.size = size;
+    set_xen_guest_handle(op.u.sid_context.context, ctx);
+    rc = _hypercall1(int, xsm_op, &op);
+    return rc;
+}
+
+
 /*
  * Local variables:
  * mode: C
diff --git a/extras/mini-os/include/events.h b/extras/mini-os/include/events.h
index 912e4cf..0e9d3a7 100644
--- a/extras/mini-os/include/events.h
+++ b/extras/mini-os/include/events.h
@@ -37,6 +37,7 @@ int evtchn_alloc_unbound(domid_t pal, evtchn_handler_t handler,
 int evtchn_bind_interdomain(domid_t pal, evtchn_port_t remote_port,
 							evtchn_handler_t handler, void *data,
 							evtchn_port_t *local_port);
+int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size);
 void unbind_all_ports(void);
 
 static inline int notify_remote_via_evtchn(evtchn_port_t port)
diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index a6cbbf1..4408986 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -99,4 +99,6 @@ void* tpmback_get_opaque(domid_t domid, unsigned int handle);
 /* Returns zero if successful, nonzero on failure (no such frontend) */
 int tpmback_set_opaque(domid_t domid, unsigned int handle, void* opaque);
 
+/* Get the XSM context of the given domain (using the tpmback event channel) */
+int tpmback_get_peercontext(domid_t domid, unsigned int handle, void* buffer, int buflen);
 #endif
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index cac07fc..ab69cb7 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -793,6 +793,17 @@ unsigned char* tpmback_get_uuid(domid_t domid, unsigned int handle)
    return tpmif->uuid;
 }
 
+int tpmback_get_peercontext(domid_t domid, unsigned int handle, void* buffer, int buflen)
+{
+   tpmif_t* tpmif;
+   if((tpmif = get_tpmif(domid, handle)) == NULL) {
+      TPMBACK_DEBUG("get_peercontext() failed, %u/%u is an invalid frontend\n", (unsigned int) domid, handle);
+      return -1;
+   }
+
+   return evtchn_get_peercontext(tpmif->evtchn, buffer, buflen);
+}
+
 static void event_listener(void)
 {
    const char* bepath = "backend/vtpm2";
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sa-0004Bn-UM; Mon, 10 Dec 2012 19:56:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SY-0004Ao-Nz
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.211:40579] by server-14.bemta-5.messagelabs.com id
	38/98-09538-D5E36C05; Mon, 10 Dec 2012 19:56:13 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355169371!18378533!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16560 invoked from network); 10 Dec 2012 19:56:11 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-10.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:11 -0000
X-TM-IMSS-Message-ID: <67fe24250008022c@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe24250008022c ;
	Mon, 10 Dec 2012 14:55:40 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8u8007869; 
	Mon, 10 Dec 2012 14:56:08 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:34 -0500
Message-Id: <1355169347-25917-2-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 01/14] mini-os/tpm{back,
	front}: Change shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the vTPM shared page ABI from a copy of the Xen network
interface to a single-page interface that better reflects the expected
behavior of a TPM: only a single request packet can be sent at any given
time, and every packet sent generates a single response packet. This
protocol change should also increase efficiency as it avoids mapping and
unmapping grants when possible. The vtpm xenbus device has been renamed
to "vtpm2" to avoid conflicts with existing (xen-patched) kernels
supporting the old interface.

While the contents of the shared page have been defined to allow packets
larger than a single page (actually 4088 bytes) by allowing the client
to add extra grant references, the mapping of these extra references has
not been implemented; a feature node in xenstore may be used in the
future to indicate full support for the multi-page protocol. Most uses
of the TPM should not require this feature.
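The single-slot request/response discipline described above can be modeled as a toy state machine (field and constant names here are illustrative, not the actual tpmif.h layout; note the length field, which this series widens to uint32_t):

```c
#include <stdint.h>
#include <string.h>

enum { SLOT_IDLE = 0, SLOT_SUBMIT = 1, SLOT_DONE = 2 };

/* Toy model of the single shared page: one in-flight command at most. */
struct mock_shared_page {
    uint32_t state;     /* SLOT_* */
    uint32_t length;    /* widened from uint16_t in the new ABI */
    uint8_t  buf[64];
};

/* Frontend side: refuse a new request while one is still in flight. */
int mock_submit(struct mock_shared_page *pg, const uint8_t *req, uint32_t len)
{
    if (pg->state != SLOT_IDLE || len > sizeof(pg->buf))
        return -1;
    memcpy(pg->buf, req, len);
    pg->length = len;
    pg->state = SLOT_SUBMIT;   /* a real frontend would notify via evtchn */
    return 0;
}

/* Backend side: every request produces exactly one in-place response. */
void mock_respond(struct mock_shared_page *pg, const uint8_t *resp, uint32_t len)
{
    memcpy(pg->buf, resp, len);
    pg->length = len;
    pg->state = SLOT_DONE;
}
```

Because the page stays mapped and the command/response share one buffer, the common path needs no per-command grant map/unmap, which is where the efficiency gain comes from.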

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 extras/mini-os/include/tpmback.h  |   1 +
 extras/mini-os/include/tpmfront.h |   7 +-
 extras/mini-os/tpmback.c          | 135 ++++++++++++++------------------------
 extras/mini-os/tpmfront.c         | 119 +++++++++++++--------------------
 tools/libxl/libxl.c               |   8 +--
 xen/include/public/io/tpmif.h     |  45 +++----------
 6 files changed, 114 insertions(+), 201 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index ff86732..ec9eda4 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -43,6 +43,7 @@
 
 struct tpmcmd {
    domid_t domid;		/* Domid of the frontend */
+   uint8_t locality;    /* Locality requested by the frontend */
    unsigned int handle;	/* Handle of the frontend */
    unsigned char uuid[16];			/* uuid of the tpm interface */
 
diff --git a/extras/mini-os/include/tpmfront.h b/extras/mini-os/include/tpmfront.h
index fd2cb17..a0c7c4d 100644
--- a/extras/mini-os/include/tpmfront.h
+++ b/extras/mini-os/include/tpmfront.h
@@ -37,9 +37,7 @@ struct tpmfront_dev {
    grant_ref_t ring_ref;
    evtchn_port_t evtchn;
 
-   tpmif_tx_interface_t* tx;
-
-   void** pages;
+   vtpm_shared_page_t *page;
 
    domid_t bedomid;
    char* nodename;
@@ -77,6 +75,9 @@ void shutdown_tpmfront(struct tpmfront_dev* dev);
  * */
 int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t** resp, size_t* resplen);
 
+/* Set the locality used for communicating with a vTPM */
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality);
+
 #ifdef HAVE_LIBC
 #include <sys/stat.h>
 /* POSIX IO functions:
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 658fed1..50f8a5d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -86,10 +86,7 @@ struct tpmif {
    evtchn_port_t evtchn;
 
    /* Shared page */
-   tpmif_tx_interface_t* tx;
-
-   /* pointer to TPMIF_RX_RING_SIZE pages */
-   void** pages;
+   vtpm_shared_page_t *page;
 
    enum xenbus_state state;
    enum { DISCONNECTED, DISCONNECTING, CONNECTED } status;
@@ -312,7 +309,6 @@ int insert_tpmif(tpmif_t* tpmif)
       remove_tpmif(tpmif);
       goto error_post_irq;
    }
-
    return 0;
 error:
    local_irq_restore(flags);
@@ -336,7 +332,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    if (tpmif->state == state)
       return 0;
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
 
    if((err = xenbus_read(XBT_NIL, path, &value))) {
       TPMBACK_ERR("Unable to read backend state %s, error was %s\n", path, err);
@@ -362,7 +358,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    }
 
    /*update xenstore*/
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "state", "%u", state))) {
       TPMBACK_ERR("Error writing to xenstore %s, error was %s new state=%d\n", path, err, state);
       free(err);
@@ -386,8 +382,7 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
    tpmif->fe_state_path = NULL;
    tpmif->state = XenbusStateInitialising;
    tpmif->status = DISCONNECTED;
-   tpmif->tx = NULL;
-   tpmif->pages = NULL;
+   tpmif->page = NULL;
    tpmif->flags = 0;
    memset(tpmif->uuid, 0, sizeof(tpmif->uuid));
    return tpmif;
@@ -395,9 +390,6 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
 
 void __free_tpmif(tpmif_t* tpmif)
 {
-   if(tpmif->pages) {
-      free(tpmif->pages);
-   }
    if(tpmif->fe_path) {
       free(tpmif->fe_path);
    }
@@ -424,23 +416,17 @@ tpmif_t* new_tpmif(domid_t domid, unsigned int handle)
    tpmif = __init_tpmif(domid, handle);
 
    /* Get the uuid from xenstore */
-   snprintf(path, 512, "backend/vtpm/%u/%u/uuid", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/uuid", (unsigned int) domid, handle);
    if((!xenbus_read_uuid(path, tpmif->uuid))) {
       TPMBACK_ERR("Error reading %s\n", path);
       goto error;
    }
 
-   /* allocate pages to be used for shared mapping */
-   if((tpmif->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE)) == NULL) {
-      goto error;
-   }
-   memset(tpmif->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-
    if(tpmif_change_state(tpmif, XenbusStateInitWait)) {
       goto error;
    }
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/frontend", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/frontend", (unsigned int) domid, handle);
    if((err = xenbus_read(XBT_NIL, path, &tpmif->fe_path))) {
       TPMBACK_ERR("Error creating new tpm instance xenbus_read(%s), Error = %s", path, err);
       free(err);
@@ -486,7 +472,7 @@ void free_tpmif(tpmif_t* tpmif)
       tpmif->status = DISCONNECTING;
       mask_evtchn(tpmif->evtchn);
 
-      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1)) {
+      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1)) {
 	 TPMBACK_ERR("%u/%u Error occured while trying to unmap shared page\n", (unsigned int) tpmif->domid, tpmif->handle);
       }
 
@@ -508,7 +494,7 @@ void free_tpmif(tpmif_t* tpmif)
    schedule();
 
    /* Remove the old xenbus entries */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_rm(XBT_NIL, path))) {
       TPMBACK_ERR("Error cleaning up xenbus entries path=%s error=%s\n", path, err);
       free(err);
@@ -529,9 +515,10 @@ void free_tpmif(tpmif_t* tpmif)
 void tpmback_handler(evtchn_port_t port, struct pt_regs *regs, void *data)
 {
    tpmif_t* tpmif = (tpmif_t*) data;
-   tpmif_tx_request_t* tx = &tpmif->tx->ring[0].req;
-   /* Throw away 0 size events, these can trigger from event channel unmasking */
-   if(tx->size == 0)
+   vtpm_shared_page_t* pg = tpmif->page;
+
+   /* Only pay attention if the request is ready */
+   if (pg->state == 0)
       return;
 
    TPMBACK_DEBUG("EVENT CHANNEL FIRE %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
@@ -585,11 +572,10 @@ int connect_fe(tpmif_t* tpmif)
    free(value);
 
    domid = tpmif->domid;
-   if((tpmif->tx = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
+   if((tpmif->page = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
       TPMBACK_ERR("Failed to map grant reference %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
       return -1;
    }
-   memset(tpmif->tx, 0, PAGE_SIZE);
 
    /*Bind the event channel */
    if((evtchn_bind_interdomain(tpmif->domid, evtchn, tpmback_handler, tpmif, &tpmif->evtchn)))
@@ -600,7 +586,7 @@ int connect_fe(tpmif_t* tpmif)
    unmask_evtchn(tpmif->evtchn);
 
    /* Write the ready flag and change status to connected */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "ready", "%u", 1))) {
       TPMBACK_ERR("%u/%u Unable to write ready flag on connect_fe()\n", (unsigned int) tpmif->domid, tpmif->handle);
       free(err);
@@ -618,7 +604,7 @@ error_post_evtchn:
    mask_evtchn(tpmif->evtchn);
    unbind_evtchn(tpmif->evtchn);
 error_post_map:
-   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1);
+   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1);
    return -1;
 }
 
@@ -670,8 +656,8 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
    char* value;
    unsigned int udomid = 0;
    tpmif_t* tpmif;
-   /* First check for new frontends, this occurs when /backend/vtpm/<domid>/<handle> gets created. Note we what the sscanf to fail on the last %s */
-   if (sscanf(evstr, "backend/vtpm/%u/%u/%40s", &udomid, handle, cmd) == 2) {
+   /* First check for new frontends, this occurs when /backend/vtpm2/<domid>/<handle> gets created. Note we want the sscanf to fail on the last %s */
+   if (sscanf(evstr, "backend/vtpm2/%u/%u/%40s", &udomid, handle, cmd) == 2) {
       *domid = udomid;
      /* Make sure the entry exists; if this event triggered because the entry disappeared, ignore it */
       if((err = xenbus_read(XBT_NIL, evstr, &value))) {
@@ -685,7 +671,7 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
 	 return EV_NONE;
       }
       return EV_NEWFE;
-   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm/%u/%40s", &udomid, handle, cmd)) == 3) {
+   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm2/%u/%40s", &udomid, handle, cmd)) == 3) {
       *domid = udomid;
       if (!strcmp(cmd, "state"))
 	 return EV_STCHNG;
@@ -784,7 +770,7 @@ void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int))
 
 static void event_listener(void)
 {
-   const char* bepath = "backend/vtpm";
+   const char* bepath = "backend/vtpm2";
    char **path;
    char* err;
 
@@ -874,6 +860,7 @@ void shutdown_tpmback(void)
 inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, unsigned char uuid[16])
 {
    tpmcmd->domid = domid;
+   tpmcmd->locality = -1;
    tpmcmd->handle = handle;
    memcpy(tpmcmd->uuid, uuid, sizeof(tpmcmd->uuid));
    tpmcmd->req = NULL;
@@ -884,12 +871,12 @@ inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, un
 
 tpmcmd_t* get_request(tpmif_t* tpmif) {
    tpmcmd_t* cmd;
-   tpmif_tx_request_t* tx;
-   int offset;
-   int tocopy;
-   int i;
-   uint32_t domid;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
@@ -899,35 +886,22 @@ tpmcmd_t* get_request(tpmif_t* tpmif) {
    }
    init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->uuid);
 
-   tx = &tpmif->tx->ring[0].req;
-   cmd->req_len = tx->size;
+   shr = tpmif->page;
+   cmd->req_len = shr->length;
+   cmd->locality = shr->locality;
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->req_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
+   }
    /* Allocate the buffer */
    if(cmd->req_len) {
       if((cmd->req = malloc(cmd->req_len)) == NULL) {
 	 goto error;
       }
    }
-   /* Copy the bits from the shared pages */
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->req_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_READ)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during read!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->req_len - offset, PAGE_SIZE);
-      memcpy(&cmd->req[offset], tpmif->pages[i], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
-
-   }
+   /* Copy the bits from the shared page(s) */
+   memcpy(cmd->req, offset + (uint8_t*)shr, cmd->req_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Received Tpm Command from %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->req_len);
@@ -958,38 +932,24 @@ error:
 
 void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 {
-   tpmif_tx_request_t* tx;
-   int offset;
-   int i;
-   uint32_t domid;
-   int tocopy;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
-   tx = &tpmif->tx->ring[0].req;
-   tx->size = cmd->resp_len;
-
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->resp_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_WRITE)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during write!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->resp_len - offset, PAGE_SIZE);
-      memcpy(tpmif->pages[i], &cmd->resp[offset], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
+   shr = tpmif->page;
+   shr->length = cmd->resp_len;
 
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->resp_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
    }
+   memcpy(offset + (uint8_t*)shr, cmd->resp, cmd->resp_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Sent response to %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->resp_len);
@@ -1003,6 +963,7 @@ void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 #endif
    /* clear the ready flag and send the event channel notice to the frontend */
    tpmif_req_finished(tpmif);
+   shr->state = 0;
    notify_remote_via_evtchn(tpmif->evtchn);
 error:
    local_irq_restore(flags);
diff --git a/extras/mini-os/tpmfront.c b/extras/mini-os/tpmfront.c
index 0218d7f..ac9ba42 100644
--- a/extras/mini-os/tpmfront.c
+++ b/extras/mini-os/tpmfront.c
@@ -176,7 +176,7 @@ static int wait_for_backend_state_changed(struct tpmfront_dev* dev, XenbusState
 	 ret = wait_for_backend_closed(&events, path);
 	 break;
       default:
-	 break;
+         TPMFRONT_ERR("Bad wait state %d, ignoring\n", state);
    }
 
    if((err = xenbus_unwatch_path_token(XBT_NIL, path, path))) {
@@ -190,13 +190,13 @@ static int tpmfront_connect(struct tpmfront_dev* dev)
 {
    char* err;
    /* Create shared page */
-   dev->tx = (tpmif_tx_interface_t*) alloc_page();
-   if(dev->tx == NULL) {
+   dev->page = (vtpm_shared_page_t*) alloc_page();
+   if(dev->page == NULL) {
       TPMFRONT_ERR("Unable to allocate page for shared memory\n");
       goto error;
    }
-   memset(dev->tx, 0, PAGE_SIZE);
-   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->tx), 0);
+   memset(dev->page, 0, PAGE_SIZE);
+   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->page), 0);
    TPMFRONT_DEBUG("grant ref is %lu\n", (unsigned long) dev->ring_ref);
 
    /*Create event channel */
@@ -228,7 +228,7 @@ error_postevtchn:
       unbind_evtchn(dev->evtchn);
 error_postmap:
       gnttab_end_access(dev->ring_ref);
-      free_page(dev->tx);
+      free_page(dev->page);
 error:
    return -1;
 }
@@ -240,7 +240,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    char path[512];
    char* value, *err;
    unsigned long long ival;
-   int i;
 
    printk("============= Init TPM Front ================\n");
 
@@ -251,7 +250,7 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    dev->fd = -1;
 #endif
 
-   nodename = _nodename ? _nodename : "device/vtpm/0";
+   nodename = _nodename ? _nodename : "device/vtpm2/0";
    dev->nodename = strdup(nodename);
 
    init_waitqueue_head(&dev->waitq);
@@ -289,19 +288,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
       goto error;
    }
 
-   /* Allocate pages that will contain the messages */
-   dev->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE);
-   if(dev->pages == NULL) {
-      goto error;
-   }
-   memset(dev->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-   for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-      dev->pages[i] = (void*)alloc_page();
-      if(dev->pages[i] == NULL) {
-	 goto error;
-      }
-   }
-
    TPMFRONT_LOG("Initialization Completed successfully\n");
 
    return dev;
@@ -314,8 +300,6 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 {
    char* err;
    char path[512];
-   int i;
-   tpmif_tx_request_t* tx;
    if(dev == NULL) {
       return;
    }
@@ -349,27 +333,12 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
       /* Wait for the backend to close and unmap shared pages, ignore any errors */
       wait_for_backend_state_changed(dev, XenbusStateClosed);
 
-      /* Cleanup any shared pages */
-      if(dev->pages) {
-	 for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-	    if(dev->pages[i]) {
-	       tx = &dev->tx->ring[i].req;
-	       if(tx->ref != 0) {
-		  gnttab_end_access(tx->ref);
-	       }
-	       free_page(dev->pages[i]);
-	    }
-	 }
-	 free(dev->pages);
-      }
-
       /* Close event channel and unmap shared page */
       mask_evtchn(dev->evtchn);
       unbind_evtchn(dev->evtchn);
       gnttab_end_access(dev->ring_ref);
 
-      free_page(dev->tx);
-
+      free_page(dev->page);
    }
 
    /* Cleanup memory usage */
@@ -387,13 +356,17 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 
 int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 {
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
    int i;
-   tpmif_tx_request_t* tx = NULL;
+#endif
    /* Error Checking */
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to send message through disconnected frontend\n");
       return -1;
    }
+   shr = dev->page;
 
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Sending Msg to backend size=%u", (unsigned int) length);
@@ -407,19 +380,16 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 #endif
 
    /* Copy to shared pages now */
-   for(i = 0; length > 0 && i < TPMIF_TX_RING_SIZE; ++i) {
-      /* Share the page */
-      tx = &dev->tx->ring[i].req;
-      tx->unused = 0;
-      tx->addr = virt_to_mach(dev->pages[i]);
-      tx->ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->pages[i]), 0);
-      /* Copy the bits to the page */
-      tx->size = length > PAGE_SIZE ? PAGE_SIZE : length;
-      memcpy(dev->pages[i], &msg[i * PAGE_SIZE], tx->size);
-
-      /* Update counters */
-      length -= tx->size;
+   offset = sizeof(*shr);
+   if (length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Message too long for shared page\n");
+      return -1;
    }
+   memcpy(offset + (uint8_t*)shr, msg, length);
+   shr->length = length;
+   barrier();
+   shr->state = 1;
+
    dev->waiting = 1;
    dev->resplen = 0;
 #ifdef HAVE_LIBC
@@ -434,44 +404,41 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 }
 int tpmfront_recv(struct tpmfront_dev* dev, uint8_t** msg, size_t *length)
 {
-   tpmif_tx_request_t* tx;
-   int i;
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
+   int i;
+#endif
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to receive message from disconnected frontend\n");
       return -1;
    }
    /*Wait for the response */
    wait_event(dev->waitq, (!dev->waiting));
+   shr = dev->page;
+   if (shr->state != 0)
+      goto quit;
 
    /* Initialize */
    *msg = NULL;
-   *length = 0;
+   *length = shr->length;
+   offset = sizeof(*shr);
 
-   /* special case, just quit */
-   tx = &dev->tx->ring[0].req;
-   if(tx->size == 0 ) {
-       goto quit;
-   }
-   /* Get the total size */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      *length += tx->size;
+   if (*length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Reply too long for shared page\n");
+      return -1;
    }
+
    /* Alloc the buffer */
    if(dev->respbuf) {
       free(dev->respbuf);
    }
    *msg = dev->respbuf = malloc(*length);
    dev->resplen = *length;
+
    /* Copy the bits */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      memcpy(&(*msg)[i * PAGE_SIZE], dev->pages[i], tx->size);
-      gnttab_end_access(tx->ref);
-      tx->ref = 0;
-   }
+   memcpy(*msg, offset + (uint8_t*)shr, *length);
+
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Received response from backend size=%u", (unsigned int) *length);
    for(i = 0; i < *length; ++i) {
@@ -504,6 +471,14 @@ int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t*
    return 0;
 }
 
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality)
+{
+   if (!dev || !dev->page)
+      return -1;
+   dev->page->locality = locality;
+   return 0;
+}
+
 #ifdef HAVE_LIBC
 #include <errno.h>
 int tpmfront_open(struct tpmfront_dev* dev)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8d921bc..7af5da3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1756,7 +1756,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
             goto out;
         }
         l = libxl__xs_directory(gc, XBT_NULL,
-              GCSPRINTF("%s/device/vtpm", dompath), &nb);
+              GCSPRINTF("%s/device/vtpm2", dompath), &nb);
         if(l == NULL || nb == 0) {
             vtpm->devid = 0;
         } else {
@@ -1815,7 +1815,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
 
     *num = 0;
 
-    fe_path = libxl__sprintf(gc, "%s/device/vtpm", libxl__xs_get_dompath(gc, domid));
+    fe_path = libxl__sprintf(gc, "%s/device/vtpm2", libxl__xs_get_dompath(gc, domid));
     dir = libxl__xs_directory(gc, XBT_NULL, fe_path, &ndirs);
     if(dir) {
        vtpms = malloc(sizeof(*vtpms) * ndirs);
@@ -1830,7 +1830,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
           vtpm->devid = atoi(*dir);
 
           tmp = libxl__xs_read(gc, XBT_NULL,
-                GCSPRINTF("%s/%s/backend_id",
+                GCSPRINTF("%s/%s/backend-id",
                    fe_path, *dir));
           vtpm->backend_domid = atoi(tmp);
 
@@ -1863,7 +1863,7 @@ int libxl_device_vtpm_getinfo(libxl_ctx *ctx,
     dompath = libxl__xs_get_dompath(gc, domid);
     vtpminfo->devid = vtpm->devid;
 
-    vtpmpath = GCSPRINTF("%s/device/vtpm/%d", dompath, vtpminfo->devid);
+    vtpmpath = GCSPRINTF("%s/device/vtpm2/%d", dompath, vtpminfo->devid);
     vtpminfo->backend = xs_read(ctx->xsh, XBT_NULL,
           GCSPRINTF("%s/backend", vtpmpath), NULL);
     if (!vtpminfo->backend) {
diff --git a/xen/include/public/io/tpmif.h b/xen/include/public/io/tpmif.h
index 02ccdab..7c96530 100644
--- a/xen/include/public/io/tpmif.h
+++ b/xen/include/public/io/tpmif.h
@@ -1,7 +1,7 @@
 /******************************************************************************
  * tpmif.h
  *
- * TPM I/O interface for Xen guest OSes.
+ * TPM I/O interface for Xen guest OSes, v2
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to
@@ -21,48 +21,23 @@
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  *
- * Copyright (c) 2005, IBM Corporation
- *
- * Author: Stefan Berger, stefanb@us.ibm.com
- * Grant table support: Mahadevan Gomathisankaran
- *
- * This code has been derived from tools/libxc/xen/io/netif.h
- *
- * Copyright (c) 2003-2004, Keir Fraser
  */
 
 #ifndef __XEN_PUBLIC_IO_TPMIF_H__
 #define __XEN_PUBLIC_IO_TPMIF_H__
 
-#include "../grant_table.h"
+struct vtpm_shared_page {
+    uint32_t length;         /* request/response length in bytes */
 
-struct tpmif_tx_request {
-    unsigned long addr;   /* Machine address of packet.   */
-    grant_ref_t ref;      /* grant table access reference */
-    uint16_t unused;
-    uint16_t size;        /* Packet size in bytes.        */
-};
-typedef struct tpmif_tx_request tpmif_tx_request_t;
-
-/*
- * The TPMIF_TX_RING_SIZE defines the number of pages the
- * front-end and backend can exchange (= size of array).
- */
-typedef uint32_t TPMIF_RING_IDX;
-
-#define TPMIF_TX_RING_SIZE 1
-
-/* This structure must fit in a memory page. */
-
-struct tpmif_ring {
-    struct tpmif_tx_request req;
-};
-typedef struct tpmif_ring tpmif_ring_t;
+    uint8_t state;           /* 0 - response ready / idle
+                              * 1 - request ready / working */
+    uint8_t locality;        /* for the current request */
+    uint8_t pad;
 
-struct tpmif_tx_interface {
-    struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
+    uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
+    uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
 };
-typedef struct tpmif_tx_interface tpmif_tx_interface_t;
+typedef struct vtpm_shared_page vtpm_shared_page_t;
 
 #endif
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sb-0004CC-NJ; Mon, 10 Dec 2012 19:56:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Am-7N
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.83:15125] by server-1.bemta-5.messagelabs.com id
	C6/7C-12813-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355169373!27502596!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31450 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-2.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <099fc3020000fdd8@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc3020000fdd8 ;
	Mon, 10 Dec 2012 14:56:18 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uG007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:42 -0500
Message-Id: <1355169347-25917-10-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 09/14] stubdom/vtpm: Add PCR pass-through to
	hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the hardware TPM's PCRs to be accessed from a vTPM for
debugging and as a simple alternative to a deep quote in situations
where the integrity of the vTPM's own TCB is not in question.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/Makefile                   |  1 +
 stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
 3 files changed, 112 insertions(+)
 create mode 100644 stubdom/vtpm-pcr-passthrough.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index a657fd2..053fe18 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -210,6 +210,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	patch -d $@ -p1 < vtpm-bufsize.patch
 	patch -d $@ -p1 < vtpm-locality.patch
 	patch -d $@ -p1 < vtpm-locality5-pcrs.patch
+	patch -d $@ -p1 < vtpm-pcr-passthrough.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-pcr-passthrough.patch b/stubdom/vtpm-pcr-passthrough.patch
new file mode 100644
index 0000000..4e898a5
--- /dev/null
+++ b/stubdom/vtpm-pcr-passthrough.patch
@@ -0,0 +1,73 @@
+diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
+index f8f7f0f..885af52 100644
+--- a/tpm/tpm_capability.c
++++ b/tpm/tpm_capability.c
+@@ -72,7 +72,7 @@ static TPM_RESULT cap_property(UINT32 subCapSize, BYTE *subCap,
+   switch (property) {
+     case TPM_CAP_PROP_PCR:
+       debug("[TPM_CAP_PROP_PCR]");
+-      return return_UINT32(respSize, resp, TPM_NUM_PCR);
++      return return_UINT32(respSize, resp, TPM_NUM_PCR_V);
+ 
+     case TPM_CAP_PROP_DIR:
+       debug("[TPM_CAP_PROP_DIR]");
+diff --git a/tpm/tpm_emulator_extern.h b/tpm/tpm_emulator_extern.h
+index 36a32dd..77ed595 100644
+--- a/tpm/tpm_emulator_extern.h
++++ b/tpm/tpm_emulator_extern.h
+@@ -56,6 +56,7 @@ void (*tpm_free)(/*const*/ void *ptr);
+ /* random numbers */
+ 
+ void (*tpm_get_extern_random_bytes)(void *buf, size_t nbytes);
++void tpm_get_extern_pcr(int index, void *buf);
+ 
+ /* usec since last call */
+ 
+diff --git a/tpm/tpm_integrity.c b/tpm/tpm_integrity.c
+index 66ece83..f3c4196 100644
+--- a/tpm/tpm_integrity.c
++++ b/tpm/tpm_integrity.c
+@@ -56,8 +56,11 @@ TPM_RESULT TPM_Extend(TPM_PCRINDEX pcrNum, TPM_DIGEST *inDigest,
+ TPM_RESULT TPM_PCRRead(TPM_PCRINDEX pcrIndex, TPM_PCRVALUE *outDigest)
+ {
+   info("TPM_PCRRead()");
+-  if (pcrIndex >= TPM_NUM_PCR) return TPM_BADINDEX;
+-  memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
++  if (pcrIndex >= TPM_NUM_PCR_V) return TPM_BADINDEX;
++  if (pcrIndex >= TPM_NUM_PCR)
++	tpm_get_extern_pcr(pcrIndex - TPM_NUM_PCR, outDigest);
++  else
++    memcpy(outDigest, &PCR_VALUE[pcrIndex], sizeof(TPM_PCRVALUE));
+   return TPM_SUCCESS;
+ }
+ 
+@@ -138,12 +141,15 @@ TPM_RESULT tpm_compute_pcr_digest(TPM_PCR_SELECTION *pcrSelection,
+   BYTE *buf, *ptr;
+   info("tpm_compute_pcr_digest()");
+   /* create PCR composite */
+-  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR
++  if ((pcrSelection->sizeOfSelect * 8) > TPM_NUM_PCR_V
+       || pcrSelection->sizeOfSelect == 0) return TPM_INVALID_PCR_INFO;
+   for (i = 0, j = 0; i < pcrSelection->sizeOfSelect * 8; i++) {
+     /* is PCR number i selected ? */
+     if (pcrSelection->pcrSelect[i >> 3] & (1 << (i & 7))) {
+-      memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
++      if (i >= TPM_NUM_PCR)
++        tpm_get_extern_pcr(i - TPM_NUM_PCR, &comp.pcrValue[j++]);
++      else
++        memcpy(&comp.pcrValue[j++], &PCR_VALUE[i], sizeof(TPM_PCRVALUE));
+     }
+   }
+   memcpy(&comp.select, pcrSelection, sizeof(TPM_PCR_SELECTION));
+diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
+index 08cef1e..8c97fc5 100644
+--- a/tpm/tpm_structures.h
++++ b/tpm/tpm_structures.h
+@@ -677,6 +677,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
+  * Number of PCRs of the TPM (must be a multiple of eight)
+  */
+ #define TPM_NUM_PCR 32
++#define TPM_NUM_PCR_V (TPM_NUM_PCR + 24)
+ 
+ /*
+  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
index 7eae98b..ed058fb 100644
--- a/stubdom/vtpm/vtpm_cmd.c
+++ b/stubdom/vtpm/vtpm_cmd.c
@@ -134,6 +134,44 @@ egress:
 
 }
 
+extern struct tpmfront_dev* tpmfront_dev;
+void tpm_get_extern_pcr(int index, void *buf) {
+   TPM_RESULT status = TPM_SUCCESS;
+   uint8_t* cmdbuf, *resp, *bptr;
+   size_t resplen = 0;
+   UINT32 len;
+
+   /*Ask the real tpm for the PCR value */
+   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
+   UINT32 size;
+   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
+   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
+
+   /*Create the raw tpm command */
+   bptr = cmdbuf = malloc(size);
+   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
+   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, index));
+
+   /* Send cmd, wait for response */
+   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
+      ERR_TPMFRONT);
+
+   bptr = resp; len = resplen;
+   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
+
+   //Check return status of command
+   CHECKSTATUSGOTO(ord, "TPM_PCRRead()");
+
+   //Get the PCR value out
+   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, buf, 20), ERR_MALFORMED);
+
+   goto egress;
+abort_egress:
+   memset(buf, 0x20, 20);
+egress:
+   free(cmdbuf);
+}
+
 TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
 {
    TPM_RESULT status = TPM_SUCCESS;
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sf-0004Fd-Sq; Mon, 10 Dec 2012 19:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sd-0004C6-Km
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:19 +0000
Received: from [85.158.139.83:31249] by server-13.bemta-5.messagelabs.com id
	B5/DD-10716-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355169375!26525171!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20715 invoked from network); 10 Dec 2012 19:56:16 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:16 -0000
X-TM-IMSS-Message-ID: <099fc9f50000fdde@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc9f50000fdde ;
	Mon, 10 Dec 2012 14:56:20 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uL007869; 
	Mon, 10 Dec 2012 14:56:13 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:47 -0500
From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sf-0004Fd-Sq; Mon, 10 Dec 2012 19:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sd-0004C6-Km
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:19 +0000
Received: from [85.158.139.83:31249] by server-13.bemta-5.messagelabs.com id
	B5/DD-10716-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355169375!26525171!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20715 invoked from network); 10 Dec 2012 19:56:16 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:16 -0000
X-TM-IMSS-Message-ID: <099fc9f50000fdde@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fc9f50000fdde ;
	Mon, 10 Dec 2012 14:56:20 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uL007869; 
	Mon, 10 Dec 2012 14:56:13 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:47 -0500
Message-Id: <1355169347-25917-15-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 14/14] stubdom/Makefile: Fix gmp extract rule
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When NEWLIB_STAMPFILE is updated but gmp has already been extracted, the mv
command will incorrectly create a subdirectory instead of renaming. Remove the
old target before renaming to fix this.
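The failure mode is easy to reproduce outside make. A minimal Python sketch (directory names are hypothetical; `shutil.move` behaves like `mv` here, nesting the source inside an existing destination directory instead of renaming):

```python
import os
import shutil
import tempfile

work = tempfile.mkdtemp()
src = os.path.join(work, "gmp-5.0.2")    # freshly extracted tarball dir
dst = os.path.join(work, "gmp-x86_64")   # stale target from a prior build
os.mkdir(src)
os.mkdir(dst)

# Like `mv`, shutil.move nests src INSIDE an existing dst instead of renaming.
shutil.move(src, dst)
assert os.path.isdir(os.path.join(dst, "gmp-5.0.2"))

# The patch's fix: remove the stale target first, then the move is a rename.
os.mkdir(src)
shutil.rmtree(dst)
shutil.move(src, dst)
assert not os.path.isdir(os.path.join(dst, "gmp-5.0.2"))
```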

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 053fe18..a2463fe 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -162,6 +162,7 @@ ifeq ($(XEN_TARGET_ARCH), x86_32)
 endif
 gmp-$(XEN_TARGET_ARCH): gmp-$(GMP_VERSION).tar.bz2 $(NEWLIB_STAMPFILE)
 	tar xjf $<
+	rm $@ -rf || :
 	mv gmp-$(GMP_VERSION) $@
 	#patch -d $@ -p0 < gmp.patch
 	cd $@; CPPFLAGS="-isystem $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include $(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" CC=$(CC) $(GMPEXT) ./configure --disable-shared --enable-static --disable-fft --without-readline --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sc-0004Ck-Vp; Mon, 10 Dec 2012 19:56:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Az-Jk
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.211:22276] by server-9.bemta-5.messagelabs.com id
	6D/AA-10690-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355169373!15646608!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17606 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-13.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <67fe2d0b00080230@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe2d0b00080230 ;
	Mon, 10 Dec 2012 14:55:43 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uF007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:41 -0500
Message-Id: <1355169347-25917-9-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 08/14] stubdom/vtpm: support multiple backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/vtpm/vtpm.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index aaf1a24..d576c8f 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -141,8 +141,6 @@ int check_ordinal(tpmcmd_t* tpmcmd) {
 
 static void main_loop(void) {
    tpmcmd_t* tpmcmd = NULL;
-   domid_t domid;		/* Domid of frontend */
-   unsigned int handle;	/* handle of frontend */
    int res = -1;
 
    info("VTPM Initializing\n");
@@ -162,15 +160,7 @@ static void main_loop(void) {
       goto abort_postpcrs;
    }
 
-   /* Wait for the frontend domain to connect */
-   info("Waiting for frontend domain to connect..");
-   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
-      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
-   } else {
-      error("Unable to attach to a frontend");
-   }
-
-   tpmcmd = tpmback_req(domid, handle);
+   tpmcmd = tpmback_req_any();
    while(tpmcmd) {
       /* Handle the request */
       if(tpmcmd->req_len) {
@@ -194,7 +184,7 @@ static void main_loop(void) {
       tpmback_resp(tpmcmd);
 
       /* Wait for the next request */
-      tpmcmd = tpmback_req(domid, handle);
+      tpmcmd = tpmback_req_any();
 
    }
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sc-0004CX-IO; Mon, 10 Dec 2012 19:56:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Ay-LU
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.83:31134] by server-3.bemta-5.messagelabs.com id
	37/52-25441-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355169373!28691690!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17480 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-13.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <099fba8a0000fdd1@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fba8a0000fdd1 ;
	Mon, 10 Dec 2012 14:56:16 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uA007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:36 -0500
Message-Id: <1355169347-25917-4-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 03/14] stubdom/vtpm: Support locality field
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vTPM protocol now contains a field allowing the locality of a
command to be specified; pass this to the TPM when processing a packet.
While the locality is not currently checked for validity, a binding
between locality and some distinguishing feature of the client domain
(such as the XSM label) will need to be defined in order to properly
support a multi-client vTPM.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 stubdom/Makefile            |  1 +
 stubdom/vtpm-locality.patch | 50 +++++++++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm.c         |  2 +-
 3 files changed, 52 insertions(+), 1 deletion(-)
 create mode 100644 stubdom/vtpm-locality.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 12f8a6f..fb0e5f9 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -208,6 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	patch -d $@ -p1 < vtpm-bufsize.patch
+	patch -d $@ -p1 < vtpm-locality.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-locality.patch b/stubdom/vtpm-locality.patch
new file mode 100644
index 0000000..8ab7dea
--- /dev/null
+++ b/stubdom/vtpm-locality.patch
@@ -0,0 +1,50 @@
+diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
+index 60bbb90..f8f7f0f 100644
+--- a/tpm/tpm_capability.c
++++ b/tpm/tpm_capability.c
+@@ -949,6 +949,8 @@ static TPM_RESULT set_vendor(UINT32 subCap, BYTE *setValue,
+                              UINT32 setValueSize, BOOL ownerAuth,
+                              BOOL deactivated, BOOL disabled)
+ {
++  if (tpmData.stany.flags.localityModifier != 8)
++    return TPM_BAD_PARAMETER;
+   /* set the capability area with the specified data, on failure
+      deactivate the TPM */
+   switch (subCap) {
+diff --git a/tpm/tpm_cmd_handler.c b/tpm/tpm_cmd_handler.c
+index 288d1ce..9e1cfb4 100644
+--- a/tpm/tpm_cmd_handler.c
++++ b/tpm/tpm_cmd_handler.c
+@@ -4132,7 +4132,7 @@ void tpm_emulator_shutdown()
+   tpm_extern_release();
+ }
+ 
+-int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size)
++int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size, int locality)
+ {
+   TPM_REQUEST req;
+   TPM_RESPONSE rsp;
+@@ -4140,7 +4140,9 @@ int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint3
+   UINT32 len;
+   BOOL free_out;
+ 
+-  debug("tpm_handle_command()");
++  debug("tpm_handle_command(%d)", locality);
++  if (locality != -1)
++    tpmData.stany.flags.localityModifier = locality;
+ 
+   /* we need the whole packet at once, otherwise unmarshalling will fail */
+   if (tpm_unmarshal_TPM_REQUEST((uint8_t**)&in, &in_size, &req) != 0) {
+diff --git a/tpm/tpm_emulator.h b/tpm/tpm_emulator.h
+index eed749e..4c228bd 100644
+--- a/tpm/tpm_emulator.h
++++ b/tpm/tpm_emulator.h
+@@ -59,7 +59,7 @@ void tpm_emulator_shutdown(void);
+  * its usage. In case of an error, all internally allocated memory
+  * is released and the the state of out and out_size is unspecified.
+  */ 
+-int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size);
++int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size, int locality);
+ 
+ #endif /* _TPM_EMULATOR_H_ */
+ 
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index 71aef78..eb7912f 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -183,7 +183,7 @@ static void main_loop(void) {
          }
          /* If not disabled, do the command */
          else {
-            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
+            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len, tpmcmd->locality)) != 0) {
                error("tpm_handle_command() failed");
                create_error_response(tpmcmd, TPM_FAIL);
             }
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sc-0004CL-5O; Mon, 10 Dec 2012 19:56:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Ax-9b
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.211:10173] by server-7.bemta-5.messagelabs.com id
	C9/59-08009-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355169373!15646606!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17576 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-13.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <67fe28b70008022f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe28b70008022f ;
	Mon, 10 Dec 2012 14:55:42 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uD007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:39 -0500
Message-Id: <1355169347-25917-7-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 06/14] stubdom/grub: send kernel measurements to
	vTPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows a domU with an arbitrary kernel and initrd to take advantage
of the static root of trust provided by a vTPM.
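For reference, the measurements reach the PCRs via the standard TPM 1.2 extend operation: the new PCR value is SHA1 of the old value concatenated with the 20-byte digest being extended. A minimal sketch of that semantics (the blob contents below are hypothetical placeholders for the kernel, command line, and initrd):

```python
import hashlib

def pcr_extend(pcr: bytes, data: bytes) -> bytes:
    # TPM 1.2 extend: PCR_new = SHA1(PCR_old || SHA1(data))
    return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

pcr4 = bytes(20)   # the vTPM starts with PCRs all-zero
pcr5 = bytes(20)
pcr4 = pcr_extend(pcr4, b"kernel image bytes")   # PCR #4: kernel
pcr5 = pcr_extend(pcr5, b"command line")         # PCR #5: cmdline
pcr5 = pcr_extend(pcr5, b"initrd bytes")         # PCR #5: initrd
```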

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 docs/misc/vtpm.txt      | 41 ++++++++++++++++++++++++++++---------
 stubdom/Makefile        |  2 +-
 stubdom/grub/Makefile   |  1 +
 stubdom/grub/kexec.c    | 54 +++++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/grub/minios.cfg |  1 +
 5 files changed, 89 insertions(+), 10 deletions(-)

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index fc6029a..6ea27bd 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,7 +1,7 @@
 Copyright (c) 2010-2012 United States Government, as represented by
 the Secretary of Defense.  All rights reserved.
 November 12 2012
-Authors: Matthew Fioravante (JHUAPL),
+Authors: Matthew Fioravante (JHUAPL), Daniel De Graaf (NSA)
 
 This document describes the virtual Trusted Platform Module (vTPM) subsystem
 for Xen. The reader is assumed to have familiarity with building and installing
@@ -15,7 +15,8 @@ operating system (a DomU).  This allows programs to interact with a TPM in a
 virtual system the same way they interact with a TPM on the physical system.
 Each guest gets its own unique, emulated, software TPM.  However, each of the
 vTPM's secrets (Keys, NVRAM, etc) are managed by a vTPM Manager domain, which
-seals the secrets to the Physical TPM.  Thus, the vTPM subsystem extends the
+seals the secrets to the Physical TPM.  If the process of creating each of these
+domains (manager, vTPM, and guest) is trusted, the vTPM subsystem extends the
 chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
 major component of vTPM is implemented as a separate domain, providing secure
 separation guaranteed by the hypervisor. The vTPM domains are implemented in
@@ -119,14 +120,17 @@ the stubdom tree.
 Compiling the LINUX dom0 kernel:
 --------------------------------
 
-The Linux dom0 kernel has no special prerequisites.
+The Linux dom0 kernel should not try accessing the TPM while the vTPM
+Manager domain is accessing it; the simplest way to accomplish this is
+to ensure the kernel is compiled without a driver for the TPM, or avoid
+loading the driver by blacklisting the module.
 
 Compiling the LINUX domU kernel:
 --------------------------------
 
-The domU kernel used by domains with vtpms must
-include the xen-tpmfront.ko driver. It can be built
-directly into the kernel or as a module.
+The domU kernel used by domains with vtpms must include the xen-tpmfront.ko
+driver. It can be built directly into the kernel or as a module; however, some
+features such as IMA require the TPM to be built in to the kernel.
 
 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y
@@ -160,9 +164,10 @@ disk=["file:/var/vtpmmgrdom.img,hda,w"]
 name="vtpmmgrdom"
 iomem=["fed40,5"]
 
-The iomem line tells xl to allow access to the TPM
-IO memory pages, which are 5 pages that start at
-0xfed40000.
+The iomem line tells xl to allow access to all of the TPM IO memory
+pages, which are 5 pages (one per locality) that start at 0xfed40000. By
+default, the TPM manager uses locality 0 (so only the page at 0xfed40 is
+needed); this can be changed on the domain's command line.
 
 Starting and stopping the manager:
 ----------------------------------
@@ -285,6 +290,24 @@ On guest:
 You may wish to write a script to start your vtpm and guest together.
 
 ------------------------------
+INTEGRATION WITH PV-GRUB
+------------------------------
+
+The vTPM currently starts up with all PCRs set to their default values (all
+zeros for the lower 16).  This means that any decisions about the
+trustworthiness of the created domain must be made based on the environment that
+created the vTPM and the domU; for example, a system that only constructs images
+using a trusted configuration and guest kernel may be able to provide guarantees
+about the guests and any measurements done by that kernel (such as the IMA TCB
+log).  Guests wishing to use a custom kernel in such a secure environment are
+often started using the pv-grub bootloader as the kernel, which then can load
+the untrusted kernel without needing to parse an untrusted filesystem and kernel
+in dom0.  If the pv-grub stub domain succeeds in connecting to a vTPM, it will
+extend the hash of the kernel that it boots into PCR #4, and will extend the
+command line and initrd into PCR #5 before booting so that a domU booted in this
+way can attest to its early boot state.
+
+------------------------------
 MORE INFORMATION
 ------------------------------
 
diff --git a/stubdom/Makefile b/stubdom/Makefile
index fb0e5f9..95e10f3 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -398,7 +398,7 @@ grub-upstream: grub-$(GRUB_VERSION).tar.gz
 	done
 
 .PHONY: grub
-grub: grub-upstream $(CROSS_ROOT)
+grub: cross-polarssl grub-upstream $(CROSS_ROOT)
 	mkdir -p grub-$(XEN_TARGET_ARCH)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
 
diff --git a/stubdom/grub/Makefile b/stubdom/grub/Makefile
index d6e3a1e..6bd2c4c 100644
--- a/stubdom/grub/Makefile
+++ b/stubdom/grub/Makefile
@@ -60,6 +60,7 @@ NETBOOT_SOURCES:=$(addprefix netboot/,$(NETBOOT_SOURCES))
 $(BOOT): DEF_CPPFLAGS+=-D__ASSEMBLY__
 
 PV_GRUB_SOURCES = kexec.c mini-os.c
+PV_GRUB_SOURCES += ../polarssl-$(XEN_TARGET_ARCH)/library/sha1.o
 
 SOURCES = $(NETBOOT_SOURCES) $(STAGE2_SOURCES) $(PV_GRUB_SOURCES)
 
diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c
index b21c91a..cef357e 100644
--- a/stubdom/grub/kexec.c
+++ b/stubdom/grub/kexec.c
@@ -28,7 +28,9 @@
 #include <blkfront.h>
 #include <netfront.h>
 #include <fbfront.h>
+#include <tpmfront.h>
 #include <shared.h>
+#include <byteswap.h>
 
 #include "mini-os.h"
 
@@ -54,6 +56,22 @@ static unsigned long allocated;
 int pin_table(xc_interface *xc_handle, unsigned int type, unsigned long mfn,
               domid_t dom);
 
+#define TPM_TAG_RQU_COMMAND 0xC1
+#define TPM_ORD_Extend 20
+
+struct pcr_extend_cmd {
+	uint16_t tag;
+	uint32_t size;
+	uint32_t ord;
+
+	uint32_t pcr;
+	unsigned char hash[20];
+} __attribute__((packed));
+
+/* Not imported from polarssl's header since the prototype unhelpfully defines
+ * the input as unsigned char, which causes pointer type mismatches */
+void sha1(const void *input, size_t ilen, unsigned char output[20]);
+
 /* We need mfn to appear as target_pfn, so exchange with the MFN there */
 static void do_exchange(struct xc_dom_image *dom, xen_pfn_t target_pfn, xen_pfn_t source_mfn)
 {
@@ -117,6 +135,40 @@ int kexec_allocate(struct xc_dom_image *dom, xen_vaddr_t up_to)
     return 0;
 }
 
+static void tpm_hash2pcr(struct xc_dom_image *dom, char *cmdline)
+{
+	struct tpmfront_dev* tpm = init_tpmfront(NULL);
+	uint8_t *resp;
+	size_t resplen = 0;
+	struct pcr_extend_cmd cmd;
+
+	/* If all guests have access to a vTPM, it may be useful to replace this
+	 * with ASSERT(tpm) to prevent configuration errors from allowing a guest
+	 * to boot without a TPM (or with a TPM that has not been sent any
+	 * measurements, which could allow forging the measurements).
+	 */
+	if (!tpm)
+		return;
+
+	cmd.tag = bswap_16(TPM_TAG_RQU_COMMAND);
+	cmd.size = bswap_32(sizeof(cmd));
+	cmd.ord = bswap_32(TPM_ORD_Extend);
+	cmd.pcr = bswap_32(4); // PCR #4 for kernel
+	sha1(dom->kernel_blob, dom->kernel_size, cmd.hash);
+
+	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
+
+	cmd.pcr = bswap_32(5); // PCR #5 for cmdline
+	sha1(cmdline, strlen(cmdline), cmd.hash);
+	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
+
+	cmd.pcr = bswap_32(5); // PCR #5 for initrd
+	sha1(dom->ramdisk_blob, dom->ramdisk_size, cmd.hash);
+	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
+
+	shutdown_tpmfront(tpm);
+}
+
 void kexec(void *kernel, long kernel_size, void *module, long module_size, char *cmdline, unsigned long flags)
 {
     struct xc_dom_image *dom;
@@ -151,6 +203,8 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
     dom->console_evtchn = start_info.console.domU.evtchn;
     dom->xenstore_evtchn = start_info.store_evtchn;
 
+    tpm_hash2pcr(dom, cmdline);
+
     if ( (rc = xc_dom_boot_xen_init(dom, xc_handle, domid)) != 0 ) {
         grub_printf("xc_dom_boot_xen_init returned %d\n", rc);
         errnum = ERR_BOOT_FAILURE;
diff --git a/stubdom/grub/minios.cfg b/stubdom/grub/minios.cfg
index 40cfa68..8df4909 100644
--- a/stubdom/grub/minios.cfg
+++ b/stubdom/grub/minios.cfg
@@ -1,2 +1,3 @@
 CONFIG_START_NETWORK=n
 CONFIG_SPARSE_BSS=n
+CONFIG_TPMFRONT=y
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Se-0004EH-JO; Mon, 10 Dec 2012 19:56:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sa-0004Bd-Rt
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:17 +0000
Received: from [85.158.139.83:31196] by server-16.bemta-5.messagelabs.com id
	1D/BD-09208-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355169372!24924090!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21210 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-14.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <099fbcfa0000fdd3@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fbcfa0000fdd3 ;
	Mon, 10 Dec 2012 14:56:17 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uC007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:38 -0500
Message-Id: <1355169347-25917-6-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 05/14] stubdom/vtpm: make state save operation
	atomic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the save format of the vtpm stubdom to include two copies
of the saved data: one active, and one inactive. When saving the state,
data is written to the inactive slot before updating the key and hash
saved with the TPM Manager, which determines the active slot when the
vTPM starts up.
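A minimal sketch of the A/B slot idea in Python (class and names are hypothetical; in the real code the "commit" step is updating the key and hash stored with the TPM Manager, not an in-memory field):

```python
class TwoSlotStore:
    """Write-then-commit: new data goes to the inactive slot, and only a
    successful write flips the record of which slot is active, so a crash
    mid-write leaves the previous state readable."""
    def __init__(self):
        self.slots = [None, None]
        self.active = -1            # -1: no valid saved state yet

    def save(self, data: bytes) -> None:
        inactive = 0 if self.active != 0 else 1
        self.slots[inactive] = data   # may fail; active slot untouched
        self.active = inactive        # the "commit" step

    def load(self) -> bytes:
        if self.active < 0:
            raise RuntimeError("no saved state")
        return self.slots[self.active]

store = TwoSlotStore()
store.save(b"state v1")
store.save(b"state v2")   # written to the other slot, then committed
```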

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/vtpm/vtpmblk.c | 74 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 60 insertions(+), 14 deletions(-)

diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
index b343bd8..fe529ab 100644
--- a/stubdom/vtpm/vtpmblk.c
+++ b/stubdom/vtpm/vtpmblk.c
@@ -26,6 +26,7 @@
 
 static struct blkfront_dev* blkdev = NULL;
 static int blkfront_fd = -1;
+static uint64_t slot_size = 0;
 
 int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
 {
@@ -45,6 +46,8 @@ int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
       goto error;
    }
 
+   slot_size = blkinfo.sectors * blkinfo.sector_size / 2;
+
    return 0;
 error:
    shutdown_blkfront(blkdev);
@@ -59,15 +62,20 @@ void shutdown_vtpmblk(void)
    blkdev = NULL;
 }
 
-int write_vtpmblk_raw(uint8_t *data, size_t data_length)
+static int write_vtpmblk_raw(uint8_t *data, size_t data_length, int slot)
 {
    int rc;
    uint32_t lenbuf;
-   debug("Begin Write data=%p len=%u", data, data_length);
+   debug("Begin Write data=%p len=%u slot=%u ssize=%u", data, data_length, slot, slot_size);
+
+   if (data_length > slot_size - 4) {
+      error("vtpm data cannot fit in data slot (%d/%d).", data_length, slot_size - 4);
+      return -1;
+   }
 
    lenbuf = cpu_to_be32((uint32_t)data_length);
 
-   lseek(blkfront_fd, 0, SEEK_SET);
+   lseek(blkfront_fd, slot * slot_size, SEEK_SET);
    if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
       error("write(length) failed! error was %s", strerror(errno));
       return -1;
@@ -82,12 +90,12 @@ int write_vtpmblk_raw(uint8_t *data, size_t data_length)
    return 0;
 }
 
-int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
+static int read_vtpmblk_raw(uint8_t **data, size_t *data_length, int slot)
 {
    int rc;
    uint32_t lenbuf;
 
-   lseek(blkfront_fd, 0, SEEK_SET);
+   lseek(blkfront_fd, slot * slot_size, SEEK_SET);
    if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
       error("read(length) failed! error was %s", strerror(errno));
       return -1;
@@ -97,6 +105,10 @@ int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
       error("read 0 data_length for NVM");
       return -1;
    }
+   if(*data_length > slot_size - 4) {
+      error("read invalid data_length for NVM");
+      return -1;
+   }
 
    *data = tpm_malloc(*data_length);
    if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
@@ -104,7 +116,7 @@ int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
       return -1;
    }
 
-   info("Read %u bytes from NVM persistent storage", *data_length);
+   info("Read %u bytes from NVM persistent storage (slot %d)", *data_length, slot);
    return 0;
 }
 
@@ -221,6 +233,9 @@ egress:
    return rc;
 }
 
+/* Current active state slot, or -1 if no valid saved state exists */
+static int active_slot = -1;
+
 int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
    int rc;
    uint8_t* cipher = NULL;
@@ -228,12 +243,17 @@ int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_
    uint8_t hashkey[HASHKEYSZ];
    uint8_t* symkey = hashkey + HASHSZ;
 
+   /* Switch to the other slot. Note that in a new vTPM, the read will not
+	* succeed, so active_slot will be -1 and we will write to slot 0.
+	*/
+   active_slot = !active_slot;
+
    /* Encrypt the data */
    if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
       goto abort_egress;
    }
    /* Write to disk */
-   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
+   if((rc = write_vtpmblk_raw(cipher, cipher_len, active_slot))) {
       goto abort_egress;
    }
    /* Get sha1 hash of data */
@@ -256,7 +276,8 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
    size_t cipher_len = 0;
    size_t keysize;
    uint8_t* hashkey = NULL;
-   uint8_t hash[HASHSZ];
+   uint8_t hash0[HASHSZ];
+   uint8_t hash1[HASHSZ];
    uint8_t* symkey;
 
    /* Retreive the hash and the key from the manager */
@@ -270,14 +291,32 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
    }
    symkey = hashkey + HASHSZ;
 
-   /* Read from disk now */
-   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
+   active_slot = 0;
+   debug("Reading slot 0 from disk\n");
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len, 0))) {
       goto abort_egress;
    }
 
    /* Compute the hash of the cipher text and compare */
-   sha1(cipher, cipher_len, hash);
-   if(memcmp(hash, hashkey, HASHSZ)) {
+   sha1(cipher, cipher_len, hash0);
+   if(!memcmp(hash0, hashkey, HASHSZ))
+      goto valid;
+
+   free(cipher);
+   cipher = NULL;
+
+   active_slot = 1;
+   debug("Reading slot 1 from disk (offset=%u)\n", slot_size);
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len, 1))) {
+      goto abort_egress;
+   }
+
+   /* Compute the hash of the cipher text and compare */
+   sha1(cipher, cipher_len, hash1);
+   if(!memcmp(hash1, hashkey, HASHSZ))
+      goto valid;
+
+   {
       int i;
       error("NVM Storage Checksum failed!");
       printf("Expected: ");
@@ -285,14 +324,20 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
 	 printf("%02hhX ", hashkey[i]);
       }
       printf("\n");
-      printf("Actual:   ");
+      printf("Slot 0:   ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hash0[i]);
+      }
+      printf("\n");
+      printf("Slot 1:   ");
       for(i = 0; i < HASHSZ; ++i) {
-	 printf("%02hhX ", hash[i]);
+	 printf("%02hhX ", hash1[i]);
       }
       printf("\n");
       rc = -1;
       goto abort_egress;
    }
+valid:
 
    /* Decrypt the blob */
    if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
@@ -300,6 +345,7 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
    }
    goto egress;
 abort_egress:
+   active_slot = -1;
 egress:
    free(cipher);
    free(hashkey);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Se-0004EH-JO; Mon, 10 Dec 2012 19:56:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sa-0004Bd-Rt
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:17 +0000
Received: from [85.158.139.83:31196] by server-16.bemta-5.messagelabs.com id
	1D/BD-09208-06E36C05; Mon, 10 Dec 2012 19:56:16 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355169372!24924090!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21210 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-14.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <099fbcfa0000fdd3@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fbcfa0000fdd3 ;
	Mon, 10 Dec 2012 14:56:17 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uC007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:38 -0500
Message-Id: <1355169347-25917-6-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 05/14] stubdom/vtpm: make state save operation
	atomic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the save format of the vtpm stubdom to include two copies
of the saved data: one active, and one inactive. When saving the state,
data is written to the inactive slot before updating the key and hash
saved with the TPM Manager, which determines the active slot when the
vTPM starts up.
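
As an illustration of the scheme above, here is a minimal, self-contained C model of the
two-slot save (hypothetical names; an in-memory buffer stands in for the blkfront device,
and flipping active_slot stands in for committing the new key/hash to the TPM Manager;
the real code also stores the length prefix big-endian, which this sketch omits):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DISK_SIZE 1024
#define SLOT_SIZE (DISK_SIZE / 2)   /* two equal slots, as in the patch */

static uint8_t disk[DISK_SIZE];     /* stand-in for the block device */
static int active_slot = -1;        /* -1: no valid saved state yet */

/* Write a length-prefixed blob into the slot NOT currently active, and
 * flip active_slot only after the write succeeds.  A crash before the
 * flip leaves the previously committed slot intact and readable. */
static int save_state(const uint8_t *data, uint32_t len)
{
    int target = !active_slot;      /* -1 -> 0 on first save, else toggle */
    if (len > SLOT_SIZE - 4)
        return -1;                  /* blob must fit after the 4-byte header */
    memcpy(disk + target * SLOT_SIZE, &len, 4);
    memcpy(disk + target * SLOT_SIZE + 4, data, len);
    active_slot = target;           /* commit point */
    return 0;
}

/* Read back whichever slot was last committed. */
static int load_state(uint8_t *out, uint32_t *len)
{
    if (active_slot < 0)
        return -1;
    memcpy(len, disk + active_slot * SLOT_SIZE, 4);
    if (*len == 0 || *len > SLOT_SIZE - 4)
        return -1;                  /* same sanity check as read_vtpmblk_raw */
    memcpy(out, disk + active_slot * SLOT_SIZE + 4, *len);
    return 0;
}
```

In the real code the "commit" is the write of the new hash and key to the TPM Manager,
which is what determines the active slot at the next startup.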

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/vtpm/vtpmblk.c | 74 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 60 insertions(+), 14 deletions(-)

diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
index b343bd8..fe529ab 100644
--- a/stubdom/vtpm/vtpmblk.c
+++ b/stubdom/vtpm/vtpmblk.c
@@ -26,6 +26,7 @@
 
 static struct blkfront_dev* blkdev = NULL;
 static int blkfront_fd = -1;
+static uint64_t slot_size = 0;
 
 int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
 {
@@ -45,6 +46,8 @@ int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
       goto error;
    }
 
+   slot_size = blkinfo.sectors * blkinfo.sector_size / 2;
+
    return 0;
 error:
    shutdown_blkfront(blkdev);
@@ -59,15 +62,20 @@ void shutdown_vtpmblk(void)
    blkdev = NULL;
 }
 
-int write_vtpmblk_raw(uint8_t *data, size_t data_length)
+static int write_vtpmblk_raw(uint8_t *data, size_t data_length, int slot)
 {
    int rc;
    uint32_t lenbuf;
-   debug("Begin Write data=%p len=%u", data, data_length);
+   debug("Begin Write data=%p len=%u slot=%u ssize=%u", data, data_length, slot, slot_size);
+
+   if (data_length > slot_size - 4) {
+      error("vtpm data cannot fit in data slot (%d/%d).", data_length, slot_size - 4);
+      return -1;
+   }
 
    lenbuf = cpu_to_be32((uint32_t)data_length);
 
-   lseek(blkfront_fd, 0, SEEK_SET);
+   lseek(blkfront_fd, slot * slot_size, SEEK_SET);
    if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
       error("write(length) failed! error was %s", strerror(errno));
       return -1;
@@ -82,12 +90,12 @@ int write_vtpmblk_raw(uint8_t *data, size_t data_length)
    return 0;
 }
 
-int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
+static int read_vtpmblk_raw(uint8_t **data, size_t *data_length, int slot)
 {
    int rc;
    uint32_t lenbuf;
 
-   lseek(blkfront_fd, 0, SEEK_SET);
+   lseek(blkfront_fd, slot * slot_size, SEEK_SET);
    if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
       error("read(length) failed! error was %s", strerror(errno));
       return -1;
@@ -97,6 +105,10 @@ int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
       error("read 0 data_length for NVM");
       return -1;
    }
+   if(*data_length > slot_size - 4) {
+      error("read invalid data_length for NVM");
+      return -1;
+   }
 
    *data = tpm_malloc(*data_length);
    if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
@@ -104,7 +116,7 @@ int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
       return -1;
    }
 
-   info("Read %u bytes from NVM persistent storage", *data_length);
+   info("Read %u bytes from NVM persistent storage (slot %d)", *data_length, slot);
    return 0;
 }
 
@@ -221,6 +233,9 @@ egress:
    return rc;
 }
 
+/* Current active state slot, or -1 if no valid saved state exists */
+static int active_slot = -1;
+
 int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
    int rc;
    uint8_t* cipher = NULL;
@@ -228,12 +243,17 @@ int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_
    uint8_t hashkey[HASHKEYSZ];
    uint8_t* symkey = hashkey + HASHSZ;
 
+   /* Switch to the other slot. Note that in a new vTPM, the read will not
+	* succeed, so active_slot will be -1 and we will write to slot 0.
+	*/
+   active_slot = !active_slot;
+
    /* Encrypt the data */
    if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
       goto abort_egress;
    }
    /* Write to disk */
-   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
+   if((rc = write_vtpmblk_raw(cipher, cipher_len, active_slot))) {
       goto abort_egress;
    }
    /* Get sha1 hash of data */
@@ -256,7 +276,8 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
    size_t cipher_len = 0;
    size_t keysize;
    uint8_t* hashkey = NULL;
-   uint8_t hash[HASHSZ];
+   uint8_t hash0[HASHSZ];
+   uint8_t hash1[HASHSZ];
    uint8_t* symkey;
 
    /* Retreive the hash and the key from the manager */
@@ -270,14 +291,32 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
    }
    symkey = hashkey + HASHSZ;
 
-   /* Read from disk now */
-   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
+   active_slot = 0;
+   debug("Reading slot 0 from disk\n");
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len, 0))) {
       goto abort_egress;
    }
 
    /* Compute the hash of the cipher text and compare */
-   sha1(cipher, cipher_len, hash);
-   if(memcmp(hash, hashkey, HASHSZ)) {
+   sha1(cipher, cipher_len, hash0);
+   if(!memcmp(hash0, hashkey, HASHSZ))
+      goto valid;
+
+   free(cipher);
+   cipher = NULL;
+
+   active_slot = 1;
+   debug("Reading slot 1 from disk (offset=%u)\n", slot_size);
+   if((rc = read_vtpmblk_raw(&cipher, &cipher_len, 1))) {
+      goto abort_egress;
+   }
+
+   /* Compute the hash of the cipher text and compare */
+   sha1(cipher, cipher_len, hash1);
+   if(!memcmp(hash1, hashkey, HASHSZ))
+      goto valid;
+
+   {
       int i;
       error("NVM Storage Checksum failed!");
       printf("Expected: ");
@@ -285,14 +324,20 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
 	 printf("%02hhX ", hashkey[i]);
       }
       printf("\n");
-      printf("Actual:   ");
+      printf("Slot 0:   ");
+      for(i = 0; i < HASHSZ; ++i) {
+	 printf("%02hhX ", hash0[i]);
+      }
+      printf("\n");
+      printf("Slot 1:   ");
       for(i = 0; i < HASHSZ; ++i) {
-	 printf("%02hhX ", hash[i]);
+	 printf("%02hhX ", hash1[i]);
       }
       printf("\n");
       rc = -1;
       goto abort_egress;
    }
+valid:
 
    /* Decrypt the blob */
    if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
@@ -300,6 +345,7 @@ int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data
    }
    goto egress;
 abort_egress:
+   active_slot = -1;
 egress:
    free(cipher);
    free(hashkey);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sc-0004CL-5O; Mon, 10 Dec 2012 19:56:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Ax-9b
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.211:10173] by server-7.bemta-5.messagelabs.com id
	C9/59-08009-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355169373!15646606!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17576 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-13.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <67fe28b70008022f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe28b70008022f ;
	Mon, 10 Dec 2012 14:55:42 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uD007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:39 -0500
Message-Id: <1355169347-25917-7-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 06/14] stubdom/grub: send kernel measurements to
	vTPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows a domU with an arbitrary kernel and initrd to take advantage
of the static root of trust provided by a vTPM.
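
The measurement step reduces to marshalling a TPM 1.2 Extend request with big-endian
fields. The sketch below (a simplified stand-alone version of the patch's
pcr_extend_cmd; the be16/be32 helpers are hypothetical and, like the patch's
bswap_16/bswap_32 calls, assume a little-endian host) shows the wire layout:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TPM_TAG_RQU_COMMAND 0xC1
#define TPM_ORD_Extend 20

/* Byte layout of a TPM 1.2 TPM_Extend request; all multi-byte fields are
 * big-endian on the wire, so a little-endian guest must byte-swap them. */
struct pcr_extend_cmd {
    uint16_t tag;
    uint32_t size;
    uint32_t ord;
    uint32_t pcr;
    unsigned char hash[20];
} __attribute__((packed));

static uint16_t be16(uint16_t v) { return (uint16_t)((v >> 8) | (v << 8)); }
static uint32_t be32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0xff00u) | ((v << 8) & 0xff0000u) | (v << 24);
}

/* Fill in an Extend request for the given PCR; the caller supplies the
 * 20-byte SHA-1 digest of whatever is being measured (kernel, cmdline,
 * initrd in the patch). */
static void make_extend(struct pcr_extend_cmd *cmd, uint32_t pcr,
                        const unsigned char digest[20])
{
    cmd->tag  = be16(TPM_TAG_RQU_COMMAND);
    cmd->size = be32(sizeof(*cmd));
    cmd->ord  = be32(TPM_ORD_Extend);
    cmd->pcr  = be32(pcr);
    memcpy(cmd->hash, digest, 20);
}
```

The resulting buffer is what tpmfront_cmd() sends to the vTPM, once per measured item.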

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 docs/misc/vtpm.txt      | 41 ++++++++++++++++++++++++++++---------
 stubdom/Makefile        |  2 +-
 stubdom/grub/Makefile   |  1 +
 stubdom/grub/kexec.c    | 54 +++++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/grub/minios.cfg |  1 +
 5 files changed, 89 insertions(+), 10 deletions(-)

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index fc6029a..6ea27bd 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,7 +1,7 @@
 Copyright (c) 2010-2012 United States Government, as represented by
 the Secretary of Defense.  All rights reserved.
 November 12 2012
-Authors: Matthew Fioravante (JHUAPL),
+Authors: Matthew Fioravante (JHUAPL), Daniel De Graaf (NSA)
 
 This document describes the virtual Trusted Platform Module (vTPM) subsystem
 for Xen. The reader is assumed to have familiarity with building and installing
@@ -15,7 +15,8 @@ operating system (a DomU).  This allows programs to interact with a TPM in a
 virtual system the same way they interact with a TPM on the physical system.
 Each guest gets its own unique, emulated, software TPM.  However, each of the
 vTPM's secrets (Keys, NVRAM, etc) are managed by a vTPM Manager domain, which
-seals the secrets to the Physical TPM.  Thus, the vTPM subsystem extends the
+seals the secrets to the Physical TPM.  If the process of creating each of these
+domains (manager, vTPM, and guest) is trusted, the vTPM subsystem extends the
 chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
 major component of vTPM is implemented as a separate domain, providing secure
 separation guaranteed by the hypervisor. The vTPM domains are implemented in
@@ -119,14 +120,17 @@ the stubdom tree.
 Compiling the LINUX dom0 kernel:
 --------------------------------
 
-The Linux dom0 kernel has no special prerequisites.
+The Linux dom0 kernel should not try accessing the TPM while the vTPM
+Manager domain is accessing it; the simplest way to accomplish this is
+to ensure the kernel is compiled without a driver for the TPM, or avoid
+loading the driver by blacklisting the module.
 
 Compiling the LINUX domU kernel:
 --------------------------------
 
-The domU kernel used by domains with vtpms must
-include the xen-tpmfront.ko driver. It can be built
-directly into the kernel or as a module.
+The domU kernel used by domains with vtpms must include the xen-tpmfront.ko
+driver. It can be built directly into the kernel or as a module; however, some
+features such as IMA require the TPM driver to be built into the kernel.
 
 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y
@@ -160,9 +164,10 @@ disk=["file:/var/vtpmmgrdom.img,hda,w"]
 name="vtpmmgrdom"
 iomem=["fed40,5"]
 
-The iomem line tells xl to allow access to the TPM
-IO memory pages, which are 5 pages that start at
-0xfed40000.
+The iomem line tells xl to allow access to all of the TPM IO memory
+pages, which are 5 pages (one per locality) that start at 0xfed40000. By
+default, the TPM manager uses locality 0 (so only the page at 0xfed40 is
+needed); this can be changed on the domain's command line.
 
 Starting and stopping the manager:
 ----------------------------------
@@ -285,6 +290,24 @@ On guest:
 You may wish to write a script to start your vtpm and guest together.
 
 ------------------------------
+INTEGRATION WITH PV-GRUB
+------------------------------
+
+The vTPM currently starts up with all PCRs set to their default values (all
+zeros for the lower 16).  This means that any decisions about the
+trustworthiness of the created domain must be made based on the environment that
+created the vTPM and the domU; for example, a system that only constructs images
+using a trusted configuration and guest kernel may be able to provide guarantees
+about the guests and any measurements done by that kernel (such as the IMA TCB
+log).  Guests wishing to use a custom kernel in such a secure environment are
+often started using the pv-grub bootloader as the kernel, which then can load
+the untrusted kernel without needing to parse an untrusted filesystem and kernel
+in dom0.  If the pv-grub stub domain succeeds in connecting to a vTPM, it will
+extend the hash of the kernel that it boots into PCR #4, and will extend the
+command line and initrd into PCR #5 before booting so that a domU booted in this
+way can attest to its early boot state.
+
+------------------------------
 MORE INFORMATION
 ------------------------------
 
diff --git a/stubdom/Makefile b/stubdom/Makefile
index fb0e5f9..95e10f3 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -398,7 +398,7 @@ grub-upstream: grub-$(GRUB_VERSION).tar.gz
 	done
 
 .PHONY: grub
-grub: grub-upstream $(CROSS_ROOT)
+grub: cross-polarssl grub-upstream $(CROSS_ROOT)
 	mkdir -p grub-$(XEN_TARGET_ARCH)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
 
diff --git a/stubdom/grub/Makefile b/stubdom/grub/Makefile
index d6e3a1e..6bd2c4c 100644
--- a/stubdom/grub/Makefile
+++ b/stubdom/grub/Makefile
@@ -60,6 +60,7 @@ NETBOOT_SOURCES:=$(addprefix netboot/,$(NETBOOT_SOURCES))
 $(BOOT): DEF_CPPFLAGS+=-D__ASSEMBLY__
 
 PV_GRUB_SOURCES = kexec.c mini-os.c
+PV_GRUB_SOURCES += ../polarssl-$(XEN_TARGET_ARCH)/library/sha1.o
 
 SOURCES = $(NETBOOT_SOURCES) $(STAGE2_SOURCES) $(PV_GRUB_SOURCES)
 
diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c
index b21c91a..cef357e 100644
--- a/stubdom/grub/kexec.c
+++ b/stubdom/grub/kexec.c
@@ -28,7 +28,9 @@
 #include <blkfront.h>
 #include <netfront.h>
 #include <fbfront.h>
+#include <tpmfront.h>
 #include <shared.h>
+#include <byteswap.h>
 
 #include "mini-os.h"
 
@@ -54,6 +56,22 @@ static unsigned long allocated;
 int pin_table(xc_interface *xc_handle, unsigned int type, unsigned long mfn,
               domid_t dom);
 
+#define TPM_TAG_RQU_COMMAND 0xC1
+#define TPM_ORD_Extend 20
+
+struct pcr_extend_cmd {
+	uint16_t tag;
+	uint32_t size;
+	uint32_t ord;
+
+	uint32_t pcr;
+	unsigned char hash[20];
+} __attribute__((packed));
+
+/* Not imported from polarssl's header since the prototype unhelpfully defines
+ * the input as unsigned char, which causes pointer type mismatches */
+void sha1(const void *input, size_t ilen, unsigned char output[20]);
+
 /* We need mfn to appear as target_pfn, so exchange with the MFN there */
 static void do_exchange(struct xc_dom_image *dom, xen_pfn_t target_pfn, xen_pfn_t source_mfn)
 {
@@ -117,6 +135,40 @@ int kexec_allocate(struct xc_dom_image *dom, xen_vaddr_t up_to)
     return 0;
 }
 
+static void tpm_hash2pcr(struct xc_dom_image *dom, char *cmdline)
+{
+	struct tpmfront_dev* tpm = init_tpmfront(NULL);
+	uint8_t *resp;
+	size_t resplen = 0;
+	struct pcr_extend_cmd cmd;
+
+	/* If all guests have access to a vTPM, it may be useful to replace this
+	 * with ASSERT(tpm) to prevent configuration errors from allowing a guest
+	 * to boot without a TPM (or with a TPM that has not been sent any
+	 * measurements, which could allow forging the measurements).
+	 */
+	if (!tpm)
+		return;
+
+	cmd.tag = bswap_16(TPM_TAG_RQU_COMMAND);
+	cmd.size = bswap_32(sizeof(cmd));
+	cmd.ord = bswap_32(TPM_ORD_Extend);
+	cmd.pcr = bswap_32(4); // PCR #4 for kernel
+	sha1(dom->kernel_blob, dom->kernel_size, cmd.hash);
+
+	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
+
+	cmd.pcr = bswap_32(5); // PCR #5 for cmdline
+	sha1(cmdline, strlen(cmdline), cmd.hash);
+	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
+
+	cmd.pcr = bswap_32(5); // PCR #5 for initrd
+	sha1(dom->ramdisk_blob, dom->ramdisk_size, cmd.hash);
+	tpmfront_cmd(tpm, (void*)&cmd, sizeof(cmd), &resp, &resplen);
+
+	shutdown_tpmfront(tpm);
+}
+
 void kexec(void *kernel, long kernel_size, void *module, long module_size, char *cmdline, unsigned long flags)
 {
     struct xc_dom_image *dom;
@@ -151,6 +203,8 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
     dom->console_evtchn = start_info.console.domU.evtchn;
     dom->xenstore_evtchn = start_info.store_evtchn;
 
+    tpm_hash2pcr(dom, cmdline);
+
     if ( (rc = xc_dom_boot_xen_init(dom, xc_handle, domid)) != 0 ) {
         grub_printf("xc_dom_boot_xen_init returned %d\n", rc);
         errnum = ERR_BOOT_FAILURE;
diff --git a/stubdom/grub/minios.cfg b/stubdom/grub/minios.cfg
index 40cfa68..8df4909 100644
--- a/stubdom/grub/minios.cfg
+++ b/stubdom/grub/minios.cfg
@@ -1,2 +1,3 @@
 CONFIG_START_NETWORK=n
 CONFIG_SPARSE_BSS=n
+CONFIG_TPMFRONT=y
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sc-0004CX-IO; Mon, 10 Dec 2012 19:56:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Ay-LU
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.83:31134] by server-3.bemta-5.messagelabs.com id
	37/52-25441-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355169373!28691690!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17480 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-13.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <099fba8a0000fdd1@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fba8a0000fdd1 ;
	Mon, 10 Dec 2012 14:56:16 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uA007869; 
	Mon, 10 Dec 2012 14:56:09 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:36 -0500
Message-Id: <1355169347-25917-4-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 03/14] stubdom/vtpm: Support locality field
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vTPM protocol now contains a field allowing the locality of a
command to be specified; pass this to the TPM when processing a packet.
While the locality is not currently checked for validity, a binding
between locality and some distinguishing feature of the client domain
(such as the XSM label) will need to be defined in order to properly
support a multi-client vTPM.
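
The locality plumbing can be modelled in a few lines; this hypothetical sketch (a plain
variable stands in for tpmData.stany.flags.localityModifier) mirrors the two pieces of
the patch, the -1 "leave unchanged" convention in tpm_handle_command() and the
locality-8 gate on the vendor subcapability in set_vendor():

```c
#include <assert.h>

static int locality_modifier = 0;   /* stand-in for tpmData.stany.flags */

/* As in tpm_handle_command(): a locality of -1 leaves the current
 * modifier untouched, anything else records the command's locality. */
static void set_locality(int locality)
{
    if (locality != -1)
        locality_modifier = locality;
}

/* As in set_vendor(): the privileged vendor subcapability is only
 * accepted at locality 8, which no guest-visible path can supply. */
static int set_vendor_allowed(void)
{
    return locality_modifier == 8;
}
```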

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 stubdom/Makefile            |  1 +
 stubdom/vtpm-locality.patch | 50 +++++++++++++++++++++++++++++++++++++++++++++
 stubdom/vtpm/vtpm.c         |  2 +-
 3 files changed, 52 insertions(+), 1 deletion(-)
 create mode 100644 stubdom/vtpm-locality.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 12f8a6f..fb0e5f9 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -208,6 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	mv tpm_emulator-$(TPMEMU_VERSION) $@
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	patch -d $@ -p1 < vtpm-bufsize.patch
+	patch -d $@ -p1 < vtpm-locality.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-locality.patch b/stubdom/vtpm-locality.patch
new file mode 100644
index 0000000..8ab7dea
--- /dev/null
+++ b/stubdom/vtpm-locality.patch
@@ -0,0 +1,50 @@
+diff --git a/tpm/tpm_capability.c b/tpm/tpm_capability.c
+index 60bbb90..f8f7f0f 100644
+--- a/tpm/tpm_capability.c
++++ b/tpm/tpm_capability.c
+@@ -949,6 +949,8 @@ static TPM_RESULT set_vendor(UINT32 subCap, BYTE *setValue,
+                              UINT32 setValueSize, BOOL ownerAuth,
+                              BOOL deactivated, BOOL disabled)
+ {
++  if (tpmData.stany.flags.localityModifier != 8)
++    return TPM_BAD_PARAMETER;
+   /* set the capability area with the specified data, on failure
+      deactivate the TPM */
+   switch (subCap) {
+diff --git a/tpm/tpm_cmd_handler.c b/tpm/tpm_cmd_handler.c
+index 288d1ce..9e1cfb4 100644
+--- a/tpm/tpm_cmd_handler.c
++++ b/tpm/tpm_cmd_handler.c
+@@ -4132,7 +4132,7 @@ void tpm_emulator_shutdown()
+   tpm_extern_release();
+ }
+ 
+-int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size)
++int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size, int locality)
+ {
+   TPM_REQUEST req;
+   TPM_RESPONSE rsp;
+@@ -4140,7 +4140,9 @@ int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint3
+   UINT32 len;
+   BOOL free_out;
+ 
+-  debug("tpm_handle_command()");
++  debug("tpm_handle_command(%d)", locality);
++  if (locality != -1)
++    tpmData.stany.flags.localityModifier = locality;
+ 
+   /* we need the whole packet at once, otherwise unmarshalling will fail */
+   if (tpm_unmarshal_TPM_REQUEST((uint8_t**)&in, &in_size, &req) != 0) {
+diff --git a/tpm/tpm_emulator.h b/tpm/tpm_emulator.h
+index eed749e..4c228bd 100644
+--- a/tpm/tpm_emulator.h
++++ b/tpm/tpm_emulator.h
+@@ -59,7 +59,7 @@ void tpm_emulator_shutdown(void);
+  * its usage. In case of an error, all internally allocated memory
+  * is released and the the state of out and out_size is unspecified.
+  */ 
+-int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size);
++int tpm_handle_command(const uint8_t *in, uint32_t in_size, uint8_t **out, uint32_t *out_size, int locality);
+ 
+ #endif /* _TPM_EMULATOR_H_ */
+ 
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index 71aef78..eb7912f 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -183,7 +183,7 @@ static void main_loop(void) {
          }
          /* If not disabled, do the command */
          else {
-            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
+            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len, tpmcmd->locality)) != 0) {
                error("tpm_handle_command() failed");
                create_error_response(tpmcmd, TPM_FAIL);
             }
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9Sc-0004Ck-Vp; Mon, 10 Dec 2012 19:56:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9SZ-0004Az-Jk
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:15 +0000
Received: from [85.158.139.211:22276] by server-9.bemta-5.messagelabs.com id
	6D/AA-10690-E5E36C05; Mon, 10 Dec 2012 19:56:14 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355169373!15646608!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17606 invoked from network); 10 Dec 2012 19:56:14 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-13.tower-206.messagelabs.com with SMTP;
	10 Dec 2012 19:56:14 -0000
X-TM-IMSS-Message-ID: <67fe2d0b00080230@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 67fe2d0b00080230 ;
	Mon, 10 Dec 2012 14:55:43 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uF007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:41 -0500
Message-Id: <1355169347-25917-9-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 08/14] stubdom/vtpm: support multiple backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/vtpm/vtpm.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index aaf1a24..d576c8f 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -141,8 +141,6 @@ int check_ordinal(tpmcmd_t* tpmcmd) {
 
 static void main_loop(void) {
    tpmcmd_t* tpmcmd = NULL;
-   domid_t domid;		/* Domid of frontend */
-   unsigned int handle;	/* handle of frontend */
    int res = -1;
 
    info("VTPM Initializing\n");
@@ -162,15 +160,7 @@ static void main_loop(void) {
       goto abort_postpcrs;
    }
 
-   /* Wait for the frontend domain to connect */
-   info("Waiting for frontend domain to connect..");
-   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
-      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
-   } else {
-      error("Unable to attach to a frontend");
-   }
-
-   tpmcmd = tpmback_req(domid, handle);
+   tpmcmd = tpmback_req_any();
    while(tpmcmd) {
       /* Handle the request */
       if(tpmcmd->req_len) {
@@ -194,7 +184,7 @@ static void main_loop(void) {
       tpmback_resp(tpmcmd);
 
       /* Wait for the next request */
-      tpmcmd = tpmback_req(domid, handle);
+      tpmcmd = tpmback_req_any();
 
    }
 
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 19:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 19:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9So-0004Ma-AG; Mon, 10 Dec 2012 19:56:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9Sm-0004L2-SP
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 19:56:29 +0000
Received: from [85.158.139.83:15610] by server-8.bemta-5.messagelabs.com id
	29/EE-15003-C6E36C05; Mon, 10 Dec 2012 19:56:28 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355169372!27794577!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17381 invoked from network); 10 Dec 2012 19:56:13 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-10.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 19:56:13 -0000
X-TM-IMSS-Message-ID: <099fbfd70000fdd6@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 099fbfd70000fdd6 ;
	Mon, 10 Dec 2012 14:56:18 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAJu8uE007869; 
	Mon, 10 Dec 2012 14:56:10 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 14:55:40 -0500
Message-Id: <1355169347-25917-8-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 07/14] stubdom/vtpm: Add locality-5 PCRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This defines new PCRs (24-31) that are restricted to locality 5, which
can be used by an agent outside a domain to record information about its
measurements and activity. These PCRs cannot be initialized from the
hardware TPM (since most hardware TPMs do not define PCR 24+).

This definition may need to be changed in the future, as the TCG's VTPM
working group is working to define the meaning of PCRs above 23 on a
vTPM; the existing PC-client specification allows these PCRs to be
implementation-defined.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 stubdom/Makefile                  |  1 +
 stubdom/vtpm-locality5-pcrs.patch | 33 +++++++++++++++++++++++++++++++++
 stubdom/vtpm/README               | 15 ++++++++++++++-
 stubdom/vtpm/vtpm.c               |  8 ++++----
 stubdom/vtpm/vtpm_pcrs.h          |  6 +++---
 5 files changed, 55 insertions(+), 8 deletions(-)
 create mode 100644 stubdom/vtpm-locality5-pcrs.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 95e10f3..a657fd2 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -209,6 +209,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
 	patch -d $@ -p1 < vtpm-bufsize.patch
 	patch -d $@ -p1 < vtpm-locality.patch
+	patch -d $@ -p1 < vtpm-locality5-pcrs.patch
 	mkdir $@/build
 	cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-locality5-pcrs.patch b/stubdom/vtpm-locality5-pcrs.patch
new file mode 100644
index 0000000..f697035
--- /dev/null
+++ b/stubdom/vtpm-locality5-pcrs.patch
@@ -0,0 +1,33 @@
+diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
+index 50c9697..d8cac09 100644
+--- a/tpm/tpm_data.c
++++ b/tpm/tpm_data.c
+@@ -151,9 +151,12 @@ void tpm_init_data(void)
+     init_pcr_attr(22, TRUE, 0x04, 0x04);
+     init_pcr_attr(23, TRUE, 0x1f, 0x1f);
+   }
+-  for (i = 24; i < TPM_NUM_PCR; i++) {
+-    init_pcr_attr(i, TRUE, 0x00, 0x00);
+-  }
++  for (i = 24; i < 28 && i < TPM_NUM_PCR; i++)
++    init_pcr_attr(i, FALSE, 0x00, 0x20);
++  for (i = 28; i < 32 && i < TPM_NUM_PCR; i++)
++    init_pcr_attr(i, TRUE, 0x20, 0x20);
++  for (i = 32; i < TPM_NUM_PCR; i++)
++    init_pcr_attr(i, FALSE, 0x00, 0xC0);
+   if (tpmConf & TPM_CONF_GENERATE_EK) {
+     /* generate a new endorsement key */
+     tpm_rsa_generate_key(&tpmData.permanent.data.endorsementKey, 2048);
+diff --git a/tpm/tpm_structures.h b/tpm/tpm_structures.h
+index f746c05..08cef1e 100644
+--- a/tpm/tpm_structures.h
++++ b/tpm/tpm_structures.h
+@@ -676,7 +676,7 @@ typedef struct tdTPM_CMK_MA_APPROVAL {
+ /*
+  * Number of PCRs of the TPM (must be a multiple of eight)
+  */
+-#define TPM_NUM_PCR 24
++#define TPM_NUM_PCR 32
+ 
+ /*
+  * TPM_PCR_SELECTION ([TPM_Part2], Section 8.1)
diff --git a/stubdom/vtpm/README b/stubdom/vtpm/README
index 11bdacb..b0bd8f9 100644
--- a/stubdom/vtpm/README
+++ b/stubdom/vtpm/README
@@ -1,7 +1,7 @@
 Copyright (c) 2010-2012 United States Government, as represented by
 the Secretary of Defense.  All rights reserved.
 November 12 2012
-Authors: Matthew Fioravante (JHUAPL),
+Authors: Matthew Fioravante (JHUAPL), Daniel De Graaf (NSA)
 
 This document describes the operation and command line interface
 of vtpm-stubdom. See docs/misc/vtpm.txt for details on the
@@ -68,6 +68,19 @@ hwinitpcr=<PCRSPEC>: Initialize the virtual Platform Configuration Registers
 	will copy pcrs 5, 12, 13, 14, 15, and 16.
 
 ------------------------------
+VIRTUAL-TPM SPECIFIC FEATURES
+------------------------------
+
+The virtual TPM emulator provides some extensions to the TPM specification that
+are useful in a virtualized environment. The features added to the emulator are:
+
+ * Support for specifying localities beyond the standard 0-4
+ * Extended PCRs 24-31 that can only be extended by locality 5
+
+Locality 5 is intended for use by a measurement agent running outside the
+primary domain using the VM.
+
+------------------------------
 REFERENCES
 ------------------------------
 
diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
index eb7912f..aaf1a24 100644
--- a/stubdom/vtpm/vtpm.c
+++ b/stubdom/vtpm/vtpm.c
@@ -253,18 +253,18 @@ int parse_cmd_line(int argc, char** argv)
                opt_args.hwinitpcrs = VTPM_PCRNONE;
             } else if(sscanf(pch, "%u", &v1) == 1) {
                //Set one
-               if(v1 >= TPM_NUM_PCR) {
+               if(v1 >= VTPM_NUMPCRS) {
                   error("hwinitpcr error: Invalid PCR index %u", v1);
                   return -1;
                }
                opt_args.hwinitpcrs |= (1 << v1);
             } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
                //Set range
-               if(v1 >= TPM_NUM_PCR) {
+               if(v1 >= VTPM_NUMPCRS) {
                   error("hwinitpcr error: Invalid PCR index %u", v1);
                   return -1;
                }
-               if(v2 >= TPM_NUM_PCR) {
+               if(v2 >= VTPM_NUMPCRS) {
                   error("hwinitpcr error: Invalid PCR index %u", v1);
                   return -1;
                }
@@ -312,7 +312,7 @@ int parse_cmd_line(int argc, char** argv)
 
       pcrstr[0] = '\0';
       info("The following PCRs will be initialized with values from the hardware TPM:");
-      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
+      for(unsigned int i = 0; i < VTPM_NUMPCRS; ++i) {
          if(opt_args.hwinitpcrs & (1 << i)) {
             ptr += sprintf(ptr, "%u, ", i);
          }
diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
index 11835f9..bd9068c 100644
--- a/stubdom/vtpm/vtpm_pcrs.h
+++ b/stubdom/vtpm/vtpm_pcrs.h
@@ -40,11 +40,11 @@
 #define VTPM_PCR22 1 << 22
 #define VTPM_PCR23 1 << 23
 
-#define VTPM_PCRALL (1 << TPM_NUM_PCR) - 1
-#define VTPM_PCRNONE 0
-
 #define VTPM_NUMPCRS 24
 
+#define VTPM_PCRALL ((1 << VTPM_NUMPCRS) - 1)
+#define VTPM_PCRNONE 0
+
 struct tpmfront_dev;
 
 TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
-- 
1.7.11.7



From xen-devel-bounces@lists.xen.org Mon Dec 10 20:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 20:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9WL-000635-WF; Mon, 10 Dec 2012 20:00:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9WL-00062k-89
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 20:00:09 +0000
Received: from [85.158.139.83:46327] by server-11.bemta-5.messagelabs.com id
	81/71-31624-84F36C05; Mon, 10 Dec 2012 20:00:08 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355169606!26525558!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1699 invoked from network); 10 Dec 2012 20:00:06 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 20:00:06 -0000
X-TM-IMSS-Message-ID: <09a34f790000ff44@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 09a34f790000ff44 ;
	Mon, 10 Dec 2012 15:00:11 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAK03eL008812; 
	Mon, 10 Dec 2012 15:00:03 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 15:00:03 -0500
Message-Id: <1355169603-26991-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] drivers/tpm-xen: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the vTPM shared page ABI from a copy of the Xen network
interface to a single-page interface that better reflects the expected
behavior of a TPM: only a single request packet can be sent at any given
time, and every packet sent generates a single response packet. This
protocol change should also increase efficiency as it avoids mapping and
unmapping grants when possible.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/char/tpm/xen-tpmfront_if.c | 195 +++++++------------------------------
 include/xen/interface/io/tpmif.h   |  42 ++------
 2 files changed, 42 insertions(+), 195 deletions(-)

diff --git a/drivers/char/tpm/xen-tpmfront_if.c b/drivers/char/tpm/xen-tpmfront_if.c
index ba7fad8..374ad1b 100644
--- a/drivers/char/tpm/xen-tpmfront_if.c
+++ b/drivers/char/tpm/xen-tpmfront_if.c
@@ -53,7 +53,7 @@
 struct tpm_private {
 	struct tpm_chip *chip;
 
-	struct tpmif_tx_interface *tx;
+	struct vtpm_shared_page *page;
 	atomic_t refcnt;
 	unsigned int evtchn;
 	u8 is_connected;
@@ -61,8 +61,6 @@ struct tpm_private {
 
 	spinlock_t tx_lock;
 
-	struct tx_buffer *tx_buffers[TPMIF_TX_RING_SIZE];
-
 	atomic_t tx_busy;
 	void *tx_remember;
 
@@ -73,15 +71,7 @@ struct tpm_private {
 	int ring_ref;
 };
 
-struct tx_buffer {
-	unsigned int size;	/* available space in data */
-	unsigned int len;	/* used space in data */
-	unsigned char *data;	/* pointer to a page */
-};
-
-
 /* locally visible variables */
-static grant_ref_t gref_head;
 static struct tpm_private *my_priv;
 
 /* local function prototypes */
@@ -92,8 +82,6 @@ static int tpmif_connect(struct xenbus_device *dev,
 		struct tpm_private *tp,
 		domid_t domid);
 static DECLARE_TASKLET(tpmif_rx_tasklet, tpmif_rx_action, 0);
-static int tpmif_allocate_tx_buffers(struct tpm_private *tp);
-static void tpmif_free_tx_buffers(struct tpm_private *tp);
 static void tpmif_set_connected_state(struct tpm_private *tp,
 		u8 newstate);
 static int tpm_xmit(struct tpm_private *tp,
@@ -101,52 +89,6 @@ static int tpm_xmit(struct tpm_private *tp,
 		void *remember);
 static void destroy_tpmring(struct tpm_private *tp);
 
-static inline int
-tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
-		int isuserbuffer)
-{
-	int copied = len;
-
-	if (len > txb->size)
-		copied = txb->size;
-	if (isuserbuffer) {
-		if (copy_from_user(txb->data, src, copied))
-			return -EFAULT;
-	} else {
-		memcpy(txb->data, src, copied);
-	}
-	txb->len = len;
-	return copied;
-}
-
-static inline struct tx_buffer *tx_buffer_alloc(void)
-{
-	struct tx_buffer *txb;
-
-	txb = kzalloc(sizeof(struct tx_buffer), GFP_KERNEL);
-	if (!txb)
-		return NULL;
-
-	txb->len = 0;
-	txb->size = PAGE_SIZE;
-	txb->data = (unsigned char *)__get_free_page(GFP_KERNEL);
-	if (txb->data == NULL) {
-		kfree(txb);
-		txb = NULL;
-	}
-
-	return txb;
-}
-
-
-static inline void tx_buffer_free(struct tx_buffer *txb)
-{
-	if (txb) {
-		free_page((long)txb->data);
-		kfree(txb);
-	}
-}
-
 /**************************************************************
   Utility function for the tpm_private structure
  **************************************************************/
@@ -162,15 +104,12 @@ static void tpm_private_put(void)
 	if (!atomic_dec_and_test(&my_priv->refcnt))
 		return;
 
-	tpmif_free_tx_buffers(my_priv);
 	kfree(my_priv);
 	my_priv = NULL;
 }
 
 static struct tpm_private *tpm_private_get(void)
 {
-	int err;
-
 	if (my_priv) {
 		atomic_inc(&my_priv->refcnt);
 		return my_priv;
@@ -181,9 +120,6 @@ static struct tpm_private *tpm_private_get(void)
 		return NULL;
 
 	tpm_private_init(my_priv);
-	err = tpmif_allocate_tx_buffers(my_priv);
-	if (err < 0)
-		tpm_private_put();
 
 	return my_priv;
 }
@@ -218,22 +154,22 @@ int vtpm_vd_send(struct tpm_private *tp,
 static int setup_tpmring(struct xenbus_device *dev,
 		struct tpm_private *tp)
 {
-	struct tpmif_tx_interface *sring;
+	struct vtpm_shared_page *sring;
 	int err;
 
 	tp->ring_ref = GRANT_INVALID_REF;
 
-	sring = (void *)__get_free_page(GFP_KERNEL);
+	sring = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
 	if (!sring) {
 		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
 		return -ENOMEM;
 	}
-	tp->tx = sring;
+	tp->page = sring;
 
-	err = xenbus_grant_ring(dev, virt_to_mfn(tp->tx));
+	err = xenbus_grant_ring(dev, virt_to_mfn(tp->page));
 	if (err < 0) {
 		free_page((unsigned long)sring);
-		tp->tx = NULL;
+		tp->page = NULL;
 		xenbus_dev_fatal(dev, err, "allocating grant reference");
 		goto fail;
 	}
@@ -256,9 +192,9 @@ static void destroy_tpmring(struct tpm_private *tp)
 
 	if (tp->ring_ref != GRANT_INVALID_REF) {
 		gnttab_end_foreign_access(tp->ring_ref,
-				0, (unsigned long)tp->tx);
+				0, (unsigned long)tp->page);
 		tp->ring_ref = GRANT_INVALID_REF;
-		tp->tx = NULL;
+		tp->page = NULL;
 	}
 
 	if (tp->evtchn)
@@ -457,10 +393,10 @@ static int tpmif_connect(struct xenbus_device *dev,
 }
 
 static const struct xenbus_device_id tpmfront_ids[] = {
-	{ "vtpm" },
+	{ "vtpm2" },
 	{ "" }
 };
-MODULE_ALIAS("xen:vtpm");
+MODULE_ALIAS("xen:vtpm2");
 
 static DEFINE_XENBUS_DRIVER(tpmfront, ,
 		.probe = tpmfront_probe,
@@ -470,62 +406,30 @@ static DEFINE_XENBUS_DRIVER(tpmfront, ,
 		.suspend = tpmfront_suspend,
 		);
 
-static int tpmif_allocate_tx_buffers(struct tpm_private *tp)
-{
-	unsigned int i;
-
-	for (i = 0; i < TPMIF_TX_RING_SIZE; i++) {
-		tp->tx_buffers[i] = tx_buffer_alloc();
-		if (!tp->tx_buffers[i]) {
-			tpmif_free_tx_buffers(tp);
-			return -ENOMEM;
-		}
-	}
-	return 0;
-}
-
-static void tpmif_free_tx_buffers(struct tpm_private *tp)
-{
-	unsigned int i;
-
-	for (i = 0; i < TPMIF_TX_RING_SIZE; i++)
-		tx_buffer_free(tp->tx_buffers[i]);
-}
-
 static void tpmif_rx_action(unsigned long priv)
 {
 	struct tpm_private *tp = (struct tpm_private *)priv;
-	int i = 0;
 	unsigned int received;
 	unsigned int offset = 0;
 	u8 *buffer;
-	struct tpmif_tx_request *tx = &tp->tx->ring[i].req;
+	struct vtpm_shared_page *shr = tp->page;
 
 	atomic_set(&tp->tx_busy, 0);
 	wake_up_interruptible(&tp->wait_q);
 
-	received = tx->size;
+	offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+	received = shr->length;
+
+	if (offset > PAGE_SIZE || offset + received > PAGE_SIZE) {
+		printk(KERN_WARNING "tpmif_rx_action packet too large\n");
+		return;
+	}
 
 	buffer = kmalloc(received, GFP_ATOMIC);
 	if (!buffer)
 		return;
 
-	for (i = 0; i < TPMIF_TX_RING_SIZE && offset < received; i++) {
-		struct tx_buffer *txb = tp->tx_buffers[i];
-		struct tpmif_tx_request *tx;
-		unsigned int tocopy;
-
-		tx = &tp->tx->ring[i].req;
-		tocopy = tx->size;
-		if (tocopy > PAGE_SIZE)
-			tocopy = PAGE_SIZE;
-
-		memcpy(&buffer[offset], txb->data, tocopy);
-
-		gnttab_release_grant_reference(&gref_head, tx->ref);
-
-		offset += tocopy;
-	}
+	memcpy(buffer, offset + (u8*)shr, received);
 
 	vtpm_vd_recv(tp->chip, buffer, received, tp->tx_remember);
 	kfree(buffer);
@@ -550,8 +454,7 @@ static int tpm_xmit(struct tpm_private *tp,
 		const u8 *buf, size_t count, int isuserbuffer,
 		void *remember)
 {
-	struct tpmif_tx_request *tx;
-	int i;
+	struct vtpm_shared_page *shr;
 	unsigned int offset = 0;
 
 	spin_lock_irq(&tp->tx_lock);
@@ -566,48 +469,23 @@ static int tpm_xmit(struct tpm_private *tp,
 		return -EIO;
 	}
 
-	for (i = 0; count > 0 && i < TPMIF_TX_RING_SIZE; i++) {
-		struct tx_buffer *txb = tp->tx_buffers[i];
-		int copied;
-
-		if (!txb) {
-			spin_unlock_irq(&tp->tx_lock);
-			return -EFAULT;
-		}
-
-		copied = tx_buffer_copy(txb, &buf[offset], count,
-				isuserbuffer);
-		if (copied < 0) {
-			/* An error occurred */
-			spin_unlock_irq(&tp->tx_lock);
-			return copied;
-		}
-		count -= copied;
-		offset += copied;
-
-		tx = &tp->tx->ring[i].req;
-		tx->addr = virt_to_machine(txb->data).maddr;
-		tx->size = txb->len;
-		tx->unused = 0;
-
-		/* Get the granttable reference for this page. */
-		tx->ref = gnttab_claim_grant_reference(&gref_head);
-		if (tx->ref == -ENOSPC) {
-			spin_unlock_irq(&tp->tx_lock);
-			return -ENOSPC;
-		}
-		gnttab_grant_foreign_access_ref(tx->ref,
-				tp->backend_id,
-				virt_to_mfn(txb->data),
-				0 /*RW*/);
-		wmb();
-	}
+	shr = tp->page;
+	offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+
+	if (offset > PAGE_SIZE)
+		return -EIO;
+
+	if (offset + count > PAGE_SIZE)
+		count = PAGE_SIZE - offset;
+
+	memcpy(offset + (u8*)shr, buf, count);
+	shr->length = count;
+	barrier();
+	shr->state = 1;
 
 	atomic_set(&tp->tx_busy, 1);
 	tp->tx_remember = remember;
 
-	mb();
-
 	notify_remote_via_evtchn(tp->evtchn);
 
 	spin_unlock_irq(&tp->tx_lock);
@@ -667,12 +545,6 @@ static int __init tpmif_init(void)
 	if (!tp)
 		return -ENOMEM;
 
-	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
-				&gref_head) < 0) {
-		tpm_private_put();
-		return -EFAULT;
-	}
-
 	return xenbus_register_frontend(&tpmfront_driver);
 }
 module_init(tpmif_init);
@@ -680,7 +552,6 @@ module_init(tpmif_init);
 static void __exit tpmif_exit(void)
 {
 	xenbus_unregister_driver(&tpmfront_driver);
-	gnttab_free_grant_references(gref_head);
 	tpm_private_put();
 }
 module_exit(tpmif_exit);
diff --git a/include/xen/interface/io/tpmif.h b/include/xen/interface/io/tpmif.h
index c9e7294..b55ac56 100644
--- a/include/xen/interface/io/tpmif.h
+++ b/include/xen/interface/io/tpmif.h
@@ -1,7 +1,7 @@
 /******************************************************************************
  * tpmif.h
  *
- * TPM I/O interface for Xen guest OSes.
+ * TPM I/O interface for Xen guest OSes, v2
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to
@@ -21,45 +21,21 @@
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  *
- * Copyright (c) 2005, IBM Corporation
- *
- * Author: Stefan Berger, stefanb@us.ibm.com
- * Grant table support: Mahadevan Gomathisankaran
- *
- * This code has been derived from tools/libxc/xen/io/netif.h
- *
- * Copyright (c) 2003-2004, Keir Fraser
  */
 
 #ifndef __XEN_PUBLIC_IO_TPMIF_H__
 #define __XEN_PUBLIC_IO_TPMIF_H__
 
-#include "../grant_table.h"
-
-struct tpmif_tx_request {
-	unsigned long addr;   /* Machine address of packet.   */
-	grant_ref_t ref;      /* grant table access reference */
-	uint16_t unused;
-	uint16_t size;        /* Packet size in bytes.        */
-};
-struct tpmif_tx_request;
+struct vtpm_shared_page {
+	uint32_t length;         /* request/response length in bytes */
 
-/*
- * The TPMIF_TX_RING_SIZE defines the number of pages the
- * front-end and backend can exchange (= size of array).
- */
-#define TPMIF_TX_RING_SIZE 1
-
-/* This structure must fit in a memory page. */
-
-struct tpmif_ring {
-	struct tpmif_tx_request req;
-};
-struct tpmif_ring;
+	uint8_t state;           /* 0 - response ready / idle
+                              * 1 - request ready / working */
+	uint8_t locality;        /* for the current request */
+	uint8_t pad;
 
-struct tpmif_tx_interface {
-	struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
+	uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
+	uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
 };
-struct tpmif_tx_interface;
 
 #endif
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 20:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 20:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ti9WL-000635-WF; Mon, 10 Dec 2012 20:00:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Ti9WL-00062k-89
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 20:00:09 +0000
Received: from [85.158.139.83:46327] by server-11.bemta-5.messagelabs.com id
	81/71-31624-84F36C05; Mon, 10 Dec 2012 20:00:08 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355169606!26525558!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1699 invoked from network); 10 Dec 2012 20:00:06 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-182.messagelabs.com with SMTP;
	10 Dec 2012 20:00:06 -0000
X-TM-IMSS-Message-ID: <09a34f790000ff44@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 09a34f790000ff44 ;
	Mon, 10 Dec 2012 15:00:11 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBAK03eL008812; 
	Mon, 10 Dec 2012 15:00:03 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Mon, 10 Dec 2012 15:00:03 -0500
Message-Id: <1355169603-26991-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] drivers/tpm-xen: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the vTPM shared page ABI from a copy of the Xen network
interface to a single-page interface that better reflects the expected
behavior of a TPM: only a single request packet can be sent at any given
time, and every packet sent generates a single response packet. This
protocol change should also increase efficiency as it avoids mapping and
unmapping grants when possible.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
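[Note, not part of the patch: the header/offset arithmetic the new tpm_xmit() and tpmif_rx_action() share can be exercised standalone. This is a rough sketch of that logic only; DEMO_PAGE_SIZE, vtpm_data_offset() and vtpm_pack_request() are illustrative names that do not exist in the patch, and PAGE_SIZE is fixed at 4096 for the demo.]

```c
/* Standalone sketch of the single-page vTPM packing logic introduced by
 * this patch.  Helper names are hypothetical; only the offset computation
 * and the clamping mirror the patched tpm_xmit(). */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DEMO_PAGE_SIZE 4096    /* stand-in for PAGE_SIZE */

struct vtpm_shared_page {
	uint32_t length;         /* request/response length in bytes */
	uint8_t state;           /* 0 - response ready / idle
	                          * 1 - request ready / working */
	uint8_t locality;        /* for the current request */
	uint8_t pad;
	uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
	uint32_t extra_pages[];  /* grant IDs; nr_extra_pages entries */
};

/* Payload starts after the header plus one 32-bit grant ID per extra page,
 * exactly as computed in tpm_xmit() and tpmif_rx_action(). */
static size_t vtpm_data_offset(const struct vtpm_shared_page *shr)
{
	return sizeof(*shr) + 4 * (size_t)shr->nr_extra_pages;
}

/* Copy a request into the shared page, clamping to the space left after
 * the header.  Returns bytes copied, or -1 if the header itself no longer
 * fits in the page (the -EIO case in the patch). */
static long vtpm_pack_request(uint8_t *page, const uint8_t *buf, size_t count)
{
	struct vtpm_shared_page *shr = (struct vtpm_shared_page *)page;
	size_t offset = vtpm_data_offset(shr);

	if (offset > DEMO_PAGE_SIZE)
		return -1;
	if (offset + count > DEMO_PAGE_SIZE)
		count = DEMO_PAGE_SIZE - offset;

	memcpy(page + offset, buf, count);
	shr->length = (uint32_t)count;
	shr->state = 1;          /* request ready */
	return (long)count;
}
```

With nr_extra_pages == 0 the payload starts right after the 8-byte header, so a single page carries requests up to PAGE_SIZE - 8 bytes without any grant mapping.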
 drivers/char/tpm/xen-tpmfront_if.c | 195 +++++++------------------------------
 include/xen/interface/io/tpmif.h   |  42 ++------
 2 files changed, 42 insertions(+), 195 deletions(-)

diff --git a/drivers/char/tpm/xen-tpmfront_if.c b/drivers/char/tpm/xen-tpmfront_if.c
index ba7fad8..374ad1b 100644
--- a/drivers/char/tpm/xen-tpmfront_if.c
+++ b/drivers/char/tpm/xen-tpmfront_if.c
@@ -53,7 +53,7 @@
 struct tpm_private {
 	struct tpm_chip *chip;
 
-	struct tpmif_tx_interface *tx;
+	struct vtpm_shared_page *page;
 	atomic_t refcnt;
 	unsigned int evtchn;
 	u8 is_connected;
@@ -61,8 +61,6 @@ struct tpm_private {
 
 	spinlock_t tx_lock;
 
-	struct tx_buffer *tx_buffers[TPMIF_TX_RING_SIZE];
-
 	atomic_t tx_busy;
 	void *tx_remember;
 
@@ -73,15 +71,7 @@ struct tpm_private {
 	int ring_ref;
 };
 
-struct tx_buffer {
-	unsigned int size;	/* available space in data */
-	unsigned int len;	/* used space in data */
-	unsigned char *data;	/* pointer to a page */
-};
-
-
 /* locally visible variables */
-static grant_ref_t gref_head;
 static struct tpm_private *my_priv;
 
 /* local function prototypes */
@@ -92,8 +82,6 @@ static int tpmif_connect(struct xenbus_device *dev,
 		struct tpm_private *tp,
 		domid_t domid);
 static DECLARE_TASKLET(tpmif_rx_tasklet, tpmif_rx_action, 0);
-static int tpmif_allocate_tx_buffers(struct tpm_private *tp);
-static void tpmif_free_tx_buffers(struct tpm_private *tp);
 static void tpmif_set_connected_state(struct tpm_private *tp,
 		u8 newstate);
 static int tpm_xmit(struct tpm_private *tp,
@@ -101,52 +89,6 @@ static int tpm_xmit(struct tpm_private *tp,
 		void *remember);
 static void destroy_tpmring(struct tpm_private *tp);
 
-static inline int
-tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
-		int isuserbuffer)
-{
-	int copied = len;
-
-	if (len > txb->size)
-		copied = txb->size;
-	if (isuserbuffer) {
-		if (copy_from_user(txb->data, src, copied))
-			return -EFAULT;
-	} else {
-		memcpy(txb->data, src, copied);
-	}
-	txb->len = len;
-	return copied;
-}
-
-static inline struct tx_buffer *tx_buffer_alloc(void)
-{
-	struct tx_buffer *txb;
-
-	txb = kzalloc(sizeof(struct tx_buffer), GFP_KERNEL);
-	if (!txb)
-		return NULL;
-
-	txb->len = 0;
-	txb->size = PAGE_SIZE;
-	txb->data = (unsigned char *)__get_free_page(GFP_KERNEL);
-	if (txb->data == NULL) {
-		kfree(txb);
-		txb = NULL;
-	}
-
-	return txb;
-}
-
-
-static inline void tx_buffer_free(struct tx_buffer *txb)
-{
-	if (txb) {
-		free_page((long)txb->data);
-		kfree(txb);
-	}
-}
-
 /**************************************************************
   Utility function for the tpm_private structure
  **************************************************************/
@@ -162,15 +104,12 @@ static void tpm_private_put(void)
 	if (!atomic_dec_and_test(&my_priv->refcnt))
 		return;
 
-	tpmif_free_tx_buffers(my_priv);
 	kfree(my_priv);
 	my_priv = NULL;
 }
 
 static struct tpm_private *tpm_private_get(void)
 {
-	int err;
-
 	if (my_priv) {
 		atomic_inc(&my_priv->refcnt);
 		return my_priv;
@@ -181,9 +120,6 @@ static struct tpm_private *tpm_private_get(void)
 		return NULL;
 
 	tpm_private_init(my_priv);
-	err = tpmif_allocate_tx_buffers(my_priv);
-	if (err < 0)
-		tpm_private_put();
 
 	return my_priv;
 }
@@ -218,22 +154,22 @@ int vtpm_vd_send(struct tpm_private *tp,
 static int setup_tpmring(struct xenbus_device *dev,
 		struct tpm_private *tp)
 {
-	struct tpmif_tx_interface *sring;
+	struct vtpm_shared_page *sring;
 	int err;
 
 	tp->ring_ref = GRANT_INVALID_REF;
 
-	sring = (void *)__get_free_page(GFP_KERNEL);
+	sring = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
 	if (!sring) {
 		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
 		return -ENOMEM;
 	}
-	tp->tx = sring;
+	tp->page = sring;
 
-	err = xenbus_grant_ring(dev, virt_to_mfn(tp->tx));
+	err = xenbus_grant_ring(dev, virt_to_mfn(tp->page));
 	if (err < 0) {
 		free_page((unsigned long)sring);
-		tp->tx = NULL;
+		tp->page = NULL;
 		xenbus_dev_fatal(dev, err, "allocating grant reference");
 		goto fail;
 	}
@@ -256,9 +192,9 @@ static void destroy_tpmring(struct tpm_private *tp)
 
 	if (tp->ring_ref != GRANT_INVALID_REF) {
 		gnttab_end_foreign_access(tp->ring_ref,
-				0, (unsigned long)tp->tx);
+				0, (unsigned long)tp->page);
 		tp->ring_ref = GRANT_INVALID_REF;
-		tp->tx = NULL;
+		tp->page = NULL;
 	}
 
 	if (tp->evtchn)
@@ -457,10 +393,10 @@ static int tpmif_connect(struct xenbus_device *dev,
 }
 
 static const struct xenbus_device_id tpmfront_ids[] = {
-	{ "vtpm" },
+	{ "vtpm2" },
 	{ "" }
 };
-MODULE_ALIAS("xen:vtpm");
+MODULE_ALIAS("xen:vtpm2");
 
 static DEFINE_XENBUS_DRIVER(tpmfront, ,
 		.probe = tpmfront_probe,
@@ -470,62 +406,30 @@ static DEFINE_XENBUS_DRIVER(tpmfront, ,
 		.suspend = tpmfront_suspend,
 		);
 
-static int tpmif_allocate_tx_buffers(struct tpm_private *tp)
-{
-	unsigned int i;
-
-	for (i = 0; i < TPMIF_TX_RING_SIZE; i++) {
-		tp->tx_buffers[i] = tx_buffer_alloc();
-		if (!tp->tx_buffers[i]) {
-			tpmif_free_tx_buffers(tp);
-			return -ENOMEM;
-		}
-	}
-	return 0;
-}
-
-static void tpmif_free_tx_buffers(struct tpm_private *tp)
-{
-	unsigned int i;
-
-	for (i = 0; i < TPMIF_TX_RING_SIZE; i++)
-		tx_buffer_free(tp->tx_buffers[i]);
-}
-
 static void tpmif_rx_action(unsigned long priv)
 {
 	struct tpm_private *tp = (struct tpm_private *)priv;
-	int i = 0;
 	unsigned int received;
 	unsigned int offset = 0;
 	u8 *buffer;
-	struct tpmif_tx_request *tx = &tp->tx->ring[i].req;
+	struct vtpm_shared_page *shr = tp->page;
 
 	atomic_set(&tp->tx_busy, 0);
 	wake_up_interruptible(&tp->wait_q);
 
-	received = tx->size;
+	offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+	received = shr->length;
+
+	if (offset > PAGE_SIZE || offset + received > PAGE_SIZE) {
+		printk(KERN_WARNING "tpmif_rx_action packet too large\n");
+		return;
+	}
 
 	buffer = kmalloc(received, GFP_ATOMIC);
 	if (!buffer)
 		return;
 
-	for (i = 0; i < TPMIF_TX_RING_SIZE && offset < received; i++) {
-		struct tx_buffer *txb = tp->tx_buffers[i];
-		struct tpmif_tx_request *tx;
-		unsigned int tocopy;
-
-		tx = &tp->tx->ring[i].req;
-		tocopy = tx->size;
-		if (tocopy > PAGE_SIZE)
-			tocopy = PAGE_SIZE;
-
-		memcpy(&buffer[offset], txb->data, tocopy);
-
-		gnttab_release_grant_reference(&gref_head, tx->ref);
-
-		offset += tocopy;
-	}
+	memcpy(buffer, offset + (u8*)shr, received);
 
 	vtpm_vd_recv(tp->chip, buffer, received, tp->tx_remember);
 	kfree(buffer);
@@ -550,8 +454,7 @@ static int tpm_xmit(struct tpm_private *tp,
 		const u8 *buf, size_t count, int isuserbuffer,
 		void *remember)
 {
-	struct tpmif_tx_request *tx;
-	int i;
+	struct vtpm_shared_page *shr;
 	unsigned int offset = 0;
 
 	spin_lock_irq(&tp->tx_lock);
@@ -566,48 +469,23 @@ static int tpm_xmit(struct tpm_private *tp,
 		return -EIO;
 	}
 
-	for (i = 0; count > 0 && i < TPMIF_TX_RING_SIZE; i++) {
-		struct tx_buffer *txb = tp->tx_buffers[i];
-		int copied;
-
-		if (!txb) {
-			spin_unlock_irq(&tp->tx_lock);
-			return -EFAULT;
-		}
-
-		copied = tx_buffer_copy(txb, &buf[offset], count,
-				isuserbuffer);
-		if (copied < 0) {
-			/* An error occurred */
-			spin_unlock_irq(&tp->tx_lock);
-			return copied;
-		}
-		count -= copied;
-		offset += copied;
-
-		tx = &tp->tx->ring[i].req;
-		tx->addr = virt_to_machine(txb->data).maddr;
-		tx->size = txb->len;
-		tx->unused = 0;
-
-		/* Get the granttable reference for this page. */
-		tx->ref = gnttab_claim_grant_reference(&gref_head);
-		if (tx->ref == -ENOSPC) {
-			spin_unlock_irq(&tp->tx_lock);
-			return -ENOSPC;
-		}
-		gnttab_grant_foreign_access_ref(tx->ref,
-				tp->backend_id,
-				virt_to_mfn(txb->data),
-				0 /*RW*/);
-		wmb();
-	}
+	shr = tp->page;
+	offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+
+	if (offset > PAGE_SIZE)
+		return -EIO;
+
+	if (offset + count > PAGE_SIZE)
+		count = PAGE_SIZE - offset;
+
+	memcpy(offset + (u8*)shr, buf, count);
+	shr->length = count;
+	barrier();
+	shr->state = 1;
 
 	atomic_set(&tp->tx_busy, 1);
 	tp->tx_remember = remember;
 
-	mb();
-
 	notify_remote_via_evtchn(tp->evtchn);
 
 	spin_unlock_irq(&tp->tx_lock);
@@ -667,12 +545,6 @@ static int __init tpmif_init(void)
 	if (!tp)
 		return -ENOMEM;
 
-	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
-				&gref_head) < 0) {
-		tpm_private_put();
-		return -EFAULT;
-	}
-
 	return xenbus_register_frontend(&tpmfront_driver);
 }
 module_init(tpmif_init);
@@ -680,7 +552,6 @@ module_init(tpmif_init);
 static void __exit tpmif_exit(void)
 {
 	xenbus_unregister_driver(&tpmfront_driver);
-	gnttab_free_grant_references(gref_head);
 	tpm_private_put();
 }
 module_exit(tpmif_exit);
diff --git a/include/xen/interface/io/tpmif.h b/include/xen/interface/io/tpmif.h
index c9e7294..b55ac56 100644
--- a/include/xen/interface/io/tpmif.h
+++ b/include/xen/interface/io/tpmif.h
@@ -1,7 +1,7 @@
 /******************************************************************************
  * tpmif.h
  *
- * TPM I/O interface for Xen guest OSes.
+ * TPM I/O interface for Xen guest OSes, v2
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to
@@ -21,45 +21,21 @@
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  *
- * Copyright (c) 2005, IBM Corporation
- *
- * Author: Stefan Berger, stefanb@us.ibm.com
- * Grant table support: Mahadevan Gomathisankaran
- *
- * This code has been derived from tools/libxc/xen/io/netif.h
- *
- * Copyright (c) 2003-2004, Keir Fraser
  */
 
 #ifndef __XEN_PUBLIC_IO_TPMIF_H__
 #define __XEN_PUBLIC_IO_TPMIF_H__
 
-#include "../grant_table.h"
-
-struct tpmif_tx_request {
-	unsigned long addr;   /* Machine address of packet.   */
-	grant_ref_t ref;      /* grant table access reference */
-	uint16_t unused;
-	uint16_t size;        /* Packet size in bytes.        */
-};
-struct tpmif_tx_request;
+struct vtpm_shared_page {
+	uint32_t length;         /* request/response length in bytes */
 
-/*
- * The TPMIF_TX_RING_SIZE defines the number of pages the
- * front-end and backend can exchange (= size of array).
- */
-#define TPMIF_TX_RING_SIZE 1
-
-/* This structure must fit in a memory page. */
-
-struct tpmif_ring {
-	struct tpmif_tx_request req;
-};
-struct tpmif_ring;
+	uint8_t state;           /* 0 - response ready / idle
+                              * 1 - request ready / working */
+	uint8_t locality;        /* for the current request */
+	uint8_t pad;
 
-struct tpmif_tx_interface {
-	struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
+	uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
+	uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
 };
-struct tpmif_tx_interface;
 
 #endif
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 21:20:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 21:20:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiAlq-000771-Hy; Mon, 10 Dec 2012 21:20:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TiAlo-00076w-Ni
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 21:20:12 +0000
Received: from [85.158.143.35:63308] by server-2.bemta-4.messagelabs.com id
	9B/16-30861-B0256C05; Mon, 10 Dec 2012 21:20:11 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355174409!13269233!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21168 invoked from network); 10 Dec 2012 21:20:10 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Dec 2012 21:20:10 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id CEFCF8407D;
	Mon, 10 Dec 2012 22:20:08 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id 1Lre7MxvGYis; Mon, 10 Dec 2012 22:20:08 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 761D38407A;
	Mon, 10 Dec 2012 22:20:08 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TiAli-0003sf-RB; Mon, 10 Dec 2012 22:20:06 +0100
Date: Mon, 10 Dec 2012 22:20:06 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121210212006.GA5833@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 12/14] mini-os/tpmback: add
 tpmback_get_peercontext
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel De Graaf, on Mon 10 Dec 2012 14:55:45 -0500, wrote:
> +int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size)

I don't know about peercontext, is it supposed to be a string, or just
data?  In the latter case, void* would be better.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 21:23:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 21:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiAoF-0007EC-3I; Mon, 10 Dec 2012 21:22:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TiAoD-0007E5-F8
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 21:22:41 +0000
Received: from [85.158.138.51:31302] by server-5.bemta-3.messagelabs.com id
	E9/9A-26311-0A256C05; Mon, 10 Dec 2012 21:22:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355174559!26517996!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6691 invoked from network); 10 Dec 2012 21:22:39 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-15.tower-174.messagelabs.com with SMTP;
	10 Dec 2012 21:22:39 -0000
X-TM-IMSS-Message-ID: <684d1b9e0008175e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 684d1b9e0008175e ;
	Mon, 10 Dec 2012 16:21:56 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBALMNZq015207; 
	Mon, 10 Dec 2012 16:22:23 -0500
Message-ID: <50C6528F.1040809@tycho.nsa.gov>
Date: Mon, 10 Dec 2012 16:22:23 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
	<20121210212006.GA5833@type.youpi.perso.aquilenet.fr>
In-Reply-To: <20121210212006.GA5833@type.youpi.perso.aquilenet.fr>
Subject: Re: [Xen-devel] [PATCH 12/14] mini-os/tpmback: add
	tpmback_get_peercontext
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/10/2012 04:20 PM, Samuel Thibault wrote:
> Daniel De Graaf, on Mon 10 Dec 2012 14:55:45 -0500, wrote:
>> +int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size)
>
> I don't know about peercontext, is it supposed to be a string, or just
> data?  In the latter case, void* would be better.
>
> Samuel

It is a string, for example "user_1:vm_r:domU_t" as used in #13

--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
	<20121210212006.GA5833@type.youpi.perso.aquilenet.fr>
In-Reply-To: <20121210212006.GA5833@type.youpi.perso.aquilenet.fr>
Subject: Re: [Xen-devel] [PATCH 12/14] mini-os/tpmback: add
	tpmback_get_peercontext
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/10/2012 04:20 PM, Samuel Thibault wrote:
> Daniel De Graaf, on Mon 10 Dec 2012 14:55:45 -0500, wrote:
>> +int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size)
>
> I don't know about peercontext, is it supposed to be a string, or just
> data?  In the latter case, void* would be better.
>
> Samuel

It is a string, for example "user_1:vm_r:domU_t" as used in #13

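[Editorial note: the label Daniel quotes follows the usual FLASK/SELinux user:role:type layout. A minimal shell sketch of how such a string splits into its fields; the label value is taken from the message above, everything else is illustrative and not code from the patch series:]

```shell
#!/bin/sh
# Illustrative only: split a FLASK-style "user:role:type" label such as the
# one evtchn_get_peercontext() returns via its ctx buffer.
label="user_1:vm_r:domU_t"

# read with IFS=: assigns each colon-separated field to a variable.
IFS=: read -r user role type <<EOF
$label
EOF

echo "user=$user role=$role type=$type"
```

Running it prints `user=user_1 role=vm_r type=domU_t`.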
-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 21:25:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 21:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiAqc-0007Nr-Q3; Mon, 10 Dec 2012 21:25:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TiAqb-0007Nj-SL
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 21:25:10 +0000
Received: from [85.158.143.99:42703] by server-2.bemta-4.messagelabs.com id
	E3/F7-30861-53356C05; Mon, 10 Dec 2012 21:25:09 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355174704!23622765!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23629 invoked from network); 10 Dec 2012 21:25:04 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 21:25:04 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id F1E6784072;
	Mon, 10 Dec 2012 22:24:04 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id o7vbT0X2TMLn; Mon, 10 Dec 2012 22:24:04 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id A82A584061;
	Mon, 10 Dec 2012 22:24:04 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TiApX-0003vm-2C; Mon, 10 Dec 2012 22:24:03 +0100
Date: Mon, 10 Dec 2012 22:24:03 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121210212403.GB5833@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-15-git-send-email-dgdegra@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355169347-25917-15-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 14/14] stubdom/Makefile: Fix gmp extract rule
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel De Graaf, on Mon 10 Dec 2012 14:55:47 -0500, wrote:
>  gmp-$(XEN_TARGET_ARCH): gmp-$(GMP_VERSION).tar.bz2 $(NEWLIB_STAMPFILE)
>  	tar xjf $<
> +	rm $@ -rf || :

I'm realizing...

>  	mv gmp-$(GMP_VERSION) $@
>  	#patch -d $@ -p0 < gmp.patch
>  	cd $@; CPPFLAGS="-isystem $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include $(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" CC=$(CC) $(GMPEXT) ./configure --disable-shared --enable-static --disable-fft --without-readline --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf

Why is configure invoked here?  In the other cases, we have used a
stampfile to separate the unpacking and configure stages, which avoids the
issue.  Is it because libgmp does not support building out of tree?  If so,
then I'm fine with the rm change, although "-rm $@ -rf" instead of "|| :"
would be more readable.
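[Editorial note: the stampfile pattern Samuel refers to can be sketched as a hypothetical Makefile fragment; the names below are illustrative, not the actual stubdom/Makefile rules. Unpacking touches a stamp, and configure lives in a separate rule depending on that stamp, so re-extraction and re-configuration stay decoupled:]

```make
# Illustrative stampfile pattern (hypothetical names, not the Xen tree):
# unpacking and configuring are separate rules joined by stamp files.
foo-extract-stamp: foo-$(FOO_VERSION).tar.bz2
	rm -rf foo-src
	tar xjf $<
	mv foo-$(FOO_VERSION) foo-src
	touch $@

foo-configure-stamp: foo-extract-stamp
	cd foo-src && ./configure --prefix=$(PREFIX)
	touch $@
```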

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 21:25:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 21:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiAr0-0007QH-7i; Mon, 10 Dec 2012 21:25:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TiAqy-0007Pu-BL
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 21:25:32 +0000
Received: from [85.158.139.83:52114] by server-6.bemta-5.messagelabs.com id
	67/CD-30498-B4356C05; Mon, 10 Dec 2012 21:25:31 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355174729!28699149!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3901 invoked from network); 10 Dec 2012 21:25:30 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 21:25:30 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 89B0A84072;
	Mon, 10 Dec 2012 22:25:29 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id q1MALdv6E7hX; Mon, 10 Dec 2012 22:25:29 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 6284984061;
	Mon, 10 Dec 2012 22:25:28 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TiAqt-0003wS-3Q; Mon, 10 Dec 2012 22:25:27 +0100
Date: Mon, 10 Dec 2012 22:25:27 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121210212527.GC5833@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-13-git-send-email-dgdegra@tycho.nsa.gov>
	<20121210212006.GA5833@type.youpi.perso.aquilenet.fr>
	<50C6528F.1040809@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C6528F.1040809@tycho.nsa.gov>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 12/14] mini-os/tpmback: add
 tpmback_get_peercontext
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel De Graaf, on Mon 10 Dec 2012 16:22:23 -0500, wrote:
> On 12/10/2012 04:20 PM, Samuel Thibault wrote:
> > Daniel De Graaf, on Mon 10 Dec 2012 14:55:45 -0500, wrote:
> >> +int evtchn_get_peercontext(evtchn_port_t local_port, char *ctx, int size)
> >
> > I don't know about peercontext, is it supposed to be a string, or just
> > data?  In the latter case, void* would be better.
> >
> > Samuel
>
> It is a string, for example "user_1:vm_r:domU_t" as used in #13

Ok, then

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 21:29:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 21:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiAuL-0007gX-Rt; Mon, 10 Dec 2012 21:29:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TiAuJ-0007gK-So
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 21:29:00 +0000
Received: from [85.158.138.51:49470] by server-11.bemta-3.messagelabs.com id
	9C/A4-19361-61456C05; Mon, 10 Dec 2012 21:28:54 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355174932!28210680!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4249 invoked from network); 10 Dec 2012 21:28:53 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-16.tower-174.messagelabs.com with SMTP;
	10 Dec 2012 21:28:53 -0000
X-TM-IMSS-Message-ID: <6852d3c6000818be@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 6852d3c6000818be ;
	Mon, 10 Dec 2012 16:28:10 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBALSb7P015746; 
	Mon, 10 Dec 2012 16:28:37 -0500
Message-ID: <50C65405.4070708@tycho.nsa.gov>
Date: Mon, 10 Dec 2012 16:28:37 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-15-git-send-email-dgdegra@tycho.nsa.gov>
	<20121210212403.GB5833@type.youpi.perso.aquilenet.fr>
In-Reply-To: <20121210212403.GB5833@type.youpi.perso.aquilenet.fr>
Subject: Re: [Xen-devel] [PATCH 14/14] stubdom/Makefile: Fix gmp extract rule
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/10/2012 04:24 PM, Samuel Thibault wrote:
> Daniel De Graaf, on Mon 10 Dec 2012 14:55:47 -0500, wrote:
>>  gmp-$(XEN_TARGET_ARCH): gmp-$(GMP_VERSION).tar.bz2 $(NEWLIB_STAMPFILE)
>>  	tar xjf $<
>> +	rm $@ -rf || :
>
> I'm realizing...
>
>>  	mv gmp-$(GMP_VERSION) $@
>>  	#patch -d $@ -p0 < gmp.patch
>>  	cd $@; CPPFLAGS="-isystem $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include $(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" CC=$(CC) $(GMPEXT) ./configure --disable-shared --enable-static --disable-fft --without-readline --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf
>
> Why is configure invoked here?  In the other cases, we have used a
> stampfile to separate the unpacking and configure stages, which avoids the
> issue.  Is it because libgmp does not support building out of tree?  If so,
> then I'm fine with the rm change, although "-rm $@ -rf" instead of "|| :"
> would be more readable.
>
> Samuel

I didn't look too closely at the reasons why the configure is done here; the
rm is needed regardless of where the configure is done, assuming the tarball
could be touched while the directory exists.

I chose "|| :" instead of -rm because it silences make's output on the failure,
which will be normal the first time it is run. If -rm is preferred, that's
trivial to fix.
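[Editorial note: the trade-off being discussed is that "|| :" suppresses the diagnostic entirely, while make's leading "-" prefix prints the error and then ignores it. A small illustrative sketch, not the real recipe:]

```shell
#!/bin/sh
# In a make recipe, a failing command aborts the target; "set -e" emulates
# that behaviour here.  Appending "|| :" forces the step to succeed, so the
# script (or recipe) carries on without any error message.
set -e

rmdir does-not-exist 2>/dev/null || :   # fails, but "|| :" masks the failure
echo "still running"

# The make alternative is prefixing the recipe line with "-":
#     -rm -rf $@
# make also continues, but prints an "Error ... (ignored)" diagnostic,
# which is the noise "|| :" avoids on the first run.
```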

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 10 21:34:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 21:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiAzE-0007tz-5L; Mon, 10 Dec 2012 21:34:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1TiAzC-0007tu-CC
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 21:34:02 +0000
Received: from [85.158.138.51:7306] by server-10.bemta-3.messagelabs.com id
	4B/37-19806-94556C05; Mon, 10 Dec 2012 21:34:01 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355175240!28278826!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2146 invoked from network); 10 Dec 2012 21:34:00 -0000
Received: from toccata.ens-lyon.fr (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Dec 2012 21:34:00 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 4F8A684076;
	Mon, 10 Dec 2012 22:34:00 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id rWN5TjdIQ3tE; Mon, 10 Dec 2012 22:34:00 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 0BE9884072;
	Mon, 10 Dec 2012 22:34:00 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1TiAz3-0007CU-PQ; Mon, 10 Dec 2012 22:33:53 +0100
Date: Mon, 10 Dec 2012 22:33:53 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20121210213353.GO5833@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-15-git-send-email-dgdegra@tycho.nsa.gov>
	<20121210212403.GB5833@type.youpi.perso.aquilenet.fr>
	<50C65405.4070708@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C65405.4070708@tycho.nsa.gov>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 14/14] stubdom/Makefile: Fix gmp extract rule
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel De Graaf, on Mon 10 Dec 2012 16:28:37 -0500, wrote:
> I chose "|| :" instead of -rm because it silences make's output on the failure,

Ok, right.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 22:01:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 22:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiBPk-0008VY-BW; Mon, 10 Dec 2012 22:01:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TiBPj-0008VT-8A
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 22:01:27 +0000
Received: from [85.158.143.99:50987] by server-3.bemta-4.messagelabs.com id
	42/1E-18211-6BB56C05; Mon, 10 Dec 2012 22:01:26 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355176883!27904826!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDEwODM1NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17453 invoked from network); 10 Dec 2012 22:01:25 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Dec 2012 22:01:25 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355176885; x=1386712885;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=qt3xT7EIsTGGPoEQOEXAAxBEjhkoRpTrzz4phaAjd5U=;
	b=dn731GzijuQA49lJ10XNH5Rvhb/ClCWjDj3qXUDsG8VQBAd6HoKPVsiM
	6eqar536LC3XAUr4nvQXWmjrLP1un3oUVyMzf7H6h8P/uIsAjyNkr5OJY
	leqI7eRshPIWuvUq24u5/75hHF0yTqpMBmKLZHnmt6TurvGbEOjUTySAf c=;
X-IronPort-AV: E=Sophos;i="4.84,254,1355097600"; d="scan'208";a="413616802"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 10 Dec 2012 22:01:22 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBAM1KSV014208
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Mon, 10 Dec 2012 22:01:21 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.118) by
	ex10-hub-31005.ant.amazon.com (10.185.176.12) with Microsoft SMTP
	Server id 14.2.247.3; Mon, 10 Dec 2012 14:01:10 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Mon, 10 Dec 2012 14:01:10 -0800
Date: Mon, 10 Dec 2012 14:01:10 -0800
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121210220108.GA11695@u109add4315675089e695.ant.amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
	<50BF839002000078000AE3DF@nat28.tlf.novell.com>
	<20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
	<50C0850402000078000AE77F@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C0850402000078000AE77F@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012 10:44:04 +0000, Jan Beulich <JBeulich@suse.com> wrote:
> On Wed, 5 Dec 2012 09:06:20 -0800, Matt Wilson <msw@amazon.com> wrote:
> > On Wed, 5 Dec 2012 16:25:36 +0000, Jan Beulich <JBeulich@suse.com> wrote:
> > > On Wed, 5 Dec 2012 07:59:08 -0800, Matt Wilson <msw@amazon.com> wrote:
> > > > 
> > > > If this is true, the existing is_pinned_vcpu() test is broken:
> > > > 
> > > >    #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
> > > >                               cpumask_weight((v)->cpu_affinity) == 1)
> > > > 
> > > > It's && not ||. So if someone pins dom0 vCPUs to pCPUs 1:1 after boot,
> > > > the MSR traps will suddenly start working.
> > > > 
> > > > See commit: 
> > > >   http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=cc0854dd 
> > > 
> > > I don't see what's wrong here. Certain things merely require the
> > > pCPU that a vCPU runs on to be stable, which is what the test
> > > above is for.
> > 
> > Me either. That said, are you willing to Ack and commit my patch that
> > started this thread?
> 
> In no case without Andrew's concerns addressed. Beyond that,
> I'd be hesitant to ack it as I'm myself suspecting side effects that
> we don't want and/or aren't aware of, and in no case could I
> commit it without Keir's ack.

Jan,

So today if I boot Xen without dom0_vcpus_pin set, dom0's vCPUs will
be allowed to run on any pCPU. Xen will block attempts to write
certain MSRs (MSR_AMD64_NB_CFG, MSR_FAM10H_MMIO_CONF_BASE and
MSR_IA32_ENERGY_PERF_BIAS). The VCPUOP_get_physid subop of the vcpu_op
hypercall will not return the initial APIC ID or ACPI ID for dom0.

Also today, if I run "xl vcpu-pin 0 0", suddenly those MSR writes and
the VCPUOP_get_physid hypercall will start working for vCPU 0. For
what it's worth, only legacy XenoLinux-derived kernels appear to use
this hypercall during SMP boot. Upstream Linux does not.

I think that the real risk is in the XenoLinux SMP booting code on AMD
processors where sometimes initial APIC ID != ACPI ID. If the CPU
pinning changes, the ACPI ID to APIC ID mapping will be wrong. This
broke PowerNow! when it ran in dom0.

But PowerNow! is handled by the hypervisor now. So what's the real
danger here?

Andrew, your thoughts?

Thanks,

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 10 23:57:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Dec 2012 23:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiDDJ-0000jK-3L; Mon, 10 Dec 2012 23:56:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TiDDH-0000jD-PV
	for xen-devel@lists.xen.org; Mon, 10 Dec 2012 23:56:43 +0000
Received: from [85.158.143.99:36586] by server-3.bemta-4.messagelabs.com id
	F0/B4-18211-BB676C05; Mon, 10 Dec 2012 23:56:43 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355183801!21365911!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5566 invoked from network); 10 Dec 2012 23:56:42 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-11.tower-216.messagelabs.com with SMTP;
	10 Dec 2012 23:56:42 -0000
Received: from [137.65.220.249] ([137.65.220.249])
	by mail.novell.com with ESMTP; Mon, 10 Dec 2012 16:56:34 -0700
Message-ID: <50C676B1.4070609@suse.com>
Date: Mon, 10 Dec 2012 16:56:33 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>	<20661.3989.258191.396175@mariner.uk.xensource.com>	<1354101923.25834.16.camel@zakaz.uk.xensource.com>	<20674.16214.934271.479230@mariner.uk.xensource.com>	<1355134766.31710.119.camel@zakaz.uk.xensource.com>
	<20677.47995.298291.120095@mariner.uk.xensource.com>
In-Reply-To: <20677.47995.298291.120095@mariner.uk.xensource.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
>   
>> I took Bamvor's most recent response to mean that a per-event lock was
>> already in place in libvirt and inferred that this was the reason why
>> the originally proposed one line fix worked for them. Perhaps I
>> misunderstood?
>>     
>
> Yes, I think that's what Bamvor meant but I don't think it's correct
> that such a lock eliminates the race.  libvirt has to release that
> lock before making the callback (to follow the libxl locking rules
> which are necessary to avoid deadlock).
>   

And it does.  The event loop lock is released just before invoking the
callback, and re-acquired just after the callback returns.

Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 00:20:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 00:20:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiDZt-0001NM-7h; Tue, 11 Dec 2012 00:20:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiDZr-0001NH-Ql
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 00:20:04 +0000
Received: from [85.158.139.211:9206] by server-13.bemta-5.messagelabs.com id
	AE/4F-10716-23C76C05; Tue, 11 Dec 2012 00:20:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355185201!18445412!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk0MTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10339 invoked from network); 11 Dec 2012 00:20:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 00:20:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,254,1355097600"; 
   d="scan'208";a="46535"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 00:20:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 00:20:00 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiDZo-0008E5-VB;
	Tue, 11 Dec 2012 00:20:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiDZo-0008OG-Do;
	Tue, 11 Dec 2012 00:20:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14663-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 00:20:00 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14663: FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14663 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14663/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                  2 host-install(2) broken in 14655 REGR. vs. 14563
 build-amd64-pvops            2 host-install(2) broken in 14655 REGR. vs. 14563
 build-amd64-oldkern          2 host-install(2) broken in 14655 REGR. vs. 14563

Tests which are failing intermittently (not blocking):
 test-i386-i386-xl-winxpsp3    5 xen-boot                    fail pass in 14655
 test-i386-i386-pv             3 host-install(3)  broken in 14655 pass in 14663
 test-i386-i386-xl             3 host-install(3)  broken in 14655 pass in 14663
 test-i386-i386-pair   4 host-install/dst_host(4) broken in 14655 pass in 14663
 test-i386-i386-pair   3 host-install/src_host(3) broken in 14655 pass in 14663
 test-i386-i386-xl-qemut-win   3 host-install(3)  broken in 14655 pass in 14663
 test-i386-i386-xl-qemut-winxpsp3 3 host-install(3) broken in 14655 pass in 14663

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-sedf     11 guest-localmigrate       fail blocked in 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 14655 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14655 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 14655 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 14655 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 14655 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-qemut-win     1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 14655 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 14655 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 14655 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 14655 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-qemut-win-vcpus1  1 xen-build-check(1)    blocked in 14655 n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-i386-xl-qemut-win-vcpus1  1 xen-build-check(1) blocked in 14655 n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)       blocked in 14655 n/a
 test-i386-i386-xl-winxpsp3   13 guest-stop            fail in 14655 never pass
 test-amd64-i386-win           1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-win          1 xen-build-check(1)        blocked in 14655 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14655 n/a
 test-amd64-amd64-xl-qemut-win  1 xen-build-check(1)       blocked in 14655 n/a
 test-amd64-amd64-qemut-win    1 xen-build-check(1)        blocked in 14655 n/a

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25942:1206a3526673
tag:         tip
user:        Kouya Shimura <kouya@jp.fujitsu.com>
date:        Thu Dec 06 11:11:58 2012 +0100
    
    x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram
    
    Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    xen-unstable changeset: 26203:b5cb6cccc32c
    xen-unstable date: Thu Nov 29 11:01:00 UTC 2012
    
    
changeset:   25941:8194979b8104
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:10:15 2012 +0100
    
    ACPI: fix return value of XEN_PM_PDC platform op
    
    Should return -EFAULT when copying to guest memory fails.
    
    Once touching this code, also switch to using the more relaxed copy
    function (copying from the same guest memory already validated the
    virtual address range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26196:5faf5b8b702e
    xen-unstable date: Wed Nov 28 09:03:51 UTC 2012
    
    
changeset:   25940:594b333b211d
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 06 11:07:56 2012 +0100
    
    x86: fix hypercall continuation cancellation in XENMAPSPACE_gmfn_range compat wrapper
    
    When no continuation was established, there must also not be an attempt
    to cancel it - hypercall_cancel_continuation(), in the non-HVM, non-
    multicall case, adjusts the guest mode return address in a way assuming
    that an earlier call to hypercall_create_continuation() took place.
    
    Once touching this code, also restructure it slightly to improve
    readability and switch to using the more relaxed copy function (copying
    from the same guest memory already validated the virtual address
    range).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset: 26195:7670eabcbafc
    xen-unstable date: Wed Nov 28 09:02:26 UTC 2012
    
    
changeset:   25939:6efa959326cc
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Dec 06 11:01:46 2012 +0100
    
    MAINTAINERS: Reference stable maintenance policy
    
    I also couldn't resist fixing a typo and adding a reference to
    http://wiki.xen.org/wiki/Submitting_Xen_Patches for the normal case as
    well.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 26238:53805e238cca
    xen-unstable date: Thu Dec  6 09:56:53 UTC 2012
    
    
changeset:   25938:b306bce61341
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 04 18:02:59 2012 +0000
    
    x86: get_page_from_gfn() must return NULL for invalid GFNs
    
    ... also in the non-translated case.
    
    This is XSA-32 / CVE-2012-xxxx.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 00:35:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 00:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiDou-0001d6-Sk; Tue, 11 Dec 2012 00:35:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TiDos-0001d1-VS
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 00:35:35 +0000
Received: from [85.158.138.51:13588] by server-4.bemta-3.messagelabs.com id
	F2/FD-30023-6DF76C05; Tue, 11 Dec 2012 00:35:34 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355186133!28000817!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22525 invoked from network); 11 Dec 2012 00:35:33 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-2.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	11 Dec 2012 00:35:33 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:63277 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TiDsc-0000zf-7J; Tue, 11 Dec 2012 01:39:26 +0100
Date: Tue, 11 Dec 2012 01:35:16 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <66219392.20121211013516@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355152338.21160.37.camel@zakaz.uk.xensource.com>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
	<341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
	<1355152338.21160.37.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 4:12:18 PM, you wrote:

> I wrote
>> > I have a vague recollection of a patch to set skb->truesize more
>> > accurately in xennet_poll (netfront), but I can't seem to find any
>> > reference to it now.

> I finally found the following in my git tree. Looks like I never sent it
> out.

> Does it help?

Hi Ian,

I have tested for a few hours, but haven't seen the warning with the patch.
So I hope it's fixed!

Thx

--

Sander

> 8<--------------------

> From 788ba317fa241512be7a8630b1b58e53faff83ed Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 22 Aug 2012 11:55:31 +0100
> Subject: [PATCH] xen/netfront: improve truesize tracking

> Fixes WARN_ON from skb_try_coalesce.

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  drivers/net/xen-netfront.c |   15 +++++----------
>  net/core/skbuff.c          |    2 +-
>  2 files changed, 6 insertions(+), 11 deletions(-)

> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index caa0110..b06ef81 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -971,17 +971,12 @@ err:
>                  * overheads. Here, we add the size of the data pulled
>                  * in xennet_fill_frags().
>                  *
> -                * We also adjust for any unused space in the main
> -                * data area by subtracting (RX_COPY_THRESHOLD -
> -                * len). This is especially important with drivers
> -                * which split incoming packets into header and data,
> -                * using only 66 bytes of the main data area (see the
> -                * e1000 driver for example.)  On such systems,
> -                * without this last adjustement, our achievable
> -                * receive throughout using the standard receive
> -                * buffer size was cut by 25%(!!!).
> +                * We also adjust for the __pskb_pull_tail done in
> +                * handle_incoming_queue which pulls data from the
> +                * frags into the head area, which is already
> +                * accounted in RX_COPY_THRESHOLD.
>                  */
> -               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> +               skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
>                 skb->len += skb->data_len;
>  
>                 if (rx->flags & XEN_NETRXF_csum_blank)
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 6e04b1f..941a974 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3439,7 +3439,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
>                 delta = from->truesize - SKB_TRUESIZE(skb_end_offset(from));
>         }
>  
> -       WARN_ON_ONCE(delta < len);
> +       WARN_ONCE(delta < len, "delta %d < len %d\n", delta, len);
>  
>         memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
>                skb_shinfo(from)->frags,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 01:23:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 01:23:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiEYQ-0005t6-A1; Tue, 11 Dec 2012 01:22:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiEYP-0005t1-1e
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 01:22:37 +0000
Received: from [85.158.139.83:10602] by server-8.bemta-5.messagelabs.com id
	0A/61-15003-BDA86C05; Tue, 11 Dec 2012 01:22:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355188955!17999300!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk0MTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18890 invoked from network); 11 Dec 2012 01:22:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 01:22:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,255,1355097600"; 
   d="scan'208";a="47014"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 01:22:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 01:22:34 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiEYM-00005l-RX;
	Tue, 11 Dec 2012 01:22:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiEYM-0005AS-8B;
	Tue, 11 Dec 2012 01:22:34 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14665-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 01:22:34 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14665: FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14665 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14665/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops            2 host-install(2) broken in 14639 REGR. vs. 14472

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win-vcpus1  8 guest-saverestore    fail pass in 14639
 test-amd64-i386-qemuu-rhel6hvm-amd 3 host-install(3) broken in 14639 pass in 14665
 test-amd64-i386-qemuu-rhel6hvm-intel 3 host-install(3) broken in 14639 pass in 14665
 test-amd64-i386-qemuu-win-vcpus1 3 host-install(3) broken in 14639 pass in 14665
 test-amd64-i386-xl-qemuu-win7-amd64 3 host-install(3) broken in 14639 pass in 14665

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop     fail in 14639 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 14639 n/a
 test-amd64-amd64-xl-qemuu-win  1 xen-build-check(1)       blocked in 14639 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 14639 n/a
 test-amd64-amd64-qemuu-win    1 xen-build-check(1)        blocked in 14639 n/a

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-win-vcpus1                             fail    
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   fail    
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3752993df8af5cffa1b8219fe175d235597b4474
Merge: 1e6f3bf... 6d6c9f5...
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Dec 5 11:31:01 2012 +0000

    Merge commit 'v1.3.0' into xen-staging-master-4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 02:43:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 02:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiFoW-0006gp-Nd; Tue, 11 Dec 2012 02:43:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1TiFoV-0006gk-A7
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 02:43:19 +0000
Received: from [85.158.139.83:64803] by server-3.bemta-5.messagelabs.com id
	A8/75-25441-6CD96C05; Tue, 11 Dec 2012 02:43:18 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355193796!28575524!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTQ5MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7633 invoked from network); 11 Dec 2012 02:43:17 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 02:43:17 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBB2hE1c028538
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Dec 2012 02:43:15 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBB2hDCl025546
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 11 Dec 2012 02:43:14 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBB2hD3i032712; Mon, 10 Dec 2012 20:43:13 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Dec 2012 18:43:12 -0800
Date: Mon, 10 Dec 2012 18:43:11 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20121210184311.4fc11316@mantra.us.oracle.com>
In-Reply-To: <50C5BCD602000078000AF51A@nat28.tlf.novell.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012 09:43:34 +0000
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 08.12.12 at 02:46, Mukesh Rathor <mukesh.rathor@oracle.com>
> >>> wrote:
> > The second is msi.c. I don't understand it very well, and need to
> > figure what to do for PVH. Would appreciate suggestions if anyone
> > knows.
> 
> Why do you think you need to do something specially for PVH here
> in the first place? The only adjustment I would expect might be
> needed is address translation (depending on how PVH deal with
> MMIO addresses).

Ok, thanks. Looks like I'm missing some address translation somewhere;
I'm getting an EPT violation from dom0. Time to read up more on MSI-X and debug.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012 09:43:34 +0000
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 08.12.12 at 02:46, Mukesh Rathor <mukesh.rathor@oracle.com>
> >>> wrote:
> > The second is msi.c. I don't understand it very well, and need to
> > figure out what to do for PVH. Would appreciate suggestions if anyone
> > knows.
> 
> Why do you think you need to do something special for PVH here
> in the first place? The only adjustment I would expect might be
> needed is address translation (depending on how PVH deals with
> MMIO addresses).

Ok, thanks. It looks like I'm missing some address translation somewhere
and getting an EPT violation from dom0. Time to read up more on MSI-X and debug.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 03:45:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 03:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiGma-00076n-Ov; Tue, 11 Dec 2012 03:45:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TiGmY-00076i-On
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 03:45:23 +0000
Received: from [85.158.137.99:26369] by server-9.bemta-3.messagelabs.com id
	10/77-02388-D4CA6C05; Tue, 11 Dec 2012 03:45:17 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355197514!13295994!1
X-Originating-IP: [209.85.223.172]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_DONG,HTML_20_30,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1692 invoked from network); 11 Dec 2012 03:45:15 -0000
Received: from mail-ie0-f172.google.com (HELO mail-ie0-f172.google.com)
	(209.85.223.172)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 03:45:15 -0000
Received: by mail-ie0-f172.google.com with SMTP id c13so11776618ieb.17
	for <xen-devel@lists.xen.org>; Mon, 10 Dec 2012 19:45:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=zH0AfRNmQWUEUaNg9hBVOPidL/FcBBi4B2zxsbRseSI=;
	b=Q0GEkrILmFKnurkYnhCEnLsQ6RnMCLjj1JXBU7V3R8mKdsbIyTfEVBC31cmCRSqr4A
	7JX9PIpk26MEnrUdrpLtsKPqIlZ0LC6h9FLRtxCL7Vtvn87NH7fNJg/qFykoljGPHUiR
	gKQc3mxOVRgvjTwo5FKemNZK4fsct/+SAJQ1ySUDQbECW+zPCSCRiv6P7mqBQw5Hu5Jd
	CJ1Zxtyd32x/AeEVYbsKrQh9dinsx7XDMKXgnt1HQVlXibTsd0zDBZ8JN7PHbSShtTPU
	MMrKv1dqSpDDtyDy+By/fqV2VfsoJVWIItom+6s7ygi2LXZJOEHLkbtgMZ7xsLfkhrlt
	5fDw==
MIME-Version: 1.0
Received: by 10.50.178.106 with SMTP id cx10mr8805531igc.24.1355197513951;
	Mon, 10 Dec 2012 19:45:13 -0800 (PST)
Received: by 10.64.37.39 with HTTP; Mon, 10 Dec 2012 19:45:13 -0800 (PST)
In-Reply-To: <003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
Date: Tue, 11 Dec 2012 11:45:13 +0800
X-Google-Sender-Auth: wb1q7uq_N9pId-lz-HaTfCl-w7E
Message-ID: <CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: "Kay, Allen M" <allen.m.kay@intel.com>
Cc: "Xu, Dongxiao" <dongxiao.xu@intel.com>, xen-devel <xen-devel@lists.xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0766400272027752363=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0766400272027752363==
Content-Type: multipart/alternative; boundary=e89a8f5036c86b686204d08b8258

--e89a8f5036c86b686204d08b8258
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 11, 2012 at 2:15 AM, Kay, Allen M <allen.m.kay@intel.com> wrote:

> My understanding is that the i915 driver needs to look at the real PCH's
> device ID to apply HW workarounds.
>
> One way to fix this is to make the device ID of the first ISA bridge
> (00:01.0) reflect the device ID of 00:1f.0 on the host.  This way, i915
> running as a guest will find a valid PCH device ID to make workaround
> decisions with.
>
> I don't know why it would make a difference if i915 is built into the
> kernel or as a module though.
>
> Allen
>
Thanks Allen for your input.
But module vs. built-in is not the only difference; another is the PVHVM
build vs. the pure HVM build.
Both share the same PCI layout yet give different results. Any theory that
explains the difference? What makes the PVHVM version work?

Thanks,
Timothy



> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Monday, December 10, 2012 4:27 AM
> To: G.R.
> Cc: xen-devel; Stefano Stabellini; Kay, Allen M; xiantao.zhang@intel; Xu,
> Dongxiao; Dong, Eddie
> Subject: Re: intel IGD driver intel_detect_pch() failure
>
> CC'ing some engineers that could have some useful suggestions
>
> On Mon, 10 Dec 2012, G.R. wrote:
> > Hello, could anybody help?
> >
> > On Sun, Dec 9, 2012 at 1:00 PM, G.R. <firemeteor@users.sourceforge.net>
> wrote:
> >       I dug further and got confused.
> >       The host ISA bridge 00:1f.0 is automatically passed-through as
> part of the gfx_passthru magic.
> >       However, it is passed through as a PCI bridge:
> >       On host:   00:1f.0 ISA bridge [0601]: Intel Corporation H77
> Express Chipset LPC Controller [8086:1e4a] (rev 04)
> >       On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express
> Chipset LPC Controller [8086:1e4a] (rev 04)
> >
> >       This is both the case for pure HVM && PVHVM. And this one exists
> for both case too:
> >       00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA
> [Natoma/Triton II] [8086:7000]
> >
> >       And the intel_detect_pch() function only checks the first ISA
> bridge on the PCI bus:
> >       pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
> >
> >       Unless there is magic elsewhere, I can't imagine the code would
> behave differently on the two builds.
> >       But what's the magic behind this?
> >
> >       Also, is there any way to get rid of the ISA bridge emulated by
> qemu?
> >       I don't think it is required in most cases...
> >
> >       Thanks,
> >       Timothy
> >
> >
> >       On Sun, Dec 9, 2012 at 1:43 AM, G.R. <
> firemeteor@users.sourceforge.net> wrote:
> >
> >             Hi all,
> >             I'm debugging an issue where an HVM guest fails to produce
> any output with the IGD passed through.
> >             This is a pure HVM Linux guest with the i915 driver directly
> compiled in.
> >             A PVHVM kernel with the i915 driver compiled as a module
> works without issue.
> >             I'm not yet sure which factor matters more, pure HVM or
> the I915=y kernel config.
> >
> >             The direct cause of the missing output is that the driver
> does not select the Display PLL properly, which in turn
> >             is due to failing to detect the PCH type properly.
> >
> >             Strangely enough, the intel_detect_pch() function works by
> checking the device ID of the ISA bridge
> >             coming with the chipset:
> >
> >             /* The reason to probe ISA bridge instead of Dev31:Fun0 is
> >              * to make graphics device passthrough work easy for VMM,
> >              * that only need to expose ISA bridge to let driver know
> >              * the real hardware underneath. This is a requirement
> >              * from virtualization team. */
> >
> >             I added some debug output in this function and found that
> it obtained a strange device ID:
> >             [ 1.005423] [drm] intel pch detect, found 00007000
> >
> >             This looks like the ISA bridge provided by qemu:
> >             00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA
> [Natoma/Triton II]
> >             00:01.0 0601: 8086:7000
> >
> >             However, I can find the same device on a PVHVM Linux guest,
> but intel_detect_pch() is not fooled by
> >             that. Is it due to the I915=m config or some magic played by
> PVOPS? Any suggestions on how to fix this?
> >
> >             Thanks,
> >             Timothy
> >
> >
> >
> >
> >
>

--e89a8f5036c86b686204d08b8258--


--===============0766400272027752363==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0766400272027752363==--


From xen-devel-bounces@lists.xen.org Tue Dec 11 04:33:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 04:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiHWK-0007W6-0E; Tue, 11 Dec 2012 04:32:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiHWI-0007W1-2e
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 04:32:38 +0000
Received: from [193.109.254.147:3377] by server-1.bemta-14.messagelabs.com id
	98/EE-25314-567B6C05; Tue, 11 Dec 2012 04:32:37 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355200351!1725964!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk0MTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11278 invoked from network); 11 Dec 2012 04:32:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 04:32:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,256,1355097600"; 
   d="scan'208";a="48146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 04:32:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 04:32:30 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiHWA-000142-K2;
	Tue, 11 Dec 2012 04:32:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiHWA-0003Q9-9w;
	Tue, 11 Dec 2012 04:32:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14664-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 04:32:30 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14664: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14664 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14664/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14565
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14565

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  03cb71bc32f9
baseline version:
 xen                  bc624b00d6d6

------------------------------------------------------------
People who touched revisions under test:
  Dan Magenheimer <dan.magenheimer@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=03cb71bc32f9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 03cb71bc32f9
+ branch=xen-unstable
+ revision=03cb71bc32f9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 03cb71bc32f9 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 36 changesets with 83 changes to 58 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 11 05:51:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 05:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiIk0-0001FD-LZ; Tue, 11 Dec 2012 05:50:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TiIjz-0001Er-9u
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 05:50:51 +0000
Received: from [193.109.254.147:14687] by server-2.bemta-14.messagelabs.com id
	99/E3-20829-AB9C6C05; Tue, 11 Dec 2012 05:50:50 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355205031!2436001!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwODY0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10829 invoked from network); 11 Dec 2012 05:50:33 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	11 Dec 2012 05:50:33 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 10 Dec 2012 21:49:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,256,1355126400"; d="scan'208";a="255589700"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 10 Dec 2012 21:50:03 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 21:50:03 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Tue, 11 Dec 2012 13:50:01 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio	and
	cpu_ioreq_move
Thread-Index: AQHN1JhAUKNnK9kkmkSz9vxuXDVP3pgNAeWAgAYbfIA=
Date: Tue, 11 Dec 2012 05:50:01 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEBE6C8@SHSMSX102.ccr.corp.intel.com>
References: <alpine.DEB.2.02.1209101842150.15568@kaball.uk.xensource.com>
	<20674.5586.142286.869968@mariner.uk.xensource.com>
	<1354897843.31710.93.camel@zakaz.uk.xensource.com>
	<20674.6749.646806.53950@mariner.uk.xensource.com>
In-Reply-To: <20674.6749.646806.53950@mariner.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio	and
 cpu_ioreq_move
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> Sent: Saturday, December 08, 2012 12:34 AM
> To: Ian Campbell
> Cc: Stefano Stabellini; Xu, Dongxiao; xen-devel@lists.xensource.com;
> qemu-devel@nongnu.org
> Subject: Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify cpu_ioreq_pio and
> cpu_ioreq_move
> 
> Ian Campbell writes ("Re: [Xen-devel] [PATCH 0/2] QEMU/xen: simplify
> cpu_ioreq_pio	and cpu_ioreq_move"):
> > On Fri, 2012-12-07 at 16:14 +0000, Ian Jackson wrote:
> > > +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> > > +    if (req->df) addr -= offset;
> > > +    else addr -= offset;
> >
> > One of these -= should be a += I presume?
> 
> Uh, yes.
> 
> > [...]
> > > +                write_phys_req_item((target_phys_addr_t) req->data,
> req, i, &tmp);
> >
> > This seems to be the only one with this cast, why?
> 
> This is a mistake.
> 
> > write_phys_req_item takes a target_phys_addr_t so this will happen
> > regardless I think.
> 
> Indeed.
> 
> Ian.

Tested this v2 patch on my system, and it works.

Thanks,
Dongxiao


> 
> commit fd3865f8e0d867a203b4ddcb22eefa827cfaca0a
> Author: Ian Jackson <ian.jackson@eu.citrix.com>
> Date:   Fri Dec 7 16:02:04 2012 +0000
> 
>     cpu_ioreq_pio, cpu_ioreq_move: introduce read_phys_req_item,
> write_phys_req_item
> 
>     The current code compares i (int) with req->count (uint32_t) in a for
>     loop, risking an infinite loop if req->count is >INT_MAX.  It also
>     does the multiplication of req->size in a too-small type, leading to
>     integer overflows.
> 
>     Turn read_physical and write_physical into two different helper
>     functions, read_phys_req_item and write_phys_req_item, that take care
>     of adding or subtracting offset depending on sign.
> 
>     This moves the formulaic multiplication to a single place where the
>     integer overflows can be dealt with by casting to wide-enough unsigned
>     types.
> 
>     Reported-By: Dongxiao Xu <dongxiao.xu@intel.com>
>     Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>     Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
>     --
>     v2: Fix sign when !!req->df.  Remove a useless cast.
> 
> diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
> index c6d049c..63a938b 100644
> --- a/i386-dm/helper2.c
> +++ b/i386-dm/helper2.c
> @@ -339,21 +339,40 @@ static void do_outp(CPUState *env, unsigned long
> addr,
>      }
>  }
> 
> -static inline void read_physical(uint64_t addr, unsigned long size, void *val)
> +/*
> + * Helper functions which read/write an object from/to physical guest
> + * memory, as part of the implementation of an ioreq.
> + *
> + * Equivalent to
> + *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
> + *                          val, req->size, 0/1)
> + * except without the integer overflow problems.
> + */
> +static void rw_phys_req_item(target_phys_addr_t addr,
> +                             ioreq_t *req, uint32_t i, void *val, int rw)
>  {
> -    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 0);
> +    /* Do everything unsigned so overflow just results in a truncated result
> +     * and accesses to undesired parts of guest memory, which is up
> +     * to the guest */
> +    target_phys_addr_t offset = (target_phys_addr_t)req->size * i;
> +    if (req->df) addr -= offset;
> +    else addr += offset;
> +    cpu_physical_memory_rw(addr, val, req->size, rw);
>  }
> -
> -static inline void write_physical(uint64_t addr, unsigned long size, void *val)
> +static inline void read_phys_req_item(target_phys_addr_t addr,
> +                                      ioreq_t *req, uint32_t i, void
> *val)
>  {
> -    return cpu_physical_memory_rw((target_phys_addr_t)addr, val, size, 1);
> +    rw_phys_req_item(addr, req, i, val, 0);
> +}
> +static inline void write_phys_req_item(target_phys_addr_t addr,
> +                                       ioreq_t *req, uint32_t i, void
> *val)
> +{
> +    rw_phys_req_item(addr, req, i, val, 1);
>  }
> 
>  static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>  {
> -    int i, sign;
> -
> -    sign = req->df ? -1 : 1;
> +    uint32_t i;
> 
>      if (req->dir == IOREQ_READ) {
>          if (!req->data_is_ptr) {
> @@ -363,9 +382,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
> 
>              for (i = 0; i < req->count; i++) {
>                  tmp = do_inp(env, req->addr, req->size);
> -                write_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                write_phys_req_item(req->data, req, i, &tmp);
>              }
>          }
>      } else if (req->dir == IOREQ_WRITE) {
> @@ -375,9 +392,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>              for (i = 0; i < req->count; i++) {
>                  unsigned long tmp = 0;
> 
> -                read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->data, req, i, &tmp);
>                  do_outp(env, req->addr, req->size, tmp);
>              }
>          }
> @@ -386,22 +401,16 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> *req)
> 
>  static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>  {
> -    int i, sign;
> -
> -    sign = req->df ? -1 : 1;
> +    uint32_t i;
> 
>      if (!req->data_is_ptr) {
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
> -                read_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &req->data);
> +                read_phys_req_item(req->addr, req, i, &req->data);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
> -                write_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &req->data);
> +                write_phys_req_item(req->addr, req, i, &req->data);
>              }
>          }
>      } else {
> @@ -409,21 +418,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t
> *req)
> 
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
> -                read_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> -                write_physical((target_phys_addr_t )req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->addr, req, i, &tmp);
> +                write_phys_req_item(req->data, req, i, &tmp);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
> -                read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> -                write_physical(req->addr
> -                  + (sign * i * req->size),
> -                  req->size, &tmp);
> +                read_phys_req_item(req->data, req, i, &tmp);
> +                write_phys_req_item(req->addr, req, i, &tmp);
>              }
>          }
>      }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 06:28:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 06:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiJJo-0001aY-6b; Tue, 11 Dec 2012 06:27:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TiJJm-0001aT-1W
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 06:27:50 +0000
Received: from [85.158.139.211:29392] by server-6.bemta-5.messagelabs.com id
	AF/1C-30498-562D6C05; Tue, 11 Dec 2012 06:27:49 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355207263!19478220!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI5MTU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8244 invoked from network); 11 Dec 2012 06:27:44 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-7.tower-206.messagelabs.com with SMTP;
	11 Dec 2012 06:27:44 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 10 Dec 2012 22:27:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,256,1355126400"; d="scan'208";a="255602228"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 10 Dec 2012 22:27:40 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 22:27:40 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Tue, 11 Dec 2012 14:27:39 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "konrad.wilk@oracle.com"
	<konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
	for map_sg hook
Thread-Index: Ac3XaIzprqGxeKpdR6GzVMagEkgdpA==
Date: Tue, 11 Dec 2012 06:27:38 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEBE725@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, December 06, 2012 9:38 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org; konrad.wilk@oracle.com;
> linux-kernel@vger.kernel.org
> Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
> for map_sg hook
> 
> >>> On 06.12.12 at 14:08, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > While mapping sg buffers, checking to cross page DMA buffer is also
> > needed. If the guest DMA buffer crosses page boundary, Xen should
> > exchange contiguous memory for it.
> >
> > Besides, it is needed to backup the original page contents and copy it
> > back after memory exchange is done.
> >
> > This fixes issues if device DMA into software static buffers, and in
> > case the static buffer cross page boundary which pages are not
> > contiguous in real hardware.
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > ---
> >  drivers/xen/swiotlb-xen.c |   47
> > ++++++++++++++++++++++++++++++++++++++++++-
> >  1 files changed, 46 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 58db6df..e8f0cfb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device
> > *hwdev, dma_addr_t dev_addr,  }
> > EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
> >
> > +static bool
> > +check_continguous_region(unsigned long vstart, unsigned long order)
> 
> check_continguous_region(unsigned long vstart, unsigned int order)
> 
> But - why do you need to do this check order based in the first place? Checking
> the actual length of the buffer should suffice.

Thanks, the word "continguous" is mistyped in the function, it should be "contiguous".

check_contiguous_region() function is used to check whether pages are contiguous in hardware.
The length only indicates whether the buffer crosses page boundary. If buffer crosses pages and they are not contiguous in hardware, we do need to exchange memory in Xen.

> 
> > +{
> > +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> > +	unsigned long next_ma;
> 
> phys_addr_t or some such for both of them.

Thanks.
Should be dma_addr_t?

> 
> > +	int i;
> 
> unsigned long

Thanks.

> 
> > +
> > +	for (i = 1; i < (1 << order); i++) {
> 
> 1UL

Thanks.

> 
> > +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> > +		if (next_ma != prev_ma + PAGE_SIZE)
> > +			return false;
> > +		prev_ma = next_ma;
> > +	}
> > +	return true;
> > +}
> > +
> >  /*
> >   * Map a set of buffers described by scatterlist in streaming mode for
> DMA.
> >   * This is the scatter-gather version of the above
> > xen_swiotlb_map_page @@ -489,7 +505,36 @@
> > xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist
> > *sgl,
> >
> >  	for_each_sg(sgl, sg, nelems, i) {
> >  		phys_addr_t paddr = sg_phys(sg);
> > -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> > +		unsigned long vstart, order;
> > +		dma_addr_t dev_addr;
> > +
> > +		/*
> > +		 * While mapping sg buffers, checking to cross page DMA buffer
> > +		 * is also needed. If the guest DMA buffer crosses page
> > +		 * boundary, Xen should exchange contiguous memory for it.
> > +		 * Besides, it is needed to backup the original page contents
> > +		 * and copy it back after memory exchange is done.
> > +		 */
> > +		if (range_straddles_page_boundary(paddr, sg->length)) {
> > +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> > +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> > +			if (!check_continguous_region(vstart, order)) {
> > +				unsigned long buf;
> > +				buf = __get_free_pages(GFP_KERNEL, order);
> > +				memcpy((void *)buf, (void *)vstart,
> > +					PAGE_SIZE * (1 << order));
> > +				if (xen_create_contiguous_region(vstart, order,
> > +						fls64(paddr))) {
> > +					free_pages(buf, order);
> > +					return 0;
> > +				}
> > +				memcpy((void *)vstart, (void *)buf,
> > +					PAGE_SIZE * (1 << order));
> > +				free_pages(buf, order);
> > +			}
> > +		}
> > +
> > +		dev_addr = xen_phys_to_bus(paddr);
> >
> >  		if (swiotlb_force ||
> >  		    !dma_capable(hwdev, dev_addr, sg->length) ||
> 
> How about swiotlb_map_page() (for the compound page case)?

Yes! This should also need similar handling.

One thing needs further consideration is that, the above approach introduces two memory copies, which has race condition that, when we are exchanging/copying pages, dom0 may visit other elements right in the pages.

One choice is to move the memory copy in hypervisor, which requires us to modify the XENMEM_exchange hypercall and add certain flags indicating whether the exchange needs memory copying.

Or another choice to solve this issue in driver side to avoid DMA into such static buffers? This is easy to modify one driver but may have difficulties to monitor so many device drivers.

Thanks,
Dongxiao

> 
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 06:28:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 06:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiJJo-0001aY-6b; Tue, 11 Dec 2012 06:27:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TiJJm-0001aT-1W
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 06:27:50 +0000
Received: from [85.158.139.211:29392] by server-6.bemta-5.messagelabs.com id
	AF/1C-30498-562D6C05; Tue, 11 Dec 2012 06:27:49 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355207263!19478220!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI5MTU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8244 invoked from network); 11 Dec 2012 06:27:44 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-7.tower-206.messagelabs.com with SMTP;
	11 Dec 2012 06:27:44 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 10 Dec 2012 22:27:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,256,1355126400"; d="scan'208";a="255602228"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 10 Dec 2012 22:27:40 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 22:27:40 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Tue, 11 Dec 2012 14:27:39 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "konrad.wilk@oracle.com"
	<konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
	for map_sg hook
Thread-Index: Ac3XaIzprqGxeKpdR6GzVMagEkgdpA==
Date: Tue, 11 Dec 2012 06:27:38 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEBE725@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, December 06, 2012 9:38 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org; konrad.wilk@oracle.com;
> linux-kernel@vger.kernel.org
> Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
> for map_sg hook
> 
> >>> On 06.12.12 at 14:08, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > While mapping sg buffers, checking to cross page DMA buffer is also
> > needed. If the guest DMA buffer crosses page boundary, Xen should
> > exchange contiguous memory for it.
> >
> > Besides, it is needed to backup the original page contents and copy it
> > back after memory exchange is done.
> >
> > This fixes issues if device DMA into software static buffers, and in
> > case the static buffer cross page boundary which pages are not
> > contiguous in real hardware.
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > ---
> >  drivers/xen/swiotlb-xen.c |   47
> > ++++++++++++++++++++++++++++++++++++++++++++-
> >  1 files changed, 46 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 58db6df..e8f0cfb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device
> > *hwdev, dma_addr_t dev_addr,  }
> > EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
> >
> > +static bool
> > +check_continguous_region(unsigned long vstart, unsigned long order)
> 
> check_continguous_region(unsigned long vstart, unsigned int order)
> 
> But - why do you need to do this check order based in the first place? Checking
> the actual length of the buffer should suffice.

Thanks, the word "continguous" is mistyped in the function, it should be "contiguous".

check_contiguous_region() function is used to check whether pages are contiguous in hardware.
The length only indicates whether the buffer crosses page boundary. If buffer crosses pages and they are not contiguous in hardware, we do need to exchange memory in Xen.

> 
> > +{
> > +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> > +	unsigned long next_ma;
> 
> phys_addr_t or some such for both of them.

Thanks.
Should be dma_addr_t?

> 
> > +	int i;
> 
> unsigned long

Thanks.

> 
> > +
> > +	for (i = 1; i < (1 << order); i++) {
> 
> 1UL

Thanks.

> 
> > +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> > +		if (next_ma != prev_ma + PAGE_SIZE)
> > +			return false;
> > +		prev_ma = next_ma;
> > +	}
> > +	return true;
> > +}
> > +
> >  /*
> >   * Map a set of buffers described by scatterlist in streaming mode for
> DMA.
> >   * This is the scatter-gather version of the above
> > xen_swiotlb_map_page @@ -489,7 +505,36 @@
> > xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist
> > *sgl,
> >
> >  	for_each_sg(sgl, sg, nelems, i) {
> >  		phys_addr_t paddr = sg_phys(sg);
> > -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> > +		unsigned long vstart, order;
> > +		dma_addr_t dev_addr;
> > +
> > +		/*
> > +		 * While mapping sg buffers, checking to cross page DMA buffer
> > +		 * is also needed. If the guest DMA buffer crosses page
> > +		 * boundary, Xen should exchange contiguous memory for it.
> > +		 * Besides, it is needed to backup the original page contents
> > +		 * and copy it back after memory exchange is done.
> > +		 */
> > +		if (range_straddles_page_boundary(paddr, sg->length)) {
> > +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> > +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> > +			if (!check_continguous_region(vstart, order)) {
> > +				unsigned long buf;
> > +				buf = __get_free_pages(GFP_KERNEL, order);
> > +				memcpy((void *)buf, (void *)vstart,
> > +					PAGE_SIZE * (1 << order));
> > +				if (xen_create_contiguous_region(vstart, order,
> > +						fls64(paddr))) {
> > +					free_pages(buf, order);
> > +					return 0;
> > +				}
> > +				memcpy((void *)vstart, (void *)buf,
> > +					PAGE_SIZE * (1 << order));
> > +				free_pages(buf, order);
> > +			}
> > +		}
> > +
> > +		dev_addr = xen_phys_to_bus(paddr);
> >
> >  		if (swiotlb_force ||
> >  		    !dma_capable(hwdev, dev_addr, sg->length) ||
> 
> How about swiotlb_map_page() (for the compound page case)?

Yes! This should also need similar handling.

One thing needs further consideration is that, the above approach introduces two memory copies, which has race condition that, when we are exchanging/copying pages, dom0 may visit other elements right in the pages.

One choice is to move the memory copy in hypervisor, which requires us to modify the XENMEM_exchange hypercall and add certain flags indicating whether the exchange needs memory copying.

Or another choice to solve this issue in driver side to avoid DMA into such static buffers? This is easy to modify one driver but may have difficulties to monitor so many device drivers.

Thanks,
Dongxiao

> 
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 06:40:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 06:40:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiJVP-0001lP-Gm; Tue, 11 Dec 2012 06:39:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TiJVO-0001lK-79
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 06:39:50 +0000
Received: from [85.158.143.35:47124] by server-3.bemta-4.messagelabs.com id
	89/5F-18211-535D6C05; Tue, 11 Dec 2012 06:39:49 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355207987!13510228!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwODY0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32383 invoked from network); 11 Dec 2012 06:39:48 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-12.tower-21.messagelabs.com with SMTP;
	11 Dec 2012 06:39:48 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 10 Dec 2012 22:38:49 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,256,1355126400"; d="scan'208";a="255605880"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 10 Dec 2012 22:39:37 -0800
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 22:39:37 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 10 Dec 2012 22:39:37 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Tue, 11 Dec 2012 14:39:36 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
	hook
Thread-Index: AQHN1IR0DIODKC0DeEab0mvv5Cx6jJgTKUew
Date: Tue, 11 Dec 2012 06:39:35 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
In-Reply-To: <20121207140852.GC3140@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, December 07, 2012 10:09 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
> hook
> 
> On Thu, Dec 06, 2012 at 09:08:42PM +0800, Dongxiao Xu wrote:
> > While mapping sg buffers, checking to cross page DMA buffer is also
> > needed. If the guest DMA buffer crosses page boundary, Xen should
> > exchange contiguous memory for it.
> 
> So this is when we cross those 2MB contingous swatch of buffers.
> Wouldn't we get the same problem with the 'map_page' call? If the driver tried
> to map say a 4MB DMA region?

Yes, it also needs such check, as I just replied to Jan's mail.

> 
> What if this check was done in the routines that provide the software static
> buffers and there try to provide a nice DMA contingous swatch of pages?

Yes, this approach also came to our mind, but it requires modifying the drivers themselves.
If so, a driver must not use such static buffers (e.g., from kmalloc) for DMA, even if the buffer is contiguous natively.
Is this acceptable to kernel/driver upstream?

Thanks,
Dongxiao

> 
> >
> > Besides, it is needed to backup the original page contents and copy it
> > back after memory exchange is done.
> >
> > This fixes issues if device DMA into software static buffers, and in
> > case the static buffer cross page boundary which pages are not
> > contiguous in real hardware.
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > ---
> >  drivers/xen/swiotlb-xen.c |   47
> ++++++++++++++++++++++++++++++++++++++++++++-
> >  1 files changed, 46 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 58db6df..e8f0cfb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -461,6 +461,22 @@ xen_swiotlb_sync_single_for_device(struct device
> > *hwdev, dma_addr_t dev_addr,  }
> > EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
> >
> > +static bool
> > +check_continguous_region(unsigned long vstart, unsigned long order) {
> > +	unsigned long prev_ma = xen_virt_to_bus((void *)vstart);
> > +	unsigned long next_ma;
> > +	int i;
> > +
> > +	for (i = 1; i < (1 << order); i++) {
> > +		next_ma = xen_virt_to_bus((void *)(vstart + i * PAGE_SIZE));
> > +		if (next_ma != prev_ma + PAGE_SIZE)
> > +			return false;
> > +		prev_ma = next_ma;
> > +	}
> > +	return true;
> > +}
> > +
> >  /*
> >   * Map a set of buffers described by scatterlist in streaming mode for
> DMA.
> >   * This is the scatter-gather version of the above
> > xen_swiotlb_map_page @@ -489,7 +505,36 @@
> > xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist
> > *sgl,
> >
> >  	for_each_sg(sgl, sg, nelems, i) {
> >  		phys_addr_t paddr = sg_phys(sg);
> > -		dma_addr_t dev_addr = xen_phys_to_bus(paddr);
> > +		unsigned long vstart, order;
> > +		dma_addr_t dev_addr;
> > +
> > +		/*
> > +		 * While mapping sg buffers, checking to cross page DMA buffer
> > +		 * is also needed. If the guest DMA buffer crosses page
> > +		 * boundary, Xen should exchange contiguous memory for it.
> > +		 * Besides, it is needed to backup the original page contents
> > +		 * and copy it back after memory exchange is done.
> > +		 */
> > +		if (range_straddles_page_boundary(paddr, sg->length)) {
> > +			vstart = (unsigned long)__va(paddr & PAGE_MASK);
> > +			order = get_order(sg->length + (paddr & ~PAGE_MASK));
> > +			if (!check_continguous_region(vstart, order)) {
> > +				unsigned long buf;
> > +				buf = __get_free_pages(GFP_KERNEL, order);
> > +				memcpy((void *)buf, (void *)vstart,
> > +					PAGE_SIZE * (1 << order));
> > +				if (xen_create_contiguous_region(vstart, order,
> > +						fls64(paddr))) {
> > +					free_pages(buf, order);
> > +					return 0;
> > +				}
> > +				memcpy((void *)vstart, (void *)buf,
> > +					PAGE_SIZE * (1 << order));
> > +				free_pages(buf, order);
> > +			}
> > +		}
> > +
> > +		dev_addr = xen_phys_to_bus(paddr);
> >
> >  		if (swiotlb_force ||
> >  		    !dma_capable(hwdev, dev_addr, sg->length) ||
> > --
> > 1.7.1
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 07:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 07:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiK7O-0002Pr-BD; Tue, 11 Dec 2012 07:19:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiK7N-0002Pm-3q
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 07:19:05 +0000
Received: from [85.158.143.99:63881] by server-2.bemta-4.messagelabs.com id
	BC/FE-30861-86ED6C05; Tue, 11 Dec 2012 07:19:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1355210343!28036724!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk0MTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20416 invoked from network); 11 Dec 2012 07:19:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 07:19:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,256,1355097600"; 
   d="scan'208";a="49676"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 07:19:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 07:19:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiK7L-00028v-DZ;
	Tue, 11 Dec 2012 07:19:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiK7K-0000Jc-RO;
	Tue, 11 Dec 2012 07:19:03 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14666-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 07:19:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14666: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14666 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14666/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14481
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14481

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=3bbbcb136d00b0718e63f7fd633f098e0889c3d1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
+ branch=linux-3.0
+ revision=3bbbcb136d00b0718e63f7fd633f098e0889c3d1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 3bbbcb136d00b0718e63f7fd633f098e0889c3d1:tested/linux-3.0
Counting objects: 325, done.
Compressing objects: 100% (46/46), done.
Writing objects: 100% (231/231), 35.27 KiB, done.
Total 231 (delta 190), reused 225 (delta 184)
To xen@xenbits.xensource.com:git/linux-pvops.git
   2fc3fd4..3bbbcb1  3bbbcb136d00b0718e63f7fd633f098e0889c3d1 -> tested/linux-3.0
+ exit 0
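The trace above shows osstest's take-the-lock-then-re-exec idiom: `cri-lock-repos` compares `$OSSTEST_REPOS_LOCK_LOCKED` with the lock path and, when they differ, sets the variable and re-execs the whole command under `with-lock-ex -w` — which is why the `branch=`/`revision=` lines appear twice. The second pass runs with the repos lock held and then pushes the revision to the remote `tested/linux-3.0` ref with an explicit `SHA:ref` refspec. A minimal sketch of the same guard in Python, with `fcntl.flock` standing in for `with-lock-ex` (an assumption; the lock path here is hypothetical):

```python
import fcntl
import os

LOCK_ENV = "OSSTEST_REPOS_LOCK_LOCKED"

def need_reexec(env, lock_path):
    # Mirrors the shell test: '[' x$OSSTEST_REPOS_LOCK_LOCKED '!=' x$lock ']'
    # True on the first pass, when we do not yet hold the repos lock.
    return env.get(LOCK_ENV) != lock_path

def run_locked(lock_path, action):
    # Second pass: take an exclusive lock (as with-lock-ex -w would) and
    # run the action while holding it; closing the file releases the lock.
    with open(lock_path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        return action()

lock_path = "/tmp/demo-repos.lock"  # hypothetical; osstest uses $c{Repos}/lock

if need_reexec(os.environ, lock_path):
    # The real harness would set the marker and then
    # os.execvp("with-lock-ex", ["with-lock-ex", "-w", lock_path, ...]),
    # so the re-executed process sees the marker and skips this branch.
    os.environ[LOCK_ENV] = lock_path

result = run_locked(lock_path, lambda: "pushed 3bbbcb1 -> tested/linux-3.0")
print(result)
```

Re-executing the script under the lock holder means the lock's lifetime exactly brackets the whole of `ap-push`: there is no separate unlock step to forget, and a crash anywhere releases the lock when the process exits.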

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 07:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 07:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiK7O-0002Pr-BD; Tue, 11 Dec 2012 07:19:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiK7N-0002Pm-3q
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 07:19:05 +0000
Received: from [85.158.143.99:63881] by server-2.bemta-4.messagelabs.com id
	BC/FE-30861-86ED6C05; Tue, 11 Dec 2012 07:19:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1355210343!28036724!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk0MTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20416 invoked from network); 11 Dec 2012 07:19:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 07:19:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,256,1355097600"; 
   d="scan'208";a="49676"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 07:19:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 07:19:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiK7L-00028v-DZ;
	Tue, 11 Dec 2012 07:19:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiK7K-0000Jc-RO;
	Tue, 11 Dec 2012 07:19:03 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14666-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 07:19:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14666: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14666 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14666/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14481
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14481

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1
baseline version:
 linux                2fc3fd44850e68f21ce1746c2e94470cab44ffab

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=3bbbcb136d00b0718e63f7fd633f098e0889c3d1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 3bbbcb136d00b0718e63f7fd633f098e0889c3d1
+ branch=linux-3.0
+ revision=3bbbcb136d00b0718e63f7fd633f098e0889c3d1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 3bbbcb136d00b0718e63f7fd633f098e0889c3d1:tested/linux-3.0
Counting objects: 325, done.
Compressing objects: 100% (46/46), done.
Writing objects: 100% (231/231), 35.27 KiB, done.
Total 231 (delta 190), reused 225 (delta 184)
To xen@xenbits.xensource.com:git/linux-pvops.git
   2fc3fd4..3bbbcb1  3bbbcb136d00b0718e63f7fd633f098e0889c3d1 -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 09:27:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 09:27:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiM6p-00047S-QD; Tue, 11 Dec 2012 09:26:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TiM6o-00047N-3h
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 09:26:38 +0000
Received: from [85.158.139.83:5601] by server-8.bemta-5.messagelabs.com id
	B4/43-15003-D4CF6C05; Tue, 11 Dec 2012 09:26:37 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355217983!29193849!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_00_10,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9457 invoked from network); 11 Dec 2012 09:26:25 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 09:26:25 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so8568988iac.18
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 01:26:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=RqLfE8y8yMSlS1chgE3yO2CgiNMu4O0s4glrbHKZPjI=;
	b=Tk5hiS1IY0035s4ja59g2yHEcnz5Ly98NGfEXegZsLUSfXkT4lth4we96d0Ry1M9Qs
	bhqJ15cpmeovAkBs/lc9TZWrcVCQvHplrtk9ZFS1XjU3r/IrNbeB1uzKzgplSykBWoxB
	viKQY0WWYoxo+pfdg4dU1yC8KhmoezCwR5+koaPQcK8a0wTBD5QCi9pFmxGw3YUVC1fY
	lD74KJWW6+hmnAkvCmsT4D3FWoEu4tNrIhJA5lsOfHXB0Dj3QqHvenO6ewvmWEGZfSDy
	pueH0h7qZp3S/Az5wYhkweYB6LyIe2HVmud2xZQCruaE1QAZZtLn+L6om5stQ7kwRIMe
	L04g==
MIME-Version: 1.0
Received: by 10.50.6.169 with SMTP id c9mr9395316iga.24.1355217983497; Tue, 11
	Dec 2012 01:26:23 -0800 (PST)
Received: by 10.64.37.39 with HTTP; Tue, 11 Dec 2012 01:26:23 -0800 (PST)
Date: Tue, 11 Dec 2012 17:26:23 +0800
X-Google-Sender-Auth: sOls7MqdI7RTXDb80q7yuD8Ta_M
Message-ID: <CAKhsbWY_LpCKHzNCOqD7qysqeY+4vEtgGu6h=TNx6x3iacTrGg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Need help to debug win7 BSOD on IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4997373185778400963=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4997373185778400963==
Content-Type: multipart/alternative; boundary=e89a8f6467177fe67d04d0904675

--e89a8f6467177fe67d04d0904675
Content-Type: text/plain; charset=ISO-8859-1

Hi guys,
I have run into a win7 VGA passthrough problem and would like to analyse
the memory dumps to identify the issue. Could anybody lend me a hand if
I get stuck?

Some details are below; I will post follow-ups later.
Any suggestions on triage / experiments to try would be appreciated.

I have a working build for passing through an Intel HD4000 to a Debian
domU, but win7 does not quite work. Issues I have *randomly* met so far:
1) black screen after windows finishes loading files
2) failure to access the install media (I access the install media /
virtual disk through NFS)
3) BSOD MEMORY_MANAGEMENT during boot
4) BSOD SYSTEM_SERVICE_EXCEPTION during boot
As I keep restarting the domU, the symptoms change randomly.

At first glance these appear unrelated, but they only happen when I try
IGD passthrough. They appear both during installation and, if I finish
the install by temporarily disabling VGA passthrough, during the reboot
after the video driver install.

I'm now going to download the Windows SDK to analyse the memory dumps.
I will likely need your expertise later.

Thanks,
Timothy

--e89a8f6467177fe67d04d0904675--


--===============4997373185778400963==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4997373185778400963==--


Errors-To: xen-devel-bounces@lists.xen.org

--===============4997373185778400963==
Content-Type: multipart/alternative; boundary=e89a8f6467177fe67d04d0904675

--e89a8f6467177fe67d04d0904675
Content-Type: text/plain; charset=ISO-8859-1

Hi guys,
I've hit a win7 VGA passthrough problem and would like to analyze the memory
dump to identify the issue. Could anybody lend me a hand when I get stuck?

Some details are below; I will post follow-ups later.
Any suggestion on the triage / experiment direction would be appreciated.

I have a working build for passing through the intel HD4000 to a debian domU.
But win7 does not quite work. Issues I've *randomly* met so far:
1) black screen after windows loading files
2) failure to access the install media (I access the install media /
virtual disk through NFS)
3) BSOD memory_management during boot
4) BSOD system_service_exception during boot
As I keep restarting the domU, the symptoms change randomly.

At first glance these appear to be unrelated, but they only happen when I
try IGD passthrough. And they appear both during installation and, if I
finish the install by temporarily disabling VGA passthrough, during the
reboot after the video driver install.

I'm now going to download the Windows SDK to analyze the memory dump.
Will likely need your expertise later.

Thanks,
Timothy

--e89a8f6467177fe67d04d0904675--


--===============4997373185778400963==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4997373185778400963==--


From xen-devel-bounces@lists.xen.org Tue Dec 11 09:40:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 09:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiMJO-0004OR-Ou; Tue, 11 Dec 2012 09:39:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TiMJN-0004OJ-G6
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 09:39:37 +0000
Received: from [85.158.137.99:19029] by server-11.bemta-3.messagelabs.com id
	DD/BA-13335-85FF6C05; Tue, 11 Dec 2012 09:39:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1355218768!18846148!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27656 invoked from network); 11 Dec 2012 09:39:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 09:39:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,258,1355097600"; 
   d="scan'208";a="52551"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 09:39:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 11 Dec 2012 09:39:28 +0000
Message-ID: <1355218767.30696.6.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 11 Dec 2012 09:39:27 +0000
In-Reply-To: <alpine.DEB.2.02.1212101741460.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C5C32802000078000AF542@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212101741460.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-10 at 17:51 +0000, Stefano Stabellini wrote:

[snip, 300 lines of unnecessarily quoted material, come on guys!]

> > > +void fb_cr(void);
> > 
> > Please make this fb_create() or the like - "cr" alone could well be
> > mistaken for "carriage return".

Or control register which was my first thought.

> Actually it is supposed to be "cr": it is used by vesa_endboot to reset
> the cursor to column 0.

fb_carriage_return? It's not called so frequently that a little extra
typing will hurt.
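
For illustration, the operation under discussion amounts to something like the sketch below (a hypothetical fragment; `cursor_col` stands in for however the real vesa console code tracks the cursor column):

```c
#include <assert.h>

/* Hypothetical stand-in for the console's current cursor column. */
static unsigned int cursor_col = 7;

/* The operation being renamed: reset the cursor to column 0,
 * as vesa_endboot needs after boot output finishes. */
static void fb_carriage_return(void)
{
    cursor_col = 0;
}
```

With a name spelled out like this, neither the "control register" nor the "create" misreading is possible.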

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 10:21:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 10:21:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiMx1-0005DL-PA; Tue, 11 Dec 2012 10:20:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bjzhang@suse.com>) id 1TiMx0-0005DG-5Y
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 10:20:34 +0000
Received: from [193.109.254.147:37236] by server-1.bemta-14.messagelabs.com id
	7B/3A-25314-1F807C05; Tue, 11 Dec 2012 10:20:33 +0000
X-Env-Sender: bjzhang@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355221184!9676886!1
X-Originating-IP: [137.65.250.214]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8277 invoked from network); 11 Dec 2012 10:19:45 -0000
Received: from soto.provo.novell.com (HELO soto.provo.novell.com)
	(137.65.250.214) by server-13.tower-27.messagelabs.com with SMTP;
	11 Dec 2012 10:19:45 -0000
Received: from INET-RELAY2-MTA by soto.provo.novell.com
	with Novell_GroupWise; Tue, 11 Dec 2012 03:19:43 -0700
Message-Id: <50C779320200003000031717@soto.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 11 Dec 2012 03:19:30 -0700
From: "Bamvor Jian Zhang" <bjzhang@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Jim Fehlig" <JFEHLIG@suse.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
	<20674.16214.934271.479230@mariner.uk.xensource.com>
	<1355134766.31710.119.camel@zakaz.uk.xensource.com>
	<20677.47995.298291.120095@mariner.uk.xensource.com>
	<20678.5159.946248.90947@mariner.uk.xensource.com>
In-Reply-To: <20678.5159.946248.90947@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



 >>>Ian Jackson <Ian.Jackson@eu.citrix.com> wrote: 
> Ian Jackson writes ("Re: [PATCH] fix race condition between libvirtd event  
> handling and libxl fd deregister"): 
> > I'm not surprised that the original patch makes Bamvor's symptoms go 
> > away.  Bamvor had one of the possible races (the fd-related one) but 
> > not the other. 
>  
> Here (followups to this message, shortly) is v3 of my two-patch series 
> which after conversation with Ian C I think fully fixes the race, and 
> which I have tested now. 
>  
> Bamvor, can you test this and let us know whether it fixes your problem? 
OK, I am working on it and will send you the test results.
BTW, without your patches, I sometimes encounter the race condition during libvirt VM save.
> Thanks, 
> Ian. 
>  
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 10:26:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 10:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiN2J-0005MP-HM; Tue, 11 Dec 2012 10:26:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1TiN2H-0005MJ-9p
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 10:26:01 +0000
Received: from [193.109.254.147:32378] by server-14.bemta-14.messagelabs.com
	id 7A/49-14517-83A07C05; Tue, 11 Dec 2012 10:26:00 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1355221555!7006826!1
X-Originating-IP: [74.125.149.67]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23518 invoked from network); 11 Dec 2012 10:25:58 -0000
Received: from na3sys009aog101.obsmtp.com (HELO na3sys009aog101.obsmtp.com)
	(74.125.149.67)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 10:25:58 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob101.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUMcKM0XFKDwkq7BZKT0WBq8/MUXaZeMn@postini.com;
	Tue, 11 Dec 2012 02:25:57 PST
Received: from INHYMS172.ca.com (155.35.35.46) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Tue, 11 Dec 2012 15:55:52 +0530
Received: from INHYMS111A.ca.com ([169.254.3.239]) by INHYMS172.ca.com
	([155.35.35.46]) with mapi id 14.01.0355.002;
	Tue, 11 Dec 2012 15:55:52 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Matt Wilson <msw@amazon.com>
Thread-Topic: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
	properly when larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAgAd+rmAEvv3JQAAGcEEQAAliekAAQ4jfWA=
Date: Tue, 11 Dec 2012 10:25:51 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {2AD0BE99-4330-4D5C-B080-DD90DE784B33}
x-cr-hashedpuzzle: B6Ry EXm3 IU1h I465 JkAJ Lbm1 NMqv OkwD QFGa SgIJ VMHr
	WK7s X159 cU7/ eXHR hOTz; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAbQBzAHcAQABhAG0AYQB6AG8AbgAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {2AD0BE99-4330-4D5C-B080-DD90DE784B33};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Tue,
	11 Dec 2012 10:21:39 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDACAAVgAyAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Matt Wilson [mailto:msw@amazon.com]
> Sent: Thursday, December 06, 2012 11:05 AM
> To: Palagummi, Siva
> Cc: Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> > Matt,
> [...]
> > You are right. The above chunk which is already part of the upstream
> > is unfortunately incorrect for some cases. We also ran into issues
> > in our environment around a week back and found this problem. The
> > count will be different based on head len because of the
> > optimization that start_new_rx_buffer is trying to do for large
> > buffers.  A hole of size "offset_in_page" will be left in first page
> > during copy if the remaining buffer size is >= PAGE_SIZE. This
> > subsequently affects the copy_off as well.
> >
> > So xen_netbk_count_skb_slots actually needs a fix to calculate the
> > count correctly based on head len. And also a fix to calculate the
> > copy_off properly to which the data from fragments gets copied.
> 
> Can you explain more about the copy_off problem? I'm not seeing it.

You can clearly see below that copy_off is an input to start_new_rx_buffer while copying frags.
So if the buggy "count" calculation below is fixed based on the offset_in_page value, then the copy_off value will also change accordingly.

        count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

        copy_off = skb_headlen(skb) % PAGE_SIZE;

        if (skb_shinfo(skb)->gso_size)
                count++;

        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
                unsigned long bytes;
                while (size > 0) {
                        BUG_ON(copy_off > MAX_BUFFER_OFFSET);

                        if (start_new_rx_buffer(copy_off, size, 0)) {
                                count++;
                                copy_off = 0;
                        }

So, because of the optimization in start_new_rx_buffer for larger sizes, a correct calculation should look something like this:

      linear_len = skb_headlen(skb);
      count = (linear_len <= PAGE_SIZE)
              ? 1
              : DIV_ROUND_UP(offset_in_page(skb->data) + linear_len, PAGE_SIZE);

      copy_off = ((offset_in_page(skb->data) + linear_len) < 2 * PAGE_SIZE)
              ? linear_len % PAGE_SIZE
              : (offset_in_page(skb->data) + linear_len) % PAGE_SIZE;
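
A minimal, self-contained sketch of that rule (using the kernel's PAGE_SIZE and DIV_ROUND_UP definitions; `head_slots` and `head_copy_off` are hypothetical helper names introduced here just for illustration):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Ring slots consumed by the linear (head) area: for buffers larger
 * than one page, start_new_rx_buffer keeps the data's in-page offset,
 * so the hole at the front of the first page can cost an extra slot. */
static unsigned long head_slots(unsigned long offset, unsigned long linear_len)
{
    return (linear_len <= PAGE_SIZE)
           ? 1
           : DIV_ROUND_UP(offset + linear_len, PAGE_SIZE);
}

/* Offset within the last buffer at which frag data will continue. */
static unsigned long head_copy_off(unsigned long offset, unsigned long linear_len)
{
    return ((offset + linear_len) < 2 * PAGE_SIZE)
           ? linear_len % PAGE_SIZE
           : (offset + linear_len) % PAGE_SIZE;
}
```

For example, with a 100-byte offset and a two-page head, head_slots gives 3 where the naive DIV_ROUND_UP(skb_headlen, PAGE_SIZE) gives 2, which is exactly the undercount being discussed.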

> 
> > max_required_rx_slots also may require a fix to account the
> > additional slot that may be required in case mtu >= PAGE_SIZE. For
> > the worst case scenario at least another +1.  One thing that is still
> > puzzling here is, max_required_rx_slots seems to be assuming that
> > linear length in head will never be greater than mtu size. But that
> > doesn't seem to be the case all the time. I wonder if it requires
> > some kind of fix there or special handling when count_skb_slots
> > exceeds max_required_rx_slots.
> 
> We should only be using the number of pages required to copy the
> data. The fix shouldn't be to anticipate wasting ring space by
> increasing the return value of max_required_rx_slots().
> 
I do not think we are wasting any ring space; we are just ensuring that we have enough before proceeding.
> [...]
> 
> > > Why increment count by the /estimated/ count instead of the actual
> > > number of slots used? We have the number of slots in the line just
> > > above, in sco->meta_slots_used.
> > >
> >
> > Count actually refers to ring slots consumed rather than meta_slots
> > used.  Count can be different from meta_slots_used.
> 
> Aah, indeed. This can end up being too pessimistic if you have lots of
> frags that require multiple copy operations. I still think that it
> would be better to calculate the actual number of ring slots consumed
> by netbk_gop_skb() to avoid other bugs like the one you originally
> fixed.
> 


The counting done in count_skb_slots is exactly that. The fix above makes the two match, so there is no need to recalculate.

Thanks
Siva


> > > > >  		__skb_queue_tail(&rxq, skb);
> > > > >
> > > > > +		skb = skb_peek(&netbk->rx_queue);
> > > > > +		if (skb == NULL)
> > > > > +			break;
> > > > > +		sco = (struct skb_cb_overlay *)skb->cb;
> > > > >  		/* Filled the batch queue? */
> > > > > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> > > > >  			break;
> > > > >  	}
> > > > >
> > >
> > > This change I like.
> > >
> > > We're working on a patch to improve the buffer efficiency and the
> > > miscalculation problem. Siva, I'd be happy to re-base and re-submit
> > > this patch (with minor adjustments) as part of that work, unless
> you
> > > want to handle that.
> > >
> > > Matt
> >
> > Thanks!!  Please feel free to re-base and re-submit :-)
> 
> OK, thanks!
> 
> Matt


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Matt Wilson [mailto:msw@amazon.com]
> Sent: Thursday, December 06, 2012 11:05 AM
> To: Palagummi, Siva
> Cc: Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> > Matt,
> [...]
> > You are right. The above chunk which is already part of the upstream
> > is unfortunately incorrect for some cases. We also ran into issues
> > in our environment around a week back and found this problem. The
> > count will be different based on head len because of the
> > optimization that start_new_rx_buffer is trying to do for large
> > buffers.  A hole of size "offset_in_page" will be left in the first
> > page during copy if the remaining buffer size is >= PAGE_SIZE. This
> > subsequently affects the copy_off as well.
> >
> > So xen_netbk_count_skb_slots actually needs a fix to calculate the
> > count correctly based on head len. And also a fix to calculate the
> > copy_off properly to which the data from fragments gets copied.
> 
> Can you explain more about the copy_off problem? I'm not seeing it.

You can clearly see below that copy_off is an input to start_new_rx_buffer while copying frags.
So if the buggy "count" calculation below is fixed based on the offset_in_page value, then the copy_off value will also change accordingly.

        count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

        copy_off = skb_headlen(skb) % PAGE_SIZE;

        if (skb_shinfo(skb)->gso_size)
                count++;

        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
                unsigned long bytes;
                while (size > 0) {
                        BUG_ON(copy_off > MAX_BUFFER_OFFSET);

                        if (start_new_rx_buffer(copy_off, size, 0)) {
                                count++;
                                copy_off = 0;
                        }

So a correct calculation should be something like the one below, because of the optimization in start_new_rx_buffer for larger sizes.

      linear_len = skb_headlen(skb);
      count = (linear_len <= PAGE_SIZE)
              ? 1
              : DIV_ROUND_UP(offset_in_page(skb->data) + linear_len, PAGE_SIZE);

      copy_off = ((offset_in_page(skb->data) + linear_len) < 2 * PAGE_SIZE)
              ? linear_len % PAGE_SIZE
              : (offset_in_page(skb->data) + linear_len) % PAGE_SIZE;

> 
> > max_required_rx_slots may also require a fix to account for the
> > additional slot that may be required in case mtu >= PAGE_SIZE. For
> > the worst-case scenario, at least another +1.  One thing that is still
> > puzzling here is, max_required_rx_slots seems to be assuming that
> > linear length in head will never be greater than mtu size. But that
> > doesn't seem to be the case all the time. I wonder if it requires
> > some kind of fix there or special handling when count_skb_slots
> > exceeds max_required_rx_slots.
> 
> We should only be using the number of pages required to copy the
> data. The fix shouldn't be to anticipate wasting ring space by
> increasing the return value of max_required_rx_slots().
> 
I do not think we are wasting any ring space; we are just ensuring that we have enough before proceeding.
> [...]
> 
> > > Why increment count by the /estimated/ count instead of the actual
> > > number of slots used? We have the number of slots in the line just
> > > above, in sco->meta_slots_used.
> > >
> >
> > Count actually refers to ring slots consumed rather than meta_slots
> > used.  Count can be different from meta_slots_used.
> 
> Aah, indeed. This can end up being too pessimistic if you have lots of
> frags that require multiple copy operations. I still think that it
> would be better to calculate the actual number of ring slots consumed
> by netbk_gop_skb() to avoid other bugs like the one you originally
> fixed.
> 


The counting done in count_skb_slots is exactly that. The fix above makes it the same, so that there is no need to recalculate it.

Thanks
Siva


> > > > >  		__skb_queue_tail(&rxq, skb);
> > > > >
> > > > > +		skb = skb_peek(&netbk->rx_queue);
> > > > > +		if (skb == NULL)
> > > > > +			break;
> > > > > +		sco = (struct skb_cb_overlay *)skb->cb;
> > > > >  		/* Filled the batch queue? */
> > > > > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> > > > >  			break;
> > > > >  	}
> > > > >
> > >
> > > This change I like.
> > >
> > > We're working on a patch to improve the buffer efficiency and the
> > > miscalculation problem. Siva, I'd be happy to re-base and re-submit
> > > this patch (with minor adjustments) as part of that work, unless
> you
> > > want to handle that.
> > >
> > > Matt
> >
> > Thanks!!  Please feel free to re-base and re-submit :-)
> 
> OK, thanks!
> 
> Matt


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 10:43:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 10:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNJ5-0005al-54; Tue, 11 Dec 2012 10:43:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TiNJ3-0005ag-0u
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 10:43:21 +0000
Received: from [85.158.139.83:63403] by server-1.bemta-5.messagelabs.com id
	74/6C-12813-84E07C05; Tue, 11 Dec 2012 10:43:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355222599!29209846!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM5Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1521 invoked from network); 11 Dec 2012 10:43:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Dec 2012 10:43:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Dec 2012 10:43:18 +0000
Message-Id: <50C71C5402000078000AF9C3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 11 Dec 2012 10:43:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <2614dd8be3a01247230c.1354687327@u109add4315675089e695>
	<50BF2574.6080702@citrix.com>
	<20121205155906.GA32088@u109add4315675089e695.ant.amazon.com>
	<50BF839002000078000AE3DF@nat28.tlf.novell.com>
	<20121205170618.GC32088@u109add4315675089e695.ant.amazon.com>
	<50C0850402000078000AE77F@nat28.tlf.novell.com>
	<20121210220108.GA11695@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121210220108.GA11695@u109add4315675089e695.ant.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: schedule: allow dom0 vCPUs to be
 re-pinned when dom0_vcpus_pin is set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.12.12 at 23:01, Matt Wilson <msw@amazon.com> wrote:
> So today if I boot Xen without dom0_vcpus_pin set, dom0's vCPUs will
> be allowed to run on any pCPU. Xen will block attempts to write
> certain MSRs (MSR_AMD64_NB_CFG, MSR_FAM10H_MMIO_CONF_BASE and
> MSR_IA32_ENERGY_PERF_BIAS). The VCPUOP_get_physid subop of the vcpu_op
> hypercall will not return the initial APIC ID or ACPI ID for dom0.
> 
> Also today, if I run "xl vcpu-pin 0 0", suddenly those MSR writes and
> the VCPUOP_get_physid hypercall will start working for vCPU 0. For

Exactly as intended.

> what it's worth, only legacy XenoLinux-derived kernels appear to use
> this hypercall during SMP boot. Upstream Linux does not.

Because that's being used for the "cpufreq=dom0" case, which I
don't think the pv-ops kernel really ever tried to support.

> I think that the real risk is in the XenoLinux SMP booting code on AMD
> processors where sometimes initial APIC ID != ACPI ID. If the CPU
> pinning changes, the ACPI ID to APIC ID mapping will be wrong. This
> broke PowerNow! when it ran in dom0.
> 
> But PowerNow! is handled by the hypervisor now. So what's the real
> danger here?

I never liked the idea of "cpufreq=dom0", and our kernels
intentionally make it impossible to use (i.e. much like
pv-ops). But since there may be people who want this, we
can't simply rip it out.

The real danger is with exactly what you describe - a careless
caller using the result of any of the then possible operations in
a context where they're not valid anymore.

And there are use cases outside of Dom0-based power
management: If carefully done, this allows implementation of
a kernel driver (extension) similar to x86's msr.ko, allowing
access to the physical CPU's MSRs. And properly implementing
the dcdbas, coretemp, and via-cputemp drivers.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 10:48:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 10:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNNU-0005i9-Rw; Tue, 11 Dec 2012 10:47:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TiNNT-0005i3-Ce
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 10:47:55 +0000
Received: from [193.109.254.147:51972] by server-7.bemta-14.messagelabs.com id
	21/91-02272-A5F07C05; Tue, 11 Dec 2012 10:47:54 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355222873!9567951!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16356 invoked from network); 11 Dec 2012 10:47:53 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	11 Dec 2012 10:47:53 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:52268 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TiNRN-0007VA-PY; Tue, 11 Dec 2012 11:51:57 +0100
Date: Tue, 11 Dec 2012 11:47:49 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <10810636308.20121211114749@eikelenboom.it>
To: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
In-Reply-To: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
References: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 5:49:16 PM, you wrote:

>>>>>>> Hi all,
>>>>>>> I have made some tests to find a good driver for FirePro V8800 on 
>>>>>>> windows 7 64bit HVM.
>>>>>>> I have been focused on 'advanced features': quad buffer and active 
>>>>>>> stereoscopy, synchronization ...
>>>>>>> The result: with all FirePro drivers (of this year), I can't get 
>>>>>>> the quad buffer/active stereoscopy feature.
>>>>>>> But they work on a native installation.
>>>>>> Can you describe the setup a little more?
>>>>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>>>>
>>>>> It's a setup used in a CAVE system. I try (and it works, minus some
>>>>> issues) to virtualize 'virtual reality contexts' that need full 
>>>>> graphics card features.
>>>>>
>>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>>
>>>>> cores_per_socket : 4
>>>>>
>>>>> threads_per_core : 2
>>>>>
>>>>> cpu_mhz : 2660
>>>>>
>>>>> total_memory : 4079
>>>>>
>>>>>> How many graphic cards per guest?
>>>>> One card per guest.
>>>>>
>>>>>> How many guests? On how many hosts?
>>>>> One guest per computer.
>>>>>
>>>> And of course, I just thought of some other questions:
>>>> What version of Xen are you using?
>>>> What kernel are you using in Dom0?
>>> release                : 2.6.32-5-xen-amd64
>>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>>> machine                : x86_64
>>> nr_cpus                : 8
>>> nr_nodes               : 1
>>> cores_per_socket       : 4
>>> threads_per_core       : 2
>>> cpu_mhz                : 2660
>>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>>> virt_caps              : hvm hvm_directio
>>> total_memory           : 4079
>>> free_cpus              : 0
>>> xen_major              : 4
>>> xen_minor              : 2
>>> xen_extra              : -unstable
>>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>> xen_scheduler          : credit
>>> xen_pagesize           : 4096
>>> platform_params        : virt_start=0xffff800000000000
>>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>>> xen_commandline        : placeholder
>>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>>> xend_config_format     : 4
>>>
>>> I will change to a newer version and use the xl toolstack once VGA passthrough is supported.
>>>
>>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>>> Yes
>>>
>>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>>> (Catalyst 12.10 WHQL).
>>>>>>> But this driver becomes unstable when an application using active 
>>>>>>> stereo and synchronization is closed:
>>>>>>> -The synchronization between two computers is lost.
>>>>>>> -The CCC can crash when the synchronization is made again.
>>>>>>> Someone have any clues about this?
>>>>>> I don't know exactly how this works on AMD/ATI graphics cards, but 
>>>>>> I have worked with synchronisation on other graphics cards about 7 
>>>>>> years ago, so I have some idea of how you solve the various 
>>>>>> problems.
>>>>>> What I don't quite understand is why it would be different between 
>>>>>> a virtual environment and the bare-metal ("native") install. My 
>>>>>> immediate guess is that there is a timing difference, for one of 
>>>>>> three reasons:
>>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>>> 2. Interrupt delays due to hypervisor.
>>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>>> I don't think those are easy to work around (as they all have to 
>>>>>> "happen" in a virtual system), but I also don't REALLY understand 
>>>>>> why this should cause problems in the first place, as there isn't 
>>>>>> any guarantee as to the timings of either memory reads, interrupt 
>>>>>> latency/responsiveness or CPU availability in Windows, so the same 
>>>>>> problem would appear in native systems as well, given "the right"
>>>>>> circumstances.
>>>>>> What exactly is the crash in CCC?
>>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>>> Windows "service" to handle certain requests from the driver that 
>>>>>> can't be done in kernel mode [or shouldn't be done in the driver in 
>>>>>> general]).
>>>>> After the application is closed, I launch the Catalyst Control 
>>>>> Center; the synchronization state seems to be good, but there is no 
>>>>> synchronization.
>>>>>
>>>>> If I try to apply any modifications to the synchronization (synchro 
>>>>> server or client), CCC freezes and I need to kill it manually.
>>>>>
>>>>> I can set the synchronization back after this.
>>>>>
>>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>>> I've made a bunch of tests this morning:
>>> -CCC crashes when I've got two displays and I set one to be the synchronization server and the other a client at the same time. When I set the server, apply this configuration, and set the client afterwards, it doesn't crash.
>>> -If my application (Virtools) crashes, synchronization is reset.
>>> -Eyes are sometimes inverted with the same trigger edge.
>>I saw that problem with the product I was working on once or twice. 
>>It makes it look really "confusing". This was a settings problem in my case (because I wrote my own "controls", I could set almost every aspect of everything that could possibly be changed, with a very basic command line application that interacted pretty straight down to the driver - with the usual caveat of "make sure you know what you are doing" - the normal GUI Control Panel setup was much more "you can only set things that make sense for you to set"). That is probably not really your problem... But it could be a configuration issue in the driver or application, of course.
>>
>>>
>>> I've got all these behaviors with both HVM and native installations under Windows 7 64-bit.  So I think it's clearly a software issue.
>>>
>>> Next step:  7 32bits.
>>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>>
> Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.

>>>> Whilst I'm all for using Xen for everything, there are sometimes situations when "not using Xen" may actually be the right choice. Can you explain why running your guests in Xen is of benefit? [If you'd like to answer "none of your business", that's fine, but it may help to understand what the "business case" is for this].
>>> The objective is to mutualize a graphical cluster for immersive systems. Virtual Reality applications are sensitive in their configurations; it's a pain to manage multiple users, and it's nearly impossible to have different configurations for these users. Usually immersive systems are stuck in one configuration (OS, drivers, applications ...), and only a few people are allowed to change settings.
>>> The idea is to use Xen and VGA passthrough to create personal environments that allow every user to make their own configuration without impact on the others.
>>>
>>> Being able to have VR configurations in virtual machines and to run them with 3D features is a serious benefit for Virtual Reality users.
>>
>>Thanks for your explanation. It makes some sense; however, I feel that it also makes things more complex - if the system is so sensitive, it may get "upset" simply by the differences in system behaviour that you automatically get from running on a virtual machine vs. "bare metal". Don't let that stop you; I'm just saying that Xen (or other virtualisation products) may not be quite as transparent as it really should be.
>>

> It's not the hardware configurations that are so sensitive, but more the software configurations and driver versions.
> I've already made some demonstrations of Xen's capabilities in our use case, and there has been no negative feedback. I think the HVM behavior is perfect for our uses, except for these driver issues.

> I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs.
> My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough.
> I think it's a bug due to my installation (Xen 4.2.0-unstable).

> I just got a new test computer, a Dell Precision T7500 with a V9800 FirePro, maybe I will have the time to test something tomorrow!

Hi,

My experiences with CCC and VGA passthrough aren't great either (bluescreens most of the time).
It works now with the 12.11 Catalyst beta drivers and without installing CCC: I just install the driver and OpenCL packages from the c:\AMD\packages dir after stopping the installer once the total package is unpacked.
I don't know whether the stereoscopy also comes in a separate package.

It runs OpenCL fine now, with near-native performance (in the LuxMark benchmark) :-)
(with xen-unstable, qemu-upstream, Linux 3.7 dom0, a Win7 guest, 12.11 Catalyst drivers, and an ATI Radeon HD 6570 at the moment)

--
Sander

> Aurelien 
>>--
>>Mats
>>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 10:48:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 10:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNNU-0005i9-Rw; Tue, 11 Dec 2012 10:47:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TiNNT-0005i3-Ce
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 10:47:55 +0000
Received: from [193.109.254.147:51972] by server-7.bemta-14.messagelabs.com id
	21/91-02272-A5F07C05; Tue, 11 Dec 2012 10:47:54 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355222873!9567951!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16356 invoked from network); 11 Dec 2012 10:47:53 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	11 Dec 2012 10:47:53 -0000
Received: from 136-70-ftth.on.nl ([88.159.70.136]:52268 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TiNRN-0007VA-PY; Tue, 11 Dec 2012 11:51:57 +0100
Date: Tue, 11 Dec 2012 11:47:49 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <10810636308.20121211114749@eikelenboom.it>
To: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
In-Reply-To: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
References: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 5:49:16 PM, you wrote:

>>>>>>> Hi all,
>>>>>>> I have made some tests to find a good driver for the FirePro V8800 on 
>>>>>>> Windows 7 64-bit HVM.
>>>>>>> I have been focused on "advanced features": quad buffer and active 
>>>>>>> stereoscopy, synchronization ...
>>>>>>> The result, for all FirePro drivers (of this year): I can't get 
>>>>>>> the quad buffer/active stereoscopy feature.
>>>>>>> But they work on a native installation.
>>>>>> Can you describe the setup a little more?
>>>>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>>>>
>>>>> It's a setup used in a CAVE system; I try (and it works, minus some
>>>>> issues) to virtualize "virtual reality contexts" that need full 
>>>>> graphics card features.
>>>>>
>>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>>
>>>>> cores_per_socket : 4
>>>>>
>>>>> threads_per_core : 2
>>>>>
>>>>> cpu_mhz : 2660
>>>>>
>>>>> total_memory : 4079
>>>>>
>>>>>> How many graphic cards per guest?
>>>>> One card per guest.
>>>>>
>>>>>> How many guests? On how many hosts?
>>>>> One guest per computer.
>>>>>
>>>> And of course, I just thought of some other questions:
>>>> What version of Xen are you using?
>>>> What kernel are you using in Dom0?
>>> release                : 2.6.32-5-xen-amd64
>>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>>> machine                : x86_64
>>> nr_cpus                : 8
>>> nr_nodes               : 1
>>> cores_per_socket       : 4
>>> threads_per_core       : 2
>>> cpu_mhz                : 2660
>>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>>> virt_caps              : hvm hvm_directio
>>> total_memory           : 4079
>>> free_cpus              : 0
>>> xen_major              : 4
>>> xen_minor              : 2
>>> xen_extra              : -unstable
>>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>> xen_scheduler          : credit
>>> xen_pagesize           : 4096
>>> platform_params        : virt_start=0xffff800000000000
>>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>>> xen_commandline        : placeholder
>>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>>> xend_config_format     : 4
>>>
>>> I will change to a newer version and use the xl toolstack once VGA passthrough is supported.
>>>
>>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>>> Yes
>>>
>>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>>> (Catalyst 12.10 WHQL).
>>>>>>> But this driver becomes unstable when an application using active 
>>>>>>> stereo and synchronization is closed:
>>>>>>> -The synchronization between two computers is lost.
>>>>>>> -The CCC can crash when the synchronization is made again.
>>>>>>> Does anyone have any clues about this?
>>>>>> I don't know exactly how this works on AMD/ATI graphics cards, but 
>>>>>> I have worked with synchronisation on other graphics cards about 7 
>>>>>> years ago, so I have some idea of how you solve the various 
>>>>>> problems.
>>>>>> What I don't quite understand is why it would be different between 
>>>>>> a virtual environment and the bare-metal ("native") install. My 
>>>>>> immediate guess is that there is a timing difference, for one of 
>>>>>> a few reasons:
>>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>>> 2. Interrupt delays due to the hypervisor.
>>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>>> I don't think those are easy to work around (as they all have to 
>>>>>> "happen" in a virtual system), but I also don't REALLY understand 
>>>>>> why this should cause problems in the first place, as there isn't 
>>>>>> any guarantee as to the timings of either memory reads, interrupt 
>>>>>> latency/responsiveness or CPU availability in Windows, so the same 
>>>>>> problem would appear in native systems as well, given "the right"
>>>>>> circumstances.
>>>>>> What exactly is the crash in CCC?
>>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>>> Windows "service" to handle certain requests from the driver that 
>>>>>> can't be done in kernel mode [or shouldn't be done in the driver in 
>>>>>> general]).
>>>>> After the application is closed, I launch the Catalyst Control 
>>>>> Center; the synchronization state seems to be good. But there is no 
>>>>> synchronization.
>>>>>
>>>>> If I try to apply any modification of the synchronization (synchro 
>>>>> server or client), CCC freezes and I need to kill it manually.
>>>>>
>>>>> I can set the synchronization back after this.
>>>>>
>>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>>> I've made a bunch of tests this morning:
>>> -CCC crashes when I've got two displays: I set one to be the synchronization server and the other a client at the same time. When I set the server, apply this configuration, and set the client afterwards, it doesn't crash.
>>> -If my application (Virtools) crashes, synchronization is reset.
>>> -Eyes are sometimes inverted with the same trigger edge.
>>I saw that problem with the product I was working on once or twice. 
>>Makes it look really "confusing". This was a settings problem in my case (because I wrote my own "controls", I could set almost every aspect of everything that could possibly be changed, with a very basic command line application that interacted pretty much directly with the driver - with the usual caveat of "make sure you know what you are doing" - whereas the normal GUI control panel setup was much more "you can only set things that make sense for you to set"). That is probably not really what your problem is... But it could be a driver or application configuration issue, of course.
>>
>>>
>>> I've got all these behaviors with both HVM and native installations under Windows 7 64-bit.  So I think it's clearly a software issue.
>>>
>>> Next step: Windows 7 32-bit.
>>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>>
> Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.

>>>> Whilst I'm all for using Xen for everything, there are sometimes situations when "not using Xen" may actually be the right choice. Can you explain why running your guests in Xen is of benefit? [If you'd like to answer "none of your business", that's fine, but it may help to understand what the "business case" is for this].
>>> The objective is to mutualize a graphical cluster for immersive systems. Virtual Reality applications are sensitive in their configuration; it's a pain to manage multiple users and it's nearly impossible to have different configurations for these users. Usually immersive systems are stuck in one configuration (OS, drivers, applications ...), and only a few people are allowed to change settings.
>>> The idea is to use Xen and VGA passthrough to create personal environments that allow every user to make their own configuration without impacting others.
>>>
>>> Being able to have VR configurations in virtual machines and to run them with 3D features is a serious benefit for Virtual Reality users.
>>
>>Thanks for your explanation. It makes some sense; however, I feel that it also makes things more complex - if the system is so sensitive, it may get "upset" simply by the differences in system behaviour that you automatically get from running on a virtual machine vs. "bare metal". Don't let that stop you, I'm just saying there may be issues because Xen (or other virtualisation products) is not quite as transparent as it really should be.
>>

> It's not the hardware configuration that is so sensitive but more the software configuration and driver versions.
> I've already made some demonstrations of Xen capabilities in our use case; there has been no negative feedback. I think that HVM behavior is perfect for our uses, except for these driver issues.

> I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs.
> My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough.
> I think it's a bug due to my installation (Xen 4.2.0-unstable).

> I just got a new test computer, a Dell Precision T7500 with a V9800 FirePro, maybe I will have the time to test something tomorrow!

Hi,

My experiences with CCC and VGA passthrough aren't great either (bluescreens most of the time).
It works now with the 12.11 Catalyst beta drivers and without installing CCC: I just install the driver and OpenCL packages from the c:\AMD\packages dir after stopping the installer once the complete package is unpacked.
I don't know if the stereoscopy also comes in a separate package.

It runs OpenCL fine now, with near-native performance (with the LuxMark benchmark) :-)
(with xen-unstable, qemu-upstream, Linux 3.7 dom0, Win7 guest, 12.11 Catalyst drivers, an ATI Radeon HD 6570 at the moment)

--
Sander

> Aurelien 
>>--
>>Mats
>>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 10:51:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 10:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNQ8-0005qp-L8; Tue, 11 Dec 2012 10:50:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TiNQ6-0005qg-MV
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 10:50:38 +0000
Received: from [85.158.137.99:30391] by server-11.bemta-3.messagelabs.com id
	D5/1C-13335-DFF07C05; Tue, 11 Dec 2012 10:50:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355223036!18803859!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM5Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4931 invoked from network); 11 Dec 2012 10:50:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Dec 2012 10:50:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Dec 2012 10:50:35 +0000
Message-Id: <50C71E0B02000078000AF9D4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 11 Dec 2012 10:50:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C5C39302000078000AF545@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212101752450.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212101752450.17523@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.12.12 at 18:54, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 10 Dec 2012, Jan Beulich wrote:
>> >>> On 07.12.12 at 19:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> > --- a/xen/drivers/video/vesa.c
>> > +++ b/xen/drivers/video/vesa.c
>> > @@ -13,20 +13,15 @@
>> >  #include <asm/io.h>
>> >  #include <asm/page.h>
>> >  #include "font.h"
>> > +#include "fb.h"
>> >  
>> >  #define vlfb_info    vga_console_info.u.vesa_lfb
>> > -#define text_columns (vlfb_info.width / font->width)
>> > -#define text_rows    (vlfb_info.height / font->height)
>> >  
>> > -static void vesa_redraw_puts(const char *s);
>> > -static void vesa_scroll_puts(const char *s);
>> > +static void lfb_flush(void);
>> >  
>> > -static unsigned char *lfb, *lbuf, *text_buf;
>> > -static unsigned int *__initdata line_len;
>> > +static unsigned char *lfb;
>> 
>> What's the point of retaining this, when ...
>> 
>> >  static const struct font_desc *font;
>> >  static bool_t vga_compat;
>> > -static unsigned int pixel_on;
>> > -static unsigned int xpos, ypos;
>> >  
>> >  static unsigned int vram_total;
>> >  integer_param("vesa-ram", vram_total);
>> > @@ -87,29 +82,26 @@ void __init vesa_early_init(void)
>> >  
>> >  void __init vesa_init(void)
>> >  {
>> > -    if ( !font )
>> > -        goto fail;
>> > -
>> > -    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
>> > -    if ( !lbuf )
>> > -        goto fail;
>> > +    struct fb_prop fbp;
>> >  
>> > -    text_buf = xzalloc_bytes(text_columns * text_rows);
>> > -    if ( !text_buf )
>> > -        goto fail;
>> > +    if ( !font )
>> > +        return;
>> >  
>> > -    line_len = xzalloc_array(unsigned int, text_columns);
>> > -    if ( !line_len )
>> > -        goto fail;
>> > +    fbp.font = font;
>> > +    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
>> > +    fbp.bytes_per_line = vlfb_info.bytes_per_line;
>> > +    fbp.width = vlfb_info.width;
>> > +    fbp.height = vlfb_info.height;
>> > +    fbp.flush = lfb_flush;
>> > +    fbp.text_columns = vlfb_info.width / font->width;
>> > +    fbp.text_rows = vlfb_info.height / font->height;
>> >  
>> > -    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
>> > +    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);
>> 
>> ... you set up the consolidated field at the same time anyway?
> 
> It is used by vesa_mtrr_init and vesa_endboot.
> At the moment I don't store fbp in a static variable so after vesa_init
> returns, vesa.c doesn't have a way to retrieve it.
> Maybe I should introduce a "struct fb_prop fbp" static variable that
> replaces lfb?

Oh, I didn't realize that the static is private to the new fb.c. I'd
think sharing this would make sense (and the specific driver could
then fill the generic fields there directly). But I'll leave that up to
you then which way you prefer - it just looked like unnecessary
redundancy to me.
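[For illustration only: the shared-static arrangement being discussed could look roughly like the simplified C sketch below. The struct layout, field set, and function name here are assumptions made for the example, not the actual Xen patch code.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: instead of vesa.c keeping its own
 * "static unsigned char *lfb" while fb.c holds a private copy of the
 * same information, one shared static "struct fb_prop" carries all
 * framebuffer properties, and the specific driver fills the generic
 * fields directly.  Illustrative only. */

struct font_desc {
    unsigned int width, height;     /* glyph size in pixels */
};

struct fb_prop {
    const struct font_desc *font;
    unsigned char *lfb;             /* mapped linear framebuffer */
    unsigned int bits_per_pixel;
    unsigned int bytes_per_line;
    unsigned int width, height;     /* mode geometry in pixels */
    unsigned int text_columns, text_rows;
    void (*flush)(void);
};

/* Shared between the generic fb code and the specific driver. */
static struct fb_prop fbp;

static void fb_init(const struct font_desc *font, unsigned char *mapped_lfb,
                    unsigned int width, unsigned int height,
                    unsigned int bpp, unsigned int pitch, void (*flush)(void))
{
    fbp.font = font;
    fbp.lfb = mapped_lfb;
    fbp.width = width;
    fbp.height = height;
    fbp.bits_per_pixel = bpp;
    fbp.bytes_per_line = pitch;
    fbp.flush = flush;
    /* Derived text geometry, as in the patch. */
    fbp.text_columns = width / font->width;
    fbp.text_rows = height / font->height;
}
```

With this arrangement, helpers like vesa_mtrr_init and vesa_endboot could read fbp.lfb directly instead of a duplicated pointer.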

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 11:18:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNrB-0006B6-MT; Tue, 11 Dec 2012 11:18:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiNr9-0006B1-Nc
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 11:18:35 +0000
Received: from [193.109.254.147:25598] by server-1.bemta-14.messagelabs.com id
	A0/5F-25314-A8617C05; Tue, 11 Dec 2012 11:18:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355224712!9704768!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14640 invoked from network); 11 Dec 2012 11:18:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 11:18:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,258,1355097600"; 
   d="scan'208";a="55926"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 11:18:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 11:18:32 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TiNr6-0003X6-2e; Tue, 11 Dec 2012 11:18:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TiNr5-0007kg-Tq;
	Tue, 11 Dec 2012 11:18:31 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20679.5765.810785.732270@mariner.uk.xensource.com>
Date: Tue, 11 Dec 2012 11:18:29 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C676B1.4070609@suse.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
	<20674.16214.934271.479230@mariner.uk.xensource.com>
	<1355134766.31710.119.camel@zakaz.uk.xensource.com>
	<20677.47995.298291.120095@mariner.uk.xensource.com>
	<50C676B1.4070609@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
> Ian Jackson wrote:
> > Yes, I think that's what Bamvor meant but I don't think it's correct
> > that such a lock eliminates the race.  libvirt has to release that
> > lock before making the callback (to follow the libxl locking rules
> > which are necessary to avoid deadlock).
> 
> And it does.  The event loop lock is released just before invoking the
> callback, and re-acquired just after the callback returns.

Right.  So I think the race that my patches are trying to fix does exist.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 11:19:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNrh-0006DF-2W; Tue, 11 Dec 2012 11:19:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1TiNrf-0006Cu-K9; Tue, 11 Dec 2012 11:19:07 +0000
Received: from [85.158.143.35:49500] by server-1.bemta-4.messagelabs.com id
	A5/E8-28401-AA617C05; Tue, 11 Dec 2012 11:19:06 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355224735!14333564!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=2.0 required=7.0 tests=RATWARE_GECKO_BUILD, RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8916 invoked from network); 11 Dec 2012 11:19:05 -0000
Received: from mail-la0-f45.google.com (HELO mail-la0-f45.google.com)
	(209.85.215.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 11:19:05 -0000
Received: by mail-la0-f45.google.com with SMTP id p9so3344968laa.32
	for <multiple recipients>; Tue, 11 Dec 2012 03:18:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=YYU3USoDlGnhcAJlsxbaLY9ckHfsNuOvjIdSxoCfKM8=;
	b=cL+l/x9yQzM38IaQypVdpDQGYq4LFHrvTHKVCj0+CyTsIDVs1QGfZZljy8hPGrqjXA
	2y7WeTxuwLYgAUQvG/GcVePP3MpDXzz6TGhiAgC8LRpfIDLGpZVV2l2qVnXTBk371do9
	t4zzwWBJ7seSdJgLGyIRZYewFSAgQlUUj1U6bfom+QUtoGwNnEimZJhbnL421m3dU1Ls
	xuKspo2mYc5fuMqqfUx3vDp7bcjv53XZQKE6l5cPPynYwASZG38Qx8vEVT/RcH80C48J
	x6n1OCsEubv1/Qrv10pZ3RzXL+W34ljIAZNz6QODkgYBvHfohaCwHut9QtW2PDfXCzIk
	kvFg==
Received: by 10.112.25.193 with SMTP id e1mr5846892lbg.94.1355224735355;
	Tue, 11 Dec 2012 03:18:55 -0800 (PST)
Received: from [172.16.26.11] (b01bf226.bb.sky.com. [176.27.242.38])
	by mx.google.com with ESMTPS id hu6sm9075736lab.13.2012.12.11.03.18.53
	(version=SSLv3 cipher=OTHER); Tue, 11 Dec 2012 03:18:54 -0800 (PST)
Message-ID: <50C7169D.7080302@xen.org>
Date: Tue, 11 Dec 2012 11:18:53 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: [Xen-devel] [Reminder] FOSDEM Virt Devroom deadline on friday,
	the 14th
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See https://lists.fosdem.org/pipermail/fosdem/2012-November/001660.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 11:20:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNsm-0006M7-LN; Tue, 11 Dec 2012 11:20:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiNsl-0006Lv-WA
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 11:20:16 +0000
Received: from [85.158.143.99:30097] by server-1.bemta-4.messagelabs.com id
	EB/BA-28401-FE617C05; Tue, 11 Dec 2012 11:20:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355224808!28947421!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32691 invoked from network); 11 Dec 2012 11:20:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 11:20:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,258,1355097600"; 
   d="scan'208";a="56015"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 11:20:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 11:20:08 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TiNse-0003Xi-Eu; Tue, 11 Dec 2012 11:20:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TiNse-0007km-Am;
	Tue, 11 Dec 2012 11:20:08 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20679.5864.226667.350928@mariner.uk.xensource.com>
Date: Tue, 11 Dec 2012 11:20:08 +0000
To: Bamvor Jian Zhang <bjzhang@suse.com>
In-Reply-To: <50C779320200003000031717@soto.provo.novell.com>
References: <0451e6041bdd88c90eee.1353395794@linux-bjrd.bjz>
	<20661.3989.258191.396175@mariner.uk.xensource.com>
	<1354101923.25834.16.camel@zakaz.uk.xensource.com>
	<20674.16214.934271.479230@mariner.uk.xensource.com>
	<1355134766.31710.119.camel@zakaz.uk.xensource.com>
	<20677.47995.298291.120095@mariner.uk.xensource.com>
	<20678.5159.946248.90947@mariner.uk.xensource.com>
	<50C779320200003000031717@soto.provo.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jim Fehlig <JFEHLIG@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] fix race condition between libvirtd event
 handling and libxl fd deregister
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bamvor Jian Zhang writes ("Re: [PATCH] fix race condition between libvirtd event handling and libxl fd deregister"):
> Ok. i am working on it.  will send the test result to u.

Marvellous.

> BTW, without your patches, i encounter the time race condition
> during libvirt save vm sometimes.

Oh, excellent, something resembling a test case :-).

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 11:24:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiNwq-0006d4-BO; Tue, 11 Dec 2012 11:24:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TiNwo-0006cu-L8
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 11:24:26 +0000
Received: from [85.158.139.211:57451] by server-10.bemta-5.messagelabs.com id
	7A/F5-13383-9E717C05; Tue, 11 Dec 2012 11:24:25 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355225063!19118456!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25251 invoked from network); 11 Dec 2012 11:24:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 11:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,258,1355097600"; 
   d="scan'208";a="232063"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 11:24:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 06:24:22 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TiNwk-0003lf-Fm;
	Tue, 11 Dec 2012 11:24:22 +0000
Date: Tue, 11 Dec 2012 11:24:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355218767.30696.6.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212111124080.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212071747200.8801@kaball.uk.xensource.com>
	<1354903377-13068-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<50C5C32802000078000AF542@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212101741460.17523@kaball.uk.xensource.com>
	<1355218767.30696.6.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 11 Dec 2012, Ian Campbell wrote:
> On Mon, 2012-12-10 at 17:51 +0000, Stefano Stabellini wrote:
> 
> [snip, 300 lines of unnecessarily quoted material, come on guys!]
> 
> > > > +void fb_cr(void);
> > > 
> > > Please make this fb_create() or alike - "cr" alone could well be
> > > mistaken as "carriage return".
> 
> Or control register which was my first thought.
> 
> > Actually it is supposed to be "cr": it is used by vesa_endboot to reset
> > the cursor to column 0.
> 
> fb_carriage_return? It's not called so frequently that a little extra
> typing will hurt.

Yeah.. good idea
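The semantics being named here are small enough to state in a few lines. This is an illustrative sketch only, not the actual Xen code; `cursor_col` and `cursor_row` are assumed state variables, not real symbols:

```c
/* Illustrative sketch of the operation under discussion: a framebuffer
 * "carriage return" resets the cursor to column 0 without touching the
 * row, which is what vesa_endboot needs.  fb_carriage_return is the
 * clearer name proposed above; cursor_col/cursor_row are invented
 * state for the example. */
unsigned int cursor_col = 42;
unsigned int cursor_row = 3;

void fb_carriage_return(void)
{
    cursor_col = 0;   /* back to column 0; row is unchanged */
}
```

Spelled out like this, it is clear the function is neither a "create" nor a control-register accessor, which is why the unabbreviated name avoids the confusion.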

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 11 11:46:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiOHe-00074x-3S; Tue, 11 Dec 2012 11:45:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TiOHc-00074p-Q3
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 11:45:57 +0000
Received: from [85.158.139.211:42541] by server-8.bemta-5.messagelabs.com id
	53/92-15003-4FC17C05; Tue, 11 Dec 2012 11:45:56 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355226354!19123053!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16876 invoked from network); 11 Dec 2012 11:45:54 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-3.tower-206.messagelabs.com with SMTP;
	11 Dec 2012 11:45:54 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 57B68C5618D;
	Tue, 11 Dec 2012 11:45:43 +0000 (GMT)
Date: Tue, 11 Dec 2012 11:45:42 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Xen Devel <xen-devel@lists.xen.org>
Message-ID: <22A70860509BC87BE892A601@nimrod.local>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In this email message:
 http://www.gossamer-threads.com/lists/xen/devel/256737
a patch to libxl_domain_suspend is presented to enable live migrate
with the qemu device model in xen-unstable. This patch has *NOT*
been taken into the 4.2.1 tree (as far as I can see).

In 4.2.0, adding this patch and attempting a live migrate using
the qemu device model (using xl) produces a seg fault due to
uninitialised variables.

Should I expect live-migrate of qemu-dm vms to work under 4.2.1?
If so, should the patch (or a modification thereof) to remove
the check from libxl_domain_suspend be applied to 4.2.1-testing
or is there more to do?

I am very happy to commit test resources to this.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffeffff700 (LWP 5995)]
0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0  0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007ffff6d4e970 in libxl__domain_suspend_common_switch_qemu_logdirty (domid=<optimized out>, enable=<optimized out>, user=0x7ffff00024e8) at libxl_dom.c:728
#2  0x00007ffff6d5c1ae in libxl__srm_callout_received_save (msg=0x7fffefffe41a " error", len=<optimized out>, user=0x7ffff00024e8) at _libxl_save_msgs_callout.c:162
#3  0x00007ffff6d5b736 in helper_stdout_readable (egc=0x7fffefffe5a0, ev=0x7ffff0002560, fd=38, events=<optimized out>, revents=<optimized out>) at libxl_save_callout.c:283
#4  0x00007ffff6d601f1 in afterpoll_internal (egc=0x7fffefffe5a0, poller=0x7ffff00028c0, nfds=4, fds=0x7ffff00048b0, now=...) at libxl_event.c:948
#5  0x00007ffff6d604db in eventloop_iteration (egc=0x7fffefffe5a0, poller=0x7ffff00028c0) at libxl_event.c:1368
#6  0x00007ffff6d616b3 in libxl__ao_inprogress (ao=0x7ffff0001d40, file=<optimized out>, line=<optimized out>, func=<optimized out>) at libxl_event.c:1614
#7  0x00007ffff6d3ab75 in libxl_domain_suspend (ctx=<optimized out>, domid=1, fd=10, flags=<optimized out>, ao_how=<optimized out>) at libxl.c:796
#8  0x000000000043677e in migrate_domain_send (ctx=0x7ffff0008860, domid=1, fd=10) at hypervisor/xen_libxl.c:587
#9  0x000000000043698a in live_migrate_send (hyperconn=0x7ffff0001c70, server=0x7ffff0001cb0, node_ip=0x7ffff00041e0 "10.157.128.20", fd=10) at hypervisor/xen_libxl.c:647
#10 0x0000000000422a70 in migrate_server_action (request=0x7ffff0002980) at action/node_action.c:1287
#11 0x00000000004240c1 in runAction (socket_fd=8) at action/handleaction.c:138
#12 0x00000000004179bd in runcomm (socket=0x8) at xvpagent.c:253
#13 0x0000000000427502 in trackedthread_run (arg=0x66df20) at util/util.c:179
#14 0x00007ffff5c9ce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#15 0x00007ffff59ca4bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#16 0x0000000000000000 in ?? ()
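The failure mode frame #1 points at (a crash inside libc on behalf of `libxl__domain_suspend_common_switch_qemu_logdirty`) is the classic shape of an uninitialised-variable bug. A generic illustration, not the actual libxl code, with `safe_len` and its parameter invented for the example:

```c
/* Generic illustration (not libxl's code): a pointer that is only
 * assigned on some paths, then passed to a libc string function, gives
 * exactly the kind of crash shown in the backtrace above.  Initialising
 * at declaration and guarding the use makes the behaviour defined. */
#include <stddef.h>
#include <string.h>

size_t safe_len(const char *maybe_msg)
{
    const char *msg = NULL;            /* initialise; never leave garbage */
    if (maybe_msg && maybe_msg[0])
        msg = maybe_msg;
    return msg ? strlen(msg) : 0;      /* guarded use, no wild dereference */
}
```

Which is consistent with the crash appearing only when the out-of-tree patch removes the check in `libxl_domain_suspend`: the normally-unreachable path runs with state that was never set up.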


-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 11:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiOLO-0007I7-B8; Tue, 11 Dec 2012 11:49:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiOLL-0007HV-Oa
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 11:49:48 +0000
Received: from [85.158.138.51:44186] by server-16.bemta-3.messagelabs.com id
	24/4F-27634-ADD17C05; Tue, 11 Dec 2012 11:49:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355226585!28372405!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26121 invoked from network); 11 Dec 2012 11:49:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 11:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,258,1355097600"; 
   d="scan'208";a="57002"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 11:49:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 11:49:24 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiOKy-0003iy-7C;
	Tue, 11 Dec 2012 11:49:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiOKx-0007Rq-0T;
	Tue, 11 Dec 2012 11:49:23 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14667-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 11:49:23 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14667: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14667 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14667/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14563
 test-amd64-amd64-xl-sedf     11 guest-localmigrate       fail blocked in 14563

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  1206a3526673
baseline version:
 xen                  b306bce61341

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=1206a3526673
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 1206a3526673
+ branch=xen-4.2-testing
+ revision=1206a3526673
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r 1206a3526673 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 4 changesets with 4 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 11:52:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 11:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiONy-0007kV-6V; Tue, 11 Dec 2012 11:52:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TiONw-0007kB-DY
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 11:52:28 +0000
Received: from [85.158.137.99:57450] by server-7.bemta-3.messagelabs.com id
	3D/DE-23008-B7E17C05; Tue, 11 Dec 2012 11:52:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1355226743!13495725!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM5Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2060 invoked from network); 11 Dec 2012 11:52:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Dec 2012 11:52:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Dec 2012 11:52:23 +0000
Message-Id: <50C72C8502000078000AFA6D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 11 Dec 2012 11:52:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <matthew.fioravante@jhuapl.edu>, "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169603-26991-1-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1355169603-26991-1-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] drivers/tpm-xen: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.12.12 at 21:00, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> This changes the vTPM shared page ABI from a copy of the Xen network
> interface to a single-page interface that better reflects the expected
> behavior of a TPM: only a single request packet can be sent at any given
> time, and every packet sent generates a single response packet. This
> protocol change should also increase efficiency as it avoids mapping and
> unmapping grants when possible.

Given

-#define TPMIF_TX_RING_SIZE 1

where was the problem?

Also, the patch replaces the old interface with the new one - how
is that compatible with older implementations?

The positive aspect is that the new interface isn't address-size
dependent any more (and hence mixed-size backends and frontends can
work together, which isn't the case for the original one).

Jan

> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  drivers/char/tpm/xen-tpmfront_if.c | 195 +++++++------------------------------
>  include/xen/interface/io/tpmif.h   |  42 ++------
>  2 files changed, 42 insertions(+), 195 deletions(-)
> 
> diff --git a/drivers/char/tpm/xen-tpmfront_if.c 
> b/drivers/char/tpm/xen-tpmfront_if.c
> index ba7fad8..374ad1b 100644
> --- a/drivers/char/tpm/xen-tpmfront_if.c
> +++ b/drivers/char/tpm/xen-tpmfront_if.c
> @@ -53,7 +53,7 @@
>  struct tpm_private {
>  	struct tpm_chip *chip;
>  
> -	struct tpmif_tx_interface *tx;
> +	struct vtpm_shared_page *page;
>  	atomic_t refcnt;
>  	unsigned int evtchn;
>  	u8 is_connected;
> @@ -61,8 +61,6 @@ struct tpm_private {
>  
>  	spinlock_t tx_lock;
>  
> -	struct tx_buffer *tx_buffers[TPMIF_TX_RING_SIZE];
> -
>  	atomic_t tx_busy;
>  	void *tx_remember;
>  
> @@ -73,15 +71,7 @@ struct tpm_private {
>  	int ring_ref;
>  };
>  
> -struct tx_buffer {
> -	unsigned int size;	/* available space in data */
> -	unsigned int len;	/* used space in data */
> -	unsigned char *data;	/* pointer to a page */
> -};
> -
> -
>  /* locally visible variables */
> -static grant_ref_t gref_head;
>  static struct tpm_private *my_priv;
>  
>  /* local function prototypes */
> @@ -92,8 +82,6 @@ static int tpmif_connect(struct xenbus_device *dev,
>  		struct tpm_private *tp,
>  		domid_t domid);
>  static DECLARE_TASKLET(tpmif_rx_tasklet, tpmif_rx_action, 0);
> -static int tpmif_allocate_tx_buffers(struct tpm_private *tp);
> -static void tpmif_free_tx_buffers(struct tpm_private *tp);
>  static void tpmif_set_connected_state(struct tpm_private *tp,
>  		u8 newstate);
>  static int tpm_xmit(struct tpm_private *tp,
> @@ -101,52 +89,6 @@ static int tpm_xmit(struct tpm_private *tp,
>  		void *remember);
>  static void destroy_tpmring(struct tpm_private *tp);
>  
> -static inline int
> -tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
> -		int isuserbuffer)
> -{
> -	int copied = len;
> -
> -	if (len > txb->size)
> -		copied = txb->size;
> -	if (isuserbuffer) {
> -		if (copy_from_user(txb->data, src, copied))
> -			return -EFAULT;
> -	} else {
> -		memcpy(txb->data, src, copied);
> -	}
> -	txb->len = len;
> -	return copied;
> -}
> -
> -static inline struct tx_buffer *tx_buffer_alloc(void)
> -{
> -	struct tx_buffer *txb;
> -
> -	txb = kzalloc(sizeof(struct tx_buffer), GFP_KERNEL);
> -	if (!txb)
> -		return NULL;
> -
> -	txb->len = 0;
> -	txb->size = PAGE_SIZE;
> -	txb->data = (unsigned char *)__get_free_page(GFP_KERNEL);
> -	if (txb->data == NULL) {
> -		kfree(txb);
> -		txb = NULL;
> -	}
> -
> -	return txb;
> -}
> -
> -
> -static inline void tx_buffer_free(struct tx_buffer *txb)
> -{
> -	if (txb) {
> -		free_page((long)txb->data);
> -		kfree(txb);
> -	}
> -}
> -
>  /**************************************************************
>    Utility function for the tpm_private structure
>   **************************************************************/
> @@ -162,15 +104,12 @@ static void tpm_private_put(void)
>  	if (!atomic_dec_and_test(&my_priv->refcnt))
>  		return;
>  
> -	tpmif_free_tx_buffers(my_priv);
>  	kfree(my_priv);
>  	my_priv = NULL;
>  }
>  
>  static struct tpm_private *tpm_private_get(void)
>  {
> -	int err;
> -
>  	if (my_priv) {
>  		atomic_inc(&my_priv->refcnt);
>  		return my_priv;
> @@ -181,9 +120,6 @@ static struct tpm_private *tpm_private_get(void)
>  		return NULL;
>  
>  	tpm_private_init(my_priv);
> -	err = tpmif_allocate_tx_buffers(my_priv);
> -	if (err < 0)
> -		tpm_private_put();
>  
>  	return my_priv;
>  }
> @@ -218,22 +154,22 @@ int vtpm_vd_send(struct tpm_private *tp,
>  static int setup_tpmring(struct xenbus_device *dev,
>  		struct tpm_private *tp)
>  {
> -	struct tpmif_tx_interface *sring;
> +	struct vtpm_shared_page *sring;
>  	int err;
>  
>  	tp->ring_ref = GRANT_INVALID_REF;
>  
> -	sring = (void *)__get_free_page(GFP_KERNEL);
> +	sring = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
>  	if (!sring) {
>  		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
>  		return -ENOMEM;
>  	}
> -	tp->tx = sring;
> +	tp->page = sring;
>  
> -	err = xenbus_grant_ring(dev, virt_to_mfn(tp->tx));
> +	err = xenbus_grant_ring(dev, virt_to_mfn(tp->page));
>  	if (err < 0) {
>  		free_page((unsigned long)sring);
> -		tp->tx = NULL;
> +		tp->page = NULL;
>  		xenbus_dev_fatal(dev, err, "allocating grant reference");
>  		goto fail;
>  	}
> @@ -256,9 +192,9 @@ static void destroy_tpmring(struct tpm_private *tp)
>  
>  	if (tp->ring_ref != GRANT_INVALID_REF) {
>  		gnttab_end_foreign_access(tp->ring_ref,
> -				0, (unsigned long)tp->tx);
> +				0, (unsigned long)tp->page);
>  		tp->ring_ref = GRANT_INVALID_REF;
> -		tp->tx = NULL;
> +		tp->page = NULL;
>  	}
>  
>  	if (tp->evtchn)
> @@ -457,10 +393,10 @@ static int tpmif_connect(struct xenbus_device *dev,
>  }
>  
>  static const struct xenbus_device_id tpmfront_ids[] = {
> -	{ "vtpm" },
> +	{ "vtpm2" },
>  	{ "" }
>  };
> -MODULE_ALIAS("xen:vtpm");
> +MODULE_ALIAS("xen:vtpm2");
>  
>  static DEFINE_XENBUS_DRIVER(tpmfront, ,
>  		.probe = tpmfront_probe,
> @@ -470,62 +406,30 @@ static DEFINE_XENBUS_DRIVER(tpmfront, ,
>  		.suspend = tpmfront_suspend,
>  		);
>  
> -static int tpmif_allocate_tx_buffers(struct tpm_private *tp)
> -{
> -	unsigned int i;
> -
> -	for (i = 0; i < TPMIF_TX_RING_SIZE; i++) {
> -		tp->tx_buffers[i] = tx_buffer_alloc();
> -		if (!tp->tx_buffers[i]) {
> -			tpmif_free_tx_buffers(tp);
> -			return -ENOMEM;
> -		}
> -	}
> -	return 0;
> -}
> -
> -static void tpmif_free_tx_buffers(struct tpm_private *tp)
> -{
> -	unsigned int i;
> -
> -	for (i = 0; i < TPMIF_TX_RING_SIZE; i++)
> -		tx_buffer_free(tp->tx_buffers[i]);
> -}
> -
>  static void tpmif_rx_action(unsigned long priv)
>  {
>  	struct tpm_private *tp = (struct tpm_private *)priv;
> -	int i = 0;
>  	unsigned int received;
>  	unsigned int offset = 0;
>  	u8 *buffer;
> -	struct tpmif_tx_request *tx = &tp->tx->ring[i].req;
> +	struct vtpm_shared_page *shr = tp->page;
>  
>  	atomic_set(&tp->tx_busy, 0);
>  	wake_up_interruptible(&tp->wait_q);
>  
> -	received = tx->size;
> +	offset = sizeof(*shr) + 4*shr->nr_extra_pages;
> +	received = shr->length;
> +
> +	if (offset > PAGE_SIZE || offset + received > PAGE_SIZE) {
> +		printk(KERN_WARNING "tpmif_rx_action packet too large\n");
> +		return;
> +	}
>  
>  	buffer = kmalloc(received, GFP_ATOMIC);
>  	if (!buffer)
>  		return;
>  
> -	for (i = 0; i < TPMIF_TX_RING_SIZE && offset < received; i++) {
> -		struct tx_buffer *txb = tp->tx_buffers[i];
> -		struct tpmif_tx_request *tx;
> -		unsigned int tocopy;
> -
> -		tx = &tp->tx->ring[i].req;
> -		tocopy = tx->size;
> -		if (tocopy > PAGE_SIZE)
> -			tocopy = PAGE_SIZE;
> -
> -		memcpy(&buffer[offset], txb->data, tocopy);
> -
> -		gnttab_release_grant_reference(&gref_head, tx->ref);
> -
> -		offset += tocopy;
> -	}
> +	memcpy(buffer, offset + (u8*)shr, received);
>  
>  	vtpm_vd_recv(tp->chip, buffer, received, tp->tx_remember);
>  	kfree(buffer);
> @@ -550,8 +454,7 @@ static int tpm_xmit(struct tpm_private *tp,
>  		const u8 *buf, size_t count, int isuserbuffer,
>  		void *remember)
>  {
> -	struct tpmif_tx_request *tx;
> -	int i;
> +	struct vtpm_shared_page *shr;
>  	unsigned int offset = 0;
>  
>  	spin_lock_irq(&tp->tx_lock);
> @@ -566,48 +469,23 @@ static int tpm_xmit(struct tpm_private *tp,
>  		return -EIO;
>  	}
>  
> -	for (i = 0; count > 0 && i < TPMIF_TX_RING_SIZE; i++) {
> -		struct tx_buffer *txb = tp->tx_buffers[i];
> -		int copied;
> -
> -		if (!txb) {
> -			spin_unlock_irq(&tp->tx_lock);
> -			return -EFAULT;
> -		}
> -
> -		copied = tx_buffer_copy(txb, &buf[offset], count,
> -				isuserbuffer);
> -		if (copied < 0) {
> -			/* An error occurred */
> -			spin_unlock_irq(&tp->tx_lock);
> -			return copied;
> -		}
> -		count -= copied;
> -		offset += copied;
> -
> -		tx = &tp->tx->ring[i].req;
> -		tx->addr = virt_to_machine(txb->data).maddr;
> -		tx->size = txb->len;
> -		tx->unused = 0;
> -
> -		/* Get the granttable reference for this page. */
> -		tx->ref = gnttab_claim_grant_reference(&gref_head);
> -		if (tx->ref == -ENOSPC) {
> -			spin_unlock_irq(&tp->tx_lock);
> -			return -ENOSPC;
> -		}
> -		gnttab_grant_foreign_access_ref(tx->ref,
> -				tp->backend_id,
> -				virt_to_mfn(txb->data),
> -				0 /*RW*/);
> -		wmb();
> -	}
> +	shr = tp->page;
> +	offset = sizeof(*shr) + 4*shr->nr_extra_pages;
> +
> +	if (offset > PAGE_SIZE)
> +		return -EIO;
> +
> +	if (offset + count > PAGE_SIZE)
> +		count = PAGE_SIZE - offset;
> +
> +	memcpy(offset + (u8*)shr, buf, count);
> +	shr->length = count;
> +	barrier();
> +	shr->state = 1;
>  
>  	atomic_set(&tp->tx_busy, 1);
>  	tp->tx_remember = remember;
>  
> -	mb();
> -
>  	notify_remote_via_evtchn(tp->evtchn);
>  
>  	spin_unlock_irq(&tp->tx_lock);
> @@ -667,12 +545,6 @@ static int __init tpmif_init(void)
>  	if (!tp)
>  		return -ENOMEM;
>  
> -	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
> -				&gref_head) < 0) {
> -		tpm_private_put();
> -		return -EFAULT;
> -	}
> -
>  	return xenbus_register_frontend(&tpmfront_driver);
>  }
>  module_init(tpmif_init);
> @@ -680,7 +552,6 @@ module_init(tpmif_init);
>  static void __exit tpmif_exit(void)
>  {
>  	xenbus_unregister_driver(&tpmfront_driver);
> -	gnttab_free_grant_references(gref_head);
>  	tpm_private_put();
>  }
>  module_exit(tpmif_exit);
> diff --git a/include/xen/interface/io/tpmif.h 
> b/include/xen/interface/io/tpmif.h
> index c9e7294..b55ac56 100644
> --- a/include/xen/interface/io/tpmif.h
> +++ b/include/xen/interface/io/tpmif.h
> @@ -1,7 +1,7 @@
>  
> /****************************************************************************
> **
>   * tpmif.h
>   *
> - * TPM I/O interface for Xen guest OSes.
> + * TPM I/O interface for Xen guest OSes, v2
>   *
>   * Permission is hereby granted, free of charge, to any person obtaining a 
> copy
>   * of this software and associated documentation files (the "Software"), to
> @@ -21,45 +21,21 @@
>   * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>   * DEALINGS IN THE SOFTWARE.
>   *
> - * Copyright (c) 2005, IBM Corporation
> - *
> - * Author: Stefan Berger, stefanb@us.ibm.com 
> - * Grant table support: Mahadevan Gomathisankaran
> - *
> - * This code has been derived from tools/libxc/xen/io/netif.h
> - *
> - * Copyright (c) 2003-2004, Keir Fraser
>   */
>  
>  #ifndef __XEN_PUBLIC_IO_TPMIF_H__
>  #define __XEN_PUBLIC_IO_TPMIF_H__
>  
> -#include "../grant_table.h"
> -
> -struct tpmif_tx_request {
> -	unsigned long addr;   /* Machine address of packet.   */
> -	grant_ref_t ref;      /* grant table access reference */
> -	uint16_t unused;
> -	uint16_t size;        /* Packet size in bytes.        */
> -};
> -struct tpmif_tx_request;
> +struct vtpm_shared_page {
> +	uint32_t length;         /* request/response length in bytes */
>  
> -/*
> - * The TPMIF_TX_RING_SIZE defines the number of pages the
> - * front-end and backend can exchange (= size of array).
> - */
> -#define TPMIF_TX_RING_SIZE 1
> -
> -/* This structure must fit in a memory page. */
> -
> -struct tpmif_ring {
> -	struct tpmif_tx_request req;
> -};
> -struct tpmif_ring;
> +	uint8_t state;           /* 0 - response ready / idle
> +                              * 1 - request ready / working */
> +	uint8_t locality;        /* for the current request */
> +	uint8_t pad;
>  
> -struct tpmif_tx_interface {
> -	struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
> +	uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
> +	uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
>  };
> -struct tpmif_tx_interface;
>  
>  #endif
> -- 
> 1.7.11.7
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 12:04:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 12:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiOZ4-0000KV-Gt; Tue, 11 Dec 2012 12:03:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TiOZ3-0000KO-6x
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 12:03:57 +0000
Received: from [85.158.143.35:17033] by server-2.bemta-4.messagelabs.com id
	F4/5D-30861-C2127C05; Tue, 11 Dec 2012 12:03:56 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355227309!4764880!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22007 invoked from network); 11 Dec 2012 12:01:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 12:01:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,258,1355097600"; 
   d="scan'208";a="235231"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 12:01:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 07:01:32 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TiOWi-0004Nd-BS;
	Tue, 11 Dec 2012 12:01:32 +0000
Date: Tue, 11 Dec 2012 12:01:30 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1966257002-1355226688=:17523"
Content-ID: <alpine.DEB.2.02.1212111152440.17523@kaball.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1966257002-1355226688=:17523
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1212111152441.17523@kaball.uk.xensource.com>

On Tue, 11 Dec 2012, G.R. wrote:
>
> On Tue, Dec 11, 2012 at 2:15 AM, Kay, Allen M <allen.m.kay@intel.com> wrote:
>       My understanding is that the i915 driver needs to look at the real PCH's device ID to apply HW workarounds.
>
>       One way to fix this is to make the device ID of the first PCH (00:01.0) reflect the device ID of
>       00:1f.0 on the host.  This way, i915 running as a guest will find a valid PCH device ID to make workaround
>       decisions with.
>
>
>       I don't know why it would make a difference if i915 is built into the kernel or as a module though.
>
>       Allen
>
> Thanks, Allen, for your input.
> But module vs. built-in is not the only difference. Another difference is the PVHVM build vs. the pure HVM build.
> Both share the same PCI layout but give different results. Any theory on how to explain the difference? What makes
> the PVHVM version work?
Please don't use HTML in emails.

PVHVM Linux guests set up interrupts differently: they request an event
channel for each legacy interrupt or MSI/MSI-X, then the hypervisor uses
these event channels to inject notifications into the guest, rather
than emulated interrupts or emulated MSIs.

Reading the description of the bug again, wouldn't it be better to just
fix the problem in Linux?
In fact this looks like a bug in intel_detect_pch(): QEMU is emulating a
PCI-PCI bridge and the driver is checking for a PCI-ISA bridge (to help
with virtualization?). Moreover, it only checks the first PCI-ISA bridge.

As far as I know Xen has never exposed a PCI-ISA bridge with vendor ==
Intel to the guest.


>       -----Original Message-----
>       From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
>       Sent: Monday, December 10, 2012 4:27 AM
>       To: G.R.
>       Cc: xen-devel; Stefano Stabellini; Kay, Allen M; xiantao.zhang@intel; Xu, Dongxiao; Dong, Eddie
>       Subject: Re: intel IGD driver intel_detect_pch() failure
> 
>       CC'ing some engineers that could have some useful suggestions
> 
>       On Mon, 10 Dec 2012, G.R. wrote:
>       > Hello, could anybody help?
>       >
>       > On Sun, Dec 9, 2012 at 1:00 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>       >       I dug further and got confused.
>       >       The host ISA bridge 00:1f.0 is automatically passed-through as part of the gfx_passthru magic.
>       >       However, it is passed through as a PCI bridge:
>       >       On host:   00:1f.0 ISA bridge [0601]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>       >       On guest: 00:1f.0 PCI bridge [0604]: Intel Corporation H77 Express Chipset LPC Controller [8086:1e4a] (rev 04)
>       >
>       >       This is both the case for pure HVM && PVHVM. And this one exists for both case too:
>       >       00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
>       >
>       >       And the intel_detect_pch() function only check the first ISA bridge on the PCI bus:
>       >       pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
>       >
>       >       Unless there are magic elsewhere, I can't imagine the code would behave differently on the two builds.
>       >       But what's the magic behind this?
>       >
>       >       Also, is there anyway to get rid of the ISA bridge emulated by qemu?
>       >       I don't think this is ever required for most case...
>       >
>       >       Thanks,
>       >       Timothy
>       >
>       >
>       >       On Sun, Dec 9, 2012 at 1:43 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>       >
>       >             Hi all,
>       >             I'm debugging an issue that an HVM guest failed to produce any output with IGD passed through.
>       >             This is an pure HVM linux guest with i915 driver directly compiled in.
>       >             An PVHVM kernel with i915 driver compiled as module works without issue.
>       >             I'm not yet sure which factor is more important, pure HVM or the I915=y kernel config.
>       >
>       >             The direct cause of no output is that the driver does not select Display PLL properly, which is in turn due to fail to detect pch type properly.
>       >
>       >             Strange enough, the intel_detect_pch() function works by checking the device ID of the ISA bridge coming with the chipset:
>       >
>       >             /* * The reason to probe ISA bridge instead of Dev31:Fun0 is to * make graphics device passthrough work easy for VMM, that only * need to expose ISA bridge to let driver know the real hardware underneath. This is a requirement from virtualization team. */
>       >
>       >             I added some debug output in this function and find out that it obtained a strange device ID:
>       >             [ 1.005423] [drm] intel pch detect, found 00007000
>       >
>       >             This looks like the ISA bridge provided by qemu:
>       >             00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>       >             00:01.0 0601: 8086:7000
>       >
>       >             However, I can find the same device on a PVHVM linux guest, but the intel_detect_pch() is not fooled by that. Is it due to I915=m config or some magic played by PVOPS? Any suggestion how to fix this?
>       >
>       >             Thanks,
>       >             Timothy
>       >
>       >
> 
> 
--1342847746-1966257002-1355226688=:17523
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1966257002-1355226688=:17523--


From xen-devel-bounces@lists.xen.org Tue Dec 11 12:06:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 12:06:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiOar-0000Sh-3R; Tue, 11 Dec 2012 12:05:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TiOao-0000SY-TS
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 12:05:47 +0000
Received: from [85.158.139.83:35279] by server-6.bemta-5.messagelabs.com id
	D7/CD-30498-A9127C05; Tue, 11 Dec 2012 12:05:46 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355227545!25335068!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11617 invoked from network); 11 Dec 2012 12:05:45 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-7.tower-182.messagelabs.com with SMTP;
	11 Dec 2012 12:05:45 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 696F2C5617B;
	Tue, 11 Dec 2012 12:05:34 +0000 (GMT)
Date: Tue, 11 Dec 2012 12:05:33 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Xen Devel <xen-devel@lists.xen.org>
Message-ID: <C76A4AA2522F56F728DDCA4A@nimrod.local>
In-Reply-To: <22A70860509BC87BE892A601@nimrod.local>
References: <22A70860509BC87BE892A601@nimrod.local>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Alex Bligh <alex@alex.org.uk>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This particular segv seems to be because, in
  libxl__domain_suspend_common_switch_qemu_logdirty
in libxl_dom.c, the variables 'got' and 'got_ret' do not appear to be initialised
to NULL ('got' certainly should be; 'got_ret' should be if there is any possibility
of libxl__xs_read_checked not writing to got_ret, which it does not appear to
guarantee). Code inspection suggests this issue is still present in 4.2.1, hence
my wondering whether other fixes need bringing in from unstable.


--On 11 December 2012 11:45:42 +0000 Alex Bligh <alex@alex.org.uk> wrote:

> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffeffff700 (LWP 5995)]
> 0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) bt
># 0  0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
># 1  0x00007ffff6d4e970 in libxl__domain_suspend_common_switch_qemu_logdirty (domid=<optimized out>, enable=<optimized out>, user=0x7ffff00024e8) at libxl_dom.c:728
># 2  0x00007ffff6d5c1ae in libxl__srm_callout_received_save (msg=0x7fffefffe41a " error", len=<optimized out>, user=0x7ffff00024e8) at _libxl_save_msgs_callout.c:162
># 3  0x00007ffff6d5b736 in helper_stdout_readable (egc=0x7fffefffe5a0, ev=0x7ffff0002560, fd=38, events=<optimized out>, revents=<optimized out>) at libxl_save_callout.c:283
># 4  0x00007ffff6d601f1 in afterpoll_internal (egc=0x7fffefffe5a0, poller=0x7ffff00028c0, nfds=4, fds=0x7ffff00048b0, now=...) at libxl_event.c:948
># 5  0x00007ffff6d604db in eventloop_iteration (egc=0x7fffefffe5a0, poller=0x7ffff00028c0) at libxl_event.c:1368
># 6  0x00007ffff6d616b3 in libxl__ao_inprogress (ao=0x7ffff0001d40, file=<optimized out>, line=<optimized out>, func=<optimized out>) at libxl_event.c:1614
># 7  0x00007ffff6d3ab75 in libxl_domain_suspend (ctx=<optimized out>, domid=1, fd=10, flags=<optimized out>, ao_how=<optimized out>) at libxl.c:796
># 8  0x000000000043677e in migrate_domain_send (ctx=0x7ffff0008860, domid=1, fd=10) at hypervisor/xen_libxl.c:587
># 9  0x000000000043698a in live_migrate_send (hyperconn=0x7ffff0001c70, server=0x7ffff0001cb0, node_ip=0x7ffff00041e0 "10.157.128.20", fd=10) at hypervisor/xen_libxl.c:647
># 10 0x0000000000422a70 in migrate_server_action (request=0x7ffff0002980) at action/node_action.c:1287
># 11 0x00000000004240c1 in runAction (socket_fd=8) at action/handleaction.c:138
># 12 0x00000000004179bd in runcomm (socket=0x8) at xvpagent.c:253
># 13 0x0000000000427502 in trackedthread_run (arg=0x66df20) at util/util.c:179
># 14 0x00007ffff5c9ce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
># 15 0x00007ffff59ca4bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
># 16 0x0000000000000000 in ?? ()



-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 12:10:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 12:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiOfK-0000iQ-Sc; Tue, 11 Dec 2012 12:10:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TiOfJ-0000iK-UJ
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 12:10:26 +0000
Received: from [85.158.139.211:5468] by server-3.bemta-5.messagelabs.com id
	64/42-25441-1B227C05; Tue, 11 Dec 2012 12:10:25 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355227822!20023885!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32686 invoked from network); 11 Dec 2012 12:10:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 12:10:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="236468"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 12:10:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 07:10:21 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TiOfF-0004Va-Kj;
	Tue, 11 Dec 2012 12:10:21 +0000
Date: Tue, 11 Dec 2012 12:10:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20121210184311.4fc11316@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> On Mon, 10 Dec 2012 09:43:34 +0000
> "Jan Beulich" <JBeulich@suse.com> wrote:
> 
> > >>> On 08.12.12 at 02:46, Mukesh Rathor <mukesh.rathor@oracle.com>
> > >>> wrote:
> > > The second is msi.c. I don't understand it very well, and need to
> > > figure what to do for PVH. Would appreciate suggestions if anyone
> > > knows.
> > 
> > Why do you think you need to do something specially for PVH here
> > in the first place? The only adjustment I would expect might be
> > needed is address translation (depending on how PVH deal with
> > MMIO addresses).
> 
> Ok, thanks. Looks like I'm missing some address translation somewhere,
> getting EPT violation from dom0. Time to read up more on msi-x and debug.

That's strange, because AFAIK Linux never edits the MSI-X entries
directly: take a look at arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs;
Linux only remaps MSIs into pirqs using hypercalls.
Xen should be the only one to touch the real MSI-X table.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 13:45:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 13:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiQ8M-0001oo-0N; Tue, 11 Dec 2012 13:44:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiQ8K-0001oY-Kl
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 13:44:29 +0000
Received: from [85.158.143.35:41053] by server-3.bemta-4.messagelabs.com id
	B7/68-18211-BB837C05; Tue, 11 Dec 2012 13:44:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355233391!15174610!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6678 invoked from network); 11 Dec 2012 13:43:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 13:43:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="60299"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 13:43:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 13:43:07 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiQ71-0004bJ-L7;
	Tue, 11 Dec 2012 13:43:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiQ70-0002CU-2a;
	Tue, 11 Dec 2012 13:43:07 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14668-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 13:43:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14668: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14668 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14668/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check             fail   never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-win-vcpus1                             fail    
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   fail    
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=3752993df8af5cffa1b8219fe175d235597b4474
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 3752993df8af5cffa1b8219fe175d235597b4474
+ branch=qemu-upstream-unstable
+ revision=3752993df8af5cffa1b8219fe175d235597b4474
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push xen@xenbits.xensource.com:git/qemu-upstream-unstable.git 3752993df8af5cffa1b8219fe175d235597b4474:master
Counting objects: 667, done.
Compressing objects: 100% (126/126), done.
Writing objects: 100% (500/500), 136.27 KiB, done.
Total 500 (delta 364), reused 500 (delta 364)
To xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
   1e6f3bf..3752993  3752993df8af5cffa1b8219fe175d235597b4474 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 13:45:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 13:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiQ8M-0001oo-0N; Tue, 11 Dec 2012 13:44:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiQ8K-0001oY-Kl
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 13:44:29 +0000
Received: from [85.158.143.35:41053] by server-3.bemta-4.messagelabs.com id
	B7/68-18211-BB837C05; Tue, 11 Dec 2012 13:44:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355233391!15174610!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6678 invoked from network); 11 Dec 2012 13:43:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 13:43:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="60299"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 13:43:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 13:43:07 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiQ71-0004bJ-L7;
	Tue, 11 Dec 2012 13:43:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiQ70-0002CU-2a;
	Tue, 11 Dec 2012 13:43:07 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14668-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 13:43:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 14668: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14668 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14668/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-qemuu-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemuu-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-qemuu-win   16 leak-check/check             fail   never pass

version targeted for testing:
 qemuu                3752993df8af5cffa1b8219fe175d235597b4474
baseline version:
 qemuu                1e6f3bf92c84d9ba8fdc61f4deb1777e737c7a2c

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-win-vcpus1                             fail    
 test-amd64-i386-xl-qemuu-win-vcpus1                          fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-amd64-qemuu-win                                   fail    
 test-amd64-i386-qemuu-win                                    fail    
 test-amd64-amd64-xl-qemuu-win                                fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=3752993df8af5cffa1b8219fe175d235597b4474
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 3752993df8af5cffa1b8219fe175d235597b4474
+ branch=qemu-upstream-unstable
+ revision=3752993df8af5cffa1b8219fe175d235597b4474
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push xen@xenbits.xensource.com:git/qemu-upstream-unstable.git 3752993df8af5cffa1b8219fe175d235597b4474:master
Counting objects: 1   
Counting objects: 3   
Counting objects: 10   
Counting objects: 16   
Counting objects: 34   
Counting objects: 72   
Counting objects: 127   
Counting objects: 216   
Counting objects: 497   
Counting objects: 619   
Counting objects: 667, done.
Compressing objects: 100% (126/126), done.
Writing objects: 100% (500/500), 136.27 KiB, done.
Total 500 (delta 364), reused 500 (delta 364)
To xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
   1e6f3bf..3752993  3752993df8af5cffa1b8219fe175d235597b4474 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 14:10:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 14:10:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiQWs-0006Sr-NU; Tue, 11 Dec 2012 14:09:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TiQWq-0006Sj-PE
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 14:09:48 +0000
Received: from [85.158.143.99:42804] by server-2.bemta-4.messagelabs.com id
	C7/68-30861-CAE37C05; Tue, 11 Dec 2012 14:09:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355234987!23672237!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24197 invoked from network); 11 Dec 2012 14:09:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 14:09:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="61128"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 14:09:48 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 11 Dec 2012 14:09:46 +0000
Message-ID: <1355234985.843.10.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Date: Tue, 11 Dec 2012 14:09:45 +0000
In-Reply-To: <22A70860509BC87BE892A601@nimrod.local>
References: <22A70860509BC87BE892A601@nimrod.local>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-11 at 11:45 +0000, Alex Bligh wrote:
> In this email message:
>  http://www.gossamer-threads.com/lists/xen/devel/256737
> a patch to libxl_domain_suspend is presented to enable live migrate
> with the qemu device model in xen-unstable. This patch has *NOT*
> been taken into the 4.2.1 tree (as far as I can see).
> 
> In 4.2.0, adding this patch and attempting a live migrate using
> the qemu device model (using xl) produces a seg fault due to
> uninitialised variables.

Really using xl? Because the stack trace suggests otherwise.

> Should I expect live-migrate of qemu-dm vms to work under 4.2.1?

Do you mean VMs using the upstream "qemu-xen" device model, as opposed
to the default "qemu-xen-traditional" model?

Migration of HVM guests in 4.2.x is only supported with the
qemu-xen-traditional device model and AFAIK there is no plan to backport
this support to 4.2.

It still shouldn't crash though. I'm not sure how it got this far since
libxl on 4.2 explicitly checks the DM version before attempting to
migrate and will refuse to even try with a qemu-xen DM.

Ian.

> If so, should the patch (or a modification thereof) to remove
> the check from libxl_domain_suspend be applied to 4.2.1-testing
> or is there more to do?
> 
> I am very happy to commit test resources to this.
> 
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffeffff700 (LWP 5995)]
> 0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) bt
> #0  0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x00007ffff6d4e970 in libxl__domain_suspend_common_switch_qemu_logdirty (domid=<optimized out>, enable=<optimized out>, user=0x7ffff00024e8) at libxl_dom.c:728
> #2  0x00007ffff6d5c1ae in libxl__srm_callout_received_save (msg=0x7fffefffe41a " error", len=<optimized out>, user=0x7ffff00024e8) at _libxl_save_msgs_callout.c:162
> #3  0x00007ffff6d5b736 in helper_stdout_readable (egc=0x7fffefffe5a0, ev=0x7ffff0002560, fd=38, events=<optimized out>, revents=<optimized out>) at libxl_save_callout.c:283
> #4  0x00007ffff6d601f1 in afterpoll_internal (egc=0x7fffefffe5a0, poller=0x7ffff00028c0, nfds=4, fds=0x7ffff00048b0, now=...) at libxl_event.c:948
> #5  0x00007ffff6d604db in eventloop_iteration (egc=0x7fffefffe5a0, poller=0x7ffff00028c0) at libxl_event.c:1368
> #6  0x00007ffff6d616b3 in libxl__ao_inprogress (ao=0x7ffff0001d40, file=<optimized out>, line=<optimized out>, func=<optimized out>) at libxl_event.c:1614
> #7  0x00007ffff6d3ab75 in libxl_domain_suspend (ctx=<optimized out>, domid=1, fd=10, flags=<optimized out>, ao_how=<optimized out>) at libxl.c:796
> #8  0x000000000043677e in migrate_domain_send (ctx=0x7ffff0008860, domid=1, fd=10) at hypervisor/xen_libxl.c:587
> #9  0x000000000043698a in live_migrate_send (hyperconn=0x7ffff0001c70, server=0x7ffff0001cb0, node_ip=0x7ffff00041e0 "10.157.128.20", fd=10) at hypervisor/xen_libxl.c:647
> #10 0x0000000000422a70 in migrate_server_action (request=0x7ffff0002980) at action/node_action.c:1287
> #11 0x00000000004240c1 in runAction (socket_fd=8) at action/handleaction.c:138
> #12 0x00000000004179bd in runcomm (socket=0x8) at xvpagent.c:253
> #13 0x0000000000427502 in trackedthread_run (arg=0x66df20) at util/util.c:179
> #14 0x00007ffff5c9ce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> #15 0x00007ffff59ca4bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
> #16 0x0000000000000000 in ?? ()
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 11 14:13:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 14:13:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiQZi-0006ah-CU; Tue, 11 Dec 2012 14:12:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TiQZh-0006ab-I1
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 14:12:45 +0000
Received: from [193.109.254.147:48414] by server-2.bemta-14.messagelabs.com id
	F8/6D-20829-C5F37C05; Tue, 11 Dec 2012 14:12:44 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355235079!9975133!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTEwNzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12584 invoked from network); 11 Dec 2012 14:11:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 14:11:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="231015"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 14:11:18 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Tue, 11 Dec 2012
	09:11:18 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Charles Arnold <carnold@suse.com>, xen-devel <xen-devel@lists.xen.org>
Date: Tue, 11 Dec 2012 09:11:30 -0500
Thread-Topic: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
Thread-Index: Ac3W+edALZddPZ4+T3Wb6vxIEVGlcAAr1LQg
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
References: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
In-Reply-To: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yeah, I guess I should follow up on this. I did not manage to get it into 4.2, but I thought it had clearance for 4.3. Do I need to resubmit the patch set?

Thanks
Ross

> -----Original Message-----
> From: Charles Arnold [mailto:carnold@suse.com]
> Sent: Monday, December 10, 2012 12:15 PM
> To: xen-devel
> Cc: Ross Philipson
> Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
> 
> I haven't seen any activity on this feature.  Is it still planned to be
> included in Xen 4.3?
> 
> - Charles
> 
> On Wed, 2012-05-23 at 14:37 +0000, Ross Philipson wrote:
> > This patch series introduces support of loading external blocks of
> > firmware into a guest. These blocks can currently contain SMBIOS
> > and/or ACPI firmware information that is used by HVMLOADER to modify a
> > guest's virtual firmware at startup. These modules are only used by
> HVMLOADER.
> >
> > The domain building code in libxenguest is passed these firmware
> > blocks in the xc_hvm_build_args structure and loads them into the new
> > guest, returning the load address. The loading is done in what will
> > become the guest's low RAM area just behind the load location for
> > HVMLOADER. After their use by HVMLOADER they are effectively
> > discarded. It is the caller's job to write the base address and length
> > values into xenstore using the paths defined in the new hvm_defs.h
> > header so HVMLOADER can locate the blocks.
> >
> > Currently two types of firmware information are recognized and
> > processed by HVMLOADER, though this could be extended.
> >
> > 1. SMBIOS: The SMBIOS table building code will attempt to retrieve
> > (for a predefined set of structure types) any passed-in structures. If a
> > match is found the passed in table will be used overriding the default
> > values. In addition, the SMBIOS code will also enumerate and load any
> > vendor-defined structures (in the range of types 128 - 255) that
> > are passed in. See the hvm_defs.h header for information on the format
> of this block.
> > 2. ACPI: Static and secondary descriptor tables can be added to the
> > set of ACPI tables built by HVMLOADER. The ACPI builder code will
> > enumerate passed in tables and add them at the end of the secondary
> > table list. See the hvm_defs.h header for information on the format of
> > this block.
> >
> > There are 4 patches in the series:
> > 01 - Add HVM definitions header for firmware passthrough support.
> > 02 - Xen control tools support for loading the firmware blocks.
> > 03 - Passthrough support for SMBIOS.
> > 04 - Passthrough support for ACPI.
> >
> > Note this is version 3 of this patch set. Some of the differences:
> >  - Generic module support removed, overall functionality was
> simplified.
> >  - Use of xenstore to supply firmware passthrough information to
> HVMLOADER.
> >  - Fixed issues pointed out in the SMBIOS processing code.
> >  - Created defines for the SMBIOS handles in use and switched to using
> >    the xenstore values in the new hvm_defs.h file.
> >
> > Signed-off-by: Ross Philipson <ross.philipson@xxxxxxxxxx>
> >
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 11 14:14:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 14:14:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiQan-0006ik-G5; Tue, 11 Dec 2012 14:13:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TiQal-0006iD-EI
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 14:13:51 +0000
Received: from [85.158.143.35:10117] by server-3.bemta-4.messagelabs.com id
	F5/D6-18211-E9F37C05; Tue, 11 Dec 2012 14:13:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355235126!4262180!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20710 invoked from network); 11 Dec 2012 14:12:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 14:12:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="61222"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 14:12:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 11 Dec 2012 14:12:06 +0000
Message-ID: <1355235125.843.13.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Date: Tue, 11 Dec 2012 14:12:05 +0000
In-Reply-To: <C76A4AA2522F56F728DDCA4A@nimrod.local>
References: <22A70860509BC87BE892A601@nimrod.local>
	<C76A4AA2522F56F728DDCA4A@nimrod.local>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-11 at 12:05 +0000, Alex Bligh wrote:
> This particular segv seems to be because in
>   libxl__domain_suspend_common_switch_qemu_logdirty
> in libxl_dom.c the variables 'got' and 'got_ret' do not appear to be
> initialised to NULL ('got' certainly should be; 'got_ret' should be if
> there is any possibility of libxl__xs_read_checked not writing to it,
> which it is not obviously guaranteed to do).
> Code inspection suggests this issue is still present in 4.2.1, hence my
> wondering whether other fixes need bringing in from unstable.

libxl__xs_read_checked will always either initialise the variable
(perhaps to NULL) or return an error. At both call sites we check for
the error and "goto out".

I think the crash is because the code uses got_ret without checking
whether it was NULL, which can happen if the path is not present. Ian
(J), does that make sense as something which is allowed to happen?
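The contract described above can be illustrated with a small stand-alone
sketch: a read helper that succeeds but leaves the result NULL when the
path is absent, and a caller that checks the return code and then the
pointer before dereferencing. mock_xs_read_checked and the path names are
hypothetical stand-ins, not the real libxl API:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for libxl__xs_read_checked: on success it always
 * stores something into *result -- possibly NULL when the path does not
 * exist -- and only a non-zero return indicates an actual error. */
static int mock_xs_read_checked(const char *path, const char **result)
{
    if (strcmp(path, "logdirty/ret") == 0) {
        *result = "ok";          /* path present */
        return 0;
    }
    *result = NULL;              /* path absent: success, but NULL */
    return 0;
}

/* Caller following the pattern discussed in the thread: check the
 * return code first, then check for NULL before using the value. */
static int logdirty_result_ok(const char *path)
{
    const char *got_ret;
    if (mock_xs_read_checked(path, &got_ret))
        return -1;               /* rc != 0: real error, "goto out"    */
    if (!got_ret)
        return -1;               /* success but path missing: no deref */
    return strcmp(got_ret, "ok") == 0;
}
```

Dereferencing got_ret without the NULL check is exactly the crash mode
being discussed when the xenstore path is not present.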

As I said in my earlier mail, I'm not sure why you are getting here at
all though.

Ian.


> 
> --On 11 December 2012 11:45:42 +0000 Alex Bligh <alex@alex.org.uk> wrote:
> 
> > Program received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 0x7fffeffff700 (LWP 5995)]
> > 0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> > (gdb) bt
> ># 0  0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> ># 1  0x00007ffff6d4e970 in libxl__domain_suspend_common_switch_qemu_logdirty (domid=<optimized out>, enable=<optimized out>, user=0x7ffff00024e8) at libxl_dom.c:728
> ># 2  0x00007ffff6d5c1ae in libxl__srm_callout_received_save (msg=0x7fffefffe41a " error", len=<optimized out>, user=0x7ffff00024e8) at _libxl_save_msgs_callout.c:162
> ># 3  0x00007ffff6d5b736 in helper_stdout_readable (egc=0x7fffefffe5a0, ev=0x7ffff0002560, fd=38, events=<optimized out>, revents=<optimized out>) at libxl_save_callout.c:283
> ># 4  0x00007ffff6d601f1 in afterpoll_internal (egc=0x7fffefffe5a0, poller=0x7ffff00028c0, nfds=4, fds=0x7ffff00048b0, now=...) at libxl_event.c:948
> ># 5  0x00007ffff6d604db in eventloop_iteration (egc=0x7fffefffe5a0, poller=0x7ffff00028c0) at libxl_event.c:1368
> ># 6  0x00007ffff6d616b3 in libxl__ao_inprogress (ao=0x7ffff0001d40, file=<optimized out>, line=<optimized out>, func=<optimized out>) at libxl_event.c:1614
> ># 7  0x00007ffff6d3ab75 in libxl_domain_suspend (ctx=<optimized out>, domid=1, fd=10, flags=<optimized out>, ao_how=<optimized out>) at libxl.c:796
> ># 8  0x000000000043677e in migrate_domain_send (ctx=0x7ffff0008860, domid=1, fd=10) at hypervisor/xen_libxl.c:587
> ># 9  0x000000000043698a in live_migrate_send (hyperconn=0x7ffff0001c70, server=0x7ffff0001cb0, node_ip=0x7ffff00041e0 "10.157.128.20", fd=10) at hypervisor/xen_libxl.c:647
> ># 10 0x0000000000422a70 in migrate_server_action (request=0x7ffff0002980) at action/node_action.c:1287
> ># 11 0x00000000004240c1 in runAction (socket_fd=8) at action/handleaction.c:138
> ># 12 0x00000000004179bd in runcomm (socket=0x8) at xvpagent.c:253
> ># 13 0x0000000000427502 in trackedthread_run (arg=0x66df20) at util/util.c:179
> ># 14 0x00007ffff5c9ce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
> ># 15 0x00007ffff59ca4bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
> ># 16 0x0000000000000000 in ?? ()
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 14:30:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 14:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiQqd-0007V7-Nx; Tue, 11 Dec 2012 14:30:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TiQqb-0007V2-Uc
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 14:30:14 +0000
Received: from [85.158.143.35:61639] by server-3.bemta-4.messagelabs.com id
	B8/41-18211-57347C05; Tue, 11 Dec 2012 14:30:13 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355236197!4791363!1
X-Originating-IP: [209.85.210.177]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_10_20,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19562 invoked from network); 11 Dec 2012 14:29:59 -0000
Received: from mail-ia0-f177.google.com (HELO mail-ia0-f177.google.com)
	(209.85.210.177)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 14:29:59 -0000
Received: by mail-ia0-f177.google.com with SMTP id u21so6758163ial.36
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 06:29:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=3Bzlzzm1NlWJp5iAqo05dcSPQp9jytCAFjYIk6JmDxg=;
	b=dG/DKI0uJQmc1gdWLFAvF/VHqaBPc5hcNOTFHMwze2uyu9aBUnzKhrnRFHgyVlaXe9
	j1OJqXH1d/+Xyl64aiewa7Gd7Kk0+gn8V9wnYCEUV1LLusRHVkQ7top3xCoZE/iPGMkp
	x+rdD5a4XyWr8tMUMnR7Jb1JPgRR0hTGggeR6eCIvfVR0OJM1WKcEhQgUFGZylydxN9v
	FYp8MDXFeMWmx5/IC1Q9LGXQfw4QSBYpNXECT3Iw32K4quWQfNuTgpsBbyC6jH8ZFVPJ
	MWZMpyjh2AnzBMnt8Fym2MYb0d2e8bNWGpItinCarrJWnQ+kaAsR/7bzlOpo50NvrlcR
	+KdA==
MIME-Version: 1.0
Received: by 10.50.57.225 with SMTP id l1mr10151877igq.37.1355236188703; Tue,
	11 Dec 2012 06:29:48 -0800 (PST)
Received: by 10.64.37.39 with HTTP; Tue, 11 Dec 2012 06:29:48 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
Date: Tue, 11 Dec 2012 22:29:48 +0800
X-Google-Sender-Auth: CI0PUy5kIUMilvQPuBRBIsCHquo
Message-ID: <CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay, Allen M" <allen.m.kay@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0734868967734853921=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0734868967734853921==
Content-Type: multipart/alternative; boundary=14dae93411159d4cfa04d094837b

--14dae93411159d4cfa04d094837b
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 11, 2012 at 8:01 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:
>
> On Tue, 11 Dec 2012, G.R. wrote:
> >
> > On Tue, Dec 11, 2012 at 2:15 AM, Kay, Allen M <allen.m.kay@intel.com>
wrote:
> >       My understanding is that the i915 driver needs to look at the real
PCH's device ID to apply HW workarounds.
> >
> >       One way to fix this is to make the device ID of the first PCH
(00:01.0) reflect the device ID of
> >       00:1f.0 on the host.  This way, i915 running as a guest will
find the valid PCH device ID to make workaround
> >       decisions with.
> >
> >
> >       I don't know why it would make a difference if i915 is built into
the kernel or as a module though.
> >
> >       Allen
> >
> > Thanks Allen for your input.
> > But module vs. built-in is not the only difference. Another difference
is the PVHVM build vs. pure HVM build.
> > Both share the same PCI layout but give different results. Any theory on
how to explain the difference? What makes the PVHVM version
> > work?
>
> Please don't use HTML in emails.
>

I'm using gmail and am not sure how to control HTML formatting.
I explicitly used 'remove formatting' this time; does it look better now?
Sorry for the inconvenience.

>
> PVHVM Linux guests setup interrupts differently: they request an event
> channel for each legacy interrupt or MSI/MSI-X, then the hypervisor uses
> these event channels to inject notifications into the guest, rather
> than emulated interrupts or emulated MSIs.
>
Will this affect the result of pci_get_class() as called by the intel
driver?
If not, this still cannot explain the different behavior.
Maybe I need to do one more experiment when I get time.

>
> Reading again the description of the bug, wouldn't it be better to just
> fix the problem in Linux?
> In fact this looks like a bug in intel_detect_pch(): QEMU is emulating a
> PCI-PCI bridge and the driver is checking for a PCI-ISA bridge (to help
> with virtualization?). Moreover it only checks the first PCI-ISA bridge.
>
> As far as I know Xen has never exposed a PCI-ISA bridge with vendor ==
> Intel to the guest.
>
But why does Xen expose the host's PCI-ISA bridge as a PCI-PCI bridge?
Doesn't that sound strange?
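The detection logic under discussion can be sketched against a mock device
list: scan for the first ISA bridge and only accept an Intel one. A guest
whose ISA bridge is exposed as a PCI-PCI bridge (class 0x0604) never
matches, which is the failure mode being debugged. Everything below other
than the standard PCI class/vendor codes is a hypothetical stand-in, and
the device IDs are purely illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define PCI_CLASS_BRIDGE_ISA 0x0601   /* base class 06, sub class 01 */
#define PCI_VENDOR_ID_INTEL  0x8086

/* Mock stand-in for a PCI device as seen by a class scan such as
 * pci_get_class(); not the real kernel structure. */
struct mock_pci_dev {
    uint16_t class_code;   /* (base class << 8) | sub class */
    uint16_t vendor;
    uint16_t device;
};

/* Sketch of the intel_detect_pch() approach described in the thread:
 * only the first ISA bridge found is considered. */
static int detect_pch(const struct mock_pci_dev *devs, size_t n,
                      uint16_t *pch_device)
{
    for (size_t i = 0; i < n; i++) {
        if (devs[i].class_code != PCI_CLASS_BRIDGE_ISA)
            continue;
        if (devs[i].vendor != PCI_VENDOR_ID_INTEL)
            return 0;          /* first ISA bridge is not Intel: give up */
        *pch_device = devs[i].device;
        return 1;              /* PCH found; apply workarounds by ID */
    }
    return 0;                  /* no ISA bridge at all */
}
```

On a host-like topology with a real Intel ISA bridge the scan succeeds; on
a guest that only sees a PCI-PCI bridge it finds nothing.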

--14dae93411159d4cfa04d094837b--


--===============0734868967734853921==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0734868967734853921==--


From xen-devel-bounces@lists.xen.org Tue Dec 11 14:55:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 14:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiREn-0008Ti-GN; Tue, 11 Dec 2012 14:55:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TiREl-0008Td-JJ
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 14:55:11 +0000
Received: from [85.158.143.35:19003] by server-1.bemta-4.messagelabs.com id
	48/F9-28401-E4947C05; Tue, 11 Dec 2012 14:55:10 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355237708!16881399!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14635 invoked from network); 11 Dec 2012 14:55:08 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-6.tower-21.messagelabs.com with SMTP;
	11 Dec 2012 14:55:08 -0000
X-TM-IMSS-Message-ID: <6c10cc0c0008770f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 6c10cc0c0008770f ;
	Tue, 11 Dec 2012 09:54:32 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBBEt2v6016823; 
	Tue, 11 Dec 2012 09:55:02 -0500
Message-ID: <50C74946.3090807@tycho.nsa.gov>
Date: Tue, 11 Dec 2012 09:55:02 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169603-26991-1-git-send-email-dgdegra@tycho.nsa.gov>
	<50C72C8502000078000AFA6D@nat28.tlf.novell.com>
In-Reply-To: <50C72C8502000078000AFA6D@nat28.tlf.novell.com>
Cc: matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] drivers/tpm-xen: Change vTPM shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/11/2012 06:52 AM, Jan Beulich wrote:
>>>> On 10.12.12 at 21:00, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> This changes the vTPM shared page ABI from a copy of the Xen network
>> interface to a single-page interface that better reflects the expected
>> behavior of a TPM: only a single request packet can be sent at any given
>> time, and every packet sent generates a single response packet. This
>> protocol change should also increase efficiency as it avoids mapping and
>> unmapping grants when possible.
> 
> Given
> 
> -#define TPMIF_TX_RING_SIZE 1
> 
> where was the problem?

The shared ring still needed to refer to grants and a series of shared pages
for requests and replies, and was implemented by mapping and unmapping grants
on each request.  While a persistent mapping (like the one being introduced in Linux)
could also have addressed the efficiency issues, redoing the shared page
seemed cleaner.  Redoing the shared page allows potentially supporting TPM
packets up to 1MB in size, although that requires using the extra_pages list
which isn't implemented (most, if not all, users won't use large packets in
order to support hardware TPMs with hard limitations on the packet size).  It
also allows introducing an out-of-band locality field for requests, and the
status field could easily be extended to allow command cancellation - although
that would require a vTPM supporting cancellation; CPU-based vTPMs are fast
enough that cancellation is not needed to meet the timing requirements.

> Also, the patch replaces the old interface by the new one - how
> is that compatible with older implementations?

It is not; we've decided to call this "vtpm2" in xenbus to address this.  From
the condition of the old vtpm code in Xen, there appear to be few users of the
old implementation, even if there are many users of a kernel with the driver
present.
 
> The positive aspect is that the new interface isn't address size
> dependent anymore (and hence mixed size backend/frontend can
> work together, which isn't the case for the original one).
>
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:09:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRRu-0000Ha-VW; Tue, 11 Dec 2012 15:08:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TiRRt-0000HT-19
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:08:45 +0000
Received: from [193.109.254.147:57597] by server-10.bemta-14.messagelabs.com
	id 3A/93-31741-C7C47C05; Tue, 11 Dec 2012 15:08:44 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355238460!9983346!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9482 invoked from network); 11 Dec 2012 15:07:40 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-16.tower-27.messagelabs.com with SMTP;
	11 Dec 2012 15:07:40 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 3BCB5C5617B;
	Tue, 11 Dec 2012 15:07:29 +0000 (GMT)
Date: Tue, 11 Dec 2012 15:07:28 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <62660DE6F077667C4FFB290E@nimrod.local>
In-Reply-To: <1355234985.843.10.camel@zakaz.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Alex Bligh <alex@alex.org.uk>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

(two messages in one)

>> In 4.2.0, adding this patch and attempting a live migrate using
>> the qemu device model (using xl) produces a seg fault due to
>> uninitialised variables.
>
> Really using xl? because the stack trace suggests otherwise.

Sorry, libxl.

>> Should I expect live-migrate of qemu-dm vms to work under 4.2.1?
>
> Do you mean VMs using the upstream "qemu-xen" device model, as opposed
> to the default "qemu-xen-traditional" model?

Yes, using upstream qemu-xen.

> Migration of HVM guests in 4.2.x is only supported with the
> qemu-xen-traditional device model and AFAIK there is no plan to backport
> this support to 4.2.

Ah. What would be involved in a backport? We use HVM guests under 4.2 and
need qemu-xen for various reasons.

> It still shouldn't crash though. I'm not sure how it got this far since
> libxl on 4.2 explicitly checks the DM version before attempting to
> migrate and will refuse to even try with a qemu-xen DM.

We had removed the check (per the pointer to the mail message I sent)
for the qemu-xen model.

> libxl__xs_read_checked will always either initialise the variable
> (perhaps to NULL) or return an error. On both callsites we check for
> error and "goto out".

It returns NULL if the error is not ENOENT I think.

> I think the crash is because the code uses got_ret without checking if
> it was NULL, which can happen if the path is not present. Ian (J) does
> that make sense as something which is allowed to happen?

gdb showed one pointer was NULL and the other pointed to some rubbish
(the latter is confusing).

Alex



>
> Ian.
>
>> If so, should the patch (or a modification thereof) to remove
>> the check from libxl_domain_suspend be applied to 4.2.1-testing
>> or is there more to do?
>>
>> I am very happy to commit test resources to this.
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7fffeffff700 (LWP 5995)]
>> 0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> (gdb) bt
>> # 0  0x00007ffff5a0862a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> # 1  0x00007ffff6d4e970 in libxl__domain_suspend_common_switch_qemu_logdirty (domid=<optimized out>, enable=<optimized out>, user=0x7ffff00024e8) at libxl_dom.c:728
>> # 2  0x00007ffff6d5c1ae in libxl__srm_callout_received_save (msg=0x7fffefffe41a " error", len=<optimized out>, user=0x7ffff00024e8) at _libxl_save_msgs_callout.c:162
>> # 3  0x00007ffff6d5b736 in helper_stdout_readable (egc=0x7fffefffe5a0, ev=0x7ffff0002560, fd=38, events=<optimized out>, revents=<optimized out>) at libxl_save_callout.c:283
>> # 4  0x00007ffff6d601f1 in afterpoll_internal (egc=0x7fffefffe5a0, poller=0x7ffff00028c0, nfds=4, fds=0x7ffff00048b0, now=...) at libxl_event.c:948
>> # 5  0x00007ffff6d604db in eventloop_iteration (egc=0x7fffefffe5a0, poller=0x7ffff00028c0) at libxl_event.c:1368
>> # 6  0x00007ffff6d616b3 in libxl__ao_inprogress (ao=0x7ffff0001d40, file=<optimized out>, line=<optimized out>, func=<optimized out>) at libxl_event.c:1614
>> # 7  0x00007ffff6d3ab75 in libxl_domain_suspend (ctx=<optimized out>, domid=1, fd=10, flags=<optimized out>, ao_how=<optimized out>) at libxl.c:796
>> # 8  0x000000000043677e in migrate_domain_send (ctx=0x7ffff0008860, domid=1, fd=10) at hypervisor/xen_libxl.c:587
>> # 9  0x000000000043698a in live_migrate_send (hyperconn=0x7ffff0001c70, server=0x7ffff0001cb0, node_ip=0x7ffff00041e0 "10.157.128.20", fd=10) at hypervisor/xen_libxl.c:647
>> # 10 0x0000000000422a70 in migrate_server_action (request=0x7ffff0002980) at action/node_action.c:1287
>> # 11 0x00000000004240c1 in runAction (socket_fd=8) at action/handleaction.c:138
>> # 12 0x00000000004179bd in runcomm (socket=0x8) at xvpagent.c:253
>> # 13 0x0000000000427502 in trackedthread_run (arg=0x66df20) at util/util.c:179
>> # 14 0x00007ffff5c9ce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
>> # 15 0x00007ffff59ca4bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> # 16 0x0000000000000000 in ?? ()
>>
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>



-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:11:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRU8-0000NA-HY; Tue, 11 Dec 2012 15:11:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TiRU6-0000N2-RV
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:11:03 +0000
Received: from [85.158.139.83:42203] by server-1.bemta-5.messagelabs.com id
	89/57-12813-50D47C05; Tue, 11 Dec 2012 15:11:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355238658!26664683!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTEwNzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6291 invoked from network); 11 Dec 2012 15:11:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:11:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="241342"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:10:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:10:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TiRU1-0007L4-BF;
	Tue, 11 Dec 2012 15:10:57 +0000
Date: Tue, 11 Dec 2012 15:10:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-908096761-1355238314=:17523"
Content-ID: <alpine.DEB.2.02.1212111505400.17523@kaball.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-908096761-1355238314=:17523
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1212111505401.17523@kaball.uk.xensource.com>

On Tue, 11 Dec 2012, G.R. wrote:
> On Tue, Dec 11, 2012 at 8:01 PM, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >
> > On Tue, 11 Dec 2012, G.R. wrote:
> > >
> > > On Tue, Dec 11, 2012 at 2:15 AM, Kay, Allen M <allen.m.kay@intel.com> wrote:
> > >       My understanding is that the i915 driver needs to look at the real PCH's device ID to apply HW workarounds.
> > >
> > >       One way to fix this is to make the device ID of the first PCH (00:01.0) reflect the device ID of
> > >       00:1f.0 on the host.  This way, i915 running as a guest will find a valid PCH device ID to make workaround
> > >       decisions with.
> > >
> > >       I don't know why it would make a difference if i915 is built into the kernel or as a module though.
> > >
> > >       Allen
> > >
> > > Thanks Allen for your input.
> > > But module vs. built-in is not the only difference. Another difference is the PVHVM build vs. pure HVM build.
> > > Both share the same PCI layout but different results. Any theory how to explain the difference? What makes the PVHVM version
> > > work?
> >
> > Please don't use HTML in emails.
> >
> 
> I'm using gmail and am not sure how to control html formatting.
> I explicitly used 'remove formatting' this time; does it look better now?
> Sorry for the inconvenience.

It is still html. You just need to click on "<< Plain Text".


> > PVHVM Linux guests set up interrupts differently: they request an event
> > channel for each legacy interrupt or MSI/MSI-X, then the hypervisor uses
> > these event channels to inject notifications into the guest, rather
> > than emulated interrupts or emulated MSIs.
> >
> Will this affect the result of pci_get_class() as called by the intel driver?
> If not, this still cannot explain the different behavior.
> Maybe I need to do one more experiment when I get time.

No, it doesn't.


> > Reading again the description of the bug, wouldn't it be better to just
> > fix the problem in Linux?
> > In fact this looks like a bug in intel_detect_pch(): QEMU is emulating a
> > PCI-PCI bridge and the driver is checking for a PCI-ISA bridge (to help
> > with virtualization?). Moreover it only checks the first PCI-ISA bridge.
> >
> > As far as I know Xen has never exposed a PCI-ISA bridge with vendor ==
> > Intel to the guest.
> >
> But why does Xen expose the host's PCI-ISA bridge as a PCI-PCI bridge? Doesn't that sound strange?

Yes, it does sound strange, possibly wrong.
--1342847746-908096761-1355238314=:17523
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-908096761-1355238314=:17523--


	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-908096761-1355238314=:17523"
Content-ID: <alpine.DEB.2.02.1212111505400.17523@kaball.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-908096761-1355238314=:17523
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1212111505401.17523@kaball.uk.xensource.com>

On Tue, 11 Dec 2012, G.R. wrote:
> On Tue, Dec 11, 2012 at 8:01 PM, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >
> > On Tue, 11 Dec 2012, G.R. wrote:
> > >
> > > On Tue, Dec 11, 2012 at 2:15 AM, Kay, Allen M <allen.m.kay@intel.com> wrote:
> > >       My understanding is that the i915 driver needs to look at the real PCH's device ID to apply HW workarounds.
> > >
> > >       One way to fix this is to make the device ID of the first PCH (00:01.0) reflect the device ID of
> > >       00:1f.0 on the host. This way, i915 running as a guest will find a valid PCH device ID to make workaround
> > >       decisions with.
> > >
> > >       I don't know why it would make a difference if i915 is built into the kernel or as a module though.
> > >
> > >       Allen
> > >
> > > Thanks Allen for your input.
> > > But module vs. built-in is not the only difference. Another difference is the PVHVM build vs. the pure HVM build.
> > > Both share the same PCI layout but give different results. Any theory to explain the difference? What makes the PVHVM version
> > > work?
> >
> > Please don't use HTML in emails.
> >
> 
> I'm using gmail and am not sure how to control HTML formatting.
> I explicitly used 'remove formatting' this time; does it look better now?
> Sorry for the inconvenience.

It is still HTML. You just need to click on "<< Plain Text".


> > PVHVM Linux guests set up interrupts differently: they request an event
> > channel for each legacy interrupt or MSI/MSI-X, then the hypervisor uses
> > these event channels to inject notifications into the guest, rather
> > than emulated interrupts or emulated MSIs.
> >
> Will this affect the result of pci_get_class() as called by the intel driver?
> If not, this still cannot explain the different behavior.
> Maybe I need to do one more experiment when I get time.

No, it doesn't.


> > Reading the description of the bug again, wouldn't it be better to just
> > fix the problem in Linux?
> > In fact this looks like a bug in intel_detect_pch(): QEMU is emulating a
> > PCI-PCI bridge and the driver is checking for a PCI-ISA bridge (to help
> > with virtualization?). Moreover, it only checks the first PCI-ISA bridge.
> >
> > As far as I know Xen has never exposed a PCI-ISA bridge with vendor ==
> > Intel to the guest.
> >
> But why does Xen expose the host's PCI-ISA bridge as a PCI-PCI bridge? Doesn't that sound strange?

Yes, it does sound strange, possibly wrong.
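
[Editor's note: the check under discussion can be illustrated with a small stand-alone sketch. This is not the actual i915 code; the device list, the helper name detect_pch(), and the Ibex Peak device id used in the test are stand-ins. It mimics how intel_detect_pch() asks pci_get_class() for the first PCI-ISA bridge, so a guest that only sees a QEMU-emulated PCI-PCI bridge never matches.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PCI_VENDOR_ID_INTEL      0x8086
#define PCI_CLASS_BRIDGE_ISA     0x0601
#define PCI_CLASS_BRIDGE_PCI     0x0604
#define INTEL_PCH_DEVICE_ID_MASK 0xff00

/* Mock of what pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, ...) lets the
 * driver see: a flat list of (vendor, device, class) tuples. */
struct mock_pci_dev {
    uint16_t vendor;
    uint16_t device;
    uint16_t class_code;   /* base class / sub-class, e.g. 0x0601 = PCI-ISA */
};

/* Mirrors the detection logic under discussion: look at the FIRST
 * PCI-ISA bridge only; return the masked PCH device id, or 0 when
 * detection fails.  A guest whose only bridge is PCI-PCI (0x0604),
 * as emulated by QEMU, never matches at all. */
static uint16_t detect_pch(const struct mock_pci_dev *devs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (devs[i].class_code != PCI_CLASS_BRIDGE_ISA)
            continue;                    /* not an ISA bridge: skipped */
        if (devs[i].vendor != PCI_VENDOR_ID_INTEL)
            return 0;                    /* only the first match is considered */
        return devs[i].device & INTEL_PCH_DEVICE_ID_MASK;
    }
    return 0;                            /* no ISA bridge found */
}
```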
--1342847746-908096761-1355238314=:17523
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-908096761-1355238314=:17523--


From xen-devel-bounces@lists.xen.org Tue Dec 11 15:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRUn-0000RE-Vk; Tue, 11 Dec 2012 15:11:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TiRUm-0000R0-I7
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:11:44 +0000
Received: from [85.158.139.211:21868] by server-14.bemta-5.messagelabs.com id
	B9/A3-09538-F2D47C05; Tue, 11 Dec 2012 15:11:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1355238702!19911557!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21267 invoked from network); 11 Dec 2012 15:11:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:11:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="63042"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 15:11:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 11 Dec 2012 15:11:40 +0000
Message-ID: <1355238698.843.42.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Date: Tue, 11 Dec 2012 15:11:38 +0000
In-Reply-To: <62660DE6F077667C4FFB290E@nimrod.local>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
	<62660DE6F077667C4FFB290E@nimrod.local>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-11 at 15:07 +0000, Alex Bligh wrote:
> > Migration of HVM guests in 4.2.x is only supported with the
> > qemu-xen-traditional device model and AFAIK there is no plan to backport
> > this support to 4.2.
> 
> Ah. What would be involved in a backport? We use HVM guests under 4.2 and
> need qemu-xen for various reasons.

AFAIK it's a pretty big qemu-side backport, plus you need the whole libxl
series, not just the final patch that you linked to.

I don't think this is a candidate for 4.2.x. You could try doing it
locally though I guess.

> > It still shouldn't crash though. I'm not sure how it got this far since
> > libxl on 4.2 explicitly checks the DM version before attempting to
> > migrate and will refuse to even try with a qemu-xen DM.
> 
> We had removed the check (per the pointer to the mail message I sent)
> for the qemu-xen model.

Oh well, then all bets are off really.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:14:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRXO-0000dK-IC; Tue, 11 Dec 2012 15:14:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TiRXN-0000d7-CV
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:14:25 +0000
Received: from [85.158.139.83:10083] by server-4.bemta-5.messagelabs.com id
	46/2A-14693-0DD47C05; Tue, 11 Dec 2012 15:14:24 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355238855!29411747!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10445 invoked from network); 11 Dec 2012 15:14:16 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:14:16 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so1894208wib.14
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 07:14:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=psohxqCGWwx0dGKmydw2C4i9p9N6D/40rXmWrUicss0=;
	b=zdvYSfc11wKUnvBSuBb+MVPcqnFrgJ8gsqGB1s1KgBMsiqZh6uwaaUoeiL7VgV7KB3
	qfUDLGF5SFywTc0HkpGH3hNn14JbFsX3JXpQluFph2ujA5li8ybh2ms/3ue9Mg0IxAAj
	4Z2P4ypsQo+/49Ik6Cu9hxMKp7GELY1VPz29dJ9e1Ko/Z6y1q73vE2hm1nLcfC1xPpBH
	sEI100lN35PuUinYrULG00Rb2sYb1EuHmZY2suBoLnFb3YpH+7H20C4uKOuCxVofm2YK
	TIUE61SMuxe3UxhfmFr2GdPqwV/uMpSz7bJi+tFL+Sphr//l9uJFxdLss1xcBTAKMVTA
	p2GQ==
Received: by 10.180.78.1 with SMTP id x1mr17250052wiw.17.1355238855735;
	Tue, 11 Dec 2012 07:14:15 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id p2sm16717991wic.7.2012.12.11.07.14.13
	(version=SSLv3 cipher=OTHER); Tue, 11 Dec 2012 07:14:14 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Tue, 11 Dec 2012 15:14:09 +0000
From: Keir Fraser <keir@xen.org>
To: Ross Philipson <Ross.Philipson@citrix.com>,
	Charles Arnold <carnold@suse.com>, xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCECFE41.55688%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
Thread-Index: Ac3W+edALZddPZ4+T3Wb6vxIEVGlcAAr1LQgAAI8Pyg=
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yes, you need to re-submit against current xen-unstable tip (which should be
easy since this code doesn't churn very much).

On 11/12/2012 14:11, "Ross Philipson" <Ross.Philipson@citrix.com> wrote:

> Yea I guess I should follow up on this. I did not manage to get it into 4.2
> but I thought it had clearance for 4.3. Do I need to resubmit the patch set?
> 
> Thanks
> Ross
> 
>> -----Original Message-----
>> From: Charles Arnold [mailto:carnold@suse.com]
>> Sent: Monday, December 10, 2012 12:15 PM
>> To: xen-devel
>> Cc: Ross Philipson
>> Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
>> 
>> I haven't seen any activity on this feature.  Is it still planned to be
>> included in Xen 4.3?
>> 
>> - Charles
>> 
>> On Wed, 2012-05-23 at 14:37 +0000, Ross Philipson wrote:
>>> This patch series introduces support for loading external blocks of
>>> firmware into a guest. These blocks can currently contain SMBIOS
>>> and/or ACPI firmware information that is used by HVMLOADER to modify a
>>> guest's virtual firmware at startup. These modules are only used by
>>> HVMLOADER.
>>> 
>>> The domain building code in libxenguest is passed these firmware
>>> blocks in the xc_hvm_build_args structure and loads them into the new
>>> guest, returning the load address. The loading is done in what will
>>> become the guest's low RAM area, just behind the load location for
>>> HVMLOADER. After their use by HVMLOADER they are effectively
>>> discarded. It is the caller's job to load the base address and length
>>> values into xenstore using the paths defined in the new hvm_defs.h
>>> header so HVMLOADER can locate the blocks.
>>> 
>>> Currently two types of firmware information are recognized and
>>> processed by HVMLOADER, though this could be extended.
>>> 
>>> 1. SMBIOS: The SMBIOS table building code will attempt to retrieve
>>> (for a predefined set of structure types) any passed-in structures. If a
>>> match is found, the passed-in table will be used, overriding the default
>>> values. In addition, the SMBIOS code will also enumerate and load any
>>> vendor-defined structures (in the range of types 128 - 255) that are
>>> passed in. See the hvm_defs.h header for information on the format
>>> of this block.
>>> 2. ACPI: Static and secondary descriptor tables can be added to the
>>> set of ACPI tables built by HVMLOADER. The ACPI builder code will
>>> enumerate passed-in tables and add them at the end of the secondary
>>> table list. See the hvm_defs.h header for information on the format of
>>> this block.
>>> 
>>> There are 4 patches in the series:
>>> 01 - Add HVM definitions header for firmware passthrough support.
>>> 02 - Xen control tools support for loading the firmware blocks.
>>> 03 - Passthrough support for SMBIOS.
>>> 04 - Passthrough support for ACPI.
>>> 
>>> Note this is version 3 of this patch set. Some of the differences:
>>>  - Generic module support removed; overall functionality was
>>>    simplified.
>>>  - Use of xenstore to supply firmware passthrough information to
>>>    HVMLOADER.
>>>  - Fixed issues pointed out in the SMBIOS processing code.
>>>  - Created defines for the SMBIOS handles in use and switched to using
>>>    the xenstore values in the new hvm_defs.h file.
>>> 
>>> Signed-off-by: Ross Philipson <ross.philipson@xxxxxxxxxx>
>>> 
>> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
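
[Editor's note: the xenstore handoff the series describes, sketched as a toy. The key paths, the mock store, and the helper names are all invented for illustration; the real paths live in the series' hvm_defs.h. The idea: the domain builder records each blob's guest-physical base and length, and HVMLOADER reads them back later.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy key/value store standing in for xenstore.  The actual key paths
 * are defined in the series' hvm_defs.h; the names below are invented. */
struct kv { const char *key; char val[32]; };
static struct kv store[8];
static int nkeys;

/* Record a value under a key, the way the domain builder would. */
static void mock_xs_write(const char *key, uint64_t value)
{
    store[nkeys].key = key;
    snprintf(store[nkeys].val, sizeof store[nkeys].val, "%llu",
             (unsigned long long)value);
    nkeys++;
}

/* Look a value back up, the way HVMLOADER would at boot. */
static uint64_t mock_xs_read(const char *key)
{
    for (int i = 0; i < nkeys; i++)
        if (strcmp(store[i].key, key) == 0)
            return strtoull(store[i].val, NULL, 10);
    return 0;   /* key absent: no blob was passed in */
}

/* Domain-builder side: after copying a firmware blob into guest RAM at
 * `base`, publish its location and size so HVMLOADER can find it. */
static void publish_smbios_blob(uint64_t base, uint64_t length)
{
    mock_xs_write("hvmloader/smbios-address", base);   /* illustrative path */
    mock_xs_write("hvmloader/smbios-length", length);  /* illustrative path */
}
```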



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:18:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRbN-0000q6-7w; Tue, 11 Dec 2012 15:18:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carnold@suse.com>) id 1TiRbL-0000py-Gj
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:18:31 +0000
Received: from [85.158.139.211:16962] by server-2.bemta-5.messagelabs.com id
	AD/3B-16162-6CE47C05; Tue, 11 Dec 2012 15:18:30 +0000
X-Env-Sender: carnold@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355239100!18542442!1
X-Originating-IP: [137.65.248.74]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21485 invoked from network); 11 Dec 2012 15:18:27 -0000
Received: from novprvoes0310.provo.novell.com (HELO
	novprvoes0310.provo.novell.com) (137.65.248.74)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 15:18:27 -0000
Received: from INET-PRV-MTA by novprvoes0310.provo.novell.com
	with Novell_GroupWise; Tue, 11 Dec 2012 08:18:16 -0700
Message-Id: <50C6EC470200009100083E54@novprvoes0310.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 11 Dec 2012 08:18:15 -0700
From: "Charles Arnold" <carnold@suse.com>
To: "Ross Philipson" <Ross.Philipson@citrix.com>,
	"xen-devel" <xen-devel@lists.xen.org>
References: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12/11/2012 at 07:11 AM, in message
<831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>, Ross
Philipson <Ross.Philipson@citrix.com> wrote: 
> Yea I guess I should follow up on this. I did not manage to get it into 4.2 
> but I thought it had clearance for 4.3. Do I need to resubmit the patch set?

It has been several months since you last submitted.
You probably should resubmit the patches based on the current unstable code base.

- Charles



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:26:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRj7-00011b-7L; Tue, 11 Dec 2012 15:26:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TiRj5-00011W-Aa
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:26:31 +0000
Received: from [85.158.139.211:22120] by server-13.bemta-5.messagelabs.com id
	9E/D9-10716-6A057C05; Tue, 11 Dec 2012 15:26:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355239588!17456742!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTEwNzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14733 invoked from network); 11 Dec 2012 15:26:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:26:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="244381"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:26:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:26:27 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TiRj1-0007Zw-H8;
	Tue, 11 Dec 2012 15:26:27 +0000
Date: Tue, 11 Dec 2012 15:26:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355238698.843.42.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
	<62660DE6F077667C4FFB290E@nimrod.local>
	<1355238698.843.42.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Alex Bligh <alex@alex.org.uk>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 11 Dec 2012, Ian Campbell wrote:
> On Tue, 2012-12-11 at 15:07 +0000, Alex Bligh wrote:
> > > Migration of HVM guests in 4.2.x is only supported with the
> > > qemu-xen-traditional device model and AFAIK there is no plan to backport
> > > this support to 4.2.
> > 
> > Ah. What would be involved in a backport? We use HVM guests under 4.2 and
> > need qemu-xen for various reasons.
> 
> AFAIK its a pretty big qemu-side backport, plus you need the whole libxl
> series not just the final patch that you linked to.
> 
> I don't think this is a candidate for 4.2.x. You could try doing it
> locally though I guess.

This is the patch series that needs to be backported in QEMU:
http://marc.info/?l=qemu-devel&m=134920288412400&w=2

And this is the libxl counterpart:
http://marc.info/?l=xen-devel&m=134944750724252

I would be OK with backporting the QEMU side, but I'll leave the
decision on the libxl side up to you.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:36:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRsb-0001X8-6d; Tue, 11 Dec 2012 15:36:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TiRsY-0001Wc-Nk
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:36:18 +0000
Received: from [193.109.254.147:23295] by server-4.bemta-14.messagelabs.com id
	DB/48-18856-2F257C05; Tue, 11 Dec 2012 15:36:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355240173!9610334!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23553 invoked from network); 11 Dec 2012 15:36:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:36:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="267305"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:36:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:36:12 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TiRsS-0007kQ-8V;
	Tue, 11 Dec 2012 15:36:12 +0000
MIME-Version: 1.0
X-Mercurial-Node: def9d03429f677ace004f2a6be541fc6090031ea
Message-ID: <def9d03429f677ace004.1355240056@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Tue, 11 Dec 2012 15:34:16 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 1 of 2 V4] x86/IST: Create set_ist() helper
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/hvm/svm/svm.c      |  12 ++++++------
 xen/arch/x86/x86_64/traps.c     |   6 +++---
 xen/include/asm-x86/processor.h |  18 ++++++++++++++----
 3 files changed, 23 insertions(+), 13 deletions(-)


... to save using open-coded bitwise operations, and update all IST
manipulation sites to use the function.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v1:
 * Change ASSERT() to assert against IST_MAX, the maximum Xen used IST
   value, rather than the maximum possible IST value.
 * Removed second '& 7UL' as it would be optimised out in all acceptable
   use cases.

diff -r bc624b00d6d6 -r def9d03429f6 xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -869,9 +869,9 @@ static void svm_ctxt_switch_from(struct 
     svm_vmload(per_cpu(root_vmcb, cpu));
 
     /* Resume use of ISTs now that the host TR is reinstated. */
-    idt_tables[cpu][TRAP_double_fault].a  |= IST_DF << 32;
-    idt_tables[cpu][TRAP_nmi].a           |= IST_NMI << 32;
-    idt_tables[cpu][TRAP_machine_check].a |= IST_MCE << 32;
+    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_DF);
+    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NMI);
+    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_MCE);
 }
 
 static void svm_ctxt_switch_to(struct vcpu *v)
@@ -893,9 +893,9 @@ static void svm_ctxt_switch_to(struct vc
      * Cannot use ISTs for NMI/#MC/#DF while we are running with the guest TR.
      * But this doesn't matter: the IST is only req'd to handle SYSCALL/SYSRET.
      */
-    idt_tables[cpu][TRAP_double_fault].a  &= ~(7UL << 32);
-    idt_tables[cpu][TRAP_nmi].a           &= ~(7UL << 32);
-    idt_tables[cpu][TRAP_machine_check].a &= ~(7UL << 32);
+    set_ist(&idt_tables[cpu][TRAP_double_fault],  IST_NONE);
+    set_ist(&idt_tables[cpu][TRAP_nmi],           IST_NONE);
+    set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
 
     svm_restore_dr(v);
 
diff -r bc624b00d6d6 -r def9d03429f6 xen/arch/x86/x86_64/traps.c
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -370,9 +370,9 @@ void __devinit subarch_percpu_traps_init
     {
         /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
         set_intr_gate(TRAP_double_fault, &double_fault);
-        idt_table[TRAP_double_fault].a  |= IST_DF << 32;
-        idt_table[TRAP_nmi].a           |= IST_NMI << 32;
-        idt_table[TRAP_machine_check].a |= IST_MCE << 32;
+        set_ist(&idt_table[TRAP_double_fault],  IST_DF);
+        set_ist(&idt_table[TRAP_nmi],           IST_NMI);
+        set_ist(&idt_table[TRAP_machine_check], IST_MCE);
 
         /*
          * The 32-on-64 hypercall entry vector is only accessible from ring 1.
diff -r bc624b00d6d6 -r def9d03429f6 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -425,10 +425,20 @@ struct tss_struct {
     u8 __cacheline_filler[24];
 } __cacheline_aligned __attribute__((packed));
 
-#define IST_DF  1UL
-#define IST_NMI 2UL
-#define IST_MCE 3UL
-#define IST_MAX 3UL
+#define IST_NONE 0UL
+#define IST_DF   1UL
+#define IST_NMI  2UL
+#define IST_MCE  3UL
+#define IST_MAX  3UL
+
+/* Set the interrupt stack table used by a particular interrupt
+ * descriptor table entry. */
+static always_inline void set_ist(idt_entry_t * idt, unsigned long ist)
+{
+    /* ist is a 3 bit field, 32 bits into the idt entry. */
+    ASSERT( ist <= IST_MAX );
+    idt->a = ( idt->a & ~(7UL << 32) ) | ( ist << 32 );
+}
 
 #define IDT_ENTRIES 256
 extern idt_entry_t idt_table[];

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRsa-0001Wy-Ph; Tue, 11 Dec 2012 15:36:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TiRsY-0001Wd-QT
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:36:18 +0000
Received: from [193.109.254.147:23292] by server-15.bemta-14.messagelabs.com
	id BE/86-12105-2F257C05; Tue, 11 Dec 2012 15:36:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355240173!9610334!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23478 invoked from network); 11 Dec 2012 15:36:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="267304"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:36:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:36:12 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TiRsS-0007kQ-7x;
	Tue, 11 Dec 2012 15:36:12 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Tue, 11 Dec 2012 15:34:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0 of 2 V4] Kexec NMI/MCE fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is V4 of the series.

Incorporated all previous feedback, and added IDT entry helper functions
to address the present race conditions and IST issues.

 xen/arch/x86/hvm/svm/svm.c      |   12 ++--
 xen/arch/x86/x86_64/traps.c     |    6 +-
 xen/include/asm-x86/processor.h |   19 ++++-
 xen/arch/x86/crash.c            |  117 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
 xen/include/asm-x86/desc.h      |   42 ++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 8 files changed, 224 insertions(+), 29 deletions(-)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRsa-0001Wy-Ph; Tue, 11 Dec 2012 15:36:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TiRsY-0001Wd-QT
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:36:18 +0000
Received: from [193.109.254.147:23292] by server-15.bemta-14.messagelabs.com
	id BE/86-12105-2F257C05; Tue, 11 Dec 2012 15:36:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355240173!9610334!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23478 invoked from network); 11 Dec 2012 15:36:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="267304"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:36:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:36:12 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TiRsS-0007kQ-7x;
	Tue, 11 Dec 2012 15:36:12 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Tue, 11 Dec 2012 15:34:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0 of 2 V4] Kexec NMI/MCE fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is V4 of the series.

Incorporated all previous feedback, and added IDT entry helper functions
to address the present race conditions and IST issues.

 xen/arch/x86/hvm/svm/svm.c      |   12 ++--
 xen/arch/x86/x86_64/traps.c     |    6 +-
 xen/include/asm-x86/processor.h |   19 ++++-
 xen/arch/x86/crash.c            |  117 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
 xen/include/asm-x86/desc.h      |   42 ++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 8 files changed, 224 insertions(+), 29 deletions(-)

~Andrew


From xen-devel-bounces@lists.xen.org Tue Dec 11 15:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRsb-0001XF-JQ; Tue, 11 Dec 2012 15:36:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TiRsZ-0001Wf-1z
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:36:19 +0000
Received: from [85.158.137.99:16010] by server-2.bemta-3.messagelabs.com id
	9D/40-11239-1F257C05; Tue, 11 Dec 2012 15:36:17 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355240173!18856196!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTEwNzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13392 invoked from network); 11 Dec 2012 15:36:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="246406"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:36:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:36:12 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TiRsS-0007kQ-8q;
	Tue, 11 Dec 2012 15:36:12 +0000
MIME-Version: 1.0
X-Mercurial-Node: 528bd4030389d6b9ec6e36cd018f04fb10099346
Message-ID: <528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Tue, 11 Dec 2012 15:34:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 2 of 2 V4] x86/kexec: Change NMI and MCE
	handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/crash.c            |  117 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
 xen/include/asm-x86/desc.h      |   42 ++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 5 files changed, 201 insertions(+), 15 deletions(-)


Experimentally, certain crash kernels will triple fault very early if
started with NMIs disabled.  This was discovered while testing a debug
keyhandler which deliberately created a reentrant NMI, causing stack
corruption.

Because of this bug, and because future changes to the NMI handling will
make the kexec path more fragile, take the time now to bullet-proof the
kexec behaviour so that it is safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for use during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no-op handler which irets immediately.  It is actually
    located in the middle of enable_nmis to reuse the iret instruction,
    rather than having a single lone aligned iret inflating the code
    size.

And adds three new IDT entry helper routines:
 * _write_gate_lower
    This is a substitute for using cmpxchg16b to update a 128bit
    structure at once.  It assumes that the top 64 bits are unchanged
    (and ASSERT()s the fact) and performs a regular write on the lower
    64 bits.
 * _set_gate_lower
    This is functionally equivalent to the already present _set_gate(),
    except it uses _write_gate_lower rather than updating both 64bit
    values.
 * _update_gate_addr_lower
    This is designed to update an IDT entry handler only, without
    altering any other settings in the entry.  It also uses
    _write_gate_lower.

The IDT entry helpers are required because:
  * It is unsafe to attempt a disable/update/re-enable cycle on the NMI
    or MCE IDT entries.
  * We need to be able to update NMI handlers without changing the IST
    entry.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpu's NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never going to execute a sysret back to a PV vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it from getting
    stuck in an NMI context and causing a hang instead of a crash.  The
    non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we re-enter midway through, the whole
    operation is attempted again, in preference to leaving it
    incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand-craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v3:
 * Added IDT entry helpers to safely update NMI/MCE entries.

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r def9d03429f6 -r 528bd4030389 xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,128 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receiving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immune to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which causes them to write state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this MCE and
+                 * NMI handler (shortly to become a nop) as there is a 1
+                 * instruction race window where NMIs could be
+                 * re-enabled and corrupt the exception frame, leaving
+                 * us unable to continue on this crash path (which half
+                 * defeats the point of using the nop handler in the
+                 * first place).
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+            }
+            else
+                /* Do not update stack table for other pcpus. */
+                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r def9d03429f6 -r 528bd4030389 xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
         .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
         .limit = LAST_RESERVED_GDT_BYTE
     };
+    int i;
 
     /* We are about to permenantly jump out of the Xen context into the kexec
      * purgatory code.  We really dont want to be still servicing interupts.
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r def9d03429f6 -r 528bd4030389 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
+ENTRY(enable_nmis)
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        retq
+
+/* No op trap handler.  Required for kexec crash path.  This is not
+ * declared with the ENTRY() macro to avoid wasted alignment space.
+ */
+.globl trap_nop
+trap_nop:
+        iretq
+
+
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r def9d03429f6 -r 528bd4030389 xen/include/asm-x86/desc.h
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -106,6 +106,19 @@ typedef struct {
     u64 a, b;
 } idt_entry_t;
 
+/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
+ * bits of the address not changing, which is a safe assumption until our
+ * code size exceeds 4GB.
+ *
+ * Ideally, we would use cmpxchg16b, but this is not supported on some
+ * old AMD 64bit capable processors, and has no safe equivalent.
+ */
+static inline void _write_gate_lower(idt_entry_t * gate, idt_entry_t * new)
+{
+    ASSERT(gate->b == new->b);
+    *(volatile unsigned long *)&gate->a = new->a;
+}
+
 #define _set_gate(gate_addr,type,dpl,addr)               \
 do {                                                     \
     (gate_addr)->a = 0;                                  \
@@ -122,6 +135,35 @@ do {                                    
         (1UL << 47);                                     \
 } while (0)
 
+#define _set_gate_lower(gate_addr,type,dpl,addr)         \
+    do {                                                 \
+    idt_entry_t idte;                                    \
+    idte.b = (gate_addr)->b;                             \
+    idte.a =                                            \
+        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) | \
+        ((unsigned long)(dpl) << 45) |                   \
+        ((unsigned long)(type) << 40) |                  \
+        ((unsigned long)(addr) & 0xFFFFUL) |             \
+        ((unsigned long)__HYPERVISOR_CS64 << 16) |       \
+        (1UL << 47);                                     \
+    _write_gate_lower((gate_addr), &idte);               \
+} while (0)
+
+/* Update the lower half handler of an IDT Entry, without changing any
+ * other configuration. */
+static inline void _update_gate_addr_lower(idt_entry_t * gate, void * addr)
+{
+    idt_entry_t idte;
+    idte.a = gate->a;
+
+    idte.b = ((unsigned long)(addr) >> 32);
+    idte.a &= 0x0000FFFFFFFF0000ULL;
+    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(addr) & 0xFFFFUL);
+
+    _write_gate_lower(gate, &idte);
+}
+
 #define _set_tssldt_desc(desc,addr,limit,type)           \
 do {                                                     \
     (desc)[0].b = (desc)[1].b = 0;                       \
diff -r def9d03429f6 -r 528bd4030389 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);


From xen-devel-bounces@lists.xen.org Tue Dec 11 15:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiRsb-0001XF-JQ; Tue, 11 Dec 2012 15:36:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TiRsZ-0001Wf-1z
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:36:19 +0000
Received: from [85.158.137.99:16010] by server-2.bemta-3.messagelabs.com id
	9D/40-11239-1F257C05; Tue, 11 Dec 2012 15:36:17 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355240173!18856196!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTEwNzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13392 invoked from network); 11 Dec 2012 15:36:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,259,1355097600"; 
   d="scan'208";a="246406"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 15:36:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 10:36:12 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TiRsS-0007kQ-8q;
	Tue, 11 Dec 2012 15:36:12 +0000
MIME-Version: 1.0
X-Mercurial-Node: 528bd4030389d6b9ec6e36cd018f04fb10099346
Message-ID: <528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Tue, 11 Dec 2012 15:34:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 2 of 2 V4] x86/kexec: Change NMI and MCE
	handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/crash.c            |  117 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
 xen/include/asm-x86/desc.h      |   42 ++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 5 files changed, 201 insertions(+), 15 deletions(-)


Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and that the future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour to be safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for using during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no op handler which irets immediately.  It is actually in
    the middle of enable_nmis to reuse the iret instruction, without
    having a single lone aligned iret inflating the code side.

And adds three new IDT entry helper routines:
 * _write_gate_lower
    This is a substitute for using cmpxchg16b to update a 128bit
    structure at once.  It assumes that the top 64 bits are unchanged
    (and ASSERT()s the fact) and performs a regular write on the lower
    64 bits.
 * _set_gate_lower
    This is functionally equivalent to the already present _set_gate(),
    except it uses _write_gate_lower rather than updating both 64bit
    values.
 * _update_gate_addr_lower
    This is designed to update an IDT entry handler only, without
    altering any other settings in the entry.  It also uses
    _write_gate_lower.

The IDT entry helpers are required because:
  * Is it unsafe to attempt a disable/update/re-enable cycle on the NMI
    or MCE IDT entries.
  * We need to be able to update NMI handlers without changing the IST
    entry.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpus NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it getting stuck in
    an NMI context, causing a hang instead of crash.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  In the case where we reenter midway through,
    attempt the whole operation again in preference to not completing
    it in the first place.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v3:
 * Added IDT entry helpers to safely update NMI/MCE entries.

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r def9d03429f6 -r 528bd4030389 xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,128 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immue to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor mans self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which causes them to save state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this MCE and
+                 * NMI handler (shortly to become a nop) as there is a 1
+                 * instruction race window where NMIs could be
+                 * re-enabled and corrupt the exception frame, leaving
+                 * us unable to continue on this crash path (which half
+                 * defeats the point of using the nop handler in the
+                 * first place).
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+            }
+            else
+                /* Do not update stack table for other pcpus. */
+                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r def9d03429f6 -r 528bd4030389 xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
         .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
         .limit = LAST_RESERVED_GDT_BYTE
     };
+    int i;
 
     /* We are about to permanently jump out of the Xen context into the kexec
      * purgatory code.  We really don't want to be still servicing interrupts.
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r def9d03429f6 -r 528bd4030389 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
+ENTRY(enable_nmis)
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        retq
+
+/* No-op trap handler.  Required for the kexec crash path.  This is not
+ * declared with the ENTRY() macro to avoid wasted alignment space.
+ */
+.globl trap_nop
+trap_nop:
+        iretq
+
+
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r def9d03429f6 -r 528bd4030389 xen/include/asm-x86/desc.h
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -106,6 +106,19 @@ typedef struct {
     u64 a, b;
 } idt_entry_t;
 
+/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
+ * bits of the address not changing, which is a safe aumption until our
+ * code size exceeds 4GB.
+ *
+ * Ideally, we would use cmpxchg16b, but this is not supported on some
+ * old AMD 64bit capable processors, and has no safe equivelent.
+ */
+static inline void _write_gate_lower(idt_entry_t * gate, idt_entry_t * new)
+{
+    ASSERT(gate->b == new->b);
+    *(volatile unsigned long *)&gate->a = new->a;
+}
+
 #define _set_gate(gate_addr,type,dpl,addr)               \
 do {                                                     \
     (gate_addr)->a = 0;                                  \
@@ -122,6 +135,35 @@ do {                                    
         (1UL << 47);                                     \
 } while (0)
 
+#define _set_gate_lower(gate_addr,type,dpl,addr)         \
+    do {                                                 \
+    idt_entry_t idte;                                    \
+    idte.b = (gate_addr)->b;                             \
+    idte.a =                                             \
+        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) | \
+        ((unsigned long)(dpl) << 45) |                   \
+        ((unsigned long)(type) << 40) |                  \
+        ((unsigned long)(addr) & 0xFFFFUL) |             \
+        ((unsigned long)__HYPERVISOR_CS64 << 16) |       \
+        (1UL << 47);                                     \
+    _write_gate_lower((gate_addr), &idte);               \
+} while (0)
+
+/* Update the lower half handler of an IDT Entry, without changing any
+ * other configuration. */
+static inline void _update_gate_addr_lower(idt_entry_t * gate, void * addr)
+{
+    idt_entry_t idte;
+    idte.a = gate->a;
+
+    idte.b = ((unsigned long)(addr) >> 32);
+    idte.a &= 0x0000FFFFFFFF0000ULL;
+    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(addr) & 0xFFFFUL);
+
+    _write_gate_lower(gate, &idte);
+}
+
 #define _set_tssldt_desc(desc,addr,limit,type)           \
 do {                                                     \
     (desc)[0].b = (desc)[1].b = 0;                       \
diff -r def9d03429f6 -r 528bd4030389 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 15:59:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 15:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiSEV-0002dH-TW; Tue, 11 Dec 2012 15:58:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TiSEU-0002dB-Ct
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:58:58 +0000
Received: from [85.158.139.83:20619] by server-6.bemta-5.messagelabs.com id
	E0/E6-30498-14857C05; Tue, 11 Dec 2012 15:58:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355241536!29393731!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM5Nzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25034 invoked from network); 11 Dec 2012 15:58:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Dec 2012 15:58:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Dec 2012 15:58:56 +0000
Message-Id: <50C7664D02000078000AFBE1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 11 Dec 2012 15:58:53 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
	<528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
In-Reply-To: <528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2 of 2 V4] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.12.12 at 16:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
> +     * invokes do_nmi_crash (above), which cause them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +
> +            if ( i == cpu )
> +            {
> +                /* Disable the interrupt stack tables for this MCE and
> +                 * NMI handler (shortly to become a nop) as there is a 1
> +                 * instruction race window where NMIs could be
> +                 * re-enabled and corrupt the exception frame, leaving
> +                 * us unable to continue on this crash path (which half
> +                 * defeats the point of using the nop handler in the
> +                 * first place).
> +                 *
> +                 * This update is safe from a security point of view, as
> +                 * this pcpu is never going to try to sysret back to a
> +                 * PV vcpu.
> +                 */

This comment appears to have become stale with the latest
changes.

> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);

No need for the extra & on functions and arrays.

> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
> +            }
> +            else
> +                /* Do not update stack table for other pcpus. */
> +                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
> +        }
> +

>...

> +/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
> + * bits of the address not changing, which is a safe aumption until our

assumption

> + * code size exceeds 4GB.

1Gb.

> + *
> + * Ideally, we would use cmpxchg16b, but this is not supported on some
> + * old AMD 64bit capable processors, and has no safe equivelent.

equivalent

> + */
> +static inline void _write_gate_lower(idt_entry_t * gate, idt_entry_t * new)

static inline void _write_gate_lower(idt_entry_t *gate, idt_entry_t *new)

(similar extra blanks elsewhere)

Also, to make clear which of the two is the entry written, const-
qualifying the other one might be a good idea.
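For illustration, a const-qualified variant might look like the sketch below. The idt_entry_t here is a simplified stand-in for the purpose of the example, not Xen's actual definition, and assert() stands in for Xen's ASSERT():

```c
#include <assert.h>

/* Simplified stand-in for Xen's idt_entry_t; illustrative only. */
typedef struct { unsigned long a, b; } idt_entry_t;

/* Const-qualifying the source entry makes the direction of the copy
 * obvious from the signature alone: 'gate' is written, 'new' is read. */
static inline void _write_gate_lower(idt_entry_t *gate, const idt_entry_t *new)
{
    /* Only the lower quadword may change; the upper must already match. */
    assert(gate->b == new->b);
    *(volatile unsigned long *)&gate->a = new->a;
}
```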

> +{
> +    ASSERT(gate->b == new->b);
> +    *(volatile unsigned long *)&gate->a = new->a;

volatile? And if so, why not volatile-qualify the function parameter?

> +}
> +
>  #define _set_gate(gate_addr,type,dpl,addr)               \
>  do {                                                     \
>      (gate_addr)->a = 0;                                  \
> @@ -122,6 +135,35 @@ do {                                    
>          (1UL << 47);                                     \
>  } while (0)
>  
> +#define _set_gate_lower(gate_addr,type,dpl,addr)         \
> +    do {                                                 \
> +    idt_entry_t idte;                                    \
> +    idte.b = (gate_addr)->b;                             \
> +    idte.a =                                            \
> +        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) | \
> +        ((unsigned long)(dpl) << 45) |                   \
> +        ((unsigned long)(type) << 40) |                  \
> +        ((unsigned long)(addr) & 0xFFFFUL) |             \
> +        ((unsigned long)__HYPERVISOR_CS64 << 16) |       \
> +        (1UL << 47);                                     \
> +    _write_gate_lower((gate_addr), &idte);               \

No need for extra inner parentheses.

> +} while (0)
> +
> +/* Update the lower half handler of an IDT Entry, without changing any
> + * other configuration. */
> +static inline void _update_gate_addr_lower(idt_entry_t * gate, void * addr)

Any reason for this being an inline function and the other being
a macro?

Jan

> +{
> +    idt_entry_t idte;
> +    idte.a = gate->a;
> +
> +    idte.b = ((unsigned long)(addr) >> 32);
> +    idte.a &= 0x0000FFFFFFFF0000ULL;
> +    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
> +        ((unsigned long)(addr) & 0xFFFFUL);
> +
> +    _write_gate_lower(gate, &idte);
> +}
> +
>  #define _set_tssldt_desc(desc,addr,limit,type)           \
>  do {                                                     \
>      (desc)[0].b = (desc)[1].b = 0;                       \


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 17:07:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 17:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiTIP-0004lc-CW; Tue, 11 Dec 2012 17:07:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TiTIN-0004lX-Ht
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 17:07:03 +0000
Received: from [193.109.254.147:60024] by server-3.bemta-14.messagelabs.com id
	31/ED-01317-63867C05; Tue, 11 Dec 2012 17:07:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355245620!9735671!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTY1MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11008 invoked from network); 11 Dec 2012 17:07:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 17:07:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBBH6vj5022982
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Dec 2012 17:06:58 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBBH6uRP020809
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 11 Dec 2012 17:06:56 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBBH6uu1021868; Tue, 11 Dec 2012 11:06:56 -0600
Received: from localhost.localdomain (/130.35.70.61)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 11 Dec 2012 09:06:55 -0800
Date: Tue, 11 Dec 2012 12:06:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Message-ID: <20121211170653.GG9347@localhost.localdomain>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, December 07, 2012 10:09 PM
> > To: Xu, Dongxiao
> > Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
> > hook
> > 
> > On Thu, Dec 06, 2012 at 09:08:42PM +0800, Dongxiao Xu wrote:
> > > While mapping sg buffers, checking to cross page DMA buffer is also
> > > needed. If the guest DMA buffer crosses page boundary, Xen should
> > > exchange contiguous memory for it.
> > 
> > So this is when we cross those 2MB contingous swatch of buffers.
> > Wouldn't we get the same problem with the 'map_page' call? If the driver tried
> > to map say a 4MB DMA region?
> 
> Yes, it also needs such check, as I just replied to Jan's mail.
> 
> > 
> > What if this check was done in the routines that provide the software static
> > buffers and there try to provide a nice DMA contingous swatch of pages?
> 
> Yes, this approach also came to our mind, which needs to modify the driver itself.
> If so, it requires driver not using such static buffers (e.g., from kmalloc) to do DMA even if the buffer is continuous in native.

I am a bit lost here.

Is the issue you found only with drivers that do not use DMA API?
Can you perhaps point me to the code that triggered this fix in the first
place?
> Is this acceptable by kernel/driver upstream?

I am still not completely clear on what you had in mind. The one method I
thought about that might help in this is to have Xen-SWIOTLB
track which memory ranges were exchanged (so xen_swiotlb_fixup
would save the *buf and the size for each call to
xen_create_contiguous_region in a list or array).

When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called, they
would consult said array/list to see if the region they retrieved
crosses said 2MB chunks. If so... and here I am unsure of
what would be the best way to proceed.
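As a rough user-space sketch of the chunk-crossing test being discussed, assuming the exchanged regions are 2MB-aligned chunks (the constant and function name below are made up for illustration, not actual xen-swiotlb code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed chunk size: xen_create_contiguous_region() is taken here to
 * produce 2MB (order-9) machine-contiguous regions. */
#define CONTIG_CHUNK_SIZE (1UL << 21)

/* Does the buffer [dma, dma + len) leave the 2MB-aligned chunk it starts
 * in?  If it does, machine-contiguity of the underlying frames is no
 * longer guaranteed and an exchange would be needed. */
static bool crosses_contig_chunk(uintptr_t dma, size_t len)
{
    uintptr_t first = dma & ~(uintptr_t)(CONTIG_CHUNK_SIZE - 1);
    uintptr_t last  = (dma + len - 1) & ~(uintptr_t)(CONTIG_CHUNK_SIZE - 1);

    return first != last;
}
```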

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 17:07:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 17:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiTIP-0004lc-CW; Tue, 11 Dec 2012 17:07:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TiTIN-0004lX-Ht
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 17:07:03 +0000
Received: from [193.109.254.147:60024] by server-3.bemta-14.messagelabs.com id
	31/ED-01317-63867C05; Tue, 11 Dec 2012 17:07:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355245620!9735671!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMTY1MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11008 invoked from network); 11 Dec 2012 17:07:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 17:07:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBBH6vj5022982
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Dec 2012 17:06:58 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBBH6uRP020809
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 11 Dec 2012 17:06:56 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBBH6uu1021868; Tue, 11 Dec 2012 11:06:56 -0600
Received: from localhost.localdomain (/130.35.70.61)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 11 Dec 2012 09:06:55 -0800
Date: Tue, 11 Dec 2012 12:06:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Message-ID: <20121211170653.GG9347@localhost.localdomain>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, December 07, 2012 10:09 PM
> > To: Xu, Dongxiao
> > Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
> > hook
> > 
> > On Thu, Dec 06, 2012 at 09:08:42PM +0800, Dongxiao Xu wrote:
>> > While mapping sg buffers, checking for cross-page DMA buffers is also
>> > needed. If the guest DMA buffer crosses a page boundary, Xen should
>> > exchange contiguous memory for it.
> > 
>> So this is when we cross those 2MB contiguous swaths of buffers.
>> Wouldn't we get the same problem with the 'map_page' call? If the driver tried
>> to map, say, a 4MB DMA region?
> 
> Yes, it also needs such check, as I just replied to Jan's mail.
> 
> > 
>> What if this check was done in the routines that provide the software static
>> buffers, and there tried to provide a nice DMA-contiguous swath of pages?
> 
> Yes, this approach also came to our mind; it needs modification of the driver itself.
> If so, it requires the driver not to use such static buffers (e.g., from kmalloc) for DMA, even if the buffer is contiguous natively.

I am a bit lost here.

Is the issue you found only with drivers that do not use DMA API?
Can you perhaps point me to the code that triggered this fix in the first
place?
> Is this acceptable by kernel/driver upstream?

I am still not completely clear on what you had in mind. The one method I
thought about that might help in this is to have Xen-SWIOTLB
track which memory ranges were exchanged (so xen_swiotlb_fixup
would save the *buf and the size for each call to
xen_create_contiguous_region in a list or array).

When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called, they
would consult said array/list to see if the region they retrieved
crosses said 2MB chunks. If so... and here I am unsure of
what would be the best way to proceed.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 17:07:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 17:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiTIm-0004ml-Pm; Tue, 11 Dec 2012 17:07:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1TiTIl-0004mP-7f
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 17:07:27 +0000
Received: from [85.158.139.211:24618] by server-1.bemta-5.messagelabs.com id
	B7/52-12813-E4867C05; Tue, 11 Dec 2012 17:07:26 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355245644!18247393!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2173 invoked from network); 11 Dec 2012 17:07:25 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-8.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	11 Dec 2012 17:07:25 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Tue, 11 Dec 2012 18:07:24 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: Sander Eikelenboom <linux@eikelenboom.it>
Thread-Topic: [Xen-devel] VGA passthrough and AMD drivers
Thread-Index: Ac3W6q3sZdM1jooZStCS7Hy3uj/+agAmeaKAAA6gObA=
Date: Tue, 11 Dec 2012 17:07:22 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53C9F@dulac>
References: <36774CA35642C143BCDE93BA0C68DC5702C53BB2@dulac>
	<10810636308.20121211114749@eikelenboom.it>
In-Reply-To: <10810636308.20121211114749@eikelenboom.it>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] VGA passthrough and AMD drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>>>>>> Hi all,
>>>>>>>> I have made some tests to find a good driver for the FirePro V8800
>>>>>>>> on Windows 7 64-bit HVM.
>>>>>>>> I have been focused on "advanced features": quad buffer and
>>>>>>>> active stereoscopy, synchronization ...
>>>>>>>> The result, for all FirePro drivers (of this year): I can't get
>>>>>>>> the quad buffer/active stereoscopy feature.
>>>>>>>> But they work on a native installation.
>>>>>>> Can you describe the setup a little more?
>>>>>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>>>>>
>>>>>> It's a setup used in a CAVE system; I try (and it works, minus some
>>>>>> issues) to virtualize "virtual reality contexts" that need full
>>>>>> graphics card features.
>>>>>>
>>>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>>>
>>>>>> cores_per_socket : 4
>>>>>>
>>>>>> threads_per_core : 2
>>>>>>
>>>>>> cpu_mhz : 2660
>>>>>>
>>>>>> total_memory : 4079
>>>>>>
>>>>>>> How many graphic cards per guest?
>>>>>> One card per guest.
>>>>>>
>>>>>>> How many guests? On how many hosts?
>>>>>> One guest per computer.
>>>>>>
>>>>> And of course, I just thought of some other questions:
>>>>> What version of Xen are you using?
>>>>> What kernel are you using in Dom0?
>>>> release                : 2.6.32-5-xen-amd64
>>>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>>>> machine                : x86_64
>>>> nr_cpus                : 8
>>>> nr_nodes               : 1
>>>> cores_per_socket       : 4
>>>> threads_per_core       : 2
>>>> cpu_mhz                : 2660
>>>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>>>> virt_caps              : hvm hvm_directio
>>>> total_memory           : 4079
>>>> free_cpus              : 0
>>>> xen_major              : 4
>>>> xen_minor              : 2
>>>> xen_extra              : -unstable
>>>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>>> xen_scheduler          : credit
>>>> xen_pagesize           : 4096
>>>> platform_params        : virt_start=0xffff800000000000
>>>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>>>> xen_commandline        : placeholder
>>>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>>>> xend_config_format     : 4
>>>>
>>>> I will change to a newer version and use the xl toolstack when VGA passthrough is supported.
>>>>
>>>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>>>> Yes
>>>>
>>>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>>>> (Catalyst 12.10 WHQL).
>>>>>>>> But this driver becomes unstable when an application using 
>>>>>>>> active stereo and synchronization is closed:
>>>>>>>> -The synchronization between two computers is lost.
>>>>>>>> -The CCC can crash when the synchronization is made again.
>>>>>>>> Someone have any clues about this?
>>>>>>> I don't know exactly how this works on AMD/ATI graphics cards, 
>>>>>>> but I have worked with synchronisation on other graphics cards 
>>>>>>> about 7 years ago, so I have some idea of how you solve the 
>>>>>>> various problems.
>>>>>>> What I don't quite understand is why it would be different 
>>>>>>> between a virtual environment and the bare-metal ("native") 
>>>>>>> install. My immediate guess is that there is a timing difference, 
>>>>>>> for one of three reasons:
>>>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>>>> 2. Interrupt delays due to hypervisor.
>>>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>>>> I don't think those are easy to work around (as they all have to 
>>>>>>> "happen" in a virtual system), but I also don't REALLY understand 
>>>>>>> why this should cause problems in the first place, as there isn't 
>>>>>>> any guarantee as to the timings of either memory reads, interrupt 
>>>>>>> latency/responsiveness or CPU availability in Windows, so the 
>>>>>>> same problem would appear in native systems as well, given "the right"
>>>>>>> circumstances.
>>>>>>> What exactly is the crash in CCC?
>>>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>>>> Windows "service" to handle certain requests from the driver that 
>>>>>>> can't be done in kernel mode [or shouldn't be done in the driver 
>>>>>>> in general]).
>>>>>> After the application is closed, I launch the Catalyst Control
>>>>>> Center; the synchronization state seems to be good. But there is
>>>>>> no synchronization.
>>>>>>
>>>>>> If I try to apply any modifications of the synchronization (synchro
>>>>>> server or client), CCC freezes and I need to kill it manually.
>>>>>>
>>>>>> I can set the synchronization back after this.
>>>>>>
>>>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>>>> I've made a bunch of tests this morning:
>>>> -CCC crashes when I've got two displays: I set one to be the synchronization server and the other a client at the same time. If I set the server, apply this configuration, and set the client afterwards, it doesn't crash.
>>>> -If my application (Virtools) crashes, synchronization is reset.
>>>> -Eyes are sometimes inverted with the same trigger edge.
>>>I saw that problem with the product I was working on once or twice. 
>>>Makes it look really "confusing". This was a settings problem in my case (because I wrote my own "controls", I could set almost every aspect of everything that could possibly be changed, with a very basic command line application that interacted pretty straight down to the driver - with the usual caveat of "make sure you know what you are doing" - the normal GUI Control panel setup was much more "you can only set things that make sense for you to set"). That is probably not really what your problem is... But could be a configuration of driver or application issue, of course.
>>>
>>>>
>>>> I've got all these behaviors with both HVM and native installations under Windows 7 64-bit. So I think it's clearly a software issue.
>>>>
>>>> Next step: Windows 7 32-bit.
>>>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>>>
>> Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.
>
>>>>> Whilst I'm all for using Xen for everything, there are sometimes situations when "not using Xen" may actually be the right choice. Can you explain why running your guests in Xen is of benefit? [If you'd like to answer "none of your business", that's fine, but it may help to understand what the "business case" is for this].
>>>> The objective is to mutualize a graphical cluster for immersive systems. Virtual Reality applications are sensitive in their configuration; it's a pain to manage multiple users and it's nearly impossible to have different configurations for these users. Usually immersive systems are stuck in one configuration (OS, drivers, applications ...), and only a few people are allowed to change settings.
>>>> The idea is to use Xen and VGA passthrough to create personal environments that allow every user to make their own configuration without impact on others.
>>>>
>>>> Being able to have VR configurations in virtual machines, and to run them with 3D features, is a serious benefit for Virtual Reality users.
>>>
>>>Thanks for your explanation. Makes some sense; however, I feel that it also makes things more complex - if the system is so sensitive, it may get "upset" simply by having the differences in system behaviour that you automatically get from running on a virtual machine vs. "bare metal". Don't let that stop you, I'm just saying there may be issues because Xen (or other virtualisation products) is not quite as transparent as it really should be.
>>>>>
>
>> It's not the hardware configuration that is so sensitive, but more the software configuration and driver versions.
>> I've already made some demonstrations of Xen capabilities in our use case; there was no negative feedback. I think that HVM behavior is perfect for our uses, except for these driver issues.
>>
>> I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs.
>> My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough.
>> I think it's a bug due to my installation (Xen 4.2.0-unstable).
>>
>> I just got a new test computer, a Dell Precision T7500 with a V9800 FirePro, maybe I will have the time to test something tomorrow!

Exactly the same behavior with this computer.

>
>Hi,
>
>My experiences with CCC and VGA passthrough aren't great either (bluescreen most of the time).
>It works now with the 12.11 Catalyst beta drivers and not installing CCC: just installing the driver and OpenCL packages from the c:\AMD\packages dir after stopping the installer once the full package is unpacked.
>I don't know if the stereoscopy also comes in a separate package.
>
>It runs OpenCL fine now, with near-native performance (with the LuxMark benchmark) :-) (with xen-unstable, qemu-upstream, Linux 3.7 dom0, Win7 guest, 12.11 Catalyst drivers, ATI Radeon HD 6570 at the moment)
>

Thanks for the feedback! I will try what you said.
But for my use case I need CCC; even if it doesn't work properly, users know how to use it and make it work.

Aurelien

>--
>Sander
>
> Aurelien
>>--
>>Mats
>>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 17:24:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 17:24:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiTYs-000568-Dj; Tue, 11 Dec 2012 17:24:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TiTYq-000563-QG
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 17:24:05 +0000
Received: from [85.158.139.211:54202] by server-5.bemta-5.messagelabs.com id
	6E/E1-22648-43C67C05; Tue, 11 Dec 2012 17:24:04 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1355246641!20073672!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTEwNzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10408 invoked from network); 11 Dec 2012 17:24:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 17:24:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,260,1355097600"; 
   d="scan'208";a="267432"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 17:24:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 12:24:00 -0500
Received: from [10.80.239.153]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id 1TiTYm-000131-Hr;
	Tue, 11 Dec 2012 17:24:00 +0000
Message-ID: <50C76C30.20307@citrix.com>
Date: Tue, 11 Dec 2012 17:24:00 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121127 Icedove/10.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
	<528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
	<50C7664D02000078000AFBE1@nat28.tlf.novell.com>
In-Reply-To: <50C7664D02000078000AFBE1@nat28.tlf.novell.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 V4] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/12/12 15:58, Jan Beulich wrote:
>>>> On 11.12.12 at 16:34, Andrew Cooper<andrew.cooper3@citrix.com>  wrote:
>> -    /* Would it be better to replace the trap vector here? */
>> -    set_nmi_callback(crash_nmi_callback);
>> +
>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>> +     * invokes do_nmi_crash (above), which cause them to write state and
>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>> +     * cause it to return to this function ASAP.
>> +     */
>> +    for ( i = 0; i<  nr_cpu_ids; ++i )
>> +        if ( idt_tables[i] )
>> +        {
>> +
>> +            if ( i == cpu )
>> +            {
>> +                /* Disable the interrupt stack tables for this MCE and
>> +                 * NMI handler (shortly to become a nop) as there is a 1
>> +                 * instruction race window where NMIs could be
>> +                 * re-enabled and corrupt the exception frame, leaving
>> +                 * us unable to continue on this crash path (which half
>> +                 * defeats the point of using the nop handler in the
>> +                 * first place).
>> +                 *
>> +                 * This update is safe from a security point of view, as
>> +                 * this pcpu is never going to try to sysret back to a
>> +                 * PV vcpu.
>> +                 */
> This comment appears to have become stale with the latest
> changes.

Ok

>
>> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0,&trap_nop);
> No need for the extra&  on functions and arrays.

I was continuing the prevailing style from traps.c.  Personally, I prefer 
the & notation for function pointers, to be consistent with regular 
pointers, even though I am aware that it is not strictly needed.  I am 
not fussed if you wish to insist on one style, but we do have mixed 
styles across the codebase.

>
>> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
>> +            }
>> +            else
>> +                /* Do not update stack table for other pcpus. */
>> +                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi],&nmi_crash);
>> +        }
>> +
>> ...
>> +/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
>> + * bits of the address not changing, which is a safe aumption until our
> assumption
>
>> + * code size exceeds 4GB.
> 1Gb.
>
>> + *
>> + * Ideally, we would use cmpxchg16b, but this is not supported on some
>> + * old AMD 64bit capable processors, and has no safe equivelent.
> equivalent
>
>> + */
>> +static inline void _write_gate_lower(idt_entry_t * gate, idt_entry_t * new)
> static inline void _write_gate_lower(idt_entry_t *gate, idt_entry_t *new)
>
> (similar extra blanks elsewhere)
>
> Also, to make clear which of the two is the entry written, const-
> qualifying the other one might be a good idea.

Ok

>
>> +{
>> +    ASSERT(gate->b == new->b);
>> +    *(volatile unsigned long *)&gate->a = new->a;
> volatile? And if so, why not volatile-qualify the function parameter?

I was looking to avoid the compiler inlining this function and 
deciding that it can merge *gate_addr and idte together, resulting in 
multiple writes to gate_addr.  Without the volatile, the compiler is 
free to make this optimization, which puts us back in the racy case we 
are trying to avoid.  The reason for avoiding a volatile function 
parameter is so the assertion equality can be optimized where possible.

>
>> +}
>> +
>>   #define _set_gate(gate_addr,type,dpl,addr)               \
>>   do {                                                     \
>>       (gate_addr)->a = 0;                                  \
>> @@ -122,6 +135,35 @@ do {
>>           (1UL<<  47);                                     \
>>   } while (0)
>>
>> +#define _set_gate_lower(gate_addr,type,dpl,addr)         \
>> +    do {                                                 \
>> +    idt_entry_t idte;                                    \
>> +    idte.b = (gate_addr)->b;                             \
>> +    idte.a =                                            \
>> +        (((unsigned long)(addr)&  0xFFFF0000UL)<<  32) | \
>> +        ((unsigned long)(dpl)<<  45) |                   \
>> +        ((unsigned long)(type)<<  40) |                  \
>> +        ((unsigned long)(addr)&  0xFFFFUL) |             \
>> +        ((unsigned long)__HYPERVISOR_CS64<<  16) |       \
>> +        (1UL<<  47);                                     \
>> +    _write_gate_lower((gate_addr),&idte);               \
> No need for extra inner parentheses.
>
>> +} while (0)
>> +
>> +/* Update the lower half handler of an IDT Entry, without changing any
>> + * other configuration. */
>> +static inline void _update_gate_addr_lower(idt_entry_t * gate, void * addr)
> Any reason for this being an inline function and the other being
> a macro?
>
> Jan

Where possible, I prefer static inlines to macros because of the 
added type checking etc.

_set_gate_lower is based on _set_gate, so I used the _set_gate style.

Again, I am not overly fussed if you have a strong preference for 
style.  (Both of these styles are mixed across the codebase, and 
CODING_STYLE indicates no preference.)

~Andrew

>
>> +{
>> +    idt_entry_t idte;
>> +    idte.a = gate->a;
>> +
>> +    idte.b = ((unsigned long)(addr)>>  32);
>> +    idte.a&= 0x0000FFFFFFFF0000ULL;
>> +    idte.a |= (((unsigned long)(addr)&  0xFFFF0000UL)<<  32) |
>> +        ((unsigned long)(addr)&  0xFFFFUL);
>> +
>> +    _write_gate_lower(gate,&idte);
>> +}
>> +
>>   #define _set_tssldt_desc(desc,addr,limit,type)           \
>>   do {                                                     \
>>       (desc)[0].b = (desc)[1].b = 0;                       \


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 17:46:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 17:46:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiTuM-0005bh-2m; Tue, 11 Dec 2012 17:46:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiTuK-0005bc-Fq
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 17:46:16 +0000
Received: from [85.158.138.51:21323] by server-7.bemta-3.messagelabs.com id
	B1/A9-23008-76177C05; Tue, 11 Dec 2012 17:46:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355247974!24207230!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2779 invoked from network); 11 Dec 2012 17:46:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 17:46:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,260,1355097600"; 
   d="scan'208";a="67725"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 17:46:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 17:46:14 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiTuI-0006Qg-68;
	Tue, 11 Dec 2012 17:46:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiTuH-0000eY-QK;
	Tue, 11 Dec 2012 17:46:13 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14669-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 17:46:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14669: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14669 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14669/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14664
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14664

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  03cb71bc32f9
baseline version:
 xen                  03cb71bc32f9

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 19:15:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 19:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiVI9-0006cu-He; Tue, 11 Dec 2012 19:14:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TiVI7-0006cp-LF
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 19:14:55 +0000
Received: from [85.158.143.35:63381] by server-2.bemta-4.messagelabs.com id
	7F/A9-30861-E2687C05; Tue, 11 Dec 2012 19:14:54 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355253292!15216733!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1622 invoked from network); 11 Dec 2012 19:14:53 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-21.messagelabs.com with SMTP;
	11 Dec 2012 19:14:53 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 397E0C5618E;
	Tue, 11 Dec 2012 19:14:36 +0000 (GMT)
Date: Tue, 11 Dec 2012 19:14:34 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <0DDCB3A726C4BDEE758F607F@nimrod.local>
In-Reply-To: <alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>	<1355234985.843.10.camel@zakaz.uk.xensource.com>	<62660DE6F077667C4FFB290E@nimrod.local>	<1355238698.843.42.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Xen Devel <xen-devel@lists.xen.org>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano, Ian,

--On 11 December 2012 15:26:25 +0000 Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> This is the patch series that needs to be backported in QEMU:
> http://marc.info/?l=qemu-devel&m=134920288412400&w=2
>
> And this is the libxl counterpart:
> http://marc.info/?l=xen-devel&m=134944750724252
>
> I would be OK with backporting the QEMU side,

That would be great and very useful (even if it doesn't ultimately make 4.2.x)

> but I'll leave the
> decision on the libxl side up to you.

I'd already planned to try backporting this patchset. If it goes in reasonably cleanly,
it's probably within my competence, and I'm happy to test etc.

We need qemu-xen for the better snapshotting capability on qcow2 (rebase
in particular), and not having live-migrate on HVM is a bit of a PITA.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 19:25:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 19:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiVS4-0006mm-Km; Tue, 11 Dec 2012 19:25:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TiVS3-0006mh-SG
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 19:25:12 +0000
Received: from [85.158.138.51:11141] by server-16.bemta-3.messagelabs.com id
	83/34-27634-29887C05; Tue, 11 Dec 2012 19:25:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1355253903!20438865!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODg2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19445 invoked from network); 11 Dec 2012 19:25:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 19:25:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,260,1355097600"; 
   d="scan'208";a="308556"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	11 Dec 2012 19:24:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 11 Dec 2012 14:24:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TiVRp-0002qX-73;
	Tue, 11 Dec 2012 19:24:57 +0000
Date: Tue, 11 Dec 2012 19:24:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Alex Bligh <alex@alex.org.uk>
In-Reply-To: <0DDCB3A726C4BDEE758F607F@nimrod.local>
Message-ID: <alpine.DEB.2.02.1212111922150.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
	<62660DE6F077667C4FFB290E@nimrod.local>
	<1355238698.843.42.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
	<0DDCB3A726C4BDEE758F607F@nimrod.local>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Xen Devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 11 Dec 2012, Alex Bligh wrote:
> Stefano, Ian,
> 
> --On 11 December 2012 15:26:25 +0000 Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > This is the patch series that needs to be backported in QEMU:
> > http://marc.info/?l=qemu-devel&m=134920288412400&w=2
> >
> > And this is the libxl counterpart:
> > http://marc.info/?l=xen-devel&m=134944750724252
> >
> > I would be OK with backporting the QEMU side,
> 
> That would be great and very useful (even if it doesn't ultimately make 4.2.x)
> 
> > but I'll leave the
> > decision on the libxl side up to you.
> 
> I'd already planned to try backporting this patchset. If it goes in reasonably cleanly,
> it's probably within my competence, and I'm happy to test etc.
> 
> We need qemu-xen for the better snapshotting capability on qcow2 (rebase
> in particular), and not having live-migrate on HVM is a bit of a PITA.

It would be great if you could implement QEMU qcow2 snapshotting support
in libxl, so that everything can happen seamlessly using a variation of xl
save/restore.
We have wanted that feature for a while but never got around to it.
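At the command level, the "variation of xl save/restore" suggested above might simply pair a checkpoint-style `xl save -c` with a qcow2 internal snapshot of the domain's disk. A minimal sketch follows; the wrapper name, the save path, and the pairing itself are hypothetical (this is not an existing xl feature), while `xl save -c` and `qemu-img snapshot -c` are the standard commands. `XL` and `QEMU_IMG` are overridable so the composed commands can be inspected in a dry run:

```shell
# Hypothetical wrapper pairing "xl save -c" (checkpoint: leave the domain
# running) with a qcow2 internal snapshot of its disk, approximating the
# seamless snapshot described above. The wrapper name and save path are
# made up; XL and QEMU_IMG are overridable for dry runs.
XL=${XL:-xl}
QEMU_IMG=${QEMU_IMG:-qemu-img}

snapshot_domain() {
    dom=$1; disk=$2; tag=$3
    # Save domain state without destroying the domain, then snapshot
    # the qcow2 image under the same tag.
    $XL save -c "$dom" "/var/lib/xen/save/$dom.$tag" &&
    $QEMU_IMG snapshot -c "$tag" "$disk"
}
```

For example, `XL=echo QEMU_IMG=echo snapshot_domain guest0 guest0.qcow2 pre-upgrade` prints the two commands that would be run instead of executing them.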

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 19:46:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 19:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiVmS-00078v-55; Tue, 11 Dec 2012 19:46:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TiVmQ-00078p-FB
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 19:46:14 +0000
Received: from [85.158.139.83:61022] by server-7.bemta-5.messagelabs.com id
	35/D3-08009-58D87C05; Tue, 11 Dec 2012 19:46:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355255173!29296503!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4891 invoked from network); 11 Dec 2012 19:46:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 19:46:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,260,1355097600"; 
   d="scan'208";a="69346"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 19:46:13 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 11 Dec 2012 19:46:12 +0000
Message-ID: <50C78D84.1030404@citrix.com>
Date: Tue, 11 Dec 2012 20:46:12 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] Driver domains and device handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I'm currently reviewing the driver domains protocol proposal, but I
think that before reviewing the protocol we should make clear what kinds
of storage backends libxl supports, and what the plans are for future
backends.

One of the benefits of the driver domain protocol is that it should
allow splitting device connection into at least two phases, which is
important for live migration. The first phase should contain all the
slower logic, and should be performed on the receiving domain
without pausing the migrated domain. I've been trying to figure out
what kind of operations should be done in this phase for the different
types of backends, but with the backend support we currently have in
libxl (blkback, qdisk and blktap) I don't think we are able to perform
any kind of preparatory work before actually connecting the device.

One of the backends which I think libxl should support is iSCSI, which
also allows live migration. I've also been trying to figure out how we
are going to handle this kind of device, and I'm unsure whether it would
be best to handle it using Qemu as the backend, which currently has a
userspace implementation of iSCSI, or using an in-kernel initiator and
blkback. The benefit of using Qemu is that it is all contained in
userspace, and we don't pollute the Dom0 (or the Driver Domain) with
unneeded devices; on the other hand, it is probably slower than using an
in-kernel initiator. Either way, I'm still not able to see what we can
offload to the "preparatory" phase: in the Qemu case we just launch
Qemu, and if we decide to use an in-kernel initiator we only have to
launch a hotplug script with something like: iscsiadm -m node -T
<iqn> -p <ip:port>.
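A hotplug script of the kind mentioned above might look like the following sketch. The helper name and argument convention are hypothetical; the `iscsiadm` node-mode invocations are the standard open-iscsi ones. `ISCSIADM` is overridable so the composed command lines can be inspected with `echo` instead of being executed:

```shell
# Hypothetical hotplug helper for an iSCSI block backend (sketch).
# In a real hotplug script the IQN and portal would come from xenstore;
# here they are plain arguments for clarity.
ISCSIADM=${ISCSIADM:-iscsiadm}

iscsi_hotplug() {
    iqn=$1; portal=$2; action=$3    # portal is ip:port
    case "$action" in
      add)
        # The slow part: record the node and log in to the target. This
        # is what could move into a "preparatory" phase on the receiving
        # host, before the migrated domain is paused.
        $ISCSIADM -m node -T "$iqn" -p "$portal" -o new &&
        $ISCSIADM -m node -T "$iqn" -p "$portal" --login
        ;;
      remove)
        $ISCSIADM -m node -T "$iqn" -p "$portal" --logout
        ;;
      *)
        echo "usage: iscsi_hotplug <iqn> <ip:port> add|remove" >&2
        return 1
        ;;
    esac
}
```

Running `ISCSIADM=echo iscsi_hotplug iqn.2012-12.example:disk0 10.0.0.1:3260 add` prints the two `iscsiadm` command lines that would be executed.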

I'm sure there are people on the list with more experience than me in
this field, so I would like to ask for some use-cases where this
"preparatory" phase would be useful, and what actions would be performed
in it.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 19:52:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 19:52:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiVsA-0007a4-VL; Tue, 11 Dec 2012 19:52:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TiVs9-0007Zt-5k
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 19:52:09 +0000
Received: from [85.158.139.83:23261] by server-9.bemta-5.messagelabs.com id
	CC/27-10690-8EE87C05; Tue, 11 Dec 2012 19:52:08 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355255527!27969547!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9115 invoked from network); 11 Dec 2012 19:52:07 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-10.tower-182.messagelabs.com with SMTP;
	11 Dec 2012 19:52:07 -0000
From xen-devel-bounces@lists.xen.org Tue Dec 11 19:52:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 19:52:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiVsA-0007a4-VL; Tue, 11 Dec 2012 19:52:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TiVs9-0007Zt-5k
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 19:52:09 +0000
Received: from [85.158.139.83:23261] by server-9.bemta-5.messagelabs.com id
	CC/27-10690-8EE87C05; Tue, 11 Dec 2012 19:52:08 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355255527!27969547!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9115 invoked from network); 11 Dec 2012 19:52:07 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-10.tower-182.messagelabs.com with SMTP;
	11 Dec 2012 19:52:07 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id C32DDC5618D;
	Tue, 11 Dec 2012 19:51:54 +0000 (GMT)
Date: Tue, 11 Dec 2012 19:51:50 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <B83A0346AE0177C703855ED1@nimrod.local>
In-Reply-To: <alpine.DEB.2.02.1212111922150.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>	<1355234985.843.10.camel@zakaz.uk.xensource.com>	<62660DE6F077667C4FFB290E@nimrod.local>	<1355238698.843.42.camel@zakaz.uk.xensource.com>	<alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>	<0DDCB3A726C4BDEE758F607F@nimrod.local>
	<alpine.DEB.2.02.1212111922150.17523@kaball.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



--On 11 December 2012 19:24:54 +0000 Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> It would be great if you could implement QEMU qcow2 snapshotting support
> in libxl, so that everything can happen seamlessly using a variation of xl
> save/restore.

I'd guess that isn't hard. We already have QEMU qcow2 snapshotting support
on qemu-xen working via direct calls to QEMU - I think it 'worked just like
kvm'. I seem to remember there is some fiddling to do with an open fd number
in qemu for the live rebase, and some limitations (for instance I couldn't
see how to do a consistent snapshot of the same device presented both pv
and emulated under HVM - which is theoretically an issue if you have
one partition mounted one way and one the other).

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 20:33:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 20:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiWVn-000891-At; Tue, 11 Dec 2012 20:33:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiWVm-00088u-5T
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 20:33:06 +0000
Received: from [85.158.139.83:45112] by server-5.bemta-5.messagelabs.com id
	DA/24-22648-18897C05; Tue, 11 Dec 2012 20:33:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1355257984!22161109!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9731 invoked from network); 11 Dec 2012 20:33:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 20:33:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,260,1355097600"; 
   d="scan'208";a="70072"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Dec 2012 20:33:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 11 Dec 2012 20:33:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiWVj-0007kt-UK;
	Tue, 11 Dec 2012 20:33:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiWVj-0005KG-LL;
	Tue, 11 Dec 2012 20:33:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14670-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Dec 2012 20:33:03 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14670: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14670 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14670/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14666
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14666

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0
baseline version:
 linux                3bbbcb136d00b0718e63f7fd633f098e0889c3d1

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0
+ branch=linux-3.0
+ revision=4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0:tested/linux-3.0
Counting objects: 90, done.
Compressing objects: 100% (15/15), done.
Writing objects: 100% (62/62), 11.71 KiB, done.
Total 62 (delta 47), reused 52 (delta 46)
To xen@xenbits.xensource.com:git/linux-pvops.git
   3bbbcb1..4eb15b7  4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0 -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 11 20:51:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 20:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiWms-0008Jg-W8; Tue, 11 Dec 2012 20:50:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TiWmr-0008Jb-Cg
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 20:50:45 +0000
Received: from [85.158.143.35:32031] by server-2.bemta-4.messagelabs.com id
	26/D6-30861-4AC97C05; Tue, 11 Dec 2012 20:50:44 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355259043!13620586!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0NzgxMzM=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0NzgxMzM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18942 invoked from network); 11 Dec 2012 20:50:43 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 20:50:43 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFJhy0PGlDs
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-079-194.pools.arcor-ip.net [88.65.79.194])
	by smtp.strato.de (josoe mo49) (RZmta 31.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id Y01764oBBKSjnq ;
	Tue, 11 Dec 2012 21:50:32 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id D05681884C; Tue, 11 Dec 2012 21:50:31 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Tue, 11 Dec 2012 21:50:26 +0100
Message-Id: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.0.1
Cc: Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	jbeulich@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen/blkback: prevent repeated backend_changed
	invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

backend_changed might be called multiple times, which leaks
be->mode. Make sure it runs only once, and remove some now-unneeded
checks. The be->mode string was also leaked; release that memory on
device shutdown.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

incorporate all comments from Jan.
fold the one-line change to xen_blkbk_remove into this change.
now it's compile tested.

 drivers/block/xen-blkback/xenbus.c | 69 ++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 36 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index f58434c..5ca77c3 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -28,6 +28,7 @@ struct backend_info {
 	unsigned		major;
 	unsigned		minor;
 	char			*mode;
+	unsigned		alive;
 };
 
 static struct kmem_cache *xen_blkif_cachep;
@@ -366,6 +367,7 @@ static int xen_blkbk_remove(struct xenbus_device *dev)
 		be->blkif = NULL;
 	}
 
+	kfree(be->mode);
 	kfree(be);
 	dev_set_drvdata(&dev->dev, NULL);
 	return 0;
@@ -501,10 +503,14 @@ static void backend_changed(struct xenbus_watch *watch,
 		= container_of(watch, struct backend_info, backend_watch);
 	struct xenbus_device *dev = be->dev;
 	int cdrom = 0;
-	char *device_type;
+	char *device_type, *p;
+	long handle;
 
 	DPRINTK("");
 
+	if (be->alive)
+		return;
+
 	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
 			   &major, &minor);
 	if (XENBUS_EXIST_ERR(err)) {
@@ -520,12 +526,7 @@ static void backend_changed(struct xenbus_watch *watch,
 		return;
 	}
 
-	if ((be->major || be->minor) &&
-	    ((be->major != major) || (be->minor != minor))) {
-		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
-			be->major, be->minor, major, minor);
-		return;
-	}
+	be->alive = 1;
 
 	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
 	if (IS_ERR(be->mode)) {
@@ -541,39 +542,35 @@ static void backend_changed(struct xenbus_watch *watch,
 		kfree(device_type);
 	}
 
-	if (be->major == 0 && be->minor == 0) {
-		/* Front end dir is a number, which is used as the handle. */
-
-		char *p = strrchr(dev->otherend, '/') + 1;
-		long handle;
-		err = strict_strtoul(p, 0, &handle);
-		if (err)
-			return;
-
-		be->major = major;
-		be->minor = minor;
+	/* Front end dir is a number, which is used as the handle. */
+	p = strrchr(dev->otherend, '/') + 1;
+	err = strict_strtoul(p, 0, &handle);
+	if (err)
+		return;
 
-		err = xen_vbd_create(be->blkif, handle, major, minor,
-				 (NULL == strchr(be->mode, 'w')), cdrom);
-		if (err) {
-			be->major = 0;
-			be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating vbd structure");
-			return;
-		}
+	be->major = major;
+	be->minor = minor;
 
-		err = xenvbd_sysfs_addif(dev);
-		if (err) {
-			xen_vbd_free(&be->blkif->vbd);
-			be->major = 0;
-			be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-			return;
-		}
+	err = xen_vbd_create(be->blkif, handle, major, minor,
+			 (NULL == strchr(be->mode, 'w')), cdrom);
+	if (err) {
+		be->major = 0;
+		be->minor = 0;
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+		return;
+	}
 
-		/* We're potentially connected now */
-		xen_update_blkif_status(be->blkif);
+	err = xenvbd_sysfs_addif(dev);
+	if (err) {
+		xen_vbd_free(&be->blkif->vbd);
+		be->major = 0;
+		be->minor = 0;
+		xenbus_dev_fatal(dev, err, "creating sysfs entries");
+		return;
 	}
+
+	/* We're potentially connected now */
+	xen_update_blkif_status(be->blkif);
 }
 
 
-- 
1.8.0.1



From xen-devel-bounces@lists.xen.org Tue Dec 11 21:35:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 21:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiXTc-0000BG-Kt; Tue, 11 Dec 2012 21:34:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TiXTb-0000BB-A5
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 21:34:55 +0000
Received: from [85.158.137.99:7116] by server-13.bemta-3.messagelabs.com id
	78/02-00465-EF6A7C05; Tue, 11 Dec 2012 21:34:54 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355261691!13450062!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTYzODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28644 invoked from network); 11 Dec 2012 21:34:53 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 21:34:53 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355261693; x=1386797693;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=LMUo5RZU7ReuH/PFlf1HvTbXQUFo6IdD+XzDJviLFUg=;
	b=crtAuORE2OQYa8QXIIeGCQH3MHs8ghwYzzcHaJoES5djsXtHwguqTUOw
	bi5bjucDTkTlO8INdFLr8737FwAeswClDC3HJ3oNRwEGXdDXo3hFw4zof
	tTZAllZW/vcZNuk8OIveCyvC92Bx0hZYQMXBFHnSIceVsO7EanACa/NLF s=;
X-IronPort-AV: E=Sophos;i="4.84,262,1355097600"; d="scan'208";a="348566055"
Received: from smtp-in-31001.sea31.amazon.com ([10.184.168.27])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 11 Dec 2012 21:34:41 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-31001.sea31.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBBLYeX9003042
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 11 Dec 2012 21:34:41 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Tue, 11 Dec 2012 13:34:39 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Tue, 11 Dec 2012 13:34:39 -0800
Date: Tue, 11 Dec 2012 13:34:39 -0800
From: Matt Wilson <msw@amazon.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Message-ID: <20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > -----Original Message-----
> > From: Matt Wilson [mailto:msw@amazon.com]
> > Sent: Thursday, December 06, 2012 11:05 AM
> > To: Palagummi, Siva
> > Cc: Ian Campbell; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> > 
> > On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> > > Matt,
> > [...]
> > > You are right. The above chunk which is already part of the upstream
> > > is unfortunately incorrect for some cases. We also ran into issues
> > > in our environment around a week back and found this problem. The
> > > count will be different based on head len because of the
> > > optimization that start_new_rx_buffer is trying to do for large
> > > buffers.  A hole of size "offset_in_page" will be left in first page
> > > during copy if the remaining buffer size is >=PAGE_SIZE. This
> > > subsequently affects the copy_off as well.
> > >
> > > So xen_netbk_count_skb_slots actually needs a fix to calculate the
> > > count correctly based on head len, and also a fix to properly
> > > calculate the copy_off to which the data from fragments gets copied.
> > 
> > Can you explain more about the copy_off problem? I'm not seeing it.
>
> You can clearly see below that copy_off is input to
> start_new_rx_buffer while copying frags.

Yes, but that's the right thing to do. copy_off should be set to the
destination offset after copying the last byte of linear data, which
means "skb_headlen(skb) % PAGE_SIZE" is correct.

> So if the buggy "count" calculation below is fixed based on the
> offset_in_page value, then the copy_off value will also change
> accordingly.

This calculation is not incorrect. You should only need as many
PAGE_SIZE buffers as you have linear data to fill.

>         count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
>         copy_off = skb_headlen(skb) % PAGE_SIZE;
> 
>         if (skb_shinfo(skb)->gso_size)
>                 count++;
> 
>         for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
>                 unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
>                 unsigned long bytes;
>                 while (size > 0) {
>                         BUG_ON(copy_off > MAX_BUFFER_OFFSET);
> 
>                         if (start_new_rx_buffer(copy_off, size, 0)) {
>                                 count++;
>                                 copy_off = 0;
>                         }
> 
>
> So a correct calculation should be somewhat like below because of
> the optimization in start_new_rx_buffer for larger sizes.

start_new_rx_buffer() should not be starting a new buffer after the
first pass copying the linear data.

>       linear_len = skb_headlen(skb);
> 	count = (linear_len <= PAGE_SIZE)
>               ? 1
>               : DIV_ROUND_UP(offset_in_page(skb->data)+linear_len, PAGE_SIZE);
> 
>       copy_off = ((offset_in_page(skb->data)+linear_len) < 2*PAGE_SIZE)
> 			? linear_len % PAGE_SIZE
> 			: (offset_in_page(skb->data)+linear_len) % PAGE_SIZE;

A change like this makes the code much more difficult to understand.
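To make the disagreement concrete, here is a standalone sketch (plain
Python, not netback code; the 4 KiB PAGE_SIZE and the example offsets are
illustrative assumptions) showing where the two counting formulas diverge
for a linear buffer that does not start at a page boundary:

```python
# Standalone sketch, not kernel code: compares the upstream slot count
# with the offset-aware count proposed above.
PAGE_SIZE = 4096  # illustrative assumption

def div_round_up(n, d):
    """Equivalent of the kernel's DIV_ROUND_UP macro."""
    return -(-n // d)

def count_upstream(headlen):
    # Upstream formula: assumes the linear data is packed from offset 0.
    return div_round_up(headlen, PAGE_SIZE)

def count_proposed(headlen, offset):
    # Proposed formula: accounts for the hole of size offset_in_page
    # left in the first buffer when the remaining data fills whole pages.
    if headlen <= PAGE_SIZE:
        return 1
    return div_round_up(offset + headlen, PAGE_SIZE)

# A two-page linear buffer starting mid-page is where the counts split:
print(count_upstream(8192))        # 2 slots assumed by upstream
print(count_proposed(8192, 64))    # 3 slots once the initial hole counts
```

The formulas agree whenever the head fits in one page or starts at a page
boundary; they only split for multi-page heads at a nonzero offset.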

> > > max_required_rx_slots also may require a fix to account for the
> > > additional slot that may be required in case mtu >= PAGE_SIZE. For
> > > the worst-case scenario, at least another +1.  One thing that is still
> > > puzzling here is, max_required_rx_slots seems to be assuming that
> > > linear length in head will never be greater than mtu size. But that
> > > doesn't seem to be the case all the time. I wonder if it requires
> > > some kind of fix there or special handling when count_skb_slots
> > > exceeds max_required_rx_slots.
> > 
> > We should only be using the number of pages required to copy the
> > data. The fix shouldn't be to anticipate wasting ring space by
> > increasing the return value of max_required_rx_slots().
> > 
>
> I do not think we are wasting any ring space. But just ensuring that
> we have enough before proceeding ahead.

For some SKBs with large linear buffers, we certainly are wasting
space. Go back and read the explanation in
  http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html

> > [...]
> > 
> > > > Why increment count by the /estimated/ count instead of the actual
> > > > number of slots used? We have the number of slots in the line just
> > > > above, in sco->meta_slots_used.
> > > >
> > >
> > > Count actually refers to ring slots consumed rather than meta_slots
> > > used.  Count can be different from meta_slots_used.
> > 
> > Aah, indeed. This can end up being too pessimistic if you have lots of
> > frags that require multiple copy operations. I still think that it
> > would be better to calculate the actual number of ring slots consumed
> > by netbk_gop_skb() to avoid other bugs like the one you originally
> > fixed.
> > 
>
> The counting done in count_skb_slots is exactly that. The fix done
> above is to make them the same, so that there is no need to recalculate.

Today, the counting done in count_skb_slots() *does not* match the
number of buffer slots consumed by netbk_gop_skb().

Matt


From xen-devel-bounces@lists.xen.org Tue Dec 11 21:35:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 21:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiXTc-0000BG-Kt; Tue, 11 Dec 2012 21:34:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TiXTb-0000BB-A5
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 21:34:55 +0000
Received: from [85.158.137.99:7116] by server-13.bemta-3.messagelabs.com id
	78/02-00465-EF6A7C05; Tue, 11 Dec 2012 21:34:54 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355261691!13450062!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTYzODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28644 invoked from network); 11 Dec 2012 21:34:53 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 21:34:53 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355261693; x=1386797693;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=LMUo5RZU7ReuH/PFlf1HvTbXQUFo6IdD+XzDJviLFUg=;
	b=crtAuORE2OQYa8QXIIeGCQH3MHs8ghwYzzcHaJoES5djsXtHwguqTUOw
	bi5bjucDTkTlO8INdFLr8737FwAeswClDC3HJ3oNRwEGXdDXo3hFw4zof
	tTZAllZW/vcZNuk8OIveCyvC92Bx0hZYQMXBFHnSIceVsO7EanACa/NLF s=;
X-IronPort-AV: E=Sophos;i="4.84,262,1355097600"; d="scan'208";a="348566055"
Received: from smtp-in-31001.sea31.amazon.com ([10.184.168.27])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 11 Dec 2012 21:34:41 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-31001.sea31.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBBLYeX9003042
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 11 Dec 2012 21:34:41 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Tue, 11 Dec 2012 13:34:39 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Tue, 11 Dec 2012 13:34:39 -0800
Date: Tue, 11 Dec 2012 13:34:39 -0800
From: Matt Wilson <msw@amazon.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Message-ID: <20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > -----Original Message-----
> > From: Matt Wilson [mailto:msw@amazon.com]
> > Sent: Thursday, December 06, 2012 11:05 AM
> > To: Palagummi, Siva
> > Cc: Ian Campbell; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> > 
> > On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> > > Matt,
> > [...]
> > > You are right. The above chunk which is already part of the upstream
> > > is unfortunately incorrect for some cases. We also ran into issues
> > > in our environment around a week back and found this problem. The
> > > count will be different based on head len because of the
> > > optimization that start_new_rx_buffer is trying to do for large
> > > buffers.  A hole of size "offset_in_page" will be left in first page
> > > during copy if the remaining buffer size is >=PAG_SIZE. This
> > > subsequently affects the copy_off as well.
> > >
> > > So xen_netbk_count_skb_slots actually needs a fix to calculate the
> > > count correctly based on head len. And also a fix to calculate the
> > > copy_off properly to which the data from fragments gets copied.
> > 
> > Can you explain more about the copy_off problem? I'm not seeing it.
>
> You can clearly see below that copy_off is input to
> start_new_rx_buffer while copying frags.

Yes, but that's the right thing to do. copy_off should be set to the
destination offset after copying the last byte of linear data, which
means "skb_headlen(skb) % PAGE_SIZE" is correct.

> So if the buggy "count" calculation below is fixed based on
> offset_in_page value then copy_off value also will change
> accordingly.

This calculation is not incorrect. You should only need as many
PAGE_SIZE buffers as you have linear data to fill.

>         count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
>         copy_off = skb_headlen(skb) % PAGE_SIZE;
> 
>         if (skb_shinfo(skb)->gso_size)
>                 count++;
> 
>         for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
>                 unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
>                 unsigned long bytes;
>                 while (size > 0) {
>                         BUG_ON(copy_off > MAX_BUFFER_OFFSET);
> 
>                         if (start_new_rx_buffer(copy_off, size, 0)) {
>                                 count++;
>                                 copy_off = 0;
>                         }
> 
>
> So a correct calculation should be somewhat like below because of
> the optimization in start_new_rx_buffer for larger sizes.

start_new_rx_buffer() should not be starting a new buffer after the
first pass copying the linear data.

>       linear_len = skb_headlen(skb)
> 	count = (linear_len <= PAGE_SIZE)
>               ? 1
>               :DIV_ROUND_UP(offset_in_page(skb->data)+linear_len, PAGE_SIZE));
> 
>       copy_off = ((offset_in_page(skb->data)+linear_len) < 2*PAGE_SIZE)
> 			? linear_len % PAGE_SIZE;
> 			: (offset_in_page(skb->data)+linear_len) % PAGE_SIZE;

A change like this makes the code much more difficult to understand.

> > > max_required_rx_slots may also require a fix to account for the
> > > additional slot that may be required in case mtu >= PAGE_SIZE. For
> > > the worst-case scenario, at least another +1.  One thing that is
> > > still puzzling here is that max_required_rx_slots seems to assume
> > > the linear length in the head will never be greater than the mtu
> > > size. But that doesn't seem to be the case all the time. I wonder
> > > if it requires some kind of fix there, or special handling when
> > > count_skb_slots exceeds max_required_rx_slots.
> > 
> > We should only be using the number of pages required to copy the
> > data. The fix shouldn't be to anticipate wasting ring space by
> > increasing the return value of max_required_rx_slots().
> > 
>
> I do not think we are wasting any ring space; we are just ensuring
> that we have enough before proceeding.

For some SKBs with large linear buffers, we certainly are wasting
space. Go back and read the explanation in
  http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html

> > [...]
> > 
> > > > Why increment count by the /estimated/ count instead of the actual
> > > > number of slots used? We have the number of slots in the line just
> > > > above, in sco->meta_slots_used.
> > > >
> > >
> > > Count actually refers to ring slots consumed rather than meta_slots
> > > used.  Count can be different from meta_slots_used.
> > 
> > Aah, indeed. This can end up being too pessimistic if you have lots of
> > frags that require multiple copy operations. I still think that it
> > would be better to calculate the actual number of ring slots consumed
> > by netbk_gop_skb() to avoid other bugs like the one you originally
> > fixed.
> > 
>
> The counting done in count_skb_slots is exactly that. The fix done
> above is to make them the same, so that there is no need to
> recalculate.

Today, the counting done in count_skb_slots() *does not* match the
number of buffer slots consumed by netbk_gop_skb().

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 22:35:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 22:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiYQ4-0000sc-9E; Tue, 11 Dec 2012 22:35:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TiYQ2-0000sX-Pb
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 22:35:19 +0000
Received: from [193.109.254.147:63590] by server-7.bemta-14.messagelabs.com id
	F6/26-02272-625B7C05; Tue, 11 Dec 2012 22:35:18 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355265314!2487337!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3034 invoked from network); 11 Dec 2012 22:35:16 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Dec 2012 22:35:16 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Tue, 11 Dec 2012 15:35:04 -0700
Message-ID: <50C7B518.9080503@suse.com>
Date: Tue, 11 Dec 2012 15:35:04 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1355158624-11163-1-git-send-email-ian.jackson@eu.citrix.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Because there is not necessarily any lock held at the point the
> application (eg, libvirt) calls libxl_osevent_occurred_timeout and
> ..._fd, in a multithreaded program those calls may be arbitrarily
> delayed in relation to other activities within the program.
>
> libxl therefore needs to be prepared to receive very old event
> callbacks.  Arrange for this to be the case for fd callbacks.
>
> This requires a new layer of indirection through a "hook nexus" struct
> which can outlive the libxl__ev_foo.  Allocation and deallocation of
> these nexi is mostly handled in the OSEVENT macros which wrap up
> the application's callbacks.
>
> Document the problem and the solution in a comment in libxl_event.c
> just before the definition of struct libxl__osevent_hook_nexus.
>
> There is still a race relating to libxl__osevent_occurred_timeout;
> this will be addressed in the following patch.
>
> Reported-by: Bamvor Jian Zhang <bjzhang@suse.com>
> Cc: Bamvor Jian Zhang <bjzhang@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>   

Hi Ian,

Thanks for the patches! I've found some time to test them in the context
of Xen 4.2 and have some comments. For this patch, only a few nits below.

> --
> v2:
>   - Prepare for fixing timeout race too
>   - Break out osevent_release_nexus()
>   - nexusop argument to OSEVENT* macros
>   - Clarify OSEVENT* nexusop hooks
>   - osevent_ev_from_hook_nexus takes a libxl__osevent_hook_nexus*
> ---
>  tools/libxl/libxl_event.c    |  184 +++++++++++++++++++++++++++++++++++------
>  tools/libxl/libxl_internal.h |    8 ++-
>  2 files changed, 163 insertions(+), 29 deletions(-)
>
> diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
> index 72cb723..f1fe425 100644
> --- a/tools/libxl/libxl_event.c
> +++ b/tools/libxl/libxl_event.c
> @@ -38,23 +38,131 @@
>   * The application's registration hooks should be called ONLY via
>   * these macros, with the ctx locked.  Likewise all the "occurred"
>   * entrypoints from the application should assert(!in_hook);
> + *
> + * During the hook call - including while the arguments are being
> + * evaluated - ev->nexus is guaranteed to be valid and refer to the
> + * nexus which is being used for this event registration.  The
> + * arguments should specify ev->nexus for the for_libxl argument and
> + * ev->nexus->for_app_reg (or a pointer to it) for for_app_reg.
>   */
> -#define OSEVENT_HOOK_INTERN(retval, hookname, ...) do {                      \
> -    if (CTX->osevent_hooks) {                                                \
> -        CTX->osevent_in_hook++;                                              \
> -        retval CTX->osevent_hooks->hookname(CTX->osevent_user, __VA_ARGS__); \
> -        CTX->osevent_in_hook--;                                              \
> -    }                                                                        \
> +#define OSEVENT_HOOK_INTERN(retval, failedp, evkind, hookop, nexusop, ...) do { \
> +    if (CTX->osevent_hooks) {                                           \
> +        CTX->osevent_in_hook++;                                         \
> +        libxl__osevent_hook_nexi *nexi = &CTX->hook_##evkind##_nexi_idle; \
> +        osevent_hook_pre_##nexusop(gc, ev, nexi, &ev->nexus);            \
> +        retval CTX->osevent_hooks->evkind##_##hookop                    \
> +            (CTX->osevent_user, __VA_ARGS__);                           \
> +        if ((failedp))                                                  \
> +            osevent_hook_failed_##nexusop(gc, ev, nexi, &ev->nexus);     \
> +        CTX->osevent_in_hook--;                                         \
> +    }                                                                   \
>  } while (0)
>  
> -#define OSEVENT_HOOK(hookname, ...) ({                                       \
> -    int osevent_hook_rc = 0;                                                 \
> -    OSEVENT_HOOK_INTERN(osevent_hook_rc = , hookname, __VA_ARGS__);          \
> -    osevent_hook_rc;                                                         \
> +#define OSEVENT_HOOK(evkind, hookop, nexusop, ...) ({                   \
> +    int osevent_hook_rc = 0;                                    \
> +    OSEVENT_HOOK_INTERN(osevent_hook_rc =, !!osevent_hook_rc,   \
> +                        evkind, hookop, nexusop, __VA_ARGS__);          \
> +    osevent_hook_rc;                                            \
>  })
>  
> -#define OSEVENT_HOOK_VOID(hookname, ...) \
> -    OSEVENT_HOOK_INTERN(/* void */, hookname, __VA_ARGS__)
> +#define OSEVENT_HOOK_VOID(evkind, hookop, nexusop, ...)                         \
> +    OSEVENT_HOOK_INTERN(/* void */, 0, evkind, hookop, nexusop, __VA_ARGS__)
> +
> +/*
> + * The application's calls to libxl_osevent_occurred_... may be
> + * indefinitely delayed with respect to the rest of the program (since
> + * they are not necessarily called with any lock held).  So the
> + * for_libxl value we receive may be (almost) arbitrarily old.  All we
> + * know is that it came from this ctx.
> + *
> + * Therefore we may not free the object referred to by any for_libxl
> + * value until we free the whole libxl_ctx.  And if we reuse it we
> + * must be able to tell when an old use turns up, and discard the
> + * stale event.
> + *
> + * Thus we cannot use the ev directly as the for_libxl value - we need
> + * a layer of indirection.
> + *
> + * We do this by keeping a pool of libxl__osevent_hook_nexus structs,
> + * and use pointers to them as for_libxl values.  In fact, there are
> + * two pools: one for fds and one for timeouts.  This ensures that we
> + * don't risk a type error when we upcast nexus->ev.  In each nexus
> + * the ev is either null or points to a valid libxl__ev_time or
> + * libxl__ev_fd, as applicable.
> + *
> + * We /do/ allow ourselves to reassociate an old nexus with a new ev
> + * as otherwise we would have to leak nexi.  (This reassociation
> + * might, of course, be an old ev being reused for a new purpose so
> + * simply comparing the ev pointer is not sufficient.)  Thus the
> + * libxl_osevent_occurred functions need to check that the condition
> + * allegedly signalled by this event actually exists.
> + *
> + * The nexi and the lists are all protected by the ctx lock.
> + */
> + 
> +struct libxl__osevent_hook_nexus {
> +    void *ev;
> +    void *for_app_reg;
> +    LIBXL_SLIST_ENTRY(libxl__osevent_hook_nexus) next;
> +};
> +
> +static void *osevent_ev_from_hook_nexus(libxl_ctx *ctx,
> +           libxl__osevent_hook_nexus *nexus /* pass  void *for_libxl */)
> +{
> +    return nexus->ev;
> +}
> +
> +static void osevent_release_nexus(libxl__gc *gc,
> +                                  libxl__osevent_hook_nexi *nexi_idle,
> +                                  libxl__osevent_hook_nexus *nexus)
> +{
> +    nexus->ev = 0;
> +    LIBXL_SLIST_INSERT_HEAD(nexi_idle, nexus, next);
> +}
> +
> +/*----- OSEVENT* hook functions for nexusop "alloc" -----*/
> +static void osevent_hook_pre_alloc(libxl__gc *gc, void *ev,
> +                                   libxl__osevent_hook_nexi *nexi_idle,
> +                                   libxl__osevent_hook_nexus **nexus_r)
> +{
> +    libxl__osevent_hook_nexus *nexus = LIBXL_SLIST_FIRST(nexi_idle);
> +    if (nexus) {
> +        LIBXL_SLIST_REMOVE_HEAD(nexi_idle, next);
> +    } else {
> +        nexus = libxl__zalloc(NOGC, sizeof(*nexus));
> +    }
> +    nexus->ev = ev;
> +    *nexus_r = nexus;
> +}
> +static void osevent_hook_failed_alloc(libxl__gc *gc, void *ev,
> +                                      libxl__osevent_hook_nexi *nexi_idle,
> +                                      libxl__osevent_hook_nexus **nexus)
> +{
> +    osevent_release_nexus(gc, nexi_idle, *nexus);
> +}
> +
> +/*----- OSEVENT* hook functions for nexusop "release" -----*/
> +static void osevent_hook_pre_release(libxl__gc *gc, void *ev,
> +                                     libxl__osevent_hook_nexi *nexi_idle,
> +                                     libxl__osevent_hook_nexus **nexus)
> +{
> +    osevent_release_nexus(gc, nexi_idle, *nexus);
> +}
> +static void osevent_hook_failed_release(libxl__gc *gc, void *ev,
> +                                        libxl__osevent_hook_nexi *nexi_idle,
> +                                        libxl__osevent_hook_nexus **nexus)
> +{
> +    abort();
> +}
> +
> +/*----- OSEVENT* hook functions for nexusop "noop" -----*/
> +static void osevent_hook_pre_noop(libxl__gc *gc, void *ev,
> +                                  libxl__osevent_hook_nexi *nexi_idle,
> +                                  libxl__osevent_hook_nexus **nexus) { }
> +static void osevent_hook_failed_noop(libxl__gc *gc, void *ev,
> +                                     libxl__osevent_hook_nexi *nexi_idle,
> +                                     libxl__osevent_hook_nexus **nexus) { }
> +
>  
>  /*
>   * fd events
> @@ -72,7 +180,8 @@ int libxl__ev_fd_register(libxl__gc *gc, libxl__ev_fd *ev,
>  
>      DBG("ev_fd=%p register fd=%d events=%x", ev, fd, events);
>  
> -    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
> +    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
>   

Nit, should there be a space between 'fd,' and 'register'? Also, not
that gcc complained, but register is a keyword.

> +                      events, ev->nexus);
>      if (rc) goto out;
>  
>      ev->fd = fd;
> @@ -97,7 +206,7 @@ int libxl__ev_fd_modify(libxl__gc *gc, libxl__ev_fd *ev, short events)
>  
>      DBG("ev_fd=%p modify fd=%d events=%x", ev, ev->fd, events);
>  
> -    rc = OSEVENT_HOOK(fd_modify, ev->fd, &ev->for_app_reg, events);
> +    rc = OSEVENT_HOOK(fd,modify, noop, ev->fd, &ev->nexus->for_app_reg, events);
>   

Same nit here, and in a few other uses of OSEVENT_HOOK* below.

Regards,
Jim

>      if (rc) goto out;
>  
>      ev->events = events;
> @@ -119,7 +228,7 @@ void libxl__ev_fd_deregister(libxl__gc *gc, libxl__ev_fd *ev)
>  
>      DBG("ev_fd=%p deregister fd=%d", ev, ev->fd);
>  
> -    OSEVENT_HOOK_VOID(fd_deregister, ev->fd, ev->for_app_reg);
> +    OSEVENT_HOOK_VOID(fd,deregister, release, ev->fd, ev->nexus->for_app_reg);
>      LIBXL_LIST_REMOVE(ev, entry);
>      ev->fd = -1;
>  
> @@ -171,7 +280,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
>  {
>      int rc;
>  
> -    rc = OSEVENT_HOOK(timeout_register, &ev->for_app_reg, absolute, ev);
> +    rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
> +                      absolute, ev->nexus);
>      if (rc) return rc;
>  
>      ev->infinite = 0;
> @@ -184,7 +294,7 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
>  static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
>  {
>      if (!ev->infinite) {
> -        OSEVENT_HOOK_VOID(timeout_deregister, ev->for_app_reg);
> +        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
>          LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
>      }
>  }
> @@ -270,7 +380,8 @@ int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
>          rc = time_register_finite(gc, ev, absolute);
>          if (rc) goto out;
>      } else {
> -        rc = OSEVENT_HOOK(timeout_modify, &ev->for_app_reg, absolute);
> +        rc = OSEVENT_HOOK(timeout,modify, noop,
> +                          &ev->nexus->for_app_reg, absolute);
>          if (rc) goto out;
>  
>          LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
> @@ -1010,35 +1121,54 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
>  
>  
>  void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
> -                               int fd, short events, short revents)
> +                               int fd, short events_ign, short revents_ign)
>  {
> -    libxl__ev_fd *ev = for_libxl;
> -
>      EGC_INIT(ctx);
>      CTX_LOCK;
>      assert(!CTX->osevent_in_hook);
>  
> -    assert(fd == ev->fd);
> -    revents &= ev->events;
> -    if (revents)
> -        ev->func(egc, ev, fd, ev->events, revents);
> +    libxl__ev_fd *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
> +    if (!ev) goto out;
> +    if (ev->fd != fd) goto out;
>  
> +    struct pollfd check;
> +    for (;;) {
> +        check.fd = fd;
> +        check.events = ev->events;
> +        int r = poll(&check, 1, 0);
> +        if (!r)
> +            goto out;
> +        if (r==1)
> +            break;
> +        assert(r<0);
> +        if (errno != EINTR) {
> +            LIBXL__EVENT_DISASTER(egc, "failed poll to check for fd", errno, 0);
> +            goto out;
> +        }
> +    }
> +
> +    if (check.revents)
> +        ev->func(egc, ev, fd, ev->events, check.revents);
> +
> + out:
>      CTX_UNLOCK;
>      EGC_FREE;
>  }
>  
>  void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>  {
> -    libxl__ev_time *ev = for_libxl;
> -
>      EGC_INIT(ctx);
>      CTX_LOCK;
>      assert(!CTX->osevent_in_hook);
>  
> +    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
> +    if (!ev) goto out;
>      assert(!ev->infinite);
> +
>      LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
>      ev->func(egc, ev, &ev->abs);
>  
> + out:
>      CTX_UNLOCK;
>      EGC_FREE;
>  }
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index cba3616..6484bcb 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -136,6 +136,8 @@ typedef struct libxl__gc libxl__gc;
>  typedef struct libxl__egc libxl__egc;
>  typedef struct libxl__ao libxl__ao;
>  typedef struct libxl__aop_occurred libxl__aop_occurred;
> +typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
> +typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
>  
>  _hidden void libxl__alloc_failed(libxl_ctx *, const char *func,
>                           size_t nmemb, size_t size) __attribute__((noreturn));
> @@ -163,7 +165,7 @@ struct libxl__ev_fd {
>      libxl__ev_fd_callback *func;
>      /* remainder is private for libxl__ev_fd... */
>      LIBXL_LIST_ENTRY(libxl__ev_fd) entry;
> -    void *for_app_reg;
> +    libxl__osevent_hook_nexus *nexus;
>  };
>  
>  
> @@ -178,7 +180,7 @@ struct libxl__ev_time {
>      int infinite; /* not registered in list or with app if infinite */
>      LIBXL_TAILQ_ENTRY(libxl__ev_time) entry;
>      struct timeval abs;
> -    void *for_app_reg;
> +    libxl__osevent_hook_nexus *nexus;
>  };
>  
>  typedef struct libxl__ev_xswatch libxl__ev_xswatch;
> @@ -329,6 +331,8 @@ struct libxl__ctx {
>      libxl__poller poller_app; /* libxl_osevent_beforepoll and _afterpoll */
>      LIBXL_LIST_HEAD(, libxl__poller) pollers_event, pollers_idle;
>  
> +    LIBXL_SLIST_HEAD(libxl__osevent_hook_nexi, libxl__osevent_hook_nexus)
> +        hook_fd_nexi_idle, hook_timeout_nexi_idle;
>      LIBXL_LIST_HEAD(, libxl__ev_fd) efds;
>      LIBXL_TAILQ_HEAD(, libxl__ev_time) etimes;
>  
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>          if (rc) goto out;
>  
>          LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
> @@ -1010,35 +1121,54 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
>  
>  
>  void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
> -                               int fd, short events, short revents)
> +                               int fd, short events_ign, short revents_ign)
>  {
> -    libxl__ev_fd *ev = for_libxl;
> -
>      EGC_INIT(ctx);
>      CTX_LOCK;
>      assert(!CTX->osevent_in_hook);
>  
> -    assert(fd == ev->fd);
> -    revents &= ev->events;
> -    if (revents)
> -        ev->func(egc, ev, fd, ev->events, revents);
> +    libxl__ev_fd *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
> +    if (!ev) goto out;
> +    if (ev->fd != fd) goto out;
>  
> +    struct pollfd check;
> +    for (;;) {
> +        check.fd = fd;
> +        check.events = ev->events;
> +        int r = poll(&check, 1, 0);
> +        if (!r)
> +            goto out;
> +        if (r==1)
> +            break;
> +        assert(r<0);
> +        if (errno != EINTR) {
> +            LIBXL__EVENT_DISASTER(egc, "failed poll to check for fd", errno, 0);
> +            goto out;
> +        }
> +    }
> +
> +    if (check.revents)
> +        ev->func(egc, ev, fd, ev->events, check.revents);
> +
> + out:
>      CTX_UNLOCK;
>      EGC_FREE;
>  }
>  
>  void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>  {
> -    libxl__ev_time *ev = for_libxl;
> -
>      EGC_INIT(ctx);
>      CTX_LOCK;
>      assert(!CTX->osevent_in_hook);
>  
> +    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
> +    if (!ev) goto out;
>      assert(!ev->infinite);
> +
>      LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
>      ev->func(egc, ev, &ev->abs);
>  
> + out:
>      CTX_UNLOCK;
>      EGC_FREE;
>  }
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index cba3616..6484bcb 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -136,6 +136,8 @@ typedef struct libxl__gc libxl__gc;
>  typedef struct libxl__egc libxl__egc;
>  typedef struct libxl__ao libxl__ao;
>  typedef struct libxl__aop_occurred libxl__aop_occurred;
> +typedef struct libxl__osevent_hook_nexus libxl__osevent_hook_nexus;
> +typedef struct libxl__osevent_hook_nexi libxl__osevent_hook_nexi;
>  
>  _hidden void libxl__alloc_failed(libxl_ctx *, const char *func,
>                           size_t nmemb, size_t size) __attribute__((noreturn));
> @@ -163,7 +165,7 @@ struct libxl__ev_fd {
>      libxl__ev_fd_callback *func;
>      /* remainder is private for libxl__ev_fd... */
>      LIBXL_LIST_ENTRY(libxl__ev_fd) entry;
> -    void *for_app_reg;
> +    libxl__osevent_hook_nexus *nexus;
>  };
>  
>  
> @@ -178,7 +180,7 @@ struct libxl__ev_time {
>      int infinite; /* not registered in list or with app if infinite */
>      LIBXL_TAILQ_ENTRY(libxl__ev_time) entry;
>      struct timeval abs;
> -    void *for_app_reg;
> +    libxl__osevent_hook_nexus *nexus;
>  };
>  
>  typedef struct libxl__ev_xswatch libxl__ev_xswatch;
> @@ -329,6 +331,8 @@ struct libxl__ctx {
>      libxl__poller poller_app; /* libxl_osevent_beforepoll and _afterpoll */
>      LIBXL_LIST_HEAD(, libxl__poller) pollers_event, pollers_idle;
>  
> +    LIBXL_SLIST_HEAD(libxl__osevent_hook_nexi, libxl__osevent_hook_nexus)
> +        hook_fd_nexi_idle, hook_timeout_nexi_idle;
>      LIBXL_LIST_HEAD(, libxl__ev_fd) efds;
>      LIBXL_TAILQ_HEAD(, libxl__ev_time) etimes;
>  
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 22:54:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 22:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiYhy-0001L5-Kt; Tue, 11 Dec 2012 22:53:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TiYhx-0001Kz-Cv
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 22:53:49 +0000
Received: from [85.158.139.211:36219] by server-7.bemta-5.messagelabs.com id
	7A/67-08009-C79B7C05; Tue, 11 Dec 2012 22:53:48 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355266425!18568341!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26106 invoked from network); 11 Dec 2012 22:53:47 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Dec 2012 22:53:47 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Tue, 11 Dec 2012 15:53:41 -0700
Message-ID: <50C7B974.4050706@suse.com>
Date: Tue, 11 Dec 2012 15:53:40 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Because there is not necessarily any lock held at the point the
> application (eg, libvirt) calls libxl_osevent_occurred_timeout, in a
> multithreaded program those calls may be arbitrarily delayed in
> relation to other activities within the program.
>
> Specifically this means when ->timeout_deregister returns, libxl does
> not know whether it can safely dispose of the for_libxl value or
> whether it needs to retain it in case of an in-progress call to
> _occurred_timeout.
>
> The interface could be fixed by requiring the application to make a
> new call into libxl to say that the deregistration was complete.
>
> However that new call would have to be threaded through the
> application's event loop; this is complicated and some application
> authors are likely not to implement it properly.  Furthermore the
> easiest way to implement this facility in most event loops is to queue
> up a time event for "now".
>
> Shortcut all of this by having libxl always call timeout_modify
> setting abs={0,0} (ie, ASAP) instead of timeout_deregister.  This will
> cause the application to call _occurred_timeout.  When processing this
> calldown we see that we were no longer actually interested and simply
> throw it away.
>
> Additionally, there is a race between _occurred_timeout and
> ->timeout_modify.  If libxl ever adjusts the deadline for a timeout
> the application may already be in the process of calling _occurred, in
> which case the situation with for_app's lifetime becomes very
> complicated.  Therefore abolish libxl__ev_time_modify_{abs,rel} (which
> have no callers) and promise to the application only ever to call
> ->timeout_modify with abs=={0,0}.  The application still needs to cope
> with ->timeout_modify racing with its internal function which calls
> _occurred_timeout.  Document this.
>
> This is a forwards-compatible change for applications using the libxl
> API, and will hopefully eliminate these races in callback-supplying
> applications (such as libvirt) without the need for corresponding
> changes to the application.
>   

I'm not sure how to avoid changing the application (libvirt). Currently,
the libvirt libxl driver removes the timeout from the libvirt event loop
in the timeout_deregister() function. This function is never called now
and hence the timeout is never removed. I had to make the following
change in the libxl driver to use your patches:

diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 302f81c..d8f078e 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -225,16 +225,11 @@ static int libxlTimeoutRegisterEventHook(void *priv,
 
 static int libxlTimeoutModifyEventHook(void *priv ATTRIBUTE_UNUSED,
                                        void **hndp,
-                                       struct timeval abs_t)
+                                       struct timeval abs_t ATTRIBUTE_UNUSED)
 {
-    struct timeval now;
-    int timeout;
     struct libxlOSEventHookTimerInfo *timer_info = *hndp;
 
-    gettimeofday(&now, NULL);
-    timeout = (abs_t.tv_usec - now.tv_usec) / 1000;
-    timeout += (abs_t.tv_sec - now.tv_sec) * 1000;
-    virEventUpdateTimeout(timer_info->id, timeout);
+    virEventRemoveTimeout(timer_info->id);
     return 0;
 }

I could also call virEventUpdateTimeout() with a timeout of 0 to make it
fire on the next iteration of the event loop, and then remove the
timeout in the callback before invoking
libxl_osevent_occurred_timeout(). But either way, changes need to be
made to the application.

> For clarity, fold the body of time_register_finite into its one
> remaining call site.  This makes the semantics of ev->infinite
> slightly clearer.
>
> Cc: Bamvor Jian Zhang <bjzhang@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>
> --
> v3:
>   - Fix null pointer dereference in case when hooks not supplied.
> ---
>  tools/libxl/libxl_event.c |   89 +++++++--------------------------------------
>  tools/libxl/libxl_event.h |   17 ++++++++-
>  2 files changed, 29 insertions(+), 77 deletions(-)
>
> diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
> index f1fe425..f86c528 100644
> --- a/tools/libxl/libxl_event.c
> +++ b/tools/libxl/libxl_event.c
> @@ -267,18 +267,11 @@ static int time_rel_to_abs(libxl__gc *gc, int ms, struct timeval *abs_out)
>      return 0;
>  }
>  
> -static void time_insert_finite(libxl__gc *gc, libxl__ev_time *ev)
> -{
> -    libxl__ev_time *evsearch;
> -    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
> -                              timercmp(&ev->abs, &evsearch->abs, >));
> -    ev->infinite = 0;
> -}
> -
>  static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
>                                  struct timeval absolute)
>  {
>      int rc;
> +    libxl__ev_time *evsearch;
>  
>      rc = OSEVENT_HOOK(timeout,register, alloc, &ev->nexus->for_app_reg,
>                        absolute, ev->nexus);
> @@ -286,7 +279,8 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
>  
>      ev->infinite = 0;
>      ev->abs = absolute;
> -    time_insert_finite(gc, ev);
> +    LIBXL_TAILQ_INSERT_SORTED(&CTX->etimes, entry, ev, evsearch, /*empty*/,
> +                              timercmp(&ev->abs, &evsearch->abs, >));
>  
>      return 0;
>  }
> @@ -294,7 +288,12 @@ static int time_register_finite(libxl__gc *gc, libxl__ev_time *ev,
>  static void time_deregister(libxl__gc *gc, libxl__ev_time *ev)
>  {
>      if (!ev->infinite) {
> -        OSEVENT_HOOK_VOID(timeout,deregister, release, ev->nexus->for_app_reg);
> +        struct timeval right_away = { 0, 0 };
> +        if (ev->nexus) /* only set if app provided hooks */
> +            ev->nexus->ev = 0;
> +        OSEVENT_HOOK_VOID(timeout,modify,
>   

Nit again about the space here.

> +                          noop /* release nexus in _occurred_ */,
> +                          ev->nexus->for_app_reg, right_away);
>   

I had to change this to &ev->nexus->for_app_reg to get it to work
properly. But with this change, and the aforementioned hack to the
libvirt libxl driver, I'm not seeing the race any longer.

Regards,
Jim

>          LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
>      }
>  }
> @@ -364,70 +363,6 @@ int libxl__ev_time_register_rel(libxl__gc *gc, libxl__ev_time *ev,
>      return rc;
>  }
>  
> -int libxl__ev_time_modify_abs(libxl__gc *gc, libxl__ev_time *ev,
> -                              struct timeval absolute)
> -{
> -    int rc;
> -
> -    CTX_LOCK;
> -
> -    DBG("ev_time=%p modify abs==%lu.%06lu",
> -        ev, (unsigned long)absolute.tv_sec, (unsigned long)absolute.tv_usec);
> -
> -    assert(libxl__ev_time_isregistered(ev));
> -
> -    if (ev->infinite) {
> -        rc = time_register_finite(gc, ev, absolute);
> -        if (rc) goto out;
> -    } else {
> -        rc = OSEVENT_HOOK(timeout,modify, noop,
> -                          &ev->nexus->for_app_reg, absolute);
> -        if (rc) goto out;
> -
> -        LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
> -        ev->abs = absolute;
> -        time_insert_finite(gc, ev);
> -    }
> -
> -    rc = 0;
> - out:
> -    time_done_debug(gc,__func__,ev,rc);
> -    CTX_UNLOCK;
> -    return rc;
> -}
> -
> -int libxl__ev_time_modify_rel(libxl__gc *gc, libxl__ev_time *ev,
> -                              int milliseconds)
> -{
> -    struct timeval absolute;
> -    int rc;
> -
> -    CTX_LOCK;
> -
> -    DBG("ev_time=%p modify ms=%d", ev, milliseconds);
> -
> -    assert(libxl__ev_time_isregistered(ev));
> -
> -    if (milliseconds < 0) {
> -        time_deregister(gc, ev);
> -        ev->infinite = 1;
> -        rc = 0;
> -        goto out;
> -    }
> -
> -    rc = time_rel_to_abs(gc, milliseconds, &absolute);
> -    if (rc) goto out;
> -
> -    rc = libxl__ev_time_modify_abs(gc, ev, absolute);
> -    if (rc) goto out;
> -
> -    rc = 0;
> - out:
> -    time_done_debug(gc,__func__,ev,rc);
> -    CTX_UNLOCK;
> -    return rc;
> -}
> -
>  void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
>  {
>      CTX_LOCK;
> @@ -1161,7 +1096,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>      CTX_LOCK;
>      assert(!CTX->osevent_in_hook);
>  
> -    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, for_libxl);
> +    libxl__osevent_hook_nexus *nexus = for_libxl;
> +    libxl__ev_time *ev = osevent_ev_from_hook_nexus(ctx, nexus);
> +
> +    osevent_release_nexus(gc, &CTX->hook_timeout_nexi_idle, nexus);
> +
>      if (!ev) goto out;
>      assert(!ev->infinite);
>  
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index 3bcb6d3..51f2721 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -287,8 +287,10 @@ typedef struct libxl_osevent_hooks {
>    int (*timeout_register)(void *user, void **for_app_registration_out,
>                            struct timeval abs, void *for_libxl);
>    int (*timeout_modify)(void *user, void **for_app_registration_update,
> -                         struct timeval abs);
> -  void (*timeout_deregister)(void *user, void *for_app_registration);
> +                         struct timeval abs)
> +      /* only ever called with abs={0,0}, meaning ASAP */;
> +  void (*timeout_deregister)(void *user, void *for_app_registration)
> +      /* will never be called */;
>  } libxl_osevent_hooks;
>  
>  /* The application which calls register_fd_hooks promises to
> @@ -337,6 +339,17 @@ typedef struct libxl_osevent_hooks {
>   * register (or modify), and pass it to subsequent calls to modify
>   * or deregister.
>   *
> + * Note that the application must cope with a call from libxl to
> + * timeout_modify racing with its own call to
> + * libxl__osevent_occurred_timeout.  libxl guarantees that
> + * timeout_modify will only be called with abs={0,0} but the
> + * application must still ensure that libxl's attempt to cause the
> + * timeout to occur immediately is safely ignored even if the timeout is
> + * actually already in the process of occurring.
> + *
> + * timeout_deregister is not used because it forms part of a
> + * deprecated unsafe mode of use of the API.
> + *
>   * osevent_register_hooks may be called only once for each libxl_ctx.
>   * libxl may make calls to register/modify/deregister from within
>   * any libxl function (indeed, it will usually call register from
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 11 23:44:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Dec 2012 23:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiZUp-00021A-OP; Tue, 11 Dec 2012 23:44:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TiZUo-000215-9H
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 23:44:18 +0000
Received: from [85.158.139.83:46915] by server-3.bemta-5.messagelabs.com id
	2E/A8-25441-155C7C05; Tue, 11 Dec 2012 23:44:17 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355269456!29314544!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzk2NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13748 invoked from network); 11 Dec 2012 23:44:16 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Dec 2012 23:44:16 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 59ACE177A;
	Wed, 12 Dec 2012 01:44:15 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 314CA20060; Wed, 12 Dec 2012 01:44:15 +0200 (EET)
Date: Wed, 12 Dec 2012 01:44:15 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20121211234414.GC8912@reaktio.net>
References: <201211070206.qA7261Bp028589@wind.enjellic.com>
	<1352272948.12977.20.camel@hastur.hellion.org.uk>
	<20121201140336.GS8912@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121201140336.GS8912@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Dec 01, 2012 at 04:03:36PM +0200, Pasi Kärkkäinen wrote:
> Hello,
>
> IanJ: Just a reminder to commit these two patches to xen-4.1-testing..
>
> It'd be good to have them for Xen 4.1.4.
>

ping?


-- Pasi

>

> On Wed, Nov 07, 2012 at 08:22:28AM +0100, Ian Campbell wrote:
> > On Wed, 2012-11-07 at 02:06 +0000, Dr. Greg Wettstein wrote:
> > > ---------------------------------------------------------------------------
> > > Backport of the following patch from development:
> > >
> > > # User Ian Campbell <[hidden email]>
> > > # Date 1309968705 -3600
> > > # Node ID e4781aedf817c5ab36f6f3077e44c43c566a2812
> > > # Parent 700d0f03d50aa6619d313c1ff6aea7fd429d28a7
> > > libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> > >
> > > This patch properly terminates the tapdisk2 process(es) started
> > > to service a virtual block device.
> > >
> > > Signed-off-by: Greg Wettstein <greg@enjellic.com>
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > >
> > > diff -r 700d0f03d50a tools/blktap2/control/tap-ctl-list.c
> > > --- a/tools/blktap2/control/tap-ctl-list.c	Mon Oct 29 09:04:48 2012 +0100
> > > +++ b/tools/blktap2/control/tap-ctl-list.c	Tue Nov 06 19:52:48 2012 -0600
> > > @@ -506,17 +506,15 @@ out:
> > >  }
> > >
> > >  int
> > > -tap_ctl_find_minor(const char *type, const char *path)
> > > +tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
> > >  {
> > >  	tap_list_t **list, **_entry;
> > > -	int minor, err;
> > > +	int ret = -ENOENT, err;
> > >
> > >  	err = tap_ctl_list(&list);
> > >  	if (err)
> > >  		return err;
> > >
> > > -	minor = -1;
> > > -
> > >  	for (_entry = list; *_entry != NULL; ++_entry) {
> > >  		tap_list_t *entry  = *_entry;
> > >
> > > @@ -526,11 +524,13 @@ tap_ctl_find_minor(const char *type, con
> > >  		if (path && (!entry->path || strcmp(entry->path, path)))
> > >  			continue;
> > >
> > > -		minor = entry->minor;
> > > +		*tap = *entry;
> > > +		tap->type = tap->path = NULL;
> > > +		ret = 0;
> > >  		break;
> > >  	}
> > >
> > >  	tap_ctl_free_list(list);
> > >
> > > -	return minor >= 0 ? minor : -ENOENT;
> > > +	return ret;
> > >  }
> > > diff -r 700d0f03d50a tools/blktap2/control/tap-ctl.h
> > > --- a/tools/blktap2/control/tap-ctl.h	Mon Oct 29 09:04:48 2012 +0100
> > > +++ b/tools/blktap2/control/tap-ctl.h	Tue Nov 06 19:52:48 2012 -0600
> > > @@ -76,7 +76,7 @@ int tap_ctl_get_driver_id(const char *ha
> > >
> > >  int tap_ctl_list(tap_list_t ***list);
> > >  void tap_ctl_free_list(tap_list_t **list);
> > > -int tap_ctl_find_minor(const char *type, const char *path);
> > > +int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
> > >
> > >  int tap_ctl_allocate(int *minor, char **devname);
> > >  int tap_ctl_free(const int minor);
> > > diff -r 700d0f03d50a tools/libxl/libxl_blktap2.c
> > > --- a/tools/libxl/libxl_blktap2.c	Mon Oct 29 09:04:48 2012 +0100
> > > +++ b/tools/libxl/libxl_blktap2.c	Tue Nov 06 19:52:48 2012 -0600
> > > @@ -18,6 +18,8 @@
> > >
> > >  #include "tap-ctl.h"
> > >
> > > +#include <string.h>
> > > +
> > >  int libxl__blktap_enabled(libxl__gc *gc)
> > >  {
> > >      const char *msg;
> > > @@ -30,12 +32,13 @@ const char *libxl__blktap_devpath(libxl_
> > >  {
> > >      const char *type;
> > >      char *params, *devname = NULL;
> > > -    int minor, err;
> > > +    tap_list_t tap;
> > > +    int err;
> > >
> > >      type = libxl__device_disk_string_of_format(format);
> > > -    minor = tap_ctl_find_minor(type, disk);
> > > -    if (minor >= 0) {
> > > -        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", minor);
> > > +    err = tap_ctl_find(type, disk, &tap);
> > > +    if (err == 0) {
> > > +        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", tap.minor);
> > >          if (devname)
> > >              return devname;
> > >      }
> > > @@ -49,3 +52,28 @@ const char *libxl__blktap_devpath(libxl_
> > >
> > >      return NULL;
> > >  }
> > > +
> > > +
> > > +void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
> > > +{
> > > +    char *path, *params, *type, *disk;
> > > +    int err;
> > > +    tap_list_t tap;
> > > +
> > > +    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
> > > +    if (!path) return;
> > > +
> > > +    params = libxl__xs_read(gc, XBT_NULL, path);
> > > +    if (!params) return;
> > > +
> > > +    type = params;
> > > +    disk = strchr(params, ':');
> > > +    if (!disk) return;
> > > +
> > > +    *disk++ = '\0';
> > > +
> > > +    err = tap_ctl_find(type, disk, &tap);
> > > +    if (err < 0) return;
> > > +
> > > +    tap_ctl_destroy(tap.id, tap.minor);
> > > +}
> > > diff -r 700d0f03d50a tools/libxl/libxl_device.c
> > > --- a/tools/libxl/libxl_device.c	Mon Oct 29 09:04:48 2012 +0100
> > > +++ b/tools/libxl/libxl_device.c	Tue Nov 06 19:52:48 2012 -0600
> > > @@ -250,6 +250,7 @@ int libxl__device_destroy(libxl_ctx *ctx
> > >      if (!state)
> > >          goto out;
> > >      if (atoi(state) != 4) {
> > > +        libxl__device_destroy_tapdisk(&gc, be_path);
> > >          xs_rm(ctx->xsh, XBT_NULL, be_path);
> > >          goto out;
> > >      }
> > > @@ -368,6 +369,7 @@ int libxl__devices_destroy(libxl_ctx *ct
> > >              }
> > >          }
> > >      }
> > > +    libxl__device_destroy_tapdisk(&gc, be_path);
> > >  out:
> > >      libxl__free_all(&gc);
> > >      return 0;
> > > diff -r 700d0f03d50a tools/libxl/libxl_internal.h
> > > --- a/tools/libxl/libxl_internal.h	Mon Oct 29 09:04:48 2012 +0100
> > > +++ b/tools/libxl/libxl_internal.h	Tue Nov 06 19:52:48 2012 -0600
> > > @@ -314,6 +314,12 @@ _hidden const char *libxl__blktap_devpat
> > >                                   const char *disk,
> > >                                   libxl_disk_format format);
> > >
> > > +/* libxl__device_destroy_tapdisk:
> > > + *   Destroys any tapdisk process associated with the backend represented
> > > + *   by be_path.
> > > + */
> > > +_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
> > > +
> > >  _hidden char *libxl__uuid2string(libxl__gc *gc, const libxl_uuid uuid);
> > >
> > >  struct libxl__xen_console_reader {
> > > diff -r 700d0f03d50a tools/libxl/libxl_noblktap2.c
> > > --- a/tools/libxl/libxl_noblktap2.c	Mon Oct 29 09:04:48 2012 +0100
> > > +++ b/tools/libxl/libxl_noblktap2.c	Tue Nov 06 19:52:48 2012 -0600
> > > @@ -27,3 +27,7 @@ const char *libxl__blktap_devpath(libxl_
> > >  {
> > >      return NULL;
> > >  }
> > > +
> > > +void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
> > > +{
> > > +}
> > > ---------------------------------------------------------------------------
> > >
> > > As always,
> > > Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
> > > 4206 N. 19th Ave.           Specializing in information infra-structure
> > > Fargo, ND  58102            development.
> > > PH: 701-281-1686
> > > FAX: 701-281-3949           EMAIL: greg@enjellic.com
> > > ---------------------------------------------------------------------------
> > > "Man, despite his artistic pretensions, his sophistication and many
> > >  accomplishments, owes the fact of his existence to a six-inch layer of
> > >  topsoil and the fact that it rains."
> > >                                 -- Anonymous writer on perspective.
> > >                                    GAUSSIAN quote.
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
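
The libxl__device_destroy_tapdisk hunk in the patch above splits the xenstore value <be_path>/tapdisk-params, of the form "type:disk", at the first ':' before looking up the matching tapdisk. A standalone sketch of just that parsing step (split_tapdisk_params and the example "aio:..." value are illustrative, not part of the patch):

```c
#include <assert.h>
#include <string.h>

/* Split a writable "type:disk" string in place, as the quoted
 * libxl__device_destroy_tapdisk does with the tapdisk-params value
 * read from xenstore (e.g. "aio:/path/to/image").  Returns 0 on
 * success, -1 if no ':' separator is present. */
static int split_tapdisk_params(char *params, char **type, char **disk)
{
    char *colon = strchr(params, ':');
    if (!colon)
        return -1;      /* malformed: no type/disk separator */
    *colon = '\0';      /* terminate the type portion in place */
    *type = params;
    *disk = colon + 1;
    return 0;
}
```

With a valid value the type and disk halves come back as separate strings; a value lacking ':' is rejected, mirroring the early `if (!disk) return;` in the patch.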

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 00:48:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 00:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiaU2-0002ly-Pv; Wed, 12 Dec 2012 00:47:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiaU1-0002lt-I9
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 00:47:33 +0000
Received: from [85.158.143.35:36134] by server-3.bemta-4.messagelabs.com id
	06/13-18211-424D7C05; Wed, 12 Dec 2012 00:47:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355273251!4324201!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk5OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10162 invoked from network); 12 Dec 2012 00:47:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 00:47:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,262,1355097600"; 
   d="scan'208";a="74030"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 00:47:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 00:47:31 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiaTz-0000b7-EH;
	Wed, 12 Dec 2012 00:47:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiaTx-0002xM-Vq;
	Wed, 12 Dec 2012 00:47:31 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14671-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 00:47:30 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14671: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7785210734838658872=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7785210734838658872==
Content-Type: text/plain

flight 14671 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14671/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14669
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14669

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ef8c1b607b10
baseline version:
 xen                  03cb71bc32f9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@amd.com>
  Charles Arnold <carnold@suse.com>
  Dan Magenheimer <dan.magenheimer@oracle.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=ef8c1b607b10
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable ef8c1b607b10
+ branch=xen-unstable
+ revision=ef8c1b607b10
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r ef8c1b607b10 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 4 changesets with 8 changes to 8 files


--===============7785210734838658872==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7785210734838658872==--

From xen-devel-bounces@lists.xen.org Wed Dec 12 00:48:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 00:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiaU2-0002ly-Pv; Wed, 12 Dec 2012 00:47:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiaU1-0002lt-I9
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 00:47:33 +0000
Received: from [85.158.143.35:36134] by server-3.bemta-4.messagelabs.com id
	06/13-18211-424D7C05; Wed, 12 Dec 2012 00:47:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355273251!4324201!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk5OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10162 invoked from network); 12 Dec 2012 00:47:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 00:47:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,262,1355097600"; 
   d="scan'208";a="74030"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 00:47:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 00:47:31 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiaTz-0000b7-EH;
	Wed, 12 Dec 2012 00:47:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiaTx-0002xM-Vq;
	Wed, 12 Dec 2012 00:47:31 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14671-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 00:47:30 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14671: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7785210734838658872=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7785210734838658872==
Content-Type: text/plain

flight 14671 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14671/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14669
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14669

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ef8c1b607b10
baseline version:
 xen                  03cb71bc32f9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@amd.com>
  Charles Arnold <carnold@suse.com>
  Dan Magenheimer <dan.magenheimer@oracle.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=ef8c1b607b10
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable ef8c1b607b10
+ branch=xen-unstable
+ revision=ef8c1b607b10
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r ef8c1b607b10 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 4 changesets with 8 changes to 8 files


--===============7785210734838658872==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7785210734838658872==--

From xen-devel-bounces@lists.xen.org Wed Dec 12 01:04:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 01:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiajz-0006tM-Qb; Wed, 12 Dec 2012 01:04:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Tiajy-0006tH-Fa
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 01:04:02 +0000
Received: from [85.158.138.51:61171] by server-2.bemta-3.messagelabs.com id
	96/10-11239-108D7C05; Wed, 12 Dec 2012 01:04:01 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355274240!28180719!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1MjA3Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17015 invoked from network); 12 Dec 2012 01:04:00 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-2.tower-174.messagelabs.com with SMTP;
	12 Dec 2012 01:04:00 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 11 Dec 2012 17:03:59 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,263,1355126400"; d="scan'208";a="260837702"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 11 Dec 2012 17:03:58 -0800
Received: from FMSMSX109.amr.corp.intel.com (10.19.9.28) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 11 Dec 2012 17:03:58 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx109.amr.corp.intel.com (10.19.9.28) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 11 Dec 2012 17:03:58 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Wed, 12 Dec 2012 09:03:56 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
	hook
Thread-Index: AQHN18IfWR3YW92mSEilHdTo9iWj55gUUxcA
Date: Wed, 12 Dec 2012 01:03:55 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
In-Reply-To: <20121211170653.GG9347@localhost.localdomain>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Wednesday, December 12, 2012 1:07 AM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
> hook
> 
> On Tue, Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > Sent: Friday, December 07, 2012 10:09 PM
> > > To: Xu, Dongxiao
> > > Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for
> > > map_sg hook
> > >
> > > On Thu, Dec 06, 2012 at 09:08:42PM +0800, Dongxiao Xu wrote:
> > > > While mapping sg buffers, checking to cross page DMA buffer is
> > > > also needed. If the guest DMA buffer crosses page boundary, Xen
> > > > should exchange contiguous memory for it.
> > >
> > > So this is when we cross those 2MB contiguous swatches of buffers.
> > > Wouldn't we get the same problem with the 'map_page' call? If the
> > > driver tried to map say a 4MB DMA region?
> >
> > Yes, it also needs such check, as I just replied to Jan's mail.
> >
> > >
> > > What if this check was done in the routines that provide the
> > > software static buffers and there try to provide a nice DMA contingous
> swatch of pages?
> >
> > Yes, this approach also came to our mind, which needs to modify the driver
> itself.
> > If so, it requires driver not using such static buffers (e.g., from kmalloc) to do
> DMA even if the buffer is continuous in native.
> 
> I am a bit lost here.
> 
> Is the issue you found only with drivers that do not use DMA API?
> Can you perhaps point me to the code that triggered this fix in the first place?

Yes, we hit this issue with a specific SAS device/driver that calls into the libata-core code; see the function ata_dev_read_id(), called from ata_dev_reread_id() in drivers/ata/libata-core.c.

In the above function, the target buffer is (void *)dev->link->ap->sector_buf, which is a 512-byte static buffer that unfortunately crosses a page boundary.
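[Editor's note: the condition at issue here, a buffer whose first and last bytes land on different pages, can be illustrated with a small standalone check. This is only a sketch; the 4 KiB page size and the function name are illustrative, not part of the kernel or Xen-SWIOTLB code discussed above.]

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Nonzero if the buffer [addr, addr + len) spans more than one page,
 * i.e. its first and last bytes have different page frame numbers. */
static int crosses_page_boundary(uintptr_t addr, size_t len)
{
    if (len == 0)
        return 0;
    return (addr >> PAGE_SHIFT) != ((addr + len - 1) >> PAGE_SHIFT);
}
```

For example, a 512-byte buffer starting 256 bytes before a page boundary trips the check, while the same buffer starting exactly on a page boundary does not.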

> > Is this acceptable by kernel/driver upstream?
> 
> I am still not completely clear on what you had in mind. The one method I
> thought about that might help in this is to have Xen-SWIOTLB track which
> memory ranges were exchanged (so xen_swiotlb_fixup would save the *buf
> and the size for each call to xen_create_contiguous_region in a list or array).
> 
> When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called they would
> consult said array/list to see if the region they retrieved crosses said 2MB
> chunks. If so.. and here I am unsure of what would be the best way to proceed.

We thought we can solve the issue in several ways:

1) Like the previous patch I sent out, check the DMA region in xen_swiotlb_map_page() and xen_swiotlb_map_sg_attr(), and if it crosses a page boundary, exchange the memory and copy the content over. However, this has a race condition: while the content is being copied (the patch introduces two memory copies), other code may access the page and see incorrect values.
2) Mostly the same as item 1), except that the content copy is done inside the Xen hypervisor rather than in Dom0. This requires adding a flag to the XENMEM_exchange hypercall to indicate that the memory should be moved.
3) As you also mentioned, this is not a common case; it is only triggered by certain devices/drivers, so we could fix the affected drivers to avoid DMA into static buffers such as (void *)dev->link->ap->sector_buf above. But I am not sure whether that would be acceptable to kernel/driver upstream.
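[Editor's note: Konrad's earlier suggestion, having Xen-SWIOTLB remember which ranges xen_swiotlb_fixup passed to xen_create_contiguous_region and consulting that list from the map paths, could be sketched roughly as below. All names and the fixed-size array are hypothetical; this is not the real Xen-SWIOTLB API.]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical record of ranges that were made machine-contiguous. */
struct contig_range { uintptr_t start; size_t size; };

#define MAX_RANGES 32
static struct contig_range ranges[MAX_RANGES];
static int nr_ranges;

/* Called once per xen_create_contiguous_region() invocation to remember
 * the exchanged range (sketch of what xen_swiotlb_fixup might save). */
static void record_contig_range(uintptr_t start, size_t size)
{
    if (nr_ranges < MAX_RANGES) {
        ranges[nr_ranges].start = start;
        ranges[nr_ranges].size  = size;
        nr_ranges++;
    }
}

/* Nonzero if [buf, buf + len) lies entirely inside one recorded range,
 * i.e. the buffer is known to be machine-contiguous and the map paths
 * would not need to exchange or bounce it. */
static int within_one_contig_range(uintptr_t buf, size_t len)
{
    for (int i = 0; i < nr_ranges; i++) {
        if (buf >= ranges[i].start &&
            buf + len <= ranges[i].start + ranges[i].size)
            return 1;
    }
    return 0;
}
```

A buffer straddling the end of a recorded 2MB chunk would fail the lookup, which is exactly the case where the discussion above is unsure how to proceed.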

Thanks,
Dongxiao



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 01:46:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 01:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TibOP-00078w-CE; Wed, 12 Dec 2012 01:45:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TibON-00078r-Rq
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 01:45:47 +0000
Received: from [85.158.137.99:40662] by server-9.bemta-3.messagelabs.com id
	38/00-11948-6C1E7C05; Wed, 12 Dec 2012 01:45:42 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355276739!18310994!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzI5NzA5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9229 invoked from network); 12 Dec 2012 01:45:41 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-10.tower-217.messagelabs.com with SMTP;
	12 Dec 2012 01:45:41 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 11 Dec 2012 17:45:38 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,263,1355126400"; d="scan'208";a="256138100"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 11 Dec 2012 17:45:37 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 09:35:54 +0800
Message-Id: <1355276154-22803-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH] intr.c: remove i386 related code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The i386 architecture is no longer supported by Xen, so remove the related code.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |    7 -------
 1 files changed, 0 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index ef8b925..535248a 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -295,16 +295,9 @@ void vmx_intr_assist(void)
                     intack.vector;
         __vmwrite(GUEST_INTR_STATUS, status);
         if (v->arch.hvm_vmx.eoi_exitmap_changed) {
-#ifdef __i386__
-#define UPDATE_EOI_EXITMAP(v, e) {                             \
-        if (test_and_clear_bit(e, &v->arch.hvm_vmx.eoi_exitmap_changed)) {      \
-                __vmwrite(EOI_EXIT_BITMAP##e, v->arch.hvm_vmx.eoi_exit_bitmap[e]);    \
-                __vmwrite(EOI_EXIT_BITMAP##e##_HIGH, v.arch.hvm_vmx.eoi_exit_bitmap[e] >> 32);}}
-#else
 #define UPDATE_EOI_EXITMAP(v, e) {                             \
         if (test_and_clear_bit(e, &v->arch.hvm_vmx.eoi_exitmap_changed)) {      \
                 __vmwrite(EOI_EXIT_BITMAP##e, v->arch.hvm_vmx.eoi_exit_bitmap[e]);}}
-#endif
                 UPDATE_EOI_EXITMAP(v, 0);
                 UPDATE_EOI_EXITMAP(v, 1);
                 UPDATE_EOI_EXITMAP(v, 2);
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgg-0007tV-Qa; Wed, 12 Dec 2012 03:08:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgf-0007tK-8V
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:45 +0000
Received: from [85.158.139.83:50572] by server-16.bemta-5.messagelabs.com id
	4B/28-09208-C35F7C05; Wed, 12 Dec 2012 03:08:44 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355281723!27708233!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31571 invoked from network); 12 Dec 2012 03:08:44 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:44 -0000
Received: by mail-ea0-f181.google.com with SMTP id k14so48263eaa.26
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=Db7VGsjMakL83P1SjDzUYhhoafNuHGE54jsw3bnQBsU=;
	b=Am/68jjuOHzHxCRc+F1E2KPe8xSmUlp2WkcOZC2SSMpdq/lw0haT7zvMD2ijOmMcNg
	exZx4AIDA7pGE8VlTnc9B9eqnRYtIKb48VuB+A9YMMOrQNdj4bnuGzhVTeZYuixLMGgG
	OqHGvTOooDK38xit2Wqs3RxHKSLtU2nfzCw66TdPxQ3oJ3F9HjjAeGsrNfiQYpnNxwVJ
	xSq2If7epOcjL4lA+fUMv4KzKy1DXZwl+pCHaLLfO1Ese9BNNX8fkLD7arn/HGZ8QcEn
	zgXyg0ENH4EfH4b8qY5JN/7iWjgoguWewyAaAEaHnnmr8IjR8TLHqBfa24eWntXjL4xi
	fqCg==
Received: by 10.14.173.65 with SMTP id u41mr881542eel.13.1355281723794;
	Tue, 11 Dec 2012 19:08:43 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.42
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:43 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 21b142498d469992125190dbca1df68ffd0d0ac6
Message-Id: <21b142498d4699921251.1355280773@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:53 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3 of 6 v2] xen: sched_credit: use
 current_on_cpu() when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

current_on_cpu() was defined by the previous change; using it wherever
appropriate throughout sched_credit.c makes the code easier to read.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -231,7 +231,7 @@ static void burn_credits(struct csched_v
     unsigned int credits;
 
     /* Assert svc is current */
-    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
+    ASSERT( svc == CSCHED_VCPU(current_on_cpu(svc->vcpu->processor)) );
 
     if ( (delta = now - svc->start_time) <= 0 )
         return;
@@ -249,8 +249,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
 static inline void
 __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
 {
-    struct csched_vcpu * const cur =
-        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    struct csched_vcpu * const cur = CSCHED_VCPU(current_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask, idle_mask;
     int idlers_empty;
@@ -387,7 +386,7 @@ csched_alloc_pdata(const struct schedule
         per_cpu(schedule_data, cpu).sched_priv = spc;
 
     /* Start off idling... */
-    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
+    BUG_ON(!is_idle_vcpu(current_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
 
     spin_unlock_irqrestore(&prv->lock, flags);
@@ -730,7 +729,7 @@ csched_vcpu_sleep(const struct scheduler
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( per_cpu(schedule_data, vc->processor).curr == vc )
+    if ( current_on_cpu(vc->processor) == vc )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
     else if ( __vcpu_on_runq(svc) )
         __runq_remove(svc);
@@ -744,7 +743,7 @@ csched_vcpu_wake(const struct scheduler 
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
+    if ( unlikely(current_on_cpu(cpu) == vc) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -1213,7 +1212,7 @@ static struct csched_vcpu *
 csched_runq_steal(int peer_cpu, int cpu, int pri)
 {
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
-    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
+    const struct vcpu * const peer_vcpu = current_on_cpu(peer_cpu);
     struct csched_vcpu *speer;
     struct list_head *iter;
     struct vcpu *vc;
@@ -1502,7 +1501,7 @@ csched_dump_pcpu(const struct scheduler 
     printk("core=%s\n", cpustr);
 
     /* current VCPU */
-    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    svc = CSCHED_VCPU(current_on_cpu(cpu));
     if ( svc )
     {
         printk("\trun: ");


From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgk-0007uT-O6; Wed, 12 Dec 2012 03:08:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgj-0007u8-LF
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:49 +0000
Received: from [85.158.139.83:48471] by server-10.bemta-5.messagelabs.com id
	62/4A-13383-045F7C05; Wed, 12 Dec 2012 03:08:48 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355281728!29479563!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2604 invoked from network); 12 Dec 2012 03:08:48 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:48 -0000
Received: by mail-ee0-f47.google.com with SMTP id e51so70522eek.34
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=Kbul388qkUEp/4xGqdthzJu4N3M1lLIqHwBGVkbg9bI=;
	b=fH2+rYgvLaGN/rO0Hnjvzz+puSiuB9UeVmchLPjF8RnVFiGSvCQjCpwnHWkL0aLujl
	KH7W9qCwm+UZmKlxGoOfv00D3/twOiqVvV5Tow14hdEkx7xL24O4vzOEAU9X+QFWur6g
	QlbBFWtZZSd/ps0EJWDuF7zma19hVU78Ptj7dv4V2cHul0a42YLxdb2VjtSurdQXoa7D
	DPiEZ3q+Bki+GPC/HnWhGS0WfQL1RWj2Je60gDlDBX6YkVc/IvyVCwdPS4IvPMANTPWM
	m8g12522lAu4ac2sftRdBGCqBXQlwi6CArJqmuHRzBb2EpTqGBsgU87dRU/RI34WkOET
	9S9g==
Received: by 10.14.194.199 with SMTP id m47mr899447een.11.1355281728031;
	Tue, 11 Dec 2012 19:08:48 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.46
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:47 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 036a3bb938a550f2ee0cbdf89cd218a5d2854790
Message-Id: <036a3bb938a550f2ee0c.1355280776@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:56 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6 of 6 v2] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add tracing of tickling decisions and PCPU selection.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * Dummy `struct d {}', accommodating `cpu' only, removed in favour
   of the much more readable `trace_var(..., sizeof(cpu), &cpu)',
   as suggested.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -21,6 +21,7 @@
 #include <asm/atomic.h>
 #include <xen/errno.h>
 #include <xen/keyhandler.h>
+#include <xen/trace.h>
 
 
 /*
@@ -97,6 +98,18 @@
 
 
 /*
+ * Credit tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED_SCHED_TASKLET TRC_SCHED_CLASS_EVT(CSCHED, 1)
+#define TRC_CSCHED_ACCOUNT_START TRC_SCHED_CLASS_EVT(CSCHED, 2)
+#define TRC_CSCHED_ACCOUNT_STOP  TRC_SCHED_CLASS_EVT(CSCHED, 3)
+#define TRC_CSCHED_STOLEN_VCPU   TRC_SCHED_CLASS_EVT(CSCHED, 4)
+#define TRC_CSCHED_PICKED_CPU    TRC_SCHED_CLASS_EVT(CSCHED, 5)
+#define TRC_CSCHED_TICKLE        TRC_SCHED_CLASS_EVT(CSCHED, 6)
+
+
+/*
  * Boot parameters
  */
 static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
@@ -315,9 +328,18 @@ static inline void
         }
     }
 
-    /* Send scheduler interrupts to designated CPUs */
     if ( !cpumask_empty(&mask) )
+    {
+        if ( unlikely(tb_init_done) )
+        {
+            /* Avoid TRACE_*: saves checking !tb_init_done each step */
+            for_each_cpu(cpu, &mask)
+                trace_var(TRC_CSCHED_TICKLE, 0, sizeof(cpu), &cpu);
+        }
+
+        /* Send scheduler interrupts to designated CPUs */
         cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
+    }
 }
 
 static void
@@ -554,6 +576,8 @@ static int
     if ( commit && spc )
        spc->idle_bias = cpu;
 
+    TRACE_3D(TRC_CSCHED_PICKED_CPU, vc->domain->domain_id, vc->vcpu_id, cpu);
+
     return cpu;
 }
 
@@ -586,6 +610,9 @@ static inline void
         }
     }
 
+    TRACE_3D(TRC_CSCHED_ACCOUNT_START, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
+
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
@@ -608,6 +635,9 @@ static inline void
     {
         list_del_init(&sdom->active_sdom_elem);
     }
+
+    TRACE_3D(TRC_CSCHED_ACCOUNT_STOP, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
 }
 
 static void
@@ -1241,6 +1271,8 @@ csched_runq_steal(int peer_cpu, int cpu,
             if (__csched_vcpu_is_migrateable(vc, cpu))
             {
                 /* We got a candidate. Grab it! */
+                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
+                         vc->domain->domain_id, vc->vcpu_id);
                 SCHED_VCPU_STAT_CRANK(speer, migrate_q);
                 SCHED_STAT_CRANK(migrate_queued);
                 WARN_ON(vc->is_urgent);
@@ -1401,6 +1433,7 @@ csched_schedule(
     /* Tasklet work (which runs in idle VCPU context) overrides all else. */
     if ( tasklet_work_scheduled )
     {
+        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
         snext = CSCHED_VCPU(idle_vcpu[cpu]);
         snext->pri = CSCHED_PRI_TS_BOOST;
     }


From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgh-0007te-5r; Wed, 12 Dec 2012 03:08:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgf-0007tG-22
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:45 +0000
Received: from [85.158.139.211:7199] by server-2.bemta-5.messagelabs.com id
	B5/64-16162-C35F7C05; Wed, 12 Dec 2012 03:08:44 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355281722!18319869!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10687 invoked from network); 12 Dec 2012 03:08:42 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:42 -0000
Received: by mail-ee0-f42.google.com with SMTP id c41so70342eek.15
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=HffxcbKZHV59wN41HF6I5mOtAAIXL3+p1piVGnKWN9s=;
	b=zpQPGKQiszQrBhi0qkj+R7G6A1GGLXifMOTSjDx2HaeHtPS0f93smnE1x3dEUTBNME
	2SrqZi/NbfjF4mUvlqNaHAC3wwNbEK8Mv9epYgF2D24EwwgmcZYVwUCZFgnVcS9x/AsL
	gy5S0AQCTyT/+/O+2b23VHuFyf4sxztUTggAks1GO6iTfUk1zjB2weu3Y2UfmTL1JHR0
	exQD+s8kqkjoh9S0CrlVGXkAHBtFJKa5aVIssUkobr95DC/Wbn3UzJyYnY2rV9gHFfjP
	PosVmmzcumeyNcF6dvgEzEuRDwUeJt1at22eaHDQ5MPVYW6j3FPJifj5M4jMdjPkP9cw
	xNag==
Received: by 10.14.0.3 with SMTP id 3mr854398eea.16.1355281722465;
	Tue, 11 Dec 2012 19:08:42 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:41 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 69860abfe7aec775f7817964bab1333f60215cdf
Message-Id: <69860abfe7aec775f781.1355280772@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:52 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 6 v2] xen: sched_credit: improve tickling
	of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Right now, when a VCPU wakes up, we check whether it should preempt
what is running on the PCPU, and whether or not the waking VCPU can
be migrated (by tickling some idlers). However, this can result in
suboptimal or even wrong behaviour, as explained here:

 http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html

This change, instead, when deciding which PCPU(s) to tickle upon
VCPU wake-up, considers both what is likely to happen on the PCPU
where the wakeup occurs and whether or not there are idlers where
the woken-up VCPU can run. In fact, if there are, we can avoid
interrupting the running VCPU. Only if there are no such PCPUs are
preemption and migration the way to go.

This has been tested (on top of the previous change) by running
the following benchmarks inside 2, 6 and 10 VMs, concurrently, on
a shared host, each with 2 VCPUs and 960 MB of memory (host had 16
ways and 12 GB RAM).

1) All VMs had 'cpus="all"' in their config file.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 50.078467 +/- 1.6676162 | 49.673667 +/- 0.0094321 |
 | 6   | 63.259472 +/- 0.1137586 | 61.680011 +/- 1.0208723 |
 | 10  | 91.246797 +/- 0.1154008 | 90.396720 +/- 1.5900423 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 485.56333 +/- 6.0527356 | 487.83167 +/- 0.7602850 |
 | 6   | 401.36278 +/- 1.9745916 | 409.96778 +/- 3.6761092 |
 | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 43150.63 +/- 1359.5616  | 43275.427 +/- 606.28185 |
 | 6   | 29274.29 +/- 1024.4042  | 29716.189 +/- 1290.1878 |
 | 10  | 19061.28 +/- 512.88561  | 19192.599 +/- 605.66058 |


2) All VMs had their VCPUs statically pinned to the host's PCPUs.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
 | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
 | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
 | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
 | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 49591.057 +/- 952.93384 | 49594.195 +/- 799.57976 |
 | 6   | 33538.247 +/- 1089.2115 | 33671.758 +/- 1077.6806 |
 | 10  | 21927.870 +/- 831.88742 | 21891.131 +/- 563.37929 |


The numbers show that the change either has no or very limited impact
(the specjbb2005 case) or, when it does have some impact, it is a
real improvement in performance (the sysbench-memory case).

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * Rewritten as per George's suggestion, in order to improve readability.
 * Killed some of the stats, with the only exception of `tickle_idlers_none'
   and `tickle_idlers_some'. They don't make things look that terrible and
   I think they could be useful.
 * The preemption+migration of the currently running VCPU has been turned
   into a migration request, instead of just tickling. I traced this
   thing some more, and it looks like that is the way to go. Tickling is
   not effective here, because the woken PCPU would expect cur to be out
   of the scheduler tail, which is likely false (cur->is_running is
   still set to 1).

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -133,6 +133,7 @@ struct csched_vcpu {
         uint32_t state_idle;
         uint32_t migrate_q;
         uint32_t migrate_r;
+        uint32_t kicked_away;
     } stats;
 #endif
 };
@@ -251,54 +252,67 @@ static inline void
     struct csched_vcpu * const cur =
         CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
-    cpumask_t mask;
+    cpumask_t mask, idle_mask;
+    int idlers_empty;
 
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    /* If strictly higher priority than current VCPU, signal the CPU */
-    if ( new->pri > cur->pri )
+    idlers_empty = cpumask_empty(prv->idlers);
+
+    /*
+     * If the pcpu is idle, or there are no idlers and the new
+     * vcpu is a higher priority than the old vcpu, run it here.
+     *
+     * If there are idle cpus, first try to find one suitable to run
+     * new, so we can avoid preempting cur.  If we cannot find a
+     * suitable idler on which to run new, run it here, but try to
+     * find a suitable idler on which to run cur instead.
+     */
+    if ( cur->pri == CSCHED_PRI_IDLE
+         || (idlers_empty && new->pri > cur->pri) )
     {
-        if ( cur->pri == CSCHED_PRI_IDLE )
-            SCHED_STAT_CRANK(tickle_local_idler);
-        else if ( cur->pri == CSCHED_PRI_TS_OVER )
-            SCHED_STAT_CRANK(tickle_local_over);
-        else if ( cur->pri == CSCHED_PRI_TS_UNDER )
-            SCHED_STAT_CRANK(tickle_local_under);
-        else
-            SCHED_STAT_CRANK(tickle_local_other);
-
+        if ( cur->pri != CSCHED_PRI_IDLE )
+            SCHED_STAT_CRANK(tickle_idlers_none);
         cpumask_set_cpu(cpu, &mask);
     }
+    else if ( !idlers_empty )
+    {
+        /* Check whether or not there are idlers that can run new */
+        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
 
-    /*
-     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
-     * let them know there is runnable work in the system...
-     */
-    if ( cur->pri > CSCHED_PRI_IDLE )
-    {
-        if ( cpumask_empty(prv->idlers) )
+        /*
+         * If there are no suitable idlers for new, and it's higher
+         * priority than cur, ask the scheduler to migrate cur away.
+         * We have to act like this (instead of just waking some of
+         * the idlers suitable for cur) because cur is running.
+         *
+         * If there are suitable idlers for new, no matter priorities,
+         * leave cur alone (as it is running and is, likely, cache-hot)
+         * and wake some of them (which is waking up and so is, likely,
+         * cache cold anyway).
+         */
+        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
         {
             SCHED_STAT_CRANK(tickle_idlers_none);
+            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
+            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
+            SCHED_STAT_CRANK(migrate_kicked_away);
+            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+            cpumask_set_cpu(cpu, &mask);
         }
-        else
+        else if ( !cpumask_empty(&idle_mask) )
         {
-            cpumask_t idle_mask;
-
-            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
-            if ( !cpumask_empty(&idle_mask) )
+            /* Which of the idlers suitable for new shall we wake up? */
+            SCHED_STAT_CRANK(tickle_idlers_some);
+            if ( opt_tickle_one_idle )
             {
-                SCHED_STAT_CRANK(tickle_idlers_some);
-                if ( opt_tickle_one_idle )
-                {
-                    this_cpu(last_tickle_cpu) = 
-                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
-                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
-                }
-                else
-                    cpumask_or(&mask, &mask, &idle_mask);
+                this_cpu(last_tickle_cpu) =
+                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
+                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
             }
-            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
+            else
+                cpumask_or(&mask, &mask, &idle_mask);
         }
     }
 
@@ -1456,13 +1470,14 @@ csched_dump_vcpu(struct csched_vcpu *svc
     {
         printk(" credit=%i [w=%u]", atomic_read(&svc->credit), sdom->weight);
 #ifdef CSCHED_STATS
-        printk(" (%d+%u) {a/i=%u/%u m=%u+%u}",
+        printk(" (%d+%u) {a/i=%u/%u m=%u+%u (k=%u)}",
                 svc->stats.credit_last,
                 svc->stats.credit_incr,
                 svc->stats.state_active,
                 svc->stats.state_idle,
                 svc->stats.migrate_q,
-                svc->stats.migrate_r);
+                svc->stats.migrate_r,
+                svc->stats.kicked_away);
 #endif
     }
 
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -39,10 +39,6 @@ PERFCOUNTER(vcpu_wake_runnable,     "csc
 PERFCOUNTER(vcpu_wake_not_runnable, "csched: vcpu_wake_not_runnable")
 PERFCOUNTER(vcpu_park,              "csched: vcpu_park")
 PERFCOUNTER(vcpu_unpark,            "csched: vcpu_unpark")
-PERFCOUNTER(tickle_local_idler,     "csched: tickle_local_idler")
-PERFCOUNTER(tickle_local_over,      "csched: tickle_local_over")
-PERFCOUNTER(tickle_local_under,     "csched: tickle_local_under")
-PERFCOUNTER(tickle_local_other,     "csched: tickle_local_other")
 PERFCOUNTER(tickle_idlers_none,     "csched: tickle_idlers_none")
 PERFCOUNTER(tickle_idlers_some,     "csched: tickle_idlers_some")
 PERFCOUNTER(load_balance_idle,      "csched: load_balance_idle")
@@ -52,6 +48,7 @@ PERFCOUNTER(steal_trylock_failed,   "csc
 PERFCOUNTER(steal_peer_idle,        "csched: steal_peer_idle")
 PERFCOUNTER(migrate_queued,         "csched: migrate_queued")
 PERFCOUNTER(migrate_running,        "csched: migrate_running")
+PERFCOUNTER(migrate_kicked_away,    "csched: migrate_kicked_away")
 PERFCOUNTER(vcpu_hot,               "csched: vcpu_hot")
 
 PERFCOUNTER(need_flush_tlb_flush,   "PG_need_flush tlb flushes")

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgn-0007vW-L8; Wed, 12 Dec 2012 03:08:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgm-0007ux-Or
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:53 +0000
Received: from [85.158.137.99:21719] by server-5.bemta-3.messagelabs.com id
	A3/41-15136-F35F7C05; Wed, 12 Dec 2012 03:08:47 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355281726!15800889!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24486 invoked from network); 12 Dec 2012 03:08:46 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:46 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so75483eek.32
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=TAIIobiTMG08yifL+Qj1Zdb2kqNNzG+DIRMwYL+3Px0=;
	b=z57u6PM/U06yDmq0eONqDONL4U19TiR0ZE4kG6us5bp7Z3S+vKfeUiN/WYjlo4O4qc
	YPCI1fI2mWLuFOra5Z608sM+RbSC6ILeBP5rNC2jyPVAj8zqdoozir3I2sqBdDtr9hiA
	+vcGZRF/FrgSvnLLWueTe0zIvTVGzYh8O7CQvZ8iS+5rJ/IcrGJVJaygUNwsJbDOW+lb
	U3FZgUHYP71MB7B8KWhm5gJ1PfEPtJ13ZkZ6SwMuxXl/C6OfCKCj8n965O1pC+E7A+IF
	YS7KbBEUb9U8A8pShK2jHHFwwnc6NapZiI5zb07w1Cxqa0Q3P+ZAs+b51jm1MNEgKpQt
	ox/w==
Received: by 10.14.184.131 with SMTP id s3mr847962eem.38.1355281726643;
	Tue, 11 Dec 2012 19:08:46 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.45
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:45 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 68004a57d91a9bf8b37140cd654e9f47abc2d690
Message-Id: <68004a57d91a9bf8b371.1355280775@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:55 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5 of 6 v2] xen: tracing: introduce per-scheduler
 trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes it possible to create scheduler-specific trace records within
each scheduler, without worrying about overlaps, and also without giving
up the ability to recognise them unambiguously. The latter is deemed
useful since, thanks to cpupools, we can have more than one scheduler
running at the same time.

The event ID is 12 bits, and this change uses the upper 3 of them for
the 'scheduler ID'. This means we are limited to 8 schedulers and to
512 scheduler-specific tracing events per scheduler. Both seem
reasonable limits for now.

This also converts the existing credit2 tracing (the only scheduler
generating tracing events up to now) to the new system.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * The event ID generation macro is now called `TRC_SCHED_CLASS_EVT()',
   and has been generalized and put in trace.h, as suggested.
 * The handling of the per-scheduler tracing IDs and masks has been
   restructured, properly naming "ID" the numerical identifiers
   and "MASK" the bitmasks, as requested.

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -29,18 +29,22 @@
 #define d2printk(x...)
 //#define d2printk printk
 
-#define TRC_CSCHED2_TICK        TRC_SCHED_CLASS + 1
-#define TRC_CSCHED2_RUNQ_POS    TRC_SCHED_CLASS + 2
-#define TRC_CSCHED2_CREDIT_BURN TRC_SCHED_CLASS + 3
-#define TRC_CSCHED2_CREDIT_ADD  TRC_SCHED_CLASS + 4
-#define TRC_CSCHED2_TICKLE_CHECK TRC_SCHED_CLASS + 5
-#define TRC_CSCHED2_TICKLE       TRC_SCHED_CLASS + 6
-#define TRC_CSCHED2_CREDIT_RESET TRC_SCHED_CLASS + 7
-#define TRC_CSCHED2_SCHED_TASKLET TRC_SCHED_CLASS + 8
-#define TRC_CSCHED2_UPDATE_LOAD   TRC_SCHED_CLASS + 9
-#define TRC_CSCHED2_RUNQ_ASSIGN   TRC_SCHED_CLASS + 10
-#define TRC_CSCHED2_UPDATE_VCPU_LOAD   TRC_SCHED_CLASS + 11
-#define TRC_CSCHED2_UPDATE_RUNQ_LOAD   TRC_SCHED_CLASS + 12
+/*
+ * Credit2 tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED2_TICK             TRC_SCHED_CLASS_EVT(CSCHED2, 1)
+#define TRC_CSCHED2_RUNQ_POS         TRC_SCHED_CLASS_EVT(CSCHED2, 2)
+#define TRC_CSCHED2_CREDIT_BURN      TRC_SCHED_CLASS_EVT(CSCHED2, 3)
+#define TRC_CSCHED2_CREDIT_ADD       TRC_SCHED_CLASS_EVT(CSCHED2, 4)
+#define TRC_CSCHED2_TICKLE_CHECK     TRC_SCHED_CLASS_EVT(CSCHED2, 5)
+#define TRC_CSCHED2_TICKLE           TRC_SCHED_CLASS_EVT(CSCHED2, 6)
+#define TRC_CSCHED2_CREDIT_RESET     TRC_SCHED_CLASS_EVT(CSCHED2, 7)
+#define TRC_CSCHED2_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(CSCHED2, 8)
+#define TRC_CSCHED2_UPDATE_LOAD      TRC_SCHED_CLASS_EVT(CSCHED2, 9)
+#define TRC_CSCHED2_RUNQ_ASSIGN      TRC_SCHED_CLASS_EVT(CSCHED2, 10)
+#define TRC_CSCHED2_UPDATE_VCPU_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
+#define TRC_CSCHED2_UPDATE_RUNQ_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 12)
 
 /*
  * WARNING: This is still in an experimental phase.  Status and work can be found at the
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -57,6 +57,32 @@
 #define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
 #define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
 
+/*
+ * The highest 3 bits of the last 12 bits of TRC_SCHED_CLASS above are
+ * reserved for encoding what scheduler produced the information. The
+ * actual event is encoded in the last 9 bits.
+ *
+ * This means we have 8 scheduling IDs available (which means at most 8
+ * schedulers generating events) and, in each scheduler, up to 512
+ * different events.
+ */
+#define TRC_SCHED_ID_BITS 3
+#define TRC_SCHED_ID_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
+#define TRC_SCHED_ID_MASK (((1UL<<TRC_SCHED_ID_BITS) - 1) << TRC_SCHED_ID_SHIFT)
+#define TRC_SCHED_EVT_MASK (~(TRC_SCHED_ID_MASK))
+
+/* Per-scheduler IDs, to identify scheduler specific events */
+#define TRC_SCHED_CSCHED   0
+#define TRC_SCHED_CSCHED2  1
+#define TRC_SCHED_SEDF     2
+#define TRC_SCHED_ARINC653 3
+
+/* Per-scheduler tracing */
+#define TRC_SCHED_CLASS_EVT(_c, _e) \
+  ( ( TRC_SCHED_CLASS | \
+      ((TRC_SCHED_##_c << TRC_SCHED_ID_SHIFT) & TRC_SCHED_ID_MASK) ) + \
+    (_e & TRC_SCHED_EVT_MASK) )
+
 /* Trace classes for Hardware */
 #define TRC_HW_PM           0x00801000   /* Power management traces */
 #define TRC_HW_IRQ          0x00802000   /* Traces relating to the handling of IRQs */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgc-0007sz-Ux; Wed, 12 Dec 2012 03:08:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgb-0007st-DW
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:41 +0000
Received: from [85.158.143.35:33648] by server-2.bemta-4.messagelabs.com id
	EB/C5-30861-835F7C05; Wed, 12 Dec 2012 03:08:40 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355281719!13644047!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4932 invoked from network); 12 Dec 2012 03:08:40 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:40 -0000
Received: by mail-ee0-f53.google.com with SMTP id c50so69850eek.26
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:message-id:user-agent:date:from:to:cc;
	bh=CU5Se5yyr5Ehc59LKD7CLnZ9zJOVKFeOO1kBin2n+VY=;
	b=ToRpgJEB86b3QnH8MTmXIzVy1hfM7I0Yku7bkPyNaxHXvTi5kBuLIF0aprqNLCxBd9
	kQhMBGaAgTC6FM8ZSbfIP2r3S8Jc1Z/wt8v5KsRL8+qLtTfVeXwtwKzSqqlXy60Fab/B
	GdIAaSAZTbKTWbrcTrRMqSF1bea/9skoHyqAcZi54sD3P8a/fAUB8+7lCUzf5AWGMWH3
	CJ091SexvM86kUweLlFwGouaQ151239iiOVTufBoPxCh7tRj9wBzy3aK2uPiNX7xyqdt
	AaAT3DYWLZFQTiGmdulBgCCq5gRQGtOUV8Q45aRqoEgwfltgEN/oSOoqjfxhDlAG93M8
	RBPA==
Received: by 10.14.219.3 with SMTP id l3mr892263eep.5.1355281719364;
	Tue, 11 Dec 2012 19:08:39 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.37
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:38 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:50 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 6 v2] xen: sched_credit: fix picking &
 tickling and also add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello everyone,

This is v2 of my previously submitted series about fixing scheduling anomalies
and introducing some tracing in the credit scheduler (with a couple of other
side effects).

All the comments v1 received have been addressed, and the series grew a couple
more patches as I found some other issues, still falling under the broad
description given above. Details are given in the individual changelogs but,
to make review as easy as possible, here is a short overview.

 [1 of 6] xen: sched_credit: improve picking up the idle CPU for a VCPU
 [2 of 6] xen: sched_credit: improve tickling of idle CPUs

These are the fixes for the scheduling anomalies happening during PCPU picking
and tickling, respectively. The latter has already been extensively discussed
(by me and George, mainly); the former is a new --small but nasty-- thing I
discovered during a couple of heavy tracing sessions. :-)

All the benchmarks have been rerun. No big changes in trends or anything: what
held true for v1 still does here (although, honestly, the numbers look even a
little bit better).

 [3 of 6] xen: sched_credit: use current_on_cpu() when appropriate

Is just (an attempt) to improve code readability.

 [4 of 6] xen: tracing: report where a VCPU wakes up

Is just (an attempt) to improve trace readability.

 [5 of 6] xen: tracing: introduce per-scheduler trace event IDs
 [6 of 6] xen: sched_credit: add some tracing

Finally, these enable the per-scheduler trace record generation already
discussed (again, mostly by me and George), reworked as suggested and
requested during the review of v1 of this series.

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticge-0007tC-Dg; Wed, 12 Dec 2012 03:08:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgc-0007sy-Pm
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:43 +0000
Received: from [193.109.254.147:19113] by server-12.bemta-14.messagelabs.com
	id 75/ED-00510-A35F7C05; Wed, 12 Dec 2012 03:08:42 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355281720!8629143!1
X-Originating-IP: [209.85.215.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22455 invoked from network); 12 Dec 2012 03:08:41 -0000
Received: from mail-ea0-f182.google.com (HELO mail-ea0-f182.google.com)
	(209.85.215.182)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:41 -0000
Received: by mail-ea0-f182.google.com with SMTP id a14so51661eaa.41
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=Xepr8ZjbJ19J2F08N9WsB7J7k/0b8FYzBWTfgm8mncA=;
	b=HH+K2Zq3lG7oy34iihjixLxnUmSJywx8PvGStMLz24eLOexhvmiFr9fmd688jKkXbX
	j3wkTmdELDF5bvhXFlaW0dHHX/hMYXhEsk08dc873EmypS5pnhhdSP7ngYeIN4pPzWUg
	V6n5kDN3OvTCgm6sXdJiiy7P3tAiNTKBxEI4Ck7/Z85HOR96/mPP4RPHDekM2H3izBWO
	APJE8JJD1bbi1IWFbgboot75USGlU4H22EVdPnq8RcLgRCElSIT30KMnJnLaa5bcAeKC
	j6hqqOzaIuMedBQ/Zm4+l+WXqzoXn8bGxxq3lFge9FuH+r0XzlONWeEe/DaYkhRHTNZW
	98Sg==
Received: by 10.14.184.134 with SMTP id s6mr816787eem.43.1355281720809;
	Tue, 11 Dec 2012 19:08:40 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:40 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: bced65aa4410b02720647629fe1cc209af17a5c2
Message-Id: <bced65aa4410b0272064.1355280771@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:51 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve picking up
 the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In _csched_cpu_pick() we try to select the best possible CPU for
running a VCPU, considering the characteristics of the underlying
hardware (i.e., how many threads, cores and sockets there are, and
how busy they are). What we want is "the idle execution vehicle with
the most idling neighbours in its grouping".

In order to achieve that, we select a CPU from the VCPU's affinity,
giving preference to its current processor if possible, as the basis
for the comparison with all the other CPUs. The problem is that, to
discount the VCPU itself when computing this "idleness" (in an attempt
to be fair wrt its current processor), we arbitrarily and
unconditionally consider the selected CPU as idle, even when that is
not the case, for instance:
 1. if the CPU is not the one where the VCPU is running (perhaps
    because its affinity was changed);
 2. if the CPU is where the VCPU is running, but it has other VCPUs in
    its runq, so it will not go idle even when the VCPU in question does.

This is exemplified in the trace below:

]  3.466115364 x|------|------| d10v1   22005(2:2:5) 3 [ a 1 8 ]
   ... ... ...
   3.466122856 x|------|------| d10v1 runstate_change d10v1 running->offline
   3.466123046 x|------|------| d?v? runstate_change d32767v0 runnable->running
   ... ... ...
]  3.466126887 x|------|------| d32767v0   28004(2:8:4) 3 [ a 1 8 ]

The 22005(...) line (the first line) means _csched_cpu_pick() was called
for VCPU 1 of domain 10, while it was running on CPU 0, and that it chose
CPU 8, which is busy ('|'), even though there are plenty of idle CPUs.
That happens because, as a consequence of changing the VCPU's affinity,
CPU 8 was chosen as the basis for the comparison, and therefore considered
idle (its bit gets unconditionally set in the bitmask representing the
idle CPUs). The 28004(...) line means the VCPU is woken up and queued on
CPU 8's runq, where it waits for a context switch or a migration in order
to be able to execute.

This change fixes things by only considering the "guessed" CPU idle if
the VCPU in question is both running there and is the only runnable
VCPU there.

While at it, rename the two variables (within _csched_cpu_pick())
counting the numbers of idlers for `cpu' and `nxt' to `nr_idlers_cpu'
and `nr_idlers_nxt', which makes their job a little more evident than
the current `weight_*' names.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -59,6 +59,8 @@
 #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
+/* Is the first element of _cpu's runq its idle vcpu? */
+#define IS_RUNQ_IDLE(_cpu)  (is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
 
 
 /*
@@ -479,9 +481,14 @@ static int
      * distinct cores first and guarantees we don't do something stupid
      * like run two VCPUs on co-hyperthreads while there are idle cores
      * or sockets.
+     *
+     * Notice that, when computing the "idleness" of cpu, we may want to
+     * discount vc. That is, iff vc is the vcpu currently running on cpu
+     * and the only runnable one there, we add cpu to the idlers.
      */
     cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
-    cpumask_set_cpu(cpu, &idlers);
+    if ( current_on_cpu(cpu) == vc && IS_RUNQ_IDLE(cpu) )
+        cpumask_set_cpu(cpu, &idlers);
     cpumask_and(&cpus, &cpus, &idlers);
     cpumask_clear_cpu(cpu, &cpus);
 
@@ -489,7 +496,7 @@ static int
     {
         cpumask_t cpu_idlers;
         cpumask_t nxt_idlers;
-        int nxt, weight_cpu, weight_nxt;
+        int nxt, nr_idlers_cpu, nr_idlers_nxt;
         int migrate_factor;
 
         nxt = cpumask_cycle(cpu, &cpus);
@@ -513,12 +520,12 @@ static int
             cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
         }
 
-        weight_cpu = cpumask_weight(&cpu_idlers);
-        weight_nxt = cpumask_weight(&nxt_idlers);
+        nr_idlers_cpu = cpumask_weight(&cpu_idlers);
+        nr_idlers_nxt = cpumask_weight(&nxt_idlers);
         /* smt_power_savings: consolidate work rather than spreading it */
         if ( sched_smt_power_savings ?
-             weight_cpu > weight_nxt :
-             weight_cpu * migrate_factor < weight_nxt )
+             nr_idlers_cpu > nr_idlers_nxt :
+             nr_idlers_cpu * migrate_factor < nr_idlers_nxt )
         {
             cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
             spc = CSCHED_PCPU(nxt);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -396,6 +396,9 @@ extern struct vcpu *idle_vcpu[NR_CPUS];
 #define is_idle_domain(d) ((d)->domain_id == DOMID_IDLE)
 #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))
 
+#define current_on_cpu(_c) \
+  ( (per_cpu(schedule_data, _c).curr) )
+
 #define DOMAIN_DESTROYED (1<<31) /* assumes atomic_t is >= 32 bits */
 #define put_domain(_d) \
   if ( atomic_dec_and_test(&(_d)->refcnt) ) domain_destroy(_d)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgm-0007uy-8X; Wed, 12 Dec 2012 03:08:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgk-0007uL-PM
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:51 +0000
Received: from [85.158.137.99:62742] by server-14.bemta-3.messagelabs.com id
	95/42-27443-D35F7C05; Wed, 12 Dec 2012 03:08:45 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355281725!18993738!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 813 invoked from network); 12 Dec 2012 03:08:45 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:45 -0000
Received: by mail-ee0-f51.google.com with SMTP id d4so75441eek.24
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=4fqSdE16a7UG7pG0ezKTc2EBhx/379Xqxn76iqTiD9g=;
	b=FrA5S1Gu1rg0+3TLoU9+mErVyX5AamU158HAdAZXNHccf0+FCq+yRJEuQJP0/3QEUt
	6H93oHuDO7y3bNcRXFZUVV95VehG5hu4Mx6X1Wpq2yk5wZHqJ2Xm17wwjV4/e9d6Jyxr
	UbwU6kJtIy/nijX3HpcgUEl23bFEef9tHrC1S/M6R0q/bz7ZXpPfKMsAWlW8XLFSbNh5
	pJD1g1trwDHM3uSqhuQwZjf8qhULvICWdY0pgIzYNdXFuV97OqGEUsqk+ORpzAHtHsG3
	sdBd8nR1r56pV0PLMHFi3Bl2FRY2eacrcfsxi8QCxEHWOMThfoaCK4l9S/TGrxdVZLbi
	WsvA==
Received: by 10.14.221.5 with SMTP id q5mr879126eep.33.1355281725130;
	Tue, 11 Dec 2012 19:08:45 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.43
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:44 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 7a199dea34425e890b311410e3e0deabf9455d25
Message-Id: <7a199dea34425e890b31.1355280774@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:54 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4 of 6 v2] xen: tracing: report where a VCPU
	wakes up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When looking at traces, it turns out to be useful to know where a
waking-up VCPU is being queued. Yes, that is always the CPU where
it ran last, but that information can well be lost in past trace
records!

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -371,7 +371,7 @@ void vcpu_wake(struct vcpu *v)
 
     vcpu_schedule_unlock_irqrestore(v, flags);
 
-    TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
+    TRACE_3D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id, v->processor);
 }
 
 void vcpu_unblock(struct vcpu *v)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticge-0007tC-Dg; Wed, 12 Dec 2012 03:08:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgc-0007sy-Pm
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:43 +0000
Received: from [193.109.254.147:19113] by server-12.bemta-14.messagelabs.com
	id 75/ED-00510-A35F7C05; Wed, 12 Dec 2012 03:08:42 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355281720!8629143!1
X-Originating-IP: [209.85.215.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22455 invoked from network); 12 Dec 2012 03:08:41 -0000
Received: from mail-ea0-f182.google.com (HELO mail-ea0-f182.google.com)
	(209.85.215.182)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:41 -0000
Received: by mail-ea0-f182.google.com with SMTP id a14so51661eaa.41
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=Xepr8ZjbJ19J2F08N9WsB7J7k/0b8FYzBWTfgm8mncA=;
	b=HH+K2Zq3lG7oy34iihjixLxnUmSJywx8PvGStMLz24eLOexhvmiFr9fmd688jKkXbX
	j3wkTmdELDF5bvhXFlaW0dHHX/hMYXhEsk08dc873EmypS5pnhhdSP7ngYeIN4pPzWUg
	V6n5kDN3OvTCgm6sXdJiiy7P3tAiNTKBxEI4Ck7/Z85HOR96/mPP4RPHDekM2H3izBWO
	APJE8JJD1bbi1IWFbgboot75USGlU4H22EVdPnq8RcLgRCElSIT30KMnJnLaa5bcAeKC
	j6hqqOzaIuMedBQ/Zm4+l+WXqzoXn8bGxxq3lFge9FuH+r0XzlONWeEe/DaYkhRHTNZW
	98Sg==
Received: by 10.14.184.134 with SMTP id s6mr816787eem.43.1355281720809;
	Tue, 11 Dec 2012 19:08:40 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:40 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: bced65aa4410b02720647629fe1cc209af17a5c2
Message-Id: <bced65aa4410b0272064.1355280771@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:51 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve picking up
 the idlal CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In _csched_cpu_pick() we try to select the best possible CPU for
running a VCPU, considering the characteristics of the underlying
hardware (i.e., how many threads, core, sockets, and how busy they
are). What we want is "the idle execution vehicle with the most
idling neighbours in its grouping".

In order to achieve it, we select a CPU from the VCPU's affinity,
giving preference to its current processor if possible, as the basis
for the comparison with all the other CPUs. The problem is that, to
discount the VCPU itself when computing this "idleness" (in an attempt
to be fair to its current processor), we arbitrarily and unconditionally
consider the selected CPU as idle, even when it is not, for instance:
 1. if the CPU is not the one where the VCPU is running (perhaps due
    to the affinity being changed);
 2. if the CPU is where the VCPU is running, but it has other VCPUs in
    its runq, so it won't go idle even when the VCPU in question does.

This is exemplified in the trace below:

]  3.466115364 x|------|------| d10v1   22005(2:2:5) 3 [ a 1 8 ]
   ... ... ...
   3.466122856 x|------|------| d10v1 runstate_change d10v1 running->offline
   3.466123046 x|------|------| d?v? runstate_change d32767v0 runnable->running
   ... ... ...
]  3.466126887 x|------|------| d32767v0   28004(2:8:4) 3 [ a 1 8 ]

The 22005(...) line (the first one) means _csched_cpu_pick() was called
for VCPU 1 of domain 10 while it was running on CPU 0, and it chose CPU 8,
which is busy ('|'), even though there are plenty of idle CPUs. That is
because, as a consequence of changing the VCPU's affinity, CPU 8 was
chosen as the basis for the comparison, and therefore considered idle
(its bit gets unconditionally set in the bitmask representing the idle
CPUs). The 28004(...) line means the VCPU is woken up and queued on CPU 8's
runq, where it waits for a context switch or a migration before it can
execute.
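For reference, the event numbers in those trace lines can be decoded with
something like the hypothetical helpers below (not part of this patch;
they assume the usual Xen trace-word layout, with a 12-bit event ID in
the low bits, a 4-bit subclass above it, and the class above that, so
that 0x22005 reads as (2:2:5) and 0x28004 as (2:8:4)):

```c
#include <stdint.h>

/* Hypothetical decoders for a Xen trace event word (illustrative only). */
static inline unsigned trc_class(uint32_t evt)    { return (evt >> 16) & 0xff; }
static inline unsigned trc_subclass(uint32_t evt) { return (evt >> 12) & 0xf;  }
static inline unsigned trc_event(uint32_t evt)    { return evt & 0xfff;        }
```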

This change fixes things by only considering the "guessed" CPU idle if
the VCPU in question is both running there and is its only runnable
VCPU.

While at it, rename the two variables (within _csched_cpu_pick())
that count the number of idlers for `cpu' and `nxt' to `nr_idlers_cpu'
and `nr_idlers_nxt', which makes their purpose more evident than the
current `weight_*' names do.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -59,6 +59,8 @@
 #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
+/* Is the first element of _cpu's runq its idle vcpu? */
+#define IS_RUNQ_IDLE(_cpu)  (is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
 
 
 /*
@@ -479,9 +481,14 @@ static int
      * distinct cores first and guarantees we don't do something stupid
      * like run two VCPUs on co-hyperthreads while there are idle cores
      * or sockets.
+     *
+     * Notice that, when computing the "idleness" of cpu, we may want to
+     * discount vc. That is, iff vc is the currently running and the only
+     * runnable vcpu on cpu, we add cpu to the idlers.
      */
     cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
-    cpumask_set_cpu(cpu, &idlers);
+    if ( current_on_cpu(cpu) == vc && IS_RUNQ_IDLE(cpu) )
+        cpumask_set_cpu(cpu, &idlers);
     cpumask_and(&cpus, &cpus, &idlers);
     cpumask_clear_cpu(cpu, &cpus);
 
@@ -489,7 +496,7 @@ static int
     {
         cpumask_t cpu_idlers;
         cpumask_t nxt_idlers;
-        int nxt, weight_cpu, weight_nxt;
+        int nxt, nr_idlers_cpu, nr_idlers_nxt;
         int migrate_factor;
 
         nxt = cpumask_cycle(cpu, &cpus);
@@ -513,12 +520,12 @@ static int
             cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
         }
 
-        weight_cpu = cpumask_weight(&cpu_idlers);
-        weight_nxt = cpumask_weight(&nxt_idlers);
+        nr_idlers_cpu = cpumask_weight(&cpu_idlers);
+        nr_idlers_nxt = cpumask_weight(&nxt_idlers);
         /* smt_power_savings: consolidate work rather than spreading it */
         if ( sched_smt_power_savings ?
-             weight_cpu > weight_nxt :
-             weight_cpu * migrate_factor < weight_nxt )
+             nr_idlers_cpu > nr_idlers_nxt :
+             nr_idlers_cpu * migrate_factor < nr_idlers_nxt )
         {
             cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
             spc = CSCHED_PCPU(nxt);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -396,6 +396,9 @@ extern struct vcpu *idle_vcpu[NR_CPUS];
 #define is_idle_domain(d) ((d)->domain_id == DOMID_IDLE)
 #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))
 
+#define current_on_cpu(_c) \
+  ( (per_cpu(schedule_data, _c).curr) )
+
 #define DOMAIN_DESTROYED (1<<31) /* assumes atomic_t is >= 32 bits */
 #define put_domain(_d) \
   if ( atomic_dec_and_test(&(_d)->refcnt) ) domain_destroy(_d)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgk-0007uT-O6; Wed, 12 Dec 2012 03:08:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgj-0007u8-LF
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:49 +0000
Received: from [85.158.139.83:48471] by server-10.bemta-5.messagelabs.com id
	62/4A-13383-045F7C05; Wed, 12 Dec 2012 03:08:48 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355281728!29479563!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2604 invoked from network); 12 Dec 2012 03:08:48 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:48 -0000
Received: by mail-ee0-f47.google.com with SMTP id e51so70522eek.34
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=Kbul388qkUEp/4xGqdthzJu4N3M1lLIqHwBGVkbg9bI=;
	b=fH2+rYgvLaGN/rO0Hnjvzz+puSiuB9UeVmchLPjF8RnVFiGSvCQjCpwnHWkL0aLujl
	KH7W9qCwm+UZmKlxGoOfv00D3/twOiqVvV5Tow14hdEkx7xL24O4vzOEAU9X+QFWur6g
	QlbBFWtZZSd/ps0EJWDuF7zma19hVU78Ptj7dv4V2cHul0a42YLxdb2VjtSurdQXoa7D
	DPiEZ3q+Bki+GPC/HnWhGS0WfQL1RWj2Je60gDlDBX6YkVc/IvyVCwdPS4IvPMANTPWM
	m8g12522lAu4ac2sftRdBGCqBXQlwi6CArJqmuHRzBb2EpTqGBsgU87dRU/RI34WkOET
	9S9g==
Received: by 10.14.194.199 with SMTP id m47mr899447een.11.1355281728031;
	Tue, 11 Dec 2012 19:08:48 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.46
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:47 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 036a3bb938a550f2ee0cbdf89cd218a5d2854790
Message-Id: <036a3bb938a550f2ee0c.1355280776@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:56 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6 of 6 v2] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

About tickling, and PCPU selection.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * Dummy `struct d {}', accommodating `cpu' only, removed in favour of
   the much more readable `trace_var(..., sizeof(cpu), &cpu)',
   as suggested.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -21,6 +21,7 @@
 #include <asm/atomic.h>
 #include <xen/errno.h>
 #include <xen/keyhandler.h>
+#include <xen/trace.h>
 
 
 /*
@@ -97,6 +98,18 @@
 
 
 /*
+ * Credit tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED_SCHED_TASKLET TRC_SCHED_CLASS_EVT(CSCHED, 1)
+#define TRC_CSCHED_ACCOUNT_START TRC_SCHED_CLASS_EVT(CSCHED, 2)
+#define TRC_CSCHED_ACCOUNT_STOP  TRC_SCHED_CLASS_EVT(CSCHED, 3)
+#define TRC_CSCHED_STOLEN_VCPU   TRC_SCHED_CLASS_EVT(CSCHED, 4)
+#define TRC_CSCHED_PICKED_CPU    TRC_SCHED_CLASS_EVT(CSCHED, 5)
+#define TRC_CSCHED_TICKLE        TRC_SCHED_CLASS_EVT(CSCHED, 6)
+
+
+/*
  * Boot parameters
  */
 static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
@@ -315,9 +328,18 @@ static inline void
         }
     }
 
-    /* Send scheduler interrupts to designated CPUs */
     if ( !cpumask_empty(&mask) )
+    {
+        if ( unlikely(tb_init_done) )
+        {
+            /* Avoid TRACE_*: saves checking !tb_init_done each step */
+            for_each_cpu(cpu, &mask)
+                trace_var(TRC_CSCHED_TICKLE, 0, sizeof(cpu), &cpu);
+        }
+
+        /* Send scheduler interrupts to designated CPUs */
         cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
+    }
 }
 
 static void
@@ -554,6 +576,8 @@ static int
     if ( commit && spc )
        spc->idle_bias = cpu;
 
+    TRACE_3D(TRC_CSCHED_PICKED_CPU, vc->domain->domain_id, vc->vcpu_id, cpu);
+
     return cpu;
 }
 
@@ -586,6 +610,9 @@ static inline void
         }
     }
 
+    TRACE_3D(TRC_CSCHED_ACCOUNT_START, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
+
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
@@ -608,6 +635,9 @@ static inline void
     {
         list_del_init(&sdom->active_sdom_elem);
     }
+
+    TRACE_3D(TRC_CSCHED_ACCOUNT_STOP, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
 }
 
 static void
@@ -1241,6 +1271,8 @@ csched_runq_steal(int peer_cpu, int cpu,
             if (__csched_vcpu_is_migrateable(vc, cpu))
             {
                 /* We got a candidate. Grab it! */
+                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
+                         vc->domain->domain_id, vc->vcpu_id);
                 SCHED_VCPU_STAT_CRANK(speer, migrate_q);
                 SCHED_STAT_CRANK(migrate_queued);
                 WARN_ON(vc->is_urgent);
@@ -1401,6 +1433,7 @@ csched_schedule(
     /* Tasklet work (which runs in idle VCPU context) overrides all else. */
     if ( tasklet_work_scheduled )
     {
+        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
         snext = CSCHED_VCPU(idle_vcpu[cpu]);
         snext->pri = CSCHED_PRI_TS_BOOST;
     }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgc-0007sz-Ux; Wed, 12 Dec 2012 03:08:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgb-0007st-DW
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:41 +0000
Received: from [85.158.143.35:33648] by server-2.bemta-4.messagelabs.com id
	EB/C5-30861-835F7C05; Wed, 12 Dec 2012 03:08:40 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355281719!13644047!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4932 invoked from network); 12 Dec 2012 03:08:40 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:40 -0000
Received: by mail-ee0-f53.google.com with SMTP id c50so69850eek.26
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:message-id:user-agent:date:from:to:cc;
	bh=CU5Se5yyr5Ehc59LKD7CLnZ9zJOVKFeOO1kBin2n+VY=;
	b=ToRpgJEB86b3QnH8MTmXIzVy1hfM7I0Yku7bkPyNaxHXvTi5kBuLIF0aprqNLCxBd9
	kQhMBGaAgTC6FM8ZSbfIP2r3S8Jc1Z/wt8v5KsRL8+qLtTfVeXwtwKzSqqlXy60Fab/B
	GdIAaSAZTbKTWbrcTrRMqSF1bea/9skoHyqAcZi54sD3P8a/fAUB8+7lCUzf5AWGMWH3
	CJ091SexvM86kUweLlFwGouaQ151239iiOVTufBoPxCh7tRj9wBzy3aK2uPiNX7xyqdt
	AaAT3DYWLZFQTiGmdulBgCCq5gRQGtOUV8Q45aRqoEgwfltgEN/oSOoqjfxhDlAG93M8
	RBPA==
Received: by 10.14.219.3 with SMTP id l3mr892263eep.5.1355281719364;
	Tue, 11 Dec 2012 19:08:39 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.37
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:38 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:50 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 6 v2] xen: sched_credit: fix picking &
 tickling and also add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello everyone,

This is v2 of my previously submitted series about fixing scheduling anomalies
and introducing some tracing in the credit scheduler (with a couple of other
side effects).

All the comments v1 received have been addressed, and the series grew by a
couple more patches as I found some other issues, still falling under the
broad description given in the above paragraph. Details are in the individual
changelogs but, to make review as easy as possible, here is a short overview.

 [1 of 6] xen: sched_credit: improve picking up the ideal CPU for a VCPU
 [2 of 6] xen: sched_credit: improve tickling of idle CPUs

Are the fixes to the scheduling anomalies, happening during PCPU picking and
tickling, respectively. The latter has already been extensively discussed (by
me and George, mainly); the former is a new --small but nasty-- thing I
discovered during a couple of heavy tracing sessions. :-)

All the benchmarks have been rerun. No big changes in trends or anything:
what held true for v1 still does here (although, honestly, the numbers look
even a little bit better).

 [3 of 6] xen: sched_credit: use current_on_cpu() when appropriate

Is just (an attempt) to improve code readability.

 [4 of 6] xen: tracing: report where a VCPU wakes up

Is just (an attempt) to improve trace readability.

 [5 of 6] xen: tracing: introduce per-scheduler trace event IDs
 [6 of 6] xen: sched_credit: add some tracing

Finally, these are what enables the per-scheduler trace record generation
already discussed (again, mostly by me and George), reworked as suggested
and requested during review of v1 of this series.

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgg-0007tV-Qa; Wed, 12 Dec 2012 03:08:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgf-0007tK-8V
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:45 +0000
Received: from [85.158.139.83:50572] by server-16.bemta-5.messagelabs.com id
	4B/28-09208-C35F7C05; Wed, 12 Dec 2012 03:08:44 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355281723!27708233!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31571 invoked from network); 12 Dec 2012 03:08:44 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:44 -0000
Received: by mail-ea0-f181.google.com with SMTP id k14so48263eaa.26
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=Db7VGsjMakL83P1SjDzUYhhoafNuHGE54jsw3bnQBsU=;
	b=Am/68jjuOHzHxCRc+F1E2KPe8xSmUlp2WkcOZC2SSMpdq/lw0haT7zvMD2ijOmMcNg
	exZx4AIDA7pGE8VlTnc9B9eqnRYtIKb48VuB+A9YMMOrQNdj4bnuGzhVTeZYuixLMGgG
	OqHGvTOooDK38xit2Wqs3RxHKSLtU2nfzCw66TdPxQ3oJ3F9HjjAeGsrNfiQYpnNxwVJ
	xSq2If7epOcjL4lA+fUMv4KzKy1DXZwl+pCHaLLfO1Ese9BNNX8fkLD7arn/HGZ8QcEn
	zgXyg0ENH4EfH4b8qY5JN/7iWjgoguWewyAaAEaHnnmr8IjR8TLHqBfa24eWntXjL4xi
	fqCg==
Received: by 10.14.173.65 with SMTP id u41mr881542eel.13.1355281723794;
	Tue, 11 Dec 2012 19:08:43 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.42
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:43 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 21b142498d469992125190dbca1df68ffd0d0ac6
Message-Id: <21b142498d4699921251.1355280773@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:53 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3 of 6 v2] xen: sched_credit: use
 current_on_cpu() when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

current_on_cpu() was defined by the previous change; using it wherever
appropriate throughout sched_credit.c makes the code easier to read.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -231,7 +231,7 @@ static void burn_credits(struct csched_v
     unsigned int credits;
 
     /* Assert svc is current */
-    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
+    ASSERT( svc == CSCHED_VCPU(current_on_cpu(svc->vcpu->processor)) );
 
     if ( (delta = now - svc->start_time) <= 0 )
         return;
@@ -249,8 +249,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
 static inline void
 __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
 {
-    struct csched_vcpu * const cur =
-        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    struct csched_vcpu * const cur = CSCHED_VCPU(current_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask, idle_mask;
     int idlers_empty;
@@ -387,7 +386,7 @@ csched_alloc_pdata(const struct schedule
         per_cpu(schedule_data, cpu).sched_priv = spc;
 
     /* Start off idling... */
-    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
+    BUG_ON(!is_idle_vcpu(current_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
 
     spin_unlock_irqrestore(&prv->lock, flags);
@@ -730,7 +729,7 @@ csched_vcpu_sleep(const struct scheduler
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( per_cpu(schedule_data, vc->processor).curr == vc )
+    if ( current_on_cpu(vc->processor) == vc )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
     else if ( __vcpu_on_runq(svc) )
         __runq_remove(svc);
@@ -744,7 +743,7 @@ csched_vcpu_wake(const struct scheduler 
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
+    if ( unlikely(current_on_cpu(cpu) == vc) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -1213,7 +1212,7 @@ static struct csched_vcpu *
 csched_runq_steal(int peer_cpu, int cpu, int pri)
 {
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
-    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
+    const struct vcpu * const peer_vcpu = current_on_cpu(peer_cpu);
     struct csched_vcpu *speer;
     struct list_head *iter;
     struct vcpu *vc;
@@ -1502,7 +1501,7 @@ csched_dump_pcpu(const struct scheduler 
     printk("core=%s\n", cpustr);
 
     /* current VCPU */
-    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    svc = CSCHED_VCPU(current_on_cpu(cpu));
     if ( svc )
     {
         printk("\trun: ");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgn-0007vW-L8; Wed, 12 Dec 2012 03:08:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgm-0007ux-Or
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:53 +0000
Received: from [85.158.137.99:21719] by server-5.bemta-3.messagelabs.com id
	A3/41-15136-F35F7C05; Wed, 12 Dec 2012 03:08:47 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355281726!15800889!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24486 invoked from network); 12 Dec 2012 03:08:46 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:46 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so75483eek.32
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=TAIIobiTMG08yifL+Qj1Zdb2kqNNzG+DIRMwYL+3Px0=;
	b=z57u6PM/U06yDmq0eONqDONL4U19TiR0ZE4kG6us5bp7Z3S+vKfeUiN/WYjlo4O4qc
	YPCI1fI2mWLuFOra5Z608sM+RbSC6ILeBP5rNC2jyPVAj8zqdoozir3I2sqBdDtr9hiA
	+vcGZRF/FrgSvnLLWueTe0zIvTVGzYh8O7CQvZ8iS+5rJ/IcrGJVJaygUNwsJbDOW+lb
	U3FZgUHYP71MB7B8KWhm5gJ1PfEPtJ13ZkZ6SwMuxXl/C6OfCKCj8n965O1pC+E7A+IF
	YS7KbBEUb9U8A8pShK2jHHFwwnc6NapZiI5zb07w1Cxqa0Q3P+ZAs+b51jm1MNEgKpQt
	ox/w==
Received: by 10.14.184.131 with SMTP id s3mr847962eem.38.1355281726643;
	Tue, 11 Dec 2012 19:08:46 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.45
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:45 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 68004a57d91a9bf8b37140cd654e9f47abc2d690
Message-Id: <68004a57d91a9bf8b371.1355280775@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:55 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5 of 6 v2] xen: tracing: introduce per-scheduler
 trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes it possible to create scheduler-specific trace records, within
each scheduler, without worrying about overlaps, and also without giving
up the ability to recognise them unambiguously. The latter is useful,
since more than one scheduler can be running at the same time, thanks
to cpupools.

The event ID is 12 bits, and this change uses the upper 3 of them for
the 'scheduler ID'. This means we're limited to 8 schedulers and to
512 scheduler specific tracing events. Both seem reasonable limitations
as of now.

This also converts the existing credit2 tracing (the only scheduler
generating tracing events up to now) to the new system.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * The event ID generation macro is now called `TRC_SCHED_CLASS_EVT()',
   and has been generalized and put in trace.h, as suggested.
 * The handling of per-scheduler tracing IDs and masks has been
   restructured, properly naming "ID" the numerical identifiers
   and "MASK" the bitmasks, as requested.

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -29,18 +29,22 @@
 #define d2printk(x...)
 //#define d2printk printk
 
-#define TRC_CSCHED2_TICK        TRC_SCHED_CLASS + 1
-#define TRC_CSCHED2_RUNQ_POS    TRC_SCHED_CLASS + 2
-#define TRC_CSCHED2_CREDIT_BURN TRC_SCHED_CLASS + 3
-#define TRC_CSCHED2_CREDIT_ADD  TRC_SCHED_CLASS + 4
-#define TRC_CSCHED2_TICKLE_CHECK TRC_SCHED_CLASS + 5
-#define TRC_CSCHED2_TICKLE       TRC_SCHED_CLASS + 6
-#define TRC_CSCHED2_CREDIT_RESET TRC_SCHED_CLASS + 7
-#define TRC_CSCHED2_SCHED_TASKLET TRC_SCHED_CLASS + 8
-#define TRC_CSCHED2_UPDATE_LOAD   TRC_SCHED_CLASS + 9
-#define TRC_CSCHED2_RUNQ_ASSIGN   TRC_SCHED_CLASS + 10
-#define TRC_CSCHED2_UPDATE_VCPU_LOAD   TRC_SCHED_CLASS + 11
-#define TRC_CSCHED2_UPDATE_RUNQ_LOAD   TRC_SCHED_CLASS + 12
+/*
+ * Credit2 tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED2_TICK             TRC_SCHED_CLASS_EVT(CSCHED2, 1)
+#define TRC_CSCHED2_RUNQ_POS         TRC_SCHED_CLASS_EVT(CSCHED2, 2)
+#define TRC_CSCHED2_CREDIT_BURN      TRC_SCHED_CLASS_EVT(CSCHED2, 3)
+#define TRC_CSCHED2_CREDIT_ADD       TRC_SCHED_CLASS_EVT(CSCHED2, 4)
+#define TRC_CSCHED2_TICKLE_CHECK     TRC_SCHED_CLASS_EVT(CSCHED2, 5)
+#define TRC_CSCHED2_TICKLE           TRC_SCHED_CLASS_EVT(CSCHED2, 6)
+#define TRC_CSCHED2_CREDIT_RESET     TRC_SCHED_CLASS_EVT(CSCHED2, 7)
+#define TRC_CSCHED2_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(CSCHED2, 8)
+#define TRC_CSCHED2_UPDATE_LOAD      TRC_SCHED_CLASS_EVT(CSCHED2, 9)
+#define TRC_CSCHED2_RUNQ_ASSIGN      TRC_SCHED_CLASS_EVT(CSCHED2, 10)
+#define TRC_CSCHED2_UPDATE_VCPU_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
+#define TRC_CSCHED2_UPDATE_RUNQ_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 12)
 
 /*
  * WARNING: This is still in an experimental phase.  Status and work can be found at the
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -57,6 +57,32 @@
 #define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
 #define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
 
+/*
+ * The highest 3 bits of the last 12 bits of TRC_SCHED_CLASS above are
+ * reserved for encoding what scheduler produced the information. The
+ * actual event is encoded in the last 9 bits.
+ *
+ * This means we have 8 scheduling IDs available (which means at most 8
+ * schedulers generating events) and, in each scheduler, up to 512
+ * different events.
+ */
+#define TRC_SCHED_ID_BITS 3
+#define TRC_SCHED_ID_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
+#define TRC_SCHED_ID_MASK (((1UL<<TRC_SCHED_ID_BITS) - 1) << TRC_SCHED_ID_SHIFT)
+#define TRC_SCHED_EVT_MASK (~(TRC_SCHED_ID_MASK))
+
+/* Per-scheduler IDs, to identify scheduler specific events */
+#define TRC_SCHED_CSCHED   0
+#define TRC_SCHED_CSCHED2  1
+#define TRC_SCHED_SEDF     2
+#define TRC_SCHED_ARINC653 3
+
+/* Per-scheduler tracing */
+#define TRC_SCHED_CLASS_EVT(_c, _e) \
+  ( ( TRC_SCHED_CLASS | \
+      ((TRC_SCHED_##_c << TRC_SCHED_ID_SHIFT) & TRC_SCHED_ID_MASK) ) + \
+    (_e & TRC_SCHED_EVT_MASK) )
+
 /* Trace classes for Hardware */
 #define TRC_HW_PM           0x00801000   /* Power management traces */
 #define TRC_HW_IRQ          0x00802000   /* Traces relating to the handling of IRQs */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgh-0007te-5r; Wed, 12 Dec 2012 03:08:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgf-0007tG-22
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:45 +0000
Received: from [85.158.139.211:7199] by server-2.bemta-5.messagelabs.com id
	B5/64-16162-C35F7C05; Wed, 12 Dec 2012 03:08:44 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355281722!18319869!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10687 invoked from network); 12 Dec 2012 03:08:42 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:42 -0000
Received: by mail-ee0-f42.google.com with SMTP id c41so70342eek.15
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=HffxcbKZHV59wN41HF6I5mOtAAIXL3+p1piVGnKWN9s=;
	b=zpQPGKQiszQrBhi0qkj+R7G6A1GGLXifMOTSjDx2HaeHtPS0f93smnE1x3dEUTBNME
	2SrqZi/NbfjF4mUvlqNaHAC3wwNbEK8Mv9epYgF2D24EwwgmcZYVwUCZFgnVcS9x/AsL
	gy5S0AQCTyT/+/O+2b23VHuFyf4sxztUTggAks1GO6iTfUk1zjB2weu3Y2UfmTL1JHR0
	exQD+s8kqkjoh9S0CrlVGXkAHBtFJKa5aVIssUkobr95DC/Wbn3UzJyYnY2rV9gHFfjP
	PosVmmzcumeyNcF6dvgEzEuRDwUeJt1at22eaHDQ5MPVYW6j3FPJifj5M4jMdjPkP9cw
	xNag==
Received: by 10.14.0.3 with SMTP id 3mr854398eea.16.1355281722465;
	Tue, 11 Dec 2012 19:08:42 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:41 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 69860abfe7aec775f7817964bab1333f60215cdf
Message-Id: <69860abfe7aec775f781.1355280772@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:52 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 6 v2] xen: sched_credit: improve tickling
	of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Right now, when a VCPU wakes up, we check whether it should preempt
what is running on the PCPU, and whether or not the waking VCPU can
be migrated (by tickling some idlers). However, this can result in
suboptimal or even wrong behaviour, as explained here:

 http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html

This change, instead, when deciding which PCPU(s) to tickle upon
VCPU wake-up, considers both what is likely to happen on the PCPU
where the wakeup occurs, and whether or not there are idlers where
the woken-up VCPU can run. In fact, if there are, we can avoid
interrupting the running VCPU. Only if there are no such PCPUs are
preemption and migration the way to go.

This has been tested (on top of the previous change) by running
the following benchmarks inside 2, 6 and 10 VMs, concurrently, on
a shared host, each with 2 VCPUs and 960 MB of memory (host had 16
ways and 12 GB RAM).

1) All VMs had 'cpus="all"' in their config file.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 50.078467 +/- 1.6676162 | 49.673667 +/- 0.0094321 |
 | 6   | 63.259472 +/- 0.1137586 | 61.680011 +/- 1.0208723 |
 | 10  | 91.246797 +/- 0.1154008 | 90.396720 +/- 1.5900423 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 485.56333 +/- 6.0527356 | 487.83167 +/- 0.7602850 |
 | 6   | 401.36278 +/- 1.9745916 | 409.96778 +/- 3.6761092 |
 | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 43150.63 +/- 1359.5616  | 43275.427 +/- 606.28185 |
 | 6   | 29274.29 +/- 1024.4042  | 29716.189 +/- 1290.1878 |
 | 10  | 19061.28 +/- 512.88561  | 19192.599 +/- 605.66058 |


2) All VMs had their VCPUs statically pinned to the host's PCPUs.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
 | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
 | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
 | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
 | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 49591.057 +/- 952.93384 | 49594.195 +/- 799.57976 |
 | 6   | 33538.247 +/- 1089.2115 | 33671.758 +/- 1077.6806 |
 | 10  | 21927.870 +/- 831.88742 | 21891.131 +/- 563.37929 |


The numbers show that the change either has no or a very limited impact
(specjbb2005 case) or, when it does have some impact, that impact is a
real performance improvement (sysbench-memory case).

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * Rewritten as per George's suggestion, in order to improve readability.
 * Killed some of the stats, with the only exception of `tickle_idlers_none'
   and `tickle_idlers_some'. They don't make things look that terrible and
   I think they could be useful.
 * The preemption+migration of the currently running VCPU has been turned
   into a migration request, instead of just a tickle. I traced this
   some more, and it looks like that is the way to go. Tickling is
   not effective here, because the woken PCPU would expect cur to be out
   of the scheduler tail, which is likely false (cur->is_running is
   still set to 1).

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -133,6 +133,7 @@ struct csched_vcpu {
         uint32_t state_idle;
         uint32_t migrate_q;
         uint32_t migrate_r;
+        uint32_t kicked_away;
     } stats;
 #endif
 };
@@ -251,54 +252,67 @@ static inline void
     struct csched_vcpu * const cur =
         CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
-    cpumask_t mask;
+    cpumask_t mask, idle_mask;
+    int idlers_empty;
 
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    /* If strictly higher priority than current VCPU, signal the CPU */
-    if ( new->pri > cur->pri )
+    idlers_empty = cpumask_empty(prv->idlers);
+
+    /*
+     * If the pcpu is idle, or there are no idlers and the new
+     * vcpu is a higher priority than the old vcpu, run it here.
+     *
+     * If there are idle cpus, first try to find one suitable to run
+     * new, so we can avoid preempting cur.  If we cannot find a
+     * suitable idler on which to run new, run it here, but try to
+     * find a suitable idler on which to run cur instead.
+     */
+    if ( cur->pri == CSCHED_PRI_IDLE
+         || (idlers_empty && new->pri > cur->pri) )
     {
-        if ( cur->pri == CSCHED_PRI_IDLE )
-            SCHED_STAT_CRANK(tickle_local_idler);
-        else if ( cur->pri == CSCHED_PRI_TS_OVER )
-            SCHED_STAT_CRANK(tickle_local_over);
-        else if ( cur->pri == CSCHED_PRI_TS_UNDER )
-            SCHED_STAT_CRANK(tickle_local_under);
-        else
-            SCHED_STAT_CRANK(tickle_local_other);
-
+        if ( cur->pri != CSCHED_PRI_IDLE )
+            SCHED_STAT_CRANK(tickle_idlers_none);
         cpumask_set_cpu(cpu, &mask);
     }
+    else if ( !idlers_empty )
+    {
+        /* Check whether or not there are idlers that can run new */
+        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
 
-    /*
-     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
-     * let them know there is runnable work in the system...
-     */
-    if ( cur->pri > CSCHED_PRI_IDLE )
-    {
-        if ( cpumask_empty(prv->idlers) )
+        /*
+         * If there are no suitable idlers for new, and it's higher
+         * priority than cur, ask the scheduler to migrate cur away.
+         * We have to act like this (instead of just waking some of
+         * the idlers suitable for cur) because cur is running.
+         *
+         * If there are suitable idlers for new, no matter priorities,
+         * leave cur alone (as it is running and is, likely, cache-hot)
+         * and wake some of them (which is waking up and so is, likely,
+         * cache cold anyway).
+         */
+        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
         {
             SCHED_STAT_CRANK(tickle_idlers_none);
+            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
+            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
+            SCHED_STAT_CRANK(migrate_kicked_away);
+            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+            cpumask_set_cpu(cpu, &mask);
         }
-        else
+        else if ( !cpumask_empty(&idle_mask) )
         {
-            cpumask_t idle_mask;
-
-            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
-            if ( !cpumask_empty(&idle_mask) )
+            /* Which of the idlers suitable for new shall we wake up? */
+            SCHED_STAT_CRANK(tickle_idlers_some);
+            if ( opt_tickle_one_idle )
             {
-                SCHED_STAT_CRANK(tickle_idlers_some);
-                if ( opt_tickle_one_idle )
-                {
-                    this_cpu(last_tickle_cpu) = 
-                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
-                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
-                }
-                else
-                    cpumask_or(&mask, &mask, &idle_mask);
+                this_cpu(last_tickle_cpu) =
+                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
+                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
             }
-            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
+            else
+                cpumask_or(&mask, &mask, &idle_mask);
         }
     }
 
@@ -1456,13 +1470,14 @@ csched_dump_vcpu(struct csched_vcpu *svc
     {
         printk(" credit=%i [w=%u]", atomic_read(&svc->credit), sdom->weight);
 #ifdef CSCHED_STATS
-        printk(" (%d+%u) {a/i=%u/%u m=%u+%u}",
+        printk(" (%d+%u) {a/i=%u/%u m=%u+%u (k=%u)}",
                 svc->stats.credit_last,
                 svc->stats.credit_incr,
                 svc->stats.state_active,
                 svc->stats.state_idle,
                 svc->stats.migrate_q,
-                svc->stats.migrate_r);
+                svc->stats.migrate_r,
+                svc->stats.kicked_away);
 #endif
     }
 
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -39,10 +39,6 @@ PERFCOUNTER(vcpu_wake_runnable,     "csc
 PERFCOUNTER(vcpu_wake_not_runnable, "csched: vcpu_wake_not_runnable")
 PERFCOUNTER(vcpu_park,              "csched: vcpu_park")
 PERFCOUNTER(vcpu_unpark,            "csched: vcpu_unpark")
-PERFCOUNTER(tickle_local_idler,     "csched: tickle_local_idler")
-PERFCOUNTER(tickle_local_over,      "csched: tickle_local_over")
-PERFCOUNTER(tickle_local_under,     "csched: tickle_local_under")
-PERFCOUNTER(tickle_local_other,     "csched: tickle_local_other")
 PERFCOUNTER(tickle_idlers_none,     "csched: tickle_idlers_none")
 PERFCOUNTER(tickle_idlers_some,     "csched: tickle_idlers_some")
 PERFCOUNTER(load_balance_idle,      "csched: load_balance_idle")
@@ -52,6 +48,7 @@ PERFCOUNTER(steal_trylock_failed,   "csc
 PERFCOUNTER(steal_peer_idle,        "csched: steal_peer_idle")
 PERFCOUNTER(migrate_queued,         "csched: migrate_queued")
 PERFCOUNTER(migrate_running,        "csched: migrate_running")
+PERFCOUNTER(migrate_kicked_away,    "csched: migrate_kicked_away")
 PERFCOUNTER(vcpu_hot,               "csched: vcpu_hot")
 
 PERFCOUNTER(need_flush_tlb_flush,   "PG_need_flush tlb flushes")


From xen-devel-bounces@lists.xen.org Wed Dec 12 03:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 03:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Ticgm-0007uy-8X; Wed, 12 Dec 2012 03:08:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Ticgk-0007uL-PM
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 03:08:51 +0000
Received: from [85.158.137.99:62742] by server-14.bemta-3.messagelabs.com id
	95/42-27443-D35F7C05; Wed, 12 Dec 2012 03:08:45 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355281725!18993738!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 813 invoked from network); 12 Dec 2012 03:08:45 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 03:08:45 -0000
Received: by mail-ee0-f51.google.com with SMTP id d4so75441eek.24
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 19:08:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=4fqSdE16a7UG7pG0ezKTc2EBhx/379Xqxn76iqTiD9g=;
	b=FrA5S1Gu1rg0+3TLoU9+mErVyX5AamU158HAdAZXNHccf0+FCq+yRJEuQJP0/3QEUt
	6H93oHuDO7y3bNcRXFZUVV95VehG5hu4Mx6X1Wpq2yk5wZHqJ2Xm17wwjV4/e9d6Jyxr
	UbwU6kJtIy/nijX3HpcgUEl23bFEef9tHrC1S/M6R0q/bz7ZXpPfKMsAWlW8XLFSbNh5
	pJD1g1trwDHM3uSqhuQwZjf8qhULvICWdY0pgIzYNdXFuV97OqGEUsqk+ORpzAHtHsG3
	sdBd8nR1r56pV0PLMHFi3Bl2FRY2eacrcfsxi8QCxEHWOMThfoaCK4l9S/TGrxdVZLbi
	WsvA==
Received: by 10.14.221.5 with SMTP id q5mr879126eep.33.1355281725130;
	Tue, 11 Dec 2012 19:08:45 -0800 (PST)
Received: from [127.0.1.1] (ip-181-153.sn1.eutelia.it. [62.94.181.153])
	by mx.google.com with ESMTPS id 46sm53299432eeg.4.2012.12.11.19.08.43
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 11 Dec 2012 19:08:44 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 7a199dea34425e890b311410e3e0deabf9455d25
Message-Id: <7a199dea34425e890b31.1355280774@Solace>
In-Reply-To: <patchbomb.1355280770@Solace>
References: <patchbomb.1355280770@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 03:52:54 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4 of 6 v2] xen: tracing: report where a VCPU
	wakes up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When looking at traces, it turns out to be useful to know where a
waking-up VCPU is being queued. Yes, that is always the CPU where
it ran last, but that information may well be lost in older trace
records!

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -371,7 +371,7 @@ void vcpu_wake(struct vcpu *v)
 
     vcpu_schedule_unlock_irqrestore(v, flags);
 
-    TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
+    TRACE_3D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id, v->processor);
 }
 
 void vcpu_unblock(struct vcpu *v)


From xen-devel-bounces@lists.xen.org Wed Dec 12 06:07:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 06:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TifTA-0001QC-He; Wed, 12 Dec 2012 06:07:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TifT8-0001Q7-MD
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 06:06:58 +0000
Received: from [193.109.254.147:14815] by server-15.bemta-14.messagelabs.com
	id AF/32-05116-10F18C05; Wed, 12 Dec 2012 06:06:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355292417!5037703!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk5OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27664 invoked from network); 12 Dec 2012 06:06:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 06:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,264,1355097600"; 
   d="scan'208";a="77417"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 06:06:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 06:06:42 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TifSs-0002DM-Jh;
	Wed, 12 Dec 2012 06:06:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TifSs-00070P-4V;
	Wed, 12 Dec 2012 06:06:42 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14672-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 06:06:42 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14672: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14672 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14672/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14667
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14667

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  e32c114016f7
baseline version:
 xen                  1206a3526673

------------------------------------------------------------
People who touched revisions under test:
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=e32c114016f7
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing e32c114016f7
+ branch=xen-4.2-testing
+ revision=e32c114016f7
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r e32c114016f7 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 06:07:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 06:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TifTA-0001QC-He; Wed, 12 Dec 2012 06:07:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TifT8-0001Q7-MD
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 06:06:58 +0000
Received: from [193.109.254.147:14815] by server-15.bemta-14.messagelabs.com
	id AF/32-05116-10F18C05; Wed, 12 Dec 2012 06:06:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355292417!5037703!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDk5OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27664 invoked from network); 12 Dec 2012 06:06:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 06:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,264,1355097600"; 
   d="scan'208";a="77417"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 06:06:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 06:06:42 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TifSs-0002DM-Jh;
	Wed, 12 Dec 2012 06:06:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TifSs-00070P-4V;
	Wed, 12 Dec 2012 06:06:42 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14672-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 06:06:42 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14672: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14672 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14672/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14667
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14667

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  e32c114016f7
baseline version:
 xen                  1206a3526673

------------------------------------------------------------
People who touched revisions under test:
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=e32c114016f7
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing e32c114016f7
+ branch=xen-4.2-testing
+ revision=e32c114016f7
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r e32c114016f7 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 07:17:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 07:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TigZD-000212-Ln; Wed, 12 Dec 2012 07:17:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1TigZC-00020x-1b
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 07:17:18 +0000
Received: from [193.109.254.147:13324] by server-6.bemta-14.messagelabs.com id
	66/B9-25153-D7F28C05; Wed, 12 Dec 2012 07:17:17 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355296635!10056676!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6300 invoked from network); 12 Dec 2012 07:17:15 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 07:17:15 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so122797wgb.32
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 23:17:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=0CyIC5Rei5v2kVuaH2ZrGQmAb9+fpx2tMX9wYDtgJZ4=;
	b=UigAMudNqN1yYR5QSK5Yzb3Ldjv9zv1r/R2uGRotOByuUDuFCTJyFksiUJzF5ESSeZ
	lKNudqLnDf/SB0JHoSpS6tqtNGOlEmhQdBFkv2hhNmI/M+8BYX+arob6zEr/T2yi37Ur
	HsSiRxMcng2LT9H11FiiDKfdNKuBgMmB5Am4jglwagw4n+aakVqZXb9NYQmHxt1lw/2B
	VdqtbILBznlup+LWl8Y1GKQg5ZeB6rNgUP9cswvm8EG49IDgtHKtU8rWNGdaxtIhS63B
	uSIGQHLeMQW3QvVvEv4a5gqtghjjU8YwHJIgxe9+zfmYqQpNH5lAANjLiGuv79UR7Y+t
	PrSQ==
MIME-Version: 1.0
Received: by 10.180.86.36 with SMTP id m4mr21048847wiz.5.1355296635607; Tue,
	11 Dec 2012 23:17:15 -0800 (PST)
Received: by 10.194.64.194 with HTTP; Tue, 11 Dec 2012 23:17:15 -0800 (PST)
In-Reply-To: <CANq0ewsQcSR-k4QJWo6MMS5sT+cof68QRd3hMnXejyvTiVotxA@mail.gmail.com>
References: <CANq0ewtOt=LabAgGWmCqu=g4quZP1KD8u2CEyiAVDPCBCvSHHg@mail.gmail.com>
	<CANq0ewsQcSR-k4QJWo6MMS5sT+cof68QRd3hMnXejyvTiVotxA@mail.gmail.com>
Date: Wed, 12 Dec 2012 12:47:15 +0530
Message-ID: <CANq0ews63CfDKkeqa4aK8kGCtfv9ka6p3dH6FX+kfO2Np=kLNw@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3943941161852703430=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3943941161852703430==
Content-Type: multipart/alternative; boundary=f46d0442859c87daa004d0a29657

--f46d0442859c87daa004d0a29657
Content-Type: text/plain; charset=ISO-8859-1

Hello,
        I am a postgraduate student and I want to do research on live
migration of virtual machines using Xen, so I need help deciding what I can
do in this area. How can I optimize the precopy algorithm? Or else, what
can I do so that live migration of a VM occurs with good performance?


Hoping for a favorable reply.

regards,
DigvijaySingh

--f46d0442859c87daa004d0a29657--


--===============3943941161852703430==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3943941161852703430==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 08:06:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 08:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TihJz-0002oR-6x; Wed, 12 Dec 2012 08:05:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TihJx-0002oM-UV
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 08:05:38 +0000
Received: from [85.158.143.99:38713] by server-2.bemta-4.messagelabs.com id
	25/6E-30861-1DA38C05; Wed, 12 Dec 2012 08:05:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355299526!22238927!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2227 invoked from network); 12 Dec 2012 08:05:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 08:05:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 08:05:25 +0000
Message-Id: <50C848CB02000078000AFD38@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 08:05:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
	<528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
	<50C7664D02000078000AFBE1@nat28.tlf.novell.com>
	<50C76C30.20307@citrix.com>
In-Reply-To: <50C76C30.20307@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 V4] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.12.12 at 18:24, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 11/12/12 15:58, Jan Beulich wrote:
>>>>> On 11.12.12 at 16:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> -    /* Would it be better to replace the trap vector here? */
>>> -    set_nmi_callback(crash_nmi_callback);
>>> +
>>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>>> +     * invokes do_nmi_crash (above), which causes them to write state and
>>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>>> +     * cause it to return to this function ASAP.
>>> +     */
>>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>>> +        if ( idt_tables[i] )
>>> +        {
>>> +
>>> +            if ( i == cpu )
>>> +            {
>>> +                /* Disable the interrupt stack tables for this MCE and
>>> +                 * NMI handler (shortly to become a nop) as there is a 1
>>> +                 * instruction race window where NMIs could be
>>> +                 * re-enabled and corrupt the exception frame, leaving
>>> +                 * us unable to continue on this crash path (which half
>>> +                 * defeats the point of using the nop handler in the
>>> +                 * first place).
>>> +                 *
>>> +                 * This update is safe from a security point of view, as
>>> +                 * this pcpu is never going to try to sysret back to a
>>> +                 * PV vcpu.
>>> +                 */
>> This comment appears to have become stale with the latest
>> changes.
> 
> Ok
> 
>>
>>> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
>> No need for the extra & on functions and arrays.
> 
> I was continuing the prevailing style from traps.c.  Personally, I prefer
> the & notation for function pointers, to be consistent with regular 
> pointers, even though I am aware that it is not strictly needed.  I am 
> not fussed if you wish to insist on one style, but we do have mixed 
> styles across the codebase.

I'll leave that to your preference then.

>>> +{
>>> +    ASSERT(gate->b == new->b);
>>> +    *(volatile unsigned long *)&gate->a = new->a;
>> volatile? And if so, why not volatile-qualify the function parameter?
> 
> I was looking to avoid having the compiler inline this function and 
> decide that it can merge *gate_addr and idte together, resulting in 
> multiple writes to gate_addr.  Without the volatile, the compiler is 
> free to make this optimization, which puts us back in the racy case we 
> are trying to avoid.  The reason for avoiding a volatile function 
> parameter is so that the assertion's equality check can be optimized 
> where possible.

Optimizing ASSERT() expressions is pointless. If you really want
volatile here, put it on the parameter. Casts, since they carry some
risk, ought to be avoided wherever possible.
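To make the write-tearing concern concrete, here is a minimal,
self-contained sketch. The structure and names (fake_gate_t,
update_gate_lower) are invented for illustration and are not Xen's
actual idt_entry_t code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for an x86-64 IDT entry: two 64-bit words.
 * Invented for this sketch; not Xen's idt_entry_t. */
typedef struct { uint64_t a, b; } fake_gate_t;

/* Update only the low word of a live descriptor.  Performing the
 * store through a volatile-qualified lvalue forces the compiler to
 * emit it as exactly one write, which it may neither merge with
 * neighbouring stores nor split, so an NMI arriving mid-update
 * never observes a half-written entry. */
static inline void update_gate_lower(fake_gate_t *gate,
                                     const fake_gate_t *src)
{
    assert(gate->b == src->b);               /* high word must match */
    *(volatile uint64_t *)&gate->a = src->a;
}
```

The same single-write guarantee could instead be expressed by
volatile-qualifying the parameter (volatile fake_gate_t *gate), which
avoids the cast, as suggested above.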

>>> +} while (0)
>>> +
>>> +/* Update the lower half handler of an IDT Entry, without changing any
>>> + * other configuration. */
>>> +static inline void _update_gate_addr_lower(idt_entry_t * gate, void * addr)
>> Any reason for this being an inline function and the other being
>> a macro?
>>
>> Jan
> 
> Where possible, I prefer static inline over macros because of the 
> added type checking etc.
> 
> _set_gate_lower is based on _set_gate, so I used the _set_gate style.
> 
> Again, I am not overly fussed if you have a strong preference for 
> style.  (Both of these styles are mixed across the codebase and have no 
> indication of preference in CODING_STYLE)

Generally, when a parameter is referenced more than once, or
when local variables are needed, we should prefer inline
functions over macros. With the exception of this causing
dependency problems, of course.
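The double-evaluation hazard that motivates this rule can be shown with
a tiny sketch (all names here are invented; any side-effecting argument
triggers the problem):

```c
#include <assert.h>

/* Counter for observing how many times an argument expression is
 * evaluated. */
static int calls;
static int next(void) { return ++calls; }

/* The macro references each parameter more than once, so a
 * side-effecting argument is evaluated more than once. */
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

/* The inline function evaluates each argument exactly once and is
 * type-checked. */
static inline int max_fn(int a, int b) { return a > b ? a : b; }
```

Starting from calls == 0, MAX_MACRO(next(), 0) leaves calls at 2 (one
evaluation in the comparison, one in the selected branch), while
max_fn(next(), 0) leaves it at 1; that behavioural difference is what
the inline-function preference avoids.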

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 08:06:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 08:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TihJz-0002oR-6x; Wed, 12 Dec 2012 08:05:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TihJx-0002oM-UV
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 08:05:38 +0000
Received: from [85.158.143.99:38713] by server-2.bemta-4.messagelabs.com id
	25/6E-30861-1DA38C05; Wed, 12 Dec 2012 08:05:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355299526!22238927!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2227 invoked from network); 12 Dec 2012 08:05:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 08:05:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 08:05:25 +0000
Message-Id: <50C848CB02000078000AFD38@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 08:05:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <patchbomb.1355240055@andrewcoop.uk.xensource.com>
	<528bd4030389d6b9ec6e.1355240057@andrewcoop.uk.xensource.com>
	<50C7664D02000078000AFBE1@nat28.tlf.novell.com>
	<50C76C30.20307@citrix.com>
In-Reply-To: <50C76C30.20307@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 V4] x86/kexec: Change NMI and MCE
 handling on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.12.12 at 18:24, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 11/12/12 15:58, Jan Beulich wrote:
>>>>> On 11.12.12 at 16:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> -    /* Would it be better to replace the trap vector here? */
>>> -    set_nmi_callback(crash_nmi_callback);
>>> +
>>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>>> +     * invokes do_nmi_crash (above), which causes them to write state and
>>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>>> +     * cause it to return to this function ASAP.
>>> +     */
>>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>>> +        if ( idt_tables[i] )
>>> +        {
>>> +
>>> +            if ( i == cpu )
>>> +            {
>>> +                /* Disable the interrupt stack tables for this MCE and
>>> +                 * NMI handler (shortly to become a nop) as there is a 1
>>> +                 * instruction race window where NMIs could be
>>> +                 * re-enabled and corrupt the exception frame, leaving
>>> +                 * us unable to continue on this crash path (which half
>>> +                 * defeats the point of using the nop handler in the
>>> +                 * first place).
>>> +                 *
>>> +                 * This update is safe from a security point of view, as
>>> +                 * this pcpu is never going to try to sysret back to a
>>> +                 * PV vcpu.
>>> +                 */
>> This comment appears to have become stale with the latest
>> changes.
> 
> Ok
> 
>>
>>> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
>> No need for the extra & on functions and arrays.
> 
> I was continuing the prevailing style from traps.c.  Personally, I prefer 
> the & notation for function pointers, to be consistent with regular 
> pointers, even though I am aware that it is not strictly needed.  I am 
> not fussed if you wish to insist on one style, but we do have mixed 
> styles across the codebase.

I'll leave that to your preference then.

>>> +{
>>> +    ASSERT(gate->b == new->b);
>>> +    *(volatile unsigned long *)&gate->a = new->a;
>> volatile? And if so, why not volatile-qualify the function parameter?
> 
> I was looking to avoid having the compiler inline this function and 
> decide that it can merge *gate_addr and idte together, resulting in 
> multiple writes to gate_addr.  Without the volatile, the compiler is 
> free to make this optimization, which puts us back in the racy case we 
> are trying to avoid.  The reason for avoiding the volatile function 
> parameter is so the assertion equality can be optimized where possible.

Optimizing ASSERT() expressions is pointless. If you really want
volatile here, put it on the parameter. Casts, which carry some risk
of their own, ought to be avoided wherever possible.
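[Editor's sketch of the suggestion above: put the volatile on the parameter
rather than casting at the write site, so the compiler may not merge or split
the lower-word store. The struct layout and all names here are invented for
illustration and do not match Xen's actual idt_entry_t.]

```c
#include <assert.h>

/* Hypothetical stand-in for an IDT entry; the real Xen layout differs.
 * 'a' is the word being rewritten, 'b' the word that must not change. */
typedef struct { unsigned long a, b; } fake_idt_entry_t;

/* Pointer-to-volatile destination: each access through 'gate' is a
 * distinct volatile access, so no cast is needed at the store. */
static inline void update_gate_lower(volatile fake_idt_entry_t *gate,
                                     const fake_idt_entry_t *repl)
{
    /* Upper words must already agree; only the lower word changes. */
    assert(gate->b == repl->b);
    gate->a = repl->a;      /* single volatile store, no cast */
}
```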

>>> +} while (0)
>>> +
>>> +/* Update the lower half handler of an IDT Entry, without changing any
>>> + * other configuration. */
>>> +static inline void _update_gate_addr_lower(idt_entry_t * gate, void * addr)
>> Any reason for this being an inline function and the other being
>> a macro?
>>
>> Jan
> 
> Where possible, I prefer static inlines to macros because of the 
> added type checking etc.
> 
> _set_gate_lower is based on _set_gate, so I used the _set_gate style.
> 
> Again, I am not overly fussed if you have a strong preference for 
> style.  (Both of these styles are mixed across the codebase and have no 
> indication of preference in CODING_STYLE)

Generally, when a parameter is referenced more than once, or
when local variables are needed, we should prefer inline
functions over macros. With the exception of this causing
dependency problems, of course.
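[Editor's note: the guideline above can be shown with a contrived sketch, not
Xen code; all names here are invented. A macro re-evaluates a parameter at
every textual use, while a static inline evaluates it exactly once and gets
type checking.]

```c
#include <assert.h>

/* Count how often the argument expression is evaluated by a macro
 * versus by a static inline function. */
static int calls;

static int next_val(void)
{
    return ++calls;
}

/* The macro pastes its argument in twice, so a side-effecting
 * argument expression runs twice. */
#define TWICE_MACRO(x) ((x) + (x))

/* The inline function evaluates its argument exactly once and
 * type-checks it as an int. */
static inline int twice_inline(int x)
{
    return x + x;
}
```

The flip side, as noted above, is that an inline function needs the full
types of its parameters visible in the header, which is where the
dependency problems come from.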

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 09:43:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 09:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiipw-0003kv-CB; Wed, 12 Dec 2012 09:42:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tiipu-0003kq-Tr
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 09:42:43 +0000
Received: from [85.158.139.83:30971] by server-3.bemta-5.messagelabs.com id
	D0/7C-25441-29158C05; Wed, 12 Dec 2012 09:42:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355305361!29367431!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4400 invoked from network); 12 Dec 2012 09:42:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 09:42:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 09:42:40 +0000
Message-Id: <50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 09:42:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,<konrad.wilk@oracle.com>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
 backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> backend_changed might be called multiple times, which will leak
> be->mode. Make sure it will be called only once. Remove some unneeded
> checks. Also the be->mode string was leaked, release the memory on
> device shutdown.

So did I miss some discussion here? I haven't seen any confirmation
that this function is indeed supposed to be called just once.

Also, as said previously, if indeed it is to be called just once,
removing the watch during/after the first invocation would seem
to be the more appropriate thing to do.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 09:46:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 09:46:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiitU-0003tc-24; Wed, 12 Dec 2012 09:46:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiitS-0003tX-Pf
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 09:46:23 +0000
Received: from [85.158.138.51:41411] by server-3.bemta-3.messagelabs.com id
	59/E9-31588-64258C05; Wed, 12 Dec 2012 09:45:42 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355305539!20603560!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18041 invoked from network); 12 Dec 2012 09:45:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 09:45:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; 
   d="scan'208";a="81624"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 09:45:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 09:45:39 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tiisl-0003Tp-13;
	Wed, 12 Dec 2012 09:45:39 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tiisk-0005KG-UD;
	Wed, 12 Dec 2012 09:45:38 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14673-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 09:45:38 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14673: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14673 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14673/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14671
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14671

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ef8c1b607b10
baseline version:
 xen                  ef8c1b607b10

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 09:48:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 09:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiiv5-0003z2-IQ; Wed, 12 Dec 2012 09:48:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Tiiv3-0003yr-T2
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 09:48:01 +0000
Received: from [85.158.139.211:33849] by server-3.bemta-5.messagelabs.com id
	E6/56-25441-0D258C05; Wed, 12 Dec 2012 09:48:00 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355305680!17565598!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MjM4NjY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1MjM4NjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10486 invoked from network); 12 Dec 2012 09:48:00 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 09:48:00 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFJjy0PF1k=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-071-014.pools.arcor-ip.net [88.65.71.14])
	by smtp.strato.de (joses mo21) (RZmta 31.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 602d92oBC9WIfU ;
	Wed, 12 Dec 2012 10:47:50 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id B16951884C; Wed, 12 Dec 2012 10:47:49 +0100 (CET)
Date: Wed, 12 Dec 2012 10:47:49 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121212094749.GA3382@aepfle.de>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
	<50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5558 (2012-10-16)
Cc: xen-devel@lists.xen.org, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
	backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, Jan Beulich wrote:

> >>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> > backend_changed might be called multiple times, which will leak
> > be->mode. Make sure it will be called only once. Remove some unneeded
> > checks. Also the be->mode string was leaked, release the memory on
> > device shutdown.
> 
> So did I miss some discussion here? I haven't seen any
> confirmation of this function indeed being supposed to be called
> just once.
> 
> Also, as said previously, if indeed it is to be called just once,
> removing the watch during/after the first invocation would seem
> to be the more appropriate thing to do.

Does the API allow the called function to disable its own watch?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 09:51:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 09:51:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiiy4-0004Ad-6N; Wed, 12 Dec 2012 09:51:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tiiy2-0004AU-GA
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 09:51:06 +0000
Received: from [85.158.137.99:60500] by server-14.bemta-3.messagelabs.com id
	62/29-27443-98358C05; Wed, 12 Dec 2012 09:51:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355305863!13891257!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18813 invoked from network); 12 Dec 2012 09:51:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 09:51:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 09:51:03 +0000
Message-Id: <50C8619502000078000AFD80@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 09:51:01 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] 4.2.1-rc2 and 4.1.4-rc2 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All,

The current plan is for these to be the final RCs on both trees, so
this is the last call for any remaining backport requests.

And of course - please test!

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 09:53:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 09:53:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tij0Q-0004Ku-OO; Wed, 12 Dec 2012 09:53:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tij0P-0004Kp-Pv
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 09:53:33 +0000
Received: from [85.158.143.99:33218] by server-3.bemta-4.messagelabs.com id
	B9/17-18211-D1458C05; Wed, 12 Dec 2012 09:53:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1355306009!22725695!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4231 invoked from network); 12 Dec 2012 09:53:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 09:53:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 09:53:29 +0000
Message-Id: <50C8622802000078000AFD96@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 09:53:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
	<50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
	<20121212094749.GA3382@aepfle.de>
In-Reply-To: <20121212094749.GA3382@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
 backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.12.12 at 10:47, Olaf Hering <olaf@aepfle.de> wrote:
> On Wed, Dec 12, Jan Beulich wrote:
> 
>> >>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
>> > backend_changed might be called multiple times, which will leak
>> > be->mode. Make sure it will be called only once. Remove some unneeded
>> > checks. Also the be->mode string was leaked, release the memory on
>> > device shutdown.
>> 
>> So did I miss some discussion here? I haven't seen any
>> confirmation of this function indeed being supposed to be called
>> just once.
>> 
>> Also, as said previously, if indeed it is to be called just once,
>> removing the watch during/after the first invocation would seem
>> to be the more appropriate thing to do.
> 
> Does the API allow the called function to disable its own watch?

That is what would need looking into (and why I said "during/after").

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 09:59:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 09:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tij5h-0004Y9-Iw; Wed, 12 Dec 2012 09:59:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tij5g-0004Y2-2G
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 09:59:00 +0000
Received: from [85.158.143.99:59209] by server-3.bemta-4.messagelabs.com id
	85/9F-18211-36558C05; Wed, 12 Dec 2012 09:58:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1355306338!22726447!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26942 invoked from network); 12 Dec 2012 09:58:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 09:58:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 09:38:27 +0000
Message-Id: <50C85E9F02000078000AFD65@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 09:38:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
	<40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.12.12 at 02:03, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>> On Tue, Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
>> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>> > > What if this check was done in the routines that provide the
>> > > software static buffers, and there tried to provide a nice contiguous
>> > > DMA swatch of pages?
>> >
>> > Yes, this approach also came to our mind, but it requires modifying the
>> > driver itself. If so, the driver must not use such static buffers (e.g.,
>> > from kmalloc) for DMA even if the buffer is contiguous on native.
>> 
>> I am a bit lost here.
>> 
>> Is the issue you found only with drivers that do not use the DMA API?
>> Can you perhaps point me to the code that triggered this fix in the
>> first place?
> 
> Yes, we hit this issue with a specific SAS device/driver that calls into
> libata-core code; see ata_dev_read_id(), called from
> ata_dev_reread_id() in drivers/ata/libata-core.c.
> 
> In that function the target buffer is (void *)dev->link->ap->sector_buf,
> a 512-byte static buffer which unfortunately crosses a page
> boundary.

I wonder whether such use of sg_init_one()/sg_set_buf() is correct
in the first place. While there aren't any restrictions documented for
its use, one clearly can't pass in whatever one wants (a pointer into
vmalloc()-ed memory, for instance, won't work afaict).

I didn't go through all other users of it, but quite a few of the uses
elsewhere look similarly questionable.

>> I am still not completely clear on what you had in mind. The one method I
>> thought of that might help here is to have Xen-SWIOTLB track which
>> memory ranges were exchanged (so xen_swiotlb_fixup would save the *buf
>> and the size for each call to xen_create_contiguous_region in a list or
>> array).
>> 
>> When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called, they would
>> consult said array/list to see if the region they retrieved crosses said
>> 2MB chunks. If so... and here I am unsure of what would be the best way
>> to proceed.
> 
> We thought we could solve the issue in several ways:
> 
> 1) As in the previous patch I sent out, check the DMA region in 
> xen_swiotlb_map_page() and xen_swiotlb_map_sg_attr(), and if the region 
> crosses a page boundary, exchange the memory and copy the content. However, 
> this has a race condition: while copying the memory content (the patch 
> introduced two memory copies), other code may also access the page and 
> observe incorrect values.

That's why, after mapping a buffer (or SG list), one has to call the
sync functions before looking at the data. Any race of the kind you
describe is therefore a programming error.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 10:05:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:05:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijBQ-0004oU-Pu; Wed, 12 Dec 2012 10:04:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TijBP-0004oO-Bf
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:04:55 +0000
Received: from [85.158.143.99:46665] by server-1.bemta-4.messagelabs.com id
	CE/39-28401-6C658C05; Wed, 12 Dec 2012 10:04:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355306694!23798145!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19353 invoked from network); 12 Dec 2012 10:04:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 10:04:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 10:04:53 +0000
Message-Id: <50C864D402000078000AFDB0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 10:04:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
In-Reply-To: <bced65aa4410b0272064.1355280771@Solace>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.12.12 at 03:52, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -59,6 +59,8 @@
>  #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
>  #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
>  #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
> +/* Is the first element of _cpu's runq its idle vcpu? */
> +#define IS_RUNQ_IDLE(_cpu)  (is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
>  
>  
>  /*
> @@ -479,9 +481,14 @@ static int
>       * distinct cores first and guarantees we don't do something stupid
>       * like run two VCPUs on co-hyperthreads while there are idle cores
>       * or sockets.
> +     *
> +     * Notice that, when computing the "idleness" of cpu, we may want to
> +     * discount vc. That is, iff vc is the currently running and the only
> +     * runnable vcpu on cpu, we add cpu to the idlers.
>       */
>      cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    cpumask_set_cpu(cpu, &idlers);
> +    if ( current_on_cpu(cpu) == vc && IS_RUNQ_IDLE(cpu) )
> +        cpumask_set_cpu(cpu, &idlers);
>      cpumask_and(&cpus, &cpus, &idlers);
>      cpumask_clear_cpu(cpu, &cpus);
>  
> @@ -489,7 +496,7 @@ static int
>      {
>          cpumask_t cpu_idlers;
>          cpumask_t nxt_idlers;
> -        int nxt, weight_cpu, weight_nxt;
> +        int nxt, nr_idlers_cpu, nr_idlers_nxt;
>          int migrate_factor;
>  
>          nxt = cpumask_cycle(cpu, &cpus);
> @@ -513,12 +520,12 @@ static int
>              cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
>          }
>  
> -        weight_cpu = cpumask_weight(&cpu_idlers);
> -        weight_nxt = cpumask_weight(&nxt_idlers);
> +        nr_idlers_cpu = cpumask_weight(&cpu_idlers);
> +        nr_idlers_nxt = cpumask_weight(&nxt_idlers);
>          /* smt_power_savings: consolidate work rather than spreading it */
>          if ( sched_smt_power_savings ?
> -             weight_cpu > weight_nxt :
> -             weight_cpu * migrate_factor < weight_nxt )
> +             nr_idlers_cpu > nr_idlers_nxt :
> +             nr_idlers_cpu * migrate_factor < nr_idlers_nxt )
>          {
>              cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
>              spc = CSCHED_PCPU(nxt);

Despite you mentioning this in the description, these last two hunks
are, afaict, only renaming variables (and even that is debatable, as
the current names aren't really misleading imo), and hence I don't
think they belong in a patch that clearly has the potential for causing
(performance) regressions.

That said - I don't think it will (and, what's more, I'm agreeable to
the change itself).

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -396,6 +396,9 @@ extern struct vcpu *idle_vcpu[NR_CPUS];
>  #define is_idle_domain(d) ((d)->domain_id == DOMID_IDLE)
>  #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))
>  
> +#define current_on_cpu(_c) \
> +  ( (per_cpu(schedule_data, _c).curr) )
> +

This, imo, really belongs in sched-if.h.

Plus - what's the point of double parentheses, when in fact none
at all would be needed?

And finally, why "_c" and not just "c"?

Jan

>  #define DOMAIN_DESTROYED (1<<31) /* assumes atomic_t is >= 32 bits */
>  #define put_domain(_d) \
>    if ( atomic_dec_and_test(&(_d)->refcnt) ) domain_destroy(_d)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 10:17:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijNg-000501-2s; Wed, 12 Dec 2012 10:17:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TijNe-0004zw-8E
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:17:34 +0000
Received: from [85.158.138.51:29308] by server-16.bemta-3.messagelabs.com id
	84/6D-27634-8B958C05; Wed, 12 Dec 2012 10:17:28 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355307447!20461123!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjI2OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31794 invoked from network); 12 Dec 2012 10:17:27 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 10:17:27 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 95C5B1553;
	Wed, 12 Dec 2012 12:17:12 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 0207220060; Wed, 12 Dec 2012 12:17:12 +0200 (EET)
Date: Wed, 12 Dec 2012 12:17:11 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121212101711.GD8912@reaktio.net>
References: <50C8619502000078000AFD80@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C8619502000078000AFD80@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2.1-rc2 and 4.1.4-rc2 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, 2012 at 09:51:01AM +0000, Jan Beulich wrote:
> All,
> 
> the current plan is for these to be the final RCs on both trees, so
> this is the last call for any backport requests.
>

Xen 4.1 blktap2 bugfix patches posted here:

http://lists.xen.org/archives/html/xen-devel/2012-11/msg00220.html
http://lists.xen.org/archives/html/xen-devel/2012-11/msg00221.html

Both acked-by / signed-off-by Ian Campbell.

Also the XZ patch for 4.1.4?
http://lists.xen.org/archives/html/xen-devel/2012-12/msg00152.html

> And of course - please test!
> 

I'll try to find some time for testing!

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 10:19:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijPb-00054n-Jn; Wed, 12 Dec 2012 10:19:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TijPa-00054g-2p
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:19:34 +0000
Received: from [85.158.143.99:33370] by server-2.bemta-4.messagelabs.com id
	70/B4-30861-53A58C05; Wed, 12 Dec 2012 10:19:33 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1355307555!19498845!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20566 invoked from network); 12 Dec 2012 10:19:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 10:19:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; d="asc'?scan'208";a="82763"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 10:19:16 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 10:19:14 +0000
Message-ID: <1355307553.3992.32.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 12 Dec 2012 11:19:13 +0100
In-Reply-To: <50C864D402000078000AFDB0@nat28.tlf.novell.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Keir Fraser <keir@xen.org>, George
	Dunlap <george.dunlap@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0810598598097089303=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0810598598097089303==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-69b/YboXRwmnXqCMlsxH"

--=-69b/YboXRwmnXqCMlsxH
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-12-12 at 10:04 +0000, Jan Beulich wrote:
> > -        weight_cpu = cpumask_weight(&cpu_idlers);
> > -        weight_nxt = cpumask_weight(&nxt_idlers);
> > +        nr_idlers_cpu = cpumask_weight(&cpu_idlers);
> > +        nr_idlers_nxt = cpumask_weight(&nxt_idlers);
> >          /* smt_power_savings: consolidate work rather than spreading it */
> >          if ( sched_smt_power_savings ?
> > -             weight_cpu > weight_nxt :
> > -             weight_cpu * migrate_factor < weight_nxt )
> > +             nr_idlers_cpu > nr_idlers_nxt :
> > +             nr_idlers_cpu * migrate_factor < nr_idlers_nxt )
> >          {
> >              cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> >              spc = CSCHED_PCPU(nxt);
>
> Despite you mentioning this in the description, these last two hunks
> are, afaict, only renaming variables (and that's even debatable, as
> the current names aren't really misleading imo), and hence I don't
> think belong in a patch that clearly has the potential for causing
> (performance) regressions.
>
Ok, I think I can live with the current names too... Just a matter of
taste. :-)

> That said - I don't think it will (and even more, I'm agreeable to the
> change done).
>
It has been benchmarked, together with the next change, and the results
are in the changelog of 2/6. The numbers there show that the combination
of those two changes is much more of an improvement than anything else,
at least for the workloads I considered (which include sysbench and
specjbb2005).

Anyway, I think I see your point, and I can either move the rename
somewhere else or kill it entirely.

> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -396,6 +396,9 @@ extern struct vcpu *idle_vcpu[NR_CPUS];
> >  #define is_idle_domain(d) ((d)->domain_id == DOMID_IDLE)
> >  #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))
> >
> > +#define current_on_cpu(_c) \
> > +  ( (per_cpu(schedule_data, _c).curr) )
> > +
>
> This, imo, really belongs in sched-if.h.
>
Ok.

> Plus - what's the point of double parentheses, when in fact none
> at all would be needed?
>
> And finally, why "_c" and not just "c"?
>
Nothing particular, just "personal macro style", I guess, which I can
convert to what you ask and resend.

Thanks,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-69b/YboXRwmnXqCMlsxH
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDIWiEACgkQk4XaBE3IOsTJpACgoQLHtamf1x64g6aEqFAkOyQe
QOoAnR8DLTmnTa92Ti9fHduOp8Yj/UnO
=pncH
-----END PGP SIGNATURE-----

--=-69b/YboXRwmnXqCMlsxH--


--===============0810598598097089303==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0810598598097089303==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 10:25:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijUp-0005Iw-DE; Wed, 12 Dec 2012 10:24:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TijUo-0005Iq-Qy
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:24:58 +0000
Received: from [85.158.137.99:26909] by server-10.bemta-3.messagelabs.com id
	66/F2-07616-57B58C05; Wed, 12 Dec 2012 10:24:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355307892!13525679!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21815 invoked from network); 12 Dec 2012 10:24:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 10:24:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 10:24:51 +0000
Message-Id: <50C8698202000078000AFDD2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 10:24:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Pasi Kärkkäinen" <pasik@iki.fi>
References: <50C8619502000078000AFD80@nat28.tlf.novell.com>
	<20121212101711.GD8912@reaktio.net>
In-Reply-To: <20121212101711.GD8912@reaktio.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2.1-rc2 and 4.1.4-rc2 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.12.12 at 11:17, Pasi Kärkkäinen<pasik@iki.fi> wrote:
> On Wed, Dec 12, 2012 at 09:51:01AM +0000, Jan Beulich wrote:
>> All,
>>
>> the current plan is for these to be the final RCs on both trees, so
>> this is the last call for eventual backport requests.
>>
> 
> Xen 4.1 blktap2 bugfix patches posted here:
> 
> http://lists.xen.org/archives/html/xen-devel/2012-11/msg00220.html
> http://lists.xen.org/archives/html/xen-devel/2012-11/msg00221.html
> 
> Both acked-by / signed-off-by Ian Campbell.

I had seen your ping to IanJ, and I can only defer to him on those
ones.

> Also the XZ patch for 4.1.4?
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00152.html

I had already indicated that I consider it too late for 4.1.4; 4.1.5
is going to be an option.

Jan

From xen-devel-bounces@lists.xen.org Wed Dec 12 10:32:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:32:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijbT-0005Tn-C6; Wed, 12 Dec 2012 10:31:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TijbR-0005Ti-Lp
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:31:49 +0000
Received: from [193.109.254.147:18365] by server-11.bemta-14.messagelabs.com
	id 92/AC-02659-51D58C05; Wed, 12 Dec 2012 10:31:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355308247!8788076!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4716 invoked from network); 12 Dec 2012 10:30:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 10:30:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 10:30:46 +0000
Message-Id: <50C86AE602000078000AFDE1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 10:30:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
	<1355307553.3992.32.camel@Abyss>
In-Reply-To: <1355307553.3992.32.camel@Abyss>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	GeorgeDunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idlal CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.12.12 at 11:19, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> On Wed, 2012-12-12 at 10:04 +0000, Jan Beulich wrote: 
>> Despite you mentioning this in the description, these last two hunks
>> are, afaict, only renaming variables (and that's even debatable, as
>> the current names aren't really misleading imo), and hence I don't
>> think belong in a patch that clearly has the potential for causing
>> (performance) regressions.
>> 
> Ok, I think I can live with the current names too... Just a matter of
> taste. :-)
> 
>> That said - I don't think it will (and even more, I'm agreeable to the
>> change done).
>> 
> It has been benchmarked, together with the next change, and the results
> are in the changelog of 2/6. The numbers there show that the combination
> of those two changes is much more of an improvement than anything else,
> at least for the workloads I considered (which include sysbench and
> specjbb2005).
> 
> Anyway, I think I see your point, and I can either move the rename
> somewhere else or kill it entirely.

Yes please; I'll leave it to George to decide upon an eventual
separate renaming patch.

Btw., when you resend, can you please also fix the subject, so
grepping the changeset titles for "idle" would actually hit on this
change?

Thanks, Jan



From xen-devel-bounces@lists.xen.org Wed Dec 12 10:34:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijeC-0005aI-VM; Wed, 12 Dec 2012 10:34:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TijeB-0005aA-JN
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:34:39 +0000
Received: from [85.158.137.99:45870] by server-2.bemta-3.messagelabs.com id
	05/39-11239-EBD58C05; Wed, 12 Dec 2012 10:34:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355308477!19050705!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29071 invoked from network); 12 Dec 2012 10:34:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 10:34:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; 
   d="scan'208";a="83263"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 10:34:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 10:34:37 +0000
Message-ID: <1355308476.10554.11.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 12 Dec 2012 10:34:36 +0000
In-Reply-To: <50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
	<50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
 backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-12 at 09:42 +0000, Jan Beulich wrote:
> >>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> > backend_changed might be called multiple times, which will leak
> > be->mode. Make sure it will be called only once. Remove some unneeded
> > checks. Also the be->mode string was leaked, release the memory on
> > device shutdown.
> 
> So did I miss some discussion here? I haven't seen any
> confirmation of this function indeed being supposed to be called
> just once.
> 
> Also, as said previously, if indeed it is to be called just once,
> removing the watch during/after the first invocation would seem
> to be the more appropriate thing to do.

The watch fires (often needlessly) when you first register it so in the
common case it is going to be called twice. Of course that first time
should abort early on so perhaps that's a moot point.

Ian.



From xen-devel-bounces@lists.xen.org Wed Dec 12 10:41:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tijkl-0005mV-QB; Wed, 12 Dec 2012 10:41:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tijkl-0005m0-6I
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:41:27 +0000
Received: from [85.158.143.35:37710] by server-2.bemta-4.messagelabs.com id
	A7/C7-30861-65F58C05; Wed, 12 Dec 2012 10:41:26 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355308683!16989457!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29205 invoked from network); 12 Dec 2012 10:38:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 10:38:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; d="asc'?scan'208";a="83392"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 10:38:03 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 10:38:02 +0000
Message-ID: <1355308681.3992.34.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 12 Dec 2012 11:38:01 +0100
In-Reply-To: <50C86AE602000078000AFDE1@nat28.tlf.novell.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
	<1355307553.3992.32.camel@Abyss>
	<50C86AE602000078000AFDE1@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idlal CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-12 at 10:30 +0000, Jan Beulich wrote:
> > Anyway, I think I see your point, and I can either move the rename
> > somewhere else or kill it entirely.
>
> Yes please; I'll leave it to George to decide upon an eventual
> separate renaming patch.
>
Ok.

> Btw., when you resend, can you please also fix the subject, so
> grepping the changeset titles for "idle" would actually hit on this
> change?
>
Oops! My bad, sorry for that. I sure will.

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


From xen-devel-bounces@lists.xen.org Wed Dec 12 10:41:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tijkl-0005mV-QB; Wed, 12 Dec 2012 10:41:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tijkl-0005m0-6I
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:41:27 +0000
Received: from [85.158.143.35:37710] by server-2.bemta-4.messagelabs.com id
	A7/C7-30861-65F58C05; Wed, 12 Dec 2012 10:41:26 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355308683!16989457!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29205 invoked from network); 12 Dec 2012 10:38:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 10:38:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; d="asc'?scan'208";a="83392"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 10:38:03 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 10:38:02 +0000
Message-ID: <1355308681.3992.34.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 12 Dec 2012 11:38:01 +0100
In-Reply-To: <50C86AE602000078000AFDE1@nat28.tlf.novell.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
	<1355307553.3992.32.camel@Abyss>
	<50C86AE602000078000AFDE1@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idlal CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2887060042266147812=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2887060042266147812==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-9C+J7uJPPhfh4NPZUzyB"

--=-9C+J7uJPPhfh4NPZUzyB
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-12-12 at 10:30 +0000, Jan Beulich wrote:
> > Anyway, I think I see your point, and I can either move the rename
> > somewhere else or kill it entirely.
>
> Yes please; I'll leave it to George to decide upon an eventual
> separate renaming patch.
>
Ok.

> Btw., when you resend, can you please also fix the subject, so
> grepping the changeset titles for "idle" would actually hit on this
> change?
>
Oops! My bad, sorry for that. I sure will.

Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-9C+J7uJPPhfh4NPZUzyB
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDIXokACgkQk4XaBE3IOsRYkACfW82MPGf7Fo54LMs6ZXakkf0G
0C8AoKakl0Smyj09hvf/6G7ACyxW1c53
=4OTA
-----END PGP SIGNATURE-----

--=-9C+J7uJPPhfh4NPZUzyB--


--===============2887060042266147812==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2887060042266147812==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 10:43:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tijmf-0005rF-Ba; Wed, 12 Dec 2012 10:43:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tijme-0005rA-EP
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:43:24 +0000
Received: from [85.158.139.83:14709] by server-12.bemta-5.messagelabs.com id
	87/E0-02275-BCF58C05; Wed, 12 Dec 2012 10:43:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355309002!28806658!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8627 invoked from network); 12 Dec 2012 10:43:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 10:43:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; 
   d="scan'208";a="83565"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 10:43:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 10:43:22 +0000
Message-ID: <1355309001.10554.13.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Wed, 12 Dec 2012 10:43:21 +0000
In-Reply-To: <50C78D84.1030404@citrix.com>
References: <50C78D84.1030404@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Driver domains and device handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-11 at 19:46 +0000, Roger Pau Monné wrote:
> just launch Qemu, and if we decide to use an in-kernel initiator we only
> have to launch a hotplug script with something like: iscsiadm -m node -T
> <iqn> -p <ip:port>.

Isn't there also an iscsiadm login which is needed at some point?

In any case almost any iscsiadm command has the potential to be slow
compared to the amount of downtime we would like to aim for during a
migration.

> I'm sure there's people on the list with more experience than me on this
> field, and I would like to ask for some use-cases where this
> "preparatory" phase would be useful, and what actions will be performed
> on it.

It might be worth including xen-api in this discussion, since the xapi guys
have a fair bit of knowledge of the requirements here.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 12 10:47:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 10:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TijqP-000644-6c; Wed, 12 Dec 2012 10:47:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TijqN-00063x-KY
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 10:47:15 +0000
Received: from [193.109.254.147:46990] by server-7.bemta-14.messagelabs.com id
	4C/AD-08102-1B068C05; Wed, 12 Dec 2012 10:47:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355309111!8673659!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0ODAwOTY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0ODAwOTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24812 invoked from network); 12 Dec 2012 10:45:14 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 10:45:14 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFJjy0PF1k=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-071-014.pools.arcor-ip.net [88.65.71.14])
	by smtp.strato.de (joses mo47) (RZmta 31.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id K034a0oBCAU0lz ;
	Wed, 12 Dec 2012 11:45:01 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id E1A881884C; Wed, 12 Dec 2012 11:45:00 +0100 (CET)
Date: Wed, 12 Dec 2012 11:45:00 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121212104500.GB3382@aepfle.de>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
	<50C85F9F02000078000AFD6E@nat28.tlf.novell.com>
	<1355308476.10554.11.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355308476.10554.11.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5558 (2012-10-16)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Jan Beulich <JBeulich@suse.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
 backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, Ian Campbell wrote:

> On Wed, 2012-12-12 at 09:42 +0000, Jan Beulich wrote:
> > >>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> > > backend_changed might be called multiple times, which will leak
> > > be->mode. Make sure it will be called only once. Remove some unneeded
> > > checks. Also the be->mode string was leaked, release the memory on
> > > device shutdown.
> > 
> > So did I miss some discussion here? I haven't seen any
> > confirmation of this function indeed being supposed to be called
> > just once.
> > 
> > Also, as said previously, if indeed it is to be called just once,
> > removing the watch during/after the first invocation would seem
> > to be the more appropriate thing to do.
> 
> The watch fires (often needlessly) when you first register it so in the
> common case it is going to be called twice. Of course that first time
> should abort early on so perhaps that's a moot point.

The current code handles that, if a property does not exist the function
will exit early.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 12 11:16:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 11:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TikII-0006LY-PH; Wed, 12 Dec 2012 11:16:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TikIH-0006LT-Qp
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 11:16:06 +0000
Received: from [85.158.138.51:29902] by server-5.bemta-3.messagelabs.com id
	08/5F-15136-47768C05; Wed, 12 Dec 2012 11:16:04 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355310916!28543650!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29195 invoked from network); 12 Dec 2012 11:15:18 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 11:15:18 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so573721vbi.32
	for <xen-devel@lists.xen.org>; Wed, 12 Dec 2012 03:15:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=aNcJuqggwoYSmJJWfZIOsA+H5U/YZq657456Uhs/Bow=;
	b=VRZ3trvS3y83++jxfyQMoHIrHI6FGh4LbzVyGErVWWgfvnj9qoapuP6TQXNNuRaQ15
	wTgMzXX9b9Q4Zh4Kc04ssbk8wRkjhKP4Pqhw8ucN6xTLIXQnHix6C6boUX2V5hL11bmS
	qIRIPzFoqME+SSjrQ1GOxAfUas8Lv+oxUR4VGvgtLPwFsJ+tXpxsLST2ky/TCBWtpVci
	AQ4/DnUvLnFUPwedZQaZEsUVAKJe92m2IX8ZH0tRgX+acFHeKyIx/30p8vx0mJAuX16o
	MrbWNmfHydh0uw4Jl0xkspS6QYiE5izA9AE4FJYfWYZYsPnlA+WOiABCOQ3IpZySEhxn
	tPyQ==
MIME-Version: 1.0
Received: by 10.52.64.131 with SMTP id o3mr258442vds.116.1355310916415; Wed,
	12 Dec 2012 03:15:16 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Wed, 12 Dec 2012 03:15:16 -0800 (PST)
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
References: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
Date: Wed, 12 Dec 2012 11:15:16 +0000
X-Google-Sender-Auth: PUVTiJHbQ2VtwNAvHwA75ACUBZQ
Message-ID: <CAFLBxZYWKF_gTN08r9M+U3F1Xdj2fRGKxuqC9NGqtMu5ibEvAA@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Ross Philipson <Ross.Philipson@citrix.com>
Cc: Charles Arnold <carnold@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4122976393985216638=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4122976393985216638==
Content-Type: multipart/alternative; boundary=20cf307ac797bbb02c04d0a5e92e

--20cf307ac797bbb02c04d0a5e92e
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 11, 2012 at 2:11 PM, Ross Philipson
<Ross.Philipson@citrix.com>wrote:

> Yea I guess I should follow up on this. I did not manage to get it into
> 4.2 but I thought it had clearance for 4.3. Do I need to resubmit the patch
> set?
>

In open-source, the standard is for the person doing the submitting to keep
track of whether their patch has been applied and remind the list / maintainers
when it seems to have been forgotten.  This has the effect of
"distributing" the work of keeping track of patches, so that a little bit
of work is done by each developer, rather than all the work being done by
the maintainers.  Much more scalable. :-)

 -George


>
> Thanks
> Ross
>
> > -----Original Message-----
> > From: Charles Arnold [mailto:carnold@suse.com]
> > Sent: Monday, December 10, 2012 12:15 PM
> > To: xen-devel
> > Cc: Ross Philipson
> > Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
> >
> > I haven't seen any activity on this feature.  Is it still planned to be
> > included in Xen 4.3?
> >
> > - Charles
> >
> > On Wed, 2012-05-23 at 14:37 +0000, Ross Philipson wrote:
> > > This patch series introduces support of loading external blocks of
> > > firmware into a guest. These blocks can currently contain SMBIOS
> > > and/or ACPI firmware information that is used by HVMLOADER to modify a
> > > guest's virtual firmware at startup. These modules are only used by
> > HVMLOADER.
> > >
> > > The domain building code in libxenguest is passed these firmware
> > > blocks in the xc_hvm_build_args structure and loads them into the new
> > > guest, returning the load address. The loading is done in what will
> > > become the guest's low RAM area just behind the load location for
> > > HVMLOADER. After their use by HVMLOADER they are effectively
> > > discarded. It is the caller's job to load the base address and length
> > > values in xenstore using the paths defined in the new hvm_defs.h
> > > header so HVMLOADER can locate the blocks.
> > >
> > > Currently two types of firmware information are recognized and
> > > processed in the HVMLOADER though this could be extended.
> > >
> > > 1. SMBIOS: The SMBIOS table building code will attempt to retrieve
> > > (for predefined set of structure types) any passed in structures. If a
> > > match is found the passed in table will be used overriding the default
> > > values. In addition, the SMBIOS code will also enumerate and load any
> > > vendor defined structures (in the range of types 128 - 255) that
> > > are passed in. See the hvm_defs.h header for information on the format
> > of this block.
> > > 2. ACPI: Static and secondary descriptor tables can be added to the
> > > set of ACPI tables built by HVMLOADER. The ACPI builder code will
> > > enumerate passed in tables and add them at the end of the secondary
> > > table list. See the hvm_defs.h header for information on the format of
> > > this block.
> > >
> > > There are 4 patches in the series:
> > > 01 - Add HVM definitions header for firmware passthrough support.
> > > 02 - Xen control tools support for loading the firmware blocks.
> > > 03 - Passthrough support for SMBIOS.
> > > 04 - Passthrough support for ACPI.
> > >
> > > Note this is version 3 of this patch set. Some of the differences:
> > >  - Generic module support removed, overall functionality was
> > simplified.
> > >  - Use of xenstore to supply firmware passthrough information to
> > HVMLOADER.
> > >  - Fixed issues pointed out in the SMBIOS processing code.
> > >  - Created defines for the SMBIOS handles in use and switched to using
> > >    the xenstore values in the new hvm_defs.h file.
> > >
> > > Signed-off-by: Ross Philipson <ross.philipson@xxxxxxxxxx>
> > >
> >
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--20cf307ac797bbb02c04d0a5e92e--


--===============4122976393985216638==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4122976393985216638==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 11:16:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 11:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TikII-0006LY-PH; Wed, 12 Dec 2012 11:16:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TikIH-0006LT-Qp
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 11:16:06 +0000
Received: from [85.158.138.51:29902] by server-5.bemta-3.messagelabs.com id
	08/5F-15136-47768C05; Wed, 12 Dec 2012 11:16:04 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355310916!28543650!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29195 invoked from network); 12 Dec 2012 11:15:18 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 11:15:18 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so573721vbi.32
	for <xen-devel@lists.xen.org>; Wed, 12 Dec 2012 03:15:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=aNcJuqggwoYSmJJWfZIOsA+H5U/YZq657456Uhs/Bow=;
	b=VRZ3trvS3y83++jxfyQMoHIrHI6FGh4LbzVyGErVWWgfvnj9qoapuP6TQXNNuRaQ15
	wTgMzXX9b9Q4Zh4Kc04ssbk8wRkjhKP4Pqhw8ucN6xTLIXQnHix6C6boUX2V5hL11bmS
	qIRIPzFoqME+SSjrQ1GOxAfUas8Lv+oxUR4VGvgtLPwFsJ+tXpxsLST2ky/TCBWtpVci
	AQ4/DnUvLnFUPwedZQaZEsUVAKJe92m2IX8ZH0tRgX+acFHeKyIx/30p8vx0mJAuX16o
	MrbWNmfHydh0uw4Jl0xkspS6QYiE5izA9AE4FJYfWYZYsPnlA+WOiABCOQ3IpZySEhxn
	tPyQ==
MIME-Version: 1.0
Received: by 10.52.64.131 with SMTP id o3mr258442vds.116.1355310916415; Wed,
	12 Dec 2012 03:15:16 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Wed, 12 Dec 2012 03:15:16 -0800 (PST)
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
References: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
Date: Wed, 12 Dec 2012 11:15:16 +0000
X-Google-Sender-Auth: PUVTiJHbQ2VtwNAvHwA75ACUBZQ
Message-ID: <CAFLBxZYWKF_gTN08r9M+U3F1Xdj2fRGKxuqC9NGqtMu5ibEvAA@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Ross Philipson <Ross.Philipson@citrix.com>
Cc: Charles Arnold <carnold@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4122976393985216638=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4122976393985216638==
Content-Type: multipart/alternative; boundary=20cf307ac797bbb02c04d0a5e92e

--20cf307ac797bbb02c04d0a5e92e
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 11, 2012 at 2:11 PM, Ross Philipson
<Ross.Philipson@citrix.com>wrote:

> Yea I guess I should follow up on this. I did not manage to get it into
> 4.2 but I thought it had clearance for 4.3. Do I need to resubmit the patch
> set?
>

In open source, the standard is for the person submitting a patch to keep
track of whether it has been applied, and to remind the list / maintainers
when it seems to have been forgotten.  This has the effect of
"distributing" the work of keeping track of patches, so that a little bit
of work is done by each developer, rather than all the work being done by
the maintainers.  Much more scalable. :-)

 -George


>
> Thanks
> Ross
>
> > -----Original Message-----
> > From: Charles Arnold [mailto:carnold@suse.com]
> > Sent: Monday, December 10, 2012 12:15 PM
> > To: xen-devel
> > Cc: Ross Philipson
> > Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
> >
> > I haven't seen any activity on this feature.  Is it still planned to be
> > included in Xen 4.3?
> >
> > - Charles
> >
> > On Wed, 2012-05-23 at 14:37 +0000, Ross Philipson wrote:
> > > This patch series introduces support for loading external blocks of
> > > firmware into a guest. These blocks can currently contain SMBIOS
> > > and/or ACPI firmware information that is used by HVMLOADER to modify a
> > > guest's virtual firmware at startup. These modules are only used by
> > > HVMLOADER.
> > >
> > > The domain building code in libxenguest is passed these firmware
> > > blocks in the xc_hvm_build_args structure and loads them into the new
> > > guest, returning the load address. The loading is done in what will
> > > become the guest's low RAM area, just behind the load location for
> > > HVMLOADER. After their use by HVMLOADER they are effectively
> > > discarded. It is the caller's job to write the base address and length
> > > values into xenstore, using the paths defined in the new hvm_defs.h
> > > header, so HVMLOADER can locate the blocks.
> > >
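The caller-side xenstore step described above can be sketched roughly as below. This is only an illustration: the path layout and key names are invented here (the real ones are defined in hvm_defs.h), and actually writing the keys would additionally need a xenstore client such as libxenstore's xs_write().

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build the xenstore key for one attribute of a firmware block, e.g.
 * "/local/domain/7/hvmloader/smbios/address".  Hypothetical layout;
 * the real paths come from hvm_defs.h. */
static int fw_block_key(char *buf, size_t len, unsigned int domid,
                        const char *block, const char *attr)
{
    return snprintf(buf, len, "/local/domain/%u/hvmloader/%s/%s",
                    domid, block, attr);
}

/* Format a base address or length as a hexadecimal xenstore value. */
static int fw_block_value(char *buf, size_t len, uint64_t v)
{
    return snprintf(buf, len, "0x%" PRIx64, v);
}
```

The toolstack would call these for each block it loaded (address and length keys per block) and write the resulting key/value pairs into xenstore, ideally inside one transaction, before the guest starts.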
> > > Currently two types of firmware information are recognized and
> > > processed by HVMLOADER, though this could be extended.
> > >
> > > 1. SMBIOS: The SMBIOS table building code will attempt to retrieve
> > > (for a predefined set of structure types) any passed-in structures. If
> > > a match is found, the passed-in table is used, overriding the default
> > > values. In addition, the SMBIOS code will also enumerate and load any
> > > vendor-defined structures (in the range of types 128 - 255) that are
> > > passed in. See the hvm_defs.h header for information on the format
> > > of this block.
> > > 2. ACPI: Static and secondary descriptor tables can be added to the
> > > set of ACPI tables built by HVMLOADER. The ACPI builder code will
> > > enumerate passed-in tables and add them at the end of the secondary
> > > table list. See the hvm_defs.h header for information on the format of
> > > this block.
> > >
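The vendor-structure enumeration in point 1 can be sketched as a scan over a buffer of concatenated SMBIOS structures. The header layout (type and length bytes, a handle, then a double-NUL-terminated string set) follows the SMBIOS specification; treating the passed-in block as a plain concatenation of structures is an assumption for illustration, since the actual format is described in hvm_defs.h.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Advance past one SMBIOS structure: skip the formatted area (whose
 * size is the 'length' byte at offset 1), then skip the string set,
 * which ends with two consecutive NUL bytes. */
static const uint8_t *smbios_next(const uint8_t *p)
{
    p += p[1];
    while ( p[0] != 0 || p[1] != 0 )
        p++;
    return p + 2;
}

/* Count structures in the vendor-defined type range (128-255). */
static unsigned int count_vendor_structures(const uint8_t *buf, size_t len)
{
    const uint8_t *p = buf, *end = buf + len;
    unsigned int count = 0;

    while ( p + 4 <= end )          /* 4-byte header: type, len, handle */
    {
        if ( p[0] >= 128 )          /* type byte */
            count++;
        p = smbios_next(p);
    }
    return count;
}
```

A real loader would also validate that each structure stays inside the buffer before dereferencing it; that bounds checking is omitted here for brevity.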
> > > There are 4 patches in the series:
> > > 01 - Add HVM definitions header for firmware passthrough support.
> > > 02 - Xen control tools support for loading the firmware blocks.
> > > 03 - Passthrough support for SMBIOS.
> > > 04 - Passthrough support for ACPI.
> > >
> > > Note this is version 3 of this patch set. Some of the differences:
> > >  - Generic module support removed; overall functionality was
> > >    simplified.
> > >  - Use of xenstore to supply firmware passthrough information to
> > >    HVMLOADER.
> > >  - Fixed issues pointed out in the SMBIOS processing code.
> > >  - Created defines for the SMBIOS handles in use and switched to using
> > >    the xenstore values in the new hvm_defs.h file.
> > >
> > > Signed-off-by: Ross Philipson <ross.philipson@xxxxxxxxxx>
> > >
> >
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--20cf307ac797bbb02c04d0a5e92e--


--===============4122976393985216638==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4122976393985216638==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 11:36:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 11:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tikbg-0006Wi-Rw; Wed, 12 Dec 2012 11:36:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tikbf-0006Wa-0T
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 11:36:07 +0000
Received: from [85.158.139.211:14616] by server-11.bemta-5.messagelabs.com id
	16/3E-31624-62C68C05; Wed, 12 Dec 2012 11:36:06 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355312163!18674806!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODkwNDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11546 invoked from network); 12 Dec 2012 11:36:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 11:36:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; 
   d="scan'208";a="411077"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	12 Dec 2012 11:36:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 12 Dec 2012 06:36:02 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TikbD-0000Yt-TP;
	Wed, 12 Dec 2012 11:35:39 +0000
MIME-Version: 1.0
X-Mercurial-Node: 96b068439bc453521aea4e2070b79409189f17e6
Message-ID: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 11:35:34 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling on
	kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/crash.c            |  116 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
 xen/include/asm-x86/desc.h      |   45 +++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 5 files changed, 203 insertions(+), 15 deletions(-)


Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and because future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour to be safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for use during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no-op handler which irets immediately.  It is actually in
    the middle of enable_nmis to reuse the iret instruction, without
    having a single lone aligned iret inflating the code size.

And adds three new IDT entry helper routines:
 * _write_gate_lower
    This is a substitute for using cmpxchg16b to update a 128bit
    structure at once.  It assumes that the top 64 bits are unchanged
    (and ASSERT()s the fact) and performs a regular write on the lower
    64 bits.
 * _set_gate_lower
    This is functionally equivalent to the already present _set_gate(),
    except it uses _write_gate_lower rather than updating both 64bit
    values.
 * _update_gate_addr_lower
    This is designed to update an IDT entry handler only, without
    altering any other settings in the entry.  It also uses
    _write_gate_lower.

The IDT entry helpers are required because:
  * It is unsafe to attempt a disable/update/re-enable cycle on the NMI
    or MCE IDT entries.
  * We need to be able to update NMI handlers without changing the IST
    entry.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpu's NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it getting stuck in
    an NMI context, causing a hang instead of a crash.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we reenter midway through, we attempt the
    whole operation again, in preference to leaving it incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v4:
 * Change stale comments, spelling corrections and coding style changes.

Changes since v3:
 * Added IDT entry helpers to safely update NMI/MCE entries.

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,127 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receiving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immune to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which causes them to write state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this cpus MCE
+                 * and NMI handlers, and alter the NMI handler to have
+                 * no operation.  Disabling the stack tables prevents
+                 * stack corruption race conditions, while changing the
+                 * handler helps prevent cascading faults; we are
+                 * certainly going to crash by this point.
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+            }
+            else
+                /* Do not update stack table for other pcpus. */
+                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
         .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
         .limit = LAST_RESERVED_GDT_BYTE
     };
+    int i;
 
     /* We are about to permenantly jump out of the Xen context into the kexec
      * purgatory code.  We really dont want to be still servicing interupts.
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
+ENTRY(enable_nmis)
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        retq
+
+/* No op trap handler.  Required for kexec crash path.  This is not
+ * declared with the ENTRY() macro to avoid wasted alignment space.
+ */
+.globl trap_nop
+trap_nop:
+        iretq
+
+
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/desc.h
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -106,6 +106,21 @@ typedef struct {
     u64 a, b;
 } idt_entry_t;
 
+/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
+ * bits of the address not changing, which is a safe assumption as all
+ * functions we are likely to load will live inside the 1GB
+ * code/data/bss address range.
+ *
+ * Ideally, we would use cmpxchg16b, but this is not supported on some
+ * old AMD 64bit capable processors, and has no safe equivalent.
+ */
+static inline void _write_gate_lower(volatile idt_entry_t *gate,
+                                     const idt_entry_t *new)
+{
+    ASSERT(gate->b == new->b);
+    gate->a = new->a;
+}
+
 #define _set_gate(gate_addr,type,dpl,addr)               \
 do {                                                     \
     (gate_addr)->a = 0;                                  \
@@ -122,6 +137,36 @@ do {                                    
         (1UL << 47);                                     \
 } while (0)
 
+static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
+                                   unsigned long dpl, void *addr)
+{
+    idt_entry_t idte;
+    idte.b = gate->b;
+    idte.a =
+        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(dpl) << 45) |
+        ((unsigned long)(type) << 40) |
+        ((unsigned long)(addr) & 0xFFFFUL) |
+        ((unsigned long)__HYPERVISOR_CS64 << 16) |
+        (1UL << 47);
+    _write_gate_lower(gate, &idte);
+}
+
+/* Update the lower half handler of an IDT Entry, without changing any
+ * other configuration. */
+static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
+{
+    idt_entry_t idte;
+    idte.a = gate->a;
+
+    idte.b = ((unsigned long)(addr) >> 32);
+    idte.a &= 0x0000FFFFFFFF0000ULL;
+    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(addr) & 0xFFFFUL);
+
+    _write_gate_lower(gate, &idte);
+}
+
 #define _set_tssldt_desc(desc,addr,limit,type)           \
 do {                                                     \
     (desc)[0].b = (desc)[1].b = 0;                       \
diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 11:36:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 11:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tikbg-0006Wi-Rw; Wed, 12 Dec 2012 11:36:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tikbf-0006Wa-0T
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 11:36:07 +0000
Received: from [85.158.139.211:14616] by server-11.bemta-5.messagelabs.com id
	16/3E-31624-62C68C05; Wed, 12 Dec 2012 11:36:06 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355312163!18674806!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODkwNDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11546 invoked from network); 12 Dec 2012 11:36:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 11:36:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,265,1355097600"; 
   d="scan'208";a="411077"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	12 Dec 2012 11:36:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 12 Dec 2012 06:36:02 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1TikbD-0000Yt-TP;
	Wed, 12 Dec 2012 11:35:39 +0000
MIME-Version: 1.0
X-Mercurial-Node: 96b068439bc453521aea4e2070b79409189f17e6
Message-ID: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 12 Dec 2012 11:35:34 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling on
	kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/crash.c            |  116 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
 xen/include/asm-x86/desc.h      |   45 +++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 5 files changed, 203 insertions(+), 15 deletions(-)


Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and because future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour so it is safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for use during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no-op handler which irets immediately.  It actually sits
    in the middle of enable_nmis to reuse the iret instruction, without
    a single lone aligned iret inflating the code size.

And adds three new IDT entry helper routines:
 * _write_gate_lower
    This is a substitute for using cmpxchg16b to update a 128bit
    structure at once.  It assumes that the top 64 bits are unchanged
    (and ASSERT()s the fact) and performs a regular write on the lower
    64 bits.
 * _set_gate_lower
    This is functionally equivalent to the already present _set_gate(),
    except it uses _write_gate_lower rather than updating both 64bit
    values.
 * _update_gate_addr_lower
    This is designed to update an IDT entry handler only, without
    altering any other settings in the entry.  It also uses
    _write_gate_lower.

The IDT entry helpers are required because:
  * It is unsafe to attempt a disable/update/re-enable cycle on the NMI
    or MCE IDT entries.
  * We need to be able to update NMI handlers without changing the IST
    entry.


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpu's NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it from getting
    stuck in an NMI context and hanging instead of crashing.  The
    non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we reenter midway through, we attempt the
    whole operation again, in preference to leaving it incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--

Changes since v4:
 * Updated stale comments, fixed spellings, and made coding style changes.

Changes since v3:
 * Added IDT entry helpers to safely update NMI/MCE entries.

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,127 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receiving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immune to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
+     * invokes do_nmi_crash (above), which causes them to write state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this cpu's MCE
+                 * and NMI handlers, and alter the NMI handler to have
+                 * no operation.  Disabling the stack tables prevents
+                 * stack corruption race conditions, while changing the
+                 * handler helps prevent cascading faults; we are
+                 * certainly going to crash by this point.
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+            }
+            else
+                /* Do not update stack table for other pcpus. */
+                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
         .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
         .limit = LAST_RESERVED_GDT_BYTE
     };
+    int i;
 
     /* We are about to permanently jump out of the Xen context into the kexec
      * purgatory code.  We really don't want to be still servicing interrupts.
      */
     local_irq_disable();
 
+    /* Now regular interrupts are disabled, we need to reduce the impact
+     * of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().  All
+     * pcpus other than us have the nmi_crash handler, while we have the nop
+     * handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,45 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        cli
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
+ENTRY(enable_nmis)
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        retq
+
+/* No-op trap handler.  Required for the kexec crash path.  This is not
+ * declared with the ENTRY() macro to avoid wasted alignment space.
+ */
+.globl trap_nop
+trap_nop:
+        iretq
+
+
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/desc.h
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -106,6 +106,21 @@ typedef struct {
     u64 a, b;
 } idt_entry_t;
 
+/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
+ * bits of the address not changing, which is a safe assumption as all
+ * functions we are likely to load will live inside the 1GB
+ * code/data/bss address range.
+ *
+ * Ideally, we would use cmpxchg16b, but this is not supported on some
+ * old AMD 64bit capable processors, and has no safe equivalent.
+ */
+static inline void _write_gate_lower(volatile idt_entry_t *gate,
+                                     const idt_entry_t *new)
+{
+    ASSERT(gate->b == new->b);
+    gate->a = new->a;
+}
+
 #define _set_gate(gate_addr,type,dpl,addr)               \
 do {                                                     \
     (gate_addr)->a = 0;                                  \
@@ -122,6 +137,36 @@ do {                                    
         (1UL << 47);                                     \
 } while (0)
 
+static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
+                                   unsigned long dpl, void *addr)
+{
+    idt_entry_t idte;
+    idte.b = gate->b;
+    idte.a =
+        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(dpl) << 45) |
+        ((unsigned long)(type) << 40) |
+        ((unsigned long)(addr) & 0xFFFFUL) |
+        ((unsigned long)__HYPERVISOR_CS64 << 16) |
+        (1UL << 47);
+    _write_gate_lower(gate, &idte);
+}
+
+/* Update the lower half handler of an IDT Entry, without changing any
+ * other configuration. */
+static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
+{
+    idt_entry_t idte;
+    idte.a = gate->a;
+
+    idte.b = ((unsigned long)(addr) >> 32);
+    idte.a &= 0x0000FFFFFFFF0000ULL;
+    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(addr) & 0xFFFFUL);
+
+    _write_gate_lower(gate, &idte);
+}
+
 #define _set_tssldt_desc(desc,addr,limit,type)           \
 do {                                                     \
     (desc)[0].b = (desc)[1].b = 0;                       \
diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);


From xen-devel-bounces@lists.xen.org Wed Dec 12 12:35:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 12:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TilWo-00085K-Do; Wed, 12 Dec 2012 12:35:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1TilWm-000854-GB; Wed, 12 Dec 2012 12:35:08 +0000
Received: from [85.158.138.51:39098] by server-2.bemta-3.messagelabs.com id
	2E/78-11239-BF978C05; Wed, 12 Dec 2012 12:35:07 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355315706!20637397!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13129 invoked from network); 12 Dec 2012 12:35:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 12:35:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,266,1355097600"; 
   d="scan'208";a="86960"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 12:35:06 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 12:35:06 +0000
Message-ID: <50C879FB.7060208@citrix.com>
Date: Wed, 12 Dec 2012 13:35:07 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355309001.10554.13.camel@zakaz.uk.xensource.com>
Cc: xen-api@lists.xen.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Driver domains and device handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMTIvMTIvMTIgMTE6NDMsIElhbiBDYW1wYmVsbCB3cm90ZToKPiBPbiBUdWUsIDIwMTItMTIt
MTEgYXQgMTk6NDYgKzAwMDAsIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4+IGp1c3QgbGF1bmNo
IFFlbXUsIGFuZCBpZiB3ZSBkZWNpZGUgdG8gdXNlIGFuIGluLWtlcm5lbCBpbml0aWF0b3Igd2Ug
b25seQo+PiBoYXZlIHRvIGxhdW5jaCBhIGhvdHBsdWcgc2NyaXB0IHdpdGggc29tZXRoaW5nIGxp
a2U6IGlzY3NpYWRtIC1tIG5vZGUgLVQKPj4gPGlxbj4gLXAgPGlwOnBvcnQ+Lgo+IAo+IGlzbid0
IHRoZXJlIGFsc28gYSBpc2NzaWFkbSBsb2dpbiB3aGljaCBpcyBuZWVkZWQgYXQgc29tZSBwb2lu
dD8KCkxvZ2luIGlzIGRvbmUgZHVyaW5nIHRoZSBwbHVnIGJ5IGFkZGluZyAtLWxvZ2luIHRvIHRo
ZSBhYm92ZSBjb21tYW5kLApidXQgSSB0aGluayB5b3UgY2FuIGFsc28gcGVyZm9ybSB0aGUgbG9n
aW4gaW4gYSBkaXNjb3ZlcnksIGFuZCBJIGd1ZXNzCnRoaXMgbG9naW4gaXMga2VlcCBieSBpc2Nz
aWQsIHNvIHlvdSBkb24ndCBuZWVkIHRvIHBlcmZvcm0gaXQgd2hlbgpwbHVnaW4gdGhlIGRldmlj
ZXMuIEJ1dCBJJ20gbm90IHN1cmUgaWYgaXMgd29ydGggcGVyZm9ybWluZyBhIGRpc2NvdmVyeQpq
dXN0IHRvIGxvZ2luLgoKPiBJbiBhbnkgY2FzZSBhbG1vc3QgYW55IGlzY3NpYWRtIGNvbW1hbmQg
aGFzIHRoZSBwb3RlbnRpYWwgdG8gYmUgc2xvdwo+IGNvbXBhcmVkIHRvIHRoZSBhbW91bnQgb2Yg
ZG93bnRpbWUgd2Ugd291bGQgbGlrZSB0byBhaW0gZm9yIGR1cmluZyBhCj4gbWlncmF0aW9uLgoK
SSB3aWxsIGRvIHNvbWUgbW9yZSByZXNlYXJjaCBhbmQgdGltbWluZyBhYm91dCBpc2NzaWFkbSwg
dG8gc2VlIGlmCnRoZXJlJ3MgYW55d2F5IGluIHdoaWNoIHdlIGNhbiBzcGVlZHVwIHRoZSBhY3R1
YWwgY29ubmVjdGlvbiBvZiB0aGUgZGV2aWNlLgoKPj4gSSdtIHN1cmUgdGhlcmUncyBwZW9wbGUg
b24gdGhlIGxpc3Qgd2l0aCBtb3JlIGV4cGVyaWVuY2UgdGhhbiBtZSBvbiB0aGlzCj4+IGZpZWxk
LCBhbmQgSSB3b3VsZCBsaWtlIHRvIGFzayBmb3Igc29tZSB1c2UtY2FzZXMgd2hlcmUgdGhpcwo+
PiAicHJlcGFyYXRvcnkiIHBoYXNlIHdvdWxkIGJlIHVzZWZ1bCwgYW5kIHdoYXQgYWN0aW9ucyB3
aWxsIGJlIHBlcmZvcm1lZAo+PiBvbiBpdC4KPiAKPiBNaWdodCB3ZSB3b3J0aCBpbmNsdWRpbmcg
eGVuLWFwaSBpbiB0aGlzIGRpc2N1c3Npb24gc2luY2UgdGhlIHhhcGkgZ3V5cwo+IGhhdmUgYSBm
YWlyIGJpdCBvZiBrbm93bGVkZ2Ugb2YgdGhlIHJlcXVpcmVtZW50IGhlcmUuCgpJIGFncmVlLCBs
ZXQncyBDQyB4ZW4tYXBpQGxpc3RzLnhlbi5vcmcuCgoKX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxA
bGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Wed Dec 12 12:50:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 12:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tillj-0008Hw-27; Wed, 12 Dec 2012 12:50:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tillh-0008Hq-4W
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 12:50:33 +0000
Received: from [85.158.137.99:49701] by server-8.bemta-3.messagelabs.com id
	8A/D7-01297-89D78C05; Wed, 12 Dec 2012 12:50:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355316624!18752890!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16905 invoked from network); 12 Dec 2012 12:50:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 12:50:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 12:50:23 +0000
Message-Id: <50C88B9E02000078000AFE55@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 12:50:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
In-Reply-To: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
 on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 >>> On 12.12.12 at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> xen/arch/x86/crash.c            |  116 ++++++++++++++++++++++++++++++++++-----
>  xen/arch/x86/machine_kexec.c    |   19 ++++++
>  xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
>  xen/include/asm-x86/desc.h      |   45 +++++++++++++++
>  xen/include/asm-x86/processor.h |    4 +
>  5 files changed, 203 insertions(+), 15 deletions(-)
> 
> 
> Experimentally, certain crash kernels will triple fault very early after
> starting if started with NMIs disabled.  This was discovered when
> experimenting with a debug keyhandler which deliberately created a
> reentrant NMI, causing stack corruption.
> 
> Because of this discovered bug, and because future changes to the NMI
> handling will make the kexec path more fragile, take the time now to
> bullet-proof the kexec behaviour to be safer in more circumstances.
> 
> This patch adds three new low level routines:
>  * nmi_crash
>     This is a special NMI handler for use during a kexec crash.
>  * enable_nmis
>     This function enables NMIs by executing an iret-to-self, to
>     disengage the hardware NMI latch.
>  * trap_nop
>     This is a no-op handler which irets immediately.  It is actually
>     placed in the middle of enable_nmis to reuse the iret instruction,
>     without having a single lone aligned iret inflating the code size.
> 
> And adds three new IDT entry helper routines:
>  * _write_gate_lower
>     This is a substitute for using cmpxchg16b to update a 128bit
>     structure at once.  It assumes that the top 64 bits are unchanged
>     (and ASSERT()s the fact) and performs a regular write on the lower
>     64 bits.
>  * _set_gate_lower
>     This is functionally equivalent to the already present _set_gate(),
>     except it uses _write_gate_lower rather than updating both 64bit
>     values.
>  * _update_gate_addr_lower
>     This is designed to update an IDT entry handler only, without
>     altering any other settings in the entry.  It also uses
>     _write_gate_lower.
> 
> The IDT entry helpers are required because:
>   * It is unsafe to attempt a disable/update/re-enable cycle on the NMI
>     or MCE IDT entries.
>   * We need to be able to update NMI handlers without changing the IST
>     entry.
> 
> 
> As a result, the new behaviour of the kexec_crash path is:
> 
> nmi_shootdown_cpus() will:
> 
>  * Disable the crashing cpu's NMI/MCE interrupt stack tables.
>     Disabling the stack tables removes race conditions which would lead
>     to corrupt exception frames and infinite loops.  As this pcpu is
>     never planning to execute a sysret back to a pv vcpu, the update is
>     safe from a security point of view.
> 
>  * Swap the NMI trap handlers.
>     The crashing pcpu gets the nop handler, to prevent it getting stuck in
>     an NMI context, causing a hang instead of a crash.  The non-crashing
>     pcpus all get the nmi_crash handler which is designed never to
>     return.
> 
> do_nmi_crash() will:
> 
>  * Save the crash notes and shut the pcpu down.
>     There is now an extra per-cpu variable to prevent us from executing
>     this multiple times.  In the case where we reenter midway through,
>     attempt the whole operation again in preference to not completing
>     it in the first place.
> 
>  * Set up another NMI at the LAPIC.
>     Even when the LAPIC has been disabled, the ID and command registers
>     are still usable.  As a result, we can deliberately queue up a new
>     NMI to re-interrupt us later if NMIs get unlatched.  Because of the
>     call to __stop_this_cpu(), we have to hand craft self_nmi() to be
>     safe from General Protection Faults.
> 
>  * Fall into infinite loop.
> 
> machine_kexec() will:
> 
>   * Swap the MCE handlers to be a nop.
>      We cannot prevent MCEs from being delivered when we pass off to the
>      crash kernel, and the less Xen context is being touched the better.
> 
>   * Explicitly enable NMIs.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

(but for committing I'd want at least one other knowledgeable
person's ack)

> --
> 
> Changes since v4:
>  * Change stale comments, spelling corrections and coding style changes.
> 
> Changes since v3:
>  * Added IDT entry helpers to safely update NMI/MCE entries.
> 
> Changes since v2:
> 
>  * Rework the alteration of the stack tables to completely remove the
>    possibility of a PV domain getting very lucky with the "NMI or MCE in
>    a 1 instruction race window on sysret" and managing to execute code
>    in the hypervisor context.
>  * Make use of set_ist() from the previous patch in the series to avoid
>    open-coding the IST manipulation.
> 
> Changes since v1:
>  * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
>    during the original refactoring.
>  * Fold trap_nop into the middle of enable_nmis to reuse the iret.
>  * Expand comments in areas as per Tim's suggestions.
> 
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,127 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    int cpu = smp_processor_id();
> +
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( cpu != crashing_cpu );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        /* Disable the interrupt stack table for the MCE handler.  This
> +         * prevents race conditions between clearing MCIP and receiving a
> +         * new MCE, during which the exception frame would be clobbered
> +         * and the MCE handler fall into an infinite loop.  We are soon
> +         * going to disable the NMI watchdog, so the loop would not be
> +         * caught.
> +         *
> +         * We do not need to change the NMI IST, as the nmi_crash
> +         * handler is immune to corrupt exception frames, by virtue of
> +         * being designed never to return.
> +         *
> +         * This update is safe from a security point of view, as this
> +         * pcpu is never going to try to sysret back to a PV vcpu.
> +         */
> +        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
> +
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;
> +        atomic_dec(&waiting_for_crash_ipi);
> +    }
> +
> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
> +     * can deliberately queue up another NMI at the LAPIC which will not
> +     * be delivered as the hardware NMI latch is currently in effect.
> +     * This means that if NMIs become unlatched (e.g. following a
> +     * non-fatal MCE), the LAPIC will force us back here rather than
> +     * wandering back into regular Xen code.
>       */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>  
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>  
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
> +        break;
>  
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>  
>      for ( ; ; )
>          halt();
> -
> -    return 1;
>  }
>  
>  static void nmi_shootdown_cpus(void)
>  {
>      unsigned long msecs;
> +    int i, cpu = smp_processor_id();
>  
>      local_irq_disable();
>  
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>      local_irq_count(crashing_cpu) = 0;
>  
>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
> +     * invokes do_nmi_crash (above), which causes them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +
> +            if ( i == cpu )
> +            {
> +                /* Disable the interrupt stack tables for this cpu's MCE
> +                 * and NMI handlers, and alter the NMI handler to have
> +                 * no operation.  Disabling the stack tables prevents
> +                 * stack corruption race conditions, while changing the
> +                 * handler helps prevent cascading faults; we are
> +                 * certainly going to crash by this point.
> +                 *
> +                 * This update is safe from a security point of view, as
> +                 * this pcpu is never going to try to sysret back to a
> +                 * PV vcpu.
> +                 */
> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
> +            }
> +            else
> +                /* Do not update stack table for other pcpus. */
> +                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
> +        }
> +
>      /* Ensure the new callback function is set before sending out the NMI. */
>      wmb();
>  
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
>          .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
>          .limit = LAST_RESERVED_GDT_BYTE
>      };
> +    int i;
>  
>      /* We are about to permanently jump out of the Xen context into the kexec
>       * purgatory code.  We really don't want to be still servicing interrupts.
>       */
>      local_irq_disable();
>  
> +    /* Now regular interrupts are disabled, we need to reduce the impact
> +     * of interrupts not disabled by 'cli'.
> +     *
> +     * The NMI handlers have already been set up by nmi_shootdown_cpus().
> +     * All pcpus other than us have the nmi_crash handler, while we have
> +     * the nop handler.
> +     *
> +     * The MCE handlers touch extensive areas of Xen code and data.  At this
> +     * point, there is nothing we can usefully do, so set the nop handler.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>      /*
>       * compat_machine_kexec() returns to idle pagetables, which requires us
>       * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,45 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* Enable NMIs.  No special register assumptions.  Only %rax is not preserved. */
> +ENTRY(enable_nmis)
> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        retq
> +
> +/* No op trap handler.  Required for kexec crash path.  This is not
> + * declared with the ENTRY() macro to avoid wasted alignment space.
> + */
> +.globl trap_nop
> +trap_nop:
> +        iretq
> +
> +
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/desc.h
> --- a/xen/include/asm-x86/desc.h
> +++ b/xen/include/asm-x86/desc.h
> @@ -106,6 +106,21 @@ typedef struct {
>      u64 a, b;
>  } idt_entry_t;
>  
> +/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
> + * bits of the address not changing, which is a safe assumption as all
> + * functions we are likely to load will live inside the 1GB
> + * code/data/bss address range.
> + *
> + * Ideally, we would use cmpxchg16b, but this is not supported on some
> + * old AMD 64bit capable processors, and has no safe equivalent.
> + */
> +static inline void _write_gate_lower(volatile idt_entry_t *gate,
> +                                     const idt_entry_t *new)
> +{
> +    ASSERT(gate->b == new->b);
> +    gate->a = new->a;
> +}
> +
>  #define _set_gate(gate_addr,type,dpl,addr)               \
>  do {                                                     \
>      (gate_addr)->a = 0;                                  \
> @@ -122,6 +137,36 @@ do {                                    
>          (1UL << 47);                                     \
>  } while (0)
>  
> +static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
> +                                   unsigned long dpl, void *addr)
> +{
> +    idt_entry_t idte;
> +    idte.b = gate->b;
> +    idte.a =
> +        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
> +        ((unsigned long)(dpl) << 45) |
> +        ((unsigned long)(type) << 40) |
> +        ((unsigned long)(addr) & 0xFFFFUL) |
> +        ((unsigned long)__HYPERVISOR_CS64 << 16) |
> +        (1UL << 47);
> +    _write_gate_lower(gate, &idte);
> +}
> +
> +/* Update the lower half handler of an IDT Entry, without changing any
> + * other configuration. */
> +static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
> +{
> +    idt_entry_t idte;
> +    idte.a = gate->a;
> +
> +    idte.b = ((unsigned long)(addr) >> 32);
> +    idte.a &= 0x0000FFFFFFFF0000ULL;
> +    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
> +        ((unsigned long)(addr) & 0xFFFFUL);
> +
> +    _write_gate_lower(gate, &idte);
> +}
> +
>  #define _set_tssldt_desc(desc,addr,limit,type)           \
>  do {                                                     \
>      (desc)[0].b = (desc)[1].b = 0;                       \
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
>  DECLARE_TRAP_HANDLER(divide_error);
>  DECLARE_TRAP_HANDLER(debug);
>  DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>  DECLARE_TRAP_HANDLER(int3);
>  DECLARE_TRAP_HANDLER(overflow);
>  DECLARE_TRAP_HANDLER(bounds);
> @@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>  #undef DECLARE_TRAP_HANDLER
>  
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>  void syscall_enter(void);
>  void sysenter_entry(void);
>  void sysenter_eflags_saved(void);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 12:50:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 12:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tillj-0008Hw-27; Wed, 12 Dec 2012 12:50:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tillh-0008Hq-4W
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 12:50:33 +0000
Received: from [85.158.137.99:49701] by server-8.bemta-3.messagelabs.com id
	8A/D7-01297-89D78C05; Wed, 12 Dec 2012 12:50:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355316624!18752890!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16905 invoked from network); 12 Dec 2012 12:50:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 12:50:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 12:50:23 +0000
Message-Id: <50C88B9E02000078000AFE55@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 12:50:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
In-Reply-To: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
 on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 >>> On 12.12.12 at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> xen/arch/x86/crash.c            |  116 ++++++++++++++++++++++++++++++++++-----
>  xen/arch/x86/machine_kexec.c    |   19 ++++++
>  xen/arch/x86/x86_64/entry.S     |   34 +++++++++++
>  xen/include/asm-x86/desc.h      |   45 +++++++++++++++
>  xen/include/asm-x86/processor.h |    4 +
>  5 files changed, 203 insertions(+), 15 deletions(-)
> 
> 
> Experimentally, certain crash kernels will triple fault very early after
> starting if started with NMIs disabled.  This was discovered when
> experimenting with a debug keyhandler which deliberately created a
> reentrant NMI, causing stack corruption.
> 
> Because of this discovered bug, and that the future changes to the NMI
> handling will make the kexec path more fragile, take the time now to
> bullet-proof the kexec behaviour to be safer in more circumstances.
> 
> This patch adds three new low level routines:
>  * nmi_crash
>     This is a special NMI handler for using during a kexec crash.
>  * enable_nmis
>     This function enables NMIs by executing an iret-to-self, to
>     disengage the hardware NMI latch.
>  * trap_nop
>     This is a no op handler which irets immediately.  It is actually in
>     the middle of enable_nmis to reuse the iret instruction, without
>     having a single lone aligned iret inflating the code side.
> 
> And adds three new IDT entry helper routines:
>  * _write_gate_lower
>     This is a substitute for using cmpxchg16b to update a 128bit
>     structure at once.  It assumes that the top 64 bits are unchanged
>     (and ASSERT()s the fact) and performs a regular write on the lower
>     64 bits.
>  * _set_gate_lower
>     This is functionally equivalent to the already present _set_gate(),
>     except it uses _write_gate_lower rather than updating both 64bit
>     values.
>  * _update_gate_addr_lower
>     This is designed to update an IDT entry handler only, without
>     altering any other settings in the entry.  It also uses
>     _write_gate_lower.
> 
> The IDT entry helpers are required because:
>   * Is it unsafe to attempt a disable/update/re-enable cycle on the NMI
>     or MCE IDT entries.
>   * We need to be able to update NMI handlers without changing the IST
>     entry.
> 
> 
> As a result, the new behaviour of the kexec_crash path is:
> 
> nmi_shootdown_cpus() will:
> 
>  * Disable the crashing cpus NMI/MCE interrupt stack tables.
>     Disabling the stack tables removes race conditions which would lead
>     to corrupt exception frames and infinite loops.  As this pcpu is
>     never planning to execute a sysret back to a pv vcpu, the update is
>     safe from a security point of view.
> 
>  * Swap the NMI trap handlers.
>     The crashing pcpu gets the nop handler, to prevent it getting stuck in
>     an NMI context, causing a hang instead of crash.  The non-crashing
>     pcpus all get the nmi_crash handler which is designed never to
>     return.
> 
> do_nmi_crash() will:
> 
>  * Save the crash notes and shut the pcpu down.
>     There is now an extra per-cpu variable to prevent us from executing
>     this multiple times.  In the case where we reenter midway through,
>     attempt the whole operation again in preference to not completing
>     it in the first place.
> 
>  * Set up another NMI at the LAPIC.
>     Even when the LAPIC has been disabled, the ID and command registers
>     are still usable.  As a result, we can deliberately queue up a new
>     NMI to re-interrupt us later if NMIs get unlatched.  Because of the
>     call to __stop_this_cpu(), we have to hand craft self_nmi() to be
>     safe from General Protection Faults.
> 
>  * Fall into infinite loop.
> 
> machine_kexec() will:
> 
>   * Swap the MCE handlers to be a nop.
>      We cannot prevent MCEs from being delivered when we pass off to the
>      crash kernel, and the less Xen context is being touched the better.
> 
>   * Explicitly enable NMIs.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

(but for committing I'd want at least one other knowledgeable
person's ack)

> --
> 
> Changes since v4:
>  * Change stale comments, spelling corrections and coding style changes.
> 
> Changes since v3:
>  * Added IDT entry helpers to safely update NMI/MCE entries.
> 
> Changes since v2:
> 
>  * Rework the alteration of the stack tables to completely remove the
>    possibility of a PV domain getting very lucky with the "NMI or MCE in
>    a 1 instruction race window on sysret" and managing to execute code
>    in the hypervisor context.
>  * Make use of set_ist() from the previous patch in the series to avoid
>    open-coding the IST manipulation.
> 
> Changes since v1:
>  * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
>    during the original refactoring.
>  * Fold trap_nop into the middle of enable_nmis to reuse the iret.
>  * Expand comments in areas as per Tim's suggestions.
> 
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,127 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. 
> */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    int cpu = smp_processor_id();
> +
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. 
> */
> +    ASSERT( cpu != crashing_cpu );
> +
> +    /* Save crash information and shut down CPU.  Attempt only once. */
> +    if ( ! this_cpu(crash_save_done) )
> +    {
> +        /* Disable the interrupt stack table for the MCE handler.  This
> +         * prevents race conditions between clearing MCIP and receving a
> +         * new MCE, during which the exception frame would be clobbered
> +         * and the MCE handler fall into an infinite loop.  We are soon
> +         * going to disable the NMI watchdog, so the loop would not be
> +         * caught.
> +         *
> +         * We do not need to change the NMI IST, as the nmi_crash
> +         * handler is immue to corrupt exception frames, by virtue of
> +         * being designed never to return.
> +         *
> +         * This update is safe from a security point of view, as this
> +         * pcpu is never going to try to sysret back to a PV vcpu.
> +         */
> +        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
> +
> +        kexec_crash_save_cpu();
> +        __stop_this_cpu();
> +
> +        this_cpu(crash_save_done) = 1;
> +        atomic_dec(&waiting_for_crash_ipi);
> +    }
> +
> +    /* Poor mans self_nmi().  __stop_this_cpu() has reverted the LAPIC
> +     * back to its boot state, so we are unable to rely on the regular
> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
> +     * (The likely scenario is that we have reverted from x2apic mode to
> +     * xapic, at which point #GPFs will occur if we use the apic_*
> +     * functions)
> +     *
> +     * The ICR and APIC ID of the LAPIC are still valid even during
> +     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
> +     * can deliberately queue up another NMI at the LAPIC which will not
> +     * be delivered as the hardware NMI latch is currently in effect.
> +     * This means that if NMIs become unlatched (e.g. following a
> +     * non-fatal MCE), the LAPIC will force us back here rather than
> +     * wandering back into regular Xen code.
>       */
> -    if ( cpu == crashing_cpu )
> -        return 1;
> -    local_irq_disable();
> +    switch ( current_local_apic_mode() )
> +    {
> +        u32 apic_id;
>  
> -    kexec_crash_save_cpu();
> +    case APIC_MODE_X2APIC:
> +        apic_id = apic_rdmsr(APIC_ID);
>  
> -    __stop_this_cpu();
> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | 
> ((u64)apic_id << 32));
> +        break;
>  
> -    atomic_dec(&waiting_for_crash_ipi);
> +    case APIC_MODE_XAPIC:
> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
> +
> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
> +            cpu_relax();
> +
> +        apic_mem_write(APIC_ICR2, apic_id << 24);
> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
> +        break;
> +
> +    default:
> +        break;
> +    }
>  
>      for ( ; ; )
>          halt();
> -
> -    return 1;
>  }
>  
>  static void nmi_shootdown_cpus(void)
>  {
>      unsigned long msecs;
> +    int i, cpu = smp_processor_id();
>  
>      local_irq_disable();
>  
> -    crashing_cpu = smp_processor_id();
> +    crashing_cpu = cpu;
>      local_irq_count(crashing_cpu) = 0;
>  
>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
> -    /* Would it be better to replace the trap vector here? */
> -    set_nmi_callback(crash_nmi_callback);
> +
> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
> +     * invokes do_nmi_crash (above), which cause them to write state and
> +     * fall into a loop.  The crashing pcpu gets the nop handler to
> +     * cause it to return to this function ASAP.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +        {
> +
> +            if ( i == cpu )
> +            {
> +                /* Disable the interrupt stack tables for this cpus MCE
> +                 * and NMI handlers, and alter the NMI handler to have
> +                 * no operation.  Disabling the stack tables prevents
> +                 * stack corruption race conditions, while changing the
> +                 * handler helps prevent cascading faults; we are
> +                 * certainly going to crash by this point.
> +                 *
> +                 * This update is safe from a security point of view, as
> +                 * this pcpu is never going to try to sysret back to a
> +                 * PV vcpu.
> +                 */
> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
> +            }
> +            else
> +                /* Do not update stack table for other pcpus. */
> +                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], 
> &nmi_crash);
> +        }
> +
>      /* Ensure the new callback function is set before sending out the NMI. */
>      wmb();
>  
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/machine_kexec.c
> --- a/xen/arch/x86/machine_kexec.c
> +++ b/xen/arch/x86/machine_kexec.c
> @@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
>          .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
>          .limit = LAST_RESERVED_GDT_BYTE
>      };
> +    int i;
>  
>      /* We are about to permanently jump out of the Xen context into the kexec
>       * purgatory code.  We really don't want to still be servicing
>       * interrupts.
>       */
>      local_irq_disable();
>  
> +    /* Now regular interrupts are disabled, we need to reduce the impact
> +     * of interrupts not disabled by 'cli'.
> +     *
> +     * The NMI handlers have already been set up by nmi_shootdown_cpus().
> +     * All pcpus other than us have the nmi_crash handler, while we have
> +     * the nop handler.
> +     *
> +     * The MCE handlers touch extensive areas of Xen code and data.  At
> +     * this point, there is nothing we can usefully do, so set the nop
> +     * handler.
> +     */
> +    for ( i = 0; i < nr_cpu_ids; ++i )
> +        if ( idt_tables[i] )
> +            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
> +
> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
> +     * not like running with NMIs disabled. */
> +    enable_nmis();
> +
>      /*
>       * compat_machine_kexec() returns to idle pagetables, which requires us
>       * to be running on a static GDT mapping (idle pagetables have no GDT
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,45 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli
> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi
> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
> +ENTRY(enable_nmis)
> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        retq
> +
> +/* No op trap handler.  Required for kexec crash path.  This is not
> + * declared with the ENTRY() macro to avoid wasted alignment space.
> + */
> +.globl trap_nop
> +trap_nop:
> +        iretq
> +
> +
> +
>  .section .rodata, "a", @progbits
>  
>  ENTRY(exception_table)
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/desc.h
> --- a/xen/include/asm-x86/desc.h
> +++ b/xen/include/asm-x86/desc.h
> @@ -106,6 +106,21 @@ typedef struct {
>      u64 a, b;
>  } idt_entry_t;
>  
> +/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
> + * bits of the address not changing, which is a safe assumption as all
> + * functions we are likely to load will live inside the 1GB
> + * code/data/bss address range.
> + *
> + * Ideally, we would use cmpxchg16b, but this is not supported on some
> + * old AMD 64bit capable processors, and has no safe equivalent.
> + */
> +static inline void _write_gate_lower(volatile idt_entry_t *gate,
> +                                     const idt_entry_t *new)
> +{
> +    ASSERT(gate->b == new->b);
> +    gate->a = new->a;
> +}
> +
>  #define _set_gate(gate_addr,type,dpl,addr)               \
>  do {                                                     \
>      (gate_addr)->a = 0;                                  \
> @@ -122,6 +137,36 @@ do {                                    
>          (1UL << 47);                                     \
>  } while (0)
>  
> +static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
> +                                   unsigned long dpl, void *addr)
> +{
> +    idt_entry_t idte;
> +    idte.b = gate->b;
> +    idte.a =
> +        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
> +        ((unsigned long)(dpl) << 45) |
> +        ((unsigned long)(type) << 40) |
> +        ((unsigned long)(addr) & 0xFFFFUL) |
> +        ((unsigned long)__HYPERVISOR_CS64 << 16) |
> +        (1UL << 47);
> +    _write_gate_lower(gate, &idte);
> +}
> +
> +/* Update the lower half handler of an IDT Entry, without changing any
> + * other configuration. */
> +static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
> +{
> +    idt_entry_t idte;
> +    idte.a = gate->a;
> +
> +    idte.b = ((unsigned long)(addr) >> 32);
> +    idte.a &= 0x0000FFFFFFFF0000ULL;
> +    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
> +        ((unsigned long)(addr) & 0xFFFFUL);
> +
> +    _write_gate_lower(gate, &idte);
> +}
> +
>  #define _set_tssldt_desc(desc,addr,limit,type)           \
>  do {                                                     \
>      (desc)[0].b = (desc)[1].b = 0;                       \
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/processor.h
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
>  DECLARE_TRAP_HANDLER(divide_error);
>  DECLARE_TRAP_HANDLER(debug);
>  DECLARE_TRAP_HANDLER(nmi);
> +DECLARE_TRAP_HANDLER(nmi_crash);
>  DECLARE_TRAP_HANDLER(int3);
>  DECLARE_TRAP_HANDLER(overflow);
>  DECLARE_TRAP_HANDLER(bounds);
> @@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>  #undef DECLARE_TRAP_HANDLER
>  
> +void trap_nop(void);
> +void enable_nmis(void);
> +
>  void syscall_enter(void);
>  void sysenter_entry(void);
>  void sysenter_eflags_saved(void);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 13:04:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 13:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tilz0-0000EZ-5L; Wed, 12 Dec 2012 13:04:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tilyy-0000EM-Ar
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 13:04:16 +0000
Received: from [85.158.138.51:5740] by server-1.bemta-3.messagelabs.com id
	14/63-08906-DC088C05; Wed, 12 Dec 2012 13:04:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355317448!27362567!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE0NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7594 invoked from network); 12 Dec 2012 13:04:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 13:04:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,266,1355097600"; 
   d="scan'208";a="389776"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	12 Dec 2012 13:04:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 12 Dec 2012 08:04:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tilyp-000273-4I;
	Wed, 12 Dec 2012 13:04:07 +0000
Date: Wed, 12 Dec 2012 13:04:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Alex Bligh <alex@alex.org.uk>
In-Reply-To: <B83A0346AE0177C703855ED1@nimrod.local>
Message-ID: <alpine.DEB.2.02.1212121223380.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
	<62660DE6F077667C4FFB290E@nimrod.local>
	<1355238698.843.42.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
	<0DDCB3A726C4BDEE758F607F@nimrod.local>
	<alpine.DEB.2.02.1212111922150.17523@kaball.uk.xensource.com>
	<B83A0346AE0177C703855ED1@nimrod.local>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] QEMU qcow2 snapshotting (was Xen 4.2.1 live migration
 with qemu device model)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 11 Dec 2012, Alex Bligh wrote:
> --On 11 December 2012 19:24:54 +0000 Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > It would be great if you could implement QEMU qcow2 snapshotting support
> > in libxl, so that everything can happen seamlessly using a variation of xl
> > save/restore.
> 
> I'd guess that isn't hard. We already have QEMU qcow2 snapshotting support
> on qemu-xen working via direct calls to QEMU - I think it 'worked just like
> kvm'. I seem to remember there is some fiddling to do with an open fd number
> in qemu for the live rebase, and some limitations (for instance I couldn't
> see how to do a consistent snapshot of the same device presented both pv
> and emulated under HVM - which is theoretically an issue if you have
> one partition mounted one way and one the other).

It shouldn't be hard: we already have pretty good QMP support in libxl,
so it should be just a matter of plumbing through the new commands.

QEMU supports both internal and external snapshots, but I think that the
internal ones (snapshots saved within the qcow2 file itself) are more
interesting. Which ones are you using?

QEMU snapshots can also cover either just the disk or the entire VM state
(savevm vs. snapshot_blkdev): one thing that would be cool is if we
could save the Xen VM state (AKA xl save) within the snapshot in the
qcow2 image. Basically we are talking about a new pair of xl
save/restore commands (or maybe options to the existing save/restore
commands) that make use of QEMU qcow2 snapshots as storage for both the
disk delta and the VM state.
To do this we would probably need a new QEMU QMP command, but I think
that it would be very useful.
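
For the disk-only case, the libxl plumbing would boil down to sending a
snapshot command over QEMU's QMP socket. A rough sketch of the exchange
(the device name and overlay path are invented for illustration, and the
exact command set depends on the QEMU version):

```json
{ "execute": "qmp_capabilities" }
{ "execute": "blockdev-snapshot-sync",
  "arguments": { "device": "ide0-hd0",
                 "snapshot-file": "/var/lib/xen/images/guest-snap.qcow2",
                 "format": "qcow2" } }
```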

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 14:16:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 14:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tin6e-0001FS-5M; Wed, 12 Dec 2012 14:16:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tin6c-0001FN-E5
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 14:16:14 +0000
Received: from [85.158.143.35:39258] by server-1.bemta-4.messagelabs.com id
	AB/87-28401-DA198C05; Wed, 12 Dec 2012 14:16:13 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355321757!10344655!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30549 invoked from network); 12 Dec 2012 14:15:58 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 14:15:58 -0000
Received: (qmail 12938 invoked from network); 12 Dec 2012 16:15:56 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	12 Dec 2012 16:15:56 +0200
Message-ID: <50C891CF.805@gmail.com>
Date: Wed, 12 Dec 2012 16:16:47 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 232998,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_NO_LINK_NMD; NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY;
	NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled], URL: [Enabled], URI DNSBL:
	[Disabled], SQMD: [Enabled, Hits: none, MD5:
	810cb52f28dbe582f0fa54a5df19f9df.fuzzy.fzrbl.org], RTDA: [Enabled,
	Hit: No, Details: v1.4.6; Id: 2m1g3t8.17e7n001r.ba4], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44354
Subject: [Xen-devel] Handling a MEM_EVENT_REASON_CR3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

a few questions about receiving a MEM_EVENT_REASON_CR3 event in dom0 
userspace:

1. If I call

xc_set_hvm_param(xci, domain_id,
                  HVM_PARAM_MEMORY_EVENT_CR3,
                  HVMPME_onchangeonly);

does that only trigger a CR3 event when the new value the guest writes to
CR3 differs from the existing value?

2. mem_event.h says that if "CR3 was hit: gfn is CR3 value". I'm 
assuming that gfn is the _new_ value, and that the old value is 
unavailable, is that also correct?

3. Is it possible to, upon intercepting the CR3 write, write a 
_different_ value to CR3, instead of gfn?

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 15:23:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 15:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tio8j-0001ak-Hc; Wed, 12 Dec 2012 15:22:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tio8i-0001af-QI
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 15:22:28 +0000
Received: from [85.158.139.83:37951] by server-14.bemta-5.messagelabs.com id
	17/94-09538-131A8C05; Wed, 12 Dec 2012 15:22:25 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355325743!29013362!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16810 invoked from network); 12 Dec 2012 15:22:24 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-182.messagelabs.com with SMTP;
	12 Dec 2012 15:22:24 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 3A6F5C5618E;
	Wed, 12 Dec 2012 15:22:12 +0000 (GMT)
Date: Wed, 12 Dec 2012 15:22:11 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <9019841307D10F928D694CD5@nimrod.local>
In-Reply-To: <alpine.DEB.2.02.1212121223380.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
	<62660DE6F077667C4FFB290E@nimrod.local>
	<1355238698.843.42.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
	<0DDCB3A726C4BDEE758F607F@nimrod.local>
	<alpine.DEB.2.02.1212111922150.17523@kaball.uk.xensource.com>
	<B83A0346AE0177C703855ED1@nimrod.local>
	<alpine.DEB.2.02.1212121223380.17523@kaball.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] QEMU qcow2 snapshotting (was Xen 4.2.1 live
	migration with qemu device model)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano,

--On 12 December 2012 13:04:04 +0000 Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> QEMU supports both internal and external snapshots, but I think that the
> internal ones are more interesting (snapshot saved within the qcow2
> file), which ones are you using?

We currently use external snapshots + rebase. The rebase stuff has made
external snapshots far more usable, particularly for CoW base images.

I shall try to have a look at this - but getting live migration working with the
device model is the top priority for us. I had a quick look this morning, and it
seems most of the changes are to the JSON library!

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 15:23:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 15:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tio8j-0001ak-Hc; Wed, 12 Dec 2012 15:22:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tio8i-0001af-QI
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 15:22:28 +0000
Received: from [85.158.139.83:37951] by server-14.bemta-5.messagelabs.com id
	17/94-09538-131A8C05; Wed, 12 Dec 2012 15:22:25 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355325743!29013362!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16810 invoked from network); 12 Dec 2012 15:22:24 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-182.messagelabs.com with SMTP;
	12 Dec 2012 15:22:24 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 3A6F5C5618E;
	Wed, 12 Dec 2012 15:22:12 +0000 (GMT)
Date: Wed, 12 Dec 2012 15:22:11 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <9019841307D10F928D694CD5@nimrod.local>
In-Reply-To: <alpine.DEB.2.02.1212121223380.17523@kaball.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<1355234985.843.10.camel@zakaz.uk.xensource.com>
	<62660DE6F077667C4FFB290E@nimrod.local>
	<1355238698.843.42.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212111521190.17523@kaball.uk.xensource.com>
	<0DDCB3A726C4BDEE758F607F@nimrod.local>
	<alpine.DEB.2.02.1212111922150.17523@kaball.uk.xensource.com>
	<B83A0346AE0177C703855ED1@nimrod.local>
	<alpine.DEB.2.02.1212121223380.17523@kaball.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] QEMU qcow2 snapshotting (was Xen 4.2.1 live
	migration with qemu device model)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano,

--On 12 December 2012 13:04:04 +0000 Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> QEMU supports both internal and external snapshots, but I think that the
> internal ones are more interesting (snapshots saved within the qcow2
> file). Which ones are you using?

We currently use external snapshots plus rebase. Rebase support has made
external snapshots far more usable, particularly for CoW base images.
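
For reference, a minimal sketch of that workflow with qemu-img (modern
command syntax; the file names are illustrative, not ours):

```shell
# Create a CoW overlay (external snapshot) on top of a read-only base image.
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2

# Later, rebase the overlay onto a different backing file; qemu-img copies
# into the overlay any clusters that differ between old and new base.
qemu-img rebase -b new-base.qcow2 -F qcow2 overlay.qcow2
```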

I shall try to have a look at this, but getting live migration working with
the device model is the top priority for us. I had a quick look this morning,
and it seems most of the changes are to the JSON library!

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 15:34:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 15:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TioJS-0001lA-V3; Wed, 12 Dec 2012 15:33:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TioJQ-0001l5-R7
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 15:33:33 +0000
Received: from [85.158.138.51:49614] by server-9.bemta-3.messagelabs.com id
	81/5C-11948-7C3A8C05; Wed, 12 Dec 2012 15:33:27 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355326406!22248109!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22120 invoked from network); 12 Dec 2012 15:33:26 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 15:33:26 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so316418wgb.32
	for <xen-devel@lists.xen.org>; Wed, 12 Dec 2012 07:33:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=8Piy6qP1mpj2+DpmotTIFWNTZQAXXwxscC8HeCeL6Jw=;
	b=kDA/YHaZlwOiYM5bmT9O6I8XschYzTa9o9e0ylLaOKV+JwWk6paI7Yk+nZbTO9jxsB
	LpOxmbwJy0wm/Llcn5rV0bj/l2Q4UaZmNG9Z3D8qOdcIG1VSSFxvh+Czmnap3alfKtGi
	q2forUVy9m4mrUSnHDyhg2HahLYXIDCJVqmCbtYwl+7qbxoDxhYx9KctNw21hCfSruu/
	/GBtVN4T3tQYQduqjmAepUB1Yi2p/TP+ZUIt6QSTH4h16MqCITiOk9aNJtcJccggxayo
	8UB7MS+0cCMhPwMZVV0ROJ46EcfqGwbidV6emTuC+E04hXi0Eof+a+Gk1DL7bx695Z5P
	VfQQ==
Received: by 10.180.92.74 with SMTP id ck10mr18736686wib.9.1355326406098;
	Wed, 12 Dec 2012 07:33:26 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id bz12sm21713316wib.5.2012.12.12.07.33.20
	(version=SSLv3 cipher=OTHER); Wed, 12 Dec 2012 07:33:25 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Wed, 12 Dec 2012 15:33:14 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <CCEE543A.55784%keir@xen.org>
Thread-Topic: [PATCH V5] x86/kexec: Change NMI and MCE handling on kexec path
Thread-Index: Ac3Yff/vMmtbpMdeOEainzElAgeNog==
In-Reply-To: <50C88B9E02000078000AFE55@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Tim Deegan <tim@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
	on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/2012 12:50, "Jan Beulich" <JBeulich@suse.com> wrote:

>>   * Explicitly enable NMIs.
>> 
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> (but for committing I'd want at least one other knowledgeable
> person's ack)

It looks okay to me too. Should I wait for an Ack from Tim as well?

 -- Keir

>> --
>> 
>> Changes since v4:
>>  * Changed stale comments, corrected spelling, and fixed coding style.
>> 
>> Changes since v3:
>>  * Added IDT entry helpers to safely update NMI/MCE entries.
>> 
>> Changes since v2:
>> 
>>  * Rework the alteration of the stack tables to completely remove the
>>    possibility of a PV domain getting very lucky with the "NMI or MCE in
>>    a 1 instruction race window on sysret" and managing to execute code
>>    in the hypervisor context.
>>  * Make use of set_ist() from the previous patch in the series to avoid
>>    open-coding the IST manipulation.
>> 
>> Changes since v1:
>>  * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
>>    during the original refactoring.
>>  * Fold trap_nop into the middle of enable_nmis to reuse the iret.
>>  * Expand comments in areas as per Tim's suggestions.
>> 
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
>> --- a/xen/arch/x86/crash.c
>> +++ b/xen/arch/x86/crash.c
>> @@ -32,41 +32,127 @@
>>  
>>  static atomic_t waiting_for_crash_ipi;
>>  static unsigned int crashing_cpu;
>> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>>  
>> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
>> +/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
>> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>>  {
>> -    /* Don't do anything if this handler is invoked on crashing cpu.
>> -     * Otherwise, system will completely hang. Crashing cpu can get
>> -     * an NMI if system was initially booted with nmi_watchdog parameter.
>> +    int cpu = smp_processor_id();
>> +
>> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
>> +    ASSERT( cpu != crashing_cpu );
>> +
>> +    /* Save crash information and shut down CPU.  Attempt only once. */
>> +    if ( ! this_cpu(crash_save_done) )
>> +    {
>> +        /* Disable the interrupt stack table for the MCE handler.  This
>> +         * prevents race conditions between clearing MCIP and receiving a
>> +         * new MCE, during which the exception frame would be clobbered
>> +         * and the MCE handler fall into an infinite loop.  We are soon
>> +         * going to disable the NMI watchdog, so the loop would not be
>> +         * caught.
>> +         *
>> +         * We do not need to change the NMI IST, as the nmi_crash
>> +         * handler is immune to corrupt exception frames, by virtue of
>> +         * being designed never to return.
>> +         *
>> +         * This update is safe from a security point of view, as this
>> +         * pcpu is never going to try to sysret back to a PV vcpu.
>> +         */
>> +        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
>> +
>> +        kexec_crash_save_cpu();
>> +        __stop_this_cpu();
>> +
>> +        this_cpu(crash_save_done) = 1;
>> +        atomic_dec(&waiting_for_crash_ipi);
>> +    }
>> +
>> +    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
>> +     * back to its boot state, so we are unable to rely on the regular
>> +     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
>> +     * (The likely scenario is that we have reverted from x2apic mode to
>> +     * xapic, at which point #GPFs will occur if we use the apic_*
>> +     * functions)
>> +     *
>> +     * The ICR and APIC ID of the LAPIC are still valid even during
>> +     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
>> +     * can deliberately queue up another NMI at the LAPIC which will not
>> +     * be delivered as the hardware NMI latch is currently in effect.
>> +     * This means that if NMIs become unlatched (e.g. following a
>> +     * non-fatal MCE), the LAPIC will force us back here rather than
>> +     * wandering back into regular Xen code.
>>       */
>> -    if ( cpu == crashing_cpu )
>> -        return 1;
>> -    local_irq_disable();
>> +    switch ( current_local_apic_mode() )
>> +    {
>> +        u32 apic_id;
>>  
>> -    kexec_crash_save_cpu();
>> +    case APIC_MODE_X2APIC:
>> +        apic_id = apic_rdmsr(APIC_ID);
>>  
>> -    __stop_this_cpu();
>> +        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
>> +        break;
>>  
>> -    atomic_dec(&waiting_for_crash_ipi);
>> +    case APIC_MODE_XAPIC:
>> +        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
>> +
>> +        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
>> +            cpu_relax();
>> +
>> +        apic_mem_write(APIC_ICR2, apic_id << 24);
>> +        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
>> +        break;
>> +
>> +    default:
>> +        break;
>> +    }
>>  
>>      for ( ; ; )
>>          halt();
>> -
>> -    return 1;
>>  }
>>  
>>  static void nmi_shootdown_cpus(void)
>>  {
>>      unsigned long msecs;
>> +    int i, cpu = smp_processor_id();
>>  
>>      local_irq_disable();
>>  
>> -    crashing_cpu = smp_processor_id();
>> +    crashing_cpu = cpu;
>>      local_irq_count(crashing_cpu) = 0;
>>  
>>      atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
>> -    /* Would it be better to replace the trap vector here? */
>> -    set_nmi_callback(crash_nmi_callback);
>> +
>> +    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash which
>> +     * invokes do_nmi_crash (above), which causes them to write state and
>> +     * fall into a loop.  The crashing pcpu gets the nop handler to
>> +     * cause it to return to this function ASAP.
>> +     */
>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>> +        if ( idt_tables[i] )
>> +        {
>> +
>> +            if ( i == cpu )
>> +            {
>> +                /* Disable the interrupt stack tables for this cpu's MCE
>> +                 * and NMI handlers, and alter the NMI handler to have
>> +                 * no operation.  Disabling the stack tables prevents
>> +                 * stack corruption race conditions, while changing the
>> +                 * handler helps prevent cascading faults; we are
>> +                 * certainly going to crash by this point.
>> +                 *
>> +                 * This update is safe from a security point of view, as
>> +                 * this pcpu is never going to try to sysret back to a
>> +                 * PV vcpu.
>> +                 */
>> +                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
>> +                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
>> +            }
>> +            else
>> +                /* Do not update stack table for other pcpus. */
>> +                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
>> +        }
>> +
>>      /* Ensure the new callback function is set before sending out the NMI. */
>>      wmb();
>>  
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/machine_kexec.c
>> --- a/xen/arch/x86/machine_kexec.c
>> +++ b/xen/arch/x86/machine_kexec.c
>> @@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
>>          .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
>>          .limit = LAST_RESERVED_GDT_BYTE
>>      };
>> +    int i;
>>  
>>      /* We are about to permanently jump out of the Xen context into the kexec
>>       * purgatory code.  We really don't want to be still servicing interrupts.
>>       */
>>      local_irq_disable();
>>  
>> +    /* Now regular interrupts are disabled, we need to reduce the impact
>> +     * of interrupts not disabled by 'cli'.
>> +     *
>> +     * The NMI handlers have already been set up by nmi_shootdown_cpus().
>> +     * All pcpus other than us have the nmi_crash handler, while we have
>> +     * the nop handler.
>> +     *
>> +     * The MCE handlers touch extensive areas of Xen code and data.  At this
>> +     * point, there is nothing we can usefully do, so set the nop handler.
>> +     */
>> +    for ( i = 0; i < nr_cpu_ids; ++i )
>> +        if ( idt_tables[i] )
>> +            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
>> +
>> +    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
>> +     * not like running with NMIs disabled. */
>> +    enable_nmis();
>> +
>>      /*
>>       * compat_machine_kexec() returns to idle pagetables, which requires us
>>       * to be running on a static GDT mapping (idle pagetables have no GDT
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -635,11 +635,45 @@ ENTRY(nmi)
>>          movl  $TRAP_nmi,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +ENTRY(nmi_crash)
>> +        cli
>> +        pushq $0
>> +        movl $TRAP_nmi,4(%rsp)
>> +        SAVE_ALL
>> +        movq %rsp,%rdi
>> +        callq do_nmi_crash /* Does not return */
>> +        ud2
>> +
>>  ENTRY(machine_check)
>>          pushq $0
>>          movl  $TRAP_machine_check,4(%rsp)
>>          jmp   handle_ist_exception
>>  
>> +/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
>> +ENTRY(enable_nmis)
>> +        movq  %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
>> +
>> +        iretq /* Disable the hardware NMI latch */
>> +1:
>> +        retq
>> +
>> +/* No op trap handler.  Required for kexec crash path.  This is not
>> + * declared with the ENTRY() macro to avoid wasted alignment space.
>> + */
>> +.globl trap_nop
>> +trap_nop:
>> +        iretq
>> +
>> +
>> +
>>  .section .rodata, "a", @progbits
>>  
>>  ENTRY(exception_table)
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/desc.h
>> --- a/xen/include/asm-x86/desc.h
>> +++ b/xen/include/asm-x86/desc.h
>> @@ -106,6 +106,21 @@ typedef struct {
>>      u64 a, b;
>>  } idt_entry_t;
>>  
>> +/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
>> + * bits of the address not changing, which is a safe assumption as all
>> + * functions we are likely to load will live inside the 1GB
>> + * code/data/bss address range.
>> + *
>> + * Ideally, we would use cmpxchg16b, but this is not supported on some
>> + * old AMD 64bit capable processors, and has no safe equivalent.
>> + */
>> +static inline void _write_gate_lower(volatile idt_entry_t *gate,
>> +                                     const idt_entry_t *new)
>> +{
>> +    ASSERT(gate->b == new->b);
>> +    gate->a = new->a;
>> +}
>> +
>>  #define _set_gate(gate_addr,type,dpl,addr)               \
>>  do {                                                     \
>>      (gate_addr)->a = 0;                                  \
>> @@ -122,6 +137,36 @@ do {
>>          (1UL << 47);                                     \
>>  } while (0)
>>  
>> +static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
>> +                                   unsigned long dpl, void *addr)
>> +{
>> +    idt_entry_t idte;
>> +    idte.b = gate->b;
>> +    idte.a =
>> +        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
>> +        ((unsigned long)(dpl) << 45) |
>> +        ((unsigned long)(type) << 40) |
>> +        ((unsigned long)(addr) & 0xFFFFUL) |
>> +        ((unsigned long)__HYPERVISOR_CS64 << 16) |
>> +        (1UL << 47);
>> +    _write_gate_lower(gate, &idte);
>> +}
>> +
>> +/* Update the lower half handler of an IDT Entry, without changing any
>> + * other configuration. */
>> +static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
>> +{
>> +    idt_entry_t idte;
>> +    idte.a = gate->a;
>> +
>> +    idte.b = ((unsigned long)(addr) >> 32);
>> +    idte.a &= 0x0000FFFFFFFF0000ULL;
>> +    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
>> +        ((unsigned long)(addr) & 0xFFFFUL);
>> +
>> +    _write_gate_lower(gate, &idte);
>> +}
>> +
>>  #define _set_tssldt_desc(desc,addr,limit,type)           \
>>  do {                                                     \
>>      (desc)[0].b = (desc)[1].b = 0;                       \
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/processor.h
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
>>  DECLARE_TRAP_HANDLER(divide_error);
>>  DECLARE_TRAP_HANDLER(debug);
>>  DECLARE_TRAP_HANDLER(nmi);
>> +DECLARE_TRAP_HANDLER(nmi_crash);
>>  DECLARE_TRAP_HANDLER(int3);
>>  DECLARE_TRAP_HANDLER(overflow);
>>  DECLARE_TRAP_HANDLER(bounds);
>> @@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>>  #undef DECLARE_TRAP_HANDLER
>>  
>> +void trap_nop(void);
>> +void enable_nmis(void);
>> +
>>  void syscall_enter(void);
>>  void sysenter_entry(void);
>>  void sysenter_eflags_saved(void);
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> +    _write_gate_lower(gate, &idte);
>> +}
>> +
>> +/* Update the lower half handler of an IDT Entry, without changing any
>> + * other configuration. */
>> +static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
>> +{
>> +    idt_entry_t idte;
>> +    idte.a = gate->a;
>> +
>> +    idte.b = ((unsigned long)(addr) >> 32);
>> +    idte.a &= 0x0000FFFFFFFF0000ULL;
>> +    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
>> +        ((unsigned long)(addr) & 0xFFFFUL);
>> +
>> +    _write_gate_lower(gate, &idte);
>> +}
>> +
>>  #define _set_tssldt_desc(desc,addr,limit,type)           \
>>  do {                                                     \
>>      (desc)[0].b = (desc)[1].b = 0;                       \
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/include/asm-x86/processor.h
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
>>  DECLARE_TRAP_HANDLER(divide_error);
>>  DECLARE_TRAP_HANDLER(debug);
>>  DECLARE_TRAP_HANDLER(nmi);
>> +DECLARE_TRAP_HANDLER(nmi_crash);
>>  DECLARE_TRAP_HANDLER(int3);
>>  DECLARE_TRAP_HANDLER(overflow);
>>  DECLARE_TRAP_HANDLER(bounds);
>> @@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
>>  DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
>>  #undef DECLARE_TRAP_HANDLER
>>  
>> +void trap_nop(void);
>> +void enable_nmis(void);
>> +
>>  void syscall_enter(void);
>>  void sysenter_entry(void);
>>  void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 16:14:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 16:14:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiowI-0002Pb-W9; Wed, 12 Dec 2012 16:13:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TiowH-0002PV-7U
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 16:13:41 +0000
Received: from [85.158.143.35:45663] by server-3.bemta-4.messagelabs.com id
	C3/6A-18211-43DA8C05; Wed, 12 Dec 2012 16:13:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355328790!17032000!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQxMTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21616 invoked from network); 12 Dec 2012 16:13:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 16:13:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Dec 2012 16:12:54 +0000
Message-Id: <50C8BB1802000078000AFEDE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 12 Dec 2012 16:12:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>, "Keir Fraser" <keir@xen.org>
References: <50C88B9E02000078000AFE55@nat28.tlf.novell.com>
	<CCEE543A.55784%keir@xen.org>
In-Reply-To: <CCEE543A.55784%keir@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
 on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.12.12 at 16:33, Keir Fraser <keir@xen.org> wrote:
> On 12/12/2012 12:50, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>>>   * Explicitly enable NMIs.
>>> 
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> 
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> 
>> (but for committing I'd want at least one other knowledgeable
>> person's ack)
> 
> It looks okay to me too. Should I wait for an Ack from Tim too?

An extra pair of eyes certainly wouldn't hurt.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 16:50:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 16:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipUz-0002cq-0z; Wed, 12 Dec 2012 16:49:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TipUx-0002cl-U9
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 16:49:32 +0000
Received: from [85.158.143.99:39354] by server-1.bemta-4.messagelabs.com id
	EE/7F-28401-B95B8C05; Wed, 12 Dec 2012 16:49:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355330969!18001873!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26070 invoked from network); 12 Dec 2012 16:49:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 16:49:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,266,1355097600"; 
   d="scan'208";a="92981"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 16:49:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 16:45:40 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TipRE-0006Fk-Gi; Wed, 12 Dec 2012 16:45:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TipRE-0001T4-C8;
	Wed, 12 Dec 2012 16:45:40 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.46260.266332.338851@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 16:45:40 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355235125.843.13.camel@zakaz.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<C76A4AA2522F56F728DDCA4A@nimrod.local>
	<1355235125.843.13.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Alex Bligh <alex@alex.org.uk>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model"):
> libxl__xs_read_checked will always either initialise the variable
> (perhaps to NULL) or return an error. On both callsites we check for
> error and "goto out".

Yes.  And the error handling case after the strcmp does indeed handle
got_ret==NULL but the test fails to guard against this before running
strcmp.

> I think the crash is because the code uses got_ret without checking if
> it was NULL, which can happen if the path is not present. Ian (J) does
> that make sense as something which is allowed to happen?

Yes.  I think it is an error somewhere.  I don't understand how this
is happening.  This situation might occur if the qemu crashed and two
separate attempts were made to send a logdirty command, I guess.

> As I said in my early mail I'm not sure why you are getting here at all
> though.

Right.

I think the patch below fixes the segfault but it doesn't fix the
underlying cause.

Ian.

Subject: libxl: qemu trad logdirty: Tolerate ENOENT on ret path

It can happen in error conditions that lds->ret_path doesn't exist,
and libxl__xs_read_checked signals this by setting got_ret=NULL.  If
this happens, fail without crashing.

Reported-by: Alex Bligh <alex@alex.org.uk>,
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 95da18e..7586a6c 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -725,7 +725,7 @@ static void domain_suspend_switch_qemu_xen_traditional_logdirty
             rc = libxl__xs_read_checked(gc, t, lds->ret_path, &got_ret);
             if (rc) goto out;
 
-            if (strcmp(got, got_ret)) {
+            if (!got_ret || strcmp(got, got_ret)) {
                 LOG(ERROR,"controlling logdirty: qemu was already sent"
                     " command `%s' (xenstore path `%s') but result is `%s'",
                     got, lds->cmd_path, got_ret ? got_ret : "<none>");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:04:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipjC-0002xP-43; Wed, 12 Dec 2012 17:04:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TipjA-0002xH-VN
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:04:13 +0000
Received: from [85.158.139.83:22135] by server-8.bemta-5.messagelabs.com id
	F0/4B-15003-C09B8C05; Wed, 12 Dec 2012 17:04:12 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355331850!27835016!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6187 invoked from network); 12 Dec 2012 17:04:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:04:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="94074"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:04:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:04:10 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tipj7-0006Rj-Qt; Wed, 12 Dec 2012 17:04:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tipj7-0001YD-Lu;
	Wed, 12 Dec 2012 17:04:09 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.47369.582575.481507@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:04:09 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C7B518.9080503@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-1-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B518.9080503@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race"):
> Thanks for the patches! I've found some time to test them in the context
> of Xen 4.2 and have some comments. For this patch, only a few nits below.

Thanks for the review and testing.

> > -    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
> > +    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
> >
> 
> Nit, should there be a space between 'fd,' and 'register'?

I did this deliberately because the macro uses token pasting to turn
this into fd_register, at least in many of the uses.

>>Also, not that gcc complained, but register is a keyword.

The token "register" gets pasted together with other things by the
preprocessor before the compiler sees it, so it's correct as far as
the language spec goes.  As for it being potentially confusing,
changing what appears here is difficult without changing the names
in libxl_osevent_hooks.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:14:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tipsu-0003TZ-1A; Wed, 12 Dec 2012 17:14:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tipss-0003TS-DI
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:14:14 +0000
Received: from [85.158.139.83:49813] by server-13.bemta-5.messagelabs.com id
	39/EA-10716-56BB8C05; Wed, 12 Dec 2012 17:14:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355332452!22224816!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13918 invoked from network); 12 Dec 2012 17:14:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:14:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="94405"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:14:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:14:12 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tipsq-0006ak-D2; Wed, 12 Dec 2012 17:14:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tipsq-0001ZW-82;
	Wed, 12 Dec 2012 17:14:12 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.47971.962603.851882@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:14:11 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C7B974.4050706@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> Ian Jackson wrote:
> > This is a forwards-compatible change for applications using the libxl
> > API, and will hopefully eliminate these races in callback-supplying
> > applications (such as libvirt) without the need for corresponding
> > changes to the application.

When I wrote this, I of course forgot to mention that previously libxl
would never call libxl_osevent_hooks->timeout_modify, and that now it
never calls ->timeout_deregister.

So this change can expose bugs in the application's implementation of
->timeout_modify.

> I'm not sure how to avoid changing the application (libvirt). Currently,
> the libvirt libxl driver removes the timeout from the libvirt event loop
> in the timeout_deregister() function. This function is never called now
> and hence the timeout is never removed. I had to make the following
> change in the libxl driver to use your patches

I think this is because of a bug of the kind I describe above.

> - gettimeofday(&now, NULL);
> - timeout = (abs_t.tv_usec - now.tv_usec) / 1000;
> - timeout += (abs_t.tv_sec - now.tv_sec) * 1000;

Specifically, this code has an integer arithmetic overflow.

If libxl calls this function with a value of abs_t which is more than
about INT_MAX/1000 seconds (24 days on a 32-bit machine) in the past,
the calculation (abs_t.tv_sec - now.tv_sec) * 1000 overflows.

And the patch makes libxl always pass abs_t={0,0}, which is more than
24 days ago unless the code is running during January 1970 :-).

> - virEventUpdateTimeout(timer_info->id, timeout);

Also, what does this do if timeout is negative?

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:15:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipuG-0003Z3-MK; Wed, 12 Dec 2012 17:15:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <crobinso@redhat.com>) id 1TiaXx-0002sc-0A
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 00:51:37 +0000
Received: from [85.158.143.35:42935] by server-1.bemta-4.messagelabs.com id
	97/89-28401-815D7C05; Wed, 12 Dec 2012 00:51:36 +0000
X-Env-Sender: crobinso@redhat.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355273494!4849055!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTYyNTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25315 invoked from network); 12 Dec 2012 00:51:35 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-21.messagelabs.com with SMTP;
	12 Dec 2012 00:51:35 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qBC0pS0q010815
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Dec 2012 19:51:28 -0500
Received: from [10.3.113.129] (ovpn-113-129.phx2.redhat.com [10.3.113.129])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id qBC0pQfj023078; Tue, 11 Dec 2012 19:51:27 -0500
Message-ID: <50C7D50E.5050403@redhat.com>
Date: Tue, 11 Dec 2012 19:51:26 -0500
From: Cole Robinson <crobinso@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1354313593-13358-1-git-send-email-jfehlig@suse.com>
	<50C119C7.807@redhat.com> <50C1284B.2060303@suse.com>
	<1354897102.23012.49.camel@Abyss>
In-Reply-To: <1354897102.23012.49.camel@Abyss>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Wed, 12 Dec 2012 17:15:38 +0000
Cc: "xen@lists.fedoraproject.org" <xen@lists.fedoraproject.org>,
	libvir-list@redhat.com, xen-devel@lists.xen.org,
	Jim Fehlig <jfehlig@suse.com>, virt <virt@lists.fedoraproject.org>,
	Eric Blake <eblake@redhat.com>
Subject: Re: [Xen-devel] [Fedora-xen] [libvirt] [PATCH] Convert libxl driver
 to Xen 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/07/2012 11:18 AM, Dario Faggioli wrote:
> On Thu, 2012-12-06 at 16:20 -0700, Jim Fehlig wrote: 
>>>> V2:
>>>>   Remove 128 vcpu limit.
>>>>   Remove split_string_into_string_list() function copied from xen
>>>>   sources since libvirt now has virStringSplit().
>>>>     
>>>
>>> Tested on Fedora 18, with its use of xen 4.2.  ACK; let's get this pushed.
>>>   
>>
>> Thanks, pushed.
>>
> Cool!
> 
> (letting virt@lists.fedoraproject.org and xen@lists.fedoraproject.org
> <xen@lists.fedoraproject.org> know about that!)
> 

I pushed an F18 build yesterday, but it didn't contain the libxl patch since
it wasn't a trivial backport to 0.10.2 and I wanted to get a build out ASAP
since it was fixing a CVE. I'll take another stab at the libxl backport later
this week.

- Cole


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:15:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipuF-0003Yl-Sx; Wed, 12 Dec 2012 17:15:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <f.soltani298@gmail.com>) id 1TiRwW-000239-AB
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:40:24 +0000
Received: from [85.158.139.211:8878] by server-5.bemta-5.messagelabs.com id
	C0/59-22648-7E357C05; Tue, 11 Dec 2012 15:40:23 +0000
X-Env-Sender: f.soltani298@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355240421!19226284!1
X-Originating-IP: [209.85.216.169]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30695 invoked from network); 11 Dec 2012 15:40:22 -0000
Received: from mail-qc0-f169.google.com (HELO mail-qc0-f169.google.com)
	(209.85.216.169)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:40:22 -0000
Received: by mail-qc0-f169.google.com with SMTP id t2so2186735qcq.28
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 07:40:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=b4qkLIZwOEZUPb/AzX10XUxy0rx6hbbCeeBzT17CxDo=;
	b=Tz7IRYZflcv+gjbz7YVc2l55xkcaC8po4mgBZnZFZWsbhfDVZ39OFkjxyLpicG4+wD
	0UTMQYUH+Rozh56CEgVA59zDEG4BwL1+NEGRuMzKLncBAV8lZehiTQsws0y2DqCE2oL0
	9vkTVNMB0J78MLMgVFRSCjMZmXjH2wlt3UP6ea+DQZlzVqISJfZIvK/jO2Fhl6BEi9P9
	msl37+lp6nJt7EaeRFu3v/MJ/H8LirEowKO+7TpDrUMmS8tMWO8cZEmzvU/SvMn7m5cs
	cQCcekt0JYhSAA9eCJzVimlyTjy8SEPcFyYmnbyDDjUuXSusUZjzxmQXpbEdDkrE+hik
	rPrg==
MIME-Version: 1.0
Received: by 10.49.2.35 with SMTP id 3mr39786127qer.36.1355240421162; Tue, 11
	Dec 2012 07:40:21 -0800 (PST)
Received: by 10.49.13.230 with HTTP; Tue, 11 Dec 2012 07:40:20 -0800 (PST)
Date: Tue, 11 Dec 2012 19:10:20 +0330
Message-ID: <CAKLxbwJ92Fs9YKDKSjkCPok5-M7mpB-ttXJZXp-hexRpT6ZBdg@mail.gmail.com>
From: fahimeh soltaninejad <f.soltani298@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 12 Dec 2012 17:15:38 +0000
Subject: [Xen-devel] hw to assign devices with VT-d in (fedora with Xen
	hypervisor)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0248726795859437867=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0248726795859437867==
Content-Type: multipart/alternative; boundary=047d7b677700e37fa504d0957f71

--047d7b677700e37fa504d0957f71
Content-Type: text/plain; charset=ISO-8859-1

Hi,
I have installed the Xen hypervisor on Fedora 17 and now I want to assign
devices directly to a guest OS through VT-d. Does anyone know how I can do
it?
I found some solutions, but they included steps to be performed while
installing Xen, whereas I installed Xen on Fedora. What I would like to
know is: can I assign a device to a guest OS through the VT-d capability
with Xen installed on Fedora 17?
Thanks.

--047d7b677700e37fa504d0957f71
Content-Type: text/html; charset=ISO-8859-1

Hi,<br>
I have installed the Xen hypervisor on Fedora 17 and now I want to assign
devices directly to a guest OS through VT-d. Does anyone know how I can
do it?<br>
I found some solutions, but they included steps to be performed while
installing Xen, whereas I installed Xen on Fedora. What I would like to
know is: can I assign a device to a guest OS through the VT-d capability
with Xen installed on Fedora 17?<br>
Thanks.

--047d7b677700e37fa504d0957f71--


--===============0248726795859437867==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0248726795859437867==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 17:15:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipuG-0003Yu-9A; Wed, 12 Dec 2012 17:15:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sebastien.fremal@gmail.com>) id 1TiRwh-000243-PN
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:40:36 +0000
Received: from [85.158.139.211:11980] by server-14.bemta-5.messagelabs.com id
	95/5D-09538-2F357C05; Tue, 11 Dec 2012 15:40:34 +0000
X-Env-Sender: sebastien.fremal@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355240427!19950981!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23138 invoked from network); 11 Dec 2012 15:40:29 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:40:29 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so4491521vbi.32
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 07:40:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=gN2jhu9xHzl1ZcUBJf6qjAdjsi6gb88uzuPwAG+EulU=;
	b=mlQhUoxrLSV8rS/EDBz1TGHQAi4tafz4mQKjXf36O1b0lWdo6GVqinTvXzldBBzk+q
	EwUNR7q/mgmFNZZTEIYJXpWSW5m/qnBojciWbA3kagEX6OlhgMqBEkbL3utHxNbAHHBm
	QIKYX19X6uaxbP7PJ6u7e051wtD/7V1JzUrbwFR/X2HgMNgGW9k6JP/M85oOCakYcLEQ
	d6JLZk2vAUVpO5VkDW3i8Qk9V4G8qKNYLZw0dpcpePrj1WKQ9cIilKYx7z1somo81qFH
	CTvSWp4EkjbDoWMNKI1WTPP3iYDKlr38yf2vCnCNp6h7f0MTXZ/OxAcwXxW7qBXhX2Z4
	sY3Q==
Received: by 10.220.150.145 with SMTP id y17mr11219668vcv.11.1355240426989;
	Tue, 11 Dec 2012 07:40:26 -0800 (PST)
MIME-Version: 1.0
Received: by 10.58.233.110 with HTTP; Tue, 11 Dec 2012 07:40:06 -0800 (PST)
From: =?ISO-8859-1?Q?S=E9bastien_Fr=E9mal?= <sebastien.fremal@gmail.com>
Date: Tue, 11 Dec 2012 16:40:06 +0100
Message-ID: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 12 Dec 2012 17:15:38 +0000
Subject: [Xen-devel] Stubdom compilation fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5472436811425591345=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5472436811425591345==
Content-Type: multipart/alternative; boundary=f46d043d67e93c684004d09580dd

--f46d043d67e93c684004d09580dd
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello,

I'm trying to compile Xen 4.2.0 on Ubuntu 11.10, but the build fails when
compiling the stubdom part. Since the compiler can't find stddef.h, it
can't build the libraries and eventually aborts. You can find a sample of
the errors below:

Making all in misc
make[6]: entrant dans le r=E9pertoire =AB
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/mi=
sc
=BB
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64
-DPACKAGE_NAME=\"newlib\" -DPACKAGE_TARNAME=\"newlib\"
-DPACKAGE_VERSION=\"1.16.0\" -DPACKAGE_STRING=\"newlib\ 1.16.0\"
-DPACKAGE_BUGREPORT=\"\" -I.
-I../../../../../newlib-1.16.0/newlib/libc/misc -O2 -DMISSING_SYSCALL_NAMES
-fno-builtin      -O2 -g -g -O2   -c -o lib_a-__dprintf.o `test -f
'__dprintf.c' || echo
'../../../../../newlib-1.16.0/newlib/libc/misc/'`__dprintf.c
In file included from
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/reent.h:14:0,
                 from
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/reent.h:48,
                 from
../../../../../newlib-1.16.0/newlib/libc/misc/__dprintf.c:8:
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/_types.h:67:20:
fatal error: stddef.h: No such file or directory
compilation terminated.
make[6]: *** [lib_a-__dprintf.o] Error 1
make[6]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/misc'
Making all in machine
make[6]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
Making all in x86_64
make[7]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64'
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2
-DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-setjmp.o `test -f
'setjmp.S' || echo
'../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`setjmp.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2
-DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memcpy.o `test -f
'memcpy.S' || echo
'../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memcpy.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2
-DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memset.o `test -f
'memset.S' || echo
'../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memset.S
rm -f lib.a
ar cru lib.a lib_a-setjmp.o lib_a-memcpy.o lib_a-memset.o
ranlib lib.a
make[7]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64'
Making all in .
make[7]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
rm -f lib.a
ln x86_64/lib.a lib.a >/dev/null 2>/dev/null || \
     cp x86_64/lib.a lib.a
make[7]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
make[6]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
Making all in .
make[6]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
rm -f libc.a
rm -rf tmp
mkdir tmp
cd tmp; \
     for i in argz/lib.a stdlib/lib.a ctype/lib.a search/lib.a stdio/lib.a
string/lib.a signal/lib.a time/lib.a locale/lib.a reent/lib.a  errno/lib.a
misc/lib.a     machine/lib.a ; do \
       ar x ../$i; \
     done; \
    ar rc ../libc.a *.o
ar: ../argz/lib.a: No such file or directory
ar: ../stdlib/lib.a: No such file or directory
ar: ../ctype/lib.a: No such file or directory
ar: ../search/lib.a: No such file or directory
ar: ../stdio/lib.a: No such file or directory
ar: ../string/lib.a: No such file or directory
ar: ../signal/lib.a: No such file or directory
ar: ../time/lib.a: No such file or directory
ar: ../locale/lib.a: No such file or directory
ar: ../reent/lib.a: No such file or directory
ar: ../errno/lib.a: No such file or directory
ar: ../misc/lib.a: No such file or directory
ranlib libc.a
rm -rf tmp
make[6]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[5]: *** [all-recursive] Error 1
make[5]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[3]: *** [all] Error 2
make[3]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[2]: *** [all-target-newlib] Error 2
make[2]: Leaving directory '/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make[1]: *** [all] Error 2
make[1]: Leaving directory '/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make: *** [cross-root-x86_64/x86_64-xen-elf/lib/libc.a] Error 2

I searched on Google, but no solution was proposed. I exported the path
to stddef.h:

export CPPFLAGS=-I/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include/

But now I get a new error, this time in the Xen code itself:
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/test.o: In function
`app_main':
/home/fremals/xen-4.2.0/extras/mini-os/test.c:441: multiple definition of
`app_main'
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/main.o:/home/fremals/xen-4.2.0/extras/mini-os/main.c:187:
first defined here
make[1]: *** [/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/mini-os]
Error 1
make[1]: Leaving directory '/home/fremals/xen-4.2.0/extras/mini-os'
make: *** [c-stubdom] Error 2

Can you please help me resolve these errors?

Thank you!

Best regards,

Sebastien Fremal

--f46d043d67e93c684004d09580dd--


--===============5472436811425591345==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5472436811425591345==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 17:15:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipuG-0003Yu-9A; Wed, 12 Dec 2012 17:15:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sebastien.fremal@gmail.com>) id 1TiRwh-000243-PN
	for xen-devel@lists.xen.org; Tue, 11 Dec 2012 15:40:36 +0000
Received: from [85.158.139.211:11980] by server-14.bemta-5.messagelabs.com id
	95/5D-09538-2F357C05; Tue, 11 Dec 2012 15:40:34 +0000
X-Env-Sender: sebastien.fremal@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355240427!19950981!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23138 invoked from network); 11 Dec 2012 15:40:29 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 15:40:29 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so4491521vbi.32
	for <xen-devel@lists.xen.org>; Tue, 11 Dec 2012 07:40:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=gN2jhu9xHzl1ZcUBJf6qjAdjsi6gb88uzuPwAG+EulU=;
	b=mlQhUoxrLSV8rS/EDBz1TGHQAi4tafz4mQKjXf36O1b0lWdo6GVqinTvXzldBBzk+q
	EwUNR7q/mgmFNZZTEIYJXpWSW5m/qnBojciWbA3kagEX6OlhgMqBEkbL3utHxNbAHHBm
	QIKYX19X6uaxbP7PJ6u7e051wtD/7V1JzUrbwFR/X2HgMNgGW9k6JP/M85oOCakYcLEQ
	d6JLZk2vAUVpO5VkDW3i8Qk9V4G8qKNYLZw0dpcpePrj1WKQ9cIilKYx7z1somo81qFH
	CTvSWp4EkjbDoWMNKI1WTPP3iYDKlr38yf2vCnCNp6h7f0MTXZ/OxAcwXxW7qBXhX2Z4
	sY3Q==
Received: by 10.220.150.145 with SMTP id y17mr11219668vcv.11.1355240426989;
	Tue, 11 Dec 2012 07:40:26 -0800 (PST)
MIME-Version: 1.0
Received: by 10.58.233.110 with HTTP; Tue, 11 Dec 2012 07:40:06 -0800 (PST)
From: =?ISO-8859-1?Q?S=E9bastien_Fr=E9mal?= <sebastien.fremal@gmail.com>
Date: Tue, 11 Dec 2012 16:40:06 +0100
Message-ID: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 12 Dec 2012 17:15:38 +0000
Subject: [Xen-devel] Stubdom compilation fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5472436811425591345=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5472436811425591345==
Content-Type: multipart/alternative; boundary=f46d043d67e93c684004d09580dd

--f46d043d67e93c684004d09580dd
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello,

I'm trying to compile Xen 4.2.0 on Ubuntu 11.10, but the build fails while
compiling the stubdom part. Since it cannot find stddef.h, it cannot create
the libraries and eventually stops. You can find a sample of the errors below:

Making all in misc
make[6]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/misc'
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64
-DPACKAGE_NAME=\"newlib\" -DPACKAGE_TARNAME=\"newlib\"
-DPACKAGE_VERSION=\"1.16.0\" -DPACKAGE_STRING=\"newlib\ 1.16.0\"
-DPACKAGE_BUGREPORT=\"\" -I.
-I../../../../../newlib-1.16.0/newlib/libc/misc -O2 -DMISSING_SYSCALL_NAMES
-fno-builtin      -O2 -g -g -O2   -c -o lib_a-__dprintf.o `test -f
'__dprintf.c' || echo
'../../../../../newlib-1.16.0/newlib/libc/misc/'`__dprintf.c
In file included from
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/reent.h:14:0,
                 from
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/reent.h:48,
                 from
../../../../../newlib-1.16.0/newlib/libc/misc/__dprintf.c:8:
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/_types.h:67:20:
fatal error: stddef.h: No such file or directory
compilation terminated.
make[6]: *** [lib_a-__dprintf.o] Error 1
make[6]: Leaving directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/misc'
Making all in machine
make[6]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
Making all in x86_64
make[7]: Entering directory
'/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64'
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2
-DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-setjmp.o `test -f
'setjmp.S' || echo
'../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`setjmp.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2
-DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memcpy.o `test -f
'memcpy.S' || echo
'../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memcpy.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include
-D__MINIOS__ -DHAVE_LIBC -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U
__linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem
/home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem
/home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include
-isystem include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem
/home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4
-I/home/fremals/xen-4.2.0/stubdom/include
-I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1
-fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks
-fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99
-Wall -Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions
-D_I386MACH_ALLOW_HW_INTERRUPTS
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/
-isystem
/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include
-isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include
-B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64
-L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys
-L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2
-DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memset.o `test -f
'memset.S' || echo
'../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memset.S
rm -f lib.a
ar cru lib.a lib_a-setjmp.o lib_a-memcpy.o lib_a-memset.o
ranlib lib.a
make[7]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64'
Making all in .
make[7]: Entering directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
rm -f lib.a
ln x86_64/lib.a lib.a >/dev/null 2>/dev/null || \
     cp x86_64/lib.a lib.a
make[7]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
make[6]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
Making all in .
make[6]: Entering directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
rm -f libc.a
rm -rf tmp
mkdir tmp
cd tmp; \
     for i in argz/lib.a stdlib/lib.a ctype/lib.a search/lib.a stdio/lib.a
string/lib.a signal/lib.a time/lib.a locale/lib.a reent/lib.a  errno/lib.a
misc/lib.a     machine/lib.a ; do \
       ar x ../$i; \
     done; \
    ar rc ../libc.a *.o
ar: ../argz/lib.a: No such file or directory
ar: ../stdlib/lib.a: No such file or directory
ar: ../ctype/lib.a: No such file or directory
ar: ../search/lib.a: No such file or directory
ar: ../stdio/lib.a: No such file or directory
ar: ../string/lib.a: No such file or directory
ar: ../signal/lib.a: No such file or directory
ar: ../time/lib.a: No such file or directory
ar: ../locale/lib.a: No such file or directory
ar: ../reent/lib.a: No such file or directory
ar: ../errno/lib.a: No such file or directory
ar: ../misc/lib.a: No such file or directory
ranlib libc.a
rm -rf tmp
make[6]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[5]: *** [all-recursive] Error 1
make[5]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[3]: *** [all] Error 2
make[3]: Leaving directory
`/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[2]: *** [all-target-newlib] Error 2
make[2]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make: *** [cross-root-x86_64/x86_64-xen-elf/lib/libc.a] Error 2

I searched on Google, but found no proposed solutions. I exported the path
to stddef.h:

export CPPFLAGS=-I/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include/

But now I have a new error, this time in the Xen code itself:
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/test.o: In function
`app_main':
/home/fremals/xen-4.2.0/extras/mini-os/test.c:441: multiple definition of
`app_main'
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/main.o:/home/fremals/xen-4.2.0/extras/mini-os/main.c:187:
first defined here
make[1]: *** [/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/mini-os]
Error 1
make[1]: Leaving directory `/home/fremals/xen-4.2.0/extras/mini-os'
make: *** [c-stubdom] Error 2
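(For reference, a duplicate-definition link error like this can be narrowed down with `nm` before changing the build. The following self-contained sketch only mimics the app_main clash above with toy files; it is not the mini-os code itself:)

```shell
# Reproduce a "multiple definition" link failure with two toy objects
# that both define app_main, then inspect them with nm.
workdir="$(mktemp -d)"
printf 'int app_main(void) { return 1; }\nint main(void) { return app_main(); }\n' > "$workdir/main.c"
printf 'int app_main(void) { return 2; }\n' > "$workdir/test.c"
gcc -c "$workdir/main.c" -o "$workdir/main.o"
gcc -c "$workdir/test.c" -o "$workdir/test.o"
# Linking fails because both objects export a strong app_main symbol.
if ! gcc "$workdir/main.o" "$workdir/test.o" -o "$workdir/demo" 2>"$workdir/err"; then
    grep 'multiple definition' "$workdir/err"
fi
# nm marks each definition with 'T' (defined in the text section).
nm "$workdir/main.o" | grep app_main
nm "$workdir/test.o" | grep app_main
rm -rf "$workdir"
```

Running the same `nm` check against mini-os-x86_64-c/main.o and test.o would show which object supplies which app_main, and hence which definition the build is meant to keep.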

Can you please help me resolve these errors?

Thank you!

Best regards,

Sebastien Fremal

--f46d043d67e93c684004d09580dd--


--===============5472436811425591345==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5472436811425591345==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 17:15:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TipuF-0003Yd-GV; Wed, 12 Dec 2012 17:15:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1TiQts-0007kF-Gx
	for xen-devel@lists.xensource.com; Tue, 11 Dec 2012 14:33:36 +0000
Received: from [85.158.139.211:29847] by server-16.bemta-5.messagelabs.com id
	47/76-09208-F3447C05; Tue, 11 Dec 2012 14:33:35 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355236415!15775493!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4784 invoked from network); 11 Dec 2012 14:33:35 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Dec 2012 14:33:35 -0000
Received: by mail-wi0-f179.google.com with SMTP id o1so2030062wic.6
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Dec 2012 06:33:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=XjKJPNfBPGpmaT0pNF10pja655KY6jVPk/ocg2X+qt8=;
	b=PTh1C8107+HO5qh2htJSijanU+4vqCtfzFMwQFJYhCvJcG9ZQAY0qhobh3l5HR1Kmp
	H3pd7f42S3YPHAEd1hhDGjtUyhkz3M/52QxK+D9JzI3gLfiF10gXgf1CC5hp58FaHgeT
	eunAnROyEUc7F2m3/UCP/x8YaI7aF7oXRFw2FJvrozz/a5OeJDKFMil9oWe35uXeOyHP
	hX7aHCncwObZqjabr+c12Zl8bW+MR5q50Qns6e7qX2cM53c5ex29W9m6HWlswwePz5e1
	JkmdyurYhwxNQU0k8d0h1WxttXfDYnfoHqxkq5fOEuJo1Emf8lFsMOvEup9QW4WV3Yjh
	c0FQ==
MIME-Version: 1.0
Received: by 10.194.120.132 with SMTP id lc4mr870667wjb.59.1355236414993; Tue,
	11 Dec 2012 06:33:34 -0800 (PST)
Received: by 10.194.64.194 with HTTP; Tue, 11 Dec 2012 06:33:34 -0800 (PST)
In-Reply-To: <CANq0ewsxV1Zb7A1N06Y_r6ogC=L39cWZeLBw-dONMWxcFhc8cw@mail.gmail.com>
References: <CANq0ewsxV1Zb7A1N06Y_r6ogC=L39cWZeLBw-dONMWxcFhc8cw@mail.gmail.com>
Date: Tue, 11 Dec 2012 20:03:34 +0530
Message-ID: <CANq0ewsao_cmc5qd0WOPH2rnPN++ccHU2E3Sp2Tysq4Qa_c6xA@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xensource.com
X-Mailman-Approved-At: Wed, 12 Dec 2012 17:15:38 +0000
Subject: [Xen-devel] How to optimize pre-copy algorithm of xen to minimize
	downtime?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0918991000150564987=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0918991000150564987==
Content-Type: multipart/alternative; boundary=089e0118450a1a337004d0949111

--089e0118450a1a337004d0949111
Content-Type: text/plain; charset=ISO-8859-1

Hello,
         If I want to optimize the performance of the pre-copy algorithm so
that live migration of a virtual machine using Xen occurs with minimum
downtime, how do I do it?

regards,
Digvijay

--089e0118450a1a337004d0949111--


--===============0918991000150564987==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0918991000150564987==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 17:16:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tipui-0003iQ-AV; Wed, 12 Dec 2012 17:16:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tipug-0003ht-Ug
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:16:07 +0000
Received: from [85.158.137.99:51017] by server-9.bemta-3.messagelabs.com id
	FC/F0-11948-6DBB8C05; Wed, 12 Dec 2012 17:16:06 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1355332565!18238525!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16395 invoked from network); 12 Dec 2012 17:16:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="94455"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:16:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:16:04 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tipue-0006bH-QC; Wed, 12 Dec 2012 17:16:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tipue-0001Zr-Lh;
	Wed, 12 Dec 2012 17:16:04 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.48084.559983.299246@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:16:04 +0000
To: Jim Fehlig <jfehlig@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
In-Reply-To: <20680.47971.962603.851882@mariner.uk.xensource.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> > - gettimeofday(&now, NULL);
> > - timeout = (abs_t.tv_usec - now.tv_usec) / 1000;
> > - timeout += (abs_t.tv_sec - now.tv_sec) * 1000;
> 
> Specifically, this code has an integer arithmetic overflow.

If you look at beforepoll_internal in libxl_event.c, near line 859,
you can see how I tackled this same problem there.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:20:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tipyn-0004NI-1V; Wed, 12 Dec 2012 17:20:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1Tipyl-0004N6-1F
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:20:19 +0000
Received: from [193.109.254.147:38319] by server-9.bemta-14.messagelabs.com id
	9C/7B-24482-2DCB8C05; Wed, 12 Dec 2012 17:20:18 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355332815!2046336!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22224 invoked from network); 12 Dec 2012 17:20:17 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 17:20:17 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Wed, 12 Dec 2012 10:20:12 -0700
Message-ID: <50C8BCCB.3000802@suse.com>
Date: Wed, 12 Dec 2012 10:20:11 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-1-git-send-email-ian.jackson@eu.citrix.com>	<50C7B518.9080503@suse.com>
	<20680.47369.582575.481507@mariner.uk.xensource.com>
In-Reply-To: <20680.47369.582575.481507@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 1/2] libxl: fix stale fd event callback race"):
>   
>> Thanks for the patches! I've found some time to test them in the context
>> of Xen 4.2 and have some comments. For this patch, only a few nits below.
>>     
>
> Thanks for the review and testing.
>
>   
>>> -    rc = OSEVENT_HOOK(fd_register, fd, &ev->for_app_reg, events, ev);
>>> +    rc = OSEVENT_HOOK(fd,register, alloc, fd, &ev->nexus->for_app_reg,
>>>
>>>       
>> Nit, should there be a space between 'fd,' and 'register'?
>>     
>
> I did this deliberately because the macro uses token pasting to turn
> this into fd_register, at least in many of the uses.
>   

Ok.

>   
>>> Also, not that gcc complained, but register is a keyword.
>>>       
>
> The token "register" gets pasted together with other things by the
> preprocessor before the compiler sees it, so it's correct as far as
> the language spec goes.  As for it being potentially confusing,
> changing what appears here is difficult without changing the names
> in libxl_osevent_hooks.
>   

Yes, understood.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:26:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:26:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiq4o-0004dd-SF; Wed, 12 Dec 2012 17:26:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1Tiq4o-0004dX-3C
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:26:34 +0000
Received: from [85.158.137.99:22696] by server-7.bemta-3.messagelabs.com id
	C1/49-23008-94EB8C05; Wed, 12 Dec 2012 17:26:33 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355333190!19129853!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30054 invoked from network); 12 Dec 2012 17:26:31 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-4.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 17:26:31 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Wed, 12 Dec 2012 10:26:24 -0700
Message-ID: <50C8BE3F.4040402@suse.com>
Date: Wed, 12 Dec 2012 10:26:23 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
In-Reply-To: <20680.47971.962603.851882@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
>   
>> Ian Jackson wrote:
>>     
>>> This is a forwards-compatible change for applications using the libxl
>>> API, and will hopefully eliminate these races in callback-supplying
>>> applications (such as libvirt) without the need for corresponding
>>> changes to the application.
>>>       
>
> When I wrote this of course I forgot to mention that previously libxl
> would never call libxl_osevent_hooks->timeout_modify and now it never
> calls ->timeout_deregister.
>
> So this change can expose bugs in the application's implementation of
> ->timeout_modify.
>
>   
>> I'm not sure how to avoid changing the application (libvirt). Currently,
>> the libvirt libxl driver removes the timeout from the libvirt event loop
>> in the timeout_deregister() function. This function is never called now
>> and hence the timeout is never removed. I had to make the following
>> change in the libxl driver to use your patches
>>     
>
> I think this is because of a bug of the kind I describe above.
>
>   
>> - gettimeofday(&now, NULL);
>> - timeout = (abs_t.tv_usec - now.tv_usec) / 1000;
>> - timeout += (abs_t.tv_sec - now.tv_sec) * 1000;
>>     
>
> Specifically, this code has an integer arithmetic overflow.
>   

Well, this patch is removing that buggy code :).  After re-reading
libxl_event.h, I'm considering the patch below for the libvirt libxl
driver.  The change is primarily inspired by this comment for
libxl_osevent_occurred_timeout:

/* Implicitly, on entry to this function the timeout has been
 * deregistered.  If _occurred_timeout is called, libxl will not
 * call timeout_deregister; if it wants to requeue the timeout it
 * will call timeout_register again.
 */

Regards,
Jim

diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 302f81c..d4055be 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -184,6 +184,8 @@ static void libxlTimerCallback(int timer ATTRIBUTE_UNUSED, void *timer_v)
 {
     struct libxlOSEventHookTimerInfo *timer_info = timer_v;
 
+    virEventRemoveTimeout(timer_info->id);
+    timer_info->id = -1;
     libxl_osevent_occurred_timeout(timer_info->priv->ctx, timer_info->xl_priv);
 }
 
@@ -225,16 +227,12 @@ static int libxlTimeoutRegisterEventHook(void *priv,
 
 static int libxlTimeoutModifyEventHook(void *priv ATTRIBUTE_UNUSED,
                                        void **hndp,
-                                       struct timeval abs_t)
+                                       struct timeval abs_t ATTRIBUTE_UNUSED)
 {
-    struct timeval now;
-    int timeout;
     struct libxlOSEventHookTimerInfo *timer_info = *hndp;
 
-    gettimeofday(&now, NULL);
-    timeout = (abs_t.tv_usec - now.tv_usec) / 1000;
-    timeout += (abs_t.tv_sec - now.tv_sec) * 1000;
-    virEventUpdateTimeout(timer_info->id, timeout);
+    if (timer_info->id > 0)
+        virEventUpdateTimeout(timer_info->id, 0);
     return 0;
 }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:38:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqFp-0004pF-D0; Wed, 12 Dec 2012 17:37:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiqFn-0004pA-PJ
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:37:55 +0000
Received: from [85.158.139.83:13951] by server-12.bemta-5.messagelabs.com id
	34/64-02275-3F0C8C05; Wed, 12 Dec 2012 17:37:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355333874!23273267!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4382 invoked from network); 12 Dec 2012 17:37:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:37:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="95174"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:37:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:37:54 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TiqFl-0006ps-Ue; Wed, 12 Dec 2012 17:37:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TiqFl-000463-CP;
	Wed, 12 Dec 2012 17:37:53 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.49391.646654.814456@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:37:51 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C8BE3F.4040402@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> Ian Jackson wrote:
> > Specifically, this code has an integer arithmetic overflow.
> 
> Well, this patch is removing that buggy code :).

I think you need to have some code somewhere which does the
complicated arithmetic dance to avoid the integer overflow.  Does your
timer registration function have the same bug?

>  After again reading
> libxl_event.h, I'm considering the below patch in the libvirt libxl
> driver.  The change is primarily inspired by this comment for
> libxl_osevent_occurred_timeout:
...
> /* Implicitly, on entry to this function the timeout has been
>  * deregistered.  If _occurred_timeout is called, libxl will not
>  * call timeout_deregister; if it wants to requeue the timeout it
>  * will call timeout_register again.
>  */

Well, your patch is only correct when used with the new libxl, i.e.
with my patches applied.

> diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
> index 302f81c..d4055be 100644
> --- a/src/libxl/libxl_driver.c
> +++ b/src/libxl/libxl_driver.c
> @@ -184,6 +184,8 @@ static void libxlTimerCallback(int timer ATTRIBUTE_UNUSED, void *timer_v)
>  {
>      struct libxlOSEventHookTimerInfo *timer_info = timer_v;
>  
> +    virEventRemoveTimeout(timer_info->id);
> +    timer_info->id = -1;

I don't understand why you need this.  Doesn't libvirt remove timers
when they fire?  If it doesn't, won't they keep recurring?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:43:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqLC-0004zY-80; Wed, 12 Dec 2012 17:43:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TiqLA-0004zT-VN
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:43:29 +0000
Received: from [85.158.139.211:55522] by server-10.bemta-5.messagelabs.com id
	74/36-13383-042C8C05; Wed, 12 Dec 2012 17:43:28 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355334206!19362547!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE0NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32156 invoked from network); 12 Dec 2012 17:43:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:43:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="436228"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	12 Dec 2012 17:43:26 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 12 Dec 2012 12:43:25 -0500
Message-ID: <50C8C23B.6070003@citrix.com>
Date: Wed, 12 Dec 2012 17:43:23 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CANq0ewsxV1Zb7A1N06Y_r6ogC=L39cWZeLBw-dONMWxcFhc8cw@mail.gmail.com>
	<CANq0ewsao_cmc5qd0WOPH2rnPN++ccHU2E3Sp2Tysq4Qa_c6xA@mail.gmail.com>
In-Reply-To: <CANq0ewsao_cmc5qd0WOPH2rnPN++ccHU2E3Sp2Tysq4Qa_c6xA@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] How to optimize pre-copy algorithm of xen to
 minimize downtime?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/12/12 14:33, digvijay chauhan wrote:
>
> Hello,
>          If I want to optimize the performance of the pre-copy algorithm
> so that live migration of a virtual machine using Xen occurs with
> minimum downtime, then how do I do it?
Having spent quite a bit of time studying the time that a domain is down
during migration in XenServer, I'm not convinced pre-copy is the biggest
part of the downtime, assuming the open-source product behaves in mostly
the same way - although what the host and guest are up to before the
guest gets suspended/paused will have a big impact as well. If the guest
is VERY busy, you may well end up with very little memory that doesn't
still need to be copied in the final pass - I'm not sure how you can
improve that.
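For context, the iterative pre-copy loop under discussion has this general shape: send everything once, then repeatedly resend whatever the guest dirtied, stopping when the dirty set is small enough (or a round limit is hit) and copying the remainder with the guest paused. The sketch below is a toy model of that control flow (all names are hypothetical), not Xen's actual save code:

```c
#include <assert.h>

/* Toy model of iterative pre-copy.  dirty_per_round[r] is the number of
 * pages the guest re-dirtied while round r was being sent; the loop stops
 * once the dirty set is at or below the threshold, and whatever remains
 * is copied with the guest paused.  Illustrative only. */
#define PAGES 8

static int precopy_rounds(const int dirty_per_round[], int max_rounds,
                          int threshold, int *final_dirty)
{
    int r, dirty = PAGES;            /* round 0 sends everything */
    for (r = 0; r < max_rounds; r++) {
        if (dirty <= threshold)
            break;                   /* small enough: stop iterating */
        dirty = dirty_per_round[r];  /* re-dirtied during round r's send */
    }
    *final_dirty = dirty;            /* copied while the guest is paused */
    return r;                        /* number of live send rounds */
}
```

A very busy guest keeps `dirty_per_round` high, so the loop exits on the round limit with a large `final_dirty` - which is the downtime component Mats describes.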

Have you made some measurements to show where the time is spent during
the downtime? What workloads are you looking at? What downtime numbers do
you have at present, what portion of that is actual copy time, and what
is due to the other things that need to happen during migration (e.g.
bringing the virtual disk and network devices down, and then up again on
the new guest)?

My comments above assume there is a FAST link between the old and new
host (my testing has primarily been "localhost" migration, so the new
guest is on the same machine as the old one). Over a slow link you may
find that the copy time is a larger fraction of the total, but again, I'm
not sure there is much that can be done without a rather large amount of
added complexity - and that effort may well be better spent elsewhere,
e.g. on compressing the data sent over the network link.

I have a patch for Linux 3.7rcX that improves the time Dom0 spends
mapping in the guest memory. It has a good impact on the overall copy
time, but not such a large impact on the downtime (since the copy time is
only a small part of the overall downtime, at least in my test cases).
Look in the list archives for Friday 7th of December for the latest
version.

--
Mats
>
> regards,
> Digvijay
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqOX-00056u-Sd; Wed, 12 Dec 2012 17:46:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiqOW-00056m-Dh
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:46:56 +0000
Received: from [85.158.137.99:40006] by server-8.bemta-3.messagelabs.com id
	D8/3A-01297-F03C8C05; Wed, 12 Dec 2012 17:46:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355334414!13612811!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26071 invoked from network); 12 Dec 2012 17:46:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:46:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="95387"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:46:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:46:54 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TiqOU-0006uC-8p; Wed, 12 Dec 2012 17:46:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TiqOU-0006xo-3C;
	Wed, 12 Dec 2012 17:46:54 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.49934.2544.943098@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:46:54 +0000
To: "greg@enjellic.com" <greg@enjellic.com>
In-Reply-To: <201211070207.qA727wF1028601@wind.enjellic.com>
References: <201211070207.qA727wF1028601@wind.enjellic.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dr. Greg Wettstein writes ("[PATCH 2/2] 4.1.2 blktap2 cleanup fixes."):
> ---------------------------------------------------------------------------
> The following patch when applied on top of:
> 
> libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> 
> Establishes correct cleanup behavior for blktap devices.  This patch
> implements the release of the backend device before calling for
> the destruction of the userspace component of the blktap device.
> 
> Without this patch the kernel xen-blkback driver deadlocks with
> the blktap2 user control plane until the IPC channel is terminated by the
> timeout on the select() call.  This results in a noticeable delay
> in the termination of the guest and causes the blktap minor
> number which had been allocated to be orphaned.

This looks plausible.  But shouldn't it be applied to xen-unstable and
Xen 4.2 too ?

>      if (atoi(state) != 4) {
> -        libxl__device_destroy_tapdisk(&gc, be_path);
> -        xs_rm(ctx->xsh, XBT_NULL, be_path);
> +	libxl__device_destroy_tapdisk(&gc, be_path);
>          goto out;

And this contains a whitespace change.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:47:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqOv-00059F-9w; Wed, 12 Dec 2012 17:47:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiqOt-00058v-Lw
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:47:19 +0000
Received: from [193.109.254.147:47886] by server-6.bemta-14.messagelabs.com id
	F2/E7-25153-723C8C05; Wed, 12 Dec 2012 17:47:19 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1355334438!10381640!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26311 invoked from network); 12 Dec 2012 17:47:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:47:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="95397"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:47:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:47:17 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TiqOr-0006uO-No; Wed, 12 Dec 2012 17:47:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TiqOr-0006xx-K2;
	Wed, 12 Dec 2012 17:47:17 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.49957.522626.285995@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:47:17 +0000
To: "greg@enjellic.com" <greg@enjellic.com>
In-Reply-To: <201211070206.qA7261Bp028589@wind.enjellic.com>
References: <201211070206.qA7261Bp028589@wind.enjellic.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dr. Greg Wettstein writes ("[PATCH 1/2] 4.1.2 blktap2 cleanup fixes."):
> ---------------------------------------------------------------------------
> Backport of the following patch from development:
> 
> # User Ian Campbell <[hidden email]>
> # Date 1309968705 -3600
> # Node ID e4781aedf817c5ab36f6f3077e44c43c566a2812
> # Parent 700d0f03d50aa6619d313c1ff6aea7fd429d28a7
> libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> 
> This patch properly terminates the tapdisk2 process(es) started
> to service a virtual block device.
> 
> Signed-off-by: Greg Wettstein <greg@enjellic.com>

Thanks, I have applied this to 4.1.

Ian.

# HG changeset patch
# User Ian Jackson <Ian.Jackson@eu.citrix.com>
# Date 1355334075 0
# Node ID 255a0b6a81041e51fe38ef0e919a6541ffe0d119
# Parent  a866cc5b8235ae05b178b9a904a59569b005f177
From: Ian Campbell <ian.campbell@citrix.com>

libxl: attempt to cleanup tapdisk processes on disk backend destroy.

This patch properly terminates the tapdisk2 process(es) started
to service a virtual block device.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

xen-unstable changeset: 23883:7998217630e2
xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
Signed-off-by: Greg Wettstein <greg@enjellic.com>
Backport-requested-by: Greg Wettstein <greg@enjellic.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff -r a866cc5b8235 -r 255a0b6a8104 tools/blktap2/control/tap-ctl-list.c
--- a/tools/blktap2/control/tap-ctl-list.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/blktap2/control/tap-ctl-list.c	Wed Dec 12 17:41:15 2012 +0000
@@ -506,17 +506,15 @@ out:
 }
 
 int
-tap_ctl_find_minor(const char *type, const char *path)
+tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
 {
 	tap_list_t **list, **_entry;
-	int minor, err;
+	int ret = -ENOENT, err;
 
 	err = tap_ctl_list(&list);
 	if (err)
 		return err;
 
-	minor = -1;
-
 	for (_entry = list; *_entry != NULL; ++_entry) {
 		tap_list_t *entry  = *_entry;
 
@@ -526,11 +524,13 @@ tap_ctl_find_minor(const char *type, con
 		if (path && (!entry->path || strcmp(entry->path, path)))
 			continue;
 
-		minor = entry->minor;
+		*tap = *entry;
+		tap->type = tap->path = NULL;
+		ret = 0;
 		break;
 	}
 
 	tap_ctl_free_list(list);
 
-	return minor >= 0 ? minor : -ENOENT;
+	return ret;
 }
diff -r a866cc5b8235 -r 255a0b6a8104 tools/blktap2/control/tap-ctl.h
--- a/tools/blktap2/control/tap-ctl.h	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/blktap2/control/tap-ctl.h	Wed Dec 12 17:41:15 2012 +0000
@@ -76,7 +76,7 @@ int tap_ctl_get_driver_id(const char *ha
 
 int tap_ctl_list(tap_list_t ***list);
 void tap_ctl_free_list(tap_list_t **list);
-int tap_ctl_find_minor(const char *type, const char *path);
+int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
 
 int tap_ctl_allocate(int *minor, char **devname);
 int tap_ctl_free(const int minor);
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_blktap2.c
--- a/tools/libxl/libxl_blktap2.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_blktap2.c	Wed Dec 12 17:41:15 2012 +0000
@@ -18,6 +18,8 @@
 
 #include "tap-ctl.h"
 
+#include <string.h>
+
 int libxl__blktap_enabled(libxl__gc *gc)
 {
     const char *msg;
@@ -30,12 +32,13 @@ const char *libxl__blktap_devpath(libxl_
 {
     const char *type;
     char *params, *devname = NULL;
-    int minor, err;
+    tap_list_t tap;
+    int err;
 
     type = libxl__device_disk_string_of_format(format);
-    minor = tap_ctl_find_minor(type, disk);
-    if (minor >= 0) {
-        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", minor);
+    err = tap_ctl_find(type, disk, &tap);
+    if (err == 0) {
+        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", tap.minor);
         if (devname)
             return devname;
     }
@@ -49,3 +52,28 @@ const char *libxl__blktap_devpath(libxl_
 
     return NULL;
 }
+
+
+void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+{
+    char *path, *params, *type, *disk;
+    int err;
+    tap_list_t tap;
+
+    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
+    if (!path) return;
+
+    params = libxl__xs_read(gc, XBT_NULL, path);
+    if (!params) return;
+
+    type = params;
+    disk = strchr(params, ':');
+    if (!disk) return;
+
+    *disk++ = '\0';
+
+    err = tap_ctl_find(type, disk, &tap);
+    if (err < 0) return;
+
+    tap_ctl_destroy(tap.id, tap.minor);
+}
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_device.c	Wed Dec 12 17:41:15 2012 +0000
@@ -250,6 +250,7 @@ int libxl__device_destroy(libxl_ctx *ctx
     if (!state)
         goto out;
     if (atoi(state) != 4) {
+        libxl__device_destroy_tapdisk(&gc, be_path);
         xs_rm(ctx->xsh, XBT_NULL, be_path);
         goto out;
     }
@@ -368,6 +369,7 @@ int libxl__devices_destroy(libxl_ctx *ct
             }
         }
     }
+    libxl__device_destroy_tapdisk(&gc, be_path);
 out:
     libxl__free_all(&gc);
     return 0;
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_internal.h	Wed Dec 12 17:41:15 2012 +0000
@@ -314,6 +314,12 @@ _hidden const char *libxl__blktap_devpat
                                  const char *disk,
                                  libxl_disk_format format);
 
+/* libxl__device_destroy_tapdisk:
+ *   Destroys any tapdisk process associated with the backend represented
+ *   by be_path.
+ */
+_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
+
 _hidden char *libxl__uuid2string(libxl__gc *gc, const libxl_uuid uuid);
 
 struct libxl__xen_console_reader {
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_noblktap2.c
--- a/tools/libxl/libxl_noblktap2.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_noblktap2.c	Wed Dec 12 17:41:15 2012 +0000
@@ -27,3 +27,7 @@ const char *libxl__blktap_devpath(libxl_
 {
     return NULL;
 }
+
+void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+{
+}

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:47:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqOv-00059F-9w; Wed, 12 Dec 2012 17:47:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiqOt-00058v-Lw
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 17:47:19 +0000
Received: from [193.109.254.147:47886] by server-6.bemta-14.messagelabs.com id
	F2/E7-25153-723C8C05; Wed, 12 Dec 2012 17:47:19 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1355334438!10381640!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26311 invoked from network); 12 Dec 2012 17:47:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 17:47:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="95397"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 17:47:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 17:47:17 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TiqOr-0006uO-No; Wed, 12 Dec 2012 17:47:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TiqOr-0006xx-K2;
	Wed, 12 Dec 2012 17:47:17 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20680.49957.522626.285995@mariner.uk.xensource.com>
Date: Wed, 12 Dec 2012 17:47:17 +0000
To: "greg@enjellic.com" <greg@enjellic.com>
In-Reply-To: <201211070206.qA7261Bp028589@wind.enjellic.com>
References: <201211070206.qA7261Bp028589@wind.enjellic.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dr. Greg Wettstein writes ("[PATCH 1/2] 4.1.2 blktap2 cleanup fixes."):
> ---------------------------------------------------------------------------
> Backport of the following patch from development:
> 
> # User Ian Campbell <[hidden email]>
> # Date 1309968705 -3600
> # Node ID e4781aedf817c5ab36f6f3077e44c43c566a2812
> # Parent 700d0f03d50aa6619d313c1ff6aea7fd429d28a7
> libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> 
> This patch properly terminates the tapdisk2 process(es) started
> to service a virtual block device.
> 
> Signed-off-by: Greg Wettstein <greg@enjellic.com>

Thanks, I have applied this to 4.1.

Ian.

# HG changeset patch
# User Ian Jackson <Ian.Jackson@eu.citrix.com>
# Date 1355334075 0
# Node ID 255a0b6a81041e51fe38ef0e919a6541ffe0d119
# Parent  a866cc5b8235ae05b178b9a904a59569b005f177
From: Ian Campbell <ian.campbell@citrix.com>

libxl: attempt to cleanup tapdisk processes on disk backend destroy.

This patch properly terminates the tapdisk2 process(es) started
to service a virtual block device.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

xen-unstable changeset: 23883:7998217630e2
xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
Signed-off-by: Greg Wettstein <greg@enjellic.com>
Backport-requested-by: Greg Wettstein <greg@enjellic.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff -r a866cc5b8235 -r 255a0b6a8104 tools/blktap2/control/tap-ctl-list.c
--- a/tools/blktap2/control/tap-ctl-list.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/blktap2/control/tap-ctl-list.c	Wed Dec 12 17:41:15 2012 +0000
@@ -506,17 +506,15 @@ out:
 }
 
 int
-tap_ctl_find_minor(const char *type, const char *path)
+tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
 {
 	tap_list_t **list, **_entry;
-	int minor, err;
+	int ret = -ENOENT, err;
 
 	err = tap_ctl_list(&list);
 	if (err)
 		return err;
 
-	minor = -1;
-
 	for (_entry = list; *_entry != NULL; ++_entry) {
 		tap_list_t *entry  = *_entry;
 
@@ -526,11 +524,13 @@ tap_ctl_find_minor(const char *type, con
 		if (path && (!entry->path || strcmp(entry->path, path)))
 			continue;
 
-		minor = entry->minor;
+		*tap = *entry;
+		tap->type = tap->path = NULL;
+		ret = 0;
 		break;
 	}
 
 	tap_ctl_free_list(list);
 
-	return minor >= 0 ? minor : -ENOENT;
+	return ret;
 }
diff -r a866cc5b8235 -r 255a0b6a8104 tools/blktap2/control/tap-ctl.h
--- a/tools/blktap2/control/tap-ctl.h	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/blktap2/control/tap-ctl.h	Wed Dec 12 17:41:15 2012 +0000
@@ -76,7 +76,7 @@ int tap_ctl_get_driver_id(const char *ha
 
 int tap_ctl_list(tap_list_t ***list);
 void tap_ctl_free_list(tap_list_t **list);
-int tap_ctl_find_minor(const char *type, const char *path);
+int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
 
 int tap_ctl_allocate(int *minor, char **devname);
 int tap_ctl_free(const int minor);
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_blktap2.c
--- a/tools/libxl/libxl_blktap2.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_blktap2.c	Wed Dec 12 17:41:15 2012 +0000
@@ -18,6 +18,8 @@
 
 #include "tap-ctl.h"
 
+#include <string.h>
+
 int libxl__blktap_enabled(libxl__gc *gc)
 {
     const char *msg;
@@ -30,12 +32,13 @@ const char *libxl__blktap_devpath(libxl_
 {
     const char *type;
     char *params, *devname = NULL;
-    int minor, err;
+    tap_list_t tap;
+    int err;
 
     type = libxl__device_disk_string_of_format(format);
-    minor = tap_ctl_find_minor(type, disk);
-    if (minor >= 0) {
-        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", minor);
+    err = tap_ctl_find(type, disk, &tap);
+    if (err == 0) {
+        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", tap.minor);
         if (devname)
             return devname;
     }
@@ -49,3 +52,28 @@ const char *libxl__blktap_devpath(libxl_
 
     return NULL;
 }
+
+
+void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+{
+    char *path, *params, *type, *disk;
+    int err;
+    tap_list_t tap;
+
+    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
+    if (!path) return;
+
+    params = libxl__xs_read(gc, XBT_NULL, path);
+    if (!params) return;
+
+    type = params;
+    disk = strchr(params, ':');
+    if (!disk) return;
+
+    *disk++ = '\0';
+
+    err = tap_ctl_find(type, disk, &tap);
+    if (err < 0) return;
+
+    tap_ctl_destroy(tap.id, tap.minor);
+}
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_device.c	Wed Dec 12 17:41:15 2012 +0000
@@ -250,6 +250,7 @@ int libxl__device_destroy(libxl_ctx *ctx
     if (!state)
         goto out;
     if (atoi(state) != 4) {
+        libxl__device_destroy_tapdisk(&gc, be_path);
         xs_rm(ctx->xsh, XBT_NULL, be_path);
         goto out;
     }
@@ -368,6 +369,7 @@ int libxl__devices_destroy(libxl_ctx *ct
             }
         }
     }
+    libxl__device_destroy_tapdisk(&gc, be_path);
 out:
     libxl__free_all(&gc);
     return 0;
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_internal.h	Wed Dec 12 17:41:15 2012 +0000
@@ -314,6 +314,12 @@ _hidden const char *libxl__blktap_devpat
                                  const char *disk,
                                  libxl_disk_format format);
 
+/* libxl__device_destroy_tapdisk:
+ *   Destroys any tapdisk process associated with the backend represented
+ *   by be_path.
+ */
+_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
+
 _hidden char *libxl__uuid2string(libxl__gc *gc, const libxl_uuid uuid);
 
 struct libxl__xen_console_reader {
diff -r a866cc5b8235 -r 255a0b6a8104 tools/libxl/libxl_noblktap2.c
--- a/tools/libxl/libxl_noblktap2.c	Wed Dec 12 09:40:16 2012 +0000
+++ b/tools/libxl/libxl_noblktap2.c	Wed Dec 12 17:41:15 2012 +0000
@@ -27,3 +27,7 @@ const char *libxl__blktap_devpath(libxl_
 {
     return NULL;
 }
+
+void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+{
+}

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 17:53:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 17:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqV5-0005TE-CE; Wed, 12 Dec 2012 17:53:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TiqV4-0005T9-Jk
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 17:53:42 +0000
Received: from [85.158.143.99:27052] by server-3.bemta-4.messagelabs.com id
	5D/1F-18211-5A4C8C05; Wed, 12 Dec 2012 17:53:41 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355334811!22347440!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjM4ODky\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11328 invoked from network); 12 Dec 2012 17:53:31 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-12.tower-216.messagelabs.com with SMTP;
	12 Dec 2012 17:53:31 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 12 Dec 2012 09:53:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,267,1355126400"; d="scan'208";a="230605597"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by azsmga001.ch.intel.com with ESMTP; 12 Dec 2012 09:53:29 -0800
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 12 Dec 2012 09:53:29 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 12 Dec 2012 09:53:28 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 13 Dec 2012 01:53:27 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH V1 1/2] Xen acpi memory hotplug driver
Thread-Index: AQHN1IPpUNtbxbdZSkG9MqBQJp2L85gVdm/g
Date: Wed, 12 Dec 2012 17:53:26 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A7C7B@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
	<20121205174307.GC16072@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
	<20121207140528.GA3140@phenom.dumpdata.com>
In-Reply-To: <20121207140528.GA3140@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"lenb@kernel.org" <lenb@kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 06, 2012 at 04:27:36AM +0000, Liu, Jinsong wrote:
>>>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig index
>>>>>> 126d8ce..abd0396 100644 --- a/drivers/xen/Kconfig
>>>>>> +++ b/drivers/xen/Kconfig
>>>>>> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
>>>>>>  	  Allow kernel fetching MCE error from Xen platform and
>>>>>>  	  converting it into Linux mcelog format for mcelog tools
>>>>>> 
>>>>>> +config XEN_ACPI_MEMORY_HOTPLUG
>>>>>> +	bool "Xen ACPI memory hotplug"
>>>>> 
>>>>> There should be a way to make this a module.
>>>> 
>>>> I have some concerns to make it a module:
>>>> 1. xen and native memhotplug driver both work as module, while we
>>>> need early load xen driver. 
>>>> 2. if possible, a xen stub driver may solve load sequence issue,
>>>>   but it may involve other issues * if xen driver load then unload,
>>>> native driver may have chance to load successfully;
>>> 
>>> The stub driver would still "occupy" the ACPI bus for the memory
>>> hotplug PnP, so I think this would not be a problem.
>>> 
>> 
>> I'm not quite clear your mean here, do you mean it has
>> 1. xen_stub driver + xen_memhoplug driver, then xen_strub driver
>> unload and entirely replaced by xen_memhotplug driver, or 
>> 2. xen_stub driver (w/ stub ops) + xen_memhotplug ops (not driver),
>> then xen_stub driver keep occupying but its stub ops later replaced
>> by xen_memhotplug ops?  
> 
> #2
>> 
>> If in way #1, it has risk that native driver may load (if xen driver
>> unload). 
>> If in way #2, xen_memhotplug ops lose the chance to probe/add/bind
>> existed memory devices (since it's done when driver registerred). 
> 
> Could the stub driver have a queue of events?

If so, why not do the 'real' add ops directly (as our patch does, building the xen memory hotplug logic in)?
I'm not quite clear on the purpose of insisting on a module -- what advantage of a module do you prefer?

> 
>> 
>>>>   * if xen driver load --> unload --> load again, then it will lose
>>>> hotplug notification during unload period;
>>> 
>>> Sure. But I think we can do it with this driver? After all the
>>> function of it is to just tell the firmware to turn on/off sockets
>>> - and if we miss one notification we won't take advantage of the
>>> power savings - but we can do that later on. 
>>> 
>> 
>> Not only inform firmware.
>> Hotplug notify callback will invoke acpi_bus_add -> ... ->
>> implicitly invoke drv->ops.add method to add the hotadded memory
>> device.  
> 
> Gotcha.

So it will lose the notification, with no way to add the new memory device in the future.

Xen memory hotplug logic consists of 2 parts:
1) driver logic (.add/.remove etc.)
2) notification install/callback logic
If you want to use 'xen_stub driver + .add/.remove ops', then the notification install/callback logic would have to be implemented in the xen_stub driver (i.e. in the built-in part, otherwise notifications would be lost while the ops are unloaded) --> but that would make the built-in xen_stub large.

>> 
>>> 
>>>>   * if xen driver load --> unload --> load again, then it will
>>>> re-add all memory devices, but the handle for 'booting memory
>>>> device' and 'hotplug memory device' are different while we have no
>>>> way to distinguish these 2 kind of devices.
>>> 
>>> Wouldn't the stub driver hold onto that?
>>> 
>> 
>> Same question as comment #1. Do you mean it has a xen_stub driver
>> (w/ stub ops) and a xen_memhotplug ops? 
> 
> Correct.
>> 
>>>> 
>>>> IMHO I think to make xen hotplug logic as module may involves
>>>> unexpected result. Is there any obvious advantages of doing so?
>>>> after all we have provided config choice to user. Thoughts?
>>> 
>>> Yes, it becomes a module - which is what we want.
>>> 
>> 
>> What I meant here is, module will bring some unexpected issues for
>> xen hotplug. 
>> We can provide user 'bool' config choice, let them decide to
>> build-in or not, but not 'tristate' choice. 
> 
> What would be involved in making it an tristate choice?
>> 
>> Thanks
>> Jinsong


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 18:01:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:01:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqcZ-0005i6-BT; Wed, 12 Dec 2012 18:01:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TiqcX-0005hz-P9
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 18:01:25 +0000
Received: from [85.158.138.51:47678] by server-10.bemta-3.messagelabs.com id
	6A/5E-07616-076C8C05; Wed, 12 Dec 2012 18:01:20 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355335278!18646897!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28934 invoked from network); 12 Dec 2012 18:01:19 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 18:01:19 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Wed, 12 Dec 2012 11:01:10 -0700
Message-ID: <50C8C665.2030202@suse.com>
Date: Wed, 12 Dec 2012 11:01:09 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>	<20680.47971.962603.851882@mariner.uk.xensource.com>	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
In-Reply-To: <20680.49391.646654.814456@mariner.uk.xensource.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
>   
>> Ian Jackson wrote:
>>     
>>> Specifically, this code has an integer arithmetic overflow.
>>>       
>> Well, this patch is removing that buggy code :).
>>     
>
> I think you need to have some code somewhere which does the
> complicated arithmetic dance to avoid the integer overflow.  Does your
> timer registration function have the same bug ?
>   

Ah yes.  I'll take care of it following your suggestion around
beforepoll_internal.

>   
>>  After again reading
>> libxl_event.h, I'm considering the below patch in the libvirt libxl
>> driver.  The change is primarily inspired by this comment for
>> libxl_osevent_occurred_timeout:
>>     
> ...
>   
>> /* Implicitly, on entry to this function the timeout has been
>>  * deregistered.  If _occurred_timeout is called, libxl will not
>>  * call timeout_deregister; if it wants to requeue the timeout it
>>  * will call timeout_register again.
>>  */
>>     
>
> Well your patch is only correct when used with the new libxl, with my
> patches.
>   

Hmm, it is not clear to me how to make the libxl driver work correctly
with libxl both before and after your patches :-/.

>   
>> diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
>> index 302f81c..d4055be 100644
>> --- a/src/libxl/libxl_driver.c
>> +++ b/src/libxl/libxl_driver.c
>> @@ -184,6 +184,8 @@ static void libxlTimerCallback(int timer
>> ATTRIBUTE_UNUSED, void *timer_v)
>>  {
>>      struct libxlOSEventHookTimerInfo *timer_info = timer_v;
>>  
>> +    virEventRemoveTimeout(timer_info->id);
>> +    timer_info->id = -1;
>>     
>
> I don't understand why you need this.  Doesn't libvirt remove timers
> when they fire ?  If it doesn't, do they otherwise not keep recurring ?
>   

No, timers are not removed when they fire.  They are recurring, so they
will keep firing at each timeout interval.  They can be disabled by
setting the timeout to -1, and can be made to fire on each iteration of
the event loop by setting the timeout to 0.  But they must be explicitly
removed with virEventRemoveTimeout when no longer needed.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 18:03:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiqeb-0005ny-Sb; Wed, 12 Dec 2012 18:03:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tiqea-0005no-15
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 18:03:32 +0000
Received: from [85.158.139.211:16850] by server-14.bemta-5.messagelabs.com id
	90/D6-09538-3F6C8C05; Wed, 12 Dec 2012 18:03:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355335410!20119099!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25752 invoked from network); 12 Dec 2012 18:03:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 18:03:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,267,1355097600"; 
   d="scan'208";a="95876"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 18:03:30 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 12 Dec 2012 18:03:30 +0000
Message-ID: <1355335409.10554.40.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 18:03:29 +0000
In-Reply-To: <20680.49934.2544.943098@mariner.uk.xensource.com>
References: <201211070207.qA727wF1028601@wind.enjellic.com>
	<20680.49934.2544.943098@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-12 at 17:46 +0000, Ian Jackson wrote:
> Dr. Greg Wettstein writes ("[PATCH 2/2] 4.1.2 blktap2 cleanup fixes."):
> > ---------------------------------------------------------------------------
> > The following patch when applied on top of:
> > 
> > libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> > 
> > Establishes correct cleanup behavior for blktap devices.  This patch
> > implements the release of the backend device before calling for
> > the destruction of the userspace component of the blktap device.
> > 
> > Without this patch the kernel xen-blkback driver deadlocks with
> > the blktap2 user control plane until the IPC channel is terminated by the
> > timeout on the select() call.  This results in a noticeable delay
> > in the termination of the guest and causes the blktap minor
> > number which had been allocated to be orphaned.
> 
> This looks plausible.  But shouldn't it be applied to xen-unstable and
> Xen 4.2 too ?

Xen unstable and 4.2 changed in other ways which included fixing this
issue as a side effect. I can't remember which change it was, but I think
it was either your making the interfaces asynchronous or the block script
work that I did. You can probably find out which by looking at previous
postings of this series.

So this is a fix for the code as it was in 4.1.x.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 18:05:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiqgA-0005uQ-Bu; Wed, 12 Dec 2012 18:05:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1Tiqg8-0005u7-9D; Wed, 12 Dec 2012 18:05:08 +0000
Received: from [85.158.143.35:34491] by server-3.bemta-4.messagelabs.com id
	4C/77-18211-357C8C05; Wed, 12 Dec 2012 18:05:07 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355335505!13941733!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjI2OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13625 invoked from network); 12 Dec 2012 18:05:06 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 18:05:06 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 53F042C34;
	Wed, 12 Dec 2012 20:05:04 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 39B0320060; Wed, 12 Dec 2012 20:05:04 +0200 (EET)
Date: Wed, 12 Dec 2012 20:05:04 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: fahimeh soltaninejad <f.soltani298@gmail.com>
Message-ID: <20121212180503.GF8912@reaktio.net>
References: <CAKLxbwJ92Fs9YKDKSjkCPok5-M7mpB-ttXJZXp-hexRpT6ZBdg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAKLxbwJ92Fs9YKDKSjkCPok5-M7mpB-ttXJZXp-hexRpT6ZBdg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] hw to assign devices with VT-d in (fedora with Xen
 hypervisor)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 11, 2012 at 07:10:20PM +0330, fahimeh soltaninejad wrote:
>    hi,

Hello,

>    i have installed Xen hypervisor on fedora 17 and now i want to assign
>    devices directly to a guest OS through VT-d. does any one know how can i
>    do it?
>    i found some solutions for that but those had some steps for when
>    installing Xen while i installed Xen on fedora. i am going to know,can i
>    assign device to a guest OS through VT-d capability with installed Xen on
>    fedora 17?
>

This question is better suited to the xen-users mailing list, so please drop xen-devel from future replies.

There's nothing Fedora 17 specific about Xen PCI passthrough, really.
You need to:

1) Enable the IOMMU (VT-d) in the BIOS.
2) Make sure the IOMMU is enabled in Xen (iommu=1 Xen cmdline option) and verify from the hypervisor dmesg (xm dmesg) that it gets enabled.
3) Configure the xen-pciback driver in the dom0 kernel to "hide" the PCI devices you want to pass through.
4) Make sure "xm pci-list-assignable-devices" works and lists your PCI devices.
5) Set up the VM cfgfile (/etc/xen/<vm>) with PCI passthrough.
6) Done.
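For anyone following along, steps 2, 3 and 5 might look roughly like this (a sketch with made-up values: the BDF 0000:01:00.0 and the pvops-style xen-pciback.hide syntax are examples only, and the exact parameters vary by kernel and Xen version):

```
# Step 2: Xen hypervisor command line (grub entry)
iommu=1

# Step 3: dom0 kernel command line (pvops kernel), hiding an example device
xen-pciback.hide=(0000:01:00.0)

# Step 5: /etc/xen/<vm> guest config, passing the hidden device through
pci = [ '0000:01:00.0' ]
```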


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 18:31:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tir4q-0006qF-Uj; Wed, 12 Dec 2012 18:30:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1Tir4p-0006q9-Nz
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 18:30:40 +0000
Received: from [85.158.139.211:9091] by server-7.bemta-5.messagelabs.com id
	09/60-08009-D4DC8C05; Wed, 12 Dec 2012 18:30:37 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-12.tower-206.messagelabs.com!1355337032!20112681!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_20_30,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31847 invoked from network); 12 Dec 2012 18:30:34 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 18:30:34 -0000
Received: from aplexcas1.dom1.jhuapl.edu (unknown [128.244.198.90]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 6225_0506_2e0349e1_962e_4edc_b619_855aed6710bc;
	Wed, 12 Dec 2012 13:30:23 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 12 Dec 2012
	13:30:24 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: =?iso-8859-1?Q?S=E9bastien_Fr=E9mal?= <sebastien.fremal@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 12 Dec 2012 13:30:22 -0500
Thread-Topic: [Xen-devel] Stubdom compilation fails
Thread-Index: Ac3YjFbW09IbOXGhRJalFXZb+lgtJwACi7HQ
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BE1622C8@aplesstripe.dom1.jhuapl.edu>
References: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
In-Reply-To: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] Stubdom compilation fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7417595522030104750=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7417595522030104750==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_"

--_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

I think this is a bug in c-stubdom

Edit the file stubdom/c/minios.cfg

Change
CONFIG_TEST=y
to
CONFIG_TEST=n

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Sébastien Frémal
Sent: Tuesday, December 11, 2012 10:40 AM
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Stubdom compilation fails

Hello,

I'm trying to compile Xen 4.2.0 on Ubuntu 11.10, but the build fails when compiling the stubdom part. Since it can't find stddef.h, it can't build the libraries and eventually aborts. You can find a sample of the errors below:

Making all in misc
make[6]: entrant dans le r=E9pertoire =AB /home/fremals/xen-4.2.0/stubdom/n=
ewlib-x86_64/x86_64-xen-elf/newlib/libc/misc =BB
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__=
MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/min=
i-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xensto=
re  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 =
-isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_=
64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/=
xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xe=
n-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -=
isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /h=
ome/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/=
xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include =
-mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-=
blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=3D=
gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-b=
ut-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW=
_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/=
newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-e=
lf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16=
.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_=
64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/=
x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1=
.16.0/libgloss/x86_64 -DPACKAGE_NAME=3D\"newlib\" -DPACKAGE_TARNAME=3D\"new=
lib\" -DPACKAGE_VERSION=3D\"1.16.0\" -DPACKAGE_STRING=3D\"newlib\ 1.16.0\" =
-DPACKAGE_BUGREPORT=3D\"\" -I. -I../../../../../newlib-1.16.0/newlib/libc/m=
isc -O2 -DMISSING_SYSCALL_NAMES -fno-builtin      -O2 -g -g -O2   -c -o lib=
_a-__dprintf.o `test -f '__dprintf.c' || echo '../../../../../newlib-1.16.0=
/newlib/libc/misc/'`__dprintf.c
In file included from /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/=
libc/include/sys/reent.h:14:0,
                 from /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/=
libc/include/reent.h:48,
                 from ../../../../../newlib-1.16.0/newlib/libc/misc/__dprin=
tf.c:8:
/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/_type=
s.h:67:20: erreur fatale: stddef.h : Aucun fichier ou dossier de ce type
compilation termin=E9e.
make[6]: *** [lib_a-__dprintf.o] Erreur 1
make[6]: quittant le r=E9pertoire =AB /home/fremals/xen-4.2.0/stubdom/newli=
b-x86_64/x86_64-xen-elf/newlib/libc/misc =BB
Making all in machine
make[6]: entrant dans le r=E9pertoire =AB /home/fremals/xen-4.2.0/stubdom/n=
ewlib-x86_64/x86_64-xen-elf/newlib/libc/machine =BB
Making all in x86_64
make[7]: entrant dans le r=E9pertoire =AB /home/fremals/xen-4.2.0/stubdom/n=
ewlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64 =BB
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2 -DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-setjmp.o `test -f 'setjmp.S' || echo '../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`setjmp.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2 -DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memcpy.o `test -f 'memcpy.S' || echo '../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memcpy.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2 -DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memset.o `test -f 'memset.S' || echo '../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memset.S
rm -f lib.a
ar cru lib.a lib_a-setjmp.o lib_a-memcpy.o lib_a-memset.o
ranlib lib.a
make[7]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64'
Making all in .
make[7]: Entering directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
rm -f lib.a
ln x86_64/lib.a lib.a >/dev/null 2>/dev/null || \
     cp x86_64/lib.a lib.a
make[7]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
make[6]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
Making all in .
make[6]: Entering directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
rm -f libc.a
rm -rf tmp
mkdir tmp
cd tmp; \
     for i in argz/lib.a stdlib/lib.a ctype/lib.a search/lib.a stdio/lib.a  string/lib.a signal/lib.a time/lib.a locale/lib.a reent/lib.a  errno/lib.a misc/lib.a     machine/lib.a ; do \
       ar x ../$i; \
     done; \
    ar rc ../libc.a *.o
ar: ../argz/lib.a: No such file or directory
ar: ../stdlib/lib.a: No such file or directory
ar: ../ctype/lib.a: No such file or directory
ar: ../search/lib.a: No such file or directory
ar: ../stdio/lib.a: No such file or directory
ar: ../string/lib.a: No such file or directory
ar: ../signal/lib.a: No such file or directory
ar: ../time/lib.a: No such file or directory
ar: ../locale/lib.a: No such file or directory
ar: ../reent/lib.a: No such file or directory
ar: ../errno/lib.a: No such file or directory
ar: ../misc/lib.a: No such file or directory
ranlib libc.a
rm -rf tmp
make[6]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[5]: *** [all-recursive] Error 1
make[5]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[3]: *** [all] Error 2
make[3]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[2]: *** [all-target-newlib] Error 2
make[2]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make: *** [cross-root-x86_64/x86_64-xen-elf/lib/libc.a] Error 2

I searched on Google, but found no proposed solutions. I exported the path to stddef.h:

export CPPFLAGS=-I/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include/

But now I get a new error, this time in the Xen code itself:
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/test.o: In function `app_main':
/home/fremals/xen-4.2.0/extras/mini-os/test.c:441: multiple definition of `app_main'
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/main.o:/home/fremals/xen-4.2.0/extras/mini-os/main.c:187: first defined here
make[1]: *** [/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/mini-os] Error 1
make[1]: Leaving directory `/home/fremals/xen-4.2.0/extras/mini-os'
make: *** [c-stubdom] Error 2

Can you please help me resolve these errors?

Thank you!

Best regards,

Sebastien Fremal

--_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_--


--===============7417595522030104750==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7417595522030104750==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 18:31:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tir4q-0006qF-Uj; Wed, 12 Dec 2012 18:30:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1Tir4p-0006q9-Nz
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 18:30:40 +0000
Received: from [85.158.139.211:9091] by server-7.bemta-5.messagelabs.com id
	09/60-08009-D4DC8C05; Wed, 12 Dec 2012 18:30:37 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-12.tower-206.messagelabs.com!1355337032!20112681!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_20_30,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31847 invoked from network); 12 Dec 2012 18:30:34 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 18:30:34 -0000
Received: from aplexcas1.dom1.jhuapl.edu (unknown [128.244.198.90]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 6225_0506_2e0349e1_962e_4edc_b619_855aed6710bc;
	Wed, 12 Dec 2012 13:30:23 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Wed, 12 Dec 2012
	13:30:24 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: =?iso-8859-1?Q?S=E9bastien_Fr=E9mal?= <sebastien.fremal@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 12 Dec 2012 13:30:22 -0500
Thread-Topic: [Xen-devel] Stubdom compilation fails
Thread-Index: Ac3YjFbW09IbOXGhRJalFXZb+lgtJwACi7HQ
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BE1622C8@aplesstripe.dom1.jhuapl.edu>
References: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
In-Reply-To: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] Stubdom compilation fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7417595522030104750=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7417595522030104750==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_"

--_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

I think this is a bug in c-stubdom.

Edit the file stubdom/c/minios.cfg

Change
CONFIG_TEST=y
to
CONFIG_TEST=n
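
The same edit can be scripted; a sketch (assuming GNU sed and that you run it from the top of the xen-4.2.0 tree) would be:

```shell
# Disable the test app in the c-stubdom mini-os config, so that only
# main.c ends up providing app_main().
sed -i 's/^CONFIG_TEST=y$/CONFIG_TEST=n/' stubdom/c/minios.cfg
grep '^CONFIG_TEST=' stubdom/c/minios.cfg
```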

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Sébastien Frémal
Sent: Tuesday, December 11, 2012 10:40 AM
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Stubdom compilation fails

Hello,

I'm trying to compile Xen 4.2.0 on Ubuntu 11.10, but the build fails in the stubdom part. Since the compiler can't find stddef.h, it can't build the libraries and eventually stops. You can find a sample of the errors below:

gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2 -DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memcpy.o `test -f 'memcpy.S' || echo '../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memcpy.S
gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -O2 -DMISSING_SYSCALL_NAMES -fno-builtin    -c -o lib_a-memset.o `test -f 'memset.S' || echo '../../../../../../newlib-1.16.0/newlib/libc/machine/x86_64/'`memset.S
rm -f lib.a
ar cru lib.a lib_a-setjmp.o lib_a-memcpy.o lib_a-memset.o
ranlib lib.a
make[7]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine/x86_64'
Making all in .
make[7]: Entering directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
rm -f lib.a
ln x86_64/lib.a lib.a >/dev/null 2>/dev/null || \
     cp x86_64/lib.a lib.a
make[7]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
make[6]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/machine'
Making all in .
make[6]: Entering directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
rm -f libc.a
rm -rf tmp
mkdir tmp
cd tmp; \
     for i in argz/lib.a stdlib/lib.a ctype/lib.a search/lib.a stdio/lib.a  string/lib.a signal/lib.a time/lib.a locale/lib.a reent/lib.a  errno/lib.a misc/lib.a     machine/lib.a ; do \
       ar x ../$i; \
     done; \
    ar rc ../libc.a *.o
ar: ../argz/lib.a: No such file or directory
ar: ../stdlib/lib.a: No such file or directory
ar: ../ctype/lib.a: No such file or directory
ar: ../search/lib.a: No such file or directory
ar: ../stdio/lib.a: No such file or directory
ar: ../string/lib.a: No such file or directory
ar: ../signal/lib.a: No such file or directory
ar: ../time/lib.a: No such file or directory
ar: ../locale/lib.a: No such file or directory
ar: ../reent/lib.a: No such file or directory
ar: ../errno/lib.a: No such file or directory
ar: ../misc/lib.a: No such file or directory
ranlib libc.a
rm -rf tmp
make[6]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[5]: *** [all-recursive] Error 1
make[5]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[3]: *** [all] Error 2
make[3]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib'
make[2]: *** [all-target-newlib] Error 2
make[2]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/fremals/xen-4.2.0/stubdom/newlib-x86_64'
make: *** [cross-root-x86_64/x86_64-xen-elf/lib/libc.a] Error 2

I searched on Google, but found no proposed solutions. I exported the path to stddef.h:

export CPPFLAGS=-I/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include/
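Hard-coding the 4.6.1 directory ties this workaround to one compiler release; gcc can report its own private include directory (the one that ships stddef.h) via -print-file-name. A minimal sketch of deriving CPPFLAGS that way (assumes a gcc toolchain is on PATH; the fallback path is a placeholder, not a real location):

```shell
# Ask gcc where its private headers (stddef.h, stdarg.h, ...) live,
# instead of hard-coding a version-specific directory.
inc=$(gcc -print-file-name=include 2>/dev/null || true)
[ -n "$inc" ] || inc=/usr/lib/gcc/unknown/include   # placeholder fallback
out="export CPPFLAGS=-I$inc"
echo "$out"
```

This prints an export line that tracks whatever compiler version is installed.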

But now I have a new error, this time in the Xen code itself:

/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/test.o: In function `app_main':
/home/fremals/xen-4.2.0/extras/mini-os/test.c:441: multiple definition of `app_main'
/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/main.o:/home/fremals/xen-4.2.0/extras/mini-os/main.c:187: first defined here
make[1]: *** [/home/fremals/xen-4.2.0/stubdom/mini-os-x86_64-c/mini-os] Error 1
make[1]: Leaving directory `/home/fremals/xen-4.2.0/extras/mini-os'
make: *** [c-stubdom] Error 2
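The diagnostic above is GNU ld's standard complaint when two object files each define the same external symbol; here app_main is defined in both Mini-OS's test.c and main.c. A minimal, self-contained reproduction (hypothetical file names, assuming a C compiler is on PATH):

```shell
# Two translation units that both define app_main(); linking them together
# produces the same "multiple definition" link error as in the log above.
dir=$(mktemp -d) && cd "$dir"
cat > main.c <<'EOF'
int app_main(void) { return 1; }
int main(void)     { return app_main(); }
EOF
cat > test.c <<'EOF'
int app_main(void) { return 2; }   /* second definition of the same symbol */
EOF
out=$(cc main.c test.c -o demo 2>&1 || true)
msg=$(case "$out" in
  *"multiple definition"*) echo "reproduced" ;;
  *) echo "not reproduced (no C compiler, or a different linker message)" ;;
esac)
echo "$msg"
```

Removing one of the two definitions (or not compiling one of the files at all) makes the link succeed, which is why disabling the test app resolves this class of error.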

Can you please help me resolve these errors?

Thank you!

Best regards,

Sebastien Fremal

--_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit

I think this is a bug in c-stubdom.

Edit the file stubdom/c/minios.cfg and change

CONFIG_TEST=y

to

CONFIG_TEST=n

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Sébastien Frémal
Sent: Tuesday, December 11, 2012 10:40 AM
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Stubdom compilation fails
--_000_068F06DC4D106941B297C0C5F9F446EA48BE1622C8aplesstripedo_--
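The suggested edit (turning off CONFIG_TEST in stubdom/c/minios.cfg, so Mini-OS's test.c and its app_main are not linked into the c stub domain) can also be scripted with sed. A sketch against a stand-in file, since the real path only exists inside a Xen 4.2.0 source tree:

```shell
# Stand-in for stubdom/c/minios.cfg; in a real Xen tree, point cfg at
# that file instead of a temporary one.
cfg=$(mktemp)
printf 'CONFIG_TEST=y\n' > "$cfg"
sed -i 's/^CONFIG_TEST=y$/CONFIG_TEST=n/' "$cfg"   # GNU sed in-place edit
cat "$cfg"
```

After the edit, rebuilding the c-stubdom should no longer pull test.c into the link.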


--===============7417595522030104750==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7417595522030104750==--


From xen-devel-bounces@lists.xen.org Wed Dec 12 18:35:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tir9b-00072M-W5; Wed, 12 Dec 2012 18:35:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen@lippux.com>)
	id 1Tir9a-00072D-Fd; Wed, 12 Dec 2012 18:35:34 +0000
Received: from [85.158.139.211:26477] by server-14.bemta-5.messagelabs.com id
	31/69-09538-57EC8C05; Wed, 12 Dec 2012 18:35:33 +0000
X-Env-Sender: xen@lippux.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355337332!18415964!1
X-Originating-IP: [78.46.181.14]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6041 invoked from network); 12 Dec 2012 18:35:33 -0000
Received: from www107.your-server.de (HELO www107.your-server.de)
	(78.46.181.14)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 18:35:33 -0000
Received: from [78.46.5.203] (helo=sslproxy01.your-server.de)
	by www107.your-server.de with esmtpsa (TLSv1:AES256-SHA:256)
	(Exim 4.74) (envelope-from <xen@lippux.com>)
	id 1Tir9Y-0001Z4-NW; Wed, 12 Dec 2012 19:35:32 +0100
Received: from [192.168.0.32] (helo=webmail03.your-server.de)
	by sslproxy01.your-server.de with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <xen@lippux.com>)
	id 1Tir9b-0001x0-Tw; Wed, 12 Dec 2012 19:35:35 +0100
Received: from ashlynn.lippux.de (ashlynn.lippux.de [5.9.218.242]) by
	webmail.your-server.de (Horde Framework) with HTTP; Wed, 12 Dec 2012
	19:35:26 +0100
Date: Wed, 12 Dec 2012 19:35:26 +0100
Message-ID: <20121212193526.Horde.3nLQZVQvoipQyM5uc7nCRgA@webmail.your-server.de>
From: xen@lippux.com
To: xen-users@lists.xen.org
User-Agent: Internet Messaging Program (IMP) H4 (5.0.24)
MIME-Version: 1.0
Content-Disposition: inline
X-Authenticated-Sender: xen@lippux.com
X-Virus-Scanned: Clear (ClamAV 0.97.5/15751/Wed Dec 12 18:45:57 2012)
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Compiling init-xenstore-domain.c to initialize an OCaml
 Xenstore Stubdomain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"; DelSp="Yes"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

I am posting this message to both the xen-users and xen-devel lists because it is not clear to me whether a user or only a developer can help me out.

At the moment I am working on Dom0 disaggregation and I want to create an OCaml Xenstore stub domain. I hope I have all the needed dependencies: Xen 4.2 with XSM and FLASK compiled in (the XSM framework appears in xl dmesg, so I think it works completely) and Linux 3.6.10 (information on the web said that Linux 3.5 or above with pv_ops is needed for this).

Now I need the init-xenstore-domain utility in order to modify the init scripts so that the Xenstore stub domain gets loaded instead of starting xenstored in Dom0 (I hope that this is right). So I wanted to compile init-xenstore-domain.c from the source directory /tools/xenstore/ within the Xen 4.2.0 source directory.

I tried to do this with this command:

gcc -c -I/root/xen-4.2.0/dist/install/usr/include/ -I/root/xen-4.2.0/tools/libxc/ -I/root/xen-4.2.0/xen/include/ /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c -o /root/init-xenstore-domain


In my case the xen-4.2.0 directory is located inside a Debian-based "build" chroot, in the /root/ directory. At the moment I get the following error, and I do not know how to include the libelf.h file:

In file included from /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
/root/xen-4.2.0/tools/libxc/xc_dom.h:17:31: error: xen/libelf/libelf.h: No such file or directory
In file included from /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
/root/xen-4.2.0/tools/libxc/xc_dom.h:63: error: field 'parms' has incomplete type

I tried adding the directory /root/xen-4.2.0/xen/include/xen/ (based on my directory structure) to the include path, and I also created a symlink so that a folder xen with a subfolder libelf (matching the include directive in the source file) pointed to the libelf.h file in the directory mentioned above, but all of this results in the same error. So how can I compile the init-xenstore-domain utility? The normal Xenstore stub domain (not the OCaml variant) was built successfully by the build system, but the build system does not currently seem to build the init-xenstore-domain utility, so I thought I had to compile it separately.
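One note on the include path: in the Xen tree, the #include <xen/libelf/libelf.h> in xc_dom.h is normally satisfied by the header links that the build system places under tools/include, not by xen/include/ directly, so populating that directory first (e.g. via a top-level tools build or make -C tools/include) and adding it with -I is the usual route. A hedged sketch of the adjusted command, keeping the poster's /root/xen-4.2.0 layout (the paths are assumptions; the sketch only prints the command, and linking the finished utility would additionally need the libxc/libxenstore libraries):

```shell
# Print the adjusted compile command; tools/include is assumed to carry
# the xen/libelf header links once the build system has populated it.
XEN_ROOT=/root/xen-4.2.0   # layout from the message; adjust on other machines
cmd="gcc -c -I$XEN_ROOT/tools/include -I$XEN_ROOT/dist/install/usr/include -I$XEN_ROOT/tools/libxc $XEN_ROOT/stubdom/xenstore/init-xenstore-domain.c -o init-xenstore-domain.o"
echo "$cmd"
```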

I hope that someone can help me out.

Best Regards


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 18:35:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 18:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tir9b-00072M-W5; Wed, 12 Dec 2012 18:35:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen@lippux.com>)
	id 1Tir9a-00072D-Fd; Wed, 12 Dec 2012 18:35:34 +0000
Received: from [85.158.139.211:26477] by server-14.bemta-5.messagelabs.com id
	31/69-09538-57EC8C05; Wed, 12 Dec 2012 18:35:33 +0000
X-Env-Sender: xen@lippux.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355337332!18415964!1
X-Originating-IP: [78.46.181.14]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6041 invoked from network); 12 Dec 2012 18:35:33 -0000
Received: from www107.your-server.de (HELO www107.your-server.de)
	(78.46.181.14)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Dec 2012 18:35:33 -0000
Received: from [78.46.5.203] (helo=sslproxy01.your-server.de)
	by www107.your-server.de with esmtpsa (TLSv1:AES256-SHA:256)
	(Exim 4.74) (envelope-from <xen@lippux.com>)
	id 1Tir9Y-0001Z4-NW; Wed, 12 Dec 2012 19:35:32 +0100
Received: from [192.168.0.32] (helo=webmail03.your-server.de)
	by sslproxy01.your-server.de with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <xen@lippux.com>)
	id 1Tir9b-0001x0-Tw; Wed, 12 Dec 2012 19:35:35 +0100
Received: from ashlynn.lippux.de (ashlynn.lippux.de [5.9.218.242]) by
	webmail.your-server.de (Horde Framework) with HTTP; Wed, 12 Dec 2012
	19:35:26 +0100
Date: Wed, 12 Dec 2012 19:35:26 +0100
Message-ID: <20121212193526.Horde.3nLQZVQvoipQyM5uc7nCRgA@webmail.your-server.de>
From: xen@lippux.com
To: xen-users@lists.xen.org
User-Agent: Internet Messaging Program (IMP) H4 (5.0.24)
MIME-Version: 1.0
Content-Disposition: inline
X-Authenticated-Sender: xen@lippux.com
X-Virus-Scanned: Clear (ClamAV 0.97.5/15751/Wed Dec 12 18:45:57 2012)
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Compiling init-xenstore-domain.c to initialize an OCaml
 Xenstore Stubdomain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"; DelSp="Yes"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

I am posting this message to both the xen-users and xen-devel lists
because it is not clear to me whether a user or only a developer can
help me out.

At the moment I'm working on Dom0 disaggregation and I want to create
an OCaml Xenstore stubdomain. I hope that I have all the needed
dependencies: I have Xen 4.2 with XSM and FLASK compiled in (the XSM
framework appears in xl dmesg, so I think it works completely) and
Linux 3.6.10 (information on the web says that Linux 3.5 or above with
pv_ops is needed for this).

Now I need the init-xenstore-domain utility so that I can modify the
init scripts to load the xenstore stubdomain instead of starting
xenstored in Dom0 (I hope that this is right). So I wanted to compile
init-xenstore-domain.c from the source directory /tools/xenstore/
within the Xen 4.2.0 source tree.

I tried to do this with the following command:

gcc -c -I/root/xen-4.2.0/dist/install/usr/include/  
-I/root/xen-4.2.0/tools/libxc/ -I/root/xen-4.2.0/xen/include/  
/root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c -o  
/root/init-xenstore-domain


In my case the xen-4.2.0 directory is located inside a Debian-based
"build" chroot in the /root/ directory. At the moment I don't know how
to make the compiler find the libelf.h file, and because of this I get
this error:

In file included from  
/root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
/root/xen-4.2.0/tools/libxc/xc_dom.h:17:31: error:  
xen/libelf/libelf.h: No such file or directory
In file included from  
/root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
/root/xen-4.2.0/tools/libxc/xc_dom.h:63: error: field 'parms' has  
incomplete type

I tried adding the directory /root/xen-4.2.0/xen/include/xen/ (based
on my directory structure) to the include path, and I also created a
symlink for the folder xen and its subfolder libelf (as mentioned in
the include directive of the source file) pointing to the libelf.h
file in the directory above, but all of this resulted in the same
error. So how can I compile the init-xenstore-domain utility? The
normal xenstore stubdomain image (not the OCaml variant) was built
successfully by the build system, but the build system currently does
not seem to build the init-xenstore-domain utility, so I thought I had
to compile it separately.

Hope that someone can help me out.

Best Regards


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 19:00:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 19:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TirXt-0007d8-Qe; Wed, 12 Dec 2012 19:00:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>)
	id 1TirXs-0007d0-Dv; Wed, 12 Dec 2012 19:00:40 +0000
Received: from [193.109.254.147:23543] by server-12.bemta-14.messagelabs.com
	id 52/D4-06523-754D8C05; Wed, 12 Dec 2012 19:00:39 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-5.tower-27.messagelabs.com!1355338838!7221271!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9921 invoked from network); 12 Dec 2012 19:00:38 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-5.tower-27.messagelabs.com with SMTP;
	12 Dec 2012 19:00:38 -0000
X-TM-IMSS-Message-ID: <7217caf300098e85@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 7217caf300098e85 ;
	Wed, 12 Dec 2012 13:59:53 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBCJ0PHI024039; 
	Wed, 12 Dec 2012 14:00:25 -0500
Message-ID: <50C8D449.9080404@tycho.nsa.gov>
Date: Wed, 12 Dec 2012 14:00:25 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen@lippux.com
References: <20121212193526.Horde.3nLQZVQvoipQyM5uc7nCRgA@webmail.your-server.de>
In-Reply-To: <20121212193526.Horde.3nLQZVQvoipQyM5uc7nCRgA@webmail.your-server.de>
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Compiling init-xenstore-domain.c to initialize an
 OCaml Xenstore Stubdomain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/2012 01:35 PM, xen@lippux.com wrote:
> Hello all,
> 
> I am posting this message to both the xen-users and xen-devel lists because it is not clear to me whether a user or only a developer can help me out.
> 
> At the moment I'm working on Dom0 disaggregation and I want to create an OCaml Xenstore stubdomain. I hope that I have all the needed dependencies: I have Xen 4.2 with XSM and FLASK compiled in (the XSM framework appears in xl dmesg, so I think it works completely) and Linux 3.6.10 (information on the web says that Linux 3.5 or above with pv_ops is needed for this).
> 
> Now I need the init-xenstore-domain utility so that I can modify the init scripts to load the xenstore stubdomain instead of starting xenstored in Dom0 (I hope that this is right). So I wanted to compile init-xenstore-domain.c from the source directory /tools/xenstore/ within the Xen 4.2.0 source tree.
> 
> I tried to do this with the following command:
> 
> gcc -c -I/root/xen-4.2.0/dist/install/usr/include/ -I/root/xen-4.2.0/tools/libxc/ -I/root/xen-4.2.0/xen/include/ /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c -o /root/init-xenstore-domain
> 
> 
> In my case the xen-4.2.0 directory is located inside a Debian-based "build" chroot in the /root/ directory. At the moment I don't know how to make the compiler find the libelf.h file, and because of this I get this error:
> 
> In file included from /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
> /root/xen-4.2.0/tools/libxc/xc_dom.h:17:31: error: xen/libelf/libelf.h: No such file or directory
> In file included from /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
> /root/xen-4.2.0/tools/libxc/xc_dom.h:63: error: field 'parms' has incomplete type
>

Hmm, looks like it's not getting built by default. Try running:

make -C stubdom/xenstore init-xenstore-domain

This results in a successful build via the following two commands on my system:

gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .init-xenstore-domain.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -I. -I/home/daniel/xen/stubdom/xenstore/../../tools/libxc -I/home/daniel/xen/stubdom/xenstore/../../tools/include -I/home/daniel/xen/stubdom/xenstore/../../tools/libxc -I/home/daniel/xen/stubdom/xenstore/../../tools/include  -c -o init-xenstore-domain.o init-xenstore-domain.c 
gcc     init-xenstore-domain.o libxenstore.so /home/daniel/xen/stubdom/xenstore/../../tools/libxc/libxenctrl.so /home/daniel/xen/stubdom/xenstore/../../tools/libxc/libxenguest.so /home/daniel/xen/stubdom/xenstore/../../tools/xenstore/libxenstore.so -o init-xenstore-domain 


> I tried adding the directory /root/xen-4.2.0/xen/include/xen/ (based on my directory structure) to the include path, and I also created a symlink for the folder xen and its subfolder libelf (as mentioned in the include directive of the source file) pointing to the libelf.h file in the directory above, but all of this resulted in the same error. So how can I compile the init-xenstore-domain utility? The normal xenstore stubdomain image (not the OCaml variant) was built successfully by the build system, but the build system currently does not seem to build the init-xenstore-domain utility, so I thought I had to compile it separately.
> 
> Hope that someone can help me out.
> 
> Best Regards
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 12 19:53:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 19:53:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TisMM-0007zd-AX; Wed, 12 Dec 2012 19:52:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TisMK-0007zY-HV
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 19:52:48 +0000
Received: from [85.158.137.99:8002] by server-16.bemta-3.messagelabs.com id
	52/9C-27634-F80E8C05; Wed, 12 Dec 2012 19:52:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355341966!18470064!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7270 invoked from network); 12 Dec 2012 19:52:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 19:52:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,268,1355097600"; 
   d="scan'208";a="98324"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 19:52:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 19:52:45 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TisMH-0007qB-J8;
	Wed, 12 Dec 2012 19:52:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TisMH-0003NC-6U;
	Wed, 12 Dec 2012 19:52:45 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14674-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 19:52:45 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14674: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14674 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14674/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14672
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14672

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  02140822d833
baseline version:
 xen                  e32c114016f7

------------------------------------------------------------
People who touched revisions under test:
  Charles Arnold <carnold@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=02140822d833
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 02140822d833
+ branch=xen-4.2-testing
+ revision=02140822d833
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r 02140822d833 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 5 changes to 5 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14674 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14674/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14672
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14672

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  02140822d833
baseline version:
 xen                  e32c114016f7

------------------------------------------------------------
People who touched revisions under test:
  Charles Arnold <carnold@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=02140822d833
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 02140822d833
+ branch=xen-4.2-testing
+ revision=02140822d833
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r 02140822d833 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 5 changes to 5 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:18:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 20:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiskl-0008TD-KE; Wed, 12 Dec 2012 20:18:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tiskj-0008Sy-7R
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 20:18:02 +0000
Received: from [193.109.254.147:28812] by server-15.bemta-14.messagelabs.com
	id 18/42-05116-876E8C05; Wed, 12 Dec 2012 20:18:00 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355343474!10266074!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25238 invoked from network); 12 Dec 2012 20:17:55 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-27.messagelabs.com with SMTP;
	12 Dec 2012 20:17:55 -0000
X-TM-IMSS-Message-ID: <1400460d0002d6c7@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 1400460d0002d6c7 ;
	Wed, 12 Dec 2012 15:17:55 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBCKHpUW029187; 
	Wed, 12 Dec 2012 15:17:51 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 15:17:48 -0500
Message-Id: <1355343468-28291-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: dgdegra@tycho.nsa.gov, Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v7] libxl: introduce XSM relabel on build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In response to a suggestion from Jan, I am splitting out independent
patches from the larger XSM series that I have been posting.  This is
the only patch from that series that touches the toolstack; it is
independent of the rest of the series as the hypervisor component has
already been committed.

---------------------8<-------------------------------------------------

Allow a domain to be built under one security label and run using a
different label.  This can be used to prevent the domain builder or
control domain from having the ability to access a guest domain's memory
via map_foreign_range except during the build process where this is
required.

Example domain configuration snippet:
  seclabel='customer_1:vm_r:nomigrate_t'
  init_seclabel='customer_1:vm_r:nomigrate_t_building'

Note: this does not provide complete protection from a malicious dom0;
mappings created during the build process may persist after the relabel,
and could be used to indirectly access the guest's memory. However, if
dom0 correctly unmaps the domain after building, the domU is protected
against dom0 becoming malicious in the future.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 docs/man/xl.cfg.pod.5                        |  9 +++++
 docs/misc/xsm-flask.txt                      |  2 +
 tools/flask/policy/policy/modules/xen/xen.if | 56 +++++++++++++++++++++-------
 tools/flask/policy/policy/modules/xen/xen.te | 10 +++++
 tools/libxc/xc_flask.c                       | 10 +++++
 tools/libxc/xenctrl.h                        |  1 +
 tools/libxl/libxl_create.c                   |  4 ++
 tools/libxl/libxl_types.idl                  |  1 +
 tools/libxl/xl_cmdimpl.c                     | 20 +++++++++-
 9 files changed, 99 insertions(+), 14 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index dc3f494..caba162 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -270,6 +270,15 @@ UUID will be generated.
 
 Assign an XSM security label to this domain.
 
+=item B<init_seclabel="LABEL">
+
+Specify an XSM security label used for this domain temporarily during
+its build. The domain's XSM label will be changed to the execution
+seclabel (specified by "seclabel") once the build is complete, prior to
+unpausing the domain. With a properly constructed security policy (such
+as nomigrate_t in the example policy), this can be used to build a
+domain whose memory is not accessible to the toolstack domain.
+
 =item B<nomigrate=BOOLEAN>
 
 Disable migration of this domain.  This enables certain other features
diff --git a/docs/misc/xsm-flask.txt b/docs/misc/xsm-flask.txt
index 6b0d327..0778a28 100644
--- a/docs/misc/xsm-flask.txt
+++ b/docs/misc/xsm-flask.txt
@@ -60,6 +60,8 @@ that can be used without dom0 disaggregation. The main types for domUs are:
  - domU_t is a domain that can communicate with any other domU_t
  - isolated_domU_t can only communicate with dom0
  - prot_domU_t is a domain type whose creation can be disabled with a boolean
+ - nomigrate_t is a domain that must be created via the nomigrate_t_building
+   type, and whose memory cannot be read by dom0 once created
 
 HVM domains with stubdomain device models use two types (one per domain):
  - domHVM_t is an HVM domain that uses a stubdomain device model
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 3f58909..2ad11b2 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -9,24 +9,47 @@
 #   Declare a type as a domain type, and allow basic domain setup
 define(`declare_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	type $1_channel, event_type;
+	type_transition $1 domain_type:event $1_channel;
 	allow $1 $1:grant { query setup };
 	allow $1 $1:mmu { adjust physmap map_read map_write stat pinpage };
 	allow $1 $1:hvm { getparam setparam };
 ')
 
-# create_domain(priv, target)
-#   Allow a domain to be created
-define(`create_domain', `
+# declare_build_label(type)
+#   Declare a paired _building type for the given domain type
+define(`declare_build_label', `
+	type $1_building, domain_type;
+	type_transition $1_building domain_type:event $1_channel;
+	allow $1_building $1 : domain transition;
+')
+
+define(`create_domain_common', `
 	allow $1 $2:domain { create max_vcpus setdomainmaxmem setaddrsize
-			getdomaininfo hypercall setvcpucontext scheduler
-			unpause getvcpuinfo getvcpuextstate getaddrsize
-			getvcpuaffinity };
+			getdomaininfo hypercall setvcpucontext setextvcpucontext
+			scheduler getvcpuinfo getvcpuextstate getaddrsize
+			getvcpuaffinity setvcpuaffinity };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute setparam pcilevel trackdirtyvram };
-	allow $1 $2_$1_channel:event create;
+	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc setparam pcilevel trackdirtyvram };
+')
+
+# create_domain(priv, target)
+#   Allow a domain to be created directly
+define(`create_domain', `
+	create_domain_common($1, $2)
+	allow $1 $2_channel:event create;
+')
+
+# create_domain_build_label(priv, target)
+#   Allow a domain to be created via its domain build label
+define(`create_domain_build_label', `
+	create_domain_common($1, $2_building)
+	allow $1 $2_channel:event create;
+	allow $1 $2_building:domain2 relabelfrom;
+	allow $1 $2:domain2 relabelto;
 ')
 
 # manage_domain(priv, target)
@@ -37,6 +60,15 @@ define(`manage_domain', `
 			setvcpuaffinity setdomainmaxmem };
 ')
 
+# migrate_domain_out(priv, target)
+#   Allow creation of a snapshot or migration image from a domain
+#   (inbound migration is the same as domain creation)
+define(`migrate_domain_out', `
+	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:mmu { stat pageinfo map_read };
+	allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext getvcpuextstate pause destroy };
+')
+
 ################################################################################
 #
 # Inter-domain communication
@@ -47,8 +79,6 @@ define(`manage_domain', `
 #   This allows an event channel to be created from domains with labels
 #   <source> to <dest> and will label it <chan-label>
 define(`create_channel', `
-	type $3, event_type;
-	type_transition $1 $2:event $3;
 	allow $1 $3:event { create send status };
 	allow $3 $2:event { bind };
 ')
@@ -56,8 +86,8 @@ define(`create_channel', `
 # domain_event_comms(dom1, dom2)
 #   Allow two domain types to communicate using event channels
 define(`domain_event_comms', `
-	create_channel($1, $2, $1_$2_channel)
-	create_channel($2, $1, $2_$1_channel)
+	create_channel($1, $2, $1_channel)
+	create_channel($2, $1, $2_channel)
 ')
 
 # domain_comms(dom1, dom2)
@@ -72,7 +102,7 @@ define(`domain_comms', `
 #   Allow a domain types to communicate with others of its type using grants
 #   and event channels (this includes event channels to DOMID_SELF)
 define(`domain_self_comms', `
-	create_channel($1, $1, $1_self_channel)
+	create_channel($1, $1, $1_channel)
 	allow $1 $1:grant { map_read map_write copy unmap };
 ')
 
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 9550397..1162153 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -90,6 +90,7 @@ create_domain(dom0_t, isolated_domU_t)
 manage_domain(dom0_t, isolated_domU_t)
 domain_comms(dom0_t, isolated_domU_t)
 
+# Declare a boolean that denies creation of prot_domU_t domains
 gen_bool(prot_doms_locked, false)
 declare_domain(prot_domU_t)
 if (!prot_doms_locked) {
@@ -111,6 +112,15 @@ manage_domain(dom0_t, dm_dom_t)
 domain_comms(dom0_t, dm_dom_t)
 device_model(dm_dom_t, domHVM_t)
 
+# nomigrate_t must be built via the nomigrate_t_building label; once built,
+# dom0 cannot read its memory.
+declare_domain(nomigrate_t)
+declare_build_label(nomigrate_t)
+create_domain_build_label(dom0_t, nomigrate_t)
+manage_domain(dom0_t, nomigrate_t)
+domain_comms(dom0_t, nomigrate_t)
+domain_self_comms(nomigrate_t)
+
 ###############################################################################
 #
 # Device delegation
diff --git a/tools/libxc/xc_flask.c b/tools/libxc/xc_flask.c
index 80c5a2d..face1e0 100644
--- a/tools/libxc/xc_flask.c
+++ b/tools/libxc/xc_flask.c
@@ -422,6 +422,16 @@ int xc_flask_setavc_threshold(xc_interface *xch, int threshold)
     return xc_flask_op(xch, &op);
 }
 
+int xc_flask_relabel_domain(xc_interface *xch, int domid, uint32_t sid)
+{
+    DECLARE_FLASK_OP;
+    op.cmd = FLASK_RELABEL_DOMAIN;
+    op.u.relabel.domid = domid;
+    op.u.relabel.sid = sid;
+
+    return xc_flask_op(xch, &op);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 1cd13c1..32122fd 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2169,6 +2169,7 @@ int xc_flask_policyvers(xc_interface *xc_handle);
 int xc_flask_avc_hashstats(xc_interface *xc_handle, char *buf, int size);
 int xc_flask_getavc_threshold(xc_interface *xc_handle);
 int xc_flask_setavc_threshold(xc_interface *xc_handle, int threshold);
+int xc_flask_relabel_domain(xc_interface *xch, int domid, uint32_t sid);
 
 struct elf_binary;
 void xc_elf_set_logfile(xc_interface *xch, struct elf_binary *elf,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 9d20086..b183255 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1182,6 +1182,10 @@ static void domcreate_complete(libxl__egc *egc,
                                int rc)
 {
     STATE_AO_GC(dcs->ao);
+    libxl_domain_config *const d_config = dcs->guest_config;
+
+    if (!rc && d_config->b_info.exec_ssidref)
+        rc = xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_config->b_info.exec_ssidref);
 
     if (rc) {
         if (dcs->guest_domid) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 7eac4a8..93524f0 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -268,6 +268,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("video_memkb",     MemKB),
     ("shadow_memkb",    MemKB),
     ("rtc_timeoffset",  uint32),
+    ("exec_ssidref",    uint32),
     ("localtime",       libxl_defbool),
     ("disable_migrate", libxl_defbool),
     ("cpuid",           libxl_cpuid_policy_list),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4b75fc3..e964bf1 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -596,16 +596,34 @@ static void parse_config_data(const char *config_source,
         exit(1);
     }
 
-    if (!xlu_cfg_get_string (config, "seclabel", &buf, 0)) {
+    if (!xlu_cfg_get_string (config, "init_seclabel", &buf, 0)) {
         e = libxl_flask_context_to_sid(ctx, (char *)buf, strlen(buf),
                                     &c_info->ssidref);
         if (e) {
             if (errno == ENOSYS) {
+                fprintf(stderr, "XSM Disabled: init_seclabel not supported\n");
+            } else {
+                fprintf(stderr, "Invalid init_seclabel: %s\n", buf);
+                exit(1);
+            }
+        }
+    }
+
+    if (!xlu_cfg_get_string (config, "seclabel", &buf, 0)) {
+        uint32_t ssidref;
+        e = libxl_flask_context_to_sid(ctx, (char *)buf, strlen(buf),
+                                    &ssidref);
+        if (e) {
+            if (errno == ENOSYS) {
                 fprintf(stderr, "XSM Disabled: seclabel not supported\n");
             } else {
                 fprintf(stderr, "Invalid seclabel: %s\n", buf);
                 exit(1);
             }
+        } else if (c_info->ssidref) {
+            b_info->exec_ssidref = ssidref;
+        } else {
+            c_info->ssidref = ssidref;
         }
     }
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 3f58909..2ad11b2 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -9,24 +9,47 @@
 #   Declare a type as a domain type, and allow basic domain setup
 define(`declare_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	type $1_channel, event_type;
+	type_transition $1 domain_type:event $1_channel;
 	allow $1 $1:grant { query setup };
 	allow $1 $1:mmu { adjust physmap map_read map_write stat pinpage };
 	allow $1 $1:hvm { getparam setparam };
 ')
 
-# create_domain(priv, target)
-#   Allow a domain to be created
-define(`create_domain', `
+# declare_build_label(type)
+#   Declare a paired _building type for the given domain type
+define(`declare_build_label', `
+	type $1_building, domain_type;
+	type_transition $1_building domain_type:event $1_channel;
+	allow $1_building $1 : domain transition;
+')
+
+define(`create_domain_common', `
 	allow $1 $2:domain { create max_vcpus setdomainmaxmem setaddrsize
-			getdomaininfo hypercall setvcpucontext scheduler
-			unpause getvcpuinfo getvcpuextstate getaddrsize
-			getvcpuaffinity };
+			getdomaininfo hypercall setvcpucontext setextvcpucontext
+			scheduler getvcpuinfo getvcpuextstate getaddrsize
+			getvcpuaffinity setvcpuaffinity };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute setparam pcilevel trackdirtyvram };
-	allow $1 $2_$1_channel:event create;
+	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc setparam pcilevel trackdirtyvram };
+')
+
+# create_domain(priv, target)
+#   Allow a domain to be created directly
+define(`create_domain', `
+	create_domain_common($1, $2)
+	allow $1 $2_channel:event create;
+')
+
+# create_domain_build_label(priv, target)
+#   Allow a domain to be created via its domain build label
+define(`create_domain_build_label', `
+	create_domain_common($1, $2_building)
+	allow $1 $2_channel:event create;
+	allow $1 $2_building:domain2 relabelfrom;
+	allow $1 $2:domain2 relabelto;
 ')
 
 # manage_domain(priv, target)
@@ -37,6 +60,15 @@ define(`manage_domain', `
 			setvcpuaffinity setdomainmaxmem };
 ')
 
+# migrate_domain_out(priv, target)
+#   Allow creation of a snapshot or migration image from a domain
+#   (inbound migration is the same as domain creation)
+define(`migrate_domain_out', `
+	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:mmu { stat pageinfo map_read };
+	allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext getvcpuextstate pause destroy };
+')
+
 ################################################################################
 #
 # Inter-domain communication
@@ -47,8 +79,6 @@ define(`manage_domain', `
 #   This allows an event channel to be created from domains with labels
 #   <source> to <dest> and will label it <chan-label>
 define(`create_channel', `
-	type $3, event_type;
-	type_transition $1 $2:event $3;
 	allow $1 $3:event { create send status };
 	allow $3 $2:event { bind };
 ')
@@ -56,8 +86,8 @@ define(`create_channel', `
 # domain_event_comms(dom1, dom2)
 #   Allow two domain types to communicate using event channels
 define(`domain_event_comms', `
-	create_channel($1, $2, $1_$2_channel)
-	create_channel($2, $1, $2_$1_channel)
+	create_channel($1, $2, $1_channel)
+	create_channel($2, $1, $2_channel)
 ')
 
 # domain_comms(dom1, dom2)
@@ -72,7 +102,7 @@ define(`domain_comms', `
 #   Allow a domain types to communicate with others of its type using grants
 #   and event channels (this includes event channels to DOMID_SELF)
 define(`domain_self_comms', `
-	create_channel($1, $1, $1_self_channel)
+	create_channel($1, $1, $1_channel)
 	allow $1 $1:grant { map_read map_write copy unmap };
 ')
 
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 9550397..1162153 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -90,6 +90,7 @@ create_domain(dom0_t, isolated_domU_t)
 manage_domain(dom0_t, isolated_domU_t)
 domain_comms(dom0_t, isolated_domU_t)
 
+# Declare a boolean that denies creation of prot_domU_t domains
 gen_bool(prot_doms_locked, false)
 declare_domain(prot_domU_t)
 if (!prot_doms_locked) {
@@ -111,6 +112,15 @@ manage_domain(dom0_t, dm_dom_t)
 domain_comms(dom0_t, dm_dom_t)
 device_model(dm_dom_t, domHVM_t)
 
+# nomigrate_t must be built via the nomigrate_t_building label; once built,
+# dom0 cannot read its memory.
+declare_domain(nomigrate_t)
+declare_build_label(nomigrate_t)
+create_domain_build_label(dom0_t, nomigrate_t)
+manage_domain(dom0_t, nomigrate_t)
+domain_comms(dom0_t, nomigrate_t)
+domain_self_comms(nomigrate_t)
+
 ###############################################################################
 #
 # Device delegation
diff --git a/tools/libxc/xc_flask.c b/tools/libxc/xc_flask.c
index 80c5a2d..face1e0 100644
--- a/tools/libxc/xc_flask.c
+++ b/tools/libxc/xc_flask.c
@@ -422,6 +422,16 @@ int xc_flask_setavc_threshold(xc_interface *xch, int threshold)
     return xc_flask_op(xch, &op);
 }
 
+int xc_flask_relabel_domain(xc_interface *xch, int domid, uint32_t sid)
+{
+    DECLARE_FLASK_OP;
+    op.cmd = FLASK_RELABEL_DOMAIN;
+    op.u.relabel.domid = domid;
+    op.u.relabel.sid = sid;
+
+    return xc_flask_op(xch, &op);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 1cd13c1..32122fd 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2169,6 +2169,7 @@ int xc_flask_policyvers(xc_interface *xc_handle);
 int xc_flask_avc_hashstats(xc_interface *xc_handle, char *buf, int size);
 int xc_flask_getavc_threshold(xc_interface *xc_handle);
 int xc_flask_setavc_threshold(xc_interface *xc_handle, int threshold);
+int xc_flask_relabel_domain(xc_interface *xch, int domid, uint32_t sid);
 
 struct elf_binary;
 void xc_elf_set_logfile(xc_interface *xch, struct elf_binary *elf,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 9d20086..b183255 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1182,6 +1182,10 @@ static void domcreate_complete(libxl__egc *egc,
                                int rc)
 {
     STATE_AO_GC(dcs->ao);
+    libxl_domain_config *const d_config = dcs->guest_config;
+
+    if (!rc && d_config->b_info.exec_ssidref)
+        rc = xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_config->b_info.exec_ssidref);
 
     if (rc) {
         if (dcs->guest_domid) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 7eac4a8..93524f0 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -268,6 +268,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("video_memkb",     MemKB),
     ("shadow_memkb",    MemKB),
     ("rtc_timeoffset",  uint32),
+    ("exec_ssidref",    uint32),
     ("localtime",       libxl_defbool),
     ("disable_migrate", libxl_defbool),
     ("cpuid",           libxl_cpuid_policy_list),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4b75fc3..e964bf1 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -596,16 +596,34 @@ static void parse_config_data(const char *config_source,
         exit(1);
     }
 
-    if (!xlu_cfg_get_string (config, "seclabel", &buf, 0)) {
+    if (!xlu_cfg_get_string (config, "init_seclabel", &buf, 0)) {
         e = libxl_flask_context_to_sid(ctx, (char *)buf, strlen(buf),
                                     &c_info->ssidref);
         if (e) {
             if (errno == ENOSYS) {
+                fprintf(stderr, "XSM Disabled: init_seclabel not supported\n");
+            } else {
+                fprintf(stderr, "Invalid init_seclabel: %s\n", buf);
+                exit(1);
+            }
+        }
+    }
+
+    if (!xlu_cfg_get_string (config, "seclabel", &buf, 0)) {
+        uint32_t ssidref;
+        e = libxl_flask_context_to_sid(ctx, (char *)buf, strlen(buf),
+                                    &ssidref);
+        if (e) {
+            if (errno == ENOSYS) {
                 fprintf(stderr, "XSM Disabled: seclabel not supported\n");
             } else {
                 fprintf(stderr, "Invalid seclabel: %s\n", buf);
                 exit(1);
             }
+        } else if (c_info->ssidref) {
+            b_info->exec_ssidref = ssidref;
+        } else {
+            c_info->ssidref = ssidref;
         }
     }
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:38:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 20:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tit3q-0000Wi-8J; Wed, 12 Dec 2012 20:37:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen@lippux.com>)
	id 1Tit3o-0000WY-4t; Wed, 12 Dec 2012 20:37:44 +0000
Received: from [85.158.143.99:17788] by server-3.bemta-4.messagelabs.com id
	37/C9-18211-71BE8C05; Wed, 12 Dec 2012 20:37:43 +0000
X-Env-Sender: xen@lippux.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355344662!18024653!1
X-Originating-IP: [78.46.181.14]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23055 invoked from network); 12 Dec 2012 20:37:42 -0000
Received: from www107.your-server.de (HELO www107.your-server.de)
	(78.46.181.14)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Dec 2012 20:37:42 -0000
Received: from [78.46.5.203] (helo=sslproxy01.your-server.de)
	by www107.your-server.de with esmtpsa (TLSv1:AES256-SHA:256)
	(Exim 4.74) (envelope-from <xen@lippux.com>)
	id 1Tit3l-0003MR-W2; Wed, 12 Dec 2012 21:37:42 +0100
Received: from [192.168.0.32] (helo=webmail03.your-server.de)
	by sslproxy01.your-server.de with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <xen@lippux.com>)
	id 1Tit3q-0005BY-53; Wed, 12 Dec 2012 21:37:46 +0100
Received: from ashlynn.lippux.de (ashlynn.lippux.de [5.9.218.242]) by
	webmail.your-server.de (Horde Framework) with HTTP; Wed, 12 Dec 2012
	21:37:35 +0100
Date: Wed, 12 Dec 2012 21:37:35 +0100
Message-ID: <20121212213735.Horde.9y0sVlQvoipQyOsPrfw2yDA@webmail.your-server.de>
From: xen@lippux.com
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20121212193526.Horde.3nLQZVQvoipQyM5uc7nCRgA@webmail.your-server.de>
	<50C8D449.9080404@tycho.nsa.gov>
In-Reply-To: <50C8D449.9080404@tycho.nsa.gov>
User-Agent: Internet Messaging Program (IMP) H4 (5.0.24)
MIME-Version: 1.0
Content-Disposition: inline
X-Authenticated-Sender: xen@lippux.com
X-Virus-Scanned: Clear (ClamAV 0.97.5/15751/Wed Dec 12 18:45:57 2012)
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Compiling init-xenstore-domain.c to initialize an
 OCaml Xenstore Stubdomain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"; DelSp="Yes"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Quoting Daniel De Graaf <dgdegra@tycho.nsa.gov>:

> On 12/12/2012 01:35 PM, xen@lippux.com wrote:
>> Hello all,
>>
>> I am posting this message to both the xen-users and xen-devel lists
>> because I am not sure whether a user or only a developer can help
>> me out.
>>
>> At the moment I'm working on Dom0 disaggregation and I want to
>> create an OCaml Xenstore stubdomain. I believe I have all the
>> needed dependencies: Xen 4.2 with XSM and FLASK compiled in
>> ("XSM Framework" appears in xl dmesg, so I think it works
>> completely) and Linux 3.6.10 (information on the web says that
>> Linux 3.5 or above with pv_ops is needed for this).
>>
>> Now I need the init-xenstore-domain utility so that I can modify the
>> init scripts to load the xenstore stubdomain instead of starting
>> xenstored in Dom0 (I hope that this is right). So I wanted to
>> compile init-xenstore-domain.c from the source directory
>> /tools/xenstore/ within the Xen 4.2.0 source tree.
>>
>> I tried to do this with this command:
>>
>> gcc -c -I/root/xen-4.2.0/dist/install/usr/include/  
>> -I/root/xen-4.2.0/tools/libxc/ -I/root/xen-4.2.0/xen/include/  
>> /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c -o  
>> /root/init-xenstore-domain
>>
>>
>> In my case the xen-4.2.0 directory is located inside a Debian-based
>> "build" chroot, in the /root/ directory. I don't know how to make
>> the libelf.h file visible to the compiler, and because of this I
>> get this error:
>>
>> In file included from  
>> /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
>> /root/xen-4.2.0/tools/libxc/xc_dom.h:17:31: error:  
>> xen/libelf/libelf.h: No such file or directory
>> In file included from  
>> /root/xen-4.2.0/stubdom/xenstore/init-xenstore-domain.c:9:
>> /root/xen-4.2.0/tools/libxc/xc_dom.h:63: error: field 'parms' has  
>> incomplete type
>>
>
> Hmm, looks like it's not getting built by default. Try running:
>
> make -C stubdom/xenstore init-xenstore-domain
>
> This results in a successful build via the following two commands on  
> my system:
>
> gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing  
> -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement  
> -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF  
> .init-xenstore-domain.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE  
> -fno-optimize-sibling-calls  -Werror -I.  
> -I/home/daniel/xen/stubdom/xenstore/../../tools/libxc  
> -I/home/daniel/xen/stubdom/xenstore/../../tools/include  
> -I/home/daniel/xen/stubdom/xenstore/../../tools/libxc  
> -I/home/daniel/xen/stubdom/xenstore/../../tools/include  -c -o  
> init-xenstore-domain.o init-xenstore-domain.c
> gcc     init-xenstore-domain.o libxenstore.so  
> /home/daniel/xen/stubdom/xenstore/../../tools/libxc/libxenctrl.so  
> /home/daniel/xen/stubdom/xenstore/../../tools/libxc/libxenguest.so  
> /home/daniel/xen/stubdom/xenstore/../../tools/xenstore/libxenstore.so -o  
> init-xenstore-domain
>
>
>> I tried to include the directory /root/xen-4.2.0/xen/include/xen/
>> (based on my directory structure) and also created a symlink
>> reproducing the xen/libelf path mentioned in the include directive
>> of the source file, pointing at the libelf.h in the directory
>> mentioned above, but all of this results in the same error. So how
>> can I compile the init-xenstore-domain utility? The normal xenstore
>> stubdomain (not the OCaml variant) was built successfully by the
>> build system, but the build system does not currently seem to build
>> the init-xenstore-domain utility, so I thought I had to compile it
>> separately.
>>
>> Hope that someone can help me out.
>>
>> Best Regards
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>
>
> --
> Daniel De Graaf
> National Security Agency

Hello Daniel,

this command worked and I could compile an executable. Thanks for your help :)

Best Regards


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:40:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 20:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tit5k-0000gV-Ej; Wed, 12 Dec 2012 20:39:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tit5i-0000fz-RY
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 20:39:42 +0000
Received: from [85.158.143.99:23392] by server-1.bemta-4.messagelabs.com id
	3A/59-28401-E8BE8C05; Wed, 12 Dec 2012 20:39:42 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355344781!18024836!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27804 invoked from network); 12 Dec 2012 20:39:41 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-16.tower-216.messagelabs.com with SMTP;
	12 Dec 2012 20:39:41 -0000
X-TM-IMSS-Message-ID: <72729f630009a51f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 72729f630009a51f ;
	Wed, 12 Dec 2012 15:39:06 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBCKddBF030264; 
	Wed, 12 Dec 2012 15:39:39 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 15:39:10 -0500
Message-Id: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: dgdegra@tycho.nsa.gov, keir@xen.org
Subject: [Xen-devel] [PATCH v7 1/2] xen: unify domain locking in domctl code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two patches were originally part of the XSM series that I have
posted, and remain prerequisites for that series. However, they are
independent of the XSM changes and are a useful simplification
regardless of the use of XSM.

The Acked-bys on these patches were provided before rebasing them over
the copyback changes in 26268:1b72138bddda, which had minor conflicts
that I resolved.

[PATCH 1/2] xen: lock target domain in do_domctl common code
[PATCH 2/2] xen/arch/*: add struct domain parameter to

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:40:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 20:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tit5k-0000gV-Ej; Wed, 12 Dec 2012 20:39:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tit5i-0000fz-RY
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 20:39:42 +0000
Received: from [85.158.143.99:23392] by server-1.bemta-4.messagelabs.com id
	3A/59-28401-E8BE8C05; Wed, 12 Dec 2012 20:39:42 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355344781!18024836!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27804 invoked from network); 12 Dec 2012 20:39:41 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-16.tower-216.messagelabs.com with SMTP;
	12 Dec 2012 20:39:41 -0000
X-TM-IMSS-Message-ID: <72729f630009a51f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 72729f630009a51f ;
	Wed, 12 Dec 2012 15:39:06 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBCKddBF030264; 
	Wed, 12 Dec 2012 15:39:39 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 15:39:10 -0500
Message-Id: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: dgdegra@tycho.nsa.gov, keir@xen.org
Subject: [Xen-devel] [PATCH v7 1/2] xen: unify domain locking in domctl code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two patches were originally part of the XSM series that I have
posted, and remain prerequisites for that series. However, they are
independent of the XSM changes and are a useful simplification
regardless of the use of XSM.

The Acked-bys on these patches were provided before they were rebased over
the copyback changes in 26268:1b72138bddda; the rebase produced minor
conflicts, which I have resolved.

[PATCH 1/2] xen: lock target domain in do_domctl common code
[PATCH 2/2] xen/arch/*: add struct domain parameter to arch_do_domctl

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:40:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 20:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tit5m-0000gy-7g; Wed, 12 Dec 2012 20:39:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tit5k-0000gE-FS
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 20:39:44 +0000
Received: from [85.158.143.99:51286] by server-3.bemta-4.messagelabs.com id
	92/AA-18211-F8BE8C05; Wed, 12 Dec 2012 20:39:43 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-216.messagelabs.com!1355344781!28595064!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5051 invoked from network); 12 Dec 2012 20:39:41 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-3.tower-216.messagelabs.com with SMTP;
	12 Dec 2012 20:39:41 -0000
X-TM-IMSS-Message-ID: <7272a2020009a522@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 7272a2020009a522 ;
	Wed, 12 Dec 2012 15:39:07 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBCKddBH030264; 
	Wed, 12 Dec 2012 15:39:39 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 15:39:12 -0500
Message-Id: <1355344752-30257-3-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: dgdegra@tycho.nsa.gov, keir@xen.org
Subject: [Xen-devel] [PATCH 2/2] xen/arch/*: add struct domain parameter to
	arch_do_domctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the arch-independent do_domctl function now RCU locks the domain
specified by op->domain, pass the struct domain to the arch-specific
domctl function and remove the duplicate per-subfunction locking.

This also removes two get_domain/put_domain call pairs (in
XEN_DOMCTL_assign_device and XEN_DOMCTL_deassign_device), replacing them
with RCU locking.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/domctl.c           |   2 +-
 xen/arch/x86/domctl.c           | 455 +++++++---------------------------------
 xen/common/domctl.c             |   2 +-
 xen/drivers/passthrough/iommu.c |  31 +--
 xen/include/xen/hypercall.h     |   2 +-
 xen/include/xen/iommu.h         |   3 +-
 6 files changed, 84 insertions(+), 411 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index cf16791..d54a387 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,7 +10,7 @@
 #include <xen/errno.h>
 #include <public/domctl.h>
 
-long arch_do_domctl(struct xen_domctl *domctl,
+long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 239e411..e89a20a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -47,7 +47,7 @@ static int gdbsx_guest_mem_io(
 }
 
 long arch_do_domctl(
-    struct xen_domctl *domctl,
+    struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -58,23 +58,15 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_shadow_op:
     {
-        struct domain *d;
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = paging_domctl(d,
-                                &domctl->u.shadow_op,
-                                guest_handle_cast(u_domctl, void));
-            rcu_unlock_domain(d);
-            copyback = 1;
-        } 
+        ret = paging_domctl(d,
+                            &domctl->u.shadow_op,
+                            guest_handle_cast(u_domctl, void));
+        copyback = 1;
     }
     break;
 
     case XEN_DOMCTL_ioport_permission:
     {
-        struct domain *d;
         unsigned int fp = domctl->u.ioport_permission.first_port;
         unsigned int np = domctl->u.ioport_permission.nr_ports;
         int allow = domctl->u.ioport_permission.allow_access;
@@ -83,10 +75,6 @@ long arch_do_domctl(
         if ( (fp + np) > 65536 )
             break;
 
-        ret = -ESRCH;
-        if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
-            break;
-
         if ( np == 0 )
             ret = 0;
         else if ( xsm_ioport_permission(d, fp, fp + np - 1, allow) )
@@ -95,8 +83,6 @@ long arch_do_domctl(
             ret = ioports_permit_access(d, fp, fp + np - 1);
         else
             ret = ioports_deny_access(d, fp, fp + np - 1);
-
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -104,23 +90,16 @@ long arch_do_domctl(
     {
         struct page_info *page;
         unsigned long mfn = domctl->u.getpageframeinfo.gmfn;
-        domid_t dom = domctl->domain;
-        struct domain *d;
 
         ret = -EINVAL;
-
-        if ( unlikely(!mfn_valid(mfn)) ||
-             unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
+        if ( unlikely(!mfn_valid(mfn)) )
             break;
 
         page = mfn_to_page(mfn);
 
         ret = xsm_getpageframeinfo(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         if ( likely(get_page(page, d)) )
         {
@@ -150,7 +129,6 @@ long arch_do_domctl(
             put_page(page);
         }
 
-        rcu_unlock_domain(d);
         copyback = 1;
     }
     break;
@@ -160,27 +138,17 @@ long arch_do_domctl(
         {
             unsigned int n, j;
             unsigned int num = domctl->u.getpageframeinfo3.num;
-            domid_t dom = domctl->domain;
-            struct domain *d;
             struct page_info *page;
             xen_pfn_t *arr;
 
-            ret = -ESRCH;
-            if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-                break;
-
             ret = xsm_getpageframeinfo(d);
             if ( ret )
-            {
-                rcu_unlock_domain(d);
                 break;
-            }
 
             if ( unlikely(num > 1024) ||
                  unlikely(num != domctl->u.getpageframeinfo3.num) )
             {
                 ret = -E2BIG;
-                rcu_unlock_domain(d);
                 break;
             }
 
@@ -188,7 +156,6 @@ long arch_do_domctl(
             if ( !page )
             {
                 ret = -ENOMEM;
-                rcu_unlock_domain(d);
                 break;
             }
             arr = page_to_virt(page);
@@ -263,7 +230,6 @@ long arch_do_domctl(
 
             free_domheap_page(virt_to_page(arr));
 
-            rcu_unlock_domain(d);
             break;
         }
         /* fall thru */
@@ -271,25 +237,15 @@ long arch_do_domctl(
     {
         int n,j;
         int num = domctl->u.getpageframeinfo2.num;
-        domid_t dom = domctl->domain;
-        struct domain *d;
         uint32_t *arr32;
-        ret = -ESRCH;
-
-        if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-            break;
 
         ret = xsm_getpageframeinfo(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         if ( unlikely(num > 1024) )
         {
             ret = -E2BIG;
-            rcu_unlock_domain(d);
             break;
         }
 
@@ -297,7 +253,6 @@ long arch_do_domctl(
         if ( !arr32 )
         {
             ret = -ENOMEM;
-            rcu_unlock_domain(d);
             break;
         }
  
@@ -369,78 +324,58 @@ long arch_do_domctl(
         }
 
         free_xenheap_page(arr32);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_getmemlist:
     {
         int i;
-        struct domain *d = rcu_lock_domain_by_id(domctl->domain);
         unsigned long max_pfns = domctl->u.getmemlist.max_pfns;
         uint64_t mfn;
         struct page_info *page;
 
-        ret = -EINVAL;
-        if ( d != NULL )
-        {
-            ret = xsm_getmemlist(d);
-            if ( ret )
-            {
-                rcu_unlock_domain(d);
-                break;
-            }
+        ret = xsm_getmemlist(d);
+        if ( ret )
+            break;
 
-            spin_lock(&d->page_alloc_lock);
+        if ( unlikely(d->is_dying) ) {
+            ret = -EINVAL;
+            break;
+        }
 
-            if ( unlikely(d->is_dying) ) {
-                spin_unlock(&d->page_alloc_lock);
-                goto getmemlist_out;
-            }
+        spin_lock(&d->page_alloc_lock);
 
-            ret = i = 0;
-            page_list_for_each(page, &d->page_list)
+        ret = i = 0;
+        page_list_for_each(page, &d->page_list)
+        {
+            if ( i >= max_pfns )
+                break;
+            mfn = page_to_mfn(page);
+            if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
+                                      i, &mfn, 1) )
             {
-                if ( i >= max_pfns )
-                    break;
-                mfn = page_to_mfn(page);
-                if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
-                                          i, &mfn, 1) )
-                {
-                    ret = -EFAULT;
-                    break;
-                }
-                ++i;
+                ret = -EFAULT;
+                break;
             }
-            
-            spin_unlock(&d->page_alloc_lock);
+            ++i;
+        }
 
-            domctl->u.getmemlist.num_pfns = i;
-            copyback = 1;
-        getmemlist_out:
-            rcu_unlock_domain(d);
-        }
+        spin_unlock(&d->page_alloc_lock);
+
+        domctl->u.getmemlist.num_pfns = i;
+        copyback = 1;
     }
     break;
 
     case XEN_DOMCTL_hypercall_init:
     {
-        struct domain *d = rcu_lock_domain_by_id(domctl->domain);
         unsigned long gmfn = domctl->u.hypercall_init.gmfn;
         struct page_info *page;
         void *hypercall_page;
 
-        ret = -ESRCH;
-        if ( unlikely(d == NULL) )
-            break;
-
         ret = xsm_hypercall_init(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
 
@@ -449,7 +384,6 @@ long arch_do_domctl(
         {
             if ( page )
                 put_page(page);
-            rcu_unlock_domain(d);
             break;
         }
 
@@ -460,19 +394,12 @@ long arch_do_domctl(
         unmap_domain_page(hypercall_page);
 
         put_page_and_type(page);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_sethvmcontext:
     { 
         struct hvm_domain_context c = { .size = domctl->u.hvmcontext.size };
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
 
         ret = xsm_hvmcontext(d, domctl->cmd);
         if ( ret )
@@ -497,19 +424,12 @@ long arch_do_domctl(
     sethvmcontext_out:
         if ( c.data != NULL )
             xfree(c.data);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gethvmcontext:
     { 
         struct hvm_domain_context c = { 0 };
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
 
         ret = xsm_hvmcontext(d, domctl->cmd);
         if ( ret )
@@ -548,7 +468,6 @@ long arch_do_domctl(
             ret = -EFAULT;
 
     gethvmcontext_out:
-        rcu_unlock_domain(d);
         copyback = 1;
 
         if ( c.data != NULL )
@@ -558,46 +477,28 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_gethvmcontext_partial:
     { 
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_hvmcontext(d, domctl->cmd);
         if ( ret )
-            goto gethvmcontext_partial_out;
+            break;
 
         ret = -EINVAL;
         if ( !is_hvm_domain(d) ) 
-            goto gethvmcontext_partial_out;
+            break;
 
         domain_pause(d);
         ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
                            domctl->u.hvmcontext_partial.instance,
                            domctl->u.hvmcontext_partial.buffer);
         domain_unpause(d);
-
-    gethvmcontext_partial_out:
-        rcu_unlock_domain(d);
     }
     break;
 
 
     case XEN_DOMCTL_set_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_address_size(d, domctl->cmd);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         switch ( domctl->u.address_size.size )
         {
@@ -611,30 +512,18 @@ long arch_do_domctl(
             ret = (domctl->u.address_size.size == BITS_PER_LONG) ? 0 : -EINVAL;
             break;
         }
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_get_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_address_size(d, domctl->cmd);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domctl->u.address_size.size =
             is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
 
-        rcu_unlock_domain(d);
         ret = 0;
         copyback = 1;
     }
@@ -642,46 +531,28 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_machine_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_machine_address_size(d, domctl->cmd);
         if ( ret )
-            goto set_machine_address_size_out;
+            break;
 
         ret = -EBUSY;
         if ( d->tot_pages > 0 )
-            goto set_machine_address_size_out;
+            break;
 
         d->arch.physaddr_bitsize = domctl->u.address_size.size;
 
         ret = 0;
-    set_machine_address_size_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_get_machine_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_machine_address_size(d, domctl->cmd);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domctl->u.address_size.size = d->arch.physaddr_bitsize;
 
-        rcu_unlock_domain(d);
         ret = 0;
         copyback = 1;
     }
@@ -689,25 +560,20 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_sendtrigger:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_sendtrigger(d);
         if ( ret )
-            goto sendtrigger_out;
+            break;
 
         ret = -EINVAL;
         if ( domctl->u.sendtrigger.vcpu >= MAX_VIRT_CPUS )
-            goto sendtrigger_out;
+            break;
 
         ret = -ESRCH;
         if ( domctl->u.sendtrigger.vcpu >= d->max_vcpus ||
              (v = d->vcpu[domctl->u.sendtrigger.vcpu]) == NULL )
-            goto sendtrigger_out;
+            break;
 
         switch ( domctl->u.sendtrigger.trigger )
         {
@@ -744,34 +610,27 @@ long arch_do_domctl(
         default:
             ret = -ENOSYS;
         }
-
-    sendtrigger_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_bind_pt_irq:
     {
-        struct domain * d;
         xen_domctl_bind_pt_irq_t * bind;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
         bind = &(domctl->u.bind_pt_irq);
 
         ret = -EINVAL;
         if ( !is_hvm_domain(d) )
-            goto bind_out;
+            break;
 
         ret = xsm_bind_pt_irq(d, bind);
         if ( ret )
-            goto bind_out;
+            break;
 
         ret = -EPERM;
         if ( !IS_PRIV(current->domain) &&
              !irq_access_permitted(current->domain, bind->machine_irq) )
-            goto bind_out;
+            break;
 
         ret = -ESRCH;
         if ( iommu_enabled )
@@ -783,26 +642,19 @@ long arch_do_domctl(
         if ( ret < 0 )
             printk(XENLOG_G_ERR "pt_irq_create_bind failed (%ld) for dom%d\n",
                    ret, d->domain_id);
-
-    bind_out:
-        rcu_unlock_domain(d);
     }
     break;    
 
     case XEN_DOMCTL_unbind_pt_irq:
     {
-        struct domain * d;
         xen_domctl_bind_pt_irq_t * bind;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
         bind = &(domctl->u.bind_pt_irq);
 
         ret = -EPERM;
         if ( !IS_PRIV(current->domain) &&
              !irq_access_permitted(current->domain, bind->machine_irq) )
-            goto unbind_out;
+            break;
 
         if ( iommu_enabled )
         {
@@ -813,15 +665,11 @@ long arch_do_domctl(
         if ( ret < 0 )
             printk(XENLOG_G_ERR "pt_irq_destroy_bind failed (%ld) for dom%d\n",
                    ret, d->domain_id);
-
-    unbind_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_memory_mapping:
     {
-        struct domain *d;
         unsigned long gfn = domctl->u.memory_mapping.first_gfn;
         unsigned long mfn = domctl->u.memory_mapping.first_mfn;
         unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
@@ -839,15 +687,9 @@ long arch_do_domctl(
              !iomem_access_permitted(current->domain, mfn, mfn + nr_mfns - 1) )
             break;
 
-        ret = -ESRCH;
-        if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
-            break;
-
         ret = xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, add);
-        if ( ret ) {
-            rcu_unlock_domain(d);
+        if ( ret )
             break;
-        }
 
         if ( add )
         {
@@ -894,15 +736,12 @@ long arch_do_domctl(
                        ret, add ? "removing" : "denying", d->domain_id,
                        mfn, mfn + nr_mfns - 1);
         }
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_ioport_mapping:
     {
 #define MAX_IOPORTS    0x10000
-        struct domain *d;
         struct hvm_iommu *hd;
         unsigned int fgp = domctl->u.ioport_mapping.first_gport;
         unsigned int fmp = domctl->u.ioport_mapping.first_mport;
@@ -926,15 +765,9 @@ long arch_do_domctl(
              !ioports_access_permitted(current->domain, fmp, fmp + np - 1) )
             break;
 
-        ret = -ESRCH;
-        if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
-            break;
-
         ret = xsm_ioport_permission(d, fmp, fmp + np - 1, add);
-        if ( ret ) {
-            rcu_unlock_domain(d);
+        if ( ret )
             break;
-        }
 
         hd = domain_hvm_iommu(d);
         if ( add )
@@ -990,30 +823,19 @@ long arch_do_domctl(
                        "ioport_map: error %ld denying dom%d access to [%x,%x]\n",
                        ret, d->domain_id, fmp, fmp + np - 1);
         }
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_pin_mem_cacheattr:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_pin_mem_cacheattr(d);
         if ( ret )
-            goto pin_out;
+            break;
 
         ret = hvm_set_mem_pinned_cacheattr(
             d, domctl->u.pin_mem_cacheattr.start,
             domctl->u.pin_mem_cacheattr.end,
             domctl->u.pin_mem_cacheattr.type);
-
-    pin_out:
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -1021,19 +843,13 @@ long arch_do_domctl(
     case XEN_DOMCTL_get_ext_vcpucontext:
     {
         struct xen_domctl_ext_vcpucontext *evc;
-        struct domain *d;
         struct vcpu *v;
 
         evc = &domctl->u.ext_vcpucontext;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_ext_vcpucontext(d, domctl->cmd);
         if ( ret )
-            goto ext_vcpucontext_out;
+            break;
 
         ret = -ESRCH;
         if ( (evc->vcpu >= d->max_vcpus) ||
@@ -1124,7 +940,6 @@ long arch_do_domctl(
         ret = 0;
 
     ext_vcpucontext_out:
-        rcu_unlock_domain(d);
         if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
             copyback = 1;
     }
@@ -1132,16 +947,10 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_cpuid:
     {
-        struct domain *d;
         xen_domctl_cpuid_t *ctl = &domctl->u.cpuid;
         cpuid_input_t *cpuid = NULL; 
         int i;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         for ( i = 0; i < MAX_CPUID_INPUT; i++ )
         {
             cpuid = &d->arch.cpuids[i];
@@ -1164,21 +973,13 @@ long arch_do_domctl(
             memcpy(cpuid, ctl, sizeof(cpuid_input_t));
             ret = 0;
         }
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gettscinfo:
     {
-        struct domain *d;
         xen_guest_tsc_info_t info;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         domain_pause(d);
         tsc_get_info(d, &info.tsc_mode,
                         &info.elapsed_nsec,
@@ -1189,20 +990,11 @@ long arch_do_domctl(
         else
             ret = 0;
         domain_unpause(d);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_settscinfo:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         domain_pause(d);
         tsc_set_info(d, domctl->u.tsc_info.info.tsc_mode,
                      domctl->u.tsc_info.info.elapsed_nsec,
@@ -1210,66 +1002,40 @@ long arch_do_domctl(
                      domctl->u.tsc_info.info.incarnation);
         domain_unpause(d);
 
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_suppress_spurious_page_faults:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            d->arch.suppress_spurious_page_faults = 1;
-            rcu_unlock_domain(d);
-            ret = 0;
-        }
+        d->arch.suppress_spurious_page_faults = 1;
+        ret = 0;
     }
     break;
 
     case XEN_DOMCTL_debug_op:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = -EINVAL;
         if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
              ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
-            goto debug_op_out;
+            break;
 
         ret = -EINVAL;
         if ( !is_hvm_domain(d))
-            goto debug_op_out;
+            break;
 
         ret = hvm_debug_op(v, domctl->u.debug_op.op);
-
-    debug_op_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gdbsx_guestmemio:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         domctl->u.gdbsx_guest_memio.remain =
             domctl->u.gdbsx_guest_memio.len;
 
         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
-
-        rcu_unlock_domain(d);
         if ( !ret )
            copyback = 1;
     }
@@ -1277,71 +1043,42 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_gdbsx_pausevcpu:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         vcpu_pause(v);
         ret = 0;
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gdbsx_unpausevcpu:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         if ( !atomic_read(&v->pause_count) )
             printk("WARN: Unpausing vcpu:%d which is not paused\n", v->vcpu_id);
         vcpu_unpause(v);
         ret = 0;
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gdbsx_domstatus:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         domctl->u.gdbsx_domstatus.vcpu_id = -1;
         domctl->u.gdbsx_domstatus.paused = d->is_paused_by_controller;
         if ( domctl->u.gdbsx_domstatus.paused )
@@ -1358,7 +1095,6 @@ long arch_do_domctl(
                 }
             }
         }
-        rcu_unlock_domain(d);
         ret = 0;
         copyback = 1;
     }
@@ -1368,7 +1104,6 @@ long arch_do_domctl(
     case XEN_DOMCTL_getvcpuextstate:
     {
         struct xen_domctl_vcpuextstate *evc;
-        struct domain *d;
         struct vcpu *v;
         uint32_t offset = 0;
         uint64_t _xfeature_mask = 0;
@@ -1379,12 +1114,6 @@ long arch_do_domctl(
 
         evc = &domctl->u.vcpuextstate;
 
-        ret = -ESRCH;
-
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_vcpuextstate(d, domctl->cmd);
         if ( ret )
             goto vcpuextstate_out;
@@ -1483,7 +1212,6 @@ long arch_do_domctl(
         ret = 0;
 
     vcpuextstate_out:
-        rcu_unlock_domain(d);
         if ( domctl->cmd == XEN_DOMCTL_getvcpuextstate )
             copyback = 1;
     }
@@ -1491,52 +1219,35 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_mem_event_op:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_mem_event(d);
-            if ( !ret )
-                ret = mem_event_domctl(d, &domctl->u.mem_event_op,
-                                       guest_handle_cast(u_domctl, void));
-            rcu_unlock_domain(d);
-            copyback = 1;
-        } 
+        ret = xsm_mem_event(d);
+        if ( !ret )
+            ret = mem_event_domctl(d, &domctl->u.mem_event_op,
+                                   guest_handle_cast(u_domctl, void));
+        copyback = 1;
     }
     break;
 
     case XEN_DOMCTL_mem_sharing_op:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_mem_sharing(d);
-            if ( !ret )
-                ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
-            rcu_unlock_domain(d);
-        } 
+        ret = xsm_mem_sharing(d);
+        if ( !ret )
+            ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
     }
     break;
 
 #if P2M_AUDIT
     case XEN_DOMCTL_audit_p2m:
     {
-        struct domain *d;
-
-        ret = rcu_lock_remote_target_domain_by_id(domctl->domain, &d);
-        if ( ret != 0 )
+        if ( d == current->domain )
+        {
+            ret = -EPERM;
             break;
+        }
 
         audit_p2m(d,
                   &domctl->u.audit_p2m.orphans,
                   &domctl->u.audit_p2m.m2p_bad,
                   &domctl->u.audit_p2m.p2m_bad);
-        rcu_unlock_domain(d);
         copyback = 1;
     }
     break;
@@ -1544,52 +1255,36 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_access_required:
     {
-        struct domain *d;
         struct p2m_domain* p2m;
         
         ret = -EPERM;
-        if ( current->domain->domain_id == domctl->domain )
+        if ( current->domain == d )
             break;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_mem_event(d);
-            if ( !ret ) {
-                p2m = p2m_get_hostp2m(d);
-                p2m->access_required = domctl->u.access_required.access_required;
-            }
-            rcu_unlock_domain(d);
-        } 
+        ret = xsm_mem_event(d);
+        if ( !ret ) {
+            p2m = p2m_get_hostp2m(d);
+            p2m->access_required = domctl->u.access_required.access_required;
+        }
     }
     break;
 
     case XEN_DOMCTL_set_broken_page_p2m:
     {
-        struct domain *d;
+        p2m_type_t pt;
+        unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
+        mfn_t mfn = get_gfn_query(d, pfn, &pt);
 
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            p2m_type_t pt;
-            unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
-            mfn_t mfn = get_gfn_query(d, pfn, &pt);
-
-            if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
-                         (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
-                ret = -EINVAL;
+        if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
+                     (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
+            ret = -EINVAL;
 
-            put_gfn(d, pfn);
-            rcu_unlock_domain(d);
-        }
-        else
-            ret = -ESRCH;
+        put_gfn(d, pfn);
     }
     break;
 
     default:
-        ret = iommu_do_domctl(domctl, u_domctl);
+        ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
     }
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a491159..ca789bb 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -882,7 +882,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     break;
 
     default:
-        ret = arch_do_domctl(op, u_domctl);
+        ret = arch_do_domctl(op, d, u_domctl);
         break;
     }
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index fb6b5db..1cd0007 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -542,10 +542,9 @@ void iommu_crash_shutdown(void)
 }
 
 int iommu_do_domctl(
-    struct xen_domctl *domctl,
+    struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    struct domain *d;
     u16 seg;
     u8 bus, devfn;
     int ret = 0;
@@ -564,10 +563,6 @@ int iommu_do_domctl(
         if ( ret )
             break;
 
-        ret = -EINVAL;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         seg = domctl->u.get_device_group.machine_sbdf >> 16;
         bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
         devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
@@ -588,7 +583,6 @@ int iommu_do_domctl(
         }
         if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
             ret = -EFAULT;
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -611,20 +605,15 @@ int iommu_do_domctl(
         break;
 
     case XEN_DOMCTL_assign_device:
-        if ( unlikely((d = get_domain_by_id(domctl->domain)) == NULL) ||
-             unlikely(d->is_dying) )
+        if ( unlikely(d->is_dying) )
         {
-            printk(XENLOG_G_ERR
-                   "XEN_DOMCTL_assign_device: get_domain_by_id() failed\n");
             ret = -EINVAL;
-            if ( d )
-                goto assign_device_out;
             break;
         }
 
         ret = xsm_assign_device(d, domctl->u.assign_device.machine_sbdf);
         if ( ret )
-            goto assign_device_out;
+            break;
 
         seg = domctl->u.get_device_group.machine_sbdf >> 16;
         bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
@@ -638,22 +627,12 @@ int iommu_do_domctl(
                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                    d->domain_id, ret);
 
-    assign_device_out:
-        put_domain(d);
         break;
 
     case XEN_DOMCTL_deassign_device:
-        if ( unlikely((d = get_domain_by_id(domctl->domain)) == NULL) )
-        {
-            printk(XENLOG_G_ERR
-                   "XEN_DOMCTL_deassign_device: get_domain_by_id() failed\n");
-            ret = -EINVAL;
-            break;
-        }
-
         ret = xsm_deassign_device(d, domctl->u.assign_device.machine_sbdf);
         if ( ret )
-            goto deassign_device_out;
+            break;
 
         seg = domctl->u.get_device_group.machine_sbdf >> 16;
         bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
@@ -668,8 +647,6 @@ int iommu_do_domctl(
                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                    d->domain_id, ret);
 
-    deassign_device_out:
-        put_domain(d);
         break;
 
     default:
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index e315523..7c3d719 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -37,7 +37,7 @@ do_domctl(
 
 extern long
 arch_do_domctl(
-    struct xen_domctl *domctl,
+    struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 7626216..d477137 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -132,7 +132,8 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, struct domain *d,
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:40:00 2012
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 15:39:12 -0500
Message-Id: <1355344752-30257-3-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: dgdegra@tycho.nsa.gov, keir@xen.org
Subject: [Xen-devel] [PATCH 2/2] xen/arch/*: add struct domain parameter to
	arch_do_domctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the arch-independent do_domctl function now RCU locks the domain
specified by op->domain, pass the struct domain to the arch-specific
domctl function and remove the duplicate per-subfunction locking.

This also removes two get_domain/put_domain call pairs (in
XEN_DOMCTL_assign_device and XEN_DOMCTL_deassign_device), replacing them
with RCU locking.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/domctl.c           |   2 +-
 xen/arch/x86/domctl.c           | 455 +++++++---------------------------------
 xen/common/domctl.c             |   2 +-
 xen/drivers/passthrough/iommu.c |  31 +--
 xen/include/xen/hypercall.h     |   2 +-
 xen/include/xen/iommu.h         |   3 +-
 6 files changed, 84 insertions(+), 411 deletions(-)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index cf16791..d54a387 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,7 +10,7 @@
 #include <xen/errno.h>
 #include <public/domctl.h>
 
-long arch_do_domctl(struct xen_domctl *domctl,
+long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 239e411..e89a20a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -47,7 +47,7 @@ static int gdbsx_guest_mem_io(
 }
 
 long arch_do_domctl(
-    struct xen_domctl *domctl,
+    struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -58,23 +58,15 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_shadow_op:
     {
-        struct domain *d;
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = paging_domctl(d,
-                                &domctl->u.shadow_op,
-                                guest_handle_cast(u_domctl, void));
-            rcu_unlock_domain(d);
-            copyback = 1;
-        } 
+        ret = paging_domctl(d,
+                            &domctl->u.shadow_op,
+                            guest_handle_cast(u_domctl, void));
+        copyback = 1;
     }
     break;
 
     case XEN_DOMCTL_ioport_permission:
     {
-        struct domain *d;
         unsigned int fp = domctl->u.ioport_permission.first_port;
         unsigned int np = domctl->u.ioport_permission.nr_ports;
         int allow = domctl->u.ioport_permission.allow_access;
@@ -83,10 +75,6 @@ long arch_do_domctl(
         if ( (fp + np) > 65536 )
             break;
 
-        ret = -ESRCH;
-        if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
-            break;
-
         if ( np == 0 )
             ret = 0;
         else if ( xsm_ioport_permission(d, fp, fp + np - 1, allow) )
@@ -95,8 +83,6 @@ long arch_do_domctl(
             ret = ioports_permit_access(d, fp, fp + np - 1);
         else
             ret = ioports_deny_access(d, fp, fp + np - 1);
-
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -104,23 +90,16 @@ long arch_do_domctl(
     {
         struct page_info *page;
         unsigned long mfn = domctl->u.getpageframeinfo.gmfn;
-        domid_t dom = domctl->domain;
-        struct domain *d;
 
         ret = -EINVAL;
-
-        if ( unlikely(!mfn_valid(mfn)) ||
-             unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
+        if ( unlikely(!mfn_valid(mfn)) )
             break;
 
         page = mfn_to_page(mfn);
 
         ret = xsm_getpageframeinfo(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         if ( likely(get_page(page, d)) )
         {
@@ -150,7 +129,6 @@ long arch_do_domctl(
             put_page(page);
         }
 
-        rcu_unlock_domain(d);
         copyback = 1;
     }
     break;
@@ -160,27 +138,17 @@ long arch_do_domctl(
         {
             unsigned int n, j;
             unsigned int num = domctl->u.getpageframeinfo3.num;
-            domid_t dom = domctl->domain;
-            struct domain *d;
             struct page_info *page;
             xen_pfn_t *arr;
 
-            ret = -ESRCH;
-            if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-                break;
-
             ret = xsm_getpageframeinfo(d);
             if ( ret )
-            {
-                rcu_unlock_domain(d);
                 break;
-            }
 
             if ( unlikely(num > 1024) ||
                  unlikely(num != domctl->u.getpageframeinfo3.num) )
             {
                 ret = -E2BIG;
-                rcu_unlock_domain(d);
                 break;
             }
 
@@ -188,7 +156,6 @@ long arch_do_domctl(
             if ( !page )
             {
                 ret = -ENOMEM;
-                rcu_unlock_domain(d);
                 break;
             }
             arr = page_to_virt(page);
@@ -263,7 +230,6 @@ long arch_do_domctl(
 
             free_domheap_page(virt_to_page(arr));
 
-            rcu_unlock_domain(d);
             break;
         }
         /* fall thru */
@@ -271,25 +237,15 @@ long arch_do_domctl(
     {
         int n,j;
         int num = domctl->u.getpageframeinfo2.num;
-        domid_t dom = domctl->domain;
-        struct domain *d;
         uint32_t *arr32;
-        ret = -ESRCH;
-
-        if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-            break;
 
         ret = xsm_getpageframeinfo(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         if ( unlikely(num > 1024) )
         {
             ret = -E2BIG;
-            rcu_unlock_domain(d);
             break;
         }
 
@@ -297,7 +253,6 @@ long arch_do_domctl(
         if ( !arr32 )
         {
             ret = -ENOMEM;
-            rcu_unlock_domain(d);
             break;
         }
  
@@ -369,78 +324,58 @@ long arch_do_domctl(
         }
 
         free_xenheap_page(arr32);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_getmemlist:
     {
         int i;
-        struct domain *d = rcu_lock_domain_by_id(domctl->domain);
         unsigned long max_pfns = domctl->u.getmemlist.max_pfns;
         uint64_t mfn;
         struct page_info *page;
 
-        ret = -EINVAL;
-        if ( d != NULL )
-        {
-            ret = xsm_getmemlist(d);
-            if ( ret )
-            {
-                rcu_unlock_domain(d);
-                break;
-            }
+        ret = xsm_getmemlist(d);
+        if ( ret )
+            break;
 
-            spin_lock(&d->page_alloc_lock);
+        if ( unlikely(d->is_dying) ) {
+            ret = -EINVAL;
+            break;
+        }
 
-            if ( unlikely(d->is_dying) ) {
-                spin_unlock(&d->page_alloc_lock);
-                goto getmemlist_out;
-            }
+        spin_lock(&d->page_alloc_lock);
 
-            ret = i = 0;
-            page_list_for_each(page, &d->page_list)
+        ret = i = 0;
+        page_list_for_each(page, &d->page_list)
+        {
+            if ( i >= max_pfns )
+                break;
+            mfn = page_to_mfn(page);
+            if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
+                                      i, &mfn, 1) )
             {
-                if ( i >= max_pfns )
-                    break;
-                mfn = page_to_mfn(page);
-                if ( copy_to_guest_offset(domctl->u.getmemlist.buffer,
-                                          i, &mfn, 1) )
-                {
-                    ret = -EFAULT;
-                    break;
-                }
-                ++i;
+                ret = -EFAULT;
+                break;
             }
-            
-            spin_unlock(&d->page_alloc_lock);
+            ++i;
+        }
 
-            domctl->u.getmemlist.num_pfns = i;
-            copyback = 1;
-        getmemlist_out:
-            rcu_unlock_domain(d);
-        }
+        spin_unlock(&d->page_alloc_lock);
+
+        domctl->u.getmemlist.num_pfns = i;
+        copyback = 1;
     }
     break;
 
     case XEN_DOMCTL_hypercall_init:
     {
-        struct domain *d = rcu_lock_domain_by_id(domctl->domain);
         unsigned long gmfn = domctl->u.hypercall_init.gmfn;
         struct page_info *page;
         void *hypercall_page;
 
-        ret = -ESRCH;
-        if ( unlikely(d == NULL) )
-            break;
-
         ret = xsm_hypercall_init(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
 
@@ -449,7 +384,6 @@ long arch_do_domctl(
         {
             if ( page )
                 put_page(page);
-            rcu_unlock_domain(d);
             break;
         }
 
@@ -460,19 +394,12 @@ long arch_do_domctl(
         unmap_domain_page(hypercall_page);
 
         put_page_and_type(page);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_sethvmcontext:
     { 
         struct hvm_domain_context c = { .size = domctl->u.hvmcontext.size };
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
 
         ret = xsm_hvmcontext(d, domctl->cmd);
         if ( ret )
@@ -497,19 +424,12 @@ long arch_do_domctl(
     sethvmcontext_out:
         if ( c.data != NULL )
             xfree(c.data);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gethvmcontext:
     { 
         struct hvm_domain_context c = { 0 };
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
 
         ret = xsm_hvmcontext(d, domctl->cmd);
         if ( ret )
@@ -548,7 +468,6 @@ long arch_do_domctl(
             ret = -EFAULT;
 
     gethvmcontext_out:
-        rcu_unlock_domain(d);
         copyback = 1;
 
         if ( c.data != NULL )
@@ -558,46 +477,28 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_gethvmcontext_partial:
     { 
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_hvmcontext(d, domctl->cmd);
         if ( ret )
-            goto gethvmcontext_partial_out;
+            break;
 
         ret = -EINVAL;
         if ( !is_hvm_domain(d) ) 
-            goto gethvmcontext_partial_out;
+            break;
 
         domain_pause(d);
         ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
                            domctl->u.hvmcontext_partial.instance,
                            domctl->u.hvmcontext_partial.buffer);
         domain_unpause(d);
-
-    gethvmcontext_partial_out:
-        rcu_unlock_domain(d);
     }
     break;
 
 
     case XEN_DOMCTL_set_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_address_size(d, domctl->cmd);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         switch ( domctl->u.address_size.size )
         {
@@ -611,30 +512,18 @@ long arch_do_domctl(
             ret = (domctl->u.address_size.size == BITS_PER_LONG) ? 0 : -EINVAL;
             break;
         }
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_get_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_address_size(d, domctl->cmd);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domctl->u.address_size.size =
             is_pv_32on64_domain(d) ? 32 : BITS_PER_LONG;
 
-        rcu_unlock_domain(d);
         ret = 0;
         copyback = 1;
     }
@@ -642,46 +531,28 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_machine_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_machine_address_size(d, domctl->cmd);
         if ( ret )
-            goto set_machine_address_size_out;
+            break;
 
         ret = -EBUSY;
         if ( d->tot_pages > 0 )
-            goto set_machine_address_size_out;
+            break;
 
         d->arch.physaddr_bitsize = domctl->u.address_size.size;
 
         ret = 0;
-    set_machine_address_size_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_get_machine_address_size:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_machine_address_size(d, domctl->cmd);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domctl->u.address_size.size = d->arch.physaddr_bitsize;
 
-        rcu_unlock_domain(d);
         ret = 0;
         copyback = 1;
     }
@@ -689,25 +560,20 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_sendtrigger:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = xsm_sendtrigger(d);
         if ( ret )
-            goto sendtrigger_out;
+            break;
 
         ret = -EINVAL;
         if ( domctl->u.sendtrigger.vcpu >= MAX_VIRT_CPUS )
-            goto sendtrigger_out;
+            break;
 
         ret = -ESRCH;
         if ( domctl->u.sendtrigger.vcpu >= d->max_vcpus ||
              (v = d->vcpu[domctl->u.sendtrigger.vcpu]) == NULL )
-            goto sendtrigger_out;
+            break;
 
         switch ( domctl->u.sendtrigger.trigger )
         {
@@ -744,34 +610,27 @@ long arch_do_domctl(
         default:
             ret = -ENOSYS;
         }
-
-    sendtrigger_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_bind_pt_irq:
     {
-        struct domain * d;
         xen_domctl_bind_pt_irq_t * bind;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
         bind = &(domctl->u.bind_pt_irq);
 
         ret = -EINVAL;
         if ( !is_hvm_domain(d) )
-            goto bind_out;
+            break;
 
         ret = xsm_bind_pt_irq(d, bind);
         if ( ret )
-            goto bind_out;
+            break;
 
         ret = -EPERM;
         if ( !IS_PRIV(current->domain) &&
              !irq_access_permitted(current->domain, bind->machine_irq) )
-            goto bind_out;
+            break;
 
         ret = -ESRCH;
         if ( iommu_enabled )
@@ -783,26 +642,19 @@ long arch_do_domctl(
         if ( ret < 0 )
             printk(XENLOG_G_ERR "pt_irq_create_bind failed (%ld) for dom%d\n",
                    ret, d->domain_id);
-
-    bind_out:
-        rcu_unlock_domain(d);
     }
     break;    
 
     case XEN_DOMCTL_unbind_pt_irq:
     {
-        struct domain * d;
         xen_domctl_bind_pt_irq_t * bind;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
         bind = &(domctl->u.bind_pt_irq);
 
         ret = -EPERM;
         if ( !IS_PRIV(current->domain) &&
              !irq_access_permitted(current->domain, bind->machine_irq) )
-            goto unbind_out;
+            break;
 
         if ( iommu_enabled )
         {
@@ -813,15 +665,11 @@ long arch_do_domctl(
         if ( ret < 0 )
             printk(XENLOG_G_ERR "pt_irq_destroy_bind failed (%ld) for dom%d\n",
                    ret, d->domain_id);
-
-    unbind_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_memory_mapping:
     {
-        struct domain *d;
         unsigned long gfn = domctl->u.memory_mapping.first_gfn;
         unsigned long mfn = domctl->u.memory_mapping.first_mfn;
         unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
@@ -839,15 +687,9 @@ long arch_do_domctl(
              !iomem_access_permitted(current->domain, mfn, mfn + nr_mfns - 1) )
             break;
 
-        ret = -ESRCH;
-        if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
-            break;
-
         ret = xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, add);
-        if ( ret ) {
-            rcu_unlock_domain(d);
+        if ( ret )
             break;
-        }
 
         if ( add )
         {
@@ -894,15 +736,12 @@ long arch_do_domctl(
                        ret, add ? "removing" : "denying", d->domain_id,
                        mfn, mfn + nr_mfns - 1);
         }
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_ioport_mapping:
     {
 #define MAX_IOPORTS    0x10000
-        struct domain *d;
         struct hvm_iommu *hd;
         unsigned int fgp = domctl->u.ioport_mapping.first_gport;
         unsigned int fmp = domctl->u.ioport_mapping.first_mport;
@@ -926,15 +765,9 @@ long arch_do_domctl(
              !ioports_access_permitted(current->domain, fmp, fmp + np - 1) )
             break;
 
-        ret = -ESRCH;
-        if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
-            break;
-
         ret = xsm_ioport_permission(d, fmp, fmp + np - 1, add);
-        if ( ret ) {
-            rcu_unlock_domain(d);
+        if ( ret )
             break;
-        }
 
         hd = domain_hvm_iommu(d);
         if ( add )
@@ -990,30 +823,19 @@ long arch_do_domctl(
                        "ioport_map: error %ld denying dom%d access to [%x,%x]\n",
                        ret, d->domain_id, fmp, fmp + np - 1);
         }
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_pin_mem_cacheattr:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_pin_mem_cacheattr(d);
         if ( ret )
-            goto pin_out;
+            break;
 
         ret = hvm_set_mem_pinned_cacheattr(
             d, domctl->u.pin_mem_cacheattr.start,
             domctl->u.pin_mem_cacheattr.end,
             domctl->u.pin_mem_cacheattr.type);
-
-    pin_out:
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -1021,19 +843,13 @@ long arch_do_domctl(
     case XEN_DOMCTL_get_ext_vcpucontext:
     {
         struct xen_domctl_ext_vcpucontext *evc;
-        struct domain *d;
         struct vcpu *v;
 
         evc = &domctl->u.ext_vcpucontext;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_ext_vcpucontext(d, domctl->cmd);
         if ( ret )
-            goto ext_vcpucontext_out;
+            break;
 
         ret = -ESRCH;
         if ( (evc->vcpu >= d->max_vcpus) ||
@@ -1124,7 +940,6 @@ long arch_do_domctl(
         ret = 0;
 
     ext_vcpucontext_out:
-        rcu_unlock_domain(d);
         if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
             copyback = 1;
     }
@@ -1132,16 +947,10 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_cpuid:
     {
-        struct domain *d;
         xen_domctl_cpuid_t *ctl = &domctl->u.cpuid;
         cpuid_input_t *cpuid = NULL; 
         int i;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         for ( i = 0; i < MAX_CPUID_INPUT; i++ )
         {
             cpuid = &d->arch.cpuids[i];
@@ -1164,21 +973,13 @@ long arch_do_domctl(
             memcpy(cpuid, ctl, sizeof(cpuid_input_t));
             ret = 0;
         }
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gettscinfo:
     {
-        struct domain *d;
         xen_guest_tsc_info_t info;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         domain_pause(d);
         tsc_get_info(d, &info.tsc_mode,
                         &info.elapsed_nsec,
@@ -1189,20 +990,11 @@ long arch_do_domctl(
         else
             ret = 0;
         domain_unpause(d);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_settscinfo:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         domain_pause(d);
         tsc_set_info(d, domctl->u.tsc_info.info.tsc_mode,
                      domctl->u.tsc_info.info.elapsed_nsec,
@@ -1210,66 +1002,40 @@ long arch_do_domctl(
                      domctl->u.tsc_info.info.incarnation);
         domain_unpause(d);
 
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_suppress_spurious_page_faults:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            d->arch.suppress_spurious_page_faults = 1;
-            rcu_unlock_domain(d);
-            ret = 0;
-        }
+        d->arch.suppress_spurious_page_faults = 1;
+        ret = 0;
     }
     break;
 
     case XEN_DOMCTL_debug_op:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = -EINVAL;
         if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
              ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
-            goto debug_op_out;
+            break;
 
         ret = -EINVAL;
         if ( !is_hvm_domain(d))
-            goto debug_op_out;
+            break;
 
         ret = hvm_debug_op(v, domctl->u.debug_op.op);
-
-    debug_op_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gdbsx_guestmemio:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         domctl->u.gdbsx_guest_memio.remain =
             domctl->u.gdbsx_guest_memio.len;
 
         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
-
-        rcu_unlock_domain(d);
         if ( !ret )
            copyback = 1;
     }
@@ -1277,71 +1043,42 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_gdbsx_pausevcpu:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         vcpu_pause(v);
         ret = 0;
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gdbsx_unpausevcpu:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
         if ( !atomic_read(&v->pause_count) )
             printk("WARN: Unpausing vcpu:%d which is not paused\n", v->vcpu_id);
         vcpu_unpause(v);
         ret = 0;
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_gdbsx_domstatus:
     {
-        struct domain *d;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         domctl->u.gdbsx_domstatus.vcpu_id = -1;
         domctl->u.gdbsx_domstatus.paused = d->is_paused_by_controller;
         if ( domctl->u.gdbsx_domstatus.paused )
@@ -1358,7 +1095,6 @@ long arch_do_domctl(
                 }
             }
         }
-        rcu_unlock_domain(d);
         ret = 0;
         copyback = 1;
     }
@@ -1368,7 +1104,6 @@ long arch_do_domctl(
     case XEN_DOMCTL_getvcpuextstate:
     {
         struct xen_domctl_vcpuextstate *evc;
-        struct domain *d;
         struct vcpu *v;
         uint32_t offset = 0;
         uint64_t _xfeature_mask = 0;
@@ -1379,12 +1114,6 @@ long arch_do_domctl(
 
         evc = &domctl->u.vcpuextstate;
 
-        ret = -ESRCH;
-
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_vcpuextstate(d, domctl->cmd);
         if ( ret )
             goto vcpuextstate_out;
@@ -1483,7 +1212,6 @@ long arch_do_domctl(
         ret = 0;
 
     vcpuextstate_out:
-        rcu_unlock_domain(d);
         if ( domctl->cmd == XEN_DOMCTL_getvcpuextstate )
             copyback = 1;
     }
@@ -1491,52 +1219,35 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_mem_event_op:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_mem_event(d);
-            if ( !ret )
-                ret = mem_event_domctl(d, &domctl->u.mem_event_op,
-                                       guest_handle_cast(u_domctl, void));
-            rcu_unlock_domain(d);
-            copyback = 1;
-        } 
+        ret = xsm_mem_event(d);
+        if ( !ret )
+            ret = mem_event_domctl(d, &domctl->u.mem_event_op,
+                                   guest_handle_cast(u_domctl, void));
+        copyback = 1;
     }
     break;
 
     case XEN_DOMCTL_mem_sharing_op:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_mem_sharing(d);
-            if ( !ret )
-                ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
-            rcu_unlock_domain(d);
-        } 
+        ret = xsm_mem_sharing(d);
+        if ( !ret )
+            ret = mem_sharing_domctl(d, &domctl->u.mem_sharing_op);
     }
     break;
 
 #if P2M_AUDIT
     case XEN_DOMCTL_audit_p2m:
     {
-        struct domain *d;
-
-        ret = rcu_lock_remote_target_domain_by_id(domctl->domain, &d);
-        if ( ret != 0 )
+        if ( d == current->domain )
+        {
+            ret = -EPERM;
             break;
+        }
 
         audit_p2m(d,
                   &domctl->u.audit_p2m.orphans,
                   &domctl->u.audit_p2m.m2p_bad,
                   &domctl->u.audit_p2m.p2m_bad);
-        rcu_unlock_domain(d);
         copyback = 1;
     }
     break;
@@ -1544,52 +1255,36 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_access_required:
     {
-        struct domain *d;
         struct p2m_domain* p2m;
         
         ret = -EPERM;
-        if ( current->domain->domain_id == domctl->domain )
+        if ( current->domain == d )
             break;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_mem_event(d);
-            if ( !ret ) {
-                p2m = p2m_get_hostp2m(d);
-                p2m->access_required = domctl->u.access_required.access_required;
-            }
-            rcu_unlock_domain(d);
-        } 
+        ret = xsm_mem_event(d);
+        if ( !ret ) {
+            p2m = p2m_get_hostp2m(d);
+            p2m->access_required = domctl->u.access_required.access_required;
+        }
     }
     break;
 
     case XEN_DOMCTL_set_broken_page_p2m:
     {
-        struct domain *d;
+        p2m_type_t pt;
+        unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
+        mfn_t mfn = get_gfn_query(d, pfn, &pt);
 
-        d = rcu_lock_domain_by_id(domctl->domain);
-        if ( d != NULL )
-        {
-            p2m_type_t pt;
-            unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
-            mfn_t mfn = get_gfn_query(d, pfn, &pt);
-
-            if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
-                         (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
-                ret = -EINVAL;
+        if ( unlikely(!mfn_valid(mfn_x(mfn)) || !p2m_is_ram(pt) ||
+                     (p2m_change_type(d, pfn, pt, p2m_ram_broken) != pt)) )
+            ret = -EINVAL;
 
-            put_gfn(d, pfn);
-            rcu_unlock_domain(d);
-        }
-        else
-            ret = -ESRCH;
+        put_gfn(d, pfn);
     }
     break;
 
     default:
-        ret = iommu_do_domctl(domctl, u_domctl);
+        ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
     }
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a491159..ca789bb 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -882,7 +882,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     break;
 
     default:
-        ret = arch_do_domctl(op, u_domctl);
+        ret = arch_do_domctl(op, d, u_domctl);
         break;
     }
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index fb6b5db..1cd0007 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -542,10 +542,9 @@ void iommu_crash_shutdown(void)
 }
 
 int iommu_do_domctl(
-    struct xen_domctl *domctl,
+    struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    struct domain *d;
     u16 seg;
     u8 bus, devfn;
     int ret = 0;
@@ -564,10 +563,6 @@ int iommu_do_domctl(
         if ( ret )
             break;
 
-        ret = -EINVAL;
-        if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
-            break;
-
         seg = domctl->u.get_device_group.machine_sbdf >> 16;
         bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
         devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
@@ -588,7 +583,6 @@ int iommu_do_domctl(
         }
         if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
             ret = -EFAULT;
-        rcu_unlock_domain(d);
     }
     break;
 
@@ -611,20 +605,15 @@ int iommu_do_domctl(
         break;
 
     case XEN_DOMCTL_assign_device:
-        if ( unlikely((d = get_domain_by_id(domctl->domain)) == NULL) ||
-             unlikely(d->is_dying) )
+        if ( unlikely(d->is_dying) )
         {
-            printk(XENLOG_G_ERR
-                   "XEN_DOMCTL_assign_device: get_domain_by_id() failed\n");
             ret = -EINVAL;
-            if ( d )
-                goto assign_device_out;
             break;
         }
 
         ret = xsm_assign_device(d, domctl->u.assign_device.machine_sbdf);
         if ( ret )
-            goto assign_device_out;
+            break;
 
         seg = domctl->u.get_device_group.machine_sbdf >> 16;
         bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
@@ -638,22 +627,12 @@ int iommu_do_domctl(
                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                    d->domain_id, ret);
 
-    assign_device_out:
-        put_domain(d);
         break;
 
     case XEN_DOMCTL_deassign_device:
-        if ( unlikely((d = get_domain_by_id(domctl->domain)) == NULL) )
-        {
-            printk(XENLOG_G_ERR
-                   "XEN_DOMCTL_deassign_device: get_domain_by_id() failed\n");
-            ret = -EINVAL;
-            break;
-        }
-
         ret = xsm_deassign_device(d, domctl->u.assign_device.machine_sbdf);
         if ( ret )
-            goto deassign_device_out;
+            break;
 
         seg = domctl->u.get_device_group.machine_sbdf >> 16;
         bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
@@ -668,8 +647,6 @@ int iommu_do_domctl(
                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                    d->domain_id, ret);
 
-    deassign_device_out:
-        put_domain(d);
         break;
 
     default:
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index e315523..7c3d719 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -37,7 +37,7 @@ do_domctl(
 
 extern long
 arch_do_domctl(
-    struct xen_domctl *domctl,
+    struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 7626216..d477137 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -132,7 +132,8 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, struct domain *d,
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 20:40:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 20:40:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tit5k-0000gc-Qc; Wed, 12 Dec 2012 20:39:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tit5j-0000gE-Qf
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 20:39:44 +0000
Received: from [85.158.143.99:51276] by server-3.bemta-4.messagelabs.com id
	32/AA-18211-F8BE8C05; Wed, 12 Dec 2012 20:39:43 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-216.messagelabs.com!1355344781!23923979!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 618 invoked from network); 12 Dec 2012 20:39:41 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-4.tower-216.messagelabs.com with SMTP;
	12 Dec 2012 20:39:41 -0000
X-TM-IMSS-Message-ID: <7272a0ba0009a520@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 7272a0ba0009a520 ;
	Wed, 12 Dec 2012 15:39:06 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBCKddBG030264; 
	Wed, 12 Dec 2012 15:39:39 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Wed, 12 Dec 2012 15:39:11 -0500
Message-Id: <1355344752-30257-2-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355344752-30257-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: dgdegra@tycho.nsa.gov, keir@xen.org
Subject: [Xen-devel] [PATCH 1/2] xen: lock target domain in do_domctl common
	code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because almost all domctls need to lock the target domain, do this by
default instead of repeating it in each domctl. This is not currently
extended to the arch-specific domctls, but RCU locks are safe to take
recursively, so the result is merely duplicated, yet still correct,
locking.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>
---
 xen/common/domctl.c | 268 ++++++++++++----------------------------------------
 1 file changed, 59 insertions(+), 209 deletions(-)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 99eea48..a491159 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -244,6 +244,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     long ret = 0;
     bool_t copyback = 0;
     struct xen_domctl curop, *op = &curop;
+    struct domain *d;
 
     if ( copy_from_guest(op, u_domctl, 1) )
         return -EFAULT;
@@ -253,19 +254,29 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     switch ( op->cmd )
     {
+    case XEN_DOMCTL_createdomain:
+    case XEN_DOMCTL_getdomaininfo:
+    case XEN_DOMCTL_test_assign_device:
+        d = NULL;
+        break;
+    default:
+        d = rcu_lock_domain_by_id(op->domain);
+        if ( d == NULL )
+            return -ESRCH;
+    }
+
+    switch ( op->cmd )
+    {
     case XEN_DOMCTL_ioport_mapping:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_bind_pt_irq:
     case XEN_DOMCTL_unbind_pt_irq: {
-        struct domain *d;
-        bool_t is_priv = IS_PRIV(current->domain);
-        if ( !is_priv && ((d = rcu_lock_domain_by_id(op->domain)) != NULL) )
+        bool_t is_priv = IS_PRIV_FOR(current->domain, d);
+        if ( !is_priv )
         {
-            is_priv = IS_PRIV_FOR(current->domain, d);
-            rcu_unlock_domain(d);
+            ret = -EPERM;
+            goto domctl_out_unlock_domonly;
         }
-        if ( !is_priv )
-            return -EPERM;
         break;
     }
 #ifdef XSM_ENABLE
@@ -279,15 +290,18 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
 
     if ( !domctl_lock_acquire() )
+    {
+        if ( d )
+            rcu_unlock_domain(d);
         return hypercall_create_continuation(
             __HYPERVISOR_domctl, "h", u_domctl);
+    }
 
     switch ( op->cmd )
     {
 
     case XEN_DOMCTL_setvcpucontext:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
         vcpu_guest_context_u c = { .nat = NULL };
         unsigned int vcpu = op->u.vcpucontext.vcpu;
         struct vcpu *v;
@@ -341,77 +355,48 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     svc_out:
         free_vcpu_guest_context(c.nat);
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_pausedomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-        ret = -ESRCH;
-        if ( d != NULL )
-        {
-            ret = xsm_pausedomain(d);
-            if ( ret )
-                goto pausedomain_out;
+        ret = xsm_pausedomain(d);
+        if ( ret )
+            break;
 
-            ret = -EINVAL;
-            if ( d != current->domain )
-            {
-                domain_pause_by_systemcontroller(d);
-                ret = 0;
-            }
-        pausedomain_out:
-            rcu_unlock_domain(d);
+        ret = -EINVAL;
+        if ( d != current->domain )
+        {
+            domain_pause_by_systemcontroller(d);
+            ret = 0;
         }
     }
     break;
 
     case XEN_DOMCTL_unpausedomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = xsm_unpausedomain(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_unpause_by_systemcontroller(d);
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_resumedomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = xsm_resumedomain(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_resume(d);
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_createdomain:
     {
-        struct domain *d;
         domid_t        dom;
         static domid_t rover = 0;
         unsigned int domcr_flags;
@@ -461,6 +446,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( IS_ERR(d) )
         {
             ret = PTR_ERR(d);
+            d = NULL;
             break;
         }
 
@@ -471,39 +457,28 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         op->domain = d->domain_id;
         copyback = 1;
+        d = NULL;
     }
     break;
 
     case XEN_DOMCTL_max_vcpus:
     {
-        struct domain *d;
         unsigned int i, max = op->u.max_vcpus.max, cpu;
         cpumask_t *online;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
              (max > MAX_VIRT_CPUS) ||
              (is_hvm_domain(d) && (max > MAX_HVM_VCPUS)) )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         ret = xsm_max_vcpus(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         /* Until Xenoprof can dynamically grow its vcpu-s array... */
         if ( d->xenoprof )
         {
-            rcu_unlock_domain(d);
             ret = -EAGAIN;
             break;
         }
@@ -578,44 +553,31 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     maxvcpu_out_novcpulock:
         domain_unpause(d);
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_destroydomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-        ret = -ESRCH;
-        if ( d != NULL )
-        {
-            ret = xsm_destroydomain(d) ? : domain_kill(d);
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_destroydomain(d) ? : domain_kill(d);
     }
     break;
 
     case XEN_DOMCTL_setvcpuaffinity:
     case XEN_DOMCTL_getvcpuaffinity:
     {
-        domid_t dom = op->domain;
-        struct domain *d = rcu_lock_domain_by_id(dom);
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = xsm_vcpuaffinity(op->cmd, d);
         if ( ret )
-            goto vcpuaffinity_out;
+            break;
 
         ret = -EINVAL;
         if ( op->u.vcpuaffinity.vcpu >= d->max_vcpus )
-            goto vcpuaffinity_out;
+            break;
 
         ret = -ESRCH;
         if ( (v = d->vcpu[op->u.vcpuaffinity.vcpu]) == NULL )
-            goto vcpuaffinity_out;
+            break;
 
         if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
         {
@@ -634,35 +596,22 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             ret = cpumask_to_xenctl_cpumap(
                 &op->u.vcpuaffinity.cpumap, v->cpu_affinity);
         }
-
-    vcpuaffinity_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_scheduler_op:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = xsm_scheduler(d);
         if ( ret )
-            goto scheduler_op_out;
+            break;
 
         ret = sched_adjust(d, &op->u.scheduler_op);
         copyback = 1;
-
-    scheduler_op_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_getdomaininfo:
     { 
-        struct domain *d;
         domid_t dom = op->domain;
 
         rcu_read_lock(&domlist_read_lock);
@@ -689,19 +638,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     getdomaininfo_out:
         rcu_read_unlock(&domlist_read_lock);
+        d = NULL;
     }
     break;
 
     case XEN_DOMCTL_getvcpucontext:
     { 
         vcpu_guest_context_u c = { .nat = NULL };
-        struct domain       *d;
         struct vcpu         *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = xsm_getvcpucontext(d);
         if ( ret )
             goto getvcpucontext_out;
@@ -751,31 +696,25 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     getvcpucontext_out:
         xfree(c.nat);
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_getvcpuinfo:
     { 
-        struct domain *d;
         struct vcpu   *v;
         struct vcpu_runstate_info runstate;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = xsm_getvcpuinfo(d);
         if ( ret )
-            goto getvcpuinfo_out;
+            break;
 
         ret = -EINVAL;
         if ( op->u.getvcpuinfo.vcpu >= d->max_vcpus )
-            goto getvcpuinfo_out;
+            break;
 
         ret = -ESRCH;
         if ( (v = d->vcpu[op->u.getvcpuinfo.vcpu]) == NULL )
-            goto getvcpuinfo_out;
+            break;
 
         vcpu_runstate_get(v, &runstate);
 
@@ -786,25 +725,16 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         op->u.getvcpuinfo.cpu      = v->processor;
         ret = 0;
         copyback = 1;
-
-    getvcpuinfo_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_max_mem:
     {
-        struct domain *d;
         unsigned long new_max;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_setdomainmaxmem(d);
         if ( ret )
-            goto max_mem_out;
+            break;
 
         ret = -EINVAL;
         new_max = op->u.max_mem.max_memkb >> (PAGE_SHIFT-10);
@@ -818,77 +748,43 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         d->max_pages = new_max;
         ret = 0;
         spin_unlock(&d->page_alloc_lock);
-
-    max_mem_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_setdomainhandle:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_setdomainhandle(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         memcpy(d->handle, op->u.setdomainhandle.handle,
                sizeof(xen_domain_handle_t));
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_setdebugging:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = -EINVAL;
         if ( d == current->domain ) /* no domain_pause() */
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         ret = xsm_setdebugging(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_pause(d);
         d->debugger_attached = !!op->u.setdebugging.enable;
         domain_unpause(d); /* causes guest to latch new status */
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_irq_permission:
     {
-        struct domain *d;
         unsigned int pirq = op->u.irq_permission.pirq;
         int allow = op->u.irq_permission.allow_access;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         if ( pirq >= d->nr_pirqs )
             ret = -EINVAL;
         else if ( xsm_irq_permission(d, pirq, allow) )
@@ -897,14 +793,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             ret = irq_permit_access(d, pirq);
         else
             ret = irq_deny_access(d, pirq);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_iomem_permission:
     {
-        struct domain *d;
         unsigned long mfn = op->u.iomem_permission.first_mfn;
         unsigned long nr_mfns = op->u.iomem_permission.nr_mfns;
         int allow = op->u.iomem_permission.allow_access;
@@ -913,125 +806,78 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( (mfn + nr_mfns - 1) < mfn ) /* wrap? */
             break;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         if ( xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
         else
             ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_settimeoffset:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_domain_settime(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_set_time_offset(d, op->u.settimeoffset.time_offset_seconds);
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_set_target:
     {
-        struct domain *d, *e;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
+        struct domain *e;
 
         ret = -ESRCH;
         e = get_domain_by_id(op->u.set_target.target);
         if ( e == NULL )
-            goto set_target_out;
+            break;
 
         ret = -EINVAL;
         if ( (d == e) || (d->target != NULL) )
         {
             put_domain(e);
-            goto set_target_out;
+            break;
         }
 
         ret = xsm_set_target(d, e);
         if ( ret ) {
             put_domain(e);
-            goto set_target_out;            
+            break;
         }
 
         /* Hold reference on @e until we destroy @d. */
         d->target = e;
 
         ret = 0;
-
-    set_target_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_subscribe:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_domctl(d, op->cmd);
-            if ( !ret )
-                d->suspend_evtchn = op->u.subscribe.port;
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_domctl(d, op->cmd);
+        if ( !ret )
+            d->suspend_evtchn = op->u.subscribe.port;
     }
     break;
 
     case XEN_DOMCTL_disable_migrate:
     {
-        struct domain *d;
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) != NULL )
-        {
-            ret = xsm_domctl(d, op->cmd);
-            if ( !ret )
-                d->disable_migrate = op->u.disable_migrate.disable;
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_domctl(d, op->cmd);
+        if ( !ret )
+            d->disable_migrate = op->u.disable_migrate.disable;
     }
     break;
 
     case XEN_DOMCTL_set_virq_handler:
     {
-        struct domain *d;
         uint32_t virq = op->u.set_virq_handler.virq;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_set_virq_handler(d, virq);
-            if ( !ret )
-                ret = set_global_virq_handler(d, virq);
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_set_virq_handler(d, virq);
+        if ( !ret )
+            ret = set_global_virq_handler(d, virq);
     }
     break;
 
@@ -1042,6 +888,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     domctl_lock_release();
 
+ domctl_out_unlock_domonly:
+    if ( d )
+        rcu_unlock_domain(d);
+
     if ( copyback && __copy_to_guest(u_domctl, op, 1) )
         ret = -EFAULT;
 
-- 
1.7.11.7



 
     svc_out:
         free_vcpu_guest_context(c.nat);
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_pausedomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-        ret = -ESRCH;
-        if ( d != NULL )
-        {
-            ret = xsm_pausedomain(d);
-            if ( ret )
-                goto pausedomain_out;
+        ret = xsm_pausedomain(d);
+        if ( ret )
+            break;
 
-            ret = -EINVAL;
-            if ( d != current->domain )
-            {
-                domain_pause_by_systemcontroller(d);
-                ret = 0;
-            }
-        pausedomain_out:
-            rcu_unlock_domain(d);
+        ret = -EINVAL;
+        if ( d != current->domain )
+        {
+            domain_pause_by_systemcontroller(d);
+            ret = 0;
         }
     }
     break;
 
     case XEN_DOMCTL_unpausedomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = xsm_unpausedomain(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_unpause_by_systemcontroller(d);
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_resumedomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = xsm_resumedomain(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_resume(d);
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_createdomain:
     {
-        struct domain *d;
         domid_t        dom;
         static domid_t rover = 0;
         unsigned int domcr_flags;
@@ -461,6 +446,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( IS_ERR(d) )
         {
             ret = PTR_ERR(d);
+            d = NULL;
             break;
         }
 
@@ -471,39 +457,28 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         op->domain = d->domain_id;
         copyback = 1;
+        d = NULL;
     }
     break;
 
     case XEN_DOMCTL_max_vcpus:
     {
-        struct domain *d;
         unsigned int i, max = op->u.max_vcpus.max, cpu;
         cpumask_t *online;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
              (max > MAX_VIRT_CPUS) ||
              (is_hvm_domain(d) && (max > MAX_HVM_VCPUS)) )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         ret = xsm_max_vcpus(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         /* Until Xenoprof can dynamically grow its vcpu-s array... */
         if ( d->xenoprof )
         {
-            rcu_unlock_domain(d);
             ret = -EAGAIN;
             break;
         }
@@ -578,44 +553,31 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     maxvcpu_out_novcpulock:
         domain_unpause(d);
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_destroydomain:
     {
-        struct domain *d = rcu_lock_domain_by_id(op->domain);
-        ret = -ESRCH;
-        if ( d != NULL )
-        {
-            ret = xsm_destroydomain(d) ? : domain_kill(d);
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_destroydomain(d) ? : domain_kill(d);
     }
     break;
 
     case XEN_DOMCTL_setvcpuaffinity:
     case XEN_DOMCTL_getvcpuaffinity:
     {
-        domid_t dom = op->domain;
-        struct domain *d = rcu_lock_domain_by_id(dom);
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = xsm_vcpuaffinity(op->cmd, d);
         if ( ret )
-            goto vcpuaffinity_out;
+            break;
 
         ret = -EINVAL;
         if ( op->u.vcpuaffinity.vcpu >= d->max_vcpus )
-            goto vcpuaffinity_out;
+            break;
 
         ret = -ESRCH;
         if ( (v = d->vcpu[op->u.vcpuaffinity.vcpu]) == NULL )
-            goto vcpuaffinity_out;
+            break;
 
         if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
         {
@@ -634,35 +596,22 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             ret = cpumask_to_xenctl_cpumap(
                 &op->u.vcpuaffinity.cpumap, v->cpu_affinity);
         }
-
-    vcpuaffinity_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_scheduler_op:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = xsm_scheduler(d);
         if ( ret )
-            goto scheduler_op_out;
+            break;
 
         ret = sched_adjust(d, &op->u.scheduler_op);
         copyback = 1;
-
-    scheduler_op_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_getdomaininfo:
     { 
-        struct domain *d;
         domid_t dom = op->domain;
 
         rcu_read_lock(&domlist_read_lock);
@@ -689,19 +638,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     getdomaininfo_out:
         rcu_read_unlock(&domlist_read_lock);
+        d = NULL;
     }
     break;
 
     case XEN_DOMCTL_getvcpucontext:
     { 
         vcpu_guest_context_u c = { .nat = NULL };
-        struct domain       *d;
         struct vcpu         *v;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = xsm_getvcpucontext(d);
         if ( ret )
             goto getvcpucontext_out;
@@ -751,31 +696,25 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     getvcpucontext_out:
         xfree(c.nat);
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_getvcpuinfo:
     { 
-        struct domain *d;
         struct vcpu   *v;
         struct vcpu_runstate_info runstate;
 
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) == NULL )
-            break;
-
         ret = xsm_getvcpuinfo(d);
         if ( ret )
-            goto getvcpuinfo_out;
+            break;
 
         ret = -EINVAL;
         if ( op->u.getvcpuinfo.vcpu >= d->max_vcpus )
-            goto getvcpuinfo_out;
+            break;
 
         ret = -ESRCH;
         if ( (v = d->vcpu[op->u.getvcpuinfo.vcpu]) == NULL )
-            goto getvcpuinfo_out;
+            break;
 
         vcpu_runstate_get(v, &runstate);
 
@@ -786,25 +725,16 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         op->u.getvcpuinfo.cpu      = v->processor;
         ret = 0;
         copyback = 1;
-
-    getvcpuinfo_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_max_mem:
     {
-        struct domain *d;
         unsigned long new_max;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_setdomainmaxmem(d);
         if ( ret )
-            goto max_mem_out;
+            break;
 
         ret = -EINVAL;
         new_max = op->u.max_mem.max_memkb >> (PAGE_SHIFT-10);
@@ -818,77 +748,43 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         d->max_pages = new_max;
         ret = 0;
         spin_unlock(&d->page_alloc_lock);
-
-    max_mem_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_setdomainhandle:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_setdomainhandle(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         memcpy(d->handle, op->u.setdomainhandle.handle,
                sizeof(xen_domain_handle_t));
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_setdebugging:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = -EINVAL;
         if ( d == current->domain ) /* no domain_pause() */
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         ret = xsm_setdebugging(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_pause(d);
         d->debugger_attached = !!op->u.setdebugging.enable;
         domain_unpause(d); /* causes guest to latch new status */
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_irq_permission:
     {
-        struct domain *d;
         unsigned int pirq = op->u.irq_permission.pirq;
         int allow = op->u.irq_permission.allow_access;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         if ( pirq >= d->nr_pirqs )
             ret = -EINVAL;
         else if ( xsm_irq_permission(d, pirq, allow) )
@@ -897,14 +793,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             ret = irq_permit_access(d, pirq);
         else
             ret = irq_deny_access(d, pirq);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_iomem_permission:
     {
-        struct domain *d;
         unsigned long mfn = op->u.iomem_permission.first_mfn;
         unsigned long nr_mfns = op->u.iomem_permission.nr_mfns;
         int allow = op->u.iomem_permission.allow_access;
@@ -913,125 +806,78 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( (mfn + nr_mfns - 1) < mfn ) /* wrap? */
             break;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         if ( xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
         else
             ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
-
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_settimeoffset:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
-
         ret = xsm_domain_settime(d);
         if ( ret )
-        {
-            rcu_unlock_domain(d);
             break;
-        }
 
         domain_set_time_offset(d, op->u.settimeoffset.time_offset_seconds);
-        rcu_unlock_domain(d);
         ret = 0;
     }
     break;
 
     case XEN_DOMCTL_set_target:
     {
-        struct domain *d, *e;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d == NULL )
-            break;
+        struct domain *e;
 
         ret = -ESRCH;
         e = get_domain_by_id(op->u.set_target.target);
         if ( e == NULL )
-            goto set_target_out;
+            break;
 
         ret = -EINVAL;
         if ( (d == e) || (d->target != NULL) )
         {
             put_domain(e);
-            goto set_target_out;
+            break;
         }
 
         ret = xsm_set_target(d, e);
         if ( ret ) {
             put_domain(e);
-            goto set_target_out;            
+            break;
         }
 
         /* Hold reference on @e until we destroy @d. */
         d->target = e;
 
         ret = 0;
-
-    set_target_out:
-        rcu_unlock_domain(d);
     }
     break;
 
     case XEN_DOMCTL_subscribe:
     {
-        struct domain *d;
-
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_domctl(d, op->cmd);
-            if ( !ret )
-                d->suspend_evtchn = op->u.subscribe.port;
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_domctl(d, op->cmd);
+        if ( !ret )
+            d->suspend_evtchn = op->u.subscribe.port;
     }
     break;
 
     case XEN_DOMCTL_disable_migrate:
     {
-        struct domain *d;
-        ret = -ESRCH;
-        if ( (d = rcu_lock_domain_by_id(op->domain)) != NULL )
-        {
-            ret = xsm_domctl(d, op->cmd);
-            if ( !ret )
-                d->disable_migrate = op->u.disable_migrate.disable;
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_domctl(d, op->cmd);
+        if ( !ret )
+            d->disable_migrate = op->u.disable_migrate.disable;
     }
     break;
 
     case XEN_DOMCTL_set_virq_handler:
     {
-        struct domain *d;
         uint32_t virq = op->u.set_virq_handler.virq;
 
-        ret = -ESRCH;
-        d = rcu_lock_domain_by_id(op->domain);
-        if ( d != NULL )
-        {
-            ret = xsm_set_virq_handler(d, virq);
-            if ( !ret )
-                ret = set_global_virq_handler(d, virq);
-            rcu_unlock_domain(d);
-        }
+        ret = xsm_set_virq_handler(d, virq);
+        if ( !ret )
+            ret = set_global_virq_handler(d, virq);
     }
     break;
 
@@ -1042,6 +888,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     domctl_lock_release();
 
+ domctl_out_unlock_domonly:
+    if ( d )
+        rcu_unlock_domain(d);
+
     if ( copyback && __copy_to_guest(u_domctl, op, 1) )
         ret = -EFAULT;
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 21:02:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 21:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TitRl-0001ZG-Kf; Wed, 12 Dec 2012 21:02:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <akpm@linux-foundation.org>) id 1TitRj-0001Yx-MI
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 21:02:27 +0000
Received: from [85.158.143.35:9931] by server-1.bemta-4.messagelabs.com id
	CA/D2-28401-3E0F8C05; Wed, 12 Dec 2012 21:02:27 +0000
X-Env-Sender: akpm@linux-foundation.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355346115!14547080!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15949 invoked from network); 12 Dec 2012 21:01:55 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-16.tower-21.messagelabs.com with SMTP;
	12 Dec 2012 21:01:55 -0000
Received: from akpm.mtv.corp.google.com (unknown [216.239.45.90])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 72DE6828;
	Wed, 12 Dec 2012 21:01:53 +0000 (UTC)
Date: Wed, 12 Dec 2012 13:01:52 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-Id: <20121212130152.daf9cf2d.akpm@linux-foundation.org>
In-Reply-To: <50C61653.70107@citrix.com>
References: <1354630913-17287-1-git-send-email-roger.pau@citrix.com>
	<1354630913-17287-2-git-send-email-roger.pau@citrix.com>
	<20121207202003.GA9462@phenom.dumpdata.com>
	<50C5D276.6090009@citrix.com>
	<20121210151533.GE6955@localhost.localdomain>
	<50C61653.70107@citrix.com>
X-Mailer: Sylpheed 3.0.2 (GTK+ 2.20.1; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: "sfr@canb.auug.org.au" <sfr@canb.auug.org.au>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>
Subject: Re: [Xen-devel] [PATCH 2/2] xen-blkfront: implement safe version of
 llist_for_each_entry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Dec 2012 18:05:23 +0100
Roger Pau Monné <roger.pau@citrix.com> wrote:

> On 10/12/12 16:15, Konrad Rzeszutek Wilk wrote:
> > On Mon, Dec 10, 2012 at 01:15:50PM +0100, Roger Pau Monné wrote:
> >> On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
> >>> On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
> >>>> Implement a safe version of llist_for_each_entry, and use it in
> >>>> blkif_free. Previously grants were freed while iterating the list,
> >>>> which led to dereferences when trying to fetch the next item.
> >>>
> >>> Looks like xen-blkfront is the only user of this llist_for_each_entry.
> >>>
> >>> Would it be more prudent to put the macro in the llist.h file?
> >>
> >> I'm not able to find out who is the maintainer of llist, should I just
> >> CC its author?
> >

> > Sure. I CC-ed akpm here to solicit his input as well. Either way I am
> > OK with this being in xen-blkfront but it just seems that it could
> > be useful in the llist file since that is where the non-safe version
> > resides.
>

> I'm going to resend this series with llist_for_each_entry_safe added to
> llist.h in a separate patch. I wouldn't like to delay this series
> because we cannot get llist_for_each_entry_safe added to llist.h, that's
> why I've added it to blkfront in the first place.

Please include that llist.h patch within the xen merge.  It's just a single
hunk and won't cause any disruption.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 22:10:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 22:10:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiuUs-0002KJ-UH; Wed, 12 Dec 2012 22:09:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TiuUs-0002KE-1v
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 22:09:46 +0000
Received: from [85.158.139.211:44659] by server-10.bemta-5.messagelabs.com id
	89/B9-13383-9A009C05; Wed, 12 Dec 2012 22:09:45 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355350181!16003236!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODkwNDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1362 invoked from network); 12 Dec 2012 22:09:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 22:09:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,269,1355097600"; 
   d="scan'208";a="509703"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	12 Dec 2012 22:09:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 12 Dec 2012 17:09:40 -0500
Received: from [10.80.3.146] (helo=localmatsp-T3500.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<mats.petersson@citrix.com>)	id 1TiuUm-0002Vo-EP;
	Wed, 12 Dec 2012 22:09:40 +0000
From: Mats Petersson <mats.petersson@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 12 Dec 2012 22:09:38 +0000
Message-ID: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: Ian.Campbell@citrix.com, konrad.wilk@oracle.com,
	Mats Petersson <mats.petersson@citrix.com>,
	linux-kernel@vger.kernel.org, JBeulich@suse.com,
	david.vrabel@citrix.com, mats@planetcatfish.com
Subject: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

One comment asked for more details on the improvements. Using a small
test program that maps guest memory into Dom0 (repeatedly, for
"Iterations" passes, mapping the same first "Num Pages"):

Iterations    Num Pages    Time 3.7rc4    Time with this patch
5000          4096         76.107         37.027
10000         2048         75.703         37.177
20000         1024         75.893         37.247

So this is a little better than twice as fast.

Using this patch for migration, with "time" measuring the overall
time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
memory, one network card, one disk, guest at login prompt):

Time 3.7rc5    Time with this patch
6.697          5.667
Since migration involves a whole lot of other things, it's only about
15% faster - but still a good improvement. Similar measurement with a
guest that is running code to "dirty" memory shows about 23%
improvement, as it spends more time copying dirtied memory.

As discussed elsewhere, a good deal more could be gained by improving
the munmap system call, but it is a little tricky to do that without
penalizing non-PVOPS kernels, so I will have another look at this.
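The behavioural core of the change, where rmd->mfn is either a single start
MFN incremented in place (contiguous) or an array cursor that is advanced
(discontiguous), can be sketched in isolation (simplified model following the
names in the diff below; not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

struct remap_data {
	unsigned long *mfn;   /* start MFN (contiguous) or MFN array cursor */
	bool contiguous;
};

/* Return the MFN for the current page and advance, mirroring the two
 * modes in remap_area_mfn_pte_fn(). */
static unsigned long next_mfn(struct remap_data *rmd)
{
	unsigned long cur = *rmd->mfn;

	if (rmd->contiguous)
		(*rmd->mfn)++;   /* one start value, bump it in place */
	else
		rmd->mfn++;      /* discontiguous: step through the array */
	return cur;
}
```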

---
Update since last posting: 
I have just run some benchmarks of a 16GB guest, and the improvement
with this patch is around 23-30% for the overall copy time, and 42%
shorter downtime on a "busy" guest (writing in a loop to about 8GB of
memory). I think this is definitely worth having.

The V4 patch consists of cosmetic adjustments: a spelling mistake in a
comment fixed, a condition in an if-statement reversed to avoid an
"else" branch, and a stray empty line (added by accident) reverted to
its previous state. Thanks to Jan Beulich for the comments.

The V3 of the patch contains suggested improvements from:
David Vrabel - make it two distinct external functions, doc-comments.
Ian Campbell - use one common function for the main work.
Jan Beulich  - found a bug and pointed out some whitespace problems.



Signed-off-by: Mats Petersson <mats.petersson@citrix.com>

---
 arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
 drivers/xen/privcmd.c |   55 +++++++++++++++++----
 include/xen/xen-ops.h |    5 ++
 3 files changed, 169 insertions(+), 23 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..a67774f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	unsigned long *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contiguous range, just update the mfn itself,
+	   else update the pointer to be the "next mfn". */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       unsigned long mfn, int nr,
-			       pgprot_t prot, unsigned domid)
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to an array of error codes, or NULL (see below)
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that a NULL err_ptr means *mfn is the first mfn of a contiguous
+ * range; otherwise it points to a list of nr separate MFNs. */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			unsigned long *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr pages from it into this kernel's
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid)
+{
+	/* We BUG_ON because it is a programmer error to pass a NULL err_ptr,
+	 * and without it the eventual symptom - "wrong memory was mapped in" -
+	 * would be very hard to trace back to its actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..75f6e86 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but use each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -250,7 +284,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -260,17 +294,20 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user_mfn;
 };
 
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 	int ret;
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain);
+	BUG_ON(nr < 0);
 
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	ret = xen_remap_domain_mfn_array(st->vma,
+					 st->va & PAGE_MASK,
+					 mfnp, nr,
+					 st->err,
+					 st->vma->vm_page_prot,
+					 st->domain);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..22cad75 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid);
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 22:10:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 22:10:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiuUs-0002KJ-UH; Wed, 12 Dec 2012 22:09:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TiuUs-0002KE-1v
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 22:09:46 +0000
Received: from [85.158.139.211:44659] by server-10.bemta-5.messagelabs.com id
	89/B9-13383-9A009C05; Wed, 12 Dec 2012 22:09:45 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355350181!16003236!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODkwNDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1362 invoked from network); 12 Dec 2012 22:09:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 22:09:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,269,1355097600"; 
   d="scan'208";a="509703"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	12 Dec 2012 22:09:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 12 Dec 2012 17:09:40 -0500
Received: from [10.80.3.146] (helo=localmatsp-T3500.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<mats.petersson@citrix.com>)	id 1TiuUm-0002Vo-EP;
	Wed, 12 Dec 2012 22:09:40 +0000
From: Mats Petersson <mats.petersson@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 12 Dec 2012 22:09:38 +0000
Message-ID: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: Ian.Campbell@citrix.com, konrad.wilk@oracle.com,
	Mats Petersson <mats.petersson@citrix.com>,
	linux-kernel@vger.kernel.org, JBeulich@suse.com,
	david.vrabel@citrix.com, mats@planetcatfish.com
Subject: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

One comment asked for more details on the improvements:
Using a small test program to map guest memory into Dom0 (repeatedly,
for "Iterations" passes, mapping the same first "Num Pages"):

Iterations    Num Pages    Time 3.7rc4    Time with this patch
5000          4096         76.107         37.027
10000         2048         75.703         37.177
20000         1024         75.893         37.247

So, a little better than twice as fast.

Using this patch in migration, with "time" measuring the overall
time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB of guest
memory, one network card, one disk, guest at the login prompt):
Time 3.7rc5		Time With this patch
6.697			5.667
Since migration involves a whole lot of other work, it is only about
15% faster - but still a good improvement. A similar measurement with
a guest running code to "dirty" its memory shows about a 23%
improvement, as migration then spends more time copying dirtied memory.

As discussed elsewhere, a good deal more can be gained by improving the
munmap system call, but it is a little tricky to get that in without
penalizing non-PVOPS kernels, so I will have another look at it.
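
The core of the patch is the retry loop in do_remap_mfn(): it keeps
issuing the hypercall, records the error for each failing entry, skips
past it, and continues until the whole batch has been attempted. As a
rough standalone illustration of that control flow (not the kernel
code itself - fake_mmu_update is a hypothetical stand-in for
HYPERVISOR_mmu_update, using ints instead of mmu_update structs):

```c
#include <assert.h>

/* Hypothetical stand-in for HYPERVISOR_mmu_update: "maps" entries until
 * it hits a negative one, reports the number that succeeded via *done,
 * and returns that entry's (negative) value as the error code. */
static int fake_mmu_update(const int *entries, int count, int *done)
{
	int i;

	for (i = 0; i < count; i++) {
		if (entries[i] < 0) {
			*done = i;	/* entries before this one succeeded */
			return entries[i];
		}
	}
	*done = count;
	return 0;
}

/* Same shape as the batch loop in do_remap_mfn(): record the error for
 * each failing entry, skip past it, and keep going until the whole
 * batch has been attempted.  Returns the last error seen, or 0. */
static int map_batch(const int *entries, int batch, int *err_ptr)
{
	int index = 0, batch_left = batch, last_err = 0;

	do {
		int done = 0;
		int err = fake_mmu_update(&entries[index], batch_left, &done);

		if (err < 0) {
			last_err = err_ptr[index + done] = err;
			done++;		/* skip the failing entry */
		}
		batch_left -= done;
		index += done;
	} while (batch_left);

	return last_err;
}
```

Given entries {1, 2, -5, 3}, map_batch() returns -5 and leaves -5 in
err_ptr[2], while the other three entries are processed normally - one
bad page no longer aborts the rest of the batch.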

---
Update since last posting: 
I have just run some benchmarks of a 16GB guest, and the improvement
with this patch is around 23-30% for the overall copy time, and 42%
shorter downtime on a "busy" guest (writing in a loop to about 8GB of
memory). I think this is definitely worth having.

The V4 patch consists of cosmetic adjustments (a spelling mistake in a
comment, a condition in an if-statement reversed to avoid an "else"
branch, and a stray empty line added by accident now reverted to its
previous state). Thanks to Jan Beulich for the comments.

The V3 of the patch contains suggested improvements from:
David Vrabel - make it two distinct external functions, doc-comments.
Ian Campbell - use one common function for the main work.
Jan Beulich  - found a bug and pointed out some whitespace problems.



Signed-off-by: Mats Petersson <mats.petersson@citrix.com>

---
 arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
 drivers/xen/privcmd.c |   55 +++++++++++++++++----
 include/xen/xen-ops.h |    5 ++
 3 files changed, 169 insertions(+), 23 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..a67774f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	unsigned long *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contiguous range, just update the mfn itself,
+	   else update the pointer to point at the next mfn. */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       unsigned long mfn, int nr,
-			       pgprot_t prot, unsigned domid)
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to an array of error codes, or NULL (see below)
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that a NULL err_ptr means *mfn is the first mfn of a contiguous
+ * range; otherwise it points to a list of nr separate MFNs. */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			unsigned long *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr pages from it into this kernel's
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid)
+{
+	/* We BUG_ON because it is a programmer error to pass a NULL err_ptr,
+	 * and without it the eventual symptom - "wrong memory was mapped in" -
+	 * would be very hard to trace back to its actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..75f6e86 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but use each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -250,7 +284,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -260,17 +294,20 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user_mfn;
 };
 
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 	int ret;
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain);
+	BUG_ON(nr < 0);
 
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	ret = xen_remap_domain_mfn_array(st->vma,
+					 st->va & PAGE_MASK,
+					 mfnp, nr,
+					 st->err,
+					 st->vma->vm_page_prot,
+					 st->domain);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..22cad75 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid);
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Dec 12 22:23:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 22:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiuiA-0002du-M4; Wed, 12 Dec 2012 22:23:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tiui9-0002dp-PY
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 22:23:30 +0000
Received: from [85.158.137.99:61978] by server-9.bemta-3.messagelabs.com id
	3F/70-11948-1E309C05; Wed, 12 Dec 2012 22:23:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1355351008!18267161!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMzY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17506 invoked from network); 12 Dec 2012 22:23:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 22:23:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,269,1355097600"; 
   d="scan'208";a="100804"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 22:23:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 22:23:27 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tiui7-0000TL-KE;
	Wed, 12 Dec 2012 22:23:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tiui7-0003ee-CS;
	Wed, 12 Dec 2012 22:23:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14675-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 22:23:27 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14675: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14675 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14675/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14662
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14662

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  a866cc5b8235
baseline version:
 xen                  309ff3ad9dcc

------------------------------------------------------------
People who touched revisions under test:
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=a866cc5b8235
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing a866cc5b8235
+ branch=xen-4.1-testing
+ revision=a866cc5b8235
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r a866cc5b8235 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 3 files
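The push trace above shows osstest's repo-locking pattern: `cri-lock-repos` opens a lock file, takes an exclusive lock with `with-lock-ex -w` (wait for the lock), and then exec's `ap-push` so the whole push runs under the lock. A minimal sketch of the same take-lock-then-run pattern in Python, using `fcntl.flock` (the POSIX primitive `with-lock-ex` is built on); the paths and the `with_lock_ex` helper name here are illustrative, not osstest code:

```python
import fcntl
import os
import tempfile

def with_lock_ex(lock_path, func):
    # Open (creating if needed) the lock file and take an exclusive
    # flock; like "with-lock-ex -w", this blocks until the lock is free.
    fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o664)
    fcntl.flock(fd, fcntl.LOCK_EX)
    try:
        # osstest exec's ap-push here so the lock is held for the
        # lifetime of the push; this sketch just calls a function.
        return func()
    finally:
        os.close(fd)  # closing the fd releases the lock

lock = os.path.join(tempfile.mkdtemp(), "lock")
result = with_lock_ex(lock, lambda: "pushed")
print(result)  # -> pushed
```

Holding the lock for the duration of the child (via exec rather than wrapping a critical section) means a crashed push cannot leave the lock held: the kernel drops the flock when the process exits.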

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 22:23:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 22:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiuiA-0002du-M4; Wed, 12 Dec 2012 22:23:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tiui9-0002dp-PY
	for xen-devel@lists.xensource.com; Wed, 12 Dec 2012 22:23:30 +0000
Received: from [85.158.137.99:61978] by server-9.bemta-3.messagelabs.com id
	3F/70-11948-1E309C05; Wed, 12 Dec 2012 22:23:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1355351008!18267161!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMzY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17506 invoked from network); 12 Dec 2012 22:23:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 22:23:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,269,1355097600"; 
   d="scan'208";a="100804"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Dec 2012 22:23:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 12 Dec 2012 22:23:27 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tiui7-0000TL-KE;
	Wed, 12 Dec 2012 22:23:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tiui7-0003ee-CS;
	Wed, 12 Dec 2012 22:23:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14675-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Dec 2012 22:23:27 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14675: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14675 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14675/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14662
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 14662

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  a866cc5b8235
baseline version:
 xen                  309ff3ad9dcc

------------------------------------------------------------
People who touched revisions under test:
  Kouya Shimura <kouya@jp.fujitsu.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=a866cc5b8235
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing a866cc5b8235
+ branch=xen-4.1-testing
+ revision=a866cc5b8235
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r a866cc5b8235 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 3 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 12 23:32:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Dec 2012 23:32:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TivmT-0003AH-Ts; Wed, 12 Dec 2012 23:32:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronny.hegewald@online.de>) id 1TivmR-0003AC-9O
	for xen-devel@lists.xen.org; Wed, 12 Dec 2012 23:31:59 +0000
Received: from [85.158.138.51:44556] by server-3.bemta-3.messagelabs.com id
	1C/2B-31588-EE319C05; Wed, 12 Dec 2012 23:31:58 +0000
X-Env-Sender: ronny.hegewald@online.de
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355355117!28726926!1
X-Originating-IP: [212.227.17.8]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjE3LjggPT4gNzg4Mjk=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjE3LjggPT4gNzg4Mjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9003 invoked from network); 12 Dec 2012 23:31:57 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.17.8)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Dec 2012 23:31:57 -0000
Received: from server2-groupware.localnet (kons-d9be1fbe.pool.mediaWays.net
	[217.190.31.190])
	by mrelayeu.kundenserver.de (node=mreu2) with ESMTP (Nemesis)
	id 0Le9EG-1TM00F2zvA-00qMvk; Thu, 13 Dec 2012 00:31:00 +0100
From: Ronny Hegewald <ronny.hegewald@online.de>
To: xen-devel@lists.xen.org,
 Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 00:35:28 +0000
User-Agent: KMail/1.11.4 (Linux/3.0.3; KDE/4.2.4; i686; ; )
References: <50A4C8F102000078000A8C14@nat28.tlf.novell.com>
In-Reply-To: <50A4C8F102000078000A8C14@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
Message-Id: <201212130035.28918.ronny.hegewald@online.de>
X-Provags-ID: V02:K0:ExCmXKENWi73EDC9WqxBCtMZj3OG4FOwUoO8gaexv4Q
	9F9v5AcaXmExNjowBvmzHEJvCei9eJ6yIV9IElSoOjc8fDOygD
	BqpWI7TUW3ArQ0KP+A5jihv1ee4Y/i+sDoMfz4EdgpNtRy/Vg7
	aIrqJuoDeLhsOSZ35wkU8XE/JW5ktqXYx9pjzyLxAmjN1hevSf
	bBwqeRPHM0B3IcCmSPQdpOrt8uv8hlSGM9kAULN41aj8Vd9ge9
	UEzUtanjC2J8GVQF69ELv9kqE3kzvgZw34+mjgbQ+4ZWhzcYRm
	fwXY1b1lDCMsO2cLboXDskzD4ZpM3WENtauWqCc3oL5u7N+jWt
	N9eTh7qyvgpzAA1XGe0ReIkqLAfoj4/UzvUBJ3tOtvP6otoM7h
	nW6NfHHvhFUjtvw32JvuzlbeMrChWVlVys=
Cc: Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Xen 4.2.1-rc1 and 4.1.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thursday 15 November 2012, Jan Beulich wrote:
> All,
>
> aiming at releases with not too many more RCs, please test!
>
> Thanks, Jan

The patch "fix vfb related assertion problem when starting pv-domU" was
acknowledged as a stable candidate by Ian Campbell (see
http://lists.xen.org/archives/html/xen-devel/2012-11/msg01208.html), but I
couldn't find it in the 4.2 stable branch.

It's a one-liner, and I didn't find anything on the list indicating that it
was discarded for stable.

It's revision 26145:8b93ac0c93f3 (copied from the above link).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 00:32:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 00:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiwiz-0004IV-Bj; Thu, 13 Dec 2012 00:32:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tiwiy-0004IQ-EY
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 00:32:28 +0000
Received: from [85.158.143.99:30583] by server-3.bemta-4.messagelabs.com id
	71/10-18211-B1229C05; Thu, 13 Dec 2012 00:32:27 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355358746!22211023!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMwMzY2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20989 invoked from network); 13 Dec 2012 00:32:26 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-6.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 00:32:26 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 12 Dec 2012 16:31:25 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,269,1355126400"; d="scan'208";a="256692141"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 12 Dec 2012 16:31:25 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 12 Dec 2012 16:31:25 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 13 Dec 2012 08:31:22 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Thread-Topic: [PATCH 00/11] Add virtual EPT support Xen. 
Thread-Index: AQHN1p1R2dujkVAVsk2xGsrrRr7JApgV5PJw
Date: Thu, 13 Dec 2012 00:31:21 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033AEDB0@SHSMSX101.ccr.corp.intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"keir@xen.org" <keir@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 00/11] Add virtual EPT support Xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi guys,
Do you have any comments on this patchset? Thanks!
Xiantao

> -----Original Message-----
> From: Zhang, Xiantao
> Sent: Tuesday, December 11, 2012 1:57 AM
> To: xen-devel@lists.xensource.com
> Cc: JBeulich@suse.com; keir@xen.org; Dong, Eddie; Nakajima, Jun; Zhang,
> Xiantao
> Subject: [PATCH 00/11] Add virtual EPT support Xen.
> 
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> With virtual EPT support, the L1 hypervisor can use EPT hardware for the L2
> guest's memory virtualization. In this way, the L2 guest's performance can
> be improved sharply. According to our testing, some benchmarks show a > 5x
> performance gain.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Zhang Xiantao (11):
>   nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
>   nestedhap: Change nested p2m's walker to vendor-specific
>   nested_ept: Implement guest ept's walker
>   nested_ept: Add permission check for success case
>   EPT: Make ept data structure or operations neutral
>   nEPT: Try to enable EPT paging for L2 guest.
>   nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
>   nEPT: Use minimal permission for nested p2m.
>   nEPT: handle invept instruction from L1 VMM
>   nEPT: expose EPT capability to L1 VMM
>   nVMX: Expose VPID capability to nested VMM.
> 
>  xen/arch/x86/hvm/hvm.c                  |    7 +-
>  xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++
>  xen/arch/x86/hvm/svm/svm.c              |    3 +-
>  xen/arch/x86/hvm/vmx/vmcs.c             |    2 +-
>  xen/arch/x86/hvm/vmx/vmx.c              |   76 +++++---
>  xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++-
>  xen/arch/x86/mm/guest_walk.c            |   12 +-
>  xen/arch/x86/mm/hap/Makefile            |    1 +
>  xen/arch/x86/mm/hap/nested_ept.c        |  345 +++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/hap/nested_hap.c        |   79 +++----
>  xen/arch/x86/mm/mm-locks.h              |    2 +-
>  xen/arch/x86/mm/p2m-ept.c               |   96 +++++++--
>  xen/arch/x86/mm/p2m.c                   |   44 +++--
>  xen/arch/x86/mm/shadow/multi.c          |    2 +-
>  xen/include/asm-x86/guest_pt.h          |    8 +
>  xen/include/asm-x86/hvm/hvm.h           |    9 +-
>  xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
>  xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h      |   31 ++-
>  xen/include/asm-x86/hvm/vmx/vmx.h       |    6 +-
>  xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
>  xen/include/asm-x86/p2m.h               |   17 +-
>  22 files changed, 859 insertions(+), 153 deletions(-)  create mode 100644
> xen/arch/x86/mm/hap/nested_ept.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 00:32:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 00:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiwiz-0004IV-Bj; Thu, 13 Dec 2012 00:32:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tiwiy-0004IQ-EY
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 00:32:28 +0000
Received: from [85.158.143.99:30583] by server-3.bemta-4.messagelabs.com id
	71/10-18211-B1229C05; Thu, 13 Dec 2012 00:32:27 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355358746!22211023!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMwMzY2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20989 invoked from network); 13 Dec 2012 00:32:26 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-6.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 00:32:26 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 12 Dec 2012 16:31:25 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,269,1355126400"; d="scan'208";a="256692141"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 12 Dec 2012 16:31:25 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 12 Dec 2012 16:31:25 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 13 Dec 2012 08:31:22 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Thread-Topic: [PATCH 00/11] Add virtual EPT support Xen. 
Thread-Index: AQHN1p1R2dujkVAVsk2xGsrrRr7JApgV5PJw
Date: Thu, 13 Dec 2012 00:31:21 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033AEDB0@SHSMSX101.ccr.corp.intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"keir@xen.org" <keir@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 00/11] Add virtual EPT support Xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi guys,
Do you have any comments on this patchset? Thanks!
Xiantao

> -----Original Message-----
> From: Zhang, Xiantao
> Sent: Tuesday, December 11, 2012 1:57 AM
> To: xen-devel@lists.xensource.com
> Cc: JBeulich@suse.com; keir@xen.org; Dong, Eddie; Nakajima, Jun; Zhang,
> Xiantao
> Subject: [PATCH 00/11] Add virtual EPT support Xen.
> 
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> With virtual EPT support, the L1 hypervisor can use EPT hardware for the L2
> guest's memory virtualization, which improves L2 guest performance
> dramatically. In our testing, some benchmarks show a > 5x performance gain.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Zhang Xiantao (11):
>   nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
>   nestedhap: Change nested p2m's walker to vendor-specific
>   nested_ept: Implement guest ept's walker
>   nested_ept: Add permission check for success case
>   EPT: Make ept data structure or operations neutral
>   nEPT: Try to enable EPT paging for L2 guest.
>   nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
>   nEPT: Use minimal permission for nested p2m.
>   nEPT: handle invept instruction from L1 VMM
>   nEPT: expose EPT capability to L1 VMM
>   nVMX: Expose VPID capability to nested VMM.
> 
>  xen/arch/x86/hvm/hvm.c                  |    7 +-
>  xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++
>  xen/arch/x86/hvm/svm/svm.c              |    3 +-
>  xen/arch/x86/hvm/vmx/vmcs.c             |    2 +-
>  xen/arch/x86/hvm/vmx/vmx.c              |   76 +++++---
>  xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++-
>  xen/arch/x86/mm/guest_walk.c            |   12 +-
>  xen/arch/x86/mm/hap/Makefile            |    1 +
>  xen/arch/x86/mm/hap/nested_ept.c        |  345
> +++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/hap/nested_hap.c        |   79 +++----
>  xen/arch/x86/mm/mm-locks.h              |    2 +-
>  xen/arch/x86/mm/p2m-ept.c               |   96 +++++++--
>  xen/arch/x86/mm/p2m.c                   |   44 +++--
>  xen/arch/x86/mm/shadow/multi.c          |    2 +-
>  xen/include/asm-x86/guest_pt.h          |    8 +
>  xen/include/asm-x86/hvm/hvm.h           |    9 +-
>  xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
>  xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h      |   31 ++-
>  xen/include/asm-x86/hvm/vmx/vmx.h       |    6 +-
>  xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
>  xen/include/asm-x86/p2m.h               |   17 +-
>  22 files changed, 859 insertions(+), 153 deletions(-)  create mode 100644
> xen/arch/x86/mm/hap/nested_ept.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 00:38:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 00:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tiwom-0004Qx-5h; Thu, 13 Dec 2012 00:38:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tiwok-0004Qq-6y
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 00:38:26 +0000
Received: from [85.158.143.35:55745] by server-1.bemta-4.messagelabs.com id
	E0/EE-28401-18329C05; Thu, 13 Dec 2012 00:38:25 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355359100!16104619!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM4NzIx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27569 invoked from network); 13 Dec 2012 00:38:21 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-14.tower-21.messagelabs.com with SMTP;
	13 Dec 2012 00:38:21 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 12 Dec 2012 16:38:19 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,269,1355126400"; 
	d="scan'208,217";a="230781434"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by azsmga001.ch.intel.com with ESMTP; 12 Dec 2012 16:38:18 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 12 Dec 2012 16:38:18 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 13 Dec 2012 08:38:16 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: fahimeh soltaninejad <f.soltani298@gmail.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] hw to assign devices with VT-d in (fedora with Xen
	hypervisor)
Thread-Index: AQHN2IyO/gYXL3cpuE+4Eq0/SowEd5gV4cQg
Date: Thu, 13 Dec 2012 00:38:15 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033AEDEC@SHSMSX101.ccr.corp.intel.com>
References: <CAKLxbwJ92Fs9YKDKSjkCPok5-M7mpB-ttXJZXp-hexRpT6ZBdg@mail.gmail.com>
In-Reply-To: <CAKLxbwJ92Fs9YKDKSjkCPok5-M7mpB-ttXJZXp-hexRpT6ZBdg@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] hw to assign devices with VT-d in (fedora with
	Xen	hypervisor)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6525300667451429031=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6525300667451429031==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_B6C2EB9186482D47BD0C5A9A48345644033AEDECSHSMSX101ccrcor_"

--_000_B6C2EB9186482D47BD0C5A9A48345644033AEDECSHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

You can refer to this wiki page: http://wiki.xensource.com/xenwiki/VTdHowTo . If you run into any issues with it, please report back.
Xiantao

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of fahimeh soltaninejad
Sent: Tuesday, December 11, 2012 11:40 PM
To: xen-devel@lists.xen.org
Subject: [Xen-devel] hw to assign devices with VT-d in (fedora with Xen hypervisor)

hi,
I have installed the Xen hypervisor on Fedora 17, and now I want to assign devices directly to a guest OS through VT-d. Does anyone know how I can do it?
I found some solutions, but they include steps to be taken while installing Xen, whereas I installed Xen on Fedora afterwards. So I would like to know: can I assign a device to a guest OS through VT-d with Xen already installed on Fedora 17?
thanks.

--_000_B6C2EB9186482D47BD0C5A9A48345644033AEDECSHSMSX101ccrcor_--


--===============6525300667451429031==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6525300667451429031==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 01:17:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 01:17:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TixPb-00008I-AG; Thu, 13 Dec 2012 01:16:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1TixPZ-00008D-I3
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 01:16:29 +0000
Received: from [85.158.139.83:65404] by server-2.bemta-5.messagelabs.com id
	CB/AE-16162-C6C29C05; Thu, 13 Dec 2012 01:16:28 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355361386!29650437!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTE0MDU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15235 invoked from network); 13 Dec 2012 01:16:28 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 01:16:28 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBD1FQE5028478
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Dec 2012 01:15:27 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBD1FPsu011476
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 13 Dec 2012 01:15:25 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBD1FOJa019309; Wed, 12 Dec 2012 19:15:25 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Dec 2012 17:15:24 -0800
Date: Wed, 12 Dec 2012 17:15:23 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121212171523.332a0a89@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 11 Dec 2012 12:10:19 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > On Mon, 10 Dec 2012 09:43:34 +0000
> > "Jan Beulich" <JBeulich@suse.com> wrote:
> > 
> > > >>> On 08.12.12 at 02:46, Mukesh Rathor <mukesh.rathor@oracle.com>
> > > >>> wrote:
> > > > The second is msi.c. I don't understand it very well, and need
> > > > to figure what to do for PVH. Would appreciate suggestions if
> > > > anyone knows.
> > > 
> > > Why do you think you need to do something specially for PVH here
> > > in the first place? The only adjustment I would expect might be
> > > needed is address translation (depending on how PVH deal with
> > > MMIO addresses).
> > 
> > Ok, thanks. Looks like I'm missing some address translation
> > somewhere, getting EPT violation from dom0. Time to read up more on
> > msi-x and debug.
> 
> That's strange because AFAIK Linux is never editing the MSI-X entries
> directly: give a look at
> arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps MSIs
> into pirqs using hypercalls. Xen should be the only one to touch the
> real MSI-X table.

So, this is what's happening. The side effect of:

        if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
                                dev->msix_table.last) )
            WARN();
        if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
                                dev->msix_pba.last) )
            WARN();

in msix_capability_init() in Xen is that the dom0 EPT entries I've
mapped go from RW to read-only. Then, when dom0 accesses them, I get an
EPT violation. In the pure PV case, the PTE used to access the iomem
stays RW, and the rangeset additions above don't affect it. I don't yet
understand why; looking into that now...

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On Tue, 11 Dec 2012 12:10:19 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > On Mon, 10 Dec 2012 09:43:34 +0000
> > "Jan Beulich" <JBeulich@suse.com> wrote:
> > 
> > > >>> On 08.12.12 at 02:46, Mukesh Rathor <mukesh.rathor@oracle.com>
> > > >>> wrote:
> > > > The second is msi.c. I don't understand it very well, and need
> > > > to figure what to do for PVH. Would appreciate suggestions if
> > > > anyone knows.
> > > 
> > > Why do you think you need to do something specially for PVH here
> > > in the first place? The only adjustment I would expect might be
> > > needed is address translation (depending on how PVH deal with
> > > MMIO addresses).
> > 
> > Ok, thanks. Looks like I'm missing some address translation
> > somewhere, getting EPT violation from dom0. Time to read up more on
> > msi-x and debug.
> 
> That's strange because AFAIK Linux is never editing the MSI-X entries
> directly: give a look at
> arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps MSIs
> into pirqs using hypercalls. Xen should be the only one to touch the
> real MSI-X table.

So, this is what's happening. The side effect of:

        if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
                                dev->msix_table.last) )
            WARN();
        if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
                                dev->msix_pba.last) )
            WARN();

in msix_capability_init() in Xen is that the dom0 EPT entries that I've
mapped are going from RW to read-only. Then when dom0 accesses it, I
get an EPT violation. In the pure PV case, the PTE entry used to access
the iomem is RW, and the rangeset addition above doesn't affect it. I
don't understand why; looking into that now...

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 01:20:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 01:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TixSb-0000EF-Tr; Thu, 13 Dec 2012 01:19:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nenolod@dereferenced.org>) id 1TixSa-0000E5-J9
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 01:19:36 +0000
Received: from [85.158.139.211:20400] by server-16.bemta-5.messagelabs.com id
	8D/D4-09208-72D29C05; Thu, 13 Dec 2012 01:19:35 +0000
X-Env-Sender: nenolod@dereferenced.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355361574!20290282!1
X-Originating-IP: [209.85.216.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13216 invoked from network); 13 Dec 2012 01:19:35 -0000
Received: from mail-qa0-f43.google.com (HELO mail-qa0-f43.google.com)
	(209.85.216.43)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 01:19:35 -0000
Received: by mail-qa0-f43.google.com with SMTP id cr7so5866480qab.9
	for <xen-devel@lists.xensource.com>;
	Wed, 12 Dec 2012 17:19:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type
	:x-gm-message-state;
	bh=/Z11yCuii8jIE302sWTnjGo4WZtgorwwuZ0VH19glV8=;
	b=TWiUJRTVi6F/Rz97cAD+EfUXaDOKdccSy8xJOKybw/EhSnCwatkiNze1+9xevPDwn+
	po87qAakwLV7Y2b1FptL3RhZAQvkty3wGRgBqFfvmnTm+QcFxKZVMx6ZrsQKGYa0yX1T
	yDmrVJ/FgQKyu9bJpWu+Gujpbb6TRTVmt6d3S5iwKnWRFhfYBPN6utOju83o6yEmdBzX
	RiLX78ZQapcewk8uuvoJ1XS8TfZHnnBwNZeHr//MXzow9YHBVH3DN0Cv9/TvCw+JeIq0
	hcSWAE3DVoWI6YhffgkyDN7RhY24HXZ2U65iU3qUFdA3iwjV6HI/0E9kEzItI+dTka0d
	REPw==
MIME-Version: 1.0
Received: by 10.49.130.167 with SMTP id of7mr57059qeb.22.1355361573960; Wed,
	12 Dec 2012 17:19:33 -0800 (PST)
Received: by 10.49.13.136 with HTTP; Wed, 12 Dec 2012 17:19:33 -0800 (PST)
Date: Wed, 12 Dec 2012 19:19:33 -0600
Message-ID: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
From: William Pitcock <nenolod@dereferenced.org>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
X-Gm-Message-State: ALoCoQnzWPe6/hbmD5t9p0rnW4p0YOQZAkyAjtrMUIsYXkYwwADtQc5voYXL9svDsgzuDjCfBpQn
Subject: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I would like to introduce the Python Xen library, which uses libxs and
libxc directly to provide some manipulation functions for the domains
running on a hypervisor.

This code is being used in production right now on a semi-private
cloud, as part of the management backplane.

The code is available from PyPI at:
   http://pypi.python.org/pypi/Python-Xen

You may also view the code online via Bitbucket:
  http://bitbucket.org/tortoiselabs/python-xen

The code itself is under the ISC license, so it may be used for any
purpose, including in commercial hypervisors.  We also accept patches
for desired functionality that may be missing.

William

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 01:44:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 01:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TixqU-0000lR-MY; Thu, 13 Dec 2012 01:44:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1TixqT-0000lK-04
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 01:44:17 +0000
Received: from [85.158.139.83:43884] by server-11.bemta-5.messagelabs.com id
	0C/70-31624-0F239C05; Thu, 13 Dec 2012 01:44:16 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355363053!29493549!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjA3NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10527 invoked from network); 13 Dec 2012 01:44:15 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 01:44:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBD1hFHp024899
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Dec 2012 01:43:16 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBD1hEVQ017190
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 13 Dec 2012 01:43:14 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBD1hEKh030799; Wed, 12 Dec 2012 19:43:14 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Dec 2012 17:43:14 -0800
Date: Wed, 12 Dec 2012 17:43:12 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20121212174312.68146c02@mantra.us.oracle.com>
In-Reply-To: <20121212171523.332a0a89@mantra.us.oracle.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 12 Dec 2012 17:15:23 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> On Tue, 11 Dec 2012 12:10:19 +0000
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > > On Mon, 10 Dec 2012 09:43:34 +0000
> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > 
> > That's strange because AFAIK Linux is never editing the MSI-X
> > entries directly: give a look at
> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
> > MSIs into pirqs using hypercalls. Xen should be the only one to
> > touch the real MSI-X table.
> 
> So, this is what's happening. The side effect of :
> 
>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
>                                 dev->msix_table.last) )
>             WARN();
>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
>                                 dev->msix_pba.last) )
>             WARN();
> 
> in msix_capability_init() in xen is that the dom0 EPT entries that
> I've mapped are going from RW to read only. Then when dom0 accesses
> it, I get EPT violation. In case of pure PV, the PTE entry to access
> the iomem is RW, and the above rangeset adding doesn't affect it. I
> don't understand why? Looking into that now...

Ah, it's coming from:

ept_p2m_type_to_flags(): 
   case p2m_mmio_direct:
       entry->r = entry->x = 1;
       entry->w = !rangeset_contains_singleton(mmio_ro_ranges, entry->mfn);

I suppose the best fix would be to add a check here for dom0 and let it
have w access?

thanks
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 02:11:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 02:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiyGX-0001RQ-5P; Thu, 13 Dec 2012 02:11:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiyGV-0001RL-N4
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 02:11:12 +0000
Received: from [85.158.143.35:46833] by server-2.bemta-4.messagelabs.com id
	76/A4-30861-E3939C05; Thu, 13 Dec 2012 02:11:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355364669!5279746!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMzY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23558 invoked from network); 13 Dec 2012 02:11:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 02:11:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,269,1355097600"; 
   d="scan'208";a="104457"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 02:11:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 02:11:08 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiyGS-0001c3-NM;
	Thu, 13 Dec 2012 02:11:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiyGS-0006sm-Dq;
	Thu, 13 Dec 2012 02:11:08 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14676-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 02:11:08 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14676: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9132873983982513388=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9132873983982513388==
Content-Type: text/plain

flight 14676 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14676/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14673
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14673

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  74d4a6cc5392
baseline version:
 xen                  ef8c1b607b10

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=74d4a6cc5392
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 74d4a6cc5392
+ branch=xen-unstable
+ revision=74d4a6cc5392
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 74d4a6cc5392 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files


--===============9132873983982513388==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9132873983982513388==--

From xen-devel-bounces@lists.xen.org Thu Dec 13 02:11:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 02:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TiyGX-0001RQ-5P; Thu, 13 Dec 2012 02:11:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TiyGV-0001RL-N4
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 02:11:12 +0000
Received: from [85.158.143.35:46833] by server-2.bemta-4.messagelabs.com id
	76/A4-30861-E3939C05; Thu, 13 Dec 2012 02:11:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355364669!5279746!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMzY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23558 invoked from network); 13 Dec 2012 02:11:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 02:11:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,269,1355097600"; 
   d="scan'208";a="104457"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 02:11:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 02:11:08 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TiyGS-0001c3-NM;
	Thu, 13 Dec 2012 02:11:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TiyGS-0006sm-Dq;
	Thu, 13 Dec 2012 02:11:08 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14676-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 02:11:08 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14676: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9132873983982513388=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9132873983982513388==
Content-Type: text/plain

flight 14676 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14676/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14673
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14673

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  74d4a6cc5392
baseline version:
 xen                  ef8c1b607b10

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=74d4a6cc5392
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 74d4a6cc5392
+ branch=xen-unstable
+ revision=74d4a6cc5392
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 74d4a6cc5392 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files


--===============9132873983982513388==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9132873983982513388==--

From xen-devel-bounces@lists.xen.org Thu Dec 13 05:13:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 05:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj16E-00037U-Qx; Thu, 13 Dec 2012 05:12:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj16D-00037P-IQ
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 05:12:45 +0000
Received: from [85.158.139.83:60571] by server-15.bemta-5.messagelabs.com id
	0A/E4-20523-CC369C05; Thu, 13 Dec 2012 05:12:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1355375563!22372175!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwMzY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27120 invoked from network); 13 Dec 2012 05:12:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 05:12:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,272,1355097600"; 
   d="scan'208";a="106473"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 05:12:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 05:12:42 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tj16A-0002ag-DH;
	Thu, 13 Dec 2012 05:12:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tj16A-0001BN-7k;
	Thu, 13 Dec 2012 05:12:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14677-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 05:12:42 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14677: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14677 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14677/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 14675

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14675
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 14675

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  255a0b6a8104
baseline version:
 xen                  a866cc5b8235

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23427:255a0b6a8104
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23426:a866cc5b8235
user:        Keir Fraser <keir@xen.org>
date:        Wed Dec 12 09:40:16 2012 +0000
    
    Added signature for changeset 0125069bc1b2
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23427:255a0b6a8104
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23426:a866cc5b8235
user:        Keir Fraser <keir@xen.org>
date:        Wed Dec 12 09:40:16 2012 +0000
    
    Added signature for changeset 0125069bc1b2
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 08:27:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 08:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj47t-0004rd-W6; Thu, 13 Dec 2012 08:26:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1Tj47t-0004rY-0B
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 08:26:41 +0000
Received: from [85.158.139.83:63588] by server-14.bemta-5.messagelabs.com id
	98/12-09538-04199C05; Thu, 13 Dec 2012 08:26:40 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355387199!27513557!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMwMzY2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21516 invoked from network); 13 Dec 2012 08:26:39 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-12.tower-182.messagelabs.com with SMTP;
	13 Dec 2012 08:26:39 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 13 Dec 2012 00:26:37 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,273,1355126400"; d="scan'208";a="256832324"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 13 Dec 2012 00:26:33 -0800
Received: from FMSMSX109.amr.corp.intel.com (10.19.9.28) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 13 Dec 2012 00:25:56 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx109.amr.corp.intel.com (10.19.9.28) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 13 Dec 2012 00:25:56 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Thu, 13 Dec 2012 16:25:55 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH] x86/xen : Fix the wrong check in pciback
Thread-Index: AQHNyFiH35etVScr/0ilTQpjjJLbNpgWhikQ
Date: Thu, 13 Dec 2012 08:25:54 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E2DFA42@SHSMSX101.ccr.corp.intel.com>
References: <1353550823-10009-1-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1353550823-10009-1-git-send-email-yang.z.zhang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "konrad@kernel.org" <konrad@kernel.org>
Subject: Re: [Xen-devel] [PATCH] x86/xen : Fix the wrong check in pciback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2012-11-22:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> Fix the wrong check in pciback.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  drivers/xen/xen-pciback/pciback.h |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
> index a7def01..f72af87 100644
> --- a/drivers/xen/xen-pciback/pciback.h
> +++ b/drivers/xen/xen-pciback/pciback.h
> @@ -124,7 +124,7 @@ static inline int xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
>  static inline void xen_pcibk_release_pci_dev(struct xen_pcibk_device *pdev,
>  					     struct pci_dev *dev)
>  {
> -	if (xen_pcibk_backend && xen_pcibk_backend->free)
> +	if (xen_pcibk_backend && xen_pcibk_backend->release)
>  		return xen_pcibk_backend->release(pdev, dev);
>  }
> --
> 1.7.1.1
Any comment?

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 08:47:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 08:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj4Rr-000565-UF; Thu, 13 Dec 2012 08:47:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Tj4Rq-000560-3A
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 08:47:18 +0000
Received: from [85.158.139.83:34757] by server-14.bemta-5.messagelabs.com id
	5C/E5-09538-51699C05; Thu, 13 Dec 2012 08:47:17 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355388419!29527574!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19971 invoked from network); 13 Dec 2012 08:47:00 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-15.tower-182.messagelabs.com with AES128-SHA encrypted
	SMTP; 13 Dec 2012 08:47:00 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Thu, 13 Dec 2012 09:46:59 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] VGA passthrough and AMD drivers : Success !
Thread-Index: Ac3ZC8qo5QXhoVBySkamhILb9r/g6A==
Date: Thu, 13 Dec 2012 08:46:58 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53E41@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: [Xen-devel]  VGA passthrough and AMD drivers : Success !
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>>>>>>> Hi all,
>>>>>>>>> I have made some tests to find a good driver for FirePro V8800 
>>>>>>>>> on windows 7 64bit HVM.
>>>>>>>>> I have been focused on "advanced features": quad buffer and 
>>>>>>>>> active stereoscopy, synchronization...
>>>>>>>>> The results, for all FirePro drivers (of this year): I can't 
>>>>>>>>> get the quad buffer/active stereoscopy feature.
>>>>>>>>> But they work on a native installation.
>>>>>>>> Can you describe the setup a little more?
>>>>>>> I've got 2 HP Z800 workstations with FirePro V8800, one per computer.
>>>>>>>
>>>>>>> It's a setup used in a CAVE system; I try (and it works, minus some
>>>>>>> issues) to virtualize "virtual reality contexts" that need full
>>>>>>> graphics card features.
>>>>>>>
>>>>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>>>>
>>>>>>> cores_per_socket : 4
>>>>>>>
>>>>>>> threads_per_core : 2
>>>>>>>
>>>>>>> cpu_mhz : 2660
>>>>>>>
>>>>>>> total_memory : 4079
>>>>>>>
>>>>>>>> How many graphic cards per guest?
>>>>>>> One card per guest.
>>>>>>>
>>>>>>>> How many guests? On how many hosts?
>>>>>>> One guest per computer.
>>>>>>>
>>>>>> And of course, I just thought of some other questions:
>>>>>> What version of Xen are you using?
>>>>>> What kernel are you using in Dom0?
>>>>> release                : 2.6.32-5-xen-amd64
>>>>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>>>>> machine                : x86_64
>>>>> nr_cpus                : 8
>>>>> nr_nodes               : 1
>>>>> cores_per_socket       : 4
>>>>> threads_per_core       : 2
>>>>> cpu_mhz                : 2660
>>>>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>>>>> virt_caps              : hvm hvm_directio
>>>>> total_memory           : 4079
>>>>> free_cpus              : 0
>>>>> xen_major              : 4
>>>>> xen_minor              : 2
>>>>> xen_extra              : -unstable
>>>>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>>>> xen_scheduler          : credit
>>>>> xen_pagesize           : 4096
>>>>> platform_params        : virt_start=0xffff800000000000
>>>>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>>>>> xen_commandline        : placeholder
>>>>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>>>>> xend_config_format     : 4
>>>>>
>>>>> I will change to a newer version and use the xl toolstack when VGA passthrough is supported.
>>>>>
>>>>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>>>>> Yes
>>>>>
>>>>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>>>>> (Catalyst 12.10 WHQL).
>>>>>>>>> But this driver becomes unstable when an application using 
>>>>>>>>> active stereo and synchronization is closed:
>>>>>>>>> -The synchronization between two computers is lost.
>>>>>>>>> -The CCC can crash when the synchronization is made again.
>>>>>>>>> Does anyone have any clues about this?
>>>>>>>> I don't know exactly how this works on AMD/ATI graphics cards, 
>>>>>>>> but I have worked with synchronisation on other graphics cards 
>>>>>>>> about 7 years ago, so I have some idea of how you solve the 
>>>>>>>> various problems.
>>>>>>>> What I don't quite understand is why it would be different 
>>>>>>>> between a virtual environment and the bare-metal ("native") 
>>>>>>>> install. My immediate guess is that there is a timing 
>>>>>>>> difference, for one of three reasons:
>>>>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>>>>> 2. Interrupt delays due to hypervisor.
>>>>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>>>>> I don't think those are easy to work around (as they all have to 
>>>>>>>> "happen" in a virtual system), but I also don't REALLY 
>>>>>>>> understand why this should cause problems in the first place, as 
>>>>>>>> there isn't any guarantee as to the timings of either memory 
>>>>>>>> reads, interrupt latency/responsiveness or CPU availability in 
>>>>>>>> Windows, so the same problem would appear in native systems as well, given "the right"
>>>>>>>> circumstances.
>>>>>>>> What exactly is the crash in CCC?
>>>>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>>>>> Windows "service" to handle certain requests from the driver 
>>>>>>>> that can't be done in kernel mode [or shouldn't be done in the 
>>>>>>>> driver in general]).
>>>>>>> After the application is closed, I launch the Catalyst Control
>>>>>>> Center; the synchronization state seems to be good. But there is
>>>>>>> no synchronization.
>>>>>>>
>>>>>>> If I try to apply any modifications of synchronization (synchro 
>>>>>>> server or client), CCC freezes and I need to kill it manually.
>>>>>>>
>>>>>>> I can set the synchronization back after this.
>>>>>>>
>>>>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>>>>> I've made a bunch of tests this morning:
>>>>> -CCC crashes when I've got two displays and I set one to be the synchronization server and the other a client at the same time. When I set the server, apply this configuration, and set the client afterwards, it didn't crash.
>>>>> -If my application (Virtools) crashes, the synchronization is reset.
>>>>> -Eyes are sometimes inverted with the same trigger edge.
>>>>I saw that problem with the product I was working on once or twice. 
>>>>Makes it look really "confusing". This was a settings problem in my case (because I wrote my own "controls", I could set almost every aspect of everything that could possibly be changed, with a very basic command line application that interacted pretty straight down to the driver - with the usual caveat of "make sure you know what you are doing" - the normal GUI Control panel setup was much more "you can only set things that make sense for you to set"). That is probably not really what your problem is... But it could be a configuration of driver or application issue, of course.
>>>>
>>>>>
>>>>> I've got all these behaviors with both HVM and native installations under 7 64bits. So I think it's clearly a software issue.
>>>>>
>>>>> Next step:  7 32bits.
>>>>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>>>>
>>>Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.
>>
>>>>>> Whilst I'm all for using Xen for everything, there are sometimes situations when "not using Xen" may actually be the right choice. Can you explain why running your guests in Xen is of benefit? [If you'd like to answer "none  of  your business", that's fine, but it may help to understand what the "business case" is for this].
>>>>> The objective is to mutualize graphical clusters for immersive systems. Virtual Reality applications are sensitive in their configurations; it's a pain to manage multiple users and it's nearly impossible to have different configurations for these users. Usually immersive systems are stuck in one configuration (OS, drivers, applications ...), and only a few people are allowed to change settings.
>>>>> The idea is to use Xen and VGA passthrough to create personal environments that allow every user to make their own configuration without impact on the others.
>>>>>
>>>>> Being able to have VR configurations in virtual machines and to run them with 3D features is a serious benefit for Virtual Reality users.
>>>>
>>>>Thanks for your explanation. Makes some sense; however, I feel that it also makes things more complex - if the system is so sensitive, it may get "upset" simply by having the differences in system behaviour that you automatically get from running on a virtual machine vs. "bare metal". Don't let that stop you, I'm just saying there may be issues because Xen (or other virtualisation products) is not quite as transparent as it really should be.
>>>>>>
>>
>>> It's not the hardware configuration that is so sensitive but more the software configurations and driver versions.
>>> I've already made some demonstrations of Xen capabilities in our use case; there has been no negative feedback. I think that HVM behavior is perfect for our uses, except for these driver issues.
>>>
>>> I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs.
>>> My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough.
>>> I think it's a bug due to my installation (Xen 4.2.0-unstable).
>>>
>>> I just got a new test computer, a Dell Precision T7500 with a V9800 FirePro, maybe I will have the time to test something tomorrow!
>
>Exactly the same behaviors with this computer. 
>
>>
>>Hi,
>>
>>My experiences with CCC and VGA passthrough aren't great either (bluescreen most of the time).
>>It works now with the 12.11 Catalyst beta drivers and not installing CCC, but just installing the driver and OpenCL packages from the c:\AMD\packages dir after stopping the install once the total package is unpacked.
>>Don't know if the stereoscopy also comes in a separate package.
>>
>>It runs OpenCL fine now, with near-native performance (with the LuxMark
>>benchmark) :-) (with xen-unstable, qemu-upstream, Linux 3.7 dom0, Win7
>>guest, 12.11 Catalyst drivers, an ATI Radeon HD 6570 at the moment)
>>
>
>Thanks for the feedback! I will try what you said.
>But for my use case I need CCC; even if it doesn't work properly, users know how to use it and make it work.
>

Finally I handled the synchronization issues; as Mats said, the issue was coming from the synchronization configuration.
The synchronization doesn't work if an Eyefinity group has not been properly configured...
The synchronization is now stabilized, and I get the expected behaviors (as on a native installation).

I have not been able to get the FirePro drivers to work yet!

Aurelien

>Aurelien
>
>--
>Sander
>
> Aurelien
>>--
>>Mats
>>
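[Archive editor's note: the boot-ordering workaround quoted above (launch a VGA-less HVM first, stop it properly, then start the passthrough guest) can be driven from two domain config files under the xend/xm toolstack in use here. The fragment below is a hypothetical sketch: the names, memory sizes and PCI BDF are placeholders, not values taken from the thread.]

```python
# dummy-novga.cfg -- minimal HVM guest with no PCI passthrough; booted and
# shut down once so the real guest does not become domain id 1 (working
# around the reboot-without-logs bug described above)
kernel  = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
name    = "dummy-novga"          # hypothetical name
memory  = 512

# vr-guest.cfg -- the actual Windows 7 HVM with the FirePro passed through
kernel  = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
name    = "vr-guest"             # hypothetical name
memory  = 3072
pci     = [ '0000:02:00.0' ]     # placeholder BDF of the graphics card
gfx_passthru = 1                 # hand the card to the guest as primary VGA
```

The workaround then amounts to: `xm create dummy-novga.cfg`, `xm shutdown -w dummy-novga`, `xm create vr-guest.cfg` (the xl syntax is analogous once the poster migrates toolstacks).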




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 08:47:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 08:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj4Rr-000565-UF; Thu, 13 Dec 2012 08:47:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1Tj4Rq-000560-3A
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 08:47:18 +0000
Received: from [85.158.139.83:34757] by server-14.bemta-5.messagelabs.com id
	5C/E5-09538-51699C05; Thu, 13 Dec 2012 08:47:17 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355388419!29527574!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19971 invoked from network); 13 Dec 2012 08:47:00 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-15.tower-182.messagelabs.com with AES128-SHA encrypted
	SMTP; 13 Dec 2012 08:47:00 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0421.002;
	Thu, 13 Dec 2012 09:46:59 +0100
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] VGA passthrough and AMD drivers : Success !
Thread-Index: Ac3ZC8qo5QXhoVBySkamhILb9r/g6A==
Date: Thu, 13 Dec 2012 08:46:58 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C53E41@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: [Xen-devel]  VGA passthrough and AMD drivers : Success !
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>>>>>>> Hi all,
>>>>>>>>> I have made some tests to find a good driver for the FirePro V8800 
>>>>>>>>> on Windows 7 64-bit HVM.
>>>>>>>>> I have focused on "advanced features": quad buffer and 
>>>>>>>>> active stereoscopy, synchronization ...
>>>>>>>>> The result, for all FirePro drivers (of this year): I can't 
>>>>>>>>> get the quad buffer/active stereoscopy feature.
>>>>>>>>> But they work on a native installation.
>>>>>>>> Can you describe the setup a little more?
>>>>>>> I've got 2 HP Z800 workstations with a FirePro V8800, one per computer.
>>>>>>>
>>>>>>> It's a setup used in a CAVE system; I try (and it works, minus some 
>>>>>>> issues) to virtualize "virtual reality contexts" that need full 
>>>>>>> graphics card features.
>>>>>>>
>>>>>>> Intel Xeon E5640 CPU with Intel 5520 chipset
>>>>>>>
>>>>>>> cores_per_socket : 4
>>>>>>>
>>>>>>> threads_per_core : 2
>>>>>>>
>>>>>>> cpu_mhz : 2660
>>>>>>>
>>>>>>> total_memory : 4079
>>>>>>>
>>>>>>>> How many graphic cards per guest?
>>>>>>> One card per guest.
>>>>>>>
>>>>>>>> How many guests? On how many hosts?
>>>>>>> One guest per computer.
>>>>>>>
>>>>>> And of course, I just thought of some other questions:
>>>>>> What version of Xen are you using?
>>>>>> What kernel are you using in Dom0?
>>>>> release                : 2.6.32-5-xen-amd64
>>>>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>>>>> machine                : x86_64
>>>>> nr_cpus                : 8
>>>>> nr_nodes               : 1
>>>>> cores_per_socket       : 4
>>>>> threads_per_core       : 2
>>>>> cpu_mhz                : 2660
>>>>> hw_caps                : bfebfbff:2c100800:00000000:00003f40:029ee3ff:00000000:00000001:00000000
>>>>> virt_caps              : hvm hvm_directio
>>>>> total_memory           : 4079
>>>>> free_cpus              : 0
>>>>> xen_major              : 4
>>>>> xen_minor              : 2
>>>>> xen_extra              : -unstable
>>>>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>>>> xen_scheduler          : credit
>>>>> xen_pagesize           : 4096
>>>>> platform_params        : virt_start=0xffff800000000000
>>>>> xen_changeset          : Sun Jul 22 16:37:25 2012 +0100 25622:3c426da4788e
>>>>> xen_commandline        : placeholder
>>>>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>>>>> xend_config_format     : 4
>>>>>
>>>>> I will change to a newer version and use the xl toolstack when VGA passthrough is supported.
>>>>>
>>>>>> And just to be clear, there is only Dom0 and one Windows 7 HVM guest on each machine?
>>>>> Yes
>>>>>
>>>>>>>>> The only driver that allows this feature is a Radeon HD driver 
>>>>>>>>> (Catalyst 12.10 WHQL).
>>>>>>>>> But this driver becomes unstable when an application using 
>>>>>>>>> active stereo and synchronization is closed:
>>>>>>>>> -The synchronization between two computers is lost.
>>>>>>>>> -The CCC can crash when the synchronization is made again.
>>>>>>>>> Does anyone have any clues about this?
>>>>>>>> I don't know exactly how this works on AMD/ATI graphics cards, 
>>>>>>>> but I have worked with synchronisation on other graphics cards 
>>>>>>>> about 7 years ago, so I have some idea of how you solve the 
>>>>>>>> various problems.
>>>>>>>> What I don't quite understand is why it would be different 
>>>>>>>> between a virtual environment and the bare-metal ("native") 
>>>>>>>> install. My immediate guess is that there is a timing 
>>>>>>>> difference, for one of three reasons:
>>>>>>>> 1. IOMMU is adding extra delays to the graphics card reading system memory.
>>>>>>>> 2. Interrupt delays due to hypervisor.
>>>>>>>> 3. Dom0 or other guest domains "stealing" CPU from the guest.
>>>>>>>> I don't think those are easy to work around (as they all have to 
>>>>>>>> "happen" in a virtual system), but I also don't REALLY 
>>>>>>>> understand why this should cause problems in the first place, as 
>>>>>>>> there isn't any guarantee as to the timings of either memory 
>>>>>>>> reads, interrupt latency/responsiveness or CPU availability in 
>>>>>>>> Windows, so the same problem would appear in native systems as well, given "the right"
>>>>>>>> circumstances.
>>>>>>>> What exactly is the crash in CCC?
>>>>>>>> (CCC stands for "Catalyst Control Center" - which I think is a 
>>>>>>>> Windows "service" to handle certain requests from the driver 
>>>>>>>> that can't be done in kernel mode [or shouldn't be done in the 
>>>>>>>> driver in general]).
>>>>>>> After the application is closed, I launch the Catalyst Control 
>>>>>>> Center; the synchronization state seems to be good. But there is 
>>>>>>> no synchronization.
>>>>>>>
>>>>>>> If I try to apply any modification of the synchronization (synchro 
>>>>>>> server or client), CCC freezes and I need to kill it manually.
>>>>>>>
>>>>>>> I can set the synchronization back after this.
>>>>>>>
>>>>>> This clearly sounds like a software issue in the CCC itself. I could be wrong, but that's what I think right now. It would be rather difficult to figure out what is going wrong without at least a repro environment.
>>>>> I've made a bunch of tests this morning:
>>>>> -CCC crashes when I've got two displays and I set one to be the synchronization server and the other a client at the same time. When I set the server, apply this configuration, and set the client afterwards, it doesn't crash.
>>>>> -If my application (Virtools) crashes, synchronization is reset.
>>>>> -Eyes are sometimes inverted with the same trigger edge.
>>>>I saw that problem with the product I was working on once or twice. 
>>>>Makes it look really "confusing". This was a settings problem in my case (because I wrote my own "controls", I could set almost every aspect of everything that could possibly be changed, with a very basic command line application that interacted pretty much directly with the driver - with the usual caveat of "make sure you know what you are doing" - the normal GUI control panel setup was much more "you can only set things that make sense for you to set"). That is probably not really what your problem is... But it could be a driver or application configuration issue, of course.
>>>>
>>>>>
>>>>> I've got all these behaviors with both HVM and native installations under Windows 7 64-bit. So I think it's clearly a software issue.
>>>>>
>>>>> Next step: Windows 7 32-bit.
>>>>So, this is not a Xen issue... Report it to the ATI/AMD folks!
>>>>
>>>Yes, but it doesn't explain why I can't get active stereoscopy with FirePro drivers on HVM.
>>
>>>>>> Whilst I'm all for using Xen for everything, there are sometimes situations when "not using Xen" may actually be the right choice. Can you explain why running your guests in Xen is of benefit? [If you'd like to answer "none of your business", that's fine, but it may help to understand what the "business case" is for this].
>>>>> The objective is to pool a graphics cluster for immersive systems. Virtual Reality applications are sensitive in their configuration; it's a pain to manage multiple users and it's nearly impossible to have different configurations for these users. Usually immersive systems are stuck in one configuration (OS, drivers, applications ...), and only a few people are allowed to change settings.
>>>>> The idea is to use Xen and VGA passthrough to create personal environments that allow every user to make their own configuration without impact on the others.
>>>>>
>>>>> Being able to have VR configurations in virtual machines and to run them with 3D features is a serious benefit for Virtual Reality users.
>>>>
>>>>Thanks for your explanation. It makes some sense; however, I feel that it also makes things more complex - if the system is so sensitive, it may get "upset" simply from the differences in system behaviour that you automatically get from running on a virtual machine vs. "bare metal". Don't let that stop you, I'm just saying there may be issues because Xen (and other virtualisation products) is not quite as transparent as it really should be.
>>>>>>
>>
>>> It's not the hardware configuration that is so sensitive but more the software configuration and driver versions.  
>>> I've already made some demonstrations of Xen capabilities in our use case; there was no negative feedback. I think that HVM behavior is perfect for our uses, except for these driver issues. 
>>>
>>> I found one minor bug (for us): if the first HVM executed (id=1) has the VGA card, the computer reboots without logs. 
>>> My workaround is to launch an HVM without VGA first, stop it properly, and then launch my usual HVM with VGA passthrough. 
>>> I think it's a bug due to my installation (Xen 4.2.0-unstable).
>>>
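[A minimal sketch of the workaround described above - hypothetical: the xm syntax matches the xend toolstack the poster reports using, but the config file and domain names are invented. It defaults to a dry run, since the real commands need a Xen host:]

```shell
#!/bin/sh
# Sketch of the "boot a dummy HVM first" workaround. DRY_RUN=1 (the
# default here) only prints each command instead of executing it.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run xm create /etc/xen/dummy-novga.cfg   # first HVM (id=1), no VGA passthrough
run xm shutdown -w dummy-novga           # stop it properly, wait for shutdown
run xm create /etc/xen/win7-vga.cfg      # the real guest now avoids the reboot
```

[Setting DRY_RUN=0 would execute the commands on an actual Xen host.]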
>>> I just got a new test computer, a Dell Precision T7500 with a FirePro V9800; maybe I will have time to test something tomorrow!
>
>Exactly the same behaviors with this computer. 
>
>>
>>Hi,
>>
>>My experiences with CCC and VGA passthrough aren't great either (bluescreens most of the time).
>>It works now with the 12.11 Catalyst beta drivers and without installing CCC: just installing the driver and OpenCL packages from the c:\AMD\packages dir after stopping the installer once the total package is unpacked.
>>Don't know if the stereoscopy also comes in a separate package.
>>
>>It runs OpenCL fine now, with near-native performance (with the LuxMark 
>>benchmark) :-) (with xen-unstable, qemu-upstream, Linux 3.7 dom0, Win7 
>>guest, 12.11 Catalyst drivers, an ATI Radeon HD 6570 at the moment)
>>
>
>Thanks for the feedback! I will try what you said.
>But for my use case I need CCC; even if it doesn't work properly, users know how to use it and make it work.  
>

Finally I handled the synchronization issues; as Mats said, the issue was coming from the synchronization configuration.
The synchronization doesn't work if an Eyefinity group has not been properly configured...
The synchronization is stabilized, and I get the expected behavior (as with a native installation).

I have not been able to get the FirePro drivers to work yet!

Aurelien

>Aurelien
>
>--
>Sander
>
> Aurelien
>>--
>>Mats
>>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 09:09:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 09:09:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj4mk-0005P1-Am; Thu, 13 Dec 2012 09:08:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tj4mj-0005Ow-CI
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 09:08:53 +0000
Received: from [85.158.139.211:7359] by server-13.bemta-5.messagelabs.com id
	0D/1C-10716-42B99C05; Thu, 13 Dec 2012 09:08:52 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355389730!16056938!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4606 invoked from network); 13 Dec 2012 09:08:52 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 09:08:52 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so1805314iac.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 01:08:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=t+2gk/lEklwOI1G4Ejv7RYXs5gjnffrYI1VRYBPGh0w=;
	b=C7fzGCULZKMObK+l6P/6neJr2ajMNKat8127EHZ0cLDR8cydp60fwDWFSeT8lLJsoB
	rB607WN70Ng0Reayu3Y2+F6P/CpBEp0uZ1uCi8Qi3BGRUZNHAfVxmxJ5O0A1TWfCyoSc
	+L1iAxPO4vsGux4dufxP/xfcBFwjS+TmOuucWrCxARO8a1ITciJ3S7N1C2BWWgoNrleS
	zLv0SA7x/rCYZW1N5qCRgCrO4T65VvpGL8aLahVIQ8BsJ7pyf9pjgjFK9u1iTy7peciD
	EbIS8Xj/6T8pijb/jjXWZ8WaYJnqaESxVHQ/Q5pZ27nhJIovrQVZjY5Z91x/m1plqjXX
	8J/g==
MIME-Version: 1.0
Received: by 10.50.151.172 with SMTP id ur12mr957712igb.44.1355389730268; Thu,
	13 Dec 2012 01:08:50 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 13 Dec 2012 01:08:50 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
Date: Thu, 13 Dec 2012 17:08:50 +0800
X-Google-Sender-Auth: bUQDeJ_GWmE3UxyMMj717miJTak
Message-ID: <CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	"Kay, Allen M" <allen.m.kay@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, 
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>, "Dong,
	Eddie" <eddie.dong@intel.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> > PVHVM Linux guests setup interrupts differently: they request an event
>> > channel for each legacy interrupt or MSI/MSI-X, then the hypervisor uses
>> > these event channels to inject notifications into the guest, rather
>> > than emulated interrupts or emulated MSIs.
>> >
>> Will this affect the result of pci_get_class() as called by the intel driver?
>> If not, this can still not explain the different behavior.
>> Maybe I need to do one more experiment when I got time.
>
> No, it doesn't.
>
I've done more experiments last night, and it turns out that I was
fooled previously.
Actually none of the configs (PV or not) correctly detected the PCH version.
The magic that makes my Debian PVHVM build work is in the script-maintained
grub config, which automatically loads some VGA / FB driver to show the
background image.
The manually maintained grub config for openelec is quick && dirty and
does not have such fancy stuff.
Manually adding such a VGA / FB driver to the grub config could also 'fix'
the output issue.
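[For reference, the kind of "fancy stuff" a distro-generated grub.cfg adds is roughly the following - a hypothetical fragment; the exact modules Debian loads are an assumption, not copied from the poster's config:]

```
# Hypothetical grub.cfg fragment: load a VBE/graphics terminal path and
# keep the framebuffer across boot, so the guest kernel inherits working
# output even though intel_detect_pch() still fails inside the driver.
insmod vbe
insmod font
insmod gfxterm
set gfxpayload=keep
terminal_output gfxterm
```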

However, the driver failing to detect the PCH version is a real issue,
even though the output can be made available through grub.
The driver code path specific to the new PCH version is simply missed
because of this issue, which could potentially lead to other problems.

Could you xen && intel developers sit down to discuss and come up with
a formal solution?
I can foresee that cooperation is required on this topic.

Thanks,
Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:26:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj5z0-0006in-8n; Thu, 13 Dec 2012 10:25:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tj5yy-0006ie-OR
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:25:36 +0000
Received: from [85.158.138.51:6835] by server-16.bemta-3.messagelabs.com id
	3D/53-27634-B1DA9C05; Thu, 13 Dec 2012 10:25:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355394330!22358612!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27498 invoked from network); 13 Dec 2012 10:25:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 10:25:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 10:25:29 +0000
Message-Id: <50C9BB2902000078000B00E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 10:25:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>,
 "Tim Deegan" <tim@xen.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<B6C2EB9186482D47BD0C5A9A48345644033AEDB0@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A48345644033AEDB0@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/11] Add virtual EPT support Xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 01:31, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> Hi, Guys 
> Do you have comments for this patchset ? Thanks!
> Xiantao 

I was actually hoping for Tim to take a look. But you should probably
be a little more patient - it's been just two days since this got posted.

Jan

>> -----Original Message-----
>> From: Zhang, Xiantao
>> Sent: Tuesday, December 11, 2012 1:57 AM
>> To: xen-devel@lists.xensource.com 
>> Cc: JBeulich@suse.com; keir@xen.org; Dong, Eddie; Nakajima, Jun; Zhang,
>> Xiantao
>> Subject: [PATCH 00/11] Add virtual EPT support Xen.
>> 
>> From: Zhang Xiantao <xiantao.zhang@intel.com>
>> 
>> With virtual EPT support, an L1 hypervisor can use EPT hardware for an L2 guest's
>> memory virtualization. In this way, the L2 guest's performance can be improved
>> sharply. According to our testing, some benchmarks show a > 5x
>> performance gain.
>> 
>> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
>> 
>> Zhang Xiantao (11):
>>   nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
>>   nestedhap: Change nested p2m's walker to vendor-specific
>>   nested_ept: Implement guest ept's walker
>>   nested_ept: Add permission check for success case
>>   EPT: Make ept data structure or operations neutral
>>   nEPT: Try to enable EPT paging for L2 guest.
>>   nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
>>   nEPT: Use minimal permission for nested p2m.
>>   nEPT: handle invept instruction from L1 VMM
>>   nEPT: expost EPT capablity to L1 VMM
>>   nVMX: Expose VPID capability to nested VMM.
>> 
>>  xen/arch/x86/hvm/hvm.c                  |    7 +-
>>  xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++
>>  xen/arch/x86/hvm/svm/svm.c              |    3 +-
>>  xen/arch/x86/hvm/vmx/vmcs.c             |    2 +-
>>  xen/arch/x86/hvm/vmx/vmx.c              |   76 +++++---
>>  xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++-
>>  xen/arch/x86/mm/guest_walk.c            |   12 +-
>>  xen/arch/x86/mm/hap/Makefile            |    1 +
>>  xen/arch/x86/mm/hap/nested_ept.c        |  345
>> +++++++++++++++++++++++++++++++
>>  xen/arch/x86/mm/hap/nested_hap.c        |   79 +++----
>>  xen/arch/x86/mm/mm-locks.h              |    2 +-
>>  xen/arch/x86/mm/p2m-ept.c               |   96 +++++++--
>>  xen/arch/x86/mm/p2m.c                   |   44 +++--
>>  xen/arch/x86/mm/shadow/multi.c          |    2 +-
>>  xen/include/asm-x86/guest_pt.h          |    8 +
>>  xen/include/asm-x86/hvm/hvm.h           |    9 +-
>>  xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
>>  xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
>>  xen/include/asm-x86/hvm/vmx/vmcs.h      |   31 ++-
>>  xen/include/asm-x86/hvm/vmx/vmx.h       |    6 +-
>>  xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
>>  xen/include/asm-x86/p2m.h               |   17 +-
>>  22 files changed, 859 insertions(+), 153 deletions(-)  create mode 100644
>> xen/arch/x86/mm/hap/nested_ept.c




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:26:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj5z0-0006in-8n; Thu, 13 Dec 2012 10:25:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tj5yy-0006ie-OR
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:25:36 +0000
Received: from [85.158.138.51:6835] by server-16.bemta-3.messagelabs.com id
	3D/53-27634-B1DA9C05; Thu, 13 Dec 2012 10:25:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355394330!22358612!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27498 invoked from network); 13 Dec 2012 10:25:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 10:25:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 10:25:29 +0000
Message-Id: <50C9BB2902000078000B00E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 10:25:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>,
 "Tim Deegan" <tim@xen.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<B6C2EB9186482D47BD0C5A9A48345644033AEDB0@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A48345644033AEDB0@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/11] Add virtual EPT support Xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 01:31, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> Hi, Guys 
> Do you have comments for this patchset ? Thanks!
> Xiantao 

I was actually hoping for Tim to take a look. But you should probably
be a little more patient - it's been just two days since this got posted.

Jan

>> -----Original Message-----
>> From: Zhang, Xiantao
>> Sent: Tuesday, December 11, 2012 1:57 AM
>> To: xen-devel@lists.xensource.com 
>> Cc: JBeulich@suse.com; keir@xen.org; Dong, Eddie; Nakajima, Jun; Zhang,
>> Xiantao
>> Subject: [PATCH 00/11] Add virtual EPT support Xen.
>> 
>> From: Zhang Xiantao <xiantao.zhang@intel.com>
>> 
>> With virtual EPT support, L1 hyerpvisor can use EPT hardware for L2 guest's
>> memory virtualization. In this way, L2 guest's performance can be improved
>> sharply. According to our testing, some benchmarks can show > 5x
>> performance gain.
>> 
>> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
>> 
>> Zhang Xiantao (11):
>>   nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
>>   nestedhap: Change nested p2m's walker to vendor-specific
>>   nested_ept: Implement guest ept's walker
>>   nested_ept: Add permission check for success case
>>   EPT: Make ept data structure or operations neutral
>>   nEPT: Try to enable EPT paging for L2 guest.
>>   nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
>>   nEPT: Use minimal permission for nested p2m.
>>   nEPT: handle invept instruction from L1 VMM
>>   nEPT: expose EPT capability to L1 VMM
>>   nVMX: Expose VPID capability to nested VMM.
>> 
>>  xen/arch/x86/hvm/hvm.c                  |    7 +-
>>  xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++
>>  xen/arch/x86/hvm/svm/svm.c              |    3 +-
>>  xen/arch/x86/hvm/vmx/vmcs.c             |    2 +-
>>  xen/arch/x86/hvm/vmx/vmx.c              |   76 +++++---
>>  xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++-
>>  xen/arch/x86/mm/guest_walk.c            |   12 +-
>>  xen/arch/x86/mm/hap/Makefile            |    1 +
>>  xen/arch/x86/mm/hap/nested_ept.c        |  345 +++++++++++++++++++++++++++++++
>>  xen/arch/x86/mm/hap/nested_hap.c        |   79 +++----
>>  xen/arch/x86/mm/mm-locks.h              |    2 +-
>>  xen/arch/x86/mm/p2m-ept.c               |   96 +++++++--
>>  xen/arch/x86/mm/p2m.c                   |   44 +++--
>>  xen/arch/x86/mm/shadow/multi.c          |    2 +-
>>  xen/include/asm-x86/guest_pt.h          |    8 +
>>  xen/include/asm-x86/hvm/hvm.h           |    9 +-
>>  xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
>>  xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
>>  xen/include/asm-x86/hvm/vmx/vmcs.h      |   31 ++-
>>  xen/include/asm-x86/hvm/vmx/vmx.h       |    6 +-
>>  xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
>>  xen/include/asm-x86/p2m.h               |   17 +-
>>  22 files changed, 859 insertions(+), 153 deletions(-)
>>  create mode 100644 xen/arch/x86/mm/hap/nested_ept.c




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:40:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:40:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6D3-0006zc-O2; Thu, 13 Dec 2012 10:40:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tj6D2-0006zU-KM
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:40:08 +0000
Received: from [85.158.143.35:58563] by server-2.bemta-4.messagelabs.com id
	10/BE-30861-780B9C05; Thu, 13 Dec 2012 10:40:07 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355395203!12699945!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15792 invoked from network); 13 Dec 2012 10:40:03 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-11.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Dec 2012 10:40:03 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:47976
	helo=localhost.localdomain) by smtp.eikelenboom.it with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>)
	id 1Tj6Gv-00030a-73; Thu, 13 Dec 2012 11:44:09 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
To: qemu-devel@nongnu.org
Date: Thu, 13 Dec 2012 11:39:27 +0100
Message-Id: <1355395168-3086-1-git-send-email-linux@eikelenboom.it>
X-Mailer: git-send-email 1.7.2.5
Cc: Anthony.Perard@citrix.com, Sander Eikelenboom <linux@eikelenboom.it>,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] Fix compile errors when enabling Xen debug
	logging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sander Eikelenboom (1):
  Fix compile errors when enabling Xen debug logging.

 hw/xen_pt.c |    4 ++--
 xen-all.c   |    4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:42:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6FH-00075j-9l; Thu, 13 Dec 2012 10:42:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tj6FF-00075d-Hw
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:42:25 +0000
Received: from [85.158.143.35:11857] by server-2.bemta-4.messagelabs.com id
	DB/42-30861-011B9C05; Thu, 13 Dec 2012 10:42:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355395342!5037955!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13218 invoked from network); 13 Dec 2012 10:42:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 10:42:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 10:42:22 +0000
Message-Id: <50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 10:42:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
In-Reply-To: <20121212174312.68146c02@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> On Wed, 12 Dec 2012 17:15:23 -0800
> Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> 
>> On Tue, 11 Dec 2012 12:10:19 +0000
>> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> 
>> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
>> > > On Mon, 10 Dec 2012 09:43:34 +0000
>> > > "Jan Beulich" <JBeulich@suse.com> wrote:
>> > 
>> > That's strange because AFAIK Linux is never editing the MSI-X
>> > entries directly: have a look at
>> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
>> > MSIs into pirqs using hypercalls. Xen should be the only one to
>> > touch the real MSI-X table.
>> 
>> So, this is what's happening. The side effect of:
>> 
>>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
>>                                 dev->msix_table.last) )
>>             WARN();
>>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
>>                                 dev->msix_pba.last) )
>>             WARN();
>> 
>> in msix_capability_init() in Xen is that the dom0 EPT entries that
>> I've mapped are going from RW to read-only. Then when dom0 accesses
>> them, I get an EPT violation. In the case of pure PV, the PTE entry to
>> access the iomem is RW, and the above rangeset addition doesn't affect
>> it. I don't understand why - looking into that now...

As far as I was able to tell back at the time when I implemented
this, existing code shouldn't have mappings for these tables in
place at the time these ranges get added here. But I noted in
the patch description that this is a potential issue (and may need
fixing if deemed severe enough - back then, apparently nobody
really cared, perhaps largely because passthrough to PV guests
isn't considered fully secure anyway).

Now - did that change? I.e. can you describe where the mappings
come from that cause the problem here? Because if that is
possible even with non-hostile code, that might warrant properly
dealing with that, even if pretty involved (I don't think there's a
way to find all mappings of a given page other than a brute force
scan of all L1 page tables the domain currently has).

> Ah, it's coming from:
> 
> ept_p2m_type_to_flags(): 
>    case p2m_mmio_direct:
>        entry->r = entry->x = 1;
>        entry->w = !rangeset_contains_singleton(mmio_ro_ranges, entry->mfn);
> 
> I suppose, the best fix would be to add a check here for dom0 and let it
> have w access?

No, absolutely not. Nor would that help with the same issue in a
DomU.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:43:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Fd-00077H-Mn; Thu, 13 Dec 2012 10:42:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tj6Fb-00076p-Ud
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:42:48 +0000
Received: from [193.109.254.147:62638] by server-13.bemta-14.messagelabs.com
	id DA/92-01725-721B9C05; Thu, 13 Dec 2012 10:42:47 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355395316!10224513!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4493 invoked from network); 13 Dec 2012 10:41:56 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Dec 2012 10:41:56 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:47978
	helo=localhost.localdomain) by smtp.eikelenboom.it with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>)
	id 1Tj6Im-00031S-FR; Thu, 13 Dec 2012 11:46:04 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
To: qemu-devel@nongnu.org
Date: Thu, 13 Dec 2012 11:41:33 +0100
Message-Id: <1355395293-3122-2-git-send-email-linux@eikelenboom.it>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1Tj6Gv-00030a-73>
References: <1Tj6Gv-00030a-73>
Cc: Anthony.Perard@citrix.com, Sander Eikelenboom <linux@eikelenboom.it>,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] Fix compile errors when enabling Xen debug
	logging.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>
---
 hw/xen_pt.c |    4 ++--
 xen-all.c   |    4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen_pt.c b/hw/xen_pt.c
index 7a3846e..ca42a14 100644
--- a/hw/xen_pt.c
+++ b/hw/xen_pt.c
@@ -671,7 +671,7 @@ static int xen_pt_initfn(PCIDevice *d)
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
-                   s->real_device.domain, bus, slot, func);
+                   s->real_device.domain, s->real_device.bus, s->real_device.dev, s->real_device.func);
     }
 
     /* Initialize virtualized PCI configuration (Extended 256 Bytes) */
@@ -752,7 +752,7 @@ out:
     memory_listener_register(&s->memory_listener, &address_space_memory);
     memory_listener_register(&s->io_listener, &address_space_io);
     XEN_PT_LOG(d, "Real physical device %02x:%02x.%d registered successfuly!\n",
-               bus, slot, func);
+               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function);
 
     return 0;
 }
diff --git a/xen-all.c b/xen-all.c
index 046cc2a..d225b9d 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -292,7 +292,7 @@ static int xen_add_to_physmap(XenIOState *state,
     return -1;
 
 go_physmap:
-    DPRINTF("mapping vram to %llx - %llx\n", start_addr, start_addr + size);
+    DPRINTF("mapping vram to %"PRI_xen_pfn" - %"PRI_xen_pfn"\n", start_addr, start_addr + size);
 
     pfn = phys_offset >> TARGET_PAGE_BITS;
     start_gpfn = start_addr >> TARGET_PAGE_BITS;
@@ -365,7 +365,7 @@ static int xen_remove_from_physmap(XenIOState *state,
     phys_offset = physmap->phys_offset;
     size = physmap->size;
 
-    DPRINTF("unmapping vram to %llx - %llx, from %llx\n",
+    DPRINTF("unmapping vram to %"PRI_xen_pfn" - %"PRI_xen_pfn", from %"PRI_xen_pfn"\n",
             phys_offset, phys_offset + size, start_addr);
 
     size >>= TARGET_PAGE_BITS;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:45:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Hn-0007Hh-8E; Thu, 13 Dec 2012 10:45:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj6Hk-0007HY-Su
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 10:45:01 +0000
Received: from [85.158.138.51:65126] by server-6.bemta-3.messagelabs.com id
	B2/B7-12154-6A1B9C05; Thu, 13 Dec 2012 10:44:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355395489!28680344!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4272 invoked from network); 13 Dec 2012 10:44:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:44:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,272,1355097600"; 
   d="scan'208";a="109605"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:44:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 09:11:57 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tj4ph-0003uS-G0;
	Thu, 13 Dec 2012 09:11:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tj4ph-0002gn-F7;
	Thu, 13 Dec 2012 09:11:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14678-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 09:11:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14678: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14678 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14678/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14676
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14676

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  74d4a6cc5392
baseline version:
 xen                  74d4a6cc5392

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:46:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Ix-0007PM-Sx; Thu, 13 Dec 2012 10:46:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj6Iw-0007P2-Gy
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:46:14 +0000
Received: from [85.158.143.35:44118] by server-2.bemta-4.messagelabs.com id
	B3/38-30861-5F1B9C05; Thu, 13 Dec 2012 10:46:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355395571!14614409!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1248 invoked from network); 13 Dec 2012 10:46:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:46:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,272,1355097600"; 
   d="scan'208";a="110043"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:12:17 +0000
Message-ID: <1355393536.10554.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?ISO-8859-1?Q?S=E9bastien_Fr=E9mal?= <sebastien.fremal@gmail.com>
Date: Thu, 13 Dec 2012 10:12:16 +0000
In-Reply-To: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
References: <CAOV6k-Cs-uk2Kts6QArtW5MbK1thRhvfcWqJ7ydZJAWou0KSnA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Stubdom compilation fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-11 at 15:40 +0000, Sébastien Frémal wrote:
> Hello,
> 
> I'm trying to compile Xen 4.2.0 on Ubuntu 11.10 but it crashes when
> compiling the stubdom part. As he can't find stddef.h, he can't create
> libraries and finally crashes. You can find a sample of errors below :

What was the "make" command line you used?

Did you download the 4.2.0 tarball or are you building from Mercurial?
If the latter then which revision. Did you modify anything in the source
tree?

> Making all in misc
> make[6]: entrant dans le répertoire
> « /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/misc »
> gcc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/../tools/xenstore  -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86 -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/fremals/xen-4.2.0/stubdom/../extras/mini-os/include/posix -isystem /home/fremals/xen-4.2.0/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include -isystem /home/fremals/xen-4.2.0/stubdom/lwip-x86_64/src/include/ipv4 -I/home/fremals/xen-4.2.0/stubdom/include -I/home/fremals/xen-4.2.0/stubdom/../xen/include -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions -D_I386MACH_ALLOW_HW_INTERRUPTS -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/ -isystem /home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/targ-include -isystem /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include -B/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/x86_64 -L/home/fremals/xen-4.2.0/stubdom/newlib-x86_64/x86_64-xen-elf/libgloss/libnosys -L/home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/libgloss/x86_64 -DPACKAGE_NAME=\"newlib\" -DPACKAGE_TARNAME=\"newlib\" -DPACKAGE_VERSION=\"1.16.0\" -DPACKAGE_STRING=\"newlib\ 1.16.0\" -DPACKAGE_BUGREPORT=\"\" -I. -I../../../../../newlib-1.16.0/newlib/libc/misc -O2 -DMISSING_SYSCALL_NAMES -fno-builtin      -O2 -g -g -O2   -c -o lib_a-__dprintf.o `test -f '__dprintf.c' || echo '../../../../../newlib-1.16.0/newlib/libc/misc/'`__dprintf.c
> In file included from /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/reent.h:14:0,
>                  from /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/reent.h:48,
>                  from ../../../../../newlib-1.16.0/newlib/libc/misc/__dprintf.c:8:
> /home/fremals/xen-4.2.0/stubdom/newlib-1.16.0/newlib/libc/include/sys/_types.h:67:20: erreur fatale: stddef.h : Aucun fichier ou dossier de ce type
> compilation terminée.
> make[6]: *** [lib_a-__dprintf.o] Erreur 1

This smells a little bit like the issue which was supposed to have been
addressed way back in 18392:8ac3e7e7d823. e.g.
http://old-list-archives.xen.org/archives/html/xen-devel/2008-09/msg00813.html and http://old-list-archives.xen.org/xen-users/2008-11/msg00599.html

Can you try building with LANG=C instead of LANG=fr?

I can't offhand spot anywhere which invokes gcc that might need a LANG=C
which hasn't already got one.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

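[Editor's note] The LANG=C workaround suggested in the stubdom reply above relies on standard POSIX locale precedence: LC_ALL overrides LANG and every other LC_* variable, so prefixing a single command with both forces C-locale (English) tool output without changing the caller's environment. A minimal sketch; the exact build invocation for the Xen 4.2.0 tree (e.g. "make stubdom") is an assumption here, only the locale mechanism is demonstrated:

```shell
# For the Xen tree the workaround would look like (hypothetical target):
#   LANG=C LC_ALL=C make stubdom
#
# The override mechanism itself, shown with a locale-aware shell:
# LC_ALL takes precedence over LANG for the one command it prefixes.
LANG=fr_FR.UTF-8 LC_ALL=C sh -c 'echo "effective locale: ${LC_ALL:-${LANG:-C}}"'
# prints: effective locale: C
```

Setting both variables is the safe form: some build steps parse gcc/make output, and a stray LC_MESSAGES or LANG left in the environment would otherwise still localise it.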
From xen-devel-bounces@lists.xen.org Thu Dec 13 10:46:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6JI-0007S3-BK; Thu, 13 Dec 2012 10:46:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj6JG-0007Re-7O
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:46:34 +0000
Received: from [193.109.254.147:47450] by server-4.bemta-14.messagelabs.com id
	56/60-15233-902B9C05; Thu, 13 Dec 2012 10:46:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355395592!8576439!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12911 invoked from network); 13 Dec 2012 10:46:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:46:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,272,1355097600"; 
   d="scan'208";a="110121"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:22:33 +0000
Message-ID: <1355394151.10554.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:22:31 +0000
In-Reply-To: <d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
References: <patchbomb.1354274486@cosworth.uk.xensource.com>
	<d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 2] xl: Introduce helper macro for
 option parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ping?

On Fri, 2012-11-30 at 11:21 +0000, Ian Campbell wrote:
> # HG changeset patch
> # User Ian Campbell <ijc@hellion.org.uk>
> # Date 1354274264 0
> # Node ID d4cc790b47d8735ae3f2b0c4707bfa58f90a2cd3
> # Parent  b63fbacd5037e79bc4f40429453cb59816f94793
> xl: Introduce helper macro for option parsing.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v2:
> - s/FOREACH_OPT/SWITCH_FOREACH_OPT/
> - Document the macro
> 
> diff -r b63fbacd5037 -r d4cc790b47d8 tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c  Fri Nov 30 11:17:44 2012 +0000
> +++ b/tools/libxl/xl_cmdimpl.c  Fri Nov 30 11:17:44 2012 +0000
> @@ -2297,6 +2297,10 @@ static int64_t parse_mem_size_kb(const c
> 
>  #define COMMON_LONG_OPTS {"help", 0, 0, 'h'}
> 
> +/*
> + * Callers should use SWITCH_FOREACH_OPT in preference to calling this
> + * directly.
> + */
>  static int def_getopt(int argc, char * const argv[],
>                        const char *optstring,
>                        const struct option *longopts,
> @@ -2335,6 +2339,57 @@ static int def_getopt(int argc, char * c
>      return -1;
>  }
> 
> +/*
> + * Wraps def_getopt into a convenient loop+switch to process all arguments.
> + *
> + * _opt:        an int variable, holds the current option during processing.
> + * _opts:       short options, as per getopt_long(3)'s optstring argument.
> + * _lopts:      long options, as per getopt_long(3)'s longopts argument. May
> + *              be null.
> + * _help:       name of this command, for usage string.
> + * _req:        number of non-option command line parameters which are required.
> + *
> + * In addition, the calling context is expected to contain variables
> + * "argc" and "argv", declared in the conventional C manner as the
> + * parameters of main:
> + *   main(int argc, char **argv)
> + *
> + * Callers should treat SWITCH_FOREACH_OPT as they would a switch
> + * statement over the value of _opt: each option given in _opts (or
> + * _lopts) should be handled by a case label, just as it would be
> + * inside an ordinary switch statement.
> + *
> + * In addition to the options provided in _opts, callers must handle
> + * two pseudo options:
> + *  0 -- generated if the user passes the -h option; help will have
> + *       been printed, and the caller should return 0.
> + *  2 -- generated if the user does not provide _req non-option
> + *       arguments; the caller should return 2.
> + *
> + * Example:
> + *
> + * int main_foo(int argc, char **argv) {
> + *     int opt;
> + *
> + *     SWITCH_FOREACH_OPT(opt, "blah", NULL, "foo", 0) {
> + *     case 0: case 2:
> + *          return opt;
> + *      case 'b':
> + *          ... handle b option...
> + *          break;
> + *      case 'l':
> + *          ... handle l option ...
> + *          break;
> + *      case etc etc...
> + *      }
> + *      ... do something useful with the options ...
> + * }
> + */
> +#define SWITCH_FOREACH_OPT(_opt, _opts, _lopts, _help, _req)    \
> +    while (((_opt) = def_getopt(argc, argv, (_opts), (_lopts),  \
> +                                (_help), (_req))) != -1)        \
> +        switch (_opt)
> +
>  static int set_memory_max(uint32_t domid, const char *mem)
>  {
>      int64_t memorykb;
> @@ -2358,8 +2413,10 @@ int main_memmax(int argc, char **argv)
>      char *mem;
>      int rc;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "mem-max", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-max", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      mem = argv[optind + 1];
> @@ -2392,8 +2449,10 @@ int main_memset(int argc, char **argv)
>      int opt = 0;
>      const char *mem;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "mem-set", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-set", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      mem = argv[optind + 1];
> @@ -2431,8 +2490,10 @@ int main_cd_eject(int argc, char **argv)
>      int opt = 0;
>      const char *virtdev;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cd-eject", 2)) != -1)
> -        return opt;
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-eject", 2) {
> +        case 0: case 2:
> +            return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      virtdev = argv[optind + 1];
> @@ -2448,8 +2509,10 @@ int main_cd_insert(int argc, char **argv
>      const char *virtdev;
>      char *file = NULL; /* modified by cd_insert tokenising it */
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cd-insert", 3)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-insert", 3) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      virtdev = argv[optind + 1];
> @@ -2465,24 +2528,22 @@ int main_console(int argc, char **argv)
>      int opt = 0, num = 0;
>      libxl_console_type type = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "n:t:", NULL, "console", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 't':
> -            if (!strcmp(optarg, "pv"))
> -                type = LIBXL_CONSOLE_TYPE_PV;
> -            else if (!strcmp(optarg, "serial"))
> -                type = LIBXL_CONSOLE_TYPE_SERIAL;
> -            else {
> -                fprintf(stderr, "console type supported are: pv, serial\n");
> -                return 2;
> -            }
> -            break;
> -        case 'n':
> -            num = atoi(optarg);
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "n:t:", NULL, "console", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 't':
> +        if (!strcmp(optarg, "pv"))
> +            type = LIBXL_CONSOLE_TYPE_PV;
> +        else if (!strcmp(optarg, "serial"))
> +            type = LIBXL_CONSOLE_TYPE_SERIAL;
> +        else {
> +            fprintf(stderr, "console type supported are: pv, serial\n");
> +            return 2;
> +        }
> +        break;
> +    case 'n':
> +        num = atoi(optarg);
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -2505,14 +2566,12 @@ int main_vncviewer(int argc, char **argv
>      uint32_t domid;
>      int opt, autopass = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "ah", opts, "vncviewer", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            autopass = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "ah", opts, "vncviewer", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        autopass = 1;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -2545,8 +2604,10 @@ int main_pcilist(int argc, char **argv)
>      uint32_t domid;
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pci-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -2584,14 +2645,12 @@ int main_pcidetach(int argc, char **argv
>      int force = 0;
>      const char *bdf = NULL;
> 
> -    while ((opt = def_getopt(argc, argv, "f", NULL, "pci-detach", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'f':
> -            force = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'f':
> +        force = 1;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -2626,8 +2685,10 @@ int main_pciattach(int argc, char **argv
>      int opt;
>      const char *bdf = NULL, *vs = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pci-attach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      bdf = argv[optind + 1];
> @@ -2660,8 +2721,10 @@ int main_pciassignable_list(int argc, ch
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-list", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pciassignable_list();
>      return 0;
> @@ -2692,11 +2755,9 @@ int main_pciassignable_add(int argc, cha
>      int opt;
>      const char *bdf = NULL;
> 
> -    while ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-add", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
> +    case 0: case 2:
> +        return opt;
>      }
> 
>      bdf = argv[optind];
> @@ -2731,14 +2792,12 @@ int main_pciassignable_remove(int argc,
>      const char *bdf = NULL;
>      int rebind = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "r", NULL, "pci-assignable-remove", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'r':
> -            rebind=1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 'r':
> +        rebind=1;
> +        break;
>      }
> 
>      bdf = argv[optind];
> @@ -3549,34 +3608,31 @@ int main_restore(int argc, char **argv)
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "FhcpdeVA",
> -                             opts, "restore", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'c':
> -            console_autoconnect = 1;
> -            break;
> -        case 'p':
> -            paused = 1;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'V':
> -            vnc = 1;
> -            break;
> -        case 'A':
> -            vnc = vncautopass = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "FhcpdeVA", opts, "restore", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 'c':
> +        console_autoconnect = 1;
> +        break;
> +    case 'p':
> +        paused = 1;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'V':
> +        vnc = 1;
> +        break;
> +    case 'A':
> +        vnc = vncautopass = 1;
> +        break;
>      }
> 
>      if (argc-optind == 1) {
> @@ -3613,24 +3669,22 @@ int main_migrate_receive(int argc, char
>      int debug = 0, daemonize = 1, monitor = 1, remus = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "Fedr", NULL, "migrate-receive", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'r':
> -            remus = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "Fedr", NULL, "migrate-receive", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'r':
> +        remus = 1;
> +        break;
>      }
> 
>      if (argc-optind != 0) {
> @@ -3652,14 +3706,12 @@ int main_save(int argc, char **argv)
>      int checkpoint = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "c", NULL, "save", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'c':
> -            checkpoint = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "c", NULL, "save", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'c':
> +        checkpoint = 1;
> +        break;
>      }
> 
>      if (argc-optind > 3) {
> @@ -3685,27 +3737,25 @@ int main_migrate(int argc, char **argv)
>      char *host;
>      int opt, daemonize = 1, monitor = 1, debug = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "FC:s:ed", NULL, "migrate", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'C':
> -            config_filename = optarg;
> -            break;
> -        case 's':
> -            ssh_command = optarg;
> -            break;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "FC:s:ed", NULL, "migrate", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'C':
> +        config_filename = optarg;
> +        break;
> +    case 's':
> +        ssh_command = optarg;
> +        break;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -3729,8 +3779,10 @@ int main_dump_core(int argc, char **argv
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "dump-core", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "dump-core", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
>      return 0;
> @@ -3740,8 +3792,10 @@ int main_pause(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pause", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pause", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pause_domain(find_domain(argv[optind]));
> 
> @@ -3752,8 +3806,10 @@ int main_unpause(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "unpause", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "unpause", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      unpause_domain(find_domain(argv[optind]));
> 
> @@ -3764,8 +3820,10 @@ int main_destroy(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "destroy", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "destroy", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      destroy_domain(find_domain(argv[optind]));
>      return 0;
> @@ -3787,20 +3845,18 @@ static int main_shutdown_or_reboot(int d
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "awF", opts, what, 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        case 'w':
> -            wait_for_it = 1;
> -            break;
> -        case 'F':
> -            fallback_trigger = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "awF", opts, what, 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
> +    case 'w':
> +        wait_for_it = 1;
> +        break;
> +    case 'F':
> +        fallback_trigger = 1;
> +        break;
>      }
> 
>      if (!argv[optind] && !all) {
> @@ -3871,23 +3927,18 @@ int main_list(int argc, char **argv)
>      libxl_dominfo *info, *info_free=0;
>      int nb_domain, rc;
> 
> -    while ((opt = def_getopt(argc, argv, "lvhZ", opts, "list", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'l':
> -            details = 1;
> -            break;
> -        case 'h':
> -            help("list");
> -            return 0;
> -        case 'v':
> -            verbose = 1;
> -            break;
> -        case 'Z':
> -            context = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "lvhZ", opts, "list", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'l':
> +        details = 1;
> +        break;
> +    case 'v':
> +        verbose = 1;
> +        break;
> +    case 'Z':
> +        context = 1;
> +        break;
>      }
> 
>      if (optind >= argc) {
> @@ -3933,8 +3984,10 @@ int main_vm_list(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vm-list", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vm-list", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      list_vm();
>      return 0;
> @@ -3964,45 +4017,40 @@ int main_create(int argc, char **argv)
>          argc--; argv++;
>      }
> 
> -    while ((opt = def_getopt(argc, argv, "Fhnqf:pcdeVA", opts, "create", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'f':
> -            filename = optarg;
> -            break;
> -        case 'p':
> -            paused = 1;
> -            break;
> -        case 'c':
> -            console_autoconnect = 1;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'h':
> -            help("create");
> -            return 0;
> -        case 'n':
> -            dryrun_only = 1;
> -            break;
> -        case 'q':
> -            quiet = 1;
> -            break;
> -        case 'V':
> -            vnc = 1;
> -            break;
> -        case 'A':
> -            vnc = vncautopass = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "Fhnqf:pcdeVA", opts, "create", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'f':
> +        filename = optarg;
> +        break;
> +    case 'p':
> +        paused = 1;
> +        break;
> +    case 'c':
> +        console_autoconnect = 1;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'n':
> +        dryrun_only = 1;
> +        break;
> +    case 'q':
> +        quiet = 1;
> +        break;
> +    case 'V':
> +        vnc = 1;
> +        break;
> +    case 'A':
> +        vnc = vncautopass = 1;
> +        break;
>      }
> 
>      extra_config[0] = '\0';
> @@ -4070,17 +4118,15 @@ int main_config_update(int argc, char **
>          argc--; argv++;
>      }
> 
> -    while ((opt = def_getopt(argc, argv, "dhqf:", opts, "config_update", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'f':
> -            filename = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "dhqf:", opts, "config_update", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'f':
> +        filename = optarg;
> +        break;
>      }
> 
>      extra_config[0] = '\0';
> @@ -4168,8 +4214,11 @@ int main_button_press(int argc, char **a
>      fprintf(stderr, "WARNING: \"button-press\" is deprecated. "
>              "Please use \"trigger\"\n");
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "button-press", 2)) != -1)
> +
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "button-press", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      button_press(find_domain(argv[optind]), argv[optind + 1]);
> 
> @@ -4309,8 +4358,10 @@ int main_vcpulist(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpu-list", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpu-list", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      vcpulist(argc - optind, argv + optind);
>      return 0;
> @@ -4370,8 +4421,10 @@ int main_vcpupin(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-pin", 3)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
>      return 0;
> @@ -4406,8 +4459,10 @@ int main_vcpuset(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-set", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-set", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      vcpuset(find_domain(argv[optind]), argv[optind+1]);
>      return 0;
> @@ -4589,14 +4644,12 @@ int main_info(int argc, char **argv)
>      };
>      int numa = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "hn", opts, "info", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'n':
> -            numa = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "hn", opts, "info", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'n':
> +        numa = 1;
> +        break;
>      }
> 
>      print_info(numa);
> @@ -4630,8 +4683,10 @@ int main_sharing(int argc, char **argv)
>      libxl_dominfo *info, *info_free = NULL;
>      int nb_domain, rc;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "sharing", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "sharing", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (optind >= argc) {
>          info = libxl_list_domain(ctx, &nb_domain);
> @@ -4911,36 +4966,34 @@ int main_sched_credit(int argc, char **a
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "d:w:c:p:t:r:hs", opts, "sched-credit", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            dom = optarg;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'c':
> -            cap = strtol(optarg, NULL, 10);
> -            opt_c = 1;
> -            break;
> -        case 't':
> -            tslice = strtol(optarg, NULL, 10);
> -            opt_t = 1;
> -            break;
> -        case 'r':
> -            ratelimit = strtol(optarg, NULL, 10);
> -            opt_r = 1;
> -            break;
> -        case 's':
> -            opt_s = 1;
> -            break;
> -        case 'p':
> -            cpupool = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "d:w:c:p:t:r:hs", opts, "sched-credit", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        dom = optarg;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'c':
> +        cap = strtol(optarg, NULL, 10);
> +        opt_c = 1;
> +        break;
> +    case 't':
> +        tslice = strtol(optarg, NULL, 10);
> +        opt_t = 1;
> +        break;
> +    case 'r':
> +        ratelimit = strtol(optarg, NULL, 10);
> +        opt_r = 1;
> +        break;
> +    case 's':
> +        opt_s = 1;
> +        break;
> +    case 'p':
> +        cpupool = optarg;
> +        break;
>      }
> 
>      if ((cpupool || opt_s) && (dom || opt_w || opt_c)) {
> @@ -5030,21 +5083,19 @@ int main_sched_credit2(int argc, char **
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "d:w:p:h", opts, "sched-credit2", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            dom = optarg;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'p':
> -            cpupool = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "d:w:p:h", opts, "sched-credit2", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        dom = optarg;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'p':
> +        cpupool = optarg;
> +        break;
>      }
> 
>      if (cpupool && (dom || opt_w)) {
> @@ -5105,37 +5156,35 @@ int main_sched_sedf(int argc, char **arg
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            dom = optarg;
> -            break;
> -        case 'p':
> -            period = strtol(optarg, NULL, 10);
> -            opt_p = 1;
> -            break;
> -        case 's':
> -            slice = strtol(optarg, NULL, 10);
> -            opt_s = 1;
> -            break;
> -        case 'l':
> -            latency = strtol(optarg, NULL, 10);
> -            opt_l = 1;
> -            break;
> -        case 'e':
> -            extra = strtol(optarg, NULL, 10);
> -            opt_e = 1;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'c':
> -            cpupool = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        dom = optarg;
> +        break;
> +    case 'p':
> +        period = strtol(optarg, NULL, 10);
> +        opt_p = 1;
> +        break;
> +    case 's':
> +        slice = strtol(optarg, NULL, 10);
> +        opt_s = 1;
> +        break;
> +    case 'l':
> +        latency = strtol(optarg, NULL, 10);
> +        opt_l = 1;
> +        break;
> +    case 'e':
> +        extra = strtol(optarg, NULL, 10);
> +        opt_e = 1;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'c':
> +        cpupool = optarg;
> +        break;
>      }
> 
>      if (cpupool && (dom || opt_p || opt_s || opt_l || opt_e || opt_w)) {
> @@ -5202,8 +5251,10 @@ int main_domid(int argc, char **argv)
>      int opt;
>      const char *domname = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "domid", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "domid", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domname = argv[optind];
> 
> @@ -5224,8 +5275,10 @@ int main_domname(int argc, char **argv)
>      char *domname = NULL;
>      char *endptr = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "domname", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "domname", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = strtol(argv[optind], &endptr, 10);
>      if (domid == 0 && !strcmp(endptr, argv[optind])) {
> @@ -5252,8 +5305,10 @@ int main_rename(int argc, char **argv)
>      int opt;
>      const char *dom, *new_name;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "rename", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "rename", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      dom = argv[optind++];
>      new_name = argv[optind];
> @@ -5276,8 +5331,10 @@ int main_trigger(int argc, char **argv)
>      const char *trigger_name = NULL;
>      libxl_trigger trigger;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "trigger", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "trigger", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind++]);
> 
> @@ -5306,8 +5363,10 @@ int main_sysrq(int argc, char **argv)
>      int opt;
>      const char *sysrq = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "sysrq", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "sysrq", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind++]);
> 
> @@ -5329,8 +5388,10 @@ int main_debug_keys(int argc, char **arg
>      int opt;
>      char *keys;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "debug-keys", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "debug-keys", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      keys = argv[optind];
> 
> @@ -5349,14 +5410,12 @@ int main_dmesg(int argc, char **argv)
>      char *line;
>      int opt, ret = 1;
> 
> -    while ((opt = def_getopt(argc, argv, "c", NULL, "dmesg", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'c':
> -            clear = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "c", NULL, "dmesg", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'c':
> +        clear = 1;
> +        break;
>      }
> 
>      cr = libxl_xen_console_read_start(ctx, clear);
> @@ -5375,8 +5434,10 @@ int main_top(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "top", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "top", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      return system("xentop");
>  }
> @@ -5392,8 +5453,10 @@ int main_networkattach(int argc, char **
>      int i;
>      unsigned int val;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "network-attach", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "network-attach", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (argc-optind > 11) {
>          help("network-attach");
> @@ -5479,8 +5542,10 @@ int main_networklist(int argc, char **ar
>      libxl_nicinfo nicinfo;
>      int nb, i;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "network-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "network-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
>      printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
> @@ -5516,8 +5581,10 @@ int main_networkdetach(int argc, char **
>      int opt;
>      libxl_device_nic nic;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "network-detach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "network-detach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -5547,8 +5614,10 @@ int main_blockattach(int argc, char **ar
>      libxl_device_disk disk = { 0 };
>      XLU_Config *config = 0;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "block-attach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "block-attach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
>          fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
> @@ -5582,8 +5651,10 @@ int main_blocklist(int argc, char **argv
>      libxl_device_disk *disks;
>      libxl_diskinfo diskinfo;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "block-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "block-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
>             "Vdev", "BE", "handle", "state", "evt-ch", "ring-ref", "BE-path");
> @@ -5618,8 +5689,10 @@ int main_blockdetach(int argc, char **ar
>      int opt, rc = 0;
>      libxl_device_disk disk;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "block-detach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "block-detach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -5643,8 +5716,10 @@ int main_vtpmattach(int argc, char **arg
>      unsigned int val;
>      uint32_t domid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-attach", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-attach", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
>          fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
> @@ -5696,8 +5771,10 @@ int main_vtpmlist(int argc, char **argv)
>      libxl_vtpminfo vtpminfo;
>      int nb, i;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
>      printf("%-3s %-2s %-36s %-6s %-5s %-6s %-5s %-10s\n",
> @@ -5736,8 +5813,10 @@ int main_vtpmdetach(int argc, char **arg
>      libxl_device_vtpm vtpm;
>      libxl_uuid uuid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-detach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-detach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -5928,14 +6007,12 @@ int main_uptime(int argc, char **argv)
>      int nb_doms = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "s", NULL, "uptime", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 's':
> -            short_mode = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "s", NULL, "uptime", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 's':
> +        short_mode = 1;
> +        break;
>      }
> 
>      for (;(dom = argv[optind]) != NULL; nb_doms++,optind++)
> @@ -5955,17 +6032,15 @@ int main_tmem_list(int argc, char **argv
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "al", NULL, "tmem-list", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'l':
> -            use_long = 1;
> -            break;
> -        case 'a':
> -            all = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "al", NULL, "tmem-list", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'l':
> +        use_long = 1;
> +        break;
> +    case 'a':
> +        all = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -5996,14 +6071,12 @@ int main_tmem_freeze(int argc, char **ar
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-freeze", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-freeze", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6029,14 +6102,12 @@ int main_tmem_thaw(int argc, char **argv
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-thaw", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-thaw", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6064,26 +6135,24 @@ int main_tmem_set(int argc, char **argv)
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "aw:c:p:", NULL, "tmem-set", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'c':
> -            cap = strtol(optarg, NULL, 10);
> -            opt_c = 1;
> -            break;
> -        case 'p':
> -            compress = strtol(optarg, NULL, 10);
> -            opt_p = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "aw:c:p:", NULL, "tmem-set", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'c':
> +        cap = strtol(optarg, NULL, 10);
> +        opt_c = 1;
> +        break;
> +    case 'p':
> +        compress = strtol(optarg, NULL, 10);
> +        opt_p = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6125,20 +6194,18 @@ int main_tmem_shared_auth(int argc, char
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "au:A:", NULL, "tmem-shared-auth", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        case 'u':
> -            uuid = optarg;
> -            break;
> -        case 'A':
> -            autharg = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "au:A:", NULL, "tmem-shared-auth", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
> +    case 'u':
> +        uuid = optarg;
> +        break;
> +    case 'A':
> +        autharg = optarg;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6175,8 +6242,10 @@ int main_tmem_freeable(int argc, char **
>      int opt;
>      int mb;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "tmem-freeable", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "tmem-freeable", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      mb = libxl_tmem_freeable(ctx);
>      if (mb == -1)
> @@ -6215,17 +6284,15 @@ int main_cpupoolcreate(int argc, char **
>      libxl_cputopology *topology;
>      int rc = -ERROR_FAIL;
> 
> -    while ((opt = def_getopt(argc, argv, "hnf:", opts, "cpupool-create", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'f':
> -            filename = optarg;
> -            break;
> -        case 'n':
> -            dryrun_only = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "hnf:", opts, "cpupool-create", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'f':
> +        filename = optarg;
> +        break;
> +    case 'n':
> +        dryrun_only = 1;
> +        break;
>      }
> 
>      memset(extra_config, 0, sizeof(extra_config));
> @@ -6400,14 +6467,12 @@ int main_cpupoollist(int argc, char **ar
>      char *name;
>      int ret = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "hc", opts, "cpupool-list", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            break;
> -        case 'c':
> -            opt_cpus = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 1) {
> +    case 0: case 2:
> +        break;
> +    case 'c':
> +        opt_cpus = 1;
> +        break;
>      }
> 
>      if (optind < argc) {
> @@ -6467,8 +6532,10 @@ int main_cpupooldestroy(int argc, char *
>      const char *pool;
>      uint32_t poolid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-destroy", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-destroy", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind];
> 
> @@ -6488,8 +6555,10 @@ int main_cpupoolrename(int argc, char **
>      const char *new_name;
>      uint32_t poolid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-rename", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-rename", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind++];
> 
> @@ -6518,8 +6587,10 @@ int main_cpupoolcpuadd(int argc, char **
>      int node;
>      int n;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-add", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-add", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind++];
>      node = -1;
> @@ -6562,8 +6633,10 @@ int main_cpupoolcpuremove(int argc, char
>      int node;
>      int n;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-remove", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-remove", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind++];
>      node = -1;
> @@ -6605,8 +6678,10 @@ int main_cpupoolmigrate(int argc, char *
>      const char *dom;
>      uint32_t domid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-migrate", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-migrate", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      dom = argv[optind++];
>      pool = argv[optind];
> @@ -6645,8 +6720,11 @@ int main_cpupoolnumasplit(int argc, char
>      libxl_cputopology *topology;
>      libxl_dominfo info;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-numa-split", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-numa-split", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> +
>      ret = 0;
> 
>      poolinfo = libxl_list_cpupool(ctx, &n_pools);
> @@ -6898,27 +6976,24 @@ int main_remus(int argc, char **argv)
>      r_info.blackhole = 0;
>      r_info.compression = 1;
> 
> -    while ((opt = def_getopt(argc, argv, "bui:s:e", NULL, "remus", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -
> -        case 'i':
> -           r_info.interval = atoi(optarg);
> -            break;
> -        case 'b':
> -            r_info.blackhole = 1;
> -            break;
> -        case 'u':
> -           r_info.compression = 0;
> -            break;
> -        case 's':
> -            ssh_command = optarg;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'i':
> +        r_info.interval = atoi(optarg);
> +        break;
> +    case 'b':
> +        r_info.blackhole = 1;
> +        break;
> +    case 'u':
> +        r_info.compression = 0;
> +        break;
> +    case 's':
> +        ssh_command = optarg;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:46:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6JI-0007S3-BK; Thu, 13 Dec 2012 10:46:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj6JG-0007Re-7O
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:46:34 +0000
Received: from [193.109.254.147:47450] by server-4.bemta-14.messagelabs.com id
	56/60-15233-902B9C05; Thu, 13 Dec 2012 10:46:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355395592!8576439!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12911 invoked from network); 13 Dec 2012 10:46:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:46:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,272,1355097600"; 
   d="scan'208";a="110121"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:22:33 +0000
Message-ID: <1355394151.10554.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:22:31 +0000
In-Reply-To: <d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
References: <patchbomb.1354274486@cosworth.uk.xensource.com>
	<d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 2] xl: Introduce helper macro for
 option parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ping?

On Fri, 2012-11-30 at 11:21 +0000, Ian Campbell wrote:
> # HG changeset patch
> # User Ian Campbell <ijc@hellion.org.uk>
> # Date 1354274264 0
> # Node ID d4cc790b47d8735ae3f2b0c4707bfa58f90a2cd3
> # Parent  b63fbacd5037e79bc4f40429453cb59816f94793
> xl: Introduce helper macro for option parsing.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v2:
> - s/FOREACH_OPT/SWITCH_FOREACH_OPT/
> - Document the macro
> 
> diff -r b63fbacd5037 -r d4cc790b47d8 tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c  Fri Nov 30 11:17:44 2012 +0000
> +++ b/tools/libxl/xl_cmdimpl.c  Fri Nov 30 11:17:44 2012 +0000
> @@ -2297,6 +2297,10 @@ static int64_t parse_mem_size_kb(const c
> 
>  #define COMMON_LONG_OPTS {"help", 0, 0, 'h'}
> 
> +/*
> + * Callers should use SWITCH_FOREACH_OPT in preference to calling this
> + * directly.
> + */
>  static int def_getopt(int argc, char * const argv[],
>                        const char *optstring,
>                        const struct option *longopts,
> @@ -2335,6 +2339,57 @@ static int def_getopt(int argc, char * c
>      return -1;
>  }
> 
> +/*
> + * Wraps def_getopt into a convenient loop+switch to process all arguments.
> + *
> + * _opt:        an int variable, holds the current option during processing.
> + * _opts:       short options, as per getopt_long(3)'s optstring argument.
> + * _lopts:      long options, as per getopt_long(3)'s longopts argument. May
> + *              be NULL.
> + * _help:       name of this command, for usage string.
> + * _req:        number of non-option command line parameters which are required.
> + *
> + * In addition, the calling context is expected to contain "argc" and
> + * "argv" variables, in the conventional C style:
> + *   main(int argc, char **argv)
> + *
> + * Callers should treat SWITCH_FOREACH_OPT as they would a switch
> + * statement over the value of _opt. Each option given in _opts (or
> + * _lopts) should be handled by a corresponding case label.
> + *
> + * In addition to the options provided in _opts, callers must handle
> + * two additional pseudo options:
> + *  0 -- generated if the user passes a -h option. Help will be printed;
> + *       the caller should return 0.
> + *  2 -- generated if the user does not provide _req non-option arguments;
> + *       the caller should return 2.
> + *
> + * Example:
> + *
> + * int main_foo(int argc, char **argv) {
> + *     int opt;
> + *
> + *     SWITCH_FOREACH_OPT(opt, "blah", NULL, "foo", 0) {
> + *     case 0: case 2:
> + *         return opt;
> + *     case 'b':
> + *         ... handle b option ...
> + *         break;
> + *     case 'l':
> + *         ... handle l option ...
> + *         break;
> + *     case etc etc...
> + *     }
> + *     ... do something useful with the options ...
> + * }
> + */
> +#define SWITCH_FOREACH_OPT(_opt, _opts, _lopts, _help, _req)    \
> +    while (((_opt) = def_getopt(argc, argv, (_opts), (_lopts),  \
> +                                (_help), (_req))) != -1)        \
> +        switch (_opt)
> +
>  static int set_memory_max(uint32_t domid, const char *mem)
>  {
>      int64_t memorykb;
> @@ -2358,8 +2413,10 @@ int main_memmax(int argc, char **argv)
>      char *mem;
>      int rc;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "mem-max", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-max", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      mem = argv[optind + 1];
> @@ -2392,8 +2449,10 @@ int main_memset(int argc, char **argv)
>      int opt = 0;
>      const char *mem;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "mem-set", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-set", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      mem = argv[optind + 1];
> @@ -2431,8 +2490,10 @@ int main_cd_eject(int argc, char **argv)
>      int opt = 0;
>      const char *virtdev;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cd-eject", 2)) != -1)
> -        return opt;
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-eject", 2) {
> +    case 0: case 2:
> +        return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      virtdev = argv[optind + 1];
> @@ -2448,8 +2509,10 @@ int main_cd_insert(int argc, char **argv
>      const char *virtdev;
>      char *file = NULL; /* modified by cd_insert tokenising it */
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cd-insert", 3)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-insert", 3) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      virtdev = argv[optind + 1];
> @@ -2465,24 +2528,22 @@ int main_console(int argc, char **argv)
>      int opt = 0, num = 0;
>      libxl_console_type type = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "n:t:", NULL, "console", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 't':
> -            if (!strcmp(optarg, "pv"))
> -                type = LIBXL_CONSOLE_TYPE_PV;
> -            else if (!strcmp(optarg, "serial"))
> -                type = LIBXL_CONSOLE_TYPE_SERIAL;
> -            else {
> -                fprintf(stderr, "console type supported are: pv, serial\n");
> -                return 2;
> -            }
> -            break;
> -        case 'n':
> -            num = atoi(optarg);
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "n:t:", NULL, "console", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 't':
> +        if (!strcmp(optarg, "pv"))
> +            type = LIBXL_CONSOLE_TYPE_PV;
> +        else if (!strcmp(optarg, "serial"))
> +            type = LIBXL_CONSOLE_TYPE_SERIAL;
> +        else {
> +            fprintf(stderr, "console types supported are: pv, serial\n");
> +            return 2;
> +        }
> +        break;
> +    case 'n':
> +        num = atoi(optarg);
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -2505,14 +2566,12 @@ int main_vncviewer(int argc, char **argv
>      uint32_t domid;
>      int opt, autopass = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "ah", opts, "vncviewer", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            autopass = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "ah", opts, "vncviewer", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        autopass = 1;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -2545,8 +2604,10 @@ int main_pcilist(int argc, char **argv)
>      uint32_t domid;
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pci-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -2584,14 +2645,12 @@ int main_pcidetach(int argc, char **argv
>      int force = 0;
>      const char *bdf = NULL;
> 
> -    while ((opt = def_getopt(argc, argv, "f", NULL, "pci-detach", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'f':
> -            force = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'f':
> +        force = 1;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -2626,8 +2685,10 @@ int main_pciattach(int argc, char **argv
>      int opt;
>      const char *bdf = NULL, *vs = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pci-attach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
>      bdf = argv[optind + 1];
> @@ -2660,8 +2721,10 @@ int main_pciassignable_list(int argc, ch
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-list", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pciassignable_list();
>      return 0;
> @@ -2692,11 +2755,9 @@ int main_pciassignable_add(int argc, cha
>      int opt;
>      const char *bdf = NULL;
> 
> -    while ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-add", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
> +    case 0: case 2:
> +        return opt;
>      }
> 
>      bdf = argv[optind];
> @@ -2731,14 +2792,12 @@ int main_pciassignable_remove(int argc,
>      const char *bdf = NULL;
>      int rebind = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "r", NULL, "pci-assignable-remove", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'r':
> -            rebind=1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 'r':
> +        rebind=1;
> +        break;
>      }
> 
>      bdf = argv[optind];
> @@ -3549,34 +3608,31 @@ int main_restore(int argc, char **argv)
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "FhcpdeVA",
> -                             opts, "restore", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'c':
> -            console_autoconnect = 1;
> -            break;
> -        case 'p':
> -            paused = 1;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'V':
> -            vnc = 1;
> -            break;
> -        case 'A':
> -            vnc = vncautopass = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "FhcpdeVA", opts, "restore", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 'c':
> +        console_autoconnect = 1;
> +        break;
> +    case 'p':
> +        paused = 1;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'V':
> +        vnc = 1;
> +        break;
> +    case 'A':
> +        vnc = vncautopass = 1;
> +        break;
>      }
> 
>      if (argc-optind == 1) {
> @@ -3613,24 +3669,22 @@ int main_migrate_receive(int argc, char
>      int debug = 0, daemonize = 1, monitor = 1, remus = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "Fedr", NULL, "migrate-receive", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'r':
> -            remus = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "Fedr", NULL, "migrate-receive", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'r':
> +        remus = 1;
> +        break;
>      }
> 
>      if (argc-optind != 0) {
> @@ -3652,14 +3706,12 @@ int main_save(int argc, char **argv)
>      int checkpoint = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "c", NULL, "save", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'c':
> -            checkpoint = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "c", NULL, "save", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'c':
> +        checkpoint = 1;
> +        break;
>      }
> 
>      if (argc-optind > 3) {
> @@ -3685,27 +3737,25 @@ int main_migrate(int argc, char **argv)
>      char *host;
>      int opt, daemonize = 1, monitor = 1, debug = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "FC:s:ed", NULL, "migrate", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'C':
> -            config_filename = optarg;
> -            break;
> -        case 's':
> -            ssh_command = optarg;
> -            break;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "FC:s:ed", NULL, "migrate", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'C':
> +        config_filename = optarg;
> +        break;
> +    case 's':
> +        ssh_command = optarg;
> +        break;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> @@ -3729,8 +3779,10 @@ int main_dump_core(int argc, char **argv
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "dump-core", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "dump-core", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
>      return 0;
> @@ -3740,8 +3792,10 @@ int main_pause(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "pause", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pause", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pause_domain(find_domain(argv[optind]));
> 
> @@ -3752,8 +3806,10 @@ int main_unpause(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "unpause", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "unpause", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      unpause_domain(find_domain(argv[optind]));
> 
> @@ -3764,8 +3820,10 @@ int main_destroy(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "destroy", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "destroy", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      destroy_domain(find_domain(argv[optind]));
>      return 0;
> @@ -3787,20 +3845,18 @@ static int main_shutdown_or_reboot(int d
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "awF", opts, what, 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        case 'w':
> -            wait_for_it = 1;
> -            break;
> -        case 'F':
> -            fallback_trigger = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "awF", opts, what, 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
> +    case 'w':
> +        wait_for_it = 1;
> +        break;
> +    case 'F':
> +        fallback_trigger = 1;
> +        break;
>      }
> 
>      if (!argv[optind] && !all) {
> @@ -3871,23 +3927,18 @@ int main_list(int argc, char **argv)
>      libxl_dominfo *info, *info_free=0;
>      int nb_domain, rc;
> 
> -    while ((opt = def_getopt(argc, argv, "lvhZ", opts, "list", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'l':
> -            details = 1;
> -            break;
> -        case 'h':
> -            help("list");
> -            return 0;
> -        case 'v':
> -            verbose = 1;
> -            break;
> -        case 'Z':
> -            context = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "lvhZ", opts, "list", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'l':
> +        details = 1;
> +        break;
> +    case 'v':
> +        verbose = 1;
> +        break;
> +    case 'Z':
> +        context = 1;
> +        break;
>      }
> 
>      if (optind >= argc) {
> @@ -3933,8 +3984,10 @@ int main_vm_list(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vm-list", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vm-list", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      list_vm();
>      return 0;
> @@ -3964,45 +4017,40 @@ int main_create(int argc, char **argv)
>          argc--; argv++;
>      }
> 
> -    while ((opt = def_getopt(argc, argv, "Fhnqf:pcdeVA", opts, "create", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'f':
> -            filename = optarg;
> -            break;
> -        case 'p':
> -            paused = 1;
> -            break;
> -        case 'c':
> -            console_autoconnect = 1;
> -            break;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'F':
> -            daemonize = 0;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            monitor = 0;
> -            break;
> -        case 'h':
> -            help("create");
> -            return 0;
> -        case 'n':
> -            dryrun_only = 1;
> -            break;
> -        case 'q':
> -            quiet = 1;
> -            break;
> -        case 'V':
> -            vnc = 1;
> -            break;
> -        case 'A':
> -            vnc = vncautopass = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "Fhnqf:pcdeVA", opts, "create", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'f':
> +        filename = optarg;
> +        break;
> +    case 'p':
> +        paused = 1;
> +        break;
> +    case 'c':
> +        console_autoconnect = 1;
> +        break;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'F':
> +        daemonize = 0;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        monitor = 0;
> +        break;
> +    case 'n':
> +        dryrun_only = 1;
> +        break;
> +    case 'q':
> +        quiet = 1;
> +        break;
> +    case 'V':
> +        vnc = 1;
> +        break;
> +    case 'A':
> +        vnc = vncautopass = 1;
> +        break;
>      }
> 
>      extra_config[0] = '\0';
> @@ -4070,17 +4118,15 @@ int main_config_update(int argc, char **
>          argc--; argv++;
>      }
> 
> -    while ((opt = def_getopt(argc, argv, "dhqf:", opts, "config_update", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            debug = 1;
> -            break;
> -        case 'f':
> -            filename = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "dhqf:", opts, "config_update", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        debug = 1;
> +        break;
> +    case 'f':
> +        filename = optarg;
> +        break;
>      }
> 
>      extra_config[0] = '\0';
> @@ -4168,8 +4214,11 @@ int main_button_press(int argc, char **a
>      fprintf(stderr, "WARNING: \"button-press\" is deprecated. "
>              "Please use \"trigger\"\n");
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "button-press", 2)) != -1)
> +
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "button-press", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      button_press(find_domain(argv[optind]), argv[optind + 1]);
> 
> @@ -4309,8 +4358,10 @@ int main_vcpulist(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpu-list", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpu-list", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      vcpulist(argc - optind, argv + optind);
>      return 0;
> @@ -4370,8 +4421,10 @@ int main_vcpupin(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-pin", 3)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
>      return 0;
> @@ -4406,8 +4459,10 @@ int main_vcpuset(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-set", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-set", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      vcpuset(find_domain(argv[optind]), argv[optind+1]);
>      return 0;
> @@ -4589,14 +4644,12 @@ int main_info(int argc, char **argv)
>      };
>      int numa = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "hn", opts, "info", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'n':
> -            numa = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "hn", opts, "info", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'n':
> +        numa = 1;
> +        break;
>      }
> 
>      print_info(numa);
> @@ -4630,8 +4683,10 @@ int main_sharing(int argc, char **argv)
>      libxl_dominfo *info, *info_free = NULL;
>      int nb_domain, rc;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "sharing", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "sharing", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (optind >= argc) {
>          info = libxl_list_domain(ctx, &nb_domain);
> @@ -4911,36 +4966,34 @@ int main_sched_credit(int argc, char **a
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "d:w:c:p:t:r:hs", opts, "sched-credit", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            dom = optarg;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'c':
> -            cap = strtol(optarg, NULL, 10);
> -            opt_c = 1;
> -            break;
> -        case 't':
> -            tslice = strtol(optarg, NULL, 10);
> -            opt_t = 1;
> -            break;
> -        case 'r':
> -            ratelimit = strtol(optarg, NULL, 10);
> -            opt_r = 1;
> -            break;
> -        case 's':
> -            opt_s = 1;
> -            break;
> -        case 'p':
> -            cpupool = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "d:w:c:p:t:r:hs", opts, "sched-credit", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        dom = optarg;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'c':
> +        cap = strtol(optarg, NULL, 10);
> +        opt_c = 1;
> +        break;
> +    case 't':
> +        tslice = strtol(optarg, NULL, 10);
> +        opt_t = 1;
> +        break;
> +    case 'r':
> +        ratelimit = strtol(optarg, NULL, 10);
> +        opt_r = 1;
> +        break;
> +    case 's':
> +        opt_s = 1;
> +        break;
> +    case 'p':
> +        cpupool = optarg;
> +        break;
>      }
> 
>      if ((cpupool || opt_s) && (dom || opt_w || opt_c)) {
> @@ -5030,21 +5083,19 @@ int main_sched_credit2(int argc, char **
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "d:w:p:h", opts, "sched-credit2", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            dom = optarg;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'p':
> -            cpupool = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "d:w:p:h", opts, "sched-credit2", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        dom = optarg;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'p':
> +        cpupool = optarg;
> +        break;
>      }
> 
>      if (cpupool && (dom || opt_w)) {
> @@ -5105,37 +5156,35 @@ int main_sched_sedf(int argc, char **arg
>          {0, 0, 0, 0}
>      };
> 
> -    while ((opt = def_getopt(argc, argv, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'd':
> -            dom = optarg;
> -            break;
> -        case 'p':
> -            period = strtol(optarg, NULL, 10);
> -            opt_p = 1;
> -            break;
> -        case 's':
> -            slice = strtol(optarg, NULL, 10);
> -            opt_s = 1;
> -            break;
> -        case 'l':
> -            latency = strtol(optarg, NULL, 10);
> -            opt_l = 1;
> -            break;
> -        case 'e':
> -            extra = strtol(optarg, NULL, 10);
> -            opt_e = 1;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'c':
> -            cpupool = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'd':
> +        dom = optarg;
> +        break;
> +    case 'p':
> +        period = strtol(optarg, NULL, 10);
> +        opt_p = 1;
> +        break;
> +    case 's':
> +        slice = strtol(optarg, NULL, 10);
> +        opt_s = 1;
> +        break;
> +    case 'l':
> +        latency = strtol(optarg, NULL, 10);
> +        opt_l = 1;
> +        break;
> +    case 'e':
> +        extra = strtol(optarg, NULL, 10);
> +        opt_e = 1;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'c':
> +        cpupool = optarg;
> +        break;
>      }
> 
>      if (cpupool && (dom || opt_p || opt_s || opt_l || opt_e || opt_w)) {
> @@ -5202,8 +5251,10 @@ int main_domid(int argc, char **argv)
>      int opt;
>      const char *domname = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "domid", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "domid", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domname = argv[optind];
> 
> @@ -5224,8 +5275,10 @@ int main_domname(int argc, char **argv)
>      char *domname = NULL;
>      char *endptr = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "domname", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "domname", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = strtol(argv[optind], &endptr, 10);
>      if (domid == 0 && !strcmp(endptr, argv[optind])) {
> @@ -5252,8 +5305,10 @@ int main_rename(int argc, char **argv)
>      int opt;
>      const char *dom, *new_name;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "rename", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "rename", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      dom = argv[optind++];
>      new_name = argv[optind];
> @@ -5276,8 +5331,10 @@ int main_trigger(int argc, char **argv)
>      const char *trigger_name = NULL;
>      libxl_trigger trigger;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "trigger", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "trigger", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind++]);
> 
> @@ -5306,8 +5363,10 @@ int main_sysrq(int argc, char **argv)
>      int opt;
>      const char *sysrq = NULL;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "sysrq", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "sysrq", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind++]);
> 
> @@ -5329,8 +5388,10 @@ int main_debug_keys(int argc, char **arg
>      int opt;
>      char *keys;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "debug-keys", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "debug-keys", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      keys = argv[optind];
> 
> @@ -5349,14 +5410,12 @@ int main_dmesg(int argc, char **argv)
>      char *line;
>      int opt, ret = 1;
> 
> -    while ((opt = def_getopt(argc, argv, "c", NULL, "dmesg", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'c':
> -            clear = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "c", NULL, "dmesg", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'c':
> +        clear = 1;
> +        break;
>      }
> 
>      cr = libxl_xen_console_read_start(ctx, clear);
> @@ -5375,8 +5434,10 @@ int main_top(int argc, char **argv)
>  {
>      int opt;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "top", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "top", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      return system("xentop");
>  }
> @@ -5392,8 +5453,10 @@ int main_networkattach(int argc, char **
>      int i;
>      unsigned int val;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "network-attach", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "network-attach", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (argc-optind > 11) {
>          help("network-attach");
> @@ -5479,8 +5542,10 @@ int main_networklist(int argc, char **ar
>      libxl_nicinfo nicinfo;
>      int nb, i;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "network-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "network-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
>      printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
> @@ -5516,8 +5581,10 @@ int main_networkdetach(int argc, char **
>      int opt;
>      libxl_device_nic nic;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "network-detach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "network-detach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -5547,8 +5614,10 @@ int main_blockattach(int argc, char **ar
>      libxl_device_disk disk = { 0 };
>      XLU_Config *config = 0;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "block-attach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "block-attach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
>          fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
> @@ -5582,8 +5651,10 @@ int main_blocklist(int argc, char **argv
>      libxl_device_disk *disks;
>      libxl_diskinfo diskinfo;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "block-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "block-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
>             "Vdev", "BE", "handle", "state", "evt-ch", "ring-ref", "BE-path");
> @@ -5618,8 +5689,10 @@ int main_blockdetach(int argc, char **ar
>      int opt, rc = 0;
>      libxl_device_disk disk;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "block-detach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "block-detach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -5643,8 +5716,10 @@ int main_vtpmattach(int argc, char **arg
>      unsigned int val;
>      uint32_t domid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-attach", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-attach", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
>          fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
> @@ -5696,8 +5771,10 @@ int main_vtpmlist(int argc, char **argv)
>      libxl_vtpminfo vtpminfo;
>      int nb, i;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-list", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-list", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
>      printf("%-3s %-2s %-36s %-6s %-5s %-6s %-5s %-10s\n",
> @@ -5736,8 +5813,10 @@ int main_vtpmdetach(int argc, char **arg
>      libxl_device_vtpm vtpm;
>      libxl_uuid uuid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-detach", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-detach", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      domid = find_domain(argv[optind]);
> 
> @@ -5928,14 +6007,12 @@ int main_uptime(int argc, char **argv)
>      int nb_doms = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "s", NULL, "uptime", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 's':
> -            short_mode = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "s", NULL, "uptime", 1) {
> +    case 0: case 2:
> +        return opt;
> +    case 's':
> +        short_mode = 1;
> +        break;
>      }
> 
>      for (;(dom = argv[optind]) != NULL; nb_doms++,optind++)
> @@ -5955,17 +6032,15 @@ int main_tmem_list(int argc, char **argv
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "al", NULL, "tmem-list", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'l':
> -            use_long = 1;
> -            break;
> -        case 'a':
> -            all = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "al", NULL, "tmem-list", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'l':
> +        use_long = 1;
> +        break;
> +    case 'a':
> +        all = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -5996,14 +6071,12 @@ int main_tmem_freeze(int argc, char **ar
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-freeze", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-freeze", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6029,14 +6102,12 @@ int main_tmem_thaw(int argc, char **argv
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-thaw", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-thaw", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6064,26 +6135,24 @@ int main_tmem_set(int argc, char **argv)
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "aw:c:p:", NULL, "tmem-set", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        case 'w':
> -            weight = strtol(optarg, NULL, 10);
> -            opt_w = 1;
> -            break;
> -        case 'c':
> -            cap = strtol(optarg, NULL, 10);
> -            opt_c = 1;
> -            break;
> -        case 'p':
> -            compress = strtol(optarg, NULL, 10);
> -            opt_p = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "aw:c:p:", NULL, "tmem-set", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
> +    case 'w':
> +        weight = strtol(optarg, NULL, 10);
> +        opt_w = 1;
> +        break;
> +    case 'c':
> +        cap = strtol(optarg, NULL, 10);
> +        opt_c = 1;
> +        break;
> +    case 'p':
> +        compress = strtol(optarg, NULL, 10);
> +        opt_p = 1;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6125,20 +6194,18 @@ int main_tmem_shared_auth(int argc, char
>      int all = 0;
>      int opt;
> 
> -    while ((opt = def_getopt(argc, argv, "au:A:", NULL, "tmem-shared-auth", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'a':
> -            all = 1;
> -            break;
> -        case 'u':
> -            uuid = optarg;
> -            break;
> -        case 'A':
> -            autharg = optarg;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "au:A:", NULL, "tmem-shared-auth", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'a':
> +        all = 1;
> +        break;
> +    case 'u':
> +        uuid = optarg;
> +        break;
> +    case 'A':
> +        autharg = optarg;
> +        break;
>      }
> 
>      dom = argv[optind];
> @@ -6175,8 +6242,10 @@ int main_tmem_freeable(int argc, char **
>      int opt;
>      int mb;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "tmem-freeable", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "tmem-freeable", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      mb = libxl_tmem_freeable(ctx);
>      if (mb == -1)
> @@ -6215,17 +6284,15 @@ int main_cpupoolcreate(int argc, char **
>      libxl_cputopology *topology;
>      int rc = -ERROR_FAIL;
> 
> -    while ((opt = def_getopt(argc, argv, "hnf:", opts, "cpupool-create", 0)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -        case 'f':
> -            filename = optarg;
> -            break;
> -        case 'n':
> -            dryrun_only = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "hnf:", opts, "cpupool-create", 0) {
> +    case 0: case 2:
> +        return opt;
> +    case 'f':
> +        filename = optarg;
> +        break;
> +    case 'n':
> +        dryrun_only = 1;
> +        break;
>      }
> 
>      memset(extra_config, 0, sizeof(extra_config));
> @@ -6400,14 +6467,12 @@ int main_cpupoollist(int argc, char **ar
>      char *name;
>      int ret = 0;
> 
> -    while ((opt = def_getopt(argc, argv, "hc", opts, "cpupool-list", 1)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            break;
> -        case 'c':
> -            opt_cpus = 1;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 1) {
> +    case 0: case 2:
> +        break;
> +    case 'c':
> +        opt_cpus = 1;
> +        break;
>      }
> 
>      if (optind < argc) {
> @@ -6467,8 +6532,10 @@ int main_cpupooldestroy(int argc, char *
>      const char *pool;
>      uint32_t poolid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-destroy", 1)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-destroy", 1) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind];
> 
> @@ -6488,8 +6555,10 @@ int main_cpupoolrename(int argc, char **
>      const char *new_name;
>      uint32_t poolid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-rename", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-rename", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind++];
> 
> @@ -6518,8 +6587,10 @@ int main_cpupoolcpuadd(int argc, char **
>      int node;
>      int n;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-add", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-add", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind++];
>      node = -1;
> @@ -6562,8 +6633,10 @@ int main_cpupoolcpuremove(int argc, char
>      int node;
>      int n;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-remove", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-remove", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      pool = argv[optind++];
>      node = -1;
> @@ -6605,8 +6678,10 @@ int main_cpupoolmigrate(int argc, char *
>      const char *dom;
>      uint32_t domid;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-migrate", 2)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-migrate", 2) {
> +    case 0: case 2:
>          return opt;
> +    }
> 
>      dom = argv[optind++];
>      pool = argv[optind];
> @@ -6645,8 +6720,11 @@ int main_cpupoolnumasplit(int argc, char
>      libxl_cputopology *topology;
>      libxl_dominfo info;
> 
> -    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-numa-split", 0)) != -1)
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-numa-split", 0) {
> +    case 0: case 2:
>          return opt;
> +    }
> +
>      ret = 0;
> 
>      poolinfo = libxl_list_cpupool(ctx, &n_pools);
> @@ -6898,27 +6976,24 @@ int main_remus(int argc, char **argv)
>      r_info.blackhole = 0;
>      r_info.compression = 1;
> 
> -    while ((opt = def_getopt(argc, argv, "bui:s:e", NULL, "remus", 2)) != -1) {
> -        switch (opt) {
> -        case 0: case 2:
> -            return opt;
> -
> -        case 'i':
> -           r_info.interval = atoi(optarg);
> -            break;
> -        case 'b':
> -            r_info.blackhole = 1;
> -            break;
> -        case 'u':
> -           r_info.compression = 0;
> -            break;
> -        case 's':
> -            ssh_command = optarg;
> -            break;
> -        case 'e':
> -            daemonize = 0;
> -            break;
> -        }
> +    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
> +    case 0: case 2:
> +        return opt;
> +    case 'i':
> +        r_info.interval = atoi(optarg);
> +        break;
> +    case 'b':
> +        r_info.blackhole = 1;
> +        break;
> +    case 'u':
> +        r_info.compression = 0;
> +        break;
> +    case 's':
> +        ssh_command = optarg;
> +        break;
> +    case 'e':
> +        daemonize = 0;
> +        break;
>      }
> 
>      domid = find_domain(argv[optind]);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:47:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6KK-0007gC-Mm; Thu, 13 Dec 2012 10:47:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj6KI-0007fj-Q5
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:47:39 +0000
Received: from [85.158.139.211:13788] by server-9.bemta-5.messagelabs.com id
	8D/98-10690-A42B9C05; Thu, 13 Dec 2012 10:47:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355395657!20210665!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2938 invoked from network); 13 Dec 2012 10:47:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:47:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="110195"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:29:37 +0000
Message-ID: <1355394576.10554.62.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jim Fehlig <jfehlig@suse.com>
Date: Thu, 13 Dec 2012 10:29:36 +0000
In-Reply-To: <50C8C665.2030202@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
	<50C8C665.2030202@suse.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-12 at 18:01 +0000, Jim Fehlig wrote:
> Ian Jackson wrote:

> >   
> >>  After again reading
> >> libxl_event.h, I'm considering the below patch in the libvirt libxl
> >> driver.  The change is primarily inspired by this comment for
> >> libxl_osevent_occurred_timeout:
> >>     
> > ...
> >   
> >> /* Implicitly, on entry to this function the timeout has been
> >>  * deregistered.  If _occurred_timeout is called, libxl will not
> >>  * call timeout_deregister; if it wants to requeue the timeout it
> >>  * will call timeout_register again.
> >>  */
> >>     
> >
> > Well your patch is only correct when used with the new libxl, with my
> > patches.
> >   
> 
> Hmm, it is not clear to me how to make the libxl driver work correctly
> with libxl pre and post your patches :-/.

Ideally we will find a way to make this work without changes on the
application side.

But if that turns out to be impossible and applications are going to
need patching anyway then I think we should consider just fixing the API
rather than playing tricks like the "modify to 0" thing to try and keep
it compatible.

One option is to add new hooks which libxl can call to take/release the
application's event loop lock + a LIBXL_HAVE_EVENT_LOOP_LOCK define so
the application can conditionally provide them. The upside is that I
would expect this to result in much simpler code in both libxl and
libvirt. The downside is that doing this kind of sucks from an API
stability point of view, but if the application has to change anyway
then we might as well do it cleanly instead of bending over backwards to
keep the API the same.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:47:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6KM-0007gw-3v; Thu, 13 Dec 2012 10:47:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1Tj6KL-0007g8-6z; Thu, 13 Dec 2012 10:47:41 +0000
Received: from [85.158.139.211:55695] by server-2.bemta-5.messagelabs.com id
	C9/23-16162-C42B9C05; Thu, 13 Dec 2012 10:47:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355395657!20210665!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2930 invoked from network); 13 Dec 2012 10:47:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:47:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="110196"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:29:45 +0000
Message-ID: <1355394584.10554.63.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen@lippux.com" <xen@lippux.com>
Date: Thu, 13 Dec 2012 10:29:44 +0000
In-Reply-To: <20121212213735.Horde.9y0sVlQvoipQyOsPrfw2yDA@webmail.your-server.de>
References: <20121212193526.Horde.3nLQZVQvoipQyM5uc7nCRgA@webmail.your-server.de>
	<50C8D449.9080404@tycho.nsa.gov>
	<20121212213735.Horde.9y0sVlQvoipQyOsPrfw2yDA@webmail.your-server.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Compiling init-xenstore-domain.c to
 initialize an OCaml Xenstore Stubdomain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-12 at 20:37 +0000, xen@lippux.com wrote:
> this command worked and I could compile an executable. Thanks for your help :)

If/when you get this working do you think you could write it up on the
wiki?

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:48:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Kh-0007pT-3m; Thu, 13 Dec 2012 10:48:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tj6Kf-0007oH-5p
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:48:01 +0000
Received: from [85.158.139.211:61643] by server-12.bemta-5.messagelabs.com id
	13/91-02275-062B9C05; Thu, 13 Dec 2012 10:48:00 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355395677!18837174!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4119 invoked from network); 13 Dec 2012 10:47:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:47:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="110291"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:31 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:39:44 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 13 Dec 2012 11:39:37 +0100
Message-ID: <1355395179-55724-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, xen-devel@lists.xen.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 1/3] xen-blkback: implement safe iterator for
	the list of persistent grants
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the foreach_grant iterator to a safe version that allows freeing
the element while iterating. Also move the free code in
free_persistent_gnts to prevent freeing the element before the rb_next
call.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: xen-devel@lists.xen.org
---
Changes since v2:
 * Implement the same semantics as list_for_each_safe, using type *
   instead of rb_node * as the temporary cursor.
---
 drivers/block/xen-blkback/blkback.c |   23 ++++++++++++++---------
 1 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 74374fb..8808028 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -161,10 +161,14 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 static void make_response(struct xen_blkif *blkif, u64 id,
			  unsigned short op, int st);
 
-#define foreach_grant(pos, rbtree, node) \
-	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node); \
-	     &(pos)->node != NULL; \
-	     (pos) = container_of(rb_next(&(pos)->node), typeof(*(pos)), node))
+#define foreach_grant_safe(pos, n, rbtree, node) \
+	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node),\
+	     (n) = container_of(((&(pos)->node != NULL) ? 		\
+	         rb_next(&(pos)->node) : NULL), typeof(*(pos)), node);	\
+	     &(pos)->node != NULL;					\
+	     (pos) = n,							\
+	     (n) = container_of(((&(pos)->node != NULL) ?		\
+	         rb_next(&(pos)->node) : NULL), typeof(*(pos)), node))
 
 
 static void add_persistent_gnt(struct rb_root *root,
@@ -216,11 +220,11 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 {
 	struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-	struct persistent_gnt *persistent_gnt;
+	struct persistent_gnt *persistent_gnt, *n;
 	int ret = 0;
 	int segs_to_unmap = 0;
 
-	foreach_grant(persistent_gnt, root, node) {
+	foreach_grant_safe(persistent_gnt, n, root, node) {
 		BUG_ON(persistent_gnt->handle ==
 			BLKBACK_INVALID_HANDLE);
 		gnttab_set_unmap_op(&unmap[segs_to_unmap],
@@ -230,9 +234,6 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 			persistent_gnt->handle);
 
 		pages[segs_to_unmap] = persistent_gnt->page;
-		rb_erase(&persistent_gnt->node, root);
-		kfree(persistent_gnt);
-		num--;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
@@ -241,6 +242,10 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
 			BUG_ON(ret);
 			segs_to_unmap = 0;
 		}
+
+		rb_erase(&persistent_gnt->node, root);
+		kfree(persistent_gnt);
+		num--;
 	}
 	BUG_ON(num != 0);
 }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:48:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Ke-0007o1-9V; Thu, 13 Dec 2012 10:48:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tj6Kd-0007nW-Jc
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:47:59 +0000
Received: from [85.158.139.211:3441] by server-14.bemta-5.messagelabs.com id
	38/07-09538-E52B9C05; Thu, 13 Dec 2012 10:47:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355395677!18837174!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3741 invoked from network); 13 Dec 2012 10:47:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:47:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="110293"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:31 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:39:45 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 13 Dec 2012 11:39:39 +0100
Message-ID: <1355395179-55724-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1355395179-55724-1-git-send-email-roger.pau@citrix.com>
References: <1355395179-55724-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, xen-devel@lists.xen.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 3/3] xen-blkfront: traverse list of
	persistent grants safely
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use llist_for_each_entry_safe in blkif_free. Previously grants were
freed while iterating the list, which led to dereferences when trying
to fetch the next item.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: xen-devel@lists.xen.org
---
 drivers/block/xen-blkfront.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 96e9b00..8ae8062 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -791,7 +791,7 @@ static void blkif_restart_queue(struct work_struct *work)
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
 	struct llist_node *all_gnts;
-	struct grant *persistent_gnt;
+	struct grant *persistent_gnt, *n;
 
 	/* Prevent new requests being issued until we fix things up. */
 	spin_lock_irq(&info->io_lock);
@@ -804,7 +804,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	/* Remove all persistent grants */
 	if (info->persistent_gnts_c) {
 		all_gnts = llist_del_all(&info->persistent_gnts);
-		llist_for_each_entry(persistent_gnt, all_gnts, node) {
+		llist_for_each_entry_safe(persistent_gnt, n, all_gnts, node) {
 			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
 			__free_page(pfn_to_page(persistent_gnt->pfn));
 			kfree(persistent_gnt);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 3/3] xen-blkfront: transverse list of
	persistent grants safely
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VXNlIGxsaXN0X2Zvcl9lYWNoX2VudHJ5X3NhZmUgaW4gYmxraWZfZnJlZS4gUHJldmlvdXNseSBn
cmFudHMgd2hlcmUKZnJlZWQgd2hpbGUgaXRlcmF0aW5nIHRoZSBsaXN0LCB3aGljaCBsZWFkIHRv
IGRlcmVmZXJlbmNlcyB3aGVuIHRyeWluZwp0byBmZXRjaCB0aGUgbmV4dCBpdGVtLgoKUmVwb3J0
ZWQtYnk6IERhbiBDYXJwZW50ZXIgPGRhbi5jYXJwZW50ZXJAb3JhY2xlLmNvbT4KU2lnbmVkLW9m
Zi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+CkNjOiBLb25yYWQg
Unplc3p1dGVrIFdpbGsgPGtvbnJhZEBrZXJuZWwub3JnPgpDYzogeGVuLWRldmVsQGxpc3RzLnhl
bi5vcmcKLS0tCiBkcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5jIHwgICAgNCArKy0tCiAxIGZp
bGVzIGNoYW5nZWQsIDIgaW5zZXJ0aW9ucygrKSwgMiBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrZnJv
bnQuYwppbmRleCA5NmU5YjAwLi44YWU4MDYyIDEwMDY0NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hl
bi1ibGtmcm9udC5jCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2Zyb250LmMKQEAgLTc5MSw3
ICs3OTEsNyBAQCBzdGF0aWMgdm9pZCBibGtpZl9yZXN0YXJ0X3F1ZXVlKHN0cnVjdCB3b3JrX3N0
cnVjdCAqd29yaykKIHN0YXRpYyB2b2lkIGJsa2lmX2ZyZWUoc3RydWN0IGJsa2Zyb250X2luZm8g
KmluZm8sIGludCBzdXNwZW5kKQogewogCXN0cnVjdCBsbGlzdF9ub2RlICphbGxfZ250czsKLQlz
dHJ1Y3QgZ3JhbnQgKnBlcnNpc3RlbnRfZ250OworCXN0cnVjdCBncmFudCAqcGVyc2lzdGVudF9n
bnQsICpuOwogCiAJLyogUHJldmVudCBuZXcgcmVxdWVzdHMgYmVpbmcgaXNzdWVkIHVudGlsIHdl
IGZpeCB0aGluZ3MgdXAuICovCiAJc3Bpbl9sb2NrX2lycSgmaW5mby0+aW9fbG9jayk7CkBAIC04
MDQsNyArODA0LDcgQEAgc3RhdGljIHZvaWQgYmxraWZfZnJlZShzdHJ1Y3QgYmxrZnJvbnRfaW5m
byAqaW5mbywgaW50IHN1c3BlbmQpCiAJLyogUmVtb3ZlIGFsbCBwZXJzaXN0ZW50IGdyYW50cyAq
LwogCWlmIChpbmZvLT5wZXJzaXN0ZW50X2dudHNfYykgewogCQlhbGxfZ250cyA9IGxsaXN0X2Rl
bF9hbGwoJmluZm8tPnBlcnNpc3RlbnRfZ250cyk7Ci0JCWxsaXN0X2Zvcl9lYWNoX2VudHJ5KHBl
cnNpc3RlbnRfZ250LCBhbGxfZ250cywgbm9kZSkgeworCQlsbGlzdF9mb3JfZWFjaF9lbnRyeV9z
YWZlKHBlcnNpc3RlbnRfZ250LCBuLCBhbGxfZ250cywgbm9kZSkgewogCQkJZ250dGFiX2VuZF9m
b3JlaWduX2FjY2VzcyhwZXJzaXN0ZW50X2dudC0+Z3JlZiwgMCwgMFVMKTsKIAkJCV9fZnJlZV9w
YWdlKHBmbl90b19wYWdlKHBlcnNpc3RlbnRfZ250LT5wZm4pKTsKIAkJCWtmcmVlKHBlcnNpc3Rl
bnRfZ250KTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:48:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Kg-0007pE-NV; Thu, 13 Dec 2012 10:48:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj6Ke-0007o9-Rx
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:48:01 +0000
Received: from [85.158.139.211:61611] by server-13.bemta-5.messagelabs.com id
	2F/6B-10716-062B9C05; Thu, 13 Dec 2012 10:48:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355395677!18837174!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3856 invoked from network); 13 Dec 2012 10:47:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 10:47:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="110301"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 10:45:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 10:40:39 +0000
Message-ID: <1355395238.10554.71.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 10:40:38 +0000
In-Reply-To: <20617.13045.486553.172990@mariner.uk.xensource.com>
References: <508916B3.2030403@amd.com>
	<20617.13045.486553.172990@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Christoph Egger <Christoph_Egger@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream
 qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding Christoph's new address, I guess this is a thing exposed on
NetBSD?

On Thu, 2012-10-25 at 13:39 +0100, Ian Jackson wrote:
> Christoph Egger writes ("[Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
> > 
> > use PREFIX when building upstream qemu.
> > 
> > Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
> 
> This looks reasonable but can you explain what goes wrong when,
> without this ?  I'd like to be able to verify the bug and fix myself.

AFAICT the default PREFIX for qemu-xen is /usr/local and we pass
--bindir, --datadir (as Xen specific paths, like /usr/lib/xen/bin) but
not --prefix. It looks like this covers most stuff but results in a
smattering of stuff getting installed under /usr/local:

$ find dist/install/usr/local/ | grep qemu
dist/install/usr/local/libexec/qemu-bridge-helper
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qmp-commands.txt
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
(there is also some ocaml stuff under there it seems...)

I'm not quite sure that installing those into our $PREFIX is correct
either though -- there seems like the possibility of clashing with a
non-Xen install of qemu, so we might be better off moving these to e.g.
$PREFIX/doc/xen/qemu/ and adding "xen" in the man page path etc? (the
binaries corresponding to those manpages are in /usr/lib/xen/bin/)
Perhaps qemu.1xen ?

I don't know what dist/install/usr/local/etc/qemu/target-x86_64.conf is
but it is empty here. I suspect Xen does not use
dist/install/usr/local/libexec/qemu-bridge-helper or it should be
in /usr/lib/xen/bin.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:57:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Tx-0000kF-Ja; Thu, 13 Dec 2012 10:57:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6Tw-0000k0-DA
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:36 +0000
Received: from [85.158.139.83:46583] by server-8.bemta-5.messagelabs.com id
	C1/D5-15003-F94B9C05; Thu, 13 Dec 2012 10:57:35 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355396254!28232846!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22826 invoked from network); 13 Dec 2012 10:57:35 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-10.tower-182.messagelabs.com with SMTP;
	13 Dec 2012 10:57:35 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 83955C5618F;
	Thu, 13 Dec 2012 10:57:34 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:00 +0000
Message-Id: <1355396228-3183-3-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 02/10] libxl_json: Remove JSON_ERROR from enum.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This value from libxl__json_node_type is never used.

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693130 -3600
: Node ID 4a6d5d8cba4fc44f9bbda201188885868604b8e8
: Parent  c9b80c7f8db1a5d26906a2298c481bf7e87fda94
---
 tools/libxl/libxl_internal.h |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2959527..5b285d4 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1428,7 +1428,6 @@ _hidden yajl_gen_status libxl__yajl_gen_asciiz(yajl_gen hand, const char *str);
 _hidden yajl_gen_status libxl__yajl_gen_enum(yajl_gen hand, const char *str);
 
 typedef enum {
-    JSON_ERROR,
     JSON_NULL,
     JSON_TRUE,
     JSON_FALSE,
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:57:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:57:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Tw-0000k1-6c; Thu, 13 Dec 2012 10:57:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6Tv-0000js-BC
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:35 +0000
Received: from [85.158.137.99:18545] by server-9.bemta-3.messagelabs.com id
	31/9B-11948-E94B9C05; Thu, 13 Dec 2012 10:57:34 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355396253!14082777!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4261 invoked from network); 13 Dec 2012 10:57:33 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-6.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 10:57:33 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id B46EDC5618E;
	Thu, 13 Dec 2012 10:57:32 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:56:59 +0000
Message-Id: <1355396228-3183-2-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 01/10] libxl_json: Export json_object related
	function.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export libxl__json_object_alloc and libxl__json_object_append_to to
use them in a later patch.

Backported from xen-unstable patch:
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693129 -3600
: Node ID c9b80c7f8db1a5d26906a2298c481bf7e87fda94
: Parent  93e3e6a33e0a1ec9f92fc575334caa35e6dbc757
---
 tools/libxl/libxl_internal.h |   14 ++++++++++++--
 tools/libxl/libxl_json.c     |   32 ++++++++++++++++----------------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index a135cd7..2959527 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1512,6 +1512,15 @@ static inline long long libxl__json_object_get_integer(const libxl__json_object
         return -1;
 }
 
+/*
+ * NOGC can be used with those json_object functions, but the
+ * libxl__json_object* will need to be freed with libxl__json_object_free.
+ */
+_hidden libxl__json_object *libxl__json_object_alloc(libxl__gc *gc_opt,
+                                                     libxl__json_node_type type);
+_hidden int libxl__json_object_append_to(libxl__gc *gc_opt,
+                                         libxl__json_object *obj,
+                                         libxl__json_object *dst);
 _hidden libxl__json_object *libxl__json_array_get(const libxl__json_object *o,
                                                   int i);
 _hidden
@@ -1520,9 +1529,10 @@ libxl__json_map_node *libxl__json_map_node_get(const libxl__json_object *o,
 _hidden const libxl__json_object *libxl__json_map_get(const char *key,
                                           const libxl__json_object *o,
                                           libxl__json_node_type expected_type);
-_hidden void libxl__json_object_free(libxl__gc *gc, libxl__json_object *obj);
+_hidden void libxl__json_object_free(libxl__gc *gc_opt,
+                                     libxl__json_object *obj);
 
-_hidden libxl__json_object *libxl__json_parse(libxl__gc *gc, const char *s);
+_hidden libxl__json_object *libxl__json_parse(libxl__gc *gc_opt, const char *s);
 
   /* Based on /local/domain/$domid/dm-version xenstore key
    * default is qemu xen traditional */
diff --git a/tools/libxl/libxl_json.c b/tools/libxl/libxl_json.c
index caa8312..0b0cf2f 100644
--- a/tools/libxl/libxl_json.c
+++ b/tools/libxl/libxl_json.c
@@ -205,7 +205,7 @@ yajl_gen_status libxl__string_gen_json(yajl_gen hand,
  * libxl__json_object helper functions
  */
 
-static libxl__json_object *json_object_alloc(libxl__gc *gc,
+libxl__json_object *libxl__json_object_alloc(libxl__gc *gc,
                                              libxl__json_node_type type)
 {
     libxl__json_object *obj;
@@ -236,7 +236,7 @@ static libxl__json_object *json_object_alloc(libxl__gc *gc,
     return obj;
 }
 
-static int json_object_append_to(libxl__gc *gc,
+int libxl__json_object_append_to(libxl__gc *gc,
                                  libxl__json_object *obj,
                                  libxl__json_object *dst)
 {
@@ -393,10 +393,10 @@ static int json_callback_null(void *opaque)
 
     DEBUG_GEN(ctx, null);
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_NULL)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_NULL)) == NULL)
         return 0;
 
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -411,11 +411,11 @@ static int json_callback_boolean(void *opaque, int boolean)
 
     DEBUG_GEN_VALUE(ctx, bool, boolean);
 
-    if ((obj = json_object_alloc(ctx->gc,
+    if ((obj = libxl__json_object_alloc(ctx->gc,
                                  boolean ? JSON_TRUE : JSON_FALSE)) == NULL)
         return 0;
 
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -448,7 +448,7 @@ static int json_callback_number(void *opaque, const char *s, libxl_yajl_length l
             goto error;
         }
 
-        if ((obj = json_object_alloc(ctx->gc, JSON_DOUBLE)) == NULL)
+        if ((obj = libxl__json_object_alloc(ctx->gc, JSON_DOUBLE)) == NULL)
             return 0;
         obj->u.d = d;
     } else {
@@ -458,7 +458,7 @@ static int json_callback_number(void *opaque, const char *s, libxl_yajl_length l
             goto error;
         }
 
-        if ((obj = json_object_alloc(ctx->gc, JSON_INTEGER)) == NULL)
+        if ((obj = libxl__json_object_alloc(ctx->gc, JSON_INTEGER)) == NULL)
             return 0;
         obj->u.i = i;
     }
@@ -466,7 +466,7 @@ static int json_callback_number(void *opaque, const char *s, libxl_yajl_length l
 
 error:
     /* If the conversion fail, we just store the original string. */
-    if ((obj = json_object_alloc(ctx->gc, JSON_NUMBER)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_NUMBER)) == NULL)
         return 0;
 
     t = malloc(len + 1);
@@ -481,7 +481,7 @@ error:
     obj->u.string = t;
 
 out:
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -508,13 +508,13 @@ static int json_callback_string(void *opaque, const unsigned char *str,
     strncpy(t, (const char *) str, len);
     t[len] = 0;
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_STRING)) == NULL) {
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_STRING)) == NULL) {
         free(t);
         return 0;
     }
     obj->u.string = t;
 
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -573,11 +573,11 @@ static int json_callback_start_map(void *opaque)
 
     DEBUG_GEN(ctx, map_open);
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_MAP)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_MAP)) == NULL)
         return 0;
 
     if (ctx->current) {
-        if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+        if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
             libxl__json_object_free(ctx->gc, obj);
             return 0;
         }
@@ -615,11 +615,11 @@ static int json_callback_start_array(void *opaque)
 
     DEBUG_GEN(ctx, array_open);
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_ARRAY)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_ARRAY)) == NULL)
         return 0;
 
     if (ctx->current) {
-        if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+        if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
             libxl__json_object_free(ctx->gc, obj);
             return 0;
         }
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:57:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:57:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Tw-0000k1-6c; Thu, 13 Dec 2012 10:57:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6Tv-0000js-BC
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:35 +0000
Received: from [85.158.137.99:18545] by server-9.bemta-3.messagelabs.com id
	31/9B-11948-E94B9C05; Thu, 13 Dec 2012 10:57:34 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355396253!14082777!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4261 invoked from network); 13 Dec 2012 10:57:33 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-6.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 10:57:33 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id B46EDC5618E;
	Thu, 13 Dec 2012 10:57:32 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:56:59 +0000
Message-Id: <1355396228-3183-2-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 01/10] libxl_json: Export json_object related
	function.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export libxl__json_object_alloc and libxl__json_object_append_to so
that they can be used in a later patch.

Backported from xen-unstable patch:
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693129 -3600
: Node ID c9b80c7f8db1a5d26906a2298c481bf7e87fda94
: Parent  93e3e6a33e0a1ec9f92fc575334caa35e6dbc757
---
 tools/libxl/libxl_internal.h |   14 ++++++++++++--
 tools/libxl/libxl_json.c     |   32 ++++++++++++++++----------------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index a135cd7..2959527 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1512,6 +1512,15 @@ static inline long long libxl__json_object_get_integer(const libxl__json_object
         return -1;
 }
 
+/*
+ * NOGC can be used with those json_object functions, but the
+ * libxl__json_object* will need to be freed with libxl__json_object_free.
+ */
+_hidden libxl__json_object *libxl__json_object_alloc(libxl__gc *gc_opt,
+                                                     libxl__json_node_type type);
+_hidden int libxl__json_object_append_to(libxl__gc *gc_opt,
+                                         libxl__json_object *obj,
+                                         libxl__json_object *dst);
 _hidden libxl__json_object *libxl__json_array_get(const libxl__json_object *o,
                                                   int i);
 _hidden
@@ -1520,9 +1529,10 @@ libxl__json_map_node *libxl__json_map_node_get(const libxl__json_object *o,
 _hidden const libxl__json_object *libxl__json_map_get(const char *key,
                                           const libxl__json_object *o,
                                           libxl__json_node_type expected_type);
-_hidden void libxl__json_object_free(libxl__gc *gc, libxl__json_object *obj);
+_hidden void libxl__json_object_free(libxl__gc *gc_opt,
+                                     libxl__json_object *obj);
 
-_hidden libxl__json_object *libxl__json_parse(libxl__gc *gc, const char *s);
+_hidden libxl__json_object *libxl__json_parse(libxl__gc *gc_opt, const char *s);
 
   /* Based on /local/domain/$domid/dm-version xenstore key
    * default is qemu xen traditional */
diff --git a/tools/libxl/libxl_json.c b/tools/libxl/libxl_json.c
index caa8312..0b0cf2f 100644
--- a/tools/libxl/libxl_json.c
+++ b/tools/libxl/libxl_json.c
@@ -205,7 +205,7 @@ yajl_gen_status libxl__string_gen_json(yajl_gen hand,
  * libxl__json_object helper functions
  */
 
-static libxl__json_object *json_object_alloc(libxl__gc *gc,
+libxl__json_object *libxl__json_object_alloc(libxl__gc *gc,
                                              libxl__json_node_type type)
 {
     libxl__json_object *obj;
@@ -236,7 +236,7 @@ static libxl__json_object *json_object_alloc(libxl__gc *gc,
     return obj;
 }
 
-static int json_object_append_to(libxl__gc *gc,
+int libxl__json_object_append_to(libxl__gc *gc,
                                  libxl__json_object *obj,
                                  libxl__json_object *dst)
 {
@@ -393,10 +393,10 @@ static int json_callback_null(void *opaque)
 
     DEBUG_GEN(ctx, null);
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_NULL)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_NULL)) == NULL)
         return 0;
 
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -411,11 +411,11 @@ static int json_callback_boolean(void *opaque, int boolean)
 
     DEBUG_GEN_VALUE(ctx, bool, boolean);
 
-    if ((obj = json_object_alloc(ctx->gc,
+    if ((obj = libxl__json_object_alloc(ctx->gc,
                                  boolean ? JSON_TRUE : JSON_FALSE)) == NULL)
         return 0;
 
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -448,7 +448,7 @@ static int json_callback_number(void *opaque, const char *s, libxl_yajl_length l
             goto error;
         }
 
-        if ((obj = json_object_alloc(ctx->gc, JSON_DOUBLE)) == NULL)
+        if ((obj = libxl__json_object_alloc(ctx->gc, JSON_DOUBLE)) == NULL)
             return 0;
         obj->u.d = d;
     } else {
@@ -458,7 +458,7 @@ static int json_callback_number(void *opaque, const char *s, libxl_yajl_length l
             goto error;
         }
 
-        if ((obj = json_object_alloc(ctx->gc, JSON_INTEGER)) == NULL)
+        if ((obj = libxl__json_object_alloc(ctx->gc, JSON_INTEGER)) == NULL)
             return 0;
         obj->u.i = i;
     }
@@ -466,7 +466,7 @@ static int json_callback_number(void *opaque, const char *s, libxl_yajl_length l
 
 error:
     /* If the conversion fail, we just store the original string. */
-    if ((obj = json_object_alloc(ctx->gc, JSON_NUMBER)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_NUMBER)) == NULL)
         return 0;
 
     t = malloc(len + 1);
@@ -481,7 +481,7 @@ error:
     obj->u.string = t;
 
 out:
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -508,13 +508,13 @@ static int json_callback_string(void *opaque, const unsigned char *str,
     strncpy(t, (const char *) str, len);
     t[len] = 0;
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_STRING)) == NULL) {
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_STRING)) == NULL) {
         free(t);
         return 0;
     }
     obj->u.string = t;
 
-    if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+    if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
         libxl__json_object_free(ctx->gc, obj);
         return 0;
     }
@@ -573,11 +573,11 @@ static int json_callback_start_map(void *opaque)
 
     DEBUG_GEN(ctx, map_open);
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_MAP)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_MAP)) == NULL)
         return 0;
 
     if (ctx->current) {
-        if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+        if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
             libxl__json_object_free(ctx->gc, obj);
             return 0;
         }
@@ -615,11 +615,11 @@ static int json_callback_start_array(void *opaque)
 
     DEBUG_GEN(ctx, array_open);
 
-    if ((obj = json_object_alloc(ctx->gc, JSON_ARRAY)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_ARRAY)) == NULL)
         return 0;
 
     if (ctx->current) {
-        if (json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
+        if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
             libxl__json_object_free(ctx->gc, obj);
             return 0;
         }
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:57:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6U8-0000mo-60; Thu, 13 Dec 2012 10:57:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6U6-0000lx-3X
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:46 +0000
Received: from [85.158.143.99:34030] by server-1.bemta-4.messagelabs.com id
	33/51-28401-9A4B9C05; Thu, 13 Dec 2012 10:57:45 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355396261!19541299!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25298 invoked from network); 13 Dec 2012 10:57:41 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-8.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 10:57:41 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 972C5C56195;
	Thu, 13 Dec 2012 10:57:41 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:06 +0000
Message-Id: <1355396228-3183-9-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 08/10] libxl_qmp: Introduce
	libxl__qmp_set_global_dirty_log.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function enables or disables QEMU's global dirty log, which is
used during migration.

Backport of xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693135 -3600
: Node ID d4aec9eff7e6d15c2805957af620c82555553b3e
: Parent  f3890916496445c97d6778d6c986b0270ff707f2
---
 tools/libxl/libxl_internal.h |    2 ++
 tools/libxl/libxl_qmp.c      |   12 ++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index b00ff61..f658562 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1400,6 +1400,8 @@ _hidden int libxl__qmp_stop(libxl__gc *gc, int domid);
 _hidden int libxl__qmp_resume(libxl__gc *gc, int domid);
 /* Save current QEMU state into fd. */
 _hidden int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename);
+/* Set dirty bitmap logging status */
+_hidden int libxl__qmp_set_global_dirty_log(libxl__gc *gc, int domid, bool enable);
 /* close and free the QMP handler */
 _hidden void libxl__qmp_close(libxl__qmp_handler *qmp);
 /* remove the socket file, if the file has already been removed,
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index b09bf13..ac10f20 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -658,7 +658,6 @@ static void qmp_parameters_add_string(libxl__gc *gc,
     qmp_parameters_common_add(gc, param, name, obj);
 }
 
-#if 0
 static void qmp_parameters_add_bool(libxl__gc *gc,
                                     libxl__json_object **param,
                                     const char *name, bool b)
@@ -669,7 +668,6 @@ static void qmp_parameters_add_bool(libxl__gc *gc,
     obj->u.b = b;
     qmp_parameters_common_add(gc, param, name, obj);
 }
-#endif
 
 #define QMP_PARAMETERS_SPRINTF(args, name, format, ...) \
     qmp_parameters_add_string(gc, args, name, \
@@ -905,6 +903,16 @@ int libxl__qmp_resume(libxl__gc *gc, int domid)
     return qmp_run_command(gc, domid, "cont", NULL, NULL, NULL);
 }
 
+int libxl__qmp_set_global_dirty_log(libxl__gc *gc, int domid, bool enable)
+{
+    libxl__json_object *args = NULL;
+
+    qmp_parameters_add_bool(gc, &args, "enable", enable);
+
+    return qmp_run_command(gc, domid, "xen-set-global-dirty-log", args,
+                           NULL, NULL);
+}
+
 int libxl__qmp_initializations(libxl__gc *gc, uint32_t domid,
                                const libxl_domain_config *guest_config)
 {
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6U8-0000mz-IV; Thu, 13 Dec 2012 10:57:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6U6-0000mI-Ey
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:46 +0000
Received: from [85.158.139.83:27001] by server-9.bemta-5.messagelabs.com id
	3A/D1-10690-9A4B9C05; Thu, 13 Dec 2012 10:57:45 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355396256!27939337!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15892 invoked from network); 13 Dec 2012 10:57:36 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-2.tower-182.messagelabs.com with SMTP;
	13 Dec 2012 10:57:36 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 06D3DC56190;
	Thu, 13 Dec 2012 10:57:36 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:01 +0000
Message-Id: <1355396228-3183-4-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 03/10] libxl_json: Replace JSON_TRUE/FALSE by
	JSON_BOOL.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

JSON_TRUE and JSON_FALSE were two separate node types. It is better
to have a single JSON_BOOL type.

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693131 -3600
: Node ID 3f71aab0e2774ded0c5a03436c364fb031ba9aa0
: Parent  4a6d5d8cba4fc44f9bbda201188885868604b8e8
---
 tools/libxl/libxl_internal.h |   15 +++++++++++++--
 tools/libxl/libxl_json.c     |    3 +--
 tools/libxl/libxl_qmp.c      |    3 ++-
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 5b285d4..7dbd8af 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1429,8 +1429,7 @@ _hidden yajl_gen_status libxl__yajl_gen_enum(yajl_gen hand, const char *str);
 
 typedef enum {
     JSON_NULL,
-    JSON_TRUE,
-    JSON_FALSE,
+    JSON_BOOL,
     JSON_INTEGER,
     JSON_DOUBLE,
     /* number is store in string, it's too big to be a long long or a double */
@@ -1444,6 +1443,7 @@ typedef enum {
 typedef struct libxl__json_object {
     libxl__json_node_type type;
     union {
+        bool b;
         long long i;
         double d;
         char *string;
@@ -1462,6 +1462,10 @@ typedef struct {
 
 typedef struct libxl__yajl_ctx libxl__yajl_ctx;
 
+static inline bool libxl__json_object_is_bool(const libxl__json_object *o)
+{
+    return o != NULL && o->type == JSON_BOOL;
+}
 static inline bool libxl__json_object_is_string(const libxl__json_object *o)
 {
     return o != NULL && o->type == JSON_STRING;
@@ -1479,6 +1483,13 @@ static inline bool libxl__json_object_is_array(const libxl__json_object *o)
     return o != NULL && o->type == JSON_ARRAY;
 }
 
+static inline bool libxl__json_object_get_bool(const libxl__json_object *o)
+{
+    if (libxl__json_object_is_bool(o))
+        return o->u.b;
+    else
+        return false;
+}
 static inline
 const char *libxl__json_object_get_string(const libxl__json_object *o)
 {
diff --git a/tools/libxl/libxl_json.c b/tools/libxl/libxl_json.c
index 0b0cf2f..98db465 100644
--- a/tools/libxl/libxl_json.c
+++ b/tools/libxl/libxl_json.c
@@ -411,8 +411,7 @@ static int json_callback_boolean(void *opaque, int boolean)
 
     DEBUG_GEN_VALUE(ctx, bool, boolean);
 
-    if ((obj = libxl__json_object_alloc(ctx->gc,
-                                 boolean ? JSON_TRUE : JSON_FALSE)) == NULL)
+    if ((obj = libxl__json_object_alloc(ctx->gc, JSON_BOOL)) == NULL)
         return 0;
 
     if (libxl__json_object_append_to(ctx->gc, obj, ctx->current) == -1) {
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index e33b130..9e86c35 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -178,7 +178,8 @@ static int qmp_register_vnc_callback(libxl__qmp_handler *qmp,
         goto out;
     }
 
-    if (libxl__json_map_get("enabled", o, JSON_FALSE)) {
+    obj = libxl__json_map_get("enabled", o, JSON_BOOL);
+    if (!obj || !libxl__json_object_get_bool(obj)) {
         rc = 0;
         goto out;
     }
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UA-0000nl-B7; Thu, 13 Dec 2012 10:57:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6U9-0000mu-8G
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:49 +0000
Received: from [85.158.143.99:4903] by server-3.bemta-4.messagelabs.com id
	F7/46-18211-CA4B9C05; Thu, 13 Dec 2012 10:57:48 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355396261!19541299!3
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25518 invoked from network); 13 Dec 2012 10:57:43 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-8.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 10:57:43 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 94763C56197;
	Thu, 13 Dec 2012 10:57:43 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:08 +0000
Message-Id: <1355396228-3183-11-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 10/10] libxl: Allow migration with qemu-xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Backport of xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693136 -3600
: Node ID 0995890022391682a2499a202c3c8608e1d3780a
: Parent  08fac5c2bf3dcbc493ce45091383f6ce1938f369
---
 tools/libxl/libxl.c |   17 -----------------
 1 files changed, 0 insertions(+), 17 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 4b4c5b0..9b14364 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -768,23 +768,6 @@ int libxl_domain_suspend(libxl_ctx *ctx, uint32_t domid, int fd, int flags,
         goto out_err;
     }
 
-    if (type == LIBXL_DOMAIN_TYPE_HVM && flags & LIBXL_SUSPEND_LIVE) {
-        switch (libxl__device_model_version_running(gc, domid)) {
-        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-            LOG(ERROR,
-                "cannot live migrate HVM domains with qemu-xen device-model");
-            rc = ERROR_FAIL;
-            goto out_err;
-        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
-            /* No problem */
-            break;
-        case -1:
-            rc = ERROR_FAIL;
-            goto out_err;
-        default: abort();
-        }
-    }
-
     libxl__domain_suspend_state *dss;
     GCNEW(dss);
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UH-0000sb-4y; Thu, 13 Dec 2012 10:57:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6UF-0000qu-Mh
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:55 +0000
Received: from [85.158.143.35:19533] by server-3.bemta-4.messagelabs.com id
	BC/86-18211-2B4B9C05; Thu, 13 Dec 2012 10:57:54 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355396247!5039771!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32491 invoked from network); 13 Dec 2012 10:57:27 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-2.tower-21.messagelabs.com with SMTP;
	13 Dec 2012 10:57:27 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id E65FBC5617B;
	Thu, 13 Dec 2012 10:57:16 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:56:58 +0000
Message-Id: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355238698.843.42.camel@zakaz.uk.xensource.com>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen device
	model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a COMPILE TESTED ONLY RFC for a backport of the libxl elements
of:

http://marc.info/?l=xen-devel&m=134944750724252

Stefano Stabellini said he would look at backporting the qemu-xen
side.

My first question is whether this is the right approach: do we want
to include the JSON changes too (presumably they change the API)?
I am not 100% convinced they are necessary to get live migration
working.

DO NOT RUN THIS CODE ON ANYTHING YOU CARE ABOUT.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6U9-0000nc-VG; Thu, 13 Dec 2012 10:57:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6U8-0000mu-PY
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:48 +0000
Received: from [85.158.143.99:4839] by server-3.bemta-4.messagelabs.com id
	14/46-18211-CA4B9C05; Thu, 13 Dec 2012 10:57:48 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355396261!19541299!2
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25343 invoked from network); 13 Dec 2012 10:57:42 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-8.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 10:57:42 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 91C86C56196;
	Thu, 13 Dec 2012 10:57:42 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:07 +0000
Message-Id: <1355396228-3183-10-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 09/10] libxl_dom: Call the right switch logdirty
	for the right DM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch dispatches the logdirty switch call depending on which device model
version is running.

The call to qemu-xen is currently synchronous, unlike the one to
qemu-xen-traditional.

Backport of xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693136 -3600
: Node ID 08fac5c2bf3dcbc493ce45091383f6ce1938f369
: Parent  d4aec9eff7e6d15c2805957af620c82555553b3e
---
 tools/libxl/libxl_dom.c |   45 ++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e1de832..95da18e 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -685,10 +685,10 @@ static void logdirty_init(libxl__logdirty_switch *lds)
     libxl__ev_time_init(&lds->timeout);
 }
 
-void libxl__domain_suspend_common_switch_qemu_logdirty
-                               (int domid, unsigned enable, void *user)
+static void domain_suspend_switch_qemu_xen_traditional_logdirty
+                               (int domid, unsigned enable,
+                                libxl__save_helper_state *shs)
 {
-    libxl__save_helper_state *shs = user;
     libxl__egc *egc = shs->egc;
     libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
     libxl__logdirty_switch *lds = &dss->logdirty;
@@ -756,6 +756,45 @@ void libxl__domain_suspend_common_switch_qemu_logdirty
     switch_logdirty_done(egc,dss,-1);
 }
 
+static void domain_suspend_switch_qemu_xen_logdirty
+                               (int domid, unsigned enable,
+                                libxl__save_helper_state *shs)
+{
+    libxl__egc *egc = shs->egc;
+    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
+    STATE_AO_GC(dss->ao);
+    int rc;
+
+    rc = libxl__qmp_set_global_dirty_log(gc, domid, enable);
+    if (!rc) {
+        libxl__xc_domain_saverestore_async_callback_done(egc, shs, 0);
+    } else {
+        LOG(ERROR,"logdirty switch failed (rc=%d), aborting suspend",rc);
+        libxl__xc_domain_saverestore_async_callback_done(egc, shs, -1);
+    }
+}
+
+void libxl__domain_suspend_common_switch_qemu_logdirty
+                               (int domid, unsigned enable, void *user)
+{
+    libxl__save_helper_state *shs = user;
+    libxl__egc *egc = shs->egc;
+    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
+    STATE_AO_GC(dss->ao);
+
+    switch (libxl__device_model_version_running(gc, domid)) {
+    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+        domain_suspend_switch_qemu_xen_traditional_logdirty(domid, enable, shs);
+        break;
+    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+        domain_suspend_switch_qemu_xen_logdirty(domid, enable, shs);
+        break;
+    default:
+        LOG(ERROR,"logdirty switch failed"
+            ", no valid device model version found, aborting suspend");
+        libxl__xc_domain_saverestore_async_callback_done(egc, shs, -1);
+    }
+}
 static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
                                     const struct timeval *requested_abs)
 {
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UA-0000oF-N4; Thu, 13 Dec 2012 10:57:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6U9-0000n6-7V
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:49 +0000
Received: from [85.158.143.99:34248] by server-2.bemta-4.messagelabs.com id
	AE/C9-30861-CA4B9C05; Thu, 13 Dec 2012 10:57:48 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355396259!23986455!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27564 invoked from network); 13 Dec 2012 10:57:39 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-2.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 10:57:39 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 88392C56193;
	Thu, 13 Dec 2012 10:57:39 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:04 +0000
Message-Id: <1355396228-3183-7-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 06/10] libxl_qmp: Use qmp_parameters_* functions
	for param list of a QMP command.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693133 -3600
: Node ID be5d014f91dfbd67afacc3385c265243794a246f
: Parent  6f7847729f0f42614de516d15257ede7243f995f
---
 tools/libxl/libxl_qmp.c |   89 ++++++++++++++++-------------------------------
 1 files changed, 30 insertions(+), 59 deletions(-)

diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index 827f1b7..605e8f3 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -78,7 +78,7 @@ struct libxl__qmp_handler {
 };
 
 static int qmp_send(libxl__qmp_handler *qmp,
-                    const char *cmd, libxl_key_value_list *args,
+                    const char *cmd, libxl__json_object *args,
                     qmp_callback_t callback, void *opaque,
                     qmp_request_context *context);
 
@@ -503,7 +503,7 @@ static int qmp_next(libxl__gc *gc, libxl__qmp_handler *qmp)
 }
 
 static char *qmp_send_prepare(libxl__gc *gc, libxl__qmp_handler *qmp,
-                              const char *cmd, libxl_key_value_list *args,
+                              const char *cmd, libxl__json_object *args,
                               qmp_callback_t callback, void *opaque,
                               qmp_request_context *context)
 {
@@ -527,7 +527,7 @@ static char *qmp_send_prepare(libxl__gc *gc, libxl__qmp_handler *qmp,
     yajl_gen_integer(hand, ++qmp->last_id_used);
     if (args) {
         libxl__yajl_gen_asciiz(hand, "arguments");
-        libxl_key_value_list_gen_json(hand, args);
+        libxl__json_object_to_yajl_gen(gc, hand, args);
     }
     yajl_gen_map_close(hand);
 
@@ -561,7 +561,7 @@ out:
 }
 
 static int qmp_send(libxl__qmp_handler *qmp,
-                    const char *cmd, libxl_key_value_list *args,
+                    const char *cmd, libxl__json_object *args,
                     qmp_callback_t callback, void *opaque,
                     qmp_request_context *context)
 {
@@ -589,7 +589,7 @@ out:
 }
 
 static int qmp_synchronous_send(libxl__qmp_handler *qmp, const char *cmd,
-                                libxl_key_value_list *args,
+                                libxl__json_object *args,
                                 qmp_callback_t callback, void *opaque,
                                 int ask_timeout)
 {
@@ -624,7 +624,6 @@ static void qmp_free_handler(libxl__qmp_handler *qmp)
     free(qmp);
 }
 
-#if 0
 /*
  * QMP Parameters Helpers
  */
@@ -659,6 +658,7 @@ static void qmp_parameters_add_string(libxl__gc *gc,
     qmp_parameters_common_add(gc, param, name, obj);
 }
 
+#if 0
 static void qmp_parameters_add_bool(libxl__gc *gc,
                                     libxl__json_object **param,
                                     const char *name, bool b)
@@ -669,11 +669,11 @@ static void qmp_parameters_add_bool(libxl__gc *gc,
     obj->u.b = b;
     qmp_parameters_common_add(gc, param, name, obj);
 }
+#endif
 
 #define QMP_PARAMETERS_SPRINTF(args, name, format, ...) \
     qmp_parameters_add_string(gc, args, name, \
                               libxl__sprintf(gc, format, __VA_ARGS__))
-#endif
 
 /*
  * API
@@ -801,8 +801,7 @@ out:
 int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 {
     libxl__qmp_handler *qmp = NULL;
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     char *hostaddr = NULL;
     int rc = 0;
 
@@ -815,31 +814,22 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
     if (!hostaddr)
         return -1;
 
-    parameters = flexarray_make(6, 1);
-    flexarray_append_pair(parameters, "driver", "xen-pci-passthrough");
-    flexarray_append_pair(parameters, "id",
-                          libxl__sprintf(gc, PCI_PT_QDEV_ID,
-                                         pcidev->bus, pcidev->dev,
-                                         pcidev->func));
-    flexarray_append_pair(parameters, "hostaddr", hostaddr);
+    qmp_parameters_add_string(gc, &args, "driver", "xen-pci-passthrough");
+    QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
+                           pcidev->bus, pcidev->dev, pcidev->func);
+    qmp_parameters_add_string(gc, &args, "hostaddr", hostaddr);
     if (pcidev->vdevfn) {
-        flexarray_append_pair(parameters, "addr",
-                              libxl__sprintf(gc, "%x.%x",
-                                             PCI_SLOT(pcidev->vdevfn),
-                                             PCI_FUNC(pcidev->vdevfn)));
+        QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
+                               PCI_SLOT(pcidev->vdevfn), PCI_FUNC(pcidev->vdevfn));
     }
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
-    if (!args)
-        return -1;
 
-    rc = qmp_synchronous_send(qmp, "device_add", &args,
+    rc = qmp_synchronous_send(qmp, "device_add", args,
                               NULL, NULL, qmp->timeout);
     if (rc == 0) {
         rc = qmp_synchronous_send(qmp, "query-pci", NULL,
                                   pci_add_callback, pcidev, qmp->timeout);
     }
 
-    flexarray_free(parameters);
     libxl__qmp_close(qmp);
     return rc;
 }
@@ -847,24 +837,18 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 static int qmp_device_del(libxl__gc *gc, int domid, char *id)
 {
     libxl__qmp_handler *qmp = NULL;
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     int rc = 0;
 
     qmp = libxl__qmp_initialize(gc, domid);
     if (!qmp)
         return ERROR_FAIL;
 
-    parameters = flexarray_make(2, 1);
-    flexarray_append_pair(parameters, "id", id);
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
-    if (!args)
-        return ERROR_NOMEM;
+    qmp_parameters_add_string(gc, &args, "id", id);
 
-    rc = qmp_synchronous_send(qmp, "device_del", &args,
+    rc = qmp_synchronous_send(qmp, "device_del", args,
                               NULL, NULL, qmp->timeout);
 
-    flexarray_free(parameters);
     libxl__qmp_close(qmp);
     return rc;
 }
@@ -882,56 +866,43 @@ int libxl__qmp_pci_del(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename)
 {
     libxl__qmp_handler *qmp = NULL;
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     int rc = 0;
 
     qmp = libxl__qmp_initialize(gc, domid);
     if (!qmp)
         return ERROR_FAIL;
 
-    parameters = flexarray_make(2, 1);
-    if (!parameters) {
-        rc = ERROR_NOMEM;
-        goto out;
-    }
-    flexarray_append_pair(parameters, "filename", (char *)filename);
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
+    qmp_parameters_add_string(gc, &args, "filename", (char *)filename);
     if (!args) {
         rc = ERROR_NOMEM;
-        goto out2;
+        goto out;
     }
 
-    rc = qmp_synchronous_send(qmp, "xen-save-devices-state", &args,
+    rc = qmp_synchronous_send(qmp, "xen-save-devices-state", args,
                               NULL, NULL, qmp->timeout);
 
-out2:
-    flexarray_free(parameters);
 out:
     libxl__qmp_close(qmp);
     return rc;
+
 }
 
 static int qmp_change(libxl__gc *gc, libxl__qmp_handler *qmp,
                       char *device, char *target, char *arg)
 {
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     int rc = 0;
 
-    parameters = flexarray_make(6, 1);
-    flexarray_append_pair(parameters, "device", device);
-    flexarray_append_pair(parameters, "target", target);
-    if (arg)
-        flexarray_append_pair(parameters, "arg", arg);
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
-    if (!args)
-        return ERROR_NOMEM;
+    qmp_parameters_add_string(gc, &args, "device", device);
+    qmp_parameters_add_string(gc, &args, "target", target);
+    if (arg) {
+        qmp_parameters_add_string(gc, &args, "arg", arg);
+    }
 
-    rc = qmp_synchronous_send(qmp, "change", &args,
+    rc = qmp_synchronous_send(qmp, "change", args,
                               NULL, NULL, qmp->timeout);
 
-    flexarray_free(parameters);
     return rc;
 }
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UA-0000oF-N4; Thu, 13 Dec 2012 10:57:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6U9-0000n6-7V
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:57:49 +0000
Received: from [85.158.143.99:34248] by server-2.bemta-4.messagelabs.com id
	AE/C9-30861-CA4B9C05; Thu, 13 Dec 2012 10:57:48 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355396259!23986455!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27564 invoked from network); 13 Dec 2012 10:57:39 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-2.tower-216.messagelabs.com with SMTP;
	13 Dec 2012 10:57:39 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 88392C56193;
	Thu, 13 Dec 2012 10:57:39 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:04 +0000
Message-Id: <1355396228-3183-7-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 06/10] libxl_qmp: Use qmp_parameters_* functions
	for param list of a QMP command.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693133 -3600
: Node ID be5d014f91dfbd67afacc3385c265243794a246f
: Parent  6f7847729f0f42614de516d15257ede7243f995f
---
 tools/libxl/libxl_qmp.c |   89 ++++++++++++++++-------------------------------
 1 files changed, 30 insertions(+), 59 deletions(-)

diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index 827f1b7..605e8f3 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -78,7 +78,7 @@ struct libxl__qmp_handler {
 };
 
 static int qmp_send(libxl__qmp_handler *qmp,
-                    const char *cmd, libxl_key_value_list *args,
+                    const char *cmd, libxl__json_object *args,
                     qmp_callback_t callback, void *opaque,
                     qmp_request_context *context);
 
@@ -503,7 +503,7 @@ static int qmp_next(libxl__gc *gc, libxl__qmp_handler *qmp)
 }
 
 static char *qmp_send_prepare(libxl__gc *gc, libxl__qmp_handler *qmp,
-                              const char *cmd, libxl_key_value_list *args,
+                              const char *cmd, libxl__json_object *args,
                               qmp_callback_t callback, void *opaque,
                               qmp_request_context *context)
 {
@@ -527,7 +527,7 @@ static char *qmp_send_prepare(libxl__gc *gc, libxl__qmp_handler *qmp,
     yajl_gen_integer(hand, ++qmp->last_id_used);
     if (args) {
         libxl__yajl_gen_asciiz(hand, "arguments");
-        libxl_key_value_list_gen_json(hand, args);
+        libxl__json_object_to_yajl_gen(gc, hand, args);
     }
     yajl_gen_map_close(hand);
 
@@ -561,7 +561,7 @@ out:
 }
 
 static int qmp_send(libxl__qmp_handler *qmp,
-                    const char *cmd, libxl_key_value_list *args,
+                    const char *cmd, libxl__json_object *args,
                     qmp_callback_t callback, void *opaque,
                     qmp_request_context *context)
 {
@@ -589,7 +589,7 @@ out:
 }
 
 static int qmp_synchronous_send(libxl__qmp_handler *qmp, const char *cmd,
-                                libxl_key_value_list *args,
+                                libxl__json_object *args,
                                 qmp_callback_t callback, void *opaque,
                                 int ask_timeout)
 {
@@ -624,7 +624,6 @@ static void qmp_free_handler(libxl__qmp_handler *qmp)
     free(qmp);
 }
 
-#if 0
 /*
  * QMP Parameters Helpers
  */
@@ -659,6 +658,7 @@ static void qmp_parameters_add_string(libxl__gc *gc,
     qmp_parameters_common_add(gc, param, name, obj);
 }
 
+#if 0
 static void qmp_parameters_add_bool(libxl__gc *gc,
                                     libxl__json_object **param,
                                     const char *name, bool b)
@@ -669,11 +669,11 @@ static void qmp_parameters_add_bool(libxl__gc *gc,
     obj->u.b = b;
     qmp_parameters_common_add(gc, param, name, obj);
 }
+#endif
 
 #define QMP_PARAMETERS_SPRINTF(args, name, format, ...) \
     qmp_parameters_add_string(gc, args, name, \
                               libxl__sprintf(gc, format, __VA_ARGS__))
-#endif
 
 /*
  * API
@@ -801,8 +801,7 @@ out:
 int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 {
     libxl__qmp_handler *qmp = NULL;
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     char *hostaddr = NULL;
     int rc = 0;
 
@@ -815,31 +814,22 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
     if (!hostaddr)
         return -1;
 
-    parameters = flexarray_make(6, 1);
-    flexarray_append_pair(parameters, "driver", "xen-pci-passthrough");
-    flexarray_append_pair(parameters, "id",
-                          libxl__sprintf(gc, PCI_PT_QDEV_ID,
-                                         pcidev->bus, pcidev->dev,
-                                         pcidev->func));
-    flexarray_append_pair(parameters, "hostaddr", hostaddr);
+    qmp_parameters_add_string(gc, &args, "driver", "xen-pci-passthrough");
+    QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
+                           pcidev->bus, pcidev->dev, pcidev->func);
+    qmp_parameters_add_string(gc, &args, "hostaddr", hostaddr);
     if (pcidev->vdevfn) {
-        flexarray_append_pair(parameters, "addr",
-                              libxl__sprintf(gc, "%x.%x",
-                                             PCI_SLOT(pcidev->vdevfn),
-                                             PCI_FUNC(pcidev->vdevfn)));
+        QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
+                               PCI_SLOT(pcidev->vdevfn), PCI_FUNC(pcidev->vdevfn));
     }
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
-    if (!args)
-        return -1;
 
-    rc = qmp_synchronous_send(qmp, "device_add", &args,
+    rc = qmp_synchronous_send(qmp, "device_add", args,
                               NULL, NULL, qmp->timeout);
     if (rc == 0) {
         rc = qmp_synchronous_send(qmp, "query-pci", NULL,
                                   pci_add_callback, pcidev, qmp->timeout);
     }
 
-    flexarray_free(parameters);
     libxl__qmp_close(qmp);
     return rc;
 }
@@ -847,24 +837,18 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 static int qmp_device_del(libxl__gc *gc, int domid, char *id)
 {
     libxl__qmp_handler *qmp = NULL;
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     int rc = 0;
 
     qmp = libxl__qmp_initialize(gc, domid);
     if (!qmp)
         return ERROR_FAIL;
 
-    parameters = flexarray_make(2, 1);
-    flexarray_append_pair(parameters, "id", id);
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
-    if (!args)
-        return ERROR_NOMEM;
+    qmp_parameters_add_string(gc, &args, "id", id);
 
-    rc = qmp_synchronous_send(qmp, "device_del", &args,
+    rc = qmp_synchronous_send(qmp, "device_del", args,
                               NULL, NULL, qmp->timeout);
 
-    flexarray_free(parameters);
     libxl__qmp_close(qmp);
     return rc;
 }
@@ -882,56 +866,43 @@ int libxl__qmp_pci_del(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename)
 {
     libxl__qmp_handler *qmp = NULL;
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     int rc = 0;
 
     qmp = libxl__qmp_initialize(gc, domid);
     if (!qmp)
         return ERROR_FAIL;
 
-    parameters = flexarray_make(2, 1);
-    if (!parameters) {
-        rc = ERROR_NOMEM;
-        goto out;
-    }
-    flexarray_append_pair(parameters, "filename", (char *)filename);
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
+    qmp_parameters_add_string(gc, &args, "filename", (char *)filename);
     if (!args) {
         rc = ERROR_NOMEM;
-        goto out2;
+        goto out;
     }
 
-    rc = qmp_synchronous_send(qmp, "xen-save-devices-state", &args,
+    rc = qmp_synchronous_send(qmp, "xen-save-devices-state", args,
                               NULL, NULL, qmp->timeout);
 
-out2:
-    flexarray_free(parameters);
 out:
     libxl__qmp_close(qmp);
     return rc;
+
 }
 
 static int qmp_change(libxl__gc *gc, libxl__qmp_handler *qmp,
                       char *device, char *target, char *arg)
 {
-    flexarray_t *parameters = NULL;
-    libxl_key_value_list args = NULL;
+    libxl__json_object *args = NULL;
     int rc = 0;
 
-    parameters = flexarray_make(6, 1);
-    flexarray_append_pair(parameters, "device", device);
-    flexarray_append_pair(parameters, "target", target);
-    if (arg)
-        flexarray_append_pair(parameters, "arg", arg);
-    args = libxl__xs_kvs_of_flexarray(gc, parameters, parameters->count);
-    if (!args)
-        return ERROR_NOMEM;
+    qmp_parameters_add_string(gc, &args, "device", device);
+    qmp_parameters_add_string(gc, &args, "target", target);
+    if (arg) {
+        qmp_parameters_add_string(gc, &args, "arg", arg);
+    }
 
-    rc = qmp_synchronous_send(qmp, "change", &args,
+    rc = qmp_synchronous_send(qmp, "change", args,
                               NULL, NULL, qmp->timeout);
 
-    flexarray_free(parameters);
     return rc;
 }
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UN-0000xa-78; Thu, 13 Dec 2012 10:58:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6UL-0000vR-3i
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:58:01 +0000
Received: from [85.158.139.211:57913] by server-3.bemta-5.messagelabs.com id
	5F/07-25441-8B4B9C05; Thu, 13 Dec 2012 10:58:00 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355396279!16077095!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24565 invoked from network); 13 Dec 2012 10:57:59 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-206.messagelabs.com with SMTP;
	13 Dec 2012 10:57:59 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 7779EC56192;
	Thu, 13 Dec 2012 10:57:38 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:03 +0000
Message-Id: <1355396228-3183-6-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 05/10] libxl_qmp: Introduces helpers to create
	an argument list.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These functions will be used to create a "list" of parameters that
can contain more than just strings. The list is converted by qmp_send
into a string and sent to QEMU.

These functions will be used in the next two patches, so for now
they are not compiled.
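
For reference, a parameter map built with these helpers is serialized
by qmp_send_prepare into a complete QMP command. A minimal sketch of
the resulting wire format (the "id" counter value and the device id
string are illustrative, not taken from a real run):

```json
{
    "execute": "device_del",
    "id": 1,
    "arguments": {
        "id": "pci-pt-03_00.0"
    }
}
```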

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693132 -3600
: Node ID 6f7847729f0f42614de516d15257ede7243f995f
: Parent  74dee58cfc0d2d6594f388db3b4d2ce91d1bb204
---
 tools/libxl/libxl_qmp.c |   51 +++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 51 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index 9e86c35..827f1b7 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -624,6 +624,57 @@ static void qmp_free_handler(libxl__qmp_handler *qmp)
     free(qmp);
 }
 
+#if 0
+/*
+ * QMP Parameters Helpers
+ */
+static void qmp_parameters_common_add(libxl__gc *gc,
+                                      libxl__json_object **param,
+                                      const char *name,
+                                      libxl__json_object *obj)
+{
+    libxl__json_map_node *arg = NULL;
+
+    if (!*param) {
+        *param = libxl__json_object_alloc(gc, JSON_MAP);
+    }
+
+    arg = libxl__zalloc(gc, sizeof(*arg));
+
+    arg->map_key = libxl__strdup(gc, name);
+    arg->obj = obj;
+
+    flexarray_append((*param)->u.map, arg);
+}
+
+static void qmp_parameters_add_string(libxl__gc *gc,
+                                      libxl__json_object **param,
+                                      const char *name, const char *argument)
+{
+    libxl__json_object *obj;
+
+    obj = libxl__json_object_alloc(gc, JSON_STRING);
+    obj->u.string = libxl__strdup(gc, argument);
+
+    qmp_parameters_common_add(gc, param, name, obj);
+}
+
+static void qmp_parameters_add_bool(libxl__gc *gc,
+                                    libxl__json_object **param,
+                                    const char *name, bool b)
+{
+    libxl__json_object *obj;
+
+    obj = libxl__json_object_alloc(gc, JSON_BOOL);
+    obj->u.b = b;
+    qmp_parameters_common_add(gc, param, name, obj);
+}
+
+#define QMP_PARAMETERS_SPRINTF(args, name, format, ...) \
+    qmp_parameters_add_string(gc, args, name, \
+                              libxl__sprintf(gc, format, __VA_ARGS__))
+#endif
+
 /*
  * API
  */
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UL-0000wM-Pd; Thu, 13 Dec 2012 10:58:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6UK-0000uZ-AI
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:58:00 +0000
Received: from [85.158.137.99:7168] by server-12.bemta-3.messagelabs.com id
	5C/AF-27559-7B4B9C05; Thu, 13 Dec 2012 10:57:59 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355396257!19151837!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22764 invoked from network); 13 Dec 2012 10:57:37 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-16.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 10:57:37 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 3DF3EC56191;
	Thu, 13 Dec 2012 10:57:37 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:02 +0000
Message-Id: <1355396228-3183-5-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 04/10] libxl_json: Introduce
	libxl__json_object_to_yajl_gen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function converts a libxl__json_object into yajl output by
calling the appropriate yajl_gen_* function for each node on a
preallocated yajl_gen handle.

This makes it possible to embed a libxl__json_object in an already
existing yajl_gen tree.

This function is used in a later patch.

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693132 -3600
: Node ID 74dee58cfc0d2d6594f388db3b4d2ce91d1bb204
: Parent  3f71aab0e2774ded0c5a03436c364fb031ba9aa0
---
 tools/libxl/libxl_internal.h |    3 ++
 tools/libxl/libxl_json.c     |   61 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 7dbd8af..b00ff61 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1539,6 +1539,9 @@ libxl__json_map_node *libxl__json_map_node_get(const libxl__json_object *o,
 _hidden const libxl__json_object *libxl__json_map_get(const char *key,
                                           const libxl__json_object *o,
                                           libxl__json_node_type expected_type);
+_hidden yajl_status libxl__json_object_to_yajl_gen(libxl__gc *gc_opt,
+                                                   yajl_gen hand,
+                                                   libxl__json_object *param);
 _hidden void libxl__json_object_free(libxl__gc *gc_opt,
                                      libxl__json_object *obj);
 
diff --git a/tools/libxl/libxl_json.c b/tools/libxl/libxl_json.c
index 98db465..72b52e8 100644
--- a/tools/libxl/libxl_json.c
+++ b/tools/libxl/libxl_json.c
@@ -381,6 +381,67 @@ const libxl__json_object *libxl__json_map_get(const char *key,
     return NULL;
 }
 
+yajl_status libxl__json_object_to_yajl_gen(libxl__gc *gc,
+                                           yajl_gen hand,
+                                           libxl__json_object *obj)
+{
+    int idx = 0;
+    yajl_status rc;
+
+    switch (obj->type) {
+    case JSON_NULL:
+        return yajl_gen_null(hand);
+    case JSON_BOOL:
+        return yajl_gen_bool(hand, obj->u.b);
+    case JSON_INTEGER:
+        return yajl_gen_integer(hand, obj->u.i);
+    case JSON_DOUBLE:
+        return yajl_gen_double(hand, obj->u.d);
+    case JSON_NUMBER:
+        return yajl_gen_number(hand, obj->u.string, strlen(obj->u.string));
+    case JSON_STRING:
+        return libxl__yajl_gen_asciiz(hand, obj->u.string);
+    case JSON_MAP: {
+        libxl__json_map_node *node = NULL;
+
+        rc = yajl_gen_map_open(hand);
+        if (rc != yajl_status_ok)
+            return rc;
+        for (idx = 0; idx < obj->u.map->count; idx++) {
+            if (flexarray_get(obj->u.map, idx, (void**)&node) != 0)
+                break;
+
+            rc = libxl__yajl_gen_asciiz(hand, node->map_key);
+            if (rc != yajl_status_ok)
+                return rc;
+            rc = libxl__json_object_to_yajl_gen(gc, hand, node->obj);
+            if (rc != yajl_status_ok)
+                return rc;
+        }
+        return yajl_gen_map_close(hand);
+    }
+    case JSON_ARRAY: {
+        libxl__json_object *node = NULL;
+
+        rc = yajl_gen_array_open(hand);
+        if (rc != yajl_status_ok)
+            return rc;
+        for (idx = 0; idx < obj->u.array->count; idx++) {
+            if (flexarray_get(obj->u.array, idx, (void**)&node) != 0)
+                break;
+            rc = libxl__json_object_to_yajl_gen(gc, hand, node);
+            if (rc != yajl_status_ok)
+                return rc;
+        }
+        return yajl_gen_array_close(hand);
+    }
+    case JSON_ANY:
+        /* JSON_ANY is not a valid value for obj->type. */
+        ;
+    }
+    abort();
+}
+
 
 /*
  * JSON callbacks
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6UL-0000wM-Pd; Thu, 13 Dec 2012 10:58:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6UK-0000uZ-AI
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:58:00 +0000
Received: from [85.158.137.99:7168] by server-12.bemta-3.messagelabs.com id
	5C/AF-27559-7B4B9C05; Thu, 13 Dec 2012 10:57:59 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355396257!19151837!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22764 invoked from network); 13 Dec 2012 10:57:37 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-16.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 10:57:37 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 3DF3EC56191;
	Thu, 13 Dec 2012 10:57:37 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:02 +0000
Message-Id: <1355396228-3183-5-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 04/10] libxl_json: Introduce
	libxl__json_object_to_yajl_gen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function converts a libxl__json_object into YAJL output by calling
the appropriate yajl_gen_* function for each node on a preallocated
yajl_gen handle.

This makes it possible to integrate a json_object into an existing
yajl_gen tree.

This function is used in a later patch.

Backported from xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693132 -3600
: Node ID 74dee58cfc0d2d6594f388db3b4d2ce91d1bb204
: Parent  3f71aab0e2774ded0c5a03436c364fb031ba9aa0
---
 tools/libxl/libxl_internal.h |    3 ++
 tools/libxl/libxl_json.c     |   61 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 7dbd8af..b00ff61 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1539,6 +1539,9 @@ libxl__json_map_node *libxl__json_map_node_get(const libxl__json_object *o,
 _hidden const libxl__json_object *libxl__json_map_get(const char *key,
                                           const libxl__json_object *o,
                                           libxl__json_node_type expected_type);
+_hidden yajl_status libxl__json_object_to_yajl_gen(libxl__gc *gc_opt,
+                                                   yajl_gen hand,
+                                                   libxl__json_object *param);
 _hidden void libxl__json_object_free(libxl__gc *gc_opt,
                                      libxl__json_object *obj);
 
diff --git a/tools/libxl/libxl_json.c b/tools/libxl/libxl_json.c
index 98db465..72b52e8 100644
--- a/tools/libxl/libxl_json.c
+++ b/tools/libxl/libxl_json.c
@@ -381,6 +381,67 @@ const libxl__json_object *libxl__json_map_get(const char *key,
     return NULL;
 }
 
+yajl_status libxl__json_object_to_yajl_gen(libxl__gc *gc,
+                                           yajl_gen hand,
+                                           libxl__json_object *obj)
+{
+    int idx = 0;
+    yajl_status rc;
+
+    switch (obj->type) {
+    case JSON_NULL:
+        return yajl_gen_null(hand);
+    case JSON_BOOL:
+        return yajl_gen_bool(hand, obj->u.b);
+    case JSON_INTEGER:
+        return yajl_gen_integer(hand, obj->u.i);
+    case JSON_DOUBLE:
+        return yajl_gen_double(hand, obj->u.d);
+    case JSON_NUMBER:
+        return yajl_gen_number(hand, obj->u.string, strlen(obj->u.string));
+    case JSON_STRING:
+        return libxl__yajl_gen_asciiz(hand, obj->u.string);
+    case JSON_MAP: {
+        libxl__json_map_node *node = NULL;
+
+        rc = yajl_gen_map_open(hand);
+        if (rc != yajl_status_ok)
+            return rc;
+        for (idx = 0; idx < obj->u.map->count; idx++) {
+            if (flexarray_get(obj->u.map, idx, (void**)&node) != 0)
+                break;
+
+            rc = libxl__yajl_gen_asciiz(hand, node->map_key);
+            if (rc != yajl_status_ok)
+                return rc;
+            rc = libxl__json_object_to_yajl_gen(gc, hand, node->obj);
+            if (rc != yajl_status_ok)
+                return rc;
+        }
+        return yajl_gen_map_close(hand);
+    }
+    case JSON_ARRAY: {
+        libxl__json_object *node = NULL;
+
+        rc = yajl_gen_array_open(hand);
+        if (rc != yajl_status_ok)
+            return rc;
+        for (idx = 0; idx < obj->u.array->count; idx++) {
+            if (flexarray_get(obj->u.array, idx, (void**)&node) != 0)
+                break;
+            rc = libxl__json_object_to_yajl_gen(gc, hand, node);
+            if (rc != yajl_status_ok)
+                return rc;
+        }
+        return yajl_gen_array_close(hand);
+    }
+    case JSON_ANY:
+        /* JSON_ANY is not a valid value for obj->type. */
+        ;
+    }
+    abort();
+}
+
 
 /*
  * JSON callbacks
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 10:58:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 10:58:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6Uc-00019R-Kv; Thu, 13 Dec 2012 10:58:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6Ub-000184-Dv
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:58:17 +0000
Received: from [85.158.137.99:34193] by server-13.bemta-3.messagelabs.com id
	6C/53-00465-8C4B9C05; Thu, 13 Dec 2012 10:58:16 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355396260!16875784!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26810 invoked from network); 13 Dec 2012 10:57:40 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-15.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 10:57:40 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 9482AC56194;
	Thu, 13 Dec 2012 10:57:40 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:57:05 +0000
Message-Id: <1355396228-3183-8-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 07/10] libxl_qmp: Simplify run of single QMP
	commands.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This new function connects to QEMU, sends the command and disconnects.

Backport of xen-unstable patch:
: HG changeset patch
: User Anthony PERARD <anthony.perard@citrix.com>
: Date 1349693134 -3600
: Node ID f3890916496445c97d6778d6c986b0270ff707f2
: Parent  be5d014f91dfbd67afacc3385c265243794a246f
---
 tools/libxl/libxl_qmp.c |   77 +++++++++++++---------------------------------
 1 files changed, 22 insertions(+), 55 deletions(-)

diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index 605e8f3..b09bf13 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -798,6 +798,23 @@ out:
     return rc;
 }
 
+static int qmp_run_command(libxl__gc *gc, int domid,
+                           const char *cmd, libxl__json_object *args,
+                           qmp_callback_t callback, void *opaque)
+{
+    libxl__qmp_handler *qmp = NULL;
+    int rc = 0;
+
+    qmp = libxl__qmp_initialize(gc, domid);
+    if (!qmp)
+        return ERROR_FAIL;
+
+    rc = qmp_synchronous_send(qmp, cmd, args, callback, opaque, qmp->timeout);
+
+    libxl__qmp_close(qmp);
+    return rc;
+}
+
 int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 {
     libxl__qmp_handler *qmp = NULL;
@@ -836,21 +853,10 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 
 static int qmp_device_del(libxl__gc *gc, int domid, char *id)
 {
-    libxl__qmp_handler *qmp = NULL;
     libxl__json_object *args = NULL;
-    int rc = 0;
-
-    qmp = libxl__qmp_initialize(gc, domid);
-    if (!qmp)
-        return ERROR_FAIL;
 
     qmp_parameters_add_string(gc, &args, "id", id);
-
-    rc = qmp_synchronous_send(qmp, "device_del", args,
-                              NULL, NULL, qmp->timeout);
-
-    libxl__qmp_close(qmp);
-    return rc;
+    return qmp_run_command(gc, domid, "device_del", args, NULL, NULL);
 }
 
 int libxl__qmp_pci_del(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
@@ -865,27 +871,10 @@ int libxl__qmp_pci_del(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
 
 int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename)
 {
-    libxl__qmp_handler *qmp = NULL;
     libxl__json_object *args = NULL;
-    int rc = 0;
-
-    qmp = libxl__qmp_initialize(gc, domid);
-    if (!qmp)
-        return ERROR_FAIL;
-
-    qmp_parameters_add_string(gc, &args, "filename", (char *)filename);
-    if (!args) {
-        rc = ERROR_NOMEM;
-        goto out;
-    }
-
-    rc = qmp_synchronous_send(qmp, "xen-save-devices-state", args,
-                              NULL, NULL, qmp->timeout);
-
-out:
-    libxl__qmp_close(qmp);
-    return rc;
 
+    return qmp_run_command(gc, domid, "xen-save-devices-state", args,
+                           NULL, NULL);
 }
 
 static int qmp_change(libxl__gc *gc, libxl__qmp_handler *qmp,
@@ -908,34 +897,12 @@ static int qmp_change(libxl__gc *gc, libxl__qmp_handler *qmp,
 
 int libxl__qmp_stop(libxl__gc *gc, int domid)
 {
-    libxl__qmp_handler *qmp = NULL;
-    int rc = 0;
-
-    qmp = libxl__qmp_initialize(gc, domid);
-    if (!qmp)
-        return ERROR_FAIL;
-
-    rc = qmp_synchronous_send(qmp, "stop", NULL,
-                              NULL, NULL, qmp->timeout);
-
-    libxl__qmp_close(qmp);
-    return rc;
+    return qmp_run_command(gc, domid, "stop", NULL, NULL, NULL);
 }
 
 int libxl__qmp_resume(libxl__gc *gc, int domid)
 {
-    libxl__qmp_handler *qmp = NULL;
-    int rc = 0;
-
-    qmp = libxl__qmp_initialize(gc, domid);
-    if (!qmp)
-        return ERROR_FAIL;
-
-    rc = qmp_synchronous_send(qmp, "cont", NULL,
-                              NULL, NULL, qmp->timeout);
-
-    libxl__qmp_close(qmp);
-    return rc;
+    return qmp_run_command(gc, domid, "cont", NULL, NULL, NULL);
 }
 
 int libxl__qmp_initializations(libxl__gc *gc, uint32_t domid,
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 
From xen-devel-bounces@lists.xen.org Thu Dec 13 11:00:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6WD-0001tz-Er; Thu, 13 Dec 2012 10:59:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tj6WB-0001t2-QS
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:59:56 +0000
Received: from [85.158.138.51:16355] by server-3.bemta-3.messagelabs.com id
	D2/01-31588-A25B9C05; Thu, 13 Dec 2012 10:59:54 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355396393!10055836!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32134 invoked from network); 13 Dec 2012 10:59:54 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 10:59:54 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tj6W7-000JoG-GF; Thu, 13 Dec 2012 10:59:51 +0000
Date: Thu, 13 Dec 2012 10:59:51 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121213105951.GA75286@ocelot.phlegethon.org>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
	on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:35 +0000 on 12 Dec (1355312134), Andrew Cooper wrote:
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,127 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    int cpu = smp_processor_id();
> +
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( cpu != crashing_cpu );

nmi_shootdown_cpus() has a go, but I think it would have to do an atomic
compare-exchange on crashing_cpu to actually be sure that only one CPU
is crashing at a time.  If two CPUs try to lead crashes at the same
time, it will deadlock here, with NMIs disabled on all CPUs.

The old behaviour was also not entirely race-free, but a little more
likely to make progress, as in most cases at least one CPU would see
cpu == crashing_cpu here and return.

Using cmpxchg in nmi_shootdown_cpus() would have the drawback that if
one CPU tried to kexec and wedged, another CPU couldn't then have a go. 
Not sure which of all these unlikely scenarios is worst. :)

> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,45 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli

Aren't interrupts disabled already because we came in through an
interrupt gate?

> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi

Why?  do_nmi_crash() doesn't actually use any of its arguments.
AFAICT, we could pretty much use do_nmi_crash explicitly in the IDT
entry.

> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
> +ENTRY(enable_nmis)
> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        retq

This could be made a bit smaller by something like: 

	popq  %rcx         /* Remember return address */
	movq  %rsp, %rax   /* Grab RSP before pushing */
	
	/* Set up stack frame */
	pushq $0               /* SS */
	pushq %rax             /* RSP */
	pushfq                 /* RFLAGS */
	pushq $__HYPERVISOR_CS /* CS */
	pushq %rcx             /* RIP */
	
	iretq /* Disable the hardware NMI latch and return */

which also keeps the call/return counts balanced.  But tbh I'm not sure
it's worth it.  And I suspect you'll tell me why it's wrong. :)

> +
> +/* No op trap handler.  Required for kexec crash path.  This is not
> + * declared with the ENTRY() macro to avoid wasted alignment space.
> + */
> +.globl trap_nop
> +trap_nop:
> +        iretq

The commit message says this reuses the iretq in enable_nmis().

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:00:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6WD-0001tz-Er; Thu, 13 Dec 2012 10:59:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tj6WB-0001t2-QS
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 10:59:56 +0000
Received: from [85.158.138.51:16355] by server-3.bemta-3.messagelabs.com id
	D2/01-31588-A25B9C05; Thu, 13 Dec 2012 10:59:54 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355396393!10055836!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32134 invoked from network); 13 Dec 2012 10:59:54 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 10:59:54 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tj6W7-000JoG-GF; Thu, 13 Dec 2012 10:59:51 +0000
Date: Thu, 13 Dec 2012 10:59:51 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121213105951.GA75286@ocelot.phlegethon.org>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
	on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:35 +0000 on 12 Dec (1355312134), Andrew Cooper wrote:
> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
> --- a/xen/arch/x86/crash.c
> +++ b/xen/arch/x86/crash.c
> @@ -32,41 +32,127 @@
>  
>  static atomic_t waiting_for_crash_ipi;
>  static unsigned int crashing_cpu;
> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>  
> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
> +/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>  {
> -    /* Don't do anything if this handler is invoked on crashing cpu.
> -     * Otherwise, system will completely hang. Crashing cpu can get
> -     * an NMI if system was initially booted with nmi_watchdog parameter.
> +    int cpu = smp_processor_id();
> +
> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
> +    ASSERT( cpu != crashing_cpu );

nmi_shootdown_cpus() has a go, but I think it would have to do an atomic
compare-exchange on crashing_cpu to actually be sure that only one CPU
is crashing at a time.  If two CPUs try to lead crashes at the same
time, it will deadlock here, with NMIs disabled on all CPUs.

The old behaviour was also not entirely race-free, but a little more
likely to make progress, as in most cases at least one CPU would see
cpu == crashing_cpu here and return.

Using cmpxchg in nmi_shootdown_cpus() would have the drawback that if
one CPU tried to kexec and wedged, another CPU couldn't then have a go. 
Not sure which of all these unlikely scenarios is worst. :)

> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -635,11 +635,45 @@ ENTRY(nmi)
>          movl  $TRAP_nmi,4(%rsp)
>          jmp   handle_ist_exception
>  
> +ENTRY(nmi_crash)
> +        cli

Aren't interrupts disabled already because we came in through an
interrupt gate?

> +        pushq $0
> +        movl $TRAP_nmi,4(%rsp)
> +        SAVE_ALL
> +        movq %rsp,%rdi

Why?  do_nmi_crash() doesn't actually use any of its arguments.
AFAICT, we could pretty much use do_nmi_crash explicitly in the IDT
entry.

> +        callq do_nmi_crash /* Does not return */
> +        ud2
> +
>  ENTRY(machine_check)
>          pushq $0
>          movl  $TRAP_machine_check,4(%rsp)
>          jmp   handle_ist_exception
>  
> +/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
> +ENTRY(enable_nmis)
> +        movq  %rsp, %rax /* Grab RSP before pushing */
> +
> +        /* Set up stack frame */
> +        pushq $0               /* SS */
> +        pushq %rax             /* RSP */
> +        pushfq                 /* RFLAGS */
> +        pushq $__HYPERVISOR_CS /* CS */
> +        leaq  1f(%rip),%rax
> +        pushq %rax             /* RIP */
> +
> +        iretq /* Disable the hardware NMI latch */
> +1:
> +        retq

This could be made a bit smaller by something like: 

	popq  %rcx         /* Remember return address */
	movq  %rsp, %rax   /* Grab RSP before pushing */
	
	/* Set up stack frame */
	pushq $0               /* SS */
	pushq %rax             /* RSP */
	pushfq                 /* RFLAGS */
	pushq $__HYPERVISOR_CS /* CS */
	pushq %rcx             /* RIP */
	
	iretq /* Disable the hardware NMI latch and return */

which also keeps the call/return counts balanced.  But tbh I'm not sure
it's worth it.  And I suspect you'll tell me why it's wrong. :)

> +
> +/* No op trap handler.  Required for kexec crash path.  This is not
> + * declared with the ENTRY() macro to avoid wasted alignment space.
> + */
> +.globl trap_nop
> +trap_nop:
> +        iretq

The commit message says this reuses the iretq in enable_nmis().

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:04:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6aZ-0002fp-Hq; Thu, 13 Dec 2012 11:04:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tj6aY-0002fk-7H
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:04:26 +0000
Received: from [85.158.139.83:48183] by server-6.bemta-5.messagelabs.com id
	00/0B-30498-936B9C05; Thu, 13 Dec 2012 11:04:25 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355396660!29134276!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6791 invoked from network); 13 Dec 2012 11:04:20 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-182.messagelabs.com with SMTP;
	13 Dec 2012 11:04:20 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 1731DC5617B;
	Thu, 13 Dec 2012 11:03:08 +0000 (GMT)
Date: Thu, 13 Dec 2012 11:03:07 +0000
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>
Message-ID: <FBCCA7FECCCD18AFDA45C114@nimrod.local>
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Apologies, I missed the vital information that this is a patch for Xen-4.2,
against current xen-4.2-testing.

--On 13 December 2012 10:56:58 +0000 Alex Bligh <alex@alex.org.uk> wrote:

> This is a COMPILE TESTED ONLY RFC for a backport of the libxl elements
> of:
>
> http://marc.info/?l=xen-devel&m=134944750724252
>
> Stefano Stabellini said he would look at backporting the qemu-xen
> side.
>
> My first question is whether this is the right approach: do we want
> to include the JSON changes too (presumably it changes the API).
> I am not 100% convinced they are necessary to get live migrate
> working.
>
> DO NOT RUN THIS CODE ON ANYTHING YOU CARE ABOUT.
>
>



-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:20:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6q6-0002w9-6w; Thu, 13 Dec 2012 11:20:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1Tj6q2-0002w4-GF
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 11:20:28 +0000
Received: from [85.158.138.51:36045] by server-1.bemta-3.messagelabs.com id
	DD/0B-08906-9F9B9C05; Thu, 13 Dec 2012 11:20:25 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355397424!26939076!1
X-Originating-IP: [209.85.216.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20713 invoked from network); 13 Dec 2012 11:17:05 -0000
Received: from mail-qa0-f43.google.com (HELO mail-qa0-f43.google.com)
	(209.85.216.43)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:17:05 -0000
Received: by mail-qa0-f43.google.com with SMTP id cr7so6487609qab.9
	for <xen-devel@lists.xensource.com>;
	Thu, 13 Dec 2012 03:17:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=d6CLbpz3U1s2Rply4np+/pRJMBBSpGFI79vHFf7uGRE=;
	b=Ey2dSv3faJ4snh7MgYvTrEGWRfvOlWvs6HjMouuyEhL/3HYwQ38gBubpARGIfk8F3S
	ODky8D+/KtRcktgbVU9SZcTJXcr977ZelwWQqHyypecWIAae6TEHtTbAJGGqG0Ksqc/b
	oS6b4MvmBDaTKv5pN9tescMK0PEeFAhtjYd+k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type
	:x-gm-message-state;
	bh=d6CLbpz3U1s2Rply4np+/pRJMBBSpGFI79vHFf7uGRE=;
	b=MbUMQeRijAw7D3Bxi3jLlKGRh9iL1Yx60x86wQnyqleZQJ03BQBED3dpwDNDLcfHgn
	kzsKiBoxoGrPe4ygkJlqDwM8ApeMFReayHqSvOM0+bekGlJwjlgzX/CUg21Ju3auekUr
	9icbaUI3HLeIch7NW7yzgMYqZJOjZww8T195KABDPFve13Vu5vTIWRIoGDJEy1RTypn7
	M3kqqwsoH6Q5tVZDLaBd6I7zpHhtugzciH5piMLtucyJZ5nv5wMMJSSUcUZjx5vT9ics
	fBYLNDS9UqlqFo6VGOlXEO13m/O8lkCxBgFa3rdw/6GdMSGvdAILgurHggLp67kymivv
	xVzg==
Received: by 10.49.96.1 with SMTP id do1mr723503qeb.26.1355397423712; Thu, 13
	Dec 2012 03:17:03 -0800 (PST)
MIME-Version: 1.0
Received: by 10.49.119.202 with HTTP; Thu, 13 Dec 2012 03:16:48 -0800 (PST)
X-Originating-IP: [178.70.154.20]
In-Reply-To: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
References: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Thu, 13 Dec 2012 15:16:48 +0400
X-Google-Sender-Auth: YBTHjZC37XFmtqjEQEk4dmj2Bq4
Message-ID: <CACaajQs9jnZXaNiUmdOLAt1yuz_LowNf_wgkznM_mTS5hj0rJg@mail.gmail.com>
To: William Pitcock <nenolod@dereferenced.org>
X-Gm-Message-State: ALoCoQkB2bcJHyYhNUZ6Nue2hzZ8HE6C/zIpeApr9acnJif/QkJAWH7OQPUMUUfZ7Lbt1kyAMqJG
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Great!

2012/12/13 William Pitcock <nenolod@dereferenced.org>:
> Hello,
>
> I would like to introduce the Python Xen library, which uses libxs and
> libxc directly to provide some manipulation functions for the domains
> running on a hypervisor.
>
> This code is being used in production right now on a semi-private
> cloud, as part of the management backplane.
>
> The code is available from PyPI at:
>    http://pypi.python.org/pypi/Python-Xen
>
> You may also view the code online via Bitbucket:
>   http://bitbucket.org/tortoiselabs/python-xen
>
> The code itself is under the ISC license, so it may be used for any
> use, including commercial hypervisors.  We also accept patches for
> desired functionality which may be missing.
>
> William
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:28:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6xm-000371-6j; Thu, 13 Dec 2012 11:28:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1Tj6xk-00036v-BF
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:28:24 +0000
Received: from [85.158.143.35:2860] by server-1.bemta-4.messagelabs.com id
	FA/24-28401-7DBB9C05; Thu, 13 Dec 2012 11:28:23 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355398084!5045106!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18944 invoked from network); 13 Dec 2012 11:28:06 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:28:06 -0000
Received: by mail-qa0-f45.google.com with SMTP id j15so6388506qaq.11
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 03:28:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=YjICzg18ebEN4J3W/LYvpCKB5lHRSxLpu63FMfS+kTs=;
	b=YvVobKDeywqhVbEX6JmCCvJagvXiyFMw+rSiQMLRqD88HrYBzal8zx30V0QgzDJAMz
	P5CJflzfrg3ltOu2P5oWxBX9UYM286fRnlDeH+R3J1t6oRr6nlg6UxbkcH5z2hEnUxo1
	LmW9iTKJmt+ooUf6J32A2D3uQoO7drrhvBWqk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type
	:x-gm-message-state;
	bh=YjICzg18ebEN4J3W/LYvpCKB5lHRSxLpu63FMfS+kTs=;
	b=YFivdOUrpcNSQE7kOXOTtbjiGSTJohXgsCPCyH8ajDStV19DBYns9GGpxJuBsKuW4X
	y1BsXsOSW2E0JwB0VBcV6tDRbXRttTRUOZEUvpYVXvBXU569QbrDmSLB5iXyMcNcloOf
	HEvWytb6pT/HZRlqzgwb6iud4aAQW9bSJauHtMZ15V0mEOfBdoW7K50R1KtE6c87AMU5
	j0z5+OaNh8AmmM76uOqoIjWXUXT/MBj5ZoLz2BM58cW+ly06g4Nd3ri1YWnlIjWKKdyG
	SibPUl193KT4rH7KSQ4YMiMC9GO9QqfqmUxA4ASyh8SZThUnpy+8frRT0394dIa+vi0x
	PiGw==
Received: by 10.229.115.32 with SMTP id g32mr152306qcq.55.1355398084411; Thu,
	13 Dec 2012 03:28:04 -0800 (PST)
MIME-Version: 1.0
Received: by 10.49.119.202 with HTTP; Thu, 13 Dec 2012 03:27:49 -0800 (PST)
X-Originating-IP: [178.70.154.20]
In-Reply-To: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Thu, 13 Dec 2012 15:27:49 +0400
X-Google-Sender-Auth: fsoQFu0gwEk1GN4gJBlo4HYK3dw
Message-ID: <CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
To: Mats Petersson <mats.petersson@citrix.com>
X-Gm-Message-State: ALoCoQloZa01bbsT9+Y3MFz+qrKT9h8UZOLoxWbQCdkuvzxcJ9S3buFrsCXBqZcDjFDNO6OWw/E2
Cc: Ian.Campbell@citrix.com, konrad.wilk@oracle.com, mats@planetcatfish.com,
	linux-kernel@vger.kernel.org, xen-devel@lists.xen.org,
	david.vrabel@citrix.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If I apply this patch to kernel 3.6.7, do I need to apply any other
patches for it to work?

2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
> One comment asked for more details on the improvements:
> Using a small test program to map Guest memory into Dom0 (repeatedly
> for "Iterations" mapping the same first "Num Pages")
> Iterations    Num Pages    Time 3.7rc4  Time With this patch
> 5000          4096         76.107       37.027
> 10000         2048         75.703       37.177
> 20000         1024         75.893       37.247
> So a little better than twice as fast.
>
> Using this patch in migration, using "time" to measure the overall
> time it take to migrate a guest (Debian Squeeze 6.0, 1024MB guest
> memory, one network card, one disk, guest at login prompt):
> Time 3.7rc5             Time With this patch
> 6.697                   5.667
> Since migration involves a whole lot of other things, it's only about
> 15% faster - but still a good improvement. Similar measurement with a
> guest that is running code to "dirty" memory shows about 23%
> improvement, as it spends more time copying dirtied memory.
>
> As discussed elsewhere, a good deal more can be had from improving the
> munmap system call, but it is a little tricky to get this in without
> worsening non-PVOPS kernel, so I will have another look at this.
>
> ---
> Update since last posting:
> I have just run some benchmarks of a 16GB guest, and the improvement
> with this patch is around 23-30% for the overall copy time, and 42%
> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
> memory). I think this is definitely worth having.
>
> The V4 patch consists of cosmetic adjustments (spelling mistake in
> comment and reversing condition in an if-statement to avoid having an
> "else" branch, a random empty line added by accident now reverted back
> to previous state). Thanks to Jan Beulich for the comments.
>
> The V3 of the patch contains suggested improvements from:
> David Vrabel - make it two distinct external functions, doc-comments.
> Ian Campbell - use one common function for the main work.
> Jan Beulich  - found a bug and pointed out some whitespace problems.
>
>
>
> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>
> ---
>  arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>  drivers/xen/privcmd.c |   55 +++++++++++++++++----
>  include/xen/xen-ops.h |    5 ++
>  3 files changed, 169 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dcf5f2d..a67774f 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>  #define REMAP_BATCH_SIZE 16
>
>  struct remap_data {
> -       unsigned long mfn;
> +       unsigned long *mfn;
> +       bool contiguous;
>         pgprot_t prot;
>         struct mmu_update *mmu_update;
>  };
> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>                                  unsigned long addr, void *data)
>  {
>         struct remap_data *rmd = data;
> -       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
> +       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
> +       /* If we have a contigious range, just update the mfn itself,
> +          else update pointer to be "next mfn". */
> +       if (rmd->contiguous)
> +               (*rmd->mfn)++;
> +       else
> +               rmd->mfn++;
>
>         rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>         rmd->mmu_update->val = pte_val_ma(pte);
> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>         return 0;
>  }
>
> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> -                              unsigned long addr,
> -                              unsigned long mfn, int nr,
> -                              pgprot_t prot, unsigned domid)
> -{
> +/* do_remap_mfn() - helper function to map foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number entries in the MFN array
> + * @err_ptr: pointer to array
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into
> + * this kernel's memory. The owner of the pages is defined by domid. Where the
> + * pages are mapped is determined by addr, and vma is used for "accounting" of
> + * the pages.
> + *
> + * Return value is zero for success, negative for failure.
> + *
> + * Note that err_ptr is used to indicate whether *mfn
> + * is a list or a "first mfn of a contiguous range". */
> +static int do_remap_mfn(struct vm_area_struct *vma,
> +                       unsigned long addr,
> +                       unsigned long *mfn, int nr,
> +                       int *err_ptr, pgprot_t prot,
> +                       unsigned domid)
> +{
> +       int err, last_err = 0;
>         struct remap_data rmd;
>         struct mmu_update mmu_update[REMAP_BATCH_SIZE];
> -       int batch;
>         unsigned long range;
> -       int err = 0;
>
>         if (xen_feature(XENFEAT_auto_translated_physmap))
>                 return -EINVAL;
> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>
>         rmd.mfn = mfn;
>         rmd.prot = prot;
> +       /* We use the err_ptr to indicate if there we are doing a contigious
> +        * mapping or a discontigious mapping. */
> +       rmd.contiguous = !err_ptr;
>
>         while (nr) {
> -               batch = min(REMAP_BATCH_SIZE, nr);
> +               int index = 0;
> +               int done = 0;
> +               int batch = min(REMAP_BATCH_SIZE, nr);
> +               int batch_left = batch;
>                 range = (unsigned long)batch << PAGE_SHIFT;
>
>                 rmd.mmu_update = mmu_update;
> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                 if (err)
>                         goto out;
>
> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> -               if (err < 0)
> -                       goto out;
> +               /* We record the error for each page that gives an error, but
> +                * continue mapping until the whole set is done */
> +               do {
> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
> +                                                   batch_left, &done, domid);
> +                       if (err < 0) {
> +                               if (!err_ptr)
> +                                       goto out;
> +                               /* increment done so we skip the error item */
> +                               done++;
> +                               last_err = err_ptr[index] = err;
> +                       }
> +                       batch_left -= done;
> +                       index += done;
> +               } while (batch_left);
>
>                 nr -= batch;
>                 addr += range;
> +               if (err_ptr)
> +                       err_ptr += batch;
>         }
>
> -       err = 0;
> +       err = last_err;
>  out:
>
>         xen_flush_tlb_all();
>
>         return err;
>  }
> +
> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
> + * @vma:   the vma for the pages to be mapped into
> + * @addr:  the address at which to map the pages
> + * @mfn:   the first MFN to map
> + * @nr:    the number of consecutive mfns to map
> + * @prot:  page protection mask
> + * @domid: id of the domain that we are mapping from
> + *
> + * This function takes an mfn and maps nr pages starting from it into this kernel's
> + * memory. The owner of the pages is defined by domid. Where the pages are
> + * mapped is determined by addr, and vma is used for "accounting" of the
> + * pages.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + */
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              unsigned long mfn, int nr,
> +                              pgprot_t prot, unsigned domid)
> +{
> +       return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
> +}
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to array of integers, one per MFN, for an error
> + *           value for each page. The err_ptr must not be NULL.
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into this
> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
> + * are mapped is determined by addr, and vma is used for "accounting" of the
> + * pages. The err_ptr array is filled in on any page that is not successfully
> + * mapped in.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + * Note that the error value -ENOENT is considered a "retry", so when this
> + * error code is seen, another call should be made with the list of pages that
> + * are marked as -ENOENT in the err_ptr array.
> + */
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              unsigned long *mfn, int nr,
> +                              int *err_ptr, pgprot_t prot,
> +                              unsigned domid)
> +{
> +       /* We BUG_ON because it is a programmer error to pass a NULL err_ptr;
> +        * without it, the eventual symptom of "wrong memory was mapped in"
> +        * would be very hard to trace back to its actual cause.
> +        */
> +       BUG_ON(err_ptr == NULL);
> +       return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 71f5c45..75f6e86 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>         return ret;
>  }
>
> +/*
> + * Similar to traverse_pages, but use each page as a "block" of
> + * data to be processed as one unit.
> + */
> +static int traverse_pages_block(unsigned nelem, size_t size,
> +                               struct list_head *pos,
> +                               int (*fn)(void *data, int nr, void *state),
> +                               void *state)
> +{
> +       void *pagedata;
> +       unsigned pageidx;
> +       int ret = 0;
> +
> +       BUG_ON(size > PAGE_SIZE);
> +
> +       pageidx = PAGE_SIZE;
> +
> +       while (nelem) {
> +               int nr = (PAGE_SIZE/size);
> +               struct page *page;
> +               if (nr > nelem)
> +                       nr = nelem;
> +               pos = pos->next;
> +               page = list_entry(pos, struct page, lru);
> +               pagedata = page_address(page);
> +               ret = (*fn)(pagedata, nr, state);
> +               if (ret)
> +                       break;
> +               nelem -= nr;
> +       }
> +
> +       return ret;
> +}
> +
>  struct mmap_mfn_state {
>         unsigned long va;
>         struct vm_area_struct *vma;
> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>          *      0 for no errors
>          *      1 if at least one error has happened (and no
>          *          -ENOENT errors have happened)
> -        *      -ENOENT if at least 1 -ENOENT has happened.
> +        *      -ENOENT if at least one -ENOENT has happened.
>          */
>         int global_error;
>         /* An array for individual errors */
> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>         xen_pfn_t __user *user_mfn;
>  };
>
> -static int mmap_batch_fn(void *data, void *state)
> +static int mmap_batch_fn(void *data, int nr, void *state)
>  {
>         xen_pfn_t *mfnp = data;
>         struct mmap_batch_state *st = state;
>         int ret;
>
> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -                                        st->vma->vm_page_prot, st->domain);
> +       BUG_ON(nr < 0);
>
> -       /* Store error code for second pass. */
> -       *(st->err++) = ret;
> +       ret = xen_remap_domain_mfn_array(st->vma,
> +                                        st->va & PAGE_MASK,
> +                                        mfnp, nr,
> +                                        st->err,
> +                                        st->vma->vm_page_prot,
> +                                        st->domain);
>
>         /* And see if it affects the global_error. */
>         if (ret < 0) {
> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>                                 st->global_error = 1;
>                 }
>         }
> -       st->va += PAGE_SIZE;
> +       st->va += PAGE_SIZE * nr;
>
>         return 0;
>  }
> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>         state.err           = err_array;
>
>         /* mmap_batch_fn guarantees ret == 0 */
> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> -                            &pagelist, mmap_batch_fn, &state));
> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> +                                   &pagelist, mmap_batch_fn, &state));
>
>         up_write(&mm->mmap_sem);
>
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 6a198e4..22cad75 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                                unsigned long mfn, int nr,
>                                pgprot_t prot, unsigned domid);
>
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              unsigned long *mfn, int nr,
> +                              int *err_ptr, pgprot_t prot,
> +                              unsigned domid);
>  #endif /* INCLUDE_XEN_OPS_H */
> --
> 1.7.9.5
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:28:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj6xm-000371-6j; Thu, 13 Dec 2012 11:28:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1Tj6xk-00036v-BF
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:28:24 +0000
Received: from [85.158.143.35:2860] by server-1.bemta-4.messagelabs.com id
	FA/24-28401-7DBB9C05; Thu, 13 Dec 2012 11:28:23 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355398084!5045106!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18944 invoked from network); 13 Dec 2012 11:28:06 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:28:06 -0000
Received: by mail-qa0-f45.google.com with SMTP id j15so6388506qaq.11
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 03:28:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=YjICzg18ebEN4J3W/LYvpCKB5lHRSxLpu63FMfS+kTs=;
	b=YvVobKDeywqhVbEX6JmCCvJagvXiyFMw+rSiQMLRqD88HrYBzal8zx30V0QgzDJAMz
	P5CJflzfrg3ltOu2P5oWxBX9UYM286fRnlDeH+R3J1t6oRr6nlg6UxbkcH5z2hEnUxo1
	LmW9iTKJmt+ooUf6J32A2D3uQoO7drrhvBWqk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type
	:x-gm-message-state;
	bh=YjICzg18ebEN4J3W/LYvpCKB5lHRSxLpu63FMfS+kTs=;
	b=YFivdOUrpcNSQE7kOXOTtbjiGSTJohXgsCPCyH8ajDStV19DBYns9GGpxJuBsKuW4X
	y1BsXsOSW2E0JwB0VBcV6tDRbXRttTRUOZEUvpYVXvBXU569QbrDmSLB5iXyMcNcloOf
	HEvWytb6pT/HZRlqzgwb6iud4aAQW9bSJauHtMZ15V0mEOfBdoW7K50R1KtE6c87AMU5
	j0z5+OaNh8AmmM76uOqoIjWXUXT/MBj5ZoLz2BM58cW+ly06g4Nd3ri1YWnlIjWKKdyG
	SibPUl193KT4rH7KSQ4YMiMC9GO9QqfqmUxA4ASyh8SZThUnpy+8frRT0394dIa+vi0x
	PiGw==
Received: by 10.229.115.32 with SMTP id g32mr152306qcq.55.1355398084411; Thu,
	13 Dec 2012 03:28:04 -0800 (PST)
MIME-Version: 1.0
Received: by 10.49.119.202 with HTTP; Thu, 13 Dec 2012 03:27:49 -0800 (PST)
X-Originating-IP: [178.70.154.20]
In-Reply-To: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Thu, 13 Dec 2012 15:27:49 +0400
X-Google-Sender-Auth: fsoQFu0gwEk1GN4gJBlo4HYK3dw
Message-ID: <CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
To: Mats Petersson <mats.petersson@citrix.com>
X-Gm-Message-State: ALoCoQloZa01bbsT9+Y3MFz+qrKT9h8UZOLoxWbQCdkuvzxcJ9S3buFrsCXBqZcDjFDNO6OWw/E2
Cc: Ian.Campbell@citrix.com, konrad.wilk@oracle.com, mats@planetcatfish.com,
	linux-kernel@vger.kernel.org, xen-devel@lists.xen.org,
	david.vrabel@citrix.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If I apply this patch to kernel 3.6.7, do I need to apply any other
patches for it to work?

2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
> One comment asked for more details on the improvements:
> Using a small test program to map guest memory into Dom0 (mapping
> the same first "Num Pages" repeatedly, for "Iterations" rounds):
> Iterations    Num Pages    Time 3.7rc4  Time With this patch
> 5000          4096         76.107       37.027
> 10000         2048         75.703       37.177
> 20000         1024         75.893       37.247
> So a little better than twice as fast.
>
> Using this patch in migration, using "time" to measure the overall
> time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
> memory, one network card, one disk, guest at login prompt):
> Time 3.7rc5             Time With this patch
> 6.697                   5.667
> Since migration involves a whole lot of other things, it's only about
> 15% faster - but still a good improvement. Similar measurement with a
> guest that is running code to "dirty" memory shows about 23%
> improvement, as it spends more time copying dirtied memory.
>
> As discussed elsewhere, a good deal more can be had from improving the
> munmap system call, but it is a little tricky to get this in without
> worsening the non-PVOPS kernel, so I will have another look at this.
>
> ---
> Update since last posting:
> I have just run some benchmarks of a 16GB guest, and the improvement
> with this patch is around 23-30% for the overall copy time, and 42%
> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
> memory). I think this is definitely worth having.
>
> The V4 patch consists of cosmetic adjustments (a spelling mistake in a
> comment fixed, a condition in an if-statement reversed to avoid an
> "else" branch, and a stray empty line added by accident reverted).
> Thanks to Jan Beulich for the comments.
>
> The V3 of the patch contains suggested improvements from:
> David Vrabel - make it two distinct external functions, doc-comments.
> Ian Campbell - use one common function for the main work.
> Jan Beulich  - found a bug and pointed out some whitespace problems.
>
>
>
> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>
> ---
>  arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>  drivers/xen/privcmd.c |   55 +++++++++++++++++----
>  include/xen/xen-ops.h |    5 ++
>  3 files changed, 169 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dcf5f2d..a67774f 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>  #define REMAP_BATCH_SIZE 16
>
>  struct remap_data {
> -       unsigned long mfn;
> +       unsigned long *mfn;
> +       bool contiguous;
>         pgprot_t prot;
>         struct mmu_update *mmu_update;
>  };
> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>                                  unsigned long addr, void *data)
>  {
>         struct remap_data *rmd = data;
> -       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
> +       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
> +       /* If we have a contiguous range, just update the mfn itself,
> +          else advance the pointer to the next mfn. */
> +       if (rmd->contiguous)
> +               (*rmd->mfn)++;
> +       else
> +               rmd->mfn++;
>
>         rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>         rmd->mmu_update->val = pte_val_ma(pte);
> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>         return 0;
>  }
>
> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> -                              unsigned long addr,
> -                              unsigned long mfn, int nr,
> -                              pgprot_t prot, unsigned domid)
> -{
> +/* do_remap_mfn() - helper function to map foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to an array of per-page error values (NULL for a contiguous range)
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into
> + * this kernel's memory. The owner of the pages is defined by domid. Where the
> + * pages are mapped is determined by addr, and vma is used for "accounting" of
> + * the pages.
> + *
> + * Return value is zero for success, negative for failure.
> + *
> + * Note that err_ptr is used to indicate whether *mfn
> + * is a list or a "first mfn of a contiguous range". */
> +static int do_remap_mfn(struct vm_area_struct *vma,
> +                       unsigned long addr,
> +                       unsigned long *mfn, int nr,
> +                       int *err_ptr, pgprot_t prot,
> +                       unsigned domid)
> +{
> +       int err, last_err = 0;
>         struct remap_data rmd;
>         struct mmu_update mmu_update[REMAP_BATCH_SIZE];
> -       int batch;
>         unsigned long range;
> -       int err = 0;
>
>         if (xen_feature(XENFEAT_auto_translated_physmap))
>                 return -EINVAL;
> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>
>         rmd.mfn = mfn;
>         rmd.prot = prot;
> +       /* We use err_ptr to indicate whether we are doing a contiguous
> +        * mapping or a discontiguous mapping. */
> +       rmd.contiguous = !err_ptr;
>
>         while (nr) {
> -               batch = min(REMAP_BATCH_SIZE, nr);
> +               int index = 0;
> +               int done = 0;
> +               int batch = min(REMAP_BATCH_SIZE, nr);
> +               int batch_left = batch;
>                 range = (unsigned long)batch << PAGE_SHIFT;
>
>                 rmd.mmu_update = mmu_update;
> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                 if (err)
>                         goto out;
>
> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> -               if (err < 0)
> -                       goto out;
> +               /* Record the error for each page that fails, but
> +                * continue mapping until the whole set is done */
> +               do {
> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
> +                                                   batch_left, &done, domid);
> +                       if (err < 0) {
> +                               if (!err_ptr)
> +                                       goto out;
> +                               /* increment done so we skip the error item */
> +                               done++;
> +                               last_err = err_ptr[index] = err;
> +                       }
> +                       batch_left -= done;
> +                       index += done;
> +               } while (batch_left);
>
>                 nr -= batch;
>                 addr += range;
> +               if (err_ptr)
> +                       err_ptr += batch;
>         }
>
> -       err = 0;
> +       err = last_err;
>  out:
>
>         xen_flush_tlb_all();
>
>         return err;
>  }
> +
> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
> + * @vma:   the vma for the pages to be mapped into
> + * @addr:  the address at which to map the pages
> + * @mfn:   the first MFN to map
> + * @nr:    the number of consecutive mfns to map
> + * @prot:  page protection mask
> + * @domid: id of the domain that we are mapping from
> + *
> + * This function takes an mfn and maps nr pages starting from it into this kernel's
> + * memory. The owner of the pages is defined by domid. Where the pages are
> + * mapped is determined by addr, and vma is used for "accounting" of the
> + * pages.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + */
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              unsigned long mfn, int nr,
> +                              pgprot_t prot, unsigned domid)
> +{
> +       return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
> +}
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to array of integers, one per MFN, for an error
> + *           value for each page. The err_ptr must not be NULL.
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into this
> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
> + * are mapped is determined by addr, and vma is used for "accounting" of the
> + * pages. The err_ptr array is filled in on any page that is not successfully
> + * mapped in.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + * Note that the error value -ENOENT is considered a "retry", so when this
> + * error code is seen, another call should be made with the list of pages that
> + * are marked as -ENOENT in the err_ptr array.
> + */
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              unsigned long *mfn, int nr,
> +                              int *err_ptr, pgprot_t prot,
> +                              unsigned domid)
> +{
> +       /* We BUG_ON because it is a programmer error to pass a NULL err_ptr;
> +        * without it, the eventual symptom of "wrong memory was mapped in"
> +        * would be very hard to trace back to its actual cause.
> +        */
> +       BUG_ON(err_ptr == NULL);
> +       return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 71f5c45..75f6e86 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>         return ret;
>  }
>
> +/*
> + * Similar to traverse_pages, but use each page as a "block" of
> + * data to be processed as one unit.
> + */
> +static int traverse_pages_block(unsigned nelem, size_t size,
> +                               struct list_head *pos,
> +                               int (*fn)(void *data, int nr, void *state),
> +                               void *state)
> +{
> +       void *pagedata;
> +       unsigned pageidx;
> +       int ret = 0;
> +
> +       BUG_ON(size > PAGE_SIZE);
> +
> +       pageidx = PAGE_SIZE;
> +
> +       while (nelem) {
> +               int nr = (PAGE_SIZE/size);
> +               struct page *page;
> +               if (nr > nelem)
> +                       nr = nelem;
> +               pos = pos->next;
> +               page = list_entry(pos, struct page, lru);
> +               pagedata = page_address(page);
> +               ret = (*fn)(pagedata, nr, state);
> +               if (ret)
> +                       break;
> +               nelem -= nr;
> +       }
> +
> +       return ret;
> +}
> +
>  struct mmap_mfn_state {
>         unsigned long va;
>         struct vm_area_struct *vma;
> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>          *      0 for no errors
>          *      1 if at least one error has happened (and no
>          *          -ENOENT errors have happened)
> -        *      -ENOENT if at least 1 -ENOENT has happened.
> +        *      -ENOENT if at least one -ENOENT has happened.
>          */
>         int global_error;
>         /* An array for individual errors */
> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>         xen_pfn_t __user *user_mfn;
>  };
>
> -static int mmap_batch_fn(void *data, void *state)
> +static int mmap_batch_fn(void *data, int nr, void *state)
>  {
>         xen_pfn_t *mfnp = data;
>         struct mmap_batch_state *st = state;
>         int ret;
>
> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -                                        st->vma->vm_page_prot, st->domain);
> +       BUG_ON(nr < 0);
>
> -       /* Store error code for second pass. */
> -       *(st->err++) = ret;
> +       ret = xen_remap_domain_mfn_array(st->vma,
> +                                        st->va & PAGE_MASK,
> +                                        mfnp, nr,
> +                                        st->err,
> +                                        st->vma->vm_page_prot,
> +                                        st->domain);
>
>         /* And see if it affects the global_error. */
>         if (ret < 0) {
> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>                                 st->global_error = 1;
>                 }
>         }
> -       st->va += PAGE_SIZE;
> +       st->va += PAGE_SIZE * nr;
>
>         return 0;
>  }
> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>         state.err           = err_array;
>
>         /* mmap_batch_fn guarantees ret == 0 */
> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> -                            &pagelist, mmap_batch_fn, &state));
> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> +                                   &pagelist, mmap_batch_fn, &state));
>
>         up_write(&mm->mmap_sem);
>
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 6a198e4..22cad75 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                                unsigned long mfn, int nr,
>                                pgprot_t prot, unsigned domid);
>
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              unsigned long *mfn, int nr,
> +                              int *err_ptr, pgprot_t prot,
> +                              unsigned domid);
>  #endif /* INCLUDE_XEN_OPS_H */
> --
> 1.7.9.5
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:32:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj71I-0003Lu-4T; Thu, 13 Dec 2012 11:32:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tj71G-0003Ll-R6
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:32:03 +0000
Received: from [85.158.138.51:18659] by server-10.bemta-3.messagelabs.com id
	94/97-07616-DACB9C05; Thu, 13 Dec 2012 11:31:57 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355398314!20584780!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8314 invoked from network); 13 Dec 2012 11:31:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:31:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="566238"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 11:31:54 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 06:31:54 -0500
Message-ID: <50C9BCA8.3010504@citrix.com>
Date: Thu, 13 Dec 2012 11:31:52 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
	<CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
In-Reply-To: <CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 11:27, Vasiliy Tolstov wrote:
> If I apply this patch to kernel 3.6.7, do I need to apply any other
> patches for it to work?
No, it should "just work". Obviously, if it doesn't, let me know.

--
Mats
>
> 2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
>> One comment asked for more details on the improvements:
>> Using a small test program to map guest memory into Dom0 (mapping
>> the same first "Num Pages" repeatedly, for "Iterations" rounds):
>> Iterations    Num Pages    Time 3.7rc4  Time With this patch
>> 5000          4096         76.107       37.027
>> 10000         2048         75.703       37.177
>> 20000         1024         75.893       37.247
>> So a little better than twice as fast.
>>
>> Using this patch in migration, using "time" to measure the overall
>> time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
>> memory, one network card, one disk, guest at login prompt):
>> Time 3.7rc5             Time With this patch
>> 6.697                   5.667
>> Since migration involves a whole lot of other things, it's only about
>> 15% faster - but still a good improvement. Similar measurement with a
>> guest that is running code to "dirty" memory shows about 23%
>> improvement, as it spends more time copying dirtied memory.
>>
>> As discussed elsewhere, a good deal more can be had from improving the
>> munmap system call, but it is a little tricky to get this in without
>> worsening the non-PVOPS kernel, so I will have another look at this.
>>
>> ---
>> Update since last posting:
>> I have just run some benchmarks of a 16GB guest, and the improvement
>> with this patch is around 23-30% for the overall copy time, and 42%
>> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
>> memory). I think this is definitely worth having.
>>
>> The V4 patch consists of cosmetic adjustments (a spelling mistake in a
>> comment fixed, a condition in an if-statement reversed to avoid an
>> "else" branch, and a stray empty line added by accident reverted).
>> Thanks to Jan Beulich for the comments.
>>
>> The V3 of the patch contains suggested improvements from:
>> David Vrabel - make it two distinct external functions, doc-comments.
>> Ian Campbell - use one common function for the main work.
>> Jan Beulich  - found a bug and pointed out some whitespace problems.
>>
>>
>>
>> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>>
>> ---
>>   arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>>   drivers/xen/privcmd.c |   55 +++++++++++++++++----
>>   include/xen/xen-ops.h |    5 ++
>>   3 files changed, 169 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index dcf5f2d..a67774f 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>>   #define REMAP_BATCH_SIZE 16
>>
>>   struct remap_data {
>> -       unsigned long mfn;
>> +       unsigned long *mfn;
>> +       bool contiguous;
>>          pgprot_t prot;
>>          struct mmu_update *mmu_update;
>>   };
>> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>                                   unsigned long addr, void *data)
>>   {
>>          struct remap_data *rmd = data;
>> -       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
>> +       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
>> +       /* If we have a contigious range, just update the mfn itself,
>> +          else update pointer to be "next mfn". */
>> +       if (rmd->contiguous)
>> +               (*rmd->mfn)++;
>> +       else
>> +               rmd->mfn++;
>>
>>          rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>>          rmd->mmu_update->val = pte_val_ma(pte);
>> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>          return 0;
>>   }
>>
>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> -                              unsigned long addr,
>> -                              unsigned long mfn, int nr,
>> -                              pgprot_t prot, unsigned domid)
>> -{
>> +/* do_remap_mfn() - helper function to map foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number entries in the MFN array
>> + * @err_ptr: pointer to array
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into
>> + * this kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting" of
>> + * the pages.
>> + *
>> + * Return value is zero for success, negative for failure.
>> + *
>> + * Note that err_ptr is used to indicate whether *mfn
>> + * is a list or a "first mfn of a contiguous range". */
>> +static int do_remap_mfn(struct vm_area_struct *vma,
>> +                       unsigned long addr,
>> +                       unsigned long *mfn, int nr,
>> +                       int *err_ptr, pgprot_t prot,
>> +                       unsigned domid)
>> +{
>> +       int err, last_err = 0;
>>          struct remap_data rmd;
>>          struct mmu_update mmu_update[REMAP_BATCH_SIZE];
>> -       int batch;
>>          unsigned long range;
>> -       int err = 0;
>>
>>          if (xen_feature(XENFEAT_auto_translated_physmap))
>>                  return -EINVAL;
>> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>
>>          rmd.mfn = mfn;
>>          rmd.prot = prot;
>> +       /* We use the err_ptr to indicate if there we are doing a contigious
>> +        * mapping or a discontigious mapping. */
>> +       rmd.contiguous = !err_ptr;
>>
>>          while (nr) {
>> -               batch = min(REMAP_BATCH_SIZE, nr);
>> +               int index = 0;
>> +               int done = 0;
>> +               int batch = min(REMAP_BATCH_SIZE, nr);
>> +               int batch_left = batch;
>>                  range = (unsigned long)batch << PAGE_SHIFT;
>>
>>                  rmd.mmu_update = mmu_update;
>> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                  if (err)
>>                          goto out;
>>
>> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
>> -               if (err < 0)
>> -                       goto out;
>> +               /* We record the error for each page that gives an error, but
>> +                * continue mapping until the whole set is done */
>> +               do {
>> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
>> +                                                   batch_left, &done, domid);
>> +                       if (err < 0) {
>> +                               if (!err_ptr)
>> +                                       goto out;
>> +                               /* increment done so we skip the error item */
>> +                               done++;
>> +                               last_err = err_ptr[index] = err;
>> +                       }
>> +                       batch_left -= done;
>> +                       index += done;
>> +               } while (batch_left);
>>
>>                  nr -= batch;
>>                  addr += range;
>> +               if (err_ptr)
>> +                       err_ptr += batch;
>>          }
>>
>> -       err = 0;
>> +       err = last_err;
>>   out:
>>
>>          xen_flush_tlb_all();
>>
>>          return err;
>>   }
>> +
>> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
>> + * @vma:   the vma for the pages to be mapped into
>> + * @addr:  the address at which to map the pages
>> + * @mfn:   the first MFN to map
>> + * @nr:    the number of consecutive mfns to map
>> + * @prot:  page protection mask
>> + * @domid: id of the domain that we are mapping from
>> + *
>> + * This function takes a mfn and maps nr pages on from that into this kernel's
>> + * memory. The owner of the pages is defined by domid. Where the pages are
>> + * mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + */
>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              unsigned long mfn, int nr,
>> +                              pgprot_t prot, unsigned domid)
>> +{
>> +       return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
>> +}
>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>> +
>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number entries in the MFN array
>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>> + *           value for each page. The err_ptr must not be NULL.
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
>> + * are mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages. The err_ptr array is filled in on any page that is not sucessfully
>> + * mapped in.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + * Note that the error value -ENOENT is considered a "retry", so when this
>> + * error code is seen, another call should be made with the list of pages that
>> + * are marked as -ENOENT in the err_ptr array.
>> + */
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              unsigned long *mfn, int nr,
>> +                              int *err_ptr, pgprot_t prot,
>> +                              unsigned domid)
>> +{
>> +       /* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
>> +        * and the consequences later is quite hard to detect what the actual
>> +        * cause of "wrong memory was mapped in".
>> +        */
>> +       BUG_ON(err_ptr == NULL);
>> +       return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
>> +}
>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 71f5c45..75f6e86 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>>          return ret;
>>   }
>>
>> +/*
>> + * Similar to traverse_pages, but use each page as a "block" of
>> + * data to be processed as one unit.
>> + */
>> +static int traverse_pages_block(unsigned nelem, size_t size,
>> +                               struct list_head *pos,
>> +                               int (*fn)(void *data, int nr, void *state),
>> +                               void *state)
>> +{
>> +       void *pagedata;
>> +       unsigned pageidx;
>> +       int ret = 0;
>> +
>> +       BUG_ON(size > PAGE_SIZE);
>> +
>> +       pageidx = PAGE_SIZE;
>> +
>> +       while (nelem) {
>> +               int nr = (PAGE_SIZE/size);
>> +               struct page *page;
>> +               if (nr > nelem)
>> +                       nr = nelem;
>> +               pos = pos->next;
>> +               page = list_entry(pos, struct page, lru);
>> +               pagedata = page_address(page);
>> +               ret = (*fn)(pagedata, nr, state);
>> +               if (ret)
>> +                       break;
>> +               nelem -= nr;
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>>   struct mmap_mfn_state {
>>          unsigned long va;
>>          struct vm_area_struct *vma;
>> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>>           *      0 for no errors
>>           *      1 if at least one error has happened (and no
>>           *          -ENOENT errors have happened)
>> -        *      -ENOENT if at least 1 -ENOENT has happened.
>> +        *      -ENOENT if at least one -ENOENT has happened.
>>           */
>>          int global_error;
>>          /* An array for individual errors */
>> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>>          xen_pfn_t __user *user_mfn;
>>   };
>>
>> -static int mmap_batch_fn(void *data, void *state)
>> +static int mmap_batch_fn(void *data, int nr, void *state)
>>   {
>>          xen_pfn_t *mfnp = data;
>>          struct mmap_batch_state *st = state;
>>          int ret;
>>
>> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -                                        st->vma->vm_page_prot, st->domain);
>> +       BUG_ON(nr < 0);
>>
>> -       /* Store error code for second pass. */
>> -       *(st->err++) = ret;
>> +       ret = xen_remap_domain_mfn_array(st->vma,
>> +                                        st->va & PAGE_MASK,
>> +                                        mfnp, nr,
>> +                                        st->err,
>> +                                        st->vma->vm_page_prot,
>> +                                        st->domain);
>>
>>          /* And see if it affects the global_error. */
>>          if (ret < 0) {
>> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>>                                  st->global_error = 1;
>>                  }
>>          }
>> -       st->va += PAGE_SIZE;
>> +       st->va += PAGE_SIZE * nr;
>>
>>          return 0;
>>   }
>> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>          state.err           = err_array;
>>
>>          /* mmap_batch_fn guarantees ret == 0 */
>> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>> -                            &pagelist, mmap_batch_fn, &state));
>> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>> +                                   &pagelist, mmap_batch_fn, &state));
>>
>>          up_write(&mm->mmap_sem);
>>
>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>> index 6a198e4..22cad75 100644
>> --- a/include/xen/xen-ops.h
>> +++ b/include/xen/xen-ops.h
>> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                                 unsigned long mfn, int nr,
>>                                 pgprot_t prot, unsigned domid);
>>
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              unsigned long *mfn, int nr,
>> +                              int *err_ptr, pgprot_t prot,
>> +                              unsigned domid);
>>   #endif /* INCLUDE_XEN_OPS_H */
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
> --
> Vasiliy Tolstov,
> Clodo.ru
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:32:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj71I-0003Lu-4T; Thu, 13 Dec 2012 11:32:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tj71G-0003Ll-R6
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:32:03 +0000
Received: from [85.158.138.51:18659] by server-10.bemta-3.messagelabs.com id
	94/97-07616-DACB9C05; Thu, 13 Dec 2012 11:31:57 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355398314!20584780!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8314 invoked from network); 13 Dec 2012 11:31:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:31:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="566238"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 11:31:54 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 06:31:54 -0500
Message-ID: <50C9BCA8.3010504@citrix.com>
Date: Thu, 13 Dec 2012 11:31:52 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
	<CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
In-Reply-To: <CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 11:27, Vasiliy Tolstov wrote:
> If I apply this patch to kernel 3.6.7, do I need to apply any other
> patches to make it work?
No, it should "just work". Obviously, if it doesn't, let me know.

--
Mats
>
> 2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
>> One comment asked for more details on the improvements:
>> Using a small test program to map Guest memory into Dom0 (repeatedly
>> for "Iterations" mapping the same first "Num Pages")
>> Iterations    Num Pages    Time 3.7rc4  Time With this patch
>> 5000          4096         76.107       37.027
>> 10000         2048         75.703       37.177
>> 20000         1024         75.893       37.247
>> So a little better than twice as fast.
>>
>> Using this patch in migration, using "time" to measure the overall
>> time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
>> memory, one network card, one disk, guest at login prompt):
>> Time 3.7rc5             Time With this patch
>> 6.697                   5.667
>> Since migration involves a whole lot of other things, it's only about
>> 15% faster - but still a good improvement. Similar measurement with a
>> guest that is running code to "dirty" memory shows about 23%
>> improvement, as it spends more time copying dirtied memory.
>>
>> As discussed elsewhere, a good deal more can be had from improving the
>> munmap system call, but it is a little tricky to get this in without
>> worsening the non-PVOPS kernel, so I will have another look at this.
>>
>> ---
>> Update since last posting:
>> I have just run some benchmarks of a 16GB guest, and the improvement
>> with this patch is around 23-30% for the overall copy time, and 42%
>> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
>> memory). I think this is definitely worth having.
>>
>> The V4 patch consists of cosmetic adjustments (a spelling mistake in a
>> comment fixed, a condition in an if-statement reversed to avoid an
>> "else" branch, and a stray empty line added by accident reverted).
>> Thanks to Jan Beulich for the comments.
>>
>> V3 of the patch contains suggested improvements from:
>> David Vrabel - make it two distinct external functions, doc-comments.
>> Ian Campbell - use one common function for the main work.
>> Jan Beulich  - found a bug and pointed out some whitespace problems.
>>
>>
>>
>> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>>
>> ---
>>   arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>>   drivers/xen/privcmd.c |   55 +++++++++++++++++----
>>   include/xen/xen-ops.h |    5 ++
>>   3 files changed, 169 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index dcf5f2d..a67774f 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>>   #define REMAP_BATCH_SIZE 16
>>
>>   struct remap_data {
>> -       unsigned long mfn;
>> +       unsigned long *mfn;
>> +       bool contiguous;
>>          pgprot_t prot;
>>          struct mmu_update *mmu_update;
>>   };
>> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>                                   unsigned long addr, void *data)
>>   {
>>          struct remap_data *rmd = data;
>> -       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
>> +       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
>> +       /* If we have a contiguous range, just update the mfn itself,
>> +          else advance the pointer to the next mfn. */
>> +       if (rmd->contiguous)
>> +               (*rmd->mfn)++;
>> +       else
>> +               rmd->mfn++;
>>
>>          rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>>          rmd->mmu_update->val = pte_val_ma(pte);
>> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>          return 0;
>>   }
>>
>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> -                              unsigned long addr,
>> -                              unsigned long mfn, int nr,
>> -                              pgprot_t prot, unsigned domid)
>> -{
>> +/* do_remap_mfn() - helper function to map foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number of entries in the MFN array
>> + * @err_ptr: pointer to an array of per-page error codes (may be NULL)
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into
>> + * this kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting" of
>> + * the pages.
>> + *
>> + * Return value is zero for success, negative for failure.
>> + *
>> + * Note that err_ptr is used to indicate whether *mfn
>> + * is a list or a "first mfn of a contiguous range". */
>> +static int do_remap_mfn(struct vm_area_struct *vma,
>> +                       unsigned long addr,
>> +                       unsigned long *mfn, int nr,
>> +                       int *err_ptr, pgprot_t prot,
>> +                       unsigned domid)
>> +{
>> +       int err, last_err = 0;
>>          struct remap_data rmd;
>>          struct mmu_update mmu_update[REMAP_BATCH_SIZE];
>> -       int batch;
>>          unsigned long range;
>> -       int err = 0;
>>
>>          if (xen_feature(XENFEAT_auto_translated_physmap))
>>                  return -EINVAL;
>> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>
>>          rmd.mfn = mfn;
>>          rmd.prot = prot;
>> +       /* We use the err_ptr to indicate whether we are doing a contiguous
>> +        * mapping or a discontiguous mapping. */
>> +       rmd.contiguous = !err_ptr;
>>
>>          while (nr) {
>> -               batch = min(REMAP_BATCH_SIZE, nr);
>> +               int index = 0;
>> +               int done = 0;
>> +               int batch = min(REMAP_BATCH_SIZE, nr);
>> +               int batch_left = batch;
>>                  range = (unsigned long)batch << PAGE_SHIFT;
>>
>>                  rmd.mmu_update = mmu_update;
>> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                  if (err)
>>                          goto out;
>>
>> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
>> -               if (err < 0)
>> -                       goto out;
>> +               /* We record the error for each page that gives an error, but
>> +                * continue mapping until the whole set is done */
>> +               do {
>> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
>> +                                                   batch_left, &done, domid);
>> +                       if (err < 0) {
>> +                               if (!err_ptr)
>> +                                       goto out;
>> +                               /* increment done so we skip the error item */
>> +                               done++;
>> +                               last_err = err_ptr[index] = err;
>> +                       }
>> +                       batch_left -= done;
>> +                       index += done;
>> +               } while (batch_left);
>>
>>                  nr -= batch;
>>                  addr += range;
>> +               if (err_ptr)
>> +                       err_ptr += batch;
>>          }
>>
>> -       err = 0;
>> +       err = last_err;
>>   out:
>>
>>          xen_flush_tlb_all();
>>
>>          return err;
>>   }
>> +
>> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
>> + * @vma:   the vma for the pages to be mapped into
>> + * @addr:  the address at which to map the pages
>> + * @mfn:   the first MFN to map
>> + * @nr:    the number of consecutive mfns to map
>> + * @prot:  page protection mask
>> + * @domid: id of the domain that we are mapping from
>> + *
>> + * This function takes an mfn and maps nr pages starting from it into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting"
>> + * of the pages.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + */
>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              unsigned long mfn, int nr,
>> +                              pgprot_t prot, unsigned domid)
>> +{
>> +       return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
>> +}
>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>> +
>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number of entries in the MFN array
>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>> + *           value for each page. The err_ptr must not be NULL.
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
>> + * are mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages. The err_ptr array is filled in for any page that is not
>> + * successfully mapped in.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + * Note that the error value -ENOENT is considered a "retry", so when this
>> + * error code is seen, another call should be made with the list of pages that
>> + * are marked as -ENOENT in the err_ptr array.
>> + */
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              unsigned long *mfn, int nr,
>> +                              int *err_ptr, pgprot_t prot,
>> +                              unsigned domid)
>> +{
>> +       /* We BUG_ON because passing a NULL err_ptr is a programmer error,
>> +        * and the later symptom, "wrong memory was mapped in", is very
>> +        * hard to trace back to its actual cause.
>> +        */
>> +       BUG_ON(err_ptr == NULL);
>> +       return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
>> +}
>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 71f5c45..75f6e86 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>>          return ret;
>>   }
>>
>> +/*
>> + * Similar to traverse_pages, but use each page as a "block" of
>> + * data to be processed as one unit.
>> + */
>> +static int traverse_pages_block(unsigned nelem, size_t size,
>> +                               struct list_head *pos,
>> +                               int (*fn)(void *data, int nr, void *state),
>> +                               void *state)
>> +{
>> +       void *pagedata;
>> +       unsigned pageidx;
>> +       int ret = 0;
>> +
>> +       BUG_ON(size > PAGE_SIZE);
>> +
>> +       pageidx = PAGE_SIZE;
>> +
>> +       while (nelem) {
>> +               int nr = (PAGE_SIZE/size);
>> +               struct page *page;
>> +               if (nr > nelem)
>> +                       nr = nelem;
>> +               pos = pos->next;
>> +               page = list_entry(pos, struct page, lru);
>> +               pagedata = page_address(page);
>> +               ret = (*fn)(pagedata, nr, state);
>> +               if (ret)
>> +                       break;
>> +               nelem -= nr;
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>>   struct mmap_mfn_state {
>>          unsigned long va;
>>          struct vm_area_struct *vma;
>> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>>           *      0 for no errors
>>           *      1 if at least one error has happened (and no
>>           *          -ENOENT errors have happened)
>> -        *      -ENOENT if at least 1 -ENOENT has happened.
>> +        *      -ENOENT if at least one -ENOENT has happened.
>>           */
>>          int global_error;
>>          /* An array for individual errors */
>> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>>          xen_pfn_t __user *user_mfn;
>>   };
>>
>> -static int mmap_batch_fn(void *data, void *state)
>> +static int mmap_batch_fn(void *data, int nr, void *state)
>>   {
>>          xen_pfn_t *mfnp = data;
>>          struct mmap_batch_state *st = state;
>>          int ret;
>>
>> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -                                        st->vma->vm_page_prot, st->domain);
>> +       BUG_ON(nr < 0);
>>
>> -       /* Store error code for second pass. */
>> -       *(st->err++) = ret;
>> +       ret = xen_remap_domain_mfn_array(st->vma,
>> +                                        st->va & PAGE_MASK,
>> +                                        mfnp, nr,
>> +                                        st->err,
>> +                                        st->vma->vm_page_prot,
>> +                                        st->domain);
>>
>>          /* And see if it affects the global_error. */
>>          if (ret < 0) {
>> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>>                                  st->global_error = 1;
>>                  }
>>          }
>> -       st->va += PAGE_SIZE;
>> +       st->va += PAGE_SIZE * nr;
>>
>>          return 0;
>>   }
>> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>          state.err           = err_array;
>>
>>          /* mmap_batch_fn guarantees ret == 0 */
>> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>> -                            &pagelist, mmap_batch_fn, &state));
>> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>> +                                   &pagelist, mmap_batch_fn, &state));
>>
>>          up_write(&mm->mmap_sem);
>>
>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>> index 6a198e4..22cad75 100644
>> --- a/include/xen/xen-ops.h
>> +++ b/include/xen/xen-ops.h
>> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                                 unsigned long mfn, int nr,
>>                                 pgprot_t prot, unsigned domid);
>>
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              unsigned long *mfn, int nr,
>> +                              int *err_ptr, pgprot_t prot,
>> +                              unsigned domid);
>>   #endif /* INCLUDE_XEN_OPS_H */
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
> --
> Vasiliy Tolstov,
> Clodo.ru
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:39:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj78E-0003Zi-1f; Thu, 13 Dec 2012 11:39:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj78C-0003Zc-9b
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:39:12 +0000
Received: from [85.158.139.83:39559] by server-15.bemta-5.messagelabs.com id
	94/18-20523-F5EB9C05; Thu, 13 Dec 2012 11:39:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355398714!23384653!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13321 invoked from network); 13 Dec 2012 11:38:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:38:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113461"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:38:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:38:33 +0000
Message-ID: <1355398711.10554.90.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Date: Thu, 13 Dec 2012 11:38:31 +0000
In-Reply-To: <1355396228-3183-1-git-send-email-alex@alex.org.uk>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding Ian J, who looks after the tools portion of the stable branches.

I'm personally not convinced that this change is appropriate for a
stable branch backport; it's a pretty large feature patch, after all,
and not a bug fix.

On Thu, 2012-12-13 at 10:56 +0000, Alex Bligh wrote:
> This is a COMPILE TESTED ONLY RFC for a backport of the libxl elements
> of:
> 
> http://marc.info/?l=xen-devel&m=134944750724252
> 
> Stefano Stabellini said he would look at backporting the qemu-xen
> side.
> 
> My first question is whether this is the right approach: do we want
> to include the JSON changes too (presumably they change the API)?
> I am not 100% convinced they are necessary to get live migration
> working.

You certainly need the JSON changes, which let libxl make the QMP call to
enable logdirty in the qemu process. I don't know about the rest;
presumably they only change the internal API and not the external API?

> DO NOT RUN THIS CODE ON ANYTHING YOU CARE ABOUT.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:46:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7FX-00049H-4P; Thu, 13 Dec 2012 11:46:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7FV-000492-LQ
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:46:45 +0000
Received: from [85.158.138.51:44268] by server-1.bemta-3.messagelabs.com id
	39/11-08906-420C9C05; Thu, 13 Dec 2012 11:46:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355399199!24491656!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25136 invoked from network); 13 Dec 2012 11:46:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:46:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113813"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:46:39 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:46:39 +0000
Message-ID: <1355399198.10554.92.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Thu, 13 Dec 2012 11:46:38 +0000
In-Reply-To: <1355343468-28291-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355343468-28291-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v7] libxl: introduce XSM relabel on build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-12 at 20:17 +0000, Daniel De Graaf wrote:
> In response to a suggestion from Jan, I am splitting out independent
> patches from the larger XSM series that I have been posting.  This is
> the only patch from that series that touches the toolstack; it is
> independent of the rest of the series as the hypervisor component has
> already been committed.
> 
> ---------------------8<-------------------------------------------------
> 
> Allow a domain to be built under one security label and run using a
> different label.  This can be used to prevent the domain builder or
> control domain from having the ability to access a guest domain's memory
> via map_foreign_range except during the build process where this is
> required.
> 
> Example domain configuration snippet:
>   seclabel='customer_1:vm_r:nomigrate_t'
>   init_seclabel='customer_1:vm_r:nomigrate_t_building'
> 
> Note: this does not provide complete protection from a malicious dom0;
> mappings created during the build process may persist after the relabel,
> and could be used to indirectly access the guest's memory. However, if
> dom0 correctly unmaps the domain once it is built, the domU is protected
> against dom0 becoming malicious in the future.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>

Acked + applied, thanks.

I'm in two minds about whether we should add a LIBXL_HAVE_<foo> #define.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:47:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7Ff-0004AM-HI; Thu, 13 Dec 2012 11:46:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7Fe-0004AC-Vx
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:46:55 +0000
Received: from [85.158.139.83:25806] by server-15.bemta-5.messagelabs.com id
	C1/DB-20523-E20C9C05; Thu, 13 Dec 2012 11:46:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355399210!22339756!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24039 invoked from network); 13 Dec 2012 11:46:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:46:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113820"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:46:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:46:50 +0000
Message-ID: <1355399209.10554.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 11:46:49 +0000
In-Reply-To: <20680.46260.266332.338851@mariner.uk.xensource.com>
References: <22A70860509BC87BE892A601@nimrod.local>
	<C76A4AA2522F56F728DDCA4A@nimrod.local>
	<1355235125.843.13.camel@zakaz.uk.xensource.com>
	<20680.46260.266332.338851@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Alex Bligh <alex@alex.org.uk>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1 live migration with qemu device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> Subject: libxl: qemu trad logdirty: Tolerate ENOENT on ret path
> 
> It can happen in error conditions that lds->ret_path doesn't exist,
> and libxl__xs_read_checked signals this by setting got_ret=NULL.  If
> this happens, fail without crashing.
> 
> Reported-by: Alex Bligh <alex@alex.org.uk>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:47:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7GM-0004Hp-B1; Thu, 13 Dec 2012 11:47:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7GL-0004HP-4K
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 11:47:37 +0000
Received: from [85.158.139.83:34278] by server-10.bemta-5.messagelabs.com id
	B0/7F-13383-850C9C05; Thu, 13 Dec 2012 11:47:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355399227!29721547!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23888 invoked from network); 13 Dec 2012 11:47:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:47:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113834"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:47:16 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:47:15 +0000
Message-ID: <1355399234.10554.95.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 13 Dec 2012 11:47:14 +0000
In-Reply-To: <1354902355-10209-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354902355-10209-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v4 1/2] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:45 +0000, Stefano Stabellini wrote:
> Get the address of the GIC distributor, cpu, virtual and virtual cpu
> interfaces registers from device tree.
> 
> Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
> and friends because we are using them from mode_switch.S, which is
> executed before the device tree has been parsed. But at least
> mode_switch.S is known to contain vexpress-specific code anyway.
> 
> Changes in v4:
> - return ranges from device_tree_nr_reg_ranges;
> - remove hard tab.
> 
> Changes in v3:
> - printk a message with the GIC interface addresses in gic_init;
> - use strcmp in device_tree_node_compatible;
> - rename device_tree_get_reg_ranges to device_tree_nr_reg_ranges;
> - improve error message in process_gic_node.
> 
> Changes in v2:
> - remove 2 superfluous lines from process_gic_node;
> - introduce device_tree_get_reg_ranges;
> - add a check for uninitialized GIC interface addresses;
> - add a check for non-page aligned GIC interface addresses;
> - remove the code to deal with non-page aligned addresses from GICC and
> GICH.
> 
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:47:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7GM-0004Hp-B1; Thu, 13 Dec 2012 11:47:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7GL-0004HP-4K
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 11:47:37 +0000
Received: from [85.158.139.83:34278] by server-10.bemta-5.messagelabs.com id
	B0/7F-13383-850C9C05; Thu, 13 Dec 2012 11:47:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355399227!29721547!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23888 invoked from network); 13 Dec 2012 11:47:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:47:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113834"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:47:16 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:47:15 +0000
Message-ID: <1355399234.10554.95.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 13 Dec 2012 11:47:14 +0000
In-Reply-To: <1354902355-10209-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354902355-10209-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v4 1/2] xen: get GIC addresses from DT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:45 +0000, Stefano Stabellini wrote:
> Get the address of the GIC distributor, cpu, virtual and virtual cpu
> interfaces registers from device tree.
> 
> Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
> and friends because we are using them from mode_switch.S, which is
> executed before the device tree has been parsed. But at least
> mode_switch.S is known to contain vexpress-specific code anyway.
> 
> Changes in v4:
> - return ranges from device_tree_nr_reg_ranges;
> - remove hard tab.
> 
> Changes in v3:
> - printk a message with the GIC interface addresses in gic_init;
> - use strcmp in device_tree_node_compatible;
> - rename device_tree_get_reg_ranges to device_tree_nr_reg_ranges;
> - improve error message in process_gic_node.
> 
> Changes in v2:
> - remove 2 superfluous lines from process_gic_node;
> - introduce device_tree_get_reg_ranges;
> - add a check for uninitialized GIC interface addresses;
> - add a check for non-page aligned GIC interface addresses;
> - remove the code to deal with non-page aligned addresses from GICC and
> GICH.
> 
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:47:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7GL-0004Hi-VU; Thu, 13 Dec 2012 11:47:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7GL-0004HR-6q
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 11:47:37 +0000
Received: from [85.158.139.83:43738] by server-1.bemta-5.messagelabs.com id
	0D/D0-12813-850C9C05; Thu, 13 Dec 2012 11:47:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355399227!29721547!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23787 invoked from network); 13 Dec 2012 11:47:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:47:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113829"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:47:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:47:07 +0000
Message-ID: <1355399225.10554.94.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 13 Dec 2012 11:47:05 +0000
In-Reply-To: <1354902355-10209-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1354902355-10209-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v4 2/2] xen/arm: use strcmp in
	device_tree_type_matches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-07 at 17:45 +0000, Stefano Stabellini wrote:
> We want to match the exact string rather than the first subset.
> 
> Changes in v4:
> - get rid of len.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:48:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7H1-0004TO-0W; Thu, 13 Dec 2012 11:48:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tj7Gz-0004Sn-Rl
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:48:18 +0000
Received: from [85.158.139.211:43389] by server-12.bemta-5.messagelabs.com id
	26/98-02275-180C9C05; Thu, 13 Dec 2012 11:48:17 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355399236!20239862!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13403 invoked from network); 13 Dec 2012 11:47:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:47:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="567235"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 11:47:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 06:47:16 -0500
Received: from [10.80.239.153]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id 1Tj7Fz-0006uM-QX;
	Thu, 13 Dec 2012 11:47:15 +0000
Message-ID: <50C9C043.7090507@citrix.com>
Date: Thu, 13 Dec 2012 11:47:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121127 Icedove/10.0.11
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
	<20121213105951.GA75286@ocelot.phlegethon.org>
In-Reply-To: <20121213105951.GA75286@ocelot.phlegethon.org>
Cc: Keir <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
 on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 10:59, Tim Deegan wrote:
> At 11:35 +0000 on 12 Dec (1355312134), Andrew Cooper wrote:
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/crash.c
>> --- a/xen/arch/x86/crash.c
>> +++ b/xen/arch/x86/crash.c
>> @@ -32,41 +32,127 @@
>>
>>   static atomic_t waiting_for_crash_ipi;
>>   static unsigned int crashing_cpu;
>> +static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
>>
>> -static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
>> +/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
>> +void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
>>   {
>> -    /* Don't do anything if this handler is invoked on crashing cpu.
>> -     * Otherwise, system will completely hang. Crashing cpu can get
>> -     * an NMI if system was initially booted with nmi_watchdog parameter.
>> +    int cpu = smp_processor_id();
>> +
>> +    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
>> +    ASSERT( cpu != crashing_cpu );
> nmi_shootdown_cpus() has a go, but I think it would have to do an atomic
> compare-exchange on crashing_cpu to actually be sure that only one CPU
> is crashing at a time.  If two CPUs try to lead crashes at the same
> time, it will deadlock here, with NMIs disabled on all CPUs.
>
> The old behaviour was also not entirely race-free, but a little more
> likely to make progress, as in most cases at least one CPU would see
> cpu == crashing_cpu here and return.
>
> Using cmpxchg in nmi_shootdown_cpus() would have the drawback that if
> one CPU tried to kexec and wedged, another CPU couldn't then have a go.
> Not sure which of all these unlikely scenarios is worst. :)

kexec_common_shutdown() calls one_cpu_only(), which gives the impression
that nmi_shootdown_cpus() can only be reached once.  I think this is
race-free for different pcpus attempting to crash at the same time, but
it is certainly not safe for a cascade crash on the same pcpu.

As far as I can reason, this race can be worked around with a combined
atomic set of the KEXEC_IN_PROGRESS flag and of the crashing cpu id.
However, there are also other race conditions along the crash path which
are addressed in subsequent patches that are still in progress.  As the
choice here is between two differently nasty race conditions, I would
prefer to defer fixing it until later and do it properly.

>
>> diff -r ef8c1b607b10 -r 96b068439bc4 xen/arch/x86/x86_64/entry.S
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -635,11 +635,45 @@ ENTRY(nmi)
>>           movl  $TRAP_nmi,4(%rsp)
>>           jmp   handle_ist_exception
>>
>> +ENTRY(nmi_crash)
>> +        cli
> Aren't interrupts disabled already because we came in through an
> interrupt gate?

I was taking the pragmatic approach that in this case we really have no
guarantees about Xen's state.  Having said that, we will (hopefully) have
entered here through a gate, which will be intact, and the function
itself is deliberately safe against interruption.  I shall remove it.

>
>> +        pushq $0
>> +        movl $TRAP_nmi,4(%rsp)
>> +        SAVE_ALL
>> +        movq %rsp,%rdi
> Why?  do_nmi_crash() doesn't actually use any of its arguments.
> AFAICT, we could pretty much use do_nmi_crash explicitly in the IDT
> entry.

I was going with the DECLARE_TRAP_HANDLER() way of doing things, for 
consistency with the other handlers.  I can change it if you wish, but 
as this is not a hotpath, I would err on the lazy side and leave it as 
it is.

>
>> +        callq do_nmi_crash /* Does not return */
>> +        ud2
>> +
>>   ENTRY(machine_check)
>>           pushq $0
>>           movl  $TRAP_machine_check,4(%rsp)
>>           jmp   handle_ist_exception
>>
>> +/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
>> +ENTRY(enable_nmis)
>> +        movq  %rsp, %rax /* Grab RSP before pushing */
>> +
>> +        /* Set up stack frame */
>> +        pushq $0               /* SS */
>> +        pushq %rax             /* RSP */
>> +        pushfq                 /* RFLAGS */
>> +        pushq $__HYPERVISOR_CS /* CS */
>> +        leaq  1f(%rip),%rax
>> +        pushq %rax             /* RIP */
>> +
>> +        iretq /* Disable the hardware NMI latch */
>> +1:
>> +        retq
> This could be made a bit smaller by something like:
>
> 	popq  %rcx         /* Remember return address */
> 	movq  %rsp, %rax   /* Grab RSP before pushing */
> 	
> 	/* Set up stack frame */
> 	pushq $0               /* SS */
> 	pushq %rax             /* RSP */
> 	pushfq                 /* RFLAGS */
> 	pushq $__HYPERVISOR_CS /* CS */
> 	pushq %rcx             /* RIP */
> 	
> 	iretq /* Disable the hardware NMI latch and return */
>
> which also keeps the call/return counts balanced.  But tbh I'm not sure
> it's worth it.  And I suspect you'll tell me why it's wrong. :)

I can't see any problem with it.

>
>> +
>> +/* No op trap handler.  Required for kexec crash path.  This is not
>> + * declared with the ENTRY() macro to avoid wasted alignment space.
>> + */
>> +.globl trap_nop
>> +trap_nop:
>> +        iretq
> The commit message says this reuses the iretq in enable_nmis().
>
> Cheers,
>
> Tim.

Ok - I will fix the stale message.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:48:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7HE-0004ZR-F8; Thu, 13 Dec 2012 11:48:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7HC-0004YY-O3
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:48:31 +0000
Received: from [85.158.139.83:56761] by server-1.bemta-5.messagelabs.com id
	A4/A2-12813-D80C9C05; Thu, 13 Dec 2012 11:48:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355399297!18332343!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18309 invoked from network); 13 Dec 2012 11:48:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:48:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113865"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:48:17 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:48:17 +0000
Message-ID: <1355399295.10554.96.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 11:48:15 +0000
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For future reference please could you include an indication of what
changed in a new posting of a series, either in the 0/N mail (which is
useful to include as an intro in any case) or in the individual
changelogs.

On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
> Add the code base for vtpm-stubdom to the stubdom
> hierarchy. Makefile changes in a later patch.
> 
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  stubdom/vtpm/Makefile    |   37 +++++
>  stubdom/vtpm/minios.cfg  |   14 ++
>  stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm.h      |   36 +++++
>  stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm_cmd.h  |   31 ++++
>  stubdom/vtpm/vtpm_pcrs.c |   43 +++++
>  stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
>  stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpmblk.h   |   31 ++++
>  10 files changed, 1212 insertions(+)
>  create mode 100644 stubdom/vtpm/Makefile
>  create mode 100644 stubdom/vtpm/minios.cfg
>  create mode 100644 stubdom/vtpm/vtpm.c
>  create mode 100644 stubdom/vtpm/vtpm.h
>  create mode 100644 stubdom/vtpm/vtpm_cmd.c
>  create mode 100644 stubdom/vtpm/vtpm_cmd.h
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.c
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.h
>  create mode 100644 stubdom/vtpm/vtpmblk.c
>  create mode 100644 stubdom/vtpm/vtpmblk.h
> 
> diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
> new file mode 100644
> index 0000000..686c0ea
> --- /dev/null
> +++ b/stubdom/vtpm/Makefile
> @@ -0,0 +1,37 @@
> +# Copyright (c) 2010-2012 United States Government, as represented by
> +# the Secretary of Defense.  All rights reserved.
> +#
> +# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> +# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> +# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> +# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> +# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> +# SOFTWARE.
> +#
> +
> +XEN_ROOT=../..
> +
> +PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
> +PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
> +
> +TARGET=vtpm.a
> +OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
> +
> +
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
> +
> +$(TARGET): $(OBJS)
> +       ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
> +
> +$(OBJS): vtpm_manager.h
> +
> +vtpm_manager.h:
> +       ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
> +
> +clean:
> +       -rm $(TARGET) $(OBJS) vtpm_manager.h
> +
> +.PHONY: clean
> diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
> new file mode 100644
> index 0000000..31652ee
> --- /dev/null
> +++ b/stubdom/vtpm/minios.cfg
> @@ -0,0 +1,14 @@
> +CONFIG_TPMFRONT=y
> +CONFIG_TPM_TIS=n
> +CONFIG_TPMBACK=y
> +CONFIG_START_NETWORK=n
> +CONFIG_TEST=n
> +CONFIG_PCIFRONT=n
> +CONFIG_BLKFRONT=y
> +CONFIG_NETFRONT=n
> +CONFIG_FBFRONT=n
> +CONFIG_KBDFRONT=n
> +CONFIG_CONSFRONT=n
> +CONFIG_XENBUS=y
> +CONFIG_LWIP=n
> +CONFIG_XC=n
> diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
> new file mode 100644
> index 0000000..71aef78
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.c
> @@ -0,0 +1,404 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <syslog.h>
> +#include <stdbool.h>
> +#include <errno.h>
> +#include <sys/time.h>
> +#include <xen/xen.h>
> +#include <tpmback.h>
> +#include <tpmfront.h>
> +
> +#include <polarssl/entropy.h>
> +#include <polarssl/ctr_drbg.h>
> +
> +#include "tpm/tpm_emulator_extern.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm.h"
> +#include "vtpm_cmd.h"
> +#include "vtpm_pcrs.h"
> +#include "vtpmblk.h"
> +
> +#define TPM_LOG_INFO LOG_INFO
> +#define TPM_LOG_ERROR LOG_ERR
> +#define TPM_LOG_DEBUG LOG_DEBUG
> +
> +/* Global commandline options - default values */
> +struct Opt_args opt_args = {
> +   .startup = ST_CLEAR,
> +   .loglevel = TPM_LOG_INFO,
> +   .hwinitpcrs = VTPM_PCRNONE,
> +   .tpmconf = 0,
> +   .enable_maint_cmds = false,
> +};
> +
> +static uint32_t badords[32];
> +static unsigned int n_badords = 0;
> +
> +entropy_context entropy;
> +ctr_drbg_context ctr_drbg;
> +
> +struct tpmfront_dev* tpmfront_dev;
> +
> +void vtpm_get_extern_random_bytes(void *buf, size_t nbytes)
> +{
> +   ctr_drbg_random(&ctr_drbg, buf, nbytes);
> +}
> +
> +int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
> +   return read_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_write_to_file(uint8_t *data, size_t data_length) {
> +   return write_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_extern_init_fake(void) {
> +   return 0;
> +}
> +
> +void vtpm_extern_release_fake(void) {
> +}
> +
> +
> +void vtpm_log(int priority, const char *fmt, ...)
> +{
> +   if(opt_args.loglevel >= priority) {
> +      va_list v;
> +      va_start(v, fmt);
> +      vprintf(fmt, v);
> +      va_end(v);
> +   }
> +}
> +
> +static uint64_t vtpm_get_ticks(void)
> +{
> +  static uint64_t old_t = 0;
> +  uint64_t new_t, res_t;
> +  struct timeval tv;
> +  gettimeofday(&tv, NULL);
> +  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
> +  res_t = (old_t > 0) ? new_t - old_t : 0;
> +  old_t = new_t;
> +  return res_t;
> +}
> +
> +
> +static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
> +   UINT32 sz = len;
> +   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
> +   *olen = sz;
> +   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
> +}
> +
> +int init_random(void) {
> +   /* Initialize the rng */
> +   entropy_init(&entropy);
> +   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
> +   entropy_gather(&entropy);
> +   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
> +   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
> +
> +   return 0;
> +}
> +
> +int check_ordinal(tpmcmd_t* tpmcmd) {
> +   TPM_COMMAND_CODE ord;
> +   UINT32 len = 4;
> +   BYTE* ptr;
> +   unsigned int i;
> +
> +   if(tpmcmd->req_len < 10) {
> +      return true;
> +   }
> +
> +   ptr = tpmcmd->req + 6;
> +   tpm_unmarshal_UINT32(&ptr, &len, &ord);
> +
> +   for(i = 0; i < n_badords; ++i) {
> +      if(ord == badords[i]) {
> +         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
> +         return false;
> +      }
> +   }
> +   return true;
> +}
> +
> +static void main_loop(void) {
> +   tpmcmd_t* tpmcmd = NULL;
> +   domid_t domid;              /* Domid of frontend */
> +   unsigned int handle;        /* handle of frontend */
> +   int res = -1;
> +
> +   info("VTPM Initializing\n");
> +
> +   /* Set required tpm config args */
> +   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
> +   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
> +
> +   /* Initialize the emulator */
> +   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
> +
> +   /* Initialize any requested PCRs with hardware TPM values */
> +   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
> +      error("Failed to initialize PCRs with hardware TPM values");
> +      goto abort_postpcrs;
> +   }
> +
> +   /* Wait for the frontend domain to connect */
> +   info("Waiting for frontend domain to connect...");
> +   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
> +      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
> +   } else {
> +      error("Unable to attach to a frontend");
> +      goto abort_postpcrs;
> +   }
> +
> +   tpmcmd = tpmback_req(domid, handle);
> +   while(tpmcmd) {
> +      /* Handle the request */
> +      if(tpmcmd->req_len) {
> +        tpmcmd->resp = NULL;
> +        tpmcmd->resp_len = 0;
> +
> +         /* First check for disabled ordinals */
> +         if(!check_ordinal(tpmcmd)) {
> +            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
> +         }
> +         /* If not disabled, do the command */
> +         else {
> +            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
> +               error("tpm_handle_command() failed");
> +               create_error_response(tpmcmd, TPM_FAIL);
> +            }
> +         }
> +      }
> +
> +      /* Send the response */
> +      tpmback_resp(tpmcmd);
> +
> +      /* Wait for the next request */
> +      tpmcmd = tpmback_req(domid, handle);
> +
> +   }
> +
> +abort_postpcrs:
> +   info("VTPM Shutting down\n");
> +
> +   tpm_emulator_shutdown();
> +}
> +
> +int parse_cmd_line(int argc, char** argv)
> +{
> +   char sval[25];
> +   char* logstr = "info"; /* default; matches opt_args.loglevel */
> +   /* Parse the command strings */
> +   for(unsigned int i = 1; i < argc; ++i) {
> +      if (sscanf(argv[i], "loglevel=%24s", sval) == 1) {
> +        if (!strcmp(sval, "debug")) {
> +           opt_args.loglevel = TPM_LOG_DEBUG;
> +           logstr = "debug";
> +        }
> +        else if (!strcmp(sval, "info")) {
> +           logstr = "info";
> +           opt_args.loglevel = TPM_LOG_INFO;
> +        }
> +        else if (!strcmp(sval, "error")) {
> +           logstr = "error";
> +           opt_args.loglevel = TPM_LOG_ERROR;
> +        }
> +      }
> +      else if (!strcmp(argv[i], "clear")) {
> +        opt_args.startup = ST_CLEAR;
> +      }
> +      else if (!strcmp(argv[i], "save")) {
> +        opt_args.startup = ST_SAVE;
> +      }
> +      else if (!strcmp(argv[i], "deactivated")) {
> +        opt_args.startup = ST_DEACTIVATED;
> +      }
> +      else if (!strncmp(argv[i], "maintcmds=", 10)) {
> +         if(!strcmp(argv[i] + 10, "1")) {
> +            opt_args.enable_maint_cmds = true;
> +         } else if(!strcmp(argv[i] + 10, "0")) {
> +            opt_args.enable_maint_cmds = false;
> +         }
> +      }
> +      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
> +         char *pch = argv[i] + 10;
> +         unsigned int v1, v2;
> +         pch = strtok(pch, ",");
> +         while(pch != NULL) {
> +            if(!strcmp(pch, "all")) {
> +               //Set all
> +               opt_args.hwinitpcrs = VTPM_PCRALL;
> +            } else if(!strcmp(pch, "none")) {
> +               //Set none
> +               opt_args.hwinitpcrs = VTPM_PCRNONE;
> +            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
> +               //Set range; must be matched before the single-index
> +               //case, which would also accept "v1-v2" and stop at v1
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               if(v2 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v2);
> +                  return -1;
> +               }
> +               if(v2 < v1) {
> +                  unsigned tp = v1;
> +                  v1 = v2;
> +                  v2 = tp;
> +               }
> +               for(unsigned int j = v1; j <= v2; ++j) {
> +                  opt_args.hwinitpcrs |= (1 << j);
> +               }
> +            } else if(sscanf(pch, "%u", &v1) == 1) {
> +               //Set one
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               opt_args.hwinitpcrs |= (1 << v1);
> +            } else {
> +               error("hwinitpcr error: Invalid PCR specification: %s", pch);
> +               return -1;
> +            }
> +            pch = strtok(NULL, ",");
> +         }
> +      }
> +      else {
> +        error("Invalid command line option `%s'", argv[i]);
> +      }
> +
> +   }
> +
> +   /* Check Errors and print results */
> +   switch(opt_args.startup) {
> +      case ST_CLEAR:
> +        info("Startup mode is `clear'");
> +        break;
> +      case ST_SAVE:
> +        info("Startup mode is `save'");
> +        break;
> +      case ST_DEACTIVATED:
> +        info("Startup mode is `deactivated'");
> +        break;
> +      default:
> +        error("Invalid startup mode %d", opt_args.startup);
> +        return -1;
> +   }
> +
> +   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
> +   {
> +      char pcrstr[1024];
> +      char* ptr = pcrstr;
> +
> +      pcrstr[0] = '\0';
> +      info("The following PCRs will be initialized with values from the hardware TPM:");
> +      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +         if(opt_args.hwinitpcrs & (1 << i)) {
> +            ptr += sprintf(ptr, "%u, ", i);
> +         }
> +      }
> +      /* strip the trailing ", "; the guard above guarantees at least
> +       * one PCR number was printed */
> +      *(ptr - 2) = '\0';
> +
> +      info("\t%s", pcrstr);
> +   } else {
> +      info("All PCRs initialized to default values");
> +   }
> +
> +   if(!opt_args.enable_maint_cmds) {
> +      info("TPM Maintenance Commands disabled");
> +      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
> +      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
> +      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
> +   } else {
> +      info("TPM Maintenance Commands enabled");
> +   }
> +
> +   info("Log level set to %s", logstr);
> +
> +   return 0;
> +}
> +
> +void cleanup_opt_args(void) {
> +}
> +
> +int main(int argc, char **argv)
> +{
> +   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
> +   sleep(2);
> +
> +   /* Setup extern function pointers */
> +   tpm_extern_init = vtpm_extern_init_fake;
> +   tpm_extern_release = vtpm_extern_release_fake;
> +   tpm_malloc = malloc;
> +   tpm_free = free;
> +   tpm_log = vtpm_log;
> +   tpm_get_ticks = vtpm_get_ticks;
> +   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
> +   tpm_write_to_storage = vtpm_write_to_file;
> +   tpm_read_from_storage = vtpm_read_from_file;
> +
> +   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
> +   if(parse_cmd_line(argc, argv)) {
> +      error("Error parsing command line\n");
> +      return -1;
> +   }
> +
> +   /* Initialize devices */
> +   init_tpmback();
> +   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
> +      error("Unable to initialize tpmfront device");
> +      goto abort_posttpmfront;
> +   }
> +
> +   /* Seed the RNG with entropy from hardware TPM */
> +   if(init_random()) {
> +      error("Unable to initialize RNG");
> +      goto abort_postrng;
> +   }
> +
> +   /* Initialize blkfront device */
> +   if(init_vtpmblk(tpmfront_dev)) {
> +      error("Unable to initialize Blkfront persistent storage");
> +      goto abort_postvtpmblk;
> +   }
> +
> +   /* Run main loop */
> +   main_loop();
> +
> +   /* Shutdown blkfront */
> +   shutdown_vtpmblk();
> +abort_postvtpmblk:
> +abort_postrng:
> +
> +   /* Close devices */
> +   shutdown_tpmfront(tpmfront_dev);
> +abort_posttpmfront:
> +   shutdown_tpmback();
> +
> +   cleanup_opt_args();
> +
> +   return 0;
> +}
> diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
> new file mode 100644
> index 0000000..5919e44
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.h
> @@ -0,0 +1,36 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_H
> +#define VTPM_H
> +
> +#include <stdbool.h>
> +
> +/* For testing */
> +#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
> +#define VERS_CMD_LEN 22
> +
> +/* Global commandline options */
> +struct Opt_args {
> +   enum StartUp {
> +      ST_CLEAR = 1,
> +      ST_SAVE = 2,
> +      ST_DEACTIVATED = 3
> +   } startup;
> +   unsigned long hwinitpcrs;
> +   int loglevel;
> +   uint32_t tpmconf;
> +   bool enable_maint_cmds;
> +};
> +extern struct Opt_args opt_args;
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
> new file mode 100644
> index 0000000..7eae98b
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.c
> @@ -0,0 +1,256 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <types.h>
> +#include <xen/xen.h>
> +#include <mm.h>
> +#include <gnttab.h>
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_manager.h"
> +#include "vtpm_cmd.h"
> +#include <tpmback.h>
> +
> +#define TRYFAILGOTO(C) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      goto abort_egress; \
> +   }
> +#define TRYFAILGOTOMSG(C, msg) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      error(msg); \
> +      goto abort_egress; \
> +   }
> +#define CHECKSTATUSGOTO(ret, fname) \
> +   if((ret) != TPM_SUCCESS) { \
> +      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
> +      status = ret; \
> +      goto abort_egress; \
> +   }
> +
> +#define ERR_MALFORMED "Malformed response from backend"
> +#define ERR_TPMFRONT "Error sending command through frontend device"
> +
> +struct shpage {
> +   void* page;
> +   grant_ref_t grantref;
> +};
> +
> +typedef struct shpage shpage_t;
> +
> +static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord)
> +{
> +   return *bptr == NULL ||
> +        tpm_marshal_UINT16(bptr, len, tag) ||
> +        tpm_marshal_UINT32(bptr, len, size) ||
> +        tpm_marshal_UINT32(bptr, len, ord);
> +}
> +
> +static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord)
> +{
> +   return *bptr == NULL ||
> +        tpm_unmarshal_UINT16(bptr, len, tag) ||
> +        tpm_unmarshal_UINT32(bptr, len, size) ||
> +        tpm_unmarshal_UINT32(bptr, len, ord);
> +}
> +
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode)
> +{
> +   TPM_TAG tag;
> +   UINT32 len = tpmcmd->req_len;
> +   uint8_t* respptr;
> +   uint8_t* cmdptr = tpmcmd->req;
> +
> +   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
> +      switch (tag) {
> +         case TPM_TAG_RQU_COMMAND:
> +            tag = TPM_TAG_RSP_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH1_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH1_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH2_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH2_COMMAND;
> +            break;
> +      }
> +   } else {
> +      tag = TPM_TAG_RSP_COMMAND;
> +   }
> +
> +   tpmcmd->resp_len = len = 10;
> +   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
> +
> +   return pack_header(&respptr, &len, tag, len, errorcode);
> +}
> +
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Ask the real tpm for random bytes for the seed */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm command */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
> +
> +   /* Send cmd, wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
> +
> +   // Get the number of random bytes in the response
> +   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
> +   *numbytes = size;
> +
> +   //Get the random bytes out; the TPM may give us fewer bytes than we want
> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> +
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
> +{
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +
> +   /* Send the command to vtpm_manager */
> +   info("Requesting Encryption key from backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
> +
> +   /* Get the size of the key */
> +   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
> +
> +   /* Copy the key bits */
> +   *data = malloc(*data_length);
> +   memcpy(*data, bptr, *data_length);
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_LoadHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length)
> +{
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   memcpy(bptr, data, data_length);
> +   bptr += data_length;
> +
> +   /* Send the command to vtpm_manager */
> +   info("Sending encryption key to backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_SaveHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest)
> +{
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t *cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Just send a TPM_PCRRead Command to the HW tpm */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm cmd */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
> +
> +   /*Send Cmd wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
> +
> +   //Get the ptr value
> +   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
> new file mode 100644
> index 0000000..b0bfa22
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef MANAGER_H
> +#define MANAGER_H
> +
> +#include <tpmfront.h>
> +#include <tpmback.h>
> +#include "tpm/tpm_structures.h"
> +
> +/* Create a command response error header */
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
> +/* Request random bytes from hardware tpm, returns 0 on success */
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
> +/* Retrieve 256 bit AES encryption key from manager */
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
> +/* Manager securely saves our 256 bit AES encryption key */
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
> +/* Send a TPM_PCRRead command through the manager to the hardware TPM */
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
> new file mode 100644
> index 0000000..22a6cef
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.c
> @@ -0,0 +1,43 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include "vtpm_pcrs.h"
> +#include "vtpm_cmd.h"
> +#include "tpm/tpm_data.h"
> +
> +#define PCR_VALUE      tpmData.permanent.data.pcrValue
> +
> +static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
> +   if(pcrIndex >= TPM_NUM_PCR) {
> +      return TPM_BADINDEX;
> +   }
> +   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
> +   return TPM_SUCCESS;
> +}
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs)
> +{
> +   TPM_RESULT rc = TPM_SUCCESS;
> +   uint8_t digest[sizeof(TPM_PCRVALUE)];
> +
> +   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +      if(pcrs & (1UL << i)) {
> +         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
> +            error("TPM_PCRRead failed with error: %d", rc);
> +            return rc;
> +         }
> +         write_pcr_direct(i, digest);
> +      }
> +   }
> +
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
> new file mode 100644
> index 0000000..11835f9
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_PCRS_H
> +#define VTPM_PCRS_H
> +
> +#include "tpm/tpm_structures.h"
> +
> +#define VTPM_PCR0 (1 << 0)
> +#define VTPM_PCR1 (1 << 1)
> +#define VTPM_PCR2 (1 << 2)
> +#define VTPM_PCR3 (1 << 3)
> +#define VTPM_PCR4 (1 << 4)
> +#define VTPM_PCR5 (1 << 5)
> +#define VTPM_PCR6 (1 << 6)
> +#define VTPM_PCR7 (1 << 7)
> +#define VTPM_PCR8 (1 << 8)
> +#define VTPM_PCR9 (1 << 9)
> +#define VTPM_PCR10 (1 << 10)
> +#define VTPM_PCR11 (1 << 11)
> +#define VTPM_PCR12 (1 << 12)
> +#define VTPM_PCR13 (1 << 13)
> +#define VTPM_PCR14 (1 << 14)
> +#define VTPM_PCR15 (1 << 15)
> +#define VTPM_PCR16 (1 << 16)
> +#define VTPM_PCR17 (1 << 17)
> +#define VTPM_PCR18 (1 << 18)
> +#define VTPM_PCR19 (1 << 19)
> +#define VTPM_PCR20 (1 << 20)
> +#define VTPM_PCR21 (1 << 21)
> +#define VTPM_PCR22 (1 << 22)
> +#define VTPM_PCR23 (1 << 23)
> +
> +#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
> +#define VTPM_PCRNONE 0
> +
> +#define VTPM_NUMPCRS 24
> +
> +struct tpmfront_dev;
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
> +
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
> new file mode 100644
> index 0000000..b343bd8
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.c
> @@ -0,0 +1,307 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <mini-os/byteorder.h>
> +#include "vtpmblk.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_cmd.h"
> +#include "polarssl/aes.h"
> +#include "polarssl/sha1.h"
> +#include <blkfront.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +
> +/*Encryption key and block sizes */
> +#define BLKSZ 16
> +
> +static struct blkfront_dev* blkdev = NULL;
> +static int blkfront_fd = -1;
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
> +{
> +   struct blkfront_info blkinfo;
> +   info("Initializing persistent NVM storage\n");
> +
> +   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
> +      error("BLKIO: ERROR Unable to initialize blkfront");
> +      return -1;
> +   }
> +   if ((blkinfo.info & VDISK_READONLY) || blkinfo.mode != O_RDWR) {
> +      error("BLKIO: ERROR block device is read only!");
> +      goto error;
> +   }
> +   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
> +      error("Unable to open blkfront file descriptor!");
> +      goto error;
> +   }
> +
> +   return 0;
> +error:
> +   shutdown_blkfront(blkdev);
> +   blkdev = NULL;
> +   return -1;
> +}
> +
> +void shutdown_vtpmblk(void)
> +{
> +   close(blkfront_fd);
> +   blkfront_fd = -1;
> +   blkdev = NULL;
> +}
> +
> +int write_vtpmblk_raw(uint8_t *data, size_t data_length)
> +{
> +   int rc;
> +   uint32_t lenbuf;
> +   debug("Begin Write data=%p len=%zu", data, data_length);
> +
> +   lenbuf = cpu_to_be32((uint32_t)data_length);
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("write(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
> +      error("write(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Wrote %zu bytes to NVM persistent storage", data_length);
> +
> +   return 0;
> +}
> +
> +int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
> +{
> +   int rc;
> +   uint32_t lenbuf;
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("read(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   *data_length = (size_t) be32_to_cpu(lenbuf);
> +   if(*data_length == 0) {
> +      error("read 0 data_length for NVM");
> +      return -1;
> +   }
> +
> +   *data = tpm_malloc(*data_length);
> +   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
> +      error("read(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Read %zu bytes from NVM persistent storage", *data_length);
> +   return 0;
> +}
> +
> +int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
> +{
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   aes_context aes_ctx;
> +   UINT32 temp;
> +   int mod;
> +
> +   uint8_t* clbuf = NULL;
> +
> +   uint8_t* ivptr;
> +   int ivlen;
> +
> +   uint8_t* cptr;      //Cipher block pointer
> +   int clen;   //Cipher block length
> +
> +   /*Create a new 256 bit encryption key */
> +   if(symkey == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
> +
> +   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
> +   temp = sizeof(UINT32);
> +   ivlen = BLKSZ - temp;
> +   tpm_get_extern_random_bytes(iv, ivlen);
> +   ivptr = iv + ivlen;
> +   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
> +
> +   /*The clear text needs to be padded out to a multiple of BLKSZ */
> +   mod = clear_len % BLKSZ;
> +   clen = mod ? clear_len + BLKSZ - mod : clear_len;
> +   clbuf = malloc(clen);
> +   if (clbuf == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   memcpy(clbuf, clear, clear_len);
> +   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
> +   if(clen - clear_len) {
> +      memset(clbuf + clear_len, 0, clen - clear_len);
> +   }
> +
> +   /* Setup the ciphertext buffer */
> +   *cipher_len = BLKSZ + clen;         /*iv + ciphertext */
> +   cptr = *cipher = malloc(*cipher_len);
> +   if (*cipher == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Copy the IV to cipher text blob*/
> +   memcpy(cptr, iv, BLKSZ);
> +   cptr += BLKSZ;
> +
> +   /* Setup encryption */
> +   aes_setkey_enc(&aes_ctx, symkey, 256);
> +
> +   /* Do encryption now */
> +   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(clbuf);
> +   return rc;
> +}
> +int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
> +{
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   uint8_t* ivptr;
> +   UINT32 u32, temp;
> +   aes_context aes_ctx;
> +
> +   uint8_t* cptr = cipher;     //cipher block pointer
> +   int clen = cipher_len;      //cipher block length
> +
> +   /* Pull out the initialization vector */
> +   memcpy(iv, cipher, BLKSZ);
> +   cptr += BLKSZ;
> +   clen -= BLKSZ;
> +
> +   /* Setup the clear text buffer */
> +   if((*clear = malloc(clen)) == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Get the length of clear text from last 4 bytes of iv */
> +   temp = sizeof(UINT32);
> +   ivptr = iv + BLKSZ - temp;
> +   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
> +   *clear_len = u32;
> +
> +   /* Setup decryption */
> +   aes_setkey_dec(&aes_ctx, symkey, 256);
> +
> +   /* Do decryption now */
> +   if ((clen % BLKSZ) != 0) {
> +      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   return rc;
> +}
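
The encrypt/decrypt pair above uses a small trick worth spelling out: the 16-byte CBC IV is 12 random bytes followed by the big-endian clear-text length, and the clear text is zero-padded up to the AES block size before encryption. Below is a standalone sketch of just that length-in-IV packing and the padding arithmetic; the helper names are illustrative, not from the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BLK 16  /* AES block size, matching BLKSZ in the patch */

/* Store the clear-text length big-endian in the IV's last 4 bytes,
 * as encrypt_vtpmblk does via tpm_marshal_UINT32. */
static void pack_len_iv(uint8_t iv[BLK], uint32_t clear_len)
{
    iv[BLK - 4] = (uint8_t)(clear_len >> 24);
    iv[BLK - 3] = (uint8_t)(clear_len >> 16);
    iv[BLK - 2] = (uint8_t)(clear_len >> 8);
    iv[BLK - 1] = (uint8_t)clear_len;
}

/* Recover the length, as decrypt_vtpmblk does via tpm_unmarshal_UINT32. */
static uint32_t unpack_len_iv(const uint8_t iv[BLK])
{
    return ((uint32_t)iv[BLK - 4] << 24) | ((uint32_t)iv[BLK - 3] << 16) |
           ((uint32_t)iv[BLK - 2] << 8)  |  (uint32_t)iv[BLK - 1];
}

/* Size of the zero-padded CBC input: clear_len rounded up to BLK. */
static size_t padded_len(size_t clear_len)
{
    size_t mod = clear_len % BLK;
    return mod ? clear_len + BLK - mod : clear_len;
}
```

The on-disk blob is then IV || ciphertext, so decryption can always split off the first block and trust the embedded length.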
> +
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   uint8_t hashkey[HASHKEYSZ];
> +   uint8_t* symkey = hashkey + HASHSZ;
> +
> +   /* Encrypt the data */
> +   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
> +      goto abort_egress;
> +   }
> +   /* Write to disk */
> +   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
> +      goto abort_egress;
> +   }
> +   /* Get sha1 hash of data */
> +   sha1(cipher, cipher_len, hashkey);
> +
> +   /* Send hash and key to manager */
> +   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   return rc;
> +}
> +
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   size_t keysize;
> +   uint8_t* hashkey = NULL;
> +   uint8_t hash[HASHSZ];
> +   uint8_t* symkey;
> +
> +   /* Retrieve the hash and the key from the manager */
> +   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   if(keysize != HASHKEYSZ) {
> +      error("Manager returned a hashkey of invalid size! expected %d, actual %d", HASHKEYSZ, (int) keysize);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   symkey = hashkey + HASHSZ;
> +
> +   /* Read from disk now */
> +   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
> +      goto abort_egress;
> +   }
> +
> +   /* Compute the hash of the cipher text and compare */
> +   sha1(cipher, cipher_len, hash);
> +   if(memcmp(hash, hashkey, HASHSZ)) {
> +      int i;
> +      error("NVM Storage Checksum failed!");
> +      printf("Expected: ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hashkey[i]);
> +      }
> +      printf("\n");
> +      printf("Actual:   ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hash[i]);
> +      }
> +      printf("\n");
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Decrypt the blob */
> +   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   free(hashkey);
> +   return rc;
> +}
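
Both functions above rely on the same convention for the 52-byte blob exchanged with the manager: the SHA1 digest of the on-disk ciphertext occupies the first HASHSZ bytes and the AES-256 key the remaining NVMKEYSZ bytes (see vtpmblk.h below). A tiny sketch naming the two regions; the accessor names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define NVMKEYSZ 32                      /* AES-256 key bytes */
#define HASHSZ   20                      /* SHA1 digest bytes */
#define HASHKEYSZ (NVMKEYSZ + HASHSZ)    /* blob sent to the manager */

/* Digest of the ciphertext lives at the front of the blob. */
static const uint8_t *hk_hash(const uint8_t *hashkey)
{
    return hashkey;
}

/* Symmetric key follows it, matching "symkey = hashkey + HASHSZ" above. */
static const uint8_t *hk_symkey(const uint8_t *hashkey)
{
    return hashkey + HASHSZ;
}
```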
> diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
> new file mode 100644
> index 0000000..282ce6a
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef NVM_H
> +#define NVM_H
> +#include <mini-os/types.h>
> +#include <xen/xen.h>
> +#include <tpmfront.h>
> +
> +#define NVMKEYSZ 32
> +#define HASHSZ 20
> +#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
> +void shutdown_vtpmblk(void);
> +
> +/* Encrypts and writes data to blk device */
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
> +/* Reads, Decrypts, and returns data from blk device */
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
> +
> +#endif
> --
> 1.7.10.4
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:48:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7HE-0004ZR-F8; Thu, 13 Dec 2012 11:48:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7HC-0004YY-O3
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:48:31 +0000
Received: from [85.158.139.83:56761] by server-1.bemta-5.messagelabs.com id
	A4/A2-12813-D80C9C05; Thu, 13 Dec 2012 11:48:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355399297!18332343!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18309 invoked from network); 13 Dec 2012 11:48:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:48:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="113865"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 11:48:17 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 11:48:17 +0000
Message-ID: <1355399295.10554.96.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 11:48:15 +0000
In-Reply-To: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For future reference, please could you include an indication of what
changed in a new posting of a series, either in the 0/N mail (which is
useful to include as an intro in any case) or in the individual
changelogs.

On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
> Add the code base for vtpm-stubdom to the stubdom
> hierarchy. Makefile changes in a later patch.
> 
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  stubdom/vtpm/Makefile    |   37 +++++
>  stubdom/vtpm/minios.cfg  |   14 ++
>  stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm.h      |   36 +++++
>  stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm_cmd.h  |   31 ++++
>  stubdom/vtpm/vtpm_pcrs.c |   43 +++++
>  stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
>  stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpmblk.h   |   31 ++++
>  10 files changed, 1212 insertions(+)
>  create mode 100644 stubdom/vtpm/Makefile
>  create mode 100644 stubdom/vtpm/minios.cfg
>  create mode 100644 stubdom/vtpm/vtpm.c
>  create mode 100644 stubdom/vtpm/vtpm.h
>  create mode 100644 stubdom/vtpm/vtpm_cmd.c
>  create mode 100644 stubdom/vtpm/vtpm_cmd.h
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.c
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.h
>  create mode 100644 stubdom/vtpm/vtpmblk.c
>  create mode 100644 stubdom/vtpm/vtpmblk.h
> 
> diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
> new file mode 100644
> index 0000000..686c0ea
> --- /dev/null
> +++ b/stubdom/vtpm/Makefile
> @@ -0,0 +1,37 @@
> +# Copyright (c) 2010-2012 United States Government, as represented by
> +# the Secretary of Defense.  All rights reserved.
> +#
> +# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> +# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> +# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> +# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> +# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> +# SOFTWARE.
> +#
> +
> +XEN_ROOT=../..
> +
> +PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
> +PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
> +
> +TARGET=vtpm.a
> +OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
> +
> +
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
> +
> +$(TARGET): $(OBJS)
> +       ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
> +
> +$(OBJS): vtpm_manager.h
> +
> +vtpm_manager.h:
> +       ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
> +
> +clean:
> +       -rm $(TARGET) $(OBJS) vtpm_manager.h
> +
> +.PHONY: clean
> diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
> new file mode 100644
> index 0000000..31652ee
> --- /dev/null
> +++ b/stubdom/vtpm/minios.cfg
> @@ -0,0 +1,14 @@
> +CONFIG_TPMFRONT=y
> +CONFIG_TPM_TIS=n
> +CONFIG_TPMBACK=y
> +CONFIG_START_NETWORK=n
> +CONFIG_TEST=n
> +CONFIG_PCIFRONT=n
> +CONFIG_BLKFRONT=y
> +CONFIG_NETFRONT=n
> +CONFIG_FBFRONT=n
> +CONFIG_KBDFRONT=n
> +CONFIG_CONSFRONT=n
> +CONFIG_XENBUS=y
> +CONFIG_LWIP=n
> +CONFIG_XC=n
> diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
> new file mode 100644
> index 0000000..71aef78
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.c
> @@ -0,0 +1,404 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <syslog.h>
> +#include <stdbool.h>
> +#include <errno.h>
> +#include <sys/time.h>
> +#include <xen/xen.h>
> +#include <tpmback.h>
> +#include <tpmfront.h>
> +
> +#include <polarssl/entropy.h>
> +#include <polarssl/ctr_drbg.h>
> +
> +#include "tpm/tpm_emulator_extern.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm.h"
> +#include "vtpm_cmd.h"
> +#include "vtpm_pcrs.h"
> +#include "vtpmblk.h"
> +
> +#define TPM_LOG_INFO LOG_INFO
> +#define TPM_LOG_ERROR LOG_ERR
> +#define TPM_LOG_DEBUG LOG_DEBUG
> +
> +/* Global commandline options - default values */
> +struct Opt_args opt_args = {
> +   .startup = ST_CLEAR,
> +   .loglevel = TPM_LOG_INFO,
> +   .hwinitpcrs = VTPM_PCRNONE,
> +   .tpmconf = 0,
> +   .enable_maint_cmds = false,
> +};
> +
> +static uint32_t badords[32];
> +static unsigned int n_badords = 0;
> +
> +entropy_context entropy;
> +ctr_drbg_context ctr_drbg;
> +
> +struct tpmfront_dev* tpmfront_dev;
> +
> +void vtpm_get_extern_random_bytes(void *buf, size_t nbytes)
> +{
> +   ctr_drbg_random(&ctr_drbg, buf, nbytes);
> +}
> +
> +int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
> +   return read_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_write_to_file(uint8_t *data, size_t data_length) {
> +   return write_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_extern_init_fake(void) {
> +   return 0;
> +}
> +
> +void vtpm_extern_release_fake(void) {
> +}
> +
> +
> +void vtpm_log(int priority, const char *fmt, ...)
> +{
> +   if(opt_args.loglevel >= priority) {
> +      va_list v;
> +      va_start(v, fmt);
> +      vprintf(fmt, v);
> +      va_end(v);
> +   }
> +}
> +
> +static uint64_t vtpm_get_ticks(void)
> +{
> +  static uint64_t old_t = 0;
> +  uint64_t new_t, res_t;
> +  struct timeval tv;
> +  gettimeofday(&tv, NULL);
> +  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
> +  res_t = (old_t > 0) ? new_t - old_t : 0;
> +  old_t = new_t;
> +  return res_t;
> +}
> +
> +
> +static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
> +   UINT32 sz = len;
> +   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
> +   *olen = sz;
> +   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
> +}
> +
> +int init_random(void) {
> +   /* Initialize the rng */
> +   entropy_init(&entropy);
> +   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
> +   entropy_gather(&entropy);
> +   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
> +   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
> +
> +   return 0;
> +}
> +
> +int check_ordinal(tpmcmd_t* tpmcmd) {
> +   TPM_COMMAND_CODE ord;
> +   UINT32 len = 4;
> +   BYTE* ptr;
> +   unsigned int i;
> +
> +   if(tpmcmd->req_len < 10) {
> +      return true;
> +   }
> +
> +   ptr = tpmcmd->req + 6;
> +   tpm_unmarshal_UINT32(&ptr, &len, &ord);
> +
> +   for(i = 0; i < n_badords; ++i) {
> +      if(ord == badords[i]) {
> +         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
> +         return false;
> +      }
> +   }
> +   return true;
> +}
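
check_ordinal() pulls the command ordinal out of the standard TPM 1.2 request header: 2 bytes of tag, a 4-byte total size, then the 4-byte big-endian ordinal at offset 6. A minimal sketch of that extraction; the function name is illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Read the big-endian ordinal from bytes 6..9 of a TPM 1.2 request
 * (tag:2 || paramSize:4 || ordinal:4). Returns 0 if the buffer is too
 * short, mirroring check_ordinal's req_len < 10 guard. */
static uint32_t tpm_req_ordinal(const uint8_t *req, size_t len)
{
    if (len < 10)
        return 0;
    return ((uint32_t)req[6] << 24) | ((uint32_t)req[7] << 16) |
           ((uint32_t)req[8] << 8)  |  (uint32_t)req[9];
}
```

For example, a TPM_GetRandom request (ordinal 0x46) carries 00 00 00 46 at offset 6.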
> +
> +static void main_loop(void) {
> +   tpmcmd_t* tpmcmd = NULL;
> +   domid_t domid;              /* Domid of frontend */
> +   unsigned int handle;        /* handle of frontend */
> +   int res = -1;
> +
> +   info("VTPM Initializing\n");
> +
> +   /* Set required tpm config args */
> +   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
> +   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
> +
> +   /* Initialize the emulator */
> +   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
> +
> +   /* Initialize any requested PCRs with hardware TPM values */
> +   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
> +      error("Failed to initialize PCRs with hardware TPM values");
> +      goto abort_postpcrs;
> +   }
> +
> +   /* Wait for the frontend domain to connect */
> +   info("Waiting for frontend domain to connect..");
> +   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
> +      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
> +   } else {
> +      error("Unable to attach to a frontend");
> +      goto abort_postpcrs;
> +   }
> +
> +   tpmcmd = tpmback_req(domid, handle);
> +   while(tpmcmd) {
> +      /* Handle the request */
> +      if(tpmcmd->req_len) {
> +        tpmcmd->resp = NULL;
> +        tpmcmd->resp_len = 0;
> +
> +         /* First check for disabled ordinals */
> +         if(!check_ordinal(tpmcmd)) {
> +            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
> +         }
> +         /* If not disabled, do the command */
> +         else {
> +            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
> +               error("tpm_handle_command() failed");
> +               create_error_response(tpmcmd, TPM_FAIL);
> +            }
> +         }
> +      }
> +
> +      /* Send the response */
> +      tpmback_resp(tpmcmd);
> +
> +      /* Wait for the next request */
> +      tpmcmd = tpmback_req(domid, handle);
> +
> +   }
> +
> +abort_postpcrs:
> +   info("VTPM Shutting down\n");
> +
> +   tpm_emulator_shutdown();
> +}
> +
> +int parse_cmd_line(int argc, char** argv)
> +{
> +   char sval[25];
> +   char* logstr = NULL;
> +   /* Parse the command strings */
> +   for(int i = 1; i < argc; ++i) {
> +      if (sscanf(argv[i], "loglevel=%25s", sval) == 1){
> +        if (!strcmp(sval, "debug")) {
> +           opt_args.loglevel = TPM_LOG_DEBUG;
> +           logstr = "debug";
> +        }
> +        else if (!strcmp(sval, "info")) {
> +           logstr = "info";
> +           opt_args.loglevel = TPM_LOG_INFO;
> +        }
> +        else if (!strcmp(sval, "error")) {
> +           logstr = "error";
> +           opt_args.loglevel = TPM_LOG_ERROR;
> +        }
> +      }
> +      else if (!strcmp(argv[i], "clear")) {
> +        opt_args.startup = ST_CLEAR;
> +      }
> +      else if (!strcmp(argv[i], "save")) {
> +        opt_args.startup = ST_SAVE;
> +      }
> +      else if (!strcmp(argv[i], "deactivated")) {
> +        opt_args.startup = ST_DEACTIVATED;
> +      }
> +      else if (!strncmp(argv[i], "maintcmds=", 10)) {
> +         if(!strcmp(argv[i] + 10, "1")) {
> +            opt_args.enable_maint_cmds = true;
> +         } else if(!strcmp(argv[i] + 10, "0")) {
> +            opt_args.enable_maint_cmds = false;
> +         }
> +      }
> +      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
> +         char *pch = argv[i] + 10;
> +         unsigned int v1, v2;
> +         pch = strtok(pch, ",");
> +         while(pch != NULL) {
> +            if(!strcmp(pch, "all")) {
> +               //Set all
> +               opt_args.hwinitpcrs = VTPM_PCRALL;
> +            } else if(!strcmp(pch, "none")) {
> +               //Set none
> +               opt_args.hwinitpcrs = VTPM_PCRNONE;
> +            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
> +               //Set range - must be tested before the single index case,
> +               //since sscanf("%u") also matches the leading number of a range
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               if(v2 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v2);
> +                  return -1;
> +               }
> +               if(v2 < v1) {
> +                  unsigned tp = v1;
> +                  v1 = v2;
> +                  v2 = tp;
> +               }
> +               for(unsigned int i = v1; i <= v2; ++i) {
> +                  opt_args.hwinitpcrs |= (1 << i);
> +               }
> +            } else if(sscanf(pch, "%u", &v1) == 1) {
> +               //Set one
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               opt_args.hwinitpcrs |= (1 << v1);
> +            } else {
> +               error("hwinitpcr error: Invalid PCR specification : %s", pch);
> +               return -1;
> +            }
> +            pch = strtok(NULL, ",");
> +         }
> +      }
> +      else {
> +        error("Invalid command line option `%s'", argv[i]);
> +      }
> +
> +   }
> +
> +   /* Check Errors and print results */
> +   switch(opt_args.startup) {
> +      case ST_CLEAR:
> +        info("Startup mode is `clear'");
> +        break;
> +      case ST_SAVE:
> +        info("Startup mode is `save'");
> +        break;
> +      case ST_DEACTIVATED:
> +        info("Startup mode is `deactivated'");
> +        break;
> +      default:
> +        error("Invalid startup mode %d", opt_args.startup);
> +        return -1;
> +   }
> +
> +   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
> +   {
> +      char pcrstr[1024];
> +      char* ptr = pcrstr;
> +
> +      pcrstr[0] = '\0';
> +      info("The following PCRs will be initialized with values from the hardware TPM:");
> +      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +         if(opt_args.hwinitpcrs & (1 << i)) {
> +            ptr += sprintf(ptr, "%u, ", i);
> +         }
> +      }
> +      /* strip the trailing ", " - the outer if guarantees at least one
> +       * index was printed */
> +      *(ptr - 2) = '\0';
> +
> +      info("\t%s", pcrstr);
> +   } else {
> +      info("All PCRs initialized to default values");
> +   }
> +
> +   if(!opt_args.enable_maint_cmds) {
> +      info("TPM Maintenance Commands disabled");
> +      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
> +      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
> +      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
> +   } else {
> +      info("TPM Maintenance Commands enabled");
> +   }
> +
> +   info("Log level set to %s", logstr ? logstr : "info");
> +
> +   return 0;
> +}
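
The hwinitpcr= handling above folds "all", "none", single indices and ranges into one bitmask. Here is a compact standalone sketch of the same logic, assuming 24 PCRs as in TPM 1.2; note the range pattern must be matched before the single-index one, since sscanf("%u") also accepts the leading number of "2-5":

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NPCR 24  /* TPM 1.2 PCR count, standing in for TPM_NUM_PCR */

/* Map "all", "none", "3" or "2-5" to a PCR bitmask; ~0ul on error. */
static unsigned long pcr_spec_to_mask(const char *spec)
{
    unsigned v1, v2;
    if (!strcmp(spec, "all"))  return (1ul << NPCR) - 1;
    if (!strcmp(spec, "none")) return 0;
    if (sscanf(spec, "%u-%u", &v1, &v2) == 2) {    /* range first */
        if (v1 >= NPCR || v2 >= NPCR) return ~0ul;
        if (v2 < v1) { unsigned t = v1; v1 = v2; v2 = t; }
        unsigned long m = 0;
        for (unsigned i = v1; i <= v2; ++i) m |= 1ul << i;
        return m;
    }
    if (sscanf(spec, "%u", &v1) == 1)              /* single index */
        return v1 < NPCR ? 1ul << v1 : ~0ul;
    return ~0ul;
}
```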
> +
> +void cleanup_opt_args(void) {
> +}
> +
> +int main(int argc, char **argv)
> +{
> +   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
> +   sleep(2);
> +
> +   /* Setup extern function pointers */
> +   tpm_extern_init = vtpm_extern_init_fake;
> +   tpm_extern_release = vtpm_extern_release_fake;
> +   tpm_malloc = malloc;
> +   tpm_free = free;
> +   tpm_log = vtpm_log;
> +   tpm_get_ticks = vtpm_get_ticks;
> +   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
> +   tpm_write_to_storage = vtpm_write_to_file;
> +   tpm_read_from_storage = vtpm_read_from_file;
> +
> +   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
> +   if(parse_cmd_line(argc, argv)) {
> +      error("Error parsing commandline\n");
> +      return -1;
> +   }
> +
> +   /* Initialize devices */
> +   init_tpmback();
> +   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
> +      error("Unable to initialize tpmfront device");
> +      goto abort_posttpmfront;
> +   }
> +
> +   /* Seed the RNG with entropy from hardware TPM */
> +   if(init_random()) {
> +      error("Unable to initialize RNG");
> +      goto abort_postrng;
> +   }
> +
> +   /* Initialize blkfront device */
> +   if(init_vtpmblk(tpmfront_dev)) {
> +      error("Unable to initialize Blkfront persistent storage");
> +      goto abort_postvtpmblk;
> +   }
> +
> +   /* Run main loop */
> +   main_loop();
> +
> +   /* Shutdown blkfront */
> +   shutdown_vtpmblk();
> +abort_postvtpmblk:
> +abort_postrng:
> +
> +   /* Close devices */
> +   shutdown_tpmfront(tpmfront_dev);
> +abort_posttpmfront:
> +   shutdown_tpmback();
> +
> +   cleanup_opt_args();
> +
> +   return 0;
> +}
> diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
> new file mode 100644
> index 0000000..5919e44
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.h
> @@ -0,0 +1,36 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_H
> +#define VTPM_H
> +
> +#include <stdbool.h>
> +
> +/* For testing */
> +#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
> +#define VERS_CMD_LEN 22
> +
> +/* Global commandline options */
> +struct Opt_args {
> +   enum StartUp {
> +      ST_CLEAR = 1,
> +      ST_SAVE = 2,
> +      ST_DEACTIVATED = 3
> +   } startup;
> +   unsigned long hwinitpcrs;
> +   int loglevel;
> +   uint32_t tpmconf;
> +   bool enable_maint_cmds;
> +};
> +extern struct Opt_args opt_args;
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
> new file mode 100644
> index 0000000..7eae98b
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.c
> @@ -0,0 +1,256 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <types.h>
> +#include <xen/xen.h>
> +#include <mm.h>
> +#include <gnttab.h>
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_manager.h"
> +#include "vtpm_cmd.h"
> +#include <tpmback.h>
> +
> +#define TRYFAILGOTO(C) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      goto abort_egress; \
> +   }
> +#define TRYFAILGOTOMSG(C, msg) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      error(msg); \
> +      goto abort_egress; \
> +   }
> +#define CHECKSTATUSGOTO(ret, fname) \
> +   if((ret) != TPM_SUCCESS) { \
> +      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
> +      status = ord; \
> +      goto abort_egress; \
> +   }
> +
> +#define ERR_MALFORMED "Malformed response from backend"
> +#define ERR_TPMFRONT "Error sending command through frontend device"
> +
> +struct shpage {
> +   void* page;
> +   grant_ref_t grantref;
> +};
> +
> +typedef struct shpage shpage_t;
> +
> +static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord)
> +{
> +   return *bptr == NULL ||
> +        tpm_marshal_UINT16(bptr, len, tag) ||
> +        tpm_marshal_UINT32(bptr, len, size) ||
> +        tpm_marshal_UINT32(bptr, len, ord);
> +}
> +
> +static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord)
> +{
> +   return *bptr == NULL ||
> +        tpm_unmarshal_UINT16(bptr, len, tag) ||
> +        tpm_unmarshal_UINT32(bptr, len, size) ||
> +        tpm_unmarshal_UINT32(bptr, len, ord);
> +}
> +
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode)
> +{
> +   TPM_TAG tag;
> +   UINT32 len = tpmcmd->req_len;
> +   uint8_t* respptr;
> +   uint8_t* cmdptr = tpmcmd->req;
> +
> +   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
> +      switch (tag) {
> +         case TPM_TAG_RQU_COMMAND:
> +            tag = TPM_TAG_RSP_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH1_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH1_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH2_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH2_COMMAND;
> +            break;
> +      }
> +   } else {
> +      tag = TPM_TAG_RSP_COMMAND;
> +   }
> +
> +   tpmcmd->resp_len = len = 10;
> +   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
> +
> +   return pack_header(&respptr, &len, tag, len, errorcode);
> +}
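
create_error_response() emits the minimal 10-byte TPM 1.2 response: tag(2) || paramSize(4) || returnCode(4), all big-endian. A freestanding sketch of that packing (0x00C4 is TPM_TAG_RSP_COMMAND per the TPM 1.2 structures spec; the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a bare TPM 1.2 error response header, as pack_header does with
 * tpm_marshal_UINT16/UINT32: tag, total size (always 10 here), code. */
static void pack_error_resp(uint8_t out[10], uint16_t tag, uint32_t errcode)
{
    out[0] = (uint8_t)(tag >> 8);      out[1] = (uint8_t)tag;
    out[2] = 0; out[3] = 0; out[4] = 0; out[5] = 10;  /* paramSize */
    out[6] = (uint8_t)(errcode >> 24); out[7] = (uint8_t)(errcode >> 16);
    out[8] = (uint8_t)(errcode >> 8);  out[9] = (uint8_t)errcode;
}
```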
> +
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Ask the real tpm for random bytes for the seed */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm command */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
> +
> +   /* Send cmd, wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
> +
> +   // Get the number of random bytes in the response
> +   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
> +   *numbytes = size;
> +
> +   //Get the random bytes out; the tpm may give us fewer bytes than we want
> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> +
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length)
> +{
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +
> +   /* Send the command to vtpm_manager */
> +   info("Requesting Encryption key from backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
> +
> +   /* Get the size of the key */
> +   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
> +
> +   /* Copy the key bits */
> +   *data = malloc(*data_length);
> +   memcpy(*data, bptr, *data_length);
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_LoadHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length)
> +{
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   memcpy(bptr, data, data_length);
> +   bptr += data_length;
> +
> +   /* Send the command to vtpm_manager */
> +   info("Sending encryption key to backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_SaveHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest)
> +{
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t *cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Just send a TPM_PCRRead Command to the HW tpm */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm cmd */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
> +
> +   /*Send Cmd wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
> +
> +   //Get the pcr digest value
> +   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
> new file mode 100644
> index 0000000..b0bfa22
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef MANAGER_H
> +#define MANAGER_H
> +
> +#include <tpmfront.h>
> +#include <tpmback.h>
> +#include "tpm/tpm_structures.h"
> +
> +/* Create a command response error header */
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
> +/* Request random bytes from hardware tpm, returns 0 on success */
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
> +/* Retrieve 256 bit AES encryption key from manager */
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
> +/* Manager securely saves our 256 bit AES encryption key */
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
> +/* Send a TPM_PCRRead command through the manager to the HW TPM */
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
> new file mode 100644
> index 0000000..22a6cef
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.c
> @@ -0,0 +1,43 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include "vtpm_pcrs.h"
> +#include "vtpm_cmd.h"
> +#include "tpm/tpm_data.h"
> +
> +#define PCR_VALUE      tpmData.permanent.data.pcrValue
> +
> +static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
> +   if(pcrIndex >= TPM_NUM_PCR) {
> +      return TPM_BADINDEX;
> +   }
> +   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
> +   return TPM_SUCCESS;
> +}
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs)
> +{
> +   TPM_RESULT rc = TPM_SUCCESS;
> +   uint8_t digest[sizeof(TPM_PCRVALUE)];
> +
> +   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +      if(pcrs & 1 << i) {
> +         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
> +            error("TPM_PCRRead failed with error : %d", rc);
> +            return rc;
> +         }
> +         write_pcr_direct(i, digest);
> +      }
> +   }
> +
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
> new file mode 100644
> index 0000000..11835f9
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_PCRS_H
> +#define VTPM_PCRS_H
> +
> +#include "tpm/tpm_structures.h"
> +
> +#define VTPM_PCR0 (1 << 0)
> +#define VTPM_PCR1 (1 << 1)
> +#define VTPM_PCR2 (1 << 2)
> +#define VTPM_PCR3 (1 << 3)
> +#define VTPM_PCR4 (1 << 4)
> +#define VTPM_PCR5 (1 << 5)
> +#define VTPM_PCR6 (1 << 6)
> +#define VTPM_PCR7 (1 << 7)
> +#define VTPM_PCR8 (1 << 8)
> +#define VTPM_PCR9 (1 << 9)
> +#define VTPM_PCR10 (1 << 10)
> +#define VTPM_PCR11 (1 << 11)
> +#define VTPM_PCR12 (1 << 12)
> +#define VTPM_PCR13 (1 << 13)
> +#define VTPM_PCR14 (1 << 14)
> +#define VTPM_PCR15 (1 << 15)
> +#define VTPM_PCR16 (1 << 16)
> +#define VTPM_PCR17 (1 << 17)
> +#define VTPM_PCR18 (1 << 18)
> +#define VTPM_PCR19 (1 << 19)
> +#define VTPM_PCR20 (1 << 20)
> +#define VTPM_PCR21 (1 << 21)
> +#define VTPM_PCR22 (1 << 22)
> +#define VTPM_PCR23 (1 << 23)
> +
> +#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
> +#define VTPM_PCRNONE 0
> +
> +#define VTPM_NUMPCRS 24
> +
> +struct tpmfront_dev;
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
> +
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
> new file mode 100644
> index 0000000..b343bd8
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.c
> @@ -0,0 +1,307 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <mini-os/byteorder.h>
> +#include "vtpmblk.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_cmd.h"
> +#include "polarssl/aes.h"
> +#include "polarssl/sha1.h"
> +#include <blkfront.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +
> +/* Encryption block size */
> +#define BLKSZ 16
> +
> +static struct blkfront_dev* blkdev = NULL;
> +static int blkfront_fd = -1;
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
> +{
> +   struct blkfront_info blkinfo;
> +   info("Initializing persistent NVM storage\n");
> +
> +   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
> +      error("BLKIO: ERROR Unable to initialize blkfront");
> +      return -1;
> +   }
> +   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
> +      error("BLKIO: ERROR block device is read only!");
> +      goto error;
> +   }
> +   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
> +      error("Unable to open blkfront file descriptor!");
> +      goto error;
> +   }
> +
> +   return 0;
> +error:
> +   shutdown_blkfront(blkdev);
> +   blkdev = NULL;
> +   return -1;
> +}
> +
> +void shutdown_vtpmblk(void)
> +{
> +   close(blkfront_fd);
> +   blkfront_fd = -1;
> +   blkdev = NULL;
> +}
> +
> +int write_vtpmblk_raw(uint8_t *data, size_t data_length)
> +{
> +   int rc;
> +   uint32_t lenbuf;
> +   debug("Begin Write data=%p len=%u", data, data_length);
> +
> +   lenbuf = cpu_to_be32((uint32_t)data_length);
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("write(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
> +      error("write(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Wrote %u bytes to NVM persistent storage", data_length);
> +
> +   return 0;
> +}
> +
> +int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
> +{
> +   int rc;
> +   uint32_t lenbuf;
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("read(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   *data_length = (size_t) be32_to_cpu(lenbuf);
> +   if(*data_length == 0) {
> +      error("read 0 data_length for NVM");
> +      return -1;
> +   }
> +
> +   *data = tpm_malloc(*data_length);
> +   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
> +      error("read(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Read %u bytes from NVM persistent storage", *data_length);
> +   return 0;
> +}
> +
> +int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
> +{
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   aes_context aes_ctx;
> +   UINT32 temp;
> +   int mod;
> +
> +   uint8_t* clbuf = NULL;
> +
> +   uint8_t* ivptr;
> +   int ivlen;
> +
> +   uint8_t* cptr;      //Cipher block pointer
> +   int clen;   //Cipher block length
> +
> +   /*Create a new 256 bit encryption key */
> +   if(symkey == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
> +
> +   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
> +   temp = sizeof(UINT32);
> +   ivlen = BLKSZ - temp;
> +   tpm_get_extern_random_bytes(iv, ivlen);
> +   ivptr = iv + ivlen;
> +   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
> +
> +   /*The clear text needs to be padded out to a multiple of BLKSZ */
> +   mod = clear_len % BLKSZ;
> +   clen = mod ? clear_len + BLKSZ - mod : clear_len;
> +   clbuf = malloc(clen);
> +   if (clbuf == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   memcpy(clbuf, clear, clear_len);
> +   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
> +   if(clen - clear_len) {
> +      memset(clbuf + clear_len, 0, clen - clear_len);
> +   }
> +
> +   /* Setup the ciphertext buffer */
> +   *cipher_len = BLKSZ + clen;         /*iv + ciphertext */
> +   cptr = *cipher = malloc(*cipher_len);
> +   if (*cipher == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Copy the IV to cipher text blob*/
> +   memcpy(cptr, iv, BLKSZ);
> +   cptr += BLKSZ;
> +
> +   /* Setup encryption */
> +   aes_setkey_enc(&aes_ctx, symkey, 256);
> +
> +   /* Do encryption now */
> +   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(clbuf);
> +   return rc;
> +}
> +int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
> +{
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   uint8_t* ivptr;
> +   UINT32 u32, temp;
> +   aes_context aes_ctx;
> +
> +   uint8_t* cptr = cipher;     //cipher block pointer
> +   int clen = cipher_len;      //cipher block length
> +
> +   /* Pull out the initialization vector */
> +   memcpy(iv, cipher, BLKSZ);
> +   cptr += BLKSZ;
> +   clen -= BLKSZ;
> +
> +   /* Setup the clear text buffer */
> +   if((*clear = malloc(clen)) == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Get the length of clear text from last 4 bytes of iv */
> +   temp = sizeof(UINT32);
> +   ivptr = iv + BLKSZ - temp;
> +   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
> +   *clear_len = u32;
> +
> +   /* Setup decryption */
> +   aes_setkey_dec(&aes_ctx, symkey, 256);
> +
> +   /* Do decryption now */
> +   if ((clen % BLKSZ) != 0) {
> +      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   return rc;
> +}
> +
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   uint8_t hashkey[HASHKEYSZ];
> +   uint8_t* symkey = hashkey + HASHSZ;
> +
> +   /* Encrypt the data */
> +   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
> +      goto abort_egress;
> +   }
> +   /* Write to disk */
> +   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
> +      goto abort_egress;
> +   }
> +   /* Get sha1 hash of data */
> +   sha1(cipher, cipher_len, hashkey);
> +
> +   /* Send hash and key to manager */
> +   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   return rc;
> +}
> +
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   size_t keysize;
> +   uint8_t* hashkey = NULL;
> +   uint8_t hash[HASHSZ];
> +   uint8_t* symkey;
> +
> +   /* Retrieve the hash and the key from the manager */
> +   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   if(keysize != HASHKEYSZ) {
> +      error("Manager returned a hashkey of invalid size! expected %d, actual %d", HASHKEYSZ, (int) keysize);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   symkey = hashkey + HASHSZ;
> +
> +   /* Read from disk now */
> +   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
> +      goto abort_egress;
> +   }
> +
> +   /* Compute the hash of the cipher text and compare */
> +   sha1(cipher, cipher_len, hash);
> +   if(memcmp(hash, hashkey, HASHSZ)) {
> +      int i;
> +      error("NVM Storage Checksum failed!");
> +      printf("Expected: ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hashkey[i]);
> +      }
> +      printf("\n");
> +      printf("Actual:   ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hash[i]);
> +      }
> +      printf("\n");
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Decrypt the blob */
> +   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   free(hashkey);
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
> new file mode 100644
> index 0000000..282ce6a
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef NVM_H
> +#define NVM_H
> +#include <mini-os/types.h>
> +#include <xen/xen.h>
> +#include <tpmfront.h>
> +
> +#define NVMKEYSZ 32
> +#define HASHSZ 20
> +#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
> +void shutdown_vtpmblk(void);
> +
> +/* Encrypts and writes data to blk device */
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
> +/* Reads, decrypts, and returns data from blk device */
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
> +
> +#endif
> --
> 1.7.10.4
> 
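As an aside on the NVM blob format in encrypt_vtpmblk()/decrypt_vtpmblk() above: the scheme packs the cleartext length big-endian into the last four bytes of the 16-byte IV, and pads the cleartext up to the AES block size before CBC encryption. A standalone sketch of just that arithmetic (padded_len, iv_store_len and roundtrip_ok are illustrative names for this example, not identifiers from the patch):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BLKSZ 16  /* AES block size, as in vtpmblk.c */

/* Round a cleartext length up to a multiple of the cipher block size,
 * mirroring "mod ? clear_len + BLKSZ - mod : clear_len" above. */
static size_t padded_len(size_t clear_len)
{
    size_t mod = clear_len % BLKSZ;
    return mod ? clear_len + (BLKSZ - mod) : clear_len;
}

/* Store a 32-bit length big-endian into the last 4 bytes of a
 * BLKSZ-byte IV, as the patch does with tpm_marshal_UINT32(). */
static void iv_store_len(uint8_t iv[BLKSZ], uint32_t clear_len)
{
    iv[BLKSZ - 4] = (uint8_t)(clear_len >> 24);
    iv[BLKSZ - 3] = (uint8_t)(clear_len >> 16);
    iv[BLKSZ - 2] = (uint8_t)(clear_len >> 8);
    iv[BLKSZ - 1] = (uint8_t)clear_len;
}

/* Read it back, as decrypt_vtpmblk() does with tpm_unmarshal_UINT32(). */
static uint32_t iv_load_len(const uint8_t iv[BLKSZ])
{
    return ((uint32_t)iv[BLKSZ - 4] << 24) | ((uint32_t)iv[BLKSZ - 3] << 16) |
           ((uint32_t)iv[BLKSZ - 2] << 8)  |  (uint32_t)iv[BLKSZ - 1];
}

static bool roundtrip_ok(uint32_t v)
{
    uint8_t iv[BLKSZ] = {0};
    iv_store_len(iv, v);
    return iv_load_len(iv) == v;
}
```

Carrying the exact length inside the IV is what lets the decrypt path discard the zero padding: CBC only yields a multiple of BLKSZ, so without the embedded length the original size would be lost.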



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7Nt-0005Pd-Oj; Thu, 13 Dec 2012 11:55:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tj7Nr-0005PL-JH
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:55:23 +0000
Received: from [85.158.139.211:58277] by server-11.bemta-5.messagelabs.com id
	54/64-31624-A22C9C05; Thu, 13 Dec 2012 11:55:22 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355399721!20363428!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1449 invoked from network); 13 Dec 2012 11:55:22 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 11:55:22 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tj7No-000Jxu-Fs; Thu, 13 Dec 2012 11:55:20 +0000
Date: Thu, 13 Dec 2012 11:55:20 +0000
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20121213115520.GB75286@ocelot.phlegethon.org>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
	<20121213105951.GA75286@ocelot.phlegethon.org>
	<50C9C043.7090507@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C9C043.7090507@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: Keir <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
	on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:47 +0000 on 13 Dec (1355399235), Andrew Cooper wrote:
> >nmi_shootdown_cpus() has a go, but I think it would have to do an atomic
> >compare-exchange on crashing_cpu to actually be sure that only one CPU
> >is crashing at a time.  If two CPUs try to lead crashes at the same
> >time, it will deadlock here, with NMIs disabled on all CPUs.
> 
> kexec_common_shutdown() calls one_cpu_only() which gives the impression 
> that nmi_shootdown_cpus() can only be gotten to once.

Right, I missed that.  So,

Acked-by: Tim Deegan <tim@xen.org>

You can tidy up entry.S or not, as you like.  There's no correctness
issue with anything there, just things that could be neater if you were
already updating the patch.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:58:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7R9-0005gR-6b; Thu, 13 Dec 2012 11:58:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tj7R8-0005g9-2o
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:58:46 +0000
Received: from [85.158.143.99:12561] by server-2.bemta-4.messagelabs.com id
	A7/BA-30861-5F2C9C05; Thu, 13 Dec 2012 11:58:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355399923!22456687!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22958 invoked from network); 13 Dec 2012 11:58:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="530830"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 11:58:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 06:58:43 -0500
Received: from [10.80.239.153]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id 1Tj7R4-00075o-Sj;
	Thu, 13 Dec 2012 11:58:42 +0000
Message-ID: <50C9C2F2.2010703@citrix.com>
Date: Thu, 13 Dec 2012 11:58:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121127 Icedove/10.0.11
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
	<20121213105951.GA75286@ocelot.phlegethon.org>
	<50C9C043.7090507@citrix.com>
	<20121213115520.GB75286@ocelot.phlegethon.org>
In-Reply-To: <20121213115520.GB75286@ocelot.phlegethon.org>
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
 on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 11:55, Tim Deegan wrote:
> At 11:47 +0000 on 13 Dec (1355399235), Andrew Cooper wrote:
>>> nmi_shootdown_cpus() has a go, but I think it would have to do an atomic
>>> compare-exchange on crashing_cpu to actually be sure that only one CPU
>>> is crashing at a time.  If two CPUs try to lead crashes at the same
>>> time, it will deadlock here, with NMIs disabled on all CPUs.
>> kexec_common_shutdown() calls one_cpu_only() which gives the impression
>> that nmi_shootdown_cpus() can only be gotten to once.
> Right, I missed that.  So,
>
> Acked-by: Tim Deegan<tim@xen.org>
>
> You can tidy up entry.S or not, as you like.  There's no correctness
> issue with anything there, just things that could be neater if you were
> already updating the patch.
>
> Cheers,
>
> Tim.

I am just re-spinning the patch for cli and the commit message.  Nothing 
major.

~Andrew
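
The gate being discussed here (one_cpu_only() ensuring only one CPU ever leads the crash, so nmi_shootdown_cpus() cannot deadlock against a second crasher) boils down to the atomic compare-exchange Tim suggests: the first caller wins a CAS on a shared owner variable and every later caller loses. A minimal sketch, assuming GCC/clang atomic builtins; try_claim_crash and crash_owner are illustrative names, not the actual Xen identifiers:

```c
#include <stdbool.h>

/* -1 means no CPU has claimed the crash path yet (stand-in for
 * the role crashing_cpu / one_cpu_only() play in Xen's kexec code). */
static int crash_owner = -1;

/* Returns true for exactly one caller: the compare-exchange only
 * succeeds while crash_owner still holds -1. */
static bool try_claim_crash(int cpu)
{
    int expected = -1;
    return __atomic_compare_exchange_n(&crash_owner, &expected, cpu,
                                       false, __ATOMIC_SEQ_CST,
                                       __ATOMIC_SEQ_CST);
}
```

Any CPU that loses the race can then park itself (or return) instead of re-entering the shootdown path with NMIs disabled.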

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 11:58:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 11:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7R9-0005gR-6b; Thu, 13 Dec 2012 11:58:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tj7R8-0005g9-2o
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 11:58:46 +0000
Received: from [85.158.143.99:12561] by server-2.bemta-4.messagelabs.com id
	A7/BA-30861-5F2C9C05; Thu, 13 Dec 2012 11:58:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355399923!22456687!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22958 invoked from network); 13 Dec 2012 11:58:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 11:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="530830"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 11:58:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 06:58:43 -0500
Received: from [10.80.239.153]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id 1Tj7R4-00075o-Sj;
	Thu, 13 Dec 2012 11:58:42 +0000
Message-ID: <50C9C2F2.2010703@citrix.com>
Date: Thu, 13 Dec 2012 11:58:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121127 Icedove/10.0.11
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <96b068439bc453521aea.1355312134@andrewcoop.uk.xensource.com>
	<20121213105951.GA75286@ocelot.phlegethon.org>
	<50C9C043.7090507@citrix.com>
	<20121213115520.GB75286@ocelot.phlegethon.org>
In-Reply-To: <20121213115520.GB75286@ocelot.phlegethon.org>
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V5] x86/kexec: Change NMI and MCE handling
 on kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 11:55, Tim Deegan wrote:
> At 11:47 +0000 on 13 Dec (1355399235), Andrew Cooper wrote:
>>> nmi_shootdown_cpus() has a go, but I think it would have to do an atomic
>>> compare-exchange on crashing_cpu to actually be sure that only one CPU
>>> is crashing at a time.  If two CPUs try to lead crashes at the same
>>> time, it will deadlock here, with NMIs disabled on all CPUs.
>> kexec_common_shutdown() calls one_cpu_only() which gives the impression
>> that nmi_shootdown_cpus() can only be gotten to once.
> Right, I missed that.  So,
>
> Acked-by: Tim Deegan<tim@xen.org>
>
> You can tidy up entry.S or not, as you like.  There's no correctness
> issue with anything there, just things that could be neater if you were
> already updating the patch.
>
> Cheers,
>
> Tim.

I am just re-spinning the patch for cli and the commit message.  Nothing 
major.

~Andrew
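
[Archive note: the one-shot guard discussed above -- one_cpu_only() ensuring
only the first CPU to enter the crash path proceeds, so that two CPUs leading
crashes at once cannot deadlock in nmi_shootdown_cpus() -- can be sketched
with a single atomic compare-exchange. The names below are illustrative and
use C11 atomics; they are not Xen's actual implementation.]

```c
#include <stdatomic.h>
#include <stdbool.h>

#define CPU_NONE (-1)

/* ID of the CPU currently leading the crash, or CPU_NONE. */
static atomic_int crashing_cpu = CPU_NONE;

/* Returns true only for the first caller: a single compare-exchange
 * on crashing_cpu decides the winner, so concurrent crash leaders
 * cannot both proceed into the shootdown path. */
static bool one_cpu_only(int this_cpu)
{
    int expected = CPU_NONE;
    return atomic_compare_exchange_strong(&crashing_cpu,
                                          &expected, this_cpu);
}
```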

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:02:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:02:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7Ur-0006Ji-8n; Thu, 13 Dec 2012 12:02:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj7Up-0006JW-Mq
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:02:36 +0000
Received: from [85.158.139.211:27908] by server-15.bemta-5.messagelabs.com id
	95/D5-20523-AD3C9C05; Thu, 13 Dec 2012 12:02:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355400153!18556716!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2016 invoked from network); 13 Dec 2012 12:02:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:02:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="114364"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 12:02:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 12:02:32 +0000
Message-ID: <1355400151.10554.103.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 12:02:31 +0000
In-Reply-To: <1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
> Please rerun autoconf after committing this patch
> 
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>

This fails for me with:
        /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/test.o: In function `app_main':
        /local/scratch/ianc/devel/committer.git/extras/mini-os/test.c:511: multiple definition of `app_main'
        /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/main.o:/local/scratch/ianc/devel/committer.git/extras/mini-os/main.c:187: first defined here
        make[2]: *** [/local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/mini-os] Error 1
        make[2]: Leaving directory `/local/scratch/ianc/devel/committer.git/extras/mini-os'
        make[1]: *** [caml-stubdom] Error 2
        make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
        make: *** [install-stubdom] Error 2
        
I'm only guessing it was this patch, but it was somewhere in this
series.

I'd already done all the autoconf faff and updated .*ignore for you
(adding autom4te.cache, config.log and config.status as appropriate). So
rather than resending, please could you provide an incremental fix
against:
        git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm

I'll then merge that into the appropriate patch.

It also occurs to me that this series introduces a little bisection blip
where cmake will be required (i.e. from patch 3 until here). The right
way to do this would have been to put the patch introducing autoconf at
the start. I'm inclined to just gloss over that for now, but if you feel
so inclined you could reorder things.

> ---
>  autogen.sh                             |    2 +
>  config/Stubdom.mk.in                   |   45 ++++++++++++++++
>  {tools/m4 => m4}/curses.m4             |    0
>  m4/depends.m4                          |   15 ++++++
>  {tools/m4 => m4}/extfs.m4              |    0
>  {tools/m4 => m4}/features.m4           |    0
>  {tools/m4 => m4}/fetcher.m4            |    0
>  {tools/m4 => m4}/ocaml.m4              |    0
>  {tools/m4 => m4}/path_or_fail.m4       |    0
>  {tools/m4 => m4}/pkg.m4                |    0
>  {tools/m4 => m4}/pthread.m4            |    0
>  {tools/m4 => m4}/ptyfuncs.m4           |    0
>  {tools/m4 => m4}/python_devel.m4       |    0
>  {tools/m4 => m4}/python_version.m4     |    0
>  {tools/m4 => m4}/savevar.m4            |    0
>  {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
>  m4/stubdom.m4                          |   89 ++++++++++++++++++++++++++++++++
>  {tools/m4 => m4}/uuid.m4               |    0
>  stubdom/Makefile                       |   55 +++++---------------
>  stubdom/configure.ac                   |   58 +++++++++++++++++++++
>  tools/configure.ac                     |   28 +++++-----
>  21 files changed, 236 insertions(+), 56 deletions(-)
>  create mode 100644 config/Stubdom.mk.in
>  rename {tools/m4 => m4}/curses.m4 (100%)
>  create mode 100644 m4/depends.m4
>  rename {tools/m4 => m4}/extfs.m4 (100%)
>  rename {tools/m4 => m4}/features.m4 (100%)
>  rename {tools/m4 => m4}/fetcher.m4 (100%)
>  rename {tools/m4 => m4}/ocaml.m4 (100%)
>  rename {tools/m4 => m4}/path_or_fail.m4 (100%)
>  rename {tools/m4 => m4}/pkg.m4 (100%)
>  rename {tools/m4 => m4}/pthread.m4 (100%)
>  rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
>  rename {tools/m4 => m4}/python_devel.m4 (100%)
>  rename {tools/m4 => m4}/python_version.m4 (100%)
>  rename {tools/m4 => m4}/savevar.m4 (100%)
>  rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
>  create mode 100644 m4/stubdom.m4
>  rename {tools/m4 => m4}/uuid.m4 (100%)
>  create mode 100644 stubdom/configure.ac
> 
> diff --git a/autogen.sh b/autogen.sh
> index 58a71ce..ada482c 100755
> --- a/autogen.sh
> +++ b/autogen.sh
> @@ -2,3 +2,5 @@
>  cd tools
>  autoconf
>  autoheader
> +cd ../stubdom
> +autoconf
> diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
> new file mode 100644
> index 0000000..432efd7
> --- /dev/null
> +++ b/config/Stubdom.mk.in
> @@ -0,0 +1,45 @@
> +# Prefix and install folder
> +prefix              := @prefix@
> +PREFIX              := $(prefix)
> +exec_prefix         := @exec_prefix@
> +libdir              := @libdir@
> +LIBDIR              := $(libdir)
> +
> +# Path Programs
> +CMAKE               := @CMAKE@
> +WGET                := @WGET@ -c
> +
> +# A debug build of stubdom? //FIXME: Someone make this do something
> +debug               := @debug@
> +vtpm = @vtpm@
> +
> +STUBDOM_TARGETS     := @STUBDOM_TARGETS@
> +STUBDOM_BUILD       := @STUBDOM_BUILD@
> +STUBDOM_INSTALL     := @STUBDOM_INSTALL@
> +
> +ZLIB_VERSION        := @ZLIB_VERSION@
> +ZLIB_URL            := @ZLIB_URL@
> +
> +LIBPCI_VERSION      := @LIBPCI_VERSION@
> +LIBPCI_URL          := @LIBPCI_URL@
> +
> +NEWLIB_VERSION      := @NEWLIB_VERSION@
> +NEWLIB_URL          := @NEWLIB_URL@
> +
> +LWIP_VERSION        := @LWIP_VERSION@
> +LWIP_URL            := @LWIP_URL@
> +
> +GRUB_VERSION        := @GRUB_VERSION@
> +GRUB_URL            := @GRUB_URL@
> +
> +OCAML_VERSION       := @OCAML_VERSION@
> +OCAML_URL           := @OCAML_URL@
> +
> +GMP_VERSION         := @GMP_VERSION@
> +GMP_URL             := @GMP_URL@
> +
> +POLARSSL_VERSION    := @POLARSSL_VERSION@
> +POLARSSL_URL        := @POLARSSL_URL@
> +
> +TPMEMU_VERSION      := @TPMEMU_VERSION@
> +TPMEMU_URL          := @TPMEMU_URL@
> diff --git a/tools/m4/curses.m4 b/m4/curses.m4
> similarity index 100%
> rename from tools/m4/curses.m4
> rename to m4/curses.m4
> diff --git a/m4/depends.m4 b/m4/depends.m4
> new file mode 100644
> index 0000000..916e665
> --- /dev/null
> +++ b/m4/depends.m4
> @@ -0,0 +1,15 @@
> +
> +AC_DEFUN([AX_DEPENDS_PATH_PROG], [
> +AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
> +AS_IF([test "x$$1" = "xn"], [
> +$2="/$3-disabled-in-configure-script"
> +], [
> +AC_PATH_PROG([$2], [$3], [no])
> +AS_IF([test x"${$2}" = "xno"], [
> +$1=n
> +$2="/$3-disabled-in-configure-script"
> +])
> +])
> +])
> +AC_SUBST($2)
> +])
> diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
> similarity index 100%
> rename from tools/m4/extfs.m4
> rename to m4/extfs.m4
> diff --git a/tools/m4/features.m4 b/m4/features.m4
> similarity index 100%
> rename from tools/m4/features.m4
> rename to m4/features.m4
> diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
> similarity index 100%
> rename from tools/m4/fetcher.m4
> rename to m4/fetcher.m4
> diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
> similarity index 100%
> rename from tools/m4/ocaml.m4
> rename to m4/ocaml.m4
> diff --git a/tools/m4/path_or_fail.m4 b/m4/path_or_fail.m4
> similarity index 100%
> rename from tools/m4/path_or_fail.m4
> rename to m4/path_or_fail.m4
> diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
> similarity index 100%
> rename from tools/m4/pkg.m4
> rename to m4/pkg.m4
> diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
> similarity index 100%
> rename from tools/m4/pthread.m4
> rename to m4/pthread.m4
> diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
> similarity index 100%
> rename from tools/m4/ptyfuncs.m4
> rename to m4/ptyfuncs.m4
> diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
> similarity index 100%
> rename from tools/m4/python_devel.m4
> rename to m4/python_devel.m4
> diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
> similarity index 100%
> rename from tools/m4/python_version.m4
> rename to m4/python_version.m4
> diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
> similarity index 100%
> rename from tools/m4/savevar.m4
> rename to m4/savevar.m4
> diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> similarity index 100%
> rename from tools/m4/set_cflags_ldflags.m4
> rename to m4/set_cflags_ldflags.m4
> diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
> new file mode 100644
> index 0000000..0bf0d2c
> --- /dev/null
> +++ b/m4/stubdom.m4
> @@ -0,0 +1,89 @@
> +AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
> +AC_ARG_ENABLE([$1],
> +AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
> +AX_STUBDOM_INTERNAL([$1], [$2])
> +],[
> +AX_ENABLE_STUBDOM([$1], [$2])
> +])
> +AC_SUBST([$2])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
> +AC_ARG_ENABLE([$1],
> +AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
> +AX_STUBDOM_INTERNAL([$1], [$2])
> +],[
> +AX_DISABLE_STUBDOM([$1], [$2])
> +])
> +AC_SUBST([$2])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_CONDITIONAL], [
> +AC_ARG_ENABLE([$1],
> +AS_HELP_STRING([--enable-$1], [Build and install $1]),[
> +AX_STUBDOM_INTERNAL([$1], [$2])
> +])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_CONDITIONAL_FINISH], [
> +AS_IF([test "x$$2" = "xy" || test "x$$2" = "x"], [
> +AX_ENABLE_STUBDOM([$1],[$2])
> +],[
> +AX_DISABLE_STUBDOM([$1],[$2])
> +])
> +AC_SUBST([$2])
> +])
> +
> +AC_DEFUN([AX_ENABLE_STUBDOM], [
> +$2=y
> +STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
> +STUBDOM_BUILD="$STUBDOM_BUILD $1"
> +STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
> +])
> +
> +AC_DEFUN([AX_DISABLE_STUBDOM], [
> +$2=n
> +])
> +
> +dnl Don't call this outside of this file
> +AC_DEFUN([AX_STUBDOM_INTERNAL], [
> +AS_IF([test "x$enableval" = "xyes"], [
> +AX_ENABLE_STUBDOM([$1], [$2])
> +],[
> +AS_IF([test "x$enableval" = "xno"],[
> +AX_DISABLE_STUBDOM([$1], [$2])
> +])
> +])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_FINISH], [
> +AC_SUBST(STUBDOM_TARGETS)
> +AC_SUBST(STUBDOM_BUILD)
> +AC_SUBST(STUBDOM_INSTALL)
> +echo "Will build the following stub domains:"
> +for x in $STUBDOM_BUILD; do
> +       echo "  $x"
> +done
> +])
> +
> +AC_DEFUN([AX_STUBDOM_LIB], [
> +AC_ARG_VAR([$1_URL], [Download url for $2])
> +AS_IF([test "x$$1_URL" = "x"], [
> +       AS_IF([test "x$extfiles" = "xy"],
> +               [$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
> +               [$1_URL="$4"])
> +       ])
> +$1_VERSION="$3"
> +AC_SUBST($1_URL)
> +AC_SUBST($1_VERSION)
> +])
> +
> +AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
> +AC_ARG_VAR([$1_URL], [Download url for $2])
> +AS_IF([test "x$$1_URL" = "x"], [
> +       $1_URL="$4"
> +       ])
> +$1_VERSION="$3"
> +AC_SUBST($1_URL)
> +AC_SUBST($1_VERSION)
> +])
> diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
> similarity index 100%
> rename from tools/m4/uuid.m4
> rename to m4/uuid.m4
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index fc70d88..709b71e 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -6,44 +6,7 @@ export XEN_OS=MiniOS
>  export stubdom=y
>  export debug=y
>  include $(XEN_ROOT)/Config.mk
> -
> -#ZLIB_URL?=http://www.zlib.net
> -ZLIB_URL=$(XEN_EXTFILES_URL)
> -ZLIB_VERSION=1.2.3
> -
> -#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
> -LIBPCI_URL?=$(XEN_EXTFILES_URL)
> -LIBPCI_VERSION=2.2.9
> -
> -#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
> -NEWLIB_URL?=$(XEN_EXTFILES_URL)
> -NEWLIB_VERSION=1.16.0
> -
> -#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
> -LWIP_URL?=$(XEN_EXTFILES_URL)
> -LWIP_VERSION=1.3.0
> -
> -#GRUB_URL?=http://alpha.gnu.org/gnu/grub
> -GRUB_URL?=$(XEN_EXTFILES_URL)
> -GRUB_VERSION=0.97
> -
> -#OCAML_URL?=$(XEN_EXTFILES_URL)
> -OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
> -OCAML_VERSION=3.11.0
> -
> -GMP_VERSION=4.3.2
> -GMP_URL?=$(XEN_EXTFILES_URL)
> -#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
> -
> -POLARSSL_VERSION=1.1.4
> -POLARSSL_URL?=$(XEN_EXTFILES_URL)
> -#POLARSSL_URL?=http://polarssl.org/code/releases
> -
> -TPMEMU_VERSION=0.7.4
> -TPMEMU_URL?=$(XEN_EXTFILES_URL)
> -#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
> -
> -WGET=wget -c
> +-include $(XEN_ROOT)/config/Stubdom.mk
> 
>  GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
>  ifeq ($(XEN_TARGET_ARCH),x86_32)
> @@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
> 
>  TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
> 
> -TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
> +TARGETS=$(STUBDOM_TARGETS)
> 
>  .PHONY: all
>  all: build
>  ifeq ($(STUBDOM_SUPPORTED),1)
> -build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
> +build: genpath $(STUBDOM_BUILD)
>  else
>  build: genpath
>  endif
> @@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>         mv tpm_emulator-$(TPMEMU_VERSION) $@
>         patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
>         mkdir $@/build
> -       cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
> +       cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>         touch $@
> 
>  TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
> @@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
>  #########
> 
>  ifeq ($(STUBDOM_SUPPORTED),1)
> -install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
> +install: genpath install-readme $(STUBDOM_INSTALL)
>  else
>  install: genpath
>  endif
> @@ -503,6 +466,8 @@ install-grub: pv-grub
>         $(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
>         $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-grub/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/pv-grub-$(XEN_TARGET_ARCH).gz"
> 
> +install-caml: caml-stubdom
> +
>  install-xenstore: xenstore-stubdom
>         $(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
>         $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
> @@ -581,3 +546,9 @@ downloadclean: patchclean
> 
>  .PHONY: distclean
>  distclean: downloadclean
> +       -rm ../config/Stubdom.mk
> +
> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
> +$(XEN_ROOT)/config/Stubdom.mk:
> +       $(error You have to run ./configure before building or installing stubdom)
> +endif
> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
> new file mode 100644
> index 0000000..db44d4a
> --- /dev/null
> +++ b/stubdom/configure.ac
> @@ -0,0 +1,58 @@
> +#                                               -*- Autoconf -*-
> +# Process this file with autoconf to produce a configure script.
> +
> +AC_PREREQ([2.67])
> +AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
> +AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
> +AC_CONFIG_FILES([../config/Stubdom.mk])
> +AC_PREFIX_DEFAULT([/usr])
> +AC_CONFIG_AUX_DIR([../])
> +
> +# M4 Macro includes
> +m4_include([../m4/stubdom.m4])
> +m4_include([../m4/features.m4])
> +m4_include([../m4/path_or_fail.m4])
> +m4_include([../m4/depends.m4])
> +
> +# Enable/disable stub domains
> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
> +AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_CONDITIONAL([vtpmmgrdom], [vtpmmgr])
> +
> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
> +
> +AC_ARG_VAR([CMAKE], [Path to the cmake program])
> +AC_ARG_VAR([WGET], [Path to wget program])
> +
> +# Checks for programs.
> +AC_PROG_CC
> +AC_PROG_MAKE_SET
> +AC_PROG_INSTALL
> +AX_PATH_PROG_OR_FAIL([WGET], [wget])
> +
> +# Checks for programs that depend on a feature
> +AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake])
> +
> +# Stubdom libraries version and url setup
> +AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
> +AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
> +AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
> +AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
> +AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
> +AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
> +AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
> +AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
> +AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
> +
> +#Conditionally enable these stubdoms based on the presense of dependencies
> +AX_STUBDOM_CONDITIONAL_FINISH([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_CONDITIONAL_FINISH([vtpmmgrdom], [vtpmmgr])
> +
> +AX_STUBDOM_FINISH
> +AC_OUTPUT()
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 586313d..971e3e9 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
>  AC_CANONICAL_HOST
> 
>  # M4 Macro includes
> -m4_include([m4/savevar.m4])
> -m4_include([m4/features.m4])
> -m4_include([m4/path_or_fail.m4])
> -m4_include([m4/python_version.m4])
> -m4_include([m4/python_devel.m4])
> -m4_include([m4/ocaml.m4])
> -m4_include([m4/set_cflags_ldflags.m4])
> -m4_include([m4/uuid.m4])
> -m4_include([m4/pkg.m4])
> -m4_include([m4/curses.m4])
> -m4_include([m4/pthread.m4])
> -m4_include([m4/ptyfuncs.m4])
> -m4_include([m4/extfs.m4])
> -m4_include([m4/fetcher.m4])
> +m4_include([../m4/savevar.m4])
> +m4_include([../m4/features.m4])
> +m4_include([../m4/path_or_fail.m4])
> +m4_include([../m4/python_version.m4])
> +m4_include([../m4/python_devel.m4])
> +m4_include([../m4/ocaml.m4])
> +m4_include([../m4/set_cflags_ldflags.m4])
> +m4_include([../m4/uuid.m4])
> +m4_include([../m4/pkg.m4])
> +m4_include([../m4/curses.m4])
> +m4_include([../m4/pthread.m4])
> +m4_include([../m4/ptyfuncs.m4])
> +m4_include([../m4/extfs.m4])
> +m4_include([../m4/fetcher.m4])
> 
>  # Enable/disable options
>  AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
> --
> 1.7.10.4
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> rename to m4/savevar.m4
> diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> similarity index 100%
> rename from tools/m4/set_cflags_ldflags.m4
> rename to m4/set_cflags_ldflags.m4
> diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
> new file mode 100644
> index 0000000..0bf0d2c
> --- /dev/null
> +++ b/m4/stubdom.m4
> @@ -0,0 +1,89 @@
> +AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
> +AC_ARG_ENABLE([$1],
> +AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
> +AX_STUBDOM_INTERNAL([$1], [$2])
> +],[
> +AX_ENABLE_STUBDOM([$1], [$2])
> +])
> +AC_SUBST([$2])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
> +AC_ARG_ENABLE([$1],
> +AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
> +AX_STUBDOM_INTERNAL([$1], [$2])
> +],[
> +AX_DISABLE_STUBDOM([$1], [$2])
> +])
> +AC_SUBST([$2])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_CONDITIONAL], [
> +AC_ARG_ENABLE([$1],
> +AS_HELP_STRING([--enable-$1], [Build and install $1]),[
> +AX_STUBDOM_INTERNAL([$1], [$2])
> +])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_CONDITIONAL_FINISH], [
> +AS_IF([test "x$$2" = "xy" || test "x$$2" = "x"], [
> +AX_ENABLE_STUBDOM([$1],[$2])
> +],[
> +AX_DISABLE_STUBDOM([$1],[$2])
> +])
> +AC_SUBST([$2])
> +])
> +
> +AC_DEFUN([AX_ENABLE_STUBDOM], [
> +$2=y
> +STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
> +STUBDOM_BUILD="$STUBDOM_BUILD $1"
> +STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
> +])
> +
> +AC_DEFUN([AX_DISABLE_STUBDOM], [
> +$2=n
> +])
> +
> +dnl Don't call this outside of this file
> +AC_DEFUN([AX_STUBDOM_INTERNAL], [
> +AS_IF([test "x$enableval" = "xyes"], [
> +AX_ENABLE_STUBDOM([$1], [$2])
> +],[
> +AS_IF([test "x$enableval" = "xno"],[
> +AX_DISABLE_STUBDOM([$1], [$2])
> +])
> +])
> +])
> +
> +AC_DEFUN([AX_STUBDOM_FINISH], [
> +AC_SUBST(STUBDOM_TARGETS)
> +AC_SUBST(STUBDOM_BUILD)
> +AC_SUBST(STUBDOM_INSTALL)
> +echo "Will build the following stub domains:"
> +for x in $STUBDOM_BUILD; do
> +       echo "  $x"
> +done
> +])
> +
> +AC_DEFUN([AX_STUBDOM_LIB], [
> +AC_ARG_VAR([$1_URL], [Download url for $2])
> +AS_IF([test "x$$1_URL" = "x"], [
> +       AS_IF([test "x$extfiles" = "xy"],
> +               [$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
> +               [$1_URL="$4"])
> +       ])
> +$1_VERSION="$3"
> +AC_SUBST($1_URL)
> +AC_SUBST($1_VERSION)
> +])
> +
> +AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
> +AC_ARG_VAR([$1_URL], [Download url for $2])
> +AS_IF([test "x$$1_URL" = "x"], [
> +       $1_URL="$4"
> +       ])
> +$1_VERSION="$3"
> +AC_SUBST($1_URL)
> +AC_SUBST($1_VERSION)
> +])
> diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
> similarity index 100%
> rename from tools/m4/uuid.m4
> rename to m4/uuid.m4
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index fc70d88..709b71e 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -6,44 +6,7 @@ export XEN_OS=MiniOS
>  export stubdom=y
>  export debug=y
>  include $(XEN_ROOT)/Config.mk
> -
> -#ZLIB_URL?=http://www.zlib.net
> -ZLIB_URL=$(XEN_EXTFILES_URL)
> -ZLIB_VERSION=1.2.3
> -
> -#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
> -LIBPCI_URL?=$(XEN_EXTFILES_URL)
> -LIBPCI_VERSION=2.2.9
> -
> -#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
> -NEWLIB_URL?=$(XEN_EXTFILES_URL)
> -NEWLIB_VERSION=1.16.0
> -
> -#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
> -LWIP_URL?=$(XEN_EXTFILES_URL)
> -LWIP_VERSION=1.3.0
> -
> -#GRUB_URL?=http://alpha.gnu.org/gnu/grub
> -GRUB_URL?=$(XEN_EXTFILES_URL)
> -GRUB_VERSION=0.97
> -
> -#OCAML_URL?=$(XEN_EXTFILES_URL)
> -OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
> -OCAML_VERSION=3.11.0
> -
> -GMP_VERSION=4.3.2
> -GMP_URL?=$(XEN_EXTFILES_URL)
> -#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
> -
> -POLARSSL_VERSION=1.1.4
> -POLARSSL_URL?=$(XEN_EXTFILES_URL)
> -#POLARSSL_URL?=http://polarssl.org/code/releases
> -
> -TPMEMU_VERSION=0.7.4
> -TPMEMU_URL?=$(XEN_EXTFILES_URL)
> -#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
> -
> -WGET=wget -c
> +-include $(XEN_ROOT)/config/Stubdom.mk
> 
>  GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
>  ifeq ($(XEN_TARGET_ARCH),x86_32)
> @@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
> 
>  TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
> 
> -TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
> +TARGETS=$(STUBDOM_TARGETS)
> 
>  .PHONY: all
>  all: build
>  ifeq ($(STUBDOM_SUPPORTED),1)
> -build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
> +build: genpath $(STUBDOM_BUILD)
>  else
>  build: genpath
>  endif
> @@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>         mv tpm_emulator-$(TPMEMU_VERSION) $@
>         patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
>         mkdir $@/build
> -       cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
> +       cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>         touch $@
> 
>  TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
> @@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
>  #########
> 
>  ifeq ($(STUBDOM_SUPPORTED),1)
> -install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
> +install: genpath install-readme $(STUBDOM_INSTALL)
>  else
>  install: genpath
>  endif
> @@ -503,6 +466,8 @@ install-grub: pv-grub
>         $(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
>         $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-grub/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/pv-grub-$(XEN_TARGET_ARCH).gz"
> 
> +install-caml: caml-stubdom
> +
>  install-xenstore: xenstore-stubdom
>         $(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
>         $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
> @@ -581,3 +546,9 @@ downloadclean: patchclean
> 
>  .PHONY: distclean
>  distclean: downloadclean
> +       -rm ../config/Stubdom.mk
> +
> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
> +$(XEN_ROOT)/config/Stubdom.mk:
> +       $(error You have to run ./configure before building or installing stubdom)
> +endif
> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
> new file mode 100644
> index 0000000..db44d4a
> --- /dev/null
> +++ b/stubdom/configure.ac
> @@ -0,0 +1,58 @@
> +#                                               -*- Autoconf -*-
> +# Process this file with autoconf to produce a configure script.
> +
> +AC_PREREQ([2.67])
> +AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
> +AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
> +AC_CONFIG_FILES([../config/Stubdom.mk])
> +AC_PREFIX_DEFAULT([/usr])
> +AC_CONFIG_AUX_DIR([../])
> +
> +# M4 Macro includes
> +m4_include([../m4/stubdom.m4])
> +m4_include([../m4/features.m4])
> +m4_include([../m4/path_or_fail.m4])
> +m4_include([../m4/depends.m4])
> +
> +# Enable/disable stub domains
> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
> +AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_CONDITIONAL([vtpmmgrdom], [vtpmmgr])
> +
> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
> +
> +AC_ARG_VAR([CMAKE], [Path to the cmake program])
> +AC_ARG_VAR([WGET], [Path to wget program])
> +
> +# Checks for programs.
> +AC_PROG_CC
> +AC_PROG_MAKE_SET
> +AC_PROG_INSTALL
> +AX_PATH_PROG_OR_FAIL([WGET], [wget])
> +
> +# Checks for programs that depend on a feature
> +AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake])
> +
> +# Stubdom libraries version and url setup
> +AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
> +AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
> +AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
> +AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
> +AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
> +AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
> +AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
> +AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
> +AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
> +
> +# Conditionally enable these stubdoms based on the presence of dependencies
> +AX_STUBDOM_CONDITIONAL_FINISH([vtpm-stubdom], [vtpm])
> +AX_STUBDOM_CONDITIONAL_FINISH([vtpmmgrdom], [vtpmmgr])
> +
> +AX_STUBDOM_FINISH
> +AC_OUTPUT()
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 586313d..971e3e9 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
>  AC_CANONICAL_HOST
> 
>  # M4 Macro includes
> -m4_include([m4/savevar.m4])
> -m4_include([m4/features.m4])
> -m4_include([m4/path_or_fail.m4])
> -m4_include([m4/python_version.m4])
> -m4_include([m4/python_devel.m4])
> -m4_include([m4/ocaml.m4])
> -m4_include([m4/set_cflags_ldflags.m4])
> -m4_include([m4/uuid.m4])
> -m4_include([m4/pkg.m4])
> -m4_include([m4/curses.m4])
> -m4_include([m4/pthread.m4])
> -m4_include([m4/ptyfuncs.m4])
> -m4_include([m4/extfs.m4])
> -m4_include([m4/fetcher.m4])
> +m4_include([../m4/savevar.m4])
> +m4_include([../m4/features.m4])
> +m4_include([../m4/path_or_fail.m4])
> +m4_include([../m4/python_version.m4])
> +m4_include([../m4/python_devel.m4])
> +m4_include([../m4/ocaml.m4])
> +m4_include([../m4/set_cflags_ldflags.m4])
> +m4_include([../m4/uuid.m4])
> +m4_include([../m4/pkg.m4])
> +m4_include([../m4/curses.m4])
> +m4_include([../m4/pthread.m4])
> +m4_include([../m4/ptyfuncs.m4])
> +m4_include([../m4/extfs.m4])
> +m4_include([../m4/fetcher.m4])
> 
>  # Enable/disable options
>  AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
> --
> 1.7.10.4
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:12:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7e6-00075P-CT; Thu, 13 Dec 2012 12:12:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tj7e5-00075I-Az
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:12:09 +0000
Received: from [85.158.137.99:24002] by server-11.bemta-3.messagelabs.com id
	1B/29-13335-816C9C05; Thu, 13 Dec 2012 12:12:08 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355400716!19167367!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31643 invoked from network); 13 Dec 2012 12:11:57 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 12:11:57 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tj7dr-000K15-KR; Thu, 13 Dec 2012 12:11:55 +0000
Date: Thu, 13 Dec 2012 12:11:55 +0000
From: Tim Deegan <tim@xen.org>
To: Robert Phillips <robert.phillips@citrix.com>
Message-ID: <20121213121155.GC75286@ocelot.phlegethon.org>
References: <1355166836-11338-1-git-send-email-robert.phillips@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355166836-11338-1-git-send-email-robert.phillips@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Avoid race when guest switches between log
	dirty mode and dirty vram mode.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:13 -0500 on 10 Dec (1355148836), Robert Phillips wrote:
> The previous code assumed the guest would be in one of three mutually exclusive
> modes for bookkeeping dirty pages: (1) shadow, (2) hap utilizing the log dirty
> bitmap to support functionality such as live migrate, (3) hap utilizing the
> log dirty bitmap to track dirty vram pages.
> Races arose when a guest attempted to track dirty vram while performing live
> migrate.  (The dispatch table managed by paging_log_dirty_init() might change
> in the middle of a log dirty or a vram tracking function.)
> 
> This change allows hap log dirty and hap vram tracking to be concurrent.
> Vram tracking no longer uses the log dirty bitmap.  Instead it detects
> dirty vram pages by examining their p2m type.  The log dirty bitmap is only
> used by the log dirty code.  Because the two operations use different
> mechanisms, they are no longer mutually exclusive.
> 
> Signed-Off-By: Robert Phillips <robert.phillips@citrix.com>

Applied; thanks for that. 

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:13:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7fE-0007Cx-Ry; Thu, 13 Dec 2012 12:13:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tj7fD-0007CA-Aa
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:13:19 +0000
Received: from [85.158.139.83:62938] by server-16.bemta-5.messagelabs.com id
	D5/90-09208-E56C9C05; Thu, 13 Dec 2012 12:13:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355400777!25813225!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31875 invoked from network); 13 Dec 2012 12:12:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:12:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="570124"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 12:12:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 07:12:56 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tj7eq-0007Nt-69;
	Thu, 13 Dec 2012 12:12:56 +0000
MIME-Version: 1.0
X-Mercurial-Node: c25cdb9f5daedfdd4ed95d37e1305a7cb2941c82
Message-ID: <c25cdb9f5daedfdd4ed9.1355400773@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Thu, 13 Dec 2012 12:12:53 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH V6] x86/kexec: Change NMI and MCE handling on
	kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/crash.c            |  116 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   33 +++++++++++
 xen/include/asm-x86/desc.h      |   45 +++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 5 files changed, 202 insertions(+), 15 deletions(-)


Experimentally, certain crash kernels will triple fault very early if
started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and because future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour to be safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for use during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no-op handler which irets immediately.  It is not declared
    with ENTRY() to avoid the extra alignment overhead.

And adds three new IDT entry helper routines:
 * _write_gate_lower
    This is a substitute for using cmpxchg16b to update a 128-bit
    structure at once.  It assumes that the top 64 bits are unchanged
    (and ASSERT()s the fact) and performs a regular write on the lower
    64 bits.
 * _set_gate_lower
    This is functionally equivalent to the already present _set_gate(),
    except it uses _write_gate_lower rather than updating both 64-bit
    values.
 * _update_gate_addr_lower
    This is designed to update an IDT entry handler only, without
    altering any other settings in the entry.  It also uses
    _write_gate_lower.

The IDT entry helpers are required because:
  * It is unsafe to attempt a disable/update/re-enable cycle on the NMI
    or MCE IDT entries.
  * We need to be able to update NMI handlers without changing the IST
    entry.
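The lower-half write trick can be sketched in plain C.  This is a minimal, hypothetical stand-in (the two-word struct and the function name here are illustrative, not Xen's real IDT entry type): x86-64 has no plain 128-bit store, but a single naturally-aligned 64-bit write is performed as one unit, so updating only the bottom word of a live descriptor is safe provided the top word is unchanged.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 16-byte gate: two 64-bit words, standing in for a
 * 64-bit-mode IDT entry.  Field layout is not Xen's real structure. */
typedef struct { uint64_t lo, hi; } gate_t;

/* Sketch of the _write_gate_lower idea: assert that the caller is not
 * trying to change the upper 64 bits (which would need a 128-bit
 * atomic update), then perform one aligned 64-bit write on the lower
 * half of the live gate. */
static void write_gate_lower(volatile gate_t *gate, const gate_t *new_gate)
{
    assert(gate->hi == new_gate->hi);  /* top 64 bits must be unchanged */
    gate->lo = new_gate->lo;           /* single naturally-aligned write */
}
```

Under this assumption the handler address and low-half attributes can be swapped on a live IDT without any window in which the entry is half-updated.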


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpu's NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it from getting
    stuck in an NMI context and hanging instead of crashing.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we re-enter midway through, we attempt the
    whole operation again rather than leave the save incomplete.
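The "attempt only once, but redo if re-entered midway" semantics amount to marking the flag only after the work completes.  A hedged sketch with hypothetical stand-in names (not the patch's per-cpu machinery):

```c
#include <stdbool.h>

/* Stand-ins for the per-cpu flag and the save work (illustrative only). */
static bool crash_save_done;
static int  saves_performed;

/* The flag is set only *after* the work finishes, so an NMI that
 * re-enters before the flag is written repeats the whole operation
 * from the start, in preference to leaving it incomplete.  Once the
 * flag is set, further invocations are no-ops. */
static void save_crash_state(void)
{
    if ( !crash_save_done )
    {
        saves_performed++;       /* stand-in for kexec_crash_save_cpu() etc. */
        crash_save_done = true;  /* marked only once the work is complete */
    }
}
```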

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is touched, the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>

--

Changes since v5:
 * Changed stale commit message and removed 'cli' from nmi_crash.

Changes since v4:
 * Change stale comments, spelling corrections and coding style changes.

Changes since v3:
 * Added IDT entry helpers to safely update NMI/MCE entries.

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r ef8c1b607b10 -r c25cdb9f5dae xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,127 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
> +     * prevents race conditions between clearing MCIP and receiving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
> +     * handler is immune to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor mans self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash, which
+     * invokes do_nmi_crash (above), causing them to save their state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this cpu's MCE
+                 * and NMI handlers, and alter the NMI handler to have
+                 * no operation.  Disabling the stack tables prevents
+                 * stack corruption race conditions, while changing the
+                 * handler helps prevent cascading faults; we are
+                 * certainly going to crash by this point.
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+            }
+            else
+                /* Do not update stack table for other pcpus. */
+                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
         .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
         .limit = LAST_RESERVED_GDT_BYTE
     };
+    int i;
 
     /* We are about to permanently jump out of the Xen context into the kexec
      * purgatory code.  We really don't want to be still servicing interrupts.
      */
     local_irq_disable();
 
+    /* Now that regular interrupts are disabled, we need to reduce the
+     * impact of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().
+     * All pcpus other than us have the nmi_crash handler, while we have
+     * the nop handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,44 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
+ENTRY(enable_nmis)
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        retq
+
+/* No op trap handler.  Required for kexec crash path.  This is not
+ * declared with the ENTRY() macro to avoid wasted alignment space.
+ */
+.globl trap_nop
+trap_nop:
+        iretq
+
+
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/include/asm-x86/desc.h
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -106,6 +106,21 @@ typedef struct {
     u64 a, b;
 } idt_entry_t;
 
+/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
+ * bits of the address not changing, which is a safe assumption as all
+ * functions we are likely to load will live inside the 1GB
+ * code/data/bss address range.
+ *
+ * Ideally, we would use cmpxchg16b, but this is not supported on some
+ * old AMD 64bit capable processors, and has no safe equivalent.
+ */
+static inline void _write_gate_lower(volatile idt_entry_t *gate,
+                                     const idt_entry_t *new)
+{
+    ASSERT(gate->b == new->b);
+    gate->a = new->a;
+}
+
 #define _set_gate(gate_addr,type,dpl,addr)               \
 do {                                                     \
     (gate_addr)->a = 0;                                  \
@@ -122,6 +137,36 @@ do {                                    
         (1UL << 47);                                     \
 } while (0)
 
+static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
+                                   unsigned long dpl, void *addr)
+{
+    idt_entry_t idte;
+    idte.b = gate->b;
+    idte.a =
+        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(dpl) << 45) |
+        ((unsigned long)(type) << 40) |
+        ((unsigned long)(addr) & 0xFFFFUL) |
+        ((unsigned long)__HYPERVISOR_CS64 << 16) |
+        (1UL << 47);
+    _write_gate_lower(gate, &idte);
+}
+
+/* Update the lower half handler of an IDT Entry, without changing any
+ * other configuration. */
+static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
+{
+    idt_entry_t idte;
+    idte.a = gate->a;
+
+    idte.b = ((unsigned long)(addr) >> 32);
+    idte.a &= 0x0000FFFFFFFF0000ULL;
+    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(addr) & 0xFFFFUL);
+
+    _write_gate_lower(gate, &idte);
+}
+
 #define _set_tssldt_desc(desc,addr,limit,type)           \
 do {                                                     \
     (desc)[0].b = (desc)[1].b = 0;                       \
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:13:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7fE-0007Cx-Ry; Thu, 13 Dec 2012 12:13:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Tj7fD-0007CA-Aa
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:13:19 +0000
Received: from [85.158.139.83:62938] by server-16.bemta-5.messagelabs.com id
	D5/90-09208-E56C9C05; Thu, 13 Dec 2012 12:13:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355400777!25813225!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31875 invoked from network); 13 Dec 2012 12:12:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:12:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="570124"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 12:12:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 07:12:56 -0500
Received: from [10.80.239.153] (helo=andrewcoop.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Tj7eq-0007Nt-69;
	Thu, 13 Dec 2012 12:12:56 +0000
MIME-Version: 1.0
X-Mercurial-Node: c25cdb9f5daedfdd4ed95d37e1305a7cb2941c82
Message-ID: <c25cdb9f5daedfdd4ed9.1355400773@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Thu, 13 Dec 2012 12:12:53 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH V6] x86/kexec: Change NMI and MCE handling on
	kexec path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/arch/x86/crash.c            |  116 ++++++++++++++++++++++++++++++++++-----
 xen/arch/x86/machine_kexec.c    |   19 ++++++
 xen/arch/x86/x86_64/entry.S     |   33 +++++++++++
 xen/include/asm-x86/desc.h      |   45 +++++++++++++++
 xen/include/asm-x86/processor.h |    4 +
 5 files changed, 202 insertions(+), 15 deletions(-)


Experimentally, certain crash kernels will triple fault very early after
starting if started with NMIs disabled.  This was discovered when
experimenting with a debug keyhandler which deliberately created a
reentrant NMI, causing stack corruption.

Because of this discovered bug, and because future changes to the NMI
handling will make the kexec path more fragile, take the time now to
bullet-proof the kexec behaviour so it is safer in more circumstances.

This patch adds three new low level routines:
 * nmi_crash
    This is a special NMI handler for use during a kexec crash.
 * enable_nmis
    This function enables NMIs by executing an iret-to-self, to
    disengage the hardware NMI latch.
 * trap_nop
    This is a no-op handler which irets immediately.  It is not declared
    with ENTRY() to avoid the extra alignment overhead.

And adds three new IDT entry helper routines:
 * _write_gate_lower
    This is a substitute for using cmpxchg16b to update a 128bit
    structure at once.  It assumes that the top 64 bits are unchanged
    (and ASSERT()s the fact) and performs a regular write on the lower
    64 bits.
 * _set_gate_lower
    This is functionally equivalent to the already present _set_gate(),
    except it uses _write_gate_lower rather than updating both 64bit
    values.
 * _update_gate_addr_lower
    This is designed to update an IDT entry handler only, without
    altering any other settings in the entry.  It also uses
    _write_gate_lower.

The IDT entry helpers are required because:
  * It is unsafe to attempt a disable/update/re-enable cycle on the NMI
    or MCE IDT entries.
  * We need to be able to update NMI handlers without changing the IST
    entry.
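
The lower-qword update these helpers perform can be sketched in isolation.  The
following is only an illustration, using a plain struct in place of Xen's real
idt_entry_t and constants chosen for the example (not Xen's actual selector or
attribute values):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for Xen's idt_entry_t: 'a' is the low qword
 * (offset[15:0], selector, attributes, offset[31:16]) and 'b' is the
 * high qword (offset[63:32] plus reserved bits). */
typedef struct { uint64_t a, b; } idt_entry_t;

/* Sketch of _update_gate_addr_lower: swap the handler address while
 * keeping the selector and attribute bits, using a single aligned
 * 64-bit store so no half-updated gate is ever observable. */
static void update_gate_addr_lower(idt_entry_t *gate, uint64_t addr)
{
    uint64_t a = gate->a;

    /* The top 32 bits of the address must not change (the stand-in
     * for _write_gate_lower()'s ASSERT). */
    assert(gate->b == (addr >> 32));

    a &= 0x0000FFFFFFFF0000ULL;            /* keep selector + attributes */
    a |= ((addr & 0xFFFF0000ULL) << 32) |  /* offset[31:16]              */
         (addr & 0xFFFFULL);               /* offset[15:0]               */

    gate->a = a;                           /* one 64-bit write           */
}
```

Because only gate->a is stored, an NMI or MCE arriving mid-update sees either
the old or the new handler, never a mixture.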


As a result, the new behaviour of the kexec_crash path is:

nmi_shootdown_cpus() will:

 * Disable the crashing cpu's NMI/MCE interrupt stack tables.
    Disabling the stack tables removes race conditions which would lead
    to corrupt exception frames and infinite loops.  As this pcpu is
    never planning to execute a sysret back to a pv vcpu, the update is
    safe from a security point of view.

 * Swap the NMI trap handlers.
    The crashing pcpu gets the nop handler, to prevent it getting stuck in
    an NMI context, causing a hang instead of a crash.  The non-crashing
    pcpus all get the nmi_crash handler which is designed never to
    return.

do_nmi_crash() will:

 * Save the crash notes and shut the pcpu down.
    There is now an extra per-cpu variable to prevent us from executing
    this multiple times.  If we re-enter midway through, we attempt the
    whole operation again rather than leave it incomplete.

 * Set up another NMI at the LAPIC.
    Even when the LAPIC has been disabled, the ID and command registers
    are still usable.  As a result, we can deliberately queue up a new
    NMI to re-interrupt us later if NMIs get unlatched.  Because of the
    call to __stop_this_cpu(), we have to hand craft self_nmi() to be
    safe from General Protection Faults.

 * Fall into infinite loop.
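
The ICR programming in the two APIC modes differs mainly in where the
destination APIC ID is encoded.  A minimal sketch of just that encoding
(bit positions per the Intel SDM; function names here are illustrative,
not Xen's):

```c
#include <assert.h>
#include <stdint.h>

/* Delivery-mode / destination-mode bits as used in the patch: NMI
 * delivery mode is 0b100 in ICR bits 10:8, and physical destination
 * mode leaves bit 11 clear. */
#define APIC_DM_NMI         0x00400u
#define APIC_DEST_PHYSICAL  0x00000u

/* x2APIC: a single 64-bit wrmsr to the ICR, destination in bits 63:32. */
static uint64_t x2apic_self_nmi_icr(uint32_t apic_id)
{
    return (uint64_t)APIC_DM_NMI | APIC_DEST_PHYSICAL |
           ((uint64_t)apic_id << 32);
}

/* xAPIC: two MMIO writes -- the destination goes in ICR2 bits 31:24
 * first, then the command word is written to ICR (which sends it). */
static uint32_t xapic_self_nmi_icr2(uint32_t apic_id)
{
    return apic_id << 24;
}
```

In both cases the queued NMI sits behind the hardware latch until iret, so
it only fires if something (e.g. an MCE's iret) unlatches NMIs early.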

machine_kexec() will:

  * Swap the MCE handlers to be a nop.
     We cannot prevent MCEs from being delivered when we pass off to the
     crash kernel, and the less Xen context is being touched the better.

  * Explicitly enable NMIs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>

--

Changes since v5:
 * Changed stale commit message and removed 'cli' from nmi_crash.

Changes since v4:
 * Change stale comments, spelling corrections and coding style changes.

Changes since v3:
 * Added IDT entry helpers to safely update NMI/MCE entries.

Changes since v2:

 * Rework the alteration of the stack tables to completely remove the
   possibility of a PV domain getting very lucky with the "NMI or MCE in
   a 1 instruction race window on sysret" and managing to execute code
   in the hypervisor context.
 * Make use of set_ist() from the previous patch in the series to avoid
   open-coding the IST manipulation.

Changes since v1:
 * Reintroduce atomic_dec(&waiting_for_crash_ipi); which got missed
   during the original refactoring.
 * Fold trap_nop into the middle of enable_nmis to reuse the iret.
 * Expand comments in areas as per Tim's suggestions.

diff -r ef8c1b607b10 -r c25cdb9f5dae xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -32,41 +32,127 @@
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
+static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
-static int crash_nmi_callback(struct cpu_user_regs *regs, int cpu)
+/* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
 {
-    /* Don't do anything if this handler is invoked on crashing cpu.
-     * Otherwise, system will completely hang. Crashing cpu can get
-     * an NMI if system was initially booted with nmi_watchdog parameter.
+    int cpu = smp_processor_id();
+
+    /* nmi_shootdown_cpus() should ensure that this assertion is correct. */
+    ASSERT( cpu != crashing_cpu );
+
+    /* Save crash information and shut down CPU.  Attempt only once. */
+    if ( ! this_cpu(crash_save_done) )
+    {
+        /* Disable the interrupt stack table for the MCE handler.  This
+         * prevents race conditions between clearing MCIP and receiving a
+         * new MCE, during which the exception frame would be clobbered
+         * and the MCE handler fall into an infinite loop.  We are soon
+         * going to disable the NMI watchdog, so the loop would not be
+         * caught.
+         *
+         * We do not need to change the NMI IST, as the nmi_crash
+         * handler is immune to corrupt exception frames, by virtue of
+         * being designed never to return.
+         *
+         * This update is safe from a security point of view, as this
+         * pcpu is never going to try to sysret back to a PV vcpu.
+         */
+        set_ist(&idt_tables[cpu][TRAP_machine_check], IST_NONE);
+
+        kexec_crash_save_cpu();
+        __stop_this_cpu();
+
+        this_cpu(crash_save_done) = 1;
+        atomic_dec(&waiting_for_crash_ipi);
+    }
+
+    /* Poor man's self_nmi().  __stop_this_cpu() has reverted the LAPIC
+     * back to its boot state, so we are unable to rely on the regular
+     * apic_* functions, due to 'x2apic_enabled' being possibly wrong.
+     * (The likely scenario is that we have reverted from x2apic mode to
+     * xapic, at which point #GPFs will occur if we use the apic_*
+     * functions)
+     *
+     * The ICR and APIC ID of the LAPIC are still valid even during
+     * software disable (Intel SDM Vol 3, 10.4.7.2).  As a result, we
+     * can deliberately queue up another NMI at the LAPIC which will not
+     * be delivered as the hardware NMI latch is currently in effect.
+     * This means that if NMIs become unlatched (e.g. following a
+     * non-fatal MCE), the LAPIC will force us back here rather than
+     * wandering back into regular Xen code.
      */
-    if ( cpu == crashing_cpu )
-        return 1;
-    local_irq_disable();
+    switch ( current_local_apic_mode() )
+    {
+        u32 apic_id;
 
-    kexec_crash_save_cpu();
+    case APIC_MODE_X2APIC:
+        apic_id = apic_rdmsr(APIC_ID);
 
-    __stop_this_cpu();
+        apic_wrmsr(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL | ((u64)apic_id << 32));
+        break;
 
-    atomic_dec(&waiting_for_crash_ipi);
+    case APIC_MODE_XAPIC:
+        apic_id = GET_xAPIC_ID(apic_mem_read(APIC_ID));
+
+        while ( apic_mem_read(APIC_ICR) & APIC_ICR_BUSY )
+            cpu_relax();
+
+        apic_mem_write(APIC_ICR2, apic_id << 24);
+        apic_mem_write(APIC_ICR, APIC_DM_NMI | APIC_DEST_PHYSICAL);
+        break;
+
+    default:
+        break;
+    }
 
     for ( ; ; )
         halt();
-
-    return 1;
 }
 
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
+    int i, cpu = smp_processor_id();
 
     local_irq_disable();
 
-    crashing_cpu = smp_processor_id();
+    crashing_cpu = cpu;
     local_irq_count(crashing_cpu) = 0;
 
     atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-    /* Would it be better to replace the trap vector here? */
-    set_nmi_callback(crash_nmi_callback);
+
+    /* Change NMI trap handlers.  Non-crashing pcpus get nmi_crash, which
+     * invokes do_nmi_crash (above), causing them to save their state and
+     * fall into a loop.  The crashing pcpu gets the nop handler to
+     * cause it to return to this function ASAP.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+        {
+
+            if ( i == cpu )
+            {
+                /* Disable the interrupt stack tables for this cpu's MCE
+                 * and NMI handlers, and alter the NMI handler to have
+                 * no operation.  Disabling the stack tables prevents
+                 * stack corruption race conditions, while changing the
+                 * handler helps prevent cascading faults; we are
+                 * certainly going to crash by this point.
+                 *
+                 * This update is safe from a security point of view, as
+                 * this pcpu is never going to try to sysret back to a
+                 * PV vcpu.
+                 */
+                _set_gate_lower(&idt_tables[i][TRAP_nmi], 14, 0, &trap_nop);
+                set_ist(&idt_tables[i][TRAP_machine_check], IST_NONE);
+            }
+            else
+                /* Do not update stack table for other pcpus. */
+                _update_gate_addr_lower(&idt_tables[i][TRAP_nmi], &nmi_crash);
+        }
+
     /* Ensure the new callback function is set before sending out the NMI. */
     wmb();
 
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/arch/x86/machine_kexec.c
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -81,12 +81,31 @@ void machine_kexec(xen_kexec_image_t *im
         .base = (unsigned long)(boot_cpu_gdt_table - FIRST_RESERVED_GDT_ENTRY),
         .limit = LAST_RESERVED_GDT_BYTE
     };
+    int i;
 
     /* We are about to permanently jump out of the Xen context into the kexec
      * purgatory code.  We really don't want to be still servicing interrupts.
      */
     local_irq_disable();
 
+    /* Now that regular interrupts are disabled, we need to reduce the
+     * impact of interrupts not disabled by 'cli'.
+     *
+     * The NMI handlers have already been set up by nmi_shootdown_cpus().
+     * All pcpus other than us have the nmi_crash handler, while we have
+     * the nop handler.
+     *
+     * The MCE handlers touch extensive areas of Xen code and data.  At this
+     * point, there is nothing we can usefully do, so set the nop handler.
+     */
+    for ( i = 0; i < nr_cpu_ids; ++i )
+        if ( idt_tables[i] )
+            _update_gate_addr_lower(&idt_tables[i][TRAP_machine_check], &trap_nop);
+
+    /* Explicitly enable NMIs on this CPU.  Some crashdump kernels do
+     * not like running with NMIs disabled. */
+    enable_nmis();
+
     /*
      * compat_machine_kexec() returns to idle pagetables, which requires us
      * to be running on a static GDT mapping (idle pagetables have no GDT
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -635,11 +635,44 @@ ENTRY(nmi)
         movl  $TRAP_nmi,4(%rsp)
         jmp   handle_ist_exception
 
+ENTRY(nmi_crash)
+        pushq $0
+        movl $TRAP_nmi,4(%rsp)
+        SAVE_ALL
+        movq %rsp,%rdi
+        callq do_nmi_crash /* Does not return */
+        ud2
+
 ENTRY(machine_check)
         pushq $0
         movl  $TRAP_machine_check,4(%rsp)
         jmp   handle_ist_exception
 
+/* Enable NMIs.  No special register assumptions. Only %rax is not preserved. */
+ENTRY(enable_nmis)
+        movq  %rsp, %rax /* Grab RSP before pushing */
+
+        /* Set up stack frame */
+        pushq $0               /* SS */
+        pushq %rax             /* RSP */
+        pushfq                 /* RFLAGS */
+        pushq $__HYPERVISOR_CS /* CS */
+        leaq  1f(%rip),%rax
+        pushq %rax             /* RIP */
+
+        iretq /* Disable the hardware NMI latch */
+1:
+        retq
+
+/* No op trap handler.  Required for kexec crash path.  This is not
+ * declared with the ENTRY() macro to avoid wasted alignment space.
+ */
+.globl trap_nop
+trap_nop:
+        iretq
+
+
+
 .section .rodata, "a", @progbits
 
 ENTRY(exception_table)
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/include/asm-x86/desc.h
--- a/xen/include/asm-x86/desc.h
+++ b/xen/include/asm-x86/desc.h
@@ -106,6 +106,21 @@ typedef struct {
     u64 a, b;
 } idt_entry_t;
 
+/* Write the lower 64 bits of an IDT Entry. This relies on the upper 32
+ * bits of the address not changing, which is a safe assumption as all
+ * functions we are likely to load will live inside the 1GB
+ * code/data/bss address range.
+ *
+ * Ideally, we would use cmpxchg16b, but this is not supported on some
+ * old AMD 64bit capable processors, and has no safe equivalent.
+ */
+static inline void _write_gate_lower(volatile idt_entry_t *gate,
+                                     const idt_entry_t *new)
+{
+    ASSERT(gate->b == new->b);
+    gate->a = new->a;
+}
+
 #define _set_gate(gate_addr,type,dpl,addr)               \
 do {                                                     \
     (gate_addr)->a = 0;                                  \
@@ -122,6 +137,36 @@ do {                                    
         (1UL << 47);                                     \
 } while (0)
 
+static inline void _set_gate_lower(idt_entry_t *gate, unsigned long type,
+                                   unsigned long dpl, void *addr)
+{
+    idt_entry_t idte;
+    idte.b = gate->b;
+    idte.a =
+        (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(dpl) << 45) |
+        ((unsigned long)(type) << 40) |
+        ((unsigned long)(addr) & 0xFFFFUL) |
+        ((unsigned long)__HYPERVISOR_CS64 << 16) |
+        (1UL << 47);
+    _write_gate_lower(gate, &idte);
+}
+
+/* Update the lower half handler of an IDT Entry, without changing any
+ * other configuration. */
+static inline void _update_gate_addr_lower(idt_entry_t *gate, void *addr)
+{
+    idt_entry_t idte;
+    idte.a = gate->a;
+
+    idte.b = ((unsigned long)(addr) >> 32);
+    idte.a &= 0x0000FFFFFFFF0000ULL;
+    idte.a |= (((unsigned long)(addr) & 0xFFFF0000UL) << 32) |
+        ((unsigned long)(addr) & 0xFFFFUL);
+
+    _write_gate_lower(gate, &idte);
+}
+
 #define _set_tssldt_desc(desc,addr,limit,type)           \
 do {                                                     \
     (desc)[0].b = (desc)[1].b = 0;                       \
diff -r ef8c1b607b10 -r c25cdb9f5dae xen/include/asm-x86/processor.h
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -527,6 +527,7 @@ void do_ ## _name(struct cpu_user_regs *
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
+DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -545,6 +546,9 @@ DECLARE_TRAP_HANDLER(alignment_check);
 DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 #undef DECLARE_TRAP_HANDLER
 
+void trap_nop(void);
+void enable_nmis(void);
+
 void syscall_enter(void);
 void sysenter_entry(void);
 void sysenter_eflags_saved(void);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:20:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7lj-0007aC-3y; Thu, 13 Dec 2012 12:20:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tj7li-0007Zx-22
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:20:02 +0000
Received: from [193.109.254.147:32811] by server-4.bemta-14.messagelabs.com id
	C1/6A-15233-1F7C9C05; Thu, 13 Dec 2012 12:20:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355401146!10238663!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22898 invoked from network); 13 Dec 2012 12:19:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:19:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="570646"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 12:19:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 07:19:05 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tj7kn-0007TH-FR;
	Thu, 13 Dec 2012 12:19:05 +0000
Date: Thu, 13 Dec 2012 12:19:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012, Jan Beulich wrote:
From xen-devel-bounces@lists.xen.org Thu Dec 13 12:20:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj7lj-0007aC-3y; Thu, 13 Dec 2012 12:20:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tj7li-0007Zx-22
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:20:02 +0000
Received: from [193.109.254.147:32811] by server-4.bemta-14.messagelabs.com id
	C1/6A-15233-1F7C9C05; Thu, 13 Dec 2012 12:20:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355401146!10238663!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22898 invoked from network); 13 Dec 2012 12:19:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:19:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="570646"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 12:19:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 07:19:05 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tj7kn-0007TH-FR;
	Thu, 13 Dec 2012 12:19:05 +0000
Date: Thu, 13 Dec 2012 12:19:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012, Jan Beulich wrote:
> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > On Wed, 12 Dec 2012 17:15:23 -0800
> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > 
> >> On Tue, 11 Dec 2012 12:10:19 +0000
> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >> 
> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> >> > 
> >> > That's strange, because AFAIK Linux never edits the MSI-X
> >> > entries directly: have a look at
> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs; Linux only remaps
> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> >> > touch the real MSI-X table.
> >> 
> >> So, this is what's happening. The side effect of:
> >> 
> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> >>                                 dev->msix_table.last) )
> >>             WARN();
> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> >>                                 dev->msix_pba.last) )
> >>             WARN();
> >> 
> >> in msix_capability_init() in Xen is that the dom0 EPT entries I've
> >> mapped go from RW to read-only. Then when dom0 accesses them, I get
> >> an EPT violation. In the pure PV case, the PTE used to access the
> >> iomem stays RW, and the rangeset addition above doesn't affect it.
> >> I don't understand why. Looking into that now...
> 
> As far as I was able to tell back at the time when I implemented
> this, existing code shouldn't have mappings for these tables in
> place at the time these ranges get added here. But I noted in
> the patch description that this is a potential issue (and may need
> fixing if deemed severe enough - back then, apparently nobody
> really cared, perhaps largely because passthrough to PV guests
> isn't considered fully secure anyway).
> 
> Now - did that change? I.e. can you describe where the mappings
> come from that cause the problem here?

The generic Linux MSI-X handling code does that, before calling the
arch-specific MSI setup function, for which we have a Xen version
(xen_initdom_setup_msi_irqs):

pci_enable_msix -> msix_capability_init -> msix_map_region

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:36:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:36:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj81X-0007pQ-SK; Thu, 13 Dec 2012 12:36:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tj81W-0007pL-OL
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:36:22 +0000
Received: from [85.158.138.51:28188] by server-7.bemta-3.messagelabs.com id
	D9/EC-23008-0CBC9C05; Thu, 13 Dec 2012 12:36:16 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355402174!28663947!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11912 invoked from network); 13 Dec 2012 12:36:14 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 12:36:14 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tj81M-000K4t-IK; Thu, 13 Dec 2012 12:36:12 +0000
Date: Thu, 13 Dec 2012 12:36:12 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121213123612.GE75286@ocelot.phlegethon.org>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354799451-16876-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/9] xen: arm: mark early_panic as a
	noreturn function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:10 +0000 on 06 Dec (1354799442), Ian Campbell wrote:
> Otherwise gcc complains about variables being used when not
> initialised when in fact that point is never reached.
> 
> There aren't any instances of this in tree right now, I noticed this
> while developing another patch.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:44:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj89E-0000Ju-Pz; Thu, 13 Dec 2012 12:44:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tj89D-0000Jl-Cx
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:44:19 +0000
Received: from [85.158.143.99:58762] by server-3.bemta-4.messagelabs.com id
	1A/B4-18211-2ADC9C05; Thu, 13 Dec 2012 12:44:18 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355402600!19562955!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20181 invoked from network); 13 Dec 2012 12:43:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:43:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="535215"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 12:43:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 07:43:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tj88F-0007tA-Ed;
	Thu, 13 Dec 2012 12:43:19 +0000
Date: Thu, 13 Dec 2012 12:43:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012, G.R. wrote:
> >> > PVHVM Linux guests setup interrupts differently: they request an event
> >> > channel for each legacy interrupt or MSI/MSI-X, then the hypervisor uses
> >> > these event channels to inject notifications into the guest, rather
> >> > than emulated interrupts or emulated MSIs.
> >> >
> >> Will this affect the result of pci_get_class() as called by the intel driver?
> >> If not, this can still not explain the different behavior.
> >> Maybe I need to do one more experiment when I got time.
> >
> > No, it doesn't.
> >
> I did more experiments last night, and it turns out I had been fooled
> previously.
> Actually, none of the configs (PV or not) correctly detected the PCH version.
> The magic that makes my debian PVHVM build work is in the script-maintained
> grub config, which automatically loads a VGA / FB driver to show the
> background image.
> The manually maintained grub config for openelec is quick && dirty and
> does not have such fancy stuff.
> Manually adding such a VGA / FB driver to the grub config could also 'fix'
> the output issue.
> 
> However, the driver's failure to detect the PCH version is a real issue,
> even though the output can be made available through grub.
> Driver code paths specific to the new PCH version are simply skipped
> because of this, which could potentially lead to other problems.
> 
> Could the xen && intel developers sit down to discuss and come up with
> a formal solution?
> I can foresee that cooperation is required on this topic.
 
Does this patch work for you?



diff --git a/hw/pci.c b/hw/pci.c
index f051de1..d371bd7 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
     }
 }
 
-typedef struct {
-    PCIDevice dev;
-    PCIBus *bus;
-} PCIBridge;
-
 void pci_bridge_write_config(PCIDevice *d,
                              uint32_t address, uint32_t val, int len)
 {
diff --git a/hw/pci.h b/hw/pci.h
index edc58b6..c2acab9 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -222,6 +222,11 @@ struct PCIDevice {
     int irq_state[4];
 };
 
+typedef struct {
+    PCIDevice dev;
+    PCIBus *bus;
+} PCIBridge;
+
 extern char direct_pci_str[];
 extern int direct_pci_msitranslate;
 extern int direct_pci_power_mgmt;
diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
index c6f8869..d8e789f 100644
--- a/hw/pt-graphics.c
+++ b/hw/pt-graphics.c
@@ -3,6 +3,7 @@
  */
 
 #include "pass-through.h"
+#include "pci.h"
 #include "pci/header.h"
 #include "pci/pci.h"
 
@@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
     did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
     rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
 
-    if ( vid == PCI_VENDOR_ID_INTEL )
-        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
-                        pch_map_irq, "intel_bridge_1f");
+    if (vid == PCI_VENDOR_ID_INTEL) {
+        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
+                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
+
+        pci_config_set_vendor_id(s->dev.config, vid);
+        pci_config_set_device_id(s->dev.config, did);
+
+        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
+        s->dev.config[PCI_COMMAND + 1] = 0x00;
+        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
+        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
+        s->dev.config[PCI_REVISION] = rid;
+        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
+        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
+        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
+        s->dev.config[PCI_HEADER_TYPE] = 0x81;
+        s->dev.config[PCI_SEC_STATUS] = 0xa0;
+
+        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
+    }
 }
 
 uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 12:54:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 12:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8IY-00013c-KZ; Thu, 13 Dec 2012 12:53:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tj8IW-00013W-Je
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 12:53:56 +0000
Received: from [85.158.139.83:18053] by server-12.bemta-5.messagelabs.com id
	93/11-02275-1EFC9C05; Thu, 13 Dec 2012 12:53:53 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355403228!27565362!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4140 invoked from network); 13 Dec 2012 12:53:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:53:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="573423"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 12:53:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 07:53:25 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tj8I1-00085E-GB;
	Thu, 13 Dec 2012 12:53:25 +0000
Date: Thu, 13 Dec 2012 12:53:21 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1355395293-3122-2-git-send-email-linux@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212131245280.17523@kaball.uk.xensource.com>
References: <1Tj6Gv-00030a-73>
	<1355395293-3122-2-git-send-email-linux@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Fix compile errors when enabling Xen debug
	logging.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012, Sander Eikelenboom wrote:
> Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>

Please wrap lines longer than 80 characters.


>  hw/xen_pt.c |    4 ++--
>  xen-all.c   |    4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/xen_pt.c b/hw/xen_pt.c
> index 7a3846e..ca42a14 100644
> --- a/hw/xen_pt.c
> +++ b/hw/xen_pt.c
> @@ -671,7 +671,7 @@ static int xen_pt_initfn(PCIDevice *d)
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> -                   s->real_device.domain, bus, slot, func);
> +                   s->real_device.domain, s->real_device.bus, s->real_device.dev, s->real_device.func);
>      }
>  
>      /* Initialize virtualized PCI configuration (Extended 256 Bytes) */
> @@ -752,7 +752,7 @@ out:
>      memory_listener_register(&s->memory_listener, &address_space_memory);
>      memory_listener_register(&s->io_listener, &address_space_io);
>      XEN_PT_LOG(d, "Real physical device %02x:%02x.%d registered successfuly!\n",
> -               bus, slot, func);
> +               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function);
>  
>      return 0;
>  }
> diff --git a/xen-all.c b/xen-all.c
> index 046cc2a..d225b9d 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -292,7 +292,7 @@ static int xen_add_to_physmap(XenIOState *state,
>      return -1;
>  
>  go_physmap:
> -    DPRINTF("mapping vram to %llx - %llx\n", start_addr, start_addr + size);
> +    DPRINTF("mapping vram to %"PRI_xen_pfn" - %"PRI_xen_pfn"\n", start_addr, start_addr + size);

given that start_addr and size are hwaddr, these should be HWADDR_PRIx


>      pfn = phys_offset >> TARGET_PAGE_BITS;
>      start_gpfn = start_addr >> TARGET_PAGE_BITS;
> @@ -365,7 +365,7 @@ static int xen_remove_from_physmap(XenIOState *state,
>      phys_offset = physmap->phys_offset;
>      size = physmap->size;
>  
> -    DPRINTF("unmapping vram to %llx - %llx, from %llx\n",
> +    DPRINTF("unmapping vram to %"PRI_xen_pfn" - %"PRI_xen_pfn", from %"PRI_xen_pfn"\n",
>              phys_offset, phys_offset + size, start_addr);

same here

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:00:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8Ou-0001Hs-GH; Thu, 13 Dec 2012 13:00:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1Tj8Ot-0001Hl-0z
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:00:31 +0000
Received: from [85.158.143.35:39446] by server-2.bemta-4.messagelabs.com id
	91/07-30861-E61D9C05; Thu, 13 Dec 2012 13:00:30 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355403535!12331355!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19721 invoked from network); 13 Dec 2012 12:58:57 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:58:57 -0000
Received: by mail-qa0-f45.google.com with SMTP id j15so6494452qaq.11
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 04:58:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=2r9kzw7brksxX00SSQkrh931uLZeRTaNNTTRhdjq10g=;
	b=UTOD78E9gxfHDpTaCrBY7P9ffT0oW3GFiAsXfzYDbKtjG0gIes/uPpho8rlNSEqEXA
	lUxKBaoU4oFFGdGP7SxvOF2omSwyFabLqKo9PvEEaZAGJZq8NknOB/owgqHbmD+X7Kh7
	wTKMtY2rgbC6UNWaM4t/JrF9FUTRaSVtSk/YU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type
	:x-gm-message-state;
	bh=2r9kzw7brksxX00SSQkrh931uLZeRTaNNTTRhdjq10g=;
	b=msNUg0l2zjnehM5JggS1ElFZpGWdqS/MZvnmRlB9R38MpdyZCuJ6NjoU09OCRkCZMr
	l+DJZO+VgGaCcZLkCm1FVN/phF11EuDDaKOqvmH/ghkFODeovUQuYbMrMngzg9i8bdav
	66yRw4ufb1/LppWMx5lIB//N6+8xQ0MhgVkPhr7QAcDCIvkPGE+bIgloH/8pk0h86qi2
	lyfN4kreaN4W7SR01YDwH//jIdQJ1qabHnPwCl4EbXpbbieQQL6AhLN+SMT/i33aR5B2
	UEyN3ywN/jFGlwWKPXRS2HZAFXGOwa2STHRsrWZG7w5Lrq5tUBGfMzQnjLU3qiLeQT5k
	Mb6g==
Received: by 10.49.96.1 with SMTP id do1mr866631qeb.26.1355403535377; Thu, 13
	Dec 2012 04:58:55 -0800 (PST)
MIME-Version: 1.0
Received: by 10.49.119.202 with HTTP; Thu, 13 Dec 2012 04:58:40 -0800 (PST)
X-Originating-IP: [178.70.154.20]
In-Reply-To: <50C9BCA8.3010504@citrix.com>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
	<CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
	<50C9BCA8.3010504@citrix.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Thu, 13 Dec 2012 16:58:40 +0400
X-Google-Sender-Auth: H1qSuoo7XGRDc4-Q7pjMmZBMwJw
Message-ID: <CACaajQs+0TkdYYewCkX-DLaT514ve_VTSR0zr6bNBYciWvOiqw@mail.gmail.com>
To: Mats Petersson <mats.petersson@citrix.com>
X-Gm-Message-State: ALoCoQkJamBaeOX/hOgSqRDekaDL+VZWa5j/pHOGJt7liaCfY3++c2kywMPeAo+XVYw3Fvxx7mIX
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks, I'll try.

2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
> On 13/12/12 11:27, Vasiliy Tolstov wrote:
>>
>> If I apply this patch to kernel 3.6.7, do I need to apply any other
>> patches for it to work?
>
> No, it should "just work". Obviously, if it doesn't, let me know.
>
> --
> Mats
>
>>
>> 2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
>>>
>>> One comment asked for more details on the improvements:
>>> Using a small test program to map Guest memory into Dom0 (repeatedly
>>> for "Iterations" mapping the same first "Num Pages")
>>> Iterations    Num Pages    Time 3.7rc4  Time With this patch
>>> 5000          4096         76.107       37.027
>>> 10000         2048         75.703       37.177
>>> 20000         1024         75.893       37.247
>>> So a little better than twice as fast.
>>>
>>> Using this patch in migration, using "time" to measure the overall
>>> time it take to migrate a guest (Debian Squeeze 6.0, 1024MB guest
>>> memory, one network card, one disk, guest at login prompt):
>>> Time 3.7rc5             Time With this patch
>>> 6.697                   5.667
>>> Since migration involves a whole lot of other things, it's only about
>>> 15% faster - but still a good improvement. Similar measurement with a
>>> guest that is running code to "dirty" memory shows about 23%
>>> improvement, as it spends more time copying dirtied memory.
>>>
>>> As discussed elsewhere, a good deal more can be had from improving the
>>> munmap system call, but it is a little tricky to get this in without
>>> worsening non-PVOPS kernel, so I will have another look at this.
>>>
>>> ---
>>> Update since last posting:
>>> I have just run some benchmarks of a 16GB guest, and the improvement
>>> with this patch is around 23-30% for the overall copy time, and 42%
>>> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
>>> memory). I think this is definitely worth having.
>>>
>>> The V4 patch consists of cosmetic adjustments: a spelling mistake in a
>>> comment fixed, a condition in an if-statement reversed to avoid an
>>> "else" branch, and a stray empty line (added by accident) reverted to
>>> the previous state. Thanks to Jan Beulich for the comments.
>>>
>>> The V3 of the patch contains suggested improvements from:
>>> David Vrabel - make it two distinct external functions, doc-comments.
>>> Ian Campbell - use one common function for the main work.
>>> Jan Beulich  - found a bug and pointed out some whitespace problems.
>>>
>>>
>>>
>>> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>>>
>>> ---
>>>   arch/x86/xen/mmu.c    |  132
>>> +++++++++++++++++++++++++++++++++++++++++++------
>>>   drivers/xen/privcmd.c |   55 +++++++++++++++++----
>>>   include/xen/xen-ops.h |    5 ++
>>>   3 files changed, 169 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>>> index dcf5f2d..a67774f 100644
>>> --- a/arch/x86/xen/mmu.c
>>> +++ b/arch/x86/xen/mmu.c
>>> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>>>   #define REMAP_BATCH_SIZE 16
>>>
>>>   struct remap_data {
>>> -       unsigned long mfn;
>>> +       unsigned long *mfn;
>>> +       bool contiguous;
>>>          pgprot_t prot;
>>>          struct mmu_update *mmu_update;
>>>   };
>>> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep,
>>> pgtable_t token,
>>>                                   unsigned long addr, void *data)
>>>   {
>>>          struct remap_data *rmd = data;
>>> -       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
>>> +       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
>>> +       /* If we have a contigious range, just update the mfn itself,
>>> +          else update pointer to be "next mfn". */
>>> +       if (rmd->contiguous)
>>> +               (*rmd->mfn)++;
>>> +       else
>>> +               rmd->mfn++;
>>>
>>>          rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>>>          rmd->mmu_update->val = pte_val_ma(pte);
>>> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep,
>>> pgtable_t token,
>>>          return 0;
>>>   }
>>>
>>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>> -                              unsigned long addr,
>>> -                              unsigned long mfn, int nr,
>>> -                              pgprot_t prot, unsigned domid)
>>> -{
>>> +/* do_remap_mfn() - helper function to map foreign pages
>>> + * @vma:     the vma for the pages to be mapped into
>>> + * @addr:    the address at which to map the pages
>>> + * @mfn:     pointer to array of MFNs to map
>>> + * @nr:      the number entries in the MFN array
>>> + * @err_ptr: pointer to array
>>> + * @prot:    page protection mask
>>> + * @domid:   id of the domain that we are mapping from
>>> + *
>>> + * This function takes an array of mfns and maps nr pages from that into
>>> + * this kernel's memory. The owner of the pages is defined by domid.
>>> Where the
>>> + * pages are mapped is determined by addr, and vma is used for
>>> "accounting" of
>>> + * the pages.
>>> + *
>>> + * Return value is zero for success, negative for failure.
>>> + *
>>> + * Note that err_ptr is used to indicate whether *mfn
>>> + * is a list or a "first mfn of a contiguous range". */
>>> +static int do_remap_mfn(struct vm_area_struct *vma,
>>> +                       unsigned long addr,
>>> +                       unsigned long *mfn, int nr,
>>> +                       int *err_ptr, pgprot_t prot,
>>> +                       unsigned domid)
>>> +{
>>> +       int err, last_err = 0;
>>>          struct remap_data rmd;
>>>          struct mmu_update mmu_update[REMAP_BATCH_SIZE];
>>> -       int batch;
>>>          unsigned long range;
>>> -       int err = 0;
>>>
>>>          if (xen_feature(XENFEAT_auto_translated_physmap))
>>>                  return -EINVAL;
>>> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct
>>> vm_area_struct *vma,
>>>
>>>          rmd.mfn = mfn;
>>>          rmd.prot = prot;
>>> +       /* We use the err_ptr to indicate if there we are doing a
>>> contigious
>>> +        * mapping or a discontigious mapping. */
>>> +       rmd.contiguous = !err_ptr;
>>>
>>>          while (nr) {
>>> -               batch = min(REMAP_BATCH_SIZE, nr);
>>> +               int index = 0;
>>> +               int done = 0;
>>> +               int batch = min(REMAP_BATCH_SIZE, nr);
>>> +               int batch_left = batch;
>>>                  range = (unsigned long)batch << PAGE_SHIFT;
>>>
>>>                  rmd.mmu_update = mmu_update;
>>> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct
>>> vm_area_struct *vma,
>>>                  if (err)
>>>                          goto out;
>>>
>>> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL,
>>> domid);
>>> -               if (err < 0)
>>> -                       goto out;
>>> +               /* We record the error for each page that gives an error,
>>> but
>>> +                * continue mapping until the whole set is done */
>>> +               do {
>>> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
>>> +                                                   batch_left, &done,
>>> domid);
>>> +                       if (err < 0) {
>>> +                               if (!err_ptr)
>>> +                                       goto out;
>>> +                               /* increment done so we skip the error
>>> item */
>>> +                               done++;
>>> +                               last_err = err_ptr[index] = err;
>>> +                       }
>>> +                       batch_left -= done;
>>> +                       index += done;
>>> +               } while (batch_left);
>>>
>>>                  nr -= batch;
>>>                  addr += range;
>>> +               if (err_ptr)
>>> +                       err_ptr += batch;
>>>          }
>>>
>>> -       err = 0;
>>> +       err = last_err;
>>>   out:
>>>
>>>          xen_flush_tlb_all();
>>>
>>>          return err;
>>>   }
>>> +
>>> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
>>> + * @vma:   the vma for the pages to be mapped into
>>> + * @addr:  the address at which to map the pages
>>> + * @mfn:   the first MFN to map
>>> + * @nr:    the number of consecutive mfns to map
>>> + * @prot:  page protection mask
>>> + * @domid: id of the domain that we are mapping from
>>> + *
>>> + * This function takes a mfn and maps nr pages on from that into this
>>> kernel's
>>> + * memory. The owner of the pages is defined by domid. Where the pages
>>> are
>>> + * mapped is determined by addr, and vma is used for "accounting" of the
>>> + * pages.
>>> + *
>>> + * Return value is zero for success, negative ERRNO value for failure.
>>> + */
>>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>> +                              unsigned long addr,
>>> +                              unsigned long mfn, int nr,
>>> +                              pgprot_t prot, unsigned domid)
>>> +{
>>> +       return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
>>> +}
>>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>>> +
>>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>>> + * @vma:     the vma for the pages to be mapped into
>>> + * @addr:    the address at which to map the pages
>>> + * @mfn:     pointer to array of MFNs to map
>>> + * @nr:      the number entries in the MFN array
>>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>>> + *           value for each page. The err_ptr must not be NULL.
>>> + * @prot:    page protection mask
>>> + * @domid:   id of the domain that we are mapping from
>>> + *
>>> + * This function takes an array of mfns and maps nr pages from that into
>>> this
>>> + * kernel's memory. The owner of the pages is defined by domid. Where
>>> the pages
>>> + * are mapped is determined by addr, and vma is used for "accounting" of
>>> the
>>> + * pages. The err_ptr array is filled in on any page that is not
>>> sucessfully
>>> + * mapped in.
>>> + *
>>> + * Return value is zero for success, negative ERRNO value for failure.
>>> + * Note that the error value -ENOENT is considered a "retry", so when
>>> this
>>> + * error code is seen, another call should be made with the list of
>>> pages that
>>> + * are marked as -ENOENT in the err_ptr array.
>>> + */
>>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>>> +                              unsigned long addr,
>>> +                              unsigned long *mfn, int nr,
>>> +                              int *err_ptr, pgprot_t prot,
>>> +                              unsigned domid)
>>> +{
>>> +       /* We BUG_ON because it's a programmer error to pass a NULL
>>> err_ptr,
>>> +        * and the consequences later is quite hard to detect what the
>>> actual
>>> +        * cause of "wrong memory was mapped in".
>>> +        */
>>> +       BUG_ON(err_ptr == NULL);
>>> +       return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
>>> +}
>>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index 71f5c45..75f6e86 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t
>>> size,
>>>          return ret;
>>>   }
>>>
>>> +/*
>>> + * Similar to traverse_pages, but use each page as a "block" of
>>> + * data to be processed as one unit.
>>> + */
>>> +static int traverse_pages_block(unsigned nelem, size_t size,
>>> +                               struct list_head *pos,
>>> +                               int (*fn)(void *data, int nr, void
>>> *state),
>>> +                               void *state)
>>> +{
>>> +       void *pagedata;
>>> +       unsigned pageidx;
>>> +       int ret = 0;
>>> +
>>> +       BUG_ON(size > PAGE_SIZE);
>>> +
>>> +       pageidx = PAGE_SIZE;
>>> +
>>> +       while (nelem) {
>>> +               int nr = (PAGE_SIZE/size);
>>> +               struct page *page;
>>> +               if (nr > nelem)
>>> +                       nr = nelem;
>>> +               pos = pos->next;
>>> +               page = list_entry(pos, struct page, lru);
>>> +               pagedata = page_address(page);
>>> +               ret = (*fn)(pagedata, nr, state);
>>> +               if (ret)
>>> +                       break;
>>> +               nelem -= nr;
>>> +       }
>>> +
>>> +       return ret;
>>> +}
>>> +
>>>   struct mmap_mfn_state {
>>>          unsigned long va;
>>>          struct vm_area_struct *vma;
>>> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>>>           *      0 for no errors
>>>           *      1 if at least one error has happened (and no
>>>           *          -ENOENT errors have happened)
>>> -        *      -ENOENT if at least 1 -ENOENT has happened.
>>> +        *      -ENOENT if at least one -ENOENT has happened.
>>>           */
>>>          int global_error;
>>>          /* An array for individual errors */
>>> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>>>          xen_pfn_t __user *user_mfn;
>>>   };
>>>
>>> -static int mmap_batch_fn(void *data, void *state)
>>> +static int mmap_batch_fn(void *data, int nr, void *state)
>>>   {
>>>          xen_pfn_t *mfnp = data;
>>>          struct mmap_batch_state *st = state;
>>>          int ret;
>>>
>>> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK,
>>> *mfnp, 1,
>>> -                                        st->vma->vm_page_prot,
>>> st->domain);
>>> +       BUG_ON(nr < 0);
>>>
>>> -       /* Store error code for second pass. */
>>> -       *(st->err++) = ret;
>>> +       ret = xen_remap_domain_mfn_array(st->vma,
>>> +                                        st->va & PAGE_MASK,
>>> +                                        mfnp, nr,
>>> +                                        st->err,
>>> +                                        st->vma->vm_page_prot,
>>> +                                        st->domain);
>>>
>>>          /* And see if it affects the global_error. */
>>>          if (ret < 0) {
>>> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>>>                                  st->global_error = 1;
>>>                  }
>>>          }
>>> -       st->va += PAGE_SIZE;
>>> +       st->va += PAGE_SIZE * nr;
>>>
>>>          return 0;
>>>   }
>>> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user
>>> *udata, int version)
>>>          state.err           = err_array;
>>>
>>>          /* mmap_batch_fn guarantees ret == 0 */
>>> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>>> -                            &pagelist, mmap_batch_fn, &state));
>>> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>>> +                                   &pagelist, mmap_batch_fn, &state));
>>>
>>>          up_write(&mm->mmap_sem);
>>>
>>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>>> index 6a198e4..22cad75 100644
>>> --- a/include/xen/xen-ops.h
>>> +++ b/include/xen/xen-ops.h
>>> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct
>>> *vma,
>>>                                 unsigned long mfn, int nr,
>>>                                 pgprot_t prot, unsigned domid);
>>>
>>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>>> +                              unsigned long addr,
>>> +                              unsigned long *mfn, int nr,
>>> +                              int *err_ptr, pgprot_t prot,
>>> +                              unsigned domid);
>>>   #endif /* INCLUDE_XEN_OPS_H */
>>> --
>>> 1.7.9.5
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>
>>
>>
>> --
>> Vasiliy Tolstov,
>> Clodo.ru
>> e-mail: v.tolstov@selfip.ru
>> jabber: vase@selfip.ru
>
>



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:00:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8Ou-0001Hs-GH; Thu, 13 Dec 2012 13:00:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1Tj8Ot-0001Hl-0z
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:00:31 +0000
Received: from [85.158.143.35:39446] by server-2.bemta-4.messagelabs.com id
	91/07-30861-E61D9C05; Thu, 13 Dec 2012 13:00:30 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355403535!12331355!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19721 invoked from network); 13 Dec 2012 12:58:57 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 12:58:57 -0000
Received: by mail-qa0-f45.google.com with SMTP id j15so6494452qaq.11
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 04:58:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=2r9kzw7brksxX00SSQkrh931uLZeRTaNNTTRhdjq10g=;
	b=UTOD78E9gxfHDpTaCrBY7P9ffT0oW3GFiAsXfzYDbKtjG0gIes/uPpho8rlNSEqEXA
	lUxKBaoU4oFFGdGP7SxvOF2omSwyFabLqKo9PvEEaZAGJZq8NknOB/owgqHbmD+X7Kh7
	wTKMtY2rgbC6UNWaM4t/JrF9FUTRaSVtSk/YU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type
	:x-gm-message-state;
	bh=2r9kzw7brksxX00SSQkrh931uLZeRTaNNTTRhdjq10g=;
	b=msNUg0l2zjnehM5JggS1ElFZpGWdqS/MZvnmRlB9R38MpdyZCuJ6NjoU09OCRkCZMr
	l+DJZO+VgGaCcZLkCm1FVN/phF11EuDDaKOqvmH/ghkFODeovUQuYbMrMngzg9i8bdav
	66yRw4ufb1/LppWMx5lIB//N6+8xQ0MhgVkPhr7QAcDCIvkPGE+bIgloH/8pk0h86qi2
	lyfN4kreaN4W7SR01YDwH//jIdQJ1qabHnPwCl4EbXpbbieQQL6AhLN+SMT/i33aR5B2
	UEyN3ywN/jFGlwWKPXRS2HZAFXGOwa2STHRsrWZG7w5Lrq5tUBGfMzQnjLU3qiLeQT5k
	Mb6g==
Received: by 10.49.96.1 with SMTP id do1mr866631qeb.26.1355403535377; Thu, 13
	Dec 2012 04:58:55 -0800 (PST)
MIME-Version: 1.0
Received: by 10.49.119.202 with HTTP; Thu, 13 Dec 2012 04:58:40 -0800 (PST)
X-Originating-IP: [178.70.154.20]
In-Reply-To: <50C9BCA8.3010504@citrix.com>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
	<CACaajQu9VexhRWXK8U_fjtjFSL6Bwk=0e8OdWhshh7UERmiuQg@mail.gmail.com>
	<50C9BCA8.3010504@citrix.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Thu, 13 Dec 2012 16:58:40 +0400
X-Google-Sender-Auth: H1qSuoo7XGRDc4-Q7pjMmZBMwJw
Message-ID: <CACaajQs+0TkdYYewCkX-DLaT514ve_VTSR0zr6bNBYciWvOiqw@mail.gmail.com>
To: Mats Petersson <mats.petersson@citrix.com>
X-Gm-Message-State: ALoCoQkJamBaeOX/hOgSqRDekaDL+VZWa5j/pHOGJt7liaCfY3++c2kywMPeAo+XVYw3Fvxx7mIX
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks, I'll try it.

2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
> On 13/12/12 11:27, Vasiliy Tolstov wrote:
>>
>> If I apply this patch to kernel 3.6.7, do I need to apply any other
>> patches to make it work?
>
> No, it should "just work". Obviously, if it doesn't, let me know.
>
> --
> Mats
>
>>
>> 2012/12/13 Mats Petersson <mats.petersson@citrix.com>:
>>>
>>> One comment asked for more details on the improvements:
>>> Using a small test program to map Guest memory into Dom0 (repeating
>>> "Iterations" times, each time mapping the same first "Num Pages" pages):
>>> Iterations    Num Pages    Time 3.7rc4  Time With this patch
>>> 5000          4096         76.107       37.027
>>> 10000         2048         75.703       37.177
>>> 20000         1024         75.893       37.247
>>> So a little better than twice as fast.
>>>
>>> Using this patch in migration, using "time" to measure the overall
>>> time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
>>> memory, one network card, one disk, guest at login prompt):
>>> Time 3.7rc5             Time With this patch
>>> 6.697                   5.667
>>> Since migration involves a whole lot of other things, it's only about
>>> 15% faster - but still a good improvement. Similar measurement with a
>>> guest that is running code to "dirty" memory shows about 23%
>>> improvement, as it spends more time copying dirtied memory.
>>>
>>> As discussed elsewhere, a good deal more can be had from improving the
>>> munmap system call, but it is a little tricky to get this in without
>>> worsening the non-PVOPS kernel, so I will have another look at this.
>>>
>>> ---
>>> Update since last posting:
>>> I have just run some benchmarks of a 16GB guest, and the improvement
>>> with this patch is around 23-30% for the overall copy time, and 42%
>>> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
>>> memory). I think this is definitely worth having.
>>>
>>> The V4 patch consists of cosmetic adjustments (a spelling mistake in a
>>> comment fixed, a condition in an if-statement reversed to avoid an
>>> "else" branch, and a stray empty line added by accident reverted to
>>> its previous state). Thanks to Jan Beulich for the comments.
>>>
>>> The V3 of the patch contains suggested improvements from:
>>> David Vrabel - make it two distinct external functions, doc-comments.
>>> Ian Campbell - use one common function for the main work.
>>> Jan Beulich  - found a bug and pointed out some whitespace problems.
>>>
>>>
>>>
>>> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>>>
>>> ---
>>>   arch/x86/xen/mmu.c    |  132
>>> +++++++++++++++++++++++++++++++++++++++++++------
>>>   drivers/xen/privcmd.c |   55 +++++++++++++++++----
>>>   include/xen/xen-ops.h |    5 ++
>>>   3 files changed, 169 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>>> index dcf5f2d..a67774f 100644
>>> --- a/arch/x86/xen/mmu.c
>>> +++ b/arch/x86/xen/mmu.c
>>> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>>>   #define REMAP_BATCH_SIZE 16
>>>
>>>   struct remap_data {
>>> -       unsigned long mfn;
>>> +       unsigned long *mfn;
>>> +       bool contiguous;
>>>          pgprot_t prot;
>>>          struct mmu_update *mmu_update;
>>>   };
>>> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep,
>>> pgtable_t token,
>>>                                   unsigned long addr, void *data)
>>>   {
>>>          struct remap_data *rmd = data;
>>> -       pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
>>> +       pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
>>> +       /* If we have a contiguous range, just update the mfn itself,
>>> +          else update the pointer to be "next mfn". */
>>> +       if (rmd->contiguous)
>>> +               (*rmd->mfn)++;
>>> +       else
>>> +               rmd->mfn++;
>>>
>>>          rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>>>          rmd->mmu_update->val = pte_val_ma(pte);
>>> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep,
>>> pgtable_t token,
>>>          return 0;
>>>   }
>>>
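[The contiguous-vs-array advance above can be demonstrated in isolation. A minimal user-space sketch; `mfn_cursor` and `next_mfn` are illustrative names, not part of the patch, and stand in for `struct remap_data` and the pointer logic in `remap_area_mfn_pte_fn`:]

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors struct remap_data in the patch: one cursor serves both modes. */
struct mfn_cursor {
    unsigned long *mfn; /* contiguous: counter to bump; array: entry to walk */
    bool contiguous;
};

/* Return the MFN for the current page and advance, exactly as
 * remap_area_mfn_pte_fn does once per PTE. */
unsigned long next_mfn(struct mfn_cursor *c)
{
    unsigned long m = *c->mfn;

    if (c->contiguous)
        (*c->mfn)++; /* next page is mfn+1 */
    else
        c->mfn++;    /* next page is the next array entry */
    return m;
}
```

[Either way the callback only ever reads `*rmd->mfn`, so the two mapping styles share one code path.]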
>>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>> -                              unsigned long addr,
>>> -                              unsigned long mfn, int nr,
>>> -                              pgprot_t prot, unsigned domid)
>>> -{
>>> +/* do_remap_mfn() - helper function to map foreign pages
>>> + * @vma:     the vma for the pages to be mapped into
>>> + * @addr:    the address at which to map the pages
>>> + * @mfn:     pointer to array of MFNs to map
>>> + * @nr:      the number of entries in the MFN array
>>> + * @err_ptr: pointer to an array of per-page error codes, or NULL
>>> + * @prot:    page protection mask
>>> + * @domid:   id of the domain that we are mapping from
>>> + *
>>> + * This function takes an array of mfns and maps nr pages from that into
>>> + * this kernel's memory. The owner of the pages is defined by domid.
>>> Where the
>>> + * pages are mapped is determined by addr, and vma is used for
>>> "accounting" of
>>> + * the pages.
>>> + *
>>> + * Return value is zero for success, negative for failure.
>>> + *
>>> + * Note that err_ptr is used to indicate whether *mfn
>>> + * is a list or a "first mfn of a contiguous range". */
>>> +static int do_remap_mfn(struct vm_area_struct *vma,
>>> +                       unsigned long addr,
>>> +                       unsigned long *mfn, int nr,
>>> +                       int *err_ptr, pgprot_t prot,
>>> +                       unsigned domid)
>>> +{
>>> +       int err, last_err = 0;
>>>          struct remap_data rmd;
>>>          struct mmu_update mmu_update[REMAP_BATCH_SIZE];
>>> -       int batch;
>>>          unsigned long range;
>>> -       int err = 0;
>>>
>>>          if (xen_feature(XENFEAT_auto_translated_physmap))
>>>                  return -EINVAL;
>>> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct
>>> vm_area_struct *vma,
>>>
>>>          rmd.mfn = mfn;
>>>          rmd.prot = prot;
>>> +       /* We use the err_ptr to indicate whether we are doing a
>>> contiguous
>>> +        * mapping or a discontiguous mapping. */
>>> +       rmd.contiguous = !err_ptr;
>>>
>>>          while (nr) {
>>> -               batch = min(REMAP_BATCH_SIZE, nr);
>>> +               int index = 0;
>>> +               int done = 0;
>>> +               int batch = min(REMAP_BATCH_SIZE, nr);
>>> +               int batch_left = batch;
>>>                  range = (unsigned long)batch << PAGE_SHIFT;
>>>
>>>                  rmd.mmu_update = mmu_update;
>>> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct
>>> vm_area_struct *vma,
>>>                  if (err)
>>>                          goto out;
>>>
>>> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL,
>>> domid);
>>> -               if (err < 0)
>>> -                       goto out;
>>> +               /* We record the error for each page that gives an error,
>>> but
>>> +                * continue mapping until the whole set is done */
>>> +               do {
>>> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
>>> +                                                   batch_left, &done,
>>> domid);
>>> +                       if (err < 0) {
>>> +                               if (!err_ptr)
>>> +                                       goto out;
>>> +                               /* increment done so we skip the error
>>> item */
>>> +                               done++;
>>> +                               last_err = err_ptr[index] = err;
>>> +                       }
>>> +                       batch_left -= done;
>>> +                       index += done;
>>> +               } while (batch_left);
>>>
>>>                  nr -= batch;
>>>                  addr += range;
>>> +               if (err_ptr)
>>> +                       err_ptr += batch;
>>>          }
>>>
>>> -       err = 0;
>>> +       err = last_err;
>>>   out:
>>>
>>>          xen_flush_tlb_all();
>>>
>>>          return err;
>>>   }
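[The per-batch error handling above — keep issuing the hypercall, record the failing page, skip it, and continue — can be modeled without Xen. A hypothetical sketch: `fake_mmu_update` and `map_batch` are illustrative names, and the stand-in simplifies the real hypercall to "negative request marks a bad entry":]

```c
#include <assert.h>

/* Stand-in for HYPERVISOR_mmu_update: a negative "request" marks a bad
 * entry; *done reports how many entries succeeded before the failure. */
int fake_mmu_update(const int *req, int count, int *done)
{
    for (int i = 0; i < count; i++) {
        if (req[i] < 0) {
            *done = i;
            return -1;
        }
    }
    *done = count;
    return 0;
}

/* The inner loop of do_remap_mfn: record each per-page failure in
 * err_ptr, step over the bad entry, and keep going until the batch
 * is fully consumed. Returns the last error seen (0 if none). */
int map_batch(const int *req, int batch, int *err_ptr)
{
    int index = 0, batch_left = batch, last_err = 0;

    do {
        int done = 0;
        int err = fake_mmu_update(&req[index], batch_left, &done);

        if (err < 0) {
            last_err = err_ptr[index + done] = err; /* failing entry */
            done++;                                 /* skip past it */
        }
        batch_left -= done;
        index += done;
    } while (batch_left);

    return last_err;
}
```

[One failed page no longer aborts the whole batch, which is what lets the array variant report per-page -ENOENT for retry.]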
>>> +
>>> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
>>> + * @vma:   the vma for the pages to be mapped into
>>> + * @addr:  the address at which to map the pages
>>> + * @mfn:   the first MFN to map
>>> + * @nr:    the number of consecutive mfns to map
>>> + * @prot:  page protection mask
>>> + * @domid: id of the domain that we are mapping from
>>> + *
>>> + * This function takes an mfn and maps nr pages starting from it into this
>>> kernel's
>>> + * memory. The owner of the pages is defined by domid. Where the pages
>>> are
>>> + * mapped is determined by addr, and vma is used for "accounting" of the
>>> + * pages.
>>> + *
>>> + * Return value is zero for success, negative ERRNO value for failure.
>>> + */
>>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>> +                              unsigned long addr,
>>> +                              unsigned long mfn, int nr,
>>> +                              pgprot_t prot, unsigned domid)
>>> +{
>>> +       return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
>>> +}
>>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>>> +
>>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>>> + * @vma:     the vma for the pages to be mapped into
>>> + * @addr:    the address at which to map the pages
>>> + * @mfn:     pointer to array of MFNs to map
>>> + * @nr:      the number of entries in the MFN array
>>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>>> + *           value for each page. The err_ptr must not be NULL.
>>> + * @prot:    page protection mask
>>> + * @domid:   id of the domain that we are mapping from
>>> + *
>>> + * This function takes an array of mfns and maps nr pages from that into
>>> this
>>> + * kernel's memory. The owner of the pages is defined by domid. Where
>>> the pages
>>> + * are mapped is determined by addr, and vma is used for "accounting" of
>>> the
>>> + * pages. The err_ptr array is filled in on any page that is not
>>> successfully
>>> + * mapped in.
>>> + *
>>> + * Return value is zero for success, negative ERRNO value for failure.
>>> + * Note that the error value -ENOENT is considered a "retry", so when
>>> this
>>> + * error code is seen, another call should be made with the list of
>>> pages that
>>> + * are marked as -ENOENT in the err_ptr array.
>>> + */
>>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>>> +                              unsigned long addr,
>>> +                              unsigned long *mfn, int nr,
>>> +                              int *err_ptr, pgprot_t prot,
>>> +                              unsigned domid)
>>> +{
>>> +       /* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
>>> +        * and without it the later failure would be very hard to trace
>>> +        * back to the actual cause: "wrong memory was mapped in".
>>> +        */
>>> +       BUG_ON(err_ptr == NULL);
>>> +       return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
>>> +}
>>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index 71f5c45..75f6e86 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t
>>> size,
>>>          return ret;
>>>   }
>>>
>>> +/*
>>> + * Similar to traverse_pages, but use each page as a "block" of
>>> + * data to be processed as one unit.
>>> + */
>>> +static int traverse_pages_block(unsigned nelem, size_t size,
>>> +                               struct list_head *pos,
>>> +                               int (*fn)(void *data, int nr, void
>>> *state),
>>> +                               void *state)
>>> +{
>>> +       void *pagedata;
>>> +       unsigned pageidx;
>>> +       int ret = 0;
>>> +
>>> +       BUG_ON(size > PAGE_SIZE);
>>> +
>>> +       pageidx = PAGE_SIZE;
>>> +
>>> +       while (nelem) {
>>> +               int nr = (PAGE_SIZE/size);
>>> +               struct page *page;
>>> +               if (nr > nelem)
>>> +                       nr = nelem;
>>> +               pos = pos->next;
>>> +               page = list_entry(pos, struct page, lru);
>>> +               pagedata = page_address(page);
>>> +               ret = (*fn)(pagedata, nr, state);
>>> +               if (ret)
>>> +                       break;
>>> +               nelem -= nr;
>>> +       }
>>> +
>>> +       return ret;
>>> +}
>>> +
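[The block-wise traversal above boils down to chunking nelem fixed-size records into page-sized groups. A stand-alone sketch of that arithmetic; `traverse_blocks`, `count_calls`, and `EX_PAGE_SIZE` are illustrative names, with the page list replaced by a plain loop:]

```c
#include <assert.h>
#include <stddef.h>

#define EX_PAGE_SIZE 4096 /* illustrative page size */

/* Walk nelem records of 'size' bytes, handing the callback up to one
 * page's worth of records at a time, like traverse_pages_block. */
int traverse_blocks(unsigned nelem, size_t size,
                    int (*fn)(int nr, void *state), void *state)
{
    while (nelem) {
        int nr = (int)(EX_PAGE_SIZE / size);

        if ((unsigned)nr > nelem)
            nr = (int)nelem;
        int ret = fn(nr, state);
        if (ret)
            return ret;
        nelem -= (unsigned)nr;
    }
    return 0;
}

/* Example callback: count invocations. */
int count_calls(int nr, void *state)
{
    (void)nr;
    ++*(int *)state;
    return 0;
}
```

[With 8-byte xen_pfn_t entries, each callback handles up to 512 MFNs, so mmap_batch_fn issues one xen_remap_domain_mfn_array call per 512 pages instead of one hypercall setup per page.]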
>>>   struct mmap_mfn_state {
>>>          unsigned long va;
>>>          struct vm_area_struct *vma;
>>> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>>>           *      0 for no errors
>>>           *      1 if at least one error has happened (and no
>>>           *          -ENOENT errors have happened)
>>> -        *      -ENOENT if at least 1 -ENOENT has happened.
>>> +        *      -ENOENT if at least one -ENOENT has happened.
>>>           */
>>>          int global_error;
>>>          /* An array for individual errors */
>>> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>>>          xen_pfn_t __user *user_mfn;
>>>   };
>>>
>>> -static int mmap_batch_fn(void *data, void *state)
>>> +static int mmap_batch_fn(void *data, int nr, void *state)
>>>   {
>>>          xen_pfn_t *mfnp = data;
>>>          struct mmap_batch_state *st = state;
>>>          int ret;
>>>
>>> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK,
>>> *mfnp, 1,
>>> -                                        st->vma->vm_page_prot,
>>> st->domain);
>>> +       BUG_ON(nr < 0);
>>>
>>> -       /* Store error code for second pass. */
>>> -       *(st->err++) = ret;
>>> +       ret = xen_remap_domain_mfn_array(st->vma,
>>> +                                        st->va & PAGE_MASK,
>>> +                                        mfnp, nr,
>>> +                                        st->err,
>>> +                                        st->vma->vm_page_prot,
>>> +                                        st->domain);
>>>
>>>          /* And see if it affects the global_error. */
>>>          if (ret < 0) {
>>> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>>>                                  st->global_error = 1;
>>>                  }
>>>          }
>>> -       st->va += PAGE_SIZE;
>>> +       st->va += PAGE_SIZE * nr;
>>>
>>>          return 0;
>>>   }
>>> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user
>>> *udata, int version)
>>>          state.err           = err_array;
>>>
>>>          /* mmap_batch_fn guarantees ret == 0 */
>>> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>>> -                            &pagelist, mmap_batch_fn, &state));
>>> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>>> +                                   &pagelist, mmap_batch_fn, &state));
>>>
>>>          up_write(&mm->mmap_sem);
>>>
>>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>>> index 6a198e4..22cad75 100644
>>> --- a/include/xen/xen-ops.h
>>> +++ b/include/xen/xen-ops.h
>>> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct
>>> *vma,
>>>                                 unsigned long mfn, int nr,
>>>                                 pgprot_t prot, unsigned domid);
>>>
>>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>>> +                              unsigned long addr,
>>> +                              unsigned long *mfn, int nr,
>>> +                              int *err_ptr, pgprot_t prot,
>>> +                              unsigned domid);
>>>   #endif /* INCLUDE_XEN_OPS_H */
>>> --
>>> 1.7.9.5
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>
>>
>>
>> --
>> Vasiliy Tolstov,
>> Clodo.ru
>> e-mail: v.tolstov@selfip.ru
>> jabber: vase@selfip.ru
>
>



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:03:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8Rv-0001RR-8b; Thu, 13 Dec 2012 13:03:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj8Rt-0001RM-E1
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 13:03:37 +0000
Received: from [85.158.139.211:62770] by server-16.bemta-5.messagelabs.com id
	65/49-09208-822D9C05; Thu, 13 Dec 2012 13:03:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355403815!18881769!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9367 invoked from network); 13 Dec 2012 13:03:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 13:03:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="116298"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 13:02:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 13:02:36 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tj8Qu-0005zl-Dk;
	Thu, 13 Dec 2012 13:02:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tj8Qu-0007Us-8S;
	Thu, 13 Dec 2012 13:02:36 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14679-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 13:02:36 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14679: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14679 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14679/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14675
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 14675

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  255a0b6a8104
baseline version:
 xen                  a866cc5b8235

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=255a0b6a8104
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 255a0b6a8104
+ branch=xen-4.1-testing
+ revision=255a0b6a8104
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 255a0b6a8104 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  255a0b6a8104
baseline version:
 xen                  a866cc5b8235

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=255a0b6a8104
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 255a0b6a8104
+ branch=xen-4.1-testing
+ revision=255a0b6a8104
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 255a0b6a8104 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:09:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8Xj-0001fC-1z; Thu, 13 Dec 2012 13:09:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj8Xg-0001f6-UL
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:09:37 +0000
Received: from [85.158.143.35:39617] by server-3.bemta-4.messagelabs.com id
	0B/69-18211-093D9C05; Thu, 13 Dec 2012 13:09:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355404164!16179867!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17107 invoked from network); 13 Dec 2012 13:09:24 -0000
Received: from unknown (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 13:09:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="116486"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 13:07:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 13:07:25 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj8VZ-00061G-Re; Thu, 13 Dec 2012 13:07:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj8VZ-0007z7-K4;
	Thu, 13 Dec 2012 13:07:25 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.54029.391833.313611@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 13:07:25 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C8C665.2030202@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
	<50C8C665.2030202@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> Ian Jackson wrote:
> > Well your patch is only correct when used with the new libxl, with my
> > patches.
> 
> Hmm, it is not clear to me how to make the libxl driver work correctly
> with libxl pre and post your patches :-/.

This should be straightforward, I think, except for the deregistration
race bug which is unavoidable with the old libxl.  Simply correctly
implementing both the deregistration callback and the modification
callback ought to do it.

> >> @@ -184,6 +184,8 @@ static void libxlTimerCallback(int timer
> >> ATTRIBUTE_UNUSED, void *timer_v)
> >>  {
> >>      struct libxlOSEventHookTimerInfo *timer_info = timer_v;
> >>  
> >> +    virEventRemoveTimeout(timer_info->id);
> >> +    timer_info->id = -1;
> >>     
> >
> > I don't understand why you need this.  Doesn't libvirt remove timers
> > when they fire ?  If it doesn't, do they otherwise not keep recurring ?
> 
> No, timers are not removed when they fire.  And they are continuous, so
> will keep firing at each timeout.  They can be disabled by setting the
> timeout to -1, and can be set to fire on each iteration of the event
> loop by setting the timeout to 0.  But they must be explicitly removed
> with virEventRemoveTimeout when no longer needed.

Right.  Well then yes you need to call virEventRemoveTimeout here.
But that applies to both old and new libxl.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:12:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8aZ-0001nu-Np; Thu, 13 Dec 2012 13:12:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj8aY-0001nn-R0
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:12:35 +0000
Received: from [85.158.143.99:54659] by server-2.bemta-4.messagelabs.com id
	C4/79-30861-244D9C05; Thu, 13 Dec 2012 13:12:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1355404348!24133155!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27917 invoked from network); 13 Dec 2012 13:12:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 13:12:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="116645"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 13:12:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 13:12:28 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj8aS-00062b-53; Thu, 13 Dec 2012 13:12:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj8aS-0007zf-1O;
	Thu, 13 Dec 2012 13:12:28 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.54331.814590.759348@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 13:12:27 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355394576.10554.62.camel@zakaz.uk.xensource.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
	<50C8C665.2030202@suse.com>
	<1355394576.10554.62.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jim Fehlig <jfehlig@suse.com>, Bamvor Jian Zhang <bjzhang@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> On Wed, 2012-12-12 at 18:01 +0000, Jim Fehlig wrote:
> > Hmm, it is not clear to me how to make the libxl driver work correctly
> > with libxl pre and post your patches :-/.
> 
> Ideally we will find a way to make this work without changes on the
> application side.

Yes, but if the application has bugs which are exposed by the new
approach, I think it is probably simplest to fix those bugs.

The way I'm proposing at the moment means that there are two sets of
relevant changes to libxl and libvirt:

 - libxl always use timeout_modify and never _deregister.  Officially
   speaking this is a backwards-compatible change: libxl now promises
   to use only a strict subset of the documented interface provided by
   the application.  Any correct application will still work.

 - libvirt implement both _modify and _deregister correctly.  With old
   libxl this leaves the timeout deregistration race.  With new libxl
   there is no problem at all.

> One option is to add new hooks which libxl can call to take/release the
> application's event loop lock + a LIBXL_HAVE_EVENT_LOOP_LOCK define so
> the application can conditionally provide them. The upside is that I
> would expect this to result in much simpler code in both libxl and
> libvirt. The downside is that doing this kind of sucks from an API
> stability point of view, but if the application has to change anyway
> then we might as well do it cleanly instead of bending over backwards to
> keep the API the same.

I don't know what other users there might be who would be hurt by an
API change.

But I think from a deployment/testing/etc. point of view two separate
patches to libxl and libvirt which each make matters strictly better,
and which can be applied independently, does seem like the best
approach.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:12:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8aZ-0001nu-Np; Thu, 13 Dec 2012 13:12:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj8aY-0001nn-R0
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:12:35 +0000
Received: from [85.158.143.99:54659] by server-2.bemta-4.messagelabs.com id
	C4/79-30861-244D9C05; Thu, 13 Dec 2012 13:12:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1355404348!24133155!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27917 invoked from network); 13 Dec 2012 13:12:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 13:12:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="116645"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 13:12:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 13:12:28 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj8aS-00062b-53; Thu, 13 Dec 2012 13:12:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj8aS-0007zf-1O;
	Thu, 13 Dec 2012 13:12:28 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.54331.814590.759348@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 13:12:27 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355394576.10554.62.camel@zakaz.uk.xensource.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
	<50C8C665.2030202@suse.com>
	<1355394576.10554.62.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jim Fehlig <jfehlig@suse.com>, Bamvor Jian Zhang <bjzhang@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> On Wed, 2012-12-12 at 18:01 +0000, Jim Fehlig wrote:
> > Hmm, it is not clear to me how to make the libxl driver work correctly
> > with libxl pre and post your patches :-/.
> 
> Ideally we will find a way to make this work without changes on the
> application side.

Yes, but if the application has bugs which are exposed by the new
approach I think it is probably simplest to fix those bugs.

The way I'm proposing at the moment means that there are two sets of
relevant changes to libxl and libvirt:

 - libxl always uses timeout_modify and never _deregister.  Officially
   speaking this is a backwards-compatible change: libxl now promises
   to use only a strict subset of the documented interface provided by
   the application.  Any correct application will still work.

 - libvirt implements both _modify and _deregister correctly.  With old
   libxl this leaves the timeout deregistration race.  With new libxl
   there is no problem at all.

> One option is to add new hooks which libxl can call to take/release the
> application's event loop lock + a LIBXL_HAVE_EVENT_LOOP_LOCK define so
> the application can conditionally provide them. The upside is that I
> would expect this to result in much simpler code in both libxl and
> libvirt. The downside is that doing this kind of sucks from an API
> stability point of view, but if the application has to change anyway
> then we might as well do it cleanly instead of bending over backwards to
> keep the API the same.

I don't know what other users there might be who would be hurt by an
API change.

But I think, from a deployment/testing/etc. point of view, two separate
patches to libxl and libvirt, each of which makes matters strictly
better and can be applied independently, do seem like the best
approach.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:22:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8k6-0002cl-0q; Thu, 13 Dec 2012 13:22:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tj8k4-0002cZ-9f
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:22:24 +0000
Received: from [85.158.138.51:45992] by server-14.bemta-3.messagelabs.com id
	C3/D5-27443-F86D9C05; Thu, 13 Dec 2012 13:22:23 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355404942!28447420!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27794 invoked from network); 13 Dec 2012 13:22:22 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-2.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 13:22:22 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tj8k1-000KBL-Pl; Thu, 13 Dec 2012 13:22:21 +0000
Date: Thu, 13 Dec 2012 13:22:21 +0000
From: Tim Deegan <tim@xen.org>
To: Razvan Cojocaru <rzvncj@gmail.com>
Message-ID: <20121213132221.GF75286@ocelot.phlegethon.org>
References: <50C891CF.805@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C891CF.805@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Handling a MEM_EVENT_REASON_CR3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:16 +0200 on 12 Dec (1355329007), Razvan Cojocaru wrote:
> Hello,
> 
> A few questions about receiving a MEM_EVENT_REASON_CR3 event in dom0 
> userspace:
> 
> 1. If I call
> 
> xc_set_hvm_param(xci, domain_id,
>                  HVM_PARAM_MEMORY_EVENT_CR3,
>                  HVMPME_onchangeonly);
> 
> that only triggers a CR3 event if the new value that the guest writes to 
> CR3 is different from the existing value, is that assumption correct?

Yes, I think so.  That's what HVMPME_onchangeonly ought to do.
Did you try it?

> 2. mem_event.h says that if "CR3 was hit: gfn is CR3 value". I'm 
> assuming that gfn is the _new_ value, and that the old value is 
> unavailable, is that also correct?

Yes.

> 3. Is it possible to, upon intercepting the CR3 write, write a 
> _different_ value to CR3, instead of gfn?

Yes.  You can use either xc_vcpu_getcontext()/xc_vcpu_setcontext()
or xc_domain_hvm_getcontext_partial()/xc_domain_hvm_setcontext() to read
and write the vCPU registers.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:34:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8vi-0003Lw-AB; Thu, 13 Dec 2012 13:34:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tj8vg-0003Lm-W8
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:34:25 +0000
Received: from [85.158.139.211:16826] by server-7.bemta-5.messagelabs.com id
	4E/B4-08009-069D9C05; Thu, 13 Dec 2012 13:34:24 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355405663!19492030!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15153 invoked from network); 13 Dec 2012 13:34:23 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-3.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Dec 2012 13:34:23 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:47980
	helo=localhost.localdomain) by smtp.eikelenboom.it with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>)
	id 1Tj8zg-0003uR-VP; Thu, 13 Dec 2012 14:38:33 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
To: qemu-devel@nongnu.org
Date: Thu, 13 Dec 2012 14:34:06 +0100
Message-Id: <1355405646-3226-2-git-send-email-linux@eikelenboom.it>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <alpine.DEB.2.02.1212131245280.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212131245280.17523@kaball.uk.xensource.com>
Cc: anthony.perard@citrix.com, Sander Eikelenboom <linux@eikelenboom.it>,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] Fix compile errors when enabling Xen debug
	logging.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

v2:
- Wrap lines that exceed 80 characters
- Use %"HWADDR_PRIx" format for hwaddr

Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>
---
 hw/xen_pt.c |    5 +++--
 xen-all.c   |    7 ++++---
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/hw/xen_pt.c b/hw/xen_pt.c
index 7a3846e..7aae826 100644
--- a/hw/xen_pt.c
+++ b/hw/xen_pt.c
@@ -671,7 +671,8 @@ static int xen_pt_initfn(PCIDevice *d)
     s->is_virtfn = s->real_device.is_virtfn;
     if (s->is_virtfn) {
         XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
-                   s->real_device.domain, bus, slot, func);
+                   s->real_device.domain, s->real_device.bus,
+                   s->real_device.dev, s->real_device.func);
     }
 
     /* Initialize virtualized PCI configuration (Extended 256 Bytes) */
@@ -752,7 +753,7 @@ out:
     memory_listener_register(&s->memory_listener, &address_space_memory);
     memory_listener_register(&s->io_listener, &address_space_io);
     XEN_PT_LOG(d, "Real physical device %02x:%02x.%d registered successfuly!\n",
-               bus, slot, func);
+               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function);
 
     return 0;
 }
diff --git a/xen-all.c b/xen-all.c
index 046cc2a..d0142bd 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -292,7 +292,8 @@ static int xen_add_to_physmap(XenIOState *state,
     return -1;
 
 go_physmap:
-    DPRINTF("mapping vram to %llx - %llx\n", start_addr, start_addr + size);
+    DPRINTF("mapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
+            start_addr, start_addr + size);
 
     pfn = phys_offset >> TARGET_PAGE_BITS;
     start_gpfn = start_addr >> TARGET_PAGE_BITS;
@@ -365,8 +366,8 @@ static int xen_remove_from_physmap(XenIOState *state,
     phys_offset = physmap->phys_offset;
     size = physmap->size;
 
-    DPRINTF("unmapping vram to %llx - %llx, from %llx\n",
-            phys_offset, phys_offset + size, start_addr);
+    DPRINTF("unmapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx", from "
+            "%"HWADDR_PRIx"\n", phys_offset, phys_offset + size, start_addr);
 
     size >>= TARGET_PAGE_BITS;
     start_addr >>= TARGET_PAGE_BITS;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:34:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj8vj-0003M5-MB; Thu, 13 Dec 2012 13:34:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tj8vh-0003Ln-DH
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:34:25 +0000
Received: from [85.158.143.99:45392] by server-3.bemta-4.messagelabs.com id
	66/CD-18211-069D9C05; Thu, 13 Dec 2012 13:34:24 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355405663!19572587!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21438 invoked from network); 13 Dec 2012 13:34:24 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Dec 2012 13:34:24 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:47980
	helo=localhost.localdomain) by smtp.eikelenboom.it with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>)
	id 1Tj8ze-0003uR-KG; Thu, 13 Dec 2012 14:38:30 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
To: qemu-devel@nongnu.org
Date: Thu, 13 Dec 2012 14:34:05 +0100
Message-Id: <1355405646-3226-1-git-send-email-linux@eikelenboom.it>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <alpine.DEB.2.02.1212131245280.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212131245280.17523@kaball.uk.xensource.com>
Cc: anthony.perard@citrix.com, Sander Eikelenboom <linux@eikelenboom.it>,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] Fix compile errors when enabling Xen debug
	logging.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fix compile errors when enabling Xen debug logging.

v2:
- Wrap lines that exceed 80 characters
- Use %"HWADDR_PRIx" format for hwaddr

Sander Eikelenboom (1):
  Fix compile errors when enabling Xen debug logging.

 hw/xen_pt.c |    5 +++--
 xen-all.c   |    7 ++++---
 2 files changed, 7 insertions(+), 5 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 13:39:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 13:39:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj90N-0003lE-IZ; Thu, 13 Dec 2012 13:39:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tj90L-0003kj-Tx
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 13:39:14 +0000
Received: from [85.158.137.99:58949] by server-14.bemta-3.messagelabs.com id
	F5/0B-27443-18AD9C05; Thu, 13 Dec 2012 13:39:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355405952!16072645!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1479 invoked from network); 13 Dec 2012 13:39:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 13:39:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 13:39:11 +0000
Message-Id: <50C9E88F02000078000B02A1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 13:39:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Thu, 13 Dec 2012, Jan Beulich wrote:
>> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>> > On Wed, 12 Dec 2012 17:15:23 -0800
>> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>> > 
>> >> On Tue, 11 Dec 2012 12:10:19 +0000
>> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> >> 
>> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
>> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
>> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> > 
>> >> > That's strange because AFAIK Linux is never editing the MSI-X
>> >> > entries directly: give a look at
>> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
>> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
>> >> > touch the real MSI-X table.
>> >> 
>> >> So, this is what's happening. The side effect of :
>> >> 
>> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
>> >>                                 dev->msix_table.last) )
>> >>             WARN();
>> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
>> >>                                 dev->msix_pba.last) )
>> >>             WARN();
>> >> 
>> >> in msix_capability_init() in xen is that the dom0 EPT entries that
>> >> I've mapped are going from RW to read only. Then when dom0 accesses
>> >> it, I get EPT violation. In case of pure PV, the PTE entry to access
>> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
>> >> don't understand why? Looking into that now...
>> 
>> As far as I was able to tell back at the time when I implemented
>> this, existing code shouldn't have mappings for these tables in
>> place at the time these ranges get added here. But I noted in
>> the patch description that this is a potential issue (and may need
>> fixing if deemed severe enough - back then, apparently nobody
>> really cared, perhaps largely because passthrough to PV guests
>> isn't considered fully secure anyway).
>> 
>> Now - did that change? I.e. can you describe where the mappings
>> come from that cause the problem here?
> 
> The generic Linux MSI-X handling code does that, before calling the
> arch specific msi setup function, for which we have a xen version
> (xen_initdom_setup_msi_irqs):
> 
> pci_enable_msix -> msix_capability_init -> msix_map_region

Ah, okay, (of course?) I had looked only at the forward-ported
version of this. Is all that fiddling with the mask bits really
being suppressed properly when running under Xen? Otherwise
pv-ops is quite broken in this regard at present... And if it is,
I don't see what the respective ioremap() is good for here in
the Xen case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Thu, 13 Dec 2012, Jan Beulich wrote:
>> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>> > On Wed, 12 Dec 2012 17:15:23 -0800
>> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>> > 
>> >> On Tue, 11 Dec 2012 12:10:19 +0000
>> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> >> 
>> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
>> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
>> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> > 
>> >> > That's strange because AFAIK Linux is never editing the MSI-X
>> >> > entries directly: give a look at
>> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
>> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
>> >> > touch the real MSI-X table.
>> >> 
>> >> So, this is what's happening. The side effect of :
>> >> 
>> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
>> >>                                 dev->msix_table.last) )
>> >>             WARN();
>> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
>> >>                                 dev->msix_pba.last) )
>> >>             WARN();
>> >> 
>> >> in msix_capability_init() in xen is that the dom0 EPT entries that
>> >> I've mapped are going from RW to read only. Then when dom0 accesses
>> >> it, I get EPT violation. In case of pure PV, the PTE entry to access
>> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
>> >> don't understand why? Looking into that now...
>> 
>> As far as I was able to tell back at the time when I implemented
>> this, existing code shouldn't have mappings for these tables in
>> place at the time these ranges get added here. But I noted in
>> the patch description that this is a potential issue (and may need
>> fixing if deemed severe enough - back then, apparently nobody
>> really cared, perhaps largely because passthrough to PV guests
>> isn't considered fully secure anyway).
>> 
>> Now - did that change? I.e. can you describe where the mappings
>> come from that cause the problem here?
> 
> The generic Linux MSI-X handling code does that, before calling the
> arch specific msi setup function, for which we have a xen version
> (xen_initdom_setup_msi_irqs):
> 
> pci_enable_msix -> msix_capability_init -> msix_map_region

Ah, okay, (of course?) I had looked only at the forward ported
version of this. Is all that fiddling with the mask bits really
being suppressed properly when running under Xen? Otherwise
pv-ops is quite broken in this regard at present... And if it is,
I don't see what the respective ioremap() is good for here in
the Xen case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:09:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9Tg-0004br-7E; Thu, 13 Dec 2012 14:09:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tj9Te-0004bj-Bc
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:09:30 +0000
Received: from [193.109.254.147:54358] by server-13.bemta-14.messagelabs.com
	id 9C/4B-01725-991E9C05; Thu, 13 Dec 2012 14:09:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355407740!8605436!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12986 invoked from network); 13 Dec 2012 14:09:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:09:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="545519"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 14:08:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 09:08:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tj9T9-0000pe-6X;
	Thu, 13 Dec 2012 14:08:59 +0000
Date: Thu, 13 Dec 2012 14:08:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1355405646-3226-2-git-send-email-linux@eikelenboom.it>
Message-ID: <alpine.DEB.2.02.1212131404260.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212131245280.17523@kaball.uk.xensource.com>
	<1355405646-3226-2-git-send-email-linux@eikelenboom.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] Fix compile errors when enabling Xen
	debug logging.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012, Sander Eikelenboom wrote:
> v2:
> - Wrap around > 80 characters
> - Use %"HWADDR_PRIx" format for hwaddr
> 
> Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  hw/xen_pt.c |    5 +++--
>  xen-all.c   |    7 ++++---
>  2 files changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/xen_pt.c b/hw/xen_pt.c
> index 7a3846e..7aae826 100644
> --- a/hw/xen_pt.c
> +++ b/hw/xen_pt.c
> @@ -671,7 +671,8 @@ static int xen_pt_initfn(PCIDevice *d)
>      s->is_virtfn = s->real_device.is_virtfn;
>      if (s->is_virtfn) {
>          XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
> -                   s->real_device.domain, bus, slot, func);
> +                   s->real_device.domain, s->real_device.bus,
> +                   s->real_device.dev, s->real_device.func);
>      }
>  
>      /* Initialize virtualized PCI configuration (Extended 256 Bytes) */
> @@ -752,7 +753,7 @@ out:
>      memory_listener_register(&s->memory_listener, &address_space_memory);
>      memory_listener_register(&s->io_listener, &address_space_io);
>      XEN_PT_LOG(d, "Real physical device %02x:%02x.%d registered successfuly!\n",
> -               bus, slot, func);
> +               s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function);
>  
>      return 0;
>  }
> diff --git a/xen-all.c b/xen-all.c
> index 046cc2a..d0142bd 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -292,7 +292,8 @@ static int xen_add_to_physmap(XenIOState *state,
>      return -1;
>  
>  go_physmap:
> -    DPRINTF("mapping vram to %llx - %llx\n", start_addr, start_addr + size);
> +    DPRINTF("mapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
> +            start_addr, start_addr + size);
>  
>      pfn = phys_offset >> TARGET_PAGE_BITS;
>      start_gpfn = start_addr >> TARGET_PAGE_BITS;
> @@ -365,8 +366,8 @@ static int xen_remove_from_physmap(XenIOState *state,
>      phys_offset = physmap->phys_offset;
>      size = physmap->size;
>  
> -    DPRINTF("unmapping vram to %llx - %llx, from %llx\n",
> -            phys_offset, phys_offset + size, start_addr);
> +    DPRINTF("unmapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx", from "
> +            "%"HWADDR_PRIx"\n", phys_offset, phys_offset + size, start_addr);
>  
>      size >>= TARGET_PAGE_BITS;
>      start_addr >>= TARGET_PAGE_BITS;
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:16:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9Zp-0004sW-1c; Thu, 13 Dec 2012 14:15:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj9Zo-0004sR-8s
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:15:52 +0000
Received: from [85.158.139.83:64091] by server-16.bemta-5.messagelabs.com id
	6B/18-09208-713E9C05; Thu, 13 Dec 2012 14:15:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355408134!26998034!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28862 invoked from network); 13 Dec 2012 14:15:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:15:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="118931"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 14:15:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 14:15:25 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj9ZN-0006RQ-Qd; Thu, 13 Dec 2012 14:15:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj9ZN-00088B-Mp;
	Thu, 13 Dec 2012 14:15:25 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.58109.606386.754980@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 14:15:25 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355400151.10554.103.camel@zakaz.uk.xensource.com>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
	<1355400151.10554.103.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [VTPM v7 6/8] Add autoconf to stubdom"):
> It also occurs to me that this series introduces a little bisection blip
> where cmake will be required (i.e. from patch 3 until here). The right
> way to do this would have been to put the patch introducing autoconf at
> the start. I'm inclined to just gloss over that for now, but if you feel
> so inclined you could reorder things.

I would recommend reordering things, yes.

If the bisection blip bites us, what will happen is that the automatic
bisector will finger the cmake-requiring changeset.  Normally if that
happens it would be fair to ask the person who introduces the
bisection breakage to debug whatever the other problem is that the
bisector is hunting for.  I wouldn't recommend knowingly putting
oneself in that position :-).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:25:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9io-0005Qx-Ck; Thu, 13 Dec 2012 14:25:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj9im-0005QL-QJ
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:25:09 +0000
Received: from [85.158.139.83:31820] by server-9.bemta-5.messagelabs.com id
	DF/E9-10690-445E9C05; Thu, 13 Dec 2012 14:25:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355408707!27979589!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5566 invoked from network); 13 Dec 2012 14:25:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:25:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="119338"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 14:25:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 14:25:06 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj9ik-0006UV-Ll; Thu, 13 Dec 2012 14:25:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj9ik-00089j-HC;
	Thu, 13 Dec 2012 14:25:06 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.58690.426312.833615@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 14:25:06 +0000
To: Christoph Egger <Christoph_Egger@gmx.de>
In-Reply-To: <50C9E49C.8050809@gmx.de>
References: <508916B3.2030403@amd.com>
	<20617.13045.486553.172990@mariner.uk.xensource.com>
	<1355395238.10554.71.camel@zakaz.uk.xensource.com>
	<50C9E49C.8050809@gmx.de>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream
 qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoph Egger writes ("Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
> On 13.12.12 11:40, Ian Campbell wrote:
> > Adding Christoph's new address, I guess this is a thing exposed on
> > NetBSD?
> 
> This is not specific to NetBSD. It is exposed everywhere where you
> install Xen into a non-default directory by specifying the prefix
> to configure.

Indeed so.  I think a better way of putting it is that (IIRC) this bug
in our build system was exposed routinely on NetBSD because the NetBSD
ports collection always passes --prefix.  Is that right ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoph Egger writes ("Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
> On 13.12.12 11:40, Ian Campbell wrote:
> > Adding Christoph's new address, I guess this is a thing exposed on
> > NetBSD?
> 
> This is not specific to NetBSD. It is exposed everywhere where you
> install Xen into a non-default directory by specifying the prefix
> to configure.

Indeed so.  I think a better way of putting it is that (IIRC) this bug
in our build system was exposed routinely on NetBSD because the NetBSD
ports collection always passes --prefix.  Is that right ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:26:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9k0-0005XQ-Ru; Thu, 13 Dec 2012 14:26:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tj9jz-0005XH-Lz
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:26:23 +0000
Received: from [85.158.143.99:21036] by server-2.bemta-4.messagelabs.com id
	EE/D3-30861-F85E9C05; Thu, 13 Dec 2012 14:26:23 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355408780!24029016!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25757 invoked from network); 13 Dec 2012 14:26:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:26:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="585844"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 14:25:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 09:25:21 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tj9iy-00014y-Qe;
	Thu, 13 Dec 2012 14:25:20 +0000
Date: Thu, 13 Dec 2012 14:25:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50C9E88F02000078000B02A1@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012, Jan Beulich wrote:
> >>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> >> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> >> > On Wed, 12 Dec 2012 17:15:23 -0800
> >> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> >> > 
> >> >> On Tue, 11 Dec 2012 12:10:19 +0000
> >> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >> >> 
> >> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> >> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> >> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> >> >> > 
> >> >> > That's strange because AFAIK Linux is never editing the MSI-X
> >> >> > entries directly: give a look at
> >> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
> >> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> >> >> > touch the real MSI-X table.
> >> >> 
> >> >> So, this is what's happening. The side effect of :
> >> >> 
> >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> >> >>                                 dev->msix_table.last) )
> >> >>             WARN();
> >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> >> >>                                 dev->msix_pba.last) )
> >> >>             WARN();
> >> >> 
> >> >> in msix_capability_init() in xen is that the dom0 EPT entries that
> >> >> I've mapped are going from RW to read only. Then when dom0 accesses
> >> >> it, I get EPT violation. In case of pure PV, the PTE entry to access
> >> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
> >> >> don't understand why? Looking into that now...
> >> 
> >> As far as I was able to tell back at the time when I implemented
> >> this, existing code shouldn't have mappings for these tables in
> >> place at the time these ranges get added here. But I noted in
> >> the patch description that this is a potential issue (and may need
> >> fixing if deemed severe enough - back then, apparently nobody
> >> really cared, perhaps largely because passthrough to PV guests
> >> isn't considered fully secure anyway).
> >> 
> >> Now - did that change? I.e. can you describe where the mappings
> >> come from that cause the problem here?
> > 
> > The generic Linux MSI-X handling code does that, before calling the
> > arch specific msi setup function, for which we have a xen version
> > (xen_initdom_setup_msi_irqs):
> > 
> > pci_enable_msix -> msix_capability_init -> msix_map_region
> 
> Ah, okay, (of course?) I had looked only at the forward ported
> version of this. Is all that fiddling with the mask bits really
> being suppressed properly when running under Xen? Otherwise
> pv-ops is quite broken in this regard at present... And if it is,
> I don't see what the respective ioremap() is good for here in
> the Xen case.

Actually I think that you might be right: just looking at the code it
seems that the mask bits get written to the table once as part of the
initialization process:

pci_enable_msix -> msix_capability_init -> msix_program_entries

Unfortunately msix_program_entries is called a few lines after
arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
a pirq.
However, after that is done, all the masking/unmasking goes via irq_mask,
which we handle properly by masking/unmasking the corresponding event
channels.


Possible solutions off the top of my head:

- in msix_program_entries, instead of writing to the table directly
(__msix_mask_irq), call desc->irq_data.chip->irq_mask();

- replace msix_program_entries with arch_msix_program_entries, but that
would probably be unpopular.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:28:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:28:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9ls-0005fU-CA; Thu, 13 Dec 2012 14:28:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph_Egger@gmx.de>) id 1Tj9gB-0005EY-7P
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:22:27 +0000
Received: from [85.158.143.35:44289] by server-2.bemta-4.messagelabs.com id
	53/AE-30861-2A4E9C05; Thu, 13 Dec 2012 14:22:26 +0000
X-Env-Sender: Christoph_Egger@gmx.de
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355408545!14041310!1
X-Originating-IP: [213.165.64.23]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjE2NS42NC4yMyA9PiAyNTM1MTQ=\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9405 invoked from network); 13 Dec 2012 14:22:25 -0000
Received: from mailout-de.gmx.net (HELO mailout-de.gmx.net) (213.165.64.23)
	by server-3.tower-21.messagelabs.com with SMTP;
	13 Dec 2012 14:22:25 -0000
Received: (qmail invoked by alias); 13 Dec 2012 14:22:24 -0000
Received: from dslb-088-074-128-128.pools.arcor-ip.net (EHLO [10.1.0.3])
	[88.74.128.128]
	by mail.gmx.net (mp029) with SMTP; 13 Dec 2012 15:22:24 +0100
X-Authenticated: #6616588
X-Provags-ID: V01U2FsdGVkX1/Iz/1V/0zIqzF9o1wIeHTBdu2b0RikhY9X3Gvg/2
	h0PwK/JTgn6nfT
Message-ID: <50C9E49C.8050809@gmx.de>
Date: Thu, 13 Dec 2012 15:22:20 +0100
From: Christoph Egger <Christoph_Egger@gmx.de>
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.4; en-US;
	rv:1.9.2.28) Gecko/20120306 Lightning/1.0b2 Thunderbird/3.1.20
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <508916B3.2030403@amd.com>	
	<20617.13045.486553.172990@mariner.uk.xensource.com>
	<1355395238.10554.71.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355395238.10554.71.camel@zakaz.uk.xensource.com>
X-Enigmail-Version: 1.1.1
X-Y-GMX-Trusted: 0
X-Mailman-Approved-At: Thu, 13 Dec 2012 14:28:18 +0000
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream
 qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13.12.12 11:40, Ian Campbell wrote:

> Adding Christoph's new address, I guess this is a thing exposed on
> NetBSD?


This is not specific to NetBSD. It is exposed anywhere you
install Xen into a non-default directory by passing a prefix
to configure.

Christoph

>
> On Thu, 2012-10-25 at 13:39 +0100, Ian Jackson wrote:
>> Christoph Egger writes ("[Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
>>>
>>> use PREFIX when building upstream qemu.
>>>
>>> Signed-off-by: Christoph Egger<Christoph.Egger@amd.com>
>>
>> This looks reasonable but can you explain what goes wrong when,
>> without this ?  I'd like to be able to verify the bug and fix myself.
>
> AFAICT the default PREFIX for qemu-xen is /usr/local and we pass
> --bindir, --datadir (as Xen specific paths, like /usr/lib/xen/bin) but
> not --prefix. It looks like this covers most stuff but results in a
> smattering of stuff getting installed under /usr/local:
>
> $ find dist/install/usr/local/ | grep qemu
> dist/install/usr/local/libexec/qemu-bridge-helper
> dist/install/usr/local/share/man/man8/qemu-nbd.8
> dist/install/usr/local/share/man/man1/qemu.1
> dist/install/usr/local/share/man/man1/qemu-img.1
> dist/install/usr/local/share/doc/qemu
> dist/install/usr/local/share/doc/qemu/qemu-tech.html
> dist/install/usr/local/share/doc/qemu/qemu-doc.html
> dist/install/usr/local/share/doc/qemu/qmp-commands.txt
> dist/install/usr/local/etc/qemu
> dist/install/usr/local/etc/qemu/target-x86_64.conf
> (there is also some ocaml stuff under there it seems...)
>
> I'm not quite sure that installing those into our $PREFIX is correct
> either though -- there seems like the possibility of clashing with a
> non-Xen install of qemu, so we might be better off moving these to e.g.
> $PREFIX/doc/xen/qemu/ and adding "xen" in the man page path etc? (the
> binaries corresponding to those manpages are in /usr/lib/xen/bin/)
> Perhaps qemu.1xen ?
>
> I don't know what dist/install/usr/local/etc/qemu/target-x86_64.conf is
> but it is empty here. I suspect Xen does not use
> dist/install/usr/local/libexec/qemu-bridge-helper or it should be
> in /usr/lib/xen/bin.
>
> Ian.
>
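The path mix-up Ian describes comes from configure-style prefix defaulting:
any directory not overridden explicitly is derived from --prefix, which
defaults to /usr/local. A hypothetical sketch (this is not the actual qemu
configure script; qemu_paths and its handling of only --prefix/--bindir are
illustrative) shows why passing --bindir alone leaves man pages and docs
straying under /usr/local:

```shell
# Mimic configure's directory defaulting (simplified, illustrative only).
qemu_paths() {
    prefix=/usr/local        # configure's built-in default
    bindir=""
    for arg in "$@"; do
        case "$arg" in
            --prefix=*) prefix="${arg#--prefix=}" ;;
            --bindir=*) bindir="${arg#--bindir=}" ;;
        esac
    done
    : "${bindir:=$prefix/bin}"
    mandir="$prefix/share/man"   # never overridden by the Xen build
    echo "bindir=$bindir mandir=$mandir"
}

# What the Xen build does today: bindir is Xen-specific, man pages stray.
qemu_paths --bindir=/usr/lib/xen/bin
# -> bindir=/usr/lib/xen/bin mandir=/usr/local/share/man

# With --prefix forwarded, the remaining paths follow the chosen prefix.
qemu_paths --prefix=/usr/pkg --bindir=/usr/lib/xen/bin
# -> bindir=/usr/lib/xen/bin mandir=/usr/pkg/share/man
```

This also matches Christoph's point: the NetBSD ports collection always
passes --prefix, so the stray-/usr/local behaviour shows up routinely there.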



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:29:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:29:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9mf-0005lP-Qz; Thu, 13 Dec 2012 14:29:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tj9md-0005l2-UE
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:29:08 +0000
Received: from [85.158.137.99:36543] by server-6.bemta-3.messagelabs.com id
	50/CC-12154-E26E9C05; Thu, 13 Dec 2012 14:29:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355408941!16917687!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5521 invoked from network); 13 Dec 2012 14:29:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:29:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="119517"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 14:29:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 14:29:00 +0000
Message-ID: <1355408939.10554.135.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph_Egger@gmx.de>
Date: Thu, 13 Dec 2012 14:28:59 +0000
In-Reply-To: <50C9E49C.8050809@gmx.de>
References: <508916B3.2030403@amd.com>
	<20617.13045.486553.172990@mariner.uk.xensource.com>
	<1355395238.10554.71.camel@zakaz.uk.xensource.com>
	<50C9E49C.8050809@gmx.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream
 qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 14:22 +0000, Christoph Egger wrote:
> On 13.12.12 11:40, Ian Campbell wrote:
> 
> > Adding Christoph's new address, I guess this is a thing exposed on
> > NetBSD?
> 
> 
> This is not specific to NetBSD. It is exposed anywhere you
> install Xen into a non-default directory by specifying the prefix
> to configure.

Sorry, I wrote that bit before I'd fully grokked what was going on and
forgot to go back and change it.

> 
> Christoph
> 
> >
> > On Thu, 2012-10-25 at 13:39 +0100, Ian Jackson wrote:
> >> Christoph Egger writes ("[Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
> >>>
> >>> use PREFIX when building upstream qemu.
> >>>
> >>> Signed-off-by: Christoph Egger<Christoph.Egger@amd.com>
> >>
> >> This looks reasonable, but can you explain what goes wrong
> >> without this?  I'd like to be able to verify the bug and the fix myself.
> >
> > AFAICT the default PREFIX for qemu-xen is /usr/local and we pass
> > --bindir, --datadir (as Xen specific paths, like /usr/lib/xen/bin) but
> > not --prefix. It looks like this covers most stuff but results in a
> > smattering of stuff getting installed under /usr/local:
> >
> > $ find dist/install/usr/local/ | grep qemu
> > dist/install/usr/local/libexec/qemu-bridge-helper
> > dist/install/usr/local/share/man/man8/qemu-nbd.8
> > dist/install/usr/local/share/man/man1/qemu.1
> > dist/install/usr/local/share/man/man1/qemu-img.1
> > dist/install/usr/local/share/doc/qemu
> > dist/install/usr/local/share/doc/qemu/qemu-tech.html
> > dist/install/usr/local/share/doc/qemu/qemu-doc.html
> > dist/install/usr/local/share/doc/qemu/qmp-commands.txt
> > dist/install/usr/local/etc/qemu
> > dist/install/usr/local/etc/qemu/target-x86_64.conf
> > (there is also some ocaml stuff under there it seems...)
> >
> > I'm not quite sure that installing those into our $PREFIX is correct
> > either though -- there seems to be a possibility of clashing with a
> > non-Xen install of qemu, so we might be better off moving these to e.g.
> > $PREFIX/doc/xen/qemu/ and adding "xen" to the man page path etc.? (the
> > binaries corresponding to those manpages are in /usr/lib/xen/bin/)
> > Perhaps qemu.1xen?
> >
> > I don't know what dist/install/usr/local/etc/qemu/target-x86_64.conf is
> > but it is empty here. I suspect Xen does not use
> > dist/install/usr/local/libexec/qemu-bridge-helper or it should be
> > in /usr/lib/xen/bin.
> >
> > Ian.
> >
> 
> 
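The change under discussion can be sketched as a configure invocation: pass --prefix to qemu-upstream's configure alongside the Xen-specific --bindir/--datadir, so anything configure derives from the prefix no longer defaults to /usr/local. This is an illustrative sketch only; the PREFIX value and directory layout are assumptions, not the actual xen.git build rules:

```shell
# Hypothetical qemu-upstream configure invocation -- a sketch, assuming
# PREFIX is the prefix given to Xen's own ./configure.  Paths are
# illustrative, not the literal Makefile contents.
PREFIX=/usr
./configure \
    --prefix="$PREFIX" \
    --bindir="$PREFIX/lib/xen/bin" \
    --datadir="$PREFIX/share/qemu-xen"
```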



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:31:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9op-00064e-II; Thu, 13 Dec 2012 14:31:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj9on-000643-PK
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:31:22 +0000
Received: from [85.158.138.51:22578] by server-1.bemta-3.messagelabs.com id
	A0/82-08906-3B6E9C05; Thu, 13 Dec 2012 14:31:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355409074!20830647!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21652 invoked from network); 13 Dec 2012 14:31:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:31:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="119655"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 14:31:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 14:31:13 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj9of-0006WN-Me; Thu, 13 Dec 2012 14:31:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj9of-0008C9-IM;
	Thu, 13 Dec 2012 14:31:13 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.59057.406251.702030@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 14:31:13 +0000
To: Ronny Hegewald <ronny.hegewald@online.de>
In-Reply-To: <201212130035.28918.ronny.hegewald@online.de>
References: <50A4C8F102000078000A8C14@nat28.tlf.novell.com>
	<201212130035.28918.ronny.hegewald@online.de>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.1-rc1 and 4.1.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ronny Hegewald writes ("Re: [Xen-devel] Xen 4.2.1-rc1 and 4.1.4-rc1 have been tagged"):
> On Thursday 15 November 2012, Jan Beulich wrote:
> > All,
> >
> > aiming at releases with not too many more RCs, please test!
> >
> > Thanks, Jan
> 
> The patch "fix vfb related assertion problem when starting pv-domU" was
> acknowledged as a stable candidate by Ian Campbell (see
> http://lists.xen.org/archives/html/xen-devel/2012-11/msg01208.html), but I
> couldn't find it in the 4.2 stable branch.
> 
> It's a one-liner and I didn't find anything on the list saying it was
> discarded for stable.
> 
> It's revision 26145:8b93ac0c93f3 (copied from the above link).

Done, thanks.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:34:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9rC-0006NH-4H; Thu, 13 Dec 2012 14:33:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tj9rB-0006N4-0R
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:33:49 +0000
Received: from [85.158.143.99:43896] by server-2.bemta-4.messagelabs.com id
	6C/8F-30861-C47E9C05; Thu, 13 Dec 2012 14:33:48 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1355409225!28716271!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2771 invoked from network); 13 Dec 2012 14:33:47 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:33:47 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so4010390iej.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 06:33:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=EqMB1y9t/uzA5oJWA20jcS/or7YnrzFzFswyECgUeyU=;
	b=EXB2xDJQU7HdEXfWqsVjAw7m8D3erW8LzS+OwuF91cL/9iapH9lQJo+s2Hwi3ZIfpK
	vhzeqvRewbsmpdk8n79iqyVIqr54YCc2HvSa/zYadF9zShoMU0GZRk9L2KjIM3TBPGDM
	2MWQYdH/ubsSQ7rBPk7Vr9Kv+TiaUl0IUp0xRaWgjD8RIuuPAegz6tNMP8/frpdBTJte
	V1OJ8AGabx+vXAJlq8w6l5934WnzVwDpG/ZxcztwyLdLm7bEKjkMlKNsLG7V7jZrkV0A
	sg6CMjheOmMrrdrBhnzx+tweKXzl4je4wuShm5zWPG1fx/91bjO0iRT80AlDvRWUobk+
	dndw==
MIME-Version: 1.0
Received: by 10.50.159.170 with SMTP id xd10mr1725568igb.44.1355409225575;
	Thu, 13 Dec 2012 06:33:45 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 13 Dec 2012 06:33:45 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
Date: Thu, 13 Dec 2012 22:33:45 +0800
X-Google-Sender-Auth: MCevPKOP6ofvUS03sYxd9sz-EB0
Message-ID: <CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay, Allen M" <allen.m.kay@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 8:43 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
>
> Does this patch work for you?
>

It appears that you changed the exposed 1f.0 bridge into an ISA bridge.
The driver should be able to recognize it -- as long as it is not
hidden by the PIIX3 bridge.
I wonder if there is a way to override that one entirely...
But anyway, I'll try it out first.

>
>
> diff --git a/hw/pci.c b/hw/pci.c
> index f051de1..d371bd7 100644
> --- a/hw/pci.c
> +++ b/hw/pci.c
> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>      }
>  }
>
> -typedef struct {
> -    PCIDevice dev;
> -    PCIBus *bus;
> -} PCIBridge;
> -
>  void pci_bridge_write_config(PCIDevice *d,
>                               uint32_t address, uint32_t val, int len)
>  {
> diff --git a/hw/pci.h b/hw/pci.h
> index edc58b6..c2acab9 100644
> --- a/hw/pci.h
> +++ b/hw/pci.h
> @@ -222,6 +222,11 @@ struct PCIDevice {
>      int irq_state[4];
>  };
>
> +typedef struct {
> +    PCIDevice dev;
> +    PCIBus *bus;
> +} PCIBridge;
> +
>  extern char direct_pci_str[];
>  extern int direct_pci_msitranslate;
>  extern int direct_pci_power_mgmt;
> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
> index c6f8869..d8e789f 100644
> --- a/hw/pt-graphics.c
> +++ b/hw/pt-graphics.c
> @@ -3,6 +3,7 @@
>   */
>
>  #include "pass-through.h"
> +#include "pci.h"
>  #include "pci/header.h"
>  #include "pci/pci.h"
>
> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
>
> -    if ( vid == PCI_VENDOR_ID_INTEL )
> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
> -                        pch_map_irq, "intel_bridge_1f");
> +    if (vid == PCI_VENDOR_ID_INTEL) {
> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
> +
> +        pci_config_set_vendor_id(s->dev.config, vid);
> +        pci_config_set_device_id(s->dev.config, did);
> +
> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
> +        s->dev.config[PCI_REVISION] = rid;
> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
> +        s->dev.config[PCI_HEADER_TYPE] = 0x81;
> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
> +
> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
> +    }
>  }
>
>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:37:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9uh-0006cd-Rj; Thu, 13 Dec 2012 14:37:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tj9uf-0006cY-SW
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:37:26 +0000
Received: from [85.158.137.99:3185] by server-14.bemta-3.messagelabs.com id
	94/80-27443-028E9C05; Thu, 13 Dec 2012 14:37:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1355409401!19252447!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4398 invoked from network); 13 Dec 2012 14:36:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 14:36:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,273,1355097600"; 
   d="scan'208";a="119877"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 14:36:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 14:36:41 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tj9tx-0006Y5-Dg; Thu, 13 Dec 2012 14:36:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tj9tx-0008ES-8z;
	Thu, 13 Dec 2012 14:36:41 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.59385.154336.296785@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 14:36:41 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1352273016.12977.21.camel@hastur.hellion.org.uk>
References: <201211070207.qA727wF1028601@wind.enjellic.com>
	<1352273016.12977.21.camel@hastur.hellion.org.uk>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] 4.1.2 blktap2 cleanup fixes.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 2/2] 4.1.2 blktap2 cleanup fixes."):
> On Wed, 2012-11-07 at 02:07 +0000, Dr. Greg Wettstein wrote:
> > ---------------------------------------------------------------------------
> > The following patch when applied on top of:
> > 
> > libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> > 
> > Establishes correct cleanup behavior for blktap devices.  This patch
> > implements the release of the backend device before calling for
> > the destruction of the userspace component of the blktap device.
> > 
> > Without this patch the kernel xen-blkback driver deadlocks with
> > the blktap2 user control plane until the IPC channel is terminated by the
> > timeout on the select() call.  This results in a noticeable delay
> > in the termination of the guest and causes the blktap minor
> > number which had been allocated to be orphaned.
> > 
> > Signed-off-by: Greg Wettstein <greg@enjellic.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

I have fixed up the whitespace error and committed this to 4.1.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:41:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tj9yg-0006tq-Gz; Thu, 13 Dec 2012 14:41:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph_Egger@gmx.de>) id 1Tj9yf-0006tk-6G
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:41:33 +0000
Received: from [85.158.139.83:20415] by server-2.bemta-5.messagelabs.com id
	2A/F1-16162-C19E9C05; Thu, 13 Dec 2012 14:41:32 +0000
X-Env-Sender: Christoph_Egger@gmx.de
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355409691!27586623!1
X-Originating-IP: [213.165.64.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDI2ODYzNw==\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDI2ODYzNw==\n, ML_RADAR_SPEW_LINKS_14, 
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14660 invoked from network); 13 Dec 2012 14:41:31 -0000
Received: from mailout-de.gmx.net (HELO mailout-de.gmx.net) (213.165.64.22)
	by server-12.tower-182.messagelabs.com with SMTP;
	13 Dec 2012 14:41:31 -0000
Received: (qmail invoked by alias); 13 Dec 2012 14:41:31 -0000
Received: from dslb-088-074-128-128.pools.arcor-ip.net (EHLO [10.1.0.3])
	[88.74.128.128]
	by mail.gmx.net (mp034) with SMTP; 13 Dec 2012 15:41:31 +0100
X-Authenticated: #6616588
X-Provags-ID: V01U2FsdGVkX18k8wcO9sskX8Q7a9G2VRan23f5xBeOcVeUmTCDs/
	TFGWb5+nG4FiLk
Message-ID: <50C9E919.3080405@gmx.de>
Date: Thu, 13 Dec 2012 15:41:29 +0100
From: Christoph Egger <Christoph_Egger@gmx.de>
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.4; en-US;
	rv:1.9.2.28) Gecko/20120306 Lightning/1.0b2 Thunderbird/3.1.20
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <508916B3.2030403@amd.com>	<20617.13045.486553.172990@mariner.uk.xensource.com>	<1355395238.10554.71.camel@zakaz.uk.xensource.com>	<50C9E49C.8050809@gmx.de>
	<20681.58690.426312.833615@mariner.uk.xensource.com>
In-Reply-To: <20681.58690.426312.833615@mariner.uk.xensource.com>
X-Enigmail-Version: 1.1.1
X-Y-GMX-Trusted: 0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream
 qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13.12.12 15:25, Ian Jackson wrote:

> Christoph Egger writes ("Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
>> On 13.12.12 11:40, Ian Campbell wrote:
>>> Adding Christoph's new address, I guess this is a thing exposed on
>>> NetBSD?
>>
>> This is not specific to NetBSD. It is exposed everywhere where you
>> install Xen into a non-default directory by specifying the prefix
>> to configure.
>
> Indeed so.  I think a better way of putting it is that (IIRC) this bug
> in our build system was exposed routinely on NetBSD because the NetBSD
> ports collection always passes --prefix.  Is that right ?


Yes, this is right.

It is also routinely exposed when you choose a different prefix
for different xen versions for development purposes.
I use xen-<c/s> to switch back and forth between different
xen versions. This way I am always able to use a working version
and to test a new changeset.

Christoph

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:52:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjA9G-0007Tm-VG; Thu, 13 Dec 2012 14:52:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjA9F-0007Th-K6
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 14:52:29 +0000
Received: from [85.158.139.211:28383] by server-9.bemta-5.messagelabs.com id
	BE/ED-10690-CABE9C05; Thu, 13 Dec 2012 14:52:28 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355410347!18588482!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12522 invoked from network); 13 Dec 2012 14:52:28 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 14:52:28 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjA9C-000KYs-1a; Thu, 13 Dec 2012 14:52:26 +0000
Date: Thu, 13 Dec 2012 14:52:25 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213145225.GG75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-2-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-2-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 01/11] nestedhap: Change hostcr3 and
	p2m->cr3 to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191033), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> VMX doesn't have the concept of a host cr3 for the nested p2m,
> and only SVM has, so change it to neutral words.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:53:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjA9c-0007VQ-Bt; Thu, 13 Dec 2012 14:52:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjA9a-0007V7-RI
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 14:52:50 +0000
Received: from [85.158.143.35:33690] by server-1.bemta-4.messagelabs.com id
	D1/94-28401-2CBE9C05; Thu, 13 Dec 2012 14:52:50 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355410369!5074660!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23358 invoked from network); 13 Dec 2012 14:52:49 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 14:52:49 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjA9Z-000KZA-2q; Thu, 13 Dec 2012 14:52:49 +0000
Date: Thu, 13 Dec 2012 14:52:49 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213145249.GH75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-3-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-3-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 02/11] nestedhap: Change nested p2m's walker
	to vendor-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191034), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> EPT and NPT adopt different formats for each-level entry,
> so change the walker functions to vendor-specific ones.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 14:58:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 14:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAFL-0007m2-5G; Thu, 13 Dec 2012 14:58:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TjAFI-0007lv-Pd
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 14:58:45 +0000
Received: from [85.158.138.51:63881] by server-14.bemta-3.messagelabs.com id
	84/73-27443-32DE9C05; Thu, 13 Dec 2012 14:58:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355410722!10103958!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10634 invoked from network); 13 Dec 2012 14:58:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 14:58:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 14:58:42 +0000
Message-Id: <50C9FB3202000078000B0338@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 14:58:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2E1FBA32.1__="
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] linux-2.6.18/blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2E1FBA32.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

"be->mode" is obtained from xenbus_read(), which does a kmalloc() for
the message body. The short string is never released, so do it along
with freeing "be" itself, and make sure the string isn't kept when
backend_changed() doesn't complete successfully (which made it
desirable to slightly re-structure that function, so that the error
cleanup can be done in one place).

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/xen/blkback/xenbus.c
+++ b/drivers/xen/blkback/xenbus.c
@@ -198,6 +198,7 @@ static int blkback_remove(struct xenbus_
 		be->blkif = NULL;
 	}
 
+	kfree(be->mode);
 	kfree(be);
 	dev->dev.driver_data = NULL;
 	write_unlock(&sysfs_read_lock);
@@ -278,6 +279,7 @@ static void backend_changed(struct xenbu
 		= container_of(watch, struct backend_info, backend_watch);
 	struct xenbus_device *dev = be->dev;
 	int cdrom = 0;
+	unsigned long handle;
 	char *device_type;
 
 	DPRINTK("");
@@ -295,12 +297,12 @@ static void backend_changed(struct xenbu
 		return;
 	}
 
-	if ((be->major || be->minor) &&
-	    ((be->major != major) || (be->minor != minor))) {
-		printk(KERN_WARNING
-		       "blkback: changing physical device (from %x:%x to "
-		       "%x:%x) not supported.\n", be->major, be->minor,
-		       major, minor);
+	if (be->major | be->minor) {
+		if (be->major != major || be->minor != minor)
+			printk(KERN_WARNING "blkback: "
+			       "changing physical device (from %x:%x to "
+			       "%x:%x) not supported.\n",
+			       be->major, be->minor, major, minor);
 		return;
 	}
 
@@ -318,31 +320,30 @@ static void backend_changed(struct xenbu
 		kfree(device_type);
 	}
 
-	if (be->major == 0 && be->minor == 0) {
-		/* Front end dir is a number, which is used as the handle. */
+	/* Front end dir is a number, which is used as the handle. */
+	handle = simple_strtoul(strrchr(dev->otherend, '/') + 1, NULL, 0);
 
-		char *p = strrchr(dev->otherend, '/') + 1;
-		long handle = simple_strtoul(p, NULL, 0);
+	be->major = major;
+	be->minor = minor;
 
-		be->major = major;
-		be->minor = minor;
-
-		err = vbd_create(be->blkif, handle, major, minor,
-				 (NULL == strchr(be->mode, 'w')), cdrom);
-		if (err) {
-			be->major = be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating vbd structure");
-			return;
-		}
+	err = vbd_create(be->blkif, handle, major, minor,
+			 (NULL == strchr(be->mode, 'w')), cdrom);
 
+	if (err)
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+	else {
 		err = xenvbd_sysfs_addif(dev);
 		if (err) {
			vbd_free(&be->blkif->vbd);
-			be->major = be->minor = 0;
 			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-			return;
 		}
+	}
 
+	if (err) {
+		kfree(be->mode);
+		be->mode = NULL;
+		be->major = be->minor = 0;
+	} else {
 		/* We're potentially connected now */
 		update_blkif_status(be->blkif);
 	}




--=__Part2E1FBA32.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2E1FBA32.1__=--


From xen-devel-bounces@lists.xen.org Thu Dec 13 15:00:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAHE-0007u4-Rb; Thu, 13 Dec 2012 15:00:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1TjAHD-0007ts-8U
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:00:43 +0000
Received: from [85.158.143.35:32758] by server-3.bemta-4.messagelabs.com id
	0A/D1-18211-A9DE9C05; Thu, 13 Dec 2012 15:00:42 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355410638!12348269!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13491 invoked from network); 13 Dec 2012 14:57:20 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 14:57:20 -0000
Received: from aplexcas2.dom1.jhuapl.edu (aplexcas2.dom1.jhuapl.edu
	[128.244.198.91]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 4640_90ad_481c7365_a7dd_4053_8e3c_084d49478979;
	Thu, 13 Dec 2012 09:57:09 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Thu, 13 Dec 2012
	09:55:57 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 13 Dec 2012 09:55:55 -0500
Thread-Topic: [VTPM v7 1/8] add vtpm-stubdom code
Thread-Index: Ac3ZJ8FhGgl6pZ3qSESzq/iEEtQClgAGiRdg
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BE1625E5@aplesstripe.dom1.jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1355399295.10554.96.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355399295.10554.96.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ok, just for reference for now: nothing has changed with the vtpm patches; only the autoconf stuff is evolving.

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
Sent: Thursday, December 13, 2012 6:48 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [VTPM v7 1/8] add vtpm-stubdom code

For future reference, please could you include an indication of what changed in a new posting of a series, either in the 0/N mail (which is useful to include as an intro in any case) or in the individual changelogs.

On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
> Add the code base for vtpm-stubdom to the stubdom hierarchy. Makefile
> changes in later patch.
>
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  stubdom/vtpm/Makefile    |   37 +++++
>  stubdom/vtpm/minios.cfg  |   14 ++
>  stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm.h      |   36 +++++
>  stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm_cmd.h  |   31 ++++
>  stubdom/vtpm/vtpm_pcrs.c |   43 +++++
>  stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
>  stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpmblk.h   |   31 ++++
>  10 files changed, 1212 insertions(+)
>  create mode 100644 stubdom/vtpm/Makefile
>  create mode 100644 stubdom/vtpm/minios.cfg
>  create mode 100644 stubdom/vtpm/vtpm.c
>  create mode 100644 stubdom/vtpm/vtpm.h
>  create mode 100644 stubdom/vtpm/vtpm_cmd.c
>  create mode 100644 stubdom/vtpm/vtpm_cmd.h
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.c
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.h
>  create mode 100644 stubdom/vtpm/vtpmblk.c
>  create mode 100644 stubdom/vtpm/vtpmblk.h
>
> diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
> new file mode 100644
> index 0000000..686c0ea
> --- /dev/null
> +++ b/stubdom/vtpm/Makefile
> @@ -0,0 +1,37 @@
> +# Copyright (c) 2010-2012 United States Government, as represented by
> +# the Secretary of Defense.  All rights reserved.
> +#
> +# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> +# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> +# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> +# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> +# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> +# SOFTWARE.
> +#
> +
> +XEN_ROOT=../..
> +
> +PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
> +PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
> +
> +TARGET=vtpm.a
> +OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
> +
> +
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
> +
> +$(TARGET): $(OBJS)
> +       ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
> +
> +$(OBJS): vtpm_manager.h
> +
> +vtpm_manager.h:
> +       ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
> +
> +clean:
> +       -rm $(TARGET) $(OBJS) vtpm_manager.h
> +
> +.PHONY: clean
> diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
> new file mode 100644
> index 0000000..31652ee
> --- /dev/null
> +++ b/stubdom/vtpm/minios.cfg
> @@ -0,0 +1,14 @@
> +CONFIG_TPMFRONT=y
> +CONFIG_TPM_TIS=n
> +CONFIG_TPMBACK=y
> +CONFIG_START_NETWORK=n
> +CONFIG_TEST=n
> +CONFIG_PCIFRONT=n
> +CONFIG_BLKFRONT=y
> +CONFIG_NETFRONT=n
> +CONFIG_FBFRONT=n
> +CONFIG_KBDFRONT=n
> +CONFIG_CONSFRONT=n
> +CONFIG_XENBUS=y
> +CONFIG_LWIP=n
> +CONFIG_XC=n
> diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
> new file mode 100644
> index 0000000..71aef78
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.c
> @@ -0,0 +1,404 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <syslog.h>
> +#include <stdbool.h>
> +#include <errno.h>
> +#include <sys/time.h>
> +#include <xen/xen.h>
> +#include <tpmback.h>
> +#include <tpmfront.h>
> +
> +#include <polarssl/entropy.h>
> +#include <polarssl/ctr_drbg.h>
> +
> +#include "tpm/tpm_emulator_extern.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm.h"
> +#include "vtpm_cmd.h"
> +#include "vtpm_pcrs.h"
> +#include "vtpmblk.h"
> +
> +#define TPM_LOG_INFO LOG_INFO
> +#define TPM_LOG_ERROR LOG_ERR
> +#define TPM_LOG_DEBUG LOG_DEBUG
> +
> +/* Global commandline options - default values */
> +struct Opt_args opt_args = {
> +   .startup = ST_CLEAR,
> +   .loglevel = TPM_LOG_INFO,
> +   .hwinitpcrs = VTPM_PCRNONE,
> +   .tpmconf = 0,
> +   .enable_maint_cmds = false,
> +};
> +
> +static uint32_t badords[32];
> +static unsigned int n_badords = 0;
> +
> +entropy_context entropy;
> +ctr_drbg_context ctr_drbg;
> +
> +struct tpmfront_dev* tpmfront_dev;
> +
> +void vtpm_get_extern_random_bytes(void *buf, size_t nbytes) {
> +   ctr_drbg_random(&ctr_drbg, buf, nbytes);
> +}
> +
> +int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
> +   return read_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_write_to_file(uint8_t *data, size_t data_length) {
> +   return write_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_extern_init_fake(void) {
> +   return 0;
> +}
> +
> +void vtpm_extern_release_fake(void) { }
> +
> +
> +void vtpm_log(int priority, const char *fmt, ...) {
> +   if(opt_args.loglevel >= priority) {
> +      va_list v;
> +      va_start(v, fmt);
> +      vprintf(fmt, v);
> +      va_end(v);
> +   }
> +}
> +
> +static uint64_t vtpm_get_ticks(void)
> +{
> +  static uint64_t old_t = 0;
> +  uint64_t new_t, res_t;
> +  struct timeval tv;
> +  gettimeofday(&tv, NULL);
> +  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
> +  res_t = (old_t > 0) ? new_t - old_t : 0;
> +  old_t = new_t;
> +  return res_t;
> +}
> +
> +
> +static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
> +   UINT32 sz = len;
> +   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
> +   *olen = sz;
> +   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
> +}
> +
> +int init_random(void) {
> +   /* Initialize the rng */
> +   entropy_init(&entropy);
> +   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
> +   entropy_gather(&entropy);
> +   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
> +   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
> +
> +   return 0;
> +}
> +
> +int check_ordinal(tpmcmd_t* tpmcmd) {
> +   TPM_COMMAND_CODE ord;
> +   UINT32 len = 4;
> +   BYTE* ptr;
> +   unsigned int i;
> +
> +   if(tpmcmd->req_len < 10) {
> +      return true;
> +   }
> +
> +   ptr = tpmcmd->req + 6;
> +   tpm_unmarshal_UINT32(&ptr, &len, &ord);
> +
> +   for(i = 0; i < n_badords; ++i) {
> +      if(ord == badords[i]) {
> +         error("Disabled command ordinal (%" PRIu32") requested!\n");
> +         return false;
> +      }
> +   }
> +   return true;
> +}
> +
> +static void main_loop(void) {
> +   tpmcmd_t* tpmcmd = NULL;
> +   domid_t domid;              /* Domid of frontend */
> +   unsigned int handle;        /* handle of frontend */
> +   int res = -1;
> +
> +   info("VTPM Initializing\n");
> +
> +   /* Set required tpm config args */
> +   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
> +   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
> +
> +   /* Initialize the emulator */
> +   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
> +
> +   /* Initialize any requested PCRs with hardware TPM values */
> +   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
> +      error("Failed to initialize PCRs with hardware TPM values");
> +      goto abort_postpcrs;
> +   }
> +
> +   /* Wait for the frontend domain to connect */
> +   info("Waiting for frontend domain to connect..");
> +   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
> +      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
> +   } else {
> +      error("Unable to attach to a frontend");
> +   }
> +
> +   tpmcmd = tpmback_req(domid, handle);
> +   while(tpmcmd) {
> +      /* Handle the request */
> +      if(tpmcmd->req_len) {
> +        tpmcmd->resp = NULL;
> +        tpmcmd->resp_len = 0;
> +
> +         /* First check for disabled ordinals */
> +         if(!check_ordinal(tpmcmd)) {
> +            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
> +         }
> +         /* If not disabled, do the command */
> +         else {
> +            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
> +               error("tpm_handle_command() failed");
> +               create_error_response(tpmcmd, TPM_FAIL);
> +            }
> +         }
> +      }
> +
> +      /* Send the response */
> +      tpmback_resp(tpmcmd);
> +
> +      /* Wait for the next request */
> +      tpmcmd = tpmback_req(domid, handle);
> +
> +   }
> +
> +abort_postpcrs:
> +   info("VTPM Shutting down\n");
> +
> +   tpm_emulator_shutdown();
> +}
> +
> +int parse_cmd_line(int argc, char** argv) {
> +   char sval[25];
> +   char* logstr = NULL;
> +   /* Parse the command strings */
> +   for(unsigned int i = 1; i < argc; ++i) {
> +      if (sscanf(argv[i], "loglevel=%25s", sval) == 1){
> +        if (!strcmp(sval, "debug")) {
> +           opt_args.loglevel = TPM_LOG_DEBUG;
> +           logstr = "debug";
> +        }
> +        else if (!strcmp(sval, "info")) {
> +           logstr = "info";
> +           opt_args.loglevel = TPM_LOG_INFO;
> +        }
> +        else if (!strcmp(sval, "error")) {
> +           logstr = "error";
> +           opt_args.loglevel = TPM_LOG_ERROR;
> +        }
> +      }
> +      else if (!strcmp(argv[i], "clear")) {
> +        opt_args.startup = ST_CLEAR;
> +      }
> +      else if (!strcmp(argv[i], "save")) {
> +        opt_args.startup = ST_SAVE;
> +      }
> +      else if (!strcmp(argv[i], "deactivated")) {
> +        opt_args.startup = ST_DEACTIVATED;
> +      }
> +      else if (!strncmp(argv[i], "maintcmds=", 10)) {
> +         if(!strcmp(argv[i] + 10, "1")) {
> +            opt_args.enable_maint_cmds = true;
> +         } else if(!strcmp(argv[i] + 10, "0")) {
> +            opt_args.enable_maint_cmds = false;
> +         }
> +      }
> +      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
> +         char *pch = argv[i] + 10;
> +         unsigned int v1, v2;
> +         pch = strtok(pch, ",");
> +         while(pch != NULL) {
> +            if(!strcmp(pch, "all")) {
> +               //Set all
> +               opt_args.hwinitpcrs = VTPM_PCRALL;
> +            } else if(!strcmp(pch, "none")) {
> +               //Set none
> +               opt_args.hwinitpcrs = VTPM_PCRNONE;
> +            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
> +               /* Set range; this must be tested before the single-index
> +                * case, since sscanf("%u") also matches the leading number
> +                * of a range like "3-5" */
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               if(v2 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v2);
> +                  return -1;
> +               }
> +               if(v2 < v1) {
> +                  unsigned tp = v1;
> +                  v1 = v2;
> +                  v2 = tp;
> +               }
> +               for(unsigned int j = v1; j <= v2; ++j) {
> +                  opt_args.hwinitpcrs |= (1 << j);
> +               }
> +            } else if(sscanf(pch, "%u", &v1) == 1) {
> +               //Set one
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               opt_args.hwinitpcrs |= (1 << v1);
> +            } else {
> +               error("hwinitpcr error: Invalid PCR specification: %s", pch);
> +               return -1;
> +            }
> +            pch = strtok(NULL, ",");
> +         }
> +      }
> +      else {
> +        error("Invalid command line option `%s'", argv[i]);
> +      }
> +
> +   }
> +
> +   /* Check Errors and print results */
> +   switch(opt_args.startup) {
> +      case ST_CLEAR:
> +        info("Startup mode is `clear'");
> +        break;
> +      case ST_SAVE:
> +        info("Startup mode is `save'");
> +        break;
> +      case ST_DEACTIVATED:
> +        info("Startup mode is `deactivated'");
> +        break;
> +      default:
> +        error("Invalid startup mode %d", opt_args.startup);
> +        return -1;
> +   }
> +
> +   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
> +   {
> +      char pcrstr[1024];
> +      char* ptr = pcrstr;
> +
> +      pcrstr[0] = '\0';
> +      info("The following PCRs will be initialized with values from the hardware TPM:");
> +      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +         if(opt_args.hwinitpcrs & (1 << i)) {
> +            ptr += sprintf(ptr, "%u, ", i);
> +         }
> +      }
> +      /* get rid of the last comma if any numbers were printed */
> +      *(ptr - 2) = '\0';
> +
> +      info("\t%s", pcrstr);
> +   } else {
> +      info("All PCRs initialized to default values");
> +   }
> +
> +   if(!opt_args.enable_maint_cmds) {
> +      info("TPM Maintenance Commands disabled");
> +      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
> +      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
> +      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
> +   } else {
> +      info("TPM Maintenance Commands enabled");
> +   }
> +
> +   if(logstr)
> +      info("Log level set to %s", logstr);
> +
> +   return 0;
> +}
> +
> +void cleanup_opt_args(void) {
> +}
> +
> +int main(int argc, char **argv)
> +{
> +   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
> +   sleep(2);
> +
> +   /* Setup extern function pointers */
> +   tpm_extern_init = vtpm_extern_init_fake;
> +   tpm_extern_release = vtpm_extern_release_fake;
> +   tpm_malloc = malloc;
> +   tpm_free = free;
> +   tpm_log = vtpm_log;
> +   tpm_get_ticks = vtpm_get_ticks;
> +   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
> +   tpm_write_to_storage = vtpm_write_to_file;
> +   tpm_read_from_storage = vtpm_read_from_file;
> +
> +   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
> +   if(parse_cmd_line(argc, argv)) {
> +      error("Error parsing command line");
> +      return -1;
> +   }
> +
> +   /* Initialize devices */
> +   init_tpmback();
> +   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
> +      error("Unable to initialize tpmfront device");
> +      goto abort_posttpmfront;
> +   }
> +
> +   /* Seed the RNG with entropy from hardware TPM */
> +   if(init_random()) {
> +      error("Unable to initialize RNG");
> +      goto abort_postrng;
> +   }
> +
> +   /* Initialize blkfront device */
> +   if(init_vtpmblk(tpmfront_dev)) {
> +      error("Unable to initialize Blkfront persistent storage");
> +      goto abort_postvtpmblk;
> +   }
> +
> +   /* Run main loop */
> +   main_loop();
> +
> +   /* Shutdown blkfront */
> +   shutdown_vtpmblk();
> +abort_postvtpmblk:
> +abort_postrng:
> +
> +   /* Close devices */
> +   shutdown_tpmfront(tpmfront_dev);
> +abort_posttpmfront:
> +   shutdown_tpmback();
> +
> +   cleanup_opt_args();
> +
> +   return 0;
> +}
> diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
> new file mode 100644
> index 0000000..5919e44
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.h
> @@ -0,0 +1,36 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_H
> +#define VTPM_H
> +
> +#include <stdbool.h>
> +
> +/* For testing */
> +#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
> +#define VERS_CMD_LEN 22
> +
> +/* Global commandline options */
> +struct Opt_args {
> +   enum StartUp {
> +      ST_CLEAR = 1,
> +      ST_SAVE = 2,
> +      ST_DEACTIVATED = 3
> +   } startup;
> +   unsigned long hwinitpcrs;
> +   int loglevel;
> +   uint32_t tpmconf;
> +   bool enable_maint_cmds;
> +};
> +extern struct Opt_args opt_args;
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
> new file mode 100644
> index 0000000..7eae98b
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.c
> @@ -0,0 +1,256 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <types.h>
> +#include <xen/xen.h>
> +#include <mm.h>
> +#include <gnttab.h>
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_manager.h"
> +#include "vtpm_cmd.h"
> +#include <tpmback.h>
> +
> +#define TRYFAILGOTO(C) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      goto abort_egress; \
> +   }
> +#define TRYFAILGOTOMSG(C, msg) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      error(msg); \
> +      goto abort_egress; \
> +   }
> +#define CHECKSTATUSGOTO(ret, fname) \
> +   if((ret) != TPM_SUCCESS) { \
> +      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
> +      status = ret; \
> +      goto abort_egress; \
> +   }
> +
> +#define ERR_MALFORMED "Malformed response from backend"
> +#define ERR_TPMFRONT "Error sending command through frontend device"
> +
> +struct shpage {
> +   void* page;
> +   grant_ref_t grantref;
> +};
> +
> +typedef struct shpage shpage_t;
> +
> +static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag,
> +      UINT32 size, TPM_COMMAND_CODE ord) {
> +   return *bptr == NULL ||
> +        tpm_marshal_UINT16(bptr, len, tag) ||
> +        tpm_marshal_UINT32(bptr, len, size) ||
> +        tpm_marshal_UINT32(bptr, len, ord);
> +}
> +
> +static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag,
> +      UINT32* size, TPM_COMMAND_CODE* ord) {
> +   return *bptr == NULL ||
> +        tpm_unmarshal_UINT16(bptr, len, tag) ||
> +        tpm_unmarshal_UINT32(bptr, len, size) ||
> +        tpm_unmarshal_UINT32(bptr, len, ord);
> +}
> +
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode) {
> +   TPM_TAG tag;
> +   UINT32 len = tpmcmd->req_len;
> +   uint8_t* respptr;
> +   uint8_t* cmdptr = tpmcmd->req;
> +
> +   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
> +      switch (tag) {
> +         case TPM_TAG_RQU_COMMAND:
> +            tag = TPM_TAG_RSP_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH1_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH1_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH2_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH2_COMMAND;
> +            break;
> +      }
> +   } else {
> +      tag = TPM_TAG_RSP_COMMAND;
> +   }
> +
> +   tpmcmd->resp_len = len = 10;
> +   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
> +
> +   return pack_header(&respptr, &len, tag, len, errorcode);
> +}
> +
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Ask the real tpm for random bytes for the seed */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) +
> +      sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm command */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
> +
> +   /* Send cmd, wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord),
> +      ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
> +
> +   // Get the number of random bytes in the response
> +   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
> +   *numbytes = size;
> +
> +   //Get the random bytes out; the tpm may give us fewer bytes than we want
> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes),
> +      ERR_MALFORMED);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> +
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev,
> +      uint8_t** data, size_t* data_length) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +
> +   /* Send the command to vtpm_manager */
> +   info("Requesting Encryption key from backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord),
> +      ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
> +
> +   /* Get the size of the key */
> +   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
> +
> +   /* Copy the key bits */
> +   *data = malloc(*data_length);
> +   memcpy(*data, bptr, *data_length);
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_LoadHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev,
> +      uint8_t* data, size_t data_length) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   memcpy(bptr, data, data_length);
> +   bptr += data_length;
> +
> +   /* Send the command to vtpm_manager */
> +   info("Sending encryption key to backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord),
> +      ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_SaveHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex,
> +      BYTE* outDigest) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t *cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Just send a TPM_PCRRead Command to the HW tpm */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) +
> +      sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm cmd */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
> +
> +   /*Send Cmd wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord),
> +      ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
> +
> +   //Get the PCR value
> +   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
> new file mode 100644
> index 0000000..b0bfa22
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_CMD_H
> +#define VTPM_CMD_H
> +
> +#include <tpmfront.h>
> +#include <tpmback.h>
> +#include "tpm/tpm_structures.h"
> +
> +/* Create a command response error header */
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
> +
> +/* Request random bytes from hardware tpm, returns 0 on success */
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes,
> +      UINT32* numbytes);
> +
> +/* Retrieve 256 bit AES encryption key from manager */
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data,
> +      size_t* data_length);
> +
> +/* Manager securely saves our 256 bit AES encryption key */
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data,
> +      size_t data_length);
> +
> +/* Send a TPM_PCRRead command through the manager to the hw tpm */
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex,
> +      BYTE* outDigest);
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
> new file mode 100644
> index 0000000..22a6cef
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.c
> @@ -0,0 +1,43 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include "vtpm_pcrs.h"
> +#include "vtpm_cmd.h"
> +#include "tpm/tpm_data.h"
> +
> +#define PCR_VALUE      tpmData.permanent.data.pcrValue
> +
> +static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
> +   if(pcrIndex >= TPM_NUM_PCR) {
> +      return TPM_BADINDEX;
> +   }
> +   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
> +   return TPM_SUCCESS;
> +}
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev,
> +      unsigned long pcrs) {
> +   TPM_RESULT rc = TPM_SUCCESS;
> +   uint8_t digest[sizeof(TPM_PCRVALUE)];
> +
> +   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +      if(pcrs & (1UL << i)) {
> +         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
> +            error("TPM_PCRRead failed with error: %d", rc);
> +            return rc;
> +         }
> +         write_pcr_direct(i, digest);
> +      }
> +   }
> +
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
> new file mode 100644
> index 0000000..11835f9
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_PCRS_H
> +#define VTPM_PCRS_H
> +
> +#include "tpm/tpm_structures.h"
> +
> +#define VTPM_PCR0 (1 << 0)
> +#define VTPM_PCR1 (1 << 1)
> +#define VTPM_PCR2 (1 << 2)
> +#define VTPM_PCR3 (1 << 3)
> +#define VTPM_PCR4 (1 << 4)
> +#define VTPM_PCR5 (1 << 5)
> +#define VTPM_PCR6 (1 << 6)
> +#define VTPM_PCR7 (1 << 7)
> +#define VTPM_PCR8 (1 << 8)
> +#define VTPM_PCR9 (1 << 9)
> +#define VTPM_PCR10 (1 << 10)
> +#define VTPM_PCR11 (1 << 11)
> +#define VTPM_PCR12 (1 << 12)
> +#define VTPM_PCR13 (1 << 13)
> +#define VTPM_PCR14 (1 << 14)
> +#define VTPM_PCR15 (1 << 15)
> +#define VTPM_PCR16 (1 << 16)
> +#define VTPM_PCR17 (1 << 17)
> +#define VTPM_PCR18 (1 << 18)
> +#define VTPM_PCR19 (1 << 19)
> +#define VTPM_PCR20 (1 << 20)
> +#define VTPM_PCR21 (1 << 21)
> +#define VTPM_PCR22 (1 << 22)
> +#define VTPM_PCR23 (1 << 23)
> +
> +#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
> +#define VTPM_PCRNONE 0
> +
> +#define VTPM_NUMPCRS 24
> +
> +struct tpmfront_dev;
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev,
> +      unsigned long pcrs);
> +
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
> new file mode 100644
> index 0000000..b343bd8
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.c
> @@ -0,0 +1,307 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <mini-os/byteorder.h>
> +#include "vtpmblk.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_cmd.h"
> +#include "polarssl/aes.h"
> +#include "polarssl/sha1.h"
> +#include <blkfront.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +
> +/*Encryption key and block sizes */
> +#define BLKSZ 16
> +
> +static struct blkfront_dev* blkdev = NULL;
> +static int blkfront_fd = -1;
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev) {
> +   struct blkfront_info blkinfo;
> +   info("Initializing persistent NVM storage\n");
> +
> +   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
> +      error("BLKIO: ERROR Unable to initialize blkfront");
> +      return -1;
> +   }
> +   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
> +      error("BLKIO: ERROR block device is read only!");
> +      goto error;
> +   }
> +   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
> +      error("Unable to open blkfront file descriptor!");
> +      goto error;
> +   }
> +
> +   return 0;
> +error:
> +   shutdown_blkfront(blkdev);
> +   blkdev = NULL;
> +   return -1;
> +}
> +
> +void shutdown_vtpmblk(void)
> +{
> +   close(blkfront_fd);
> +   blkfront_fd = -1;
> +   blkdev = NULL;
> +}
> +
> +int write_vtpmblk_raw(uint8_t *data, size_t data_length) {
> +   int rc;
> +   uint32_t lenbuf;
> +   debug("Begin Write data=%p len=%lu", data, (unsigned long) data_length);
> +
> +   lenbuf = cpu_to_be32((uint32_t)data_length);
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("write(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
> +      error("write(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Wrote %lu bytes to NVM persistent storage", (unsigned long) data_length);
> +
> +   return 0;
> +}
> +
> +int read_vtpmblk_raw(uint8_t **data, size_t *data_length) {
> +   int rc;
> +   uint32_t lenbuf;
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("read(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   *data_length = (size_t) be32_to_cpu(lenbuf);
> +   if(*data_length == 0) {
> +      error("read 0 data_length for NVM");
> +      return -1;
> +   }
> +
> +   *data = tpm_malloc(*data_length);
> +   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
> +      error("read(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Read %lu bytes from NVM persistent storage", (unsigned long) *data_length);
> +   return 0;
> +}
> +
> +int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher,
> +      size_t* cipher_len, uint8_t* symkey) {
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   aes_context aes_ctx;
> +   UINT32 temp;
> +   int mod;
> +
> +   uint8_t* clbuf = NULL;
> +
> +   uint8_t* ivptr;
> +   int ivlen;
> +
> +   uint8_t* cptr;      //Cipher block pointer
> +   int clen;   //Cipher block length
> +
> +   /*Create a new 256 bit encryption key */
> +   if(symkey == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
> +
> +   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
> +   temp = sizeof(UINT32);
> +   ivlen = BLKSZ - temp;
> +   tpm_get_extern_random_bytes(iv, ivlen);
> +   ivptr = iv + ivlen;
> +   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
> +
> +   /*The clear text needs to be padded out to a multiple of BLKSZ */
> +   mod = clear_len % BLKSZ;
> +   clen = mod ? clear_len + BLKSZ - mod : clear_len;
> +   clbuf = malloc(clen);
> +   if (clbuf == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   memcpy(clbuf, clear, clear_len);
> +   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
> +   if(clen - clear_len) {
> +      memset(clbuf + clear_len, 0, clen - clear_len);
> +   }
> +
> +   /* Setup the ciphertext buffer */
> +   *cipher_len = BLKSZ + clen;         /*iv + ciphertext */
> +   cptr = *cipher = malloc(*cipher_len);
> +   if (*cipher == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Copy the IV to cipher text blob*/
> +   memcpy(cptr, iv, BLKSZ);
> +   cptr += BLKSZ;
> +
> +   /* Setup encryption */
> +   aes_setkey_enc(&aes_ctx, symkey, 256);
> +
> +   /* Do encryption now */
> +   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(clbuf);
> +   return rc;
> +}
> +
> +int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear,
> +      size_t* clear_len, uint8_t* symkey) {
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   uint8_t* ivptr;
> +   UINT32 u32, temp;
> +   aes_context aes_ctx;
> +
> +   uint8_t* cptr = cipher;     //cipher block pointer
> +   int clen = cipher_len;      //cipher block length
> +
> +   /* Pull out the initialization vector */
> +   memcpy(iv, cipher, BLKSZ);
> +   cptr += BLKSZ;
> +   clen -= BLKSZ;
> +
> +   /* Setup the clear text buffer */
> +   if((*clear = malloc(clen)) == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Get the length of clear text from last 4 bytes of iv */
> +   temp = sizeof(UINT32);
> +   ivptr = iv + BLKSZ - temp;
> +   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
> +   *clear_len = u32;
> +
> +   /* Setup decryption */
> +   aes_setkey_dec(&aes_ctx, symkey, 256);
> +
> +   /* Do decryption now */
> +   if ((clen % BLKSZ) != 0) {
> +      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   return rc;
> +}
> +
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   uint8_t hashkey[HASHKEYSZ];
> +   uint8_t* symkey = hashkey + HASHSZ;
> +
> +   /* Encrypt the data */
> +   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
> +      goto abort_egress;
> +   }
> +   /* Write to disk */
> +   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
> +      goto abort_egress;
> +   }
> +   /* Get sha1 hash of data */
> +   sha1(cipher, cipher_len, hashkey);
> +
> +   /* Send hash and key to manager */
> +   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   return rc;
> +}
> +
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   size_t keysize;
> +   uint8_t* hashkey = NULL;
> +   uint8_t hash[HASHSZ];
> +   uint8_t* symkey;
> +
> +   /* Retrieve the hash and the key from the manager */
> +   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   if(keysize != HASHKEYSZ) {
> +      error("Manager returned a hashkey of invalid size! expected %d, actual %lu", HASHKEYSZ, (unsigned long) keysize);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   symkey = hashkey + HASHSZ;
> +
> +   /* Read from disk now */
> +   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
> +      goto abort_egress;
> +   }
> +
> +   /* Compute the hash of the cipher text and compare */
> +   sha1(cipher, cipher_len, hash);
> +   if(memcmp(hash, hashkey, HASHSZ)) {
> +      int i;
> +      error("NVM Storage Checksum failed!");
> +      printf("Expected: ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hashkey[i]);
> +      }
> +      printf("\n");
> +      printf("Actual:   ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hash[i]);
> +      }
> +      printf("\n");
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Decrypt the blob */
> +   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   free(hashkey);
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
> new file mode 100644
> index 0000000..282ce6a
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPMBLK_H
> +#define VTPMBLK_H
> +#include <mini-os/types.h>
> +#include <xen/xen.h>
> +#include <tpmfront.h>
> +
> +#define NVMKEYSZ 32
> +#define HASHSZ 20
> +#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
> +void shutdown_vtpmblk(void);
> +
> +/* Encrypts and writes data to blk device */
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data,
> +      size_t data_length);
> +/* Reads, decrypts, and returns data from blk device */
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data,
> +      size_t *data_length);
> +
> +#endif
> --
> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:00:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAHE-0007u4-Rb; Thu, 13 Dec 2012 15:00:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1TjAHD-0007ts-8U
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:00:43 +0000
Received: from [85.158.143.35:32758] by server-3.bemta-4.messagelabs.com id
	0A/D1-18211-A9DE9C05; Thu, 13 Dec 2012 15:00:42 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355410638!12348269!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13491 invoked from network); 13 Dec 2012 14:57:20 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 14:57:20 -0000
Received: from aplexcas2.dom1.jhuapl.edu (aplexcas2.dom1.jhuapl.edu
	[128.244.198.91]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 4640_90ad_481c7365_a7dd_4053_8e3c_084d49478979;
	Thu, 13 Dec 2012 09:57:09 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Thu, 13 Dec 2012
	09:55:57 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 13 Dec 2012 09:55:55 -0500
Thread-Topic: [VTPM v7 1/8] add vtpm-stubdom code
Thread-Index: Ac3ZJ8FhGgl6pZ3qSESzq/iEEtQClgAGiRdg
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BE1625E5@aplesstripe.dom1.jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1355399295.10554.96.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355399295.10554.96.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 1/8] add vtpm-stubdom code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

OK, just for reference for now: nothing has changed with the vtpm patches; only the autoconf stuff is evolving.

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
Sent: Thursday, December 13, 2012 6:48 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org; Ian Jackson
Subject: Re: [VTPM v7 1/8] add vtpm-stubdom code

For future reference, please could you include an indication of what changed in a new posting of a series, either in the 0/N mail (which is useful to include as an intro in any case) or in the individual changelogs.

On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
> Add the code base for vtpm-stubdom to the stubdom hierarchy. Makefile
> changes are in a later patch.
>
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  stubdom/vtpm/Makefile    |   37 +++++
>  stubdom/vtpm/minios.cfg  |   14 ++
>  stubdom/vtpm/vtpm.c      |  404 ++++++++++++++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm.h      |   36 +++++
>  stubdom/vtpm/vtpm_cmd.c  |  256 +++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm_cmd.h  |   31 ++++
>  stubdom/vtpm/vtpm_pcrs.c |   43 +++++
>  stubdom/vtpm/vtpm_pcrs.h |   53 ++++++
>  stubdom/vtpm/vtpmblk.c   |  307 +++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpmblk.h   |   31 ++++
>  10 files changed, 1212 insertions(+)
>  create mode 100644 stubdom/vtpm/Makefile
>  create mode 100644 stubdom/vtpm/minios.cfg
>  create mode 100644 stubdom/vtpm/vtpm.c
>  create mode 100644 stubdom/vtpm/vtpm.h
>  create mode 100644 stubdom/vtpm/vtpm_cmd.c
>  create mode 100644 stubdom/vtpm/vtpm_cmd.h
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.c
>  create mode 100644 stubdom/vtpm/vtpm_pcrs.h
>  create mode 100644 stubdom/vtpm/vtpmblk.c
>  create mode 100644 stubdom/vtpm/vtpmblk.h
>
> diff --git a/stubdom/vtpm/Makefile b/stubdom/vtpm/Makefile
> new file mode 100644
> index 0000000..686c0ea
> --- /dev/null
> +++ b/stubdom/vtpm/Makefile
> @@ -0,0 +1,37 @@
> +# Copyright (c) 2010-2012 United States Government, as represented by
> +# the Secretary of Defense.  All rights reserved.
> +#
> +# THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> +# ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> +# INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> +# FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> +# DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> +# SOFTWARE.
> +#
> +
> +XEN_ROOT=../..
> +
> +PSSL_DIR=../polarssl-$(XEN_TARGET_ARCH)/library
> +PSSL_OBJS=aes.o sha1.o entropy.o ctr_drbg.o sha4.o
> +
> +TARGET=vtpm.a
> +OBJS=vtpm.o vtpm_cmd.o vtpmblk.o vtpm_pcrs.o
> +
> +
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/build
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/tpm
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)/crypto
> +CPPFLAGS+=-I../tpm_emulator-$(XEN_TARGET_ARCH)
> +
> +$(TARGET): $(OBJS)
> +       ar -cr $@ $(OBJS) $(TPMEMU_OBJS) $(foreach obj,$(PSSL_OBJS),$(PSSL_DIR)/$(obj))
> +
> +$(OBJS): vtpm_manager.h
> +
> +vtpm_manager.h:
> +       ln -s ../vtpmmgr/vtpm_manager.h vtpm_manager.h
> +
> +clean:
> +       -rm $(TARGET) $(OBJS) vtpm_manager.h
> +
> +.PHONY: clean
> diff --git a/stubdom/vtpm/minios.cfg b/stubdom/vtpm/minios.cfg
> new file mode 100644
> index 0000000..31652ee
> --- /dev/null
> +++ b/stubdom/vtpm/minios.cfg
> @@ -0,0 +1,14 @@
> +CONFIG_TPMFRONT=y
> +CONFIG_TPM_TIS=n
> +CONFIG_TPMBACK=y
> +CONFIG_START_NETWORK=n
> +CONFIG_TEST=n
> +CONFIG_PCIFRONT=n
> +CONFIG_BLKFRONT=y
> +CONFIG_NETFRONT=n
> +CONFIG_FBFRONT=n
> +CONFIG_KBDFRONT=n
> +CONFIG_CONSFRONT=n
> +CONFIG_XENBUS=y
> +CONFIG_LWIP=n
> +CONFIG_XC=n
> diff --git a/stubdom/vtpm/vtpm.c b/stubdom/vtpm/vtpm.c
> new file mode 100644
> index 0000000..71aef78
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.c
> @@ -0,0 +1,404 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <syslog.h>
> +#include <stdbool.h>
> +#include <errno.h>
> +#include <sys/time.h>
> +#include <xen/xen.h>
> +#include <tpmback.h>
> +#include <tpmfront.h>
> +
> +#include <polarssl/entropy.h>
> +#include <polarssl/ctr_drbg.h>
> +
> +#include "tpm/tpm_emulator_extern.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm.h"
> +#include "vtpm_cmd.h"
> +#include "vtpm_pcrs.h"
> +#include "vtpmblk.h"
> +
> +#define TPM_LOG_INFO LOG_INFO
> +#define TPM_LOG_ERROR LOG_ERR
> +#define TPM_LOG_DEBUG LOG_DEBUG
> +
> +/* Global commandline options - default values */
> +struct Opt_args opt_args = {
> +   .startup = ST_CLEAR,
> +   .loglevel = TPM_LOG_INFO,
> +   .hwinitpcrs = VTPM_PCRNONE,
> +   .tpmconf = 0,
> +   .enable_maint_cmds = false,
> +};
> +
> +static uint32_t badords[32];
> +static unsigned int n_badords = 0;
> +
> +entropy_context entropy;
> +ctr_drbg_context ctr_drbg;
> +
> +struct tpmfront_dev* tpmfront_dev;
> +
> +void vtpm_get_extern_random_bytes(void *buf, size_t nbytes) {
> +   ctr_drbg_random(&ctr_drbg, buf, nbytes);
> +}
> +
> +int vtpm_read_from_file(uint8_t **data, size_t *data_length) {
> +   return read_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_write_to_file(uint8_t *data, size_t data_length) {
> +   return write_vtpmblk(tpmfront_dev, data, data_length);
> +}
> +
> +int vtpm_extern_init_fake(void) {
> +   return 0;
> +}
> +
> +void vtpm_extern_release_fake(void) { }
> +
> +
> +void vtpm_log(int priority, const char *fmt, ...) {
> +   if(opt_args.loglevel >= priority) {
> +      va_list v;
> +      va_start(v, fmt);
> +      vprintf(fmt, v);
> +      va_end(v);
> +   }
> +}
> +
> +static uint64_t vtpm_get_ticks(void)
> +{
> +  static uint64_t old_t = 0;
> +  uint64_t new_t, res_t;
> +  struct timeval tv;
> +  gettimeofday(&tv, NULL);
> +  new_t = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
> +  res_t = (old_t > 0) ? new_t - old_t : 0;
> +  old_t = new_t;
> +  return res_t;
> +}
> +
> +
> +static int tpm_entropy_source(void* dummy, unsigned char* data, size_t len, size_t* olen) {
> +   UINT32 sz = len;
> +   TPM_RESULT rc = VTPM_GetRandom(tpmfront_dev, data, &sz);
> +   *olen = sz;
> +   return rc == TPM_SUCCESS ? 0 : POLARSSL_ERR_ENTROPY_SOURCE_FAILED;
> +}
> +
> +int init_random(void) {
> +   /* Initialize the rng */
> +   entropy_init(&entropy);
> +   entropy_add_source(&entropy, tpm_entropy_source, NULL, 0);
> +   entropy_gather(&entropy);
> +   ctr_drbg_init(&ctr_drbg, entropy_func, &entropy, NULL, 0);
> +   ctr_drbg_set_prediction_resistance( &ctr_drbg, CTR_DRBG_PR_OFF );
> +
> +   return 0;
> +}
> +
> +int check_ordinal(tpmcmd_t* tpmcmd) {
> +   TPM_COMMAND_CODE ord;
> +   UINT32 len = 4;
> +   BYTE* ptr;
> +   unsigned int i;
> +
> +   if(tpmcmd->req_len < 10) {
> +      return true;
> +   }
> +
> +   ptr = tpmcmd->req + 6;
> +   tpm_unmarshal_UINT32(&ptr, &len, &ord);
> +
> +   for(i = 0; i < n_badords; ++i) {
> +      if(ord == badords[i]) {
> +         error("Disabled command ordinal (%" PRIu32 ") requested!\n", ord);
> +         return false;
> +      }
> +   }
> +   return true;
> +}
> +
> +static void main_loop(void) {
> +   tpmcmd_t* tpmcmd = NULL;
> +   domid_t domid;              /* Domid of frontend */
> +   unsigned int handle;        /* handle of frontend */
> +   int res = -1;
> +
> +   info("VTPM Initializing\n");
> +
> +   /* Set required tpm config args */
> +   opt_args.tpmconf |= TPM_CONF_STRONG_PERSISTENCE;
> +   opt_args.tpmconf &= ~TPM_CONF_USE_INTERNAL_PRNG;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_EK;
> +   opt_args.tpmconf |= TPM_CONF_GENERATE_SEED_DAA;
> +
> +   /* Initialize the emulator */
> +   tpm_emulator_init(opt_args.startup, opt_args.tpmconf);
> +
> +   /* Initialize any requested PCRs with hardware TPM values */
> +   if(vtpm_initialize_hw_pcrs(tpmfront_dev, opt_args.hwinitpcrs) != TPM_SUCCESS) {
> +      error("Failed to initialize PCRs with hardware TPM values");
> +      goto abort_postpcrs;
> +   }
> +
> +   /* Wait for the frontend domain to connect */
> +   info("Waiting for frontend domain to connect..");
> +   if(tpmback_wait_for_frontend_connect(&domid, &handle) == 0) {
> +      info("VTPM attached to Frontend %u/%u", (unsigned int) domid, handle);
> +   } else {
> +      error("Unable to attach to a frontend");
> +   }
> +
> +   tpmcmd = tpmback_req(domid, handle);
> +   while(tpmcmd) {
> +      /* Handle the request */
> +      if(tpmcmd->req_len) {
> +        tpmcmd->resp = NULL;
> +        tpmcmd->resp_len = 0;
> +
> +         /* First check for disabled ordinals */
> +         if(!check_ordinal(tpmcmd)) {
> +            create_error_response(tpmcmd, TPM_BAD_ORDINAL);
> +         }
> +         /* If not disabled, do the command */
> +         else {
> +            if((res = tpm_handle_command(tpmcmd->req, tpmcmd->req_len, &tpmcmd->resp, &tpmcmd->resp_len)) != 0) {
> +               error("tpm_handle_command() failed");
> +               create_error_response(tpmcmd, TPM_FAIL);
> +            }
> +         }
> +      }
> +
> +      /* Send the response */
> +      tpmback_resp(tpmcmd);
> +
> +      /* Wait for the next request */
> +      tpmcmd = tpmback_req(domid, handle);
> +
> +   }
> +
> +abort_postpcrs:
> +   info("VTPM Shutting down\n");
> +
> +   tpm_emulator_shutdown();
> +}
> +
> +int parse_cmd_line(int argc, char** argv) {
> +   char sval[25];
> +   char* logstr = NULL;
> +   /* Parse the command strings */
> +   for(unsigned int i = 1; i < argc; ++i) {
> +      if (sscanf(argv[i], "loglevel=%25s", sval) == 1){
> +        if (!strcmp(sval, "debug")) {
> +           opt_args.loglevel = TPM_LOG_DEBUG;
> +           logstr = "debug";
> +        }
> +        else if (!strcmp(sval, "info")) {
> +           logstr = "info";
> +           opt_args.loglevel = TPM_LOG_INFO;
> +        }
> +        else if (!strcmp(sval, "error")) {
> +           logstr = "error";
> +           opt_args.loglevel = TPM_LOG_ERROR;
> +        }
> +      }
> +      else if (!strcmp(argv[i], "clear")) {
> +        opt_args.startup = ST_CLEAR;
> +      }
> +      else if (!strcmp(argv[i], "save")) {
> +        opt_args.startup = ST_SAVE;
> +      }
> +      else if (!strcmp(argv[i], "deactivated")) {
> +        opt_args.startup = ST_DEACTIVATED;
> +      }
> +      else if (!strncmp(argv[i], "maintcmds=", 10)) {
> +         if(!strcmp(argv[i] + 10, "1")) {
> +            opt_args.enable_maint_cmds = true;
> +         } else if(!strcmp(argv[i] + 10, "0")) {
> +            opt_args.enable_maint_cmds = false;
> +         }
> +      }
> +      else if(!strncmp(argv[i], "hwinitpcr=", 10)) {
> +         char *pch = argv[i] + 10;
> +         unsigned int v1, v2;
> +         pch = strtok(pch, ",");
> +         while(pch != NULL) {
> +            if(!strcmp(pch, "all")) {
> +               //Set all
> +               opt_args.hwinitpcrs = VTPM_PCRALL;
> +            } else if(!strcmp(pch, "none")) {
> +               //Set none
> +               opt_args.hwinitpcrs = VTPM_PCRNONE;
> +            } else if(sscanf(pch, "%u", &v1) == 1) {
> +               //Set one
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               opt_args.hwinitpcrs |= (1 << v1);
> +            } else if(sscanf(pch, "%u-%u", &v1, &v2) == 2) {
> +               //Set range
> +               if(v1 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v1);
> +                  return -1;
> +               }
> +               if(v2 >= TPM_NUM_PCR) {
> +                  error("hwinitpcr error: Invalid PCR index %u", v2);
> +                  return -1;
> +               }
> +               if(v2 < v1) {
> +                  unsigned tp = v1;
> +                  v1 = v2;
> +                  v2 = tp;
> +               }
> +               for(unsigned int i = v1; i <= v2; ++i) {
> +                  opt_args.hwinitpcrs |= (1 << i);
> +               }
> +            } else {
> +               error("hwinitpcr error: Invalid PCR specification: %s", pch);
> +               return -1;
> +            }
> +            pch = strtok(NULL, ",");
> +         }
> +      }
> +      else {
> +        error("Invalid command line option `%s'", argv[i]);
> +      }
> +
> +   }
> +
> +   /* Check Errors and print results */
> +   switch(opt_args.startup) {
> +      case ST_CLEAR:
> +        info("Startup mode is `clear'");
> +        break;
> +      case ST_SAVE:
> +        info("Startup mode is `save'");
> +        break;
> +      case ST_DEACTIVATED:
> +        info("Startup mode is `deactivated'");
> +        break;
> +      default:
> +        error("Invalid startup mode %d", opt_args.startup);
> +        return -1;
> +   }
> +
> +   if(opt_args.hwinitpcrs & (VTPM_PCRALL))
> +   {
> +      char pcrstr[1024];
> +      char* ptr = pcrstr;
> +
> +      pcrstr[0] = '\0';
> +      info("The following PCRs will be initialized with values from the hardware TPM:");
> +      for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +         if(opt_args.hwinitpcrs & (1 << i)) {
> +            ptr += sprintf(ptr, "%u, ", i);
> +         }
> +      }
> +      /* get rid of the last comma if any numbers were printed */
> +      *(ptr - 2) = '\0';
> +
> +      info("\t%s", pcrstr);
> +   } else {
> +      info("All PCRs initialized to default values");
> +   }
> +
> +   if(!opt_args.enable_maint_cmds) {
> +      info("TPM Maintenance Commands disabled");
> +      badords[n_badords++] = TPM_ORD_CreateMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_LoadMaintenanceArchive;
> +      badords[n_badords++] = TPM_ORD_KillMaintenanceFeature;
> +      badords[n_badords++] = TPM_ORD_LoadManuMaintPub;
> +      badords[n_badords++] = TPM_ORD_ReadManuMaintPub;
> +   } else {
> +      info("TPM Maintenance Commands enabled");
> +   }
> +
> +   info("Log level set to %s", logstr);
> +
> +   return 0;
> +}
> +
> +void cleanup_opt_args(void) {
> +}
> +
> +int main(int argc, char **argv)
> +{
> +   //FIXME: initializing blkfront without this sleep causes the domain to crash on boot
> +   sleep(2);
> +
> +   /* Setup extern function pointers */
> +   tpm_extern_init = vtpm_extern_init_fake;
> +   tpm_extern_release = vtpm_extern_release_fake;
> +   tpm_malloc = malloc;
> +   tpm_free = free;
> +   tpm_log = vtpm_log;
> +   tpm_get_ticks = vtpm_get_ticks;
> +   tpm_get_extern_random_bytes = vtpm_get_extern_random_bytes;
> +   tpm_write_to_storage = vtpm_write_to_file;
> +   tpm_read_from_storage = vtpm_read_from_file;
> +
> +   info("starting TPM Emulator (1.2.%d.%d-%d)", VERSION_MAJOR, VERSION_MINOR, VERSION_BUILD);
> +   if(parse_cmd_line(argc, argv)) {
> +      error("Error parsing commandline\n");
> +      return -1;
> +   }
> +
> +   /* Initialize devices */
> +   init_tpmback();
> +   if((tpmfront_dev = init_tpmfront(NULL)) == NULL) {
> +      error("Unable to initialize tpmfront device");
> +      goto abort_posttpmfront;
> +   }
> +
> +   /* Seed the RNG with entropy from hardware TPM */
> +   if(init_random()) {
> +      error("Unable to initialize RNG");
> +      goto abort_postrng;
> +   }
> +
> +   /* Initialize blkfront device */
> +   if(init_vtpmblk(tpmfront_dev)) {
> +      error("Unable to initialize Blkfront persistent storage");
> +      goto abort_postvtpmblk;
> +   }
> +
> +   /* Run main loop */
> +   main_loop();
> +
> +   /* Shutdown blkfront */
> +   shutdown_vtpmblk();
> +abort_postvtpmblk:
> +abort_postrng:
> +
> +   /* Close devices */
> +   shutdown_tpmfront(tpmfront_dev);
> +abort_posttpmfront:
> +   shutdown_tpmback();
> +
> +   cleanup_opt_args();
> +
> +   return 0;
> +}
> diff --git a/stubdom/vtpm/vtpm.h b/stubdom/vtpm/vtpm.h
> new file mode 100644
> index 0000000..5919e44
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm.h
> @@ -0,0 +1,36 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_H
> +#define VTPM_H
> +
> +#include <stdbool.h>
> +
> +/* For testing */
> +#define VERS_CMD "\x00\xC1\x00\x00\x00\x16\x00\x00\x00\x65\x00\x00\x00\x05\x00\x00\x00\x04\x00\x00\x01\x03"
> +#define VERS_CMD_LEN 22
> +
> +/* Global commandline options */
> +struct Opt_args {
> +   enum StartUp {
> +      ST_CLEAR = 1,
> +      ST_SAVE = 2,
> +      ST_DEACTIVATED = 3
> +   } startup;
> +   unsigned long hwinitpcrs;
> +   int loglevel;
> +   uint32_t tpmconf;
> +   bool enable_maint_cmds;
> +};
> +extern struct Opt_args opt_args;
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_cmd.c b/stubdom/vtpm/vtpm_cmd.c
> new file mode 100644
> index 0000000..7eae98b
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.c
> @@ -0,0 +1,256 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <types.h>
> +#include <xen/xen.h>
> +#include <mm.h>
> +#include <gnttab.h>
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_manager.h"
> +#include "vtpm_cmd.h"
> +#include <tpmback.h>
> +
> +#define TRYFAILGOTO(C) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      goto abort_egress; \
> +   }
> +#define TRYFAILGOTOMSG(C, msg) \
> +   if((C)) { \
> +      status = TPM_FAIL; \
> +      error(msg); \
> +      goto abort_egress; \
> +   }
> +#define CHECKSTATUSGOTO(ret, fname) \
> +   if((ret) != TPM_SUCCESS) { \
> +      error("%s failed with error code (%lu)", fname, (unsigned long) ret); \
> +      status = ord; \
> +      goto abort_egress; \
> +   }
> +
> +#define ERR_MALFORMED "Malformed response from backend"
> +#define ERR_TPMFRONT "Error sending command through frontend device"
> +
> +struct shpage {
> +   void* page;
> +   grant_ref_t grantref;
> +};
> +
> +typedef struct shpage shpage_t;
> +
> +static inline int pack_header(uint8_t** bptr, UINT32* len, TPM_TAG tag, UINT32 size, TPM_COMMAND_CODE ord) {
> +   return *bptr == NULL ||
> +        tpm_marshal_UINT16(bptr, len, tag) ||
> +        tpm_marshal_UINT32(bptr, len, size) ||
> +        tpm_marshal_UINT32(bptr, len, ord);
> +}
> +
> +static inline int unpack_header(uint8_t** bptr, UINT32* len, TPM_TAG* tag, UINT32* size, TPM_COMMAND_CODE* ord) {
> +   return *bptr == NULL ||
> +        tpm_unmarshal_UINT16(bptr, len, tag) ||
> +        tpm_unmarshal_UINT32(bptr, len, size) ||
> +        tpm_unmarshal_UINT32(bptr, len, ord);
> +}
> +
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode) {
> +   TPM_TAG tag;
> +   UINT32 len = tpmcmd->req_len;
> +   uint8_t* respptr;
> +   uint8_t* cmdptr = tpmcmd->req;
> +
> +   if(!tpm_unmarshal_UINT16(&cmdptr, &len, &tag)) {
> +      switch (tag) {
> +         case TPM_TAG_RQU_COMMAND:
> +            tag = TPM_TAG_RSP_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH1_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH1_COMMAND;
> +            break;
> +         case TPM_TAG_RQU_AUTH2_COMMAND:
> +            tag = TPM_TAG_RSP_AUTH2_COMMAND;
> +            break;
> +      }
> +   } else {
> +      tag = TPM_TAG_RSP_COMMAND;
> +   }
> +
> +   tpmcmd->resp_len = len = 10;
> +   tpmcmd->resp = respptr = tpm_malloc(tpmcmd->resp_len);
> +
> +   return pack_header(&respptr, &len, tag, len, errorcode);
> +}
> +
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32 *numbytes) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Ask the real tpm for random bytes for the seed */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_GetRandom;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm command */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, *numbytes));
> +
> +   /* Send cmd, wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen),
> +      ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_GetRandom()");
> +
> +   // Get the number of random bytes in the response
> +   TRYFAILGOTOMSG(tpm_unmarshal_UINT32(&bptr, &len, &size), ERR_MALFORMED);
> +   *numbytes = size;
> +
> +   //Get the random bytes out; the tpm may give us fewer bytes than we want
> +   TRYFAILGOTOMSG(tpm_unmarshal_BYTE_ARRAY(&bptr, &len, bytes, *numbytes), ERR_MALFORMED);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> +
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_LOADHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +
> +   /* Send the command to vtpm_manager */
> +   info("Requesting Encryption key from backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_LoadHashKey()");
> +
> +   /* Get the size of the key */
> +   *data_length = size - VTPM_COMMAND_HEADER_SIZE;
> +
> +   /* Copy the key bits */
> +   *data = malloc(*data_length);
> +   memcpy(*data, bptr, *data_length);
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_LoadHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t* bptr, *resp;
> +   uint8_t* cmdbuf = NULL;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   TPM_TAG tag = VTPM_TAG_REQ;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = VTPM_ORD_SAVEHASHKEY;
> +
> +   /*Create the command*/
> +   len = size = VTPM_COMMAND_HEADER_SIZE + data_length;
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   memcpy(bptr, data, data_length);
> +   bptr += data_length;
> +
> +   /* Send the command to vtpm_manager */
> +   info("Sending encryption key to backend");
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   /* Unpack response header */
> +   bptr = resp;
> +   len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   /* Check return code */
> +   CHECKSTATUSGOTO(ord, "VTPM_SaveHashKey()");
> +
> +   goto egress;
> +abort_egress:
> +   error("VTPM_SaveHashKey failed");
> +egress:
> +   free(cmdbuf);
> +   return status;
> +}
> +
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest) {
> +   TPM_RESULT status = TPM_SUCCESS;
> +   uint8_t *cmdbuf, *resp, *bptr;
> +   size_t resplen = 0;
> +   UINT32 len;
> +
> +   /*Just send a TPM_PCRRead Command to the HW tpm */
> +   TPM_TAG tag = TPM_TAG_RQU_COMMAND;
> +   UINT32 size;
> +   TPM_COMMAND_CODE ord = TPM_ORD_PCRRead;
> +   len = size = sizeof(TPM_TAG) + sizeof(UINT32) + sizeof(TPM_COMMAND_CODE) + sizeof(UINT32);
> +
> +   /*Create the raw tpm cmd */
> +   bptr = cmdbuf = malloc(size);
> +   TRYFAILGOTO(pack_header(&bptr, &len, tag, size, ord));
> +   TRYFAILGOTO(tpm_marshal_UINT32(&bptr, &len, pcrIndex));
> +
> +   /*Send Cmd wait for response */
> +   TRYFAILGOTOMSG(tpmfront_cmd(tpmfront_dev, cmdbuf, size, &resp, &resplen), ERR_TPMFRONT);
> +
> +   bptr = resp; len = resplen;
> +   TRYFAILGOTOMSG(unpack_header(&bptr, &len, &tag, &size, &ord), ERR_MALFORMED);
> +
> +   //Check return status of command
> +   CHECKSTATUSGOTO(ord, "TPM_PCRRead");
> +
> +   //Get the ptr value
> +   memcpy(outDigest, bptr, sizeof(TPM_PCRVALUE));
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cmdbuf);
> +   return status;
> +
> +}
> diff --git a/stubdom/vtpm/vtpm_cmd.h b/stubdom/vtpm/vtpm_cmd.h
> new file mode 100644
> index 0000000..b0bfa22
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_cmd.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef MANAGER_H
> +#define MANAGER_H
> +
> +#include <tpmfront.h>
> +#include <tpmback.h>
> +#include "tpm/tpm_structures.h"
> +
> +/* Create a command response error header */
> +int create_error_response(tpmcmd_t* tpmcmd, TPM_RESULT errorcode);
> +/* Request random bytes from hardware tpm, returns 0 on success */
> +TPM_RESULT VTPM_GetRandom(struct tpmfront_dev* tpmfront_dev, BYTE* bytes, UINT32* numbytes);
> +/* Retrieve 256 bit AES encryption key from manager */
> +TPM_RESULT VTPM_LoadHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t* data_length);
> +/* Manager securely saves our 256 bit AES encryption key */
> +TPM_RESULT VTPM_SaveHashKey(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length);
> +/* Send a TPM_PCRRead command through the manager to the hw tpm */
> +TPM_RESULT VTPM_PCRRead(struct tpmfront_dev* tpmfront_dev, UINT32 pcrIndex, BYTE* outDigest);
> +
> +#endif
> diff --git a/stubdom/vtpm/vtpm_pcrs.c b/stubdom/vtpm/vtpm_pcrs.c
> new file mode 100644
> index 0000000..22a6cef
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.c
> @@ -0,0 +1,43 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include "vtpm_pcrs.h"
> +#include "vtpm_cmd.h"
> +#include "tpm/tpm_data.h"
> +
> +#define PCR_VALUE      tpmData.permanent.data.pcrValue
> +
> +static int write_pcr_direct(unsigned int pcrIndex, uint8_t* val) {
> +   if(pcrIndex >= TPM_NUM_PCR) {
> +      return TPM_BADINDEX;
> +   }
> +   memcpy(&PCR_VALUE[pcrIndex], val, sizeof(TPM_PCRVALUE));
> +   return TPM_SUCCESS;
> +}
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs) {
> +   TPM_RESULT rc = TPM_SUCCESS;
> +   uint8_t digest[sizeof(TPM_PCRVALUE)];
> +
> +   for(unsigned int i = 0; i < TPM_NUM_PCR; ++i) {
> +      if(pcrs & 1 << i) {
> +         if((rc = VTPM_PCRRead(tpmfront_dev, i, digest)) != TPM_SUCCESS) {
> +            error("TPM_PCRRead failed with error : %d", rc);
> +            return rc;
> +         }
> +         write_pcr_direct(i, digest);
> +      }
> +   }
> +
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpm_pcrs.h b/stubdom/vtpm/vtpm_pcrs.h
> new file mode 100644
> index 0000000..11835f9
> --- /dev/null
> +++ b/stubdom/vtpm/vtpm_pcrs.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef VTPM_PCRS_H
> +#define VTPM_PCRS_H
> +
> +#include "tpm/tpm_structures.h"
> +
> +#define VTPM_PCR0 1
> +#define VTPM_PCR1 1 << 1
> +#define VTPM_PCR2 1 << 2
> +#define VTPM_PCR3 1 << 3
> +#define VTPM_PCR4 1 << 4
> +#define VTPM_PCR5 1 << 5
> +#define VTPM_PCR6 1 << 6
> +#define VTPM_PCR7 1 << 7
> +#define VTPM_PCR8 1 << 8
> +#define VTPM_PCR9 1 << 9
> +#define VTPM_PCR10 1 << 10
> +#define VTPM_PCR11 1 << 11
> +#define VTPM_PCR12 1 << 12
> +#define VTPM_PCR13 1 << 13
> +#define VTPM_PCR14 1 << 14
> +#define VTPM_PCR15 1 << 15
> +#define VTPM_PCR16 1 << 16
> +#define VTPM_PCR17 1 << 17
> +#define VTPM_PCR18 1 << 18
> +#define VTPM_PCR19 1 << 19
> +#define VTPM_PCR20 1 << 20
> +#define VTPM_PCR21 1 << 21
> +#define VTPM_PCR22 1 << 22
> +#define VTPM_PCR23 1 << 23
> +
> +#define VTPM_PCRALL ((1 << TPM_NUM_PCR) - 1)
> +#define VTPM_PCRNONE 0
> +
> +#define VTPM_NUMPCRS 24
> +
> +struct tpmfront_dev;
> +
> +TPM_RESULT vtpm_initialize_hw_pcrs(struct tpmfront_dev* tpmfront_dev, unsigned long pcrs);
> +
> +
> +#endif
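The bitmask macros above are easiest to read as a selector over the 24 PCR indices. The sketch below is illustrative only — `VTPM_PCR(n)` and `pcrs_selected()` are invented names, not part of the patch — but it shows how a caller might build and inspect the `pcrs` mask passed to `vtpm_initialize_hw_pcrs()`:

```c
#include <assert.h>

/* Parenthesised mirror of the patch's per-PCR bitmask macros. */
#define VTPM_PCR(n)  (1UL << (n))
#define VTPM_NUMPCRS 24
#define VTPM_PCRALL  ((1UL << VTPM_NUMPCRS) - 1)

/* Count how many PCR indices a mask selects (hypothetical helper). */
static int pcrs_selected(unsigned long mask)
{
    int i, n = 0;
    for (i = 0; i < VTPM_NUMPCRS; i++)
        if (mask & VTPM_PCR(i))
            n++;
    return n;
}
```

A mask such as `VTPM_PCR(0) | VTPM_PCR(17)` asks for only PCR 0 and PCR 17 to be mirrored from the hardware TPM, while `VTPM_PCRALL` selects all 24.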
> diff --git a/stubdom/vtpm/vtpmblk.c b/stubdom/vtpm/vtpmblk.c
> new file mode 100644
> index 0000000..b343bd8
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.c
> @@ -0,0 +1,307 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#include <mini-os/byteorder.h>
> +#include "vtpmblk.h"
> +#include "tpm/tpm_marshalling.h"
> +#include "vtpm_cmd.h"
> +#include "polarssl/aes.h"
> +#include "polarssl/sha1.h"
> +#include <blkfront.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +
> +/*Encryption key and block sizes */
> +#define BLKSZ 16
> +
> +static struct blkfront_dev* blkdev = NULL;
> +static int blkfront_fd = -1;
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev)
> +{
> +   struct blkfront_info blkinfo;
> +   info("Initializing persistent NVM storage\n");
> +
> +   if((blkdev = init_blkfront(NULL, &blkinfo)) == NULL) {
> +      error("BLKIO: ERROR Unable to initialize blkfront");
> +      return -1;
> +   }
> +   if (blkinfo.info & VDISK_READONLY || blkinfo.mode != O_RDWR) {
> +      error("BLKIO: ERROR block device is read only!");
> +      goto error;
> +   }
> +   if((blkfront_fd = blkfront_open(blkdev)) == -1) {
> +      error("Unable to open blkfront file descriptor!");
> +      goto error;
> +   }
> +
> +   return 0;
> +error:
> +   shutdown_blkfront(blkdev);
> +   blkdev = NULL;
> +   return -1;
> +}
> +
> +void shutdown_vtpmblk(void)
> +{
> +   close(blkfront_fd);
> +   blkfront_fd = -1;
> +   blkdev = NULL;
> +}
> +
> +int write_vtpmblk_raw(uint8_t *data, size_t data_length)
> +{
> +   int rc;
> +   uint32_t lenbuf;
> +   debug("Begin Write data=%p len=%u", data, data_length);
> +
> +   lenbuf = cpu_to_be32((uint32_t)data_length);
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if((rc = write(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("write(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   if((rc = write(blkfront_fd, data, data_length)) != data_length) {
> +      error("write(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Wrote %u bytes to NVM persistent storage", data_length);
> +
> +   return 0;
> +}
> +
> +int read_vtpmblk_raw(uint8_t **data, size_t *data_length)
> +{
> +   int rc;
> +   uint32_t lenbuf;
> +
> +   lseek(blkfront_fd, 0, SEEK_SET);
> +   if(( rc = read(blkfront_fd, (uint8_t*)&lenbuf, 4)) != 4) {
> +      error("read(length) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +   *data_length = (size_t) be32_to_cpu(lenbuf);
> +   if(*data_length == 0) {
> +      error("read 0 data_length for NVM");
> +      return -1;
> +   }
> +
> +   *data = tpm_malloc(*data_length);
> +   if(*data == NULL) {
> +      error("tpm_malloc failed for NVM read buffer");
> +      return -1;
> +   }
> +   if((rc = read(blkfront_fd, *data, *data_length)) != *data_length) {
> +      error("read(data) failed! error was %s", strerror(errno));
> +      return -1;
> +   }
> +
> +   info("Read %u bytes from NVM persistent storage", *data_length);
> +   return 0;
> +}
> +
> +int encrypt_vtpmblk(uint8_t* clear, size_t clear_len, uint8_t** cipher, size_t* cipher_len, uint8_t* symkey)
> +{
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   aes_context aes_ctx;
> +   UINT32 temp;
> +   int mod;
> +
> +   uint8_t* clbuf = NULL;
> +
> +   uint8_t* ivptr;
> +   int ivlen;
> +
> +   uint8_t* cptr;      //Cipher block pointer
> +   int clen;   //Cipher block length
> +
> +   /*Create a new 256 bit encryption key */
> +   if(symkey == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   tpm_get_extern_random_bytes(symkey, NVMKEYSZ);
> +
> +   /*Setup initialization vector - random bits and then 4 bytes clear text size at the end*/
> +   temp = sizeof(UINT32);
> +   ivlen = BLKSZ - temp;
> +   tpm_get_extern_random_bytes(iv, ivlen);
> +   ivptr = iv + ivlen;
> +   tpm_marshal_UINT32(&ivptr, &temp, (UINT32) clear_len);
> +
> +   /*The clear text needs to be padded out to a multiple of BLKSZ */
> +   mod = clear_len % BLKSZ;
> +   clen = mod ? clear_len + BLKSZ - mod : clear_len;
> +   clbuf = malloc(clen);
> +   if (clbuf == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   memcpy(clbuf, clear, clear_len);
> +   /* zero out the padding bits - FIXME: better / more secure way to handle these? */
> +   if(clen - clear_len) {
> +      memset(clbuf + clear_len, 0, clen - clear_len);
> +   }
> +
> +   /* Setup the ciphertext buffer */
> +   *cipher_len = BLKSZ + clen;         /*iv + ciphertext */
> +   cptr = *cipher = malloc(*cipher_len);
> +   if (*cipher == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Copy the IV to cipher text blob*/
> +   memcpy(cptr, iv, BLKSZ);
> +   cptr += BLKSZ;
> +
> +   /* Setup encryption */
> +   aes_setkey_enc(&aes_ctx, symkey, 256);
> +
> +   /* Do encryption now */
> +   aes_crypt_cbc(&aes_ctx, AES_ENCRYPT, clen, iv, clbuf, cptr);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(clbuf);
> +   return rc;
> +}
> +
> +int decrypt_vtpmblk(uint8_t* cipher, size_t cipher_len, uint8_t** clear, size_t* clear_len, uint8_t* symkey)
> +{
> +   int rc = 0;
> +   uint8_t iv[BLKSZ];
> +   uint8_t* ivptr;
> +   UINT32 u32, temp;
> +   aes_context aes_ctx;
> +
> +   uint8_t* cptr = cipher;     //cipher block pointer
> +   int clen = cipher_len;      //cipher block length
> +
> +   /* Pull out the initialization vector */
> +   memcpy(iv, cipher, BLKSZ);
> +   cptr += BLKSZ;
> +   clen -= BLKSZ;
> +
> +   /* Setup the clear text buffer */
> +   if((*clear = malloc(clen)) == NULL) {
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Get the length of clear text from last 4 bytes of iv */
> +   temp = sizeof(UINT32);
> +   ivptr = iv + BLKSZ - temp;
> +   tpm_unmarshal_UINT32(&ivptr, &temp, &u32);
> +   *clear_len = u32;
> +
> +   /* Setup decryption */
> +   aes_setkey_dec(&aes_ctx, symkey, 256);
> +
> +   /* Do decryption now */
> +   if ((clen % BLKSZ) != 0) {
> +      error("Decryption Error: Cipher block size was not a multiple of %u", BLKSZ);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   aes_crypt_cbc(&aes_ctx, AES_DECRYPT, clen, iv, cptr, *clear);
> +
> +   goto egress;
> +abort_egress:
> +egress:
> +   return rc;
> +}
> +
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t* data, size_t data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   uint8_t hashkey[HASHKEYSZ];
> +   uint8_t* symkey = hashkey + HASHSZ;
> +
> +   /* Encrypt the data */
> +   if((rc = encrypt_vtpmblk(data, data_length, &cipher, &cipher_len, symkey))) {
> +      goto abort_egress;
> +   }
> +   /* Write to disk */
> +   if((rc = write_vtpmblk_raw(cipher, cipher_len))) {
> +      goto abort_egress;
> +   }
> +   /* Get sha1 hash of data */
> +   sha1(cipher, cipher_len, hashkey);
> +
> +   /* Send hash and key to manager */
> +   if((rc = VTPM_SaveHashKey(tpmfront_dev, hashkey, HASHKEYSZ)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   return rc;
> +}
> +
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t** data, size_t *data_length) {
> +   int rc;
> +   uint8_t* cipher = NULL;
> +   size_t cipher_len = 0;
> +   size_t keysize;
> +   uint8_t* hashkey = NULL;
> +   uint8_t hash[HASHSZ];
> +   uint8_t* symkey;
> +
> +   /* Retrieve the hash and the key from the manager */
> +   if((rc = VTPM_LoadHashKey(tpmfront_dev, &hashkey, &keysize)) != TPM_SUCCESS) {
> +      goto abort_egress;
> +   }
> +   if(keysize != HASHKEYSZ) {
> +      error("Manager returned a hashkey of invalid size! expected %d, actual %d", HASHKEYSZ, (int) keysize);
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +   symkey = hashkey + HASHSZ;
> +
> +   /* Read from disk now */
> +   if((rc = read_vtpmblk_raw(&cipher, &cipher_len))) {
> +      goto abort_egress;
> +   }
> +
> +   /* Compute the hash of the cipher text and compare */
> +   sha1(cipher, cipher_len, hash);
> +   if(memcmp(hash, hashkey, HASHSZ)) {
> +      int i;
> +      error("NVM Storage Checksum failed!");
> +      printf("Expected: ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hashkey[i]);
> +      }
> +      printf("\n");
> +      printf("Actual:   ");
> +      for(i = 0; i < HASHSZ; ++i) {
> +        printf("%02hhX ", hash[i]);
> +      }
> +      printf("\n");
> +      rc = -1;
> +      goto abort_egress;
> +   }
> +
> +   /* Decrypt the blob */
> +   if((rc = decrypt_vtpmblk(cipher, cipher_len, data, data_length, symkey))) {
> +      goto abort_egress;
> +   }
> +   goto egress;
> +abort_egress:
> +egress:
> +   free(cipher);
> +   free(hashkey);
> +   return rc;
> +}
> diff --git a/stubdom/vtpm/vtpmblk.h b/stubdom/vtpm/vtpmblk.h
> new file mode 100644
> index 0000000..282ce6a
> --- /dev/null
> +++ b/stubdom/vtpm/vtpmblk.h
> @@ -0,0 +1,31 @@
> +/*
> + * Copyright (c) 2010-2012 United States Government, as represented by
> + * the Secretary of Defense.  All rights reserved.
> + *
> + * THIS SOFTWARE AND ITS DOCUMENTATION ARE PROVIDED AS IS AND WITHOUT
> + * ANY EXPRESS OR IMPLIED WARRANTIES WHATSOEVER. ALL WARRANTIES
> + * INCLUDING, BUT NOT LIMITED TO, PERFORMANCE, MERCHANTABILITY, FITNESS
> + * FOR A PARTICULAR  PURPOSE, AND NONINFRINGEMENT ARE HEREBY
> + * DISCLAIMED. USERS ASSUME THE ENTIRE RISK AND LIABILITY OF USING THE
> + * SOFTWARE.
> + */
> +
> +#ifndef NVM_H
> +#define NVM_H
> +#include <mini-os/types.h>
> +#include <xen/xen.h>
> +#include <tpmfront.h>
> +
> +#define NVMKEYSZ 32
> +#define HASHSZ 20
> +#define HASHKEYSZ (NVMKEYSZ + HASHSZ)
> +
> +int init_vtpmblk(struct tpmfront_dev* tpmfront_dev);
> +void shutdown_vtpmblk(void);
> +
> +/* Encrypts and writes data to blk device */
> +int write_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t *data, size_t data_length);
> +/* Reads, Decrypts, and returns data from blk device */
> +int read_vtpmblk(struct tpmfront_dev* tpmfront_dev, uint8_t **data, size_t *data_length);
> +
> +#endif
> --
> 1.7.10.4
>
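To summarise the on-disk format the patch's `encrypt_vtpmblk()`/`write_vtpmblk_raw()` implement: a 16-byte IV whose last four bytes carry the big-endian clear-text length, followed by CBC ciphertext padded to a multiple of the 16-byte AES block (the raw layer additionally prefixes a 4-byte total length). The helpers below are an illustrative sketch with invented names and caller-supplied "randomness", not the patch's code:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define BLKSZ 16  /* AES block size used by the patch */

/* Pad a clear-text length up to a multiple of BLKSZ, as encrypt_vtpmblk does. */
static size_t padded_len(size_t clear_len)
{
    size_t mod = clear_len % BLKSZ;
    return mod ? clear_len + BLKSZ - mod : clear_len;
}

/* Build the 16-byte IV: 12 bytes of randomness (supplied by the caller
 * here), then the 32-bit clear-text length in big-endian order at the end. */
static void build_iv(uint8_t iv[BLKSZ], const uint8_t rnd[12], uint32_t clear_len)
{
    memcpy(iv, rnd, BLKSZ - 4);
    iv[12] = (uint8_t)(clear_len >> 24);
    iv[13] = (uint8_t)(clear_len >> 16);
    iv[14] = (uint8_t)(clear_len >> 8);
    iv[15] = (uint8_t)(clear_len);
}

/* Size of the encrypted blob handed to write_vtpmblk_raw: IV + padded text. */
static size_t blob_len(size_t clear_len)
{
    return BLKSZ + padded_len(clear_len);
}
```

Embedding the length in the IV is what lets `decrypt_vtpmblk()` recover the exact clear-text size after stripping the CBC padding.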


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:02:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAIi-0008Gj-MF; Thu, 13 Dec 2012 15:02:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TjAIg-0008GX-SQ
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:02:15 +0000
Received: from [85.158.138.51:34077] by server-5.bemta-3.messagelabs.com id
	59/1A-15136-5FDE9C05; Thu, 13 Dec 2012 15:02:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355410932!20626518!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15948 invoked from network); 13 Dec 2012 15:02:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:02:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 15:02:12 +0000
Message-Id: <50C9FC0302000078000B0344@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 15:02:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,<konrad.wilk@oracle.com>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
 backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> backend_changed might be called multiple times, which will leak
> be->mode. Make sure it will be called only once. Remove some unneeded
> checks. Also the be->mode string was leaked, release the memory on
> device shutdown.

So I decided to make an attempt myself, retaining the current
behavior of allowing multiple calls, yet not having to sprinkle
multiple kfree()-s for be->mode around. Slightly re-structuring
the function made this pretty straightforward.

Jan

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
> 
> incorporate all comments from Jan.
> fold the oneline change to xen_blkbk_remove into this change
> now it's compile tested.
> 
>  drivers/block/xen-blkback/xenbus.c | 69 ++++++++++++++++++--------------------
>  1 file changed, 33 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/xenbus.c 
> b/drivers/block/xen-blkback/xenbus.c
> index f58434c..5ca77c3 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -28,6 +28,7 @@ struct backend_info {
>  	unsigned		major;
>  	unsigned		minor;
>  	char			*mode;
> +	unsigned		alive;
>  };
>  
>  static struct kmem_cache *xen_blkif_cachep;
> @@ -366,6 +367,7 @@ static int xen_blkbk_remove(struct xenbus_device *dev)
>  		be->blkif = NULL;
>  	}
>  
> +	kfree(be->mode);
>  	kfree(be);
>  	dev_set_drvdata(&dev->dev, NULL);
>  	return 0;
> @@ -501,10 +503,14 @@ static void backend_changed(struct xenbus_watch *watch,
>  		= container_of(watch, struct backend_info, backend_watch);
>  	struct xenbus_device *dev = be->dev;
>  	int cdrom = 0;
> -	char *device_type;
> +	char *device_type, *p;
> +	long handle;
>  
>  	DPRINTK("");
>  
> +	if (be->alive)
> +		return;
> +
>  	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
>  			   &major, &minor);
>  	if (XENBUS_EXIST_ERR(err)) {
> @@ -520,12 +526,7 @@ static void backend_changed(struct xenbus_watch *watch,
>  		return;
>  	}
>  
> -	if ((be->major || be->minor) &&
> -	    ((be->major != major) || (be->minor != minor))) {
> -		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not 
> supported.\n",
> -			be->major, be->minor, major, minor);
> -		return;
> -	}
> +	be->alive = 1;
>  
>  	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
>  	if (IS_ERR(be->mode)) {
> @@ -541,39 +542,35 @@ static void backend_changed(struct xenbus_watch *watch,
>  		kfree(device_type);
>  	}
>  
> -	if (be->major == 0 && be->minor == 0) {
> -		/* Front end dir is a number, which is used as the handle. */
> -
> -		char *p = strrchr(dev->otherend, '/') + 1;
> -		long handle;
> -		err = strict_strtoul(p, 0, &handle);
> -		if (err)
> -			return;
> -
> -		be->major = major;
> -		be->minor = minor;
> +	/* Front end dir is a number, which is used as the handle. */
> +	p = strrchr(dev->otherend, '/') + 1;
> +	err = strict_strtoul(p, 0, &handle);
> +	if (err)
> +		return;
>  
> -		err = xen_vbd_create(be->blkif, handle, major, minor,
> -				 (NULL == strchr(be->mode, 'w')), cdrom);
> -		if (err) {
> -			be->major = 0;
> -			be->minor = 0;
> -			xenbus_dev_fatal(dev, err, "creating vbd structure");
> -			return;
> -		}
> +	be->major = major;
> +	be->minor = minor;
>  
> -		err = xenvbd_sysfs_addif(dev);
> -		if (err) {
> -			xen_vbd_free(&be->blkif->vbd);
> -			be->major = 0;
> -			be->minor = 0;
> -			xenbus_dev_fatal(dev, err, "creating sysfs entries");
> -			return;
> -		}
> +	err = xen_vbd_create(be->blkif, handle, major, minor,
> +			 (NULL == strchr(be->mode, 'w')), cdrom);
> +	if (err) {
> +		be->major = 0;
> +		be->minor = 0;
> +		xenbus_dev_fatal(dev, err, "creating vbd structure");
> +		return;
> +	}
>  
> -		/* We're potentially connected now */
> -		xen_update_blkif_status(be->blkif);
> +	err = xenvbd_sysfs_addif(dev);
> +	if (err) {
> +		xen_vbd_free(&be->blkif->vbd);
> +		be->major = 0;
> +		be->minor = 0;
> +		xenbus_dev_fatal(dev, err, "creating sysfs entries");
> +		return;
>  	}
> +
> +	/* We're potentially connected now */
> +	xen_update_blkif_status(be->blkif);
>  }
>  
>  
> -- 
> 1.8.0.1



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:05:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAM3-0008Tz-As; Thu, 13 Dec 2012 15:05:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjAM2-0008Tp-6W
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:05:42 +0000
Received: from [193.109.254.147:28923] by server-13.bemta-14.messagelabs.com
	id 53/DA-01725-5CEE9C05; Thu, 13 Dec 2012 15:05:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355411138!10263400!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12998 invoked from network); 13 Dec 2012 15:05:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:05:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="121017"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:05:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 15:05:38 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjALy-0006kf-Dn; Thu, 13 Dec 2012 15:05:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjALy-0008Im-65;
	Thu, 13 Dec 2012 15:05:38 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.61121.666480.840769@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 15:05:37 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
References: <patchbomb.1354274486@cosworth.uk.xensource.com>
	<d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2] xl: Introduce helper macro for
	option parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH 2 of 2] xl: Introduce helper macro for option parsing"):
> - s/FOREACH_OPT/SWITCH_FOREACH_OPT/
> - Document the macro

Thanks.

> +/*
> + * Wraps def_getopt into a convenient loop+switch to process all arguments.
> + *
> + * _opt:        an int variable, holds the current option during processing.
> + * _opts:       short options, as per getopt_long(3)'s optstring argument.
> + * _lopts:      long options, as per getopt_long(3)'s longopts argument. May
> + *              be null.
> + * _help:       name of this command, for usage string.
> + * _req:        number of non-option command line parameters which are required.

Can we have a pseudo-prototype for this ?  Eg

    *   SWITCH_FOREACH_OPT(int &opt, const char *opts,
    *                      const struct option *longopts,
    *                      const char *commandname,
    *                      int num_opts_req) { ...
    *    case ...

Also, there is no need to prefix macro formal arguments with _.  Why
do you do that ?  We don't do it elsewhere in libxl...

> + * Callers should treat SWITCH_FOREACH_OPT as they would a switch
> + * statement over the value of _opt. Each option given in _opts (or
> + * _lopts) should be handled by a case statement as if it were inside
> + * a switch statement.
> + *
> + * In addition to the options provided in _opts callers must handle
> + * two additional pseudo options:
> + *  0 -- generated if the user passes a -h option. help will be printed,
> + *       caller should return 0.
> + *  2 -- generated if the user does not provide _req non-option arguments,
> + *       caller should return 2.

I don't think you can mean "caller should return".  Your description
of the macro doesn't specify anything in particular about the calling
function so it can't possibly intend for you to return particular
values from it.

Did you mean "cause the program to exit" ?  And if so why not have the
macro (or the function) do that ?

I haven't gone through your call sites to review them...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> + * _help:       name of this command, for usage string.
> + * _req:        number of non-option command line parameters which are required.

Can we have a pseudo-prototype for this?  E.g.

    *   SWITCH_FOREACH_OPT(int &opt, const char *opts,
    *                      const struct option *longopts,
    *                      const char *commandname,
    *                      int num_opts_req) { ...
    *    case ...

Also, there is no need to prefix macro formal arguments with _.  Why
do you do that?  We don't do it elsewhere in libxl...

> + * Callers should treat SWITCH_FOREACH_OPT as they would a switch
> + * statement over the value of _opt. Each option given in _opts (or
> + * _lopts) should be handled by a case statement as if it were inside
> + * a switch statement.
> + *
> + * In addition to the options provided in _opts callers must handle
> + * two additional pseudo options:
> + *  0 -- generated if the user passes a -h option. help will be printed,
> + *       caller should return 0.
> + *  2 -- generated if the user does not provide _req non-option arguments,
> + *       caller should return 2.

I don't think you can mean "caller should return".  Your description
of the macro doesn't specify anything in particular about the calling
function so it can't possibly intend for you to return particular
values from it.

Did you mean "cause the program to exit" ?  And if so why not have the
macro (or the function) do that ?

I haven't gone through your call sites to review them...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:07:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:07:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjANx-00008k-SD; Thu, 13 Dec 2012 15:07:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjANw-00008b-Ka
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:07:40 +0000
Received: from [85.158.143.35:51634] by server-2.bemta-4.messagelabs.com id
	AF/C0-30861-B3FE9C05; Thu, 13 Dec 2012 15:07:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355411253!17164843!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25670 invoked from network); 13 Dec 2012 15:07:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:07:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="121089"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:07:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 15:07:33 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjANp-0006lG-Ck; Thu, 13 Dec 2012 15:07:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjANn-0008J5-6A;
	Thu, 13 Dec 2012 15:07:31 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.61235.10341.776924@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 15:07:31 +0000
To: Christoph Egger <Christoph_Egger@gmx.de>
In-Reply-To: <50C9E919.3080405@gmx.de>
References: <508916B3.2030403@amd.com>
	<20617.13045.486553.172990@mariner.uk.xensource.com>
	<1355395238.10554.71.camel@zakaz.uk.xensource.com>
	<50C9E49C.8050809@gmx.de>
	<20681.58690.426312.833615@mariner.uk.xensource.com>
	<50C9E919.3080405@gmx.de>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream
 qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoph Egger writes ("Re: [Xen-devel] [PATCH] tools: use PREFIX when building upstream qemu"):
> Yes, this is right.
> 
> It is also routinely exposed when you choose a different prefix
> for different xen versions for development purpose.
> I use xen-<c/s> to switch forth and back between different
> xen versions. This way I am always able to use a working version
> and to test a new changeset.

Right.  So we'd appreciate your opinion :-).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAON-0000Be-9S; Thu, 13 Dec 2012 15:08:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TjAOL-0000BT-Rn
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:08:05 +0000
Received: from [85.158.143.35:27500] by server-3.bemta-4.messagelabs.com id
	EA/9D-18211-55FE9C05; Thu, 13 Dec 2012 15:08:05 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355411210!15472263!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16915 invoked from network); 13 Dec 2012 15:06:50 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-21.messagelabs.com with SMTP;
	13 Dec 2012 15:06:50 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 3B724C5617B;
	Thu, 13 Dec 2012 15:06:37 +0000 (GMT)
Date: Thu, 13 Dec 2012 15:06:34 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <95EB29C55F836E775B70A9F5@nimrod.local>
In-Reply-To: <1355398711.10554.90.camel@zakaz.uk.xensource.com>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>	
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Alex Bligh <alex@alex.org.uk>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

--On 13 December 2012 11:38:31 +0000 Ian Campbell <Ian.Campbell@citrix.com> wrote:

> Adding Ian J who looks after the tools portion of the stable branches.
>
> I'm personally not convinced that this change is appropriate for a
> stable branch backport, it's a pretty large feature patch after all and
> not a bug fix.

My prime concern is for our own usage, which I can easily address using
a local build.

But beyond that, my concern is that Xen-4.3 is (as advertised) going
to have qemu-xen as the default device model; no one needing HVM
can currently use it at all (at least if they need live migration,
which I guess most do). That's a bit of a jump.

The Xen docs currently say qemu-xen is supported. Nowhere do they say
"save that under HVM you can't live migrate". I'd argue the lack of
live migration for HVM *is* a bug.

If you ignore the JSON stuff, the patch is quite small. We could always
only enable it as a compile option...

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:09:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAPV-0000J2-Om; Thu, 13 Dec 2012 15:09:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjAPU-0000Io-Sg
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:09:17 +0000
Received: from [85.158.139.83:42605] by server-7.bemta-5.messagelabs.com id
	0D/C9-08009-C9FE9C05; Thu, 13 Dec 2012 15:09:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355411355!25845710!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8839 invoked from network); 13 Dec 2012 15:09:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:09:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="121160"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:09:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 15:09:14 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjAPS-0006ln-V4; Thu, 13 Dec 2012 15:09:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjAPS-0008K2-QU;
	Thu, 13 Dec 2012 15:09:14 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.61338.718589.604442@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 15:09:14 +0000
To: Alex Bligh <alex@alex.org.uk>
In-Reply-To: <95EB29C55F836E775B70A9F5@nimrod.local>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>
	<95EB29C55F836E775B70A9F5@nimrod.local>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Alex Bligh writes ("Re: [RFC] Enabling live-migrate on HVM on qemu-xen device model"):
> But beyond that, my concern is that Xen-4.3 is (as advertised) going
> to have qemu-xen as the default device model; no one needing HVM
> can currently use it at all (at least if they need live migration,
> which I guess most do). That's a bit of a jump.
> 
> The Xen docs currently say qemu-xen is supported. Nowhere do they say
> "save that under HVM you can't live migrate". I'd argue the lack of
> live migration for HVM *is* a bug.
> 
> If you ignore the JSON stuff, the patch is quite small. We could always
> only enable it as a compile option...

I think this is a very good argument.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:09:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:09:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAPf-0000Ka-5K; Thu, 13 Dec 2012 15:09:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1TjAPe-0000K5-4k
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:09:26 +0000
Received: from [193.109.254.147:10553] by server-16.bemta-14.messagelabs.com
	id D8/22-18932-5AFE9C05; Thu, 13 Dec 2012 15:09:25 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355411364!9993490!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0ODE5ODA=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0ODE5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10856 invoked from network); 13 Dec 2012 15:09:25 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 15:09:25 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk3c7qfw==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-094-001.pools.arcor-ip.net [84.57.94.1])
	by smtp.strato.de (jored mo39) (RZmta 31.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id q052daoBDF8NAb ;
	Thu, 13 Dec 2012 16:09:18 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 919921884C; Thu, 13 Dec 2012 16:09:17 +0100 (CET)
Date: Thu, 13 Dec 2012 16:09:17 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121213150917.GA12748@aepfle.de>
References: <50C9FB3202000078000B0338@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C9FB3202000078000B0338@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5558 (2012-10-16)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/blkback: do not leak mode
	property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, Jan Beulich wrote:

> +	if (be->major | be->minor) {

I think a single | happens to work as intended here, but presumably || was meant?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:12:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjASW-0000rj-PN; Thu, 13 Dec 2012 15:12:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TjASU-0000qo-JH
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:12:22 +0000
Received: from [85.158.139.211:30352] by server-14.bemta-5.messagelabs.com id
	C7/EB-09538-550F9C05; Thu, 13 Dec 2012 15:12:21 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355411538!17797856!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6548 invoked from network); 13 Dec 2012 15:12:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:12:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="556850"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 15:12:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 10:12:17 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TjASP-0001nQ-M9;
	Thu, 13 Dec 2012 15:12:17 +0000
Message-ID: <1355411537.8376.52.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 15:12:17 +0000
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, konrad.wilk@oracle.com
Subject: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad

I encountered a bug when taking a CPU offline and then bringing it back
online in an HVM guest. As I'm not very familiar with the HVM code, I
cannot come up with a quick fix.

The HVM DomU is configured with 4 vcpus. After booting to a command
prompt, I perform the following operations:

# echo 0 > /sys/devices/system/cpu/cpu3/online
# echo 1 > /sys/devices/system/cpu/cpu3/online

With Debian's default 2.6.32-5-amd64 kernel, the last log is:

    Booting processor 3 APIC 0x6 ip 0x6000

With my own 3.5 kernel, I'm able to get more logs:

[   44.047358] Booting Node 0 Processor 3 APIC 0x6
[   44.061201] ------------[ cut here ]------------
[   44.065186] kernel BUG at kernel/hrtimer.c:1259!
[   44.065186] invalid opcode: 0000 [#1] SMP
[   44.065186] CPU 3
[   44.065186] Modules linked in:
[   44.065186]
[   44.065186] Pid: 0, comm: swapper/3 Not tainted 3.5.0-xen-evtchn+ #50 Xen HVM domU
[   44.065186] RIP: 0010:[<ffffffff8105682e>]  [<ffffffff8105682e>] hrtimer_interrupt+0x24/0x1a5
[   44.065186] RSP: 0000:ffff88000f463de8  EFLAGS: 00010046
[   44.065186] RAX: ffffffff8105680a RBX: ffff88000f46e640 RCX: 00000000fffffffa
[   44.065186] RDX: 00000000fffffffa RSI: 0000000000000000 RDI: ffff88000f46bd80
[   44.065186] RBP: 0000000000000057 R08: ffff88000e000b40 R09: 0000000000000019
[   44.065186] R10: 0000000000000000 R11: 0000000000000001 R12: ffff88000e6e8e00
[   44.065186] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000
[   44.065186] FS:  0000000000000000(0000) GS:ffff88000f460000(0000) knlGS:0000000000000000
[   44.065186] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   44.065186] CR2: 0000000000000000 CR3: 000000000181b000 CR4: 00000000000007e0
[   44.065186] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   44.065186] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   44.065186] Process swapper/3 (pid: 0, threadinfo ffff88000e62e000, task ffff88000e62aea0)
[   44.065186] Stack:
[   44.065186]  0000000000000001 ffff88000f46e680 ffffffff81013711 00000008cfba9b27
[   44.065186]  00000000fffffffa ffff88000e6e97c0 0000000000000057 ffff88000e6e8e00
[   44.065186]  0000000000000000 0000000000000001 0000000000000000 ffffffff81006954
[   44.065186] Call Trace:
[   44.065186]  <IRQ>
[   44.065186]  [<ffffffff81013711>] ? paravirt_sched_clock+0x5/0x8
[   44.065186]  [<ffffffff81006954>] ? xen_timer_interrupt+0x26/0x162
[   44.065186]  [<ffffffff8109a220>] ? check_for_new_grace_period.isra.32+0x90/0x9a
[   44.065186]  [<ffffffff810956df>] ? handle_irq_event_percpu+0x32/0x1b0
[   44.065186]  [<ffffffff8128f88b>] ? irq_get_handler_data+0x7/0x16
[   44.065186]  [<ffffffff81097e39>] ? handle_percpu_irq+0x3a/0x4f
[   44.065186]  [<ffffffff8128f9ec>] ? __xen_evtchn_do_upcall_l2+0x131/0x1c0
[   44.065186]  [<ffffffff812913d3>] ? xen_evtchn_do_upcall+0x27/0x37
[   44.065186]  [<ffffffff8140081a>] ? xen_hvm_callback_vector+0x6a/0x70
[   44.065186]  <EOI>
[   44.065186]  [<ffffffff81094b8f>] ? cpumask_next+0x17/0x19
[   44.065186]  [<ffffffff813eb75b>] ? start_secondary+0x184/0x1e2
[   44.065186]  [<ffffffff813eb757>] ? start_secondary+0x180/0x1e2
[   44.065186]  [<ffffffff813eb5d7>] ? set_cpu_sibling_map+0x40e/0x40e
[   44.065186] Code: 41 5d 41 5e 41 5f c3 41 57 41 56 41 55 41 54 55 53 48 c7 c3 40 e6 00 00 48 83 ec 28 65 48 03 1c 25 e8 db 00 00 83 7b 18 00 75 02 <0f> 0b 48 ff 43 20 48 bd ff ff ff ff ff ff ff 7f 41 be 03 00 00
[   44.065186] RIP  [<ffffffff8105682e>] hrtimer_interrupt+0x24/0x1a5
[   44.065186]  RSP <ffff88000f463de8>
[   44.065186] ---[ end trace 9366352b116a03db ]---
[   44.065186] Kernel panic - not syncing: Fatal exception in interrupt

And if I offline and online CPU 2 in 2.6.32-5-amd64:

[   27.933928] Booting processor 2 APIC 0x4 ip 0x6000
[   25.708098] Initializing CPU#2
[   25.708098] CPU: L1 I cache: 32K, L1 D cache: 32K
[   25.708098] CPU: L2 cache: 6144K
[   25.708098] CPU 2/0x4 -> Node 0
[   25.708098] CPU: Physical Processor ID: 0
[   25.708098] CPU: Processor Core ID: 4
[   28.028234] CPU2: Intel(R) Core(TM)2 Quad  CPU   Q9450  @ 2.66GHz stepping 07
[   28.069320] checking TSC synchronization [CPU#0 -> CPU#2]: passed.
[   25.708098] installing Xen timer for CPU 2
[   28.098101] CPU0 attaching NULL sched-domain.
[   28.098106] CPU1 attaching NULL sched-domain.
[   28.098110] CPU3 attaching NULL sched-domain.
[   28.098092] ------------[ cut here ]------------
[   28.098092] WARNING: at /build/buildd-linux-2.6_2.6.32-30-amd64-d4MbNM/linux-2.6-2.6.32/debian/build/source_amd64_none/kernel/irq/chip.c:88 unbind_from_irq+0x147/0x159()
[   28.098092] Hardware name: HVM domU
[   28.144127] CPU0 attaching sched-domain:
[   28.144131]  domain 0: span 0-3 level CPU
[   28.144133]   groups: 0 1 2 3
[   28.144139] CPU1 attaching sched-domain:
[   28.144142]  domain 0: span 0-3 level CPU
[   28.144145]   groups: 1 2 3 0
[   28.144150] CPU2 attaching sched-domain:
[   28.144152]  domain 0: span 0-3 level CPU
[   28.144155]   groups: 2 3 0 1
[   28.144160] CPU3 attaching sched-domain:
[   28.144162]  domain 0: span 0-3 level CPU
[   28.144165]   groups: 3 0 1 2
[   28.209159] Destroying IRQ18 without calling free_irq
[   28.215985] Modules linked in: loop parport_pc parport psmouse evdev serio_raw snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr i2c_piix4 i2c_core button processor ext3 jbd mbcache ata_generic ata_piix libata floppy thermal thermal_sys xen_blkfront scsi_mod [last unloaded: scsi_wait_scan]
[   28.224050] Pid: 0, comm: swapper Not tainted 2.6.32-5-amd64 #1
[   28.224050] Call Trace:
[   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
[   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
[   28.224050]  [<ffffffff8104dd7c>] ? warn_slowpath_common+0x77/0xa3
[   28.224050]  [<ffffffff8104de04>] ? warn_slowpath_fmt+0x51/0x59
[   28.224050]  [<ffffffff810e4493>] ? get_partial_node+0x15/0x85
[   28.224050]  [<ffffffff811966fd>] ? kvasprintf+0x41/0x68
[   28.224050]  [<ffffffff8109639e>] ? dynamic_irq_cleanup_x+0x4b/0xc2
[   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
[   28.224050]  [<ffffffff811ef5b7>] ? bind_virq_to_irqhandler+0x14c/0x15d
[   28.224050]  [<ffffffff8100df77>] ? xen_timer_interrupt+0x0/0x18d
[   28.224050]  [<ffffffff812f5121>] ? set_cpu_sibling_map+0x2f4/0x311
[   28.224050]  [<ffffffff8100df0d>] ? xen_setup_timer+0x55/0xa2
[   28.224050]  [<ffffffff8100df71>] ? xen_hvm_setup_cpu_clockevents+0x17/0x1d
[   28.224050]  [<ffffffff812f52fc>] ? start_secondary+0x17c/0x185
[   28.224050] ---[ end trace db1493923b5e103d ]---

The logs for CPU 2 in my 3.5 kernel are identical to those for CPU 3.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:13:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAT2-00011P-Cr; Thu, 13 Dec 2012 15:12:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TjAT1-00011F-SP
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:12:55 +0000
Received: from [85.158.137.99:9318] by server-15.bemta-3.messagelabs.com id
	B4/41-07921-270F9C05; Thu, 13 Dec 2012 15:12:50 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1355411562!16125301!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16480 invoked from network); 13 Dec 2012 15:12:43 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-11.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:12:43 -0000
Received: (qmail 12674 invoked from network); 13 Dec 2012 17:12:42 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	13 Dec 2012 17:12:42 +0200
Message-ID: <50C9F09D.9080602@gmail.com>
Date: Thu, 13 Dec 2012 17:13:33 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 233248,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_NO_LINK_NMD; NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY;
	NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled], URL: [Enabled], URI DNSBL:
	[Disabled], SQMD: [Enabled, Hits: none, MD5:
	75fcd2a893beba3a27d29356434e6472.fuzzy.fzrbl.org], RTDA: [Enabled,
	Hit: No, Details: v1.4.6; Id: 2m1g3t9.17e7n003h.26ajf], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44366
Subject: [Xen-devel] Libxc code to get MTRR memory type for physical address
	pa
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

What would be the libxc-based equivalent of get_mtrr_type(struct
mtrr_state *m, paddr_t pa) (from xen/arch/x86/hvm/mtrr.c)?

I've searched the source code and found this:

struct hvm_hw_mtrr {
#define MTRR_VCNT 8
#define NUM_FIXED_MSR 11
     uint64_t msr_pat_cr;
     /* mtrr physbase & physmask msr pair*/
     uint64_t msr_mtrr_var[MTRR_VCNT*2];
     uint64_t msr_mtrr_fixed[NUM_FIXED_MSR];
     uint64_t msr_mtrr_cap;
     uint64_t msr_mtrr_def_type;
};

in xen/include/public/arch-x86/hvm/save.h. I can retrieve that using
xc_domain_hvm_getcontext_partial(), but what would be the best way to
get the uint8_t result, for a given 'pa', that get_mtrr_type() returns?
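For what it's worth, once you have the hvm_hw_mtrr contents, the
variable-range part of the lookup can be approximated in userspace along
these lines. This is only a simplified sketch (the helper name is made
up): it ignores the fixed-range MTRRs for pa below 1MB and the
overlap/precedence rules that get_mtrr_type() also handles, using the
PHYSBASEn/PHYSMASKn field layout from the x86 SDM:

```c
#include <stdint.h>

#define MTRR_VCNT            8
#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_DEF_ENABLE      (1ULL << 11)   /* IA32_MTRR_DEF_TYPE.E */
#define MTRR_MASK_VALID      (1ULL << 11)   /* PHYSMASKn.V */

/* Hypothetical helper: classify physical address 'pa' from the
 * msr_mtrr_var[] pairs and msr_mtrr_def_type as retrieved via
 * xc_domain_hvm_getcontext_partial().  Simplified: no fixed-range
 * handling, first matching variable range wins. */
static uint8_t var_mtrr_type(const uint64_t var[MTRR_VCNT * 2],
                             uint64_t def_type, uint64_t pa)
{
    unsigned i;

    if (!(def_type & MTRR_DEF_ENABLE))
        return MTRR_TYPE_UNCACHABLE;        /* MTRRs disabled: all UC */

    for (i = 0; i < MTRR_VCNT; i++) {
        uint64_t base = var[2 * i];         /* PHYSBASEn: type in bits 7:0 */
        uint64_t mask = var[2 * i + 1];     /* PHYSMASKn: valid in bit 11 */

        if (!(mask & MTRR_MASK_VALID))
            continue;
        mask &= ~0xFFFULL;                  /* keep the address bits only */
        if ((pa & mask) == (base & mask))
            return (uint8_t)(base & 0xFF);  /* type of the matching range */
    }
    return (uint8_t)(def_type & 0xFF);      /* no range matched: default */
}
```

A full reimplementation would also consult msr_mtrr_fixed[] when bit 10
(FE) of the default-type MSR is set and pa is below 1MB, and apply the
UC-wins combining rule when several variable ranges overlap.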

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:13:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:13:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjATh-00017l-RR; Thu, 13 Dec 2012 15:13:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TjATg-00017c-Po
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:13:36 +0000
Received: from [85.158.137.99:25710] by server-14.bemta-3.messagelabs.com id
	D9/48-27443-B90F9C05; Thu, 13 Dec 2012 15:13:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355411600!16090687!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11171 invoked from network); 13 Dec 2012 15:13:20 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:13:20 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 15:13:19 +0000
Message-Id: <50C9FE9F02000078000B0371@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 15:13:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <50C9FB3202000078000B0338@nat28.tlf.novell.com>
	<20121213150917.GA12748@aepfle.de>
In-Reply-To: <20121213150917.GA12748@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/blkback: do not leak mode
	property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 16:09, Olaf Hering <olaf@aepfle.de> wrote:
> On Thu, Dec 13, Jan Beulich wrote:
> 
>> +	if (be->major | be->minor) {
> 
> I think a single | works as intended, but || was meant?

No, I indeed meant to use |: it produces better code even with a
compiler that doesn't know of that transformation.

Jan
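To spell the point out with a toy example (a sketch, not the actual blkback code): both forms below agree for side-effect-free operands, but the bitwise version ORs the two values and tests once, while the logical version is specified to short-circuit and so can cost an extra compare-and-branch unless the compiler proves the transformation safe.

```c
/* Bitwise form: OR the two values together, then a single test. */
static int any_set_bitor(unsigned int major, unsigned int minor)
{
    return (major | minor) != 0;
}

/* Logical form: short-circuits, so a naive compiler emits two tests
 * and a branch between them. */
static int any_set_logical(unsigned int major, unsigned int minor)
{
    return major != 0 || minor != 0;
}
```

Since neither operand has side effects here, the two are semantically interchangeable and the choice is purely about generated code.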


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:15:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAVt-0001OH-DC; Thu, 13 Dec 2012 15:15:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <matthew.fioravante@jhuapl.edu>) id 1TjAVq-0001O2-Ty
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:15:51 +0000
Received: from [85.158.139.211:11345] by server-14.bemta-5.messagelabs.com id
	29/B4-09538-621F9C05; Thu, 13 Dec 2012 15:15:50 +0000
X-Env-Sender: matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355411746!18558668!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31559 invoked from network); 13 Dec 2012 15:15:48 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 15:15:48 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 146c_fbba_8cfb9bc4_f6b5_44b1_9e21_e2e6ff3c2dfd;
	Thu, 13 Dec 2012 10:15:39 -0500
Message-ID: <50C9F111.7060907@jhuapl.edu>
Date: Thu, 13 Dec 2012 10:15:29 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
	<1355400151.10554.103.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355400151.10554.103.camel@zakaz.uk.xensource.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4062266918116992738=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============4062266918116992738==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070701040408070806090007"

This is a cryptographically signed message in MIME format.

--------------ms070701040408070806090007
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12/13/2012 07:02 AM, Ian Campbell wrote:
> On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
>> Please rerun autoconf after commiting this patch
>>
>> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> This fails for me with :
>          /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/test.o: In function `app_main':
>          /local/scratch/ianc/devel/committer.git/extras/mini-os/test.c:511: multiple definition of `app_main'
>          /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/main.o:/local/scratch/ianc/devel/committer.git/extras/mini-os/main.c:187: first defined here
>          make[2]: *** [/local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/mini-os] Error 1
>          make[2]: Leaving directory `/local/scratch/ianc/devel/committer.git/extras/mini-os'
>          make[1]: *** [caml-stubdom] Error 2
>          make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
>          make: *** [install-stubdom] Error 2
>
> I'm only guessing it was this patch, but it was somewhere in this
> series.
Actually it looks like caml itself is just broken. The original stubdom
makefile did not include it as a build target, but my configure.ac
erroneously enables it by default. The multiple-definition problem
occurs because, like c-stubdom, its minios.cfg enables CONFIG_TEST,
which defines a main function; caml also defines a main function of
its own.

Patch incoming
>
> I'd already done all the autoconf faff and updated .*ignore for you
> (adding autom4te.cache, config.log and config.status as appropriate). So
> rather than resending please could you provide an incremental fix
> against:
>          git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm
>
> I'll then merge that into the appropriate patch.
>
> It also occurs to me that this series introduces a little bisection blip
> where cmake will be required (i.e. from patch 3 until here). The right
> way to do this would have been to put the patch introducing autoconf at
> the start. I'm inclined to just gloss over that for now, but if you feel
> so inclined you could reorder things.
I think the easiest way to fix this is to remove vtpm from the default
build targets in stubdom. That way, if someone does a make or a make
install in stubdom before running autoconf, they won't hit the cmake
dependency.


>
>> ---
>>   autogen.sh                             |    2 +
>>   config/Stubdom.mk.in                   |   45 ++++++++++++++++
>>   {tools/m4 => m4}/curses.m4             |    0
>>   m4/depends.m4                          |   15 ++++++
>>   {tools/m4 => m4}/extfs.m4              |    0
>>   {tools/m4 => m4}/features.m4           |    0
>>   {tools/m4 => m4}/fetcher.m4            |    0
>>   {tools/m4 => m4}/ocaml.m4              |    0
>>   {tools/m4 => m4}/path_or_fail.m4       |    0
>>   {tools/m4 => m4}/pkg.m4                |    0
>>   {tools/m4 => m4}/pthread.m4            |    0
>>   {tools/m4 => m4}/ptyfuncs.m4           |    0
>>   {tools/m4 => m4}/python_devel.m4       |    0
>>   {tools/m4 => m4}/python_version.m4     |    0
>>   {tools/m4 => m4}/savevar.m4            |    0
>>   {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
>>   m4/stubdom.m4                          |   89 ++++++++++++++++++++++++++++++++
>>   {tools/m4 => m4}/uuid.m4               |    0
>>   stubdom/Makefile                       |   55 +++++---------------
>>   stubdom/configure.ac                   |   58 +++++++++++++++++++++
>>   tools/configure.ac                     |   28 +++++-----
>>   21 files changed, 236 insertions(+), 56 deletions(-)
>>   create mode 100644 config/Stubdom.mk.in
>>   rename {tools/m4 => m4}/curses.m4 (100%)
>>   create mode 100644 m4/depends.m4
>>   rename {tools/m4 => m4}/extfs.m4 (100%)
>>   rename {tools/m4 => m4}/features.m4 (100%)
>>   rename {tools/m4 => m4}/fetcher.m4 (100%)
>>   rename {tools/m4 => m4}/ocaml.m4 (100%)
>>   rename {tools/m4 => m4}/path_or_fail.m4 (100%)
>>   rename {tools/m4 => m4}/pkg.m4 (100%)
>>   rename {tools/m4 => m4}/pthread.m4 (100%)
>>   rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
>>   rename {tools/m4 => m4}/python_devel.m4 (100%)
>>   rename {tools/m4 => m4}/python_version.m4 (100%)
>>   rename {tools/m4 => m4}/savevar.m4 (100%)
>>   rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
>>   create mode 100644 m4/stubdom.m4
>>   rename {tools/m4 => m4}/uuid.m4 (100%)
>>   create mode 100644 stubdom/configure.ac
>>
>> diff --git a/autogen.sh b/autogen.sh
>> index 58a71ce..ada482c 100755
>> --- a/autogen.sh
>> +++ b/autogen.sh
>> @@ -2,3 +2,5 @@
>>   cd tools
>>   autoconf
>>   autoheader
>> +cd ../stubdom
>> +autoconf
>> diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
>> new file mode 100644
>> index 0000000..432efd7
>> --- /dev/null
>> +++ b/config/Stubdom.mk.in
>> @@ -0,0 +1,45 @@
>> +# Prefix and install folder
>> +prefix              := @prefix@
>> +PREFIX              := $(prefix)
>> +exec_prefix         := @exec_prefix@
>> +libdir              := @libdir@
>> +LIBDIR              := $(libdir)
>> +
>> +# Path Programs
>> +CMAKE               := @CMAKE@
>> +WGET                := @WGET@ -c
>> +
>> +# A debug build of stubdom? //FIXME: Someone make this do something
>> +debug               := @debug@
>> +vtpm = @vtpm@
>> +
>> +STUBDOM_TARGETS     := @STUBDOM_TARGETS@
>> +STUBDOM_BUILD       := @STUBDOM_BUILD@
>> +STUBDOM_INSTALL     := @STUBDOM_INSTALL@
>> +
>> +ZLIB_VERSION        := @ZLIB_VERSION@
>> +ZLIB_URL            := @ZLIB_URL@
>> +
>> +LIBPCI_VERSION      := @LIBPCI_VERSION@
>> +LIBPCI_URL          := @LIBPCI_URL@
>> +
>> +NEWLIB_VERSION      := @NEWLIB_VERSION@
>> +NEWLIB_URL          := @NEWLIB_URL@
>> +
>> +LWIP_VERSION        := @LWIP_VERSION@
>> +LWIP_URL            := @LWIP_URL@
>> +
>> +GRUB_VERSION        := @GRUB_VERSION@
>> +GRUB_URL            := @GRUB_URL@
>> +
>> +OCAML_VERSION       := @OCAML_VERSION@
>> +OCAML_URL           := @OCAML_URL@
>> +
>> +GMP_VERSION         := @GMP_VERSION@
>> +GMP_URL             := @GMP_URL@
>> +
>> +POLARSSL_VERSION    := @POLARSSL_VERSION@
>> +POLARSSL_URL        := @POLARSSL_URL@
>> +
>> +TPMEMU_VERSION      := @TPMEMU_VERSION@
>> +TPMEMU_URL          := @TPMEMU_URL@
>> diff --git a/tools/m4/curses.m4 b/m4/curses.m4
>> similarity index 100%
>> rename from tools/m4/curses.m4
>> rename to m4/curses.m4
>> diff --git a/m4/depends.m4 b/m4/depends.m4
>> new file mode 100644
>> index 0000000..916e665
>> --- /dev/null
>> +++ b/m4/depends.m4
>> @@ -0,0 +1,15 @@
>> +
>> +AC_DEFUN([AX_DEPENDS_PATH_PROG], [
>> +AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
>> +AS_IF([test "x$$1" = "xn"], [
>> +$2="/$3-disabled-in-configure-script"
>> +], [
>> +AC_PATH_PROG([$2], [$3], [no])
>> +AS_IF([test x"${$2}" = "xno"], [
>> +$1=n
>> +$2="/$3-disabled-in-configure-script"
>> +])
>> +])
>> +])
>> +AC_SUBST($2)
>> +])
>> diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
>> similarity index 100%
>> rename from tools/m4/extfs.m4
>> rename to m4/extfs.m4
>> diff --git a/tools/m4/features.m4 b/m4/features.m4
>> similarity index 100%
>> rename from tools/m4/features.m4
>> rename to m4/features.m4
>> diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
>> similarity index 100%
>> rename from tools/m4/fetcher.m4
>> rename to m4/fetcher.m4
>> diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
>> similarity index 100%
>> rename from tools/m4/ocaml.m4
>> rename to m4/ocaml.m4
>> diff --git a/tools/m4/path_or_fail.m4 b/m4/path_or_fail.m4
>> similarity index 100%
>> rename from tools/m4/path_or_fail.m4
>> rename to m4/path_or_fail.m4
>> diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
>> similarity index 100%
>> rename from tools/m4/pkg.m4
>> rename to m4/pkg.m4
>> diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
>> similarity index 100%
>> rename from tools/m4/pthread.m4
>> rename to m4/pthread.m4
>> diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
>> similarity index 100%
>> rename from tools/m4/ptyfuncs.m4
>> rename to m4/ptyfuncs.m4
>> diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
>> similarity index 100%
>> rename from tools/m4/python_devel.m4
>> rename to m4/python_devel.m4
>> diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
>> similarity index 100%
>> rename from tools/m4/python_version.m4
>> rename to m4/python_version.m4
>> diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
>> similarity index 100%
>> rename from tools/m4/savevar.m4
>> rename to m4/savevar.m4
>> diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
>> similarity index 100%
>> rename from tools/m4/set_cflags_ldflags.m4
>> rename to m4/set_cflags_ldflags.m4
>> diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
>> new file mode 100644
>> index 0000000..0bf0d2c
>> --- /dev/null
>> +++ b/m4/stubdom.m4
>> @@ -0,0 +1,89 @@
>> +AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
>> +AC_ARG_ENABLE([$1],
>> +AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
>> +AX_STUBDOM_INTERNAL([$1], [$2])
>> +],[
>> +AX_ENABLE_STUBDOM([$1], [$2])
>> +])
>> +AC_SUBST([$2])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
>> +AC_ARG_ENABLE([$1],
>> +AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
>> +AX_STUBDOM_INTERNAL([$1], [$2])
>> +],[
>> +AX_DISABLE_STUBDOM([$1], [$2])
>> +])
>> +AC_SUBST([$2])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_CONDITIONAL], [
>> +AC_ARG_ENABLE([$1],
>> +AS_HELP_STRING([--enable-$1], [Build and install $1]),[
>> +AX_STUBDOM_INTERNAL([$1], [$2])
>> +])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_CONDITIONAL_FINISH], [
>> +AS_IF([test "x$$2" = "xy" || test "x$$2" = "x"], [
>> +AX_ENABLE_STUBDOM([$1],[$2])
>> +],[
>> +AX_DISABLE_STUBDOM([$1],[$2])
>> +])
>> +AC_SUBST([$2])
>> +])
>> +
>> +AC_DEFUN([AX_ENABLE_STUBDOM], [
>> +$2=y
>> +STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
>> +STUBDOM_BUILD="$STUBDOM_BUILD $1"
>> +STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
>> +])
>> +
>> +AC_DEFUN([AX_DISABLE_STUBDOM], [
>> +$2=n
>> +])
>> +
>> +dnl Don't call this outside of this file
>> +AC_DEFUN([AX_STUBDOM_INTERNAL], [
>> +AS_IF([test "x$enableval" = "xyes"], [
>> +AX_ENABLE_STUBDOM([$1], [$2])
>> +],[
>> +AS_IF([test "x$enableval" = "xno"],[
>> +AX_DISABLE_STUBDOM([$1], [$2])
>> +])
>> +])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_FINISH], [
>> +AC_SUBST(STUBDOM_TARGETS)
>> +AC_SUBST(STUBDOM_BUILD)
>> +AC_SUBST(STUBDOM_INSTALL)
>> +echo "Will build the following stub domains:"
>> +for x in $STUBDOM_BUILD; do
>> +       echo "  $x"
>> +done
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_LIB], [
>> +AC_ARG_VAR([$1_URL], [Download url for $2])
>> +AS_IF([test "x$$1_URL" = "x"], [
>> +       AS_IF([test "x$extfiles" = "xy"],
>> +               [$1_URL=\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
>> +               [$1_URL="$4"])
>> +       ])
>> +$1_VERSION="$3"
>> +AC_SUBST($1_URL)
>> +AC_SUBST($1_VERSION)
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
>> +AC_ARG_VAR([$1_URL], [Download url for $2])
>> +AS_IF([test "x$$1_URL" = "x"], [
>> +       $1_URL="$4"
>> +       ])
>> +$1_VERSION="$3"
>> +AC_SUBST($1_URL)
>> +AC_SUBST($1_VERSION)
>> +])
>> diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
>> similarity index 100%
>> rename from tools/m4/uuid.m4
>> rename to m4/uuid.m4
>> diff --git a/stubdom/Makefile b/stubdom/Makefile
>> index fc70d88..709b71e 100644
>> --- a/stubdom/Makefile
>> +++ b/stubdom/Makefile
>> @@ -6,44 +6,7 @@ export XEN_OS=MiniOS
>>   export stubdom=y
>>   export debug=y
>>   include $(XEN_ROOT)/Config.mk
>> -
>> -#ZLIB_URL?=http://www.zlib.net
>> -ZLIB_URL=$(XEN_EXTFILES_URL)
>> -ZLIB_VERSION=1.2.3
>> -
>> -#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
>> -LIBPCI_URL?=$(XEN_EXTFILES_URL)
>> -LIBPCI_VERSION=2.2.9
>> -
>> -#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
>> -NEWLIB_URL?=$(XEN_EXTFILES_URL)
>> -NEWLIB_VERSION=1.16.0
>> -
>> -#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
>> -LWIP_URL?=$(XEN_EXTFILES_URL)
>> -LWIP_VERSION=1.3.0
>> -
>> -#GRUB_URL?=http://alpha.gnu.org/gnu/grub
>> -GRUB_URL?=$(XEN_EXTFILES_URL)
>> -GRUB_VERSION=0.97
>> -
>> -#OCAML_URL?=$(XEN_EXTFILES_URL)
>> -OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
>> -OCAML_VERSION=3.11.0
>> -
>> -GMP_VERSION=4.3.2
>> -GMP_URL?=$(XEN_EXTFILES_URL)
>> -#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
>> -
>> -POLARSSL_VERSION=1.1.4
>> -POLARSSL_URL?=$(XEN_EXTFILES_URL)
>> -#POLARSSL_URL?=http://polarssl.org/code/releases
>> -
>> -TPMEMU_VERSION=0.7.4
>> -TPMEMU_URL?=$(XEN_EXTFILES_URL)
>> -#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
>> -
>> -WGET=wget -c
>> +-include $(XEN_ROOT)/config/Stubdom.mk
>>
>>   GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
>>   ifeq ($(XEN_TARGET_ARCH),x86_32)
>> @@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
>>
>>   TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
>>
>> -TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
>> +TARGETS=$(STUBDOM_TARGETS)
>>
>>   .PHONY: all
>>   all: build
>>   ifeq ($(STUBDOM_SUPPORTED),1)
>> -build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
>> +build: genpath $(STUBDOM_BUILD)
>>   else
>>   build: genpath
>>   endif
>> @@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>>          mv tpm_emulator-$(TPMEMU_VERSION) $@
>>          patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
>>          mkdir $@/build
>> -       cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>> +       cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>>          touch $@
>>
>>   TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
>> @@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
>>   #########
>>
>>   ifeq ($(STUBDOM_SUPPORTED),1)
>> -install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
>> +install: genpath install-readme $(STUBDOM_INSTALL)
>>   else
>>   install: genpath
>>   endif
>> @@ -503,6 +466,8 @@ install-grub: pv-grub
>>          $(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
>>          $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-grub/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/pv-grub-$(XEN_TARGET_ARCH).gz"
>>
>> +install-caml: caml-stubdom
>> +
>>   install-xenstore: xenstore-stubdom
>>          $(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
>>          $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
>> @@ -581,3 +546,9 @@ downloadclean: patchclean
>>
>>   .PHONY: distclean
>>   distclean: downloadclean
>> +       -rm ../config/Stubdom.mk
>> +
>> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
>> +$(XEN_ROOT)/config/Stubdom.mk:
>> +       $(error You have to run ./configure before building or installing stubdom)
>> +endif
>> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
>> new file mode 100644
>> index 0000000..db44d4a
>> --- /dev/null
>> +++ b/stubdom/configure.ac
>> @@ -0,0 +1,58 @@
>> +#                                               -*- Autoconf -*-
>> +# Process this file with autoconf to produce a configure script.
>> +
>> +AC_PREREQ([2.67])
>> +AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
>> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>> +AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
>> +AC_CONFIG_FILES([../config/Stubdom.mk])
>> +AC_PREFIX_DEFAULT([/usr])
>> +AC_CONFIG_AUX_DIR([../])
>> +
>> +# M4 Macro includes
>> +m4_include([../m4/stubdom.m4])
>> +m4_include([../m4/features.m4])
>> +m4_include([../m4/path_or_fail.m4])
>> +m4_include([../m4/depends.m4])
>> +
>> +# Enable/disable stub domains
>> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
>> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
>> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
>> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
>> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
>> +AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
>> +AX_STUBDOM_CONDITIONAL([vtpmmgrdom], [vtpmmgr])
>> +
>> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
>> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
>> +
>> +AC_ARG_VAR([CMAKE], [Path to the cmake program])
>> +AC_ARG_VAR([WGET], [Path to wget program])
>> +
>> +# Checks for programs.
>> +AC_PROG_CC
>> +AC_PROG_MAKE_SET
>> +AC_PROG_INSTALL
>> +AX_PATH_PROG_OR_FAIL([WGET], [wget])
>> +
>> +# Checks for programs that depend on a feature
>> +AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake])
>> +
>> +# Stubdom libraries version and url setup
>> +AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
>> +AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
>> +AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
>> +AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
>> +AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
>> +AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
>> +AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
>> +AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
>> +AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
>> +
>> +#Conditionally enable these stubdoms based on the presense of dependencies
>> +AX_STUBDOM_CONDITIONAL_FINISH([vtpm-stubdom], [vtpm])
>> +AX_STUBDOM_CONDITIONAL_FINISH([vtpmmgrdom], [vtpmmgr])
>> +
>> +AX_STUBDOM_FINISH
>> +AC_OUTPUT()
>> diff --git a/tools/configure.ac b/tools/configure.ac
>> index 586313d..971e3e9 100644
>> --- a/tools/configure.ac
>> +++ b/tools/configure.ac
>> @@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
>>   AC_CANONICAL_HOST
>>
>>   # M4 Macro includes
>> -m4_include([m4/savevar.m4])
>> -m4_include([m4/features.m4])
>> -m4_include([m4/path_or_fail.m4])
>> -m4_include([m4/python_version.m4])
>> -m4_include([m4/python_devel.m4])
>> -m4_include([m4/ocaml.m4])
>> -m4_include([m4/set_cflags_ldflags.m4])
>> -m4_include([m4/uuid.m4])
>> -m4_include([m4/pkg.m4])
>> -m4_include([m4/curses.m4])
>> -m4_include([m4/pthread.m4])
>> -m4_include([m4/ptyfuncs.m4])
>> -m4_include([m4/extfs.m4])
>> -m4_include([m4/fetcher.m4])
>> +m4_include([../m4/savevar.m4])
>> +m4_include([../m4/features.m4])
>> +m4_include([../m4/path_or_fail.m4])
>> +m4_include([../m4/python_version.m4])
>> +m4_include([../m4/python_devel.m4])
>> +m4_include([../m4/ocaml.m4])
>> +m4_include([../m4/set_cflags_ldflags.m4])
>> +m4_include([../m4/uuid.m4])
>> +m4_include([../m4/pkg.m4])
>> +m4_include([../m4/curses.m4])
>> +m4_include([../m4/pthread.m4])
>> +m4_include([../m4/ptyfuncs.m4])
>> +m4_include([../m4/extfs.m4])
>> +m4_include([../m4/fetcher.m4])
>>
>>   # Enable/disable options
>>   AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
>> --
>> 1.7.10.4
>>
>



--------------ms070701040408070806090007
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

--------------ms070701040408070806090007--


--===============4062266918116992738==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4062266918116992738==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 15:15:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAVt-0001OH-DC; Thu, 13 Dec 2012 15:15:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <matthew.fioravante@jhuapl.edu>) id 1TjAVq-0001O2-Ty
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:15:51 +0000
Received: from [85.158.139.211:11345] by server-14.bemta-5.messagelabs.com id
	29/B4-09538-621F9C05; Thu, 13 Dec 2012 15:15:50 +0000
X-Env-Sender: matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355411746!18558668!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31559 invoked from network); 13 Dec 2012 15:15:48 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 15:15:48 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 146c_fbba_8cfb9bc4_f6b5_44b1_9e21_e2e6ff3c2dfd;
	Thu, 13 Dec 2012 10:15:39 -0500
Message-ID: <50C9F111.7060907@jhuapl.edu>
Date: Thu, 13 Dec 2012 10:15:29 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
	<1355400151.10554.103.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355400151.10554.103.camel@zakaz.uk.xensource.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4062266918116992738=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============4062266918116992738==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070701040408070806090007"

This is a cryptographically signed message in MIME format.

--------------ms070701040408070806090007
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12/13/2012 07:02 AM, Ian Campbell wrote:
> On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
>> Please rerun autoconf after committing this patch
>>
>> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> This fails for me with :
>          /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/test.o: In function `app_main':
>          /local/scratch/ianc/devel/committer.git/extras/mini-os/test.c:511: multiple definition of `app_main'
>          /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/main.o:/local/scratch/ianc/devel/committer.git/extras/mini-os/main.c:187: first defined here
>          make[2]: *** [/local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/mini-os] Error 1
>          make[2]: Leaving directory `/local/scratch/ianc/devel/committer.git/extras/mini-os'
>          make[1]: *** [caml-stubdom] Error 2
>          make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
>          make: *** [install-stubdom] Error 2
>
> I'm only guessing it was this patch, but it was somewhere in this
> series.
Actually it looks like caml itself is just broken. The original stubdom
makefile did not include it as a build target, but my configure.ac
erroneously enables it by default. The multiple definition problem
occurs because, like c-stubdom, its minios.cfg enables CONFIG_TEST, which
defines a main function, and the caml stubdom also defines one of its own.
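The failure Ian hit is an ordinary linker symbol collision. A minimal standalone sketch of it, assuming only a standard `cc` toolchain is available and using illustrative file contents (not the real mini-os sources, though the main.c/test.c names mirror the mini-os pair):

```shell
# Hypothetical reproduction: two objects that each define app_main(),
# like mini-os main.o and test.o in the build log above.
cat > main.c <<'EOF'
int app_main(void) { return 1; }
int main(void)     { return app_main(); }
EOF
cat > test.c <<'EOF'
int app_main(void) { return 2; }
EOF

cc -c main.c -o main.o
cc -c test.c -o test.o

# Linking both objects fails; GNU ld reports
# "multiple definition of `app_main'", matching the error above.
if cc main.o test.o -o demo 2> link.log; then
    echo "link unexpectedly succeeded"
else
    echo "link failed"        # prints "link failed"
fi
```

The fix is to make sure only one of the linked objects defines the symbol, which is what dropping CONFIG_TEST for the caml stubdom achieves.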

Patch incoming
>
> I'd already done all the autoconf faff and updated .*ignore for you
> (adding autom4te.cache, config.log and config.status as appropriate). So
> rather than resending please could you provide an incremental fix
> against:
>          git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm
>
> I'll then merge that into the appropriate patch.
>
> It also occurs to me that this series introduces a little bisection blip
> where cmake will be required (i.e. from patch 3 until here). The right
> way to do this would have been to put the patch introducing autoconf at
> the start. I'm inclined to just gloss over that for now, but if you feel
> so inclined you could reorder things.
I think the easiest way to fix this is to remove vtpm from the default
build targets in stubdom. That way if someone does a make or a make
install in stubdom before running autoconf they won't hit the cmake
dependency.


>
>> ---
>>   autogen.sh                             |    2 +
>>   config/Stubdom.mk.in                   |   45 ++++++++++++++++
>>   {tools/m4 => m4}/curses.m4             |    0
>>   m4/depends.m4                          |   15 ++++++
>>   {tools/m4 => m4}/extfs.m4              |    0
>>   {tools/m4 => m4}/features.m4           |    0
>>   {tools/m4 => m4}/fetcher.m4            |    0
>>   {tools/m4 => m4}/ocaml.m4              |    0
>>   {tools/m4 => m4}/path_or_fail.m4       |    0
>>   {tools/m4 => m4}/pkg.m4                |    0
>>   {tools/m4 => m4}/pthread.m4            |    0
>>   {tools/m4 => m4}/ptyfuncs.m4           |    0
>>   {tools/m4 => m4}/python_devel.m4       |    0
>>   {tools/m4 => m4}/python_version.m4     |    0
>>   {tools/m4 => m4}/savevar.m4            |    0
>>   {tools/m4 => m4}/set_cflags_ldflags.m4 |    0
>>   m4/stubdom.m4                          |   89 ++++++++++++++++++++++++++++++++
>>   {tools/m4 => m4}/uuid.m4               |    0
>>   stubdom/Makefile                       |   55 +++++---------------
>>   stubdom/configure.ac                   |   58 +++++++++++++++++++++
>>   tools/configure.ac                     |   28 +++++-----
>>   21 files changed, 236 insertions(+), 56 deletions(-)
>>   create mode 100644 config/Stubdom.mk.in
>>   rename {tools/m4 => m4}/curses.m4 (100%)
>>   create mode 100644 m4/depends.m4
>>   rename {tools/m4 => m4}/extfs.m4 (100%)
>>   rename {tools/m4 => m4}/features.m4 (100%)
>>   rename {tools/m4 => m4}/fetcher.m4 (100%)
>>   rename {tools/m4 => m4}/ocaml.m4 (100%)
>>   rename {tools/m4 => m4}/path_or_fail.m4 (100%)
>>   rename {tools/m4 => m4}/pkg.m4 (100%)
>>   rename {tools/m4 => m4}/pthread.m4 (100%)
>>   rename {tools/m4 => m4}/ptyfuncs.m4 (100%)
>>   rename {tools/m4 => m4}/python_devel.m4 (100%)
>>   rename {tools/m4 => m4}/python_version.m4 (100%)
>>   rename {tools/m4 => m4}/savevar.m4 (100%)
>>   rename {tools/m4 => m4}/set_cflags_ldflags.m4 (100%)
>>   create mode 100644 m4/stubdom.m4
>>   rename {tools/m4 => m4}/uuid.m4 (100%)
>>   create mode 100644 stubdom/configure.ac
>>
>> diff --git a/autogen.sh b/autogen.sh
>> index 58a71ce..ada482c 100755
>> --- a/autogen.sh
>> +++ b/autogen.sh
>> @@ -2,3 +2,5 @@
>>   cd tools
>>   autoconf
>>   autoheader
>> +cd ../stubdom
>> +autoconf
>> diff --git a/config/Stubdom.mk.in b/config/Stubdom.mk.in
>> new file mode 100644
>> index 0000000..432efd7
>> --- /dev/null
>> +++ b/config/Stubdom.mk.in
>> @@ -0,0 +1,45 @@
>> +# Prefix and install folder
>> +prefix              := @prefix@
>> +PREFIX              := $(prefix)
>> +exec_prefix         := @exec_prefix@
>> +libdir              := @libdir@
>> +LIBDIR              := $(libdir)
>> +
>> +# Path Programs
>> +CMAKE               := @CMAKE@
>> +WGET                := @WGET@ -c
>> +
>> +# A debug build of stubdom? //FIXME: Someone make this do something
>> +debug               := @debug@
>> +vtpm = @vtpm@
>> +
>> +STUBDOM_TARGETS     := @STUBDOM_TARGETS@
>> +STUBDOM_BUILD       := @STUBDOM_BUILD@
>> +STUBDOM_INSTALL     := @STUBDOM_INSTALL@
>> +
>> +ZLIB_VERSION        := @ZLIB_VERSION@
>> +ZLIB_URL            := @ZLIB_URL@
>> +
>> +LIBPCI_VERSION      := @LIBPCI_VERSION@
>> +LIBPCI_URL          := @LIBPCI_URL@
>> +
>> +NEWLIB_VERSION      := @NEWLIB_VERSION@
>> +NEWLIB_URL          := @NEWLIB_URL@
>> +
>> +LWIP_VERSION        := @LWIP_VERSION@
>> +LWIP_URL            := @LWIP_URL@
>> +
>> +GRUB_VERSION        := @GRUB_VERSION@
>> +GRUB_URL            := @GRUB_URL@
>> +
>> +OCAML_VERSION       := @OCAML_VERSION@
>> +OCAML_URL           := @OCAML_URL@
>> +
>> +GMP_VERSION         := @GMP_VERSION@
>> +GMP_URL             := @GMP_URL@
>> +
>> +POLARSSL_VERSION    := @POLARSSL_VERSION@
>> +POLARSSL_URL        := @POLARSSL_URL@
>> +
>> +TPMEMU_VERSION      := @TPMEMU_VERSION@
>> +TPMEMU_URL          := @TPMEMU_URL@
>> diff --git a/tools/m4/curses.m4 b/m4/curses.m4
>> similarity index 100%
>> rename from tools/m4/curses.m4
>> rename to m4/curses.m4
>> diff --git a/m4/depends.m4 b/m4/depends.m4
>> new file mode 100644
>> index 0000000..916e665
>> --- /dev/null
>> +++ b/m4/depends.m4
>> @@ -0,0 +1,15 @@
>> +
>> +AC_DEFUN([AX_DEPENDS_PATH_PROG], [
>> +AS_IF([test "x$$1" = "xy"], [AX_PATH_PROG_OR_FAIL([$2], [$3])], [
>> +AS_IF([test "x$$1" = "xn"], [
>> +$2="/$3-disabled-in-configure-script"
>> +], [
>> +AC_PATH_PROG([$2], [$3], [no])
>> +AS_IF([test x"${$2}" = "xno"], [
>> +$1=n
>> +$2="/$3-disabled-in-configure-script"
>> +])
>> +])
>> +])
>> +AC_SUBST($2)
>> +])
>> diff --git a/tools/m4/extfs.m4 b/m4/extfs.m4
>> similarity index 100%
>> rename from tools/m4/extfs.m4
>> rename to m4/extfs.m4
>> diff --git a/tools/m4/features.m4 b/m4/features.m4
>> similarity index 100%
>> rename from tools/m4/features.m4
>> rename to m4/features.m4
>> diff --git a/tools/m4/fetcher.m4 b/m4/fetcher.m4
>> similarity index 100%
>> rename from tools/m4/fetcher.m4
>> rename to m4/fetcher.m4
>> diff --git a/tools/m4/ocaml.m4 b/m4/ocaml.m4
>> similarity index 100%
>> rename from tools/m4/ocaml.m4
>> rename to m4/ocaml.m4
>> diff --git a/tools/m4/path_or_fail.m4 b/m4/path_or_fail.m4
>> similarity index 100%
>> rename from tools/m4/path_or_fail.m4
>> rename to m4/path_or_fail.m4
>> diff --git a/tools/m4/pkg.m4 b/m4/pkg.m4
>> similarity index 100%
>> rename from tools/m4/pkg.m4
>> rename to m4/pkg.m4
>> diff --git a/tools/m4/pthread.m4 b/m4/pthread.m4
>> similarity index 100%
>> rename from tools/m4/pthread.m4
>> rename to m4/pthread.m4
>> diff --git a/tools/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
>> similarity index 100%
>> rename from tools/m4/ptyfuncs.m4
>> rename to m4/ptyfuncs.m4
>> diff --git a/tools/m4/python_devel.m4 b/m4/python_devel.m4
>> similarity index 100%
>> rename from tools/m4/python_devel.m4
>> rename to m4/python_devel.m4
>> diff --git a/tools/m4/python_version.m4 b/m4/python_version.m4
>> similarity index 100%
>> rename from tools/m4/python_version.m4
>> rename to m4/python_version.m4
>> diff --git a/tools/m4/savevar.m4 b/m4/savevar.m4
>> similarity index 100%
>> rename from tools/m4/savevar.m4
>> rename to m4/savevar.m4
>> diff --git a/tools/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
>> similarity index 100%
>> rename from tools/m4/set_cflags_ldflags.m4
>> rename to m4/set_cflags_ldflags.m4
>> diff --git a/m4/stubdom.m4 b/m4/stubdom.m4
>> new file mode 100644
>> index 0000000..0bf0d2c
>> --- /dev/null
>> +++ b/m4/stubdom.m4
>> @@ -0,0 +1,89 @@
>> +AC_DEFUN([AX_STUBDOM_DEFAULT_ENABLE], [
>> +AC_ARG_ENABLE([$1],
>> +AS_HELP_STRING([--disable-$1], [Build and install $1 (default is ENABLED)]),[
>> +AX_STUBDOM_INTERNAL([$1], [$2])
>> +],[
>> +AX_ENABLE_STUBDOM([$1], [$2])
>> +])
>> +AC_SUBST([$2])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_DEFAULT_DISABLE], [
>> +AC_ARG_ENABLE([$1],
>> +AS_HELP_STRING([--enable-$1], [Build and install $1 (default is DISABLED)]),[
>> +AX_STUBDOM_INTERNAL([$1], [$2])
>> +],[
>> +AX_DISABLE_STUBDOM([$1], [$2])
>> +])
>> +AC_SUBST([$2])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_CONDITIONAL], [
>> +AC_ARG_ENABLE([$1],
>> +AS_HELP_STRING([--enable-$1], [Build and install $1]),[
>> +AX_STUBDOM_INTERNAL([$1], [$2])
>> +])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_CONDITIONAL_FINISH], [
>> +AS_IF([test "x$$2" =3D "xy" || test "x$$2" =3D "x"], [
>> +AX_ENABLE_STUBDOM([$1],[$2])
>> +],[
>> +AX_DISABLE_STUBDOM([$1],[$2])
>> +])
>> +AC_SUBST([$2])
>> +])
>> +
>> +AC_DEFUN([AX_ENABLE_STUBDOM], [
>> +$2=y
>> +STUBDOM_TARGETS="$STUBDOM_TARGETS $2"
>> +STUBDOM_BUILD="$STUBDOM_BUILD $1"
>> +STUBDOM_INSTALL="$STUBDOM_INSTALL install-$2"
>> +])
>> +
>> +AC_DEFUN([AX_DISABLE_STUBDOM], [
>> +$2=n
>> +])
>> +
>> +dnl Don't call this outside of this file
>> +AC_DEFUN([AX_STUBDOM_INTERNAL], [
>> +AS_IF([test "x$enableval" =3D "xyes"], [
>> +AX_ENABLE_STUBDOM([$1], [$2])
>> +],[
>> +AS_IF([test "x$enableval" =3D "xno"],[
>> +AX_DISABLE_STUBDOM([$1], [$2])
>> +])
>> +])
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_FINISH], [
>> +AC_SUBST(STUBDOM_TARGETS)
>> +AC_SUBST(STUBDOM_BUILD)
>> +AC_SUBST(STUBDOM_INSTALL)
>> +echo "Will build the following stub domains:"
>> +for x in $STUBDOM_BUILD; do
>> +       echo "  $x"
>> +done
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_LIB], [
>> +AC_ARG_VAR([$1_URL], [Download url for $2])
>> +AS_IF([test "x$$1_URL" =3D "x"], [
>> +       AS_IF([test "x$extfiles" =3D "xy"],
>> +               [$1_URL=3D\@S|@\@{:@XEN_EXTFILES_URL\@:}@],
>> +               [$1_URL=3D"$4"])
>> +       ])
>> +$1_VERSION=3D"$3"
>> +AC_SUBST($1_URL)
>> +AC_SUBST($1_VERSION)
>> +])
>> +
>> +AC_DEFUN([AX_STUBDOM_LIB_NOEXT], [
>> +AC_ARG_VAR([$1_URL], [Download url for $2])
>> +AS_IF([test "x$$1_URL" =3D "x"], [
>> +       $1_URL=3D"$4"
>> +       ])
>> +$1_VERSION=3D"$3"
>> +AC_SUBST($1_URL)
>> +AC_SUBST($1_VERSION)
>> +])
>> diff --git a/tools/m4/uuid.m4 b/m4/uuid.m4
>> similarity index 100%
>> rename from tools/m4/uuid.m4
>> rename to m4/uuid.m4
>> diff --git a/stubdom/Makefile b/stubdom/Makefile
>> index fc70d88..709b71e 100644
>> --- a/stubdom/Makefile
>> +++ b/stubdom/Makefile
>> @@ -6,44 +6,7 @@ export XEN_OS=MiniOS
>>   export stubdom=y
>>   export debug=y
>>   include $(XEN_ROOT)/Config.mk
>> -
>> -#ZLIB_URL?=http://www.zlib.net
>> -ZLIB_URL=$(XEN_EXTFILES_URL)
>> -ZLIB_VERSION=1.2.3
>> -
>> -#LIBPCI_URL?=http://www.kernel.org/pub/software/utils/pciutils
>> -LIBPCI_URL?=$(XEN_EXTFILES_URL)
>> -LIBPCI_VERSION=2.2.9
>> -
>> -#NEWLIB_URL?=ftp://sources.redhat.com/pub/newlib
>> -NEWLIB_URL?=$(XEN_EXTFILES_URL)
>> -NEWLIB_VERSION=1.16.0
>> -
>> -#LWIP_URL?=http://download.savannah.gnu.org/releases/lwip
>> -LWIP_URL?=$(XEN_EXTFILES_URL)
>> -LWIP_VERSION=1.3.0
>> -
>> -#GRUB_URL?=http://alpha.gnu.org/gnu/grub
>> -GRUB_URL?=$(XEN_EXTFILES_URL)
>> -GRUB_VERSION=0.97
>> -
>> -#OCAML_URL?=$(XEN_EXTFILES_URL)
>> -OCAML_URL?=http://caml.inria.fr/pub/distrib/ocaml-3.11
>> -OCAML_VERSION=3.11.0
>> -
>> -GMP_VERSION=4.3.2
>> -GMP_URL?=$(XEN_EXTFILES_URL)
>> -#GMP_URL?=ftp://ftp.gmplib.org/pub/gmp-$(GMP_VERSION)
>> -
>> -POLARSSL_VERSION=1.1.4
>> -POLARSSL_URL?=$(XEN_EXTFILES_URL)
>> -#POLARSSL_URL?=http://polarssl.org/code/releases
>> -
>> -TPMEMU_VERSION=0.7.4
>> -TPMEMU_URL?=$(XEN_EXTFILES_URL)
>> -#TPMEMU_URL?=http://download.berlios.de/tpm-emulator
>> -
>> -WGET=wget -c
>> +-include $(XEN_ROOT)/config/Stubdom.mk
>>
>>   GNU_TARGET_ARCH:=$(XEN_TARGET_ARCH)
>>   ifeq ($(XEN_TARGET_ARCH),x86_32)
>> @@ -86,12 +49,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
>>
>>   TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
>>
>> -TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
>> +TARGETS=$(STUBDOM_TARGETS)
>>
>>   .PHONY: all
>>   all: build
>>   ifeq ($(STUBDOM_SUPPORTED),1)
>> -build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
>> +build: genpath $(STUBDOM_BUILD)
>>   else
>>   build: genpath
>>   endif
>> @@ -245,7 +208,7 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>>          mv tpm_emulator-$(TPMEMU_VERSION) $@
>>          patch -d $@ -p1 < tpmemu-$(TPMEMU_VERSION).patch;
>>          mkdir $@/build
>> -       cd $@/build; cmake .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>> +       cd $@/build; $(CMAKE) .. -DCMAKE_C_COMPILER=${CC} -DCMAKE_C_FLAGS="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>>          touch $@
>>
>>   TPMEMU_STAMPFILE=$(CROSS_ROOT)/$(GNU_TARGET_ARCH)-xen-elf/lib/libtpm.a
>> @@ -483,7 +446,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
>>   #########
>>
>>   ifeq ($(STUBDOM_SUPPORTED),1)
>> -install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
>> +install: genpath install-readme $(STUBDOM_INSTALL)
>>   else
>>   install: genpath
>>   endif
>> @@ -503,6 +466,8 @@ install-grub: pv-grub
>>          $(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
>>          $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-grub/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/pv-grub-$(XEN_TARGET_ARCH).gz"
>>
>> +install-caml: caml-stubdom
>> +
>>   install-xenstore: xenstore-stubdom
>>          $(INSTALL_DIR) "$(DESTDIR)/usr/lib/xen/boot"
>>          $(INSTALL_DATA) mini-os-$(XEN_TARGET_ARCH)-xenstore/mini-os.gz "$(DESTDIR)/usr/lib/xen/boot/xenstore-stubdom.gz"
>> @@ -581,3 +546,9 @@ downloadclean: patchclean
>>
>>   .PHONY: distclean
>>   distclean: downloadclean
>> +       -rm ../config/Stubdom.mk
>> +
>> +ifeq (,$(findstring clean,$(MAKECMDGOALS)))
>> +$(XEN_ROOT)/config/Stubdom.mk:
>> +       $(error You have to run ./configure before building or installing stubdom)
>> +endif
>> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
>> new file mode 100644
>> index 0000000..db44d4a
>> --- /dev/null
>> +++ b/stubdom/configure.ac
>> @@ -0,0 +1,58 @@
>> +#                                               -*- Autoconf -*-
>> +# Process this file with autoconf to produce a configure script.
>> +
>> +AC_PREREQ([2.67])
>> +AC_INIT([Xen Hypervisor Stub Domains], m4_esyscmd([../version.sh ../xen/Makefile]),
>> +    [xen-devel@lists.xen.org], [xen], [http://www.xen.org/])
>> +AC_CONFIG_SRCDIR([../extras/mini-os/kernel.c])
>> +AC_CONFIG_FILES([../config/Stubdom.mk])
>> +AC_PREFIX_DEFAULT([/usr])
>> +AC_CONFIG_AUX_DIR([../])
>> +
>> +# M4 Macro includes
>> +m4_include([../m4/stubdom.m4])
>> +m4_include([../m4/features.m4])
>> +m4_include([../m4/path_or_fail.m4])
>> +m4_include([../m4/depends.m4])
>> +
>> +# Enable/disable stub domains
>> +AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
>> +AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
>> +AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
>> +AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
>> +AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
>> +AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
>> +AX_STUBDOM_CONDITIONAL([vtpmmgrdom], [vtpmmgr])
>> +
>> +AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of stubdom])
>> +AX_ARG_DEFAULT_ENABLE([extfiles], [Use xen extfiles repository for libraries])
>> +
>> +AC_ARG_VAR([CMAKE], [Path to the cmake program])
>> +AC_ARG_VAR([WGET], [Path to wget program])
>> +
>> +# Checks for programs.
>> +AC_PROG_CC
>> +AC_PROG_MAKE_SET
>> +AC_PROG_INSTALL
>> +AX_PATH_PROG_OR_FAIL([WGET], [wget])
>> +
>> +# Checks for programs that depend on a feature
>> +AX_DEPENDS_PATH_PROG([vtpm], [CMAKE], [cmake])
>> +
>> +# Stubdom libraries version and url setup
>> +AX_STUBDOM_LIB([ZLIB], [zlib], [1.2.3], [http://www.zlib.net])
>> +AX_STUBDOM_LIB([LIBPCI], [libpci], [2.2.9], [http://www.kernel.org/pub/software/utils/pciutils])
>> +AX_STUBDOM_LIB([NEWLIB], [newlib], [1.16.0], [ftp://sources.redhat.com/pub/newlib])
>> +AX_STUBDOM_LIB([LWIP], [lwip], [1.3.0], [http://download.savannah.gnu.org/releases/lwip])
>> +AX_STUBDOM_LIB([GRUB], [grub], [0.97], [http://alpha.gnu.org/gnu/grub])
>> +AX_STUBDOM_LIB_NOEXT([OCAML], [ocaml], [3.11.0], [http://caml.inria.fr/pub/distrib/ocaml-3.11])
>> +AX_STUBDOM_LIB([GMP], [libgmp], [4.3.2], [ftp://ftp.gmplib.org/pub/gmp-4.3.2])
>> +AX_STUBDOM_LIB([POLARSSL], [polarssl], [1.1.4], [http://polarssl.org/code/releases])
>> +AX_STUBDOM_LIB([TPMEMU], [berlios tpm emulator], [0.7.4], [http://download.berlios.de/tpm-emulator])
>> +
>> +#Conditionally enable these stubdoms based on the presence of dependencies
>> +AX_STUBDOM_CONDITIONAL_FINISH([vtpm-stubdom], [vtpm])
>> +AX_STUBDOM_CONDITIONAL_FINISH([vtpmmgrdom], [vtpmmgr])
>> +
>> +AX_STUBDOM_FINISH
>> +AC_OUTPUT()
>> diff --git a/tools/configure.ac b/tools/configure.ac
>> index 586313d..971e3e9 100644
>> --- a/tools/configure.ac
>> +++ b/tools/configure.ac
>> @@ -22,20 +22,20 @@ APPEND_INCLUDES and APPEND_LIB instead when possible.])
>>   AC_CANONICAL_HOST
>>
>>   # M4 Macro includes
>> -m4_include([m4/savevar.m4])
>> -m4_include([m4/features.m4])
>> -m4_include([m4/path_or_fail.m4])
>> -m4_include([m4/python_version.m4])
>> -m4_include([m4/python_devel.m4])
>> -m4_include([m4/ocaml.m4])
>> -m4_include([m4/set_cflags_ldflags.m4])
>> -m4_include([m4/uuid.m4])
>> -m4_include([m4/pkg.m4])
>> -m4_include([m4/curses.m4])
>> -m4_include([m4/pthread.m4])
>> -m4_include([m4/ptyfuncs.m4])
>> -m4_include([m4/extfs.m4])
>> -m4_include([m4/fetcher.m4])
>> +m4_include([../m4/savevar.m4])
>> +m4_include([../m4/features.m4])
>> +m4_include([../m4/path_or_fail.m4])
>> +m4_include([../m4/python_version.m4])
>> +m4_include([../m4/python_devel.m4])
>> +m4_include([../m4/ocaml.m4])
>> +m4_include([../m4/set_cflags_ldflags.m4])
>> +m4_include([../m4/uuid.m4])
>> +m4_include([../m4/pkg.m4])
>> +m4_include([../m4/curses.m4])
>> +m4_include([../m4/pthread.m4])
>> +m4_include([../m4/ptyfuncs.m4])
>> +m4_include([../m4/extfs.m4])
>> +m4_include([../m4/fetcher.m4])
>>
>>   # Enable/disable options
>>   AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
>> --
>> 1.7.10.4
>>
>



--------------ms070701040408070806090007
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIxMzE1MTUyOVowIwYJKoZIhvcNAQkEMRYEFDTwxXHHoE5PlbdK
o4S4494kZVf8MGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYAaKIJnCBbpbQJw0D0rDnQrGbieInjMq3p0
csFudWxT6hbD3RfOAlVgLEDh2gv0t8t63X/fPUbVieV8Zf0jgA1Eos/EiMr/aVTijt36BwdK
uJOrWs34loga74jowaB9P/be+9uXqpDPMVxMRo39NRlEY3nogFI1CjhKAQOSQpMFpgAAAAAA
AA==
--------------ms070701040408070806090007--


--===============4062266918116992738==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4062266918116992738==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 15:23:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAd8-0001tD-H4; Thu, 13 Dec 2012 15:23:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <matthew.fioravante@jhuapl.edu>) id 1TjAd7-0001t8-JT
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:23:21 +0000
Received: from [85.158.139.83:33400] by server-16.bemta-5.messagelabs.com id
	E5/93-09208-8E2F9C05; Thu, 13 Dec 2012 15:23:20 +0000
X-Env-Sender: matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355412181!29600396!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24238 invoked from network); 13 Dec 2012 15:23:03 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:23:03 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6682_aec7_278c6117_c56c_4a37_a65c_ee2b60e32530;
	Thu, 13 Dec 2012 10:23:00 -0500
Message-ID: <50C9F2CA.1010602@jhuapl.edu>
Date: Thu, 13 Dec 2012 10:22:50 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3177938388012268390=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============3177938388012268390==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080402000506050101070506"

This is a cryptographically signed message in MIME format.

--------------ms080402000506050101070506
Content-Type: multipart/mixed;
 boundary="------------030206080003050708020500"

This is a multi-part message in MIME format.
--------------030206080003050708020500
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Ian, this one is special just for you. I'm sending it as an attachment
because my email client will mangle it.
This patch removes the cmake dependency from xen builds done prior to
the stubdom autoconf conversion.

This patch applies on top of [VTPM v7 3/8] vtpm/vtpmmgr and required libs
to stubdom/Makefile.

You can apply it to your tree by doing the following:

git rebase -i <VTPM v7 3/8 revision>

Select to EDIT the 3/8 revision

patch -p1 < 0004-Remove-vtpm-from-default-build-targets.patch

git commit --amend

git rebase --continue

You will hit conflicts with the autoconf patch; just edit
stubdom/Makefile and accept all of the updates from the autoconf patch,
namely changing the TARGETS variable, build rule, and install rule.

git add stubdom/Makefile
git commit
git rebase --continue


If you'd prefer not to go through these gyrations, I can send an updated
full vtpm patchset with the fix merged in.

--------------030206080003050708020500
Content-Type: text/x-patch;
 name="0004-Remove-vtpm-from-default-build-targets.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="0004-Remove-vtpm-from-default-build-targets.patch"

=46rom 6feee4bb2be06b0853558e51d4384df2c4df1b68 Mon Sep 17 00:00:00 2001
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 10:10:54 -0500
Subject: [PATCH 4/9] Remove vtpm from default build targets

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 stubdom/Makefile |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/stubdom/Makefile b/stubdom/Makefile
index fc70d88..357ee70 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -86,12 +86,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
+TARGETS=ioemu c caml grub xenstore
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
+build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
 else
 build: genpath
 endif
@@ -483,7 +483,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
+install: genpath install-readme install-ioemu install-grub install-xenstore
 else
 install: genpath
 endif
-- 
1.7.10.4


--------------030206080003050708020500--

--------------ms080402000506050101070506
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIxMzE1MjI1MFowIwYJKoZIhvcNAQkEMRYEFLvJGn+aej55Ff8M
OiQulXZ5/1VhMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYB/pE5ggRMEQPNGhikKj5yR5XWaRKrFEFMG
aocgQtje5Cld77zt7hC7G/BeL6p31taeXrjFgAbO6mQ96apYNcbVBq4DffQkQ3NdxDYuby1v
pGqq6b0A8DX2mX2oHqHS8U+fD9sJr0XtlDL9fOEbtoWYizMsgj+jU/M8kOnDJRUXywAAAAAA
AA==
--------------ms080402000506050101070506--


--===============3177938388012268390==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3177938388012268390==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 15:23:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAd8-0001tD-H4; Thu, 13 Dec 2012 15:23:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <matthew.fioravante@jhuapl.edu>) id 1TjAd7-0001t8-JT
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:23:21 +0000
Received: from [85.158.139.83:33400] by server-16.bemta-5.messagelabs.com id
	E5/93-09208-8E2F9C05; Thu, 13 Dec 2012 15:23:20 +0000
X-Env-Sender: matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355412181!29600396!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24238 invoked from network); 13 Dec 2012 15:23:03 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:23:03 -0000
Received: from [128.244.206.185] (unknown [128.244.206.185]) by
	pilot.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 6682_aec7_278c6117_c56c_4a37_a65c_ee2b60e32530;
	Thu, 13 Dec 2012 10:23:00 -0500
Message-ID: <50C9F2CA.1010602@jhuapl.edu>
Date: Thu, 13 Dec 2012 10:22:50 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121028 Thunderbird/16.0.2
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3177938388012268390=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============3177938388012268390==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080402000506050101070506"

This is a cryptographically signed message in MIME format.

--------------ms080402000506050101070506
Content-Type: multipart/mixed;
 boundary="------------030206080003050708020500"

This is a multi-part message in MIME format.
--------------030206080003050708020500
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Ian, this one is special just for you. I'm sending it as an attachment
because my email client will mangle it.
This patch will remove the cmake dependency from xen prior to the
autoconf stubdom patch.

This patch applies on top of [VTPM v7 3/8] vtpm/vtpmmgr and required libs
to stubdom/Makefile

You can apply it to your tree by doing the following:

git rebase -i <VTPM v7 3/8 revision>

Select to EDIT the 3/8 revision

patch -p1 < 0004-Remove-vtpm-from-default-build-targets.patch

git commit --amend

git rebase --continue

You will hit conflicts with the autoconf patch; just edit
stubdom/Makefile and accept all of the updates from the autoconf patch,
namely changing the TARGETS variable, build rule, and install rule.

git add stubdom/Makefile
git commit
git rebase --continue


If you'd prefer not to go through these gyrations, I can send an updated
full vtpm patchset with the fix merged in.

--------------030206080003050708020500
Content-Type: text/x-patch;
 name="0004-Remove-vtpm-from-default-build-targets.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="0004-Remove-vtpm-from-default-build-targets.patch"

From 6feee4bb2be06b0853558e51d4384df2c4df1b68 Mon Sep 17 00:00:00 2001
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 10:10:54 -0500
Subject: [PATCH 4/9] Remove vtpm from default build targets

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 stubdom/Makefile |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/stubdom/Makefile b/stubdom/Makefile
index fc70d88..357ee70 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -86,12 +86,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include
 
 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib
 
-TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr
+TARGETS=ioemu c caml grub xenstore
 
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom vtpm-stubdom vtpmmgrdom
+build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
 else
 build: genpath
 endif
@@ -483,7 +483,7 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: genpath install-readme install-ioemu install-grub install-xenstore install-vtpm install-vtpmmgr
+install: genpath install-readme install-ioemu install-grub install-xenstore
 else
 install: genpath
 endif
-- 
1.7.10.4
1.7.10.4


--------------030206080003050708020500--

--------------ms080402000506050101070506
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIDyjCC
A8YwggMvoAMCAQICBD/xyf0wDQYJKoZIhvcNAQEFBQAwLzELMAkGA1UEBhMCVVMxDzANBgNV
BAoTBkpIVUFQTDEPMA0GA1UECxMGQklTRENBMB4XDTEwMDYxMTE4MjIwNloXDTEzMDYxMTE4
NTIwNlowZjELMAkGA1UEBhMCVVMxDzANBgNVBAoTBkpIVUFQTDEPMA0GA1UECxMGUGVvcGxl
MTUwFgYDVQQLEw9WUE5Hcm91cC1CSVNEQ0EwGwYDVQQDExRNYXR0aGV3IEUgRmlvcmF2YW50
ZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAnpbwVSP6o1Nb5lcW7dd3yTo9iBJdi7qz
4nANOMFPK7JOy5npKN1iiousl28U/scUJES55gPwAWYJK3uVyQAsA4adgDKi5DoD1UHDQEwp
bY7iHLJeq0NPr4BqYNqnCFPbE6HC8zSJrr4qKn+gVUQT39SIFqdiIPJwZL8FYTRQ/zsCAwEA
AaOCAbYwggGyMAsGA1UdDwQEAwIHgDArBgNVHRAEJDAigA8yMDEwMDYxMTE4MjIwNlqBDzIw
MTIwNzE3MjI1MjA2WjAbBg0rBgEEAbMlCwMBAQEBBAoWCGZpb3JhbWUxMBsGDSsGAQQBsyUL
AwEBAQIEChIIMDAxMDQyNjEwWAYJYIZIAYb6ax4BBEsMSVRoZSBwcml2YXRlIGtleSBjb3Jy
ZXNwb25kaW5nIHRvIHRoaXMgY2VydGlmaWNhdGUgbWF5IGhhdmUgYmVlbiBleHBvcnRlZC4w
KAYDVR0RBCEwH4EdTWF0dGhldy5GaW9yYXZhbnRlQGpodWFwbC5lZHUwUgYDVR0fBEswSTBH
oEWgQ6RBMD8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJU0RD
QTEOMAwGA1UEAxMFQ1JMNTYwHwYDVR0jBBgwFoAUCDUpmxH52EU2CyWmF2EJMB1yqeswHQYD
VR0OBBYEFO6LYxg6r9wHZ+zdQtBHn1dZ/YTNMAkGA1UdEwQCMAAwGQYJKoZIhvZ9B0EABAww
ChsEVjcuMQMCBLAwDQYJKoZIhvcNAQEFBQADgYEAJO9HQh4YNChVLzuZqK5ARJARD8JoujGZ
fdo75quvg2jXFQe2sEjvLnxJZgm/pv8fdZakq48CWwjYHKuvIp7sDjTEsQfo+y7SpN/N2NvJ
WU5SqfK1VgYtNLRRoGJUB5Q1aZ+Dg95g3kqpyfpUMISJL8IKVLtJVfN4fggFVUYZ9wwxggGr
MIIBpwIBATA3MC8xCzAJBgNVBAYTAlVTMQ8wDQYDVQQKEwZKSFVBUEwxDzANBgNVBAsTBkJJ
U0RDQQIEP/HJ/TAJBgUrDgMCGgUAoIHLMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTEyMTIxMzE1MjI1MFowIwYJKoZIhvcNAQkEMRYEFLvJGn+aej55Ff8M
OiQulXZ5/1VhMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAK
BggqhkiG9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYI
KoZIhvcNAwICASgwDQYJKoZIhvcNAQEBBQAEgYB/pE5ggRMEQPNGhikKj5yR5XWaRKrFEFMG
aocgQtje5Cld77zt7hC7G/BeL6p31taeXrjFgAbO6mQ96apYNcbVBq4DffQkQ3NdxDYuby1v
pGqq6b0A8DX2mX2oHqHS8U+fD9sJr0XtlDL9fOEbtoWYizMsgj+jU/M8kOnDJRUXywAAAAAA
AA==
--------------ms080402000506050101070506--


--===============3177938388012268390==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3177938388012268390==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 15:24:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:24:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAdd-0001vo-4R; Thu, 13 Dec 2012 15:23:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TjAdb-0001ve-1H
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:23:51 +0000
Received: from [85.158.139.83:40242] by server-3.bemta-5.messagelabs.com id
	9B/3F-25441-603F9C05; Thu, 13 Dec 2012 15:23:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355412229!29600606!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30385 invoked from network); 13 Dec 2012 15:23:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:23:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="122108"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:23:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 15:23:49 +0000
Message-ID: <1355412227.10554.140.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 15:23:47 +0000
In-Reply-To: <50C9F111.7060907@jhuapl.edu>
References: <1354817961-22196-1-git-send-email-matthew.fioravante@jhuapl.edu>
	<1354817961-22196-6-git-send-email-matthew.fioravante@jhuapl.edu>
	<1355400151.10554.103.camel@zakaz.uk.xensource.com>
	<50C9F111.7060907@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [VTPM v7 6/8] Add autoconf to stubdom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 15:15 +0000, Matthew Fioravante wrote:
> On 12/13/2012 07:02 AM, Ian Campbell wrote:
> > On Thu, 2012-12-06 at 18:19 +0000, Matthew Fioravante wrote:
> >> Please rerun autoconf after committing this patch
> >>
> >> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> > This fails for me with :
> >          /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/test.o: In function `app_main':
> >          /local/scratch/ianc/devel/committer.git/extras/mini-os/test.c:511: multiple definition of `app_main'
> >          /local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/main.o:/local/scratch/ianc/devel/committer.git/extras/mini-os/main.c:187: first defined here
> >          make[2]: *** [/local/scratch/ianc/devel/committer.git/stubdom/mini-os-x86_64-caml/mini-os] Error 1
> >          make[2]: Leaving directory `/local/scratch/ianc/devel/committer.git/extras/mini-os'
> >          make[1]: *** [caml-stubdom] Error 2
> >          make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
> >          make: *** [install-stubdom] Error 2
> >
> > I'm only guessing it was this patch, but it was somewhere in this
> > series.
> Actually it looks like caml itself is just broken. The original stubdom 
> makefile did not include it as a build target, but my configure.ac 
> erroneously enables it by default. The multiple definition problem 
> occurs because, like c-stubdom, its minios.cfg enables CONFIG_TEST, 
> which defines a main function; caml also defines a main function of its own.
> 
> Patch incoming

Thanks.
> > I'd already done all the autoconf faff and updated .*ignore for you
> > (adding autom4te.cache, config.log and config.status as appropriate). So
> > rather than resending please could you provide an incremental fix
> > against:
> >          git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm
> >
> > I'll then merge that into the appropriate patch.
> >
> > It also occurs to me that this series introduces a little bisection blip
> > where cmake will be required (i.e. from patch 3 until here). The right
> > way to do this would have been to put the patch introducing autoconf at
> > the start. I'm inclined to just gloss over that for now, but if you feel
> > so inclined you could reorder things.
> I think the easiest way to fix this is to remove vtpm from the default 
> build targets in stubdom. That way, if someone does a make or a make 
> install in stubdom before autoconf, they won't hit the cmake dependency.

You mean drop this hunk from "vtpm/vtpmmgr and required libs to
stubdom/Makefile":
@@ -74,12 +86,12 @@ TARGET_CPPFLAGS += -I$(XEN_ROOT)/xen/include

 TARGET_LDFLAGS += -nostdlib -L$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib

-TARGETS=ioemu c caml grub xenstore
+TARGETS=ioemu c caml grub xenstore vtpm vtpmmgr

and re-do it later? That would work. I'd prefer to re-add it in a
completely new patch at the end rather than conflating this with the
autoconf switch.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:32:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:32:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAlb-0002Mv-Do; Thu, 13 Dec 2012 15:32:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TjAlZ-0002Mq-V6
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:32:06 +0000
Received: from [85.158.143.35:33387] by server-3.bemta-4.messagelabs.com id
	82/02-18211-5F4F9C05; Thu, 13 Dec 2012 15:32:05 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355412723!14665862!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1581 invoked from network); 13 Dec 2012 15:32:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:32:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="559965"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 15:32:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 10:32:02 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TjAlW-00025H-CU;
	Thu, 13 Dec 2012 15:32:02 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <konrad.wilk@oracle.com>, <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 15:31:35 +0000
Message-ID: <1355412695-5274-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [PATCH] Add EVTCHNOP_reset in Xen interface header
	files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 include/xen/interface/event_channel.h |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
index 2090881..f494292 100644
--- a/include/xen/interface/event_channel.h
+++ b/include/xen/interface/event_channel.h
@@ -177,6 +177,19 @@ struct evtchn_unmask {
 	evtchn_port_t port;
 };
 
+/*
+ * EVTCHNOP_reset: Close all event channels associated with specified domain.
+ * NOTES:
+ *  1. <dom> may be specified as DOMID_SELF.
+ *  2. Only a sufficiently-privileged domain may specify other than DOMID_SELF.
+ */
+#define EVTCHNOP_reset		 10
+struct evtchn_reset {
+	/* IN parameters. */
+	domid_t dom;
+};
+typedef struct evtchn_reset evtchn_reset_t;
+
 struct evtchn_op {
 	uint32_t cmd; /* EVTCHNOP_* */
 	union {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:32:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAlq-0002Nw-Qm; Thu, 13 Dec 2012 15:32:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <matthew.fioravante@jhuapl.edu>) id 1TjAlp-0002Ni-Jm
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:32:21 +0000
Received: from [85.158.138.51:51349] by server-14.bemta-3.messagelabs.com id
	85/05-27443-405F9C05; Thu, 13 Dec 2012 15:32:20 +0000
X-Env-Sender: matthew.fioravante@jhuapl.edu
X-Msg-Ref: server-6.tower-174.messagelabs.com!1355412737!20756903!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12552 invoked from network); 13 Dec 2012 15:32:19 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 15:32:19 -0000
Received: from anonelbe.jhuapl.edu (unknown [128.244.206.185]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,256bits,AES256-SHA)
	id 1fef_b6ca_9d92c4d8_e67b_41f7_bde7_255676d1ef4d;
	Thu, 13 Dec 2012 10:32:09 -0500
From: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
To: Ian.Campbell@citrix.com,
	xen-devel@lists.xen.org
Date: Thu, 13 Dec 2012 10:31:58 -0500
Message-Id: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Subject: [Xen-devel] [PATCH] Disable caml-stubdom by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 stubdom/configure.ac |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/configure.ac b/stubdom/configure.ac
index db44d4a..384a94a 100644
--- a/stubdom/configure.ac
+++ b/stubdom/configure.ac
@@ -18,7 +18,7 @@ m4_include([../m4/depends.m4])
 # Enable/disable stub domains
 AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
 AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
-AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
+AX_STUBDOM_DEFAULT_DISABLE([caml-stubdom], [caml])
 AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
 AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
 AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:33:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:33:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAn4-0002VP-BV; Thu, 13 Dec 2012 15:33:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TjAn3-0002VE-1A
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:33:37 +0000
Received: from [85.158.143.35:54793] by server-3.bemta-4.messagelabs.com id
	3F/34-18211-055F9C05; Thu, 13 Dec 2012 15:33:36 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355412814!5080736!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26010 invoked from network); 13 Dec 2012 15:33:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:33:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="560302"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 15:33:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 10:33:33 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TjAmz-000267-3y;
	Thu, 13 Dec 2012 15:33:33 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <konrad.wilk@oracle.com>, <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 15:33:05 +0000
Message-ID: <1355412785-5343-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [PATCH] Fix vcpu restore path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The runstate of each vcpu should be restored for all possible CPUs, as should
the vcpu info placement.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch/x86/xen/enlighten.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index ff962d4..bc893e7 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -178,10 +178,11 @@ void xen_vcpu_restore(void)
 {
 	int cpu;
 
-	for_each_online_cpu(cpu) {
+	for_each_possible_cpu(cpu) {
 		bool other_cpu = (cpu != smp_processor_id());
+		bool is_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL);
 
-		if (other_cpu &&
+		if (other_cpu && is_up &&
 		    HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL))
 			BUG();
 
@@ -190,7 +191,7 @@ void xen_vcpu_restore(void)
 		if (have_vcpu_info_placement)
 			xen_vcpu_setup(cpu);
 
-		if (other_cpu &&
+		if (other_cpu && is_up &&
 		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
 			BUG();
 	}
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:34:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAnI-0002XR-P2; Thu, 13 Dec 2012 15:33:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TjAnH-0002X9-2A
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:33:51 +0000
Received: from [85.158.143.99:33087] by server-2.bemta-4.messagelabs.com id
	3F/D7-30861-E55F9C05; Thu, 13 Dec 2012 15:33:50 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355412828!29343472!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19578 invoked from network); 13 Dec 2012 15:33:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:33:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="598361"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:33:48 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Thu, 13 Dec 2012
	10:33:48 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Charles Arnold <carnold@suse.com>, xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 10:34:02 -0500
Thread-Topic: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
Thread-Index: Ac3XsvGLxs+yL8jdSXWEPQNDY3DnLABk/w5Q
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31E33E929@FTLPMAILBOX02.citrite.net>
References: <50C5B6250200009100083DA3@novprvoes0310.provo.novell.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>
	<50C6EC470200009100083E54@novprvoes0310.provo.novell.com>
In-Reply-To: <50C6EC470200009100083E54@novprvoes0310.provo.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of Charles Arnold
> Sent: Tuesday, December 11, 2012 10:18 AM
> To: Ross Philipson; xen-devel
> Subject: Re: [Xen-devel] [PATCH v3 00/04] HVM firmware passthrough
> 
> >>> On 12/11/2012 at 07:11 AM, in message
> <831D55AF5A11D64C9B4B43F59EEBF720A31E33E56D@FTLPMAILBOX02.citrite.net>,
> Ross Philipson <Ross.Philipson@citrix.com> wrote:
> > Yea I guess I should follow up on this. I did not manage to get it
> > into 4.2 but I thought it had clearance for 4.3. Do I need to resubmit
> the patch set?
> 
> It has been several months since you last submitted.
> You probably should resubmit the patches based on the current unstable
> code base.

Yeah, I will try to get that done ASAP. Someone was also interested in some sample code for fetching SMBIOS/ACPI data for passthrough (Art Napor IIRC); I will send that along.

Thanks
Ross

> 
> - Charles
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:39:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:39:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAsJ-0002yX-Sp; Thu, 13 Dec 2012 15:39:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1TjAsI-0002yO-HT; Thu, 13 Dec 2012 15:39:02 +0000
Received: from [193.109.254.147:32638] by server-1.bemta-14.messagelabs.com id
	C3/DA-15901-596F9C05; Thu, 13 Dec 2012 15:39:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355412949!8853222!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17645 invoked from network); 13 Dec 2012 15:35:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:35:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="122883"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:35:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 15:35:48 +0000
Message-ID: <1355412947.10554.147.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Paul Harvey <jhebus@googlemail.com>
Date: Thu, 13 Dec 2012 15:35:47 +0000
In-Reply-To: <CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 15:28 +0000, Paul Harvey wrote:

> ./lsevntchn 1000
>    1: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 72
>    2: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 73

When I mentioned evtchn limitations I meant in dom0, IOW the other end
of all of these. At two evtchns per mini-os domain you'd expect to hit
issues at around 512 domains on a 32-bit domain 0.
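
As a rough sanity check of that estimate (a sketch, assuming the legacy
2-level event channel ABI, under which a 32-bit domain 0 has 32*32 = 1024
event channels):

```python
# Arithmetic behind the ~512-domain estimate.
# Assumption: legacy 2-level event channel ABI, where the channel bitmap
# on a 32-bit dom0 holds 32 words of 32 bits each.
BITS = 32
total_evtchns = BITS * BITS          # 1024 channels on a 32-bit domain 0
evtchns_per_domain = 2               # e.g. console + xenstore per mini-os domain
print(total_evtchns // evtchns_per_domain)  # -> 512
```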

> I have changed the configuration file /etc/security/limits.config and
>  rebooted the machines and assumed that this would have applied the
> new limits to the daemons, but you were right and 

I don't have this file on Debian, so I guess it is particular to
whichever distro you use -- perhaps there is a dependency issue between
the xencommons initscript and whatever initscript applies the settings
from /etc/security/limits.config?

> I killed all the domains and restarted the xenconsoled. This applies
> the new limits: 

Great!

> BUT:
> 
> 
> There is now a buffer overflow happening somewhere which is crashing
> the daemon when creating the 340th domain, 

Not Great! :-/

I've added xen-devel@.

> as shown by strace: 

Unfortunately strace doesn't give the sort of information needed to
diagnose this. Can you run the daemon under gdb? When it crashes you can
type "bt" to get a backtrace. If there are debuginfo packages available
in your distro installing the ones for the Xen packages would improve
the output of this too.

If you could figure out where (if anywhere) the daemon's stderr (AKA fd
2) was going then that would be useful too. It may be enough to run it
in the foreground.
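
One quick way to see where a process's stderr goes on Linux (a sketch;
here `$$`, the current shell's PID, stands in for the actual xenconsoled
PID):

```shell
# Show where fd 2 (stderr) of a process points on Linux.
# $$ (the current shell) stands in for the xenconsoled PID;
# the symlink resolves to a tty, pipe, or log file.
pid=$$
readlink "/proc/$pid/fd/2"
```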

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:41:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:41:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAuO-0003MY-7N; Thu, 13 Dec 2012 15:41:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TjAuN-0003MK-8H
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:41:11 +0000
Received: from [85.158.143.35:37858] by server-1.bemta-4.messagelabs.com id
	62/2B-28401-617F9C05; Thu, 13 Dec 2012 15:41:10 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355413268!15477559!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32097 invoked from network); 13 Dec 2012 15:41:09 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-13.tower-21.messagelabs.com with SMTP;
	13 Dec 2012 15:41:09 -0000
Received: from [137.65.222.57] ([137.65.222.57])
	by mail.novell.com with ESMTP; Thu, 13 Dec 2012 08:41:06 -0700
Message-ID: <50C9F711.400@suse.com>
Date: Thu, 13 Dec 2012 08:41:05 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>	<20680.47971.962603.851882@mariner.uk.xensource.com>	<50C8BE3F.4040402@suse.com>	<20680.49391.646654.814456@mariner.uk.xensource.com>	<50C8C665.2030202@suse.com>
	<20681.54029.391833.313611@mariner.uk.xensource.com>
In-Reply-To: <20681.54029.391833.313611@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
>   
>> Ian Jackson wrote:
>>     
>>> Well your patch is only correct when used with the new libxl, with my
>>> patches.
>>>       
>> Hmm, it is not clear to me how to make the libxl driver work correctly
>> with libxl pre and post your patches :-/.
>>     
>
> This should be straightforward I think, except for the deregistration
> race bug which is unavoidable with the old libxl.  Simply correctly
> implementing both the deregistration callback and the modification
> callback ought to do it.
>   

Yes, after thinking about it some more, I agree.

As for the modify callback, do you agree that it is fine to ignore the
timeval parameter and just update the timer in libvirt's event loop to
fire immediately?  I.e. always treat the timeval parameter as containing
{0,0} regardless of "old" or "new" libxl?  I think my patch here is correct:

http://lists.xen.org/archives/html/xen-devel/2012-12/msg00985.html

>   
>>>> @@ -184,6 +184,8 @@ static void libxlTimerCallback(int timer
>>>> ATTRIBUTE_UNUSED, void *timer_v)
>>>>  {
>>>>      struct libxlOSEventHookTimerInfo *timer_info = timer_v;
>>>>  
>>>> +    virEventRemoveTimeout(timer_info->id);
>>>> +    timer_info->id = -1;
>>>>     
>>>>         
>>> I don't understand why you need this.  Doesn't libvirt remove timers
>>> when they fire ?  If it doesn't, do they otherwise not keep reoccurring ?
>>>       
>> No, timers are not removed when they fire.  And they are continuous, so
>> will keep firing at each timeout.  They can be disabled by setting the
>> timeout to -1, and can be set to fire on each iteration of the event
>> loop by setting the timeout to 0.  But they must be explicitly removed
>> with virEventRemoveTimeout when no longer needed.
>>     
>
> Right.  Well then yes you need to call virEventRemoveTimeout here.
> But that applies to both old and new libxl.
>   

Right.

Thanks for the help with getting this issue resolved!

Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:41:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjAuv-0003Yf-Mj; Thu, 13 Dec 2012 15:41:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjAuu-0003XA-JC
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 15:41:44 +0000
Received: from [85.158.138.51:51137] by server-3.bemta-3.messagelabs.com id
	CD/A6-31588-737F9C05; Thu, 13 Dec 2012 15:41:43 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355413302!18799867!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20773 invoked from network); 13 Dec 2012 15:41:42 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:41:42 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjAuq-000Khb-3F; Thu, 13 Dec 2012 15:41:40 +0000
Date: Thu, 13 Dec 2012 15:41:40 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213154140.GI75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-4-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-4-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 03/11] nEPT: Implement guest ept's walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191035), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Implment guest EPT PT walker, some logic is based on shadow's
> ia32e PT walker. During the PT walking, if the target pages are
> not in memory, use RETRY mechanism and get a chance to let the
> target page back.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

The design looks pretty good.  A few comments below on code details --
I think the only big one is that the ept walker shouldn't force eptes
into 'normal' pte types just so it can reuse the walk_t struct.

> @@ -88,10 +88,11 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
>  
>  /* If the map is non-NULL, we leave this function having 
>   * acquired an extra ref on mfn_to_page(*mfn) */
> -static inline void *map_domain_gfn(struct p2m_domain *p2m,
> +void *map_domain_gfn(struct p2m_domain *p2m,
>                                     gfn_t gfn, 
>                                     mfn_t *mfn,
>                                     p2m_type_t *p2mt,
> +                                   p2m_query_t *q,

I think this should just be a plain p2m_query_t and not a pointer to
one; the code below only dereferences the pointer to read it.  That will
save you having a variable just to hold 'P2M_ALLOC | P2M_UNSHARE' in a
few places below.

> --- /dev/null
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> +/* For EPT's walker reserved bits and EMT check  */
> +#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
> +                     ~((1ull << paddr_bits) - 1))
> +
> +
> +#define EPT_EMT_WB  6
> +#define EPT_EMT_UC  0

These two definitions should be in vmx.h along with the other
architectural constants for EPTEs.

> +
> +#define NEPT_VPID_CAP_BITS 0
> +
> +#define NEPT_1G_ENTRY_FLAG (1 << 11)
> +#define NEPT_2M_ENTRY_FLAG (1 << 10)
> +#define NEPT_4K_ENTRY_FLAG (1 << 9)
> +
> +/* Always expose 1G and 2M capability to guest, 
> + so don't need additional check */
> +bool_t nept_sp_entry(uint64_t entry)
> +{
> +    return !!(entry & EPTE_SUPER_PAGE_MASK);
> +}
> +
> +static bool_t nept_rsv_bits_check(uint64_t entry, uint32_t level)
> +{
> +    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
> +
> +    switch ( level ){
> +        case 1:
> +            break;
> +        case 2 ... 3:
> +            if (nept_sp_entry(entry))
> +                rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
> +            else
> +                rsv_bits |= 0xfull << 3;

Please use EPTE_EMT_MASK rather than open-coding it.

> +            break;
> +        case 4:
> +        rsv_bits |= 0xf8;

Again, please use EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK.

> +        break;
> +        default:
> +            printk("Unsupported EPT paging level: %d\n", level);
> +    }
> +    if ( ((entry & rsv_bits) ^ rsv_bits) == rsv_bits )
> +        return 0;

This XOR is useful in the normal walker because we care about _which_
bits are wrong.  Here, you can just return !(entry & rsv_bits) for the
same result.

> +    return 1;
> +}
> +
> +/* EMT checking*/
> +static bool_t nept_emt_bits_check(uint64_t entry, uint32_t level)
> +{
> +    ept_entry_t e;
> +    e.epte = entry;
> +    if ( e.sp || level == 1 ) {
> +        if ( e.emt == 2 || e.emt == 3 || e.emt == 7 )
> +            return 1;

Please define more of the EPT_EMT_* constants for these values and use
them.

> +    }
> +    return 0;
> +}
> +
> +static bool_t nept_rwx_bits_check(uint64_t entry) {
> +    /*write only or write/execute only*/
> +    uint8_t rwx_bits = entry & 0x7;
> +
> +    if ( rwx_bits == 2 || rwx_bits == 6)
> +        return 1;
> +    if ( rwx_bits == 4 && !(NEPT_VPID_CAP_BITS &
> +                        VMX_EPT_EXEC_ONLY_SUPPORTED))

Please pass the entry as an ept_entry_t and check the named r, w and x
fields rather than using magic numbers. 

> +        return 1;
> +    return 0;
> +}
> +
> +/* nept's misconfiguration check */
> +static bool_t nept_misconfiguration_check(uint64_t entry, uint32_t level)
> +{
> +    return (nept_rsv_bits_check(entry, level) ||
> +                nept_emt_bits_check(entry, level) ||
> +                nept_rwx_bits_check(entry));
> +}
> +
> +static bool_t nept_present_check(uint64_t entry)
> +{
> +    if (entry & 0x7)

Again, please pass an ept_entry_t and check the r/w/x fields. 

> +        return 1;
> +    return 0;
> +}
> +
> +uint64_t nept_get_ept_vpid_cap(void)
> +{
> +    /*TODO: exposed ept and vpid features*/

This TODO comment doesn't get removed later in the series.  Is returning
0 here always OK?

> +    return NEPT_VPID_CAP_BITS;
> +}
> +
> +static uint32_t
> +nept_walk_tables(struct vcpu *v, unsigned long l2ga, walk_t *gw)
> +{
> +    p2m_type_t p2mt;
> +    uint32_t rc = 0, ret = 0, gflags;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = d->arch.p2m;
> +    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
> +    p2m_query_t qt = P2M_ALLOC;
> +
> +    guest_l1e_t *l1p = NULL;
> +    guest_l2e_t *l2p = NULL;
> +    guest_l3e_t *l3p = NULL;
> +    guest_l4e_t *l4p = NULL;

These aren't really guest_l*es, so I think you should use ept_entry_t *
to point to them.  While you're at it, why not define an equivalent
ept_walk_t struct that uses the ept-specific types instead of putting
EPT entries in a normal walk_t?

Also, unlike the normal guest walker, you don't need to hold these maps
open for writing A/D bits, so you could just use a single pointer and
unmap as you go.

> +    sp = nept_sp_entry(gw->l3e.l3);
> +    /* Super 1G entry */
> +    if ( sp )
> +    {
> +        /* Generate a fake l1 table entry so callers don't all 
> +         * have to understand superpages. */

You only have one caller for this function, and it does understand
superpages -- it explicitly checks for them.  So I think you can avoid
this part altogether (likewise for 2M superpages) and just DTRT in the
caller.

Cheers,

Tim.

> +        gfn_t start = guest_l3e_get_gfn(gw->l3e);
> +
> +        /* Increment the pfn by the right number of 4k pages. */
> +        start = _gfn((gfn_x(start) & ~GUEST_L3_GFN_MASK) +
> +                     ((l2ga >> PAGE_SHIFT) & GUEST_L3_GFN_MASK));
> +        gflags = (gw->l3e.l3 & 0x7f) | NEPT_1G_ENTRY_FLAG;
> +        gw->l1e = guest_l1e_from_gfn(start, gflags);
> +        gw->l2mfn = gw->l1mfn = _mfn(INVALID_MFN);
> +        goto done;
> +    }
> +

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:47:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjB0Z-0004Nb-Pa; Thu, 13 Dec 2012 15:47:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjB0X-0004NT-HJ
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 15:47:33 +0000
Received: from [85.158.139.83:57608] by server-6.bemta-5.messagelabs.com id
	A2/41-30498-498F9C05; Thu, 13 Dec 2012 15:47:32 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355413651!29037548!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25970 invoked from network); 13 Dec 2012 15:47:32 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:47:32 -0000
From xen-devel-bounces@lists.xen.org Thu Dec 13 15:47:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjB0Z-0004Nb-Pa; Thu, 13 Dec 2012 15:47:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjB0X-0004NT-HJ
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 15:47:33 +0000
Received: from [85.158.139.83:57608] by server-6.bemta-5.messagelabs.com id
	A2/41-30498-498F9C05; Thu, 13 Dec 2012 15:47:32 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355413651!29037548!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25970 invoked from network); 13 Dec 2012 15:47:32 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 15:47:32 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjB0V-000Kif-F3; Thu, 13 Dec 2012 15:47:31 +0000
Date: Thu, 13 Dec 2012 15:47:31 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213154731.GJ75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-5-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-5-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xensource.com, keir@xen.org, JBeulich@suse.com,
	eddie.dong@intel.com, Xu Dongxiao <dongxiao.xu@intel.com>,
	jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 04/11] nEPT: Do further permission check for
	successful translation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191036), xiantao.zhang@intel.com wrote:
> +static
> +bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
> +{
> +    if ( ((rwx_acc & 0x1) && !(rwx_bits & 0x1)) ||
> +        ((rwx_acc & 0x2) && !(rwx_bits & 0x2 )) ||
> +        ((rwx_acc & 0x4) && !(rwx_bits & 0x4 )) )
> +        return 0;

Ugh.  It would be nice to use human-readable names for these.
Or, since you know these are both <= 0x7, just test for
!(rwx_acc & ~rwx_bits).

Also, this should really be folded into the previous patch.

Cheers,

Tim.

> +
>  /* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
>  
>  int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
> @@ -301,11 +311,17 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
>                  rwx_bits = gw.l4e.l4 & gw.l3e.l3  & 0x7;
>                  *page_order = 18;
>              }
> -            else
> +            else {
>                  gdprintk(XENLOG_ERR, "Uncorrect l1 entry!\n");
> -
> -            *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
> -            break;
> +                BUG();
> +            }
> +            if ( nept_permission_check(rwx_acc, rwx_bits) )
> +            {
> +                 *l1gfn = guest_l1e_get_paddr(gw.l1e) >> PAGE_SHIFT;
> +                 break;
> +            }
> +            rc = EPT_TRANSLATE_VIOLATION;
> +        /* Fall through to EPT violation if permission check fails. */
>          case EPT_TRANSLATE_VIOLATION:
>              *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
>              *exit_reason = EXIT_REASON_EPT_VIOLATION;
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:51:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjB47-0004ho-P0; Thu, 13 Dec 2012 15:51:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjB46-0004hg-M9
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:51:14 +0000
Received: from [193.109.254.147:39910] by server-10.bemta-14.messagelabs.com
	id 24/46-13263-179F9C05; Thu, 13 Dec 2012 15:51:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355413873!9985070!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5372 invoked from network); 13 Dec 2012 15:51:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:51:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="124033"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:51:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 15:51:12 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjB45-0007Ds-0L; Thu, 13 Dec 2012 15:51:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjB44-0008Q3-SW;
	Thu, 13 Dec 2012 15:51:12 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.63855.858383.346914@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 15:51:11 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C9F711.400@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
	<50C8C665.2030202@suse.com>
	<20681.54029.391833.313611@mariner.uk.xensource.com>
	<50C9F711.400@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> Yes, after thinking about it some more, I agree.
> 
> As for the modify callback, do you agree that it is fine to ignore the
> timeval parameter and just update the timer in libvirt's event loop to
> fire immediately?  I.e. always treat the timeval parameter as containing
> {0,0} regardless of "old" or "new" libxl?

Yes.  That is fine.

The reason is that old libxl doesn't ever call this function and new
libxl always calls it with {0,0}.  If you're worried about this, add
an assertion :-).  But it's theoretical.

> I think my patch here is correct
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00985.html

Having thought about it, I agree, provided that you also fix the
potential integer overflow in libxlTimeoutRegisterEventHook.

> > Right.  Well then yes you need to call virEventRemoveTimeout here.
> > But that applies to both old and new libxl.
> 
> Right.
> 
> Thanks for the help with getting this issue resolved!

Thank you and I'm sorry to have caused trouble with my inadequate
analysis.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:53:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjB6H-000589-L5; Thu, 13 Dec 2012 15:53:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TjB6F-00057m-Mi
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:53:27 +0000
Received: from [85.158.139.211:28223] by server-9.bemta-5.messagelabs.com id
	AB/D8-10690-6F9F9C05; Thu, 13 Dec 2012 15:53:26 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355414005!20405844!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18588 invoked from network); 13 Dec 2012 15:53:26 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-16.tower-206.messagelabs.com with SMTP;
	13 Dec 2012 15:53:26 -0000
Received: from [137.65.222.57] ([137.65.222.57])
	by mail.novell.com with ESMTP; Thu, 13 Dec 2012 08:53:15 -0700
Message-ID: <50C9F9EA.7020408@suse.com>
Date: Thu, 13 Dec 2012 08:53:14 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>	<20680.47971.962603.851882@mariner.uk.xensource.com>	<50C8BE3F.4040402@suse.com>	<20680.49391.646654.814456@mariner.uk.xensource.com>	<50C8C665.2030202@suse.com>
	<1355394576.10554.62.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355394576.10554.62.camel@zakaz.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Wed, 2012-12-12 at 18:01 +0000, Jim Fehlig wrote:
>   
>> Ian Jackson wrote:
>>     
>
>   
>>>   
>>>       
>>>>  After again reading
>>>> libxl_event.h, I'm considering the below patch in the libvirt libxl
>>>> driver.  The change is primarily inspired by this comment for
>>>> libxl_osevent_occurred_timeout:
>>>>     
>>>>         
>>> ...
>>>   
>>>       
>>>> /* Implicitly, on entry to this function the timeout has been
>>>>  * deregistered.  If _occurred_timeout is called, libxl will not
>>>>  * call timeout_deregister; if it wants to requeue the timeout it
>>>>  * will call timeout_register again.
>>>>  */
>>>>     
>>>>         
>>> Well your patch is only correct when used with the new libxl, with my
>>> patches.
>>>   
>>>       
>> Hmm, it is not clear to me how to make the libxl driver work correctly
>> with libxl pre and post your patches :-/.
>>     
>
> Ideally we will find a way to make this work without changes on the
> application side.
>   

I think Ian J. is right about applications still working, *if* they have
the callbacks coded correctly.  There are some bugs on the libvirt side
as well :).

> But if that turns out to be impossible and applications are going to
> need patching anyway then I think we should consider just fixing the API
> rather than playing tricks like the "modify to 0" thing to try and keep
> it compatible.
>
> One option is to add new hooks which libxl can call to take/release the
> application's event loop lock + a LIBXL_HAVE_EVENT_LOOP_LOCK define so
> the application can conditionally provide them.

libvirt's event loop lock is private to the event impl and not exposed
to its numerous users.

Regards,
Jim



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:58:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBAs-0005Nt-CI; Thu, 13 Dec 2012 15:58:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TjBAq-0005Nn-QC
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:58:13 +0000
Received: from [85.158.137.99:50437] by server-9.bemta-3.messagelabs.com id
	B1/E3-11948-31BF9C05; Thu, 13 Dec 2012 15:58:11 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355414290!18962478!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2997 invoked from network); 13 Dec 2012 15:58:11 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-2.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 15:58:11 -0000
Received: from [137.65.222.57] ([137.65.222.57])
	by mail.novell.com with ESMTP; Thu, 13 Dec 2012 08:57:57 -0700
Message-ID: <50C9FB02.7010204@suse.com>
Date: Thu, 13 Dec 2012 08:57:54 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>	<20680.47971.962603.851882@mariner.uk.xensource.com>	<50C8BE3F.4040402@suse.com>	<20680.49391.646654.814456@mariner.uk.xensource.com>	<50C8C665.2030202@suse.com>	<20681.54029.391833.313611@mariner.uk.xensource.com>	<50C9F711.400@suse.com>
	<20681.63855.858383.346914@mariner.uk.xensource.com>
In-Reply-To: <20681.63855.858383.346914@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
>   
>> Yes, after thinking about it some more, I agree.
>>
>> As for the modify callback, do you agree that it is fine to ignore the
>> timeval parameter and just update the timer in libvirt's event loop to
>> fire immediately?  I.e. always treat the timeval parameter as containing
>> {0,0} regardless of "old" or "new" libxl?
>>     
>
> Yes.  That is fine.
>
> The reason is that old libxl doesn't ever call this function and new
> libxl always calls it with {0,0}.  If you're worried about this, add
> an assertion :-).  But it's theoretical.
>
>   
>> I think my patch here is correct
>> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00985.html
>>     
>
> Having thought about it, I agree, provided that you also fix the
> potential integer overflow in libxlTimeoutRegisterEventHook.
>   

Right.  I'll squash a fix to handle that into my current patch and send
it to the libvirt dev list, cc'ing xen-devel too.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:58:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBAs-0005Nt-CI; Thu, 13 Dec 2012 15:58:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TjBAq-0005Nn-QC
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:58:13 +0000
Received: from [85.158.137.99:50437] by server-9.bemta-3.messagelabs.com id
	B1/E3-11948-31BF9C05; Thu, 13 Dec 2012 15:58:11 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355414290!18962478!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2997 invoked from network); 13 Dec 2012 15:58:11 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-2.tower-217.messagelabs.com with SMTP;
	13 Dec 2012 15:58:11 -0000
Received: from [137.65.222.57] ([137.65.222.57])
	by mail.novell.com with ESMTP; Thu, 13 Dec 2012 08:57:57 -0700
Message-ID: <50C9FB02.7010204@suse.com>
Date: Thu, 13 Dec 2012 08:57:54 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>	<20680.47971.962603.851882@mariner.uk.xensource.com>	<50C8BE3F.4040402@suse.com>	<20680.49391.646654.814456@mariner.uk.xensource.com>	<50C8C665.2030202@suse.com>	<20681.54029.391833.313611@mariner.uk.xensource.com>	<50C9F711.400@suse.com>
	<20681.63855.858383.346914@mariner.uk.xensource.com>
In-Reply-To: <20681.63855.858383.346914@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
>   
>> Yes, after thinking about it some more, I agree.
>>
>> As for the modify callback, do you agree that it is fine to ignore the
>> timeval parameter and just update the timer in libvirt's event loop to
>> fire immediately?  I.e. always treat the timeval parameter as containing
>> {0,0} regardless of "old" or "new" libxl?
>>     
>
> Yes.  That is fine.
>
> The reason is that old libxl doesn't ever call this function and new
> libxl always calls it with {0,0}.  If you're worried about this, add
> an assertion :-).  But it's theoretical.
>
>   
>> I think my patch here is correct
>> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00985.html
>>     
>
> Having thought about it, I agree, provided that you also fix the
> potential integer overflow in libxlTimeoutRegisterEventHook.
>   

Right.  I'll squash a fix to handle that into my current patch and send
it to the libvirt dev list, cc'ing xen-devel too.
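The overflow in question comes from converting libxl's struct timeval into the millisecond timeout libvirt's event loop expects. A minimal sketch of a clamped conversion — function name and clamping policy are invented for illustration, this is not the actual libvirt patch:

```c
#include <assert.h>
#include <limits.h>
#include <sys/time.h>

/* Convert a timeval to a millisecond timeout, clamping instead of
 * overflowing when tv_sec is huge (e.g. a "never fires" timer).
 * (INT_MAX - 999) / 1000 is the largest tv_sec whose conversion,
 * including a rounded-up usec contribution, still fits in an int. */
static int timeout_to_ms(struct timeval tv)
{
    if (tv.tv_sec > (INT_MAX - 999) / 1000)
        return INT_MAX;                       /* clamp: effectively "never" */
    return tv.tv_sec * 1000 + (tv.tv_usec + 999) / 1000;  /* round up usec */
}
```

The same helper also covers the {0,0} case discussed above: it simply yields 0, i.e. fire immediately.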

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 15:59:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 15:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBBe-0005Ro-QI; Thu, 13 Dec 2012 15:59:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjBBc-0005Rd-NT
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 15:59:01 +0000
Received: from [85.158.139.83:3169] by server-1.bemta-5.messagelabs.com id
	A3/76-12813-44BF9C05; Thu, 13 Dec 2012 15:59:00 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355414339!25854766!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22103 invoked from network); 13 Dec 2012 15:58:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 15:58:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,274,1355097600"; 
   d="scan'208";a="124323"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 15:58:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 15:58:58 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjBBa-0007Kr-W7; Thu, 13 Dec 2012 15:58:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjBBa-0008R5-Ru;
	Thu, 13 Dec 2012 15:58:58 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20681.64322.738705.484310@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 15:58:58 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <50C9F9EA.7020408@suse.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>
	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>
	<50C7B974.4050706@suse.com>
	<20680.47971.962603.851882@mariner.uk.xensource.com>
	<50C8BE3F.4040402@suse.com>
	<20680.49391.646654.814456@mariner.uk.xensource.com>
	<50C8C665.2030202@suse.com>
	<1355394576.10554.62.camel@zakaz.uk.xensource.com>
	<50C9F9EA.7020408@suse.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Bamvor Jian Zhang <bjzhang@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
> Ian Campbell wrote:
> > One option is to add new hooks which libxl can call to take/release the
> > application's event loop lock + a LIBXL_HAVE_EVENT_LOOP_LOCK define so
> > the application can conditionally provide them.
> 
> libvirt's event loop lock is private to the event impl and not exposed
> to its numerous users.

Right.  I still think it might be useful to provide a way for a
consenting application to allow libxl to use the application's event
loop lock (perhaps its single giant lock) as the ctx lock.  If it had
been possible in this case it would have eliminated these particular
races, so it's a benefit for those applications.  And the extra
complexity doesn't seem likely to introduce other bugs.

But I think we should fault that feature in when we have a potential
user for it, and from what you say that's not libvirt.
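For concreteness, the hook scheme Ian Campbell floated might look roughly like this from an application's side — every name below, including the hook struct, is a hypothetical illustration, not a real libxl API:

```c
#include <assert.h>

/* Hypothetical callback table: the application hands libxl two hooks so
 * libxl could take the application's event-loop lock as its ctx lock. */
typedef struct {
    void (*lock)(void *user);
    void (*unlock)(void *user);
    void *user;
} app_event_lock_hooks;

/* Demo "lock": a depth counter standing in for a real mutex. */
static int lock_depth;
static void app_lock(void *user)   { (void)user; lock_depth++; }
static void app_unlock(void *user) { (void)user; lock_depth--; }

/* The proposal pairs the hooks with a compile-time marker so the
 * application can provide them conditionally: */
#define LIBXL_HAVE_EVENT_LOOP_LOCK 1

static app_event_lock_hooks hooks = { app_lock, app_unlock, 0 };
```

As the thread notes, this only helps applications whose event loop lock is actually reachable, which libvirt's is not.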

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:00:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:00:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBCm-0005vG-Ga; Thu, 13 Dec 2012 16:00:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TjBCk-0005ur-Ib
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 16:00:10 +0000
Received: from [85.158.139.211:44287] by server-15.bemta-5.messagelabs.com id
	DB/16-20523-98BF9C05; Thu, 13 Dec 2012 16:00:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355414408!16133813!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQzMTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16838 invoked from network); 13 Dec 2012 16:00:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:00:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2012 16:00:07 +0000
Message-Id: <50CA099702000078000B03DC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 13 Dec 2012 16:00:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>,<konrad.wilk@oracle.com>
References: <1355412785-5343-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1355412785-5343-1-git-send-email-wei.liu2@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix vcpu restore path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.12.12 at 16:33, Wei Liu <wei.liu2@citrix.com> wrote:
> The runstate of each vcpu should be restored for all possible CPUs, as well
> as the vcpu info placement.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  arch/x86/xen/enlighten.c |    7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index ff962d4..bc893e7 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -178,10 +178,11 @@ void xen_vcpu_restore(void)
>  {
>  	int cpu;
>  
> -	for_each_online_cpu(cpu) {
> +	for_each_possible_cpu(cpu) {
>  		bool other_cpu = (cpu != smp_processor_id());
> +		bool is_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL);
>  
> -		if (other_cpu &&
> +		if (other_cpu && is_up &&
>  		    HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL))
>  			BUG();
>  
> @@ -190,7 +191,7 @@ void xen_vcpu_restore(void)
>  		if (have_vcpu_info_placement)
>  			xen_vcpu_setup(cpu);
>  
> -		if (other_cpu &&
> +		if (other_cpu && is_up &&
>  		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
>  			BUG();
>  	}
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 
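The acked change can be reduced to a toy model: the loop now visits every *possible* CPU (so runstate and vcpu-info placement are restored everywhere), but only brackets the restore with the down/up hypercalls for CPUs the hypervisor reports as up. The array and counters below are purely illustrative, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Toy hypervisor state: which vcpus are currently up. */
static bool vcpu_is_up[NR_CPUS] = { true, true, false, false };
static int down_calls, up_calls, setup_calls;

static void vcpu_restore_demo(int self)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++) {  /* for_each_possible_cpu */
        bool other_cpu = (cpu != self);
        bool is_up = vcpu_is_up[cpu];

        if (other_cpu && is_up)
            down_calls++;                      /* VCPUOP_down */

        setup_calls++;                         /* runstate + vcpu info, all CPUs */

        if (other_cpu && is_up)
            up_calls++;                        /* VCPUOP_up */
    }
}
```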




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:04:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBGz-0006Uw-8a; Thu, 13 Dec 2012 16:04:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjBGx-0006Ur-FH
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 16:04:31 +0000
Received: from [85.158.139.83:6857] by server-14.bemta-5.messagelabs.com id
	39/3D-09538-E8CF9C05; Thu, 13 Dec 2012 16:04:30 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-182.messagelabs.com!1355414659!22473412!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3374 invoked from network); 13 Dec 2012 16:04:20 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:04:20 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjBGj-000KlT-Jl; Thu, 13 Dec 2012 16:04:17 +0000
Date: Thu, 13 Dec 2012 16:04:17 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213160417.GK75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191037), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Share the current EPT logic with the nested EPT case, so
> make the related data structures and operations neutral
> to common EPT and nested EPT.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Since the struct ept_data is only 16 bytes long, why not just embed it
in the struct p2m_domain, as 

>          mm_lock_t        lock;         /* Locking of private pod structs,   *
>                                          * not relying on the p2m lock.      */
>      } pod;
> +    union {
> +        struct ept_data ept;
> +        /* NPT equivalent could go here if needed */
> +    };
>  };

That would tidy up the alloc/free stuff a fair bit, though you'd still
need it for the cpumask, I guess.
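Tim's suggestion amounts to embedding the small per-implementation struct directly rather than allocating it separately. A stripped-down illustration — field and struct names here are invented stand-ins, not the Xen structures:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for Xen's 16-byte struct ept_data (field names invented). */
struct ept_data_demo {
    uint64_t eptp;          /* EPT pointer */
    void    *synced_mask;   /* the cpumask would still be allocated separately */
};

/* Embedding via an anonymous union avoids a separate alloc/free for the
 * struct itself, and leaves room for an NPT equivalent as the quoted
 * hunk suggests. */
struct p2m_domain_demo {
    int flags;
    union {
        struct ept_data_demo ept;
        /* struct npt_data_demo npt;  -- if ever needed */
    };
};
```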

It would be nice to wrap the alloc/free functions up in the usual way so
we don't get ept-specific functions with arch-independent names.

Otherwise that looks fine.

Cheers,

Tim.

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
>  xen/arch/x86/hvm/vmx/vmx.c         |   39 +++++++++------
>  xen/arch/x86/mm/p2m-ept.c          |   96 ++++++++++++++++++++++++++++--------
>  xen/arch/x86/mm/p2m.c              |   16 +++++-
>  xen/include/asm-x86/hvm/vmx/vmcs.h |   30 +++++++----
>  xen/include/asm-x86/hvm/vmx/vmx.h  |    6 ++-
>  xen/include/asm-x86/p2m.h          |    1 +
>  7 files changed, 137 insertions(+), 53 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 9adc7a4..b9ebdfe 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -942,7 +942,7 @@ static int construct_vmcs(struct vcpu *v)
>      }
>  
>      if ( paging_mode_hap(d) )
> -        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
> +        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept.ept_ctl.eptp);
>  
>      if ( cpu_has_vmx_pat && paging_mode_hap(d) )
>      {
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index c67ac59..06455bf 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -79,22 +79,23 @@ static void __ept_sync_domain(void *info);
>  static int vmx_domain_initialise(struct domain *d)
>  {
>      int rc;
> +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
>  
>      /* Set the memory type used when accessing EPT paging structures. */
> -    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
> +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
>  
>      /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
> -    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
> +    ept->ept_ctl.ept_wl = 3;
>  
> -    d->arch.hvm_domain.vmx.ept_control.asr  =
> +    ept->ept_ctl.asr  =
>          pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
>  
> -    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
> +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
>          return -ENOMEM;
>  
>      if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
>      {
> -        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> +        free_cpumask_var(ept->ept_synced);
>          return rc;
>      }
>  
> @@ -103,9 +104,10 @@ static int vmx_domain_initialise(struct domain *d)
>  
>  static void vmx_domain_destroy(struct domain *d)
>  {
> +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
>      if ( paging_mode_hap(d) )
> -        on_each_cpu(__ept_sync_domain, d, 1);
> -    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> +        on_each_cpu(__ept_sync_domain, p2m_get_hostp2m(d), 1);
> +    free_cpumask_var(ept->ept_synced);
>      vmx_free_vlapic_mapping(d);
>  }
>  
> @@ -641,6 +643,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
>      unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
> +    struct ept_data *ept_data = p2m_get_hostp2m(d)->hap_data;
>  
>      /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
>      if ( old_cr4 != new_cr4 )
> @@ -650,10 +653,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
>      {
>          unsigned int cpu = smp_processor_id();
>          /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
> -        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
> +        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
>               !cpumask_test_and_set_cpu(cpu,
> -                                       d->arch.hvm_domain.vmx.ept_synced) )
> -            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> +                                       ept_get_synced_mask(ept_data)) )
> +            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
>      }
>  
>      vmx_restore_guest_msrs(v);
> @@ -1218,12 +1221,16 @@ static void vmx_update_guest_efer(struct vcpu *v)
>  
>  static void __ept_sync_domain(void *info)
>  {
> -    struct domain *d = info;
> -    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> +    struct p2m_domain *p2m = info;
> +    struct ept_data *ept_data = p2m->hap_data;
> +
> +    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
>  }
>  
> -void ept_sync_domain(struct domain *d)
> +void ept_sync_domain(struct p2m_domain *p2m)
>  {
> +    struct domain *d = p2m->domain;
> +    struct ept_data *ept_data = p2m->hap_data;
>      /* Only if using EPT and this domain has some VCPUs to dirty. */
>      if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
>          return;
> @@ -1236,11 +1243,11 @@ void ept_sync_domain(struct domain *d)
>       * the ept_synced mask before on_selected_cpus() reads it, resulting in
>       * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
>       */
> -    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
> +    cpumask_and(ept_get_synced_mask(ept_data),
>                  d->domain_dirty_cpumask, &cpu_online_map);
>  
> -    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
> -                     __ept_sync_domain, d, 1);
> +    on_selected_cpus(ept_get_synced_mask(ept_data),
> +                     __ept_sync_domain, p2m, 1);
>  }
>  
>  void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index c964f54..8adf3f9 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>      int need_modify_vtd_table = 1;
>      int vtd_pte_present = 0;
>      int needs_sync = 1;
> -    struct domain *d = p2m->domain;
>      ept_entry_t old_entry = { .epte = 0 };
> +    struct ept_data *ept_data = p2m->hap_data;
> +    struct domain *d = p2m->domain;
>  
> +    ASSERT(ept_data);
>      /*
>       * the caller must make sure:
>       * 1. passing valid gfn and mfn at order boundary.
> @@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>       * 3. passing a valid order.
>       */
>      if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
> -         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
> +         ((u64)gfn >> ((ept_get_wl(ept_data) + 1) * EPT_TABLE_ORDER)) ||
>           (order % EPT_TABLE_ORDER) )
>          return 0;
>  
> -    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
> -           (target == 1 && hvm_hap_has_2mb(d)) ||
> +    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
> +           (target == 1 && hvm_hap_has_2mb()) ||
>             (target == 0));
>  
> -    table = map_domain_page(ept_get_asr(d));
> +    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>  
> -    for ( i = ept_get_wl(d); i > target; i-- )
> +    for ( i = ept_get_wl(ept_data); i > target; i-- )
>      {
>          ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
>          if ( !ret )
> @@ -439,9 +441,11 @@ out:
>      unmap_domain_page(table);
>  
>      if ( needs_sync )
> -        ept_sync_domain(p2m->domain);
> +        ept_sync_domain(p2m);
>  
> -    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
> +    /* For non-nested p2m, may need to change VT-d page table.*/
> +    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
> +                need_modify_vtd_table )
>      {
>          if ( iommu_hap_pt_share )
>              iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
> @@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>                             unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
>                             p2m_query_t q, unsigned int *page_order)
>  {
> -    struct domain *d = p2m->domain;
> -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> +    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>      unsigned long gfn_remainder = gfn;
>      ept_entry_t *ept_entry;
>      u32 index;
>      int i;
>      int ret = 0;
>      mfn_t mfn = _mfn(INVALID_MFN);
> +    struct ept_data *ept_data = p2m->hap_data;
>  
>      *t = p2m_mmio_dm;
>      *a = p2m_access_n;
> @@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>  
>      /* Should check if gfn obeys GAW here. */
>  
> -    for ( i = ept_get_wl(d); i > 0; i-- )
> +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
>      {
>      retry:
>          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
> @@ -588,19 +592,20 @@ out:
>  static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
>      unsigned long gfn, int *level)
>  {
> -    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
> +    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>      unsigned long gfn_remainder = gfn;
>      ept_entry_t *ept_entry;
>      ept_entry_t content = { .epte = 0 };
>      u32 index;
>      int i;
>      int ret=0;
> +    struct ept_data *ept_data = p2m->hap_data;
>  
>      /* This pfn is higher than the highest the p2m map currently holds */
>      if ( gfn > p2m->max_mapped_pfn )
>          goto out;
>  
> -    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
> +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
>      {
>          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
>          if ( !ret || ret == GUEST_TABLE_POD_PAGE )
> @@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
>  void ept_walk_table(struct domain *d, unsigned long gfn)
>  {
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> +    struct ept_data *ept_data = p2m->hap_data;
> +    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>      unsigned long gfn_remainder = gfn;
>  
>      int i;
> @@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
>          goto out;
>      }
>  
> -    for ( i = ept_get_wl(d); i >= 0; i-- )
> +    for ( i = ept_get_wl(ept_data); i >= 0; i-- )
>      {
>          ept_entry_t *ept_entry, *next;
>          u32 index;
> @@ -778,16 +784,16 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
>  static void ept_change_entry_type_global(struct p2m_domain *p2m,
>                                           p2m_type_t ot, p2m_type_t nt)
>  {
> -    struct domain *d = p2m->domain;
> -    if ( ept_get_asr(d) == 0 )
> +    struct ept_data *ept_data = p2m->hap_data;
> +    if ( ept_get_asr(ept_data) == 0 )
>          return;
>  
>      BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
>      BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
>  
> -    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
> +    ept_change_entry_type_page(_mfn(ept_get_asr(ept_data)), ept_get_wl(ept_data), ot, nt);
>  
> -    ept_sync_domain(d);
> +    ept_sync_domain(p2m);
>  }
>  
>  void ept_p2m_init(struct p2m_domain *p2m)
> @@ -811,6 +817,7 @@ static void ept_dump_p2m_table(unsigned char key)
>      unsigned long gfn, gfn_remainder;
>      unsigned long record_counter = 0;
>      struct p2m_domain *p2m;
> +    struct ept_data *ept_data;
>  
>      for_each_domain(d)
>      {
> @@ -818,15 +825,16 @@ static void ept_dump_p2m_table(unsigned char key)
>              continue;
>  
>          p2m = p2m_get_hostp2m(d);
> +    ept_data = p2m->hap_data;
>          printk("\ndomain%d EPT p2m table: \n", d->domain_id);
>  
>          for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
>          {
>              gfn_remainder = gfn;
>              mfn = _mfn(INVALID_MFN);
> -            table = map_domain_page(ept_get_asr(d));
> +            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>  
> -            for ( i = ept_get_wl(d); i > 0; i-- )
> +            for ( i = ept_get_wl(ept_data); i > 0; i-- )
>              {
>                  ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
>                  if ( ret != GUEST_TABLE_NORMAL_PAGE )
> @@ -858,6 +866,52 @@ out:
>      }
>  }
>  
> +int alloc_p2m_hap_data(struct p2m_domain *p2m)
> +{
> +    struct domain *d = p2m->domain;
> +    struct ept_data *ept;
> +
> +    ASSERT(d);
> +    if (!hap_enabled(d))
> +        return 0;
> +
> +    p2m->hap_data = ept = xzalloc(struct ept_data);
> +    if ( !p2m->hap_data )
> +        return -ENOMEM;
> +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
> +    {
> +        xfree(ept);
> +        p2m->hap_data = NULL;
> +        return -ENOMEM;    
> +    }
> +    return 0;
> +}
> +
> +void free_p2m_hap_data(struct p2m_domain *p2m)
> +{
> +    struct ept_data *ept;
> +
> +    if ( !hap_enabled(p2m->domain) )
> +        return;
> +
> +    if ( p2m_is_nestedp2m(p2m)) {
> +        ept = p2m->hap_data;
> +        if ( ept ) {
> +            free_cpumask_var(ept->ept_synced);
> +            xfree(ept);
> +        }
> +    }
> +}
> +
> +void p2m_init_hap_data(struct p2m_domain *p2m)
> +{
> +    struct ept_data *ept = p2m->hap_data;
> +
> +    ept->ept_ctl.ept_wl = 3;
> +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
> +    ept->ept_ctl.asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> +}
> +
>  static struct keyhandler ept_p2m_table = {
>      .diagnostic = 0,
>      .u.fn = ept_dump_p2m_table,
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 62c2d78..799bbfb 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -105,6 +105,8 @@ p2m_init_nestedp2m(struct domain *d)
>          if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
>              return -ENOMEM;
>          p2m_initialise(d, p2m);
> +        if ( cpu_has_vmx && alloc_p2m_hap_data(p2m) )
> +            return -ENOMEM;
>          p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
>          list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
>      }
> @@ -126,12 +128,14 @@ int p2m_init(struct domain *d)
>          return -ENOMEM;
>      }
>      p2m_initialise(d, p2m);
> +    if ( hap_enabled(d) && cpu_has_vmx)
> +        p2m->hap_data = &d->arch.hvm_domain.vmx.ept;
>  
>      /* Must initialise nestedp2m unconditionally
>       * since nestedhvm_enabled(d) returns false here.
>       * (p2m_init runs too early for HVM_PARAM_* options) */
>      rc = p2m_init_nestedp2m(d);
> -    if ( rc ) 
> +    if ( rc )
>          p2m_final_teardown(d);
>      return rc;
>  }
> @@ -354,6 +358,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
>  
>      if ( hap_enabled(d) )
>          iommu_share_p2m_table(d);
> +    if ( p2m_is_nestedp2m(p2m) && hap_enabled(d) )
> +        p2m_init_hap_data(p2m);
>  
>      P2M_PRINTK("populating p2m table\n");
>  
> @@ -436,12 +442,16 @@ void p2m_teardown(struct p2m_domain *p2m)
>  static void p2m_teardown_nestedp2m(struct domain *d)
>  {
>      uint8_t i;
> +    struct p2m_domain *p2m;
>  
>      for (i = 0; i < MAX_NESTEDP2M; i++) {
>          if ( !d->arch.nested_p2m[i] )
>              continue;
> -        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
> -        xfree(d->arch.nested_p2m[i]);
> +        p2m = d->arch.nested_p2m[i];
> +        if ( p2m->hap_data )
> +            free_p2m_hap_data(p2m);
> +        free_cpumask_var(p2m->dirty_cpumask);
> +        xfree(p2m);
>          d->arch.nested_p2m[i] = NULL;
>      }
>  }
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 9a728b6..e6b4e3b 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -56,26 +56,34 @@ struct vmx_msr_state {
>  
>  #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
>  
> -struct vmx_domain {
> -    unsigned long apic_access_mfn;
> -    union {
> -        struct {
> +union eptp_control{
> +    struct {
>              u64 ept_mt :3,
>                  ept_wl :3,
>                  rsvd   :6,
>                  asr    :52;
>          };
>          u64 eptp;
> -    } ept_control;
> +};
> +
> +struct ept_data{
> +    union eptp_control ept_ctl;
>      cpumask_var_t ept_synced;
>  };
>  
> -#define ept_get_wl(d)   \
> -    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
> -#define ept_get_asr(d)  \
> -    ((d)->arch.hvm_domain.vmx.ept_control.asr)
> -#define ept_get_eptp(d) \
> -    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
> +struct vmx_domain {
> +    unsigned long apic_access_mfn;
> +    struct ept_data ept; 
> +};
> +
> +#define ept_get_wl(ept_data)   \
> +    (((struct ept_data*)(ept_data))->ept_ctl.ept_wl)
> +#define ept_get_asr(ept_data)  \
> +    (((struct ept_data*)(ept_data))->ept_ctl.asr)
> +#define ept_get_eptp(ept_data) \
> +    (((struct ept_data*)(ept_data))->ept_ctl.eptp)
> +#define ept_get_synced_mask(ept_data)\
> +    (((struct ept_data*)(ept_data))->ept_synced)
>  
>  struct arch_vmx_struct {
>      /* Virtual address of VMCS. */
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index aa5b080..573a12e 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -333,7 +333,7 @@ static inline void ept_sync_all(void)
>      __invept(INVEPT_ALL_CONTEXT, 0, 0);
>  }
>  
> -void ept_sync_domain(struct domain *d);
> +void ept_sync_domain(struct p2m_domain *p2m);
>  
>  static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
>  {
> @@ -401,6 +401,10 @@ void setup_ept_dump(void);
>  
>  void update_guest_eip(void);
>  
> +int alloc_p2m_hap_data(struct p2m_domain *p2m);
> +void free_p2m_hap_data(struct p2m_domain *p2m);
> +void p2m_init_hap_data(struct p2m_domain *p2m);
> +
>  /* EPT violation qualifications definitions */
>  #define _EPT_READ_VIOLATION         0
>  #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 1807ad6..0fb1b2d 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -277,6 +277,7 @@ struct p2m_domain {
>          mm_lock_t        lock;         /* Locking of private pod structs,   *
>                                          * not relying on the p2m lock.      */
>      } pod;
> +    void *hap_data;
>  };
>  
>  /* get host p2m table */
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Dec 13 16:04:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBGz-0006Uw-8a; Thu, 13 Dec 2012 16:04:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjBGx-0006Ur-FH
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 16:04:31 +0000
Received: from [85.158.139.83:6857] by server-14.bemta-5.messagelabs.com id
	39/3D-09538-E8CF9C05; Thu, 13 Dec 2012 16:04:30 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-182.messagelabs.com!1355414659!22473412!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3374 invoked from network); 13 Dec 2012 16:04:20 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:04:20 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjBGj-000KlT-Jl; Thu, 13 Dec 2012 16:04:17 +0000
Date: Thu, 13 Dec 2012 16:04:17 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213160417.GK75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191037), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Share the current EPT logic with the nested EPT case, by
> making the related data structures and operations neutral
> to common EPT and nested EPT.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Since the struct ept_data is only 16 bytes long, why not just embed it
in the struct p2m_domain, as 

>          mm_lock_t        lock;         /* Locking of private pod structs,   *
>                                          * not relying on the p2m lock.      */
>      } pod;
> +    union {
> +        struct ept_data ept;
> +        /* NPT equivalent could go here if needed */
> +    };
>  };

That would tidy up the alloc/free stuff a fair bit, though you'd still
need it for the cpumask, I guess.

It would be nice to wrap the alloc/free functions up in the usual way so
we don't get ept-specific functions with arch-independent names.

Otherwise that looks fine.

Cheers,

Tim.

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
>  xen/arch/x86/hvm/vmx/vmx.c         |   39 +++++++++------
>  xen/arch/x86/mm/p2m-ept.c          |   96 ++++++++++++++++++++++++++++--------
>  xen/arch/x86/mm/p2m.c              |   16 +++++-
>  xen/include/asm-x86/hvm/vmx/vmcs.h |   30 +++++++----
>  xen/include/asm-x86/hvm/vmx/vmx.h  |    6 ++-
>  xen/include/asm-x86/p2m.h          |    1 +
>  7 files changed, 137 insertions(+), 53 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 9adc7a4..b9ebdfe 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -942,7 +942,7 @@ static int construct_vmcs(struct vcpu *v)
>      }
>  
>      if ( paging_mode_hap(d) )
> -        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
> +        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept.ept_ctl.eptp);
>  
>      if ( cpu_has_vmx_pat && paging_mode_hap(d) )
>      {
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index c67ac59..06455bf 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -79,22 +79,23 @@ static void __ept_sync_domain(void *info);
>  static int vmx_domain_initialise(struct domain *d)
>  {
>      int rc;
> +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
>  
>      /* Set the memory type used when accessing EPT paging structures. */
> -    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
> +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
>  
>      /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
> -    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
> +    ept->ept_ctl.ept_wl = 3;
>  
> -    d->arch.hvm_domain.vmx.ept_control.asr  =
> +    ept->ept_ctl.asr  =
>          pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
>  
> -    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
> +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
>          return -ENOMEM;
>  
>      if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
>      {
> -        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> +        free_cpumask_var(ept->ept_synced);
>          return rc;
>      }
>  
> @@ -103,9 +104,10 @@ static int vmx_domain_initialise(struct domain *d)
>  
>  static void vmx_domain_destroy(struct domain *d)
>  {
> +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
>      if ( paging_mode_hap(d) )
> -        on_each_cpu(__ept_sync_domain, d, 1);
> -    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> +        on_each_cpu(__ept_sync_domain, p2m_get_hostp2m(d), 1);
> +    free_cpumask_var(ept->ept_synced);
>      vmx_free_vlapic_mapping(d);
>  }
>  
> @@ -641,6 +643,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
>      unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
> +    struct ept_data *ept_data = p2m_get_hostp2m(d)->hap_data;
>  
>      /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
>      if ( old_cr4 != new_cr4 )
> @@ -650,10 +653,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
>      {
>          unsigned int cpu = smp_processor_id();
>          /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
> -        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
> +        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
>               !cpumask_test_and_set_cpu(cpu,
> -                                       d->arch.hvm_domain.vmx.ept_synced) )
> -            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> +                                       ept_get_synced_mask(ept_data)) )
> +            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
>      }
>  
>      vmx_restore_guest_msrs(v);
> @@ -1218,12 +1221,16 @@ static void vmx_update_guest_efer(struct vcpu *v)
>  
>  static void __ept_sync_domain(void *info)
>  {
> -    struct domain *d = info;
> -    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> +    struct p2m_domain *p2m = info;
> +    struct ept_data *ept_data = p2m->hap_data;
> +
> +    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
>  }
>  
> -void ept_sync_domain(struct domain *d)
> +void ept_sync_domain(struct p2m_domain *p2m)
>  {
> +    struct domain *d = p2m->domain;
> +    struct ept_data *ept_data = p2m->hap_data;
>      /* Only if using EPT and this domain has some VCPUs to dirty. */
>      if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
>          return;
> @@ -1236,11 +1243,11 @@ void ept_sync_domain(struct domain *d)
>       * the ept_synced mask before on_selected_cpus() reads it, resulting in
>       * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
>       */
> -    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
> +    cpumask_and(ept_get_synced_mask(ept_data),
>                  d->domain_dirty_cpumask, &cpu_online_map);
>  
> -    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
> -                     __ept_sync_domain, d, 1);
> +    on_selected_cpus(ept_get_synced_mask(ept_data),
> +                     __ept_sync_domain, p2m, 1);
>  }
>  
>  void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index c964f54..8adf3f9 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>      int need_modify_vtd_table = 1;
>      int vtd_pte_present = 0;
>      int needs_sync = 1;
> -    struct domain *d = p2m->domain;
>      ept_entry_t old_entry = { .epte = 0 };
> +    struct ept_data *ept_data = p2m->hap_data;
> +    struct domain *d = p2m->domain;
>  
> +    ASSERT(ept_data);
>      /*
>       * the caller must make sure:
>       * 1. passing valid gfn and mfn at order boundary.
> @@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>       * 3. passing a valid order.
>       */
>      if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
> -         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
> +         ((u64)gfn >> ((ept_get_wl(ept_data) + 1) * EPT_TABLE_ORDER)) ||
>           (order % EPT_TABLE_ORDER) )
>          return 0;
>  
> -    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
> -           (target == 1 && hvm_hap_has_2mb(d)) ||
> +    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
> +           (target == 1 && hvm_hap_has_2mb()) ||
>             (target == 0));
>  
> -    table = map_domain_page(ept_get_asr(d));
> +    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>  
> -    for ( i = ept_get_wl(d); i > target; i-- )
> +    for ( i = ept_get_wl(ept_data); i > target; i-- )
>      {
>          ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
>          if ( !ret )
> @@ -439,9 +441,11 @@ out:
>      unmap_domain_page(table);
>  
>      if ( needs_sync )
> -        ept_sync_domain(p2m->domain);
> +        ept_sync_domain(p2m);
>  
> -    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
> +    /* For non-nested p2m, may need to change VT-d page table.*/
> +    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
> +                need_modify_vtd_table )
>      {
>          if ( iommu_hap_pt_share )
>              iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
> @@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>                             unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
>                             p2m_query_t q, unsigned int *page_order)
>  {
> -    struct domain *d = p2m->domain;
> -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> +    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>      unsigned long gfn_remainder = gfn;
>      ept_entry_t *ept_entry;
>      u32 index;
>      int i;
>      int ret = 0;
>      mfn_t mfn = _mfn(INVALID_MFN);
> +    struct ept_data *ept_data = p2m->hap_data;
>  
>      *t = p2m_mmio_dm;
>      *a = p2m_access_n;
> @@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>  
>      /* Should check if gfn obeys GAW here. */
>  
> -    for ( i = ept_get_wl(d); i > 0; i-- )
> +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
>      {
>      retry:
>          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
> @@ -588,19 +592,20 @@ out:
>  static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
>      unsigned long gfn, int *level)
>  {
> -    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
> +    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>      unsigned long gfn_remainder = gfn;
>      ept_entry_t *ept_entry;
>      ept_entry_t content = { .epte = 0 };
>      u32 index;
>      int i;
>      int ret=0;
> +    struct ept_data *ept_data = p2m->hap_data;
>  
>      /* This pfn is higher than the highest the p2m map currently holds */
>      if ( gfn > p2m->max_mapped_pfn )
>          goto out;
>  
> -    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
> +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
>      {
>          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
>          if ( !ret || ret == GUEST_TABLE_POD_PAGE )
> @@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
>  void ept_walk_table(struct domain *d, unsigned long gfn)
>  {
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> +    struct ept_data *ept_data = p2m->hap_data;
> +    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>      unsigned long gfn_remainder = gfn;
>  
>      int i;
> @@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
>          goto out;
>      }
>  
> -    for ( i = ept_get_wl(d); i >= 0; i-- )
> +    for ( i = ept_get_wl(ept_data); i >= 0; i-- )
>      {
>          ept_entry_t *ept_entry, *next;
>          u32 index;
> @@ -778,16 +784,16 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
>  static void ept_change_entry_type_global(struct p2m_domain *p2m,
>                                           p2m_type_t ot, p2m_type_t nt)
>  {
> -    struct domain *d = p2m->domain;
> -    if ( ept_get_asr(d) == 0 )
> +    struct ept_data *ept_data = p2m->hap_data;
> +    if ( ept_get_asr(ept_data) == 0 )
>          return;
>  
>      BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
>      BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
>  
> -    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
> +    ept_change_entry_type_page(_mfn(ept_get_asr(ept_data)), ept_get_wl(ept_data), ot, nt);
>  
> -    ept_sync_domain(d);
> +    ept_sync_domain(p2m);
>  }
>  
>  void ept_p2m_init(struct p2m_domain *p2m)
> @@ -811,6 +817,7 @@ static void ept_dump_p2m_table(unsigned char key)
>      unsigned long gfn, gfn_remainder;
>      unsigned long record_counter = 0;
>      struct p2m_domain *p2m;
> +    struct ept_data *ept_data;
>  
>      for_each_domain(d)
>      {
> @@ -818,15 +825,16 @@ static void ept_dump_p2m_table(unsigned char key)
>              continue;
>  
>          p2m = p2m_get_hostp2m(d);
> +    ept_data = p2m->hap_data;
>          printk("\ndomain%d EPT p2m table: \n", d->domain_id);
>  
>          for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
>          {
>              gfn_remainder = gfn;
>              mfn = _mfn(INVALID_MFN);
> -            table = map_domain_page(ept_get_asr(d));
> +            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>  
> -            for ( i = ept_get_wl(d); i > 0; i-- )
> +            for ( i = ept_get_wl(ept_data); i > 0; i-- )
>              {
>                  ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
>                  if ( ret != GUEST_TABLE_NORMAL_PAGE )
> @@ -858,6 +866,52 @@ out:
>      }
>  }
>  
> +int alloc_p2m_hap_data(struct p2m_domain *p2m)
> +{
> +    struct domain *d = p2m->domain;
> +    struct ept_data *ept;
> +
> +    ASSERT(d);
> +    if (!hap_enabled(d))
> +        return 0;
> +
> +    p2m->hap_data = ept = xzalloc(struct ept_data);
> +    if ( !p2m->hap_data )
> +        return -ENOMEM;
> +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
> +    {
> +        xfree(ept);
> +        p2m->hap_data = NULL;
> +        return -ENOMEM;    
> +    }
> +    return 0;
> +}
> +
> +void free_p2m_hap_data(struct p2m_domain *p2m)
> +{
> +    struct ept_data *ept;
> +
> +    if ( !hap_enabled(p2m->domain) )
> +        return;
> +
> +    if ( p2m_is_nestedp2m(p2m)) {
> +        ept = p2m->hap_data;
> +        if ( ept ) {
> +            free_cpumask_var(ept->ept_synced);
> +            xfree(ept);
> +        }
> +    }
> +}
> +
> +void p2m_init_hap_data(struct p2m_domain *p2m)
> +{
> +    struct ept_data *ept = p2m->hap_data;
> +
> +    ept->ept_ctl.ept_wl = 3;
> +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
> +    ept->ept_ctl.asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> +}
> +
>  static struct keyhandler ept_p2m_table = {
>      .diagnostic = 0,
>      .u.fn = ept_dump_p2m_table,
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 62c2d78..799bbfb 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -105,6 +105,8 @@ p2m_init_nestedp2m(struct domain *d)
>          if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
>              return -ENOMEM;
>          p2m_initialise(d, p2m);
> +        if ( cpu_has_vmx && alloc_p2m_hap_data(p2m) )
> +            return -ENOMEM;
>          p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
>          list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
>      }
> @@ -126,12 +128,14 @@ int p2m_init(struct domain *d)
>          return -ENOMEM;
>      }
>      p2m_initialise(d, p2m);
> +    if ( hap_enabled(d) && cpu_has_vmx)
> +        p2m->hap_data = &d->arch.hvm_domain.vmx.ept;
>  
>      /* Must initialise nestedp2m unconditionally
>       * since nestedhvm_enabled(d) returns false here.
>       * (p2m_init runs too early for HVM_PARAM_* options) */
>      rc = p2m_init_nestedp2m(d);
> -    if ( rc ) 
> +    if ( rc )
>          p2m_final_teardown(d);
>      return rc;
>  }
> @@ -354,6 +358,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
>  
>      if ( hap_enabled(d) )
>          iommu_share_p2m_table(d);
> +    if ( p2m_is_nestedp2m(p2m) && hap_enabled(d) )
> +        p2m_init_hap_data(p2m);
>  
>      P2M_PRINTK("populating p2m table\n");
>  
> @@ -436,12 +442,16 @@ void p2m_teardown(struct p2m_domain *p2m)
>  static void p2m_teardown_nestedp2m(struct domain *d)
>  {
>      uint8_t i;
> +    struct p2m_domain *p2m;
>  
>      for (i = 0; i < MAX_NESTEDP2M; i++) {
>          if ( !d->arch.nested_p2m[i] )
>              continue;
> -        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
> -        xfree(d->arch.nested_p2m[i]);
> +        p2m = d->arch.nested_p2m[i];
> +        if ( p2m->hap_data )
> +            free_p2m_hap_data(p2m);
> +        free_cpumask_var(p2m->dirty_cpumask);
> +        xfree(p2m);
>          d->arch.nested_p2m[i] = NULL;
>      }
>  }
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 9a728b6..e6b4e3b 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -56,26 +56,34 @@ struct vmx_msr_state {
>  
>  #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
>  
> -struct vmx_domain {
> -    unsigned long apic_access_mfn;
> -    union {
> -        struct {
> +union eptp_control{
> +    struct {
>              u64 ept_mt :3,
>                  ept_wl :3,
>                  rsvd   :6,
>                  asr    :52;
>          };
>          u64 eptp;
> -    } ept_control;
> +};
> +
> +struct ept_data{
> +    union eptp_control ept_ctl;
>      cpumask_var_t ept_synced;
>  };
>  
> -#define ept_get_wl(d)   \
> -    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
> -#define ept_get_asr(d)  \
> -    ((d)->arch.hvm_domain.vmx.ept_control.asr)
> -#define ept_get_eptp(d) \
> -    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
> +struct vmx_domain {
> +    unsigned long apic_access_mfn;
> +    struct ept_data ept; 
> +};
> +
> +#define ept_get_wl(ept_data)   \
> +    (((struct ept_data*)(ept_data))->ept_ctl.ept_wl)
> +#define ept_get_asr(ept_data)  \
> +    (((struct ept_data*)(ept_data))->ept_ctl.asr)
> +#define ept_get_eptp(ept_data) \
> +    (((struct ept_data*)(ept_data))->ept_ctl.eptp)
> +#define ept_get_synced_mask(ept_data)\
> +    (((struct ept_data*)(ept_data))->ept_synced)
>  
>  struct arch_vmx_struct {
>      /* Virtual address of VMCS. */
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index aa5b080..573a12e 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -333,7 +333,7 @@ static inline void ept_sync_all(void)
>      __invept(INVEPT_ALL_CONTEXT, 0, 0);
>  }
>  
> -void ept_sync_domain(struct domain *d);
> +void ept_sync_domain(struct p2m_domain *p2m);
>  
>  static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
>  {
> @@ -401,6 +401,10 @@ void setup_ept_dump(void);
>  
>  void update_guest_eip(void);
>  
> +int alloc_p2m_hap_data(struct p2m_domain *p2m);
> +void free_p2m_hap_data(struct p2m_domain *p2m);
> +void p2m_init_hap_data(struct p2m_domain *p2m);
> +
>  /* EPT violation qualifications definitions */
>  #define _EPT_READ_VIOLATION         0
>  #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 1807ad6..0fb1b2d 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -277,6 +277,7 @@ struct p2m_domain {
>          mm_lock_t        lock;         /* Locking of private pod structs,   *
>                                          * not relying on the p2m lock.      */
>      } pod;
> +    void *hap_data;
>  };
>  
>  /* get host p2m table */
> -- 
> 1.7.1
> 
> 


From xen-devel-bounces@lists.xen.org Thu Dec 13 16:10:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBM9-0006rZ-W5; Thu, 13 Dec 2012 16:09:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TjBM8-0006rT-GC
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 16:09:52 +0000
Received: from [85.158.139.211:36300] by server-9.bemta-5.messagelabs.com id
	91/92-10690-FCDF9C05; Thu, 13 Dec 2012 16:09:51 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355414969!18602323!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24956 invoked from network); 13 Dec 2012 16:09:30 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-206.messagelabs.com with SMTP;
	13 Dec 2012 16:09:30 -0000
X-TM-IMSS-Message-ID: <18432fe5000370b1@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 18432fe5000370b1 ;
	Thu, 13 Dec 2012 11:09:29 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBDG9SkV001142; 
	Thu, 13 Dec 2012 11:09:28 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Thu, 13 Dec 2012 11:08:47 -0500
Message-Id: <1355414927-12281-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] xenconsoled: use grant references instead of
	map_foreign_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Grant references for the xenstore and xenconsole shared pages exist, but
currently only xenstore uses these references.  Change the xenconsole
daemon to prefer using the grant reference over map_foreign_range when
mapping the shared console ring.

This allows xenconsoled to be run in a domain other than dom0 if set up
correctly - for libxl, the xenstore path /tool/xenconsoled/domid
specifies the domain containing xenconsoled.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 tools/console/daemon/io.c | 45 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 39 insertions(+), 6 deletions(-)

diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index 48fe151..e24247d 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -24,6 +24,7 @@
 #include "io.h"
 #include <xenstore.h>
 #include <xen/io/console.h>
+#include <xen/grant_table.h>
 
 #include <stdlib.h>
 #include <errno.h>
@@ -69,6 +70,7 @@ static int log_hv_fd = -1;
 static evtchn_port_or_error_t log_hv_evtchn = -1;
 static xc_interface *xch; /* why does xenconsoled have two xc handles ? */
 static xc_evtchn *xce_handle = NULL;
+static xc_gnttab *xcg_handle = NULL;
 
 struct buffer {
 	char *data;
@@ -501,6 +503,17 @@ static int xs_gather(struct xs_handle *xs, const char *dir, ...)
 	va_end(ap);
 	return ret;
 }
+
+static void domain_unmap_interface(struct domain *dom)
+{
+	if (dom->interface == NULL)
+		return;
+	if (xcg_handle && dom->ring_ref == -1)
+		xc_gnttab_munmap(xcg_handle, dom->interface, 1);
+	else
+		munmap(dom->interface, getpagesize());
+	dom->interface = NULL;
+}
  
 static int domain_create_ring(struct domain *dom)
 {
@@ -522,9 +535,19 @@ static int domain_create_ring(struct domain *dom)
 	}
 	free(type);
 
-	if (ring_ref != dom->ring_ref) {
-		if (dom->interface != NULL)
-			munmap(dom->interface, getpagesize());
+	/* If using ring_ref and it has changed, remap */
+	if (ring_ref != dom->ring_ref && dom->ring_ref != -1)
+		domain_unmap_interface(dom);
+
+	if (!dom->interface && xcg_handle) {
+		/* Prefer using grant table */
+		dom->interface = xc_gnttab_map_grant_ref(xcg_handle,
+			dom->domid, GNTTAB_RESERVED_CONSOLE,
+			PROT_READ|PROT_WRITE);
+		dom->ring_ref = -1;
+	}
+	if (!dom->interface) {
+		/* Fall back to xc_map_foreign_range */
 		dom->interface = xc_map_foreign_range(
 			xc, dom->domid, getpagesize(),
 			PROT_READ|PROT_WRITE,
@@ -720,9 +743,7 @@ static void shutdown_domain(struct domain *d)
 {
 	d->is_dead = true;
 	watch_domain(d, false);
-	if (d->interface != NULL)
-		munmap(d->interface, getpagesize());
-	d->interface = NULL;
+	domain_unmap_interface(d);
 	if (d->xce_handle != NULL)
 		xc_evtchn_close(d->xce_handle);
 	d->xce_handle = NULL;
@@ -736,6 +757,13 @@ void enum_domains(void)
 	xc_dominfo_t dominfo;
 	struct domain *dom;
 
+	if (enum_pass == 0) {
+		xcg_handle = xc_gnttab_open(NULL, 0);
+		if (xcg_handle == NULL) {
+			dolog(LOG_DEBUG, "Failed to open xcg handle: %d (%s)",
+				  errno, strerror(errno));
+		}
+	}
 	enum_pass++;
 
 	while (xc_domain_getinfo(xc, domid, 1, &dominfo) == 1) {
@@ -946,6 +974,7 @@ void handle_io(void)
 			      errno, strerror(errno));
 			goto out;
 		}
+
 		log_hv_fd = create_hv_log();
 		if (log_hv_fd == -1)
 			goto out;
@@ -1097,6 +1126,10 @@ void handle_io(void)
 		xc_evtchn_close(xce_handle);
 		xce_handle = NULL;
 	}
+	if (xcg_handle != NULL) {
+		xc_gnttab_close(xcg_handle);
+		xcg_handle = NULL;
+	}
 	log_hv_evtchn = -1;
 }
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:17:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBTD-0007GJ-4w; Thu, 13 Dec 2012 16:17:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjBTB-0007GE-Mc
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 16:17:09 +0000
Received: from [85.158.143.35:39180] by server-3.bemta-4.messagelabs.com id
	AB/7F-18211-48FF9C05; Thu, 13 Dec 2012 16:17:08 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355415367!5156028!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9904 invoked from network); 13 Dec 2012 16:16:08 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:16:08 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjBSB-000KnF-CJ; Thu, 13 Dec 2012 16:16:07 +0000
Date: Thu, 13 Dec 2012 16:16:07 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213161607.GL75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-7-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-7-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 06/11] nEPT: Try to enable EPT paging for L2
	guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191038), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Once EPT is found to be enabled by the L1 VMM, enable nested EPT
> support for the L2 guest.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Acked-by: Tim Deegan <tim@xen.org>
(though strictly speaking this isn't x86/mm code)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:18:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:18:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBU6-0007JC-JC; Thu, 13 Dec 2012 16:18:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjBU5-0007J3-0x
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 16:18:05 +0000
Received: from [85.158.137.99:12947] by server-1.bemta-3.messagelabs.com id
	BB/A6-08906-49FF9C05; Thu, 13 Dec 2012 16:17:24 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-217.messagelabs.com!1355415443!13844728!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2344 invoked from network); 13 Dec 2012 16:17:24 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:17:24 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjBTP-000Knb-HX; Thu, 13 Dec 2012 16:17:23 +0000
Date: Thu, 13 Dec 2012 16:17:23 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213161723.GM75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-8-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-8-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 07/11] nEPT: Sync PDPTR fields if L2 guest
	in PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191039), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> For a PAE L2 guest, the GUEST_PDPTR registers need to be synced on each
> virtual vmentry.
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |   10 ++++++++--
>  1 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index ab68b52..3fc128b 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -824,9 +824,15 @@ static void load_shadow_guest_state(struct vcpu *v)
>      vvmcs_to_shadow(vvmcs, CR0_READ_SHADOW);
>      vvmcs_to_shadow(vvmcs, CR4_READ_SHADOW);
>      vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
> -    vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);

Did you really mean to remove this line as well?  If so, it'll need some
explanation in the checkin description.

Tim.

>  
> -    /* TODO: PDPTRs for nested ept */
> +    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
> +                    (v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
> +    }
> +
>      /* TODO: CR3 target control */
>  }
>  
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:35:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBkU-0007r8-8v; Thu, 13 Dec 2012 16:35:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TjBkS-0007qs-LX
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 16:35:00 +0000
Received: from [85.158.139.83:2946] by server-12.bemta-5.messagelabs.com id
	84/4C-02275-3B30AC05; Thu, 13 Dec 2012 16:34:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355416497!28002407!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTE1Njcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32244 invoked from network); 13 Dec 2012 16:34:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 16:34:59 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBDGYYln022519
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Dec 2012 16:34:35 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBDGYYo7010908
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 13 Dec 2012 16:34:34 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBDGYXB2029278; Thu, 13 Dec 2012 10:34:34 -0600
Received: from localhost.localdomain (/208.54.5.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 13 Dec 2012 08:34:33 -0800
Date: Thu, 13 Dec 2012 11:34:27 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Message-ID: <20121213163426.GD20661@localhost.localdomain>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
	<40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, 2012 at 01:03:55AM +0000, Xu, Dongxiao wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Wednesday, December 12, 2012 1:07 AM
> > To: Xu, Dongxiao
> > Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for map_sg
> > hook
> > 
> > On Tue, Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> > > > -----Original Message-----
> > > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > > Sent: Friday, December 07, 2012 10:09 PM
> > > > To: Xu, Dongxiao
> > > > Cc: xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH] xen/swiotlb: Exchange to contiguous memory for
> > > > map_sg hook
> > > >
> > > > On Thu, Dec 06, 2012 at 09:08:42PM +0800, Dongxiao Xu wrote:
> > > > > While mapping sg buffers, a check for DMA buffers that cross a
> > > > > page boundary is also needed. If a guest DMA buffer crosses a
> > > > > page boundary, Xen should exchange contiguous memory for it.
> > > >
> > > > So this is when we cross those 2MB contiguous swaths of buffers.
> > > > Wouldn't we get the same problem with the 'map_page' call? If the
> > > > driver tried to map, say, a 4MB DMA region?
> > >
> > > Yes, it also needs such a check, as I just replied in Jan's mail.
> > >
> > > >
> > > > What if this check were done in the routines that provide the
> > > > software static buffers, and they tried to provide a nice
> > > > DMA-contiguous swath of pages?
> > >
> > > Yes, this approach also came to mind, but it needs modifications to
> > > the driver itself.
> > > If so, the driver must not use such static buffers (e.g., from
> > > kmalloc) for DMA, even if the buffer is physically contiguous on
> > > native hardware.
> > 
> > I am a bit lost here.
> > 
> > Is the issue you found only with drivers that do not use DMA API?
> > Can you perhaps point me to the code that triggered this fix in the first place?
> 
> Yes, we met this issue on a specific SAS device/driver that calls into the libata core; see function ata_dev_read_id(), called from ata_dev_reread_id(), in drivers/ata/libata-core.c.
> 
> In the above function, the target buffer is (void *)dev->link->ap->sector_buf, which is a 512-byte static buffer that unfortunately crosses a page boundary.

Hm, that looks like a DMA API violation. If you ran the code with
CONFIG_DMA_API_DEBUG enabled, did it complain about this? I recall the
floppy driver doing something similar, and the DMA debug code was quite
verbose in pointing it out.

> 
> > > Is this acceptable to kernel/driver upstream?
> > 
> > I am still not completely clear on what you had in mind. The one method I
> > thought about that might help here is to have Xen-SWIOTLB track which
> > memory ranges were exchanged (so xen_swiotlb_fixup would save the *buf
> > and the size for each call to xen_create_contiguous_region in a list or array).
> > 
> > When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called, they would
> > consult said array/list to see if the region they retrieved crosses said 2MB
> > chunks. If so... and here I am unsure what would be the best way to proceed.
> 
> We thought we can solve the issue in several ways:
> 
> 1) Like the previous patch I sent out, we check the DMA region in xen_swiotlb_map_page() and xen_swiotlb_map_sg_attr(), and if the DMA region crosses a page boundary, we exchange the memory and copy the content. However, this has a race condition: while the memory content is being copied (we introduced two memory copies in the patch), other code may also access the page and observe incorrect values.
> 2) Mostly the same as item 1); the only difference is that we do the memory content copy inside the Xen hypervisor rather than in Dom0. This requires adding a flag to the XENMEM_exchange hypercall to indicate that the memory contents should be moved.
> 3) As you also mentioned, this is not a common case; it is only triggered by specific devices/drivers. We could fix the affected drivers to avoid DMA into static buffers, like (void *)dev->link->ap->sector_buf in the above case. But I am not sure whether that is acceptable to kernel/driver upstream.
> 
> Thanks,
> Dongxiao
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:39:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBoN-00081y-Uo; Thu, 13 Dec 2012 16:39:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TjBoM-00081s-2v
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 16:39:02 +0000
Received: from [85.158.138.51:59726] by server-9.bemta-3.messagelabs.com id
	D9/7F-11948-5A40AC05; Thu, 13 Dec 2012 16:39:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355416738!28743854!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjAxMDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31789 invoked from network); 13 Dec 2012 16:38:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 16:38:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBDGcsO7032322
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Dec 2012 16:38:55 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBDGcrS5023076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 13 Dec 2012 16:38:54 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBDGcr1p000632; Thu, 13 Dec 2012 10:38:53 -0600
Received: from localhost.localdomain (/208.54.5.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 13 Dec 2012 08:38:53 -0800
Date: Thu, 13 Dec 2012 11:38:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mats Petersson <mats.petersson@citrix.com>
Message-ID: <20121213163846.GF20661@localhost.localdomain>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Ian.Campbell@citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, JBeulich@suse.com,
	david.vrabel@citrix.com, mats@planetcatfish.com
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, 2012 at 10:09:38PM +0000, Mats Petersson wrote:
> One comment asked for more details on the improvements:
> Using a small test program to map Guest memory into Dom0 (repeatedly
> for "Iterations" mapping the same first "Num Pages")

I missed this in my queue for 3.8. I will queue it up next
week and send it to Linus after a review.

> Iterations    Num Pages	   Time 3.7rc4	Time With this patch
> 5000	      4096	   76.107	37.027
> 10000	      2048	   75.703	37.177
> 20000	      1024	   75.893	37.247
> So a little better than twice as fast.
> 
> Using this patch in migration, using "time" to measure the overall
> time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
> memory, one network card, one disk, guest at login prompt):
> Time 3.7rc5		Time With this patch
> 6.697			5.667
> Since migration involves a whole lot of other things, it's only about
> 15% faster - but still a good improvement. Similar measurement with a
> guest that is running code to "dirty" memory shows about 23%
> improvement, as it spends more time copying dirtied memory.
> 
> As discussed elsewhere, a good deal more can be had from improving the
> munmap system call, but it is a little tricky to get this in without
> worsening the non-PVOPS kernel, so I will have another look at this.
> 
> ---
> Update since last posting: 
> I have just run some benchmarks of a 16GB guest, and the improvement
> with this patch is around 23-30% for the overall copy time, and 42%
> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
> memory). I think this is definitely worth having.
> 
> The V4 patch consists of cosmetic adjustments: a spelling mistake in a
> comment fixed, a condition in an if-statement reversed to avoid an
> "else" branch, and a stray empty line added by accident reverted.
> Thanks to Jan Beulich for the comments.
> 
> The V3 of the patch contains suggested improvements from:
> David Vrabel - make it two distinct external functions, doc-comments.
> Ian Campbell - use one common function for the main work.
> Jan Beulich  - found a bug and pointed out some whitespace problems.
> 
> 
> 
> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
> 
> ---
>  arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>  drivers/xen/privcmd.c |   55 +++++++++++++++++----
>  include/xen/xen-ops.h |    5 ++
>  3 files changed, 169 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dcf5f2d..a67774f 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>  #define REMAP_BATCH_SIZE 16
>  
>  struct remap_data {
> -	unsigned long mfn;
> +	unsigned long *mfn;
> +	bool contiguous;
>  	pgprot_t prot;
>  	struct mmu_update *mmu_update;
>  };
> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>  				 unsigned long addr, void *data)
>  {
>  	struct remap_data *rmd = data;
> -	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
> +	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
> +	/* If we have a contiguous range, just update the mfn itself,
> +	   else advance the pointer to the next mfn. */
> +	if (rmd->contiguous)
> +		(*rmd->mfn)++;
> +	else
> +		rmd->mfn++;
>  
>  	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>  	rmd->mmu_update->val = pte_val_ma(pte);
> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>  	return 0;
>  }
>  
> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> -			       unsigned long addr,
> -			       unsigned long mfn, int nr,
> -			       pgprot_t prot, unsigned domid)
> -{
> +/* do_remap_mfn() - helper function to map foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to array
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into
> + * this kernel's memory. The owner of the pages is defined by domid. Where the
> + * pages are mapped is determined by addr, and vma is used for "accounting" of
> + * the pages.
> + *
> + * Return value is zero for success, negative for failure.
> + *
> + * Note that err_ptr is used to indicate whether *mfn
> + * is a list or a "first mfn of a contiguous range". */
> +static int do_remap_mfn(struct vm_area_struct *vma,
> +			unsigned long addr,
> +			unsigned long *mfn, int nr,
> +			int *err_ptr, pgprot_t prot,
> +			unsigned domid)
> +{
> +	int err, last_err = 0;
>  	struct remap_data rmd;
>  	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
> -	int batch;
>  	unsigned long range;
> -	int err = 0;
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return -EINVAL;
> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  
>  	rmd.mfn = mfn;
>  	rmd.prot = prot;
> +	/* We use the err_ptr to indicate whether we are doing a contiguous
> +	 * mapping or a discontiguous mapping. */
> +	rmd.contiguous = !err_ptr;
>  
>  	while (nr) {
> -		batch = min(REMAP_BATCH_SIZE, nr);
> +		int index = 0;
> +		int done = 0;
> +		int batch = min(REMAP_BATCH_SIZE, nr);
> +		int batch_left = batch;
>  		range = (unsigned long)batch << PAGE_SHIFT;
>  
>  		rmd.mmu_update = mmu_update;
> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  		if (err)
>  			goto out;
>  
> -		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> -		if (err < 0)
> -			goto out;
> +		/* We record the error for each page that gives an error, but
> +		 * continue mapping until the whole set is done */
> +		do {
> +			err = HYPERVISOR_mmu_update(&mmu_update[index],
> +						    batch_left, &done, domid);
> +			if (err < 0) {
> +				if (!err_ptr)
> +					goto out;
> +				/* increment done so we skip the error item */
> +				done++;
> +				last_err = err_ptr[index] = err;
> +			}
> +			batch_left -= done;
> +			index += done;
> +		} while (batch_left);
>  
>  		nr -= batch;
>  		addr += range;
> +		if (err_ptr)
> +			err_ptr += batch;
>  	}
>  
> -	err = 0;
> +	err = last_err;
>  out:
>  
>  	xen_flush_tlb_all();
>  
>  	return err;
>  }
> +
> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
> + * @vma:   the vma for the pages to be mapped into
> + * @addr:  the address at which to map the pages
> + * @mfn:   the first MFN to map
> + * @nr:    the number of consecutive mfns to map
> + * @prot:  page protection mask
> + * @domid: id of the domain that we are mapping from
> + *
> + * This function takes an mfn and maps nr pages starting from it into this
> + * memory. The owner of the pages is defined by domid. Where the pages are
> + * mapped is determined by addr, and vma is used for "accounting" of the
> + * pages.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + */
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long mfn, int nr,
> +			       pgprot_t prot, unsigned domid)
> +{
> +	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
> +}
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to array of integers, one per MFN, for an error
> + *           value for each page. The err_ptr must not be NULL.
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into this
> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
> + * are mapped is determined by addr, and vma is used for "accounting" of the
> + * pages. The err_ptr array is filled in on any page that is not successfully
> + * mapped in.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + * Note that the error value -ENOENT is considered a "retry", so when this
> + * error code is seen, another call should be made with the list of pages that
> + * are marked as -ENOENT in the err_ptr array.
> + */
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long *mfn, int nr,
> +			       int *err_ptr, pgprot_t prot,
> +			       unsigned domid)
> +{
> +	/* We BUG_ON because passing a NULL err_ptr is a programmer error,
> +	 * and it would later be quite hard to detect that this was the
> +	 * actual cause of "wrong memory was mapped in".
> +	 */
> +	BUG_ON(err_ptr == NULL);
> +	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 71f5c45..75f6e86 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>  	return ret;
>  }
>  
> +/*
> + * Similar to traverse_pages, but use each page as a "block" of
> + * data to be processed as one unit.
> + */
> +static int traverse_pages_block(unsigned nelem, size_t size,
> +				struct list_head *pos,
> +				int (*fn)(void *data, int nr, void *state),
> +				void *state)
> +{
> +	void *pagedata;
> +	unsigned pageidx;
> +	int ret = 0;
> +
> +	BUG_ON(size > PAGE_SIZE);
> +
> +	pageidx = PAGE_SIZE;
> +
> +	while (nelem) {
> +		int nr = (PAGE_SIZE/size);
> +		struct page *page;
> +		if (nr > nelem)
> +			nr = nelem;
> +		pos = pos->next;
> +		page = list_entry(pos, struct page, lru);
> +		pagedata = page_address(page);
> +		ret = (*fn)(pagedata, nr, state);
> +		if (ret)
> +			break;
> +		nelem -= nr;
> +	}
> +
> +	return ret;
> +}
> +
>  struct mmap_mfn_state {
>  	unsigned long va;
>  	struct vm_area_struct *vma;
> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>  	 *      0 for no errors
>  	 *      1 if at least one error has happened (and no
>  	 *          -ENOENT errors have happened)
> -	 *      -ENOENT if at least 1 -ENOENT has happened.
> +	 *      -ENOENT if at least one -ENOENT has happened.
>  	 */
>  	int global_error;
>  	/* An array for individual errors */
> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>  	xen_pfn_t __user *user_mfn;
>  };
>  
> -static int mmap_batch_fn(void *data, void *state)
> +static int mmap_batch_fn(void *data, int nr, void *state)
>  {
>  	xen_pfn_t *mfnp = data;
>  	struct mmap_batch_state *st = state;
>  	int ret;
>  
> -	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -					 st->vma->vm_page_prot, st->domain);
> +	BUG_ON(nr < 0);
>  
> -	/* Store error code for second pass. */
> -	*(st->err++) = ret;
> +	ret = xen_remap_domain_mfn_array(st->vma,
> +					 st->va & PAGE_MASK,
> +					 mfnp, nr,
> +					 st->err,
> +					 st->vma->vm_page_prot,
> +					 st->domain);
>  
>  	/* And see if it affects the global_error. */
>  	if (ret < 0) {
> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>  				st->global_error = 1;
>  		}
>  	}
> -	st->va += PAGE_SIZE;
> +	st->va += PAGE_SIZE * nr;
>  
>  	return 0;
>  }
> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>  	state.err           = err_array;
>  
>  	/* mmap_batch_fn guarantees ret == 0 */
> -	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> -			     &pagelist, mmap_batch_fn, &state));
> +	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> +				    &pagelist, mmap_batch_fn, &state));
>  
>  	up_write(&mm->mmap_sem);
>  
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 6a198e4..22cad75 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long mfn, int nr,
>  			       pgprot_t prot, unsigned domid);
>  
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long *mfn, int nr,
> +			       int *err_ptr, pgprot_t prot,
> +			       unsigned domid);
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 1.7.9.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:39:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBoN-00081y-Uo; Thu, 13 Dec 2012 16:39:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TjBoM-00081s-2v
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 16:39:02 +0000
Received: from [85.158.138.51:59726] by server-9.bemta-3.messagelabs.com id
	D9/7F-11948-5A40AC05; Thu, 13 Dec 2012 16:39:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355416738!28743854!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjAxMDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31789 invoked from network); 13 Dec 2012 16:38:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 16:38:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBDGcsO7032322
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Dec 2012 16:38:55 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBDGcrS5023076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 13 Dec 2012 16:38:54 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBDGcr1p000632; Thu, 13 Dec 2012 10:38:53 -0600
Received: from localhost.localdomain (/208.54.5.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 13 Dec 2012 08:38:53 -0800
Date: Thu, 13 Dec 2012 11:38:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mats Petersson <mats.petersson@citrix.com>
Message-ID: <20121213163846.GF20661@localhost.localdomain>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Ian.Campbell@citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, JBeulich@suse.com,
	david.vrabel@citrix.com, mats@planetcatfish.com
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, 2012 at 10:09:38PM +0000, Mats Petersson wrote:
> One comment asked for more details on the improvements:
> Using a small test program to map Guest memory into Dom0 (repeatedly
> for "Iterations" mapping the same first "Num Pages")

I missed this in my queue for 3.8. I will queue it up next
week and send it to Linus after a review.

> Iterations    Num Pages	   Time 3.7rc4	Time With this patch
> 5000	      4096	   76.107	37.027
> 10000	      2048	   75.703	37.177
> 20000	      1024	   75.893	37.247
> So a little better than twice as fast.
> 
> Using this patch in migration, using "time" to measure the overall
> time it takes to migrate a guest (Debian Squeeze 6.0, 1024MB guest
> memory, one network card, one disk, guest at login prompt):
> Time 3.7rc5		Time With this patch
> 6.697			5.667
> Since migration involves a whole lot of other things, it's only about
> 15% faster - but still a good improvement. Similar measurement with a
> guest that is running code to "dirty" memory shows about 23%
> improvement, as it spends more time copying dirtied memory.
> 
> As discussed elsewhere, a good deal more can be had from improving the
> munmap system call, but it is a little tricky to get this in without
> worsening the non-PVOPS kernel, so I will have another look at this.
> 
> ---
> Update since last posting: 
> I have just run some benchmarks of a 16GB guest, and the improvement
> with this patch is around 23-30% for the overall copy time, and 42%
> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
> memory). I think this is definitely worth having.
> 
> The V4 patch consists of cosmetic adjustments (a spelling mistake in a
> comment fixed, a condition in an if-statement reversed to avoid an
> "else" branch, and a random empty line added by accident reverted to
> its previous state). Thanks to Jan Beulich for the comments.
> 
> The V3 of the patch contains suggested improvements from:
> David Vrabel - make it two distinct external functions, doc-comments.
> Ian Campbell - use one common function for the main work.
> Jan Beulich  - found a bug and pointed out some whitespace problems.
> 
> 
> 
> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
> 
> ---
>  arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>  drivers/xen/privcmd.c |   55 +++++++++++++++++----
>  include/xen/xen-ops.h |    5 ++
>  3 files changed, 169 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dcf5f2d..a67774f 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>  #define REMAP_BATCH_SIZE 16
>  
>  struct remap_data {
> -	unsigned long mfn;
> +	unsigned long *mfn;
> +	bool contiguous;
>  	pgprot_t prot;
>  	struct mmu_update *mmu_update;
>  };
> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>  				 unsigned long addr, void *data)
>  {
>  	struct remap_data *rmd = data;
> -	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
> +	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
> +	/* If we have a contiguous range, just update the mfn itself,
> +	   else update the pointer to point at the next mfn. */
> +	if (rmd->contiguous)
> +		(*rmd->mfn)++;
> +	else
> +		rmd->mfn++;
>  
>  	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>  	rmd->mmu_update->val = pte_val_ma(pte);
> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>  	return 0;
>  }
>  
> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> -			       unsigned long addr,
> -			       unsigned long mfn, int nr,
> -			       pgprot_t prot, unsigned domid)
> -{
> +/* do_remap_mfn() - helper function to map foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to an array of per-page error codes, or NULL
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into
> + * this kernel's memory. The owner of the pages is defined by domid. Where the
> + * pages are mapped is determined by addr, and vma is used for "accounting" of
> + * the pages.
> + *
> + * Return value is zero for success, negative for failure.
> + *
> + * Note that err_ptr is used to indicate whether *mfn
> + * is a list or a "first mfn of a contiguous range". */
> +static int do_remap_mfn(struct vm_area_struct *vma,
> +			unsigned long addr,
> +			unsigned long *mfn, int nr,
> +			int *err_ptr, pgprot_t prot,
> +			unsigned domid)
> +{
> +	int err, last_err = 0;
>  	struct remap_data rmd;
>  	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
> -	int batch;
>  	unsigned long range;
> -	int err = 0;
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return -EINVAL;
> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  
>  	rmd.mfn = mfn;
>  	rmd.prot = prot;
> +	/* We use the err_ptr to indicate whether we are doing a contiguous
> +	 * mapping or a discontiguous mapping. */
> +	rmd.contiguous = !err_ptr;
>  
>  	while (nr) {
> -		batch = min(REMAP_BATCH_SIZE, nr);
> +		int index = 0;
> +		int done = 0;
> +		int batch = min(REMAP_BATCH_SIZE, nr);
> +		int batch_left = batch;
>  		range = (unsigned long)batch << PAGE_SHIFT;
>  
>  		rmd.mmu_update = mmu_update;
> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  		if (err)
>  			goto out;
>  
> -		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> -		if (err < 0)
> -			goto out;
> +		/* We record the error for each page that gives an error, but
> +		 * continue mapping until the whole set is done */
> +		do {
> +			err = HYPERVISOR_mmu_update(&mmu_update[index],
> +						    batch_left, &done, domid);
> +			if (err < 0) {
> +				if (!err_ptr)
> +					goto out;
> +				/* increment done so we skip the error item */
> +				done++;
> +				last_err = err_ptr[index] = err;
> +			}
> +			batch_left -= done;
> +			index += done;
> +		} while (batch_left);
>  
>  		nr -= batch;
>  		addr += range;
> +		if (err_ptr)
> +			err_ptr += batch;
>  	}
>  
> -	err = 0;
> +	err = last_err;
>  out:
>  
>  	xen_flush_tlb_all();
>  
>  	return err;
>  }
> +
> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
> + * @vma:   the vma for the pages to be mapped into
> + * @addr:  the address at which to map the pages
> + * @mfn:   the first MFN to map
> + * @nr:    the number of consecutive mfns to map
> + * @prot:  page protection mask
> + * @domid: id of the domain that we are mapping from
> + *
> + * This function takes an mfn and maps nr consecutive pages starting from it
> + * into this kernel's memory. The owner of the pages is defined by domid.
> + * Where the pages are mapped is determined by addr, and vma is used for
> + * "accounting" of the pages.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + */
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long mfn, int nr,
> +			       pgprot_t prot, unsigned domid)
> +{
> +	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
> +}
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number of entries in the MFN array
> + * @err_ptr: pointer to array of integers, one per MFN, for an error
> + *           value for each page. The err_ptr must not be NULL.
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + *
> + * This function takes an array of mfns and maps nr pages from that into this
> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
> + * are mapped is determined by addr, and vma is used for "accounting" of the
> + * pages. The err_ptr array is filled in on any page that is not successfully
> + * mapped in.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + * Note that the error value -ENOENT is considered a "retry", so when this
> + * error code is seen, another call should be made with the list of pages that
> + * are marked as -ENOENT in the err_ptr array.
> + */
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long *mfn, int nr,
> +			       int *err_ptr, pgprot_t prot,
> +			       unsigned domid)
> +{
> +	/* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
> +	 * and it would be quite hard later to detect that this was the actual
> +	 * cause of "wrong memory was mapped in".
> +	 */
> +	BUG_ON(err_ptr == NULL);
> +	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 71f5c45..75f6e86 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>  	return ret;
>  }
>  
> +/*
> + * Similar to traverse_pages, but use each page as a "block" of
> + * data to be processed as one unit.
> + */
> +static int traverse_pages_block(unsigned nelem, size_t size,
> +				struct list_head *pos,
> +				int (*fn)(void *data, int nr, void *state),
> +				void *state)
> +{
> +	void *pagedata;
> +	unsigned pageidx;
> +	int ret = 0;
> +
> +	BUG_ON(size > PAGE_SIZE);
> +
> +	pageidx = PAGE_SIZE;
> +
> +	while (nelem) {
> +		int nr = (PAGE_SIZE/size);
> +		struct page *page;
> +		if (nr > nelem)
> +			nr = nelem;
> +		pos = pos->next;
> +		page = list_entry(pos, struct page, lru);
> +		pagedata = page_address(page);
> +		ret = (*fn)(pagedata, nr, state);
> +		if (ret)
> +			break;
> +		nelem -= nr;
> +	}
> +
> +	return ret;
> +}
> +
>  struct mmap_mfn_state {
>  	unsigned long va;
>  	struct vm_area_struct *vma;
> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>  	 *      0 for no errors
>  	 *      1 if at least one error has happened (and no
>  	 *          -ENOENT errors have happened)
> -	 *      -ENOENT if at least 1 -ENOENT has happened.
> +	 *      -ENOENT if at least one -ENOENT has happened.
>  	 */
>  	int global_error;
>  	/* An array for individual errors */
> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>  	xen_pfn_t __user *user_mfn;
>  };
>  
> -static int mmap_batch_fn(void *data, void *state)
> +static int mmap_batch_fn(void *data, int nr, void *state)
>  {
>  	xen_pfn_t *mfnp = data;
>  	struct mmap_batch_state *st = state;
>  	int ret;
>  
> -	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -					 st->vma->vm_page_prot, st->domain);
> +	BUG_ON(nr < 0);
>  
> -	/* Store error code for second pass. */
> -	*(st->err++) = ret;
> +	ret = xen_remap_domain_mfn_array(st->vma,
> +					 st->va & PAGE_MASK,
> +					 mfnp, nr,
> +					 st->err,
> +					 st->vma->vm_page_prot,
> +					 st->domain);
>  
>  	/* And see if it affects the global_error. */
>  	if (ret < 0) {
> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>  				st->global_error = 1;
>  		}
>  	}
> -	st->va += PAGE_SIZE;
> +	st->va += PAGE_SIZE * nr;
>  
>  	return 0;
>  }
> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>  	state.err           = err_array;
>  
>  	/* mmap_batch_fn guarantees ret == 0 */
> -	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> -			     &pagelist, mmap_batch_fn, &state));
> +	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> +				    &pagelist, mmap_batch_fn, &state));
>  
>  	up_write(&mm->mmap_sem);
>  
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 6a198e4..22cad75 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long mfn, int nr,
>  			       pgprot_t prot, unsigned domid);
>  
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long *mfn, int nr,
> +			       int *err_ptr, pgprot_t prot,
> +			       unsigned domid);
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 1.7.9.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:43:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:43:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjBsu-0008Tc-NP; Thu, 13 Dec 2012 16:43:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjBss-0008TU-HN
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 16:43:42 +0000
Received: from [85.158.137.99:33254] by server-8.bemta-3.messagelabs.com id
	35/25-01297-DB50AC05; Thu, 13 Dec 2012 16:43:41 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355417020!19307540!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21668 invoked from network); 13 Dec 2012 16:43:40 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:43:40 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjBso-000L21-Bq; Thu, 13 Dec 2012 16:43:38 +0000
Date: Thu, 13 Dec 2012 16:43:38 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213164338.GN75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-9-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-9-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 08/11] nEPT: Use minimal permission for
	nested p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191040), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Emulate the permission check for the nested p2m. The current solution is
> to use minimal permissions and, once a permission violation is met in L0,
> determine whether it is caused by the guest EPT or the host EPT.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

> --- a/xen/arch/x86/hvm/svm/nestedsvm.c
> +++ b/xen/arch/x86/hvm/svm/nestedsvm.c
> @@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
>   */
>  int
>  nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
> -                      unsigned int *page_order,
> +                      unsigned int *page_order, uint8_t *p2m_acc,
>                        bool_t access_r, bool_t access_w, bool_t access_x)

I don't like these interface changes (see below) but if we do have them,
at least make the SVM version use p2m_access_rwx, to match the old
behaviour, rather than letting it use an uninitialised stack variable. :)

> @@ -250,10 +251,13 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>  
>      page_order_20 = min(page_order_21, page_order_10);
>  
> +    if (p2ma_10 > p2m_access_rwx)
> +        p2ma_10 = p2m_access_rwx;

That's plain wrong.  If the access type is p2m_access_rx2rw, this will
give the guest write access to what ought to be a read-only page. 

I think it would be best to leave the p2m-access stuff to the p2m
walkers, and not add all those extra p2ma arguments.  Instead, just use
the _actual_ access permissions of this fault as the p2ma.  That way
you know you have something that's acceptable to both p2m tables.

I guess that will mean some extra faults on read-then-write behaviour.
If those are measurable, we could look at pulling the p2m-access types
out like this, but you'll have to explicitly handle all the special
types.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:53:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjC2T-0000PZ-18; Thu, 13 Dec 2012 16:53:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1TjC2R-0000PR-SP
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 16:53:36 +0000
Received: from [85.158.143.35:33584] by server-3.bemta-4.messagelabs.com id
	F6/4F-18211-F080AC05; Thu, 13 Dec 2012 16:53:35 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355417613!14063742!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6157 invoked from network); 13 Dec 2012 16:53:34 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-3.tower-21.messagelabs.com with SMTP;
	13 Dec 2012 16:53:34 -0000
Received: from [137.65.222.57] ([137.65.222.57])
	by mail.novell.com with ESMTP; Thu, 13 Dec 2012 09:53:22 -0700
Message-ID: <50CA0801.2040901@suse.com>
Date: Thu, 13 Dec 2012 09:53:21 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20678.5159.946248.90947@mariner.uk.xensource.com>	<1355158624-11163-2-git-send-email-ian.jackson@eu.citrix.com>	<50C7B974.4050706@suse.com>	<20680.47971.962603.851882@mariner.uk.xensource.com>	<50C8BE3F.4040402@suse.com>	<20680.49391.646654.814456@mariner.uk.xensource.com>	<50C8C665.2030202@suse.com>	<1355394576.10554.62.camel@zakaz.uk.xensource.com>	<50C9F9EA.7020408@suse.com>
	<20681.64322.738705.484310@mariner.uk.xensource.com>
In-Reply-To: <20681.64322.738705.484310@mariner.uk.xensource.com>
Cc: Bamvor Jian Zhang <bjzhang@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback
 race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 2/2] libxl: fix stale timeout event callback race"):
>   
>> Ian Campbell wrote:
>>     
>>> One option is to add new hooks which libxl can call to take/release the
>>> application's event loop lock + a LIBXL_HAVE_EVENT_LOOP_LOCK define so
>>> the application can conditionally provide them.
>>>       
>> libvirt's event loop lock is private to the event impl and not exposed
>> to its numerous users.
>>     
>
> Right.  I still think it might be useful to provide a way for a
> consenting application to allow libxl to use the application's event
> loop lock (perhaps, its single giant lock) as the ctx lock.  If it had
> been possible in this case it would have eliminated these particular
> races, so it's a benefit for those applications.  And the extra
> complexity doesn't seem likely to introduce other bugs.
>
> But I think we should fault that feature in when we have a potential
> user for it, and from what you say that's not libvirt.
>   

Correct.  That approach doesn't really fit with libvirt's generic event
loop used by the various drivers.  I suppose the libxl driver could have
a private event loop, but I'd prefer to keep the pattern used by other
drivers.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:57:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjC5m-0000Ws-8Y; Thu, 13 Dec 2012 16:57:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjC5k-0000Wk-0w
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 16:57:00 +0000
Received: from [85.158.143.35:41661] by server-3.bemta-4.messagelabs.com id
	68/23-18211-BD80AC05; Thu, 13 Dec 2012 16:56:59 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355417807!16213414!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8922 invoked from network); 13 Dec 2012 16:56:48 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 16:56:48 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjC5X-000L80-05; Thu, 13 Dec 2012 16:56:47 +0000
Date: Thu, 13 Dec 2012 16:56:46 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213165646.GO75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-10-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-10-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 09/11] nEPT: handle invept instruction from
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191041), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Add the INVEPT instruction emulation logic.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Looks fine, but you have some whitespace problems...

> +int nvmx_handle_invept(struct cpu_user_regs *regs)
> +{
> +    struct vmx_inst_decoded decode;
> +    unsigned long eptp;
> +    u64 inv_type;
> +
> +    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
> +             != X86EMUL_OKAY )
> +        return X86EMUL_EXCEPTION;
> +
> +    inv_type = reg_read(regs, decode.reg2);
> +    gdprintk(XENLOG_DEBUG,"inv_type:%ld, eptp:%lx\n", inv_type, eptp);
> +
> +    switch (inv_type){

here

> +    case INVEPT_SINGLE_CONTEXT:
> +        {
> +            struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
> +            if ( p2m )
> +            {
> +	            p2m_flush(current, p2m);
> +		        ept_sync_domain(p2m);

and again here (hard tabs)

> +            }
> +        }
> +		break;

and again.

With those fixed, Acked-by: Tim Deegan <tim@xen.org>
(again with the caveat that this isn't under x86/mm)

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 16:57:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 16:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjC5z-0000Y2-Qi; Thu, 13 Dec 2012 16:57:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1TjC5x-0000Xi-Oe; Thu, 13 Dec 2012 16:57:14 +0000
Received: from [85.158.138.51:61268] by server-15.bemta-3.messagelabs.com id
	3C/B0-07921-2E80AC05; Thu, 13 Dec 2012 16:57:06 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355417776!20646930!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17052 invoked from network); 13 Dec 2012 16:56:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 16:56:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="127668"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 16:56:16 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 16:56:15 +0000
Message-ID: <50CA08AE.80102@citrix.com>
Date: Thu, 13 Dec 2012 17:56:14 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com>
In-Reply-To: <50C879FB.7060208@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: [Xen-devel] Handling iSCSI block devices (Was: Driver domains and
	device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

After doing some more research I now understand a little better how the
Open-iSCSI initiator works (which I think is the most common initiator
in the Linux world, and should be supported by all distros). This is
loosely derived from the driver domains protocol proposal: if we are
going to implement a driver domain protocol we should try to handle
device kinds that have a two-phase connection mechanism, and iSCSI looks
like the most interesting candidate (from my POV).

I would like to implement iSCSI support in libxl so that we have at
least one device kind that makes use of this two-phase connection
mechanism, and then draft a driver domain communication protocol, since
we will already have a device that needs this kind of protocol (it
seemed strange to implement a two-phase protocol without having any
device that needed it).

This is the very simple scheme of the two phases of connecting an iSCSI
device:

The first phase of connecting an iSCSI device consists of discovering
it, which can be done before entering the blackout phase of migration:

iscsiadm -m discovery -t st -p <ip>:<port>

And possibly setting the right authentication method:

iscsiadm -m node --targetname <iqn> -p <ip>:<port> --op=update --name
node.session.auth.authmethod --value=<auth_method>
iscsiadm -m node --targetname <iqn> -p <ip>:<port> --op=update --name
node.session.auth.username --value=<user>
iscsiadm -m node --targetname <iqn> -p <ip>:<port> --op=update --name
node.session.auth.password --value=<password>

The second phase is the actual device plug:

iscsiadm -m node --targetname <iqn> -p <ip>:<port> --login
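
[Editorial illustration] Put together, the two phases might be wrapped in a script along the following lines. The portal, IQN, and CHAP values are made-up examples, and this is not an existing libxl hotplug script; DRYRUN=echo (the default here) prints the iscsiadm invocations instead of running them:

```shell
#!/bin/sh
# Two-phase iSCSI attach sketch; set DRYRUN= to really run iscsiadm.
DRYRUN="${DRYRUN:-echo}"

PORTAL="127.0.0.1:3260"                     # example <ip>:<port>
IQN="iqn.2012-12.com.example:lun1"          # example target IQN

# Phase 1: discovery and auth setup (can happen before the blackout
# phase of migration).
$DRYRUN iscsiadm -m discovery -t st -p "$PORTAL"
$DRYRUN iscsiadm -m node --targetname "$IQN" -p "$PORTAL" --op=update \
    --name node.session.auth.authmethod --value=CHAP

# Phase 2: the actual device plug; login creates the block device.
$DRYRUN iscsiadm -m node --targetname "$IQN" -p "$PORTAL" --login
```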

I'm trying to fit all these parameters into the current diskspec, but I
guess we will have to add new parameters. I think the iqn parameter
should go in "target", and the rest should have their own parameters,
leaving us with the following new ones:

- portal: the address, and optionally the port, of the desired target,
in the format <ip>:<port>
- authmethod: authentication method
- user: username to use for authentication
- password: password to use for authentication

So the diskspec line would look like:

portal=127.0.0.0:3260, authmethod=CHAP, user=foo, password=bar,
backendtype=phy, format=iscsi, vdev=xvda,
target=iqn.2012-12.com.example:lun1

Note that I've used the format parameter here to specify "iscsi", which
will be a new format, to distinguish this from a block device that also
uses the "phy" backend type. All these new parameters should also be
added to the libxl_device_disk struct.

Since this device type uses two hotplug scripts we should also add a new
generic parameter to specify a "preparatory" hotplug script, so other
custom devices can also make use of this, something like "preparescript"?

I would like to get some feedback about handling iSCSI devices, and also
about adding all these new parameters to the diskspec.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:00:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjC9D-0000u0-G4; Thu, 13 Dec 2012 17:00:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjC9C-0000ts-1B
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 17:00:34 +0000
Received: from [85.158.143.35:14241] by server-2.bemta-4.messagelabs.com id
	DE/F8-30861-1B90AC05; Thu, 13 Dec 2012 17:00:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355418029!14064644!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27257 invoked from network); 13 Dec 2012 17:00:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:00:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="127965"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 17:00:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 17:00:28 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjC96-0007tE-I7;
	Thu, 13 Dec 2012 17:00:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjC95-0002pH-QQ;
	Thu, 13 Dec 2012 17:00:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14682-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 17:00:27 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14682: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4225535325976605124=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4225535325976605124==
Content-Type: text/plain

flight 14682 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14682/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  8 guest-saverestore   fail REGR. vs. 14678
 test-amd64-amd64-xl-qemut-winxpsp3 12 guest-localmigrate/x10 fail REGR. vs. 14678
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore fail REGR. vs. 14678

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14678
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14678

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  c91d9f6b6fba
baseline version:
 xen                  74d4a6cc5392

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26280:c91d9f6b6fba
tag:         tip
user:        Daniel De Graaf <dgdegra@tycho.nsa.gov>
date:        Thu Dec 13 11:44:02 2012 +0000
    
    libxl: introduce XSM relabel on build
    
    Allow a domain to be built under one security label and run using a
    different label.  This can be used to prevent the domain builder or
    control domain from having the ability to access a guest domain's memory
    via map_foreign_range except during the build process where this is
    required.
    
    Example domain configuration snippet:
      seclabel='customer_1:vm_r:nomigrate_t'
      init_seclabel='customer_1:vm_r:nomigrate_t_building'
    
    Note: this does not provide complete protection from a malicious dom0;
    mappings created during the build process may persist after the relabel,
    and could be used to indirectly access the guest's memory. However, if
    dom0 correctly unmaps the domain upon building, the domU is protected
    against dom0 becoming malicious in the future.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26279:ef9242f5846f
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Thu Dec 13 11:44:01 2012 +0000
    
    libxl: qemu trad logdirty: Tolerate ENOENT on ret path
    
    It can happen in error conditions that lds->ret_path doesn't exist,
    and libxl__xs_read_checked signals this by setting got_ret=NULL.  If
    this happens, fail without crashing.
    
    Reported-by: Alex Bligh <alex@alex.org.uk>,
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26278:69ec301b8ec2
user:        Stefano Stabellini <stefano.stabellini@eu.citrix.com>
date:        Thu Dec 13 11:44:01 2012 +0000
    
    xen/arm: use strcmp in device_tree_type_matches
    
    We want to match the exact string rather than the first subset.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26277:54340e92367f
user:        Stefano Stabellini <stefano.stabellini@eu.citrix.com>
date:        Thu Dec 13 11:44:00 2012 +0000
    
    xen: get GIC addresses from DT
    
    Get the addresses of the GIC distributor, CPU, virtual and virtual CPU
    interface registers from the device tree.

    Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
    and friends because we are using them from mode_switch.S, which is
    executed before the device tree has been parsed. But at least
    mode_switch.S is known to contain vexpress-specific code anyway.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26276:db8800f09ac1
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 13 11:22:54 2012 +0100
    
    vscsiif: allow larger segments-per-request values
    
    At least certain tape devices require fixed size blocks to be operated
    upon, i.e. breaking up of I/O requests is not permitted. Consequently
    we need an interface extension that (leaving aside implementation
    limitations) doesn't impose a limit on the number of segments that can
    be associated with an individual request.
    
    This, in turn, rules out the blkif extension the FreeBSD folks
    implemented, as that still imposes an upper limit (the actual I/O
    request still specifies the full number of segments, as an 8-bit
    quantity, and subsequent ring slots get used to carry the excess
    segment descriptors).
    
    The alternative therefore is to allow the frontend to pre-set segment
    descriptors _before_ actually issuing the I/O request. I/O will then
    be done by the backend for the accumulated set of segments.
    
    To properly associate segment preset operations with the main request,
    the rqid-s between them should match (originally I had hoped to use
    this to avoid producing individual responses for the pre-set
    operations, but that turned out to violate the underlying shared ring
    implementation).
    
    Negotiation of the maximum number of segments a particular backend
    implementation supports happens through a new "segs-per-req" xenstore
    node.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    
    
changeset:   26275:74d4a6cc5392
user:        Dongxiao Xu <dongxiao.xu@intel.com>
date:        Wed Dec 12 10:47:18 2012 +0100
    
    VMX: intr.c: remove i386 related code
    
    The i386 arch is no longer supported by Xen; remove the related code.
    
    Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config, and the stripped value was then
    saved in a variable. In xenstore_process_event we compare the "params"
    value from xenstore (not stripped) with the stripped value saved in the
    variable, which leads to a medium change (even if there isn't any),
    since we are comparing something like aio:/path/to/file with
    /path/to/file. This only happens once, since
    xenstore_parse_domain_config is the only place where we strip the
    prefix. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just as the
    guest is booting, which means the guest cannot boot because the
    BIOS is unable to access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


--===============4225535325976605124==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4225535325976605124==--

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:00:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjC9D-0000u0-G4; Thu, 13 Dec 2012 17:00:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjC9C-0000ts-1B
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 17:00:34 +0000
Received: from [85.158.143.35:14241] by server-2.bemta-4.messagelabs.com id
	DE/F8-30861-1B90AC05; Thu, 13 Dec 2012 17:00:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355418029!14064644!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27257 invoked from network); 13 Dec 2012 17:00:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:00:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="127965"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 17:00:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 17:00:28 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjC96-0007tE-I7;
	Thu, 13 Dec 2012 17:00:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjC95-0002pH-QQ;
	Thu, 13 Dec 2012 17:00:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14682-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Dec 2012 17:00:27 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14682: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4225535325976605124=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4225535325976605124==
Content-Type: text/plain

flight 14682 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14682/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  8 guest-saverestore   fail REGR. vs. 14678
 test-amd64-amd64-xl-qemut-winxpsp3 12 guest-localmigrate/x10 fail REGR. vs. 14678
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore fail REGR. vs. 14678

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14678
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14678

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  c91d9f6b6fba
baseline version:
 xen                  74d4a6cc5392

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26280:c91d9f6b6fba
tag:         tip
user:        Daniel De Graaf <dgdegra@tycho.nsa.gov>
date:        Thu Dec 13 11:44:02 2012 +0000
    
    libxl: introduce XSM relabel on build
    
    Allow a domain to be built under one security label and run using a
    different label.  This can be used to prevent the domain builder or
    control domain from having the ability to access a guest domain's memory
    via map_foreign_range except during the build process where this is
    required.
    
    Example domain configuration snippet:
      seclabel='customer_1:vm_r:nomigrate_t'
      init_seclabel='customer_1:vm_r:nomigrate_t_building'
    
    Note: this does not provide complete protection from a malicious dom0;
    mappings created during the build process may persist after the relabel,
    and could be used to indirectly access the guest's memory. However, if
    dom0 correctly unmaps the domain upon building, the domU is protected
    against dom0 becoming malicious in the future.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26279:ef9242f5846f
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Thu Dec 13 11:44:01 2012 +0000
    
    libxl: qemu trad logdirty: Tolerate ENOENT on ret path
    
    It can happen in error conditions that lds->ret_path doesn't exist,
    and libxl__xs_read_checked signals this by setting got_ret=NULL.  If
    this happens, fail without crashing.
    
    Reported-by: Alex Bligh <alex@alex.org.uk>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26278:69ec301b8ec2
user:        Stefano Stabellini <stefano.stabellini@eu.citrix.com>
date:        Thu Dec 13 11:44:01 2012 +0000
    
    xen/arm: use strcmp in device_tree_type_matches
    
    We want to match the exact string rather than just a prefix.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26277:54340e92367f
user:        Stefano Stabellini <stefano.stabellini@eu.citrix.com>
date:        Thu Dec 13 11:44:00 2012 +0000
    
    xen: get GIC addresses from DT
    
    Get the addresses of the GIC distributor, CPU, virtual and virtual CPU
    interface registers from the device tree.

    Note: I couldn't completely get rid of GIC_BASE_ADDRESS, GIC_DR_OFFSET
    and friends because we are using them from mode_switch.S, which is
    executed before the device tree has been parsed. But at least
    mode_switch.S is known to contain vexpress-specific code anyway.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
changeset:   26276:db8800f09ac1
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 13 11:22:54 2012 +0100
    
    vscsiif: allow larger segments-per-request values
    
    At least certain tape devices require fixed size blocks to be operated
    upon, i.e. breaking up of I/O requests is not permitted. Consequently
    we need an interface extension that (leaving aside implementation
    limitations) doesn't impose a limit on the number of segments that can
    be associated with an individual request.
    
    This, in turn, rules out the blkif extension the FreeBSD folks
    implemented, as that still imposes an upper limit (the actual I/O
    request still specifies the full number of segments, as an 8-bit
    quantity, and subsequent ring slots get used to carry the excess
    segment descriptors).
    
    The alternative therefore is to allow the frontend to pre-set segment
    descriptors _before_ actually issuing the I/O request. I/O will then
    be done by the backend for the accumulated set of segments.
    
    To properly associate segment preset operations with the main request,
    the rqid-s between them should match (originally I had hoped to use
    this to avoid producing individual responses for the pre-set
    operations, but that turned out to violate the underlying shared ring
    implementation).
    
    Negotiation of the maximum number of segments a particular backend
    implementation supports happens through a new "segs-per-req" xenstore
    node.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    
    
changeset:   26275:74d4a6cc5392
user:        Dongxiao Xu <dongxiao.xu@intel.com>
date:        Wed Dec 12 10:47:18 2012 +0100
    
    VMX: intr.c: remove i386 related code
    
    The i386 arch is no longer supported by Xen; remove the related code.
    
    Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config, and the stripped value was then
    saved in a variable. In xenstore_process_event we compare the "params"
    value from xenstore (not stripped) with the stripped value saved in the
    variable, which leads to a medium change (even if there isn't any),
    since we are comparing something like aio:/path/to/file with
    /path/to/file. This only happens once, since
    xenstore_parse_domain_config is the only place where we strip the
    prefix. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just as the
    guest is booting, which means the guest cannot boot because the
    BIOS is unable to access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


--===============4225535325976605124==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4225535325976605124==--

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:04:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCCT-00019S-AT; Thu, 13 Dec 2012 17:03:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjCCS-00019M-37
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 17:03:56 +0000
Received: from [85.158.139.211:32018] by server-16.bemta-5.messagelabs.com id
	4C/52-09208-B7A0AC05; Thu, 13 Dec 2012 17:03:55 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355418234!18880394!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14638 invoked from network); 13 Dec 2012 17:03:54 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 17:03:54 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjCCP-000L98-FT; Thu, 13 Dec 2012 17:03:53 +0000
Date: Thu, 13 Dec 2012 17:03:53 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213170353.GP75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-11-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-11-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 10/11] nEPT: expost EPT capablity to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191042), xiantao.zhang@intel.com wrote:
> --- a/xen/arch/x86/mm/hap/nested_ept.c
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -48,7 +48,7 @@
>  #define EPT_EMT_WB  6
>  #define EPT_EMT_UC  0
>  
> -#define NEPT_VPID_CAP_BITS 0
> +#define NEPT_VPID_CAP_BITS 0x0000000006134140ul

Ah, I didn't spot this earlier.  I think for clarity the definition of
nept_get_ept_vpid_cap() should be moved entirely into this file (and
presumably the TODO comment can be removed).

Where does the magic number 0x0000000006134140ul come from?
Can it be broken out into meaningful constants?

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:15:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:15:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCNg-0001gh-Jv; Thu, 13 Dec 2012 17:15:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjCNe-0001gT-QC
	for xen-devel@lists.xensource.com; Thu, 13 Dec 2012 17:15:31 +0000
Received: from [85.158.139.83:26316] by server-1.bemta-5.messagelabs.com id
	25/00-12813-13D0AC05; Thu, 13 Dec 2012 17:15:29 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355418927!28008337!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31290 invoked from network); 13 Dec 2012 17:15:28 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 17:15:28 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjCNZ-000LAp-Nn; Thu, 13 Dec 2012 17:15:25 +0000
Date: Thu, 13 Dec 2012 17:15:25 +0000
From: Tim Deegan <tim@xen.org>
To: xiantao.zhang@intel.com
Message-ID: <20121213171525.GQ75286@ocelot.phlegethon.org>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-12-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355162243-11857-12-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, xen-devel@lists.xensource.com, eddie.dong@intel.com,
	JBeulich@suse.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH 11/11] nVMX: Expose VPID capability to
	nested VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01:57 +0800 on 11 Dec (1355191043), xiantao.zhang@intel.com wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Virtualize VPID for the nested vmm, use host's VPID
> to emualte guest's VPID. For each virtual vmentry, if
> guest'v vpid is changed, allocate a new host VPID for
> L2 guest.

Looks fine to me, but there's some whitespace mangling: 

> @@ -2747,8 +2750,11 @@ void vmx_vmenter_helper(void)
>  
>      if ( !cpu_has_vmx_vpid )
>          goto out;
> +    if ( nestedhvm_vcpu_in_guestmode(curr) )
> +        p_asid =  &vcpu_nestedhvm(curr).nv_n2asid;

here (after '='),

> @@ -897,6 +908,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
>      if ( nestedhvm_paging_mode_hap(v) )
>          __vmwrite(EPT_POINTER, get_shadow_eptp(v));
>  
> +    /* nested VPID support! */
> +    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
> +    {
> +        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> +        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);

here (after '='),

> @@ -1363,6 +1386,9 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
>      unsigned long eptp;
>      u64 inv_type;
>  
> +    if(!cpu_has_vmx_ept)

here,

> @@ -1401,6 +1427,37 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
>      (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
>      ((uint32_t)(__emul_value(enable1, default1) | host_value)))
>  
> +int nvmx_handle_invvpid(struct cpu_user_regs *regs)
> +{
> +    struct vmx_inst_decoded decode;
> +    unsigned long vpid;
> +    u64 inv_type;
> +
> +    if(!cpu_has_vmx_vpid)

here,

> +        return X86EMUL_EXCEPTION;
> +
> +    if ( decode_vmx_inst(regs, &decode, &vpid, 0)
> +             != X86EMUL_OKAY )
> +        return X86EMUL_EXCEPTION;
> +
> +    inv_type = reg_read(regs, decode.reg2);
> +    gdprintk(XENLOG_DEBUG,"inv_type:%ld, vpid:%lx\n", inv_type, vpid);
> +
> +    switch ( inv_type ){
> +        /* Just invalidate all tlb entries for all types! */
> +        case INVVPID_INDIVIDUAL_ADDR:
> +	    case INVVPID_SINGLE_CONTEXT:
> +	    case INVVPID_ALL_CONTEXT:
> +            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
> +            break;
> +	    default:
> +		    return X86EMUL_EXCEPTION;
> +	}

here (lots of tabs),

> @@ -126,8 +126,9 @@ static bool_t nept_present_check(uint64_t entry)
>  
>  uint64_t nept_get_ept_vpid_cap(void)
>  {
> -    /*TODO: exposed ept and vpid features*/
> -    return NEPT_VPID_CAP_BITS;
> +    if (cpu_has_vmx_ept && cpu_has_vmx_vpid)

and here.

With those fixed, 

Acked-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:22:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCU8-0001tA-Gr; Thu, 13 Dec 2012 17:22:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TjCU6-0001t5-JQ
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:22:10 +0000
Received: from [85.158.143.35:15290] by server-1.bemta-4.messagelabs.com id
	82/D7-28401-1CE0AC05; Thu, 13 Dec 2012 17:22:09 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355419318!12757029!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2199 invoked from network); 13 Dec 2012 17:22:08 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 17:22:08 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TjCTu-000LBl-Bc; Thu, 13 Dec 2012 17:21:58 +0000
Date: Thu, 13 Dec 2012 17:21:58 +0000
From: Tim Deegan <tim@xen.org>
To: Razvan Cojocaru <rzvncj@gmail.com>
Message-ID: <20121213172158.GR75286@ocelot.phlegethon.org>
References: <50C9F09D.9080602@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C9F09D.9080602@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Libxc code to get MTRR memory type for physical
	address pa
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:13 +0200 on 13 Dec (1355418813), Razvan Cojocaru wrote:
> Hello,
> 
> what would be the libxc-based equivalent of get_mtrr_type(struct 
> mtrr_state *m, paddr_t pa) (from xen/arch/x86/hvm/mtrr.c)?
> 
> I've searched the source code and found this:
> 
> struct hvm_hw_mtrr {
> #define MTRR_VCNT 8
> #define NUM_FIXED_MSR 11
>     uint64_t msr_pat_cr;
>     /* mtrr physbase & physmask msr pair*/
>     uint64_t msr_mtrr_var[MTRR_VCNT*2];
>     uint64_t msr_mtrr_fixed[NUM_FIXED_MSR];
>     uint64_t msr_mtrr_cap;
>     uint64_t msr_mtrr_def_type;
> };
> 
> in xen/include/public/arch-x86/hvm/save.h. I can retrieve that using
> xc_domain_hvm_getcontext_partial(), but what would be the best way to 
> get the uint8_t result, for a given 'pa', that get_mtrr_type() returns?

I think you'd have to write some code to decode the MTRRs, by copying
the logic that Xen uses in get_mtrr_type (or by following the processor
manuals).

If you do, please send a patch to add it to libxc for the next person!

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:32:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCdc-0002Nb-Jh; Thu, 13 Dec 2012 17:32:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TjCdb-0002NW-MG
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:31:59 +0000
Received: from [193.109.254.147:39683] by server-1.bemta-14.messagelabs.com id
	C2/F1-15901-E011AC05; Thu, 13 Dec 2012 17:31:58 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355419917!9998459!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14457 invoked from network); 13 Dec 2012 17:31:58 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Dec 2012 17:31:58 -0000
Received: (qmail 26118 invoked from network); 13 Dec 2012 19:31:56 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	13 Dec 2012 19:31:56 +0200
Message-ID: <50CA1140.1000808@gmail.com>
Date: Thu, 13 Dec 2012 19:32:48 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <50C9F09D.9080602@gmail.com>
	<20121213172158.GR75286@ocelot.phlegethon.org>
In-Reply-To: <20121213172158.GR75286@ocelot.phlegethon.org>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 233270,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 694aef023c26b49302520270a65467ac.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id:
	2m1g3t9.17e7n003h.2el1a], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44367
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Libxc code to get MTRR memory type for physical
 address pa
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> I think you'd have to write some code to decode the MTRRs, by copying
> the logic that Xen uses in get_mtrr_type (or by following the processor
> manuals).
>
> If you do, please send a patch to add it to libxc for the next person!

Ah, I thought it might come to this. :) Of course, if I end up doing it 
I'll happily contribute the patch.

Thanks,
Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:39:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCkh-0002Wy-J9; Thu, 13 Dec 2012 17:39:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TjCkg-0002Wt-EH
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:39:18 +0000
Received: from [193.109.254.147:52204] by server-4.bemta-14.messagelabs.com id
	1B/6C-15233-5C21AC05; Thu, 13 Dec 2012 17:39:17 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355420354!10012659!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31263 invoked from network); 13 Dec 2012 17:39:15 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:39:15 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so4402046iej.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 09:39:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=BBAXARW2/stjLbwl5ApkfmpEx7a2zJW11kRmafdqCM4=;
	b=dVICZhnkj9yOlMrGO3z0659Q5e0XPmZ1hvG2IfdkOsi1nee6ZEoDpXe1SdnznWclDX
	fwye96SPrwfsbseQcxJ5p8t7zZ/Ifc2i9zk4Unh990ocDxxyKhDwcdfHlKbD4+DYbZRo
	Krg5XwMyrE63dnf6ADp9hbeDU3+q8IxL6PRnTaeRODnbaVynks/p98xAPjweVHHt6eSg
	J0p4cs0G9karoIDs606WO5l88ArSG51YjXhl+e+xd/LAja/MKNBiU/y/YMSzc3Xs3Tq3
	3eRXEEUkZj09yCaTmNEc4LJLopB5uxnhoOqjee9DMFR/YiHiw5cD77Ivk+cP/uApThY8
	FOaA==
MIME-Version: 1.0
Received: by 10.42.32.200 with SMTP id f8mr2199940icd.18.1355420353757; Thu,
	13 Dec 2012 09:39:13 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 13 Dec 2012 09:39:13 -0800 (PST)
In-Reply-To: <CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
	<CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
Date: Fri, 14 Dec 2012 01:39:13 +0800
X-Google-Sender-Auth: jo3iStApgK70T1YCzIpMBgaSuko
Message-ID: <CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay, Allen M" <allen.m.kay@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 10:33 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
> On Thu, Dec 13, 2012 at 8:43 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>>
>> Does this patch work for you?
>>
>
> It appears that you changed the exposed 1f.0 bridge into an ISA bridge.
> The driver should be able to recognize it -- as long as it is not
> hidden by the PIIX3 bridge.
> I wonder if there is a way to override that one entirely...
> But anyway, I'll try it out first.
>

Stefano, your patch does not produce an ISA bridge as expected.
The device as viewed from the domU is like this:
00:1f.0 Non-VGA unclassified device [0000]: Intel Corporation H77
Express Chipset LPC Controller [8086:1e4a] (rev 04)
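
For reference, lspci derives that "Non-VGA unclassified device" label from the 24-bit class code in config space: base class 0x00 means unclassified, while an ISA bridge reports base class 0x06, sub-class 0x01 (the value PCI_CLASS_BRIDGE_ISA in the patch is meant to set). A minimal sketch of the decode, with illustrative struct and function names:

```c
#include <stdint.h>

/* The PCI class code lives at config-space offsets 0x09..0x0b:
 * 0x0b = base class, 0x0a = sub-class, 0x09 = programming interface.
 * An ISA bridge is base 0x06, sub 0x01; "unclassified" is base 0x00. */
struct pci_class { uint8_t base, sub, prog_if; };

static struct pci_class pci_decode_class(const uint8_t *cfg)
{
    struct pci_class c = { cfg[0x0b], cfg[0x0a], cfg[0x09] };
    return c;
}
```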

I'm on the latest 4.2-testing branch, freshly synced and built to test your patch.

Thanks,
Timothy

>>
>>
>> diff --git a/hw/pci.c b/hw/pci.c
>> index f051de1..d371bd7 100644
>> --- a/hw/pci.c
>> +++ b/hw/pci.c
>> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>>      }
>>  }
>>
>> -typedef struct {
>> -    PCIDevice dev;
>> -    PCIBus *bus;
>> -} PCIBridge;
>> -
>>  void pci_bridge_write_config(PCIDevice *d,
>>                               uint32_t address, uint32_t val, int len)
>>  {
>> diff --git a/hw/pci.h b/hw/pci.h
>> index edc58b6..c2acab9 100644
>> --- a/hw/pci.h
>> +++ b/hw/pci.h
>> @@ -222,6 +222,11 @@ struct PCIDevice {
>>      int irq_state[4];
>>  };
>>
>> +typedef struct {
>> +    PCIDevice dev;
>> +    PCIBus *bus;
>> +} PCIBridge;
>> +
>>  extern char direct_pci_str[];
>>  extern int direct_pci_msitranslate;
>>  extern int direct_pci_power_mgmt;
>> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
>> index c6f8869..d8e789f 100644
>> --- a/hw/pt-graphics.c
>> +++ b/hw/pt-graphics.c
>> @@ -3,6 +3,7 @@
>>   */
>>
>>  #include "pass-through.h"
>> +#include "pci.h"
>>  #include "pci/header.h"
>>  #include "pci/pci.h"
>>
>> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
>>
>> -    if ( vid == PCI_VENDOR_ID_INTEL )
>> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
>> -                        pch_map_irq, "intel_bridge_1f");
>> +    if (vid == PCI_VENDOR_ID_INTEL) {
>> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
>> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
>> +
>> +        pci_config_set_vendor_id(s->dev.config, vid);
>> +        pci_config_set_device_id(s->dev.config, did);
>> +
>> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
>> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
>> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
>> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
>> +        s->dev.config[PCI_REVISION] = rid;
>> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
>> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
>> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
>> +        s->dev.config[PCI_HEADER_TYPE] = 0x81;
>> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
>> +
>> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
>> +    }
>>  }
>>
>>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:39:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCkn-0002XM-Ve; Thu, 13 Dec 2012 17:39:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1TjCkm-0002XC-57
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:39:24 +0000
Received: from [193.109.254.147:32918] by server-13.bemta-14.messagelabs.com
	id 67/F3-01725-BC21AC05; Thu, 13 Dec 2012 17:39:23 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355420361!10283669!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17789 invoked from network); 13 Dec 2012 17:39:22 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:39:22 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm11so4067844wib.14
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 09:39:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=8kFJrKp1Vsz8T8x/uT5IVI58u3J70gVfYwrIn16qpIQ=;
	b=yqs1SmB+RMLN5uuaSb3iM//KTinsRvZe3WVKNHgcTFIEog/BkF5/XvbMd6UMF4kBMB
	J/7HVSwoBUND7bFDGxWua++ZxlJbuD0MxR9r+sc2KM6WmVDgGjbZgkEx/1g7BEd1NA83
	/5/lvXPUr1yTUOxQgZe0RK3Ba1s1hFYqPrBfmCV4fr4sgKX0u//HOUjqYB+pX27KHXcX
	eIfOwkhrZ3sKd3fF2M1LHVLPpWxmqrmqPAumxNLpEUQ9MCy70tEcqMr312QeW0KDNafU
	dVYIqP86Oj3ChYT9oIDdsRtR74E9z1YsT4dn5fdXKvlreJaZTCGKm0Epm+3rnkCB6ADi
	g4cw==
MIME-Version: 1.0
Received: by 10.180.88.138 with SMTP id bg10mr4802881wib.13.1355420361861;
	Thu, 13 Dec 2012 09:39:21 -0800 (PST)
Received: by 10.194.64.194 with HTTP; Thu, 13 Dec 2012 09:39:21 -0800 (PST)
Date: Thu, 13 Dec 2012 23:09:21 +0530
Message-ID: <CANq0ewuNEQ1NXoUChQio_SdQjn2ZBxM9f-Bwa-LyZ8O2KBE6kQ@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Code for precopy algorithm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0518888049493152236=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0518888049493152236==
Content-Type: multipart/alternative; boundary=f46d0442685230aa8004d0bf658b

--f46d0442685230aa8004d0bf658b
Content-Type: text/plain; charset=ISO-8859-1

Hello all,
              I want to optimize the pre-copy algorithm. In which file can I
find the implementation of the algorithm, and how can I best understand how
the code works? Which part of the code performs the live migration?


regards,
DigvijaySingh
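
As a pointer for the question above: in the Xen 4.2-era tools the sending side of the pre-copy loop lives in tools/libxc/xc_domain_save.c, with xc_domain_restore.c on the receiving side. The overall shape of iterative pre-copy can be sketched like this — a simplified illustration, not the actual libxc code; all names and parameters are hypothetical:

```c
/* Iterative pre-copy: send all pages once, then repeatedly resend only the
 * pages the guest dirtied during the previous round, until the dirty set is
 * small enough (threshold) or a round limit is hit; the remainder is sent
 * in a final stop-and-copy round while the guest is paused. */
static int precopy_rounds(int total_pages, int dirty_per_round,
                          int max_rounds, int threshold)
{
    int to_send = total_pages, rounds = 0;

    while (rounds < max_rounds && to_send > threshold) {
        /* send 'to_send' pages; meanwhile the guest keeps dirtying memory */
        rounds++;
        to_send = dirty_per_round;  /* next round resends only dirtied pages */
    }
    /* final round: pause the guest and send the remaining 'to_send' pages */
    return rounds;
}
```

The real loop also stops early when the dirty set stops shrinking; the final stop-and-copy round is what causes the guest's brief downtime.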

--f46d0442685230aa8004d0bf658b--


--===============0518888049493152236==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0518888049493152236==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 17:41:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCmA-0002es-F6; Thu, 13 Dec 2012 17:40:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TjCm8-0002ef-TO
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:40:49 +0000
Received: from [85.158.138.51:50968] by server-4.bemta-3.messagelabs.com id
	44/59-31835-1131AC05; Thu, 13 Dec 2012 17:40:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355420430!24557265!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2360 invoked from network); 13 Dec 2012 17:40:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:40:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="581510"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 17:40:30 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 12:40:29 -0500
Message-ID: <50CA130C.2010606@citrix.com>
Date: Thu, 13 Dec 2012 17:40:28 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
	<20121213163846.GF20661@localhost.localdomain>
In-Reply-To: <20121213163846.GF20661@localhost.localdomain>
X-Originating-IP: [10.80.3.146]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 16:38, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 12, 2012 at 10:09:38PM +0000, Mats Petersson wrote:
>> One comment asked for more details on the improvements:
>> Using a small test program to map Guest memory into Dom0 (repeatedly
>> for "Iterations" mapping the same first "Num Pages")
> I missed this in my for 3.8 queue. I will queue it up next
> week and send it to Linus after a review..

Please do review (and test if you fancy ;) ). However, it just occurred 
to me, and Ian Campbell has confirmed, that ARM uses some of the same 
code, so it will need the right bits in place to cope.

I plan to hack something up for ARM and work with Ian to get it tested, 
etc. Hopefully it shouldn't take very long.

--
Mats
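
The central trick of the quoted patch below is that one pte-walk callback now serves both cases: for a contiguous range it increments the MFN value in place, while for an MFN list it advances the pointer to the next array entry. A standalone sketch of that dual-advance, with hypothetical names:

```c
/* Return the current MFN and advance: for a contiguous range the MFN value
 * itself is incremented in place (same slot, consecutive frame numbers);
 * for an MFN list the pointer steps to the next array entry. */
static unsigned long next_mfn(unsigned long **mfn, int contiguous)
{
    unsigned long cur = **mfn;

    if (contiguous)
        (**mfn)++;   /* bump the value in the single slot */
    else
        (*mfn)++;    /* move on to the next entry in the list */
    return cur;
}
```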
>
>> Iterations   Num Pages   Time 3.7rc4   Time with this patch
>> 5000         4096        76.107        37.027
>> 10000        2048        75.703        37.177
>> 20000        1024        75.893        37.247
>> So a little better than twice as fast.
>>
>> Using this patch in migration, using "time" to measure the overall
>> time it take to migrate a guest (Debian Squeeze 6.0, 1024MB guest
>> memory, one network card, one disk, guest at login prompt):
>> Time 3.7rc5           Time With this patch
>> 6.697                 5.667
>> Since migration involves a whole lot of other things, it's only about
>> 15% faster - but still a good improvement. Similar measurement with a
>> guest that is running code to "dirty" memory shows about 23%
>> improvement, as it spends more time copying dirtied memory.
>>
>> As discussed elsewhere, a good deal more can be had from improving the
>> munmap system call, but it is a little tricky to get this in without
>> worsening non-PVOPS kernel, so I will have another look at this.
>>
>> ---
>> Update since last posting:
>> I have just run some benchmarks of a 16GB guest, and the improvement
>> with this patch is around 23-30% for the overall copy time, and 42%
>> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
>> memory). I think this is definitely worth having.
>>
>> The V4 patch consists of cosmetic adjustments (spelling mistake in
>> comment and reversing condition in an if-statement to avoid having an
>> "else" branch, a random empty line added by accident now reverted back
>> to previous state). Thanks to Jan Beulich for the comments.
>>
>> The V3 of the patch contains suggested improvements from:
>> David Vrabel - make it two distinct external functions, doc-comments.
>> Ian Campbell - use one common function for the main work.
>> Jan Beulich  - found a bug and pointed out some whitespace problems.
>>
>>
>>
>> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>>
>> ---
>>   arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>>   drivers/xen/privcmd.c |   55 +++++++++++++++++----
>>   include/xen/xen-ops.h |    5 ++
>>   3 files changed, 169 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index dcf5f2d..a67774f 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>>   #define REMAP_BATCH_SIZE 16
>>
>>   struct remap_data {
>> -     unsigned long mfn;
>> +     unsigned long *mfn;
>> +     bool contiguous;
>>        pgprot_t prot;
>>        struct mmu_update *mmu_update;
>>   };
>> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>                                 unsigned long addr, void *data)
>>   {
>>        struct remap_data *rmd = data;
>> -     pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
>> +     pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
>> +     /* If we have a contiguous range, just update the mfn itself,
>> +        else update the pointer to be "next mfn". */
>> +     if (rmd->contiguous)
>> +             (*rmd->mfn)++;
>> +     else
>> +             rmd->mfn++;
>>
>>        rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>>        rmd->mmu_update->val = pte_val_ma(pte);
>> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>        return 0;
>>   }
>>
>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> -                            unsigned long addr,
>> -                            unsigned long mfn, int nr,
>> -                            pgprot_t prot, unsigned domid)
>> -{
>> +/* do_remap_mfn() - helper function to map foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number of entries in the MFN array
>> + * @err_ptr: pointer to array
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into
>> + * this kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting" of
>> + * the pages.
>> + *
>> + * Return value is zero for success, negative for failure.
>> + *
>> + * Note that err_ptr is used to indicate whether *mfn
>> + * is a list or a "first mfn of a contiguous range". */
>> +static int do_remap_mfn(struct vm_area_struct *vma,
>> +                     unsigned long addr,
>> +                     unsigned long *mfn, int nr,
>> +                     int *err_ptr, pgprot_t prot,
>> +                     unsigned domid)
>> +{
>> +     int err, last_err = 0;
>>        struct remap_data rmd;
>>        struct mmu_update mmu_update[REMAP_BATCH_SIZE];
>> -     int batch;
>>        unsigned long range;
>> -     int err = 0;
>>
>>        if (xen_feature(XENFEAT_auto_translated_physmap))
>>                return -EINVAL;
>> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>
>>        rmd.mfn = mfn;
>>        rmd.prot = prot;
>> +     /* We use err_ptr to indicate whether we are doing a contiguous
>> +      * mapping or a discontiguous mapping. */
>> +     rmd.contiguous = !err_ptr;
>>
>>        while (nr) {
>> -             batch = min(REMAP_BATCH_SIZE, nr);
>> +             int index = 0;
>> +             int done = 0;
>> +             int batch = min(REMAP_BATCH_SIZE, nr);
>> +             int batch_left = batch;
>>                range = (unsigned long)batch << PAGE_SHIFT;
>>
>>                rmd.mmu_update = mmu_update;
>> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                if (err)
>>                        goto out;
>>
>> -             err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
>> -             if (err < 0)
>> -                     goto out;
>> +             /* We record the error for each page that gives an error, but
>> +              * continue mapping until the whole set is done */
>> +             do {
>> +                     err = HYPERVISOR_mmu_update(&mmu_update[index],
>> +                                                 batch_left, &done, domid);
>> +                     if (err < 0) {
>> +                             if (!err_ptr)
>> +                                     goto out;
>> +                             /* increment done so we skip the error item */
>> +                             done++;
>> +                             last_err = err_ptr[index] = err;
>> +                     }
>> +                     batch_left -= done;
>> +                     index += done;
>> +             } while (batch_left);
>>
>>                nr -= batch;
>>                addr += range;
>> +             if (err_ptr)
>> +                     err_ptr += batch;
>>        }
>>
>> -     err = 0;
>> +     err = last_err;
>>   out:
>>
>>        xen_flush_tlb_all();
>>
>>        return err;
>>   }
>> +
>> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
>> + * @vma:   the vma for the pages to be mapped into
>> + * @addr:  the address at which to map the pages
>> + * @mfn:   the first MFN to map
>> + * @nr:    the number of consecutive mfns to map
>> + * @prot:  page protection mask
>> + * @domid: id of the domain that we are mapping from
>> + *
>> + * This function takes an mfn and maps nr pages on from it into this kernel's
>> + * memory. The owner of the pages is defined by domid. Where the pages are
>> + * mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + */
>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> +                            unsigned long addr,
>> +                            unsigned long mfn, int nr,
>> +                            pgprot_t prot, unsigned domid)
>> +{
>> +     return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
>> +}
>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>> +
>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number of entries in the MFN array
>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>> + *           value for each page. The err_ptr must not be NULL.
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
>> + * are mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages. The err_ptr array is filled in on any page that is not successfully
>> + * mapped in.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + * Note that the error value -ENOENT is considered a "retry", so when this
>> + * error code is seen, another call should be made with the list of pages that
>> + * are marked as -ENOENT in the err_ptr array.
>> + */
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                            unsigned long addr,
>> +                            unsigned long *mfn, int nr,
>> +                            int *err_ptr, pgprot_t prot,
>> +                            unsigned domid)
>> +{
>> +     /* We BUG_ON because passing a NULL err_ptr is a programmer error,
>> +      * and the consequence - the wrong memory being mapped in - is
>> +      * very hard to track down later.
>> +      */
>> +     BUG_ON(err_ptr == NULL);
>> +     return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
>> +}
>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 71f5c45..75f6e86 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>>        return ret;
>>   }
>>
>> +/*
>> + * Similar to traverse_pages, but use each page as a "block" of
>> + * data to be processed as one unit.
>> + */
>> +static int traverse_pages_block(unsigned nelem, size_t size,
>> +                             struct list_head *pos,
>> +                             int (*fn)(void *data, int nr, void *state),
>> +                             void *state)
>> +{
>> +     void *pagedata;
>> +     unsigned pageidx;
>> +     int ret = 0;
>> +
>> +     BUG_ON(size > PAGE_SIZE);
>> +
>> +     pageidx = PAGE_SIZE;
>> +
>> +     while (nelem) {
>> +             int nr = (PAGE_SIZE/size);
>> +             struct page *page;
>> +             if (nr > nelem)
>> +                     nr = nelem;
>> +             pos = pos->next;
>> +             page = list_entry(pos, struct page, lru);
>> +             pagedata = page_address(page);
>> +             ret = (*fn)(pagedata, nr, state);
>> +             if (ret)
>> +                     break;
>> +             nelem -= nr;
>> +     }
>> +
>> +     return ret;
>> +}
>> +
>>   struct mmap_mfn_state {
>>        unsigned long va;
>>        struct vm_area_struct *vma;
>> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>>         *      0 for no errors
>>         *      1 if at least one error has happened (and no
>>         *          -ENOENT errors have happened)
>> -      *      -ENOENT if at least 1 -ENOENT has happened.
>> +      *      -ENOENT if at least one -ENOENT has happened.
>>         */
>>        int global_error;
>>        /* An array for individual errors */
>> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>>        xen_pfn_t __user *user_mfn;
>>   };
>>
>> -static int mmap_batch_fn(void *data, void *state)
>> +static int mmap_batch_fn(void *data, int nr, void *state)
>>   {
>>        xen_pfn_t *mfnp = data;
>>        struct mmap_batch_state *st = state;
>>        int ret;
>>
>> -     ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -                                      st->vma->vm_page_prot, st->domain);
>> +     BUG_ON(nr < 0);
>>
>> -     /* Store error code for second pass. */
>> -     *(st->err++) = ret;
>> +     ret = xen_remap_domain_mfn_array(st->vma,
>> +                                      st->va & PAGE_MASK,
>> +                                      mfnp, nr,
>> +                                      st->err,
>> +                                      st->vma->vm_page_prot,
>> +                                      st->domain);
>>
>>        /* And see if it affects the global_error. */
>>        if (ret < 0) {
>> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>>                                st->global_error = 1;
>>                }
>>        }
>> -     st->va += PAGE_SIZE;
>> +     st->va += PAGE_SIZE * nr;
>>
>>        return 0;
>>   }
>> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>        state.err           = err_array;
>>
>>        /* mmap_batch_fn guarantees ret == 0 */
>> -     BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>> -                          &pagelist, mmap_batch_fn, &state));
>> +     BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>> +                                 &pagelist, mmap_batch_fn, &state));
>>
>>        up_write(&mm->mmap_sem);
>>
>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>> index 6a198e4..22cad75 100644
>> --- a/include/xen/xen-ops.h
>> +++ b/include/xen/xen-ops.h
>> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                               unsigned long mfn, int nr,
>>                               pgprot_t prot, unsigned domid);
>>
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                            unsigned long addr,
>> +                            unsigned long *mfn, int nr,
>> +                            int *err_ptr, pgprot_t prot,
>> +                            unsigned domid);
>>   #endif /* INCLUDE_XEN_OPS_H */
>> --
>> 1.7.9.5
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:41:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCmA-0002es-F6; Thu, 13 Dec 2012 17:40:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TjCm8-0002ef-TO
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:40:49 +0000
Received: from [85.158.138.51:50968] by server-4.bemta-3.messagelabs.com id
	44/59-31835-1131AC05; Thu, 13 Dec 2012 17:40:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355420430!24557265!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTE4Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2360 invoked from network); 13 Dec 2012 17:40:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:40:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="581510"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 17:40:30 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 12:40:29 -0500
Message-ID: <50CA130C.2010606@citrix.com>
Date: Thu, 13 Dec 2012 17:40:28 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1355350178-22315-1-git-send-email-mats.petersson@citrix.com>
	<20121213163846.GF20661@localhost.localdomain>
In-Reply-To: <20121213163846.GF20661@localhost.localdomain>
X-Originating-IP: [10.80.3.146]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>
Subject: Re: [Xen-devel] [PATCH V4] xen/privcmd:
	improve-performance-of-mapping-of-guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 16:38, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 12, 2012 at 10:09:38PM +0000, Mats Petersson wrote:
>> One comment asked for more details on the improvements:
>> Using a small test program to map Guest memory into Dom0 (repeatedly
>> for "Iterations" mapping the same first "Num Pages")
> I missed this in my queue for 3.8. I will queue it up next
> week and send it to Linus after a review.

Please do review (and test if you fancy ;) ). However, it just occurred 
to me, and Ian Campbell has just confirmed, that ARM uses some of the 
same code, so it will need the right bits in place to cope.

I plan to hack something up for ARM and work with Ian to get it tested, 
etc. Hopefully it shouldn't take very long.

--
Mats
>
>> Iterations    Num Pages    Time 3.7rc4    Time with this patch
>> 5000          4096         76.107         37.027
>> 10000         2048         75.703         37.177
>> 20000         1024         75.893         37.247
>>
>> So a little better than twice as fast.
>>
>> Using this patch in migration, using "time" to measure the overall
>> time it take to migrate a guest (Debian Squeeze 6.0, 1024MB guest
>> memory, one network card, one disk, guest at login prompt):
>> Time 3.7rc5           Time With this patch
>> 6.697                 5.667
>> Since migration involves a whole lot of other things, it's only about
>> 15% faster - but still a good improvement. Similar measurement with a
>> guest that is running code to "dirty" memory shows about 23%
>> improvement, as it spends more time copying dirtied memory.
>>
>> As discussed elsewhere, a good deal more can be had from improving the
>> munmap system call, but it is a little tricky to get this in without
>> worsening the non-PVOPS kernel, so I will have another look at this.
>>
>> ---
>> Update since last posting:
>> I have just run some benchmarks of a 16GB guest, and the improvement
>> with this patch is around 23-30% for the overall copy time, and 42%
>> shorter downtime on a "busy" guest (writing in a loop to about 8GB of
>> memory). I think this is definitely worth having.
>>
>> The V4 patch consists of cosmetic adjustments (a spelling mistake in a
>> comment fixed, an if-statement condition reversed to avoid having an
>> "else" branch, and a stray empty line added by accident reverted to
>> the previous state). Thanks to Jan Beulich for the comments.
>>
>> The V3 of the patch contains suggested improvements from:
>> David Vrabel - make it two distinct external functions, doc-comments.
>> Ian Campbell - use one common function for the main work.
>> Jan Beulich  - found a bug and pointed out some whitespace problems.
>>
>>
>>
>> Signed-off-by: Mats Petersson <mats.petersson@citrix.com>
>>
>> ---
>>   arch/x86/xen/mmu.c    |  132 +++++++++++++++++++++++++++++++++++++++++++------
>>   drivers/xen/privcmd.c |   55 +++++++++++++++++----
>>   include/xen/xen-ops.h |    5 ++
>>   3 files changed, 169 insertions(+), 23 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index dcf5f2d..a67774f 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
>>   #define REMAP_BATCH_SIZE 16
>>
>>   struct remap_data {
>> -     unsigned long mfn;
>> +     unsigned long *mfn;
>> +     bool contiguous;
>>        pgprot_t prot;
>>        struct mmu_update *mmu_update;
>>   };
>> @@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>                                 unsigned long addr, void *data)
>>   {
>>        struct remap_data *rmd = data;
>> -     pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
>> +     pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
>> +     /* If we have a contiguous range, just update the mfn itself,
>> +        else update the pointer to be "next mfn". */
>> +     if (rmd->contiguous)
>> +             (*rmd->mfn)++;
>> +     else
>> +             rmd->mfn++;
>>
>>        rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
>>        rmd->mmu_update->val = pte_val_ma(pte);
>> @@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
>>        return 0;
>>   }
>>
>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> -                            unsigned long addr,
>> -                            unsigned long mfn, int nr,
>> -                            pgprot_t prot, unsigned domid)
>> -{
>> +/* do_remap_mfn() - helper function to map foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number of entries in the MFN array
>> + * @err_ptr: pointer to an array for per-page errors, or NULL for a contiguous range
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into
>> + * this kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting" of
>> + * the pages.
>> + *
>> + * Return value is zero for success, negative for failure.
>> + *
>> + * Note that err_ptr is used to indicate whether *mfn
>> + * is a list or a "first mfn of a contiguous range". */
>> +static int do_remap_mfn(struct vm_area_struct *vma,
>> +                     unsigned long addr,
>> +                     unsigned long *mfn, int nr,
>> +                     int *err_ptr, pgprot_t prot,
>> +                     unsigned domid)
>> +{
>> +     int err, last_err = 0;
>>        struct remap_data rmd;
>>        struct mmu_update mmu_update[REMAP_BATCH_SIZE];
>> -     int batch;
>>        unsigned long range;
>> -     int err = 0;
>>
>>        if (xen_feature(XENFEAT_auto_translated_physmap))
>>                return -EINVAL;
>> @@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>
>>        rmd.mfn = mfn;
>>        rmd.prot = prot;
>> +     /* We use err_ptr to indicate whether we are doing a contiguous
>> +      * mapping or a discontiguous mapping. */
>> +     rmd.contiguous = !err_ptr;
>>
>>        while (nr) {
>> -             batch = min(REMAP_BATCH_SIZE, nr);
>> +             int index = 0;
>> +             int done = 0;
>> +             int batch = min(REMAP_BATCH_SIZE, nr);
>> +             int batch_left = batch;
>>                range = (unsigned long)batch << PAGE_SHIFT;
>>
>>                rmd.mmu_update = mmu_update;
>> @@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                if (err)
>>                        goto out;
>>
>> -             err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
>> -             if (err < 0)
>> -                     goto out;
>> +             /* We record the error for each page that gives an error, but
>> +              * continue mapping until the whole set is done */
>> +             do {
>> +                     err = HYPERVISOR_mmu_update(&mmu_update[index],
>> +                                                 batch_left, &done, domid);
>> +                     if (err < 0) {
>> +                             if (!err_ptr)
>> +                                     goto out;
>> +                             /* increment done so we skip the error item */
>> +                             done++;
>> +                             last_err = err_ptr[index] = err;
>> +                     }
>> +                     batch_left -= done;
>> +                     index += done;
>> +             } while (batch_left);
>>
>>                nr -= batch;
>>                addr += range;
>> +             if (err_ptr)
>> +                     err_ptr += batch;
>>        }
>>
>> -     err = 0;
>> +     err = last_err;
>>   out:
>>
>>        xen_flush_tlb_all();
>>
>>        return err;
>>   }
>> +
>> +/* xen_remap_domain_mfn_range() - Used to map foreign pages
>> + * @vma:   the vma for the pages to be mapped into
>> + * @addr:  the address at which to map the pages
>> + * @mfn:   the first MFN to map
>> + * @nr:    the number of consecutive mfns to map
>> + * @prot:  page protection mask
>> + * @domid: id of the domain that we are mapping from
>> + *
>> + * This function takes an mfn and maps nr pages on from it into this kernel's
>> + * memory. The owner of the pages is defined by domid. Where the pages are
>> + * mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + */
>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> +                            unsigned long addr,
>> +                            unsigned long mfn, int nr,
>> +                            pgprot_t prot, unsigned domid)
>> +{
>> +     return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
>> +}
>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>> +
>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number of entries in the MFN array
>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>> + *           value for each page. The err_ptr must not be NULL.
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
>> + * are mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages. The err_ptr array is filled in on any page that is not successfully
>> + * mapped in.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + * Note that the error value -ENOENT is considered a "retry", so when this
>> + * error code is seen, another call should be made with the list of pages that
>> + * are marked as -ENOENT in the err_ptr array.
>> + */
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                            unsigned long addr,
>> +                            unsigned long *mfn, int nr,
>> +                            int *err_ptr, pgprot_t prot,
>> +                            unsigned domid)
>> +{
>> +     /* We BUG_ON because passing a NULL err_ptr is a programmer error,
>> +      * and the consequence - the wrong memory being mapped in - is
>> +      * very hard to track down later.
>> +      */
>> +     BUG_ON(err_ptr == NULL);
>> +     return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
>> +}
>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 71f5c45..75f6e86 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
>>        return ret;
>>   }
>>
>> +/*
>> + * Similar to traverse_pages, but use each page as a "block" of
>> + * data to be processed as one unit.
>> + */
>> +static int traverse_pages_block(unsigned nelem, size_t size,
>> +                             struct list_head *pos,
>> +                             int (*fn)(void *data, int nr, void *state),
>> +                             void *state)
>> +{
>> +     void *pagedata;
>> +     unsigned pageidx;
>> +     int ret = 0;
>> +
>> +     BUG_ON(size > PAGE_SIZE);
>> +
>> +     pageidx = PAGE_SIZE;
>> +
>> +     while (nelem) {
>> +             int nr = (PAGE_SIZE/size);
>> +             struct page *page;
>> +             if (nr > nelem)
>> +                     nr = nelem;
>> +             pos = pos->next;
>> +             page = list_entry(pos, struct page, lru);
>> +             pagedata = page_address(page);
>> +             ret = (*fn)(pagedata, nr, state);
>> +             if (ret)
>> +                     break;
>> +             nelem -= nr;
>> +     }
>> +
>> +     return ret;
>> +}
>> +
>>   struct mmap_mfn_state {
>>        unsigned long va;
>>        struct vm_area_struct *vma;
>> @@ -250,7 +284,7 @@ struct mmap_batch_state {
>>         *      0 for no errors
>>         *      1 if at least one error has happened (and no
>>         *          -ENOENT errors have happened)
>> -      *      -ENOENT if at least 1 -ENOENT has happened.
>> +      *      -ENOENT if at least one -ENOENT has happened.
>>         */
>>        int global_error;
>>        /* An array for individual errors */
>> @@ -260,17 +294,20 @@ struct mmap_batch_state {
>>        xen_pfn_t __user *user_mfn;
>>   };
>>
>> -static int mmap_batch_fn(void *data, void *state)
>> +static int mmap_batch_fn(void *data, int nr, void *state)
>>   {
>>        xen_pfn_t *mfnp = data;
>>        struct mmap_batch_state *st = state;
>>        int ret;
>>
>> -     ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -                                      st->vma->vm_page_prot, st->domain);
>> +     BUG_ON(nr < 0);
>>
>> -     /* Store error code for second pass. */
>> -     *(st->err++) = ret;
>> +     ret = xen_remap_domain_mfn_array(st->vma,
>> +                                      st->va & PAGE_MASK,
>> +                                      mfnp, nr,
>> +                                      st->err,
>> +                                      st->vma->vm_page_prot,
>> +                                      st->domain);
>>
>>        /* And see if it affects the global_error. */
>>        if (ret < 0) {
>> @@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
>>                                st->global_error = 1;
>>                }
>>        }
>> -     st->va += PAGE_SIZE;
>> +     st->va += PAGE_SIZE * nr;
>>
>>        return 0;
>>   }
>> @@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>        state.err           = err_array;
>>
>>        /* mmap_batch_fn guarantees ret == 0 */
>> -     BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>> -                          &pagelist, mmap_batch_fn, &state));
>> +     BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>> +                                 &pagelist, mmap_batch_fn, &state));
>>
>>        up_write(&mm->mmap_sem);
>>
>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>> index 6a198e4..22cad75 100644
>> --- a/include/xen/xen-ops.h
>> +++ b/include/xen/xen-ops.h
>> @@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                               unsigned long mfn, int nr,
>>                               pgprot_t prot, unsigned domid);
>>
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                            unsigned long addr,
>> +                            unsigned long *mfn, int nr,
>> +                            int *err_ptr, pgprot_t prot,
>> +                            unsigned domid);
>>   #endif /* INCLUDE_XEN_OPS_H */
>> --
>> 1.7.9.5
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:47:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCsT-0003Ab-GO; Thu, 13 Dec 2012 17:47:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TjCsR-0003AW-Oj
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:47:19 +0000
Received: from [85.158.143.99:54469] by server-1.bemta-4.messagelabs.com id
	CF/4D-28401-7A41AC05; Thu, 13 Dec 2012 17:47:19 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355420836!18177246!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17365 invoked from network); 13 Dec 2012 17:47:18 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:47:18 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so2757153vcb.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 09:47:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=eRz7KUFJ8DkpDaYGvcf1WWo1B7bNR/QG3zbM1qp6GB0=;
	b=Adj0TbEEVx57sib875QBlwTR/1h55O5NmH4rEsZMXgbJ97dTs9HQ474OYuezc9n5Kz
	sVTjDQPfOHNKJmy6REryRZZWeHhAou+3pXYgFWWqcae5gIwvdyPfsLy0t2Oi1+dlJcX7
	bcytzyjzyNELHcMAl+tjsLXrsDu1FXoyQCDoNkkxyHLBesEJs/5nvO6vaCB2+oG2gDnO
	FOG/5MLy2ccAj0omVjjOP9l2nxgZWtiZyfIDBsIjNyRxFU1z20i2BJxE1wupE9uQ2oJn
	rf22v/bfcJIaO7CeSBHldkPwJy2wB0Xc7Bm1blHwP4ZkUji7fU6ieDeSYVfBzJD+fTM9
	2BNQ==
MIME-Version: 1.0
Received: by 10.52.92.144 with SMTP id cm16mr4263666vdb.36.1355420836304; Thu,
	13 Dec 2012 09:47:16 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Thu, 13 Dec 2012 09:47:16 -0800 (PST)
In-Reply-To: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>
References: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>
Date: Thu, 13 Dec 2012 17:47:16 +0000
X-Google-Sender-Auth: -Q75RKa8PHNY-6Dyay52HIvvF80
Message-ID: <CAFLBxZZQ9Nw09L01ekS4k0W92gZ27F+5R82192HgO_tWEp02Ug@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1818271761357557018=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1818271761357557018==
Content-Type: multipart/alternative; boundary=20cf307cff967817d804d0bf8130

--20cf307cff967817d804d0bf8130
Content-Type: text/plain; charset=ISO-8859-1

Is there a rationale for this?  If so, it should probably be in the commit
message.

 -George


On Thu, Dec 13, 2012 at 3:31 PM, Matthew Fioravante <
matthew.fioravante@jhuapl.edu> wrote:

> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  stubdom/configure.ac |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
> index db44d4a..384a94a 100644
> --- a/stubdom/configure.ac
> +++ b/stubdom/configure.ac
> @@ -18,7 +18,7 @@ m4_include([../m4/depends.m4])
>  # Enable/disable stub domains
>  AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
>  AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> -AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_DISABLE([caml-stubdom], [caml])
>  AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
>  AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
>  AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
> --
> 1.7.10.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--20cf307cff967817d804d0bf8130
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Is there a rationale for this?=A0 If so, it should probably be in the commi=
t message.<br><br>=A0-George<br><div class=3D"gmail_extra"><br><br><div cla=
ss=3D"gmail_quote">On Thu, Dec 13, 2012 at 3:31 PM, Matthew Fioravante <spa=
n dir=3D"ltr">&lt;<a href=3D"mailto:matthew.fioravante@jhuapl.edu" target=
=3D"_blank">matthew.fioravante@jhuapl.edu</a>&gt;</span> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">Signed-off-by: Matthew Fioravante &lt;<a hre=
f=3D"mailto:matthew.fioravante@jhuapl.edu">matthew.fioravante@jhuapl.edu</a=
>&gt;<br>

---<br>
=A0stubdom/<a href=3D"http://configure.ac" target=3D"_blank">configure.ac</=
a> | =A0 =A02 +-<br>
=A01 file changed, 1 insertion(+), 1 deletion(-)<br>
<br>
diff --git a/stubdom/<a href=3D"http://configure.ac" target=3D"_blank">conf=
igure.ac</a> b/stubdom/<a href=3D"http://configure.ac" target=3D"_blank">co=
nfigure.ac</a><br>
index db44d4a..384a94a 100644<br>
--- a/stubdom/<a href=3D"http://configure.ac" target=3D"_blank">configure.a=
c</a><br>
+++ b/stubdom/<a href=3D"http://configure.ac" target=3D"_blank">configure.a=
c</a><br>
@@ -18,7 +18,7 @@ m4_include([../m4/depends.m4])<br>
=A0# Enable/disable stub domains<br>
=A0AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])<br>
=A0AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])<br>
-AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])<br>
+AX_STUBDOM_DEFAULT_DISABLE([caml-stubdom], [caml])<br>
=A0AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])<br>
=A0AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])<br>
=A0AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])<br>
<span class=3D"HOEnZb"><font color=3D"#888888">--<br>
1.7.10.4<br>
<br>
<br>
_______________________________________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
</font></span></blockquote></div><br></div>

--20cf307cff967817d804d0bf8130--


--===============1818271761357557018==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1818271761357557018==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 17:48:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjCtr-0003Ex-W8; Thu, 13 Dec 2012 17:48:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TjCtp-0003El-U6
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:48:46 +0000
Received: from [85.158.139.83:24692] by server-5.bemta-5.messagelabs.com id
	88/E1-22648-DF41AC05; Thu, 13 Dec 2012 17:48:45 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355420922!27617904!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12170 invoked from network); 13 Dec 2012 17:48:44 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:48:44 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so2691849vbi.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 09:48:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=CsORvh4OBnzRNi0oMMZuubMi+rqmeP9mtOVjLPOQfCg=;
	b=O1Qi47BAiAewumSKlnYmklsFPQKiPTHaTethPeaq8+TTV02/44845y9nqjWjzzmGXj
	p0IsB9oHhrJiXkaySciBVHX0ELtnl5fSn+xUEnjcaznfu6t9X4kQVS/E/yMzcTJeOF/V
	X3n7DjtXrTb1gH1eDaIiQvAnmnC2X/qBC5YDWWfHaUkELn3TNvClwwCL0WsuwmroUT4o
	8MTdIP6Rbuo+/7C/AOHiICirUn5zJS5zud+mkmpWtk2vYSlB2xgsvAqssqcpGO0aAsii
	4MNWkV1NgRhls/Ie9BySWax114QJBGxPrIakTn9L+j4v3iSpUGBeuYJXtf9mYI4S2zMI
	4LLA==
MIME-Version: 1.0
Received: by 10.220.149.69 with SMTP id s5mr4931628vcv.23.1355420922468; Thu,
	13 Dec 2012 09:48:42 -0800 (PST)
Received: by 10.58.231.138 with HTTP; Thu, 13 Dec 2012 09:48:42 -0800 (PST)
In-Reply-To: <CANq0ewuNEQ1NXoUChQio_SdQjn2ZBxM9f-Bwa-LyZ8O2KBE6kQ@mail.gmail.com>
References: <CANq0ewuNEQ1NXoUChQio_SdQjn2ZBxM9f-Bwa-LyZ8O2KBE6kQ@mail.gmail.com>
Date: Thu, 13 Dec 2012 17:48:42 +0000
X-Google-Sender-Auth: dVKQq4rZl8dCvXURZ3C96Yc2eX4
Message-ID: <CAFLBxZYOfesf89hx5asfyrN3_VQRcSnr0cRuGt6Yj=BQRHvEEA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: digvijay chauhan <digvijaych@gmail.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Code for precopy algorithm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0011961592952747967=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0011961592952747967==
Content-Type: multipart/alternative; boundary=f46d042fd8989ad85804d0bf8694

--f46d042fd8989ad85804d0bf8694
Content-Type: text/plain; charset=ISO-8859-1

Did you grep for "migrate" in the source code?

 -George


On Thu, Dec 13, 2012 at 5:39 PM, digvijay chauhan <digvijaych@gmail.com> wrote:

> Hello all,
>               I want to optimize the pre copy algorithm.So in which file
> can I find the implementation of algorithm.And how to understand the
> working of code? Which part causes the live migration in code?
>
>
> regards,
> DigvijaySingh
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--f46d042fd8989ad85804d0bf8694
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Did you grep for &quot;migrate&quot; in the source code?<br><br>=A0-George<=
br><div class=3D"gmail_extra"><br><br><div class=3D"gmail_quote">On Thu, De=
c 13, 2012 at 5:39 PM, digvijay chauhan <span dir=3D"ltr">&lt;<a href=3D"ma=
ilto:digvijaych@gmail.com" target=3D"_blank">digvijaych@gmail.com</a>&gt;</=
span> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">Hello all,<br>=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0=A0=A0 I want to optimize the pre copy algorithm.So in which file can I =
find the implementation of algorithm.And how to understand the working of c=
ode? Which part causes the live migration in code?<br>

<br><br>regards,<br>DigvijaySingh<br>
<br>_______________________________________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
<br></blockquote></div><br></div>

--f46d042fd8989ad85804d0bf8694--


--===============0011961592952747967==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0011961592952747967==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 17:55:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjD05-0003Tr-TN; Thu, 13 Dec 2012 17:55:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TjD04-0003Tm-LV
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:55:12 +0000
Received: from [85.158.137.99:37854] by server-2.bemta-3.messagelabs.com id
	23/9C-11239-F761AC05; Thu, 13 Dec 2012 17:55:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355421310!14158971!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23207 invoked from network); 13 Dec 2012 17:55:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:55:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="130274"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 17:55:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 13 Dec 2012 17:55:09 +0000
Message-ID: <1355421309.10554.153.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: digvijay chauhan <digvijaych@gmail.com>
Date: Thu, 13 Dec 2012 17:55:09 +0000
In-Reply-To: <CANq0ewuNEQ1NXoUChQio_SdQjn2ZBxM9f-Bwa-LyZ8O2KBE6kQ@mail.gmail.com>
References: <CANq0ewuNEQ1NXoUChQio_SdQjn2ZBxM9f-Bwa-LyZ8O2KBE6kQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Code for precopy algorithm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 17:39 +0000, digvijay chauhan wrote:
> Hello all,
>               I want to optimize the pre copy algorithm.So in which
> file can I find the implementation of algorithm.And how to understand
> the working of code? Which part causes the live migration in code?

You've already asked this and been answered:
http://lists.xen.org/archives/html/xen-users/2012-12/msg00142.html
http://lists.xen.org/archives/html/xen-users/2012-12/msg00160.html
http://lists.xen.org/archives/html/xen-users/2012-12/msg00130.html

I'm afraid it is now time for you to do some leg work of your own. It is
unreasonable for you to expect that people on this list will either
decide on your project for you or hold your hand every step of the way.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 17:58:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 17:58:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjD3K-0003n7-Ka; Thu, 13 Dec 2012 17:58:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1TjD3J-0003n1-A7
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 17:58:33 +0000
Received: from [85.158.139.83:7274] by server-2.bemta-5.messagelabs.com id
	C0/38-16162-8471AC05; Thu, 13 Dec 2012 17:58:32 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355421510!22404369!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6329 invoked from network); 13 Dec 2012 17:58:31 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 17:58:31 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so938548wgb.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 09:58:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=abPvMeEV1rbhMm26EhfCnT5RHiBAyvdPifu2IX311ns=;
	b=sepvuYDHaVIeFNIFRGHKwgMyjfaaDWQQcZTlZFbm6p7WjJfso54E/As28LR5gbxrmD
	MVq8brT5alZDqycTW+ryYO1Dguv2CPrlJRC1wlQkWOn7bdoAX7BABBAhOYuGmDupNoh3
	7B9XN6m5GVpEvBevVcKD4Ojfj77HglGt+0nbtbwr1Z/GGFCtYH5t/95VgwbuLI8WD9lK
	VzBN9I90JMY71SxdRTPQtrvx3NX4joN+PLqFhyiyrEvdnleb2EqAYTYIXt89S84yHA13
	TcS+9gZ6aad8emMSCxx+hTdhNdDOc9pZSw1D+cLKGAOLxv7hb6QY0MaD6Oo0NVrO5MM8
	N8aw==
MIME-Version: 1.0
Received: by 10.194.120.132 with SMTP id lc4mr9933776wjb.59.1355421510637;
	Thu, 13 Dec 2012 09:58:30 -0800 (PST)
Received: by 10.194.64.194 with HTTP; Thu, 13 Dec 2012 09:58:30 -0800 (PST)
In-Reply-To: <1355421309.10554.153.camel@zakaz.uk.xensource.com>
References: <CANq0ewuNEQ1NXoUChQio_SdQjn2ZBxM9f-Bwa-LyZ8O2KBE6kQ@mail.gmail.com>
	<1355421309.10554.153.camel@zakaz.uk.xensource.com>
Date: Thu, 13 Dec 2012 23:28:30 +0530
Message-ID: <CANq0ewtUbqybrTOuYeppXMSPvP4dJCSBQrBP14aedx06hfc56w@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Code for precopy algorithm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6442616330530525012=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6442616330530525012==
Content-Type: multipart/alternative; boundary=089e0118450aa9999c04d0bfa9d5

--089e0118450aa9999c04d0bfa9d5
Content-Type: text/plain; charset=ISO-8859-1

Sorry, sir. I have just 4 days left for the review; that's why I was asking
this. I also don't want to ask this type of question, but circumstances are
prompting me. Again, sorry for this type of question, sir. It is my project
and I have to do all the things myself, I know.

On Thu, Dec 13, 2012 at 11:25 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Thu, 2012-12-13 at 17:39 +0000, digvijay chauhan wrote:
> > Hello all,
> >               I want to optimize the pre copy algorithm.So in which
> > file can I find the implementation of algorithm.And how to understand
> > the working of code? Which part causes the live migration in code?
>
> You've already asked this and been answered:
> http://lists.xen.org/archives/html/xen-users/2012-12/msg00142.html
> http://lists.xen.org/archives/html/xen-users/2012-12/msg00160.html
> http://lists.xen.org/archives/html/xen-users/2012-12/msg00130.html
>
> I'm afraid it is now time for you to do some leg work of your own. It is
> unreasonable for you to expect that people on this list will either
> decide on your project for you or hold your hand every step of the way.
>
> Ian.
>
>

--089e0118450aa9999c04d0bfa9d5--


--===============6442616330530525012==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6442616330530525012==--


From xen-devel-bounces@lists.xen.org Thu Dec 13 18:21:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 18:21:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjDP7-0004IA-MW; Thu, 13 Dec 2012 18:21:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1TjDP5-0004I5-Br
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 18:21:03 +0000
Received: from [193.109.254.147:52831] by server-6.bemta-14.messagelabs.com id
	CD/E4-25153-E8C1AC05; Thu, 13 Dec 2012 18:21:02 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355422859!10003483!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk0MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24543 invoked from network); 13 Dec 2012 18:21:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 18:21:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="625628"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	13 Dec 2012 18:20:59 +0000
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 13 Dec 2012 13:20:59 -0500
From: Julien Grall <julien.grall@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Dec 2012 11:53:50 +0000
Message-ID: <53cbcf7907b54963906d7b5fa5ad40b4fff37886.1355399450.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH V3] libxenstore: filter watch events in
	libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XenStore queues watch events via a thread and notifies the user.
Sometimes xs_unwatch is called before all related messages have been read.
The use case is non-threaded libevent, where we have two events A and B:
    - Event A destroys something and calls xs_unwatch;
    - Event B notifies that a node has changed in XenStore.
As events are handled one by one, event A can be handled before event B.
So on the next xs_read_watch the user could retrieve a token for a watch
that has already been removed, and a segfault occurs if the token stores a
pointer to a structure (e.g. "backend:0xcafe").

To avoid problems with existing applications using libxenstore, this
behaviour is only enabled if XS_UNWATCH_SAFE is given to xs_open.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
  Modification between V2 and V3:
    - Add XS_UNWATCH_SAFE;
    - Rename xs_clear_watch_pipe to xs_maybe_clear_watch_pipe.

  Modification between V1 and V2:
    - Add xs_clear_watch_pipe to avoid code duplication;
    - Modify commit message by Ian Jackson;
    - Rework list filtering.

 tools/xenstore/xenstore.h |    7 ++++
 tools/xenstore/xs.c       |   86 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 85 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstore.h b/tools/xenstore/xenstore.h
index 7259e49..81ce8da 100644
--- a/tools/xenstore/xenstore.h
+++ b/tools/xenstore/xenstore.h
@@ -26,6 +26,13 @@
 
 #define XS_OPEN_READONLY	1UL<<0
 #define XS_OPEN_SOCKETONLY      1UL<<1
+/*
+ * Avoid race condition with unwatch - Not enabled by default
+ *
+ * If you want to enable it please be sure to watch a path (/foo) and
+ * a sub-path (/foo/bar) with the same token.
+ */
+#define XS_UNWATCH_SAFE     1UL<<2
 
 struct xs_handle;
 typedef uint32_t xs_transaction_t;
diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index b951015..591fe79 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -67,6 +67,8 @@ struct xs_handle {
 
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
+	/* Filtering watch event in unwatch function? */
+	bool unwatch_safe;
 
 	/*
          * A list of replies. Currently only one will ever be outstanding
@@ -125,6 +127,8 @@ struct xs_handle {
 	struct list_head watch_list;
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
+	/* Filtering watch event in unwatch function? */
+	bool unwatch_safe;
 };
 
 #define mutex_lock(m)		((void)0)
@@ -247,6 +251,8 @@ static struct xs_handle *get_handle(const char *connect_to)
 	/* Watch pipe is allocated on demand in xs_fileno(). */
 	h->watch_pipe[0] = h->watch_pipe[1] = -1;
 
+	h->unwatch_safe = false;
+
 #ifdef USE_PTHREAD
 	pthread_mutex_init(&h->watch_mutex, NULL);
 	pthread_cond_init(&h->watch_condvar, NULL);
@@ -287,6 +293,9 @@ struct xs_handle *xs_open(unsigned long flags)
 	if (!xsh && !(flags & XS_OPEN_SOCKETONLY))
 		xsh = get_handle(xs_domain_dev());
 
+	if (xsh && (flags & XS_UNWATCH_SAFE))
+		xsh->unwatch_safe = true;
+
 	return xsh;
 }
 
@@ -753,6 +762,19 @@ bool xs_watch(struct xs_handle *h, const char *path, const char *token)
 				ARRAY_SIZE(iov), NULL));
 }
 
+
+/* Clear the pipe token if there are no more pending watches.
+ * We assume the watch_mutex is already held.
+ */
+static void xs_maybe_clear_watch_pipe(struct xs_handle *h)
+{
+	char c;
+
+	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
+		while (read(h->watch_pipe[0], &c, 1) != 1)
+			continue;
+}
+
 /* Find out what node change was on (will block if nothing pending).
  * Returns array of two pointers: path and token, or NULL.
  * Call free() after use.
@@ -761,7 +783,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 				  int nonblocking)
 {
 	struct xs_stored_msg *msg;
-	char **ret, *strings, c = 0;
+	char **ret, *strings;
 	unsigned int num_strings, i;
 
 	mutex_lock(&h->watch_mutex);
@@ -798,11 +820,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 	msg = list_top(&h->watch_list, struct xs_stored_msg, list);
 	list_del(&msg->list);
 
-	/* Clear the pipe token if there are no more pending watches. */
-	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
-		while (read(h->watch_pipe[0], &c, 1) != 1)
-			continue;
-
+	xs_maybe_clear_watch_pipe(h);
 	mutex_unlock(&h->watch_mutex);
 
 	assert(msg->hdr.type == XS_WATCH_EVENT);
@@ -855,14 +873,66 @@ char **xs_read_watch(struct xs_handle *h, unsigned int *num)
 bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
 {
 	struct iovec iov[2];
+	struct xs_stored_msg *msg, *tmsg;
+	bool res;
+	char *s, *p;
+	unsigned int i;
+	char *l_token, *l_path;
 
 	iov[0].iov_base = (char *)path;
 	iov[0].iov_len = strlen(path) + 1;
 	iov[1].iov_base = (char *)token;
 	iov[1].iov_len = strlen(token) + 1;
 
-	return xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
-				ARRAY_SIZE(iov), NULL));
+	res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
+						   ARRAY_SIZE(iov), NULL));
+
+	if (!h->unwatch_safe) /* Don't filter the watch list */
+		return res;
+
+
+	/* Filter the watch list to remove potential messages */
+	mutex_lock(&h->watch_mutex);
+
+	if (list_empty(&h->watch_list)) {
+	    mutex_unlock(&h->watch_mutex);
+	    return res;
+	}
+
+	list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
+		assert(msg->hdr.type == XS_WATCH_EVENT);
+
+		s = msg->body;
+
+		l_token = NULL;
+		l_path = NULL;
+
+		for (p = s, i = 0; p < msg->body + msg->hdr.len; p++) {
+			if (*p == '\0')
+			{
+				if (i == XS_WATCH_TOKEN)
+					l_token = s;
+				else if (i == XS_WATCH_PATH)
+					l_path = s;
+				i++;
+				s = p + 1;
+			}
+		}
+
+		if (l_token && !strcmp(token, l_token)
+			/* Use strncmp because we can have a watch fired on sub-directory */
+			&& l_path && !strncmp(path, l_path, strlen(path))) {
+			fprintf (stderr, "DELETE\n");
+			list_del(&msg->list);
+			free(msg);
+		}
+	}
+
+	xs_maybe_clear_watch_pipe(h);
+
+	mutex_unlock(&h->watch_mutex);
+
+	return res;
 }
 
 /* Start a transaction: changes by others will not be seen during this
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 18:24:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 18:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjDRz-0004Qu-F8; Thu, 13 Dec 2012 18:24:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TjDRy-0004QX-7A; Thu, 13 Dec 2012 18:24:02 +0000
Received: from [85.158.138.51:13942] by server-13.bemta-3.messagelabs.com id
	03/11-00465-14D1AC05; Thu, 13 Dec 2012 18:24:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355423040!28796377!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20892 invoked from network); 13 Dec 2012 18:24:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 18:24:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="131128"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 18:24:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 18:23:59 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjDRv-0008OM-Tz; Thu, 13 Dec 2012 18:23:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjDRv-000069-QP;
	Thu, 13 Dec 2012 18:23:59 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20682.7487.598565.547405@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 18:23:59 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CA08AE.80102@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Handling iSCSI block devices (Was: Driver domains and device handling)"):
> [stuff]

Most of this sounds sensible.

> So the diskspec line would look like:
> 
> portal=127.0.0.0:3260, authmethod=CHAP, user=foo, password=bar,
> backendtype=phy, format=iscsi, vdev=xvda,
> target=iqn.2012-12.com.example:lun1

Are we suggesting that every backend type should be able to define its
own parameters?  I was imagining that these options would all go into
"target" - and if "target" is last it can contain commas and "="s.
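
A toy sketch of that "target is last" convention (the helper below is
hypothetical, for illustration only - it is not xl's actual xlu diskspec
parser): once "target=" is seen, the rest of the string, commas and "="
signs included, is taken verbatim as the target value.

```c
#include <string.h>

/* Illustrative only: return the value of a trailing "target=" key.
 * Everything after "target=" -- including commas and '=' signs --
 * belongs to the target, so no further splitting is done. */
static const char *target_value(const char *diskspec)
{
    const char *p = strstr(diskspec, "target=");
    return p ? p + strlen("target=") : NULL;
}
```

With this rule a spec such as
"vdev=xvda, target=iqn.2012-12.com.example:lun1,user=foo" yields the
whole tail "iqn.2012-12.com.example:lun1,user=foo" as the target.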

> Note that I've used the format parameter here to specify "iscsi", which
> will be a new format, to distinguish this from a block device that also
> uses the "phy" backend type. All these new parameters should also be
> added to the libxl_device_disk struct.

I don't think this is right.  I think the right answer is
"script=iscsi".  The format might be qcow or something.

> Since this device type uses two hotplug scripts we should also add a new
> generic parameter to specify a "preparatory" hotplug script, so other
> custom devices can also make use of this, something like "preparescript"?

Clearly when we have this two-phase setup we need to have more
scripts, or the existing script with different arguments.

I think it should be controlled by the same argument.  So maybe
script=iscsi causes libxl to check for a dropping in the script file
saying "yes do the prepare thing" or maybe it runs
/etc/xen/scripts/block-iscsi--prepare or something.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 18:36:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 18:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjDda-00058L-Uw; Thu, 13 Dec 2012 18:36:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjDdZ-00058G-IS
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 18:36:01 +0000
Received: from [85.158.137.99:29804] by server-2.bemta-3.messagelabs.com id
	32/5E-11239-0102AC05; Thu, 13 Dec 2012 18:36:00 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355423723!13047656!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19886 invoked from network); 13 Dec 2012 18:35:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 18:35:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="131860"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 18:35:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 18:35:02 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjDcc-0008Rl-Qs; Thu, 13 Dec 2012 18:35:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjDcc-00006w-Ku;
	Thu, 13 Dec 2012 18:35:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20682.8150.410558.581149@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 18:35:02 +0000
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <53cbcf7907b54963906d7b5fa5ad40b4fff37886.1355399450.git.julien.grall@citrix.com>
References: <53cbcf7907b54963906d7b5fa5ad40b4fff37886.1355399450.git.julien.grall@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH V3] libxenstore: filter watch events in
 libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien Grall writes ("[PATCH V3] libxenstore: filter watch events in libxenstore when we unwatch"):
> XenStore queues watch events via a thread and notifies the user.
> Sometimes xs_unwatch is called before all related messages have been
> read. The use case is non-threaded libevent, where we have two events
> A and B:
>     - Event A destroys something and calls xs_unwatch;
>     - Event B notifies that a node has changed in XenStore.
> As the events are handled one by one, event A can be handled before
> event B. So on the next xs_read_watch the user could retrieve a token
> for an already-unwatched node, and a segfault occurs if the token
> stores a pointer to the structure (i.e. "backend:0xcafe").
> 
> To avoid problems for existing applications using libxenstore, this
> behaviour is only enabled if XS_UNWATCH_SAFE is given to xs_open.

Sorry I didn't reply to your previous email on this subject.

I think this is a reasonable approach but the option name needs to be
more descriptive and the documentation a bit better.

XS_UNWATCH_FILTER?  XS_WATCH_TOKENS_UNIQUE?

As for the documentation:

 /*
  * Setting XS_UNWATCH_FILTER arranges that after xs_unwatch, no
  * related watch events will be delivered via xs_read_watch.  But
  * this relies on all tokens provided by the application to
  * libxenstore being unique.  The choices are:
  *
  *   XS_UNWATCH_FILTER clear           XS_UNWATCH_FILTER set
  *
  *    Even after xs_unwatch, "stale"    After xs_unwatch returns, no
  *    instances of the watch event      watch events with the same
  *    may be delivered.                 token will be delivered.
  *
  *    Tokens need not be unique.        Tokens specified in xs_watch
  *                                      calls must be unique.
  */

With this specification it is not necessary to check the path when
filtering out the stale events.  Which is just as well because:

> +		if (l_token && !strcmp(token, l_token)
> +			/* Use strncmp because we can have a watch fired on sub-directory */
> +			&& l_path && !strncmp(path, l_path, strlen(path))) {

This isn't correct: the subpath relation is not the same as the prefix
relation.
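
Concretely (a sketch for illustration, not part of the patch): "/a/bc"
is a prefix match for the watched path "/a/b" but is not a subpath of
it. A correct subpath test must also look at the character that follows
the matched prefix:

```c
#include <string.h>

/* Illustrative only: "node" is a subpath of "path" iff it equals
 * "path" or lies strictly underneath it, i.e. the character after
 * the matched prefix is '/'. A plain strncmp would wrongly accept
 * "/a/bc" for the watched path "/a/b". */
static int is_subpath(const char *path, const char *node)
{
    size_t len = strlen(path);
    return strncmp(path, node, len) == 0 &&
           (node[len] == '\0' || node[len] == '/');
}
```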

Would these semantics be sufficient for your application?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 18:36:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 18:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjDe5-0005Ag-C2; Thu, 13 Dec 2012 18:36:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjDe3-0005AU-R5
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 18:36:31 +0000
Received: from [85.158.139.83:54691] by server-9.bemta-5.messagelabs.com id
	B2/61-10690-F202AC05; Thu, 13 Dec 2012 18:36:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355423790!29781348!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwNTg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15893 invoked from network); 13 Dec 2012 18:36:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Dec 2012 18:36:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,275,1355097600"; 
   d="scan'208";a="131888"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Dec 2012 18:36:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 13 Dec 2012 18:36:29 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjDe1-0008SI-T9; Thu, 13 Dec 2012 18:36:29 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjDe1-00007B-PO;
	Thu, 13 Dec 2012 18:36:29 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20682.8237.686471.637990@mariner.uk.xensource.com>
Date: Thu, 13 Dec 2012 18:36:29 +0000
To: Julien Grall <julien.grall@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <20682.8150.410558.581149@mariner.uk.xensource.com>
References: <53cbcf7907b54963906d7b5fa5ad40b4fff37886.1355399450.git.julien.grall@citrix.com>
	<20682.8150.410558.581149@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: [Xen-devel] [PATCH V3] libxenstore: filter watch events in
 libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("[PATCH V3] libxenstore: filter watch events in libxenstore when we unwatch"):
>   *    Tokens need not be unique.        Tokens specified in xs_watch
>   *                                      calls must be unique.

This is vague about lifetime.  Perhaps:

  A token specified in an xs_watch call may not also be specified in
  a second xs_watch call until a successful xs_unwatch for the first
  watch has returned.

?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 21:48:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 21:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjGcu-0008Qx-Bi; Thu, 13 Dec 2012 21:47:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TjGcs-0008Qs-Lv
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 21:47:31 +0000
Received: from [85.158.138.51:15605] by server-15.bemta-3.messagelabs.com id
	36/0E-07921-1FC4AC05; Thu, 13 Dec 2012 21:47:29 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355435247!28515946!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12105 invoked from network); 13 Dec 2012 21:47:27 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-174.messagelabs.com with SMTP;
	13 Dec 2012 21:47:27 -0000
X-TM-IMSS-Message-ID: <77d6c882000a84ca@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 77d6c882000a84ca ;
	Thu, 13 Dec 2012 16:46:36 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBDLlBFN022270; 
	Thu, 13 Dec 2012 16:47:11 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Thu, 13 Dec 2012 16:47:10 -0500
Message-Id: <1355435230-4176-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH RFC/v2] libxl: postpone backend name resolution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This replaces the backend_domid field in libxl devices with a structure
allowing either a domid or a domain name to be specified.  The domain
name is resolved into a domain ID in the _setdefault function when
adding the device.  This change allows the backend of the block devices
to be specified (which previously required passing the libxl_ctx down
into the block device parser).

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>

---

This patch does not include the changes to tools/libxl/libxlu_disk_l.c
and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
changes due to different generator versions.
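
For illustration, the new option could appear in an xl domain
configuration like this (a sketch; the driver-domain name
"storage-dom" and the device paths are hypothetical):

```
# Hypothetical xl config fragment: the 'backend=' key added by this
# patch names the backend (driver) domain serving the disk; the other
# keys are the usual xl disk configuration keys.
disk = [ 'format=raw, vdev=xvda, access=rw, backend=storage-dom, target=/dev/vg1/guest-disk' ]
```

With this patch the name is carried in backend_domain.name and only
resolved to a domid in the _setdefault path, so the parser itself no
longer needs a libxl_ctx.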

 docs/misc/xl-disk-configuration.txt | 12 +++++++
 tools/libxl/libxl.c                 | 65 +++++++++++++++++++++++++------------
 tools/libxl/libxl_dm.c              |  6 ++--
 tools/libxl/libxl_types.idl         | 15 ++++++---
 tools/libxl/libxl_utils.c           |  2 +-
 tools/libxl/libxlu_disk_l.l         |  1 +
 tools/libxl/xl_cmdimpl.c            | 36 ++++----------------
 tools/libxl/xl_sxp.c                |  6 ++--
 8 files changed, 82 insertions(+), 61 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 86c16be..5bd456d 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -139,6 +139,18 @@ cdrom
 Convenience alias for "devtype=cdrom".
 
 
+backend=<domain-name>
+---------------------
+
+Description:           Designates a backend domain for the device
+Supported values:      Valid domain names
+Mandatory:             No
+
+Specifies the backend domain to which this device should attach. This
+defaults to domain 0. Specifying another domain requires setting up a
+driver domain, which is outside the scope of this document.
+
+
 backendtype=<backend-type>
 --------------------------
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 6c4455e..edb29b3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1153,7 +1153,7 @@ static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
     sscanf(backend,
             "/local/domain/%d/backend/%" TOSTRING(BACKEND_STRING_SIZE)
            "[a-z]/%*d/%*d",
-           &disk->backend_domid, backend_type);
+           &disk->backend_domain.domid, backend_type);
     if (!strcmp(backend_type, "tap") || !strcmp(backend_type, "vbd")) {
         disk->backend = LIBXL_DISK_BACKEND_TAP;
     } else if (!strcmp(backend_type, "qdisk")) {
@@ -1710,13 +1710,36 @@ out:
     return;
 }
 
+/* backend domain name-to-domid conversion utility */
+static int libxl__resolve_domain(libxl__gc *gc, struct libxl_domain_desc *desc)
+{
+    int i, rv;
+    const char *name = desc->name;
+    if (!name)
+        return 0;
+    for (i=0; name[i]; i++) {
+        if (!isdigit(name[i])) {
+            rv = libxl_name_to_domid(libxl__gc_owner(gc), name, &desc->domid);
+            if (rv)
+                return rv;
+            goto resolved;
+        }
+    }
+    desc->domid = atoi(name);
+
+ resolved:
+    free(desc->name);
+    desc->name = NULL;
+    return 0;
+}
+
 /******************************************************************************/
 int libxl__device_vtpm_setdefault(libxl__gc *gc, libxl_device_vtpm *vtpm)
 {
    if(libxl_uuid_is_nil(&vtpm->uuid)) {
       libxl_uuid_generate(&vtpm->uuid);
    }
-   return 0;
+   return libxl__resolve_domain(gc, &vtpm->backend_domain);
 }
 
 static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
@@ -1724,7 +1747,7 @@ static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
                                    libxl__device *device)
 {
    device->backend_devid   = vtpm->devid;
-   device->backend_domid   = vtpm->backend_domid;
+   device->backend_domid   = vtpm->backend_domain.domid;
    device->backend_kind    = LIBXL__DEVICE_KIND_VTPM;
    device->devid           = vtpm->devid;
    device->domid           = domid;
@@ -1783,7 +1806,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
     flexarray_append(back, "False");
 
     flexarray_append(front, "backend-id");
-    flexarray_append(front, GCSPRINTF("%d", vtpm->backend_domid));
+    flexarray_append(front, GCSPRINTF("%d", vtpm->backend_domain.domid));
     flexarray_append(front, "state");
     flexarray_append(front, GCSPRINTF("%d", 1));
     flexarray_append(front, "handle");
@@ -1832,7 +1855,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
           tmp = libxl__xs_read(gc, XBT_NULL,
                 GCSPRINTF("%s/%s/backend_id",
                    fe_path, *dir));
-          vtpm->backend_domid = atoi(tmp);
+          vtpm->backend_domain.domid = atoi(tmp);
 
           tmp = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/uuid", be_path));
           if(tmp) {
@@ -1934,7 +1957,7 @@ int libxl_devid_to_device_vtpm(libxl_ctx *ctx,
     rc = 1;
     for (i = 0; i < nb; ++i) {
         if(devid == vtpms[i].devid) {
-            vtpm->backend_domid = vtpms[i].backend_domid;
+            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
             vtpm->devid = vtpms[i].devid;
             libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
             rc = 0;
@@ -1956,7 +1979,7 @@ int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
     rc = libxl__device_disk_set_backend(gc, disk);
     if (rc) return rc;
 
-    return rc;
+    return libxl__resolve_domain(gc, &disk->backend_domain);
 }
 
 int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
@@ -1973,7 +1996,7 @@ int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
         return ERROR_INVAL;
     }
 
-    device->backend_domid = disk->backend_domid;
+    device->backend_domid = disk->backend_domain.domid;
     device->backend_devid = devid;
 
     switch (disk->backend) {
@@ -2133,7 +2156,7 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
 
         flexarray_append(front, "backend-id");
-        flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
+        flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domain.domid));
         flexarray_append(front, "state");
         flexarray_append(front, libxl__sprintf(gc, "%d", 1));
         flexarray_append(front, "virtual-device");
@@ -2298,7 +2321,7 @@ static int libxl__append_disk_list_of_type(libxl__gc *gc,
             p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
             if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
                 goto out;
-            pdisk->backend_domid = 0;
+            pdisk->backend_domain.domid = 0;
             *ndisks += 1;
         }
     }
@@ -2759,7 +2782,7 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
         LOG(ERROR, "unable to get current hotplug scripts execution setting");
         return run_hotplug_scripts;
     }
-    if (nic->backend_domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
+    if (nic->backend_domain.domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
         LOG(ERROR, "cannot use a backend domain different than %d if"
                    "hotplug scripts are executed from libxl",
                    LIBXL_TOOLSTACK_DOMID);
@@ -2784,7 +2807,7 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
         abort();
     }
 
-    return 0;
+    return libxl__resolve_domain(gc, &nic->backend_domain);
 }
 
 static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
@@ -2792,7 +2815,7 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
                                   libxl__device *device)
 {
     device->backend_devid    = nic->devid;
-    device->backend_domid    = nic->backend_domid;
+    device->backend_domid    = nic->backend_domain.domid;
     device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
     device->devid            = nic->devid;
     device->domid            = domid;
@@ -2874,7 +2897,7 @@ void libxl__device_nic_add(libxl__egc *egc, uint32_t domid,
                                      libxl_nic_type_to_string(nic->nictype)));
 
     flexarray_append(front, "backend-id");
-    flexarray_append(front, libxl__sprintf(gc, "%d", nic->backend_domid));
+    flexarray_append(front, libxl__sprintf(gc, "%d", nic->backend_domain.domid));
     flexarray_append(front, "state");
     flexarray_append(front, libxl__sprintf(gc, "%d", 1));
     flexarray_append(front, "handle");
@@ -2991,7 +3014,7 @@ static int libxl__append_nic_list_of_type(libxl__gc *gc,
             const char *p;
             p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
             libxl__device_nic_from_xs_be(gc, p, pnic);
-            pnic->backend_domid = 0;
+            pnic->backend_domain.domid = 0;
         }
     }
     return 0;
@@ -3144,7 +3167,7 @@ out:
 
 int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
 {
-    return 0;
+    return libxl__resolve_domain(gc, &vkb->backend_domain);
 }
 
 static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
@@ -3152,7 +3175,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
                                   libxl__device *device)
 {
     device->backend_devid = vkb->devid;
-    device->backend_domid = vkb->backend_domid;
+    device->backend_domid = vkb->backend_domain.domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
     device->devid = vkb->devid;
     device->domid = domid;
@@ -3205,7 +3228,7 @@ int libxl__device_vkb_add(libxl__gc *gc, uint32_t domid,
     flexarray_append(back, libxl__domid_to_name(gc, domid));
 
     flexarray_append(front, "backend-id");
-    flexarray_append(front, libxl__sprintf(gc, "%d", vkb->backend_domid));
+    flexarray_append(front, libxl__sprintf(gc, "%d", vkb->backend_domain.domid));
     flexarray_append(front, "state");
     flexarray_append(front, libxl__sprintf(gc, "%d", 1));
 
@@ -3236,7 +3259,7 @@ int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
     libxl_defbool_setdefault(&vfb->sdl.enable, false);
     libxl_defbool_setdefault(&vfb->sdl.opengl, false);
 
-    return 0;
+    return libxl__resolve_domain(gc, &vfb->backend_domain);
 }
 
 static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
@@ -3244,7 +3267,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
                                   libxl__device *device)
 {
     device->backend_devid = vfb->devid;
-    device->backend_domid = vfb->backend_domid;
+    device->backend_domid = vfb->backend_domain.domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VFB;
     device->devid = vfb->devid;
     device->domid = domid;
@@ -3309,7 +3332,7 @@ int libxl__device_vfb_add(libxl__gc *gc, uint32_t domid, libxl_device_vfb *vfb)
     }
 
     flexarray_append_pair(front, "backend-id",
-                          libxl__sprintf(gc, "%d", vfb->backend_domid));
+                          libxl__sprintf(gc, "%d", vfb->backend_domain.domid));
     flexarray_append_pair(front, "state", libxl__sprintf(gc, "%d", 1));
 
     libxl__device_generic_add(gc, XBT_NULL, &device,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index c036dc1..82a55ea 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -644,13 +644,15 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
     libxl_device_vfb_init(vfb);
     libxl_device_vkb_init(vkb);
 
-    vfb->backend_domid = 0;
+    vfb->backend_domain.domid = 0;
+    vfb->backend_domain.name = NULL;
     vfb->devid = 0;
     vfb->vnc = b_info->u.hvm.vnc;
     vfb->keymap = b_info->u.hvm.keymap;
     vfb->sdl = b_info->u.hvm.sdl;
 
-    vkb->backend_domid = 0;
+    vkb->backend_domain.domid = 0;
+    vkb->backend_domain.name = NULL;
     vkb->devid = 0;
     return 0;
 }
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 93524f0..6ec7967 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -136,6 +136,11 @@ libxl_vga_interface_type = Enumeration("vga_interface_type", [
 # Complex libxl types
 #
 
+libxl_domain_desc = Struct("domain_desc", [
+    ("domid", libxl_domid),
+    ("name",  string),
+    ])
+
 libxl_ioport_range = Struct("ioport_range", [
     ("first", uint32),
     ("number", uint32),
@@ -344,7 +349,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 )
 
 libxl_device_vfb = Struct("device_vfb", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain",libxl_domain_desc),
     ("devid",         libxl_devid),
     ("vnc",           libxl_vnc_info),
     ("sdl",           libxl_sdl_info),
@@ -353,12 +358,12 @@ libxl_device_vfb = Struct("device_vfb", [
     ])
 
 libxl_device_vkb = Struct("device_vkb", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain", libxl_domain_desc),
     ("devid", libxl_devid),
     ])
 
 libxl_device_disk = Struct("device_disk", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain", libxl_domain_desc),
     ("pdev_path", string),
     ("vdev", string),
     ("backend", libxl_disk_backend),
@@ -370,7 +375,7 @@ libxl_device_disk = Struct("device_disk", [
     ])
 
 libxl_device_nic = Struct("device_nic", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain", libxl_domain_desc),
     ("devid", libxl_devid),
     ("mtu", integer),
     ("model", string),
@@ -397,7 +402,7 @@ libxl_device_pci = Struct("device_pci", [
     ])
 
 libxl_device_vtpm = Struct("device_vtpm", [
-    ("backend_domid",    libxl_domid),
+    ("backend_domain",   libxl_domain_desc),
     ("devid",            libxl_devid),
     ("uuid",             libxl_uuid),
 ])
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index 8f78790..86a310e 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -478,7 +478,7 @@ int libxl_uuid_to_device_vtpm(libxl_ctx *ctx, uint32_t domid,
     rc = 1;
     for (i = 0; i < nb; ++i) {
         if(!libxl_uuid_compare(uuid, &vtpms[i].uuid)) {
-            vtpm->backend_domid = vtpms[i].backend_domid;
+            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
             vtpm->devid = vtpms[i].devid;
             libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
             rc = 0;
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index bee16a1..a65030f 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -168,6 +168,7 @@ devtype=disk,?	{ DPC->disk->is_cdrom = 0; }
 devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
+backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domain.name, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index e964bf1..bc22f3c 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1090,12 +1090,7 @@ static void parse_config_data(const char *config_source,
                      break;
                   *p2 = '\0';
                   if (!strcmp(p, "backend")) {
-                     if(domain_qualifier_to_domid(p2 + 1, &(vtpm->backend_domid), 0))
-                     {
-                        fprintf(stderr,
-                              "Specified vtpm backend domain `%s' does not exist!\n", p2 + 1);
-                        exit(1);
-                     }
+                     vtpm->backend_domain.name = strdup(p2 + 1);
                      got_backend = true;
                   } else if(!strcmp(p, "uuid")) {
                      if( libxl_uuid_from_string(&vtpm->uuid, p2 + 1) ) {
@@ -1190,11 +1185,8 @@ static void parse_config_data(const char *config_source,
                     free(nic->ifname);
                     nic->ifname = strdup(p2 + 1);
                 } else if (!strcmp(p, "backend")) {
-                    if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
-                        fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
-                        nic->backend_domid = 0;
-                    }
-                    if (nic->backend_domid != 0 && run_hotplug_scripts) {
+                    nic->backend_domain.name = strdup(p2 + 1);
+                    if (run_hotplug_scripts) {
                         fprintf(stderr, "ERROR: the vif 'backend=' option "
                                 "cannot be used in conjunction with "
                                 "run_hotplug_scripts, please set "
@@ -2431,8 +2423,6 @@ static void cd_insert(uint32_t domid, const char *virtdev, char *phys)
 
     parse_disk_config(&config, buf, &disk);
 
-    disk.backend_domid = 0;
-
     libxl_cdrom_insert(ctx, domid, &disk, NULL);
 
     libxl_device_disk_dispose(&disk);
@@ -5516,11 +5506,7 @@ int main_networkattach(int argc, char **argv)
         } else if (MATCH_OPTION("script", *argv, oparg)) {
             replace_string(&nic.script, oparg);
         } else if (MATCH_OPTION("backend", *argv, oparg)) {
-            if(libxl_name_to_domid(ctx, oparg, &val)) {
-                fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
-                val = 0;
-            }
-            nic.backend_domid = val;
+            replace_string(&nic.backend_domain.name, oparg);
         } else if (MATCH_OPTION("vifname", *argv, oparg)) {
             replace_string(&nic.ifname, oparg);
         } else if (MATCH_OPTION("model", *argv, oparg)) {
@@ -5623,8 +5609,8 @@ int main_networkdetach(int argc, char **argv)
 int main_blockattach(int argc, char **argv)
 {
     int opt;
-    uint32_t fe_domid, be_domid = 0;
-    libxl_device_disk disk = { 0 };
+    uint32_t fe_domid;
+    libxl_device_disk disk;
     XLU_Config *config = 0;
 
     if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
@@ -5639,8 +5625,6 @@ int main_blockattach(int argc, char **argv)
     parse_disk_config_multistring
         (&config, argc-optind, (const char* const*)argv + optind, &disk);
 
-    disk.backend_domid = be_domid;
-
     if (dryrun_only) {
         char *json = libxl_device_disk_to_json(ctx, &disk);
         printf("disk: %s\n", json);
@@ -5720,7 +5704,6 @@ int main_vtpmattach(int argc, char **argv)
     int opt;
     libxl_device_vtpm vtpm;
     char *oparg;
-    unsigned int val;
     uint32_t domid;
 
     if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
@@ -5740,12 +5723,7 @@ int main_vtpmattach(int argc, char **argv)
                 return 1;
             }
         } else if (MATCH_OPTION("backend", *argv, oparg)) {
-            if(domain_qualifier_to_domid(oparg, &val, 0)) {
-                fprintf(stderr,
-                      "Specified backend domain does not exist, defaulting to Dom0\n");
-                val = 0;
-            }
-            vtpm.backend_domid = val;
+            replace_string(&vtpm.backend_domain.name, oparg);
         } else {
             fprintf(stderr, "unrecognized argument `%s'\n", *argv);
             return 1;
diff --git a/tools/libxl/xl_sxp.c b/tools/libxl/xl_sxp.c
index a16a025..6cb8d40 100644
--- a/tools/libxl/xl_sxp.c
+++ b/tools/libxl/xl_sxp.c
@@ -163,7 +163,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
     for (i = 0; i < d_config->num_disks; i++) {
         printf("\t(device\n");
         printf("\t\t(tap\n");
-        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domid);
+        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domain.domid);
         printf("\t\t\t(frontend_domid %d)\n", domid);
         printf("\t\t\t(physpath %s)\n", d_config->disks[i].pdev_path);
         printf("\t\t\t(phystype %d)\n", d_config->disks[i].backend);
@@ -180,7 +180,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
         printf("\t\t(vif\n");
         if (d_config->nics[i].ifname)
             printf("\t\t\t(vifname %s)\n", d_config->nics[i].ifname);
-        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domid);
+        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domain.domid);
         printf("\t\t\t(frontend_domid %d)\n", domid);
         printf("\t\t\t(devid %d)\n", d_config->nics[i].devid);
         printf("\t\t\t(mtu %d)\n", d_config->nics[i].mtu);
@@ -210,7 +210,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
     for (i = 0; i < d_config->num_vfbs; i++) {
         printf("\t(device\n");
         printf("\t\t(vfb\n");
-        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domid);
+        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domain.domid);
         printf("\t\t\t(frontend_domid %d)\n", domid);
         printf("\t\t\t(devid %d)\n", d_config->vfbs[i].devid);
         printf("\t\t\t(vnc %s)\n",
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 }
 
 static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
@@ -3244,7 +3267,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
                                   libxl__device *device)
 {
     device->backend_devid = vfb->devid;
-    device->backend_domid = vfb->backend_domid;
+    device->backend_domid = vfb->backend_domain.domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VFB;
     device->devid = vfb->devid;
     device->domid = domid;
@@ -3309,7 +3332,7 @@ int libxl__device_vfb_add(libxl__gc *gc, uint32_t domid, libxl_device_vfb *vfb)
     }
 
     flexarray_append_pair(front, "backend-id",
-                          libxl__sprintf(gc, "%d", vfb->backend_domid));
+                          libxl__sprintf(gc, "%d", vfb->backend_domain.domid));
     flexarray_append_pair(front, "state", libxl__sprintf(gc, "%d", 1));
 
     libxl__device_generic_add(gc, XBT_NULL, &device,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index c036dc1..82a55ea 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -644,13 +644,15 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
     libxl_device_vfb_init(vfb);
     libxl_device_vkb_init(vkb);
 
-    vfb->backend_domid = 0;
+    vfb->backend_domain.domid = 0;
+    vfb->backend_domain.name = NULL;
     vfb->devid = 0;
     vfb->vnc = b_info->u.hvm.vnc;
     vfb->keymap = b_info->u.hvm.keymap;
     vfb->sdl = b_info->u.hvm.sdl;
 
-    vkb->backend_domid = 0;
+    vkb->backend_domain.domid = 0;
+    vkb->backend_domain.name = NULL;
     vkb->devid = 0;
     return 0;
 }
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 93524f0..6ec7967 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -136,6 +136,11 @@ libxl_vga_interface_type = Enumeration("vga_interface_type", [
 # Complex libxl types
 #
 
+libxl_domain_desc = Struct("domain_desc", [
+    ("domid", libxl_domid),
+    ("name",  string),
+    ])
+
 libxl_ioport_range = Struct("ioport_range", [
     ("first", uint32),
     ("number", uint32),
@@ -344,7 +349,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 )
 
 libxl_device_vfb = Struct("device_vfb", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain",libxl_domain_desc),
     ("devid",         libxl_devid),
     ("vnc",           libxl_vnc_info),
     ("sdl",           libxl_sdl_info),
@@ -353,12 +358,12 @@ libxl_device_vfb = Struct("device_vfb", [
     ])
 
 libxl_device_vkb = Struct("device_vkb", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain", libxl_domain_desc),
     ("devid", libxl_devid),
     ])
 
 libxl_device_disk = Struct("device_disk", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain", libxl_domain_desc),
     ("pdev_path", string),
     ("vdev", string),
     ("backend", libxl_disk_backend),
@@ -370,7 +375,7 @@ libxl_device_disk = Struct("device_disk", [
     ])
 
 libxl_device_nic = Struct("device_nic", [
-    ("backend_domid", libxl_domid),
+    ("backend_domain", libxl_domain_desc),
     ("devid", libxl_devid),
     ("mtu", integer),
     ("model", string),
@@ -397,7 +402,7 @@ libxl_device_pci = Struct("device_pci", [
     ])
 
 libxl_device_vtpm = Struct("device_vtpm", [
-    ("backend_domid",    libxl_domid),
+    ("backend_domain",   libxl_domain_desc),
     ("devid",            libxl_devid),
     ("uuid",             libxl_uuid),
 ])
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index 8f78790..86a310e 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -478,7 +478,7 @@ int libxl_uuid_to_device_vtpm(libxl_ctx *ctx, uint32_t domid,
     rc = 1;
     for (i = 0; i < nb; ++i) {
         if(!libxl_uuid_compare(uuid, &vtpms[i].uuid)) {
-            vtpm->backend_domid = vtpms[i].backend_domid;
+            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
             vtpm->devid = vtpms[i].devid;
             libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
             rc = 0;
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index bee16a1..a65030f 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -168,6 +168,7 @@ devtype=disk,?	{ DPC->disk->is_cdrom = 0; }
 devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
+backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domain.name, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index e964bf1..bc22f3c 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1090,12 +1090,7 @@ static void parse_config_data(const char *config_source,
                      break;
                   *p2 = '\0';
                   if (!strcmp(p, "backend")) {
-                     if(domain_qualifier_to_domid(p2 + 1, &(vtpm->backend_domid), 0))
-                     {
-                        fprintf(stderr,
-                              "Specified vtpm backend domain `%s' does not exist!\n", p2 + 1);
-                        exit(1);
-                     }
+                     vtpm->backend_domain.name = strdup(p2 + 1);
                      got_backend = true;
                   } else if(!strcmp(p, "uuid")) {
                      if( libxl_uuid_from_string(&vtpm->uuid, p2 + 1) ) {
@@ -1190,11 +1185,8 @@ static void parse_config_data(const char *config_source,
                     free(nic->ifname);
                     nic->ifname = strdup(p2 + 1);
                 } else if (!strcmp(p, "backend")) {
-                    if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
-                        fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
-                        nic->backend_domid = 0;
-                    }
-                    if (nic->backend_domid != 0 && run_hotplug_scripts) {
+                    nic->backend_domain.name = strdup(p2 + 1);
+                    if (run_hotplug_scripts) {
                         fprintf(stderr, "ERROR: the vif 'backend=' option "
                                 "cannot be used in conjunction with "
                                 "run_hotplug_scripts, please set "
@@ -2431,8 +2423,6 @@ static void cd_insert(uint32_t domid, const char *virtdev, char *phys)
 
     parse_disk_config(&config, buf, &disk);
 
-    disk.backend_domid = 0;
-
     libxl_cdrom_insert(ctx, domid, &disk, NULL);
 
     libxl_device_disk_dispose(&disk);
@@ -5516,11 +5506,7 @@ int main_networkattach(int argc, char **argv)
         } else if (MATCH_OPTION("script", *argv, oparg)) {
             replace_string(&nic.script, oparg);
         } else if (MATCH_OPTION("backend", *argv, oparg)) {
-            if(libxl_name_to_domid(ctx, oparg, &val)) {
-                fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
-                val = 0;
-            }
-            nic.backend_domid = val;
+            replace_string(&nic.backend_domain.name, oparg);
         } else if (MATCH_OPTION("vifname", *argv, oparg)) {
             replace_string(&nic.ifname, oparg);
         } else if (MATCH_OPTION("model", *argv, oparg)) {
@@ -5623,8 +5609,8 @@ int main_networkdetach(int argc, char **argv)
 int main_blockattach(int argc, char **argv)
 {
     int opt;
-    uint32_t fe_domid, be_domid = 0;
-    libxl_device_disk disk = { 0 };
+    uint32_t fe_domid;
+    libxl_device_disk disk;
     XLU_Config *config = 0;
 
     if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
@@ -5639,8 +5625,6 @@ int main_blockattach(int argc, char **argv)
     parse_disk_config_multistring
         (&config, argc-optind, (const char* const*)argv + optind, &disk);
 
-    disk.backend_domid = be_domid;
-
     if (dryrun_only) {
         char *json = libxl_device_disk_to_json(ctx, &disk);
         printf("disk: %s\n", json);
@@ -5720,7 +5704,6 @@ int main_vtpmattach(int argc, char **argv)
     int opt;
     libxl_device_vtpm vtpm;
     char *oparg;
-    unsigned int val;
     uint32_t domid;
 
     if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
@@ -5740,12 +5723,7 @@ int main_vtpmattach(int argc, char **argv)
                 return 1;
             }
         } else if (MATCH_OPTION("backend", *argv, oparg)) {
-            if(domain_qualifier_to_domid(oparg, &val, 0)) {
-                fprintf(stderr,
-                      "Specified backend domain does not exist, defaulting to Dom0\n");
-                val = 0;
-            }
-            vtpm.backend_domid = val;
+            replace_string(&vtpm.backend_domain.name, oparg);
         } else {
             fprintf(stderr, "unrecognized argument `%s'\n", *argv);
             return 1;
diff --git a/tools/libxl/xl_sxp.c b/tools/libxl/xl_sxp.c
index a16a025..6cb8d40 100644
--- a/tools/libxl/xl_sxp.c
+++ b/tools/libxl/xl_sxp.c
@@ -163,7 +163,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
     for (i = 0; i < d_config->num_disks; i++) {
         printf("\t(device\n");
         printf("\t\t(tap\n");
-        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domid);
+        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domain.domid);
         printf("\t\t\t(frontend_domid %d)\n", domid);
         printf("\t\t\t(physpath %s)\n", d_config->disks[i].pdev_path);
         printf("\t\t\t(phystype %d)\n", d_config->disks[i].backend);
@@ -180,7 +180,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
         printf("\t\t(vif\n");
         if (d_config->nics[i].ifname)
             printf("\t\t\t(vifname %s)\n", d_config->nics[i].ifname);
-        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domid);
+        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domain.domid);
         printf("\t\t\t(frontend_domid %d)\n", domid);
         printf("\t\t\t(devid %d)\n", d_config->nics[i].devid);
         printf("\t\t\t(mtu %d)\n", d_config->nics[i].mtu);
@@ -210,7 +210,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
     for (i = 0; i < d_config->num_vfbs; i++) {
         printf("\t(device\n");
         printf("\t\t(vfb\n");
-        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domid);
+        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domain.domid);
         printf("\t\t\t(frontend_domid %d)\n", domid);
         printf("\t\t\t(devid %d)\n", d_config->vfbs[i].devid);
         printf("\t\t\t(vnc %s)\n",
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 23:13:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 23:13:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjHxd-0001Ol-CX; Thu, 13 Dec 2012 23:13:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1TjHxb-0001Og-GD
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 23:12:59 +0000
Received: from [85.158.137.99:8998] by server-5.bemta-3.messagelabs.com id
	11/82-15136-AF06AC05; Thu, 13 Dec 2012 23:12:58 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355440374!13818456!1
X-Originating-IP: [74.125.149.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1851 invoked from network); 13 Dec 2012 23:12:57 -0000
Received: from na3sys009aog129.obsmtp.com (HELO na3sys009aog129.obsmtp.com)
	(74.125.149.142)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 23:12:57 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob129.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUMpg9SvDAQC/0rGsQBrvVbkp5a7d1JmX@postini.com;
	Thu, 13 Dec 2012 15:12:56 PST
Received: from INHYMS173.ca.com (155.35.35.47) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Fri, 14 Dec 2012 04:42:51 +0530
Received: from INHYMS111A.ca.com ([169.254.3.239]) by INHYMS173.ca.com
	([155.35.35.47]) with mapi id 14.01.0355.002;
	Fri, 14 Dec 2012 04:42:51 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Matt Wilson <msw@amazon.com>
Thread-Topic: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
	properly when larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAgAd+rmAEvv3JQAAGcEEQAAliekAAQ4jfWAADtHkgABtHOWQ
Date: Thu, 13 Dec 2012 23:12:50 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {F26EA7D8-4580-41A6-BA42-13454C5F8A14}
x-cr-hashedpuzzle: ApN1 BTEk IdDo I/lL Kger KqQE MSoX QoNC SVnt YXSR aRH/
	avHM a4ob a9iY brSz huVR; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAbQBzAHcAQABhAG0AYQB6AG8AbgAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {F26EA7D8-4580-41A6-BA42-13454C5F8A14};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Thu,
	13 Dec 2012 23:05:49 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDACAAVgAyAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Matt Wilson [mailto:msw@amazon.com]
> Sent: Wednesday, December 12, 2012 3:05 AM
> To: Palagummi, Siva
> Cc: Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > > -----Original Message-----
> > > From: Matt Wilson [mailto:msw@amazon.com]
> > > Sent: Thursday, December 06, 2012 11:05 AM
> > > To: Palagummi, Siva
> > > Cc: Ian Campbell; xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring
> slots
> > > properly when larger MTU sizes are used
> > >
> > > On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> > > > Matt,
> > > [...]
> > > > You are right. The above chunk which is already part of the
> upstream
> > > > is unfortunately incorrect for some cases. We also ran into
> issues
> > > > in our environment around a week back and found this problem. The
> > > > count will be different based on head len because of the
> > > > optimization that start_new_rx_buffer is trying to do for large
> > > > buffers.  A hole of size "offset_in_page" will be left in first
> page
> > > > during copy if the remaining buffer size is >=PAGE_SIZE. This
> > > > subsequently affects the copy_off as well.
> > > >
> > > > So xen_netbk_count_skb_slots actually needs a fix to calculate
> the
> > > > count correctly based on head len. And also a fix to calculate
> the
> > > > copy_off properly to which the data from fragments gets copied.
> > >
> > > Can you explain more about the copy_off problem? I'm not seeing it.
> >
> > You can clearly see below that copy_off is input to
> > start_new_rx_buffer while copying frags.
> 
> Yes, but that's the right thing to do. copy_off should be set to the
> destination offset after copying the last byte of linear data, which
> means "skb_headlen(skb) % PAGE_SIZE" is correct.
> 

No, it is not correct, for two reasons. First, what if skb_headlen(skb) is exactly a multiple of PAGE_SIZE? copy_off would be set to zero, and if there is some data in the frags, zero will be passed in as the copy_off value and start_new_rx_buffer will return false. Second, there is the obvious case in the current code where a hole of size "offset_in_page(skb->data)" is left in the first buffer after the first pass, whenever the remaining data still to be copied would overflow the first buffer.

> > So if the buggy "count" calculation below is fixed based on
> > offset_in_page value then copy_off value also will change
> > accordingly.
> 
> This calculation is not incorrect. You should only need as many
> PAGE_SIZE buffers as you have linear data to fill.
> 

This calculation is incorrect and does not match the actual number of slots used, as things stand, unless some new change is made either in netbk_gop_skb or in start_new_rx_buffer.

> >         count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> >
> >         copy_off = skb_headlen(skb) % PAGE_SIZE;
> >
> >         if (skb_shinfo(skb)->gso_size)
> >                 count++;
> >
> >         for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> >                 unsigned long size = skb_frag_size(&skb_shinfo(skb)-
> >frags[i]);
> >                 unsigned long bytes;
> >                 while (size > 0) {
> >                         BUG_ON(copy_off > MAX_BUFFER_OFFSET);
> >
> >                         if (start_new_rx_buffer(copy_off, size, 0)) {
> >                                 count++;
> >                                 copy_off = 0;
> >                         }
> >
> >
> > So a correct calculation should be somewhat like below because of
> > the optimization in start_new_rx_buffer for larger sizes.
> 
> start_new_rx_buffer() should not be starting a new buffer after the
> first pass copying the linear data.
> 
> >       linear_len = skb_headlen(skb)
> > 	count = (linear_len <= PAGE_SIZE)
> >               ? 1
> >               :DIV_ROUND_UP(offset_in_page(skb->data)+linear_len,
> PAGE_SIZE));
> >
> >       copy_off = ((offset_in_page(skb->data)+linear_len) <
> 2*PAGE_SIZE)
> > 			? linear_len % PAGE_SIZE;
> > 			: (offset_in_page(skb->data)+linear_len) % PAGE_SIZE;
> 
> A change like this makes the code much more difficult to understand.
> 

:-) It would have been easier had we written the logic using a for loop, similar to how the counting is done for the data in frags. In fact I made a mistake in the calculations above :-( . The proper logic should probably look somewhat like the following.

linear_len = skb_headlen(skb);
count = (linear_len <= PAGE_SIZE)
	? 1
	: DIV_ROUND_UP(offset_in_page(skb->data) + linear_len, PAGE_SIZE);

copy_off = (linear_len <= PAGE_SIZE)
	? linear_len
	: (offset_in_page(skb->data) + linear_len - 1) % PAGE_SIZE + 1;


> > > > max_required_rx_slots also may require a fix to account the
> > > > additional slot that may be required in case mtu >= PAG_SIZE. For
> > > > worst case scenario atleast another +1.  One thing that is still
> > > > puzzling here is, max_required_rx_slots seems to be assuming that
> > > > linear length in head will never be greater than mtu size. But
> that
> > > > doesn't seem to be the case all the time. I wonder if it requires
> > > > some kind of fix there or special handling when count_skb_slots
> > > > exceeds max_required_rx_slots.
> > >
> > > We should only be using the number of pages required to copy the
> > > data. The fix shouldn't be to anticipate wasting ring space by
> > > increasing the return value of max_required_rx_slots().
> > >
> >
> > I do not think we are wasting any ring space. But just ensuring that
> > we have enough before proceeding ahead.
> 
> For some SKBs with large linear buffers, we certainly are wasting
> space. Go back and read the explanation in
>   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
> 
I probably did not put my point clearly enough. xen_netbk_rx_ring_full uses the max_required_rx_slots value, and it is called to decide whether a vif is schedulable or not. So when the MTU is >= PAGE_SIZE, the worst-case scenario requires an additional buffer, which the current calculation does not take into account.

Of course, if your new fix changes the code not to leave a hole in the first buffer, then this correction may not be required. But I am not the right person to judge the implications of the fix you are proposing. The current start_new_rx_buffer seems to be trying to keep the copies page aligned and also to reduce the number of copy operations.

For example, let us say the head length is, for whatever reason, 4*PAGE_SIZE and offset_in_page is 32.

As per the existing logic of start_new_rx_buffer, and with the fix I am proposing for count and copy_off, we will calculate and occupy 5 ring buffers and use 5 copy operations.

If we fix it the way you are proposing, modifying start_new_rx_buffer not to leave a hole in the first buffer, then it will occupy 4 ring buffers but will require 8 copy operations, as per the existing logic in netbk_gop_skb while copying the head!


Thanks
Siva


> > > [...]
> > >
> > > > > Why increment count by the /estimated/ count instead of the
> actual
> > > > > number of slots used? We have the number of slots in the line
> just
> > > > > above, in sco->meta_slots_used.
> > > > >
> > > >
> > > > Count actually refers to ring slots consumed rather than
> meta_slots
> > > > used.  Count can be different from meta_slots_used.
> > >
> > > Aah, indeed. This can end up being too pessimistic if you have lots
> of
> > > frags that require multiple copy operations. I still think that it
> > > would be better to calculate the actual number of ring slots
> consumed
> > > by netbk_gop_skb() to avoid other bugs like the one you originally
> > > fixed.
> > >
> >
> > counting done in count_skb_slots is what exactly it is. The fix done
> > above is to make it same so that no need to re calculate again.
> 
> Today, the counting done in count_skb_slots() *does not* match the
> number of buffer slots consumed by netbk_gop_skb().
> 
> Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 13 23:13:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Dec 2012 23:13:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjHxd-0001Ol-CX; Thu, 13 Dec 2012 23:13:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1TjHxb-0001Og-GD
	for xen-devel@lists.xen.org; Thu, 13 Dec 2012 23:12:59 +0000
Received: from [85.158.137.99:8998] by server-5.bemta-3.messagelabs.com id
	11/82-15136-AF06AC05; Thu, 13 Dec 2012 23:12:58 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355440374!13818456!1
X-Originating-IP: [74.125.149.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1851 invoked from network); 13 Dec 2012 23:12:57 -0000
Received: from na3sys009aog129.obsmtp.com (HELO na3sys009aog129.obsmtp.com)
	(74.125.149.142)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Dec 2012 23:12:57 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob129.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUMpg9SvDAQC/0rGsQBrvVbkp5a7d1JmX@postini.com;
	Thu, 13 Dec 2012 15:12:56 PST
Received: from INHYMS173.ca.com (155.35.35.47) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Fri, 14 Dec 2012 04:42:51 +0530
Received: from INHYMS111A.ca.com ([169.254.3.239]) by INHYMS173.ca.com
	([155.35.35.47]) with mapi id 14.01.0355.002;
	Fri, 14 Dec 2012 04:42:51 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Matt Wilson <msw@amazon.com>
Thread-Topic: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
	properly when larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAgAd+rmAEvv3JQAAGcEEQAAliekAAQ4jfWAADtHkgABtHOWQ
Date: Thu, 13 Dec 2012 23:12:50 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {F26EA7D8-4580-41A6-BA42-13454C5F8A14}
x-cr-hashedpuzzle: ApN1 BTEk IdDo I/lL Kger KqQE MSoX QoNC SVnt YXSR aRH/
	avHM a4ob a9iY brSz huVR; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAbQBzAHcAQABhAG0AYQB6AG8AbgAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {F26EA7D8-4580-41A6-BA42-13454C5F8A14};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Thu,
	13 Dec 2012 23:05:49 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDACAAVgAyAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Matt Wilson [mailto:msw@amazon.com]
> Sent: Wednesday, December 12, 2012 3:05 AM
> To: Palagummi, Siva
> Cc: Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > > -----Original Message-----
> > > From: Matt Wilson [mailto:msw@amazon.com]
> > > Sent: Thursday, December 06, 2012 11:05 AM
> > > To: Palagummi, Siva
> > > Cc: Ian Campbell; xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring
> slots
> > > properly when larger MTU sizes are used
> > >
> > > On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> > > > Matt,
> > > [...]
> > > > You are right. The above chunk which is already part of the
> upstream
> > > > is unfortunately incorrect for some cases. We also ran into
> issues
> > > > in our environment around a week back and found this problem. The
> > > > count will be different based on head len because of the
> > > > optimization that start_new_rx_buffer is trying to do for large
> > > > buffers.  A hole of size "offset_in_page" will be left in first
> page
> > > > during copy if the remaining buffer size is >=PAG_SIZE. This
> > > > subsequently affects the copy_off as well.
> > > >
> > > > So xen_netbk_count_skb_slots actually needs a fix to calculate
> the
> > > > count correctly based on head len. And also a fix to calculate
> the
> > > > copy_off properly to which the data from fragments gets copied.
> > >
> > > Can you explain more about the copy_off problem? I'm not seeing it.
> >
> > You can clearly see below that copy_off is input to
> > start_new_rx_buffer while copying frags.
> 
> Yes, but that's the right thing to do. copy_off should be set to the
> destination offset after copying the last byte of linear data, which
> means "skb_headlen(skb) % PAGE_SIZE" is correct.
> 

No, it is not correct, for two reasons. First, what if skb_headlen(skb) is exactly a multiple of PAGE_SIZE? copy_off would then be set to zero, and if there is data in the frags, zero will be passed in as the copy_off value and start_new_rx_buffer will return false. Second, there is the obvious case in the current code where a hole of size offset_in_page(skb->data) is left in the first buffer after the first pass, whenever the remaining data to be copied would overflow the first buffer.

> > So if the buggy "count" calculation below is fixed based on
> > offset_in_page value then copy_off value also will change
> > accordingly.
> 
> This calculation is not incorrect. You should only need as many
> PAGE_SIZE buffers as you have linear data to fill.
> 

This calculation is incorrect and does not match the actual number of slots used as the code stands now, unless some new change is made in either netbk_gop_skb or start_new_rx_buffer.

> >         count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> >
> >         copy_off = skb_headlen(skb) % PAGE_SIZE;
> >
> >         if (skb_shinfo(skb)->gso_size)
> >                 count++;
> >
> >         for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> >                 unsigned long size = skb_frag_size(&skb_shinfo(skb)-
> >frags[i]);
> >                 unsigned long bytes;
> >                 while (size > 0) {
> >                         BUG_ON(copy_off > MAX_BUFFER_OFFSET);
> >
> >                         if (start_new_rx_buffer(copy_off, size, 0)) {
> >                                 count++;
> >                                 copy_off = 0;
> >                         }
> >
> >
> > So a correct calculation should be somewhat like below because of
> > the optimization in start_new_rx_buffer for larger sizes.
> 
> start_new_rx_buffer() should not be starting a new buffer after the
> first pass copying the linear data.
> 
> >       linear_len = skb_headlen(skb)
> > 	count = (linear_len <= PAGE_SIZE)
> >               ? 1
> >               :DIV_ROUND_UP(offset_in_page(skb->data)+linear_len,
> PAGE_SIZE));
> >
> >       copy_off = ((offset_in_page(skb->data)+linear_len) <
> 2*PAGE_SIZE)
> > 			? linear_len % PAGE_SIZE;
> > 			: (offset_in_page(skb->data)+linear_len) % PAGE_SIZE;
> 
> A change like this makes the code much more difficult to understand.
> 

:-) It would have been easier had we written the logic using a for loop, similar to how the counting is done for the data in frags. In fact I made a mistake in the calculations above :-( . The proper logic should probably look somewhat like below.

linear_len = skb_headlen(skb);

count = (linear_len <= PAGE_SIZE)
	? 1
	: DIV_ROUND_UP(offset_in_page(skb->data) + linear_len, PAGE_SIZE);

copy_off = (linear_len <= PAGE_SIZE)
	? linear_len
	: (offset_in_page(skb->data) + linear_len - 1) % PAGE_SIZE + 1;

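The two formulas above can be checked in isolation. Below is a minimal standalone sketch (my own illustration, not the actual netback code; PAGE_SIZE, the address-as-integer parameter, and the helper names are assumptions for the example):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Round-up division, as the kernel's DIV_ROUND_UP macro does. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Byte offset of an address within its page (cf. kernel offset_in_page). */
static unsigned long offset_in_page(unsigned long addr)
{
	return addr & (PAGE_SIZE - 1);
}

/* Proposed ring-slot count for the linear (head) area of an skb,
 * accounting for the hole left at offset_in_page in the first buffer
 * when the head spans more than one page. */
static unsigned long count_head_slots(unsigned long data_addr,
				      unsigned long linear_len)
{
	if (linear_len <= PAGE_SIZE)
		return 1;
	return DIV_ROUND_UP(offset_in_page(data_addr) + linear_len, PAGE_SIZE);
}

/* Proposed destination offset just past the last byte of linear data.
 * The "- 1 ... + 1" form yields PAGE_SIZE, not 0, when the data ends
 * exactly on a page boundary. */
static unsigned long head_copy_off(unsigned long data_addr,
				   unsigned long linear_len)
{
	if (linear_len <= PAGE_SIZE)
		return linear_len;
	return (offset_in_page(data_addr) + linear_len - 1) % PAGE_SIZE + 1;
}
```

Note that for a head that ends exactly on a page boundary, head_copy_off returns PAGE_SIZE rather than 0, which is the first problem described above with the plain "% PAGE_SIZE" calculation.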

> > > > max_required_rx_slots also may require a fix to account the
> > > > additional slot that may be required in case mtu >= PAGE_SIZE. For
> > > > a worst case scenario, at least another +1.  One thing that is still
> > > > puzzling here is, max_required_rx_slots seems to be assuming that
> > > > linear length in head will never be greater than mtu size. But
> that
> > > > doesn't seem to be the case all the time. I wonder if it requires
> > > > some kind of fix there or special handling when count_skb_slots
> > > > exceeds max_required_rx_slots.
> > >
> > > We should only be using the number of pages required to copy the
> > > data. The fix shouldn't be to anticipate wasting ring space by
> > > increasing the return value of max_required_rx_slots().
> > >
> >
> > I do not think we are wasting any ring space. But just ensuring that
> > we have enough before proceeding ahead.
> 
> For some SKBs with large linear buffers, we certainly are wasting
> space. Go back and read the explanation in
>   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
> 
I probably did not put my point across clearly. xen_netbk_rx_ring_full uses the max_required_rx_slots value, and it is called to decide whether a vif is schedulable or not. So when the MTU value is >= PAGE_SIZE, the worst-case scenario requires an additional buffer, which the current calculation does not take into account.

Of course, if your new fix changes the code not to leave a hole in the first buffer, then this correction may not be required. But I am not the right person to judge the implications of the fix you are proposing. The current start_new_rx_buffer seems to be trying to make the copies page aligned and also to reduce the number of copy operations.

For example, let us say skb_headlen is, for whatever reason, 4*PAGE_SIZE and offset_in_page is 32.

As per the existing logic of start_new_rx_buffer, and with the fix I am proposing for count and copy_off, we will calculate and occupy 5 ring buffers and use 5 copy operations.

If we fix it the way you are proposing, modifying start_new_rx_buffer not to leave a hole in the first buffer, then it will occupy 4 ring buffers but will require 8 copy operations, as per the existing logic in netbk_gop_skb while copying the head!
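This 5-buffers/5-copies versus 4-buffers/8-copies trade-off can be reproduced with a small simulation (my own sketch, not the actual netback code; PAGE_SIZE and the function name are assumptions). Each grant-copy operation can cross neither a source page boundary nor a destination buffer boundary, so packing the destination tightly misaligns source and destination and splits most chunks in two:

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Count the copy operations needed to move linear data of length len,
 * starting at src_off within its first page. keep_hole selects whether
 * the first destination buffer preserves the source offset (current
 * start_new_rx_buffer behaviour) or is packed from offset 0. */
static unsigned count_copy_ops(unsigned long src_off, unsigned long len,
			       int keep_hole)
{
	unsigned long dst_off = keep_hole ? src_off : 0;
	unsigned ops = 0;

	while (len > 0) {
		/* Room left before the next source page boundary and the
		 * next destination buffer boundary. */
		unsigned long src_space = PAGE_SIZE - (src_off % PAGE_SIZE);
		unsigned long dst_space = PAGE_SIZE - (dst_off % PAGE_SIZE);
		unsigned long chunk = len;

		if (chunk > src_space)
			chunk = src_space;
		if (chunk > dst_space)
			chunk = dst_space;

		src_off += chunk;
		dst_off += chunk;
		len -= chunk;
		ops++;	/* one grant-copy operation per chunk */
	}
	return ops;
}
```

With src_off = 32 and len = 4*PAGE_SIZE, keeping the hole aligns source and destination and needs 5 operations into 5 buffers; packing from offset 0 saves one buffer but needs 8 operations.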


Thanks
Siva


> > > [...]
> > >
> > > > > Why increment count by the /estimated/ count instead of the
> actual
> > > > > number of slots used? We have the number of slots in the line
> just
> > > > > above, in sco->meta_slots_used.
> > > > >
> > > >
> > > > Count actually refers to ring slots consumed rather than
> meta_slots
> > > > used.  Count can be different from meta_slots_used.
> > >
> > > Aah, indeed. This can end up being too pessimistic if you have lots
> of
> > > frags that require multiple copy operations. I still think that it
> > > would be better to calculate the actual number of ring slots
> consumed
> > > by netbk_gop_skb() to avoid other bugs like the one you originally
> > > fixed.
> > >
> >
> > The counting done in count_skb_slots is exactly that. The fix made
> > above is to make them the same, so there is no need to recalculate.
> 
> Today, the counting done in count_skb_slots() *does not* match the
> number of buffer slots consumed by netbk_gop_skb().
> 
> Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 00:48:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 00:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjJRb-0002yw-O2; Fri, 14 Dec 2012 00:48:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjJRZ-0002yr-Pq
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 00:48:02 +0000
Received: from [85.158.138.51:36672] by server-1.bemta-3.messagelabs.com id
	CF/63-08906-0477AC05; Fri, 14 Dec 2012 00:48:00 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355446080!28527956!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29895 invoked from network); 14 Dec 2012 00:48:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 00:48:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,276,1355097600"; 
   d="scan'208";a="139491"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 00:47:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 00:47:59 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjJRX-00021D-Gt;
	Fri, 14 Dec 2012 00:47:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjJRW-0003tQ-Ue;
	Fri, 14 Dec 2012 00:47:59 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14683-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 00:47:58 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14683: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14683 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14683/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14674
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14674

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  bfdbf9747fc4
baseline version:
 xen                  02140822d833

------------------------------------------------------------
People who touched revisions under test:
  Charles Arnold <carnold@suse.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Ronny Hegewald <Ronny.Hegewald@online.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=bfdbf9747fc4
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing bfdbf9747fc4
+ branch=xen-4.2-testing
+ revision=bfdbf9747fc4
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r bfdbf9747fc4 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Dec 14 02:40:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 02:40:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjLBe-0008NC-3U; Fri, 14 Dec 2012 02:39:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TjLBc-0008N7-Nz
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 02:39:41 +0000
Received: from [85.158.139.211:47490] by server-7.bemta-5.messagelabs.com id
	CB/79-08009-B619AC05; Fri, 14 Dec 2012 02:39:39 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1355452777!20316279!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1627 invoked from network); 14 Dec 2012 02:39:38 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 02:39:38 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so5137285iej.32
	for <xen-devel@lists.xen.org>; Thu, 13 Dec 2012 18:39:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=RB8y5as+6mtQZjNJbuIQ2KSxdlMZvgxnLFHJYTiuhMw=;
	b=c73YjAUMkwUTGXqRIpqZ6gkTBKTrVTHYCW8if7z58PNsyNcmS3BckLvDMN0rrzF9YW
	ayqk/5nv72gBkToLiRWRUJYPBd9odoMn6E1d7nzON/km0UZ+AM87h9/Y4W2bfZ5Xqe7N
	1lEgtyKqBD94YKcKI9Jy9S44NUA+8aqnSa8MQlV96bLUCsuOmPbg454jTTkpH4+XzFvf
	xnKdZf+biJRl/I4T/HXw3fxrQyn4vxtk45/4AKb/Mk/IgMapNv1iEofESwTX2M4dEwm1
	cIVns9sifhaO7QzJu7gsBBNLlUbUNUjKibRs6xtA0tbOWFZTasn6lMa8/U1D5g94BtJ+
	z59A==
MIME-Version: 1.0
Received: by 10.50.151.241 with SMTP id ut17mr344914igb.11.1355452777310; Thu,
	13 Dec 2012 18:39:37 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 13 Dec 2012 18:39:37 -0800 (PST)
In-Reply-To: <CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
	<CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
	<CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
Date: Fri, 14 Dec 2012 10:39:37 +0800
X-Google-Sender-Auth: U7gqMpqYns0ABM22lS5A2jH75Ks
Message-ID: <CAKhsbWaagzeQL7OGW7YJJKDYxqz=YrV0jKgbdL9nB9qF3+nk=w@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay, Allen M" <allen.m.kay@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 14, 2012 at 1:39 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> On Thu, Dec 13, 2012 at 10:33 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>> On Thu, Dec 13, 2012 at 8:43 PM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>>>
>>> Does this patch work for you?
>>>
>>
>> It appears that you changed the exposed 1f.0 bridge into an ISA bridge.
>> The driver should be able to recognize it -- as long as it is not
>> hidden by the PIIX3 bridge.
>> I wonder if there is a way to entirely override that one...
>> But anyway I'll try it out first.
>>
>
> Stefano, your patch does not produce an ISA bridge as expected.
> The device as viewed from the domU is like this:
> 00:1f.0 Non-VGA unclassified device [0000]: Intel Corporation H77
> Express Chipset LPC Controller [8086:1e4a] (rev 04)
>
> I'm on the latest 4.2-testing branch, just synced and built with your patch.
>

Some more info from lspci:

Intel ISA bridge as seen from dom0:
00:1f.0 ISA bridge: Intel Corporation H77 Express Chipset LPC
Controller (rev 04)
00: 86 80 4a 1e 07 00 10 02 04 00 01 06 00 00 80 00
10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 49 18 4a 1e
30: 00 00 00 00 e0 00 00 00 00 00 00 00 00 00 00 00

After being exposed to domU:
00:1f.0 Non-VGA unclassified device: Intel Corporation H77 Express
Chipset LPC Controller (rev 04)
00: 86 80 4a 1e 07 00 a0 00 04 00 01 06 00 10 81 00
10: 00 50 42 f1 10 50 42 f1 00 01 01 f1 30 50 42 f1
20: 40 50 42 f1 50 50 42 f1 00 00 00 00 f4 1a 00 11
30: 60 50 42 f1 00 00 00 00 00 00 00 00 00 00 00 00

The PIIX3 ISA bridge as emulated by qemu:

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00: 86 80 00 70 07 00 00 02 00 00 01 06 00 00 80 00
10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 f4 1a 00 11
30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

I have no idea whether this is due to the header type being set to 0x81
(PCI-PCI bridge, multi-function), or whether some other fields need to be
cleared...


> Thanks,
> Timothy
>
>>>
>>>
>>> diff --git a/hw/pci.c b/hw/pci.c
>>> index f051de1..d371bd7 100644
>>> --- a/hw/pci.c
>>> +++ b/hw/pci.c
>>> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>>>      }
>>>  }
>>>
>>> -typedef struct {
>>> -    PCIDevice dev;
>>> -    PCIBus *bus;
>>> -} PCIBridge;
>>> -
>>>  void pci_bridge_write_config(PCIDevice *d,
>>>                               uint32_t address, uint32_t val, int len)
>>>  {
>>> diff --git a/hw/pci.h b/hw/pci.h
>>> index edc58b6..c2acab9 100644
>>> --- a/hw/pci.h
>>> +++ b/hw/pci.h
>>> @@ -222,6 +222,11 @@ struct PCIDevice {
>>>      int irq_state[4];
>>>  };
>>>
>>> +typedef struct {
>>> +    PCIDevice dev;
>>> +    PCIBus *bus;
>>> +} PCIBridge;
>>> +
>>>  extern char direct_pci_str[];
>>>  extern int direct_pci_msitranslate;
>>>  extern int direct_pci_power_mgmt;
>>> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
>>> index c6f8869..d8e789f 100644
>>> --- a/hw/pt-graphics.c
>>> +++ b/hw/pt-graphics.c
>>> @@ -3,6 +3,7 @@
>>>   */
>>>
>>>  #include "pass-through.h"
>>> +#include "pci.h"
>>>  #include "pci/header.h"
>>>  #include "pci/pci.h"
>>>
>>> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>>>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>>>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
>>>
>>> -    if ( vid == PCI_VENDOR_ID_INTEL )
>>> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
>>> -                        pch_map_irq, "intel_bridge_1f");
>>> +    if (vid == PCI_VENDOR_ID_INTEL) {
>>> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
>>> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
>>> +
>>> +        pci_config_set_vendor_id(s->dev.config, vid);
>>> +        pci_config_set_device_id(s->dev.config, did);
>>> +
>>> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
>>> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
>>> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
>>> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
>>> +        s->dev.config[PCI_REVISION] = rid;
>>> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
>>> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
>>> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
>>> +        s->dev.config[PCI_HEADER_TYPE] = 0x81;
>>> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
>>> +
>>> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
>>> +    }
>>>  }
>>>
>>>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 03:26:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 03:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjLun-0000bk-6N; Fri, 14 Dec 2012 03:26:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjLul-0000bf-N8
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 03:26:20 +0000
Received: from [85.158.138.51:28810] by server-9.bemta-3.messagelabs.com id
	49/17-11948-A5C9AC05; Fri, 14 Dec 2012 03:26:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355455577!22479244!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4583 invoked from network); 14 Dec 2012 03:26:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 03:26:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,277,1355097600"; 
   d="scan'208";a="140823"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 03:26:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 03:26:17 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjLuj-000320-EO;
	Fri, 14 Dec 2012 03:26:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjLuj-0002QX-4K;
	Fri, 14 Dec 2012 03:26:17 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14684-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 03:26:17 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14684: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14684 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14684/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   10 guest-saverestore         fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 03:26:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 03:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjLun-0000bk-6N; Fri, 14 Dec 2012 03:26:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjLul-0000bf-N8
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 03:26:20 +0000
Received: from [85.158.138.51:28810] by server-9.bemta-3.messagelabs.com id
	49/17-11948-A5C9AC05; Fri, 14 Dec 2012 03:26:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355455577!22479244!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4583 invoked from network); 14 Dec 2012 03:26:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 03:26:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,277,1355097600"; 
   d="scan'208";a="140823"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 03:26:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 03:26:17 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjLuj-000320-EO;
	Fri, 14 Dec 2012 03:26:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjLuj-0002QX-4K;
	Fri, 14 Dec 2012 03:26:17 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14684-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 03:26:17 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14684: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14684 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14684/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   10 guest-saverestore         fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 06:18:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 06:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjObA-0002Qy-E6; Fri, 14 Dec 2012 06:18:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjOb8-0002Qr-GV
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 06:18:14 +0000
Received: from [85.158.137.99:14248] by server-15.bemta-3.messagelabs.com id
	40/DC-07921-5A4CAC05; Fri, 14 Dec 2012 06:18:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355465892!17601591!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24569 invoked from network); 14 Dec 2012 06:18:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 06:18:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,279,1355097600"; 
   d="scan'208";a="142492"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 06:18:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 06:18:11 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjOb5-0003vb-SK;
	Fri, 14 Dec 2012 06:18:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjOb5-0002Qa-NB;
	Fri, 14 Dec 2012 06:18:11 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14685-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 06:18:11 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14685: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1619901603857064012=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1619901603857064012==
Content-Type: text/plain

flight 14685 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14685/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-win-vcpus1 12 guest-localmigrate/x10   fail REGR. vs. 14678

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14678
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14678

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  74d4a6cc5392

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Robert Phillips <robert.phillips@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 377 lines long.)


--===============1619901603857064012==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1619901603857064012==--

    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 377 lines long.)


--===============1619901603857064012==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1619901603857064012==--

From xen-devel-bounces@lists.xen.org Fri Dec 14 09:09:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 09:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjRGC-0004dG-Az; Fri, 14 Dec 2012 09:08:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joshsystem@gmail.com>) id 1TjRGA-0004dB-W3
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 09:08:47 +0000
Received: from [85.158.139.83:20971] by server-10.bemta-5.messagelabs.com id
	6B/C2-13383-E9CEAC05; Fri, 14 Dec 2012 09:08:46 +0000
X-Env-Sender: joshsystem@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355476123!18471568!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=1.7 required=7.0 tests=HTML_MESSAGE,
	HTML_OBFUSCATE_05_10,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24627 invoked from network); 14 Dec 2012 09:08:43 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 09:08:43 -0000
Received: by mail-ea0-f171.google.com with SMTP id n10so1168052eaa.30
	for <xen-devel@lists.xensource.com>;
	Fri, 14 Dec 2012 01:08:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=MAgurMpo1w4B9G9q9wi7yXGINb16TmpUQ1BI94LDRug=;
	b=vakomdhcxIaczXHAaXMNmCBk3FwDt0mTnMML+9R/QETLGxk1XYJA1qZCrQtuZjaN1A
	itdZ9VlA/nqXZktdfl/nk/+tSf3a3JjTxw8pUW4icjZI1GsjlZENC0k/BgZopnBWCvce
	B8WpfvlqZKceFDnjEjyBTHwhalH6mJWgPjgRyW62cugqO6cXzdA1tvZnAHaipl6BkV/h
	ZuB2G+YZvuV6BdZn5UctONlQOZpZmJ9amWaHBpzyhPdoI0Srbt+/XjEbL3z5MhwwLo0S
	Y4Gqj7drrYXx3dNai1zicxI3kJNoAg8U6NVPXdM8C//VGF5ZZ1j3Dzz6WSQyBHYUAhEF
	c+/w==
MIME-Version: 1.0
Received: by 10.14.218.69 with SMTP id j45mr13324987eep.35.1355476123215; Fri,
	14 Dec 2012 01:08:43 -0800 (PST)
Received: by 10.14.132.140 with HTTP; Fri, 14 Dec 2012 01:08:43 -0800 (PST)
Date: Fri, 14 Dec 2012 17:08:43 +0800
Message-ID: <CAOYkbagjXEun=QaNmrKpd69Ma0BqWpvbb_oJGzBg8V60FRkJPQ@mail.gmail.com>
From: Josh Zhao <joshsystem@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] About ARM branch questions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4488942114803382280=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4488942114803382280==
Content-Type: multipart/alternative; boundary=047d7b6042fad3682204d0cc601d

--047d7b6042fad3682204d0cc601d
Content-Type: text/plain; charset=ISO-8859-1

Hi, I have some new questions about Xen on ARM.
 (1)   Following the wiki at
http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions, are
these the latest repos?

    Xen:  http://xenbits.xen.org/hg/xen-unstable.hg
    Linux Dom0 and DomU: the arm-privcmd-for-3.8 branch of
git://xenbits.xen.org/people/ianc/linux.git
(gitweb: http://xenbits.xen.org/gitweb/?p=people/ianc/linux.git;a=shortlog;h=refs/heads/arm-privcmd-for-3.8)


(2)  Are the ARM PV I/O interfaces the same as on x86?


Thanks

josh zhao

--047d7b6042fad3682204d0cc601d--


--===============4488942114803382280==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4488942114803382280==--


From xen-devel-bounces@lists.xen.org Fri Dec 14 10:08:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 10:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjSBk-0005Jk-6m; Fri, 14 Dec 2012 10:08:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yunhong.jiang@intel.com>) id 1TjSBh-0005Jf-S8
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 10:08:14 +0000
Received: from [85.158.139.83:43428] by server-7.bemta-5.messagelabs.com id
	86/A7-08009-C8AFAC05; Fri, 14 Dec 2012 10:08:12 +0000
X-Env-Sender: yunhong.jiang@intel.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355479669!29873919!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzY5NzQ1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12425 invoked from network); 14 Dec 2012 10:07:49 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-5.tower-182.messagelabs.com with SMTP;
	14 Dec 2012 10:07:49 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 14 Dec 2012 02:06:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,279,1355126400"; d="scan'208";a="257359414"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga002.jf.intel.com with ESMTP; 14 Dec 2012 02:07:47 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 14 Dec 2012 02:07:47 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 14 Dec 2012 02:07:47 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Fri, 14 Dec 2012 18:07:45 +0800
From: "Jiang, Yunhong" <yunhong.jiang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: frame table setup for memory hotplug
Thread-Index: AQHN1JYxVGteCELzMEuvG/T8mYXdW5gSCrDg//9/1wCABpFsQA==
Date: Fri, 14 Dec 2012 10:07:44 +0000
Message-ID: <DDCAE26804250545B9934A2056554AA037B6B0@SHSMSX101.ccr.corp.intel.com>
References: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
	<DDCAE26804250545B9934A2056554AA0371A49@SHSMSX101.ccr.corp.intel.com>
	<50C5F60602000078000AF672@nat28.tlf.novell.com>
In-Reply-To: <50C5F60602000078000AF672@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] frame table setup for memory hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

My colleagues helped me test it, and it seems to be working.

A minor question: with your patch, will we create frame table entries for the whole potential memory range? That seems a bit inefficient IMHO, because there will be a large address hole between the pre-populated memory and the newly added memory.

Thanks
--jyh

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, December 10, 2012 9:48 PM
> To: Jiang, Yunhong
> Cc: xen-devel
> Subject: RE: frame table setup for memory hotplug
> 
> >>> On 10.12.12 at 14:32, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
> > IIRC, the reason we do this is because in memory hotplug situation, there
> > will be a very big hole between the address of the memory populated before
> > hot-plug and the memory populated by hot-added memory. (i.e. the added
> memory
> > started at very high-end address). So instead of setup the frame table for the
> > whole address space, we expand the frame table dynamically after hotplug.
> >
> > We have the memory hotplug environment, so if you have any patch, I'm glad
> > to test it, or have my colleagues help to test it.
> 
> I meanwhile decided to keep the code logically the same, but
> testing the patch at
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg00793.html
> (or the staging/normal trees once it went in/got pushed) would still be
> much appreciated.
> 
> Thanks, Jan
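[Editorial note: the size concern above can be quantified with a back-of-the-envelope sketch. The numbers below are illustrative assumptions only — a 32-byte frame-table entry and a 1 TiB hole — not Xen's actual struct page_info layout:]

```shell
#!/bin/sh
# Back-of-the-envelope cost of frame-table coverage for an address hole.
# Assumed sizes, for illustration only: 32-byte frame-table entry,
# 4 KiB pages, and a 1 TiB gap between boot memory and hot-added memory.
entry_size=32
page_size=4096
hole_bytes=$((1 << 40))
pages=$((hole_bytes / page_size))
table_bytes=$((pages * entry_size))
echo "frame table covering the hole: $((table_bytes >> 20)) MiB"
```

With these assumed numbers, blanket coverage of the hole alone would cost 8 GiB of frame table, which is why expanding the table dynamically after hotplug pays off.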


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 10:49:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 10:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjSpR-0005oe-Sd; Fri, 14 Dec 2012 10:49:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1TjSpQ-0005oR-B6; Fri, 14 Dec 2012 10:49:16 +0000
Received: from [85.158.139.83:38549] by server-8.bemta-5.messagelabs.com id
	11/D0-15003-B240BC05; Fri, 14 Dec 2012 10:49:15 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355482154!29870677!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18889 invoked from network); 14 Dec 2012 10:49:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 10:49:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,279,1355097600"; 
   d="scan'208";a="148737"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 10:49:14 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 14 Dec 2012 10:49:14 +0000
Message-ID: <50CB0429.9070300@citrix.com>
Date: Fri, 14 Dec 2012 11:49:13 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com>	<50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
In-Reply-To: <20682.7487.598565.547405@mariner.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/12 19:23, Ian Jackson wrote:
> Roger Pau Monne writes ("Handling iSCSI block devices (Was: Driver domains and device handling)"):
>> [stuff]
> 
> Most of this sounds sensible.
> 
>> So the diskspec line would look like:
>>
>> portal=127.0.0.0:3260, authmethod=CHAP, user=foo, password=bar,
>> backendtype=phy, format=iscsi, vdev=xvda,
>> target=iqn.2012-12.com.example:lun1
> 
> Are we suggesting that every backend type should be able to define its
> own parameters ?  I was imagining that these options would all go into
> "target" - and if "target" is last it can contain commas and =s.

According to RFC 3720 and RFC 1035, IQNs should follow this format:

iqn.yyyy-mm.com.example:optional.string

The problem is that Open-iSCSI seems to accept almost anything; for
example, iqn.yyyy-mm,com@example:... is a valid IQN from Open-iSCSI's
point of view. The only character that Open-iSCSI doesn't seem to accept
in IQNs is "/", but I don't really like using that as a field separator
inside target. So I propose the following encoding for target:

"<iqn>,<portal>"
"<iqn>,<portal>,<auth_method>,<user>,<password>"

If a user/password is given, we should be careful about what we write to
the "params" xenstore backend field (because the DomU can read it). Would
you agree with the syntax described above?
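[Editorial note: as a sketch of how that encoding could be split — a hypothetical helper, not actual libxl code — assuming a well-formed RFC 3720 IQN with no embedded commas:]

```shell
#!/bin/sh
# Hypothetical parser for the proposed target encodings:
#   "<iqn>,<portal>"
#   "<iqn>,<portal>,<auth_method>,<user>,<password>"
# Sets $iqn, $portal, $auth_method, $user, $password in the caller.
parse_target() {
    IFS=, read -r iqn portal auth_method user password <<EOF
$1
EOF
    if [ -z "$iqn" ] || [ -z "$portal" ]; then
        echo "target must be at least <iqn>,<portal>" >&2
        return 1
    fi
}

parse_target "iqn.2012-12.com.example:lun1,127.0.0.1:3260,CHAP,foo,bar"
echo "portal=$portal user=$user"
```

Note that after parsing, the password field is exactly the piece that must be kept out of the guest-readable "params" key.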

>> Note that I've used the format parameter here to specify "iscsi", which
>> will be a new format, to distinguish this from a block device that also
>> uses the "phy" backend type. All this new parameters should also be
>> added to the libxl_device_disk struct.
> 
> I don't think this is right.  I think the right answer is
> "script=iscsi".  The format might be qcow or something.

Yes, it might be better to specify the script.

>> Since this device type uses two hotplug scripts we should also add a new
>> generic parameter to specify a "preparatory" hotplug script, so other
>> custom devices can also make use of this, something like "preparescript"?
> 
> Clearly when we have this two-phase setup we need to have more
> scripts, or the existing script with different arguments.
> 
> I think it should be controlled by the same argument.  So maybe
> script=iscsi causes libxl to check for a dropping in the script file
> saying "yes do the prepare thing" or maybe it runs
> /etc/xen/scripts/block-iscsi--prepare or something.

I like the approach of calling the same hotplug script twice: the first
time with something like `/etc/xen/scripts/block-iscsi prepare`, and the
second time with `/etc/xen/scripts/block-iscsi add`.
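[Editorial note: a minimal outline of such a two-phase script might look like the sketch below. It is hypothetical; the iscsiadm invocations are placeholders, and a real block-iscsi script would read the target from the xenstore backend directory:]

```shell
#!/bin/sh
# Hypothetical two-phase hotplug script skeleton (block-iscsi sketch).
# "prepare" would log in to the iSCSI target so the block device
# appears; "add" would then attach the resulting device to the backend.
block_iscsi() {
    case "$1" in
        prepare)
            echo "prepare: would run iscsiadm --mode node --login"
            ;;
        add)
            echo "add: would attach the SCSI device to the vbd backend"
            ;;
        remove)
            echo "remove: would run iscsiadm --mode node --logout"
            ;;
        *)
            echo "usage: block-iscsi {prepare|add|remove}" >&2
            return 1
            ;;
    esac
}

block_iscsi prepare
block_iscsi add
```

Keeping both phases behind one dispatch keeps the "same script, different argument" model discussed above.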


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> "target" - and if "target" is last it can contain commas and =s.

According to RFC 3720 and RFC 1035, IQNs should follow this format:

iqn.yyyy-mm.com.example:optional.string
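A stricter check than Open-iSCSI's could be sketched along these lines (the regex is an assumption based on the shape above, not something from this thread or from the RFCs verbatim):

```shell
# Sketch only: validate the iqn.yyyy-mm.reversed-domain[:optional.string]
# shape with a POSIX ERE. Real IQN rules (RFC 3720 / RFC 1035) are looser
# on the domain part than this approximation.
is_strict_iqn() {
    printf '%s\n' "$1" | grep -Eq \
        '^iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9][a-z0-9.-]*(:.+)?$'
}

is_strict_iqn "iqn.2012-12.com.example:lun1" && echo "strict IQN"
```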

The problem is that Open-iSCSI seems to accept almost anything; for
example, iqn.yyyy-mm,com@example:... is a valid IQN from Open-iSCSI's
point of view. The only character that Open-iSCSI doesn't seem to accept
in IQNs is "/", but I don't really like using that as a field separator
inside target. So I propose the following encoding for target:

"<iqn>,<portal>"
"<iqn>,<portal>,<auth_method>,<user>,<password>"
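Inside a hotplug script, splitting that encoding could look roughly like this (field names are my assumptions, and it relies on the IQN itself containing no commas, which, as noted, Open-iSCSI does not actually guarantee):

```shell
# Hypothetical sketch of parsing the proposed target encoding:
#   "<iqn>,<portal>" or "<iqn>,<portal>,<auth_method>,<user>,<password>"
parse_target() {
    old_ifs=$IFS
    IFS=','
    set -- $1              # split the single argument on commas
    IFS=$old_ifs
    case $# in
        2) iqn=$1 portal=$2 ;;
        5) iqn=$1 portal=$2 authmethod=$3 user=$4 password=$5 ;;
        *) echo "target must have 2 or 5 comma-separated fields" >&2
           return 1 ;;
    esac
}
```

The CHAP credentials would then be kept out of the guest-readable "params" key and written somewhere dom0-only instead.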

If a user/password is given, we should be careful about what we write to
the "params" xenstore backend field (because the DomU can read it). Would
you agree with the syntax described above?

>> Note that I've used the format parameter here to specify "iscsi", which
>> will be a new format, to distinguish this from a block device that also
>> uses the "phy" backend type. All this new parameters should also be
>> added to the libxl_device_disk struct.
> 
> I don't think this is right.  I think the right answer is
> "script=iscsi".  The format might be qcow or something.

Yes, it might be better to specify the script.

>> Since this device type uses two hotplug scripts we should also add a new
>> generic parameter to specify a "preparatory" hotplug script, so other
>> custom devices can also make use of this, something like "preparescript"?
> 
> Clearly when we have this two-phase setup we need to have more
> scripts, or the existing script with different arguments.
> 
> I think it should be controlled by the same argument.  So maybe
> script=iscsi causes libxl to check for a dropping in the script file
> saying "yes do the prepare thing" or maybe it runs
> /etc/xen/scripts/block-iscsi--prepare or something.

I like the approach of calling the same hotplug script twice: the first
time with something like `/etc/xen/scripts/block-iscsi prepare`, and the
second time with `/etc/xen/scripts/block-iscsi add`.
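As a rough sketch of that two-phase dispatch (the "prepare"/"add" actions and the messages are assumptions drawn from this discussion, not the real block-iscsi script; a real script would drive iscsiadm and write the resulting device into the backend):

```shell
# Hypothetical two-phase hotplug dispatch for /etc/xen/scripts/block-iscsi
block_iscsi() {
    case $1 in
        prepare)
            # First invocation: make the block device appear, e.g. by
            # logging in to the iSCSI target before backend setup.
            echo "prepared"
            ;;
        add)
            # Second invocation: the normal hotplug add, wiring the
            # now-present device to the backend.
            echo "added"
            ;;
        *)
            echo "unknown command: $1" >&2
            return 1
            ;;
    esac
}
```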


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 11:24:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 11:24:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjTNL-0006Nd-Ta; Fri, 14 Dec 2012 11:24:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TjTNK-0006NY-GV
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 11:24:18 +0000
Received: from [85.158.143.99:36069] by server-2.bemta-4.messagelabs.com id
	D4/BC-30861-16C0BC05; Fri, 14 Dec 2012 11:24:17 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1355484242!28455243!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17591 invoked from network); 14 Dec 2012 11:24:04 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 11:24:04 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so5625057iej.32
	for <xen-devel@lists.xen.org>; Fri, 14 Dec 2012 03:24:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=YcDkc/HgoIPQjOFBlNONcEPg/eJ27IB32ivbty9RWvA=;
	b=By/oS+qqIFPZWM+xRblSJBlQSjlz9qc+/PuWg8FGsfEDJMtxp5jI9KOmfzSOkVDYl4
	XH2Q9uQL48UPFIDwQlD7lJqi2ma1Q2fNEoZjbcOkX27uMOu/CDXgZG1V0cAlSTiE9WTZ
	YILHWA/VrBmywekH/d9SJ9hfnx/OyJjzcAv+aVjBi/CGoIfUrTloAHstlZh0GKkAfyvB
	gbTyspp0vTn+tCSWKQrr0qjs2xhWC8TwvfRTxY6YwMZs0r+8+KCE2lwqDE613RT9Myd3
	Bla+Ts+5zJmFAsam+m3tbKMFREwThjitKLWQvrvg/qsEKW+rh7k50vg25vEYHIgs8l7V
	aZ4g==
MIME-Version: 1.0
Received: by 10.50.77.230 with SMTP id v6mr1255864igw.11.1355484240945; Fri,
	14 Dec 2012 03:24:00 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Fri, 14 Dec 2012 03:24:00 -0800 (PST)
In-Reply-To: <CAKhsbWaagzeQL7OGW7YJJKDYxqz=YrV0jKgbdL9nB9qF3+nk=w@mail.gmail.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZp2ne0gWs_3GCq+-yd78jjheudCCSawKP_SBFqWkcyBA@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
	<CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
	<CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
	<CAKhsbWaagzeQL7OGW7YJJKDYxqz=YrV0jKgbdL9nB9qF3+nk=w@mail.gmail.com>
Date: Fri, 14 Dec 2012 19:24:00 +0800
X-Google-Sender-Auth: -YRDlJWLHSaQ7B1NX8pheVLgNPg
Message-ID: <CAKhsbWbBSAdyGjNyyQOQ2SQALY3d30Brw0MhAc0HRhrSvgO01A@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay, Allen M" <allen.m.kay@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 14, 2012 at 10:39 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> On Fri, Dec 14, 2012 at 1:39 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>> On Thu, Dec 13, 2012 at 10:33 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>>> On Thu, Dec 13, 2012 at 8:43 PM, Stefano Stabellini
>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>>
>>>> Does this patch work for you?
>>>>
>>>
>>> It appears that you change the exposed 1f.0 bridge into an ISA bridge.
>>> The driver should be able to recognize it -- as long as it is not
>>> hidden by the PIIX3 bridge.
>>> I wonder if there is way to entirely override that one...
>>> But anyway I'll try it out first.
>>>
>>
>> Stefano, your patch does not produce an ISA bridge as expected.
>> The device as viewed from the domU is like this:
>> 00:1f.0 Non-VGA unclassified device [0000]: Intel Corporation H77
>> Express Chipset LPC Controller [8086:1e4a] (rev 04)
>>
>> I'm on the latest 4.2-testing branch just synced && built for your patch.
>>
>
> Some more info from lspci:
>
> Intel ISA bridge as seen from dom0:
> 00:1f.0 ISA bridge: Intel Corporation H77 Express Chipset LPC
> Controller (rev 04)
> 00: 86 80 4a 1e 07 00 10 02 04 00 01 06 00 00 80 00
> 10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 20: 00 00 00 00 00 00 00 00 00 00 00 00 49 18 4a 1e
> 30: 00 00 00 00 e0 00 00 00 00 00 00 00 00 00 00 00
>
> After exposed to domU
> 00:1f.0 Non-VGA unclassified device: Intel Corporation H77 Express
> Chipset LPC Controller (rev 04)
> 00: 86 80 4a 1e 07 00 a0 00 04 00 01 06 00 10 81 00
> 10: 00 50 42 f1 10 50 42 f1 00 01 01 f1 30 50 42 f1
> 20: 40 50 42 f1 50 50 42 f1 00 00 00 00 f4 1a 00 11
> 30: 60 50 42 f1 00 00 00 00 00 00 00 00 00 00 00 00
>
> The PIIX3 ISA bridge as emulated by qemu:
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00: 86 80 00 70 07 00 00 02 00 00 01 06 00 00 80 00
> 10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 20: 00 00 00 00 00 00 00 00 00 00 00 00 f4 1a 00 11
> 30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>
> I have no idea if this is due to the header type set to 0x81 (pci-pci
> bridge, multi-functional), or some other fields need to be cleared...
>

I did another experiment, changing the header type to 0x80.
The device now shows up as an ISA bridge.
But as long as the PIIX3 bridge cannot be overridden, I still need
another hack in the Intel driver to make it work.
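For reference, the header-type byte (config offset 0x0e) packs two fields: bit 7 is the multi-function flag and bits 6..0 select the header layout (0x00 standard device, 0x01 PCI-PCI bridge). A minimal decode, using the values from the dumps above:

```shell
# Decode the PCI header-type byte: 0x81 = multi-function PCI-PCI bridge
# (what the patch sets), 0x80 = multi-function standard device (what made
# the ISA bridge class visible to the guest).
header_layout()    { echo $(( $1 & 0x7f )); }
is_multifunction() { [ $(( $1 & 0x80 )) -ne 0 ]; }
```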

This is the local hack I made, based on kernel 3.2.31.
The function has not changed in newer versions, other than adding new
chip support. Not sure if this change is acceptable to upstream...

--- i915_drv.c.orig    2012-10-10 10:31:37.000000000 +0800
+++ i915_drv.c    2012-12-14 19:10:32.000000000 +0800
@@ -303,6 +303,7 @@
 {
     struct drm_i915_private *dev_priv = dev->dev_private;
     struct pci_dev *pch;
+    unsigned found = 0;

     /*
      * The reason to probe ISA bridge instead of Dev31:Fun0 is to
@@ -311,11 +312,13 @@
      * underneath. This is a requirement from virtualization team.
      */
     pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
-    if (pch) {
+    while (pch) {
         if (pch->vendor == PCI_VENDOR_ID_INTEL) {
             int id;
             id = pch->device & INTEL_PCH_DEVICE_ID_MASK;

+            found = pch->device;
+
             if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
                 dev_priv->pch_type = PCH_IBX;
                 DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
@@ -326,9 +329,21 @@
                 /* PantherPoint is CPT compatible */
                 dev_priv->pch_type = PCH_CPT;
                 DRM_DEBUG_KMS("Found PatherPoint PCH\n");
+            } else {
+                found = 0;
             }
+        }
+        if (found) {
+            pci_dev_put(pch);
+            break;
+        } else {
+            struct pci_dev *curr = pch;
+            pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr);
+            pci_dev_put(curr);
         }
-        pci_dev_put(pch);
+    }
+    if (!found) {
+        DRM_DEBUG_KMS("intel PCH detect failed, nothing found\n");
     }
 }



>
>> Thanks,
>> Timothy
>>
>>>>
>>>>
>>>> diff --git a/hw/pci.c b/hw/pci.c
>>>> index f051de1..d371bd7 100644
>>>> --- a/hw/pci.c
>>>> +++ b/hw/pci.c
>>>> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>>>>      }
>>>>  }
>>>>
>>>> -typedef struct {
>>>> -    PCIDevice dev;
>>>> -    PCIBus *bus;
>>>> -} PCIBridge;
>>>> -
>>>>  void pci_bridge_write_config(PCIDevice *d,
>>>>                               uint32_t address, uint32_t val, int len)
>>>>  {
>>>> diff --git a/hw/pci.h b/hw/pci.h
>>>> index edc58b6..c2acab9 100644
>>>> --- a/hw/pci.h
>>>> +++ b/hw/pci.h
>>>> @@ -222,6 +222,11 @@ struct PCIDevice {
>>>>      int irq_state[4];
>>>>  };
>>>>
>>>> +typedef struct {
>>>> +    PCIDevice dev;
>>>> +    PCIBus *bus;
>>>> +} PCIBridge;
>>>> +
>>>>  extern char direct_pci_str[];
>>>>  extern int direct_pci_msitranslate;
>>>>  extern int direct_pci_power_mgmt;
>>>> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
>>>> index c6f8869..d8e789f 100644
>>>> --- a/hw/pt-graphics.c
>>>> +++ b/hw/pt-graphics.c
>>>> @@ -3,6 +3,7 @@
>>>>   */
>>>>
>>>>  #include "pass-through.h"
>>>> +#include "pci.h"
>>>>  #include "pci/header.h"
>>>>  #include "pci/pci.h"
>>>>
>>>> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>>>>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>>>>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
>>>>
>>>> -    if ( vid == PCI_VENDOR_ID_INTEL )
>>>> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
>>>> -                        pch_map_irq, "intel_bridge_1f");
>>>> +    if (vid == PCI_VENDOR_ID_INTEL) {
>>>> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
>>>> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
>>>> +
>>>> +        pci_config_set_vendor_id(s->dev.config, vid);
>>>> +        pci_config_set_device_id(s->dev.config, did);
>>>> +
>>>> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
>>>> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
>>>> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
>>>> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
>>>> +        s->dev.config[PCI_REVISION] = rid;
>>>> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
>>>> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
>>>> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
>>>> +        s->dev.config[PCI_HEADER_TYPE] = 0x81;
>>>> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
>>>> +
>>>> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
>>>> +    }
>>>>  }
>>>>
>>>>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 11:46:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 11:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjTi5-0006gQ-SH; Fri, 14 Dec 2012 11:45:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjTi3-0006gJ-SD
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 11:45:44 +0000
Received: from [193.109.254.147:58401] by server-16.bemta-14.messagelabs.com
	id 42/CB-18932-7611BC05; Fri, 14 Dec 2012 11:45:43 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355485510!2918581!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11195 invoked from network); 14 Dec 2012 11:45:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 11:45:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="150278"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 11:45:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 11:45:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjThV-0005y8-U9;
	Fri, 14 Dec 2012 11:45:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjThV-0006vl-TQ;
	Fri, 14 Dec 2012 11:45:09 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14689-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 11:45:09 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
From xen-devel-bounces@lists.xen.org Fri Dec 14 11:46:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 11:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjTi5-0006gQ-SH; Fri, 14 Dec 2012 11:45:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjTi3-0006gJ-SD
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 11:45:44 +0000
Received: from [193.109.254.147:58401] by server-16.bemta-14.messagelabs.com
	id 42/CB-18932-7611BC05; Fri, 14 Dec 2012 11:45:43 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355485510!2918581!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11195 invoked from network); 14 Dec 2012 11:45:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 11:45:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="150278"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 11:45:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 11:45:09 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjThV-0005y8-U9;
	Fri, 14 Dec 2012 11:45:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjThV-0006vl-TQ;
	Fri, 14 Dec 2012 11:45:09 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14689-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 11:45:09 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14689 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14689/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-win           7 windows-install             fail pass in 14684
 test-amd64-i386-xl-credit2   10 guest-saverestore  fail in 14684 pass in 14689
 test-i386-i386-xl-win    12 guest-localmigrate/x10 fail in 14684 pass in 14689

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check      fail in 14684 never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 12:30:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 12:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUP7-0007C9-Ju; Fri, 14 Dec 2012 12:30:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TjUP6-0007Bw-Jy; Fri, 14 Dec 2012 12:30:12 +0000
Received: from [85.158.138.51:59748] by server-9.bemta-3.messagelabs.com id
	2A/97-11948-3DB1BC05; Fri, 14 Dec 2012 12:30:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355488210!28836824!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16151 invoked from network); 14 Dec 2012 12:30:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 12:30:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="151498"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 12:30:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 12:30:05 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjUOz-0006Q0-3x; Fri, 14 Dec 2012 12:30:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjUOy-0000lb-RI;
	Fri, 14 Dec 2012 12:30:04 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.7116.546352.141787@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 12:30:04 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CB0429.9070300@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> According to RFC 3720 and RFC 1035, IQNs should follow this format:
> 
> iqn.yyyy-mm.com.example:optional.string
> 
> The problem is that Open-iSCSI seems to accept almost anything, for
> example iqn.yyyy-mm,com@example:... is a valid iqn from Open-iSCSI point
> of view. The only character that Open-iSCSI doesn't seem to accept in
> iqns is "/", but I don't really like using that as a field separator
> inside of target. So I propose the following encoding for target:
> 
> "<iqn>,<portal>"
> "<iqn>,<portal>,<auth_method>,<user>,<password>"
> 
> If a user/password is given, we should take care about what we write to
> "params" xenstore backend field (because the DomU can read that). Would
> you agree with the syntax described below?

Wouldn't it be better to specify this in a more key/value like way ?

The password is a problem.  Perhaps we need to arrange not to write
params to a place where the guest can see it, but that means upheaval
for the interface to block scripts.

> > I think it should be controlled by the same argument.  So maybe
> > script=iscsi causes libxl to check for a dropping in the script file
> > saying "yes do the prepare thing" or maybe it runs
> > /etc/xen/scripts/block-iscsi--prepare or something.
> 
> I like the approach to call the same hotplug script twice, the first
> time use something like `/etc/xen/scripts/block-iscsi prepare`, and the
> second time `/etc/xen/scripts/block-iscsi add`

So how would we tell whether the script understood this ?

Perhaps we should invent a new config parameter parallel to script
which specifies an entirely new interface.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
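
The comma-separated target encoding proposed in the message above could be parsed roughly as follows. This is a hypothetical sketch only: `parse_target` and the field names are illustrative, not part of libxl or `/etc/xen/scripts/block-iscsi`, and, as the thread notes, a lenient Open-iSCSI IQN that itself contains a comma would defeat a plain comma split.

```python
import re

# RFC-style IQN shape: iqn.yyyy-mm.<reversed-domain>[:optional.string]
# (Open-iSCSI in practice accepts much looser strings than this.)
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.*)?$")

def parse_target(target):
    """Split "<iqn>,<portal>" or "<iqn>,<portal>,<auth_method>,<user>,<password>"
    into a dict.  Hypothetical helper for the encoding discussed above."""
    fields = target.split(",")
    if len(fields) not in (2, 5):
        raise ValueError("expected <iqn>,<portal> or the 5-field form")
    if not IQN_RE.match(fields[0]):
        raise ValueError("IQN does not match the RFC-style format")
    keys = ("iqn", "portal", "auth_method", "user", "password")
    return dict(zip(keys, fields))
```

Note that even with such a parser, the password field would still end up in the xenstore "params" key unless libxl learned to write it somewhere the guest cannot read, which is exactly the upheaval discussed above.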

From xen-devel-bounces@lists.xen.org Fri Dec 14 12:30:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 12:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUP7-0007C9-Ju; Fri, 14 Dec 2012 12:30:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TjUP6-0007Bw-Jy; Fri, 14 Dec 2012 12:30:12 +0000
Received: from [85.158.138.51:59748] by server-9.bemta-3.messagelabs.com id
	2A/97-11948-3DB1BC05; Fri, 14 Dec 2012 12:30:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355488210!28836824!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16151 invoked from network); 14 Dec 2012 12:30:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 12:30:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="151498"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 12:30:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 12:30:05 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjUOz-0006Q0-3x; Fri, 14 Dec 2012 12:30:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjUOy-0000lb-RI;
	Fri, 14 Dec 2012 12:30:04 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.7116.546352.141787@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 12:30:04 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CB0429.9070300@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> According to RFC3270 and RFC1035 IQNs should follow this format:
> 
> iqn.yyyy-mm.com.example:optional.string
> 
> The problem is that Open-iSCSI seems to accept almost anything; for
> example, iqn.yyyy-mm,com@example:... is a valid IQN from Open-iSCSI's
> point of view. The only character that Open-iSCSI doesn't seem to
> accept in IQNs is "/", but I don't really like using that as a field
> separator inside of target. So I propose the following encoding for
> target:
> 
> "<iqn>,<portal>"
> "<iqn>,<portal>,<auth_method>,<user>,<password>"
> 
> If a user/password is given, we should be careful about what we write
> to the "params" xenstore backend field (because the DomU can read
> that). Would you agree with the syntax described below?

Wouldn't it be better to specify this in a more key/value-like way?

The password is a problem.  Perhaps we need to arrange not to write
params to a place where the guest can see it, but that means upheaval
for the interface to block scripts.

> > I think it should be controlled by the same argument.  So maybe
> > script=iscsi causes libxl to check for a dropping in the script file
> > saying "yes do the prepare thing" or maybe it runs
> > /etc/xen/scripts/block-iscsi--prepare or something.
> 
> I like the approach of calling the same hotplug script twice: the
> first time with something like `/etc/xen/scripts/block-iscsi prepare`,
> and the second time with `/etc/xen/scripts/block-iscsi add`.

So how would we tell whether the script understood this?

Perhaps we should invent a new config parameter parallel to script
which specifies an entirely new interface.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 12:32:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 12:32:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUR0-0007LF-C5; Fri, 14 Dec 2012 12:32:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TjUQz-0007L2-34
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 12:32:09 +0000
Received: from [85.158.139.83:52782] by server-9.bemta-5.messagelabs.com id
	0C/7D-10690-84C1BC05; Fri, 14 Dec 2012 12:32:08 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355488287!29170976!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4223 invoked from network); 14 Dec 2012 12:31:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 12:31:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="719105"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 12:31:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 07:31:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TjUQI-00053D-5X;
	Fri, 14 Dec 2012 12:31:26 +0000
Date: Fri, 14 Dec 2012 12:31:21 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: James Kim <james@therfi.com>
In-Reply-To: <202022CB-36F1-4025-86D4-F428147231F1@therfi.com>
Message-ID: <alpine.DEB.2.02.1212141225590.17523@kaball.uk.xensource.com>
References: <202022CB-36F1-4025-86D4-F428147231F1@therfi.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1235358558-1355488281=:17523"
Cc: "xen-arm@lists.xen.org" <xen-arm@lists.xen.org>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [XenARM] Hello~ I've a question.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1235358558-1355488281=:17523
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

Regarding the upstream Xen port to ARMv7 with virtualization extensions
(http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions), we
have been working mainly on the Versatile Express Cortex A15: you should
be able to find an emulator on the ARM website, free with an evaluation
license
(http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/FastModels).

From the real hardware platform point of view, Versatile Express is a
bit expensive, so I would recommend something based on the Samsung
Exynos 5, like the new Arndale Board (http://www.arndaleboard.org).


On Thu, 13 Dec 2012, James Kim wrote:
> These days, I'm searching for an embedded board for porting the Xen
> hypervisor, but I couldn't find one. If anyone knows of one, please
> advise me.
>
> I just want something that is as easy as possible to apply. So, if one
> already exists, let me know the BRAND NAME and MODEL.
>
> Thank you.
--1342847746-1235358558-1355488281=:17523
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1235358558-1355488281=:17523--


From xen-devel-bounces@lists.xen.org Fri Dec 14 12:49:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 12:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUhP-0007d2-2Z; Fri, 14 Dec 2012 12:49:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TjUhN-0007cx-29
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 12:49:05 +0000
Received: from [85.158.139.211:15540] by server-15.bemta-5.messagelabs.com id
	15/64-20523-0402BC05; Fri, 14 Dec 2012 12:49:04 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355489342!19048028!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24932 invoked from network); 14 Dec 2012 12:49:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 12:49:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="720555"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 12:49:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 07:49:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TjUhJ-0005JP-5f;
	Fri, 14 Dec 2012 12:49:01 +0000
Date: Fri, 14 Dec 2012 12:48:56 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Josh Zhao <joshsystem@gmail.com>
In-Reply-To: <CAOYkbagjXEun=QaNmrKpd69Ma0BqWpvbb_oJGzBg8V60FRkJPQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212141244260.17523@kaball.uk.xensource.com>
References: <CAOYkbagjXEun=QaNmrKpd69Ma0BqWpvbb_oJGzBg8V60FRkJPQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1634788834-1355489336=:17523"
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] About ARM branch questions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1634788834-1355489336=:17523
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Fri, 14 Dec 2012, Josh Zhao wrote:
> Hi, I have new questions about ARM-Xen.
>
> (1) Following the wiki at
> http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions, are
> these repos the latest?
>
>     Xen: http://xenbits.xen.org/hg/xen-unstable.hg
>     Linux Dom0 and DomU: the arm-privcmd-for-3.8 branch of
>     git://xenbits.xen.org/people/ianc/linux.git

Yes, they are.
However, we have a few outstanding patches, already sent to the list by
me or Ian, but not yet applied.


> (2) Are the ARM PV interfaces for I/O the same as on x86?

I guess you are referring to the Linux PV frontend and backend drivers?
Like netfront/netback and blkfront/blkback? If that is the case, then
yes, we are using exactly the same frontend/backend driver pairs as in
Xen x86.
--1342847746-1634788834-1355489336=:17523
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1634788834-1355489336=:17523--


From xen-devel-bounces@lists.xen.org Fri Dec 14 13:00:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 13:00:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUrW-0007mw-8G; Fri, 14 Dec 2012 12:59:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TjUrU-0007mr-Sn
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 12:59:33 +0000
Received: from [85.158.139.83:22430] by server-16.bemta-5.messagelabs.com id
	0C/5F-09208-4B22BC05; Fri, 14 Dec 2012 12:59:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355489954!29905418!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6650 invoked from network); 14 Dec 2012 12:59:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 12:59:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="676341"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 12:59:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 07:59:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TjUrA-0005SR-Qu;
	Fri, 14 Dec 2012 12:59:12 +0000
Date: Fri, 14 Dec 2012 12:59:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWbBSAdyGjNyyQOQ2SQALY3d30Brw0MhAc0HRhrSvgO01A@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212141249440.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
	<CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
	<CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
	<CAKhsbWaagzeQL7OGW7YJJKDYxqz=YrV0jKgbdL9nB9qF3+nk=w@mail.gmail.com>
	<CAKhsbWbBSAdyGjNyyQOQ2SQALY3d30Brw0MhAc0HRhrSvgO01A@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Dec 2012, G.R. wrote:
> I did another experiment to change the header type to 0x80.
> The device now shows up as an ISA bridge.

Great! Could you please resend the patch with your change to xen-devel?


> But as long as the PIIX3 bridge cannot be overridden, I need another
> hack in the Intel driver to make it work.
> 
> This is the local hack I made, based on kernel 3.2.31.
> The function does not change in newer versions, other than new chip
> support. Not sure if this change is acceptable upstream...

I am not the maintainer of the i915 driver, but my feeling is that it is
acceptable.
However, when making changes to Linux, you need to make them against
the latest release or one of the latest RCs of Linus' git tree
(git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git),
for example v3.7. You also need to generate the patch with git diff.


> --- i915_drv.c.orig    2012-10-10 10:31:37.000000000 +0800
> +++ i915_drv.c    2012-12-14 19:10:32.000000000 +0800
> @@ -303,6 +303,7 @@
>  {
>      struct drm_i915_private *dev_priv = dev->dev_private;
>      struct pci_dev *pch;
> +    unsigned found = 0;

You don't need to introduce found: I would just use "continue" in the
while loop if there isn't a match and "break" at the first good match.

> 
>      /*
>       * The reason to probe ISA bridge instead of Dev31:Fun0 is to
> @@ -311,11 +312,13 @@
>       * underneath. This is a requirement from virtualization team.
>       */
>      pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
> -    if (pch) {
> +    while (pch) {
>          if (pch->vendor == PCI_VENDOR_ID_INTEL) {
>              int id;
>              id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
> 
> +            found = pch->device;
> +
>              if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
>                  dev_priv->pch_type = PCH_IBX;
>                  DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
> @@ -326,9 +329,21 @@
>                  /* PantherPoint is CPT compatible */
>                  dev_priv->pch_type = PCH_CPT;
>                  DRM_DEBUG_KMS("Found PatherPoint PCH\n");
> +            } else {
> +                found = 0;
>              }
> +        }
> +        if (found) {
> +            pci_dev_put(pch);
> +            break;
> +        } else {
> +            struct pci_dev *curr = pch;
> +            pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr);
> +            pci_dev_put(curr);
>          }
> -        pci_dev_put(pch);
> +    }
> +    if (!found) {
> +        DRM_DEBUG_KMS("intel PCH detect failed, nothing found\n");
>      }
>  }
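
Stefano's suggestion above (drop the found variable, skip non-matches
with "continue" and stop at the first good one with "break") can be
sketched with a self-contained mock. The struct, device list, and
helper below are illustrative stand-ins for the repeated
pci_get_class() calls in the real i915 code, not kernel code:

```c
#include <stddef.h>

#define VENDOR_INTEL 0x8086
#define PCH_ID_MASK  0xff00   /* stand-in for INTEL_PCH_DEVICE_ID_MASK */
#define PCH_IBX_ID   0x3b00   /* Ibex Peak (illustrative value) */
#define PCH_CPT_ID   0x1c00   /* CougarPoint (illustrative value) */

/* Mock of the ISA-bridge candidates that pci_get_class() would walk. */
struct mock_dev {
    unsigned vendor;
    unsigned device;
};

/* Return the device id of the first recognized PCH, or 0 if none.
 * "continue" skips non-candidates; returning at the first match plays
 * the role of "break" in the real loop, so no found flag is needed. */
static unsigned find_pch(const struct mock_dev *devs, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        unsigned id;

        if (devs[i].vendor != VENDOR_INTEL)
            continue;                 /* not a candidate, keep scanning */
        id = devs[i].device & PCH_ID_MASK;
        if (id == PCH_IBX_ID || id == PCH_CPT_ID)
            return devs[i].device;    /* first good match, stop */
        /* Intel but unrecognized id: keep scanning. */
    }
    return 0;                         /* nothing found */
}
```

In the real driver the loop would additionally have to do the
pci_dev_put()/pci_get_class() reference-count dance when advancing to
the next candidate, which is the part the original hack got tangled in.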

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 13:00:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 13:00:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUrW-0007mw-8G; Fri, 14 Dec 2012 12:59:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TjUrU-0007mr-Sn
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 12:59:33 +0000
Received: from [85.158.139.83:22430] by server-16.bemta-5.messagelabs.com id
	0C/5F-09208-4B22BC05; Fri, 14 Dec 2012 12:59:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355489954!29905418!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6650 invoked from network); 14 Dec 2012 12:59:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 12:59:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,280,1355097600"; 
   d="scan'208";a="676341"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 12:59:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 07:59:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TjUrA-0005SR-Qu;
	Fri, 14 Dec 2012 12:59:12 +0000
Date: Fri, 14 Dec 2012 12:59:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWbBSAdyGjNyyQOQ2SQALY3d30Brw0MhAc0HRhrSvgO01A@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212141249440.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
	<CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
	<CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
	<CAKhsbWaagzeQL7OGW7YJJKDYxqz=YrV0jKgbdL9nB9qF3+nk=w@mail.gmail.com>
	<CAKhsbWbBSAdyGjNyyQOQ2SQALY3d30Brw0MhAc0HRhrSvgO01A@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Dec 2012, G.R. wrote:
> I did another experiment to change the header type to 0x80.
> The devices now shows up as an ISA bridge.

Great! Could you please resend the patch with your change to xen-devel?


> But as long as the PIIX3 bridge cannot be overridden, I need another
> hack in intel driver to make it work.
> 
> This is the local hack I made, based on kernel 3.2.31.
> But the function does not change in new version, other than new chip support.
> Not sure if this change is acceptable to upstream...

I am not the maintainer of the i915 driver, but my feeling is that it is
acceptable.
However, when making changes to Linux, you need to make them against
the latest release or one of the latest RCs of Linus' git tree
(git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git),
for example v3.7. You also need to generate the patch with git diff.


> --- i915_drv.c.orig    2012-10-10 10:31:37.000000000 +0800
> +++ i915_drv.c    2012-12-14 19:10:32.000000000 +0800
> @@ -303,6 +303,7 @@
>  {
>      struct drm_i915_private *dev_priv = dev->dev_private;
>      struct pci_dev *pch;
> +    unsigned found = 0;

You don't need to introduce found: I would just use "continue" in the
while loop if there isn't a match and "break" at the first good match.

> 
>      /*
>       * The reason to probe ISA bridge instead of Dev31:Fun0 is to
> @@ -311,11 +312,13 @@
>       * underneath. This is a requirement from virtualization team.
>       */
>      pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
> -    if (pch) {
> +    while (pch) {
>          if (pch->vendor == PCI_VENDOR_ID_INTEL) {
>              int id;
>              id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
> 
> +            found = pch->device;
> +
>              if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
>                  dev_priv->pch_type = PCH_IBX;
>                  DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
> @@ -326,9 +329,21 @@
>                  /* PantherPoint is CPT compatible */
>                  dev_priv->pch_type = PCH_CPT;
>                  DRM_DEBUG_KMS("Found PatherPoint PCH\n");
> +            } else {
> +                found = 0;
>              }
> +        }
> +        if (found) {
> +            pci_dev_put(pch);
> +            break;
> +        } else {
> +            struct pci_dev *curr = pch;
> +            pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr);
> +            pci_dev_put(curr);
>          }
> -        pci_dev_put(pch);
> +    }
> +    if (!found) {
> +        DRM_DEBUG_KMS("intel PCH detect failed, nothing found\n");
>      }
>  }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 13:06:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 13:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjUxc-0007yI-6c; Fri, 14 Dec 2012 13:05:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1TjUxa-0007yB-Dc
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 13:05:50 +0000
Received: from [193.109.254.147:17078] by server-10.bemta-14.messagelabs.com
	id 0B/F6-13263-D242BC05; Fri, 14 Dec 2012 13:05:49 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355490347!10381659!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NjA5Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17974 invoked from network); 14 Dec 2012 13:05:48 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-27.messagelabs.com with SMTP;
	14 Dec 2012 13:05:48 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 14 Dec 2012 05:05:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,280,1355126400"; d="scan'208";a="263978076"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga002.fm.intel.com with ESMTP; 14 Dec 2012 05:05:46 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 14 Dec 2012 05:05:45 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Fri, 14 Dec 2012 21:05:44 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH V1 1/2] Xen acpi memory hotplug driver
Thread-Index: AQHN1IPpUNtbxbdZSkG9MqBQJp2L85gVdm/ggALVLaA=
Date: Fri, 14 Dec 2012 13:05:43 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353A9E57@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
	<20121205174307.GC16072@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
	<20121207140528.GA3140@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A7C7B@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353A7C7B@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Liu, Jinsong wrote:
> Konrad Rzeszutek Wilk wrote:
>> On Thu, Dec 06, 2012 at 04:27:36AM +0000, Liu, Jinsong wrote:
>>>>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig index
>>>>>>> 126d8ce..abd0396 100644 --- a/drivers/xen/Kconfig
>>>>>>> +++ b/drivers/xen/Kconfig
>>>>>>> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
>>>>>>>  	  Allow kernel fetching MCE error from Xen platform and
>>>>>>>  	  converting it into Linux mcelog format for mcelog tools
>>>>>>> 
>>>>>>> +config XEN_ACPI_MEMORY_HOTPLUG
>>>>>>> +	bool "Xen ACPI memory hotplug"
>>>>>> 
>>>>>> There should be a way to make this a module.
>>>>> 
>>>>> I have some concerns to make it a module:
>>>>> 1. xen and native memhotplug driver both work as module, while we
>>>>> need early load xen driver. 
>>>>> 2. if possible, a xen stub driver may solve load sequence issue,
>>>>>   but it may involve other issues * if xen driver load then
>>>>> unload, native driver may have chance to load successfully;
>>>> 
>>>> The stub driver would still "occupy" the ACPI bus for the memory
>>>> hotplug PnP, so I think this would not be a problem.
>>>> 
>>> 
>>> I'm not quite clear your mean here, do you mean it has
>>> 1. xen_stub driver + xen_memhoplug driver, then xen_strub driver
>>> unload and entirely replaced by xen_memhotplug driver, or
>>> 2. xen_stub driver (w/ stub ops) + xen_memhotplug ops (not driver),
>>> then xen_stub driver keep occupying but its stub ops later replaced
>>> by xen_memhotplug ops?
>> 
>> #2
>>> 
>>> If in way #1, it has risk that native driver may load (if xen
>>> driver unload). If in way #2, xen_memhotplug ops lose the chance to
>>> probe/add/bind existed memory devices (since it's done when driver
>>> registerred). 
>> 
>> Could the stub driver have a queue of events?
> 
> If so, why not do 'real' add ops (like our patch did, to build-in xen
> memory hotplug logic)? 
> I'm not quite clear your purpose of insisting module -- what's
> advantage of module you prefer? 
> 
>> 
>>> 
>>>>>   * if xen driver load --> unload --> load again, then it will
>>>>> lose hotplug notification during unload period;
>>>> 
>>>> Sure. But I think we can do it with this driver? After all the
>>>> function of it is to just tell the firmware to turn on/off sockets
>>>> - and if we miss one notification we won't take advantage of the
>>>> power savings - but we can do that later on.
>>>> 
>>> 
>>> Not only inform firmware.
>>> Hotplug notify callback will invoke acpi_bus_add -> ... ->
>>> implicitly invoke drv->ops.add method to add the hotadded memory
>>> device.
>> 
>> Gotcha.
> 
> ? So it will lose the notification and no way to add the new memory
> device in the future. 
> 
> Xen memory hotplug logic consist of 2 parts:
> 1) driver logic (.add/.remove etc)
> 2) notification install/callback logic
> If you want to use 'xen_stub driver + .add/.remove ops', then
> notification install/callback logic would implement with xen_stub
> driver (means in build-in part, otherwise it would lose notification
> when the ops unload) --> but that would make xen_stub in big build-in
> size.    

How about:
* built-in part: the xen_stub driver (a stub .add that records the matched devices) + the notification install/callback logic;
* module part: the real .add/.remove ops;
With this, the native driver has no chance to load, no hotplug events are lost, and approximately 1/3 of the code is built-in while 2/3 is in the module.

I think it will work, but I'm not completely sure; at least we can give it a try and test it.

Thanks,
Jinsong

> 
>>> 
>>>> 
>>>>>   * if xen driver load --> unload --> load again, then it will
>>>>> re-add all memory devices, but the handle for 'booting memory
>>>>> device' and 'hotplug memory device' are different while we have no
>>>>> way to distinguish these 2 kind of devices.
>>>> 
>>>> Wouldn't the stub driver hold onto that?
>>>> 
>>> 
>>> Same question as comment #1. Do you mean it has a xen_stub driver
>>> (w/ stub ops) and a xen_memhotplug ops?
>> 
>> Correct.
>>> 
>>>>> 
>>>>> IMHO I think to make xen hotplug logic as module may involves
>>>>> unexpected result. Is there any obvious advantages of doing so?
>>>>> after all we have provided config choice to user. Thoughts?
>>>> 
>>>> Yes, it becomes a module - which is what we want.
>>>> 
>>> 
>>> What I meant here is, module will bring some unexpected issues for
>>> xen hotplug. We can provide user 'bool' config choice, let them
>>> decide to build-in or not, but not 'tristate' choice.
>> 
>> What would be involved in making it an tristate choice?
>>> 
>>> Thanks
>>> Jinsong
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 14:32:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 14:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWJO-0000uX-90; Fri, 14 Dec 2012 14:32:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1TjWJN-0000uS-7u
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 14:32:25 +0000
Received: from [193.109.254.147:16290] by server-16.bemta-14.messagelabs.com
	id A2/2D-18932-8783BC05; Fri, 14 Dec 2012 14:32:24 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355495527!5363084!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5274 invoked from network); 14 Dec 2012 14:32:07 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 14:32:07 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so1418324wgb.32
	for <xen-devel@lists.xen.org>; Fri, 14 Dec 2012 06:32:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=0rAhR3XVH8xVPvx3lE2jR+M4iQqarlVf/DqIq2Mtv/0=;
	b=YZYE+QGxsiUFYbiWrx/YHYS4S/ByDWWSIhUbijI0xcc3cX5tQxXQZRasBRJmzRxpVI
	MwmNHjeVqppOdc9VUUCWZwcjIdDycHiie1zUh3COX52Y8VTdKLZLN0IJPMDXcjDku1F5
	Dmnk6y383v7juAF2KPgz4rYA/Dc9XQ9zNauKZsasMkwm7EUsX5fEkA4B2FIhABvA9PmY
	hwA71Yp2Wyx1opEX4aWTOThbFYmNMZ64yyxVcUH3K5XiszthIsJC6FsmwW3HFMcziiun
	l4BYmVS8fpfcT3vzfue2f5X9T9ryP/kRK8e19GEV/DWg1Hgo3bJpSqzV5j422McORK1o
	ks9Q==
MIME-Version: 1.0
Received: by 10.180.24.199 with SMTP id w7mr3014444wif.5.1355495527150; Fri,
	14 Dec 2012 06:32:07 -0800 (PST)
Received: by 10.194.64.194 with HTTP; Fri, 14 Dec 2012 06:32:07 -0800 (PST)
Date: Fri, 14 Dec 2012 20:02:07 +0530
Message-ID: <CANq0ewvN=e6C7=C=QmDYv9kptoW0htZ2mO1o2gXRzk2NwCFmxg@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] xen with huffman coding
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7343406972418274123=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7343406972418274123==
Content-Type: multipart/alternative; boundary=f46d043c06fe63f29204d0d0e56c

--f46d043c06fe63f29204d0d0e56c
Content-Type: text/plain; charset=ISO-8859-1

Hello all,
              Is it possible to integrate the Xen pre-copy algorithm with a
Huffman coding compression algorithm, so that during live migration VM pages
are first compressed and then transferred to the destination?

regards,
DigvijaySingh

--f46d043c06fe63f29204d0d0e56c--


--===============7343406972418274123==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7343406972418274123==--


From xen-devel-bounces@lists.xen.org Fri Dec 14 14:36:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 14:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWN9-00013A-2l; Fri, 14 Dec 2012 14:36:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1TjWN7-000135-K5
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 14:36:17 +0000
Received: from [85.158.139.211:38631] by server-9.bemta-5.messagelabs.com id
	E9/07-10690-0693BC05; Fri, 14 Dec 2012 14:36:16 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355495774!18086145!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26934 invoked from network); 14 Dec 2012 14:36:15 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Dec 2012 14:36:15 -0000
Received: from aplexcas1.dom1.jhuapl.edu (aplexcas1.dom1.jhuapl.edu
	[128.244.198.90]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 2a05_773f_e7260365_695f_4f37_92f1_45d486d38dce;
	Fri, 14 Dec 2012 09:36:07 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Fri, 14 Dec 2012
	09:34:12 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: George Dunlap <dunlapg@umich.edu>
Date: Fri, 14 Dec 2012 09:33:27 -0500
Thread-Topic: [Xen-devel] [PATCH] Disable caml-stubdom by default
Thread-Index: Ac3ZWevm894My7ZcSw2kUrNrvKKPLQArg+Fg
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC20D@aplesstripe.dom1.jhuapl.edu>
References: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>,
	<CAFLBxZZQ9Nw09L01ekS4k0W92gZ27F+5R82192HgO_tWEp02Ug@mail.gmail.com>
In-Reply-To: <CAFLBxZZQ9Nw09L01ekS4k0W92gZ27F+5R82192HgO_tWEp02Ug@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Caml was disabled in the original stubdom default build targets. I should not have enabled it in my original stubdom autoconf patch.

________________________________________
From: dunlapg@gmail.com [dunlapg@gmail.com] On Behalf Of George Dunlap [dunlapg@umich.edu]
Sent: Thursday, December 13, 2012 12:47 PM
To: Fioravante, Matthew E.
Cc: Ian Campbell; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default

Is there a rationale for this?  If so, it should probably be in the commit message.

 -George


On Thu, Dec 13, 2012 at 3:31 PM, Matthew Fioravante <matthew.fioravante@jhuapl.edu> wrote:
Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
---
 stubdom/configure.ac |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/configure.ac b/stubdom/configure.ac
index db44d4a..384a94a 100644
--- a/stubdom/configure.ac
+++ b/stubdom/configure.ac
@@ -18,7 +18,7 @@ m4_include([../m4/depends.m4])
 # Enable/disable stub domains
 AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
 AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
-AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
+AX_STUBDOM_DEFAULT_DISABLE([caml-stubdom], [caml])
 AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
 AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
 AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
--
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 14:42:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 14:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWSn-0001Dw-Sz; Fri, 14 Dec 2012 14:42:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TjWSn-0001Dq-3A
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 14:42:09 +0000
Received: from [85.158.137.99:60388] by server-5.bemta-3.messagelabs.com id
	48/3A-15136-FBA3BC05; Fri, 14 Dec 2012 14:42:07 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1355496124!14050270!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30725 invoked from network); 14 Dec 2012 14:42:06 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 14:42:06 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so3156564iac.32
	for <xen-devel@lists.xen.org>; Fri, 14 Dec 2012 06:42:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=thJ/jBQhoSKOW+1ZmY+M6xXmz/hzm5ybEfLr+8pARos=;
	b=vTrlLJDq8aMM7CSwSsIinOIm+DotRtIJhOikl8UI1RKLnvlMKbT/ZdrgjX/zjUWhLN
	dZ++zV5X66SCqhHuGsrlisDYXf1dsX9hQl+hFg39ziN7kqnqIoSVov52Ton2bs1mZicA
	mkcdgNksDen6djt0iDeWolKg3rNOF0o14im2ci+SetcaeFZkqJ92o9HnUP5lDAa/5JvJ
	InaU3VoxawWBn9syThJ5PqL/USX2ceAkejMpOuoZFdJ4r89B1j0+GhDnv5oT1P5pqyIq
	CfuJhiQWTklomEnL01saO1JjK/zx3AfWq+A5ml4a/RPvVSoHjmzUe6J/mKLQREu+C/Io
	3KrA==
MIME-Version: 1.0
Received: by 10.50.185.230 with SMTP id ff6mr1762275igc.7.1355496124150; Fri,
	14 Dec 2012 06:42:04 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Fri, 14 Dec 2012 06:42:04 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212141249440.17523@kaball.uk.xensource.com>
References: <CAKhsbWYS_CpuL3WT-FY-eQqnv5qdHz-OxpR_MdHs6+mFHdpDqg@mail.gmail.com>
	<CAKhsbWZXFngUPJc3BvUZEFWF2SUk+RDUnJ_aNk54oZdeHsg=Zg@mail.gmail.com>
	<alpine.DEB.2.02.1212101222120.4633@kaball.uk.xensource.com>
	<003AAFE53969E14CB1F09B6FD68C3CD450E91359@ORSMSX105.amr.corp.intel.com>
	<CAKhsbWbycCJJ933wJ5nOZT2DbXOsrgPEPR_4E+c+wCB4buFWNQ@mail.gmail.com>
	<alpine.DEB.2.02.1212111137520.17523@kaball.uk.xensource.com>
	<CAKhsbWZqtvpfepN50H_L-J6o6tOC62yYxu53oKdftKW0aXz0bA@mail.gmail.com>
	<alpine.DEB.2.02.1212111503480.17523@kaball.uk.xensource.com>
	<CAKhsbWZ63y2aTXpe6iov-71=G579BcH3oKdb9BtKnw1CmhuQfA@mail.gmail.com>
	<alpine.DEB.2.02.1212131223460.17523@kaball.uk.xensource.com>
	<CAKhsbWavOrKfpMUST1rC7QQdV9hM5SkJCHdHgV2X6bu6iro6=A@mail.gmail.com>
	<CAKhsbWbhcjiVuAhvHy0ve-1=-t231aoZ8DiovCS1f2UyVs_CSw@mail.gmail.com>
	<CAKhsbWaagzeQL7OGW7YJJKDYxqz=YrV0jKgbdL9nB9qF3+nk=w@mail.gmail.com>
	<CAKhsbWbBSAdyGjNyyQOQ2SQALY3d30Brw0MhAc0HRhrSvgO01A@mail.gmail.com>
	<alpine.DEB.2.02.1212141249440.17523@kaball.uk.xensource.com>
Date: Fri, 14 Dec 2012 22:42:04 +0800
X-Google-Sender-Auth: EmFLKnuSKNT_aJUSHoCWe4FtPrE
Message-ID: <CAKhsbWY2OiQcZ8=ReBnr01VN=uudkK2tdNaAZbDXxEM_cJpCSA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	"Kay, Allen M" <allen.m.kay@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, 
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>, "Dong,
	Eddie" <eddie.dong@intel.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] intel IGD driver intel_detect_pch() failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 14, 2012 at 8:59 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Fri, 14 Dec 2012, G.R. wrote:
>> I did another experiment to change the header type to 0x80.
>> The devices now shows up as an ISA bridge.
>
> Great! Could you please resend the patch with your change to xen-devel?
>
Sure, do I need to start a separate thread? Or simply send a follow up?
>
>> But as long as the PIIX3 bridge cannot be overridden, I need another
>> hack in intel driver to make it work.
>>
>> This is the local hack I made, based on kernel 3.2.31.
>> But the function does not change in newer versions, other than new chip support.
>> Not sure if this change is acceptable to upstream...
>
> I am not the maintainer of the i915 driver, but my feeling is that it is
> acceptable.
> However when making changes to Linux, you need to make them against
> the latest release or one of the latest RCs of Linus' git tree
> (git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git),
> for example v3.7. You also need to generate the patch with git diff.
>

Adding the Intel guys to the To list.
Could you check the change I made to the IGD driver?
If you think it's okay, I can create a patch against the latest 3.7 kernel.

>
>> --- i915_drv.c.orig    2012-10-10 10:31:37.000000000 +0800
>> +++ i915_drv.c    2012-12-14 19:10:32.000000000 +0800
>> @@ -303,6 +303,7 @@
>>  {
>>      struct drm_i915_private *dev_priv = dev->dev_private;
>>      struct pci_dev *pch;
>> +    unsigned found = 0;
>
> You don't need to introduce found: I would just use "continue" in the
> while loop if there isn't a match and "break" at the first good match.
>
That was part of the debugging log I added several days ago to root
cause the issue.
I'll have a clean version if upstream is okay with this 'while' change.

>>
>>      /*
>>       * The reason to probe ISA bridge instead of Dev31:Fun0 is to
>> @@ -311,11 +312,13 @@
>>       * underneath. This is a requirement from virtualization team.
>>       */
>>      pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
>> -    if (pch) {
>> +    while (pch) {
>>          if (pch->vendor == PCI_VENDOR_ID_INTEL) {
>>              int id;
>>              id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
>>
>> +            found = pch->device;
>> +
>>              if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
>>                  dev_priv->pch_type = PCH_IBX;
>>                  DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
>> @@ -326,9 +329,21 @@
>>                  /* PantherPoint is CPT compatible */
>>                  dev_priv->pch_type = PCH_CPT;
>>                  DRM_DEBUG_KMS("Found PatherPoint PCH\n");
>> +            } else {
>> +                found = 0;
>>              }
>> +        }
>> +        if (found) {
>> +            pci_dev_put(pch);
>> +            break;
>> +        } else {
>> +            struct pci_dev *curr = pch;
>> +            pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr);
>> +            pci_dev_put(curr);
>>          }
>> -        pci_dev_put(pch);
>> +    }
>> +    if (!found) {
>> +        DRM_DEBUG_KMS("intel PCH detect failed, nothing found\n");
>>      }
>>  }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 14:51:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 14:51:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWb8-0001Or-U4; Fri, 14 Dec 2012 14:50:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TjWb8-0001Om-0g
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 14:50:46 +0000
Received: from [85.158.139.83:51184] by server-13.bemta-5.messagelabs.com id
	9B/A2-10716-5CC3BC05; Fri, 14 Dec 2012 14:50:45 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355496613!27170040!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26342 invoked from network); 14 Dec 2012 14:50:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 14:50:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="690402"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 14:49:13 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 09:49:13 -0500
Message-ID: <50CB3C68.30605@citrix.com>
Date: Fri, 14 Dec 2012 14:49:12 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CANq0ewvN=e6C7=C=QmDYv9kptoW0htZ2mO1o2gXRzk2NwCFmxg@mail.gmail.com>
In-Reply-To: <CANq0ewvN=e6C7=C=QmDYv9kptoW0htZ2mO1o2gXRzk2NwCFmxg@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] xen with huffman coding
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/12 14:32, digvijay chauhan wrote:
> Hello all,
>               Is it possible to integrate Xen's pre-copy algorithm with a
> Huffman coding compression algorithm, so that during live migration VM
> pages are first compressed, then transferred and decompressed at the destination?
Short answer:
Probably.

Longer answer:
Have a look at the code, do some experimental changes, and see how much
it improves. The question is whether compressing, transferring the
smaller package, and then uncompressing it is sufficiently more
efficient than transferring the uncompressed version. The answer
depends, partly, on the network setup of the system and on how well the
Huffman coding is done.

For code and typical data, I'm not at all convinced that Huffman
encoding (which only exploits symbol frequencies) is the best method.
You may want to look at byte-pair encoding instead, which is more
suitable for "stuff that contains byte values" - I believe it is also
possible to do it "in situ", meaning the encoding never takes more space
than the original data - but I could have got that wrong.

Looking at different compression methods would be a good research project.

What is your goal (e.g. is downtime or overall transfer time the 
important factor)?
Generally, downtime is not very much affected by the amount of time it 
takes to copy the guest, except for the final iteration, which is a 
small portion of the total copy time. Of course, the total copy time for 
a large guest could easily be several minutes if the guest is also 
making lots of pages dirty.

What is your test setup?

What kind of network hardware are you using?

--
Mats
>
> regards,
> DigvijaySingh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 14:55:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 14:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWfG-0001XX-Ja; Fri, 14 Dec 2012 14:55:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TjWfE-0001XP-I9
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 14:55:00 +0000
Received: from [85.158.143.35:39772] by server-1.bemta-4.messagelabs.com id
	5A/CC-28401-2CD3BC05; Fri, 14 Dec 2012 14:54:58 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355496896!12870851!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,INFO_TLD,
	MIME_QP_LONG_LINE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3991 invoked from network); 14 Dec 2012 14:54:57 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-11.tower-21.messagelabs.com with SMTP;
	14 Dec 2012 14:54:57 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id E0642C5618D;
	Fri, 14 Dec 2012 14:54:45 +0000 (GMT)
Date: Fri, 14 Dec 2012 14:54:44 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Xen Devel <xen-devel@lists.xen.org>
Message-ID: <5B4525F296F6ABEB38B0E614@nimrod.local>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] Fatal crash on xen4.2 HVM + qemu-xen dm + NFS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

V2UgYXJlIHNlZWluZyBhIG5hc3R5IGNyYXNoIG9uIHhlbjQuMiBIVk0gKyBxZW11LXhlbiBkZXZp
Y2UgbW9kZWwuCgpXaGVuIHJ1bm5pbmcgYW4gVWJ1bnR1IENsb3VkIEltYWdlIFZNIGFzIGEgZ3Vl
c3Qgb3BlcmF0aW5nIHN5c3RlbSwKdGhlbiBhbGwgKG9yIG5lYXJseSBhbGwpIHRoZSB0aW1lLCBz
b21lIHdheSB0aHJvdWdoIHRoZSBib290IHByb2Nlc3MKdGhlIHBoeXNpY2FsIG1hY2hpbmUgZWl0
aGVyIGNyYXNoZXMgdG90YWxseSBhbmQgcmVib290cywgb3IgbG9zZXMKbmV0d29ya2luZy4gQSB0
eXBpY2FsIGNyYXNoIGR1bXAgaXMgYmVsb3cuCgpUaGUgc3RyYW5nZSB0aGluZ3MgaXMgdGhpcyBk
b2VzICpOT1QqIGFwcGVhciB0byBoYXBwZW4gdXNpbmcgdGhlCm5vbi1jbG91ZC1pbWFnZSB2ZXJz
aW9uIG9mIHRoZSBzYW1lIFVidW50dSBndWVzdCBvcGVyYXRpbmcgc3lzdGVtCihkZXNwaXRlIGxv
YWRpbmcgaXQgd2l0aCBib25uaWUrKyBhbmQgbG90cyBvZiBuZXR3b3JrIHRyYWZmaWMpLgpXZSBi
ZWxpZXZlIHRoZSBtYWluIHNpZ25pZmljYW50IGNoYW5nZSBpcyB0aGF0IHRoZSBjbG91ZCBpbWFn
ZQpyZXNpemVzIGl0cyBwYXJ0aXRpb24gdGh1cyBmaWxpbmcgc3lzdGVtIG9uIGJvb3QuIFBlcmhh
cHMgc29tZQptYWdpYyBoYXBwZW5zIHdoZW4gdGhlIHBhcnRpdGlvbiB0YWJsZSBpcyB3cml0dGVu
IHRvLiBPYnZpb3VzbHkKbm8gZ3Vlc3QgT1Mgc2hvdWxkIGNyYXNoIGRvbTAuCgpUaGUgc2V0dXAg
d2UgaGF2ZSBhdCB0aGUgbW9tZW50IGlzIGEgcWNvdzIgZGlzayBmaWxlIG9uIE5GUyBhbmQKYSBi
YWNraW5nIGZpbGUsIHVzaW5nIHRoZSBxZW11LXhlbiBkZXZpY2UgbW9kZWwuIEl0IHNlZW1zIHRv
CnJlcXVpcmUgTkZTIHRvIGNyYXNoIGl0LgoKU3RlcHMgdG8gcmVwbGljYXRlOgoKIyBjZCAvbXkv
bmZzL2RpcmVjdG9yeQojIHdnZXQgaHR0cDovL2Nsb3VkLWltYWdlcy51YnVudHUuY29tL3ByZWNp
c2UvY3VycmVudC9wcmVjaXNlLXNlcnZlci1jbG91ZGltZy1hbWQ2NC1kaXNrMS5pbWcKIyBxZW11
LWltZyBjcmVhdGUgLWYgcWNvdzIgLWIgcHJlY2lzZS1zZXJ2ZXItY2xvdWRpbWctYW1kNjQtZGlz
azEuaW1nIHRlc3RkaXNrLnFjb3cyIDIwRwojIHhsIGNyZWF0ZSB4bGNyZWF0ZS1xY293LmNvbmYK
ClN0YXJ0IHRoZSBtYWNoaW5lIGFuZCAodGhpcyBpcyBpbXBvcnRhbnQpIGNoYW5nZSB0aGUgYm9v
dCBsaW5lCnRvIGluY2x1ZGUgdGhlIHRleHQgJ2RzPW5vY2xvdWQgdWJ1bnR1LXBhc3M9cGFzc3dv
cmQnICh3aGljaApzdG9wcyB0aGUgaW1hZ2UgaGFuZ2luZyB3aGlsc3QgaXQncyB0cnlpbmcgdG8g
ZmV0Y2ggbWV0YWRhdGEpLgpZb3UgbWF5IHdhbnQgdG8gcmVtb3ZlIGNvbnNvbGUgcmVkaXJlY3Rp
b24gdG8gc2VyaWFsLgpJdCBzaG91bGQgY3Jhc2ggZG9tMCBpbiBsZXNzIHRoYW4gYSBtaW51dGUu
CgpUaGUgY29uZmlnIGZpbGUgaXMgcGFzdGVkIGJlbG93LgoKVGhpcyBpcyByZXBsaWNhYmxlIGlu
ZGVwZW5kZW50IG9mIGhhcmR3YXJlICh3ZSd2ZSB0cmllZCBvbiA0IGRpZmZlcmVudAptYWNoaW5l
cyBvZiB2YXJpb3VzIHR5cGVzKS4gSXQgaXMgcmVwbGljYWJsZSBpbmRlcGVuZGVudCBvZiBkb20w
Cmtlcm5lbCAod2UndmUgdHJpZWQgMy4yLjAtMzIgYW5kIHRoZSBjdXJyZW50IHF1YW50YWwga2Vy
bmVsLCBhbmQgYQpmZXcgb3RoZXJzKS4gSXQgYWxzbyBkb2VzIG5vdCBoYXBwZW4gb24ga3ZtIChl
eGFjdGx5IHRoZSBzYW1lIHNldHVwKS4KClRoaXMgbG9va3MgYSBiaXQgbGlrZSB0aGlzIGFuY2ll
bnQgYnVnOgogIGh0dHA6Ly9idWdzLmRlYmlhbi5vcmcvY2dpLWJpbi9idWdyZXBvcnQuY2dpP2J1
Zz02NDA5NDEKd2hpY2ggMjAxMSBidWcgd2hpY2ggSWFuIENhbXBiZWxsIChjb3BpZWQpIHJlbGF0
ZWQgdG8gc29tZSBtb3JlCmFuY2llbnQgYnVncywgc3BlY2lmaWNhbGx5IHRoaXMgb25lOgogIGh0
dHA6Ly9tYXJjLmluZm8vP2w9bGludXgtbmZzJm09MTIyNDI0MTMyNzI5NzIwJnc9MgpIb3dldmVy
LCBhcyBmYXIgYXMgSSBjYW4gdGVsbCBpdCBicmVha3MgZXZlbiB3aXRoIG1vZGVybgprZXJuZWxz
IHdoZXJlIHRoZSByZWxldmFudCBORlMgY2hhbmdlcyB3ZXJlIG1hZGUuIEFsc28sCndlIGRvIG5v
dCBhcHBlYXIgdG8gbmVlZCB0byBmb3JjZSByZXRyYW5zbWl0cyB0byBoYXBwZW4KKGl0J3MgcG9z
c2libGUgdGhhdCB0aGVyZSBpcyBzb21lIGxvY2t1cCBvbiBYZW4gd2hpY2ggaXMKY2F1c2luZyB0
aGUgcmV0cmFuc21pdCB0byBvY2N1ciB3aGljaCBpcyB0cmlnZ2VyaW5nIHRoZQppc3N1ZSkuCgpB
bnkgaWRlYXM/CgotLSAKQWxleCBCbGlnaAoKCgojIFRoZSBkb21haW4gYnVpbGQgZnVuY3Rpb24u
IEhWTSBkb21haW4gdXNlcyAnaHZtJy4KYnVpbGRlcj0naHZtJwoKIyBJbml0aWFsIG1lbW9yeSBh
bGxvY2F0aW9uIChpbiBtZWdhYnl0ZXMpIGZvciB0aGUgbmV3IGRvbWFpbi4KIwojIFdBUk5JTkc6
IENyZWF0aW5nIGEgZG9tYWluIHdpdGggaW5zdWZmaWNpZW50IG1lbW9yeSBtYXkgY2F1c2Ugb3V0
IG9mCiMgICAgICAgICAgbWVtb3J5IGVycm9ycy4gVGhlIGRvbWFpbiBuZWVkcyBlbm91Z2ggbWVt
b3J5IHRvIGJvb3Qga2VybmVsCiMgICAgICAgICAgYW5kIG1vZHVsZXMuIEFsbG9jYXRpbmcgbGVz
cyB0aGFuIDMyTUJzIGlzIG5vdCByZWNvbW1lbmRlZC4KbWVtb3J5ID0gNTEyCgoKIyBBIG5hbWUg
Zm9yIHlvdXIgZG9tYWluLiBBbGwgZG9tYWlucyBtdXN0IGhhdmUgZGlmZmVyZW50IG5hbWVzLgpu
YW1lID0gIlVidW50dVhlbiIKCiMgMTI4LWJpdCBVVUlEIGZvciB0aGUgZG9tYWluLiAgVGhlIGRl
ZmF1bHQgYmVoYXZpb3IgaXMgdG8gZ2VuZXJhdGUgYSBuZXcgVVVJRAojIG9uIGVhY2ggY2FsbCB0
byAneG0gY3JlYXRlJy4KI3V1aWQgPSAiMDZlZDAwZmUtMTE2Mi00ZmM0LWI1ZDgtMTE5OTNlZTRh
OGI5IgoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgVGhlIG51bWJlciBvZiBjcHVzIGd1ZXN0IHBs
YXRmb3JtIGhhcywgZGVmYXVsdD0xCnZjcHVzPTIKCgpkaXNrID0gWyAndGFwOnFjb3cyOi9teS9u
ZnMvZGlyZWN0b3J5L3Rlc3RkaXNrLnFjb3cyLHh2ZGEsdycgXQoKdmlmID0gWydtYWM9MDA6MTY6
M2U6MjU6OTY6YzggLCBicmlkZ2U9ZGVmYXVsdGJyJ10KCmRldmljZV9tb2RlbF92ZXJzaW9uID0g
J3FlbXUteGVuJwpkZXZpY2VfbW9kZWxfb3ZlcnJpZGUgPSAnL3Vzci9saWIveGVuL2Jpbi9xZW11
LXN5c3RlbS1pMzg2JwojZGV2aWNlX21vZGVsX292ZXJyaWRlID0gJy91c3IvYmluL3FlbXUtc3lz
dGVtLXg4Nl82NCcKI2RldmljZV9tb2RlbF9hcmdzID0gWyAnLW1vbml0b3InLCAndGNwOjEyNy4w
LjAuMToyMzQ1JyBdCgpzZGw9MAoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBlbmFibGUgT3BlbkdM
IGZvciB0ZXh0dXJlIHJlbmRlcmluZyBpbnNpZGUgdGhlIFNETCB3aW5kb3csIGRlZmF1bHQgPSAx
CiMgdmFsaWQgb25seSBpZiBzZGwgaXMgZW5hYmxlZC4Kb3BlbmdsPTEKCiMtLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tCiMgZW5hYmxlIFZOQyBsaWJyYXJ5IGZvciBncmFwaGljcywgZGVmYXVsdCA9IDEKdm5j
PTEKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgYWRkcmVzcyB0aGF0IHNob3VsZCBiZSBsaXN0ZW5l
ZCBvbiBmb3IgdGhlIFZOQyBzZXJ2ZXIgaWYgdm5jIGlzIHNldC4KIyBkZWZhdWx0IGlzIHRvIHVz
ZSAndm5jLWxpc3Rlbicgc2V0dGluZyBmcm9tCiMgYXV4YmluLnhlbl9jb25maWdkaXIoKSArIC94
ZW5kLWNvbmZpZy5zeHAKdm5jbGlzdGVuPSIwLjAuMC4wIgoKIy0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
IyBzZXQgVk5DIGRpc3BsYXkgbnVtYmVyLCBkZWZhdWx0ID0gZG9taWQKdm5jZGlzcGxheT0wCgoj
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLQojIHRyeSB0byBmaW5kIGFuIHVudXNlZCBwb3J0IGZvciB0aGUg
Vk5DIHNlcnZlciwgZGVmYXVsdCA9IDEKdm5jdW51c2VkPTAKCiMtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
CiMgc2V0IHBhc3N3b3JkIGZvciBkb21haW4ncyBWTkMgY29uc29sZQojIGRlZmF1bHQgaXMgZGVw
ZW50cyBvbiB2bmNwYXNzd2QgaW4geGVuZC1jb25maWcuc3hwCnZuY3Bhc3N3ZD0ncGFzc3dvcmQn
CgoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBlbmFibGUgc3RkdmdhLCBkZWZhdWx0ID0gMCAodXNl
IGNpcnJ1cyBsb2dpYyBkZXZpY2UgbW9kZWwpCnN0ZHZnYT0wCgojLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0KIyAgIHNlcmlhbCBwb3J0IHJlLWRpcmVjdCB0byBwdHkgZGVpdmNlLCAvZGV2L3B0cy9uCiMg
ICB0aGVuIHhtIGNvbnNvbGUgb3IgbWluaWNvbSBjYW4gY29ubmVjdApzZXJpYWw9J3B0eScKCgoK
CgpLZXJuZWwgMy4yLjAtMzItZ2VuZXJpYyBvbiBhbiB4ODZfNjQKClsgMTQxNi45OTI0MDJdIEJV
RzogdW5hYmxlIHRvIGhhbmRsZSBrZXJuZWwgcGFnaW5nIHJlcXVlc3QgYXQKZmZmZjg4MDczZmVl
NmUwMApbIDE0MTYuOTkyOTAyXSBJUDogWzxmZmZmZmZmZjgxMzE4ZTJiPl0gbWVtY3B5KzB4Yi8w
eDEyMApbIDE0MTYuOTkzMjQ0XSBQR0QgMWMwNjA2NyBQVUQgN2VjNzMwNjcgUE1EIDdlZTczMDY3
IFBURSAwClsgMTQxNi45OTM5ODVdIE9vcHM6IDAwMDAgWyMxXSBTTVAKWyAxNDE2Ljk5NDQzM10g
Q1BVIDQKWyAxNDE2Ljk5NDU4N10gTW9kdWxlcyBsaW5rZWQgaW46IHh0X3BoeXNkZXYgeGVuX3Bj
aWJhY2sgeGVuX25ldGJhY2sKeGVuX2Jsa2JhY2sgeGVuX2dudGFsbG9jIHhlbl9nbnRkZXYgeGVu
X2V2dGNobiB4ZW5mcyB2ZXRoIGlwNnRfTE9HCm5mX2Nvbm50cmFja19pcHY2IG5mXwpkZWZyYWdf
aXB2NiBpcDZ0YWJsZV9maWx0ZXIgaXA2X3RhYmxlcyBpcHRfTE9HIHh0X2xpbWl0IHh0X3N0YXRl
Cnh0X3RjcHVkcCBuZl9jb25udHJhY2tfbmV0bGluayBuZm5ldGxpbmsgZWJ0X2lwIGVidGFibGVf
ZmlsdGVyCmlwdGFibGVfbWFuZ2xlIGlwdF9NQVNRVUVSQURFCmlwdGFibGVfbmF0IG5mX25hdCBu
Zl9jb25udHJhY2tfaXB2NCBuZl9jb25udHJhY2sgbmZfZGVmcmFnX2lwdjQKaXB0YWJsZV9maWx0
ZXIgaXBfdGFibGVzIGliX2lzZXIgcmRtYV9jbSBpYl9jbSBpd19jbSBpYl9zYSBpYl9tYWQKaWJf
Y29yZSBpYl9hZGRyIGlzY3NpX3RjcApsaWJpc2NzaV90Y3AgbGliaXNjc2kgc2NzaV90cmFuc3Bv
cnRfaXNjc2kgZWJ0YWJsZV9icm91dGUgZWJ0YWJsZXMKeF90YWJsZXMgZGNkYmFzIHBzbW91c2Ug
c2VyaW9fcmF3IGFtZDY0X2VkYWNfbW9kIHVzYmhpZCBoaWQgZWRhY19jb3JlCnNwNTEwMF90Y28g
aTJjX3BpaXgKNCBlZGFjX21jZV9hbWQgZmFtMTVoX3Bvd2VyIGsxMHRlbXAgaWdiIGJueDIgYWNw
aV9wb3dlcl9tZXRlciBtYWNfaGlkCmRtX211bHRpcGF0aCBicmlkZ2UgODAyMXEgZ2FycCBzdHAg
aXhnYmUgZGNhIG1kaW8gbmZzZCBuZnMgbG9ja2QgZnNjYWNoZQphdXRoX3JwY2dzcyBuZgpzX2Fj
bCBzdW5ycGMgW2xhc3QgdW5sb2FkZWQ6IHNjc2lfdHJhbnNwb3J0X2lzY3NpXQpbIDE0MTcuMDA1
MDExXQpbIDE0MTcuMDA1MDExXSBQaWQ6IDAsIGNvbW06IHN3YXBwZXIvNCBUYWludGVkOiBHIMKg
wqDCoMKgwqDCoMKgVwozLjIuMC0zMi1nZW5lcmljICM1MS1VYnVudHUgRGVsbCBJbmMuIFBvd2Vy
RWRnZSBSNzE1LzBDNU1NSwpbIDE0MTcuMDA1MDExXSBSSVA6IGUwMzA6WzxmZmZmZmZmZjgxMzE4
ZTJiPl0gwqBbPGZmZmZmZmZmODEzMThlMmI+XQptZW1jcHkrMHhiLzB4MTIwClsgMTQxNy4wMDUw
MTFdIFJTUDogZTAyYjpmZmZmODgwMDYwMDgzYjA4IMKgRUZMQUdTOiAwMDAxMDI0NgpbIDE0MTcu
MDA1MDExXSBSQVg6IGZmZmY4ODAwMWUxMmM5ZTQgUkJYOiAwMDAwMDAwMDAwMDAwMjEwIFJDWDoK
MDAwMDAwMDAwMDAwMDA0MApbIDE0MTcuMDA1MDExXSBSRFg6IDAwMDAwMDAwMDAwMDAwMDAgUlNJ
OiBmZmZmODgwNzNmZWU2ZTAwIFJESToKZmZmZjg4MDAxZTEyYzllNApbIDE0MTcuMDA1MDExXSBS
QlA6IGZmZmY4ODAwNjAwODNiNzAgUjA4OiAwMDAwMDAwMDAwMDAwMmU4IFIwOToKMDAwMDAwMDAw
MDAwMDIwMApbIDE0MTcuMDA1MDExXSBSMTA6IGZmZmY4ODAwMWUxMmM5ZTQgUjExOiAwMDAwMDAw
MDAwMDAwMjgwIFIxMjoKMDAwMDAwMDAwMDAwMDBlOApbIDE0MTcuMDA1MDExXSBSMTM6IGZmZmY4
ODAwNGIwMTRjMDAgUjE0OiBmZmZmODgwMDRiNTMyMDAwIFIxNToKMDAwMDAwMDAwMDAwMDAwMQpb
IDE0MTcuMDA1MDExXSBGUzogwqAwMDAwN2YxYTk5MDg5NzAwKDAwMDApIEdTOmZmZmY4ODAwNjAw
ODAwMDAoMDAwMCkKa25sR1M6MDAwMDAwMDAwMDAwMDAwMApbIDE0MTcuMDA1MDExXSBDUzogwqBl
MDMzIERTOiAwMDJiIEVTOiAwMDJiIENSMDogMDAwMDAwMDA4MDA1MDAzYgpbIDE0MTcuMDA1MDEx
XSBDUjI6IGZmZmY4ODA3M2ZlZTZlMDAgQ1IzOiAwMDAwMDAwMDE1ZDIyMDAwIENSNDoKMDAwMDAw
MDAwMDA0MDY2MApbIDE0MTcuMDA1MDExXSBEUjA6IDAwMDAwMDAwMDAwMDAwMDAgRFIxOiAwMDAw
MDAwMDAwMDAwMDAwIERSMjoKMDAwMDAwMDAwMDAwMDAwMApbIDE0MTcuMDA1MDExXSBEUjM6IDAw
MDAwMDAwMDAwMDAwMDAgRFI2OiAwMDAwMDAwMGZmZmYwZmYwIERSNzoKMDAwMDAwMDAwMDAwMDQw
MApbIDE0MTcuMDA1MDExXSBQcm9jZXNzIHN3YXBwZXIvNCAocGlkOiAwLCB0aHJlYWRpbmZvIGZm
ZmY4ODAwNGI1MzIwMDAsCnRhc2sgZmZmZjg4MDA0YjUzODAwMCkKWyAxNDE3LjAwNTAxMV0gU3Rh
Y2s6ClsgMTQxNy4wMDUwMTFdIMKgZmZmZmZmZmY4MTUzMmMwZSAwMDAwMDAwMDAwMDAwMDAwIGZm
ZmY4ODAwMDAwMDAyZTgKZmZmZjg4MDAwMDAwMDIwMApbIDE0MTcuMDA1MDExXSDCoGZmZmY4ODAw
MWUxMmM5ZTQgMDAwMDAwMDAwMDAwMDIwMCBmZmZmODgwMDRiNTMzZmQ4CmZmZmY4ODAwNjAwODNi
YTAKWyAxNDE3LjAwNTAxMV0gwqBmZmZmODgwMDRiMDE1ODAwIGZmZmY4ODAwNGIwMTRjMDAgZmZm
Zjg4MDAxYjE0MjAwMAowMDAwMDAwMDAwMDAwMGZjClsgMTQxNy4wMDUwMTFdIENhbGwgVHJhY2U6
ClsgMTQxNy4wMDUwMTFdIMKgPElSUT4KWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1MzJj
MGU+XSA/IHNrYl9jb3B5X2JpdHMrMHgxNmUvMHgyYzAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZm
ZmZmODE1MzQ2M2E+XSBza2JfY29weSsweDhhLzB4YjAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZm
ZmZmODE1NGI1MTc+XSBuZWlnaF9wcm9iZSsweDM3LzB4ODAKWyAxNDE3LjAwNTAxMV0gwqBbPGZm
ZmZmZmZmODE1NGI5ZGI+XSBfX25laWdoX2V2ZW50X3NlbmQrMHhiYi8weDIxMApbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTU0YmM3Mz5dIG5laWdoX3Jlc29sdmVfb3V0cHV0KzB4MTQzLzB4
MWYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTZkZGU1Pl0gPyBuZl9ob29rX3Nsb3cr
MHg3NS8weDE1MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU3YTUxMD5dID8gaXBfZnJh
Z21lbnQrMHg4MTAvMHg4MTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1N2E2OGU+XSBp
cF9maW5pc2hfb3V0cHV0KzB4MTdlLzB4MmYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
NTMzZGRiPl0gPyBfX2FsbG9jX3NrYisweDRiLzB4MjQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZm
ZmZmZjgxNTdiMWU4Pl0gaXBfb3V0cHV0KzB4OTgvMHhhMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTU3YThhND5dID8gX19pcF9sb2NhbF9vdXQrMHhhNC8weGIwClsgMTQxNy4wMDUwMTFd
IMKgWzxmZmZmZmZmZjgxNTdhOGQ5Pl0gaXBfbG9jYWxfb3V0KzB4MjkvMHgzMApbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTU3YWEzYz5dIGlwX3F1ZXVlX3htaXQrMHgxNWMvMHg0MTAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1OTU4NDA+XSA/IHRjcF9yZXRyYW5zbWl0X3RpbWVy
KzB4NDQwLzB4NDQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTkyYzY5Pl0gdGNwX3Ry
YW5zbWl0X3NrYisweDM1OS8weDU4MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU5M2Jl
MT5dIHRjcF9yZXRyYW5zbWl0X3NrYisweDE3MS8weDMxMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTU5NTYxYj5dIHRjcF9yZXRyYW5zbWl0X3RpbWVyKzB4MjFiLzB4NDQwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1OTI4Pl0gdGNwX3dyaXRlX3RpbWVyKzB4ZTgvMHgxMTAK
WyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1OTU4NDA+XSA/IHRjcF9yZXRyYW5zbWl0X3Rp
bWVyKzB4NDQwLzB4NDQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxMDc1ZDM2Pl0gY2Fs
bF90aW1lcl9mbisweDQ2LzB4MTYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1ODQw
Pl0gPyB0Y3BfcmV0cmFuc21pdF90aW1lcisweDQ0MC8weDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8
ZmZmZmZmZmY4MTA3NzY4Mj5dIHJ1bl90aW1lcl9zb2Z0aXJxKzB4MTMyLzB4MmEwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxMDZlNWQ4Pl0gX19kb19zb2Z0aXJxKzB4YTgvMHgyMTAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEzYTk0Yjc+XSA/IF9feGVuX2V2dGNobl9kb191cGNh
bGwrMHgyMDcvMHgyNTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2NjU2YWM+XSBjYWxs
X3NvZnRpcnErMHgxYy8weDMwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxMDE1MzA1Pl0g
ZG9fc29mdGlycSsweDY1LzB4YTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwNmU5YmU+
XSBpcnFfZXhpdCsweDhlLzB4YjAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEzYWI1OTU+
XSB4ZW5fZXZ0Y2huX2RvX3VwY2FsbCsweDM1LzB4NTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZm
ZmZmODE2NjU2ZmU+XSB4ZW5fZG9faHlwZXJ2aXNvcl9jYWxsYmFjaysweDFlLzB4MzAKWyAxNDE3
LjAwNTAxMV0gwqA8RU9JPgpbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAwMTNhYT5dID8g
aHlwZXJjYWxsX3BhZ2UrMHgzYWEvMHgxMDAwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
MDAxM2FhPl0gPyBoeXBlcmNhbGxfcGFnZSsweDNhYS8weDEwMDAKWyAxNDE3LjAwNTAxMV0gwqBb
PGZmZmZmZmZmODEwMGEyZDA+XSA/IHhlbl9zYWZlX2hhbHQrMHgxMC8weDIwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxMDFiOTgzPl0gPyBkZWZhdWx0X2lkbGUrMHg1My8weDFkMApbIDE0
MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAxMjIzNj5dID8gY3B1X2lkbGUrMHhkNi8weDEyMApb
IDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAwYWIyOT5dID8geGVuX2lycV9lbmFibGVfZGly
ZWN0X3JlbG9jKzB4NC8weDQKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2MzM2OWM+XSA/
IGNwdV9icmluZ3VwX2FuZF9pZGxlKzB4ZS8weDEwClsgMTQxNy4wMDUwMTFdIENvZGU6IDU4IDQ4
IDJiIDQzIDUwIDg4IDQzIDRlIDQ4IDgzIGM0IDA4IDViIDVkIGMzIDkwIGU4CjFiIGZlIGZmIGZm
IGViIGU2IDkwIDkwIDkwIDkwIDkwIDkwIDkwIDkwIDkwIDQ4IDg5IGY4IDg5IGQxIGMxIGU5IDAz
IDgzCmUyIDA3IDxmMz4gNDggYTUgODkgZDEgZjMgYTQgYzMgMjAgNDggODMgZWEgMjAgNGMgOGIg
MDYgNGMgOGIgNGUgMDggNGMKWyAxNDE3LjAwNTAxMV0gUklQIMKgWzxmZmZmZmZmZjgxMzE4ZTJi
Pl0gbWVtY3B5KzB4Yi8weDEyMApbIDE0MTcuMDA1MDExXSDCoFJTUCA8ZmZmZjg4MDA2MDA4M2Iw
OD4KWyAxNDE3LjAwNTAxMV0gQ1IyOiBmZmZmODgwNzNmZWU2ZTAwClsgMTQxNy4wMDUwMTFdIC0t
LVsgZW5kIHRyYWNlIGFlNGU3ZjU2ZWEwNjY1ZmUgXS0tLQpbIDE0MTcuMDA1MDExXSBLZXJuZWwg
cGFuaWMgLSBub3Qgc3luY2luZzogRmF0YWwgZXhjZXB0aW9uIGluIGludGVycnVwdApbIDE0MTcu
MDA1MDExXSBQaWQ6IDAsIGNvbW06IHN3YXBwZXIvNCBUYWludGVkOiBHIMKgwqDCoMKgwqBEIFcK
My4yLjAtMzItZ2VuZXJpYyAjNTEtVWJ1bnR1ClsgMTQxNy4wMDUwMTFdIENhbGwgVHJhY2U6Clsg
MTQxNy4wMDUwMTFdIMKgPElSUT4gwqBbPGZmZmZmZmZmODE2NDIxOTc+XSBwYW5pYysweDkxLzB4
MWE0ClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNjVjMDFhPl0gb29wc19lbmQrMHhlYS8w
eGYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNjQxMDI3Pl0gbm9fY29udGV4dCsweDE1
MC8weDE1ZApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTY0MTFmZD5dIF9fYmFkX2FyZWFf
bm9zZW1hcGhvcmUrMHgxYzkvMHgxZTgKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2NDA4
MzU+XSA/IHB0ZV9vZmZzZXRfa2VybmVsKzB4MTMvMHgzYwpbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTY0MTIyZj5dIGJhZF9hcmVhX25vc2VtYXBob3JlKzB4MTMvMHgxNQpbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTY1ZWMzNj5dIGRvX3BhZ2VfZmF1bHQrMHg0MjYvMHg1MjAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2NWIwY2U+XSA/IF9yYXdfc3Bpbl9sb2NrX2lycXNh
dmUrMHgyZS8weDQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxMDU5ZDhhPl0gPyBnZXRf
bm9oel90aW1lcl90YXJnZXQrMHg1YS8weGMwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
NjViMDRlPl0gPyBfcmF3X3NwaW5fdW5sb2NrX2lycXJlc3RvcmUrMHgxZS8weDMwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxMDc3ZjkzPl0gPyBtb2RfdGltZXJfcGVuZGluZysweDExMy8w
eDI0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmZhMDMxN2YzND5dID8gX19uZl9jdF9yZWZy
ZXNoX2FjY3QrMHhkNC8weDEwMApbbmZfY29ubnRyYWNrXQpbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTY1YjViNT5dIHBhZ2VfZmF1bHQrMHgyNS8weDMwClsgMTQxNy4wMDUwMTFdIMKgWzxm
ZmZmZmZmZjgxMzE4ZTJiPl0gPyBtZW1jcHkrMHhiLzB4MTIwClsgMTQxNy4wMDUwMTFdIMKgWzxm
ZmZmZmZmZjgxNTMyYzBlPl0gPyBza2JfY29weV9iaXRzKzB4MTZlLzB4MmMwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxNTM0NjNhPl0gc2tiX2NvcHkrMHg4YS8weGIwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxNTRiNTE3Pl0gbmVpZ2hfcHJvYmUrMHgzNy8weDgwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxNTRiOWRiPl0gX19uZWlnaF9ldmVudF9zZW5kKzB4YmIvMHgy
MTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1NGJjNzM+XSBuZWlnaF9yZXNvbHZlX291
dHB1dCsweDE0My8weDFmMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU2ZGRlNT5dID8g
bmZfaG9va19zbG93KzB4NzUvMHgxNTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1N2E1
MTA+XSA/IGlwX2ZyYWdtZW50KzB4ODEwLzB4ODEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZm
ZjgxNTdhNjhlPl0gaXBfZmluaXNoX291dHB1dCsweDE3ZS8weDJmMApbIDE0MTcuMDA1MDExXSDC
oFs8ZmZmZmZmZmY4MTUzM2RkYj5dID8gX19hbGxvY19za2IrMHg0Yi8weDI0MApbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTU3YjFlOD5dIGlwX291dHB1dCsweDk4LzB4YTAKWyAxNDE3LjAw
NTAxMV0gwqBbPGZmZmZmZmZmODE1N2E4YTQ+XSA/IF9faXBfbG9jYWxfb3V0KzB4YTQvMHhiMApb
IDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU3YThkOT5dIGlwX2xvY2FsX291dCsweDI5LzB4
MzAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1N2FhM2M+XSBpcF9xdWV1ZV94bWl0KzB4
MTVjLzB4NDEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1ODQwPl0gPyB0Y3BfcmV0
cmFuc21pdF90aW1lcisweDQ0MC8weDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU5
MmM2OT5dIHRjcF90cmFuc21pdF9za2IrMHgzNTkvMHg1ODAKWyAxNDE3LjAwNTAxMV0gwqBbPGZm
ZmZmZmZmODE1OTNiZTE+XSB0Y3BfcmV0cmFuc21pdF9za2IrMHgxNzEvMHgzMTAKWyAxNDE3LjAw
NTAxMV0gwqBbPGZmZmZmZmZmODE1OTU2MWI+XSB0Y3BfcmV0cmFuc21pdF90aW1lcisweDIxYi8w
eDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU5NTkyOD5dIHRjcF93cml0ZV90aW1l
cisweGU4LzB4MTEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1ODQwPl0gPyB0Y3Bf
cmV0cmFuc21pdF90aW1lcisweDQ0MC8weDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4
MTA3NWQzNj5dIGNhbGxfdGltZXJfZm4rMHg0Ni8weDE2MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTU5NTg0MD5dID8gdGNwX3JldHJhbnNtaXRfdGltZXIrMHg0NDAvMHg0NDAKWyAxNDE3
LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwNzc2ODI+XSBydW5fdGltZXJfc29mdGlycSsweDEzMi8w
eDJhMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTA2ZTVkOD5dIF9fZG9fc29mdGlycSsw
eGE4LzB4MjEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxM2E5NGI3Pl0gPyBfX3hlbl9l
dnRjaG5fZG9fdXBjYWxsKzB4MjA3LzB4MjUwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
NjY1NmFjPl0gY2FsbF9zb2Z0aXJxKzB4MWMvMHgzMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZm
ZmY4MTAxNTMwNT5dIGRvX3NvZnRpcnErMHg2NS8weGEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZm
ZmZmZjgxMDZlOWJlPl0gaXJxX2V4aXQrMHg4ZS8weGIwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZm
ZmZmZjgxM2FiNTk1Pl0geGVuX2V2dGNobl9kb191cGNhbGwrMHgzNS8weDUwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxNjY1NmZlPl0geGVuX2RvX2h5cGVydmlzb3JfY2FsbGJhY2srMHgx
ZS8weDMwClsgMTQxNy4wMDUwMTFdIMKgPEVPST4gwqBbPGZmZmZmZmZmODEwMDEzYWE+XSA/IGh5
cGVyY2FsbF9wYWdlKzB4M2FhLzB4MTAwMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAw
MTNhYT5dID8gaHlwZXJjYWxsX3BhZ2UrMHgzYWEvMHgxMDAwClsgMTQxNy4wMDUwMTFdIMKgWzxm
ZmZmZmZmZjgxMDBhMmQwPl0gPyB4ZW5fc2FmZV9oYWx0KzB4MTAvMHgyMApbIDE0MTcuMDA1MDEx
XSDCoFs8ZmZmZmZmZmY4MTAxYjk4Mz5dID8gZGVmYXVsdF9pZGxlKzB4NTMvMHgxZDAKWyAxNDE3
LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwMTIyMzY+XSA/IGNwdV9pZGxlKzB4ZDYvMHgxMjAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwMGFiMjk+XSA/IHhlbl9pcnFfZW5hYmxlX2RpcmVj
dF9yZWxvYysweDQvMHg0ClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNjMzNjljPl0gPyBj
cHVfYnJpbmd1cF9hbmRfaWRsZSsweGUvMHgxMAooWEVOKSBEb21haW4gMCBjcmFzaGVkOiAnbm9y
ZWJvb3QnIHNldCAtIG5vdCByZWJvb3RpbmcuCQoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

MWYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTZkZGU1Pl0gPyBuZl9ob29rX3Nsb3cr
MHg3NS8weDE1MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU3YTUxMD5dID8gaXBfZnJh
Z21lbnQrMHg4MTAvMHg4MTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1N2E2OGU+XSBp
cF9maW5pc2hfb3V0cHV0KzB4MTdlLzB4MmYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
NTMzZGRiPl0gPyBfX2FsbG9jX3NrYisweDRiLzB4MjQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZm
ZmZmZjgxNTdiMWU4Pl0gaXBfb3V0cHV0KzB4OTgvMHhhMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTU3YThhND5dID8gX19pcF9sb2NhbF9vdXQrMHhhNC8weGIwClsgMTQxNy4wMDUwMTFd
IMKgWzxmZmZmZmZmZjgxNTdhOGQ5Pl0gaXBfbG9jYWxfb3V0KzB4MjkvMHgzMApbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTU3YWEzYz5dIGlwX3F1ZXVlX3htaXQrMHgxNWMvMHg0MTAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1OTU4NDA+XSA/IHRjcF9yZXRyYW5zbWl0X3RpbWVy
KzB4NDQwLzB4NDQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTkyYzY5Pl0gdGNwX3Ry
YW5zbWl0X3NrYisweDM1OS8weDU4MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU5M2Jl
MT5dIHRjcF9yZXRyYW5zbWl0X3NrYisweDE3MS8weDMxMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTU5NTYxYj5dIHRjcF9yZXRyYW5zbWl0X3RpbWVyKzB4MjFiLzB4NDQwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1OTI4Pl0gdGNwX3dyaXRlX3RpbWVyKzB4ZTgvMHgxMTAK
WyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1OTU4NDA+XSA/IHRjcF9yZXRyYW5zbWl0X3Rp
bWVyKzB4NDQwLzB4NDQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxMDc1ZDM2Pl0gY2Fs
bF90aW1lcl9mbisweDQ2LzB4MTYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1ODQw
Pl0gPyB0Y3BfcmV0cmFuc21pdF90aW1lcisweDQ0MC8weDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8
ZmZmZmZmZmY4MTA3NzY4Mj5dIHJ1bl90aW1lcl9zb2Z0aXJxKzB4MTMyLzB4MmEwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxMDZlNWQ4Pl0gX19kb19zb2Z0aXJxKzB4YTgvMHgyMTAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEzYTk0Yjc+XSA/IF9feGVuX2V2dGNobl9kb191cGNh
bGwrMHgyMDcvMHgyNTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2NjU2YWM+XSBjYWxs
X3NvZnRpcnErMHgxYy8weDMwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxMDE1MzA1Pl0g
ZG9fc29mdGlycSsweDY1LzB4YTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwNmU5YmU+
XSBpcnFfZXhpdCsweDhlLzB4YjAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEzYWI1OTU+
XSB4ZW5fZXZ0Y2huX2RvX3VwY2FsbCsweDM1LzB4NTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZm
ZmZmODE2NjU2ZmU+XSB4ZW5fZG9faHlwZXJ2aXNvcl9jYWxsYmFjaysweDFlLzB4MzAKWyAxNDE3
LjAwNTAxMV0gwqA8RU9JPgpbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAwMTNhYT5dID8g
aHlwZXJjYWxsX3BhZ2UrMHgzYWEvMHgxMDAwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
MDAxM2FhPl0gPyBoeXBlcmNhbGxfcGFnZSsweDNhYS8weDEwMDAKWyAxNDE3LjAwNTAxMV0gwqBb
PGZmZmZmZmZmODEwMGEyZDA+XSA/IHhlbl9zYWZlX2hhbHQrMHgxMC8weDIwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxMDFiOTgzPl0gPyBkZWZhdWx0X2lkbGUrMHg1My8weDFkMApbIDE0
MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAxMjIzNj5dID8gY3B1X2lkbGUrMHhkNi8weDEyMApb
IDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAwYWIyOT5dID8geGVuX2lycV9lbmFibGVfZGly
ZWN0X3JlbG9jKzB4NC8weDQKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2MzM2OWM+XSA/
IGNwdV9icmluZ3VwX2FuZF9pZGxlKzB4ZS8weDEwClsgMTQxNy4wMDUwMTFdIENvZGU6IDU4IDQ4
IDJiIDQzIDUwIDg4IDQzIDRlIDQ4IDgzIGM0IDA4IDViIDVkIGMzIDkwIGU4CjFiIGZlIGZmIGZm
IGViIGU2IDkwIDkwIDkwIDkwIDkwIDkwIDkwIDkwIDkwIDQ4IDg5IGY4IDg5IGQxIGMxIGU5IDAz
IDgzCmUyIDA3IDxmMz4gNDggYTUgODkgZDEgZjMgYTQgYzMgMjAgNDggODMgZWEgMjAgNGMgOGIg
MDYgNGMgOGIgNGUgMDggNGMKWyAxNDE3LjAwNTAxMV0gUklQIMKgWzxmZmZmZmZmZjgxMzE4ZTJi
Pl0gbWVtY3B5KzB4Yi8weDEyMApbIDE0MTcuMDA1MDExXSDCoFJTUCA8ZmZmZjg4MDA2MDA4M2Iw
OD4KWyAxNDE3LjAwNTAxMV0gQ1IyOiBmZmZmODgwNzNmZWU2ZTAwClsgMTQxNy4wMDUwMTFdIC0t
LVsgZW5kIHRyYWNlIGFlNGU3ZjU2ZWEwNjY1ZmUgXS0tLQpbIDE0MTcuMDA1MDExXSBLZXJuZWwg
cGFuaWMgLSBub3Qgc3luY2luZzogRmF0YWwgZXhjZXB0aW9uIGluIGludGVycnVwdApbIDE0MTcu
MDA1MDExXSBQaWQ6IDAsIGNvbW06IHN3YXBwZXIvNCBUYWludGVkOiBHIMKgwqDCoMKgwqBEIFcK
My4yLjAtMzItZ2VuZXJpYyAjNTEtVWJ1bnR1ClsgMTQxNy4wMDUwMTFdIENhbGwgVHJhY2U6Clsg
MTQxNy4wMDUwMTFdIMKgPElSUT4gwqBbPGZmZmZmZmZmODE2NDIxOTc+XSBwYW5pYysweDkxLzB4
MWE0ClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNjVjMDFhPl0gb29wc19lbmQrMHhlYS8w
eGYwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNjQxMDI3Pl0gbm9fY29udGV4dCsweDE1
MC8weDE1ZApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTY0MTFmZD5dIF9fYmFkX2FyZWFf
bm9zZW1hcGhvcmUrMHgxYzkvMHgxZTgKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2NDA4
MzU+XSA/IHB0ZV9vZmZzZXRfa2VybmVsKzB4MTMvMHgzYwpbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTY0MTIyZj5dIGJhZF9hcmVhX25vc2VtYXBob3JlKzB4MTMvMHgxNQpbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTY1ZWMzNj5dIGRvX3BhZ2VfZmF1bHQrMHg0MjYvMHg1MjAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE2NWIwY2U+XSA/IF9yYXdfc3Bpbl9sb2NrX2lycXNh
dmUrMHgyZS8weDQwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxMDU5ZDhhPl0gPyBnZXRf
bm9oel90aW1lcl90YXJnZXQrMHg1YS8weGMwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
NjViMDRlPl0gPyBfcmF3X3NwaW5fdW5sb2NrX2lycXJlc3RvcmUrMHgxZS8weDMwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxMDc3ZjkzPl0gPyBtb2RfdGltZXJfcGVuZGluZysweDExMy8w
eDI0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmZhMDMxN2YzND5dID8gX19uZl9jdF9yZWZy
ZXNoX2FjY3QrMHhkNC8weDEwMApbbmZfY29ubnRyYWNrXQpbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTY1YjViNT5dIHBhZ2VfZmF1bHQrMHgyNS8weDMwClsgMTQxNy4wMDUwMTFdIMKgWzxm
ZmZmZmZmZjgxMzE4ZTJiPl0gPyBtZW1jcHkrMHhiLzB4MTIwClsgMTQxNy4wMDUwMTFdIMKgWzxm
ZmZmZmZmZjgxNTMyYzBlPl0gPyBza2JfY29weV9iaXRzKzB4MTZlLzB4MmMwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxNTM0NjNhPl0gc2tiX2NvcHkrMHg4YS8weGIwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxNTRiNTE3Pl0gbmVpZ2hfcHJvYmUrMHgzNy8weDgwClsgMTQxNy4w
MDUwMTFdIMKgWzxmZmZmZmZmZjgxNTRiOWRiPl0gX19uZWlnaF9ldmVudF9zZW5kKzB4YmIvMHgy
MTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1NGJjNzM+XSBuZWlnaF9yZXNvbHZlX291
dHB1dCsweDE0My8weDFmMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU2ZGRlNT5dID8g
bmZfaG9va19zbG93KzB4NzUvMHgxNTAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1N2E1
MTA+XSA/IGlwX2ZyYWdtZW50KzB4ODEwLzB4ODEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZm
ZjgxNTdhNjhlPl0gaXBfZmluaXNoX291dHB1dCsweDE3ZS8weDJmMApbIDE0MTcuMDA1MDExXSDC
oFs8ZmZmZmZmZmY4MTUzM2RkYj5dID8gX19hbGxvY19za2IrMHg0Yi8weDI0MApbIDE0MTcuMDA1
MDExXSDCoFs8ZmZmZmZmZmY4MTU3YjFlOD5dIGlwX291dHB1dCsweDk4LzB4YTAKWyAxNDE3LjAw
NTAxMV0gwqBbPGZmZmZmZmZmODE1N2E4YTQ+XSA/IF9faXBfbG9jYWxfb3V0KzB4YTQvMHhiMApb
IDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU3YThkOT5dIGlwX2xvY2FsX291dCsweDI5LzB4
MzAKWyAxNDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODE1N2FhM2M+XSBpcF9xdWV1ZV94bWl0KzB4
MTVjLzB4NDEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1ODQwPl0gPyB0Y3BfcmV0
cmFuc21pdF90aW1lcisweDQ0MC8weDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU5
MmM2OT5dIHRjcF90cmFuc21pdF9za2IrMHgzNTkvMHg1ODAKWyAxNDE3LjAwNTAxMV0gwqBbPGZm
ZmZmZmZmODE1OTNiZTE+XSB0Y3BfcmV0cmFuc21pdF9za2IrMHgxNzEvMHgzMTAKWyAxNDE3LjAw
NTAxMV0gwqBbPGZmZmZmZmZmODE1OTU2MWI+XSB0Y3BfcmV0cmFuc21pdF90aW1lcisweDIxYi8w
eDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTU5NTkyOD5dIHRjcF93cml0ZV90aW1l
cisweGU4LzB4MTEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNTk1ODQwPl0gPyB0Y3Bf
cmV0cmFuc21pdF90aW1lcisweDQ0MC8weDQ0MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4
MTA3NWQzNj5dIGNhbGxfdGltZXJfZm4rMHg0Ni8weDE2MApbIDE0MTcuMDA1MDExXSDCoFs8ZmZm
ZmZmZmY4MTU5NTg0MD5dID8gdGNwX3JldHJhbnNtaXRfdGltZXIrMHg0NDAvMHg0NDAKWyAxNDE3
LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwNzc2ODI+XSBydW5fdGltZXJfc29mdGlycSsweDEzMi8w
eDJhMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTA2ZTVkOD5dIF9fZG9fc29mdGlycSsw
eGE4LzB4MjEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxM2E5NGI3Pl0gPyBfX3hlbl9l
dnRjaG5fZG9fdXBjYWxsKzB4MjA3LzB4MjUwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgx
NjY1NmFjPl0gY2FsbF9zb2Z0aXJxKzB4MWMvMHgzMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZm
ZmY4MTAxNTMwNT5dIGRvX3NvZnRpcnErMHg2NS8weGEwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZm
ZmZmZjgxMDZlOWJlPl0gaXJxX2V4aXQrMHg4ZS8weGIwClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZm
ZmZmZjgxM2FiNTk1Pl0geGVuX2V2dGNobl9kb191cGNhbGwrMHgzNS8weDUwClsgMTQxNy4wMDUw
MTFdIMKgWzxmZmZmZmZmZjgxNjY1NmZlPl0geGVuX2RvX2h5cGVydmlzb3JfY2FsbGJhY2srMHgx
ZS8weDMwClsgMTQxNy4wMDUwMTFdIMKgPEVPST4gwqBbPGZmZmZmZmZmODEwMDEzYWE+XSA/IGh5
cGVyY2FsbF9wYWdlKzB4M2FhLzB4MTAwMApbIDE0MTcuMDA1MDExXSDCoFs8ZmZmZmZmZmY4MTAw
MTNhYT5dID8gaHlwZXJjYWxsX3BhZ2UrMHgzYWEvMHgxMDAwClsgMTQxNy4wMDUwMTFdIMKgWzxm
ZmZmZmZmZjgxMDBhMmQwPl0gPyB4ZW5fc2FmZV9oYWx0KzB4MTAvMHgyMApbIDE0MTcuMDA1MDEx
XSDCoFs8ZmZmZmZmZmY4MTAxYjk4Mz5dID8gZGVmYXVsdF9pZGxlKzB4NTMvMHgxZDAKWyAxNDE3
LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwMTIyMzY+XSA/IGNwdV9pZGxlKzB4ZDYvMHgxMjAKWyAx
NDE3LjAwNTAxMV0gwqBbPGZmZmZmZmZmODEwMGFiMjk+XSA/IHhlbl9pcnFfZW5hYmxlX2RpcmVj
dF9yZWxvYysweDQvMHg0ClsgMTQxNy4wMDUwMTFdIMKgWzxmZmZmZmZmZjgxNjMzNjljPl0gPyBj
cHVfYnJpbmd1cF9hbmRfaWRsZSsweGUvMHgxMAooWEVOKSBEb21haW4gMCBjcmFzaGVkOiAnbm9y
ZWJvb3QnIHNldCAtIG5vdCByZWJvb3RpbmcuCQoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Dec 14 14:58:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 14:58:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWio-0001hH-FO; Fri, 14 Dec 2012 14:58:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1TjWin-0001hB-CS
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 14:58:41 +0000
Received: from [193.109.254.147:63166] by server-16.bemta-14.messagelabs.com
	id 3E/DB-18932-0AE3BC05; Fri, 14 Dec 2012 14:58:40 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355497001!10108758!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28877 invoked from network); 14 Dec 2012 14:56:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 14:56:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="691184"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 14:56:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 09:56:41 -0500
Received: from belegaer.cam.xci-test.com ([10.80.248.240])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<julien.grall@citrix.com>)	id 1TjWgq-0007Co-Vi;
	Fri, 14 Dec 2012 14:56:40 +0000
Message-ID: <50CB3D95.9000000@citrix.com>
Date: Fri, 14 Dec 2012 14:54:13 +0000
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20121027 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <53cbcf7907b54963906d7b5fa5ad40b4fff37886.1355399450.git.julien.grall@citrix.com>
	<20682.8150.410558.581149@mariner.uk.xensource.com>
In-Reply-To: <20682.8150.410558.581149@mariner.uk.xensource.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V3] libxenstore: filter watch events in
 libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/13/2012 06:35 PM, Ian Jackson wrote:

> Julien Grall writes ("[PATCH V3] libxenstore: filter watch events in libxenstore when we unwatch"):
>> XenStore queues watch events via a thread and notifies the user.
>> Sometimes xs_unwatch is called before all related messages are read. The
>> use case is non-threaded libevent, where we have two events A and B:
>>     - Event A will destroy something and call xs_unwatch;
>>     - Event B is used to notify that a node has changed in XenStore.
>> As the events are handled one by one, event A can be handled before
>> event B. So on the next xs_read_watch the user could retrieve a token
>> for the removed watch, and a segfault occurs if the token stores a
>> pointer to the structure (e.g. "backend:0xcafe").
>>
>> To avoid problems for existing applications using libxenstore, this
>> behaviour will only be enabled if XS_UNWATCH_SAFE is given to xs_open.
> 
> Sorry I didn't reply to your previous email on this subject.
> 
> I think this is a reasonable approach but the option name needs to be
> more descriptive and the documentation a bit better.
> 
> XS_UNWATCH_FILTER ?  XS_WATCH_TOKENS_UNIQUE ?


I think XS_UNWATCH_FILTER is better.

> 
> As for the documentation:


I think it's too restrictive for upstream. xs_unwatch can only filter
on the token and the subpath. I have modified your documentation
proposal below to take this into account. What do you think?

>  /*
>   * Setting XS_UNWATCH_FILTER arranges that after xs_unwatch, no
>   * related watch events will be delivered via xs_read_watch.  But

    * this relies on the (token, subpath) pair being unique.

>   *
>   *   XS_UNWATCH_FILTER clear           XS_UNWATCH_FILTER set
>   *
>   *    Even after xs_unwatch, "stale"    After xs_unwatch returns, no
>   *    instances of the watch event      watch events with the same
>   *    may be delivered.                 token and with the same subpath

    *                                      will be delivered.
    *

    *    A path and a subpath can be       The application must not
    *    registered with the same token.   register a path (/foo/) and
    *                                      a subpath (/foo/bar) with the
    *                                      same token until a successful
    *                                      xs_unwatch for the first
    *                                      watch has returned.

>   *                                      
>   */
> 
> With this specification it is not necessary to check the path when
> filtering out the stale events.  Which is just as well because:
>

>> +		if (l_token && !strcmp(token, l_token)
>> +			/* Use strncmp because we can have a watch fired on sub-directory */
>> +			&& l_path && !strncmp(path, l_path, strlen(path))) {
> 
> This isn't correct: the subpath relation is not the same as the prefix
> relation.


I will fix the test by using xs_path_is_subpath.

Thanks,

Julien


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:04:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWoE-0001uK-8b; Fri, 14 Dec 2012 15:04:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1TjWoC-0001uA-CE; Fri, 14 Dec 2012 15:04:16 +0000
Received: from [85.158.143.35:11507] by server-3.bemta-4.messagelabs.com id
	26/34-18211-FEF3BC05; Fri, 14 Dec 2012 15:04:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355497063!5489796!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30407 invoked from network); 14 Dec 2012 14:57:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 14:57:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="737179"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 14:57:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 09:57:38 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TjWhm-0007Dc-V4;
	Fri, 14 Dec 2012 14:57:38 +0000
Message-ID: <1355497058.8376.63.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Paul Harvey <jhebus@googlemail.com>
Date: Fri, 14 Dec 2012 14:57:38 +0000
In-Reply-To: <CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
	<CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-14 at 13:06 +0000, Paul Harvey wrote:
> SO
> 
> #with 341 domains
> ./lsevntchn 0 | wc -l
> 724
> 
> Attaching gdb to xenconsoled, 
> 
> Program received signal SIGABRT, Aborted.
> 0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) bt
> #0  0x00007fe588ca8425 in raise ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x00007fe588cabb8b in abort ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  0x00007fe588d7c807 in __fortify_fail ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #4  0x00007fe588d7b700 in __chk_fail ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #5  0x00007fe588d7c7be in __fdelt_warn ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
> #7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at
> daemon/main.c:166
> 

libc raises an exception when it detects a memory violation.

You can probably try using valgrind to identify the memory leak in
xenconsoled.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:04:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjWoE-0001uK-8b; Fri, 14 Dec 2012 15:04:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1TjWoC-0001uA-CE; Fri, 14 Dec 2012 15:04:16 +0000
Received: from [85.158.143.35:11507] by server-3.bemta-4.messagelabs.com id
	26/34-18211-FEF3BC05; Fri, 14 Dec 2012 15:04:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355497063!5489796!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30407 invoked from network); 14 Dec 2012 14:57:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 14:57:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="737179"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 14:57:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 09:57:38 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TjWhm-0007Dc-V4;
	Fri, 14 Dec 2012 14:57:38 +0000
Message-ID: <1355497058.8376.63.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Paul Harvey <jhebus@googlemail.com>
Date: Fri, 14 Dec 2012 14:57:38 +0000
In-Reply-To: <CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
	<CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-14 at 13:06 +0000, Paul Harvey wrote:
> SO
> 
> #with 341 domains
> ./lsevntchn 0 | wc -l
> 724
> 
> Attaching gdb to xenconsoled, 
> 
> Program received signal SIGABRT, Aborted.
> 0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) bt
> #0  0x00007fe588ca8425 in raise ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x00007fe588cabb8b in abort ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  0x00007fe588d7c807 in __fortify_fail ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #4  0x00007fe588d7b700 in __chk_fail ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #5  0x00007fe588d7c7be in __fdelt_warn ()
> from /lib/x86_64-linux-gnu/libc.so.6
> #6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
> #7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at
> daemon/main.c:166
> 

libc aborts the process when it detects a memory violation (the
__chk_fail frame in the backtrace points at a _FORTIFY_SOURCE check).

You could probably try running xenconsoled under valgrind to identify
the memory error.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:20:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjX3i-0002Og-MW; Fri, 14 Dec 2012 15:20:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1TjX3h-0002OS-KF; Fri, 14 Dec 2012 15:20:17 +0000
Received: from [85.158.143.99:16663] by server-1.bemta-4.messagelabs.com id
	4D/BC-28401-0B34BC05; Fri, 14 Dec 2012 15:20:16 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355498410!28548894!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5298 invoked from network); 14 Dec 2012 15:20:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:20:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="156304"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:20:11 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 14 Dec 2012 15:20:10 +0000
Message-ID: <50CB43A9.3040806@citrix.com>
Date: Fri, 14 Dec 2012 16:20:09 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com>	<50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
In-Reply-To: <20683.7116.546352.141787@mariner.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/12 13:30, Ian Jackson wrote:
> Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
>> According to RFC 3720 and RFC 1035, IQNs should follow this format:
>>
>> iqn.yyyy-mm.com.example:optional.string
>>
>> The problem is that Open-iSCSI seems to accept almost anything; for
>> example, iqn.yyyy-mm,com@example:... is a valid IQN from Open-iSCSI's
>> point of view. The only character that Open-iSCSI doesn't seem to
>> accept in IQNs is "/", but I don't really like using that as a field
>> separator inside of target. So I propose the following encoding for
>> target:
>>
>> "<iqn>,<portal>"
>> "<iqn>,<portal>,<auth_method>,<user>,<password>"
>>
>> If a user/password is given, we should be careful about what we write
>> to the "params" xenstore backend field (because the DomU can read
>> that). Would you agree with the syntax described above?
> 
> Wouldn't it be better to specify this in a more key/value like way ?

I guess we could use something like:

"<iqn>,<portal>,auth_method=<auth_method>,user=<user>,password=<password>"

Here <iqn> and <portal> are required, and all the other fields are
optional. The password should always be the last field, because it can
contain special characters such as "," or "=".
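
For illustration, a minimal parsing sketch of that encoding (the target
string and field values below are made-up examples). It relies on the
shell `read` builtin assigning the remainder of the line, delimiters
included, to its last variable, which is what keeps a password
containing "," or "=" intact as long as it comes last:

```shell
# Hypothetical target string; only the iqn and portal are required.
target="iqn.2012-12.com.example:lun1,192.168.1.1:3260,auth_method=chap,user=foo,password=se=cr,et"

# Split on ",": read assigns the remainder of the line to its last
# variable, so commas inside the password are preserved.
IFS=, read -r iqn portal auth user password <<< "$target"
auth=${auth#auth_method=}
user=${user#user=}
password=${password#password=}

echo "iqn=$iqn portal=$portal auth=$auth user=$user password=$password"
```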

> The password is a problem.  Perhaps we need to arrange not to write
> params to a place where the guest can see it, but that means upheaval
> for the interface to block scripts.

I was thinking of adding a new variable to aodev that carries an extra
parameter to pass to hotplug scripts, so we can pass the full diskspec
directly to the hotplug script, and the script itself can decide what to
save in the "params" field (to be used later on shutdown/destroy).

>>> I think it should be controlled by the same argument.  So maybe
>>> script=iscsi causes libxl to check for a dropping in the script file
>>> saying "yes do the prepare thing" or maybe it runs
>>> /etc/xen/scripts/block-iscsi--prepare or something.
>>
>> I like the approach to call the same hotplug script twice, the first
>> time use something like `/etc/xen/scripts/block-iscsi prepare`, and the
>> second time `/etc/xen/scripts/block-iscsi add`
> 
> So how would we tell whether the script understood this ?

I'm still looking into the current hotplug script mess, but the only
lines I see in block-common.sh that would need to change are:

if [ "$command" != "add" ] &&
   [ "$command" != "remove" ]
then
  log err "Invalid command: $command"
  exit 1
fi

I think the current block hotplug scripts will cope nicely when passed a
"prepare" command: they will become no-ops, since they all seem to use
the following case statement:

case "$command" in
add)
	[...]
	;;
remove)
	[...]
	;;
esac
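
A self-contained sketch of both points (the "prepare" entry in the check
is an assumption about how the accepted-command list would be extended;
the function names are invented for the example):

```shell
# Relaxed version of the block-common.sh check, with "prepare" added to
# the list of accepted commands (assumed eventual change):
check_command() {
  command=$1
  if [ "$command" != "add" ] &&
     [ "$command" != "remove" ] &&
     [ "$command" != "prepare" ]
  then
    echo "Invalid command: $command" >&2
    return 1
  fi
  return 0
}

# An unmodified hotplug script's case statement has no "prepare" (or
# default) branch, so "prepare" falls through silently -- a no-op:
run_command() {
  case "$1" in
  add)    echo "add-handler ran" ;;
  remove) echo "remove-handler ran" ;;
  esac
}

check_command prepare && result=$(run_command prepare)
echo "prepare result: '${result}'"   # prints: prepare result: ''
```

So an old script handed "prepare" would exit successfully having done
nothing, which matches the no-op behaviour described above.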

> Perhaps we should invent a new config parameter parallel to script
> which specifies an entirely new interface.
> 
> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:34:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXHg-0002s7-Tv; Fri, 14 Dec 2012 15:34:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TjXHf-0002rz-J1
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 15:34:43 +0000
Received: from [85.158.138.51:25105] by server-12.bemta-3.messagelabs.com id
	39/E7-27559-2174BC05; Fri, 14 Dec 2012 15:34:42 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355499281!27136156!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9587 invoked from network); 14 Dec 2012 15:34:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:34:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="156648"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:34:42 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 14 Dec 2012 15:34:41 +0000
Message-ID: <50CB4710.8010502@citrix.com>
Date: Fri, 14 Dec 2012 16:34:40 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
References: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>,
	<CAFLBxZZQ9Nw09L01ekS4k0W92gZ27F+5R82192HgO_tWEp02Ug@mail.gmail.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC20D@aplesstripe.dom1.jhuapl.edu>
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48BDDDC20D@aplesstripe.dom1.jhuapl.edu>
Cc: George Dunlap <dunlapg@umich.edu>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/12 15:33, Fioravante, Matthew E. wrote:
> Caml was disabled in the original stubdom default build targets. I should not have enabled it in my original stubdom autoconf patch.

Does caml-stubdom require anything else apart from caml itself? If not,
I think it should be enabled by default when caml is detected (as we do
for the caml xenstore in the tools).

> ________________________________________
> From: dunlapg@gmail.com [dunlapg@gmail.com] On Behalf Of George Dunlap [dunlapg@umich.edu]
> Sent: Thursday, December 13, 2012 12:47 PM
> To: Fioravante, Matthew E.
> Cc: Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
> 
> Is there a rationale for this?  If so, it should probably be in the commit message.
> 
>  -George
> 
> 
> On Thu, Dec 13, 2012 at 3:31 PM, Matthew Fioravante <matthew.fioravante@jhuapl.edu> wrote:
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  stubdom/configure.ac |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
> index db44d4a..384a94a 100644
> --- a/stubdom/configure.ac
> +++ b/stubdom/configure.ac
> @@ -18,7 +18,7 @@ m4_include([../m4/depends.m4])
>  # Enable/disable stub domains
>  AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
>  AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> -AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_DISABLE([caml-stubdom], [caml])
>  AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
>  AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
>  AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
> --
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:39:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXLm-00032D-N0; Fri, 14 Dec 2012 15:38:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjXLk-000328-Rr
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 15:38:57 +0000
Received: from [85.158.143.99:49169] by server-2.bemta-4.messagelabs.com id
	A5/8E-30861-0184BC05; Fri, 14 Dec 2012 15:38:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355499535!19758655!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32315 invoked from network); 14 Dec 2012 15:38:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:38:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="156764"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:38:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 15:38:55 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjXHR-0007t0-Lq;
	Fri, 14 Dec 2012 15:34:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjXHR-0004lz-Fq;
	Fri, 14 Dec 2012 15:34:29 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14693-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 15:34:29 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14693: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9092364101536556810=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9092364101536556810==
Content-Type: text/plain

flight 14693 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14693/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14678
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14678

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  74d4a6cc5392

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Robert Phillips <robert.phillips@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=f50aab21f9f2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable f50aab21f9f2
+ branch=xen-unstable
+ revision=f50aab21f9f2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r f50aab21f9f2 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 24 changes to 23 files


--===============9092364101536556810==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9092364101536556810==--

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:39:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXLm-00032D-N0; Fri, 14 Dec 2012 15:38:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjXLk-000328-Rr
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 15:38:57 +0000
Received: from [85.158.143.99:49169] by server-2.bemta-4.messagelabs.com id
	A5/8E-30861-0184BC05; Fri, 14 Dec 2012 15:38:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355499535!19758655!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32315 invoked from network); 14 Dec 2012 15:38:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:38:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="156764"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:38:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 15:38:55 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjXHR-0007t0-Lq;
	Fri, 14 Dec 2012 15:34:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjXHR-0004lz-Fq;
	Fri, 14 Dec 2012 15:34:29 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14693-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 15:34:29 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14693: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9092364101536556810=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9092364101536556810==
Content-Type: text/plain

flight 14693 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14693/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14678
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14678

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  74d4a6cc5392

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Robert Phillips <robert.phillips@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=f50aab21f9f2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable f50aab21f9f2
+ branch=xen-unstable
+ revision=f50aab21f9f2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r f50aab21f9f2 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 24 changes to 23 files


--===============9092364101536556810==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9092364101536556810==--

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:42:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXPL-0003Dy-Gc; Fri, 14 Dec 2012 15:42:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjXPK-0003Dt-Si
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 15:42:39 +0000
Received: from [85.158.137.99:10877] by server-3.bemta-3.messagelabs.com id
	32/B5-31588-EE84BC05; Fri, 14 Dec 2012 15:42:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355499757!13938068!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11335 invoked from network); 14 Dec 2012 15:42:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:42:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="156950"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:42:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 15:42:37 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjXPJ-0007vO-3m; Fri, 14 Dec 2012 15:42:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjXPI-0000zC-Px;
	Fri, 14 Dec 2012 15:42:36 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.18668.222288.111279@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 15:42:36 +0000
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <50CB3D95.9000000@citrix.com>
References: <53cbcf7907b54963906d7b5fa5ad40b4fff37886.1355399450.git.julien.grall@citrix.com>
	<20682.8150.410558.581149@mariner.uk.xensource.com>
	<50CB3D95.9000000@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V3] libxenstore: filter watch events in
 libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien Grall writes ("Re: [PATCH V3] libxenstore: filter watch events in libxenstore when we unwatch"):
> On 12/13/2012 06:35 PM, Ian Jackson wrote:
> > XS_UNWATCH_FILTER ?  XS_WATCH_TOKENS_UNIQUE ?
> 
> I think XS_UNWATCH_FILTER is better.

OK.

> > As for the documentation:
> 
> I think it's too restrictive for upstream. xs_unwatch could only
> filter on the token and the subpath. I modified your documentation
> proposal below to take this solution into account. What do you think?

I like your proposal.

      *    A path and a subpath can be       The application must avoid to
      *    register with the same token.     register a path (/foo/) and

A tiny grammar quibble: you should write "the application must avoid
registering".  "Avoid" needs the FOOing form of a verb rather than the
"to FOO" form.

> I will fix the test by using xs_path_is_subpath.

Great.

Thanks,
Ian.
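
[Editorial note: for illustration only, and not part of the patch under
discussion. The matching rule that xs_path_is_subpath is expected to
implement (a C helper in libxenstore) — /foo matches /foo and /foo/bar
but not /foobar — can be sketched in shell:]

```shell
#!/bin/sh
# Sketch of the subpath semantics discussed above. The real check,
# xs_path_is_subpath, is C code in libxenstore; this shell function
# only illustrates the intended matching rule.
path_is_subpath() {
    # $1 = watched path, $2 = event path
    case "$2" in
        "$1")   return 0 ;;   # exact match
        "$1"/*) return 0 ;;   # strictly below the watched path
        *)      return 1 ;;   # /foobar is NOT below /foo
    esac
}

path_is_subpath /foo /foo/bar && echo "/foo/bar is under /foo"
path_is_subpath /foo /foobar  || echo "/foobar is not under /foo"
```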

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:46:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXT1-0003NN-9s; Fri, 14 Dec 2012 15:46:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TjXT0-0003N9-1j; Fri, 14 Dec 2012 15:46:26 +0000
Received: from [85.158.139.211:3773] by server-5.bemta-5.messagelabs.com id
	F1/26-22648-1D94BC05; Fri, 14 Dec 2012 15:46:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355499984!20037224!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15048 invoked from network); 14 Dec 2012 15:46:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:46:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="157056"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:46:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 15:46:23 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjXSx-0007wf-Kz; Fri, 14 Dec 2012 15:46:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjXSx-0000zR-GY;
	Fri, 14 Dec 2012 15:46:23 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.18895.282490.216230@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 15:46:23 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CB43A9.3040806@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
	<50CB43A9.3040806@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> I was thinking of adding a new variable to aodev that can contain an
> extra parameter to pass to hotplug scripts, so we can directly pass the
> full diskspec to the hotplug script and the hotplug script itself can
> decide what to save in the "params" field (to be used later in the
> shutdown/destroy).

That's all very well but command line parameters are visible in ps and
so not suitable for passwords.  Really there should be an area in
xenstore that's for communication between the toolstack and the driver
domain (including scripts in the latter), but which is not visible to
the guest.

> > So how would we tell whether the script understood this ?
> 
> I'm still looking into the current hotplug script mess, but I only see
> the following lines in block-common.sh that should be changed:
> 
> if [ "$command" != "add" ] &&
>    [ "$command" != "remove" ]

What about existing out-of-tree scripts ?  Do they all use
block-common ?

> then
>   log err "Invalid command: $command"
>   exit 1
> fi

And this is no good because if libxl does the error handling properly
it would cause every attempt to fail :-).  You could explicitly ignore
prepare and unprepare.

> I think current block hotplug scripts will work nicely when passed the
> "prepare" command, they will become no-ops, since they all seem to use
> the following case:

I'm worried about out-of-tree scripts.


The existing hotplug script interface is pretty horrible, TBH, which
is why I was suggesting inventing a new one.  We could keep the old
interface for out-of-tree and unconverted in-tree scripts, and provide
a new parameter to request the new style.

E.g. "method=<something>" rather than "script=<something>" would mean
"set script to <something> and also set the flag saying `use the new
script calling convention'".

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:46:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXT1-0003NN-9s; Fri, 14 Dec 2012 15:46:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TjXT0-0003N9-1j; Fri, 14 Dec 2012 15:46:26 +0000
Received: from [85.158.139.211:3773] by server-5.bemta-5.messagelabs.com id
	F1/26-22648-1D94BC05; Fri, 14 Dec 2012 15:46:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355499984!20037224!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15048 invoked from network); 14 Dec 2012 15:46:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 15:46:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,281,1355097600"; 
   d="scan'208";a="157056"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 15:46:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 15:46:23 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjXSx-0007wf-Kz; Fri, 14 Dec 2012 15:46:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjXSx-0000zR-GY;
	Fri, 14 Dec 2012 15:46:23 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.18895.282490.216230@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 15:46:23 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CB43A9.3040806@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
	<50CB43A9.3040806@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> I was thinking of adding a new variable to aodev that can contain an
> extra parameter to pass to hotplug scripts, so we can directly pass the
> full diskspec to the hotplug script and the hotplug script itself can
> decide what to save in the "params" field (to be used later in the
> shutdown/destroy).

That's all very well, but command-line parameters are visible in ps and
so are not suitable for passwords.  Really there should be an area in
xenstore that is for communication between the toolstack and the driver
domain (including the scripts in the latter), but which is not visible
to the guest.

> > So how would we tell whether the script understood this ?
> 
> I'm still looking into the current hotplug script mess, but I only see
> the following lines in block-common.sh that should be changed:
> 
> if [ "$command" != "add" ] &&
>    [ "$command" != "remove" ]

What about existing out-of-tree scripts?  Do they all use
block-common.sh?

> then
>   log err "Invalid command: $command"
>   exit 1
> fi

And this is no good, because if libxl does the error handling properly
it would cause every prepare attempt to fail :-).  You could explicitly
ignore prepare and unprepare.

> I think current block hotplug scripts will work nicely when passed the
> "prepare" command, they will become no-ops, since they all seem to use
> the following case:

I'm worried about out-of-tree scripts.


The existing hotplug script interface is pretty horrible, TBH, which
is why I was suggesting inventing a new one.  We could keep the old
interface for out-of-tree and unconverted in-tree scripts, and provide
a new parameter to request the new style.

E.g. "method=<something>" rather than "script=<something>" would mean
"set script to <something> and also set the flag saying `use the new
script calling convention'".

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 15:56:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXcT-0003lg-15; Fri, 14 Dec 2012 15:56:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TjXcR-0003kY-1H
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 15:56:11 +0000
Received: from [85.158.139.211:45616] by server-5.bemta-5.messagelabs.com id
	63/26-22648-A1C4BC05; Fri, 14 Dec 2012 15:56:10 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355500567!20568206!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31300 invoked from network); 14 Dec 2012 15:56:08 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	14 Dec 2012 15:56:08 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:55293 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TjXgM-0006Hg-VA; Fri, 14 Dec 2012 17:00:15 +0100
Date: Fri, 14 Dec 2012 16:55:57 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1995613404.20121214165557@eikelenboom.it>
To: xen-devel <xen-devel@lists.xen.org>, linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------1071AB00421F249D1"
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
	dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------1071AB00421F249D1
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi Konrad,

I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.
The boot stalls:

[    0.000000] ACPI: PM-Timer IO Port: 0x808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
[    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
[    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
[   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
[   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
[   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
[   64.598692] sending NMI to all CPUs:
[   64.598716] xen: vector 0x2 is not implemented


Perhaps an interesting line is this incomplete one (no end of range), and the boot stalls there for some time before the kernel reports the stall itself:
[    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-


The exact same config with a 3.7.0 kernel works fine.
Complete serial log is attached.

--

Sander



------------1071AB00421F249D1
Content-Type: application/octet-stream;
 name="serial.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="serial.log"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fX18gICAgICAgICAgICAgICAgICAgIF8g
ICAgICAgIF8gICAgIF8gICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9fXyAv
ICAgIF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18gDQogIFwgIC8vIF8g
XCAnXyBcICB8IHx8IHxfICAgfF8gXCBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdf
IFx8IHwvIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgX19fKSB8X198IHxffCB8
IHwgfCBcX18gXCB8fCAoX3wgfCB8XykgfCB8ICBfXy8NCiAvXy9cX1xfX198X3wgfF98ICAg
IHxffChfKV9fX18vICAgIFxfXyxffF98IHxffF9fXy9cX19cX18sX3xfLl9fL3xffFxfX198
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4zLXVuc3RhYmxl
IChyb290QGR5bmRucy5vcmcpIChnY2MgKERlYmlhbiA0LjQuNS04KSA0LjQuNSkgVHVlIERl
YyAxMSAxNToyNToyNCBDRVQgMjAxMg0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogTW9uIERl
YyAxMCAxMToxNjoxNyAyMDEyICswMDAwIDI2MjcwOjAzY2I3MWJjMzJmOQ0KKFhFTikgQm9v
dGxvYWRlcjogR1JVQiAxLjk4KzIwMTAwODA0LTE0K3NxdWVlemUxDQooWEVOKSBDb21tYW5k
IGxpbmU6IGRvbTBfbWVtPTEwMjRNLG1heDoxMDI0TSBsb2dsdmw9YWxsIGxvZ2x2bF9ndWVz
dD1hbGwgY29uc29sZV90aW1lc3RhbXBzIHZnYT1nZngtMTI4MHgxMDI0eDMyIGNwdWlkbGUg
Y3B1ZnJlcT14ZW4gbm9yZWJvb3QgZGVidWcgbGFwaWM9ZGVidWcgYXBpY192ZXJib3NpdHk9
ZGVidWcgYXBpYz1kZWJ1ZyBpb21tdT1vbix2ZXJib3NlLGRlYnVnLGFtZC1pb21tdS1kZWJ1
ZyBjb20xPTM4NDAwLDhuMSBjb25zb2xlPXZnYSxjb20xDQooWEVOKSBWaWRlbyBpbmZvcm1h
dGlvbjoNCihYRU4pICBWR0EgaXMgZ3JhcGhpY3MgbW9kZSAxMjgweDEwMjQsIDMyIGJwcA0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNv
bmRzDQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25h
dHVyZXMNCihYRU4pICBGb3VuZCAzIEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVO
KSBYZW4tZTgyMCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAw
MDAwMDlmMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZjAwMCAtIDAwMDAwMDAw
MDAwYTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTQwMDAgLSAwMDAwMDAw
MDAwMTAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAw
MDBhZmY5MDAwMCAodXNhYmxlKQ0KKFhFTikgIDAwMDAwMDAwYWZmOTAwMDAgLSAwMDAwMDAw
MGFmZjllMDAwIChBQ1BJIGRhdGEpDQooWEVOKSAgMDAwMDAwMDBhZmY5ZTAwMCAtIDAwMDAw
MDAwYWZmZTAwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAwMDAwYWZmZTAwMDAgLSAwMDAw
MDAwMGIwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZmZTAwMDAwIC0gMDAw
MDAwMDEwMDAwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDEwMDAwMDAwMCAtIDAw
MDAwMDAyNTAwMDAwMDAgKHVzYWJsZSkNCihYRU4pIEFDUEk6IFJTRFAgMDAwRkIxMDAsIDAw
MTQgKHIwIEFDUElBTSkNCihYRU4pIEFDUEk6IFJTRFQgQUZGOTAwMDAsIDAwNDggKHIxIE1T
SSAgICBPRU1TTElDICAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRkFD
UCBBRkY5MDIwMCwgMDA4NCAocjEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQgICAg
ICAgOTcpDQooWEVOKSBBQ1BJOiBEU0RUIEFGRjkwNUUwLCA5NDI3IChyMSAgQTc2NDAgQTc2
NDAxMDAgICAgICAxMDAgSU5UTCAyMDA1MTExNykNCihYRU4pIEFDUEk6IEZBQ1MgQUZGOUUw
MDAsIDAwNDANCihYRU4pIEFDUEk6IEFQSUMgQUZGOTAzOTAsIDAwODggKHIxIDc2NDBNUyBB
NzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogTUNGRyBBRkY5
MDQyMCwgMDAzQyAocjEgNzY0ME1TIE9FTU1DRkcgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcp
DQooWEVOKSBBQ1BJOiBTTElDIEFGRjkwNDYwLCAwMTc2IChyMSBNU0kgICAgT0VNU0xJQyAg
MjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IE9FTUIgQUZGOUUwNDAsIDAw
NzIgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikg
QUNQSTogU1JBVCBBRkY5QTVFMCwgMDEwOCAocjMgQU1EICAgIEZBTV9GXzEwICAgICAgICAy
IEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBIUEVUIEFGRjlBNkYwLCAwMDM4IChyMSA3
NjQwTVMgT0VNSFBFVCAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IElW
UlMgQUZGOUE3MzAsIDAwRjggKHIxICBBTUQgICAgIFJEODkwUyAgIDIwMjAzMSBBTUQgICAg
ICAgICAwKQ0KKFhFTikgQUNQSTogU1NEVCBBRkY5QTgzMCwgMERBNCAocjEgQSBNIEkgIFBP
V0VSTk9XICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBTeXN0ZW0gUkFNOiA4MTkx
TUIgKDgzODc3NzJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBOb2RlIDAN
CihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBY
TSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNCAtPiBOb2RlIDANCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgNSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IE5vZGUgMCBQ
WE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwLWIwMDAwMDAw
DQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTI1MDAwMDAwMA0KKFhFTikg
TlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAyNGQ5NjAwMDAgLSAyNGQ5NjMwMDAN
CihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgRG9tYWlu
IGhlYXAgaW5pdGlhbGlzZWQNCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIgYXQgMHhmYjAw
MDAwMCwgbWFwcGVkIHRvIDB4ZmZmZjgyYzAwMDA4MTAwMCwgdXNpbmcgNjE0NGssIHRvdGFs
IDE0MzM2aw0KKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwgbGluZWxlbmd0
aD01MTIwLCBmb250IDh4MTYNCihYRU4pIHZlc2FmYjogVHJ1ZWNvbG9yOiBzaXplPTg6ODo4
OjgsIHNoaWZ0PTI0OjE2Ojg6MA0KKFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZm
NzgwDQooWEVOKSBETUkgcHJlc2VudC4NCihYRU4pIEFQSUMgYm9vdCBzdGF0ZSBpcyAneGFw
aWMnDQooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1U
aW1lciBJTyBQb3J0OiAweDgwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4
X2NudFs4MDQsMF0sIHBtMXhfZXZ0WzgwMCwwXQ0KKFhFTikgQUNQSTogICAgICAgICAgICAg
ICAgICB3YWtldXBfdmVjW2FmZjllMDBjXSwgdmVjX3NpemVbMjBdDQooWEVOKSBBQ1BJOiBM
b2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMCAw
OjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0g
bGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMSAwOjEwIEFQSUMg
dmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRb
MHgwMl0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMiAwOjEwIEFQSUMgdmVyc2lvbiAx
Ng0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwM10gZW5h
YmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMyAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikg
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkNCihY
RU4pIFByb2Nlc3NvciAjNCAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgwNl0gbGFwaWNfaWRbMHgwNV0gZW5hYmxlZCkNCihYRU4pIFByb2Nl
c3NvciAjNSAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogSU9BUElDIChpZFsw
eDA2XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBd
OiBhcGljX2lkIDYsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwN10gYWRkcmVzc1sweGZlYzIwMDAwXSBnc2lf
YmFzZVsyNF0pDQooWEVOKSBJT0FQSUNbMV06IGFwaWNfaWQgNywgdmVyc2lvbiAzMywgYWRk
cmVzcyAweGZlYzIwMDAwLCBHU0kgMjQtNTUNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVO
KSBBQ1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQg
Ynkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVO
KSBFbmFibGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMiBJL08gQVBJQ3MNCihYRU4p
IEFDUEk6IEhQRVQgaWQ6IDB4ODMwMCBiYXNlOiAweGZlZDAwMDAwDQooWEVOKSBUYWJsZSBp
cyBub3QgZm91bmQhDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3Vy
YXRpb24gaW5mb3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNiBDUFVzICgwIGhvdHBs
dWcgQ1BVcykNCihYRU4pIG1hcHBlZCBBUElDIHRvIGZmZmY4MmMzZmZkZmIwMDAgKGZlZTAw
MDAwKQ0KKFhFTikgbWFwcGVkIElPQVBJQyB0byBmZmZmODJjM2ZmZGZhMDAwIChmZWMwMDAw
MCkNCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyYzNmZmRmOTAwMCAoZmVjMjAwMDAp
DQooWEVOKSBJUlEgbGltaXRzOiA1NiBHU0ksIDExMTIgTVNJL01TSS1YDQooWEVOKSBVc2lu
ZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBEZXRl
Y3RlZCAzMjAwLjEyNiBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGluZyBtZW1vcnkgc2hh
cmluZy4NCihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxl
ZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBlMDAwMDAwMCBzZWdt
ZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Ig
c2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZpOiBGb3VuZCBNU0kgY2FwYWJp
bGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkgVGFibGU6DQooWEVOKSBB
TUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4ZjgNCihY
RU4pIEFNRC1WaTogIFJldmlzaW9uIDB4MQ0KKFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0gMHg1
MA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRCAgDQooWEVOKSBBTUQtVmk6ICBPRU1fVGFi
bGVfSWQgUkQ4OTBTDQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgyMDIwMzENCihY
RU4pIEFNRC1WaTogIENyZWF0b3JfSWQgQU1EIA0KKFhFTikgQU1ELVZpOiAgQ3JlYXRvcl9S
ZXZpc2lvbiAwDQooWEVOKSBBTUQtVmk6IElWUlMgQmxvY2s6DQooWEVOKSBBTUQtVmk6ICBU
eXBlIDB4MTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4M2UNCihYRU4pIEFNRC1WaTogIExl
bmd0aCAweGM4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyDQooWEVOKSBBTUQtVmk6IElW
SEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgUmFuZ2U6IDAgLT4gMHgyDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweDEw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGIwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZf
SWQgMHgxOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERl
dmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgMHg5MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAweGEwOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIFJhbmdlOiAweGEwOCAtPiAweGFmZg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IEFsaWFzOiAweGEwMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihYRU4p
IEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyOA0KKFhFTikg
QU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihY
RU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHg4MDANCihY
RU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6
DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4MzAN
CihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50
cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4
NzAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweDUwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDYwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg1OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQt
Vmk6ICBEZXZfSWQgMHg1MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogIERldl9JZCBSYW5nZTogMHg1MDAgLT4gMHg1MDENCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4NjgNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4NDAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6
IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFN
RC1WaTogIERldl9JZCAweDg4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4p
IEFNRC1WaTogIERldl9JZCAweDkwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBB
TUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OTAgLT4gMHg5Mg0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg5OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweDk4IC0+IDB4OWENCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IDB4YTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4YTENCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4YTINCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTog
SVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1E
LVZpOiAgRGV2X0lkIDB4YTMNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikg
QU1ELVZpOiAgRGV2X0lkIDB4YTQNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFN
RC1WaTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihY
RU4pIEFNRC1WaTogIERldl9JZCAweDMwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhF
TikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDMwMCAtPiAweDNmZg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIEFsaWFzOiAweGE0DQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweGE1
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGE4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweGE5DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDEwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHhiMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweGIwIC0+IDB4YjINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDANCihYRU4pIEFNRC1WaTogIERldl9JZCAw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDQ4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQg
MA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMHhkNw0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHg0OA0KKFhFTikgQU1ELVZpOiAgRGV2
X0lkIDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSU9NTVUgMCBF
bmFibGVkLg0KKFhFTikgQU1ELVZpOiBFbmFibGluZyBnbG9iYWwgdmVjdG9yIG1hcA0KKFhF
TikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAtIERvbTAgbW9kZTogUmVs
YXhlZA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBW
RVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBJRDogMA0KKFhFTikgR2V0dGluZyBM
VlQwOiA3MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElO
VCBvbiBDUFUjMA0KKFhFTikgRVNSIHZhbHVlIGJlZm9yZSBlbmFibGluZyB2ZWN0b3I6IDB4
NCAgYWZ0ZXI6IDANCihYRU4pIEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVz
aW5nIG5ldyBBQ0sgbWV0aG9kDQooWEVOKSBpbml0IElPX0FQSUMgSVJRcw0KKFhFTikgIElP
LUFQSUMgKGFwaWNpZC1waW4pIDYtMCwgNi0xNiwgNi0xNywgNi0xOCwgNi0xOSwgNi0yMCwg
Ni0yMSwgNi0yMiwgNi0yMywgNy0wLCA3LTEsIDctMiwgNy0zLCA3LTQsIDctNSwgNy02LCA3
LTcsIDctOCwgNy05LCA3LTEwLCA3LTExLCA3LTEyLCA3LTEzLCA3LTE0LCA3LTE1LCA3LTE2
LCA3LTE3LCA3LTE4LCA3LTE5LCA3LTIwLCA3LTIxLCA3LTIyLCA3LTIzLCA3LTI0LCA3LTI1
LCA3LTI2LCA3LTI3LCA3LTI4LCA3LTI5LCA3LTMwLCA3LTMxIG5vdCBjb25uZWN0ZWQuDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4y
PS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhFTikgbnVtYmVy
IG9mIElPLUFQSUMgIzYgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIG51bWJlciBvZiBJTy1BUElD
ICM3IHJlZ2lzdGVyczogMzIuDQooWEVOKSB0ZXN0aW5nIHRoZSBJTyBBUElDLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4NCihYRU4pIElPIEFQSUMgIzYuLi4uLi4NCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAwOiAwNjAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBoeXNpY2FsIEFQSUMg
aWQ6IDA2DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTogMA0KKFhFTikgLi4u
Li4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAw
MDE3ODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczog
MDAxNw0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6IDENCihYRU4pIC4u
Li4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAuLi4uIHJlZ2lzdGVy
ICMwMjogMDYwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRyYXRpb246IDA2DQoo
WEVOKSAuLi4uIHJlZ2lzdGVyICMwMzogMDcwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDog
Qm9vdCBEVCAgICA6IDANCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOg0KKFhF
TikgIE5SIExvZyBQaHkgTWFzayBUcmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDog
ICANCihYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICAzMA0KKFhFTikgIDAyIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgRjANCihYRU4pICAwMyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAx
ICAgIDM4DQooWEVOKSAgMDQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICBGMQ0KKFhFTikgIDA1IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAg
IDEgICAgNDANCihYRU4pICAwNiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAg
ICAxICAgIDQ4DQooWEVOKSAgMDcgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICA1MA0KKFhFTikgIDA4IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgNTgNCihYRU4pICAwOSAwMDEgMDEgIDEgICAgMSAgICAwICAgMSAgIDAgICAg
MSAgICAxICAgIDYwDQooWEVOKSAgMGEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA2OA0KKFhFTikgIDBiIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgNzANCihYRU4pICAwYyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDc4DQooWEVOKSAgMGQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA4OA0KKFhFTikgIDBlIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgOTANCihYRU4pICAwZiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDk4DQooWEVOKSAgMTAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDExIDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMiAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTMgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE0IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTYgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE3IDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIElPIEFQSUMgIzcuLi4uLi4NCihY
RU4pIC4uLi4gcmVnaXN0ZXIgIzAwOiAwNzAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBo
eXNpY2FsIEFQSUMgaWQ6IDA3DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTog
MA0KKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAxOiAwMDFGODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rp
b24gZW50cmllczogMDAxRg0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6
IDENCihYRU4pIC4uLi4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAu
Li4uIHJlZ2lzdGVyICMwMjogMDAwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRy
YXRpb246IDAwDQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToNCihYRU4pICBO
UiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZlY3Q6ICAgDQoo
WEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0K
KFhFTikgIDAxIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAN
CihYRU4pICAwMiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAw
DQooWEVOKSAgMDMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MA0KKFhFTikgIDA0IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAg
MDANCihYRU4pICAwNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAg
ICAwMA0KKFhFTikgIDA3IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAwOCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMDkgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDBhIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAwYiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMGMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAwZSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMGYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAw
ICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAg
MCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE2IDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTggMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE5IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxYSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWIgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFjIDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxZCAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWUgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFmIDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIFVzaW5nIHZlY3Rvci1iYXNl
ZCBpbmRleGluZw0KKFhFTikgSVJRIHRvIHBpbiBtYXBwaW5nczoNCihYRU4pIElSUTI0MCAt
PiAwOjINCihYRU4pIElSUTQ4IC0+IDA6MQ0KKFhFTikgSVJRNTYgLT4gMDozDQooWEVOKSBJ
UlEyNDEgLT4gMDo0DQooWEVOKSBJUlE2NCAtPiAwOjUNCihYRU4pIElSUTcyIC0+IDA6Ng0K
KFhFTikgSVJRODAgLT4gMDo3DQooWEVOKSBJUlE4OCAtPiAwOjgNCihYRU4pIElSUTk2IC0+
IDA6OQ0KKFhFTikgSVJRMTA0IC0+IDA6MTANCihYRU4pIElSUTExMiAtPiAwOjExDQooWEVO
KSBJUlExMjAgLT4gMDoxMg0KKFhFTikgSVJRMTM2IC0+IDA6MTMNCihYRU4pIElSUTE0NCAt
PiAwOjE0DQooWEVOKSBJUlExNTIgLT4gMDoxNQ0KKFhFTikgLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uIGRvbmUuDQooWEVOKSBVc2luZyBsb2NhbCBBUElDIHRpbWVy
IGludGVycnVwdHMuDQooWEVOKSBjYWxpYnJhdGluZyBBUElDIHRpbWVyIC4uLg0KKFhFTikg
Li4uLi4gQ1BVIGNsb2NrIHNwZWVkIGlzIDMyMDAuMTIwOSBNSHouDQooWEVOKSAuLi4uLiBo
b3N0IGJ1cyBjbG9jayBzcGVlZCBpcyAyMDAuMDA3NCBNSHouDQooWEVOKSAuLi4uLiBidXNf
c2NhbGUgPSAweGNjZDcNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQyXSBQbGF0Zm9ybSB0
aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDJdIEFs
bG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgNjQgS2lCLg0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDJdIEhWTTogQVNJRHMgZW5hYmxlZC4NCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQy
XSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pIFsyMDEyLTEyLTE0
IDE1OjM0OjQyXSAgLSBOZXN0ZWQgUGFnZSBUYWJsZXMgKE5QVCkNCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQyXSAgLSBMYXN0IEJyYW5jaCBSZWNvcmQgKExCUikgVmlydHVhbGlzYXRp
b24NCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQyXSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAj
Vk1FWElUDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0Ml0gIC0gUGF1c2UtSW50ZXJjZXB0
IEZpbHRlcg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDJdIEhWTTogU1ZNIGVuYWJsZWQN
CihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQyXSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBh
Z2luZyAoSEFQKSBkZXRlY3RlZA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDJdIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0
OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMxDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
Ml0gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihY
RU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0Ml0gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBw
YXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQxXSBtYXNrZWQg
RXh0SU5UIG9uIENQVSMzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0Ml0gbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQVSM0DQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0Ml0gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEw
MDAwYmYNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQ
VSM1DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQnJvdWdodCB1cCA2IENQVXMNCihY
RU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86
IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIEhQRVQ6
IDMgdGltZXJzICgzIHdpbGwgYmUgdXNlZCBmb3IgYnJvYWRjYXN0KQ0KKFhFTikgWzIwMTIt
MTItMTQgMTU6MzQ6NDNdIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0M10gTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5n
IGZyZXF1ZW5jeQ0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIG1jaGVja19wb2xsOiBN
YWNoaW5lIGNoZWNrIHBvbGxpbmcgdGltZXIgc3RhcnRlZC4NCihYRU4pIFsyMDEyLTEyLTE0
IDE1OjM0OjQzXSBYZW5vcHJvZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0
LCBJQlNDVEwgPSAweGZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gKioq
IExPQURJTkcgRE9NQUlOIDAgKioqDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxm
X3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4ZDViMDAwDQoo
WEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFk
ZHI9MHgxZTAwMDAwIG1lbXN6PTB4ZGEwZjANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQz
XSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFlZGIwMDAgbWVtc3o9MHgxM2Rj
MA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6
IHBhZGRyPTB4MWVlZjAwMCBtZW1zej0weGUwNjAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MmNmNTAw
MA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VF
U1RfT1MgPSAibGludXgiDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9w
YXJzZV9ub3RlOiBHVUVTVF9WRVJTSU9OID0gIjIuNiINCihYRU4pIFsyMDEyLTEyLTE0IDE1
OjM0OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiDQoo
WEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JB
U0UgPSAweGZmZmZmZmZmODAwMDAwMDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBl
bGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhmZmZmZmZmZjgxZWVmMjEwDQooWEVOKSBb
MjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFH
RSA9IDB4ZmZmZmZmZmY4MTAwMTAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVzfHBh
ZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hl
bl9wYXJzZV9ub3RlOiBQQUVfTU9ERSA9ICJ5ZXMiDQooWEVOKSBbMjAxMi0xMi0xNCAxNToz
NDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyINCihYRU4pIFsy
MDEyLTEyLTE0IDE1OjM0OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVs
ZiBub3RlICgweGQpDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJz
ZV9ub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQ0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6
NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZmODAwMDAwMDAw
MDAwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBQ
QUREUl9PRkZTRVQgPSAweDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBlbGZfeGVu
X2FkZHJfY2FsY19jaGVjazogYWRkcmVzc2VzOg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6
NDNdICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSBb
MjAxMi0xMi0xNCAxNTozNDo0M10gICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAweDANCihYRU4p
IFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAgICAgdmlydF9vZmZzZXQgICAgICA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICAgICB2aXJ0X2tzdGFy
dCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
M10gICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODJjZjUwMDANCihYRU4pIFsy
MDEyLTEyLTE0IDE1OjM0OjQzXSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4ZmZmZmZmZmY4
MWVlZjIxMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICAgICBwMm1fYmFzZSAgICAg
ICAgID0gMHhmZmZmZmZmZmZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10g
IFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzINCihYRU4pIFsyMDEyLTEyLTE0
IDE1OjM0OjQzXSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAw
MDAwMCAtPiAweDJjZjUwMDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBQSFlTSUNB
TCBNRU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gIERv
bTAgYWxsb2MuOiAgIDAwMDAwMDAyNDAwMDAwMDAtPjAwMDAwMDAyNDQwMDAwMDAgKDI0MjUx
NiBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10g
IEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGYzNTQwMDAtPjAwMDAwMDAyNGZmZmZjMDANCihY
RU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoN
CihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4
MTAwMDAwMC0+ZmZmZmZmZmY4MmNmNTAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNd
ICBJbml0LiByYW1kaXNrOiBmZmZmZmZmZjgyY2Y1MDAwLT5mZmZmZmZmZjgzOWEwYzAwDQoo
WEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gIFBoeXMtTWFjaCBtYXA6IGZmZmZmZmZmODM5
YTEwMDAtPmZmZmZmZmZmODNiYTEwMDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAg
U3RhcnQgaW5mbzogICAgZmZmZmZmZmY4M2JhMTAwMC0+ZmZmZmZmZmY4M2JhMTRiNA0KKFhF
TikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgzYmEy
MDAwLT5mZmZmZmZmZjgzYmM1MDAwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gIEJv
b3Qgc3RhY2s6ICAgIGZmZmZmZmZmODNiYzUwMDAtPmZmZmZmZmZmODNiYzYwMDANCihYRU4p
IFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAgVE9UQUw6ICAgICAgICAgZmZmZmZmZmY4MDAwMDAw
MC0+ZmZmZmZmZmY4NDAwMDAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICBFTlRS
WSBBRERSRVNTOiBmZmZmZmZmZjgxZWVmMjEwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
M10gRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHhmZmZmZmZmZjgxMDAwMDAwIC0+IDB4
ZmZmZmZmZmY4MWQ1YjAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVsZl9sb2Fk
X2JpbmFyeTogcGhkciAxIGF0IDB4ZmZmZmZmZmY4MWUwMDAwMCAtPiAweGZmZmZmZmZmODFl
ZGEwZjANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBlbGZfbG9hZF9iaW5hcnk6IHBo
ZHIgMiBhdCAweGZmZmZmZmZmODFlZGIwMDAgLT4gMHhmZmZmZmZmZjgxZWVlZGMwDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDMgYXQgMHhm
ZmZmZmZmZjgxZWVmMDAwIC0+IDB4ZmZmZmZmZmY4MWY5NTAwMA0KKFhFTikgWzIwMTItMTIt
MTQgMTU6MzQ6NDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyLCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNd
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHJvb3Qg
dGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHgxOCwgcm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4LCByb290IHRhYmxlID0g
MHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTIt
MTItMTQgMTU6MzQ6NDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4MzAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5n
IG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MCwgcm9vdCB0YWJsZSA9IDB4MjRkODAy
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1
OjM0OjQzXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDU4
LCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
Mw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4NjgsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4OCwgcm9vdCB0
YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4p
IFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDkwLCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAs
IHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTIsIHJvb3QgdGFibGUgPSAw
eDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0x
Mi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHg5OCwgcm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCByb290IHRhYmxlID0gMHgyNGQ4MDIw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTAs
IHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMSwgcm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQ0XSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEyLCByb290IHRh
YmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikg
WzIwMTItMTItMTQgMTU6MzQ6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YTMsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwg
cGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgcm9vdCB0YWJsZSA9IDB4
MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweGE1LCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDRdIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTgsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNToz
NDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMCwg
cm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGIyLCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDRdIEFN
RC1WaTogTm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC4x
DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2
aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDRdIEFNRC1WaTog
Tm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjMNCihYRU4pIFsyMDEyLTEyLTE0IDE1
OjM0OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC40DQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MDAsIHJvb3QgdGFibGUg
PSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAx
Mi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHg1MDEsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2MDAsIHJvb3QgdGFibGUgPSAweDI0
ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg3MDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNToz
NDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MDAs
IHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMDAsIHJvb3Qg
dGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gU2NydWJiaW5nIEZyZWUgUkFNOiAuLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLmRvbmUuDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0Nl0gSW5pdGlhbCBsb3cg
bWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuDQooWEVOKSBbMjAx
Mi0xMi0xNCAxNTozNDo0Nl0gU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0Nl0gR3Vlc3QgTG9nbGV2ZWw6IEFsbA0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDZdIFhlbiBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLg0KKFhFTikgWzIwMTIt
MTItMTQgMTU6MzQ6NDZdICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1h
JyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikgWzIwMTItMTIt
MTQgMTU6MzQ6NDZdIEZyZWVkIDI1MmtCIGluaXQgbWVtb3J5Lg0KbWFwcGluZyBrZXJuZWwg
aW50byBwaHlzaWNhbCBtZW1vcnkNCmFib3V0IHRvIGdldCBzdGFydGVkLi4uDQpbICAgIDAu
MDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAw
MDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGlu
dXggdmVyc2lvbiAzLjcuMHJjMC0yMDEyMTIxNC1uZXRkZWJ1ZyAocm9vdEBzZXJ2ZWVyc3Rl
cnRqZSkgKGdjYyB2ZXJzaW9uIDQuNC41IChEZWJpYW4gNC40LjUtOCkgKSAjMSBTTVAgUFJF
RU1QVCBGcmkgRGVjIDE0IDEwOjA1OjI3IENFVCAyMDEyDQpbICAgIDAuMDAwMDAwXSBDb21t
YW5kIGxpbmU6IHJvb3Q9L2Rldi9tYXBwZXIvc2VydmVlcnN0ZXJ0amUtcm9vdCBybyB2ZXJi
b3NlIG1lbT0xMDI0TSBjb25zb2xlPWh2YzAgY29uc29sZT10dHkwIG5vbW9kZXNldCB2Z2E9
Nzk0IHZpZGVvPXZlc2FmYiBhY3BpX2VuZm9yY2VfcmVzb3VyY2VzPWxheCByODE2OS51c2Vf
ZGFjPTEgZWFybHlwcmludGs9eGVuIG1heF9sb29wPTUwIGxvb3BfbWF4X3BhcnQ9MTAgeGVu
LXBjaWJhY2suaGlkZT0oMDM6MDYuMCkoMDQ6MDAuKikoMDU6MDAuKikoMDY6MDAuKikoMGE6
MDEuKikgZGVidWcgbG9nbGV2ZWw9MTANClsgICAgMC4wMDAwMDBdIEZyZWVpbmcgOWYtMTAw
IHBmbiByYW5nZTogOTcgcGFnZXMgZnJlZWQNClsgICAgMC4wMDAwMDBdIFJlbGVhc2VkIDk3
IHBhZ2VzIG9mIHVudXNlZCBtZW1vcnkNClsgICAgMC4wMDAwMDBdIFNldCAzMjc4ODkgcGFn
ZShzKSB0byAxLTEgbWFwcGluZw0KWyAgICAwLjAwMDAwMF0gUG9wdWxhdGluZyA0MDAwMC00
MDA2MSBwZm4gcmFuZ2U6IDk3IHBhZ2VzIGFkZGVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiBC
SU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSBYZW46IFtt
ZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDAwMDA5ZWZmZl0gdXNhYmxlDQpbICAg
IDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDlmMDAwLTB4MDAwMDAwMDAwMDBm
ZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAx
MDAwMDAtMHgwMDAwMDAwMDQwMDYwZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIFhlbjog
W21lbSAweDAwMDAwMDAwNDAwNjEwMDAtMHgwMDAwMDAwMGFmZjhmZmZmXSB1bnVzYWJsZQ0K
WyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBhZmY5MDAwMC0weDAwMDAwMDAw
YWZmOWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAw
MDBhZmY5ZTAwMC0weDAwMDAwMDAwYWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAw
XSBYZW46IFttZW0gMHgwMDAwMDAwMGFmZmUwMDAwLTB4MDAwMDAwMDBhZmZmZmZmZl0gcmVz
ZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVjMDAwMDAtMHgw
MDAwMDAwMGZlYzAwZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4
MDAwMDAwMDBmZWMyMDAwMC0weDAwMDAwMDAwZmVjMjBmZmZdIHJlc2VydmVkDQpbICAgIDAu
MDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAwMDBmZWUwMGZm
Zl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmZlMDAw
MDAtMHgwMDAwMDAwMGZmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBb
bWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVudXNhYmxlDQpb
ICAgIDAuMDAwMDAwXSBib290Y29uc29sZSBbeGVuYm9vdDBdIGVuYWJsZWQNClsgICAgMC4w
MDAwMDBdIE5YIChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAw
LjAwMDAwMF0gZTgyMDogdXNlci1kZWZpbmVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAu
MDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwMDAwOWVm
ZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAwOWYw
MDAtMHgwMDAwMDAwMDAwMGZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjog
W21lbSAweDAwMDAwMDAwMDAxMDAwMDAtMHgwMDAwMDAwMDNmZmZmZmZmXSB1c2FibGUNClsg
ICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMDQwMDYxMDAwLTB4MDAwMDAwMDBh
ZmY4ZmZmZl0gdW51c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAw
MGFmZjkwMDAwLTB4MDAwMDAwMDBhZmY5ZGZmZl0gQUNQSSBkYXRhDQpbICAgIDAuMDAwMDAw
XSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBhZmY5ZTAwMC0weDAwMDAwMDAwYWZmZGZmZmZdIEFD
UEkgTlZTDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBhZmZlMDAwMC0w
eDAwMDAwMDAwYWZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVt
IDB4MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAwZmVjMDBmZmZdIHJlc2VydmVkDQpbICAg
IDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWMyMDAwMC0weDAwMDAwMDAwZmVj
MjBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBm
ZWUwMDAwMC0weDAwMDAwMDAwZmVlMDBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1
c2VyOiBbbWVtIDB4MDAwMDAwMDBmZmUwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2Vy
dmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAw
MDAwMDAyNGZmZmZmZmZdIHVudXNhYmxlDQpbICAgIDAuMDAwMDAwXSBETUkgcHJlc2VudC4N
ClsgICAgMC4wMDAwMDBdIERNSTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDAp
ICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgICAgMC4wMDAwMDBdIGU4MjA6IHVwZGF0
ZSBbbWVtIDB4MDAwMDAwMDAtMHgwMDAwZmZmZl0gdXNhYmxlID09PiByZXNlcnZlZA0KWyAg
ICAwLjAwMDAwMF0gZTgyMDogcmVtb3ZlIFttZW0gMHgwMDBhMDAwMC0weDAwMGZmZmZmXSB1
c2FibGUNClsgICAgMC4wMDAwMDBdIE5vIEFHUCBicmlkZ2UgZm91bmQNClsgICAgMC4wMDAw
MDBdIGU4MjA6IGxhc3RfcGZuID0gMHg0MDAwMCBtYXhfYXJjaF9wZm4gPSAweDQwMDAwMDAw
MA0KWyAgICAwLjAwMDAwMF0gaW5pdGlhbCBtZW1vcnkgbWFwcGVkOiBbbWVtIDB4MDAwMDAw
MDAtMHgwMzlhMGZmZl0NClsgICAgMC4wMDAwMDBdIEJhc2UgbWVtb3J5IHRyYW1wb2xpbmUg
YXQgW2ZmZmY4ODAwMDAwOTkwMDBdIDk5MDAwIHNpemUgMjQ1NzYNClsgICAgMC4wMDAwMDBd
IGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHgwMDAwMDAwMC0weDNmZmZmZmZmXQ0KWyAg
ICAwLjAwMDAwMF0gIFttZW0gMHgwMDAwMDAwMC0weDNmZmZmZmZmXSBwYWdlIDRrDQpbICAg
IDAuMDAwMDAwXSBrZXJuZWwgZGlyZWN0IG1hcHBpbmcgdGFibGVzIHVwIHRvIDB4M2ZmZmZm
ZmYgQCBbbWVtIDB4MDJhZjMwMDAtMHgwMmNmNGZmZl0NClsgICAgMC4wMDAwMDBdIHhlbjog
c2V0dGluZyBSVyB0aGUgcmFuZ2UgMmNkMzAwMCAtIDJjZjUwMDANClsgICAgMC4wMDAwMDBd
IFJBTURJU0s6IFttZW0gMHgwMmNmNTAwMC0weDAzOWEwZmZmXQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogUlNEUCAwMDAwMDAwMDAwMGZiMTAwIDAwMDE0ICh2MDAgQUNQSUFNKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogUlNEVCAwMDAwMDAwMGFmZjkwMDAwIDAwMDQ4ICh2MDEgTVNJICAg
IE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBGQUNQIDAwMDAwMDAwYWZmOTAyMDAgMDAwODQgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAx
MDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IERTRFQgMDAwMDAw
MDBhZmY5MDVlMCAwOTQyNyAodjAxICBBNzY0MCBBNzY0MDEwMCAwMDAwMDEwMCBJTlRMIDIw
MDUxMTE3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRkFDUyAwMDAwMDAwMGFmZjllMDAwIDAw
MDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIDAwMDAwMDAwYWZmOTAzOTAgMDAwODgg
KHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4w
MDAwMDBdIEFDUEk6IE1DRkcgMDAwMDAwMDBhZmY5MDQyMCAwMDAzQyAodjAxIDc2NDBNUyBP
RU1NQ0ZHICAyMDEwMDkxMyBNU0ZUIDAwMDAwMDk3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
U0xJQyAwMDAwMDAwMGFmZjkwNDYwIDAwMTc2ICh2MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAw
OTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBPRU1CIDAwMDAwMDAw
YWZmOWUwNDAgMDAwNzIgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAw
MDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNSQVQgMDAwMDAwMDBhZmY5YTVlMCAwMDEw
OCAodjAzIEFNRCAgICBGQU1fRl8xMCAwMDAwMDAwMiBBTUQgIDAwMDAwMDAxKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogSFBFVCAwMDAwMDAwMGFmZjlhNmYwIDAwMDM4ICh2MDEgNzY0ME1T
IE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBJVlJTIDAwMDAwMDAwYWZmOWE3MzAgMDAwRjggKHYwMSAgQU1EICAgICBSRDg5MFMgMDAy
MDIwMzEgQU1EICAwMDAwMDAwMCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAw
MDBhZmY5YTgzMCAwMERBNCAodjAxIEEgTSBJICBQT1dFUk5PVyAwMDAwMDAwMSBBTUQgIDAw
MDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNzIDB4ZmVl
MDAwMDANClsgICAgMC4wMDAwMDBdIE5VTUEgdHVybmVkIG9mZg0KWyAgICAwLjAwMDAwMF0g
RmFraW5nIGEgbm9kZSBhdCBbbWVtIDB4MDAwMDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwM2Zm
ZmZmZmZdDQpbICAgIDAuMDAwMDAwXSBJbml0bWVtIHNldHVwIG5vZGUgMCBbbWVtIDB4MDAw
MDAwMDAtMHgzZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdICAgTk9ERV9EQVRBIFttZW0gMHgz
ZmZmNTAwMC0weDNmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gWm9uZSByYW5nZXM6DQpbICAg
IDAuMDAwMDAwXSAgIERNQSAgICAgIFttZW0gMHgwMDAxMDAwMC0weDAwZmZmZmZmXQ0KWyAg
ICAwLjAwMDAwMF0gICBETUEzMiAgICBbbWVtIDB4MDEwMDAwMDAtMHhmZmZmZmZmZl0NClsg
ICAgMC4wMDAwMDBdICAgTm9ybWFsICAgZW1wdHkNClsgICAgMC4wMDAwMDBdIE1vdmFibGUg
em9uZSBzdGFydCBmb3IgZWFjaCBub2RlDQpbICAgIDAuMDAwMDAwXSBFYXJseSBtZW1vcnkg
bm9kZSByYW5nZXMNClsgICAgMC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAxMDAw
MC0weDAwMDllZmZmXQ0KWyAgICAwLjAwMDAwMF0gICBub2RlICAgMDogW21lbSAweDAwMTAw
MDAwLTB4M2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSBPbiBub2RlIDAgdG90YWxwYWdlczog
MjYyMDMxDQpbICAgIDAuMDAwMDAwXSAgIERNQSB6b25lOiA2NCBwYWdlcyB1c2VkIGZvciBt
ZW1tYXANClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDYgcGFnZXMgcmVzZXJ2ZWQNClsg
ICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDM5MTMgcGFnZXMsIExJRk8gYmF0Y2g6MA0KWyAg
ICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA0MDMyIHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0K
WyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiAyNTQwMTYgcGFnZXMsIExJRk8gYmF0Y2g6
MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4ODA4DQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMb2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5h
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGlj
X2lkWzB4MDFdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDAzXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDVdIGxhcGljX2lkWzB4MDRdIGVuYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA2XSBsYXBpY19p
ZFsweDA1XSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA2
XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9B
UElDWzBdOiBhcGljX2lkIDYsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJ
IDAtMjMNClsgICAgMC4wMDAwMDBdIEFDUEk6IElPQVBJQyAoaWRbMHgwN10gYWRkcmVzc1sw
eGZlYzIwMDAwXSBnc2lfYmFzZVsyNF0pDQpbICAgIDAuMDAwMDAwXSBJT0FQSUNbMV06IGFw
aWNfaWQgNywgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzIwMDAwLCBHU0kgMjQtNTUNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IElOVF9TUkNfT1ZSIChidXMgMCBidXNfaXJxIDAgZ2xvYmFs
X2lycSAyIGRmbCBkZmwpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVz
IDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJUlEy
IHVzZWQgYnkgb3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJUlE5IHVzZWQgYnkg
b3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNv
bmZpZ3VyYXRpb24gaW5mb3JtYXRpb24NClsgICAgMC4wMDAwMDBdIEFDUEk6IEhQRVQgaWQ6
IDB4ODMwMCBiYXNlOiAweGZlZDAwMDAwDQpbICAgIDAuMDAwMDAwXSBzbXBib290OiBBbGxv
d2luZyA2IENQVXMsIDAgaG90cGx1ZyBDUFVzDQpbICAgIDAuMDAwMDAwXSBucl9pcnFzX2dz
aTogNzINClsgICAgMC4wMDAwMDBdIGU4MjA6IFttZW0gMHhiMDAwMDAwMC0weGZlYmZmZmZm
XSBhdmFpbGFibGUgZm9yIFBDSSBkZXZpY2VzDQpbICAgIDAuMDAwMDAwXSBCb290aW5nIHBh
cmF2aXJ0dWFsaXplZCBrZXJuZWwgb24gWGVuDQpbICAgIDAuMDAwMDAwXSBYZW4gdmVyc2lv
bjogNC4zLXVuc3RhYmxlIChwcmVzZXJ2ZS1BRCkNClsgICAgMC4wMDAwMDBdIHNldHVwX3Bl
cmNwdTogTlJfQ1BVUzo4IG5yX2NwdW1hc2tfYml0czo4IG5yX2NwdV9pZHM6NiBucl9ub2Rl
X2lkczoxDQpbICAgIDAuMDAwMDAwXSBQRVJDUFU6IEVtYmVkZGVkIDI3IHBhZ2VzL2NwdSBA
ZmZmZjg4MDAzZjgwMDAwMCBzODEzNDQgcjgxOTIgZDIxMDU2IHUyNjIxNDQNClsgICAgMC4w
MDAwMDBdIHBjcHUtYWxsb2M6IHM4MTM0NCByODE5MiBkMjEwNTYgdTI2MjE0NCBhbGxvYz0x
KjIwOTcxNTINClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwIDEgMiAzIDQgNSAt
IC0gDQpbICAgIDQuMjY5NDk4XSBCdWlsdCAxIHpvbmVsaXN0cyBpbiBOb2RlIG9yZGVyLCBt
b2JpbGl0eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2VzOiAyNTc5MjkNClsgICAgNC4yNjk1
MDFdIFBvbGljeSB6b25lOiBETUEzMg0KWyAgICA0LjI2OTUwOV0gS2VybmVsIGNvbW1hbmQg
bGluZTogcm9vdD0vZGV2L21hcHBlci9zZXJ2ZWVyc3RlcnRqZS1yb290IHJvIHZlcmJvc2Ug
bWVtPTEwMjRNIGNvbnNvbGU9aHZjMCBjb25zb2xlPXR0eTAgbm9tb2Rlc2V0IHZnYT03OTQg
dmlkZW89dmVzYWZiIGFjcGlfZW5mb3JjZV9yZXNvdXJjZXM9bGF4IHI4MTY5LnVzZV9kYWM9
MSBlYXJseXByaW50az14ZW4gbWF4X2xvb3A9NTAgbG9vcF9tYXhfcGFydD0xMCB4ZW4tcGNp
YmFjay5oaWRlPSgwMzowNi4wKSgwNDowMC4qKSgwNTowMC4qKSgwNjowMC4qKSgwYTowMS4q
KSBkZWJ1ZyBsb2dsZXZlbD0xMA0KWyAgICA0LjI2OTY3Ml0gUElEIGhhc2ggdGFibGUgZW50
cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzKQ0KWyAgICA0LjI2OTY3N10gX19l
eF90YWJsZSBhbHJlYWR5IHNvcnRlZCwgc2tpcHBpbmcgc29ydA0KWyAgICA0LjMxMDk5Nl0g
c29mdHdhcmUgSU8gVExCIFttZW0gMHgzYTYwMDAwMC0weDNlNWZmZmZmXSAoNjRNQikgbWFw
cGVkIGF0IFtmZmZmODgwMDNhNjAwMDAwLWZmZmY4ODAwM2U1ZmZmZmZdDQpbICAgIDQuMzE2
MjA2XSBNZW1vcnk6IDkyMjI4MGsvMTA0ODU3NmsgYXZhaWxhYmxlICg5MjQyayBrZXJuZWwg
Y29kZSwgNDUyayBhYnNlbnQsIDEyNTg0NGsgcmVzZXJ2ZWQsIDU5NjJrIGRhdGEsIDcxMmsg
aW5pdCkNClsgICAgNC4zMTYzMDJdIFNMVUI6IEdlbnNsYWJzPTE1LCBIV2FsaWduPTY0LCBP
cmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz02LCBOb2Rlcz0xDQpbICAgIDQuMzE2Mzc2
XSBQcmVlbXB0aWJsZSBoaWVyYXJjaGljYWwgUkNVIGltcGxlbWVudGF0aW9uLg0KWyAgICA0
LjMxNjM3N10gCVJDVSBkeW50aWNrLWlkbGUgZ3JhY2UtcGVyaW9kIGFjY2VsZXJhdGlvbiBp
cyBlbmFibGVkLg0KWyAgICA0LjMxNjM3OV0gCUFkZGl0aW9uYWwgcGVyLUNQVSBpbmZvIHBy
aW50ZWQgd2l0aCBzdGFsbHMuDQpbICAgIDQuMzE2MzgxXSAJUkNVIHJlc3RyaWN0aW5nIENQ
VXMgZnJvbSBOUl9DUFVTPTggdG8gbnJfY3B1X2lkcz02Lg0KWyAgICA0LjMxNjQxOV0gTlJf
SVJRUzo0MzUyIG5yX2lycXM6MTI3MiAxNg0KWyAgICA0LjMxNjQ5M10geGVuOiBzY2kgb3Zl
cnJpZGU6IGdsb2JhbF9pcnE9OSB0cmlnZ2VyPTAgcG9sYXJpdHk9MQ0KWyAgICA0LjMxNjQ5
Nl0geGVuOiByZWdpc3RlcmluZyBnc2kgOSB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAg
ICA0LjMxNjUyOF0geGVuOiAtLT4gcGlycT05IC0+IGlycT05IChnc2k9OSkNCihYRU4pIFsy
MDEyLTEyLTE0IDE1OjM0OjQ2XSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ni05IC0+IDB4NjAgLT4gSVJRIDkgTW9kZToxIEFjdGl2ZToxKQ0KWyAgICA0LjMxNjU2M10g
eGVuOiBhY3BpIHNjaSA5DQpbICAgIDQuMzE2NTY4XSB4ZW46IC0tPiBwaXJxPTEgLT4gaXJx
PTEgKGdzaT0xKQ0KWyAgICA0LjMxNjU3M10geGVuOiAtLT4gcGlycT0yIC0+IGlycT0yIChn
c2k9MikNClsgICAgNC4zMTY1NzddIHhlbjogLS0+IHBpcnE9MyAtPiBpcnE9MyAoZ3NpPTMp
DQpbICAgIDQuMzE2NTgxXSB4ZW46IC0tPiBwaXJxPTQgLT4gaXJxPTQgKGdzaT00KQ0KWyAg
ICA0LjMxNjU4NF0geGVuOiAtLT4gcGlycT01IC0+IGlycT01IChnc2k9NSkNClsgICAgNC4z
MTY1ODhdIHhlbjogLS0+IHBpcnE9NiAtPiBpcnE9NiAoZ3NpPTYpDQpbICAgIDQuMzE2NTky
XSB4ZW46IC0tPiBwaXJxPTcgLT4gaXJxPTcgKGdzaT03KQ0KWyAgICA0LjMxNjU5Nl0geGVu
OiAtLT4gcGlycT04IC0+IGlycT04IChnc2k9OCkNClsgICAgNC4zMTY2MDBdIHhlbjogLS0+
IHBpcnE9MTAgLT4gaXJxPTEwIChnc2k9MTApDQpbICAgIDQuMzE2NjA0XSB4ZW46IC0tPiBw
aXJxPTExIC0+IGlycT0xMSAoZ3NpPTExKQ0KWyAgICA0LjMxNjYwOF0geGVuOiAtLT4gcGly
cT0xMiAtPiBpcnE9MTIgKGdzaT0xMikNClsgICAgNC4zMTY2MTJdIHhlbjogLS0+IHBpcnE9
MTMgLT4gaXJxPTEzIChnc2k9MTMpDQpbICAgIDQuMzE2NjE2XSB4ZW46IC0tPiBwaXJxPTE0
IC0+IGlycT0xNCAoZ3NpPTE0KQ0KWyAgICA0LjMxNjYyMF0geGVuOiAtLT4gcGlycT0xNSAt
PiBpcnE9MTUgKGdzaT0xNSkNClsgICAgNC4zMTY3MDJdIENvbnNvbGU6IGNvbG91ciBkdW1t
eSBkZXZpY2UgODB4MjUNClsgICAgNC4zMTY3MDddIGNvbnNvbGUgW3R0eTBdIGVuYWJsZWQs
IGJvb3Rjb25zb2xlIGRpc2FibGVkDQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAg
c3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiAzLjcuMHJjMC0yMDEy
MTIxNC1uZXRkZWJ1ZyAocm9vdEBzZXJ2ZWVyc3RlcnRqZSkgKGdjYyB2ZXJzaW9uIDQuNC41
IChEZWJpYW4gNC40LjUtOCkgKSAjMSBTTVAgUFJFRU1QVCBGcmkgRGVjIDE0IDEwOjA1OjI3
IENFVCAyMDEyDQpbICAgIDAuMDAwMDAwXSBDb21tYW5kIGxpbmU6IHJvb3Q9L2Rldi9tYXBw
ZXIvc2VydmVlcnN0ZXJ0amUtcm9vdCBybyB2ZXJib3NlIG1lbT0xMDI0TSBjb25zb2xlPWh2
YzAgY29uc29sZT10dHkwIG5vbW9kZXNldCB2Z2E9Nzk0IHZpZGVvPXZlc2FmYiBhY3BpX2Vu
Zm9yY2VfcmVzb3VyY2VzPWxheCByODE2OS51c2VfZGFjPTEgZWFybHlwcmludGs9eGVuIG1h
eF9sb29wPTUwIGxvb3BfbWF4X3BhcnQ9MTAgeGVuLXBjaWJhY2suaGlkZT0oMDM6MDYuMCko
MDQ6MDAuKikoMDU6MDAuKikoMDY6MDAuKikoMGE6MDEuKikgZGVidWcgbG9nbGV2ZWw9MTAN
ClsgICAgMC4wMDAwMDBdIEZyZWVpbmcgOWYtMTAwIHBmbiByYW5nZTogOTcgcGFnZXMgZnJl
ZWQNClsgICAgMC4wMDAwMDBdIDEtMSBtYXBwaW5nIG9uIDlmLT4xMDANClsgICAgMC4wMDAw
MDBdIDEtMSBtYXBwaW5nIG9uIGFmZjkwLT4xMDAwMDANClsgICAgMC4wMDAwMDBdIFJlbGVh
c2VkIDk3IHBhZ2VzIG9mIHVudXNlZCBtZW1vcnkNClsgICAgMC4wMDAwMDBdIFNldCAzMjc4
ODkgcGFnZShzKSB0byAxLTEgbWFwcGluZw0KWyAgICAwLjAwMDAwMF0gUG9wdWxhdGluZyA0
MDAwMC00MDA2MSBwZm4gcmFuZ2U6IDk3IHBhZ2VzIGFkZGVkDQpbICAgIDAuMDAwMDAwXSBl
ODIwOiBCSU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSBY
ZW46IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDAwMDA5ZWZmZl0gdXNhYmxl
DQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDlmMDAwLTB4MDAwMDAw
MDAwMDBmZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAw
MDAwMDAxMDAwMDAtMHgwMDAwMDAwMDQwMDYwZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBd
IFhlbjogW21lbSAweDAwMDAwMDAwNDAwNjEwMDAtMHgwMDAwMDAwMGFmZjhmZmZmXSB1bnVz
YWJsZQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBhZmY5MDAwMC0weDAw
MDAwMDAwYWZmOWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4
MDAwMDAwMDBhZmY5ZTAwMC0weDAwMDAwMDAwYWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAu
MDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGFmZmUwMDAwLTB4MDAwMDAwMDBhZmZmZmZm
Zl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVjMDAw
MDAtMHgwMDAwMDAwMGZlYzAwZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBb
bWVtIDB4MDAwMDAwMDBmZWMyMDAwMC0weDAwMDAwMDAwZmVjMjBmZmZdIHJlc2VydmVkDQpb
ICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAwMDBm
ZWUwMGZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAw
ZmZlMDAwMDAtMHgwMDAwMDAwMGZmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0g
WGVuOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVudXNh
YmxlDQpbICAgIDAuMDAwMDAwXSBlODIwOiByZW1vdmUgW21lbSAweDQwMDAwMDAwLTB4ZmZm
ZmZmZmZmZmZmZmZmZV0gdXNhYmxlDQpbICAgIDAuMDAwMDAwXSBib290Y29uc29sZSBbeGVu
Ym9vdDBdIGVuYWJsZWQNClsgICAgMC4wMDAwMDBdIE5YIChFeGVjdXRlIERpc2FibGUpIHBy
b3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAwLjAwMDAwMF0gZTgyMDogdXNlci1kZWZpbmVkIHBo
eXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDAw
MDAwMDAwMC0weDAwMDAwMDAwMDAwOWVmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gdXNl
cjogW21lbSAweDAwMDAwMDAwMDAwOWYwMDAtMHgwMDAwMDAwMDAwMGZmZmZmXSByZXNlcnZl
ZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAxMDAwMDAtMHgwMDAw
MDAwMDNmZmZmZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAw
MDAwMDQwMDYxMDAwLTB4MDAwMDAwMDBhZmY4ZmZmZl0gdW51c2FibGUNClsgICAgMC4wMDAw
MDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMGFmZjkwMDAwLTB4MDAwMDAwMDBhZmY5ZGZmZl0g
QUNQSSBkYXRhDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBhZmY5ZTAw
MC0weDAwMDAwMDAwYWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBb
bWVtIDB4MDAwMDAwMDBhZmZlMDAwMC0weDAwMDAwMDAwYWZmZmZmZmZdIHJlc2VydmVkDQpb
ICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAw
ZmVjMDBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAw
MDBmZWMyMDAwMC0weDAwMDAwMDAwZmVjMjBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAw
XSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWUwMDAwMC0weDAwMDAwMDAwZmVlMDBmZmZdIHJl
c2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZmUwMDAwMC0w
eDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVt
IDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVudXNhYmxlDQpbICAg
IDAuMDAwMDAwXSBETUkgcHJlc2VudC4NClsgICAgMC4wMDAwMDBdIERNSTogTVNJIE1TLTc2
NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsg
ICAgMC4wMDAwMDBdIGU4MjA6IHVwZGF0ZSBbbWVtIDB4MDAwMDAwMDAtMHgwMDAwZmZmZl0g
dXNhYmxlID09PiByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gZTgyMDogcmVtb3ZlIFttZW0g
MHgwMDBhMDAwMC0weDAwMGZmZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIE5vIEFHUCBi
cmlkZ2UgZm91bmQNClsgICAgMC4wMDAwMDBdIGU4MjA6IGxhc3RfcGZuID0gMHg0MDAwMCBt
YXhfYXJjaF9wZm4gPSAweDQwMDAwMDAwMA0KWyAgICAwLjAwMDAwMF0gaW5pdGlhbCBtZW1v
cnkgbWFwcGVkOiBbbWVtIDB4MDAwMDAwMDAtMHgwMzlhMGZmZl0NClsgICAgMC4wMDAwMDBd
IEJhc2UgbWVtb3J5IHRyYW1wb2xpbmUgYXQgW2ZmZmY4ODAwMDAwOTkwMDBdIDk5MDAwIHNp
emUgMjQ1NzYNClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHgw
MDAwMDAwMC0weDNmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgwMDAwMDAwMC0w
eDNmZmZmZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAwXSBrZXJuZWwgZGlyZWN0IG1hcHBp
bmcgdGFibGVzIHVwIHRvIDB4M2ZmZmZmZmYgQCBbbWVtIDB4MDJhZjMwMDAtMHgwMmNmNGZm
Zl0NClsgICAgMC4wMDAwMDBdIHhlbjogc2V0dGluZyBSVyB0aGUgcmFuZ2UgMmNkMzAwMCAt
IDJjZjUwMDANClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IFttZW0gMHgwMmNmNTAwMC0weDAz
OWEwZmZmXQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUlNEUCAwMDAwMDAwMDAwMGZiMTAwIDAw
MDE0ICh2MDAgQUNQSUFNKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUlNEVCAwMDAwMDAwMGFm
ZjkwMDAwIDAwMDQ4ICh2MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAw
OTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNQIDAwMDAwMDAwYWZmOTAyMDAgMDAwODQg
KHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4w
MDAwMDBdIEFDUEk6IERTRFQgMDAwMDAwMDBhZmY5MDVlMCAwOTQyNyAodjAxICBBNzY0MCBB
NzY0MDEwMCAwMDAwMDEwMCBJTlRMIDIwMDUxMTE3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
RkFDUyAwMDAwMDAwMGFmZjllMDAwIDAwMDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElD
IDAwMDAwMDAwYWZmOTAzOTAgMDAwODggKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMg
TVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IE1DRkcgMDAwMDAwMDBhZmY5
MDQyMCAwMDAzQyAodjAxIDc2NDBNUyBPRU1NQ0ZHICAyMDEwMDkxMyBNU0ZUIDAwMDAwMDk3
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogU0xJQyAwMDAwMDAwMGFmZjkwNDYwIDAwMTc2ICh2
MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBPRU1CIDAwMDAwMDAwYWZmOWUwNDAgMDAwNzIgKHYwMSA3NjQwTVMgQTc2
NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNS
QVQgMDAwMDAwMDBhZmY5YTVlMCAwMDEwOCAodjAzIEFNRCAgICBGQU1fRl8xMCAwMDAwMDAw
MiBBTUQgIDAwMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSFBFVCAwMDAwMDAwMGFm
ZjlhNmYwIDAwMDM4ICh2MDEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgMDAwMDAw
OTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJVlJTIDAwMDAwMDAwYWZmOWE3MzAgMDAwRjgg
KHYwMSAgQU1EICAgICBSRDg5MFMgMDAyMDIwMzEgQU1EICAwMDAwMDAwMCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAwMDBhZmY5YTgzMCAwMERBNCAodjAxIEEgTSBJICBQ
T1dFUk5PVyAwMDAwMDAwMSBBTUQgIDAwMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TG9jYWwgQVBJQyBhZGRyZXNzIDB4ZmVlMDAwMDANClsgICAgMC4wMDAwMDBdIE5VTUEgdHVy
bmVkIG9mZg0KWyAgICAwLjAwMDAwMF0gRmFraW5nIGEgbm9kZSBhdCBbbWVtIDB4MDAwMDAw
MDAwMDAwMDAwMC0weDAwMDAwMDAwM2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSBJbml0bWVt
IHNldHVwIG5vZGUgMCBbbWVtIDB4MDAwMDAwMDAtMHgzZmZmZmZmZl0NClsgICAgMC4wMDAw
MDBdICAgTk9ERV9EQVRBIFttZW0gMHgzZmZmNTAwMC0weDNmZmZmZmZmXQ0KWyAgICAwLjAw
MDAwMF0gWm9uZSByYW5nZXM6DQpbICAgIDAuMDAwMDAwXSAgIERNQSAgICAgIFttZW0gMHgw
MDAxMDAwMC0weDAwZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gICBETUEzMiAgICBbbWVtIDB4
MDEwMDAwMDAtMHhmZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdICAgTm9ybWFsICAgZW1wdHkN
ClsgICAgMC4wMDAwMDBdIE1vdmFibGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlDQpbICAg
IDAuMDAwMDAwXSBFYXJseSBtZW1vcnkgbm9kZSByYW5nZXMNClsgICAgMC4wMDAwMDBdICAg
bm9kZSAgIDA6IFttZW0gMHgwMDAxMDAwMC0weDAwMDllZmZmXQ0KWyAgICAwLjAwMDAwMF0g
ICBub2RlICAgMDogW21lbSAweDAwMTAwMDAwLTB4M2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAw
XSBPbiBub2RlIDAgdG90YWxwYWdlczogMjYyMDMxDQpbICAgIDAuMDAwMDAwXSAgIERNQSB6
b25lOiA2NCBwYWdlcyB1c2VkIGZvciBtZW1tYXANClsgICAgMC4wMDAwMDBdICAgRE1BIHpv
bmU6IDYgcGFnZXMgcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDM5MTMg
cGFnZXMsIExJRk8gYmF0Y2g6MA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA0MDMy
IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiAy
NTQwMTYgcGFnZXMsIExJRk8gYmF0Y2g6MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRp
bWVyIElPIFBvcnQ6IDB4ODA4DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMb2NhbCBBUElDIGFk
ZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MDFdIGVuYWJsZWQpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19pZFsweDAyXSBlbmFibGVk
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRb
MHgwM10gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MDVdIGxhcGljX2lkWzB4MDRdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDA2XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogSU9BUElDIChpZFsweDA2XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNl
WzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGljX2lkIDYsIHZlcnNpb24gMzMs
IGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNClsgICAgMC4wMDAwMDBdIEFDUEk6IElP
QVBJQyAoaWRbMHgwN10gYWRkcmVzc1sweGZlYzIwMDAwXSBnc2lfYmFzZVsyNF0pDQpbICAg
IDAuMDAwMDAwXSBJT0FQSUNbMV06IGFwaWNfaWQgNywgdmVyc2lvbiAzMywgYWRkcmVzcyAw
eGZlYzIwMDAwLCBHU0kgMjQtDQpbICAgNjQuNTk4NjI4XSBJTkZPOiByY3VfcHJlZW1wdCBk
ZXRlY3RlZCBzdGFsbHMgb24gQ1BVcy90YXNrczoNClsgICA2NC41OTg2NzZdIAkwOiAoMSBH
UHMgYmVoaW5kKSBpZGxlPWFlZC8xNDAwMDAwMDAwMDAwMDAvMCBkcmFpbj01IC4gdGltZXIg
bm90IHBlbmRpbmcNClsgICA2NC41OTg2ODNdIAkoZGV0ZWN0ZWQgYnkgMSwgdD0xODAwNCBq
aWZmaWVzLCBnPTE4NDQ2NzQ0MDczNzA5NTUxNDE0LCBjPTE4NDQ2NzQ0MDczNzA5NTUxNDEz
LCBxPTE2MikNClsgICA2NC41OTg2OTJdIHNlbmRpbmcgTk1JIHRvIGFsbCBDUFVzOg0KWyAg
IDY0LjU5ODcxNl0geGVuOiB2ZWN0b3IgMHgyIGlzIG5vdCBpbXBsZW1lbnRlZA0K
------------1071AB00421F249D1
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------1071AB00421F249D1--



From xen-devel-bounces@lists.xen.org Fri Dec 14 15:56:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 15:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjXcT-0003lg-15; Fri, 14 Dec 2012 15:56:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TjXcR-0003kY-1H
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 15:56:11 +0000
Received: from [85.158.139.211:45616] by server-5.bemta-5.messagelabs.com id
	63/26-22648-A1C4BC05; Fri, 14 Dec 2012 15:56:10 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355500567!20568206!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31300 invoked from network); 14 Dec 2012 15:56:08 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	14 Dec 2012 15:56:08 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:55293 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TjXgM-0006Hg-VA; Fri, 14 Dec 2012 17:00:15 +0100
Date: Fri, 14 Dec 2012 16:55:57 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1995613404.20121214165557@eikelenboom.it>
To: xen-devel <xen-devel@lists.xen.org>, linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------1071AB00421F249D1"
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
	dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------1071AB00421F249D1
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi Konrad,

I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.
The boot stalls:

[    0.000000] ACPI: PM-Timer IO Port: 0x808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
[    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
[    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
[   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
[   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
[   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
[   64.598692] sending NMI to all CPUs:
[   64.598716] xen: vector 0x2 is not implemented


Perhaps an interesting line is the incomplete one (no end of range), and it stalls there for some time before the kernel reports the stall itself:
[    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-


The exact same config with 3.7.0 as kernel works fine.
Complete serial log is attached.

--

Sander



------------1071AB00421F249D1
Content-Type: application/octet-stream;
 name="serial.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="serial.log"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fX18gICAgICAgICAgICAgICAgICAgIF8g
ICAgICAgIF8gICAgIF8gICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9fXyAv
ICAgIF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18gDQogIFwgIC8vIF8g
XCAnXyBcICB8IHx8IHxfICAgfF8gXCBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdf
IFx8IHwvIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgX19fKSB8X198IHxffCB8
IHwgfCBcX18gXCB8fCAoX3wgfCB8XykgfCB8ICBfXy8NCiAvXy9cX1xfX198X3wgfF98ICAg
IHxffChfKV9fX18vICAgIFxfXyxffF98IHxffF9fXy9cX19cX18sX3xfLl9fL3xffFxfX198
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4zLXVuc3RhYmxl
IChyb290QGR5bmRucy5vcmcpIChnY2MgKERlYmlhbiA0LjQuNS04KSA0LjQuNSkgVHVlIERl
YyAxMSAxNToyNToyNCBDRVQgMjAxMg0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogTW9uIERl
YyAxMCAxMToxNjoxNyAyMDEyICswMDAwIDI2MjcwOjAzY2I3MWJjMzJmOQ0KKFhFTikgQm9v
dGxvYWRlcjogR1JVQiAxLjk4KzIwMTAwODA0LTE0K3NxdWVlemUxDQooWEVOKSBDb21tYW5k
IGxpbmU6IGRvbTBfbWVtPTEwMjRNLG1heDoxMDI0TSBsb2dsdmw9YWxsIGxvZ2x2bF9ndWVz
dD1hbGwgY29uc29sZV90aW1lc3RhbXBzIHZnYT1nZngtMTI4MHgxMDI0eDMyIGNwdWlkbGUg
Y3B1ZnJlcT14ZW4gbm9yZWJvb3QgZGVidWcgbGFwaWM9ZGVidWcgYXBpY192ZXJib3NpdHk9
ZGVidWcgYXBpYz1kZWJ1ZyBpb21tdT1vbix2ZXJib3NlLGRlYnVnLGFtZC1pb21tdS1kZWJ1
ZyBjb20xPTM4NDAwLDhuMSBjb25zb2xlPXZnYSxjb20xDQooWEVOKSBWaWRlbyBpbmZvcm1h
dGlvbjoNCihYRU4pICBWR0EgaXMgZ3JhcGhpY3MgbW9kZSAxMjgweDEwMjQsIDMyIGJwcA0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNv
bmRzDQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25h
dHVyZXMNCihYRU4pICBGb3VuZCAzIEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVO
KSBYZW4tZTgyMCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAw
MDAwMDlmMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZjAwMCAtIDAwMDAwMDAw
MDAwYTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTQwMDAgLSAwMDAwMDAw
MDAwMTAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAw
MDBhZmY5MDAwMCAodXNhYmxlKQ0KKFhFTikgIDAwMDAwMDAwYWZmOTAwMDAgLSAwMDAwMDAw
MGFmZjllMDAwIChBQ1BJIGRhdGEpDQooWEVOKSAgMDAwMDAwMDBhZmY5ZTAwMCAtIDAwMDAw
MDAwYWZmZTAwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAwMDAwYWZmZTAwMDAgLSAwMDAw
MDAwMGIwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZmZTAwMDAwIC0gMDAw
MDAwMDEwMDAwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDEwMDAwMDAwMCAtIDAw
MDAwMDAyNTAwMDAwMDAgKHVzYWJsZSkNCihYRU4pIEFDUEk6IFJTRFAgMDAwRkIxMDAsIDAw
MTQgKHIwIEFDUElBTSkNCihYRU4pIEFDUEk6IFJTRFQgQUZGOTAwMDAsIDAwNDggKHIxIE1T
SSAgICBPRU1TTElDICAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRkFD
UCBBRkY5MDIwMCwgMDA4NCAocjEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQgICAg
ICAgOTcpDQooWEVOKSBBQ1BJOiBEU0RUIEFGRjkwNUUwLCA5NDI3IChyMSAgQTc2NDAgQTc2
NDAxMDAgICAgICAxMDAgSU5UTCAyMDA1MTExNykNCihYRU4pIEFDUEk6IEZBQ1MgQUZGOUUw
MDAsIDAwNDANCihYRU4pIEFDUEk6IEFQSUMgQUZGOTAzOTAsIDAwODggKHIxIDc2NDBNUyBB
NzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogTUNGRyBBRkY5
MDQyMCwgMDAzQyAocjEgNzY0ME1TIE9FTU1DRkcgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcp
DQooWEVOKSBBQ1BJOiBTTElDIEFGRjkwNDYwLCAwMTc2IChyMSBNU0kgICAgT0VNU0xJQyAg
MjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IE9FTUIgQUZGOUUwNDAsIDAw
NzIgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQ0KKFhFTikg
QUNQSTogU1JBVCBBRkY5QTVFMCwgMDEwOCAocjMgQU1EICAgIEZBTV9GXzEwICAgICAgICAy
IEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBIUEVUIEFGRjlBNkYwLCAwMDM4IChyMSA3
NjQwTVMgT0VNSFBFVCAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IElW
UlMgQUZGOUE3MzAsIDAwRjggKHIxICBBTUQgICAgIFJEODkwUyAgIDIwMjAzMSBBTUQgICAg
ICAgICAwKQ0KKFhFTikgQUNQSTogU1NEVCBBRkY5QTgzMCwgMERBNCAocjEgQSBNIEkgIFBP
V0VSTk9XICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBTeXN0ZW0gUkFNOiA4MTkx
TUIgKDgzODc3NzJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBOb2RlIDAN
CihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBY
TSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNCAtPiBOb2RlIDANCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgNSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IE5vZGUgMCBQ
WE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwLWIwMDAwMDAw
DQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTI1MDAwMDAwMA0KKFhFTikg
TlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAyNGQ5NjAwMDAgLSAyNGQ5NjMwMDAN
CihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgRG9tYWlu
IGhlYXAgaW5pdGlhbGlzZWQNCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIgYXQgMHhmYjAw
MDAwMCwgbWFwcGVkIHRvIDB4ZmZmZjgyYzAwMDA4MTAwMCwgdXNpbmcgNjE0NGssIHRvdGFs
IDE0MzM2aw0KKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwgbGluZWxlbmd0
aD01MTIwLCBmb250IDh4MTYNCihYRU4pIHZlc2FmYjogVHJ1ZWNvbG9yOiBzaXplPTg6ODo4
OjgsIHNoaWZ0PTI0OjE2Ojg6MA0KKFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZm
NzgwDQooWEVOKSBETUkgcHJlc2VudC4NCihYRU4pIEFQSUMgYm9vdCBzdGF0ZSBpcyAneGFw
aWMnDQooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1U
aW1lciBJTyBQb3J0OiAweDgwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4
X2NudFs4MDQsMF0sIHBtMXhfZXZ0WzgwMCwwXQ0KKFhFTikgQUNQSTogICAgICAgICAgICAg
ICAgICB3YWtldXBfdmVjW2FmZjllMDBjXSwgdmVjX3NpemVbMjBdDQooWEVOKSBBQ1BJOiBM
b2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMCAw
OjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0g
bGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMSAwOjEwIEFQSUMg
dmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRb
MHgwMl0gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMiAwOjEwIEFQSUMgdmVyc2lvbiAx
Ng0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwM10gZW5h
YmxlZCkNCihYRU4pIFByb2Nlc3NvciAjMyAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikg
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkNCihY
RU4pIFByb2Nlc3NvciAjNCAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgwNl0gbGFwaWNfaWRbMHgwNV0gZW5hYmxlZCkNCihYRU4pIFByb2Nl
c3NvciAjNSAwOjEwIEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogSU9BUElDIChpZFsw
eDA2XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBd
OiBhcGljX2lkIDYsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwN10gYWRkcmVzc1sweGZlYzIwMDAwXSBnc2lf
YmFzZVsyNF0pDQooWEVOKSBJT0FQSUNbMV06IGFwaWNfaWQgNywgdmVyc2lvbiAzMywgYWRk
cmVzcyAweGZlYzIwMDAwLCBHU0kgMjQtNTUNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVO
KSBBQ1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQg
Ynkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVO
KSBFbmFibGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMiBJL08gQVBJQ3MNCihYRU4p
IEFDUEk6IEhQRVQgaWQ6IDB4ODMwMCBiYXNlOiAweGZlZDAwMDAwDQooWEVOKSBUYWJsZSBp
cyBub3QgZm91bmQhDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3Vy
YXRpb24gaW5mb3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNiBDUFVzICgwIGhvdHBs
dWcgQ1BVcykNCihYRU4pIG1hcHBlZCBBUElDIHRvIGZmZmY4MmMzZmZkZmIwMDAgKGZlZTAw
MDAwKQ0KKFhFTikgbWFwcGVkIElPQVBJQyB0byBmZmZmODJjM2ZmZGZhMDAwIChmZWMwMDAw
MCkNCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyYzNmZmRmOTAwMCAoZmVjMjAwMDAp
DQooWEVOKSBJUlEgbGltaXRzOiA1NiBHU0ksIDExMTIgTVNJL01TSS1YDQooWEVOKSBVc2lu
ZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBEZXRl
Y3RlZCAzMjAwLjEyNiBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGluZyBtZW1vcnkgc2hh
cmluZy4NCihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxl
ZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBlMDAwMDAwMCBzZWdt
ZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Ig
c2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZpOiBGb3VuZCBNU0kgY2FwYWJp
bGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkgVGFibGU6DQooWEVOKSBB
TUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4ZjgNCihY
RU4pIEFNRC1WaTogIFJldmlzaW9uIDB4MQ0KKFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0gMHg1
MA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRCAgDQooWEVOKSBBTUQtVmk6ICBPRU1fVGFi
bGVfSWQgUkQ4OTBTDQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgyMDIwMzENCihY
RU4pIEFNRC1WaTogIENyZWF0b3JfSWQgQU1EIA0KKFhFTikgQU1ELVZpOiAgQ3JlYXRvcl9S
ZXZpc2lvbiAwDQooWEVOKSBBTUQtVmk6IElWUlMgQmxvY2s6DQooWEVOKSBBTUQtVmk6ICBU
eXBlIDB4MTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4M2UNCihYRU4pIEFNRC1WaTogIExl
bmd0aCAweGM4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyDQooWEVOKSBBTUQtVmk6IElW
SEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgUmFuZ2U6IDAgLT4gMHgyDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweDEw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGIwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZf
SWQgMHgxOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERl
dmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBE
ZXZfSWQgMHg5MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihYRU4pIEFNRC1W
aTogIERldl9JZCAweGEwOA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIFJhbmdlOiAweGEwOCAtPiAweGFmZg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IEFsaWFzOiAweGEwMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihYRU4p
IEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHgyOA0KKFhFTikg
QU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeToNCihY
RU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgMHg4MDANCihY
RU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6
DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4MzAN
CihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50
cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIDB4
NzAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweDUwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDYwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgyDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg1OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQt
Vmk6ICBEZXZfSWQgMHg1MDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogIERldl9JZCBSYW5nZTogMHg1MDAgLT4gMHg1MDENCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4NjgNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4NDAwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6
IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFN
RC1WaTogIERldl9JZCAweDg4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDMNCihYRU4p
IEFNRC1WaTogIERldl9JZCAweDkwDQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBB
TUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OTAgLT4gMHg5Mg0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHg5OA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweDk4IC0+IDB4OWENCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAgRGV2X0lk
IDB4YTANCihYRU4pIEFNRC1WaTogIEZsYWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIDB4YTENCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIDB4YTINCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTog
SVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikgQU1E
LVZpOiAgRGV2X0lkIDB4YTMNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1W
aTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4Mg0KKFhFTikg
QU1ELVZpOiAgRGV2X0lkIDB4YTQNCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFN
RC1WaTogSVZIRCBEZXZpY2UgRW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDB4NDMNCihY
RU4pIEFNRC1WaTogIERldl9JZCAweDMwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhF
TikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDMwMCAtPiAweDNmZg0KKFhFTikgQU1ELVZp
OiAgRGV2X0lkIEFsaWFzOiAweGE0DQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAweGE1
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9JZCAw
eGE4DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERldl9J
ZCAweGE5DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2
aWNlIEVudHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDINCihYRU4pIEFNRC1WaTogIERl
dl9JZCAweDEwMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHgzDQooWEVOKSBBTUQtVmk6
ICBEZXZfSWQgMHhiMA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAg
RGV2X0lkIFJhbmdlOiAweGIwIC0+IDB4YjINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6DQooWEVOKSBBTUQtVmk6ICBUeXBlIDANCihYRU4pIEFNRC1WaTogIERldl9JZCAw
DQooWEVOKSBBTUQtVmk6ICBGbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5Og0KKFhFTikgQU1ELVZpOiAgVHlwZSAweDQ4DQooWEVOKSBBTUQtVmk6ICBEZXZfSWQg
MA0KKFhFTikgQU1ELVZpOiAgRmxhZ3MgMHhkNw0KKFhFTikgQU1ELVZpOiBJVkhEIERldmlj
ZSBFbnRyeToNCihYRU4pIEFNRC1WaTogIFR5cGUgMHg0OA0KKFhFTikgQU1ELVZpOiAgRGV2
X0lkIDANCihYRU4pIEFNRC1WaTogIEZsYWdzIDANCihYRU4pIEFNRC1WaTogSU9NTVUgMCBF
bmFibGVkLg0KKFhFTikgQU1ELVZpOiBFbmFibGluZyBnbG9iYWwgdmVjdG9yIG1hcA0KKFhF
TikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAtIERvbTAgbW9kZTogUmVs
YXhlZA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBW
RVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGluZyBJRDogMA0KKFhFTikgR2V0dGluZyBM
VlQwOiA3MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElO
VCBvbiBDUFUjMA0KKFhFTikgRVNSIHZhbHVlIGJlZm9yZSBlbmFibGluZyB2ZWN0b3I6IDB4
NCAgYWZ0ZXI6IDANCihYRU4pIEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVz
aW5nIG5ldyBBQ0sgbWV0aG9kDQooWEVOKSBpbml0IElPX0FQSUMgSVJRcw0KKFhFTikgIElP
LUFQSUMgKGFwaWNpZC1waW4pIDYtMCwgNi0xNiwgNi0xNywgNi0xOCwgNi0xOSwgNi0yMCwg
Ni0yMSwgNi0yMiwgNi0yMywgNy0wLCA3LTEsIDctMiwgNy0zLCA3LTQsIDctNSwgNy02LCA3
LTcsIDctOCwgNy05LCA3LTEwLCA3LTExLCA3LTEyLCA3LTEzLCA3LTE0LCA3LTE1LCA3LTE2
LCA3LTE3LCA3LTE4LCA3LTE5LCA3LTIwLCA3LTIxLCA3LTIyLCA3LTIzLCA3LTI0LCA3LTI1
LCA3LTI2LCA3LTI3LCA3LTI4LCA3LTI5LCA3LTMwLCA3LTMxIG5vdCBjb25uZWN0ZWQuDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4y
PS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhFTikgbnVtYmVy
IG9mIElPLUFQSUMgIzYgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIG51bWJlciBvZiBJTy1BUElD
ICM3IHJlZ2lzdGVyczogMzIuDQooWEVOKSB0ZXN0aW5nIHRoZSBJTyBBUElDLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4NCihYRU4pIElPIEFQSUMgIzYuLi4uLi4NCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAwOiAwNjAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBoeXNpY2FsIEFQSUMg
aWQ6IDA2DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTogMA0KKFhFTikgLi4u
Li4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAw
MDE3ODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczog
MDAxNw0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6IDENCihYRU4pIC4u
Li4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAuLi4uIHJlZ2lzdGVy
ICMwMjogMDYwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRyYXRpb246IDA2DQoo
WEVOKSAuLi4uIHJlZ2lzdGVyICMwMzogMDcwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDog
Qm9vdCBEVCAgICA6IDANCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOg0KKFhF
TikgIE5SIExvZyBQaHkgTWFzayBUcmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDog
ICANCihYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICAzMA0KKFhFTikgIDAyIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgRjANCihYRU4pICAwMyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAx
ICAgIDM4DQooWEVOKSAgMDQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICBGMQ0KKFhFTikgIDA1IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAg
IDEgICAgNDANCihYRU4pICAwNiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAg
ICAxICAgIDQ4DQooWEVOKSAgMDcgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICA1MA0KKFhFTikgIDA4IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgNTgNCihYRU4pICAwOSAwMDEgMDEgIDEgICAgMSAgICAwICAgMSAgIDAgICAg
MSAgICAxICAgIDYwDQooWEVOKSAgMGEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA2OA0KKFhFTikgIDBiIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgNzANCihYRU4pICAwYyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDc4DQooWEVOKSAgMGQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA4OA0KKFhFTikgIDBlIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgOTANCihYRU4pICAwZiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDk4DQooWEVOKSAgMTAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDExIDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMiAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTMgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE0IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTYgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE3IDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIElPIEFQSUMgIzcuLi4uLi4NCihY
RU4pIC4uLi4gcmVnaXN0ZXIgIzAwOiAwNzAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICA6IHBo
eXNpY2FsIEFQSUMgaWQ6IDA3DQooWEVOKSAuLi4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTog
MA0KKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAgICAgICA6IDANCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAxOiAwMDFGODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rp
b24gZW50cmllczogMDAxRg0KKFhFTikgLi4uLi4uLiAgICAgOiBQUlEgaW1wbGVtZW50ZWQ6
IDENCihYRU4pIC4uLi4uLi4gICAgIDogSU8gQVBJQyB2ZXJzaW9uOiAwMDIxDQooWEVOKSAu
Li4uIHJlZ2lzdGVyICMwMjogMDAwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgIDogYXJiaXRy
YXRpb246IDAwDQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToNCihYRU4pICBO
UiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZlY3Q6ICAgDQoo
WEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0K
KFhFTikgIDAxIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAN
CihYRU4pICAwMiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAw
DQooWEVOKSAgMDMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MA0KKFhFTikgIDA0IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAg
MDANCihYRU4pICAwNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMDYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAg
ICAwMA0KKFhFTikgIDA3IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAwOCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMDkgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDBhIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAwYiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMGMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAwZSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMGYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAw
ICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAg
MCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE2IDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTggMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE5IDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxYSAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWIgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFjIDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxZCAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWUgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFmIDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pIFVzaW5nIHZlY3Rvci1iYXNl
ZCBpbmRleGluZw0KKFhFTikgSVJRIHRvIHBpbiBtYXBwaW5nczoNCihYRU4pIElSUTI0MCAt
PiAwOjINCihYRU4pIElSUTQ4IC0+IDA6MQ0KKFhFTikgSVJRNTYgLT4gMDozDQooWEVOKSBJ
UlEyNDEgLT4gMDo0DQooWEVOKSBJUlE2NCAtPiAwOjUNCihYRU4pIElSUTcyIC0+IDA6Ng0K
KFhFTikgSVJRODAgLT4gMDo3DQooWEVOKSBJUlE4OCAtPiAwOjgNCihYRU4pIElSUTk2IC0+
IDA6OQ0KKFhFTikgSVJRMTA0IC0+IDA6MTANCihYRU4pIElSUTExMiAtPiAwOjExDQooWEVO
KSBJUlExMjAgLT4gMDoxMg0KKFhFTikgSVJRMTM2IC0+IDA6MTMNCihYRU4pIElSUTE0NCAt
PiAwOjE0DQooWEVOKSBJUlExNTIgLT4gMDoxNQ0KKFhFTikgLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uIGRvbmUuDQooWEVOKSBVc2luZyBsb2NhbCBBUElDIHRpbWVy
IGludGVycnVwdHMuDQooWEVOKSBjYWxpYnJhdGluZyBBUElDIHRpbWVyIC4uLg0KKFhFTikg
Li4uLi4gQ1BVIGNsb2NrIHNwZWVkIGlzIDMyMDAuMTIwOSBNSHouDQooWEVOKSAuLi4uLiBo
b3N0IGJ1cyBjbG9jayBzcGVlZCBpcyAyMDAuMDA3NCBNSHouDQooWEVOKSAuLi4uLiBidXNf
c2NhbGUgPSAweGNjZDcNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQyXSBQbGF0Zm9ybSB0
aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDJdIEFs
bG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgNjQgS2lCLg0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDJdIEhWTTogQVNJRHMgZW5hYmxlZC4NCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQy
XSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pIFsyMDEyLTEyLTE0
IDE1OjM0OjQyXSAgLSBOZXN0ZWQgUGFnZSBUYWJsZXMgKE5QVCkNCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQyXSAgLSBMYXN0IEJyYW5jaCBSZWNvcmQgKExCUikgVmlydHVhbGlzYXRp
b24NCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQyXSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAj
Vk1FWElUDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0Ml0gIC0gUGF1c2UtSW50ZXJjZXB0
IEZpbHRlcg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDJdIEhWTTogU1ZNIGVuYWJsZWQN
CihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQyXSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBh
Z2luZyAoSEFQKSBkZXRlY3RlZA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDJdIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0
OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMxDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
Ml0gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihY
RU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0Ml0gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBw
YXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQxXSBtYXNrZWQg
RXh0SU5UIG9uIENQVSMzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0Ml0gbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQVSM0DQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0Ml0gbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEw
MDAwYmYNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQxXSBtYXNrZWQgRXh0SU5UIG9uIENQ
VSM1DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQnJvdWdodCB1cCA2IENQVXMNCihY
RU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86
IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIEhQRVQ6
IDMgdGltZXJzICgzIHdpbGwgYmUgdXNlZCBmb3IgYnJvYWRjYXN0KQ0KKFhFTikgWzIwMTIt
MTItMTQgMTU6MzQ6NDNdIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0M10gTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5n
IGZyZXF1ZW5jeQ0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIG1jaGVja19wb2xsOiBN
YWNoaW5lIGNoZWNrIHBvbGxpbmcgdGltZXIgc3RhcnRlZC4NCihYRU4pIFsyMDEyLTEyLTE0
IDE1OjM0OjQzXSBYZW5vcHJvZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0
LCBJQlNDVEwgPSAweGZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gKioq
IExPQURJTkcgRE9NQUlOIDAgKioqDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxm
X3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4ZDViMDAwDQoo
WEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFk
ZHI9MHgxZTAwMDAwIG1lbXN6PTB4ZGEwZjANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQz
XSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFlZGIwMDAgbWVtc3o9MHgxM2Rj
MA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6
IHBhZGRyPTB4MWVlZjAwMCBtZW1zej0weGUwNjAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDNdIGVsZl9wYXJzZV9iaW5hcnk6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MmNmNTAw
MA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VF
U1RfT1MgPSAibGludXgiDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9w
YXJzZV9ub3RlOiBHVUVTVF9WRVJTSU9OID0gIjIuNiINCihYRU4pIFsyMDEyLTEyLTE0IDE1
OjM0OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiDQoo
WEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JB
U0UgPSAweGZmZmZmZmZmODAwMDAwMDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBl
bGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhmZmZmZmZmZjgxZWVmMjEwDQooWEVOKSBb
MjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFH
RSA9IDB4ZmZmZmZmZmY4MTAwMTAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVzfHBh
ZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hl
bl9wYXJzZV9ub3RlOiBQQUVfTU9ERSA9ICJ5ZXMiDQooWEVOKSBbMjAxMi0xMi0xNCAxNToz
NDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyINCihYRU4pIFsy
MDEyLTEyLTE0IDE1OjM0OjQzXSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVs
ZiBub3RlICgweGQpDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJz
ZV9ub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQ0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6
NDNdIGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZmODAwMDAwMDAw
MDAwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX3hlbl9wYXJzZV9ub3RlOiBQ
QUREUl9PRkZTRVQgPSAweDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBlbGZfeGVu
X2FkZHJfY2FsY19jaGVjazogYWRkcmVzc2VzOg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6
NDNdICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSBb
MjAxMi0xMi0xNCAxNTozNDo0M10gICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAweDANCihYRU4p
IFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAgICAgdmlydF9vZmZzZXQgICAgICA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICAgICB2aXJ0X2tzdGFy
dCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
M10gICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODJjZjUwMDANCihYRU4pIFsy
MDEyLTEyLTE0IDE1OjM0OjQzXSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4ZmZmZmZmZmY4
MWVlZjIxMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICAgICBwMm1fYmFzZSAgICAg
ICAgID0gMHhmZmZmZmZmZmZmZmZmZmZmDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10g
IFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzINCihYRU4pIFsyMDEyLTEyLTE0
IDE1OjM0OjQzXSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAw
MDAwMCAtPiAweDJjZjUwMDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBQSFlTSUNB
TCBNRU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gIERv
bTAgYWxsb2MuOiAgIDAwMDAwMDAyNDAwMDAwMDAtPjAwMDAwMDAyNDQwMDAwMDAgKDI0MjUx
NiBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10g
IEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGYzNTQwMDAtPjAwMDAwMDAyNGZmZmZjMDANCihY
RU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoN
CihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4
MTAwMDAwMC0+ZmZmZmZmZmY4MmNmNTAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNd
ICBJbml0LiByYW1kaXNrOiBmZmZmZmZmZjgyY2Y1MDAwLT5mZmZmZmZmZjgzOWEwYzAwDQoo
WEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gIFBoeXMtTWFjaCBtYXA6IGZmZmZmZmZmODM5
YTEwMDAtPmZmZmZmZmZmODNiYTEwMDANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAg
U3RhcnQgaW5mbzogICAgZmZmZmZmZmY4M2JhMTAwMC0+ZmZmZmZmZmY4M2JhMTRiNA0KKFhF
TikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgzYmEy
MDAwLT5mZmZmZmZmZjgzYmM1MDAwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gIEJv
b3Qgc3RhY2s6ICAgIGZmZmZmZmZmODNiYzUwMDAtPmZmZmZmZmZmODNiYzYwMDANCihYRU4p
IFsyMDEyLTEyLTE0IDE1OjM0OjQzXSAgVE9UQUw6ICAgICAgICAgZmZmZmZmZmY4MDAwMDAw
MC0+ZmZmZmZmZmY4NDAwMDAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdICBFTlRS
WSBBRERSRVNTOiBmZmZmZmZmZjgxZWVmMjEwDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
M10gRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0
M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHhmZmZmZmZmZjgxMDAwMDAwIC0+IDB4
ZmZmZmZmZmY4MWQ1YjAwMA0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIGVsZl9sb2Fk
X2JpbmFyeTogcGhkciAxIGF0IDB4ZmZmZmZmZmY4MWUwMDAwMCAtPiAweGZmZmZmZmZmODFl
ZGEwZjANCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBlbGZfbG9hZF9iaW5hcnk6IHBo
ZHIgMiBhdCAweGZmZmZmZmZmODFlZGIwMDAgLT4gMHhmZmZmZmZmZjgxZWVlZGMwDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0M10gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDMgYXQgMHhm
ZmZmZmZmZjgxZWVmMDAwIC0+IDB4ZmZmZmZmZmY4MWY5NTAwMA0KKFhFTikgWzIwMTItMTIt
MTQgMTU6MzQ6NDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyLCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNd
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHJvb3Qg
dGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHgxOCwgcm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4LCByb290IHRhYmxlID0g
MHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTIt
MTItMTQgMTU6MzQ6NDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4MzAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5n
IG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MCwgcm9vdCB0YWJsZSA9IDB4MjRkODAy
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1
OjM0OjQzXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDU4
LCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
Mw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4NjgsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0M10g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4OCwgcm9vdCB0
YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4p
IFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDkwLCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAs
IHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDNdIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTIsIHJvb3QgdGFibGUgPSAw
eDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0x
Mi0xNCAxNTozNDo0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHg5OCwgcm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQzXSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCByb290IHRhYmxlID0gMHgyNGQ4MDIw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTAs
IHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMSwgcm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQ0XSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEyLCByb290IHRh
YmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikg
WzIwMTItMTItMTQgMTU6MzQ6NDRdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YTMsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwg
cGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgcm9vdCB0YWJsZSA9IDB4
MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweGE1LCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDRdIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTgsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNToz
NDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMCwg
cm9vdCB0YWJsZSA9IDB4MjRkODAyMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIFsyMDEyLTEyLTE0IDE1OjM0OjQ0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGIyLCByb290IHRhYmxlID0gMHgyNGQ4MDIwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDRdIEFN
RC1WaTogTm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDEyLTEy
LTE0IDE1OjM0OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC4x
DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2
aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgWzIwMTItMTItMTQgMTU6MzQ6NDRdIEFNRC1WaTog
Tm8gaW9tbXUgZm9yIGRldmljZSAwMDAwOjAwOjE4LjMNCihYRU4pIFsyMDEyLTEyLTE0IDE1
OjM0OjQ0XSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDoxOC40DQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHg0MDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MDAsIHJvb3QgdGFibGUg
PSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAx
Mi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHg1MDEsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2MDAsIHJvb3QgdGFibGUgPSAweDI0
ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg3MDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNToz
NDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MDAs
IHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAz
DQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhMDAsIHJvb3QgdGFibGUgPSAweDI0ZDgwMjAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0NF0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMDAsIHJvb3Qg
dGFibGUgPSAweDI0ZDgwMjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVO
KSBbMjAxMi0xMi0xNCAxNTozNDo0NF0gU2NydWJiaW5nIEZyZWUgUkFNOiAuLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLmRvbmUuDQooWEVOKSBbMjAxMi0xMi0xNCAxNTozNDo0Nl0gSW5pdGlhbCBsb3cg
bWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuDQooWEVOKSBbMjAx
Mi0xMi0xNCAxNTozNDo0Nl0gU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBbMjAxMi0xMi0x
NCAxNTozNDo0Nl0gR3Vlc3QgTG9nbGV2ZWw6IEFsbA0KKFhFTikgWzIwMTItMTItMTQgMTU6
MzQ6NDZdIFhlbiBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLg0KKFhFTikgWzIwMTIt
MTItMTQgMTU6MzQ6NDZdICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1h
JyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikgWzIwMTItMTIt
MTQgMTU6MzQ6NDZdIEZyZWVkIDI1MmtCIGluaXQgbWVtb3J5Lg0KbWFwcGluZyBrZXJuZWwg
aW50byBwaHlzaWNhbCBtZW1vcnkNCmFib3V0IHRvIGdldCBzdGFydGVkLi4uDQpbICAgIDAu
MDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAw
MDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGlu
dXggdmVyc2lvbiAzLjcuMHJjMC0yMDEyMTIxNC1uZXRkZWJ1ZyAocm9vdEBzZXJ2ZWVyc3Rl
cnRqZSkgKGdjYyB2ZXJzaW9uIDQuNC41IChEZWJpYW4gNC40LjUtOCkgKSAjMSBTTVAgUFJF
RU1QVCBGcmkgRGVjIDE0IDEwOjA1OjI3IENFVCAyMDEyDQpbICAgIDAuMDAwMDAwXSBDb21t
YW5kIGxpbmU6IHJvb3Q9L2Rldi9tYXBwZXIvc2VydmVlcnN0ZXJ0amUtcm9vdCBybyB2ZXJi
b3NlIG1lbT0xMDI0TSBjb25zb2xlPWh2YzAgY29uc29sZT10dHkwIG5vbW9kZXNldCB2Z2E9
Nzk0IHZpZGVvPXZlc2FmYiBhY3BpX2VuZm9yY2VfcmVzb3VyY2VzPWxheCByODE2OS51c2Vf
ZGFjPTEgZWFybHlwcmludGs9eGVuIG1heF9sb29wPTUwIGxvb3BfbWF4X3BhcnQ9MTAgeGVu
LXBjaWJhY2suaGlkZT0oMDM6MDYuMCkoMDQ6MDAuKikoMDU6MDAuKikoMDY6MDAuKikoMGE6
MDEuKikgZGVidWcgbG9nbGV2ZWw9MTANClsgICAgMC4wMDAwMDBdIEZyZWVpbmcgOWYtMTAw
IHBmbiByYW5nZTogOTcgcGFnZXMgZnJlZWQNClsgICAgMC4wMDAwMDBdIFJlbGVhc2VkIDk3
IHBhZ2VzIG9mIHVudXNlZCBtZW1vcnkNClsgICAgMC4wMDAwMDBdIFNldCAzMjc4ODkgcGFn
ZShzKSB0byAxLTEgbWFwcGluZw0KWyAgICAwLjAwMDAwMF0gUG9wdWxhdGluZyA0MDAwMC00
MDA2MSBwZm4gcmFuZ2U6IDk3IHBhZ2VzIGFkZGVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiBC
SU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSBYZW46IFtt
ZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDAwMDA5ZWZmZl0gdXNhYmxlDQpbICAg
IDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDlmMDAwLTB4MDAwMDAwMDAwMDBm
ZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAx
MDAwMDAtMHgwMDAwMDAwMDQwMDYwZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIFhlbjog
W21lbSAweDAwMDAwMDAwNDAwNjEwMDAtMHgwMDAwMDAwMGFmZjhmZmZmXSB1bnVzYWJsZQ0K
WyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBhZmY5MDAwMC0weDAwMDAwMDAw
YWZmOWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAw
MDBhZmY5ZTAwMC0weDAwMDAwMDAwYWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAw
XSBYZW46IFttZW0gMHgwMDAwMDAwMGFmZmUwMDAwLTB4MDAwMDAwMDBhZmZmZmZmZl0gcmVz
ZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVjMDAwMDAtMHgw
MDAwMDAwMGZlYzAwZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4
MDAwMDAwMDBmZWMyMDAwMC0weDAwMDAwMDAwZmVjMjBmZmZdIHJlc2VydmVkDQpbICAgIDAu
MDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAwMDBmZWUwMGZm
Zl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmZlMDAw
MDAtMHgwMDAwMDAwMGZmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBb
bWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVudXNhYmxlDQpb
ICAgIDAuMDAwMDAwXSBib290Y29uc29sZSBbeGVuYm9vdDBdIGVuYWJsZWQNClsgICAgMC4w
MDAwMDBdIE5YIChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAw
LjAwMDAwMF0gZTgyMDogdXNlci1kZWZpbmVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAu
MDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwMDAwOWVm
ZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAwOWYw
MDAtMHgwMDAwMDAwMDAwMGZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjog
W21lbSAweDAwMDAwMDAwMDAxMDAwMDAtMHgwMDAwMDAwMDNmZmZmZmZmXSB1c2FibGUNClsg
ICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMDQwMDYxMDAwLTB4MDAwMDAwMDBh
ZmY4ZmZmZl0gdW51c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAw
MGFmZjkwMDAwLTB4MDAwMDAwMDBhZmY5ZGZmZl0gQUNQSSBkYXRhDQpbICAgIDAuMDAwMDAw
XSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBhZmY5ZTAwMC0weDAwMDAwMDAwYWZmZGZmZmZdIEFD
UEkgTlZTDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBhZmZlMDAwMC0w
eDAwMDAwMDAwYWZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVt
IDB4MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAwZmVjMDBmZmZdIHJlc2VydmVkDQpbICAg
IDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWMyMDAwMC0weDAwMDAwMDAwZmVj
MjBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBm
ZWUwMDAwMC0weDAwMDAwMDAwZmVlMDBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1
c2VyOiBbbWVtIDB4MDAwMDAwMDBmZmUwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2Vy
dmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAw
MDAwMDAyNGZmZmZmZmZdIHVudXNhYmxlDQpbICAgIDAuMDAwMDAwXSBETUkgcHJlc2VudC4N
ClsgICAgMC4wMDAwMDBdIERNSTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDAp
ICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgICAgMC4wMDAwMDBdIGU4MjA6IHVwZGF0
ZSBbbWVtIDB4MDAwMDAwMDAtMHgwMDAwZmZmZl0gdXNhYmxlID09PiByZXNlcnZlZA0KWyAg
ICAwLjAwMDAwMF0gZTgyMDogcmVtb3ZlIFttZW0gMHgwMDBhMDAwMC0weDAwMGZmZmZmXSB1
c2FibGUNClsgICAgMC4wMDAwMDBdIE5vIEFHUCBicmlkZ2UgZm91bmQNClsgICAgMC4wMDAw
MDBdIGU4MjA6IGxhc3RfcGZuID0gMHg0MDAwMCBtYXhfYXJjaF9wZm4gPSAweDQwMDAwMDAw
MA0KWyAgICAwLjAwMDAwMF0gaW5pdGlhbCBtZW1vcnkgbWFwcGVkOiBbbWVtIDB4MDAwMDAw
MDAtMHgwMzlhMGZmZl0NClsgICAgMC4wMDAwMDBdIEJhc2UgbWVtb3J5IHRyYW1wb2xpbmUg
YXQgW2ZmZmY4ODAwMDAwOTkwMDBdIDk5MDAwIHNpemUgMjQ1NzYNClsgICAgMC4wMDAwMDBd
IGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHgwMDAwMDAwMC0weDNmZmZmZmZmXQ0KWyAg
ICAwLjAwMDAwMF0gIFttZW0gMHgwMDAwMDAwMC0weDNmZmZmZmZmXSBwYWdlIDRrDQpbICAg
IDAuMDAwMDAwXSBrZXJuZWwgZGlyZWN0IG1hcHBpbmcgdGFibGVzIHVwIHRvIDB4M2ZmZmZm
ZmYgQCBbbWVtIDB4MDJhZjMwMDAtMHgwMmNmNGZmZl0NClsgICAgMC4wMDAwMDBdIHhlbjog
c2V0dGluZyBSVyB0aGUgcmFuZ2UgMmNkMzAwMCAtIDJjZjUwMDANClsgICAgMC4wMDAwMDBd
IFJBTURJU0s6IFttZW0gMHgwMmNmNTAwMC0weDAzOWEwZmZmXQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogUlNEUCAwMDAwMDAwMDAwMGZiMTAwIDAwMDE0ICh2MDAgQUNQSUFNKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogUlNEVCAwMDAwMDAwMGFmZjkwMDAwIDAwMDQ4ICh2MDEgTVNJICAg
IE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBGQUNQIDAwMDAwMDAwYWZmOTAyMDAgMDAwODQgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAx
MDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IERTRFQgMDAwMDAw
MDBhZmY5MDVlMCAwOTQyNyAodjAxICBBNzY0MCBBNzY0MDEwMCAwMDAwMDEwMCBJTlRMIDIw
MDUxMTE3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRkFDUyAwMDAwMDAwMGFmZjllMDAwIDAw
MDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIDAwMDAwMDAwYWZmOTAzOTAgMDAwODgg
KHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4w
MDAwMDBdIEFDUEk6IE1DRkcgMDAwMDAwMDBhZmY5MDQyMCAwMDAzQyAodjAxIDc2NDBNUyBP
RU1NQ0ZHICAyMDEwMDkxMyBNU0ZUIDAwMDAwMDk3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
U0xJQyAwMDAwMDAwMGFmZjkwNDYwIDAwMTc2ICh2MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAw
OTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBPRU1CIDAwMDAwMDAw
YWZmOWUwNDAgMDAwNzIgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAw
MDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNSQVQgMDAwMDAwMDBhZmY5YTVlMCAwMDEw
OCAodjAzIEFNRCAgICBGQU1fRl8xMCAwMDAwMDAwMiBBTUQgIDAwMDAwMDAxKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogSFBFVCAwMDAwMDAwMGFmZjlhNmYwIDAwMDM4ICh2MDEgNzY0ME1T
IE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBJVlJTIDAwMDAwMDAwYWZmOWE3MzAgMDAwRjggKHYwMSAgQU1EICAgICBSRDg5MFMgMDAy
MDIwMzEgQU1EICAwMDAwMDAwMCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAw
MDBhZmY5YTgzMCAwMERBNCAodjAxIEEgTSBJICBQT1dFUk5PVyAwMDAwMDAwMSBBTUQgIDAw
MDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNzIDB4ZmVl
MDAwMDANClsgICAgMC4wMDAwMDBdIE5VTUEgdHVybmVkIG9mZg0KWyAgICAwLjAwMDAwMF0g
RmFraW5nIGEgbm9kZSBhdCBbbWVtIDB4MDAwMDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwM2Zm
ZmZmZmZdDQpbICAgIDAuMDAwMDAwXSBJbml0bWVtIHNldHVwIG5vZGUgMCBbbWVtIDB4MDAw
MDAwMDAtMHgzZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdICAgTk9ERV9EQVRBIFttZW0gMHgz
ZmZmNTAwMC0weDNmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gWm9uZSByYW5nZXM6DQpbICAg
IDAuMDAwMDAwXSAgIERNQSAgICAgIFttZW0gMHgwMDAxMDAwMC0weDAwZmZmZmZmXQ0KWyAg
ICAwLjAwMDAwMF0gICBETUEzMiAgICBbbWVtIDB4MDEwMDAwMDAtMHhmZmZmZmZmZl0NClsg
ICAgMC4wMDAwMDBdICAgTm9ybWFsICAgZW1wdHkNClsgICAgMC4wMDAwMDBdIE1vdmFibGUg
em9uZSBzdGFydCBmb3IgZWFjaCBub2RlDQpbICAgIDAuMDAwMDAwXSBFYXJseSBtZW1vcnkg
bm9kZSByYW5nZXMNClsgICAgMC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAxMDAw
MC0weDAwMDllZmZmXQ0KWyAgICAwLjAwMDAwMF0gICBub2RlICAgMDogW21lbSAweDAwMTAw
MDAwLTB4M2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSBPbiBub2RlIDAgdG90YWxwYWdlczog
MjYyMDMxDQpbICAgIDAuMDAwMDAwXSAgIERNQSB6b25lOiA2NCBwYWdlcyB1c2VkIGZvciBt
ZW1tYXANClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDYgcGFnZXMgcmVzZXJ2ZWQNClsg
ICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDM5MTMgcGFnZXMsIExJRk8gYmF0Y2g6MA0KWyAg
ICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA0MDMyIHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0K
WyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiAyNTQwMTYgcGFnZXMsIExJRk8gYmF0Y2g6
MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4ODA4DQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMb2NhbCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5h
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGlj
X2lkWzB4MDFdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDAzXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDVdIGxhcGljX2lkWzB4MDRdIGVuYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA2XSBsYXBpY19p
ZFsweDA1XSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA2
XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9B
UElDWzBdOiBhcGljX2lkIDYsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJ
IDAtMjMNClsgICAgMC4wMDAwMDBdIEFDUEk6IElPQVBJQyAoaWRbMHgwN10gYWRkcmVzc1sw
eGZlYzIwMDAwXSBnc2lfYmFzZVsyNF0pDQpbICAgIDAuMDAwMDAwXSBJT0FQSUNbMV06IGFw
aWNfaWQgNywgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzIwMDAwLCBHU0kgMjQtNTUNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IElOVF9TUkNfT1ZSIChidXMgMCBidXNfaXJxIDAgZ2xvYmFs
X2lycSAyIGRmbCBkZmwpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVz
IDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJUlEy
IHVzZWQgYnkgb3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJUlE5IHVzZWQgYnkg
b3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNv
bmZpZ3VyYXRpb24gaW5mb3JtYXRpb24NClsgICAgMC4wMDAwMDBdIEFDUEk6IEhQRVQgaWQ6
IDB4ODMwMCBiYXNlOiAweGZlZDAwMDAwDQpbICAgIDAuMDAwMDAwXSBzbXBib290OiBBbGxv
d2luZyA2IENQVXMsIDAgaG90cGx1ZyBDUFVzDQpbICAgIDAuMDAwMDAwXSBucl9pcnFzX2dz
aTogNzINClsgICAgMC4wMDAwMDBdIGU4MjA6IFttZW0gMHhiMDAwMDAwMC0weGZlYmZmZmZm
XSBhdmFpbGFibGUgZm9yIFBDSSBkZXZpY2VzDQpbICAgIDAuMDAwMDAwXSBCb290aW5nIHBh
cmF2aXJ0dWFsaXplZCBrZXJuZWwgb24gWGVuDQpbICAgIDAuMDAwMDAwXSBYZW4gdmVyc2lv
bjogNC4zLXVuc3RhYmxlIChwcmVzZXJ2ZS1BRCkNClsgICAgMC4wMDAwMDBdIHNldHVwX3Bl
cmNwdTogTlJfQ1BVUzo4IG5yX2NwdW1hc2tfYml0czo4IG5yX2NwdV9pZHM6NiBucl9ub2Rl
X2lkczoxDQpbICAgIDAuMDAwMDAwXSBQRVJDUFU6IEVtYmVkZGVkIDI3IHBhZ2VzL2NwdSBA
ZmZmZjg4MDAzZjgwMDAwMCBzODEzNDQgcjgxOTIgZDIxMDU2IHUyNjIxNDQNClsgICAgMC4w
MDAwMDBdIHBjcHUtYWxsb2M6IHM4MTM0NCByODE5MiBkMjEwNTYgdTI2MjE0NCBhbGxvYz0x
KjIwOTcxNTINClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwIDEgMiAzIDQgNSAt
IC0gDQpbICAgIDQuMjY5NDk4XSBCdWlsdCAxIHpvbmVsaXN0cyBpbiBOb2RlIG9yZGVyLCBt
b2JpbGl0eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2VzOiAyNTc5MjkNClsgICAgNC4yNjk1
MDFdIFBvbGljeSB6b25lOiBETUEzMg0KWyAgICA0LjI2OTUwOV0gS2VybmVsIGNvbW1hbmQg
bGluZTogcm9vdD0vZGV2L21hcHBlci9zZXJ2ZWVyc3RlcnRqZS1yb290IHJvIHZlcmJvc2Ug
bWVtPTEwMjRNIGNvbnNvbGU9aHZjMCBjb25zb2xlPXR0eTAgbm9tb2Rlc2V0IHZnYT03OTQg
dmlkZW89dmVzYWZiIGFjcGlfZW5mb3JjZV9yZXNvdXJjZXM9bGF4IHI4MTY5LnVzZV9kYWM9
MSBlYXJseXByaW50az14ZW4gbWF4X2xvb3A9NTAgbG9vcF9tYXhfcGFydD0xMCB4ZW4tcGNp
YmFjay5oaWRlPSgwMzowNi4wKSgwNDowMC4qKSgwNTowMC4qKSgwNjowMC4qKSgwYTowMS4q
KSBkZWJ1ZyBsb2dsZXZlbD0xMA0KWyAgICA0LjI2OTY3Ml0gUElEIGhhc2ggdGFibGUgZW50
cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzKQ0KWyAgICA0LjI2OTY3N10gX19l
eF90YWJsZSBhbHJlYWR5IHNvcnRlZCwgc2tpcHBpbmcgc29ydA0KWyAgICA0LjMxMDk5Nl0g
c29mdHdhcmUgSU8gVExCIFttZW0gMHgzYTYwMDAwMC0weDNlNWZmZmZmXSAoNjRNQikgbWFw
cGVkIGF0IFtmZmZmODgwMDNhNjAwMDAwLWZmZmY4ODAwM2U1ZmZmZmZdDQpbICAgIDQuMzE2
MjA2XSBNZW1vcnk6IDkyMjI4MGsvMTA0ODU3NmsgYXZhaWxhYmxlICg5MjQyayBrZXJuZWwg
Y29kZSwgNDUyayBhYnNlbnQsIDEyNTg0NGsgcmVzZXJ2ZWQsIDU5NjJrIGRhdGEsIDcxMmsg
aW5pdCkNClsgICAgNC4zMTYzMDJdIFNMVUI6IEdlbnNsYWJzPTE1LCBIV2FsaWduPTY0LCBP
cmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz02LCBOb2Rlcz0xDQpbICAgIDQuMzE2Mzc2
XSBQcmVlbXB0aWJsZSBoaWVyYXJjaGljYWwgUkNVIGltcGxlbWVudGF0aW9uLg0KWyAgICA0
LjMxNjM3N10gCVJDVSBkeW50aWNrLWlkbGUgZ3JhY2UtcGVyaW9kIGFjY2VsZXJhdGlvbiBp
cyBlbmFibGVkLg0KWyAgICA0LjMxNjM3OV0gCUFkZGl0aW9uYWwgcGVyLUNQVSBpbmZvIHBy
aW50ZWQgd2l0aCBzdGFsbHMuDQpbICAgIDQuMzE2MzgxXSAJUkNVIHJlc3RyaWN0aW5nIENQ
VXMgZnJvbSBOUl9DUFVTPTggdG8gbnJfY3B1X2lkcz02Lg0KWyAgICA0LjMxNjQxOV0gTlJf
SVJRUzo0MzUyIG5yX2lycXM6MTI3MiAxNg0KWyAgICA0LjMxNjQ5M10geGVuOiBzY2kgb3Zl
cnJpZGU6IGdsb2JhbF9pcnE9OSB0cmlnZ2VyPTAgcG9sYXJpdHk9MQ0KWyAgICA0LjMxNjQ5
Nl0geGVuOiByZWdpc3RlcmluZyBnc2kgOSB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAg
ICA0LjMxNjUyOF0geGVuOiAtLT4gcGlycT05IC0+IGlycT05IChnc2k9OSkNCihYRU4pIFsy
MDEyLTEyLTE0IDE1OjM0OjQ2XSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ni05IC0+IDB4NjAgLT4gSVJRIDkgTW9kZToxIEFjdGl2ZToxKQ0KWyAgICA0LjMxNjU2M10g
eGVuOiBhY3BpIHNjaSA5DQpbICAgIDQuMzE2NTY4XSB4ZW46IC0tPiBwaXJxPTEgLT4gaXJx
PTEgKGdzaT0xKQ0KWyAgICA0LjMxNjU3M10geGVuOiAtLT4gcGlycT0yIC0+IGlycT0yIChn
c2k9MikNClsgICAgNC4zMTY1NzddIHhlbjogLS0+IHBpcnE9MyAtPiBpcnE9MyAoZ3NpPTMp
DQpbICAgIDQuMzE2NTgxXSB4ZW46IC0tPiBwaXJxPTQgLT4gaXJxPTQgKGdzaT00KQ0KWyAg
ICA0LjMxNjU4NF0geGVuOiAtLT4gcGlycT01IC0+IGlycT01IChnc2k9NSkNClsgICAgNC4z
MTY1ODhdIHhlbjogLS0+IHBpcnE9NiAtPiBpcnE9NiAoZ3NpPTYpDQpbICAgIDQuMzE2NTky
XSB4ZW46IC0tPiBwaXJxPTcgLT4gaXJxPTcgKGdzaT03KQ0KWyAgICA0LjMxNjU5Nl0geGVu
OiAtLT4gcGlycT04IC0+IGlycT04IChnc2k9OCkNClsgICAgNC4zMTY2MDBdIHhlbjogLS0+
IHBpcnE9MTAgLT4gaXJxPTEwIChnc2k9MTApDQpbICAgIDQuMzE2NjA0XSB4ZW46IC0tPiBw
aXJxPTExIC0+IGlycT0xMSAoZ3NpPTExKQ0KWyAgICA0LjMxNjYwOF0geGVuOiAtLT4gcGly
cT0xMiAtPiBpcnE9MTIgKGdzaT0xMikNClsgICAgNC4zMTY2MTJdIHhlbjogLS0+IHBpcnE9
MTMgLT4gaXJxPTEzIChnc2k9MTMpDQpbICAgIDQuMzE2NjE2XSB4ZW46IC0tPiBwaXJxPTE0
IC0+IGlycT0xNCAoZ3NpPTE0KQ0KWyAgICA0LjMxNjYyMF0geGVuOiAtLT4gcGlycT0xNSAt
PiBpcnE9MTUgKGdzaT0xNSkNClsgICAgNC4zMTY3MDJdIENvbnNvbGU6IGNvbG91ciBkdW1t
eSBkZXZpY2UgODB4MjUNClsgICAgNC4zMTY3MDddIGNvbnNvbGUgW3R0eTBdIGVuYWJsZWQs
IGJvb3Rjb25zb2xlIGRpc2FibGVkDQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAg
c3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiAzLjcuMHJjMC0yMDEy
MTIxNC1uZXRkZWJ1ZyAocm9vdEBzZXJ2ZWVyc3RlcnRqZSkgKGdjYyB2ZXJzaW9uIDQuNC41
IChEZWJpYW4gNC40LjUtOCkgKSAjMSBTTVAgUFJFRU1QVCBGcmkgRGVjIDE0IDEwOjA1OjI3
IENFVCAyMDEyDQpbICAgIDAuMDAwMDAwXSBDb21tYW5kIGxpbmU6IHJvb3Q9L2Rldi9tYXBw
ZXIvc2VydmVlcnN0ZXJ0amUtcm9vdCBybyB2ZXJib3NlIG1lbT0xMDI0TSBjb25zb2xlPWh2
YzAgY29uc29sZT10dHkwIG5vbW9kZXNldCB2Z2E9Nzk0IHZpZGVvPXZlc2FmYiBhY3BpX2Vu
Zm9yY2VfcmVzb3VyY2VzPWxheCByODE2OS51c2VfZGFjPTEgZWFybHlwcmludGs9eGVuIG1h
eF9sb29wPTUwIGxvb3BfbWF4X3BhcnQ9MTAgeGVuLXBjaWJhY2suaGlkZT0oMDM6MDYuMCko
MDQ6MDAuKikoMDU6MDAuKikoMDY6MDAuKikoMGE6MDEuKikgZGVidWcgbG9nbGV2ZWw9MTAN
ClsgICAgMC4wMDAwMDBdIEZyZWVpbmcgOWYtMTAwIHBmbiByYW5nZTogOTcgcGFnZXMgZnJl
ZWQNClsgICAgMC4wMDAwMDBdIDEtMSBtYXBwaW5nIG9uIDlmLT4xMDANClsgICAgMC4wMDAw
MDBdIDEtMSBtYXBwaW5nIG9uIGFmZjkwLT4xMDAwMDANClsgICAgMC4wMDAwMDBdIFJlbGVh
c2VkIDk3IHBhZ2VzIG9mIHVudXNlZCBtZW1vcnkNClsgICAgMC4wMDAwMDBdIFNldCAzMjc4
ODkgcGFnZShzKSB0byAxLTEgbWFwcGluZw0KWyAgICAwLjAwMDAwMF0gUG9wdWxhdGluZyA0
MDAwMC00MDA2MSBwZm4gcmFuZ2U6IDk3IHBhZ2VzIGFkZGVkDQpbICAgIDAuMDAwMDAwXSBl
ODIwOiBCSU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSBY
ZW46IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDAwMDA5ZWZmZl0gdXNhYmxl
DQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDlmMDAwLTB4MDAwMDAw
MDAwMDBmZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAw
MDAwMDAxMDAwMDAtMHgwMDAwMDAwMDQwMDYwZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBd
IFhlbjogW21lbSAweDAwMDAwMDAwNDAwNjEwMDAtMHgwMDAwMDAwMGFmZjhmZmZmXSB1bnVz
YWJsZQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBhZmY5MDAwMC0weDAw
MDAwMDAwYWZmOWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4
MDAwMDAwMDBhZmY5ZTAwMC0weDAwMDAwMDAwYWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAu
MDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGFmZmUwMDAwLTB4MDAwMDAwMDBhZmZmZmZm
Zl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVjMDAw
MDAtMHgwMDAwMDAwMGZlYzAwZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBb
bWVtIDB4MDAwMDAwMDBmZWMyMDAwMC0weDAwMDAwMDAwZmVjMjBmZmZdIHJlc2VydmVkDQpb
ICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAwMDBm
ZWUwMGZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAw
ZmZlMDAwMDAtMHgwMDAwMDAwMGZmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0g
WGVuOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVudXNh
YmxlDQpbICAgIDAuMDAwMDAwXSBlODIwOiByZW1vdmUgW21lbSAweDQwMDAwMDAwLTB4ZmZm
ZmZmZmZmZmZmZmZmZV0gdXNhYmxlDQpbICAgIDAuMDAwMDAwXSBib290Y29uc29sZSBbeGVu
Ym9vdDBdIGVuYWJsZWQNClsgICAgMC4wMDAwMDBdIE5YIChFeGVjdXRlIERpc2FibGUpIHBy
b3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAwLjAwMDAwMF0gZTgyMDogdXNlci1kZWZpbmVkIHBo
eXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDAw
MDAwMDAwMC0weDAwMDAwMDAwMDAwOWVmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gdXNl
cjogW21lbSAweDAwMDAwMDAwMDAwOWYwMDAtMHgwMDAwMDAwMDAwMGZmZmZmXSByZXNlcnZl
ZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAxMDAwMDAtMHgwMDAw
MDAwMDNmZmZmZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAw
MDAwMDQwMDYxMDAwLTB4MDAwMDAwMDBhZmY4ZmZmZl0gdW51c2FibGUNClsgICAgMC4wMDAw
MDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMGFmZjkwMDAwLTB4MDAwMDAwMDBhZmY5ZGZmZl0g
QUNQSSBkYXRhDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBhZmY5ZTAw
MC0weDAwMDAwMDAwYWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBb
bWVtIDB4MDAwMDAwMDBhZmZlMDAwMC0weDAwMDAwMDAwYWZmZmZmZmZdIHJlc2VydmVkDQpb
ICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAw
ZmVjMDBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAw
MDBmZWMyMDAwMC0weDAwMDAwMDAwZmVjMjBmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAw
XSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWUwMDAwMC0weDAwMDAwMDAwZmVlMDBmZmZdIHJl
c2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZmUwMDAwMC0w
eDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVt
IDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVudXNhYmxlDQpbICAg
IDAuMDAwMDAwXSBETUkgcHJlc2VudC4NClsgICAgMC4wMDAwMDBdIERNSTogTVNJIE1TLTc2
NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsg
ICAgMC4wMDAwMDBdIGU4MjA6IHVwZGF0ZSBbbWVtIDB4MDAwMDAwMDAtMHgwMDAwZmZmZl0g
dXNhYmxlID09PiByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gZTgyMDogcmVtb3ZlIFttZW0g
MHgwMDBhMDAwMC0weDAwMGZmZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIE5vIEFHUCBi
cmlkZ2UgZm91bmQNClsgICAgMC4wMDAwMDBdIGU4MjA6IGxhc3RfcGZuID0gMHg0MDAwMCBt
YXhfYXJjaF9wZm4gPSAweDQwMDAwMDAwMA0KWyAgICAwLjAwMDAwMF0gaW5pdGlhbCBtZW1v
cnkgbWFwcGVkOiBbbWVtIDB4MDAwMDAwMDAtMHgwMzlhMGZmZl0NClsgICAgMC4wMDAwMDBd
IEJhc2UgbWVtb3J5IHRyYW1wb2xpbmUgYXQgW2ZmZmY4ODAwMDAwOTkwMDBdIDk5MDAwIHNp
emUgMjQ1NzYNClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHgw
MDAwMDAwMC0weDNmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgwMDAwMDAwMC0w
eDNmZmZmZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAwXSBrZXJuZWwgZGlyZWN0IG1hcHBp
bmcgdGFibGVzIHVwIHRvIDB4M2ZmZmZmZmYgQCBbbWVtIDB4MDJhZjMwMDAtMHgwMmNmNGZm
Zl0NClsgICAgMC4wMDAwMDBdIHhlbjogc2V0dGluZyBSVyB0aGUgcmFuZ2UgMmNkMzAwMCAt
IDJjZjUwMDANClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IFttZW0gMHgwMmNmNTAwMC0weDAz
OWEwZmZmXQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUlNEUCAwMDAwMDAwMDAwMGZiMTAwIDAw
MDE0ICh2MDAgQUNQSUFNKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUlNEVCAwMDAwMDAwMGFm
ZjkwMDAwIDAwMDQ4ICh2MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAw
OTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNQIDAwMDAwMDAwYWZmOTAyMDAgMDAwODQg
KHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4w
MDAwMDBdIEFDUEk6IERTRFQgMDAwMDAwMDBhZmY5MDVlMCAwOTQyNyAodjAxICBBNzY0MCBB
NzY0MDEwMCAwMDAwMDEwMCBJTlRMIDIwMDUxMTE3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
RkFDUyAwMDAwMDAwMGFmZjllMDAwIDAwMDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElD
IDAwMDAwMDAwYWZmOTAzOTAgMDAwODggKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMg
TVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IE1DRkcgMDAwMDAwMDBhZmY5
MDQyMCAwMDAzQyAodjAxIDc2NDBNUyBPRU1NQ0ZHICAyMDEwMDkxMyBNU0ZUIDAwMDAwMDk3
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogU0xJQyAwMDAwMDAwMGFmZjkwNDYwIDAwMTc2ICh2
MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBPRU1CIDAwMDAwMDAwYWZmOWUwNDAgMDAwNzIgKHYwMSA3NjQwTVMgQTc2
NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNS
QVQgMDAwMDAwMDBhZmY5YTVlMCAwMDEwOCAodjAzIEFNRCAgICBGQU1fRl8xMCAwMDAwMDAw
MiBBTUQgIDAwMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSFBFVCAwMDAwMDAwMGFm
ZjlhNmYwIDAwMDM4ICh2MDEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgMDAwMDAw
OTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJVlJTIDAwMDAwMDAwYWZmOWE3MzAgMDAwRjgg
KHYwMSAgQU1EICAgICBSRDg5MFMgMDAyMDIwMzEgQU1EICAwMDAwMDAwMCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAwMDBhZmY5YTgzMCAwMERBNCAodjAxIEEgTSBJICBQ
T1dFUk5PVyAwMDAwMDAwMSBBTUQgIDAwMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TG9jYWwgQVBJQyBhZGRyZXNzIDB4ZmVlMDAwMDANClsgICAgMC4wMDAwMDBdIE5VTUEgdHVy
bmVkIG9mZg0KWyAgICAwLjAwMDAwMF0gRmFraW5nIGEgbm9kZSBhdCBbbWVtIDB4MDAwMDAw
MDAwMDAwMDAwMC0weDAwMDAwMDAwM2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSBJbml0bWVt
IHNldHVwIG5vZGUgMCBbbWVtIDB4MDAwMDAwMDAtMHgzZmZmZmZmZl0NClsgICAgMC4wMDAw
MDBdICAgTk9ERV9EQVRBIFttZW0gMHgzZmZmNTAwMC0weDNmZmZmZmZmXQ0KWyAgICAwLjAw
MDAwMF0gWm9uZSByYW5nZXM6DQpbICAgIDAuMDAwMDAwXSAgIERNQSAgICAgIFttZW0gMHgw
MDAxMDAwMC0weDAwZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gICBETUEzMiAgICBbbWVtIDB4
MDEwMDAwMDAtMHhmZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdICAgTm9ybWFsICAgZW1wdHkN
ClsgICAgMC4wMDAwMDBdIE1vdmFibGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlDQpbICAg
IDAuMDAwMDAwXSBFYXJseSBtZW1vcnkgbm9kZSByYW5nZXMNClsgICAgMC4wMDAwMDBdICAg
bm9kZSAgIDA6IFttZW0gMHgwMDAxMDAwMC0weDAwMDllZmZmXQ0KWyAgICAwLjAwMDAwMF0g
ICBub2RlICAgMDogW21lbSAweDAwMTAwMDAwLTB4M2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAw
XSBPbiBub2RlIDAgdG90YWxwYWdlczogMjYyMDMxDQpbICAgIDAuMDAwMDAwXSAgIERNQSB6
b25lOiA2NCBwYWdlcyB1c2VkIGZvciBtZW1tYXANClsgICAgMC4wMDAwMDBdICAgRE1BIHpv
bmU6IDYgcGFnZXMgcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDM5MTMg
cGFnZXMsIExJRk8gYmF0Y2g6MA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA0MDMy
IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiAy
NTQwMTYgcGFnZXMsIExJRk8gYmF0Y2g6MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRp
bWVyIElPIFBvcnQ6IDB4ODA4DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMb2NhbCBBUElDIGFk
ZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHgwMV0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MDFdIGVuYWJsZWQpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19pZFsweDAyXSBlbmFibGVk
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRb
MHgwM10gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MDVdIGxhcGljX2lkWzB4MDRdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDA2XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogSU9BUElDIChpZFsweDA2XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNl
WzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGljX2lkIDYsIHZlcnNpb24gMzMs
IGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNClsgICAgMC4wMDAwMDBdIEFDUEk6IElP
QVBJQyAoaWRbMHgwN10gYWRkcmVzc1sweGZlYzIwMDAwXSBnc2lfYmFzZVsyNF0pDQpbICAg
IDAuMDAwMDAwXSBJT0FQSUNbMV06IGFwaWNfaWQgNywgdmVyc2lvbiAzMywgYWRkcmVzcyAw
eGZlYzIwMDAwLCBHU0kgMjQtDQpbICAgNjQuNTk4NjI4XSBJTkZPOiByY3VfcHJlZW1wdCBk
ZXRlY3RlZCBzdGFsbHMgb24gQ1BVcy90YXNrczoNClsgICA2NC41OTg2NzZdIAkwOiAoMSBH
UHMgYmVoaW5kKSBpZGxlPWFlZC8xNDAwMDAwMDAwMDAwMDAvMCBkcmFpbj01IC4gdGltZXIg
bm90IHBlbmRpbmcNClsgICA2NC41OTg2ODNdIAkoZGV0ZWN0ZWQgYnkgMSwgdD0xODAwNCBq
aWZmaWVzLCBnPTE4NDQ2NzQ0MDczNzA5NTUxNDE0LCBjPTE4NDQ2NzQ0MDczNzA5NTUxNDEz
LCBxPTE2MikNClsgICA2NC41OTg2OTJdIHNlbmRpbmcgTk1JIHRvIGFsbCBDUFVzOg0KWyAg
IDY0LjU5ODcxNl0geGVuOiB2ZWN0b3IgMHgyIGlzIG5vdCBpbXBsZW1lbnRlZA0K
------------1071AB00421F249D1
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------1071AB00421F249D1--



From xen-devel-bounces@lists.xen.org Fri Dec 14 16:22:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 16:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjY1q-00058z-Oi; Fri, 14 Dec 2012 16:22:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1TjY1p-00058r-6o
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 16:22:25 +0000
Received: from [85.158.138.51:23776] by server-14.bemta-3.messagelabs.com id
	10/00-27443-0425BC05; Fri, 14 Dec 2012 16:22:24 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355502141!29023131!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10267 invoked from network); 14 Dec 2012 16:22:23 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Dec 2012 16:22:23 -0000
Received: from aplexcas2.dom1.jhuapl.edu (aplexcas2.dom1.jhuapl.edu
	[128.244.198.91]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 29fc_9412_e4cd8486_a0ef_4c43_84c6_141addd5bfc9;
	Fri, 14 Dec 2012 11:22:15 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Fri, 14 Dec 2012
	11:20:26 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Date: Fri, 14 Dec 2012 11:20:25 -0500
Thread-Topic: [Xen-devel] [PATCH] Disable caml-stubdom by default
Thread-Index: Ac3aEI7/wU5gU2irQ6SIlzpTmssHAAABYmkG
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48BDDDC20E@aplesstripe.dom1.jhuapl.edu>
References: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>, 
	<CAFLBxZZQ9Nw09L01ekS4k0W92gZ27F+5R82192HgO_tWEp02Ug@mail.gmail.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC20D@aplesstripe.dom1.jhuapl.edu>,
	<50CB4710.8010502@citrix.com>
In-Reply-To: <50CB4710.8010502@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I thought the purpose of caml-stubdom, like c-stubdom, is to serve as an
example for developing new domains. If that is the case, then I don't think
it makes sense to build and install it by default. End users who are just
building and installing Xen don't need it.

The other problem is that c-stubdom and caml-stubdom have main() functions =
but also define CONFIG_TEST=3Dy in their config files. The test framework a=
lso defines a main() function, causing a multiple definition linker error.

What's the intention here? Do we want them to run the test code or do we wa=
nt them to just be simple stubs with hello world main functions?
________________________________________
From: Roger Pau Monn=E9 [roger.pau@citrix.com]
Sent: Friday, December 14, 2012 10:34 AM
To: Fioravante, Matthew E.
Cc: George Dunlap; Ian Campbell; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default

On 14/12/12 15:33, Fioravante, Matthew E. wrote:
> Caml was disabled in the original stubdom default build targets. I should=
 not have enabled it in my original stubdom autoconf patch.

Does caml-stubdom require something else apart from caml itself? If not
I think it should be enabled by default if caml is detected (like we do
for the caml xenstore in the tools).

> ________________________________________
> From: dunlapg@gmail.com [dunlapg@gmail.com] On Behalf Of George Dunlap [d=
unlapg@umich.edu]
> Sent: Thursday, December 13, 2012 12:47 PM
> To: Fioravante, Matthew E.
> Cc: Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
>
> Is there a rationale for this?  If so, it should probably be in the commi=
t message.
>
>  -George
>
>
> On Thu, Dec 13, 2012 at 3:31 PM, Matthew Fioravante <matthew.fioravante@jhuapl.edu> wrote:
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  stubdom/configure.ac |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/stubdom/configure.ac b/stubdom/configure.ac
> index db44d4a..384a94a 100644
> --- a/stubdom/configure.ac
> +++ b/stubdom/configure.ac
> @@ -18,7 +18,7 @@ m4_include([../m4/depends.m4])
>  # Enable/disable stub domains
>  AX_STUBDOM_DEFAULT_ENABLE([ioemu-stubdom], [ioemu])
>  AX_STUBDOM_DEFAULT_DISABLE([c-stubdom], [c])
> -AX_STUBDOM_DEFAULT_ENABLE([caml-stubdom], [caml])
> +AX_STUBDOM_DEFAULT_DISABLE([caml-stubdom], [caml])
>  AX_STUBDOM_DEFAULT_ENABLE([pv-grub], [grub])
>  AX_STUBDOM_DEFAULT_ENABLE([xenstore-stubdom], [xenstore])
>  AX_STUBDOM_CONDITIONAL([vtpm-stubdom], [vtpm])
> --
> 1.7.10.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 16:28:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 16:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjY75-0005aA-Ty; Fri, 14 Dec 2012 16:27:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TjY74-0005Zc-1Y
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 16:27:50 +0000
Received: from [85.158.139.83:9343] by server-8.bemta-5.messagelabs.com id
	69/BE-15003-3835BC05; Fri, 14 Dec 2012 16:27:47 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355502467!29359910!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30358 invoked from network); 14 Dec 2012 16:27:47 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 16:27:47 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so2077268eek.32
	for <xen-devel@lists.xen.org>; Fri, 14 Dec 2012 08:27:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Wgf0eAvCbFOtECcZycD6/AEnEc2EPd1vmBm5qK818Ho=;
	b=ClSuA6McYJ4DGelpo2o6IguYbgD/ZcV6vw3i/xe6JesbxFCWFGZDlDxZ/KWtRki7Kx
	V9H6Z9UkfnAyz4BjpYJpRZSpcj9NzPOI2NGgo6QTo/gzVxUt0CG10j62XdnYv0H2UyCj
	HNu0JiN1wN/9MzDpfNV2+kBiQz6WzxqoiMHKCQoBzB93AyTNCvl2kzL68Vwi97YbLojC
	RYuuyXLcBznqwUD6cazYOlG4Xf6qxl61Tt1bhwBAwU0IVE92CHaEAxToKtFnEnw0b4HH
	BLp/r8kbVJfOFS+81CRhic/VTm6fkZPnp8gyAgUZXfCKWY4THQrOYcAoqR7uIXf/vtZZ
	dw/Q==
Received: by 10.14.194.199 with SMTP id m47mr16452221een.11.1355502466828;
	Fri, 14 Dec 2012 08:27:46 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id 46sm10311131eeg.4.2012.12.14.08.27.44
	(version=SSLv3 cipher=OTHER); Fri, 14 Dec 2012 08:27:45 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 14 Dec 2012 16:27:42 +0000
From: Keir Fraser <keir@xen.org>
To: Mats Petersson <mats.petersson@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CCF103FE.55C40%keir@xen.org>
Thread-Topic: [Xen-devel] xen with huffman coding
Thread-Index: Ac3aF/Ckz+jzfpFRL0Ooly1s1jc1EQ==
In-Reply-To: <50CB3C68.30605@citrix.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] xen with huffman coding
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/2012 14:49, "Mats Petersson" <mats.petersson@citrix.com> wrote:

> For code and typical data, I'm not at all convinced that huffman
> encoding (which is based on run-lengths) is the best method.

Actually Huffman encoding is not a run-length scheme.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 16:35:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 16:35:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjYDu-00067I-Qh; Fri, 14 Dec 2012 16:34:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TjYDt-000678-QV
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 16:34:53 +0000
Received: from [193.109.254.147:44639] by server-9.bemta-14.messagelabs.com id
	53/BA-24482-D255BC05; Fri, 14 Dec 2012 16:34:53 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355502890!2955367!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22152 invoked from network); 14 Dec 2012 16:34:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 16:34:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="705637"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 16:34:49 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 11:34:49 -0500
Message-ID: <50CB5528.2020203@citrix.com>
Date: Fri, 14 Dec 2012 16:34:48 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Keir Fraser <keir@xen.org>
References: <CCF103FE.55C40%keir@xen.org>
In-Reply-To: <CCF103FE.55C40%keir@xen.org>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen with huffman coding
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/12 16:27, Keir Fraser wrote:
> On 14/12/2012 14:49, "Mats Petersson" <mats.petersson@citrix.com> wrote:
>
>> For code and typical data, I'm not at all convinced that huffman
>> encoding (which is based on run-lengths) is the best method.
> Actually Huffman encoding is not a run-length scheme.
Ah, I'm confusing it with ccitt (or whatever it is that fax-machines 
use), which uses a fixed Huffman tree to encode a set of run lengths of 
black/white pixels.

Either way, looking at more than one compression mechanism may have some 
value.
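
The distinction being made here can be shown concretely: Huffman coding assigns variable-length bit codes to symbols based purely on their frequencies, and never looks at run lengths. A minimal sketch in Python (names and the tiny input are illustrative, not from any Xen code):

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict[int, str]:
    """Build a Huffman code table for the bytes in `data`.

    More frequent symbols get shorter bit strings; run lengths play
    no role, which is what separates Huffman coding from RLE-based
    schemes like the CCITT fax codes mentioned above.
    """
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak counter, {symbol: partial code}).
    # The counter keeps tuple comparison from ever reaching the dict.
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, lo = heapq.heappop(heap)
        n2, _, hi = heapq.heappop(heap)
        # Merge the two least frequent subtrees, prefixing 0 and 1.
        merged = {s: "0" + c for s, c in lo.items()}
        merged.update({s: "1" + c for s, c in hi.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

table = huffman_code(b"aaaabbc")
# 'a' is most frequent, so its code is the shortest.
assert len(table[ord("a")]) <= len(table[ord("b")]) <= len(table[ord("c")])
```

The resulting code is prefix-free, so a bit stream of concatenated codes decodes unambiguously; run-length schemes would instead encode counts of repeated symbols.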

--
Mats
>
>   -- Keir
>
>
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 17:04:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 17:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjYgN-0006pL-Vh; Fri, 14 Dec 2012 17:04:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1TjYgL-0006pG-Vc
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 17:04:18 +0000
Received: from [85.158.143.99:4203] by server-3.bemta-4.messagelabs.com id
	DB/6F-18211-11C5BC05; Fri, 14 Dec 2012 17:04:17 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355504655!29516406!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18193 invoked from network); 14 Dec 2012 17:04:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 17:04:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="758300"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 17:04:14 +0000
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 12:04:14 -0500
From: Julien Grall <julien.grall@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 14 Dec 2012 10:36:55 +0000
Message-ID: <f081fe6f4620667d6dbcee93df2555d357c169bb.1355480806.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH V4] libxenstore: filter watch events in
	libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XenStore queues watch events via a reader thread and notifies the user.
Sometimes xs_unwatch is called before all related messages have been read.
The use case is non-threaded libevent, with two events A and B:
    - Event A destroys something and calls xs_unwatch;
    - Event B notifies that a node has changed in XenStore.
As events are handled one by one, event A can be handled before event B.
So on the next xs_read_watch the user could retrieve an already-unwatched
token, and a segfault occurs if the token stores a pointer to the freed
structure (e.g. "backend:0xcafe").

To avoid problems for existing applications using libxenstore, this
behaviour is only enabled if XS_UNWATCH_FILTER is passed to xs_open.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Julien Grall <julien.grall@citrix.com>
---

  Modifications between V3 and V4:
    - Rename XS_UNWATCH_SAFE to XS_UNWATCH_FILTER;
    - Improve documentation;
    - Fix sub-path checking in xs_unwatch.

  Modifications between V2 and V3:
    - Add XS_UNWATCH_SAFE;
    - Rename xs_clear_watch_pipe to xs_maybe_clear_watch_pipe.

  Modifications between V1 and V2:
    - Add xs_clear_watch_pipe to avoid code duplication;
    - Modify commit message by Ian Jackson;
    - Rework list filtering.

 tools/xenstore/xenstore.h |   21 ++++++++++++
 tools/xenstore/xs.c       |   84 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 97 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstore.h b/tools/xenstore/xenstore.h
index 7259e49..fdf5e76 100644
--- a/tools/xenstore/xenstore.h
+++ b/tools/xenstore/xenstore.h
@@ -27,6 +27,27 @@
 #define XS_OPEN_READONLY	1UL<<0
 #define XS_OPEN_SOCKETONLY      1UL<<1
 
+/*
+ * Setting XS_UNWATCH_FILTER arranges that, after xs_unwatch, no
+ * related watch events will be delivered via xs_read_watch.  But
+ * this relies on each (token, subpath) pair being unique.
+ *
+ * XS_UNWATCH_FILTER clear          XS_UNWATCH_FILTER set
+ *
+ * Even after xs_unwatch, "stale"   After xs_unwatch returns, no
+ * instances of the watch event     watch events with the same
+ * may be delivered.                token and with the same subpath
+ *                                  will be delivered.
+ *
+ * A path and a subpath can be      The application must avoid
+ * registered with the same token.  registering a path (/foo/) and
+ *                                  a subpath (/foo/bar) with the
+ *                                  same token until a successful
+ *                                  xs_unwatch for the first watch
+ *                                  has returned.
+ */
+#define XS_UNWATCH_FILTER     1UL<<2
+
 struct xs_handle;
 typedef uint32_t xs_transaction_t;
 
diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index b951015..5368769 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -67,6 +67,8 @@ struct xs_handle {
 
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
+	/* Filtering watch event in unwatch function? */
+	bool unwatch_filter;
 
 	/*
          * A list of replies. Currently only one will ever be outstanding
@@ -125,6 +127,8 @@ struct xs_handle {
 	struct list_head watch_list;
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
+	/* Filtering watch event in unwatch function? */
+	bool unwatch_filter;
 };
 
 #define mutex_lock(m)		((void)0)
@@ -247,6 +251,8 @@ static struct xs_handle *get_handle(const char *connect_to)
 	/* Watch pipe is allocated on demand in xs_fileno(). */
 	h->watch_pipe[0] = h->watch_pipe[1] = -1;
 
+	h->unwatch_filter = false;
+
 #ifdef USE_PTHREAD
 	pthread_mutex_init(&h->watch_mutex, NULL);
 	pthread_cond_init(&h->watch_condvar, NULL);
@@ -287,6 +293,9 @@ struct xs_handle *xs_open(unsigned long flags)
 	if (!xsh && !(flags & XS_OPEN_SOCKETONLY))
 		xsh = get_handle(xs_domain_dev());
 
+	if (xsh && (flags & XS_UNWATCH_FILTER))
+		xsh->unwatch_filter = true;
+
 	return xsh;
 }
 
@@ -753,6 +762,19 @@ bool xs_watch(struct xs_handle *h, const char *path, const char *token)
 				ARRAY_SIZE(iov), NULL));
 }
 
+
+/* Clear the pipe token if there are no more pending watches.
+ * We assume the watch_mutex is already held.
+ */
+static void xs_maybe_clear_watch_pipe(struct xs_handle *h)
+{
+	char c;
+
+	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
+		while (read(h->watch_pipe[0], &c, 1) != 1)
+			continue;
+}
+
 /* Find out what node change was on (will block if nothing pending).
  * Returns array of two pointers: path and token, or NULL.
  * Call free() after use.
@@ -761,7 +783,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 				  int nonblocking)
 {
 	struct xs_stored_msg *msg;
-	char **ret, *strings, c = 0;
+	char **ret, *strings;
 	unsigned int num_strings, i;
 
 	mutex_lock(&h->watch_mutex);
@@ -798,11 +820,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 	msg = list_top(&h->watch_list, struct xs_stored_msg, list);
 	list_del(&msg->list);
 
-	/* Clear the pipe token if there are no more pending watches. */
-	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
-		while (read(h->watch_pipe[0], &c, 1) != 1)
-			continue;
-
+	xs_maybe_clear_watch_pipe(h);
 	mutex_unlock(&h->watch_mutex);
 
 	assert(msg->hdr.type == XS_WATCH_EVENT);
@@ -855,14 +873,64 @@ char **xs_read_watch(struct xs_handle *h, unsigned int *num)
 bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
 {
 	struct iovec iov[2];
+	struct xs_stored_msg *msg, *tmsg;
+	bool res;
+	char *s, *p;
+	unsigned int i;
+	char *l_token, *l_path;
 
 	iov[0].iov_base = (char *)path;
 	iov[0].iov_len = strlen(path) + 1;
 	iov[1].iov_base = (char *)token;
 	iov[1].iov_len = strlen(token) + 1;
 
-	return xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
-				ARRAY_SIZE(iov), NULL));
+	res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
+						   ARRAY_SIZE(iov), NULL));
+
+	if (!h->unwatch_filter) /* Don't filter the watch list */
+		return res;
+
+
+	/* Filter the watch list to remove potentially stale messages */
+	mutex_lock(&h->watch_mutex);
+
+	if (list_empty(&h->watch_list)) {
+	    mutex_unlock(&h->watch_mutex);
+	    return res;
+	}
+
+	list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
+		assert(msg->hdr.type == XS_WATCH_EVENT);
+
+		s = msg->body;
+
+		l_token = NULL;
+		l_path = NULL;
+
+		for (p = s, i = 0; p < msg->body + msg->hdr.len; p++) {
+			if (*p == '\0')
+			{
+				if (i == XS_WATCH_TOKEN)
+					l_token = s;
+				else if (i == XS_WATCH_PATH)
+					l_path = s;
+				i++;
+				s = p + 1;
+			}
+		}
+
+		if (l_token && !strcmp(token, l_token)
+			&& l_path && xs_path_is_subpath(path, l_path)) {
+			list_del(&msg->list);
+			free(msg);
+		}
+	}
+
+	xs_maybe_clear_watch_pipe(h);
+
+	mutex_unlock(&h->watch_mutex);
+
+	return res;
 }
 
 /* Start a transaction: changes by others will not be seen during this
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 17:20:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 17:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjYvi-00073K-H2; Fri, 14 Dec 2012 17:20:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TjYvh-00073F-Pt
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 17:20:10 +0000
Received: from [85.158.138.51:30131] by server-3.bemta-3.messagelabs.com id
	6D/85-31588-9CF5BC05; Fri, 14 Dec 2012 17:20:09 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355505608!28952578!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27540 invoked from network); 14 Dec 2012 17:20:08 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 17:20:08 -0000
Received: by mail-we0-f173.google.com with SMTP id z2so1715860wey.32
	for <xen-devel@lists.xen.org>; Fri, 14 Dec 2012 09:20:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=p4HJlCpB9XutRMxksnnZAwYPExUc0EP0P+uKQoK94uQ=;
	b=BODhtKHJZjuT3wPyAyKFnuu0f/oFj0nZnIJDZHmU5v7nwEi/GbHdKJS//uiYne9C+o
	UzmSvG1ctljw9YF7uMxGLMmDqd0nPzNrj2tak3rrZCSueG+ZzKEy/KZTv31dbL6/KVK5
	3zGTO2ZhjsqDpNiu3rS5xLZN4i/Om2fTtZMmmUsxPq8IYm6gFwV2hIcfEUTx/NdwerDo
	4j1Ybak3NhaKw5mdbtqpTdI7EoaxFaaWWhw7OJ14llzHlSQNv39jN4ajuUZkvhtHVsVN
	6EtInudZoGHkexp7KMQzcUYEVnvDLDhNRZeRaA7b73LPMtjQC7HmNUBu1IW1yxUWDK4R
	c/7A==
Received: by 10.180.107.197 with SMTP id he5mr3936352wib.1.1355505605891;
	Fri, 14 Dec 2012 09:20:05 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id p3sm13148350wic.8.2012.12.14.09.20.04
	(version=SSLv3 cipher=OTHER); Fri, 14 Dec 2012 09:20:05 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Fri, 14 Dec 2012 17:19:56 +0000
From: Keir Fraser <keir@xen.org>
To: Mats Petersson <mats.petersson@citrix.com>
Message-ID: <CCF1103C.55C4B%keir@xen.org>
Thread-Topic: [Xen-devel] xen with huffman coding
Thread-Index: Ac3aHzymkfp39Qtcr0SN+yZeSUnHRg==
In-Reply-To: <50CB5528.2020203@citrix.com>
Mime-version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen with huffman coding
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/2012 16:34, "Mats Petersson" <mats.petersson@citrix.com> wrote:

> On 14/12/12 16:27, Keir Fraser wrote:
>> On 14/12/2012 14:49, "Mats Petersson" <mats.petersson@citrix.com> wrote:
>> 
>>> For code and typical data, I'm not at all convinced that huffman
>>> encoding (which is based on run-lengths) is the best method.
>> Actually Huffman encoding is not a run-length scheme.
> Ah, I'm confusing it with ccitt (or whatever it is that fax-machines
> use), which uses a fixed Huffman tree to encode a set of run lengths of
> black/white pixels.
> 
> Either way, looking at more than one compression mechanism may have some
> value.

Yes it was a minor point really, and no one really uses Huffman on its own
afaik. More broadly I agree -- it looks like an interesting student project.

> --
> Mats
>> 
>>   -- Keir
>> 
>> 
>> 
>> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 17:33:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 17:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjZ8D-0007Hp-WA; Fri, 14 Dec 2012 17:33:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1TjZ8C-0007Hc-8L; Fri, 14 Dec 2012 17:33:04 +0000
Received: from [193.109.254.147:58693] by server-15.bemta-14.messagelabs.com
	id FF/DF-05116-FC26BC05; Fri, 14 Dec 2012 17:33:03 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355506383!8993632!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10352 invoked from network); 14 Dec 2012 17:33:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 17:33:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="159728"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 17:33:04 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 14 Dec 2012 17:33:02 +0000
Message-ID: <50CB62CD.7020103@citrix.com>
Date: Fri, 14 Dec 2012 18:33:01 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com>	<50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
	<50CB43A9.3040806@citrix.com>
	<20683.18895.282490.216230@mariner.uk.xensource.com>
In-Reply-To: <20683.18895.282490.216230@mariner.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/12 16:46, Ian Jackson wrote:
> Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
>> I was thinking of adding a new variable to aodev that can contain an
>> extra parameter to pass to hotplug scripts, so we can directly pass the
>> full diskspec to the hotplug script and the hotplug script itself can
>> decide what to save in the "params" field (to be used later in the
>> shutdown/destroy).
> 
> That's all very well but command line parameters are visible in ps and
> so not suitable for passwords.  Really there should be an area in
> xenstore that's for communication between the toolstack and the driver
> domain (including scripts in the latter), but which is not visible to
> the guest.
> 
>>> So how would we tell whether the script understood this ?
>>
>> I'm still looking into the current hotplug script mess, but I only see
>> the following lines in block-common.sh that should be changed:
>>
>> if [ "$command" != "add" ] &&
>>    [ "$command" != "remove" ]
> 
> What about existing out-of-tree scripts ?  Do they all use
> block-common ?
> 
>> then
>>   log err "Invalid command: $command"
>>   exit 1
>> fi
> 
> And this is no good because if libxl does the error handling properly
> it would cause every attempt to fail :-).  You could explicitly ignore
> prepare and unprepare.
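
As a sketch of that explicit ignore (hypothetical, and with the xen
"log" helper replaced by a plain echo so the snippet is
self-contained), the check in block-common.sh could become:

```shell
# Hypothetical sketch of the block-common.sh command check, extended to
# ignore the new-style commands explicitly.  Returns 0 for commands the
# script body should handle, 2 for new-style commands that an old
# script silently ignores (callers then exit 0), and 1 for invalid
# input.
check_command() {
    case "$1" in
        add|remove)
            return 0
            ;;
        prepare|unprepare)
            return 2
            ;;
        *)
            echo "Invalid command: $1" >&2
            return 1
            ;;
    esac
}
```

That way an unconverted script keeps its current behaviour for "add"
and "remove", and becomes a harmless no-op for the new commands instead
of reporting an error.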
> 
>> I think current block hotplug scripts will work nicely when passed the
>> "prepare" command, they will become no-ops, since they all seem to use
>> the following case:
> 
> I'm worried about out-of-tree scripts.
> 
> 
> The existing hotplug script interface is pretty horrible TBH.  Which
> is why I was suggesting inventing a new one.  We could keep the old
> interface for out-of-tree and unconverted in-tree scripts, and provide
> a new parameter to request the new style.
> 
> Eg "method=<something>" rather than "script=<something>" would mean
> "set script to <something> and also set the flag saying `use the new
> script calling convention'"

Yes, I agree that the current hotplug script interface is not good (if
this was ever intended to be an interface).

When method=<foo> is used as a parameter in the disk specification, it
is assumed that script <foo> uses the new hotplug calling convention.
Script <foo> will be called with only one of the following parameters:

 * prepare: called before the domain is built. This is especially
useful during migration, to offload as much work as possible from the
"add" call, which happens during the blackout phase of migration. In
the prepare state, the backend xenstore entries have not yet been
created.

 * add: called to connect the device. Xenstore backend entries exist,
and backend state is 2 (XenbusStateInitWait).

 * remove: called to disconnect the device. Xenstore backend entries
exist, and backend state is 6 (XenbusStateClosed).

Environment variables the script can use (set by the caller):

 * BACKEND_PATH: path to the xenstore backend of the related device,
e.g. /local/domain/0/backend/vbd/3/51712/. Empty when the script is
called with the "prepare" argument. When the new hotplug calling
convention is used, the toolstack will not write the backend "params"
node; it is up to the hotplug script to write it if necessary.

 * HOTPLUG_PATH: path to the xenstore directory that can be used to pass
extra parameters to the script. In this implementation only the "params"
variable is set, and it will contain the full diskspec string. This
xenstore path will not be deleted until the script has been called with
the "remove" parameter, so it can be used to store information that will
persist between the different hotplug calls.
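
A minimal skeleton of a script following this convention might look as
below (hypothetical sketch: the do_* bodies are placeholders, and the
xenstore-read/xenstore-write calls assume the standard xenstore command
line clients are available in the backend domain):

```shell
#!/bin/sh
# Hypothetical skeleton of a hotplug script using the proposed calling
# convention.  BACKEND_PATH and HOTPLUG_PATH are the environment
# variables described above.

do_prepare() {
    # No backend xenstore entries yet: do the slow work here (e.g. an
    # iSCSI login) so that "add" stays fast during the migration
    # blackout phase.
    :
}

do_add() {
    # Backend entries exist, state 2 (XenbusStateInitWait).  The
    # toolstack has not written "params"; fetch the diskspec from
    # HOTPLUG_PATH and write whatever "remove" will need later.
    diskspec=$(xenstore-read "$HOTPLUG_PATH/params")
    xenstore-write "$BACKEND_PATH/params" "$diskspec"
}

do_remove() {
    # Backend entries still exist, state 6 (XenbusStateClosed): undo
    # whatever "prepare" and "add" set up.
    :
}

dispatch() {
    case "$1" in
        prepare) do_prepare ;;
        add)     do_add ;;
        remove)  do_remove ;;
        *)       echo "Invalid command: $1" >&2; return 1 ;;
    esac
}

if [ $# -gt 0 ]; then
    dispatch "$1"
fi
```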

I'm not sure where HOTPLUG_PATH should reside, does
/local/domain/<backend_domid>/libxl/hotplug/<domid>/<devid>/ sound ok?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 18:25:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 18:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjZwl-00089f-H5; Fri, 14 Dec 2012 18:25:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjZwk-00089a-NQ
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 18:25:18 +0000
Received: from [85.158.138.51:59351] by server-11.bemta-3.messagelabs.com id
	B9/10-13335-C0F6BC05; Fri, 14 Dec 2012 18:25:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355509515!28662594!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6675 invoked from network); 14 Dec 2012 18:25:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 18:25:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="160957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 18:25:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 18:25:15 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjZwh-0000RK-6f; Fri, 14 Dec 2012 18:25:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjZwh-0001C6-2S;
	Fri, 14 Dec 2012 18:25:15 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.28426.862418.874446@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 18:25:14 +0000
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <f081fe6f4620667d6dbcee93df2555d357c169bb.1355480806.git.julien.grall@citrix.com>
References: <f081fe6f4620667d6dbcee93df2555d357c169bb.1355480806.git.julien.grall@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH V4] libxenstore: filter watch events in
 libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien Grall writes ("[PATCH V4] libxenstore: filter watch events in libxenstore when we unwatch"):
> XenStore queues watch events via a thread and notifies the user.
> Sometimes xs_unwatch is called before all related messages are read.
> The use case is non-threaded libevent, with two events A and B:
>     - Event A will destroy something and call xs_unwatch;
>     - Event B is used to notify that a node has changed in XenStore.
> As the events are handled one by one, event A can be handled before
> event B. So on the next xs_read_watch the user could retrieve an
> unwatched token, and a segfault occurs if the token stores the pointer
> of the structure (ie: "backend:0xcafe").
> 
> To avoid problems with existing applications using libxenstore, this
> behaviour will only be enabled if XS_UNWATCH_FILTER is passed to
> xs_open.
...
> -	return xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
> -				ARRAY_SIZE(iov), NULL));
> +	res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
> +						   ARRAY_SIZE(iov), NULL));

Thanks!

I think the only things left are some formatting glitches.  Here, you
change the indentation but the new indentation doesn't match.

> +	if (!h->unwatch_filter) /* Don't filter the watch list */
> +		return res;
...
> +	if (list_empty(&h->watch_list)) {
> +	    mutex_unlock(&h->watch_mutex);
> +	    return res;

And here's a 4-character indent.  xs.c seems to use hard tabs so we
should continue that convention.

> +		if (l_token && !strcmp(token, l_token)
> +			&& l_path && xs_path_is_subpath(path, l_path)) {

Style in xs seems to have the && on the previous line and to align the
next line with the inside of the opening paren.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 18:26:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 18:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjZxl-0008Di-4s; Fri, 14 Dec 2012 18:26:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TjZxj-0008DQ-41; Fri, 14 Dec 2012 18:26:19 +0000
Received: from [193.109.254.147:26077] by server-1.bemta-14.messagelabs.com id
	0C/08-15901-A4F6BC05; Fri, 14 Dec 2012 18:26:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355509576!2965145!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24179 invoked from network); 14 Dec 2012 18:26:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 18:26:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="160978"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 18:26:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 18:26:16 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TjZxg-0000Rf-5Y; Fri, 14 Dec 2012 18:26:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TjZxg-0001CL-28;
	Fri, 14 Dec 2012 18:26:16 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20683.28487.926648.451291@mariner.uk.xensource.com>
Date: Fri, 14 Dec 2012 18:26:15 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CB62CD.7020103@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
	<50CB43A9.3040806@citrix.com>
	<20683.18895.282490.216230@mariner.uk.xensource.com>
	<50CB62CD.7020103@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> Script <foo> will be called with only one of the following parameters:
> 
>  * prepare: called before the domain is built. This is especially
> useful during migration, to offload as much work as possible from the
> "add" call, which happens during the blackout phase of migration. In
> the prepare state, the backend xenstore entries have not yet been
> created.
> 
>  * add: called to connect the device. Xenstore backend entries exist,
> and backend state is 2 (XenbusStateInitWait).
> 
>  * remove: called to disconnect the device. Xenstore backend entries
> exist, and backend state is 6 (XenbusStateClosed).

I assume we need an unprepare here too.

> Environment variables the script can use (set by the caller):
...
> I'm not sure where HOTPLUG_PATH should reside, does
> /local/domain/<backend_domid>/libxl/hotplug/<domid>/<devid>/ sound ok?

I think that would be fine.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 18:38:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 18:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tja9Q-0000Cd-If; Fri, 14 Dec 2012 18:38:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1Tja9O-0000CP-WC; Fri, 14 Dec 2012 18:38:23 +0000
Received: from [85.158.139.83:15093] by server-15.bemta-5.messagelabs.com id
	0E/FB-20523-E127BC05; Fri, 14 Dec 2012 18:38:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355510301!22569973!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26116 invoked from network); 14 Dec 2012 18:38:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 18:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="161138"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 18:38:22 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 14 Dec 2012 18:38:21 +0000
Message-ID: <50CB721C.5010006@citrix.com>
Date: Fri, 14 Dec 2012 19:38:20 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com>	<50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
	<50CB43A9.3040806@citrix.com>
	<20683.18895.282490.216230@mariner.uk.xensource.com>
	<50CB62CD.7020103@citrix.com>
	<20683.28487.926648.451291@mariner.uk.xensource.com>
In-Reply-To: <20683.28487.926648.451291@mariner.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/12/12 19:26, Ian Jackson wrote:
> Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
>>  * remove: called to disconnect the device. Xenstore backend entries
>> exist, and backend state is 6 (XenbusStateClosed).
> 
> I assume we need an unprepare here too.

I've also thought about that, but the reason for prepare to exist is
to reduce the time that the "add" operation takes, thus shortening the
blackout phase during migration.

There's no such problem in the remove phase, but I guess we need an
unprepare in case there's a failure between the prepare and add
operations, and we wish to give the hotplug script an opportunity to
unwind whatever the prepare operation has done.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 18:47:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 18:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjaII-0000SP-Jr; Fri, 14 Dec 2012 18:47:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1TjaIG-0000SK-W8
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 18:47:33 +0000
Received: from [85.158.139.211:19995] by server-15.bemta-5.messagelabs.com id
	7D/14-20523-4447BC05; Fri, 14 Dec 2012 18:47:32 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355510849!18115935!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26761 invoked from network); 14 Dec 2012 18:47:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 18:47:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="772595"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 18:47:25 +0000
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 13:47:25 -0500
From: Julien Grall <julien.grall@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 14 Dec 2012 12:20:35 +0000
Message-ID: <ce70525c32dc475f58ad007a6c6a735d094c57e5.1355487375.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH V5] libxenstore: filter watch events in
	libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XenStore queues watch events via a thread and notifies the user.
Sometimes xs_unwatch is called before all related messages have been
read. The use case is non-threaded libevent, with two events A and B:
    - Event A will destroy something and call xs_unwatch;
    - Event B is used to notify that a node has changed in XenStore.
As the events are handled one by one, event A can be handled before
event B. So on the next xs_read_watch the user could retrieve a token
for a watch that has already been removed, and a segfault occurs if
the token stores a pointer to the (now freed) structure
(e.g. "backend:0xcafe").

To avoid problems for existing applications using libxenstore, this
behaviour is only enabled if XS_UNWATCH_FILTER is passed to xs_open.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Julien Grall <julien.grall@citrix.com>
---

  Modifications between V4 and V5:
    - Use tab instead of space for the indentation.

  Modifications between V3 and V4:
    - Rename XS_UNWATCH_SAFE to XS_UNWATCH_FILTER;
    - Improve documentation;
    - Fix sub-path checking in xs_unwatch.

  Modifications between V2 and V3:
    - Add XS_UNWATCH_SAFE;
    - Rename xs_clear_watch_pipe to xs_maybe_clear_watch_pipe.

  Modifications between V1 and V2:
    - Add xs_clear_watch_pipe to avoid code duplication;
    - Modify commit message by Ian Jackson;
    - Rework list filtering.

 tools/xenstore/xenstore.h |   21 ++++++++++++
 tools/xenstore/xs.c       |   84 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 97 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstore.h b/tools/xenstore/xenstore.h
index 7259e49..fdf5e76 100644
--- a/tools/xenstore/xenstore.h
+++ b/tools/xenstore/xenstore.h
@@ -27,6 +27,27 @@
 #define XS_OPEN_READONLY	1UL<<0
 #define XS_OPEN_SOCKETONLY      1UL<<1
 
+/*
+ * Setting XS_UNWATCH_FILTER arranges that, after xs_unwatch, no
+ * related watch events will be delivered via xs_read_watch.  This
+ * relies on each (token, subpath) pair being unique.
+ *
+ * XS_UNWATCH_FILTER clear          XS_UNWATCH_FILTER set
+ *
+ * Even after xs_unwatch, "stale"   After xs_unwatch returns, no
+ * instances of the watch event     watch events with the same
+ * may be delivered.                token and with the same subpath
+ *                                  will be delivered.
+ *
+ * A path and a subpath can be      The application must avoid
+ * registered with the same token.  registering a path (/foo/) and
+ *                                  a subpath (/foo/bar) with the
+ *                                  same token until a successful
+ *                                  xs_unwatch for the first watch
+ *                                  has returned.
+ */
+#define XS_UNWATCH_FILTER     1UL<<2
+
 struct xs_handle;
 typedef uint32_t xs_transaction_t;
 
diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index b951015..86ef6c7 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -67,6 +67,8 @@ struct xs_handle {
 
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
+	/* Filtering watch event in unwatch function? */
+	bool unwatch_filter;
 
 	/*
          * A list of replies. Currently only one will ever be outstanding
@@ -125,6 +127,8 @@ struct xs_handle {
 	struct list_head watch_list;
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
+	/* Filtering watch event in unwatch function? */
+	bool unwatch_filter;
 };
 
 #define mutex_lock(m)		((void)0)
@@ -247,6 +251,8 @@ static struct xs_handle *get_handle(const char *connect_to)
 	/* Watch pipe is allocated on demand in xs_fileno(). */
 	h->watch_pipe[0] = h->watch_pipe[1] = -1;
 
+	h->unwatch_filter = false;
+
 #ifdef USE_PTHREAD
 	pthread_mutex_init(&h->watch_mutex, NULL);
 	pthread_cond_init(&h->watch_condvar, NULL);
@@ -287,6 +293,9 @@ struct xs_handle *xs_open(unsigned long flags)
 	if (!xsh && !(flags & XS_OPEN_SOCKETONLY))
 		xsh = get_handle(xs_domain_dev());
 
+	if (xsh && (flags & XS_UNWATCH_FILTER))
+		xsh->unwatch_filter = true;
+
 	return xsh;
 }
 
@@ -753,6 +762,19 @@ bool xs_watch(struct xs_handle *h, const char *path, const char *token)
 				ARRAY_SIZE(iov), NULL));
 }
 
+
+/* Clear the pipe token if there are no more pending watches.
+ * The caller must already hold the watch_mutex.
+ */
+static void xs_maybe_clear_watch_pipe(struct xs_handle *h)
+{
+	char c;
+
+	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
+		while (read(h->watch_pipe[0], &c, 1) != 1)
+			continue;
+}
+
 /* Find out what node change was on (will block if nothing pending).
  * Returns array of two pointers: path and token, or NULL.
  * Call free() after use.
@@ -761,7 +783,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 				  int nonblocking)
 {
 	struct xs_stored_msg *msg;
-	char **ret, *strings, c = 0;
+	char **ret, *strings;
 	unsigned int num_strings, i;
 
 	mutex_lock(&h->watch_mutex);
@@ -798,11 +820,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 	msg = list_top(&h->watch_list, struct xs_stored_msg, list);
 	list_del(&msg->list);
 
-	/* Clear the pipe token if there are no more pending watches. */
-	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
-		while (read(h->watch_pipe[0], &c, 1) != 1)
-			continue;
-
+	xs_maybe_clear_watch_pipe(h);
 	mutex_unlock(&h->watch_mutex);
 
 	assert(msg->hdr.type == XS_WATCH_EVENT);
@@ -855,14 +873,64 @@ char **xs_read_watch(struct xs_handle *h, unsigned int *num)
 bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
 {
 	struct iovec iov[2];
+	struct xs_stored_msg *msg, *tmsg;
+	bool res;
+	char *s, *p;
+	unsigned int i;
+	char *l_token, *l_path;
 
 	iov[0].iov_base = (char *)path;
 	iov[0].iov_len = strlen(path) + 1;
 	iov[1].iov_base = (char *)token;
 	iov[1].iov_len = strlen(token) + 1;
 
-	return xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
-				ARRAY_SIZE(iov), NULL));
+	res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
+			       ARRAY_SIZE(iov), NULL));
+
+	if (!h->unwatch_filter) /* Don't filter the watch list */
+		return res;
+
+
+	/* Filter the watch list to remove potential message */
+	mutex_lock(&h->watch_mutex);
+
+	if (list_empty(&h->watch_list)) {
+		mutex_unlock(&h->watch_mutex);
+		return res;
+	}
+
+	list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
+		assert(msg->hdr.type == XS_WATCH_EVENT);
+
+		s = msg->body;
+
+		l_token = NULL;
+		l_path = NULL;
+
+		for (p = s, i = 0; p < msg->body + msg->hdr.len; p++) {
+			if (*p == '\0')
+			{
+				if (i == XS_WATCH_TOKEN)
+					l_token = s;
+				else if (i == XS_WATCH_PATH)
+					l_path = s;
+				i++;
+				s = p + 1;
+			}
+		}
+
+		if (l_token && !strcmp(token, l_token) &&
+		    l_path && xs_path_is_subpath(path, l_path)) {
+			list_del(&msg->list);
+			free(msg);
+		}
+	}
+
+	xs_maybe_clear_watch_pipe(h);
+
+	mutex_unlock(&h->watch_mutex);
+
+	return res;
 }
 
 /* Start a transaction: changes by others will not be seen during this
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 18:53:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 18:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjaNq-0000eC-DK; Fri, 14 Dec 2012 18:53:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TjaNo-0000e7-HW
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 18:53:16 +0000
Received: from [85.158.143.35:34217] by server-3.bemta-4.messagelabs.com id
	10/C5-18211-B957BC05; Fri, 14 Dec 2012 18:53:15 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355511193!5231150!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDEwOTg2OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29221 invoked from network); 14 Dec 2012 18:53:15 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 18:53:15 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355511195; x=1387047195;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=Kx013HD7p91IBSi82hqgaV0v3+qQ6Z4PKS2XhmileAk=;
	b=qrWo7A/NNsM+UJFHllw/ZgkYEn+ic9P5Hhg3CAcXFIXA6Ll66TUckPvJ
	F5QgvmePtZdfxLFSOcLYmBvjTIgpBnOwHmm76jw4PAthC+Wh7SGxUSeY7
	LdHwA9WDAhujxv68t3UZdCA2TFmGwiZuNK9DcJUDVj7ZzzOQbcUF84poo o=;
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; d="scan'208";a="418141053"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 14 Dec 2012 18:53:12 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-1101.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBEIrAe2024622
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Fri, 14 Dec 2012 18:53:11 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.247.3; Fri, 14 Dec 2012 10:53:04 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Fri, 14 Dec 2012 10:53:04 -0800
Date: Fri, 14 Dec 2012 10:53:04 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Message-ID: <20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 11:12:50PM +0000, Palagummi, Siva wrote:
> > -----Original Message-----
> > From: Matt Wilson [mailto:msw@amazon.com]
> > Sent: Wednesday, December 12, 2012 3:05 AM
> >
> > On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > >
> > > You can clearly see below that copy_off is input to
> > > start_new_rx_buffer while copying frags.
> > 
> > Yes, but that's the right thing to do. copy_off should be set to the
> > destination offset after copying the last byte of linear data, which
> > means "skb_headlen(skb) % PAGE_SIZE" is correct.
> > 
>
> No. It is not correct, for two reasons. First, what if
> skb_headlen(skb) is exactly a multiple of PAGE_SIZE? copy_off would
> be set to ZERO, and if there is some data in the frags, ZERO will be
> passed in as the copy_off value and start_new_rx_buffer will return
> FALSE. The second reason is the obvious case in the current code
> where a hole of size "offset_in_page(skb->data)" will be left in the
> first buffer after the first pass, whenever the remaining data to be
> copied would overflow the first buffer.

Right, and I'm arguing that having the code leave a hole is less
desirable than potentially increasing the number of copy
operations. I'd like to hear from Ian and others if using the buffers
efficiently is more important than reducing copy operations. Intuitively,
I think it's more important to use the ring efficiently.

> > > So if the buggy "count" calculation below is fixed based on
> > > offset_in_page value then copy_off value also will change
> > > accordingly.
> > 
> > This calculation is not incorrect. You should only need as many
> > PAGE_SIZE buffers as you have linear data to fill.
> > 
>
> This calculation is incorrect and does not match the actual number
> of slots used unless some further change is made either in
> netbk_gop_skb or in start_new_rx_buffer.

Yes, and I'm proposing to change netbk_gop_skb() and
start_new_rx_buffer().

> > >       linear_len = skb_headlen(skb)
> > > 	count = (linear_len <= PAGE_SIZE)
> > >               ? 1
> > >               :DIV_ROUND_UP(offset_in_page(skb->data)+linear_len,
> > PAGE_SIZE));
> > >
> > >       copy_off = ((offset_in_page(skb->data)+linear_len) <
> > 2*PAGE_SIZE)
> > > 			? linear_len % PAGE_SIZE;
> > > 			: (offset_in_page(skb->data)+linear_len) % PAGE_SIZE;
> > 
> > A change like this makes the code much more difficult to understand.
> > 

> :-) It would have been easier had we written the logic using a for
> loop similar to how the counting is done for the data in frags. In
> fact I did make a mistake in the above calculations :-( . A proper
> version should probably look somewhat like the one below.

Yes, we tested one version of a fix that modeled the algorithm used in
count_skb_slots() more directly after the copy loop in netbk_gop_skb()
to avoid such errors.

> linear_len=skb_headlen(skb);
> count = (linear_len <= PAGE_SIZE)
> 	? 1
> 	:DIV_ROUND_UP(offset_in_page(skb->data)+linear_len,PAGE_SIZE);
> 
> copy_off = (linear_len <= PAGE_SIZE)
> 		?linear_len
> 		:( offset_in_page(skb->data)+linear_len -1)%PAGE_SIZE+1;

If we can agree that utilizing buffers efficiently is more important
than reducing copy operations, we can avoid this complexity.

> > For some SKBs with large linear buffers, we certainly are wasting
> > space. Go back and read the explanation in
> >   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
>
> I think I probably did not put my point clearly enough.
> xen_netbk_rx_ring_full uses the max_required_rx_slots value, and it
> is called to decide whether a vif is schedulable or not. So when the
> mtu value is >= PAGE_SIZE, a worst-case scenario would require an
> additional buffer, which is not taken care of by the current
> calculations.

Correct, I agree with your description of the current problem.

> Of course, in your new fix, if you change the code not to leave a
> hole in the first buffer, then this correction may not be required.
> But I am not the right person to decide the implications of the fix
> you are proposing. The current start_new_rx_buffer seems to be
> trying to make the copies PAGE aligned and also to reduce the number
> of copy operations.

Indeed, that is what the change that we're testing now does. Changing
start_new_rx_buffer() to avoid leaving a hole in the first buffer is
performing quite well. Unfortunately I don't have comparison numbers
just now.

> For example let us say SKB_HEAD_LEN is for whatever reason
> 4*PAGE_SIZE and offset_in_page is 32.
>
> As per existing logic of start_new_rx_buffer and with the fix I am
> proposing for count and copy_off, we will calculate and occupy 5
> ring buffers and will use 5 copy operations.

I agree.

> If we fix it the way you are proposing, not to leave a hole in the
> first buffer by modifying start_new_rx_buffer then it will occupy 4
> ring buffers but will require 8 copy operations as per existing
> logic in netbk_gop_skb while copying head!!

I think that we should optimize for more commonly seen lengths, which
I'd think would be 2-3 pages max. The delta in copy operations is
smaller in these cases, where we would use 4 copy operations for 2
pages (as opposed to 3 with the current algorithm) and 6 copy
operations for 3 pages (as opposed to 4 with the current algorithm).

IanC, Konrad - do you have any opinions on the best approach here?

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 19:22:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 19:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjaqD-0000yX-4x; Fri, 14 Dec 2012 19:22:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjaqB-0000yS-Kd
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 19:22:35 +0000
Received: from [85.158.137.99:28336] by server-11.bemta-3.messagelabs.com id
	7D/19-13335-A7C7BC05; Fri, 14 Dec 2012 19:22:34 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355512952!13962475!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25126 invoked from network); 14 Dec 2012 19:22:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 19:22:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="726951"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 19:22:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 14:22:31 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tjaq7-0002lk-IC;
	Fri, 14 Dec 2012 19:22:31 +0000
Message-ID: <50CB7B0B.9010605@eu.citrix.com>
Date: Fri, 14 Dec 2012 19:16:27 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
In-Reply-To: <bced65aa4410b0272064.1355280771@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> In _csched_cpu_pick() we try to select the best possible CPU for
> running a VCPU, considering the characteristics of the underlying
> hardware (i.e., how many threads, cores, and sockets, and how busy they
> are). What we want is "the idle execution vehicle with the most
> idling neighbours in its grouping".
>
> In order to achieve it, we select a CPU from the VCPU's affinity,
> giving preference to its current processor if possible, as the basis
> for the comparison with all the other CPUs. Problem is, to discount
> the VCPU itself when computing this "idleness" (in an attempt to be
> fair wrt its current processor), we arbitrarily and unconditionally
> consider that selected CPU as idle, even when it is not the case,
> for instance:
>   1. If the CPU is not the one where the VCPU is running (perhaps due
>      to the affinity being changed);
>   2. The CPU is where the VCPU is running, but it has other VCPUs in
>      its runq, so it won't go idle even if the VCPU in question goes.

Good catch -- thanks.  Comments below.

> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -59,6 +59,8 @@
>   #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
>   #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
>   #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
> +/* Is the first element of _cpu's runq its idle vcpu? */
> +#define IS_RUNQ_IDLE(_cpu)  (is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
>   
>   
>   /*
> @@ -479,9 +481,14 @@ static int
>        * distinct cores first and guarantees we don't do something stupid
>        * like run two VCPUs on co-hyperthreads while there are idle cores
>        * or sockets.
> +     *
> +     * Notice that, when computing the "idleness" of cpu, we may want to
> +     * discount vc. That is, iff vc is the currently running and the only
> +     * runnable vcpu on cpu, we add cpu to the idlers.
>        */
>       cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    cpumask_set_cpu(cpu, &idlers);
> +    if ( current_on_cpu(cpu) == vc && IS_RUNQ_IDLE(cpu) )
> +        cpumask_set_cpu(cpu, &idlers);

Why bother with this whole "current_on_cpu()" thing, when you can just 
look at vc->processor?  I.e.:

if ( cpu == vc->processor && IS_RUNQ_IDLE(cpu) )

>       cpumask_and(&cpus, &cpus, &idlers);
>       cpumask_clear_cpu(cpu, &cpus);
>   
> @@ -489,7 +496,7 @@ static int
>       {
>           cpumask_t cpu_idlers;
>           cpumask_t nxt_idlers;
> -        int nxt, weight_cpu, weight_nxt;
> +        int nxt, nr_idlers_cpu, nr_idlers_nxt;

I think Jan is right, this probably should be a separate patch.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 19:36:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 19:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjb32-0001Ao-Fk; Fri, 14 Dec 2012 19:35:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tjb30-0001Aj-JS
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 19:35:50 +0000
Received: from [85.158.138.51:49932] by server-13.bemta-3.messagelabs.com id
	D1/3C-00465-09F7BC05; Fri, 14 Dec 2012 19:35:44 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355513738!10300253!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23736 invoked from network); 14 Dec 2012 19:35:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 19:35:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="778980"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 19:35:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 14:35:34 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tjb2k-0002we-GN;
	Fri, 14 Dec 2012 19:35:34 +0000
Message-ID: <50CB7E1A.8000304@eu.citrix.com>
Date: Fri, 14 Dec 2012 19:29:30 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<69860abfe7aec775f781.1355280772@Solace>
In-Reply-To: <69860abfe7aec775f781.1355280772@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 6 v2] xen: sched_credit: improve
 tickling of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> Right now, when a VCPU wakes up, we check whether it should preempt
> what is running on the PCPU, and whether or not the waking VCPU can
> be migrated (by tickling some idlers). However, this can result in
> suboptimal or even wrong behaviour, as explained here:
>
>   http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html
>
> This change, instead, when deciding which PCPU(s) to tickle upon
> VCPU wake-up, considers both what is likely to happen on the PCPU
> where the wakeup occurs, and whether or not there are idlers where
> the woken-up VCPU can run. In fact, if there are, we can avoid
> interrupting the running VCPU. Only if there are no such PCPUs
> are preemption and migration the way to go.
>
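[In compilable pseudocode, the decision described above boils down to the
following; the enum, the PRI_IDLE stand-in for CSCHED_PRI_IDLE, and the
function name are illustrative, not the real Xen identifiers:]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified sketch of the new wake-up decision. new_pri/cur_pri are the
 * priorities of the waking vcpu and of the vcpu currently running on the
 * waking pcpu; PRI_IDLE stands in for CSCHED_PRI_IDLE. */
#define PRI_IDLE (-64)

enum action { RUN_HERE, WAKE_IDLER, KICK_CUR_AWAY, DO_NOTHING };

static enum action tickle_decision(int new_pri, int cur_pri,
                                   bool idlers_empty, bool suitable_idler)
{
    /* The pcpu is idle, or there are no idlers at all and new outranks
     * cur: run new right here. */
    if (cur_pri == PRI_IDLE || (idlers_empty && new_pri > cur_pri))
        return RUN_HERE;

    if (!idlers_empty) {
        /* No idler can take new, but new outranks cur: preempt cur here
         * and ask the scheduler to migrate cur elsewhere. */
        if (!suitable_idler && new_pri > cur_pri)
            return KICK_CUR_AWAY;
        /* Some idler can take new: leave the (likely cache-hot) cur
         * alone and wake an idler, which is cache-cold anyway. */
        if (suitable_idler)
            return WAKE_IDLER;
    }
    return DO_NOTHING;
}
```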
> This has been tested (on top of the previous change) by running
> the following benchmarks inside 2, 6 and 10 VMs, concurrently, on
> a shared host, each with 2 VCPUs and 960 MB of memory (host had 16
> ways and 12 GB RAM).
>
> 1) All VMs had 'cpus="all"' in their config file.
>
> $ sysbench --test=cpu ... (time, lower is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 50.078467 +/- 1.6676162 | 49.673667 +/- 0.0094321 |
>   | 6   | 63.259472 +/- 0.1137586 | 61.680011 +/- 1.0208723 |
>   | 10  | 91.246797 +/- 0.1154008 | 90.396720 +/- 1.5900423 |
> $ sysbench --test=memory ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 485.56333 +/- 6.0527356 | 487.83167 +/- 0.7602850 |
>   | 6   | 401.36278 +/- 1.9745916 | 409.96778 +/- 3.6761092 |
>   | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
> $ specjbb2005 ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 43150.63 +/- 1359.5616  | 43275.427 +/- 606.28185 |
>   | 6   | 29274.29 +/- 1024.4042  | 29716.189 +/- 1290.1878 |
>   | 10  | 19061.28 +/- 512.88561  | 19192.599 +/- 605.66058 |
>
>
> 2) All VMs had their VCPUs statically pinned to the host's PCPUs.
>
> $ sysbench --test=cpu ... (time, lower is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
>   | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
>   | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
> $ sysbench --test=memory ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
>   | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
>   | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
> $ specjbb2005 ... (throughput, higher is better)
>   | VMs | w/o this change         | w/ this change          |
>   | 2   | 49591.057 +/- 952.93384 | 49594.195 +/- 799.57976 |
>   | 6   | 33538.247 +/- 1089.2115 | 33671.758 +/- 1077.6806 |
>   | 10  | 21927.870 +/- 831.88742 | 21891.131 +/- 563.37929 |
>
>
> Numbers show that the change has either no or very limited impact
> (specjbb2005 case) or, when it does have some impact, that is a
> real improvement in performance (sysbench-memory case).
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Changes from v1:
>   * Rewritten as per George's suggestion, in order to improve readability.
>   * Killed some of the stats, with the only exception of `tickle_idlers_none'
>     and `tickle_idlers_some'. They don't make things look that terrible and
>     I think they could be useful.
>   * The preemption+migration of the currently running VCPU has been turned
>     into a migration request, instead of just tickling. I traced this
>     thing some more, and it looks like that is the way to go. Tickling is
>     not effective here, because the woken PCPU would expect cur to be out
>     of the scheduler tail, which is likely false (cur->is_running is
>     still set to 1).

Ah, right. :-)

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -133,6 +133,7 @@ struct csched_vcpu {
>           uint32_t state_idle;
>           uint32_t migrate_q;
>           uint32_t migrate_r;
> +        uint32_t kicked_away;
>       } stats;
>   #endif
>   };
> @@ -251,54 +252,67 @@ static inline void
>       struct csched_vcpu * const cur =
>           CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
> -    cpumask_t mask;
> +    cpumask_t mask, idle_mask;
> +    int idlers_empty;
>   
>       ASSERT(cur);
>       cpumask_clear(&mask);
>   
> -    /* If strictly higher priority than current VCPU, signal the CPU */
> -    if ( new->pri > cur->pri )
> +    idlers_empty = cpumask_empty(prv->idlers);
> +
> +    /*
> +     * If the pcpu is idle, or there are no idlers and the new
> +     * vcpu is a higher priority than the old vcpu, run it here.
> +     *
> +     * If there are idle cpus, first try to find one suitable to run
> +     * new, so we can avoid preempting cur.  If we cannot find a
> +     * suitable idler on which to run new, run it here, but try to
> +     * find a suitable idler on which to run cur instead.
> +     */
> +    if ( cur->pri == CSCHED_PRI_IDLE
> +         || (idlers_empty && new->pri > cur->pri) )
>       {
> -        if ( cur->pri == CSCHED_PRI_IDLE )
> -            SCHED_STAT_CRANK(tickle_local_idler);
> -        else if ( cur->pri == CSCHED_PRI_TS_OVER )
> -            SCHED_STAT_CRANK(tickle_local_over);
> -        else if ( cur->pri == CSCHED_PRI_TS_UNDER )
> -            SCHED_STAT_CRANK(tickle_local_under);
> -        else
> -            SCHED_STAT_CRANK(tickle_local_other);
> -
> +        if ( cur->pri != CSCHED_PRI_IDLE )
> +            SCHED_STAT_CRANK(tickle_idlers_none);
>           cpumask_set_cpu(cpu, &mask);
>       }
> +    else if ( !idlers_empty )
> +    {
> +        /* Check whether or not there are idlers that can run new */
> +        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
>   
> -    /*
> -     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
> -     * let them know there is runnable work in the system...
> -     */
> -    if ( cur->pri > CSCHED_PRI_IDLE )
> -    {
> -        if ( cpumask_empty(prv->idlers) )
> +        /*
> +         * If there are no suitable idlers for new, and it's higher
> +         * priority than cur, ask the scheduler to migrate cur away.
> +         * We have to act like this (instead of just waking some of
> +         * the idlers suitable for cur) because cur is running.
> +         *
> +         * If there are suitable idlers for new, no matter priorities,
> +         * leave cur alone (as it is running and is, likely, cache-hot)
> +         * and wake some of them (which is waking up and so is, likely,
> +         * cache cold anyway).
> +         */
> +        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
>           {
>               SCHED_STAT_CRANK(tickle_idlers_none);
> +            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> +            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> +            SCHED_STAT_CRANK(migrate_kicked_away);
> +            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +            cpumask_set_cpu(cpu, &mask);
>           }
> -        else
> +        else if ( !cpumask_empty(&idle_mask) )
>           {
> -            cpumask_t idle_mask;
> -
> -            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
> -            if ( !cpumask_empty(&idle_mask) )
> +            /* Which of the idlers suitable for new shall we wake up? */
> +            SCHED_STAT_CRANK(tickle_idlers_some);
> +            if ( opt_tickle_one_idle )
>               {
> -                SCHED_STAT_CRANK(tickle_idlers_some);
> -                if ( opt_tickle_one_idle )
> -                {
> -                    this_cpu(last_tickle_cpu) =
> -                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> -                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> -                }
> -                else
> -                    cpumask_or(&mask, &mask, &idle_mask);
> +                this_cpu(last_tickle_cpu) =
> +                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> +                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
>               }
> -            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
> +            else
> +                cpumask_or(&mask, &mask, &idle_mask);
>           }
>       }
>   
> @@ -1456,13 +1470,14 @@ csched_dump_vcpu(struct csched_vcpu *svc
>       {
>           printk(" credit=%i [w=%u]", atomic_read(&svc->credit), sdom->weight);
>   #ifdef CSCHED_STATS
> -        printk(" (%d+%u) {a/i=%u/%u m=%u+%u}",
> +        printk(" (%d+%u) {a/i=%u/%u m=%u+%u (k=%u)}",
>                   svc->stats.credit_last,
>                   svc->stats.credit_incr,
>                   svc->stats.state_active,
>                   svc->stats.state_idle,
>                   svc->stats.migrate_q,
> -                svc->stats.migrate_r);
> +                svc->stats.migrate_r,
> +                svc->stats.kicked_away);
>   #endif
>       }
>   
> diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
> --- a/xen/include/xen/perfc_defn.h
> +++ b/xen/include/xen/perfc_defn.h
> @@ -39,10 +39,6 @@ PERFCOUNTER(vcpu_wake_runnable,     "csc
>   PERFCOUNTER(vcpu_wake_not_runnable, "csched: vcpu_wake_not_runnable")
>   PERFCOUNTER(vcpu_park,              "csched: vcpu_park")
>   PERFCOUNTER(vcpu_unpark,            "csched: vcpu_unpark")
> -PERFCOUNTER(tickle_local_idler,     "csched: tickle_local_idler")
> -PERFCOUNTER(tickle_local_over,      "csched: tickle_local_over")
> -PERFCOUNTER(tickle_local_under,     "csched: tickle_local_under")
> -PERFCOUNTER(tickle_local_other,     "csched: tickle_local_other")
>   PERFCOUNTER(tickle_idlers_none,     "csched: tickle_idlers_none")
>   PERFCOUNTER(tickle_idlers_some,     "csched: tickle_idlers_some")
>   PERFCOUNTER(load_balance_idle,      "csched: load_balance_idle")
> @@ -52,6 +48,7 @@ PERFCOUNTER(steal_trylock_failed,   "csc
>   PERFCOUNTER(steal_peer_idle,        "csched: steal_peer_idle")
>   PERFCOUNTER(migrate_queued,         "csched: migrate_queued")
>   PERFCOUNTER(migrate_running,        "csched: migrate_running")
> +PERFCOUNTER(migrate_kicked_away,    "csched: migrate_kicked_away")
>   PERFCOUNTER(vcpu_hot,               "csched: vcpu_hot")
>   
>   PERFCOUNTER(need_flush_tlb_flush,   "PG_need_flush tlb flushes")


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 19:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 19:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbCZ-0001Lm-JE; Fri, 14 Dec 2012 19:45:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbCX-0001Lh-QN
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 19:45:42 +0000
Received: from [85.158.138.51:38817] by server-11.bemta-3.messagelabs.com id
	B2/37-13335-0E18BC05; Fri, 14 Dec 2012 19:45:36 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355514325!22608744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23140 invoked from network); 14 Dec 2012 19:45:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 19:45:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="780433"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 19:45:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 14:45:24 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbCG-00035x-Ho;
	Fri, 14 Dec 2012 19:45:24 +0000
Message-ID: <50CB8068.7080404@eu.citrix.com>
Date: Fri, 14 Dec 2012 19:39:20 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<21b142498d4699921251.1355280773@Solace>
In-Reply-To: <21b142498d4699921251.1355280773@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3 of 6 v2] xen: sched_credit: use
 current_on_cpu() when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> current_on_cpu() was defined by the previous change; using it wherever
> appropriate throughout sched_credit.c makes the code easier to read.

Hmm, I hadn't read this patch when I commented about removing the macro 
from the first patch. :-)

I personally think that using vc->processor is better in that patch 
anyway; but using this macro elsewhere is probably fine.

I think, from a taste point of view, I would have put this patch, with 
the new definition, as the first patch in the series, and then had the 
second patch just use it.

I'll go back and respond to Jan's comment about the macro.

  -George

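[For reference: judging from the mechanical substitutions in the diff
below, the current_on_cpu() helper from the previous patch is presumably
just shorthand for per_cpu(schedule_data, cpu).curr. A self-contained
sketch, with the per-cpu machinery mocked out (the mock names are
illustrative, not the real Xen ones):]

```c
#include <assert.h>

/* Minimal mock of the per-cpu machinery, only to show the shape of the
 * helper; the real definition comes from the previous patch in this
 * series and is reconstructed here from the substitutions below. */
struct sched_slot { void *curr; };

static struct sched_slot schedule_data_mock[4];  /* one slot per mock cpu */

/* Mock per_cpu(): index the per-variable array by cpu number. */
#define per_cpu(var, cpu)    (var##_mock[(cpu)])

/* The presumed helper: the vcpu currently running on a given pcpu. */
#define current_on_cpu(_c)   (per_cpu(schedule_data, (_c)).curr)
```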
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -231,7 +231,7 @@ static void burn_credits(struct csched_v
>       unsigned int credits;
>   
>       /* Assert svc is current */
> -    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
> +    ASSERT( svc == CSCHED_VCPU(current_on_cpu(svc->vcpu->processor)) );
>   
>       if ( (delta = now - svc->start_time) <= 0 )
>           return;
> @@ -249,8 +249,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
>   static inline void
>   __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
>   {
> -    struct csched_vcpu * const cur =
> -        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
> +    struct csched_vcpu * const cur = CSCHED_VCPU(current_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask, idle_mask;
>       int idlers_empty;
> @@ -387,7 +386,7 @@ csched_alloc_pdata(const struct schedule
>           per_cpu(schedule_data, cpu).sched_priv = spc;
>   
>       /* Start off idling... */
> -    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
> +    BUG_ON(!is_idle_vcpu(current_on_cpu(cpu)));
>       cpumask_set_cpu(cpu, prv->idlers);
>   
>       spin_unlock_irqrestore(&prv->lock, flags);
> @@ -730,7 +729,7 @@ csched_vcpu_sleep(const struct scheduler
>   
>       BUG_ON( is_idle_vcpu(vc) );
>   
> -    if ( per_cpu(schedule_data, vc->processor).curr == vc )
> +    if ( current_on_cpu(vc->processor) == vc )
>           cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
>       else if ( __vcpu_on_runq(svc) )
>           __runq_remove(svc);
> @@ -744,7 +743,7 @@ csched_vcpu_wake(const struct scheduler
>   
>       BUG_ON( is_idle_vcpu(vc) );
>   
> -    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
> +    if ( unlikely(current_on_cpu(cpu) == vc) )
>       {
>           SCHED_STAT_CRANK(vcpu_wake_running);
>           return;
> @@ -1213,7 +1212,7 @@ static struct csched_vcpu *
>   csched_runq_steal(int peer_cpu, int cpu, int pri)
>   {
>       const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
> -    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
> +    const struct vcpu * const peer_vcpu = current_on_cpu(peer_cpu);
>       struct csched_vcpu *speer;
>       struct list_head *iter;
>       struct vcpu *vc;
> @@ -1502,7 +1501,7 @@ csched_dump_pcpu(const struct scheduler
>       printk("core=%s\n", cpustr);
>   
>       /* current VCPU */
> -    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
> +    svc = CSCHED_VCPU(current_on_cpu(cpu));
>       if ( svc )
>       {
>           printk("\trun: ");


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 19:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 19:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbCZ-0001Lm-JE; Fri, 14 Dec 2012 19:45:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbCX-0001Lh-QN
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 19:45:42 +0000
Received: from [85.158.138.51:38817] by server-11.bemta-3.messagelabs.com id
	B2/37-13335-0E18BC05; Fri, 14 Dec 2012 19:45:36 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355514325!22608744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23140 invoked from network); 14 Dec 2012 19:45:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 19:45:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,282,1355097600"; 
   d="scan'208";a="780433"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 19:45:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 14:45:24 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbCG-00035x-Ho;
	Fri, 14 Dec 2012 19:45:24 +0000
Message-ID: <50CB8068.7080404@eu.citrix.com>
Date: Fri, 14 Dec 2012 19:39:20 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<21b142498d4699921251.1355280773@Solace>
In-Reply-To: <21b142498d4699921251.1355280773@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3 of 6 v2] xen: sched_credit: use
 current_on_cpu() when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> current_on_cpu() was defined by the previous change; using it wherever
> appropriate throughout sched_credit.c makes the code easier to read.

Hmm, I hadn't read this patch when I commented about removing the macro 
from the first patch. :-)

I personally think that using vc->processor is better in that patch 
anyway; but using this macro elsewhere is probably fine.

I think, from a taste point of view, I would have put this patch, with 
the new definition, as the first patch in the series, and then had the 
second patch just use it.

I'll go back and respond to Jan's comment about the macro.

  -George

>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -231,7 +231,7 @@ static void burn_credits(struct csched_v
>       unsigned int credits;
>   
>       /* Assert svc is current */
> -    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
> +    ASSERT( svc == CSCHED_VCPU(current_on_cpu(svc->vcpu->processor)) );
>   
>       if ( (delta = now - svc->start_time) <= 0 )
>           return;
> @@ -249,8 +249,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
>   static inline void
>   __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
>   {
> -    struct csched_vcpu * const cur =
> -        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
> +    struct csched_vcpu * const cur = CSCHED_VCPU(current_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask, idle_mask;
>       int idlers_empty;
> @@ -387,7 +386,7 @@ csched_alloc_pdata(const struct schedule
>           per_cpu(schedule_data, cpu).sched_priv = spc;
>   
>       /* Start off idling... */
> -    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
> +    BUG_ON(!is_idle_vcpu(current_on_cpu(cpu)));
>       cpumask_set_cpu(cpu, prv->idlers);
>   
>       spin_unlock_irqrestore(&prv->lock, flags);
> @@ -730,7 +729,7 @@ csched_vcpu_sleep(const struct scheduler
>   
>       BUG_ON( is_idle_vcpu(vc) );
>   
> -    if ( per_cpu(schedule_data, vc->processor).curr == vc )
> +    if ( current_on_cpu(vc->processor) == vc )
>           cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
>       else if ( __vcpu_on_runq(svc) )
>           __runq_remove(svc);
> @@ -744,7 +743,7 @@ csched_vcpu_wake(const struct scheduler
>   
>       BUG_ON( is_idle_vcpu(vc) );
>   
> -    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
> +    if ( unlikely(current_on_cpu(cpu) == vc) )
>       {
>           SCHED_STAT_CRANK(vcpu_wake_running);
>           return;
> @@ -1213,7 +1212,7 @@ static struct csched_vcpu *
>   csched_runq_steal(int peer_cpu, int cpu, int pri)
>   {
>       const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
> -    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
> +    const struct vcpu * const peer_vcpu = current_on_cpu(peer_cpu);
>       struct csched_vcpu *speer;
>       struct list_head *iter;
>       struct vcpu *vc;
> @@ -1502,7 +1501,7 @@ csched_dump_pcpu(const struct scheduler
>       printk("core=%s\n", cpustr);
>   
>       /* current VCPU */
> -    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
> +    svc = CSCHED_VCPU(current_on_cpu(cpu));
>       if ( svc )
>       {
>           printk("\trun: ");


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 19:57:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 19:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbNR-0001X8-PG; Fri, 14 Dec 2012 19:56:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbNQ-0001X3-5U
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 19:56:56 +0000
Received: from [85.158.139.211:12988] by server-12.bemta-5.messagelabs.com id
	30/51-02275-7848BC05; Fri, 14 Dec 2012 19:56:55 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355515013!19103160!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxODk3MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4527 invoked from network); 14 Dec 2012 19:56:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 19:56:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; 
   d="scan'208";a="782267"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 19:56:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 14:56:52 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbNM-0003FV-6E;
	Fri, 14 Dec 2012 19:56:52 +0000
Message-ID: <50CB8318.6050807@eu.citrix.com>
Date: Fri, 14 Dec 2012 19:50:48 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
In-Reply-To: <50C864D402000078000AFDB0@nat28.tlf.novell.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 10:04, Jan Beulich wrote:
>>>> On 12.12.12 at 03:52, Dario Faggioli <dario.faggioli@citrix.com> wrote:
>> --- a/xen/common/sched_credit.c
>> +++ b/xen/common/sched_credit.c
>> @@ -59,6 +59,8 @@
>>   #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
>>   #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
>>   #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
>> +/* Is the first element of _cpu's runq its idle vcpu? */
>> +#define IS_RUNQ_IDLE(_cpu)  (is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
>>   
>>   
>>   /*
>> @@ -479,9 +481,14 @@ static int
>>        * distinct cores first and guarantees we don't do something stupid
>>        * like run two VCPUs on co-hyperthreads while there are idle cores
>>        * or sockets.
>> +     *
>> +     * Notice that, when computing the "idleness" of cpu, we may want to
>> +     * discount vc. That is, iff vc is the currently running and the only
>> +     * runnable vcpu on cpu, we add cpu to the idlers.
>>        */
>>       cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
>> -    cpumask_set_cpu(cpu, &idlers);
>> +    if ( current_on_cpu(cpu) == vc && IS_RUNQ_IDLE(cpu) )
>> +        cpumask_set_cpu(cpu, &idlers);
>>       cpumask_and(&cpus, &cpus, &idlers);
>>       cpumask_clear_cpu(cpu, &cpus);
>>   
>> @@ -489,7 +496,7 @@ static int
>>       {
>>           cpumask_t cpu_idlers;
>>           cpumask_t nxt_idlers;
>> -        int nxt, weight_cpu, weight_nxt;
>> +        int nxt, nr_idlers_cpu, nr_idlers_nxt;
>>           int migrate_factor;
>>   
>>           nxt = cpumask_cycle(cpu, &cpus);
>> @@ -513,12 +520,12 @@ static int
>>               cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
>>           }
>>   
>> -        weight_cpu = cpumask_weight(&cpu_idlers);
>> -        weight_nxt = cpumask_weight(&nxt_idlers);
>> +        nr_idlers_cpu = cpumask_weight(&cpu_idlers);
>> +        nr_idlers_nxt = cpumask_weight(&nxt_idlers);
>>           /* smt_power_savings: consolidate work rather than spreading it */
>>           if ( sched_smt_power_savings ?
>> -             weight_cpu > weight_nxt :
>> -             weight_cpu * migrate_factor < weight_nxt )
>> +             nr_idlers_cpu > nr_idlers_nxt :
>> +             nr_idlers_cpu * migrate_factor < nr_idlers_nxt )
>>           {
>>               cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
>>               spc = CSCHED_PCPU(nxt);
> Despite you mentioning this in the description, these last two hunks
> are, afaict, only renaming variables (and that's even debatable, as
> the current names aren't really misleading imo), and hence I don't
> think belong in a patch that clearly has the potential for causing
> (performance) regressions.
>
> That said - I don't think it will (and even more, I'm agreeable to the
> change done).
>
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -396,6 +396,9 @@ extern struct vcpu *idle_vcpu[NR_CPUS];
>>   #define is_idle_domain(d) ((d)->domain_id == DOMID_IDLE)
>>   #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))
>>   
>> +#define current_on_cpu(_c) \
>> +  ( (per_cpu(schedule_data, _c).curr) )
>> +
> This, imo, really belongs into sched-if.h.

Hmm, it looks like there are a number of things that could live in 
either sched-if.h or sched.h; but I think this one probably most closely 
links with things like vcpu_is_runnable() and cpu_is_haltable(), both of 
which are in sched.h; so sched.h is where I'd put it.

> Plus - what's the point of double parentheses, when in fact none
> at all would be needed?
>
> And finally, why "_c" and not just "c"?

I think the underscore is pretty standard in macros.

There's certainly no need for double parentheses though.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:04:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:04:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbUo-0001nW-0s; Fri, 14 Dec 2012 20:04:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbUm-0001nR-Ms
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 20:04:32 +0000
Received: from [85.158.137.99:27836] by server-10.bemta-3.messagelabs.com id
	85/E6-07616-B468BC05; Fri, 14 Dec 2012 20:04:27 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355515465!16293449!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9043 invoked from network); 14 Dec 2012 20:04:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 20:04:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; 
   d="scan'208";a="732454"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 20:03:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 15:03:25 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbTh-0003LQ-Ct;
	Fri, 14 Dec 2012 20:03:25 +0000
Message-ID: <50CB84A1.1060205@eu.citrix.com>
Date: Fri, 14 Dec 2012 19:57:21 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<7a199dea34425e890b31.1355280774@Solace>
In-Reply-To: <7a199dea34425e890b31.1355280774@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 4 of 6 v2] xen: tracing: report where a VCPU
	wakes up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> When looking at traces, it turns out to be useful to know where a
> waking-up VCPU is being queued. Yes, that is always the CPU where
> it ran last, but that information can well be lost in past trace
> records!

When you say "lost in past trace records", do you primarily mean that 
the records themselves have been lost (due to the per-cpu trace buffers 
filling up), or do you mean that it may be way way back and you don't 
want to go back and find it?

If the latter, I think the best thing to do would be to just augment 
xenalyze to keep track of that information and print it when it sees the 
wake record.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:07:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbX6-0001t1-IC; Fri, 14 Dec 2012 20:06:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbX5-0001su-Aw
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 20:06:55 +0000
Received: from [85.158.139.211:40006] by server-9.bemta-5.messagelabs.com id
	D1/35-10690-ED68BC05; Fri, 14 Dec 2012 20:06:54 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355515610!20593422!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16022 invoked from network); 14 Dec 2012 20:06:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 20:06:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; 
   d="scan'208";a="733010"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 20:06:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 15:06:38 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbWn-0003OD-TM;
	Fri, 14 Dec 2012 20:06:37 +0000
Message-ID: <50CB8562.8070706@eu.citrix.com>
Date: Fri, 14 Dec 2012 20:00:34 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<68004a57d91a9bf8b371.1355280775@Solace>
In-Reply-To: <68004a57d91a9bf8b371.1355280775@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 5 of 6 v2] xen: tracing: introduce
 per-scheduler trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> Make it possible to create scheduler-specific trace records, within each
> scheduler, without worrying about event ID overlap, and without giving up
> the ability to identify them unambiguously. The latter is useful since,
> thanks to cpupools, more than one scheduler can be running at the same
> time.
>
> The event ID is 12 bits, and this change uses the upper 3 of them for
> the 'scheduler ID'. This means we're limited to 8 schedulers and to
> 512 scheduler specific tracing events. Both seem reasonable limitations
> as of now.
>
> This also converts the existing credit2 tracing (the only scheduler
> generating tracing events up to now) to the new system.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> Changes from v1:
>   * The event ID generation macro is now called `TRC_SCHED_CLASS_EVT()',
>     and has been generalized and put in trace.h, as suggested.
>   * The handling of per-scheduler tracing IDs and masks has been
>     restructured, properly naming "ID" the numerical identifiers
>     and "MASK" the bitmasks, as requested.
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -29,18 +29,22 @@
>   #define d2printk(x...)
>   //#define d2printk printk
>   
> -#define TRC_CSCHED2_TICK        TRC_SCHED_CLASS + 1
> -#define TRC_CSCHED2_RUNQ_POS    TRC_SCHED_CLASS + 2
> -#define TRC_CSCHED2_CREDIT_BURN TRC_SCHED_CLASS + 3
> -#define TRC_CSCHED2_CREDIT_ADD  TRC_SCHED_CLASS + 4
> -#define TRC_CSCHED2_TICKLE_CHECK TRC_SCHED_CLASS + 5
> -#define TRC_CSCHED2_TICKLE       TRC_SCHED_CLASS + 6
> -#define TRC_CSCHED2_CREDIT_RESET TRC_SCHED_CLASS + 7
> -#define TRC_CSCHED2_SCHED_TASKLET TRC_SCHED_CLASS + 8
> -#define TRC_CSCHED2_UPDATE_LOAD   TRC_SCHED_CLASS + 9
> -#define TRC_CSCHED2_RUNQ_ASSIGN   TRC_SCHED_CLASS + 10
> -#define TRC_CSCHED2_UPDATE_VCPU_LOAD   TRC_SCHED_CLASS + 11
> -#define TRC_CSCHED2_UPDATE_RUNQ_LOAD   TRC_SCHED_CLASS + 12
> +/*
> + * Credit2 tracing events ("only" 512 available!). Check
> + * include/public/trace.h for more details.
> + */
> +#define TRC_CSCHED2_TICK             TRC_SCHED_CLASS_EVT(CSCHED2, 1)
> +#define TRC_CSCHED2_RUNQ_POS         TRC_SCHED_CLASS_EVT(CSCHED2, 2)
> +#define TRC_CSCHED2_CREDIT_BURN      TRC_SCHED_CLASS_EVT(CSCHED2, 3)
> +#define TRC_CSCHED2_CREDIT_ADD       TRC_SCHED_CLASS_EVT(CSCHED2, 4)
> +#define TRC_CSCHED2_TICKLE_CHECK     TRC_SCHED_CLASS_EVT(CSCHED2, 5)
> +#define TRC_CSCHED2_TICKLE           TRC_SCHED_CLASS_EVT(CSCHED2, 6)
> +#define TRC_CSCHED2_CREDIT_RESET     TRC_SCHED_CLASS_EVT(CSCHED2, 7)
> +#define TRC_CSCHED2_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(CSCHED2, 8)
> +#define TRC_CSCHED2_UPDATE_LOAD      TRC_SCHED_CLASS_EVT(CSCHED2, 9)
> +#define TRC_CSCHED2_RUNQ_ASSIGN      TRC_SCHED_CLASS_EVT(CSCHED2, 10)
> +#define TRC_CSCHED2_UPDATE_VCPU_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
> +#define TRC_CSCHED2_UPDATE_RUNQ_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 12)
>   
>   /*
>    * WARNING: This is still in an experimental phase.  Status and work can be found at the
> diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
> --- a/xen/include/public/trace.h
> +++ b/xen/include/public/trace.h
> @@ -57,6 +57,32 @@
>   #define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
>   #define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
>   
> +/*
> + * The highest 3 bits of the last 12 bits of TRC_SCHED_CLASS above are
> + * reserved for encoding what scheduler produced the information. The
> + * actual event is encoded in the last 9 bits.
> + *
> + * This means we have 8 scheduling IDs available (which means at most 8
> + * schedulers generating events) and, in each scheduler, up to 512
> + * different events.
> + */
> +#define TRC_SCHED_ID_BITS 3
> +#define TRC_SCHED_ID_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
> +#define TRC_SCHED_ID_MASK (((1UL<<TRC_SCHED_ID_BITS) - 1) << TRC_SCHED_ID_SHIFT)
> +#define TRC_SCHED_EVT_MASK (~(TRC_SCHED_ID_MASK))
> +
> +/* Per-scheduler IDs, to identify scheduler specific events */
> +#define TRC_SCHED_CSCHED   0
> +#define TRC_SCHED_CSCHED2  1
> +#define TRC_SCHED_SEDF     2
> +#define TRC_SCHED_ARINC653 3
> +
> +/* Per-scheduler tracing */
> +#define TRC_SCHED_CLASS_EVT(_c, _e) \
> +  ( ( TRC_SCHED_CLASS | \
> +      ((TRC_SCHED_##_c << TRC_SCHED_ID_SHIFT) & TRC_SCHED_ID_MASK) ) + \
> +    (_e & TRC_SCHED_EVT_MASK) )
> +
>   /* Trace classes for Hardware */
>   #define TRC_HW_PM           0x00801000   /* Power management traces */
>   #define TRC_HW_IRQ          0x00802000   /* Traces relating to the handling of IRQs */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:10:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbZr-00021y-8I; Fri, 14 Dec 2012 20:09:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1TjbZp-00021m-4E
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 20:09:45 +0000
Received: from [85.158.143.35:38202] by server-3.bemta-4.messagelabs.com id
	79/75-18211-8878BC05; Fri, 14 Dec 2012 20:09:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355515777!12507997!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjMwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1898 invoked from network); 14 Dec 2012 20:09:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Dec 2012 20:09:40 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBEK8d3l021509
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 14 Dec 2012 20:08:40 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBEK8bwU022311
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 14 Dec 2012 20:08:38 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBEK8bll016136; Fri, 14 Dec 2012 14:08:37 -0600
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 14 Dec 2012 12:08:37 -0800
Date: Fri, 14 Dec 2012 12:08:36 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121214120836.6ec4ad4a@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Dec 2012 14:25:16 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Thu, 13 Dec 2012, Jan Beulich wrote:
> > >>> On 13.12.12 at 13:19, Stefano Stabellini
> > >>> <stefano.stabellini@eu.citrix.com>
> > wrote:
> > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
> > >> >>> <mukesh.rathor@oracle.com> wrote:
> 
> Actually I think that you might be right: just looking at the code it
> seems that the mask bits get written to the table once as part of the
> initialization process:
> 
> pci_enable_msix -> msix_capability_init -> msix_program_entries
> 
> Unfortunately msix_program_entries is called a few lines after
> arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
> as a pirq.
> However, after that is done, all masking/unmasking goes through
> irq_mask, which we handle properly by masking/unmasking the
> corresponding event channels.
> 
> 
> Possible solutions off the top of my head:
> 
> - in msix_program_entries instead of writing to the table directly
> (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); 
> 
> - replace msix_program_entries with arch_msix_program_entries, but it
> would probably be unpopular.


Can you or Jan or somebody please take that over? I can focus on other
PVH things then and try to get a patch in asap.

Thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:11:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbbN-000288-PS; Fri, 14 Dec 2012 20:11:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbbM-00027x-CW
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 20:11:20 +0000
Received: from [85.158.143.99:23987] by server-2.bemta-4.messagelabs.com id
	C8/FF-30861-7E78BC05; Fri, 14 Dec 2012 20:11:19 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355515876!22519310!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13442 invoked from network); 14 Dec 2012 20:11:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 20:11:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; 
   d="scan'208";a="733561"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 20:11:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 15:11:15 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbbH-0003SJ-Cr;
	Fri, 14 Dec 2012 20:11:15 +0000
Message-ID: <50CB8677.9050001@eu.citrix.com>
Date: Fri, 14 Dec 2012 20:05:11 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<036a3bb938a550f2ee0c.1355280776@Solace>
In-Reply-To: <036a3bb938a550f2ee0c.1355280776@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6 of 6 v2] xen: sched_credit: add some
	tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> About tickling, and PCPU selection.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Changes from v1:
>   * Dummy `struct d {}', accommodating `cpu' only, removed
>     in favour of the much more readable `trace_var(..., sizeof(cpu), &cpu)',
>     as suggested.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -21,6 +21,7 @@
>   #include <asm/atomic.h>
>   #include <xen/errno.h>
>   #include <xen/keyhandler.h>
> +#include <xen/trace.h>
>   
>   
>   /*
> @@ -97,6 +98,18 @@
>   
>   
>   /*
> + * Credit tracing events ("only" 512 available!). Check
> + * include/public/trace.h for more details.
> + */
> +#define TRC_CSCHED_SCHED_TASKLET TRC_SCHED_CLASS_EVT(CSCHED, 1)
> +#define TRC_CSCHED_ACCOUNT_START TRC_SCHED_CLASS_EVT(CSCHED, 2)
> +#define TRC_CSCHED_ACCOUNT_STOP  TRC_SCHED_CLASS_EVT(CSCHED, 3)
> +#define TRC_CSCHED_STOLEN_VCPU   TRC_SCHED_CLASS_EVT(CSCHED, 4)
> +#define TRC_CSCHED_PICKED_CPU    TRC_SCHED_CLASS_EVT(CSCHED, 5)
> +#define TRC_CSCHED_TICKLE        TRC_SCHED_CLASS_EVT(CSCHED, 6)
> +
> +
> +/*
From xen-devel-bounces@lists.xen.org Fri Dec 14 20:11:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbbN-000288-PS; Fri, 14 Dec 2012 20:11:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TjbbM-00027x-CW
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 20:11:20 +0000
Received: from [85.158.143.99:23987] by server-2.bemta-4.messagelabs.com id
	C8/FF-30861-7E78BC05; Fri, 14 Dec 2012 20:11:19 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355515876!22519310!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTIxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13442 invoked from network); 14 Dec 2012 20:11:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 20:11:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; 
   d="scan'208";a="733561"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	14 Dec 2012 20:11:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 14 Dec 2012 15:11:15 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TjbbH-0003SJ-Cr;
	Fri, 14 Dec 2012 20:11:15 +0000
Message-ID: <50CB8677.9050001@eu.citrix.com>
Date: Fri, 14 Dec 2012 20:05:11 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355280770@Solace>
	<036a3bb938a550f2ee0c.1355280776@Solace>
In-Reply-To: <036a3bb938a550f2ee0c.1355280776@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6 of 6 v2] xen: sched_credit: add some
	tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/12/12 02:52, Dario Faggioli wrote:
> About tickling, and PCPU selection.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Changes from v1:
>   * Dummy `struct d {}', accommodating `cpu' only, removed
>     in favour of the much more readable `trace_var(..., sizeof(cpu), &cpu)',
>     as suggested.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -21,6 +21,7 @@
>   #include <asm/atomic.h>
>   #include <xen/errno.h>
>   #include <xen/keyhandler.h>
> +#include <xen/trace.h>
>   
>   
>   /*
> @@ -97,6 +98,18 @@
>   
>   
>   /*
> + * Credit tracing events ("only" 512 available!). Check
> + * include/public/trace.h for more details.
> + */
> +#define TRC_CSCHED_SCHED_TASKLET TRC_SCHED_CLASS_EVT(CSCHED, 1)
> +#define TRC_CSCHED_ACCOUNT_START TRC_SCHED_CLASS_EVT(CSCHED, 2)
> +#define TRC_CSCHED_ACCOUNT_STOP  TRC_SCHED_CLASS_EVT(CSCHED, 3)
> +#define TRC_CSCHED_STOLEN_VCPU   TRC_SCHED_CLASS_EVT(CSCHED, 4)
> +#define TRC_CSCHED_PICKED_CPU    TRC_SCHED_CLASS_EVT(CSCHED, 5)
> +#define TRC_CSCHED_TICKLE        TRC_SCHED_CLASS_EVT(CSCHED, 6)
> +
> +
> +/*
>    * Boot parameters
>    */
>   static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
> @@ -315,9 +328,18 @@ static inline void
>           }
>       }
>   
> -    /* Send scheduler interrupts to designated CPUs */
>       if ( !cpumask_empty(&mask) )
> +    {
> +        if ( unlikely(tb_init_done) )
> +        {
> +            /* Avoid TRACE_*: saves checking !tb_init_done each step */
> +            for_each_cpu(cpu, &mask)
> +                trace_var(TRC_CSCHED_TICKLE, 0, sizeof(cpu), &cpu);
> +        }

Hmm, probably should have pointed this out before, but trace_var() is a 
static inline which checks tb_init_done -- you want __trace_var(). :-)
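
In case it helps, here is a stand-alone sketch of the distinction (the
names mirror Xen's trace.h, but the bodies are stand-ins and the real
signatures may differ in detail):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for Xen's tracing layer: per the comment
 * above, trace_var() is a static inline that tests tb_init_done and
 * then calls __trace_var(), the unconditional emitter. */
static bool tb_init_done;
static int records_emitted;

/* Plays the role of __trace_var(): always emits a record. */
static void raw_trace_var(unsigned event, bool cycles,
                          size_t extra, const void *extra_data)
{
    (void)event; (void)cycles; (void)extra; (void)extra_data;
    records_emitted++;
}

/* Plays the role of trace_var(): re-checks the flag on every call. */
static inline void checked_trace_var(unsigned event, bool cycles,
                                     size_t extra, const void *extra_data)
{
    if (tb_init_done)
        raw_trace_var(event, cycles, extra, extra_data);
}

/* The pattern the patch is aiming for: hoist the tb_init_done test out
 * of the per-CPU loop and call the unconditional variant inside it, so
 * the flag is tested once rather than once per CPU. */
static void tickle_trace(const int *cpus, int ncpus)
{
    if (tb_init_done) {
        for (int i = 0; i < ncpus; i++)
            raw_trace_var(/* TRC_CSCHED_TICKLE */ 6, false,
                          sizeof(cpus[i]), &cpus[i]);
    }
}
```

Using checked_trace_var() inside the guarded loop would test the flag
twice per CPU, which is exactly what the hand-rolled loop avoids.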

The rest still looks good.

  -George

> +
> +        /* Send scheduler interrupts to designated CPUs */
>           cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
> +    }
>   }
>   
>   static void
> @@ -554,6 +576,8 @@ static int
>       if ( commit && spc )
>          spc->idle_bias = cpu;
>   
> +    TRACE_3D(TRC_CSCHED_PICKED_CPU, vc->domain->domain_id, vc->vcpu_id, cpu);
> +
>       return cpu;
>   }
>   
> @@ -586,6 +610,9 @@ static inline void
>           }
>       }
>   
> +    TRACE_3D(TRC_CSCHED_ACCOUNT_START, sdom->dom->domain_id,
> +             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
> +
>       spin_unlock_irqrestore(&prv->lock, flags);
>   }
>   
> @@ -608,6 +635,9 @@ static inline void
>       {
>           list_del_init(&sdom->active_sdom_elem);
>       }
> +
> +    TRACE_3D(TRC_CSCHED_ACCOUNT_STOP, sdom->dom->domain_id,
> +             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
>   }
>   
>   static void
> @@ -1241,6 +1271,8 @@ csched_runq_steal(int peer_cpu, int cpu,
>               if (__csched_vcpu_is_migrateable(vc, cpu))
>               {
>                   /* We got a candidate. Grab it! */
> +                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
> +                         vc->domain->domain_id, vc->vcpu_id);
>                   SCHED_VCPU_STAT_CRANK(speer, migrate_q);
>                   SCHED_STAT_CRANK(migrate_queued);
>                   WARN_ON(vc->is_urgent);
> @@ -1401,6 +1433,7 @@ csched_schedule(
>       /* Tasklet work (which runs in idle VCPU context) overrides all else. */
>       if ( tasklet_work_scheduled )
>       {
> +        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
>           snext = CSCHED_VCPU(idle_vcpu[cpu]);
>           snext->pri = CSCHED_PRI_TS_BOOST;
>       }


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:13:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjbdZ-0002KI-BC; Fri, 14 Dec 2012 20:13:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TjbdY-0002K9-3J
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 20:13:36 +0000
Received: from [85.158.143.35:45562] by server-2.bemta-4.messagelabs.com id
	43/B0-30861-F688BC05; Fri, 14 Dec 2012 20:13:35 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355515993!10653723!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24316 invoked from network); 14 Dec 2012 20:13:15 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-10.tower-21.messagelabs.com with SMTP;
	14 Dec 2012 20:13:15 -0000
X-TM-IMSS-Message-ID: <1e482ab600046ce2@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 1e482ab600046ce2 ;
	Fri, 14 Dec 2012 15:12:39 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBEKCeAh025340; 
	Fri, 14 Dec 2012 15:12:40 -0500
Message-ID: <50CB8838.1070009@tycho.nsa.gov>
Date: Fri, 14 Dec 2012 15:12:40 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <1355169347-25917-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355169347-25917-10-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1355169347-25917-10-git-send-email-dgdegra@tycho.nsa.gov>
Cc: matthew.fioravante@jhuapl.edu, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 09/14] stubdom/vtpm: Add PCR pass-through to
 hardware TPM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/10/2012 02:55 PM, Daniel De Graaf wrote:
> This allows the hardware TPM's PCRs to be accessed from a vTPM for
> debugging and as a simple alternative to a deep quote in situations
> where the integrity of the vTPM's own TCB is not in question.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  stubdom/Makefile                   |  1 +
>  stubdom/vtpm-pcr-passthrough.patch | 73 ++++++++++++++++++++++++++++++++++++++
>  stubdom/vtpm/vtpm_cmd.c            | 38 ++++++++++++++++++++
>  3 files changed, 112 insertions(+)
>  create mode 100644 stubdom/vtpm-pcr-passthrough.patch

This patch is incomplete, so don't apply it: seal operations can't use the
extra PCRs, and it's likely other operations such as nvram have the same
problem. It's not a dependency for any other patch, and an alternative
implementation should end up being more configurable anyway.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:16:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjbgb-0002VS-Uk; Fri, 14 Dec 2012 20:16:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tjbga-0002VI-78
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 20:16:44 +0000
Received: from [85.158.143.35:64571] by server-2.bemta-4.messagelabs.com id
	F8/02-30861-B298BC05; Fri, 14 Dec 2012 20:16:43 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355516199!12508358!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11723 invoked from network); 14 Dec 2012 20:16:39 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-7.tower-21.messagelabs.com with SMTP;
	14 Dec 2012 20:16:39 -0000
X-TM-IMSS-Message-ID: <1e4baec400046d4e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 1e4baec400046d4e ;
	Fri, 14 Dec 2012 15:16:29 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBEKGUMZ025545; 
	Fri, 14 Dec 2012 15:16:30 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Fri, 14 Dec 2012 15:16:29 -0500
Message-Id: <1355516189-30318-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-2-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-2-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v3.2] mini-os/tpm{back,
	front}: Change shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the vTPM shared page ABI from a copy of the Xen network
interface to a single-page interface that better reflects the expected
behavior of a TPM: only a single request packet can be sent at any given
time, and every packet sent generates a single response packet. This
protocol change should also increase efficiency as it avoids mapping and
unmapping grants when possible. The vtpm xenbus device has been renamed
to "vtpm2" to avoid conflicts with existing (xen-patched) kernels
supporting the old interface.

While the contents of the shared page have been defined to allow packets
larger than a single page (actually 4088 bytes) by allowing the client
to add extra grant references, the mapping of these extra references has
not been implemented; a feature node in xenstore may be used in the
future to indicate full support for the multi-page protocol. Most uses
of the TPM should not require this feature.
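
The single-page layout described above can be sketched as follows. The
field order here is an assumption for illustration; only the arithmetic
comes from the text: an 8-byte header, 4 bytes per extra grant
reference, and hence a 4088-byte payload limit when no extra pages are
used.

```c
#include <stdint.h>
#include <stddef.h>

#define VTPM_PAGE_SIZE 4096u  /* PAGE_SIZE on x86 */

/* Hypothetical sketch of the shared-page header. */
struct vtpm_shared_page {
    uint32_t length;         /* length of the request/response body */
    uint8_t  state;          /* 0 = idle, nonzero = request pending */
    uint8_t  locality;       /* TPM locality requested by the frontend */
    uint8_t  pad;
    uint8_t  nr_extra_pages; /* grant refs appended after the header */
    /* uint32_t extra_refs[]; then the command payload */
};

/* Offset of the payload within the page: the header plus one 4-byte
 * grant reference per extra page. */
static inline unsigned payload_offset(uint8_t nr_extra_pages)
{
    return (unsigned)sizeof(struct vtpm_shared_page)
         + 4u * nr_extra_pages;
}

/* The backend's bounds check: reject commands whose payload would run
 * off the end of the single mapped page. */
static inline int payload_fits(uint8_t nr_extra_pages, uint32_t length)
{
    unsigned off = payload_offset(nr_extra_pages);
    return off <= VTPM_PAGE_SIZE && off + length <= VTPM_PAGE_SIZE;
}
```

With no extra pages the payload starts at byte 8, giving the 4088-byte
single-page limit mentioned above.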

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

---

Changes from v3: the change from "vtpm" to "vtpm2" in libxl was
incomplete, causing the vtpm-attach and -detach commands to fail.

 extras/mini-os/include/tpmback.h     |   1 +
 extras/mini-os/include/tpmfront.h    |   7 +-
 extras/mini-os/tpmback.c             | 135 +++++++++++++----------------------
 extras/mini-os/tpmfront.c            | 119 ++++++++++++------------------
 tools/libxl/libxl.c                  |  12 ++--
 tools/libxl/libxl_types_internal.idl |   2 +-
 xen/include/public/io/tpmif.h        |  45 +++---------
 7 files changed, 117 insertions(+), 204 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index ff86732..ec9eda4 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -43,6 +43,7 @@
 
 struct tpmcmd {
    domid_t domid;		/* Domid of the frontend */
+   uint8_t locality;    /* Locality requested by the frontend */
    unsigned int handle;	/* Handle of the frontend */
    unsigned char uuid[16];			/* uuid of the tpm interface */
 
diff --git a/extras/mini-os/include/tpmfront.h b/extras/mini-os/include/tpmfront.h
index fd2cb17..a0c7c4d 100644
--- a/extras/mini-os/include/tpmfront.h
+++ b/extras/mini-os/include/tpmfront.h
@@ -37,9 +37,7 @@ struct tpmfront_dev {
    grant_ref_t ring_ref;
    evtchn_port_t evtchn;
 
-   tpmif_tx_interface_t* tx;
-
-   void** pages;
+   vtpm_shared_page_t *page;
 
    domid_t bedomid;
    char* nodename;
@@ -77,6 +75,9 @@ void shutdown_tpmfront(struct tpmfront_dev* dev);
  * */
 int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t** resp, size_t* resplen);
 
+/* Set the locality used for communicating with a vTPM */
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality);
+
 #ifdef HAVE_LIBC
 #include <sys/stat.h>
 /* POSIX IO functions:
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 658fed1..50f8a5d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -86,10 +86,7 @@ struct tpmif {
    evtchn_port_t evtchn;
 
    /* Shared page */
-   tpmif_tx_interface_t* tx;
-
-   /* pointer to TPMIF_RX_RING_SIZE pages */
-   void** pages;
+   vtpm_shared_page_t *page;
 
    enum xenbus_state state;
    enum { DISCONNECTED, DISCONNECTING, CONNECTED } status;
@@ -312,7 +309,6 @@ int insert_tpmif(tpmif_t* tpmif)
       remove_tpmif(tpmif);
       goto error_post_irq;
    }
-
    return 0;
 error:
    local_irq_restore(flags);
@@ -336,7 +332,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    if (tpmif->state == state)
       return 0;
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
 
    if((err = xenbus_read(XBT_NIL, path, &value))) {
       TPMBACK_ERR("Unable to read backend state %s, error was %s\n", path, err);
@@ -362,7 +358,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    }
 
    /*update xenstore*/
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "state", "%u", state))) {
       TPMBACK_ERR("Error writing to xenstore %s, error was %s new state=%d\n", path, err, state);
       free(err);
@@ -386,8 +382,7 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
    tpmif->fe_state_path = NULL;
    tpmif->state = XenbusStateInitialising;
    tpmif->status = DISCONNECTED;
-   tpmif->tx = NULL;
-   tpmif->pages = NULL;
+   tpmif->page = NULL;
    tpmif->flags = 0;
    memset(tpmif->uuid, 0, sizeof(tpmif->uuid));
    return tpmif;
@@ -395,9 +390,6 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
 
 void __free_tpmif(tpmif_t* tpmif)
 {
-   if(tpmif->pages) {
-      free(tpmif->pages);
-   }
    if(tpmif->fe_path) {
       free(tpmif->fe_path);
    }
@@ -424,23 +416,17 @@ tpmif_t* new_tpmif(domid_t domid, unsigned int handle)
    tpmif = __init_tpmif(domid, handle);
 
    /* Get the uuid from xenstore */
-   snprintf(path, 512, "backend/vtpm/%u/%u/uuid", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/uuid", (unsigned int) domid, handle);
    if((!xenbus_read_uuid(path, tpmif->uuid))) {
       TPMBACK_ERR("Error reading %s\n", path);
       goto error;
    }
 
-   /* allocate pages to be used for shared mapping */
-   if((tpmif->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE)) == NULL) {
-      goto error;
-   }
-   memset(tpmif->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-
    if(tpmif_change_state(tpmif, XenbusStateInitWait)) {
       goto error;
    }
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/frontend", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/frontend", (unsigned int) domid, handle);
    if((err = xenbus_read(XBT_NIL, path, &tpmif->fe_path))) {
       TPMBACK_ERR("Error creating new tpm instance xenbus_read(%s), Error = %s", path, err);
       free(err);
@@ -486,7 +472,7 @@ void free_tpmif(tpmif_t* tpmif)
       tpmif->status = DISCONNECTING;
       mask_evtchn(tpmif->evtchn);
 
-      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1)) {
+      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1)) {
 	 TPMBACK_ERR("%u/%u Error occured while trying to unmap shared page\n", (unsigned int) tpmif->domid, tpmif->handle);
       }
 
@@ -508,7 +494,7 @@ void free_tpmif(tpmif_t* tpmif)
    schedule();
 
    /* Remove the old xenbus entries */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_rm(XBT_NIL, path))) {
       TPMBACK_ERR("Error cleaning up xenbus entries path=%s error=%s\n", path, err);
       free(err);
@@ -529,9 +515,10 @@ void free_tpmif(tpmif_t* tpmif)
 void tpmback_handler(evtchn_port_t port, struct pt_regs *regs, void *data)
 {
    tpmif_t* tpmif = (tpmif_t*) data;
-   tpmif_tx_request_t* tx = &tpmif->tx->ring[0].req;
-   /* Throw away 0 size events, these can trigger from event channel unmasking */
-   if(tx->size == 0)
+   vtpm_shared_page_t* pg = tpmif->page;
+
+   /* Only pay attention if the request is ready */
+   if (pg->state == 0)
       return;
 
    TPMBACK_DEBUG("EVENT CHANNEL FIRE %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
@@ -585,11 +572,10 @@ int connect_fe(tpmif_t* tpmif)
    free(value);
 
    domid = tpmif->domid;
-   if((tpmif->tx = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
+   if((tpmif->page = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
       TPMBACK_ERR("Failed to map grant reference %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
       return -1;
    }
-   memset(tpmif->tx, 0, PAGE_SIZE);
 
    /*Bind the event channel */
    if((evtchn_bind_interdomain(tpmif->domid, evtchn, tpmback_handler, tpmif, &tpmif->evtchn)))
@@ -600,7 +586,7 @@ int connect_fe(tpmif_t* tpmif)
    unmask_evtchn(tpmif->evtchn);
 
    /* Write the ready flag and change status to connected */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "ready", "%u", 1))) {
       TPMBACK_ERR("%u/%u Unable to write ready flag on connect_fe()\n", (unsigned int) tpmif->domid, tpmif->handle);
       free(err);
@@ -618,7 +604,7 @@ error_post_evtchn:
    mask_evtchn(tpmif->evtchn);
    unbind_evtchn(tpmif->evtchn);
 error_post_map:
-   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1);
+   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1);
    return -1;
 }
 
@@ -670,8 +656,8 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
    char* value;
    unsigned int udomid = 0;
    tpmif_t* tpmif;
-   /* First check for new frontends, this occurs when /backend/vtpm/<domid>/<handle> gets created. Note we what the sscanf to fail on the last %s */
-   if (sscanf(evstr, "backend/vtpm/%u/%u/%40s", &udomid, handle, cmd) == 2) {
> +   /* First check for new frontends, this occurs when /backend/vtpm2/<domid>/<handle> gets created. Note we want the sscanf to fail on the last %s */
+   if (sscanf(evstr, "backend/vtpm2/%u/%u/%40s", &udomid, handle, cmd) == 2) {
       *domid = udomid;
       /* Make sure the entry exists, if this event triggers because the entry dissapeared then ignore it */
       if((err = xenbus_read(XBT_NIL, evstr, &value))) {
@@ -685,7 +671,7 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
 	 return EV_NONE;
       }
       return EV_NEWFE;
-   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm/%u/%40s", &udomid, handle, cmd)) == 3) {
+   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm2/%u/%40s", &udomid, handle, cmd)) == 3) {
       *domid = udomid;
       if (!strcmp(cmd, "state"))
 	 return EV_STCHNG;
@@ -784,7 +770,7 @@ void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int))
 
 static void event_listener(void)
 {
-   const char* bepath = "backend/vtpm";
+   const char* bepath = "backend/vtpm2";
    char **path;
    char* err;
 
@@ -874,6 +860,7 @@ void shutdown_tpmback(void)
 inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, unsigned char uuid[16])
 {
    tpmcmd->domid = domid;
+   tpmcmd->locality = -1;
    tpmcmd->handle = handle;
    memcpy(tpmcmd->uuid, uuid, sizeof(tpmcmd->uuid));
    tpmcmd->req = NULL;
@@ -884,12 +871,12 @@ inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, un
 
 tpmcmd_t* get_request(tpmif_t* tpmif) {
    tpmcmd_t* cmd;
-   tpmif_tx_request_t* tx;
-   int offset;
-   int tocopy;
-   int i;
-   uint32_t domid;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
@@ -899,35 +886,22 @@ tpmcmd_t* get_request(tpmif_t* tpmif) {
    }
    init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->uuid);
 
-   tx = &tpmif->tx->ring[0].req;
-   cmd->req_len = tx->size;
+   shr = tpmif->page;
+   cmd->req_len = shr->length;
+   cmd->locality = shr->locality;
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->req_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
+   }
    /* Allocate the buffer */
    if(cmd->req_len) {
       if((cmd->req = malloc(cmd->req_len)) == NULL) {
 	 goto error;
       }
    }
-   /* Copy the bits from the shared pages */
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->req_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_READ)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during read!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->req_len - offset, PAGE_SIZE);
-      memcpy(&cmd->req[offset], tpmif->pages[i], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
-
-   }
+   /* Copy the bits from the shared page(s) */
+   memcpy(cmd->req, offset + (uint8_t*)shr, cmd->req_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Received Tpm Command from %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->req_len);
@@ -958,38 +932,24 @@ error:
 
 void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 {
-   tpmif_tx_request_t* tx;
-   int offset;
-   int i;
-   uint32_t domid;
-   int tocopy;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
> +   int i;
+#endif
 
    local_irq_save(flags);
 
-   tx = &tpmif->tx->ring[0].req;
-   tx->size = cmd->resp_len;
-
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->resp_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_WRITE)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during write!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->resp_len - offset, PAGE_SIZE);
-      memcpy(tpmif->pages[i], &cmd->resp[offset], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
+   shr = tpmif->page;
+   shr->length = cmd->resp_len;
 
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->resp_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
    }
+   memcpy(offset + (uint8_t*)shr, cmd->resp, cmd->resp_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Sent response to %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->resp_len);
@@ -1003,6 +963,7 @@ void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 #endif
    /* clear the ready flag and send the event channel notice to the frontend */
    tpmif_req_finished(tpmif);
+   shr->state = 0;
    notify_remote_via_evtchn(tpmif->evtchn);
 error:
    local_irq_restore(flags);
diff --git a/extras/mini-os/tpmfront.c b/extras/mini-os/tpmfront.c
index 0218d7f..ac9ba42 100644
--- a/extras/mini-os/tpmfront.c
+++ b/extras/mini-os/tpmfront.c
@@ -176,7 +176,7 @@ static int wait_for_backend_state_changed(struct tpmfront_dev* dev, XenbusState
 	 ret = wait_for_backend_closed(&events, path);
 	 break;
       default:
-	 break;
+         TPMFRONT_ERR("Bad wait state %d, ignoring\n", state);
    }
 
    if((err = xenbus_unwatch_path_token(XBT_NIL, path, path))) {
@@ -190,13 +190,13 @@ static int tpmfront_connect(struct tpmfront_dev* dev)
 {
    char* err;
    /* Create shared page */
-   dev->tx = (tpmif_tx_interface_t*) alloc_page();
-   if(dev->tx == NULL) {
+   dev->page = (vtpm_shared_page_t*) alloc_page();
+   if(dev->page == NULL) {
       TPMFRONT_ERR("Unable to allocate page for shared memory\n");
       goto error;
    }
-   memset(dev->tx, 0, PAGE_SIZE);
-   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->tx), 0);
+   memset(dev->page, 0, PAGE_SIZE);
+   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->page), 0);
    TPMFRONT_DEBUG("grant ref is %lu\n", (unsigned long) dev->ring_ref);
 
    /*Create event channel */
@@ -228,7 +228,7 @@ error_postevtchn:
       unbind_evtchn(dev->evtchn);
 error_postmap:
       gnttab_end_access(dev->ring_ref);
-      free_page(dev->tx);
+      free_page(dev->page);
 error:
    return -1;
 }
@@ -240,7 +240,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    char path[512];
    char* value, *err;
    unsigned long long ival;
-   int i;
 
    printk("============= Init TPM Front ================\n");
 
@@ -251,7 +250,7 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    dev->fd = -1;
 #endif
 
-   nodename = _nodename ? _nodename : "device/vtpm/0";
+   nodename = _nodename ? _nodename : "device/vtpm2/0";
    dev->nodename = strdup(nodename);
 
    init_waitqueue_head(&dev->waitq);
@@ -289,19 +288,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
       goto error;
    }
 
-   /* Allocate pages that will contain the messages */
-   dev->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE);
-   if(dev->pages == NULL) {
-      goto error;
-   }
-   memset(dev->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-   for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-      dev->pages[i] = (void*)alloc_page();
-      if(dev->pages[i] == NULL) {
-	 goto error;
-      }
-   }
-
    TPMFRONT_LOG("Initialization Completed successfully\n");
 
    return dev;
@@ -314,8 +300,6 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 {
    char* err;
    char path[512];
-   int i;
-   tpmif_tx_request_t* tx;
    if(dev == NULL) {
       return;
    }
@@ -349,27 +333,12 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
       /* Wait for the backend to close and unmap shared pages, ignore any errors */
       wait_for_backend_state_changed(dev, XenbusStateClosed);
 
-      /* Cleanup any shared pages */
-      if(dev->pages) {
-	 for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-	    if(dev->pages[i]) {
-	       tx = &dev->tx->ring[i].req;
-	       if(tx->ref != 0) {
-		  gnttab_end_access(tx->ref);
-	       }
-	       free_page(dev->pages[i]);
-	    }
-	 }
-	 free(dev->pages);
-      }
-
       /* Close event channel and unmap shared page */
       mask_evtchn(dev->evtchn);
       unbind_evtchn(dev->evtchn);
       gnttab_end_access(dev->ring_ref);
 
-      free_page(dev->tx);
-
+      free_page(dev->page);
    }
 
    /* Cleanup memory usage */
@@ -387,13 +356,17 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 
 int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 {
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
    int i;
-   tpmif_tx_request_t* tx = NULL;
+#endif
    /* Error Checking */
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to send message through disconnected frontend\n");
       return -1;
    }
+   shr = dev->page;
 
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Sending Msg to backend size=%u", (unsigned int) length);
@@ -407,19 +380,16 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 #endif
 
    /* Copy to shared pages now */
-   for(i = 0; length > 0 && i < TPMIF_TX_RING_SIZE; ++i) {
-      /* Share the page */
-      tx = &dev->tx->ring[i].req;
-      tx->unused = 0;
-      tx->addr = virt_to_mach(dev->pages[i]);
-      tx->ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->pages[i]), 0);
-      /* Copy the bits to the page */
-      tx->size = length > PAGE_SIZE ? PAGE_SIZE : length;
-      memcpy(dev->pages[i], &msg[i * PAGE_SIZE], tx->size);
-
-      /* Update counters */
-      length -= tx->size;
+   offset = sizeof(*shr);
+   if (length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Message too long for shared page\n");
+      return -1;
    }
+   memcpy(offset + (uint8_t*)shr, msg, length);
+   shr->length = length;
+   barrier();
+   shr->state = 1;
+
    dev->waiting = 1;
    dev->resplen = 0;
 #ifdef HAVE_LIBC
@@ -434,44 +404,41 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 }
 int tpmfront_recv(struct tpmfront_dev* dev, uint8_t** msg, size_t *length)
 {
-   tpmif_tx_request_t* tx;
-   int i;
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
+   int i;
+#endif
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to receive message from disconnected frontend\n");
       return -1;
    }
    /*Wait for the response */
    wait_event(dev->waitq, (!dev->waiting));
+   shr = dev->page;
+   if (shr->state != 0)
+      goto quit;
 
    /* Initialize */
    *msg = NULL;
-   *length = 0;
+   *length = shr->length;
+   offset = sizeof(*shr);
 
-   /* special case, just quit */
-   tx = &dev->tx->ring[0].req;
-   if(tx->size == 0 ) {
-       goto quit;
-   }
-   /* Get the total size */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      *length += tx->size;
+   if (*length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Reply too long for shared page\n");
+      return -1;
    }
+
    /* Alloc the buffer */
    if(dev->respbuf) {
       free(dev->respbuf);
    }
    *msg = dev->respbuf = malloc(*length);
    dev->resplen = *length;
+
    /* Copy the bits */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      memcpy(&(*msg)[i * PAGE_SIZE], dev->pages[i], tx->size);
-      gnttab_end_access(tx->ref);
-      tx->ref = 0;
-   }
+   memcpy(*msg, offset + (uint8_t*)shr, *length);
+
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Received response from backend size=%u", (unsigned int) *length);
    for(i = 0; i < *length; ++i) {
@@ -504,6 +471,14 @@ int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t*
    return 0;
 }
 
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality)
+{
+   if (!dev || !dev->page)
+      return -1;
+   dev->page->locality = locality;
+   return 0;
+}
+
 #ifdef HAVE_LIBC
 #include <errno.h>
 int tpmfront_open(struct tpmfront_dev* dev)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8d921bc..77a3836 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1725,10 +1725,10 @@ static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
 {
    device->backend_devid   = vtpm->devid;
    device->backend_domid   = vtpm->backend_domid;
-   device->backend_kind    = LIBXL__DEVICE_KIND_VTPM;
+   device->backend_kind    = LIBXL__DEVICE_KIND_VTPM2;
    device->devid           = vtpm->devid;
    device->domid           = domid;
-   device->kind            = LIBXL__DEVICE_KIND_VTPM;
+   device->kind            = LIBXL__DEVICE_KIND_VTPM2;
 
    return 0;
 }
@@ -1756,7 +1756,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
             goto out;
         }
         l = libxl__xs_directory(gc, XBT_NULL,
-              GCSPRINTF("%s/device/vtpm", dompath), &nb);
+              GCSPRINTF("%s/device/vtpm2", dompath), &nb);
         if(l == NULL || nb == 0) {
             vtpm->devid = 0;
         } else {
@@ -1815,7 +1815,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
 
     *num = 0;
 
-    fe_path = libxl__sprintf(gc, "%s/device/vtpm", libxl__xs_get_dompath(gc, domid));
+    fe_path = libxl__sprintf(gc, "%s/device/vtpm2", libxl__xs_get_dompath(gc, domid));
     dir = libxl__xs_directory(gc, XBT_NULL, fe_path, &ndirs);
     if(dir) {
        vtpms = malloc(sizeof(*vtpms) * ndirs);
@@ -1830,7 +1830,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
           vtpm->devid = atoi(*dir);
 
           tmp = libxl__xs_read(gc, XBT_NULL,
-                GCSPRINTF("%s/%s/backend_id",
+                GCSPRINTF("%s/%s/backend-id",
                    fe_path, *dir));
           vtpm->backend_domid = atoi(tmp);
 
@@ -1863,7 +1863,7 @@ int libxl_device_vtpm_getinfo(libxl_ctx *ctx,
     dompath = libxl__xs_get_dompath(gc, domid);
     vtpminfo->devid = vtpm->devid;
 
-    vtpmpath = GCSPRINTF("%s/device/vtpm/%d", dompath, vtpminfo->devid);
+    vtpmpath = GCSPRINTF("%s/device/vtpm2/%d", dompath, vtpminfo->devid);
     vtpminfo->backend = xs_read(ctx->xsh, XBT_NULL,
           GCSPRINTF("%s/backend", vtpmpath), NULL);
     if (!vtpminfo->backend) {
diff --git a/tools/libxl/libxl_types_internal.idl b/tools/libxl/libxl_types_internal.idl
index c40120e..7fd43ab 100644
--- a/tools/libxl/libxl_types_internal.idl
+++ b/tools/libxl/libxl_types_internal.idl
@@ -19,7 +19,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (5, "VFB"),
     (6, "VKBD"),
     (7, "CONSOLE"),
-    (8, "VTPM"),
+    (8, "VTPM2"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/xen/include/public/io/tpmif.h b/xen/include/public/io/tpmif.h
index 02ccdab..7c96530 100644
--- a/xen/include/public/io/tpmif.h
+++ b/xen/include/public/io/tpmif.h
@@ -1,7 +1,7 @@
 /******************************************************************************
  * tpmif.h
  *
- * TPM I/O interface for Xen guest OSes.
+ * TPM I/O interface for Xen guest OSes, v2
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to
@@ -21,48 +21,23 @@
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  *
- * Copyright (c) 2005, IBM Corporation
- *
- * Author: Stefan Berger, stefanb@us.ibm.com
- * Grant table support: Mahadevan Gomathisankaran
- *
- * This code has been derived from tools/libxc/xen/io/netif.h
- *
- * Copyright (c) 2003-2004, Keir Fraser
  */
 
 #ifndef __XEN_PUBLIC_IO_TPMIF_H__
 #define __XEN_PUBLIC_IO_TPMIF_H__
 
-#include "../grant_table.h"
+struct vtpm_shared_page {
+    uint32_t length;         /* request/response length in bytes */
 
-struct tpmif_tx_request {
-    unsigned long addr;   /* Machine address of packet.   */
-    grant_ref_t ref;      /* grant table access reference */
-    uint16_t unused;
-    uint16_t size;        /* Packet size in bytes.        */
-};
-typedef struct tpmif_tx_request tpmif_tx_request_t;
-
-/*
- * The TPMIF_TX_RING_SIZE defines the number of pages the
- * front-end and backend can exchange (= size of array).
- */
-typedef uint32_t TPMIF_RING_IDX;
-
-#define TPMIF_TX_RING_SIZE 1
-
-/* This structure must fit in a memory page. */
-
-struct tpmif_ring {
-    struct tpmif_tx_request req;
-};
-typedef struct tpmif_ring tpmif_ring_t;
+    uint8_t state;           /* 0 - response ready / idle
+                              * 1 - request ready / working */
+    uint8_t locality;        /* for the current request */
+    uint8_t pad;
 
-struct tpmif_tx_interface {
-    struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
+    uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
+    uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
 };
-typedef struct tpmif_tx_interface tpmif_tx_interface_t;
+typedef struct vtpm_shared_page vtpm_shared_page_t;
 
 #endif
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 20:16:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 20:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjbgb-0002VS-Uk; Fri, 14 Dec 2012 20:16:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Tjbga-0002VI-78
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 20:16:44 +0000
Received: from [85.158.143.35:64571] by server-2.bemta-4.messagelabs.com id
	F8/02-30861-B298BC05; Fri, 14 Dec 2012 20:16:43 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355516199!12508358!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11723 invoked from network); 14 Dec 2012 20:16:39 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-7.tower-21.messagelabs.com with SMTP;
	14 Dec 2012 20:16:39 -0000
X-TM-IMSS-Message-ID: <1e4baec400046d4e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 1e4baec400046d4e ;
	Fri, 14 Dec 2012 15:16:29 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBEKGUMZ025545; 
	Fri, 14 Dec 2012 15:16:30 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: matthew.fioravante@jhuapl.edu
Date: Fri, 14 Dec 2012 15:16:29 -0500
Message-Id: <1355516189-30318-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355169347-25917-2-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355169347-25917-2-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v3.2] mini-os/tpm{back,
	front}: Change shared page ABI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This changes the vTPM shared page ABI from a copy of the Xen network
interface to a single-page interface that better reflects the expected
behavior of a TPM: only a single request packet can be sent at any given
time, and every packet sent generates a single response packet. This
protocol change should also increase efficiency as it avoids mapping and
unmapping grants when possible. The vtpm xenbus device has been renamed
to "vtpm2" to avoid conflicts with existing (xen-patched) kernels
supporting the old interface.

While the contents of the shared page have been defined to allow packets
larger than a single page (actually 4088 bytes) by allowing the client
to add extra grant references, the mapping of these extra references has
not been implemented; a feature node in xenstore may be used in the
future to indicate full support for the multi-page protocol. Most uses
of the TPM should not require this feature.
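
The single-page handshake described above can be sketched outside Xen as follows. The struct mirrors the new `vtpm_shared_page` from tpmif.h (using a C99 flexible array member where the header uses `[0]`); `vtpm_put_request` and `vtpm_get_request` are illustrative helpers, not part of the patch, that reproduce the offset computation and bounds checks used by `tpmfront_send()` and `get_request()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Mirror of the vtpm_shared_page layout introduced by this patch. */
struct vtpm_shared_page {
    uint32_t length;        /* request/response length in bytes */
    uint8_t state;          /* 0 - response ready / idle, 1 - request ready */
    uint8_t locality;       /* for the current request */
    uint8_t pad;
    uint8_t nr_extra_pages; /* extra pages for long packets; may be zero */
    uint32_t extra_pages[]; /* grant IDs; length is nr_extra_pages */
};

/* Payload offset within the page, as computed in get_request() and
 * send_response(): the fixed header plus one 32-bit grant ID per extra page. */
size_t vtpm_data_offset(const struct vtpm_shared_page *shr)
{
    return sizeof(*shr) + 4 * shr->nr_extra_pages;
}

/* Frontend side, as in tpmfront_send(): bounds-check, copy, then flip state.
 * A real frontend issues a write barrier before setting state so the backend
 * never observes state == 1 with a partially written payload. */
int vtpm_put_request(struct vtpm_shared_page *shr,
                     const uint8_t *msg, size_t len)
{
    size_t offset = vtpm_data_offset(shr);
    if (offset > PAGE_SIZE || offset + len > PAGE_SIZE)
        return -1; /* message too long for a single shared page */
    memcpy((uint8_t *)shr + offset, msg, len);
    shr->length = (uint32_t)len;
    shr->state = 1;
    return 0;
}

/* Backend side, as in get_request(): only read when a request is pending. */
long vtpm_get_request(struct vtpm_shared_page *shr,
                      uint8_t *buf, size_t bufsz)
{
    size_t offset = vtpm_data_offset(shr);
    if (shr->state != 1)
        return -1; /* nothing pending; spurious event channel fire */
    if (offset > PAGE_SIZE || offset + shr->length > PAGE_SIZE ||
        shr->length > bufsz)
        return -1;
    memcpy(buf, (uint8_t *)shr + offset, shr->length);
    return (long)shr->length;
}
```

With `nr_extra_pages == 0` this leaves PAGE_SIZE - 8 = 4088 bytes for the packet, matching the limit quoted above; the backend clears `state` back to 0 after writing its response, which is what `send_response()` does before notifying the event channel.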

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

---

Changes from v3: the change from "vtpm" to "vtpm2" in libxl was
incomplete, causing the vtpm-attach and -detach commands to fail.

 extras/mini-os/include/tpmback.h     |   1 +
 extras/mini-os/include/tpmfront.h    |   7 +-
 extras/mini-os/tpmback.c             | 135 +++++++++++++----------------------
 extras/mini-os/tpmfront.c            | 119 ++++++++++++------------------
 tools/libxl/libxl.c                  |  12 ++--
 tools/libxl/libxl_types_internal.idl |   2 +-
 xen/include/public/io/tpmif.h        |  45 +++---------
 7 files changed, 117 insertions(+), 204 deletions(-)

diff --git a/extras/mini-os/include/tpmback.h b/extras/mini-os/include/tpmback.h
index ff86732..ec9eda4 100644
--- a/extras/mini-os/include/tpmback.h
+++ b/extras/mini-os/include/tpmback.h
@@ -43,6 +43,7 @@
 
 struct tpmcmd {
    domid_t domid;		/* Domid of the frontend */
+   uint8_t locality;    /* Locality requested by the frontend */
    unsigned int handle;	/* Handle of the frontend */
    unsigned char uuid[16];			/* uuid of the tpm interface */
 
diff --git a/extras/mini-os/include/tpmfront.h b/extras/mini-os/include/tpmfront.h
index fd2cb17..a0c7c4d 100644
--- a/extras/mini-os/include/tpmfront.h
+++ b/extras/mini-os/include/tpmfront.h
@@ -37,9 +37,7 @@ struct tpmfront_dev {
    grant_ref_t ring_ref;
    evtchn_port_t evtchn;
 
-   tpmif_tx_interface_t* tx;
-
-   void** pages;
+   vtpm_shared_page_t *page;
 
    domid_t bedomid;
    char* nodename;
@@ -77,6 +75,9 @@ void shutdown_tpmfront(struct tpmfront_dev* dev);
  * */
 int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t** resp, size_t* resplen);
 
+/* Set the locality used for communicating with a vTPM */
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality);
+
 #ifdef HAVE_LIBC
 #include <sys/stat.h>
 /* POSIX IO functions:
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
index 658fed1..50f8a5d 100644
--- a/extras/mini-os/tpmback.c
+++ b/extras/mini-os/tpmback.c
@@ -86,10 +86,7 @@ struct tpmif {
    evtchn_port_t evtchn;
 
    /* Shared page */
-   tpmif_tx_interface_t* tx;
-
-   /* pointer to TPMIF_RX_RING_SIZE pages */
-   void** pages;
+   vtpm_shared_page_t *page;
 
    enum xenbus_state state;
    enum { DISCONNECTED, DISCONNECTING, CONNECTED } status;
@@ -312,7 +309,6 @@ int insert_tpmif(tpmif_t* tpmif)
       remove_tpmif(tpmif);
       goto error_post_irq;
    }
-
    return 0;
 error:
    local_irq_restore(flags);
@@ -336,7 +332,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    if (tpmif->state == state)
       return 0;
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/state", (unsigned int) tpmif->domid, tpmif->handle);
 
    if((err = xenbus_read(XBT_NIL, path, &value))) {
       TPMBACK_ERR("Unable to read backend state %s, error was %s\n", path, err);
@@ -362,7 +358,7 @@ int tpmif_change_state(tpmif_t* tpmif, enum xenbus_state state)
    }
 
    /*update xenstore*/
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "state", "%u", state))) {
       TPMBACK_ERR("Error writing to xenstore %s, error was %s new state=%d\n", path, err, state);
       free(err);
@@ -386,8 +382,7 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
    tpmif->fe_state_path = NULL;
    tpmif->state = XenbusStateInitialising;
    tpmif->status = DISCONNECTED;
-   tpmif->tx = NULL;
-   tpmif->pages = NULL;
+   tpmif->page = NULL;
    tpmif->flags = 0;
    memset(tpmif->uuid, 0, sizeof(tpmif->uuid));
    return tpmif;
@@ -395,9 +390,6 @@ inline tpmif_t* __init_tpmif(domid_t domid, unsigned int handle)
 
 void __free_tpmif(tpmif_t* tpmif)
 {
-   if(tpmif->pages) {
-      free(tpmif->pages);
-   }
    if(tpmif->fe_path) {
       free(tpmif->fe_path);
    }
@@ -424,23 +416,17 @@ tpmif_t* new_tpmif(domid_t domid, unsigned int handle)
    tpmif = __init_tpmif(domid, handle);
 
    /* Get the uuid from xenstore */
-   snprintf(path, 512, "backend/vtpm/%u/%u/uuid", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/uuid", (unsigned int) domid, handle);
    if((!xenbus_read_uuid(path, tpmif->uuid))) {
       TPMBACK_ERR("Error reading %s\n", path);
       goto error;
    }
 
-   /* allocate pages to be used for shared mapping */
-   if((tpmif->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE)) == NULL) {
-      goto error;
-   }
-   memset(tpmif->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-
    if(tpmif_change_state(tpmif, XenbusStateInitWait)) {
       goto error;
    }
 
-   snprintf(path, 512, "backend/vtpm/%u/%u/frontend", (unsigned int) domid, handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u/frontend", (unsigned int) domid, handle);
    if((err = xenbus_read(XBT_NIL, path, &tpmif->fe_path))) {
       TPMBACK_ERR("Error creating new tpm instance xenbus_read(%s), Error = %s", path, err);
       free(err);
@@ -486,7 +472,7 @@ void free_tpmif(tpmif_t* tpmif)
       tpmif->status = DISCONNECTING;
       mask_evtchn(tpmif->evtchn);
 
-      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1)) {
+      if(gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1)) {
 	 TPMBACK_ERR("%u/%u Error occured while trying to unmap shared page\n", (unsigned int) tpmif->domid, tpmif->handle);
       }
 
@@ -508,7 +494,7 @@ void free_tpmif(tpmif_t* tpmif)
    schedule();
 
    /* Remove the old xenbus entries */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_rm(XBT_NIL, path))) {
       TPMBACK_ERR("Error cleaning up xenbus entries path=%s error=%s\n", path, err);
       free(err);
@@ -529,9 +515,10 @@ void free_tpmif(tpmif_t* tpmif)
 void tpmback_handler(evtchn_port_t port, struct pt_regs *regs, void *data)
 {
    tpmif_t* tpmif = (tpmif_t*) data;
-   tpmif_tx_request_t* tx = &tpmif->tx->ring[0].req;
-   /* Throw away 0 size events, these can trigger from event channel unmasking */
-   if(tx->size == 0)
+   vtpm_shared_page_t* pg = tpmif->page;
+
+   /* Only pay attention if the request is ready */
+   if (pg->state == 0)
       return;
 
    TPMBACK_DEBUG("EVENT CHANNEL FIRE %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
@@ -585,11 +572,10 @@ int connect_fe(tpmif_t* tpmif)
    free(value);
 
    domid = tpmif->domid;
-   if((tpmif->tx = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
+   if((tpmif->page = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &ringref, PROT_READ | PROT_WRITE)) == NULL) {
       TPMBACK_ERR("Failed to map grant reference %u/%u\n", (unsigned int) tpmif->domid, tpmif->handle);
       return -1;
    }
-   memset(tpmif->tx, 0, PAGE_SIZE);
 
    /*Bind the event channel */
    if((evtchn_bind_interdomain(tpmif->domid, evtchn, tpmback_handler, tpmif, &tpmif->evtchn)))
@@ -600,7 +586,7 @@ int connect_fe(tpmif_t* tpmif)
    unmask_evtchn(tpmif->evtchn);
 
    /* Write the ready flag and change status to connected */
-   snprintf(path, 512, "backend/vtpm/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
+   snprintf(path, 512, "backend/vtpm2/%u/%u", (unsigned int) tpmif->domid, tpmif->handle);
    if((err = xenbus_printf(XBT_NIL, path, "ready", "%u", 1))) {
       TPMBACK_ERR("%u/%u Unable to write ready flag on connect_fe()\n", (unsigned int) tpmif->domid, tpmif->handle);
       free(err);
@@ -618,7 +604,7 @@ error_post_evtchn:
    mask_evtchn(tpmif->evtchn);
    unbind_evtchn(tpmif->evtchn);
 error_post_map:
-   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->tx, 1);
+   gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->page, 1);
    return -1;
 }
 
@@ -670,8 +656,8 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
    char* value;
    unsigned int udomid = 0;
    tpmif_t* tpmif;
-   /* First check for new frontends, this occurs when /backend/vtpm/<domid>/<handle> gets created. Note we what the sscanf to fail on the last %s */
-   if (sscanf(evstr, "backend/vtpm/%u/%u/%40s", &udomid, handle, cmd) == 2) {
+   /* First check for new frontends; this occurs when /backend/vtpm2/<domid>/<handle> gets created. Note we want the sscanf to fail on the last %s */
+   if (sscanf(evstr, "backend/vtpm2/%u/%u/%40s", &udomid, handle, cmd) == 2) {
       *domid = udomid;
       /* Make sure the entry exists, if this event triggers because the entry dissapeared then ignore it */
       if((err = xenbus_read(XBT_NIL, evstr, &value))) {
@@ -685,7 +671,7 @@ static int parse_eventstr(const char* evstr, domid_t* domid, unsigned int* handl
 	 return EV_NONE;
       }
       return EV_NEWFE;
-   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm/%u/%40s", &udomid, handle, cmd)) == 3) {
+   } else if((ret = sscanf(evstr, "/local/domain/%u/device/vtpm2/%u/%40s", &udomid, handle, cmd)) == 3) {
       *domid = udomid;
       if (!strcmp(cmd, "state"))
 	 return EV_STCHNG;
@@ -784,7 +770,7 @@ void tpmback_set_resume_callback(void (*cb)(domid_t, unsigned int))
 
 static void event_listener(void)
 {
-   const char* bepath = "backend/vtpm";
+   const char* bepath = "backend/vtpm2";
    char **path;
    char* err;
 
@@ -874,6 +860,7 @@ void shutdown_tpmback(void)
 inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, unsigned char uuid[16])
 {
    tpmcmd->domid = domid;
+   tpmcmd->locality = -1;
    tpmcmd->handle = handle;
    memcpy(tpmcmd->uuid, uuid, sizeof(tpmcmd->uuid));
    tpmcmd->req = NULL;
@@ -884,12 +871,12 @@ inline void init_tpmcmd(tpmcmd_t* tpmcmd, domid_t domid, unsigned int handle, un
 
 tpmcmd_t* get_request(tpmif_t* tpmif) {
    tpmcmd_t* cmd;
-   tpmif_tx_request_t* tx;
-   int offset;
-   int tocopy;
-   int i;
-   uint32_t domid;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
@@ -899,35 +886,22 @@ tpmcmd_t* get_request(tpmif_t* tpmif) {
    }
    init_tpmcmd(cmd, tpmif->domid, tpmif->handle, tpmif->uuid);
 
-   tx = &tpmif->tx->ring[0].req;
-   cmd->req_len = tx->size;
+   shr = tpmif->page;
+   cmd->req_len = shr->length;
+   cmd->locality = shr->locality;
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->req_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
+   }
    /* Allocate the buffer */
    if(cmd->req_len) {
       if((cmd->req = malloc(cmd->req_len)) == NULL) {
 	 goto error;
       }
    }
-   /* Copy the bits from the shared pages */
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->req_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_READ)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during read!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->req_len - offset, PAGE_SIZE);
-      memcpy(&cmd->req[offset], tpmif->pages[i], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
-
-   }
+   /* Copy the bits from the shared page(s) */
+   memcpy(cmd->req, offset + (uint8_t*)shr, cmd->req_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Received Tpm Command from %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->req_len);
@@ -958,38 +932,24 @@ error:
 
 void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 {
-   tpmif_tx_request_t* tx;
-   int offset;
-   int i;
-   uint32_t domid;
-   int tocopy;
+   vtpm_shared_page_t* shr;
+   unsigned int offset;
    int flags;
+#ifdef TPMBACK_PRINT_DEBUG
+   int i;
+#endif
 
    local_irq_save(flags);
 
-   tx = &tpmif->tx->ring[0].req;
-   tx->size = cmd->resp_len;
-
-   offset = 0;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && offset < cmd->resp_len; ++i) {
-      tx = &tpmif->tx->ring[i].req;
-
-      /* Map the page with the data */
-      domid = (uint32_t)tpmif->domid;
-      if((tpmif->pages[i] = gntmap_map_grant_refs(&gtpmdev.map, 1, &domid, 0, &tx->ref, PROT_WRITE)) == NULL) {
-	 TPMBACK_ERR("%u/%u Unable to map shared page during write!\n", (unsigned int) tpmif->domid, tpmif->handle);
-	 goto error;
-      }
-
-      /* do the copy now */
-      tocopy = min(cmd->resp_len - offset, PAGE_SIZE);
-      memcpy(tpmif->pages[i], &cmd->resp[offset], tocopy);
-      offset += tocopy;
-
-      /* release the page */
-      gntmap_munmap(&gtpmdev.map, (unsigned long)tpmif->pages[i], 1);
+   shr = tpmif->page;
+   shr->length = cmd->resp_len;
 
+   offset = sizeof(*shr) + 4*shr->nr_extra_pages;
+   if (offset > PAGE_SIZE || offset + cmd->resp_len > PAGE_SIZE) {
+      TPMBACK_ERR("%u/%u Command size too long for shared page!\n", (unsigned int) tpmif->domid, tpmif->handle);
+      goto error;
    }
+   memcpy(offset + (uint8_t*)shr, cmd->resp, cmd->resp_len);
 
 #ifdef TPMBACK_PRINT_DEBUG
    TPMBACK_DEBUG("Sent response to %u/%u of size %u", (unsigned int) tpmif->domid, tpmif->handle, cmd->resp_len);
@@ -1003,6 +963,7 @@ void send_response(tpmcmd_t* cmd, tpmif_t* tpmif)
 #endif
    /* clear the ready flag and send the event channel notice to the frontend */
    tpmif_req_finished(tpmif);
+   shr->state = 0;
    notify_remote_via_evtchn(tpmif->evtchn);
 error:
    local_irq_restore(flags);
diff --git a/extras/mini-os/tpmfront.c b/extras/mini-os/tpmfront.c
index 0218d7f..ac9ba42 100644
--- a/extras/mini-os/tpmfront.c
+++ b/extras/mini-os/tpmfront.c
@@ -176,7 +176,7 @@ static int wait_for_backend_state_changed(struct tpmfront_dev* dev, XenbusState
 	 ret = wait_for_backend_closed(&events, path);
 	 break;
       default:
-	 break;
+         TPMFRONT_ERR("Bad wait state %d, ignoring\n", state);
    }
 
    if((err = xenbus_unwatch_path_token(XBT_NIL, path, path))) {
@@ -190,13 +190,13 @@ static int tpmfront_connect(struct tpmfront_dev* dev)
 {
    char* err;
    /* Create shared page */
-   dev->tx = (tpmif_tx_interface_t*) alloc_page();
-   if(dev->tx == NULL) {
+   dev->page = (vtpm_shared_page_t*) alloc_page();
+   if(dev->page == NULL) {
       TPMFRONT_ERR("Unable to allocate page for shared memory\n");
       goto error;
    }
-   memset(dev->tx, 0, PAGE_SIZE);
-   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->tx), 0);
+   memset(dev->page, 0, PAGE_SIZE);
+   dev->ring_ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->page), 0);
    TPMFRONT_DEBUG("grant ref is %lu\n", (unsigned long) dev->ring_ref);
 
    /*Create event channel */
@@ -228,7 +228,7 @@ error_postevtchn:
       unbind_evtchn(dev->evtchn);
 error_postmap:
       gnttab_end_access(dev->ring_ref);
-      free_page(dev->tx);
+      free_page(dev->page);
 error:
    return -1;
 }
@@ -240,7 +240,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    char path[512];
    char* value, *err;
    unsigned long long ival;
-   int i;
 
    printk("============= Init TPM Front ================\n");
 
@@ -251,7 +250,7 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
    dev->fd = -1;
 #endif
 
-   nodename = _nodename ? _nodename : "device/vtpm/0";
+   nodename = _nodename ? _nodename : "device/vtpm2/0";
    dev->nodename = strdup(nodename);
 
    init_waitqueue_head(&dev->waitq);
@@ -289,19 +288,6 @@ struct tpmfront_dev* init_tpmfront(const char* _nodename)
       goto error;
    }
 
-   /* Allocate pages that will contain the messages */
-   dev->pages = malloc(sizeof(void*) * TPMIF_TX_RING_SIZE);
-   if(dev->pages == NULL) {
-      goto error;
-   }
-   memset(dev->pages, 0, sizeof(void*) * TPMIF_TX_RING_SIZE);
-   for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-      dev->pages[i] = (void*)alloc_page();
-      if(dev->pages[i] == NULL) {
-	 goto error;
-      }
-   }
-
    TPMFRONT_LOG("Initialization Completed successfully\n");
 
    return dev;
@@ -314,8 +300,6 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 {
    char* err;
    char path[512];
-   int i;
-   tpmif_tx_request_t* tx;
    if(dev == NULL) {
       return;
    }
@@ -349,27 +333,12 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
       /* Wait for the backend to close and unmap shared pages, ignore any errors */
       wait_for_backend_state_changed(dev, XenbusStateClosed);
 
-      /* Cleanup any shared pages */
-      if(dev->pages) {
-	 for(i = 0; i < TPMIF_TX_RING_SIZE; ++i) {
-	    if(dev->pages[i]) {
-	       tx = &dev->tx->ring[i].req;
-	       if(tx->ref != 0) {
-		  gnttab_end_access(tx->ref);
-	       }
-	       free_page(dev->pages[i]);
-	    }
-	 }
-	 free(dev->pages);
-      }
-
       /* Close event channel and unmap shared page */
       mask_evtchn(dev->evtchn);
       unbind_evtchn(dev->evtchn);
       gnttab_end_access(dev->ring_ref);
 
-      free_page(dev->tx);
-
+      free_page(dev->page);
    }
 
    /* Cleanup memory usage */
@@ -387,13 +356,17 @@ void shutdown_tpmfront(struct tpmfront_dev* dev)
 
 int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 {
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
    int i;
-   tpmif_tx_request_t* tx = NULL;
+#endif
    /* Error Checking */
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to send message through disconnected frontend\n");
       return -1;
    }
+   shr = dev->page;
 
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Sending Msg to backend size=%u", (unsigned int) length);
@@ -407,19 +380,16 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 #endif
 
    /* Copy to shared pages now */
-   for(i = 0; length > 0 && i < TPMIF_TX_RING_SIZE; ++i) {
-      /* Share the page */
-      tx = &dev->tx->ring[i].req;
-      tx->unused = 0;
-      tx->addr = virt_to_mach(dev->pages[i]);
-      tx->ref = gnttab_grant_access(dev->bedomid, virt_to_mfn(dev->pages[i]), 0);
-      /* Copy the bits to the page */
-      tx->size = length > PAGE_SIZE ? PAGE_SIZE : length;
-      memcpy(dev->pages[i], &msg[i * PAGE_SIZE], tx->size);
-
-      /* Update counters */
-      length -= tx->size;
+   offset = sizeof(*shr);
+   if (length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Message too long for shared page\n");
+      return -1;
    }
+   memcpy(offset + (uint8_t*)shr, msg, length);
+   shr->length = length;
+   barrier();
+   shr->state = 1;
+
    dev->waiting = 1;
    dev->resplen = 0;
 #ifdef HAVE_LIBC
@@ -434,44 +404,41 @@ int tpmfront_send(struct tpmfront_dev* dev, const uint8_t* msg, size_t length)
 }
 int tpmfront_recv(struct tpmfront_dev* dev, uint8_t** msg, size_t *length)
 {
-   tpmif_tx_request_t* tx;
-   int i;
+   unsigned int offset;
+   vtpm_shared_page_t* shr = NULL;
+#ifdef TPMFRONT_PRINT_DEBUG
+   int i;
+#endif
    if(dev == NULL || dev->state != XenbusStateConnected) {
       TPMFRONT_ERR("Tried to receive message from disconnected frontend\n");
       return -1;
    }
    /*Wait for the response */
    wait_event(dev->waitq, (!dev->waiting));
+   shr = dev->page;
+   if (shr->state != 0)
+      goto quit;
 
    /* Initialize */
    *msg = NULL;
-   *length = 0;
+   *length = shr->length;
+   offset = sizeof(*shr);
 
-   /* special case, just quit */
-   tx = &dev->tx->ring[0].req;
-   if(tx->size == 0 ) {
-       goto quit;
-   }
-   /* Get the total size */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      *length += tx->size;
+   if (*length + offset > PAGE_SIZE) {
+      TPMFRONT_ERR("Reply too long for shared page\n");
+      return -1;
    }
+
    /* Alloc the buffer */
    if(dev->respbuf) {
       free(dev->respbuf);
    }
    *msg = dev->respbuf = malloc(*length);
    dev->resplen = *length;
+
    /* Copy the bits */
-   tx = &dev->tx->ring[0].req;
-   for(i = 0; i < TPMIF_TX_RING_SIZE && tx->size > 0; ++i) {
-      tx = &dev->tx->ring[i].req;
-      memcpy(&(*msg)[i * PAGE_SIZE], dev->pages[i], tx->size);
-      gnttab_end_access(tx->ref);
-      tx->ref = 0;
-   }
+   memcpy(*msg, offset + (uint8_t*)shr, *length);
+
 #ifdef TPMFRONT_PRINT_DEBUG
    TPMFRONT_DEBUG("Received response from backend size=%u", (unsigned int) *length);
    for(i = 0; i < *length; ++i) {
@@ -504,6 +471,14 @@ int tpmfront_cmd(struct tpmfront_dev* dev, uint8_t* req, size_t reqlen, uint8_t*
    return 0;
 }
 
+int tpmfront_set_locality(struct tpmfront_dev* dev, int locality)
+{
+   if (!dev || !dev->page)
+      return -1;
+   dev->page->locality = locality;
+   return 0;
+}
+
 #ifdef HAVE_LIBC
 #include <errno.h>
 int tpmfront_open(struct tpmfront_dev* dev)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8d921bc..77a3836 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1725,10 +1725,10 @@ static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
 {
    device->backend_devid   = vtpm->devid;
    device->backend_domid   = vtpm->backend_domid;
-   device->backend_kind    = LIBXL__DEVICE_KIND_VTPM;
+   device->backend_kind    = LIBXL__DEVICE_KIND_VTPM2;
    device->devid           = vtpm->devid;
    device->domid           = domid;
-   device->kind            = LIBXL__DEVICE_KIND_VTPM;
+   device->kind            = LIBXL__DEVICE_KIND_VTPM2;
 
    return 0;
 }
@@ -1756,7 +1756,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
             goto out;
         }
         l = libxl__xs_directory(gc, XBT_NULL,
-              GCSPRINTF("%s/device/vtpm", dompath), &nb);
+              GCSPRINTF("%s/device/vtpm2", dompath), &nb);
         if(l == NULL || nb == 0) {
             vtpm->devid = 0;
         } else {
@@ -1815,7 +1815,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
 
     *num = 0;
 
-    fe_path = libxl__sprintf(gc, "%s/device/vtpm", libxl__xs_get_dompath(gc, domid));
+    fe_path = libxl__sprintf(gc, "%s/device/vtpm2", libxl__xs_get_dompath(gc, domid));
     dir = libxl__xs_directory(gc, XBT_NULL, fe_path, &ndirs);
     if(dir) {
        vtpms = malloc(sizeof(*vtpms) * ndirs);
@@ -1830,7 +1830,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
           vtpm->devid = atoi(*dir);
 
           tmp = libxl__xs_read(gc, XBT_NULL,
-                GCSPRINTF("%s/%s/backend_id",
+                GCSPRINTF("%s/%s/backend-id",
                    fe_path, *dir));
           vtpm->backend_domid = atoi(tmp);
 
@@ -1863,7 +1863,7 @@ int libxl_device_vtpm_getinfo(libxl_ctx *ctx,
     dompath = libxl__xs_get_dompath(gc, domid);
     vtpminfo->devid = vtpm->devid;
 
-    vtpmpath = GCSPRINTF("%s/device/vtpm/%d", dompath, vtpminfo->devid);
+    vtpmpath = GCSPRINTF("%s/device/vtpm2/%d", dompath, vtpminfo->devid);
     vtpminfo->backend = xs_read(ctx->xsh, XBT_NULL,
           GCSPRINTF("%s/backend", vtpmpath), NULL);
     if (!vtpminfo->backend) {
diff --git a/tools/libxl/libxl_types_internal.idl b/tools/libxl/libxl_types_internal.idl
index c40120e..7fd43ab 100644
--- a/tools/libxl/libxl_types_internal.idl
+++ b/tools/libxl/libxl_types_internal.idl
@@ -19,7 +19,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (5, "VFB"),
     (6, "VKBD"),
     (7, "CONSOLE"),
-    (8, "VTPM"),
+    (8, "VTPM2"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/xen/include/public/io/tpmif.h b/xen/include/public/io/tpmif.h
index 02ccdab..7c96530 100644
--- a/xen/include/public/io/tpmif.h
+++ b/xen/include/public/io/tpmif.h
@@ -1,7 +1,7 @@
 /******************************************************************************
  * tpmif.h
  *
- * TPM I/O interface for Xen guest OSes.
+ * TPM I/O interface for Xen guest OSes, v2
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to
@@ -21,48 +21,23 @@
  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
  * DEALINGS IN THE SOFTWARE.
  *
- * Copyright (c) 2005, IBM Corporation
- *
- * Author: Stefan Berger, stefanb@us.ibm.com
- * Grant table support: Mahadevan Gomathisankaran
- *
- * This code has been derived from tools/libxc/xen/io/netif.h
- *
- * Copyright (c) 2003-2004, Keir Fraser
  */
 
 #ifndef __XEN_PUBLIC_IO_TPMIF_H__
 #define __XEN_PUBLIC_IO_TPMIF_H__
 
-#include "../grant_table.h"
+struct vtpm_shared_page {
+    uint32_t length;         /* request/response length in bytes */
 
-struct tpmif_tx_request {
-    unsigned long addr;   /* Machine address of packet.   */
-    grant_ref_t ref;      /* grant table access reference */
-    uint16_t unused;
-    uint16_t size;        /* Packet size in bytes.        */
-};
-typedef struct tpmif_tx_request tpmif_tx_request_t;
-
-/*
- * The TPMIF_TX_RING_SIZE defines the number of pages the
- * front-end and backend can exchange (= size of array).
- */
-typedef uint32_t TPMIF_RING_IDX;
-
-#define TPMIF_TX_RING_SIZE 1
-
-/* This structure must fit in a memory page. */
-
-struct tpmif_ring {
-    struct tpmif_tx_request req;
-};
-typedef struct tpmif_ring tpmif_ring_t;
+    uint8_t state;           /* 0 - response ready / idle
+                              * 1 - request ready / working */
+    uint8_t locality;        /* for the current request */
+    uint8_t pad;
 
-struct tpmif_tx_interface {
-    struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
+    uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
+    uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
 };
-typedef struct tpmif_tx_interface tpmif_tx_interface_t;
+typedef struct vtpm_shared_page vtpm_shared_page_t;
 
 #endif
 
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 14 22:03:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Dec 2012 22:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjdLS-0003G1-G2; Fri, 14 Dec 2012 22:03:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjdLQ-0003Fu-Ow
	for xen-devel@lists.xensource.com; Fri, 14 Dec 2012 22:03:01 +0000
Received: from [85.158.143.35:59931] by server-1.bemta-4.messagelabs.com id
	72/08-28401-312ABC05; Fri, 14 Dec 2012 22:02:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355522579!13219352!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMTE5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19483 invoked from network); 14 Dec 2012 22:02:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 22:02:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,284,1355097600"; 
   d="scan'208";a="164151"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Dec 2012 22:02:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 14 Dec 2012 22:02:58 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjdLO-0001wU-SK;
	Fri, 14 Dec 2012 22:02:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjdLO-0007Ph-RJ;
	Fri, 14 Dec 2012 22:02:58 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14699-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Dec 2012 22:02:58 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14699: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14699 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14699/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check fail in 14689 REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   11 guest-localmigrate          fail pass in 14689
 test-amd64-i386-win           7 windows-install    fail in 14689 pass in 14699

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10   fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf    18 leak-check/check fail in 14689 blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
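[Editorial aside: the cleanup ordering this changeset establishes can be sketched as a toy model. All names below are illustrative stand-ins, not the real libxl/blktap2 API; the point is only the ordering constraint the commit message describes.]

```python
# Toy model of the cleanup ordering fixed by the changeset above.
# Names are illustrative, not the real libxl/blktap2 interfaces.

class BlktapDevice:
    def __init__(self):
        self.backend_released = False
        self.tapdisk_running = True

    def release_backend(self):
        # Step 1: release the kernel-side xen-blkback backend device.
        self.backend_released = True

    def destroy_tapdisk(self):
        # Step 2: tear down the userspace tapdisk component.
        # In the real system, doing this while the backend still holds
        # the device blocks until the select() timeout; we model that
        # deadlock as an error.
        if not self.backend_released:
            raise RuntimeError("would deadlock: backend still holds device")
        self.tapdisk_running = False

def cleanup(dev):
    # The patch's ordering: release the backend first, then destroy
    # the userspace component.
    dev.release_backend()
    dev.destroy_tapdisk()
```

Calling `destroy_tapdisk()` before `release_backend()` raises, mirroring the deadlock the patch avoids.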
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 02:16:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 02:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjhHu-0000KX-NF; Sat, 15 Dec 2012 02:15:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjhHt-0000KS-AZ
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 02:15:37 +0000
Received: from [85.158.138.51:55628] by server-15.bemta-3.messagelabs.com id
	9B/7F-07921-84DDBC05; Sat, 15 Dec 2012 02:15:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355537735!22631172!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMTE5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28501 invoked from network); 15 Dec 2012 02:15:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 02:15:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,285,1355097600"; 
   d="scan'208";a="167517"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 02:15:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 02:15:34 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjhHq-0003UQ-EZ;
	Sat, 15 Dec 2012 02:15:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjhHq-00030M-4b;
	Sat, 15 Dec 2012 02:15:34 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14701-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 02:15:34 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14701: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14701 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14701/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14693
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14693

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  f50aab21f9f2

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 02:39:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 02:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjhex-0000X3-Sw; Sat, 15 Dec 2012 02:39:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tjhew-0000Wy-J3
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 02:39:26 +0000
Received: from [85.158.139.83:61712] by server-9.bemta-5.messagelabs.com id
	3A/64-10690-DD2EBC05; Sat, 15 Dec 2012 02:39:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355539164!27815122!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMTE5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19158 invoked from network); 15 Dec 2012 02:39:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 02:39:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,285,1355097600"; 
   d="scan'208";a="167621"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 02:39:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 02:39:24 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tjheu-0003bl-Ev;
	Sat, 15 Dec 2012 02:39:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tjheu-0000a3-DQ;
	Sat, 15 Dec 2012 02:39:24 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1Tjheu-0000a3-DQ@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 02:39:24 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing bisection] complete test-amd64-amd64-xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-4.1-testing
xen branch xen-4.1-testing
job test-amd64-amd64-xl
test leak-check/check

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104


  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
From xen-devel-bounces@lists.xen.org Sat Dec 15 02:39:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 02:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjhex-0000X3-Sw; Sat, 15 Dec 2012 02:39:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tjhew-0000Wy-J3
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 02:39:26 +0000
Received: from [85.158.139.83:61712] by server-9.bemta-5.messagelabs.com id
	3A/64-10690-DD2EBC05; Sat, 15 Dec 2012 02:39:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355539164!27815122!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMTE5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19158 invoked from network); 15 Dec 2012 02:39:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 02:39:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,285,1355097600"; 
   d="scan'208";a="167621"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 02:39:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 02:39:24 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tjheu-0003bl-Ev;
	Sat, 15 Dec 2012 02:39:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tjheu-0000a3-DQ;
	Sat, 15 Dec 2012 02:39:24 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1Tjheu-0000a3-DQ@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 02:39:24 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing bisection] complete test-amd64-amd64-xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-4.1-testing
xen branch xen-4.1-testing
job test-amd64-amd64-xl
test leak-check/check

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104


  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      


For bisection revision-tuple graph see:
   http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-4.1-testing.test-amd64-amd64-xl.leak-check--check.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Searching for failure / basis pass:
 14699 fail [host=fire-frog] / 14679 [host=potato-beetle] 14677 [host=gall-mite] 14675 [host=leaf-beetle] 14662 [host=field-cricket] 14566 [host=gall-mite] 14562 [host=field-cricket] 14484 [host=earwig] 14419 ok.
Failure / basis pass flights: 14699 / 14419
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 5639047d6c9f
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git#b36c42985575cd6d761d39e5770e57a1f52832ae-b36c42985575cd6d761d39e5770e57a1f52832ae http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg#5639047d6c9f-93e17b0cd035
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
Loaded 1001 nodes in revision graph
Searching for test results:
 14419 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 5639047d6c9f
 14484 [host=earwig]
 14562 [host=field-cricket]
 14566 [host=gall-mite]
 14684 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14677 [host=gall-mite]
 14662 [host=field-cricket]
 14695 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a8a9e1c126ea
 14675 [host=leaf-beetle]
 14679 [host=potato-beetle]
 14691 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 5639047d6c9f
 14692 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14689 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14696 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 0125069bc1b2
 14698 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14700 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14702 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14703 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14704 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14699 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14705 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14707 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Searching for interesting versions
 Result found: flight 14419 (pass), for basis pass
 Result found: flight 14684 (fail), for basis failure
 Repro found: flight 14691 (pass), for basis pass
 Repro found: flight 14692 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14700 (pass), for last pass
 Result found: flight 14702 (fail), for first failure
 Repro found: flight 14703 (pass), for last pass
 Repro found: flight 14704 (fail), for first failure
 Repro found: flight 14705 (pass), for last pass
 Repro found: flight 14707 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-amd64-amd64-xl.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14707: tolerable ALL FAIL

flight 14707 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14707/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl          18 leak-check/check        fail baseline untested


jobs:
 test-amd64-amd64-xl                                          fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 07:28:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 07:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjmAV-00028v-Uz; Sat, 15 Dec 2012 07:28:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjmAT-00028q-Rw
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 07:28:18 +0000
Received: from [193.109.254.147:22652] by server-8.bemta-14.messagelabs.com id
	48/DA-26341-1962CC05; Sat, 15 Dec 2012 07:28:17 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355556495!1748430!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMTE5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25861 invoked from network); 15 Dec 2012 07:28:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 07:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,288,1355097600"; 
   d="scan'208";a="169385"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 07:28:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 07:28:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjmAP-00056B-VP;
	Sat, 15 Dec 2012 07:28:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjmAP-0007NC-RE;
	Sat, 15 Dec 2012 07:28:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14706-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 07:28:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14706: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14706 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14706/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check fail in 14689 REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14699
 test-amd64-i386-xl-credit2   11 guest-localmigrate fail in 14699 pass in 14689
 test-amd64-i386-win           7 windows-install    fail in 14689 pass in 14706

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 14699 blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)
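[Editorial illustration, not part of the original report: the failure mode quoted in changeset 23428 above, a control plane blocked in select() on an IPC channel whose peer never answers, so it only wakes when the timeout expires, can be sketched as follows. The 1-second timeout and the pipe stand-in are hypothetical; this is not the actual blktap2 code.]

```python
import os
import select
import time

def wait_for_ipc(fd, timeout_s):
    """Block until fd is readable or timeout_s elapses.

    Returns the list of readable fds (empty on timeout) and the
    seconds actually spent waiting.
    """
    start = time.monotonic()
    readable, _, _ = select.select([fd], [], [], timeout_s)
    return readable, time.monotonic() - start

# Stand-in for the blktap2 IPC channel: a pipe nobody ever writes to,
# like a backend that has already gone away without closing the channel.
r, w = os.pipe()
readable, waited = wait_for_ipc(r, 1.0)

# The caller stalls for the full timeout before it can proceed with
# teardown -- the "noticeable delay" described in the commit message.
print(f"select() returned {readable!r} after {waited:.2f}s")
```

[Releasing the backend first, as the patch does, makes the channel readable (EOF) immediately, so the select() returns without waiting out the timeout.]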

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check fail in 14689 REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14699
 test-amd64-i386-xl-credit2   11 guest-localmigrate fail in 14699 pass in 14689
 test-amd64-i386-win           7 windows-install    fail in 14689 pass in 14706

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 14699 blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 11:08:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 11:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjpaf-0004kh-Fy; Sat, 15 Dec 2012 11:07:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tjpae-0004kZ-Fl
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 11:07:32 +0000
Received: from [85.158.137.99:16502] by server-14.bemta-3.messagelabs.com id
	32/DC-27443-3F95CC05; Sat, 15 Dec 2012 11:07:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355569650!13268090!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30168 invoked from network); 15 Dec 2012 11:07:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 11:07:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,288,1355097600"; 
   d="scan'208";a="170847"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 11:07:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 11:07:30 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tjpac-0006Bp-8I;
	Sat, 15 Dec 2012 11:07:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjpaR-0002V0-Nj;
	Sat, 15 Dec 2012 11:07:25 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14708-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 11:07:24 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14708: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14708 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14708/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14701
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14701

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  f50aab21f9f2

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 12:58:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 12:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TjrJ5-0005Wv-UM; Sat, 15 Dec 2012 12:57:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TjrJ3-0005Wq-QY
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 12:57:30 +0000
Received: from [85.158.139.211:23607] by server-7.bemta-5.messagelabs.com id
	D8/DD-08009-8B37CC05; Sat, 15 Dec 2012 12:57:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355576247!20502995!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15516 invoked from network); 15 Dec 2012 12:57:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 12:57:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,290,1355097600"; 
   d="scan'208";a="171609"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 12:57:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 12:57:27 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TjrJ1-0006jU-5N;
	Sat, 15 Dec 2012 12:57:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TjrIz-00026D-WB;
	Sat, 15 Dec 2012 12:57:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1TjrIz-00026D-WB@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 12:57:25 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing bisection] complete
	test-amd64-i386-xl-multivcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-4.1-testing
xen branch xen-4.1-testing
job test-amd64-i386-xl-multivcpu
test leak-check/check

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104


  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      


For bisection revision-tuple graph see:
   http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-4.1-testing.test-amd64-i386-xl-multivcpu.leak-check--check.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Searching for failure / basis pass:
 14706 fail [host=field-cricket] / 14679 [host=lace-bug] 14677 [host=woodlouse] 14675 [host=bush-cricket] 14662 [host=itch-mite] 14566 [host=potato-beetle] 14562 [host=woodlouse] 14484 [host=itch-mite] 14419 ok.
Failure / basis pass flights: 14706 / 14419
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 5639047d6c9f
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git#b36c42985575cd6d761d39e5770e57a1f52832ae-b36c42985575cd6d761d39e5770e57a1f52832ae http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg#5639047d6c9f-93e17b0cd035
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
Loaded 1001 nodes in revision graph
Searching for test results:
 14419 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 5639047d6c9f
 14484 [host=itch-mite]
 14562 [host=woodlouse]
 14566 [host=potato-beetle]
 14684 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14677 [host=woodlouse]
 14662 [host=itch-mite]
 14675 [host=bush-cricket]
 14679 [host=lace-bug]
 14689 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14699 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14706 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14711 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 5639047d6c9f
 14712 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14713 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a8a9e1c126ea
 14714 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 0125069bc1b2
 14716 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14717 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14718 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14719 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14720 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14721 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14722 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Searching for interesting versions
 Result found: flight 14419 (pass), for basis pass
 Result found: flight 14684 (fail), for basis failure
 Repro found: flight 14711 (pass), for basis pass
 Repro found: flight 14712 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14717 (pass), for last pass
 Result found: flight 14718 (fail), for first failure
 Repro found: flight 14719 (pass), for last pass
 Repro found: flight 14720 (fail), for first failure
 Repro found: flight 14721 (pass), for last pass
 Repro found: flight 14722 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-amd64-i386-xl-multivcpu.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14722: tolerable ALL FAIL

flight 14722 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14722/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu 18 leak-check/check        fail baseline untested


jobs:
 test-amd64-i386-xl-multivcpu                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 15:59:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 15:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tju8X-0006OD-40; Sat, 15 Dec 2012 15:58:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tju8U-0006O8-SD
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 15:58:47 +0000
Received: from [85.158.143.35:45441] by server-2.bemta-4.messagelabs.com id
	55/FC-30861-63E9CC05; Sat, 15 Dec 2012 15:58:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355587125!15693531!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29838 invoked from network); 15 Dec 2012 15:58:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 15:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,291,1355097600"; 
   d="scan'208";a="172651"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 15:58:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 15:58:44 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tju8S-0007lB-FQ;
	Sat, 15 Dec 2012 15:58:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tju8O-0003Me-Vb;
	Sat, 15 Dec 2012 15:58:42 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14715-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 15:58:40 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14715: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14715 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14715/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check fail in 14689 REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14699
 test-amd64-i386-xl-credit2   11 guest-localmigrate fail in 14699 pass in 14689
 test-amd64-i386-win           7 windows-install    fail in 14689 pass in 14715

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 14699 blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 15:59:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 15:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tju8X-0006OD-40; Sat, 15 Dec 2012 15:58:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tju8U-0006O8-SD
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 15:58:47 +0000
Received: from [85.158.143.35:45441] by server-2.bemta-4.messagelabs.com id
	55/FC-30861-63E9CC05; Sat, 15 Dec 2012 15:58:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355587125!15693531!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29838 invoked from network); 15 Dec 2012 15:58:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 15:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,291,1355097600"; 
   d="scan'208";a="172651"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 15:58:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 15:58:44 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tju8S-0007lB-FQ;
	Sat, 15 Dec 2012 15:58:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tju8O-0003Me-Vb;
	Sat, 15 Dec 2012 15:58:42 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14715-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 15:58:40 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14715: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14715 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14715/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check fail in 14689 REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14699
 test-amd64-i386-xl-credit2   11 guest-localmigrate fail in 14699 pass in 14689
 test-amd64-i386-win           7 windows-install    fail in 14689 pass in 14715

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     18 leak-check/check         fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 14699 blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 21:19:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 21:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tjz7q-00087X-CN; Sat, 15 Dec 2012 21:18:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tjz7n-00087S-Ua
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 21:18:24 +0000
Received: from [85.158.139.211:32238] by server-9.bemta-5.messagelabs.com id
	1D/9A-10690-F19ECC05; Sat, 15 Dec 2012 21:18:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355606302!18208215!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30905 invoked from network); 15 Dec 2012 21:18:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 21:18:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,291,1355097600"; 
   d="scan'208";a="174482"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 21:17:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 21:17:22 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tjz6o-0000uQ-D5;
	Sat, 15 Dec 2012 21:17:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tjz6n-0000gf-Im;
	Sat, 15 Dec 2012 21:17:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1Tjz6n-0000gf-Im@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 21:17:21 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing bisection] complete test-amd64-i386-xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-4.1-testing
xen branch xen-4.1-testing
job test-amd64-i386-xl
test leak-check/check

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104


  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      


For bisection revision-tuple graph see:
   http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-4.1-testing.test-amd64-i386-xl.leak-check--check.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Searching for failure / basis pass:
 14715 fail [host=earwig] / 14679 [host=lace-bug] 14677 [host=woodlouse] 14675 [host=bush-cricket] 14662 [host=potato-beetle] 14566 [host=gall-mite] 14562 [host=lace-bug] 14484 [host=lace-bug] 14419 [host=gall-mite] 14400 ok.
Failure / basis pass flights: 14715 / 14400
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae ce405f5fd5ee
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git#b36c42985575cd6d761d39e5770e57a1f52832ae-b36c42985575cd6d761d39e5770e57a1f52832ae http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg#ce405f5fd5ee-93e17b0cd035
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
Loaded 1001 nodes in revision graph
Searching for test results:
 14419 [host=gall-mite]
 14400 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae ce405f5fd5ee
 14484 [host=lace-bug]
 14562 [host=lace-bug]
 14566 [host=gall-mite]
 14684 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14677 [host=woodlouse]
 14662 [host=potato-beetle]
 14675 [host=bush-cricket]
 14679 [host=lace-bug]
 14689 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14699 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14706 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14723 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae ce405f5fd5ee
 14724 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14725 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae cadc212c8ef3
 14726 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 0125069bc1b2
 14715 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14727 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14729 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14730 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14731 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14732 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14733 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14734 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Searching for interesting versions
 Result found: flight 14400 (pass), for basis pass
 Result found: flight 14684 (fail), for basis failure
 Repro found: flight 14723 (pass), for basis pass
 Repro found: flight 14724 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14729 (pass), for last pass
 Result found: flight 14730 (fail), for first failure
 Repro found: flight 14731 (pass), for last pass
 Repro found: flight 14732 (fail), for first failure
 Repro found: flight 14733 (pass), for last pass
 Repro found: flight 14734 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-amd64-i386-xl.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14734: tolerable ALL FAIL

flight 14734 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14734/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl           18 leak-check/check        fail baseline untested


jobs:
 test-amd64-i386-xl                                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
Loaded 1001 nodes in revision graph
Searching for test results:
 14419 [host=gall-mite]
 14400 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae ce405f5fd5ee
 14484 [host=lace-bug]
 14562 [host=lace-bug]
 14566 [host=gall-mite]
 14684 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14677 [host=woodlouse]
 14662 [host=potato-beetle]
 14675 [host=bush-cricket]
 14679 [host=lace-bug]
 14689 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14699 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14706 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14723 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae ce405f5fd5ee
 14724 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14725 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae cadc212c8ef3
 14726 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 0125069bc1b2
 14715 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14727 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14729 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14730 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14731 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14732 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14733 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14734 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Searching for interesting versions
 Result found: flight 14400 (pass), for basis pass
 Result found: flight 14684 (fail), for basis failure
 Repro found: flight 14723 (pass), for basis pass
 Repro found: flight 14724 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14729 (pass), for last pass
 Result found: flight 14730 (fail), for first failure
 Repro found: flight 14731 (pass), for last pass
 Repro found: flight 14732 (fail), for first failure
 Repro found: flight 14733 (pass), for last pass
 Repro found: flight 14734 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-amd64-i386-xl.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14734: tolerable ALL FAIL

flight 14734 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14734/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl           18 leak-check/check        fail baseline untested


jobs:
 test-amd64-i386-xl                                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 15 23:09:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Dec 2012 23:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tk0qN-0000Bo-6g; Sat, 15 Dec 2012 23:08:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tk0qL-0000Bj-NR
	for xen-devel@lists.xensource.com; Sat, 15 Dec 2012 23:08:30 +0000
Received: from [85.158.139.211:8075] by server-15.bemta-5.messagelabs.com id
	78/CC-20523-CE20DC05; Sat, 15 Dec 2012 23:08:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355612908!18842212!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12869 invoked from network); 15 Dec 2012 23:08:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Dec 2012 23:08:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,291,1355097600"; 
   d="scan'208";a="175205"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Dec 2012 23:08:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 15 Dec 2012 23:08:27 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tk0qJ-0001St-Je;
	Sat, 15 Dec 2012 23:08:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tk0qJ-0006D6-Ih;
	Sat, 15 Dec 2012 23:08:27 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14728-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Dec 2012 23:08:27 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14728: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14728 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14728/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-i386-i386-xl-qemut-winxpsp3 12 guest-localmigrate/x10  fail pass in 14715
 test-amd64-i386-xl-credit2    7 debian-install     fail in 14715 pass in 14728

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10   fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf    18 leak-check/check fail in 14715 blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop        fail in 14715 never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 05:09:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 05:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tk6Sh-00068M-9k; Sun, 16 Dec 2012 05:08:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tk6Sf-00068H-1W
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 05:08:25 +0000
Received: from [85.158.143.35:32290] by server-3.bemta-4.messagelabs.com id
	BD/59-18211-8475DC05; Sun, 16 Dec 2012 05:08:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355634503!10753049!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjUx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22192 invoked from network); 16 Dec 2012 05:08:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 05:08:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,292,1355097600"; 
   d="scan'208";a="177832"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Dec 2012 05:08:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 16 Dec 2012 05:08:23 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tk6Sd-0003H3-4i;
	Sun, 16 Dec 2012 05:08:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tk6Sd-00052G-25;
	Sun, 16 Dec 2012 05:08:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1Tk6Sd-00052G-25@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Dec 2012 05:08:23 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing bisection] complete test-i386-i386-xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-4.1-testing
xen branch xen-4.1-testing
job test-i386-i386-xl
test leak-check/check

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104


  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      


For bisection revision-tuple graph see:
   http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-4.1-testing.test-i386-i386-xl.leak-check--check.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Searching for failure / basis pass:
 14728 fail [host=gall-mite] / 14679 [host=potato-beetle] 14677 [host=woodlouse] 14675 [host=bush-cricket] 14662 [host=moss-bug] 14566 ok.
Failure / basis pass flights: 14728 / 14566
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a8a9e1c126ea
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git#b36c42985575cd6d761d39e5770e57a1f52832ae-b36c42985575cd6d761d39e5770e57a1f52832ae http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg#a8a9e1c126ea-93e17b0cd035
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
Loaded 1001 nodes in revision graph
Searching for test results:
 14562 [host=field-cricket]
 14566 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a8a9e1c126ea
 14684 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14677 [host=woodlouse]
 14662 [host=moss-bug]
 14675 [host=bush-cricket]
 14679 [host=potato-beetle]
 14689 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14699 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14706 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14728 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14738 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14740 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14741 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14742 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14744 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14745 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14715 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14746 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14735 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a8a9e1c126ea
 14736 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14737 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 0125069bc1b2
Searching for interesting versions
 Result found: flight 14566 (pass), for basis pass
 Result found: flight 14684 (fail), for basis failure
 Repro found: flight 14735 (pass), for basis pass
 Repro found: flight 14736 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14740 (pass), for last pass
 Result found: flight 14741 (fail), for first failure
 Repro found: flight 14742 (pass), for last pass
 Repro found: flight 14744 (fail), for first failure
 Repro found: flight 14745 (pass), for last pass
 Repro found: flight 14746 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-i386-i386-xl.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14746: tolerable ALL FAIL

flight 14746 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14746/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-i386-i386-xl            18 leak-check/check        fail baseline untested


jobs:
 test-i386-i386-xl                                            fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 06:13:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 06:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tk7TE-0006VL-Hs; Sun, 16 Dec 2012 06:13:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tk7TC-0006VG-QN
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 06:13:03 +0000
Received: from [85.158.139.83:51327] by server-4.bemta-5.messagelabs.com id
	66/CC-14693-E666DC05; Sun, 16 Dec 2012 06:13:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355638381!27803941!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjUx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3419 invoked from network); 16 Dec 2012 06:13:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 06:13:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,292,1355097600"; 
   d="scan'208";a="178222"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Dec 2012 06:13:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 16 Dec 2012 06:13:00 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tk7TA-0003aM-NW;
	Sun, 16 Dec 2012 06:13:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tk7TA-0004Wv-Jq;
	Sun, 16 Dec 2012 06:13:00 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14739-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Dec 2012 06:13:00 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14739: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14739 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14739/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10   fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 06:13:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 06:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tk7TE-0006VL-Hs; Sun, 16 Dec 2012 06:13:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tk7TC-0006VG-QN
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 06:13:03 +0000
Received: from [85.158.139.83:51327] by server-4.bemta-5.messagelabs.com id
	66/CC-14693-E666DC05; Sun, 16 Dec 2012 06:13:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355638381!27803941!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjUx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3419 invoked from network); 16 Dec 2012 06:13:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 06:13:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,292,1355097600"; 
   d="scan'208";a="178222"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Dec 2012 06:13:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 16 Dec 2012 06:13:00 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tk7TA-0003aM-NW;
	Sun, 16 Dec 2012 06:13:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tk7TA-0004Wv-Jq;
	Sun, 16 Dec 2012 06:13:00 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14739-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Dec 2012 06:13:00 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14739: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14739 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14739/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10   fail blocked in 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 09:30:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 09:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkAXj-0007wv-0l; Sun, 16 Dec 2012 09:29:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liu.yi24@zte.com.cn>) id 1TkAXi-0007wq-7t
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 09:29:54 +0000
Received: from [85.158.143.35:39125] by server-3.bemta-4.messagelabs.com id
	BE/04-18211-1949DC05; Sun, 16 Dec 2012 09:29:53 +0000
X-Env-Sender: liu.yi24@zte.com.cn
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355650191!5421864!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17312 invoked from network); 16 Dec 2012 09:29:52 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-9.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	16 Dec 2012 09:29:52 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <liu.yi24@zte.com.cn>) id 1TkAXf-000285-0B
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 01:29:51 -0800
Date: Sun, 16 Dec 2012 01:29:51 -0800 (PST)
From: "Liu.yi" <liu.yi24@zte.com.cn>
To: xen-devel@lists.xensource.com
Message-ID: <1355650190976-5713082.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] xm save -c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
    I have a winxpsp2 VM with the GPLPV driver installed. The problem is that
when I execute "xm save -c winxpsp2 save.bin", the command completes
successfully but the VM hangs after issuing a disk operation.
    After installing the GPLPV debug driver, I found in the qemu log that vbd
resume failed; after a while Windows issues a disk reset command, and then the
VM hangs.
    The output of xm debug-keys before "xm save -c" is as follows:
(XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
(XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
(XEN)        3 [0/1]: s=2 n=0 d=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
(XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
(XEN)        6 [0/0]: s=3 n=0 d=0 p=73 x=0     // vbd-768 event-channel
(XEN)        7 [0/0]: s=3 n=0 d=0 p=74 x=0     // vif-0 event-channel
    The output of xm debug-keys after "xm save -c" is as follows:
(XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
(XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
(XEN)        3 [0/1]: s=2 n=0 d=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
(XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
(XEN)        6 [0/0]: s=6 n=0 x=0                   // new pdo_event_channel
(XEN)        7 [0/0]: s=2 n=0 d=0 x=0             // new suspend_evtchn
(XEN)        8 [0/0]: s=3 n=0 d=0 p=? x=0       // new vbd-768 event-channel
                                                              // vif-0 resume
                                                              // does not start
                                                              // at all because
                                                              // resume hangs
                                                              // at vbd

    The blkback and blkif drivers seem to free their event channels in
unbind_from_irqhandler when suspending, so when the GPLPV driver resumes it
starts allocating event channels from 6. The strange thing is that the GPLPV
driver allocates a new pdo_event_channel and suspend_evtchn (6 and 7) while
the previous ones remain active.
    I tried to unbind the old pdo_event_channel and suspend_evtchn, but then
suspending the VM hangs. "xm save -c" works if I reuse the old
pdo_event_channel and suspend_evtchn as follows:

  in evtchn.c:EvtChn_Init():
      KeInitializeEvent(&xpdd->pdo_suspend_event, SynchronizationEvent, FALSE);
      if (xpdd->pdo_event_channel == 0) {
          KdPrint((__DRIVER_NAME "     create new pdo_event_channel\n"));
          xpdd->pdo_event_channel = EvtChn_AllocIpi(xpdd, 0);
      }

  in xenpci_fdo.c:XenPci_ConnectSuspendEvt():
      if (xpdd->suspend_evtchn == 0) {
          xpdd->suspend_evtchn = EvtChn_AllocUnbound(xpdd, 0);
          KdPrint((__DRIVER_NAME "     create new suspend event channel\n"));
      }

    With this change the qemu log shows vbd and vif resuming successfully, and
the VM runs fine.
    I'm not sure about these modifications to the GPLPV Windows driver. I
would be grateful for any suggestions, thanks.



--
View this message in context: http://xen.1045712.n5.nabble.com/xm-save-c-problem-tp5713082.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 09:50:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 09:50:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkArB-00088g-U8; Sun, 16 Dec 2012 09:50:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkArA-00088b-1o
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 09:50:00 +0000
Received: from [193.109.254.147:43409] by server-12.bemta-14.messagelabs.com
	id 05/68-06523-7499DC05; Sun, 16 Dec 2012 09:49:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355651398!2910231!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15050 invoked from network); 16 Dec 2012 09:49:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 09:49:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,292,1355097600"; 
   d="scan'208";a="179807"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Dec 2012 09:49:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 16 Dec 2012 09:49:57 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkAr7-0004jD-SR;
	Sun, 16 Dec 2012 09:49:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkAr7-0004nq-Ns;
	Sun, 16 Dec 2012 09:49:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14743-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Dec 2012 09:49:57 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14743: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14743 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14743/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14708
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14708

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  f50aab21f9f2

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 11:27:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 11:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkCMk-0000WQ-2g; Sun, 16 Dec 2012 11:26:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liu.yi24@zte.com.cn>) id 1TkCMi-0000WL-3i
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 11:26:40 +0000
Received: from [85.158.143.99:31930] by server-3.bemta-4.messagelabs.com id
	6B/91-18211-FEFADC05; Sun, 16 Dec 2012 11:26:39 +0000
X-Env-Sender: liu.yi24@zte.com.cn
X-Msg-Ref: server-14.tower-216.messagelabs.com!1355657197!20060335!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22649 invoked from network); 16 Dec 2012 11:26:38 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-14.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	16 Dec 2012 11:26:38 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <liu.yi24@zte.com.cn>) id 1TkCMf-0007ZU-11
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 03:26:37 -0800
Date: Sun, 16 Dec 2012 03:26:37 -0800 (PST)
From: "Liu.yi" <liu.yi24@zte.com.cn>
To: xen-devel@lists.xensource.com
Message-ID: <1355657197024-5713084.post@n5.nabble.com>
In-Reply-To: <1355650190976-5713082.post@n5.nabble.com>
References: <1355650190976-5713082.post@n5.nabble.com>
MIME-Version: 1.0
Subject: Re: [Xen-devel] xm save -c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

    Unfortunately xm restore failed, because when the vm restores (resumes),
xpdd->pdo_event_channel and xpdd->suspend_evtchn aren't 0.
    So I call EvtChn_Unbind and EvtChn_Close on pdo_event_channel in
EvtChn_Init when xpdd->pdo_event_channel isn't 0, and on suspend_evtchn in
XenPci_ConnectSuspendEvt when xpdd->suspend_evtchn isn't 0.
    After modifying the pv driver as above, xm save -c and xm restore seem to
work fine. Again, I'm not sure about these changes.



--
View this message in context: http://xen.1045712.n5.nabble.com/xm-save-c-problem-tp5713082p5713084.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 12:46:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 12:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkDbc-00015t-9Y; Sun, 16 Dec 2012 12:46:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TkDbb-00015l-2W
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 12:46:07 +0000
Received: from [85.158.143.99:26965] by server-3.bemta-4.messagelabs.com id
	A7/28-18211-E82CDC05; Sun, 16 Dec 2012 12:46:06 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355661965!29553360!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzA0ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28009 invoked from network); 16 Dec 2012 12:46:05 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Dec 2012 12:46:05 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 1F9382B48;
	Sun, 16 Dec 2012 14:46:03 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id CE1AA20061; Sun, 16 Dec 2012 14:46:02 +0200 (EET)
Date: Sun, 16 Dec 2012 14:46:02 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Liu.yi" <liu.yi24@zte.com.cn>
Message-ID: <20121216124602.GJ8912@reaktio.net>
References: <1355650190976-5713082.post@n5.nabble.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355650190976-5713082.post@n5.nabble.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] xm save -c problem with GPLPV drivers and winxp
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Dec 16, 2012 at 01:29:51AM -0800, Liu.yi wrote:
> hi,all

Hello,

I added the important keywords to the subject.

-- Pasi

>     I have a winxpsp2 vm with the gplpv driver installed. The problem is
> that when I execute "xm save -c winxpsp2 save.bin", the command is executed
> successfully but the vm hangs after issuing a disk operation.
>     After installing the gplpv debug driver, I found from the qemu log that
> vbd resume failed; after a while windows issues a disk reset command, then
> the vm hangs.
>     xm debug-keys shows as follows before "xm save -c":
> (XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
> (XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
> (XEN)        3 [0/1]: s=2 n=0 d=0 x=0
> (XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
> (XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
> (XEN)        6 [0/0]: s=3 n=0 d=0 p=73 x=0     // vbd-768 event-channel
> (XEN)        7 [0/0]: s=3 n=0 d=0 p=74 x=0     // vif-0 event-channel
>     xm debug-keys shows as follows after "xm save -c":
> (XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
> (XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
> (XEN)        3 [0/1]: s=2 n=0 d=0 x=0
> (XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
> (XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
> (XEN)        6 [0/0]: s=6 n=0 x=0                   // new pdo_event_channel
> (XEN)        7 [0/0]: s=2 n=0 d=0 x=0             // new suspend_evtchn
> (XEN)        8 [0/0]: s=3 n=0 d=0 p=? x=0       // new vbd-768 event-channel
>                                                               // vif-0
> resume don't start at all because resume hang at vbd
> 
>     The blkback and blkif drivers seem to free their event channels in
> unbind_from_irqhandler when suspending, so when the gplpv driver resumes it
> starts allocating event channels from 6. The strange thing is that the
> gplpv driver allocates a new pdo_event_channel and suspend_evtchn (6 and
> 7), while the previous ones remain active.
>     I tried to unbind the old pdo_event_channel and suspend_evtchn, but
> suspending the vm hangs. "xm save -c" works if I reuse the old
> pdo_event_channel and suspend_evtchn as follows:
> 
>   in evtchn.c:EvtChn_Init()
>       KeInitializeEvent(&xpdd->pdo_suspend_event, SynchronizationEvent,
> FALSE);
>       if (xpdd->pdo_event_channel == 0) {
>           KdPrint((__DRIVER_NAME "     create new pdo_event_channel\n"));
>           xpdd->pdo_event_channel = EvtChn_AllocIpi(xpdd, 0);
>       }
>   in xenpci_fdo.c:XenPci_ConnectSuspendEvt()
>       if (xpdd->suspend_evtchn == 0) {
>           xpdd->suspend_evtchn = EvtChn_AllocUnbound(xpdd, 0);
>           KdPrint((__DRIVER_NAME "     create new suspend event channel\n"));
>       }
> 
>     The qemu log shows vbd and vif resume successfully, and the vm runs
> fine.
>     I'm not sure about these modifications to the gplpv windows driver. I
> would be grateful for any suggestions, thanks.
> 
> 
> 
> --
> View this message in context: http://xen.1045712.n5.nabble.com/xm-save-c-problem-tp5713082.html
> Sent from the Xen - Dev mailing list archive at Nabble.com.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 12:46:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 12:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkDbc-00015t-9Y; Sun, 16 Dec 2012 12:46:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TkDbb-00015l-2W
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 12:46:07 +0000
Received: from [85.158.143.99:26965] by server-3.bemta-4.messagelabs.com id
	A7/28-18211-E82CDC05; Sun, 16 Dec 2012 12:46:06 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355661965!29553360!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzA0ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28009 invoked from network); 16 Dec 2012 12:46:05 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Dec 2012 12:46:05 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 1F9382B48;
	Sun, 16 Dec 2012 14:46:03 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id CE1AA20061; Sun, 16 Dec 2012 14:46:02 +0200 (EET)
Date: Sun, 16 Dec 2012 14:46:02 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Liu.yi" <liu.yi24@zte.com.cn>
Message-ID: <20121216124602.GJ8912@reaktio.net>
References: <1355650190976-5713082.post@n5.nabble.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355650190976-5713082.post@n5.nabble.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] xm save -c problem with GPLPV drivers and winxp
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Dec 16, 2012 at 01:29:51AM -0800, Liu.yi wrote:
> hi, all

Hello,

I added the important keywords to the subject.

-- Pasi

>     I have a winxpsp2 vm with the gplpv driver installed. The problem is
> that when I execute "xm save -c winxpsp2 save.bin", the command is executed
> successfully but the vm hangs after issuing a disk operation.
>     After installing the gplpv debug driver, I found in the qemu log that
> vbd resume failed; after a while Windows issues a disk reset command, and
> then the vm hangs.
>     xm debug-keys shows as follows before "xm save -c":
> (XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
> (XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
> (XEN)        3 [0/1]: s=2 n=0 d=0 x=0
> (XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
> (XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
> (XEN)        6 [0/0]: s=3 n=0 d=0 p=73 x=0     // vbd-768 event-channel
> (XEN)        7 [0/0]: s=3 n=0 d=0 p=74 x=0     // vif-0 event-channel
>     xm debug-keys shows as follows after "xm save -c":
> (XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
> (XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
> (XEN)        3 [0/1]: s=2 n=0 d=0 x=0
> (XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
> (XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
> (XEN)        6 [0/0]: s=6 n=0 x=0                   // new pdo_event_channel
> (XEN)        7 [0/0]: s=2 n=0 d=0 x=0             // new suspend_evtchn
> (XEN)        8 [0/0]: s=3 n=0 d=0 p=? x=0       // new vbd-768 event-channel
>                                                               // vif-0
> resume doesn't start at all because resume hangs at vbd
> 
>     The blkback and blkif drivers seem to free their event channels in
> unbind_from_irqhandler when suspending, so when the gplpv driver resumes it
> starts allocating event channels from 6. The strange thing is that the
> gplpv driver allocates a new pdo_event_channel and suspend_evtchn (6 and 7)
> while the previous ones remain active.
>     I tried to unbind the old pdo_event_channel and suspend_evtchn, but
> then suspending the vm hangs. "xm save -c" works if I reuse the old
> pdo_event_channel and suspend_evtchn as follows:
> 
>   in evtchn.c:EvtChn_Init()
>       KeInitializeEvent(&xpdd->pdo_suspend_event, SynchronizationEvent, FALSE);
>       if (xpdd->pdo_event_channel == 0) {
>           KdPrint((__DRIVER_NAME "     create new pdo_event_channel\n"));
>           xpdd->pdo_event_channel = EvtChn_AllocIpi(xpdd, 0);
>       }
> 
>   in xenpci_fdo.c:XenPci_ConnectSuspendEvt()
>       if (xpdd->suspend_evtchn == 0) {
>           xpdd->suspend_evtchn = EvtChn_AllocUnbound(xpdd, 0);
>           KdPrint((__DRIVER_NAME "     create new suspend event channel\n"));
>       }
> 
>     The qemu log shows vbd and vif resuming successfully, and the vm runs
> fine. I'm not sure about these modifications to the gplpv Windows driver.
> I would be grateful for any suggestions, thanks.
> 
> 
> 
> --
> View this message in context: http://xen.1045712.n5.nabble.com/xm-save-c-problem-tp5713082.html
> Sent from the Xen - Dev mailing list archive at Nabble.com.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 14:46:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 14:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkFTZ-0001gt-9V; Sun, 16 Dec 2012 14:45:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkFTX-0001go-0j
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 14:45:55 +0000
Received: from [85.158.139.83:4875] by server-16.bemta-5.messagelabs.com id
	A1/D8-09208-2AEDDC05; Sun, 16 Dec 2012 14:45:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355669151!29525621!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27304 invoked from network); 16 Dec 2012 14:45:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 14:45:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,294,1355097600"; 
   d="scan'208";a="182458"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Dec 2012 14:45:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 16 Dec 2012 14:45:51 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkFTT-0006Bg-2d;
	Sun, 16 Dec 2012 14:45:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkFTT-00005L-1G;
	Sun, 16 Dec 2012 14:45:51 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14748-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Dec 2012 14:45:51 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14748: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14748 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14748/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 15:44:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 15:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkGNd-00023N-Qs; Sun, 16 Dec 2012 15:43:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joshsystem@gmail.com>) id 1TkGNc-00023I-RL
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 15:43:53 +0000
Received: from [85.158.139.83:55331] by server-1.bemta-5.messagelabs.com id
	4D/25-12813-83CEDC05; Sun, 16 Dec 2012 15:43:52 +0000
X-Env-Sender: joshsystem@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355672630!27839209!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10018 invoked from network); 16 Dec 2012 15:43:50 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 15:43:50 -0000
Received: by mail-ea0-f171.google.com with SMTP id n10so2029586eaa.30
	for <xen-devel@lists.xensource.com>;
	Sun, 16 Dec 2012 07:43:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=JiYkY4tKm9j5CMLeXKieGA8krq/EUUoa1/LBkLJTYyE=;
	b=lO7qPJrj2RBKKE4LR088WovfLZDNx8ysqlmWThQrEjtGaD+KqwhaO4UF3dJEYfEW0w
	lPnn9EII+Hw7PYaJcFuFpVIrxqADGXkJtPIbyhLeJfURcuuRyZfWGP36VIy3/ZyAr5MP
	wrQ2WrfJNra0UFU/AOtWPTDGCTdK+k7LcqvgCySan4DSosfhQGiupGX3hqVqWNEYpirS
	or+FCMsaC7CGXAf7p9sSeRD+1tXKJ4PLQgFN1Q8mdHb2l3D+VWTjrkzwfiZNDg3myHLF
	MZ3VQ0zi/teQa7m1hRMgb58dDnuT4hdGJRbsKFQZKm1cnd3aVir+aFKURf8Q71VOUnnO
	dwdQ==
MIME-Version: 1.0
Received: by 10.14.213.134 with SMTP id a6mr33044206eep.45.1355672630107; Sun,
	16 Dec 2012 07:43:50 -0800 (PST)
Received: by 10.14.132.140 with HTTP; Sun, 16 Dec 2012 07:43:49 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212141244260.17523@kaball.uk.xensource.com>
References: <CAOYkbagjXEun=QaNmrKpd69Ma0BqWpvbb_oJGzBg8V60FRkJPQ@mail.gmail.com>
	<alpine.DEB.2.02.1212141244260.17523@kaball.uk.xensource.com>
Date: Sun, 16 Dec 2012 23:43:49 +0800
Message-ID: <CAOYkbaiubLbm3e4kGTagHr7A0FSkwP5808VinLEza=fg4DhuhQ@mail.gmail.com>
From: Josh Zhao <joshsystem@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] About ARM branch questions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1000066618953256852=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1000066618953256852==
Content-Type: multipart/alternative; boundary=e89a8f923f3a8c9e1a04d0fa214b

--e89a8f923f3a8c9e1a04d0fa214b
Content-Type: text/plain; charset=ISO-8859-1

Thanks Stabellini. I saw the slides of New PVH Virtualisation mode for ARM
Cortex A15 and x86<http://blog.xen.org/index.php/2012/09/21/xensummit-sessions-new-pvh-virtualisation-mode-for-arm-cortex-a15arm-servers-and-x86/>,
but can't understand why there are no PVOPs and no shadow pagetables because of
nested paging in HW?

josh zhao


2012/12/14 Stefano Stabellini <stefano.stabellini@eu.citrix.com>

> On Fri, 14 Dec 2012, Josh Zhao wrote:
> > Hi, I have a new questions about the ARM-Xen. (1)   Following wiki in
> > http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions,  are
> this repos latest?
> >
> >     Xen:  http://xenbits.xen.org/hg/xen-unstable.hg
> >     Linux Dom0 and DomU: The arm-privcmd-for-3.8 branch of git://
> xenbits.xen.org/people/ianc/linux.git
>
> Yes, they are.
> However we have few outstanding patches, already sent to the list by me
> or Ian, but still unapplied.
>
>
> > (2)  Is the ARM PV interfaces for IO same as X86?
>
> I guess you are referring to the Linux PV frontend and backend drivers?
> Like netfront/netback and blkfront/blkback? If that is the case, then
> yes, we are using exactly the same frontend/backend driver pairs as in
> Xen x86.

--e89a8f923f3a8c9e1a04d0fa214b
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Thanks Stabellini. I saw the slides of=A0<a href=3D"http://blog.xen.org/ind=
ex.php/2012/09/21/xensummit-sessions-new-pvh-virtualisation-mode-for-arm-co=
rtex-a15arm-servers-and-x86/" style=3D"color:rgb(102,102,102);font-family:V=
erdana,Arial,Helvetica,sans-serif;font-size:12px;line-height:16px">New PVH =
Virtualisation mode for ARM Cortex A15 and x8</a>, =A0but can&#39;t underst=
and =A0why no PVOPs and no shadow pagetables bacuase of nasted paging in HW=
?<div>
<br></div><div>josh zhao</div><div class=3D"gmail_extra"><br><br><div class=
=3D"gmail_quote">2012/12/14 Stefano Stabellini <span dir=3D"ltr">&lt;<a hre=
f=3D"mailto:stefano.stabellini@eu.citrix.com" target=3D"_blank">stefano.sta=
bellini@eu.citrix.com</a>&gt;</span><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"im">On Fri, 14 Dec 2012, Josh =
Zhao wrote:<br>
&gt; Hi, I have a new questions about the ARM-Xen.=A0(1) =A0 Following wiki=
 in<br>
&gt; <a href=3D"http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Exte=
nsions" target=3D"_blank">http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualiz=
ation_Extensions</a>, =A0are this repos latest?<br>
&gt;<br>
&gt; =A0 =A0 Xen: =A0<a href=3D"http://xenbits.xen.org/hg/xen-unstable.hg" =
target=3D"_blank">http://xenbits.xen.org/hg/xen-unstable.hg</a><br>
&gt; =A0 =A0 Linux Dom0 and DomU: The=A0arm-privcmd-for-3.8=A0branch of=A0g=
it://<a href=3D"http://xenbits.xen.org/people/ianc/linux.git" target=3D"_bl=
ank">xenbits.xen.org/people/ianc/linux.git</a>=A0<br>
<br>
</div>Yes, they are.<br>
However we have few outstanding patches, already sent to the list by me<br>
or Ian, but still unapplied.<br>
<div class=3D"im"><br>
<br>
&gt; (2) =A0Is the ARM PV interfaces for IO same as X86?<br>
<br>
</div>I guess you are referring to the Linux PV frontend and backend driver=
s?<br>
Like netfront/netback and blkfront/blkback? If that is the case, then<br>
yes, we are using exactly the same frontend/backend driver pairs as in<br>
Xen x86.</blockquote></div><br></div>

--e89a8f923f3a8c9e1a04d0fa214b--


--===============1000066618953256852==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1000066618953256852==--


From xen-devel-bounces@lists.xen.org Sun Dec 16 17:39:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 17:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkIB0-00031b-Ir; Sun, 16 Dec 2012 17:38:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkIAz-00031W-23
	for xen-devel@lists.xen.org; Sun, 16 Dec 2012 17:38:57 +0000
Received: from [193.109.254.147:39431] by server-16.bemta-14.messagelabs.com
	id 67/79-18932-0370EC05; Sun, 16 Dec 2012 17:38:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355679512!10188982!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTIxMDYz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4019 invoked from network); 16 Dec 2012 17:38:34 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Dec 2012 17:38:34 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBGHcQ5w017720
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 16 Dec 2012 17:38:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBGHcQH3015246
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 16 Dec 2012 17:38:26 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBGHcPcJ023320; Sun, 16 Dec 2012 11:38:25 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 16 Dec 2012 09:38:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 74C821C05A8; Sun, 16 Dec 2012 12:38:24 -0500 (EST)
Date: Sun, 16 Dec 2012 12:38:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20121216173824.GA4518@phenom.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1995613404.20121214165557@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: linux-kernel@vger.kernel.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
 dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
> Hi Konrad,
> 
> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.

Yeah, saw it over the Dec 11->Dec 12 merges and was out on
vacation during that time (just got back).

Did you by any chance try to do a git bisect to narrow down
which merge it was?

Thanks!
> The boot stalls:
> 
> [    0.000000] ACPI: PM-Timer IO Port: 0x808
> [    0.000000] ACPI: Local APIC address 0xfee00000
> [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
> [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
> [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
> [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
> [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
> [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
> [    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
> [    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
> [    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
> [   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
> [   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
> [   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
> [   64.598692] sending NMI to all CPUs:
> [   64.598716] xen: vector 0x2 is not implemented
> 
> 
> Perhaps an interesting line is the incomplete (no end of range, and it stalls there some time before the kernel reports the stall itself:
> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
> 
> 
> The exact seem config with 3.7.0 as kernel works fine.
> Complete serial log is attached.
> 
> --
> 
> Sander
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 19:43:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 19:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkK6s-00043l-DY; Sun, 16 Dec 2012 19:42:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TkK6q-00043g-Lh
	for xen-devel@lists.xen.org; Sun, 16 Dec 2012 19:42:48 +0000
Received: from [85.158.139.211:43421] by server-10.bemta-5.messagelabs.com id
	C1/75-13383-7342EC05; Sun, 16 Dec 2012 19:42:47 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355686966!19249775!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17106 invoked from network); 16 Dec 2012 19:42:46 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-2.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	16 Dec 2012 19:42:46 -0000
Received: from 76-69-ftth.on.nl ([88.159.69.76]:61824 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TkKAy-0003YY-0c; Sun, 16 Dec 2012 20:47:04 +0100
Date: Sun, 16 Dec 2012 20:42:44 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <315845667.20121216204244@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20121216173824.GA4518@phenom.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: linux-kernel@vger.kernel.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
	dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 16, 2012, 6:38:24 PM, you wrote:

> On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.

> Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> vacation during that time (just got back).

> Did you by any chance try to do a git bisect to narrow down
> which merge it was?

Hi Konrad,

Nope haven't had the time, I only tried resetting to commit 189251705649bdfdf5e5850eb178f8cbfdac5480 as a "hunch"(just before a lot of x86 and rcu commits), but the result didn't boot ..

--
Sander

> Thanks!
>> The boot stalls:
>> 
>> [    0.000000] ACPI: PM-Timer IO Port: 0x808
>> [    0.000000] ACPI: Local APIC address 0xfee00000
>> [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
>> [    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
>> [    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
>> [    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> [   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
>> [   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
>> [   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
>> [   64.598692] sending NMI to all CPUs:
>> [   64.598716] xen: vector 0x2 is not implemented
>> 
>> 
>> Perhaps an interesting line is the incomplete (no end of range, and it stalls there some time before the kernel reports the stall itself:
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> 
>> 
>> The exact seem config with 3.7.0 as kernel works fine.
>> Complete serial log is attached.
>> 
>> --
>> 
>> Sander
>> 
>> 





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


> Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> vacation during that time (just got back).

> Did you by any chance try to do a git bisect to narrow down
> which merge it was?

Hi Konrad,

Nope, haven't had the time. I only tried resetting to commit 189251705649bdfdf5e5850eb178f8cbfdac5480 as a "hunch" (just before a lot of x86 and rcu commits), but the result didn't boot.

--
Sander

> Thanks!
>> The boot stalls:
>> 
>> [    0.000000] ACPI: PM-Timer IO Port: 0x808
>> [    0.000000] ACPI: Local APIC address 0xfee00000
>> [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
>> [    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
>> [    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
>> [    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> [   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
>> [   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
>> [   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
>> [   64.598692] sending NMI to all CPUs:
>> [   64.598716] xen: vector 0x2 is not implemented
>> 
>> 
>> Perhaps an interesting line is the incomplete one (no end of range), and it stalls there for some time before the kernel reports the stall itself:
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> 
>> 
>> The exact same config with 3.7.0 as the kernel works fine.
>> Complete serial log is attached.
>> 
>> --
>> 
>> Sander
>> 
>> 





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 20:52:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 20:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkLBz-0004v3-Lp; Sun, 16 Dec 2012 20:52:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkLBy-0004uy-Si
	for xen-devel@lists.xensource.com; Sun, 16 Dec 2012 20:52:11 +0000
Received: from [85.158.143.99:56543] by server-2.bemta-4.messagelabs.com id
	D0/EA-30861-A743EC05; Sun, 16 Dec 2012 20:52:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355691129!28746552!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31834 invoked from network); 16 Dec 2012 20:52:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 20:52:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,294,1355097600"; 
   d="scan'208";a="184875"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Dec 2012 20:52:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 16 Dec 2012 20:52:08 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkLBw-00087v-3M;
	Sun, 16 Dec 2012 20:52:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkLBv-0001bo-SR;
	Sun, 16 Dec 2012 20:52:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14756-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Dec 2012 20:52:07 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14756: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14756 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14756/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-i386-i386-xl-qemut-winxpsp3  7 windows-install         fail pass in 14748

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop        fail in 14748 never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 16 22:00:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Dec 2012 22:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkMFb-0005or-5U; Sun, 16 Dec 2012 21:59:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sdiris@gmail.com>) id 1TkMFZ-0005om-PK
	for xen-devel@lists.xen.org; Sun, 16 Dec 2012 21:59:57 +0000
Received: from [193.109.254.147:7351] by server-13.bemta-14.messagelabs.com id
	2A/2C-01725-D544EC05; Sun, 16 Dec 2012 21:59:57 +0000
X-Env-Sender: sdiris@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355695196!2947693!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=2.1 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,SUB_HELLO,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18534 invoked from network); 16 Dec 2012 21:59:56 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 21:59:56 -0000
Received: by mail-wg0-f53.google.com with SMTP id ei8so2292925wgb.32
	for <xen-devel@lists.xen.org>; Sun, 16 Dec 2012 13:59:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:subject:date:message-id:mime-version:content-type:x-mailer
	:thread-index:content-language;
	bh=zmcjH3virAAVZMNOhyuvwjt2kItv0y3kxe8R4v+y52E=;
	b=YkZI7+Ejl3YUZQ8GGPm0Ry3tZw8ipNlxp60dGZdV7Jpl/p3pfs7iEnmT64BB2rC8T2
	WhyCsmJi8pHH/4GvFv4N3CfktLRCVchW3PaLYd0bYamqc5eLE7MKcoLdCIMw+9huQsdV
	ovALUtX1mB4inr2kkXL6BbGsUBLZKSM0bIGKRNMy/7GybDNzR6JvtrpE91/WQxlPpm3B
	qo77aFslhdmI8NTWFArW/vI73ocVwVA4+qrsoPf86PzgcvPwWeGhyDivu66dHlVzVYdR
	q9DiX7wOMENX1UAMEc3IJ512UuMANVx0xGm69b+1Oh0uFfg50t4FSen4PHMdY2ibqn+L
	/8Ww==
Received: by 10.180.107.163 with SMTP id hd3mr1498939wib.4.1355695196205;
	Sun, 16 Dec 2012 13:59:56 -0800 (PST)
Received: from yimingwin7 (c188.al.cl.cam.ac.uk. [128.232.110.188])
	by mx.google.com with ESMTPS id dm3sm8499034wib.9.2012.12.16.13.59.53
	(version=TLSv1/SSLv3 cipher=OTHER);
	Sun, 16 Dec 2012 13:59:54 -0800 (PST)
From: "Yiming Zhang" <sdiris@gmail.com>
To: <xen-devel@lists.xen.org>
Date: Sun, 16 Dec 2012 21:59:53 -0000
Message-ID: <000001cddbd8$ae48adf0$0ada09d0$@gmail.com>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 14.0
Thread-Index: Ac3b16RBrSj/xrCjSOW+A4HEMnjXsw==
Content-Language: zh-cn
Subject: [Xen-devel] Hello from Yiming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5002421521955153098=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

--===============5002421521955153098==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_0001_01CDDBD8.AE494A30"
Content-Language: zh-cn

This is a multipart message in MIME format.

------=_NextPart_000_0001_01CDDBD8.AE494A30
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hello everyone,

 

I am Yiming Zhang, an assistant professor from NUDT, China, and now I am a
visiting researcher at Cambridge University, UK. 

 

I began to study Xen this month in order to run applications on Xen WITHOUT
traditional OS support, like what the Mirage project is focusing on. If
someone is working on similar goals, I'd love to discuss with you. Thanks!

 

Regards,

Yiming


------=_NextPart_000_0001_01CDDBD8.AE494A30
Content-Type: text/html;
	charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" =
xmlns:o=3D"urn:schemas-microsoft-com:office:office" =
xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" =
xmlns=3D"http://www.w3.org/TR/REC-html40"><head><meta =
http-equiv=3DContent-Type content=3D"text/html; =
charset=3Dus-ascii"><meta name=3DGenerator content=3D"Microsoft Word 14 =
(filtered medium)"><style><!--
/* Font Definitions */
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0cm;
	margin-bottom:.0001pt;
	text-align:justify;
	text-justify:inter-ideograph;
	font-size:10.5pt;
	font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-family:"Calibri","sans-serif";}
/* Page Definitions */
@page WordSection1
	{size:612.0pt 792.0pt;
	margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]--></head><body lang=3DZH-CN link=3Dblue =
vlink=3Dpurple style=3D'text-justify-trim:punctuation'><div =
class=3DWordSection1><p class=3DMsoNormal><span lang=3DEN-US>Hello =
everyone,<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US><o:p>&nbsp;</o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>I am Yiming Zhang, an assistant professor from NUDT, China, =
and now I am a visiting researcher at Cambridge University, UK. =
<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US><o:p>&nbsp;</o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>I began to study Xen this month in order to run =
applications on Xen WITHOUT traditional OS support, like what the Mirage =
project is focusing on. If someone is working on similar goals, =
I&#8217;d love to discuss with you. Thanks!<o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US><o:p>&nbsp;</o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US>Regards,<o:p></o:p></span></p><p =
class=3DMsoNormal><span =
lang=3DEN-US>Yiming<o:p></o:p></span></p></div></body></html>
------=_NextPart_000_0001_01CDDBD8.AE494A30--



--===============5002421521955153098==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5002421521955153098==--



From xen-devel-bounces@lists.xen.org Mon Dec 17 01:41:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 01:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkPgy-0002sw-Sb; Mon, 17 Dec 2012 01:40:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkPgx-0002sr-FO
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 01:40:27 +0000
Received: from [85.158.139.211:50469] by server-1.bemta-5.messagelabs.com id
	05/C8-12813-A087EC05; Mon, 17 Dec 2012 01:40:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355708425!20629585!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28454 invoked from network); 17 Dec 2012 01:40:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 01:40:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,297,1355097600"; 
   d="scan'208";a="187100"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 01:40:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 01:40:23 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkPgt-0001CA-RL;
	Mon, 17 Dec 2012 01:40:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkPgt-0006uD-Lc;
	Mon, 17 Dec 2012 01:40:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <E1TkPgt-0006uD-Lc@woking.cam.xci-test.com>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 01:40:23 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com, keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing bisection] complete
	test-amd64-i386-xl-credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

branch xen-4.1-testing
xen branch xen-4.1-testing
job test-amd64-i386-xl-credit2
test leak-check/check

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104


  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      


For bisection revision-tuple graph see:
   http://www.chiark.greenend.org.uk/~xensrcts/results/bisect.xen-4.1-testing.test-amd64-i386-xl-credit2.leak-check--check.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Searching for failure / basis pass:
 14756 fail [host=woodlouse] / 14679 [host=bush-cricket] 14677 [host=potato-beetle] 14675 [host=field-cricket] 14662 ok.
Failure / basis pass flights: 14756 / 14662
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
Tree: xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
Latest a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Basis pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#a938a246d34912423c560f475ccf1ce0c71d9d00-a938a246d34912423c560f475ccf1ce0c71d9d00 git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git#b36c42985575cd6d761d39e5770e57a1f52832ae-b36c42985575cd6d761d39e5770e57a1f52832ae http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg#309ff3ad9dcc-93e17b0cd035
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found
Loaded 1001 nodes in revision graph
Searching for test results:
 14684 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14677 [host=potato-beetle]
 14662 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14675 [host=field-cricket]
 14679 [host=bush-cricket]
 14689 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14699 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14706 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14728 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14757 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14758 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14759 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14760 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14715 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14756 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14739 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14761 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14747 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14749 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14763 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14750 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14751 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14764 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14752 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae e77f613c133a
 14753 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14765 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14754 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14748 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14755 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14766 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14767 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Searching for interesting versions
 Result found: flight 14662 (pass), for basis pass
 Result found: flight 14689 (fail), for basis failure
 Repro found: flight 14750 (pass), for basis pass
 Repro found: flight 14751 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14754 (pass), for last pass
 Result found: flight 14755 (fail), for first failure
 Repro found: flight 14760 (pass), for last pass
 Repro found: flight 14761 (fail), for first failure
 Repro found: flight 14764 (pass), for last pass
 Repro found: flight 14767 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-amd64-i386-xl-credit2.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14767: tolerable ALL FAIL

flight 14767 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14767/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-credit2   18 leak-check/check        fail baseline untested


jobs:
 test-amd64-i386-xl-credit2                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 14760 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14715 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14756 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14739 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14761 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14747 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14749 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14763 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14750 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 309ff3ad9dcc
 14751 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14764 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14752 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae e77f613c133a
 14753 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae a866cc5b8235
 14765 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14754 pass a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
 14748 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14755 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14766 blocked a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
 14767 fail a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 93e17b0cd035
Searching for interesting versions
 Result found: flight 14662 (pass), for basis pass
 Result found: flight 14689 (fail), for basis failure
 Repro found: flight 14750 (pass), for basis pass
 Repro found: flight 14751 (fail), for basis failure
 0 revisions at a938a246d34912423c560f475ccf1ce0c71d9d00 b36c42985575cd6d761d39e5770e57a1f52832ae 255a0b6a8104
No revisions left to test, checking graph state.
 Result found: flight 14754 (pass), for last pass
 Result found: flight 14755 (fail), for first failure
 Repro found: flight 14760 (pass), for last pass
 Repro found: flight 14761 (fail), for first failure
 Repro found: flight 14764 (pass), for last pass
 Repro found: flight 14767 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
  Bug introduced:  93e17b0cd035
  Bug not present: 255a0b6a8104

pulling from ssh://xen@xenbits.xen.org/HG/staging/xen-4.1-testing.hg
searching for changes
no changes found

  changeset:   23428:93e17b0cd035
  tag:         tip
  user:        Greg Wettstein <greg@enjellic.com>
  date:        Thu Dec 13 14:35:58 2012 +0000
      
      libxl: avoid blktap2 deadlock on cleanup
      
      Establishes correct cleanup behavior for blktap devices.  This patch
      implements the release of the backend device before calling for
      the destruction of the userspace component of the blktap device.
      
      Without this patch the kernel xen-blkback driver deadlocks with
      the blktap2 user control plane until the IPC channel is terminated by the
      timeout on the select() call.  This results in a noticeable delay
      in the termination of the guest and causes the blktap minor
      number which had been allocated to be orphaned.
      
      Signed-off-by: Greg Wettstein <greg@enjellic.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
      Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
      
      

Revision graph left in /home/xc_osstest/results/bisect.xen-4.1-testing.test-amd64-i386-xl-credit2.leak-check--check.{dot,ps,png,html}.
----------------------------------------
14767: tolerable ALL FAIL

flight 14767 xen-4.1-testing real-bisect [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14767/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-credit2   18 leak-check/check        fail baseline untested


jobs:
 test-amd64-i386-xl-credit2                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 02:08:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 02:08:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkQ7F-0003Sr-Hr; Mon, 17 Dec 2012 02:07:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chenxiaolong@cxl.epac.to>) id 1TkQ7E-0003Sm-O6
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 02:07:36 +0000
Received: from [85.158.143.35:38106] by server-1.bemta-4.messagelabs.com id
	60/8E-28401-76E7EC05; Mon, 17 Dec 2012 02:07:35 +0000
X-Env-Sender: chenxiaolong@cxl.epac.to
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355710054!10819229!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=2.0 required=7.0 tests=RATWARE_GECKO_BUILD, RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2365 invoked from network); 17 Dec 2012 02:07:35 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 02:07:35 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so8331236iej.32
	for <xen-devel@lists.xen.org>; Sun, 16 Dec 2012 18:07:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:subject
	:content-type:content-transfer-encoding:x-gm-message-state;
	bh=g3Nxn9+oyMFqORCBk7RGvRw4vTz2b/u4lD4scpuerKM=;
	b=CN+YUqIJ0UFhaQoYKPO6sRGrMejZeJOVPI5Cny49VuOKhEQ/IXUCYWdRNNr1fm/ybb
	H61lNF+jAIKWVl4nUh2+aT+b2IdY+cxG6zZQNkhGilZbPb2UUudHuQGlZ32seEoPHDF4
	RXirfT2goOb/R43FKnE2RMgiSHL21Ce+KrgSfM2lNLZPI+PK114/TmtbxvdFdKg01J9E
	V0MgUEKAxMxKEqJH9KFBW2f+VXP+3aKZqVMUF2RblWD32rLEuZTHu7xXhFu4SqnzhBXM
	QYDKmOdL+ESe1o5kjCjrbFNtcElt8rI9oBC6Fl7Fogjakzyc/Gedcab42aArEtm6hYWJ
	Zk/w==
Received: by 10.43.114.135 with SMTP id fa7mr9982590icc.21.1355710053542;
	Sun, 16 Dec 2012 18:07:33 -0800 (PST)
Received: from [10.0.0.145] (cpe-76-190-150-115.neo.res.rr.com.
	[76.190.150.115])
	by mx.google.com with ESMTPS id 10sm4648953ign.5.2012.12.16.18.07.31
	(version=SSLv3 cipher=OTHER); Sun, 16 Dec 2012 18:07:32 -0800 (PST)
Message-ID: <50CE7E62.2020202@cxl.epac.to>
Date: Sun, 16 Dec 2012 21:07:30 -0500
From: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Gm-Message-State: ALoCoQny0HFFEoVwA5kS75KQYBeMW759kXoVf8kGH4L/lFpk8WFicWq3Yv+ZE9UOR1qpI5ibQ5Z/
Subject: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Xen developers,

I have been having a problem where only one CPU core is being detected.
I'm using a Lenovo W520 with UEFI firmware and an Intel Core i7
2720QM (4 physical cores, 8 threads). When I boot, I see
"Dom0 has maximum 1 VCPUs", or something similar, scroll by.

In addition, only 10 GB out of my 12 GB of memory is recognized and
ACPI is not working properly (CPU frequency scaling and battery info
are not reported).

This problem has been reported before (for the same laptop too) here:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
unfortunately, the person who sent the message didn't reply with more
information.

Here are the outputs of dmesg with and without Xen:

Without Xen: http://paste.kde.org/626222/raw/
With Xen: http://paste.kde.org/626228/raw/

and some information from xl:

xl vcpu-list: http://paste.kde.org/626234/raw/
xl info: http://paste.kde.org/626240/raw/
xl dmesg: http://paste.kde.org/626246/raw/

This is my first time venturing into Xen territory, so please let me
know if there's any other information needed.

I'm not sure if it helps, but I'd also like to point out that I had a
similar problem with Linux back around July 20, 2011, when I got my
computer. I could work around the issue by booting with "noapic". I'm
not sure which kernel version fixed the issue as I booted with that
option for quite a while. The kernel version, at the time, was 3.0.

I am glad to help in any way I can. I'm excited to get Xen working!
I'm comfortable with compiling the kernel or Xen if necessary.

Thanks in advance!

Xiao-Long Chen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 04:24:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 04:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkSFL-0004XU-JD; Mon, 17 Dec 2012 04:24:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkSFK-0004XP-9R
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 04:24:06 +0000
Received: from [85.158.138.51:63533] by server-10.bemta-3.messagelabs.com id
	88/27-07616-56E9EC05; Mon, 17 Dec 2012 04:24:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355718244!27362461!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19060 invoked from network); 17 Dec 2012 04:24:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 04:24:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,298,1355097600"; 
   d="scan'208";a="188314"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 04:24:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 04:24:04 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkSFI-00025A-5i;
	Mon, 17 Dec 2012 04:24:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkSFH-0001Wg-NP;
	Mon, 17 Dec 2012 04:24:03 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14762-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 04:24:03 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14762: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14762 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14762/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                 fail pass in 14756
 test-i386-i386-xl-qemut-winxpsp3 7 windows-install fail in 14756 pass in 14762

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf     11 guest-localmigrate    fail in 14756 like 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14762 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14762/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                 fail pass in 14756
 test-i386-i386-xl-qemut-winxpsp3 7 windows-install fail in 14756 pass in 14762

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-amd64-xl-sedf     11 guest-localmigrate    fail in 14756 like 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 07:48:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 07:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkVQm-00062U-0w; Mon, 17 Dec 2012 07:48:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkVQk-00062M-8j
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 07:48:06 +0000
Received: from [85.158.138.51:8502] by server-4.bemta-3.messagelabs.com id
	A2/39-31835-53ECEC05; Mon, 17 Dec 2012 07:48:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355730484!21037720!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31753 invoked from network); 17 Dec 2012 07:48:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 07:48:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,299,1355097600"; 
   d="scan'208";a="190419"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 07:48:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 07:48:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkVQh-0003BQ-Pu;
	Mon, 17 Dec 2012 07:48:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkVQh-0005Ln-JE;
	Mon, 17 Dec 2012 07:48:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14771-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 07:48:03 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14771: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14771 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14771/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14743
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f50aab21f9f2
baseline version:
 xen                  f50aab21f9f2

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 08:35:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 08:35:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkWAU-0006lb-6I; Mon, 17 Dec 2012 08:35:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkWAT-0006lW-Cj
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 08:35:21 +0000
Received: from [85.158.139.211:56442] by server-13.bemta-5.messagelabs.com id
	12/03-10716-849DEC05; Mon, 17 Dec 2012 08:35:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355733319!16540187!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25660 invoked from network); 17 Dec 2012 08:35:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 08:35:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 08:35:18 +0000
Message-Id: <50CEE75302000078000B0A99@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 08:35:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
	<50CB8318.6050807@eu.citrix.com>
In-Reply-To: <50CB8318.6050807@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"Keir\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the ideal CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.12.12 at 20:50, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 12/12/12 10:04, Jan Beulich wrote:
>>>>> On 12.12.12 at 03:52, Dario Faggioli <dario.faggioli@citrix.com> wrote:
>>> --- a/xen/common/sched_credit.c
>>> +++ b/xen/common/sched_credit.c
>>> @@ -59,6 +59,8 @@
>>>   #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
>>>   #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
>>>   #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
>>> +/* Is the first element of _cpu's runq its idle vcpu? */
>>> +#define IS_RUNQ_IDLE(_cpu)  (is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
>>>   
>>>   
>>>   /*
>>> @@ -479,9 +481,14 @@ static int
>>>        * distinct cores first and guarantees we don't do something stupid
>>>        * like run two VCPUs on co-hyperthreads while there are idle cores
>>>        * or sockets.
>>> +     *
>>> +     * Notice that, when computing the "idleness" of cpu, we may want to
>>> +     * discount vc. That is, iff vc is the currently running and the only
>>> +     * runnable vcpu on cpu, we add cpu to the idlers.
>>>        */
>>>       cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
>>> -    cpumask_set_cpu(cpu, &idlers);
>>> +    if ( current_on_cpu(cpu) == vc && IS_RUNQ_IDLE(cpu) )
>>> +        cpumask_set_cpu(cpu, &idlers);
>>>       cpumask_and(&cpus, &cpus, &idlers);
>>>       cpumask_clear_cpu(cpu, &cpus);
>>>   
>>> @@ -489,7 +496,7 @@ static int
>>>       {
>>>           cpumask_t cpu_idlers;
>>>           cpumask_t nxt_idlers;
>>> -        int nxt, weight_cpu, weight_nxt;
>>> +        int nxt, nr_idlers_cpu, nr_idlers_nxt;
>>>           int migrate_factor;
>>>   
>>>           nxt = cpumask_cycle(cpu, &cpus);
>>> @@ -513,12 +520,12 @@ static int
>>>               cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
>>>           }
>>>   
>>> -        weight_cpu = cpumask_weight(&cpu_idlers);
>>> -        weight_nxt = cpumask_weight(&nxt_idlers);
>>> +        nr_idlers_cpu = cpumask_weight(&cpu_idlers);
>>> +        nr_idlers_nxt = cpumask_weight(&nxt_idlers);
>>>           /* smt_power_savings: consolidate work rather than spreading it */
>>>           if ( sched_smt_power_savings ?
>>> -             weight_cpu > weight_nxt :
>>> -             weight_cpu * migrate_factor < weight_nxt )
>>> +             nr_idlers_cpu > nr_idlers_nxt :
>>> +             nr_idlers_cpu * migrate_factor < nr_idlers_nxt )
>>>           {
>>>               cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
>>>               spc = CSCHED_PCPU(nxt);
>> Despite you mentioning this in the description, these last two hunks
>> are, afaict, only renaming variables (and that's even debatable, as
>> the current names aren't really misleading imo), and hence I don't
>> think belong in a patch that clearly has the potential for causing
>> (performance) regressions.
>>
>> That said - I don't think it will (and even more, I'm agreeable to the
>> change done).
>>
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -396,6 +396,9 @@ extern struct vcpu *idle_vcpu[NR_CPUS];
>>>   #define is_idle_domain(d) ((d)->domain_id == DOMID_IDLE)
>>>   #define is_idle_vcpu(v)   (is_idle_domain((v)->domain))
>>>   
>>> +#define current_on_cpu(_c) \
>>> +  ( (per_cpu(schedule_data, _c).curr) )
>>> +
>> This, imo, really belongs into sched-if.h.
> 
> Hmm, it looks like there are a number of things that could live in 
> either sched-if.h or sched.h; but I think this one probably most closely 
> links with things like vcpu_is_runnable() and cpu_is_haltable(), both of 
> which are in sched.h; so sched.h is where I'd put it.

Any use of schedule_data, the type of which is declared in
sched-if.h, should be in sched-if.h - someone only including
sched.h can't make use of it anyway (and it's intended to be
used by scheduler code, i.e. shouldn't be visible to other
code).

>> Plus - what's the point of double parentheses, when in fact none
>> at all would be needed?
>>
>> And finally, why "_c" and not just "c"?
> 
> I think the underscore is pretty standard in macros.

It's bad practice imo; I have always understood this as a
questionable attempt by people to avoid name clashes (which
is understandable only for variables declared locally inside a
macro definition).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 08:43:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 08:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkWIH-0006xa-HF; Mon, 17 Dec 2012 08:43:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkWIG-0006xV-09
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 08:43:24 +0000
Received: from [85.158.138.51:50520] by server-12.bemta-3.messagelabs.com id
	F1/BF-27559-B2BDEC05; Mon, 17 Dec 2012 08:43:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1355733802!29176190!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29730 invoked from network); 17 Dec 2012 08:43:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 08:43:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 08:43:21 +0000
Message-Id: <50CEE93602000078000B0AAE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 08:43:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yunhong Jiang" <yunhong.jiang@intel.com>
References: <50C2245402000078000AF0C3@nat28.tlf.novell.com>
	<DDCAE26804250545B9934A2056554AA0371A49@SHSMSX101.ccr.corp.intel.com>
	<50C5F60602000078000AF672@nat28.tlf.novell.com>
	<DDCAE26804250545B9934A2056554AA037B6B0@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DDCAE26804250545B9934A2056554AA037B6B0@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] frame table setup for memory hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.12.12 at 11:07, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
> My colleagues helped me to test and it seems to be working.

Thanks!

> A minor question is, with your patch, will we create the full frame table for 
> the whole potential memory range? That's a bit inefficient IMHO, because there 
> will be a large hole between the pre-populated memory and the newly added 
> one.

No, we won't. The only thing we will fully populate is the super page
frame table (if enabled in the first place), which is better than not
populating its ranges covering hotplug memory at all (as was the
case before the patch); I'm certainly not against making this more
efficient, but that would imo need to be done by someone being
able to actually test the code paths needing modification.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Dec 17 08:59:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 08:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkWX2-00079E-2n; Mon, 17 Dec 2012 08:58:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TkWX0-000796-7x
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 08:58:38 +0000
Received: from [85.158.143.99:51088] by server-3.bemta-4.messagelabs.com id
	AC/E4-18211-DBEDEC05; Mon, 17 Dec 2012 08:58:37 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355734707!18570707!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMjI0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29550 invoked from network); 17 Dec 2012 08:58:27 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-216.messagelabs.com with SMTP;
	17 Dec 2012 08:58:27 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 17 Dec 2012 00:56:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,299,1355126400"; d="scan'208";a="234989251"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 17 Dec 2012 00:57:12 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 17 Dec 2012 00:57:12 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 17 Dec 2012 16:57:09 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
	operations neutral
Thread-Index: AQHN2UuOO+dwAcebJU6gFcqVbeHQ/JgctauQ
Date: Mon, 17 Dec 2012 08:57:08 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033B3CC8@SHSMSX101.ccr.corp.intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
	<20121213160417.GK75286@ocelot.phlegethon.org>
In-Reply-To: <20121213160417.GK75286@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"keir@xen.org" <keir@xen.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "Dong, 
	Eddie" <eddie.dong@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>,
	"Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
 operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Friday, December 14, 2012 12:04 AM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xensource.com; Dong, Eddie; keir@xen.org; Nakajima,
> Jun; JBeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
> operations neutral
> 
> At 01:57 +0800 on 11 Dec (1355191037), xiantao.zhang@intel.com wrote:
> > From: Zhang Xiantao <xiantao.zhang@intel.com>
> >
> > Share the current EPT logic with nested EPT case, so make the related
> > data structure or operations neutral to common EPT and nested EPT.
> >
> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Since the struct ept_data is only 16 bytes long, why not just embed it in the
> struct p2m_domain, as
> 
> >          mm_lock_t        lock;         /* Locking of private pod structs,   *
> >                                          * not relying on the p2m lock.      */
> >      } pod;
> > +    union {
> > +        struct ept_data ept;
> > +        /* NPT equivalent could go here if needed */
> > +    };
> >  };

Hi Tim,
Thanks for your review!  If we change it like this, p2m.h has to include asm/hvm/vmx/vmcs.h; is that acceptable?
Xiantao


> That would tidy up the alloc/free stuff a fair bit, though you'd still need it for
> the cpumask, I guess.
> 
> It would be nice to wrap the alloc/free functions up in the usual way so we
> don't get ept-specific functions with arch-independent names.
> 
> Otherwise that looks fine.
> 
> Cheers,
> 
> Tim.
> 
> > ---
> >  xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
> >  xen/arch/x86/hvm/vmx/vmx.c         |   39 +++++++++------
> >  xen/arch/x86/mm/p2m-ept.c          |   96
> ++++++++++++++++++++++++++++--------
> >  xen/arch/x86/mm/p2m.c              |   16 +++++-
> >  xen/include/asm-x86/hvm/vmx/vmcs.h |   30 +++++++----
> >  xen/include/asm-x86/hvm/vmx/vmx.h  |    6 ++-
> >  xen/include/asm-x86/p2m.h          |    1 +
> >  7 files changed, 137 insertions(+), 53 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmcs.c
> b/xen/arch/x86/hvm/vmx/vmcs.c
> > index 9adc7a4..b9ebdfe 100644
> > --- a/xen/arch/x86/hvm/vmx/vmcs.c
> > +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> > @@ -942,7 +942,7 @@ static int construct_vmcs(struct vcpu *v)
> >      }
> >
> >      if ( paging_mode_hap(d) )
> > -        __vmwrite(EPT_POINTER, d-
> >arch.hvm_domain.vmx.ept_control.eptp);
> > +        __vmwrite(EPT_POINTER,
> > + d->arch.hvm_domain.vmx.ept.ept_ctl.eptp);
> >
> >      if ( cpu_has_vmx_pat && paging_mode_hap(d) )
> >      {
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c
> b/xen/arch/x86/hvm/vmx/vmx.c
> > index c67ac59..06455bf 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -79,22 +79,23 @@ static void __ept_sync_domain(void *info);  static
> > int vmx_domain_initialise(struct domain *d)  {
> >      int rc;
> > +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
> >
> >      /* Set the memory type used when accessing EPT paging structures. */
> > -    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
> > +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
> >
> >      /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
> > -    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
> > +    ept->ept_ctl.ept_wl = 3;
> >
> > -    d->arch.hvm_domain.vmx.ept_control.asr  =
> > +    ept->ept_ctl.asr  =
> >          pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
> >
> > -    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
> > +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
> >          return -ENOMEM;
> >
> >      if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
> >      {
> > -        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> > +        free_cpumask_var(ept->ept_synced);
> >          return rc;
> >      }
> >
> > @@ -103,9 +104,10 @@ static int vmx_domain_initialise(struct domain
> > *d)
> >
> >  static void vmx_domain_destroy(struct domain *d)  {
> > +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
> >      if ( paging_mode_hap(d) )
> > -        on_each_cpu(__ept_sync_domain, d, 1);
> > -    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> > +        on_each_cpu(__ept_sync_domain, p2m_get_hostp2m(d), 1);
> > +    free_cpumask_var(ept->ept_synced);
> >      vmx_free_vlapic_mapping(d);
> >  }
> >
> > @@ -641,6 +643,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)  {
> >      struct domain *d = v->domain;
> >      unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
> > +    struct ept_data *ept_data = p2m_get_hostp2m(d)->hap_data;
> >
> >      /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
> >      if ( old_cr4 != new_cr4 )
> > @@ -650,10 +653,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
> >      {
> >          unsigned int cpu = smp_processor_id();
> >          /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
> > -        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced)
> &&
> > +        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
> >               !cpumask_test_and_set_cpu(cpu,
> > -                                       d->arch.hvm_domain.vmx.ept_synced) )
> > -            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> > +                                       ept_get_synced_mask(ept_data)) )
> > +            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data),
> > + 0);
> >      }
> >
> >      vmx_restore_guest_msrs(v);
> > @@ -1218,12 +1221,16 @@ static void vmx_update_guest_efer(struct vcpu
> > *v)
> >
> >  static void __ept_sync_domain(void *info)  {
> > -    struct domain *d = info;
> > -    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> > +    struct p2m_domain *p2m = info;
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +
> > +    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
> >  }
> >
> > -void ept_sync_domain(struct domain *d)
> > +void ept_sync_domain(struct p2m_domain *p2m)
> >  {
> > +    struct domain *d = p2m->domain;
> > +    struct ept_data *ept_data = p2m->hap_data;
> >      /* Only if using EPT and this domain has some VCPUs to dirty. */
> >      if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
> >          return;
> > @@ -1236,11 +1243,11 @@ void ept_sync_domain(struct domain *d)
> >       * the ept_synced mask before on_selected_cpus() reads it, resulting in
> >       * unnecessary extra flushes, to avoid allocating a cpumask_t on the
> stack.
> >       */
> > -    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
> > +    cpumask_and(ept_get_synced_mask(ept_data),
> >                  d->domain_dirty_cpumask, &cpu_online_map);
> >
> > -    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
> > -                     __ept_sync_domain, d, 1);
> > +    on_selected_cpus(ept_get_synced_mask(ept_data),
> > +                     __ept_sync_domain, p2m, 1);
> >  }
> >
> >  void nvmx_enqueue_n2_exceptions(struct vcpu *v, diff --git
> > a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c index
> > c964f54..8adf3f9 100644
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m,
> unsigned long gfn, mfn_t mfn,
> >      int need_modify_vtd_table = 1;
> >      int vtd_pte_present = 0;
> >      int needs_sync = 1;
> > -    struct domain *d = p2m->domain;
> >      ept_entry_t old_entry = { .epte = 0 };
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +    struct domain *d = p2m->domain;
> >
> > +    ASSERT(ept_data);
> >      /*
> >       * the caller must make sure:
> >       * 1. passing valid gfn and mfn at order boundary.
> > @@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m,
> unsigned long gfn, mfn_t mfn,
> >       * 3. passing a valid order.
> >       */
> >      if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
> > -         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
> > +         ((u64)gfn >> ((ept_get_wl(ept_data) + 1) * EPT_TABLE_ORDER))
> > + ||
> >           (order % EPT_TABLE_ORDER) )
> >          return 0;
> >
> > -    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
> > -           (target == 1 && hvm_hap_has_2mb(d)) ||
> > +    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
> > +           (target == 1 && hvm_hap_has_2mb()) ||
> >             (target == 0));
> >
> > -    table = map_domain_page(ept_get_asr(d));
> > +    table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >
> > -    for ( i = ept_get_wl(d); i > target; i-- )
> > +    for ( i = ept_get_wl(ept_data); i > target; i-- )
> >      {
> >          ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
> >          if ( !ret )
> > @@ -439,9 +441,11 @@ out:
> >      unmap_domain_page(table);
> >
> >      if ( needs_sync )
> > -        ept_sync_domain(p2m->domain);
> > +        ept_sync_domain(p2m);
> >
> > -    if ( rv && iommu_enabled && need_iommu(p2m->domain) &&
> need_modify_vtd_table )
> > +    /* For non-nested p2m, may need to change VT-d page table.*/
> > +    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled &&
> need_iommu(p2m->domain) &&
> > +                need_modify_vtd_table )
> >      {
> >          if ( iommu_hap_pt_share )
> >              iommu_pte_flush(d, gfn, (u64*)ept_entry, order,
> > vtd_pte_present); @@ -488,14 +492,14 @@ static mfn_t
> ept_get_entry(struct p2m_domain *p2m,
> >                             unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
> >                             p2m_query_t q, unsigned int *page_order)
> > {
> > -    struct domain *d = p2m->domain;
> > -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> > +    ept_entry_t *table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >      unsigned long gfn_remainder = gfn;
> >      ept_entry_t *ept_entry;
> >      u32 index;
> >      int i;
> >      int ret = 0;
> >      mfn_t mfn = _mfn(INVALID_MFN);
> > +    struct ept_data *ept_data = p2m->hap_data;
> >
> >      *t = p2m_mmio_dm;
> >      *a = p2m_access_n;
> > @@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain
> *p2m,
> >
> >      /* Should check if gfn obeys GAW here. */
> >
> > -    for ( i = ept_get_wl(d); i > 0; i-- )
> > +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
> >      {
> >      retry:
> >          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i); @@
> > -588,19 +592,20 @@ out:
> >  static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
> >      unsigned long gfn, int *level)
> >  {
> > -    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
> > +    ept_entry_t *table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >      unsigned long gfn_remainder = gfn;
> >      ept_entry_t *ept_entry;
> >      ept_entry_t content = { .epte = 0 };
> >      u32 index;
> >      int i;
> >      int ret=0;
> > +    struct ept_data *ept_data = p2m->hap_data;
> >
> >      /* This pfn is higher than the highest the p2m map currently holds */
> >      if ( gfn > p2m->max_mapped_pfn )
> >          goto out;
> >
> > -    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
> > +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
> >      {
> >          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
> >          if ( !ret || ret == GUEST_TABLE_POD_PAGE ) @@ -622,7 +627,8
> > @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
> > void ept_walk_table(struct domain *d, unsigned long gfn)  {
> >      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> > -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +    ept_entry_t *table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >      unsigned long gfn_remainder = gfn;
> >
> >      int i;
> > @@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned
> long gfn)
> >          goto out;
> >      }
> >
> > -    for ( i = ept_get_wl(d); i >= 0; i-- )
> > +    for ( i = ept_get_wl(ept_data); i >= 0; i-- )
> >      {
> >          ept_entry_t *ept_entry, *next;
> >          u32 index;
> > @@ -778,16 +784,16 @@ static void ept_change_entry_type_page(mfn_t
> > ept_page_mfn, int ept_page_level,  static void
> ept_change_entry_type_global(struct p2m_domain *p2m,
> >                                           p2m_type_t ot, p2m_type_t
> > nt)  {
> > -    struct domain *d = p2m->domain;
> > -    if ( ept_get_asr(d) == 0 )
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +    if ( ept_get_asr(ept_data) == 0 )
> >          return;
> >
> >      BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
> >      BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt ==
> > p2m_mmio_direct));
> >
> > -    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d),
> ot, nt);
> > +    ept_change_entry_type_page(_mfn(ept_get_asr(ept_data)),
> > + ept_get_wl(ept_data), ot, nt);
> >
> > -    ept_sync_domain(d);
> > +    ept_sync_domain(p2m);
> >  }
> >
> >  void ept_p2m_init(struct p2m_domain *p2m) @@ -811,6 +817,7 @@ static
> > void ept_dump_p2m_table(unsigned char key)
> >      unsigned long gfn, gfn_remainder;
> >      unsigned long record_counter = 0;
> >      struct p2m_domain *p2m;
> > +    struct ept_data *ept_data;
> >
> >      for_each_domain(d)
> >      {
> > @@ -818,15 +825,16 @@ static void ept_dump_p2m_table(unsigned char
> key)
> >              continue;
> >
> >          p2m = p2m_get_hostp2m(d);
> > +    ept_data = p2m->hap_data;
> >          printk("\ndomain%d EPT p2m table: \n", d->domain_id);
> >
> >          for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
> >          {
> >              gfn_remainder = gfn;
> >              mfn = _mfn(INVALID_MFN);
> > -            table = map_domain_page(ept_get_asr(d));
> > +            table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >
> > -            for ( i = ept_get_wl(d); i > 0; i-- )
> > +            for ( i = ept_get_wl(ept_data); i > 0; i-- )
> >              {
> >                  ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
> >                  if ( ret != GUEST_TABLE_NORMAL_PAGE ) @@ -858,6
> > +866,52 @@ out:
> >      }
> >  }
> >
> > +int alloc_p2m_hap_data(struct p2m_domain *p2m) {
> > +    struct domain *d = p2m->domain;
> > +    struct ept_data *ept;
> > +
> > +    ASSERT(d);
> > +    if (!hap_enabled(d))
> > +        return 0;
> > +
> > +    p2m->hap_data = ept = xzalloc(struct ept_data);
> > +    if ( !p2m->hap_data )
> > +        return -ENOMEM;
> > +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
> > +    {
> > +        xfree(ept);
> > +        p2m->hap_data = NULL;
> > +        return -ENOMEM;
> > +    }
> > +    return 0;
> > +}
> > +
> > +void free_p2m_hap_data(struct p2m_domain *p2m) {
> > +    struct ept_data *ept;
> > +
> > +    if ( !hap_enabled(p2m->domain) )
> > +        return;
> > +
> > +    if ( p2m_is_nestedp2m(p2m)) {
> > +        ept = p2m->hap_data;
> > +        if ( ept ) {
> > +            free_cpumask_var(ept->ept_synced);
> > +            xfree(ept);
> > +        }
> > +    }
> > +}
> > +
> > +void p2m_init_hap_data(struct p2m_domain *p2m) {
> > +    struct ept_data *ept = p2m->hap_data;
> > +
> > +    ept->ept_ctl.ept_wl = 3;
> > +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
> > +    ept->ept_ctl.asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> > +}
> > +
> >  static struct keyhandler ept_p2m_table = {
> >      .diagnostic = 0,
> >      .u.fn = ept_dump_p2m_table,
> > diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index
> > 62c2d78..799bbfb 100644
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -105,6 +105,8 @@ p2m_init_nestedp2m(struct domain *d)
> >          if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
> >              return -ENOMEM;
> >          p2m_initialise(d, p2m);
> > +        if ( cpu_has_vmx && alloc_p2m_hap_data(p2m) )
> > +            return -ENOMEM;
> >          p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
> >          list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
> >      }
> > @@ -126,12 +128,14 @@ int p2m_init(struct domain *d)
> >          return -ENOMEM;
> >      }
> >      p2m_initialise(d, p2m);
> > +    if ( hap_enabled(d) && cpu_has_vmx)
> > +        p2m->hap_data = &d->arch.hvm_domain.vmx.ept;
> >
> >      /* Must initialise nestedp2m unconditionally
> >       * since nestedhvm_enabled(d) returns false here.
> >       * (p2m_init runs too early for HVM_PARAM_* options) */
> >      rc = p2m_init_nestedp2m(d);
> > -    if ( rc )
> > +    if ( rc )
> >          p2m_final_teardown(d);
> >      return rc;
> >  }
> > @@ -354,6 +358,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
> >
> >      if ( hap_enabled(d) )
> >          iommu_share_p2m_table(d);
> > +    if ( p2m_is_nestedp2m(p2m) && hap_enabled(d) )
> > +        p2m_init_hap_data(p2m);
> >
> >      P2M_PRINTK("populating p2m table\n");
> >
> > @@ -436,12 +442,16 @@ void p2m_teardown(struct p2m_domain *p2m)
> > static void p2m_teardown_nestedp2m(struct domain *d)  {
> >      uint8_t i;
> > +    struct p2m_domain *p2m;
> >
> >      for (i = 0; i < MAX_NESTEDP2M; i++) {
> >          if ( !d->arch.nested_p2m[i] )
> >              continue;
> > -        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
> > -        xfree(d->arch.nested_p2m[i]);
> > +        p2m = d->arch.nested_p2m[i];
> > +        if ( p2m->hap_data )
> > +            free_p2m_hap_data(p2m);
> > +        free_cpumask_var(p2m->dirty_cpumask);
> > +        xfree(p2m);
> >          d->arch.nested_p2m[i] = NULL;
> >      }
> >  }
> > diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > index 9a728b6..e6b4e3b 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > @@ -56,26 +56,34 @@ struct vmx_msr_state {
> >
> >  #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
> >
> > -struct vmx_domain {
> > -    unsigned long apic_access_mfn;
> > -    union {
> > -        struct {
> > +union eptp_control{
> > +    struct {
> >              u64 ept_mt :3,
> >                  ept_wl :3,
> >                  rsvd   :6,
> >                  asr    :52;
> >          };
> >          u64 eptp;
> > -    } ept_control;
> > +};
> > +
> > +struct ept_data{
> > +    union eptp_control ept_ctl;
> >      cpumask_var_t ept_synced;
> >  };
> >
> > -#define ept_get_wl(d)   \
> > -    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
> > -#define ept_get_asr(d)  \
> > -    ((d)->arch.hvm_domain.vmx.ept_control.asr)
> > -#define ept_get_eptp(d) \
> > -    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
> > +struct vmx_domain {
> > +    unsigned long apic_access_mfn;
> > +    struct ept_data ept;
> > +};
> > +
> > +#define ept_get_wl(ept_data)   \
> > +    (((struct ept_data*)(ept_data))->ept_ctl.ept_wl)
> > +#define ept_get_asr(ept_data)  \
> > +    (((struct ept_data*)(ept_data))->ept_ctl.asr)
> > +#define ept_get_eptp(ept_data) \
> > +    (((struct ept_data*)(ept_data))->ept_ctl.eptp)
> > +#define ept_get_synced_mask(ept_data)\
> > +    (((struct ept_data*)(ept_data))->ept_synced)
> >
> >  struct arch_vmx_struct {
> >      /* Virtual address of VMCS. */
> > diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h
> > b/xen/include/asm-x86/hvm/vmx/vmx.h
> > index aa5b080..573a12e 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> > @@ -333,7 +333,7 @@ static inline void ept_sync_all(void)
> >      __invept(INVEPT_ALL_CONTEXT, 0, 0);  }
> >
> > -void ept_sync_domain(struct domain *d);
> > +void ept_sync_domain(struct p2m_domain *p2m);
> >
> >  static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long
> > gva)  { @@ -401,6 +401,10 @@ void setup_ept_dump(void);
> >
> >  void update_guest_eip(void);
> >
> > +int alloc_p2m_hap_data(struct p2m_domain *p2m); void
> > +free_p2m_hap_data(struct p2m_domain *p2m); void
> > +p2m_init_hap_data(struct p2m_domain *p2m);
> > +
> >  /* EPT violation qualifications definitions */
> >  #define _EPT_READ_VIOLATION         0
> >  #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index 1807ad6..0fb1b2d 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -277,6 +277,7 @@ struct p2m_domain {
> >          mm_lock_t        lock;         /* Locking of private pod structs,   *
> >                                          * not relying on the p2m lock.      */
> >      } pod;
> > +    void *hap_data;
> >  };
> >
> >  /* get host p2m table */
> > --
> > 1.7.1
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 08:59:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 08:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkWX2-00079E-2n; Mon, 17 Dec 2012 08:58:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TkWX0-000796-7x
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 08:58:38 +0000
Received: from [85.158.143.99:51088] by server-3.bemta-4.messagelabs.com id
	AC/E4-18211-DBEDEC05; Mon, 17 Dec 2012 08:58:37 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355734707!18570707!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMjI0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29550 invoked from network); 17 Dec 2012 08:58:27 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-216.messagelabs.com with SMTP;
	17 Dec 2012 08:58:27 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 17 Dec 2012 00:56:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,299,1355126400"; d="scan'208";a="234989251"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 17 Dec 2012 00:57:12 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 17 Dec 2012 00:57:12 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 17 Dec 2012 16:57:09 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
	operations neutral
Thread-Index: AQHN2UuOO+dwAcebJU6gFcqVbeHQ/JgctauQ
Date: Mon, 17 Dec 2012 08:57:08 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033B3CC8@SHSMSX101.ccr.corp.intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
	<20121213160417.GK75286@ocelot.phlegethon.org>
In-Reply-To: <20121213160417.GK75286@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"keir@xen.org" <keir@xen.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "Dong, 
	Eddie" <eddie.dong@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>,
	"Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
 operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Friday, December 14, 2012 12:04 AM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xensource.com; Dong, Eddie; keir@xen.org; Nakajima,
> Jun; JBeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
> operations neutral
> 
> At 01:57 +0800 on 11 Dec (1355191037), xiantao.zhang@intel.com wrote:
> > From: Zhang Xiantao <xiantao.zhang@intel.com>
> >
> > Share the current EPT logic with nested EPT case, so make the related
> > data structures and operations neutral to common EPT and nested EPT.
> >
> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Since the struct ept_data is only 16 bytes long, why not just embed it in the
> struct p2m_domain, as
> 
> >          mm_lock_t        lock;         /* Locking of private pod structs,   *
> >                                          * not relying on the p2m lock.      */
> >      } pod;
> > +    union {
> > +        struct ept_data ept;
> > +        /* NPT equivalent could go here if needed */
> > +    };
> >  };

Hi Tim,
Thanks for your review! If we change it like this, p2m.h would have to include asm/hvm/vmx/vmcs.h; is that acceptable?
Xiantao


> That would tidy up the alloc/free stuff a fair bit, though you'd still need it for
> the cpumask, I guess.
> 
> It would be nice to wrap the alloc/free functions up in the usual way so we
> don't get EPT-specific functions with arch-independent names.
> 
> Otherwise that looks fine.
> 
> Cheers,
> 
> Tim.
> 
> > ---
> >  xen/arch/x86/hvm/vmx/vmcs.c        |    2 +-
> >  xen/arch/x86/hvm/vmx/vmx.c         |   39 +++++++++------
> >  xen/arch/x86/mm/p2m-ept.c          |   96
> ++++++++++++++++++++++++++++--------
> >  xen/arch/x86/mm/p2m.c              |   16 +++++-
> >  xen/include/asm-x86/hvm/vmx/vmcs.h |   30 +++++++----
> >  xen/include/asm-x86/hvm/vmx/vmx.h  |    6 ++-
> >  xen/include/asm-x86/p2m.h          |    1 +
> >  7 files changed, 137 insertions(+), 53 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmcs.c
> b/xen/arch/x86/hvm/vmx/vmcs.c
> > index 9adc7a4..b9ebdfe 100644
> > --- a/xen/arch/x86/hvm/vmx/vmcs.c
> > +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> > @@ -942,7 +942,7 @@ static int construct_vmcs(struct vcpu *v)
> >      }
> >
> >      if ( paging_mode_hap(d) )
> > -        __vmwrite(EPT_POINTER, d-
> >arch.hvm_domain.vmx.ept_control.eptp);
> > +        __vmwrite(EPT_POINTER,
> > + d->arch.hvm_domain.vmx.ept.ept_ctl.eptp);
> >
> >      if ( cpu_has_vmx_pat && paging_mode_hap(d) )
> >      {
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c
> b/xen/arch/x86/hvm/vmx/vmx.c
> > index c67ac59..06455bf 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -79,22 +79,23 @@ static void __ept_sync_domain(void *info);  static
> > int vmx_domain_initialise(struct domain *d)  {
> >      int rc;
> > +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
> >
> >      /* Set the memory type used when accessing EPT paging structures. */
> > -    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
> > +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
> >
> >      /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
> > -    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
> > +    ept->ept_ctl.ept_wl = 3;
> >
> > -    d->arch.hvm_domain.vmx.ept_control.asr  =
> > +    ept->ept_ctl.asr  =
> >          pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
> >
> > -    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
> > +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
> >          return -ENOMEM;
> >
> >      if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
> >      {
> > -        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> > +        free_cpumask_var(ept->ept_synced);
> >          return rc;
> >      }
> >
> > @@ -103,9 +104,10 @@ static int vmx_domain_initialise(struct domain
> > *d)
> >
> >  static void vmx_domain_destroy(struct domain *d)  {
> > +    struct ept_data *ept = &d->arch.hvm_domain.vmx.ept;
> >      if ( paging_mode_hap(d) )
> > -        on_each_cpu(__ept_sync_domain, d, 1);
> > -    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
> > +        on_each_cpu(__ept_sync_domain, p2m_get_hostp2m(d), 1);
> > +    free_cpumask_var(ept->ept_synced);
> >      vmx_free_vlapic_mapping(d);
> >  }
> >
> > @@ -641,6 +643,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)  {
> >      struct domain *d = v->domain;
> >      unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
> > +    struct ept_data *ept_data = p2m_get_hostp2m(d)->hap_data;
> >
> >      /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
> >      if ( old_cr4 != new_cr4 )
> > @@ -650,10 +653,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
> >      {
> >          unsigned int cpu = smp_processor_id();
> >          /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
> > -        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced)
> &&
> > +        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
> >               !cpumask_test_and_set_cpu(cpu,
> > -                                       d->arch.hvm_domain.vmx.ept_synced) )
> > -            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> > +                                       ept_get_synced_mask(ept_data)) )
> > +            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data),
> > + 0);
> >      }
> >
> >      vmx_restore_guest_msrs(v);
> > @@ -1218,12 +1221,16 @@ static void vmx_update_guest_efer(struct vcpu
> > *v)
> >
> >  static void __ept_sync_domain(void *info)  {
> > -    struct domain *d = info;
> > -    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
> > +    struct p2m_domain *p2m = info;
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +
> > +    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
> >  }
> >
> > -void ept_sync_domain(struct domain *d)
> > +void ept_sync_domain(struct p2m_domain *p2m)
> >  {
> > +    struct domain *d = p2m->domain;
> > +    struct ept_data *ept_data = p2m->hap_data;
> >      /* Only if using EPT and this domain has some VCPUs to dirty. */
> >      if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
> >          return;
> > @@ -1236,11 +1243,11 @@ void ept_sync_domain(struct domain *d)
> >       * the ept_synced mask before on_selected_cpus() reads it, resulting in
> >       * unnecessary extra flushes, to avoid allocating a cpumask_t on the
> stack.
> >       */
> > -    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
> > +    cpumask_and(ept_get_synced_mask(ept_data),
> >                  d->domain_dirty_cpumask, &cpu_online_map);
> >
> > -    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
> > -                     __ept_sync_domain, d, 1);
> > +    on_selected_cpus(ept_get_synced_mask(ept_data),
> > +                     __ept_sync_domain, p2m, 1);
> >  }
> >
> >  void nvmx_enqueue_n2_exceptions(struct vcpu *v, diff --git
> > a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c index
> > c964f54..8adf3f9 100644
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m,
> unsigned long gfn, mfn_t mfn,
> >      int need_modify_vtd_table = 1;
> >      int vtd_pte_present = 0;
> >      int needs_sync = 1;
> > -    struct domain *d = p2m->domain;
> >      ept_entry_t old_entry = { .epte = 0 };
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +    struct domain *d = p2m->domain;
> >
> > +    ASSERT(ept_data);
> >      /*
> >       * the caller must make sure:
> >       * 1. passing valid gfn and mfn at order boundary.
> > @@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m,
> unsigned long gfn, mfn_t mfn,
> >       * 3. passing a valid order.
> >       */
> >      if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
> > -         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
> > +         ((u64)gfn >> ((ept_get_wl(ept_data) + 1) * EPT_TABLE_ORDER))
> > + ||
> >           (order % EPT_TABLE_ORDER) )
> >          return 0;
> >
> > -    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
> > -           (target == 1 && hvm_hap_has_2mb(d)) ||
> > +    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
> > +           (target == 1 && hvm_hap_has_2mb()) ||
> >             (target == 0));
> >
> > -    table = map_domain_page(ept_get_asr(d));
> > +    table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >
> > -    for ( i = ept_get_wl(d); i > target; i-- )
> > +    for ( i = ept_get_wl(ept_data); i > target; i-- )
> >      {
> >          ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
> >          if ( !ret )
> > @@ -439,9 +441,11 @@ out:
> >      unmap_domain_page(table);
> >
> >      if ( needs_sync )
> > -        ept_sync_domain(p2m->domain);
> > +        ept_sync_domain(p2m);
> >
> > -    if ( rv && iommu_enabled && need_iommu(p2m->domain) &&
> need_modify_vtd_table )
> > +    /* For non-nested p2m, may need to change VT-d page table.*/
> > +    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled &&
> need_iommu(p2m->domain) &&
> > +                need_modify_vtd_table )
> >      {
> >          if ( iommu_hap_pt_share )
> >              iommu_pte_flush(d, gfn, (u64*)ept_entry, order,
> > vtd_pte_present); @@ -488,14 +492,14 @@ static mfn_t
> ept_get_entry(struct p2m_domain *p2m,
> >                             unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
> >                             p2m_query_t q, unsigned int *page_order)
> > {
> > -    struct domain *d = p2m->domain;
> > -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> > +    ept_entry_t *table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >      unsigned long gfn_remainder = gfn;
> >      ept_entry_t *ept_entry;
> >      u32 index;
> >      int i;
> >      int ret = 0;
> >      mfn_t mfn = _mfn(INVALID_MFN);
> > +    struct ept_data *ept_data = p2m->hap_data;
> >
> >      *t = p2m_mmio_dm;
> >      *a = p2m_access_n;
> > @@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain
> *p2m,
> >
> >      /* Should check if gfn obeys GAW here. */
> >
> > -    for ( i = ept_get_wl(d); i > 0; i-- )
> > +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
> >      {
> >      retry:
> >          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i); @@
> > -588,19 +592,20 @@ out:
> >  static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
> >      unsigned long gfn, int *level)
> >  {
> > -    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
> > +    ept_entry_t *table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >      unsigned long gfn_remainder = gfn;
> >      ept_entry_t *ept_entry;
> >      ept_entry_t content = { .epte = 0 };
> >      u32 index;
> >      int i;
> >      int ret=0;
> > +    struct ept_data *ept_data = p2m->hap_data;
> >
> >      /* This pfn is higher than the highest the p2m map currently holds */
> >      if ( gfn > p2m->max_mapped_pfn )
> >          goto out;
> >
> > -    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
> > +    for ( i = ept_get_wl(ept_data); i > 0; i-- )
> >      {
> >          ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
> >          if ( !ret || ret == GUEST_TABLE_POD_PAGE ) @@ -622,7 +627,8
> > @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
> > void ept_walk_table(struct domain *d, unsigned long gfn)  {
> >      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> > -    ept_entry_t *table = map_domain_page(ept_get_asr(d));
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +    ept_entry_t *table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >      unsigned long gfn_remainder = gfn;
> >
> >      int i;
> > @@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned
> long gfn)
> >          goto out;
> >      }
> >
> > -    for ( i = ept_get_wl(d); i >= 0; i-- )
> > +    for ( i = ept_get_wl(ept_data); i >= 0; i-- )
> >      {
> >          ept_entry_t *ept_entry, *next;
> >          u32 index;
> > @@ -778,16 +784,16 @@ static void ept_change_entry_type_page(mfn_t
> > ept_page_mfn, int ept_page_level,  static void
> ept_change_entry_type_global(struct p2m_domain *p2m,
> >                                           p2m_type_t ot, p2m_type_t
> > nt)  {
> > -    struct domain *d = p2m->domain;
> > -    if ( ept_get_asr(d) == 0 )
> > +    struct ept_data *ept_data = p2m->hap_data;
> > +    if ( ept_get_asr(ept_data) == 0 )
> >          return;
> >
> >      BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
> >      BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt ==
> > p2m_mmio_direct));
> >
> > -    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d),
> ot, nt);
> > +    ept_change_entry_type_page(_mfn(ept_get_asr(ept_data)),
> > + ept_get_wl(ept_data), ot, nt);
> >
> > -    ept_sync_domain(d);
> > +    ept_sync_domain(p2m);
> >  }
> >
> >  void ept_p2m_init(struct p2m_domain *p2m) @@ -811,6 +817,7 @@ static
> > void ept_dump_p2m_table(unsigned char key)
> >      unsigned long gfn, gfn_remainder;
> >      unsigned long record_counter = 0;
> >      struct p2m_domain *p2m;
> > +    struct ept_data *ept_data;
> >
> >      for_each_domain(d)
> >      {
> > @@ -818,15 +825,16 @@ static void ept_dump_p2m_table(unsigned char
> key)
> >              continue;
> >
> >          p2m = p2m_get_hostp2m(d);
> > +    ept_data = p2m->hap_data;
> >          printk("\ndomain%d EPT p2m table: \n", d->domain_id);
> >
> >          for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
> >          {
> >              gfn_remainder = gfn;
> >              mfn = _mfn(INVALID_MFN);
> > -            table = map_domain_page(ept_get_asr(d));
> > +            table =
> > + map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
> >
> > -            for ( i = ept_get_wl(d); i > 0; i-- )
> > +            for ( i = ept_get_wl(ept_data); i > 0; i-- )
> >              {
> >                  ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
> >                  if ( ret != GUEST_TABLE_NORMAL_PAGE ) @@ -858,6
> > +866,52 @@ out:
> >      }
> >  }
> >
> > +int alloc_p2m_hap_data(struct p2m_domain *p2m)
> > +{
> > +    struct domain *d = p2m->domain;
> > +    struct ept_data *ept;
> > +
> > +    ASSERT(d);
> > +    if (!hap_enabled(d))
> > +        return 0;
> > +
> > +    p2m->hap_data = ept = xzalloc(struct ept_data);
> > +    if ( !p2m->hap_data )
> > +        return -ENOMEM;
> > +    if ( !zalloc_cpumask_var(&ept->ept_synced) )
> > +    {
> > +        xfree(ept);
> > +        p2m->hap_data = NULL;
> > +        return -ENOMEM;
> > +    }
> > +    return 0;
> > +}
> > +
> > +void free_p2m_hap_data(struct p2m_domain *p2m)
> > +{
> > +    struct ept_data *ept;
> > +
> > +    if ( !hap_enabled(p2m->domain) )
> > +        return;
> > +
> > +    if ( p2m_is_nestedp2m(p2m)) {
> > +        ept = p2m->hap_data;
> > +        if ( ept ) {
> > +            free_cpumask_var(ept->ept_synced);
> > +            xfree(ept);
> > +        }
> > +    }
> > +}
> > +
> > +void p2m_init_hap_data(struct p2m_domain *p2m)
> > +{
> > +    struct ept_data *ept = p2m->hap_data;
> > +
> > +    ept->ept_ctl.ept_wl = 3;
> > +    ept->ept_ctl.ept_mt = EPT_DEFAULT_MT;
> > +    ept->ept_ctl.asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> > +}
> > +
> >  static struct keyhandler ept_p2m_table = {
> >      .diagnostic = 0,
> >      .u.fn = ept_dump_p2m_table,
> > diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> > index 62c2d78..799bbfb 100644
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -105,6 +105,8 @@ p2m_init_nestedp2m(struct domain *d)
> >          if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
> >              return -ENOMEM;
> >          p2m_initialise(d, p2m);
> > +        if ( cpu_has_vmx && alloc_p2m_hap_data(p2m) )
> > +            return -ENOMEM;
> >          p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
> >          list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
> >      }
> > @@ -126,12 +128,14 @@ int p2m_init(struct domain *d)
> >          return -ENOMEM;
> >      }
> >      p2m_initialise(d, p2m);
> > +    if ( hap_enabled(d) && cpu_has_vmx)
> > +        p2m->hap_data = &d->arch.hvm_domain.vmx.ept;
> >
> >      /* Must initialise nestedp2m unconditionally
> >       * since nestedhvm_enabled(d) returns false here.
> >       * (p2m_init runs too early for HVM_PARAM_* options) */
> >      rc = p2m_init_nestedp2m(d);
> > -    if ( rc )
> > +    if ( rc )
> >          p2m_final_teardown(d);
> >      return rc;
> >  }
> > @@ -354,6 +358,8 @@ int p2m_alloc_table(struct p2m_domain *p2m)
> >
> >      if ( hap_enabled(d) )
> >          iommu_share_p2m_table(d);
> > +    if ( p2m_is_nestedp2m(p2m) && hap_enabled(d) )
> > +        p2m_init_hap_data(p2m);
> >
> >      P2M_PRINTK("populating p2m table\n");
> >
> > @@ -436,12 +442,16 @@ void p2m_teardown(struct p2m_domain *p2m)
> >  static void p2m_teardown_nestedp2m(struct domain *d)
> >  {
> >      uint8_t i;
> > +    struct p2m_domain *p2m;
> >
> >      for (i = 0; i < MAX_NESTEDP2M; i++) {
> >          if ( !d->arch.nested_p2m[i] )
> >              continue;
> > -        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
> > -        xfree(d->arch.nested_p2m[i]);
> > +        p2m = d->arch.nested_p2m[i];
> > +        if ( p2m->hap_data )
> > +            free_p2m_hap_data(p2m);
> > +        free_cpumask_var(p2m->dirty_cpumask);
> > +        xfree(p2m);
> >          d->arch.nested_p2m[i] = NULL;
> >      }
> >  }
> > diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > index 9a728b6..e6b4e3b 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > @@ -56,26 +56,34 @@ struct vmx_msr_state {
> >
> >  #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
> >
> > -struct vmx_domain {
> > -    unsigned long apic_access_mfn;
> > -    union {
> > -        struct {
> > +union eptp_control{
> > +    struct {
> >              u64 ept_mt :3,
> >                  ept_wl :3,
> >                  rsvd   :6,
> >                  asr    :52;
> >          };
> >          u64 eptp;
> > -    } ept_control;
> > +};
> > +
> > +struct ept_data{
> > +    union eptp_control ept_ctl;
> >      cpumask_var_t ept_synced;
> >  };
> >
> > -#define ept_get_wl(d)   \
> > -    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
> > -#define ept_get_asr(d)  \
> > -    ((d)->arch.hvm_domain.vmx.ept_control.asr)
> > -#define ept_get_eptp(d) \
> > -    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
> > +struct vmx_domain {
> > +    unsigned long apic_access_mfn;
> > +    struct ept_data ept;
> > +};
> > +
> > +#define ept_get_wl(ept_data)   \
> > +    (((struct ept_data*)(ept_data))->ept_ctl.ept_wl)
> > +#define ept_get_asr(ept_data)  \
> > +    (((struct ept_data*)(ept_data))->ept_ctl.asr)
> > +#define ept_get_eptp(ept_data) \
> > +    (((struct ept_data*)(ept_data))->ept_ctl.eptp)
> > +#define ept_get_synced_mask(ept_data)\
> > +    (((struct ept_data*)(ept_data))->ept_synced)
> >
> >  struct arch_vmx_struct {
> >      /* Virtual address of VMCS. */
> > diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h
> > b/xen/include/asm-x86/hvm/vmx/vmx.h
> > index aa5b080..573a12e 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> > @@ -333,7 +333,7 @@ static inline void ept_sync_all(void)
> >      __invept(INVEPT_ALL_CONTEXT, 0, 0);
> >  }
> >
> > -void ept_sync_domain(struct domain *d);
> > +void ept_sync_domain(struct p2m_domain *p2m);
> >
> >  static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
> >  {
> > @@ -401,6 +401,10 @@ void setup_ept_dump(void);
> >
> >  void update_guest_eip(void);
> >
> > +int alloc_p2m_hap_data(struct p2m_domain *p2m);
> > +void free_p2m_hap_data(struct p2m_domain *p2m);
> > +void p2m_init_hap_data(struct p2m_domain *p2m);
> > +
> >  /* EPT violation qualifications definitions */
> >  #define _EPT_READ_VIOLATION         0
> >  #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index 1807ad6..0fb1b2d 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -277,6 +277,7 @@ struct p2m_domain {
> >          mm_lock_t        lock;         /* Locking of private pod structs,   *
> >                                          * not relying on the p2m lock.      */
> >      } pod;
> > +    void *hap_data;
> >  };
> >
> >  /* get host p2m table */
> > --
> > 1.7.1
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
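
[Editor's aside: the two-stage allocation with rollback used by the quoted alloc_p2m_hap_data()/free_p2m_hap_data() pair can be sketched in isolation. This is a hypothetical stand-in, not Xen code: calloc() plays the role of xzalloc()/zalloc_cpumask_var(), and the synced mask is just a heap word rather than a real cpumask.]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the Xen types in the quoted patch. */
struct ept_data_like {
    unsigned long *ept_synced;          /* secondary allocation */
};

struct p2m_like {
    struct ept_data_like *hap_data;
};

/* Same shape as alloc_p2m_hap_data(): publish the pointer, then roll it
 * back to NULL if the second allocation fails, so teardown code can rely
 * on hap_data being either fully set up or NULL. */
static int alloc_hap_data_like(struct p2m_like *p2m)
{
    struct ept_data_like *ept;

    p2m->hap_data = ept = calloc(1, sizeof(*ept));
    if (!p2m->hap_data)
        return -ENOMEM;
    ept->ept_synced = calloc(1, sizeof(*ept->ept_synced));
    if (!ept->ept_synced) {
        free(ept);
        p2m->hap_data = NULL;           /* roll back the partial init */
        return -ENOMEM;
    }
    return 0;
}

static void free_hap_data_like(struct p2m_like *p2m)
{
    struct ept_data_like *ept = p2m->hap_data;

    if (ept) {
        free(ept->ept_synced);
        free(ept);
        p2m->hap_data = NULL;
    }
}
```

The invariant (hap_data is either NULL or fully initialised) is what lets p2m_teardown_nestedp2m() in the patch test `if ( p2m->hap_data )` safely.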

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 09:57:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 09:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXRE-0007W0-VB; Mon, 17 Dec 2012 09:56:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkXRC-0007Vt-Ui
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 09:56:43 +0000
Received: from [85.158.143.99:27073] by server-2.bemta-4.messagelabs.com id
	3F/AD-30861-A5CEEC05; Mon, 17 Dec 2012 09:56:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355738201!22269008!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_DONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11153 invoked from network); 17 Dec 2012 09:56:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 09:56:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 09:56:40 +0000
Message-Id: <50CEFA6502000078000B0AF3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 09:56:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1355162243-11857-1-git-send-email-xiantao.zhang@intel.com>
	<1355162243-11857-6-git-send-email-xiantao.zhang@intel.com>
	<20121213160417.GK75286@ocelot.phlegethon.org>
	<B6C2EB9186482D47BD0C5A9A48345644033B3CC8@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A48345644033B3CC8@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>, Tim Deegan <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
 operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 09:57, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:

>> -----Original Message-----
>> From: Tim Deegan [mailto:tim@xen.org]
>> Sent: Friday, December 14, 2012 12:04 AM
>> To: Zhang, Xiantao
>> Cc: xen-devel@lists.xensource.com; Dong, Eddie; keir@xen.org; Nakajima,
>> Jun; JBeulich@suse.com 
>> Subject: Re: [Xen-devel] [PATCH 05/11] EPT: Make ept data structure or
>> operations neutral
>> 
>> At 01:57 +0800 on 11 Dec (1355191037), xiantao.zhang@intel.com wrote:
>> > From: Zhang Xiantao <xiantao.zhang@intel.com>
>> >
>> > Share the current EPT logic with the nested EPT case, so make the
>> > related data structures and operations neutral to both common EPT and
>> > nested EPT.
>> >
>> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
>> 
>> Since the struct ept_data is only 16 bytes long, why not just embed it in 
> the
>> struct p2m_domain, as
>> 
>> >          mm_lock_t        lock;         /* Locking of private pod structs,   *
>> >                                          * not relying on the p2m lock.      */
>> >      } pod;
>> > +    union {
>> > +        struct ept_data ept;
>> > +        /* NPT equivalent could go here if needed */
>> > +    };
>> >  };
> 
> Hi Tim,
> Thanks for your review!  If we change it like this, p2m.h has to
> include asm/hvm/vmx/vmcs.h; is that acceptable?

I'm sure there are ways to avoid such a dependency.
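
[Editor's note: one common way to break such a dependency, sketched here with made-up names rather than the actual Xen headers, is to leave only an incomplete declaration of struct ept_data in the generic header so it holds just a pointer, and keep the full definition, with the eptp bitfields from the patch, in the VMX-specific code.]

```c
#include <assert.h>
#include <stdint.h>

/* --- what a vmcs.h-free p2m.h could contain: only a forward
 *     declaration, so generic code can store a pointer without
 *     knowing the layout --- */
struct ept_data;                      /* incomplete type */

struct p2m_domain_like {
    struct ept_data *hap_data;        /* opaque to generic code */
};

/* --- what stays in the VMX-specific header (layout as in the patch) --- */
struct ept_data {
    union {
        struct {
            uint64_t ept_mt : 3,      /* memory type */
                     ept_wl : 3,      /* page-walk length */
                     rsvd   : 6,
                     asr    : 52;     /* p2m root, in page frames */
        };
        uint64_t eptp;
    } ept_ctl;
};

/* Pack an EPTP value; assumes LSB-first bitfield allocation as on
 * gcc for x86. */
static uint64_t make_eptp(uint64_t asr_pfn, unsigned int wl,
                          unsigned int mt)
{
    struct ept_data e = { { .eptp = 0 } };

    e.ept_ctl.ept_mt = mt;
    e.ept_ctl.ept_wl = wl;
    e.ept_ctl.asr = asr_pfn;
    return e.ept_ctl.eptp;
}
```

The incomplete type costs one pointer indirection; embedding the struct by value, as Tim suggests, instead requires the definition to be visible, which is the tension discussed above.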

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 09:58:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 09:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXSW-0007ad-KZ; Mon, 17 Dec 2012 09:58:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkXSV-0007aW-Ay
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 09:58:03 +0000
Received: from [85.158.137.99:16198] by server-2.bemta-3.messagelabs.com id
	67/45-11239-5ACEEC05; Mon, 17 Dec 2012 09:57:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355738276!17364606!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23549 invoked from network); 17 Dec 2012 09:57:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 09:57:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="193481"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 09:57:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 09:57:54 +0000
Message-ID: <1355738272.14620.10.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Mon, 17 Dec 2012 09:57:52 +0000
In-Reply-To: <1355435230-4176-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355435230-4176-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/v2] libxl: postpone backend name
	resolution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 21:47 +0000, Daniel De Graaf wrote:
> This replaces the backend_domid field in libxl devices with a structure
> allowing either a domid or a domain name to be specified.  The domain
> name is resolved into a domain ID in the _setdefault function when
> adding the device.  This change allows the backend of the block devices
> to be specified (which previously required passing the libxl_ctx down
> into the block device parser).

I haven't reviewed this in detail yet, but my first thought is that this
is a libxl API change, and I can't see any provision for backwards
compatibility.

If you are doing something clever which I've missed then it deserves a
comment ;-)

Assuming not, then the simplest solution would be to remove the struct
and just add the name as a field to each affected device. Or maybe an
anonymous struct would do the job, if the members were backend_domid and
backend_name?
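
[Editor's note: the anonymous-struct idea can be illustrated with hypothetical types (these are not libxl's actual IDL-generated structs). In C11, the members of an anonymous struct are promoted into the enclosing scope, so existing callers that spell `dev.backend_domid` keep compiling even though the fields are now grouped.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t domid_like;

/* With a *named* member, old code accessing dev.backend_domid breaks:
 * it would have to become dev.backend_domain.domid.  With an anonymous
 * struct, both fields remain reachable under their plain names. */
struct device_like {
    struct {
        domid_like backend_domid;
        char *backend_name;
    };                          /* anonymous: no member name */
    int devid;
};

static domid_like old_style_access(const struct device_like *dev)
{
    /* Exactly the pre-patch spelling existing callers already use. */
    return dev->backend_domid;
}
```

The trade-off is that the grouping is invisible to the IDL and to any code that wants to pass the pair around as a unit.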

You could also potentially do something with the provisions around
LIBXL_API_VERSION documented in libxl.h.
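
[Editor's note: roughly, the version-guard idea looks like the sketch below. The macro name and version values only mirror the LIBXL_API_VERSION convention; the structs are made up, and real libxl compatibility shims are dedicated definitions in libxl.h rather than this token-substitution trick.]

```c
#include <assert.h>
#include <stdint.h>

/* Pretend an application declared it was built against the old API. */
#define EXAMPLE_API_VERSION 0x040200

struct backend_desc { uint32_t domid; char *name; };

struct nic_like {
    struct backend_desc backend_domain;   /* new, renamed field */
    int devid;
};

/* Compat shim: before the rename, callers wrote nic.backend_domid.
 * When an old API version is requested, map the old spelling onto the
 * new location so old source keeps compiling unchanged. */
#if defined(EXAMPLE_API_VERSION) && EXAMPLE_API_VERSION < 0x040300
#define backend_domid backend_domain.domid
#endif

static int get_domid(struct nic_like *nic)
{
    return (int)nic->backend_domid;   /* old spelling, via the shim */
}
```

The point is only that the library can keep the new layout internally while offering applications an opt-in window of source compatibility.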

Ian.

> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> 
> ---
> 
> This patch does not include the changes to tools/libxl/libxlu_disk_l.c
> and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
> changes due to different generator versions.
> 
>  docs/misc/xl-disk-configuration.txt | 12 +++++++
>  tools/libxl/libxl.c                 | 65 +++++++++++++++++++++++++------------
>  tools/libxl/libxl_dm.c              |  6 ++--
>  tools/libxl/libxl_types.idl         | 15 ++++++---
>  tools/libxl/libxl_utils.c           |  2 +-
>  tools/libxl/libxlu_disk_l.l         |  1 +
>  tools/libxl/xl_cmdimpl.c            | 36 ++++----------------
>  tools/libxl/xl_sxp.c                |  6 ++--
>  8 files changed, 82 insertions(+), 61 deletions(-)
> 
> diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
> index 86c16be..5bd456d 100644
> --- a/docs/misc/xl-disk-configuration.txt
> +++ b/docs/misc/xl-disk-configuration.txt
> @@ -139,6 +139,18 @@ cdrom
>  Convenience alias for "devtype=cdrom".
> 
> 
> +backend=<domain-name>
> +---------------------
> +
> +Description:           Designates a backend domain for the device
> +Supported values:      Valid domain names
> +Mandatory:             No
> +
> +Specifies the backend domain which this device should attach to. This
> +defaults to domain 0. Specifying another domain requires setting up a
> +driver domain which is outside the scope of this document.
> +
> +
>  backendtype=<backend-type>
>  --------------------------
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 6c4455e..edb29b3 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1153,7 +1153,7 @@ static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
>      sscanf(backend,
>              "/local/domain/%d/backend/%" TOSTRING(BACKEND_STRING_SIZE)
>             "[a-z]/%*d/%*d",
> -           &disk->backend_domid, backend_type);
> +           &disk->backend_domain.domid, backend_type);
>      if (!strcmp(backend_type, "tap") || !strcmp(backend_type, "vbd")) {
>          disk->backend = LIBXL_DISK_BACKEND_TAP;
>      } else if (!strcmp(backend_type, "qdisk")) {
> @@ -1710,13 +1710,36 @@ out:
>      return;
>  }
> 
> +/* backend domain name-to-domid conversion utility */
> +static int libxl__resolve_domain(libxl__gc *gc, struct libxl_domain_desc *desc)
> +{
> +    int i, rv;
> +    const char *name = desc->name;
> +    if (!name)
> +        return 0;
> +    for (i=0; name[i]; i++) {
> +        if (!isdigit(name[i])) {
> +            rv = libxl_name_to_domid(libxl__gc_owner(gc), name, &desc->domid);
> +            if (rv)
> +                return rv;
> +            goto resolved;
> +        }
> +    }
> +    desc->domid = atoi(name);
> +
> + resolved:
> +    free(desc->name);
> +    desc->name = NULL;
> +    return 0;
> +}
> +
>  /******************************************************************************/
>  int libxl__device_vtpm_setdefault(libxl__gc *gc, libxl_device_vtpm *vtpm)
>  {
>     if(libxl_uuid_is_nil(&vtpm->uuid)) {
>        libxl_uuid_generate(&vtpm->uuid);
>     }
> -   return 0;
> +   return libxl__resolve_domain(gc, &vtpm->backend_domain);
>  }
> 
>  static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
> @@ -1724,7 +1747,7 @@ static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
>                                     libxl__device *device)
>  {
>     device->backend_devid   = vtpm->devid;
> -   device->backend_domid   = vtpm->backend_domid;
> +   device->backend_domid   = vtpm->backend_domain.domid;
>     device->backend_kind    = LIBXL__DEVICE_KIND_VTPM;
>     device->devid           = vtpm->devid;
>     device->domid           = domid;
> @@ -1783,7 +1806,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
>      flexarray_append(back, "False");
> 
>      flexarray_append(front, "backend-id");
> -    flexarray_append(front, GCSPRINTF("%d", vtpm->backend_domid));
> +    flexarray_append(front, GCSPRINTF("%d", vtpm->backend_domain.domid));
>      flexarray_append(front, "state");
>      flexarray_append(front, GCSPRINTF("%d", 1));
>      flexarray_append(front, "handle");
> @@ -1832,7 +1855,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
>            tmp = libxl__xs_read(gc, XBT_NULL,
>                  GCSPRINTF("%s/%s/backend_id",
>                     fe_path, *dir));
> -          vtpm->backend_domid = atoi(tmp);
> +          vtpm->backend_domain.domid = atoi(tmp);
> 
>            tmp = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/uuid", be_path));
>            if(tmp) {
> @@ -1934,7 +1957,7 @@ int libxl_devid_to_device_vtpm(libxl_ctx *ctx,
>      rc = 1;
>      for (i = 0; i < nb; ++i) {
>          if(devid == vtpms[i].devid) {
> -            vtpm->backend_domid = vtpms[i].backend_domid;
> +            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
>              vtpm->devid = vtpms[i].devid;
>              libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
>              rc = 0;
> @@ -1956,7 +1979,7 @@ int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
>      rc = libxl__device_disk_set_backend(gc, disk);
>      if (rc) return rc;
> 
> -    return rc;
> +    return libxl__resolve_domain(gc, &disk->backend_domain);
>  }
> 
>  int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
> @@ -1973,7 +1996,7 @@ int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
>          return ERROR_INVAL;
>      }
> 
> -    device->backend_domid = disk->backend_domid;
> +    device->backend_domid = disk->backend_domain.domid;
>      device->backend_devid = devid;
> 
>      switch (disk->backend) {
> @@ -2133,7 +2156,7 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
>          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> 
>          flexarray_append(front, "backend-id");
> -        flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
> +        flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domain.domid));
>          flexarray_append(front, "state");
>          flexarray_append(front, libxl__sprintf(gc, "%d", 1));
>          flexarray_append(front, "virtual-device");
> @@ -2298,7 +2321,7 @@ static int libxl__append_disk_list_of_type(libxl__gc *gc,
>              p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
>              if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
>                  goto out;
> -            pdisk->backend_domid = 0;
> +            pdisk->backend_domain.domid = 0;
>              *ndisks += 1;
>          }
>      }
> @@ -2759,7 +2782,7 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>          LOG(ERROR, "unable to get current hotplug scripts execution setting");
>          return run_hotplug_scripts;
>      }
> -    if (nic->backend_domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
> +    if (nic->backend_domain.domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
>          LOG(ERROR, "cannot use a backend domain different than %d if"
>                     "hotplug scripts are executed from libxl",
>                     LIBXL_TOOLSTACK_DOMID);
> @@ -2784,7 +2807,7 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>          abort();
>      }
> 
> -    return 0;
> +    return libxl__resolve_domain(gc, &nic->backend_domain);
>  }
> 
>  static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
> @@ -2792,7 +2815,7 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
>                                    libxl__device *device)
>  {
>      device->backend_devid    = nic->devid;
> -    device->backend_domid    = nic->backend_domid;
> +    device->backend_domid    = nic->backend_domain.domid;
>      device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
>      device->devid            = nic->devid;
>      device->domid            = domid;
> @@ -2874,7 +2897,7 @@ void libxl__device_nic_add(libxl__egc *egc, uint32_t domid,
>                                       libxl_nic_type_to_string(nic->nictype)));
> 
>      flexarray_append(front, "backend-id");
> -    flexarray_append(front, libxl__sprintf(gc, "%d", nic->backend_domid));
> +    flexarray_append(front, libxl__sprintf(gc, "%d", nic->backend_domain.domid));
>      flexarray_append(front, "state");
>      flexarray_append(front, libxl__sprintf(gc, "%d", 1));
>      flexarray_append(front, "handle");
> @@ -2991,7 +3014,7 @@ static int libxl__append_nic_list_of_type(libxl__gc *gc,
>              const char *p;
>              p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
>              libxl__device_nic_from_xs_be(gc, p, pnic);
> -            pnic->backend_domid = 0;
> +            pnic->backend_domain.domid = 0;
>          }
>      }
>      return 0;
> @@ -3144,7 +3167,7 @@ out:
> 
>  int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
>  {
> -    return 0;
> +    return libxl__resolve_domain(gc, &vkb->backend_domain);
>  }
> 
>  static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
> @@ -3152,7 +3175,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
>                                    libxl__device *device)
>  {
>      device->backend_devid = vkb->devid;
> -    device->backend_domid = vkb->backend_domid;
> +    device->backend_domid = vkb->backend_domain.domid;
>      device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
>      device->devid = vkb->devid;
>      device->domid = domid;
> @@ -3205,7 +3228,7 @@ int libxl__device_vkb_add(libxl__gc *gc, uint32_t domid,
>      flexarray_append(back, libxl__domid_to_name(gc, domid));
> 
>      flexarray_append(front, "backend-id");
> -    flexarray_append(front, libxl__sprintf(gc, "%d", vkb->backend_domid));
> +    flexarray_append(front, libxl__sprintf(gc, "%d", vkb->backend_domain.domid));
>      flexarray_append(front, "state");
>      flexarray_append(front, libxl__sprintf(gc, "%d", 1));
> 
> @@ -3236,7 +3259,7 @@ int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
>      libxl_defbool_setdefault(&vfb->sdl.enable, false);
>      libxl_defbool_setdefault(&vfb->sdl.opengl, false);
> 
> -    return 0;
> +    return libxl__resolve_domain(gc, &vfb->backend_domain);
>  }
> 
>  static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
> @@ -3244,7 +3267,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
>                                    libxl__device *device)
>  {
>      device->backend_devid = vfb->devid;
> -    device->backend_domid = vfb->backend_domid;
> +    device->backend_domid = vfb->backend_domain.domid;
>      device->backend_kind = LIBXL__DEVICE_KIND_VFB;
>      device->devid = vfb->devid;
>      device->domid = domid;
> @@ -3309,7 +3332,7 @@ int libxl__device_vfb_add(libxl__gc *gc, uint32_t domid, libxl_device_vfb *vfb)
>      }
> 
>      flexarray_append_pair(front, "backend-id",
> -                          libxl__sprintf(gc, "%d", vfb->backend_domid));
> +                          libxl__sprintf(gc, "%d", vfb->backend_domain.domid));
>      flexarray_append_pair(front, "state", libxl__sprintf(gc, "%d", 1));
> 
>      libxl__device_generic_add(gc, XBT_NULL, &device,
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index c036dc1..82a55ea 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -644,13 +644,15 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
>      libxl_device_vfb_init(vfb);
>      libxl_device_vkb_init(vkb);
> 
> -    vfb->backend_domid = 0;
> +    vfb->backend_domain.domid = 0;
> +    vfb->backend_domain.name = NULL;
>      vfb->devid = 0;
>      vfb->vnc = b_info->u.hvm.vnc;
>      vfb->keymap = b_info->u.hvm.keymap;
>      vfb->sdl = b_info->u.hvm.sdl;
> 
> -    vkb->backend_domid = 0;
> +    vkb->backend_domain.domid = 0;
> +    vkb->backend_domain.name = NULL;
>      vkb->devid = 0;
>      return 0;
>  }
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 93524f0..6ec7967 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -136,6 +136,11 @@ libxl_vga_interface_type = Enumeration("vga_interface_type", [
>  # Complex libxl types
>  #
> 
> +libxl_domain_desc = Struct("domain_desc", [
> +    ("domid", libxl_domid),
> +    ("name",  string),
> +    ])
> +
>  libxl_ioport_range = Struct("ioport_range", [
>      ("first", uint32),
>      ("number", uint32),
> @@ -344,7 +349,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>  )
> 
>  libxl_device_vfb = Struct("device_vfb", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain",libxl_domain_desc),
>      ("devid",         libxl_devid),
>      ("vnc",           libxl_vnc_info),
>      ("sdl",           libxl_sdl_info),
> @@ -353,12 +358,12 @@ libxl_device_vfb = Struct("device_vfb", [
>      ])
> 
>  libxl_device_vkb = Struct("device_vkb", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain", libxl_domain_desc),
>      ("devid", libxl_devid),
>      ])
> 
>  libxl_device_disk = Struct("device_disk", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain", libxl_domain_desc),
>      ("pdev_path", string),
>      ("vdev", string),
>      ("backend", libxl_disk_backend),
> @@ -370,7 +375,7 @@ libxl_device_disk = Struct("device_disk", [
>      ])
> 
>  libxl_device_nic = Struct("device_nic", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain", libxl_domain_desc),
>      ("devid", libxl_devid),
>      ("mtu", integer),
>      ("model", string),
> @@ -397,7 +402,7 @@ libxl_device_pci = Struct("device_pci", [
>      ])
> 
>  libxl_device_vtpm = Struct("device_vtpm", [
> -    ("backend_domid",    libxl_domid),
> +    ("backend_domain",   libxl_domain_desc),
>      ("devid",            libxl_devid),
>      ("uuid",             libxl_uuid),
>  ])
> diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
> index 8f78790..86a310e 100644
> --- a/tools/libxl/libxl_utils.c
> +++ b/tools/libxl/libxl_utils.c
> @@ -478,7 +478,7 @@ int libxl_uuid_to_device_vtpm(libxl_ctx *ctx, uint32_t domid,
>      rc = 1;
>      for (i = 0; i < nb; ++i) {
>          if(!libxl_uuid_compare(uuid, &vtpms[i].uuid)) {
> -            vtpm->backend_domid = vtpms[i].backend_domid;
> +            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
>              vtpm->devid = vtpms[i].devid;
>              libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
>              rc = 0;
> diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
> index bee16a1..a65030f 100644
> --- a/tools/libxl/libxlu_disk_l.l
> +++ b/tools/libxl/libxlu_disk_l.l
> @@ -168,6 +168,7 @@ devtype=disk,?      { DPC->disk->is_cdrom = 0; }
>  devtype=[^,]*,?        { xlu__disk_err(DPC,yytext,"unknown value for type"); }
> 
>  access=[^,]*,? { STRIP(','); setaccess(DPC, FROMEQUALS); }
> +backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domain.name, FROMEQUALS); }
>  backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
> 
>  vdev=[^,]*,?   { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index e964bf1..bc22f3c 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1090,12 +1090,7 @@ static void parse_config_data(const char *config_source,
>                       break;
>                    *p2 = '\0';
>                    if (!strcmp(p, "backend")) {
> -                     if(domain_qualifier_to_domid(p2 + 1, &(vtpm->backend_domid), 0))
> -                     {
> -                        fprintf(stderr,
> -                              "Specified vtpm backend domain `%s' does not exist!\n", p2 + 1);
> -                        exit(1);
> -                     }
> +                     vtpm->backend_domain.name = strdup(p2 + 1);
>                       got_backend = true;
>                    } else if(!strcmp(p, "uuid")) {
>                       if( libxl_uuid_from_string(&vtpm->uuid, p2 + 1) ) {
> @@ -1190,11 +1185,8 @@ static void parse_config_data(const char *config_source,
>                      free(nic->ifname);
>                      nic->ifname = strdup(p2 + 1);
>                  } else if (!strcmp(p, "backend")) {
> -                    if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
> -                        fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
> -                        nic->backend_domid = 0;
> -                    }
> -                    if (nic->backend_domid != 0 && run_hotplug_scripts) {
> +                    nic->backend_domain.name = strdup(p2 + 1);
> +                    if (run_hotplug_scripts) {
>                          fprintf(stderr, "ERROR: the vif 'backend=' option "
>                                  "cannot be used in conjunction with "
>                                  "run_hotplug_scripts, please set "
> @@ -2431,8 +2423,6 @@ static void cd_insert(uint32_t domid, const char *virtdev, char *phys)
> 
>      parse_disk_config(&config, buf, &disk);
> 
> -    disk.backend_domid = 0;
> -
>      libxl_cdrom_insert(ctx, domid, &disk, NULL);
> 
>      libxl_device_disk_dispose(&disk);
> @@ -5516,11 +5506,7 @@ int main_networkattach(int argc, char **argv)
>          } else if (MATCH_OPTION("script", *argv, oparg)) {
>              replace_string(&nic.script, oparg);
>          } else if (MATCH_OPTION("backend", *argv, oparg)) {
> -            if(libxl_name_to_domid(ctx, oparg, &val)) {
> -                fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
> -                val = 0;
> -            }
> -            nic.backend_domid = val;
> +            replace_string(&nic.backend_domain.name, oparg);
>          } else if (MATCH_OPTION("vifname", *argv, oparg)) {
>              replace_string(&nic.ifname, oparg);
>          } else if (MATCH_OPTION("model", *argv, oparg)) {
> @@ -5623,8 +5609,8 @@ int main_networkdetach(int argc, char **argv)
>  int main_blockattach(int argc, char **argv)
>  {
>      int opt;
> -    uint32_t fe_domid, be_domid = 0;
> -    libxl_device_disk disk = { 0 };
> +    uint32_t fe_domid;
> +    libxl_device_disk disk;
>      XLU_Config *config = 0;
> 
>      if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
> @@ -5639,8 +5625,6 @@ int main_blockattach(int argc, char **argv)
>      parse_disk_config_multistring
>          (&config, argc-optind, (const char* const*)argv + optind, &disk);
> 
> -    disk.backend_domid = be_domid;
> -
>      if (dryrun_only) {
>          char *json = libxl_device_disk_to_json(ctx, &disk);
>          printf("disk: %s\n", json);
> @@ -5720,7 +5704,6 @@ int main_vtpmattach(int argc, char **argv)
>      int opt;
>      libxl_device_vtpm vtpm;
>      char *oparg;
> -    unsigned int val;
>      uint32_t domid;
> 
>      if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
> @@ -5740,12 +5723,7 @@ int main_vtpmattach(int argc, char **argv)
>                  return 1;
>              }
>          } else if (MATCH_OPTION("backend", *argv, oparg)) {
> -            if(domain_qualifier_to_domid(oparg, &val, 0)) {
> -                fprintf(stderr,
> -                      "Specified backend domain does not exist, defaulting to Dom0\n");
> -                val = 0;
> -            }
> -            vtpm.backend_domid = val;
> +            replace_string(&vtpm.backend_domain.name, oparg);
>          } else {
>              fprintf(stderr, "unrecognized argument `%s'\n", *argv);
>              return 1;
> diff --git a/tools/libxl/xl_sxp.c b/tools/libxl/xl_sxp.c
> index a16a025..6cb8d40 100644
> --- a/tools/libxl/xl_sxp.c
> +++ b/tools/libxl/xl_sxp.c
> @@ -163,7 +163,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
>      for (i = 0; i < d_config->num_disks; i++) {
>          printf("\t(device\n");
>          printf("\t\t(tap\n");
> -        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domid);
> +        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domain.domid);
>          printf("\t\t\t(frontend_domid %d)\n", domid);
>          printf("\t\t\t(physpath %s)\n", d_config->disks[i].pdev_path);
>          printf("\t\t\t(phystype %d)\n", d_config->disks[i].backend);
> @@ -180,7 +180,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
>          printf("\t\t(vif\n");
>          if (d_config->nics[i].ifname)
>              printf("\t\t\t(vifname %s)\n", d_config->nics[i].ifname);
> -        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domid);
> +        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domain.domid);
>          printf("\t\t\t(frontend_domid %d)\n", domid);
>          printf("\t\t\t(devid %d)\n", d_config->nics[i].devid);
>          printf("\t\t\t(mtu %d)\n", d_config->nics[i].mtu);
> @@ -210,7 +210,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
>      for (i = 0; i < d_config->num_vfbs; i++) {
>          printf("\t(device\n");
>          printf("\t\t(vfb\n");
> -        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domid);
> +        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domain.domid);
>          printf("\t\t\t(frontend_domid %d)\n", domid);
>          printf("\t\t\t(devid %d)\n", d_config->vfbs[i].devid);
>          printf("\t\t\t(vnc %s)\n",
> --
> 1.7.11.7
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 09:58:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 09:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXSW-0007ad-KZ; Mon, 17 Dec 2012 09:58:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkXSV-0007aW-Ay
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 09:58:03 +0000
Received: from [85.158.137.99:16198] by server-2.bemta-3.messagelabs.com id
	67/45-11239-5ACEEC05; Mon, 17 Dec 2012 09:57:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355738276!17364606!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23549 invoked from network); 17 Dec 2012 09:57:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 09:57:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="193481"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 09:57:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 09:57:54 +0000
Message-ID: <1355738272.14620.10.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Mon, 17 Dec 2012 09:57:52 +0000
In-Reply-To: <1355435230-4176-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1355435230-4176-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/v2] libxl: postpone backend name
	resolution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 21:47 +0000, Daniel De Graaf wrote:
> This replaces the backend_domid field in libxl devices with a structure
> allowing either a domid or a domain name to be specified.  The domain
> name is resolved into a domain ID in the _setdefault function when
> adding the device.  This change allows the backend of the block devices
> to be specified (which previously required passing the libxl_ctx down
> into the block device parser).

I haven't reviewed this in detail yet, but my first thought was that this
is a libxl API change, and I can't see any provision for backwards
compatibility.

If you are doing something clever which I've missed then it deserves a
comment ;-)

Assuming not, the simplest solution would be to remove the struct and just
add the name as a separate field to each affected device. Or maybe an
anonymous struct would do the job, if its members were backend_domid and
backend_name?
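For the vkb device, the separate-field alternative might look like this in
the IDL (a sketch only; "backend_name" is a hypothetical field name, and the
existing backend_domid keeps its old name and type so current callers still
build):

```python
libxl_device_vkb = Struct("device_vkb", [
    ("backend_domid", libxl_domid),  # numeric id, unchanged from today
    ("backend_name",  string),       # optional name, resolved at add time
    ("devid", libxl_devid),
    ])
```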

You could also potentially do something with the provisions around
LIBXL_API_VERSION documented in libxl.h.

Ian.

> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> 
> ---
> 
> This patch does not include the changes to tools/libxl/libxlu_disk_l.c
> and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
> changes due to different generator versions.
> 
>  docs/misc/xl-disk-configuration.txt | 12 +++++++
>  tools/libxl/libxl.c                 | 65 +++++++++++++++++++++++++------------
>  tools/libxl/libxl_dm.c              |  6 ++--
>  tools/libxl/libxl_types.idl         | 15 ++++++---
>  tools/libxl/libxl_utils.c           |  2 +-
>  tools/libxl/libxlu_disk_l.l         |  1 +
>  tools/libxl/xl_cmdimpl.c            | 36 ++++----------------
>  tools/libxl/xl_sxp.c                |  6 ++--
>  8 files changed, 82 insertions(+), 61 deletions(-)
> 
> diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
> index 86c16be..5bd456d 100644
> --- a/docs/misc/xl-disk-configuration.txt
> +++ b/docs/misc/xl-disk-configuration.txt
> @@ -139,6 +139,18 @@ cdrom
>  Convenience alias for "devtype=cdrom".
> 
> 
> +backend=<domain-name>
> +---------------------
> +
> +Description:           Designates a backend domain for the device
> +Supported values:      Valid domain names
> +Mandatory:             No
> +
> +Specifies the backend domain which this device should attach to. This
> +defaults to domain 0. Specifying another domain requires setting up a
> +driver domain which is outside the scope of this document.
> +
> +
>  backendtype=<backend-type>
>  --------------------------
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 6c4455e..edb29b3 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1153,7 +1153,7 @@ static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
>      sscanf(backend,
>              "/local/domain/%d/backend/%" TOSTRING(BACKEND_STRING_SIZE)
>             "[a-z]/%*d/%*d",
> -           &disk->backend_domid, backend_type);
> +           &disk->backend_domain.domid, backend_type);
>      if (!strcmp(backend_type, "tap") || !strcmp(backend_type, "vbd")) {
>          disk->backend = LIBXL_DISK_BACKEND_TAP;
>      } else if (!strcmp(backend_type, "qdisk")) {
> @@ -1710,13 +1710,36 @@ out:
>      return;
>  }
> 
> +/* backend domain name-to-domid conversion utility */
> +static int libxl__resolve_domain(libxl__gc *gc, struct libxl_domain_desc *desc)
> +{
> +    int i, rv;
> +    const char *name = desc->name;
> +    if (!name)
> +        return 0;
> +    for (i=0; name[i]; i++) {
> +        if (!isdigit(name[i])) {
> +            rv = libxl_name_to_domid(libxl__gc_owner(gc), name, &desc->domid);
> +            if (rv)
> +                return rv;
> +            goto resolved;
> +        }
> +    }
> +    desc->domid = atoi(name);
> +
> + resolved:
> +    free(desc->name);
> +    desc->name = NULL;
> +    return 0;
> +}
> +
>  /******************************************************************************/
>  int libxl__device_vtpm_setdefault(libxl__gc *gc, libxl_device_vtpm *vtpm)
>  {
>     if(libxl_uuid_is_nil(&vtpm->uuid)) {
>        libxl_uuid_generate(&vtpm->uuid);
>     }
> -   return 0;
> +   return libxl__resolve_domain(gc, &vtpm->backend_domain);
>  }
> 
>  static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
> @@ -1724,7 +1747,7 @@ static int libxl__device_from_vtpm(libxl__gc *gc, uint32_t domid,
>                                     libxl__device *device)
>  {
>     device->backend_devid   = vtpm->devid;
> -   device->backend_domid   = vtpm->backend_domid;
> +   device->backend_domid   = vtpm->backend_domain.domid;
>     device->backend_kind    = LIBXL__DEVICE_KIND_VTPM;
>     device->devid           = vtpm->devid;
>     device->domid           = domid;
> @@ -1783,7 +1806,7 @@ void libxl__device_vtpm_add(libxl__egc *egc, uint32_t domid,
>      flexarray_append(back, "False");
> 
>      flexarray_append(front, "backend-id");
> -    flexarray_append(front, GCSPRINTF("%d", vtpm->backend_domid));
> +    flexarray_append(front, GCSPRINTF("%d", vtpm->backend_domain.domid));
>      flexarray_append(front, "state");
>      flexarray_append(front, GCSPRINTF("%d", 1));
>      flexarray_append(front, "handle");
> @@ -1832,7 +1855,7 @@ libxl_device_vtpm *libxl_device_vtpm_list(libxl_ctx *ctx, uint32_t domid, int *n
>            tmp = libxl__xs_read(gc, XBT_NULL,
>                  GCSPRINTF("%s/%s/backend_id",
>                     fe_path, *dir));
> -          vtpm->backend_domid = atoi(tmp);
> +          vtpm->backend_domain.domid = atoi(tmp);
> 
>            tmp = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/uuid", be_path));
>            if(tmp) {
> @@ -1934,7 +1957,7 @@ int libxl_devid_to_device_vtpm(libxl_ctx *ctx,
>      rc = 1;
>      for (i = 0; i < nb; ++i) {
>          if(devid == vtpms[i].devid) {
> -            vtpm->backend_domid = vtpms[i].backend_domid;
> +            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
>              vtpm->devid = vtpms[i].devid;
>              libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
>              rc = 0;
> @@ -1956,7 +1979,7 @@ int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
>      rc = libxl__device_disk_set_backend(gc, disk);
>      if (rc) return rc;
> 
> -    return rc;
> +    return libxl__resolve_domain(gc, &disk->backend_domain);
>  }
> 
>  int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
> @@ -1973,7 +1996,7 @@ int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
>          return ERROR_INVAL;
>      }
> 
> -    device->backend_domid = disk->backend_domid;
> +    device->backend_domid = disk->backend_domain.domid;
>      device->backend_devid = devid;
> 
>      switch (disk->backend) {
> @@ -2133,7 +2156,7 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
>          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> 
>          flexarray_append(front, "backend-id");
> -        flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
> +        flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domain.domid));
>          flexarray_append(front, "state");
>          flexarray_append(front, libxl__sprintf(gc, "%d", 1));
>          flexarray_append(front, "virtual-device");
> @@ -2298,7 +2321,7 @@ static int libxl__append_disk_list_of_type(libxl__gc *gc,
>              p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
>              if ((rc=libxl__device_disk_from_xs_be(gc, p, pdisk)))
>                  goto out;
> -            pdisk->backend_domid = 0;
> +            pdisk->backend_domain.domid = 0;
>              *ndisks += 1;
>          }
>      }
> @@ -2759,7 +2782,7 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>          LOG(ERROR, "unable to get current hotplug scripts execution setting");
>          return run_hotplug_scripts;
>      }
> -    if (nic->backend_domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
> +    if (nic->backend_domain.domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
>          LOG(ERROR, "cannot use a backend domain different than %d if"
>                     "hotplug scripts are executed from libxl",
>                     LIBXL_TOOLSTACK_DOMID);
> @@ -2784,7 +2807,7 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>          abort();
>      }
> 
> -    return 0;
> +    return libxl__resolve_domain(gc, &nic->backend_domain);
>  }
> 
>  static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
> @@ -2792,7 +2815,7 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
>                                    libxl__device *device)
>  {
>      device->backend_devid    = nic->devid;
> -    device->backend_domid    = nic->backend_domid;
> +    device->backend_domid    = nic->backend_domain.domid;
>      device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
>      device->devid            = nic->devid;
>      device->domid            = domid;
> @@ -2874,7 +2897,7 @@ void libxl__device_nic_add(libxl__egc *egc, uint32_t domid,
>                                       libxl_nic_type_to_string(nic->nictype)));
> 
>      flexarray_append(front, "backend-id");
> -    flexarray_append(front, libxl__sprintf(gc, "%d", nic->backend_domid));
> +    flexarray_append(front, libxl__sprintf(gc, "%d", nic->backend_domain.domid));
>      flexarray_append(front, "state");
>      flexarray_append(front, libxl__sprintf(gc, "%d", 1));
>      flexarray_append(front, "handle");
> @@ -2991,7 +3014,7 @@ static int libxl__append_nic_list_of_type(libxl__gc *gc,
>              const char *p;
>              p = libxl__sprintf(gc, "%s/%s", be_path, *dir);
>              libxl__device_nic_from_xs_be(gc, p, pnic);
> -            pnic->backend_domid = 0;
> +            pnic->backend_domain.domid = 0;
>          }
>      }
>      return 0;
> @@ -3144,7 +3167,7 @@ out:
> 
>  int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
>  {
> -    return 0;
> +    return libxl__resolve_domain(gc, &vkb->backend_domain);
>  }
> 
>  static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
> @@ -3152,7 +3175,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
>                                    libxl__device *device)
>  {
>      device->backend_devid = vkb->devid;
> -    device->backend_domid = vkb->backend_domid;
> +    device->backend_domid = vkb->backend_domain.domid;
>      device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
>      device->devid = vkb->devid;
>      device->domid = domid;
> @@ -3205,7 +3228,7 @@ int libxl__device_vkb_add(libxl__gc *gc, uint32_t domid,
>      flexarray_append(back, libxl__domid_to_name(gc, domid));
> 
>      flexarray_append(front, "backend-id");
> -    flexarray_append(front, libxl__sprintf(gc, "%d", vkb->backend_domid));
> +    flexarray_append(front, libxl__sprintf(gc, "%d", vkb->backend_domain.domid));
>      flexarray_append(front, "state");
>      flexarray_append(front, libxl__sprintf(gc, "%d", 1));
> 
> @@ -3236,7 +3259,7 @@ int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
>      libxl_defbool_setdefault(&vfb->sdl.enable, false);
>      libxl_defbool_setdefault(&vfb->sdl.opengl, false);
> 
> -    return 0;
> +    return libxl__resolve_domain(gc, &vfb->backend_domain);
>  }
> 
>  static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
> @@ -3244,7 +3267,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
>                                    libxl__device *device)
>  {
>      device->backend_devid = vfb->devid;
> -    device->backend_domid = vfb->backend_domid;
> +    device->backend_domid = vfb->backend_domain.domid;
>      device->backend_kind = LIBXL__DEVICE_KIND_VFB;
>      device->devid = vfb->devid;
>      device->domid = domid;
> @@ -3309,7 +3332,7 @@ int libxl__device_vfb_add(libxl__gc *gc, uint32_t domid, libxl_device_vfb *vfb)
>      }
> 
>      flexarray_append_pair(front, "backend-id",
> -                          libxl__sprintf(gc, "%d", vfb->backend_domid));
> +                          libxl__sprintf(gc, "%d", vfb->backend_domain.domid));
>      flexarray_append_pair(front, "state", libxl__sprintf(gc, "%d", 1));
> 
>      libxl__device_generic_add(gc, XBT_NULL, &device,
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index c036dc1..82a55ea 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -644,13 +644,15 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
>      libxl_device_vfb_init(vfb);
>      libxl_device_vkb_init(vkb);
> 
> -    vfb->backend_domid = 0;
> +    vfb->backend_domain.domid = 0;
> +    vfb->backend_domain.name = NULL;
>      vfb->devid = 0;
>      vfb->vnc = b_info->u.hvm.vnc;
>      vfb->keymap = b_info->u.hvm.keymap;
>      vfb->sdl = b_info->u.hvm.sdl;
> 
> -    vkb->backend_domid = 0;
> +    vkb->backend_domain.domid = 0;
> +    vkb->backend_domain.name = NULL;
>      vkb->devid = 0;
>      return 0;
>  }
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 93524f0..6ec7967 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -136,6 +136,11 @@ libxl_vga_interface_type = Enumeration("vga_interface_type", [
>  # Complex libxl types
>  #
> 
> +libxl_domain_desc = Struct("domain_desc", [
> +    ("domid", libxl_domid),
> +    ("name",  string),
> +    ])
> +
>  libxl_ioport_range = Struct("ioport_range", [
>      ("first", uint32),
>      ("number", uint32),
> @@ -344,7 +349,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>  )
> 
>  libxl_device_vfb = Struct("device_vfb", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain",libxl_domain_desc),
>      ("devid",         libxl_devid),
>      ("vnc",           libxl_vnc_info),
>      ("sdl",           libxl_sdl_info),
> @@ -353,12 +358,12 @@ libxl_device_vfb = Struct("device_vfb", [
>      ])
> 
>  libxl_device_vkb = Struct("device_vkb", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain", libxl_domain_desc),
>      ("devid", libxl_devid),
>      ])
> 
>  libxl_device_disk = Struct("device_disk", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain", libxl_domain_desc),
>      ("pdev_path", string),
>      ("vdev", string),
>      ("backend", libxl_disk_backend),
> @@ -370,7 +375,7 @@ libxl_device_disk = Struct("device_disk", [
>      ])
> 
>  libxl_device_nic = Struct("device_nic", [
> -    ("backend_domid", libxl_domid),
> +    ("backend_domain", libxl_domain_desc),
>      ("devid", libxl_devid),
>      ("mtu", integer),
>      ("model", string),
> @@ -397,7 +402,7 @@ libxl_device_pci = Struct("device_pci", [
>      ])
> 
>  libxl_device_vtpm = Struct("device_vtpm", [
> -    ("backend_domid",    libxl_domid),
> +    ("backend_domain",   libxl_domain_desc),
>      ("devid",            libxl_devid),
>      ("uuid",             libxl_uuid),
>  ])
> diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
> index 8f78790..86a310e 100644
> --- a/tools/libxl/libxl_utils.c
> +++ b/tools/libxl/libxl_utils.c
> @@ -478,7 +478,7 @@ int libxl_uuid_to_device_vtpm(libxl_ctx *ctx, uint32_t domid,
>      rc = 1;
>      for (i = 0; i < nb; ++i) {
>          if(!libxl_uuid_compare(uuid, &vtpms[i].uuid)) {
> -            vtpm->backend_domid = vtpms[i].backend_domid;
> +            vtpm->backend_domain.domid = vtpms[i].backend_domain.domid;
>              vtpm->devid = vtpms[i].devid;
>              libxl_uuid_copy(&vtpm->uuid, &vtpms[i].uuid);
>              rc = 0;
> diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
> index bee16a1..a65030f 100644
> --- a/tools/libxl/libxlu_disk_l.l
> +++ b/tools/libxl/libxlu_disk_l.l
> @@ -168,6 +168,7 @@ devtype=disk,?      { DPC->disk->is_cdrom = 0; }
>  devtype=[^,]*,?        { xlu__disk_err(DPC,yytext,"unknown value for type"); }
> 
>  access=[^,]*,? { STRIP(','); setaccess(DPC, FROMEQUALS); }
> +backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domain.name, FROMEQUALS); }
>  backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
> 
>  vdev=[^,]*,?   { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index e964bf1..bc22f3c 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1090,12 +1090,7 @@ static void parse_config_data(const char *config_source,
>                       break;
>                    *p2 = '\0';
>                    if (!strcmp(p, "backend")) {
> -                     if(domain_qualifier_to_domid(p2 + 1, &(vtpm->backend_domid), 0))
> -                     {
> -                        fprintf(stderr,
> -                              "Specified vtpm backend domain `%s' does not exist!\n", p2 + 1);
> -                        exit(1);
> -                     }
> +                     vtpm->backend_domain.name = strdup(p2 + 1);
>                       got_backend = true;
>                    } else if(!strcmp(p, "uuid")) {
>                       if( libxl_uuid_from_string(&vtpm->uuid, p2 + 1) ) {
> @@ -1190,11 +1185,8 @@ static void parse_config_data(const char *config_source,
>                      free(nic->ifname);
>                      nic->ifname = strdup(p2 + 1);
>                  } else if (!strcmp(p, "backend")) {
> -                    if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
> -                        fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
> -                        nic->backend_domid = 0;
> -                    }
> -                    if (nic->backend_domid != 0 && run_hotplug_scripts) {
> +                    nic->backend_domain.name = strdup(p2 + 1);
> +                    if (run_hotplug_scripts) {
>                          fprintf(stderr, "ERROR: the vif 'backend=' option "
>                                  "cannot be used in conjunction with "
>                                  "run_hotplug_scripts, please set "
> @@ -2431,8 +2423,6 @@ static void cd_insert(uint32_t domid, const char *virtdev, char *phys)
> 
>      parse_disk_config(&config, buf, &disk);
> 
> -    disk.backend_domid = 0;
> -
>      libxl_cdrom_insert(ctx, domid, &disk, NULL);
> 
>      libxl_device_disk_dispose(&disk);
> @@ -5516,11 +5506,7 @@ int main_networkattach(int argc, char **argv)
>          } else if (MATCH_OPTION("script", *argv, oparg)) {
>              replace_string(&nic.script, oparg);
>          } else if (MATCH_OPTION("backend", *argv, oparg)) {
> -            if(libxl_name_to_domid(ctx, oparg, &val)) {
> -                fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
> -                val = 0;
> -            }
> -            nic.backend_domid = val;
> +            replace_string(&nic.backend_domain.name, oparg);
>          } else if (MATCH_OPTION("vifname", *argv, oparg)) {
>              replace_string(&nic.ifname, oparg);
>          } else if (MATCH_OPTION("model", *argv, oparg)) {
> @@ -5623,8 +5609,8 @@ int main_networkdetach(int argc, char **argv)
>  int main_blockattach(int argc, char **argv)
>  {
>      int opt;
> -    uint32_t fe_domid, be_domid = 0;
> -    libxl_device_disk disk = { 0 };
> +    uint32_t fe_domid;
> +    libxl_device_disk disk;
>      XLU_Config *config = 0;
> 
>      if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
> @@ -5639,8 +5625,6 @@ int main_blockattach(int argc, char **argv)
>      parse_disk_config_multistring
>          (&config, argc-optind, (const char* const*)argv + optind, &disk);
> 
> -    disk.backend_domid = be_domid;
> -
>      if (dryrun_only) {
>          char *json = libxl_device_disk_to_json(ctx, &disk);
>          printf("disk: %s\n", json);
> @@ -5720,7 +5704,6 @@ int main_vtpmattach(int argc, char **argv)
>      int opt;
>      libxl_device_vtpm vtpm;
>      char *oparg;
> -    unsigned int val;
>      uint32_t domid;
> 
>      if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
> @@ -5740,12 +5723,7 @@ int main_vtpmattach(int argc, char **argv)
>                  return 1;
>              }
>          } else if (MATCH_OPTION("backend", *argv, oparg)) {
> -            if(domain_qualifier_to_domid(oparg, &val, 0)) {
> -                fprintf(stderr,
> -                      "Specified backend domain does not exist, defaulting to Dom0\n");
> -                val = 0;
> -            }
> -            vtpm.backend_domid = val;
> +            replace_string(&vtpm.backend_domain.name, oparg);
>          } else {
>              fprintf(stderr, "unrecognized argument `%s'\n", *argv);
>              return 1;
> diff --git a/tools/libxl/xl_sxp.c b/tools/libxl/xl_sxp.c
> index a16a025..6cb8d40 100644
> --- a/tools/libxl/xl_sxp.c
> +++ b/tools/libxl/xl_sxp.c
> @@ -163,7 +163,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
>      for (i = 0; i < d_config->num_disks; i++) {
>          printf("\t(device\n");
>          printf("\t\t(tap\n");
> -        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domid);
> +        printf("\t\t\t(backend_domid %d)\n", d_config->disks[i].backend_domain.domid);
>          printf("\t\t\t(frontend_domid %d)\n", domid);
>          printf("\t\t\t(physpath %s)\n", d_config->disks[i].pdev_path);
>          printf("\t\t\t(phystype %d)\n", d_config->disks[i].backend);
> @@ -180,7 +180,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
>          printf("\t\t(vif\n");
>          if (d_config->nics[i].ifname)
>              printf("\t\t\t(vifname %s)\n", d_config->nics[i].ifname);
> -        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domid);
> +        printf("\t\t\t(backend_domid %d)\n", d_config->nics[i].backend_domain.domid);
>          printf("\t\t\t(frontend_domid %d)\n", domid);
>          printf("\t\t\t(devid %d)\n", d_config->nics[i].devid);
>          printf("\t\t\t(mtu %d)\n", d_config->nics[i].mtu);
> @@ -210,7 +210,7 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config)
>      for (i = 0; i < d_config->num_vfbs; i++) {
>          printf("\t(device\n");
>          printf("\t\t(vfb\n");
> -        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domid);
> +        printf("\t\t\t(backend_domid %d)\n", d_config->vfbs[i].backend_domain.domid);
>          printf("\t\t\t(frontend_domid %d)\n", domid);
>          printf("\t\t\t(devid %d)\n", d_config->vfbs[i].devid);
>          printf("\t\t\t(vnc %s)\n",
> --
> 1.7.11.7
> 
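With a change like the one above applied, a vif backend domain is specified by name in xl and resolved to a domid inside libxl, instead of xl resolving it up front. A hypothetical invocation (the guest and driver-domain names are illustrative, not from the patch):

```
# Attach a vif to guest "guest1", with the backend run by the
# driver domain named "netdriver" (names are examples only):
xl network-attach guest1 backend=netdriver bridge=xenbr0

# The same backend= key appears in a config-file vif spec:
#   vif = [ 'bridge=xenbr0,backend=netdriver' ]
```

Note the patch also makes backend= mutually exclusive with run_hotplug_scripts, since hotplug scripts are only run for devices backed by dom0.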



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 10:04:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 10:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXYA-0007rX-EQ; Mon, 17 Dec 2012 10:03:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkXY9-0007rR-ML
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 10:03:53 +0000
Received: from [85.158.143.35:46346] by server-1.bemta-4.messagelabs.com id
	DB/1F-28401-80EEEC05; Mon, 17 Dec 2012 10:03:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355738631!5440546!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7501 invoked from network); 17 Dec 2012 10:03:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 10:03:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 10:03:51 +0000
Message-Id: <50CEFC1402000078000B0B07@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 10:03:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiao-Long Chen" <chenxiaolong@cxl.epac.to>
References: <50CE7E62.2020202@cxl.epac.to>
In-Reply-To: <50CE7E62.2020202@cxl.epac.to>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 03:07, Xiao-Long Chen <chenxiaolong@cxl.epac.to> wrote:
> Hi Xen developers,
> 
> I have been having a problem where only one CPU core is being detected.
> I'm using a Lenovo W520 with a UEFI firmware and an Intel Core i7
> 2720qm (4 real cores, 4 virtual cores). When I boot, I see
> "Dom0 has maximum 1 VCPUs", or something similar, scroll by.
> 
> In addition, only 10 GB out of my 12 GB of memory is recognized and
> ACPI is not working properly (CPU frequency scaling and battery info
> are not reported).
> 
> This problem has been reported before (for the same laptop too) here:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
> unfortunately, the person who sent the message didn't reply with more
> information.

And the solution had already been discussed on the list too -
depending on how you do things currently (not judgeable from
the incomplete set of logs you provided), switch to using xen.efi
rather than booting through GrUB2, and/or use a kernel that is
actually EFI-capable (the upstream one continues to require out
of tree patches).
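Booting xen.efi means the EFI firmware loads Xen directly, configured by a xen.cfg file placed alongside the binary on the EFI system partition. A minimal sketch, with placeholder kernel/initrd names to substitute for your own:

```ini
[global]
default=xen

[xen]
options=console=vga loglvl=all
kernel=vmlinuz root=/dev/sda2 ro
ramdisk=initrd.img
```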

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 10:11:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 10:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXeh-000834-9Y; Mon, 17 Dec 2012 10:10:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkXeg-00082z-5B
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 10:10:38 +0000
Received: from [193.109.254.147:28392] by server-11.bemta-14.messagelabs.com
	id E9/BF-02659-D9FEEC05; Mon, 17 Dec 2012 10:10:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355739035!10724737!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21467 invoked from network); 17 Dec 2012 10:10:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 10:10:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 10:10:35 +0000
Message-Id: <50CEFDA602000078000B0B11@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 10:10:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Alex Bligh" <alex@alex.org.uk>
References: <5B4525F296F6ABEB38B0E614@nimrod.local>
In-Reply-To: <5B4525F296F6ABEB38B0E614@nimrod.local>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Fatal crash on xen4.2 HVM + qemu-xen dm + NFS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.12.12 at 15:54, Alex Bligh <alex@alex.org.uk> wrote:
> [ 1416.992402] BUG: unable to handle kernel paging request at ffff88073fee6e00

Assuming the address above is valid in the first place (i.e. you
have at least 32G installed), this very much suggests access to
a ballooned out page. Could you therefore suppress the use of
ballooning for debugging purposes, and see whether the issue
goes away then?
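One way to take ballooning out of the picture for such a test, assuming the xl toolstack, is to give dom0 a fixed allocation on the hypervisor command line and stop xl from autoballooning (values are illustrative):

```ini
# Xen command line (e.g. in the GRUB entry): fixed dom0 allocation
# dom0_mem=4096M,max:4096M

# /etc/xen/xl.conf: don't balloon dom0 down when starting guests
autoballoon=0
```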

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 10:22:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 10:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXq6-0008HH-Pf; Mon, 17 Dec 2012 10:22:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkXq4-0008HC-Qj
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 10:22:25 +0000
Received: from [85.158.143.35:12078] by server-1.bemta-4.messagelabs.com id
	9E/8D-28401-F52FEC05; Mon, 17 Dec 2012 10:22:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355739737!16568159!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5816 invoked from network); 17 Dec 2012 10:22:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 10:22:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 10:22:17 +0000
Message-Id: <50CF006502000078000B0B30@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 10:22:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <greg@enjellic.com>,<ian.jackson@eu.citrix.com>
References: <osstest-14689-mainreport@xen.org>
In-Reply-To: <osstest-14689-mainreport@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.12.12 at 12:45, xen.org <ian.jackson@eu.citrix.com> wrote:
> flight 14689 xen-4.1-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/14689/ 
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
>  test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
>  test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
>  test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
>  test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
>  test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679

So this is what really should not have happened past the
intended-to-be-final RC.

> ------------------------------------------------------------
> changeset:   23428:93e17b0cd035
> tag:         tip
> user:        Greg Wettstein <greg@enjellic.com>
> date:        Thu Dec 13 14:35:58 2012 +0000
>     
>     libxl: avoid blktap2 deadlock on cleanup
>     
>     Establishes correct cleanup behavior for blktap devices.  This patch
>     implements the release of the backend device before calling for
>     the destruction of the userspace component of the blktap device.
>     
>     Without this patch the kernel xen-blkback driver deadlocks with
>     the blktap2 user control plane until the IPC channel is terminated by 
> the
>     timeout on the select() call.  This results in a noticeable delay
>     in the termination of the guest and causes the blktap minor
>     number which had been allocated to be orphaned.
>     
>     Signed-off-by: Greg Wettstein <greg@enjellic.com>
>     Acked-by: Ian Campbell <ian.campbell@citrix.com>
>     Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>     Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

How important is this change really to the 4.1 tree? I'd obviously
favor outright reverting it at this point (my understanding being
that the removal of the call to xs_rm() from libxl__device_destroy()
affected more than just tapdisk backends, which I guess was
assumed to be the case because of its neighboring with the call
to libxl__device_destroy_tapdisk()). Would that get the tree into
a worse state than 4.1.3 was in?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 10:30:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 10:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkXxC-0008Qa-Mg; Mon, 17 Dec 2012 10:29:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1TkXxB-0008QV-AP
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 10:29:45 +0000
Received: from [193.109.254.147:25046] by server-4.bemta-14.messagelabs.com id
	AE/E2-15233-814FEC05; Mon, 17 Dec 2012 10:29:44 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1355740181!10862839!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=2.0 required=7.0 tests=HTML_MESSAGE,
	RATWARE_GECKO_BUILD,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26634 invoked from network); 17 Dec 2012 10:29:42 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 10:29:42 -0000
Received: by mail-lb0-f173.google.com with SMTP id c1so4511955lbg.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 02:29:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type;
	bh=F2/ak21pNYCnxEd2lTgbvNplYBhKWu4IfFYO7j4WqBM=;
	b=De+k5nuZwvbV5HCtlBo2YQhFvv7lf1M86WIODwvLauYgd0YA7++Dxo/BJZ83FKHMe3
	QOxGhuu2KBDceUSQ74xYC3za4+VWvp5QYM73OTVdPdnouQU02G0NzwWrIM8tXwg977oo
	x7Q5TR0KTv/63RBmZ5Tc3gQG8zo54xAdmBQds1/Gag9BPOdk1RNtSH4KOXcJsE3R5nfh
	W90Kj5/bBN/B1JXA3gAJiDdYZAk7gxiUFAD32PwT0xKn0dGiLF3QIRLEIgA3qBLQB9ka
	GtvsuE6pH82b/Xp7HrjgibsBcEqdRx3gZLJfix3Yd31nZrGHgGtBwp0pCSf4dHyQS78O
	Kqjw==
Received: by 10.112.44.2 with SMTP id a2mr5683489lbm.131.1355740180616;
	Mon, 17 Dec 2012 02:29:40 -0800 (PST)
Received: from [172.16.26.11] (b0fb3a03.bb.sky.com. [176.251.58.3])
	by mx.google.com with ESMTPS id pw17sm4684292lab.5.2012.12.17.02.29.38
	(version=SSLv3 cipher=OTHER); Mon, 17 Dec 2012 02:29:39 -0800 (PST)
Message-ID: <50CEF406.8090101@xen.org>
Date: Mon, 17 Dec 2012 10:29:26 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <000001cddbd8$ae48adf0$0ada09d0$@gmail.com>
In-Reply-To: <000001cddbd8$ae48adf0$0ada09d0$@gmail.com>
Subject: Re: [Xen-devel] Hello from Yiming
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5637169855294565554=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5637169855294565554==
Content-Type: multipart/alternative;
 boundary="------------040007020202030405070600"

This is a multi-part message in MIME format.
--------------040007020202030405070600
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Yiming, there were a couple of talks on this subject at XenSummit (see
http://xen.org/xensummit/xs12na_talks/agenda.html). You should be able
to find contacts from the presentations. There is also ErlangOnXen.org,
but I don't know the guy who is working on it.
Regards
Lars

On 16/12/2012 21:59, Yiming Zhang wrote:
>
> Hello everyone,
>
> I am Yiming Zhang, an assistant professor from NUDT, China, and now I 
> am a visiting researcher at Cambridge University, UK.
>
> I began to study Xen this month in order to run applications on Xen 
> WITHOUT traditional OS support, like what the Mirage project is 
> focusing on. If someone is working on similar goals, I'd love to 
> discuss with you. Thanks!
>
> Regards,
>
> Yiming
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------040007020202030405070600
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Yiming, there were a couple of talks on
      this subject at XenSummit (see
      <a class="moz-txt-link-freetext" href="http://xen.org/xensummit/xs12na_talks/agenda.html">http://xen.org/xensummit/xs12na_talks/agenda.html</a>). You should be
      able to find contacts from the presentation. There is also
      ErlangOnXen.org, but I don't know the guy who is working on it.<br>
      Regards<br>
      Lars<br>
      <br>
      On 16/12/2012 21:59, Yiming Zhang wrote:<br>
    </div>
    <blockquote cite="mid:000001cddbd8$ae48adf0$0ada09d0$@gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      <meta name="Generator" content="Microsoft Word 14 (filtered
        medium)">
      <style><!--
/* Font Definitions */
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:SimSun;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0cm;
	margin-bottom:.0001pt;
	text-align:justify;
	text-justify:inter-ideograph;
	font-size:10.5pt;
	font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-family:"Calibri","sans-serif";}
/* Page Definitions */
@page WordSection1
	{size:612.0pt 792.0pt;
	margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
      <div class="WordSection1">
        <p class="MsoNormal"><span lang="EN-US">Hello everyone,<o:p></o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US"><o:p>&nbsp;</o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US">I am Yiming Zhang, an
            assistant professor from NUDT, China, and now I am a
            visiting researcher at Cambridge University, UK. <o:p></o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US"><o:p>&nbsp;</o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US">I began to study Xen
            this month in order to run applications on Xen WITHOUT
            traditional OS support, like what the Mirage project is
            focusing on. If someone is working on similar goals, I&#8217;d
            love to discuss with you. Thanks!<o:p></o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US"><o:p>&nbsp;</o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US">Regards,<o:p></o:p></span></p>
        <p class="MsoNormal"><span lang="EN-US">Yiming<o:p></o:p></span></p>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>

--------------040007020202030405070600--


--===============5637169855294565554==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5637169855294565554==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 10:33:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 10:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkY0u-00007Y-Cn; Mon, 17 Dec 2012 10:33:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkY0s-00007S-Q4
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 10:33:35 +0000
Received: from [85.158.139.83:32588] by server-10.bemta-5.messagelabs.com id
	DC/B8-13383-EF4FEC05; Mon, 17 Dec 2012 10:33:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355740413!22818580!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 632 invoked from network); 17 Dec 2012 10:33:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 10:33:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="194368"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 10:25:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 10:25:20 +0000
Message-ID: <1355739919.14620.25.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 10:25:19 +0000
In-Reply-To: <osstest-14762-mainreport@xen.org>
References: <osstest-14762-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Jan Beulich <JBeulich@suse.com>, Greg Wettstein <greg@enjellic.com>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 14762: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-17 at 04:24 +0000, xen.org wrote:
> flight 14762 xen-4.1-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/14762/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
>  test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
>  test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
>  test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-xl-credit2   18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
>  test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
>  test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
>  test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
[...]

http://www.chiark.greenend.org.uk/~xensrcts/logs/14706/test-amd64-amd64-xl/18.ts-leak-check.log
shows a load of leaked xenstore paths relating to the console. The
bisector has fingered the first one (23428:93e17b0cd035) as having
caused it.

Given that we are at 4.1.4-rc2 unless we can fix this today (or by
tomorrow) we should revert. Possibly we should even just revert and try
again for 4.1.5.

Perhaps we should keep both the libxl__device_destroy_tapdisk call and
the xs_rm in libxl__device_destroy? The xs_rm is harmless in the case
where libxl__device_destroy_tapdisk has done something, and useful when
it has not.

The call to libxl__device_destroy_tapdisk in libxl__devices_destroy from
the second patch also looks entirely wrong to me -- the function loops
over devices and yet calls that function only for the last one (which
happens to be a console). I think this is a mismerge on the backport:
23427:255a0b6a8104 (to unstable) makes this change to
libxl__device_destroy with very similar context to the change to
libxl__devices_destroy in 23427:255a0b6a8104 (to 4.1-testing).

Ian.

> ------------------------------------------------------------
> changeset:   23428:93e17b0cd035
> tag:         tip
> user:        Greg Wettstein <greg@enjellic.com>
> date:        Thu Dec 13 14:35:58 2012 +0000
> 
>     libxl: avoid blktap2 deadlock on cleanup
> 
>     Establishes correct cleanup behavior for blktap devices.  This patch
>     implements the release of the backend device before calling for
>     the destruction of the userspace component of the blktap device.
> 
>     Without this patch the kernel xen-blkback driver deadlocks with
>     the blktap2 user control plane until the IPC channel is terminated by the
>     timeout on the select() call.  This results in a noticeable delay
>     in the termination of the guest and causes the blktap minor
>     number which had been allocated to be orphaned.
> 
>     Signed-off-by: Greg Wettstein <greg@enjellic.com>
>     Acked-by: Ian Campbell <ian.campbell@citrix.com>
>     Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>     Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> 
> changeset:   23427:255a0b6a8104
> user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
> date:        Wed Dec 12 17:41:15 2012 +0000
> 
>     From: Ian Campbell <ian.campbell@citrix.com>
> 
>     libxl: attempt to cleanup tapdisk processes on disk backend destroy.
> 
>     This patch properly terminates the tapdisk2 process(es) started
>     to service a virtual block device.
> 
>     Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>     Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
>     xen-unstable changeset: 23883:7998217630e2
>     xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
>     Signed-off-by: Greg Wettstein <greg@enjellic.com>
>     Backport-requested-by: Greg Wettstein <greg@enjellic.com>
>     Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> 
> (qemu changes not included)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:27:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkYr2-0000Wg-9w; Mon, 17 Dec 2012 11:27:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkYr0-0000Wb-IZ
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 11:27:26 +0000
Received: from [85.158.137.99:39898] by server-15.bemta-3.messagelabs.com id
	FB/1A-07921-D910FC05; Mon, 17 Dec 2012 11:27:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355743644!17384176!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19718 invoked from network); 17 Dec 2012 11:27:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 11:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="196284"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 11:26:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 11:26:39 +0000
Message-ID: <1355743598.14620.43.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Mon, 17 Dec 2012 11:26:38 +0000
In-Reply-To: <20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
	<20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Palagummi,
	Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-14 at 18:53 +0000, Matt Wilson wrote:
> On Thu, Dec 13, 2012 at 11:12:50PM +0000, Palagummi, Siva wrote:
> > > -----Original Message-----
> > > From: Matt Wilson [mailto:msw@amazon.com]
> > > Sent: Wednesday, December 12, 2012 3:05 AM
> > >
> > > On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > > >
> > > > You can clearly see below that copy_off is input to
> > > > start_new_rx_buffer while copying frags.
> > > 
> > > Yes, but that's the right thing to do. copy_off should be set to the
> > > destination offset after copying the last byte of linear data, which
> > > means "skb_headlen(skb) % PAGE_SIZE" is correct.
> > > 
> >
> > No, it is not correct, for two reasons. First, what if
> > skb_headlen(skb) is exactly a multiple of PAGE_SIZE? copy_off would
> > be set to ZERO, and if there is some data in the frags, ZERO will be
> > passed in as the copy_off value and start_new_rx_buffer will return
> > FALSE. Second, there is the obvious case in the current code where a
> > hole of size "offset_in_page(skb->data)" is left in the first buffer
> > after the first pass, whenever the remaining data to be copied would
> > overflow the first buffer.
> 
> Right, and I'm arguing that having the code leave a hole is less
> desirable than potentially increasing the number of copy
> operations. I'd like to hear from Ian and others if using the buffers
> efficiently is more important than reducing copy operations. Intuitively,
> I think it's more important to use the ring efficiently.

Do you mean the ring or the actual buffers?

The current code tries to coalesce multiple small frags/heads because it
is usually trivial but doesn't try too hard with multiple larger frags,
since they take up most of a page by themselves anyway. I suppose this
does waste a bit of buffer space and therefore could take more ring
slots, but it's not clear to me how much this matters in practice (it
might be tricky to measure this with any realistic workload).

The cost of splitting a copy into two should be low, though: the copies
are already batched into a single hypercall, and I'd expect things to be
mostly dominated by the data copy itself rather than the setup of each
individual op, which would argue for splitting a copy in two if that
helps fill the buffers.

The flip side is that once you get past the headers etc. the paged frags
likely tend to be either bits and bobs (fine) or mostly whole pages. In
the whole-pages case, trying to fill the buffers will result in every
copy getting split. My gut tells me that the whole-pages case probably
dominates, but I'm not sure what the real-world impact of splitting all
the copies would be.

> 
> > If we fix it the way you are proposing, modifying
> > start_new_rx_buffer so that no hole is left in the first buffer,
> > then it will occupy 4 ring buffers but will require 8 copy
> > operations under the existing logic in netbk_gop_skb while copying
> > the head!!
> 
> I think that we should optimize for more commonly seen lengths, which
> I'd think would be 2-3 pages max. The delta in copy operations is
> smaller in these cases, where we would use 4 copy operations for 2
> pages (as opposed to 3 with the current algorithm) and 6 copy
> operations for 3 pages (as opposed to 4 with the current algorithm).
> 
> IanC, Konrad - do you have any opinions on the best approach here?
> 
> Matt



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:29:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkYsA-0000Zh-P4; Mon, 17 Dec 2012 11:28:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TkYs8-0000ZY-Cu
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 11:28:36 +0000
Received: from [193.109.254.147:11996] by server-4.bemta-14.messagelabs.com id
	53/10-15233-3E10FC05; Mon, 17 Dec 2012 11:28:35 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355743713!3013154!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTAxODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24892 invoked from network); 17 Dec 2012 11:28:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 11:28:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="936167"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	17 Dec 2012 11:28:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 17 Dec 2012 06:28:32 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TkYs4-0007x1-Jp;
	Mon, 17 Dec 2012 11:28:32 +0000
Date: Mon, 17 Dec 2012 11:28:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Josh Zhao <joshsystem@gmail.com>
In-Reply-To: <CAOYkbaiubLbm3e4kGTagHr7A0FSkwP5808VinLEza=fg4DhuhQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212171127240.17523@kaball.uk.xensource.com>
References: <CAOYkbagjXEun=QaNmrKpd69Ma0BqWpvbb_oJGzBg8V60FRkJPQ@mail.gmail.com>
	<alpine.DEB.2.02.1212141244260.17523@kaball.uk.xensource.com>
	<CAOYkbaiubLbm3e4kGTagHr7A0FSkwP5808VinLEza=fg4DhuhQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-200283361-1355743705=:17523"
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] About ARM branch questions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-200283361-1355743705=:17523
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

Yes, that is correct: thanks to nested paging in HW we don't need shadow
pagetables in the hypervisor or PVOPS in the Linux kernel.

On Sun, 16 Dec 2012, Josh Zhao wrote:
> Thanks Stabellini. I saw the slides of "New PVH Virtualisation mode
> for ARM Cortex A15 and x8", but can't understand why there are no
> PVOPs and no shadow pagetables because of nested paging in HW?
> josh zhao
>
>
> 2012/12/14 Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>       On Fri, 14 Dec 2012, Josh Zhao wrote:
>       > Hi, I have some new questions about the ARM-Xen. (1) Following
>       > the wiki at
>       > http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions,
>       > are these repos the latest?
>       >
>       >     Xen: http://xenbits.xen.org/hg/xen-unstable.hg
>       >     Linux Dom0 and DomU: the arm-privcmd-for-3.8 branch of
>       >     git://xenbits.xen.org/people/ianc/linux.git
>
> Yes, they are.
> However we have a few outstanding patches, already sent to the list by
> me or Ian, but still unapplied.
>
>
> > (2) Is the ARM PV interface for IO the same as x86?
>
> I guess you are referring to the Linux PV frontend and backend drivers?
> Like netfront/netback and blkfront/blkback? If that is the case, then
> yes, we are using exactly the same frontend/backend driver pairs as in
> Xen x86.
>
>
>
>
--1342847746-200283361-1355743705=:17523
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-200283361-1355743705=:17523--


From xen-devel-bounces@lists.xen.org Mon Dec 17 11:38:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZ0t-0000rC-0b; Mon, 17 Dec 2012 11:37:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TkZ0r-0000r7-Dt
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 11:37:37 +0000
Received: from [85.158.139.211:15635] by server-9.bemta-5.messagelabs.com id
	9B/9F-10690-0040FC05; Mon, 17 Dec 2012 11:37:36 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355744255!17987355!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29824 invoked from network); 17 Dec 2012 11:37:35 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 11:37:35 -0000
Received: (qmail 23872 invoked from network); 17 Dec 2012 13:37:34 +0200
Received: from rcojocaru.dsd.ro (10.10.14.59)
	by mail.bitdefender.com with SMTP; 17 Dec 2012 13:37:34 +0200
MIME-Version: 1.0
X-Mercurial-Node: a23515aabc91bec6e9cf9c0f0bb37b7ec9aca83c
Message-Id: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
User-Agent: Mercurial-patchbomb/2.4.1
Date: Mon, 17 Dec 2012 13:38:30 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
To: xen-devel@lists.xensource.com
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234151,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_NO_LINK_NMD; NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY;
	NN_LEGIT_S_SQARE_BRACKETS; NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled],
	URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, Hits: none,
	MD5: fe460819999e49d00aa4b8d1da190e0a.fuzzy.fzrbl.org], RTDA:
	[Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3t9.17edbokjk.4v600],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44407
Subject: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type() call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add an xc_domain_hvm_get_mtrr_type() call to the libxc API,
to support functionality similar to get_mtrr_type() (which
is only available at the hypervisor level).

Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>

diff -r f50aab21f9f2 -r a23515aabc91 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Thu Dec 13 14:39:31 2012 +0000
+++ b/tools/libxc/xc_domain.c	Mon Dec 17 12:53:11 2012 +0200
@@ -379,6 +379,45 @@ int xc_domain_hvm_setcontext(xc_interfac
     return ret;
 }
 
+#define MTRR_PHYSMASK_VALID_BIT  11
+#define MTRR_PHYSMASK_SHIFT      12
+#define MTRR_PHYSBASE_TYPE_MASK  0xff   /* lowest 8 bits */
+
+int xc_domain_hvm_get_mtrr_type(xc_interface *xch, 
+                                uint32_t domid,
+                                unsigned long paddress,
+                                uint8_t *type)
+{
+    struct      hvm_hw_mtrr hw_mtrr;
+    int32_t     seg;
+    uint8_t     num_var_ranges;
+    uint64_t    phys_base;
+    uint64_t    phys_mask;
+
+    if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(MTRR), 0, &hw_mtrr, sizeof hw_mtrr) )
+        return -1;
+
+    num_var_ranges = hw_mtrr.msr_mtrr_cap & 0xff;
+
+    for ( seg = 0; seg < num_var_ranges; seg++ )
+    {
+        phys_base = hw_mtrr.msr_mtrr_var[seg*2];
+        phys_mask = hw_mtrr.msr_mtrr_var[seg*2 + 1];
+
+        if ( phys_mask & (1 << MTRR_PHYSMASK_VALID_BIT) )
+        {
+            if ( ((uint64_t) paddress & phys_mask) >> MTRR_PHYSMASK_SHIFT ==
+                 (phys_base & phys_mask) >> MTRR_PHYSMASK_SHIFT )
+            {
+                *type = phys_base & MTRR_PHYSBASE_TYPE_MASK;
+                return 0;
+            }
+        }
+    }
+
+    return -1;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff -r f50aab21f9f2 -r a23515aabc91 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Thu Dec 13 14:39:31 2012 +0000
+++ b/tools/libxc/xenctrl.h	Mon Dec 17 12:53:11 2012 +0200
@@ -633,6 +633,15 @@ int xc_domain_hvm_setcontext(xc_interfac
                              uint32_t size);
 
 /**
+ * This function returns information about the MTRR type of
+ * a given guest physical address.
+ */
+int xc_domain_hvm_get_mtrr_type(xc_interface *xch, 
+                                uint32_t domid,
+                                unsigned long paddress,
+                                uint8_t *type);
+
+/**
  * This function returns information about the execution context of a
  * particular vcpu of a domain.
  *

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:38:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZ0t-0000rC-0b; Mon, 17 Dec 2012 11:37:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TkZ0r-0000r7-Dt
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 11:37:37 +0000
Received: from [85.158.139.211:15635] by server-9.bemta-5.messagelabs.com id
	9B/9F-10690-0040FC05; Mon, 17 Dec 2012 11:37:36 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355744255!17987355!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29824 invoked from network); 17 Dec 2012 11:37:35 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 11:37:35 -0000
Received: (qmail 23872 invoked from network); 17 Dec 2012 13:37:34 +0200
Received: from rcojocaru.dsd.ro (10.10.14.59)
	by mail.bitdefender.com with SMTP; 17 Dec 2012 13:37:34 +0200
MIME-Version: 1.0
X-Mercurial-Node: a23515aabc91bec6e9cf9c0f0bb37b7ec9aca83c
Message-Id: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
User-Agent: Mercurial-patchbomb/2.4.1
Date: Mon, 17 Dec 2012 13:38:30 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
To: xen-devel@lists.xensource.com
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234151,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_NO_LINK_NMD; NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY;
	NN_LEGIT_S_SQARE_BRACKETS; NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled],
	URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, Hits: none,
	MD5: fe460819999e49d00aa4b8d1da190e0a.fuzzy.fzrbl.org], RTDA:
	[Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3t9.17edbokjk.4v600],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44407
Subject: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type() call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a xc_domain_hvm_get_mtrr_type() call to the libxc API,
to support functionality similar to get_mtrr_type() (which
is only available at the hypervisor level).

Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>

diff -r f50aab21f9f2 -r a23515aabc91 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Thu Dec 13 14:39:31 2012 +0000
+++ b/tools/libxc/xc_domain.c	Mon Dec 17 12:53:11 2012 +0200
@@ -379,6 +379,45 @@ int xc_domain_hvm_setcontext(xc_interfac
     return ret;
 }
 
+#define MTRR_PHYSMASK_VALID_BIT  11
+#define MTRR_PHYSMASK_SHIFT      12
+#define MTRR_PHYSBASE_TYPE_MASK  0xff   /* lowest 8 bits */
+
+int xc_domain_hvm_get_mtrr_type(xc_interface *xch, 
+                                uint32_t domid,
+                                unsigned long paddress,
+                                uint8_t *type)
+{
+    struct      hvm_hw_mtrr hw_mtrr;
+    int32_t     seg;
+    uint8_t     num_var_ranges;
+    uint64_t    phys_base;
+    uint64_t    phys_mask;
+
+    if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(MTRR), 0, &hw_mtrr, sizeof hw_mtrr) )
+        return -1;
+
+    num_var_ranges = hw_mtrr.msr_mtrr_cap & 0xff;
+
+    for ( seg = 0; seg < num_var_ranges; seg++ )
+    {
+        phys_base = hw_mtrr.msr_mtrr_var[seg*2];
+        phys_mask = hw_mtrr.msr_mtrr_var[seg*2 + 1];
+
+        if ( phys_mask & (1 << MTRR_PHYSMASK_VALID_BIT) )
+        {
+            if ( ((uint64_t) paddress & phys_mask) >> MTRR_PHYSMASK_SHIFT ==
+                 (phys_base & phys_mask) >> MTRR_PHYSMASK_SHIFT )
+            {
+                *type = phys_base & MTRR_PHYSBASE_TYPE_MASK;
+                return 0;
+            }
+        }
+    }
+
+    return -1;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff -r f50aab21f9f2 -r a23515aabc91 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Thu Dec 13 14:39:31 2012 +0000
+++ b/tools/libxc/xenctrl.h	Mon Dec 17 12:53:11 2012 +0200
@@ -633,6 +633,15 @@ int xc_domain_hvm_setcontext(xc_interfac
                              uint32_t size);
 
 /**
+ * This function returns information about the MTRR type of
+ * a given guest physical address.
+ */
+int xc_domain_hvm_get_mtrr_type(xc_interface *xch, 
+                                uint32_t domid,
+                                unsigned long paddress,
+                                uint8_t *type);
+
+/**
  * This function returns information about the execution context of a
  * particular vcpu of a domain.
  *

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:49:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:49:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZBb-00012B-Eu; Mon, 17 Dec 2012 11:48:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1TkZBZ-00011u-U6; Mon, 17 Dec 2012 11:48:42 +0000
Received: from [85.158.143.99:33270] by server-2.bemta-4.messagelabs.com id
	1C/07-30861-9960FC05; Mon, 17 Dec 2012 11:48:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355744894!28838062!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17349 invoked from network); 17 Dec 2012 11:48:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 11:48:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="196873"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 11:47:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 11:47:37 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TkZAX-0004W0-8y; Mon, 17 Dec 2012 11:47:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TkZAW-0003Gz-Vh;
	Mon, 17 Dec 2012 11:47:37 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20687.1624.850228.107153@mariner.uk.xensource.com>
Date: Mon, 17 Dec 2012 11:47:36 +0000
To: Roger Pau Monne <roger.pau@citrix.com>
In-Reply-To: <50CB721C.5010006@citrix.com>
References: <50C78D84.1030404@citrix.com>
	<1355309001.10554.13.camel@zakaz.uk.xensource.com>
	<50C879FB.7060208@citrix.com> <50CA08AE.80102@citrix.com>
	<20682.7487.598565.547405@mariner.uk.xensource.com>
	<50CB0429.9070300@citrix.com>
	<20683.7116.546352.141787@mariner.uk.xensource.com>
	<50CB43A9.3040806@citrix.com>
	<20683.18895.282490.216230@mariner.uk.xensource.com>
	<50CB62CD.7020103@citrix.com>
	<20683.28487.926648.451291@mariner.uk.xensource.com>
	<50CB721C.5010006@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Handling iSCSI block devices (Was: Driver domains
 and device handling)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> On 14/12/12 19:26, Ian Jackson wrote:
> > Roger Pau Monne writes ("Re: Handling iSCSI block devices (Was: Driver domains and device handling)"):
> >>  * remove: called to disconnect the device. Xenstore backend entries
> >> exist, and backend state is 6 (XenbusStateClosed).
> > 
> > I assume we need an unprepare here too.
> 
> I've also thought that, but the reason for prepare to exist is to reduce
> the time that the "add" operation takes, thus reducing the blackout
> phase during migration.

The unprepare operation might also be slow.  (Of course we believe in
crash-only software but the storage provider might not...)

> There's no such problem in the remove phase, but I guess we need an
> unprepare in case there's a failure between the prepare and add
> operations, and we wish to give the hotplug script an opportunity to
> unwind whatever the prepare operation has done.
> 
Yes.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:54:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZGY-0001QO-U4; Mon, 17 Dec 2012 11:53:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkZGX-0001QC-AS
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 11:53:49 +0000
Received: from [193.109.254.147:31975] by server-11.bemta-14.messagelabs.com
	id B9/4A-02659-CC70FC05; Mon, 17 Dec 2012 11:53:48 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355745156!10270954!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27933 invoked from network); 17 Dec 2012 11:52:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 11:52:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="196987"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 11:52:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 11:52:33 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TkZFJ-0004Xh-AB; Mon, 17 Dec 2012 11:52:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TkZFJ-0003HK-6A;
	Mon, 17 Dec 2012 11:52:33 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20687.1921.90089.961634@mariner.uk.xensource.com>
Date: Mon, 17 Dec 2012 11:52:33 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50CF006502000078000B0B30@nat28.tlf.novell.com>
References: <osstest-14689-mainreport@xen.org>
	<50CF006502000078000B0B30@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "greg@enjellic.com" <greg@enjellic.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL"):
> So this is what really should not have happened past the
> intended-to-be-final RC.

Sorry about that.

> How important is this change really to the 4.1 tree? I'd obviously
> favor outright reverting it at this point (my understanding being
> that the removal of the call to xs_rm() from libxl__device_destroy()
> affected more than just tapdisk backends, which I guess was
> assumed to be the case because of its neighboring with the call
> to libxl__device_destroy_tapdisk()). Would that get the tree into
> worse state than 4.1.3 was in?

I think reverting it right now is the right thing to do and I will do
that.  We can then think about what to do next.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:56:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZJ1-0001Zw-FM; Mon, 17 Dec 2012 11:56:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkZJ0-0001Zq-1i
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 11:56:22 +0000
Received: from [85.158.139.211:15926] by server-9.bemta-5.messagelabs.com id
	2F/83-10690-5680FC05; Mon, 17 Dec 2012 11:56:21 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1355745371!20846676!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25640 invoked from network); 17 Dec 2012 11:56:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 11:56:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="197086"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 11:56:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 11:55:42 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1TkZIN-0004Yc-0Q; Mon, 17 Dec 2012 11:55:43 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1TkZIM-0003IX-SS;
	Mon, 17 Dec 2012 11:55:42 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20687.2110.782035.318909@mariner.uk.xensource.com>
Date: Mon, 17 Dec 2012 11:55:42 +0000
To: Jan Beulich <JBeulich@suse.com>, "greg@enjellic.com" <greg@enjellic.com>,
	xen-devel <xen-devel@lists.xen.org>
In-Reply-To: <20687.1921.90089.961634@mariner.uk.xensource.com>
References: <osstest-14689-mainreport@xen.org>
	<50CF006502000078000B0B30@nat28.tlf.novell.com>
	<20687.1921.90089.961634@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL"):
> Jan Beulich writes ("Re: [Xen-devel] [xen-4.1-testing test] 14689: regressions - FAIL"):
> > So this is what really should not have happened past the
> > intended-to-be-final RC.
> 
> Sorry about that.
> 
> > How important is this change really to the 4.1 tree? I'd obviously
> > favor outright reverting it at this point (my understanding being
> > that the removal of the call to xs_rm() from libxl__device_destroy()
> > affected more than just tapdisk backends, which I guess was
> > assumed to be the case because of its neighboring with the call
> > to libxl__device_destroy_tapdisk()). Would that get the tree into
> > worse state than 4.1.3 was in?
> 
> I think reverting it right now is the right thing to do and I will do
> that.  We can then think about what to do next.

Now done.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 11:59:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 11:59:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZLT-0001jA-1V; Mon, 17 Dec 2012 11:58:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1TkZLS-0001iz-56; Mon, 17 Dec 2012 11:58:54 +0000
Received: from [85.158.137.99:11281] by server-14.bemta-3.messagelabs.com id
	4E/2C-27443-DF80FC05; Mon, 17 Dec 2012 11:58:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355745514!14603279!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9735 invoked from network); 17 Dec 2012 11:58:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 11:58:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="197100"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 11:56:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 11:56:23 +0000
Message-ID: <1355745382.14620.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Paul Harvey <jhebus@googlemail.com>
Date: Mon, 17 Dec 2012 11:56:22 +0000
In-Reply-To: <CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
	<CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-14 at 13:06 +0000, Paul Harvey wrote:
> Program received signal SIGABRT, Aborted.
> 0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) bt
> #0  0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> #1  0x00007fe588cabb8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #3  0x00007fe588d7c807 in __fortify_fail () from /lib/x86_64-linux-gnu/libc.so.6
> #4  0x00007fe588d7b700 in __chk_fail () from /lib/x86_64-linux-gnu/libc.so.6
> #5  0x00007fe588d7c7be in __fdelt_warn () from /lib/x86_64-linux-gnu/libc.so.6
> #6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
> #7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at daemon/main.c:166

daemon/io.c:1059 in 4.1.2 is:
                                    FD_ISSET(xc_evtchn_fd(d->xce_handle),
                                             &readfds))
                                        handle_ring_read(d);

I rather suspect this is overrunning the readfds array.
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_select.h.html
suggests this is sized by FD_SETSIZE. On my system that appears to be
statically 1024 (at least strace doesn't show a syscall to determine it
in a simple test app, although grepping /usr/include suggests it might
be configurable on some systems).
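
As a standalone illustration of that limit (my own sketch, assuming
glibc; this is not xenconsoled code):

```c
#include <stdio.h>
#include <sys/select.h>

/* On glibc fd_set is a fixed-size bitmap of FD_SETSIZE bits (typically
 * 1024); there is no syscall to query or enlarge it at runtime. */
static void show_select_limit(void)
{
    fd_set readfds;

    printf("FD_SETSIZE = %d\n", FD_SETSIZE);

    FD_ZERO(&readfds);
    FD_SET(0, &readfds);            /* fine: 0 < FD_SETSIZE */

    /* An fd >= FD_SETSIZE would index past the end of the bitmap.
     * With -D_FORTIFY_SOURCE=2 glibc routes FD_SET/FD_ISSET through
     * __fdelt_chk, which calls __fortify_fail() and abort() -- the
     * __fdelt_warn/__chk_fail frames in the backtrace above. */
}
```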

It doesn't seem likely that there will be a simple solution to this. We
probably need to switch to something other than select(2). poll(2)
handles arbitrary numbers of file descriptors. epoll(7) would be nice
(it supposedly scales better than poll) but is Linux specific. Another
option would be to fork multiple worker processes, which could also
help if xenconsole itself becomes a bottleneck.
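
For comparison, a poll(2) based wait has no such ceiling because the
pollfd array is ordinary heap memory sized by the caller. A rough
sketch (wait_readable and its arguments are invented for illustration,
not xenconsoled's actual structure):

```c
#include <poll.h>
#include <stdlib.h>
#include <unistd.h>   /* only needed for the pipe() usage example */

/* Wait for readability on an arbitrary number of fds.  Unlike
 * select(2), poll(2) takes a caller-sized array, so nfds can exceed
 * 1024 without overrunning anything. */
static int wait_readable(const int *fds_in, size_t nfds, int timeout_ms)
{
    struct pollfd *fds = calloc(nfds, sizeof(*fds));
    int ready;

    if (!fds)
        return -1;
    for (size_t i = 0; i < nfds; i++) {
        fds[i].fd = fds_in[i];
        fds[i].events = POLLIN;    /* the moral equivalent of readfds */
    }
    ready = poll(fds, nfds, timeout_ms);
    /* a real caller would scan fds[i].revents here instead of FD_ISSET */
    free(fds);
    return ready;
}
```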

It seems likely (based on a quick grep) that xenstore (both the C
and ocaml variants) will suffer from the same issue.

I'm not sure why we have an evtchn handle per guest, other than this
comment which suggests it was simply expedient rather than a good
design:
	/* Opening evtchn independently for each console is a bit
	 * wasteful, but that's how the code is structured... */
	dom->xce_handle = xc_evtchn_open(NULL, 0);
	if (dom->xce_handle == NULL) {
		err = errno;
		goto out;
	}
However this is just one of the open fds which scale with the number of
domains (the others are the pty related ones), so fixing this alone
would only buy a bit more time rather than fix the underlying issue.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 12:10:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 12:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZW7-0002Su-O4; Mon, 17 Dec 2012 12:09:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkZW6-0002Sp-Er
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 12:09:54 +0000
Received: from [85.158.143.35:58728] by server-1.bemta-4.messagelabs.com id
	F8/2E-28401-19B0FC05; Mon, 17 Dec 2012 12:09:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355745854!13443144!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22033 invoked from network); 17 Dec 2012 12:04:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 12:04:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="197236"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 12:02:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 12:01:53 +0000
Message-ID: <1355745712.14620.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Date: Mon, 17 Dec 2012 12:01:52 +0000
In-Reply-To: <95EB29C55F836E775B70A9F5@nimrod.local>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>
	<95EB29C55F836E775B70A9F5@nimrod.local>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 15:06 +0000, Alex Bligh wrote:
> But beyond that, my concern is that Xen-4.3 is (as advertised) going
> to have qemu-xen as the default device model; no one needing HVM
> can currently use it at all (if they need live migrate which I guess
> most do). That's a bit of a jump.

> Xen docs currently say qemu-xen is supported. Nowhere does it say
> "save under HVM you can't live migrate". I'd argue lack of live migration
> for HVM *is* a bug.

It's definitely at least a bug in the documentation -- this isn't a
feature which was forgotten about; it was explicitly decided that it
hadn't met the 4.2 deadline and wasn't going to be ready in time to be
worth waiting for. This should have been documented in the release
notes etc, sorry.

We did at least manage to tag it tech preview in
http://wiki.xen.org/wiki/Xen_Release_Features which implies that it is
not yet fully formed.

> If you ignore the JSON stuff, the patch is quite small. We could always
> only enable it as a compile option...



Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 12:20:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 12:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkZfz-0002hz-Rm; Mon, 17 Dec 2012 12:20:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ferdinand@outlook.com>) id 1TkZfz-0002hu-1h
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 12:20:07 +0000
Received: from [193.109.254.147:4720] by server-3.bemta-14.messagelabs.com id
	E8/60-26055-6FD0FC05; Mon, 17 Dec 2012 12:20:06 +0000
X-Env-Sender: ferdinand@outlook.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355746804!1927556!1
X-Originating-IP: [65.55.111.166]
X-SpamReason: No, hits=0.1 required=7.0 tests=FORGED_HOTMAIL_RCVD,
	ML_RADAR_SPEW_LINKS_12,MSGID_FROM_MTA_HEADER,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30383 invoked from network); 17 Dec 2012 12:20:05 -0000
Received: from blu0-omc4-s27.blu0.hotmail.com (HELO
	blu0-omc4-s27.blu0.hotmail.com) (65.55.111.166)
	by server-11.tower-27.messagelabs.com with SMTP;
	17 Dec 2012 12:20:05 -0000
Received: from BLU0-SMTP20 ([65.55.111.137]) by blu0-omc4-s27.blu0.hotmail.com
	with Microsoft SMTPSVC(6.0.3790.4675); 
	Mon, 17 Dec 2012 04:20:04 -0800
X-EIP: [RXEfgmETBIqM1v4tE/H993v/kU6It+Qe]
X-Originating-Email: [ferdinand@outlook.com]
Message-ID: <BLU0-SMTP20D790FF2582EF8F47D782A5320@phx.gbl>
Received: from [192.168.178.21] ([37.209.36.181]) by
	BLU0-SMTP20.blu0.hotmail.com over TLS secured channel with
	Microsoft SMTPSVC(6.0.3790.4675); Mon, 17 Dec 2012 04:20:04 -0800
Date: Mon, 17 Dec 2012 13:20:37 +0100
From: =?ISO-8859-1?Q?Ferdinand_N=F6lscher?= <ferdinand@outlook.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.10) Gecko/20121027 Icedove/10.0.10
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-OriginalArrivalTime: 17 Dec 2012 12:20:04.0407 (UTC)
	FILETIME=[D8108C70:01CDDC50]
Subject: [Xen-devel] DomU stuck at "Booting from disc.." with stdvga
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi!

When booting an HVM Ubuntu image, it always gets stuck at the BIOS 
screen "Booting from disc..". If I turn off stdvga, it works.
I was using stdvga in order to get a bigger framebuffer, thus allowing 
higher resolutions.
I do not get this issue with Windows domUs.
Can anyone explain that?

I'm using Xen 4.3-unstable, latest branch.

kind regards,
Ferdinand

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 12:42:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 12:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tka1Q-0002yv-V7; Mon, 17 Dec 2012 12:42:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tka1P-0002yq-B2
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 12:42:15 +0000
Received: from [85.158.138.51:58501] by server-16.bemta-3.messagelabs.com id
	AB/81-27634-6231FC05; Mon, 17 Dec 2012 12:42:14 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355748131!27435035!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI2MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9630 invoked from network); 17 Dec 2012 12:42:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 12:42:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="881316"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	17 Dec 2012 12:42:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 17 Dec 2012 07:42:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tka1J-0000aY-WD;
	Mon, 17 Dec 2012 12:42:10 +0000
Date: Mon, 17 Dec 2012 12:42:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20121214120836.6ec4ad4a@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121214120836.6ec4ad4a@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Dec 2012, Mukesh Rathor wrote:
> On Thu, 13 Dec 2012 14:25:16 +0000
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
> > > >>> <stefano.stabellini@eu.citrix.com>
> > > wrote:
> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
> > > >> >>> <mukesh.rathor@oracle.com> wrote:
> > 
> > Actually I think that you might be right: just looking at the code it
> > seems that the mask bits get written to the table once as part of the
> > initialization process:
> > 
> > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > 
> > Unfortunately msix_program_entries is called a few lines after
> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
> > as a pirq.
> > However after that is done, all the masking/unmasking is done via
> > irq_mask, which we handle properly by masking/unmasking the
> > corresponding event channels.
> > 
> > 
> > Possible solutions on top of my head:
> > 
> > - in msix_program_entries instead of writing to the table directly
> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); 
> > 
> > - replace msix_program_entries with arch_msix_program_entries, but it
> > would probably be unpopular.
> 
> 
> Can you or Jan or somebody please take that over? I can focus on other
> PVH things then and try to get a patch in asap.

The following patch moves the MSI-X masking before arch_setup_msi_irqs,
that is, before Linux calls PHYSDEVOP_map_pirq, the hypercall that
causes Xen to execute msix_capability_init.

I don't have access to a machine with an MSI-X device right now so I
have only tested the appended patch with MSI.

---



diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index a825d78..ef73e80 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -652,11 +652,22 @@ static void msix_program_entries(struct pci_dev *dev,
 	int i = 0;
 
 	list_for_each_entry(entry, &dev->msi_list, list) {
+		entries[i].vector = entry->irq;
+		irq_set_msi_desc(entry->irq, entry);
+		i++;
+	}
+}
+
+static void msix_mask_entries(struct pci_dev *dev,
+					struct msix_entry *entries)
+{
+	struct msi_desc *entry;
+	int i = 0;
+
+	list_for_each_entry(entry, &dev->msi_list, list) {
 		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
 						PCI_MSIX_ENTRY_VECTOR_CTRL;
 
-		entries[i].vector = entry->irq;
-		irq_set_msi_desc(entry->irq, entry);
 		entry->masked = readl(entry->mask_base + offset);
 		msix_mask_irq(entry, 1);
 		i++;
@@ -696,6 +707,8 @@ static int msix_capability_init(struct pci_dev *dev,
 	if (ret)
 		return ret;
 
+	msix_mask_entries(dev, entries);
+
 	ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
 	if (ret)
 		goto error;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Message-ID: <alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121214120836.6ec4ad4a@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Dec 2012, Mukesh Rathor wrote:
> On Thu, 13 Dec 2012 14:25:16 +0000
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
> > > >>> <stefano.stabellini@eu.citrix.com>
> > > wrote:
> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
> > > >> >>> <mukesh.rathor@oracle.com> wrote:
> > 
> > Actually I think that you might be right: just looking at the code it
> > seems that the mask bits get written to the table once as part of the
> > initialization process:
> > 
> > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > 
> > Unfortunately msix_program_entries is called a few lines after
> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
> > as a pirq.
> > However, after that is done, all masking and unmasking goes through
> > irq_mask, which we handle properly by masking/unmasking the
> > corresponding event channels.
> > 
> > 
> > Possible solutions off the top of my head:
> > 
> > - in msix_program_entries instead of writing to the table directly
> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); 
> > 
> > - replace msix_program_entries with arch_msix_program_entries, but it
> > would probably be unpopular.
> 
> 
> Can you or Jan or somebody please take that over? I can focus on other
> PVH things then and try to get a patch in asap.

The following patch moves the MSI-X masking before arch_setup_msi_irqs,
that is, before Linux calls PHYSDEVOP_map_pirq, the hypercall that
causes Xen to execute msix_capability_init.

I don't have access to a machine with an MSI-X device right now so I
have only tested the appended patch with MSI.

---



diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index a825d78..ef73e80 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -652,11 +652,22 @@ static void msix_program_entries(struct pci_dev *dev,
 	int i = 0;
 
 	list_for_each_entry(entry, &dev->msi_list, list) {
+		entries[i].vector = entry->irq;
+		irq_set_msi_desc(entry->irq, entry);
+		i++;
+	}
+}
+
+static void msix_mask_entries(struct pci_dev *dev,
+					struct msix_entry *entries)
+{
+	struct msi_desc *entry;
+	int i = 0;
+
+	list_for_each_entry(entry, &dev->msi_list, list) {
 		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
 						PCI_MSIX_ENTRY_VECTOR_CTRL;
 
-		entries[i].vector = entry->irq;
-		irq_set_msi_desc(entry->irq, entry);
 		entry->masked = readl(entry->mask_base + offset);
 		msix_mask_irq(entry, 1);
 		i++;
@@ -696,6 +707,8 @@ static int msix_capability_init(struct pci_dev *dev,
 	if (ret)
 		return ret;
 
+	msix_mask_entries(dev, entries);
+
 	ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
 	if (ret)
 		goto error;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 12:58:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 12:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkaGx-0003Bk-MR; Mon, 17 Dec 2012 12:58:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>)
	id 1TkaGw-0003BZ-KW; Mon, 17 Dec 2012 12:58:18 +0000
Received: from [85.158.139.211:7056] by server-15.bemta-5.messagelabs.com id
	EC/A9-20523-9E61FC05; Mon, 17 Dec 2012 12:58:17 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1355749094!20714148!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=2.6 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	INFO_TLD,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24050 invoked from network); 17 Dec 2012 12:58:15 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 12:58:15 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so7005237vbi.32
	for <multiple recipients>; Mon, 17 Dec 2012 04:58:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=6H8Uw+w8CW6NfMl/hkKa49Vkjio1BjgdRz5xUtncdfI=;
	b=tY/g+3k4WkGqNWRqacWu/9RZECXRzr4D6TxNhEPDMG9AvAEBUMKEwVnvJy14baubt/
	4+8e1ILdDRYt2Wdgpkqw0zqP6T2zJ7I7gr5+t+07YI5lnjraeH69hLX+hgvhi2+yNaAk
	9xe7qbvhXFCCDpE8kS/cIUE7BO+FW1qMOyQG0PsHm3c7qL8eGR5/0BGg6Tg0FK5XPtCr
	1JdpP9pnU/JSgKR0AGqdTCLn77h+c7dLJ6yIjczzjr1EOFcQLbuQ4aV/NRDnOvPyZuat
	p6ksY9op1HL4iPP83fGajoEQiShx28zUc57JpfobiDHph7OWAW0RJNd1CdGxcN6gc6KC
	/BEw==
MIME-Version: 1.0
Received: by 10.220.156.10 with SMTP id u10mr23036692vcw.28.1355749094033;
	Mon, 17 Dec 2012 04:58:14 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Mon, 17 Dec 2012 04:58:13 -0800 (PST)
Date: Mon, 17 Dec 2012 12:58:13 +0000
X-Google-Sender-Auth: 5GGGCHuH78nUvoqqty-AFlsdvbE
Message-ID: <CAFLBxZZuv=Q_T3F=e88DnJNhZqUutm-fDZ=FCS_-bdXV-eeWSg@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-announce@lists.xen.org
Subject: [Xen-devel] Security disclosure process discussion update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0467968004903859175=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0467968004903859175==
Content-Type: multipart/alternative; boundary=f46d043890932787ff04d10befbd

--f46d043890932787ff04d10befbd
Content-Type: text/plain; charset=ISO-8859-1

After concluding our poll [1] about changes to the security
disclosure process, we determined that "Pre-disclosure to software
vendors and a wide set of users" was probably the best fit for the
community.  A set of concrete changes to the policy has now been
discussed on xen-devel [2] [3], and we seem to have converged on
something everyone finds acceptable.

We are now presenting these changes for public review.  The purpose of
this review process is to allow feedback on the text which will be
voted on, in accordance with the Xen.org governance procedure [4].  Our
plan is to leave this up for review until the third week in January.
Any substantial updates will be mentioned on the blog and will extend
the review time.

All feedback and discussion should happen in public on the xen-devel
mailing list.  If you have any suggestions for how to improve the
proposal, please e-mail the list, and cc George Dunlap (george dot
dunlap at citrix.com).

= Summary of the updates =

As discussed on the xen-devel mailing list, expand eligibility of the
pre-disclosure list to include any public hosting provider, as well as
any software project:
* Change "Large hosting providers" to "Public hosting providers"
* Remove "widely-deployed" from vendors and distributors
* Add rules of thumb for what constitutes "genuine"
* Add an itemized list of information to be included in the application,
to make expectations clear and (hopefully) applications more streamlined.

The first will allow hosting providers of any size to join.

The second will allow software projects and vendors of any size to join.

The third and fourth will help describe exactly what criteria will be
used to determine eligibility for 1 and 2.

Additionally, this proposal adds the following requirements:
* Applicants and current members must use an e-mail alias, not an
individual's e-mail
* Applicants and current members must submit a statement saying that
they have read, understand, and will abide by this process document.

The new policy in its entirety can be found here:

http://wiki.xen.org/wiki/Security_vulnerability_process_draft

For comparison, the current policy can be found here:

http://www.xen.org/projects/security_vulnerability_process.html


[1]
http://blog.xen.org/index.php/2012/08/23/disclosure-process-poll-results/

[2] http://marc.info/?l=xen-devel&m=135300020310446

[3] http://marc.info/?l=xen-devel&m=135455914107182

[4] http://www.xen.org/projects/governance.html

--f46d043890932787ff04d10befbd
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

After concluding our poll [1] about changes to the security<br>discussion, =
we determined that &quot;Pre-disclosure to software vendors and<br>a wide s=
et of users&quot; was probably the best fit for the community.=A0 A<br>set =
of concrete changes to the policy have now been discussed on<br>
xen-devel [2] [3], and we seem to have converged on something everyone<br>f=
inds acceptable.<br><br>We are now presenting these changes for public revi=
ew.=A0 The purpose of<br>this review process is to allow feedback on the te=
xt which will be<br>
voted on, in accordance to the Xen.org governance procedure [3].=A0 Our<br>=
plan is to leave this up for review until the third week in January.<br>Any=
 substantial updates will be mentioned on the blog and will extend<br>the r=
eview time.<br>
<br>All feedback and discussion should happen in public on the xen-devel<br=
>mailing list.=A0 If you have any suggestions for how to improve the<br>pro=
posal, please e-mail the list, and cc George Dunlap (george dot<br>dunlap a=
t <a href=3D"http://citrix.com">citrix.com</a>).<br>
<br>=3D Summary of the updates =3D<br><br>As discussed on the xen-devel mai=
ling list, expand eligibility of the<br>pre-disclosure list to include any =
public hosting provider, as well<br>as software project:<br>* Change &quot;=
Large hosting providers&quot; to &quot;Public hosting providers&quot;<br>
* Remove &quot;widely-deployed&quot; from vendors and distributors<br>* Add=
 rules of thumb for what constitutes &quot;genuine&quot;<br>* Add an itemiz=
ed list of information to be included in the application,<br>to make expect=
ations clear and (hopefully) applications more streamlined.<br>
<br>The first will allow hosting providers of any size to join.<br><br>The =
second will allow software projects and vendors of any size to join.<br><br=
>The third and fourth will help describe exactly what criteria will be used=
 to<br>
determine eligibility for 1 and 2.<br><br>Additionally, this proposal adds =
the following requirements:<br>* Applicants and current members must use an=
 e-mail alias, not an individual&#39;s<br>e-mail<br>* Applicants and curren=
t members must submit a statement saying that they have<br>
read, understand, and will abide by this process document.<br><br>The new p=
olicy in its entirety can be found here:<br><br><a href=3D"http://wiki.xen.=
org/wiki/Security_vulnerability_process_draft">http://wiki.xen.org/wiki/Sec=
urity_vulnerability_process_draft</a><br>
<br>For comparison, the current policy can be found here:<br><br><a href=3D=
"http://www.xen.org/projects/security_vulnerability_process.html">http://ww=
w.xen.org/projects/security_vulnerability_process.html</a><br><br><br>[1] <=
a href=3D"http://blog.xen.org/index.php/2012/08/23/disclosure-process-poll-=
results/">http://blog.xen.org/index.php/2012/08/23/disclosure-process-poll-=
results/</a><br>
<br>[2] <a href=3D"http://marc.info/?l=3Dxen-devel&amp;m=3D135300020310446"=
>http://marc.info/?l=3Dxen-devel&amp;m=3D135300020310446</a><br><br>[3] <a =
href=3D"http://marc.info/?l=3Dxen-devel&amp;m=3D135455914107182">http://mar=
c.info/?l=3Dxen-devel&amp;m=3D135455914107182</a><br>
<br>[4] <a href=3D"http://www.xen.org/projects/governance.html">http://www.=
xen.org/projects/governance.html</a><br><br>

--f46d043890932787ff04d10befbd--


--===============0467968004903859175==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0467968004903859175==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 12:58:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 12:58:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkaHE-0003Cp-Pm; Mon, 17 Dec 2012 12:58:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkaHE-0003Cg-5j
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 12:58:36 +0000
Received: from [85.158.139.211:12244] by server-15.bemta-5.messagelabs.com id
	38/8A-20523-BF61FC05; Mon, 17 Dec 2012 12:58:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355749069!18393606!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1448 invoked from network); 17 Dec 2012 12:57:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 12:57:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 12:57:48 +0000
Message-Id: <50CF24DA02000078000B0BC4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 12:57:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121214120836.6ec4ad4a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 13:42, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 14 Dec 2012, Mukesh Rathor wrote:
>> On Thu, 13 Dec 2012 14:25:16 +0000
>> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> 
>> > On Thu, 13 Dec 2012, Jan Beulich wrote:
>> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
>> > > >>> <stefano.stabellini@eu.citrix.com>
>> > > wrote:
>> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
>> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
>> > > >> >>> <mukesh.rathor@oracle.com> wrote:
>> > 
>> > Actually I think that you might be right: just looking at the code it
>> > seems that the mask bits get written to the table once as part of the
>> > initialization process:
>> > 
>> > pci_enable_msix -> msix_capability_init -> msix_program_entries
>> > 
>> > Unfortunately msix_program_entries is called a few lines after
>> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
>> > as a pirq.
>> > However, after that is done, all masking and unmasking goes through
>> > irq_mask, which we handle properly by masking/unmasking the
>> > corresponding event channels.
>> > 
>> > 
>> > Possible solutions off the top of my head:
>> > 
>> > - in msix_program_entries instead of writing to the table directly
>> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); 
>> > 
>> > - replace msix_program_entries with arch_msix_program_entries, but it
>> > would probably be unpopular.
>> 
>> 
>> Can you or Jan or somebody please take that over? I can focus on other
>> PVH things then and try to get a patch in asap.
> 
> The following patch moves the MSI-X masking before arch_setup_msi_irqs,
> that is, before Linux calls PHYSDEVOP_map_pirq, the hypercall that
> causes Xen to execute msix_capability_init.

And in what way would that help?

Jan

> I don't have access to a machine with an MSI-X device right now so I
> have only tested the appended patch with MSI.
> 
> ---
> 
> 
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index a825d78..ef73e80 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -652,11 +652,22 @@ static void msix_program_entries(struct pci_dev *dev,
>  	int i = 0;
>  
>  	list_for_each_entry(entry, &dev->msi_list, list) {
> +		entries[i].vector = entry->irq;
> +		irq_set_msi_desc(entry->irq, entry);
> +		i++;
> +	}
> +}
> +
> +static void msix_mask_entries(struct pci_dev *dev,
> +					struct msix_entry *entries)
> +{
> +	struct msi_desc *entry;
> +	int i = 0;
> +
> +	list_for_each_entry(entry, &dev->msi_list, list) {
>  		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
>  						PCI_MSIX_ENTRY_VECTOR_CTRL;
>  
> -		entries[i].vector = entry->irq;
> -		irq_set_msi_desc(entry->irq, entry);
>  		entry->masked = readl(entry->mask_base + offset);
>  		msix_mask_irq(entry, 1);
>  		i++;
> @@ -696,6 +707,8 @@ static int msix_capability_init(struct pci_dev *dev,
>  	if (ret)
>  		return ret;
>  
> +	msix_mask_entries(dev, entries);
> +
>  	ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
>  	if (ret)
>  		goto error;




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 12:58:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 12:58:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkaHE-0003Cp-Pm; Mon, 17 Dec 2012 12:58:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkaHE-0003Cg-5j
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 12:58:36 +0000
Received: from [85.158.139.211:12244] by server-15.bemta-5.messagelabs.com id
	38/8A-20523-BF61FC05; Mon, 17 Dec 2012 12:58:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355749069!18393606!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1448 invoked from network); 17 Dec 2012 12:57:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 12:57:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 12:57:48 +0000
Message-Id: <50CF24DA02000078000B0BC4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 12:57:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121214120836.6ec4ad4a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 13:42, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 14 Dec 2012, Mukesh Rathor wrote:
>> On Thu, 13 Dec 2012 14:25:16 +0000
>> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> 
>> > On Thu, 13 Dec 2012, Jan Beulich wrote:
>> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
>> > > >>> <stefano.stabellini@eu.citrix.com>
>> > > wrote:
>> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
>> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
>> > > >> >>> <mukesh.rathor@oracle.com> wrote:
>> > 
>> > Actually I think that you might be right: just looking at the code it
>> > seems that the mask bits get written to the table once as part of the
>> > initialization process:
>> > 
>> > pci_enable_msix -> msix_capability_init -> msix_program_entries
>> > 
>> > Unfortunately msix_program_entries is called a few lines after
>> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
>> > as a pirq.
>> > However, after that is done, all masking/unmasking is done via
>> > irq_mask, which we handle properly by masking/unmasking the
>> > corresponding event channels.
>> > 
>> > 
>> > Possible solutions off the top of my head:
>> > 
>> > - in msix_program_entries, instead of writing to the table directly
>> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask();
>> > 
>> > - replace msix_program_entries with arch_msix_program_entries, but it
>> > would probably be unpopular.
>> 
>> 
>> Can you or Jan or somebody please take that over? I can focus on other
>> PVH things then and try to get a patch in asap.
> 
> The following patch moves the MSI-X masking before arch_setup_msi_irqs,
> which is when Linux calls PHYSDEVOP_map_pirq, the hypercall that
> causes Xen to execute msix_capability_init.

And in what way would that help?

Jan

> I don't have access to a machine with an MSI-X device right now so I
> have only tested the appended patch with MSI.
> 
> ---
> 
> 
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index a825d78..ef73e80 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -652,11 +652,22 @@ static void msix_program_entries(struct pci_dev *dev,
>  	int i = 0;
>  
>  	list_for_each_entry(entry, &dev->msi_list, list) {
> +		entries[i].vector = entry->irq;
> +		irq_set_msi_desc(entry->irq, entry);
> +		i++;
> +	}
> +}
> +
> +static void msix_mask_entries(struct pci_dev *dev,
> +					struct msix_entry *entries)
> +{
> +	struct msi_desc *entry;
> +	int i = 0;
> +
> +	list_for_each_entry(entry, &dev->msi_list, list) {
>  		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
>  						PCI_MSIX_ENTRY_VECTOR_CTRL;
>  
> -		entries[i].vector = entry->irq;
> -		irq_set_msi_desc(entry->irq, entry);
>  		entry->masked = readl(entry->mask_base + offset);
>  		msix_mask_irq(entry, 1);
>  		i++;
> @@ -696,6 +707,8 @@ static int msix_capability_init(struct pci_dev *dev,
>  	if (ret)
>  		return ret;
>  
> +	msix_mask_entries(dev, entries);
> +
>  	ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
>  	if (ret)
>  		goto error;




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 13:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 13:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkajO-0004cn-7E; Mon, 17 Dec 2012 13:27:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkajL-0004ca-VH
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 13:27:40 +0000
Received: from [85.158.139.83:17137] by server-16.bemta-5.messagelabs.com id
	76/4E-09208-ACD1FC05; Mon, 17 Dec 2012 13:27:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355750841!23899867!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8154 invoked from network); 17 Dec 2012 13:27:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 13:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="199443"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 13:25:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 13:25:24 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkahA-0005Cp-60;
	Mon, 17 Dec 2012 13:25:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tkah9-0007Ru-NZ;
	Mon, 17 Dec 2012 13:25:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14772-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 13:25:23 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14772: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14772 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14772/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-amd64-xl          18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check        fail REGR. vs. 14679
 test-amd64-i386-xl           18 leak-check/check          fail REGR. vs. 14679
 test-i386-i386-xl            18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check    fail REGR. vs. 14679
 test-amd64-i386-xl-multivcpu 18 leak-check/check          fail REGR. vs. 14679
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14679
 test-amd64-i386-xl-credit2   18 leak-check/check fail in 14762 REGR. vs. 14679

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    7 debian-install              fail pass in 14762
 test-amd64-amd64-xl-sedf      9 guest-start        fail in 14762 pass in 14772

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14679
 test-amd64-amd64-xl-sedf-pin 18 leak-check/check          fail REGR. vs. 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  93e17b0cd035
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23428:93e17b0cd035
tag:         tip
user:        Greg Wettstein <greg@enjellic.com>
date:        Thu Dec 13 14:35:58 2012 +0000
    
    libxl: avoid blktap2 deadlock on cleanup
    
    Establishes correct cleanup behavior for blktap devices.  This patch
    implements the release of the backend device before calling for
    the destruction of the userspace component of the blktap device.
    
    Without this patch the kernel xen-blkback driver deadlocks with
    the blktap2 user control plane until the IPC channel is terminated by the
    timeout on the select() call.  This results in a noticeable delay
    in the termination of the guest and causes the blktap minor
    number which had been allocated to be orphaned.
    
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23427:255a0b6a8104
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Wed Dec 12 17:41:15 2012 +0000
    
    From: Ian Campbell <ian.campbell@citrix.com>
    
    libxl: attempt to cleanup tapdisk processes on disk backend destroy.
    
    This patch properly terminates the tapdisk2 process(es) started
    to service a virtual block device.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    xen-unstable changeset: 23883:7998217630e2
    xen-unstable date: Wed Sep 28 16:42:11 2011 +0100
    Signed-off-by: Greg Wettstein <greg@enjellic.com>
    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

    Backport-requested-by: Greg Wettstein <greg@enjellic.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 14:16:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:16:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkbUg-0005JL-H0; Mon, 17 Dec 2012 14:16:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TkbUf-0005JG-0S
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 14:16:33 +0000
Received: from [85.158.139.83:6715] by server-7.bemta-5.messagelabs.com id
	3A/14-08009-0492FC05; Mon, 17 Dec 2012 14:16:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355753788!30234558!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI2MTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27305 invoked from network); 17 Dec 2012 14:16:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 14:16:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="891905"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	17 Dec 2012 14:16:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 17 Dec 2012 09:16:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TkbUY-0002JI-Lb;
	Mon, 17 Dec 2012 14:16:26 +0000
Date: Mon, 17 Dec 2012 14:16:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50CF24DA02000078000B0BC4@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1212171413020.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121214120836.6ec4ad4a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
	<50CF24DA02000078000B0BC4@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Jan Beulich wrote:
> >>> On 17.12.12 at 13:42, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Fri, 14 Dec 2012, Mukesh Rathor wrote:
> >> On Thu, 13 Dec 2012 14:25:16 +0000
> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >> 
> >> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> >> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
> >> > > >>> <stefano.stabellini@eu.citrix.com>
> >> > > wrote:
> >> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> >> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
> >> > > >> >>> <mukesh.rathor@oracle.com> wrote:
> >> > 
> >> > Actually I think that you might be right: just looking at the code it
> >> > seems that the mask bits get written to the table once as part of the
> >> > initialization process:
> >> > 
> >> > pci_enable_msix -> msix_capability_init -> msix_program_entries
> >> > 
> >> > Unfortunately msix_program_entries is called a few lines after
> >> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
> >> > as a pirq.
> >> > However, after that is done, all masking/unmasking goes through
> >> > irq_mask, which we handle properly by masking/unmasking the
> >> > corresponding event channels.
> >> > 
> >> > 
> >> > Possible solutions off the top of my head:
> >> > 
> >> > - in msix_program_entries instead of writing to the table directly
> >> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); 
> >> > 
> >> > - replace msix_program_entries with arch_msix_program_entries, but it
> >> > would probably be unpopular.
> >> 
> >> 
> >> Can you or Jan or somebody please take that over? I can focus on other
> >> PVH things then and try to get a patch in asap.
> > 
> > The following patch moves the MSI-X masking before arch_setup_msi_irqs,
> > which is when Linux calls PHYSDEVOP_map_pirq, the hypercall that
> > causes Xen to execute msix_capability_init.
> 
> And in what way would that help?

I was working under the assumption that, before the call to
msix_capability_init in Xen (in particular before
rangeset_add_range(mmio_ro_ranges, dev->msix_table...)), the table
is actually writeable by the guest.

If that is the case, then this scheme should work.
If it is not the case, this patch is wrong.



> > I don't have access to a machine with an MSI-X device right now so I
> > have only tested the appended patch with MSI.
> > 
> > ---
> > 
> > 
> > 
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index a825d78..ef73e80 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -652,11 +652,22 @@ static void msix_program_entries(struct pci_dev *dev,
> >  	int i = 0;
> >  
> >  	list_for_each_entry(entry, &dev->msi_list, list) {
> > +		entries[i].vector = entry->irq;
> > +		irq_set_msi_desc(entry->irq, entry);
> > +		i++;
> > +	}
> > +}
> > +
> > +static void msix_mask_entries(struct pci_dev *dev,
> > +					struct msix_entry *entries)
> > +{
> > +	struct msi_desc *entry;
> > +	int i = 0;
> > +
> > +	list_for_each_entry(entry, &dev->msi_list, list) {
> >  		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
> >  						PCI_MSIX_ENTRY_VECTOR_CTRL;
> >  
> > -		entries[i].vector = entry->irq;
> > -		irq_set_msi_desc(entry->irq, entry);
> >  		entry->masked = readl(entry->mask_base + offset);
> >  		msix_mask_irq(entry, 1);
> >  		i++;
> > @@ -696,6 +707,8 @@ static int msix_capability_init(struct pci_dev *dev,
> >  	if (ret)
> >  		return ret;
> >  
> > +	msix_mask_entries(dev, entries);
> > +
> >  	ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
> >  	if (ret)
> >  		goto error;
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 14:37:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkboN-0005kx-1w; Mon, 17 Dec 2012 14:36:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TkboL-0005ks-OC
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 14:36:53 +0000
Received: from [85.158.138.51:44620] by server-10.bemta-3.messagelabs.com id
	AA/3E-07616-00E2FC05; Mon, 17 Dec 2012 14:36:48 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355755007!21115651!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15133 invoked from network); 17 Dec 2012 14:36:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 14:36:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; d="asc'?scan'208";a="202164"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 14:36:48 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 14:36:46 +0000
Message-ID: <1355755005.5931.9.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 17 Dec 2012 15:36:45 +0100
In-Reply-To: <50CEE75302000078000B0A99@nat28.tlf.novell.com>
References: <patchbomb.1355280770@Solace>
	<bced65aa4410b0272064.1355280771@Solace>
	<50C864D402000078000AFDB0@nat28.tlf.novell.com>
	<50CB8318.6050807@eu.citrix.com>
	<50CEE75302000078000B0A99@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"Keir\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 6 v2] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0308455572695979880=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0308455572695979880==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-XrI9AYGWM0H0rwTrW3Id"

--=-XrI9AYGWM0H0rwTrW3Id
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2012-12-17 at 08:35 +0000, Jan Beulich wrote:
> >>> On 14.12.12 at 20:50, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> > On 12/12/12 10:04, Jan Beulich wrote:
> >> This, imo, really belongs into sched-if.h.
> >
> > Hmm, it looks like there are a number of things that could live in
> > either sched-if.h or sched.h; but I think this one probably most
> > closely links with things like vcpu_is_runnable() and
> > cpu_is_haltable(), both of which are in sched.h; so sched.h is where
> > I'd put it.
>
> Any use of schedule_data, the type of which is declared in
> sched-if.h, should be in sched-if.h - someone only including
> sched.h can't make use of it anyway (and it's intended to be
> used by scheduler code, i.e. shouldn't be visible to other
> code).
>
Ok, I find this argument quite convincing; I'll put the macro in
sched-if.h.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-XrI9AYGWM0H0rwTrW3Id
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDPLf0ACgkQk4XaBE3IOsRQLACgopSWEMV9kH2pvkQCvvYodsbM
h9gAnA2OVkujsfCE6oK+kiUly0BQGNZ4
=WbI4
-----END PGP SIGNATURE-----

--=-XrI9AYGWM0H0rwTrW3Id--


--===============0308455572695979880==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0308455572695979880==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 14:37:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkboT-0005lL-FQ; Mon, 17 Dec 2012 14:37:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TkboS-0005lD-Eu
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 14:37:00 +0000
Received: from [85.158.143.35:2778] by server-3.bemta-4.messagelabs.com id
	E1/80-18211-B0E2FC05; Mon, 17 Dec 2012 14:36:59 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355755017!10897178!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 919 invoked from network); 17 Dec 2012 14:36:58 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 14:36:58 -0000
Received: (qmail 8059 invoked from network); 17 Dec 2012 16:36:57 +0200
Received: from rcojocaru.dsd.ro (10.10.14.59)
	by mail.bitdefender.com with SMTP; 17 Dec 2012 16:36:56 +0200
MIME-Version: 1.0
X-Mercurial-Node: 46990160b4a373ca96ba860fcb321ee473298960
Message-Id: <46990160b4a373ca96ba.1355755074@rcojocaru.dsd.ro>
User-Agent: Mercurial-patchbomb/2.4.1
Date: Mon, 17 Dec 2012 16:37:54 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
To: xen-devel@lists.xensource.com
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234178,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_SUMM_400_WORDS; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS;
	NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled], URL: [Enabled], URI DNSBL:
	[Disabled], SQMD: [Enabled, Hits: none, MD5:
	2b42d861e6b64dd4d7eb9bc19b2748ef.fuzzy.fzrbl.org], RTDA: [Enabled,
	Hit: No, Details: v1.4.6; Id: 2m1g3t9.17ekhd4bo.30dn], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44409
Subject: [Xen-devel] [PATCH] mem_event: Add support for MEM_EVENT_REASON_MSR
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the new MEM_EVENT_REASON_MSR event type. It works similarly
to the other register events, except that event.gla always contains
the MSR type (in addition to event.gfn, which holds the value).

Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>

diff -r f50aab21f9f2 -r 46990160b4a3 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Thu Dec 13 14:39:31 2012 +0000
+++ b/xen/arch/x86/hvm/hvm.c	Mon Dec 17 16:37:18 2012 +0200
@@ -2927,6 +2927,8 @@ int hvm_msr_write_intercept(unsigned int
     hvm_cpuid(1, &cpuid[0], &cpuid[1], &cpuid[2], &cpuid[3]);
     mtrr = !!(cpuid[3] & cpufeat_mask(X86_FEATURE_MTRR));
 
+    hvm_memory_event_msr(msr, msr_content);
+
     switch ( msr )
     {
     case MSR_EFER:
@@ -3857,6 +3859,7 @@ long do_hvm_op(unsigned long op, XEN_GUE
             case HVM_PARAM_MEMORY_EVENT_CR0:
             case HVM_PARAM_MEMORY_EVENT_CR3:
             case HVM_PARAM_MEMORY_EVENT_CR4:
+            case HVM_PARAM_MEMORY_EVENT_MSR:
                 if ( d == current->domain )
                     rc = -EPERM;
                 break;
@@ -4485,6 +4488,14 @@ void hvm_memory_event_cr4(unsigned long 
                            value, old, 0, 0);
 }
 
+void hvm_memory_event_msr(unsigned long msr, unsigned long value)
+{
+    hvm_memory_event_traps(current->domain->arch.hvm_domain
+                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
+                           MEM_EVENT_REASON_MSR,
+                           value, ~value, 1, msr);
+}
+
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
diff -r f50aab21f9f2 -r 46990160b4a3 xen/include/asm-x86/hvm/hvm.h
--- a/xen/include/asm-x86/hvm/hvm.h	Thu Dec 13 14:39:31 2012 +0000
+++ b/xen/include/asm-x86/hvm/hvm.h	Mon Dec 17 16:37:18 2012 +0200
@@ -448,6 +448,7 @@ int hvm_x2apic_msr_write(struct vcpu *v,
 void hvm_memory_event_cr0(unsigned long value, unsigned long old);
 void hvm_memory_event_cr3(unsigned long value, unsigned long old);
 void hvm_memory_event_cr4(unsigned long value, unsigned long old);
+void hvm_memory_event_msr(unsigned long msr, unsigned long value);
 /* Called for current VCPU on int3: returns -1 if no listener */
 int hvm_memory_event_int3(unsigned long gla);
 
diff -r f50aab21f9f2 -r 46990160b4a3 xen/include/public/hvm/params.h
--- a/xen/include/public/hvm/params.h	Thu Dec 13 14:39:31 2012 +0000
+++ b/xen/include/public/hvm/params.h	Mon Dec 17 16:37:18 2012 +0200
@@ -141,6 +141,8 @@
 #define HVM_PARAM_ACCESS_RING_PFN   28
 #define HVM_PARAM_SHARING_RING_PFN  29
 
-#define HVM_NR_PARAMS          30
+#define HVM_PARAM_MEMORY_EVENT_MSR  30
+
+#define HVM_NR_PARAMS          31
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
diff -r f50aab21f9f2 -r 46990160b4a3 xen/include/public/mem_event.h
--- a/xen/include/public/mem_event.h	Thu Dec 13 14:39:31 2012 +0000
+++ b/xen/include/public/mem_event.h	Mon Dec 17 16:37:18 2012 +0200
@@ -45,6 +45,7 @@
 #define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
 #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
 #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
+#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR type */
 
 typedef struct mem_event_st {
     uint32_t flags;

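For context, a monitoring application consuming these events would key off the new reason code. The following standalone sketch is purely illustrative: `demo_mem_event_t` and `decode_msr_event` are hypothetical names, and the struct is a simplified stand-in for the public `mem_event_st` (which carries additional fields), with none of the actual ring-buffer plumbing. It only demonstrates the field convention the patch describes: for MEM_EVENT_REASON_MSR, `gla` carries the MSR and `gfn` carries the value.

```c
#include <stdint.h>

/* Reason code as introduced by this patch (xen/include/public/mem_event.h). */
#define MEM_EVENT_REASON_MSR 7

/* Hypothetical, simplified stand-in for mem_event_st; the real public
 * structure has more fields (offset, p2mt, vcpu_id, ...). */
typedef struct {
    uint32_t reason;
    uint64_t gfn; /* for MSR events: the value being written */
    uint64_t gla; /* for MSR events: the MSR */
} demo_mem_event_t;

/* Decode an event per the patch description: gla holds the MSR, gfn the
 * value.  Returns 1 and fills in the out-parameters for an MSR event,
 * 0 for any other reason code. */
static int decode_msr_event(const demo_mem_event_t *ev,
                            uint64_t *msr, uint64_t *value)
{
    if (ev->reason != MEM_EVENT_REASON_MSR)
        return 0;
    *msr = ev->gla;
    *value = ev->gfn;
    return 1;
}
```

A listener would call this on each event pulled off the ring and, for MSR events, act on the (msr, value) pair.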
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 14:42:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkbtN-0006CN-Tv; Mon, 17 Dec 2012 14:42:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TkbtN-0006CB-2k
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 14:42:05 +0000
Received: from [85.158.139.83:61379] by server-10.bemta-5.messagelabs.com id
	96/CE-13383-C3F2FC05; Mon, 17 Dec 2012 14:42:04 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355755321!23914583!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2396 invoked from network); 17 Dec 2012 14:42:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 14:42:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; d="asc'?scan'208";a="202329"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 14:41:54 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 14:41:54 +0000
Message-ID: <1355755313.5931.14.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Mon, 17 Dec 2012 15:41:53 +0100
In-Reply-To: <50CB8068.7080404@eu.citrix.com>
References: <patchbomb.1355280770@Solace>
	<21b142498d4699921251.1355280773@Solace>
	<50CB8068.7080404@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3 of 6 v2] xen: sched_credit: use
 current_on_cpu() when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1236313346011643017=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1236313346011643017==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-8xpWPeeNuFXolBvxj/MB"

--=-8xpWPeeNuFXolBvxj/MB
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-14 at 19:39 +0000, George Dunlap wrote:
> Hmm, I hadn't read this patch when I commented about removing the macro
> from the first patch. :-)
>
:-)

> I personally think that using vc->processor is better in that patch
> anyway; but using this macro elsewhere is probably fine.
>
Ok.

> I think from a taste point of view, I would have put this patch, with
> the new definition, as the first patch in the series, and then had the
> second patch just use it.
>
Ok then, when respinning I'll have the first patch define and use
the macro. Then I'll have the 'fix picking' patch use vc->processor
_instead of_ the macro. Yes, that kills the need for the macro, but since
I already have the code for it, and I think things do look a bit better
with it, I won't actually kill it. Let me know (here or in a reply to the
corresponding e-mail in the reposting) if you don't want it.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-8xpWPeeNuFXolBvxj/MB
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDPLzEACgkQk4XaBE3IOsSnLQCgn/GEzEAPTAm5PuGVyocnQks6
TYgAnjGZw3u2fYueAykIW5XpH5a1WGR5
=gjfd
-----END PGP SIGNATURE-----

--=-8xpWPeeNuFXolBvxj/MB--


--===============1236313346011643017==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1236313346011643017==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 14:43:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:43:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkbue-0006L5-DD; Mon, 17 Dec 2012 14:43:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tkbud-0006Ky-94
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 14:43:23 +0000
Received: from [193.109.254.147:43201] by server-16.bemta-14.messagelabs.com
	id 37/2B-18932-A8F2FC05; Mon, 17 Dec 2012 14:43:22 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1355755401!7743234!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14537 invoked from network); 17 Dec 2012 14:43:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 14:43:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; d="asc'?scan'208";a="202376"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 14:43:21 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 14:43:21 +0000
Message-ID: <1355755400.5931.16.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Mon, 17 Dec 2012 15:43:20 +0100
In-Reply-To: <50CB84A1.1060205@eu.citrix.com>
References: <patchbomb.1355280770@Solace>
	<7a199dea34425e890b31.1355280774@Solace>
	<50CB84A1.1060205@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 4 of 6 v2] xen: tracing: report where a VCPU
 wakes up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5860694473982927584=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5860694473982927584==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-lAl3P5FNUVonLZdAjFEr"

--=-lAl3P5FNUVonLZdAjFEr
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-14 at 19:57 +0000, George Dunlap wrote:
> On 12/12/12 02:52, Dario Faggioli wrote:
> > When looking at traces, it turns out to be useful to know where a
> > waking-up VCPU is being queued. Yes, that is always the CPU where
> > it ran last, but that information can well be lost in past trace
> > records!
>
> When you say "lost in past trace records", do you primarily mean that
> the records themselves have been lost (due to the per-cpu trace buffers
> filling up), or do you mean that it may be way way back and you don't
> want to go back and find it?
>
The latter... I'm quite lazy when looking at traces! :-P

> If the latter, I think the best thing to do would be to just augment
> xenalyze to keep track of that information and print it when it sees
> the wake record.
>
I agree. I'll kill this patch for now, and investigate further solutions
along the lines you suggested in the future.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-lAl3P5FNUVonLZdAjFEr
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDPL4gACgkQk4XaBE3IOsTOqQCff6AuQlgxk7vR7uVhDuozQZ6A
1B0An2frFkafjgAEsqgNSv1qqyGd5o34
=n7sj
-----END PGP SIGNATURE-----

--=-lAl3P5FNUVonLZdAjFEr--


--===============5860694473982927584==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5860694473982927584==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 14:45:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:45:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkbwm-0006WQ-VR; Mon, 17 Dec 2012 14:45:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tkbwl-0006WG-9n
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 14:45:35 +0000
Received: from [85.158.138.51:39244] by server-1.bemta-3.messagelabs.com id
	06/B3-08906-E003FC05; Mon, 17 Dec 2012 14:45:34 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355755532!29340942!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16862 invoked from network); 17 Dec 2012 14:45:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 14:45:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; d="asc'?scan'208";a="202491"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 14:45:31 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 17 Dec 2012 14:45:30 +0000
Message-ID: <1355755530.5931.18.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Mon, 17 Dec 2012 15:45:30 +0100
In-Reply-To: <50CB8677.9050001@eu.citrix.com>
References: <patchbomb.1355280770@Solace>
	<036a3bb938a550f2ee0c.1355280776@Solace>
	<50CB8677.9050001@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6 of 6 v2] xen: sched_credit: add some
 tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2266619688742480579=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2266619688742480579==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-e+sNvWiBKjN8QlRlLvBh"

--=-e+sNvWiBKjN8QlRlLvBh
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-14 at 20:05 +0000, George Dunlap wrote:
> > -    /* Send scheduler interrupts to designated CPUs */
> >       if ( !cpumask_empty(&mask) )
> > +    {
> > +        if ( unlikely(tb_init_done) )
> > +        {
> > +            /* Avoid TRACE_*: saves checking !tb_init_done each step */
> > +            for_each_cpu(cpu, &mask)
> > +                trace_var(TRC_CSCHED_TICKLE, 0, sizeof(cpu), &cpu);
> > +        }
> 
> Hmm, probably should have pointed this out before, but trace_var() is a
> static inline which checks tb_init_done -- you want __trace_var(). :-)
> 
Correct. My bad. It actually was __trace_var() at the beginning (that's
the reason why this particular record deserves special treatment), but I
messed things up while preparing this new version. Sorry for that and
thanks for having spotted this! :-)

Will fix.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-e+sNvWiBKjN8QlRlLvBh
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDPMAoACgkQk4XaBE3IOsTjzQCbB+F9UkolfMD6Dj29xS2GQGPr
BM4An2wDqjZftv/nn5uzYL3/p08fh/B/
=iKfR
-----END PGP SIGNATURE-----

--=-e+sNvWiBKjN8QlRlLvBh--


--===============2266619688742480579==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2266619688742480579==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 14:58:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 14:58:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkc9O-0006v9-Ju; Mon, 17 Dec 2012 14:58:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tkc9N-0006v4-GQ
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 14:58:37 +0000
Received: from [193.109.254.147:39950] by server-5.bemta-14.messagelabs.com id
	7F/90-32031-C133FC05; Mon, 17 Dec 2012 14:58:36 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355756314!3207103!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21148 invoked from network); 17 Dec 2012 14:58:35 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-6.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	17 Dec 2012 14:58:35 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:57314 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TkcDV-0004BL-L3; Mon, 17 Dec 2012 16:02:53 +0100
Date: Mon, 17 Dec 2012 15:58:30 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <357884620.20121217155830@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20121216173824.GA4518@phenom.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: linux-kernel@vger.kernel.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
	dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 16, 2012, 6:38:24 PM, you wrote:

> On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.

> Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> vacation during that time (just got back).

> Did you by any chance try to do a git bisect to narrow down
> which merge it was?

Hi Konrad,

I tried to bisect, but have not succeeded so far. Somehow I have the feeling it is at least partly .config related.
After making a new clone and bisecting down by hand, I came back to v3.7, but that also didn't boot.
So I will see if I can do it the other way around :S
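
[Editor's note: a sketch of the bisect workflow Konrad suggests. In the real session you would run it in the kernel tree, e.g. `git bisect start 7313264b899b v3.7`; here a tiny throwaway repository is built so the mechanics are reproducible end to end. All names and the "commit 4 broke boot" premise are hypothetical.]

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"

# Five commits; pretend the 4th one introduced the boot failure.
for i in 1 2 3 4 5; do
    echo "$i" > state
    git add state
    git commit -qm "commit $i"
done

# Mark HEAD bad and HEAD~4 good, then let `git bisect run` drive the
# search: the test command exits 0 (good) while state < 4, non-zero (bad)
# otherwise.  Git prints "<sha> is the first bad commit" when done.
git bisect start HEAD HEAD~4
out=$(git bisect run sh -c 'test "$(cat state)" -lt 4')
echo "$out" | grep "first bad commit"
```

When bisecting a suspected .config interaction, carrying the same .config across every bisect step (e.g. `make olddefconfig` after each checkout) keeps the comparison fair.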

--
Sander

> Thanks!
>> The boot stalls:
>> 
>> [    0.000000] ACPI: PM-Timer IO Port: 0x808
>> [    0.000000] ACPI: Local APIC address 0xfee00000
>> [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
>> [    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
>> [    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
>> [    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> [   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
>> [   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
>> [   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
>> [   64.598692] sending NMI to all CPUs:
>> [   64.598716] xen: vector 0x2 is not implemented
>> 
>> 
>> Perhaps an interesting line is the incomplete one below (no end of range), and it stalls there for some time before the kernel reports the stall itself:
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> 
>> 
>> The exact same config with 3.7.0 as kernel works fine.
>> Complete serial log is attached.
>> 
>> --
>> 
>> Sander
>> 
>> 





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 15:08:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 15:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkcJB-0007Bc-2E; Mon, 17 Dec 2012 15:08:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TkcJ9-0007BU-JE
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 15:08:43 +0000
Received: from [193.109.254.147:47818] by server-7.bemta-14.messagelabs.com id
	94/7F-08102-A753FC05; Mon, 17 Dec 2012 15:08:42 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355756912!8997015!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6602 invoked from network); 17 Dec 2012 15:08:33 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-9.tower-27.messagelabs.com with SMTP;
	17 Dec 2012 15:08:33 -0000
X-TM-IMSS-Message-ID: <2ca49beb00050eae@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 2ca49beb00050eae ;
	Mon, 17 Dec 2012 10:08:18 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBHF8PMu011304; 
	Mon, 17 Dec 2012 10:08:25 -0500
Message-ID: <50CF3569.4090004@tycho.nsa.gov>
Date: Mon, 17 Dec 2012 10:08:25 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1355435230-4176-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1355738272.14620.10.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355738272.14620.10.camel@zakaz.uk.xensource.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/v2] libxl: postpone backend name
	resolution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/17/2012 04:57 AM, Ian Campbell wrote:
> On Thu, 2012-12-13 at 21:47 +0000, Daniel De Graaf wrote:
>> This replaces the backend_domid field in libxl devices with a structure
>> allowing either a domid or a domain name to be specified.  The domain
>> name is resolved into a domain ID in the _setdefault function when
>> adding the device.  This change allows the backend of the block devices
>> to be specified (which previously required passing the libxl_ctx down
>> into the block device parser).
> 
> I didn't review this in detail yet but my first thought was that this is
> a libxl API change, and I can't see any provision for backwards
> compatibility.

Nope, I forgot to address that in this version. Will send v3 soon.

> If you are doing something clever which I've missed then it deserves a
> comment ;-)
> 
> Assuming not then the simplest solution would be to remove the struct
> and just add the name as a field to each affected device. Or maybe an
> anonymous struct would do the job if the members were backend_domid and
> backend_name?

I think just adding the field would work better; the anonymous struct seems
like it might be compiler-dependent.  It's not significantly more code in
the setdefault functions to avoid using the struct.

> You could also potentially do something with the provisions around
> LIBXL_API_VERSION documented in libxl.h.
> 
> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 15:11:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 15:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkcLT-0007LI-Pj; Mon, 17 Dec 2012 15:11:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TkcLR-0007L8-MF
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 15:11:05 +0000
Received: from [85.158.143.99:23205] by server-1.bemta-4.messagelabs.com id
	71/06-28401-9063FC05; Mon, 17 Dec 2012 15:11:05 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-216.messagelabs.com!1355757058!23442169!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8545 invoked from network); 17 Dec 2012 15:10:58 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-10.tower-216.messagelabs.com with SMTP;
	17 Dec 2012 15:10:58 -0000
X-TM-IMSS-Message-ID: <08ded1ec000069ec@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 08ded1ec000069ec ;
	Mon, 17 Dec 2012 10:10:40 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBHFAt6K011621; 
	Mon, 17 Dec 2012 10:10:55 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon, 17 Dec 2012 10:10:54 -0500
Message-Id: <1355757054-26955-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1355738272.14620.10.camel@zakaz.uk.xensource.com>
References: <1355738272.14620.10.camel@zakaz.uk.xensource.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3] libxl: postpone backend name resolution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds a backend_domname field in addition to the backend_domid field
in libxl devices, allowing a backend to be specified either by domid or
by domain name.  The domain name is resolved into a domain ID in the
_setdefault function when the device is added.  This change allows the
backend of block devices to be specified (which previously required
passing the libxl_ctx down into the block device parser), and should
simplify libxl users that wish to use backend domains.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>

---

This patch does not include the changes to tools/libxl/libxlu_disk_l.c
and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
changes due to different generator versions.
---
 docs/misc/xl-disk-configuration.txt | 12 +++++++++
 tools/libxl/libxl.c                 | 49 ++++++++++++++++++++++++++++++++++++-
 tools/libxl/libxl_types.idl         |  5 ++++
 tools/libxl/libxlu_disk_l.l         |  1 +
 tools/libxl/xl_cmdimpl.c            | 36 ++++++---------------------
 5 files changed, 73 insertions(+), 30 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 86c16be..5bd456d 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -139,6 +139,18 @@ cdrom
 Convenience alias for "devtype=cdrom".
 
 
+backend=<domain-name>
+---------------------
+
+Description:           Designates a backend domain for the device
+Supported values:      Valid domain names
+Mandatory:             No
+
+Specifies the backend domain which this device should attach to. This
+defaults to domain 0. Specifying another domain requires setting up a
+driver domain which is outside the scope of this document.
+
+
 backendtype=<backend-type>
 --------------------------
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 6c4455e..a04b435 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1710,12 +1710,35 @@ out:
     return;
 }
 
+/* backend domain name-to-domid conversion utility */
+static int libxl__resolve_domain(libxl__gc *gc, const char *name)
+{
+    int i, rv;
+    uint32_t domid;
+    for (i=0; name[i]; i++) {
+        if (!isdigit(name[i])) {
+            rv = libxl_name_to_domid(libxl__gc_owner(gc), name, &domid);
+            if (rv)
+                return rv;
+            return domid;
+        }
+    }
+    return atoi(name);
+}
+
 /******************************************************************************/
 int libxl__device_vtpm_setdefault(libxl__gc *gc, libxl_device_vtpm *vtpm)
 {
+   int rc;
    if(libxl_uuid_is_nil(&vtpm->uuid)) {
       libxl_uuid_generate(&vtpm->uuid);
    }
+   if (vtpm->backend_domname) {
+       rc = libxl__resolve_domain(gc, vtpm->backend_domname);
+       if (rc < 0)
+           return rc;
+       vtpm->backend_domid = rc;
+   }
    return 0;
 }
 
@@ -1956,7 +1979,13 @@ int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
     rc = libxl__device_disk_set_backend(gc, disk);
     if (rc) return rc;
 
-    return rc;
+    if (disk->backend_domname) {
+        rc = libxl__resolve_domain(gc, disk->backend_domname);
+        if (rc < 0)
+            return rc;
+        disk->backend_domid = rc;
+    }
+    return 0;
 }
 
 int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
@@ -2784,6 +2813,12 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
         abort();
     }
 
+    if (nic->backend_domname) {
+        int rc = libxl__resolve_domain(gc, nic->backend_domname);
+        if (rc < 0)
+            return rc;
+        nic->backend_domid = rc;
+    }
     return 0;
 }
 
@@ -3144,6 +3179,12 @@ out:
 
 int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
 {
+    if (vkb->backend_domname) {
+        int rc = libxl__resolve_domain(gc, vkb->backend_domname);
+        if (rc < 0)
+            return rc;
+        vkb->backend_domid = rc;
+    }
     return 0;
 }
 
@@ -3236,6 +3277,12 @@ int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
     libxl_defbool_setdefault(&vfb->sdl.enable, false);
     libxl_defbool_setdefault(&vfb->sdl.opengl, false);
 
+    if (vfb->backend_domname) {
+        int rc = libxl__resolve_domain(gc, vfb->backend_domname);
+        if (rc < 0)
+            return rc;
+        vfb->backend_domid = rc;
+    }
     return 0;
 }
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 93524f0..131332a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -345,6 +345,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
 libxl_device_vfb = Struct("device_vfb", [
     ("backend_domid", libxl_domid),
+    ("backend_domname",string),
     ("devid",         libxl_devid),
     ("vnc",           libxl_vnc_info),
     ("sdl",           libxl_sdl_info),
@@ -354,11 +355,13 @@ libxl_device_vfb = Struct("device_vfb", [
 
 libxl_device_vkb = Struct("device_vkb", [
     ("backend_domid", libxl_domid),
+    ("backend_domname", string),
     ("devid", libxl_devid),
     ])
 
 libxl_device_disk = Struct("device_disk", [
     ("backend_domid", libxl_domid),
+    ("backend_domname", string),
     ("pdev_path", string),
     ("vdev", string),
     ("backend", libxl_disk_backend),
@@ -371,6 +374,7 @@ libxl_device_disk = Struct("device_disk", [
 
 libxl_device_nic = Struct("device_nic", [
     ("backend_domid", libxl_domid),
+    ("backend_domname", string),
     ("devid", libxl_devid),
     ("mtu", integer),
     ("model", string),
@@ -398,6 +402,7 @@ libxl_device_pci = Struct("device_pci", [
 
 libxl_device_vtpm = Struct("device_vtpm", [
     ("backend_domid",    libxl_domid),
+    ("backend_domname",  string),
     ("devid",            libxl_devid),
     ("uuid",             libxl_uuid),
 ])
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index bee16a1..7c4e7f1 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -168,6 +168,7 @@ devtype=disk,?	{ DPC->disk->is_cdrom = 0; }
 devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
+backend=[^,]*,? { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index e964bf1..103e344 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1090,12 +1090,7 @@ static void parse_config_data(const char *config_source,
                      break;
                   *p2 = '\0';
                   if (!strcmp(p, "backend")) {
-                     if(domain_qualifier_to_domid(p2 + 1, &(vtpm->backend_domid), 0))
-                     {
-                        fprintf(stderr,
-                              "Specified vtpm backend domain `%s' does not exist!\n", p2 + 1);
-                        exit(1);
-                     }
+                     vtpm->backend_domname = strdup(p2 + 1);
                      got_backend = true;
                   } else if(!strcmp(p, "uuid")) {
                      if( libxl_uuid_from_string(&vtpm->uuid, p2 + 1) ) {
@@ -1190,11 +1185,8 @@ static void parse_config_data(const char *config_source,
                     free(nic->ifname);
                     nic->ifname = strdup(p2 + 1);
                 } else if (!strcmp(p, "backend")) {
-                    if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
-                        fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
-                        nic->backend_domid = 0;
-                    }
-                    if (nic->backend_domid != 0 && run_hotplug_scripts) {
+                    nic->backend_domname = strdup(p2 + 1);
+                    if (run_hotplug_scripts) {
                         fprintf(stderr, "ERROR: the vif 'backend=' option "
                                 "cannot be used in conjunction with "
                                 "run_hotplug_scripts, please set "
@@ -2431,8 +2423,6 @@ static void cd_insert(uint32_t domid, const char *virtdev, char *phys)
 
     parse_disk_config(&config, buf, &disk);
 
-    disk.backend_domid = 0;
-
     libxl_cdrom_insert(ctx, domid, &disk, NULL);
 
     libxl_device_disk_dispose(&disk);
@@ -5516,11 +5506,7 @@ int main_networkattach(int argc, char **argv)
         } else if (MATCH_OPTION("script", *argv, oparg)) {
             replace_string(&nic.script, oparg);
         } else if (MATCH_OPTION("backend", *argv, oparg)) {
-            if(libxl_name_to_domid(ctx, oparg, &val)) {
-                fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
-                val = 0;
-            }
-            nic.backend_domid = val;
+            replace_string(&nic.backend_domname, oparg);
         } else if (MATCH_OPTION("vifname", *argv, oparg)) {
             replace_string(&nic.ifname, oparg);
         } else if (MATCH_OPTION("model", *argv, oparg)) {
@@ -5623,8 +5609,8 @@ int main_networkdetach(int argc, char **argv)
 int main_blockattach(int argc, char **argv)
 {
     int opt;
-    uint32_t fe_domid, be_domid = 0;
-    libxl_device_disk disk = { 0 };
+    uint32_t fe_domid;
+    libxl_device_disk disk;
     XLU_Config *config = 0;
 
     if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
@@ -5639,8 +5625,6 @@ int main_blockattach(int argc, char **argv)
     parse_disk_config_multistring
         (&config, argc-optind, (const char* const*)argv + optind, &disk);
 
-    disk.backend_domid = be_domid;
-
     if (dryrun_only) {
         char *json = libxl_device_disk_to_json(ctx, &disk);
         printf("disk: %s\n", json);
@@ -5720,7 +5704,6 @@ int main_vtpmattach(int argc, char **argv)
     int opt;
     libxl_device_vtpm vtpm;
     char *oparg;
-    unsigned int val;
     uint32_t domid;
 
     if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
@@ -5740,12 +5723,7 @@ int main_vtpmattach(int argc, char **argv)
                 return 1;
             }
         } else if (MATCH_OPTION("backend", *argv, oparg)) {
-            if(domain_qualifier_to_domid(oparg, &val, 0)) {
-                fprintf(stderr,
-                      "Specified backend domain does not exist, defaulting to Dom0\n");
-                val = 0;
-            }
-            vtpm.backend_domid = val;
+            replace_string(&vtpm.backend_domname, oparg);
         } else {
             fprintf(stderr, "unrecognized argument `%s'\n", *argv);
             return 1;
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 15:20:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 15:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkcUJ-0007Zf-Rt; Mon, 17 Dec 2012 15:20:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkcUI-0007Za-UK
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 15:20:15 +0000
Received: from [85.158.143.99:43898] by server-2.bemta-4.messagelabs.com id
	AB/5F-30861-E283FC05; Mon, 17 Dec 2012 15:20:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355757601!29714611!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTAxODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6793 invoked from network); 17 Dec 2012 15:20:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 15:20:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,302,1355097600"; 
   d="scan'208";a="962020"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	17 Dec 2012 15:19:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 17 Dec 2012 10:19:41 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkcTl-0003Kc-Hx;
	Mon, 17 Dec 2012 15:19:41 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Dec 2012 15:19:41 +0000
Message-ID: <1355757581-11845-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen: arm: Call init_xen_time earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If we panic before calling init_xen_time then the "Rebooting in 5
seconds" delay ends up calling udelay, which uses cntfrq before it has
been initialised, resulting in a divide by zero.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/setup.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..7b0a0f6 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -219,6 +219,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     console_init_preirq();
 #endif
 
+    init_xen_time();
+
     gic_init();
     make_cpus_ready(cpus, boot_phys_offset);
 
@@ -227,8 +229,6 @@ void __init start_xen(unsigned long boot_phys_offset,
     set_current((struct vcpu *)0xfffff000); /* debug sanity */
     idle_vcpu[0] = current;
 
-    init_xen_time();
-
     setup_mm(atag_paddr, fdt_size);
 
     /* Setup Hyp vector base */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 15:20:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 15:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkcUJ-0007Zf-Rt; Mon, 17 Dec 2012 15:20:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkcUI-0007Za-UK
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 15:20:15 +0000
Received: from [85.158.143.99:43898] by server-2.bemta-4.messagelabs.com id
	AB/5F-30861-E283FC05; Mon, 17 Dec 2012 15:20:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355757601!29714611!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTAxODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6793 invoked from network); 17 Dec 2012 15:20:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 15:20:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,302,1355097600"; 
   d="scan'208";a="962020"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	17 Dec 2012 15:19:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 17 Dec 2012 10:19:41 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkcTl-0003Kc-Hx;
	Mon, 17 Dec 2012 15:19:41 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Dec 2012 15:19:41 +0000
Message-ID: <1355757581-11845-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen: arm: Call init_xen_time earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If we panic before calling init_xen_time then the "Rebooting in 5
seconds" delay ends up calling udelay, which uses cntfrq before it has
been initialised, resulting in a divide by zero.
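
[Editorial illustration, not part of the original patch and not the actual
Xen source: the delay path converts timer ticks using the counter
frequency, so a still-zero cntfrq turns the conversion into an integer
divide by zero. A minimal sketch of that relationship:]

```c
#include <stdint.h>

/* Sketch only. cntfrq stands in for the counter frequency that
 * init_xen_time() reads from the CNTFRQ register; it stays zero
 * until that initialisation has run. */
static uint32_t cntfrq;

/* Convert timer ticks to nanoseconds: ns = ticks * 1e9 / cntfrq.
 * If cntfrq is still zero, this is an integer divide by zero. */
static uint64_t ticks_to_ns(uint64_t ticks)
{
    return ticks * 1000000000ULL / cntfrq;
}
```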

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/setup.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..7b0a0f6 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -219,6 +219,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     console_init_preirq();
 #endif
 
+    init_xen_time();
+
     gic_init();
     make_cpus_ready(cpus, boot_phys_offset);
 
@@ -227,8 +229,6 @@ void __init start_xen(unsigned long boot_phys_offset,
     set_current((struct vcpu *)0xfffff000); /* debug sanity */
     idle_vcpu[0] = current;
 
-    init_xen_time();
-
     setup_mm(atag_paddr, fdt_size);
 
     /* Setup Hyp vector base */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 15:28:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 15:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkccP-0007kZ-T6; Mon, 17 Dec 2012 15:28:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkccO-0007kU-8r
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 15:28:36 +0000
Received: from [85.158.143.35:33282] by server-2.bemta-4.messagelabs.com id
	7A/BA-30861-32A3FC05; Mon, 17 Dec 2012 15:28:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355758111!14464976!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8692 invoked from network); 17 Dec 2012 15:28:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 15:28:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 15:28:31 +0000
Message-Id: <50CF482C02000078000B0C7A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 15:28:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121214120836.6ec4ad4a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212171210100.17523@kaball.uk.xensource.com>
	<50CF24DA02000078000B0BC4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212171413020.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212171413020.17523@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 15:16, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Mon, 17 Dec 2012, Jan Beulich wrote:
>> >>> On 17.12.12 at 13:42, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Fri, 14 Dec 2012, Mukesh Rathor wrote:
>> >> On Thu, 13 Dec 2012 14:25:16 +0000
>> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> >> 
>> >> > On Thu, 13 Dec 2012, Jan Beulich wrote:
>> >> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
>> >> > > >>> <stefano.stabellini@eu.citrix.com>
>> >> > > wrote:
>> >> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
>> >> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
>> >> > > >> >>> <mukesh.rathor@oracle.com> wrote:
>> >> > 
>> >> > Actually I think that you might be right: just looking at the code it
>> >> > seems that the mask bits get written to the table once as part of the
>> >> > initialization process:
>> >> > 
>> >> > pci_enable_msix -> msix_capability_init -> msix_program_entries
>> >> > 
>> >> > Unfortunately msix_program_entries is called a few lines after
>> >> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
>> >> > as a pirq.
>> >> > However, after that is done, all the masking/unmasking is done via
>> >> > irq_mask, which we handle properly by masking/unmasking the
>> >> > corresponding event channels.
>> >> > 
>> >> > 
>> >> > Possible solutions off the top of my head:
>> >> > 
>> >> > - in msix_program_entries instead of writing to the table directly
>> >> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); 
>> >> > 
>> >> > - replace msix_program_entries with arch_msix_program_entries, but it
>> >> > would probably be unpopular.
>> >> 
>> >> 
>> >> Can you or Jan or somebody please take that over? I can focus on other
>> >> PVH things then and try to get a patch in asap.
>> > 
>> > The following patch moves the MSI-X masking before arch_setup_msi_irqs,
>> > which is when Linux calls PHYSDEVOP_map_pirq, the hypercall that
>> > causes Xen to execute msix_capability_init.
>> 
>> And in what way would that help?
> 
> I was working under the assumption that before the call to
> msix_capability_init (in particular before
> rangeset_add_range(mmio_ro_ranges, dev->msix_table...)) in Xen, the table
> is actually writeable by the guest.
> 
> If that is the case, then this scheme should work.
> If it is not the case, this patch is wrong.

The question is not if or when the table is writable - the Dom0
kernel should _never_ try to write to the mask bits.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:36:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkdfz-00005j-Jt; Mon, 17 Dec 2012 16:36:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tkdfx-00005e-Pr
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:36:21 +0000
Received: from [85.158.143.99:45004] by server-3.bemta-4.messagelabs.com id
	94/64-18211-50A4FC05; Mon, 17 Dec 2012 16:36:21 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355762180!29844361!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12911 invoked from network); 17 Dec 2012 16:36:20 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-15.tower-216.messagelabs.com with SMTP;
	17 Dec 2012 16:36:20 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa7ki000279; Mon, 17 Dec 2012 16:36:07 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id B39BAC2B12; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:38 +0000
Message-Id: <1355762141-29616-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, Will Deacon <will.deacon@arm.com>,
	xen-devel@lists.xen.org, robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 3/6] ARM: psci: add devicetree binding for
	describing PSCI firmware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a new devicetree binding for describing PSCI firmware
to Linux.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 Documentation/devicetree/bindings/arm/psci.txt | 58 ++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/psci.txt

diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
new file mode 100644
index 0000000..904d0d3
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/psci.txt
@@ -0,0 +1,58 @@
+* Power State Coordination Interface (PSCI)
+
+Firmware implementing the PSCI functions described in ARM document number
+ARM DEN 0022A ("Power State Coordination Interface System Software on ARM
+processors") can be used by Linux to initiate various CPU-centric power
+operations.
+
+Issue A of the specification describes functions for CPU suspend, hotplug
+and migration of secure software.
+
+Functions are invoked by trapping to the privilege level of the PSCI
+firmware (specified as part of the binding below) and passing arguments
+in a manner similar to that specified by AAPCS:
+
+	 r0		=> 32-bit Function ID / return value
+	{r1 - r3}	=> Parameters
+
+Note that the immediate field of the trapping instruction must be set
+to #0.
+
+
+Main node required properties:
+
+ - compatible    : Must be "arm,psci"
+
+ - method        : The method of calling the PSCI firmware. Permitted
+                   values are:
+
+                   "smc" : SMC #0, with the register assignments specified
+		           in this binding.
+
+                   "hvc" : HVC #0, with the register assignments specified
+		           in this binding.
+
+ - function-base : The base ID from which the functions are offset.
+
+Main node optional properties:
+
+ - cpu_suspend   : Offset of CPU_SUSPEND ID from function-base
+
+ - cpu_off       : Offset of CPU_OFF ID from function-base
+
+ - cpu_on        : Offset of CPU_ON ID from function-base
+
+ - migrate       : Offset of MIGRATE ID from function-base
+
+
+Example:
+
+	psci {
+		compatible	= "arm,psci";
+		method		= "smc";
+		function-base	= <0x95c1ba5e>;
+		cpu_suspend	= <0>;
+		cpu_off		= <1>;
+		cpu_on		= <2>;
+		migrate		= <3>;
+	};
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:36:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkdg7-00006f-OL; Mon, 17 Dec 2012 16:36:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tkdg5-00006D-Sx
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:36:30 +0000
Received: from [85.158.138.51:12337] by server-4.bemta-3.messagelabs.com id
	39/4F-31835-80A4FC05; Mon, 17 Dec 2012 16:36:24 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355762183!25042724!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6127 invoked from network); 17 Dec 2012 16:36:23 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-10.tower-174.messagelabs.com with SMTP;
	17 Dec 2012 16:36:23 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa7ki000275; Mon, 17 Dec 2012 16:36:07 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 7F0F5C0336; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:36 +0000
Message-Id: <1355762141-29616-2-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, Will Deacon <will.deacon@arm.com>,
	xen-devel@lists.xen.org, robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 1/6] ARM: opcodes: add missing include of
	linux/linkage.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

opcodes.h wants to declare an asmlinkage function, so we need to include
linux/linkage.h.

Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/opcodes.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 74e211a..e796c59 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -10,6 +10,7 @@
 #define __ASM_ARM_OPCODES_H
 
 #ifndef __ASSEMBLY__
+#include <linux/linkage.h>
 extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 #endif
 
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:36:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkdg5-000068-08; Mon, 17 Dec 2012 16:36:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tkdg3-00005s-SL
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:36:28 +0000
Received: from [85.158.137.99:53520] by server-12.bemta-3.messagelabs.com id
	FD/E1-27559-60A4FC05; Mon, 17 Dec 2012 16:36:22 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355762181!17444139!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17249 invoked from network); 17 Dec 2012 16:36:22 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-15.tower-217.messagelabs.com with SMTP;
	17 Dec 2012 16:36:22 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa7ki000277; Mon, 17 Dec 2012 16:36:07 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 99289C2A9E; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:37 +0000
Message-Id: <1355762141-29616-3-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, Will Deacon <will.deacon@arm.com>,
	xen-devel@lists.xen.org, robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 2/6] ARM: opcodes: add opcodes definitions
	for ARM security extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The ARM security extensions introduced the smc instruction, which is not
supported by all versions of the GNU assembler (GAS).

This patch introduces opcodes-sec.h, so that smc is made available in a
similar manner to hvc.

Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/opcodes-sec.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 arch/arm/include/asm/opcodes-sec.h

diff --git a/arch/arm/include/asm/opcodes-sec.h b/arch/arm/include/asm/opcodes-sec.h
new file mode 100644
index 0000000..bc3a917
--- /dev/null
+++ b/arch/arm/include/asm/opcodes-sec.h
@@ -0,0 +1,24 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ */
+
+#ifndef __ASM_ARM_OPCODES_SEC_H
+#define __ASM_ARM_OPCODES_SEC_H
+
+#include <asm/opcodes.h>
+
+#define __SMC(imm4) __inst_arm_thumb32(					\
+	0xE1600070 | (((imm4) & 0xF) << 0),				\
+	0xF7F08000 | (((imm4) & 0xF) << 16)				\
+)
+
+#endif /* __ASM_ARM_OPCODES_SEC_H */
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:36:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkdgH-00008Y-5F; Mon, 17 Dec 2012 16:36:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1TkdgF-000085-FH
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:36:39 +0000
Received: from [85.158.143.35:15349] by server-1.bemta-4.messagelabs.com id
	F0/1F-28401-61A4FC05; Mon, 17 Dec 2012 16:36:38 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355762187!5772283!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18758 invoked from network); 17 Dec 2012 16:36:27 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-4.tower-21.messagelabs.com with SMTP;
	17 Dec 2012 16:36:27 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa7ki000281; Mon, 17 Dec 2012 16:36:07 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id CC14DC2B13; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:39 +0000
Message-Id: <1355762141-29616-5-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, Will Deacon <will.deacon@arm.com>,
	xen-devel@lists.xen.org, robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 4/6] ARM: psci: add support for PSCI
	invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the Power State Coordination Interface
defined by ARM, allowing Linux to request CPU-centric power-management
operations from firmware implementing the PSCI protocol.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/Kconfig            |  10 +++
 arch/arm/include/asm/psci.h |  36 ++++++++
 arch/arm/kernel/Makefile    |   1 +
 arch/arm/kernel/psci.c      | 214 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 261 insertions(+)
 create mode 100644 arch/arm/include/asm/psci.h
 create mode 100644 arch/arm/kernel/psci.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e4017ea..2a04375 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1611,6 +1611,16 @@ config HOTPLUG_CPU
 	  Say Y here to experiment with turning CPUs off and on.  CPUs
 	  can be controlled through /sys/devices/system/cpu.
 
+config ARM_PSCI
+	bool "Support for the ARM Power State Coordination Interface (PSCI)"
+	depends on CPU_V7
+	help
+	  Say Y here if you want Linux to communicate with system firmware
+	  implementing the PSCI specification for CPU-centric power
+	  management operations described in ARM document number ARM DEN
+	  0022A ("Power State Coordination Interface System Software on
+	  ARM processors").
+
 config LOCAL_TIMERS
 	bool "Use local timer interrupts"
 	depends on SMP
diff --git a/arch/arm/include/asm/psci.h b/arch/arm/include/asm/psci.h
new file mode 100644
index 0000000..ce0dbe7
--- /dev/null
+++ b/arch/arm/include/asm/psci.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ */
+
+#ifndef __ASM_ARM_PSCI_H
+#define __ASM_ARM_PSCI_H
+
+#define PSCI_POWER_STATE_TYPE_STANDBY		0
+#define PSCI_POWER_STATE_TYPE_POWER_DOWN	1
+
+struct psci_power_state {
+	u16	id;
+	u8	type;
+	u8	affinity_level;
+};
+
+struct psci_operations {
+	int (*cpu_suspend)(struct psci_power_state state,
+			   unsigned long entry_point);
+	int (*cpu_off)(struct psci_power_state state);
+	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
+	int (*migrate)(unsigned long cpuid);
+};
+
+extern struct psci_operations psci_ops;
+
+#endif /* __ASM_ARM_PSCI_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 5bbec7b..5f3338e 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -82,5 +82,6 @@ obj-$(CONFIG_DEBUG_LL)	+= debug.o
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 
 obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
+obj-$(CONFIG_ARM_PSCI)		+= psci.o
 
 extra-y := $(head-y) vmlinux.lds
diff --git a/arch/arm/kernel/psci.c b/arch/arm/kernel/psci.c
new file mode 100644
index 0000000..e7fe909
--- /dev/null
+++ b/arch/arm/kernel/psci.c
@@ -0,0 +1,214 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ */
+
+#define pr_fmt(fmt) "psci: " fmt
+
+#include <linux/init.h>
+#include <linux/of.h>
+
+#include <asm/compiler.h>
+#include <asm/errno.h>
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+#include <asm/psci.h>
+
+struct psci_operations psci_ops;
+
+static int (*invoke_psci_fn)(u32, u32, u32, u32);
+
+enum psci_function {
+	PSCI_FN_CPU_SUSPEND,
+	PSCI_FN_CPU_ON,
+	PSCI_FN_CPU_OFF,
+	PSCI_FN_MIGRATE,
+	PSCI_FN_MAX,
+};
+
+static u32 psci_function_id[PSCI_FN_MAX];
+
+#define PSCI_RET_SUCCESS		0
+#define PSCI_RET_EOPNOTSUPP		-1
+#define PSCI_RET_EINVAL			-2
+#define PSCI_RET_EPERM			-3
+
+static int psci_to_linux_errno(int errno)
+{
+	switch (errno) {
+	case PSCI_RET_SUCCESS:
+		return 0;
+	case PSCI_RET_EOPNOTSUPP:
+		return -EOPNOTSUPP;
+	case PSCI_RET_EINVAL:
+		return -EINVAL;
+	case PSCI_RET_EPERM:
+		return -EPERM;
+	};
+
+	return -EINVAL;
+}
+
+#define PSCI_POWER_STATE_ID_MASK	0xffff
+#define PSCI_POWER_STATE_ID_SHIFT	0
+#define PSCI_POWER_STATE_TYPE_MASK	0x1
+#define PSCI_POWER_STATE_TYPE_SHIFT	16
+#define PSCI_POWER_STATE_AFFL_MASK	0x3
+#define PSCI_POWER_STATE_AFFL_SHIFT	24
+
+static u32 psci_power_state_pack(struct psci_power_state state)
+{
+	return	((state.id & PSCI_POWER_STATE_ID_MASK)
+			<< PSCI_POWER_STATE_ID_SHIFT)	|
+		((state.type & PSCI_POWER_STATE_TYPE_MASK)
+			<< PSCI_POWER_STATE_TYPE_SHIFT)	|
+		((state.affinity_level & PSCI_POWER_STATE_AFFL_MASK)
+			<< PSCI_POWER_STATE_AFFL_SHIFT);
+}
+
+/*
+ * The following two functions are invoked via the invoke_psci_fn pointer
+ * and will not be inlined, allowing us to piggyback on the AAPCS.
+ */
+static int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
+{
+	asm volatile(
+			__asmeq("%0", "r0")
+			__asmeq("%1", "r1")
+			__asmeq("%2", "r2")
+			__asmeq("%3", "r3")
+			__HVC(0)
+		: "+r" (function_id)
+		: "r" (arg0), "r" (arg1), "r" (arg2));
+
+	return function_id;
+}
+
+static int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
+{
+	asm volatile(
+			__asmeq("%0", "r0")
+			__asmeq("%1", "r1")
+			__asmeq("%2", "r2")
+			__asmeq("%3", "r3")
+			__SMC(0)
+		: "+r" (function_id)
+		: "r" (arg0), "r" (arg1), "r" (arg2));
+
+	return function_id;
+}
+
+static int psci_cpu_suspend(struct psci_power_state state,
+			    unsigned long entry_point)
+{
+	int err;
+	u32 fn, power_state;
+
+	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
+	power_state = psci_power_state_pack(state);
+	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_cpu_off(struct psci_power_state state)
+{
+	int err;
+	u32 fn, power_state;
+
+	fn = psci_function_id[PSCI_FN_CPU_OFF];
+	power_state = psci_power_state_pack(state);
+	err = invoke_psci_fn(fn, power_state, 0, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
+{
+	int err;
+	u32 fn;
+
+	fn = psci_function_id[PSCI_FN_CPU_ON];
+	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_migrate(unsigned long cpuid)
+{
+	int err;
+	u32 fn;
+
+	fn = psci_function_id[PSCI_FN_MIGRATE];
+	err = invoke_psci_fn(fn, cpuid, 0, 0);
+	return psci_to_linux_errno(err);
+}
+
+static const struct of_device_id psci_of_match[] __initconst = {
+	{ .compatible = "arm,psci",	},
+	{},
+};
+
+static int __init psci_init(void)
+{
+	struct device_node *np;
+	const char *method;
+	u32 base, id;
+
+	np = of_find_matching_node(NULL, psci_of_match);
+	if (!np)
+		return 0;
+
+	pr_info("probing function IDs from device-tree\n");
+
+	if (of_property_read_u32(np, "function-base", &base)) {
+		pr_warning("missing \"function-base\" property\n");
+		goto out_put_node;
+	}
+
+	if (of_property_read_string(np, "method", &method)) {
+		pr_warning("missing \"method\" property\n");
+		goto out_put_node;
+	}
+
+	if (!strcmp("hvc", method)) {
+		invoke_psci_fn = __invoke_psci_fn_hvc;
+	} else if (!strcmp("smc", method)) {
+		invoke_psci_fn = __invoke_psci_fn_smc;
+	} else {
+		pr_warning("invalid \"method\" property: %s\n", method);
+		goto out_put_node;
+	}
+
+	if (!of_property_read_u32(np, "cpu_suspend", &id)) {
+		psci_function_id[PSCI_FN_CPU_SUSPEND] = base + id;
+		psci_ops.cpu_suspend = psci_cpu_suspend;
+	}
+
+	if (!of_property_read_u32(np, "cpu_off", &id)) {
+		psci_function_id[PSCI_FN_CPU_OFF] = base + id;
+		psci_ops.cpu_off = psci_cpu_off;
+	}
+
+	if (!of_property_read_u32(np, "cpu_on", &id)) {
+		psci_function_id[PSCI_FN_CPU_ON] = base + id;
+		psci_ops.cpu_on = psci_cpu_on;
+	}
+
+	if (!of_property_read_u32(np, "migrate", &id)) {
+		psci_function_id[PSCI_FN_MIGRATE] = base + id;
+		psci_ops.migrate = psci_migrate;
+	}
+
+out_put_node:
+	of_node_put(np);
+	return 0;
+}
+early_initcall(psci_init);
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:36:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkdg5-00006G-CE; Mon, 17 Dec 2012 16:36:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tkdg4-00005u-6U
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:36:28 +0000
Received: from [85.158.139.211:64231] by server-7.bemta-5.messagelabs.com id
	9C/6E-08009-B0A4FC05; Mon, 17 Dec 2012 16:36:27 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355762186!19046335!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11347 invoked from network); 17 Dec 2012 16:36:26 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-8.tower-206.messagelabs.com with SMTP;
	17 Dec 2012 16:36:26 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa8ki000290; Mon, 17 Dec 2012 16:36:08 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 08EC9C0336; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:41 +0000
Message-Id: <1355762141-29616-7-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, Will Deacon <will.deacon@arm.com>,
	xen-devel@lists.xen.org, robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 6/6] ARM: mach-virt: add SMP support using
	PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for SMP to mach-virt using the PSCI
infrastructure.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mach-virt/Kconfig   |  1 +
 arch/arm/mach-virt/Makefile  |  1 +
 arch/arm/mach-virt/platsmp.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/mach-virt/virt.c    |  6 ++++
 4 files changed, 84 insertions(+)
 create mode 100644 arch/arm/mach-virt/platsmp.c

diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
index a568a2a..8958f0d 100644
--- a/arch/arm/mach-virt/Kconfig
+++ b/arch/arm/mach-virt/Kconfig
@@ -3,6 +3,7 @@ config ARCH_VIRT
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select ARM_GIC
 	select ARM_ARCH_TIMER
+	select ARM_PSCI
 	select HAVE_SMP
 	select CPU_V7
 	select SPARSE_IRQ
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
index 7ddbfa6..042afc1 100644
--- a/arch/arm/mach-virt/Makefile
+++ b/arch/arm/mach-virt/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-y					:= virt.o
+obj-$(CONFIG_SMP)			+= platsmp.o
diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
new file mode 100644
index 0000000..930362b
--- /dev/null
+++ b/arch/arm/mach-virt/platsmp.c
@@ -0,0 +1,76 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/of.h>
+
+#include <asm/psci.h>
+#include <asm/smp_plat.h>
+#include <asm/hardware/gic.h>
+
+extern void secondary_startup(void);
+
+/*
+ * Enumerate the possible CPU set from the device tree.
+ */
+static void __init virt_smp_init_cpus(void)
+{
+	struct device_node *dn = NULL;
+	int cpu = 0;
+
+	while ((dn = of_find_node_by_type(dn, "cpu"))) {
+		if (cpu < NR_CPUS)
+			set_cpu_possible(cpu, true);
+		cpu++;
+	}
+
+	/* sanity check */
+	if (cpu > NR_CPUS)
+		pr_warning("no. of cores (%d) greater than configured maximum "
+			   "of %d - clipping\n",
+			   cpu, NR_CPUS);
+
+	set_smp_cross_call(gic_raise_softirq);
+}
+
+static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+static int __cpuinit virt_boot_secondary(unsigned int cpu,
+					 struct task_struct *idle)
+{
+	if (psci_ops.cpu_on)
+		return psci_ops.cpu_on(cpu_logical_map(cpu),
+				       __pa(secondary_startup));
+	return -ENODEV;
+}
+
+static void __cpuinit virt_secondary_init(unsigned int cpu)
+{
+	gic_secondary_init(0);
+}
+
+struct smp_operations __initdata virt_smp_ops = {
+	.smp_init_cpus		= virt_smp_init_cpus,
+	.smp_prepare_cpus	= virt_smp_prepare_cpus,
+	.smp_secondary_init	= virt_secondary_init,
+	.smp_boot_secondary	= virt_boot_secondary,
+};
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
index 174b9da..d764835 100644
--- a/arch/arm/mach-virt/virt.c
+++ b/arch/arm/mach-virt/virt.c
@@ -20,6 +20,7 @@
 
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/smp.h>
 
 #include <asm/arch_timer.h>
 #include <asm/hardware/gic.h>
@@ -56,10 +57,15 @@ static struct sys_timer virt_timer = {
 	.init = virt_timer_init,
 };
 
+#ifdef CONFIG_SMP
+extern struct smp_operations virt_smp_ops;
+#endif
+
 DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
 	.init_irq	= gic_init_irq,
 	.handle_irq     = gic_handle_irq,
 	.timer		= &virt_timer,
 	.init_machine	= virt_init,
+	.smp		= smp_ops(virt_smp_ops),
 	.dt_compat	= virt_dt_match,
 MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:37:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkdgX-0000Dh-IZ; Mon, 17 Dec 2012 16:36:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1TkdgV-0000D0-Nz
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:36:55 +0000
Received: from [85.158.139.83:47571] by server-4.bemta-5.messagelabs.com id
	34/72-14693-62A4FC05; Mon, 17 Dec 2012 16:36:54 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355762184!30271788!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8721 invoked from network); 17 Dec 2012 16:36:24 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-5.tower-182.messagelabs.com with SMTP;
	17 Dec 2012 16:36:24 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa7ki000274; Mon, 17 Dec 2012 16:36:07 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 6B09DC2A9C; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:35 +0000
Message-Id: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, Will Deacon <will.deacon@arm.com>,
	xen-devel@lists.xen.org, robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 0/6] Add support for a fake,
	para-virtualised machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello again,

This is version two of the patches originally posted here:

  http://lists.infradead.org/pipermail/linux-arm-kernel/2012-December/135870.html

Given the lively discussion sparked in that thread, there have been a
fair number of changes since the RFC:

	* A client-side implementation of PSCI and proposed DT binding
	* Use of PSCI for SMP boot
	* Removal of the SMP pen code
	* Dropped the RFC tag

Marc hacked up the KVM side separately and this has been tested
successfully with his code using a magic build of kvmtool.

As usual, all feedback welcome.

Cheers,

Will


Marc Zyngier (1):
  ARM: Dummy Virtual Machine platform support

Will Deacon (5):
  ARM: opcodes: add missing include of linux/linkage.h
  ARM: opcodes: add opcodes definitions for ARM security extensions
  ARM: psci: add devicetree binding for describing PSCI firmware
  ARM: psci: add support for PSCI invocations from the kernel
  ARM: mach-virt: add SMP support using PSCI

 Documentation/devicetree/bindings/arm/psci.txt |  58 +++++++
 arch/arm/Kconfig                               |  12 ++
 arch/arm/Makefile                              |   1 +
 arch/arm/include/asm/opcodes-sec.h             |  24 +++
 arch/arm/include/asm/opcodes.h                 |   1 +
 arch/arm/include/asm/psci.h                    |  36 +++++
 arch/arm/kernel/Makefile                       |   1 +
 arch/arm/kernel/psci.c                         | 214 +++++++++++++++++++++++++
 arch/arm/mach-virt/Kconfig                     |  10 ++
 arch/arm/mach-virt/Makefile                    |   6 +
 arch/arm/mach-virt/platsmp.c                   |  76 +++++++++
 arch/arm/mach-virt/virt.c                      |  71 ++++++++
 12 files changed, 510 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/psci.txt
 create mode 100644 arch/arm/include/asm/opcodes-sec.h
 create mode 100644 arch/arm/include/asm/psci.h
 create mode 100644 arch/arm/kernel/psci.c
 create mode 100644 arch/arm/mach-virt/Kconfig
 create mode 100644 arch/arm/mach-virt/Makefile
 create mode 100644 arch/arm/mach-virt/platsmp.c
 create mode 100644 arch/arm/mach-virt/virt.c

-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:38:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkdhJ-0000Ra-6Y; Mon, 17 Dec 2012 16:37:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1TkdhH-0000R8-Rq
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:37:44 +0000
Received: from [193.109.254.147:2981] by server-9.bemta-14.messagelabs.com id
	E4/27-24482-75A4FC05; Mon, 17 Dec 2012 16:37:43 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355762180!10778270!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8047 invoked from network); 17 Dec 2012 16:36:21 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-4.tower-27.messagelabs.com with SMTP;
	17 Dec 2012 16:36:21 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBHGa7ki000288; Mon, 17 Dec 2012 16:36:08 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id EDEF2C2A9C; Mon, 17 Dec 2012 16:36:05 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Mon, 17 Dec 2012 16:35:40 +0000
Message-Id: <1355762141-29616-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc Zyngier <marc.zyngier@arm.com>,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Marc Zyngier <marc.zyngier@arm.com>

Add support for the smallest, dumbest possible platform, to be
used as a guest for KVM or other hypervisors.

It only mandates a GIC and architected timers. Fits nicely with
a multiplatform zImage. Uses very little silicon area.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/Kconfig            |  2 ++
 arch/arm/Makefile           |  1 +
 arch/arm/mach-virt/Kconfig  |  9 +++++++
 arch/arm/mach-virt/Makefile |  5 ++++
 arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 82 insertions(+)
 create mode 100644 arch/arm/mach-virt/Kconfig
 create mode 100644 arch/arm/mach-virt/Makefile
 create mode 100644 arch/arm/mach-virt/virt.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 2a04375..552d72e0 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1127,6 +1127,8 @@ source "arch/arm/mach-versatile/Kconfig"
 source "arch/arm/mach-vexpress/Kconfig"
 source "arch/arm/plat-versatile/Kconfig"
 
+source "arch/arm/mach-virt/Kconfig"
+
 source "arch/arm/mach-w90x900/Kconfig"
 
 # Definitions to make life easier
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 5f914fc..e8232ad 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -192,6 +192,7 @@ machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
 machine-$(CONFIG_ARCH_SPEAR13XX)	+= spear13xx
 machine-$(CONFIG_ARCH_SPEAR3XX)		+= spear3xx
 machine-$(CONFIG_MACH_SPEAR600)		+= spear6xx
+machine-$(CONFIG_ARCH_VIRT)		+= virt
 machine-$(CONFIG_ARCH_ZYNQ)		+= zynq
 
 # Platform directory name.  This list is sorted alphanumerically
diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
new file mode 100644
index 0000000..a568a2a
--- /dev/null
+++ b/arch/arm/mach-virt/Kconfig
@@ -0,0 +1,9 @@
+config ARCH_VIRT
+	bool "Dummy Virtual Machine" if ARCH_MULTI_V7
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select ARM_GIC
+	select ARM_ARCH_TIMER
+	select HAVE_SMP
+	select CPU_V7
+	select SPARSE_IRQ
+	select USE_OF
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
new file mode 100644
index 0000000..7ddbfa6
--- /dev/null
+++ b/arch/arm/mach-virt/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the linux kernel.
+#
+
+obj-y					:= virt.o
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
new file mode 100644
index 0000000..174b9da
--- /dev/null
+++ b/arch/arm/mach-virt/virt.c
@@ -0,0 +1,65 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Authors: Will Deacon <will.deacon@arm.com>,
+ *          Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+
+#include <asm/arch_timer.h>
+#include <asm/hardware/gic.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/time.h>
+
+const static struct of_device_id irq_match[] = {
+	{ .compatible = "arm,cortex-a15-gic", .data = gic_of_init, },
+	{}
+};
+
+static void __init gic_init_irq(void)
+{
+	of_irq_init(irq_match);
+}
+
+static void __init virt_init(void)
+{
+	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+
+static void __init virt_timer_init(void)
+{
+	WARN_ON(arch_timer_of_register() != 0);
+	WARN_ON(arch_timer_sched_clock_init() != 0);
+}
+
+static const char *virt_dt_match[] = {
+	"linux,dummy-virt",
+	NULL
+};
+
+static struct sys_timer virt_timer = {
+	.init = virt_timer_init,
+};
+
+DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
+	.init_irq	= gic_init_irq,
+	.handle_irq     = gic_handle_irq,
+	.timer		= &virt_timer,
+	.init_machine	= virt_init,
+	.dt_compat	= virt_dt_match,
+MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

@@ -1127,6 +1127,8 @@ source "arch/arm/mach-versatile/Kconfig"
 source "arch/arm/mach-vexpress/Kconfig"
 source "arch/arm/plat-versatile/Kconfig"
 
+source "arch/arm/mach-virt/Kconfig"
+
 source "arch/arm/mach-w90x900/Kconfig"
 
 # Definitions to make life easier
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 5f914fc..e8232ad 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -192,6 +192,7 @@ machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
 machine-$(CONFIG_ARCH_SPEAR13XX)	+= spear13xx
 machine-$(CONFIG_ARCH_SPEAR3XX)		+= spear3xx
 machine-$(CONFIG_MACH_SPEAR600)		+= spear6xx
+machine-$(CONFIG_ARCH_VIRT)		+= virt
 machine-$(CONFIG_ARCH_ZYNQ)		+= zynq
 
 # Platform directory name.  This list is sorted alphanumerically
diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
new file mode 100644
index 0000000..a568a2a
--- /dev/null
+++ b/arch/arm/mach-virt/Kconfig
@@ -0,0 +1,9 @@
+config ARCH_VIRT
+	bool "Dummy Virtual Machine" if ARCH_MULTI_V7
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select ARM_GIC
+	select ARM_ARCH_TIMER
+	select HAVE_SMP
+	select CPU_V7
+	select SPARSE_IRQ
+	select USE_OF
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
new file mode 100644
index 0000000..7ddbfa6
--- /dev/null
+++ b/arch/arm/mach-virt/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the linux kernel.
+#
+
+obj-y					:= virt.o
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
new file mode 100644
index 0000000..174b9da
--- /dev/null
+++ b/arch/arm/mach-virt/virt.c
@@ -0,0 +1,65 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Authors: Will Deacon <will.deacon@arm.com>,
+ *          Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+
+#include <asm/arch_timer.h>
+#include <asm/hardware/gic.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/time.h>
+
+static const struct of_device_id irq_match[] = {
+	{ .compatible = "arm,cortex-a15-gic", .data = gic_of_init, },
+	{}
+};
+
+static void __init gic_init_irq(void)
+{
+	of_irq_init(irq_match);
+}
+
+static void __init virt_init(void)
+{
+	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+
+static void __init virt_timer_init(void)
+{
+	WARN_ON(arch_timer_of_register() != 0);
+	WARN_ON(arch_timer_sched_clock_init() != 0);
+}
+
+static const char *virt_dt_match[] = {
+	"linux,dummy-virt",
+	NULL
+};
+
+static struct sys_timer virt_timer = {
+	.init = virt_timer_init,
+};
+
+DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
+	.init_irq	= gic_init_irq,
+	.handle_irq     = gic_handle_irq,
+	.timer		= &virt_timer,
+	.init_machine	= virt_init,
+	.dt_compat	= virt_dt_match,
+MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 16:48:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkdrL-00018A-FK; Mon, 17 Dec 2012 16:48:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kiviniemi.valtteri@gmail.com>) id 1TkdrL-000185-0I
	for Xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:48:07 +0000
Received: from [85.158.139.83:62955] by server-3.bemta-5.messagelabs.com id
	C1/12-25441-5CC4FC05; Mon, 17 Dec 2012 16:48:05 +0000
X-Env-Sender: kiviniemi.valtteri@gmail.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355762884!27518509!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8402 invoked from network); 17 Dec 2012 16:48:05 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 16:48:05 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so5526154iac.32
	for <Xen-devel@lists.xen.org>; Mon, 17 Dec 2012 08:48:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Z3qLBWOS0EjqZKSSiJE55nCJa81wfIdZVHMO7Evo+Xo=;
	b=1CmIBjsXaWIdYSNo36ElyrlOLh9Fjo/rCJn5c5o7+31SpIEtflOKZY+ezjhz0N4lPB
	eBvu4Rkv0vHFqiqzyF7lrQtK3FMuxdaYeSG44fhVaoYhWUluLfjeC4JNVI3wS5uV46Ty
	Q4Y7nlEYO1kztNBInwaCV73DeIWnPxU66j0xEtQeTBS9WoKZsYPU2VBPJDSP6kY+utO6
	wRARvGk1xXsHsfcPGq6dKmvozNiaqAIoKZjc+RPo4DSpcf7yO4OcQromXVULjauvbXKl
	wuNG5AKycLRmj2FZjLs6uAmKp71Q634RolERFVI72wctKeAoBPHcsL4TQl2QydMSnueT
	vJkw==
MIME-Version: 1.0
Received: by 10.50.57.234 with SMTP id l10mr9829514igq.18.1355762883905; Mon,
	17 Dec 2012 08:48:03 -0800 (PST)
Received: by 10.231.235.7 with HTTP; Mon, 17 Dec 2012 08:48:03 -0800 (PST)
Date: Mon, 17 Dec 2012 18:48:03 +0200
Message-ID: <CAN=sCCFdBOKjjyqhr8Ae+cJseeTjMM41_xPC9__LtmVWsmR=qw@mail.gmail.com>
From: Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com>
To: Xen-devel@lists.xen.org
Subject: [Xen-devel] (XEN) traps.c:3156: GPF messages in xen dmesg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3606997001624161410=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3606997001624161410==
Content-Type: multipart/alternative; boundary=14dae93411171849a504d10f25a2

--14dae93411171849a504d10f25a2
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I'm running Xen 4.2.0 with Linux kernel 3.7.0 and I'm seeing a flood of
these messages in xen dmesg:

(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4

What does that mean and is it something that I should worry about?

--
Valtteri Kiviniemi

--14dae93411171849a504d10f25a2
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi,<br><br>I&#39;m running Xen 4.2.0 with Linux kernel 3.7.0 and I&#39;m se=
eing a flood of these messages in xen dmesg:<br><br>(XEN) traps.c:3156: GPF=
 (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>(XEN) traps.c:3156: GPF=
 (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>=
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>=
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>=
(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -&gt; ffff82c4802170e4<br>=
<br>What does that mean and is it something that I should worry about?<br>
<br>--<br>Valtteri Kiviniemi<br>

--14dae93411171849a504d10f25a2--


--===============3606997001624161410==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3606997001624161410==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 16:57:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 16:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tke0R-0001JH-HL; Mon, 17 Dec 2012 16:57:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tke0Q-0001JC-3O
	for Xen-devel@lists.xen.org; Mon, 17 Dec 2012 16:57:30 +0000
Received: from [85.158.138.51:34240] by server-13.bemta-3.messagelabs.com id
	B0/87-00465-9FE4FC05; Mon, 17 Dec 2012 16:57:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355763447!22925321!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4894 invoked from network); 17 Dec 2012 16:57:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 16:57:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 16:57:26 +0000
Message-Id: <50CF5D0402000078000B0D17@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 16:57:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Valtteri Kiviniemi" <kiviniemi.valtteri@gmail.com>
References: <CAN=sCCFdBOKjjyqhr8Ae+cJseeTjMM41_xPC9__LtmVWsmR=qw@mail.gmail.com>
In-Reply-To: <CAN=sCCFdBOKjjyqhr8Ae+cJseeTjMM41_xPC9__LtmVWsmR=qw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (XEN) traps.c:3156: GPF messages in xen dmesg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 17:48, Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com> wrote:
> I'm running Xen 4.2.0 with Linux kernel 3.7.0 and I'm seeing a flood of
> these messages in xen dmesg:
> 
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
> 
> What does that mean and is it something that I should worry about?

It means that the hypervisor recovered from #GP faults several
times. What exactly it was that faulted you'd need to look up by
translating the addresses above and resolving them to source
locations. That'll also tell you whether you ought to be worried.

A common example of these happening is Xen carrying out MSR
accesses on behalf of the kernel, when the MSR actually isn't
implemented (i.e. the kernel itself also is prepared to handle
faults upon accessing them).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 17:02:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tke55-0001Tx-8w; Mon, 17 Dec 2012 17:02:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tke53-0001Ts-7V
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:02:17 +0000
Received: from [85.158.139.211:47710] by server-9.bemta-5.messagelabs.com id
	DC/F3-10690-8105FC05; Mon, 17 Dec 2012 17:02:16 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355763735!20904441!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3445 invoked from network); 17 Dec 2012 17:02:15 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-16.tower-206.messagelabs.com with SMTP;
	17 Dec 2012 17:02:15 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 9039FC5618F;
	Mon, 17 Dec 2012 17:02:03 +0000 (GMT)
Date: Mon, 17 Dec 2012 17:02:02 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <ED85BD1013C109A90B23BDE6@Ximines.local>
In-Reply-To: <1355745712.14620.59.camel@zakaz.uk.xensource.com>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>	
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>	
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>	
	<95EB29C55F836E775B70A9F5@nimrod.local>
	<1355745712.14620.59.camel@zakaz.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Alex Bligh <alex@alex.org.uk>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

--On 17 December 2012 12:01:52 +0000 Ian Campbell <Ian.Campbell@citrix.com> wrote:

> It's definitely at least a bug in the documentation -- this isn't a
> feature which was forgotten about, it was explicitly decided hadn't met
> the 4.2 deadline and wasn't going to be ready in time to wait for. This
> should have been documented and in the release notes etc, sorry.
>
> We did at least manage to tag it tech preview in
> http://wiki.xen.org/wiki/Xen_Release_Features which implies that it is
> not yet fully formed.

It is indeed tagged as 'tech preview'.

But given that:
a) HVM is hardly uncommon
b) live-migrate is pretty essential
c) qemu-xen device model is the default next time around and the /only/
   way you can achieve certain things (e.g. rebaseable snapshots)
d) the code is already written (at least for -unstable) and the patch is
   smallish (particularly if the JSON stuff is ignored)

... I'd suggest a backport to 4.2.x is a reasonable request. I've had a
first go at the libxl part and Stefano got there first and volunteered
to do the qemu bit (if he doesn't have time I can have a go at that too).

If necessary we can guard the code either with a compile time switch or
(as the wiki page suggested to me) a hypervisor command line switch like
TMEM and AVX, which if it's not enabled would just error the call in the
same way libxl does at the moment.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 17:02:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tke55-0001Tx-8w; Mon, 17 Dec 2012 17:02:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tke53-0001Ts-7V
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:02:17 +0000
Received: from [85.158.139.211:47710] by server-9.bemta-5.messagelabs.com id
	DC/F3-10690-8105FC05; Mon, 17 Dec 2012 17:02:16 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355763735!20904441!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3445 invoked from network); 17 Dec 2012 17:02:15 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-16.tower-206.messagelabs.com with SMTP;
	17 Dec 2012 17:02:15 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 9039FC5618F;
	Mon, 17 Dec 2012 17:02:03 +0000 (GMT)
Date: Mon, 17 Dec 2012 17:02:02 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <ED85BD1013C109A90B23BDE6@Ximines.local>
In-Reply-To: <1355745712.14620.59.camel@zakaz.uk.xensource.com>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>	
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>	
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>	
	<95EB29C55F836E775B70A9F5@nimrod.local>
	<1355745712.14620.59.camel@zakaz.uk.xensource.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Alex Bligh <alex@alex.org.uk>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

--On 17 December 2012 12:01:52 +0000 Ian Campbell <Ian.Campbell@citrix.com> wrote:

> It's definitely at least a bug in the documentation -- this isn't a
> feature which was forgotten about, it was explicitly decided hadn't met
> the 4.2 deadline and wasn't going to be ready in time to wait for. This
> should have been documented and in the release notes etc, sorry.
>
> We did at least manage to tag it tech preview in
> http://wiki.xen.org/wiki/Xen_Release_Features which implies that it is
> not yet fully formed.

It is indeed tagged as 'tech preview'.

But given that:
a) HVM is hardly uncommon
b) live-migrate is pretty essential
c) qemu-xen device model is the default next time around and the /only/
   way you can achieve certain things (e.g. rebaseable snapshots)
d) the code is already written (at least for -unstable) and the patch is
   smallish (particularly if the JSON stuff is ignored)

... I'd suggest a backport to 4.2.x is a reasonable request. I've had a
first go at the libxl part, and Stefano got there first and volunteered
to do the qemu bit (if he doesn't have time I can have a go at that too).

If necessary we can guard the code either with a compile-time switch or
(as the wiki page suggested to me) a hypervisor command-line switch like
TMEM and AVX; if the switch is not enabled, the call would simply fail in
the same way libxl fails at the moment.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 17:07:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkeAE-0001ey-1s; Mon, 17 Dec 2012 17:07:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkeAC-0001er-91
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:07:36 +0000
Received: from [85.158.138.51:39794] by server-14.bemta-3.messagelabs.com id
	5C/B4-27443-7515FC05; Mon, 17 Dec 2012 17:07:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355764054!29215407!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32640 invoked from network); 17 Dec 2012 17:07:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 17:07:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 17:07:34 +0000
Message-Id: <50CF5F6402000078000B0D33@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 17:07:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xiantao.zhang@intel.com, Donald D Dugger <donald.d.dugger@intel.com>
Subject: [Xen-devel] [PATCH] add maintainers entry for vendor-independent
 IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As agreed to last week.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -184,6 +184,15 @@ F:	xen/arch/x86/hvm/vmx/
 F:	xen/arch/x86/mm/hap/p2m-ept.c
 F:	xen/include/asm-x86/hvm/vmx/
 
+IOMMU VENDOR INDEPENDENT CODE
+M:	Xiantao Zhang <xiantao.zhang@intel.com>
+M:	Jan Beulich <jbeulich@suse.com>
+S:	Supported
+F:	xen/drivers/passthrough/
+X:	xen/drivers/passthrough/amd/
+X:	xen/drivers/passthrough/vtd/
+F:	xen/include/xen/iommu.h
+
 LINUX (PV_OPS)
 M:	Jeremy Fitzhardinge <jeremy@goop.org>
 S:	Supported




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 17:09:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:09:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkeBq-0001kG-JK; Mon, 17 Dec 2012 17:09:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkeBp-0001k8-ST
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:09:18 +0000
Received: from [85.158.137.99:16738] by server-12.bemta-3.messagelabs.com id
	42/C1-27559-7B15FC05; Mon, 17 Dec 2012 17:09:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355764121!19721644!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10533 invoked from network); 17 Dec 2012 17:08:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Dec 2012 17:08:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Dec 2012 17:08:40 +0000
Message-Id: <50CF5FA602000078000B0D5E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Mon, 17 Dec 2012 17:08:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2011B286.1__="
Cc: Andre Przywara <osp@andrep.de>
Subject: [Xen-devel] [PATCH] x86,
	amd: Disable way access filter on Piledriver CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2011B286.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

The Way Access Filter in recent AMD CPUs may hurt the performance of
some workloads, caused by aliasing issues in the L1 cache.
This patch disables it on the affected CPUs.

The issue is similar to the one from last year:
http://lkml.indiana.edu/hypermail/linux/kernel/1107.3/00041.html
This new patch does not replace the old one; we just need another
quirk for newer CPUs.

The performance penalty without the patch depends on the
circumstances, but is a bit less than last year's 3%.

The workloads affected would be those that access code from the same
physical page under different virtual addresses, so different
processes using the same libraries with ASLR or multiple instances of
PIE-binaries. The code needs to be accessed simultaneously from both
cores of the same compute unit.

More details can be found here:
http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf

CPUs affected are anything with the core known as Piledriver.
That includes the new parts of the AMD A-Series (aka Trinity) and the
just released new CPUs of the FX-Series (aka Vishera).
The model numbering is a bit odd here: FX CPUs have model 2,
A-Series has model 10h, with possible extensions to 1Fh. Hence the
range of model ids.

Signed-off-by: Andre Przywara <osp@andrep.de>

Add and use MSR_AMD64_IC_CFG. Update the value whenever it is found to
not have all bits set, rather than just when it's zero.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -448,6 +448,14 @@ static void __devinit init_amd(struct cp
 		}
 	}
 
+	/*
+	 * The way access filter has a performance penalty on some workloads.
+	 * Disable it on the affected CPUs.
+	 */
+	if (c->x86 == 0x15 && c->x86_model >= 0x02 && c->x86_model < 0x20 &&
+	    !rdmsr_safe(MSR_AMD64_IC_CFG, value) && (value & 0x1e) != 0x1e)
+		wrmsr_safe(MSR_AMD64_IC_CFG, value | 0x1e);
+
         amd_get_topology(c);
 
 	/* Pointless to use MWAIT on Family10 as it does not deep sleep. */
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -211,6 +211,7 @@
 
 /* AMD64 MSRs */
 #define MSR_AMD64_NB_CFG		0xc001001f
+#define MSR_AMD64_IC_CFG		0xc0011021
 #define MSR_AMD64_DC_CFG		0xc0011022
 #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
 




--=__Part2011B286.1__=
Content-Type: text/plain; name="x86-AMD-Fam15-way-access-filter.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-AMD-Fam15-way-access-filter.patch"

x86, amd: Disable way access filter on Piledriver CPUs

The Way Access Filter in recent AMD CPUs may hurt the performance of
some workloads, caused by aliasing issues in the L1 cache.
This patch disables it on the affected CPUs.

The issue is similar to the one from last year:
http://lkml.indiana.edu/hypermail/linux/kernel/1107.3/00041.html
This new patch does not replace the old one; we just need another
quirk for newer CPUs.

The performance penalty without the patch depends on the
circumstances, but is a bit less than last year's 3%.

The workloads affected would be those that access code from the same
physical page under different virtual addresses, so different
processes using the same libraries with ASLR or multiple instances of
PIE-binaries. The code needs to be accessed simultaneously from both
cores of the same compute unit.

More details can be found here:
http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf

CPUs affected are anything with the core known as Piledriver.
That includes the new parts of the AMD A-Series (aka Trinity) and the
just released new CPUs of the FX-Series (aka Vishera).
The model numbering is a bit odd here: FX CPUs have model 2,
A-Series has model 10h, with possible extensions to 1Fh. Hence the
range of model ids.

Signed-off-by: Andre Przywara <osp@andrep.de>

Add and use MSR_AMD64_IC_CFG. Update the value whenever it is found to
not have all bits set, rather than just when it's zero.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -448,6 +448,14 @@ static void __devinit init_amd(struct cp
 		}
 	}
 
+	/*
+	 * The way access filter has a performance penalty on some workloads.
+	 * Disable it on the affected CPUs.
+	 */
+	if (c->x86 == 0x15 && c->x86_model >= 0x02 && c->x86_model < 0x20 &&
+	    !rdmsr_safe(MSR_AMD64_IC_CFG, value) && (value & 0x1e) != 0x1e)
+		wrmsr_safe(MSR_AMD64_IC_CFG, value | 0x1e);
+
         amd_get_topology(c);
 
 	/* Pointless to use MWAIT on Family10 as it does not deep sleep. */
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -211,6 +211,7 @@
 
 /* AMD64 MSRs */
 #define MSR_AMD64_NB_CFG		0xc001001f
+#define MSR_AMD64_IC_CFG		0xc0011021
 #define MSR_AMD64_DC_CFG		0xc0011022
 #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
 
--=__Part2011B286.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2011B286.1__=--


From xen-devel-bounces@lists.xen.org Mon Dec 17 17:09:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkeC6-0001ll-0e; Mon, 17 Dec 2012 17:09:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TkeC4-0001lW-CB
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:09:32 +0000
Received: from [85.158.143.35:48778] by server-3.bemta-4.messagelabs.com id
	35/FC-18211-BC15FC05; Mon, 17 Dec 2012 17:09:31 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355764164!11634407!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13279 invoked from network); 17 Dec 2012 17:09:29 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-7.tower-21.messagelabs.com with SMTP;
	17 Dec 2012 17:09:29 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 5F70DC5618E;
	Mon, 17 Dec 2012 17:09:11 +0000 (GMT)
Date: Mon, 17 Dec 2012 17:09:10 +0000
From: Alex Bligh <alex@alex.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <D770D2ADD513518F04AB2AEB@Ximines.local>
In-Reply-To: <50CEFDA602000078000B0B11@nat28.tlf.novell.com>
References: <5B4525F296F6ABEB38B0E614@nimrod.local>
	<50CEFDA602000078000B0B11@nat28.tlf.novell.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Fatal crash on xen4.2 HVM + qemu-xen dm + NFS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan,

--On 17 December 2012 10:10:30 +0000 Jan Beulich <JBeulich@suse.com> wrote:

>>>> On 14.12.12 at 15:54, Alex Bligh <alex@alex.org.uk> wrote:
>> [ 1416.992402] BUG: unable to handle kernel paging request at ffff88073fee6e00
>
> Assuming the address above is valid in the first place (i.e. you
> have at least 32G installed), this very much suggests access to
> a ballooned out page. Could you therefore suppress the use of
> ballooning for debugging purposes, and see whether the issue
> goes away then?

Thanks. I believe that dump was on a 64G box (though it dies happily on lots
of hardware).

I believe we have dom0_mem configured already. Is setting autoballoon=0 in
xl.conf sufficient? (we don't use xend).
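[For the archive: a sketch of the two knobs under discussion. Values are
illustrative only, and the exact syntax should be checked against the
xl.conf(5) and Xen command-line documentation for the release in use.]

```text
# /etc/xen/xl.conf -- stop xl from ballooning dom0 down when starting guests
autoballoon=0

# Xen hypervisor command line (e.g. appended to the GRUB entry for Xen):
# pinning both the initial and maximum dom0 allocation to the same size
# leaves dom0 nothing to balloon.
dom0_mem=4096M,max:4096M
```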

I will report back.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 17:50:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkep5-0002GM-Lm; Mon, 17 Dec 2012 17:49:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tkep3-0002GH-Gj
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:49:49 +0000
Received: from [85.158.137.99:5620] by server-12.bemta-3.messagelabs.com id
	FA/8A-27559-C3B5FC05; Mon, 17 Dec 2012 17:49:48 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355766586!19147462!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTAxODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21191 invoked from network); 17 Dec 2012 17:49:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 17:49:48 -0000
X-IronPort-AV: E=Sophos;i="4.84,303,1355097600"; 
   d="scan'208";a="986433"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	17 Dec 2012 17:49:37 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 17 Dec 2012 12:49:36 -0500
Message-ID: <50CF5B2F.8000805@citrix.com>
Date: Mon, 17 Dec 2012 17:49:35 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50CF5FA602000078000B0D5E@nat28.tlf.novell.com>
In-Reply-To: <50CF5FA602000078000B0D5E@nat28.tlf.novell.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] [PATCH] x86,
 amd: Disable way access filter on Piledriver CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/12 17:08, Jan Beulich wrote:
> The Way Access Filter in recent AMD CPUs may hurt the performance of
> some workloads, caused by aliasing issues in the L1 cache.
> This patch disables it on the affected CPUs.
>
> The issue is similar to that one of last year:
> http://lkml.indiana.edu/hypermail/linux/kernel/1107.3/00041.html
> This new patch does not replace the old one, we just need another
> quirk for newer CPUs.
>
> The performance penalty without the patch depends on the
> circumstances, but is a bit less than the last year's 3%.
>
> The workloads affected would be those that access code from the same
> physical page under different virtual addresses, so different
> processes using the same libraries with ASLR or multiple instances of
> PIE-binaries. The code needs to be accessed simultaneously from both
> cores of the same compute unit.
>
> More details can be found here:
> http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf
>
> CPUs affected are anything with the core known as Piledriver.
> That includes the new parts of the AMD A-Series (aka Trinity) and the
> just released new CPUs of the FX-Series (aka Vishera).
> The model numbering is a bit odd here: FX CPUs have model 2,
> A-Series has model 10h, with possible extensions to 1Fh. Hence the
> range of model ids.
>
> Signed-off-by: Andre Przywara <osp@andrep.de>
>
> Add and use MSR_AMD64_IC_CFG. Update the value whenever it is found to
> not have all bits set, rather than just when it's zero.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -448,6 +448,14 @@ static void __devinit init_amd(struct cp
>   		}
>   	}
>   
> +	/*
> +	 * The way access filter has a performance penalty on some workloads.
> +	 * Disable it on the affected CPUs.
> +	 */
> +	if (c->x86 == 0x15 && c->x86_model >= 0x02 && c->x86_model < 0x20 &&
> +	    !rdmsr_safe(MSR_AMD64_IC_CFG, value) && (value & 0x1e) != 0x1e)
> +		wrmsr_safe(MSR_AMD64_IC_CFG, value | 0x1e);
Would it not be better to simply write 0x1e, rather than only write it 
when it's not 0x1e? It's a one-off operation, but the extra readmsr 
seems unnecessary to me.

--
Mats
> +
>           amd_get_topology(c);
>   
>   	/* Pointless to use MWAIT on Family10 as it does not deep sleep. */
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -211,6 +211,7 @@
>   
>   /* AMD64 MSRs */
>   #define MSR_AMD64_NB_CFG		0xc001001f
> +#define MSR_AMD64_IC_CFG		0xc0011021
>   #define MSR_AMD64_DC_CFG		0xc0011022
>   #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
>   
>
>
>
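The condition under discussion can be factored into a small predicate. This is a hedged sketch that models the quirk's decision logic outside the hypervisor; the helper name and standalone form are illustrative, while the 0x1e mask and the family/model range come from the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bits 1..4 of IC_CFG disable the way access filter (from the patch). */
#define IC_CFG_DIS_MASK 0x1eULL

/* Apply the quirk only on family 15h, models 02h..1Fh (Piledriver),
 * and only when some of the disable bits are still clear -- so a
 * second pass, or firmware that already set them, skips the wrmsr. */
static bool needs_ic_cfg_update(unsigned family, unsigned model,
                                uint64_t ic_cfg)
{
    return family == 0x15 && model >= 0x02 && model < 0x20 &&
           (ic_cfg & IC_CFG_DIS_MASK) != IC_CFG_DIS_MASK;
}
```

Keeping the rdmsr also means the write is a read-modify-write (`value | 0x1e`), preserving whatever else firmware put in IC_CFG rather than blindly storing 0x1e.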


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 17:55:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 17:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkeuB-0002QH-FG; Mon, 17 Dec 2012 17:55:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TkeuA-0002QB-1v
	for Xen-devel@lists.xen.org; Mon, 17 Dec 2012 17:55:06 +0000
Received: from [85.158.137.99:30367] by server-12.bemta-3.messagelabs.com id
	C2/50-27559-97C5FC05; Mon, 17 Dec 2012 17:55:05 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355766904!14293644!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19510 invoked from network); 17 Dec 2012 17:55:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-7.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	17 Dec 2012 17:55:04 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:61070 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TkeyK-00051L-3o; Mon, 17 Dec 2012 18:59:24 +0100
Date: Mon, 17 Dec 2012 18:55:00 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1339249142.20121217185500@eikelenboom.it>
To: "Jan Beulich" <JBeulich@suse.com>
In-Reply-To: <50CF5D0402000078000B0D17@nat28.tlf.novell.com>
References: <CAN=sCCFdBOKjjyqhr8Ae+cJseeTjMM41_xPC9__LtmVWsmR=qw@mail.gmail.com>
	<50CF5D0402000078000B0D17@nat28.tlf.novell.com>
MIME-Version: 1.0
Cc: Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com>, Xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (XEN) traps.c:3156: GPF messages in xen dmesg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 17, 2012, 5:57:24 PM, you wrote:

>>>> On 17.12.12 at 17:48, Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com> wrote:
>> I'm running Xen 4.2.0 with Linux kernel 3.7.0 and I'm seeing a flood of
>> these messages in xen dmesg:
>> 
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>> 
>> What does that mean and is it something that I should worry about?

> It means that the hypervisor recovered from #GP faults several
> times. What exactly it was that faulted you'd need to look up by
> translating the addresses above and resolving them to source
> locations. That'll also tell you whether you ought to be worried.

> A common example of these happening is Xen carrying out MSR
> accesses on behalf of the kernel, when the MSR actually isn't
> implemented (i.e. the kernel itself also is prepared to handle
> faults upon accessing them).
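Translating the logged addresses as Jan suggests can be done against the hypervisor's symbol file; a sketch, assuming a debug build (the xen-syms path varies by distribution and build, so the one below is illustrative):

```shell
# Resolve the faulting address and the recovery address to source lines.
# Use the xen-syms that matches the running hypervisor, e.g. xen/xen-syms
# from your own build tree (the path here is an assumption).
addr2line -e /usr/lib/debug/xen-syms ffff82c480159247 ffff82c4802170e4
```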

Hi Jan,

Perhaps related to a previous discussion (with an open end ...):
http://lists.xen.org/archives/html/xen-devel/2012-09/msg01599.html

--
Sander

> Jan





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 18:05:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 18:05:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkf3z-0002gU-PB; Mon, 17 Dec 2012 18:05:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tkf3y-0002gP-Eg
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 18:05:14 +0000
Received: from [85.158.143.35:39826] by server-3.bemta-4.messagelabs.com id
	88/E0-18211-9DE5FC05; Mon, 17 Dec 2012 18:05:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355767506!4868853!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7805 invoked from network); 17 Dec 2012 18:05:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 18:05:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="207343"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 18:05:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 18:05:05 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tkf3p-0008CU-L0; Mon, 17 Dec 2012 18:05:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tkf3p-000358-GS;
	Mon, 17 Dec 2012 18:05:05 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20687.24273.410385.948093@mariner.uk.xensource.com>
Date: Mon, 17 Dec 2012 18:05:05 +0000
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <ce70525c32dc475f58ad007a6c6a735d094c57e5.1355487375.git.julien.grall@citrix.com>
References: <ce70525c32dc475f58ad007a6c6a735d094c57e5.1355487375.git.julien.grall@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH V5] libxenstore: filter watch events in
 libxenstore when we unwatch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien Grall writes ("[PATCH V5] libxenstore: filter watch events in libxenstore when we unwatch"):
> XenStore queues watch events via a thread and notifies the user.
> Sometimes xs_unwatch is called before all related messages are read. The
> use case is non-threaded libevent, with two events A and B:
>     - Event A will destroy something and call xs_unwatch;
>     - Event B is used to notify that a node has changed in XenStore.
> As the events are handled one by one, event A can be handled before
> event B. So on the next xs_read_watch the user could retrieve a token
> for an already-unwatched node, and a segfault occurred if the token
> stores a pointer to the freed structure (ie: "backend:0xcafe").
> 
> To avoid problems for existing applications using libxenstore, this
> behaviour will only be enabled if XS_UNWATCH_FILTER is given to xs_open.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Signed-off-by: Julien Grall <julien.grall@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 18:09:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 18:09:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkf7T-0002mb-Dj; Mon, 17 Dec 2012 18:08:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tkf7S-0002mS-Az
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 18:08:50 +0000
Received: from [85.158.143.99:21270] by server-1.bemta-4.messagelabs.com id
	6F/2B-28401-1BF5FC05; Mon, 17 Dec 2012 18:08:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1355767729!23469429!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24978 invoked from network); 17 Dec 2012 18:08:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 18:08:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="207391"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 18:08:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 18:08:40 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tkf7I-0008Rk-AJ; Mon, 17 Dec 2012 18:08:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tkf7I-00035W-4k;
	Mon, 17 Dec 2012 18:08:40 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20687.24486.142168.302026@mariner.uk.xensource.com>
Date: Mon, 17 Dec 2012 18:08:38 +0000
To: William Pitcock <nenolod@dereferenced.org>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
References: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

William Pitcock writes ("[Xen-devel] introducing python-xen"):
> I would like to introduce the Python Xen library, which uses libxs and
> libxc directly to provide some manipulation functions for the domains
> running on a hypervisor.

Thanks, that's interesting.

However, nowadays we would recommend building this kind of
functionality on top of libxl.  The existing python bindings for libxl
may need some work, but they're probably a good starting point.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:01:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkgrc-0003Pf-5J; Mon, 17 Dec 2012 20:00:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1Tkgra-0003Pa-5C
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:00:34 +0000
Received: from [85.158.138.51:7366] by server-10.bemta-3.messagelabs.com id
	29/6F-07616-1E97FC05; Mon, 17 Dec 2012 20:00:33 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355774432!27329338!1
X-Originating-IP: [212.227.126.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODYgPT4gNTgxMjE=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODYgPT4gNTgxMjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21154 invoked from network); 17 Dec 2012 20:00:32 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.186)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:00:32 -0000
Received: from klappe2.localnet
	(HSI-KBW-095-208-003-199.hsi5.kabel-badenwuerttemberg.de
	[95.208.3.199])
	by mrelayeu.kundenserver.de (node=mreu4) with ESMTP (Nemesis)
	id 0Lvdma-1T3nzB0Fyu-017XME; Mon, 17 Dec 2012 21:00:17 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Will Deacon <will.deacon@arm.com>
Date: Mon, 17 Dec 2012 20:00:11 +0000
User-Agent: KMail/1.12.2 (Linux/3.5.0; KDE/4.3.2; x86_64; ; )
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-4-git-send-email-will.deacon@arm.com>
In-Reply-To: <1355762141-29616-4-git-send-email-will.deacon@arm.com>
MIME-Version: 1.0
Message-Id: <201212172000.11850.arnd@arndb.de>
X-Provags-ID: V02:K0:8Fg+IG6/JiRsNH7Fdm/FsQjwIZVOK4DELmGZPMSW4wq
	hd93Pip2o8d31HbZcP1TVo9Il6UQE1hktiDupSih6CLfsIz2fE
	iyZuLyBvxUdyfseoSELYLk8ILI5XV+SUMs5Zuv2wPyYxiWHlHt
	7+OQK5NosYM4k3z2UtgzYaECIxmF0OeKsBY3oHCgit8T+ec5ao
	aGbOvJfoRl5db7GHuF4xDF05vPtwUfFAo8XxaIz7kdI5uBBlmC
	SGL9OeIW1RsIfqJEIZlcLbN6Uh2LnVarXnNWVyACgKK2C+pTqh
	oTro5wNqCMjazVx9MqDEdX3yZh8FficZUS9OhEM0WwfsmlWl9l
	KaPRFKgiu5GQOrm17clM=
Cc: dave.martin@linaro.org, nico@fluxnic.net, Marc.Zyngier@arm.com,
	xen-devel@lists.xen.org, robherring2@gmail.com,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 3/6] ARM: psci: add devicetree binding
	for describing PSCI firmware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Monday 17 December 2012, Will Deacon wrote:
> +
> + - function-base : The base ID from which the functions are offset.
> +
> +Main node optional properties:
> +
> + - cpu_suspend   : Offset of CPU_SUSPEND ID from function-base
> +
> + - cpu_off       : Offset of CPU_OFF ID from function-base
> +
> + - cpu_on        : Offset of CPU_ON ID from function-base
> +
> + - migrate       : Offset of MIGRATE ID from function-base

What is the benefit of the "function-base" property over just having
32-bit IDs for each function? As far as I can tell, the interface does
not rely on the numbers being consecutive, so removing the function-base
property would make the binding simpler as well as more flexible.
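For illustration, the two shapes under discussion might look like this as DTS fragments (the `compatible` and `method` properties are assumptions from the surrounding patch series, not shown in the quoted excerpt, and the hex IDs are placeholder values, not real firmware IDs):

```dts
/* As proposed: per-function IDs are offsets from function-base */
psci {
	compatible = "arm,psci";
	method = "smc";
	function-base = <0x95c1ba5e>;
	cpu_off = <0x1>;
	cpu_on = <0x2>;
};

/* As suggested above: each property carries a full 32-bit function ID */
psci {
	compatible = "arm,psci";
	method = "smc";
	cpu_off = <0x95c1ba5f>;
	cpu_on = <0x95c1ba60>;
};
```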

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:10:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkh0i-0003a9-AY; Mon, 17 Dec 2012 20:10:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tkh0g-0003a4-G1
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:09:58 +0000
Received: from [85.158.137.99:46982] by server-12.bemta-3.messagelabs.com id
	6B/42-27559-51C7FC05; Mon, 17 Dec 2012 20:09:57 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355774995!19161367!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTgzNDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21001 invoked from network); 17 Dec 2012 20:09:56 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:09:56 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355774996; x=1387310996;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=Ps7eLWyK2poDC0tx5rVLfCO03Wm7flYS4ivbl6uWbNE=;
	b=rFDDG+axnaGJeFyIr/Fysf4jzIw5NkaelUlFhiicMpHMEfPg/veUb7Bk
	JLdlW0aEKuoBbqg845zcO38Zq/aBwX2PxbeejxE7/5M0S5P0oMEpXKvGk
	zDoo5q7VPq53Cdk7H4JMsPap3ExZD0ma6XBxd25c1qifwwrb2QnYA9uim M=;
X-IronPort-AV: E=Sophos;i="4.84,304,1355097600"; d="scan'208";a="355320252"
Received: from smtp-in-0102.sea3.amazon.com ([10.224.19.46])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 17 Dec 2012 20:09:53 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-0102.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBHK9r5R010542
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Mon, 17 Dec 2012 20:09:53 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.247.3; Mon, 17 Dec 2012 12:09:52 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Mon, 17 Dec 2012 12:09:52 -0800
Date: Mon, 17 Dec 2012 12:09:52 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121217200950.GA29382@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
	<20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
	<1355743598.14620.43.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355743598.14620.43.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Palagummi,
	Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 11:26:38AM +0000, Ian Campbell wrote:
> On Fri, 2012-12-14 at 18:53 +0000, Matt Wilson wrote:
> > On Thu, Dec 13, 2012 at 11:12:50PM +0000, Palagummi, Siva wrote:
> > > > -----Original Message-----
> > > > From: Matt Wilson [mailto:msw@amazon.com]
> > > > Sent: Wednesday, December 12, 2012 3:05 AM
> > > >
> > > > On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > > > >
> > > > > You can clearly see below that copy_off is input to
> > > > > start_new_rx_buffer while copying frags.
> > > > 
> > > > Yes, but that's the right thing to do. copy_off should be set to the
> > > > destination offset after copying the last byte of linear data, which
> > > > means "skb_headlen(skb) % PAGE_SIZE" is correct.
> > > > 
> > >
> > > No, it is not correct, for two reasons. First, what if
> > > skb_headlen(skb) is exactly a multiple of PAGE_SIZE? copy_off would
> > > be set to zero, and if there is data in the frags, zero will be
> > > passed in as the copy_off value and start_new_rx_buffer will
> > > return FALSE. Second, there is the obvious case from the current
> > > code where an "offset_in_page(skb->data)"-sized hole will be left in
> > > the first buffer after the first pass if the remaining data to be
> > > copied would overflow the first buffer.
> > 
> > Right, and I'm arguing that having the code leave a hole is less
> > desirable than potentially increasing the number of copy
> > operations. I'd like to hear from Ian and others if using the buffers
> > efficiently is more important than reducing copy operations. Intuitively,
> > I think it's more important to use the ring efficiently.
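The copy_off edge case quoted above can be sketched as follows (a simplified model for illustration: the names come from the thread, but the predicate here is an assumption, not the actual netback code):

```python
PAGE_SIZE = 4096

def copy_off_after_head(headlen: int) -> int:
    """Destination offset after copying the linear head, per the
    disputed "skb_headlen(skb) % PAGE_SIZE" expression."""
    return headlen % PAGE_SIZE

def start_new_rx_buffer(copy_off: int, size: int) -> bool:
    """Simplified stand-in for the predicate: open a fresh buffer
    only when the data would not fit after copy_off."""
    return copy_off + size > PAGE_SIZE

# Head exactly fills two pages, so the current buffer is full...
off = copy_off_after_head(2 * PAGE_SIZE)
assert off == 0
# ...yet the zero offset makes a following 1024-byte frag appear to
# fit in the "current" buffer, so no new buffer is started.
assert start_new_rx_buffer(off, 1024) is False
```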
> 
> Do you mean the ring or the actual buffers?

Sorry, the actual buffers.

> The current code tries to coalesce multiple small frags/heads because it
> is usually trivial but doesn't try too hard with multiple larger frags,
> since they take up most of a page by themselves anyway. I suppose this
> does waste a bit of buffer space and therefore could take more ring
> slots, but it's not clear to me how much this matters in practice (it
> might be tricky to measure this with any realistic workload).

In the case where we're consistently handling large heads (like when
using an MTU of 9000 for streaming traffic), we're wasting 1/3 of
the available buffers.
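Rough arithmetic behind that figure (a sketch assuming a ~9000-byte linear head and 4096-byte buffers; the 3500-byte hole below is an illustrative value, not a measurement):

```python
import math

PAGE_SIZE = 4096
head = 9000  # typical linear head for a 9000-byte-MTU frame

# Best case: head copied starting at offset 0 of the first buffer.
bufs = math.ceil(head / PAGE_SIZE)        # 3 buffers consumed
wasted = bufs * PAGE_SIZE - head          # 3288 bytes unused
print(bufs, round(wasted / (bufs * PAGE_SIZE), 2))  # 3 0.27

# With an offset_in_page(skb->data) hole left in the first buffer
# (say 3500 bytes), the same head spills into a fourth buffer.
hole = 3500
bufs = math.ceil((hole + head) / PAGE_SIZE)
wasted = bufs * PAGE_SIZE - head          # 7384 bytes unused
print(bufs, round(wasted / (bufs * PAGE_SIZE), 2))  # 4 0.45
```

So depending on the initial offset, somewhere between roughly a quarter and nearly half of the buffer space goes unused, which is consistent with the "1/3" estimate.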

> The cost of splitting a copy into two should be low, though: the copies
> are already batched into a single hypercall, and I'd expect things to be
> mostly dominated by the data copy itself rather than the setup of each
> individual op, which would argue for splitting a copy in two if that
> helps fill the buffers.

That was my thought as well. We're testing a patch that does just this
now.

> The flip side is that once you get past the headers etc. the paged frags
> likely tend to be either bits and bobs (fine) or mostly whole pages. In
> the whole-pages case, trying to fill the buffers will result in every
> copy getting split. My gut tells me that the whole-pages case probably
> dominates, but I'm not sure what the real-world impact of splitting all
> the copies would be.

Right, I'm less concerned about the paged frags. It might make sense
to skip some space so that the copying can be page-aligned. I suppose
it depends on how many different pages are in the list, and what the
total size is.

In practice I'd think it would be rare to see a paged SKB for ingress
traffic to domUs unless there is significant intra-host communication
(dom0->domU, domU->domU). When domU ingress traffic is originating
from an Ethernet device it shouldn't be paged. Paged SKBs would come
into play when a SKB is formed for transmit on an egress device that
is SG-capable. Or am I misunderstanding how paged SKBs are used these
days?

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:16:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:16:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkh6y-0003jA-7F; Mon, 17 Dec 2012 20:16:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tkh6x-0003j5-Ek
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:16:27 +0000
Received: from [85.158.138.51:6834] by server-16.bemta-3.messagelabs.com id
	28/76-27634-A9D7FC05; Mon, 17 Dec 2012 20:16:26 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355775385!28100230!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7498 invoked from network); 17 Dec 2012 20:16:25 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:16:25 -0000
Received: by mail-wi0-f169.google.com with SMTP id hq12so2438719wib.2
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 12:16:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:user-agent:date:subject:from:to:cc:message-id
	:thread-topic:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=AgYpQXKvoz0had4wnNfZuBkZvX0GOp0wbry7tbJp3pU=;
	b=QLrvFf2V2iBfmS95WFLcSFSl1iyEDZCsZqOxVYKMGytbOX0+m4ULN93YDnwdR3uFES
	TMYqNy7GD65zydsxz0LJ/wMabsqUxn7yHQflmvhZ/y3v7hBptxChSNTLLUMEQbDn00cr
	jJzZSq1ZQLwarmx65kfIAyNLnnzqD3a4Oiqj0Ga9RvzVsCap8YMszliQSJ0dK7ScyUsJ
	OYIPLLUgpiORvI0GN0/m00N2XEZvvs4ebr4xEKBwYXXsQ8zPY4zWJDfMU9q6llIkuL/7
	g1GBnNkD/ytOgNu4sY2zQDVPzVnR26i6a3eA4e8kaSNC4kKJbCpZ1t9Amqt6EtaxGsp+
	lx1w==
X-Received: by 10.180.72.232 with SMTP id g8mr18108861wiv.0.1355775385596;
	Mon, 17 Dec 2012 12:16:25 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id hi2sm6411576wib.10.2012.12.17.12.16.23
	(version=SSLv3 cipher=OTHER); Mon, 17 Dec 2012 12:16:24 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Mon, 17 Dec 2012 20:16:18 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCF52E12.55DEA%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] add maintainers entry for vendor-independent
	IOMMU code
Thread-Index: Ac3ck19AE2OQsX4PPkKw3A3Zn3P9/w==
In-Reply-To: <50CF5F6402000078000B0D33@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Donald D Dugger <donald.d.dugger@intel.com>, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] add maintainers entry for
 vendor-independent IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/2012 17:07, "Jan Beulich" <JBeulich@suse.com> wrote:

> As agreed to last week.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -184,6 +184,15 @@ F: xen/arch/x86/hvm/vmx/
>  F: xen/arch/x86/mm/hap/p2m-ept.c
>  F: xen/include/asm-x86/hvm/vmx/
>  
> +IOMMU VENDOR INDEPENDENT CODE
> +M: Xiantao Zhang <xiantao.zhang@intel.com>
> +M: Jan Beulich <jbeulich@suse.com>
> +S: Supported
> +F: xen/drivers/passthrough/
> +X: xen/drivers/passthrough/amd/
> +X: xen/drivers/passthrough/vtd/
> +F: xen/include/xen/iommu.h
> +
>  LINUX (PV_OPS)
>  M: Jeremy Fitzhardinge <jeremy@goop.org>
>  S: Supported
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:18:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:18:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkh8H-0003ny-S0; Mon, 17 Dec 2012 20:17:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tkh8G-0003ni-II
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:17:48 +0000
Received: from [85.158.143.99:60228] by server-1.bemta-4.messagelabs.com id
	12/93-28401-BED7FC05; Mon, 17 Dec 2012 20:17:47 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355775466!22853148!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16506 invoked from network); 17 Dec 2012 20:17:46 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:17:46 -0000
Received: by mail-we0-f173.google.com with SMTP id z2so3152637wey.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 12:17:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=NBX1HA9rTPw8pyKfpsMpSwV5ZP2B78zIOcwtTYvqa8M=;
	b=G25ia6S2GPv07gEW7KYennT+6iIX92rQYc6oSXcs5T4mNHaIpZW7RBN2sSXen+NYrA
	3Po05QOSoTk2qZAwG/jvL3NL8yrP/ZJOVISXM9KAriMyQAGQQ5BX9Ya9bsgeaLe29J67
	DOZf5IxmOCzLWKKmn4LOllVN4LsmYSCAkgPo7ObN9CmI1ltIQjeOqamibzVyavmNW7Fs
	CAYjQJ0MChmz4quMtq3SxnCIawLLm3HJ2jO4o48syuTl/7YUXBc1Ei3I6I/V2FJJNVX3
	9zGUZ7FNL+bzU8/itSXdyqCjLi4b69Bty0+a3RNmmWkquLAMCgdhgkNHoN1Hbd4bFASe
	lpTw==
Received: by 10.180.99.227 with SMTP id et3mr18080741wib.6.1355775466633;
	Mon, 17 Dec 2012 12:17:46 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id s10sm14191823wiw.4.2012.12.17.12.17.44
	(version=SSLv3 cipher=OTHER); Mon, 17 Dec 2012 12:17:46 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Mon, 17 Dec 2012 20:17:38 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCF52E62.55DEC%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86, amd: Disable way access filter on
	Piledriver CPUs
Thread-Index: Ac3ck47vwBeQTsvbskK5LHTgX91lAg==
In-Reply-To: <50CF5FA602000078000B0D5E@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Mats Petersson <mats.petersson@citrix.com>, Andre Przywara <osp@andrep.de>
Subject: Re: [Xen-devel] [PATCH] x86,
 amd: Disable way access filter on Piledriver CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/2012 17:08, "Jan Beulich" <JBeulich@suse.com> wrote:

> The Way Access Filter in recent AMD CPUs may hurt the performance of
> some workloads, caused by aliasing issues in the L1 cache.
> This patch disables it on the affected CPUs.
> 
> The issue is similar to that one of last year:
> http://lkml.indiana.edu/hypermail/linux/kernel/1107.3/00041.html
> This new patch does not replace the old one, we just need another
> quirk for newer CPUs.
> 
> The performance penalty without the patch depends on the
> circumstances, but is a bit less than the last year's 3%.
> 
> The workloads affected would be those that access code from the same
> physical page under different virtual addresses, so different
> processes using the same libraries with ASLR or multiple instances of
> PIE-binaries. The code needs to be accessed simultaneously from both
> cores of the same compute unit.
> 
> More details can be found here:
> http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf
> 
> CPUs affected are anything with the core known as Piledriver.
> That includes the new parts of the AMD A-Series (aka Trinity) and the
> just released new CPUs of the FX-Series (aka Vishera).
> The model numbering is a bit odd here: FX CPUs have model 2,
> A-Series has model 10h, with possible extensions to 1Fh. Hence the
> range of model ids.
> 
> Signed-off-by: Andre Przywara <osp@andrep.de>
> 
> Add and use MSR_AMD64_IC_CFG. Update the value whenever it is found to
> not have all bits set, rather than just when it's zero.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

I don't have a strong opinion on Mats' comment.

 -- Keir

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -448,6 +448,14 @@ static void __devinit init_amd(struct cp
> }
> }
>  
> + /*
> +  * The way access filter has a performance penalty on some workloads.
> +  * Disable it on the affected CPUs.
> +  */
> + if (c->x86 == 0x15 && c->x86_model >= 0x02 && c->x86_model < 0x20 &&
> +     !rdmsr_safe(MSR_AMD64_IC_CFG, value) && (value & 0x1e) != 0x1e)
> +  wrmsr_safe(MSR_AMD64_IC_CFG, value | 0x1e);
> +
>          amd_get_topology(c);
>  
> /* Pointless to use MWAIT on Family10 as it does not deep sleep. */
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -211,6 +211,7 @@
>  
>  /* AMD64 MSRs */
>  #define MSR_AMD64_NB_CFG  0xc001001f
> +#define MSR_AMD64_IC_CFG  0xc0011021
>  #define MSR_AMD64_DC_CFG  0xc0011022
>  #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT 46
>  
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:25:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkhFt-00044R-RC; Mon, 17 Dec 2012 20:25:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkhFs-00044M-Ok
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 20:25:41 +0000
Received: from [85.158.143.99:19853] by server-3.bemta-4.messagelabs.com id
	87/D0-18211-3CF7FC05; Mon, 17 Dec 2012 20:25:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355775939!28917536!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23837 invoked from network); 17 Dec 2012 20:25:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:25:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="209479"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 20:25:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 20:25:35 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkhFn-00011M-6S;
	Mon, 17 Dec 2012 20:25:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkhFm-0001yG-EC;
	Mon, 17 Dec 2012 20:25:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14773-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 20:25:34 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14773: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14773 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14773/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10   fail blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  516dbd9deb4f
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=516dbd9deb4f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 516dbd9deb4f
+ branch=xen-4.1-testing
+ revision=516dbd9deb4f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 516dbd9deb4f ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 4 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:25:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkhFt-00044R-RC; Mon, 17 Dec 2012 20:25:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkhFs-00044M-Ok
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 20:25:41 +0000
Received: from [85.158.143.99:19853] by server-3.bemta-4.messagelabs.com id
	87/D0-18211-3CF7FC05; Mon, 17 Dec 2012 20:25:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355775939!28917536!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExMjgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23837 invoked from network); 17 Dec 2012 20:25:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:25:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="209479"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Dec 2012 20:25:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 17 Dec 2012 20:25:35 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkhFn-00011M-6S;
	Mon, 17 Dec 2012 20:25:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkhFm-0001yG-EC;
	Mon, 17 Dec 2012 20:25:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14773-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Dec 2012 20:25:34 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14773: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14773 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14773/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14679
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10   fail blocked in 14679

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  516dbd9deb4f
baseline version:
 xen                  255a0b6a8104

------------------------------------------------------------
People who touched revisions under test:
  Greg Wettstein <greg@enjellic.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=516dbd9deb4f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 516dbd9deb4f
+ branch=xen-4.1-testing
+ revision=516dbd9deb4f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 516dbd9deb4f ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 4 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:32:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkhMQ-0004EP-N7; Mon, 17 Dec 2012 20:32:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TkhMP-0004EK-Fv
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:32:25 +0000
Received: from [85.158.139.211:14124] by server-6.bemta-5.messagelabs.com id
	9A/01-30498-8518FC05; Mon, 17 Dec 2012 20:32:24 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355776343!20091211!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20277 invoked from network); 17 Dec 2012 20:32:24 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-15.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	17 Dec 2012 20:32:24 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:63352 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TkhQW-0005nc-RR; Mon, 17 Dec 2012 21:36:40 +0100
Date: Mon, 17 Dec 2012 21:32:17 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1782021524.20121217213217@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20121216173824.GA4518@phenom.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: linux-kernel@vger.kernel.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
	dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Sunday, December 16, 2012, 6:38:24 PM, you wrote:

> On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.

> Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> vacation during that time (just got back).

> Did you by any chance try to do a git bisect to narrow down
> which merge it was?

Hi Konrad,

With some more effort, it leads to:

git bisect start
# bad: [fa4c95bfdb85d568ae327d57aa33a4f55bab79c4] Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
git bisect bad fa4c95bfdb85d568ae327d57aa33a4f55bab79c4
# good: [29594404d7fe73cd80eaa4ee8c43dcc53970c60e] Linux 3.7
git bisect good 29594404d7fe73cd80eaa4ee8c43dcc53970c60e
# bad: [98870901cce098bbe94d90d2c41d8d1fa8d94392] mm/bootmem.c: remove unused wrapper function reserve_bootmem_generic()
git bisect bad 98870901cce098bbe94d90d2c41d8d1fa8d94392
# good: [8966961b31c251b854169e9886394c2a20f2cea7] Merge tag 'staging-3.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
git bisect good 8966961b31c251b854169e9886394c2a20f2cea7
# bad: [22a40fd9a60388aec8106b0baffc8f59f83bb1b4] Merge tag 'dlm-3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm
git bisect bad 22a40fd9a60388aec8106b0baffc8f59f83bb1b4
# good: [aefb058b0c27dafb15072406fbfd92d2ac2c8790] Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good aefb058b0c27dafb15072406fbfd92d2ac2c8790
# good: [b64c5fda3868cb29d5dae0909561aa7d93fb7330] Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good b64c5fda3868cb29d5dae0909561aa7d93fb7330
# bad: [139353ffbe42ac7abda42f3259c1c374cbf4b779] Merge tag 'please-pull-einj-fix-for-acpi5' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
git bisect bad 139353ffbe42ac7abda42f3259c1c374cbf4b779
# bad: [d07e43d70eef15a44a2c328a913d8d633a90e088] Merge branch 'omap-serial' of git://git.linaro.org/people/rmk/linux-arm
git bisect bad d07e43d70eef15a44a2c328a913d8d633a90e088
# bad: [a05a4e24dcd73c2de4ef3f8d520b8bbb44570c60] Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad a05a4e24dcd73c2de4ef3f8d520b8bbb44570c60
# bad: [a71c8bc5dfefbbf80ef90739791554ef7ea4401b] x86, topology: Debug CPU0 hotplug
git bisect bad a71c8bc5dfefbbf80ef90739791554ef7ea4401b
# bad: [42e78e9719aa0c76711e2731b19c90fe5ae05278] x86-64, hotplug: Add start_cpu0() entry point to head_64.S
git bisect bad 42e78e9719aa0c76711e2731b19c90fe5ae05278
# good: [4d25031a81d3cd32edc00de6596db76cc4010685] x86, topology: Don't offline CPU0 if any PIC irq can not be migrated out of it
git bisect good 4d25031a81d3cd32edc00de6596db76cc4010685
# bad: [209efae12981f3d2d694499b761def10895c078c] x86, hotplug, suspend: Online CPU0 for suspend or hibernate
git bisect bad 209efae12981f3d2d694499b761def10895c078c
# bad: [30106c174311b8cfaaa3186c7f6f9c36c62d17da] x86, hotplug: Support functions for CPU0 online/offline
git bisect bad 30106c174311b8cfaaa3186c7f6f9c36c62d17da



30106c174311b8cfaaa3186c7f6f9c36c62d17da is the first bad commit
commit 30106c174311b8cfaaa3186c7f6f9c36c62d17da
Author: Fenghua Yu <fenghua.yu@intel.com>
Date:   Tue Nov 13 11:32:41 2012 -0800

    x86, hotplug: Support functions for CPU0 online/offline

    Add smp_store_boot_cpu_info() to store cpu info for BSP during boot time.

    Now smp_store_cpu_info() stores cpu info for bringing up BSP or AP after
    it's offline.

    Continue to online CPU0 in native_cpu_up().

    Continue to offline CPU0 in native_cpu_disable().

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-5-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

:040000 040000 729e56e8eddaaf5d0f55257b82f28006dffb9aab d5c98e50cd92814351ee6c741b7e4c9afa29487c M      arch


Which seems to be merged in http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=74b84233458e9db7c160cec67638efdbec748ca9

--

Sander


> Thanks!
>> The boot stalls:
>> 
>> [    0.000000] ACPI: PM-Timer IO Port: 0x808
>> [    0.000000] ACPI: Local APIC address 0xfee00000
>> [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
>> [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
>> [    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
>> [    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
>> [    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> [   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
>> [   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
>> [   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
>> [   64.598692] sending NMI to all CPUs:
>> [   64.598716] xen: vector 0x2 is not implemented
>> 
>> 
>> Perhaps an interesting line is this incomplete one (no end of range), where it stalls for some time before the kernel reports the stall itself:
>> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
>> 
>> 
>> The exact same config with a 3.7.0 kernel works fine.
>> Complete serial log is attached.
>> 
>> --
>> 
>> Sander
>> 
>> 





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:47:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkhan-0004SJ-CW; Mon, 17 Dec 2012 20:47:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tkhal-0004SE-Cf
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:47:15 +0000
Received: from [85.158.138.51:46450] by server-8.bemta-3.messagelabs.com id
	81/C8-01297-2D48FC05; Mon, 17 Dec 2012 20:47:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355777232!21163918!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTIxNTA2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6616 invoked from network); 17 Dec 2012 20:47:13 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Dec 2012 20:47:13 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBHKl5uV001919
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Dec 2012 20:47:06 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBHKl5oQ016763
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Dec 2012 20:47:05 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBHKl4ej021073; Mon, 17 Dec 2012 14:47:04 -0600
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Dec 2012 12:47:04 -0800
Date: Mon, 17 Dec 2012 15:46:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, fenghua.yu@intel.com
Message-ID: <20121217204634.GA13181@konrad-lan.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
	<1782021524.20121217213217@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1782021524.20121217213217@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: hpa@linux.intel.com, linux-kernel@vger.kernel.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
 dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 09:32:17PM +0100, Sander Eikelenboom wrote:
> 
> Sunday, December 16, 2012, 6:38:24 PM, you wrote:
> 
> > On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
> >> Hi Konrad,
> >> 
> >> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.
> 
> > Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> > vacation during that time (just got back).
> 
> > Did you by any chance try to do a git bisect to narrow down
> > which merge it was?
> 
> Hi Konrad,

Hey Sander,

Thank you for doing the bisection.

Fenghua - any ideas what might be amiss in the Xen subsystem?
I hadn't looked at the CPU0 offlining/onlining patchset,
so I am not completely up to speed on the particulars of the patches.

> 
> With some more effort it leads to:
> 
> git bisect start
> # bad: [fa4c95bfdb85d568ae327d57aa33a4f55bab79c4] Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
> git bisect bad fa4c95bfdb85d568ae327d57aa33a4f55bab79c4
> # good: [29594404d7fe73cd80eaa4ee8c43dcc53970c60e] Linux 3.7
> git bisect good 29594404d7fe73cd80eaa4ee8c43dcc53970c60e
> # bad: [98870901cce098bbe94d90d2c41d8d1fa8d94392] mm/bootmem.c: remove unused wrapper function reserve_bootmem_generic()
> git bisect bad 98870901cce098bbe94d90d2c41d8d1fa8d94392
> # good: [8966961b31c251b854169e9886394c2a20f2cea7] Merge tag 'staging-3.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
> git bisect good 8966961b31c251b854169e9886394c2a20f2cea7
> # bad: [22a40fd9a60388aec8106b0baffc8f59f83bb1b4] Merge tag 'dlm-3.8' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm
> git bisect bad 22a40fd9a60388aec8106b0baffc8f59f83bb1b4
> # good: [aefb058b0c27dafb15072406fbfd92d2ac2c8790] Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
> git bisect good aefb058b0c27dafb15072406fbfd92d2ac2c8790
> # good: [b64c5fda3868cb29d5dae0909561aa7d93fb7330] Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
> git bisect good b64c5fda3868cb29d5dae0909561aa7d93fb7330
> # bad: [139353ffbe42ac7abda42f3259c1c374cbf4b779] Merge tag 'please-pull-einj-fix-for-acpi5' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
> git bisect bad 139353ffbe42ac7abda42f3259c1c374cbf4b779
> # bad: [d07e43d70eef15a44a2c328a913d8d633a90e088] Merge branch 'omap-serial' of git://git.linaro.org/people/rmk/linux-arm
> git bisect bad d07e43d70eef15a44a2c328a913d8d633a90e088
> # bad: [a05a4e24dcd73c2de4ef3f8d520b8bbb44570c60] Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
> git bisect bad a05a4e24dcd73c2de4ef3f8d520b8bbb44570c60
> # bad: [a71c8bc5dfefbbf80ef90739791554ef7ea4401b] x86, topology: Debug CPU0 hotplug
> git bisect bad a71c8bc5dfefbbf80ef90739791554ef7ea4401b
> # bad: [42e78e9719aa0c76711e2731b19c90fe5ae05278] x86-64, hotplug: Add start_cpu0() entry point to head_64.S
> git bisect bad 42e78e9719aa0c76711e2731b19c90fe5ae05278
> # good: [4d25031a81d3cd32edc00de6596db76cc4010685] x86, topology: Don't offline CPU0 if any PIC irq can not be migrated out of it
> git bisect good 4d25031a81d3cd32edc00de6596db76cc4010685
> # bad: [209efae12981f3d2d694499b761def10895c078c] x86, hotplug, suspend: Online CPU0 for suspend or hibernate
> git bisect bad 209efae12981f3d2d694499b761def10895c078c
> # bad: [30106c174311b8cfaaa3186c7f6f9c36c62d17da] x86, hotplug: Support functions for CPU0 online/offline
> git bisect bad 30106c174311b8cfaaa3186c7f6f9c36c62d17da
> 
> 
> 
> 30106c174311b8cfaaa3186c7f6f9c36c62d17da is the first bad commit
> commit 30106c174311b8cfaaa3186c7f6f9c36c62d17da
> Author: Fenghua Yu <fenghua.yu@intel.com>
> Date:   Tue Nov 13 11:32:41 2012 -0800
> 
>     x86, hotplug: Support functions for CPU0 online/offline
> 
>     Add smp_store_boot_cpu_info() to store cpu info for BSP during boot time.
> 
>     Now smp_store_cpu_info() stores cpu info for bringing up BSP or AP after
>     it's offline.
> 
>     Continue to online CPU0 in native_cpu_up().
> 
>     Continue to offline CPU0 in native_cpu_disable().
> 
>     Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
>     Link: http://lkml.kernel.org/r/1352835171-3958-5-git-send-email-fenghua.yu@intel.com
>     Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
> 
> :040000 040000 729e56e8eddaaf5d0f55257b82f28006dffb9aab d5c98e50cd92814351ee6c741b7e4c9afa29487c M      arch
> 
> 
> Which seems to be merged in http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=74b84233458e9db7c160cec67638efdbec748ca9
> 
> --
> 
> Sander
> 
> 
> > Thanks!
> >> The boot stalls:
> >> 
> >> [    0.000000] ACPI: PM-Timer IO Port: 0x808
> >> [    0.000000] ACPI: Local APIC address 0xfee00000
> >> [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
> >> [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
> >> [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
> >> [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
> >> [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
> >> [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
> >> [    0.000000] ACPI: IOAPIC (id[0x06] address[0xfec00000] gsi_base[0])
> >> [    0.000000] IOAPIC[0]: apic_id 6, version 33, address 0xfec00000, GSI 0-23
> >> [    0.000000] ACPI: IOAPIC (id[0x07] address[0xfec20000] gsi_base[24])
> >> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
> >> [   64.598628] INFO: rcu_preempt detected stalls on CPUs/tasks:
> >> [   64.598676]  0: (1 GPs behind) idle=aed/140000000000000/0 drain=5 . timer not pending
> >> [   64.598683]  (detected by 1, t=18004 jiffies, g=18446744073709551414, c=18446744073709551413, q=162)
> >> [   64.598692] sending NMI to all CPUs:
> >> [   64.598716] xen: vector 0x2 is not implemented
> >> 
> >> 
> >> Perhaps an interesting line is the incomplete one (no end of the GSI range), and it stalls there for some time before the kernel reports the stall itself:
> >> [    0.000000] IOAPIC[1]: apic_id 7, version 33, address 0xfec20000, GSI 24-
> >> 
> >> 
> >> The exact same config with a 3.7.0 kernel works fine.
> >> Complete serial log is attached.
> >> 
> >> --
> >> 
> >> Sander
> >> 
> >> 
> 
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:51:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkhex-0004dW-BD; Mon, 17 Dec 2012 20:51:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1Tkhev-0004dR-Js
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:51:33 +0000
Received: from [85.158.138.51:51487] by server-12.bemta-3.messagelabs.com id
	06/1A-27559-4D58FC05; Mon, 17 Dec 2012 20:51:32 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-5.tower-174.messagelabs.com!1355777491!29293628!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30249 invoked from network); 17 Dec 2012 20:51:32 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:51:32 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so7968480vcb.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 12:51:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:in-reply-to:message-id:references
	:user-agent:mime-version:content-type:x-gm-message-state;
	bh=3lcXGjS9CKwYJ22nc/sQYRmA6rVDX8iyP2hqTlSB+k4=;
	b=ZgK81gER2ZXrbfniy2djRzA983xXTJ4OFIo6EwwSxYy+Ph1DpAu8r8QFPzdPTRKhr1
	0cBCYOoHChGk9s5Prg5I0+jS6BJ3oOra3DsT0j8H9K0yUzZmjEV40rDXO/ojtkfrHOyg
	qyPgpumxOeRavLggyH9SyYj8y8ujGbIHGQ1IQhTisoJbeWAIw4lKiKqkan6/5+dgMtMF
	tI+Y4qr1tAyWBTKjWtquLU+kGDVkYMKuayw0jfx6b083s6mb8J0zLy6ymO/MXg+Uo+kn
	h7V+aNVJcGGGuTgI+ZOKYeUFpWUGyOpbpYjYPUiDXfSR2hu5L5G1fSyVQ4uWM7bIs1cG
	419g==
Received: by 10.58.247.132 with SMTP id ye4mr26557151vec.9.1355777490652;
	Mon, 17 Dec 2012 12:51:30 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id n10sm12737398vde.9.2012.12.17.12.51.28
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 12:51:29 -0800 (PST)
Date: Mon, 17 Dec 2012 15:51:27 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355762141-29616-5-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212171537230.1263@xanadu.home>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-5-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQmCM/D/4PJVj27U9DbKqz21CIypqprS12KhKNEZdgwBIoUM4X48XceLkU2idJTHUfGXoiOu
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc.Zyngier@arm.com,
	xen-devel@lists.xen.org, robherring2@gmail.com,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 4/6] ARM: psci: add support for PSCI
 invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Will Deacon wrote:

> This patch adds support for the Power State Coordination Interface
> defined by ARM, allowing Linux to request CPU-centric power-management
> operations from firmware implementing the PSCI protocol.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

[...]

> +/*
> + * The following two functions are invoked via the invoke_psci_fn pointer
> + * and will not be inlined, allowing us to piggyback on the AAPCS.
> + */

To make sure the code is always in sync with the intent, you could mark 
those with noinline as well.

> +static int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
> +{
> +	asm volatile(
> +			__asmeq("%0", "r0")
> +			__asmeq("%1", "r1")
> +			__asmeq("%2", "r2")
> +			__asmeq("%3", "r3")
> +			__HVC(0)
> +		: "+r" (function_id)
> +		: "r" (arg0), "r" (arg1), "r" (arg2));
> +
> +	return function_id;
> +}
> +
> +static int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
> +{
> +	asm volatile(
> +			__asmeq("%0", "r0")
> +			__asmeq("%1", "r1")
> +			__asmeq("%2", "r2")
> +			__asmeq("%3", "r3")
> +			__SMC(0)
> +		: "+r" (function_id)
> +		: "r" (arg0), "r" (arg1), "r" (arg2));
> +
> +	return function_id;
> +}
> +
> +static int psci_cpu_suspend(struct psci_power_state state,
> +			    unsigned long entry_point)
> +{
> +	int err;
> +	u32 fn, power_state;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> +	power_state = psci_power_state_pack(state);
> +	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);

Why do you need the u32 cast here?

> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_cpu_off(struct psci_power_state state)
> +{
> +	int err;
> +	u32 fn, power_state;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_OFF];
> +	power_state = psci_power_state_pack(state);
> +	err = invoke_psci_fn(fn, power_state, 0, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
> +{
> +	int err;
> +	u32 fn;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_ON];
> +	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_migrate(unsigned long cpuid)
> +{
> +	int err;
> +	u32 fn;
> +
> +	fn = psci_function_id[PSCI_FN_MIGRATE];
> +	err = invoke_psci_fn(fn, cpuid, 0, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static const struct of_device_id psci_of_match[] __initconst = {
> +	{ .compatible = "arm,psci",	},
> +	{},
> +};
> +
> +static int __init psci_init(void)
> +{
> +	struct device_node *np;
> +	const char *method;
> +	u32 base, id;
> +
> +	np = of_find_matching_node(NULL, psci_of_match);
> +	if (!np)
> +		return 0;
> +
> +	pr_info("probing function IDs from device-tree\n");

Having "probing function IDs from device-tree" in the middle of a kernel 
log isn't very informative.  Better make this more useful or remove it.

> +
> +	if (of_property_read_u32(np, "function-base", &base)) {
> +		pr_warning("missing \"function-base\" property\n");

Same thing here: this lacks context in a kernel log.
And so on for the other occurrences.


Nicolas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 20:51:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 20:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkhex-0004dW-BD; Mon, 17 Dec 2012 20:51:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1Tkhev-0004dR-Js
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 20:51:33 +0000
Received: from [85.158.138.51:51487] by server-12.bemta-3.messagelabs.com id
	06/1A-27559-4D58FC05; Mon, 17 Dec 2012 20:51:32 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-5.tower-174.messagelabs.com!1355777491!29293628!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30249 invoked from network); 17 Dec 2012 20:51:32 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 20:51:32 -0000
Received: by mail-vc0-f173.google.com with SMTP id f13so7968480vcb.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 12:51:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:in-reply-to:message-id:references
	:user-agent:mime-version:content-type:x-gm-message-state;
	bh=3lcXGjS9CKwYJ22nc/sQYRmA6rVDX8iyP2hqTlSB+k4=;
	b=ZgK81gER2ZXrbfniy2djRzA983xXTJ4OFIo6EwwSxYy+Ph1DpAu8r8QFPzdPTRKhr1
	0cBCYOoHChGk9s5Prg5I0+jS6BJ3oOra3DsT0j8H9K0yUzZmjEV40rDXO/ojtkfrHOyg
	qyPgpumxOeRavLggyH9SyYj8y8ujGbIHGQ1IQhTisoJbeWAIw4lKiKqkan6/5+dgMtMF
	tI+Y4qr1tAyWBTKjWtquLU+kGDVkYMKuayw0jfx6b083s6mb8J0zLy6ymO/MXg+Uo+kn
	h7V+aNVJcGGGuTgI+ZOKYeUFpWUGyOpbpYjYPUiDXfSR2hu5L5G1fSyVQ4uWM7bIs1cG
	419g==
Received: by 10.58.247.132 with SMTP id ye4mr26557151vec.9.1355777490652;
	Mon, 17 Dec 2012 12:51:30 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id n10sm12737398vde.9.2012.12.17.12.51.28
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 12:51:29 -0800 (PST)
Date: Mon, 17 Dec 2012 15:51:27 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355762141-29616-5-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212171537230.1263@xanadu.home>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-5-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQmCM/D/4PJVj27U9DbKqz21CIypqprS12KhKNEZdgwBIoUM4X48XceLkU2idJTHUfGXoiOu
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc.Zyngier@arm.com,
	xen-devel@lists.xen.org, robherring2@gmail.com,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 4/6] ARM: psci: add support for PSCI
 invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Will Deacon wrote:

> This patch adds support for the Power State Coordination Interface
> defined by ARM, allowing Linux to request CPU-centric power-management
> operations from firmware implementing the PSCI protocol.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

[...]

> +/*
> + * The following two functions are invoked via the invoke_psci_fn pointer
> + * and will not be inlined, allowing us to piggyback on the AAPCS.
> + */

To make sure the code is always in sync with the intent, you could mark 
those with  noinline as well.

> +static int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
> +{
> +	asm volatile(
> +			__asmeq("%0", "r0")
> +			__asmeq("%1", "r1")
> +			__asmeq("%2", "r2")
> +			__asmeq("%3", "r3")
> +			__HVC(0)
> +		: "+r" (function_id)
> +		: "r" (arg0), "r" (arg1), "r" (arg2));
> +
> +	return function_id;
> +}
> +
> +static int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
> +{
> +	asm volatile(
> +			__asmeq("%0", "r0")
> +			__asmeq("%1", "r1")
> +			__asmeq("%2", "r2")
> +			__asmeq("%3", "r3")
> +			__SMC(0)
> +		: "+r" (function_id)
> +		: "r" (arg0), "r" (arg1), "r" (arg2));
> +
> +	return function_id;
> +}
> +
> +static int psci_cpu_suspend(struct psci_power_state state,
> +			    unsigned long entry_point)
> +{
> +	int err;
> +	u32 fn, power_state;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> +	power_state = psci_power_state_pack(state);
> +	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);

Why do you need the u32 cast here?

> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_cpu_off(struct psci_power_state state)
> +{
> +	int err;
> +	u32 fn, power_state;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_OFF];
> +	power_state = psci_power_state_pack(state);
> +	err = invoke_psci_fn(fn, power_state, 0, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
> +{
> +	int err;
> +	u32 fn;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_ON];
> +	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_migrate(unsigned long cpuid)
> +{
> +	int err;
> +	u32 fn;
> +
> +	fn = psci_function_id[PSCI_FN_MIGRATE];
> +	err = invoke_psci_fn(fn, cpuid, 0, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static const struct of_device_id psci_of_match[] __initconst = {
> +	{ .compatible = "arm,psci",	},
> +	{},
> +};
> +
> +static int __init psci_init(void)
> +{
> +	struct device_node *np;
> +	const char *method;
> +	u32 base, id;
> +
> +	np = of_find_matching_node(NULL, psci_of_match);
> +	if (!np)
> +		return 0;
> +
> +	pr_info("probing function IDs from device-tree\n");

Having "probing function IDs from device-tree" in the middle of a kernel 
log isn't very informative, since nothing identifies which subsystem it 
came from.  Better to make the message more useful or remove it.

> +
> +	if (of_property_read_u32(np, "function-base", &base)) {
> +		pr_warning("missing \"function-base\" property\n");

Same thing here: this lacks context in a kernel log.
And so on for the other occurrences.


Nicolas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 21:13:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 21:13:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkhzg-0004xu-Hp; Mon, 17 Dec 2012 21:13:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tkhze-0004xp-Ij
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 21:12:58 +0000
Received: from [193.109.254.147:47702] by server-11.bemta-14.messagelabs.com
	id 9A/2C-02659-9DA8FC05; Mon, 17 Dec 2012 21:12:57 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355778775!10334524!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjYyNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19242 invoked from network); 17 Dec 2012 21:12:56 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Dec 2012 21:12:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBHLCnwU019584
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Dec 2012 21:12:49 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBHLCmkv011077
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Dec 2012 21:12:48 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBHLClB4018641; Mon, 17 Dec 2012 15:12:47 -0600
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Dec 2012 13:12:47 -0800
Date: Mon, 17 Dec 2012 16:12:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, fenghua.yu@intel.com
Message-ID: <20121217211240.GA13412@konrad-lan.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
	<1782021524.20121217213217@eikelenboom.it>
	<20121217204634.GA13181@konrad-lan.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121217204634.GA13181@konrad-lan.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: hpa@linux.intel.com, linux-kernel@vger.kernel.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
 dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 03:46:34PM -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Dec 17, 2012 at 09:32:17PM +0100, Sander Eikelenboom wrote:
> > 
> > Sunday, December 16, 2012, 6:38:24 PM, you wrote:
> > 
> > > On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
> > >> Hi Konrad,
> > >> 
> > >> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.
> > 
> > > Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> > > vacation during that time (just got back).
> > 
> > > Did you by any chance try to do a git bisect to narrow down
> > > which merge it was?
> > 
> > Hi Konrad,
> 
> Hey Sander,
> 
> Thank you for doing the bisection.
> 
> Fenghua - any ideas what might be amiss in the Xen subsystem?
> I hadn't looked at the patchset of the CPU0 offlining/onlining
> so I am not completely up to speed on the particulars of the patches.

> > 30106c174311b8cfaaa3186c7f6f9c36c62d17da is the first bad commit
> > commit 30106c174311b8cfaaa3186c7f6f9c36c62d17da
> > Author: Fenghua Yu <fenghua.yu@intel.com>
> > Date:   Tue Nov 13 11:32:41 2012 -0800
> > 
> >     x86, hotplug: Support functions for CPU0 online/offline
> > 
> >     Add smp_store_boot_cpu_info() to store cpu info for BSP during boot time.
> > 
> >     Now smp_store_cpu_info() stores cpu info for bringing up BSP or AP after
> >     it's offline.
> > 
> >     Continue to online CPU0 in native_cpu_up().
> > 
> >     Continue to offline CPU0 in native_cpu_disable().
> > 
> >     Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
> >     Link: http://lkml.kernel.org/r/1352835171-3958-5-git-send-email-fenghua.yu@intel.com
> >     Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
> > 

This patch:


diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 353c50f..4f7d259 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -254,7 +254,7 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
 	}
 	xen_init_lock_cpu(0);
 
-	smp_store_cpu_info(0);
+	smp_store_boot_cpu_info();
 	cpu_data(0).x86_max_cores = 1;
 
 	for_each_possible_cpu(i) {

would make the corresponding change in the Xen subsystem that the above
mentioned commit made for native code. Perhaps that is all that is needed?
I will be able to test this and look into it in more detail tomorrow.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 21:24:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 21:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkiA0-00059s-P4; Mon, 17 Dec 2012 21:23:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Tki9z-00059n-6U
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 21:23:39 +0000
Received: from [85.158.143.99:47657] by server-2.bemta-4.messagelabs.com id
	ED/CA-30861-A5D8FC05; Mon, 17 Dec 2012 21:23:38 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355779416!29757772!1
X-Originating-IP: [209.85.216.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7216 invoked from network); 17 Dec 2012 21:23:37 -0000
Received: from mail-qa0-f43.google.com (HELO mail-qa0-f43.google.com)
	(209.85.216.43)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 21:23:37 -0000
Received: by mail-qa0-f43.google.com with SMTP id cr7so2788940qab.9
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 13:23:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition
	:content-transfer-encoding:in-reply-to:user-agent;
	bh=yjDyI6C3h3gsAYg7FPd3bFLZxt2LIsrHXWggo21iNrI=;
	b=BHbP8UVevTEXmur+MMEWqTThGZOrvYeEOaIJ0bqAjfyndZJBMSSR3nosPQZqbiSunv
	CFbv5RAqMgME/IOtJJKTtR77KdTGl+35W67AcsFEqwPQO1Dxdt/xxbNy+1tK5ZUeSar/
	Ui3Q0yDGoumBMly2RP6oRuwt1WGv9ZbinDUoidVOCqxXOHp/eE8eYRHVKiUO7roVXS76
	Qq3FwB/pWaKFel/4ypHlc4Nf8JjUQOTruBGSIZ1CcAOkZCDlUt/N0PG+Z8pAgq1/Nljz
	9M6XhhByDnM45jb2e906sMsJzzzDckWHx3I5evB59mMvZ3cbJ1L+mCfrV2/5tw7ptTXX
	rZuQ==
X-Received: by 10.224.72.197 with SMTP id n5mr92577qaj.38.1355779416681;
	Mon, 17 Dec 2012 13:23:36 -0800 (PST)
Received: from konrad-lan.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id ou3sm2443393qeb.0.2012.12.17.13.23.35
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 13:23:36 -0800 (PST)
Date: Mon, 17 Dec 2012 16:23:33 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Pasi Kärkkäinen <pasik@iki.fi>,
	james.harper@bendigoit.com.au
Message-ID: <20121217212332.GB13841@konrad-lan.dumpdata.com>
References: <1355650190976-5713082.post@n5.nabble.com>
	<20121216124602.GJ8912@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121216124602.GJ8912@reaktio.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, "Liu.yi" <liu.yi24@zte.com.cn>
Subject: Re: [Xen-devel] xm save -c problem with GPLPV drivers and winxp
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Dec 16, 2012 at 02:46:02PM +0200, Pasi Kärkkäinen wrote:
> On Sun, Dec 16, 2012 at 01:29:51AM -0800, Liu.yi wrote:
> > hi,all
> 
> Hello,
> 
> I added the important keywords to the subject..
> 
> -- Pasi
> 
> >     I have a winxpsp2 vm with the gplpv driver installed. The problem is that
> > when I execute "xm save -c winxpsp2 save.bin", the command is executed
> > successfully but the vm hangs after issuing a disk operation.
> >     After installing the gplpv debug driver, from the qemu log I found that vbd
> > resume failed; after a while windows issues a disk reset command, then the vm
> > hangs.
> >     xm debug-keys shows the following before "xm save -c":
> > (XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
> > (XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
> > (XEN)        3 [0/1]: s=2 n=0 d=0 x=0
> > (XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
> > (XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
> > (XEN)        6 [0/0]: s=3 n=0 d=0 p=73 x=0     // vbd-768 event-channel
> > (XEN)        7 [0/0]: s=3 n=0 d=0 p=74 x=0     // vif-0 event-channel
> >     xm debug-keys shows the following after "xm save -c":
> > (XEN)        1 [0/1]: s=3 n=0 d=0 p=72 x=1
> > (XEN)        2 [0/0]: s=3 n=0 d=0 p=71 x=0
> > (XEN)        3 [0/1]: s=2 n=0 d=0 x=0
> > (XEN)        4 [0/0]: s=6 n=0 x=0                   // pdo_event_channel
> > (XEN)        5 [0/0]: s=2 n=0 d=0 x=0             // suspend_evtchn
> > (XEN)        6 [0/0]: s=6 n=0 x=0                   // new pdo_event_channel
> > (XEN)        7 [0/0]: s=2 n=0 d=0 x=0             // new suspend_evtchn
> > (XEN)        8 [0/0]: s=3 n=0 d=0 p=? x=0       // new vbd-768 event-channel
> >                                                               // vif-0
> > resume doesn't start at all because resume hangs at vbd
> 
> >     The blkback and blkif drivers seem to free their event channels in
> > unbind_from_irqhandler when suspending, so when the gplpv driver resumes it
> > starts allocating event channels from 6. The strange thing is that the gplpv
> > driver allocates new pdo_event_channel and suspend_evtchn channels (6 and 7),
> > while the previous ones remain active.
> >     I tried to unbind the old pdo_event_channel and suspend_evtchn, but then
> > suspending the vm hangs. "xm save -c" works if I reuse the old
> > pdo_event_channel and suspend_evtchn as follows:
> 
> >   in evtchn.c:EvtChn_Init()
> >       KeInitializeEvent(&xpdd->pdo_suspend_event, SynchronizationEvent,
> > FALSE);
> >       if (xpdd->pdo_event_channel == 0) {
> >           KdPrint((__DRIVER_NAME "     create new pdo_event_channel\n"));
> >           xpdd->pdo_event_channel = EvtChn_AllocIpi(xpdd, 0);
> >       }
> >   in xenpci_fdo.c:XenPci_ConnectSuspendEvt()
> >       if (xpdd->suspend_evtchn == 0) {
> >           xpdd->suspend_evtchn = EvtChn_AllocUnbound(xpdd, 0);
> >           KdPrint((__DRIVER_NAME "     create new suspend event
> > channel\n"));
> >       }
> 
> >     The qemu log shows vbd and vif resuming successfully, and the vm runs
> > fine.
> >     I'm not sure about these modifications to the gplpv windows driver. I
> > will be grateful for any suggestions, thanks.

CC-ing the author of the driver (James).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 21:36:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 21:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkiM8-0005Kl-1T; Mon, 17 Dec 2012 21:36:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TkiM6-0005Kg-PR
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 21:36:10 +0000
Received: from [85.158.143.35:48754] by server-1.bemta-4.messagelabs.com id
	AA/43-28401-A409FC05; Mon, 17 Dec 2012 21:36:10 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355780161!4884784!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24656 invoked from network); 17 Dec 2012 21:36:02 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-5.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	17 Dec 2012 21:36:02 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:64070 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TkiQ9-0006BL-W1; Mon, 17 Dec 2012 22:40:22 +0100
Date: Mon, 17 Dec 2012 22:35:58 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <29163327.20121217223558@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20121217211240.GA13412@konrad-lan.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
	<1782021524.20121217213217@eikelenboom.it>
	<20121217204634.GA13181@konrad-lan.dumpdata.com>
	<20121217211240.GA13412@konrad-lan.dumpdata.com>
MIME-Version: 1.0
Cc: fenghua.yu@intel.com, hpa@linux.intel.com, linux-kernel@vger.kernel.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
	dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 17, 2012, 10:12:40 PM, you wrote:

> On Mon, Dec 17, 2012 at 03:46:34PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Mon, Dec 17, 2012 at 09:32:17PM +0100, Sander Eikelenboom wrote:
>> > 
>> > Sunday, December 16, 2012, 6:38:24 PM, you wrote:
>> > 
>> > > On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
>> > >> Hi Konrad,
>> > >> 
>> > >> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.
>> > 
>> > > Yeah, saw it over the Dec 11->Dec 12 merges and was out on
>> > > vacation during that time (just got back).
>> > 
>> > > Did you by any chance try to do a git bisect to narrow down
>> > > which merge it was?
>> > 
>> > Hi Konrad,
>> 
>> Hey Sander,
>> 
>> Thank you for doing the bisection.
>> 
>> Fenghua - any ideas what might be amiss in the Xen subsystem?
>> I hadn't looked at the patchset of the CPU0 offlining/onlining
>> so I am not completely up to speed on the particulars of the patches.

>> > 30106c174311b8cfaaa3186c7f6f9c36c62d17da is the first bad commit
>> > commit 30106c174311b8cfaaa3186c7f6f9c36c62d17da
>> > Author: Fenghua Yu <fenghua.yu@intel.com>
>> > Date:   Tue Nov 13 11:32:41 2012 -0800
>> > 
>> >     x86, hotplug: Support functions for CPU0 online/offline
>> > 
>> >     Add smp_store_boot_cpu_info() to store cpu info for BSP during boot time.
>> > 
>> >     Now smp_store_cpu_info() stores cpu info for bringing up BSP or AP after
>> >     it's offline.
>> > 
>> >     Continue to online CPU0 in native_cpu_up().
>> > 
>> >     Continue to offline CPU0 in native_cpu_disable().
>> > 
>> >     Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
>> >     Link: http://lkml.kernel.org/r/1352835171-3958-5-git-send-email-fenghua.yu@intel.com
>> >     Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
>> > 

> This patch:


> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 353c50f..4f7d259 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -254,7 +254,7 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
>         }
>         xen_init_lock_cpu(0);
>  
> -       smp_store_cpu_info(0);
> +       smp_store_boot_cpu_info();
>         cpu_data(0).x86_max_cores = 1;
>  
>         for_each_possible_cpu(i) {

> Would make the corresponding change in the Xen subsystem that the
> above-mentioned commit made. Perhaps that is all that is needed? I will
> be able to test this and look into the details tomorrow.

Seems like it; I don't know if there are other things still lurking, but with your patch it boots again as dom0 :-)

Thx !

--
Sander



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 21:46:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 21:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkiVd-0005VV-U5; Mon, 17 Dec 2012 21:46:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1TkiVb-0005VQ-BN
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 21:45:59 +0000
Received: from [85.158.139.83:41916] by server-1.bemta-5.messagelabs.com id
	92/8C-12813-6929FC05; Mon, 17 Dec 2012 21:45:58 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355780756!28029862!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27180 invoked from network); 17 Dec 2012 21:45:57 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 21:45:57 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so7896848vbi.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 13:45:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:in-reply-to:message-id:references
	:user-agent:mime-version:content-type:x-gm-message-state;
	bh=RD294g5EgVmeDrrKrpvO/HtaGEI4vYLWY+LQVtmUQT8=;
	b=MMg5fY1JGEph3OgIGQNTYCwzAmrfMqSSL5kNle3CByqnbpvQcIihaVB8ExxxioIjQc
	QuFxp/MlFCqenTRkb7E02ScXXmfNXmbG37mfSixBQ/J0jI+y3EmUAUD/YtysF4Sbjit9
	6UBPCLB3E9orrtD/CdWsVQXxhDdLbc83DUIx3FI6A4xhRvptOJvU1IbbjMx4fRSQT2gJ
	mD2PUz9YYFUjk3ALKPGiCBW1GOA6bMGx7qO2DRbaBLxpOY/GMBWYSb+sG5iKMnx4xDSb
	YOSvKsKdz0C61JiZYwRzBAXPo8WjujFHTZPylk3GXW3jipRt7VC0cC/IJnbjm2E6KXV4
	U2XQ==
Received: by 10.52.20.238 with SMTP id q14mr22311530vde.120.1355780755591;
	Mon, 17 Dec 2012 13:45:55 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id cv19sm12827815vdb.5.2012.12.17.13.45.53
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 13:45:54 -0800 (PST)
Date: Mon, 17 Dec 2012 16:45:52 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355762141-29616-7-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212171555570.1263@xanadu.home>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-7-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQn9qL3SkzaDqIi1jBIPue1NlykfPvw7G1ImG0GK7ZmAI3T8jjBj/AQtoPoPfSzet4W9qYeQ
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc.Zyngier@arm.com,
	xen-devel@lists.xen.org, robherring2@gmail.com,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 6/6] ARM: mach-virt: add SMP support
	using PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Will Deacon wrote:

> This patch adds support for SMP to mach-virt using the PSCI
> infrastructure.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm/mach-virt/Kconfig   |  1 +
>  arch/arm/mach-virt/Makefile  |  1 +
>  arch/arm/mach-virt/platsmp.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
>  arch/arm/mach-virt/virt.c    |  6 ++++
>  4 files changed, 84 insertions(+)
>  create mode 100644 arch/arm/mach-virt/platsmp.c
> 
> diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
> index a568a2a..8958f0d 100644
> --- a/arch/arm/mach-virt/Kconfig
> +++ b/arch/arm/mach-virt/Kconfig
> @@ -3,6 +3,7 @@ config ARCH_VIRT
>  	select ARCH_WANT_OPTIONAL_GPIOLIB
>  	select ARM_GIC
>  	select ARM_ARCH_TIMER
> +	select ARM_PSCI
>  	select HAVE_SMP
>  	select CPU_V7
>  	select SPARSE_IRQ
> diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
> index 7ddbfa6..042afc1 100644
> --- a/arch/arm/mach-virt/Makefile
> +++ b/arch/arm/mach-virt/Makefile
> @@ -3,3 +3,4 @@
>  #
>  
>  obj-y					:= virt.o
> +obj-$(CONFIG_SMP)			+= platsmp.o
> diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
> new file mode 100644
> index 0000000..930362b
> --- /dev/null
> +++ b/arch/arm/mach-virt/platsmp.c
> @@ -0,0 +1,76 @@
> +/*
> + * Dummy Virtual Machine - does what it says on the tin.
> + *
> + * Copyright (C) 2012 ARM Ltd
> + * Author: Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/smp.h>
> +#include <linux/of.h>
> +
> +#include <asm/psci.h>
> +#include <asm/smp_plat.h>
> +#include <asm/hardware/gic.h>
> +
> +extern void secondary_startup(void);
> +
> +/*
> + * Enumerate the possible CPU set from the device tree.
> + */
> +static void __init virt_smp_init_cpus(void)
> +{
> +	struct device_node *dn = NULL;
> +	int cpu = 0;
> +
> +	while ((dn = of_find_node_by_type(dn, "cpu"))) {
> +		if (cpu < NR_CPUS)
> +			set_cpu_possible(cpu, true);
> +		cpu++;
> +	}
> +
> +	/* sanity check */
> +	if (cpu > NR_CPUS)
> +		pr_warning("no. of cores (%d) greater than configured maximum "
> +			   "of %d - clipping\n",
> +			   cpu, NR_CPUS);

Since commit 5587164eea you shouldn't need any of the above.

> +	set_smp_cross_call(gic_raise_softirq);
> +}
> +
> +static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
> +{
> +}
> +
> +static int __cpuinit virt_boot_secondary(unsigned int cpu,
> +					 struct task_struct *idle)
> +{
> +	if (psci_ops.cpu_on)
> +		return psci_ops.cpu_on(cpu_logical_map(cpu),
> +				       __pa(secondary_startup));
> +	return -ENODEV;
> +}
> +
> +static void __cpuinit virt_secondary_init(unsigned int cpu)
> +{
> +	gic_secondary_init(0);
> +}
> +
> +struct smp_operations __initdata virt_smp_ops = {
> +	.smp_init_cpus		= virt_smp_init_cpus,
> +	.smp_prepare_cpus	= virt_smp_prepare_cpus,
> +	.smp_secondary_init	= virt_secondary_init,
> +	.smp_boot_secondary	= virt_boot_secondary,
> +};
> diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
> index 174b9da..d764835 100644
> --- a/arch/arm/mach-virt/virt.c
> +++ b/arch/arm/mach-virt/virt.c
> @@ -20,6 +20,7 @@
>  
>  #include <linux/of_irq.h>
>  #include <linux/of_platform.h>
> +#include <linux/smp.h>
>  
>  #include <asm/arch_timer.h>
>  #include <asm/hardware/gic.h>
> @@ -56,10 +57,15 @@ static struct sys_timer virt_timer = {
>  	.init = virt_timer_init,
>  };
>  
> +#ifdef CONFIG_SMP
> +extern struct smp_operations virt_smp_ops;
> +#endif

You don't need to guard the prototype declaration with #ifdef here, unless
your goal was to define a dummy virt_smp_ops when CONFIG_SMP is not selected?
Otherwise the reference below would break compilation.

> +
>  DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
>  	.init_irq	= gic_init_irq,
>  	.handle_irq     = gic_handle_irq,
>  	.timer		= &virt_timer,
>  	.init_machine	= virt_init,
> +	.smp		= smp_ops(virt_smp_ops),
>  	.dt_compat	= virt_dt_match,
>  MACHINE_END
> -- 
> 1.8.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:35:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjHP-0005sY-65; Mon, 17 Dec 2012 22:35:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjHN-0005sT-60
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:35:21 +0000
Received: from [85.158.139.211:7908] by server-14.bemta-5.messagelabs.com id
	F1/CF-09538-82E9FC05; Mon, 17 Dec 2012 22:35:20 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355783719!18339614!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 329 invoked from network); 17 Dec 2012 22:35:19 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:19 -0000
Received: by mail-wi0-f182.google.com with SMTP id hn14so2327398wib.3
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:message-id:user-agent:date:from
	:to:cc; bh=sPZqQhmS4ic8kkR8JOcHNm8TGtLeAEy3Z1mhq3yW6Vc=;
	b=ZC3p3mPSKwotkUOls6FaLp5dx2p5Sebm5VyNeoyrRAPvG7bjyhTltkaMWojpSTkTZb
	I5r6g3UmgFtiKJBQFTuchmXg9xnKb9yPypFQo+R/P7S/Wfpih8rFBCwFpF2+/ydnYyC2
	W0n4Uxps3q5zjxdImcTtZ6LnsHJQ1LI6JwFWHMmAyUdUE9Z8PQZI2+n+NZcJz57iHwyQ
	2rxLAwszZtk7TvZfOg+s4CwHwfupDiHuqw3/8ZaWuVO8+TqQdx5KBj0mH5Lh+p0CsFsf
	QYeADuQW1JQaBGbDzdXShpQJLfC7U16I857+0/YefxqkdTAzel92CZy1A8MdZLsqnWRO
	hi0Q==
X-Received: by 10.180.20.109 with SMTP id m13mr320737wie.16.1355783719012;
	Mon, 17 Dec 2012 14:35:19 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.17
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:17 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:28:57 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 5 v3] xen: sched_credit: fix picking and
 tickling and add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Here's take 3 of this series (last round here:
 http://comments.gmane.org/gmane.comp.emulators.xen.devel/145998).

Super quickly, this is about fixing a couple of anomalies in the credit
scheduler and adding some tracing to it. All the comments raised during v2's
review have been addressed.

Quick summary of the series (* = Acked):

   1/5 xen: sched_credit: define and use curr_on_cpu(cpu)
   2/5 xen: sched_credit: improve picking up the idle CPU for a VCPU
 * 3/5 xen: sched_credit: improve tickling of idle CPUs
 * 4/5 xen: tracing: introduce per-scheduler trace event IDs
   5/5 xen: sched_credit: add some tracing

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:35:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjHP-0005sY-65; Mon, 17 Dec 2012 22:35:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjHN-0005sT-60
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:35:21 +0000
Received: from [85.158.139.211:7908] by server-14.bemta-5.messagelabs.com id
	F1/CF-09538-82E9FC05; Mon, 17 Dec 2012 22:35:20 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355783719!18339614!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 329 invoked from network); 17 Dec 2012 22:35:19 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:19 -0000
Received: by mail-wi0-f182.google.com with SMTP id hn14so2327398wib.3
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:message-id:user-agent:date:from
	:to:cc; bh=sPZqQhmS4ic8kkR8JOcHNm8TGtLeAEy3Z1mhq3yW6Vc=;
	b=ZC3p3mPSKwotkUOls6FaLp5dx2p5Sebm5VyNeoyrRAPvG7bjyhTltkaMWojpSTkTZb
	I5r6g3UmgFtiKJBQFTuchmXg9xnKb9yPypFQo+R/P7S/Wfpih8rFBCwFpF2+/ydnYyC2
	W0n4Uxps3q5zjxdImcTtZ6LnsHJQ1LI6JwFWHMmAyUdUE9Z8PQZI2+n+NZcJz57iHwyQ
	2rxLAwszZtk7TvZfOg+s4CwHwfupDiHuqw3/8ZaWuVO8+TqQdx5KBj0mH5Lh+p0CsFsf
	QYeADuQW1JQaBGbDzdXShpQJLfC7U16I857+0/YefxqkdTAzel92CZy1A8MdZLsqnWRO
	hi0Q==
X-Received: by 10.180.20.109 with SMTP id m13mr320737wie.16.1355783719012;
	Mon, 17 Dec 2012 14:35:19 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.17
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:17 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:28:57 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 0 of 5 v3] xen: sched_credit: fix picking and
 tickling and add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Here's take 3 of this series (last round here:
 http://comments.gmane.org/gmane.comp.emulators.xen.devel/145998).

Super quickly: this series fixes a couple of anomalies in the credit
scheduler and adds some tracing to it. All the comments raised during the
v2 review have been addressed.

Quick summary of the series (* = Acked):

   1/5 xen: sched_credit: define and use curr_on_cpu(cpu)
   2/5 xen: sched_credit: improve picking up the idle CPU for a VCPU
 * 3/5 xen: sched_credit: improve tickling of idle CPUs
 * 4/5 xen: tracing: introduce per-scheduler trace event IDs
   5/5 xen: sched_credit: add some tracing

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:35:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjHV-0005tC-VO; Mon, 17 Dec 2012 22:35:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjHU-0005sz-Qj
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:35:29 +0000
Received: from [85.158.139.211:31938] by server-14.bemta-5.messagelabs.com id
	11/DF-09538-03E9FC05; Mon, 17 Dec 2012 22:35:28 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355783724!19080772!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7610 invoked from network); 17 Dec 2012 22:35:24 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:24 -0000
Received: by mail-we0-f171.google.com with SMTP id u3so3178312wey.30
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=050vrvPr9kLYhuKGCRCfIU6trWRN+yc004+AmjWwg+A=;
	b=qTA5kylb48YVfkchhrr4tGbcByIdgo6kbcALF5BPjxoh+RNEXftzDUxXvjz1LiZDB+
	KU0oegCLSKh7hGkRC52q+4ZiyzJECuRPeiPCh4ho9h8vH5UZ3cQX0sgLBsZc2bZeXa1N
	wJa4tzHbiOX0rlyKW9filx1pDvmEUerkcPOgNPOWojLkgPXo0dvNMniVVgr11nYgpUiP
	FPry67MYcl7Xd2kkUJkq4BRdyfkIPFnu/nug+Ghwme0YGjVzlfdykmgldNLJYEOYPOra
	H3oXmsG5All0uT3XdyyrEwaxmSsMrKIYPnxKqbS7mg9SRf2RicZCi56K90T5AFyW3URD
	h+NA==
Received: by 10.194.23.37 with SMTP id j5mr20080598wjf.28.1355783724001;
	Mon, 17 Dec 2012 14:35:24 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.22
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:23 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: d39b0c56342d16a0a0e58ab773d10fff99280ad4
Message-Id: <d39b0c56342d16a0a0e5.1355783340@Solace>
In-Reply-To: <patchbomb.1355783337@Solace>
References: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:29:00 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3 of 5 v3] xen: sched_credit: improve tickling
	of idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Right now, when a VCPU wakes up, we check whether it should preempt
what is running on the PCPU, and whether or not the waking VCPU can
be migrated (by tickling some idlers). However, this can result in
suboptimal or even wrong behaviour, as explained here:

 http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html

This change, instead, when deciding which PCPU(s) to tickle upon
VCPU wake-up, considers both what is likely to happen on the PCPU
where the wakeup occurs, and whether or not there are idlers where
the woken-up VCPU can run. In fact, if there are, we can avoid
interrupting the running VCPU. Only if there are no such PCPUs
are preemption and migration the way to go.
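
For readability, here is a tiny, self-contained model of the decision tree
described above. The names and the flattened boolean parameters are made up
for illustration only; the real code in the patch works on cpumasks and
csched_vcpu priorities:

```c
#include <stdbool.h>

/* Hypothetical stand-in for the tickling logic: each condition that the
 * real code derives from cpumasks and priorities is a plain boolean here. */
enum tickle_action { RUN_HERE, WAKE_IDLER, MIGRATE_CUR_AWAY, NO_TICKLE };

static enum tickle_action
tickle_decision(bool pcpu_idle, bool idlers_exist,
                bool suitable_idler_for_new, bool new_pri_higher)
{
    /* The pcpu is idle, or nobody else can help and new beats cur. */
    if ( pcpu_idle || (!idlers_exist && new_pri_higher) )
        return RUN_HERE;

    /* Prefer a suitable idler: cur keeps running (it is, likely, cache-hot). */
    if ( idlers_exist && suitable_idler_for_new )
        return WAKE_IDLER;

    /* No suitable idler for new: run new here, ask to migrate cur away. */
    if ( idlers_exist && new_pri_higher )
        return MIGRATE_CUR_AWAY;

    return NO_TICKLE;
}
```

The actual implementation additionally cranks the perf counters and builds
the cpumask of CPUs on which to raise the scheduler softirq.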

This has been tested (on top of the previous change) by running
the following benchmarks inside 2, 6 and 10 VMs, concurrently, on
a shared host, each VM with 2 VCPUs and 960 MB of memory (the host
had 16 ways and 12 GB of RAM).

1) All VMs had 'cpus="all"' in their config file.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 50.078467 +/- 1.6676162 | 49.673667 +/- 0.0094321 |
 | 6   | 63.259472 +/- 0.1137586 | 61.680011 +/- 1.0208723 |
 | 10  | 91.246797 +/- 0.1154008 | 90.396720 +/- 1.5900423 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 485.56333 +/- 6.0527356 | 487.83167 +/- 0.7602850 |
 | 6   | 401.36278 +/- 1.9745916 | 409.96778 +/- 3.6761092 |
 | 10  | 294.43933 +/- 0.8064945 | 302.49033 +/- 0.2343978 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 43150.63 +/- 1359.5616  | 43275.427 +/- 606.28185 |
 | 6   | 29274.29 +/- 1024.4042  | 29716.189 +/- 1290.1878 |
 | 10  | 19061.28 +/- 512.88561  | 19192.599 +/- 605.66058 |


2) All VMs had their VCPUs statically pinned to the host's PCPUs.

$ sysbench --test=cpu ... (time, lower is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 47.8211   +/- 0.0215504 | 47.826900 +/- 0.0077872 |
 | 6   | 62.689122 +/- 0.0877173 | 62.764539 +/- 0.3882493 |
 | 10  | 90.321097 +/- 1.4803867 | 89.974570 +/- 1.1437566 |
$ sysbench --test=memory ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 550.97667 +/- 2.3512355 | 550.87000 +/- 0.8140792 |
 | 6   | 443.15000 +/- 5.7471797 | 454.01056 +/- 8.4373466 |
 | 10  | 313.89233 +/- 1.3237493 | 321.81167 +/- 0.3528418 |
$ specjbb2005 ... (throughput, higher is better)
 | VMs | w/o this change         | w/ this change          |
 | 2   | 49591.057 +/- 952.93384 | 49594.195 +/- 799.57976 |
 | 6   | 33538.247 +/- 1089.2115 | 33671.758 +/- 1077.6806 |
 | 10  | 21927.870 +/- 831.88742 | 21891.131 +/- 563.37929 |


The numbers show that the change has either no or very limited impact
(the specjbb2005 case) or, when it does have some impact, that it is
a real improvement in performance (the sysbench-memory case).

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
---
Changes from v1:
 * Rewritten as per George's suggestion, in order to improve readability.
 * Killed some of the stats, keeping only `tickle_idlers_none'
   and `tickle_idlers_some'. They don't make things look that terrible,
   and I think they could be useful.
 * The preemption+migration of the currently running VCPU has been turned
   into a migration request, instead of just tickling. I traced this
   some more, and it looks like that is the way to go. Tickling alone is
   not effective here, because the woken PCPU would expect cur to be out
   of the scheduler tail, which is likely false (cur->is_running is
   still set to 1).

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -134,6 +134,7 @@ struct csched_vcpu {
         uint32_t state_idle;
         uint32_t migrate_q;
         uint32_t migrate_r;
+        uint32_t kicked_away;
     } stats;
 #endif
 };
@@ -251,54 +252,67 @@ static inline void
 {
     struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
-    cpumask_t mask;
+    cpumask_t mask, idle_mask;
+    int idlers_empty;
 
     ASSERT(cur);
     cpumask_clear(&mask);
 
-    /* If strictly higher priority than current VCPU, signal the CPU */
-    if ( new->pri > cur->pri )
+    idlers_empty = cpumask_empty(prv->idlers);
+
+    /*
+     * If the pcpu is idle, or there are no idlers and the new
+     * vcpu is a higher priority than the old vcpu, run it here.
+     *
+     * If there are idle cpus, first try to find one suitable to run
+     * new, so we can avoid preempting cur.  If we cannot find a
+     * suitable idler on which to run new, run it here, but try to
+     * find a suitable idler on which to run cur instead.
+     */
+    if ( cur->pri == CSCHED_PRI_IDLE
+         || (idlers_empty && new->pri > cur->pri) )
     {
-        if ( cur->pri == CSCHED_PRI_IDLE )
-            SCHED_STAT_CRANK(tickle_local_idler);
-        else if ( cur->pri == CSCHED_PRI_TS_OVER )
-            SCHED_STAT_CRANK(tickle_local_over);
-        else if ( cur->pri == CSCHED_PRI_TS_UNDER )
-            SCHED_STAT_CRANK(tickle_local_under);
-        else
-            SCHED_STAT_CRANK(tickle_local_other);
-
+        if ( cur->pri != CSCHED_PRI_IDLE )
+            SCHED_STAT_CRANK(tickle_idlers_none);
         cpumask_set_cpu(cpu, &mask);
     }
+    else if ( !idlers_empty )
+    {
+        /* Check whether or not there are idlers that can run new */
+        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
 
-    /*
-     * If this CPU has at least two runnable VCPUs, we tickle any idlers to
-     * let them know there is runnable work in the system...
-     */
-    if ( cur->pri > CSCHED_PRI_IDLE )
-    {
-        if ( cpumask_empty(prv->idlers) )
+        /*
+         * If there are no suitable idlers for new, and it's higher
+         * priority than cur, ask the scheduler to migrate cur away.
+         * We have to act like this (instead of just waking some of
+         * the idlers suitable for cur) because cur is running.
+         *
+         * If there are suitable idlers for new, no matter priorities,
+         * leave cur alone (as it is running and is, likely, cache-hot)
+         * and wake some of them (which is waking up and so is, likely,
+         * cache cold anyway).
+         */
+        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
         {
             SCHED_STAT_CRANK(tickle_idlers_none);
+            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
+            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
+            SCHED_STAT_CRANK(migrate_kicked_away);
+            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+            cpumask_set_cpu(cpu, &mask);
         }
-        else
+        else if ( !cpumask_empty(&idle_mask) )
         {
-            cpumask_t idle_mask;
-
-            cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
-            if ( !cpumask_empty(&idle_mask) )
+            /* Which of the idlers suitable for new shall we wake up? */
+            SCHED_STAT_CRANK(tickle_idlers_some);
+            if ( opt_tickle_one_idle )
             {
-                SCHED_STAT_CRANK(tickle_idlers_some);
-                if ( opt_tickle_one_idle )
-                {
-                    this_cpu(last_tickle_cpu) = 
-                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
-                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
-                }
-                else
-                    cpumask_or(&mask, &mask, &idle_mask);
+                this_cpu(last_tickle_cpu) =
+                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
+                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
             }
-            cpumask_and(&mask, &mask, new->vcpu->cpu_affinity);
+            else
+                cpumask_or(&mask, &mask, &idle_mask);
         }
     }
 
@@ -1456,13 +1470,14 @@ csched_dump_vcpu(struct csched_vcpu *svc
     {
         printk(" credit=%i [w=%u]", atomic_read(&svc->credit), sdom->weight);
 #ifdef CSCHED_STATS
-        printk(" (%d+%u) {a/i=%u/%u m=%u+%u}",
+        printk(" (%d+%u) {a/i=%u/%u m=%u+%u (k=%u)}",
                 svc->stats.credit_last,
                 svc->stats.credit_incr,
                 svc->stats.state_active,
                 svc->stats.state_idle,
                 svc->stats.migrate_q,
-                svc->stats.migrate_r);
+                svc->stats.migrate_r,
+                svc->stats.kicked_away);
 #endif
     }
 
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -39,10 +39,6 @@ PERFCOUNTER(vcpu_wake_runnable,     "csc
 PERFCOUNTER(vcpu_wake_not_runnable, "csched: vcpu_wake_not_runnable")
 PERFCOUNTER(vcpu_park,              "csched: vcpu_park")
 PERFCOUNTER(vcpu_unpark,            "csched: vcpu_unpark")
-PERFCOUNTER(tickle_local_idler,     "csched: tickle_local_idler")
-PERFCOUNTER(tickle_local_over,      "csched: tickle_local_over")
-PERFCOUNTER(tickle_local_under,     "csched: tickle_local_under")
-PERFCOUNTER(tickle_local_other,     "csched: tickle_local_other")
 PERFCOUNTER(tickle_idlers_none,     "csched: tickle_idlers_none")
 PERFCOUNTER(tickle_idlers_some,     "csched: tickle_idlers_some")
 PERFCOUNTER(load_balance_idle,      "csched: load_balance_idle")
@@ -52,6 +48,7 @@ PERFCOUNTER(steal_trylock_failed,   "csc
 PERFCOUNTER(steal_peer_idle,        "csched: steal_peer_idle")
 PERFCOUNTER(migrate_queued,         "csched: migrate_queued")
 PERFCOUNTER(migrate_running,        "csched: migrate_running")
+PERFCOUNTER(migrate_kicked_away,    "csched: migrate_kicked_away")
 PERFCOUNTER(vcpu_hot,               "csched: vcpu_hot")
 
 PERFCOUNTER(need_flush_tlb_flush,   "PG_need_flush tlb flushes")

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:35:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjHS-0005sm-Ii; Mon, 17 Dec 2012 22:35:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjHR-0005sf-Nv
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:35:25 +0000
Received: from [85.158.143.35:42231] by server-3.bemta-4.messagelabs.com id
	19/22-18211-D2E9FC05; Mon, 17 Dec 2012 22:35:25 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355783720!16006161!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2499 invoked from network); 17 Dec 2012 22:35:20 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:20 -0000
Received: by mail-we0-f171.google.com with SMTP id u3so3178286wey.30
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=LZ5v79rq5HUQIF33fK/9yp1UhIidChxWSy+glXevD9M=;
	b=iWcnNZD5svZc2XEi4EcMmIwA+qA0ffaT5mNnF46gtTkaa7m1hGdb2dFGPuhr85uqrE
	gz6cjhthg6igGfOHgzWToOyvvd4yuZB0G4wrUAVffdYKGZgoD0kB/SVIsknfeLagzHyM
	vHpGacIGn0xRChGehzm2SFVnXQJJVP8GRifugwgiHP6espQdMYn2yVYzIoFDKzHLlOKP
	KJoaT2uAYvfcgDrq/vkrY6SX/LN88N96d404JH/m54Edj2MmWToiw1Q/vJvahmxADVof
	tBjYF5EtfQJysr6RBU61zvssHIY1iLzSg+wiES0AyyochWeaMpaE3so61Wo7AVogP2UJ
	vn6g==
Received: by 10.180.101.99 with SMTP id ff3mr299241wib.21.1355783720449;
	Mon, 17 Dec 2012 14:35:20 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.19
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:19 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 290dfdd1cbe5f3e1c2e00cde08792d4009a8fec0
Message-Id: <290dfdd1cbe5f3e1c2e0.1355783338@Solace>
In-Reply-To: <patchbomb.1355783337@Solace>
References: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:28:58 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1 of 5 v3] xen: sched_credit: define and use
	curr_on_cpu(cpu)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce curr_on_cpu(cpu) to fetch `per_cpu(schedule_data,cpu).curr'
in a more readable way. The macro lives in sched-if.h, as that is
where `struct schedule_data' is declared.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v2:
* This patch now contains both macro definition and usage, (and
  has been moved to the top of the series), as suggested during
  review.
* The macro has been moved to sched-if.h, as requested
  during review.
* The macro has been renamed curr_on_cpu(), to match the
  `*curr' field in `struct schedule_data' to which it points.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -228,7 +228,7 @@ static void burn_credits(struct csched_v
     unsigned int credits;
 
     /* Assert svc is current */
-    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
+    ASSERT( svc == CSCHED_VCPU(curr_on_cpu(svc->vcpu->processor)) );
 
     if ( (delta = now - svc->start_time) <= 0 )
         return;
@@ -246,8 +246,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
 static inline void
 __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
 {
-    struct csched_vcpu * const cur =
-        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask;
 
@@ -371,7 +370,7 @@ csched_alloc_pdata(const struct schedule
         per_cpu(schedule_data, cpu).sched_priv = spc;
 
     /* Start off idling... */
-    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
+    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
 
     spin_unlock_irqrestore(&prv->lock, flags);
@@ -709,7 +708,7 @@ csched_vcpu_sleep(const struct scheduler
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( per_cpu(schedule_data, vc->processor).curr == vc )
+    if ( curr_on_cpu(vc->processor) == vc )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
     else if ( __vcpu_on_runq(svc) )
         __runq_remove(svc);
@@ -723,7 +722,7 @@ csched_vcpu_wake(const struct scheduler 
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
+    if ( unlikely(curr_on_cpu(cpu) == vc) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -1192,7 +1191,7 @@ static struct csched_vcpu *
 csched_runq_steal(int peer_cpu, int cpu, int pri)
 {
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
-    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
+    const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
     struct csched_vcpu *speer;
     struct list_head *iter;
     struct vcpu *vc;
@@ -1480,7 +1479,7 @@ csched_dump_pcpu(const struct scheduler 
     printk("core=%s\n", cpustr);
 
     /* current VCPU */
-    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    svc = CSCHED_VCPU(curr_on_cpu(cpu));
     if ( svc )
     {
         printk("\trun: ");
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -41,6 +41,8 @@ struct schedule_data {
     atomic_t            urgent_count;   /* how many urgent vcpus           */
 };
 
+#define curr_on_cpu(c)    (per_cpu(schedule_data, c).curr)
+
 DECLARE_PER_CPU(struct schedule_data, schedule_data);
 DECLARE_PER_CPU(struct scheduler *, scheduler);
 DECLARE_PER_CPU(struct cpupool *, cpupool);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:35:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjHY-0005tZ-CT; Mon, 17 Dec 2012 22:35:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjHW-0005tL-VR
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:35:31 +0000
Received: from [85.158.139.211:12140] by server-11.bemta-5.messagelabs.com id
	EB/68-31624-23E9FC05; Mon, 17 Dec 2012 22:35:30 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355783729!18075846!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24720 invoked from network); 17 Dec 2012 22:35:29 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:29 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn17so2308058wib.6
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=6MbptbAiX13j6uGuM0kV41h0R3Y3CZD079a3mBr6t78=;
	b=N1ZwUUwnOcgJXnt8KqI2vIxGlTw7f/KgS93jvQFnXvikeF/KeueOn2dEUDjZQczehb
	KpSbrO+gx7QiHqCqbfPON7c0O5NeGo8y6uzUmo99M7JITE7ZSSE48xYRCEdmoh/ufWQ1
	jrG+6i0ns53bOVNcwDV5aR0FQgBiEuDAffPvW0XRFXjlpQVAAeGMvAXARJGwp5zJG6mg
	HvFxVHDCa/eo1cCRUAP1+0ECIgvws0j5ii1oHVOxSDq2aB3eJNtMA8bIY9iAnSGLgVg8
	eLOdfU9SSdwvynVJhK6em6/4QwvMT4IRemnAQ5fMxT232AQ323v9ID0Cqo4CrT775hdg
	xlHA==
Received: by 10.194.85.168 with SMTP id i8mr20155507wjz.18.1355783727190;
	Mon, 17 Dec 2012 14:35:27 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.25
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:26 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 47fe4d3554d40c6b40620e39aa63b0735304505f
Message-Id: <47fe4d3554d40c6b4062.1355783342@Solace>
In-Reply-To: <patchbomb.1355783337@Solace>
References: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:29:02 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5 of 5 v3] xen: sched_credit: add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Specifically, trace tickling and PCPU selection events.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v2:
* The call to `trace_var()' has been converted back to `__trace_var()',
  as it originally was (something got messed up while reworking this
  for v2. Thanks, George. :-) )

Changes from v1:
 * The dummy `struct d {}', accommodating `cpu' only, has been removed
   in favour of the much more readable `trace_var(..., sizeof(cpu), &cpu)',
   as suggested.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -21,6 +21,7 @@
 #include <asm/atomic.h>
 #include <xen/errno.h>
 #include <xen/keyhandler.h>
+#include <xen/trace.h>
 
 
 /*
@@ -98,6 +99,18 @@
 
 
 /*
+ * Credit tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED_SCHED_TASKLET TRC_SCHED_CLASS_EVT(CSCHED, 1)
+#define TRC_CSCHED_ACCOUNT_START TRC_SCHED_CLASS_EVT(CSCHED, 2)
+#define TRC_CSCHED_ACCOUNT_STOP  TRC_SCHED_CLASS_EVT(CSCHED, 3)
+#define TRC_CSCHED_STOLEN_VCPU   TRC_SCHED_CLASS_EVT(CSCHED, 4)
+#define TRC_CSCHED_PICKED_CPU    TRC_SCHED_CLASS_EVT(CSCHED, 5)
+#define TRC_CSCHED_TICKLE        TRC_SCHED_CLASS_EVT(CSCHED, 6)
+
+
+/*
  * Boot parameters
  */
 static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
@@ -316,9 +329,18 @@ static inline void
         }
     }
 
-    /* Send scheduler interrupts to designated CPUs */
     if ( !cpumask_empty(&mask) )
+    {
+        if ( unlikely(tb_init_done) )
+        {
+            /* Avoid TRACE_*: saves checking !tb_init_done each step */
+            for_each_cpu(cpu, &mask)
+                __trace_var(TRC_CSCHED_TICKLE, 0, sizeof(cpu), &cpu);
+        }
+
+        /* Send scheduler interrupts to designated CPUs */
         cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
+    }
 }
 
 static void
@@ -555,6 +577,8 @@ static int
     if ( commit && spc )
        spc->idle_bias = cpu;
 
+    TRACE_3D(TRC_CSCHED_PICKED_CPU, vc->domain->domain_id, vc->vcpu_id, cpu);
+
     return cpu;
 }
 
@@ -587,6 +611,9 @@ static inline void
         }
     }
 
+    TRACE_3D(TRC_CSCHED_ACCOUNT_START, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
+
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
@@ -609,6 +636,9 @@ static inline void
     {
         list_del_init(&sdom->active_sdom_elem);
     }
+
+    TRACE_3D(TRC_CSCHED_ACCOUNT_STOP, sdom->dom->domain_id,
+             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
 }
 
 static void
@@ -1242,6 +1272,8 @@ csched_runq_steal(int peer_cpu, int cpu,
             if (__csched_vcpu_is_migrateable(vc, cpu))
             {
                 /* We got a candidate. Grab it! */
+                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
+                         vc->domain->domain_id, vc->vcpu_id);
                 SCHED_VCPU_STAT_CRANK(speer, migrate_q);
                 SCHED_STAT_CRANK(migrate_queued);
                 WARN_ON(vc->is_urgent);
@@ -1402,6 +1434,7 @@ csched_schedule(
     /* Tasklet work (which runs in idle VCPU context) overrides all else. */
     if ( tasklet_work_scheduled )
     {
+        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
         snext = CSCHED_VCPU(idle_vcpu[cpu]);
         snext->pri = CSCHED_PRI_TS_BOOST;
     }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:35:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjHe-0005um-V1; Mon, 17 Dec 2012 22:35:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjHd-0005uY-ES
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:35:37 +0000
Received: from [85.158.143.99:36269] by server-3.bemta-4.messagelabs.com id
	7D/32-18211-83E9FC05; Mon, 17 Dec 2012 22:35:36 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1355783722!29246112!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14813 invoked from network); 17 Dec 2012 22:35:22 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:22 -0000
Received: by mail-wi0-f179.google.com with SMTP id o1so2317704wic.6
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:content-transfer-encoding:subject
	:x-mercurial-node:message-id:in-reply-to:references:user-agent:date
	:from:to:cc; bh=RIG/1FCZGChALH9cyDBF0YmIh4P03xS7DKX8NLT9Fbg=;
	b=Xzmpkw5KfMNGNHLIlWSo6KfinaxgpDnXYrigPUISd2TemRfJYP1rOnqqd9nYO5XeEG
	F+GLIdKGzVH9Bik7naPuZrj4ukN690b1zNY4qVVR6g8wI5eoyrS6qJhpBA6LZZ6DXtdp
	KbFR5Pjgljx5L9tQcbm3JvKEmitrDE1TVWS3ngQ4hQ/hJ7rm7tUqT47IfNTdzXHjfFQZ
	vDvwf4FFQ9AeZuELcNEQMmwsmEfi+u1EBrxzoMGQII8AnCTdMVOmlwJw1U7oxWqN79fq
	FLqLKUPPg1NV4vRqOAc340qsWVdbSODCVlYD9I7RiMqSTodf3QQDZYbBbYRQ2/0HsDVo
	na5A==
Received: by 10.194.110.231 with SMTP id id7mr20205017wjb.6.1355783722294;
	Mon, 17 Dec 2012 14:35:22 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.20
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:21 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 7e9837f96c0d6afc2f488d1bbc6b0a9419610c9f
Message-Id: <7e9837f96c0d6afc2f48.1355783339@Solace>
In-Reply-To: <patchbomb.1355783337@Solace>
References: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:28:59 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2 of 5 v3] xen: sched_credit: improve picking up
 the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In _csched_cpu_pick() we try to select the best possible CPU for
running a VCPU, considering the characteristics of the underlying
hardware (i.e., how many threads, cores and sockets, and how busy they
are). What we want is "the idle execution vehicle with the most
idling neighbours in its grouping".

In order to achieve it, we select a CPU from the VCPU's affinity,
giving preference to its current processor if possible, as the basis
for the comparison with all the other CPUs. The problem is that, to discount
the VCPU itself when computing this "idleness" (in an attempt to be
fair wrt its current processor), we arbitrarily and unconditionally
consider that selected CPU as idle, even when it is not the case,
for instance:
 1. If the CPU is not the one where the VCPU is running (perhaps due
    to the affinity being changed);
 2. The CPU is where the VCPU is running, but it has other VCPUs in
    its runq, so it won't go idle even if the VCPU in question goes.

This is exemplified in the trace below:

]  3.466115364 x|------|------| d10v1   22005(2:2:5) 3 [ a 1 8 ]
   ... ... ...
   3.466122856 x|------|------| d10v1 runstate_change d10v1 running->offline
   3.466123046 x|------|------| d?v? runstate_change d32767v0 runnable->running
   ... ... ...
]  3.466126887 x|------|------| d32767v0   28004(2:8:4) 3 [ a 1 8 ]

The 22005(...) line (the first one) means _csched_cpu_pick() was
called for VCPU 1 of domain 10 while it was running on CPU 0, and it
chose CPU 8, which is busy ('|'), even though there are plenty of idle
CPUs. That is because, as a consequence of changing the VCPU's
affinity, CPU 8 was chosen as the basis for the comparison and was
therefore considered idle (its bit gets unconditionally set in the
bitmask representing the idle CPUs). The 28004(...) line means the
VCPU is woken up and queued on CPU 8's runq, where it waits for a
context switch or a migration before it can execute.

This change fixes things by considering the "guessed" CPU idle only
if the VCPU in question is running there and is its only runnable
VCPU.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v2:
 * Use `vc->processor' instead of curr_on_cpu() for determining whether
   or not vc is current on cpu, as suggested during review.
 * Fixed the IS_RUNQ_IDLE() macro to handle the case where the runq is
   empty.
 * Ditched the variable renaming, as requested during review.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -59,6 +59,9 @@
 #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
 #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
+/* Is the first element of _cpu's runq its idle vcpu? */
+#define IS_RUNQ_IDLE(_cpu)  (list_empty(RUNQ(_cpu)) || \
+                             is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
 
 
 /*
@@ -478,9 +481,14 @@ static int
      * distinct cores first and guarantees we don't do something stupid
      * like run two VCPUs on co-hyperthreads while there are idle cores
      * or sockets.
+     *
+     * Notice that, when computing the "idleness" of cpu, we may want to
+     * discount vc. That is, iff vc is the currently running and the only
+     * runnable vcpu on cpu, we add cpu to the idlers.
      */
     cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
-    cpumask_set_cpu(cpu, &idlers);
+    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
+        cpumask_set_cpu(cpu, &idlers);
     cpumask_and(&cpus, &cpus, &idlers);
     cpumask_clear_cpu(cpu, &cpus);
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:36:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjI3-00062m-Cz; Mon, 17 Dec 2012 22:36:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TkjI1-000628-4l
	for xen-devel@lists.xensource.com; Mon, 17 Dec 2012 22:36:02 +0000
Received: from [193.109.254.147:7080] by server-15.bemta-14.messagelabs.com id
	08/A9-05116-05E9FC05; Mon, 17 Dec 2012 22:36:00 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1355783725!8684422!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11974 invoked from network); 17 Dec 2012 22:35:29 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:35:29 -0000
Received: by mail-wg0-f51.google.com with SMTP id gg4so2975650wgb.6
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Dec 2012 14:35:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=mGfndDkrcx7sYeT8nQiEb5Bo5Hl4AxHQzLYD7jLXk8g=;
	b=qCvAqBw5UtFITFKNts+PUKfpD1m/CimQnGpypF0VcUg8olhovg5r/bm4Axz/Ty+hot
	gNNdfylpTka+VVigxVkiY6wGLMYt/A3eLi5MbsMHRPk+9eok1jzD2tVCqtzCZK7rWXxl
	Aw08ARdpc9M1CS+bh87/rtcv+gQNNlr1uhmQK/adqlPVEiX/aWA7WLpOZWRo0LP1DJ7K
	eJtQR5RogInP6okjZkLbiyzOc34FZJNwyDZEPB5O7LVA/DeKS/SHdQSzdBRvqLZFfQdP
	2lpcJFUlXkY7wXYycqv4lfeOfdk2qF+agnPNNWPeK3OpruCRE1MS2/FtuIHfbxrE++ip
	z/oQ==
X-Received: by 10.180.107.130 with SMTP id hc2mr339045wib.12.1355783725581;
	Mon, 17 Dec 2012 14:35:25 -0800 (PST)
Received: from [127.0.1.1] (ip-178-85.sn2.eutelia.it. [83.211.178.85])
	by mx.google.com with ESMTPS id eo10sm14752906wib.9.2012.12.17.14.35.24
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:35:24 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 4e6b8af4ad97bfeef529320c69edf5f572ca3625
Message-Id: <4e6b8af4ad97bfeef529.1355783341@Solace>
In-Reply-To: <patchbomb.1355783337@Solace>
References: <patchbomb.1355783337@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Mon, 17 Dec 2012 23:29:01 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xensource.com
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4 of 5 v3] xen: tracing: introduce per-scheduler
 trace event IDs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes it possible to create scheduler-specific trace records
within each scheduler without worrying about event IDs overlapping,
and without giving up the ability to identify them unambiguously. The
latter is useful since, thanks to cpupools, we can have more than one
scheduler running at the same time.

The event ID is 12 bits, and this change uses the upper 3 of those
bits for the 'scheduler ID'. This means we are limited to 8 schedulers
and to 512 scheduler-specific tracing events; both seem reasonable
limitations for now.

This also converts the existing credit2 tracing (the only scheduler
generating tracing events up to now) to the new system.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
---
Changes from v1:
 * The event ID generation macro is now called `TRC_SCHED_CLASS_EVT()',
   and has been generalized and put in trace.h, as suggested.
 * The handling of per-scheduler tracing IDs and masks has been
   restructured, properly naming the numerical identifiers "ID" and
   the bitmasks "MASK", as requested.

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -29,18 +29,22 @@
 #define d2printk(x...)
 //#define d2printk printk
 
-#define TRC_CSCHED2_TICK        TRC_SCHED_CLASS + 1
-#define TRC_CSCHED2_RUNQ_POS    TRC_SCHED_CLASS + 2
-#define TRC_CSCHED2_CREDIT_BURN TRC_SCHED_CLASS + 3
-#define TRC_CSCHED2_CREDIT_ADD  TRC_SCHED_CLASS + 4
-#define TRC_CSCHED2_TICKLE_CHECK TRC_SCHED_CLASS + 5
-#define TRC_CSCHED2_TICKLE       TRC_SCHED_CLASS + 6
-#define TRC_CSCHED2_CREDIT_RESET TRC_SCHED_CLASS + 7
-#define TRC_CSCHED2_SCHED_TASKLET TRC_SCHED_CLASS + 8
-#define TRC_CSCHED2_UPDATE_LOAD   TRC_SCHED_CLASS + 9
-#define TRC_CSCHED2_RUNQ_ASSIGN   TRC_SCHED_CLASS + 10
-#define TRC_CSCHED2_UPDATE_VCPU_LOAD   TRC_SCHED_CLASS + 11
-#define TRC_CSCHED2_UPDATE_RUNQ_LOAD   TRC_SCHED_CLASS + 12
+/*
+ * Credit2 tracing events ("only" 512 available!). Check
+ * include/public/trace.h for more details.
+ */
+#define TRC_CSCHED2_TICK             TRC_SCHED_CLASS_EVT(CSCHED2, 1)
+#define TRC_CSCHED2_RUNQ_POS         TRC_SCHED_CLASS_EVT(CSCHED2, 2)
+#define TRC_CSCHED2_CREDIT_BURN      TRC_SCHED_CLASS_EVT(CSCHED2, 3)
+#define TRC_CSCHED2_CREDIT_ADD       TRC_SCHED_CLASS_EVT(CSCHED2, 4)
+#define TRC_CSCHED2_TICKLE_CHECK     TRC_SCHED_CLASS_EVT(CSCHED2, 5)
+#define TRC_CSCHED2_TICKLE           TRC_SCHED_CLASS_EVT(CSCHED2, 6)
+#define TRC_CSCHED2_CREDIT_RESET     TRC_SCHED_CLASS_EVT(CSCHED2, 7)
+#define TRC_CSCHED2_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(CSCHED2, 8)
+#define TRC_CSCHED2_UPDATE_LOAD      TRC_SCHED_CLASS_EVT(CSCHED2, 9)
+#define TRC_CSCHED2_RUNQ_ASSIGN      TRC_SCHED_CLASS_EVT(CSCHED2, 10)
+#define TRC_CSCHED2_UPDATE_VCPU_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 11)
+#define TRC_CSCHED2_UPDATE_RUNQ_LOAD TRC_SCHED_CLASS_EVT(CSCHED2, 12)
 
 /*
  * WARNING: This is still in an experimental phase.  Status and work can be found at the
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -57,6 +57,32 @@
 #define TRC_SCHED_CLASS     0x00022000   /* Scheduler-specific    */
 #define TRC_SCHED_VERBOSE   0x00028000   /* More inclusive scheduling */
 
+/*
+ * The highest 3 bits of the last 12 bits of TRC_SCHED_CLASS above are
+ * reserved for encoding what scheduler produced the information. The
+ * actual event is encoded in the last 9 bits.
+ *
+ * This means we have 8 scheduling IDs available (which means at most 8
+ * schedulers generating events) and, in each scheduler, up to 512
+ * different events.
+ */
+#define TRC_SCHED_ID_BITS 3
+#define TRC_SCHED_ID_SHIFT (TRC_SUBCLS_SHIFT - TRC_SCHED_ID_BITS)
+#define TRC_SCHED_ID_MASK (((1UL<<TRC_SCHED_ID_BITS) - 1) << TRC_SCHED_ID_SHIFT)
+#define TRC_SCHED_EVT_MASK (~(TRC_SCHED_ID_MASK))
+
+/* Per-scheduler IDs, to identify scheduler specific events */
+#define TRC_SCHED_CSCHED   0
+#define TRC_SCHED_CSCHED2  1
+#define TRC_SCHED_SEDF     2
+#define TRC_SCHED_ARINC653 3
+
+/* Per-scheduler tracing */
+#define TRC_SCHED_CLASS_EVT(_c, _e) \
+  ( ( TRC_SCHED_CLASS | \
+      ((TRC_SCHED_##_c << TRC_SCHED_ID_SHIFT) & TRC_SCHED_ID_MASK) ) + \
+    (_e & TRC_SCHED_EVT_MASK) )
+
 /* Trace classes for Hardware */
 #define TRC_HW_PM           0x00801000   /* Power management traces */
 #define TRC_HW_IRQ          0x00802000   /* Traces relating to the handling of IRQs */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 17 22:48:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjTv-0006lW-N3; Mon, 17 Dec 2012 22:48:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sdiris@gmail.com>) id 1TkjTt-0006lR-VJ
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 22:48:18 +0000
Received: from [85.158.139.83:4289] by server-2.bemta-5.messagelabs.com id
	B5/AB-16162-131AFC05; Mon, 17 Dec 2012 22:48:17 +0000
X-Env-Sender: sdiris@gmail.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355784493!26248340!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 615 invoked from network); 17 Dec 2012 22:48:15 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:48:15 -0000
Received: by mail-we0-f173.google.com with SMTP id z2so3223780wey.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 14:47:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:subject:date:message-id:mime-version:content-type:x-mailer
	:thread-index:content-language;
	bh=VyKBcTsKqEM+GjKIfcQFBZGbWtnJgB7C0HkjG72jwUQ=;
	b=tRjUoHhzrjJqZ3OxGn9PFszPvJNItneyKZUqRGk5duxuQBguUzw+WdZx6esXI0A86l
	WhSbEt9b9BWE0kWeuZmztpGW3479aH7HzyouCKDxyIS1IuKymEKkNd34zUkRAw4ctdfu
	ndRF5Tbs/ux1CvldNfyrv27ZL5bf662Jca5MWgRng3sIY8Xc2RVsp9bXNtaauPy0vzif
	3zZcR55QEXFniL13sICV/CGZ3Qs+Ph0VNJnxmwHb45wrUi+OjZun9NRxf4Lc+SZ5XqWD
	O7xJ++YzZow1P9dPdjVQQzeALFd2VrRcLMFVh77P5w625eMvbYiYm/WCbiJWAxALzAAM
	/XkA==
Received: by 10.180.73.80 with SMTP id j16mr415305wiv.5.1355784434355;
	Mon, 17 Dec 2012 14:47:14 -0800 (PST)
Received: from yimingwin7 (c192.al.cl.cam.ac.uk. [128.232.110.192])
	by mx.google.com with ESMTPS id l5sm14804983wia.10.2012.12.17.14.47.12
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:47:13 -0800 (PST)
From: "Yiming Zhang" <sdiris@gmail.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Dec 2012 22:47:12 -0000
Message-ID: <001901cddca8$75110ac0$5f332040$@gmail.com>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 14.0
Thread-Index: Ac3cp7LkuVh3pIHwRGiDPTwBU1Kd2A==
Content-Language: zh-cn
Subject: [Xen-devel] Error in installing xen-4.2 on Debian
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2250218384675899903=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

--===============2250218384675899903==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_001A_01CDDCA8.7511A700"
Content-Language: zh-cn

This is a multipart message in MIME format.

------=_NextPart_000_001A_01CDDCA8.7511A700
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Dear all,

 

I want to install Xen 4.2 from the source files on Debian Squeeze.
After "make" and "make install", I ran update-grub and got this error
message:

 

dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version
number

 

I googled for a solution. Someone said to delete xen.gz (a symbolic
link to xen.4.2.0.gz), but that does not work for me. Someone else
said it is a grub bug, so I installed the latest grub version, but
after rebooting my system gets stuck in a confusing grub interface and
I don't know how to continue. (Thank god I am using VMware.)

 

Can somebody help me? Thank you very much!

 

Regards,

Yiming


class=3DMsoNormal><span lang=3DEN-US>I googled the solution. Someone =
said to delete xen.gz (a symbolic link to xen.4.2.0.gz) but it =
doesn&#8217;t work for me. Someone said it is a bug of grub and so I =
installed the latest version, but after reboot my system get into a =
confusing grub interface and I don&#8217;t know how to continue. (thank =
god I am using vmware)<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US><o:p>&nbsp;</o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Can somebody help me? Thank you very =
much!<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US><o:p>&nbsp;</o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Regards,<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Yiming<o:p></o:p></span></p></div></body></html>
------=_NextPart_000_001A_01CDDCA8.7511A700--



--===============2250218384675899903==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2250218384675899903==--



From xen-devel-bounces@lists.xen.org Mon Dec 17 22:48:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 22:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkjTv-0006lW-N3; Mon, 17 Dec 2012 22:48:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sdiris@gmail.com>) id 1TkjTt-0006lR-VJ
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 22:48:18 +0000
Received: from [85.158.139.83:4289] by server-2.bemta-5.messagelabs.com id
	B5/AB-16162-131AFC05; Mon, 17 Dec 2012 22:48:17 +0000
X-Env-Sender: sdiris@gmail.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355784493!26248340!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 615 invoked from network); 17 Dec 2012 22:48:15 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 22:48:15 -0000
Received: by mail-we0-f173.google.com with SMTP id z2so3223780wey.32
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 14:47:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:subject:date:message-id:mime-version:content-type:x-mailer
	:thread-index:content-language;
	bh=VyKBcTsKqEM+GjKIfcQFBZGbWtnJgB7C0HkjG72jwUQ=;
	b=tRjUoHhzrjJqZ3OxGn9PFszPvJNItneyKZUqRGk5duxuQBguUzw+WdZx6esXI0A86l
	WhSbEt9b9BWE0kWeuZmztpGW3479aH7HzyouCKDxyIS1IuKymEKkNd34zUkRAw4ctdfu
	ndRF5Tbs/ux1CvldNfyrv27ZL5bf662Jca5MWgRng3sIY8Xc2RVsp9bXNtaauPy0vzif
	3zZcR55QEXFniL13sICV/CGZ3Qs+Ph0VNJnxmwHb45wrUi+OjZun9NRxf4Lc+SZ5XqWD
	O7xJ++YzZow1P9dPdjVQQzeALFd2VrRcLMFVh77P5w625eMvbYiYm/WCbiJWAxALzAAM
	/XkA==
Received: by 10.180.73.80 with SMTP id j16mr415305wiv.5.1355784434355;
	Mon, 17 Dec 2012 14:47:14 -0800 (PST)
Received: from yimingwin7 (c192.al.cl.cam.ac.uk. [128.232.110.192])
	by mx.google.com with ESMTPS id l5sm14804983wia.10.2012.12.17.14.47.12
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 17 Dec 2012 14:47:13 -0800 (PST)
From: "Yiming Zhang" <sdiris@gmail.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Dec 2012 22:47:12 -0000
Message-ID: <001901cddca8$75110ac0$5f332040$@gmail.com>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 14.0
Thread-Index: Ac3cp7LkuVh3pIHwRGiDPTwBU1Kd2A==
Content-Language: zh-cn
Subject: [Xen-devel] Error in installing xen-4.2 on Debian
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2250218384675899903=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

--===============2250218384675899903==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_001A_01CDDCA8.7511A700"
Content-Language: zh-cn

This is a multipart message in MIME format.

------=_NextPart_000_001A_01CDDCA8.7511A700
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Dear all,

 

I want to install Xen 4.2 from the source files on Debian Squeeze. After
"make" and "make install", I ran update-grub and got this error message:

dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version
number

I googled for a solution. Someone said to delete xen.gz (a symbolic link to
xen.4.2.0.gz), but that doesn't work for me. Someone else said it is a grub
bug, so I installed the latest version of grub, but after rebooting my
system drops into a confusing grub interface and I don't know how to
continue. (Thank god I am using VMware.)

 

Can somebody help me? Thank you very much!

 

Regards,

Yiming


------=_NextPart_000_001A_01CDDCA8.7511A700--



--===============2250218384675899903==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2250218384675899903==--



From xen-devel-bounces@lists.xen.org Mon Dec 17 23:59:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 23:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkkZu-0007Eh-B6; Mon, 17 Dec 2012 23:58:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thiagocmartinsc@gmail.com>) id 1TkkZt-0007Ec-2x
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 23:58:33 +0000
Received: from [85.158.139.83:30004] by server-7.bemta-5.messagelabs.com id
	45/C3-08009-7A1BFC05; Mon, 17 Dec 2012 23:58:31 +0000
X-Env-Sender: thiagocmartinsc@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355788710!28142995!1
X-Originating-IP: [209.85.217.169]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15282 invoked from network); 17 Dec 2012 23:58:30 -0000
Received: from mail-lb0-f169.google.com (HELO mail-lb0-f169.google.com)
	(209.85.217.169)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 23:58:30 -0000
Received: by mail-lb0-f169.google.com with SMTP id gk1so135670lbb.0
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 15:58:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=PBucFwWWbzoaJtdg6ydeb713QQSZxJ/QqO5mfJyDaTg=;
	b=z/1BDlAMJAyf8zdE+Ih9wrlH71g6RcxqEwc6ODgyeOnZPyzraZyScmyiMFJYNudn7i
	xov93M0GfLUWcIAO6cssUhm4L1BxPmb9JwhZuX6F5ijN0ao+CuwtK7rbHdS28fTUoDgs
	ir+1lC0tZ4OqB9kncbyjamkyBi3zL00Hucopq0mmGgk/x9/0u1lB9hVn12ilsI2QusR8
	xIIbcrbDbKPoeStg+s0Bw0sgti3KQKtyixdwIlnZYjzbyFS4axxWdGEA4kqbAxv8+vEZ
	SBVlF0l2wJT+tvYk9N807JuNZ6Qw/lg0YG+s+RXlrUrl9d4VI6VFCjfIP4lglsSFM4h/
	Uzxw==
Received: by 10.112.99.195 with SMTP id es3mr78852lbb.132.1355788710066; Mon,
	17 Dec 2012 15:58:30 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.6.3 with HTTP; Mon, 17 Dec 2012 15:57:59 -0800 (PST)
In-Reply-To: <20120820191429.GY19851@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
From: =?ISO-2022-JP?B?TWFydGlueCAtIBskQiU4JSchPCVgJTobKEI=?=
	<thiagocmartinsc@gmail.com>
Date: Mon, 17 Dec 2012 21:57:59 -0200
Message-ID: <CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2426075246315196067=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2426075246315196067==
Content-Type: multipart/alternative; boundary=14dae9d7182c74392f04d11528b1

--14dae9d7182c74392f04d11528b1
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi Pasi!

Can you tell me whether it will be possible to use Xen like this:

 dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst -> Spice
-> Spice-Client ?

I do not want to use Spice "alone", and I do not want to use my domU with
my ATI GPU without SPICE... Does that make sense?

Thanks!
Thiago

On 20 August 2012 16:14, Pasi K=E4rkk=E4inen <pasik@iki.fi> wrote:

> On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >
> > Features and improvements not on this list are of course welcome at
> > any time before the feature freeze.
> >
> > Any questions and feedback are welcome!
> >
> > Your 4.3 release coordinator,
> >  George Dunlap
> >
>
> <snip>
>
> >
> > * xl USB pass-through for PV guests
> >   owner: ?
> >   Port the xend PV pass-through functionality to xl.
> >
>
> xm/xend PVUSB works for both PV and HVM guests, so xl should support
> PVUSB for both PV and HVM guests as well.
> James Harper's GPLPV drivers actually do have PVUSB frontend driver for
> Windows.
>
> Also Suse's xenlinux forward-ported patches have PVUSB support in
> unmodified_drivers for HVM guests.
>
>
> Another USB item:
>
> * xl support for USB device passthru using QEMU emulated USB for HVM
> guests (no need for PVUSB drivers in the HVM guest).
>   This works today in xm/xend with qemu-traditional, but is limited to
>   USB 1.1, probably because the old version of Qemu-dm-traditional
>   lacks USB 2.0/3.0.
>   So xl support for emulated USB device passthru for both qemu-upstream
> and qemu-traditional.
>
>
> More wishlist items:
>
> * Nested hardware virtualization. Important for easier testing and
> development of Xen (Xen-on-Xen),
>   and for running other hypervisors in Xen VMs. Interesting for labs,
> POCs, etc.
>
> * VGA/GPU passthru support for AMD/NVIDIA; lots of patches on xen-devel
> archives,
>   but no one has yet stepped up to clean up and get them merged.
>   Currently Intel gfx passthru patches are merged to Xen, but primary
>   ATI/NVIDIA require extra patches.
>   This is actually something that a LOT of users ask about often; it's
>   discussed almost every day on ##xen on IRC.
>   I wonder if XenClient folks could help here?
>
> * Dom0 Keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU
> passthru users.
>   Fujitsu guys posted some patches for this in 2010, and XenClient guys
>   in 2009 (iirc),
>   but nothing got further developed and merged to upstream Xen.
>
> * QXL virtual GPU support for SPICE. Someone was already developing this,
>   and posted patches earlier during 4.2 development cycle to xen-devel.
>   Upstream Qemu includes QXL support.
>
> * PVSCSI support in XL. James Harper was (semi) interested in working
>   with this,
>   because he has a PVSCSI frontend driver in Windows GPLPV drivers, and
> he's using PVSCSI for tape backups himself.
>
> * libvirt libxl driver improvements; support more Xen features.
>   Allows better using the Ubuntu/Debian/Fedora/RHEL/CentOS "default"
> virtualization GUI also with Xen.
>
>
> Hopefully we'll find interested developers for these items :)
>
>
> -- Pasi
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--14dae9d7182c74392f04d11528b1--


--===============2426075246315196067==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2426075246315196067==--


From xen-devel-bounces@lists.xen.org Mon Dec 17 23:59:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Dec 2012 23:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkkZu-0007Eh-B6; Mon, 17 Dec 2012 23:58:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thiagocmartinsc@gmail.com>) id 1TkkZt-0007Ec-2x
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 23:58:33 +0000
Received: from [85.158.139.83:30004] by server-7.bemta-5.messagelabs.com id
	45/C3-08009-7A1BFC05; Mon, 17 Dec 2012 23:58:31 +0000
X-Env-Sender: thiagocmartinsc@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355788710!28142995!1
X-Originating-IP: [209.85.217.169]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15282 invoked from network); 17 Dec 2012 23:58:30 -0000
Received: from mail-lb0-f169.google.com (HELO mail-lb0-f169.google.com)
	(209.85.217.169)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Dec 2012 23:58:30 -0000
Received: by mail-lb0-f169.google.com with SMTP id gk1so135670lbb.0
	for <xen-devel@lists.xen.org>; Mon, 17 Dec 2012 15:58:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=PBucFwWWbzoaJtdg6ydeb713QQSZxJ/QqO5mfJyDaTg=;
	b=z/1BDlAMJAyf8zdE+Ih9wrlH71g6RcxqEwc6ODgyeOnZPyzraZyScmyiMFJYNudn7i
	xov93M0GfLUWcIAO6cssUhm4L1BxPmb9JwhZuX6F5ijN0ao+CuwtK7rbHdS28fTUoDgs
	ir+1lC0tZ4OqB9kncbyjamkyBi3zL00Hucopq0mmGgk/x9/0u1lB9hVn12ilsI2QusR8
	xIIbcrbDbKPoeStg+s0Bw0sgti3KQKtyixdwIlnZYjzbyFS4axxWdGEA4kqbAxv8+vEZ
	SBVlF0l2wJT+tvYk9N807JuNZ6Qw/lg0YG+s+RXlrUrl9d4VI6VFCjfIP4lglsSFM4h/
	Uzxw==
Received: by 10.112.99.195 with SMTP id es3mr78852lbb.132.1355788710066; Mon,
	17 Dec 2012 15:58:30 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.6.3 with HTTP; Mon, 17 Dec 2012 15:57:59 -0800 (PST)
In-Reply-To: <20120820191429.GY19851@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
From: =?ISO-2022-JP?B?TWFydGlueCAtIBskQiU4JSchPCVgJTobKEI=?=
	<thiagocmartinsc@gmail.com>
Date: Mon, 17 Dec 2012 21:57:59 -0200
Message-ID: <CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2426075246315196067=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2426075246315196067==
Content-Type: multipart/alternative; boundary=14dae9d7182c74392f04d11528b1

--14dae9d7182c74392f04d11528b1
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi Pasi!

 Can you tell me if it will be possible to use Xen like this:

 dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst -> Spice
-> Spice-Client ?

 I do not want to use Spice "alone" and, I do not want to use my domU with
my ATI without SPICE...  That makes sense?

Thanks!
Thiago

On 20 August 2012 16:14, Pasi K=E4rkk=E4inen <pasik@iki.fi> wrote:

> On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >
> > Features and improvements not on this list are of course welcome at
> > any time before the feature freeze.
> >
> > Any questions and feedback are welcome!
> >
> > Your 4.3 release coordinator,
> >  George Dunlap
> >
>
> <snip>
>
> >
> > * xl USB pass-through for PV guests
> >   owner: ?
> >   Port the xend PV pass-through functionality to xl.
> >
>
> xm/xend PVUSB works for both PV and HVM guests, so xl should support PVUS=
B
> for both PV and HVM guests aswell.
> James Harper's GPLPV drivers actually do have PVUSB frontend driver for
> Windows.
>
> Also Suse's xenlinux forward-ported patches have PVUSB support in
> unmodified_drivers for HVM guests.
>
>
> Another USB item:
>
> * xl support for USB device passthru using QEMU emulated USB for HVM
> guests (no need for PVUSB drivers in the HVM guest).
>   This works today in xm/xend with qemu-traditional, but is limited to US=
B
> 1.1, probably because
>   the old version of Qemu-dm-traditional which lacks USB 2.0/3.0.
>   So xl support for emulated USB device passthru for both qemu-upstream
> and qemu-traditional.
>
>
> More wishlist items:
>
> * Nested hardware virtualization. Important for easier testing and
> development of Xen (Xen-on-Xen),
>   and for running other hypervisors in Xen VMs. Interesting for labs,
> POCs, etc.
>
> * VGA/GPU passthru support for AMD/NVIDIA; lots of patches on xen-devel
> archives,
>   but noone has yet stepped up to clean up and get them merged.
>   Currently Intel gfx passthru patches are merged to Xen, but primary
> ATI/NVIDIA require extra patches.
>   This is actually something that a LOT of users ask often, it's discusse=
d
> almost every day on ##xen on IRC.
>   I wonder if XenClient folks could help here?
>
> * Dom0 Keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU
> passthru users.
>   Fujitsu guys posted some patches for this in 2010, and XenClient guys i=
n
> 2009 (iirc),
>   but nothing got further developed and merged to upstream Xen.
>
> * QXL virtual GPU support for SPICE. Someone was already developing this,
>   and posted patches earlier during 4.2 development cycle to xen-devel.
>   Upstream Qemu includes QXL support.
>
> * PVSCSI support in XL. James Harper was (semi) interested in working wit=
h
> this,
>   because he has a PVSCSI frontend driver in Windows GPLPV drivers, and
> he's using PVSCSI for tape backups himself.
>
> * libvirt libxl driver improvements; support more Xen features.
>   Allows better using the Ubuntu/Debian/Fedora/RHEL/CentOS "default"
> virtualization GUI also with Xen.
>
>
> Hopefully we'll find interested developers for these items :)
>
>
> -- Pasi
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--14dae9d7182c74392f04d11528b1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi Pasi!

 Can you tell me if it will be possible to use Xen like this:

 dom0 -> ATI GPU passthrough as primary -> HVM domU with Catalyst -> Spice -> Spice client?

 I do not want to use Spice "alone", and I do not want to use my domU with my ATI without Spice... Does that make sense?

Thanks!
Thiago

On 20 August 2012 16:14, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >
> > Features and improvements not on this list are of course welcome at
> > any time before the feature freeze.
> >
> > Any questions and feedback are welcome!
> >
> > Your 4.3 release coordinator,
> >  George Dunlap
> >
> <snip>
> >
> > * xl USB pass-through for PV guests
> >   owner: ?
> >   Port the xend PV pass-through functionality to xl.
> >
> xm/xend PVUSB works for both PV and HVM guests, so xl should support PVUSB
> for both PV and HVM guests as well.
> James Harper's GPLPV drivers actually do have a PVUSB frontend driver for Windows.
>
> Also SUSE's xenlinux forward-ported patches have PVUSB support in
> unmodified_drivers for HVM guests.
>
> Another USB item:
>
> * xl support for USB device passthrough using QEMU-emulated USB for HVM guests
>   (no need for PVUSB drivers in the HVM guest).
>   This works today in xm/xend with qemu-traditional, but is limited to USB 1.1,
>   probably because the old qemu-dm-traditional lacks USB 2.0/3.0 support.
>   So: xl support for emulated USB device passthrough for both qemu-upstream
>   and qemu-traditional.
>
> More wishlist items:
>
> * Nested hardware virtualization. Important for easier testing and development
>   of Xen (Xen-on-Xen), and for running other hypervisors in Xen VMs.
>   Interesting for labs, POCs, etc.
>
> * VGA/GPU passthrough support for AMD/NVIDIA; there are lots of patches in the
>   xen-devel archives, but no one has yet stepped up to clean them up and get
>   them merged.
>   Currently the Intel gfx passthrough patches are merged into Xen, but primary
>   ATI/NVIDIA passthrough requires extra patches.
>   This is something that a LOT of users ask for; it's discussed almost every
>   day on ##xen on IRC.
>   I wonder if the XenClient folks could help here?
>
> * Dom0 keyboard/mouse sharing with HVM guests; mainly needed by VGA/GPU
>   passthrough users.
>   Fujitsu posted some patches for this in 2010, and XenClient in 2009 (IIRC),
>   but nothing was developed further and merged into upstream Xen.
>
> * QXL virtual GPU support for SPICE. Someone was already developing this, and
>   posted patches to xen-devel earlier during the 4.2 development cycle.
>   Upstream QEMU includes QXL support.
>
> * PVSCSI support in xl. James Harper was (semi-)interested in working on this,
>   because he has a PVSCSI frontend driver in the Windows GPLPV drivers, and
>   he is using PVSCSI for tape backups himself.
>
> * libvirt libxl driver improvements; support more Xen features.
>   This allows the "default" Ubuntu/Debian/Fedora/RHEL/CentOS virtualization
>   GUI to be used with Xen as well.
>
> Hopefully we'll find interested developers for these items :)
>
> -- Pasi
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
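
The emulated-USB passthrough item in Pasi's wishlist corresponds, in classic xm/xend, to a pair of HVM guest config options; a minimal sketch, assuming the classic xend config syntax, with a placeholder vendor:product ID:

```
# HVM guest config fragment for xm/xend (classic syntax).
# 046d:c52b is a placeholder; substitute the vendor:product ID
# reported by `lsusb` for the device to pass through.
usb = 1
usbdevice = 'host:046d:c52b'
```

With qemu-traditional this attaches the device to the emulated UHCI controller, which is the USB 1.1 limitation the mail refers to.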

--14dae9d7182c74392f04d11528b1--


--===============2426075246315196067==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2426075246315196067==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 01:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 01:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkljs-0003Rq-7u; Tue, 18 Dec 2012 01:12:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tkljq-0003Rl-Tg
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 01:12:55 +0000
Received: from [85.158.139.83:48891] by server-6.bemta-5.messagelabs.com id
	A1/80-30498-613CFC05; Tue, 18 Dec 2012 01:12:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355793171!29585666!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjYyNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18647 invoked from network); 18 Dec 2012 01:12:53 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 01:12:53 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBI1CgAI016126
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 01:12:42 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBI1CefI005899
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 01:12:41 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBI1Ce49027026; Mon, 17 Dec 2012 19:12:40 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Dec 2012 17:12:40 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0C43B1C05A8; Mon, 17 Dec 2012 20:12:39 -0500 (EST)
Date: Mon, 17 Dec 2012 20:12:39 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20121218011239.GB4518@phenom.dumpdata.com>
References: <1995613404.20121214165557@eikelenboom.it>
	<20121216173824.GA4518@phenom.dumpdata.com>
	<1782021524.20121217213217@eikelenboom.it>
	<20121217204634.GA13181@konrad-lan.dumpdata.com>
	<20121217211240.GA13412@konrad-lan.dumpdata.com>
	<29163327.20121217223558@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <29163327.20121217223558@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: fenghua.yu@intel.com, hpa@linux.intel.com, linux-kernel@vger.kernel.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 3.8.0-rc0 on xen-unstable: RCU Stall during boot as
 dom0 kernel after IOAPIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 10:35:58PM +0100, Sander Eikelenboom wrote:
> 
> Monday, December 17, 2012, 10:12:40 PM, you wrote:
> 
> > On Mon, Dec 17, 2012 at 03:46:34PM -0500, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Dec 17, 2012 at 09:32:17PM +0100, Sander Eikelenboom wrote:
> >> > 
> >> > Sunday, December 16, 2012, 6:38:24 PM, you wrote:
> >> > 
> >> > > On Fri, Dec 14, 2012 at 04:55:57PM +0100, Sander Eikelenboom wrote:
> >> > >> Hi Konrad,
> >> > >> 
> >> > >> I just tried to boot a 3.8.0-rc0 kernel (last commit: 7313264b899bbf3988841296265a6e0e8a7b6521) as dom0 on my machine with current xen-unstable.
> >> > 
> >> > > Yeah, saw it over the Dec 11->Dec 12 merges and was out on
> >> > > vacation during that time (just got back).
> >> > 
> >> > > Did you by any chance try to do a git bisect to narrow down
> >> > > which merge it was?
> >> > 
> >> > Hi Konrad,
> >> 
> >> Hey Sander,
> >> 
> >> Thank you for doing the bisection.
> >> 
> >> Fenghua - any ideas what might be amiss in the Xen subsystem?
> >> I hadn't looked at the patchset of the CPU0 offlining/onlining
> >> so I am not completely up to speed on the particulars of the patches.
> 
> >> > 30106c174311b8cfaaa3186c7f6f9c36c62d17da is the first bad commit
> >> > commit 30106c174311b8cfaaa3186c7f6f9c36c62d17da
> >> > Author: Fenghua Yu <fenghua.yu@intel.com>
> >> > Date:   Tue Nov 13 11:32:41 2012 -0800
> >> > 
> >> >     x86, hotplug: Support functions for CPU0 online/offline
> >> > 
> >> >     Add smp_store_boot_cpu_info() to store cpu info for BSP during boot time.
> >> > 
> >> >     Now smp_store_cpu_info() stores cpu info for bringing up BSP or AP after
> >> >     it's offline.
> >> > 
> >> >     Continue to online CPU0 in native_cpu_up().
> >> > 
> >> >     Continue to offline CPU0 in native_cpu_disable().
> >> > 
> >> >     Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
> >> >     Link: http://lkml.kernel.org/r/1352835171-3958-5-git-send-email-fenghua.yu@intel.com
> >> >     Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
> >> > 
> 
> > This patch:
> 
> 
> > diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> > index 353c50f..4f7d259 100644
> > --- a/arch/x86/xen/smp.c
> > +++ b/arch/x86/xen/smp.c
> > @@ -254,7 +254,7 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
> >         }
> >         xen_init_lock_cpu(0);
> >  
> > -       smp_store_cpu_info(0);
> > +       smp_store_boot_cpu_info();
> >         cpu_data(0).x86_max_cores = 1;
> >  
> >         for_each_possible_cpu(i) {
> 
> > Would do the corresponding change in the Xen subsystem that the above
> > mentioned commit did. Perhaps that is all that is needed? I am going to
> > be able to test this and look in more details tomorrow.
> 
> Seems like it, don't know if there are other things still lurking, but with your patch it boots again as dom0 :-)

Excellent. And it seems that it also fixes it on my test machine. Great.
I am going to stick Reported-and-Tested-by: Sander Eikelenboom
and push it to Linus shortly. Thanks!
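
The bisection Sander ran to identify 30106c17 as the first bad commit is a binary search over the commit range: git bisect repeatedly tests the midpoint and halves the suspect range. A minimal sketch of that underlying algorithm (the commit list and `is_bad` predicate here are hypothetical stand-ins, not the actual kernel history):

```python
def first_bad(commits, is_bad):
    """Binary search for the first bad commit, assuming the history is
    monotone: once a commit is bad, every later commit is also bad
    (the same assumption git bisect relies on)."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid        # first bad commit is at mid or earlier
        else:
            lo = mid + 1    # first bad commit is after mid
    return commits[lo]

# Example: commits 0..9, where everything from commit 6 onward is broken.
print(first_bad(list(range(10)), lambda c: c >= 6))  # -> 6
```

Note the log-scale cost: about log2(N) test boots for N candidate merges, which is why a bisection over a large merge window is still practical.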

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 18 02:17:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 02:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkmjr-00047U-7t; Tue, 18 Dec 2012 02:16:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tkmjo-00047P-Rr
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 02:16:57 +0000
Received: from [85.158.143.35:17856] by server-2.bemta-4.messagelabs.com id
	E1/39-30861-712DFC05; Tue, 18 Dec 2012 02:16:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355797007!16665395!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22011 invoked from network); 18 Dec 2012 02:16:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 02:16:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="212649"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 02:16:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 02:16:45 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tkmjd-00038u-Qa;
	Tue, 18 Dec 2012 02:16:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tkmjd-0002cn-Em;
	Tue, 18 Dec 2012 02:16:45 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14774-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 02:16:45 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14774: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14774 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14774/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14683
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14683

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  11b4bc743b1f
baseline version:
 xen                  bfdbf9747fc4

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ronny Hegewald <Ronny.Hegewald@online.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=11b4bc743b1f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 11b4bc743b1f
+ branch=xen-4.2-testing
+ revision=11b4bc743b1f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r 11b4bc743b1f ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 4 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  11b4bc743b1f
baseline version:
 xen                  bfdbf9747fc4

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ronny Hegewald <Ronny.Hegewald@online.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=11b4bc743b1f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 11b4bc743b1f
+ branch=xen-4.2-testing
+ revision=11b4bc743b1f
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r 11b4bc743b1f ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 4 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 02:47:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 02:47:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TknCb-0004R0-2l; Tue, 18 Dec 2012 02:46:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TknCa-0004Qv-3T
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 02:46:40 +0000
Received: from [85.158.143.35:6078] by server-3.bemta-4.messagelabs.com id
	A9/4E-18211-F09DFC05; Tue, 18 Dec 2012 02:46:39 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355798796!12405081!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NjczNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18685 invoked from network); 18 Dec 2012 02:46:37 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-12.tower-21.messagelabs.com with SMTP;
	18 Dec 2012 02:46:37 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 17 Dec 2012 18:46:35 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,305,1355126400"; d="scan'208";a="265852689"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga002.fm.intel.com with ESMTP; 17 Dec 2012 18:46:33 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Tue, 18 Dec 2012 10:36:34 +0800
Message-Id: <1355798194-18967-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH] nested vmx: nested TPR shadow/threshold
	emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The TPR shadow/threshold feature is important for speeding up boot
time for Windows guests; moreover, some VMMs require it.

We map the virtual APIC page address and TPR threshold from the L1
VMCS and sync them into the shadow VMCS on virtual vmentry. If a
TPR_BELOW_THRESHOLD VM exit is triggered by the L2 guest, we
inject it into the L1 VMM for handling.

This commit also fixes an issue with the APIC access page: if the
L1 VMM did not enable that feature, we need to write zero into the
shadow VMCS.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c |    4 +++-
 xen/arch/x86/hvm/vmx/vvmx.c |   43 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 535248a..c5c503e 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -324,7 +324,9 @@ void vmx_intr_assist(void)
     }
 
  out:
-    if ( !cpu_has_vmx_virtual_intr_delivery && cpu_has_vmx_tpr_shadow )
+    if ( !nestedhvm_vcpu_in_guestmode(v) &&
+         !cpu_has_vmx_virtual_intr_delivery &&
+         cpu_has_vmx_tpr_shadow )
         __vmwrite(TPR_THRESHOLD, tpr_threshold);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b005816..7b27d2d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -471,8 +471,7 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
     shadow_cntrl = __n2_exec_control(v);
     pio_cntrl &= shadow_cntrl;
     /* Enforce the removed features */
-    shadow_cntrl &= ~(CPU_BASED_TPR_SHADOW
-                      | CPU_BASED_ACTIVATE_MSR_BITMAP
+    shadow_cntrl &= ~(CPU_BASED_ACTIVATE_MSR_BITMAP
                       | CPU_BASED_ACTIVATE_IO_BITMAP
                       | CPU_BASED_UNCOND_IO_EXITING);
     shadow_cntrl |= host_cntrl;
@@ -570,6 +569,38 @@ static void nvmx_update_apic_access_address(struct vcpu *v)
         __vmwrite(APIC_ACCESS_ADDR, (apic_mfn << PAGE_SHIFT));
         hvm_unmap_guest_frame(apic_va); 
     }
+    else
+        __vmwrite(APIC_ACCESS_ADDR, 0);
+}
+
+static void nvmx_update_virtual_apic_address(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u64 vapic_gpfn, vapic_mfn;
+    u32 ctrl;
+    void *vapic_va;
+
+    ctrl = __n2_exec_control(v);
+    if ( ctrl & CPU_BASED_TPR_SHADOW )
+    {
+        vapic_gpfn = __get_vvmcs(nvcpu->nv_vvmcx, VIRTUAL_APIC_PAGE_ADDR) >> PAGE_SHIFT;
+        vapic_va = hvm_map_guest_frame_ro(vapic_gpfn);
+        vapic_mfn = virt_to_mfn(vapic_va);
+        __vmwrite(VIRTUAL_APIC_PAGE_ADDR, (vapic_mfn << PAGE_SHIFT));
+        hvm_unmap_guest_frame(vapic_va); 
+    }
+    else
+        __vmwrite(VIRTUAL_APIC_PAGE_ADDR, 0);
+}
+
+static void nvmx_update_tpr_threshold(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    u32 ctrl = __n2_exec_control(v);
+    if ( ctrl & CPU_BASED_TPR_SHADOW )
+        __vmwrite(TPR_THRESHOLD, __get_vvmcs(nvcpu->nv_vvmcx, TPR_THRESHOLD));
+    else
+        __vmwrite(TPR_THRESHOLD, 0);
 }
 
 static void __clear_current_vvmcs(struct vcpu *v)
@@ -780,6 +811,8 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_entry_control(v);
     vmx_update_exception_bitmap(v);
     nvmx_update_apic_access_address(v);
+    nvmx_update_virtual_apic_address(v);
+    nvmx_update_tpr_threshold(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
@@ -1371,6 +1404,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_ACTIVATE_MSR_BITMAP |
                CPU_BASED_PAUSE_EXITING |
                CPU_BASED_RDPMC_EXITING |
+               CPU_BASED_TPR_SHADOW |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
         data = gen_vmx_msr(data, VMX_PROCBASED_CTLS_DEFAULT1, host_data);
         break;
@@ -1707,6 +1741,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
             nvcpu->nv_vmexit_pending = 1;
         break;
+    case EXIT_REASON_TPR_BELOW_THRESHOLD:
+        ctrl = __n2_exec_control(v);
+        if ( ctrl & CPU_BASED_TPR_SHADOW )
+            nvcpu->nv_vmexit_pending = 1;
+        break;
     default:
         gdprintk(XENLOG_WARNING, "Unknown nested vmexit reason %x.\n",
                  exit_reason);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 04:58:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 04:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkpFs-00053J-Vl; Tue, 18 Dec 2012 04:58:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkpFq-00053E-Ev
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 04:58:10 +0000
Received: from [85.158.137.99:22355] by server-1.bemta-3.messagelabs.com id
	1C/54-08906-1E7FFC05; Tue, 18 Dec 2012 04:58:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355806688!17507299!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23672 invoked from network); 18 Dec 2012 04:58:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 04:58:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="213760"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 04:58:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 04:58:07 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkpFn-0003wt-Lq;
	Tue, 18 Dec 2012 04:58:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkpFm-00054A-J3;
	Tue, 18 Dec 2012 04:58:07 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14775-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 04:58:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14775: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7187836636461727687=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7187836636461727687==
Content-Type: text/plain

flight 14775 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14775/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemut-win    7 windows-install           fail REGR. vs. 14771

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14771
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14771

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3664b0420dfa
baseline version:
 xen                  f50aab21f9f2

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26283:3664b0420dfa
tag:         tip
user:        Julien Grall <julien.grall@citrix.com>
date:        Mon Dec 17 18:04:54 2012 +0000
    
    libxenstore: filter watch events in libxenstore when we unwatch
    
    XenStore queues watch events via a thread and notifies the user.
    Sometimes xs_unwatch is called before all related messages have been
    read.  The use case is non-threaded libevent, with two events A and B:
        - Event A destroys something and calls xs_unwatch;
        - Event B notifies that a node has changed in XenStore.
    As events are handled one by one, event A can be handled before event
    B.  So on the next xs_read_watch the user could retrieve a token for
    the removed watch, and a segfault occurs if the token stores a pointer
    to the now-freed structure (e.g. "backend:0xcafe").
    
    To avoid problems for existing applications using libxenstore, this
    behaviour is only enabled if XS_UNWATCH_FILTER is passed to xs_open.
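
A minimal sketch of the filtering idea (illustrative only — these names
and data structures are not the actual libxenstore implementation):
when a watch is removed, any already-queued events carrying its token
are discarded, so a later read can never hand back a stale token.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of the XS_UNWATCH_FILTER behaviour: discard
 * queued watch events whose token matches the watch being removed,
 * so a subsequent read cannot return a token whose owner was freed. */

#define MAX_EVENTS 16

struct watch_event {
    const char *path;
    const char *token;
};

static struct watch_event queue[MAX_EVENTS];
static int queued;

static void queue_event(const char *path, const char *token)
{
    if (queued < MAX_EVENTS)
        queue[queued++] = (struct watch_event){ path, token };
}

/* Compact the queue, keeping only events for tokens still watched. */
static void unwatch_filter(const char *token)
{
    int i, j = 0;

    for (i = 0; i < queued; i++)
        if (strcmp(queue[i].token, token) != 0)
            queue[j++] = queue[i];
    queued = j;
}
```

In the scenario from the commit message, event A's unwatch would drop
event B's pending notification before it could be misread.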
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Signed-off-by: Julien Grall <julien.grall@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   26282:f50aab21f9f2
user:        Andrew Cooper <andrew.cooper3@citrix.com>
date:        Thu Dec 13 14:39:31 2012 +0000
    
    x86/kexec: Change NMI and MCE handling on kexec path
    
    Experimentally, certain crash kernels will triple fault very early
    after starting if started with NMIs disabled.  This was discovered
    when experimenting with a debug keyhandler which deliberately created
    a reentrant NMI, causing stack corruption.
    
    Because of this bug, and because future changes to the NMI handling
    will make the kexec path more fragile, take the time now to
    bullet-proof the kexec behaviour so it is safer in more circumstances.
    
    This patch adds three new low level routines:
     * nmi_crash
        This is a special NMI handler for use during a kexec crash.
     * enable_nmis
        This function enables NMIs by executing an iret-to-self, to
        disengage the hardware NMI latch.
     * trap_nop
        This is a no-op handler which irets immediately.  It is not
        declared with ENTRY() to avoid the extra alignment overhead.
    
    And adds three new IDT entry helper routines:
     * _write_gate_lower
        This is a substitute for using cmpxchg16b to update a 128bit
        structure at once.  It assumes that the top 64 bits are unchanged
        (and ASSERT()s the fact) and performs a regular write on the lower
        64 bits.
     * _set_gate_lower
        This is functionally equivalent to the already present
        _set_gate(), except it uses _write_gate_lower rather than updating
        both 64bit values.
     * _update_gate_addr_lower
        This is designed to update an IDT entry handler only, without
        altering any other settings in the entry.  It also uses
        _write_gate_lower.
    
    The IDT entry helpers are required because:
      * It is unsafe to attempt a disable/update/re-enable cycle on the
        NMI or MCE IDT entries.
      * We need to be able to update NMI handlers without changing the IST
        entry.
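
The lower-half update trick can be sketched in plain C (a simplified
illustrative layout, not Xen's actual idt_entry_t or _write_gate_lower):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 16-byte x86-64 IDT entry, viewed as two 64-bit halves
 * (layout simplified; not Xen's actual idt_entry_t). */
typedef struct {
    uint64_t lo;   /* selector, type bits, low parts of the offset */
    uint64_t hi;   /* top 32 bits of the offset, reserved bits */
} gate_t;

/* Sketch of _write_gate_lower: assert the top half is unchanged, then
 * write only the bottom half.  The entry stays valid at every instant,
 * unlike a disable/update/re-enable cycle, which leaves a window where
 * an incoming NMI/MCE would hit a disabled gate. */
static void write_gate_lower(volatile gate_t *gate, const gate_t *new)
{
    assert(gate->hi == new->hi);
    gate->lo = new->lo;
}
```

_set_gate_lower and _update_gate_addr_lower would then be thin wrappers
that build the new low half and call this helper.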
    
    As a result, the new behaviour of the kexec_crash path is:
    
    nmi_shootdown_cpus() will:
    
     * Disable the crashing cpu's NMI/MCE interrupt stack tables.
        Disabling the stack tables removes race conditions which would
        lead to corrupt exception frames and infinite loops.  As this
        pcpu will never execute a sysret back to a pv vcpu, the update is
        safe from a security point of view.
    
     * Swap the NMI trap handlers.
        The crashing pcpu gets the nop handler, to prevent it getting
        stuck in an NMI context and causing a hang instead of a crash.
        The non-crashing pcpus all get the nmi_crash handler, which is
        designed never to return.
    
    do_nmi_crash() will:
    
     * Save the crash notes and shut the pcpu down.
        There is now an extra per-cpu variable to prevent us from
        executing this multiple times.  In the case where we reenter
        midway through, attempt the whole operation again in preference to
        not completing it in the first place.
    
     * Set up another NMI at the LAPIC.
        Even when the LAPIC has been disabled, the ID and command
        registers are still usable.  As a result, we can deliberately
        queue up a new NMI to re-interrupt us later if NMIs get unlatched.
        Because of the call to __stop_this_cpu(), we have to hand craft
        self_nmi() to be safe from General Protection Faults.
    
     * Fall into an infinite loop.
    
    machine_kexec() will:
    
      * Swap the MCE handlers to be a nop.
         We cannot prevent MCEs from being delivered when we pass off to
         the crash kernel, and the less Xen context is being touched the
         better.
    
      * Explicitly enable NMIs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    
    Minor style changes.
    
    Signed-off-by: Keir Fraser <keir@xen.org>
    
    Committed-by: Keir Fraser <keir@xen.org>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config, which was then saved stripped in
    a variable. In xenstore_process_event we compare the "param" from
    xenstore (not stripped) with the stripped "param" saved in the
    variable, which leads to a medium change (even if there isn't any),
    since we are comparing something like aio:/path/to/file with
    /path/to/file. This only happens once, since
    xenstore_parse_domain_config is the only place where we strip the
    prefix. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just as the guest
    is booting, leaving the guest unable to boot because the BIOS cannot
    access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
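
The mismatch can be reproduced in a few lines of plain C (hypothetical
helper name; the real logic lives in the stubdom's xenstore handling):
comparing the raw xenstore value against the previously stripped copy
always differs, which is what triggered the spurious medium change.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the bug: the saved "params" value had its "aio:"-style
 * prefix stripped at parse time, but the value re-read on a watch
 * event did not, so a plain strcmp always saw a difference and
 * triggered a medium change.  strip_prefix is a hypothetical helper. */
static const char *strip_prefix(const char *params)
{
    const char *colon = strchr(params, ':');
    return colon ? colon + 1 : params;
}
```

Stripping both sides (or neither) before comparing makes the values
agree and suppresses the bogus medium change.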
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


--===============7187836636461727687==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7187836636461727687==--

From xen-devel-bounces@lists.xen.org Tue Dec 18 04:58:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 04:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkpFs-00053J-Vl; Tue, 18 Dec 2012 04:58:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkpFq-00053E-Ev
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 04:58:10 +0000
Received: from [85.158.137.99:22355] by server-1.bemta-3.messagelabs.com id
	1C/54-08906-1E7FFC05; Tue, 18 Dec 2012 04:58:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355806688!17507299!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23672 invoked from network); 18 Dec 2012 04:58:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 04:58:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="213760"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 04:58:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 04:58:07 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkpFn-0003wt-Lq;
	Tue, 18 Dec 2012 04:58:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkpFm-00054A-J3;
	Tue, 18 Dec 2012 04:58:07 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14775-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 04:58:06 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14775: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7187836636461727687=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7187836636461727687==
Content-Type: text/plain

flight 14775 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14775/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemut-win    7 windows-install           fail REGR. vs. 14771

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14771
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14771

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3664b0420dfa
baseline version:
 xen                  f50aab21f9f2

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26283:3664b0420dfa
tag:         tip
user:        Julien Grall <julien.grall@citrix.com>
date:        Mon Dec 17 18:04:54 2012 +0000
    
    libxenstore: filter watch events in libxenstore when we unwatch
    
    XenStore queues watch events via a thread and notifies the user.
    Sometimes xs_unwatch is called before all related messages have been
    read.  The use case is non-threaded libevent, with two events A and B:
        - Event A destroys something and calls xs_unwatch;
        - Event B notifies that a node has changed in XenStore.
    As events are handled one by one, event A can be handled before event
    B.  So on the next xs_read_watch the user could retrieve a token for
    the removed watch, and a segfault occurs if the token stores a pointer
    to the now-freed structure (e.g. "backend:0xcafe").
    
    To avoid problems for existing applications using libxenstore, this
    behaviour is only enabled if XS_UNWATCH_FILTER is passed to xs_open.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Signed-off-by: Julien Grall <julien.grall@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   26282:f50aab21f9f2
user:        Andrew Cooper <andrew.cooper3@citrix.com>
date:        Thu Dec 13 14:39:31 2012 +0000
    
    x86/kexec: Change NMI and MCE handling on kexec path
    
    Experimentally, certain crash kernels will triple fault very early
    after starting if started with NMIs disabled.  This was discovered
    when experimenting with a debug keyhandler which deliberately created
    a reentrant NMI, causing stack corruption.
    
    Because of this bug, and because future changes to the NMI handling
    will make the kexec path more fragile, take the time now to
    bullet-proof the kexec behaviour so it is safer in more circumstances.
    
    This patch adds three new low level routines:
     * nmi_crash
        This is a special NMI handler for use during a kexec crash.
     * enable_nmis
        This function enables NMIs by executing an iret-to-self, to
        disengage the hardware NMI latch.
     * trap_nop
        This is a no-op handler which irets immediately.  It is not
        declared with ENTRY() to avoid the extra alignment overhead.
    
    And adds three new IDT entry helper routines:
     * _write_gate_lower
        This is a substitute for using cmpxchg16b to update a 128bit
        structure at once.  It assumes that the top 64 bits are unchanged
        (and ASSERT()s the fact) and performs a regular write on the lower
        64 bits.
     * _set_gate_lower
        This is functionally equivalent to the already present
        _set_gate(), except it uses _write_gate_lower rather than updating
        both 64bit values.
     * _update_gate_addr_lower
        This is designed to update an IDT entry handler only, without
        altering any other settings in the entry.  It also uses
        _write_gate_lower.
    
    The IDT entry helpers are required because:
      * It is unsafe to attempt a disable/update/re-enable cycle on the
        NMI or MCE IDT entries.
      * We need to be able to update NMI handlers without changing the IST
        entry.
    
    As a result, the new behaviour of the kexec_crash path is:
    
    nmi_shootdown_cpus() will:
    
     * Disable the crashing cpu's NMI/MCE interrupt stack tables.
        Disabling the stack tables removes race conditions which would
        lead to corrupt exception frames and infinite loops.  As this
        pcpu will never execute a sysret back to a pv vcpu, the update is
        safe from a security point of view.
    
     * Swap the NMI trap handlers.
        The crashing pcpu gets the nop handler, to prevent it getting
        stuck in an NMI context and causing a hang instead of a crash.
        The non-crashing pcpus all get the nmi_crash handler, which is
        designed never to return.
    
    do_nmi_crash() will:
    
     * Save the crash notes and shut the pcpu down.
        There is now an extra per-cpu variable to prevent us from
        executing this multiple times.  In the case where we reenter
        midway through, attempt the whole operation again in preference to
        not completing it in the first place.
    
     * Set up another NMI at the LAPIC.
        Even when the LAPIC has been disabled, the ID and command
        registers are still usable.  As a result, we can deliberately
        queue up a new NMI to re-interrupt us later if NMIs get unlatched.
        Because of the call to __stop_this_cpu(), we have to hand craft
        self_nmi() to be safe from General Protection Faults.
    
     * Fall into an infinite loop.
    
    machine_kexec() will:
    
      * Swap the MCE handlers to be a nop.
         We cannot prevent MCEs from being delivered when we pass off to
         the crash kernel, and the less Xen context is being touched the
         better.
    
      * Explicitly enable NMIs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    
    Minor style changes.
    
    Signed-off-by: Keir Fraser <keir@xen.org>
    
    Committed-by: Keir Fraser <keir@xen.org>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config, which was then saved stripped in
    a variable. In xenstore_process_event we compare the "param" from
    xenstore (not stripped) with the stripped "param" saved in the
    variable, which leads to a medium change (even if there isn't any),
    since we are comparing something like aio:/path/to/file with
    /path/to/file. This only happens once, since
    xenstore_parse_domain_config is the only place where we strip the
    prefix. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just when the
    guest is booting, which leads to the guest not being able to boot
    because the BIOS is not able to access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
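
    [ Editorial sketch: the prefix handling the commit message describes can
      be pictured with a small, self-contained example. strip_prefix below is
      purely illustrative and is not the actual qemu/xenstore_parse_domain_config
      code; names and paths are taken from the trace above. ]

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Illustrative only: return the tail of a xenstore watch path once a
     * known backend prefix has been stripped -- the kind of normalisation
     * the commit message says only xenstore_parse_domain_config performed.
     */
    static const char *strip_prefix(const char *path, const char *prefix)
    {
        size_t len = strlen(prefix);

        /* If the path does not start with the prefix, leave it untouched. */
        return strncmp(path, prefix, len) == 0 ? path + len : path;
    }

    int main(void)
    {
        const char *tail = strip_prefix(
            "/local/domain/0/backend/qdisk/19/5632/params",
            "/local/domain/0/backend/qdisk/19/5632");

        assert(strcmp(tail, "/params") == 0);
        return 0;
    }
    ```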


--===============7187836636461727687==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7187836636461727687==--

From xen-devel-bounces@lists.xen.org Tue Dec 18 05:16:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 05:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkpX7-0005TC-Hy; Tue, 18 Dec 2012 05:16:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkpX4-0005T7-Nk
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 05:15:59 +0000
Received: from [193.109.254.147:8099] by server-6.bemta-14.messagelabs.com id
	D5/80-25153-E0CFFC05; Tue, 18 Dec 2012 05:15:58 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355807755!9060139!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjc0NjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17249 invoked from network); 18 Dec 2012 05:15:57 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 05:15:57 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBI5FoPp017024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 05:15:50 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBI5Fnqu026981
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 05:15:49 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBI5FnTJ024649; Mon, 17 Dec 2012 23:15:49 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Dec 2012 21:15:48 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D1F721C05A8; Tue, 18 Dec 2012 00:15:47 -0500 (EST)
Date: Tue, 18 Dec 2012 00:15:47 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: axboe@kernel.dk, linux-kernel@vger.kernel.org
Message-ID: <20121218051547.GA13450@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [GIT PULL] Bug-fixes to xen-blkfront for v3.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey Jens,

Please git pull the following branch:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.8

which has bug-fixes to the xen-blkfront and xen-blkback drivers
when using persistent grants. An issue was discovered where LVM
disks could not be read correctly, and this fixes it. There
is also a change in llist.h which has been blessed by akpm.

Please pull!

 drivers/block/xen-blkback/blkback.c | 18 +++++++++++-------
 drivers/block/xen-blkfront.c        | 10 ++++++----
 include/linux/llist.h               | 25 +++++++++++++++++++++++++
 3 files changed, 42 insertions(+), 11 deletions(-)

Roger Pau Monne (3):
      xen-blkback: implement safe iterator for the list of persistent grants
      llist/xen-blkfront: implement safe version of llist_for_each_entry
      xen-blkfront: handle bvecs with partial data


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 05:42:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 05:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkpwO-0005gw-UF; Tue, 18 Dec 2012 05:42:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fengguang.wu@intel.com>) id 1TkpwN-0005go-Cl
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 05:42:07 +0000
Received: from [85.158.143.35:49395] by server-2.bemta-4.messagelabs.com id
	B8/F3-30861-E2200D05; Tue, 18 Dec 2012 05:42:06 +0000
X-Env-Sender: fengguang.wu@intel.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355809324!5553580!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NjczNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4196 invoked from network); 18 Dec 2012 05:42:05 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-2.tower-21.messagelabs.com with SMTP;
	18 Dec 2012 05:42:05 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 17 Dec 2012 21:42:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,307,1355126400"; d="scan'208";a="265918903"
Received: from bee.sh.intel.com (HELO localhost) ([10.239.97.14])
	by fmsmga002.fm.intel.com with ESMTP; 17 Dec 2012 21:41:38 -0800
Received: from [192.168.1.144] (helo=kbuild.lkp.intel.com)
	by localhost with smtp (Exim 4.80)
	(envelope-from <fengguang.wu@intel.com>)
	id 1Tkpv0-000OWJ-72; Tue, 18 Dec 2012 13:40:42 +0800
Date: Tue, 18 Dec 2012 13:39:35 +0800
From: kbuild test robot <fengguang.wu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <50d00197.j0ofCx2fkiqUCGLe%fengguang.wu@intel.com>
User-Agent: Heirloom mailx 12.5 6/20/10
MIME-Version: 1.0
X-SA-Exim-Connect-IP: 192.168.1.144
X-SA-Exim-Mail-From: fengguang.wu@intel.com
X-SA-Exim-Scanned: No (on localhost); SAEximRunCond expanded to false
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [xen:stable/for-linus-3.8 11/13]
	arch/x86/xen/smp.c:257:2: error: implicit declaration of
	function 'smp_store_boot_cpu_info'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

tree:   git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8
head:   9d328a948f38ec240fc6d05db2c146e23ccd9b8b
commit: 06d0b5d9edcecccab45588a472cd34af2608e665 [11/13] xen/smp: Use smp_store_boot_cpu_info() to store cpu info for BSP during boot time.
config: make ARCH=x86_64 allmodconfig

All error/warnings:

arch/x86/xen/smp.c: In function 'xen_smp_prepare_cpus':
arch/x86/xen/smp.c:257:2: error: implicit declaration of function 'smp_store_boot_cpu_info' [-Werror=implicit-function-declaration]
cc1: some warnings being treated as errors

vim +/smp_store_boot_cpu_info +257 arch/x86/xen/smp.c

ed467e69 Konrad Rzeszutek Wilk 2011-09-01  251  
ed467e69 Konrad Rzeszutek Wilk 2011-09-01  252  		xen_raw_printk(m);
ed467e69 Konrad Rzeszutek Wilk 2011-09-01  253  		panic(m);
ed467e69 Konrad Rzeszutek Wilk 2011-09-01  254  	}
2d9e1e2f Jeremy Fitzhardinge   2008-07-07  255  	xen_init_lock_cpu(0);
2d9e1e2f Jeremy Fitzhardinge   2008-07-07  256  
06d0b5d9 Konrad Rzeszutek Wilk 2012-12-17 @257  	smp_store_boot_cpu_info();
c7b75947 Jeremy Fitzhardinge   2008-07-08  258  	cpu_data(0).x86_max_cores = 1;
900cba88 Andrew Jones          2009-12-18  259  
900cba88 Andrew Jones          2009-12-18  260  	for_each_possible_cpu(i) {

---
0-DAY kernel build testing backend         Open Source Technology Center
Fengguang Wu, Yuanhan Liu                              Intel Corporation

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 07:05:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 07:05:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkrEb-0006JX-44; Tue, 18 Dec 2012 07:05:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TkrEZ-0006JS-17
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 07:04:59 +0000
Received: from [85.158.143.35:44031] by server-3.bemta-4.messagelabs.com id
	A1/B8-18211-A9510D05; Tue, 18 Dec 2012 07:04:58 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355814295!5559779!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzE3MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10754 invoked from network); 18 Dec 2012 07:04:56 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 07:04:56 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 574C521BA;
	Tue, 18 Dec 2012 09:03:52 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id DD5A220062; Tue, 18 Dec 2012 09:03:51 +0200 (EET)
Date: Tue, 18 Dec 2012 09:03:51 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Martinx - ?$B%8%'!<%`%:" <thiagocmartinsc@gmail.com>
Message-ID: <20121218070351.GK8912@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
	<CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 09:57:59PM -0200, Martinx - ジェームズ wrote:
>    Hi Pasi!
>     Can you tell me if it will be possible to use Xen like this:
>     dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst -> Spice
>    -> Spice-Client ?
>     I do not want to use Spice "alone" and, I do not want to use my domU with
>    my ATI without SPICE...  That makes sense?
>    Thanks!

Afaik SPICE uses and requires the virtual QXL GPU for efficient operation,
so it doesn't work with physical GPUs.

-- Pasi

>    Thiago
>
>    On 20 August 2012 16:14, Pasi Kärkkäinen <[1]pasik@iki.fi> wrote:
>
>      On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
>      >
>      > Features and improvements not on this list are of course welcome at
>      > any time before the feature freeze.
>      >
>      > Any questions and feedback are welcome!
>      >
>      > Your 4.3 release coordinator,
>      >  George Dunlap
>      >
>
>      <snip>
>      >
>      > * xl USB pass-through for PV guests
>      >   owner: ?
>      >   Port the xend PV pass-through functionality to xl.
>      >
>
>      xm/xend PVUSB works for both PV and HVM guests, so xl should support
>      PVUSB for both PV and HVM guests as well.
>      James Harper's GPLPV drivers actually do have a PVUSB frontend driver for
>      Windows.
>
>      Also Suse's xenlinux forward-ported patches have PVUSB support in
>      unmodified_drivers for HVM guests.
>
>      Another USB item:
>
>      * xl support for USB device passthru using QEMU emulated USB for HVM
>      guests (no need for PVUSB drivers in the HVM guest).
>        This works today in xm/xend with qemu-traditional, but is limited to
>      USB 1.1, probably because
>        the old version of Qemu-dm-traditional lacks USB 2.0/3.0.
>        So xl support for emulated USB device passthru for both qemu-upstream
>      and qemu-traditional.
>
>      More wishlist items:
>
>      * Nested hardware virtualization. Important for easier testing and
>      development of Xen (Xen-on-Xen),
>        and for running other hypervisors in Xen VMs. Interesting for labs,
>      POCs, etc.
>
>      * VGA/GPU passthru support for AMD/NVIDIA; lots of patches on xen-devel
>      archives,
>        but no one has yet stepped up to clean up and get them merged.
>        Currently Intel gfx passthru patches are merged to Xen, but primary
>      ATI/NVIDIA require extra patches.
>        This is actually something that a LOT of users ask about often; it's
>      discussed almost every day on ##xen on IRC.
>        I wonder if XenClient folks could help here?
>
>      * Dom0 Keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU
>      passthru users.
>        Fujitsu guys posted some patches for this in 2010, and XenClient guys
>      in 2009 (iirc),
>        but nothing got further developed and merged to upstream Xen.
>
>      * QXL virtual GPU support for SPICE. Someone was already developing
>      this,
>        and posted patches earlier during the 4.2 development cycle to xen-devel.
>        Upstream Qemu includes QXL support.
>
>      * PVSCSI support in xl. James Harper was (semi) interested in working
>      with this,
>        because he has a PVSCSI frontend driver in the Windows GPLPV drivers, and
>      he's using PVSCSI for tape backups himself.
>
>      * libvirt libxl driver improvements; support more Xen features.
>        Allows better use of the Ubuntu/Debian/Fedora/RHEL/CentOS "default"
>      virtualization GUI also with Xen.
>
>      Hopefully we'll find interested developers for these items :)
>
>      -- Pasi
>
>      _______________________________________________
>      Xen-devel mailing list
>      [2]Xen-devel@lists.xen.org
>      [3]http://lists.xen.org/xen-devel
>
> References
>
>    Visible links
>    1. mailto:pasik@iki.fi
>    2. mailto:Xen-devel@lists.xen.org
>    3. http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 07:05:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 07:05:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkrEb-0006JX-44; Tue, 18 Dec 2012 07:05:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TkrEZ-0006JS-17
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 07:04:59 +0000
Received: from [85.158.143.35:44031] by server-3.bemta-4.messagelabs.com id
	A1/B8-18211-A9510D05; Tue, 18 Dec 2012 07:04:58 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355814295!5559779!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzE3MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10754 invoked from network); 18 Dec 2012 07:04:56 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 07:04:56 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 574C521BA;
	Tue, 18 Dec 2012 09:03:52 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id DD5A220062; Tue, 18 Dec 2012 09:03:51 +0200 (EET)
Date: Tue, 18 Dec 2012 09:03:51 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Martinx - ?$B%8%'!<%`%:" <thiagocmartinsc@gmail.com>
Message-ID: <20121218070351.GK8912@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
	<CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 09:57:59PM -0200, Martinx - ジェームズ wrote:
>    Hi Pasi!
>     Can you tell me if it will be possible to use Xen like this:
>     dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst -> Spice
>    -> Spice-Client ?
>     I do not want to use Spice "alone", and I do not want to use my domU with
>    my ATI without SPICE... Does that make sense?
>    Thanks!

AFAIK SPICE uses and requires the virtual QXL GPU for efficient operation,
so it doesn't work with physical GPUs.

-- Pasi

>    Thiago
>
>    On 20 August 2012 16:14, Pasi Kärkkäinen <[1]pasik@iki.fi> wrote:
>
>      On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
>      >
>      > Features and improvements not on this list are of course welcome at
>      > any time before the feature freeze.
>      >
>      > Any questions and feedback are welcome!
>      >
>      > Your 4.3 release coordinator,
>      >  George Dunlap
>      >
>
>      <snip>
>      >
>      > * xl USB pass-through for PV guests
>      >   owner: ?
>      >   Port the xend PV pass-through functionality to xl.
>      >
>
>      xm/xend PVUSB works for both PV and HVM guests, so xl should support
>      PVUSB for both PV and HVM guests as well.
>      James Harper's GPLPV drivers actually do have a PVUSB frontend driver
>      for Windows.
>
>      Also, SUSE's xenlinux forward-ported patches have PVUSB support in
>      unmodified_drivers for HVM guests.
>
>      Another USB item:
>
>      * xl support for USB device passthrough using QEMU emulated USB for HVM
>      guests (no need for PVUSB drivers in the HVM guest).
>        This works today in xm/xend with qemu-traditional, but is limited to
>      USB 1.1, probably because the old version of qemu-dm-traditional lacks
>      USB 2.0/3.0 support.
>        So: xl support for emulated USB device passthrough for both
>      qemu-upstream and qemu-traditional.
>
>      More wishlist items:
>
>      * Nested hardware virtualization. Important for easier testing and
>      development of Xen (Xen-on-Xen),
>        and for running other hypervisors in Xen VMs. Interesting for labs,
>      POCs, etc.
>
>      * VGA/GPU passthrough support for AMD/NVIDIA; there are lots of patches
>      in the xen-devel archives,
>        but no one has yet stepped up to clean them up and get them merged.
>        Currently the Intel gfx passthrough patches are merged into Xen, but
>      primary ATI/NVIDIA passthrough requires extra patches.
>        This is actually something that a LOT of users ask for; it's
>      discussed almost every day on ##xen on IRC.
>        I wonder if the XenClient folks could help here?
>
>      * Dom0 keyboard/mouse sharing with HVM guests; mainly needed by VGA/GPU
>      passthrough users.
>        Fujitsu posted some patches for this in 2010, and XenClient
>      in 2009 (iirc),
>        but nothing was developed further and merged into upstream Xen.
>
>      * QXL virtual GPU support for SPICE. Someone was already developing
>      this,
>        and posted patches to xen-devel earlier during the 4.2 development
>      cycle. Upstream QEMU includes QXL support.
>
>      * PVSCSI support in xl. James Harper was (semi-)interested in working
>      on this,
>        because he has a PVSCSI frontend driver in the Windows GPLPV drivers,
>      and he's using PVSCSI for tape backups himself.
>
>      * libvirt libxl driver improvements; support more Xen features.
>        This allows the "default" Ubuntu/Debian/Fedora/RHEL/CentOS
>      virtualization GUI to be used with Xen as well.
>
>      Hopefully we'll find interested developers for these items :)
>
>      -- Pasi
>
>      _______________________________________________
>      Xen-devel mailing list
>      [2]Xen-devel@lists.xen.org
>      [3]http://lists.xen.org/xen-devel
>
> References
>
>    Visible links
>    1. mailto:pasik@iki.fi
>    2. mailto:Xen-devel@lists.xen.org
>    3. http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 07:49:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 07:49:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkrvZ-0006eb-Ef; Tue, 18 Dec 2012 07:49:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkrvX-0006eW-Vy
	for Xen-devel@lists.xen.org; Tue, 18 Dec 2012 07:49:24 +0000
Received: from [85.158.139.83:47031] by server-14.bemta-5.messagelabs.com id
	C7/5C-09538-30020D05; Tue, 18 Dec 2012 07:49:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355816962!22965066!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20081 invoked from network); 18 Dec 2012 07:49:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 07:49:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 07:49:21 +0000
Message-Id: <50D02E0E02000078000B0EE8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 07:49:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Sander Eikelenboom" <linux@eikelenboom.it>
References: <CAN=sCCFdBOKjjyqhr8Ae+cJseeTjMM41_xPC9__LtmVWsmR=qw@mail.gmail.com>
	<50CF5D0402000078000B0D17@nat28.tlf.novell.com>
	<1339249142.20121217185500@eikelenboom.it>
In-Reply-To: <1339249142.20121217185500@eikelenboom.it>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com>, Xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (XEN) traps.c:3156: GPF messages in xen dmesg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 18:55, Sander Eikelenboom <linux@eikelenboom.it> wrote:

> Monday, December 17, 2012, 5:57:24 PM, you wrote:
> 
>>>>> On 17.12.12 at 17:48, Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com> wrote:
>>> I'm running Xen 4.2.0 with Linux kernel 3.7.0 and I'm seeing a flood of
>>> these messages in xen dmesg:
>>> 
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>> 
>>> What does that mean and is it something that I should worry about?
> 
>> It means that the hypervisor recovered from #GP faults several
>> times. What exactly it was that faulted you'd need to look up by
>> translating the addresses above and resolving them to source
>> locations. That'll also tell you whether you ought to be worried.
> 
>> A common example of these happening is Xen carrying out MSR
>> accesses on behalf of the kernel, when the MSR actually isn't
>> implemented (i.e. the kernel itself also is prepared to handle
>> faults upon accessing them).
> 
> Perhaps related to a previous discussion (with an open end ...)
> http://lists.xen.org/archives/html/xen-devel/2012-09/msg01599.html 

Not impossible, but iirc those would generally be accompanied
by SIGSEGV-s and/or kernel crashes in some guest.

Jan
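
The address translation described above can be scripted; a minimal sketch,
assuming a standard `addr2line` and an illustrative `xen-syms` path:

```shell
# Pull the fault/fixup addresses out of a Xen GPF log line and print the
# addr2line invocation that resolves them (the xen-syms path is illustrative).
log='(XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4'
addrs=$(printf '%s\n' "$log" | sed -n 's/.*GPF (....): \([0-9a-f]*\) -> \([0-9a-f]*\).*/\1 \2/p')
echo "addr2line -e /path/to/xen-syms $addrs"
```

Running the printed command against the xen-syms file matching the installed
hypervisor maps each address to a source file and line, which is what tells
you whether the faults are benign.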


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 08:38:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 08:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TksgI-0007P9-Md; Tue, 18 Dec 2012 08:37:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TksgG-0007P4-T9
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 08:37:41 +0000
Received: from [85.158.139.211:36188] by server-16.bemta-5.messagelabs.com id
	5D/E4-09208-45B20D05; Tue, 18 Dec 2012 08:37:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1355819859!20983109!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3907 invoked from network); 18 Dec 2012 08:37:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 08:37:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 08:37:37 +0000
Message-Id: <50D0395E02000078000B0F31@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 08:37:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Lars Kurth <lars.kurth@citrix.com>
Subject: [Xen-devel] [ANNOUNCE] Xen 4.2.1 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Folks,

I am pleased to announce the release of Xen 4.2.1. This is
available immediately from its mercurial repository:
http://xenbits.xen.org/xen-4.2-testing.hg (tag RELEASE-4.2.1)

This fixes the following critical vulnerabilities:
 * CVE-2012-4535 / XSA-20:
    Timer overflow DoS vulnerability
 * CVE-2012-4537 / XSA-22:
    Memory mapping failure DoS vulnerability
 * CVE-2012-4538 / XSA-23:
    Unhooking empty PAE entries DoS vulnerability
 * CVE-2012-4539 / XSA-24:
    Grant table hypercall infinite loop DoS vulnerability
 * CVE-2012-4544, CVE-2012-2625 / XSA-25:
    Xen domain builder out-of-memory due to malicious kernel/ramdisk
 * CVE-2012-5510 / XSA-26:
    Grant table version switch list corruption vulnerability
 * CVE-2012-5511 / XSA-27:
    Several HVM operations do not validate the range of their inputs
 * CVE-2012-5513 / XSA-29:
    XENMEM_exchange may overwrite hypervisor memory
 * CVE-2012-5514 / XSA-30:
    Broken error handling in guest_physmap_mark_populate_on_demand()
 * CVE-2012-5515 / XSA-31:
    Several memory hypercall operations allow invalid extent order values
 * CVE-2012-5525 / XSA-32:
    Several hypercalls do not validate input GFNs

We recommend that all users of the 4.2.0 code base update to this
point release.

Among the many bug fixes and improvements (around 100 since Xen 4.2.0):
 * A fix for a long-standing time management issue
 * Bug fixes for S3 (suspend to RAM) handling
 * Bug fixes for other low-level system state handling
 * Bug fixes and improvements to the libxl tool stack
 * Bug fixes to nested virtualization

Regards,
Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 08:51:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 08:51:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tksta-0007bJ-3J; Tue, 18 Dec 2012 08:51:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TkstY-0007bE-Lp
	for Xen-devel@lists.xen.org; Tue, 18 Dec 2012 08:51:24 +0000
Received: from [85.158.139.83:60029] by server-1.bemta-5.messagelabs.com id
	54/72-12813-B8E20D05; Tue, 18 Dec 2012 08:51:23 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355820680!27601547!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3288 invoked from network); 18 Dec 2012 08:51:20 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	18 Dec 2012 08:51:20 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:49450 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tksxh-0004HR-ED; Tue, 18 Dec 2012 09:55:41 +0100
Date: Tue, 18 Dec 2012 09:51:16 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <15812275.20121218095116@eikelenboom.it>
To: "Jan Beulich" <JBeulich@suse.com>
In-Reply-To: <50D02E0E02000078000B0EE8@nat28.tlf.novell.com>
References: <CAN=sCCFdBOKjjyqhr8Ae+cJseeTjMM41_xPC9__LtmVWsmR=qw@mail.gmail.com>
	<50CF5D0402000078000B0D17@nat28.tlf.novell.com>
	<1339249142.20121217185500@eikelenboom.it>
	<50D02E0E02000078000B0EE8@nat28.tlf.novell.com>
MIME-Version: 1.0
Cc: Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com>, Xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (XEN) traps.c:3156: GPF messages in xen dmesg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, December 18, 2012, 8:49:18 AM, you wrote:

>>>> On 17.12.12 at 18:55, Sander Eikelenboom <linux@eikelenboom.it> wrote:

>> Monday, December 17, 2012, 5:57:24 PM, you wrote:
>> 
>>>>>> On 17.12.12 at 17:48, Valtteri Kiviniemi <kiviniemi.valtteri@gmail.com> wrote:
>>>> I'm running Xen 4.2.0 with Linux kernel 3.7.0 and I'm seeing a flood of
>>>> these messages in xen dmesg:
>>>> 
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> (XEN) traps.c:3156: GPF (0060): ffff82c480159247 -> ffff82c4802170e4
>>>> 
>>>> What does that mean and is it something that I should worry about?
>> 
>>> It means that the hypervisor recovered from #GP faults several
>>> times. What exactly it was that faulted you'd need to look up by
>>> translating the addresses above and resolving them to source
>>> locations. That'll also tell you whether you ought to be worried.
>> 
>>> A common example of these happening is Xen carrying out MSR
>>> accesses on behalf of the kernel, when the MSR actually isn't
>>> implemented (i.e. the kernel itself also is prepared to handle
>>> faults upon accessing them).
>> 
>> Perhaps related to a previous discussion (with an open end ...)
>> http://lists.xen.org/archives/html/xen-devel/2012-09/msg01599.html 

> Not impossible, but iirc those would generally be accompanied
> by SIGSEGV-s and/or kernel crashes in some guest.

> Jan

Valtteri,

Are you by any chance using PCI passthrough to one of your domains?

--

Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 09:06:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 09:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkt7e-0007os-Nz; Tue, 18 Dec 2012 09:05:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tkt7d-0007on-EV
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 09:05:57 +0000
Received: from [193.109.254.147:25338] by server-3.bemta-14.messagelabs.com id
	CD/EB-26055-4F130D05; Tue, 18 Dec 2012 09:05:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355821553!3295971!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7962 invoked from network); 18 Dec 2012 09:05:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 09:05:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 09:05:53 +0000
Message-Id: <50D03FFE02000078000B0F62@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 09:05:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mats Petersson" <mats.petersson@citrix.com>
References: <50CF5FA602000078000B0D5E@nat28.tlf.novell.com>
	<50CF5B2F.8000805@citrix.com>
In-Reply-To: <50CF5B2F.8000805@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86,
 amd: Disable way access filter on Piledriver CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.12.12 at 18:49, Mats Petersson <mats.petersson@citrix.com> wrote:
> On 17/12/12 17:08, Jan Beulich wrote:
>> The Way Access Filter in recent AMD CPUs may hurt the performance of
>> some workloads, caused by aliasing issues in the L1 cache.
>> This patch disables it on the affected CPUs.
>>
>> The issue is similar to that one of last year:
>> http://lkml.indiana.edu/hypermail/linux/kernel/1107.3/00041.html 
>> This new patch does not replace the old one, we just need another
>> quirk for newer CPUs.
>>
>> The performance penalty without the patch depends on the
>> circumstances, but is a bit less than the last year's 3%.
>>
>> The workloads affected would be those that access code from the same
>> physical page under different virtual addresses, so different
>> processes using the same libraries with ASLR or multiple instances of
>> PIE-binaries. The code needs to be accessed simultaneously from both
>> cores of the same compute unit.
>>
>> More details can be found here:
>> http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf 
>>
>> CPUs affected are anything with the core known as Piledriver.
>> That includes the new parts of the AMD A-Series (aka Trinity) and the
>> just released new CPUs of the FX-Series (aka Vishera).
>> The model numbering is a bit odd here: FX CPUs have model 2,
>> A-Series has model 10h, with possible extensions to 1Fh. Hence the
>> range of model ids.
>>
>> Signed-off-by: Andre Przywara <osp@andrep.de>
>>
>> Add and use MSR_AMD64_IC_CFG. Update the value whenever it is found to
>> not have all bits set, rather than just when it's zero.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -448,6 +448,14 @@ static void __devinit init_amd(struct cp
>>   		}
>>   	}
>>   
>> +	/*
>> +	 * The way access filter has a performance penalty on some workloads.
>> +	 * Disable it on the affected CPUs.
>> +	 */
>> +	if (c->x86 == 0x15 && c->x86_model >= 0x02 && c->x86_model < 0x20 &&
>> +	    !rdmsr_safe(MSR_AMD64_IC_CFG, value) && (value & 0x1e) != 0x1e)
>> +		wrmsr_safe(MSR_AMD64_IC_CFG, value | 0x1e);
> Would it not be better to simply write 0x1e, rather than only write it 
> when it's not 0x1e? It's a one-off operation, but the extra readmsr 
> seems unnecessary to me.

Possibly, but I guess you checked the original Linux commit that
this was derived from, and saw that they do it yet a little more
oddly (writing the disabling value only when the original value
had all 4 bits clear). Fact is that the way it is now, we avoid an
MSR write to a register we can't even read (and I think that is
desirable in any case; the checking of the value is pretty benign
then).

Jan

>> +
>>           amd_get_topology(c);
>>   
>>   	/* Pointless to use MWAIT on Family10 as it does not deep sleep. */
>> --- a/xen/include/asm-x86/msr-index.h
>> +++ b/xen/include/asm-x86/msr-index.h
>> @@ -211,6 +211,7 @@
>>   
>>   /* AMD64 MSRs */
>>   #define MSR_AMD64_NB_CFG		0xc001001f
>> +#define MSR_AMD64_IC_CFG		0xc0011021
>>   #define MSR_AMD64_DC_CFG		0xc0011022
>>   #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
>>   
>>
>>
>>
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 10:04:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tku1T-0008JA-1C; Tue, 18 Dec 2012 10:03:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tku1R-0008J5-6p
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:03:37 +0000
Received: from [193.109.254.147:17539] by server-8.bemta-14.messagelabs.com id
	23/BE-26341-87F30D05; Tue, 18 Dec 2012 10:03:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355824973!10762844!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10267 invoked from network); 18 Dec 2012 10:02:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 10:02:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="218901"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 10:02:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 10:02:49 +0000
Message-ID: <1355824968.14620.143.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Tue, 18 Dec 2012 10:02:48 +0000
In-Reply-To: <20121217200950.GA29382@u109add4315675089e695.ant.amazon.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
	<20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
	<20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
	<1355743598.14620.43.camel@zakaz.uk.xensource.com>
	<20121217200950.GA29382@u109add4315675089e695.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Palagummi,
	Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-17 at 20:09 +0000, Matt Wilson wrote:
> On Mon, Dec 17, 2012 at 11:26:38AM +0000, Ian Campbell wrote:
> > On Fri, 2012-12-14 at 18:53 +0000, Matt Wilson wrote:
> > > On Thu, Dec 13, 2012 at 11:12:50PM +0000, Palagummi, Siva wrote:
> > > > > -----Original Message-----
> > > > > From: Matt Wilson [mailto:msw@amazon.com]
> > > > > Sent: Wednesday, December 12, 2012 3:05 AM
> > > > >
> > > > > On Tue, Dec 11, 2012 at 10:25:51AM +0000, Palagummi, Siva wrote:
> > > > > >
> > > > > > You can clearly see below that copy_off is input to
> > > > > > start_new_rx_buffer while copying frags.
> > > > > 
> > > > > Yes, but that's the right thing to do. copy_off should be set to the
> > > > > destination offset after copying the last byte of linear data, which
> > > > > means "skb_headlen(skb) % PAGE_SIZE" is correct.
> > > > > 
> > > >
> > > > No. It is not correct for two reasons. For example what if
> > > > skb_headlen(skb) is exactly a multiple of PAGE_SIZE. Copy_off would
> > > > be set to ZERO. And now if there exists some data in frags, ZERO
> > > > will be passed in as copy_off value and start_new_rx_buffer will
> > > > return FALSE. And second reason is the obvious case from the current
> > > > code where "offset_in_page(skb->data)" size hole will be left in the
> > > > first buffer after first pass in case remaining data that need to be
> > > > copied is going to overflow the first buffer.
> > > 
> > > Right, and I'm arguing that having the code leave a hole is less
> > > desirable than potentially increasing the number of copy
> > > operations. I'd like to hear from Ian and others if using the buffers
> > > efficiently is more important than reducing copy operations. Intuitively,
> > > I think it's more important to use the ring efficiently.
> > 
> > Do you mean the ring or the actual buffers?
> 
> Sorry, the actual buffers.
> 
> > The current code tries to coalesce multiple small frags/heads because it
> > is usually trivial but doesn't try too hard with multiple larger frags,
> > since they take up most of a page by themselves anyway. I suppose this
> > does waste a bit of buffer space and therefore could take more ring
> > slots, but it's not clear to me how much this matters in practice (it
> > might be tricky to measure this with any realistic workload).
> 
> In the case where we're consistently handling large heads (like when
> using a MTU value of 9000 for streaming traffic), we're wasting 1/3 of
> the available buffers.

Sorry if I missed this earlier in the thread, but how do we end up
wasting so much?

For an skb with 9000 bytes in the linear area, which must necessarily be
contiguous, do we not fill the first two page sized buffers completely?
The remaining 808 bytes must then have its own buffer. Hrm, I suppose
that's about 27% wasted over the three pages. If we are doing something
worse than that though then we have a bug somewhere (nb: older netbacks
would only fill the first 2048 bytes of each buffer, the wastage is
presumably phenomenal in that case ;-), MAX_BUFFER_OFFSET is now ==
PAGE_SIZE though)

Unless I've misunderstood this thread and we are considering packing
data from multiple skbs into a single buffer? (i.e. the remaining
4096-808=3288 bytes in the third buffer above would contain data for the
next skb). Does the ring protocol even allow for that possibility? It
seems like a path to complexity to me.

> > The cost of splitting a copy into two should be low though, the copies
> > are already batched into a single hypercall and I'd expect things to be
> > mostly dominated by the data copy itself rather than the setup of each
> > individual op, which would argue for splitting a copy in two if that
> > helps fill the buffers.
> 
> That was my thought as well. We're testing a patch that does just this
> now.
> 
> > The flip side is that once you get past the headers etc the paged frags
> > likely tend to either bits and bobs (fine) or mostly whole pages. In the
> > whole pages case trying to fill the buffers will result in every copy
> > getting split. My gut tells me that the whole pages case probably
> > dominates, but I'm not sure what the real world impact of splitting all
> > the copies would be.
> 
> Right, I'm less concerned about the paged frags. It might make sense
> to skip some space so that the copying can be page aligned. I suppose
> it depends on how many different pages are in the list, and what the
> total size is.
> 
> In practice I'd think it would be rare to see a paged SKB for ingress
> traffic to domUs unless there is significant intra-host communication
> (dom0->domU, domU->domU). When domU ingress traffic is originating
> from an Ethernet device it shouldn't be paged. Paged SKBs would come
> into play when a SKB is formed for transmit on an egress device that
> is SG-capable. Or am I misunderstanding how paged SKBs are used these
> days?

I think it depends on the hardware and/or driver. IIRC some devices push
down frag zero into the device for RX DMA and then share it with the
linear area (I think this might have something to do with making LRO or
GRO easier/workable).

Also things such as GRO can commonly cause skbs passed up the stack to
contain several frags.

I'm not quite sure how this works but in the case of s/w GRO I wouldn't
be surprised if this resulted in a skb with lots of 1500 byte (i.e. wire
MTU) frags. I think we would end up at least coalescing those two per
buffer on transmit (3000/4096 = 73% filling the page).

Doing better would either need start_new_rx_buffer to always completely
fill each buffer or to take a much more global view of the frags (e.g.
taking the size of the next frag and how it fits into consideration
too).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 10:04:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tku2P-0008Lh-Fb; Tue, 18 Dec 2012 10:04:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tku2O-0008La-Lm
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:04:36 +0000
Received: from [85.158.138.51:18569] by server-5.bemta-3.messagelabs.com id
	2C/17-15136-3BF30D05; Tue, 18 Dec 2012 10:04:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355825058!10716961!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7202 invoked from network); 18 Dec 2012 10:04:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 10:04:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="218947"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 10:04:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 10:04:16 +0000
Message-ID: <1355825055.14620.144.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Date: Tue, 18 Dec 2012 10:04:15 +0000
In-Reply-To: <ED85BD1013C109A90B23BDE6@Ximines.local>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>
	<95EB29C55F836E775B70A9F5@nimrod.local>
	<1355745712.14620.59.camel@zakaz.uk.xensource.com>
	<ED85BD1013C109A90B23BDE6@Ximines.local>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-17 at 17:02 +0000, Alex Bligh wrote:
> Ian,
> 
> --On 17 December 2012 12:01:52 +0000 Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > It's definitely at least a bug in the documentation -- this isn't a
> > feature which was forgotten about; it was explicitly decided that it
> > hadn't met the 4.2 deadline and wasn't going to be ready in time to be
> > worth waiting for. This should have been documented in the release
> > notes etc., sorry.
> >
> > We did at least manage to tag it tech preview in
> > http://wiki.xen.org/wiki/Xen_Release_Features which implies that it is
> > not yet fully formed.
> 
> It is indeed tagged as 'tech preview'.
> 
> But given that:
> a) HVM is hardly uncommon
> b) live-migrate is pretty essential
> c) qemu-xen device model is the default next time around and the /only/
>    way you can achieve certain things (e.g. rebaseable snapshots)
> d) the code is already written (at least for -unstable) and the patch is
>    smallish (particularly if the JSON stuff is ignored)

These are all true, but none of them are an argument that this is a bug
fix rather than a feature request.

Our general policy is not to backport features because the risk of
regressions is higher, so feature backports need to be considered more
carefully than a bugfix.

> ... I'd suggest a backport to 4.2.x is a reasonable request.

It's certainly a reasonable request, and I'm not suggesting we shouldn't
consider it. My point is that it needs more careful consideration than a
straight bugfix backport.

For example your list above doesn't include the flip sides which are:
      * Migration of PV guests
      * Migration of HVM Qemu-traditional guests
      * HVM Qemu-xen guests for users who don't care about migration

All of these run the risk of regression. Is this new feature worth that
risk? Obviously you think so, but I'm not so sure.

>  I've had a
> first go at the libxl part and Stefano got there first and volunteered
> to do the qemu bit (if he doesn't have time I can have a go at that too).

It's appreciated, but ability and/or willingness to do the work is only
one part of the equation.

> If necessary we can guard the code either with a compile time switch or
> (as the wiki page suggested to me) a hypervisor command line switch like
> TMEM and AVX, which if it's not enabled would just error the call in the
> same way libxl does at the moment.

I don't think we should have a compile time switch, we should either
backport this new feature as a fully supported thing or not at all.
Adding conditional features like this simply divides our testing
resource.

A hypervisor switch isn't an option because the hypervisor isn't
involved in this decision at all.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 10:09:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:09:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tku78-00006h-71; Tue, 18 Dec 2012 10:09:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tku76-00006Z-TS
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:09:29 +0000
Received: from [85.158.138.51:39504] by server-4.bemta-3.messagelabs.com id
	AC/42-31835-3D040D05; Tue, 18 Dec 2012 10:09:23 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1355825344!29366123!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9163 invoked from network); 18 Dec 2012 10:09:04 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-5.tower-174.messagelabs.com with SMTP;
	18 Dec 2012 10:09:04 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIA8qki019430; Tue, 18 Dec 2012 10:08:52 GMT
Date: Tue, 18 Dec 2012 10:08:50 +0000
From: Will Deacon <will.deacon@arm.com>
To: Arnd Bergmann <arnd@arndb.de>
Message-ID: <20121218100827.GB9072@mudshark.cambridge.arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-4-git-send-email-will.deacon@arm.com>
	<201212172000.11850.arnd@arndb.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <201212172000.11850.arnd@arndb.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"nico@fluxnic.net" <nico@fluxnic.net>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 3/6] ARM: psci: add devicetree binding
 for describing PSCI firmware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Arnd,

On Mon, Dec 17, 2012 at 08:00:11PM +0000, Arnd Bergmann wrote:
> On Monday 17 December 2012, Will Deacon wrote:
> > +
> > + - function-base : The base ID from which the functions are offset.
> > +
> > +Main node optional properties:
> > +
> > + - cpu_suspend   : Offset of CPU_SUSPEND ID from function-base
> > +
> > + - cpu_off       : Offset of CPU_OFF ID from function-base
> > +
> > + - cpu_on        : Offset of CPU_ON ID from function-base
> > +
> > + - migrate       : Offset of MIGRATE ID from function-base
> 
> What is the benefit of the "function-base" property over just having
> 32-bit IDs for each function? As far as I can tell, the interface does
> not rely on the numbers being consecutive, so removing the function-base
> property would make the binding simpler as well as more flexible.

Sure, happy to change that for v3.
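(For concreteness, the flattened binding Arnd describes -- one absolute
32-bit ID per function, with no function-base -- might look like the
sketch below. The node layout, property names and ID values here are
illustrative assumptions only, not the eventual v3 binding.)

```dts
psci {
	compatible = "arm,psci";
	method = "smc";		/* or "hvc" when firmware sits behind a hypervisor */

	/* Each function carries its own absolute 32-bit ID; nothing
	 * requires the values to be consecutive. IDs are placeholders. */
	cpu_suspend = <0x95c10000>;
	cpu_off     = <0x95c10001>;
	cpu_on      = <0x95c10002>;
	migrate     = <0x95c10003>;
};
```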

Will

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 10:14:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:14:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkuBV-0000Hx-26; Tue, 18 Dec 2012 10:14:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1TkuBT-0000Hq-AE
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:13:59 +0000
Received: from [85.158.139.83:41335] by server-14.bemta-5.messagelabs.com id
	5F/11-09538-6E140D05; Tue, 18 Dec 2012 10:13:58 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355825506!29640691!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17740 invoked from network); 18 Dec 2012 10:11:46 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-9.tower-182.messagelabs.com with SMTP;
	18 Dec 2012 10:11:46 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIABRki019497; Tue, 18 Dec 2012 10:11:27 GMT
Date: Tue, 18 Dec 2012 10:11:25 +0000
From: Will Deacon <will.deacon@arm.com>
To: Nicolas Pitre <nicolas.pitre@linaro.org>
Message-ID: <20121218101125.GC9072@mudshark.cambridge.arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-5-git-send-email-will.deacon@arm.com>
	<alpine.LFD.2.02.1212171537230.1263@xanadu.home>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.LFD.2.02.1212171537230.1263@xanadu.home>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 4/6] ARM: psci: add support for PSCI
 invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 08:51:27PM +0000, Nicolas Pitre wrote:
> On Mon, 17 Dec 2012, Will Deacon wrote:
> 
> > This patch adds support for the Power State Coordination Interface
> > defined by ARM, allowing Linux to request CPU-centric power-management
> > operations from firmware implementing the PSCI protocol.
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> 
> [...]
> 
> > +/*
> > + * The following two functions are invoked via the invoke_psci_fn pointer
> > + * and will not be inlined, allowing us to piggyback on the AAPCS.
> > + */
> 
> To make sure the code is always in sync with the intent, you could mark 
> those with  noinline as well.

Can do.

> > +static int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
> > +{
> > +	asm volatile(
> > +			__asmeq("%0", "r0")
> > +			__asmeq("%1", "r1")
> > +			__asmeq("%2", "r2")
> > +			__asmeq("%3", "r3")
> > +			__HVC(0)
> > +		: "+r" (function_id)
> > +		: "r" (arg0), "r" (arg1), "r" (arg2));
> > +
> > +	return function_id;
> > +}
> > +
> > +static int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
> > +{
> > +	asm volatile(
> > +			__asmeq("%0", "r0")
> > +			__asmeq("%1", "r1")
> > +			__asmeq("%2", "r2")
> > +			__asmeq("%3", "r3")
> > +			__SMC(0)
> > +		: "+r" (function_id)
> > +		: "r" (arg0), "r" (arg1), "r" (arg2));
> > +
> > +	return function_id;
> > +}
> > +
> > +static int psci_cpu_suspend(struct psci_power_state state,
> > +			    unsigned long entry_point)
> > +{
> > +	int err;
> > +	u32 fn, power_state;
> > +
> > +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> > +	power_state = psci_power_state_pack(state);
> > +	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);
> 
> Why do you need the u32 cast here?

That's a hangover from when entry_point was a void *. I'll fix that, thanks.

> > +static int __init psci_init(void)
> > +{
> > +	struct device_node *np;
> > +	const char *method;
> > +	u32 base, id;
> > +
> > +	np = of_find_matching_node(NULL, psci_of_match);
> > +	if (!np)
> > +		return 0;
> > +
> > +	pr_info("probing function IDs from device-tree\n");
> 
> Having "probing function IDs from device-tree" in the middle of a kernel 
> log isn't very informative.  Better make this more useful or remove it.
> 
> > +
> > +	if (of_property_read_u32(np, "function-base", &base)) {
> > +		pr_warning("missing \"function-base\" property\n");
> 
> Same thing here: this lacks context in a kernel log.
> And so on for the other occurrences.

Actually, these are all prefixed with "psci: " thanks to the pr_fmt
definition at the top of the file. I can remove them if you like, but then
it's not obvious which parts of the PSCI code are available from looking at
a kernel boot log.

Cheers for the review,

Will

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 10:17:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkuEW-0000Q2-LX; Tue, 18 Dec 2012 10:17:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkuEU-0000Po-Ks
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:17:06 +0000
Received: from [85.158.139.83:17297] by server-2.bemta-5.messagelabs.com id
	FC/1F-16162-1A240D05; Tue, 18 Dec 2012 10:17:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355825821!28601264!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2749 invoked from network); 18 Dec 2012 10:17:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 10:17:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 10:17:01 +0000
Message-Id: <50D050AB02000078000B0F95@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 10:16:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2110AC8B.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18: adjust module parameter types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2110AC8B.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

In particular use "unsigned" and "bool" where the parameter has
respective meaning, and make the respective variables static where
possible and not already done.

Also drop the use of the bogus USBIF_BACK_MAX_PENDING_REQS definition
and do some minimal cleanup.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/xen/blkback/blkback.c
+++ b/drivers/xen/blkback/blkback.c
@@ -55,15 +55,15 @@
  * This will increase the chances of being able to write whole tracks.
  * 64 should be enough to keep us competitive with Linux.
  */
-static int blkif_reqs =3D 64;
-module_param_named(reqs, blkif_reqs, int, 0);
+static unsigned int blkif_reqs =3D 64;
+module_param_named(reqs, blkif_reqs, uint, 0);
 MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");
=20
 /* Run-time switchable: /sys/module/blkback/parameters/ */
-static unsigned int log_stats =3D 0;
-static unsigned int debug_lvl =3D 0;
-module_param(log_stats, int, 0644);
-module_param(debug_lvl, int, 0644);
+static int log_stats;
+static unsigned int debug_lvl;
+module_param(log_stats, bool, 0644);
+module_param(debug_lvl, uint, 0644);
=20
 /*
  * Each outstanding request that we've passed to the lower device layers =
has a=20
--- a/drivers/xen/blktap/blktap.c
+++ b/drivers/xen/blktap/blktap.c
@@ -127,10 +127,10 @@ static struct tap_blkif *tapfds[MAX_TAP_
 static int blktap_next_minor;
=20
 /* Run-time switchable: /sys/module/blktap/parameters/ */
-static unsigned int log_stats =3D 0;
-static unsigned int debug_lvl =3D 0;
-module_param(log_stats, int, 0644);
-module_param(debug_lvl, int, 0644);
+static int log_stats;
+static unsigned int debug_lvl;
+module_param(log_stats, bool, 0644);
+module_param(debug_lvl, uint, 0644);
=20
 /*
  * Each outstanding request that we've passed to the lower device layers =
has a=20
--- a/drivers/xen/fbfront/xenfb.c
+++ b/drivers/xen/fbfront/xenfb.c
@@ -138,8 +138,8 @@ struct xenfb_info
 #define XENFB_DEFAULT_FB_LEN (XENFB_WIDTH * XENFB_HEIGHT * XENFB_DEPTH / =
8)
=20
 enum {KPARAM_MEM, KPARAM_WIDTH, KPARAM_HEIGHT, KPARAM_CNT};
-static int video[KPARAM_CNT] =3D {2, XENFB_WIDTH, XENFB_HEIGHT};
-module_param_array(video, int, NULL, 0);
+static unsigned int video[KPARAM_CNT] =3D {2, XENFB_WIDTH, XENFB_HEIGHT};
+module_param_array(video, uint, NULL, 0);
 MODULE_PARM_DESC(video,
 		"Size of video memory in MB and width,height in pixels, =
default =3D (2,800,600)");
=20
--- a/drivers/xen/pciback/pciback_ops.c
+++ b/drivers/xen/pciback/pciback_ops.c
@@ -10,7 +10,7 @@
 #include "pciback.h"
=20
 int verbose_request =3D 0;
-module_param(verbose_request, int, 0644);
+module_param(verbose_request, bool, 0644);
=20
 /* Ensure a device is "turned off" and ready to be exported.
  * (Also see pciback_config_reset to ensure virtual configuration space =
is
--- a/drivers/xen/pcifront/pci_op.c
+++ b/drivers/xen/pcifront/pci_op.c
@@ -12,8 +12,8 @@
 #include <xen/evtchn.h>
 #include "pcifront.h"
=20
-static int verbose_request =3D 0;
-module_param(verbose_request, int, 0644);
+static int verbose_request;
+module_param(verbose_request, bool, 0644);
=20
 static void pcifront_init_sd(struct pcifront_sd *sd,
 			     unsigned int domain, unsigned int bus,
--- a/drivers/xen/scsiback/scsiback.c
+++ b/drivers/xen/scsiback/scsiback.c
@@ -56,8 +56,8 @@ int vscsiif_reqs =3D VSCSIIF_BACK_MAX_PEND
 module_param_named(reqs, vscsiif_reqs, uint, 0);
 MODULE_PARM_DESC(reqs, "Number of scsiback requests to allocate");
=20
-static unsigned int log_print_stat =3D 0;
-module_param(log_print_stat, int, 0644);
+static int log_print_stat;
+module_param(log_print_stat, bool, 0644);
=20
 #define SCSIBACK_INVALID_HANDLE (~0)
=20
@@ -219,10 +219,8 @@ static void scsiback_cmd_done(struct req
 	resid        =3D req->data_len;
 	errors       =3D req->errors;
=20
-	if (errors !=3D 0) {
-		if (log_print_stat)
-			scsiback_print_status(sense_buffer, errors, =
pending_req);
-	}
+	if (errors && log_print_stat)
+		scsiback_print_status(sense_buffer, errors, pending_req);
=20
 	/* The Host mode is through as for Emulation. */
 	if (pending_req->info->feature !=3D VSCSI_TYPE_HOST)
--- a/drivers/xen/usbback/usbback.c
+++ b/drivers/xen/usbback/usbback.c
@@ -53,8 +53,8 @@
 #include "../../usb/core/hub.h"
 #endif
=20
-int usbif_reqs =3D USBIF_BACK_MAX_PENDING_REQS;
-module_param_named(reqs, usbif_reqs, int, 0);
+static unsigned int usbif_reqs =3D 128;
+module_param_named(reqs, usbif_reqs, uint, 0);
 MODULE_PARM_DESC(reqs, "Number of usbback requests to allocate");
=20
 struct pending_req_segment {
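
[Archive editor's note, not part of the submission: the diff above applies one pattern throughout: counts and levels become "uint", on/off switches become "bool", and file-local variables gain "static". A minimal hypothetical sketch of the resulting style, with names borrowed from blkback; it assumes 2.6.18-era kernel headers, where a "bool" module parameter may still be backed by a plain int.]

```c
/*
 * Hypothetical illustration only; not part of the patch.
 * Assumes 2.6.18-era <linux/moduleparam.h>, where a "bool"
 * parameter may be backed by an int variable.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned int blkif_reqs = 64;	/* a count: uint, set at load time only */
module_param_named(reqs, blkif_reqs, uint, 0);
MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");

static int log_stats;			/* an on/off switch: bool, run-time writable */
module_param(log_stats, bool, 0644);

static unsigned int debug_lvl;		/* a level: uint, run-time writable */
module_param(debug_lvl, uint, 0644);
```

[The declared parameter type selects the set/get handlers used for /sys/module/.../parameters, so it should match the variable's actual signedness and intended meaning.]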



--=__Part2110AC8B.0__=
Content-Type: text/plain; name="xen-module-param-types.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xen-module-param-types.patch"

adjust module parameter types=0A=0AIn particular use "unsigned" and "bool" =
where the parameter has=0Arespective meaning, and make the respective =
variables static where=0Apossible and not already done.=0A=0AAlso drop the =
use of the bogus USBIF_BACK_MAX_PENDING_REQS definition=0Aand do some =
minimal cleanup.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A-=
-- a/drivers/xen/blkback/blkback.c=0A+++ b/drivers/xen/blkback/blkback.c=0A=
@@ -55,15 +55,15 @@=0A  * This will increase the chances of being able to =
write whole tracks.=0A  * 64 should be enough to keep us competitive with =
Linux.=0A  */=0A-static int blkif_reqs =3D 64;=0A-module_param_named(reqs, =
blkif_reqs, int, 0);=0A+static unsigned int blkif_reqs =3D 64;=0A+module_pa=
ram_named(reqs, blkif_reqs, uint, 0);=0A MODULE_PARM_DESC(reqs, "Number of =
blkback requests to allocate");=0A =0A /* Run-time switchable: /sys/module/=
blkback/parameters/ */=0A-static unsigned int log_stats =3D 0;=0A-static =
unsigned int debug_lvl =3D 0;=0A-module_param(log_stats, int, 0644);=0A-mod=
ule_param(debug_lvl, int, 0644);=0A+static int log_stats;=0A+static =
unsigned int debug_lvl;=0A+module_param(log_stats, bool, 0644);=0A+module_p=
aram(debug_lvl, uint, 0644);=0A =0A /*=0A  * Each outstanding request that =
we've passed to the lower device layers has a =0A--- a/drivers/xen/blktap/b=
lktap.c=0A+++ b/drivers/xen/blktap/blktap.c=0A@@ -127,10 +127,10 @@ static =
struct tap_blkif *tapfds[MAX_TAP_=0A static int blktap_next_minor;=0A =0A =
/* Run-time switchable: /sys/module/blktap/parameters/ */=0A-static =
unsigned int log_stats =3D 0;=0A-static unsigned int debug_lvl =3D =
0;=0A-module_param(log_stats, int, 0644);=0A-module_param(debug_lvl, int, =
0644);=0A+static int log_stats;=0A+static unsigned int debug_lvl;=0A+module=
_param(log_stats, bool, 0644);=0A+module_param(debug_lvl, uint, 0644);=0A =
=0A /*=0A  * Each outstanding request that we've passed to the lower =
device layers has a =0A--- a/drivers/xen/fbfront/xenfb.c=0A+++ b/drivers/xe=
n/fbfront/xenfb.c=0A@@ -138,8 +138,8 @@ struct xenfb_info=0A #define =
XENFB_DEFAULT_FB_LEN (XENFB_WIDTH * XENFB_HEIGHT * XENFB_DEPTH / 8)=0A =0A =
enum {KPARAM_MEM, KPARAM_WIDTH, KPARAM_HEIGHT, KPARAM_CNT};=0A-static int =
video[KPARAM_CNT] =3D {2, XENFB_WIDTH, XENFB_HEIGHT};=0A-module_param_array=
(video, int, NULL, 0);=0A+static unsigned int video[KPARAM_CNT] =3D {2, =
XENFB_WIDTH, XENFB_HEIGHT};=0A+module_param_array(video, uint, NULL, =
0);=0A MODULE_PARM_DESC(video,=0A 		"Size of video memory in =
MB and width,height in pixels, default =3D (2,800,600)");=0A =0A--- =
a/drivers/xen/pciback/pciback_ops.c=0A+++ b/drivers/xen/pciback/pciback_ops=
.c=0A@@ -10,7 +10,7 @@=0A #include "pciback.h"=0A =0A int verbose_request =
=3D 0;=0A-module_param(verbose_request, int, 0644);=0A+module_param(verbose=
_request, bool, 0644);=0A =0A /* Ensure a device is "turned off" and ready =
to be exported.=0A  * (Also see pciback_config_reset to ensure virtual =
configuration space is=0A--- a/drivers/xen/pcifront/pci_op.c=0A+++ =
b/drivers/xen/pcifront/pci_op.c=0A@@ -12,8 +12,8 @@=0A #include <xen/evtchn=
.h>=0A #include "pcifront.h"=0A =0A-static int verbose_request =3D =
0;=0A-module_param(verbose_request, int, 0644);=0A+static int verbose_reque=
st;=0A+module_param(verbose_request, bool, 0644);=0A =0A static void =
pcifront_init_sd(struct pcifront_sd *sd,=0A 			     =
unsigned int domain, unsigned int bus,=0A--- a/drivers/xen/scsiback/scsibac=
k.c=0A+++ b/drivers/xen/scsiback/scsiback.c=0A@@ -56,8 +56,8 @@ int =
vscsiif_reqs =3D VSCSIIF_BACK_MAX_PEND=0A module_param_named(reqs, =
vscsiif_reqs, uint, 0);=0A MODULE_PARM_DESC(reqs, "Number of scsiback =
requests to allocate");=0A =0A-static unsigned int log_print_stat =3D =
0;=0A-module_param(log_print_stat, int, 0644);=0A+static int log_print_stat=
;=0A+module_param(log_print_stat, bool, 0644);=0A =0A #define SCSIBACK_INVA=
LID_HANDLE (~0)=0A =0A@@ -219,10 +219,8 @@ static void scsiback_cmd_done(st=
ruct req=0A 	resid        =3D req->data_len;=0A 	errors       =3D =
req->errors;=0A =0A-	if (errors !=3D 0) {=0A-		if =
(log_print_stat)=0A-			scsiback_print_status(sense_buffer,=
 errors, pending_req);=0A-	}=0A+	if (errors && log_print_stat)=0A+	=
	scsiback_print_status(sense_buffer, errors, pending_req);=0A =0A 	=
/* The Host mode is through as for Emulation. */=0A 	if (pending_req->in=
fo->feature !=3D VSCSI_TYPE_HOST)=0A--- a/drivers/xen/usbback/usbback.c=0A+=
++ b/drivers/xen/usbback/usbback.c=0A@@ -53,8 +53,8 @@=0A #include =
"../../usb/core/hub.h"=0A #endif=0A =0A-int usbif_reqs =3D USBIF_BACK_MAX_P=
ENDING_REQS;=0A-module_param_named(reqs, usbif_reqs, int, 0);=0A+static =
unsigned int usbif_reqs =3D 128;=0A+module_param_named(reqs, usbif_reqs, =
uint, 0);=0A MODULE_PARM_DESC(reqs, "Number of usbback requests to =
allocate");=0A =0A struct pending_req_segment {=0A
--=__Part2110AC8B.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2110AC8B.0__=--


From xen-devel-bounces@lists.xen.org Tue Dec 18 10:21:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkuIb-0000cD-HC; Tue, 18 Dec 2012 10:21:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkuIa-0000c3-00
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:21:20 +0000
Received: from [85.158.139.83:59296] by server-9.bemta-5.messagelabs.com id
	66/71-10690-F9340D05; Tue, 18 Dec 2012 10:21:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355826078!29795806!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31095 invoked from network); 18 Dec 2012 10:21:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 10:21:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 10:21:18 +0000
Message-Id: <50D051AB02000078000B0FA5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 10:21:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2011AD8B.0__="
Subject: [Xen-devel] [PATCH] usbif: drop bogus definition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2011AD8B.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Just like recently done for vSCSI, remove a backend implementation
detail from the interface header.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/public/io/usbif.h
+++ b/xen/include/public/io/usbif.h
@@ -71,7 +71,6 @@ enum usb_spec_version {
 #define usbif_pipesubmit(pipe) (!usbif_pipeunlink(pipe))
 #define usbif_setunlink_pipe(pipe) ((pipe)|(0x20))
=20
-#define USBIF_BACK_MAX_PENDING_REQS (128)
 #define USBIF_MAX_SEGMENTS_PER_REQUEST (16)
=20
 /*




--=__Part2011AD8B.0__=
Content-Type: text/plain; name="usbif-cleanup.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="usbif-cleanup.patch"

usbif: drop bogus definition=0A=0AJust like recently done for vSCSI, =
remove a backend implementation=0Adetail from the interface header.=0A=0ASi=
gned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/include/public/=
io/usbif.h=0A+++ b/xen/include/public/io/usbif.h=0A@@ -71,7 +71,6 @@ enum =
usb_spec_version {=0A #define usbif_pipesubmit(pipe) (!usbif_pipeunlink(pip=
e))=0A #define usbif_setunlink_pipe(pipe) ((pipe)|(0x20))=0A =0A-#define =
USBIF_BACK_MAX_PENDING_REQS (128)=0A #define USBIF_MAX_SEGMENTS_PER_REQUEST=
 (16)=0A =0A /*=0A
--=__Part2011AD8B.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2011AD8B.0__=--


From xen-devel-bounces@lists.xen.org Tue Dec 18 10:33:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkuTg-0000pu-QG; Tue, 18 Dec 2012 10:32:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TkuTf-0000pp-7Z
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:32:47 +0000
Received: from [85.158.139.211:34289] by server-14.bemta-5.messagelabs.com id
	F7/F4-09538-E4640D05; Tue, 18 Dec 2012 10:32:46 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355826765!18536549!1
X-Originating-IP: [209.85.212.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13985 invoked from network); 18 Dec 2012 10:32:45 -0000
Received: from mail-wi0-f176.google.com (HELO mail-wi0-f176.google.com)
	(209.85.212.176)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 10:32:45 -0000
Received: by mail-wi0-f176.google.com with SMTP id hm6so2580340wib.9
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 02:32:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:user-agent:date:subject:from:to:message-id
	:thread-topic:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=YBKU3J0VTyNSvY0MWNKsNUtX1AB1n3E7sUnKvc/8HGk=;
	b=DCBl1uvdO24okGQfuqyJjOYAFBmy9YOqxDEYUudyiXnDCtj8LS5l/EQefvy8t4WQKR
	wFhmYIHO2Mb3C6yhCEgMxRH/wUiqaQqWsxVL43K73Qqg/Y6KMw8k2LJADpbqWxG+3+ji
	3V51JnV+WyDyqOeCy2sbmPe/6TvPbaq7NQC3gKTnZXAq4V8zV5DK9rlpFb1aWB2H0S6Q
	vRGXmhQpJUljVOZdcJP0jRwATVkNFOg2CVmNhrj4tBUc5hoGFe/oJoQVcSh/ZrEycR7o
	8mTVeBpWWxagVirENDvCptFve60vSqljePNjMDLDQI3u4If/6YQzQATR4VZ5byo7J66o
	QPrg==
X-Received: by 10.180.95.169 with SMTP id dl9mr3556620wib.20.1355826765424;
	Tue, 18 Dec 2012 02:32:45 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id s10sm2002234wiw.4.2012.12.18.02.32.43
	(version=SSLv3 cipher=OTHER); Tue, 18 Dec 2012 02:32:44 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Tue, 18 Dec 2012 10:32:35 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCF5F6C3.55E5E%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] usbif: drop bogus definition
Thread-Index: Ac3dCv5UwiiKlQCRsEGWwr5I+3yhoA==
In-Reply-To: <50D051AB02000078000B0FA5@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] usbif: drop bogus definition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/2012 10:21, "Jan Beulich" <JBeulich@suse.com> wrote:

> Just like recently done for vSCSI, remove a backend implementation
> detail from the interface header.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/include/public/io/usbif.h
> +++ b/xen/include/public/io/usbif.h
> @@ -71,7 +71,6 @@ enum usb_spec_version {
>  #define usbif_pipesubmit(pipe) (!usbif_pipeunlink(pipe))
>  #define usbif_setunlink_pipe(pipe) ((pipe)|(0x20))
>  
> -#define USBIF_BACK_MAX_PENDING_REQS (128)
>  #define USBIF_MAX_SEGMENTS_PER_REQUEST (16)
>  
>  /*
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 10:49:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 10:49:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkuk0-0001Bk-UB; Tue, 18 Dec 2012 10:49:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tkujy-0001Bf-Iq
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 10:49:38 +0000
Received: from [85.158.138.51:33217] by server-5.bemta-3.messagelabs.com id
	42/44-15136-14A40D05; Tue, 18 Dec 2012 10:49:37 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355827776!19425050!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29290 invoked from network); 18 Dec 2012 10:49:37 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-7.tower-174.messagelabs.com with SMTP;
	18 Dec 2012 10:49:37 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIAnSki020102; Tue, 18 Dec 2012 10:49:28 GMT
Date: Tue, 18 Dec 2012 10:49:27 +0000
From: Will Deacon <will.deacon@arm.com>
To: Nicolas Pitre <nicolas.pitre@linaro.org>
Message-ID: <20121218104927.GA9632@mudshark.cambridge.arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-7-git-send-email-will.deacon@arm.com>
	<alpine.LFD.2.02.1212171555570.1263@xanadu.home>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.LFD.2.02.1212171555570.1263@xanadu.home>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 6/6] ARM: mach-virt: add SMP support
	using PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 17, 2012 at 09:45:52PM +0000, Nicolas Pitre wrote:
> On Mon, 17 Dec 2012, Will Deacon wrote:
> > This patch adds support for SMP to mach-virt using the PSCI
> > infrastructure.
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>

[...]

> > +/*
> > + * Enumerate the possible CPU set from the device tree.
> > + */
> > +static void __init virt_smp_init_cpus(void)
> > +{
> > +	struct device_node *dn = NULL;
> > +	int cpu = 0;
> > +
> > +	while ((dn = of_find_node_by_type(dn, "cpu"))) {
> > +		if (cpu < NR_CPUS)
> > +			set_cpu_possible(cpu, true);
> > +		cpu++;
> > +	}
> > +
> > +	/* sanity check */
> > +	if (cpu > NR_CPUS)
> > +		pr_warning("no. of cores (%d) greater than configured maximum "
> > +			   "of %d - clipping\n",
> > +			   cpu, NR_CPUS);
> 
> Since commit 5587164eea you shouldn't need any of the above.

There's going to be nothing left at this rate! Thanks.

> > +#ifdef CONFIG_SMP
> > +extern struct smp_operations virt_smp_ops;
> > +#endif
> 
> You don't need to guard the prototype declaration here, unless your goal
> was to define a dummy virt_smp_ops when CONFIG_SMP is not selected?
> Otherwise the reference below would break compilation.

Right you are, the smp_ops macro does the magic for us. I'll put together a
v3.

Cheers,

Will

> > +
> >  DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
> >  	.init_irq	= gic_init_irq,
> >  	.handle_irq     = gic_handle_irq,
> >  	.timer		= &virt_timer,
> >  	.init_machine	= virt_init,
> > +	.smp		= smp_ops(virt_smp_ops),
> >  	.dt_compat	= virt_dt_match,
> >  MACHINE_END
> > -- 
> > 1.8.0
> > 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 11:04:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 11:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkuy9-0001Pk-Ej; Tue, 18 Dec 2012 11:04:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Tkuy7-0001Pf-Qk
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 11:04:16 +0000
Received: from [85.158.143.35:42793] by server-2.bemta-4.messagelabs.com id
	B6/62-30861-FAD40D05; Tue, 18 Dec 2012 11:04:15 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355828653!13579376!1
X-Originating-IP: [209.85.212.41]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14591 invoked from network); 18 Dec 2012 11:04:14 -0000
Received: from mail-vb0-f41.google.com (HELO mail-vb0-f41.google.com)
	(209.85.212.41)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 11:04:14 -0000
Received: by mail-vb0-f41.google.com with SMTP id l22so623817vbn.14
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 03:04:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=UrPgC2AaxBYKsv6NhkTfkbvQRWM5JrCtovhkEdIWz2s=;
	b=0ZyXJFpWnbmX2ZmSknzZolyKwwG4WP9fo339Exnx1K6M9kkwBIlD74WIyTEAzX3tkD
	3g0prBbJ6e4GJVE6FczX2bkBqTIndzVyQvNMsuUZkyeyX0eIBqZuaRe8WjeFNrovTfAg
	lOOYEVrc8Iq9hc8wTbo4A+tp1LUZ6RzjGgLY95MAieIpCOzKTvxqYy07fLXqN8U5ZuQ4
	NL+YWEwvPFVmGMFYxMynEXK4/0AWKE5qK2OvwJSLLTNDjZLXZ0fdEa+c36iqh0LcH3xF
	ntjDQXWRdRi/+yooOvwgC2ebEB6FaLnXHldx7GNWPgHVY6ALQ085g4w5+DSCuouE6UeC
	OLkA==
MIME-Version: 1.0
Received: by 10.58.134.14 with SMTP id pg14mr2274865veb.42.1355828653130; Tue,
	18 Dec 2012 03:04:13 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Tue, 18 Dec 2012 03:04:13 -0800 (PST)
In-Reply-To: <1355825055.14620.144.camel@zakaz.uk.xensource.com>
References: <1355238698.843.42.camel@zakaz.uk.xensource.com>
	<1355396228-3183-1-git-send-email-alex@alex.org.uk>
	<1355398711.10554.90.camel@zakaz.uk.xensource.com>
	<95EB29C55F836E775B70A9F5@nimrod.local>
	<1355745712.14620.59.camel@zakaz.uk.xensource.com>
	<ED85BD1013C109A90B23BDE6@Ximines.local>
	<1355825055.14620.144.camel@zakaz.uk.xensource.com>
Date: Tue, 18 Dec 2012 11:04:13 +0000
X-Google-Sender-Auth: 5qQbQ6Nd9kmfZ-3IYWtO1kxptBk
Message-ID: <CAFLBxZY4FS7EbEabmf-sTM+2T+wM4gAOEdeSAKpDaDj_g_pBAQ@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Alex Bligh <alex@alex.org.uk>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC] Enabling live-migrate on HVM on qemu-xen
	device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2814204772729453381=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2814204772729453381==
Content-Type: multipart/alternative; boundary=089e012941dc3f054b04d11e751c

--089e012941dc3f054b04d11e751c
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 18, 2012 at 10:04 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> For example your list above doesn't include the flip sides which are:
>       * Migration of PV guests
>       * Migration of HVM Qemu-traditional guests
>       * HVM Qemu-xen guests for users who don't care about migration
>
> All of these run the risk of regression. Is this new feature worth that
> risk? Obviously you think so, but I'm not so sure.
>

Looking just at the titles of the patch series, it seems that there's only
one patch which touches code not specific to qemu-xen.  Assuming that's
just an "if(qemu_xen_upstream) { X } else { Y }", it seems like it would be
a pretty low risk for #1 and #2.  So the only potential risk would be if
the changes to qemu-xen introduce a regression in other code.

 -George


--089e012941dc3f054b04d11e751c--


--===============2814204772729453381==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2814204772729453381==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 11:32:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 11:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkvOu-0001eJ-Rb; Tue, 18 Dec 2012 11:31:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkvOt-0001eE-HB
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 11:31:55 +0000
Received: from [193.109.254.147:41450] by server-1.bemta-14.messagelabs.com id
	06/B2-15901-A2450D05; Tue, 18 Dec 2012 11:31:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355830265!10501348!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16282 invoked from network); 18 Dec 2012 11:31:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 11:31:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="221877"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 11:31:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 11:31:05 +0000
Message-ID: <1355830264.14620.196.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Tue, 18 Dec 2012 11:31:04 +0000
In-Reply-To: <50C9F2CA.1010602@jhuapl.edu>
References: <50C9F2CA.1010602@jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 15:22 +0000, Matthew Fioravante wrote:
> Ian, this one is special just for you. I'm sending it as an attachment 
> because my email client will mangle it.
> This patch will remove the cmake dependency from xen prior to autoconf 
> stubdom

Thanks, I merged this as described and also folded "Disable caml-stubdom
by default" into the patch which added it enabled.

However this still fails for me when vtpm is not enabled:
        make[1]: *** No rule to make target `mini-os-x86_64-vtpm', needed by `vtpm-stubdom'.  Stop.
        make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
        make: *** [install-stubdom] Error 2

Something to do with vtpmmgr not being conditional?

I've pushed my current branch to:
        git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm2

Please can you fixup and resubmit the whole thing?

It would be useful to take all my [ijc -- did foo] comments and either
remove them or add something to the main changelog.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 11:43:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 11:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkvZm-0001os-0u; Tue, 18 Dec 2012 11:43:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TkvZj-0001ok-Sb
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 11:43:08 +0000
Received: from [85.158.143.99:29513] by server-2.bemta-4.messagelabs.com id
	89/E1-30861-BC650D05; Tue, 18 Dec 2012 11:43:07 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355830985!29846733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13052 invoked from network); 18 Dec 2012 11:43:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 11:43:06 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1019915"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 11:43:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 06:43:04 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TkvZf-0004Ma-Ur;
	Tue, 18 Dec 2012 11:43:03 +0000
Date: Tue, 18 Dec 2012 11:42:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355757581-11845-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181142070.17523@kaball.uk.xensource.com>
References: <1355757581-11845-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: Call init_xen_time earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Ian Campbell wrote:
> If we panic before calling init_xen_time then the "Rebooting in 5
> seconds" delay ends up calling udelay which uses cntfrq before it has
> been initialised resulting in a divide by zero.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  xen/arch/arm/setup.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 2076724..7b0a0f6 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -219,6 +219,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>      console_init_preirq();
>  #endif
>  
> +    init_xen_time();
> +
>      gic_init();
>      make_cpus_ready(cpus, boot_phys_offset);
>  
> @@ -227,8 +229,6 @@ void __init start_xen(unsigned long boot_phys_offset,
>      set_current((struct vcpu *)0xfffff000); /* debug sanity */
>      idle_vcpu[0] = current;
>  
> -    init_xen_time();
> -
>      setup_mm(atag_paddr, fdt_size);
>  
>      /* Setup Hyp vector base */
> -- 
> 1.7.9.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 11:47:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 11:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkvdS-0001vp-PL; Tue, 18 Dec 2012 11:46:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkvdR-0001vk-C4
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 11:46:57 +0000
Received: from [85.158.138.51:58749] by server-6.bemta-3.messagelabs.com id
	B9/D3-12154-0B750D05; Tue, 18 Dec 2012 11:46:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355831215!10737205!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8859 invoked from network); 18 Dec 2012 11:46:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 11:46:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="222323"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 11:46:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 11:46:54 +0000
Message-ID: <1355831212.14620.211.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Tue, 18 Dec 2012 11:46:52 +0000
In-Reply-To: <1355830264.14620.196.camel@zakaz.uk.xensource.com>
References: <50C9F2CA.1010602@jhuapl.edu>
	<1355830264.14620.196.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 11:31 +0000, Ian Campbell wrote:
> On Thu, 2012-12-13 at 15:22 +0000, Matthew Fioravante wrote:
> > Ian, this one is special just for you. I'm sending it as an attachment 
> > because my email client will mangle it.
> > This patch will remove the cmake dependency from xen prior to autoconf 
> > stubdom
> 
> Thanks, I merged this as described and also folded "Disable caml-stubdom
> by default" into the patch which added it enabled.
> 
> However this still fails for me when vtpm is not enabled:
>         make[1]: *** No rule to make target `mini-os-x86_64-vtpm', needed by `vtpm-stubdom'.  Stop.
>         make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
>         make: *** [install-stubdom] Error 2
> 
> Something to do with vtpmmgr not being conditional?

Looks like a simple thinko. I'll merge the following into "stubdom: Add
autoconf", hopefully my testing won't find any other issues.

Please try and remember to test both with and without any new option you
are adding in the future. 

Ian.


commit 569b8782415325b27897434f4893945769c45296
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Dec 18 11:41:32 2012 +0000

    install-vtpmmgr should depend on vtpmmgrdom not vtpm-stubdom

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 709b71e..7519683 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -476,7 +476,7 @@ install-vtpm: vtpm-stubdom
 	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
 	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpm/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpm-stubdom.gz"
 
-install-vtpmmgr: vtpm-stubdom
+install-vtpmmgr: vtpmmgrdom
 	$(INSTALL_DIR) "$(DESTDIR)$(XENFIRMWAREDIR)"
 	$(INSTALL_PROG) mini-os-$(XEN_TARGET_ARCH)-vtpmmgr/mini-os.gz "$(DESTDIR)$(XENFIRMWAREDIR)/vtpmmgrdom.gz"
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:05:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:05:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkvuo-0002Ib-5L; Tue, 18 Dec 2012 12:04:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tkvum-0002IW-Ip
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 12:04:52 +0000
Received: from [85.158.139.83:63899] by server-2.bemta-5.messagelabs.com id
	64/2D-16162-2EB50D05; Tue, 18 Dec 2012 12:04:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355832288!28624342!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14061 invoked from network); 18 Dec 2012 12:04:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:04:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1021566"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:04:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:04:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tkvuh-0004hx-6r;
	Tue, 18 Dec 2012 12:04:47 +0000
Date: Tue, 18 Dec 2012 12:04:38 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355762141-29616-6-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Marc Zyngier <marc.zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:05:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:05:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkvuo-0002Ib-5L; Tue, 18 Dec 2012 12:04:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tkvum-0002IW-Ip
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 12:04:52 +0000
Received: from [85.158.139.83:63899] by server-2.bemta-5.messagelabs.com id
	64/2D-16162-2EB50D05; Tue, 18 Dec 2012 12:04:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355832288!28624342!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14061 invoked from network); 18 Dec 2012 12:04:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:04:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1021566"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:04:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:04:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tkvuh-0004hx-6r;
	Tue, 18 Dec 2012 12:04:47 +0000
Date: Tue, 18 Dec 2012 12:04:38 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355762141-29616-6-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Marc Zyngier <marc.zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Will Deacon wrote:
> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> Add support for the smallest, dumbest possible platform, to be
> used as a guest for KVM or other hypervisors.
> 
> It only mandates a GIC and architected timers. Fits nicely with
> a multiplatform zImage. Uses very little silicon area.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm/Kconfig            |  2 ++
>  arch/arm/Makefile           |  1 +
>  arch/arm/mach-virt/Kconfig  |  9 +++++++
>  arch/arm/mach-virt/Makefile |  5 ++++
>  arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 82 insertions(+)
>  create mode 100644 arch/arm/mach-virt/Kconfig
>  create mode 100644 arch/arm/mach-virt/Makefile
>  create mode 100644 arch/arm/mach-virt/virt.c

Should it come along with a DTS?
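[Editor's note: for illustration, a minimal device tree source for this platform might look like the sketch below. It is not part of the patch; all addresses, interrupt numbers, and sizes are placeholders invented for the example, and only the `linux,dummy-virt` and `arm,cortex-a15-gic` compatible strings come from the patch itself.]

```dts
/dts-v1/;

/ {
	model = "Dummy Virtual Machine";
	compatible = "linux,dummy-virt";
	#address-cells = <1>;
	#size-cells = <1>;

	memory {
		device_type = "memory";
		/* 256 MiB at 2 GiB; placeholder values */
		reg = <0x80000000 0x10000000>;
	};

	gic: interrupt-controller {
		compatible = "arm,cortex-a15-gic";
		interrupt-controller;
		#interrupt-cells = <3>;
		/* distributor, then CPU interface; placeholder addresses */
		reg = <0x2c001000 0x1000>,
		      <0x2c002000 0x1000>;
	};

	timer {
		/* architected timer; PPI numbers are placeholders */
		compatible = "arm,armv7-timer";
		interrupts = <1 13 0xf08>, <1 14 0xf08>,
			     <1 11 0xf08>, <1 10 0xf08>;
	};
};
```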


> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 2a04375..552d72e0 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1127,6 +1127,8 @@ source "arch/arm/mach-versatile/Kconfig"
>  source "arch/arm/mach-vexpress/Kconfig"
>  source "arch/arm/plat-versatile/Kconfig"
>  
> +source "arch/arm/mach-virt/Kconfig"
> +
>  source "arch/arm/mach-w90x900/Kconfig"
>  
>  # Definitions to make life easier
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 5f914fc..e8232ad 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -192,6 +192,7 @@ machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
>  machine-$(CONFIG_ARCH_SPEAR13XX)	+= spear13xx
>  machine-$(CONFIG_ARCH_SPEAR3XX)		+= spear3xx
>  machine-$(CONFIG_MACH_SPEAR600)		+= spear6xx
> +machine-$(CONFIG_ARCH_VIRT)		+= virt
>  machine-$(CONFIG_ARCH_ZYNQ)		+= zynq
>  
>  # Platform directory name.  This list is sorted alphanumerically
> diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
> new file mode 100644
> index 0000000..a568a2a
> --- /dev/null
> +++ b/arch/arm/mach-virt/Kconfig
> @@ -0,0 +1,9 @@
> +config ARCH_VIRT
> +	bool "Dummy Virtual Machine" if ARCH_MULTI_V7
> +	select ARCH_WANT_OPTIONAL_GPIOLIB
> +	select ARM_GIC
> +	select ARM_ARCH_TIMER
> +	select HAVE_SMP
> +	select CPU_V7
> +	select SPARSE_IRQ
> +	select USE_OF
> diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
> new file mode 100644
> index 0000000..7ddbfa6
> --- /dev/null
> +++ b/arch/arm/mach-virt/Makefile
> @@ -0,0 +1,5 @@
> +#
> +# Makefile for the linux kernel.
> +#
> +
> +obj-y					:= virt.o
> diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
> new file mode 100644
> index 0000000..174b9da
> --- /dev/null
> +++ b/arch/arm/mach-virt/virt.c
> @@ -0,0 +1,65 @@
> +/*
> + * Dummy Virtual Machine - does what it says on the tin.
> + *
> + * Copyright (C) 2012 ARM Ltd
> + * Authors: Will Deacon <will.deacon@arm.com>,
> + *          Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +
> +#include <asm/arch_timer.h>
> +#include <asm/hardware/gic.h>
> +#include <asm/mach/arch.h>
> +#include <asm/mach/time.h>
> +
> +static const struct of_device_id irq_match[] = {
> +	{ .compatible = "arm,cortex-a15-gic", .data = gic_of_init, },
> +	{}
> +};
> +
> +static void __init gic_init_irq(void)
> +{
> +	of_irq_init(irq_match);
> +}
> +
> +static void __init virt_init(void)
> +{
> +	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
> +}
> +
> +static void __init virt_timer_init(void)
> +{
> +	WARN_ON(arch_timer_of_register() != 0);
> +	WARN_ON(arch_timer_sched_clock_init() != 0);
> +}
> +
> +static const char *virt_dt_match[] = {
> +	"linux,dummy-virt",
> +	NULL
> +};
> +
> +static struct sys_timer virt_timer = {
> +	.init = virt_timer_init,
> +};
> +
> +DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
> +	.init_irq	= gic_init_irq,
> +	.handle_irq     = gic_handle_irq,
> +	.timer		= &virt_timer,
> +	.init_machine	= virt_init,
> +	.dt_compat	= virt_dt_match,
> +MACHINE_END
> -- 
> 1.8.0
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:10:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkw0E-0002RR-3b; Tue, 18 Dec 2012 12:10:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tkw0C-0002RJ-8x
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 12:10:28 +0000
Received: from [85.158.143.35:32216] by server-3.bemta-4.messagelabs.com id
	3F/8B-18211-33D50D05; Tue, 18 Dec 2012 12:10:27 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355832621!11737241!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24028 invoked from network); 18 Dec 2012 12:10:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:10:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1091476"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:10:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:10:05 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tkvzp-0004mo-8g;
	Tue, 18 Dec 2012 12:10:05 +0000
Message-ID: <50D05D1C.8070103@eu.citrix.com>
Date: Tue, 18 Dec 2012 12:10:04 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355783337@Solace>
	<290dfdd1cbe5f3e1c2e0.1355783338@Solace>
In-Reply-To: <290dfdd1cbe5f3e1c2e0.1355783338@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1 of 5 v3] xen: sched_credit: define and use
	curr_on_cpu(cpu)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/12 22:28, Dario Faggioli wrote:
> To fetch `per_cpu(schedule_data,cpu).curr' in a more readable
> way. It's in sched-if.h as that is where `struct schedule_data'
> is declared.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> Changes from v2:
> * This patch now contains both macro definition and usage, (and
>    has been moved to the top of the series), as suggested during
>    review.
> * The macro has been moved to sched-if.h, as requested
>    during review.
> * The macro has been renamed curr_on_cpu(), to match with the
>    `*curr' field in `struct schedule_data' to which it points.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -228,7 +228,7 @@ static void burn_credits(struct csched_v
>       unsigned int credits;
>
>       /* Assert svc is current */
> -    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
> +    ASSERT( svc == CSCHED_VCPU(curr_on_cpu(svc->vcpu->processor)) );
>
>       if ( (delta = now - svc->start_time) <= 0 )
>           return;
> @@ -246,8 +246,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
>   static inline void
>   __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
>   {
> -    struct csched_vcpu * const cur =
> -        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
> +    struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask;
>
> @@ -371,7 +370,7 @@ csched_alloc_pdata(const struct schedule
>           per_cpu(schedule_data, cpu).sched_priv = spc;
>
>       /* Start off idling... */
> -    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
> +    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));
>       cpumask_set_cpu(cpu, prv->idlers);
>
>       spin_unlock_irqrestore(&prv->lock, flags);
> @@ -709,7 +708,7 @@ csched_vcpu_sleep(const struct scheduler
>
>       BUG_ON( is_idle_vcpu(vc) );
>
> -    if ( per_cpu(schedule_data, vc->processor).curr == vc )
> +    if ( curr_on_cpu(vc->processor) == vc )
>           cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
>       else if ( __vcpu_on_runq(svc) )
>           __runq_remove(svc);
> @@ -723,7 +722,7 @@ csched_vcpu_wake(const struct scheduler
>
>       BUG_ON( is_idle_vcpu(vc) );
>
> -    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
> +    if ( unlikely(curr_on_cpu(cpu) == vc) )
>       {
>           SCHED_STAT_CRANK(vcpu_wake_running);
>           return;
> @@ -1192,7 +1191,7 @@ static struct csched_vcpu *
>   csched_runq_steal(int peer_cpu, int cpu, int pri)
>   {
>       const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
> -    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
> +    const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
>       struct csched_vcpu *speer;
>       struct list_head *iter;
>       struct vcpu *vc;
> @@ -1480,7 +1479,7 @@ csched_dump_pcpu(const struct scheduler
>       printk("core=%s\n", cpustr);
>
>       /* current VCPU */
> -    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
> +    svc = CSCHED_VCPU(curr_on_cpu(cpu));
>       if ( svc )
>       {
>           printk("\trun: ");
> diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
> --- a/xen/include/xen/sched-if.h
> +++ b/xen/include/xen/sched-if.h
> @@ -41,6 +41,8 @@ struct schedule_data {
>       atomic_t            urgent_count;   /* how many urgent vcpus           */
>   };
>
> +#define curr_on_cpu(c)    (per_cpu(schedule_data, c).curr)
> +
>   DECLARE_PER_CPU(struct schedule_data, schedule_data);
>   DECLARE_PER_CPU(struct scheduler *, scheduler);
>   DECLARE_PER_CPU(struct cpupool *, cpupool);
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:12:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkw1v-0002YH-L6; Tue, 18 Dec 2012 12:12:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tkw1v-0002YB-1C
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 12:12:15 +0000
Received: from [85.158.137.99:43665] by server-2.bemta-3.messagelabs.com id
	4A/7C-11239-E9D50D05; Tue, 18 Dec 2012 12:12:14 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355832731!13666096!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11818 invoked from network); 18 Dec 2012 12:12:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:12:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1091633"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:12:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:12:11 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tkw1q-0004pH-ME;
	Tue, 18 Dec 2012 12:12:10 +0000
Message-ID: <50D05D9A.3010601@eu.citrix.com>
Date: Tue, 18 Dec 2012 12:12:10 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355783337@Solace>
	<7e9837f96c0d6afc2f48.1355783339@Solace>
In-Reply-To: <7e9837f96c0d6afc2f48.1355783339@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 5 v3] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/12 22:28, Dario Faggioli wrote:
> In _csched_cpu_pick() we try to select the best possible CPU for
> running a VCPU, considering the characteristics of the underlying
> hardware (i.e., how many threads, cores, sockets, and how busy they
> are). What we want is "the idle execution vehicle with the most
> idling neighbours in its grouping".
>
> In order to achieve it, we select a CPU from the VCPU's affinity,
> giving preference to its current processor if possible, as the basis
> for the comparison with all the other CPUs. Problem is, to discount
> the VCPU itself when computing this "idleness" (in an attempt to be
> fair wrt its current processor), we arbitrarily and unconditionally
> consider that selected CPU as idle, even when it is not the case,
> for instance:
>   1. If the CPU is not the one where the VCPU is running (perhaps due
>      to the affinity being changed);
>   2. The CPU is where the VCPU is running, but it has other VCPUs in
>      its runq, so it won't go idle even if the VCPU in question goes.
>
> This is exemplified in the trace below:
>
> ]  3.466115364 x|------|------| d10v1   22005(2:2:5) 3 [ a 1 8 ]
>     ... ... ...
>     3.466122856 x|------|------| d10v1 runstate_change d10v1 running->offline
>     3.466123046 x|------|------| d?v? runstate_change d32767v0 runnable->running
>     ... ... ...
> ]  3.466126887 x|------|------| d32767v0   28004(2:8:4) 3 [ a 1 8 ]
>
> 22005(...) line (the first line) means _csched_cpu_pick() was called on
> VCPU 1 of domain 10, while it is running on CPU 0, and it choose CPU 8,
> which is busy ('|'), even if there are plenty of idle CPUs. That is
> because, as a consequence of changing the VCPU affinity, CPU 8 was
> chosen as the basis for the comparison, and therefore considered idle
> (its bit gets unconditionally set in the bitmask representing the idle
> CPUs). 28004(...) line means the VCPU is woken up and queued on CPU 8's
> runq, where it waits for a context switch or a migration, in order to
> be able to execute.
>
> This change fixes things by only considering the "guessed" CPU idle if
> the VCPU in question is both running there and is its only runnable
> VCPU.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> Changes from v2:
>   * Use `vc->processor' instead of curr_on_cpu() for determining whether
>     or not vc is current on cpu, as suggested during review.
>   * Fixed IS_RUNQ_IDLE() macro in case runq is empty.
>   * Ditched the variable renaming, as requested during review.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -59,6 +59,9 @@
>   #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
>   #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
>   #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
> +/* Is the first element of _cpu's runq its idle vcpu? */
> +#define IS_RUNQ_IDLE(_cpu)  (list_empty(RUNQ(_cpu)) || \
> +                             is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
>
>
>   /*
> @@ -478,9 +481,14 @@ static int
>        * distinct cores first and guarantees we don't do something stupid
>        * like run two VCPUs on co-hyperthreads while there are idle cores
>        * or sockets.
> +     *
> +     * Notice that, when computing the "idleness" of cpu, we may want to
> +     * discount vc. That is, iff vc is the currently running and the only
> +     * runnable vcpu on cpu, we add cpu to the idlers.
>        */
>       cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    cpumask_set_cpu(cpu, &idlers);
> +    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> +        cpumask_set_cpu(cpu, &idlers);
>       cpumask_and(&cpus, &cpus, &idlers);
>       cpumask_clear_cpu(cpu, &cpus);
>
>
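For readers skimming the archive: the guarded "consider this cpu idle" test in the hunk above boils down to the following standalone C sketch. This is a hypothetical mini-model with made-up types (`struct vcpu`, `struct runq`, `nr_queued`), not actual Xen code; it only illustrates the condition the patch introduces.

```c
#include <assert.h>
#include <stdbool.h>

/* Made-up stand-ins for the Xen structures involved. */
struct vcpu { int processor; };  /* cpu the vcpu runs on (vc->processor) */
struct runq { int nr_queued; };  /* runnable vcpus queued, idle vcpu excluded */

/* Mirrors IS_RUNQ_IDLE(): the runq is empty, or only the idle vcpu is on it. */
static bool runq_idle(const struct runq *rq)
{
    return rq->nr_queued == 0;
}

/* Before the patch, cpu was unconditionally added to the idlers mask;
 * after it, cpu counts as idle only if vc is currently running there
 * AND nothing else is waiting on that cpu's runqueue. */
static bool count_cpu_as_idle(const struct vcpu *vc, int cpu,
                              const struct runq *rq)
{
    return vc->processor == cpu && runq_idle(rq);
}
```

The three assertions below correspond to the cases from the changelog: the legitimate "running alone" case, the affinity-change case (1), and the busy-runq case (2).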


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:12:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkw1v-0002YH-L6; Tue, 18 Dec 2012 12:12:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tkw1v-0002YB-1C
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 12:12:15 +0000
Received: from [85.158.137.99:43665] by server-2.bemta-3.messagelabs.com id
	4A/7C-11239-E9D50D05; Tue, 18 Dec 2012 12:12:14 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355832731!13666096!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11818 invoked from network); 18 Dec 2012 12:12:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:12:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1091633"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:12:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:12:11 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tkw1q-0004pH-ME;
	Tue, 18 Dec 2012 12:12:10 +0000
Message-ID: <50D05D9A.3010601@eu.citrix.com>
Date: Tue, 18 Dec 2012 12:12:10 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355783337@Solace>
	<7e9837f96c0d6afc2f48.1355783339@Solace>
In-Reply-To: <7e9837f96c0d6afc2f48.1355783339@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2 of 5 v3] xen: sched_credit: improve
 picking up the idle CPU for a VCPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/12 22:28, Dario Faggioli wrote:
> In _csched_cpu_pick() we try to select the best possible CPU for
> running a VCPU, considering the characteristics of the underlying
> hardware (i.e., how many threads, cores, sockets, and how busy they
> are). What we want is "the idle execution vehicle with the most
> idling neighbours in its grouping".
>
> In order to achieve it, we select a CPU from the VCPU's affinity,
> giving preference to its current processor if possible, as the basis
> for the comparison with all the other CPUs. Problem is, to discount
> the VCPU itself when computing this "idleness" (in an attempt to be
> fair wrt its current processor), we arbitrarily and unconditionally
> consider that selected CPU as idle, even when it is not the case,
> for instance:
>   1. If the CPU is not the one where the VCPU is running (perhaps due
>      to the affinity being changed);
>   2. The CPU is where the VCPU is running, but it has other VCPUs in
>      its runq, so it won't go idle even if the VCPU in question goes.
>
> This is exemplified in the trace below:
>
> ]  3.466115364 x|------|------| d10v1   22005(2:2:5) 3 [ a 1 8 ]
>     ... ... ...
>     3.466122856 x|------|------| d10v1 runstate_change d10v1 running->offline
>     3.466123046 x|------|------| d?v? runstate_change d32767v0 runnable->running
>     ... ... ...
> ]  3.466126887 x|------|------| d32767v0   28004(2:8:4) 3 [ a 1 8 ]
>
> The 22005(...) line (the first line) means _csched_cpu_pick() was called
> on VCPU 1 of domain 10, while it was running on CPU 0, and that it chose
> CPU 8, which is busy ('|'), even though there are plenty of idle CPUs.
> That is
> because, as a consequence of changing the VCPU affinity, CPU 8 was
> chosen as the basis for the comparison, and therefore considered idle
> (its bit gets unconditionally set in the bitmask representing the idle
> CPUs). The 28004(...) line means the VCPU is woken up and queued on CPU
> 8's runq, where it waits for a context switch or a migration before it
> can execute.
>
> This change fixes things by only considering the "guessed" CPU idle if
> the VCPU in question is both running there and is its only runnable
> VCPU.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> Changes from v2:
>   * Use `vc->processor' instead of curr_on_cpu() for determining whether
>     or not vc is current on cpu, as suggested during review.
>   * Fixed the IS_RUNQ_IDLE() macro in case the runq is empty.
>   * Ditched the variable renaming, as requested during review.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -59,6 +59,9 @@
>   #define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
>   #define CSCHED_DOM(_dom)    ((struct csched_dom *) (_dom)->sched_priv)
>   #define RUNQ(_cpu)          (&(CSCHED_PCPU(_cpu)->runq))
> +/* Is the first element of _cpu's runq its idle vcpu? */
> +#define IS_RUNQ_IDLE(_cpu)  (list_empty(RUNQ(_cpu)) || \
> +                             is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))
>
>
>   /*
> @@ -478,9 +481,14 @@ static int
>        * distinct cores first and guarantees we don't do something stupid
>        * like run two VCPUs on co-hyperthreads while there are idle cores
>        * or sockets.
> +     *
> +     * Notice that, when computing the "idleness" of cpu, we may want to
> +     * discount vc. That is, iff vc is the currently running and the only
> +     * runnable vcpu on cpu, we add cpu to the idlers.
>        */
>       cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    cpumask_set_cpu(cpu, &idlers);
> +    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> +        cpumask_set_cpu(cpu, &idlers);
>       cpumask_and(&cpus, &cpus, &idlers);
>       cpumask_clear_cpu(cpu, &cpus);
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:16:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:16:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkw5F-0002iI-8s; Tue, 18 Dec 2012 12:15:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tkw5E-0002iB-57
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 12:15:40 +0000
Received: from [85.158.139.211:64067] by server-14.bemta-5.messagelabs.com id
	9E/67-09538-B6E50D05; Tue, 18 Dec 2012 12:15:39 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355832937!19535514!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13110 invoked from network); 18 Dec 2012 12:15:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:15:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1091907"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:15:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:15:36 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tkw59-0004t6-Sn;
	Tue, 18 Dec 2012 12:15:35 +0000
Message-ID: <50D05E67.6070408@eu.citrix.com>
Date: Tue, 18 Dec 2012 12:15:35 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355783337@Solace>
	<47fe4d3554d40c6b4062.1355783342@Solace>
In-Reply-To: <47fe4d3554d40c6b4062.1355783342@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 5 of 5 v3] xen: sched_credit: add some
	tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/12 22:29, Dario Faggioli wrote:
> About tickling, and PCPU selection.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> Changes from v2:
> * Call to `trace_var()' converted to `__trace_var()', as it originally
>    was (something got messed up while reworking this for v2.
>    Thanks George. :-) )
>
> Changes from v1:
>   * Dummy `struct d {}', accommodating `cpu' only, removed
>     in favour of the much more readable `trace_var(..., sizeof(cpu), &cpu)',
>     as suggested.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -21,6 +21,7 @@
>   #include <asm/atomic.h>
>   #include <xen/errno.h>
>   #include <xen/keyhandler.h>
> +#include <xen/trace.h>
>
>
>   /*
> @@ -98,6 +99,18 @@
>
>
>   /*
> + * Credit tracing events ("only" 512 available!). Check
> + * include/public/trace.h for more details.
> + */
> +#define TRC_CSCHED_SCHED_TASKLET TRC_SCHED_CLASS_EVT(CSCHED, 1)
> +#define TRC_CSCHED_ACCOUNT_START TRC_SCHED_CLASS_EVT(CSCHED, 2)
> +#define TRC_CSCHED_ACCOUNT_STOP  TRC_SCHED_CLASS_EVT(CSCHED, 3)
> +#define TRC_CSCHED_STOLEN_VCPU   TRC_SCHED_CLASS_EVT(CSCHED, 4)
> +#define TRC_CSCHED_PICKED_CPU    TRC_SCHED_CLASS_EVT(CSCHED, 5)
> +#define TRC_CSCHED_TICKLE        TRC_SCHED_CLASS_EVT(CSCHED, 6)
> +
> +
> +/*
>    * Boot parameters
>    */
>   static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
> @@ -316,9 +329,18 @@ static inline void
>           }
>       }
>
> -    /* Send scheduler interrupts to designated CPUs */
>       if ( !cpumask_empty(&mask) )
> +    {
> +        if ( unlikely(tb_init_done) )
> +        {
> +            /* Avoid TRACE_*: saves checking !tb_init_done each step */
> +            for_each_cpu(cpu, &mask)
> +                __trace_var(TRC_CSCHED_TICKLE, 0, sizeof(cpu), &cpu);
> +        }
> +
> +        /* Send scheduler interrupts to designated CPUs */
>           cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
> +    }
>   }
>
>   static void
> @@ -555,6 +577,8 @@ static int
>       if ( commit && spc )
>          spc->idle_bias = cpu;
>
> +    TRACE_3D(TRC_CSCHED_PICKED_CPU, vc->domain->domain_id, vc->vcpu_id, cpu);
> +
>       return cpu;
>   }
>
> @@ -587,6 +611,9 @@ static inline void
>           }
>       }
>
> +    TRACE_3D(TRC_CSCHED_ACCOUNT_START, sdom->dom->domain_id,
> +             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
> +
>       spin_unlock_irqrestore(&prv->lock, flags);
>   }
>
> @@ -609,6 +636,9 @@ static inline void
>       {
>           list_del_init(&sdom->active_sdom_elem);
>       }
> +
> +    TRACE_3D(TRC_CSCHED_ACCOUNT_STOP, sdom->dom->domain_id,
> +             svc->vcpu->vcpu_id, sdom->active_vcpu_count);
>   }
>
>   static void
> @@ -1242,6 +1272,8 @@ csched_runq_steal(int peer_cpu, int cpu,
>               if (__csched_vcpu_is_migrateable(vc, cpu))
>               {
>                   /* We got a candidate. Grab it! */
> +                TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
> +                         vc->domain->domain_id, vc->vcpu_id);
>                   SCHED_VCPU_STAT_CRANK(speer, migrate_q);
>                   SCHED_STAT_CRANK(migrate_queued);
>                   WARN_ON(vc->is_urgent);
> @@ -1402,6 +1434,7 @@ csched_schedule(
>       /* Tasklet work (which runs in idle VCPU context) overrides all else. */
>       if ( tasklet_work_scheduled )
>       {
> +        TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
>           snext = CSCHED_VCPU(idle_vcpu[cpu]);
>           snext->pri = CSCHED_PRI_TS_BOOST;
>       }
>
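The comment "Avoid TRACE_*: saves checking !tb_init_done each step" in the tickle hunk above describes a small pattern: hoist the "is tracing enabled" check out of the per-CPU loop and call the low-level emitter directly. The sketch below models it with made-up names (`emit_record`, `tickle_trace`, a plain `bool` flag), not the real Xen trace API.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for Xen's trace-buffer-initialised flag (tb_init_done). */
static bool tb_init_done;
static int  records_emitted;

/* Stand-in for the low-level emitter (__trace_var() in the patch):
 * unlike the TRACE_* wrappers, it does NOT re-check the flag. */
static void emit_record(int cpu)
{
    (void)cpu;
    records_emitted++;
}

/* Check the flag once, then emit one record per cpu in the mask
 * (here modelled as a plain array), instead of one flag check per cpu. */
static void tickle_trace(const int *cpus, int n)
{
    if (tb_init_done) {
        for (int i = 0; i < n; i++)
            emit_record(cpus[i]);
    }
}
```

The payoff is just that the common case (tracing off) pays one branch for the whole loop rather than one per tickled CPU.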


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:17:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:17:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkw6O-0002nh-O4; Tue, 18 Dec 2012 12:16:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tkw6N-0002nS-AL
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 12:16:51 +0000
Received: from [85.158.143.99:22531] by server-3.bemta-4.messagelabs.com id
	11/16-18211-2BE50D05; Tue, 18 Dec 2012 12:16:50 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355832989!18786196!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7170 invoked from network); 18 Dec 2012 12:16:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:16:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1023186"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:16:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:16:28 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tkw60-0004tw-Cl;
	Tue, 18 Dec 2012 12:16:28 +0000
Message-ID: <50D05E9B.5050605@eu.citrix.com>
Date: Tue, 18 Dec 2012 12:16:27 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355783337@Solace>
In-Reply-To: <patchbomb.1355783337@Solace>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0 of 5 v3] xen: sched_credit: fix picking
 and tickling and add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/12/12 22:28, Dario Faggioli wrote:
> Hello,
>
> Here's the take 3 of this series (last round here:
>   http://comments.gmane.org/gmane.comp.emulators.xen.devel/145998).
>
> Super quickly, this is about fixing a couple of anomalies in the credit
> scheduler and adding some tracing to it.  All the comments raised during v2's
> review have been addressed.
>
> Quick summary of the series (* = Acked):
>
>     1/5 xen: sched_credit: define and use curr_on_cpu(cpu)
>     2/5 xen: sched_credit: improve picking up the idle CPU for a VCPU
>   * 3/5 xen: sched_credit: improve tickling of idle CPUs
>   * 4/5 xen: tracing: introduce per-scheduler trace event IDs
>     5/5 xen: sched_credit: add some tracing

Keir, Jan: All of the patches have Acks from me.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:20:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkw9J-000318-Ie; Tue, 18 Dec 2012 12:19:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tkw9I-000312-OA
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 12:19:52 +0000
Received: from [85.158.137.99:3941] by server-7.bemta-3.messagelabs.com id
	24/7E-23008-76F50D05; Tue, 18 Dec 2012 12:19:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355833189!14414000!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18832 invoked from network); 18 Dec 2012 12:19:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 12:19:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,308,1355097600"; 
   d="scan'208";a="1092327"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 12:19:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 07:19:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tkw9E-0004xH-LF;
	Tue, 18 Dec 2012 12:19:48 +0000
Date: Tue, 18 Dec 2012 12:19:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355762141-29616-7-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.DEB.2.02.1212181204450.17523@kaball.uk.xensource.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-7-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	"Marc.Zyngier@arm.com" <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 6/6] ARM: mach-virt: add SMP support
 using PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Dec 2012, Will Deacon wrote:
> This patch adds support for SMP to mach-virt using the PSCI
> infrastructure.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
>
>  arch/arm/mach-virt/Kconfig   |  1 +
>  arch/arm/mach-virt/Makefile  |  1 +
>  arch/arm/mach-virt/platsmp.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
>  arch/arm/mach-virt/virt.c    |  6 ++++
>  4 files changed, 84 insertions(+)
>  create mode 100644 arch/arm/mach-virt/platsmp.c
> 
> diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
> index a568a2a..8958f0d 100644
> --- a/arch/arm/mach-virt/Kconfig
> +++ b/arch/arm/mach-virt/Kconfig
> @@ -3,6 +3,7 @@ config ARCH_VIRT
>  	select ARCH_WANT_OPTIONAL_GPIOLIB
>  	select ARM_GIC
>  	select ARM_ARCH_TIMER
> +	select ARM_PSCI
>  	select HAVE_SMP
>  	select CPU_V7
>  	select SPARSE_IRQ

Considering that PSCI is actually needed only to boot secondary CPUs,
maybe we should select it only when CONFIG_SMP is enabled?
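A minimal sketch of that suggestion, assuming a conditional select is acceptable here (only the select lines visible in the quoted hunk are shown; the rest of the ARCH_VIRT entry is elided):

```kconfig
config ARCH_VIRT
	select ARCH_WANT_OPTIONAL_GPIOLIB
	select ARM_GIC
	select ARM_ARCH_TIMER
	select ARM_PSCI if SMP
	select HAVE_SMP
	select CPU_V7
	select SPARSE_IRQ
```

`select SYMBOL if COND` is standard Kconfig syntax, so ARM_PSCI would then be pulled in only for SMP configurations.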


> diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
> index 7ddbfa6..042afc1 100644
> --- a/arch/arm/mach-virt/Makefile
> +++ b/arch/arm/mach-virt/Makefile
> @@ -3,3 +3,4 @@
>  #
>  
>  obj-y					:= virt.o
> +obj-$(CONFIG_SMP)			+= platsmp.o
> diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
> new file mode 100644
> index 0000000..930362b
> --- /dev/null
> +++ b/arch/arm/mach-virt/platsmp.c
> @@ -0,0 +1,76 @@
> +/*
> + * Dummy Virtual Machine - does what it says on the tin.
> + *
> + * Copyright (C) 2012 ARM Ltd
> + * Author: Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/smp.h>
> +#include <linux/of.h>
> +
> +#include <asm/psci.h>
> +#include <asm/smp_plat.h>
> +#include <asm/hardware/gic.h>
> +
> +extern void secondary_startup(void);
> +
> +/*
> + * Enumerate the possible CPU set from the device tree.
> + */
> +static void __init virt_smp_init_cpus(void)
> +{
> +	struct device_node *dn = NULL;
> +	int cpu = 0;
> +
> +	while ((dn = of_find_node_by_type(dn, "cpu"))) {
> +		if (cpu < NR_CPUS)
> +			set_cpu_possible(cpu, true);
> +		cpu++;
> +	}
> +
> +	/* sanity check */
> +	if (cpu > NR_CPUS)
> +		pr_warning("no. of cores (%d) greater than configured maximum "
> +			   "of %d - clipping\n",
> +			   cpu, NR_CPUS);
> +
> +	set_smp_cross_call(gic_raise_softirq);
> +}
> +
> +static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
> +{
> +}
> +
> +static int __cpuinit virt_boot_secondary(unsigned int cpu,
> +					 struct task_struct *idle)
> +{
> +	if (psci_ops.cpu_on)
> +		return psci_ops.cpu_on(cpu_logical_map(cpu),
> +				       __pa(secondary_startup));
> +	return -ENODEV;
> +}

Isn't there a better way to check whether PSCI is actually "enabled", as
in present in the device tree and initialized correctly?

Maybe we need a psci_enabled() static inline of some sort?
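A minimal sketch of what such a helper might look like (plain C outside the kernel tree; `struct psci_operations` here is a cut-down stand-in for the real one, and `dummy_cpu_on` is a hypothetical stub used only for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the kernel's psci_ops; the real struct has more hooks
 * (cpu_off, cpu_suspend, ...), omitted here for brevity. */
struct psci_operations {
	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
};

static struct psci_operations psci_ops;	/* zero-initialised: not probed */

/* True once device-tree probing has installed at least a cpu_on method. */
static inline bool psci_enabled(void)
{
	return psci_ops.cpu_on != NULL;
}

/* Hypothetical stub standing in for a firmware-backed CPU_ON call. */
static int dummy_cpu_on(unsigned long cpuid, unsigned long entry_point)
{
	(void)cpuid;
	(void)entry_point;
	return 0;
}
```

With something along these lines, virt_boot_secondary() could start with `if (!psci_enabled()) return -ENODEV;` instead of open-coding the psci_ops.cpu_on check.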


> +static void __cpuinit virt_secondary_init(unsigned int cpu)
> +{
> +	gic_secondary_init(0);
> +}
> +
> +struct smp_operations __initdata virt_smp_ops = {
> +	.smp_init_cpus		= virt_smp_init_cpus,
> +	.smp_prepare_cpus	= virt_smp_prepare_cpus,
> +	.smp_secondary_init	= virt_secondary_init,
> +	.smp_boot_secondary	= virt_boot_secondary,
> +};
> diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
> index 174b9da..d764835 100644
> --- a/arch/arm/mach-virt/virt.c
> +++ b/arch/arm/mach-virt/virt.c
> @@ -20,6 +20,7 @@
>  
>  #include <linux/of_irq.h>
>  #include <linux/of_platform.h>
> +#include <linux/smp.h>
>  
>  #include <asm/arch_timer.h>
>  #include <asm/hardware/gic.h>
> @@ -56,10 +57,15 @@ static struct sys_timer virt_timer = {
>  	.init = virt_timer_init,
>  };
>  
> +#ifdef CONFIG_SMP
> +extern struct smp_operations virt_smp_ops;
> +#endif
> +
>  DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
>  	.init_irq	= gic_init_irq,
>  	.handle_irq     = gic_handle_irq,
>  	.timer		= &virt_timer,
>  	.init_machine	= virt_init,
> +	.smp		= smp_ops(virt_smp_ops),
>  	.dt_compat	= virt_dt_match,
>  MACHINE_END
> -- 
> 1.8.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 12:54:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 12:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwgs-0003fI-D7; Tue, 18 Dec 2012 12:54:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tkwgr-0003fD-7P
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 12:54:33 +0000
Received: from [85.158.143.35:38774] by server-3.bemta-4.messagelabs.com id
	20/73-18211-88760D05; Tue, 18 Dec 2012 12:54:32 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355835265!12469214!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31650 invoked from network); 18 Dec 2012 12:54:25 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-12.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	18 Dec 2012 12:54:25 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:55276 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tkwkm-0005Wc-NN; Tue, 18 Dec 2012 13:58:36 +0100
Date: Tue, 18 Dec 2012 13:54:11 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1541942460.20121218135411@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355152338.21160.37.camel@zakaz.uk.xensource.com>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com>
	<341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
	<1355152338.21160.37.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
	net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, December 10, 2012, 4:12:18 PM, you wrote:

> I wrote
>> > I have a vague recollection of a patch to set skb->truesize more
>> > accurately in xennet_poll (netfront), but I can't seem to find any
>> > reference to it now.

> I finally found the following in my git tree. Looks like I never sent it
> out.

> Does it help?

Hi Ian,

As I reported earlier, it works for me.
I haven't seen a pull request anywhere yet?

--
Sander


> 8<--------------------

> From 788ba317fa241512be7a8630b1b58e53faff83ed Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 22 Aug 2012 11:55:31 +0100
> Subject: [PATCH] xen/netfront: improve truesize tracking

> Fixes WARN_ON from skb_try_coalesce.

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  drivers/net/xen-netfront.c |   15 +++++----------
>  net/core/skbuff.c          |    2 +-
>  2 files changed, 6 insertions(+), 11 deletions(-)

> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index caa0110..b06ef81 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -971,17 +971,12 @@ err:
>                  * overheads. Here, we add the size of the data pulled
>                  * in xennet_fill_frags().
>                  *
> -                * We also adjust for any unused space in the main
> -                * data area by subtracting (RX_COPY_THRESHOLD -
> -                * len). This is especially important with drivers
> -                * which split incoming packets into header and data,
> -                * using only 66 bytes of the main data area (see the
> -                * e1000 driver for example.)  On such systems,
> -                * without this last adjustement, our achievable
> -                * receive throughout using the standard receive
> -                * buffer size was cut by 25%(!!!).
> +                * We also adjust for the __pskb_pull_tail done in
> +                * handle_incoming_queue which pulls data from the
> +                * frags into the head area, which is already
> +                * accounted in RX_COPY_THRESHOLD.
>                  */
> -               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> +               skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
>                 skb->len += skb->data_len;
>  
>                 if (rx->flags & XEN_NETRXF_csum_blank)
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 6e04b1f..941a974 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3439,7 +3439,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
>                 delta = from->truesize - SKB_TRUESIZE(skb_end_offset(from));
>         }
>  
> -       WARN_ON_ONCE(delta < len);
> +       WARN_ONCE(delta < len, "delta %d < len %d\n", delta, len);
>  
>         memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
>                skb_shinfo(from)->frags,
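To make the accounting change in the netfront hunk above concrete, here is a toy recalculation (plain C, invented values; the 66-byte pull echoes the e1000 example from the deleted comment). The old formula always subtracted the full RX_COPY_THRESHOLD, while the new one subtracts only the bytes actually pulled into the head area:

```c
#include <assert.h>

/* Toy model of the truesize adjustment changed by the patch above.
 * The names mirror the kernel fields, but the numbers are invented. */
enum { RX_COPY_THRESHOLD = 256 };

/* Old accounting: assume the whole copy-threshold head room was used. */
static int truesize_delta_old(int data_len)
{
	return data_len - RX_COPY_THRESHOLD;
}

/* New accounting: only the pull_to bytes moved into the head by
 * __pskb_pull_tail are already covered by the head allocation. */
static int truesize_delta_new(int data_len, int pull_to)
{
	return data_len - pull_to;
}
```

When pull_to is smaller than RX_COPY_THRESHOLD, the old code credits less truesize than the new one, which is plausibly how the under-reported truesize made the `delta < len` warning fire in skb_try_coalesce.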


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtr-0003t0-Ux; Tue, 18 Dec 2012 13:07:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cheungrck@gmail.com>) id 1TjbRt-0001l6-P2
	for xen-devel@lists.xen.org; Fri, 14 Dec 2012 20:01:34 +0000
Received: from [85.158.138.51:17037] by server-6.bemta-3.messagelabs.com id
	9F/69-12154-7958BC05; Fri, 14 Dec 2012 20:01:27 +0000
X-Env-Sender: cheungrck@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355515285!22609832!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30478 invoked from network); 14 Dec 2012 20:01:27 -0000
Received: from mail-da0-f45.google.com (HELO mail-da0-f45.google.com)
	(209.85.210.45)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 20:01:27 -0000
Received: by mail-da0-f45.google.com with SMTP id w4so1517208dam.32
	for <xen-devel@lists.xen.org>; Fri, 14 Dec 2012 12:01:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=yqlmC4btxoDAs56bRPLgPTyna8cU6vgX9vs3HN+rhbQ=;
	b=yDXPRRA1OtPqFcOSwwNNj0JKtiuaujbNh+mYLf9gUUwEsAima5VzzlcUCI+DbWFYnL
	7ONLn2Npr33JeGPsU2XayP6SqXfLkYFrAWPwa7D5e3QJXeTrCeXIoyZ8fCRi7Kt2ngVj
	B7D20fFTwp2dqr91ZtVlBx442RK/KvanKRsaZZ8YGQ00Qh+l661OiBDk44PachatfGp6
	NLkWThl1C7UnGKQJ+/2FzxjxWSfTytrW5bwr+IVAE4fuC/W4WHS6uyFKskIwfOQMgOVf
	VvFfoLqndYQVWbt9v94FLUNTqRJQ6fdiSazhq8UAPkouwM0h5qN4ZzA8GMTgfrCQQUrH
	Nkvg==
MIME-Version: 1.0
Received: by 10.66.89.9 with SMTP id bk9mr18485384pab.67.1355515284881; Fri,
	14 Dec 2012 12:01:24 -0800 (PST)
Received: by 10.67.22.5 with HTTP; Fri, 14 Dec 2012 12:01:24 -0800 (PST)
Date: Fri, 14 Dec 2012 15:01:24 -0500
Message-ID: <CAHNcjxU+uXSq7gr2n3OV2Li3V21w+5F0oOnfinut_uHkmzekhQ@mail.gmail.com>
From: Qi Zhang <cheungrck@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Subject: [Xen-devel] Something about xenalyze
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3175400360251728274=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3175400360251728274==
Content-Type: multipart/alternative; boundary=f46d042fda540b006504d0d57fed

--f46d042fda540b006504d0d57fed
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

I am a Ph.D. student, and I have a question about xenalyze.

Part of my output of 'xenalyze --scatterplot-runstate' is:

1v0 10.822094482 2
1v0 10.822094482 3
0v0 10.822096386 3
0v0 10.822096386 0

It seems that VCPU0 of VM1 has two states (2 and 3) at the same time
point (10.822094482). Can you help me explain that?
Thank you!

PS: The state array is as follows:
int runstate_graph[RUNSTATE_MAX] =
{
    [RUNSTATE_BLOCKED]=0,
    [RUNSTATE_OFFLINE]=1,
    [RUNSTATE_RUNNABLE]=2,
    [RUNSTATE_RUNNING]=3,
    [RUNSTATE_LOST]=-1,
    [RUNSTATE_QUEUED]=-2,
    [RUNSTATE_INIT]=-2,
};

-- 
Best Wishes!
      Qi Zhang

--f46d042fda540b006504d0d57fed--


--===============3175400360251728274==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3175400360251728274==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtq-0003sf-K4; Tue, 18 Dec 2012 13:07:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhebus@googlemail.com>)
	id 1TjUyW-00081T-MK; Fri, 14 Dec 2012 13:06:48 +0000
Received: from [85.158.138.51:44312] by server-5.bemta-3.messagelabs.com id
	A3/B2-15136-7642BC05; Fri, 14 Dec 2012 13:06:47 +0000
X-Env-Sender: jhebus@googlemail.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355490405!27707095!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3638 invoked from network); 14 Dec 2012 13:06:46 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 13:06:46 -0000
Received: by mail-ob0-f173.google.com with SMTP id xn12so3295321obc.32
	for <multiple recipients>; Fri, 14 Dec 2012 05:06:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=yLrBLZ8qaFsNOKCOjusRuH+liFHlVOWc1ebN+gO/+oo=;
	b=LtphoWH0QmEI57IsEe13QoJksJXJ4J9E3lbs3HaONQlnfDbzLO4WTupVmOvr4u1EMn
	sJWZxn2wqGDuMrjGxoN2oGy0zQfKX5Q8NIMPXEuejN+/M0blm9P5fZiyZuQJEHS/hid3
	JoKRtf4radDufcfQvvSNnzt/f+YQEpBNaKGmVzeUlh6EZxM530GaoLRJ6nWPPZ4iFJMq
	2byDPfig0RBcTKpqmlQsbFB1FyJwvRAtK+KSNR3wg4U1AiZbZwl4Cz1jwChIJsMsVK1i
	H74AJUTibZA3mdz84Ftyqx2KYxqZm8v+8lIK2TIr0P2TOBSJcLTFdimrL/3SUmIJvM3j
	j9xw==
MIME-Version: 1.0
Received: by 10.60.30.42 with SMTP id p10mr4478435oeh.59.1355490405043; Fri,
	14 Dec 2012 05:06:45 -0800 (PST)
Received: by 10.76.21.196 with HTTP; Fri, 14 Dec 2012 05:06:44 -0800 (PST)
In-Reply-To: <1355412947.10554.147.camel@zakaz.uk.xensource.com>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
Date: Fri, 14 Dec 2012 13:06:44 +0000
Message-ID: <CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
From: Paul Harvey <jhebus@googlemail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5688692529563693049=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5688692529563693049==
Content-Type: multipart/alternative; boundary=e89a8fb206a816cdd004d0cfb44a

--e89a8fb206a816cdd004d0cfb44a
Content-Type: text/plain; charset=ISO-8859-1

So,

#with 341 domains
./lsevntchn 0 | wc -l
724

Attaching gdb to xenconsoled,

Program received signal SIGABRT, Aborted.
0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0  0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fe588cabb8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x00007fe588d7c807 in __fortify_fail () from
/lib/x86_64-linux-gnu/libc.so.6
#4  0x00007fe588d7b700 in __chk_fail () from /lib/x86_64-linux-gnu/libc.so.6
#5  0x00007fe588d7c7be in __fdelt_warn () from
/lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
#7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at
daemon/main.c:166

> Unfortunately strace doesn't give the sort of information needed to
> diagnose this. Can you run the daemon under gdb? When it crashes you can
> type "bt" to get a backtrace. If there are debuginfo packages available
> in your distro installing the ones for the Xen packages would improve
> the output of this too.


I don't really know how to enable the debugging info for these libraries. I
can't see anything on Google about debuginfo packages for Ubuntu 12.04.
Incidentally, I just grabbed the Xen version in their repo, following this:

https://help.ubuntu.com/community/Xen

I did grab a copy of the source of Xen 4.1.2 and compiled it with debug in
the tools, so that is why I can see proper output for the first two.
Paul

--e89a8fb206a816cdd004d0cfb44a--


--===============5688692529563693049==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5688692529563693049==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwto-0003sQ-Rc; Tue, 18 Dec 2012 13:07:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tkwtn-0003sL-4a
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:07:55 +0000
Received: from [193.109.254.147:52603] by server-11.bemta-14.messagelabs.com
	id 68/D1-02659-AAA60D05; Tue, 18 Dec 2012 13:07:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355836073!10886108!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14336 invoked from network); 18 Dec 2012 13:07:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 13:07:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 13:07:53 +0000
Message-Id: <50D078B702000078000B101F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 13:07:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
In-Reply-To: <50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On some machines, the location at 0x40e does not point to the beginning
>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>> area of the EBDA, while the option ROMs place their data below that
>> segment.
>> 
>> For this reason, 0x413 is actually a better source than 0x40e to get
>> the location of the real-mode trampoline.  But it is even better to
>> fetch the information from the multiboot structure, where the boot
>> loader has placed the data for us already.
> 
> I think if anything we really should make this a minimum calculation
> of all three (sanity checked) values, rather than throwing the other
> sources out. It's just not certain enough that we can trust all
> multiboot implementations.

I never saw a response from you on this one - were you
intending to follow up, or did you (silently) expect us to sort
this out?

Jan

> Of course, ideally we'd consult the memory map, but the E820 one
> is unavailable at that point (and getting at it would create a
> chicken-and-egg problem).
> 
> Jan
> 
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  xen/arch/x86/boot/head.S | 21 ++++++++++++---------
>>  1 file changed, 12 insertions(+), 9 deletions(-)
>> 
>> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
>> index 7efa155..1790462 100644
>> --- a/xen/arch/x86/boot/head.S
>> +++ b/xen/arch/x86/boot/head.S
>> @@ -78,16 +78,19 @@ __start:
>>          cmp     $0x2BADB002,%eax
>>          jne     not_multiboot
>>  
>> -        /* Set up trampoline segment 64k below EBDA */
>> -        movzwl  0x40e,%eax          /* EBDA segment */
>> -        cmp     $0xa000,%eax        /* sanity check (high) */
>> -        jae     0f
>> -        cmp     $0x4000,%eax        /* sanity check (low) */
>> -        jae     1f
>> -0:
>> -        movzwl  0x413,%eax          /* use base memory size on failure */
>> -        shl     $10-4,%eax
>> +        /* Set up trampoline segment just below end of base memory.
>> +         * Prefer to get this information from the multiboot
>> +         * structure, if available.
>> +         */
>> +        mov     4(%ebx),%eax        /* kb of low memory */
>> +        testb   $1,(%ebx)           /* test MBI_MEMLIMITS */
>> +        jnz     1f
>> +
>> +        movzwl  0x413,%eax          /* base memory size in kb */
>>  1:
>> +        shl     $10-4,%eax          /* convert to a segment number */
>> +
>> +        /* Reserve 64kb for the trampoline */
>>          sub     $0x1000,%eax
>>  
>>          /* From arch/x86/smpboot.c: start_eip had better be page-aligned! */
>> -- 
>> 1.8.0
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

#5  0x00007fe588d7c7be in __fdelt_warn () from
/lib/x86_64-linux-gnu/libc.so.6
#6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
#7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at
daemon/main.c:166

> Unfortunately strace doesn't give the sort of information needed to
> diagnose this. Can you run the daemon under gdb? When it crashes you can
> type "bt" to get a backtrace. If there are debuginfo packages available
> in your distro installing the ones for the Xen packages would improve
> the output of this too.


I don't really know how to enable the debugging info for these libraries. I
can't see anything on Google about debuginfo packages for Ubuntu 12.04.
Incidentally, I just grabbed the Xen version in their repo following this:

https://help.ubuntu.com/community/Xen

I did grab a copy of the source of Xen 4.1.2 and compiled it with debug
enabled in the tools, which is why I can see proper output for the first two

Paul

--e89a8fb206a816cdd004d0cfb44a--


--===============5688692529563693049==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5688692529563693049==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwto-0003sQ-Rc; Tue, 18 Dec 2012 13:07:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tkwtn-0003sL-4a
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:07:55 +0000
Received: from [193.109.254.147:52603] by server-11.bemta-14.messagelabs.com
	id 68/D1-02659-AAA60D05; Tue, 18 Dec 2012 13:07:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355836073!10886108!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14336 invoked from network); 18 Dec 2012 13:07:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 13:07:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 13:07:53 +0000
Message-Id: <50D078B702000078000B101F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 13:07:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
In-Reply-To: <50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On some machines, the location at 0x40e does not point to the beginning
>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>> area of the EBDA, while the option ROMs place their data below that
>> segment.
>> 
>> For this reason, 0x413 is actually a better source than 0x40e to get
>> the location of the real-mode trampoline.  But it is even better to
>> fetch the information from the multiboot structure, where the boot
>> loader has placed the data for us already.
> 
> I think if anything we really should make this a minimum calculation
> of all three (sanity checked) values, rather than throwing the other
> sources out. It's just not certain enough that we can trust all
> multiboot implementations.

I never saw a response from you on this one - were you
intending to follow up, or did you (silently) expect us to sort
this out?

Jan

> Of course, ideally we'd consult the memory map, but the E820 one
> is unavailable at that point (and getting at it would create a
> chicken-and-egg problem).
> 
> Jan
> 
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  xen/arch/x86/boot/head.S | 21 ++++++++++++---------
>>  1 file changed, 12 insertions(+), 9 deletions(-)
>> 
>> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
>> index 7efa155..1790462 100644
>> --- a/xen/arch/x86/boot/head.S
>> +++ b/xen/arch/x86/boot/head.S
>> @@ -78,16 +78,19 @@ __start:
>>          cmp     $0x2BADB002,%eax
>>          jne     not_multiboot
>>  
>> -        /* Set up trampoline segment 64k below EBDA */
>> -        movzwl  0x40e,%eax          /* EBDA segment */
>> -        cmp     $0xa000,%eax        /* sanity check (high) */
>> -        jae     0f
>> -        cmp     $0x4000,%eax        /* sanity check (low) */
>> -        jae     1f
>> -0:
>> -        movzwl  0x413,%eax          /* use base memory size on failure */
>> -        shl     $10-4,%eax
>> +        /* Set up trampoline segment just below end of base memory.
>> +         * Prefer to get this information from the multiboot
>> +         * structure, if available.
>> +         */
>> +        mov     4(%ebx),%eax        /* kb of low memory */
>> +        testb   $1,(%ebx)           /* test MBI_MEMLIMITS */
>> +        jnz     1f
>> +
>> +        movzwl  0x413,%eax          /* base memory size in kb */
>>  1:
>> +        shl     $10-4,%eax          /* convert to a segment number */
>> +
>> +        /* Reserve 64kb for the trampoline */
>>          sub     $0x1000,%eax
>>  
>>          /* From arch/x86/smpboot.c: start_eip had better be page-aligned! */
>> -- 
>> 1.8.0
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtr-0003so-4Z; Tue, 18 Dec 2012 13:07:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhebus@googlemail.com>)
	id 1TjY0o-000556-6f; Fri, 14 Dec 2012 16:21:23 +0000
Received: from [85.158.143.35:4033] by server-2.bemta-4.messagelabs.com id
	AB/50-30861-1025BC05; Fri, 14 Dec 2012 16:21:21 +0000
X-Env-Sender: jhebus@googlemail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355502054!5499347!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20919 invoked from network); 14 Dec 2012 16:20:56 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 16:20:56 -0000
Received: by mail-ob0-f173.google.com with SMTP id xn12so3537464obc.32
	for <multiple recipients>; Fri, 14 Dec 2012 08:20:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=LZTeaXhCUQpoXi9JhQP+zu44lb65jSqDMJazWvPmkzk=;
	b=qNeuy1ScFsopNXFlScEI9Hc9zjWr696nuBitlTVH87TZXaltrKrZLjZmt4PVxlWM9A
	h4oCQEPjrfQfRcdy+3s4qXqZyev9N3wNEEIqs7HWVoZzFvAnErZBXMK6KTabRfFhjCeq
	ccanzM1NVzWZsiFQRRKg9zPMyzuvLPGQft82/VfhRTjVibFj1CsVQs2Bg0ncKyIe6wy8
	kwS1awQl9LOgHH+jXhQcQegu94sZrLmArZgEQWpbpv8N5f61KZMePHiJH4QTWF+A+C/x
	KTCAZZILRImTVCVn24HJZetd0MThHfZGiMP0k496yynOQfBzOwXVBO7gQVSES/PeQYC1
	Br1A==
MIME-Version: 1.0
Received: by 10.60.11.130 with SMTP id q2mr4918360oeb.141.1355502054511; Fri,
	14 Dec 2012 08:20:54 -0800 (PST)
Received: by 10.76.21.196 with HTTP; Fri, 14 Dec 2012 08:20:53 -0800 (PST)
In-Reply-To: <1355497058.8376.63.camel@iceland>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
	<CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
	<1355497058.8376.63.camel@iceland>
Date: Fri, 14 Dec 2012 16:20:53 +0000
Message-ID: <CABR7Q=oLJg8EJHcask822MbYRh6Mb7QGgd_x_MRRriqMT8H_0Q@mail.gmail.com>
From: Paul Harvey <jhebus@googlemail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Content-Type: multipart/mixed; boundary=e89a8fb203a073963004d0d26a71
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--e89a8fb203a073963004d0d26a71
Content-Type: multipart/alternative; boundary=e89a8fb203a073962404d0d26a6f

--e89a8fb203a073962404d0d26a6f
Content-Type: text/plain; charset=ISO-8859-1

On 14 December 2012 14:57, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Fri, 2012-12-14 at 13:06 +0000, Paul Harvey wrote:
> > SO
> >
> > #with 341 domains
> > ./lsevntchn 0 | wc -l
> > 724
> >
> > Attaching gdb to xenconsoled,
> >
> > Program received signal SIGABRT, Aborted.
> > 0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> > (gdb) bt
> > #0  0x00007fe588ca8425 in raise ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #1  0x00007fe588cabb8b in abort ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> > #3  0x00007fe588d7c807 in __fortify_fail ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #4  0x00007fe588d7b700 in __chk_fail ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #5  0x00007fe588d7c7be in __fdelt_warn ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
> > #7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at
> > daemon/main.c:166
> >
>
> libc raises an exception when it detects a memory violation.
>
> You can probably try using valgrind to identify a memory leak in
> xenconsoled.
>
>
> Wei.
>
>
Feeling a little in over my head now.

I have run valgrind and attached the file with the output. As before,
xenconsoled crashes, but I am not really sure how to read what I am seeing
from valgrind: is it telling me that these errors happen as it goes along,
or are the lost blocks just a result of the crash?

Valgrind was run with:

valgrind --tool=memcheck --leak-check=yes --show-reachable=yes
--num-callers=20 --log-file="valgrind_output.txt" --track-fds=yes
./xenconsoled --pid-file=/var/run/xenconsoled.pid

If the attached file doesn't come through, could you tell me where it should go?

Paul

--e89a8fb203a073962404d0d26a6f--
--e89a8fb203a073963004d0d26a71
Content-Type: text/plain; charset=US-ASCII; name="valgrind_output.txt"
Content-Disposition: attachment; filename="valgrind_output.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hapitlp20

PT03MDAxPT0gTWVtY2hlY2ssIGEgbWVtb3J5IGVycm9yIGRldGVjdG9yCj09NzAwMT09IENvcHly
aWdodCAoQykgMjAwMi0yMDExLCBhbmQgR05VIEdQTCdkLCBieSBKdWxpYW4gU2V3YXJkIGV0IGFs
Lgo9PTcwMDE9PSBVc2luZyBWYWxncmluZC0zLjcuMCBhbmQgTGliVkVYOyByZXJ1biB3aXRoIC1o
IGZvciBjb3B5cmlnaHQgaW5mbwo9PTcwMDE9PSBDb21tYW5kOiAuL3hlbmNvbnNvbGVkIC0tcGlk
LWZpbGU9L3Zhci9ydW4veGVuY29uc29sZWQucGlkCj09NzAwMT09IFBhcmVudCBQSUQ6IDQ3NDUK
PT03MDAxPT0gCj09NzAwMT09IAo9PTcwMDE9PSBGSUxFIERFU0NSSVBUT1JTOiA0IG9wZW4gYXQg
ZXhpdC4KPT03MDAxPT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMzogL2hvbWUvcGF1bGgvRGVza3Rv
cC9kb3dubG9hZGVkL3hlbi00LjEuMi90b29scy9jb25zb2xlL3ZhbGdyaW5kX291dHB1dC50eHQK
PT03MDAxPT0gICAgPGluaGVyaXRlZCBmcm9tIHBhcmVudD4KPT03MDAxPT0gCj09NzAwMT09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDI6IC9kZXYvcHRzLzIKPT03MDAxPT0gICAgPGluaGVyaXRlZCBm
cm9tIHBhcmVudD4KPT03MDAxPT0gCj09NzAwMT09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE6Cj09
NzAwMT09ICAgIDxpbmhlcml0ZWQgZnJvbSBwYXJlbnQ+Cj09NzAwMT09IAo9PTcwMDE9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAwOiAvZGV2L3B0cy8yCj09NzAwMT09ICAgIDxpbmhlcml0ZWQgZnJv
bSBwYXJlbnQ+Cj09NzAwMT09IAo9PTcwMDE9PSAKPT03MDAxPT0gSEVBUCBTVU1NQVJZOgo9PTcw
MDE9PSAgICAgaW4gdXNlIGF0IGV4aXQ6IDQ2IGJ5dGVzIGluIDIgYmxvY2tzCj09NzAwMT09ICAg
dG90YWwgaGVhcCB1c2FnZTogMiBhbGxvY3MsIDAgZnJlZXMsIDQ2IGJ5dGVzIGFsbG9jYXRlZAo9
PTcwMDE9PSAKPT03MDAxPT0gMjEgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIHN0aWxsIHJlYWNoYWJs
ZSBpbiBsb3NzIHJlY29yZCAxIG9mIDIKPT03MDAxPT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2Mg
KGluIC91c3IvbGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykK
PT03MDAxPT0gICAgYnkgMHg0MDIyMjA6IG1haW4gKG1haW4uYzoxNDQpCj09NzAwMT09IAo9PTcw
MDE9PSAyNSBieXRlcyBpbiAxIGJsb2NrcyBhcmUgc3RpbGwgcmVhY2hhYmxlIGluIGxvc3MgcmVj
b3JkIDIgb2YgMgo9PTcwMDE9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIv
dmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDE9PSAgICBi
eSAweDU4RjZENzE6IHN0cmR1cCAoc3RyZHVwLmM6NDMpCj09NzAwMT09ICAgIGJ5IDB4NDAxRjlC
OiBtYWluIChtYWluLmM6MTEzKQo9PTcwMDE9PSAKPT03MDAxPT0gTEVBSyBTVU1NQVJZOgo9PTcw
MDE9PSAgICBkZWZpbml0ZWx5IGxvc3Q6IDAgYnl0ZXMgaW4gMCBibG9ja3MKPT03MDAxPT0gICAg
aW5kaXJlY3RseSBsb3N0OiAwIGJ5dGVzIGluIDAgYmxvY2tzCj09NzAwMT09ICAgICAgcG9zc2li
bHkgbG9zdDogMCBieXRlcyBpbiAwIGJsb2Nrcwo9PTcwMDE9PSAgICBzdGlsbCByZWFjaGFibGU6
IDQ2IGJ5dGVzIGluIDIgYmxvY2tzCj09NzAwMT09ICAgICAgICAgc3VwcHJlc3NlZDogMCBieXRl
cyBpbiAwIGJsb2Nrcwo9PTcwMDE9PSAKPT03MDAxPT0gRm9yIGNvdW50cyBvZiBkZXRlY3RlZCBh
bmQgc3VwcHJlc3NlZCBlcnJvcnMsIHJlcnVuIHdpdGg6IC12Cj09NzAwMT09IEVSUk9SIFNVTU1B
Ulk6IDAgZXJyb3JzIGZyb20gMCBjb250ZXh0cyAoc3VwcHJlc3NlZDogMiBmcm9tIDIpCj09NzAw
NT09IAo9PTcwMDU9PSBGSUxFIERFU0NSSVBUT1JTOiA0IG9wZW4gYXQgZXhpdC4KPT03MDA1PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMzogL2hvbWUvcGF1bGgvRGVza3RvcC9kb3dubG9hZGVkL3hl
bi00LjEuMi90b29scy9jb25zb2xlL3ZhbGdyaW5kX291dHB1dC50eHQKPT03MDA1PT0gICAgPGlu
aGVyaXRlZCBmcm9tIHBhcmVudD4KPT03MDA1PT0gCj09NzAwNT09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDI6IC9kZXYvcHRzLzIKPT03MDA1PT0gICAgPGluaGVyaXRlZCBmcm9tIHBhcmVudD4KPT03
MDA1PT0gCj09NzAwNT09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE6Cj09NzAwNT09ICAgIDxpbmhl
cml0ZWQgZnJvbSBwYXJlbnQ+Cj09NzAwNT09IAo9PTcwMDU9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAwOiAvZGV2L3B0cy8yCj09NzAwNT09ICAgIDxpbmhlcml0ZWQgZnJvbSBwYXJlbnQ+Cj09NzAw
NT09IAo9PTcwMDU9PSAKPT03MDA1PT0gSEVBUCBTVU1NQVJZOgo9PTcwMDU9PSAgICAgaW4gdXNl
IGF0IGV4aXQ6IDQ2IGJ5dGVzIGluIDIgYmxvY2tzCj09NzAwNT09ICAgdG90YWwgaGVhcCB1c2Fn
ZTogMiBhbGxvY3MsIDAgZnJlZXMsIDQ2IGJ5dGVzIGFsbG9jYXRlZAo9PTcwMDU9PSAKPT03MDA1
PT0gMjEgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29y
ZCAxIG9mIDIKPT03MDA1PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3Zh
bGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA1PT0gICAgYnkg
MHg0MDIyMjA6IG1haW4gKG1haW4uYzoxNDQpCj09NzAwNT09IAo9PTcwMDU9PSAyNSBieXRlcyBp
biAxIGJsb2NrcyBhcmUgc3RpbGwgcmVhY2hhYmxlIGluIGxvc3MgcmVjb3JkIDIgb2YgMgo9PTcw
MDU9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVs
b2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDU9PSAgICBieSAweDU4RjZENzE6IHN0
cmR1cCAoc3RyZHVwLmM6NDMpCj09NzAwNT09ICAgIGJ5IDB4NDAxRjlCOiBtYWluIChtYWluLmM6
MTEzKQo9PTcwMDU9PSAKPT03MDA1PT0gTEVBSyBTVU1NQVJZOgo9PTcwMDU9PSAgICBkZWZpbml0
ZWx5IGxvc3Q6IDAgYnl0ZXMgaW4gMCBibG9ja3MKPT03MDA1PT0gICAgaW5kaXJlY3RseSBsb3N0
OiAwIGJ5dGVzIGluIDAgYmxvY2tzCj09NzAwNT09ICAgICAgcG9zc2libHkgbG9zdDogMCBieXRl
cyBpbiAwIGJsb2Nrcwo9PTcwMDU9PSAgICBzdGlsbCByZWFjaGFibGU6IDQ2IGJ5dGVzIGluIDIg
YmxvY2tzCj09NzAwNT09ICAgICAgICAgc3VwcHJlc3NlZDogMCBieXRlcyBpbiAwIGJsb2Nrcwo9
PTcwMDU9PSAKPT03MDA1PT0gRm9yIGNvdW50cyBvZiBkZXRlY3RlZCBhbmQgc3VwcHJlc3NlZCBl
cnJvcnMsIHJlcnVuIHdpdGg6IC12Cj09NzAwNT09IEVSUk9SIFNVTU1BUlk6IDAgZXJyb3JzIGZy
b20gMCBjb250ZXh0cyAoc3VwcHJlc3NlZDogMiBmcm9tIDIpCj09NzAwNj09IFdhcm5pbmc6IG5v
dGVkIGJ1dCB1bmhhbmRsZWQgaW9jdGwgMHgzMDUwMDAgd2l0aCBubyBzaXplL2RpcmVjdGlvbiBo
aW50cwo9PTcwMDY9PSAgICBUaGlzIGNvdWxkIGNhdXNlIHNwdXJpb3VzIHZhbHVlIGVycm9ycyB0
byBhcHBlYXIuCj09NzAwNj09ICAgIFNlZSBSRUFETUVfTUlTU0lOR19TWVNDQUxMX09SX0lPQ1RM
IGZvciBndWlkYW5jZSBvbiB3cml0aW5nIGEgcHJvcGVyIHdyYXBwZXIuCj09NzAwNj09IFdhcm5p
bmc6IG5vdGVkIGJ1dCB1bmhhbmRsZWQgaW9jdGwgMHgzMDUwMDAgd2l0aCBubyBzaXplL2RpcmVj
dGlvbiBoaW50cwo9PTcwMDY9PSAgICBUaGlzIGNvdWxkIGNhdXNlIHNwdXJpb3VzIHZhbHVlIGVy
cm9ycyB0byBhcHBlYXIuCj09NzAwNj09ICAgIFNlZSBSRUFETUVfTUlTU0lOR19TWVNDQUxMX09S
X0lPQ1RMIGZvciBndWlkYW5jZSBvbiB3cml0aW5nIGEgcHJvcGVyIHdyYXBwZXIuCj09NzAwNj09
IFdhcm5pbmc6IG5vdGVkIGJ1dCB1bmhhbmRsZWQgaW9jdGwgMHgzMDUwMDAgd2l0aCBubyBzaXpl
L2RpcmVjdGlvbiBoaW50cwo9PTcwMDY9PSAgICBUaGlzIGNvdWxkIGNhdXNlIHNwdXJpb3VzIHZh
bHVlIGVycm9ycyB0byBhcHBlYXIuCj09NzAwNj09ICAgIFNlZSBSRUFETUVfTUlTU0lOR19TWVND
QUxMX09SX0lPQ1RMIGZvciBndWlkYW5jZSBvbiB3cml0aW5nIGEgcHJvcGVyIHdyYXBwZXIuCj09
NzAwNj09IENvbmRpdGlvbmFsIGp1bXAgb3IgbW92ZSBkZXBlbmRzIG9uIHVuaW5pdGlhbGlzZWQg
dmFsdWUocykKPT03MDA2PT0gICAgYXQgMHg0RTNCQzU1OiB4Y19kb21haW5fZ2V0aW5mbyAoeGNf
ZG9tYWluLmM6MjI5KQo9PTcwMDY9PSAgICBieSAweDQwMzRFNTogZW51bV9kb21haW5zIChpby5j
OjczNikKPT03MDA2PT0gICAgYnkgMHg0MDQyQTg6IGhhbmRsZV9pbyAoaW8uYzo4NjcpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
Q29uZGl0aW9uYWwganVtcCBvciBtb3ZlIGRlcGVuZHMgb24gdW5pbml0aWFsaXNlZCB2YWx1ZShz
KQo9PTcwMDY9PSAgICBhdCAweDQwMzUxQTogZW51bV9kb21haW5zIChpby5jOjczOCkKPT03MDA2
PT0gICAgYnkgMHg0MDQyQTg6IGhhbmRsZV9pbyAoaW8uYzo4NjcpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
--7006-- WARNING: Serious error when reading debug info
--7006-- When reading debug info from /proc/xen/privcmd:
--7006-- can't read file to inspect ELF header
[the three Valgrind lines above repeat verbatim for the remainder of this section of the log]
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcg
ZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hl
bi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVy
Ci0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8K
LS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoK
LS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBX
QVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNh
bid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcg
ZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hl
bi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVy
Ci0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8K
LS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoK
LS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBX
QVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNh
bid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKPT03MDA2PT0gCj09NzAwNj09IEZJTEUgREVTQ1JJUFRPUlM6IDEwMjYgb3Bl
biBhdCBleGl0Lgo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDI1OiAvZGV2L3B0cy8z
NDUKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikK
PT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9
PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAg
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTAyNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDEwMjM6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNo
bl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZh
Y2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJB
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDIyOiAvZGV2
L3B0cy8zNDQKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9
PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcw
MDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMTAyMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDEwMjA6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4
X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19p
bnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDE5
OiAvZGV2L3B0cy8zNDMKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6
MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5
KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMTAxODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDEwMTc6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6
IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4
OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAxMDE2OiAvZGV2L3B0cy8zNDIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVu
cHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAxNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDEwMTQ6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRF
NEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0
RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAxMDEzOiAvZGV2L3B0cy8zNDEKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5
IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAxMjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMTE6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAg
YnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAxMDEwOiAvZGV2L3B0cy8zNDAKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBv
cGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAwOTogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMDg6IC9kZXYveGVuL2V2dGNobgo9PTcw
MDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9
PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2
PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUu
YzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAxMDA3OiAvZGV2L3B0cy8zMzkKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0
NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9t
YWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWlu
X2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9p
byAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9
PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAwNjogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMDU6IC9kZXYveGVuL2V2dGNo
bgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkK
PT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3By
aXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDA0OiAvZGV2L3B0cy8zMzgKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFE
MDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0Njog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAwMzogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMDI6IC9kZXYveGVu
L2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIu
aDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24g
KHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDAxOiAvZGV2L3B0cy8zMzcKPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAw
eDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQw
MzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQz
RTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAwMDog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5OTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5ODogL2Rldi9wdHMvMzM2Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5
NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5Njog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5NTogL2Rldi9wdHMvMzM1
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDk5NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5
MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5MjogL2Rldi9wdHMv
MzM0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDk5MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDk5MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk4OTogL2Rldi9w
dHMvMzMzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDk4ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDk4NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk4NjogL2Rl
di9wdHMvMzMyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDk4NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDk4NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk4Mzog
L2Rldi9wdHMvMzMxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDk4MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDk4MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk4
MDogL2Rldi9wdHMvMzMwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDk3OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDk3ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
==7006== Open file descriptor 977: /dev/pts/329
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 976: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 975: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
[The same three stanzas repeat with identical stack traces for file
descriptors 974 down to 912: each /dev/pts/N descriptor (N running from
328 down to 308) is followed by a /dev/ptmx and a /dev/xen/evtchn
descriptor, all opened via domain_create_tty/domain_create_ring. This
chunk of the attachment breaks off partway through the stanza for
descriptor 911 (/dev/pts/307); the listing continues in the remainder
of the attachment.]
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkxMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDkwODogL2Rldi9wdHMvMzA2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwNzogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDkwNTogL2Rldi9wdHMvMzA1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwNDogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDkwMjogL2Rldi9wdHMvMzA0Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwMTogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwMDogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5OTogL2Rldi9wdHMvMzAzCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5ODogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NzogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NjogL2Rldi9wdHMvMzAyCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NTogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NDogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MzogL2Rldi9wdHMvMzAxCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MjogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MTogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MDogL2Rldi9wdHMvMzAwCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4OTogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4ODogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4NzogL2Rldi9wdHMvMjk5Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4Njog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4NTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4NDogL2Rldi9wdHMvMjk4Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4
MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4Mjog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4MTogL2Rldi9wdHMvMjk3
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDg4MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3
OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3ODogL2Rldi9wdHMv
Mjk2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDg3NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDg3NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3NTogL2Rldi9w
dHMvMjk1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDg3NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDg3MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3MjogL2Rl
di9wdHMvMjk0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDg3MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDg3MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg2OTog
L2Rldi9wdHMvMjkzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDg2ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDg2NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg2
NjogL2Rldi9wdHMvMjkyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDg2NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDg2NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDg2MzogL2Rldi9wdHMvMjkxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDg2MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDg2MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDg2MDogL2Rldi9wdHMvMjkwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDg1ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDg1NzogL2Rldi9wdHMvMjg5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDg1NDogL2Rldi9wdHMvMjg4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDg1MTogL2Rldi9wdHMvMjg3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1MDogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDg0ODogL2Rldi9wdHMvMjg2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0NzogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDg0NTogL2Rldi9wdHMvMjg1Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0NDogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MzogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MjogL2Rldi9wdHMvMjg0Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MTogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MDogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzOTogL2Rldi9wdHMvMjgzCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzODogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNzogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNjogL2Rldi9wdHMvMjgyCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNTogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNDogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzMzogL2Rldi9wdHMvMjgxCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzMjogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzMTogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzMDogL2Rldi9wdHMvMjgwCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgyOTog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgyODogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgyNzogL2Rldi9wdHMvMjc5Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgy
NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgyNTog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgyNDogL2Rldi9wdHMvMjc4
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDgyMzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgy
MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgyMTogL2Rldi9wdHMv
Mjc3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDgyMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDgxOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgxODogL2Rldi9w
dHMvMjc2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDgxNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDgxNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgxNTogL2Rl
di9wdHMvMjc1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDgxNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDgxMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgxMjog
L2Rldi9wdHMvMjc0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDgxMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDgxMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgw
OTogL2Rldi9wdHMvMjczCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDgwODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDgwNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDgwNjogL2Rldi9wdHMvMjcyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDgwNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDgwNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDgwMzogL2Rldi9wdHMvMjcxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgwMjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDgwMTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDgwMDogL2Rldi9wdHMvMjcwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc5NzogL2Rldi9wdHMvMjY5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDc5NDogL2Rldi9wdHMvMjY4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5MzogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDc5MTogL2Rldi9wdHMvMjY3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5MDogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDc4ODogL2Rldi9wdHMvMjY2Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4NzogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4NjogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4NTogL2Rldi9wdHMvMjY1Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4NDogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4MzogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4MjogL2Rldi9wdHMvMjY0Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4MTogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc4MDogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3OTogL2Rldi9wdHMvMjYzCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3ODogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3NzogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3NjogL2Rldi9wdHMvMjYyCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3NTogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3NDogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3MzogL2Rldi9wdHMvMjYxCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3Mjog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3MTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc3MDogL2Rldi9wdHMvMjYwCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2
OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2ODog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2NzogL2Rldi9wdHMvMjU5
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDc2NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2
NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2NDogL2Rldi9wdHMv
MjU4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDc2MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDc2MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2MTogL2Rldi9w
dHMvMjU3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDc2MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDc1OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc1ODogL2Rl
di9wdHMvMjU2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc1NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDc1NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc1NTog
L2Rldi9wdHMvMjU1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDc1NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc1MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc1
MjogL2Rldi9wdHMvMjU0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDc1MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDc1MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDc0OTogL2Rldi9wdHMvMjUzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDc0ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDc0NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDc0NjogL2Rldi9wdHMvMjUyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc0NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDc0NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDc0MzogL2Rldi9wdHMvMjUxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc0MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc0MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc0MDogL2Rldi9wdHMvMjUwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
==7006== Open file descriptor 739: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 738: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 737: /dev/pts/249
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
[the same three records repeat for each remaining descriptor triple, 736 down to 674: /dev/ptmx via posix_openpt (getpt.c:47), /dev/xen/evtchn via linux_evtchn_open (fcntl2.h:54), and /dev/pts/248 through /dev/pts/228 via openpty (openpty.c:111), all with the identical call stacks shown above]
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MzogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MjogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MTogL2Rldi9wdHMvMjI3Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MDogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2OTogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2ODogL2Rldi9wdHMvMjI2Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NzogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NjogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NTogL2Rldi9wdHMvMjI1Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NDogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MzogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MjogL2Rldi9wdHMvMjI0Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MTogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MDogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1OTogL2Rldi9wdHMvMjIzCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1ODog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1NzogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1NjogL2Rldi9wdHMvMjIyCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1
NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1NDog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1MzogL2Rldi9wdHMvMjIx
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDY1MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1
MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1MDogL2Rldi9wdHMv
MjIwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDY0OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDY0ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY0NzogL2Rldi9w
dHMvMjE5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY0NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDY0NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY0NDogL2Rl
di9wdHMvMjE4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDY0MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY0MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY0MTog
L2Rldi9wdHMvMjE3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDY0MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDYzOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYz
ODogL2Rldi9wdHMvMjE2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDYzNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDYzNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDYzNTogL2Rldi9wdHMvMjE1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDYzNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDYzMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDYzMjogL2Rldi9wdHMvMjE0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYzMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDYzMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDYyOTogL2Rldi9wdHMvMjEzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDYyNjogL2Rldi9wdHMvMjEyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDYyMzogL2Rldi9wdHMvMjExCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyMjogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
g (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 621: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 620: /dev/pts/210
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 619: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
[... the same three stack traces repeat verbatim for descriptors 618 down to 556, cycling through /dev/xen/evtchn, /dev/pts/209 down to /dev/pts/189, and /dev/ptmx ...]
==7006==
==7006== Open file descriptor 555: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU1NDogL2Rldi9wdHMvMTg4Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU1MzogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU1MjogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU1MTogL2Rldi9wdHMvMTg3Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU1MDogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0OTogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0ODogL2Rldi9wdHMvMTg2Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0NzogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0NjogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0NTogL2Rldi9wdHMvMTg1Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0NDog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0MzogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0MjogL2Rldi9wdHMvMTg0Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0
MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU0MDog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUzOTogL2Rldi9wdHMvMTgz
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDUzODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUz
NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUzNjogL2Rldi9wdHMv
MTgyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDUzNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDUzNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUzMzogL2Rldi9w
dHMvMTgxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDUzMjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDUzMTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUzMDogL2Rl
di9wdHMvMTgwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDUyOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDUyODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUyNzog
L2Rldi9wdHMvMTc5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDUyNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDUyNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUy
NDogL2Rldi9wdHMvMTc4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDUyMzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDUyMjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDUyMTogL2Rldi9wdHMvMTc3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDUyMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDUxOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDUxODogL2Rldi9wdHMvMTc2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDUxNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDUxNTogL2Rldi9wdHMvMTc1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDUxMjogL2Rldi9wdHMvMTc0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDUwOTogL2Rldi9wdHMvMTczCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwODogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDUwNjogL2Rldi9wdHMvMTcyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwNTogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDUwMzogL2Rldi9wdHMvMTcxCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 502: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 501: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 500: /dev/pts/170
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 499: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 498: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 497: /dev/pts/169
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 496: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 495: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 494: /dev/pts/168
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 493: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 492: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 491: /dev/pts/167
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 490: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 489: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 488: /dev/pts/166
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 487: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 486: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 485: /dev/pts/165
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 484: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 483: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 482: /dev/pts/164
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 481: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 480: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 479: /dev/pts/163
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 478: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 477: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 476: /dev/pts/162
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 475: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 474: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 473: /dev/pts/161
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 472: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 471: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 470: /dev/pts/160
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 469: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 468: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 467: /dev/pts/159
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 466: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 465: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 464: /dev/pts/158
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 463: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 462: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 461: /dev/pts/157
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 460: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 459: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 458: /dev/pts/156
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 457: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 456: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 455: /dev/pts/155
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 454: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 453: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 452: /dev/pts/154
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 451: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 450: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 449: /dev/pts/153
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 448: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 447: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 446: /dev/pts/152
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 445: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 444: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 443: /dev/pts/151
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 442: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 441: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 440: /dev/pts/150
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 439: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 438: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 437: /dev/pts/149
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 436: /dev/p
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzNTogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzNDogL2Rldi9wdHMvMTQ4Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMzogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMjogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMTogL2Rldi9wdHMvMTQ3Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMDog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyOTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyODogL2Rldi9wdHMvMTQ2Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQy
NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyNjog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyNTogL2Rldi9wdHMvMTQ1
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQyNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQy
MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyMjogL2Rldi9wdHMv
MTQ0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQyMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQyMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQxOTogL2Rldi9w
dHMvMTQzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQxODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQxNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQxNjogL2Rl
di9wdHMvMTQyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDQxNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQxNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQxMzog
L2Rldi9wdHMvMTQxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDQxMjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDQxMTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQx
MDogL2Rldi9wdHMvMTQwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDQwOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDQwODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQwNzogL2Rldi9wdHMvMTM5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDQwNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDQwNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQwNDogL2Rldi9wdHMvMTM4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQwMzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDQwMjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQwMTogL2Rldi9wdHMvMTM3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQwMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDM5ODogL2Rldi9wdHMvMTM2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDM5NTogL2Rldi9wdHMvMTM1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5NDogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDM5MjogL2Rldi9wdHMvMTM0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5MTogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM5MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDM4OTogL2Rldi9wdHMvMTMzCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM4ODogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM4NzogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM4NjogL2Rldi9wdHMvMTMyCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM4NTogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
(io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 384: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 383: /dev/pts/131
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 382: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 381: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 380: /dev/pts/130
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 379: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 378: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 377: /dev/pts/129
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 376: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 375: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 374: /dev/pts/128
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 373: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 372: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 371: /dev/pts/127
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 370: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 369: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 368: /dev/pts/126
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 367: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 366: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 365: /dev/pts/125
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 364: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 363: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 362: /dev/pts/124
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 361: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 360: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 359: /dev/pts/123
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 358: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 357: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 356: /dev/pts/122
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 355: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 354: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 353: /dev/pts/121
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 352: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 351: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 350: /dev/pts/120
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 349: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 348: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 347: /dev/pts/119
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 346: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 345: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 344: /dev/pts/118
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 343: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 342: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 341: /dev/pts/117
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 340: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 339: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 338: /dev/pts/116
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 337: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 336: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 335: /dev/pts/115
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 334: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 333: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 332: /dev/pts/114
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 331: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 330: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 329: /dev/pts/113
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 328: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 327: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 326: /dev/pts/112
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 325: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 324: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 323: /dev/pts/111
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 322: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 321: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 320: /dev/pts/110
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 319: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 318: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_commo
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNzogL2Rldi9wdHMvMTA5Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNjog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNDogL2Rldi9wdHMvMTA4Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMx
MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxMjog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxMTogL2Rldi9wdHMvMTA3
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDMxMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMw
OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMwODogL2Rldi9wdHMv
MTA2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDMwNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDMwNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMwNTogL2Rldi9w
dHMvMTA1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDMwNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDMwMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMwMjogL2Rl
di9wdHMvMTA0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDMwMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDMwMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI5OTog
L2Rldi9wdHMvMTAzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDI5ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDI5NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI5
NjogL2Rldi9wdHMvMTAyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDI5NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDI5NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDI5MzogL2Rldi9wdHMvMTAxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDI5MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDI5MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDI5MDogL2Rldi9wdHMvMTAwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI4OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDI4ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDI4NzogL2Rldi9wdHMvOTkKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChv
cGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjg2OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlC
ODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDog
Z2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3Bl
bnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMjg1OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0
RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4
NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcw
MDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMjg0OiAvZGV2L3B0cy85OAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkg
KG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyODM6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5
OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVE
OiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChv
cGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyODI6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkg
MHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAyODE6IC9kZXYvcHRzLzk3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI4MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI3OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDI3ODogL2Rldi9wdHMvOTYKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVu
cHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjc3OiAvZGV2L3B0bXgKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0
eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjc2OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAg
IGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2
KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMjc1OiAvZGV2L3B0cy85NQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9w
ZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNzQ6IC9kZXYvcHRteAo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1
OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVu
cHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNzM6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0g
ICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzox
NjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAyNzI6IC9kZXYvcHRzLzk0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI3MTogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI3MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDI2OTogL2Rldi9wdHMvOTMKPT03MDA2PT0gICAgYXQgMHg1OTU0
NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVD
OiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWlu
X2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2Ny
ZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAo
aW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcw
MDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjY4OiAvZGV2L3B0bXgKPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5
IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTog
b3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjY3OiAvZGV2L3hlbi9ldnRjaG4KPT03
MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2
PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRl
LmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMjY2OiAvZGV2L3B0cy85Mgo9PTcwMDY9PSAgICBhdCAweDU5
NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1
NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21h
aW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5f
Y3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lv
IChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09
NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNjU6IC9kZXYvcHRteAo9PTcw
MDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9
PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAg
YnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZB
OiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5f
Y3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3Jl
YXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChp
by5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAw
Nj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNjQ6IC9kZXYveGVuL2V2dGNobgo9
PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcw
MDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03
MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZh
dGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNjM6IC9kZXYvcHRzLzkxCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI2MjogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI2MTogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI2MDogL2Rldi9wdHMvOTAKPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1
NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDog
ZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9t
YWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRs
ZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2
KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjU5OiAvZGV2L3B0bXgK
PT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03
MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09
ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9t
YWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWlu
X2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9p
byAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9
PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjU4OiAvZGV2L3hlbi9ldnRj
aG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikK
PT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19w
cml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjU3OiAvZGV2L3B0cy84OQo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQw
OiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBk
b21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFu
ZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNTY6IC9kZXYvcHRt
eAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2
PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1
NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBk
b21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNTU6IC9kZXYveGVuL2V2
dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgy
KQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1
NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhj
X3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNTQ6IC9kZXYvcHRzLzg4Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI1MzogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI1MjogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI1MTogL2Rldi9wdHMvODcKPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQw
MkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0
NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6
IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWlu
LmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjUwOiAvZGV2
L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4
MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09
NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFE
MDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0Njog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjQ5OiAvZGV2L3hl
bi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwy
Lmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9u
ICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjQ4OiAvZGV2L3B0cy84Ngo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4
NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAz
MzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNF
NDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1h
aW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNDc6IC9k
ZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykK
PT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAg
YnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAy
QUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNDY6IC9kZXYv
eGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0
ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250
bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21t
b24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3Jl
YXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChp
by5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAw
Nj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyNDU6IC9kZXYvcHRzLzg1Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI0NDog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI0MzogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDI0MjogL2Rldi9wdHMvODQKPT03
MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2
PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBi
eSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAw
eDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjQx
OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxh
dGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6
NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAw
eDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQw
MzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQz
RTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjQwOiAv
ZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAo
ZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5f
Y29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWlu
X2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9p
byAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9
PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjM5OiAvZGV2L3B0cy84Mwo9
PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAg
IGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5
IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAw
eDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1h
aW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAy
Mzg6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1w
bGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQu
Yzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2
PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5
IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMzc6
IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10
ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVu
IChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Bl
bl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMzY6IC9kZXYvcHRzLzgy
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDIzNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDIz
NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDIzMzogL2Rldi9wdHMv
ODEKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikK
PT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9
PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAg
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMjMyOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdl
dHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAg
ICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBi
eSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MjMxOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5f
b3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNl
X29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjMwOiAvZGV2L3B0
cy84MAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgy
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09
ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAg
ICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIx
QzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3Jp
cHRvciAyMjk6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAo
Z2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkK
PT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09
ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAyMjg6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNo
bl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZh
Y2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJB
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMjc6IC9kZXYv
cHRzLzc5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDIyNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDIyNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDIyNDogL2Rl
di9wdHMvNzgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9
PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcw
MDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMjIzOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVu
cHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6
OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcw
MDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9
PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMjIyOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9l
dnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50
ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQw
MzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQz
RTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjIxOiAv
ZGV2L3B0cy83Nwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0
ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAyMjA6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29w
ZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQu
Yzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09
NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAyMTk6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4
X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19p
bnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMTg6
IC9kZXYvcHRzLzc2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDIxNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDIxNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDIx
NTogL2Rldi9wdHMvNzUKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6
MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5
KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMjE0OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3Np
eF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdl
dHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4
KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9
PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMjEzOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBs
aW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODog
eGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBi
eSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MjEyOiAvZGV2L3B0cy83NAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10
ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHku
YzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0
MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAyMTE6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBv
c2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAo
Z2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6
OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAyMTA6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6
IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4
OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAyMDk6IC9kZXYvcHRzLzczCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDIwODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDIwNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDIwNjogL2Rldi9wdHMvNzIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVu
cHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMjA1OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0
NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYy
OiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0
cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0
eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6
NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3
NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgMjA0OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRE
NEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0
NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9
PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMjAzOiAvZGV2L3B0cy83MQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9w
ZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMDI6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5
NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4
NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBn
ZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVu
cHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8u
Yzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAyMDE6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRF
NEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0
RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAyMDA6IC9kZXYvcHRzLzcwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE5OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE5ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDE5NzogL2Rldi9wdHMvNjkKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5
IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTk2OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1
OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlF
RDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAo
b3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTk1OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5
IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9
PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMTk0OiAvZGV2L3B0cy82OAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5w
dHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxOTM6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlC
OUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5
IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxOTI6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAg
YnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAxOTE6IC9kZXYvcHRzLzY3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE5MDogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE4OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDE4ODogL2Rldi9wdHMvNjYKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBv
cGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTg3OiAvZGV2L3B0bXgKPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4
NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3Bl
bnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTg2OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09
ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6
MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1
NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgMTg1OiAvZGV2L3B0cy82NQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6
IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5f
Y3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3Jl
YXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChp
by5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAw
Nj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxODQ6IC9kZXYvcHRteAo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkg
MHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBv
cGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxODM6IC9kZXYveGVuL2V2dGNobgo9PTcw
MDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9
PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2
PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUu
YzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAxODI6IC9kZXYvcHRzLzY0Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE4MTogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE4MDogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3OTogL2Rldi9wdHMvNjMKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0
NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9t
YWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWlu
X2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9p
byAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9
PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTc4OiAvZGV2L3B0bXgKPT03
MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2
PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAg
IGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2
QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWlu
X2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2Ny
ZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAo
aW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcw
MDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTc3OiAvZGV2L3hlbi9ldnRjaG4K
PT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03
MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09
NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2
YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTc2OiAvZGV2L3B0cy82Mgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBk
b21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzU6IC9kZXYvcHRteAo9
PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcw
MDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0g
ICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0
NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21h
aW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5f
Y3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lv
IChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09
NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzQ6IC9kZXYveGVuL2V2dGNo
bgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkK
PT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3By
aXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzM6IC9kZXYvcHRzLzYxCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3MjogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3MTogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3MDogL2Rldi9wdHMvNjAKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFE
MDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0Njog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTY5OiAvZGV2L3B0
bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikK
PT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDog
ZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9t
YWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRs
ZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2
KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTY4OiAvZGV2L3hlbi9l
dnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4
MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6
NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4
Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTY3OiAvZGV2L3B0cy81OQo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAy
QUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNjY6IC9kZXYv
cHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgy
KQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03
MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkg
MHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQw
OiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBk
b21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFu
ZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNjU6IC9kZXYveGVu
L2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIu
aDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24g
KHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNjQ6IC9kZXYvcHRzLzU4Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE2MzogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE2MjogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE2MTogL2Rldi9wdHMvNTcKPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAw
eDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQw
MzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQz
RTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTYwOiAv
ZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQw
MkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0
NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6
IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWlu
LmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTU5OiAvZGV2
L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxh
dGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNu
dGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29t
bW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2Ny
ZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAo
aW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcw
MDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTU4OiAvZGV2L3B0cy81Ngo9PTcw
MDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5
IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNTc6
IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0
ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0
NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0g
ICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4
NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAz
MzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNF
NDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1h
aW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNTY6IC9k
ZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1w
bGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChm
Y250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9j
b21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5f
Y3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lv
IChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09
NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNTU6IC9kZXYvcHRzLzU1Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE1
NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE1Mzog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE1MjogL2Rldi9wdHMvNTQK
PT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03
MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAg
ICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBi
eSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MTUxOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0
LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBi
eSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAw
eDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTUw
OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3Bl
biAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29w
ZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9t
YWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRs
ZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2
KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTQ5OiAvZGV2L3B0cy81
Mwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09
ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAxNDg6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10
ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0
cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03
MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAg
IGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5
IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAw
eDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1h
aW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAx
NDc6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9v
cGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vf
b3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBk
b21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFu
ZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNDY6IC9kZXYvcHRz
LzUxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDE0NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDE0NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE0MzogL2Rldi9w
dHMvNTIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4
MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcw
MDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9
PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMTQyOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQg
KGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9
PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAg
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTQxOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRj
aG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJm
YWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIy
QTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6
IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWlu
LmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTQwOiAvZGV2
L3B0cy81MAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09
NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAxMzk6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5w
dCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5
MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09
ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAg
ICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIx
QzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3Jp
cHRvciAxMzg6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2
dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRl
cmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAz
MjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNF
NDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1h
aW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMzc6IC9k
ZXYvcHRzLzQ5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDEzNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDEzNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEzNDog
L2Rldi9wdHMvNDgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxh
dGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTEx
KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9
PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMTMzOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9v
cGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0
LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9
PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcw
MDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMTMyOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51
eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNf
aW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAw
eDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTMx
OiAvZGV2L3B0cy80Nwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1w
bGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzox
MTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAxMzA6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4
X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0
cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAxMjk6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxp
bnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4
Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5
IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAw
eDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1h
aW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAx
Mjg6IC9kZXYvcHRzLzQ2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDEyNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDEyNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDEyNTogL2Rldi9wdHMvNDUKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5
LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6
NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3
NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgMTI0OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBw
b3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQg
KGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5j
Ojk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5
KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMTIzOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0
NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2
OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUND
ODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAg
ICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTIyOiAvZGV2L3B0cy80NAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5w
dHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8u
Yzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAxMjE6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6
IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRw
dCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5
LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0
MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAxMjA6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5
NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0
RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1
Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09
ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAg
ICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIx
QzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3Jp
cHRvciAxMTk6IC9kZXYvcHRzLzQzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDExODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDExNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDExNjogL2Rldi9wdHMvNDIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChv
cGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTE1OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlC
ODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDog
Z2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3Bl
bnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMTE0OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0
RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4
NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcw
MDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMTEzOiAvZGV2L3B0cy80MAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkg
KG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMTI6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5
OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVE
OiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChv
cGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMTE6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkg
MHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAxMTA6IC9kZXYvcHRzLzQxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDEwNzogL2Rldi9wdHMvMzgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVu
cHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTA2OiAvZGV2L3B0bXgKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0
eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTA1OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAg
IGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2
KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMTA0OiAvZGV2L3B0cy8zOQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9w
ZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDM6IC9kZXYvcHRteAo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1
OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVu
cHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDI6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0g
ICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzox
NjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAxMDE6IC9kZXYvcHRzLzM3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMDogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09
ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6
MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1
NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgOTg6IC9kZXYvcHRzLzM1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk3OiAvZGV2L3B0bXgKPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4
NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3Bl
bnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgOTY6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0g
ICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzox
NjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciA5NTogL2Rldi9wdHMvMzYKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBv
cGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgOTQ6IC9kZXYvcHRteAo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1
OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVu
cHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA5MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDkyOiAvZGV2L3B0cy8zNAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9w
ZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA5MTogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAg
IGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2
KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgODk6IC9kZXYvcHRzLzMzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4OiAvZGV2L3B0bXgKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0
y (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 87: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 86: /dev/pts/32
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 85: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 84: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 83: /dev/pts/31
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 82: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 81: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 80: /dev/pts/30
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 79: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 78: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 77: /dev/pts/29
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 76: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 75: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 74: /dev/pts/27
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 73: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 72: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 71: /dev/pts/28
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 70: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 69: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 68: /dev/pts/26
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 67: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 66: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 65: /dev/pts/25
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 64: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 63: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 62: /dev/pts/24
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 61: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 60: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 59: /dev/pts/23
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 58: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 57: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 56: /dev/pts/22
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 55: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 54: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 53: /dev/pts/21
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 52: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 51: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 50: /dev/pts/19
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 49: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 48: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 47: /dev/pts/20
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 46: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 45: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 44: /dev/pts/18
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 43: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 42: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 41: /dev/pts/16
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 40: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 39: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 38: /dev/pts/17
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 37: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 36: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 35: /dev/pts/14
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 34: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 33: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 32: /dev/pts/15
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 31: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 30: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 29: /dev/pts/8
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 28: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 27: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 26: /dev/pts/13
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 25: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 24: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 23: /dev/pts/12
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 22: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 21: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtc
aG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJm
YWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIy
QTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6
IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWlu
LmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjA6IC9kZXYv
cHRzLzExCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDE5OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQg
KGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9
PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAg
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTg6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNo
bl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZh
Y2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJB
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzogL2Rldi9w
dHMvMTAKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4
MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcw
MDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9
PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMTY6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAo
Z2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkK
PT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09
ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAxNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE0OiAvZGV2L3B0
cy85Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDEzOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdl
dHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAg
ICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBi
eSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MTI6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9v
cGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vf
b3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBk
b21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFu
ZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMTogL2Rldi9wdHMv
MQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09
ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAxMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk6
IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10
ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVu
IChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Bl
bl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA4Ogo9PTcwMDY9PSAgICBh
dCAweDU5NTUwQTc6IHBpcGUgKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1MjVCQUVCOiB4c19maWxlbm8gKGluIC91c3IvbGliL2xpYnhlbnN0b3JlLnNvLjMuMC4wKQo9
PTcwMDY9PSAgICBieSAweDQwMzg0NjogaGFuZGxlX2lvIChpby5jOjk2MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciA3Ogo9PTcwMDY9PSAgICBhdCAweDU5NTUwQTc6IHBpcGUgKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1MjVCQUVCOiB4c19maWxlbm8gKGluIC91
c3IvbGliL2xpYnhlbnN0b3JlLnNvLjMuMC4wKQo9PTcwMDY9PSAgICBieSAweDQwMzg0NjogaGFu
ZGxlX2lvIChpby5jOjk2MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA2OiAvcHJvYy94ZW4v
cHJpdmNtZAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2QjA6IF9fb3Blbl9ub2NhbmNlbCAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNERCQjg6IGxpbnV4X3ByaXZjbWRf
b3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNl
X29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwNDg2MDog
eGVuX3NldHVwICh1dGlscy5jOjExOSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1haW4gKG1h
aW4uYzoxNjEpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIEFGX1VOSVggc29ja2V0IDU6IDx1bmtu
b3duPgo9PTcwMDY9PSAgICBhdCAweDU5NjJGNTc6IHNvY2tldCAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDUyNUIxNzg6ID8/PyAoaW4gL3Vzci9saWIvbGlieGVuc3Rv
cmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4NTI1QkI0RjogeHNfb3BlbiAoaW4gL3Vzci9s
aWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4NDA0ODQ1OiB4ZW5fc2V0
dXAgKHV0aWxzLmM6MTEyKQo9PTcwMDY9PSAgICBieSAweDQwMjFBQjogbWFpbiAobWFpbi5jOjE2
MSkKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ6IC92YXIvcnVuL3hl
bmNvbnNvbGVkLnBpZAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2QjA6IF9fb3Blbl9ub2NhbmNlbCAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDQwNDc1QTogZGFlbW9uaXpl
IChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0MDIyMDU6IG1haW4gKG1haW4uYzoxNTgp
Cj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyOiAvZGV2L251bGwKPT03
MDA2PT0gICAgYXQgMHg1OTU1MDQ3OiBkdXAyIChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0NzFEOiBkYWVtb25pemUgKHV0aWxzLmM6ODEpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMjA1OiBtYWluIChtYWluLmM6MTU4KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMTogL2Rldi9udWxsCj09NzAwNj09ICAgIGF0IDB4NTk1NTA0NzogZHVwMiAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDQwNDcxRDogZGFlbW9uaXpl
ICh1dGlscy5jOjgxKQo9PTcwMDY9PSAgICBieSAweDQwMjIwNTogbWFpbiAobWFpbi5jOjE1OCkK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDA6IC9kZXYvbnVsbAo9PTcw
MDY9PSAgICBhdCAweDU5NTUwNDc6IGR1cDIgKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2
PT0gICAgYnkgMHg0MDQ3MUQ6IGRhZW1vbml6ZSAodXRpbHMuYzo4MSkKPT03MDA2PT0gICAgYnkg
MHg0MDIyMDU6IG1haW4gKG1haW4uYzoxNTgpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAzOiAvaG9tZS9wYXVsaC9EZXNrdG9wL2Rvd25sb2FkZWQveGVuLTQuMS4yL3Rv
b2xzL2NvbnNvbGUvdmFsZ3JpbmRfb3V0cHV0LnR4dAo9PTcwMDY9PSAgICA8aW5oZXJpdGVkIGZy
b20gcGFyZW50Pgo9PTcwMDY9PSAKPT03MDA2PT0gCj09NzAwNj09IEhFQVAgU1VNTUFSWToKPT03
MDA2PT0gICAgIGluIHVzZSBhdCBleGl0OiA0ODQsODc4IGJ5dGVzIGluIDEsMzgzIGJsb2Nrcwo9
PTcwMDY9PSAgIHRvdGFsIGhlYXAgdXNhZ2U6IDMzLDI2OCBhbGxvY3MsIDMxLDg4NSBmcmVlcywg
MSw3ODYsNzkxIGJ5dGVzIGFsbG9jYXRlZAo9PTcwMDY9PSAKPT03MDA2PT0gMTYgYnl0ZXMgaW4g
MSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJlY29yZCAxIG9mIDI0Cj09NzAw
Nj09ICAgIGF0IDB4NEMyQjZDRDogbWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxv
YWRfbWVtY2hlY2stYW1kNjQtbGludXguc28pCj09NzAwNj09ICAgIGJ5IDB4NTk3NDQ1OTogX19u
c3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQ1NikKPT03MDA2PT0gICAgYnkgMHg2QTRE
MTg0OiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBHTElCQ18yLjIu
NSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTogZ3JhbnRwdCAo
Z3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5IChvcGVucHR5
LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6
NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3
NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gMTYg
Ynl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJlY29yZCAyIG9m
IDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjZDRDogbWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmlu
ZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1kNjQtbGludXguc28pCj09NzAwNj09ICAgIGJ5IDB4NTk3
NDQ1OTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQ1NikKPT03MDA2PT0gICAg
YnkgMHg2QTREMTlFOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBH
TElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTog
Z3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5
IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gMTYgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJl
Y29yZCAzIG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjZDRDogbWFsbG9jIChpbiAvdXNyL2xp
Yi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1kNjQtbGludXguc28pCj09NzAwNj09ICAg
IGJ5IDB4NTk3NDQ1OTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQ1NikKPT03
MDA2PT0gICAgYnkgMHg2QTREMUI4OiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRn
cm5hbV9yQEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4
NTk5QkU1MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgw
OiBvcGVucHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWlu
X2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2Ny
ZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAo
aW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcw
MDY9PSAKPT03MDA2PT0gMTYgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBp
biBsb3NzIHJlY29yZCA0IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjZDRDogbWFsbG9jIChp
biAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1kNjQtbGludXguc28pCj09
NzAwNj09ICAgIGJ5IDB4NTk3NDQ1OTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5j
OjQ1NikKPT03MDA2PT0gICAgYnkgMHg2QTREMUQyOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJC
NDZDOiBnZXRncm5hbV9yQEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09
ICAgIGJ5IDB4NTk5QkU1MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkg
MHg1NDY0NDgwOiBvcGVucHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFE
MDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0Njog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gMTYgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0
bHkgbG9zdCBpbiBsb3NzIHJlY29yZCA1IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjZDRDog
bWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1kNjQtbGlu
dXguc28pCj09NzAwNj09ICAgIGJ5IDB4NTk3NDQ1OTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChu
c3N3aXRjaC5jOjQ1NikKPT03MDA2PT0gICAgYnkgMHg2QTREMUVDOiA/Pz8KPT03MDA2PT0gICAg
YnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2
PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBi
eSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAw
eDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gMjEgYnl0ZXMgaW4gMSBibG9ja3MgYXJl
IHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCA2IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4
NEMyQjZDRDogbWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2st
YW1kNjQtbGludXguc28pCj09NzAwNj09ICAgIGJ5IDB4NDAyMjIwOiBtYWluIChtYWluLmM6MTQ0
KQo9PTcwMDY9PSAKPT03MDA2PT0gMjUgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIHN0aWxsIHJlYWNo
YWJsZSBpbiBsb3NzIHJlY29yZCA3IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjZDRDogbWFs
bG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1kNjQtbGludXgu
c28pCj09NzAwNj09ICAgIGJ5IDB4NThGNkQ3MTogc3RyZHVwIChzdHJkdXAuYzo0MykKPT03MDA2
PT0gICAgYnkgMHg0MDFGOUI6IG1haW4gKG1haW4uYzoxMTMpCj09NzAwNj09IAo9PTcwMDY9PSAz
MiBieXRlcyBpbiAxIGJsb2NrcyBhcmUgaW5kaXJlY3RseSBsb3N0IGluIGxvc3MgcmVjb3JkIDgg
b2YgMjQKPT03MDA2PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3ZhbGdy
aW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg1
OTVFRTBBOiB0c2VhcmNoICh0c2VhcmNoLmM6MjgxKQo9PTcwMDY9PSAgICBieSAweDU5NzQzRTk6
IF9fbnNzX2xvb2t1cF9mdW5jdGlvbiAobnNzd2l0Y2guYzo0MzkpCj09NzAwNj09ICAgIGJ5IDB4
NkE0RDE4NDogPz8/Cj09NzAwNj09ICAgIGJ5IDB4NTkyQjQ2QzogZ2V0Z3JuYW1fckBAR0xJQkNf
Mi4yLjUgKGdldFhYYnlZWV9yLmM6MjU2KQo9PTcwMDY9PSAgICBieSAweDU5OUJFNTE6IGdyYW50
cHQgKGdyYW50cHQuYzoxNTMpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ4MDogb3BlbnB0eSAob3Bl
bnB0eS5jOjEwMikKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IDMyIGJ5dGVzIGluIDEgYmxvY2tzIGFyZSBpbmRpcmVjdGx5IGxvc3QgaW4gbG9zcyByZWNvcmQg
OSBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFs
Z3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAw
eDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5IDB4NTk3NDNF
OTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2PT0gICAgYnkg
MHg2QTREMTlFOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBHTElC
Q18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTogZ3Jh
bnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5IChv
cGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gMzIgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJlY29y
ZCAxMCBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIv
dmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBi
eSAweDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5IDB4NTk3
NDNFOTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2PT0gICAg
YnkgMHg2QTREMUI4OiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBH
TElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTog
Z3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5
IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gMzIgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJl
Y29yZCAxMSBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9s
aWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAg
ICBieSAweDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5IDB4
NTk3NDNFOTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2PT0g
ICAgYnkgMHg2QTREMUQyOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9y
QEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1
MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVu
cHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gMzIgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3Nz
IHJlY29yZCAxMiBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vz
ci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9
PSAgICBieSAweDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5
IDB4NTk3NDNFOTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2
PT0gICAgYnkgMHg2QTREMUVDOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5h
bV9yQEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QkU1MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBv
cGVucHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gNDggYnl0ZXMgaW4gMSBibG9ja3MgYXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBs
b3NzIHJlY29yZCAxMyBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4g
L3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcw
MDY9PSAgICBieSAweDRFNEM4RkY6IHh0bF9jcmVhdGVsb2dnZXJfc3RkaW9zdHJlYW0gKHh0bF9s
b2dnZXJfc3RkaW8uYzoxNTYpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUQyNTogeGNfaW50ZXJmYWNl
X29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTQ1KQo9PTcwMDY9PSAgICBieSAweDQwNDg2MDog
eGVuX3NldHVwICh1dGlscy5jOjExOSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1haW4gKG1h
aW4uYzoxNjEpCj09NzAwNj09IAo9PTcwMDY9PSAxNjQgYnl0ZXMgaW4gNCBibG9ja3MgYXJlIHN0
aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAxNCBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRD
MkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFt
ZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDUyNUIzM0U6ID8/PyAoaW4gL3Vzci9saWIv
bGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4NTI1QkEzMDogPz8/IChpbiAv
dXNyL2xpYi9saWJ4ZW5zdG9yZS5zby4zLjAuMCkKPT03MDA2PT0gICAgYnkgMHg1QzM0RTk5OiBz
dGFydF90aHJlYWQgKHB0aHJlYWRfY3JlYXRlLmM6MzA4KQo9PTcwMDY9PSAKPT03MDA2PT0gMjAw
IGJ5dGVzIGluIDUgYmxvY2tzIGFyZSBzdGlsbCByZWFjaGFibGUgaW4gbG9zcyByZWNvcmQgMTUg
b2YgMjQKPT03MDA2PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3ZhbGdy
aW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg1
MjVCMkI3OiA/Pz8gKGluIC91c3IvbGliL2xpYnhlbnN0b3JlLnNvLjMuMC4wKQo9PTcwMDY9PSAg
ICBieSAweDUyNUJBMzA6ID8/PyAoaW4gL3Vzci9saWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09
NzAwNj09ICAgIGJ5IDB4NUMzNEU5OTogc3RhcnRfdGhyZWFkIChwdGhyZWFkX2NyZWF0ZS5jOjMw
OCkKPT03MDA2PT0gCj09NzAwNj09IDI3MiBieXRlcyBpbiAxIGJsb2NrcyBhcmUgcG9zc2libHkg
bG9zdCBpbiBsb3NzIHJlY29yZCAxNiBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMjlEQjQ6IGNh
bGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4
LnNvKQo9PTcwMDY9PSAgICBieSAweDQwMTIwNzQ6IF9kbF9hbGxvY2F0ZV90bHMgKGRsLXRscy5j
OjI5NykKPT03MDA2PT0gICAgYnkgMHg1QzM1QUJDOiBwdGhyZWFkX2NyZWF0ZUBAR0xJQkNfMi4y
LjUgKGFsbG9jYXRlc3RhY2suYzo1NzEpCj09NzAwNj09ICAgIGJ5IDB4NTI1QzE3QjogeHNfd2F0
Y2ggKGluIC91c3IvbGliL2xpYnhlbnN0b3JlLnNvLjMuMC4wKQo9PTcwMDY9PSAgICBieSAweDQw
NDg4NjogeGVuX3NldHVwICh1dGlscy5jOjEyNSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1h
aW4gKG1haW4uYzoxNjEpCj09NzAwNj09IAo9PTcwMDY9PSAyODAgYnl0ZXMgaW4gMSBibG9ja3Mg
YXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAxNyBvZiAyNAo9PTcwMDY9PSAgICBh
dCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNo
ZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDUyNUIwOTA6ID8/PyAoaW4gL3Vz
ci9saWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4NTI1QkI0RjogeHNf
b3BlbiAoaW4gL3Vzci9saWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4
NDA0ODQ1OiB4ZW5fc2V0dXAgKHV0aWxzLmM6MTEyKQo9PTcwMDY9PSAgICBieSAweDQwMjFBQjog
bWFpbiAobWFpbi5jOjE2MSkKPT03MDA2PT0gCj09NzAwNj09IDMwMCAoNjAgZGlyZWN0LCAyNDAg
aW5kaXJlY3QpIGJ5dGVzIGluIDEgYmxvY2tzIGFyZSBkZWZpbml0ZWx5IGxvc3QgaW4gbG9zcyBy
ZWNvcmQgMTggb2YgMjQKPT03MDA2PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3Iv
bGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA2PT0g
ICAgYnkgMHg1OTczNTk0OiBuc3NfcGFyc2Vfc2VydmljZV9saXN0IChuc3N3aXRjaC5jOjY3OCkK
PT03MDA2PT0gICAgYnkgMHg1OTc0MDU1OiBfX25zc19kYXRhYmFzZV9sb29rdXAgKG5zc3dpdGNo
LmM6MTc1KQo9PTcwMDY9PSAgICBieSAweDZBNEQxNjk6ID8/Pwo9PTcwMDY9PSAgICBieSAweDU5
MkI0NkM6IGdldGdybmFtX3JAQEdMSUJDXzIuMi41IChnZXRYWGJ5WVlfci5jOjI1NikKPT03MDA2
PT0gICAgYnkgMHg1OTlCRTUxOiBncmFudHB0IChncmFudHB0LmM6MTUzKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0ODA6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMDIpCj09NzAwNj09ICAgIGJ5IDB4NDAy
QUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSAxLDIwOCBieXRlcyBpbiAxIGJsb2NrcyBhcmUgc3Rp
bGwgcmVhY2hhYmxlIGluIGxvc3MgcmVjb3JkIDE5IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMy
QjZDRDogbWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1k
NjQtbGludXguc28pCj09NzAwNj09ICAgIGJ5IDB4NEU0NUMyQTogeGNfaW50ZXJmYWNlX29wZW5f
Y29tbW9uICh4Y19wcml2YXRlLmM6MTUwKQo9PTcwMDY9PSAgICBieSAweDQwNDg2MDogeGVuX3Nl
dHVwICh1dGlscy5jOjExOSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1haW4gKG1haW4uYzox
NjEpCj09NzAwNj09IAo9PTcwMDY9PSA0LDA5NiBieXRlcyBpbiAxIGJsb2NrcyBhcmUgc3RpbGwg
cmVhY2hhYmxlIGluIGxvc3MgcmVjb3JkIDIwIG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyOUJF
ODogbWVtYWxpZ24gKGluIC91c3IvbGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2
NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg0QzI5Qzk3OiBwb3NpeF9tZW1hbGlnbiAoaW4g
L3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcw
MDY9PSAgICBieSAweDRFNEJDQ0I6IHhjX19oeXBlcmNhbGxfYnVmZmVyX2FsbG9jX3BhZ2VzICh4
Y19oY2FsbF9idWYuYzoxNDMpCj09NzAwNj09ICAgIGJ5IDB4NEU0QkU2MjogeGNfX2h5cGVyY2Fs
bF9idWZmZXJfYWxsb2MgKHhjX2hjYWxsX2J1Zi5jOjE4OSkKPT03MDA2PT0gICAgYnkgMHg0RTRC
RUUyOiB4Y19faHlwZXJjYWxsX2JvdW5jZV9wcmUgKHhjX2hjYWxsX2J1Zi5jOjIzMSkKPT03MDA2
PT0gICAgYnkgMHg0RTNCQjY3OiB4Y19kb21haW5fZ2V0aW5mbyAoeGNfcHJpdmF0ZS5oOjI0MSkK
PT03MDA2PT0gICAgYnkgMHg0MDM0RTU6IGVudW1fZG9tYWlucyAoaW8uYzo3MzYpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUJBOiBtYWluIChtYWluLmM6MTY0KQo9PTcwMDY9PSAKPT03MDA2PT0gOSww
ODggYnl0ZXMgaW4gMzM5IGJsb2NrcyBhcmUgc3RpbGwgcmVhY2hhYmxlIGluIGxvc3MgcmVjb3Jk
IDIxIG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjdCMjogcmVhbGxvYyAoaW4gL3Vzci9saWIv
dmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBi
eSAweDQwMzU4OTogZW51bV9kb21haW5zIChpby5jOjYzMykKPT03MDA2PT0gICAgYnkgMHg0MDQy
QTg6IGhhbmRsZV9pbyAoaW8uYzo4NjcpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gMTYsMjcyIGJ5dGVzIGluIDMzOSBibG9ja3Mg
YXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAyMiBvZiAyNAo9PTcwMDY9PSAgICBh
dCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNo
ZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDRFNEM4RkY6IHh0bF9jcmVhdGVs
b2dnZXJfc3RkaW9zdHJlYW0gKHh0bF9sb2dnZXJfc3RkaW8uYzoxNTYpCj09NzAwNj09ICAgIGJ5
IDB4NEU0NUQyNTogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTQ1KQo9
PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gNDMsMzkyIGJ5
dGVzIGluIDMzOSBibG9ja3MgYXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAyMyBv
ZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3Jp
bmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDQw
MzU1MTogZW51bV9kb21haW5zIChpby5jOjYyMykKPT03MDA2PT0gICAgYnkgMHg0MDQyQTg6IGhh
bmRsZV9pbyAoaW8uYzo4NjcpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gNDA5LDUxMiBieXRlcyBpbiAzMzkgYmxvY2tzIGFyZSBz
dGlsbCByZWFjaGFibGUgaW4gbG9zcyByZWNvcmQgMjQgb2YgMjQKPT03MDA2PT0gICAgYXQgMHg0
QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1h
bWQ2NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg0RTQ1QzJBOiB4Y19pbnRlcmZhY2Vfb3Bl
bl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNTApCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBMRUFLIFNVTU1BUlk6Cj09NzAwNj09ICAgIGRlZmluaXRlbHkg
bG9zdDogNjAgYnl0ZXMgaW4gMSBibG9ja3MKPT03MDA2PT0gICAgaW5kaXJlY3RseSBsb3N0OiAy
NDAgYnl0ZXMgaW4gMTAgYmxvY2tzCj09NzAwNj09ICAgICAgcG9zc2libHkgbG9zdDogMjcyIGJ5
dGVzIGluIDEgYmxvY2tzCj09NzAwNj09ICAgIHN0aWxsIHJlYWNoYWJsZTogNDg0LDMwNiBieXRl
cyBpbiAxLDM3MSBibG9ja3MKPT03MDA2PT0gICAgICAgICBzdXBwcmVzc2VkOiAwIGJ5dGVzIGlu
IDAgYmxvY2tzCj09NzAwNj09IAo9PTcwMDY9PSBGb3IgY291bnRzIG9mIGRldGVjdGVkIGFuZCBz
dXBwcmVzc2VkIGVycm9ycywgcmVydW4gd2l0aDogLXYKPT03MDA2PT0gVXNlIC0tdHJhY2stb3Jp
Z2lucz15ZXMgdG8gc2VlIHdoZXJlIHVuaW5pdGlhbGlzZWQgdmFsdWVzIGNvbWUgZnJvbQo9PTcw
MDY9PSBFUlJPUiBTVU1NQVJZOiAxMTUyNjIgZXJyb3JzIGZyb20gNCBjb250ZXh0cyAoc3VwcHJl
c3NlZDogMiBmcm9tIDIpCg==
--e89a8fb203a073963004d0d26a71
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--e89a8fb203a073963004d0d26a71--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtr-0003so-4Z; Tue, 18 Dec 2012 13:07:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhebus@googlemail.com>)
	id 1TjY0o-000556-6f; Fri, 14 Dec 2012 16:21:23 +0000
Received: from [85.158.143.35:4033] by server-2.bemta-4.messagelabs.com id
	AB/50-30861-1025BC05; Fri, 14 Dec 2012 16:21:21 +0000
X-Env-Sender: jhebus@googlemail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1355502054!5499347!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20919 invoked from network); 14 Dec 2012 16:20:56 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Dec 2012 16:20:56 -0000
Received: by mail-ob0-f173.google.com with SMTP id xn12so3537464obc.32
	for <multiple recipients>; Fri, 14 Dec 2012 08:20:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=LZTeaXhCUQpoXi9JhQP+zu44lb65jSqDMJazWvPmkzk=;
	b=qNeuy1ScFsopNXFlScEI9Hc9zjWr696nuBitlTVH87TZXaltrKrZLjZmt4PVxlWM9A
	h4oCQEPjrfQfRcdy+3s4qXqZyev9N3wNEEIqs7HWVoZzFvAnErZBXMK6KTabRfFhjCeq
	ccanzM1NVzWZsiFQRRKg9zPMyzuvLPGQft82/VfhRTjVibFj1CsVQs2Bg0ncKyIe6wy8
	kwS1awQl9LOgHH+jXhQcQegu94sZrLmArZgEQWpbpv8N5f61KZMePHiJH4QTWF+A+C/x
	KTCAZZILRImTVCVn24HJZetd0MThHfZGiMP0k496yynOQfBzOwXVBO7gQVSES/PeQYC1
	Br1A==
MIME-Version: 1.0
Received: by 10.60.11.130 with SMTP id q2mr4918360oeb.141.1355502054511; Fri,
	14 Dec 2012 08:20:54 -0800 (PST)
Received: by 10.76.21.196 with HTTP; Fri, 14 Dec 2012 08:20:53 -0800 (PST)
In-Reply-To: <1355497058.8376.63.camel@iceland>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
	<CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
	<1355497058.8376.63.camel@iceland>
Date: Fri, 14 Dec 2012 16:20:53 +0000
Message-ID: <CABR7Q=oLJg8EJHcask822MbYRh6Mb7QGgd_x_MRRriqMT8H_0Q@mail.gmail.com>
From: Paul Harvey <jhebus@googlemail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Content-Type: multipart/mixed; boundary=e89a8fb203a073963004d0d26a71
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--e89a8fb203a073963004d0d26a71
Content-Type: multipart/alternative; boundary=e89a8fb203a073962404d0d26a6f

--e89a8fb203a073962404d0d26a6f
Content-Type: text/plain; charset=ISO-8859-1

On 14 December 2012 14:57, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Fri, 2012-12-14 at 13:06 +0000, Paul Harvey wrote:
> > So
> >
> > #with 341 domains
> > ./lsevntchn 0 | wc -l
> > 724
> >
> > Attaching gdb to xenconsoled,
> >
> > Program received signal SIGABRT, Aborted.
> > 0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> > (gdb) bt
> > #0  0x00007fe588ca8425 in raise ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #1  0x00007fe588cabb8b in abort ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> > #3  0x00007fe588d7c807 in __fortify_fail ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #4  0x00007fe588d7b700 in __chk_fail ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #5  0x00007fe588d7c7be in __fdelt_warn ()
> > from /lib/x86_64-linux-gnu/libc.so.6
> > #6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
> > #7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at
> > daemon/main.c:166
> >
>
> libc raises an exception when it detects a memory violation.
>
> You can probably try using valgrind to identify the memory leak in
> xenconsoled.
>
>
> Wei.
>
>
Feeling a little over my head now.

I have run valgrind and attached a file with the output. As before,
xenconsoled crashes, but I am not really sure how to read what valgrind
is showing me: is it telling me that these errors happen as it goes
along, or are the lost blocks just a result of the crash?

Valgrind was run with:

valgrind --tool=memcheck --leak-check=yes --show-reachable=yes
--num-callers=20 --log-file="valgrind_output.txt" --track-fds=yes
./xenconsoled --pid-file=/var/run/xenconsoled.pid

If the attached file doesn't come through, could you tell me where it should go?

Paul

--e89a8fb203a073962404d0d26a6f--
--e89a8fb203a073963004d0d26a71
Content-Type: text/plain; charset=US-ASCII; name="valgrind_output.txt"
Content-Disposition: attachment; filename="valgrind_output.txt"
Content-Transfer-Encoding: 7bit
X-Attachment-Id: f_hapitlp20

==7001== Memcheck, a memory error detector
==7001== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==7001== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==7001== Command: ./xenconsoled --pid-file=/var/run/xenconsoled.pid
==7001== Parent PID: 4745
==7001==
==7001==
==7001== FILE DESCRIPTORS: 4 open at exit.
==7001== Open file descriptor 3: /home/paulh/Desktop/downloaded/xen-4.1.2/tools/console/valgrind_output.txt
==7001==    <inherited from parent>
==7001==
==7001== Open file descriptor 2: /dev/pts/2
==7001==    <inherited from parent>
==7001==
==7001== Open file descriptor 1:
==7001==    <inherited from parent>
==7001==
==7001== Open file descriptor 0: /dev/pts/2
==7001==    <inherited from parent>
==7001==
==7001==
==7001== HEAP SUMMARY:
==7001==     in use at exit: 46 bytes in 2 blocks
==7001==   total heap usage: 2 allocs, 0 frees, 46 bytes allocated
==7001==
==7001== 21 bytes in 1 blocks are still reachable in loss record 1 of 2
==7001==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7001==    by 0x402220: main (main.c:144)
==7001==
==7001== 25 bytes in 1 blocks are still reachable in loss record 2 of 2
==7001==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7001==    by 0x58F6D71: strdup (strdup.c:43)
==7001==    by 0x401F9B: main (main.c:113)
==7001==
==7001== LEAK SUMMARY:
==7001==    definitely lost: 0 bytes in 0 blocks
==7001==    indirectly lost: 0 bytes in 0 blocks
==7001==      possibly lost: 0 bytes in 0 blocks
==7001==    still reachable: 46 bytes in 2 blocks
==7001==         suppressed: 0 bytes in 0 blocks
==7001==
==7001== For counts of detected and suppressed errors, rerun with: -v
==7001== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
==7005==
==7005== FILE DESCRIPTORS: 4 open at exit.
[==7005== then repeats the same FILE DESCRIPTORS, HEAP SUMMARY, LEAK SUMMARY
and ERROR SUMMARY output as ==7001== above]
==7006== Warning: noted but unhandled ioctl 0x305000 with no size/direction hints
==7006==    This could cause spurious value errors to appear.
==7006==    See README_MISSING_SYSCALL_OR_IOCTL for guidance on writing a proper wrapper.
[the preceding three-line warning appears twice more]
==7006== Conditional jump or move depends on uninitialised value(s)
==7006==    at 0x4E3BC55: xc_domain_getinfo (xc_domain.c:229)
==7006==    by 0x4034E5: enum_domains (io.c:736)
==7006==    by 0x4042A8: handle_io (io.c:867)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Conditional jump or move depends on uninitialised value(s)
==7006==    at 0x40351A: enum_domains (io.c:738)
==7006==    by 0x4042A8: handle_io (io.c:867)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
--7006-- WARNING: Serious error when reading debug info
--7006-- When reading debug info from /proc/xen/privcmd:
--7006-- can't read file to inspect ELF header
[the preceding three-line warning repeats for the remainder of the
attachment, which is truncated here]
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcg
ZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hl
bi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVy
Ci0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8K
LS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoK
LS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBX
QVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNh
bid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcg
ZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hl
bi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVy
Ci0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8K
LS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoK
LS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBX
QVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNh
bid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcg
ZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hl
bi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVy
Ci0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8K
LS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoK
LS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBX
QVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNh
bid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcg
ZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hl
bi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVy
Ci0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8K
LS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoK
LS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBX
QVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNh
bid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2Vy
aW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcg
ZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZp
bGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Ig
d2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8g
ZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3Bl
Y3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2Mv
eGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFk
ZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5m
bwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21k
OgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0t
IFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0g
V2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0g
Y2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBT
ZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGlu
ZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQg
ZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJv
ciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5m
byBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5z
cGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFk
aW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJv
Yy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhl
YWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZj
bWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2
LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYt
LSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYt
LSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6
IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFk
aW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVh
ZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVy
cm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBp
bmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBp
bnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJl
YWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9w
cm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYg
aGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVn
IGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJp
dmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcw
MDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAw
Ni0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAw
Ni0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklO
RzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJl
YWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCBy
ZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMg
ZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVn
IGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRv
IGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20g
L3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVM
RiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVi
dWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9w
cml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0t
NzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03
MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03
MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJO
SU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4g
cmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0
IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91
cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVi
dWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUg
dG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hl
biByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJv
bSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3Qg
RUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVu
L3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIK
LS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwot
LTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgot
LTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdB
Uk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hl
biByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2Fu
J3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJp
b3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBk
ZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmls
ZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3
aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBm
cm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVj
dCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5n
IGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94
ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRl
cgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
Ci0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6
Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0g
V0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBX
aGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBj
YW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNl
cmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5n
IGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBm
aWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9y
IHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZv
IGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNw
ZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRp
bmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9j
L3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVh
ZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNt
ZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYt
LSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0t
IFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0t
IGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzog
U2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRp
bmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFk
IGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJy
b3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGlu
Zm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGlu
c3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3By
b2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBo
ZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcg
aW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2
Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAw
Ni0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2
LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2
LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5H
OiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVh
ZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJl
YWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBl
cnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcg
aW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8g
aW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAv
cHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxG
IGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3By
aXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03
MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcw
MDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcw
MDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5J
Tkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiBy
ZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3Qg
cmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3Vz
IGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1
ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0
byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FSTklORzogU2VyaW91cyBlcnJvciB3aGVu
IHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVuIHJlYWRpbmcgZGVidWcgaW5mbyBmcm9t
IC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4ndCByZWFkIGZpbGUgdG8gaW5zcGVjdCBF
TEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlvdXMgZXJyb3Igd2hlbiByZWFkaW5nIGRl
YnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRlYnVnIGluZm8gZnJvbSAvcHJvYy94ZW4v
cHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxlIHRvIGluc3BlY3QgRUxGIGhlYWRlcgot
LTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0t
NzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZyb20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0t
NzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0IEVMRiBoZWFkZXIKLS03MDA2LS0gV0FS
TklORzogU2VyaW91cyBlcnJvciB3aGVuIHJlYWRpbmcgZGVidWcgaW5mbwotLTcwMDYtLSBXaGVu
IHJlYWRpbmcgZGVidWcgaW5mbyBmcm9tIC9wcm9jL3hlbi9wcml2Y21kOgotLTcwMDYtLSBjYW4n
dCByZWFkIGZpbGUgdG8gaW5zcGVjdCBFTEYgaGVhZGVyCi0tNzAwNi0tIFdBUk5JTkc6IFNlcmlv
dXMgZXJyb3Igd2hlbiByZWFkaW5nIGRlYnVnIGluZm8KLS03MDA2LS0gV2hlbiByZWFkaW5nIGRl
YnVnIGluZm8gZnJvbSAvcHJvYy94ZW4vcHJpdmNtZDoKLS03MDA2LS0gY2FuJ3QgcmVhZCBmaWxl
IHRvIGluc3BlY3QgRUxGIGhlYWRlcgotLTcwMDYtLSBXQVJOSU5HOiBTZXJpb3VzIGVycm9yIHdo
ZW4gcmVhZGluZyBkZWJ1ZyBpbmZvCi0tNzAwNi0tIFdoZW4gcmVhZGluZyBkZWJ1ZyBpbmZvIGZy
b20gL3Byb2MveGVuL3ByaXZjbWQ6Ci0tNzAwNi0tIGNhbid0IHJlYWQgZmlsZSB0byBpbnNwZWN0
IEVMRiBoZWFkZXIKPT03MDA2PT0gCj09NzAwNj09IEZJTEUgREVTQ1JJUFRPUlM6IDEwMjYgb3Bl
biBhdCBleGl0Lgo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDI1OiAvZGV2L3B0cy8z
NDUKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikK
PT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9
PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAg
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTAyNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDEwMjM6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNo
bl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZh
Y2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJB
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDIyOiAvZGV2
L3B0cy8zNDQKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9
PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcw
MDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMTAyMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDEwMjA6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4
X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19p
bnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDE5
OiAvZGV2L3B0cy8zNDMKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6
MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5
KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMTAxODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDEwMTc6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6
IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4
OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAxMDE2OiAvZGV2L3B0cy8zNDIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVu
cHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAxNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDEwMTQ6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRF
NEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0
RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAxMDEzOiAvZGV2L3B0cy8zNDEKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5
IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTAxMjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMTE6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAg
by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 1010: /dev/pts/340
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 1009: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 1008: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
[the three backtraces above (openpty for /dev/pts/*, posix_openpt/getpt for /dev/ptmx, linux_evtchn_open for /dev/xen/evtchn) repeat verbatim for each descriptor below]
==7006== Open file descriptor 1007: /dev/pts/339
==7006== Open file descriptor 1006: /dev/ptmx
==7006== Open file descriptor 1005: /dev/xen/evtchn
==7006== Open file descriptor 1004: /dev/pts/338
==7006== Open file descriptor 1003: /dev/ptmx
==7006== Open file descriptor 1002: /dev/xen/evtchn
==7006== Open file descriptor 1001: /dev/pts/337
==7006== Open file descriptor 1000: /dev/ptmx
==7006== Open file descriptor 999: /dev/xen/evtchn
==7006== Open file descriptor 998: /dev/pts/336
==7006== Open file descriptor 997: /dev/ptmx
==7006== Open file descriptor 996: /dev/xen/evtchn
==7006== Open file descriptor 995: /dev/pts/335
==7006== Open file descriptor 994: /dev/ptmx
==7006== Open file descriptor 993: /dev/xen/evtchn
==7006== Open file descriptor 992: /dev/pts/334
==7006== Open file descriptor 991: /dev/ptmx
==7006== Open file descriptor 990: /dev/xen/evtchn
==7006== Open file descriptor 989: /dev/pts/333
==7006== Open file descriptor 988: /dev/ptmx
==7006== Open file descriptor 987: /dev/xen/evtchn
==7006== Open file descriptor 986: /dev/pts/332
==7006== Open file descriptor 985: /dev/ptmx
==7006== Open file descriptor 984: /dev/xen/evtchn
==7006== Open file descriptor 983: /dev/pts/331
==7006== Open file descriptor 982: /dev/ptmx
==7006== Open file descriptor 981: /dev/xen/evtchn
==7006== Open file descriptor 980: /dev/pts/330
==7006== Open file descriptor 979: /dev/ptmx
==7006== Open file descriptor 978: /dev/xen/evtchn
==7006== Open file descriptor 977: /dev/pts/329
==7006== Open file descriptor 976: /dev/ptmx
==7006== Open file descriptor 975: /dev/xen/evtchn
==7006== Open file descriptor 974: /dev/pts/328
==7006== Open file descriptor 973: /dev/ptmx
==7006== Open file descriptor 972: /dev/xen/evtchn
==7006== Open file descriptor 971: /dev/pts/327
==7006== Open file descriptor 970: /dev/ptmx
==7006== Open file descriptor 969: /dev/xen/evtchn
==7006== Open file descriptor 968: /dev/pts/326
==7006== Open file descriptor 967: /dev/ptmx
==7006== Open file descriptor 966: /dev/xen/evtchn
==7006== Open file descriptor 965: /dev/pts/325
==7006== Open file descriptor 964: /dev/ptmx
==7006== Open file descriptor 963: /dev/xen/evtchn
==7006== Open file descriptor 962: /dev/pts/324
==7006== Open file descriptor 961: /dev/ptmx
==7006== Open file descriptor 960: /dev/xen/evtchn
==7006== Open file descriptor 959: /dev/pts/323
==7006== Open file descriptor 958: /dev/ptmx
==7006== Open file descriptor 957: /dev/xen/evtchn
==7006== Open file descriptor 956: /dev/pts/322
==7006== Open file descriptor 955: /dev/ptmx
==7006== Open file descriptor 954: /dev/xen/evtchn
==7006== Open file descriptor 953: /dev/pts/321
==7006== Open file descriptor 952: /dev/ptmx
==7006== Open file descriptor 951: /dev/xen/evtchn
==7006== Open file descriptor 950: /dev/pts/320
==7006== Open file descriptor 949: /dev/ptmx
==7006== Open file descriptor 948: /dev/xen/evtchn
==7006== Open file descriptor 947: /dev/pts/319
==7006== Open file descriptor 946: /dev/ptmx
==7006== Open file descriptor 945: /dev/xen/evtchn
==7006== Open file descriptor 944: /dev/pts/318
==700
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk0Mzog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk0MjogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk0MTogL2Rldi9wdHMvMzE3Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk0
MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkzOTog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkzODogL2Rldi9wdHMvMzE2
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDkzNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkz
NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkzNTogL2Rldi9wdHMv
MzE1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDkzNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDkzMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkzMjogL2Rldi9w
dHMvMzE0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDkzMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDkzMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkyOTogL2Rl
di9wdHMvMzEzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDkyODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDkyNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkyNjog
L2Rldi9wdHMvMzEyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDkyNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDkyNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDky
MzogL2Rldi9wdHMvMzExCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDkyMjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDkyMTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDkyMDogL2Rldi9wdHMvMzEwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDkxOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDkxODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDkxNzogL2Rldi9wdHMvMzA5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkxNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDkxNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDkxNDogL2Rldi9wdHMvMzA4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkxMzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkxMjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDkxMTogL2Rldi9wdHMvMzA3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkxMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDkwODogL2Rldi9wdHMvMzA2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwNzogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDkwNTogL2Rldi9wdHMvMzA1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwNDogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDkwMjogL2Rldi9wdHMvMzA0Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwMTogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwMDogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5OTogL2Rldi9wdHMvMzAzCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5ODogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NzogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NjogL2Rldi9wdHMvMzAyCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NTogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5NDogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MzogL2Rldi9wdHMvMzAxCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MjogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MTogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg5MDogL2Rldi9wdHMvMzAwCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4OTogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4ODogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4NzogL2Rldi9wdHMvMjk5Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4Njog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4NTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4NDogL2Rldi9wdHMvMjk4Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4
MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4Mjog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4MTogL2Rldi9wdHMvMjk3
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDg4MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3
OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3ODogL2Rldi9wdHMv
Mjk2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDg3NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDg3NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3NTogL2Rldi9w
dHMvMjk1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDg3NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDg3MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg3MjogL2Rl
di9wdHMvMjk0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDg3MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDg3MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg2OTog
L2Rldi9wdHMvMjkzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDg2ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDg2NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg2
NjogL2Rldi9wdHMvMjkyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDg2NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDg2NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDg2MzogL2Rldi9wdHMvMjkxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDg2MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDg2MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDg2MDogL2Rldi9wdHMvMjkwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDg1ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDg1NzogL2Rldi9wdHMvMjg5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDg1NDogL2Rldi9wdHMvMjg4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDg1MTogL2Rldi9wdHMvMjg3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg1MDogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDg0ODogL2Rldi9wdHMvMjg2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0NzogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDg0NTogL2Rldi9wdHMvMjg1Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0NDogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MzogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MjogL2Rldi9wdHMvMjg0Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MTogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg0MDogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzOTogL2Rldi9wdHMvMjgzCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzODogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNzogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNjogL2Rldi9wdHMvMjgyCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNTogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzNDogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgzMzogL2Rldi9wdHMvMjgxCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 832: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 831: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 830: /dev/pts/280
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
[The same three stanzas, with identical backtraces, repeat for each remaining guest console: descriptors 829/828/827 (/dev/ptmx, /dev/xen/evtchn, /dev/pts/279), 826/825/824 (/dev/pts/278), and so on down through 770/769/768 (/dev/pts/260).]
==7006== 
==7006== Open file descriptor 767: /dev/pts/259
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor
IDc2NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2
NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2NDogL2Rldi9wdHMv
MjU4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDc2MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDc2MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc2MTogL2Rldi9w
dHMvMjU3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDc2MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDc1OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc1ODogL2Rl
di9wdHMvMjU2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc1NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDc1NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc1NTog
L2Rldi9wdHMvMjU1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDc1NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc1MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc1
MjogL2Rldi9wdHMvMjU0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDc1MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDc1MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDc0OTogL2Rldi9wdHMvMjUzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDc0ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDc0NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDc0NjogL2Rldi9wdHMvMjUyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc0NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDc0NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDc0MzogL2Rldi9wdHMvMjUxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc0MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc0MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc0MDogL2Rldi9wdHMvMjUwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDczNzogL2Rldi9wdHMvMjQ5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczNjogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDczNDogL2Rldi9wdHMvMjQ4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczMzogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczMjogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDczMTogL2Rldi9wdHMvMjQ3Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDczMDogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyOTogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyODogL2Rldi9wdHMvMjQ2Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyNzogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyNjogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyNTogL2Rldi9wdHMvMjQ1Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyNDogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyMzogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyMjogL2Rldi9wdHMvMjQ0Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyMTogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyMDogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxOTogL2Rldi9wdHMvMjQzCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxODogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxNzogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxNjogL2Rldi9wdHMvMjQyCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxNTog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxNDogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxMzogL2Rldi9wdHMvMjQxCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcx
MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxMTog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcxMDogL2Rldi9wdHMvMjQw
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDcwOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcw
ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcwNzogL2Rldi9wdHMv
MjM5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDcwNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDcwNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcwNDogL2Rldi9w
dHMvMjM4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDcwMzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDcwMjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcwMTogL2Rl
di9wdHMvMjM3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDcwMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY5OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY5ODog
L2Rldi9wdHMvMjM2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDY5NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDY5NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY5
NTogL2Rldi9wdHMvMjM1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDY5NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDY5MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDY5MjogL2Rldi9wdHMvMjM0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDY5MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDY5MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDY4OTogL2Rldi9wdHMvMjMzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY4ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDY4NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY4NjogL2Rldi9wdHMvMjMyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY4NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY4NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDY4MzogL2Rldi9wdHMvMjMxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY4MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY4MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDY4MDogL2Rldi9wdHMvMjMwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3OTogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDY3NzogL2Rldi9wdHMvMjI5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3NjogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDY3NDogL2Rldi9wdHMvMjI4Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MzogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MjogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MTogL2Rldi9wdHMvMjI3Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY3MDogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2OTogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2ODogL2Rldi9wdHMvMjI2Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NzogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NjogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NTogL2Rldi9wdHMvMjI1Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2NDogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MzogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MjogL2Rldi9wdHMvMjI0Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MTogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY2MDogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1OTogL2Rldi9wdHMvMjIzCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1ODog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1NzogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1NjogL2Rldi9wdHMvMjIyCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1
NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1NDog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1MzogL2Rldi9wdHMvMjIx
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDY1MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1
MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY1MDogL2Rldi9wdHMv
MjIwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDY0OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDY0ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY0NzogL2Rldi9w
dHMvMjE5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY0NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDY0NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY0NDogL2Rl
di9wdHMvMjE4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDY0MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY0MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDY0MTog
L2Rldi9wdHMvMjE3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDY0MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDYzOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYz
ODogL2Rldi9wdHMvMjE2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDYzNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDYzNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDYzNTogL2Rldi9wdHMvMjE1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDYzNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDYzMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDYzMjogL2Rldi9wdHMvMjE0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYzMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDYzMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDYyOTogL2Rldi9wdHMvMjEzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDYyNjogL2Rldi9wdHMvMjEyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyNTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDYyMzogL2Rldi9wdHMvMjExCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyMjogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYyMTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDYyMDogL2Rldi9wdHMvMjEwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxOTogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxODogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDYxNzogL2Rldi9wdHMvMjA5Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxNjogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxNTogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxNDogL2Rldi9wdHMvMjA4Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxMzogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxMjogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxMTogL2Rldi9wdHMvMjA3Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYxMDogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwOTogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwODogL2Rldi9wdHMvMjA2Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwNzogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwNjogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwNTogL2Rldi9wdHMvMjA1Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwNDogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwMzogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwMjogL2Rldi9wdHMvMjA0Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwMTog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDYwMDogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU5OTogL2Rldi9wdHMvMjAzCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU5
ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU5Nzog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDU5NjogL2Rldi9wdHMvMjAy
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
   by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 595: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 594: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 593: /dev/pts/201
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
[... the same three stack traces repeat for descriptors 592 down to 530, covering /dev/ptmx, /dev/xen/evtchn, and /dev/pts/200 through /dev/pts/180 ...]
==7006== Open file descriptor 529: /dev/ptmx
==7006==    at 0x59546CD: ??? (s
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDUyODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUyNzog
L2Rldi9wdHMvMTc5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDUyNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDUyNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUy
NDogL2Rldi9wdHMvMTc4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDUyMzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDUyMjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDUyMTogL2Rldi9wdHMvMTc3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDUyMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDUxOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDUxODogL2Rldi9wdHMvMTc2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDUxNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDUxNTogL2Rldi9wdHMvMTc1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDUxMjogL2Rldi9wdHMvMTc0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUxMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDUwOTogL2Rldi9wdHMvMTczCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwODogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDUwNjogL2Rldi9wdHMvMTcyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwNTogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwNDogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDUwMzogL2Rldi9wdHMvMTcxCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwMjogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwMTogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDUwMDogL2Rldi9wdHMvMTcwCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5OTogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5ODogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5NzogL2Rldi9wdHMvMTY5Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5NjogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5NTogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5NDogL2Rldi9wdHMvMTY4Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5MzogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5MjogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5MTogL2Rldi9wdHMvMTY3Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ5MDogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4OTogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4ODogL2Rldi9wdHMvMTY2Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4Nzog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4NjogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4NTogL2Rldi9wdHMvMTY1Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4
NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4Mzog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4MjogL2Rldi9wdHMvMTY0
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQ4MTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ4
MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ3OTogL2Rldi9wdHMv
MTYzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQ3ODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQ3NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ3NjogL2Rldi9w
dHMvMTYyCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQ3NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQ3NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ3MzogL2Rl
di9wdHMvMTYxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDQ3MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQ3MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ3MDog
L2Rldi9wdHMvMTYwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDQ2OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDQ2ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ2
NzogL2Rldi9wdHMvMTU5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDQ2NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDQ2NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQ2NDogL2Rldi9wdHMvMTU4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDQ2MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDQ2MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQ2MTogL2Rldi9wdHMvMTU3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ2MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDQ1OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQ1ODogL2Rldi9wdHMvMTU2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ1NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ1NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDQ1NTogL2Rldi9wdHMvMTU1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ1NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ1MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDQ1MjogL2Rldi9wdHMvMTU0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ1MTogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ1MDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDQ0OTogL2Rldi9wdHMvMTUzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0ODogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0NzogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDQ0NjogL2Rldi9wdHMvMTUyCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0NTogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0NDogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0MzogL2Rldi9wdHMvMTUxCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0MjogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0MTogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQ0MDogL2Rldi9wdHMvMTUwCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzOTogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzODogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzNzogL2Rldi9wdHMvMTQ5Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzNjogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzNTogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzNDogL2Rldi9wdHMvMTQ4Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMzogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMjogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMTogL2Rldi9wdHMvMTQ3Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQzMDog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyOTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyODogL2Rldi9wdHMvMTQ2Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQy
NzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyNjog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyNTogL2Rldi9wdHMvMTQ1
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQyNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQy
MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQyMjogL2Rldi9wdHMv
MTQ0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDQyMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDQyMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDQxOTogL2Rldi9w
dHMvMTQzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDQxODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
 (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 417: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 416: /dev/pts/142
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 415: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 414: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 413: /dev/pts/141
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 412: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 411: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 410: /dev/pts/140
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 409: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 408: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 407: /dev/pts/139
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 406: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 405: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 404: /dev/pts/138
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 403: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 402: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 401: /dev/pts/137
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 400: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 399: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 398: /dev/pts/136
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 397: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 396: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 395: /dev/pts/135
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 394: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 393: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 392: /dev/pts/134
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 391: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 390: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 389: /dev/pts/133
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 388: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 387: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 386: /dev/pts/132
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 385: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 384: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 383: /dev/pts/131
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 382: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 381: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 380: /dev/pts/130
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 379: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 378: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 377: /dev/pts/129
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 376: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 375: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 374: /dev/pts/128
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 373: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 372: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 371: /dev/pts/127
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 370: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 369: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 368: /dev/pts/126
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 367: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 366: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 365: /dev/pts/125
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 364: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 363: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 362: /dev/pts/124
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 361: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 360: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 359: /dev/pts/123
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 358: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 357: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 356: /dev/pts/122
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 355: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 354: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 353: /dev/pts/121
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 352: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    b
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDM1MTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDM1MDogL2Rldi9wdHMvMTIwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDM0OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDM0ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDM0NzogL2Rldi9wdHMvMTE5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM0NjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDM0NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDM0NDogL2Rldi9wdHMvMTE4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM0MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM0MjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDM0MTogL2Rldi9wdHMvMTE3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDM0MDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzOTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDMzODogL2Rldi9wdHMvMTE2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzNzogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDMzNTogL2Rldi9wdHMvMTE1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzNDogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9
PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5j
OjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDMzMjogL2Rldi9wdHMvMTE0Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzMTogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMzMDogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyOTogL2Rldi9wdHMvMTEzCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2
NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyODogL2Rldi9wdG14Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAg
ICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0
NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyNzogL2Rldi94ZW4vZXZ0Y2hu
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9
PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJp
dmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyNjogL2Rldi9wdHMvMTEyCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyNTogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyNDogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyMzogL2Rldi9wdHMvMTExCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyMjogL2Rldi9w
dG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcw
MDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAw
eDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyMTogL2Rldi94ZW4v
ZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5o
OjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAo
eGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMyMDogL2Rldi9wdHMvMTEwCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxOTogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxODogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNzogL2Rldi9wdHMvMTA5Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNjog
L2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3
KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAg
ICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNTogL2Rl
di94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZj
bnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2Nv
bW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxNDogL2Rldi9wdHMvMTA4Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMx
MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxMjog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMxMTogL2Rldi9wdHMvMTA3
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDMxMDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRw
dC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcw
MDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMw
OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29w
ZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9v
cGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMwODogL2Rldi9wdHMv
MTA2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDMwNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDMwNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMwNTogL2Rldi9w
dHMvMTA1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDMwNDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDMwMzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDMwMjogL2Rl
di9wdHMvMTA0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDMwMTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDMwMDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 299: /dev/pts/103
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 298: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
==7006== Open file descriptor 297: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006== 
[identical stack traces repeat for descriptors 296 down to 234, cycling through /dev/pts/102 .. /dev/pts/82, /dev/ptmx and /dev/xen/evtchn]
==7006== 
==7006== Open file descriptor 233: /dev/pts/81
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006== 
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMjMyOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdl
dHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09
NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAg
ICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBi
eSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MjMxOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5f
b3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNl
X29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjMwOiAvZGV2L3B0
cy84MAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgy
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09
ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAg
ICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIx
QzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3Jp
cHRvciAyMjk6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAo
Z2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkK
PT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09
ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAyMjg6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNo
bl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZh
Y2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJB
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMjc6IC9kZXYv
cHRzLzc5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDIyNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0
IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkx
KQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDIyNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0
Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVy
ZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMy
MkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDIyNDogL2Rl
di9wdHMvNzgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9
PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcw
MDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMjIzOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVu
cHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6
OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcw
MDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9
PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMjIyOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9l
dnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50
ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQw
MzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQz
RTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMjIxOiAv
ZGV2L3B0cy83Nwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0
ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAyMjA6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29w
ZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQu
Yzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09
NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAyMTk6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4
X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19p
bnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMTg6
IC9kZXYvcHRzLzc2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjEx
MSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDIxNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhf
b3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRw
dC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDIxNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGlu
dXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhj
X2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkg
MHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDIx
NTogL2Rldi9wdHMvNzUKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6
MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5
KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMjE0OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3Np
eF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdl
dHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4
KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9
PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMjEzOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBs
aW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODog
eGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBi
eSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MjEyOiAvZGV2L3B0cy83NAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10
ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHku
YzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0
MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAyMTE6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBv
c2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAo
Z2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6
OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAyMTA6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6
IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4
OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAyMDk6IC9kZXYvcHRzLzczCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0
eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDIwODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2Mjog
cG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0
IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHku
Yzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDIwNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRG
NjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVD
Qzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0g
ICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDIwNjogL2Rldi9wdHMvNzIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVu
cHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMjA1OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0
NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYy
OiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0
cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0
eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6
NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3
NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgMjA0OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRE
NEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0
NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9
PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMjAzOiAvZGV2L3B0cy83MQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9w
ZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAyMDI6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5
NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4
NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBn
ZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVu
cHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8u
Yzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAyMDE6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRF
NEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0
RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAyMDA6IC9kZXYvcHRzLzcwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE5OTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE5ODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDE5NzogL2Rldi9wdHMvNjkKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5
IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTk2OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1
OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlF
RDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAo
b3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTk1OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5
IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9
PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMTk0OiAvZGV2L3B0cy82OAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5w
dHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxOTM6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlC
OUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5
IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxOTI6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAg
YnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAxOTE6IC9kZXYvcHRzLzY3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE5MDogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE4OTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDE4ODogL2Rldi9wdHMvNjYKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBv
cGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTg3OiAvZGV2L3B0bXgKPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4
NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3Bl
bnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTg2OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09
ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6
MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1
NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgMTg1OiAvZGV2L3B0cy82NQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6
IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5f
Y3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3Jl
YXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChp
by5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAw
Nj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxODQ6IC9kZXYvcHRteAo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkg
MHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBv
cGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxODM6IC9kZXYveGVuL2V2dGNobgo9PTcw
MDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9
PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2
PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUu
YzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAxODI6IC9kZXYvcHRzLzY0Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1
Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFp
bl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9j
cmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8g
KGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03
MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE4MTogL2Rldi9wdG14Cj09NzAw
Nj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09
ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBi
eSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6
IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE4MDogL2Rldi94ZW4vZXZ0Y2huCj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcw
MDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0
ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3OTogL2Rldi9wdHMvNjMKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0
NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9t
YWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWlu
X2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9p
byAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9
PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTc4OiAvZGV2L3B0bXgKPT03
MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2
PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAg
IGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2
QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWlu
X2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2Ny
ZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAo
aW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcw
MDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTc3OiAvZGV2L3hlbi9ldnRjaG4K
PT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03
MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09
NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2
YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTc2OiAvZGV2L3B0cy82Mgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBk
b21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzU6IC9kZXYvcHRteAo9
PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcw
MDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0g
ICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0
NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21h
aW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5f
Y3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lv
IChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09
NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzQ6IC9kZXYveGVuL2V2dGNo
bgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkK
PT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3By
aXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNzM6IC9kZXYvcHRzLzYxCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6
IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRv
bWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5k
bGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2
NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3MjogL2Rldi9wdG14
Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09
NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9
PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0
NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRv
bWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3MTogL2Rldi94ZW4vZXZ0
Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0
KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNf
cHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE3MDogL2Rldi9wdHMvNjAKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFE
MDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0Njog
ZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhh
bmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTY5OiAvZGV2L3B0
bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikK
PT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAw
Nj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4
NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDog
ZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9t
YWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRs
ZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2
KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTY4OiAvZGV2L3hlbi9l
dnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4
MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6
NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4
Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTY3OiAvZGV2L3B0cy81OQo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAy
QUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNjY6IC9kZXYv
cHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgy
KQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03
MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkg
MHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQw
OiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBk
b21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFu
ZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNjU6IC9kZXYveGVu
L2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIu
aDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24g
KHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNjQ6IC9kZXYvcHRzLzU4Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0
MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMz
NDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0
OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFp
bi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE2MzogL2Rl
di9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6
ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9
PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJB
RDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE2MjogL2Rldi94
ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRs
Mi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1v
biAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE2MTogL2Rldi9wdHMvNTcKPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAw
eDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQw
MzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQz
RTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTYwOiAv
ZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUu
Uzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcp
Cj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAg
IGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQw
MkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0
NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6
IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWlu
LmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTU5OiAvZGV2
L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxh
dGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNu
dGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29t
bW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2Ny
ZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAo
aW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcw
MDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTU4OiAvZGV2L3B0cy81Ngo9PTcw
MDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5
IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4
NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQw
NDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4g
KG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNTc6
IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0
ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0
NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0g
ICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4
NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAz
MzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNF
NDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1h
aW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNTY6IC9k
ZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1w
bGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChm
Y250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9j
b21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5f
Y3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lv
IChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09
NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNTU6IC9kZXYvcHRzLzU1Cj09
NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAg
YnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkg
MHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4
NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFp
biAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE1
NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBs
YXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5j
OjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9
PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkg
MHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0
MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE1Mzog
L2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4g
KGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVu
X2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFp
bl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVf
aW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikK
PT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE1MjogL2Rldi9wdHMvNTQK
PT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03
MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAg
ICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBi
eSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkg
MHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBt
YWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3Ig
MTUxOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVt
cGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0
LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAw
Nj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBi
eSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAw
eDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTUw
OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3Bl
biAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29w
ZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9t
YWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRs
ZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2
KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTQ5OiAvZGV2L3B0cy81
Mwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09
ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAg
IGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBi
eSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6
IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRv
ciAxNDg6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10
ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0
cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03
MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAg
IGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5
IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAw
eDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1h
aW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAx
NDc6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9v
cGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vf
b3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBk
b21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFu
ZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzox
NjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxNDY6IC9kZXYvcHRz
LzUxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2
PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0g
ICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAg
IGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFD
NDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlw
dG9yIDE0NTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxs
LXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChn
ZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9
PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0g
ICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAg
YnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDE0NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2hu
X29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFj
ZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6
IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBo
YW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5j
OjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDE0MzogL2Rldi9w
dHMvNTIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4
MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcw
MDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9
PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgMTQyOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2Nh
bGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQg
KGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEp
Cj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9
PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAg
ICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTQxOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRj
aG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJm
YWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIy
QTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6
IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWlu
LmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTQwOiAvZGV2
L3B0cy81MAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5T
OjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09
NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciAxMzk6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4X29wZW5w
dCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5
MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09
ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAg
ICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIx
QzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3Jp
cHRvciAxMzg6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2
dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRl
cmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5IDB4NDAz
MjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNF
NDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1h
aW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMzc6IC9k
ZXYvcHRzLzQ5Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRl
LlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5jOjExMSkK
PT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03
MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDEzNjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3Bl
bnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChnZXRwdC5j
OjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03
MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2
PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDEzNTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjogbGludXhf
ZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6IHhjX2lu
dGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAgYnkgMHg0
MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5IDB4NDA0
M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAo
bWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEzNDog
L2Rldi9wdHMvNDgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxh
dGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5LmM6MTEx
KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9
PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgMTMzOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBwb3NpeF9v
cGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQgKGdldHB0
LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5jOjk4KQo9
PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcw
MDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMTMyOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2OiBsaW51
eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUNDODogeGNf
aW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAgICBieSAw
eDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTMx
OiAvZGV2L3B0cy80Nwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1w
bGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5wdHkuYzox
MTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciAxMzA6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6IHBvc2l4
X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRwdCAoZ2V0
cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5LmM6OTgp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAxMjk6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0RjY6IGxp
bnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1Q0M4OiB4
Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09ICAgIGJ5
IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAw
eDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1h
aW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAx
Mjg6IC9kZXYvcHRzLzQ2Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRl
bXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3BlbnB0eS5j
OjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQw
OSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDEyNzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2MjogcG9z
aXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdldHB0IChn
ZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5wdHkuYzo5
OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5jOjQwOSkK
PT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NzQpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDEyNjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0RDRGNjog
bGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRFNDVDQzg6
IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2PT0gICAg
YnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09ICAgIGJ5
IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDog
bWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9y
IDEyNTogL2Rldi9wdHMvNDUKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwt
dGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChvcGVucHR5
LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6
NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3
NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgMTI0OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlCODYyOiBw
b3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDogZ2V0cHQg
KGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3BlbnB0eS5j
Ojk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5
KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMTIzOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1OTU0
NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRENEY2
OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUND
ODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9PSAg
ICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0gICAg
YnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0
OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0
b3IgMTIyOiAvZGV2L3B0cy80NAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2Fs
bC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9wZW5w
dHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8u
Yzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciAxMjE6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2
Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4NjI6
IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBnZXRw
dCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVucHR5
LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0
MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAxMjA6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAweDU5
NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRFNEQ0
RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0RTQ1
Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAwNj09
ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAg
ICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIx
QzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3Jp
cHRvciAxMTk6IC9kZXYvcHRzLzQzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
YWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAob3Bl
bnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDExODogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDExNzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDExNjogL2Rldi9wdHMvNDIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChv
cGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTE1OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlC
ODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDog
Z2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3Bl
bnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgMTE0OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0
RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4
NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcw
MDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgMTEzOiAvZGV2L3B0cy80MAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkg
KG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMTI6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5
OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVE
OiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChv
cGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMTE6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkg
MHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciAxMTA6IC9kZXYvcHRzLzQxCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwOTogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwODogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDEwNzogL2Rldi9wdHMvMzgKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVu
cHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTA2OiAvZGV2L3B0bXgKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0
eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgMTA1OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAg
IGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2
KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgMTA0OiAvZGV2L3B0cy8zOQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9w
ZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDM6IC9kZXYvcHRteAo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1
OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVu
cHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciAxMDI6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0g
ICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzox
NjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciAxMDE6IC9kZXYvcHRzLzM3Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDEwMDogL2Rldi9wdG14Cj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAw
eDU5OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9w
ZW5wdHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk5OiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2
PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0g
ICAgYnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09
ICAgIGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6
MTY2KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1
NSkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3Bl
biBmaWxlIGRlc2NyaXB0b3IgOTg6IC9kZXYvcHRzLzM1Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZD
RDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzog
b3BlbnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9j
cmVhdGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVh
dGVfcmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlv
LmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2
PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDk3OiAvZGV2L3B0bXgKPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4
NTk5QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3Bl
bnB0eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgOTY6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9
PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAg
ICBieSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0g
ICAgYnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzox
NjYpCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1
KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0g
ICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVu
IGZpbGUgZGVzY3JpcHRvciA5NTogL2Rldi9wdHMvMzYKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNE
OiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBv
cGVucHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgOTQ6IC9kZXYvcHRteAo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1
OTlCOUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVu
cHR5IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA5MzogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09
ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAg
IGJ5IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAg
ICBieSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2
NikKPT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUp
Cj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAg
ICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4g
ZmlsZSBkZXNjcmlwdG9yIDkyOiAvZGV2L3B0cy8zNAo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6
ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9w
ZW5wdHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3Jl
YXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRl
X3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5j
Ojg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09
IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA5MTogL2Rldi9wdG14Cj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5
OUI5RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5w
dHkgKG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDkwOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0g
ICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAg
YnkgMHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAg
IGJ5IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2
KQo9PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkK
PT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAg
IGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBm
aWxlIGRlc2NyaXB0b3IgODk6IC9kZXYvcHRzLzMzCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDog
Pz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3Bl
bnB0eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVh
dGVfdHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVf
cmluZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6
ODczKQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0g
Cj09NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDg4OiAvZGV2L3B0bXgKPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg1OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QjlFRDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0
eSAob3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgODc6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAg
ICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBi
eSAweDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAg
YnkgMHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYp
Cj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9
PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAg
YnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZp
bGUgZGVzY3JpcHRvciA4NjogL2Rldi9wdHMvMzIKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/
Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVu
cHR5IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgODU6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDU5OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlC
OUVEOiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5
IChvcGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA4NDogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAg
IGF0IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5
IDB4NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBi
eSAweDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikK
PT03MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09
NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBi
eSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmls
ZSBkZXNjcmlwdG9yIDgzOiAvZGV2L3B0cy8zMQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/
PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5w
dHkgKG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRl
X3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jp
bmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3
MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9
PTcwMDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA4MjogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NTk5Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5
RUQ6IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkg
KG9wZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDgxOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAg
YXQgMHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkg
MHg0RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5
IDB4NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9
PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxl
IGRlc2NyaXB0b3IgODA6IC9kZXYvcHRzLzMwCj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/
IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0
eSAob3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVf
dHR5IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmlu
ZyAoaW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODcz
KQo9PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09
NzAwNj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDc5OiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1
OTlCODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlF
RDogZ2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAo
b3BlbnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgNzg6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBh
dCAweDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAw
eDRFNEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkg
MHg0RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09
NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcw
MDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkg
MHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUg
ZGVzY3JpcHRvciA3NzogL2Rldi9wdHMvMjkKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8g
KHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5
IChvcGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgNzY6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5
OUI4NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVE
OiBnZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChv
cGVucHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA3NTogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0
IDB4NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4
NEU0RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAw
eDRFNDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03
MDA2PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAw
Nj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAw
eDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBk
ZXNjcmlwdG9yIDc0OiAvZGV2L3B0cy8yNwo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAo
c3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkg
KG9wZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0
eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3Jpbmcg
KGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykK
PT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcw
MDY9PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA3MzogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5
Qjg2MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6
IGdldHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9w
ZW5wdHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcyOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQg
MHg1OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0
RTRENEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4
NEU0NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcw
MDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2
PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4
NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRl
c2NyaXB0b3IgNzE6IC9kZXYvcHRzLzI4Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChz
eXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDU1Qzogb3BlbnB0eSAo
b3BlbnB0eS5jOjExMSkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5
IChpby5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAo
aW8uYzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9
PTcwMDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAw
Nj09IE9wZW4gZmlsZSBkZXNjcmlwdG9yIDcwOiAvZGV2L3B0bXgKPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1OTlC
ODYyOiBwb3NpeF9vcGVucHQgKGdldHB0LmM6NDcpCj09NzAwNj09ICAgIGJ5IDB4NTk5QjlFRDog
Z2V0cHQgKGdldHB0LmM6OTEpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ2QTogb3BlbnB0eSAob3Bl
bnB0eS5jOjk4KQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlv
LmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5j
OjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAw
Nj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0g
T3BlbiBmaWxlIGRlc2NyaXB0b3IgNjk6IC9kZXYveGVuL2V2dGNobgo9PTcwMDY9PSAgICBhdCAw
eDU5NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDRF
NEQ0RjY6IGxpbnV4X2V2dGNobl9vcGVuIChmY250bDIuaDo1NCkKPT03MDA2PT0gICAgYnkgMHg0
RTQ1Q0M4OiB4Y19pbnRlcmZhY2Vfb3Blbl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNjYpCj09NzAw
Nj09ICAgIGJ5IDB4NDAzMjJBOiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9
PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0
MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBPcGVuIGZpbGUgZGVz
Y3JpcHRvciA2ODogL2Rldi9wdHMvMjYKPT03MDA2PT0gICAgYXQgMHg1OTU0NkNEOiA/Pz8gKHN5
c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg1NDY0NTVDOiBvcGVucHR5IChv
cGVucHR5LmM6MTExKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gT3BlbiBmaWxlIGRlc2NyaXB0b3IgNjc6IC9kZXYvcHRteAo9PTcwMDY9PSAgICBhdCAweDU5
NTQ2Q0Q6ID8/PyAoc3lzY2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU5OUI4
NjI6IHBvc2l4X29wZW5wdCAoZ2V0cHQuYzo0NykKPT03MDA2PT0gICAgYnkgMHg1OTlCOUVEOiBn
ZXRwdCAoZ2V0cHQuYzo5MSkKPT03MDA2PT0gICAgYnkgMHg1NDY0NDZBOiBvcGVucHR5IChvcGVu
cHR5LmM6OTgpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8u
Yzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6
NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2
PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSBP
cGVuIGZpbGUgZGVzY3JpcHRvciA2NjogL2Rldi94ZW4vZXZ0Y2huCj09NzAwNj09ICAgIGF0IDB4
NTk1NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NEU0
RDRGNjogbGludXhfZXZ0Y2huX29wZW4gKGZjbnRsMi5oOjU0KQo9PTcwMDY9PSAgICBieSAweDRF
NDVDQzg6IHhjX2ludGVyZmFjZV9vcGVuX2NvbW1vbiAoeGNfcHJpdmF0ZS5jOjE2NikKPT03MDA2
PT0gICAgYnkgMHg0MDMyMkE6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1NTUpCj09NzAwNj09
ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9PSAgICBieSAweDQw
MjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9wZW4gZmlsZSBkZXNj
cmlwdG9yIDY1OiAvZGV2L3B0cy8yNQo9PTcwMDY9PSAgICBhdCAweDU5NTQ2Q0Q6ID8/PyAoc3lz
Y2FsbC10ZW1wbGF0ZS5TOjgyKQo9PTcwMDY9PSAgICBieSAweDU0NjQ1NUM6IG9wZW5wdHkgKG9w
ZW5wdHkuYzoxMTEpCj09NzAwNj09ICAgIGJ5IDB4NDAyQUQwOiBkb21haW5fY3JlYXRlX3R0eSAo
aW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2OiBkb21haW5fY3JlYXRlX3JpbmcgKGlv
LmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxlX2lvIChpby5jOjg3MykKPT03
MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYpCj09NzAwNj09IAo9PTcwMDY9
PSBPcGVuIGZpbGUgZGVzY3JpcHRvciA2NDogL2Rldi9wdG14Cj09NzAwNj09ICAgIGF0IDB4NTk1
NDZDRDogPz8/IChzeXNjYWxsLXRlbXBsYXRlLlM6ODIpCj09NzAwNj09ICAgIGJ5IDB4NTk5Qjg2
MjogcG9zaXhfb3BlbnB0IChnZXRwdC5jOjQ3KQo9PTcwMDY9PSAgICBieSAweDU5OUI5RUQ6IGdl
dHB0IChnZXRwdC5jOjkxKQo9PTcwMDY9PSAgICBieSAweDU0NjQ0NkE6IG9wZW5wdHkgKG9wZW5w
dHkuYzo5OCkKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChpby5j
OjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8uYzo1
NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcwMDY9
PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09IE9w
ZW4gZmlsZSBkZXNjcmlwdG9yIDYzOiAvZGV2L3hlbi9ldnRjaG4KPT03MDA2PT0gICAgYXQgMHg1
OTU0NkNEOiA/Pz8gKHN5c2NhbGwtdGVtcGxhdGUuUzo4MikKPT03MDA2PT0gICAgYnkgMHg0RTRE
NEY2OiBsaW51eF9ldnRjaG5fb3BlbiAoZmNudGwyLmg6NTQpCj09NzAwNj09ICAgIGJ5IDB4NEU0
NUNDODogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTY2KQo9PTcwMDY9
PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03MDA2PT0g
ICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAy
MUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gT3BlbiBmaWxlIGRlc2Ny
aXB0b3IgNjI6IC9kZXYvcHRzLzI0Cj09NzAwNj09ICAgIGF0IDB4NTk1NDZDRDogPz8/IChzeXNj
all-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 61: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 60: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 59: /dev/pts/23
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 58: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 57: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 56: /dev/pts/22
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 55: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 54: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 53: /dev/pts/21
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 52: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 51: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 50: /dev/pts/19
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 49: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 48: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 47: /dev/pts/20
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 46: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 45: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 44: /dev/pts/18
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 43: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 42: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 41: /dev/pts/16
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 40: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 39: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 38: /dev/pts/17
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 37: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 36: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 35: /dev/pts/14
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 34: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 33: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 32: /dev/pts/15
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 31: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 30: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 29: /dev/pts/8
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 28: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 27: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 26: /dev/pts/13
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 25: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 24: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 23: /dev/pts/12
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 22: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 21: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 20: /dev/pts/11
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 19: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 18: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 17: /dev/pts/10
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 16: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 15: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 14: /dev/pts/9
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 13: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 12: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 11: /dev/pts/1
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x546455C: openpty (openpty.c:111)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 10: /dev/ptmx
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x599B862: posix_openpt (getpt.c:47)
==7006==    by 0x599B9ED: getpt (getpt.c:91)
==7006==    by 0x546446A: openpty (openpty.c:98)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 9: /dev/xen/evtchn
==7006==    at 0x59546CD: ??? (syscall-template.S:82)
==7006==    by 0x4E4D4F6: linux_evtchn_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x40322A: domain_create_ring (io.c:555)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 8:
==7006==    at 0x59550A7: pipe (syscall-template.S:82)
==7006==    by 0x525BAEB: xs_fileno (in /usr/lib/libxenstore.so.3.0.0)
==7006==    by 0x403846: handle_io (io.c:963)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 7:
==7006==    at 0x59550A7: pipe (syscall-template.S:82)
==7006==    by 0x525BAEB: xs_fileno (in /usr/lib/libxenstore.so.3.0.0)
==7006==    by 0x403846: handle_io (io.c:963)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== Open file descriptor 6: /proc/xen/privcmd
==7006==    at 0x59546B0: __open_nocancel (syscall-template.S:82)
==7006==    by 0x4E4DBB8: linux_privcmd_open (fcntl2.h:54)
==7006==    by 0x4E45CC8: xc_interface_open_common (xc_private.c:166)
==7006==    by 0x404860: xen_setup (utils.c:119)
==7006==    by 0x4021AB: main (main.c:161)
==7006==
==7006== Open AF_UNIX socket 5: <unknown>
==7006==    at 0x5962F57: socket (syscall-template.S:82)
==7006==    by 0x525B178: ??? (in /usr/lib/libxenstore.so.3.0.0)
==7006==    by 0x525BB4F: xs_open (in /usr/lib/libxenstore.so.3.0.0)
==7006==    by 0x404845: xen_setup (utils.c:112)
==7006==    by 0x4021AB: main (main.c:161)
==7006==
==7006== Open file descriptor 4: /var/run/xenconsoled.pid
==7006==    at 0x59546B0: __open_nocancel (syscall-template.S:82)
==7006==    by 0x40475A: daemonize (fcntl2.h:54)
==7006==    by 0x402205: main (main.c:158)
==7006==
==7006== Open file descriptor 2: /dev/null
==7006==    at 0x5955047: dup2 (syscall-template.S:82)
==7006==    by 0x40471D: daemonize (utils.c:81)
==7006==    by 0x402205: main (main.c:158)
==7006==
==7006== Open file descriptor 1: /dev/null
==7006==    at 0x5955047: dup2 (syscall-template.S:82)
==7006==    by 0x40471D: daemonize (utils.c:81)
==7006==    by 0x402205: main (main.c:158)
==7006==
==7006== Open file descriptor 0: /dev/null
==7006==    at 0x5955047: dup2 (syscall-template.S:82)
==7006==    by 0x40471D: daemonize (utils.c:81)
==7006==    by 0x402205: main (main.c:158)
==7006==
==7006== Open file descriptor 3: /home/paulh/Desktop/downloaded/xen-4.1.2/tools/console/valgrind_output.txt
==7006==    <inherited from parent>
==7006==
==7006==
==7006== HEAP SUMMARY:
==7006==     in use at exit: 484,878 bytes in 1,383 blocks
==7006==   total heap usage: 33,268 allocs, 31,885 frees, 1,786,791 bytes allocated
==7006==
==7006== 16 bytes in 1 blocks are indirectly lost in loss record 1 of 24
==7006==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7006==    by 0x5974459: __nss_lookup_function (nsswitch.c:456)
==7006==    by 0x6A4D184: ???
==7006==    by 0x592B46C: getgrnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:256)
==7006==    by 0x599BE51: grantpt (grantpt.c:153)
==7006==    by 0x5464480: openpty (openpty.c:102)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== 16 bytes in 1 blocks are indirectly lost in loss record 2 of 24
==7006==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7006==    by 0x5974459: __nss_lookup_function (nsswitch.c:456)
==7006==    by 0x6A4D19E: ???
==7006==    by 0x592B46C: getgrnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:256)
==7006==    by 0x599BE51: grantpt (grantpt.c:153)
==7006==    by 0x5464480: openpty (openpty.c:102)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== 16 bytes in 1 blocks are indirectly lost in loss record 3 of 24
==7006==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7006==    by 0x5974459: __nss_lookup_function (nsswitch.c:456)
==7006==    by 0x6A4D1B8: ???
==7006==    by 0x592B46C: getgrnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:256)
==7006==    by 0x599BE51: grantpt (grantpt.c:153)
==7006==    by 0x5464480: openpty (openpty.c:102)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== 16 bytes in 1 blocks are indirectly lost in loss record 4 of 24
==7006==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7006==    by 0x5974459: __nss_lookup_function (nsswitch.c:456)
==7006==    by 0x6A4D1D2: ???
==7006==    by 0x592B46C: getgrnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:256)
==7006==    by 0x599BE51: grantpt (grantpt.c:153)
==7006==    by 0x5464480: openpty (openpty.c:102)
==7006==    by 0x402AD0: domain_create_tty (io.c:409)
==7006==    by 0x403346: domain_create_ring (io.c:574)
==7006==    by 0x4043E4: handle_io (io.c:873)
==7006==    by 0x4021C4: main (main.c:166)
==7006==
==7006== 16 bytes in 1 blocks are indirectly lost in loss record 5 of 24
==7006==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==7006==    by 0x5974459: __nss_lookup_function (nsswitch.c:456)
==7006==    by 0x6A4D1EC: ???
==7006==    by 0x592B46C: getgrnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:256)
Cj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2
PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBi
eSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAw
eDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0
MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWlu
IChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gMjEgYnl0ZXMgaW4gMSBibG9ja3MgYXJl
IHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCA2IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4
NEMyQjZDRDogbWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2st
YW1kNjQtbGludXguc28pCj09NzAwNj09ICAgIGJ5IDB4NDAyMjIwOiBtYWluIChtYWluLmM6MTQ0
KQo9PTcwMDY9PSAKPT03MDA2PT0gMjUgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIHN0aWxsIHJlYWNo
YWJsZSBpbiBsb3NzIHJlY29yZCA3IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjZDRDogbWFs
bG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1kNjQtbGludXgu
c28pCj09NzAwNj09ICAgIGJ5IDB4NThGNkQ3MTogc3RyZHVwIChzdHJkdXAuYzo0MykKPT03MDA2
PT0gICAgYnkgMHg0MDFGOUI6IG1haW4gKG1haW4uYzoxMTMpCj09NzAwNj09IAo9PTcwMDY9PSAz
MiBieXRlcyBpbiAxIGJsb2NrcyBhcmUgaW5kaXJlY3RseSBsb3N0IGluIGxvc3MgcmVjb3JkIDgg
b2YgMjQKPT03MDA2PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3ZhbGdy
aW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg1
OTVFRTBBOiB0c2VhcmNoICh0c2VhcmNoLmM6MjgxKQo9PTcwMDY9PSAgICBieSAweDU5NzQzRTk6
IF9fbnNzX2xvb2t1cF9mdW5jdGlvbiAobnNzd2l0Y2guYzo0MzkpCj09NzAwNj09ICAgIGJ5IDB4
NkE0RDE4NDogPz8/Cj09NzAwNj09ICAgIGJ5IDB4NTkyQjQ2QzogZ2V0Z3JuYW1fckBAR0xJQkNf
Mi4yLjUgKGdldFhYYnlZWV9yLmM6MjU2KQo9PTcwMDY9PSAgICBieSAweDU5OUJFNTE6IGdyYW50
cHQgKGdyYW50cHQuYzoxNTMpCj09NzAwNj09ICAgIGJ5IDB4NTQ2NDQ4MDogb3BlbnB0eSAob3Bl
bnB0eS5jOjEwMikKPT03MDA2PT0gICAgYnkgMHg0MDJBRDA6IGRvbWFpbl9jcmVhdGVfdHR5IChp
by5jOjQwOSkKPT03MDA2PT0gICAgYnkgMHg0MDMzNDY6IGRvbWFpbl9jcmVhdGVfcmluZyAoaW8u
Yzo1NzQpCj09NzAwNj09ICAgIGJ5IDB4NDA0M0U0OiBoYW5kbGVfaW8gKGlvLmM6ODczKQo9PTcw
MDY9PSAgICBieSAweDQwMjFDNDogbWFpbiAobWFpbi5jOjE2NikKPT03MDA2PT0gCj09NzAwNj09
IDMyIGJ5dGVzIGluIDEgYmxvY2tzIGFyZSBpbmRpcmVjdGx5IGxvc3QgaW4gbG9zcyByZWNvcmQg
OSBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFs
Z3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAw
eDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5IDB4NTk3NDNF
OTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2PT0gICAgYnkg
MHg2QTREMTlFOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBHTElC
Q18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTogZ3Jh
bnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5IChv
cGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90dHkg
KGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5nIChp
by5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09
NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2
PT0gMzIgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJlY29y
ZCAxMCBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIv
dmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBi
eSAweDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5IDB4NTk3
NDNFOTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2PT0gICAg
YnkgMHg2QTREMUI4OiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9yQEBH
TElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1MTog
Z3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVucHR5
IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0ZV90
dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9yaW5n
IChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMp
Cj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03
MDA2PT0gMzIgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3NzIHJl
Y29yZCAxMSBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9s
aWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAg
ICBieSAweDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5IDB4
NTk3NDNFOTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2PT0g
ICAgYnkgMHg2QTREMUQyOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5hbV9y
QEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5QkU1
MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBvcGVu
cHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2NyZWF0
ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0ZV9y
aW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4
NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAK
PT03MDA2PT0gMzIgYnl0ZXMgaW4gMSBibG9ja3MgYXJlIGluZGlyZWN0bHkgbG9zdCBpbiBsb3Nz
IHJlY29yZCAxMiBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vz
ci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9
PSAgICBieSAweDU5NUVFMEE6IHRzZWFyY2ggKHRzZWFyY2guYzoyODEpCj09NzAwNj09ICAgIGJ5
IDB4NTk3NDNFOTogX19uc3NfbG9va3VwX2Z1bmN0aW9uIChuc3N3aXRjaC5jOjQzOSkKPT03MDA2
PT0gICAgYnkgMHg2QTREMUVDOiA/Pz8KPT03MDA2PT0gICAgYnkgMHg1OTJCNDZDOiBnZXRncm5h
bV9yQEBHTElCQ18yLjIuNSAoZ2V0WFhieVlZX3IuYzoyNTYpCj09NzAwNj09ICAgIGJ5IDB4NTk5
QkU1MTogZ3JhbnRwdCAoZ3JhbnRwdC5jOjE1MykKPT03MDA2PT0gICAgYnkgMHg1NDY0NDgwOiBv
cGVucHR5IChvcGVucHR5LmM6MTAyKQo9PTcwMDY9PSAgICBieSAweDQwMkFEMDogZG9tYWluX2Ny
ZWF0ZV90dHkgKGlvLmM6NDA5KQo9PTcwMDY9PSAgICBieSAweDQwMzM0NjogZG9tYWluX2NyZWF0
ZV9yaW5nIChpby5jOjU3NCkKPT03MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8u
Yzo4NzMpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9
PSAKPT03MDA2PT0gNDggYnl0ZXMgaW4gMSBibG9ja3MgYXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBs
b3NzIHJlY29yZCAxMyBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4g
L3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcw
MDY9PSAgICBieSAweDRFNEM4RkY6IHh0bF9jcmVhdGVsb2dnZXJfc3RkaW9zdHJlYW0gKHh0bF9s
b2dnZXJfc3RkaW8uYzoxNTYpCj09NzAwNj09ICAgIGJ5IDB4NEU0NUQyNTogeGNfaW50ZXJmYWNl
X29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTQ1KQo9PTcwMDY9PSAgICBieSAweDQwNDg2MDog
eGVuX3NldHVwICh1dGlscy5jOjExOSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1haW4gKG1h
aW4uYzoxNjEpCj09NzAwNj09IAo9PTcwMDY9PSAxNjQgYnl0ZXMgaW4gNCBibG9ja3MgYXJlIHN0
aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAxNCBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRD
MkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFt
ZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDUyNUIzM0U6ID8/PyAoaW4gL3Vzci9saWIv
bGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4NTI1QkEzMDogPz8/IChpbiAv
dXNyL2xpYi9saWJ4ZW5zdG9yZS5zby4zLjAuMCkKPT03MDA2PT0gICAgYnkgMHg1QzM0RTk5OiBz
dGFydF90aHJlYWQgKHB0aHJlYWRfY3JlYXRlLmM6MzA4KQo9PTcwMDY9PSAKPT03MDA2PT0gMjAw
IGJ5dGVzIGluIDUgYmxvY2tzIGFyZSBzdGlsbCByZWFjaGFibGUgaW4gbG9zcyByZWNvcmQgMTUg
b2YgMjQKPT03MDA2PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3ZhbGdy
aW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg1
MjVCMkI3OiA/Pz8gKGluIC91c3IvbGliL2xpYnhlbnN0b3JlLnNvLjMuMC4wKQo9PTcwMDY9PSAg
ICBieSAweDUyNUJBMzA6ID8/PyAoaW4gL3Vzci9saWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09
NzAwNj09ICAgIGJ5IDB4NUMzNEU5OTogc3RhcnRfdGhyZWFkIChwdGhyZWFkX2NyZWF0ZS5jOjMw
OCkKPT03MDA2PT0gCj09NzAwNj09IDI3MiBieXRlcyBpbiAxIGJsb2NrcyBhcmUgcG9zc2libHkg
bG9zdCBpbiBsb3NzIHJlY29yZCAxNiBvZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMjlEQjQ6IGNh
bGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4
LnNvKQo9PTcwMDY9PSAgICBieSAweDQwMTIwNzQ6IF9kbF9hbGxvY2F0ZV90bHMgKGRsLXRscy5j
OjI5NykKPT03MDA2PT0gICAgYnkgMHg1QzM1QUJDOiBwdGhyZWFkX2NyZWF0ZUBAR0xJQkNfMi4y
LjUgKGFsbG9jYXRlc3RhY2suYzo1NzEpCj09NzAwNj09ICAgIGJ5IDB4NTI1QzE3QjogeHNfd2F0
Y2ggKGluIC91c3IvbGliL2xpYnhlbnN0b3JlLnNvLjMuMC4wKQo9PTcwMDY9PSAgICBieSAweDQw
NDg4NjogeGVuX3NldHVwICh1dGlscy5jOjEyNSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1h
aW4gKG1haW4uYzoxNjEpCj09NzAwNj09IAo9PTcwMDY9PSAyODAgYnl0ZXMgaW4gMSBibG9ja3Mg
YXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAxNyBvZiAyNAo9PTcwMDY9PSAgICBh
dCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNo
ZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDUyNUIwOTA6ID8/PyAoaW4gL3Vz
ci9saWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4NTI1QkI0RjogeHNf
b3BlbiAoaW4gL3Vzci9saWIvbGlieGVuc3RvcmUuc28uMy4wLjApCj09NzAwNj09ICAgIGJ5IDB4
NDA0ODQ1OiB4ZW5fc2V0dXAgKHV0aWxzLmM6MTEyKQo9PTcwMDY9PSAgICBieSAweDQwMjFBQjog
bWFpbiAobWFpbi5jOjE2MSkKPT03MDA2PT0gCj09NzAwNj09IDMwMCAoNjAgZGlyZWN0LCAyNDAg
aW5kaXJlY3QpIGJ5dGVzIGluIDEgYmxvY2tzIGFyZSBkZWZpbml0ZWx5IGxvc3QgaW4gbG9zcyBy
ZWNvcmQgMTggb2YgMjQKPT03MDA2PT0gICAgYXQgMHg0QzJCNkNEOiBtYWxsb2MgKGluIC91c3Iv
bGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2NC1saW51eC5zbykKPT03MDA2PT0g
ICAgYnkgMHg1OTczNTk0OiBuc3NfcGFyc2Vfc2VydmljZV9saXN0IChuc3N3aXRjaC5jOjY3OCkK
PT03MDA2PT0gICAgYnkgMHg1OTc0MDU1OiBfX25zc19kYXRhYmFzZV9sb29rdXAgKG5zc3dpdGNo
LmM6MTc1KQo9PTcwMDY9PSAgICBieSAweDZBNEQxNjk6ID8/Pwo9PTcwMDY9PSAgICBieSAweDU5
MkI0NkM6IGdldGdybmFtX3JAQEdMSUJDXzIuMi41IChnZXRYWGJ5WVlfci5jOjI1NikKPT03MDA2
PT0gICAgYnkgMHg1OTlCRTUxOiBncmFudHB0IChncmFudHB0LmM6MTUzKQo9PTcwMDY9PSAgICBi
eSAweDU0NjQ0ODA6IG9wZW5wdHkgKG9wZW5wdHkuYzoxMDIpCj09NzAwNj09ICAgIGJ5IDB4NDAy
QUQwOiBkb21haW5fY3JlYXRlX3R0eSAoaW8uYzo0MDkpCj09NzAwNj09ICAgIGJ5IDB4NDAzMzQ2
OiBkb21haW5fY3JlYXRlX3JpbmcgKGlvLmM6NTc0KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDog
aGFuZGxlX2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4u
YzoxNjYpCj09NzAwNj09IAo9PTcwMDY9PSAxLDIwOCBieXRlcyBpbiAxIGJsb2NrcyBhcmUgc3Rp
bGwgcmVhY2hhYmxlIGluIGxvc3MgcmVjb3JkIDE5IG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMy
QjZDRDogbWFsbG9jIChpbiAvdXNyL2xpYi92YWxncmluZC92Z3ByZWxvYWRfbWVtY2hlY2stYW1k
NjQtbGludXguc28pCj09NzAwNj09ICAgIGJ5IDB4NEU0NUMyQTogeGNfaW50ZXJmYWNlX29wZW5f
Y29tbW9uICh4Y19wcml2YXRlLmM6MTUwKQo9PTcwMDY9PSAgICBieSAweDQwNDg2MDogeGVuX3Nl
dHVwICh1dGlscy5jOjExOSkKPT03MDA2PT0gICAgYnkgMHg0MDIxQUI6IG1haW4gKG1haW4uYzox
NjEpCj09NzAwNj09IAo9PTcwMDY9PSA0LDA5NiBieXRlcyBpbiAxIGJsb2NrcyBhcmUgc3RpbGwg
cmVhY2hhYmxlIGluIGxvc3MgcmVjb3JkIDIwIG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyOUJF
ODogbWVtYWxpZ24gKGluIC91c3IvbGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1hbWQ2
NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg0QzI5Qzk3OiBwb3NpeF9tZW1hbGlnbiAoaW4g
L3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcw
MDY9PSAgICBieSAweDRFNEJDQ0I6IHhjX19oeXBlcmNhbGxfYnVmZmVyX2FsbG9jX3BhZ2VzICh4
Y19oY2FsbF9idWYuYzoxNDMpCj09NzAwNj09ICAgIGJ5IDB4NEU0QkU2MjogeGNfX2h5cGVyY2Fs
bF9idWZmZXJfYWxsb2MgKHhjX2hjYWxsX2J1Zi5jOjE4OSkKPT03MDA2PT0gICAgYnkgMHg0RTRC
RUUyOiB4Y19faHlwZXJjYWxsX2JvdW5jZV9wcmUgKHhjX2hjYWxsX2J1Zi5jOjIzMSkKPT03MDA2
PT0gICAgYnkgMHg0RTNCQjY3OiB4Y19kb21haW5fZ2V0aW5mbyAoeGNfcHJpdmF0ZS5oOjI0MSkK
PT03MDA2PT0gICAgYnkgMHg0MDM0RTU6IGVudW1fZG9tYWlucyAoaW8uYzo3MzYpCj09NzAwNj09
ICAgIGJ5IDB4NDAyMUJBOiBtYWluIChtYWluLmM6MTY0KQo9PTcwMDY9PSAKPT03MDA2PT0gOSww
ODggYnl0ZXMgaW4gMzM5IGJsb2NrcyBhcmUgc3RpbGwgcmVhY2hhYmxlIGluIGxvc3MgcmVjb3Jk
IDIxIG9mIDI0Cj09NzAwNj09ICAgIGF0IDB4NEMyQjdCMjogcmVhbGxvYyAoaW4gL3Vzci9saWIv
dmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBi
eSAweDQwMzU4OTogZW51bV9kb21haW5zIChpby5jOjYzMykKPT03MDA2PT0gICAgYnkgMHg0MDQy
QTg6IGhhbmRsZV9pbyAoaW8uYzo4NjcpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluICht
YWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gMTYsMjcyIGJ5dGVzIGluIDMzOSBibG9ja3Mg
YXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAyMiBvZiAyNAo9PTcwMDY9PSAgICBh
dCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3JpbmQvdmdwcmVsb2FkX21lbWNo
ZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDRFNEM4RkY6IHh0bF9jcmVhdGVs
b2dnZXJfc3RkaW9zdHJlYW0gKHh0bF9sb2dnZXJfc3RkaW8uYzoxNTYpCj09NzAwNj09ICAgIGJ5
IDB4NEU0NUQyNTogeGNfaW50ZXJmYWNlX29wZW5fY29tbW9uICh4Y19wcml2YXRlLmM6MTQ1KQo9
PTcwMDY9PSAgICBieSAweDQwMzIyQTogZG9tYWluX2NyZWF0ZV9yaW5nIChpby5jOjU1NSkKPT03
MDA2PT0gICAgYnkgMHg0MDQzRTQ6IGhhbmRsZV9pbyAoaW8uYzo4NzMpCj09NzAwNj09ICAgIGJ5
IDB4NDAyMUM0OiBtYWluIChtYWluLmM6MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gNDMsMzkyIGJ5
dGVzIGluIDMzOSBibG9ja3MgYXJlIHN0aWxsIHJlYWNoYWJsZSBpbiBsb3NzIHJlY29yZCAyMyBv
ZiAyNAo9PTcwMDY9PSAgICBhdCAweDRDMkI2Q0Q6IG1hbGxvYyAoaW4gL3Vzci9saWIvdmFsZ3Jp
bmQvdmdwcmVsb2FkX21lbWNoZWNrLWFtZDY0LWxpbnV4LnNvKQo9PTcwMDY9PSAgICBieSAweDQw
MzU1MTogZW51bV9kb21haW5zIChpby5jOjYyMykKPT03MDA2PT0gICAgYnkgMHg0MDQyQTg6IGhh
bmRsZV9pbyAoaW8uYzo4NjcpCj09NzAwNj09ICAgIGJ5IDB4NDAyMUM0OiBtYWluIChtYWluLmM6
MTY2KQo9PTcwMDY9PSAKPT03MDA2PT0gNDA5LDUxMiBieXRlcyBpbiAzMzkgYmxvY2tzIGFyZSBz
dGlsbCByZWFjaGFibGUgaW4gbG9zcyByZWNvcmQgMjQgb2YgMjQKPT03MDA2PT0gICAgYXQgMHg0
QzJCNkNEOiBtYWxsb2MgKGluIC91c3IvbGliL3ZhbGdyaW5kL3ZncHJlbG9hZF9tZW1jaGVjay1h
bWQ2NC1saW51eC5zbykKPT03MDA2PT0gICAgYnkgMHg0RTQ1QzJBOiB4Y19pbnRlcmZhY2Vfb3Bl
bl9jb21tb24gKHhjX3ByaXZhdGUuYzoxNTApCj09NzAwNj09ICAgIGJ5IDB4NDAzMjJBOiBkb21h
aW5fY3JlYXRlX3JpbmcgKGlvLmM6NTU1KQo9PTcwMDY9PSAgICBieSAweDQwNDNFNDogaGFuZGxl
X2lvIChpby5jOjg3MykKPT03MDA2PT0gICAgYnkgMHg0MDIxQzQ6IG1haW4gKG1haW4uYzoxNjYp
Cj09NzAwNj09IAo9PTcwMDY9PSBMRUFLIFNVTU1BUlk6Cj09NzAwNj09ICAgIGRlZmluaXRlbHkg
bG9zdDogNjAgYnl0ZXMgaW4gMSBibG9ja3MKPT03MDA2PT0gICAgaW5kaXJlY3RseSBsb3N0OiAy
NDAgYnl0ZXMgaW4gMTAgYmxvY2tzCj09NzAwNj09ICAgICAgcG9zc2libHkgbG9zdDogMjcyIGJ5
dGVzIGluIDEgYmxvY2tzCj09NzAwNj09ICAgIHN0aWxsIHJlYWNoYWJsZTogNDg0LDMwNiBieXRl
cyBpbiAxLDM3MSBibG9ja3MKPT03MDA2PT0gICAgICAgICBzdXBwcmVzc2VkOiAwIGJ5dGVzIGlu
IDAgYmxvY2tzCj09NzAwNj09IAo9PTcwMDY9PSBGb3IgY291bnRzIG9mIGRldGVjdGVkIGFuZCBz
dXBwcmVzc2VkIGVycm9ycywgcmVydW4gd2l0aDogLXYKPT03MDA2PT0gVXNlIC0tdHJhY2stb3Jp
Z2lucz15ZXMgdG8gc2VlIHdoZXJlIHVuaW5pdGlhbGlzZWQgdmFsdWVzIGNvbWUgZnJvbQo9PTcw
MDY9PSBFUlJPUiBTVU1NQVJZOiAxMTUyNjIgZXJyb3JzIGZyb20gNCBjb250ZXh0cyAoc3VwcHJl
c3NlZDogMiBmcm9tIDIpCg==
--e89a8fb203a073963004d0d26a71
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--e89a8fb203a073963004d0d26a71--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtq-0003sY-7f; Tue, 18 Dec 2012 13:07:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <coley@rcf-smtp.mitre.org>)
	id 1TjGsK-0000RU-QW; Thu, 13 Dec 2012 22:03:28 +0000
Received: from [85.158.139.211:36439] by server-15.bemta-5.messagelabs.com id
	F3/0C-20523-FA05AC05; Thu, 13 Dec 2012 22:03:27 +0000
X-Env-Sender: coley@rcf-smtp.mitre.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355436207!17844105!1
X-Originating-IP: [198.49.146.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk4LjQ5LjE0Ni43NyA9PiA2MTM2MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9882 invoked from network); 13 Dec 2012 22:03:27 -0000
Received: from smtpksrv1.mitre.org (HELO smtpksrv1.mitre.org) (198.49.146.77)
	by server-9.tower-206.messagelabs.com with SMTP;
	13 Dec 2012 22:03:27 -0000
Received: from smtpksrv1.mitre.org (localhost.localdomain [127.0.0.1])
	by localhost (Postfix) with SMTP id 21C641F02BB;
	Thu, 13 Dec 2012 17:03:26 -0500 (EST)
Received: from linus.mitre.org (linus.mitre.org [129.83.10.1])
	by smtpksrv1.mitre.org (Postfix) with ESMTP id F03AF1F0281;
	Thu, 13 Dec 2012 17:03:25 -0500 (EST)
Received: from faron.mitre.org (faron.mitre.org [129.83.10.2])
	by linus.mitre.org (8.14.4+Sun/8.14.4) with ESMTP id qBDM3P7B026619;
	Thu, 13 Dec 2012 17:03:25 -0500 (EST)
Received: from localhost (coley@localhost)
	by faron.mitre.org (8.14.4+Sun/8.14.4/Submit) with ESMTP id
	qBDM3NVR022928; Thu, 13 Dec 2012 17:03:24 -0500 (EST)
X-Authentication-Warning: faron.mitre.org: coley owned process doing -bs
Date: Thu, 13 Dec 2012 17:03:23 -0500 (EST)
From: "Steven M. Christey" <coley@rcf-smtp.mitre.org>
X-X-Sender: coley@faron.mitre.org
To: oss-security@lists.openwall.com
In-Reply-To: <E1TfaBE-00066j-ID@xenbits.xen.org>
Message-ID: <Pine.GSO.4.64.1212131657560.22809@faron.mitre.org>
References: <E1TfaBE-00066j-ID@xenbits.xen.org>
MIME-Version: 1.0
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Cc: xen-users@lists.xen.org, "Xen.org security team" <security@xen.org>,
	xen-announce@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [oss-security] Xen Security Advisory 27
 (CVE-2012-5511) - several HVM operations do not validate the range of their
 inputs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


All,

This advisory required two different CVE IDs - not one - because the 
stack-based buffer overflow was fixed in a different version than the 
other issues.  CVE assigns distinct IDs when bugs are not present in the 
exact same set of versions.

CVE-2012-5511 - use this, but only for the stack-based buffer overflow 
that was fixed in 4.2.

CVE-2012-6333 - new ID for the other "large input" validation issues that 
lead to the physical CPU hang, which were NOT fixed in 4.2.


- Steve

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtt-0003tm-MJ; Tue, 18 Dec 2012 13:08:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <1012503222@qq.com>) id 1TkV2V-0005xi-FM
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 07:23:03 +0000
Received: from [85.158.139.83:32043] by server-10.bemta-5.messagelabs.com id
	CF/4A-13383-658CEC05; Mon, 17 Dec 2012 07:23:02 +0000
X-Env-Sender: 1012503222@qq.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355728980!26252495!1
X-Originating-IP: [64.71.138.44]
X-SpamReason: No, hits=2.7 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2NC43MS4xMzguNDQgPT4gMTYwOTE5\n,sa_preprocessor: 
	QmFkIElQOiA2NC43MS4xMzguNDQgPT4gMTYwOTE5\n,FROM_STARTS_WITH_NUMS,
	HTML_MESSAGE, HTML_OBFUSCATE_05_10, MIME_BASE64_TEXT,
	MIME_BOUND_NEXTPART, received_headers: No Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5342 invoked from network); 17 Dec 2012 07:23:01 -0000
Received: from smtpbg55.qq.com (HELO smtpbg55.qq.com) (64.71.138.44)
	by server-6.tower-182.messagelabs.com with SMTP;
	17 Dec 2012 07:23:01 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s0907;
	t=1355728979; bh=4iWxwPrCiI7Nv9zjLElv0BR4hVc0QkpbGAWemm7iqKc=;
	h=X-QQ-SSF:X-HAS-ATTACH:X-QQ-BUSINESS-ORIGIN:X-Originating-IP:
	X-QQ-STYLE:X-QQ-mid:From:To:Subject:Mime-Version:Content-Type:
	Content-Transfer-Encoding:Date:X-Priority:Message-ID:X-QQ-MIME:
	X-Mailer:X-QQ-Mailer;
	b=S8rwXxufQJSQY7qA+Co8w4rvOn4PMieLhrUBIOA6bT4eoBLxP6IrsgU8ZZmF1rHXg
	VobjpnIfYHEa/28LK2elz9tiro8g933wgX7QKHdGW5C8B5+uTIJfvajd+1Q3JXK
X-QQ-SSF: 00000000000000F000000000000000U
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 121.22.205.25
X-QQ-STYLE: 
X-QQ-mid: webmail123t1355728977t1112685
From: "=?gb18030?B?xM/OszkwoeM=?=" <1012503222@qq.com>
To: "=?gb18030?B?eGVuLWRldmVs?=" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Date: Mon, 17 Dec 2012 15:22:57 +0800
X-Priority: 3
Message-ID: <tencent_1DD2008E6058D80001392A10@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Subject: [Xen-devel] test_bindings can't run
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1490747647971457849=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============1490747647971457849==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_50CEC851_D62F4BD0_1B468D3B"
Content-Transfer-Encoding: 8Bit

This is a multi-part message in MIME format.

------=_NextPart_50CEC851_D62F4BD0_1B468D3B
Content-Type: text/plain;
	charset="gb18030"
Content-Transfer-Encoding: 8bit

Hello everyone

I compiled the file test_bindings.c at xen.../tools/libxen/test and got an
executable file, test_bindings. But when I run it with the following
parameters, "./test_bindings http://localhost:8006/xmlrpc root 123456", it
says "Error: 2MESSAGE_METHOD_UNKNOWN session.login_with_password". We are
just doing some research, so we use an open-source release, which we
downloaded from http://xen.org/products/downloads.html . I really don't know
what to do with it; the truth is that I am not an expert in this area, and
my English isn't very good.
Please help me if you can.
Best regards.

Walnut

------=_NextPart_50CEC851_D62F4BD0_1B468D3B--



--===============1490747647971457849==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1490747647971457849==--



From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtt-0003tm-MJ; Tue, 18 Dec 2012 13:08:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <1012503222@qq.com>) id 1TkV2V-0005xi-FM
	for xen-devel@lists.xen.org; Mon, 17 Dec 2012 07:23:03 +0000
Received: from [85.158.139.83:32043] by server-10.bemta-5.messagelabs.com id
	CF/4A-13383-658CEC05; Mon, 17 Dec 2012 07:23:02 +0000
X-Env-Sender: 1012503222@qq.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355728980!26252495!1
X-Originating-IP: [64.71.138.44]
X-SpamReason: No, hits=2.7 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2NC43MS4xMzguNDQgPT4gMTYwOTE5\n,sa_preprocessor: 
	QmFkIElQOiA2NC43MS4xMzguNDQgPT4gMTYwOTE5\n,FROM_STARTS_WITH_NUMS,
	HTML_MESSAGE, HTML_OBFUSCATE_05_10, MIME_BASE64_TEXT,
	MIME_BOUND_NEXTPART, received_headers: No Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5342 invoked from network); 17 Dec 2012 07:23:01 -0000
Received: from smtpbg55.qq.com (HELO smtpbg55.qq.com) (64.71.138.44)
	by server-6.tower-182.messagelabs.com with SMTP;
	17 Dec 2012 07:23:01 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s0907;
	t=1355728979; bh=4iWxwPrCiI7Nv9zjLElv0BR4hVc0QkpbGAWemm7iqKc=;
	h=X-QQ-SSF:X-HAS-ATTACH:X-QQ-BUSINESS-ORIGIN:X-Originating-IP:
	X-QQ-STYLE:X-QQ-mid:From:To:Subject:Mime-Version:Content-Type:
	Content-Transfer-Encoding:Date:X-Priority:Message-ID:X-QQ-MIME:
	X-Mailer:X-QQ-Mailer;
	b=S8rwXxufQJSQY7qA+Co8w4rvOn4PMieLhrUBIOA6bT4eoBLxP6IrsgU8ZZmF1rHXg
	VobjpnIfYHEa/28LK2elz9tiro8g933wgX7QKHdGW5C8B5+uTIJfvajd+1Q3JXK
X-QQ-SSF: 00000000000000F000000000000000U
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 121.22.205.25
X-QQ-STYLE: 
X-QQ-mid: webmail123t1355728977t1112685
From: "=?gb18030?B?xM/OszkwoeM=?=" <1012503222@qq.com>
To: "=?gb18030?B?eGVuLWRldmVs?=" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Date: Mon, 17 Dec 2012 15:22:57 +0800
X-Priority: 3
Message-ID: <tencent_1DD2008E6058D80001392A10@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Subject: [Xen-devel] test_bindings can't run
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1490747647971457849=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============1490747647971457849==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_50CEC851_D62F4BD0_1B468D3B"
Content-Transfer-Encoding: 8Bit

This is a multi-part message in MIME format.

------=_NextPart_50CEC851_D62F4BD0_1B468D3B
Content-Type: text/plain;
	charset="gb18030"
Content-Transfer-Encoding: 8bit

Hello everyone,

I compiled the file test_bindings.c at xen.../tools/libxen/test and got an
executable, test_bindings. But when I run it with the parameters
"./test_bindings http://localhost:8006/xmlrpc root 123456", it says
"Error: 2MESSAGE_METHOD_UNKNOWN session.login_with_password". We only do
some research here, so we use the open-source Xen, which we downloaded from
http://xen.org/products/downloads.html . I really don't know what to do
with this error; the truth is that I am not an expert in this area, and my
English isn't very good.
Please help me if you can.
Best regards.

Walnut
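[Editorial aside for readers of the archive: the failure mode reported above can be reproduced at the plain XML-RPC level without Xen at all. Below is a minimal sketch using only the Python 3 standard library; the throwaway local server, the lambda body, and the "dummy-session" value are invented for illustration and are not the Xen-API implementation or the real test_bindings client.]

```python
# Illustrative sketch only -- a throwaway local XML-RPC server standing in
# for the endpoint at http://localhost:8006/xmlrpc.  It shows that a
# "method not supported" fault is what an XML-RPC server answers when the
# called method name is not registered, which is what the error above
# suggests about whatever was listening on port 8006.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]

# Register only the method the client calls first.
server.register_function(
    lambda username, password: {"Status": "Success", "Value": "dummy-session"},
    "session.login_with_password",
)
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:%d/" % port)

# A registered method succeeds...
result = proxy.session.login_with_password("root", "123456")
print(result["Status"])  # prints "Success"

# ...while an unregistered one comes back as a fault -- the analogue of
# the "MESSAGE_METHOD_UNKNOWN" reply from the real server.
try:
    proxy.session.logout("dummy-session")
    fault = None
except xmlrpc.client.Fault as f:
    fault = f
print(fault is not None and "is not supported" in fault.faultString)

server.shutdown()
```

The point of the sketch: a connection refusal would mean nothing is listening on port 8006 at all, whereas a MESSAGE_METHOD_UNKNOWN reply means something answered but does not expose session.login_with_password, so it is worth checking whether the Xen-API server component was actually enabled in that build.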


------=_NextPart_50CEC851_D62F4BD0_1B468D3B--



--===============1490747647971457849==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1490747647971457849==--



From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwtt-0003tY-7h; Tue, 18 Dec 2012 13:08:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chenxiaolong@cxl.epac.to>) id 1Tk8nZ-0007C9-QZ
	for xen-devel@lists.xen.org; Sun, 16 Dec 2012 07:38:10 +0000
Received: from [85.158.138.51:59726] by server-5.bemta-3.messagelabs.com id
	01/82-15136-C5A7DC05; Sun, 16 Dec 2012 07:38:04 +0000
X-Env-Sender: chenxiaolong@cxl.epac.to
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355643482!29045602!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=2.0 required=7.0 tests=RATWARE_GECKO_BUILD, RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10077 invoked from network); 16 Dec 2012 07:38:03 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Dec 2012 07:38:03 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so4386811iac.32
	for <xen-devel@lists.xen.org>; Sat, 15 Dec 2012 23:38:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:subject
	:content-type:content-transfer-encoding:x-gm-message-state;
	bh=g3Nxn9+oyMFqORCBk7RGvRw4vTz2b/u4lD4scpuerKM=;
	b=nogpB20hbKyTwj8Ov8YuquYHUMokLP3WbrtXSnnT3WKSnE9tgYnuRh58iTlgU0fUb7
	VHZ/gPkXVwdzI116IZD7Mn4B9yHLs7FV5QIadh+uCsi6rktTmDa2vibv12UAltGWj7+A
	ABwtD9FaZjDVWHY8fBs56vKzdrwX47Pa461a1b07iWHJVeKv2QWjyPEJqo7zTR4zbsb+
	xTkVhofC1TLVWHiAkG1FhgQajur9g2SJXQgFgcDkgfXcPuJLTV0DMIU5Qd3BqjW0Il5O
	hTbwY2bTJI0Yp5AVlNj5bTIvey2kGzDlMCdcxEcFCvVQHf9zxNZBogbwUGzSWrSr1E4X
	wZGQ==
Received: by 10.50.157.162 with SMTP id wn2mr6161271igb.27.1355643481977;
	Sat, 15 Dec 2012 23:38:01 -0800 (PST)
Received: from [10.0.0.145] (cpe-76-190-150-115.neo.res.rr.com.
	[76.190.150.115])
	by mx.google.com with ESMTPS id aa6sm2887683igc.14.2012.12.15.23.38.00
	(version=SSLv3 cipher=OTHER); Sat, 15 Dec 2012 23:38:01 -0800 (PST)
Message-ID: <50CD7A57.5030107@cxl.epac.to>
Date: Sun, 16 Dec 2012 02:37:59 -0500
From: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Gm-Message-State: ALoCoQmWjO/VMyCJ0yNfJhjao01uErjx7nHrsA7+d0BbWpO+lq5VG4tEe9I+8sjqGNqhImyZuOdJ
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Subject: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Xen developers,

I have been having a problem where only one CPU core is being detected.
I'm using a Lenovo W520 with UEFI firmware and an Intel Core i7-2720QM
(4 physical cores, 4 hyper-threaded logical cores). When I boot, I see
"Dom0 has maximum 1 VCPUs", or something similar, scroll by.

In addition, only 10 GB out of my 12 GB of memory is recognized and
ACPI is not working properly (CPU frequency scaling and battery info
are not reported).

This problem has been reported before (for the same laptop too) here:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
unfortunately, the person who sent the message didn't reply with more
information.

Here are the outputs of dmesg with and without Xen:

Without Xen: http://paste.kde.org/626222/raw/
With Xen: http://paste.kde.org/626228/raw/

and some information from xl:

xl vcpu-list: http://paste.kde.org/626234/raw/
xl info: http://paste.kde.org/626240/raw/
xl dmesg: http://paste.kde.org/626246/raw/

This is my first time venturing into Xen territory, so please let me
know if there's any other information needed.

I'm not sure if it helps, but I'd also like to point out that I had a
similar problem with Linux back around July 20, 2011, when I got my
computer. I could work around the issue by booting with "noapic". I'm
not sure which kernel version fixed the issue as I booted with that
option for quite a while. The kernel version, at the time, was 3.0.
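[Editorial aside: for readers wondering how the "noapic" workaround mentioned above carries over to a Xen boot, a hedged sketch of a GRUB 2 entry follows. All paths, version numbers, and the menu title are placeholders rather than details taken from this report; the option goes on the dom0 kernel "module" line, since the "multiboot" line takes Xen hypervisor options, not Linux ones.]

```
menuentry 'Xen with Linux dom0 (noapic workaround)' {
    # Hypothetical paths and versions -- adjust to the local install.
    multiboot /boot/xen.gz
    module    /boot/vmlinuz-3.0 root=/dev/sda1 ro noapic
    module    /boot/initrd-3.0.img
}
```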

I am glad to help in any way I can. I'm excited to get Xen working!
I'm comfortable with compiling the kernel or Xen if necessary.

Thanks in advance!

Xiao-Long Chen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwts-0003tH-Py; Tue, 18 Dec 2012 13:08:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <1012503222@qq.com>) id 1TjlMX-0001lW-7X
	for xen-devel@lists.xen.org; Sat, 15 Dec 2012 06:36:41 +0000
Received: from [85.158.137.99:8278] by server-3.bemta-3.messagelabs.com id
	DA/54-31588-87A1CC05; Sat, 15 Dec 2012 06:36:40 +0000
X-Env-Sender: 1012503222@qq.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355553398!13250994!1
X-Originating-IP: [64.71.138.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjQuNzEuMTM4LjQ1ID0+IDczMjI=\n,received_headers: No 
	Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15156 invoked from network); 15 Dec 2012 06:36:39 -0000
Received: from smtpbg56.qq.com (HELO smtpbg56.qq.com) (64.71.138.45)
	by server-3.tower-217.messagelabs.com with SMTP;
	15 Dec 2012 06:36:39 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s0907;
	t=1355553396; bh=H5UPbrHY3CMgvNEpqpDrioqItuHSYJvF1EzVobXEhfg=;
	h=X-QQ-SSF:X-HAS-ATTACH:X-QQ-BUSINESS-ORIGIN:X-Originating-IP:
	X-QQ-STYLE:X-QQ-mid:From:To:Subject:Mime-Version:Content-Type:
	Content-Transfer-Encoding:Date:X-Priority:Message-ID:X-QQ-MIME:
	X-Mailer:X-QQ-Mailer;
	b=Z830fv/UDpI+EVdNFWy9IUXk0/YVoXHeEYbY+CXKMeXQmTiUmrpipNWmjeZYTVBxx
	92/3nqtwXQnU5OhHUZ77wpWtF3hiHoI6hEdQydtTX0WwOsfIDwC1WQAWmPju0ob
X-QQ-SSF: 00000000000000F000000000000000U
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 121.22.205.25
X-QQ-STYLE: 
X-QQ-mid: webmail123t1355553394t9614208
From: "=?gb18030?B?xM/OszkwoeM=?=" <1012503222@qq.com>
To: "=?gb18030?B?eGVuLWRldmVs?=" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Date: Sat, 15 Dec 2012 14:36:34 +0800
X-Priority: 3
Message-ID: <tencent_142864FF75B0D41D560C7187@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Subject: [Xen-devel] test_bindings can't run
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5550858106181003624=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============5550858106181003624==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_50CC1A72_D61A9270_44CCD5EE"
Content-Transfer-Encoding: 8Bit

This is a multi-part message in MIME format.

------=_NextPart_50CC1A72_D61A9270_44CCD5EE
Content-Type: text/plain;
	charset="gb18030"
Content-Transfer-Encoding: 8bit

Hello everyone,

I compiled the file test_bindings.c at xen.../tools/libxen/test and got an
executable, test_bindings. But when I run it with the parameters
"./test_bindings http://localhost:8006/xmlrpc root 123456", it says
"Error: 2MESSAGE_METHOD_UNKNOWN session.login_with_password". We only do
some research here, so we use the open-source Xen, which we downloaded from
http://xen.org/products/downloads.html . I really don't know what to do
with this error; the truth is that I am not an expert in this area, and my
English isn't very good.
Please help me if you can.
Best regards.

Walnut


------=_NextPart_50CC1A72_D61A9270_44CCD5EE--



--===============5550858106181003624==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5550858106181003624==--



From xen-devel-bounces@lists.xen.org Tue Dec 18 13:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkwts-0003tH-Py; Tue, 18 Dec 2012 13:08:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <1012503222@qq.com>) id 1TjlMX-0001lW-7X
	for xen-devel@lists.xen.org; Sat, 15 Dec 2012 06:36:41 +0000
Received: from [85.158.137.99:8278] by server-3.bemta-3.messagelabs.com id
	DA/54-31588-87A1CC05; Sat, 15 Dec 2012 06:36:40 +0000
X-Env-Sender: 1012503222@qq.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355553398!13250994!1
X-Originating-IP: [64.71.138.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjQuNzEuMTM4LjQ1ID0+IDczMjI=\n,received_headers: No 
	Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15156 invoked from network); 15 Dec 2012 06:36:39 -0000
Received: from smtpbg56.qq.com (HELO smtpbg56.qq.com) (64.71.138.45)
	by server-3.tower-217.messagelabs.com with SMTP;
	15 Dec 2012 06:36:39 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s0907;
	t=1355553396; bh=H5UPbrHY3CMgvNEpqpDrioqItuHSYJvF1EzVobXEhfg=;
	h=X-QQ-SSF:X-HAS-ATTACH:X-QQ-BUSINESS-ORIGIN:X-Originating-IP:
	X-QQ-STYLE:X-QQ-mid:From:To:Subject:Mime-Version:Content-Type:
	Content-Transfer-Encoding:Date:X-Priority:Message-ID:X-QQ-MIME:
	X-Mailer:X-QQ-Mailer;
	b=Z830fv/UDpI+EVdNFWy9IUXk0/YVoXHeEYbY+CXKMeXQmTiUmrpipNWmjeZYTVBxx
	92/3nqtwXQnU5OhHUZ77wpWtF3hiHoI6hEdQydtTX0WwOsfIDwC1WQAWmPju0ob
X-QQ-SSF: 00000000000000F000000000000000U
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 121.22.205.25
X-QQ-STYLE: 
X-QQ-mid: webmail123t1355553394t9614208
From: "=?gb18030?B?xM/OszkwoeM=?=" <1012503222@qq.com>
To: "=?gb18030?B?eGVuLWRldmVs?=" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Date: Sat, 15 Dec 2012 14:36:34 +0800
X-Priority: 3
Message-ID: <tencent_142864FF75B0D41D560C7187@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-Mailman-Approved-At: Tue, 18 Dec 2012 13:07:57 +0000
Subject: [Xen-devel] test_bindings can't run
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5550858106181003624=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============5550858106181003624==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_50CC1A72_D61A9270_44CCD5EE"
Content-Transfer-Encoding: 8Bit

This is a multi-part message in MIME format.

------=_NextPart_50CC1A72_D61A9270_44CCD5EE
Content-Type: text/plain;
	charset="gb18030"
Content-Transfer-Encoding: 8bit

Hello everyone,

I compiled the file test_bindings.c at xen.../tools/libxen/test and got an
executable file, test_bindings. But when I run it using the following
parameters, "./test_bindings http://localhost:8006/xmlrpc root 123456", it
says "Error: 2MESSAGE_METHOD_UNKNOWN session.login_with_password". Because we
are just doing some research, we use an open-source release, which we
downloaded from http://xen.org/products/downloads.html . I really don't know
what to do with it. The truth is that I am not an expert in this area, and my
English isn't very good.
Please help me if you can.
Best regards.

Walnut


------=_NextPart_50CC1A72_D61A9270_44CCD5EE--



--===============5550858106181003624==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5550858106181003624==--



From xen-devel-bounces@lists.xen.org Tue Dec 18 13:12:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkwyS-0004qJ-7g; Tue, 18 Dec 2012 13:12:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marc.zyngier@arm.com>) id 1TkwyQ-0004qB-NY
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:12:43 +0000
Received: from [85.158.139.211:47815] by server-10.bemta-5.messagelabs.com id
	B5/A4-13383-9CB60D05; Tue, 18 Dec 2012 13:12:41 +0000
X-Env-Sender: marc.zyngier@arm.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1355836361!20906382!1
X-Originating-IP: [91.220.42.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogOTEuMjIwLjQyLjQ0ID0+IDM0NDcwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2774 invoked from network); 18 Dec 2012 13:12:41 -0000
Received: from service87.mimecast.com (HELO service87.mimecast.com)
	(91.220.42.44) by server-14.tower-206.messagelabs.com with SMTP;
	18 Dec 2012 13:12:41 -0000
Received: from cam-owa1.Emea.Arm.com (fw-tnat.cambridge.arm.com
	[217.140.96.21]) by service87.mimecast.com;
	Tue, 18 Dec 2012 13:12:40 +0000
Received: from [10.1.70.21] ([10.1.255.212]) by cam-owa1.Emea.Arm.com with
	Microsoft SMTPSVC(6.0.3790.0); Tue, 18 Dec 2012 13:12:37 +0000
Message-ID: <50D06BC4.3090208@arm.com>
Date: Tue, 18 Dec 2012 13:12:36 +0000
From: Marc Zyngier <marc.zyngier@arm.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-7-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181204450.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212181204450.17523@kaball.uk.xensource.com>
X-Enigmail-Version: 1.4.6
X-OriginalArrivalTime: 18 Dec 2012 13:12:37.0660 (UTC)
	FILETIME=[59F685C0:01CDDD21]
X-MC-Unique: 112121813124006401
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Will Deacon <Will.Deacon@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 6/6] ARM: mach-virt: add SMP support
 using PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/12 12:19, Stefano Stabellini wrote:
> On Mon, 17 Dec 2012, Will Deacon wrote:
>> This patch adds support for SMP to mach-virt using the PSCI
>> infrastructure.
>>
>> Signed-off-by: Will Deacon <will.deacon@arm.com>
>>
>>  arch/arm/mach-virt/Kconfig   |  1 +
>>  arch/arm/mach-virt/Makefile  |  1 +
>>  arch/arm/mach-virt/platsmp.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
>>  arch/arm/mach-virt/virt.c    |  6 ++++
>>  4 files changed, 84 insertions(+)
>>  create mode 100644 arch/arm/mach-virt/platsmp.c
>>
>> diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
>> index a568a2a..8958f0d 100644
>> --- a/arch/arm/mach-virt/Kconfig
>> +++ b/arch/arm/mach-virt/Kconfig
>> @@ -3,6 +3,7 @@ config ARCH_VIRT
>>  	select ARCH_WANT_OPTIONAL_GPIOLIB
>>  	select ARM_GIC
>>  	select ARM_ARCH_TIMER
>> +	select ARM_PSCI
>>  	select HAVE_SMP
>>  	select CPU_V7
>>  	select SPARSE_IRQ
> 
> Considering that PSCI is actually needed only to boot secondary cpus,
> maybe we want to select it if CONFIG_SMP is enabled?
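
For illustration only, that suggestion would amount to making the select
conditional in the hunk above (a sketch, not what the posted patch does):

```kconfig
# Hypothetical variant of the quoted hunk: only pull in PSCI when
# building an SMP kernel. The posted patch selects it unconditionally.
config ARCH_VIRT
	select ARCH_WANT_OPTIONAL_GPIOLIB
	select ARM_GIC
	select ARM_ARCH_TIMER
	select ARM_PSCI if SMP
	select HAVE_SMP
	select CPU_V7
	select SPARSE_IRQ
```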

Well, I was considering using it to "power-off" the VM when the last CPU
powers itself off, and this would apply to UP as well.

>> diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
>> index 7ddbfa6..042afc1 100644
>> --- a/arch/arm/mach-virt/Makefile
>> +++ b/arch/arm/mach-virt/Makefile
>> @@ -3,3 +3,4 @@
>>  #
>>  
>>  obj-y					:= virt.o
>> +obj-$(CONFIG_SMP)			+= platsmp.o
>> diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
>> new file mode 100644
>> index 0000000..930362b
>> --- /dev/null
>> +++ b/arch/arm/mach-virt/platsmp.c
>> @@ -0,0 +1,76 @@
>> +/*
>> + * Dummy Virtual Machine - does what it says on the tin.
>> + *
>> + * Copyright (C) 2012 ARM Ltd
>> + * Author: Will Deacon <will.deacon@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <linux/init.h>
>> +#include <linux/smp.h>
>> +#include <linux/of.h>
>> +
>> +#include <asm/psci.h>
>> +#include <asm/smp_plat.h>
>> +#include <asm/hardware/gic.h>
>> +
>> +extern void secondary_startup(void);
>> +
>> +/*
>> + * Enumerate the possible CPU set from the device tree.
>> + */
>> +static void __init virt_smp_init_cpus(void)
>> +{
>> +	struct device_node *dn = NULL;
>> +	int cpu = 0;
>> +
>> +	while ((dn = of_find_node_by_type(dn, "cpu"))) {
>> +		if (cpu < NR_CPUS)
>> +			set_cpu_possible(cpu, true);
>> +		cpu++;
>> +	}
>> +
>> +	/* sanity check */
>> +	if (cpu > NR_CPUS)
>> +		pr_warning("no. of cores (%d) greater than configured maximum "
>> +			   "of %d - clipping\n",
>> +			   cpu, NR_CPUS);
>> +
>> +	set_smp_cross_call(gic_raise_softirq);
>> +}
>> +
>> +static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
>> +{
>> +}
>> +
>> +static int __cpuinit virt_boot_secondary(unsigned int cpu,
>> +					 struct task_struct *idle)
>> +{
>> +	if (psci_ops.cpu_on)
>> +		return psci_ops.cpu_on(cpu_logical_map(cpu),
>> +				       __pa(secondary_startup));
>> +	return -ENODEV;
>> +}
> 
> Isn't there a better way to check whether PSCI is actually "enabled", as
> in present in the device tree and initialized correctly?

All methods are optional, so I'm afraid you have to check for their
validity each time you want to access one.

> Maybe we need a psci_enabled() static inline of some sort?
> 
> 
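
For what it's worth, such a helper could be sketched like this against a
simplified stand-in for the psci_ops structure (illustrative only: the
struct here is a stub rather than the kernel's definition, and no such
helper exists in the posted patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's psci_operations: every firmware
 * method is optional and stays NULL unless device-tree probing
 * populated it. */
struct psci_operations {
	int (*cpu_suspend)(unsigned long state, unsigned long entry);
	int (*cpu_off)(unsigned long state);
	int (*cpu_on)(unsigned long cpuid, unsigned long entry);
	int (*migrate)(unsigned long cpuid);
};

static struct psci_operations psci_ops;	/* zeroed until probing runs */

/* The suggested helper: treat PSCI as "enabled" iff probing populated
 * at least one method. Callers would still have to check the specific
 * method they need, since each one is individually optional. */
static inline bool psci_enabled(void)
{
	return psci_ops.cpu_suspend || psci_ops.cpu_off ||
	       psci_ops.cpu_on || psci_ops.migrate;
}
```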
>> +static void __cpuinit virt_secondary_init(unsigned int cpu)
>> +{
>> +	gic_secondary_init(0);
>> +}
>> +
>> +struct smp_operations __initdata virt_smp_ops = {
>> +	.smp_init_cpus		= virt_smp_init_cpus,
>> +	.smp_prepare_cpus	= virt_smp_prepare_cpus,
>> +	.smp_secondary_init	= virt_secondary_init,
>> +	.smp_boot_secondary	= virt_boot_secondary,
>> +};
>> diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
>> index 174b9da..d764835 100644
>> --- a/arch/arm/mach-virt/virt.c
>> +++ b/arch/arm/mach-virt/virt.c
>> @@ -20,6 +20,7 @@
>>  
>>  #include <linux/of_irq.h>
>>  #include <linux/of_platform.h>
>> +#include <linux/smp.h>
>>  
>>  #include <asm/arch_timer.h>
>>  #include <asm/hardware/gic.h>
>> @@ -56,10 +57,15 @@ static struct sys_timer virt_timer = {
>>  	.init = virt_timer_init,
>>  };
>>  
>> +#ifdef CONFIG_SMP
>> +extern struct smp_operations virt_smp_ops;
>> +#endif
>> +
>>  DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
>>  	.init_irq	= gic_init_irq,
>>  	.handle_irq     = gic_handle_irq,
>>  	.timer		= &virt_timer,
>>  	.init_machine	= virt_init,
>> +	.smp		= smp_ops(virt_smp_ops),
>>  	.dt_compat	= virt_dt_match,
>>  MACHINE_END
>> -- 
>> 1.8.0
> 


-- 
Jazz is not dead. It just smells funny...


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:14:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkx0K-00055G-Uh; Tue, 18 Dec 2012 13:14:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tkx0J-000551-Pt
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:14:40 +0000
Received: from [85.158.138.51:33989] by server-15.bemta-3.messagelabs.com id
	1A/5E-07921-F3C60D05; Tue, 18 Dec 2012 13:14:39 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355836474!29125710!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22916 invoked from network); 18 Dec 2012 13:14:35 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-2.tower-174.messagelabs.com with SMTP;
	18 Dec 2012 13:14:35 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIDE3ki022827; Tue, 18 Dec 2012 13:14:03 GMT
Date: Tue, 18 Dec 2012 13:14:01 +0000
From: Will Deacon <will.deacon@arm.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121218131401.GB22139@mudshark.cambridge.arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Marc Zyngier <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On Tue, Dec 18, 2012 at 12:04:38PM +0000, Stefano Stabellini wrote:
> On Mon, 17 Dec 2012, Will Deacon wrote:
> > From: Marc Zyngier <marc.zyngier@arm.com>
> > 
> > Add support for the smallest, dumbest possible platform, to be
> > used as a guest for KVM or other hypervisors.
> > 
> > It only mandates a GIC and architected timers. Fits nicely with
> > a multiplatform zImage. Uses very little silicon area.
> > 
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm/Kconfig            |  2 ++
> >  arch/arm/Makefile           |  1 +
> >  arch/arm/mach-virt/Kconfig  |  9 +++++++
> >  arch/arm/mach-virt/Makefile |  5 ++++
> >  arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
> >  5 files changed, 82 insertions(+)
> >  create mode 100644 arch/arm/mach-virt/Kconfig
> >  create mode 100644 arch/arm/mach-virt/Makefile
> >  create mode 100644 arch/arm/mach-virt/virt.c
> 
> Should it come along with a DTS?

The only things the platform needs are GIC, timers, memory and a CPU.
Furthermore, the location, size, frequency, etc. properties of these aren't
fixed, so a dts would be fairly useless because it will probably not match
the particular mach-virt instance you're targeting.

For kvmtool, I've been generating the device-tree at runtime based on how
kvmtool is invoked and it's been working pretty well so far.
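
As an illustration of the point, here is a minimal, hypothetical cpus
fragment of the kind a tool might generate at runtime; every value
(compatible string, number of CPUs, enable method) depends on how the VM
was invoked, so none of it could be baked into a static dts:

```dts
/* Hypothetical runtime-generated fragment; names and values are
 * examples only. virt_smp_init_cpus() simply counts the nodes with
 * device_type = "cpu". */
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a15";
		reg = <0>;
	};

	cpu@1 {
		device_type = "cpu";
		compatible = "arm,cortex-a15";
		reg = <1>;
		enable-method = "psci";
	};
};
```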

Will

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:16:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkx1c-0005Ca-Ef; Tue, 18 Dec 2012 13:16:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1Tkx1b-0005CQ-9F
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 13:15:59 +0000
Received: from [85.158.139.211:58433] by server-8.bemta-5.messagelabs.com id
	68/C8-15003-E8C60D05; Tue, 18 Dec 2012 13:15:58 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1355836556!16762717!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzY5ODEw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2493 invoked from network); 18 Dec 2012 13:15:57 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-206.messagelabs.com with SMTP;
	18 Dec 2012 13:15:57 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Dec 2012 05:15:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,309,1355126400"; d="scan'208";a="235672448"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 18 Dec 2012 05:15:54 -0800
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 18 Dec 2012 05:15:54 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 18 Dec 2012 05:15:54 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Tue, 18 Dec 2012 21:15:52 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH V1 1/2] Xen acpi memory hotplug driver
Thread-Index: AQHN1IPpUNtbxbdZSkG9MqBQJp2L85gVdm/ggALVLaCABk6VoA==
Date: Tue, 18 Dec 2012 13:15:52 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353AD822@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
	<20121205174307.GC16072@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
	<20121207140528.GA3140@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A7C7B@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A9E57@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353A9E57@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Liu, Jinsong wrote:
> Liu, Jinsong wrote:
>> Konrad Rzeszutek Wilk wrote:
>>> On Thu, Dec 06, 2012 at 04:27:36AM +0000, Liu, Jinsong wrote:
>>>>>>>> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig index
>>>>>>>> 126d8ce..abd0396 100644 --- a/drivers/xen/Kconfig
>>>>>>>> +++ b/drivers/xen/Kconfig
>>>>>>>> @@ -206,4 +206,15 @@ config XEN_MCE_LOG
>>>>>>>>  	  Allow kernel fetching MCE error from Xen platform and
>>>>>>>>  	  converting it into Linux mcelog format for mcelog tools
>>>>>>>> 
>>>>>>>> +config XEN_ACPI_MEMORY_HOTPLUG
>>>>>>>> +	bool "Xen ACPI memory hotplug"
>>>>>>> 
>>>>>>> There should be a way to make this a module.
>>>>>> 
>>>>>> I have some concerns about making it a module:
>>>>>> 1. the xen and native memhotplug drivers both work as modules, while
>>>>>> we need to load the xen driver early.
>>>>>> 2. if possible, a xen stub driver may solve the load-sequence issue,
>>>>>>   but it may involve other issues:
>>>>>>   * if the xen driver loads then unloads, the native driver may have
>>>>>> a chance to load successfully;
>>>>> 
>>>>> The stub driver would still "occupy" the ACPI bus for the memory
>>>>> hotplug PnP, so I think this would not be a problem.
>>>>> 
>>>> 
>>>> I'm not quite clear what you mean here; do you mean it has
>>>> 1. xen_stub driver + xen_memhotplug driver, then the xen_stub driver
>>>> unloads and is entirely replaced by the xen_memhotplug driver, or
>>>> 2. xen_stub driver (w/ stub ops) + xen_memhotplug ops (not a driver),
>>>> where the xen_stub driver keeps occupying the bus but its stub ops are
>>>> later replaced by the xen_memhotplug ops?
>>> 
>>> #2
>>>> 
>>>> With way #1, there is a risk that the native driver may load (if the
>>>> xen driver unloads). With way #2, the xen_memhotplug ops lose the
>>>> chance to probe/add/bind existing memory devices (since that is done
>>>> when the driver is registered).
>>> 
>>> Could the stub driver have a queue of events?
>> 
>> If so, why not do 'real' add ops (like our patch did, building the xen
>> memory hotplug logic in)? I'm not quite clear why you insist on a
>> module -- what advantage of a module do you prefer?
>> 
>>> 
>>>> 
>>>>>>   * if the xen driver loads --> unloads --> loads again, it will
>>>>>> lose hotplug notifications during the unloaded period;
>>>>> 
>>>>> Sure. But I think we can do it with this driver? After all the
>>>>> function of it is to just tell the firmware to turn on/off sockets
>>>>> - and if we miss one notification we won't take advantage of the
>>>>> power savings - but we can do that later on.
>>>>> 
>>>> 
>>>> It does not only inform the firmware.
>>>> The hotplug notify callback will invoke acpi_bus_add -> ... ->
>>>> implicitly invoke the drv->ops.add method to add the hot-added
>>>> memory device.
>>> 
>>> Gotcha.
>> 
>> ? So it will lose the notification, with no way to add the new memory
>> device in the future.
>> 
>> Xen memory hotplug logic consists of 2 parts:
>> 1) driver logic (.add/.remove etc.)
>> 2) notification install/callback logic
>> If you want to use 'xen_stub driver + .add/.remove ops', then the
>> notification install/callback logic would have to be implemented in the
>> xen_stub driver (i.e. in the built-in part, otherwise notifications
>> would be lost when the ops unload) --> but that would make the built-in
>> xen_stub large.
> 
> How about
> * built-in part: xen_stub driver (a stub .add to record which cpu
> devices matched) + notification install/callback;
> * module part: .add/.remove ops;
> With that, the native driver has no chance to load and no hotplug
> events are lost, and approximately 1/3 of the code is built-in and 2/3
> is in the module.
> 
> I think it will work but I'm not quite sure; at least we can give it a
> try/test?
> 
> Thanks,
> Jinsong
> 

Thoughts? If you think it's OK, I will update later.

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:22:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkx7j-0005uI-Jt; Tue, 18 Dec 2012 13:22:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1Tkx7i-0005tw-0H
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:22:18 +0000
Received: from [85.158.143.99:30019] by server-2.bemta-4.messagelabs.com id
	0F/DD-30861-90E60D05; Tue, 18 Dec 2012 13:22:17 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355836936!22969449!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ2MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26086 invoked from network); 18 Dec 2012 13:22:16 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-216.messagelabs.com with SMTP;
	18 Dec 2012 13:22:16 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qBIDMF2v014612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 08:22:15 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-39.ams2.redhat.com
	[10.36.112.39])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id qBIDMDOn030149; Tue, 18 Dec 2012 08:22:14 -0500
Message-ID: <50D06E04.7030904@redhat.com>
Date: Tue, 18 Dec 2012 14:22:12 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<50D078B702000078000B101F@nat28.tlf.novell.com>
In-Reply-To: <50D078B702000078000B101F@nat28.tlf.novell.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/2012 14:07, Jan Beulich wrote:
>>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>> On some machines, the location at 0x40e does not point to the beginning
>>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>>> area of the EBDA, while the option ROMs place their data below that
>>> segment.
>>>
>>> For this reason, 0x413 is actually a better source than 0x40e to get
>>> the location of the real-mode trampoline.  But it is even better to
>>> fetch the information from the multiboot structure, where the boot
>>> loader has placed the data for us already.
>>
>> I think if anything we really should make this a minimum calculation
>> of all three (sanity checked) values, rather than throwing the other
>> sources out. It's just not certain enough that we can trust all
>> multiboot implementations.
> 
> I never saw a response from you on this one - were you
> intending to follow up, or did you (silently) expect us to sort
> this out?

No, just busy.  I agree that checking all three is best.  However, there
is at least one known case where 0x40e doesn't work, so 0x413 and
multiboot should be enough.

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxCY-0006HR-4A; Tue, 18 Dec 2012 13:27:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkxCW-0006HA-0v
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 13:27:16 +0000
Received: from [85.158.137.99:61950] by server-9.bemta-3.messagelabs.com id
	6F/C0-11948-33F60D05; Tue, 18 Dec 2012 13:27:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355837221!14804020!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29659 invoked from network); 18 Dec 2012 13:27:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:27:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="225220"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 13:27:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 13:26:58 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkxCE-0006sM-CD;
	Tue, 18 Dec 2012 13:26:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkxCE-0000df-8Q;
	Tue, 18 Dec 2012 13:26:58 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14776-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 13:26:58 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14776: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4403641889991611816=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4403641889991611816==
Content-Type: text/plain

flight 14776 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14776/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14771
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14771

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3664b0420dfa
baseline version:
 xen                  f50aab21f9f2

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3664b0420dfa
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3664b0420dfa
+ branch=xen-unstable
+ revision=3664b0420dfa
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 3664b0420dfa ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 2 changes to 2 files


--===============4403641889991611816==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4403641889991611816==--

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxCY-0006HR-4A; Tue, 18 Dec 2012 13:27:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TkxCW-0006HA-0v
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 13:27:16 +0000
Received: from [85.158.137.99:61950] by server-9.bemta-3.messagelabs.com id
	6F/C0-11948-33F60D05; Tue, 18 Dec 2012 13:27:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355837221!14804020!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29659 invoked from network); 18 Dec 2012 13:27:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:27:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="225220"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 13:27:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 13:26:58 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TkxCE-0006sM-CD;
	Tue, 18 Dec 2012 13:26:58 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TkxCE-0000df-8Q;
	Tue, 18 Dec 2012 13:26:58 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14776-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 13:26:58 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14776: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4403641889991611816=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4403641889991611816==
Content-Type: text/plain

flight 14776 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14776/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14771
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14771

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3664b0420dfa
baseline version:
 xen                  f50aab21f9f2

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3664b0420dfa
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3664b0420dfa
+ branch=xen-unstable
+ revision=3664b0420dfa
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 3664b0420dfa ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 2 changes to 2 files


--===============4403641889991611816==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4403641889991611816==--

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:27:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxCl-0006KB-IU; Tue, 18 Dec 2012 13:27:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkxCk-0006Ji-2q
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:27:30 +0000
Received: from [85.158.139.83:5845] by server-7.bemta-5.messagelabs.com id
	14/4A-08009-14F60D05; Tue, 18 Dec 2012 13:27:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1355837242!28641599!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13714 invoked from network); 18 Dec 2012 13:27:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 13:27:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 13:27:22 +0000
Message-Id: <50D07D4802000078000B1071@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 13:27:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<50D078B702000078000B101F@nat28.tlf.novell.com>
	<50D06E04.7030904@redhat.com>
In-Reply-To: <50D06E04.7030904@redhat.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 14:22, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Il 18/12/2012 14:07, Jan Beulich ha scritto:
>>>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>> On some machines, the location at 0x40e does not point to the beginning
>>>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>>>> area of the EBDA, while the option ROMs place their data below that
>>>> segment.
>>>>
>>>> For this reason, 0x413 is actually a better source than 0x40e to get
>>>> the location of the real-mode trampoline.  But it is even better to
>>>> fetch the information from the multiboot structure, where the boot
>>>> loader has placed the data for us already.
>>>
>>> I think if anything we really should make this a minimum calculation
>>> of all three (sanity checked) values, rather than throwing the other
>>> sources out. It's just not certain enough that we can trust all
>>> multiboot implementations.
>> 
>> I never saw a response from you on this one - were you
>> intending to follow up, or did you (silently) expect us to sort
>> this out?
> 
> No, just busy.  I agree that checking all three is best.  However, there
> is at least one known case where 0x40e doesn't work, so 0x413 and
> multiboot should be enough.

Can you provide more detail about this specific case? In
particular, what value 0x40e in fact has there?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:30:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxFX-0006ca-Pw; Tue, 18 Dec 2012 13:30:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1TkxFW-0006cH-7G
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:30:22 +0000
Received: from [85.158.139.83:50992] by server-5.bemta-5.messagelabs.com id
	51/E8-22648-DEF60D05; Tue, 18 Dec 2012 13:30:21 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355837338!30414183!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ2MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10662 invoked from network); 18 Dec 2012 13:29:04 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-182.messagelabs.com with SMTP;
	18 Dec 2012 13:29:04 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qBIDSwlh021847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 08:28:58 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-39.ams2.redhat.com
	[10.36.112.39])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id qBIDSn0q031938; Tue, 18 Dec 2012 08:28:50 -0500
Message-ID: <50D06F90.6070803@redhat.com>
Date: Tue, 18 Dec 2012 14:28:48 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<50D078B702000078000B101F@nat28.tlf.novell.com>
	<50D06E04.7030904@redhat.com>
	<50D07D4802000078000B1071@nat28.tlf.novell.com>
In-Reply-To: <50D07D4802000078000B1071@nat28.tlf.novell.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 18/12/2012 14:27, Jan Beulich ha scritto:
>>>> On 18.12.12 at 14:22, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> Il 18/12/2012 14:07, Jan Beulich ha scritto:
>>>>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>>> On some machines, the location at 0x40e does not point to the beginning
>>>>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>>>>> area of the EBDA, while the option ROMs place their data below that
>>>>> segment.
>>>>>
>>>>> For this reason, 0x413 is actually a better source than 0x40e to get
>>>>> the location of the real-mode trampoline.  But it is even better to
>>>>> fetch the information from the multiboot structure, where the boot
>>>>> loader has placed the data for us already.
>>>>
>>>> I think if anything we really should make this a minimum calculation
>>>> of all three (sanity checked) values, rather than throwing the other
>>>> sources out. It's just not certain enough that we can trust all
>>>> multiboot implementations.
>>>
>>> I never saw a response from you on this one - were you
>>> intending to follow up, or did you (silently) expect us to sort
>>> this out?
>>
>> No, just busy.  I agree that checking all three is best.  However, there
>> is at least one known case where 0x40e doesn't work, so 0x413 and
>> multiboot should be enough.
> 
> Can you provide more detail about this specific case? In
> particular, what value 0x40e in fact has there?

Sure.  0x40e did point to the beginning of the EBDA (around 635k), but
an option ROM was reserving memory below there by lowering 0x413.
That's the "on some machines" in the commit message.

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:30:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxFX-0006ca-Pw; Tue, 18 Dec 2012 13:30:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1TkxFW-0006cH-7G
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:30:22 +0000
Received: from [85.158.139.83:50992] by server-5.bemta-5.messagelabs.com id
	51/E8-22648-DEF60D05; Tue, 18 Dec 2012 13:30:21 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355837338!30414183!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ2MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10662 invoked from network); 18 Dec 2012 13:29:04 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-182.messagelabs.com with SMTP;
	18 Dec 2012 13:29:04 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qBIDSwlh021847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 08:28:58 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-39.ams2.redhat.com
	[10.36.112.39])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id qBIDSn0q031938; Tue, 18 Dec 2012 08:28:50 -0500
Message-ID: <50D06F90.6070803@redhat.com>
Date: Tue, 18 Dec 2012 14:28:48 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<50D078B702000078000B101F@nat28.tlf.novell.com>
	<50D06E04.7030904@redhat.com>
	<50D07D4802000078000B1071@nat28.tlf.novell.com>
In-Reply-To: <50D07D4802000078000B1071@nat28.tlf.novell.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/2012 14:27, Jan Beulich wrote:
>>>> On 18.12.12 at 14:22, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 18/12/2012 14:07, Jan Beulich wrote:
>>>>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>>> On some machines, the location at 0x40e does not point to the beginning
>>>>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>>>>> area of the EBDA, while the option ROMs place their data below that
>>>>> segment.
>>>>>
>>>>> For this reason, 0x413 is actually a better source than 0x40e to get
>>>>> the location of the real-mode trampoline.  But it is even better to
>>>>> fetch the information from the multiboot structure, where the boot
>>>>> loader has placed the data for us already.
>>>>
>>>> I think if anything we really should make this a minimum calculation
>>>> of all three (sanity checked) values, rather than throwing the other
>>>> sources out. It's just not certain enough that we can trust all
>>>> multiboot implementations.
>>>
>>> I never saw a response from you on this one - were you
>>> intending to follow up, or did you (silently) expect us to sort
>>> this out?
>>
>> No, just busy.  I agree that checking all three is best.  However, there
>> is at least one known case where 0x40e doesn't work, so 0x413 and
>> multiboot should be enough.
> 
> Can you provide more detail about this specific case? In
> particular, what value 0x40e in fact has there?

Sure.  0x40e did point to the beginning of the EBDA (around 635k), but
an option ROM was reserving memory below there by lowering 0x413.
That's the "on some machines" in the commit message.

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:33:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxI4-00071T-Kb; Tue, 18 Dec 2012 13:33:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TkxI3-00071J-Tp
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:33:00 +0000
Received: from [85.158.138.51:16943] by server-6.bemta-3.messagelabs.com id
	8E/7A-12154-B8070D05; Tue, 18 Dec 2012 13:32:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355837574!29129133!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20388 invoked from network); 18 Dec 2012 13:32:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:32:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1030545"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 13:32:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 08:32:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TkxHx-000664-Oo;
	Tue, 18 Dec 2012 13:32:53 +0000
Date: Tue, 18 Dec 2012 13:32:45 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <20121218131401.GB22139@mudshark.cambridge.arm.com>
Message-ID: <alpine.DEB.2.02.1212181332010.17523@kaball.uk.xensource.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
	<20121218131401.GB22139@mudshark.cambridge.arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Marc Zyngier <Marc.Zyngier@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:
> Hi Stefano,
> 
> On Tue, Dec 18, 2012 at 12:04:38PM +0000, Stefano Stabellini wrote:
> > On Mon, 17 Dec 2012, Will Deacon wrote:
> > > From: Marc Zyngier <marc.zyngier@arm.com>
> > > 
> > > Add support for the smallest, dumbest possible platform, to be
> > > used as a guest for KVM or other hypervisors.
> > > 
> > > It only mandates a GIC and architected timers. Fits nicely with
> > > a multiplatform zImage. Uses very little silicon area.
> > > 
> > > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > > ---
> > >  arch/arm/Kconfig            |  2 ++
> > >  arch/arm/Makefile           |  1 +
> > >  arch/arm/mach-virt/Kconfig  |  9 +++++++
> > >  arch/arm/mach-virt/Makefile |  5 ++++
> > >  arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
> > >  5 files changed, 82 insertions(+)
> > >  create mode 100644 arch/arm/mach-virt/Kconfig
> > >  create mode 100644 arch/arm/mach-virt/Makefile
> > >  create mode 100644 arch/arm/mach-virt/virt.c
> > 
> > Should it come along with a DTS?
> 
> The only things the platform needs are GIC, timers, memory and a CPU.
> Furthermore, the location, size, frequency etc properties of these aren't
> fixed, so a dts would be fairly useless because it will probably not match
> the particular mach-virt instance you're targeting.
> 
> For kvmtool, I've been generating the device-tree at runtime based on how
> kvmtool is invoked and it's been working pretty well so far.

I agree that it should be generated, but I personally think one would
still be useful as an example.
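A minimal sketch of what such an example dts might look like. This is entirely illustrative: the addresses, interrupt numbers, and CPU type are made up, since (as noted above) none of these properties are fixed for mach-virt.

```dts
/dts-v1/;

/ {
	model = "linux,dummy-virt";
	compatible = "linux,dummy-virt";
	#address-cells = <1>;
	#size-cells = <1>;
	interrupt-parent = <&gic>;

	cpus {
		#address-cells = <1>;
		#size-cells = <0>;
		cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a15";	/* arbitrary choice */
			reg = <0>;
		};
	};

	memory@80000000 {
		device_type = "memory";
		reg = <0x80000000 0x40000000>;	/* 1 GiB; placement arbitrary */
	};

	gic: interrupt-controller@2c001000 {
		compatible = "arm,cortex-a15-gic";
		#interrupt-cells = <3>;
		interrupt-controller;
		reg = <0x2c001000 0x1000>,	/* distributor */
		      <0x2c002000 0x1000>;	/* CPU interface */
	};

	timer {
		compatible = "arm,armv7-timer";
		interrupts = <1 13 0xf08>, <1 14 0xf08>,
			     <1 11 0xf08>, <1 10 0xf08>;
	};
};
```

A tool like kvmtool would generate the equivalent tree at runtime with the values it actually configured.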

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:37:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxLz-0007OB-E3; Tue, 18 Dec 2012 13:37:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkxLx-0007Nx-BF
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:37:02 +0000
Received: from [85.158.143.99:16006] by server-3.bemta-4.messagelabs.com id
	EC/B5-18211-C7170D05; Tue, 18 Dec 2012 13:37:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355837820!29037334!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1260 invoked from network); 18 Dec 2012 13:37:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 13:37:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 13:36:59 +0000
Message-Id: <50D07F8802000078000B1091@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 13:36:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<50D078B702000078000B101F@nat28.tlf.novell.com>
	<50D06E04.7030904@redhat.com>
	<50D07D4802000078000B1071@nat28.tlf.novell.com>
	<50D06F90.6070803@redhat.com>
In-Reply-To: <50D06F90.6070803@redhat.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 14:28, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 18/12/2012 14:27, Jan Beulich wrote:
>>>>> On 18.12.12 at 14:22, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>> On 18/12/2012 14:07, Jan Beulich wrote:
>>>>>>> On 30.11.12 at 09:33, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>>>> On 29.11.12 at 18:34, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>>>> On some machines, the location at 0x40e does not point to the beginning
>>>>>> of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
>>>>>> area of the EBDA, while the option ROMs place their data below that
>>>>>> segment.
>>>>>>
>>>>>> For this reason, 0x413 is actually a better source than 0x40e to get
>>>>>> the location of the real-mode trampoline.  But it is even better to
>>>>>> fetch the information from the multiboot structure, where the boot
>>>>>> loader has placed the data for us already.
>>>>>
>>>>> I think if anything we really should make this a minimum calculation
>>>>> of all three (sanity checked) values, rather than throwing the other
>>>>> sources out. It's just not certain enough that we can trust all
>>>>> multiboot implementations.
>>>>
>>>> I never saw a response from you on this one - were you
>>>> intending to follow up, or did you (silently) expect us to sort
>>>> this out?
>>>
>>> No, just busy.  I agree that checking all three is best.  However, there
>>> is at least one known case where 0x40e doesn't work, so 0x413 and
>>> multiboot should be enough.
>> 
>> Can you provide more detail about this specific case? In
>> particular, what value 0x40e in fact has there?
> 
> Sure.  0x40e did point to the beginning of the EBDA (around 635k), but
> an option ROM was reserving memory below there by lowering 0x413.
> That's the "on some machines" in the commit message.

That wouldn't preclude the suggested sanity-checked-minimum
solution.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:38:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxNG-0007Vv-TU; Tue, 18 Dec 2012 13:38:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thiagocmartinsc@gmail.com>) id 1TkxNF-0007Vh-56
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:38:21 +0000
Received: from [85.158.138.51:56167] by server-7.bemta-3.messagelabs.com id
	60/FA-23008-CC170D05; Tue, 18 Dec 2012 13:38:20 +0000
X-Env-Sender: thiagocmartinsc@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355837890!27449948!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9055 invoked from network); 18 Dec 2012 13:38:11 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:38:11 -0000
Received: by mail-la0-f43.google.com with SMTP id z14so524473lag.2
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 05:37:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=h1f4Ie7/BKatzVtPAV9WX1FZuptbS4glQPCUibMId6M=;
	b=TXHxI6Gk1rM9bUXLbue4JTxHVkaTkAnsoIW8Ay4TMUziEaibky5lK2cjFk2vzHZQlI
	x6SgnSplYN9Gc5F67rNO2I0lIoreRaG8jdDMFSyX6OuvIKiLns16hXMB9QS7TjIAs1x7
	RkyWIx5vxmF/8obDaCBeGmjgo53CKHpntW5W9i42Uu9Iz3QBXP6I5uB+EINGZ7Q1PSi9
	WPTUQHO3IO0CRa8Ei4CBr0i+AYqlobB9VYpZwFKmpAQzTSS5w0JqUuOfSVJ3hjSnis0N
	q1Kw9NobmF9CCo1DxgP0vZD0Rwzh8PVlw2HjrGBMOH6QxJkTPPaHwEUjTs9MIT400AIw
	6WEg==
Received: by 10.112.28.65 with SMTP id z1mr850627lbg.119.1355837872181; Tue,
	18 Dec 2012 05:37:52 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.6.3 with HTTP; Tue, 18 Dec 2012 05:37:22 -0800 (PST)
In-Reply-To: <20121218070351.GK8912@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
	<CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
	<20121218070351.GK8912@reaktio.net>
From: =?ISO-2022-JP?B?TWFydGlueCAtIBskQiU4JSchPCVgJTobKEI=?=
	<thiagocmartinsc@gmail.com>
Date: Tue, 18 Dec 2012 11:37:22 -0200
Message-ID: <CAJSM8J2tjRi85X7Kwf=fuNJZT3eaAwFkoKQXhFr775BjnkuZ9A@mail.gmail.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3265233352219908079=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3265233352219908079==
Content-Type: multipart/alternative; boundary=bcaec554d9d6be943604d1209a64

--bcaec554d9d6be943604d1209a64
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Pasi,

 How can I take full advantage of a remote HVM DomU with a Radeon within
it, without SPICE?

 Is VNC enough? Or is there something better that I don't know about?

Thank you!
Thiago

On 18 December 2012 05:03, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Mon, Dec 17, 2012 at 09:57:59PM -0200, Martinx - ジェームズ wrote:
> >    Hi Pasi!
> >     Can you tell me if it will be possible to use Xen like this:
> >     dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst ->
> Spice
> >    -> Spice-Client ?
> >     I do not want to use Spice "alone" and, I do not want to use my domU with
> >    my ATI without SPICE...  That makes sense?
> >    Thanks!
>
> Afaik SPICE uses and requires the virtual QXL GPU for efficient operation,
> so it doesn't work with physical GPUs.
>
> -- Pasi
>
> >    Thiago
> >
> >    On 20 August 2012 16:14, Pasi Kärkkäinen <[1]pasik@iki.fi> wrote:
> >
> >      On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >      >
> >      > Features and improvements not on this list are of course welcome
> at
> >      > any time before the feature freeze.
> >      >
> >      > Any questions and feedback are welcome!
> >      >
> >      > Your 4.3 release coordinator,
> >      >  George Dunlap
> >      >
> >
> >      <snip>
> >      >
> >      > * xl USB pass-through for PV guests
> >      >   owner: ?
> >      >   Port the xend PV pass-through functionality to xl.
> >      >
> >
> >      xm/xend PVUSB works for both PV and HVM guests, so xl should support
> >      PVUSB for both PV and HVM guests as well.
> >      James Harper's GPLPV drivers actually do have PVUSB frontend driver for
> >      Windows.
> >
> >      Also Suse's xenlinux forward-ported patches have PVUSB support in
> >      unmodified_drivers for HVM guests.
> >
> >      Another USB item:
> >
> >      * xl support for USB device passthru using QEMU emulated USB for HVM
> >      guests (no need for PVUSB drivers in the HVM guest).
> >        This works today in xm/xend with qemu-traditional, but is limited to
> >      USB 1.1, probably because the old version of qemu-dm-traditional
> >        lacks USB 2.0/3.0 support.
> >        So xl support for emulated USB device passthru for both
> qemu-upstream
> >      and qemu-traditional.
> >
> >      More wishlist items:
> >
> >      * Nested hardware virtualization. Important for easier testing and
> >      development of Xen (Xen-on-Xen),
> >        and for running other hypervisors in Xen VMs. Interesting for
> labs,
> >      POCs, etc.
> >
> >      * VGA/GPU passthru support for AMD/NVIDIA; lots of patches on
> xen-devel
> >      archives,
> >        but no one has yet stepped up to clean up and get them merged.
> >        Currently Intel gfx passthru patches are merged to Xen, but
> primary
> >      ATI/NVIDIA require extra patches.
> >        This is actually something that a LOT of users ask often, it's
> >      discussed almost every day on ##xen on IRC.
> >        I wonder if XenClient folks could help here?
> >
> >      * Dom0 Keyboard/mouse sharing to HVM guests; mainly needed by
> VGA/GPU
> >      passthru users.
> >        Fujitsu guys posted some patches for this in 2010, and XenClient
> guys
> >      in 2009 (iirc),
> >        but nothing got further developed and merged to upstream Xen.
> >
> >      * QXL virtual GPU support for SPICE. Someone was already developing
> >      this,
> >        and posted patches earlier during 4.2 development cycle to
> xen-devel.
> >        Upstream Qemu includes QXL support.
> >
> >      * PVSCSI support in XL. James Harper was (semi) interested in
> working
> >      with this,
> >      because he has a PVSCSI frontend driver in Windows GPLPV drivers, and
> >      he's using PVSCSI for tape backups himself.
> >
> >      * libvirt libxl driver improvements; support more Xen features.
> >        Allows better using the Ubuntu/Debian/Fedora/RHEL/CentOS "default"
> >      virtualization GUI also with Xen.
> >
> >      Hopefully we'll find interested developers for these items :)
> >
> >      -- Pasi
> >
> >      _______________________________________________
> >      Xen-devel mailing list
> >      [2]Xen-devel@lists.xen.org
> >      [3]http://lists.xen.org/xen-devel
> >
> > References
> >
> >    Visible links
> >    1. mailto:pasik@iki.fi
> >    2. mailto:Xen-devel@lists.xen.org
> >    3. http://lists.xen.org/xen-devel
>


--bcaec554d9d6be943604d1209a64--


--===============3265233352219908079==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3265233352219908079==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:38:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxNG-0007Vv-TU; Tue, 18 Dec 2012 13:38:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thiagocmartinsc@gmail.com>) id 1TkxNF-0007Vh-56
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:38:21 +0000
Received: from [85.158.138.51:56167] by server-7.bemta-3.messagelabs.com id
	60/FA-23008-CC170D05; Tue, 18 Dec 2012 13:38:20 +0000
X-Env-Sender: thiagocmartinsc@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1355837890!27449948!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9055 invoked from network); 18 Dec 2012 13:38:11 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:38:11 -0000
Received: by mail-la0-f43.google.com with SMTP id z14so524473lag.2
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 05:37:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=h1f4Ie7/BKatzVtPAV9WX1FZuptbS4glQPCUibMId6M=;
	b=TXHxI6Gk1rM9bUXLbue4JTxHVkaTkAnsoIW8Ay4TMUziEaibky5lK2cjFk2vzHZQlI
	x6SgnSplYN9Gc5F67rNO2I0lIoreRaG8jdDMFSyX6OuvIKiLns16hXMB9QS7TjIAs1x7
	RkyWIx5vxmF/8obDaCBeGmjgo53CKHpntW5W9i42Uu9Iz3QBXP6I5uB+EINGZ7Q1PSi9
	WPTUQHO3IO0CRa8Ei4CBr0i+AYqlobB9VYpZwFKmpAQzTSS5w0JqUuOfSVJ3hjSnis0N
	q1Kw9NobmF9CCo1DxgP0vZD0Rwzh8PVlw2HjrGBMOH6QxJkTPPaHwEUjTs9MIT400AIw
	6WEg==
Received: by 10.112.28.65 with SMTP id z1mr850627lbg.119.1355837872181; Tue,
	18 Dec 2012 05:37:52 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.6.3 with HTTP; Tue, 18 Dec 2012 05:37:22 -0800 (PST)
In-Reply-To: <20121218070351.GK8912@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
	<CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
	<20121218070351.GK8912@reaktio.net>
From: =?ISO-2022-JP?B?TWFydGlueCAtIBskQiU4JSchPCVgJTobKEI=?=
	<thiagocmartinsc@gmail.com>
Date: Tue, 18 Dec 2012 11:37:22 -0200
Message-ID: <CAJSM8J2tjRi85X7Kwf=fuNJZT3eaAwFkoKQXhFr775BjnkuZ9A@mail.gmail.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3265233352219908079=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3265233352219908079==
Content-Type: multipart/alternative; boundary=bcaec554d9d6be943604d1209a64

--bcaec554d9d6be943604d1209a64
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Pasi,

 How can I take full advantage of a remote HVM DomU with a Radeon within
it, without SPICE?

 Is VNC enough? Or is there something better that I don't know about?

Thank you!
Thiago

On 18 December 2012 05:03, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Mon, Dec 17, 2012 at 09:57:59PM -0200, Martinx - ジェームズ wrote:
> >    Hi Pasi!
> >     Can you tell me if it will be possible to use Xen like this:
> >     dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst -> Spice
> >    -> Spice-Client ?
> >     I do not want to use Spice "alone" and, I do not want to use my domU with
> >    my ATI without SPICE...  That makes sense?
> >    Thanks!
>
> Afaik SPICE uses and requires the virtual QXL GPU for efficient operation,
> so it doesn't work with physical GPUs.
>
> -- Pasi
>
> >    Thiago
> >
> >    On 20 August 2012 16:14, Pasi Kärkkäinen <[1]pasik@iki.fi> wrote:
> >
> >      On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >      >
> >      > Features and improvements not on this list are of course welcome at
> >      > any time before the feature freeze.
> >      >
> >      > Any questions and feedback are welcome!
> >      >
> >      > Your 4.3 release coordinator,
> >      >  George Dunlap
> >      >
> >
> >      <snip>
> >      >
> >      > * xl USB pass-through for PV guests
> >      >   owner: ?
> >      >   Port the xend PV pass-through functionality to xl.
> >      >
> >
> >      xm/xend PVUSB works for both PV and HVM guests, so xl should support
> >      PVUSB for both PV and HVM guests aswell.
> >      James Harper's GPLPV drivers actually do have PVUSB frontend driver for
> >      Windows.
> >
> >      Also Suse's xenlinux forward-ported patches have PVUSB support in
> >      unmodified_drivers for HVM guests.
> >
> >      Another USB item:
> >
> >      * xl support for USB device passthru using QEMU emulated USB for HVM
> >      guests (no need for PVUSB drivers in the HVM guest).
> >        This works today in xm/xend with qemu-traditional, but is limited to
> >      USB 1.1, probably because
> >        the old version of Qemu-dm-traditional which lacks USB 2.0/3.0.
> >        So xl support for emulated USB device passthru for both qemu-upstream
> >      and qemu-traditional.
> >
> >      More wishlist items:
> >
> >      * Nested hardware virtualization. Important for easier testing and
> >      development of Xen (Xen-on-Xen),
> >        and for running other hypervisors in Xen VMs. Interesting for labs,
> >      POCs, etc.
> >
> >      * VGA/GPU passthru support for AMD/NVIDIA; lots of patches on xen-devel
> >      archives,
> >        but noone has yet stepped up to clean up and get them merged.
> >        Currently Intel gfx passthru patches are merged to Xen, but primary
> >      ATI/NVIDIA require extra patches.
> >        This is actually something that a LOT of users ask often, it's
> >      discussed almost every day on ##xen on IRC.
> >        I wonder if XenClient folks could help here?
> >
> >      * Dom0 Keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU
> >      passthru users.
> >        Fujitsu guys posted some patches for this in 2010, and XenClient guys
> >      in 2009 (iirc),
> >        but nothing got further developed and merged to upstream Xen.
> >
> >      * QXL virtual GPU support for SPICE. Someone was already developing
> >      this,
> >        and posted patches earlier during 4.2 development cycle to xen-devel.
> >        Upstream Qemu includes QXL support.
> >
> >      * PVSCSI support in XL. James Harper was (semi) interested in working
> >      with this,
> >        because he has a PVSCSI frontend driver in Windows GPLPV drivers, and
> >      he's using PVSCSI for tape backups himself.
> >
> >      * libvirt libxl driver improvements; support more Xen features.
> >        Allows better using the Ubuntu/Debian/Fedora/RHEL/CentOS "default"
> >      virtualization GUI also with Xen.
> >
> >      Hopefully we'll find interested developers for these items :)
> >
> >      -- Pasi
> >
> >      _______________________________________________
> >      Xen-devel mailing list
> >      [2]Xen-devel@lists.xen.org
> >      [3]http://lists.xen.org/xen-devel
> >
> > References
> >
> >    Visible links
> >    1. mailto:pasik@iki.fi
> >    2. mailto:Xen-devel@lists.xen.org
> >    3. http://lists.xen.org/xen-devel
>
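
[Editor's note] As a concrete illustration of the emulated-USB setup Pasi describes as already working in xm/xend with qemu-traditional, the relevant HVM guest config fragment looks roughly like this. The vendor:product ID is a placeholder and the option names follow the classic xmexample.hvm conventions, so treat this as a sketch rather than an authoritative reference:

```
# HVM guest config fragment (xm/xend, qemu-traditional)
usb = 1                        # enable the emulated USB 1.1 controller
usbdevice = 'host:1234:5678'   # pass a host device through by vendor:product ID
```

The USB 1.1 limitation discussed above comes from the controller qemu-dm emulates, not from this syntax.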


--bcaec554d9d6be943604d1209a64--


--===============3265233352219908079==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3265233352219908079==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:39:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:39:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxOT-0007dH-Dh; Tue, 18 Dec 2012 13:39:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1TkxOS-0007d3-DH
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:39:36 +0000
Received: from [85.158.143.99:30413] by server-1.bemta-4.messagelabs.com id
	52/89-28401-71270D05; Tue, 18 Dec 2012 13:39:35 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1355837947!24806439!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ2MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28968 invoked from network); 18 Dec 2012 13:39:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-7.tower-216.messagelabs.com with SMTP;
	18 Dec 2012 13:39:08 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qBIDd6V0023835
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 08:39:06 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-39.ams2.redhat.com
	[10.36.112.39])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id qBIDd4rA019408; Tue, 18 Dec 2012 08:39:05 -0500
Message-ID: <50D071F8.8040509@redhat.com>
Date: Tue, 18 Dec 2012 14:39:04 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1354210461-9739-1-git-send-email-pbonzini@redhat.com>
	<50B87D6E02000078000ACBF2@nat28.tlf.novell.com>
	<50D078B702000078000B101F@nat28.tlf.novell.com>
	<50D06E04.7030904@redhat.com>
	<50D07D4802000078000B1071@nat28.tlf.novell.com>
	<50D06F90.6070803@redhat.com>
	<50D07F8802000078000B1091@nat28.tlf.novell.com>
In-Reply-To: <50D07F8802000078000B1091@nat28.tlf.novell.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: find a better location for the
 real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/2012 14:36, Jan Beulich wrote:
>>>> No, just busy.  I agree that checking all three is best.  However, there
>>>> is at least one known case where 0x40e doesn't work, so 0x413 and
>>>> multiboot should be enough.
>>>
>>> Can you provide more detail about this specific case? In
>>> particular, what value 0x40e in fact has there?
>>
>> Sure.  0x40e did point to the beginning of the EBDA (around 635k), but
>> an option ROM was reserving memory below there by lowering 0x413.
>> That's the "on some machines" in the commit message.
> That wouldn't preclude the suggested sanity-checked-minimum
> solution.

Yes; on the other hand, [0x413] should always be less than or equal to
[0x40e] >> 6.  Otherwise, for example, DOS would not work on that system.
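
[Editor's note] The "sanity-checked minimum" Jan suggests can be sketched as below. This is a hypothetical helper, not the actual Xen code; the function name and the sample values in the comments are illustrative. It takes the three low-memory reports under discussion (BDA word 0x413 in KiB, BDA word 0x40e as an EBDA segment, and the multiboot mem_lower field) and uses the lowest one that passes a plausibility check:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: pick the lowest plausible top-of-low-memory value, in KiB.
 * bda_0x413:    base memory size in KiB (BIOS Data Area word at 0x413)
 * bda_0x40e:    EBDA segment (BDA word at 0x40e); segment >> 6 gives KiB
 * mb_mem_lower: lower-memory size in KiB from the multiboot info struct
 */
uint32_t lowmem_top_kib(uint16_t bda_0x413, uint16_t bda_0x40e,
                        uint32_t mb_mem_lower)
{
    uint32_t top = 640;  /* never trust more than 640 KiB of low memory */
    uint32_t ebda_kib = (uint32_t)bda_0x40e >> 6;

    /* Each report is used only if it is plausible: non-zero and not
     * above the current bound.  A garbage 0x40e (Paolo's known-bad
     * case) is simply ignored instead of misplacing the trampoline. */
    if (bda_0x413 && bda_0x413 <= top)
        top = bda_0x413;
    if (ebda_kib && ebda_kib <= top)
        top = ebda_kib;
    if (mb_mem_lower && mb_mem_lower <= top)
        top = mb_mem_lower;
    return top;
}
```

In the option-ROM case described above (0x40e still pointing at a 635 KiB EBDA while the ROM lowered 0x413 to, say, 632), the minimum correctly lands on the value from 0x413.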

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:49:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxXv-0008E4-1i; Tue, 18 Dec 2012 13:49:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkxXt-0008Dz-BS
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:49:21 +0000
Received: from [85.158.143.35:39061] by server-3.bemta-4.messagelabs.com id
	D2/A8-18211-06470D05; Tue, 18 Dec 2012 13:49:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355838510!14206862!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5632 invoked from network); 18 Dec 2012 13:48:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 13:48:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 13:48:29 +0000
Message-Id: <50D0823B02000078000B10BD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 13:48:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Lars Kurth <lars.kurth@citrix.com>
Subject: [Xen-devel] [ANNOUNCE] Xen 4.1.4 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Folks,

I am pleased to announce the release of Xen 4.1.4. This is
available immediately from its mercurial repository:
http://xenbits.xen.org/xen-4.1-testing.hg (tag RELEASE-4.1.4)

This fixes the following critical vulnerabilities:
 * CVE-2012-3494 / XSA-12:
    hypercall set_debugreg vulnerability
 * CVE-2012-3495 / XSA-13:
    hypercall physdev_get_free_pirq vulnerability
 * CVE-2012-3496 / XSA-14:
    XENMEM_populate_physmap DoS vulnerability
 * CVE-2012-3498 / XSA-16:
    PHYSDEVOP_map_pirq index vulnerability
 * CVE-2012-3515 / XSA-17:
    Qemu VT100 emulation vulnerability
 * CVE-2012-4411 / XSA-19:
    guest administrator can access qemu monitor console
 * CVE-2012-4535 / XSA-20:
    Timer overflow DoS vulnerability
 * CVE-2012-4536 / XSA-21:
    pirq range check DoS vulnerability
 * CVE-2012-4537 / XSA-22:
    Memory mapping failure DoS vulnerability
 * CVE-2012-4538 / XSA-23:
    Unhooking empty PAE entries DoS vulnerability
 * CVE-2012-4539 / XSA-24:
    Grant table hypercall infinite loop DoS vulnerability
 * CVE-2012-4544,CVE-2012-2625 / XSA-25:
    Xen domain builder Out-of-memory due to malicious kernel/ramdisk
 * CVE-2012-5510 / XSA-26:
    Grant table version switch list corruption vulnerability
 * CVE-2012-5511 / XSA-27:
    several HVM operations do not validate the range of their inputs
 * CVE-2012-5512 / XSA-28:
    HVMOP_get_mem_access crash / HVMOP_set_mem_access information leak
 * CVE-2012-5513 / XSA-29:
    XENMEM_exchange may overwrite hypervisor memory
 * CVE-2012-5514 / XSA-30:
    Broken error handling in guest_physmap_mark_populate_on_demand()
 * CVE-2012-5515 / XSA-31:
    Several memory hypercall operations allow invalid extent order values

We recommend that all users of the 4.1 stable series update to this
latest point release.

Among many bug fixes and improvements (almost 100 since Xen 4.1.3):
 * A fix for a long standing time management issue
 * Bug fixes for S3 (suspend to RAM) handling
 * Bug fixes for other low level system state handling

Regards,
Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:50:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxZ0-0008Ic-Lo; Tue, 18 Dec 2012 13:50:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkxYz-0008IV-7E
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:50:29 +0000
Received: from [85.158.138.51:47075] by server-6.bemta-3.messagelabs.com id
	4C/8B-12154-4A470D05; Tue, 18 Dec 2012 13:50:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355838622!19460319!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8303 invoked from network); 18 Dec 2012 13:50:23 -0000
Received: from unknown (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 13:50:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 13:49:20 +0000
Message-Id: <50D0826F02000078000B10C0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 13:49:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "=?UTF-8?B?TWFydGlueCAtIOOCuOOCp+ODvOODoOOCug==?="
	<thiagocmartinsc@gmail.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
	<CAJSM8J3DvQm-wW95k9Jd3quDkeNiPexj8OYdtnb_qWEWDUs1kw@mail.gmail.com>
	<20121218070351.GK8912@reaktio.net>
	<CAJSM8J2tjRi85X7Kwf=fuNJZT3eaAwFkoKQXhFr775BjnkuZ9A@mail.gmail.com>
In-Reply-To: <CAJSM8J2tjRi85X7Kwf=fuNJZT3eaAwFkoKQXhFr775BjnkuZ9A@mail.gmail.com>
Mime-Version: 1.0
Content-Language: ja
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 14:37, Martinx - ジェームズ <thiagocmartinsc@gmail.com> wrote:
>  How can I take full advantage of a remote HVM DomU with a Radeon within
> it, without SPICE?
> 
>  VNC is enough? Or there is something else better that I don't know?

Please stop hijacking an unrelated thread.

Thanks, Jan

> On 18 December 2012 05:03, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> 
>> On Mon, Dec 17, 2012 at 09:57:59PM -0200, Martinx - ジェームズ wrote:
>> >    Hi Pasi!
>> >     Can you tell me if it will be possible to use Xen like this:
>> >     dom0 -> ATI GPU Passthrough as primary -> HVM domU with Catalyst ->
>> Spice
>> >    -> Spice-Client ?
>> >     I do not want to use Spice "alone" and, I do not want to use my domU
>> with
>> >    my ATI without SPICE...  That makes sense?
>> >    Thanks!
>>
>> Afaik SPICE uses and requires the virtual QXL GPU for efficient operation,
>> so it doesn't work with physical GPUs.
>>
>> -- Pasi
>>
>> >    Thiago
>> >
>> >    On 20 August 2012 16:14, Pasi Kärkkäinen <[1]pasik@iki.fi> wrote:
>> >
>> >      On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
>> >      >
>> >      > Features and improvements not on this list are of course welcome
>> at
>> >      > any time before the feature freeze.
>> >      >
>> >      > Any questions and feedback are welcome!
>> >      >
>> >      > Your 4.3 release coordinator,
>> >      >  George Dunlap
>> >      >
>> >
>> >      <snip>
>> >      >
>> >      > * xl USB pass-through for PV guests
>> >      >   owner: ?
>> >      >   Port the xend PV pass-through functionality to xl.
>> >      >
>> >
>> >      xm/xend PVUSB works for both PV and HVM guests, so xl should support
>> >      PVUSB for both PV and HVM guests aswell.
>> >      James Harper's GPLPV drivers actually do have PVUSB frontend driver
>> for
>> >      Windows.
>> >
>> >      Also Suse's xenlinux forward-ported patches have PVUSB support in
>> >      unmodified_drivers for HVM guests.
>> >
>> >      Another USB item:
>> >
>> >      * xl support for USB device passthru using QEMU emulated USB for HVM
>> >      guests (no need for PVUSB drivers in the HVM guest).
>> >        This works today in xm/xend with qemu-traditional, but is limited
>> to
>> >      USB 1.1, probably because
>> >        the old version of Qemu-dm-traditional which lacks USB 2.0/3.0.
>> >        So xl support for emulated USB device passthru for both
>> qemu-upstream
>> >      and qemu-traditional.
>> >
>> >      More wishlist items:
>> >
>> >      * Nested hardware virtualization. Important for easier testing and
>> >      development of Xen (Xen-on-Xen),
>> >        and for running other hypervisors in Xen VMs. Interesting for
>> labs,
>> >      POCs, etc.
>> >
>> >      * VGA/GPU passthru support for AMD/NVIDIA; lots of patches on
>> xen-devel
>> >      archives,
>> >        but noone has yet stepped up to clean up and get them merged.
>> >        Currently Intel gfx passthru patches are merged to Xen, but
>> primary
>> >      ATI/NVIDIA require extra patches.
>> >        This is actually something that a LOT of users ask often, it's
>> >      discussed almost every day on ##xen on IRC.
>> >        I wonder if XenClient folks could help here?
>> >
>> >      * Dom0 Keyboard/mouse sharing to HVM guests; mainly needed by
>> VGA/GPU
>> >      passthru users.
>> >        Fujitsu guys posted some patches for this in 2010, and XenClient
>> guys
>> >      in 2009 (iirc),
>> >        but nothing got further developed and merged to upstream Xen.
>> >
>> >      * QXL virtual GPU support for SPICE. Someone was already developing
>> >      this,
>> >        and posted patches earlier during 4.2 development cycle to
>> xen-devel.
>> >        Upstream Qemu includes QXL support.
>> >
>> >      * PVSCSI support in XL. James Harper was (semi) interested in
>> working
>> >      with this,
>> >        because he has a PVSCSI frontend driver in Windows GPLPV drivers,
>> and
>> >      he's using PVSCSI for tape backups himself.
>> >
>> >      * libvirt libxl driver improvements; support more Xen features.
>> >        Allows better using the Ubuntu/Debian/Fedora/RHEL/CentOS "default"
>> >      virtualization GUI also with Xen.
>> >
>> >      Hopefully we'll find interested developers for these items :)
>> >
>> >      -- Pasi
>> >
>> >      _______________________________________________
>> >      Xen-devel mailing list
>> >      [2]Xen-devel@lists.xen.org
>> >      [3]http://lists.xen.org/xen-devel
>> >
>> > References
>> >
>> >    Visible links
>> >    1. mailto:pasik@iki.fi
>> >    2. mailto:Xen-devel@lists.xen.org
>> >    3. http://lists.xen.org/xen-devel
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:52:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxaZ-0008RP-6i; Tue, 18 Dec 2012 13:52:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TkxaX-0008RI-Tn
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:52:06 +0000
Received: from [85.158.139.211:15888] by server-11.bemta-5.messagelabs.com id
	1B/63-31624-50570D05; Tue, 18 Dec 2012 13:52:05 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355838722!20209373!1
X-Originating-IP: [209.85.220.169]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1936 invoked from network); 18 Dec 2012 13:52:03 -0000
Received: from mail-vc0-f169.google.com (HELO mail-vc0-f169.google.com)
	(209.85.220.169)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:52:03 -0000
Received: by mail-vc0-f169.google.com with SMTP id gb23so850814vcb.28
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 05:52:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=2dw3dMIon+8z6H1h9/FFcL1la5xtOKiA3fsvp/2xPKA=;
	b=kW4s/qK97XDxJNfRRvnZaDf92oRL6zByIO1Qq9nm//bHhZUoQDh78mzXxxQhYxzf6F
	sewZAPRrpVyqn8sHYJPxcqIif4Qg0oe5ZWVy0PnXvOTEulOg1D7aKL2+SdfwXHCmstgX
	t3Kx8qiXleQZqgi5J/Wb9bJDPjPuAITpNoEtgSnxL/y92zli/1UXiTSB9G+R2+8JfFXQ
	LklizuuyTJIdMA3cESP8eB9+vpKLkmcCcwq4a6dpheqpiCZeIT5FTM/8qmp2MsPWEWjf
	awledEufJ8dWJQaYBn+0PR0Wf2M7G43HrL6fqtuL/RuA0qqHn39ImYToYXBvMi0I6Fit
	dPiQ==
MIME-Version: 1.0
Received: by 10.52.64.131 with SMTP id o3mr2527566vds.116.1355838722474; Tue,
	18 Dec 2012 05:52:02 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Tue, 18 Dec 2012 05:52:02 -0800 (PST)
Date: Tue, 18 Dec 2012 13:52:02 +0000
X-Google-Sender-Auth: 6BsVJWkTvkNzuUEkJr8FYpcQNrI
Message-ID: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3215671426408071363=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3215671426408071363==
Content-Type: multipart/alternative; boundary=20cf307ac7976d029104d120cd1c

--20cf307ac7976d029104d120cd1c
Content-Type: text/plain; charset=ISO-8859-1

One of the requests from the xenorg.uservoice.com page that had a moderate
amount of interest was to allow block devices to be resized.  There's a
description here:

https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize

I have no idea what this would take -- can anyone comment?

 -George


--20cf307ac7976d029104d120cd1c--


--===============3215671426408071363==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3215671426408071363==--


From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3215671426408071363=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3215671426408071363==
Content-Type: multipart/alternative; boundary=20cf307ac7976d029104d120cd1c

--20cf307ac7976d029104d120cd1c
Content-Type: text/plain; charset=ISO-8859-1

One of the requests from the xenorg.uservoice.com page that had a moderate
amount of interest was to allow block devices to be resized.  There's a
description here:

https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize

I have no idea what this would take -- can anyone comment?

 -George

--20cf307ac7976d029104d120cd1c
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div>One of the requests from the <a href=3D"http://x=
enorg.uservoice.com">xenorg.uservoice.com</a> page that had a moderate amou=
nt of interest was to allow block devices to be resized.=A0 There&#39;s a d=
escription here:<br>
<br><a href=3D"https://xenorg.uservoice.com/forums/172169-xen-development/s=
uggestions/3140313-implement-block-device-resize">https://xenorg.uservoice.=
com/forums/172169-xen-development/suggestions/3140313-implement-block-devic=
e-resize</a><br>
<br></div>I have no idea what this would take -- can anyone comment?<br><br=
></div>=A0-George<br></div>

--20cf307ac7976d029104d120cd1c--


--===============3215671426408071363==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3215671426408071363==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 13:57:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxfp-0000Kn-5P; Tue, 18 Dec 2012 13:57:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkxfn-0000KW-2P
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 13:57:31 +0000
Received: from [85.158.137.99:5591] by server-6.bemta-3.messagelabs.com id
	FB/D7-12154-74670D05; Tue, 18 Dec 2012 13:57:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355839045!18195216!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10142 invoked from network); 18 Dec 2012 13:57:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:57:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1033426"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 13:57:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 08:57:24 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkxaJ-0006Mf-Sq;
	Tue, 18 Dec 2012 13:51:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <netdev@vger.kernel.org>
Date: Tue, 18 Dec 2012 13:51:51 +0000
Message-ID: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Sander Eikelenboom <linux@eikelenboom.it>, annie li <annie.li@oracle.com>,
	xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller
than that. We have already accounted for this in
NETFRONT_SKB_CB(skb)->pull_to so use that instead.

Fixes WARN_ON from skb_try_coalesce.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Sander Eikelenboom <linux@eikelenboom.it>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: annie li <annie.li@oracle.com>
Cc: xen-devel@lists.xensource.com
Cc: netdev@vger.kernel.org
Cc: stable@kernel.org # 3.7.x only
---
 drivers/net/xen-netfront.c |   15 +++++----------
 1 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index caa0110..b06ef81 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -971,17 +971,12 @@ err:
 		 * overheads. Here, we add the size of the data pulled
 		 * in xennet_fill_frags().
 		 *
-		 * We also adjust for any unused space in the main
-		 * data area by subtracting (RX_COPY_THRESHOLD -
-		 * len). This is especially important with drivers
-		 * which split incoming packets into header and data,
-		 * using only 66 bytes of the main data area (see the
-		 * e1000 driver for example.)  On such systems,
-		 * without this last adjustement, our achievable
-		 * receive throughout using the standard receive
-		 * buffer size was cut by 25%(!!!).
+		 * We also adjust for the __pskb_pull_tail done in
+		 * handle_incoming_queue which pulls data from the
+		 * frags into the head area, which is already
+		 * accounted in RX_COPY_THRESHOLD.
 		 */
-		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
+		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
 		skb->len += skb->data_len;
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:57:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxg3-0000Mh-Hv; Tue, 18 Dec 2012 13:57:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tkxg2-0000MH-RP
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:57:46 +0000
Received: from [85.158.138.51:49297] by server-11.bemta-3.messagelabs.com id
	20/33-13335-55670D05; Tue, 18 Dec 2012 13:57:41 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355839046!27620493!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27895 invoked from network); 18 Dec 2012 13:57:26 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-15.tower-174.messagelabs.com with SMTP;
	18 Dec 2012 13:57:26 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id CC2EAC5618E;
	Tue, 18 Dec 2012 13:57:03 +0000 (GMT)
Date: Tue, 18 Dec 2012 13:57:02 +0000
From: Alex Bligh <alex@alex.org.uk>
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <39A8C47B5D7DD22A52A3DA51@Ximines.local>
In-Reply-To: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
References: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Alex Bligh <alex@alex.org.uk>
Subject: Re: [Xen-devel] Xen 4.3 development update, 15 Oct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



--On 15 October 2012 17:19:05 +0100 George Dunlap <George.Dunlap@eu.citrix.com> wrote:

> * Make storage migration possible
>   owner: ?
>   status: ?
>   There needs to be a way, either via command-line or via some hooks,
>   that someone can build a "storage migration" feature on top of libxl
>   or xl.

We have this working with qemu-xen, qcow2 and snapshot rebase. At a libxl
level (but not an xl level), everything seems to be there.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:58:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxgT-0000R2-Vl; Tue, 18 Dec 2012 13:58:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkxgS-0000Qo-GZ
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:58:12 +0000
Received: from [85.158.143.99:28366] by server-3.bemta-4.messagelabs.com id
	EE/46-18211-37670D05; Tue, 18 Dec 2012 13:58:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355839089!29992470!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12291 invoked from network); 18 Dec 2012 13:58:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:58:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="226268"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 13:58:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 13:58:08 +0000
Message-ID: <1355839087.14620.219.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Tue, 18 Dec 2012 13:58:07 +0000
In-Reply-To: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:
> One of the requests from the xenorg.uservoice.com page that had a
> moderate amount of interest was to allow block devices to be resized.
> There's a description here:
> 
> https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize
> 
> 
> I have no idea what this would take -- can anyone comment?

Doesn't that already work? I thought this was patched in the PV block
drivers ages ago...

Yes, http://wiki.xen.org/wiki/XenParavirtOps lists it under 2.6.36.

Maybe this is a missing feature of (lib)xl vs xend?
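
If it is just a toolstack gap, the mechanism blkfront reacts to is the backend's "sectors" xenstore node. A minimal sketch of the arithmetic involved (the xenstore path and node name follow the usual vbd convention but are shown as an assumption, not verified against any particular toolstack; DOMID and DEVID are placeholders):

```shell
# Sketch only: compute the new sector count a backend would advertise.
# The vbd "sectors" node counts 512-byte units.
NEW_BYTES=$((20 * 1024 * 1024 * 1024))   # disk grown to 20 GiB
SECTORS=$((NEW_BYTES / 512))
echo "sectors=$SECTORS"                  # prints: sectors=41943040

# Hypothetically, after growing the backing store, the backend node
# would be updated so a blkfront >= 2.6.36 revalidates the disk:
#   xenstore-write "/local/domain/0/backend/vbd/$DOMID/$DEVID/sectors" "$SECTORS"
```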

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:59:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxh8-0000Y1-Dl; Tue, 18 Dec 2012 13:58:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkxh6-0000Xd-Oq
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:58:52 +0000
Received: from [85.158.143.99:62177] by server-3.bemta-4.messagelabs.com id
	FF/B7-18211-C9670D05; Tue, 18 Dec 2012 13:58:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355838724!29040442!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12789 invoked from network); 18 Dec 2012 13:52:11 -0000
From xen-devel-bounces@lists.xen.org Tue Dec 18 13:59:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxh8-0000Y1-Dl; Tue, 18 Dec 2012 13:58:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkxh6-0000Xd-Oq
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:58:52 +0000
Received: from [85.158.143.99:62177] by server-3.bemta-4.messagelabs.com id
	FF/B7-18211-C9670D05; Tue, 18 Dec 2012 13:58:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355838724!29040442!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12789 invoked from network); 18 Dec 2012 13:52:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 13:52:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="226006"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 13:52:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 13:52:03 +0000
Message-ID: <1355838722.14620.216.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Tue, 18 Dec 2012 13:52:02 +0000
In-Reply-To: <1541942460.20121218135411@eikelenboom.it>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com> <341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
	<1355152338.21160.37.camel@zakaz.uk.xensource.com>
	<1541942460.20121218135411@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
 net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 12:54 +0000, Sander Eikelenboom wrote:
> Monday, December 10, 2012, 4:12:18 PM, you wrote:
> 
> > I wrote
> >> > I have a vague recollection of a patch to set skb->truesize more
> >> > accurately in xennet_poll (netfront), but I can't seem to find any
> >> > reference to it now.
> 
> > I finally found the following in my git tree. Looks like I never sent it
> > out.
> 
> > Does it help?
> 
> Hi Ian,
> 
> As I reported earlier it works for me.
> I haven't seen a pull request anywhere yet ?

Oops, just sent.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 13:59:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 13:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxhn-0000eZ-Rv; Tue, 18 Dec 2012 13:59:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1Tkxhm-0000eG-6C
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 13:59:34 +0000
Received: from [85.158.143.99:36691] by server-1.bemta-4.messagelabs.com id
	75/17-28401-5C670D05; Tue, 18 Dec 2012 13:59:33 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1355839171!24711526!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ2MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3854 invoked from network); 18 Dec 2012 13:59:31 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	18 Dec 2012 13:59:31 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id qBIDxUbN021829
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 08:59:30 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-39.ams2.redhat.com
	[10.36.112.39])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id qBIDxSt6025546; Tue, 18 Dec 2012 08:59:29 -0500
From: Paolo Bonzini <pbonzini@redhat.com>
To: xen-devel@lists.xen.org
Date: Tue, 18 Dec 2012 14:59:26 +0100
Message-Id: <1355839166-32272-1-git-send-email-pbonzini@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: jbeulich@suse.com
Subject: [Xen-devel] [PATCH v2] xen: find a better location for the
	real-mode trampoline
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On some machines, the location at 0x40e does not point to the beginning
of the EBDA.  Rather, it points to the beginning of the BIOS-reserved
area of the EBDA, while the option ROMs place their data below that
segment.

For this reason, 0x413 is actually a better source than 0x40e to get
the location of the real-mode trampoline.  Xen was already using it
as a second source, and this patch keeps that working.  However, just
in case, let's also fetch the information from the multiboot structure,
where the boot loader should have placed it.  This way we don't
necessarily trust one of the BIOS or the multiboot loader more than
the other.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 xen/arch/x86/boot/head.S | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 7efa155..73e1c6a 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -78,16 +78,22 @@ __start:
         cmp     $0x2BADB002,%eax
         jne     not_multiboot
 
-        /* Set up trampoline segment 64k below EBDA */
-        movzwl  0x40e,%eax          /* EBDA segment */
-        cmp     $0xa000,%eax        /* sanity check (high) */
-        jae     0f
-        cmp     $0x4000,%eax        /* sanity check (low) */
-        jae     1f
-0:
-        movzwl  0x413,%eax          /* use base memory size on failure */
-        shl     $10-4,%eax
+        /* Set up trampoline segment just below end of base memory.
+         * Compare the value in the BDA with the information from the
+         * multiboot structure (if available) and use the smallest.
+         */
+        movzwl  0x413,%eax          /* base memory size in kb */
+        testb   $1,(%ebx)           /* test MBI_MEMLIMITS */
+        jz      1f                  /* not available? BDA value will be fine */
+        cmpw    $0x100,4(%ebx)      /* is the multiboot value too small?  */
+        jl      1f                  /* yes, do not use it */
+        cmpw    %ax,4(%ebx)         /* is the BDA value the smallest?  */
+        jge     1f                  /* yes, use it */
+        mov     4(%ebx),%eax
 1:
+        shl     $10-4,%eax          /* convert to a segment number */
+
+        /* Reserve 64kb for the trampoline */
         sub     $0x1000,%eax
 
         /* From arch/x86/smpboot.c: start_eip had better be page-aligned! */
-- 
1.8.0.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:00:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:00:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxia-0000qI-9x; Tue, 18 Dec 2012 14:00:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TkxiY-0000ps-B6
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:00:22 +0000
Received: from [85.158.139.83:37286] by server-9.bemta-5.messagelabs.com id
	85/5B-10690-5F670D05; Tue, 18 Dec 2012 14:00:21 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355839219!26357516!1
X-Originating-IP: [209.85.220.178]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17372 invoked from network); 18 Dec 2012 14:00:20 -0000
Received: from mail-vc0-f178.google.com (HELO mail-vc0-f178.google.com)
	(209.85.220.178)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:00:20 -0000
Received: by mail-vc0-f178.google.com with SMTP id x16so846766vcq.37
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 06:00:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=mFflpz3XxCsZ2+9SQMQAZPiQgHzzKnQotB5HNhv2r9M=;
	b=x2m1vwRsocJVip10rS0zE7bzSu2Bj/0S+q4XyNLlZJDoCXhYctIp7sTudCpkn2Z2TP
	Ok8klVyVW889cXGCRQedn4n0L6ttFizHmwCG0uIXkAhPgLUjMyjjWEmBI0PVRL1J63LJ
	KhteJ3sa4+/PZYyHfOBAA5MA4JK2RlaF2fDVDJiy+eSw8HkY7mfYUg4HmdGVfFrs4AwM
	knUwza6JkVP4Vq0hMSVAlzxnFnBO6YYT/Giwm8tQ6G/y3FjjbMjUQDm6MRfVleQEJfBW
	qO4GYexdCEvqCa6GlcTL4ev91R14qd38ORe+JSkpxM8qhEwKYbQQU+POa7r9PpSSUzIs
	1jkA==
MIME-Version: 1.0
Received: by 10.220.239.73 with SMTP id kv9mr3060860vcb.50.1355839217966; Tue,
	18 Dec 2012 06:00:17 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Tue, 18 Dec 2012 06:00:17 -0800 (PST)
In-Reply-To: <CAHNcjxU+uXSq7gr2n3OV2Li3V21w+5F0oOnfinut_uHkmzekhQ@mail.gmail.com>
References: <CAHNcjxU+uXSq7gr2n3OV2Li3V21w+5F0oOnfinut_uHkmzekhQ@mail.gmail.com>
Date: Tue, 18 Dec 2012 14:00:17 +0000
X-Google-Sender-Auth: 37uMEMTjZSBHZDVM9B8R6HdNSio
Message-ID: <CAFLBxZaYm+ab+e7HjpbxEocDZFuJT4GEcozZ=SM1==X63Uf-Ew@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Qi Zhang <cheungrck@gmail.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Something about xenalyze
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5126242240104574749=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5126242240104574749==
Content-Type: multipart/alternative; boundary=14dae9d25234f59dce04d120ea1d

--14dae9d25234f59dce04d120ea1d
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 14, 2012 at 8:01 PM, Qi Zhang <cheungrck@gmail.com> wrote:

> I am a Ph.D. student, and I have a question about xenalyze.
>
> Part of my output of 'xenalyze --scatterplot-runstate' is:
>
> 1v0 10.822094482 2
> 1v0 10.822094482 3
> 0v0 10.822096386 3
> 0v0 10.822096386 0
>
> It seems that VCPU0 of VM1 has two states (2 and 3) at the same time
> point (10.822094482). Can you help me explain that?
> Thank you!
>

The granularity that xentrace reports is only down to the nanosecond;
things that happen less than a nanosecond apart may appear to happen at the
same time.  However, even with a 4GHz clock speed, 1ns is still only 4
clock cycles (if I'm doing my math right) -- it seems unlikely that both
trace records could be taken in that time.

Did you see any warnings about tsc skew?

If you're really keen to track this down, you'd have to do "xenalyze
--dump-raw --scatterplot-runstate", so that we could see the raw TSC values
for those records, and on what pcpu each ran.

 -George

--14dae9d25234f59dce04d120ea1d--


--===============5126242240104574749==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5126242240104574749==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 14:02:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:02:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxkg-0001DT-3D; Tue, 18 Dec 2012 14:02:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkxke-0001DD-Hk
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:02:32 +0000
Received: from [85.158.143.35:9858] by server-3.bemta-4.messagelabs.com id
	70/0E-18211-77770D05; Tue, 18 Dec 2012 14:02:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355839346!16107735!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27616 invoked from network); 18 Dec 2012 14:02:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:02:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="226449"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:02:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:02:10 +0000
Message-ID: <1355839329.14620.221.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Tue, 18 Dec 2012 14:02:09 +0000
In-Reply-To: <1355838722.14620.216.camel@zakaz.uk.xensource.com>
References: <1259250907.20121208211402@eikelenboom.it>
	<50C4A8FD.5070407@oracle.com> <341064135.20121209223602@eikelenboom.it>
	<101480918.20121210160332@eikelenboom.it>
	<1355152338.21160.37.camel@zakaz.uk.xensource.com>
	<1541942460.20121218135411@eikelenboom.it>
	<1355838722.14620.216.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, annie li <annie.li@oracle.com>,
	Eric Dumazet <edumazet@google.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Linux 3.7-rc8] DomU: WARNING: at
 net/core/skbuff.c:3444 skb_try_coalesce+0x359/0x390()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 13:52 +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 12:54 +0000, Sander Eikelenboom wrote:
> > Monday, December 10, 2012, 4:12:18 PM, you wrote:
> > 
> > > I wrote
> > >> > I have a vague recollection of a patch to set skb->truesize more
> > >> > accurately in xennet_poll (netfront), but I can't seem to find any
> > >> > reference to it now.
> > 
> > > I finally found the following in my git tree. Looks like I never sent it
> > > out.
> > 
> > > Does it help?
> > 
> > Hi Ian,
> > 
> > As I reported earlier it works for me.
> > I haven't seen a pull request anywhere yet ?
> 
> Oops, just sent.

I forgot to add your Tested-by tag though, sorry.

Anyway, this will need an Ack from Konrad before Dave picks it up I
expect.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
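For reference, tags like Tested-by and Acked-by are recorded as trailer lines in the patch's commit message. Based on the names in this thread, the applied commit would carry something like the following (illustrative only; the actual trailer set depends on what the maintainer picks up):

```text
xen/netfront: improve truesize tracking

Fixes WARN_ON from skb_try_coalesce.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
```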

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:07:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkxoy-0001WY-Q6; Tue, 18 Dec 2012 14:07:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Tkxox-0001WR-0k
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:06:59 +0000
Received: from [85.158.143.99:30545] by server-2.bemta-4.messagelabs.com id
	0A/91-30861-28870D05; Tue, 18 Dec 2012 14:06:58 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355839616!22492792!1
X-Originating-IP: [209.85.220.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4600 invoked from network); 18 Dec 2012 14:06:57 -0000
Received: from mail-vc0-f174.google.com (HELO mail-vc0-f174.google.com)
	(209.85.220.174)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:06:57 -0000
Received: by mail-vc0-f174.google.com with SMTP id d16so849139vcd.19
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 06:06:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=fDNkARq2w98IBHrPrFHLsPxDRZVgXwC1Ueccp0WNPR8=;
	b=A/hJ+Z6+O3xtNng6UjNetYFS5rrYIjLNjnXoMqELiZoL3ivFPaCi/y89bxtFajEK5O
	J/YHi8K4wTCXDOPaPXRd6xJWxVwz1j7hric4dLE/X8qkmJ4PhC8PHiTT5Q3IlrFrrCiV
	PRmI4obAIaCLF6kagt643cE26VOpn/ntoKQYgj8x38aV6odmnhaFegU35vW6ypHnZOZw
	qNMN2Q1Kj2P6sew26q8NajNVm87GJ7wJmLCkJcwoSj380a5DzGjFSUFVgKtZ3e4f+/xP
	5HDCzC6sI6CGmPM9grhZKu9uOA6og9fGgtPdZkzp8hu1aMvQSKw+zUFHJrvIdFgbRv2a
	1skg==
MIME-Version: 1.0
Received: by 10.52.64.131 with SMTP id o3mr2596766vds.116.1355839616251; Tue,
	18 Dec 2012 06:06:56 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Tue, 18 Dec 2012 06:06:56 -0800 (PST)
In-Reply-To: <1355839087.14620.219.camel@zakaz.uk.xensource.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
Date: Tue, 18 Dec 2012 14:06:56 +0000
X-Google-Sender-Auth: aDY0oW0dciMsnI-1nKG_FTdxGww
Message-ID: <CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5159641094121354598=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5159641094121354598==
Content-Type: multipart/alternative; boundary=20cf307ac797b2f8ed04d12102a9

--20cf307ac797b2f8ed04d12102a9
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 18, 2012 at 1:58 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Tue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:
> > One of the requests from the xenorg.uservoice.com page that had a
> > moderate amount of interest was to allow block devices to be resized.
> > There's a description here:
> >
> >
> https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize
> >
> >
> > I have no idea what this would take -- can anyone comment?
>
> Doesn't that already work? I thought this was patched in the PV block
> drivers ages ago...
>
> Yes, http://wiki.xen.org/wiki/XenParavirtOps lists it under 2.6.36.
>
> Maybe this is a missing feature of (lib)xl vs xend?
>

"xm help" doesn't show a "block-resize" command, nor does grepping through
tools for "resize" turn up anything.

Would someone be willing to do some investigation into whether such a
command is implemented in the protocol, and what it would take to get a "xm
block-resize" command working?  (Not necessarily do it, but have an idea
what probably needs to be done.)

 -George

--20cf307ac797b2f8ed04d12102a9
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">On Tue, Dec 18, 2012 at 1:58 PM, Ian Campbell <span dir=3D=
"ltr">&lt;<a href=3D"mailto:Ian.Campbell@citrix.com" target=3D"_blank">Ian.=
Campbell@citrix.com</a>&gt;</span> wrote:<br><div class=3D"gmail_extra"><di=
v class=3D"gmail_quote">
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"HOEnZb"><div class=3D"h5">On T=
ue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:<br>
&gt; One of the requests from the <a href=3D"http://xenorg.uservoice.com" t=
arget=3D"_blank">xenorg.uservoice.com</a> page that had a<br>
&gt; moderate amount of interest was to allow block devices to be resized.<=
br>
&gt; There&#39;s a description here:<br>
&gt;<br>
&gt; <a href=3D"https://xenorg.uservoice.com/forums/172169-xen-development/=
suggestions/3140313-implement-block-device-resize" target=3D"_blank">https:=
//xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-im=
plement-block-device-resize</a><br>

&gt;<br>
&gt;<br>
&gt; I have no idea what this would take -- can anyone comment?<br>
<br>
</div></div>Doesn&#39;t that already work? I thought this was patched in t=
he PV block<br>
drivers ages ago...<br>
<br>
Yes, <a href=3D"http://wiki.xen.org/wiki/XenParavirtOps" target=3D"_blank">=
http://wiki.xen.org/wiki/XenParavirtOps</a> lists it under 2.6.36.<br>
<br>
Maybe this is a missing feature of (lib)xl vs xend?<br></blockquote><div><b=
r></div><div>&quot;xm help&quot; doesn&#39;t show a &quot;block-resize&quot=
; command, nor does grepping through tools for &quot;resize&quot; turn up a=
nything.<br>
<br></div><div>Would someone be willing to do some investigation into wheth=
er such a command is implemented in the protocol, and what it would take to=
 get a &quot;xm block-resize&quot; command working?=A0 (Not necessarily do =
it, but have an idea what probably needs to be done.)<br>
<br></div><div>=A0-George<br></div></div></div></div>

--20cf307ac797b2f8ed04d12102a9--


--===============5159641094121354598==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5159641094121354598==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 14:11:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:11:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxtJ-0001qm-6M; Tue, 18 Dec 2012 14:11:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkxtH-0001qV-Ta
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 14:11:28 +0000
Received: from [193.109.254.147:22612] by server-12.bemta-14.messagelabs.com
	id 28/AF-06523-F8970D05; Tue, 18 Dec 2012 14:11:27 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355839856!1744369!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28200 invoked from network); 18 Dec 2012 14:10:57 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 14:10:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIEAoq0027939
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 14:10:51 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIEAoUb022172
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 14:10:50 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIEAne1022358; Tue, 18 Dec 2012 08:10:49 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 06:10:49 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 288261C08ED; Tue, 18 Dec 2012 09:10:48 -0500 (EST)
Date: Tue, 18 Dec 2012 09:10:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121218141048.GC4518@phenom.dumpdata.com>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: netdev@vger.kernel.org, annie li <annie.li@oracle.com>,
	xen-devel@lists.xensource.com, Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 01:51:51PM +0000, Ian Campbell wrote:
> Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller
> than that. We have already accounted for this in
> NETFRONT_SKB_CB(skb)->pull_to so use that instead.
> 
> Fixes WARN_ON from skb_try_coalesce.

This should probably be also on the stable tree for 3.7 at least?

> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Sander Eikelenboom <linux@eikelenboom.it>
  ^^ - Reported-by: 

> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  ^^ - Acked-by:

> Cc: annie li <annie.li@oracle.com>
> Cc: xen-devel@lists.xensource.com
> Cc: netdev@vger.kernel.org
> Cc: stable@kernel.org # 3.7.x only
> ---
>  drivers/net/xen-netfront.c |   15 +++++----------
>  1 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index caa0110..b06ef81 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -971,17 +971,12 @@ err:
>  		 * overheads. Here, we add the size of the data pulled
>  		 * in xennet_fill_frags().
>  		 *
> -		 * We also adjust for any unused space in the main
> -		 * data area by subtracting (RX_COPY_THRESHOLD -
> -		 * len). This is especially important with drivers
> -		 * which split incoming packets into header and data,
> -		 * using only 66 bytes of the main data area (see the
> -		 * e1000 driver for example.)  On such systems,
> -		 * without this last adjustement, our achievable
> -		 * receive throughout using the standard receive
> -		 * buffer size was cut by 25%(!!!).
> +		 * We also adjust for the __pskb_pull_tail done in
> +		 * handle_incoming_queue which pulls data from the
> +		 * frags into the head area, which is already
> +		 * accounted in RX_COPY_THRESHOLD.
>  		 */
> -		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> +		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
>  		skb->len += skb->data_len;
>  
>  		if (rx->flags & XEN_NETRXF_csum_blank)
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  		skb->len += skb->data_len;
>  
>  		if (rx->flags & XEN_NETRXF_csum_blank)
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:13:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxvL-00021J-O1; Tue, 18 Dec 2012 14:13:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkxvK-00021A-Mx
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:13:34 +0000
Received: from [193.109.254.147:45092] by server-12.bemta-14.messagelabs.com
	id C8/E2-06523-E0A70D05; Tue, 18 Dec 2012 14:13:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355839999!10510792!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31410 invoked from network); 18 Dec 2012 14:13:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:13:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="226820"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:13:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:13:09 +0000
Message-ID: <1355839988.14620.225.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Date: Tue, 18 Dec 2012 14:13:08 +0000
In-Reply-To: <1355831212.14620.211.camel@zakaz.uk.xensource.com>
References: <50C9F2CA.1010602@jhuapl.edu>
	<1355830264.14620.196.camel@zakaz.uk.xensource.com>
	<1355831212.14620.211.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVHVlLCAyMDEyLTEyLTE4IGF0IDExOjQ2ICswMDAwLCBJYW4gQ2FtcGJlbGwgd3JvdGU6Cj4g
T24gVHVlLCAyMDEyLTEyLTE4IGF0IDExOjMxICswMDAwLCBJYW4gQ2FtcGJlbGwgd3JvdGU6Cj4g
PiBPbiBUaHUsIDIwMTItMTItMTMgYXQgMTU6MjIgKzAwMDAsIE1hdHRoZXcgRmlvcmF2YW50ZSB3
cm90ZToKPiA+ID4gSWFuLCB0aGlzIG9uZSBpcyBzcGVjaWFsIGp1c3QgZm9yIHlvdS4gSSdtIHNl
bmRpbmcgaXQgYXMgYW4gYXR0YWNobWVudCAKPiA+ID4gYmVjYXVzZSBteSBlbWFpbCBjbGllbnQg
d2lsbCBtYW5nbGUgaXQuCj4gPiA+IFRoaXMgcGF0Y2ggd2lsbCByZW1vdmUgdGhlIGNtYWtlIGRl
cGVuZGVuY3kgZnJvbSB4ZW4gcHJpb3IgdG8gYXV0b2NvbmYgCj4gPiA+IHN0dWJkb20KPiA+IAo+
ID4gVGhhbmtzLCBJIG1lcmdlZCB0aGlzIGFzIGRlc2NyaWJlZCBhbmQgYWxzbyBmb2xkZWQgIkRp
c2FibGUgY2FtbC1zdHViZG9tCj4gPiBieSBkZWZhdWx0IiBpbnRvIHRoZSBwYXRjaCB3aGljaCBh
ZGRlZCBpdCBlbmFibGVkLgo+ID4gCj4gPiBIb3dldmVyIHRoaXMgc3RpbGwgZmFpbHMgZm9yIG1l
IHdoZW4gdnRwbSBpcyBub3QgZW5hYmxlZDoKPiA+ICAgICAgICAgbWFrZVsxXTogKioqIE5vIHJ1
bGUgdG8gbWFrZSB0YXJnZXQgYG1pbmktb3MteDg2XzY0LXZ0cG0nLCBuZWVkZWQgYnkgYHZ0cG0t
c3R1YmRvbScuICBTdG9wLgo+ID4gICAgICAgICBtYWtlWzFdOiBMZWF2aW5nIGRpcmVjdG9yeSBg
L2xvY2FsL3NjcmF0Y2gvaWFuYy9kZXZlbC9jb21taXR0ZXIuZ2l0L3N0dWJkb20nCj4gPiAgICAg
ICAgIG1ha2U6ICoqKiBbaW5zdGFsbC1zdHViZG9tXSBFcnJvciAyCj4gPiAKPiA+IFNvbWV0aGlu
ZyB0byB3aXRoIHZ0cG1tZ3Igbm90IGJlaW5nIGNvbmRpdGlvbmFsPwo+IAo+IExvb2tzIGxpa2Ug
YSBzaW1wbGUgdGhpbmtvLiBJJ2xsIG1lcmdlIHRoZSBmb2xsb3dpbmcgaW50byAic3R1YmRvbTog
QWRkCj4gYXV0b2NvbmYiLCBob3BlZnVsbHkgbXkgdGVzdGluZyB3b24ndCBmaW5kIGFueSBvdGhl
ciBpc3N1ZXMuCgpJIHdhcyBqdXN0IGFib3V0IHRvIHB1c2ggdGhpcyBvdXQgd2hlbiBJIHRob3Vn
aCAiaHJtLCBtYXliZSBJIHNob3VsZApjaGVjayB0aGlzIHdpdGggY21ha2UgaW5zdGFsbGVkIi4g
SSdtIGFmcmFpZCBpdCBpcyBicm9rZW4uIFNwZXcgaXMKYmVsb3cuIExvb2tzIGxpa2UgaXQncyBu
b3QgZmluZGluZyB0aGUgZ21wIGhlYWRlcnMgLS0gSSBjYW4gc2VlIHRoZW0gaW4Kc3R1YmRvbS9n
bXAteDg2XzY0L2dtcC5oIGJ1dCBJIGNhbid0IHNlZSBhbnl0aGluZyB3aGljaCBsb29rcyBsaWtl
IGl0CmFkZHMgdGhlIG5lY2Vzc2FyeSAtSSB0byBDRkxBR1Mgb3IgYW55d2hlcmUuCgpCcmFuY2gg
aXMgYXQ6CiAgICAgICAgZ2l0Oi8veGVuYml0cy54ZW4ub3JnL3Blb3BsZS9pYW5jL3hlbi11bnN0
YWJsZS5naXQgdnRwbTMKClNvcnJ5IGlmIEkndmUgYnJva2VuIHNvbWV0aGluZyBzb21ld2hlcmUg
YWxvbmcgdGhlIHdheS4gUGxlYXNlIGNhbiB5b3UKdGVzdCBmdWxseSBib3RoIHdpdGggYW5kIHdp
dGhvdXQgY21ha2UgYW5kIHJlc3VibWl0LiBGb3IgcmVmZXJlbmNlIHRoZQpzY3JpcHQgSSBydW4g
YmVmb3JlIGNvbW1pdHRpbmcgaXM6CiAgICAgICAgIyEvYmluL2Jhc2gKICAgICAgICBzZXQgLWV4
CiAgICAgICAgCiAgICAgICAgZXhwb3J0IFBBVEg9L3Vzci9saWIvY2NhY2hlOiRQQVRICiAgICAg
ICAgCiAgICAgICAgKAogICAgICAgICAgICBtYWtlIGRpc3RjbGVhbiAtajEyIC1zCiAgICAgICAg
ICAgIGdpdCBjbGVhbiAtZiAtZHgKICAgICAgICAgICAgLi9jb25maWd1cmUKICAgICAgICAgICAg
bWFrZSBkaXN0IC1qMTIgLXMKICAgICAgICAgICAgZmluZCBkaXN0IHwgc29ydCA+IC4uL0ZJTEVf
TElTVAogICAgICAgICkgMj4mMSB8IHRlZSAuLi9DT01NSVRURVIuTE9HCgpUaGlzIG1ha2VzIHBy
ZXR0eSBzdXJlIGl0IGlzIGRvaW5nIGEgZnJlc2ggYnVpbGQgd2l0aCBubyBsZWZ0IG92ZXIgY3J1
ZnQKaW5zdGFsbGVkLgoKSWFuLgoKY2MgLW1uby1yZWQtem9uZSAtTzEgLWZuby1vbWl0LWZyYW1l
LXBvaW50ZXIgIC1tNjQgLW1uby1yZWQtem9uZSAtZm5vLXJlb3JkZXItYmxvY2tzIC1mbm8tYXN5
bmNocm9ub3VzLXVud2luZC10YWJsZXMgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgICAtZm5vLXN0YWNrLXByb3RlY3RvciAtZm5vLWV4Y2VwdGlvbnMgLWlzeXN0ZW0gL2xv
Y2FsL3NjcmF0Y2gvaWFuYy9kZXZlbC9jb21taXR0ZXIuZ2l0L3N0dWJkb20vLi4vZXh0cmFzL21p
bmktb3MvaW5jbHVkZSAtRF9fTUlOSU9TX18gLURIQVZFX0xJQkMgLWlzeXN0ZW0gL2xvY2FsL3Nj
cmF0Y2gvaWFuYy9kZXZlbC9jb21taXR0ZXIuZ2l0L3N0dWJkb20vLi4vZXh0cmFzL21pbmktb3Mv
aW5jbHVkZS9wb3NpeCAtaXN5c3RlbSAvbG9jYWwvc2NyYXRjaC9pYW5jL2RldmVsL2NvbW1pdHRl
ci5naXQvc3R1YmRvbS8uLi90b29scy94ZW5zdG9yZSAgLWlzeXN0ZW0gL2xvY2FsL3NjcmF0Y2gv
aWFuYy9kZXZlbC9jb21taXR0ZXIuZ2l0L3N0dWJkb20vLi4vZXh0cmFzL21pbmktb3MvaW5jbHVk
ZS94ODYgLWlzeXN0ZW0gL2xvY2FsL3NjcmF0Y2gvaWFuYy9kZXZlbC9jb21taXR0ZXIuZ2l0L3N0
dWJkb20vLi4vZXh0cmFzL21pbmktb3MvaW5jbHVkZS94ODYveDg2XzY0IC1VIF9fbGludXhfXyAt
VSBfX0ZyZWVCU0RfXyAtVSBfX3N1bl9fIC1ub3N0ZGluYyAtaXN5c3RlbSAvbG9jYWwvc2NyYXRj
aC9pYW5jL2RldmVsL2NvbW1pdHRlci5naXQvc3R1YmRvbS8uLi9leHRyYXMvbWluaS1vcy9pbmNs
dWRlL3Bvc2l4IC1pc3lzdGVtIC9sb2NhbC9zY3JhdGNoL2lhbmMvZGV2ZWwvY29tbWl0dGVyLmdp
dC9zdHViZG9tL2Nyb3NzLXJvb3QteDg2XzY0L3g4Nl82NC14ZW4tZWxmL2luY2x1ZGUgLWlzeXN0
ZW0gL3Vzci9saWIvZ2NjL3g4Nl82NC1saW51eC1nbnUvNC40LjUvaW5jbHVkZSAtaXN5c3RlbSAv
bG9jYWwvc2NyYXRjaC9pYW5jL2RldmVsL2NvbW1pdHRlci5naXQvc3R1YmRvbS9sd2lwLXg4Nl82
NC9zcmMvaW5jbHVkZSAtaXN5c3RlbSAvbG9jYWwvc2NyYXRjaC9pYW5jL2RldmVsL2NvbW1pdHRl
ci5naXQvc3R1YmRvbS9sd2lwLXg4Nl82NC9zcmMvaW5jbHVkZS9pcHY0IC1JL2xvY2FsL3NjcmF0
Y2gvaWFuYy9kZXZlbC9jb21taXR0ZXIuZ2l0L3N0dWJkb20vaW5jbHVkZSAtSS9sb2NhbC9zY3Jh
dGNoL2lhbmMvZGV2ZWwvY29tbWl0dGVyLmdpdC9zdHViZG9tLy4uL3hlbi9pbmNsdWRlIC1JLi4v
dHBtX2VtdWxhdG9yLXg4Nl82NC9idWlsZCAtSS4uL3RwbV9lbXVsYXRvci14ODZfNjQvdHBtIC1J
Li4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8gLUkuLi90cG1fZW11bGF0b3IteDg2XzY0ICAt
YyAtbyB2dHBtLm8gdnRwbS5jCkluIGZpbGUgaW5jbHVkZWQgZnJvbSAuLi90cG1fZW11bGF0b3It
eDg2XzY0L2NyeXB0by9yc2EuaDoyMiwKICAgICAgICAgICAgICAgICBmcm9tIC4uL3RwbV9lbXVs
YXRvci14ODZfNjQvdHBtL3RwbV9zdHJ1Y3R1cmVzLmg6MjIsCiAgICAgICAgICAgICAgICAgZnJv
bSAuLi90cG1fZW11bGF0b3IteDg2XzY0L3RwbS90cG1fbWFyc2hhbGxpbmcuaDoyMSwKICAgICAg
ICAgICAgICAgICBmcm9tIHZ0cG0uYzozMToKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8v
Ym4uaDoyNzoxNzogZXJyb3I6IGdtcC5oOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5CkluIGZp
bGUgaW5jbHVkZWQgZnJvbSAuLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9yc2EuaDoyMiwK
ICAgICAgICAgICAgICAgICBmcm9tIC4uL3RwbV9lbXVsYXRvci14ODZfNjQvdHBtL3RwbV9zdHJ1
Y3R1cmVzLmg6MjIsCiAgICAgICAgICAgICAgICAgZnJvbSAuLi90cG1fZW11bGF0b3IteDg2XzY0
L3RwbS90cG1fbWFyc2hhbGxpbmcuaDoyMSwKICAgICAgICAgICAgICAgICBmcm9tIHZ0cG0uYzoz
MToKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8vYm4uaDoyODogZXJyb3I6IGV4cGVjdGVk
IOKAmD3igJksIOKAmCzigJksIOKAmDvigJksIOKAmGFzbeKAmSBvciDigJhfX2F0dHJpYnV0ZV9f
4oCZIGJlZm9yZSDigJh0cG1fYm5fdOKAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9i
bi5oOjMxOiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYYeKAmQouLi90cG1fZW11
bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjMzOiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZv
cmUg4oCYYeKAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjM1OiBlcnJvcjog
ZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYYeKAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2Ny
eXB0by9ibi5oOjM3OiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYYeKAmQouLi90
cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjM5OiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKA
mSBiZWZvcmUg4oCYYeKAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjQxOiBl
cnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYYeKAmQouLi90cG1fZW11bGF0b3IteDg2
XzY0L2NyeXB0by9ibi5oOjQzOiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYYeKA
mQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjQ1OiBlcnJvcjogZXhwZWN0ZWQg
4oCYKeKAmSBiZWZvcmUg4oCYYeKAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5o
OjQ3OiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYb3V04oCZCi4uL3RwbV9lbXVs
YXRvci14ODZfNjQvY3J5cHRvL2JuLmg6NDk6IGVycm9yOiBleHBlY3RlZCBkZWNsYXJhdGlvbiBz
cGVjaWZpZXJzIG9yIOKAmC4uLuKAmSBiZWZvcmUg4oCYdHBtX2JuX3TigJkKLi4vdHBtX2VtdWxh
dG9yLXg4Nl82NC9jcnlwdG8vYm4uaDo1MTogZXJyb3I6IGV4cGVjdGVkIOKAmCnigJkgYmVmb3Jl
IOKAmGHigJkKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8vYm4uaDo1MzogZXJyb3I6IGV4
cGVjdGVkIOKAmCnigJkgYmVmb3JlIOKAmGHigJkKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlw
dG8vYm4uaDo1NTogZXJyb3I6IGV4cGVjdGVkIOKAmCnigJkgYmVmb3JlIOKAmGHigJkKLi4vdHBt
X2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8vYm4uaDo1NzogZXJyb3I6IGV4cGVjdGVkIOKAmCnigJkg
YmVmb3JlIOKAmHJlc+KAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjU5OiBl
cnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYcmVz4oCZCi4uL3RwbV9lbXVsYXRvci14
ODZfNjQvY3J5cHRvL2JuLmg6NjE6IGVycm9yOiBleHBlY3RlZCDigJgp4oCZIGJlZm9yZSDigJhy
ZXPigJkKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8vYm4uaDo2MzogZXJyb3I6IGV4cGVj
dGVkIOKAmCnigJkgYmVmb3JlIOKAmHJlc+KAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0
by9ibi5oOjY1OiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYcmVz4oCZCi4uL3Rw
bV9lbXVsYXRvci14ODZfNjQvY3J5cHRvL2JuLmg6Njc6IGVycm9yOiBleHBlY3RlZCDigJgp4oCZ
IGJlZm9yZSDigJhyZXPigJkKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlwdG8vYm4uaDo2OTog
ZXJyb3I6IGV4cGVjdGVkIOKAmCnigJkgYmVmb3JlIOKAmHJlc+KAmQouLi90cG1fZW11bGF0b3It
eDg2XzY0L2NyeXB0by9ibi5oOjcxOiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCY
cmVz4oCZCi4uL3RwbV9lbXVsYXRvci14ODZfNjQvY3J5cHRvL2JuLmg6NzM6IGVycm9yOiBleHBl
Y3RlZCDigJgp4oCZIGJlZm9yZSDigJhyZXPigJkKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC9jcnlw
dG8vYm4uaDo3NTogZXJyb3I6IGV4cGVjdGVkIOKAmCnigJkgYmVmb3JlIOKAmHJlc+KAmQouLi90
cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjc3OiBlcnJvcjogZXhwZWN0ZWQg4oCYKeKA
mSBiZWZvcmUg4oCYcmVz4oCZCi4uL3RwbV9lbXVsYXRvci14ODZfNjQvY3J5cHRvL2JuLmg6Nzk6
IGVycm9yOiBleHBlY3RlZCDigJgp4oCZIGJlZm9yZSDigJhyZXPigJkKLi4vdHBtX2VtdWxhdG9y
LXg4Nl82NC9jcnlwdG8vYm4uaDo4MTogZXJyb3I6IGV4cGVjdGVkIOKAmCnigJkgYmVmb3JlIOKA
mHJlc+KAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9ibi5oOjgzOiBlcnJvcjogZXhw
ZWN0ZWQg4oCYKeKAmSBiZWZvcmUg4oCYcmVz4oCZCkluIGZpbGUgaW5jbHVkZWQgZnJvbSAuLi90
cG1fZW11bGF0b3IteDg2XzY0L3RwbS90cG1fc3RydWN0dXJlcy5oOjIyLAogICAgICAgICAgICAg
ICAgIGZyb20gLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC90cG0vdHBtX21hcnNoYWxsaW5nLmg6MjEs
CiAgICAgICAgICAgICAgICAgZnJvbSB2dHBtLmM6MzE6Ci4uL3RwbV9lbXVsYXRvci14ODZfNjQv
Y3J5cHRvL3JzYS5oOjI1OiBlcnJvcjogZXhwZWN0ZWQgc3BlY2lmaWVyLXF1YWxpZmllci1saXN0
IGJlZm9yZSDigJh0cG1fYm5fdOKAmQouLi90cG1fZW11bGF0b3IteDg2XzY0L2NyeXB0by9yc2Eu
aDozNTogZXJyb3I6IGV4cGVjdGVkIHNwZWNpZmllci1xdWFsaWZpZXItbGlzdCBiZWZvcmUg4oCY
dHBtX2JuX3TigJkKSW4gZmlsZSBpbmNsdWRlZCBmcm9tIC4uL3RwbV9lbXVsYXRvci14ODZfNjQv
dHBtL3RwbV9tYXJzaGFsbGluZy5oOjIxLAogICAgICAgICAgICAgICAgIGZyb20gdnRwbS5jOjMx
OgouLi90cG1fZW11bGF0b3IteDg2XzY0L3RwbS90cG1fc3RydWN0dXJlcy5oOiBJbiBmdW5jdGlv
biDigJhmcmVlX1RQTV9QRVJNQU5FTlRfREFUQeKAmToKLi4vdHBtX2VtdWxhdG9yLXg4Nl82NC90
cG0vdHBtX3N0cnVjdHVyZXMuaDoyMjQ0OiBlcnJvcjog4oCYdHBtX3JzYV9wcml2YXRlX2tleV90
4oCZIGhhcyBubyBtZW1iZXIgbmFtZWQg4oCYc2l6ZeKAmQptYWtlWzJdOiAqKiogW3Z0cG0ub10g
RXJyb3IgMQptYWtlWzJdOiBMZWF2aW5nIGRpcmVjdG9yeSBgL2xvY2FsL3NjcmF0Y2gvaWFuYy9k
ZXZlbC9jb21taXR0ZXIuZ2l0L3N0dWJkb20vdnRwbScKbWFrZVsxXTogKioqIFt2dHBtXSBFcnJv
ciAyCm1ha2VbMV06IExlYXZpbmcgZGlyZWN0b3J5IGAvbG9jYWwvc2NyYXRjaC9pYW5jL2RldmVs
L2NvbW1pdHRlci5naXQvc3R1YmRvbScKbWFrZTogKioqIFtpbnN0YWxsLXN0dWJkb21dIEVycm9y
IDIKCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVu
LWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMu
eGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:15:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkxxN-0002C0-9b; Tue, 18 Dec 2012 14:15:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkxxM-0002Bp-57
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 14:15:40 +0000
Received: from [193.109.254.147:32190] by server-9.bemta-14.messagelabs.com id
	CF/D3-24482-B8A70D05; Tue, 18 Dec 2012 14:15:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355840129!10429109!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31560 invoked from network); 18 Dec 2012 14:15:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:15:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="226889"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:15:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:15:26 +0000
Message-ID: <1355840125.14620.227.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 18 Dec 2012 14:15:25 +0000
In-Reply-To: <20121218141048.GC4518@phenom.dumpdata.com>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<20121218141048.GC4518@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 14:10 +0000, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 18, 2012 at 01:51:51PM +0000, Ian Campbell wrote:
> > Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller
> > than that. We have already accounted for this in
> > NETFRONT_SKB_CB(skb)->pull_to so use that instead.
> > 
> > Fixes WARN_ON from skb_try_coalesce.
> 
> This should probably be also on the stable tree for 3.7 at least?

Yes, hence "Cc: stable@kernel.org # 3.7.x only" below ;-)

> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Sander Eikelenboom <linux@eikelenboom.it>
>   ^^ - Reported-by: 
> 
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>   ^^ - Acked-by:
> 
> > Cc: annie li <annie.li@oracle.com>
> > Cc: xen-devel@lists.xensource.com
> > Cc: netdev@vger.kernel.org
> > Cc: stable@kernel.org # 3.7.x only
> > ---
> >  drivers/net/xen-netfront.c |   15 +++++----------
> >  1 files changed, 5 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index caa0110..b06ef81 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -971,17 +971,12 @@ err:
> >  		 * overheads. Here, we add the size of the data pulled
> >  		 * in xennet_fill_frags().
> >  		 *
> > -		 * We also adjust for any unused space in the main
> > -		 * data area by subtracting (RX_COPY_THRESHOLD -
> > -		 * len). This is especially important with drivers
> > -		 * which split incoming packets into header and data,
> > -		 * using only 66 bytes of the main data area (see the
> > -		 * e1000 driver for example.)  On such systems,
> > -		 * without this last adjustement, our achievable
> > -		 * receive throughout using the standard receive
> > -		 * buffer size was cut by 25%(!!!).
> > +		 * We also adjust for the __pskb_pull_tail done in
> > +		 * handle_incoming_queue which pulls data from the
> > +		 * frags into the head area, which is already
> > +		 * accounted in RX_COPY_THRESHOLD.
> >  		 */
> > -		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> > +		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
> >  		skb->len += skb->data_len;
> >  
> >  		if (rx->flags & XEN_NETRXF_csum_blank)
> > -- 
> > 1.7.2.5
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:19:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tky0i-0002Q6-3t; Tue, 18 Dec 2012 14:19:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tky0g-0002Pw-5q
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 14:19:06 +0000
Received: from [85.158.143.35:20939] by server-2.bemta-4.messagelabs.com id
	77/94-30861-95B70D05; Tue, 18 Dec 2012 14:19:05 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355840225!16012105!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14511 invoked from network); 18 Dec 2012 14:17:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:17:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; d="asc'?scan'208";a="226952"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:17:06 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:17:04 +0000
Message-ID: <1355840214.5376.0.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 18 Dec 2012 15:16:54 +0100
In-Reply-To: <50D05E9B.5050605@eu.citrix.com>
References: <patchbomb.1355783337@Solace> <50D05E9B.5050605@eu.citrix.com>
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0 of 5 v3] xen: sched_credit: fix picking
 and tickling and add some tracing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0249682216146938836=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0249682216146938836==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-EtNuiw/N1rfkb74dPhkC"

--=-EtNuiw/N1rfkb74dPhkC
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2012-12-18 at 12:16 +0000, George Dunlap wrote:
> >     1/5 xen: sched_credit: define and use curr_on_cpu(cpu)
> >     2/5 xen: sched_credit: improve picking up the idle CPU for a VCPU
> >   * 3/5 xen: sched_credit: improve tickling of idle CPUs
> >   * 4/5 xen: tracing: introduce per-scheduler trace event IDs
> >     5/5 xen: sched_credit: add some tracing
> 
> Keir, Jan: All of the patches have Acks from me.
> 
Nice! Thanks George for looking at this (again)! :-P

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-EtNuiw/N1rfkb74dPhkC
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDQetYACgkQk4XaBE3IOsReTACaA7+T1S6a+enK1Da1cvDst/4Y
I80An3Jv5yWXp+XVgPM5fl5qyi/COXVH
=gJWx
-----END PGP SIGNATURE-----

--=-EtNuiw/N1rfkb74dPhkC--


--===============0249682216146938836==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0249682216146938836==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 14:26:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tky81-0002nL-Ap; Tue, 18 Dec 2012 14:26:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tky80-0002nB-5T
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:26:40 +0000
Received: from [85.158.137.99:24858] by server-7.bemta-3.messagelabs.com id
	64/C4-23008-F1D70D05; Tue, 18 Dec 2012 14:26:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355840783!14816118!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23979 invoked from network); 18 Dec 2012 14:26:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:26:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="227226"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:26:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:26:13 +0000
Message-ID: <1355840771.14620.233.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Date: Tue, 18 Dec 2012 14:26:11 +0000
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48BDDDC20E@aplesstripe.dom1.jhuapl.edu>
References: <1355412718-4610-1-git-send-email-matthew.fioravante@jhuapl.edu>
	, <CAFLBxZZQ9Nw09L01ekS4k0W92gZ27F+5R82192HgO_tWEp02Ug@mail.gmail.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC20D@aplesstripe.dom1.jhuapl.edu>
	,<50CB4710.8010502@citrix.com>
	<068F06DC4D106941B297C0C5F9F446EA48BDDDC20E@aplesstripe.dom1.jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] Disable caml-stubdom by default
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

*Please* can you stop top posting.

    A: Because it messes up the order in which people normally read text.
    Q: Why is top-posting such a bad thing?

On Fri, 2012-12-14 at 16:20 +0000, Fioravante, Matthew E. wrote:
> I thought the purpose of caml stubdom like c stubdom is an example for
> developing new domains. If that is the case, then I don't think it
> makes sense to build and install it by default. End users who are just
> building and installing xen don't need to have it.
> 
> The other problem is that c-stubdom and caml-stubdom have main()
> functions but also define CONFIG_TEST=y in their config files. The
> test framework also defines a main() function, causing a multiple
> definition linker error.
> 
> Whats the intention here? Do we want them to run the test code or do
> we want them to just be simple stubs with hello world main functions?

I think the Caml one should certainly drop CONFIG_TEST. Any "test"
behaviour of that domain should be done in ocaml not in C.

For the C one I'm less sure. I'd be in favour of nuking it but then
nothing (AFAIK) would be using CONFIG_TEST. Since the C one doesn't
really do anything else I guess it may as well continue to be the
CONFIG_TEST-bed. But then main might as well be nuked.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:29:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:29:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyAs-0002v9-V5; Tue, 18 Dec 2012 14:29:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TkyAr-0002v2-OM
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:29:37 +0000
Received: from [193.109.254.147:61950] by server-8.bemta-14.messagelabs.com id
	C4/33-26341-0DD70D05; Tue, 18 Dec 2012 14:29:36 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355840938!1747039!1
X-Originating-IP: [209.85.220.177]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31257 invoked from network); 18 Dec 2012 14:28:59 -0000
Received: from mail-vc0-f177.google.com (HELO mail-vc0-f177.google.com)
	(209.85.220.177)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:28:59 -0000
Received: by mail-vc0-f177.google.com with SMTP id m8so891503vcd.22
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 06:28:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=xsuBKhv0p9g1+eJUxTiAcTbxNL+ZJ47+BJKJNkuC7eo=;
	b=boJU+OxH2RdCiCM+fBOTIQaXtFN5nYAu49Qri/mUjAAsJvkyP01SC5plnNFocIgbVg
	laGapqsZGiDI6TK3WlS/pGhX/nAzBMR8hVW7FtCxZ55bIylqwJRw2g4z6QUod0peoO25
	mLK3h8rKWGGHKRm1iBIRK/8osaogSQp8gYYiWW396fq3+4qC3vP2ChxvguHko102Vurt
	0xDetx7EJHttooSma7yh98O3nKYPhAgoJGvEGUchhn5muTJAuDTYAdD+Odqr5nUSTreX
	fjM44FpLUimRADCrP01ZXAvVnuiaZ6g2VSa71BQywcGqkmvtt1pNrrdXR7itYl0jqcnv
	ydXA==
MIME-Version: 1.0
Received: by 10.220.108.79 with SMTP id e15mr3149370vcp.61.1355840938067; Tue,
	18 Dec 2012 06:28:58 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Tue, 18 Dec 2012 06:28:57 -0800 (PST)
In-Reply-To: <39A8C47B5D7DD22A52A3DA51@Ximines.local>
References: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
	<39A8C47B5D7DD22A52A3DA51@Ximines.local>
Date: Tue, 18 Dec 2012 14:28:57 +0000
X-Google-Sender-Auth: FgS2vE9W7zeehYQrMTvukAx9A9Y
Message-ID: <CAFLBxZY3zXaNQ93zq8YuKQm=+Hc7SoN2uVsjgJ1kZb_=N5CszA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Alex Bligh <alex@alex.org.uk>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 development update, 15 Oct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4090158498743134980=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4090158498743134980==
Content-Type: multipart/alternative; boundary=f46d043892a77c481304d121519f

--f46d043892a77c481304d121519f
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Dec 18, 2012 at 1:57 PM, Alex Bligh <alex@alex.org.uk> wrote:

>
>
> --On 15 October 2012 17:19:05 +0100 George Dunlap <
> George.Dunlap@eu.citrix.com> wrote:
>
>  * Make storage migration possible
>>   owner: ?
>>   status: ?
>>   There needs to be a way, either via command-line or via some hooks,
>>   that someone can build a "storage migration" feature on top of libxl
>>   or xl.
>>
>
> We have this working with qemu-xen, qcow2 and snapshot rebase. At a libxl
> level (but not an xl level), everything seems to be there.
>

Can you describe in more detail how you implement this?  Do you have a
script or something?

 -George

--f46d043892a77c481304d121519f--


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Tue Dec 18 14:31:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:31:37 +0000
Date: Tue, 18 Dec 2012 09:31:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com, martin.petersen@oracle.com,
	felipe.franciosi@citrix.com, matthew@wil.cx, axboe@kernel.dk
Message-ID: <20121218143109.GA24471@phenom.dumpdata.com>
Subject: [Xen-devel] RFC v1: Xen block protocol overhaul - problem statement
 (with pictures!)


Hey,

I am including some folks that are not always on Xen-devel to see if they
have some extra ideas or can correct my misunderstandings.

This is very much RFC - so there are bound to be some bugs.
The original is here
https://docs.google.com/document/d/1Vh5T8Z3Tx3sUEhVB0DnNDKBNiqB_ZA8Z5YVqAsCIjuI/edit
in case one wishes to modify/provide comment on that one.


There are outstanding issues we have now with the block protocol.
Note: I am assuming a 64-bit guest/host, as the sizes of the structures
change on 32-bit. I am also attaching the header for the blkif ring
as of today.

A) The segment count is limited to 11 pages per request. It means we can at
most squeeze in 44kB per request. The ring can hold 32 requests (the largest
power of two below 36), meaning we can have about 1.4MB of I/O outstanding.

B) The producer and consumer indexes are on the same cache line. On present
hardware that means the reader and writer will compete for the same cacheline,
causing a ping-pong between sockets.

C) The requests and responses are on the same ring. This again causes the
ping-pong between sockets as the ownership of the cache line will shift between
sockets.

D) Cache alignment. Currently the protocol is 16-bit aligned. This is awkward
as the requests and responses sometimes fit within a cacheline and sometimes
straddle it.

E) Interrupt mitigation. We are currently doing a kick whenever we are
done "processing" the ring. There are better ways to do this - we
could use existing network interrupt mitigation techniques to make the
code poll when there is a lot of data.

F) Latency. The processing of the request limits us to only 44kB, which means
that a 1MB chunk of data - which on contemporary devices would need only one
I/O request - would be split up into multiple requests, inadvertently delaying
the processing of said block.

G) Future extensions. DIF/DIX for integrity. There might
be others in the future and it would be good to leave space for extra
flags TBD.

H) Separate the response and request rings. The current
implementation has one thread for one block ring. There is no reason why
there could not be two threads - one for responses and one for requests -
especially if they are scheduled on different CPUs. Furthermore this
could also be split into multi-queues - two queues (response and request)
on each vCPU.

I) We waste a lot of space on the ring, as we use the
ring for both requests and responses. The response structure needs to
occupy the same amount of space as the request structure (112 bytes). If
the request structure is expanded to be able to fit more segments (say
the 'struct blkif_sring_entry' is expanded to ~1500 bytes) that still
requires us to have a matching-size response structure. We do not need
that much space for one response. Having a separate response ring
would simplify the structures.

J) 32-bit vs 64-bit. Right now the size
of the request structure is 112 bytes under a 64-bit guest and 102 bytes
under a 32-bit guest. It is confusing and furthermore requires the host
to do extra accounting and processing.

The crude drawing displays the memory that the ring occupies in offsets of
64 bytes (one cache line). Of course future CPUs could have different cache
lines (say 32 bytes?), which would skew this drawing. A 32-bit ring is a bit
different as the 'struct blkif_sring_entry' is 102 bytes there.


A) has two proposed solutions (look at
http://comments.gmane.org/gmane.comp.emulators.xen.devel/140406 for
details). One, proposed by Justin from Spectralogic, is to negotiate
the segment size. This means that the 'struct blkif_sring_entry'
is now of variable size. It can expand from 112 bytes (covering 11 pages
of data - 44kB) to 1582 bytes (256 pages of data - so 1MB). It is a simple
extension, just making the array in the request expand from 11 to a
negotiated variable size.


The math is as follows.


        struct blkif_request_segment {
                uint32_t grant;                    /* 4 bytes              */
                uint8_t  first_sect, last_sect;    /* 1 + 1 = 6 bytes total */
        };

(6 bytes for each segment) - the above structure is in an array of size
11 in the request. The 'struct blkif_sring_entry' is 112 bytes. The
change is to expand the array - in this example we would tack on 245 extra
'struct blkif_request_segment' entries: 245*6 + 112 = 1582 bytes. If we were
to use 36 requests (so 1582*36 + 64) we would use 57016 bytes (14 pages).


The other solution (from Intel - Ronghui) was to create one extra
ring that only holds 'struct blkif_request_segment' entries. The
'struct blkif_request' would be changed to have an index into said
'segment ring'. There is only one segment ring. This means that the
size of the initial ring stays the same. The requests would point
into the segment ring and enumerate how many of the indexes they want
to use. The limit is of course the size of the segment ring. If one
assumes a one-page segment ring this means one ring's worth of requests
can cover ~4MB. The math is as follows:


The first request uses half of the segment ring - index 0 up
to 341 (out of 682). Each entry in the segment ring is a 'struct
blkif_request_segment', so it occupies 6 bytes. The other requests on
the ring (so there are 35 left) can use either the remaining 340 indexes
of the segment ring or use the old style request. The old style request
can address up to 44kB. For example:


 sring[0] -> [uses indexes 0..341 in the segment ring]   = 342*4096 = 1400832
 sring[1] -> [uses the old style request]                = 11*4096  = 45056
 sring[2] -> [uses indexes 342..681 in the segment ring] = 340*4096 = 1392640
 sring[3..31] -> [use the old style request]             = 29*4096*11 = 1306624


Total: 4145152 bytes. Naturally this could be extended with a bigger
segment ring to cover more.





The problem with this extension is that each entry uses 6 bytes and so ends
up straddling a cache line. Using 8 bytes would fix the cache straddling.
That would mean fitting only 512 segments per page.


There is yet another mechanism that could be employed - and it borrows
from the VirtIO protocol: the 'indirect descriptors'. This is
very similar to what Intel suggests, but with a twist.


We could provide a new BLKIF_OP (say BLKIF_OP_INDIRECT) and a new entry type
in the 'struct blkif_sring' (each entry can be up to 112 bytes if needed, so
the old style request would still fit). It would look like:


/* 64 bytes under 64-bit. If necessary, the array (seg) can be
 * expanded to fit 11 segments as the old style request did. */
struct blkif_request_indirect {
        uint8_t        op;          /* BLKIF_OP_* (usually READ or WRITE), 1 byte */
        blkif_vdev_t   handle;      /* only for read/write requests, 2 bytes      */
#ifdef CONFIG_X86_64
        uint32_t       _pad1;       /* offsetof(blkif_request, u.rw.id) == 8      */
#endif
        uint64_t       id;          /* private guest value, echoed in resp        */
        grant_ref_t    gref;        /* reference to indirect buffer frame, if used */
        struct blkif_request_segment_aligned seg[4];  /* each is 8 bytes          */
} __attribute__((__packed__));


struct blkif_request {
        uint8_t        operation;    /* BLKIF_OP_???  */
        union {
                struct blkif_request_rw       rw;
                struct blkif_request_indirect indirect;
                /* ... other ... */
        } u;
} __attribute__((__packed__));




The 'operation' would be BLKIF_OP_INDIRECT. The read/write/discard,
etc. operation would now be in indirect.op. The indirect.gref points to
a page that is filled with:


struct blkif_request_indirect_entry {
        blkif_sector_t sector_number;
        struct blkif_request_segment seg;
} __attribute__((__packed__));
/* 16 bytes, so we can fit 256 of these structures in a page. */

This means that with the existing ring (single page, 32 usable slots)
we can cover: 32 slots * (256 * 4096 bytes per blkif_request_indirect)
~= 32MB. If we don't want to use indirect descriptors we can still
address up to 4 pages per request (as the structure has enough space to
contain four segments and will still be cache-aligned).




B) Both the producer (req_*) and consumer (rsp_*) values are in the same
cache-line. This means that we end up with the same cacheline being
modified by two different guests. Depending on the architecture and
placement of the guests this could be bad, as each logical CPU would
try to write and read from the same cache-line. A mechanism where
the req_* and rsp_* values are separated onto different cache lines
could be used. The correct cache-line size and alignment could
be negotiated via XenBus - in case future technologies start using 128
bytes for a cache line or such. Or the producer and consumer indexes
could live in separate rings - meaning we have a 'request ring' and a
'response ring', and only 'req_prod' and 'req_event' are modified in
the 'request ring', while the rsp_* values are only modified in the
'response ring'.


C) Similar to the B) problem but with a bigger payload. Each
'blkif_sring_entry' occupies 112 bytes, which does not lend itself
to a nice cache line size. If the indirect descriptors are to be used
for everything we could 'slim down' the blkif_request/response to
be up to 64 bytes. This means modifying BLKIF_MAX_SEGMENTS_PER_REQUEST
to 5, as that would slim the largest of the structures to 64 bytes.
Naturally this means negotiating the new size of the structure via XenBus.


D) The first picture shows the problem. We are now aligning everything
on the wrong cachelines. Worse, in ⅓ of the cases we straddle
three cache-lines. We could negotiate a proper alignment for each
request/response structure.


E) The network stack has shown that going into a polling mode does improve
performance. The current mechanism of kicking the guest and/or block
backend is not always clear.  [TODO: Konrad to explain it in detail]


F) The current block protocol, for big I/Os that the backend devices could
handle in one go, ends up doing extra work by splitting the I/O into smaller
chunks and then reassembling them. With the solutions outlined in A) this
can be fixed. This is easily seen with 1MB I/Os. Since each request can
only handle 44kB, we have to split a 1MB I/O into 24 requests
(24 * 4096 * 11 = 1081344 bytes, covering the 1048576 needed). Then the
backend ends up sending them down in sector sizes, which with contemporary
devices (such as SSDs) means more processing. The SSDs are comfortable
handling 128kB or higher I/Os in one go.


G) DIF/DIX. This is a protocol to carry extra 'checksum' information
for each I/O. The I/O can be a sector size, page-size or an I/O size
(most popular are 1MB). The DIF/DIX needs 8 bytes of information for
each I/O. It would be worth considering putting/reserving that amount of
space in each request/response. Also putting in extra flags for future
extensions would be worthwhile - however the author is not aware of any
right now.


H) Separate response/request rings. Potentially even multi-queue per-vCPU
queues. As v2.6.37 demonstrated, the idea of WRITE_BARRIER was
flawed. There is no similar concept in the storage world where the
operating system can put its foot down and say: "everything before this
has to be on the disk." There are lighter versions of this - called
'FUA' and 'FLUSH'. Depending on the internal implementation
of the storage they are either ignored or do the right thing. The
filesystems determine the viability of these flags and change writing
tactics depending on them. At the protocol level, this means that the
WRITE/READ/SYNC requests can be intermixed - the storage by itself
determines the order of the operations. The filesystem is the one that
determines whether a WRITE should go with a FLUSH to preserve some form
of atomicity. This means we do not have to preserve an order of operations,
so we can have multiple queues for requests and responses. This has been
shown in the network world to improve performance considerably.


I) Wastage of response/request space on the same ring. Currently each response
MUST occupy the same amount of space that the request occupies, as the
ring can hold both responses and requests. Separating the request and
response rings would remove the wastage.


J) 32-bit vs 64-bit (or 102 bytes vs 112 bytes). The size of the ring
entries differs depending on whether the guest is in 32-bit or 64-bit mode.
Making them the same size would save considerable accounting that the
host has to do (an extra memcpy for each response/request).

xhjnwy+98Y6Lot5z/7dpDyAVIQSkYoxxPnzLHZMdpK7a8CHtAaQihIBUjDHueW/asSs59bQv
JKMvv5L2AFIRQkAqxhjnw0/MW+SiqI8/s4D26HuQ2ph6YeqdqbembkrdUGTbiX67WjWqyP5a
vsKv2556bupBft2E1M1FPAdMQkAqxhj3Y1829urk5FNOcRFV2qNPQerA1J96QBzvIXRP6tlF
YFbrkhqPdUHqzzL2H+7LXezrcGPqHanX+fVjPUSHXuHLmQ0mISAVY4z7qd/a9GsXRf3G7ZNo
j74HqQ/66OWAYNk0H6UM1eChcVuNkDrTg+j2jP1neVAOo7fj/Xaji5S3xrsBTEJAKsYY91NP
nfEdB6ka3U979DlIXVdhNFKQuTH15BohVZHRm1NPyth/pO/Sj7v/td2YjLKmpG7xkV2EgFSM
Me6v1ryojed+hbbom5C6O/VU39Xe6uFvvk8DMI322w0vApmVyCKele6/xKcGxJHSQb4uTeAR
AlIxrnT0885/Tn687Rclt9F6baf//+pv/9a5VFnvffKv7ct+8uHWZOnmlckb7/9Vl9Rfx9vw
95/2q2um9td5x+e+8e//yS1b8ze/6rC9roFtH16bvuz5r/7YRVHve/gxPud9E1IT3w0/18Po
RA+BC4Oc1e0eZAudgNRCFftP98A8IWPdNA/Sg8AjBKTiXmFBxdNvP5Gs3bGzy46hsh9fPbMk
5N279MaSZWi9gewfv/O0c6my7Hx+9Kt17vXDP/ofyTPNc7rk/FT+q1tW9arr/uKGl5PXfrmm
5v3V/n/w57+XPLh8crJ8a1tXtn4IaJnaetqy/57MWv1Q8oud/+LWPbZqRjL99Ukdrk2/gKeG
BpeXyndNn4XU5ozu9FYPgs8V2gYpFboBUhU1neMh9OYSaQPzQSMEpOJe4xjqusICuFIQWk9I
jSOpL6xbkPyvFfd1eRv2tkhqZ8Fa7S8Itdfv/t1HydRXb0qWbXmr/cfPQ2/8fvIn7z7Xre+1
vHj99t+6Ef1XjPsq3zN9F1I/81HUY+67hbZR9fq7rXB0yqftAdhOqiOkClCX+aju2CL7jgrq
hRCQivMHo4ISRc8EEAZ0izctc+Cg5QItWV2z2kbbv739g/ZlYXlaFnfRax9Bivaz7nhtJ1A0
yMyCOYMXgaWicto/BplSkGpdzD/b8Q8dIFV1/6OffMdBapguUM6qo9pD9Vj94ZbMtAOlDsi/
/Id/63BeOqbqrtdqCznrnFdt+3l7+ZWkO8TW+en4il6u+Ov1rh5h5Fr1CNvzL3+9vUPXu9pT
10Xb2Xkp6qnzVrnqrg/3iYE8htT5P1voIqjhtn+2+UfJHy67vV9CatOT81xX/9PPv8z3T9+F
VIHhmmjZRB9JHeahMvR8D4qTPDTWA1INUD8rU+Z0n4qAEJCK82PBi7ra1dX6xE+a2rtiBSEC
FAGcwOHRVdMd7AhotO0jb051y7Wvlqlbt1R0VDCkrl6VJzDU/wJglaljaltBjV4Xg1R1CWt/
/VVU7uVNf1EWUgVMKt+6lkMQUhRP5yJQ0noDslJW/VR3tYfaS/8rHSKsh8rSX9VRYBxGJbVO
5//A63e4MmYsv7u9ve16aL2W2XXR+ZaLJIdW1FLXQ1YdrQyD8D9d/0N3/RTJ/M7K+9vb3+qo
87EUCIt06rXK0l+ds1IjVPfwuAJqrdc5x5CqcwrbKbyu9oOlP0HqxZdengw6/QzmRu3bkDo+
gM6CB1NFSxdXAZk3+lzR8L7dVCRvNGv/aX7ZVL9v6EFlgBohIBX3rC1qJrCxZU/+9LHk+XU/
yAQHg09FxizaWg5SFckUdAmObL0ifII4i8hW0t0fdg0v3LjE7W9RyCxINUB9as3j7ZHE+Hxi
mCpnwdacv3yyQ/sJ1gRmVg+Bn9rFItIxpKotLAopcFbbzW1+1r0WuAtg7XpovcqrBlJ1DNlS
GlRGCJy6DirPckVltZHAtVh3v/1IUJk6L4Fl2OayzsFyi+N21b7z3p13DEyHZfQXSH1z7a9c
FPW2u6bwHdT3nzg1xUcod3lYXF0oPjApCzLtaVWmJr9NY4X729yrWR4XbLeRfFQEpOLc2UBB
US51MYfdwqUg1QDMlpWCVMGQddeH3dHW5VwppIbHVD0FqeoyzoJUwaTATHAUnlNnIVWAqsik
orgGpnHagUCzWH6njhUPElMdFKXU/wLSeABXufaJUyq0rdpF52hW173qbZAadrMbHIfXMAtS
ra3NigRbXXUdBdeWcxq3q66F/fAJ0w76I6R+875vOUh4ZeW7fAf1fUiVBviu9qFgB0JAKq5h
4JK6fgUIgheBmEXyikFqvH8pSFXU0wCp1oFTMVTJOqZFZ2NI1WutV9QyhJ7OQqqis4o6qsvb
oqZx2kE84CiG1HhQV1iH8JyqGTgWb1vMBqlZ16scpOpHTFZOqeW/qq3th0jcrlnnrRQHlWup
Dv0BUt/7eHcy5KxzkhHnX8R3T/+BVIQQkIrrMbelph4SrAhaq4FURTWLgadgJl5vkKKu6FpH
9wuKXvr5q5mQal3TymENYamzkGpWvQVmlr9pEcTOQqq63MO0hjASXUm9lEagbeMBXaFrhdR4
8JZSGgTr1g5hBDhu1zBaHOb3qlzLle0PkDpv8XIXRZ3x6Hf5zgFSEUJAKi5lzRMqkAu74pUv
alBUCaQK0LTM5ryUBSy2nUXMwhkAwpzGSiE1hCTLpbXBTsUGTml9OMiqHjmpYeRUVk5qGNHt
DKSqS1w/EMK2NBCudCCcoptx17ryQRUBrhRS1XVfDlIt/UH10w+GEIzjdtV7RNuEMyioTkoZ
6E+j+yfccIubG7X5/U/4/gFSEUJAKi7XfS2oUZ6kInYCVEGSAY1F5gQixYBSUCEQtJH+gg+b
mD0ELS1T7qOAxUbchxE1gW38JCKDFwGOIEr1EySqrHAAU6kpqARs1u3fWUhVbqfKUgqDIog6
V517CMudgVTllOo8FVHV4DS1qY5XzcApG5Smuuma6q9NI1YppOr66MeLyioFqUoB0LHi8uJ2
FTzrnASl+mGkc1MUNkwhiK+Nrpsi4sXyZnVueh1Pf5ZXr/3gN8mJJw1Irrnu63z3AKkIISAV
V9pFLEC0QT3q8g8jq+pS1zrBpaJlWRPlCzDUnavtBHKCzXA7QYogQ+VrG4GSRQu1TvsUm4JK
UVcBS3gMQWI4IErHsqmMBFYGVzaoRyPPtUzbaFsbhKVlKquaKbt0HlYPwXcISSo77moPl+lY
Yd2y6qC66RpoPwG50iWqgVSLkGuWBtVRf/Xa1gmu40ir6hcuEzzqHG3WgbB9Y+tHTpxHmwX/
Oi+VJ/BU2fG1jiFV7RLWSfUO31P2XixWr7z5oVnfc139zy56je8dIBUhBKRi3LssII1H0WdN
hJ+nHzjhVGClIDUPTzfrSZ930SXJGWcOcYOneK8DqQghIBXjqiCplLujW1nRakGfIqmKNOqv
Xit6a0/xKuXuAjwdR9FTpYaEaRchpNpMBZU8JEHnZ7My9EVIXbZmk4ui3jllGp81IBUhBKRi
XP3z5ks57iLvyinB7HGt6ha3Sfete7uU1SXeXZCq6bcEqFmPdVVXvdWp1EwDYd6wbR/Oh9tX
LDjV9+3y5i181oBUhBCQijHG+ZgbVY9AvfCSS2kPIBUhBKRijHE+PGfBUhdFnTn7GdoDSEUI
AakYY5wPXz3ha27qKU1BRXsAqQghIBVjjHvcb2/5yE3ef/1Nt9EeQCpCCEjFGON8+IFHHndd
/S8sWUF7AKkIISAVY4zz4eEjzkuGnHUObQGkIoSAVIwxzodfWfmui6Lec/+3aQ8gFSEEpGKM
cT58yx2THaSu2vAh7QGkIoSAVIwx7nlv2rErOfW0LySjL7+S9gBSEUJAKsYY58NPzFuU8P0K
pCKEgFSMMc6Vrxj31eTkU05xEVXaA0hFCAGpGGPc435r069dFPUbt0+iPYBUhBCQijHG+fDU
Gd9xkPri6z+lPYBUhBCQijHG+fAXG7/sTFsAqQghIBVjjHPh+a/+2EVR73v4MdoDSEUIAakY
Y5wjIGpocHmptAeQihACUjHGuMe9fvtv3Yh+jeynPYBUhBCQijHGubC+S/WdqjlSaQ8gFSFU
B0i9esLX3LOlMcYY1+6zh/2fyYDfPTn55h98i/bIka+57utAKkK9FVIxxhjjvuorxl0DpCLU
WyH1gUceT1Zt+BBjjPuNf/jGO8mQs85Jfv8PH6pLebfddY8Doj9+8c9p35z5azfdBqQi1Fsh
lZxUjHF/s+BF33/qDq5HeQLe4SPOo23JSUUIAakYY5wPSJ23eLkra8aj36VtgVSEEJCKMcb5
gNQJN9zi5kZtfv8T2hZIRQgBqRhjXLsFlPWA1LUf/CY58aQBbpaUPJ3fm2v3Jd+870CXHuOV
lfuSGY/uB1IRQkAqxhjX0/WA1Jmzn3HlPLvotVyd2z33H0hGX36oi+HvoDOQihACUjHGuM6Q
KoDpTBnnXXRJcsaZQ5L3Pt7d7fV/7+M9yUtvfJ48/XxLsnT1viC6uzeZOPlgcuElh5JVG/a5
7Zrf3+uW6/W8xZ8nm3bsSd7estct7xhhPnaZtnt2UUvy4utt+9l211zX6qwygVSEEJCKMcY5
gdRlaza5Mu6cMq3b6y5wHD7icArIR5KLLz2UnHhS4iKngkh1wZ962hG37Oxhh5O3Nu11Ec8J
N7QmJ59yJK1zktz3cFukNY6ExtFRpQyc0JC0H0vlLW/el57zQVeWrGVAKkIISMUY45xAquBU
ZQhWu7vuU2ccSBrPPeyipJaDKihVlDSru1/gKTids6DFRT4VVS0HqSpLgPrCks/bI7eXjT2U
XDHuEN39CCEgFWOM8wip6t5XN/+Fl1zaI3V/4JH9Dkpnzt5/TPd8MUg976KOEc9ykKq/itKG
6xVF1YApIBUhBKRijHEOIXXOgqVufw2c6om6r9/elhOqSKcsmHxo1v72yGoWpF49obUqSM1a
z8AphBCQijHGOYZUTTmlqac0BVVPnoNg9fFnWhyAClbvvvdAUUiNgTILQgW+tkxd+3odrlfO
q1IFgFSEEJCKMcY5g1TNsarJ+zWJf0/VXTmp8TyoAkqBpUFq2FWfBZR6bdtbzukXGw+3b6fy
NSjKorOyorUaLKVlNhgLSEUIAakYY1xHK6e0Fkh94JHHHeC+sGRFzcd+7PstHSKdGqSk1zaV
lEbv67UNhIrd9OR+FzlVbur8Vz93rwWPgkit1+h9vdZIf+WsZkGq9rUydBxBrkbw23YaYKVZ
AhSl1XpFbAedfsQBsNbfcsfBZMhZbccIQRZIRQgBqRhj3AmfPexLKZh9ver9ho84L4Wzczp1
7M5CqkU1NReqop0aFGWAavsLLkecf9iVKWiV43lWFS3VLAGyorOC3XA7zRogeNUxNA2V1hmQ
ahCVRvrr2JrmCkhFCAGpGGNcJ0gdffmVFY3kt/9fWfmui6J+875v0Ya9xEAqQkAqxhj3SUj9
xu2TkqvGX+tG9N965zcdpK7a8CFtCKQihIBUjDHuOUgV5Oi7Uj7+hBOSoWcP65EJ/DGQihCQ
ijHGQGompB533HHt/5934cXJ0tXraUsgFSEEpGKMcc9Caugrxn21Q74qBlIRQkAqzol/sfNf
krU7drq/tAfuL5CqaOq5X/m/kvXbf9ut9dWUUBq1z7UDUhECUnGn/N4n/5os3byyS4+xaec/
J8u2vFV0/Z+u/2Hy4PLJRdf/eNsvknuX3uhAU6+1rfbJ2vbVLas6lPWjX61L/uDPf8/t/yfv
Plf3c4vr1lv8xvt/lfzlr7fXvP+s1Q+58w7PXW39v1bc55Y99Mbvd7jmtm2p64yz/cXGL1ff
3X/88cnpZ5yZwuJHxwBkutrNWVqsHE0pZXOMagoqTetU/EEDidvGfc537HHTPGmZ5jXtijlJ
yz0CFUitWten3llk+Vb/ftqTeknqocF6/b8sdUvq1tRrUjfWWIdGf4x4/5G+XJW/K/VTqQcU
KWNm6mZwCQGpfcwx1HWFBZSCmu6A1Le3f5As3rSs/fWTP30seWzVDLevYLne56ZyVZeuKLur
rDqrPdWunYHUp9Y87sr65T/8mwPeqa/elCzcuMQte3nTX7jXqz/c0n5M/UgAUmsBsyurgtTj
U0Ad8LsnZ+ah1htStZ3Nmaoyreyumo9U9ZmzoAVIrY8meDiMIXW0B8PZqYenHpt6e+rNwTbN
ftmY1KNSr0u9rYY6XODLSSJIHZT6s9QbfX1G++Mvzyhjut8fSEVAam+OmGZ1eReDVIHHz3b8
Q9XH2fD3nyYb//6faoZU7S/XCqlZMPXCugU1nUep89d6tWk5GIzbIk5B6Mw1VdkGiaXqGbdn
KUjVOZc7L2vXP37n6fbXzzTPSR5dNb3DNvqBIFf6YwQXh9TGc79SOaSecELy7KLXinbFG0gq
8qnXccSzGkiNAVJlV93T4uuxfns22Gp5ufSBtR/sdWUUWy9o1jZAqpOikQs9iDZnQOpzPooa
arwHweF+f/0/MVg/xi8bVUU9HvSQvDkDUqf6dWH0drjf7oIgmrs69W4PyEAqAlJ7mwUwc5uf
dVEtgYn+KgImEBGgWjesQYtA4um3n3DRRy17ft0PHIzEkBmDp/Z9+Ef/o70si15qu/AYWWCm
baa/PskBjW2nsg2uSkGqYE/HkrV9CN3hcSvtklfkT13Vts+M5Xe7buywHi9ueNn9VZ1Xbfv5
MXX7s80/6lCG2s9AMr4e2k6vq4E3necf/eQ77eU/8PodHbrWdbx5785LHl89s32b76y83+1n
gGpWO9p5CeatS/6RN6c68IxTBJQ6oahxDKnafv7PFnbYXhFt1Q1I7TykKi+13HaaI1XflQ/N
+l7JfFGB5G13HXSPEtX/eoSonuBUCaTq0aLqyn/6+ZYO3f3aXv+bbf/SDx/Q3K4HXXna58ST
EvcIU4Nm69rXI1C1/ol5LR26+3WMiy891KEM1fWlN45GiV98/fPki41HUxD05KpykeR+AKmN
HugElpMyILUxAMEsSC14MGwK1k/00Du4inos92kF4zIg9bkikdkWHzkt+AjvQr/fQiAVAam9
0AKLacv+e/JXf/u37vWav/mVey2QEuC99PNXHYgJXgSuAgkBlMBpxV+vd93n5SBV0TeVKTBS
mYIYQaPW63918yrKVizqZyCr41iEUbmNVn4xSA0B1SLEIaRqex1Xxy8XcTTruDp3bav2UJ0E
o2E9BHxqG8FrVt3+cNntbp32X771Hbf+tV+uabtppoCr9YJbHUP1VXtXA286J2tPO2cdw7rW
VWeVqS53tf9PPtzqjqkfHDqmtrM6ab2dg9pR9dK56X0hwAyjqvpxox8wWZFUHU/1iKP0KhdI
7R5I1XZ3TplWdlCTIE3gJ5hTFFOPEBXAWbd9MUiNATWEVEUpH3hkv3utY1QStZw5e78DZD3S
1NIFwvJVD5Wnx6iqi98ewRpCqrYXpGqdoqXKidUjWF3qT7rs5FOOOCBXNFbHsZzZfg6poSYV
yUkN1eC788Po6s0+T1RwONf/P7Uz9/gIUpt8mQ3BssF+u6aMMoBUBKT2RmtQlKJfgg9b9u7f
fdTeDR139wskBJwh0JWDVOUhCmjCfQTFiqRpWSXd/YKoEIgMpFROFggKuGJAzTofHbfS1AB3
I06BXSBm7SOIU3vpPKweAs9SqQiC3LhMq4P+V3uF6xWxrBTeBJw6nn5shMsVNTWA1PUSSIfr
tU7R16zufjuH8D2iqKvAUz9yrB30PhLAZkGq9o8hVaCu5RYRB1K7FlJnPPrdslNNGaQKEMPl
et69YK4YpAocFXmdt/jzogOnqu3uFxwLUhXttGXL1hwFXNVDLjZwyqK3Am1bL6AVgIflh+kM
r6zcB6RWB6mCxCW+6z3syp/ic0ZXFNoGUO0uAo+1QuooH5mdn3qgB9TlfhmQioDUvmLBhY24
VkRQQGSgUQxStX1YRjlIzVpfbU5qnM8osFGdi0UrBVDyEz9pKpljWy2kCuqtbAGwIopx2oGg
tZp8WVuma6FtDfzMOkal8GbRSYtUm3Vt7brpeijqWewaFoPUOB1C6RdWjtpFgG0/RLIgVdF5
Iqk9B6mVTg8lSAu7xOUJN7S2A2EMqeqGNy9v3lc3SG1+f6+DY+0z5Kwjrqs/hMeskfwxpGq/
Ynmx2i6GXAErkFoxpA70ECoYHR0sH+uh8sZgmQ22mlgnSLW67fHrbCDXOiAVAal90Oq2V86g
AauBVBakxkCZBaHKX6wnpMbrDVIV3csCQUGt1sXdzJ2FVMtzVY6noo+KKgsAVZ8smKsFUuPp
uBRZrRZSBY36P7RFeHU9QoCsFVItB9XSKsIBaDGkKrc2Pm+lGyhCDqTmH1Kvua41uWxscUhV
tFP5n+pKDyOTnYHU9h9paV2UK2q5o4qAVgqp8aCuGFKt6z8chAWkVgSpAz0Qqj4jo3UP+q74
WBt95LNekGr1GBvkuu4qAsJAKgJSe6MViVSkLu5eFlRUCqnqWo+jq4pght39gpGwu1/d9Yq8
Ce4q7e4P91e0V+CkfNdSIKi6aV8bid8ZSBWMqTzL37V8WwPLzkKq/tfgsni+VkUsq+3u1994
kJLlvdYLUnU9lMahsi31ohikzvnLJ9vTCcIUA0b35xNSw4FSbXOxHk7uvvdAyZxURVEFrMo9
rQekaiBUnHZw1fiOEd3OQKrKVk5qOGvAC0s+B1LLQ6rAcJv30Ix9bOR9PGfpNp+fWq/u/sVR
Tup4H1EdBqQiILWPWOCiaKOibMr5FGhoJLYBhqBO0TLBrCJ9WUBpXeAqQ9Cpbl1FGG07LdNr
5WLatEiKdNr6cLBQ1vRGNnDKBl4p31IwZzmW5Ub367V1+3c2kqrjCrZsIJkAX+euLv56QKpg
1+YTFcgLimsZOKV6qp1slgaVUSmkWpRa1yUr5ze0gFrXNs5xjSHV4Nmi2naeYWoJkFqbr7nu
63WHVMGdBkop/1O5qII5G8BUanS/opxht39nIFX5ozqu8lwVnVU+qmA5zI3tDKQKTpUOoAix
jqG8Wr0GUstC6hwPjQ/69aGHeu/2EKkI5yDfFd8a5K3e6MHRIqAj/euRFUKq9mvxXfsD/H6a
T/W5IucBpCIgtbfaBkOFU1DZwCBFCm3qKEGOomYGh+Ecq1pm0yYJ4tSVG24nSBH82tRG2sZy
OTXRu6KqWRFAiwLayHGro/63AVHaR3Bj0VJ1PYcT9guytF6gLZDW+jCaF25bzoI2AVk4BZXB
X1yPSuqWtUwgp7bSfgI9gWAcqS43BZXay66HRTvDyLccR8PD62VtrQho1nmFaSLaTte7FKTa
jxnVRdvrR0mcowqk1g4vQ846py5laQS8wE6AqEFFAjaBYQhtyk/VSH73PknBTt38YU6nQFHb
tD2y9XD7aHz9rXROVStLQCrotSmoVK5FPvX/nVMOHpM7a8tUx7BuWXUQeCuVQcs0sl/RVSC1
g2723fpxt/3OIh4T5KDa/KaJB8jrgzKm+e0t6jkm2j/UmGjbQgC6nwU5qc9FkdVQehrVEnAJ
Aam92ALTSqZhKvVAgPhhAFkDtcptU67LvZLJ5LtjftliE/F3xoosxhFLAWM8AKwn6xg/SMHm
Ri0HqWGKRC1PFsOlJ+nvklz1LfmY4F6DqOpdpgA1no1A0WNBajwArB9Damc1qEhKQD01GAxC
QCrG3WBFQBWpNZBT9Nfmrc1TPVU/pSMotSCeUivrsaiVPIqVx6LmD1L7su0xrRbpVRRZOa+N
5x7O7XXuhZCKEJAKpOJ6Pa++lItFBusNfwI8O6a6xe1JTfHTubIcz0XaVbZZBJSWkBUZDc+h
kid52bZAav+AVJuIv5RLPca0XlbeqvJe7ZiqlyKslptbyvH0VUAqQghIxf3CSmko1i2eF1tO
MQZSe/0UfFv2dhjlTyQVIQSkYowxkIqBVISAVIwxBlIxkIoQAlIxxhhIrciaSL+zea7meN2z
i1rap8WyUfy2rT3BCkhFCAGpGGNc18E/3+71kDpnQW2PTA2t/b9x+8H2hweY9chWDY4K4VU5
qNpO86jGDwQAUhFCQCrGGNcRUpvf/6TXnkO1T6MqBqkhoG7a0TZ6/4SGtidoZUVYBahAKkII
SMUY4y6E1FUbPuzR0fGCvTPOPOKeVKWnOIUPAlBXuybO11+t11OsrJtdyzU3qU3rpNeCzYmT
DyZ333vAba8IqdbFKQHaziAzhlTNg6rjqKtfsAqkIoSAVIwx7keQqojl8BGHkwsvOeS61l9Z
uc9Nii9A1Dpto0imYHPqjAMOHgWfgkoBqWDWXmudXtv8pXpUqfbRhPuXjT3k4Dc89ojzD7tt
syBVU0rp0ao2HyqQihACUjHGuB9BatOT+12Xevjo0rUf7HWQqXUGqTEMKupqgBl392u5Xusp
ULbs8WdakhNPStrnM7VHmGry/SxIjSftB1IRQkAqxhj3I0i95Y6DDkgN+MynnnbErTNINSA1
h8uyIHXIWUeOidiG4HvnlINu4FOxnFQgFSEEpGKMcT+GVEGegFIgGFuj9muFVK2Pj6XcVHX7
638d04AVSEUIAakYYwykHgOAippa/qf5iXkt7V3x9YJU5bwqtUDwG3b9A6kIISAVY4xz5gce
ebxHIVUgKnBU97uBqvJHBY2CykogVQOjtP2qDfvap47KglRZA7K07vqbWktOQQWkIoSAVIwx
7tE5Rp/v8SmoBJmKpsrqhhe0PjRrfyaQZi0TnGpfgab2KwWpmrpK28XTUQGpCCEgFWOMcwip
81/9cY/WQxFQRU5lje4P12mUfrll6rrXiH0tk8OR/fHcqFkAWwpSVV44byuQihACUjHGuJ9A
aldbqQSa5kpzsj7wyP6qILXUoC8gFSEEpGKMMZDaqVkEBKJ6OlU4YCqEVHO5svTQANsWSEUI
AakYYwyk1mx1/ysP1Z5iFVt5reZKUhNs2/AhBEAqQghIxRhjIBUDqQgBqRhjDKRiIBUhBKRi
jHE3eenq9UnTk/OSt7d8RHsAqQghILUrp3HZ5eY7xG1e3rzFRYhwm+ctXu7e67jNM2c/4564
hNt8970POJjBbZ5wwy3J6Muv7BMec+V/AVIR6u2Qqv8xxhj3TZ98yinJ2cO+1O98aQqqQCpC
vRxS8/hr/ra7phDlCazHOBIFPOqnn3+ZKHHgV1a+Sy9C4Ob3P6G7G9PdjxDd/RhjjDGQihAC
UjHGGGMgFSEgFWOMMQZSEUJAKsYYYwykIgSk8kWGMcYYSEUIAakYY4wxkIoQAlIxxhgDqQgh
IBVjjDEGUhFCQCrGGGMMpCIEpGKMMcZAKkIISMUYY4yBVISAVIwxxhhIRQgBqRhjjDGQihAC
UjHGGAOpCCEgFWOMMQZSEUJAKsYYYyAVSEUISMUYY4yBVIQQkIoxxhgDqQgBqRhjjDGQWpHG
pF6XsXx06jWpd6benPrWaP3Q1PNT7/DbrEh9QZXHHp+62e+/PfXs1AMy1sceGmwzwddfZWxM
fT3IhIBUjDHGuHdDqkD0Mw94oQSbezx46v47MfXu1FP8+obU2zy8ChLH+m13RQBZ7r7emvop
v/9EX5eFwTZPeQhuijwoANRWv2yM377VnxdCQCrGGGPcyyBV0cpZHui2ZkDqQr8sjGpO9qDa
4OEwST0yWD/Qg+30CuuwzEdqC9ExWgMIXZ16TokyVPe50TKVORVsQkAqxhhj3PsgtdEDnrrT
J2VAanMU0Sx4IE18lFLR0huj9Q0eYh+ssA6jfPQz1ER/jMH+9S6/bKiP7g6I0g0SXw5CQCrG
GGPcByC1Ifg/C1JX++77OH80KZHzOcWvv6ATddroATmE0M3+b+Ih+OaQC/zfbf7/XT4aixCQ
ijHGGPfygVNZkDrVd7tbtHSQB8jEb1/IANhWn0JQK6Au8ekCFwT3/ZbU03wEdaiP7rb66Omt
vj6f+mjrBf74pUAaISAVY4wx7sWQKmhcHEQnW3yuaeKBNO6i7wygKpd1eaFt0FS5rvsBPpr6
lI+oJoWjg7nCVIUVYBMCUjHGGOO+B6mmCzyUDvOOc0Cn+WXTOgGo63w0dGSF++zwEdUxQXd/
qLk+RQAhIBX3DW/a0bPHuOf+A8nS1fuKrn/s+y3JnAUt7n9tp+2LbTvj0f3J/Fc/b3+t/79x
+0HntR/srft5qV5Wt97k9dtrb4tVG9qugaxrE69Xm8fXwLYvdZ0xkJoTSBWYNmXknO4KXk/3
EdRbazzuIJ9Luq2QPW3VdF+vhiiSajMIDPQR3jiSqqjsMrAJAam4T/j6m1ozQaOevmzsoQ7Q
Ejt9e5esw+jLD6U3sIPtwKrti2179rDD7RDb/P7e5ISGJLnwkrb93/u4K26sB9vr1lusNrxi
3KGa99e11DW45rrW5L6HO/5gWN68Lxl0+pEOPyQE8Wqjk0850uXvNQyk1gFSbf7Rsf71KN8d
byP3x/oo5nx/fw7dGNy3wzlNG/1rW7/QQ+bNGWUM8MdUHWZ7UB3kUxAEyjb6/zlfd8tjvZmc
VASk4j5lQV1Xg4OAprsg9a1Ne9sjpi++3gZTb67tuuidQFjuXTBw0LVpZyFVEdVw+RPzWpJT
TzvifhhkRbu7472GgdQ6dffP9pC428NkU9SlnhSxbdfkX4fQGnbPt5QoozHId7Xjqy7bCx2n
rRroQTkJtpsOMiEgFfcaK3ooeLjtrrYu76Yn97d3vatrXFBx9YSj0VTBhUBv4uSDyTfvO9De
tRsDSdx1KxDUMgGQomsGilomoNHyYt3iWq96PTRr/zH7l4NUdVtPnXHAnWPY3S9rH22rc6m0
S17nofK07933djxHlaFyVc9b7jiYvLJyX4fufm2r46v9tK/K0LZxqsPjz7S4/e+cctAdT/tU
0w2u8rSPytd1VT1sndpNba52UTtqG702kFZdR5x/uEPEWW0anpeuhdbFkWe18dPPt2RC6gOP
7E9OPClJZs7e764XkIp7CaSWyxkd7v/2lBp8vuqwMqkDPV1PBKQCqbh6C2LOOPOIgyZZXbHq
ptU6QZK6YC++9FB7t63g47yLDjuQaTz3sOu+zYqEhtHPl9743JWjbn3BibrXBSQCIwNFdS8L
fopBquoluBEYf7Gx7dgGqsUgVSCmuut4b2/Z26G7XzCmY1q3dLFjx7mWOg9Bu8q4anyriwoa
QKoOahe1p/4K2sLuftVN0K86aJmAT+CmlIowiqljCJz1o0HbV9MNrjZR2+gYup46hupo56dz
sGuo+usaq22HjzjsoFPbqX11DlZvta/KHHJW23kJVlXGs4taOoCx6inAzoJUtZFdAyAV9xFI
RQgBqbgrLRgRdNjrF5Z8nky4obU9ShaDg0FdCG7lIFWQGO4jeBTwKKpWaXe/gNLqpP1VbwOd
LEg1QJXDQUBhhLBYt3QxC+AEdOEygaSihyFgGozFOalWtxDuBP8CVYN5rVcaQhhVLZfuEFpg
qrYJI82KYgp21Q52vUJItHYIYTvs7tf/Ou+wHbUshGsBuc5dsFquXYFUDKQiBKRiXNaCP4vc
CVDjLtwsSA2jjuUgVbCk/w3kas1JFazFA7oUmS0GqYJTwV+cD9oZSDWIFHQLsNVtH+dyWp1K
QWrYva+IrkV+BayC9zgdoxpIVaRTEVJLaZAFkNbGWdcrXpYFqeGPDDsXta+Bq44pYK+kXYFU
DKQiBKRiXNHAHutCF1go4qbXpSBVMFsppBZbXy2kxusVMVQXdClIVfRP3dn1glRZUVB19wvQ
DFiXrdlXdCR/FqRmDTLS/+qaF2RmRbsrhTedn6UUxNYPBbse4TlXAqnxeQlOrXvfZkmwCDCQ
ioFUhBCQiusyH6ZFT5VfqohqCKJZkBoCowAlBldbViqSOm/x5+1QUwmkxvsrkmrTJBXr7tf/
gidFQOsFqdaNrvIVBRXcC4jrAanqlo/TCRR1rSaSqtxS5RnH0ViLeNYLUm07RVAtj7Xc6H4g
FQOpCAGpuB9ZXc5hl7byIcNuaEFVMVgQuCgiGHbfG2AaFAocNFK8VFRTICjAstfqCg/BSoN0
snJSDVQqSQcI99c5Klpo9So1ul8gq4irdbF3BlIVYdb+YUqEgFBgWA9I1Uj+cJCTpQBUA6mK
HCvyGubF6nqoXJ1nPSFVPzL0/hGka8YDIBUDqQghIBV3uLGHACEA0LJwKqhS84aq21ygYVMq
KSIm6DKos25zyzfMglTta2WoK1z7ax8DDk2BZLMECKK0PhydL2CVdfxikKp6CTi1v7ZVWVbH
UpBqI/Kt278zkKrtbOS+6qoBZoI/O8/OQqpNkaUydX46jtqqGkhVm2of1VNRcR07nJe0EkgV
GGsftatgtxikypYmEv4wAlIxkIoQAlKxi0CGo8HVjR5GJTViu9yNX3mWihIqKhjOk2oAI2ix
GQBUVhils+5k7aecSkGWQEl1CCfJF8Qo2mpzg4YjxZXTqUhcsXpqufZXRNDm6QyjmWHqgI4Z
l6N1yp3UPqqXjWLXeWjbah4Bqn10HmqreC5YHSe8FvGyrLpZHcJlag8tU10tXULnWM08qWoj
1VHXNUzFsDSI8JxtmV1X7a9rpPPTsrB9Y+t6xBP/l4NUlZc17yuQioFUhIBUjHEOrdxZpUaE
00fpB0S1ebPdmc+siG2cqlFLri+QioFUhIBUjHGGNQhKkcNS7mpQVARTaRDqrrfItlIVbCBU
ufqFucNdbQ1cs4cqxNOWGaQqj9geAlGu7W2OWSAVA6kIAakY45xBquWUCjbVja6c0nDi/zxB
quBT9YvnirUUEatTJdApqLXtq3n8KwZSEUJAKsYYYwykIoSAVIwxxhhIRQhIxRhjjIFUhBCQ
ijHGGAOpCAGpGGOMMZCKEAJSMcYYYyAVIQSkYowxBlIR6qO6IPWt/m8DkIoxxhgDqQjlQdMF
et6tqbelXpZ6ZuobU48EUjHGGGMgFfUNDfTRyTmpF6Z+KvXU1I0l9hnvgXFi6kEZ67Vsst9m
VJnjT6hgG9Pg1Nf7clXXzan3BOAqt6Te6tdP99s3AqkYY4wxkIp6j6al3u2hTnDalHp26mYf
qZyb0a2+2IPhMr/frggyL/Blrku93Jcztcjxx/j1TZ08j5E+kvpg6iU+wtoawavqtDH1c/68
x3voBVIxxhhjIBXlSIqc7vSwlqVRHkJXRBHUJILSNd6mdR4UTZM8MDZmRHB3euBt6oLza/DA
fHPqWR6Yt0fgmnjIbvZAPsWD8yAgFWOMMQZSUfdL3ft6Ew31r4d7kJvtYXKU7yYf7EFyYgCu
06OyZvttpGEe/MZF2+zy0ctQ8z3MNncRpBbTAH8ek3z0eIWvfwyvn/l1T/ltR3uwBlIxxhhj
IBV1kXb47vGC/7vHA+NzHs42+u78go8uritSznBf1hz/+noPeHEkstlDqelmD8mDegBSi2mQ
j6JO8eez2sN1DK87fFR2tof9UYXKZhoAUjHGGAOpCJXQWA9aBR9JVa7mzGD9VA9j0wIQ3ZNR
jnWdbw0ijJP8slirA+gd5uFvfACwTTlur8EeMKf6lICNvs1CcA1nGmgqZM80AKRijDEGUhEq
oZketgoeqD6LIoEXePAaG4BsFniO9qC51QPrgBKQusJHHy2HdW4UZW3qhe2odplQqGymAQ02
+xMte+CRx/kiwxhjDKQilKH5ARQ2FzoOcip48GoNoqMTgshrlkZ6ILvVd+MnQa5rCKILPcS2
+KjkJO/tHmAn9ZH2HV7oONPA1kI008DJp5ySXHjJpcktd0xOZjz63WTe4uVJ8/uf8AWHMcYY
SEVAqv/fpmQKNdeDVVak8/rCsdNJDfDwNSkA1nje0099xHGaH6QUusV3n+/sw22uSPXdapur
J3zNufHcr8S5rsmg089IRl9+ZXLbXVOSh2Z9L3npjXeStR/8hi8+jDHGQCrqF3owiJ7O8VHS
hiBvstV3Tzf49duDqOqDHiqHBeVZF/8FQa7qnCgHNlxfyIiyNvWDdj8mJ3XTjl3J0tXrk8ef
WZDcOWVactX4a5Ozh33pGHg948whyRXjvuq20f6vrHw3Wb/9t3whYowxBlJRn9Ionzs5yMPm
Z0GX+2b//x4f/dwWAelAH2VV1HOWj8oKamdnpAss9Mt3R9AKpJb5kAtAFUWdOfuZZOLkqQ5Q
BaoxvApoFZW9+94HkifmLXLAK/DlixJjjDGQinqr1hSOTgk11HfDTywcnQB/us+rbPBd+AsD
WFX3/hS/bG7h2DlRpdF+3XM+V7WUJhUpo09D6vLmLe7DXI2vveGWFFivSS68+NLkPw4fkZw+
+Mzkd37nxA7getxxxyWnfmGQA9gR51+UXHr5lcl/ufa/dShHEdl77v92j/u+hx9z7ZEHv7Bk
RTL/1R/3uPVDY9WGD3vcb236NTdcjDGQinpEQ300dEnh2EFOoQSYu6JIKaoDpCpSKpBUHmrh
2LlYMcbeQ846x31Wetr60aec8Z72ZWOvrvoHbldZvTh5+MGrWVPy8GNXqVt5+LErKxDSnT9s
397yEZCK6qrBHlLVNb/Od8k3eT/nc1U/qyASiurc3V8v64tjzoKlLmI54YZb3E32xJMGdAAA
zTRw3kWXuJkGfv8PH0pmzflB8vKKd3s0iqec2zx8yWvWhbxEeqfO+E4uYEDpJ3mAI72f8wCM
8hcbv5wLiD71tC/wowb3mPVZAFJRV0jd+JN99/zCIJd0QqGyJymhnEJqlt/7eHeybM2m5Onn
X3bQcc11X69opoEXX/8pMw1gjCv6jslDyoqsHqs8/OBVsCAPP3abnpzXZT9g7d4GpCIEpNbd
NtOABmBVMtOAImrMNIAxxpicVISA1B6xAFQgykwD9bXaSz8GaAuMMZCKEAJS62h1/asbS6kA
SglQakBWTtzwEee5lAJ1BynFQKkGfJHvcW2jL3PaAmMMpCKEgNRusEZ4asCRHvf6jdsnuUFZ
GpwVgqsGb2kQlwa/aFDXs4tec/ljQCrGGAOpCCEgtVstCBWMVjLTgOBWkCvYtWlN+pp1rkAq
xhhIRQgBqTl1PNOA0gPilAGlEYQzDSjNoLfPNKCcVCAVYwykIoSA1F7kcKYBDcSqZKYBTa3S
m2Ya0PloUnauN8YYSEUIAak95KWr9yVrP9hbt5kGBKTlZhoQ2OZ5pgHV0Sa/xhhjIBUhBKR2
s19ZuS85oSFJVm3Y12XH+KNnf5vC6boOMw1kPW5WDy2IZxrQxONAKsYYA6kIAan9DFLnv/p5
CohdC6mPfb/FHaPUTAN67GslMw3oyTC1zjSgffVkLiAVYwykIoSA1Bz4hSWfJxdfeig548wj
SeO5h5M7pxxM3vu4rZt/xPmHHUBeeMmhFBg/d55wQ2vy0Kz9yZCzjiTfuP2gWzb68kMdyoyX
rd++N5k4+WDyxcbD7jjXXNfqwFfb6Zg6hrbX60pnGnjgkcfLzjSgL9xKZxrQttpXZZYDXSAV
YwykIoSA1C70W5v2poCXJN+870CybM2+5Il5Lcmg048k99x/wOWhznh0vwNILX97y14X9dT2
Aktt0/Tk/sxIaLzssrGH3D5zFrQkL73RBrDDRxx2Zd597wG3raK2el3ruSxv3nLMTAMnNDQc
M9PAxZde3mGmgeb3P3H7C2ptu9858cS0Tb5VdCAXkIoxBlIRQkBqF/rF19u68wWO4TI5q7vf
4DPcvhykalv9r8js0WjoPhdNFSQX6+7vipkG9OjXYjMNhEB73HHHub+nDz4z85oDqRhjIBUh
BKR26XRRe1xEU4Oj1OV/38MHkjfX7iuak2pAqXSASiFV0VZFX6vNSe1K20wDup6aaeDi/3zF
MeAqH3/88e7veRde3CFfVYO4gFSMMZCKEAJSuxhU1Z1//U2tycmnHHHAqO7/UpBaDjLDZQJf
pRDkCVKzBk0VA9TQisZqRgEBKpCKMQZSEUJAahdZ8ClADYFVA5wUWe0MpFouq/5/+vm29eFc
qzrOVeNb3RRXeYBUzQwQwuiA3z3Z5ahqVgHlriqKGj4lS4CqaCo3NYwxkIoQAlIzu63bcjrD
Lnq9DvM/BYmWY5o1sl+AqL+2TJCqEfhhzqrtnwWUGpGvZc8uamkfjKVR/Lad6qhI6i13HGxP
E5g644CL2mqg1OPPtJWp/cI0gu70zNnPuIFSGnilAVjlthekKi+VmxrGGEhFCAGpRSKhAjzB
oy3Ta428PzrI53D6xXOwaBmCR+2j0feC01NPO9IOnAJMLdN6jcLPglSB5RXjDrnlOpb2v+2u
gx22E+SqHMGqbSN4tnOwNANNbdUbvsiBVIwxkIoQAlJLWIAoyBNMhuAadq0rQtn8/t6yU1Gp
a18wqa74OFqr5SpD/xeb2F9TWKkMbacy4u20TOVom7C+suqrdfV4/CqQijHGQCpCQCrulwZS
McZAKkIISMVAKsYYA6kIISAVA6kYYyAVIQSk4l7ny8ZeDaRijIFUhBCQivP3RQ6kYoyBVIRQ
r4FUjVAvNd2TRr5r/fLmfX4S+QPusaFZ22obbWsj8zUiXnOYjr78UIcppur7BXW0br3FmuFA
86129xf5yaecwk0NYwyk1q7hqZtSL/R/h0frx/vloaf5daMy1plvrKIOg1NPTz0/9YP+daiG
1Lemfs77Vr8sS6N9GQjlE1LLPT3J5jXVNExtuY2HikJt/LSnBx7Zn5x4Utt8pXMWtHTJF1RY
t97i4SMOd5gntru+yPV+4KaGMQZSa76ntqRu9mC5LvWe1GOCbZan3p16Z+B1ft3N0XJZ9dZ3
89wK6zAs9Wept/o6bPSvG4NtVvg6zPHl6v9lRcra6c8Hob4PqVlRTm3PF+mxYA2kYoxxr4LU
zRmwtyaAUGlHEDmtRHP8PoMq3H6WB8sBQdR0uy/HIrmJj9qaxvpl44NlAuZdqVuBVNQrIFXP
pL/mulb3JCY9x94e91kOUhUhtS73sLv/zikH3aNG9dSmSrvkdUylEqgeOo4isGG3uMrRBPxX
T2htP06ciqD6qAydx1XjW90jTMNj6HGserqUylcdX3rj84qhO0yRmHBDWx1VVvgQADu+Htuq
87hs7CFXL2tPHUvtefGlbcvtHFSmzusbtx9MvnnfgWOeZKXUic6kNgCp1fuXH/dc+b9I3/dr
0s/me9uLp4Vo/c/944XXpu+fv1qc3aOgMuKymhe1JCvvP5C89WjXPDEtrFt/8frV+9x5x+f+
s5Vty9+e1+Kuqy1/5/mW9u1LXWecG0id6++rWdBY8KCZeCisRBP89mOqqMPojDoIMhcH9/25
Gd3/Os6kKCI83actAKko/5AqmBQ0yXoE6PU3tZaFVD0m9ISGxEFt3N2vZSPOP+weKap81GJP
fQotOFM9tK9A77yLDjvQNcCzx5gK5lQPLY/rpv0FgCpD56D1lmogQNW5CRxnzm6DYXu8aaVf
iGovnbMezar/rQyDR52r6qxHtqotBcLaXn9tvY6nc9D+1r52XoJrpUmonuGTtNQeekTrex8T
Se1qCyT+LH0P/XxD10HWpk17k5fT93ex9e+m7+kX0vdFqTpo/RofkX89/UwuKdJroTLCst5d
0la2tv/RlINdcn5h3XqL39SP4mdqr7Ogf0H6uVW76geDfoQsS3/M/uCktK3T75zF6feh/n/H
P15Z67Ss3HXGuR041eC73ZeH99zUT/nlO3yEc2CRfXd4SKxVguIpHjjHl9juxii6OjiI3AKp
qHdA6rwgCqMooJYJvIpBqgBV+ab2PPusnNRqu/u1rQZahakGgjsbiKWyFWkslpOq/QWHIcgJ
lG0f1efCSw51WC/IrBRStZ9AURHeuN6KrBqECkr12FZbr2ir8lCzuvutfcMyFT1WGWEer0Vt
6e7velcCiPU6RndAqmBJ5VjkVjD1w3MPd2kb6ni9LTq4KP2h2BmwVruG10BlCUoVYbVl+lGw
IP1Ra23THe81ILXLNMfnpF7gX0/zMDjfg+F0nw+aBYGTfFf78BqPfYE/VuIheWCR7Ub6nNUl
RdYDqSj/kCrYzMqbVLQxC1IVKRRECfBKDZyqFlIFpNpf0UQdOwQ9q1PcfR/XzSLAZsGjRX5V
73iWAZVXKaQqJULbKuKrdjOrHQSvYSQ13E/LFCktBanzoq5apSoosqr/1Q6WkgGk1qmrfUdb
d6vAYtXDB9pBQvCgiJrAYfUj+91yRVa1rbpwtb26bfV/DDTWRR8u26gfemn52s+647WdHaNY
t7jBi47/1qz9bv93owGCpSDV6qe/YXe/6vDn6ftKkFpNl7zOQ+2hemi/X+7o2LWvyLBSB2Q7
XpiKoPOwtlg140AHcLProSimyv/pk2kZH+ytuhtc+1hb/WR2WxlhV7zOfXP6g9fOQykPYdf7
n6af4b9If9DadYrP6530R2NWSoXqbe+N8BroPF+Pvo/U9a/rpjYBUnstpDb4LvU9UQRzWEZX
vHXpx8u3lgDHSjTAg+loH5Fdk7HNKA+opSAWSEX5h9QQoMzW/Z8FqQJURfXikfWdhVRFKlUf
wZnAWWUJ1tZv31t0JH+5QV16bcvUha6u9HC94LBSSLXzUzRWx4ptQBqfcyWQGp+XRarXpjfa
qenNLozEAqmdt7rzF6U/JpbfcTB5NX2PCRQEn4KYpemPJL1elv74EJgIJF5MPw+KtMnquhXA
xJHQODoq2PpB+llR+YJIdQULggRrdgwtN2DJKkvH1baCHZUVds8Xg1RBsCBU9bcoqoGQIFFd
zCq32LFjC8wUERTcqr0WnnUkWZy+Hy0yq7JfTj8Tgjydo8Auq246puqkttd6g0QBqpZpf52n
0iBU/2rgTW2q8nVuqqP+qp62vwBS11vLdD3smq/0P1otwql6qI3svFSO6iWvTH+capsQ0H/2
Rtt10jnHkJple09sfp9Iai+FVAHqMg+oleSeDozyQQs+eqpl19epTtadPzJYNtbXcVmh+PRT
QCrqPZHUsAtcuZACJ4FSFqRatFIRRMGXQWRnIVX7hXOsWv6n5bx2FlLV9R93mWuAUqWQunR1
NlAq0mlR33pBqq6BorOK9KreMVxX63vu/zaQGkWzNgaD0AQgchY42OvV/hoIzspBqqJ4Appw
YJLlgirqVml3/8og8i+oc9HVNfuKQmoMqFk5qZXAVGgB3bIbWjt05QtKLVJqkKrjheAa1k2Q
uynoGdH2Klf/q10FgTaoSGUIiKuBN52PbMfXXwHnn/veCJ2zygsjoYLZRUGvR9zdb3m77rzS
z6PAUoCpHzO2zRvp94l+tFTSrnq/6Tx13O5MLQFS6yYB5zofnRyZsX5a4diR/Y0eIG+OtttT
Bh6LaXbqqdEyG70/2r++2eepzqmgPCAV9Y6c1HBKJEGVBgMpildq4JSAUtFJA7/OQqq6yUOI
FKiqfBvp3llIFeyqvGX+Ji+w1DGrGTil7RXdNahXHRXltK75SiHVHohQDFJl5ecqaitQ7+zk
/wapaz/4Tb+/qRlACpQU2fpF1LbFIDUEiXKQqu5j/R92OctxBK0cpG6KUl4EOQbLMQgquilA
VfQvnDmgs5AqeBdkKtqoyGs8K0EI8MWivALGcH0Y+bVIcYeHiHggrwTe1MbaVl392t6s7vYf
+FQmnbMAM56BIFyWBalvRbNsCHoNfNUOuh62T6l2FaAqiqv1YQoDkNqrIHWxB9ShRdY/5eFz
WLBMk+nvirrblxXpnjeoHVc4OsXUIP96UFDep8Fri+x+6v8f6QG1qcJzAlJR/iFVETsNOLJR
84IiGxBVbgoqg1yt7yykaqCQji0w074COwHg2g/q090vsFSOqo6h8xWIa59qIFVTVqm9BKuK
KKuOQ9Kbz5tr91UMqZq1QMdWJLoUpFrk9qrxrZ3+IjdIXbXhQ25sHg4UARPECBIEUcXyBLOA
shykZq2vZeBUvFwgahHfGAQtoinwUjd0vSDV8nQFcSpHYPaj9AdUVtS0GKTGEBpCqsq1c4qj
3ZXAm51fmJIRWj8MXHd/lNYUX6MsSI3PS+kf1l2v/8NBUMXaVe8rpUGE0W0gtddBqkVEsxxO
QbXVQ+li/39WWsDWQvHJ+5t8mY3hfTzIaRUgb/OwLMDUHKm7g2PML1HPSUAq6nWQKrgSkCoq
qsFK6lZ+MxhMYd3uFslTDueLUR7bE/Na3IwA2kbbWve/tpu3uLqnQenYijIK7ASt4TRMYT2y
lmXVTa/jZYqkqr6KpCpKK8ispo6CZh1XuaI697VBtExgGZ+zloWzIKit1Y2v84vbN+7yF8yG
+wKpdY6qpu2vgS/KgxRICCJqhVSVY8vUJewiqdHAH0UIFR2tFFLD/Ef3fk5BLMyZDEHQprRS
N7oiqrZvZyE1ngtUwCpQy4roFoXU6MdjCKn6gRBDrLrlK4U3RcK17TslnmpXL0jV+0PXQBF4
1Tk8r6x21XkIZNXFnzUvLpDaayB1qIe8LN8c5aze7GFzSpGoq9aPKnKcUb7MgdFxw3IUZb01
OEb4IIAJJeqZNZPAWL8PQvmE1P5kzW064YaON0Ob9D+P9RWsC6Dfq8Ok8kBqR1AUOFjXeweo
TH9wKOezHKRafmnYHa9BUbad9hXIvRP8wFCXr9Zr32IQGsNLmP+oiJyW/WzlvpIDp1QnAbcN
suospAoifzS5I2QKgm1ZZyFVebuqb5iTqqhjNfCm3NJlNxwbrVVkuRpItR8ApeZ6VbsqCq/3
UDjjQtyuuk56D7wZTVkHpPaZKagQQn0FUq3bvZjv7KJJxeOoqiKTmuxfx9SAJHXdK9KpdeXq
OGdB90xOrpQCpTkoLUHR7XqUCaR2nO7I5W6mbaxooHIXbbS75Y0KLgQ4gswsSFWE1I0mV/d7
CieCKkUyw+1cxFG5nCnMCX5sZgCLSLqBOelrQWsxeBF8uUFd97eNLA9hr9QUVMqltG7/zkKq
5W4qGqhyda6qS9YArlogNR7dr5xaG90f5+SWm4FA+a26pjYbgkVXK4FUXW9tY1BZDFLtR0xc
XtyuNoODDeoKbVNwAalAKkJAag6+SMJ5RbNcjy7tSqwufoGf0gl0XJtNQGkG5eq4dHX33EhU
J3vEa73KBFL3HDOASVBhXbZu7s8gYi041XI3P+qGfR1G2Ye5kAJQlSEwsjLjHEZFWBXlE+CF
x9BcnjpGFqTaMTWCXiPIdYz4aUhab7Cj48RApTppH0WHta0N4hIYVTtpvaLPglTVQ5HEDjMj
3H/svKdx3eKu+Li+ahebJ1XRY4tsV5WKsObo9VCbhXm5Ouf4EbCqX3i91NY6R4PUrPMyaxBU
fK1jSNXxtCzLpQblYSAVISAV9ysDqTivFszFXeKKHi8adjiX9RUMK0obR3lryfUFUoFUhIBU
vsiAVCAVF+nKL+V3uqGHQ6Am6FNUWQOSFM3Ua5serFwdK3kgQT0sKFUdFUWNB3oZpFbzkARF
pC0lAEgFUhECUvuQbSYB+cXXs6dvCrvntY1t/+ba/ndDaHpyHpCKM8GqlOMu8q6yoM6e/qW/
BnmW9lDK73RTrrjSOZSPq5SCrMe1CratTusrSA0K0wHi+XQxkIoQkNqLbXOyak7S+x7u2FW4
vHmfe6zrPUHOmLaxOVEf+35Lv/si1/tA74dlazZxY8MYA6kIISC1qyE1Xq45SgWoWndPxmCX
/g6p81/9MTc2jDGQihACUrsTUvXYU03bpKipnvAEpAKpGGMgFSEEpPY4pCrv1PJNgVQgFWMM
pCKEgNTcdPebgVQgFWMMpCKEgFQgFUjFGGMgFSEEpAKpQCrGGAOpCAGpQCqQijHGQCpCCEgF
UuvreYuXA6kYYyAVIQSkAql5a68fO0gVrHJjwxgDqQghILWHIFVzpmobILUjpPbV9wPGGEhF
CAGpvQJSixlIBVIxxkAqQghI7XJIveH3DiZNT+4vu7220bZAKpCKMQZSEUJAapd51YZ9LudU
nrOgPHRqG9t+6ep9QCrGGAOpCCEgFQOpGGMMpCKE2rRVUPIfh38lGX35lZm+bOzV7gNerSdO
nprcc/+3q3bTk/McJFXjp59/2QFWtV7evCVZteHDqvzWpl8DqRhjDKQihLpYnwlKBp1+RnL2
sC+V9QkNDQ5icPWutI1jF/vxUMpXjb+2ph8WX7vpNlfXqyf8t4p/VNz38GNV/6iQn130Wk0/
LKr9USGv/eA33KgxBlKBVITo7u/oTTt21QQWL73xTtUA88KSFTUB04xHv1tT1LcWELzmuq/X
BJ6N536lasAdctY5/EDohNV+1ba5rlMt13fCDbdU/V76xu2Tanrf6v1ey+eklh8Vr6x8t6bP
v743gCsMpCIEpNK9izu4+f1PagKLWiBGEdVagGnqjO9UDWd33/tATT8sFJmuFjovvvTymiLn
p572BX4g1Gj19NTS5udddElNPyxqeS/ddteUbkuDemLeopo+k8vWbOpzaVBAKkJAKsa4hyxI
qBYsBCO1QIzgp1pgEmTVAmfKba8FBmuBTsFqLZB74kkD+JFQo/WjrJY214/A+Pq9veUjIBUh
IBVjjHFPpEEpfaKWHxbdmQalNJRqf1Qo3aWzaVClorlAKkJAKsYYY0x3P0IISMUYY4yBVISA
VIwxxhhIRQgBqRhjjDGQihCQijHGGAOpCCEgFWOMMQZSEUJAKsYYYyAVIQSkYowxxkAqQghI
xRhjjIFUhIBUjDHGGEhFCAGpGGOMMZCKEJCKMcYYA6kIISAVY4wxBlIRQkAqxhhjIBUhBKRi
jDHGQCpCCEjFGGMMpAKpCAGpGOfUP9+wL/nZG58XXf/eB3uTd1/9PPnljrbX2lb7ZG37iy1t
24bLmhe1JKsf2Z+sff3zutdddQrr1l+8fvU+d96yrk/78jX7kncWtLhr9MuPj25v25a6zhhI
RQgBqRgfBcS1+5JlN7R26TF+tnJfsvyOg0XXr7z/QLJo2OGi6wU3LxSSdjDVttona9s1329x
29rrVQ8fcK9fvuRQ8taj+7sEsMO69QYLqP/i9oPHwHw1XnL5oWTBKUfctRB4vrd9b/uyxecf
ThacdiT54bmHk43NR6/Zn55+pOR1xkAqQghIxbgD1HU1OAgoBTDdAalxJHXpuENdCuG9MZJq
YN1ZSH39947+8HjjroPJwrOOJJs27W2Pfr980eFkydhD3fpew0AqQghIxb3IAqh3nm/r8tZf
654V0L1574HkxTOPtHfbCmAU/VK37Vuz9rtIq3u9el/ZLnpt95PZ+5OfPrm/HSj1VzCj6Fox
mDNI3fz+XrevyjDYqQRSVUdX/+17O0Cq/uq4AtVqQFLn/v+3d64hdlVnABUdNBgfoVq0Nsqg
wRRNRSVBU6IkpCWiUqsVVJTGEspIAxEVm/qgFkW01SKphFojKo022lSs9RHR0NRILMGKLVVi
UbSgoOAPf9lUU7k96+R80z17zn3NnXszk6wPFvee9z773JlZ99vfOUMbOH+ywLmQchyW/3n9
v8dJKn1CG6L/Nq/ZOWY4PPazee3OMrPL/lm/22FwtmEfXNOXNoy/DvQV69Cf5XWs+i6OTX9y
fNaNcor0vCiNyLPDLI9zzSUVGc2/OLCvNKutpIqSahhKqsgoiB/DroBUIAlIKQKCjDD//oMa
pXQgomXWc9HuYdv7hxqNp1Z8Xm6XZ0Lz7CfTrM+2wHtqE0NMGP5lH4hTnaSynHYhlGTgaBO1
pO0kFRmkrbQzH+4vh5+r/TY7dg5ixbF/t2xX4/Glu8rzQOTTDGTML/tx66fj2saxGNrmlXUg
js0rfU7Wkf3Qvt8u+G9X8oZAst36uV+UWWLe8xo1oFwv+pHjlvse/qJsL/3I8Sl9oM0IPOUQ
0b/ldSvOnfU5/zQLCggv++Q4uaTWQYkH56mkipJqGEqqSO1wPhITAkMWDElBOOrEAflDWEIQ
yZ61k1SygeXwcZLRe2rk/4LSyXA/25N1TAUnhKiZpJIFZJ0Q1Lqa1E5kKoV9k51M94fM0Y6Q
VPbHdGSe87YhoWm2GomOfVILilzSr0yTMeYcOpU3rh/rs580q8o1jppb2odsIvDlNkVbQ2jr
hvujf6OfaDtfMBDbNKON8EZft+tXRJrt0zpgJVWUVMNQUkVGIVuKgCB9iEN6x3UzSSULmK7T
TlKfWTW+pjQErlNJRbzyIWvazTB4naQiXGyT15v2KqkIPJlEJD7PvIbc0afNbpyibak0A5Ib
mV+EleH3dDmZ2k7lLa4nXwg4ZkBWNjKfnC9C2ewaNpPUMaUNxeeEz0FIZmwTZR+t+rXM9M6s
vzZKqiiphqGkioy5wx05QDIQD4Q1BKxOUhlu70ZS65Z3e+NUvpyMYWRX6ySV6Rg2T2s+e5VU
5JihcjKR7AfZo4ZzjJBu/bSlpOa1mbSBebSTdakLzofRO5W3OL860uuRC2InkprXoCLbCHb5
Gbr+s9H3rfqVc6PvclFXUkVJNQwlVaTlsy3JeiKqiFgzSc2FEanNM3MM58d2LE8FJiSTzByv
nUhqvn2IFFm5OknlmEgf2dRUlnqV1PQmIQSZfoqMbt3jprqRVN4jcFFq0enTDVLipqf8Zqxc
SPNznoik8nkJKSe7nA7d1/Urfc8Qf6vHgympoqQahpIqUkItZC6ICCZZyBCHdHi/TijrhuPJ
toZwIE7IV1q/WM4b2i1TdRKa7z+vf4wbmOKO8mYiSOYuraHtRVI5FrWb1GOm88qMbnE+kyGp
SC/D8mmNMH3TqbzRR/RVLroM98cNXpMlqXGdyxrX4pjchNesX+M65O1SUkVJNQwlVaRp9hTB
YPgXSWDon3pBXlO54DmXo3f3Z5JKNpN12AcSghCRWQvhQLhKaSXbdtt/yowb4ss+Wc50DAHX
iVDc3Y8gIqchrdHGdo+g4k70GPbvNZO68dzdta60mX0hlEzX3SQ1EUmNpxFQ+8qXBQS1mxun
ooaV/uSVNpZ3+Bf7jJrSdpKKeJdPYiim68opUrieLKOPWz0nlXPgmsexU5RUUVL3e7ngrmR6
QzXPMJRU2bdBMkNSEExkIb1BhhIA5AxZIYsYcjim1rCYHzfnIC6si2SlGUfqFjkG2ULWiWwh
yxBUjhF3nOf75pgIM9tC1IFGnSjtj0wex02znWQXWc68ONe0HrdVdq8um8p5lOdanAuiHfLG
8dN21M3L2xZtSOexP45ByQLnTP+3yjTXDvuv2TnaRu70T/uV883Pmen0urI914NMe96/eea2
rKPNzimV1OiDZiipoqSWgroimV6dSathKKkismcho5rfONXv/4rVC3xZIEudPxViIrW+Sqo4
3G8YSqqI1GRIkaRWpI+W6pukrvps9JmmZDPJ1MaD9im3aNfG9MkC/YQbxcgKM4xfl1lHUuPR
WnVlAnVyTsmAkipKqmEoqSKS/eetVkPSUCdjk/4vat/dXaNL9pQnJjxx0a7Rf4nKkHq7NlLO
MIj+QprLh/ePfD4uixolDNEmygXa7S/WTUtERJRUw1BSRURElFTDMJRUERERJdUwlFQREREl
1TAMJVVERERJNQwl1V9kIiKipBqGoaSKiIgoqYZhKKkiIqKkGoahpIqIiCiphmEoqSIiIkqq
YSipIiIiSqphGEqqiIiIkmoYSqqIiIiSahiGkioiIqKkGoahpIqIiJJqGIaSKiIioqQahqGk
ioiIKKmGsddI6rYdHzWefeXNlrCOv/hERERJNQxjYJLKe+ZNBw497PDGMbOP2+PMPfmUxvyF
Z+1xliw7v/xlvKe59MqRxlXX3LjHufbm28vP857m7vseaax7/Lk9zqPPvNT2C+gg2PzaO4qT
[base64-encoded image data omitted]

--BXVAT5kNtrzKuDFl
Content-Type: image/png
Content-Disposition: attachment; filename="blkif_segment_struct.png"
Content-Transfer-Encoding: base64

[base64-encoded PNG data for blkif_segment_struct.png omitted]

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="blkif.h"

/******************************************************************************
 * blkif.h
 *
 * Unified block-device I/O interface for Xen guest OSes.
 *
 * Copyright (c) 2003-2004, Keir Fraser
 */

#ifndef __XEN_PUBLIC_IO_BLKIF_H__
#define __XEN_PUBLIC_IO_BLKIF_H__

#include <xen/interface/io/ring.h>
#include <xen/interface/grant_table.h>

/*
 * Front->back notifications: When enqueuing a new request, sending a
 * notification can be made conditional on req_event (i.e., the generic
 * hold-off mechanism provided by the ring macros). Backends must set
 * req_event appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()).
 *
 * Back->front notifications: When enqueuing a new response, sending a
 * notification can be made conditional on rsp_event (i.e., the generic
 * hold-off mechanism provided by the ring macros). Frontends must set
 * rsp_event appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()).
 */

typedef uint16_t blkif_vdev_t;
typedef uint64_t blkif_sector_t;

/*
 * REQUEST CODES.
 */
#define BLKIF_OP_READ              0
#define BLKIF_OP_WRITE             1
/*
 * Recognised only if "feature-barrier" is present in backend xenbus info.
 * The "feature-barrier" node contains a boolean indicating whether barrier
 * requests are likely to succeed or fail. Either way, a barrier request
 * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by
 * the underlying block-device hardware. The boolean simply indicates whether
 * or not it is worthwhile for the frontend to attempt barrier requests.
 * If a backend does not recognise BLKIF_OP_WRITE_BARRIER, it should *not*
 * create the "feature-barrier" node!
 */
#define BLKIF_OP_WRITE_BARRIER     2

/*
 * Recognised if "feature-flush-cache" is present in backend xenbus
 * info.  A flush will ask the underlying storage hardware to flush its
 * non-volatile caches as appropriate.  The "feature-flush-cache" node
 * contains a boolean indicating whether flush requests are likely to
 * succeed or fail. Either way, a flush request may fail at any time
 * with BLKIF_RSP_EOPNOTSUPP if it is unsupported by the underlying
 * block-device hardware. The boolean simply indicates whether or not it
 * is worthwhile for the frontend to attempt flushes.  If a backend does
 * not recognise BLKIF_OP_WRITE_FLUSH_CACHE, it should *not* create the
 * "feature-flush-cache" node!
 */
#define BLKIF_OP_FLUSH_DISKCACHE   3

/*
 * Recognised only if "feature-discard" is present in backend xenbus info.
 * The "feature-discard" node contains a boolean indicating whether trim
 * (ATA) or unmap (SCSI) requests - conveniently called discard requests -
 * are likely to succeed or fail. Either way, a discard request
 * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by
 * the underlying block-device hardware. The boolean simply indicates whether
 * or not it is worthwhile for the frontend to attempt discard requests.
 * If a backend does not recognise BLKIF_OP_DISCARD, it should *not*
 * create the "feature-discard" node!
 *
 * Discard operation is a request for the underlying block device to mark
 * extents to be erased. However, discard does not guarantee that the blocks
 * will be erased from the device - it is just a hint to the device
 * controller that these blocks are no longer in use. What the device
 * controller does with that information is left to the controller.
 * Discard operations are passed with sector_number as the
 * sector index to begin discard operations at and nr_sectors as the number of
 * sectors to be discarded. The specified sectors should be discarded if the
 * underlying block device supports trim (ATA) or unmap (SCSI) operations,
 * or BLKIF_RSP_EOPNOTSUPP should be returned.
 * More information about trim/unmap operations at:
 * http://t13.org/Documents/UploadedDocuments/docs2008/
 *     e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc
 * http://www.seagate.com/staticfiles/support/disc/manuals/
 *     Interface%20manuals/100293068c.pdf
 * The backend can optionally provide three extra XenBus attributes to
 * further optimize the discard functionality:
 * 'discard-alignment' - Devices that support discard functionality may
 * internally allocate space in units that are bigger than the exported
 * logical block size. The discard-alignment parameter indicates how many bytes
 * the beginning of the partition is offset from the internal allocation unit's
 * natural alignment.
 * 'discard-granularity'  - Devices that support discard functionality may
 * internally allocate space using units that are bigger than the logical block
 * size. The discard-granularity parameter indicates the size of the internal
 * allocation unit in bytes if reported by the device. Otherwise the
 * discard-granularity will be set to match the device's physical block size.
 * 'discard-secure' - All copies of the discarded sectors (potentially created
 * by garbage collection) must also be erased.  To use this feature, the flag
 * BLKIF_DISCARD_SECURE must be set in the blkif_request_discard.
 */
#define BLKIF_OP_DISCARD           5

/*
 * Maximum scatter/gather segments per request.
 * This is carefully chosen so that sizeof(struct blkif_ring) <= PAGE_SIZE.
 * NB. This could be 12 if the ring indexes weren't stored in the same page.
 */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

struct blkif_request_rw {
	uint8_t        nr_segments;  /* number of segments                   */
	blkif_vdev_t   handle;       /* only for read/write requests         */
#ifdef CONFIG_X86_64
	uint32_t       _pad1;	     /* offsetof(blkif_request,u.rw.id) == 8 */
#endif
	uint64_t       id;           /* private guest value, echoed in resp  */
	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
	struct blkif_request_segment {
		grant_ref_t gref;        /* reference to I/O buffer frame        */
		/* @first_sect: first sector in frame to transfer (inclusive).   */
		/* @last_sect: last sector in frame to transfer (inclusive).     */
		uint8_t     first_sect, last_sect;
	} seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
} __attribute__((__packed__));

struct blkif_request_discard {
	uint8_t        flag;         /* BLKIF_DISCARD_SECURE or zero.        */
#define BLKIF_DISCARD_SECURE (1<<0)  /* ignored if discard-secure=0          */
	blkif_vdev_t   _pad1;        /* only for read/write requests         */
#ifdef CONFIG_X86_64
	uint32_t       _pad2;        /* offsetof(blkif_req..,u.discard.id)==8*/
#endif
	uint64_t       id;           /* private guest value, echoed in resp  */
	blkif_sector_t sector_number;
	uint64_t       nr_sectors;
	uint8_t        _pad3;
} __attribute__((__packed__));

struct blkif_request {
	uint8_t        operation;    /* BLKIF_OP_???                         */
	union {
		struct blkif_request_rw rw;
		struct blkif_request_discard discard;
	} u;
} __attribute__((__packed__));

struct blkif_response {
	uint64_t        id;              /* copied from request */
	uint8_t         operation;       /* copied from request */
	int16_t         status;          /* BLKIF_RSP_???       */
};

/*
 * STATUS RETURN CODES.
 */
 /* Operation not supported (e.g. flush, barrier or discard on a backend
  * that lacks support for it). */
#define BLKIF_RSP_EOPNOTSUPP  -2
 /* Operation failed for some unspecified reason (-EIO). */
#define BLKIF_RSP_ERROR       -1
 /* Operation completed successfully. */
#define BLKIF_RSP_OKAY         0

/*
 * Generate blkif ring structures and types.
 */

DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);

#define VDISK_CDROM        0x1
#define VDISK_REMOVABLE    0x2
#define VDISK_READONLY     0x4

/* Xen-defined major numbers for virtual disks; they look strangely
 * familiar */
#define XEN_IDE0_MAJOR	3
#define XEN_IDE1_MAJOR	22
#define XEN_SCSI_DISK0_MAJOR	8
#define XEN_SCSI_DISK1_MAJOR	65
#define XEN_SCSI_DISK2_MAJOR	66
#define XEN_SCSI_DISK3_MAJOR	67
#define XEN_SCSI_DISK4_MAJOR	68
#define XEN_SCSI_DISK5_MAJOR	69
#define XEN_SCSI_DISK6_MAJOR	70
#define XEN_SCSI_DISK7_MAJOR	71
#define XEN_SCSI_DISK8_MAJOR	128
#define XEN_SCSI_DISK9_MAJOR	129
#define XEN_SCSI_DISK10_MAJOR	130
#define XEN_SCSI_DISK11_MAJOR	131
#define XEN_SCSI_DISK12_MAJOR	132
#define XEN_SCSI_DISK13_MAJOR	133
#define XEN_SCSI_DISK14_MAJOR	134
#define XEN_SCSI_DISK15_MAJOR	135

#endif /* __XEN_PUBLIC_IO_BLKIF_H__ */

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--BXVAT5kNtrzKuDFl--


From xen-devel-bounces@lists.xen.org Tue Dec 18 14:31:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyCc-00030i-Jx; Tue, 18 Dec 2012 14:31:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkyCb-00030Y-5c
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 14:31:25 +0000
Received: from [85.158.143.99:45228] by server-1.bemta-4.messagelabs.com id
	38/49-28401-C3E70D05; Tue, 18 Dec 2012 14:31:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1355841080!20390934!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25411 invoked from network); 18 Dec 2012 14:31:22 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 14:31:22 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIEVChS019056
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 14:31:13 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIEVB6c015060
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 14:31:12 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIEVBtQ005061; Tue, 18 Dec 2012 08:31:11 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 06:31:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 60C781BF210; Tue, 18 Dec 2012 09:31:09 -0500 (EST)
Date: Tue, 18 Dec 2012 09:31:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com, martin.petersen@oracle.com,
	felipe.franciosi@citrix.com, matthew@wil.cx, axboe@kernel.dk
Message-ID: <20121218143109.GA24471@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BXVAT5kNtrzKuDFl"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Content-Transfer-Encoding: 7bit
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] RFC v1: Xen block protocol overhaul - problem statement
 (with pictures!)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Hey,

I am including some folks that are not always on Xen-devel to see if they
have some extra ideas or can correct my misunderstandings.

This is very much an RFC - so there are bound to be some bugs.
The original is here
https://docs.google.com/document/d/1Vh5T8Z3Tx3sUEhVB0DnNDKBNiqB_ZA8Z5YVqAsCIjuI/edit
in case one wishes to modify or comment on that copy.


There are outstanding issues we have now with the block protocol.
Note: I am assuming a 64-bit guest/host, as the sizes of the structures
change on 32-bit. I am also attaching the header for the blkif ring
as of today.

A) Segment count is limited to 11 pages. It means we can at most squeeze
44kB into one request. The ring can hold 32 requests (the next power of
two below 36), meaning we can have about 1.4MB of I/O outstanding.

B). The producer and consumer indexes are on the same cache line. On
present hardware that means the reader and writer will compete for the
same cache line, causing it to ping-pong between sockets.

C). The requests and responses are on the same ring. This again causes
ping-ponging between sockets as the ownership of the cache line shifts
between them.

D). Cache alignment. Currently the protocol is 16-bit aligned. This is
awkward, as the requests and responses sometimes fit within a cache line
and sometimes straddle them.

E). Interrupt mitigation. We currently do a kick whenever we are done
"processing" the ring. There are better ways to do this - we could use
existing network interrupt-mitigation techniques to make the code poll
when there is a lot of data.

F). Latency. The 44kB limit per request means that a 1MB chunk of data -
which on contemporary devices would need only one I/O request - is split
up into multiple 'requests', inadvertently delaying the processing of
said block.

G) Future extensions. DIF/DIX for integrity. There might be others in
the future, and it would be good to leave space for extra flags, TBD.

H). Separate the response and request rings. The current implementation
has one thread for one block ring. There is no reason why there could
not be two threads - one for responses and one for requests - especially
if they are scheduled on different CPUs. Furthermore this could also be
split into multiple queues - two queues (response and request) per vCPU.

I). We waste a lot of space on the ring, as we use the ring for both
requests and responses. The response structure needs to occupy the same
amount of space as the request structure (112 bytes). If the request
structure is expanded to fit more segments (say 'struct
blkif_sring_entry' is expanded to ~1500 bytes), that still requires a
matching-size response structure. We do not need that much space for one
response. Having a separate response ring would simplify the structures.

J). 32-bit vs 64-bit. Right now the size of the request structure is 112
bytes under a 64-bit guest and 102 bytes under a 32-bit guest. It is
confusing and furthermore requires the host to do extra accounting and
processing.

The crude drawing (attached) displays the memory that the ring occupies
in offsets of 64 bytes (one cache line). Of course future CPUs could
have different cache lines (say 32 bytes?), which would skew this
drawing. A 32-bit ring is a bit different, as the 'struct
blkif_sring_entry' is 102 bytes there.


A) has two proposed solutions (see
http://comments.gmane.org/gmane.comp.emulators.xen.devel/140406 for
details). One, proposed by Justin from Spectralogic, is to negotiate the
segment size. This means that the 'struct blkif_sring_entry' becomes
variable-size. It can expand from 112 bytes (covering 11 pages of data -
44kB) to 1582 bytes (256 pages of data - so 1MB). It is a simple
extension: the array in the request grows from 11 entries to a
negotiated variable size.


The math is as follows.


        struct blkif_request_segment {
                uint32_t grant;                  // 4 bytes
                uint8_t  first_sect, last_sect;  // 1 + 1 = 6 bytes
        }

(6 bytes for each segment) - the above structure is in an array of size
11 in the request. The 'struct blkif_sring_entry' is 112 bytes. The
change is to expand the array - in this example we would tack on 245
extra 'struct blkif_request_segment' entries: 245*6 + 112 = 1582. If we
were to use 36 requests (so 1582*36 + 64) we would use 57016 bytes (14
pages).


The other solution (from Intel - Ronghui) was to create one extra ring
that only holds 'struct blkif_request_segment' entries. The 'struct
blkif_request' would be changed to carry an index into said 'segment
ring'. There is only one segment ring. This means that the size of the
initial ring stays the same. A request points into the segment ring and
enumerates how many of the indexes it wants to use. The limit is of
course the size of the segment ring. If one assumes a one-page segment
ring, the whole request ring can then cover ~4MB. The math is as
follows:


The first request uses half of the segment ring - indexes 0 up to 341
(out of 682). Each entry in the segment ring is a 'struct
blkif_request_segment', so it occupies 6 bytes. The other requests on
the ring (there are 35 left) can use either the remaining 341 indexes of
the segment ring or the old style request. An old style request can
address up to 44kB. For example:


 sring[0] -> [uses indexes 0->341 in the segment ring]   = 342*4096  = 1400832
 sring[1] -> [uses the old style request]                = 11*4096   = 45056
 sring[2] -> [uses indexes 342->681 in the segment ring] = 340*4096  = 1392640
 sring[3..31] -> [use the old style request]             = 29*11*4096 = 1306624


Total: 4145152 bytes. Naturally this could be extended with a bigger
segment ring to cover more.





The problem with this extension is that each entry uses 6 bytes and ends
up straddling a cache line. Using 8 bytes would fix the cache
straddling, at the cost of fitting only 512 segments per page.


There is yet another mechanism that could be employed - and it borrows
from the VirtIO protocol: 'indirect descriptors'. This is very similar
to what Intel suggests, but with a twist.


We could provide a new BLKIF_OP (say BLKIF_OP_INDIRECT) in the 'struct
blkif_sring' (each entry can be up to 112 bytes if needed - so the old
style request would still fit). It would look like:


/* 64 bytes under 64-bit. If necessary, the array (seg) can be
 * expanded to fit 11 segments as the old style request did. */
struct blkif_request_indirect {
        uint8_t        op;           /* BLKIF_OP_* (usually READ or WRITE)   */ // 1
        blkif_vdev_t   handle;       /* only for read/write requests         */ // 2
#ifdef CONFIG_X86_64
        uint32_t       _pad1;        /* offsetof(blkif_request,u.rw.id) == 8 */ // 4
#endif
        uint64_t       id;           /* private guest value, echoed in resp  */
        grant_ref_t    gref;         /* reference to indirect buffer frame, if used */
        struct blkif_request_segment_aligned seg[4];  /* each is 8 bytes     */
} __attribute__((__packed__));


struct blkif_request {
        uint8_t        operation;    /* BLKIF_OP_???                         */
        union {
                struct blkif_request_rw rw;
                struct blkif_request_indirect indirect;
                /* ... other ... */
        } u;
} __attribute__((__packed__));




The 'operation' would be BLKIF_OP_INDIRECT. The read/write/discard, etc.
operation would now be in indirect.op. The indirect.gref points to a
page that is filled with:


struct blkif_request_indirect_entry {
        blkif_sector_t sector_number;
        struct blkif_request_segment_aligned seg;
} __attribute__((__packed__));
/* 16 bytes, so we can fit 256 of these structures in a page. */


This means that with the ring in a single page (32 usable slots), each
blkif_request_indirect covering 256 * 4096 bytes, we can address ~32MB.
If we don't want to use indirect descriptors we can still use up to 4
pages per request (as the structure has enough space to contain four
segments and will still be cache-aligned).




B). Both the request (req_*) and response (rsp_*) index values are in
the same cache line. This means that we end up with the same cache line
being modified by two different guests. Depending on the architecture
and placement of the guests this could be bad, as each logical CPU would
try to write to and read from the same cache line. A mechanism where the
req_* and rsp_* values are separated onto different cache lines could be
used. The correct cache-line size and alignment could be negotiated via
XenBus, in case future technologies start using 128 bytes per cache line
or such. Alternatively, the producer and consumer indexes could live in
separate rings. Meaning we have a 'request ring' and a 'response ring' -
and only 'req_prod' and 'req_event' are modified in the 'request ring'.
The opposite (rsp_*) are only modified in the 'response ring'.


C). Similar to problem B) but with a bigger payload. Each
'blkif_sring_entry' occupies 112 bytes, which does not lend itself to a
nice cache-line size. If the indirect descriptors are to be used for
everything, we could 'slim down' the blkif_request/response to be at
most 64 bytes. This means changing BLKIF_MAX_SEGMENTS_PER_REQUEST to 5,
as that would slim the largest of the structures to 64 bytes. Naturally
this means negotiating the new size of the structure via XenBus.


D). The first picture shows the problem. We are now aligning everything
on the wrong cache lines. Worse, in ⅓ of the cases we straddle three
cache lines. We could negotiate a proper alignment for each
request/response structure.


E). The network stack has shown that going into a polling mode does
improve performance. The current mechanism of kicking the guest and/or
block backend is not always clear. [TODO: Konrad to explain it in
detail]


F). For big I/Os that the backend device could handle whole, the current
block protocol ends up doing extra work by splitting the I/O into
smaller chunks and then reassembling them. With the solutions outlined
in A) this can be fixed. This is easily seen with 1MB I/Os. Since each
request can only handle 44kB, we have to split a 1MB I/O into 24
requests (24 * 11 * 4096 = 1081344 >= 1048576). The backend then ends up
sending them in sector sizes, which with contemporary devices (such as
SSDs) means more processing. SSDs are comfortable handling 128kB or
larger I/Os in one go.


G). DIF/DIX. This is a protocol to carry extra 'checksum' information
for each I/O. The I/O can be a sector size, a page size, or a full I/O
size (most popular is 1MB). DIF/DIX needs 8 bytes of information for
each I/O. It would be worth considering putting/reserving that amount of
space in each request/response. Putting in extra flags for future
extensions would also be worth it - however the author is not aware of
any right now.


H). Separate response/request rings. Potentially even multiple per-vCPU
queues. As v2.6.37 demonstrated, the idea of WRITE_BARRIER was flawed.
There is no similar concept in the storage world where the operating
system can put a foot down and say: "everything before this has to be on
the disk." There are lighter versions of this, called 'FUA' and 'FLUSH'.
Depending on the internal implementation of the storage, they are either
ignored or do the right thing. The filesystems determine the viability
of these flags and change their writing tactics accordingly. At the
protocol level, this means that WRITE/READ/SYNC requests can be
intermixed - the storage itself determines the order of operations. The
filesystem is the one that determines whether a WRITE should come with a
FLUSH to preserve some form of atomicity. This means we do not have to
preserve an order of operations - so we can have multiple queues for
requests and responses. This has been shown in the network world to
improve performance considerably.


I). Wastage from carrying responses and requests on the same ring.
Currently each response MUST occupy the same amount of space that the
request occupies, as the ring can hold both responses and requests.
Separating the request and response rings would remove the wastage.


J). 32-bit vs 64-bit (or 102 bytes vs 112 bytes). The size of the ring
entries differs depending on whether the guest is in 32-bit or 64-bit
mode. Making the size the same in both would save considerable
accounting that the host has to do (an extra memcpy for each
response/request).

--BXVAT5kNtrzKuDFl
Content-Type: image/png
Content-Disposition: attachment; filename="blkif_sring_struct.png"
Content-Transfer-Encoding: base64

[base64 image data for the attached blkif_sring_struct.png elided]
orgGpnHagUCzWH6njhUPElMdFKXU/wLSeABXufaJUyq0rdpF52hW173qbZAadrMbHIfXMAtS
ra3NigRbXXUdBdeWcxq3q66F/fAJ0w76I6R+875vOUh4ZeW7fAf1fUiVBviu9qFgB0JAKq5h
4JK6fgUIgheBmEXyikFqvH8pSFXU0wCp1oFTMVTJOqZFZ2NI1WutV9QyhJ7OQqqis4o6qsvb
oqZx2kE84CiG1HhQV1iH8JyqGTgWb1vMBqlZ16scpOpHTFZOqeW/qq3th0jcrlnnrRQHlWup
Dv0BUt/7eHcy5KxzkhHnX8R3T/+BVIQQkIrrMbelph4SrAhaq4FURTWLgadgJl5vkKKu6FpH
9wuKXvr5q5mQal3TymENYamzkGpWvQVmlr9pEcTOQqq63MO0hjASXUm9lEagbeMBXaFrhdR4
8JZSGgTr1g5hBDhu1zBaHOb3qlzLle0PkDpv8XIXRZ3x6Hf5zgFSEUJAKi5lzRMqkAu74pUv
alBUCaQK0LTM5ryUBSy2nUXMwhkAwpzGSiE1hCTLpbXBTsUGTml9OMiqHjmpYeRUVk5qGNHt
DKSqS1w/EMK2NBCudCCcoptx17ryQRUBrhRS1XVfDlIt/UH10w+GEIzjdtV7RNuEMyioTkoZ
6E+j+yfccIubG7X5/U/4/gFSEUJAKi7XfS2oUZ6kInYCVEGSAY1F5gQixYBSUCEQtJH+gg+b
mD0ELS1T7qOAxUbchxE1gW38JCKDFwGOIEr1EySqrHAAU6kpqARs1u3fWUhVbqfKUgqDIog6
V517CMudgVTllOo8FVHV4DS1qY5XzcApG5Smuuma6q9NI1YppOr66MeLyioFqUoB0LHi8uJ2
FTzrnASl+mGkc1MUNkwhiK+Nrpsi4sXyZnVueh1Pf5ZXr/3gN8mJJw1Irrnu63z3AKkIISAV
V9pFLEC0QT3q8g8jq+pS1zrBpaJlWRPlCzDUnavtBHKCzXA7QYogQ+VrG4GSRQu1TvsUm4JK
UVcBS3gMQWI4IErHsqmMBFYGVzaoRyPPtUzbaFsbhKVlKquaKbt0HlYPwXcISSo77moPl+lY
Yd2y6qC66RpoPwG50iWqgVSLkGuWBtVRf/Xa1gmu40ir6hcuEzzqHG3WgbB9Y+tHTpxHmwX/
Oi+VJ/BU2fG1jiFV7RLWSfUO31P2XixWr7z5oVnfc139zy56je8dIBUhBKRi3LssII1H0WdN
hJ+nHzjhVGClIDUPTzfrSZ930SXJGWcOcYOneK8DqQghIBXjqiCplLujW1nRakGfIqmKNOqv
Xit6a0/xKuXuAjwdR9FTpYaEaRchpNpMBZU8JEHnZ7My9EVIXbZmk4ui3jllGp81IBUhBKRi
XP3z5ks57iLvyinB7HGt6ha3Sfete7uU1SXeXZCq6bcEqFmPdVVXvdWp1EwDYd6wbR/Oh9tX
LDjV9+3y5i181oBUhBCQijHG+ZgbVY9AvfCSS2kPIBUhBKRijHE+PGfBUhdFnTn7GdoDSEUI
AakYY5wPXz3ha27qKU1BRXsAqQghIBVjjHvcb2/5yE3ef/1Nt9EeQCpCCEjFGON8+IFHHndd
/S8sWUF7AKkIISAVY4zz4eEjzkuGnHUObQGkIoSAVIwxzodfWfmui6Lec/+3aQ8gFSEEpGKM
cT58yx2THaSu2vAh7QGkIoSAVIwx7nlv2rErOfW0LySjL7+S9gBSEUJAKsYY58NPzFuU8P0K
pCKEgFSMMc6Vrxj31eTkU05xEVXaA0hFCAGpGGPc435r069dFPUbt0+iPYBUhBCQijHG+fDU
Gd9xkPri6z+lPYBUhBCQijHG+fAXG7/sTFsAqQghIBVjjHPh+a/+2EVR73v4MdoDSEUIAakY
Y5wjIGpocHmptAeQihACUjHGuMe9fvtv3Yh+jeynPYBUhBCQijHGubC+S/WdqjlSaQ8gFSFU
B0i9esLX3LOlMcYY1+6zh/2fyYDfPTn55h98i/bIka+57utAKkK9FVIxxhjjvuorxl0DpCLU
WyH1gUceT1Zt+BBjjPuNf/jGO8mQs85Jfv8PH6pLebfddY8Doj9+8c9p35z5azfdBqQi1Fsh
lZxUjHF/s+BF33/qDq5HeQLe4SPOo23JSUUIAakYY5wPSJ23eLkra8aj36VtgVSEEJCKMcb5
gNQJN9zi5kZtfv8T2hZIRQgBqRhjXLsFlPWA1LUf/CY58aQBbpaUPJ3fm2v3Jd+870CXHuOV
lfuSGY/uB1IRQkAqxhjX0/WA1Jmzn3HlPLvotVyd2z33H0hGX36oi+HvoDOQihACUjHGuM6Q
KoDpTBnnXXRJcsaZQ5L3Pt7d7fV/7+M9yUtvfJ48/XxLsnT1viC6uzeZOPlgcuElh5JVG/a5
7Zrf3+uW6/W8xZ8nm3bsSd7estct7xhhPnaZtnt2UUvy4utt+9l211zX6qwygVSEEJCKMcY5
gdRlaza5Mu6cMq3b6y5wHD7icArIR5KLLz2UnHhS4iKngkh1wZ962hG37Oxhh5O3Nu11Ec8J
N7QmJ59yJK1zktz3cFukNY6ExtFRpQyc0JC0H0vlLW/el57zQVeWrGVAKkIISMUY45xAquBU
ZQhWu7vuU2ccSBrPPeyipJaDKihVlDSru1/gKTids6DFRT4VVS0HqSpLgPrCks/bI7eXjT2U
XDHuEN39CCEgFWOM8wip6t5XN/+Fl1zaI3V/4JH9Dkpnzt5/TPd8MUg976KOEc9ykKq/itKG
6xVF1YApIBUhBKRijHEOIXXOgqVufw2c6om6r9/elhOqSKcsmHxo1v72yGoWpF49obUqSM1a
z8AphBCQijHGOYZUTTmlqac0BVVPnoNg9fFnWhyAClbvvvdAUUiNgTILQgW+tkxd+3odrlfO
q1IFgFSEEJCKMcY5g1TNsarJ+zWJf0/VXTmp8TyoAkqBpUFq2FWfBZR6bdtbzukXGw+3b6fy
NSjKorOyorUaLKVlNhgLSEUIAakYY1xHK6e0Fkh94JHHHeC+sGRFzcd+7PstHSKdGqSk1zaV
lEbv67UNhIrd9OR+FzlVbur8Vz93rwWPgkit1+h9vdZIf+WsZkGq9rUydBxBrkbw23YaYKVZ
AhSl1XpFbAedfsQBsNbfcsfBZMhZbccIQRZIRQgBqRhj3AmfPexLKZh9ver9ho84L4Wzczp1
7M5CqkU1NReqop0aFGWAavsLLkecf9iVKWiV43lWFS3VLAGyorOC3XA7zRogeNUxNA2V1hmQ
ahCVRvrr2JrmCkhFCAGpGGNcJ0gdffmVFY3kt/9fWfmui6J+875v0Ya9xEAqQkAqxhj3SUj9
xu2TkqvGX+tG9N965zcdpK7a8CFtCKQihIBUjDHuOUgV5Oi7Uj7+hBOSoWcP65EJ/DGQihCQ
ijHGQGompB533HHt/5934cXJ0tXraUsgFSEEpGKMcc9Caugrxn21Q74qBlIRQkAqzol/sfNf
krU7drq/tAfuL5CqaOq5X/m/kvXbf9ut9dWUUBq1z7UDUhECUnGn/N4n/5os3byyS4+xaec/
J8u2vFV0/Z+u/2Hy4PLJRdf/eNsvknuX3uhAU6+1rfbJ2vbVLas6lPWjX61L/uDPf8/t/yfv
Plf3c4vr1lv8xvt/lfzlr7fXvP+s1Q+58w7PXW39v1bc55Y99Mbvd7jmtm2p64yz/cXGL1ff
3X/88cnpZ5yZwuJHxwBkutrNWVqsHE0pZXOMagoqTetU/EEDidvGfc537HHTPGmZ5jXtijlJ
yz0CFUitWten3llk+Vb/ftqTeknqocF6/b8sdUvq1tRrUjfWWIdGf4x4/5G+XJW/K/VTqQcU
KWNm6mZwCQGpfcwx1HWFBZSCmu6A1Le3f5As3rSs/fWTP30seWzVDLevYLne56ZyVZeuKLur
rDqrPdWunYHUp9Y87sr65T/8mwPeqa/elCzcuMQte3nTX7jXqz/c0n5M/UgAUmsBsyurgtTj
U0Ad8LsnZ+ah1htStZ3Nmaoyreyumo9U9ZmzoAVIrY8meDiMIXW0B8PZqYenHpt6e+rNwTbN
ftmY1KNSr0u9rYY6XODLSSJIHZT6s9QbfX1G++Mvzyhjut8fSEVAam+OmGZ1eReDVIHHz3b8
Q9XH2fD3nyYb//6faoZU7S/XCqlZMPXCugU1nUep89d6tWk5GIzbIk5B6Mw1VdkGiaXqGbdn
KUjVOZc7L2vXP37n6fbXzzTPSR5dNb3DNvqBIFf6YwQXh9TGc79SOaSecELy7KLXinbFG0gq
8qnXccSzGkiNAVJlV93T4uuxfns22Gp5ufSBtR/sdWUUWy9o1jZAqpOikQs9iDZnQOpzPooa
arwHweF+f/0/MVg/xi8bVUU9HvSQvDkDUqf6dWH0drjf7oIgmrs69W4PyEAqAlJ7mwUwc5uf
dVEtgYn+KgImEBGgWjesQYtA4um3n3DRRy17ft0PHIzEkBmDp/Z9+Ef/o70si15qu/AYWWCm
baa/PskBjW2nsg2uSkGqYE/HkrV9CN3hcSvtklfkT13Vts+M5Xe7buywHi9ueNn9VZ1Xbfv5
MXX7s80/6lCG2s9AMr4e2k6vq4E3necf/eQ77eU/8PodHbrWdbx5785LHl89s32b76y83+1n
gGpWO9p5CeatS/6RN6c68IxTBJQ6oahxDKnafv7PFnbYXhFt1Q1I7TykKi+13HaaI1XflQ/N
+l7JfFGB5G13HXSPEtX/eoSonuBUCaTq0aLqyn/6+ZYO3f3aXv+bbf/SDx/Q3K4HXXna58ST
EvcIU4Nm69rXI1C1/ol5LR26+3WMiy891KEM1fWlN45GiV98/fPki41HUxD05KpykeR+AKmN
HugElpMyILUxAMEsSC14MGwK1k/00Du4inos92kF4zIg9bkikdkWHzkt+AjvQr/fQiAVAam9
0AKLacv+e/JXf/u37vWav/mVey2QEuC99PNXHYgJXgSuAgkBlMBpxV+vd93n5SBV0TeVKTBS
mYIYQaPW63918yrKVizqZyCr41iEUbmNVn4xSA0B1SLEIaRqex1Xxy8XcTTruDp3bav2UJ0E
o2E9BHxqG8FrVt3+cNntbp32X771Hbf+tV+uabtppoCr9YJbHUP1VXtXA286J2tPO2cdw7rW
VWeVqS53tf9PPtzqjqkfHDqmtrM6ab2dg9pR9dK56X0hwAyjqvpxox8wWZFUHU/1iKP0KhdI
7R5I1XZ3TplWdlCTIE3gJ5hTFFOPEBXAWbd9MUiNATWEVEUpH3hkv3utY1QStZw5e78DZD3S
1NIFwvJVD5Wnx6iqi98ewRpCqrYXpGqdoqXKidUjWF3qT7rs5FOOOCBXNFbHsZzZfg6poSYV
yUkN1eC788Po6s0+T1RwONf/P7Uz9/gIUpt8mQ3BssF+u6aMMoBUBKT2RmtQlKJfgg9b9u7f
fdTeDR139wskBJwh0JWDVOUhCmjCfQTFiqRpWSXd/YKoEIgMpFROFggKuGJAzTofHbfS1AB3
I06BXSBm7SOIU3vpPKweAs9SqQiC3LhMq4P+V3uF6xWxrBTeBJw6nn5shMsVNTWA1PUSSIfr
tU7R16zufjuH8D2iqKvAUz9yrB30PhLAZkGq9o8hVaCu5RYRB1K7FlJnPPrdslNNGaQKEMPl
et69YK4YpAocFXmdt/jzogOnqu3uFxwLUhXttGXL1hwFXNVDLjZwyqK3Am1bL6AVgIflh+kM
r6zcB6RWB6mCxCW+6z3syp/ic0ZXFNoGUO0uAo+1QuooH5mdn3qgB9TlfhmQioDUvmLBhY24
VkRQQGSgUQxStX1YRjlIzVpfbU5qnM8osFGdi0UrBVDyEz9pKpljWy2kCuqtbAGwIopx2oGg
tZp8WVuma6FtDfzMOkal8GbRSYtUm3Vt7brpeijqWewaFoPUOB1C6RdWjtpFgG0/RLIgVdF5
Iqk9B6mVTg8lSAu7xOUJN7S2A2EMqeqGNy9v3lc3SG1+f6+DY+0z5Kwjrqs/hMeskfwxpGq/
Ynmx2i6GXAErkFoxpA70ECoYHR0sH+uh8sZgmQ22mlgnSLW67fHrbCDXOiAVAal90Oq2V86g
AauBVBakxkCZBaHKX6wnpMbrDVIV3csCQUGt1sXdzJ2FVMtzVY6noo+KKgsAVZ8smKsFUuPp
uBRZrRZSBY36P7RFeHU9QoCsFVItB9XSKsIBaDGkKrc2Pm+lGyhCDqTmH1Kvua41uWxscUhV
tFP5n+pKDyOTnYHU9h9paV2UK2q5o4qAVgqp8aCuGFKt6z8chAWkVgSpAz0Qqj4jo3UP+q74
WBt95LNekGr1GBvkuu4qAsJAKgJSe6MViVSkLu5eFlRUCqnqWo+jq4pght39gpGwu1/d9Yq8
Ce4q7e4P91e0V+CkfNdSIKi6aV8bid8ZSBWMqTzL37V8WwPLzkKq/tfgsni+VkUsq+3u1994
kJLlvdYLUnU9lMahsi31ohikzvnLJ9vTCcIUA0b35xNSw4FSbXOxHk7uvvdAyZxURVEFrMo9
rQekaiBUnHZw1fiOEd3OQKrKVk5qOGvAC0s+B1LLQ6rAcJv30Ix9bOR9PGfpNp+fWq/u/sVR
Tup4H1EdBqQiILWPWOCiaKOibMr5FGhoJLYBhqBO0TLBrCJ9WUBpXeAqQ9Cpbl1FGG07LdNr
5WLatEiKdNr6cLBQ1vRGNnDKBl4p31IwZzmW5Ub367V1+3c2kqrjCrZsIJkAX+euLv56QKpg
1+YTFcgLimsZOKV6qp1slgaVUSmkWpRa1yUr5ze0gFrXNs5xjSHV4Nmi2naeYWoJkFqbr7nu
63WHVMGdBkop/1O5qII5G8BUanS/opxht39nIFX5ozqu8lwVnVU+qmA5zI3tDKQKTpUOoAix
jqG8Wr0GUstC6hwPjQ/69aGHeu/2EKkI5yDfFd8a5K3e6MHRIqAj/euRFUKq9mvxXfsD/H6a
T/W5IucBpCIgtbfaBkOFU1DZwCBFCm3qKEGOomYGh+Ecq1pm0yYJ4tSVG24nSBH82tRG2sZy
OTXRu6KqWRFAiwLayHGro/63AVHaR3Bj0VJ1PYcT9guytF6gLZDW+jCaF25bzoI2AVk4BZXB
X1yPSuqWtUwgp7bSfgI9gWAcqS43BZXay66HRTvDyLccR8PD62VtrQho1nmFaSLaTte7FKTa
jxnVRdvrR0mcowqk1g4vQ846py5laQS8wE6AqEFFAjaBYQhtyk/VSH73PknBTt38YU6nQFHb
tD2y9XD7aHz9rXROVStLQCrotSmoVK5FPvX/nVMOHpM7a8tUx7BuWXUQeCuVQcs0sl/RVSC1
g2723fpxt/3OIh4T5KDa/KaJB8jrgzKm+e0t6jkm2j/UmGjbQgC6nwU5qc9FkdVQehrVEnAJ
Aam92ALTSqZhKvVAgPhhAFkDtcptU67LvZLJ5LtjftliE/F3xoosxhFLAWM8AKwn6xg/SMHm
Ri0HqWGKRC1PFsOlJ+nvklz1LfmY4F6DqOpdpgA1no1A0WNBajwArB9Damc1qEhKQD01GAxC
QCrG3WBFQBWpNZBT9Nfmrc1TPVU/pSMotSCeUivrsaiVPIqVx6LmD1L7su0xrRbpVRRZOa+N
5x7O7XXuhZCKEJAKpOJ6Pa++lItFBusNfwI8O6a6xe1JTfHTubIcz0XaVbZZBJSWkBUZDc+h
kid52bZAav+AVJuIv5RLPca0XlbeqvJe7ZiqlyKslptbyvH0VUAqQghIxf3CSmko1i2eF1tO
MQZSe/0UfFv2dhjlTyQVIQSkYowxkIqBVISAVIwxBlIxkIoQAlIxxhhIrciaSL+zea7meN2z
i1rap8WyUfy2rT3BCkhFCAGpGGNc18E/3+71kDpnQW2PTA2t/b9x+8H2hweY9chWDY4K4VU5
qNpO86jGDwQAUhFCQCrGGNcRUpvf/6TXnkO1T6MqBqkhoG7a0TZ6/4SGtidoZUVYBahAKkII
SMUY4y6E1FUbPuzR0fGCvTPOPOKeVKWnOIUPAlBXuybO11+t11OsrJtdyzU3qU3rpNeCzYmT
DyZ333vAba8IqdbFKQHaziAzhlTNg6rjqKtfsAqkIoSAVIwx7keQqojl8BGHkwsvOeS61l9Z
uc9Nii9A1Dpto0imYHPqjAMOHgWfgkoBqWDWXmudXtv8pXpUqfbRhPuXjT3k4Dc89ojzD7tt
syBVU0rp0ao2HyqQihACUjHGuB9BatOT+12Xevjo0rUf7HWQqXUGqTEMKupqgBl392u5Xusp
ULbs8WdakhNPStrnM7VHmGry/SxIjSftB1IRQkAqxhj3I0i95Y6DDkgN+MynnnbErTNINSA1
h8uyIHXIWUeOidiG4HvnlINu4FOxnFQgFSEEpGKMcT+GVEGegFIgGFuj9muFVK2Pj6XcVHX7
638d04AVSEUIAakYYwykHgOAippa/qf5iXkt7V3x9YJU5bwqtUDwG3b9A6kIISAVY4xz5gce
ebxHIVUgKnBU97uBqvJHBY2CykogVQOjtP2qDfvap47KglRZA7K07vqbWktOQQWkIoSAVIwx
7tE5Rp/v8SmoBJmKpsrqhhe0PjRrfyaQZi0TnGpfgab2KwWpmrpK28XTUQGpCCEgFWOMcwip
81/9cY/WQxFQRU5lje4P12mUfrll6rrXiH0tk8OR/fHcqFkAWwpSVV44byuQihACUjHGuJ9A
aldbqQSa5kpzsj7wyP6qILXUoC8gFSEEpGKMMZDaqVkEBKJ6OlU4YCqEVHO5svTQANsWSEUI
AakYYwyk1mx1/ysP1Z5iFVt5reZKUhNs2/AhBEAqQghIxRhjIBUDqQgBqRhjDKRiIBUhBKRi
jHE3eenq9UnTk/OSt7d8RHsAqQghILUrp3HZ5eY7xG1e3rzFRYhwm+ctXu7e67jNM2c/4564
hNt8970POJjBbZ5wwy3J6Muv7BMec+V/AVIR6u2Qqv8xxhj3TZ98yinJ2cO+1O98aQqqQCpC
vRxS8/hr/ra7phDlCazHOBIFPOqnn3+ZKHHgV1a+Sy9C4Ob3P6G7G9PdjxDd/RhjjDGQihAC
UjHGGGMgFSEgFWOMMQZSEUJAKsYYYwykIgSk8kWGMcYYSEUIAakYY4wxkIoQAlIxxhgDqQgh
IBVjjDEGUhFCQCrGGGMMpCIEpGKMMcZAKkIISMUYY4yBVISAVIwxxhhIRQgBqRhjjDGQihAC
UjHGGAOpCCEgFWOMMQZSEUJAKsYYYyAVSEUISMUYY4yBVIQQkIoxxhgDqQgBqRhjjDGQWpHG
pF6XsXx06jWpd6benPrWaP3Q1PNT7/DbrEh9QZXHHp+62e+/PfXs1AMy1sceGmwzwddfZWxM
fT3IhIBUjDHGuHdDqkD0Mw94oQSbezx46v47MfXu1FP8+obU2zy8ChLH+m13RQBZ7r7emvop
v/9EX5eFwTZPeQhuijwoANRWv2yM377VnxdCQCrGGGPcyyBV0cpZHui2ZkDqQr8sjGpO9qDa
4OEwST0yWD/Qg+30CuuwzEdqC9ExWgMIXZ16TokyVPe50TKVORVsQkAqxhhj3PsgtdEDnrrT
J2VAanMU0Sx4IE18lFLR0huj9Q0eYh+ssA6jfPQz1ER/jMH+9S6/bKiP7g6I0g0SXw5CQCrG
GGPcByC1Ifg/C1JX++77OH80KZHzOcWvv6ATddroATmE0M3+b+Ih+OaQC/zfbf7/XT4aixCQ
ijHGGPfygVNZkDrVd7tbtHSQB8jEb1/IANhWn0JQK6Au8ekCFwT3/ZbU03wEdaiP7rb66Omt
vj6f+mjrBf74pUAaISAVY4wx7sWQKmhcHEQnW3yuaeKBNO6i7wygKpd1eaFt0FS5rvsBPpr6
lI+oJoWjg7nCVIUVYBMCUjHGGOO+B6mmCzyUDvOOc0Cn+WXTOgGo63w0dGSF++zwEdUxQXd/
qLk+RQAhIBX3DW/a0bPHuOf+A8nS1fuKrn/s+y3JnAUt7n9tp+2LbTvj0f3J/Fc/b3+t/79x
+0HntR/srft5qV5Wt97k9dtrb4tVG9qugaxrE69Xm8fXwLYvdZ0xkJoTSBWYNmXknO4KXk/3
EdRbazzuIJ9Luq2QPW3VdF+vhiiSajMIDPQR3jiSqqjsMrAJAam4T/j6m1ozQaOevmzsoQ7Q
Ejt9e5esw+jLD6U3sIPtwKrti2179rDD7RDb/P7e5ISGJLnwkrb93/u4K26sB9vr1lusNrxi
3KGa99e11DW45rrW5L6HO/5gWN68Lxl0+pEOPyQE8Wqjk0850uXvNQyk1gFSbf7Rsf71KN8d
byP3x/oo5nx/fw7dGNy3wzlNG/1rW7/QQ+bNGWUM8MdUHWZ7UB3kUxAEyjb6/zlfd8tjvZmc
VASk4j5lQV1Xg4OAprsg9a1Ne9sjpi++3gZTb67tuuidQFjuXTBw0LVpZyFVEdVw+RPzWpJT
TzvifhhkRbu7472GgdQ6dffP9pC428NkU9SlnhSxbdfkX4fQGnbPt5QoozHId7Xjqy7bCx2n
rRroQTkJtpsOMiEgFfcaK3ooeLjtrrYu76Yn97d3vatrXFBx9YSj0VTBhUBv4uSDyTfvO9De
tRsDSdx1KxDUMgGQomsGilomoNHyYt3iWq96PTRr/zH7l4NUdVtPnXHAnWPY3S9rH22rc6m0
S17nofK07933djxHlaFyVc9b7jiYvLJyX4fufm2r46v9tK/K0LZxqsPjz7S4/e+cctAdT/tU
0w2u8rSPytd1VT1sndpNba52UTtqG702kFZdR5x/uEPEWW0anpeuhdbFkWe18dPPt2RC6gOP
7E9OPClJZs7e764XkIp7CaSWyxkd7v/2lBp8vuqwMqkDPV1PBKQCqbh6C2LOOPOIgyZZXbHq
ptU6QZK6YC++9FB7t63g47yLDjuQaTz3sOu+zYqEhtHPl9743JWjbn3BibrXBSQCIwNFdS8L
fopBquoluBEYf7Gx7dgGqsUgVSCmuut4b2/Z26G7XzCmY1q3dLFjx7mWOg9Bu8q4anyriwoa
QKoOahe1p/4K2sLuftVN0K86aJmAT+CmlIowiqljCJz1o0HbV9MNrjZR2+gYup46hupo56dz
sGuo+usaq22HjzjsoFPbqX11DlZvta/KHHJW23kJVlXGs4taOoCx6inAzoJUtZFdAyAV9xFI
RQgBqbgrLRgRdNjrF5Z8nky4obU9ShaDg0FdCG7lIFWQGO4jeBTwKKpWaXe/gNLqpP1VbwOd
LEg1QJXDQUBhhLBYt3QxC+AEdOEygaSihyFgGozFOalWtxDuBP8CVYN5rVcaQhhVLZfuEFpg
qrYJI82KYgp21Q52vUJItHYIYTvs7tf/Ou+wHbUshGsBuc5dsFquXYFUDKQiBKRiXNaCP4vc
CVDjLtwsSA2jjuUgVbCk/w3kas1JFazFA7oUmS0GqYJTwV+cD9oZSDWIFHQLsNVtH+dyWp1K
QWrYva+IrkV+BayC9zgdoxpIVaRTEVJLaZAFkNbGWdcrXpYFqeGPDDsXta+Bq44pYK+kXYFU
DKQiBKRiXNHAHutCF1go4qbXpSBVMFsppBZbXy2kxusVMVQXdClIVfRP3dn1glRZUVB19wvQ
DFiXrdlXdCR/FqRmDTLS/+qaF2RmRbsrhTedn6UUxNYPBbse4TlXAqnxeQlOrXvfZkmwCDCQ
ioFUhBCQiusyH6ZFT5VfqohqCKJZkBoCowAlBldbViqSOm/x5+1QUwmkxvsrkmrTJBXr7tf/
gidFQOsFqdaNrvIVBRXcC4jrAanqlo/TCRR1rSaSqtxS5RnH0ViLeNYLUm07RVAtj7Xc6H4g
FQOpCAGpuB9ZXc5hl7byIcNuaEFVMVgQuCgiGHbfG2AaFAocNFK8VFRTICjAstfqCg/BSoN0
snJSDVQqSQcI99c5Klpo9So1ul8gq4irdbF3BlIVYdb+YUqEgFBgWA9I1Uj+cJCTpQBUA6mK
HCvyGubF6nqoXJ1nPSFVPzL0/hGka8YDIBUDqQghIBV3uLGHACEA0LJwKqhS84aq21ygYVMq
KSIm6DKos25zyzfMglTta2WoK1z7ax8DDk2BZLMECKK0PhydL2CVdfxikKp6CTi1v7ZVWVbH
UpBqI/Kt278zkKrtbOS+6qoBZoI/O8/OQqpNkaUydX46jtqqGkhVm2of1VNRcR07nJe0EkgV
GGsftatgtxikypYmEv4wAlIxkIoQAlKxi0CGo8HVjR5GJTViu9yNX3mWihIqKhjOk2oAI2ix
GQBUVhils+5k7aecSkGWQEl1CCfJF8Qo2mpzg4YjxZXTqUhcsXpqufZXRNDm6QyjmWHqgI4Z
l6N1yp3UPqqXjWLXeWjbah4Bqn10HmqreC5YHSe8FvGyrLpZHcJlag8tU10tXULnWM08qWoj
1VHXNUzFsDSI8JxtmV1X7a9rpPPTsrB9Y+t6xBP/l4NUlZc17yuQioFUhIBUjHEOrdxZpUaE
00fpB0S1ebPdmc+siG2cqlFLri+QioFUhIBUjHGGNQhKkcNS7mpQVARTaRDqrrfItlIVbCBU
ufqFucNdbQ1cs4cqxNOWGaQqj9geAlGu7W2OWSAVA6kIAakY45xBquWUCjbVja6c0nDi/zxB
quBT9YvnirUUEatTJdApqLXtq3n8KwZSEUJAKsYYYwykIoSAVIwxxhhIRQhIxRhjjIFUhBCQ
ijHGGAOpCAGpGGOMMZCKEAJSMcYYYyAVIQSkYowxBlIR6qO6IPWt/m8DkIoxxhgDqQjlQdMF
et6tqbelXpZ6ZuobU48EUjHGGGMgFfUNDfTRyTmpF6Z+KvXU1I0l9hnvgXFi6kEZ67Vsst9m
VJnjT6hgG9Pg1Nf7clXXzan3BOAqt6Te6tdP99s3AqkYY4wxkIp6j6al3u2hTnDalHp26mYf
qZyb0a2+2IPhMr/frggyL/Blrku93Jcztcjxx/j1TZ08j5E+kvpg6iU+wtoawavqtDH1c/68
x3voBVIxxhhjIBXlSIqc7vSwlqVRHkJXRBHUJILSNd6mdR4UTZM8MDZmRHB3euBt6oLza/DA
fHPqWR6Yt0fgmnjIbvZAPsWD8yAgFWOMMQZSUfdL3ft6Ew31r4d7kJvtYXKU7yYf7EFyYgCu
06OyZvttpGEe/MZF2+zy0ctQ8z3MNncRpBbTAH8ek3z0eIWvfwyvn/l1T/ltR3uwBlIxxhhj
IBV1kXb47vGC/7vHA+NzHs42+u78go8uritSznBf1hz/+noPeHEkstlDqelmD8mDegBSi2mQ
j6JO8eez2sN1DK87fFR2tof9UYXKZhoAUjHGGAOpCJXQWA9aBR9JVa7mzGD9VA9j0wIQ3ZNR
jnWdbw0ijJP8slirA+gd5uFvfACwTTlur8EeMKf6lICNvs1CcA1nGmgqZM80AKRijDEGUhEq
oZketgoeqD6LIoEXePAaG4BsFniO9qC51QPrgBKQusJHHy2HdW4UZW3qhe2odplQqGymAQ02
+xMte+CRx/kiwxhjDKQilKH5ARQ2FzoOcip48GoNoqMTgshrlkZ6ILvVd+MnQa5rCKILPcS2
+KjkJO/tHmAn9ZH2HV7oONPA1kI008DJp5ySXHjJpcktd0xOZjz63WTe4uVJ8/uf8AWHMcYY
SEVAqv/fpmQKNdeDVVak8/rCsdNJDfDwNSkA1nje0099xHGaH6QUusV3n+/sw22uSPXdapur
J3zNufHcr8S5rsmg089IRl9+ZXLbXVOSh2Z9L3npjXeStR/8hi8+jDHGQCrqF3owiJ7O8VHS
hiBvstV3Tzf49duDqOqDHiqHBeVZF/8FQa7qnCgHNlxfyIiyNvWDdj8mJ3XTjl3J0tXrk8ef
WZDcOWVactX4a5Ozh33pGHg948whyRXjvuq20f6vrHw3Wb/9t3whYowxBlJRn9Ionzs5yMPm
Z0GX+2b//x4f/dwWAelAH2VV1HOWj8oKamdnpAss9Mt3R9AKpJb5kAtAFUWdOfuZZOLkqQ5Q
BaoxvApoFZW9+94HkifmLXLAK/DlixJjjDGQinqr1hSOTgk11HfDTywcnQB/us+rbPBd+AsD
WFX3/hS/bG7h2DlRpdF+3XM+V7WUJhUpo09D6vLmLe7DXI2vveGWFFivSS68+NLkPw4fkZw+
+Mzkd37nxA7getxxxyWnfmGQA9gR51+UXHr5lcl/ufa/dShHEdl77v92j/u+hx9z7ZEHv7Bk
RTL/1R/3uPVDY9WGD3vcb236NTdcjDGQinpEQ300dEnh2EFOoQSYu6JIKaoDpCpSKpBUHmrh
2LlYMcbeQ846x31Wetr60aec8Z72ZWOvrvoHbldZvTh5+MGrWVPy8GNXqVt5+LErKxDSnT9s
397yEZCK6qrBHlLVNb/Od8k3eT/nc1U/qyASiurc3V8v64tjzoKlLmI54YZb3E32xJMGdAAA
zTRw3kWXuJkGfv8PH0pmzflB8vKKd3s0iqec2zx8yWvWhbxEeqfO+E4uYEDpJ3mAI72f8wCM
8hcbv5wLiD71tC/wowb3mPVZAFJRV0jd+JN99/zCIJd0QqGyJymhnEJqlt/7eHeybM2m5Onn
X3bQcc11X69opoEXX/8pMw1gjCv6jslDyoqsHqs8/OBVsCAPP3abnpzXZT9g7d4GpCIEpNbd
NtOABmBVMtOAImrMNIAxxpicVISA1B6xAFQgykwD9bXaSz8GaAuMMZCKEAJS62h1/asbS6kA
SglQakBWTtzwEee5lAJ1BynFQKkGfJHvcW2jL3PaAmMMpCKEgNRusEZ4asCRHvf6jdsnuUFZ
GpwVgqsGb2kQlwa/aFDXs4tec/ljQCrGGAOpCCEgtVstCBWMVjLTgOBWkCvYtWlN+pp1rkAq
xhhIRQgBqTl1PNOA0gPilAGlEYQzDSjNoLfPNKCcVCAVYwykIoSA1F7kcKYBDcSqZKYBTa3S
m2Ya0PloUnauN8YYSEUIAak95KWr9yVrP9hbt5kGBKTlZhoQ2OZ5pgHV0Sa/xhhjIBUhBKR2
s19ZuS85oSFJVm3Y12XH+KNnf5vC6boOMw1kPW5WDy2IZxrQxONAKsYYA6kIAan9DFLnv/p5
CohdC6mPfb/FHaPUTAN67GslMw3oyTC1zjSgffVkLiAVYwykIoSA1Bz4hSWfJxdfeig548wj
SeO5h5M7pxxM3vu4rZt/xPmHHUBeeMmhFBg/d55wQ2vy0Kz9yZCzjiTfuP2gWzb68kMdyoyX
rd++N5k4+WDyxcbD7jjXXNfqwFfb6Zg6hrbX60pnGnjgkcfLzjSgL9xKZxrQttpXZZYDXSAV
YwykIoSA1C70W5v2poCXJN+870CybM2+5Il5Lcmg048k99x/wOWhznh0vwNILX97y14X9dT2
Aktt0/Tk/sxIaLzssrGH3D5zFrQkL73RBrDDRxx2Zd597wG3raK2el3ruSxv3nLMTAMnNDQc
M9PAxZde3mGmgeb3P3H7C2ptu9858cS0Tb5VdCAXkIoxBlIRQkBqF/rF19u68wWO4TI5q7vf
4DPcvhykalv9r8js0WjoPhdNFSQX6+7vipkG9OjXYjMNhEB73HHHub+nDz4z85oDqRhjIBUh
BKR26XRRe1xEU4Oj1OV/38MHkjfX7iuak2pAqXSASiFV0VZFX6vNSe1K20wDup6aaeDi/3zF
MeAqH3/88e7veRde3CFfVYO4gFSMMZCKEAJSuxhU1Z1//U2tycmnHHHAqO7/UpBaDjLDZQJf
pRDkCVKzBk0VA9TQisZqRgEBKpCKMQZSEUJAahdZ8ClADYFVA5wUWe0MpFouq/5/+vm29eFc
qzrOVeNb3RRXeYBUzQwQwuiA3z3Z5ahqVgHlriqKGj4lS4CqaCo3NYwxkIoQAlIzu63bcjrD
Lnq9DvM/BYmWY5o1sl+AqL+2TJCqEfhhzqrtnwWUGpGvZc8uamkfjKVR/Lad6qhI6i13HGxP
E5g644CL2mqg1OPPtJWp/cI0gu70zNnPuIFSGnilAVjlthekKi+VmxrGGEhFCAGpRSKhAjzB
oy3Ta428PzrI53D6xXOwaBmCR+2j0feC01NPO9IOnAJMLdN6jcLPglSB5RXjDrnlOpb2v+2u
gx22E+SqHMGqbSN4tnOwNANNbdUbvsiBVIwxkIoQAlJLWIAoyBNMhuAadq0rQtn8/t6yU1Gp
a18wqa74OFqr5SpD/xeb2F9TWKkMbacy4u20TOVom7C+suqrdfV4/CqQijHGQCpCQCrulwZS
McZAKkIISMVAKsYYA6kIISAVA6kYYyAVIQSk4l7ny8ZeDaRijIFUhBCQivP3RQ6kYoyBVIRQ
r4FUjVAvNd2TRr5r/fLmfX4S+QPusaFZ22obbWsj8zUiXnOYjr78UIcppur7BXW0br3FmuFA
86129xf5yaecwk0NYwyk1q7hqZtSL/R/h0frx/vloaf5daMy1plvrKIOg1NPTz0/9YP+daiG
1Lemfs77Vr8sS6N9GQjlE1LLPT3J5jXVNExtuY2HikJt/LSnBx7Zn5x4Utt8pXMWtHTJF1RY
t97i4SMOd5gntru+yPV+4KaGMQZSa76ntqRu9mC5LvWe1GOCbZan3p16Z+B1ft3N0XJZ9dZ3
89wK6zAs9Wept/o6bPSvG4NtVvg6zPHl6v9lRcra6c8Hob4PqVlRTm3PF+mxYA2kYoxxr4LU
zRmwtyaAUGlHEDmtRHP8PoMq3H6WB8sBQdR0uy/HIrmJj9qaxvpl44NlAuZdqVuBVNQrIFXP
pL/mulb3JCY9x94e91kOUhUhtS73sLv/zikH3aNG9dSmSrvkdUylEqgeOo4isGG3uMrRBPxX
T2htP06ciqD6qAydx1XjW90jTMNj6HGserqUylcdX3rj84qhO0yRmHBDWx1VVvgQADu+Htuq
87hs7CFXL2tPHUvtefGlbcvtHFSmzusbtx9MvnnfgWOeZKXUic6kNgCp1fuXH/dc+b9I3/dr
0s/me9uLp4Vo/c/944XXpu+fv1qc3aOgMuKymhe1JCvvP5C89WjXPDEtrFt/8frV+9x5x+f+
s5Vty9+e1+Kuqy1/5/mW9u1LXWecG0id6++rWdBY8KCZeCisRBP89mOqqMPojDoIMhcH9/25
Gd3/Os6kKCI83actAKko/5AqmBQ0yXoE6PU3tZaFVD0m9ISGxEFt3N2vZSPOP+weKap81GJP
fQotOFM9tK9A77yLDjvQNcCzx5gK5lQPLY/rpv0FgCpD56D1lmogQNW5CRxnzm6DYXu8aaVf
iGovnbMezar/rQyDR52r6qxHtqotBcLaXn9tvY6nc9D+1r52XoJrpUmonuGTtNQeekTrex8T
Se1qCyT+LH0P/XxD10HWpk17k5fT93ex9e+m7+kX0vdFqTpo/RofkX89/UwuKdJroTLCst5d
0la2tv/RlINdcn5h3XqL39SP4mdqr7Ogf0H6uVW76geDfoQsS3/M/uCktK3T75zF6feh/n/H
P15Z67Ss3HXGuR041eC73ZeH99zUT/nlO3yEc2CRfXd4SKxVguIpHjjHl9juxii6OjiI3AKp
qHdA6rwgCqMooJYJvIpBqgBV+ab2PPusnNRqu/u1rQZahakGgjsbiKWyFWkslpOq/QWHIcgJ
lG0f1efCSw51WC/IrBRStZ9AURHeuN6KrBqECkr12FZbr2ir8lCzuvutfcMyFT1WGWEer0Vt
6e7velcCiPU6RndAqmBJ5VjkVjD1w3MPd2kb6ni9LTq4KP2h2BmwVruG10BlCUoVYbVl+lGw
IP1Ra23THe81ILXLNMfnpF7gX0/zMDjfg+F0nw+aBYGTfFf78BqPfYE/VuIheWCR7Ub6nNUl
RdYDqSj/kCrYzMqbVLQxC1IVKRRECfBKDZyqFlIFpNpf0UQdOwQ9q1PcfR/XzSLAZsGjRX5V
73iWAZVXKaQqJULbKuKrdjOrHQSvYSQ13E/LFCktBanzoq5apSoosqr/1Q6WkgGk1qmrfUdb
d6vAYtXDB9pBQvCgiJrAYfUj+91yRVa1rbpwtb26bfV/DDTWRR8u26gfemn52s+647WdHaNY
t7jBi47/1qz9bv93owGCpSDV6qe/YXe/6vDn6ftKkFpNl7zOQ+2hemi/X+7o2LWvyLBSB2Q7
XpiKoPOwtlg140AHcLProSimyv/pk2kZH+ytuhtc+1hb/WR2WxlhV7zOfXP6g9fOQykPYdf7
n6af4b9If9DadYrP6530R2NWSoXqbe+N8BroPF+Pvo/U9a/rpjYBUnstpDb4LvU9UQRzWEZX
vHXpx8u3lgDHSjTAg+loH5Fdk7HNKA+opSAWSEX5h9QQoMzW/Z8FqQJURfXikfWdhVRFKlUf
wZnAWWUJ1tZv31t0JH+5QV16bcvUha6u9HC94LBSSLXzUzRWx4ptQBqfcyWQGp+XRarXpjfa
qenNLozEAqmdt7rzF6U/JpbfcTB5NX2PCRQEn4KYpemPJL1elv74EJgIJF5MPw+KtMnquhXA
xJHQODoq2PpB+llR+YJIdQULggRrdgwtN2DJKkvH1baCHZUVds8Xg1RBsCBU9bcoqoGQIFFd
zCq32LFjC8wUERTcqr0WnnUkWZy+Hy0yq7JfTj8Tgjydo8Auq246puqkttd6g0QBqpZpf52n
0iBU/2rgTW2q8nVuqqP+qp62vwBS11vLdD3smq/0P1otwql6qI3svFSO6iWvTH+capsQ0H/2
Rtt10jnHkJple09sfp9Iai+FVAHqMg+oleSeDozyQQs+eqpl19epTtadPzJYNtbXcVmh+PRT
QCrqPZHUsAtcuZACJ4FSFqRatFIRRMGXQWRnIVX7hXOsWv6n5bx2FlLV9R93mWuAUqWQunR1
NlAq0mlR33pBqq6BorOK9KreMVxX63vu/zaQGkWzNgaD0AQgchY42OvV/hoIzspBqqJ4Appw
YJLlgirqVml3/8og8i+oc9HVNfuKQmoMqFk5qZXAVGgB3bIbWjt05QtKLVJqkKrjheAa1k2Q
uynoGdH2Klf/q10FgTaoSGUIiKuBN52PbMfXXwHnn/veCJ2zygsjoYLZRUGvR9zdb3m77rzS
z6PAUoCpHzO2zRvp94l+tFTSrnq/6Tx13O5MLQFS6yYB5zofnRyZsX5a4diR/Y0eIG+OtttT
Bh6LaXbqqdEyG70/2r++2eepzqmgPCAV9Y6c1HBKJEGVBgMpildq4JSAUtFJA7/OQqq6yUOI
FKiqfBvp3llIFeyqvGX+Ji+w1DGrGTil7RXdNahXHRXltK75SiHVHohQDFJl5ecqaitQ7+zk
/wapaz/4Tb+/qRlACpQU2fpF1LbFIDUEiXKQqu5j/R92OctxBK0cpG6KUl4EOQbLMQgquilA
VfQvnDmgs5AqeBdkKtqoyGs8K0EI8MWivALGcH0Y+bVIcYeHiHggrwTe1MbaVl392t6s7vYf
+FQmnbMAM56BIFyWBalvRbNsCHoNfNUOuh62T6l2FaAqiqv1YQoDkNqrIHWxB9ShRdY/5eFz
WLBMk+nvirrblxXpnjeoHVc4OsXUIP96UFDep8Fri+x+6v8f6QG1qcJzAlJR/iFVETsNOLJR
84IiGxBVbgoqg1yt7yykaqCQji0w074COwHg2g/q090vsFSOqo6h8xWIa59qIFVTVqm9BKuK
KKuOQ9Kbz5tr91UMqZq1QMdWJLoUpFrk9qrxrZ3+IjdIXbXhQ25sHg4UARPECBIEUcXyBLOA
shykZq2vZeBUvFwgahHfGAQtoinwUjd0vSDV8nQFcSpHYPaj9AdUVtS0GKTGEBpCqsq1c4qj
3ZXAm51fmJIRWj8MXHd/lNYUX6MsSI3PS+kf1l2v/8NBUMXaVe8rpUGE0W0gtddBqkVEsxxO
QbXVQ+li/39WWsDWQvHJ+5t8mY3hfTzIaRUgb/OwLMDUHKm7g2PML1HPSUAq6nWQKrgSkCoq
qsFK6lZ+MxhMYd3uFslTDueLUR7bE/Na3IwA2kbbWve/tpu3uLqnQenYijIK7ASt4TRMYT2y
lmXVTa/jZYqkqr6KpCpKK8ispo6CZh1XuaI697VBtExgGZ+zloWzIKit1Y2v84vbN+7yF8yG
+wKpdY6qpu2vgS/KgxRICCJqhVSVY8vUJewiqdHAH0UIFR2tFFLD/Ef3fk5BLMyZDEHQprRS
N7oiqrZvZyE1ngtUwCpQy4roFoXU6MdjCKn6gRBDrLrlK4U3RcK17TslnmpXL0jV+0PXQBF4
1Tk8r6x21XkIZNXFnzUvLpDaayB1qIe8LN8c5aze7GFzSpGoq9aPKnKcUb7MgdFxw3IUZb01
OEb4IIAJJeqZNZPAWL8PQvmE1P5kzW064YaON0Ob9D+P9RWsC6Dfq8Ok8kBqR1AUOFjXeweo
TH9wKOezHKRafmnYHa9BUbad9hXIvRP8wFCXr9Zr32IQGsNLmP+oiJyW/WzlvpIDp1QnAbcN
suospAoifzS5I2QKgm1ZZyFVebuqb5iTqqhjNfCm3NJlNxwbrVVkuRpItR8ApeZ6VbsqCq/3
UDjjQtyuuk56D7wZTVkHpPaZKagQQn0FUq3bvZjv7KJJxeOoqiKTmuxfx9SAJHXdK9KpdeXq
OGdB90xOrpQCpTkoLUHR7XqUCaR2nO7I5W6mbaxooHIXbbS75Y0KLgQ4gswsSFWE1I0mV/d7
CieCKkUyw+1cxFG5nCnMCX5sZgCLSLqBOelrQWsxeBF8uUFd97eNLA9hr9QUVMqltG7/zkKq
5W4qGqhyda6qS9YArlogNR7dr5xaG90f5+SWm4FA+a26pjYbgkVXK4FUXW9tY1BZDFLtR0xc
XtyuNoODDeoKbVNwAalAKkJAag6+SMJ5RbNcjy7tSqwufoGf0gl0XJtNQGkG5eq4dHX33EhU
J3vEa73KBFL3HDOASVBhXbZu7s8gYi041XI3P+qGfR1G2Ye5kAJQlSEwsjLjHEZFWBXlE+CF
x9BcnjpGFqTaMTWCXiPIdYz4aUhab7Cj48RApTppH0WHta0N4hIYVTtpvaLPglTVQ5HEDjMj
3H/svKdx3eKu+Li+ahebJ1XRY4tsV5WKsObo9VCbhXm5Ouf4EbCqX3i91NY6R4PUrPMyaxBU
fK1jSNXxtCzLpQblYSAVISAV9ysDqTivFszFXeKKHi8adjiX9RUMK0obR3lryfUFUoFUhIBU
vsiAVCAVF+nKL+V3uqGHQ6Am6FNUWQOSFM3Ua5serFwdK3kgQT0sKFUdFUWNB3oZpFbzkARF
pC0lAEgFUhECUvuQbSYB+cXXs6dvCrvntY1t/+ba/ndDaHpyHpCKM8GqlOMu8q6yoM6e/qW/
BnmW9lDK73RTrrjSOZSPq5SCrMe1CratTusrSA0K0wHi+XQxkIoQkNqLbXOyak7S+x7u2FW4
vHmfe6zrPUHOmLaxOVEf+35Lv/si1/tA74dlazZxY8MYA6kIISC1qyE1Xq45SgWoWndPxmCX
/g6p81/9MTc2jDGQihACUrsTUvXYU03bpKipnvAEpAKpGGMgFSEEpPY4pCrv1PJNgVQgFWMM
pCKEgNTcdPebgVQgFWMMpCKEgFQgFUjFGGMgFSEEpAKpQCrGGAOpCAGpQCqQijHGQCpCCEgF
UuvreYuXA6kYYyAVIQSkAql5a68fO0gVrHJjwxgDqQghILWHIFVzpmobILUjpPbV9wPGGEhF
CAGpvQJSixlIBVIxxkAqQghI7XJIveH3DiZNT+4vu7220bZAKpCKMQZSEUJAapd51YZ9LudU
nrOgPHRqG9t+6ep9QCrGGAOpCCEgFQOpGGMMpCKE2rRVUPIfh38lGX35lZm+bOzV7gNerSdO
nprcc/+3q3bTk/McJFXjp59/2QFWtV7evCVZteHDqvzWpl8DqRhjDKQihLpYnwlKBp1+RnL2
sC+V9QkNDQ5icPWutI1jF/vxUMpXjb+2ph8WX7vpNlfXqyf8t4p/VNz38GNV/6iQn130Wk0/
LKr9USGv/eA33KgxBlKBVITo7u/oTTt21QQWL73xTtUA88KSFTUB04xHv1tT1LcWELzmuq/X
BJ6N536lasAdctY5/EDohNV+1ba5rlMt13fCDbdU/V76xu2Tanrf6v1ey+eklh8Vr6x8t6bP
v743gCsMpCIEpNK9izu4+f1PagKLWiBGEdVagGnqjO9UDWd33/tATT8sFJmuFjovvvTymiLn
p572BX4g1Gj19NTS5udddElNPyxqeS/ddteUbkuDemLeopo+k8vWbOpzaVBAKkJAKsa4hyxI
qBYsBCO1QIzgp1pgEmTVAmfKba8FBmuBTsFqLZB74kkD+JFQo/WjrJY214/A+Pq9veUjIBUh
IBVjjHFPpEEpfaKWHxbdmQalNJRqf1Qo3aWzaVClorlAKkJAKsYYY0x3P0IISMUYY4yBVISA
VIwxxhhIRQgBqRhjjDGQihCQijHGGAOpCCEgFWOMMQZSEUJAKsYYYyAVIQSkYowxxkAqQghI
xRhjjIFUhIBUjDHGGEhFCAGpGGOMMZCKEJCKMcYYA6kIISAVY4wxBlIRQkAqxhhjIBUhBKRi
jDHGQCpCCEjFGGMMpAKpCAGpGOfUP9+wL/nZG58XXf/eB3uTd1/9PPnljrbX2lb7ZG37iy1t
24bLmhe1JKsf2Z+sff3zutdddQrr1l+8fvU+d96yrk/78jX7kncWtLhr9MuPj25v25a6zhhI
RQgBqRgfBcS1+5JlN7R26TF+tnJfsvyOg0XXr7z/QLJo2OGi6wU3LxSSdjDVttona9s1329x
29rrVQ8fcK9fvuRQ8taj+7sEsMO69QYLqP/i9oPHwHw1XnL5oWTBKUfctRB4vrd9b/uyxecf
ThacdiT54bmHk43NR6/Zn55+pOR1xkAqQghIxbgD1HU1OAgoBTDdAalxJHXpuENdCuG9MZJq
YN1ZSH39947+8HjjroPJwrOOJJs27W2Pfr980eFkydhD3fpew0AqQghIxb3IAqh3nm/r8tZf
654V0L1574HkxTOPtHfbCmAU/VK37Vuz9rtIq3u9el/ZLnpt95PZ+5OfPrm/HSj1VzCj6Fox
mDNI3fz+XrevyjDYqQRSVUdX/+17O0Cq/uq4AtVqQFLn/v+3d64hdlVnABUdNBgfoVq0Nsqg
wRRNRSVBU6IkpCWiUqsVVJTGEspIAxEVm/qgFkW01SKphFojKo022lSs9RHR0NRILMGKLVVi
UbSgoOAPf9lUU7k96+R80z17zn3NnXszk6wPFvee9z773JlZ99vfOUMbOH+ywLmQchyW/3n9
v8dJKn1CG6L/Nq/ZOWY4PPazee3OMrPL/lm/22FwtmEfXNOXNoy/DvQV69Cf5XWs+i6OTX9y
fNaNcor0vCiNyLPDLI9zzSUVGc2/OLCvNKutpIqSahhKqsgoiB/DroBUIAlIKQKCjDD//oMa
pXQgomXWc9HuYdv7hxqNp1Z8Xm6XZ0Lz7CfTrM+2wHtqE0NMGP5lH4hTnaSynHYhlGTgaBO1
pO0kFRmkrbQzH+4vh5+r/TY7dg5ixbF/t2xX4/Glu8rzQOTTDGTML/tx66fj2saxGNrmlXUg
js0rfU7Wkf3Qvt8u+G9X8oZAst36uV+UWWLe8xo1oFwv+pHjlvse/qJsL/3I8Sl9oM0IPOUQ
0b/ldSvOnfU5/zQLCggv++Q4uaTWQYkH56mkipJqGEqqSO1wPhITAkMWDElBOOrEAflDWEIQ
yZ61k1SygeXwcZLRe2rk/4LSyXA/25N1TAUnhKiZpJIFZJ0Q1Lqa1E5kKoV9k51M94fM0Y6Q
VPbHdGSe87YhoWm2GomOfVILilzSr0yTMeYcOpU3rh/rs580q8o1jppb2odsIvDlNkVbQ2jr
hvujf6OfaDtfMBDbNKON8EZft+tXRJrt0zpgJVWUVMNQUkVGIVuKgCB9iEN6x3UzSSULmK7T
TlKfWTW+pjQErlNJRbzyIWvazTB4naQiXGyT15v2KqkIPJlEJD7PvIbc0afNbpyibak0A5Ib
mV+EleH3dDmZ2k7lLa4nXwg4ZkBWNjKfnC9C2ewaNpPUMaUNxeeEz0FIZmwTZR+t+rXM9M6s
vzZKqiiphqGkioy5wx05QDIQD4Q1BKxOUhlu70ZS65Z3e+NUvpyMYWRX6ySV6Rg2T2s+e5VU
5JihcjKR7AfZo4ZzjJBu/bSlpOa1mbSBebSTdakLzofRO5W3OL860uuRC2InkprXoCLbCHb5
Gbr+s9H3rfqVc6PvclFXUkVJNQwlVaTlsy3JeiKqiFgzSc2FEanNM3MM58d2LE8FJiSTzByv
nUhqvn2IFFm5OknlmEgf2dRUlnqV1PQmIQSZfoqMbt3jprqRVN4jcFFq0enTDVLipqf8Zqxc
SPNznoik8nkJKSe7nA7d1/Urfc8Qf6vHgympoqQahpIqUkItZC6ICCZZyBCHdHi/TijrhuPJ
toZwIE7IV1q/WM4b2i1TdRKa7z+vf4wbmOKO8mYiSOYuraHtRVI5FrWb1GOm88qMbnE+kyGp
SC/D8mmNMH3TqbzRR/RVLroM98cNXpMlqXGdyxrX4pjchNesX+M65O1SUkVJNQwlVaRp9hTB
YPgXSWDon3pBXlO54DmXo3f3Z5JKNpN12AcSghCRWQvhQLhKaSXbdtt/yowb4ss+Wc50DAHX
iVDc3Y8gIqchrdHGdo+g4k70GPbvNZO68dzdta60mX0hlEzX3SQ1EUmNpxFQ+8qXBQS1mxun
ooaV/uSVNpZ3+Bf7jJrSdpKKeJdPYiim68opUrieLKOPWz0nlXPgmsexU5RUUVL3e7ngrmR6
QzXPMJRU2bdBMkNSEExkIb1BhhIA5AxZIYsYcjim1rCYHzfnIC6si2SlGUfqFjkG2ULWiWwh
yxBUjhF3nOf75pgIM9tC1IFGnSjtj0wex02znWQXWc68ONe0HrdVdq8um8p5lOdanAuiHfLG
8dN21M3L2xZtSOexP45ByQLnTP+3yjTXDvuv2TnaRu70T/uV883Pmen0urI914NMe96/eea2
rKPNzimV1OiDZiipoqSWgroimV6dSathKKkismcho5rfONXv/4rVC3xZIEudPxViIrW+Sqo4
3G8YSqqI1GRIkaRWpI+W6pukrvps9JmmZDPJ1MaD9im3aNfG9MkC/YQbxcgKM4xfl1lHUuPR
WnVlAnVyTsmAkipKqmEoqSKS/eetVkPSUCdjk/4vat/dXaNL9pQnJjxx0a7Rf4nKkHq7NlLO
MIj+QprLh/ePfD4uixolDNEmygXa7S/WTUtERJRUw1BSRURElFTDMJRUERERJdUwlFQREREl
1TAMJVVERERJNQwl1V9kIiKipBqGoaSKiIgoqYZhKKkiIqKkGoahpIqIiCiphmEoqSIiIkqq
YSipIiIiSqphGEqqiIiIkmoYSqqIiIiSahiGkioiIqKkGoahpIqIiJJqGIaSKiIioqQahqGk
ioiIKKmGsddI6rYdHzWefeXNlrCOv/hERERJNQxjYJLKe+ZNBw497PDGMbOP2+PMPfmUxvyF
Z+1xliw7v/xlvKe59MqRxlXX3LjHufbm28vP857m7vseaax7/Lk9zqPPvNT2C+gg2PzaO4qT
KKmGYXQvqY9t2jZhKZgKggRTQRjh2OHjp4REHzA0NG2+eIgMigMPmjElfj6HTzhxSvy+OmPR
kinx+/vCy5b39OWUL2RKqmFYkyrSFc9vf2tKZPIefOKFKZHZvHPtw1Mi07vqhlunROb74itW
TAlJOnvpOVNCGufMPWlKSPTBM2dOqy8ffJaUVMNQUkVERAbC1jc+6OhLKOspqYahpIqIiFiT
ahiGkioiIqKkGoaSKiIioqQahqGkioiIKKmGYSipIiKipBqGoaSKiIgoqYZhKKkiIqKkKqmG
oaSKiIgoqYZhKKkiIiJKqmEoqSIiIkqqYRhKqoiIiJJqGEqqiIiIkmoYhpIqIiKipBqGoaSK
iIiSahiGkioiIqKkGoahpIqIiCiphqGkivSJZ195s/HYpm1Nl29944PGusefa2x/++Ny+jd/
+FNj82vvNN0Xy9N5v97wdPm53/ji9klve962fQWuF+c90XOPbcGfAVFSDcNQUmUcf9z69/KX
ej+P8egzLzW+N3J10+VXXXNjY/7Cs1oKDZ9dBJTpY2Yf1/RzzL5YHtOrbri13JZ5P77tF5N+
bnnbpgNI5eUrVjae3/7WhPfB9Tr0sMPLfp3Ifthu1peOKPvOn0NRUg3DUFKlrdT1649GKwmd
TEklw3f3fY+MTp+2YGHjwsuW9+3cyOjSlm07PppWmetexZrrxXXrpR30m5IqSmpnUfywXFCw
umb+4oJ7C9YVLKtZfmqy/PKCoWz5UMEl1fJ7GrtdwTCUVBkMf333k1L0uPa8Mh2ywi/0I758
VDmfoesnt7xWQob1zrUPj87Lh+Pr5pFRQxB/+fDvy+1ivW8s/lZj7smnNB0aDklF9NgeUulr
J6kcl3XYJh3uZx7nxjl2M6zMPmgD559mCdOhfZZRPpAP90c/st3tax4o+yL6O70elCCwnPUQ
3VblDnVwrvc88FjZD1yrXNQ5B9oR/RnXI9pOf/Ia69Fn7JM2cc04j7ykguVxrkqqKKkDFdTh
gk8KtmTzRwp2VnJ5T/X+umQ58rmrYFPBbQU7Cl5MRbV4v7Hg44I7Ch4qaBSsUJkMJVX6zpa/
/asxZ+5JpdjFEC3TiMn1P/1ZOX3A0FC5HFGJrOfBM2eWAvH9H15bmwnN5628/iflfpDR4RNO
LPfL/hjmZ18HHjSj6dAwsnPs8PGNo77y1XJ7XpFLygTaSSpCxboMX+eZYV5pUwxLd9JflATQ
VjKwtIXtb7pjzZh2nL30nNESgrxtvCdzG/3MvngNiUX8mGb5109fUL4uWXZ+y0xyzoNPvFAO
lXN82kkbuU5plpM20i8ci/7nOPQV/U//0k5e+QxwDuzrjEVLyvlsc9IppzfOu+jSMcdlXbYx
kypK6kAFlUznywUfppJavD+kEtdUSpdXonpktR3yuSFZPqvaz0g1/d1q/a8l69xV8LbKZCip
0neuvfn2Uiwim4ckMU3GrG64n1/wfE7IAJI5Q3LbSSrSxDZr1z85unzZty8uJa/T4X62jzbR
ViQrtm8mqbmg1p0P77uRKYQupBR+sOpHpbSl7UDekM7I4OaSiqRHFpJsa2QtmWZbzot+jawn
x+xUUjkugooIxzVlH4gqmdUQSOQ4srN8IeGaX3rlSO1wf5zDxVesGM1G0we0K818I670h5Iq
SupAJXV1wV+qTGgqqcuqrOesZN6Mat4l1TA/7xdl+2NYf1OSRX0oW478DqtMhpIqfQfZQGC4
gaiuBrFOUslqtqspTechiQhMPvQfmdBOJBWxy+/I5/MaQ8+5pJI5jKH8VufTraSyTzKcHD8v
TYh2xHnVCTTvQ+TSNtD/7A95vOXna7uq2U1BRDlGSG5ASQVfDEIgmU6Xsyz6qpmkpk8/QGxp
a3xx4DqwTpQWKKmipA5EUOdX2dI5BbdkknodWdGabd6r1g1JPTNbvp51qvcM/6+salU3VSxX
lwwlVQZWj0oGLYbvyeIxNB81n3WS2m5oP5/HcDX0cuNULlUhUvGYolxSmSajyPmkNZ+9SipZ
YfbJ/ukz5I55ze7kr5PU/Gcs2pCeUzc3juWZ8d1/W8YT++A1l3em20lqfvMX505GOzLKyLs3
TomSOjBBPSQksprOJfWWkM1sux3VkP2MSnDXJ8uOrkoAQlJZ/o+K1VWWlRrWm1QmQ0mVgT52
iCF8honJkMUd75MhqQxh58tjOHyid/fHMDl1rXWSSvaWac4llaVeJTV9NBdCGHWpHKtXSSX7
yfL06QNA3W6nkhqZcdrHMVOixGCikpofi/INjkW7KRdIyyCUVFFS+y6p3JH/dCalnUjq25QG
JDWqSOerVQb1/YInE0ndVc07JNsvdaoz1CZDSZW+16RSa5jOS6WyE0lFaPPhfOQt1uMYZDXT
TByfMwSSeZ1IKlnLdHtuYOJmH2S31Y1THBuRivrLXiQVGSNbmN5pjww2y+h2K6m8p741fSQW
WWBKHTqVVEoNOAblCOl8Mtn0RTeSGkP3zSSVtiGnfCHgWsYTApRUUVL7LqgxVL+uksZSUJOh
fO72vxqZrNn2/TQTWpUM3FXVtM5jWcHrSWnAQzUlBhx7ntpkKKnSV8ie7lfdpY+MUA+JEFIj
md7NznBuPJIqFybqEtkHGT9qIhkGRiBjvbgxh+l4LBK1nXGjDjKDxPKaik4qqYgmQ/5k75Ct
tI2tJBWRQqBj2L/XTCr7QiTJdiKCcZd8PGqqV0lln5zrN8/9Tnl+SHE3N04B29KfXDv2xzVj
HyHX7SSVjCvt5DqyfTNJBT43tDfqXZtJKv0VT4iImmSm06wx0zwhQEkVJbUjSd2S8V48hqpa
vriSyaOT7WZV886rppcWzM72zSOoflW9p/h/Y7Z8abWP2WqToaRK3+HOcgSQG6J4ZFH6n5eQ
rxBTZAVxSu+WTx8/hAgicNS0IrvpekgJGUKOwXrsJ2pFWYZYcYy6f03KZ5J9sU08/igdWka+
2DaGsxGm9EkCsTwEORUq3nfzmecYZJ5pAxlOss6RcczbUTeP92nb6tqAyHEM5tOP3dw4FWLO
NYjHfZFFTW/mir5Mt2E6ncf2HJN5cQ7N/jnCftmTG+okleXMC1GmP/K+YDrtByVV9iVJrbKf
i7P60MWdDqvXDPcPVY+TuiNb58PYZzXMf2+yPMT21Gr68mpof16yT+74f1VlMpRUkX0MMtZx
I1Z6Z36//zVtL0Pydc+YdbhflNSuJRWBbCTTyythHJ6IpFbzzqskM+7Mp8b0gmz5rip7inwy
3HN1to8N1T5Y/np1Y9V8lclQUkUGRAzXt4LHLA3iDyjZZrK+HDOG03kfN2e1oq5coh9wLMou
KHVIh+hTSSWDO9F/Cct2ZN6VVDGT2nEmdTgyoDXzV1bMabJ8pNnyap0zq0da8UN5pLpkKKki
AySGtFuRD5H3A4SO4XiG6slQkkWNR1Lxs9mujd3++9ReaplpH2UJ+b91jZKCaFP+71M7IT0n
P59iTaphGEqqiIgoqYZhKKkiIiJKqmEYSqqIiCipSqphKKkiIiJKqmEYSqqIiIiSahhKqoiI
iJJqGIaSKiIioqQahpIqIiKipBqGoaSKiIgoqYZhKKkiIqKkGoahpIqIiCiphmEoqSIiIkqq
YSipIiIiSqphGEqqiIiIkmoY+4SkXnjZ8lJURURE9jZOW7BQSTWMaRinIqkiIiJ7M/vvv/8/
/ZNvGNMv5lUZVRERkb2Vo/1zbxiGYRiGYRiGYRiGYRiGYRiGYRiGYRiGYRiGYRiGYRh7d/wP
FzRWZSgj7jIAAAAASUVORK5CYII=

--BXVAT5kNtrzKuDFl
Content-Type: image/png
Content-Disposition: attachment; filename="blkif_segment_struct.png"
Content-Transfer-Encoding: base64

[base64-encoded PNG data for blkif_segment_struct.png elided]

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="blkif.h"

/******************************************************************************
 * blkif.h
 *
 * Unified block-device I/O interface for Xen guest OSes.
 *
 * Copyright (c) 2003-2004, Keir Fraser
 */

#ifndef __XEN_PUBLIC_IO_BLKIF_H__
#define __XEN_PUBLIC_IO_BLKIF_H__

#include <xen/interface/io/ring.h>
#include <xen/interface/grant_table.h>

/*
 * Front->back notifications: When enqueuing a new request, sending a
 * notification can be made conditional on req_event (i.e., the generic
 * hold-off mechanism provided by the ring macros). Backends must set
 * req_event appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()).
 *
 * Back->front notifications: When enqueuing a new response, sending a
 * notification can be made conditional on rsp_event (i.e., the generic
 * hold-off mechanism provided by the ring macros). Frontends must set
 * rsp_event appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()).
 */
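The hold-off check above can be modeled in a few lines. This is a hedged, self-contained sketch, not the real ring.h macros: `sring_model` and `push_requests_and_check_notify` are illustrative names, but the comparison mirrors the logic behind RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(): after advancing the producer index, notify only if the index crossed the event threshold the other side advertised.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t RING_IDX;

/* Minimal stand-in for the shared-ring indices involved in the
 * front->back hold-off check (illustrative, not the real sring). */
struct sring_model {
    RING_IDX req_prod;   /* producer index published by the frontend */
    RING_IDX req_event;  /* wake-up threshold set by the backend     */
};

/* Publish requests up to new_prod; return nonzero iff a notification
 * should be sent, i.e. req_event lies in the half-open interval
 * (old_prod, new_prod] under wrap-around arithmetic. */
static int push_requests_and_check_notify(struct sring_model *s,
                                          RING_IDX new_prod)
{
    RING_IDX old_prod = s->req_prod;
    s->req_prod = new_prod;
    return (RING_IDX)(new_prod - s->req_event) <
           (RING_IDX)(new_prod - old_prod);
}
```

A backend that wants to be woken for the next request sets req_event to req_prod + 1 before going idle; the symmetric rsp_event check works the same way in the back->front direction.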

typedef uint16_t blkif_vdev_t;
typedef uint64_t blkif_sector_t;

/*
 * REQUEST CODES.
 */
#define BLKIF_OP_READ              0
#define BLKIF_OP_WRITE             1
/*
 * Recognised only if "feature-barrier" is present in backend xenbus info.
 * The "feature-barrier" node contains a boolean indicating whether barrier
 * requests are likely to succeed or fail. Either way, a barrier request
 * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by
 * the underlying block-device hardware. The boolean simply indicates whether
 * or not it is worthwhile for the frontend to attempt barrier requests.
 * If a backend does not recognise BLKIF_OP_WRITE_BARRIER, it should *not*
 * create the "feature-barrier" node!
 */
#define BLKIF_OP_WRITE_BARRIER     2

/*
 * Recognised if "feature-flush-cache" is present in backend xenbus
 * info.  A flush will ask the underlying storage hardware to flush its
 * non-volatile caches as appropriate.  The "feature-flush-cache" node
 * contains a boolean indicating whether flush requests are likely to
 * succeed or fail. Either way, a flush request may fail at any time
 * with BLKIF_RSP_EOPNOTSUPP if it is unsupported by the underlying
 * block-device hardware. The boolean simply indicates whether or not it
 * is worthwhile for the frontend to attempt flushes.  If a backend does
 * not recognise BLKIF_OP_WRITE_FLUSH_CACHE, it should *not* create the
 * "feature-flush-cache" node!
 */
#define BLKIF_OP_FLUSH_DISKCACHE   3

/*
 * Recognised only if "feature-discard" is present in backend xenbus info.
 * The "feature-discard" node contains a boolean indicating whether trim
 * (ATA) or unmap (SCSI) requests - conveniently called discard - are
 * likely to succeed or fail. Either way, a discard request
 * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by
 * the underlying block-device hardware. The boolean simply indicates whether
 * or not it is worthwhile for the frontend to attempt discard requests.
 * If a backend does not recognise BLKIF_OP_DISCARD, it should *not*
 * create the "feature-discard" node!
 *
 * Discard operation is a request for the underlying block device to mark
 * extents to be erased. However, discard does not guarantee that the blocks
 * will be erased from the device - it is just a hint to the device
 * controller that these blocks are no longer in use. What the device
 * controller does with that information is left to the controller.
 * A discard operation is passed with sector_number as the index of the
 * first sector to discard and nr_sectors as the number of sectors to be
 * discarded. The specified sectors should be discarded if the underlying
 * block device supports trim (ATA) or unmap (SCSI) operations, or
 * BLKIF_RSP_EOPNOTSUPP should be returned.
 * More information about trim/unmap operations at:
 * http://t13.org/Documents/UploadedDocuments/docs2008/
 *     e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc
 * http://www.seagate.com/staticfiles/support/disc/manuals/
 *     Interface%20manuals/100293068c.pdf
 * The backend can optionally provide three extra XenBus attributes to
 * further optimize the discard functionality:
 * 'discard-alignment' - Devices that support discard functionality may
 * internally allocate space in units that are bigger than the exported
 * logical block size. The discard-alignment parameter indicates how many bytes
 * the beginning of the partition is offset from the internal allocation unit's
 * natural alignment.
 * 'discard-granularity' - Devices that support discard functionality may
 * internally allocate space using units that are bigger than the logical block
 * size. The discard-granularity parameter indicates the size of the internal
 * allocation unit in bytes if reported by the device. Otherwise the
 * discard-granularity will be set to match the device's physical block size.
 * 'discard-secure' - All copies of the discarded sectors (potentially created
 * by garbage collection) must also be erased.  To use this feature, the flag
 * BLKIF_DISCARD_SECURE must be set in struct blkif_request_discard.
 */
#define BLKIF_OP_DISCARD           5
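To illustrate how a frontend might honour discard-granularity and discard-alignment when building a BLKIF_OP_DISCARD request, here is a hedged sketch. The helper name and its exact rounding convention are assumptions for illustration; only the XenBus attribute names and the sector_number/nr_sectors fields come from this interface.

```c
#include <assert.h>
#include <stdint.h>

struct discard_range {
    uint64_t sector_number;  /* first 512-byte sector to discard */
    uint64_t nr_sectors;     /* number of sectors to discard     */
};

/* Shrink [start, start+len) (byte offsets within the partition) to the
 * largest sub-range whose ends sit on discard-granularity boundaries,
 * where `alignment` is the partition's byte offset from the internal
 * allocation unit. Returns 0 if nothing discardable remains. */
static int clamp_discard(uint64_t start, uint64_t len,
                         uint64_t granularity, uint64_t alignment,
                         struct discard_range *out)
{
    uint64_t first = start, end = start + len;
    uint64_t up = (first + alignment) % granularity;
    uint64_t down;

    if (up)
        first += granularity - up;          /* round start up  */
    down = (end + alignment) % granularity;
    if (down > end)
        return 0;
    end -= down;                            /* round end down  */
    if (end <= first)
        return 0;
    out->sector_number = first / 512;
    out->nr_sectors = (end - first) / 512;
    return 1;
}
```

For example, with granularity 4096 and alignment 0, a request for bytes [512, 8704) shrinks to sectors 8..15, while a 200-byte range inside a single allocation unit yields nothing worth discarding.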

/*
 * Maximum scatter/gather segments per request.
 * This is carefully chosen so that sizeof(struct blkif_ring) <= PAGE_SIZE.
 * NB. This could be 12 if the ring indexes weren't stored in the same page.
 */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

struct blkif_request_rw {
	uint8_t        nr_segments;  /* number of segments                   */
	blkif_vdev_t   handle;       /* only for read/write requests         */
#ifdef CONFIG_X86_64
	uint32_t       _pad1;	     /* offsetof(blkif_request,u.rw.id) == 8 */
#endif
	uint64_t       id;           /* private guest value, echoed in resp  */
	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
	struct blkif_request_segment {
		grant_ref_t gref;        /* reference to I/O buffer frame        */
		/* @first_sect: first sector in frame to transfer (inclusive).   */
		/* @last_sect: last sector in frame to transfer (inclusive).     */
		uint8_t     first_sect, last_sect;
	} seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
} __attribute__((__packed__));
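Note that first_sect and last_sect are both inclusive, so a segment spans (last_sect - first_sect + 1) sectors. A tiny helper makes the arithmetic explicit (illustrative name, assuming the conventional 512-byte sector unit):

```c
#include <assert.h>
#include <stdint.h>

/* Bytes covered by one scatter/gather segment; both bounds are
 * inclusive, so a full 4 KiB frame is first_sect=0, last_sect=7. */
static uint32_t segment_bytes(uint8_t first_sect, uint8_t last_sect)
{
    return (uint32_t)(last_sect - first_sect + 1) * 512u;
}
```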

struct blkif_request_discard {
	uint8_t        flag;         /* BLKIF_DISCARD_SECURE or zero.        */
#define BLKIF_DISCARD_SECURE (1<<0)  /* ignored if discard-secure=0          */
	blkif_vdev_t   _pad1;        /* only for read/write requests         */
#ifdef CONFIG_X86_64
	uint32_t       _pad2;        /* offsetof(blkif_req..,u.discard.id)==8*/
#endif
	uint64_t       id;           /* private guest value, echoed in resp  */
	blkif_sector_t sector_number;
	uint64_t       nr_sectors;
	uint8_t        _pad3;
} __attribute__((__packed__));

struct blkif_request {
	uint8_t        operation;    /* BLKIF_OP_???                         */
	union {
		struct blkif_request_rw rw;
		struct blkif_request_discard discard;
	} u;
} __attribute__((__packed__));

struct blkif_response {
	uint64_t        id;              /* copied from request */
	uint8_t         operation;       /* copied from request */
	int16_t         status;          /* BLKIF_RSP_???       */
};

/*
 * STATUS RETURN CODES.
 */
 /* Operation not supported (only happens on barrier writes). */
#define BLKIF_RSP_EOPNOTSUPP  -2
 /* Operation failed for some unspecified reason (-EIO). */
#define BLKIF_RSP_ERROR       -1
 /* Operation completed successfully. */
#define BLKIF_RSP_OKAY         0

/*
 * Generate blkif ring structures and types.
 */

DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);

#define VDISK_CDROM        0x1
#define VDISK_REMOVABLE    0x2
#define VDISK_READONLY     0x4

/* Xen-defined major numbers for virtual disks; they look strangely
 * familiar */
#define XEN_IDE0_MAJOR	3
#define XEN_IDE1_MAJOR	22
#define XEN_SCSI_DISK0_MAJOR	8
#define XEN_SCSI_DISK1_MAJOR	65
#define XEN_SCSI_DISK2_MAJOR	66
#define XEN_SCSI_DISK3_MAJOR	67
#define XEN_SCSI_DISK4_MAJOR	68
#define XEN_SCSI_DISK5_MAJOR	69
#define XEN_SCSI_DISK6_MAJOR	70
#define XEN_SCSI_DISK7_MAJOR	71
#define XEN_SCSI_DISK8_MAJOR	128
#define XEN_SCSI_DISK9_MAJOR	129
#define XEN_SCSI_DISK10_MAJOR	130
#define XEN_SCSI_DISK11_MAJOR	131
#define XEN_SCSI_DISK12_MAJOR	132
#define XEN_SCSI_DISK13_MAJOR	133
#define XEN_SCSI_DISK14_MAJOR	134
#define XEN_SCSI_DISK15_MAJOR	135

#endif /* __XEN_PUBLIC_IO_BLKIF_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Tue Dec 18 14:35:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyG7-0003EF-Ks; Tue, 18 Dec 2012 14:35:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkyG6-0003E8-Fz
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:35:02 +0000
Received: from [85.158.138.51:31930] by server-13.bemta-3.messagelabs.com id
	3C/64-00465-51F70D05; Tue, 18 Dec 2012 14:35:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1355841298!29141509!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14257 invoked from network); 18 Dec 2012 14:34:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 14:34:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 14:34:57 +0000
Message-Id: <50D08D1E02000078000B112D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 14:34:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
In-Reply-To: <CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 15:06, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> On Tue, Dec 18, 2012 at 1:58 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:
> 
>> On Tue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:
>> > One of the requests from the xenorg.uservoice.com page that had a
>> > moderate amount of interest was to allow block devices to be resized.
>> > There's a description here:
>> >
>> >
>> 
> https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize
>> >
>> >
>> > I have no idea what this would take -- can anyone comment?
>>
>> Doesn't that already work? I thought this was patched in the PV block
>> drivers ages ago...
>>
>> Yes, http://wiki.xen.org/wiki/XenParavirtOps lists it under 2.6.36.
>>
>> Maybe this is a missing feature of (lib)xl vs xend?
>>
> 
> "xm help" doesn't show a "block-resize" command, nor does grepping through
> tools for "resize" turn up anything.
> 
> Would someone be willing to do some investigation into whether such a
> command is implemented in the protocol, and what it would take to get a "xm
> block-resize" command working?  (Not necessarily do it, but have an idea
> what probably needs to be done.)

Have a look at http://xenbits.xen.org/hg/linux-2.6.18-xen.hg/rev/f7f420bd7b7a
- I don't think there is a strict need for xm/xl commands, except as
a courtesy.

Jan



From xen-devel-bounces@lists.xen.org Tue Dec 18 14:38:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyJ7-0003N3-80; Tue, 18 Dec 2012 14:38:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkyJ4-0003Mu-Vy
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:38:07 +0000
Received: from [85.158.139.211:35461] by server-14.bemta-5.messagelabs.com id
	D3/9B-09538-ECF70D05; Tue, 18 Dec 2012 14:38:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355841483!19564245!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17200 invoked from network); 18 Dec 2012 14:38:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 14:38:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIEbw5x026991
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 14:37:59 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIEbvqZ006575
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 14:37:58 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIEbvZ5010445; Tue, 18 Dec 2012 08:37:57 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 06:37:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 52D441BF210; Tue, 18 Dec 2012 09:37:56 -0500 (EST)
Date: Tue, 18 Dec 2012 09:37:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20121218143756.GA24713@phenom.dumpdata.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 02:06:56PM +0000, George Dunlap wrote:
> On Tue, Dec 18, 2012 at 1:58 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:
> 
> > On Tue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:
> > > One of the requests from the xenorg.uservoice.com page that had a
> > > moderate amount of interest was to allow block devices to be resized.
> > > There's a description here:
> > >
> > >
> > https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize
> > >
> > >
> > > I have no idea what this would take -- can anyone comment?
> >
> > Doesn't that already work? I thought this was patched in the PV block
> > drivers ages ago...
> >
> > Yes, http://wiki.xen.org/wiki/XenParavirtOps lists it under 2.6.36.
> >
> > Maybe this is a missing feature of (lib)xl vs xend?
> >
> 
> "xm help" doesn't show a "block-resize" command, nor does grepping through
> tools for "resize" turn up anything.
> 
> Would someone be willing to do some investigation into whether such a
> command is implemented in the protocol, and what it would take to get a "xm
> block-resize" command working?  (Not necessarily do it, but have an idea
> what probably needs to be done.)

Looking at the history it looks to be:

1fa73be6be65028a7543bba8f14474b42e064a1b which is


commit 1fa73be6be65028a7543bba8f14474b42e064a1b
Author: K. Y. Srinivasan <ksrinivasan@novell.com>
Date:   Thu Mar 11 13:42:26 2010 -0800

    xen/front: Propagate changed size of VBDs
    
    Support dynamic resizing of virtual block devices. This patch supports
    both file backed block devices as well as physical devices that can be
    dynamically resized on the host side.
    
    Signed-off-by: K. Y. Srinivasan <ksrinivasan@novell.com>
    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 60006b7..f47b096 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -930,9 +930,24 @@ static void blkfront_connect(struct blkfront_info *info)
 	unsigned int binfo;
 	int err;
 
-	if ((info->connected == BLKIF_STATE_CONNECTED) ||
-	    (info->connected == BLKIF_STATE_SUSPENDED) )
+	switch (info->connected) {
+	case BLKIF_STATE_CONNECTED:
+		/*
+		 * Potentially, the back-end may be signalling
+		 * a capacity change; update the capacity.
+		 */
+		err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+				   "sectors", "%Lu", &sectors);
+		if (XENBUS_EXIST_ERR(err))
+			return;
+		printk(KERN_INFO "Setting capacity to %Lu\n",
+		       sectors);
+		set_capacity(info->gd, sectors);
+
+		/* fall through */
+	case BLKIF_STATE_SUSPENDED:
 		return;
+	}
 
 	dev_dbg(&info->xbdev->dev, "%s:%s.\n",
 		__func__, info->xbdev->otherend);

So it should be a matter of altering the 'sectors' value and just re-writing
the backend state from XenbusStateConnected to XenbusStateConnected.
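From the toolstack side, the sequence Konrad describes can be sketched as a dry run. The domain and device ids, the new size, and the xenstore paths are all made-up examples; `xenstore-write` is assumed to be available in dom0, and the echoes keep this runnable anywhere (drop them on a real host).

```shell
#!/bin/sh
# Hypothetical sketch: after growing the backing store, dom0 updates the
# backend's 'sectors' node and re-writes state 4 (XenbusStateConnected)
# so the frontend's xenbus watch fires and blkfront re-reads the capacity.
DOMID=5                  # hypothetical guest domain id
DEVID=51712              # hypothetical vbd id (xvda)
NEW_SECTORS=41943040     # new size in 512-byte sectors (20 GiB)
BACKEND="/local/domain/0/backend/vbd/$DOMID/$DEVID"

echo xenstore-write "$BACKEND/sectors" "$NEW_SECTORS"
echo xenstore-write "$BACKEND/state" 4   # XenbusStateConnected, re-written
```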


From xen-devel-bounces@lists.xen.org Tue Dec 18 14:40:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyKn-0003V5-Rg; Tue, 18 Dec 2012 14:39:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkyKn-0003Uc-1O
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:39:53 +0000
Received: from [193.109.254.147:63538] by server-7.bemta-14.messagelabs.com id
	10/61-08102-83080D05; Tue, 18 Dec 2012 14:39:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355841557!10515271!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13864 invoked from network); 18 Dec 2012 14:39:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:39:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="227683"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:39:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:38:54 +0000
Message-ID: <1355841532.14620.242.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 14:38:52 +0000
In-Reply-To: <20681.61121.666480.840769@mariner.uk.xensource.com>
References: <patchbomb.1354274486@cosworth.uk.xensource.com>
	<d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
	<20681.61121.666480.840769@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2] xl: Introduce helper macro for
	option parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 15:05 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH 2 of 2] xl: Introduce helper macro for option parsing"):
> > - s/FOREACH_OPT/SWITCH_FOREACH_OPT/
> > - Document the macro
> 
> Thanks.
> 
> > +/*
> > + * Wraps def_getopt into a convenient loop+switch to process all arguments.
> > + *
> > + * _opt:        an int variable, holds the current option during processing.
> > + * _opts:       short options, as per getopt_long(3)'s optstring argument.
> > + * _lopts:      long options, as per getopt_long(3)'s longopts argument. May
> > + *              be null.
> > + * _help:       name of this command, for usage string.
> > + * _req:        number of non-option command line parameters which are required.
> 
> Can we have a pseudo-prototype for this ?  Eg
> 
>     *   SWITCH_FOREACH_OPT(int &opt, const char *opts,
>     *                      const struct option *longopts,
>     *                      const char *commandname,
>     *                      int num_opts_req) { ...
>     *    case ...

Will do.

> Also, there is no need to prefix macro formal arguments with _.  Why
> do you do that ?  We don't do it elsewhere in libxl...

Not sure why I did that, will remove them.

> > + * Callers should treat SWITCH_FOREACH_OPT as they would a switch
> > + * statement over the value of _opt. Each option given in _opts (or
> > + * _lopts) should be handled by a case statement as if it were inside
> > + * a switch statement.
> > + *
> > + * In addition to the options provided in _opts callers must handle
> > + * two additional pseudo options:
> > + *  0 -- generated if the user passes a -h option. help will be printed,
> > + *       caller should return 0.
> > + *  2 -- generated if the user does not provide _req non-option arguments,
> > + *       caller should return 2.
> 
> I don't think you can mean "caller should return".  Your description
> of the macro doesn't specify anything in particular about the calling
> function so it can't possibly intend for you to return particular
> values from it.
> 
> Did you mean "cause the program to exit" ?

The existing (somewhat inferred, but reasonably consistent) semantics
of xl's main_foo() dispatchers are to return 2, causing xl's main to
exit under these circumstances. Why 2? I've no idea.

Rather than trying to treat this as some sort of generic helper, I'd be
more inclined to just document that it is to be called only by xl
main_foo() functions, since its main purpose is to make the main_foo()
functions more consistent in their argument handling.

I originally preferred return from main_foo() over exit(2) so that we
could have a chance to free the xl context etc (mostly for the benefit
of people using valgrind). We have an atexit hook now so perhaps that
concern is no longer valid.

> And if so why not have the macro (or the function) do that ?

I don't like returns in macros.

> I haven't gone through your call sites to review them...
> 
> Ian.




> 
> Thanks.
> 
> > +/*
> > + * Wraps def_getopt into a convenient loop+switch to process all arguments.
> > + *
> > + * _opt:        an int variable, holds the current option during processing.
> > + * _opts:       short options, as per getopt_long(3)'s optstring argument.
> > + * _lopts:      long options, as per getopt_long(3)'s longopts argument. May
> > + *              be null.
> > + * _help:       name of this command, for usage string.
> > + * _req:        number of non-option command line parameters which are required.
> 
> Can we have a pseudo-prototype for this ?  Eg
> 
>     *   SWITCH_FOREACH_OPT(int &opt, const char *opts,
>     *                      const struct option *longopts,
>     *                      const char *commandname,
>     *                      int num_opts_req) { ...
>     *    case ...

Will do.

> Also, there is no need to prefix macro formal arguments with _.  Why
> do you do that ?  We don't do it elsewhere in libxl...

Not sure why I did that, will remove them.

> > + * Callers should treat SWITCH_FOREACH_OPT as they would a switch
> > + * statement over the value of _opt. Each option given in _opts (or
> > + * _lopts) should be handled by a case statement as if it were inside
> > + * a switch statement.
> > + *
> > + * In addition to the options provided in _opts callers must handle
> > + * two additional pseudo options:
> > + *  0 -- generated if the user passes a -h option. help will be printed,
> > + *       caller should return 0.
> > + *  2 -- generated if the user does not provide _req non-option arguments,
> > + *       caller should return 2.
> 
> I don't think you can mean "caller should return".  Your description
> of the macro doesn't specify anything in particular about the calling
> function so it can't possibly intend for you to return particular
> values from it.
> 
> Did you mean "cause the program to exit" ?

The existing (somewhat inferred, but reasonably consistent) semantics
of xl's main_foo() dispatchers are to return 2, causing xl's main to
exit under these circumstances. Why 2? I've no idea.

Rather than trying to treat this as some sort of generic helper I'd be
more inclined to just document that it is to be called only by xl
main_foo() functions, since its main purpose is to make the main_foo()
functions more consistent in their argument handling.

I originally preferred returning from main_foo() over exit(2) so that we
would have a chance to free the xl context etc. (mostly for the benefit
of people using valgrind). We have an atexit hook now, so perhaps that
concern is no longer valid.

> And if so why not have the macro (or the function) do that ?

I don't like returns in macros.
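For readers following the thread, a minimal sketch of the shape being discussed, following the semantics quoted above (pseudo-option 0 for -h, 2 for too few arguments). The def_getopt() stand-in and all names here are illustrative, not xl's actual code; note the macro itself contains no return, so the caller's case labels decide what to return:

```c
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for xl's def_getopt(): returns the next option,
 * 0 after printing help for -h, 2 if fewer than reqargs non-option
 * arguments remain, and -1 when parsing is complete. */
static int def_getopt(int argc, char **argv, const char *opts,
                      const char *helpstr, int reqargs)
{
    int opt = getopt(argc, argv, opts);
    if (opt == 'h') {
        printf("usage: %s ...\n", helpstr);
        return 0;
    }
    if (opt == -1 && argc - optind < reqargs)
        return 2;
    return opt;
}

/* The loop+switch wrapper under discussion; it references the caller's
 * argc/argv implicitly and deliberately contains no return itself. */
#define SWITCH_FOREACH_OPT(opt, opts, help, req)                        \
    while ((opt = def_getopt(argc, argv, (opts), (help), (req))) != -1) \
        switch (opt)

/* A main_foo()-style dispatcher using it. */
static int main_example(int argc, char **argv)
{
    int opt;

    SWITCH_FOREACH_OPT(opt, "nh", "example", 1) {
    case 0:               /* -h: help already printed */
        return 0;
    case 2:               /* too few non-option arguments */
        return 2;
    case 'n':
        printf("got -n\n");
        break;
    }
    printf("first argument: %s\n", argv[optind]);
    return 0;
}
```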

> I haven't gone through your call sites to review them...
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:44:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyP1-0003jH-KR; Tue, 18 Dec 2012 14:44:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkyOz-0003jB-Ov
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:44:13 +0000
Received: from [85.158.139.83:26721] by server-1.bemta-5.messagelabs.com id
	75/F7-12813-C3180D05; Tue, 18 Dec 2012 14:44:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355841809!23048540!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7315 invoked from network); 18 Dec 2012 14:43:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:43:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="227879"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:43:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:43:20 +0000
Message-ID: <1355841799.14620.244.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 18 Dec 2012 14:43:19 +0000
In-Reply-To: <20121218143756.GA24713@phenom.dumpdata.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
	<20121218143756.GA24713@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 14:37 +0000, Konrad Rzeszutek Wilk wrote:
> 
> So it should be altering the 'sectors' value and just writing
> the backend state from XenbusStateConnected to XenbusStateConnected.

I wonder if anyone fancies patching xen/include/public/io/blkif.h to say
so.
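The sequence Konrad describes amounts to two xenstore writes on the backend directory. A minimal sketch, which only formats (rather than executes) the commands a toolstack would issue; the path layout follows blkback's conventional backend directory, the domid/devid values are made up, and XenbusStateConnected is 4:

```c
#include <stdio.h>

/* Illustrative helper: format, rather than execute, the two xenstore
 * writes needed to grow a vbd live. Updating 'sectors' and then
 * re-writing the state as Connected (4) fires the frontend's watch. */
static int format_resize_cmds(char *buf, size_t len,
                              int domid, int devid, long new_sectors)
{
    return snprintf(buf, len,
        "xenstore-write /local/domain/0/backend/vbd/%d/%d/sectors %ld\n"
        "xenstore-write /local/domain/0/backend/vbd/%d/%d/state 4\n",
        domid, devid, new_sectors, domid, devid);
}
```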

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:46:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyQf-0003oz-4g; Tue, 18 Dec 2012 14:45:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkyQc-0003os-Oh
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 14:45:54 +0000
Received: from [85.158.138.51:18568] by server-12.bemta-3.messagelabs.com id
	7E/82-27559-D9180D05; Tue, 18 Dec 2012 14:45:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1355841946!21428408!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8620 invoked from network); 18 Dec 2012 14:45:48 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 14:45:48 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIEjhIh004090
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 14:45:44 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIEjhkX014038
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 14:45:43 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIEjg7X029213; Tue, 18 Dec 2012 08:45:43 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 06:45:42 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E96081BF216; Tue, 18 Dec 2012 09:45:41 -0500 (EST)
Date: Tue, 18 Dec 2012 09:45:41 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org
Message-ID: <20121218144541.GE4518@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.8-rc0-bugfix-tag
	for 3.8.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5546234902472185501=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5546234902472185501==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="J2SCkAp4GZ/dPZZf"
Content-Disposition: inline


--J2SCkAp4GZ/dPZZf
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8-rc0-bugfix-tag

which has two fixes. One of them addresses a regression caused by a recent
change in the 'x86-bsp-hotplug-for-linus' tip tree that inhibited bootup (the
old function no longer does what it used to do). The other one is just a
vanilla bug fix.

Please pull!

 arch/x86/xen/enlighten.c              |  7 ++++---
 arch/x86/xen/smp.c                    |  2 +-
 include/xen/interface/event_channel.h | 13 +++++++++++++
 3 files changed, 18 insertions(+), 4 deletions(-)

Konrad Rzeszutek Wilk (1):
      xen/smp: Use smp_store_boot_cpu_info() to store cpu info for BSP during boot time.

Wei Liu (2):
      xen: Add EVTCHNOP_reset in Xen interface header files.
      xen/vcpu: Fix vcpu restore path.


--J2SCkAp4GZ/dPZZf
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJQ0IGRAAoJEFjIrFwIi8fJ0AMH/2ztTwEJotcMty/TQWHiMXD7
3w8PshmA9v7VJzN2a+l9AWTfIp5yvtGrl9q//o5DINB6TnyUY8ZiWj8JRuqfT9dS
Qnubb8+xmbKUSQq8ML5qJMLOea72OZSGc9qihQx3FdLvf6aCBUZB+WricfU0F9R7
2kO3RVwCo+JgGr5dKL1L4azQPgKDUIUSwSfJnY5pLCxiZYlfRsebjfJcjhKwb2Vq
x0L5o9VfQxrQ+Xsjr993PAR9iLUkOdmkHtWQNIPXPI8SvqRyIsoWL0wq+NRzFYHR
V5RmSkUp9HO0CsYRCPb4TfApFEUBUPSrnYqWTCBXPsmRPhu+lrHGDCEioYc9b8o=
=9fGK
-----END PGP SIGNATURE-----

--J2SCkAp4GZ/dPZZf--


--===============5546234902472185501==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5546234902472185501==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 14:49:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyUC-0003zn-T9; Tue, 18 Dec 2012 14:49:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkyUB-0003zf-R9
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:49:35 +0000
Received: from [85.158.139.211:41881] by server-8.bemta-5.messagelabs.com id
	F6/B2-15003-F7280D05; Tue, 18 Dec 2012 14:49:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1355842174!19566447!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6967 invoked from network); 18 Dec 2012 14:49:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:49:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="228099"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 14:49:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 14:49:22 +0000
Message-ID: <1355842158.14620.247.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E5=8D=97=E7=BA=AC90=C2=B0?= <1012503222@qq.com>
Date: Tue, 18 Dec 2012 14:49:18 +0000
In-Reply-To: <tencent_142864FF75B0D41D560C7187@qq.com>
References: <tencent_142864FF75B0D41D560C7187@qq.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] test_bindings can't run
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2012-12-15 at 06:36 +0000, 南纬90° wrote:
>  Hello everyone
> 
>  I compiled the file test_bindings.c at xen.../tools/libxen/test and
> got an executable file test_bindings, but when I run it using the
> following parameters "./test_bindings http://localhost:8006/xmlrpc root
> 123456", it says "Error: 2MESSAGE_METHOD_UNKNOWN
> session.login_with_password". Because we are just doing some research,
> we use open source software we downloaded from
> http://xen.org/products/downloads.html . I really don't know what to do
> with it. The truth is that I am not an expert in this area, and my
> English isn't so good.

What is your actual end goal?

The XenAPI stuff in the Xen tree (which includes this libxen) is mostly
dead and unsupported. If you want a useful, supported XenAPI platform I'd
suggest using the xapi toolstack, either via XCP or the packages in
Debian/Ubuntu.

Unless you have questions specifically about developing Xen itself,
xen-users@ and xen-api@ are more appropriate mailing lists for your
questions.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
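The MESSAGE_METHOD_UNKNOWN error reported in this thread means the server does not implement the first XenAPI method test_bindings calls. A sketch of the XML-RPC request body involved, hand-rolled here purely for illustration (test_bindings itself goes through the libxen bindings, and a server without XenAPI support rejects exactly this method name):

```c
#include <stdio.h>

/* Illustrative: format the XML-RPC request test_bindings sends first.
 * A server that does not implement XenAPI answers this method with
 * MESSAGE_METHOD_UNKNOWN, as in the report quoted above. */
static int format_login_call(char *buf, size_t len,
                             const char *user, const char *password)
{
    return snprintf(buf, len,
        "<?xml version=\"1.0\"?>"
        "<methodCall>"
        "<methodName>session.login_with_password</methodName>"
        "<params>"
        "<param><value><string>%s</string></value></param>"
        "<param><value><string>%s</string></value></param>"
        "</params>"
        "</methodCall>", user, password);
}
```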

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:49:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyUH-00040E-9W; Tue, 18 Dec 2012 14:49:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkyUF-000400-R0
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:49:39 +0000
Received: from [85.158.137.99:44464] by server-13.bemta-3.messagelabs.com id
	B4/66-00465-E7280D05; Tue, 18 Dec 2012 14:49:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355842173!14444367!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25531 invoked from network); 18 Dec 2012 14:49:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 14:49:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 14:49:32 +0000
Message-Id: <50D0908B02000078000B115C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 14:49:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20121218143109.GA24471@phenom.dumpdata.com>
In-Reply-To: <20121218143109.GA24471@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: axboe@kernel.dk, matthew@wil.cx, felipe.franciosi@citrix.com,
	martin.petersen@oracle.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC v1: Xen block protocol overhaul - problem
 statement (with pictures!)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 15:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> The A) has two solutions to this (look at
> http://comments.gmane.org/gmane.comp.emulators.xen.devel/140406 for
> details). One proposed by Justin from Spectralogic has to negotiate
> the segment size. This means that the ‘struct blkif_sring_entry’
> is now a variable size. It can expand from 112 bytes (cover 11 pages of
> data - 44kB) to 1580 bytes (256 pages of data - so 1MB). It is a simple
> extension by just making the array in the request expand from 11 to a
> variable size negotiated.

Iirc this extension still limits the number of segments per request
to 255 (as the total number must be specified in the request,
which only has an 8-bit field for that purpose).

That's one of the reasons why for vSCSI I didn't go that route,
but rather allowed the frontend to submit segment information
prior to the actual request (with the trailing segment pieces, if
any, being in the request itself).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:51:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyW9-0004E8-Qf; Tue, 18 Dec 2012 14:51:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TkyW7-0004E1-Re
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 14:51:35 +0000
Received: from [85.158.139.83:31363] by server-10.bemta-5.messagelabs.com id
	8A/B9-13383-6F280D05; Tue, 18 Dec 2012 14:51:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355842294!28261586!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26638 invoked from network); 18 Dec 2012 14:51:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 14:51:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Dec 2012 14:51:33 +0000
Message-Id: <50D0910202000078000B1171@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Tue, 18 Dec 2012 14:51:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
	<20121218143756.GA24713@phenom.dumpdata.com>
In-Reply-To: <20121218143756.GA24713@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 15:37, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> So it should be altering the 'sectors' value and just writing
> the backend state from XenbusStateConnected to XenbusStateConnected.

Which is what the corresponding backend patch does (which for
upstream was separate because I think blkback wasn't upstream
yet back when KY did that work).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 14:58:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 14:58:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkycz-0004eA-WD; Tue, 18 Dec 2012 14:58:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nenolod@dereferenced.org>) id 1Tkycz-0004e4-Cx
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 14:58:41 +0000
Received: from [85.158.143.35:14503] by server-2.bemta-4.messagelabs.com id
	CC/20-30861-0A480D05; Tue, 18 Dec 2012 14:58:40 +0000
X-Env-Sender: nenolod@dereferenced.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355842524!13289460!1
X-Originating-IP: [209.85.216.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15207 invoked from network); 18 Dec 2012 14:55:27 -0000
Received: from mail-qc0-f170.google.com (HELO mail-qc0-f170.google.com)
	(209.85.216.170)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 14:55:27 -0000
Received: by mail-qc0-f170.google.com with SMTP id d42so393763qca.29
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Dec 2012 06:55:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=lDl4UeaZUSx5Ld5vQnEbnLmj2hJCLHe7FjwYdB9mLUk=;
	b=kRiN0Kdf8gygxhiHHrbwdq2EWNRMw8DuDeOimMPkCbdIpWc8+I9W4HEEr7NBN+mElI
	2pXABcUmK0mZaIMi1goOJNZSKEmWVuF4ItZiElGWOdT+Ho0ynuCS3dBV3R+zVVvg8s8P
	Uvvd8PEKc1+qqGhnGgOsWcxNuo5X4SOt43YsDXZ7AThoIimdZRDIMCQcx8u+U491NIHy
	t7Z+75zqEpLFBqtn1DEAtqgT1e4lKD2aQa8REuXl6TVWp3Xop2VmYcDWilMEyCPRj5aD
	5wMshPj4x8cNKgU8TOOPn7GsUwhbIMe5rljxnj9xvzXVjAA940HN2b/DOJBGzrQTdc0Q
	A10A==
MIME-Version: 1.0
Received: by 10.49.75.226 with SMTP id f2mr1042448qew.43.1355842524413; Tue,
	18 Dec 2012 06:55:24 -0800 (PST)
Received: by 10.49.127.148 with HTTP; Tue, 18 Dec 2012 06:55:24 -0800 (PST)
In-Reply-To: <20687.24486.142168.302026@mariner.uk.xensource.com>
References: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
	<20687.24486.142168.302026@mariner.uk.xensource.com>
Date: Tue, 18 Dec 2012 08:55:24 -0600
Message-ID: <CA+T2pCGGbJXppP6ZEgvfxOU+u0TAhzEnpphbVgC4m0AiN2x1fg@mail.gmail.com>
From: William Pitcock <nenolod@dereferenced.org>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
X-Gm-Message-State: ALoCoQkKk2wXWCqeE1+b/VhyoAhNy94/2M0tyPEqiGlyE+UcyMkul3SCwj/cQj1UiT5PKAkmg3N6
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On Mon, Dec 17, 2012 at 12:08 PM, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> William Pitcock writes ("[Xen-devel] introducing python-xen"):
>> I would like to introduce the Python Xen library, which uses libxs and
>> libxc directly to provide some manipulation functions for the domains
>> running on a hypervisor.
>
> Thanks, that's interesting.
>
> However, we would recommend nowadays to build this kind of
> functionality on top of libxl.  The existing python bindings for libxl
> may need some work, but they're probably a good starting point.
>
> Ian.

The plan is to eventually shift over to using the libxl bindings once
they become more mature.  We had a project goal to build a cloud using
the XL toolstack as a basis instead of the xend toolstack, but various
people have told me that the Python bindings aren't ready for
prime-time yet.

We did not feel that shadowing the Xen package in Alpine was worth it
just to get an incomplete set of bindings, and we had to come up with
a solution so we could proceed with the project on time.  I intend to
come back and look at the state of the libxl bindings later, once we
have satisfied our main requirements.  I agree that poking at the
hypervisor and xenstore directly from Python is suboptimal, and I
would prefer not to do it.

William

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:17:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkyug-0005Di-8p; Tue, 18 Dec 2012 15:16:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkyue-0005DG-Ej
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:16:56 +0000
Received: from [85.158.137.99:53488] by server-9.bemta-3.messagelabs.com id
	16/A9-11948-7E880D05; Tue, 18 Dec 2012 15:16:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1355843813!13704067!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24719 invoked from network); 18 Dec 2012 15:16:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:16:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="229036"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 15:16:54 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 15:16:51 +0000
Message-ID: <1355843810.14620.249.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 15:16:50 +0000
In-Reply-To: <1355841532.14620.242.camel@zakaz.uk.xensource.com>
References: <patchbomb.1354274486@cosworth.uk.xensource.com>
	<d4cc790b47d8735ae3f2.1354274488@cosworth.uk.xensource.com>
	<20681.61121.666480.840769@mariner.uk.xensource.com>
	<1355841532.14620.242.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2] xl: Introduce helper macro for
 option parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 14:38 +0000, Ian Campbell wrote:
> 
> I originally preferred return from main_foo() over exit(2) so that we
> could have a chance to free the xl context etc (mostly for the benefit
> of people using valgrind). We have an atexit hook now so perhaps that
> concern is no longer valid.
> 
> > And if so why not have the macro (or the function) do that ?
> 
> I don't like returns in macros. 

I implemented this as a third patch in the series, so we can decide what
we think.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkywr-0005St-Rq; Tue, 18 Dec 2012 15:19:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkywr-0005Si-7P
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:19:13 +0000
Received: from [85.158.143.35:10152] by server-2.bemta-4.messagelabs.com id
	63/4F-30861-07980D05; Tue, 18 Dec 2012 15:19:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355843927!12489838!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28492 invoked from network); 18 Dec 2012 15:18:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:18:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1048371"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 15:18:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 10:18:47 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkywQ-0007qV-RT;
	Tue, 18 Dec 2012 15:18:46 +0000
MIME-Version: 1.0
Message-ID: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:46 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 0 of 3 V2] xl: add helpers for option parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a repost of the last two patches of "xl shutdown compatibility
with xm" which were actually helpers for xl option parsing.

I've addressed Ian J's review comments on the second patch and rebased.

I've added a third patch which makes the macro handle the 0 and 2
cases internally.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkyx3-0005UC-8J; Tue, 18 Dec 2012 15:19:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkyx1-0005Tz-UF
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:19:24 +0000
Received: from [85.158.143.35:10819] by server-1.bemta-4.messagelabs.com id
	EE/52-28401-B7980D05; Tue, 18 Dec 2012 15:19:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355843927!12489838!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28642 invoked from network); 18 Dec 2012 15:18:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:18:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1048373"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 15:18:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 10:18:47 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkywQ-0007qV-U4;
	Tue, 18 Dec 2012 15:18:46 +0000
MIME-Version: 1.0
X-Mercurial-Node: 03b4c57dd562e5477615f4fd6bc0d78f6227a503
Message-ID: <03b4c57dd562e5477615.1355843929@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1355843926@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:49 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 3 of 3 V2] xl: SWITCH_FOREACH_OPT handles
 special options directly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ijc@hellion.org.uk>
# Date 1355843798 0
# Node ID 03b4c57dd562e5477615f4fd6bc0d78f6227a503
# Parent  4f8b5e25370792c1360dc7b96f769acd0d22d6e9
xl: SWITCH_FOREACH_OPT handles special options directly.

This removes the need for the "case 0: case 2:" boilerplate in every
main_foo() but at the expense of a return in a macro which I find
(mildly) distasteful.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 4f8b5e253707 -r 03b4c57dd562 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Dec 18 15:03:40 2012 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Dec 18 15:16:38 2012 +0000
@@ -2396,12 +2396,12 @@ static int def_getopt(int argc, char * c
  * `lopts`) should be handled by a case statement as if it were inside
  * a switch statement.
  *
- * In addition to the options provided in opts callers must handle
- * two additional pseudo options:
- *  0 -- generated if the user passes a -h option. help will be printed,
- *       caller should immediately return 0.
- *  2 -- generated if the user does not provided `num_required_opts`
- *       non-option arguments, caller should immediately return 2.
+ * In addition to the options provided in opts the macro will handle
+ * two special pseudo options:
+ *  -- if the user passes a -h option: help will be printed, and the
+ *     macro will return 0.
+ *  -- if the user does not provide `num_required_opts`
+ *     non-option arguments, the macro will return 2.
  *
  * Example:
  *
@@ -2409,8 +2409,6 @@ static int def_getopt(int argc, char * c
  *     int opt;
  *
  *     SWITCH_FOREACH_OPT(opt, "blah", NULL, "foo", 0) {
- *     case 0: case2:
- *          return opt;
  *      case 'b':
  *          ... handle b option...
  *          break;
@@ -2426,6 +2424,8 @@ static int def_getopt(int argc, char * c
                            commandname, num_required_opts)              \
     while (((opt) = def_getopt(argc, argv, (opts), (longopts),          \
                                 (commandname), (num_required_opts))) != -1) \
+        if (opt == 0) return 0;                                         \
+        if (opt == 2) return 2;                                         \
         switch (opt)
 
 static int set_memory_max(uint32_t domid, const char *mem)
@@ -2452,8 +2452,7 @@ int main_memmax(int argc, char **argv)
     int rc;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "mem-max", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2488,8 +2487,7 @@ int main_memset(int argc, char **argv)
     const char *mem;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "mem-set", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2529,8 +2527,7 @@ int main_cd_eject(int argc, char **argv)
     const char *virtdev;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cd-eject", 2) {
-        case 0: case 2:
-            return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2548,8 +2545,7 @@ int main_cd_insert(int argc, char **argv
     char *file = NULL; /* modified by cd_insert tokenising it */
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cd-insert", 3) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2567,8 +2563,6 @@ int main_console(int argc, char **argv)
     libxl_console_type type = 0;
 
     SWITCH_FOREACH_OPT(opt, "n:t:", NULL, "console", 1) {
-    case 0: case 2:
-        return opt;
     case 't':
         if (!strcmp(optarg, "pv"))
             type = LIBXL_CONSOLE_TYPE_PV;
@@ -2605,8 +2599,6 @@ int main_vncviewer(int argc, char **argv
     int opt, autopass = 0;
 
     SWITCH_FOREACH_OPT(opt, "ah", opts, "vncviewer", 1) {
-    case 0: case 2:
-        return opt;
     case 'a':
         autopass = 1;
         break;
@@ -2643,8 +2635,7 @@ int main_pcilist(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2684,8 +2675,6 @@ int main_pcidetach(int argc, char **argv
     const char *bdf = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
-    case 0: case 2:
-        return opt;
     case 'f':
         force = 1;
         break;
@@ -2724,8 +2713,7 @@ int main_pciattach(int argc, char **argv
     const char *bdf = NULL, *vs = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2760,8 +2748,7 @@ int main_pciassignable_list(int argc, ch
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pciassignable_list();
@@ -2794,8 +2781,7 @@ int main_pciassignable_add(int argc, cha
     const char *bdf = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     bdf = argv[optind];
@@ -2831,8 +2817,6 @@ int main_pciassignable_remove(int argc, 
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
-    case 0: case 2:
-        return opt;
     case 'r':
         rebind=1;
         break;
@@ -3647,8 +3631,6 @@ int main_restore(int argc, char **argv)
     };
 
     SWITCH_FOREACH_OPT(opt, "FhcpdeVA", opts, "restore", 1) {
-    case 0: case 2:
-        return opt;
     case 'c':
         console_autoconnect = 1;
         break;
@@ -3708,8 +3690,6 @@ int main_migrate_receive(int argc, char 
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "Fedr", NULL, "migrate-receive", 0) {
-    case 0: case 2:
-        return opt;
     case 'F':
         daemonize = 0;
         break;
@@ -3745,8 +3725,6 @@ int main_save(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "c", NULL, "save", 2) {
-    case 0: case 2:
-        return opt;
     case 'c':
         checkpoint = 1;
         break;
@@ -3776,8 +3754,6 @@ int main_migrate(int argc, char **argv)
     int opt, daemonize = 1, monitor = 1, debug = 0;
 
     SWITCH_FOREACH_OPT(opt, "FC:s:ed", NULL, "migrate", 2) {
-    case 0: case 2:
-        return opt;
     case 'C':
         config_filename = optarg;
         break;
@@ -3818,8 +3794,7 @@ int main_dump_core(int argc, char **argv
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "dump-core", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
@@ -3831,8 +3806,7 @@ int main_pause(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pause", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pause_domain(find_domain(argv[optind]));
@@ -3845,8 +3819,7 @@ int main_unpause(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "unpause", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     unpause_domain(find_domain(argv[optind]));
@@ -3859,8 +3832,7 @@ int main_destroy(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "destroy", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     destroy_domain(find_domain(argv[optind]));
@@ -3884,8 +3856,6 @@ static int main_shutdown_or_reboot(int d
     };
 
     SWITCH_FOREACH_OPT(opt, "awF", opts, what, 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -3966,8 +3936,6 @@ int main_list(int argc, char **argv)
     int nb_domain, rc;
 
     SWITCH_FOREACH_OPT(opt, "lvhZ", opts, "list", 0) {
-    case 0: case 2:
-        return opt;
     case 'l':
         details = 1;
         break;
@@ -4023,8 +3991,7 @@ int main_vm_list(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vm-list", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     list_vm();
@@ -4056,8 +4023,6 @@ int main_create(int argc, char **argv)
     }
 
     SWITCH_FOREACH_OPT(opt, "Fhnqf:pcdeVA", opts, "create", 0) {
-    case 0: case 2:
-        return opt;
     case 'f':
         filename = optarg;
         break;
@@ -4157,8 +4122,6 @@ int main_config_update(int argc, char **
     }
 
     SWITCH_FOREACH_OPT(opt, "dhqf:", opts, "config_update", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         debug = 1;
         break;
@@ -4254,8 +4217,7 @@ int main_button_press(int argc, char **a
 
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "button-press", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     button_press(find_domain(argv[optind]), argv[optind + 1]);
@@ -4397,8 +4359,7 @@ int main_vcpulist(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpu-list", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     vcpulist(argc - optind, argv + optind);
@@ -4460,8 +4421,7 @@ int main_vcpupin(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
@@ -4498,8 +4458,7 @@ int main_vcpuset(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-set", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     vcpuset(find_domain(argv[optind]), argv[optind+1]);
@@ -4683,8 +4642,6 @@ int main_info(int argc, char **argv)
     int numa = 0;
 
     SWITCH_FOREACH_OPT(opt, "hn", opts, "info", 0) {
-    case 0: case 2:
-        return opt;
     case 'n':
         numa = 1;
         break;
@@ -4722,8 +4679,7 @@ int main_sharing(int argc, char **argv)
     int nb_domain, rc;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "sharing", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (optind >= argc) {
@@ -5005,8 +4961,6 @@ int main_sched_credit(int argc, char **a
     };
 
     SWITCH_FOREACH_OPT(opt, "d:w:c:p:t:r:hs", opts, "sched-credit", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         dom = optarg;
         break;
@@ -5122,8 +5076,6 @@ int main_sched_credit2(int argc, char **
     };
 
     SWITCH_FOREACH_OPT(opt, "d:w:p:h", opts, "sched-credit2", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         dom = optarg;
         break;
@@ -5195,8 +5147,6 @@ int main_sched_sedf(int argc, char **arg
     };
 
     SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         dom = optarg;
         break;
@@ -5290,8 +5240,7 @@ int main_domid(int argc, char **argv)
     const char *domname = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "domid", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domname = argv[optind];
@@ -5314,8 +5263,7 @@ int main_domname(int argc, char **argv)
     char *endptr = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "domname", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = strtol(argv[optind], &endptr, 10);
@@ -5344,8 +5292,7 @@ int main_rename(int argc, char **argv)
     const char *dom, *new_name;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "rename", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     dom = argv[optind++];
@@ -5370,8 +5317,7 @@ int main_trigger(int argc, char **argv)
     libxl_trigger trigger;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "trigger", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind++]);
@@ -5402,8 +5348,7 @@ int main_sysrq(int argc, char **argv)
     const char *sysrq = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "sysrq", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind++]);
@@ -5427,8 +5372,7 @@ int main_debug_keys(int argc, char **arg
     char *keys;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "debug-keys", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     keys = argv[optind];
@@ -5449,8 +5393,6 @@ int main_dmesg(int argc, char **argv)
     int opt, ret = 1;
 
     SWITCH_FOREACH_OPT(opt, "c", NULL, "dmesg", 0) {
-    case 0: case 2:
-        return opt;
     case 'c':
         clear = 1;
         break;
@@ -5473,8 +5415,7 @@ int main_top(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "top", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     return system("xentop");
@@ -5492,8 +5433,7 @@ int main_networkattach(int argc, char **
     unsigned int val;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "network-attach", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (argc-optind > 11) {
@@ -5581,8 +5521,7 @@ int main_networklist(int argc, char **ar
     int nb, i;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "network-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
@@ -5620,8 +5559,7 @@ int main_networkdetach(int argc, char **
     libxl_device_nic nic;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "network-detach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -5653,8 +5591,7 @@ int main_blockattach(int argc, char **ar
     XLU_Config *config = 0;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "block-attach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
@@ -5690,8 +5627,7 @@ int main_blocklist(int argc, char **argv
     libxl_diskinfo diskinfo;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "block-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
@@ -5728,8 +5664,7 @@ int main_blockdetach(int argc, char **ar
     libxl_device_disk disk;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "block-detach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -5755,8 +5690,7 @@ int main_vtpmattach(int argc, char **arg
     uint32_t domid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-attach", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
@@ -5810,8 +5744,7 @@ int main_vtpmlist(int argc, char **argv)
     int nb, i;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
@@ -5852,8 +5785,7 @@ int main_vtpmdetach(int argc, char **arg
     libxl_uuid uuid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-detach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -6046,8 +5978,6 @@ int main_uptime(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "s", NULL, "uptime", 1) {
-    case 0: case 2:
-        return opt;
     case 's':
         short_mode = 1;
         break;
@@ -6071,8 +6001,6 @@ int main_tmem_list(int argc, char **argv
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "al", NULL, "tmem-list", 0) {
-    case 0: case 2:
-        return opt;
     case 'l':
         use_long = 1;
         break;
@@ -6110,8 +6038,6 @@ int main_tmem_freeze(int argc, char **ar
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-freeze", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6141,8 +6067,6 @@ int main_tmem_thaw(int argc, char **argv
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-thaw", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6174,8 +6098,6 @@ int main_tmem_set(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "aw:c:p:", NULL, "tmem-set", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6233,8 +6155,6 @@ int main_tmem_shared_auth(int argc, char
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "au:A:", NULL, "tmem-shared-auth", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6281,8 +6201,7 @@ int main_tmem_freeable(int argc, char **
     int mb;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "tmem-freeale", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     mb = libxl_tmem_freeable(ctx);
@@ -6323,8 +6242,6 @@ int main_cpupoolcreate(int argc, char **
     int rc = -ERROR_FAIL;
 
     SWITCH_FOREACH_OPT(opt, "hnf:", opts, "cpupool-create", 0) {
-    case 0: case 2:
-        return opt;
     case 'f':
         filename = optarg;
         break;
@@ -6506,8 +6423,6 @@ int main_cpupoollist(int argc, char **ar
     int ret = 0;
 
     SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 1) {
-    case 0: case 2:
-        break;
     case 'c':
         opt_cpus = 1;
         break;
@@ -6571,8 +6486,7 @@ int main_cpupooldestroy(int argc, char *
     uint32_t poolid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-destroy", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind];
@@ -6594,8 +6508,7 @@ int main_cpupoolrename(int argc, char **
     uint32_t poolid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-rename", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind++];
@@ -6626,8 +6539,7 @@ int main_cpupoolcpuadd(int argc, char **
     int n;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-add", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind++];
@@ -6672,8 +6584,7 @@ int main_cpupoolcpuremove(int argc, char
     int n;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-remove", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind++];
@@ -6717,8 +6628,7 @@ int main_cpupoolmigrate(int argc, char *
     uint32_t domid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-migrate", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     dom = argv[optind++];
@@ -6759,8 +6669,7 @@ int main_cpupoolnumasplit(int argc, char
     libxl_dominfo info;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-numa-split", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     ret = 0;
@@ -7015,8 +6924,6 @@ int main_remus(int argc, char **argv)
     r_info.compression = 1;
 
     SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
-    case 0: case 2:
-        return opt;
     case 'i':
         r_info.interval = atoi(optarg);
         break;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkyx3-0005UC-8J; Tue, 18 Dec 2012 15:19:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkyx1-0005Tz-UF
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:19:24 +0000
Received: from [85.158.143.35:10819] by server-1.bemta-4.messagelabs.com id
	EE/52-28401-B7980D05; Tue, 18 Dec 2012 15:19:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355843927!12489838!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28642 invoked from network); 18 Dec 2012 15:18:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:18:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1048373"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 15:18:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 10:18:47 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkywQ-0007qV-U4;
	Tue, 18 Dec 2012 15:18:46 +0000
MIME-Version: 1.0
X-Mercurial-Node: 03b4c57dd562e5477615f4fd6bc0d78f6227a503
Message-ID: <03b4c57dd562e5477615.1355843929@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1355843926@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:49 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 3 of 3 V2] xl: SWITCH_FOREACH_OPT handles
 special options directly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ijc@hellion.org.uk>
# Date 1355843798 0
# Node ID 03b4c57dd562e5477615f4fd6bc0d78f6227a503
# Parent  4f8b5e25370792c1360dc7b96f769acd0d22d6e9
xl: SWITCH_FOREACH_OPT handles special options directly.

This removes the need for the "case 0: case 2:" boilerplate in every
main_foo(), at the expense of a return statement inside a macro, which
I find (mildly) distasteful.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 4f8b5e253707 -r 03b4c57dd562 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Dec 18 15:03:40 2012 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Dec 18 15:16:38 2012 +0000
@@ -2396,12 +2396,12 @@ static int def_getopt(int argc, char * c
  * `longopts`) should be handled by a case statement as if it were inside
  * a switch statement.
  *
- * In addition to the options provided in opts callers must handle
- * two additional pseudo options:
- *  0 -- generated if the user passes a -h option. help will be printed,
- *       caller should immediately return 0.
- *  2 -- generated if the user does not provided `num_required_opts`
- *       non-option arguments, caller should immediately return 2.
+ * In addition to the options provided in opts, the macro will handle
+ * two special pseudo options:
+ *  -- if the user passes a -h option, help will be printed and the
+ *     macro will return 0.
+ *  -- if the user does not provide `num_required_opts`
+ *     non-option arguments, the macro will return 2.
  *
  * Example:
  *
@@ -2409,8 +2409,6 @@ static int def_getopt(int argc, char * c
  *     int opt;
  *
  *     SWITCH_FOREACH_OPT(opt, "blah", NULL, "foo", 0) {
- *     case 0: case2:
- *          return opt;
  *      case 'b':
  *          ... handle b option...
  *          break;
@@ -2426,6 +2424,8 @@ static int def_getopt(int argc, char * c
                            commandname, num_required_opts)              \
     while (((opt) = def_getopt(argc, argv, (opts), (longopts),          \
                                 (commandname), (num_required_opts))) != -1) \
+        if (opt == 0) return 0;                                         \
+        else if (opt == 2) return 2; else                               \
         switch (opt)
 
 static int set_memory_max(uint32_t domid, const char *mem)
@@ -2452,8 +2452,7 @@ int main_memmax(int argc, char **argv)
     int rc;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "mem-max", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2488,8 +2487,7 @@ int main_memset(int argc, char **argv)
     const char *mem;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "mem-set", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2529,8 +2527,7 @@ int main_cd_eject(int argc, char **argv)
     const char *virtdev;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cd-eject", 2) {
-        case 0: case 2:
-            return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2548,8 +2545,7 @@ int main_cd_insert(int argc, char **argv
     char *file = NULL; /* modified by cd_insert tokenising it */
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cd-insert", 3) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2567,8 +2563,6 @@ int main_console(int argc, char **argv)
     libxl_console_type type = 0;
 
     SWITCH_FOREACH_OPT(opt, "n:t:", NULL, "console", 1) {
-    case 0: case 2:
-        return opt;
     case 't':
         if (!strcmp(optarg, "pv"))
             type = LIBXL_CONSOLE_TYPE_PV;
@@ -2605,8 +2599,6 @@ int main_vncviewer(int argc, char **argv
     int opt, autopass = 0;
 
     SWITCH_FOREACH_OPT(opt, "ah", opts, "vncviewer", 1) {
-    case 0: case 2:
-        return opt;
     case 'a':
         autopass = 1;
         break;
@@ -2643,8 +2635,7 @@ int main_pcilist(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2684,8 +2675,6 @@ int main_pcidetach(int argc, char **argv
     const char *bdf = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
-    case 0: case 2:
-        return opt;
     case 'f':
         force = 1;
         break;
@@ -2724,8 +2713,7 @@ int main_pciattach(int argc, char **argv
     const char *bdf = NULL, *vs = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -2760,8 +2748,7 @@ int main_pciassignable_list(int argc, ch
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pciassignable_list();
@@ -2794,8 +2781,7 @@ int main_pciassignable_add(int argc, cha
     const char *bdf = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     bdf = argv[optind];
@@ -2831,8 +2817,6 @@ int main_pciassignable_remove(int argc, 
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
-    case 0: case 2:
-        return opt;
     case 'r':
         rebind=1;
         break;
@@ -3647,8 +3631,6 @@ int main_restore(int argc, char **argv)
     };
 
     SWITCH_FOREACH_OPT(opt, "FhcpdeVA", opts, "restore", 1) {
-    case 0: case 2:
-        return opt;
     case 'c':
         console_autoconnect = 1;
         break;
@@ -3708,8 +3690,6 @@ int main_migrate_receive(int argc, char 
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "Fedr", NULL, "migrate-receive", 0) {
-    case 0: case 2:
-        return opt;
     case 'F':
         daemonize = 0;
         break;
@@ -3745,8 +3725,6 @@ int main_save(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "c", NULL, "save", 2) {
-    case 0: case 2:
-        return opt;
     case 'c':
         checkpoint = 1;
         break;
@@ -3776,8 +3754,6 @@ int main_migrate(int argc, char **argv)
     int opt, daemonize = 1, monitor = 1, debug = 0;
 
     SWITCH_FOREACH_OPT(opt, "FC:s:ed", NULL, "migrate", 2) {
-    case 0: case 2:
-        return opt;
     case 'C':
         config_filename = optarg;
         break;
@@ -3818,8 +3794,7 @@ int main_dump_core(int argc, char **argv
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "dump-core", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
@@ -3831,8 +3806,7 @@ int main_pause(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pause", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pause_domain(find_domain(argv[optind]));
@@ -3845,8 +3819,7 @@ int main_unpause(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "unpause", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     unpause_domain(find_domain(argv[optind]));
@@ -3859,8 +3832,7 @@ int main_destroy(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "destroy", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     destroy_domain(find_domain(argv[optind]));
@@ -3884,8 +3856,6 @@ static int main_shutdown_or_reboot(int d
     };
 
     SWITCH_FOREACH_OPT(opt, "awF", opts, what, 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -3966,8 +3936,6 @@ int main_list(int argc, char **argv)
     int nb_domain, rc;
 
     SWITCH_FOREACH_OPT(opt, "lvhZ", opts, "list", 0) {
-    case 0: case 2:
-        return opt;
     case 'l':
         details = 1;
         break;
@@ -4023,8 +3991,7 @@ int main_vm_list(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vm-list", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     list_vm();
@@ -4056,8 +4023,6 @@ int main_create(int argc, char **argv)
     }
 
     SWITCH_FOREACH_OPT(opt, "Fhnqf:pcdeVA", opts, "create", 0) {
-    case 0: case 2:
-        return opt;
     case 'f':
         filename = optarg;
         break;
@@ -4157,8 +4122,6 @@ int main_config_update(int argc, char **
     }
 
     SWITCH_FOREACH_OPT(opt, "dhqf:", opts, "config_update", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         debug = 1;
         break;
@@ -4254,8 +4217,7 @@ int main_button_press(int argc, char **a
 
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "button-press", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     button_press(find_domain(argv[optind]), argv[optind + 1]);
@@ -4397,8 +4359,7 @@ int main_vcpulist(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpu-list", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     vcpulist(argc - optind, argv + optind);
@@ -4460,8 +4421,7 @@ int main_vcpupin(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
@@ -4498,8 +4458,7 @@ int main_vcpuset(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-set", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     vcpuset(find_domain(argv[optind]), argv[optind+1]);
@@ -4683,8 +4642,6 @@ int main_info(int argc, char **argv)
     int numa = 0;
 
     SWITCH_FOREACH_OPT(opt, "hn", opts, "info", 0) {
-    case 0: case 2:
-        return opt;
     case 'n':
         numa = 1;
         break;
@@ -4722,8 +4679,7 @@ int main_sharing(int argc, char **argv)
     int nb_domain, rc;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "sharing", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (optind >= argc) {
@@ -5005,8 +4961,6 @@ int main_sched_credit(int argc, char **a
     };
 
     SWITCH_FOREACH_OPT(opt, "d:w:c:p:t:r:hs", opts, "sched-credit", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         dom = optarg;
         break;
@@ -5122,8 +5076,6 @@ int main_sched_credit2(int argc, char **
     };
 
     SWITCH_FOREACH_OPT(opt, "d:w:p:h", opts, "sched-credit2", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         dom = optarg;
         break;
@@ -5195,8 +5147,6 @@ int main_sched_sedf(int argc, char **arg
     };
 
     SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
-    case 0: case 2:
-        return opt;
     case 'd':
         dom = optarg;
         break;
@@ -5290,8 +5240,7 @@ int main_domid(int argc, char **argv)
     const char *domname = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "domid", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domname = argv[optind];
@@ -5314,8 +5263,7 @@ int main_domname(int argc, char **argv)
     char *endptr = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "domname", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = strtol(argv[optind], &endptr, 10);
@@ -5344,8 +5292,7 @@ int main_rename(int argc, char **argv)
     const char *dom, *new_name;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "rename", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     dom = argv[optind++];
@@ -5370,8 +5317,7 @@ int main_trigger(int argc, char **argv)
     libxl_trigger trigger;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "trigger", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind++]);
@@ -5402,8 +5348,7 @@ int main_sysrq(int argc, char **argv)
     const char *sysrq = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "sysrq", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind++]);
@@ -5427,8 +5372,7 @@ int main_debug_keys(int argc, char **arg
     char *keys;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "debug-keys", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     keys = argv[optind];
@@ -5449,8 +5393,6 @@ int main_dmesg(int argc, char **argv)
     int opt, ret = 1;
 
     SWITCH_FOREACH_OPT(opt, "c", NULL, "dmesg", 0) {
-    case 0: case 2:
-        return opt;
     case 'c':
         clear = 1;
         break;
@@ -5473,8 +5415,7 @@ int main_top(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "top", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     return system("xentop");
@@ -5492,8 +5433,7 @@ int main_networkattach(int argc, char **
     unsigned int val;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "network-attach", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (argc-optind > 11) {
@@ -5581,8 +5521,7 @@ int main_networklist(int argc, char **ar
     int nb, i;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "network-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
@@ -5620,8 +5559,7 @@ int main_networkdetach(int argc, char **
     libxl_device_nic nic;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "network-detach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -5653,8 +5591,7 @@ int main_blockattach(int argc, char **ar
     XLU_Config *config = 0;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "block-attach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
@@ -5690,8 +5627,7 @@ int main_blocklist(int argc, char **argv
     libxl_diskinfo diskinfo;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "block-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
@@ -5728,8 +5664,7 @@ int main_blockdetach(int argc, char **ar
     libxl_device_disk disk;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "block-detach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -5755,8 +5690,7 @@ int main_vtpmattach(int argc, char **arg
     uint32_t domid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-attach", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
@@ -5810,8 +5744,7 @@ int main_vtpmlist(int argc, char **argv)
     int nb, i;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-list", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
@@ -5852,8 +5785,7 @@ int main_vtpmdetach(int argc, char **arg
     libxl_uuid uuid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-detach", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     domid = find_domain(argv[optind]);
@@ -6046,8 +5978,6 @@ int main_uptime(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "s", NULL, "uptime", 1) {
-    case 0: case 2:
-        return opt;
     case 's':
         short_mode = 1;
         break;
@@ -6071,8 +6001,6 @@ int main_tmem_list(int argc, char **argv
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "al", NULL, "tmem-list", 0) {
-    case 0: case 2:
-        return opt;
     case 'l':
         use_long = 1;
         break;
@@ -6110,8 +6038,6 @@ int main_tmem_freeze(int argc, char **ar
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-freeze", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6141,8 +6067,6 @@ int main_tmem_thaw(int argc, char **argv
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-thaw", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6174,8 +6098,6 @@ int main_tmem_set(int argc, char **argv)
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "aw:c:p:", NULL, "tmem-set", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6233,8 +6155,6 @@ int main_tmem_shared_auth(int argc, char
     int opt;
 
     SWITCH_FOREACH_OPT(opt, "au:A:", NULL, "tmem-shared-auth", 0) {
-    case 0: case 2:
-        return opt;
     case 'a':
         all = 1;
         break;
@@ -6281,8 +6201,7 @@ int main_tmem_freeable(int argc, char **
     int mb;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "tmem-freeale", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     mb = libxl_tmem_freeable(ctx);
@@ -6323,8 +6242,6 @@ int main_cpupoolcreate(int argc, char **
     int rc = -ERROR_FAIL;
 
     SWITCH_FOREACH_OPT(opt, "hnf:", opts, "cpupool-create", 0) {
-    case 0: case 2:
-        return opt;
     case 'f':
         filename = optarg;
         break;
@@ -6506,8 +6423,6 @@ int main_cpupoollist(int argc, char **ar
     int ret = 0;
 
     SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 1) {
-    case 0: case 2:
-        break;
     case 'c':
         opt_cpus = 1;
         break;
@@ -6571,8 +6486,7 @@ int main_cpupooldestroy(int argc, char *
     uint32_t poolid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-destroy", 1) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind];
@@ -6594,8 +6508,7 @@ int main_cpupoolrename(int argc, char **
     uint32_t poolid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-rename", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind++];
@@ -6626,8 +6539,7 @@ int main_cpupoolcpuadd(int argc, char **
     int n;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-add", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind++];
@@ -6672,8 +6584,7 @@ int main_cpupoolcpuremove(int argc, char
     int n;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-remove", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     pool = argv[optind++];
@@ -6717,8 +6628,7 @@ int main_cpupoolmigrate(int argc, char *
     uint32_t domid;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-migrate", 2) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     dom = argv[optind++];
@@ -6759,8 +6669,7 @@ int main_cpupoolnumasplit(int argc, char
     libxl_dominfo info;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-numa-split", 0) {
-    case 0: case 2:
-        return opt;
+        /* No options */
     }
 
     ret = 0;
@@ -7015,8 +6924,6 @@ int main_remus(int argc, char **argv)
     r_info.compression = 1;
 
     SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
-    case 0: case 2:
-        return opt;
     case 'i':
         r_info.interval = atoi(optarg);
         break;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkyx5-0005V3-Rz; Tue, 18 Dec 2012 15:19:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkyx4-0005UK-1F
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:19:26 +0000
Received: from [85.158.143.35:2412] by server-3.bemta-4.messagelabs.com id
	09/98-18211-D7980D05; Tue, 18 Dec 2012 15:19:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355843928!16020433!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30504 invoked from network); 18 Dec 2012 15:18:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:18:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1118440"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 15:18:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 10:18:47 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkywQ-0007qV-SZ;
	Tue, 18 Dec 2012 15:18:46 +0000
MIME-Version: 1.0
X-Mercurial-Node: 4f8b5e25370792c1360dc7b96f769acd0d22d6e9
Message-ID: <4f8b5e25370792c1360d.1355843928@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1355843926@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:48 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 2 of 3 V2] xl: Introduce helper macro for option
	parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ijc@hellion.org.uk>
# Date 1355843020 0
# Node ID 4f8b5e25370792c1360dc7b96f769acd0d22d6e9
# Parent  a0d112303c6b0ee71d96bccfd1cb1a0786d6aadb
xl: Introduce helper macro for option parsing.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3:
- drop underscores from macro params.
- improved macro documentation.
v2:
- s/FOREACH_OPT/SWITCH_FOREACH_OPT/
- Document the macro

diff -r a0d112303c6b -r 4f8b5e253707 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Dec 18 14:50:07 2012 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Dec 18 15:03:40 2012 +0000
@@ -2326,6 +2326,10 @@ static int64_t parse_mem_size_kb(const c
 
 #define COMMON_LONG_OPTS {"help", 0, 0, 'h'}
 
+/*
+ * Callers should use SWITCH_FOREACH_OPT in preference to calling this
+ * directly.
+ */
 static int def_getopt(int argc, char * const argv[],
                       const char *optstring,
                       const struct option *longopts,
@@ -2364,6 +2368,66 @@ static int def_getopt(int argc, char * c
     return -1;
 }
 
+/*
+ * Wraps def_getopt into a convenient loop+switch to process all
+ * arguments. This macro is intended to be called from main_XXX().
+ *
+ *   SWITCH_FOREACH_OPT(int *opt, const char *opts,
+ *                      const struct option *longopts,
+ *                      const char *commandname,
+ *                      int num_required_opts) { ...
+ *
+ * opt:               pointer to an int variable, holds the current option
+ *                    during processing.
+ * opts:              short options, as per getopt_long(3)'s optstring argument.
+ * longopts:          long options, as per getopt_long(3)'s longopts argument.
+ *                    May be NULL.
+ * commandname:       name of this command, for usage string.
+ * num_required_opts: number of non-option command line parameters
+ *                    which are required.
+ *
+ * In addition, the calling context is expected to contain variables
+ * "argc" and "argv", as in the conventional
+ *   main(int argc, char **argv)
+ * entry point.
+ *
+ * Callers should treat SWITCH_FOREACH_OPT as they would a switch
+ * statement over the value of `opt`. Each option given in `opts` (or
+ * `longopts`) should be handled by a case statement as if it were inside
+ * a switch statement.
+ *
+ * In addition to the options provided in opts callers must handle
+ * two additional pseudo options:
+ *  0 -- generated if the user passes a -h option. help will be printed,
+ *       caller should immediately return 0.
+ *  2 -- generated if the user does not provided `num_required_opts`
+ *       non-option arguments, caller should immediately return 2.
+ *
+ * Example:
+ *
+ * int main_foo(int argc, char **argv) {
+ *     int opt;
+ *
+ *     SWITCH_FOREACH_OPT(opt, "blah", NULL, "foo", 0) {
+ *     case 0: case2:
+ *          return opt;
+ *      case 'b':
+ *          ... handle b option...
+ *          break;
+ *      case 'l':
+ *          ... handle l option ...
+ *          break;
+ *      case etc etc...
+ *      }
+ *      ... do something useful with the options ...
+ * }
+ */
+#define SWITCH_FOREACH_OPT(opt, opts, longopts,                         \
+                           commandname, num_required_opts)              \
+    while (((opt) = def_getopt(argc, argv, (opts), (longopts),          \
+                                (commandname), (num_required_opts))) != -1) \
+        switch (opt)
+
 static int set_memory_max(uint32_t domid, const char *mem)
 {
     int64_t memorykb;
@@ -2387,8 +2451,10 @@ int main_memmax(int argc, char **argv)
     char *mem;
     int rc;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "mem-max", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-max", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     mem = argv[optind + 1];
@@ -2421,8 +2487,10 @@ int main_memset(int argc, char **argv)
     int opt = 0;
     const char *mem;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "mem-set", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-set", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     mem = argv[optind + 1];
@@ -2460,8 +2528,10 @@ int main_cd_eject(int argc, char **argv)
     int opt = 0;
     const char *virtdev;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cd-eject", 2)) != -1)
-        return opt;
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-eject", 2) {
+        case 0: case 2:
+            return opt;
+    }
 
     domid = find_domain(argv[optind]);
     virtdev = argv[optind + 1];
@@ -2477,8 +2547,10 @@ int main_cd_insert(int argc, char **argv
     const char *virtdev;
     char *file = NULL; /* modified by cd_insert tokenising it */
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cd-insert", 3)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-insert", 3) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     virtdev = argv[optind + 1];
@@ -2494,24 +2566,22 @@ int main_console(int argc, char **argv)
     int opt = 0, num = 0;
     libxl_console_type type = 0;
 
-    while ((opt = def_getopt(argc, argv, "n:t:", NULL, "console", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 't':
-            if (!strcmp(optarg, "pv"))
-                type = LIBXL_CONSOLE_TYPE_PV;
-            else if (!strcmp(optarg, "serial"))
-                type = LIBXL_CONSOLE_TYPE_SERIAL;
-            else {
-                fprintf(stderr, "console type supported are: pv, serial\n");
-                return 2;
-            }
-            break;
-        case 'n':
-            num = atoi(optarg);
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "n:t:", NULL, "console", 1) {
+    case 0: case 2:
+        return opt;
+    case 't':
+        if (!strcmp(optarg, "pv"))
+            type = LIBXL_CONSOLE_TYPE_PV;
+        else if (!strcmp(optarg, "serial"))
+            type = LIBXL_CONSOLE_TYPE_SERIAL;
+        else {
+            fprintf(stderr, "console type supported are: pv, serial\n");
+            return 2;
+        }
+        break;
+    case 'n':
+        num = atoi(optarg);
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -2534,14 +2604,12 @@ int main_vncviewer(int argc, char **argv
     uint32_t domid;
     int opt, autopass = 0;
 
-    while ((opt = def_getopt(argc, argv, "ah", opts, "vncviewer", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            autopass = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "ah", opts, "vncviewer", 1) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        autopass = 1;
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -2574,8 +2642,10 @@ int main_pcilist(int argc, char **argv)
     uint32_t domid;
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pci-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -2613,14 +2683,12 @@ int main_pcidetach(int argc, char **argv
     int force = 0;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "f", NULL, "pci-detach", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'f':
-            force = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
+    case 0: case 2:
+        return opt;
+    case 'f':
+        force = 1;
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -2655,8 +2723,10 @@ int main_pciattach(int argc, char **argv
     int opt;
     const char *bdf = NULL, *vs = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pci-attach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     bdf = argv[optind + 1];
@@ -2689,8 +2759,10 @@ int main_pciassignable_list(int argc, ch
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-list", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     pciassignable_list();
     return 0;
@@ -2721,11 +2793,9 @@ int main_pciassignable_add(int argc, cha
     int opt;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-add", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        }
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
+    case 0: case 2:
+        return opt;
     }
 
     bdf = argv[optind];
@@ -2760,14 +2830,12 @@ int main_pciassignable_remove(int argc, 
     const char *bdf = NULL;
     int rebind = 0;
 
-    while ((opt = def_getopt(argc, argv, "r", NULL, "pci-assignable-remove", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'r':
-            rebind=1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
+    case 0: case 2:
+        return opt;
+    case 'r':
+        rebind = 1;
+        break;
     }
 
     bdf = argv[optind];
@@ -3578,34 +3646,31 @@ int main_restore(int argc, char **argv)
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "FhcpdeVA",
-                             opts, "restore", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'c':
-            console_autoconnect = 1;
-            break;
-        case 'p':
-            paused = 1;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'V':
-            vnc = 1;
-            break;
-        case 'A':
-            vnc = vncautopass = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "FhcpdeVA", opts, "restore", 1) {
+    case 0: case 2:
+        return opt;
+    case 'c':
+        console_autoconnect = 1;
+        break;
+    case 'p':
+        paused = 1;
+        break;
+    case 'd':
+        debug = 1;
+        break;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'V':
+        vnc = 1;
+        break;
+    case 'A':
+        vnc = vncautopass = 1;
+        break;
     }
 
     if (argc-optind == 1) {
@@ -3642,24 +3707,22 @@ int main_migrate_receive(int argc, char 
     int debug = 0, daemonize = 1, monitor = 1, remus = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "Fedr", NULL, "migrate-receive", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        case 'r':
-            remus = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "Fedr", NULL, "migrate-receive", 0) {
+    case 0: case 2:
+        return opt;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'd':
+        debug = 1;
+        break;
+    case 'r':
+        remus = 1;
+        break;
     }
 
     if (argc-optind != 0) {
@@ -3681,14 +3744,12 @@ int main_save(int argc, char **argv)
     int checkpoint = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "c", NULL, "save", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'c':
-            checkpoint = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "c", NULL, "save", 2) {
+    case 0: case 2:
+        return opt;
+    case 'c':
+        checkpoint = 1;
+        break;
     }
 
     if (argc-optind > 3) {
@@ -3714,27 +3775,25 @@ int main_migrate(int argc, char **argv)
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0;
 
-    while ((opt = def_getopt(argc, argv, "FC:s:ed", NULL, "migrate", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'C':
-            config_filename = optarg;
-            break;
-        case 's':
-            ssh_command = optarg;
-            break;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "FC:s:ed", NULL, "migrate", 2) {
+    case 0: case 2:
+        return opt;
+    case 'C':
+        config_filename = optarg;
+        break;
+    case 's':
+        ssh_command = optarg;
+        break;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'd':
+        debug = 1;
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -3758,8 +3817,10 @@ int main_dump_core(int argc, char **argv
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "dump-core", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "dump-core", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
     return 0;
@@ -3769,8 +3830,10 @@ int main_pause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pause", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pause", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     pause_domain(find_domain(argv[optind]));
 
@@ -3781,8 +3844,10 @@ int main_unpause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "unpause", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "unpause", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     unpause_domain(find_domain(argv[optind]));
 
@@ -3793,8 +3858,10 @@ int main_destroy(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "destroy", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "destroy", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     destroy_domain(find_domain(argv[optind]));
     return 0;
@@ -3816,20 +3883,18 @@ static int main_shutdown_or_reboot(int d
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "awF", opts, what, 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        case 'w':
-            wait_for_it = 1;
-            break;
-        case 'F':
-            fallback_trigger = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "awF", opts, what, 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
+    case 'w':
+        wait_for_it = 1;
+        break;
+    case 'F':
+        fallback_trigger = 1;
+        break;
     }
 
     if (!argv[optind] && !all) {
@@ -3900,23 +3965,18 @@ int main_list(int argc, char **argv)
     libxl_dominfo *info, *info_free=0;
     int nb_domain, rc;
 
-    while ((opt = def_getopt(argc, argv, "lvhZ", opts, "list", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'l':
-            details = 1;
-            break;
-        case 'h':
-            help("list");
-            return 0;
-        case 'v':
-            verbose = 1;
-            break;
-        case 'Z':
-            context = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "lvhZ", opts, "list", 0) {
+    case 0: case 2:
+        return opt;
+    case 'l':
+        details = 1;
+        break;
+    case 'v':
+        verbose = 1;
+        break;
+    case 'Z':
+        context = 1;
+        break;
     }
 
     if (optind >= argc) {
@@ -3962,8 +4022,10 @@ int main_vm_list(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vm-list", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vm-list", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     list_vm();
     return 0;
@@ -3993,45 +4055,40 @@ int main_create(int argc, char **argv)
         argc--; argv++;
     }
 
-    while ((opt = def_getopt(argc, argv, "Fhnqf:pcdeVA", opts, "create", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'f':
-            filename = optarg;
-            break;
-        case 'p':
-            paused = 1;
-            break;
-        case 'c':
-            console_autoconnect = 1;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'h':
-            help("create");
-            return 0;
-        case 'n':
-            dryrun_only = 1;
-            break;
-        case 'q':
-            quiet = 1;
-            break;
-        case 'V':
-            vnc = 1;
-            break;
-        case 'A':
-            vnc = vncautopass = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "Fhnqf:pcdeVA", opts, "create", 0) {
+    case 0: case 2:
+        return opt;
+    case 'f':
+        filename = optarg;
+        break;
+    case 'p':
+        paused = 1;
+        break;
+    case 'c':
+        console_autoconnect = 1;
+        break;
+    case 'd':
+        debug = 1;
+        break;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'n':
+        dryrun_only = 1;
+        break;
+    case 'q':
+        quiet = 1;
+        break;
+    case 'V':
+        vnc = 1;
+        break;
+    case 'A':
+        vnc = vncautopass = 1;
+        break;
     }
 
     extra_config[0] = '\0';
@@ -4099,17 +4156,15 @@ int main_config_update(int argc, char **
         argc--; argv++;
     }
 
-    while ((opt = def_getopt(argc, argv, "dhqf:", opts, "config_update", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            debug = 1;
-            break;
-        case 'f':
-            filename = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "dhqf:", opts, "config_update", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        debug = 1;
+        break;
+    case 'f':
+        filename = optarg;
+        break;
     }
 
     extra_config[0] = '\0';
@@ -4197,8 +4252,10 @@ int main_button_press(int argc, char **a
     fprintf(stderr, "WARNING: \"button-press\" is deprecated. "
             "Please use \"trigger\"\n");
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "button-press", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "button-press", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     button_press(find_domain(argv[optind]), argv[optind + 1]);
 
@@ -4338,8 +4396,10 @@ int main_vcpulist(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpu-list", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpu-list", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     vcpulist(argc - optind, argv + optind);
     return 0;
@@ -4399,8 +4459,10 @@ int main_vcpupin(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-pin", 3)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
+    case 0: case 2:
         return opt;
+    }
 
     vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
     return 0;
@@ -4435,8 +4497,10 @@ int main_vcpuset(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-set", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-set", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     vcpuset(find_domain(argv[optind]), argv[optind+1]);
     return 0;
@@ -4618,14 +4682,12 @@ int main_info(int argc, char **argv)
     };
     int numa = 0;
 
-    while ((opt = def_getopt(argc, argv, "hn", opts, "info", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'n':
-            numa = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "hn", opts, "info", 0) {
+    case 0: case 2:
+        return opt;
+    case 'n':
+        numa = 1;
+        break;
     }
 
     print_info(numa);
@@ -4659,8 +4721,10 @@ int main_sharing(int argc, char **argv)
     libxl_dominfo *info, *info_free = NULL;
     int nb_domain, rc;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "sharing", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "sharing", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     if (optind >= argc) {
         info = libxl_list_domain(ctx, &nb_domain);
@@ -4940,36 +5004,34 @@ int main_sched_credit(int argc, char **a
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "d:w:c:p:t:r:hs", opts, "sched-credit", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            dom = optarg;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'c':
-            cap = strtol(optarg, NULL, 10);
-            opt_c = 1;
-            break;
-        case 't':
-            tslice = strtol(optarg, NULL, 10);
-            opt_t = 1;
-            break;
-        case 'r':
-            ratelimit = strtol(optarg, NULL, 10);
-            opt_r = 1;
-            break;
-        case 's':
-            opt_s = 1;
-            break;
-        case 'p':
-            cpupool = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "d:w:c:p:t:r:hs", opts, "sched-credit", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        dom = optarg;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'c':
+        cap = strtol(optarg, NULL, 10);
+        opt_c = 1;
+        break;
+    case 't':
+        tslice = strtol(optarg, NULL, 10);
+        opt_t = 1;
+        break;
+    case 'r':
+        ratelimit = strtol(optarg, NULL, 10);
+        opt_r = 1;
+        break;
+    case 's':
+        opt_s = 1;
+        break;
+    case 'p':
+        cpupool = optarg;
+        break;
     }
 
     if ((cpupool || opt_s) && (dom || opt_w || opt_c)) {
@@ -5059,21 +5121,19 @@ int main_sched_credit2(int argc, char **
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "d:w:p:h", opts, "sched-credit2", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            dom = optarg;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'p':
-            cpupool = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "d:w:p:h", opts, "sched-credit2", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        dom = optarg;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'p':
+        cpupool = optarg;
+        break;
     }
 
     if (cpupool && (dom || opt_w)) {
@@ -5134,37 +5194,35 @@ int main_sched_sedf(int argc, char **arg
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            dom = optarg;
-            break;
-        case 'p':
-            period = strtol(optarg, NULL, 10);
-            opt_p = 1;
-            break;
-        case 's':
-            slice = strtol(optarg, NULL, 10);
-            opt_s = 1;
-            break;
-        case 'l':
-            latency = strtol(optarg, NULL, 10);
-            opt_l = 1;
-            break;
-        case 'e':
-            extra = strtol(optarg, NULL, 10);
-            opt_e = 1;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'c':
-            cpupool = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        dom = optarg;
+        break;
+    case 'p':
+        period = strtol(optarg, NULL, 10);
+        opt_p = 1;
+        break;
+    case 's':
+        slice = strtol(optarg, NULL, 10);
+        opt_s = 1;
+        break;
+    case 'l':
+        latency = strtol(optarg, NULL, 10);
+        opt_l = 1;
+        break;
+    case 'e':
+        extra = strtol(optarg, NULL, 10);
+        opt_e = 1;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'c':
+        cpupool = optarg;
+        break;
     }
 
     if (cpupool && (dom || opt_p || opt_s || opt_l || opt_e || opt_w)) {
@@ -5231,8 +5289,10 @@ int main_domid(int argc, char **argv)
     int opt;
     const char *domname = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "domid", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "domid", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     domname = argv[optind];
 
@@ -5253,8 +5313,10 @@ int main_domname(int argc, char **argv)
     char *domname = NULL;
     char *endptr = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "domname", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "domname", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = strtol(argv[optind], &endptr, 10);
     if (domid == 0 && !strcmp(endptr, argv[optind])) {
@@ -5281,8 +5343,10 @@ int main_rename(int argc, char **argv)
     int opt;
     const char *dom, *new_name;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "rename", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "rename", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     dom = argv[optind++];
     new_name = argv[optind];
@@ -5305,8 +5369,10 @@ int main_trigger(int argc, char **argv)
     const char *trigger_name = NULL;
     libxl_trigger trigger;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "trigger", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "trigger", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind++]);
 
@@ -5335,8 +5401,10 @@ int main_sysrq(int argc, char **argv)
     int opt;
     const char *sysrq = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "sysrq", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "sysrq", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind++]);
 
@@ -5358,8 +5426,10 @@ int main_debug_keys(int argc, char **arg
     int opt;
     char *keys;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "debug-keys", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "debug-keys", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     keys = argv[optind];
 
@@ -5378,14 +5448,12 @@ int main_dmesg(int argc, char **argv)
     char *line;
     int opt, ret = 1;
 
-    while ((opt = def_getopt(argc, argv, "c", NULL, "dmesg", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'c':
-            clear = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "c", NULL, "dmesg", 0) {
+    case 0: case 2:
+        return opt;
+    case 'c':
+        clear = 1;
+        break;
     }
 
     cr = libxl_xen_console_read_start(ctx, clear);
@@ -5404,8 +5472,10 @@ int main_top(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "top", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "top", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     return system("xentop");
 }
@@ -5421,8 +5491,10 @@ int main_networkattach(int argc, char **
     int i;
     unsigned int val;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "network-attach", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "network-attach", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     if (argc-optind > 11) {
         help("network-attach");
@@ -5508,8 +5580,10 @@ int main_networklist(int argc, char **ar
     libxl_nicinfo nicinfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "network-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "network-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
     printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
@@ -5545,8 +5619,10 @@ int main_networkdetach(int argc, char **
     int opt;
     libxl_device_nic nic;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "network-detach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "network-detach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -5576,8 +5652,10 @@ int main_blockattach(int argc, char **ar
     libxl_device_disk disk = { 0 };
     XLU_Config *config = 0;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "block-attach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "block-attach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
         fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
@@ -5611,8 +5689,10 @@ int main_blocklist(int argc, char **argv
     libxl_device_disk *disks;
     libxl_diskinfo diskinfo;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "block-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "block-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
            "Vdev", "BE", "handle", "state", "evt-ch", "ring-ref", "BE-path");
@@ -5647,8 +5727,10 @@ int main_blockdetach(int argc, char **ar
     int opt, rc = 0;
     libxl_device_disk disk;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "block-detach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "block-detach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -5672,8 +5754,10 @@ int main_vtpmattach(int argc, char **arg
     unsigned int val;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-attach", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-attach", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
         fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
@@ -5725,8 +5809,10 @@ int main_vtpmlist(int argc, char **argv)
     libxl_vtpminfo vtpminfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
     printf("%-3s %-2s %-36s %-6s %-5s %-6s %-5s %-10s\n",
@@ -5765,8 +5851,10 @@ int main_vtpmdetach(int argc, char **arg
     libxl_device_vtpm vtpm;
     libxl_uuid uuid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-detach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-detach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -5957,14 +6045,12 @@ int main_uptime(int argc, char **argv)
     int nb_doms = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "s", NULL, "uptime", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 's':
-            short_mode = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "s", NULL, "uptime", 1) {
+    case 0: case 2:
+        return opt;
+    case 's':
+        short_mode = 1;
+        break;
     }
 
     for (;(dom = argv[optind]) != NULL; nb_doms++,optind++)
@@ -5984,17 +6070,15 @@ int main_tmem_list(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "al", NULL, "tmem-list", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'l':
-            use_long = 1;
-            break;
-        case 'a':
-            all = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "al", NULL, "tmem-list", 0) {
+    case 0: case 2:
+        return opt;
+    case 'l':
+        use_long = 1;
+        break;
+    case 'a':
+        all = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6025,14 +6109,12 @@ int main_tmem_freeze(int argc, char **ar
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-freeze", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-freeze", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6058,14 +6140,12 @@ int main_tmem_thaw(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-thaw", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-thaw", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6093,26 +6173,24 @@ int main_tmem_set(int argc, char **argv)
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "aw:c:p:", NULL, "tmem-set", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'c':
-            cap = strtol(optarg, NULL, 10);
-            opt_c = 1;
-            break;
-        case 'p':
-            compress = strtol(optarg, NULL, 10);
-            opt_p = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "aw:c:p:", NULL, "tmem-set", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'c':
+        cap = strtol(optarg, NULL, 10);
+        opt_c = 1;
+        break;
+    case 'p':
+        compress = strtol(optarg, NULL, 10);
+        opt_p = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6154,20 +6232,18 @@ int main_tmem_shared_auth(int argc, char
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "au:A:", NULL, "tmem-shared-auth", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        case 'u':
-            uuid = optarg;
-            break;
-        case 'A':
-            autharg = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "au:A:", NULL, "tmem-shared-auth", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
+    case 'u':
+        uuid = optarg;
+        break;
+    case 'A':
+        autharg = optarg;
+        break;
     }
 
     dom = argv[optind];
@@ -6204,8 +6280,10 @@ int main_tmem_freeable(int argc, char **
     int opt;
     int mb;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "tmem-freeable", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "tmem-freeable", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     mb = libxl_tmem_freeable(ctx);
     if (mb == -1)
@@ -6244,17 +6322,15 @@ int main_cpupoolcreate(int argc, char **
     libxl_cputopology *topology;
     int rc = -ERROR_FAIL;
 
-    while ((opt = def_getopt(argc, argv, "hnf:", opts, "cpupool-create", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'f':
-            filename = optarg;
-            break;
-        case 'n':
-            dryrun_only = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "hnf:", opts, "cpupool-create", 0) {
+    case 0: case 2:
+        return opt;
+    case 'f':
+        filename = optarg;
+        break;
+    case 'n':
+        dryrun_only = 1;
+        break;
     }
 
     memset(extra_config, 0, sizeof(extra_config));
@@ -6429,14 +6505,12 @@ int main_cpupoollist(int argc, char **ar
     char *name;
     int ret = 0;
 
-    while ((opt = def_getopt(argc, argv, "hc", opts, "cpupool-list", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            break;
-        case 'c':
-            opt_cpus = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 1) {
+    case 0: case 2:
+        break;
+    case 'c':
+        opt_cpus = 1;
+        break;
     }
 
     if (optind < argc) {
@@ -6496,8 +6570,10 @@ int main_cpupooldestroy(int argc, char *
     const char *pool;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-destroy", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-destroy", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind];
 
@@ -6517,8 +6593,10 @@ int main_cpupoolrename(int argc, char **
     const char *new_name;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-rename", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-rename", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind++];
 
@@ -6547,8 +6625,10 @@ int main_cpupoolcpuadd(int argc, char **
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-add", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-add", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind++];
     node = -1;
@@ -6591,8 +6671,10 @@ int main_cpupoolcpuremove(int argc, char
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-remove", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-remove", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind++];
     node = -1;
@@ -6634,8 +6716,10 @@ int main_cpupoolmigrate(int argc, char *
     const char *dom;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-migrate", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-migrate", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     dom = argv[optind++];
     pool = argv[optind];
@@ -6674,8 +6758,11 @@ int main_cpupoolnumasplit(int argc, char
     libxl_cputopology *topology;
     libxl_dominfo info;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-numa-split", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-numa-split", 0) {
+    case 0: case 2:
         return opt;
+    }
+
     ret = 0;
 
     poolinfo = libxl_list_cpupool(ctx, &n_pools);
@@ -6927,27 +7014,24 @@ int main_remus(int argc, char **argv)
     r_info.blackhole = 0;
     r_info.compression = 1;
 
-    while ((opt = def_getopt(argc, argv, "bui:s:e", NULL, "remus", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-
-        case 'i':
-	    r_info.interval = atoi(optarg);
-            break;
-        case 'b':
-            r_info.blackhole = 1;
-            break;
-        case 'u':
-	    r_info.compression = 0;
-            break;
-        case 's':
-            ssh_command = optarg;
-            break;
-        case 'e':
-            daemonize = 0;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
+    case 0: case 2:
+        return opt;
+    case 'i':
+        r_info.interval = atoi(optarg);
+        break;
+    case 'b':
+        r_info.blackhole = 1;
+        break;
+    case 'u':
+        r_info.compression = 0;
+        break;
+    case 's':
+        ssh_command = optarg;
+        break;
+    case 'e':
+        daemonize = 0;
+        break;
     }
 
     domid = find_domain(argv[optind]);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:40 2012
MIME-Version: 1.0
X-Mercurial-Node: 4f8b5e25370792c1360dc7b96f769acd0d22d6e9
Message-ID: <4f8b5e25370792c1360d.1355843928@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1355843926@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:48 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 2 of 3 V2] xl: Introduce helper macro for option
	parsing

# HG changeset patch
# User Ian Campbell <ijc@hellion.org.uk>
# Date 1355843020 0
# Node ID 4f8b5e25370792c1360dc7b96f769acd0d22d6e9
# Parent  a0d112303c6b0ee71d96bccfd1cb1a0786d6aadb
xl: Introduce helper macro for option parsing.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3:
- drop underscores from macro params.
- improve macro documentation.
v2:
- s/FOREACH_OPT/SWITCH_FOREACH_OPT/
- Document the macro
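
As a stand-alone illustration of the loop+switch pattern the macro provides
(this sketch uses a simplified def_getopt stand-in written for illustration,
not xl's real helper; main_foo and its "b" option are hypothetical):

```c
/*
 * Self-contained sketch of the pattern SWITCH_FOREACH_OPT wraps up.
 * def_getopt below is a simplified stand-in, NOT xl's real helper;
 * main_foo and its "b" option are made up for illustration.
 */
#include <getopt.h>
#include <stdio.h>

static int def_getopt(int argc, char * const argv[], const char *optstring,
                      const struct option *longopts,
                      const char *commandname, int num_required_opts)
{
    int opt = getopt_long(argc, argv, optstring, longopts, NULL);

    if (opt == 'h' || opt == '?') {
        fprintf(stderr, "usage: xl %s ...\n", commandname);
        return 0;                 /* pseudo option 0: help was printed */
    }
    if (opt == -1 && argc - optind < num_required_opts)
        return 2;                 /* pseudo option 2: too few arguments */
    return opt;                   /* a real option, or -1 to end the loop */
}

#define SWITCH_FOREACH_OPT(opt, opts, longopts, commandname, num_required_opts) \
    while (((opt) = def_getopt(argc, argv, (opts), (longopts),                  \
                               (commandname), (num_required_opts))) != -1)      \
        switch (opt)

/* A hypothetical command handler written against the macro. */
static int main_foo(int argc, char **argv)
{
    int opt, b = 0;

    SWITCH_FOREACH_OPT(opt, "b", NULL, "foo", 1) {
    case 0: case 2:
        return opt;               /* help shown, or usage error */
    case 'b':
        b = 1;
        break;
    }
    printf("b=%d arg=%s\n", b, argv[optind]);
    return 0;
}
```

Called with argv = { "foo", "-b", "dom0" } this prints `b=1 arg=dom0` and
returns 0; called without the one required non-option argument it returns 2,
matching the pseudo-option contract documented below.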

diff -r a0d112303c6b -r 4f8b5e253707 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Dec 18 14:50:07 2012 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Dec 18 15:03:40 2012 +0000
@@ -2326,6 +2326,10 @@ static int64_t parse_mem_size_kb(const c
 
 #define COMMON_LONG_OPTS {"help", 0, 0, 'h'}
 
+/*
+ * Callers should use SWITCH_FOREACH_OPT in preference to calling this
+ * directly.
+ */
 static int def_getopt(int argc, char * const argv[],
                       const char *optstring,
                       const struct option *longopts,
@@ -2364,6 +2368,66 @@ static int def_getopt(int argc, char * c
     return -1;
 }
 
+/*
+ * Wraps def_getopt into a convenient loop+switch to process all
+ * arguments. This macro is intended to be called from main_XXX().
+ *
+ *   SWITCH_FOREACH_OPT(int opt, const char *opts,
+ *                      const struct option *longopts,
+ *                      const char *commandname,
+ *                      int num_required_opts) { ...
+ *
+ * opt:               an int variable which holds the current option
+ *                    during processing.
+ * opts:              short options, as per getopt_long(3)'s optstring argument.
+ * longopts:          long options, as per getopt_long(3)'s longopts argument.
+ *                    May be null.
+ * commandname:       name of this command, for usage string.
+ * num_required_opts: number of non-option command line parameters
+ *                    which are required.
+ *
+ * In addition, the calling context is expected to contain variables
+ * "argc" and "argv", declared in the conventional C style:
+ *   main(int argc, char **argv)
+ * as the macro references them directly.
+ *
+ * Callers should treat SWITCH_FOREACH_OPT as they would a switch
+ * statement over the value of `opt`: each option given in `opts` (or
+ * `longopts`) should be handled by a case label, just as it would be
+ * inside an ordinary switch statement.
+ *
+ * In addition to the options provided in `opts`, callers must handle
+ * two pseudo options:
+ *  0 -- generated if the user passes a -h option. Help has already
+ *       been printed; the caller should immediately return 0.
+ *  2 -- generated if the user does not provide `num_required_opts`
+ *       non-option arguments; the caller should immediately return 2.
+ *
+ * Example:
+ *
+ * int main_foo(int argc, char **argv) {
+ *     int opt;
+ *
+ *     SWITCH_FOREACH_OPT(opt, "blah", NULL, "foo", 0) {
+ *     case 0: case 2:
+ *         return opt;
+ *     case 'b':
+ *         ... handle b option ...
+ *         break;
+ *     case 'l':
+ *         ... handle l option ...
+ *         break;
+ *     case etc etc...
+ *     }
+ *     ... do something useful with the options ...
+ * }
+ */
+#define SWITCH_FOREACH_OPT(opt, opts, longopts,                         \
+                           commandname, num_required_opts)              \
+    while (((opt) = def_getopt(argc, argv, (opts), (longopts),          \
+                               (commandname), (num_required_opts))) != -1) \
+        switch (opt)
+
 static int set_memory_max(uint32_t domid, const char *mem)
 {
     int64_t memorykb;
@@ -2387,8 +2451,10 @@ int main_memmax(int argc, char **argv)
     char *mem;
     int rc;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "mem-max", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-max", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     mem = argv[optind + 1];
@@ -2421,8 +2487,10 @@ int main_memset(int argc, char **argv)
     int opt = 0;
     const char *mem;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "mem-set", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "mem-set", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     mem = argv[optind + 1];
@@ -2460,8 +2528,10 @@ int main_cd_eject(int argc, char **argv)
     int opt = 0;
     const char *virtdev;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cd-eject", 2)) != -1)
-        return opt;
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-eject", 2) {
+    case 0: case 2:
+        return opt;
+    }
 
     domid = find_domain(argv[optind]);
     virtdev = argv[optind + 1];
@@ -2477,8 +2547,10 @@ int main_cd_insert(int argc, char **argv
     const char *virtdev;
     char *file = NULL; /* modified by cd_insert tokenising it */
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cd-insert", 3)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cd-insert", 3) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     virtdev = argv[optind + 1];
@@ -2494,24 +2566,22 @@ int main_console(int argc, char **argv)
     int opt = 0, num = 0;
     libxl_console_type type = 0;
 
-    while ((opt = def_getopt(argc, argv, "n:t:", NULL, "console", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 't':
-            if (!strcmp(optarg, "pv"))
-                type = LIBXL_CONSOLE_TYPE_PV;
-            else if (!strcmp(optarg, "serial"))
-                type = LIBXL_CONSOLE_TYPE_SERIAL;
-            else {
-                fprintf(stderr, "console type supported are: pv, serial\n");
-                return 2;
-            }
-            break;
-        case 'n':
-            num = atoi(optarg);
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "n:t:", NULL, "console", 1) {
+    case 0: case 2:
+        return opt;
+    case 't':
+        if (!strcmp(optarg, "pv"))
+            type = LIBXL_CONSOLE_TYPE_PV;
+        else if (!strcmp(optarg, "serial"))
+            type = LIBXL_CONSOLE_TYPE_SERIAL;
+        else {
+            fprintf(stderr, "console type supported are: pv, serial\n");
+            return 2;
+        }
+        break;
+    case 'n':
+        num = atoi(optarg);
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -2534,14 +2604,12 @@ int main_vncviewer(int argc, char **argv
     uint32_t domid;
     int opt, autopass = 0;
 
-    while ((opt = def_getopt(argc, argv, "ah", opts, "vncviewer", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            autopass = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "ah", opts, "vncviewer", 1) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        autopass = 1;
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -2574,8 +2642,10 @@ int main_pcilist(int argc, char **argv)
     uint32_t domid;
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pci-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -2613,14 +2683,12 @@ int main_pcidetach(int argc, char **argv
     int force = 0;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "f", NULL, "pci-detach", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'f':
-            force = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
+    case 0: case 2:
+        return opt;
+    case 'f':
+        force = 1;
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -2655,8 +2723,10 @@ int main_pciattach(int argc, char **argv
     int opt;
     const char *bdf = NULL, *vs = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pci-attach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
     bdf = argv[optind + 1];
@@ -2689,8 +2759,10 @@ int main_pciassignable_list(int argc, ch
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-list", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     pciassignable_list();
     return 0;
@@ -2721,11 +2793,9 @@ int main_pciassignable_add(int argc, cha
     int opt;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-add", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        }
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
+    case 0: case 2:
+        return opt;
     }
 
     bdf = argv[optind];
@@ -2760,14 +2830,12 @@ int main_pciassignable_remove(int argc, 
     const char *bdf = NULL;
     int rebind = 0;
 
-    while ((opt = def_getopt(argc, argv, "r", NULL, "pci-assignable-remove", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'r':
-            rebind=1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
+    case 0: case 2:
+        return opt;
+    case 'r':
+        rebind = 1;
+        break;
     }
 
     bdf = argv[optind];
@@ -3578,34 +3646,31 @@ int main_restore(int argc, char **argv)
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "FhcpdeVA",
-                             opts, "restore", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'c':
-            console_autoconnect = 1;
-            break;
-        case 'p':
-            paused = 1;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'V':
-            vnc = 1;
-            break;
-        case 'A':
-            vnc = vncautopass = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "FhcpdeVA", opts, "restore", 1) {
+    case 0: case 2:
+        return opt;
+    case 'c':
+        console_autoconnect = 1;
+        break;
+    case 'p':
+        paused = 1;
+        break;
+    case 'd':
+        debug = 1;
+        break;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'V':
+        vnc = 1;
+        break;
+    case 'A':
+        vnc = vncautopass = 1;
+        break;
     }
 
     if (argc-optind == 1) {
@@ -3642,24 +3707,22 @@ int main_migrate_receive(int argc, char 
     int debug = 0, daemonize = 1, monitor = 1, remus = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "Fedr", NULL, "migrate-receive", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        case 'r':
-            remus = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "Fedr", NULL, "migrate-receive", 0) {
+    case 0: case 2:
+        return opt;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'd':
+        debug = 1;
+        break;
+    case 'r':
+        remus = 1;
+        break;
     }
 
     if (argc-optind != 0) {
@@ -3681,14 +3744,12 @@ int main_save(int argc, char **argv)
     int checkpoint = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "c", NULL, "save", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'c':
-            checkpoint = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "c", NULL, "save", 2) {
+    case 0: case 2:
+        return opt;
+    case 'c':
+        checkpoint = 1;
+        break;
     }
 
     if (argc-optind > 3) {
@@ -3714,27 +3775,25 @@ int main_migrate(int argc, char **argv)
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0;
 
-    while ((opt = def_getopt(argc, argv, "FC:s:ed", NULL, "migrate", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'C':
-            config_filename = optarg;
-            break;
-        case 's':
-            ssh_command = optarg;
-            break;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "FC:s:ed", NULL, "migrate", 2) {
+    case 0: case 2:
+        return opt;
+    case 'C':
+        config_filename = optarg;
+        break;
+    case 's':
+        ssh_command = optarg;
+        break;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'd':
+        debug = 1;
+        break;
     }
 
     domid = find_domain(argv[optind]);
@@ -3758,8 +3817,10 @@ int main_dump_core(int argc, char **argv
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "dump-core", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "dump-core", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
     return 0;
@@ -3769,8 +3830,10 @@ int main_pause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "pause", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pause", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     pause_domain(find_domain(argv[optind]));
 
@@ -3781,8 +3844,10 @@ int main_unpause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "unpause", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "unpause", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     unpause_domain(find_domain(argv[optind]));
 
@@ -3793,8 +3858,10 @@ int main_destroy(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "destroy", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "destroy", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     destroy_domain(find_domain(argv[optind]));
     return 0;
@@ -3816,20 +3883,18 @@ static int main_shutdown_or_reboot(int d
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "awF", opts, what, 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        case 'w':
-            wait_for_it = 1;
-            break;
-        case 'F':
-            fallback_trigger = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "awF", opts, what, 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
+    case 'w':
+        wait_for_it = 1;
+        break;
+    case 'F':
+        fallback_trigger = 1;
+        break;
     }
 
     if (!argv[optind] && !all) {
@@ -3900,23 +3965,18 @@ int main_list(int argc, char **argv)
     libxl_dominfo *info, *info_free=0;
     int nb_domain, rc;
 
-    while ((opt = def_getopt(argc, argv, "lvhZ", opts, "list", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'l':
-            details = 1;
-            break;
-        case 'h':
-            help("list");
-            return 0;
-        case 'v':
-            verbose = 1;
-            break;
-        case 'Z':
-            context = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "lvhZ", opts, "list", 0) {
+    case 0: case 2:
+        return opt;
+    case 'l':
+        details = 1;
+        break;
+    case 'v':
+        verbose = 1;
+        break;
+    case 'Z':
+        context = 1;
+        break;
     }
 
     if (optind >= argc) {
@@ -3962,8 +4022,10 @@ int main_vm_list(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vm-list", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vm-list", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     list_vm();
     return 0;
@@ -3993,45 +4055,40 @@ int main_create(int argc, char **argv)
         argc--; argv++;
     }
 
-    while ((opt = def_getopt(argc, argv, "Fhnqf:pcdeVA", opts, "create", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'f':
-            filename = optarg;
-            break;
-        case 'p':
-            paused = 1;
-            break;
-        case 'c':
-            console_autoconnect = 1;
-            break;
-        case 'd':
-            debug = 1;
-            break;
-        case 'F':
-            daemonize = 0;
-            break;
-        case 'e':
-            daemonize = 0;
-            monitor = 0;
-            break;
-        case 'h':
-            help("create");
-            return 0;
-        case 'n':
-            dryrun_only = 1;
-            break;
-        case 'q':
-            quiet = 1;
-            break;
-        case 'V':
-            vnc = 1;
-            break;
-        case 'A':
-            vnc = vncautopass = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "Fhnqf:pcdeVA", opts, "create", 0) {
+    case 0: case 2:
+        return opt;
+    case 'f':
+        filename = optarg;
+        break;
+    case 'p':
+        paused = 1;
+        break;
+    case 'c':
+        console_autoconnect = 1;
+        break;
+    case 'd':
+        debug = 1;
+        break;
+    case 'F':
+        daemonize = 0;
+        break;
+    case 'e':
+        daemonize = 0;
+        monitor = 0;
+        break;
+    case 'n':
+        dryrun_only = 1;
+        break;
+    case 'q':
+        quiet = 1;
+        break;
+    case 'V':
+        vnc = 1;
+        break;
+    case 'A':
+        vnc = vncautopass = 1;
+        break;
     }
 
     extra_config[0] = '\0';
@@ -4099,17 +4156,15 @@ int main_config_update(int argc, char **
         argc--; argv++;
     }
 
-    while ((opt = def_getopt(argc, argv, "dhqf:", opts, "config_update", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            debug = 1;
-            break;
-        case 'f':
-            filename = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "dhqf:", opts, "config_update", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        debug = 1;
+        break;
+    case 'f':
+        filename = optarg;
+        break;
     }
 
     extra_config[0] = '\0';
@@ -4197,8 +4252,11 @@ int main_button_press(int argc, char **a
     fprintf(stderr, "WARNING: \"button-press\" is deprecated. "
             "Please use \"trigger\"\n");
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "button-press", 2)) != -1)
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "button-press", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     button_press(find_domain(argv[optind]), argv[optind + 1]);
 
@@ -4338,8 +4396,10 @@ int main_vcpulist(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpu-list", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpu-list", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     vcpulist(argc - optind, argv + optind);
     return 0;
@@ -4399,8 +4459,10 @@ int main_vcpupin(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-pin", 3)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
+    case 0: case 2:
         return opt;
+    }
 
     vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
     return 0;
@@ -4435,8 +4497,10 @@ int main_vcpuset(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-set", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-set", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     vcpuset(find_domain(argv[optind]), argv[optind+1]);
     return 0;
@@ -4618,14 +4682,12 @@ int main_info(int argc, char **argv)
     };
     int numa = 0;
 
-    while ((opt = def_getopt(argc, argv, "hn", opts, "info", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'n':
-            numa = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "hn", opts, "info", 0) {
+    case 0: case 2:
+        return opt;
+    case 'n':
+        numa = 1;
+        break;
     }
 
     print_info(numa);
@@ -4659,8 +4721,10 @@ int main_sharing(int argc, char **argv)
     libxl_dominfo *info, *info_free = NULL;
     int nb_domain, rc;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "sharing", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "sharing", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     if (optind >= argc) {
         info = libxl_list_domain(ctx, &nb_domain);
@@ -4940,36 +5004,34 @@ int main_sched_credit(int argc, char **a
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "d:w:c:p:t:r:hs", opts, "sched-credit", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            dom = optarg;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'c':
-            cap = strtol(optarg, NULL, 10);
-            opt_c = 1;
-            break;
-        case 't':
-            tslice = strtol(optarg, NULL, 10);
-            opt_t = 1;
-            break;
-        case 'r':
-            ratelimit = strtol(optarg, NULL, 10);
-            opt_r = 1;
-            break;
-        case 's':
-            opt_s = 1;
-            break;
-        case 'p':
-            cpupool = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "d:w:c:p:t:r:hs", opts, "sched-credit", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        dom = optarg;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'c':
+        cap = strtol(optarg, NULL, 10);
+        opt_c = 1;
+        break;
+    case 't':
+        tslice = strtol(optarg, NULL, 10);
+        opt_t = 1;
+        break;
+    case 'r':
+        ratelimit = strtol(optarg, NULL, 10);
+        opt_r = 1;
+        break;
+    case 's':
+        opt_s = 1;
+        break;
+    case 'p':
+        cpupool = optarg;
+        break;
     }
 
     if ((cpupool || opt_s) && (dom || opt_w || opt_c)) {
@@ -5059,21 +5121,19 @@ int main_sched_credit2(int argc, char **
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "d:w:p:h", opts, "sched-credit2", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            dom = optarg;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'p':
-            cpupool = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "d:w:p:h", opts, "sched-credit2", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        dom = optarg;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'p':
+        cpupool = optarg;
+        break;
     }
 
     if (cpupool && (dom || opt_w)) {
@@ -5134,37 +5194,35 @@ int main_sched_sedf(int argc, char **arg
         {0, 0, 0, 0}
     };
 
-    while ((opt = def_getopt(argc, argv, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'd':
-            dom = optarg;
-            break;
-        case 'p':
-            period = strtol(optarg, NULL, 10);
-            opt_p = 1;
-            break;
-        case 's':
-            slice = strtol(optarg, NULL, 10);
-            opt_s = 1;
-            break;
-        case 'l':
-            latency = strtol(optarg, NULL, 10);
-            opt_l = 1;
-            break;
-        case 'e':
-            extra = strtol(optarg, NULL, 10);
-            opt_e = 1;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'c':
-            cpupool = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0) {
+    case 0: case 2:
+        return opt;
+    case 'd':
+        dom = optarg;
+        break;
+    case 'p':
+        period = strtol(optarg, NULL, 10);
+        opt_p = 1;
+        break;
+    case 's':
+        slice = strtol(optarg, NULL, 10);
+        opt_s = 1;
+        break;
+    case 'l':
+        latency = strtol(optarg, NULL, 10);
+        opt_l = 1;
+        break;
+    case 'e':
+        extra = strtol(optarg, NULL, 10);
+        opt_e = 1;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'c':
+        cpupool = optarg;
+        break;
     }
 
     if (cpupool && (dom || opt_p || opt_s || opt_l || opt_e || opt_w)) {
@@ -5231,8 +5289,10 @@ int main_domid(int argc, char **argv)
     int opt;
     const char *domname = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "domid", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "domid", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     domname = argv[optind];
 
@@ -5253,8 +5313,10 @@ int main_domname(int argc, char **argv)
     char *domname = NULL;
     char *endptr = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "domname", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "domname", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = strtol(argv[optind], &endptr, 10);
     if (domid == 0 && !strcmp(endptr, argv[optind])) {
@@ -5281,8 +5343,10 @@ int main_rename(int argc, char **argv)
     int opt;
     const char *dom, *new_name;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "rename", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "rename", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     dom = argv[optind++];
     new_name = argv[optind];
@@ -5305,8 +5369,10 @@ int main_trigger(int argc, char **argv)
     const char *trigger_name = NULL;
     libxl_trigger trigger;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "trigger", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "trigger", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind++]);
 
@@ -5335,8 +5401,10 @@ int main_sysrq(int argc, char **argv)
     int opt;
     const char *sysrq = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "sysrq", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "sysrq", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind++]);
 
@@ -5358,8 +5426,10 @@ int main_debug_keys(int argc, char **arg
     int opt;
     char *keys;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "debug-keys", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "debug-keys", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     keys = argv[optind];
 
@@ -5378,14 +5448,12 @@ int main_dmesg(int argc, char **argv)
     char *line;
     int opt, ret = 1;
 
-    while ((opt = def_getopt(argc, argv, "c", NULL, "dmesg", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'c':
-            clear = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "c", NULL, "dmesg", 0) {
+    case 0: case 2:
+        return opt;
+    case 'c':
+        clear = 1;
+        break;
     }
 
     cr = libxl_xen_console_read_start(ctx, clear);
@@ -5404,8 +5472,10 @@ int main_top(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "top", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "top", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     return system("xentop");
 }
@@ -5421,8 +5491,10 @@ int main_networkattach(int argc, char **
     int i;
     unsigned int val;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "network-attach", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "network-attach", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     if (argc-optind > 11) {
         help("network-attach");
@@ -5508,8 +5580,10 @@ int main_networklist(int argc, char **ar
     libxl_nicinfo nicinfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "network-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "network-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
     printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
@@ -5545,8 +5619,10 @@ int main_networkdetach(int argc, char **
     int opt;
     libxl_device_nic nic;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "network-detach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "network-detach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -5576,8 +5652,10 @@ int main_blockattach(int argc, char **ar
     libxl_device_disk disk = { 0 };
     XLU_Config *config = 0;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "block-attach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "block-attach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
         fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
@@ -5611,8 +5689,10 @@ int main_blocklist(int argc, char **argv
     libxl_device_disk *disks;
     libxl_diskinfo diskinfo;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "block-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "block-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
            "Vdev", "BE", "handle", "state", "evt-ch", "ring-ref", "BE-path");
@@ -5647,8 +5727,10 @@ int main_blockdetach(int argc, char **ar
     int opt, rc = 0;
     libxl_device_disk disk;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "block-detach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "block-detach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -5672,8 +5754,10 @@ int main_vtpmattach(int argc, char **arg
     unsigned int val;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-attach", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-attach", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
         fprintf(stderr, "%s is an invalid domain identifier\n", argv[optind]);
@@ -5725,8 +5809,10 @@ int main_vtpmlist(int argc, char **argv)
     libxl_vtpminfo vtpminfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-list", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-list", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
     printf("%-3s %-2s %-36s %-6s %-5s %-6s %-5s %-10s\n",
@@ -5765,8 +5851,10 @@ int main_vtpmdetach(int argc, char **arg
     libxl_device_vtpm vtpm;
     libxl_uuid uuid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-detach", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vtpm-detach", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     domid = find_domain(argv[optind]);
 
@@ -5957,14 +6045,12 @@ int main_uptime(int argc, char **argv)
     int nb_doms = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "s", NULL, "uptime", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 's':
-            short_mode = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "s", NULL, "uptime", 1) {
+    case 0: case 2:
+        return opt;
+    case 's':
+        short_mode = 1;
+        break;
     }
 
     for (;(dom = argv[optind]) != NULL; nb_doms++,optind++)
@@ -5984,17 +6070,15 @@ int main_tmem_list(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "al", NULL, "tmem-list", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'l':
-            use_long = 1;
-            break;
-        case 'a':
-            all = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "al", NULL, "tmem-list", 0) {
+    case 0: case 2:
+        return opt;
+    case 'l':
+        use_long = 1;
+        break;
+    case 'a':
+        all = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6025,14 +6109,12 @@ int main_tmem_freeze(int argc, char **ar
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-freeze", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-freeze", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6058,14 +6140,12 @@ int main_tmem_thaw(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-thaw", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "a", NULL, "tmem-thaw", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6093,26 +6173,24 @@ int main_tmem_set(int argc, char **argv)
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "aw:c:p:", NULL, "tmem-set", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        case 'w':
-            weight = strtol(optarg, NULL, 10);
-            opt_w = 1;
-            break;
-        case 'c':
-            cap = strtol(optarg, NULL, 10);
-            opt_c = 1;
-            break;
-        case 'p':
-            compress = strtol(optarg, NULL, 10);
-            opt_p = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "aw:c:p:", NULL, "tmem-set", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
+    case 'w':
+        weight = strtol(optarg, NULL, 10);
+        opt_w = 1;
+        break;
+    case 'c':
+        cap = strtol(optarg, NULL, 10);
+        opt_c = 1;
+        break;
+    case 'p':
+        compress = strtol(optarg, NULL, 10);
+        opt_p = 1;
+        break;
     }
 
     dom = argv[optind];
@@ -6154,20 +6232,18 @@ int main_tmem_shared_auth(int argc, char
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "au:A:", NULL, "tmem-shared-auth", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'a':
-            all = 1;
-            break;
-        case 'u':
-            uuid = optarg;
-            break;
-        case 'A':
-            autharg = optarg;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "au:A:", NULL, "tmem-shared-auth", 0) {
+    case 0: case 2:
+        return opt;
+    case 'a':
+        all = 1;
+        break;
+    case 'u':
+        uuid = optarg;
+        break;
+    case 'A':
+        autharg = optarg;
+        break;
     }
 
     dom = argv[optind];
@@ -6204,8 +6280,10 @@ int main_tmem_freeable(int argc, char **
     int opt;
     int mb;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "tmem-freeable", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "tmem-freeable", 0) {
+    case 0: case 2:
         return opt;
+    }
 
     mb = libxl_tmem_freeable(ctx);
     if (mb == -1)
@@ -6244,17 +6322,15 @@ int main_cpupoolcreate(int argc, char **
     libxl_cputopology *topology;
     int rc = -ERROR_FAIL;
 
-    while ((opt = def_getopt(argc, argv, "hnf:", opts, "cpupool-create", 0)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-        case 'f':
-            filename = optarg;
-            break;
-        case 'n':
-            dryrun_only = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "hnf:", opts, "cpupool-create", 0) {
+    case 0: case 2:
+        return opt;
+    case 'f':
+        filename = optarg;
+        break;
+    case 'n':
+        dryrun_only = 1;
+        break;
     }
 
     memset(extra_config, 0, sizeof(extra_config));
@@ -6429,14 +6505,12 @@ int main_cpupoollist(int argc, char **ar
     char *name;
     int ret = 0;
 
-    while ((opt = def_getopt(argc, argv, "hc", opts, "cpupool-list", 1)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            break;
-        case 'c':
-            opt_cpus = 1;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 1) {
+    case 0: case 2:
+        break;
+    case 'c':
+        opt_cpus = 1;
+        break;
     }
 
     if (optind < argc) {
@@ -6496,8 +6570,10 @@ int main_cpupooldestroy(int argc, char *
     const char *pool;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-destroy", 1)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-destroy", 1) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind];
 
@@ -6517,8 +6593,10 @@ int main_cpupoolrename(int argc, char **
     const char *new_name;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-rename", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-rename", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind++];
 
@@ -6547,8 +6625,10 @@ int main_cpupoolcpuadd(int argc, char **
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-add", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-add", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind++];
     node = -1;
@@ -6591,8 +6671,10 @@ int main_cpupoolcpuremove(int argc, char
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-remove", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-cpu-remove", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     pool = argv[optind++];
     node = -1;
@@ -6634,8 +6716,10 @@ int main_cpupoolmigrate(int argc, char *
     const char *dom;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-migrate", 2)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-migrate", 2) {
+    case 0: case 2:
         return opt;
+    }
 
     dom = argv[optind++];
     pool = argv[optind];
@@ -6674,8 +6758,11 @@ int main_cpupoolnumasplit(int argc, char
     libxl_cputopology *topology;
     libxl_dominfo info;
 
-    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-numa-split", 0)) != -1)
+    SWITCH_FOREACH_OPT(opt, "", NULL, "cpupool-numa-split", 0) {
+    case 0: case 2:
         return opt;
+    }
+
     ret = 0;
 
     poolinfo = libxl_list_cpupool(ctx, &n_pools);
@@ -6927,27 +7014,24 @@ int main_remus(int argc, char **argv)
     r_info.blackhole = 0;
     r_info.compression = 1;
 
-    while ((opt = def_getopt(argc, argv, "bui:s:e", NULL, "remus", 2)) != -1) {
-        switch (opt) {
-        case 0: case 2:
-            return opt;
-
-        case 'i':
-	    r_info.interval = atoi(optarg);
-            break;
-        case 'b':
-            r_info.blackhole = 1;
-            break;
-        case 'u':
-	    r_info.compression = 0;
-            break;
-        case 's':
-            ssh_command = optarg;
-            break;
-        case 'e':
-            daemonize = 0;
-            break;
-        }
+    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
+    case 0: case 2:
+        return opt;
+    case 'i':
+        r_info.interval = atoi(optarg);
+        break;
+    case 'b':
+        r_info.blackhole = 1;
+        break;
+    case 'u':
+        r_info.compression = 0;
+        break;
+    case 's':
+        ssh_command = optarg;
+        break;
+    case 'e':
+        daemonize = 0;
+        break;
     }
 
     domid = find_domain(argv[optind]);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyxM-0005a0-Mx; Tue, 18 Dec 2012 15:19:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkyxK-0005ZW-RB
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:19:43 +0000
Received: from [85.158.143.35:11918] by server-2.bemta-4.messagelabs.com id
	37/FF-30861-E8980D05; Tue, 18 Dec 2012 15:19:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355843928!16020433!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30357 invoked from network); 18 Dec 2012 15:18:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:18:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1118439"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 15:18:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 10:18:47 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkywQ-0007qV-S0;
	Tue, 18 Dec 2012 15:18:46 +0000
MIME-Version: 1.0
X-Mercurial-Node: a0d112303c6b0ee71d96bccfd1cb1a0786d6aadb
Message-ID: <a0d112303c6b0ee71d96.1355843927@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1355843926@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:47 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 1 of 3 V2] xl: allow def_getopt to handle long
	options
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ijc@hellion.org.uk>
# Date 1355842207 0
# Node ID a0d112303c6b0ee71d96bccfd1cb1a0786d6aadb
# Parent  22dfde8230f74b7868ddd42fee8ca29babdc21c5
xl: allow def_getopt to handle long options

This improves the consistency of option parsing and error handling.

--help is now supported consistently across all commands.

Many users of getopt_long were needlessly passing an option_index
pointer which was not used.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
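
For readers skimming the archive, the pattern this patch introduces can be sketched in isolation: a shared getopt wrapper that falls back to a default long-option table containing only --help when the caller passes NULL, and signals "help printed" (0) or "usage error" (2) back to the command handler. The names below (demo_getopt, reqargs) are illustrative stand-ins, not the actual xl tree code:

```c
#include <getopt.h>
#include <stdio.h>

#define COMMON_LONG_OPTS {"help", 0, 0, 'h'}

/* Returns -1 when parsing finished normally, 0 after handling --help/-h,
 * 2 when fewer than reqargs positional arguments remain (a usage error),
 * or the option character for the caller's switch to handle. */
static int demo_getopt(int argc, char *const argv[], const char *optstring,
                       const struct option *longopts, int reqargs)
{
    int opt;
    static const struct option def_options[] = {
        COMMON_LONG_OPTS,
        {0, 0, 0, 0}
    };

    /* Callers with no long options of their own still get --help. */
    if (!longopts)
        longopts = def_options;

    opterr = 0;
    while ((opt = getopt_long(argc, argv, optstring, longopts, NULL)) == '?') {
        if (optopt == 'h')
            return 0;              /* caller would print its help text here */
        fprintf(stderr, "option `%c' not supported.\n", optopt);
    }
    if (opt == 'h')
        return 0;                  /* matched --help (or -h) via longopts */
    if (opt != -1)
        return opt;                /* a real option for the caller */

    if (argc - optind < reqargs)
        return 2;                  /* too few positional arguments */
    return -1;                     /* done; positionals start at optind */
}
```

A handler then only needs `case 0: case 2: return opt;` in its option switch, which is exactly the shape the SWITCH_FOREACH_OPT conversions in the follow-up patch rely on.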

diff -r 22dfde8230f7 -r a0d112303c6b tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Dec 18 14:49:51 2012 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Dec 18 14:50:07 2012 +0000
@@ -2324,19 +2324,34 @@ static int64_t parse_mem_size_kb(const c
     return kbytes;
 }
 
-static int def_getopt(int argc, char * const argv[], const char *optstring,
+#define COMMON_LONG_OPTS {"help", 0, 0, 'h'}
+
+static int def_getopt(int argc, char * const argv[],
+                      const char *optstring,
+                      const struct option *longopts,
                       const char* helpstr, int reqargs)
 {
     int opt;
+    const struct option def_options[] = {
+        COMMON_LONG_OPTS,
+        {0, 0, 0, 0}
+    };
+
+    if (!longopts)
+        longopts = def_options;
 
     opterr = 0;
-    while ((opt = getopt(argc, argv, optstring)) == '?') {
+    while ((opt = getopt_long(argc, argv, optstring, longopts, NULL)) == '?') {
         if (optopt == 'h') {
             help(helpstr);
             return 0;
         }
         fprintf(stderr, "option `%c' not supported.\n", optopt);
     }
+    if (opt == 'h') {
+        help(helpstr);
+        return 0;
+    }
     if (opt != -1)
         return opt;
 
@@ -2372,7 +2387,7 @@ int main_memmax(int argc, char **argv)
     char *mem;
     int rc;
 
-    if ((opt = def_getopt(argc, argv, "", "mem-max", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "mem-max", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2406,7 +2421,7 @@ int main_memset(int argc, char **argv)
     int opt = 0;
     const char *mem;
 
-    if ((opt = def_getopt(argc, argv, "", "mem-set", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "mem-set", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2445,7 +2460,7 @@ int main_cd_eject(int argc, char **argv)
     int opt = 0;
     const char *virtdev;
 
-    if ((opt = def_getopt(argc, argv, "", "cd-eject", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cd-eject", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2462,7 +2477,7 @@ int main_cd_insert(int argc, char **argv
     const char *virtdev;
     char *file = NULL; /* modified by cd_insert tokenising it */
 
-    if ((opt = def_getopt(argc, argv, "", "cd-insert", 3)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cd-insert", 3)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2479,7 +2494,7 @@ int main_console(int argc, char **argv)
     int opt = 0, num = 0;
     libxl_console_type type = 0;
 
-    while ((opt = def_getopt(argc, argv, "n:t:", "console", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "n:t:", NULL, "console", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -2510,36 +2525,23 @@ int main_console(int argc, char **argv)
 
 int main_vncviewer(int argc, char **argv)
 {
-    static const struct option long_options[] = {
+    static const struct option opts[] = {
         {"autopass", 0, 0, 'a'},
         {"vncviewer-autopass", 0, 0, 'a'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     uint32_t domid;
     int opt, autopass = 0;
 
-    while (1) {
-        opt = getopt_long(argc, argv, "ah", long_options, NULL);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "ah", opts, "vncviewer", 1)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'a':
             autopass = 1;
             break;
-        case 'h':
-            help("vncviewer");
-            return 0;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
-        }
-    }
-
-    if (argc - optind != 1) {
-        help("vncviewer");
-        return 2;
+        }
     }
 
     domid = find_domain(argv[optind]);
@@ -2572,7 +2574,7 @@ int main_pcilist(int argc, char **argv)
     uint32_t domid;
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "pci-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pci-list", 1)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2611,7 +2613,7 @@ int main_pcidetach(int argc, char **argv
     int force = 0;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "f", "pci-detach", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "f", NULL, "pci-detach", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -2653,7 +2655,7 @@ int main_pciattach(int argc, char **argv
     int opt;
     const char *bdf = NULL, *vs = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "pci-attach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pci-attach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2687,7 +2689,7 @@ int main_pciassignable_list(int argc, ch
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "pci-assignable-list", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-list", 0)) != -1)
         return opt;
 
     pciassignable_list();
@@ -2719,7 +2721,7 @@ int main_pciassignable_add(int argc, cha
     int opt;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "", "pci-assignable-add", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-add", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -2758,7 +2760,7 @@ int main_pciassignable_remove(int argc, 
     const char *bdf = NULL;
     int rebind = 0;
 
-    while ((opt = def_getopt(argc, argv, "r", "pci-assignable-remove", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "r", NULL, "pci-assignable-remove", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3569,24 +3571,18 @@ int main_restore(int argc, char **argv)
     int paused = 0, debug = 0, daemonize = 1, monitor = 1,
         console_autoconnect = 0, vnc = 0, vncautopass = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"vncviewer", 0, 0, 'V'},
         {"vncviewer-autopass", 0, 0, 'A'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "FhcpdeVA", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "FhcpdeVA",
+                             opts, "restore", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
-        case 'h':
-            help("restore");
-            return 2;
         case 'c':
             console_autoconnect = 1;
             break;
@@ -3646,7 +3642,7 @@ int main_migrate_receive(int argc, char 
     int debug = 0, daemonize = 1, monitor = 1, remus = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "Fedr", "migrate-receive", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "Fedr", NULL, "migrate-receive", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3685,7 +3681,7 @@ int main_save(int argc, char **argv)
     int checkpoint = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "c", "save", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "c", NULL, "save", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3718,7 +3714,7 @@ int main_migrate(int argc, char **argv)
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0;
 
-    while ((opt = def_getopt(argc, argv, "FC:s:ed", "migrate", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "FC:s:ed", NULL, "migrate", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3762,7 +3758,7 @@ int main_dump_core(int argc, char **argv
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "dump-core", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "dump-core", 2)) != -1)
         return opt;
 
     core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
@@ -3773,7 +3769,7 @@ int main_pause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "pause", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pause", 1)) != -1)
         return opt;
 
     pause_domain(find_domain(argv[optind]));
@@ -3785,7 +3781,7 @@ int main_unpause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "unpause", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "unpause", 1)) != -1)
         return opt;
 
     unpause_domain(find_domain(argv[optind]));
@@ -3797,7 +3793,7 @@ int main_destroy(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "destroy", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "destroy", 1)) != -1)
         return opt;
 
     destroy_domain(find_domain(argv[optind]));
@@ -3806,19 +3802,21 @@ int main_destroy(int argc, char **argv)
 
 static int main_shutdown_or_reboot(int do_reboot, int argc, char **argv)
 {
+    const char *what = do_reboot ? "reboot" : "shutdown";
     void (*fn)(uint32_t domid,
                libxl_evgen_domain_death **, libxl_ev_user, int) =
         do_reboot ? &reboot_domain : &shutdown_domain;
     int opt, i, nb_domain;
     int wait_for_it = 0, all =0;
     int fallback_trigger = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"all", 0, 0, 'a'},
         {"wait", 0, 0, 'w'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while ((opt = getopt_long(argc, argv, "awF", long_options, NULL)) != -1) {
+    while ((opt = def_getopt(argc, argv, "awF", opts, what, 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3890,12 +3888,11 @@ int main_list(int argc, char **argv)
     int opt, verbose = 0;
     int context = 0;
     int details = 0;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"long", 0, 0, 'l'},
-        {"help", 0, 0, 'h'},
         {"verbose", 0, 0, 'v'},
         {"context", 0, 0, 'Z'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
@@ -3903,12 +3900,10 @@ int main_list(int argc, char **argv)
     libxl_dominfo *info, *info_free=0;
     int nb_domain, rc;
 
-    while (1) {
-        opt = getopt_long(argc, argv, "lvhZ", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "lvhZ", opts, "list", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'l':
             details = 1;
             break;
@@ -3921,9 +3916,6 @@ int main_list(int argc, char **argv)
         case 'Z':
             context = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -3970,7 +3962,7 @@ int main_vm_list(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "vm-list", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vm-list", 0)) != -1)
         return opt;
 
     list_vm();
@@ -3986,14 +3978,13 @@ int main_create(int argc, char **argv)
     int paused = 0, debug = 0, daemonize = 1, console_autoconnect = 0,
         quiet = 0, monitor = 1, vnc = 0, vncautopass = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"dryrun", 0, 0, 'n'},
         {"quiet", 0, 0, 'q'},
-        {"help", 0, 0, 'h'},
         {"defconfig", 1, 0, 'f'},
         {"vncviewer", 0, 0, 'V'},
         {"vncviewer-autopass", 0, 0, 'A'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
@@ -4002,12 +3993,10 @@ int main_create(int argc, char **argv)
         argc--; argv++;
     }
 
-    while (1) {
-        opt = getopt_long(argc, argv, "Fhnqf:pcdeVA", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "Fhnqf:pcdeVA", opts, "create", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'f':
             filename = optarg;
             break;
@@ -4042,9 +4031,6 @@ int main_create(int argc, char **argv)
         case 'A':
             vnc = vncautopass = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -4092,11 +4078,10 @@ int main_config_update(int argc, char **
     int config_len = 0;
     libxl_domain_config d_config;
     int opt, rc;
-    int option_index = 0;
     int debug = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"defconfig", 1, 0, 'f'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
@@ -4114,24 +4099,16 @@ int main_config_update(int argc, char **
         argc--; argv++;
     }
 
-    while (1) {
-        opt = getopt_long(argc, argv, "dhqf:", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "dhqf:", opts, "config_update", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'd':
             debug = 1;
             break;
         case 'f':
             filename = optarg;
             break;
-        case 'h':
-            help("create");
-            return 0;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -4220,7 +4197,7 @@ int main_button_press(int argc, char **a
     fprintf(stderr, "WARNING: \"button-press\" is deprecated. "
             "Please use \"trigger\"\n");
 
-    if ((opt = def_getopt(argc, argv, "", "button-press", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "button-press", 2)) != -1)
         return opt;
 
     button_press(find_domain(argv[optind]), argv[optind + 1]);
@@ -4361,7 +4338,7 @@ int main_vcpulist(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "cpu-list", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpu-list", 0)) != -1)
         return opt;
 
     vcpulist(argc - optind, argv + optind);
@@ -4422,7 +4399,7 @@ int main_vcpupin(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "vcpu-pin", 3)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-pin", 3)) != -1)
         return opt;
 
     vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
@@ -4458,7 +4435,7 @@ int main_vcpuset(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "vcpu-set", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-set", 2)) != -1)
         return opt;
 
     vcpuset(find_domain(argv[optind]), argv[optind+1]);
@@ -4634,25 +4611,20 @@ static void print_info(int numa)
 int main_info(int argc, char **argv)
 {
     int opt;
-    int option_index = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"numa", 0, 0, 'n'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     int numa = 0;
 
-    while ((opt = getopt_long(argc, argv, "hn", long_options, &option_index)) != -1) {
+    while ((opt = def_getopt(argc, argv, "hn", opts, "info", 0)) != -1) {
         switch (opt) {
-        case 'h':
-            help("info");
-            return 0;
+        case 0: case 2:
+            return opt;
         case 'n':
             numa = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -4687,7 +4659,7 @@ int main_sharing(int argc, char **argv)
     libxl_dominfo *info, *info_free = NULL;
     int nb_domain, rc;
 
-    if ((opt = def_getopt(argc, argv, "", "sharing", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "sharing", 0)) != -1)
         return opt;
 
     if (optind >= argc) {
@@ -4956,8 +4928,7 @@ int main_sched_credit(int argc, char **a
     int opt_s = 0;
     int tslice = 0, opt_t = 0, ratelimit = 0, opt_r = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"domain", 1, 0, 'd'},
         {"weight", 1, 0, 'w'},
         {"cap", 1, 0, 'c'},
@@ -4965,15 +4936,11 @@ int main_sched_credit(int argc, char **a
         {"tslice_ms", 1, 0, 't'},
         {"ratelimit_us", 1, 0, 'r'},
         {"cpupool", 1, 0, 'p'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "d:w:c:p:t:r:hs", long_options,
-                          &option_index);
-        if (opt == -1)
-            break;
+    while ((opt = def_getopt(argc, argv, "d:w:c:p:t:r:hs", opts, "sched-credit", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5002,9 +4969,6 @@ int main_sched_credit(int argc, char **a
         case 'p':
             cpupool = optarg;
             break;
-        case 'h':
-            help("sched-credit");
-            return 0;
         }
     }
 
@@ -5087,19 +5051,15 @@ int main_sched_credit2(int argc, char **
     const char *cpupool = NULL;
     int weight = 256, opt_w = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"domain", 1, 0, 'd'},
         {"weight", 1, 0, 'w'},
         {"cpupool", 1, 0, 'p'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "d:w:p:h", long_options, &option_index);
-        if (opt == -1)
-            break;
+    while ((opt = def_getopt(argc, argv, "d:w:p:h", opts, "sched-credit2", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5113,9 +5073,6 @@ int main_sched_credit2(int argc, char **
         case 'p':
             cpupool = optarg;
             break;
-        case 'h':
-            help("sched-credit");
-            return 0;
         }
     }
 
@@ -5166,23 +5123,18 @@ int main_sched_sedf(int argc, char **arg
     int extra = 0, opt_e = 0;
     int weight = 0, opt_w = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"period", 1, 0, 'p'},
         {"slice", 1, 0, 's'},
         {"latency", 1, 0, 'l'},
         {"extra", 1, 0, 'e'},
         {"weight", 1, 0, 'w'},
         {"cpupool", 1, 0, 'c'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "d:p:s:l:e:w:c:h", long_options,
-                          &option_index);
-        if (opt == -1)
-            break;
+    while ((opt = def_getopt(argc, argv, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5212,9 +5164,6 @@ int main_sched_sedf(int argc, char **arg
         case 'c':
             cpupool = optarg;
             break;
-        case 'h':
-            help("sched-sedf");
-            return 0;
         }
     }
 
@@ -5282,7 +5231,7 @@ int main_domid(int argc, char **argv)
     int opt;
     const char *domname = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "domid", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "domid", 1)) != -1)
         return opt;
 
     domname = argv[optind];
@@ -5304,7 +5253,7 @@ int main_domname(int argc, char **argv)
     char *domname = NULL;
     char *endptr = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "domname", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "domname", 1)) != -1)
         return opt;
 
     domid = strtol(argv[optind], &endptr, 10);
@@ -5332,7 +5281,7 @@ int main_rename(int argc, char **argv)
     int opt;
     const char *dom, *new_name;
 
-    if ((opt = def_getopt(argc, argv, "", "rename", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "rename", 2)) != -1)
         return opt;
 
     dom = argv[optind++];
@@ -5356,7 +5305,7 @@ int main_trigger(int argc, char **argv)
     const char *trigger_name = NULL;
     libxl_trigger trigger;
 
-    if ((opt = def_getopt(argc, argv, "", "trigger", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "trigger", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind++]);
@@ -5386,7 +5335,7 @@ int main_sysrq(int argc, char **argv)
     int opt;
     const char *sysrq = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "sysrq", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "sysrq", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind++]);
@@ -5409,7 +5358,7 @@ int main_debug_keys(int argc, char **arg
     int opt;
     char *keys;
 
-    if ((opt = def_getopt(argc, argv, "", "debug-keys", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "debug-keys", 1)) != -1)
         return opt;
 
     keys = argv[optind];
@@ -5429,7 +5378,7 @@ int main_dmesg(int argc, char **argv)
     char *line;
     int opt, ret = 1;
 
-    while ((opt = def_getopt(argc, argv, "c", "dmesg", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "c", NULL, "dmesg", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5455,7 +5404,7 @@ int main_top(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "top", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "top", 0)) != -1)
         return opt;
 
     return system("xentop");
@@ -5472,7 +5421,7 @@ int main_networkattach(int argc, char **
     int i;
     unsigned int val;
 
-    if ((opt = def_getopt(argc, argv, "", "network-attach", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "network-attach", 1)) != -1)
         return opt;
 
     if (argc-optind > 11) {
@@ -5559,7 +5508,7 @@ int main_networklist(int argc, char **ar
     libxl_nicinfo nicinfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", "network-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "network-list", 1)) != -1)
         return opt;
 
     /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
@@ -5596,7 +5545,7 @@ int main_networkdetach(int argc, char **
     int opt;
     libxl_device_nic nic;
 
-    if ((opt = def_getopt(argc, argv, "", "network-detach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "network-detach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -5627,7 +5576,7 @@ int main_blockattach(int argc, char **ar
     libxl_device_disk disk = { 0 };
     XLU_Config *config = 0;
 
-    if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "block-attach", 2)) != -1)
         return opt;
 
     if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
@@ -5662,7 +5611,7 @@ int main_blocklist(int argc, char **argv
     libxl_device_disk *disks;
     libxl_diskinfo diskinfo;
 
-    if ((opt = def_getopt(argc, argv, "", "block-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "block-list", 1)) != -1)
         return opt;
 
     printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
@@ -5698,7 +5647,7 @@ int main_blockdetach(int argc, char **ar
     int opt, rc = 0;
     libxl_device_disk disk;
 
-    if ((opt = def_getopt(argc, argv, "", "block-detach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "block-detach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -5723,7 +5672,7 @@ int main_vtpmattach(int argc, char **arg
     unsigned int val;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-attach", 1)) != -1)
         return opt;
 
     if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
@@ -5776,7 +5725,7 @@ int main_vtpmlist(int argc, char **argv)
     libxl_vtpminfo vtpminfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", "vtpm-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-list", 1)) != -1)
         return opt;
 
     /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
@@ -5816,7 +5765,7 @@ int main_vtpmdetach(int argc, char **arg
     libxl_device_vtpm vtpm;
     libxl_uuid uuid;
 
-    if ((opt = def_getopt(argc, argv, "", "vtpm-detach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-detach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -6008,7 +5957,7 @@ int main_uptime(int argc, char **argv)
     int nb_doms = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "s", "uptime", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "s", NULL, "uptime", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6035,7 +5984,7 @@ int main_tmem_list(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "al", "tmem-list", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "al", NULL, "tmem-list", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6076,7 +6025,7 @@ int main_tmem_freeze(int argc, char **ar
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", "tmem-freeze", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-freeze", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6109,7 +6058,7 @@ int main_tmem_thaw(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", "tmem-thaw", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-thaw", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6144,7 +6093,7 @@ int main_tmem_set(int argc, char **argv)
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "aw:c:p:", "tmem-set", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "aw:c:p:", NULL, "tmem-set", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6205,7 +6154,7 @@ int main_tmem_shared_auth(int argc, char
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "au:A:", "tmem-shared-auth", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "au:A:", NULL, "tmem-shared-auth", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6255,7 +6204,7 @@ int main_tmem_freeable(int argc, char **
     int opt;
     int mb;
 
-    if ((opt = def_getopt(argc, argv, "", "tmem-freeable", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "tmem-freeable", 0)) != -1)
         return opt;
 
     mb = libxl_tmem_freeable(ctx);
@@ -6272,11 +6221,10 @@ int main_cpupoolcreate(int argc, char **
     const char *p;
     char extra_config[1024];
     int opt;
-    int option_index = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"defconfig", 1, 0, 'f'},
         {"dryrun", 0, 0, 'n'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     int ret;
@@ -6294,26 +6242,18 @@ int main_cpupoolcreate(int argc, char **
     libxl_bitmap cpumap;
     libxl_uuid uuid;
     libxl_cputopology *topology;
-    int rc = -ERROR_FAIL; 
-
-    while (1) {
-        opt = getopt_long(argc, argv, "hnf:", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    int rc = -ERROR_FAIL;
+
+    while ((opt = def_getopt(argc, argv, "hnf:", opts, "cpupool-create", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'f':
             filename = optarg;
             break;
-        case 'h':
-            help("cpupool-create");
-            return 0;
         case 'n':
             dryrun_only = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -6476,10 +6416,9 @@ out:
 int main_cpupoollist(int argc, char **argv)
 {
     int opt;
-    int option_index = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"cpus", 0, 0, 'c'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     int opt_cpus = 0;
@@ -6490,28 +6429,16 @@ int main_cpupoollist(int argc, char **ar
     char *name;
     int ret = 0;
 
-    while (1) {
-        opt = getopt_long(argc, argv, "hc", long_options, &option_index);
-        if (opt == -1)
+    while ((opt = def_getopt(argc, argv, "hc", opts, "cpupool-list", 0)) != -1) {
+        switch (opt) {
+        case 0: case 2:
+            return opt;
-            break;
-
-        switch (opt) {
-        case 'h':
-            help("cpupool-list");
-            return 0;
         case 'c':
             opt_cpus = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
-        }
-    }
-
-    if ((optind + 1) < argc) {
-        help("cpupool-list");
-        return -ERROR_FAIL;
-    }
+        }
+    }
+
     if (optind < argc) {
         pool = argv[optind];
         if (libxl_name_to_cpupoolid(ctx, pool, &poolid)) {
@@ -6569,7 +6496,7 @@ int main_cpupooldestroy(int argc, char *
     const char *pool;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-destroy", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-destroy", 1)) != -1)
         return opt;
 
     pool = argv[optind];
@@ -6590,7 +6517,7 @@ int main_cpupoolrename(int argc, char **
     const char *new_name;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-rename", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-rename", 2)) != -1)
         return opt;
 
     pool = argv[optind++];
@@ -6620,7 +6547,7 @@ int main_cpupoolcpuadd(int argc, char **
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-cpu-add", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-add", 2)) != -1)
         return opt;
 
     pool = argv[optind++];
@@ -6664,7 +6591,7 @@ int main_cpupoolcpuremove(int argc, char
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-cpu-remove", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-remove", 2)) != -1)
         return opt;
 
     pool = argv[optind++];
@@ -6707,7 +6634,7 @@ int main_cpupoolmigrate(int argc, char *
     const char *dom;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-migrate", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-migrate", 2)) != -1)
         return opt;
 
     dom = argv[optind++];
@@ -6747,7 +6674,7 @@ int main_cpupoolnumasplit(int argc, char
     libxl_cputopology *topology;
     libxl_dominfo info;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-numa-split", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-numa-split", 0)) != -1)
         return opt;
     ret = 0;
 
@@ -7000,7 +6927,7 @@ int main_remus(int argc, char **argv)
     r_info.blackhole = 0;
     r_info.compression = 1;
 
-    while ((opt = def_getopt(argc, argv, "bui:s:e", "remus", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "bui:s:e", NULL, "remus", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:19:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkyxM-0005a0-Mx; Tue, 18 Dec 2012 15:19:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkyxK-0005ZW-RB
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:19:43 +0000
Received: from [85.158.143.35:11918] by server-2.bemta-4.messagelabs.com id
	37/FF-30861-E8980D05; Tue, 18 Dec 2012 15:19:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355843928!16020433!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30357 invoked from network); 18 Dec 2012 15:18:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:18:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1118439"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 15:18:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 10:18:47 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TkywQ-0007qV-S0;
	Tue, 18 Dec 2012 15:18:46 +0000
MIME-Version: 1.0
X-Mercurial-Node: a0d112303c6b0ee71d96bccfd1cb1a0786d6aadb
Message-ID: <a0d112303c6b0ee71d96.1355843927@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1355843926@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 18 Dec 2012 15:18:47 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH 1 of 3 V2] xl: allow def_getopt to handle long
	options
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ijc@hellion.org.uk>
# Date 1355842207 0
# Node ID a0d112303c6b0ee71d96bccfd1cb1a0786d6aadb
# Parent  22dfde8230f74b7868ddd42fee8ca29babdc21c5
xl: allow def_getopt to handle long options

Improves consistency of option parsing and error handling.

Consistently support --help for all commands.

Many callers of getopt_long were passing an option_index pointer
that was never used.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
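
For readers following the conversion, the wrapper plus a converted caller
can be sketched standalone as below. This is an illustration, not part of
the patch: help() is reduced to a printf, the required-argument (reqargs)
check is omitted, and parse_demo is a hypothetical caller written in the
style the patch converts every subcommand to.

```c
#include <getopt.h>
#include <stdio.h>

/* Simplified sketch of the wrapper this patch introduces; the names
 * COMMON_LONG_OPTS and def_getopt mirror the patch. */
#define COMMON_LONG_OPTS {"help", 0, 0, 'h'}

static int def_getopt(int argc, char *const argv[], const char *optstring,
                      const struct option *longopts, const char *helpstr)
{
    int opt;
    static const struct option def_opts[] = { COMMON_LONG_OPTS, {0, 0, 0, 0} };

    if (!longopts)              /* commands with no long options pass NULL */
        longopts = def_opts;

    opterr = 0;                 /* suppress getopt's own diagnostics */
    while ((opt = getopt_long(argc, argv, optstring, longopts, NULL)) == '?') {
        if (optopt == 'h') {    /* -h when 'h' is absent from optstring */
            printf("help for %s\n", helpstr);
            return 0;
        }
        fprintf(stderr, "option `%c' not supported.\n", optopt);
    }
    if (opt == 'h') {           /* -h/--help when present in the tables */
        printf("help for %s\n", helpstr);
        return 0;
    }
    return opt;                 /* -1 once all options are consumed */
}

/* Hypothetical caller in the converted style: parses the "demo" options
 * and returns the verbose flag (0 if --help short-circuited parsing). */
static int parse_demo(int argc, char **argv)
{
    static const struct option opts[] = {
        {"verbose", 0, 0, 'v'},
        COMMON_LONG_OPTS,
        {0, 0, 0, 0}
    };
    int opt, verbose = 0;

    while ((opt = def_getopt(argc, argv, "v", opts, "demo")) != -1) {
        switch (opt) {
        case 0: case 2:         /* help shown or (in xl) too few arguments */
            return 0;
        case 'v':
            verbose = 1;
            break;
        }
    }
    return verbose;
}
```

Both spellings of help are caught: --help (and -h when 'h' is in
optstring) comes back as 'h' from getopt_long, while a bare -h with 'h'
absent from optstring comes back as '?' with optopt == 'h'.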

diff -r 22dfde8230f7 -r a0d112303c6b tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Dec 18 14:49:51 2012 +0000
+++ b/tools/libxl/xl_cmdimpl.c	Tue Dec 18 14:50:07 2012 +0000
@@ -2324,19 +2324,34 @@ static int64_t parse_mem_size_kb(const c
     return kbytes;
 }
 
-static int def_getopt(int argc, char * const argv[], const char *optstring,
+#define COMMON_LONG_OPTS {"help", 0, 0, 'h'}
+
+static int def_getopt(int argc, char * const argv[],
+                      const char *optstring,
+                      const struct option *longopts,
                       const char* helpstr, int reqargs)
 {
     int opt;
+    const struct option def_options[] = {
+        COMMON_LONG_OPTS,
+        {0, 0, 0, 0}
+    };
+
+    if (!longopts)
+        longopts = def_options;
 
     opterr = 0;
-    while ((opt = getopt(argc, argv, optstring)) == '?') {
+    while ((opt = getopt_long(argc, argv, optstring, longopts, NULL)) == '?') {
         if (optopt == 'h') {
             help(helpstr);
             return 0;
         }
         fprintf(stderr, "option `%c' not supported.\n", optopt);
     }
+    if (opt == 'h') {
+        help(helpstr);
+        return 0;
+    }
     if (opt != -1)
         return opt;
 
@@ -2372,7 +2387,7 @@ int main_memmax(int argc, char **argv)
     char *mem;
     int rc;
 
-    if ((opt = def_getopt(argc, argv, "", "mem-max", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "mem-max", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2406,7 +2421,7 @@ int main_memset(int argc, char **argv)
     int opt = 0;
     const char *mem;
 
-    if ((opt = def_getopt(argc, argv, "", "mem-set", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "mem-set", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2445,7 +2460,7 @@ int main_cd_eject(int argc, char **argv)
     int opt = 0;
     const char *virtdev;
 
-    if ((opt = def_getopt(argc, argv, "", "cd-eject", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cd-eject", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2462,7 +2477,7 @@ int main_cd_insert(int argc, char **argv
     const char *virtdev;
     char *file = NULL; /* modified by cd_insert tokenising it */
 
-    if ((opt = def_getopt(argc, argv, "", "cd-insert", 3)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cd-insert", 3)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2479,7 +2494,7 @@ int main_console(int argc, char **argv)
     int opt = 0, num = 0;
     libxl_console_type type = 0;
 
-    while ((opt = def_getopt(argc, argv, "n:t:", "console", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "n:t:", NULL, "console", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -2510,36 +2525,23 @@ int main_console(int argc, char **argv)
 
 int main_vncviewer(int argc, char **argv)
 {
-    static const struct option long_options[] = {
+    static const struct option opts[] = {
         {"autopass", 0, 0, 'a'},
         {"vncviewer-autopass", 0, 0, 'a'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     uint32_t domid;
     int opt, autopass = 0;
 
-    while (1) {
-        opt = getopt_long(argc, argv, "ah", long_options, NULL);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "ah", opts, "vncviewer", 1)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'a':
             autopass = 1;
             break;
-        case 'h':
-            help("vncviewer");
-            return 0;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
-        }
-    }
-
-    if (argc - optind != 1) {
-        help("vncviewer");
-        return 2;
+        }
     }
 
     domid = find_domain(argv[optind]);
@@ -2572,7 +2574,7 @@ int main_pcilist(int argc, char **argv)
     uint32_t domid;
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "pci-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pci-list", 1)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2611,7 +2613,7 @@ int main_pcidetach(int argc, char **argv
     int force = 0;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "f", "pci-detach", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "f", NULL, "pci-detach", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -2653,7 +2655,7 @@ int main_pciattach(int argc, char **argv
     int opt;
     const char *bdf = NULL, *vs = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "pci-attach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pci-attach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -2687,7 +2689,7 @@ int main_pciassignable_list(int argc, ch
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "pci-assignable-list", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-list", 0)) != -1)
         return opt;
 
     pciassignable_list();
@@ -2719,7 +2721,7 @@ int main_pciassignable_add(int argc, cha
     int opt;
     const char *bdf = NULL;
 
-    while ((opt = def_getopt(argc, argv, "", "pci-assignable-add", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "", NULL, "pci-assignable-add", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -2758,7 +2760,7 @@ int main_pciassignable_remove(int argc, 
     const char *bdf = NULL;
     int rebind = 0;
 
-    while ((opt = def_getopt(argc, argv, "r", "pci-assignable-remove", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "r", NULL, "pci-assignable-remove", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3569,24 +3571,18 @@ int main_restore(int argc, char **argv)
     int paused = 0, debug = 0, daemonize = 1, monitor = 1,
         console_autoconnect = 0, vnc = 0, vncautopass = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"vncviewer", 0, 0, 'V'},
         {"vncviewer-autopass", 0, 0, 'A'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "FhcpdeVA", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "FhcpdeVA",
+                             opts, "restore", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
-        case 'h':
-            help("restore");
-            return 2;
         case 'c':
             console_autoconnect = 1;
             break;
@@ -3646,7 +3642,7 @@ int main_migrate_receive(int argc, char 
     int debug = 0, daemonize = 1, monitor = 1, remus = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "Fedr", "migrate-receive", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "Fedr", NULL, "migrate-receive", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3685,7 +3681,7 @@ int main_save(int argc, char **argv)
     int checkpoint = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "c", "save", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "c", NULL, "save", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3718,7 +3714,7 @@ int main_migrate(int argc, char **argv)
     char *host;
     int opt, daemonize = 1, monitor = 1, debug = 0;
 
-    while ((opt = def_getopt(argc, argv, "FC:s:ed", "migrate", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "FC:s:ed", NULL, "migrate", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3762,7 +3758,7 @@ int main_dump_core(int argc, char **argv
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "dump-core", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "dump-core", 2)) != -1)
         return opt;
 
     core_dump_domain(find_domain(argv[optind]), argv[optind + 1]);
@@ -3773,7 +3769,7 @@ int main_pause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "pause", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "pause", 1)) != -1)
         return opt;
 
     pause_domain(find_domain(argv[optind]));
@@ -3785,7 +3781,7 @@ int main_unpause(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "unpause", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "unpause", 1)) != -1)
         return opt;
 
     unpause_domain(find_domain(argv[optind]));
@@ -3797,7 +3793,7 @@ int main_destroy(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "destroy", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "destroy", 1)) != -1)
         return opt;
 
     destroy_domain(find_domain(argv[optind]));
@@ -3806,19 +3802,21 @@ int main_destroy(int argc, char **argv)
 
 static int main_shutdown_or_reboot(int do_reboot, int argc, char **argv)
 {
+    const char *what = do_reboot ? "reboot" : "shutdown";
     void (*fn)(uint32_t domid,
                libxl_evgen_domain_death **, libxl_ev_user, int) =
         do_reboot ? &reboot_domain : &shutdown_domain;
     int opt, i, nb_domain;
     int wait_for_it = 0, all =0;
     int fallback_trigger = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"all", 0, 0, 'a'},
         {"wait", 0, 0, 'w'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while ((opt = getopt_long(argc, argv, "awF", long_options, NULL)) != -1) {
+    while ((opt = def_getopt(argc, argv, "awF", opts, what, 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -3890,12 +3888,11 @@ int main_list(int argc, char **argv)
     int opt, verbose = 0;
     int context = 0;
     int details = 0;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"long", 0, 0, 'l'},
-        {"help", 0, 0, 'h'},
         {"verbose", 0, 0, 'v'},
         {"context", 0, 0, 'Z'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
@@ -3903,12 +3900,10 @@ int main_list(int argc, char **argv)
     libxl_dominfo *info, *info_free=0;
     int nb_domain, rc;
 
-    while (1) {
-        opt = getopt_long(argc, argv, "lvhZ", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "lvhZ", opts, "list", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'l':
             details = 1;
             break;
@@ -3921,9 +3916,6 @@ int main_list(int argc, char **argv)
         case 'Z':
             context = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -3970,7 +3962,7 @@ int main_vm_list(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "vm-list", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vm-list", 0)) != -1)
         return opt;
 
     list_vm();
@@ -3986,14 +3978,13 @@ int main_create(int argc, char **argv)
     int paused = 0, debug = 0, daemonize = 1, console_autoconnect = 0,
         quiet = 0, monitor = 1, vnc = 0, vncautopass = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"dryrun", 0, 0, 'n'},
         {"quiet", 0, 0, 'q'},
-        {"help", 0, 0, 'h'},
         {"defconfig", 1, 0, 'f'},
         {"vncviewer", 0, 0, 'V'},
         {"vncviewer-autopass", 0, 0, 'A'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
@@ -4002,12 +3993,10 @@ int main_create(int argc, char **argv)
         argc--; argv++;
     }
 
-    while (1) {
-        opt = getopt_long(argc, argv, "Fhnqf:pcdeVA", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "Fhnqf:pcdeVA", opts, "create", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'f':
             filename = optarg;
             break;
@@ -4042,9 +4031,6 @@ int main_create(int argc, char **argv)
         case 'A':
             vnc = vncautopass = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -4092,11 +4078,10 @@ int main_config_update(int argc, char **
     int config_len = 0;
     libxl_domain_config d_config;
     int opt, rc;
-    int option_index = 0;
     int debug = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"defconfig", 1, 0, 'f'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
@@ -4114,24 +4099,16 @@ int main_config_update(int argc, char **
         argc--; argv++;
     }
 
-    while (1) {
-        opt = getopt_long(argc, argv, "dhqf:", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "dhqf:", opts, "config-update", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'd':
             debug = 1;
             break;
         case 'f':
             filename = optarg;
             break;
-        case 'h':
-            help("create");
-            return 0;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -4220,7 +4197,7 @@ int main_button_press(int argc, char **a
     fprintf(stderr, "WARNING: \"button-press\" is deprecated. "
             "Please use \"trigger\"\n");
 
-    if ((opt = def_getopt(argc, argv, "", "button-press", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "button-press", 2)) != -1)
         return opt;
 
     button_press(find_domain(argv[optind]), argv[optind + 1]);
@@ -4361,7 +4338,7 @@ int main_vcpulist(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "cpu-list", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpu-list", 0)) != -1)
         return opt;
 
     vcpulist(argc - optind, argv + optind);
@@ -4422,7 +4399,7 @@ int main_vcpupin(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "vcpu-pin", 3)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-pin", 3)) != -1)
         return opt;
 
     vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
@@ -4458,7 +4435,7 @@ int main_vcpuset(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "vcpu-set", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vcpu-set", 2)) != -1)
         return opt;
 
     vcpuset(find_domain(argv[optind]), argv[optind+1]);
@@ -4634,25 +4611,20 @@ static void print_info(int numa)
 int main_info(int argc, char **argv)
 {
     int opt;
-    int option_index = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"numa", 0, 0, 'n'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     int numa = 0;
 
-    while ((opt = getopt_long(argc, argv, "hn", long_options, &option_index)) != -1) {
+    while ((opt = def_getopt(argc, argv, "hn", opts, "info", 0)) != -1) {
         switch (opt) {
-        case 'h':
-            help("info");
-            return 0;
+        case 0: case 2:
+            return opt;
         case 'n':
             numa = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -4687,7 +4659,7 @@ int main_sharing(int argc, char **argv)
     libxl_dominfo *info, *info_free = NULL;
     int nb_domain, rc;
 
-    if ((opt = def_getopt(argc, argv, "", "sharing", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "sharing", 0)) != -1)
         return opt;
 
     if (optind >= argc) {
@@ -4956,8 +4928,7 @@ int main_sched_credit(int argc, char **a
     int opt_s = 0;
     int tslice = 0, opt_t = 0, ratelimit = 0, opt_r = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"domain", 1, 0, 'd'},
         {"weight", 1, 0, 'w'},
         {"cap", 1, 0, 'c'},
@@ -4965,15 +4936,11 @@ int main_sched_credit(int argc, char **a
         {"tslice_ms", 1, 0, 't'},
         {"ratelimit_us", 1, 0, 'r'},
         {"cpupool", 1, 0, 'p'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "d:w:c:p:t:r:hs", long_options,
-                          &option_index);
-        if (opt == -1)
-            break;
+    while ((opt = def_getopt(argc, argv, "d:w:c:p:t:r:hs", opts, "sched-credit", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5002,9 +4969,6 @@ int main_sched_credit(int argc, char **a
         case 'p':
             cpupool = optarg;
             break;
-        case 'h':
-            help("sched-credit");
-            return 0;
         }
     }
 
@@ -5087,19 +5051,15 @@ int main_sched_credit2(int argc, char **
     const char *cpupool = NULL;
     int weight = 256, opt_w = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"domain", 1, 0, 'd'},
         {"weight", 1, 0, 'w'},
         {"cpupool", 1, 0, 'p'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "d:w:p:h", long_options, &option_index);
-        if (opt == -1)
-            break;
+    while ((opt = def_getopt(argc, argv, "d:w:p:h", opts, "sched-credit2", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5113,9 +5073,6 @@ int main_sched_credit2(int argc, char **
         case 'p':
             cpupool = optarg;
             break;
-        case 'h':
-            help("sched-credit");
-            return 0;
         }
     }
 
@@ -5166,23 +5123,18 @@ int main_sched_sedf(int argc, char **arg
     int extra = 0, opt_e = 0;
     int weight = 0, opt_w = 0;
     int opt, rc;
-    int option_index = 0;
-    static struct option long_options[] = {
+    static struct option opts[] = {
         {"period", 1, 0, 'p'},
         {"slice", 1, 0, 's'},
         {"latency", 1, 0, 'l'},
         {"extra", 1, 0, 'e'},
         {"weight", 1, 0, 'w'},
         {"cpupool", 1, 0, 'c'},
-        {"help", 0, 0, 'h'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    while (1) {
-        opt = getopt_long(argc, argv, "d:p:s:l:e:w:c:h", long_options,
-                          &option_index);
-        if (opt == -1)
-            break;
+    while ((opt = def_getopt(argc, argv, "d:p:s:l:e:w:c:h", opts, "sched-sedf", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5212,9 +5164,6 @@ int main_sched_sedf(int argc, char **arg
         case 'c':
             cpupool = optarg;
             break;
-        case 'h':
-            help("sched-sedf");
-            return 0;
         }
     }
 
@@ -5282,7 +5231,7 @@ int main_domid(int argc, char **argv)
     int opt;
     const char *domname = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "domid", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "domid", 1)) != -1)
         return opt;
 
     domname = argv[optind];
@@ -5304,7 +5253,7 @@ int main_domname(int argc, char **argv)
     char *domname = NULL;
     char *endptr = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "domname", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "domname", 1)) != -1)
         return opt;
 
     domid = strtol(argv[optind], &endptr, 10);
@@ -5332,7 +5281,7 @@ int main_rename(int argc, char **argv)
     int opt;
     const char *dom, *new_name;
 
-    if ((opt = def_getopt(argc, argv, "", "rename", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "rename", 2)) != -1)
         return opt;
 
     dom = argv[optind++];
@@ -5356,7 +5305,7 @@ int main_trigger(int argc, char **argv)
     const char *trigger_name = NULL;
     libxl_trigger trigger;
 
-    if ((opt = def_getopt(argc, argv, "", "trigger", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "trigger", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind++]);
@@ -5386,7 +5335,7 @@ int main_sysrq(int argc, char **argv)
     int opt;
     const char *sysrq = NULL;
 
-    if ((opt = def_getopt(argc, argv, "", "sysrq", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "sysrq", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind++]);
@@ -5409,7 +5358,7 @@ int main_debug_keys(int argc, char **arg
     int opt;
     char *keys;
 
-    if ((opt = def_getopt(argc, argv, "", "debug-keys", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "debug-keys", 1)) != -1)
         return opt;
 
     keys = argv[optind];
@@ -5429,7 +5378,7 @@ int main_dmesg(int argc, char **argv)
     char *line;
     int opt, ret = 1;
 
-    while ((opt = def_getopt(argc, argv, "c", "dmesg", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "c", NULL, "dmesg", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -5455,7 +5404,7 @@ int main_top(int argc, char **argv)
 {
     int opt;
 
-    if ((opt = def_getopt(argc, argv, "", "top", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "top", 0)) != -1)
         return opt;
 
     return system("xentop");
@@ -5472,7 +5421,7 @@ int main_networkattach(int argc, char **
     int i;
     unsigned int val;
 
-    if ((opt = def_getopt(argc, argv, "", "network-attach", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "network-attach", 1)) != -1)
         return opt;
 
     if (argc-optind > 11) {
@@ -5559,7 +5508,7 @@ int main_networklist(int argc, char **ar
     libxl_nicinfo nicinfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", "network-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "network-list", 1)) != -1)
         return opt;
 
     /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
@@ -5596,7 +5545,7 @@ int main_networkdetach(int argc, char **
     int opt;
     libxl_device_nic nic;
 
-    if ((opt = def_getopt(argc, argv, "", "network-detach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "network-detach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -5627,7 +5576,7 @@ int main_blockattach(int argc, char **ar
     libxl_device_disk disk = { 0 };
     XLU_Config *config = 0;
 
-    if ((opt = def_getopt(argc, argv, "", "block-attach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "block-attach", 2)) != -1)
         return opt;
 
     if (domain_qualifier_to_domid(argv[optind], &fe_domid, 0) < 0) {
@@ -5662,7 +5611,7 @@ int main_blocklist(int argc, char **argv
     libxl_device_disk *disks;
     libxl_diskinfo diskinfo;
 
-    if ((opt = def_getopt(argc, argv, "", "block-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "block-list", 1)) != -1)
         return opt;
 
     printf("%-5s %-3s %-6s %-5s %-6s %-8s %-30s\n",
@@ -5698,7 +5647,7 @@ int main_blockdetach(int argc, char **ar
     int opt, rc = 0;
     libxl_device_disk disk;
 
-    if ((opt = def_getopt(argc, argv, "", "block-detach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "block-detach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -5723,7 +5672,7 @@ int main_vtpmattach(int argc, char **arg
     unsigned int val;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", "vtpm-attach", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-attach", 1)) != -1)
         return opt;
 
     if (domain_qualifier_to_domid(argv[optind], &domid, 0) < 0) {
@@ -5776,7 +5725,7 @@ int main_vtpmlist(int argc, char **argv)
     libxl_vtpminfo vtpminfo;
     int nb, i;
 
-    if ((opt = def_getopt(argc, argv, "", "vtpm-list", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-list", 1)) != -1)
         return opt;
 
     /*      Idx  BE   UUID   Hdl  Sta  evch rref  BE-path */
@@ -5816,7 +5765,7 @@ int main_vtpmdetach(int argc, char **arg
     libxl_device_vtpm vtpm;
     libxl_uuid uuid;
 
-    if ((opt = def_getopt(argc, argv, "", "vtpm-detach", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "vtpm-detach", 2)) != -1)
         return opt;
 
     domid = find_domain(argv[optind]);
@@ -6008,7 +5957,7 @@ int main_uptime(int argc, char **argv)
     int nb_doms = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "s", "uptime", 1)) != -1) {
+    while ((opt = def_getopt(argc, argv, "s", NULL, "uptime", 1)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6035,7 +5984,7 @@ int main_tmem_list(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "al", "tmem-list", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "al", NULL, "tmem-list", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6076,7 +6025,7 @@ int main_tmem_freeze(int argc, char **ar
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", "tmem-freeze", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-freeze", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6109,7 +6058,7 @@ int main_tmem_thaw(int argc, char **argv
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "a", "tmem-thaw", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "a", NULL, "tmem-thaw", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6144,7 +6093,7 @@ int main_tmem_set(int argc, char **argv)
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "aw:c:p:", "tmem-set", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "aw:c:p:", NULL, "tmem-set", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6205,7 +6154,7 @@ int main_tmem_shared_auth(int argc, char
     int all = 0;
     int opt;
 
-    while ((opt = def_getopt(argc, argv, "au:A:", "tmem-shared-auth", 0)) != -1) {
+    while ((opt = def_getopt(argc, argv, "au:A:", NULL, "tmem-shared-auth", 0)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;
@@ -6255,7 +6204,7 @@ int main_tmem_freeable(int argc, char **
     int opt;
     int mb;
 
-    if ((opt = def_getopt(argc, argv, "", "tmem-freeable", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "tmem-freeable", 0)) != -1)
         return opt;
 
     mb = libxl_tmem_freeable(ctx);
@@ -6272,11 +6221,10 @@ int main_cpupoolcreate(int argc, char **
     const char *p;
     char extra_config[1024];
     int opt;
-    int option_index = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"defconfig", 1, 0, 'f'},
         {"dryrun", 0, 0, 'n'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     int ret;
@@ -6294,26 +6242,18 @@ int main_cpupoolcreate(int argc, char **
     libxl_bitmap cpumap;
     libxl_uuid uuid;
     libxl_cputopology *topology;
-    int rc = -ERROR_FAIL; 
-
-    while (1) {
-        opt = getopt_long(argc, argv, "hnf:", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    int rc = -ERROR_FAIL;
+
+    while ((opt = def_getopt(argc, argv, "hnf:", opts, "cpupool-create", 0)) != -1) {
         switch (opt) {
+        case 0: case 2:
+            return opt;
         case 'f':
             filename = optarg;
             break;
-        case 'h':
-            help("cpupool-create");
-            return 0;
         case 'n':
             dryrun_only = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
@@ -6476,10 +6416,9 @@ out:
 int main_cpupoollist(int argc, char **argv)
 {
     int opt;
-    int option_index = 0;
-    static struct option long_options[] = {
-        {"help", 0, 0, 'h'},
+    static struct option opts[] = {
         {"cpus", 0, 0, 'c'},
+        COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
     int opt_cpus = 0;
@@ -6490,28 +6429,16 @@ int main_cpupoollist(int argc, char **ar
     char *name;
     int ret = 0;
 
-    while (1) {
-        opt = getopt_long(argc, argv, "hc", long_options, &option_index);
-        if (opt == -1)
-            break;
-
+    while ((opt = def_getopt(argc, argv, "hc", opts, "cpupool-list", 0)) != -1) {
         switch (opt) {
-        case 'h':
-            help("cpupool-list");
-            return 0;
+        case 0: case 2:
+            return opt;
         case 'c':
             opt_cpus = 1;
             break;
-        default:
-            fprintf(stderr, "option `%c' not supported.\n", optopt);
-            break;
         }
     }
 
-    if ((optind + 1) < argc) {
-        help("cpupool-list");
-        return -ERROR_FAIL;
-    }
     if (optind < argc) {
         pool = argv[optind];
         if (libxl_name_to_cpupoolid(ctx, pool, &poolid)) {
@@ -6569,7 +6496,7 @@ int main_cpupooldestroy(int argc, char *
     const char *pool;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-destroy", 1)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-destroy", 1)) != -1)
         return opt;
 
     pool = argv[optind];
@@ -6590,7 +6517,7 @@ int main_cpupoolrename(int argc, char **
     const char *new_name;
     uint32_t poolid;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-rename", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-rename", 2)) != -1)
         return opt;
 
     pool = argv[optind++];
@@ -6620,7 +6547,7 @@ int main_cpupoolcpuadd(int argc, char **
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-cpu-add", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-add", 2)) != -1)
         return opt;
 
     pool = argv[optind++];
@@ -6664,7 +6591,7 @@ int main_cpupoolcpuremove(int argc, char
     int node;
     int n;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-cpu-remove", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-cpu-remove", 2)) != -1)
         return opt;
 
     pool = argv[optind++];
@@ -6707,7 +6634,7 @@ int main_cpupoolmigrate(int argc, char *
     const char *dom;
     uint32_t domid;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-migrate", 2)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-migrate", 2)) != -1)
         return opt;
 
     dom = argv[optind++];
@@ -6747,7 +6674,7 @@ int main_cpupoolnumasplit(int argc, char
     libxl_cputopology *topology;
     libxl_dominfo info;
 
-    if ((opt = def_getopt(argc, argv, "", "cpupool-numa-split", 0)) != -1)
+    if ((opt = def_getopt(argc, argv, "", NULL, "cpupool-numa-split", 0)) != -1)
         return opt;
     ret = 0;
 
@@ -7000,7 +6927,7 @@ int main_remus(int argc, char **argv)
     r_info.blackhole = 0;
     r_info.compression = 1;
 
-    while ((opt = def_getopt(argc, argv, "bui:s:e", "remus", 2)) != -1) {
+    while ((opt = def_getopt(argc, argv, "bui:s:e", NULL, "remus", 2)) != -1) {
         switch (opt) {
         case 0: case 2:
             return opt;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:23:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkz13-0006Gu-Qn; Tue, 18 Dec 2012 15:23:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1Tkz12-0006Gi-9d
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:23:32 +0000
Received: from [193.109.254.147:17522] by server-12.bemta-14.messagelabs.com
	id 18/77-06523-37A80D05; Tue, 18 Dec 2012 15:23:31 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355844208!3355636!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30395 invoked from network); 18 Dec 2012 15:23:30 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 15:23:30 -0000
Received: from aplexcas2.dom1.jhuapl.edu (aplexcas2.dom1.jhuapl.edu
	[128.244.198.91]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 6d01_d435_e2098d95_81e9_4551_9ebd_9b1056cc7d70;
	Tue, 18 Dec 2012 10:23:20 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Tue, 18 Dec 2012
	10:23:06 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 18 Dec 2012 10:23:05 -0500
Thread-Topic: [Xen-devel] [PATCH special] vtpm fix cmake dependency
Thread-Index: Ac3dKdeDCw6almVSQUWFY2TIyN8p3AACa6yA
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48CB62D12A@aplesstripe.dom1.jhuapl.edu>
References: <50C9F2CA.1010602@jhuapl.edu>
	<1355830264.14620.196.camel@zakaz.uk.xensource.com>
	<1355831212.14620.211.camel@zakaz.uk.xensource.com>
	<1355839988.14620.225.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355839988.14620.225.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

What OS are you using? I've noticed problems with gmp on Ubuntu.

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
Sent: Tuesday, December 18, 2012 9:13 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency

On Tue, 2012-12-18 at 11:46 +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 11:31 +0000, Ian Campbell wrote:
> > On Thu, 2012-12-13 at 15:22 +0000, Matthew Fioravante wrote:
> > > Ian, this one is special just for you. I'm sending it as an
> > > attachment because my email client will mangle it.
> > > This patch will remove the cmake dependency from xen prior to
> > > autoconf stubdom
> >
> > Thanks, I merged this as described and also folded "Disable
> > caml-stubdom by default" into the patch which added it enabled.
> >
> > However this still fails for me when vtpm is not enabled:
> >         make[1]: *** No rule to make target `mini-os-x86_64-vtpm', needed by `vtpm-stubdom'.  Stop.
> >         make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
> >         make: *** [install-stubdom] Error 2
> >
> > Something to do with vtpmmgr not being conditional?
>
> Looks like a simple thinko. I'll merge the following into "stubdom:
> Add autoconf", hopefully my testing won't find any other issues.

I was just about to push this out when I thought "hrm, maybe I should check this with cmake installed". I'm afraid it is broken. Spew is below. Looks like it's not finding the gmp headers -- I can see them in stubdom/gmp-x86_64/gmp.h but I can't see anything which looks like it adds the necessary -I to CFLAGS or anywhere.

Branch is at:
        git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm3

Sorry if I've broken something somewhere along the way. Please can you test fully both with and without cmake and resubmit. For reference the script I run before committing is:
        #!/bin/bash
        set -ex

        export PATH=/usr/lib/ccache:$PATH

        (
            make distclean -j12 -s
            git clean -f -dx
            ./configure
            make dist -j12 -s
            find dist | sort > ../FILE_LIST
        ) 2>&1 | tee ../COMMITTER.LOG

This makes pretty sure it is doing a fresh build with no left over cruft installed.

Ian.

cc -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -fno-stack-protector -fno-exceptions -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/posix -isystem /local/scratch/ianc/devel/committer.git/stubdom/../tools/xenstore  -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/x86 -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/posix -isystem /local/scratch/ianc/devel/committer.git/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem /usr/lib/gcc/x86_64-linux-gnu/4.4.5/include -isystem /local/scratch/ianc/devel/committer.git/stubdom/lwip-x86_64/src/include -isystem /local/scratch/ianc/devel/committer.git/stubdom/lwip-x86_64/src/include/ipv4 -I/local/scratch/ianc/devel/committer.git/stubdom/include -I/local/scratch/ianc/devel/committer.git/stubdom/../xen/include -I../tpm_emulator-x86_64/build -I../tpm_emulator-x86_64/tpm -I../tpm_emulator-x86_64/crypto -I../tpm_emulator-x86_64  -c -o vtpm.o vtpm.c
In file included from ../tpm_emulator-x86_64/crypto/rsa.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_structures.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/crypto/bn.h:27:17: error: gmp.h: No such file or directory
In file included from ../tpm_emulator-x86_64/crypto/rsa.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_structures.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/crypto/bn.h:28: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘tpm_bn_t’
../tpm_emulator-x86_64/crypto/bn.h:31: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:33: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:35: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:37: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:39: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:41: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:43: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:45: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:47: error: expected ‘)’ before ‘out’
../tpm_emulator-x86_64/crypto/bn.h:49: error: expected declaration specifiers or ‘...’ before ‘tpm_bn_t’
../tpm_emulator-x86_64/crypto/bn.h:51: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:53: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:55: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:57: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:59: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:61: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:63: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:65: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:67: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:69: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:71: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:73: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:75: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:77: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:79: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:81: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:83: error: expected ‘)’ before ‘res’
In file included from ../tpm_emulator-x86_64/tpm/tpm_structures.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/crypto/rsa.h:25: error: expected specifier-qualifier-list before ‘tpm_bn_t’
../tpm_emulator-x86_64/crypto/rsa.h:35: error: expected specifier-qualifier-list before ‘tpm_bn_t’
In file included from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/tpm/tpm_structures.h: In function ‘free_TPM_PERMANENT_DATA’:
../tpm_emulator-x86_64/tpm/tpm_structures.h:2244: error: ‘tpm_rsa_private_key_t’ has no member named ‘size’
make[2]: *** [vtpm.o] Error 1
make[2]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom/vtpm'
make[1]: *
KiogW3Z0cG1dIEVycm9yIDINCm1ha2VbMV06IExlYXZpbmcgZGlyZWN0b3J5IGAvbG9jYWwvc2Ny
YXRjaC9pYW5jL2RldmVsL2NvbW1pdHRlci5naXQvc3R1YmRvbScNCm1ha2U6ICoqKiBbaW5zdGFs
bC1zdHViZG9tXSBFcnJvciAyDQoNCg0KX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVu
Lm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:23:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkz13-0006Gu-Qn; Tue, 18 Dec 2012 15:23:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>) id 1Tkz12-0006Gi-9d
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:23:32 +0000
Received: from [193.109.254.147:17522] by server-12.bemta-14.messagelabs.com
	id 18/77-06523-37A80D05; Tue, 18 Dec 2012 15:23:31 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-6.tower-27.messagelabs.com!1355844208!3355636!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30395 invoked from network); 18 Dec 2012 15:23:30 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 15:23:30 -0000
Received: from aplexcas2.dom1.jhuapl.edu (aplexcas2.dom1.jhuapl.edu
	[128.244.198.91]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 6d01_d435_e2098d95_81e9_4551_9ebd_9b1056cc7d70;
	Tue, 18 Dec 2012 10:23:20 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Tue, 18 Dec 2012
	10:23:06 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 18 Dec 2012 10:23:05 -0500
Thread-Topic: [Xen-devel] [PATCH special] vtpm fix cmake dependency
Thread-Index: Ac3dKdeDCw6almVSQUWFY2TIyN8p3AACa6yA
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48CB62D12A@aplesstripe.dom1.jhuapl.edu>
References: <50C9F2CA.1010602@jhuapl.edu>
	<1355830264.14620.196.camel@zakaz.uk.xensource.com>
	<1355831212.14620.211.camel@zakaz.uk.xensource.com>
	<1355839988.14620.225.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355839988.14620.225.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

What os are you using? I've noticed problems with gmp on Ubuntu.

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com] 
Sent: Tuesday, December 18, 2012 9:13 AM
To: Fioravante, Matthew E.
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency

On Tue, 2012-12-18 at 11:46 +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 11:31 +0000, Ian Campbell wrote:
> > On Thu, 2012-12-13 at 15:22 +0000, Matthew Fioravante wrote:
> > > Ian, this one is special just for you. I'm sending it as an 
> > > attachment because my email client will mangle it.
> > > This patch will remove the cmake dependency from xen prior to 
> > > autoconf stubdom
> > 
> > Thanks, I merged this as described and also folded "Disable 
> > caml-stubdom by default" into the patch which added it enabled.
> > 
> > However this still fails for me when vtpm is not enabled:
> >         make[1]: *** No rule to make target `mini-os-x86_64-vtpm', needed by `vtpm-stubdom'.  Stop.
> >         make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
> >         make: *** [install-stubdom] Error 2
> > 
> > Something to with vtpmmgr not being conditional?
> 
> Looks like a simple thinko. I'll merge the following into "stubdom: 
> Add autoconf", hopefully my testing won't find any other issues.

I was just about to push this out when I though "hrm, maybe I should check this with cmake installed". I'm afraid it is broken. Spew is below. Looks like it's not finding the gmp headers -- I can see them in stubdom/gmp-x86_64/gmp.h but I can't see anything which looks like it adds the necessary -I to CFLAGS or anywhere.

Branch is at:
        git://xenbits.xen.org/people/ianc/xen-unstable.git vtpm3

Sorry if I've broken something somewhere along the way. Please can you test fully both with and without cmake and resubmit. For reference the script I run before committing is:
        #!/bin/bash
        set -ex
        
        export PATH=/usr/lib/ccache:$PATH
        
        (
            make distclean -j12 -s
            git clean -f -dx
            ./configure
            make dist -j12 -s
            find dist | sort > ../FILE_LIST
        ) 2>&1 | tee ../COMMITTER.LOG

This makes pretty sure it is doing a fresh build with no left over cruft installed.

Ian.

cc -mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -fno-stack-protector -fno-exceptions -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/posix -isystem /local/scratch/ianc/devel/committer.git/stubdom/../tools/xenstore  -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/x86 -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /local/scratch/ianc/devel/committer.git/stubdom/../extras/mini-os/include/posix -isystem /local/scratch/ianc/devel/committer.git/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem /usr/lib/gcc/x86_64-linux-gnu/4.4.5/include -isystem /local/scratch/ianc/devel/committer.git/stubdom/lwip-x86_64/src/include -isystem /local/scratch/ianc/devel/committer.git/stubdom/lwip-x86_64/src/include/ipv4 -I/local/scratch/ianc/devel/committer.git/stubdom/include -I/local/scratch/ianc/devel/committer.git/stubdom/../xen/include -I../tpm_emulator-x86_64/build -I../tpm_emulator-x86_64/tpm -I../tpm_emulator-x86_64/crypto -I../tpm_emulator-x86_64  -c -o vtpm.o vtpm.c
In file included from ../tpm_emulator-x86_64/crypto/rsa.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_structures.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/crypto/bn.h:27:17: error: gmp.h: No such file or directory In file included from ../tpm_emulator-x86_64/crypto/rsa.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_structures.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/crypto/bn.h:28: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘tpm_bn_t’
../tpm_emulator-x86_64/crypto/bn.h:31: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:33: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:35: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:37: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:39: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:41: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:43: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:45: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:47: error: expected ‘)’ before ‘out’
../tpm_emulator-x86_64/crypto/bn.h:49: error: expected declaration specifiers or ‘...’ before ‘tpm_bn_t’
../tpm_emulator-x86_64/crypto/bn.h:51: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:53: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:55: error: expected ‘)’ before ‘a’
../tpm_emulator-x86_64/crypto/bn.h:57: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:59: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:61: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:63: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:65: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:67: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:69: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:71: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:73: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:75: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:77: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:79: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:81: error: expected ‘)’ before ‘res’
../tpm_emulator-x86_64/crypto/bn.h:83: error: expected ‘)’ before ‘res’
In file included from ../tpm_emulator-x86_64/tpm/tpm_structures.h:22,
                 from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/crypto/rsa.h:25: error: expected specifier-qualifier-list before ‘tpm_bn_t’
../tpm_emulator-x86_64/crypto/rsa.h:35: error: expected specifier-qualifier-list before ‘tpm_bn_t’
In file included from ../tpm_emulator-x86_64/tpm/tpm_marshalling.h:21,
                 from vtpm.c:31:
../tpm_emulator-x86_64/tpm/tpm_structures.h: In function ‘free_TPM_PERMANENT_DATA’:
../tpm_emulator-x86_64/tpm/tpm_structures.h:2244: error: ‘tpm_rsa_private_key_t’ has no member named ‘size’
make[2]: *** [vtpm.o] Error 1
make[2]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom/vtpm'
make[1]: *** [vtpm] Error 2
make[1]: Leaving directory `/local/scratch/ianc/devel/committer.git/stubdom'
make: *** [install-stubdom] Error 2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:26:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkz48-0006Zh-IT; Tue, 18 Dec 2012 15:26:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkz47-0006ZY-Q1
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 15:26:43 +0000
Received: from [193.109.254.147:54695] by server-8.bemta-14.messagelabs.com id
	BC/FD-26341-33B80D05; Tue, 18 Dec 2012 15:26:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355844400!10812082!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1924 invoked from network); 18 Dec 2012 15:26:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:26:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="229300"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 15:26:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 15:26:39 +0000
Message-ID: <1355844398.14620.254.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Eric Dumazet <erdnetdev@gmail.com>
Date: Tue, 18 Dec 2012 15:26:38 +0000
In-Reply-To: <1355843525.9380.18.camel@edumazet-glaptop>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 15:12 +0000, Eric Dumazet wrote:
> On Tue, 2012-12-18 at 13:51 +0000, Ian Campbell wrote:
> > Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller
> > than that. We have already accounted for this in
> > NETFRONT_SKB_CB(skb)->pull_to so use that instead.
> > 
> > Fixes WARN_ON from skb_try_coalesce.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Sander Eikelenboom <linux@eikelenboom.it>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: annie li <annie.li@oracle.com>
> > Cc: xen-devel@lists.xensource.com
> > Cc: netdev@vger.kernel.org
> > Cc: stable@kernel.org # 3.7.x only
> > ---
> >  drivers/net/xen-netfront.c |   15 +++++----------
> >  1 files changed, 5 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index caa0110..b06ef81 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -971,17 +971,12 @@ err:
> >  		 * overheads. Here, we add the size of the data pulled
> >  		 * in xennet_fill_frags().
> >  		 *
> > -		 * We also adjust for any unused space in the main
> > -		 * data area by subtracting (RX_COPY_THRESHOLD -
> > -		 * len). This is especially important with drivers
> > -		 * which split incoming packets into header and data,
> > -		 * using only 66 bytes of the main data area (see the
> > -		 * e1000 driver for example.)  On such systems,
> > -		 * without this last adjustement, our achievable
> > -		 * receive throughout using the standard receive
> > -		 * buffer size was cut by 25%(!!!).
> > +		 * We also adjust for the __pskb_pull_tail done in
> > +		 * handle_incoming_queue which pulls data from the
> > +		 * frags into the head area, which is already
> > +		 * accounted in RX_COPY_THRESHOLD.
> >  		 */
> > -		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> > +		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
> >  		skb->len += skb->data_len;
> >  
> >  		if (rx->flags & XEN_NETRXF_csum_blank)
> 
> 
> But skb truesize is not what you think.

Indeed, it seems I was completely backwards about what it means!

> You must account the exact memory used by this skb, not only the used
> part of it.
> 
> At the very minimum, it should be
> 
> skb->truesize += skb->data_len;
> 
> But it really should be the allocated size of the fragment.
> 
> If its a page, then its a page, even if you use one single byte in it.

So actually we want += PAGE_SIZE * skb_shinfo(skb)->nr_frags ?

Sander, can you try that change?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:28:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkz5I-0006f6-1Y; Tue, 18 Dec 2012 15:27:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkz5H-0006er-2R
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:27:55 +0000
Received: from [85.158.139.83:57183] by server-9.bemta-5.messagelabs.com id
	C0/F4-10690-A7B80D05; Tue, 18 Dec 2012 15:27:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1355844471!23140252!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29741 invoked from network); 18 Dec 2012 15:27:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:27:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="229341"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 15:27:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 15:27:47 +0000
Message-ID: <1355844466.14620.255.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Date: Tue, 18 Dec 2012 15:27:46 +0000
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48CB62D12A@aplesstripe.dom1.jhuapl.edu>
References: <50C9F2CA.1010602@jhuapl.edu>
	<1355830264.14620.196.camel@zakaz.uk.xensource.com>
	<1355831212.14620.211.camel@zakaz.uk.xensource.com>
	<1355839988.14620.225.camel@zakaz.uk.xensource.com>
	<068F06DC4D106941B297C0C5F9F446EA48CB62D12A@aplesstripe.dom1.jhuapl.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH special] vtpm fix cmake dependency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Top posting!

On Tue, 2012-12-18 at 15:23 +0000, Fioravante, Matthew E. wrote:
> What os are you using? I've noticed problems with gmp on Ubuntu.

Debian.

But the stub domain should be using its own built copy of gmp, the
host's installed version should be irrelevant.

If something is building against /usr/include/gmp.h then that is wrong.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Top posting!

On Tue, 2012-12-18 at 15:23 +0000, Fioravante, Matthew E. wrote:
> What os are you using? I've noticed problems with gmp on Ubuntu.

Debian.

But the stub domain should be using its own built copy of gmp; the
host's installed version should be irrelevant.

If something is building against /usr/include/gmp.h then that is wrong.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:31:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkz8K-0006vC-QO; Tue, 18 Dec 2012 15:31:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tkz8J-0006v5-Jx
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 15:31:03 +0000
Received: from [85.158.139.83:38988] by server-13.bemta-5.messagelabs.com id
	64/D1-10716-63C80D05; Tue, 18 Dec 2012 15:31:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355844652!29702076!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5734 invoked from network); 18 Dec 2012 15:30:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:30:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="229456"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 15:30:53 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 15:30:51 +0000
Message-ID: <1355844650.14620.257.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: William Pitcock <nenolod@dereferenced.org>
Date: Tue, 18 Dec 2012 15:30:50 +0000
In-Reply-To: <CA+T2pCGGbJXppP6ZEgvfxOU+u0TAhzEnpphbVgC4m0AiN2x1fg@mail.gmail.com>
References: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
	<20687.24486.142168.302026@mariner.uk.xensource.com>
	<CA+T2pCGGbJXppP6ZEgvfxOU+u0TAhzEnpphbVgC4m0AiN2x1fg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 14:55 +0000, William Pitcock wrote:
> Hello,
> 
> On Mon, Dec 17, 2012 at 12:08 PM, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> > William Pitcock writes ("[Xen-devel] introducing python-xen"):
> >> I would like to introduce the Python Xen library, which uses libxs and
> >> libxc directly to provide some manipulation functions for the domains
> >> running on a hypervisor.
> >
> > Thanks, that's interesting.
> >
> > However, we would recommend nowadays to build this kind of
> > functionality on top of libxl.  The existing python bindings for libxl
> > may need some work, but they're probably a good starting point.
> >
> > Ian.
> 
> The plan is to eventually shift over to using the libxl bindings once
> they become more mature.

libxl itself is in a good state. But AFAIK no one is working on the
Python bindings. Personally I don't think that what is there is a good
basis for future work; it'd be better to tear them up and start again.

I'm happy to give pointers etc about the libxl and IDL side of things,
you probably know way more about the Python side than I do though... (I
can cargo cult bindings together and that's about it ;-))

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:37:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:37:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzEA-0007CQ-10; Tue, 18 Dec 2012 15:37:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TkzE8-0007CL-56
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:37:04 +0000
Received: from [85.158.139.211:56958] by server-9.bemta-5.messagelabs.com id
	2F/13-10690-F9D80D05; Tue, 18 Dec 2012 15:37:03 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355845020!20173979!1
X-Originating-IP: [209.85.210.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9759 invoked from network); 18 Dec 2012 15:37:02 -0000
Received: from mail-ia0-f179.google.com (HELO mail-ia0-f179.google.com)
	(209.85.210.179)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:37:02 -0000
Received: by mail-ia0-f179.google.com with SMTP id o25so647625iad.38
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 07:37:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:cc:content-type;
	bh=QCgzdxdC+wLxXNZwEACgNV1aVEs9YJOxkuLKGKBns/0=;
	b=ajWaLxSlvGsJ5jFGzUhU6TGAPDfzxNIvgZqezhHQD5/vEMtL8yKoWP6zR7ikc7skW0
	1jjdUhsiIRMhRaP6TTzOXzVhqkgZI06xtiyEmNgDELLvkfdglwDgCuEBpe6YvGdGl6EJ
	RN6SGaOC8BaKjHUbzE3Vw0Y6DMGdU+h4Hr7GEmYO4y7H1H3wxM4olZng10+od3RUA2xh
	gW/qNWxiuVkh4/86uWvbV0FK3ZTTCEUNPqayE6+QVUq5qYHRGc83znKMrH8UtkZSsDvP
	6/Igj8NvX1jJrIFYdwXeT4/vI1vJnhrRJCQIsRuKN4xrjzGF28QouXdWeBiQn8nNGgw3
	xxXw==
MIME-Version: 1.0
Received: by 10.50.185.230 with SMTP id ff6mr3013563igc.7.1355845020169; Tue,
	18 Dec 2012 07:37:00 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Tue, 18 Dec 2012 07:37:00 -0800 (PST)
Date: Tue, 18 Dec 2012 23:37:00 +0800
X-Google-Sender-Auth: djTJvAPKkBdxBtldOo8_EzDIQO8
Message-ID: <CAKhsbWaxot1pzWP=-BJJ25y_jZC2uHdM8yGfqO6qbhSqe9dQfQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] Correctly expose PCH ISA bridge for IGD
 passthrough (was PCI-PCI bridge)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

Per your request, I am resending your patch with my local modification
so that the PCH ISA bridge is exposed correctly to domU. This is the
Xen part of the fix to make the i915 driver properly detect the PCH
version. Another patch on the i915 driver side is required too; I'll
send that to the intel-gfx list separately. The combined patch set
does fix the PCH detection issue for me.

Thanks,
Timothy

diff --git a/hw/pci.c b/hw/pci.c
index f051de1..d371bd7 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
     }
 }

-typedef struct {
-    PCIDevice dev;
-    PCIBus *bus;
-} PCIBridge;
-
 void pci_bridge_write_config(PCIDevice *d,
                              uint32_t address, uint32_t val, int len)
 {
diff --git a/hw/pci.h b/hw/pci.h
index edc58b6..c2acab9 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -222,6 +222,11 @@ struct PCIDevice {
     int irq_state[4];
 };

+typedef struct {
+    PCIDevice dev;
+    PCIBus *bus;
+} PCIBridge;
+
 extern char direct_pci_str[];
 extern int direct_pci_msitranslate;
 extern int direct_pci_power_mgmt;
diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
index c6f8869..de21f90 100644
--- a/hw/pt-graphics.c
+++ b/hw/pt-graphics.c
@@ -3,6 +3,7 @@
  */

 #include "pass-through.h"
+#include "pci.h"
 #include "pci/header.h"
 #include "pci/pci.h"

@@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
     did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
     rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);

-    if ( vid == PCI_VENDOR_ID_INTEL )
-        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
-                        pch_map_irq, "intel_bridge_1f");
+    if (vid == PCI_VENDOR_ID_INTEL) {
+        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
+                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
+
+        pci_config_set_vendor_id(s->dev.config, vid);
+        pci_config_set_device_id(s->dev.config, did);
+
+        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
+        s->dev.config[PCI_COMMAND + 1] = 0x00;
+        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
+        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
+        s->dev.config[PCI_REVISION] = rid;
+        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
+        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
+        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
+        s->dev.config[PCI_HEADER_TYPE] = 0x80;
+        s->dev.config[PCI_SEC_STATUS] = 0xa0;
+
+        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
+    }
 }

 uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:48:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzOU-0007Su-C8; Tue, 18 Dec 2012 15:47:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkzOT-0007Sp-IJ
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 15:47:45 +0000
Received: from [85.158.137.99:18821] by server-6.bemta-3.messagelabs.com id
	95/3C-12154-02090D05; Tue, 18 Dec 2012 15:47:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1355845662!14573068!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24740 invoked from network); 18 Dec 2012 15:47:44 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 15:47:44 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIFlebv020370
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 15:47:40 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIFldhh011512
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 15:47:40 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIFld4T030018; Tue, 18 Dec 2012 09:47:39 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 07:47:38 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CDB9C1BF216; Tue, 18 Dec 2012 10:47:37 -0500 (EST)
Date: Tue, 18 Dec 2012 10:47:37 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Message-ID: <20121218154737.GB13450@phenom.dumpdata.com>
References: <DE8DF0795D48FD4CA783C40EC82923353942F9@SHSMSX101.ccr.corp.intel.com>
	<20121128192601.GA15871@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC829233539AC1D@SHSMSX101.ccr.corp.intel.com>
	<20121205174307.GC16072@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A2366@SHSMSX101.ccr.corp.intel.com>
	<20121207140528.GA3140@phenom.dumpdata.com>
	<DE8DF0795D48FD4CA783C40EC82923353A7C7B@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353A9E57@SHSMSX101.ccr.corp.intel.com>
	<DE8DF0795D48FD4CA783C40EC82923353AD822@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353AD822@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH V1 1/2] Xen acpi memory hotplug driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>>> Not only inform firmware.
> >>>> Hotplug notify callback will invoke acpi_bus_add -> ... ->
> >>>> implicitly invoke drv->ops.add method to add the hotadded memory
> >>>> device.
> >>> 
> >>> Gotcha.
> >> 
> >> ? So it will lose the notification and no way to add the new memory
> >> device in the future. 
> >> 
> >> Xen memory hotplug logic consist of 2 parts:
> >> 1) driver logic (.add/.remove etc)
> >> 2) notification install/callback logic
> >> If you want to use 'xen_stub driver + .add/.remove ops', then
> >> notification install/callback logic would implement with xen_stub
> >> driver (means in build-in part, otherwise it would lose notification
> >> when the ops unload) --> but that would make xen_stub in big build-in
> >> size.
> > 
> > How about
> > * build-in part: xen_stub driver (stub .add to record what matched
> > cpu devices) + notification install/callback; 
> > * module part: .add/.remove ops;
> > w/ it, native driver has no chance to load and no hotplug event lose,
> > and approximately 1/3 code is build-in and 2/3 are module. 
> > 
> > I think it will work but I'm not quite sure, at least we can have a
> > try/test? 
> > 
> > Thanks,
> > Jinsong
> > 
> 
> Thoughts? If you think it's OK, I will update later.

Pls try. I am just thinking that the less code that has to be built in,
the better.
> 
> Thanks,
> Jinsong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>>> Not only inform firmware.
> >>>> The hotplug notify callback will invoke acpi_bus_add -> ... ->
> >>>> implicitly invoking the drv->ops.add method to add the hot-added
> >>>> memory device.
> >>> 
> >>> Gotcha.
> >> 
> >> ? So it will lose the notification, with no way to add the new memory
> >> device in the future. 
> >> 
> >> Xen memory hotplug logic consists of 2 parts:
> >> 1) driver logic (.add/.remove etc)
> >> 2) notification install/callback logic
> >> If you want to use 'xen_stub driver + .add/.remove ops', then the
> >> notification install/callback logic would have to live in the xen_stub
> >> driver (i.e. in the built-in part, otherwise notifications would be
> >> lost when the ops module unloads) --> but that would make the built-in
> >> xen_stub large.
> > 
> > How about:
> > * built-in part: xen_stub driver (a stub .add to record which matched
> > cpu devices) + notification install/callback;
> > * module part: .add/.remove ops;
> > With that, the native driver has no chance to load and no hotplug event
> > is lost; approximately 1/3 of the code is built-in and 2/3 is module.
> > 
> > I think it will work but I'm not quite sure; at least we can have a
> > try/test?
> > 
> > Thanks,
> > Jinsong
> > 
> 
> Thoughts? If you think it's OK, I will update later.

Pls try. I am just thinking that the less code that has to be built-in, the
better.
> 
> Thanks,
> Jinsong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:50:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzQw-0007ZB-Vk; Tue, 18 Dec 2012 15:50:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1TkzQv-0007Z4-F8
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:50:17 +0000
Received: from [193.109.254.147:37040] by server-3.bemta-14.messagelabs.com id
	DB/D8-26055-8B090D05; Tue, 18 Dec 2012 15:50:16 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355845801!10815594!1
X-Originating-IP: [209.85.212.44]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1304 invoked from network); 18 Dec 2012 15:50:03 -0000
Received: from mail-vb0-f44.google.com (HELO mail-vb0-f44.google.com)
	(209.85.212.44)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:50:03 -0000
Received: by mail-vb0-f44.google.com with SMTP id fc26so1017135vbb.17
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 07:50:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=mMwjNEsTPY02OnimYcI9L+Dy+O1+TGYM4DIPgygdRBg=;
	b=Z+kitu66SN9+rd6VxtUe40ewLcEKa3K37YLEc5bjEPGtpsjp/l21NRL6FQcjXVh9rR
	4ikDUP0uo9AbgRn492K3Unb8iFzk0T6RTta9rrUoIHjFsXlveGrIu2RpCLK0R1wtZWWs
	0O83E9MWJcEi1+sfieBrW2+rp4V89x9C5N7UQYQYsAIm7e1HIJHHY1rGeCSfvg7JF35M
	3y9iEmyjKduyN4hzsmri9cHY6Gj5EoPD4++CCD6/P53UmObFa9G7v0ZC9xfUROotbx34
	gw6ifZvj6H6phjG3cmSXl3Bl0ZFRF6omH1MnxYAwWqPerGJC1YD6rckDstHj5+u07KZm
	iVQQ==
MIME-Version: 1.0
Received: by 10.52.70.232 with SMTP id p8mr3401973vdu.0.1355845801204; Tue, 18
	Dec 2012 07:50:01 -0800 (PST)
Received: by 10.58.54.39 with HTTP; Tue, 18 Dec 2012 07:50:01 -0800 (PST)
In-Reply-To: <CAFLBxZazULnr8PRN_4fshUAmuohyn2o4AcqpohdjxbmDnVStGQ@mail.gmail.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
	<CAFLBxZazULnr8PRN_4fshUAmuohyn2o4AcqpohdjxbmDnVStGQ@mail.gmail.com>
Date: Tue, 18 Dec 2012 15:50:01 +0000
X-Google-Sender-Auth: 56fAoF-Jxwxtevvt9WCBKgkBu6I
Message-ID: <CAFLBxZYQLdN_CO=ECOWyk-91j63rtWwwPE7EhR8OEtf+i_zxZA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7426101174037856555=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7426101174037856555==
Content-Type: multipart/alternative; boundary=20cf3071c76e59dbc804d12273bf

--20cf3071c76e59dbc804d12273bf
Content-Type: text/plain; charset=ISO-8859-1

Oops, somehow forgot to cc the list here...


On Tue, Dec 18, 2012 at 2:13 PM, George Dunlap
<George.Dunlap@eu.citrix.com>wrote:

> On Tue, Dec 18, 2012 at 2:06 PM, George Dunlap <
> George.Dunlap@eu.citrix.com> wrote:
>
>> "xm help" doesn't show a "block-resize" command, nor does grepping
>> through tools for "resize" turn up anything.
>>
>
> According to the guy who submitted the patch that was accepted into
> 2.6.36:  "My goal was to not have the end-user do anything other than what
> was minimally required to  resizing the device on the host side. Once the
> device is resized on the host side, this capacity change is propagated to
> the guest without having to invoke any xm command."
>
> So in theory this should already work.
>
> I've asked the guy who submitted the request to test it.
>
>  -George
>

--20cf3071c76e59dbc804d12273bf
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Oops, somehow forgot to cc the list here...<br></div><div =
class=3D"gmail_extra"><br><br><div class=3D"gmail_quote">On Tue, Dec 18, 20=
12 at 2:13 PM, George Dunlap <span dir=3D"ltr">&lt;<a href=3D"mailto:George=
.Dunlap@eu.citrix.com" target=3D"_blank">George.Dunlap@eu.citrix.com</a>&gt=
;</span> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div dir=3D"ltr"><div class=3D"im">On Tue, D=
ec 18, 2012 at 2:06 PM, George Dunlap <span dir=3D"ltr">&lt;<a href=3D"mail=
to:George.Dunlap@eu.citrix.com" target=3D"_blank">George.Dunlap@eu.citrix.c=
om</a>&gt;</span> wrote:<br>
</div><div class=3D"gmail_extra"><div class=3D"gmail_quote"><div class=3D"i=
m">
<blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-=
left:1px solid rgb(204,204,204);padding-left:1ex"><div dir=3D"ltr">&quot;xm=
 help&quot; doesn&#39;t show a &quot;block-resize&quot; command, nor does g=
repping through tools for &quot;resize&quot; turn up anything.<br>

</div></blockquote><div><br></div></div><div>According to the guy who submi=
tted the patch that was accepted into 2.6.36:=A0 &quot;My goal was to not h=
ave the end-user do anything other than what was minimally required to =A0<=
span>resizing</span> the <span>device</span> on the host side. Once the <sp=
an>device</span> is <span>resized</span> on the host side, this capacity ch=
ange is propagated to the guest without having to invoke any xm command.&qu=
ot;<br>

<br></div><div>So in theory this should already work.=A0 <br><br>I&#39;ve a=
sked the guy who submitted the request to test it.<span class=3D"HOEnZb"><f=
ont color=3D"#888888"><br><br></font></span></div><span class=3D"HOEnZb"><f=
ont color=3D"#888888"><div>
=A0-George<br>
</div></font></span></div></div></div>
</blockquote></div><br></div>

--20cf3071c76e59dbc804d12273bf--


--===============7426101174037856555==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7426101174037856555==--


From xen-devel-bounces@lists.xen.org Tue Dec 18 15:56:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzWm-0007qy-3n; Tue, 18 Dec 2012 15:56:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TkzWk-0007qr-Bm
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 15:56:18 +0000
Received: from [85.158.143.35:46784] by server-3.bemta-4.messagelabs.com id
	4B/1C-18211-12290D05; Tue, 18 Dec 2012 15:56:17 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355845784!4596586!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19088 invoked from network); 18 Dec 2012 15:49:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 15:49:45 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIFngJi023124
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 15:49:42 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIFnf4w015999
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 15:49:42 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIFnfs5031741; Tue, 18 Dec 2012 09:49:41 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 07:49:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 51C751BF216; Tue, 18 Dec 2012 10:49:40 -0500 (EST)
Date: Tue, 18 Dec 2012 10:49:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: kbuild test robot <fengguang.wu@intel.com>
Message-ID: <20121218154940.GC13450@phenom.dumpdata.com>
References: <50d00197.j0ofCx2fkiqUCGLe%fengguang.wu@intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50d00197.j0ofCx2fkiqUCGLe%fengguang.wu@intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen:stable/for-linus-3.8 11/13]
 arch/x86/xen/smp.c:257:2: error: implicit declaration of function
 'smp_store_boot_cpu_info'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 01:39:35PM +0800, kbuild test robot wrote:
> tree:   git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8
> head:   9d328a948f38ec240fc6d05db2c146e23ccd9b8b
> commit: 06d0b5d9edcecccab45588a472cd34af2608e665 [11/13] xen/smp: Use smp_store_boot_cpu_info() to store cpu info for BSP during boot time.
> config: make ARCH=x86_64 allmodconfig
> 
> All error/warnings:
> 
> arch/x86/xen/smp.c: In function 'xen_smp_prepare_cpus':
> arch/x86/xen/smp.c:257:2: error: implicit declaration of function 'smp_store_boot_cpu_info' [-Werror=implicit-function-declaration]
> cc1: some warnings being treated as errors

This is OK - I was contemplating merging the required patches into the branch
(i.e. doing a 3.8 merge), but Linus is adamant about that sort of thing not
being done. So I will punt on this - when Linus merges this branch it won't
have compile errors.

> 
> vim +/smp_store_boot_cpu_info +257 arch/x86/xen/smp.c
> 
> ed467e69 Konrad Rzeszutek Wilk 2011-09-01  251  
> ed467e69 Konrad Rzeszutek Wilk 2011-09-01  252  		xen_raw_printk(m);
> ed467e69 Konrad Rzeszutek Wilk 2011-09-01  253  		panic(m);
> ed467e69 Konrad Rzeszutek Wilk 2011-09-01  254  	}
> 2d9e1e2f Jeremy Fitzhardinge   2008-07-07  255  	xen_init_lock_cpu(0);
> 2d9e1e2f Jeremy Fitzhardinge   2008-07-07  256  
> 06d0b5d9 Konrad Rzeszutek Wilk 2012-12-17 @257  	smp_store_boot_cpu_info();
> c7b75947 Jeremy Fitzhardinge   2008-07-08  258  	cpu_data(0).x86_max_cores = 1;
> 900cba88 Andrew Jones          2009-12-18  259  
> 900cba88 Andrew Jones          2009-12-18  260  	for_each_possible_cpu(i) {
> 
> ---
> 0-DAY kernel build testing backend         Open Source Technology Center
> Fengguang Wu, Yuanhan Liu                              Intel Corporation

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 15:58:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 15:58:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzYr-0007xM-Kf; Tue, 18 Dec 2012 15:58:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkzYq-0007xF-HH
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:58:28 +0000
Received: from [85.158.137.99:24867] by server-2.bemta-3.messagelabs.com id
	69/FC-11239-3A290D05; Tue, 18 Dec 2012 15:58:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355846306!17622825!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10891 invoked from network); 18 Dec 2012 15:58:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:58:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="230393"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 15:58:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 15:58:17 +0000
Message-ID: <1355846296.14620.260.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 18 Dec 2012 15:58:16 +0000
In-Reply-To: <50D0910202000078000B1171@nat28.tlf.novell.com>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
	<20121218143756.GA24713@phenom.dumpdata.com>
	<50D0910202000078000B1171@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 14:51 +0000, Jan Beulich wrote:
> >>> On 18.12.12 at 15:37, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > So it should be altering the 'sectors' value and just writing
> > the backend state from XenbusStateConnected to XenbusStateConnected.
> 
> Which is what the corresponding backend patch does (which for
> upstream was separate because I think blkback wasn't upstream
> yet back when KY did that work).

Looks like this was upstream from the first version of the patch.

I suspect it wasn't usable until 496b318eb655, which fixed a xenbus hang
on the second resize, but that was in v3.0-rc1 so it ought to be ok...

Ian.
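The flow discussed above (grow the backing store, bump the backend's
'sectors' node, then re-write the backend state so blkfront revalidates)
could be sketched as a shell script. The domid, device id, sizes, and
volume path below are invented for illustration, not taken from this
thread, and the script defaults to a dry run that only prints commands:

```shell
#!/bin/sh
# Sketch only: domid/devid/sizes/volume path below are illustrative.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

DOMID=5                 # guest domain id (example)
DEVID=51712             # block device id, xvda (example)
NEW_SECTORS=41943040    # 20 GiB in 512-byte sectors
BE="/local/domain/0/backend/vbd/$DOMID/$DEVID"

run lvextend -L 20G /dev/vg0/guest-disk         # grow the underlying volume first
run xenstore-write "$BE/sectors" "$NEW_SECTORS" # advertise the new size
run xenstore-write "$BE/state" 4                # re-write XenbusStateConnected (4)
```

With DRY_RUN=0 this would actually touch xenstore; per the discussion, the
guest still needs a blkfront with the 496b318eb655 fix to pick the change up.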


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:04:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzeD-00009r-Dm; Tue, 18 Dec 2012 16:04:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1TkzeC-00009k-Dh
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:04:00 +0000
Received: from [85.158.138.51:30161] by server-1.bemta-3.messagelabs.com id
	E4/5E-08906-FE390D05; Tue, 18 Dec 2012 16:03:59 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355846639!27644168!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.2 required=7.0 tests=MIME_QP_LONG_LINE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11584 invoked from network); 18 Dec 2012 16:03:59 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-15.tower-174.messagelabs.com with SMTP;
	18 Dec 2012 16:03:59 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 105B8C5618D;
	Tue, 18 Dec 2012 16:03:47 +0000 (GMT)
Date: Tue, 18 Dec 2012 16:03:46 +0000
From: Alex Bligh <alex@alex.org.uk>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <008ECD032A8603F73C448ABD@Ximines.local>
In-Reply-To: <CAFLBxZY3zXaNQ93zq8YuKQm=+Hc7SoN2uVsjgJ1kZb_=N5CszA@mail.gmail.com>
References: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
	<39A8C47B5D7DD22A52A3DA51@Ximines.local>
	<CAFLBxZY3zXaNQ93zq8YuKQm=+Hc7SoN2uVsjgJ1kZb_=N5CszA@mail.gmail.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Alex Bligh <alex@alex.org.uk>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 development update, 15 Oct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--On 18 December 2012 14:28:57 +0000 George Dunlap
<George.Dunlap@eu.citrix.com> wrote:

>>> * Make storage migration possible
>>>   owner: ?
>>>   status: ?
>>>   There needs to be a way, either via command-line or via some hooks,
>>>   that someone can build a "storage migration" feature on top of libxl
>>>   or xl.
>>
>> We have this working with qemu-xen, qcow2 and snapshot rebase. At a libxl
>> level (but not an xl level), everything seems to be there.
>
> Can you describe in more detail how you implement this?  Do you have a
> script or something?

We have a pile of C code :-)

A script would do something like:

1. Ask qemu to do a live snapshot using snapshot_blkdev, putting the
snapshot in a new file on the new storage device. This ensures that all new
writes go to the new storage device.

2. Rebase the snapshot to a null backing file (I think that's qemu-img
rebase with -b '', though we had to submit a couple of lines of patch to
qemu to make it work), which fills the non-written blocks of the new
snapshot from the old base image and breaks the link to the old base
image.

In our implementation we have a separate hierarchy database from the parent
links stored within the qcow2 files so we get proper usage counting etc.
but that's an 'implementation detail'.

>From memory (and at risk of crossing threads) using this device model and
external snapshots you can use qemu-img resize to resize drives live at
least under KVM. Obviously the information that the drive has been resized
needs to get to the guest somehow.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
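Alex's two-step recipe (a live snapshot_blkdev into a new file, then a
qemu-img rebase to an empty backing file) could be sketched as below. The
monitor socket path, device name, and file names are invented for
illustration, and the script defaults to a dry run that only prints the
commands:

```shell
#!/bin/sh
# Sketch only: socket path, device and file names are illustrative.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

DEV=xvda
SNAP=/new-storage/guest.qcow2
MON=/var/run/qemu.monitor

# 1. Live snapshot: from here on, all new guest writes land in $SNAP
#    on the new storage device.
run sh -c "echo 'snapshot_blkdev $DEV $SNAP' | socat - UNIX-CONNECT:$MON"

# 2. Copy the not-yet-written blocks over from the old base image and
#    drop the backing-file link, leaving $SNAP self-contained.
run qemu-img rebase -b '' "$SNAP"
```

Step 2 copies data, so it can take a while; the guest keeps running
because qemu only reads from the now-immutable old base during the rebase.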

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:07:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:07:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkzhr-0000IH-2O; Tue, 18 Dec 2012 16:07:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tkzhp-0000I7-Mx
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 16:07:45 +0000
Received: from [85.158.143.99:22915] by server-1.bemta-4.messagelabs.com id
	CC/45-28401-0D490D05; Tue, 18 Dec 2012 16:07:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1355846859!29378048!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21391 invoked from network); 18 Dec 2012 16:07:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 16:07:40 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIG76AX015914
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 16:07:07 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIG76pE000204
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 16:07:06 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIG76TA022574; Tue, 18 Dec 2012 10:07:06 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 08:07:05 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AF7E21BF216; Tue, 18 Dec 2012 11:07:04 -0500 (EST)
Date: Tue, 18 Dec 2012 11:07:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mats Petersson <mats.petersson@citrix.com>
Message-ID: <20121218160704.GA3543@phenom.dumpdata.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D074F5.6060202@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 01:51:49PM +0000, Mats Petersson wrote:
> On 18/12/12 11:40, Ian Campbell wrote:
> >On Tue, 2012-12-18 at 11:28 +0000, Mats Petersson wrote:
> >>On 18/12/12 11:17, Ian Campbell wrote:
> >>>On Mon, 2012-12-17 at 17:38 +0000, Mats Petersson wrote:
> >>>>On 17/12/12 16:57, Ian Campbell wrote:
> >>>>>On Fri, 2012-12-14 at 17:00 +0000, Mats Petersson wrote:
> >>>>>>Ian, Konrad:
> >>>>>>I took Konrad's latest version [I think] and applied my patch (which
> >>>>>>needed some adjustments as there are other "post 3.7" changes to same
> >>>>>>source code - including losing the xen_flush_tlb_all() ??)
> >>>>>>
> >>>>>>Attached are the patches:
> >>>>>>arm-enlighten.patch, which updates the ARM code.
> >>>>>>improve-pricmd.patch, which updates the privcmd code.
> >>>>>>
> >>>>>>Ian, can you have a look at the ARM code - which I quickly hacked
> >>>>>>together, I haven't compiled it, and I certainly haven't tested it,
> >>>>>There are a lot of build errors as you might expect (patch below, a few
> >>>>>warnings remain). You can find a cross compiler at
> >>>>>http://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.6.3/
> >>>>>
> >>>>>or you can use
> >>>>>drall:/home/ianc/devel/cross/x86_64-gcc-4.6.0-nolibc_arm-unknown-linux-gnueabi.tar.bz2
> >>>>>
> >>>>>which is an older version from the same place.
> >>>>>
> >>>>>Anyway, the patch...
> >>>>>>and
> >>>>>>it needs further changes to make my changes actually make it more
> >>>>>>efficient.
> >>>>>Right, the benefit on PVH or ARM would be in batching the
> >>>>>XENMEM_add_to_physmap_range calls. The batching of the
> >>>>>apply_to_page_range which this patch adds isn't useful because there is
> >>>>>no HYPERVISOR_mmu_update call to batch in this case. So basically this
> >>>>>patch as it stands does a lot of needless work for no gain I'm afraid.
> >>>>So, basically, what is an improvement on x86 isn't anything useful on
> >>>>ARM, and you'd prefer to loop around in privcmd.c calling into
> >>>>xen_remap_domain_mfn_range() a lot of times?
> >>>Not at all. ARM (and PVH) still benefits from the interface change but
> >>>the implementation of the benefit is totally different.
> >>>
> >>>For normal x86 PV you want to batch the HYPERVISOR_mmu_update.
> >>>
> >>>For both x86 PVH and ARM this hypercall  doesn't exist but instead there
> >>>is a call to HYPERVISOR_memory_op XENMEM_add_to_physmap_range which is
> >>>something which would benefit from batching.
> >>So, you want me to fix that up?
> >If you want to, sure, yes please.
> >
> >But feel free to just make the existing code work with the interface,
> >without adding any batching. That should be a much smaller change than
> >what you proposed.
> >
> >(aside; I do wonder how much of this x86/arm code could be made generic)
> I think, once it goes to PVH everywhere, quite a bit (as I believe
> the hypercalls should be the same by then, right?)
> 
> In the PVOPS kernel, it's probably a bit more of a job. I'm sure it can
> be done, but with a bit more work.
> 
> I think I'll do the minimal patch first, then, if I find some spare
> time, work on the "batching" variant.

OK. The batching is IMHO just using the multicall variant.

> >
> >>To make xentrace not work until it is fixed wouldn't be a terrible
> >>thing, would it?
> >On ARM I think it is fine (I doubt this is the only thing stopping
> >xentrace from working). I suspect people would be less impressed with
> >breaking xentrace on x86. For PVH it probably is a requirement for it to
> >keep working, I'm not sure though.
> Ok, ENOSYS it is for remap_range() then.
> >
> >>  Then we can remove that old gunk from x86 as well
> >>(eventually).
> >>Thanks. I was starting to wonder if I'd been teleported back to the time
> >>when I struggled with pointers...
> >>Maybe it needs a better comment.
> >The other thing I had missed was that this was a pure increment and not
> >taking the value at the same time, which also confused me.
> >
> >Splitting the increment out from the dereference usually makes these
> >things clearer, I was obviously just being a bit hard of thinking
> >yesterday!
> No worries. I will see about making a more readable comment (and for
> ARM, I can remove the whole if/else and just do the one increment,
> based on the above discussion), which should make the code better.

You can use the v3.8 tree as your base - it has the required PVH and ARM
patches. There is one bug (where dom0 crashes), and I just sent
a git pull request for that to Linus's tree:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8
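Fetching that base could look like the following. The URL and branch come
from Konrad's message; the remote name, local branch name, and the
build-test target are arbitrary choices, and the script defaults to a dry
run that only prints the commands:

```shell
#!/bin/sh
# Sketch: URL/branch are from the message above; local names are arbitrary.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

URL=git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
BRANCH=stable/for-linus-3.8

run git remote add konrad "$URL"
run git fetch konrad "$BRANCH"
run git checkout -b privcmd-rework "konrad/$BRANCH"

# Build-test the ARM side with the cross compiler Ian points at
# (prefix matches his x86_64-gcc-4.6.0-nolibc_arm-unknown-linux-gnueabi tarball):
run make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- drivers/xen/
```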

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:07:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:07:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tkzhr-0000IH-2O; Tue, 18 Dec 2012 16:07:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tkzhp-0000I7-Mx
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 16:07:45 +0000
Received: from [85.158.143.99:22915] by server-1.bemta-4.messagelabs.com id
	CC/45-28401-0D490D05; Tue, 18 Dec 2012 16:07:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1355846859!29378048!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21391 invoked from network); 18 Dec 2012 16:07:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 16:07:40 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBIG76AX015914
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 16:07:07 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBIG76pE000204
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 16:07:06 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBIG76TA022574; Tue, 18 Dec 2012 10:07:06 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 08:07:05 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AF7E21BF216; Tue, 18 Dec 2012 11:07:04 -0500 (EST)
Date: Tue, 18 Dec 2012 11:07:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mats Petersson <mats.petersson@citrix.com>
Message-ID: <20121218160704.GA3543@phenom.dumpdata.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D074F5.6060202@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 01:51:49PM +0000, Mats Petersson wrote:
> On 18/12/12 11:40, Ian Campbell wrote:
> >On Tue, 2012-12-18 at 11:28 +0000, Mats Petersson wrote:
> >>On 18/12/12 11:17, Ian Campbell wrote:
> >>>On Mon, 2012-12-17 at 17:38 +0000, Mats Petersson wrote:
> >>>>On 17/12/12 16:57, Ian Campbell wrote:
> >>>>>On Fri, 2012-12-14 at 17:00 +0000, Mats Petersson wrote:
> >>>>>>Ian, Konrad:
> >>>>>>I took Konrad's latest version [I think] and applied my patch (which
> >>>>>>needed some adjustments as there are other "post 3.7" changes to same
> >>>>>>source code - including losing the xen_flush_tlb_all() ??)
> >>>>>>
> >>>>>>Attached are the patches:
> >>>>>>arm-enlighten.patch, which updates the ARM code.
> >>>>>>improve-pricmd.patch, which updates the privcmd code.
> >>>>>>
> >>>>>>Ian, can you have a look at the ARM code - which I quickly hacked
> >>>>>>together, I haven't compiled it, and I certainly haven't tested it,
> >>>>>There are a lot of build errors as you might expect (patch below, a few
> >>>>>warnings remain). You can find a cross compiler at
> >>>>>http://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.6.3/
> >>>>>
> >>>>>or you can use
> >>>>>drall:/home/ianc/devel/cross/x86_64-gcc-4.6.0-nolibc_arm-unknown-linux-gnueabi.tar.bz2
> >>>>>
> >>>>>which is an older version from the same place.
> >>>>>
> >>>>>Anyway, the patch...
> >>>>>>and
> >>>>>>it needs further changes to make my changes actually make it more
> >>>>>>efficient.
> >>>>>Right, the benefit on PVH or ARM would be in batching the
> >>>>>XENMEM_add_to_physmap_range calls. The batching of the
> >>>>>apply_to_page_range which this patch adds isn't useful because there is
> >>>>>no HYPERVISOR_mmu_update call to batch in this case. So basically this
> >>>>>patch as it stands does a lot of needless work for no gain I'm afraid.
> >>>>So, basically, what is an improvement on x86 isn't anything useful on
> >>>>ARM, and you'd prefer to loop around in privcmd.c calling into
> >>>>xen_remap_domain_mfn_range() a lot of times?
> >>>Not at all. ARM (and PVH) still benefits from the interface change but
> >>>the implementation of the benefit is totally different.
> >>>
> >>>For normal x86 PV you want to batch the HYPERVISOR_mmu_update.
> >>>
> >>>For both x86 PVH and ARM this hypercall  doesn't exist but instead there
> >>>is a call to HYPERVISOR_memory_op XENMEM_add_to_physmap_range which is
> >>>something which would benefit from batching.
> >>So, you want me to fix that up?
> >If you want to sure, yes please.
> >
> >But feel free to just make the existing code work with the interface,
> >without adding any batching. That should be a much smaller change than
> >what you proposed.
> >
> >(aside; I do wonder how much of this x86/arm code could be made generic)
> I think, once it goes to PVH everywhere, quite a bit (as I believe
> the hypercalls should be the same by then, right?)
> 
> In the PVOPS kernel, it's probably a bit more job. I'm sure it can
> be done, but with a bit more work.
> 
> I think I'll do the minimal patch first, then, if I find some spare
> time, work on the "batching" variant.

OK. The batching is IMHO just using the multicall variant.

> >
> >>To make xentrace not work until it is fixed wouldn't be a terrible
> >>thing, would it?
> >On ARM I think it is fine (I doubt this is the only thing stopping
> >xentrace from working). I suspect people would be less impressed with
> >breaking xentrace on x86. For PVH it probably is a requirement for it to
> >keep working, I'm not sure though.
> Ok, ENOSYS it is for remap_range() then.
> >
> >>  Then we can remove that old gunk from x86 as well
> >>(eventually).
> >>Thanks. I was starting to wonder if I'd been teleported back to the time
> >>when I struggled with pointers...
> >>Maybe it needs a better comment.
> >The other thing I had missed was that this was a pure increment and not
> >taking the value at the same time, which also confused me.
> >
> >Splitting the increment out from the dereference usually makes these
> >things clearer, I was obviously just being a bit hard of thinking
> >yesterday!
> No worries. I will see about making a more readable comment (and for
> ARM, I can remove the whole if/else and just do the one increment
> (based on above discussion), so should make the code better.

You can use the v3.8 tree as your base - it has the required PVH and ARM
patches. There is one bug (where dom0 crashes), and I have just sent
a git pull request with the fix for Linus's tree:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:23:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TkzwM-0000id-W4; Tue, 18 Dec 2012 16:22:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TkzwL-0000iX-MA
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 16:22:45 +0000
Received: from [85.158.138.51:26763] by server-11.bemta-3.messagelabs.com id
	2A/2B-13335-45890D05; Tue, 18 Dec 2012 16:22:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1355847764!29439954!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22907 invoked from network); 18 Dec 2012 16:22:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:22:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="231100"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 16:22:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 16:22:39 +0000
Message-ID: <1355847758.14620.265.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Eric Dumazet <erdnetdev@gmail.com>
Date: Tue, 18 Dec 2012 16:22:38 +0000
In-Reply-To: <1355847180.9380.21.camel@edumazet-glaptop>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<1355847180.9380.21.camel@edumazet-glaptop>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 16:13 +0000, Eric Dumazet wrote:
> On Tue, 2012-12-18 at 15:26 +0000, Ian Campbell wrote:
> 
> > So actually we want += PAGE_SIZE * skb_shinfo(skb)->nr_frags ?
> > 
> 
> I don't know what the real frag sizes are in your case.

I think it's a page, see xennet_alloc_rx_buffers and the alloc_page
therein.

> Some drivers allocate a full page for an ethernet frame, others use half
> of a page, it really depends.
> 
> As the frag ABI doesn't contain the real size, it's OK in this case to
> account the actual frag size.
> 
> (skb->data_len in your driver)

I guess I'm a bit confused about what truesize means again then ;-),
because in that case the original patch is correct, although it would
have been less confusing to do:
	skb->truesize += skb->data_len; 
in xennet_poll() and then do the subtraction of
NETFRONT_SKB_CB(skb)->pull_to in handle_incoming_queue() where we
actually do the pull up.

Unless __pskb_pull_tail does that adjustment for us, but if it does I
can't see where.
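
(The accounting being proposed can be modeled with simplified stand-in structs - these are illustrative only, not the real sk_buff or NETFRONT_SKB_CB layout: add the whole data_len when the frags are attached, then subtract pull_to at the point the pull-up actually happens.)

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures under discussion. */
struct skb_model {
    size_t truesize;  /* memory accounted against this skb          */
    size_t data_len;  /* bytes currently held in the frags          */
    size_t pull_to;   /* bytes to be pulled into the head area, as
                         recorded in the netfront control block     */
};

/* Modeled xennet_poll() step: account the frag data as-is. */
static void poll_account(struct skb_model *skb)
{
    skb->truesize += skb->data_len;
}

/* Modeled handle_incoming_queue() step: the pull-up moves pull_to
 * bytes from the frags into head space whose allocation was already
 * accounted (RX_COPY_THRESHOLD), so subtract it here, where the pull
 * actually happens. */
static void pull_up_account(struct skb_model *skb)
{
    skb->truesize -= skb->pull_to;
    skb->data_len -= skb->pull_to;
}
```

The net effect is the same as the original patch's single adjustment; splitting it across the two call sites just keeps each adjustment next to the operation it compensates for.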

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:32:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl055-0000zN-QJ; Tue, 18 Dec 2012 16:31:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <erdnetdev@gmail.com>) id 1Tkzo4-0000ZY-QM
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 16:14:13 +0000
Received: from [85.158.139.83:63161] by server-14.bemta-5.messagelabs.com id
	8A/E1-09538-45690D05; Tue, 18 Dec 2012 16:14:12 +0000
X-Env-Sender: erdnetdev@gmail.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355847183!26522573!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18413 invoked from network); 18 Dec 2012 16:13:05 -0000
Received: from mail-pa0-f41.google.com (HELO mail-pa0-f41.google.com)
	(209.85.220.41)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:13:05 -0000
Received: by mail-pa0-f41.google.com with SMTP id bj3so644880pad.28
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Dec 2012 08:13:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:subject:from:to:cc:in-reply-to:references:content-type
	:date:message-id:mime-version:x-mailer:content-transfer-encoding;
	bh=mcyn1CvpPJuABHwuPWotvMCyAyInlvsM5HV2ppdsB/o=;
	b=BCXpvTsfGnIw0TO7bi1Y7MzIJAwVxoSEYtKD2tKrzIQkaqayMQ3n10V2yQrkuYG237
	0390n0FYUf3TbeNUZdetpDKOsQaRGXpRNK2NTMznCOUskdG9Jyvx4UMVsTtggJxz83IU
	XOWWR2ofyAhcLVklIDtyDCaw8b0Z4BBmbeD1+D4HnlvFb+MXU5eBkthF5YioVnquItS8
	jOgS0Ch/33Seq2ohvVcy0qj/6a6AEYMfaHFTk1mJKbXLZCHgnxsNh4AemzuQCNE+/tsC
	XYic3bqR6+RBxmdnSMKhvurhA9sNVRBtxiobemcgx9FQQBKS6ZGiB9EEXF0t2l4eU68L
	z4wA==
X-Received: by 10.66.85.39 with SMTP id e7mr7946607paz.63.1355847183338;
	Tue, 18 Dec 2012 08:13:03 -0800 (PST)
Received: from [172.19.247.87] ([172.19.247.87])
	by mx.google.com with ESMTPS id ou3sm1375760pbb.46.2012.12.18.08.13.01
	(version=SSLv3 cipher=OTHER); Tue, 18 Dec 2012 08:13:02 -0800 (PST)
From: Eric Dumazet <erdnetdev@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355844398.14620.254.camel@zakaz.uk.xensource.com>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
Date: Tue, 18 Dec 2012 08:13:00 -0800
Message-ID: <1355847180.9380.21.camel@edumazet-glaptop>
Mime-Version: 1.0
X-Mailer: Evolution 2.28.3 
X-Mailman-Approved-At: Tue, 18 Dec 2012 16:31:46 +0000
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 15:26 +0000, Ian Campbell wrote:

> So actually we want += PAGE_SIZE * skb_shinfo(skb)->nr_frags ?
> 

I don't know what the real frag sizes are in your case.

Some drivers allocate a full page for an ethernet frame, others use half
of a page, it really depends.

As the frag ABI doesn't contain the real size, it's OK in this case to
account the actual frag size.

(skb->data_len in your driver)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:32:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl055-0000z9-1S; Tue, 18 Dec 2012 16:31:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <erdnetdev@gmail.com>) id 1Tkyq5-0004ys-Q7
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 15:12:14 +0000
Received: from [85.158.139.83:41893] by server-15.bemta-5.messagelabs.com id
	84/A2-20523-DC780D05; Tue, 18 Dec 2012 15:12:13 +0000
X-Env-Sender: erdnetdev@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355843528!30251157!1
X-Originating-IP: [209.85.220.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13423 invoked from network); 18 Dec 2012 15:12:12 -0000
Received: from mail-pa0-f48.google.com (HELO mail-pa0-f48.google.com)
	(209.85.220.48)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:12:12 -0000
Received: by mail-pa0-f48.google.com with SMTP id fa1so605916pad.21
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Dec 2012 07:12:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:subject:from:to:cc:in-reply-to:references:content-type
	:date:message-id:mime-version:x-mailer:content-transfer-encoding;
	bh=AyrehypZx09niLV2GuBUNnp+2KnXMk71yTu0tPXalIQ=;
	b=fZj73lDP0Ro3cu97OjZ6o3ouKy+wn1NJ3bAj2DCHB9RPvQM34bz2rB8ENGKoO8LXHd
	WKgrFAMrACq1v0GRocRjfHgAlRZli4S0rSGZxe/JjeNsmlVBdj/fZEgsm62r7uztqeau
	Guvps16gSyQli58xh/8mTvaF8PoLK4hnYuxN5NzGSIZYkiq1CAsGOmbqjwH2385DzB3G
	jPYBZ2Eo47WcC9oCpI8uaYxS9h+gRA2+Xx4/F7SsTaW39N8zFDbQ45wTGGlEpZhjjTsK
	s76VBLHOwqOfb1y3hYYhnOIwSOe8ikGJOii0CJPFKg4COeC5S2xvH332472Ixj5l0tBX
	zVTQ==
X-Received: by 10.66.76.194 with SMTP id m2mr7749850paw.14.1355843528273;
	Tue, 18 Dec 2012 07:12:08 -0800 (PST)
Received: from [192.168.1.119] (c-67-170-232-166.hsd1.ca.comcast.net.
	[67.170.232.166])
	by mx.google.com with ESMTPS id ol4sm1296324pbb.58.2012.12.18.07.12.06
	(version=SSLv3 cipher=OTHER); Tue, 18 Dec 2012 07:12:07 -0800 (PST)
From: Eric Dumazet <erdnetdev@gmail.com>
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
Date: Tue, 18 Dec 2012 07:12:05 -0800
Message-ID: <1355843525.9380.18.camel@edumazet-glaptop>
Mime-Version: 1.0
X-Mailer: Evolution 2.28.3 
X-Mailman-Approved-At: Tue, 18 Dec 2012 16:31:46 +0000
Cc: netdev@vger.kernel.org, annie li <annie.li@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel@lists.xensource.com, Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 13:51 +0000, Ian Campbell wrote:
> Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller
> than that. We have already accounted for this in
> NETFRONT_SKB_CB(skb)->pull_to so use that instead.
> 
> Fixes WARN_ON from skb_try_coalesce.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Sander Eikelenboom <linux@eikelenboom.it>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: annie li <annie.li@oracle.com>
> Cc: xen-devel@lists.xensource.com
> Cc: netdev@vger.kernel.org
> Cc: stable@kernel.org # 3.7.x only
> ---
>  drivers/net/xen-netfront.c |   15 +++++----------
>  1 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index caa0110..b06ef81 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -971,17 +971,12 @@ err:
>  		 * overheads. Here, we add the size of the data pulled
>  		 * in xennet_fill_frags().
>  		 *
> -		 * We also adjust for any unused space in the main
> -		 * data area by subtracting (RX_COPY_THRESHOLD -
> -		 * len). This is especially important with drivers
> -		 * which split incoming packets into header and data,
> -		 * using only 66 bytes of the main data area (see the
> -		 * e1000 driver for example.)  On such systems,
> -		 * without this last adjustement, our achievable
> -		 * receive throughout using the standard receive
> -		 * buffer size was cut by 25%(!!!).
> +		 * We also adjust for the __pskb_pull_tail done in
> +		 * handle_incoming_queue which pulls data from the
> +		 * frags into the head area, which is already
> +		 * accounted in RX_COPY_THRESHOLD.
>  		 */
> -		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> +		skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
>  		skb->len += skb->data_len;
>  
>  		if (rx->flags & XEN_NETRXF_csum_blank)


But skb->truesize is not what you think.

You must account for the exact memory consumed by this skb, not only
the part of it that is used.

At the very minimum, it should be

skb->truesize += skb->data_len;

But it really should be the allocated size of the fragment.

If it's a page, then it's a page, even if you use only a single byte of it.
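
(Eric's point - truesize tracks allocated memory, not used bytes - can be illustrated with a hypothetical helper, assuming one full page per frag as netfront's alloc_page gives it.)

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SIZE 4096u  /* assumed page size for this sketch */

/* Hypothetical helper: the memory a page-backed frag truly consumes
 * is its allocation, a whole page, regardless of bytes used. */
static size_t frag_truesize(size_t bytes_used)
{
    (void)bytes_used;          /* even one byte pins the full page */
    return MODEL_PAGE_SIZE;
}

/* Sum the allocated size over nr_frags page-backed fragments. */
static size_t frags_truesize(const size_t *bytes_used, unsigned nr_frags)
{
    size_t total = 0;
    for (unsigned i = 0; i < nr_frags; i++)
        total += frag_truesize(bytes_used[i]);
    return total;
}
```

Three frags using 1, 4096, and 100 bytes still account for three full pages, which is why "PAGE_SIZE * nr_frags" rather than data_len is the strictly correct truesize contribution here.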




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:32:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:32:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl055-0000zG-DT; Tue, 18 Dec 2012 16:31:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TkzAj-00076I-2P
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 15:33:33 +0000
Received: from [85.158.139.83:52224] by server-4.bemta-5.messagelabs.com id
	51/76-14693-CCC80D05; Tue, 18 Dec 2012 15:33:32 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355844808!28166241!1
X-Originating-IP: [209.85.210.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11742 invoked from network); 18 Dec 2012 15:33:29 -0000
Received: from mail-ia0-f180.google.com (HELO mail-ia0-f180.google.com)
	(209.85.210.180)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 15:33:29 -0000
Received: by mail-ia0-f180.google.com with SMTP id t4so658749iag.11
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 07:33:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=BftnZlyGncXjf7M2HBtqLmyMgnb5GQlQpbCPZq37dk4=;
	b=GYm+dTgd/TrGAE099uQKEWwvsDXSr5dB1YfL8N7Hb6150H1gaR7ArgzEAx0XQnnWM7
	avmQFbLvIglnE7O7kM821eyoLCWbmuNQkMThJ9g2T0PC0m251hMqTLw1w10wJgwvLpGT
	H1U4lE+Sx+f5+9Te2oWpQPO1uA3WajO+xQPyUT+JlZieaUCdj9hWnT41SBljeFJ0cl/Q
	NCiZORJpZJM9oxAcNjeNPaWIn4AefjWUo82gZcXywixKSaNhpnem7xzLmGU98q+mzuXS
	+PKI+c1VH9L+o7HI8QwzkcRpLH8kX2DGf/mnca8+4HMknfRxJ/yhhuihtHXjjePW+KGS
	Y01Q==
MIME-Version: 1.0
Received: by 10.50.40.225 with SMTP id a1mr3373943igl.7.1355844807026; Tue, 18
	Dec 2012 07:33:27 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Tue, 18 Dec 2012 07:33:26 -0800 (PST)
Date: Tue, 18 Dec 2012 23:33:26 +0800
Message-ID: <CAKhsbWaE8e7eDxe-RDT4urx9_cB3yKC=DXTuXxjUvEyr0VyM8Q@mail.gmail.com>
From: "G.R." <firemeteor.guo@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Tue, 18 Dec 2012 16:31:46 +0000
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] Correctly expose PCH ISA bridge for IGD
 passthrough (was PCI-PCI bridge)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

Per your request, I am resending your patch with my local modification that
fixes the PCH ISA bridge so it is exposed correctly to domU. This is the Xen
part of the fix to make the i915 driver properly detect the PCH version. A
matching patch on the i915 driver side is required too; I'll send that to the
intel-gfx list separately. The combined patch set does fix the PCH
detection issue for me.

Thanks,
Timothy

diff --git a/hw/pci.c b/hw/pci.c
index f051de1..d371bd7 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
     }
 }

-typedef struct {
-    PCIDevice dev;
-    PCIBus *bus;
-} PCIBridge;
-
 void pci_bridge_write_config(PCIDevice *d,
                              uint32_t address, uint32_t val, int len)
 {
diff --git a/hw/pci.h b/hw/pci.h
index edc58b6..c2acab9 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -222,6 +222,11 @@ struct PCIDevice {
     int irq_state[4];
 };

+typedef struct {
+    PCIDevice dev;
+    PCIBus *bus;
+} PCIBridge;
+
 extern char direct_pci_str[];
 extern int direct_pci_msitranslate;
 extern int direct_pci_power_mgmt;
diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
index c6f8869..de21f90 100644
--- a/hw/pt-graphics.c
+++ b/hw/pt-graphics.c
@@ -3,6 +3,7 @@
  */

 #include "pass-through.h"
+#include "pci.h"
 #include "pci/header.h"
 #include "pci/pci.h"

@@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
     did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
     rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);

-    if ( vid == PCI_VENDOR_ID_INTEL )
-        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
-                        pch_map_irq, "intel_bridge_1f");
+    if (vid == PCI_VENDOR_ID_INTEL) {
+        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
+                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
+
+        pci_config_set_vendor_id(s->dev.config, vid);
+        pci_config_set_device_id(s->dev.config, did);
+
+        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
+        s->dev.config[PCI_COMMAND + 1] = 0x00;
+        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
+        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
+        s->dev.config[PCI_REVISION] = rid;
+        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
+        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
+        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
+        s->dev.config[PCI_HEADER_TYPE] = 0x80;
+        s->dev.config[PCI_SEC_STATUS] = 0xa0;
+
+        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
+    }
 }

 uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:38:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0Au-0001Rw-L5; Tue, 18 Dec 2012 16:37:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <erdnetdev@gmail.com>) id 1Tl0At-0001Rm-1O
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 16:37:47 +0000
Received: from [193.109.254.147:23095] by server-1.bemta-14.messagelabs.com id
	4F/C2-15901-9DB90D05; Tue, 18 Dec 2012 16:37:45 +0000
X-Env-Sender: erdnetdev@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1355848551!11058363!1
X-Originating-IP: [209.85.220.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19366 invoked from network); 18 Dec 2012 16:35:56 -0000
Received: from mail-pa0-f49.google.com (HELO mail-pa0-f49.google.com)
	(209.85.220.49)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:35:56 -0000
Received: by mail-pa0-f49.google.com with SMTP id bi1so654233pad.8
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Dec 2012 08:35:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:subject:from:to:cc:in-reply-to:references:content-type
	:date:message-id:mime-version:x-mailer:content-transfer-encoding;
	bh=0mcuXEEPq8JD6yR4H5ndsB9I/L+XtYu4SkQAYOU+sX4=;
	b=H04DQtVI/WLW1GefjUEJkqccrjX84/0CBwVxRDnDiHp3l10HBF2O3EFy/e8ajjE4v9
	aHEsRP5jkW/pjgfiltlJKMIs6KirA3M1KvV5vXdxdvwCag4CdSyJna/HJKADdH+KRVUu
	WJjK6QH7C5ALkjxtNHRYgs0kqqeuUv15G2VDr3PyDyAADv4+jax+XY/vKVBwTggpWtnd
	Rb2638hXHpa9KSWxw74BvTXainvFy5T2oySscRVKkZzs3k6Rspi6YkV7U0gmDTZDdWcO
	iAWc0u1Oo6Yhg8PWygZtpyzs3E1JrbhTRFyKMCQHC7jVLTMBD2ODw6WVWTZsiMo/tXl1
	yxyA==
X-Received: by 10.68.191.104 with SMTP id gx8mr8303292pbc.138.1355848550546;
	Tue, 18 Dec 2012 08:35:50 -0800 (PST)
Received: from [172.19.247.87] ([172.19.247.87])
	by mx.google.com with ESMTPS id wr4sm1397417pbc.72.2012.12.18.08.35.49
	(version=SSLv3 cipher=OTHER); Tue, 18 Dec 2012 08:35:50 -0800 (PST)
From: Eric Dumazet <erdnetdev@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355847758.14620.265.camel@zakaz.uk.xensource.com>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<1355847180.9380.21.camel@edumazet-glaptop>
	<1355847758.14620.265.camel@zakaz.uk.xensource.com>
Date: Tue, 18 Dec 2012 08:35:48 -0800
Message-ID: <1355848548.9380.35.camel@edumazet-glaptop>
Mime-Version: 1.0
X-Mailer: Evolution 2.28.3 
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 16:22 +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 16:13 +0000, Eric Dumazet wrote:
> > On Tue, 2012-12-18 at 15:26 +0000, Ian Campbell wrote:
> > 
> > > So actually we want += PAGE_SIZE * skb_shinfo(skb)->nr_frags ?
> > > 
> > 
> > I don't know what the real frag sizes are in your case.
> 
> I think it's a page, see xennet_alloc_rx_buffers and the alloc_page
> therein.
> 

If they are order-0 pages, then PAGE_SIZE * nr_frags is OK.


> > Some drivers allocate a full page for an ethernet frame, others use half
> > of a page, it really depends.
> > 

> > As the frag ABI doesn't contain the real size, it's OK in this case to
> > account the actual frag size.
> > 
> > (skb->data_len in your driver)
> 
> I guess I'm a bit confused by what truesize means again then ;-),
> because in that case the original patch is correct although it would
> have been less confusing to do:
> 	skb->truesize += skb->data_len; 
> in xennet_poll() and then do the subtraction of
> NETFRONT_SKB_CB(skb)->pull_to in handle_incoming_queue() where we
> actually do the pull up.
> 
> Unless __pskb_pull_tail does that adjustment for us, but if it does I
> can't see where.

That's because skb frags only contain:

- a page pointer,
- an offset,
- a size (the exact number of bytes used in this frag),

and not the originally allocated size. It could be 256, 768, 2048,
4096, or 65536 bytes; nobody but the driver really knows.

So when we pull X bytes from a fragment to skb->head, there is no way to
recover the original size of that fragment.

Only the driver allocating the frag knows its truesize.

Once the skb is handed to the stack, we lose this information and rely on
skb->truesize being an accurate estimate.
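
A sketch of the point (hypothetical struct and field names, not the kernel's real skb_frag_struct): the fragment records only a page, an offset and the used size, so a pull can shrink the used size but cannot tell us how much allocation the fragment still pins.

```c
#include <assert.h>

/* Hypothetical sketch of what a fragment records: note there is no
 * field for the originally allocated size. */
struct frag_sketch {
    unsigned long page;  /* stand-in for the page pointer */
    unsigned int offset; /* offset of the data within the page */
    unsigned int size;   /* bytes in use, NOT the allocation size */
};

/* Pull up to n bytes from the front of the fragment toward skb->head.
 * Only the used size shrinks; the allocation behind it stays opaque,
 * so truesize cannot be corrected from here. */
static unsigned int frag_pull(struct frag_sketch *f, unsigned int n)
{
    if (n > f->size)
        n = f->size;
    f->offset += n;
    f->size -= n;
    return n;
}
```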




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:40:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0DG-0001Y3-8M; Tue, 18 Dec 2012 16:40:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tl0DE-0001Xs-M3
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:40:12 +0000
Received: from [85.158.143.99:36851] by server-1.bemta-4.messagelabs.com id
	08/BD-28401-C6C90D05; Tue, 18 Dec 2012 16:40:12 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355848808!30024614!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8358 invoked from network); 18 Dec 2012 16:40:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:40:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1132915"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:40:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:40:07 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tl0D9-0000oP-LB;
	Tue, 18 Dec 2012 16:40:07 +0000
Message-ID: <50D09C67.1010703@eu.citrix.com>
Date: Tue, 18 Dec 2012 16:40:07 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Alex Bligh <alex@alex.org.uk>
References: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
	<39A8C47B5D7DD22A52A3DA51@Ximines.local>
	<CAFLBxZY3zXaNQ93zq8YuKQm=+Hc7SoN2uVsjgJ1kZb_=N5CszA@mail.gmail.com>
	<008ECD032A8603F73C448ABD@Ximines.local>
In-Reply-To: <008ECD032A8603F73C448ABD@Ximines.local>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 development update, 15 Oct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/12 16:03, Alex Bligh wrote:
>
>
> --On 18 December 2012 14:28:57 +0000 George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>
>>>> * Make storage migration possible
>>>>    owner: ?
>>>>    status: ?
>>>>    There needs to be a way, either via command-line or via some hooks,
>>>>    that someone can build a "storage migration" feature on top of libxl
>>>>    or xl.
>>>
>>> We have this working with qemu-xen, qcow2 and snapshot rebase. At a libxl
>>> level (but not an xl level), everything seems to be there.
>>
>> Can you describe in more detail how you implement this?  Do you have a
>> script or something?
>
> We have a pile of C code :-)
>
> A script would do something like:
>
> 1. Ask qemu to do a live snapshot using snapshot_blkdev putting the
> snapshot in a new file on the new storage device. This ensures that all new
> writes go to the new storage device.
>
> 2. Rebase the snapshot to a null backing file (I think that's qemu-img
> rebase with -b '', though we had to submit a couple of lines of patches to
> qemu to make it work), which fills the non-written blocks of the new
> snapshot from the old base image and breaks the link to the old base
> image.
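
The two quoted steps might be sketched as shell commands. The drive id and image path below are assumptions, and the script only prints the commands rather than running them, since a live qemu monitor would be needed for the real thing.

```shell
DRIVE=drive-virtio-disk0             # assumed qemu drive id
NEW_IMG=/new-storage/disk-snap.qcow2 # assumed file on the new device

# 1. Live snapshot (HMP command): all subsequent guest writes go to
#    NEW_IMG, which is initially backed by the old image.
step1="snapshot_blkdev $DRIVE $NEW_IMG qcow2"

# 2. Rebase to an empty backing file: qemu-img copies the blocks not
#    yet written into NEW_IMG and drops the link to the old base image.
step2="qemu-img rebase -b '' $NEW_IMG"

printf '%s\n' "$step1" "$step2"
```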

So it sounds like in this case you're talking about moving the disk to a 
different storage device connected to the same host, leaving the VM 
running on the same host.

What I was talking about here was migrating the VM and storage 
together to a new host.  People who want this typically have all VMs on 
local storage, so (if I'm understanding you right) the snapshot trick 
won't work, because host A (where it's running) can't directly access 
host B's disk (to which we want to migrate it).

Although I suppose one could always hack something together with sshfs 
or something. :-)

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:40:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0DG-0001Y3-8M; Tue, 18 Dec 2012 16:40:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tl0DE-0001Xs-M3
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:40:12 +0000
Received: from [85.158.143.99:36851] by server-1.bemta-4.messagelabs.com id
	08/BD-28401-C6C90D05; Tue, 18 Dec 2012 16:40:12 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355848808!30024614!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8358 invoked from network); 18 Dec 2012 16:40:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:40:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1132915"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:40:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:40:07 -0500
Received: from gateway-1.uk.xensource.com ([10.80.16.66] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tl0D9-0000oP-LB;
	Tue, 18 Dec 2012 16:40:07 +0000
Message-ID: <50D09C67.1010703@eu.citrix.com>
Date: Tue, 18 Dec 2012 16:40:07 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Alex Bligh <alex@alex.org.uk>
References: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
	<39A8C47B5D7DD22A52A3DA51@Ximines.local>
	<CAFLBxZY3zXaNQ93zq8YuKQm=+Hc7SoN2uVsjgJ1kZb_=N5CszA@mail.gmail.com>
	<008ECD032A8603F73C448ABD@Ximines.local>
In-Reply-To: <008ECD032A8603F73C448ABD@Ximines.local>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 development update, 15 Oct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/12 16:03, Alex Bligh wrote:
>
>
> --On 18 December 2012 14:28:57 +0000 George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>
>>>> * Make storage migration possible
>>>>    owner: ?
>>>>    status: ?
>>>>    There needs to be a way, either via command-line or via some hooks,
>>>>    that someone can build a "storage migration" feature on top of libxl
>>>>    or xl.
>>>
>>> We have this working with qemu-xen, qcow2 and snapshot rebase. At a libxl
>>> level (but not an xl level), everything seems to be there.
>>
>> Can you describe in more detail how you implement this?  Do you have a
>> script or something?
>
> We have a pile of C code :-)
>
> A script would do something like:
>
> 1. Ask qemu to do a live snapshot using snapshot_blkdev, putting the
> snapshot in a new file on the new storage device. This ensures that all new
> writes go to the new storage device.
>
> 2. Rebase the snapshot onto a null backing file (I think that's qemu-img
> rebase with -b '', though we had to submit a couple of lines of patch to
> qemu to make it work), which fills the non-written blocks of the new
> snapshot from the old base image and breaks the link to the old base
> image.
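
[The two quoted steps can be sketched as the pair of commands below. This is a
minimal illustration, not the poster's actual C code: "virtio0" and the image
path are hypothetical placeholders, and the empty-backing rebase is the form
Alex says needed a small qemu patch at the time.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Assemble the HMP monitor command for step 1 (live snapshot onto the
 * new storage) and the qemu-img invocation for step 2 (rebase to an
 * empty backing file).  Device and path names are illustrative. */
static void storage_migration_cmds(const char *device, const char *new_img,
                                   char *snap_cmd, size_t snap_len,
                                   char *rebase_cmd, size_t rebase_len)
{
    /* Step 1: after this, all new guest writes land on new_img. */
    snprintf(snap_cmd, snap_len,
             "snapshot_blkdev %s %s qcow2", device, new_img);
    /* Step 2: a safe rebase copies the not-yet-written blocks from the
     * old base into new_img, then drops the backing-file link. */
    snprintf(rebase_cmd, rebase_len,
             "qemu-img rebase -b '' %s", new_img);
}
```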

So it sounds like in this case you're talking about moving the disk to a 
different storage device connected to the same host, leaving the VM 
running on the same host.

What I was talking about here was migrating the VM and its storage 
together to a new host.  People who want this typically have all VMs on 
local storage, so (if I'm understanding you right) the snapshot trick 
won't work, because host A (where the VM is running) can't directly access 
host B's disk (to which we want to migrate it).

Although I suppose one could always hack something together with sshfs 
or something. :-)

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:43:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0G2-0001i3-RB; Tue, 18 Dec 2012 16:43:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Tl0G1-0001hv-AN
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:43:05 +0000
Received: from [85.158.143.35:16197] by server-2.bemta-4.messagelabs.com id
	77/2E-30861-71D90D05; Tue, 18 Dec 2012 16:43:03 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355848973!13304523!1
X-Originating-IP: [209.85.217.172]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22208 invoked from network); 18 Dec 2012 16:42:55 -0000
Received: from mail-lb0-f172.google.com (HELO mail-lb0-f172.google.com)
	(209.85.217.172)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:42:55 -0000
Received: by mail-lb0-f172.google.com with SMTP id y2so900462lbk.3
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 08:42:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=YlEGBSqrOd4jkwd2Bn/gXlkVC0OVh3CTP43F6p8r/pM=;
	b=Z+GMas5oDJXKfMh95/AxKtW7IFIWipFO1aVYiIFhmA4AsUS/2xFhvKAS9psfFGguvb
	8qGO8PwBj8SHWa1C9Mr5fttqR3aTbC93WRX2z69S/ARZn2H1evwr8PCJ8rDGWg/NpLep
	Ymw+d6i3dLuC8NWVEOGs9JOWSaC0yM/6Er9r6MnwHUJpa1ojkOVoZFY4qAB03WXoj++v
	WYDqPWtZPoAFhfcJV0ZuoIogaJsMdfosGYIoxZoGpx7ME1LQbQyg8eAfiFgU/y0wm0Ku
	8wLA+e1+TrISzXqtH4seTikwdKVh6jVWKw8e4SJzVvzYg1/9d1ppdYdXymIdakqWJezy
	eXig==
Received: by 10.112.43.161 with SMTP id x1mr1139267lbl.32.1355848973254; Tue,
	18 Dec 2012 08:42:53 -0800 (PST)
MIME-Version: 1.0
Received: by 10.152.19.103 with HTTP; Tue, 18 Dec 2012 08:42:33 -0800 (PST)
In-Reply-To: <03b4c57dd562e5477615.1355843929@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
	<03b4c57dd562e5477615.1355843929@cosworth.uk.xensource.com>
From: Dario Faggioli <raistlin@linux.it>
Date: Tue, 18 Dec 2012 17:42:33 +0100
X-Google-Sender-Auth: sX9ktkhST_gVUHfZlQgxv7bXPo0
Message-ID: <CAAWQecuHHiwy+vAWrXhSq1qPU-WNX-QqmeFJ7=L_o9EQuTzi_w@mail.gmail.com>
To: Ian Campbell <ian.campbell@citrix.com>
Cc: ian.jackson@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3 of 3 V2] xl: SWITCH_FOREACH_OPT handles
 special options directly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 4:18 PM, Ian Campbell <ian.campbell@citrix.com> wrote:
> xl: SWITCH_FOREACH_OPT handles special options directly.
>
> This removes the need for the "case 0: case 2:" boilerplate in every
> main_foo() but at the expense of a return in a macro which I find
> (mildly) distasteful.
>
I tend not to like return in macros either, but, in this case, I think I
like it more than having to always put that "case 0: case 2:" thing
explicitly... So, FWIW, I'd keep this patch.
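
[A minimal model of the pattern under discussion, for context. The macro
names, the stand-in for getopt(), and the -1 error return are hypothetical;
the real macro is xl's SWITCH_FOREACH_OPT in the patch being reviewed.]

```c
#include <assert.h>

/* An option-parsing loop macro that handles the "special" results
 * itself, including a return from the enclosing function -- the part
 * Ian calls (mildly) distasteful, since callers can't see that the
 * loop may exit main_foo(). */
#define SWITCH_FOREACH_OPT(opt, getopt_expr)                            \
    while ( ((opt) = (getopt_expr)) != -1 )                             \
        switch (opt) {                                                  \
        case '?': /* unknown option: handled here, not by the caller */ \
            return -1;                                                  \
        default:
#define END_SWITCH_FOREACH_OPT }

static int next_opt(const int *opts, int n, int *i)
{
    return *i < n ? opts[(*i)++] : -1; /* stand-in for getopt() */
}

/* main_foo() no longer needs its own "case 0: case 2:"-style clauses. */
static int main_foo(const int *opts, int n)
{
    int i = 0, opt, seen = 0;

    SWITCH_FOREACH_OPT(opt, next_opt(opts, n, &i))
        seen++; /* ordinary option */
    END_SWITCH_FOREACH_OPT

    return seen;
}
```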

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
---------------------------------------------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:49:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0MD-0001xR-QL; Tue, 18 Dec 2012 16:49:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0MC-0001xM-Nz
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:49:28 +0000
Received: from [85.158.137.99:58941] by server-4.bemta-3.messagelabs.com id
	D1/CC-31835-29E90D05; Tue, 18 Dec 2012 16:49:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355849362!18229799!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1775 invoked from network); 18 Dec 2012 16:49:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:49:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="231786"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 16:49:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 16:49:12 +0000
Message-ID: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:11 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Tim Deegan <tim@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: [Xen-devel] [PATCH 0/5] xen: arm: fix guest register access bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All the places where we currently access guest registers are subtly
broken: they always access the usr copy of a banked register, regardless
of the guest VCPU mode. Luckily, because fiq mode isn't used much, this
mostly affects the SP and LR registers, which are not typically used in
mmio instructions (which are the main reason for this kind of
emulation). However, the effect of hitting the bug is going to be some
pretty weird behaviour!

I've also included, mostly because they were in my branch and rebasing
things over them is a faff, some patches to clean up the tabs and line
lengths in entry.S.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0MZ-0001zT-7G; Tue, 18 Dec 2012 16:49:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0MY-0001zG-3R
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:49:50 +0000
Received: from [85.158.143.35:57227] by server-3.bemta-4.messagelabs.com id
	D1/83-18211-DAE90D05; Tue, 18 Dec 2012 16:49:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355849379!13638621!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23877 invoked from network); 18 Dec 2012 16:49:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:49:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1134420"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:37 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-Rx;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:36 +0000
Message-ID: <1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/5] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We weren't taking the guest mode (CPSR) into account and would always
access the user version of the registers.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/traps.c       |   62 ++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vgic.c        |    4 +-
 xen/arch/arm/vpl011.c      |    4 +-
 xen/arch/arm/vtimer.c      |    8 +++---
 xen/include/asm-arm/regs.h |    6 ++++
 5 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 096dc0b..e3c0290 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -73,6 +73,64 @@ static void print_xen_info(void)
            debug, print_tainted(taint_str));
 }
 
+uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg)
+{
+    BUG_ON( !guest_mode(regs) );
+
+    /*
+     * We rely heavily on the layout of cpu_user_regs to avoid having
+     * to handle all of the registers individually. Use BUILD_BUG_ON to
+     * ensure that things which we expect to be contiguous actually are.
+     */
+#define REGOFFS(R) offsetof(struct cpu_user_regs, R)
+
+    switch ( reg ) {
+    case 0 ... 7: /* Unbanked registers */
+        BUILD_BUG_ON(REGOFFS(r0) + 7*sizeof(uint32_t) != REGOFFS(r7));
+        return &regs->r0 + reg;
+    case 8 ... 12: /* Register banked in FIQ mode */
+        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
+        if ( fiq_mode(regs) )
+            return &regs->r8_fiq + reg - 8;
+        else
+            return &regs->r0 + reg;
+    case 13 ... 14: /* Banked SP + LR registers */
+        BUILD_BUG_ON(REGOFFS(sp_fiq) + 1*sizeof(uint32_t) != REGOFFS(lr_fiq));
+        BUILD_BUG_ON(REGOFFS(sp_irq) + 1*sizeof(uint32_t) != REGOFFS(lr_irq));
+        BUILD_BUG_ON(REGOFFS(sp_svc) + 1*sizeof(uint32_t) != REGOFFS(lr_svc));
+        BUILD_BUG_ON(REGOFFS(sp_abt) + 1*sizeof(uint32_t) != REGOFFS(lr_abt));
+        BUILD_BUG_ON(REGOFFS(sp_und) + 1*sizeof(uint32_t) != REGOFFS(lr_und));
+        switch ( regs->cpsr & PSR_MODE_MASK )
+        {
+        case PSR_MODE_USR:
+        case PSR_MODE_SYS: /* Sys regs are the usr regs */
+            if ( reg == 13 )
+                return &regs->sp_usr;
+            else /* lr_usr == lr in a user frame */
+                return &regs->lr;
+        case PSR_MODE_FIQ:
+            return &regs->sp_fiq + reg - 13;
+        case PSR_MODE_IRQ:
+            return &regs->sp_irq + reg - 13;
+        case PSR_MODE_SVC:
+            return &regs->sp_svc + reg - 13;
+        case PSR_MODE_ABT:
+            return &regs->sp_abt + reg - 13;
+        case PSR_MODE_UND:
+            return &regs->sp_und + reg - 13;
+        case PSR_MODE_MON:
+        case PSR_MODE_HYP:
+        default:
+            BUG();
+        }
+    case 15: /* PC */
+        return &regs->pc;
+    default:
+        BUG();
+    }
+#undef REGOFFS
+}
+
 static const char *decode_fsc(uint32_t fsc, int *level)
 {
     const char *msg = NULL;
@@ -448,7 +506,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
     switch ( code ) {
     case 0xe0 ... 0xef:
         reg = code - 0xe0;
-        r = &regs->r0 + reg;
+        r = select_user_reg(regs, reg);
         printk("DOM%d: R%d = %#010"PRIx32" at %#010"PRIx32"\n",
                domid, reg, *r, regs->pc);
         break;
@@ -518,7 +576,7 @@ static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
 
     if ( !cp32.ccvalid ) {
         dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n");
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3f7e757..59780d2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -160,7 +160,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
@@ -372,7 +372,7 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 1522667..7dcee90 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -92,7 +92,7 @@ static int uart0_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
@@ -114,7 +114,7 @@ static int uart0_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 490b021..07994b2 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -21,7 +21,7 @@
 #include <xen/lib.h>
 #include <xen/timer.h>
 #include <xen/sched.h>
-#include "gic.h"
+#include <asm/regs.h>
 
 extern s_time_t ticks_to_ns(uint64_t ticks);
 extern uint64_t ns_to_ticks(s_time_t ns);
@@ -49,7 +49,7 @@ static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
     s_time_t now;
 
     switch ( hsr.bits & HSR_CP32_REGS_MASK )
@@ -101,8 +101,8 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = &regs->r0 + cp64.reg1;
-    uint32_t *r2 = &regs->r0 + cp64.reg2;
+    uint32_t *r1 = select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = select_user_reg(regs, cp64.reg2);
     uint64_t ticks;
     s_time_t now;
 
diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h
index 54f6ed8..7486944 100644
--- a/xen/include/asm-arm/regs.h
+++ b/xen/include/asm-arm/regs.h
@@ -30,6 +30,12 @@
 
 #define return_reg(v) ((v)->arch.cpu_info->guest_cpu_user_regs.r0)
 
+/*
+ * Returns a pointer to the given register value in regs, taking the
+ * processor mode (CPSR) into account.
+ */
+extern uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg);
+
 #endif /* __ARM_REGS_H__ */
 /*
  * Local variables:
-- 
1.7.2.5
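
[The layout trick the patch guards with BUILD_BUG_ON can be illustrated in
plain C. This is a standalone sketch, not the Xen struct: `demo_regs` and
`demo_select_reg` are hypothetical names, and only the unbanked r0..r7 range
is modelled.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* If r0..r7 are contiguous uint32_t fields, a register number can be
 * turned into a pointer with plain array arithmetic; the layout
 * assumption is checked at compile time, as the patch does with
 * BUILD_BUG_ON. */
struct demo_regs {
    uint32_t r0, r1, r2, r3, r4, r5, r6, r7;
    uint32_t sp_usr, lr;
};

_Static_assert(offsetof(struct demo_regs, r0) + 7 * sizeof(uint32_t)
               == offsetof(struct demo_regs, r7),
               "r0..r7 must be contiguous");

static uint32_t *demo_select_reg(struct demo_regs *regs, int reg)
{
    assert(reg >= 0 && reg <= 7); /* unbanked range only, in this sketch */
    return &regs->r0 + reg;
}
```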


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0MZ-0001zT-7G; Tue, 18 Dec 2012 16:49:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0MY-0001zG-3R
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:49:50 +0000
Received: from [85.158.143.35:57227] by server-3.bemta-4.messagelabs.com id
	D1/83-18211-DAE90D05; Tue, 18 Dec 2012 16:49:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355849379!13638621!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23877 invoked from network); 18 Dec 2012 16:49:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:49:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1134420"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:37 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-Rx;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:36 +0000
Message-ID: <1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/5] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We weren't taking the guest mode (CPSR) into account and would always
access the user version of the registers.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/traps.c       |   62 ++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vgic.c        |    4 +-
 xen/arch/arm/vpl011.c      |    4 +-
 xen/arch/arm/vtimer.c      |    8 +++---
 xen/include/asm-arm/regs.h |    6 ++++
 5 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 096dc0b..e3c0290 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -73,6 +73,64 @@ static void print_xen_info(void)
            debug, print_tainted(taint_str));
 }
 
+uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg)
+{
+    BUG_ON( guest_mode(regs) );
+
+    /*
+     * We rely heavily on the layout of cpu_user_regs to avoid having
+     * to handle all of the registers individually. Use BUILD_BUG_ON to
+     * ensure that things which expect are contiguous actually are.
+     */
+#define REGOFFS(R) offsetof(struct cpu_user_regs, R)
+
+    switch ( reg ) {
+    case 0 ... 7: /* Unbanked registers */
+        BUILD_BUG_ON(REGOFFS(r0) + 7*sizeof(uint32_t) != REGOFFS(r7));
+        return &regs->r0 + reg;
+    case 8 ... 12: /* Register banked in FIQ mode */
+        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
+        if ( fiq_mode(regs) )
+            return &regs->r8_fiq + reg - 8;
+        else
+            return &regs->r8 + reg - 8;
+    case 13 ... 14: /* Banked SP + LR registers */
+        BUILD_BUG_ON(REGOFFS(sp_fiq) + 1*sizeof(uint32_t) != REGOFFS(lr_fiq));
+        BUILD_BUG_ON(REGOFFS(sp_irq) + 1*sizeof(uint32_t) != REGOFFS(lr_irq));
+        BUILD_BUG_ON(REGOFFS(sp_svc) + 1*sizeof(uint32_t) != REGOFFS(lr_svc));
+        BUILD_BUG_ON(REGOFFS(sp_abt) + 1*sizeof(uint32_t) != REGOFFS(lr_abt));
+        BUILD_BUG_ON(REGOFFS(sp_und) + 1*sizeof(uint32_t) != REGOFFS(lr_und));
+        switch ( regs->cpsr & PSR_MODE_MASK )
+        {
+        case PSR_MODE_USR:
+        case PSR_MODE_SYS: /* Sys regs are the usr regs */
+            if ( reg == 13 )
+                return &regs->sp_usr;
+            else /* lr_usr == lr in a user frame */
+                return &regs->lr;
+        case PSR_MODE_FIQ:
+            return &regs->sp_fiq + reg - 13;
+        case PSR_MODE_IRQ:
+            return &regs->sp_irq + reg - 13;
+        case PSR_MODE_SVC:
+            return &regs->sp_svc + reg - 13;
+        case PSR_MODE_ABT:
+            return &regs->sp_abt + reg - 13;
+        case PSR_MODE_UND:
+            return &regs->sp_und + reg - 13;
+        case PSR_MODE_MON:
+        case PSR_MODE_HYP:
+        default:
+            BUG();
+        }
+    case 15: /* PC */
+        return &regs->pc;
+    default:
+        BUG();
+    }
+#undef REGOFFS
+}
+
 static const char *decode_fsc(uint32_t fsc, int *level)
 {
     const char *msg = NULL;
@@ -448,7 +506,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
     switch ( code ) {
     case 0xe0 ... 0xef:
         reg = code - 0xe0;
-        r = &regs->r0 + reg;
+        r = select_user_reg(regs, reg);
         printk("DOM%d: R%d = %#010"PRIx32" at %#010"PRIx32"\n",
                domid, reg, *r, regs->pc);
         break;
@@ -518,7 +576,7 @@ static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
 
     if ( !cp32.ccvalid ) {
         dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n");
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 3f7e757..59780d2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -160,7 +160,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
@@ -372,7 +372,7 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 1522667..7dcee90 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -92,7 +92,7 @@ static int uart0_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
@@ -114,7 +114,7 @@ static int uart0_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 490b021..07994b2 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -21,7 +21,7 @@
 #include <xen/lib.h>
 #include <xen/timer.h>
 #include <xen/sched.h>
-#include "gic.h"
+#include <asm/regs.h>
 
 extern s_time_t ticks_to_ns(uint64_t ticks);
 extern uint64_t ns_to_ticks(s_time_t ns);
@@ -49,7 +49,7 @@ static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
     s_time_t now;
 
     switch ( hsr.bits & HSR_CP32_REGS_MASK )
@@ -101,8 +101,8 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = &regs->r0 + cp64.reg1;
-    uint32_t *r2 = &regs->r0 + cp64.reg2;
+    uint32_t *r1 = select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = select_user_reg(regs, cp64.reg2);
     uint64_t ticks;
     s_time_t now;
 
diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h
index 54f6ed8..7486944 100644
--- a/xen/include/asm-arm/regs.h
+++ b/xen/include/asm-arm/regs.h
@@ -30,6 +30,12 @@
 
 #define return_reg(v) ((v)->arch.cpu_info->guest_cpu_user_regs.r0)
 
+/*
+ * Returns a pointer to the given register value in regs, taking the
+ * processor mode (CPSR) into account.
+ */
+extern uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg);
+
 #endif /* __ARM_REGS_H__ */
 /*
  * Local variables:
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0Mx-00022w-L7; Tue, 18 Dec 2012 16:50:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0Mw-00022i-2x
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:50:14 +0000
Received: from [193.109.254.147:50089] by server-13.bemta-14.messagelabs.com
	id 14/0A-01725-5CE90D05; Tue, 18 Dec 2012 16:50:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355849410!10823492!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5685 invoked from network); 18 Dec 2012 16:50:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:50:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1064223"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:37 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-QK;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:35 +0000
Message-ID: <1355849376-26652-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/5] xen: arm: reorder registers in struct
	cpu_user_regs.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Primarily this is so that they are ordered in the same way as the
mapping from arm64 x0..x31 registers to the arm32 registers, which is
just less confusing for everyone going forward.

It also makes the implementation of select_user_reg in the next patch
slightly simpler.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/entry.S          |    4 ++--
 xen/arch/arm/io.h             |    1 +
 xen/arch/arm/traps.c          |    2 +-
 xen/include/public/arch-arm.h |   11 +++++++----
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index cbd1c48..3611427 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -12,7 +12,7 @@
         RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
 
 #define SAVE_ALL                                                        \
-        sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
+        sub sp, #(UREGS_SP_usr - UREGS_sp); /* SP, LR, SPSR, PC */      \
         push {r0-r12}; /* Save R0-R12 */                                \
                                                                         \
         mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
@@ -115,7 +115,7 @@ ENTRY(return_to_hypervisor)
         ldr r11, [sp, #UREGS_cpsr]
         msr SPSR_hyp, r11
         pop {r0-r12}
-        add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
+        add sp, #(UREGS_SP_usr - UREGS_sp); /* SP, LR, SPSR, PC */
         eret
 
 /*
diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
index 9a507f5..0933aa8 100644
--- a/xen/arch/arm/io.h
+++ b/xen/arch/arm/io.h
@@ -21,6 +21,7 @@
 
 #include <xen/lib.h>
 #include <asm/processor.h>
+#include <asm/regs.h>
 
 typedef struct
 {
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 19e2081..096dc0b 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -43,7 +43,7 @@
  * stack) must be doubleword-aligned in size.  */
 static inline void check_stack_alignment_constraints(void) {
     BUILD_BUG_ON((sizeof (struct cpu_user_regs)) & 0x7);
-    BUILD_BUG_ON((offsetof(struct cpu_user_regs, r8_fiq)) & 0x7);
+    BUILD_BUG_ON((offsetof(struct cpu_user_regs, sp_usr)) & 0x7);
     BUILD_BUG_ON((sizeof (struct cpu_info)) & 0x7);
 }
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index ff02d15..d8788f2 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -119,12 +119,15 @@ struct cpu_user_regs
 
     /* Outer guest frame only from here on... */
 
-    uint32_t r8_fiq, r9_fiq, r10_fiq, r11_fiq, r12_fiq;
-
     uint32_t sp_usr; /* LR_usr is the same register as LR, see above */
 
-    uint32_t sp_svc, sp_abt, sp_und, sp_irq, sp_fiq;
-    uint32_t lr_svc, lr_abt, lr_und, lr_irq, lr_fiq;
+    uint32_t sp_irq, lr_irq;
+    uint32_t sp_svc, lr_svc;
+    uint32_t sp_abt, lr_abt;
+    uint32_t sp_und, lr_und;
+
+    uint32_t r8_fiq, r9_fiq, r10_fiq, r11_fiq, r12_fiq;
+    uint32_t sp_fiq, lr_fiq;
 
     uint32_t spsr_svc, spsr_abt, spsr_und, spsr_irq, spsr_fiq;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0N0-00023r-H9; Tue, 18 Dec 2012 16:50:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0Mz-00023D-GU
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:50:17 +0000
Received: from [85.158.139.83:49271] by server-7.bemta-5.messagelabs.com id
	9D/5A-08009-8CE90D05; Tue, 18 Dec 2012 16:50:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355849406!28179612!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4032 invoked from network); 18 Dec 2012 16:50:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:50:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1064224"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:36 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-NN;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:32 +0000
Message-ID: <1355849376-26652-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/5] xen: arm: fix long lines in entry.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/entry.S |   66 +++++++++++++++++++++++++-------------------------
 1 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index 1d6ff32..83793c2 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -11,22 +11,22 @@
 #define RESTORE_BANKED(mode) \
 	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
 
-#define SAVE_ALL											\
-	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */					\
-	push {r0-r12}; /* Save R0-R12 */								\
-													\
-	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */				\
-	str r11, [sp, #UREGS_pc];									\
-													\
-	str lr, [sp, #UREGS_lr];									\
-													\
-	add r11, sp, #UREGS_kernel_sizeof+4;								\
-	str r11, [sp, #UREGS_sp];									\
-													\
-	mrs r11, SPSR_hyp;										\
-	str r11, [sp, #UREGS_cpsr];									\
-	and r11, #PSR_MODE_MASK;									\
-	cmp r11, #PSR_MODE_HYP;										\
+#define SAVE_ALL							\
+	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
+	push {r0-r12}; /* Save R0-R12 */				\
+									\
+	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
+	str r11, [sp, #UREGS_pc];					\
+									\
+	str lr, [sp, #UREGS_lr];					\
+									\
+	add r11, sp, #UREGS_kernel_sizeof+4;				\
+	str r11, [sp, #UREGS_sp];					\
+									\
+	mrs r11, SPSR_hyp;						\
+	str r11, [sp, #UREGS_cpsr];					\
+	and r11, #PSR_MODE_MASK;					\
+	cmp r11, #PSR_MODE_HYP;						\
 	blne save_guest_regs
 
 save_guest_regs:
@@ -43,25 +43,25 @@ save_guest_regs:
 	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
 	mov pc, lr
 
-#define DEFINE_TRAP_ENTRY(trap)										\
-	ALIGN;												\
-trap_##trap:												\
-	SAVE_ALL;											\
-	cpsie i; 	/* local_irq_enable */								\
-	adr lr, return_from_trap;									\
-	mov r0, sp;											\
-	mov r11, sp;											\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
+#define DEFINE_TRAP_ENTRY(trap)						\
+	ALIGN;								\
+trap_##trap:								\
+	SAVE_ALL;							\
+	cpsie i; 	/* local_irq_enable */				\
+	adr lr, return_from_trap;					\
+	mov r0, sp;							\
+	mov r11, sp;							\
+	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
 	b do_trap_##trap
 
-#define DEFINE_TRAP_ENTRY_NOIRQ(trap)									\
-	ALIGN;												\
-trap_##trap:												\
-	SAVE_ALL;											\
-	adr lr, return_from_trap;									\
-	mov r0, sp;											\
-	mov r11, sp;											\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
+#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
+	ALIGN;								\
+trap_##trap:								\
+	SAVE_ALL;							\
+	adr lr, return_from_trap;					\
+	mov r0, sp;							\
+	mov r11, sp;							\
+	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
 	b do_trap_##trap
 
 .globl hyp_traps_vector
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0N0-00023f-1y; Tue, 18 Dec 2012 16:50:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0Mz-00023A-44
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:50:17 +0000
Received: from [85.158.139.83:56895] by server-3.bemta-5.messagelabs.com id
	76/88-25441-8CE90D05; Tue, 18 Dec 2012 16:50:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355849406!28179612!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31186 invoked from network); 18 Dec 2012 16:50:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:50:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1064229"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:36 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-P5;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:33 +0000
Message-ID: <1355849376-26652-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/5] xen: arm: remove hard tabs from asm code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Run expand(1) over xen/arch/arm/.../*.S

Add emacs local vars block.
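
[Editor's illustration, not part of the patch: the conversion described above
amounts to something like the following; the exact command line used is not
recorded in the patch, so this is a sketch using the POSIX expand(1) default
tab stops of 8 columns.]

```shell
# Demonstrate the expand(1) conversion on a sample assembler line:
# each hard tab becomes spaces up to the next 8-column tab stop.
sample=$(mktemp)
printf 'mov\tr0, r1\n' > "$sample"
expand "$sample" > "$sample.out"
# The expanded output must contain no hard tabs.
if grep -q "$(printf '\t')" "$sample.out"; then
    echo "tabs remain"
else
    echo "no tabs"
fi
rm -f "$sample" "$sample.out"
```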

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/entry.S       |  194 +++++++-------
 xen/arch/arm/head.S        |  613 ++++++++++++++++++++++----------------------
 xen/arch/arm/mode_switch.S |  145 ++++++-----
 xen/arch/arm/proc-ca15.S   |   17 +-
 4 files changed, 498 insertions(+), 471 deletions(-)

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index 83793c2..cbd1c48 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -2,79 +2,79 @@
 #include <asm/asm_defns.h>
 #include <public/xen.h>
 
-#define SAVE_ONE_BANKED(reg)	mrs r11, reg; str r11, [sp, #UREGS_##reg]
-#define RESTORE_ONE_BANKED(reg)	ldr r11, [sp, #UREGS_##reg]; msr reg, r11
+#define SAVE_ONE_BANKED(reg)    mrs r11, reg; str r11, [sp, #UREGS_##reg]
+#define RESTORE_ONE_BANKED(reg) ldr r11, [sp, #UREGS_##reg]; msr reg, r11
 
 #define SAVE_BANKED(mode) \
-	SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
+        SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
 
 #define RESTORE_BANKED(mode) \
-	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
+        RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
 
-#define SAVE_ALL							\
-	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
-	push {r0-r12}; /* Save R0-R12 */				\
-									\
-	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
-	str r11, [sp, #UREGS_pc];					\
-									\
-	str lr, [sp, #UREGS_lr];					\
-									\
-	add r11, sp, #UREGS_kernel_sizeof+4;				\
-	str r11, [sp, #UREGS_sp];					\
-									\
-	mrs r11, SPSR_hyp;						\
-	str r11, [sp, #UREGS_cpsr];					\
-	and r11, #PSR_MODE_MASK;					\
-	cmp r11, #PSR_MODE_HYP;						\
-	blne save_guest_regs
+#define SAVE_ALL                                                        \
+        sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
+        push {r0-r12}; /* Save R0-R12 */                                \
+                                                                        \
+        mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
+        str r11, [sp, #UREGS_pc];                                       \
+                                                                        \
+        str lr, [sp, #UREGS_lr];                                        \
+                                                                        \
+        add r11, sp, #UREGS_kernel_sizeof+4;                            \
+        str r11, [sp, #UREGS_sp];                                       \
+                                                                        \
+        mrs r11, SPSR_hyp;                                              \
+        str r11, [sp, #UREGS_cpsr];                                     \
+        and r11, #PSR_MODE_MASK;                                        \
+        cmp r11, #PSR_MODE_HYP;                                         \
+        blne save_guest_regs
 
 save_guest_regs:
-	ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
-	str r11, [sp, #UREGS_sp]
-	SAVE_ONE_BANKED(SP_usr)
-	/* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
-	SAVE_BANKED(svc)
-	SAVE_BANKED(abt)
-	SAVE_BANKED(und)
-	SAVE_BANKED(irq)
-	SAVE_BANKED(fiq)
-	SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
-	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
-	mov pc, lr
+        ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
+        str r11, [sp, #UREGS_sp]
+        SAVE_ONE_BANKED(SP_usr)
+        /* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
+        SAVE_BANKED(svc)
+        SAVE_BANKED(abt)
+        SAVE_BANKED(und)
+        SAVE_BANKED(irq)
+        SAVE_BANKED(fiq)
+        SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
+        SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
+        mov pc, lr
 
-#define DEFINE_TRAP_ENTRY(trap)						\
-	ALIGN;								\
-trap_##trap:								\
-	SAVE_ALL;							\
-	cpsie i; 	/* local_irq_enable */				\
-	adr lr, return_from_trap;					\
-	mov r0, sp;							\
-	mov r11, sp;							\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
-	b do_trap_##trap
+#define DEFINE_TRAP_ENTRY(trap)                                         \
+        ALIGN;                                                          \
+trap_##trap:                                                            \
+        SAVE_ALL;                                                       \
+        cpsie i;        /* local_irq_enable */                          \
+        adr lr, return_from_trap;                                       \
+        mov r0, sp;                                                     \
+        mov r11, sp;                                                    \
+        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
+        b do_trap_##trap
 
-#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
-	ALIGN;								\
-trap_##trap:								\
-	SAVE_ALL;							\
-	adr lr, return_from_trap;					\
-	mov r0, sp;							\
-	mov r11, sp;							\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
-	b do_trap_##trap
+#define DEFINE_TRAP_ENTRY_NOIRQ(trap)                                   \
+        ALIGN;                                                          \
+trap_##trap:                                                            \
+        SAVE_ALL;                                                       \
+        adr lr, return_from_trap;                                       \
+        mov r0, sp;                                                     \
+        mov r11, sp;                                                    \
+        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
+        b do_trap_##trap
 
 .globl hyp_traps_vector
-	.align 5
+        .align 5
 hyp_traps_vector:
-	.word 0				/* 0x00 - Reset */
-	b trap_undefined_instruction	/* 0x04 - Undefined Instruction */
-	b trap_supervisor_call		/* 0x08 - Supervisor Call */
-	b trap_prefetch_abort		/* 0x0c - Prefetch Abort */
-	b trap_data_abort		/* 0x10 - Data Abort */
-	b trap_hypervisor		/* 0x14 - Hypervisor */
-	b trap_irq			/* 0x18 - IRQ */
-	b trap_fiq			/* 0x1c - FIQ */
+        .word 0                         /* 0x00 - Reset */
+        b trap_undefined_instruction    /* 0x04 - Undefined Instruction */
+        b trap_supervisor_call          /* 0x08 - Supervisor Call */
+        b trap_prefetch_abort           /* 0x0c - Prefetch Abort */
+        b trap_data_abort               /* 0x10 - Data Abort */
+        b trap_hypervisor               /* 0x14 - Hypervisor */
+        b trap_irq                      /* 0x18 - IRQ */
+        b trap_fiq                      /* 0x1c - FIQ */
 
 DEFINE_TRAP_ENTRY(undefined_instruction)
 DEFINE_TRAP_ENTRY(supervisor_call)
@@ -85,38 +85,38 @@ DEFINE_TRAP_ENTRY_NOIRQ(irq)
 DEFINE_TRAP_ENTRY_NOIRQ(fiq)
 
 return_from_trap:
-	mov sp, r11
+        mov sp, r11
 ENTRY(return_to_new_vcpu)
-	ldr r11, [sp, #UREGS_cpsr]
-	and r11, #PSR_MODE_MASK
-	cmp r11, #PSR_MODE_HYP
-	beq return_to_hypervisor
-	/* Fall thru */
+        ldr r11, [sp, #UREGS_cpsr]
+        and r11, #PSR_MODE_MASK
+        cmp r11, #PSR_MODE_HYP
+        beq return_to_hypervisor
+        /* Fall thru */
 ENTRY(return_to_guest)
-	mov r11, sp
-	bic sp, #7 /* Align the stack pointer */
-	bl leave_hypervisor_tail /* Disables interrupts on return */
-	mov sp, r11
-	RESTORE_ONE_BANKED(SP_usr)
-	/* LR_usr is the same physical register as lr and is restored below */
-	RESTORE_BANKED(svc)
-	RESTORE_BANKED(abt)
-	RESTORE_BANKED(und)
-	RESTORE_BANKED(irq)
-	RESTORE_BANKED(fiq)
-	RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
-	RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
-	/* Fall thru */
+        mov r11, sp
+        bic sp, #7 /* Align the stack pointer */
+        bl leave_hypervisor_tail /* Disables interrupts on return */
+        mov sp, r11
+        RESTORE_ONE_BANKED(SP_usr)
+        /* LR_usr is the same physical register as lr and is restored below */
+        RESTORE_BANKED(svc)
+        RESTORE_BANKED(abt)
+        RESTORE_BANKED(und)
+        RESTORE_BANKED(irq)
+        RESTORE_BANKED(fiq)
+        RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
+        RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
+        /* Fall thru */
 ENTRY(return_to_hypervisor)
-	cpsid i
-	ldr lr, [sp, #UREGS_lr]
-	ldr r11, [sp, #UREGS_pc]
-	msr ELR_hyp, r11
-	ldr r11, [sp, #UREGS_cpsr]
-	msr SPSR_hyp, r11
-	pop {r0-r12}
-	add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
-	eret
+        cpsid i
+        ldr lr, [sp, #UREGS_lr]
+        ldr r11, [sp, #UREGS_pc]
+        msr ELR_hyp, r11
+        ldr r11, [sp, #UREGS_cpsr]
+        msr SPSR_hyp, r11
+        pop {r0-r12}
+        add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
+        eret
 
 /*
  * struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next)
@@ -127,9 +127,15 @@ ENTRY(return_to_hypervisor)
  * Returns prev in r0
  */
 ENTRY(__context_switch)
-	add     ip, r0, #VCPU_arch_saved_context
-	stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
+        add     ip, r0, #VCPU_arch_saved_context
+        stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
 
-	add     r4, r1, #VCPU_arch_saved_context
-	ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
+        add     r4, r1, #VCPU_arch_saved_context
+        ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
 
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
index 8e2e284..0d9a799 100644
--- a/xen/arch/arm/head.S
+++ b/xen/arch/arm/head.S
@@ -36,366 +36,366 @@
  * Clobbers r0-r3. */
 #ifdef EARLY_UART_ADDRESS
 #define PRINT(_s)       \
-	adr   r0, 98f ; \
-	bl    puts    ; \
-	b     99f     ; \
-98:	.asciz _s     ; \
-	.align 2      ; \
+        adr   r0, 98f ; \
+        bl    puts    ; \
+        b     99f     ; \
+98:     .asciz _s     ; \
+        .align 2      ; \
 99:
 #else
 #define PRINT(s)
 #endif
 
-	.arm
+        .arm
 
-	/* This must be the very first address in the loaded image.
-	 * It should be linked at XEN_VIRT_START, and loaded at any
-	 * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
-	 * or the initial pagetable code below will need adjustment. */
-	.global start
+        /* This must be the very first address in the loaded image.
+         * It should be linked at XEN_VIRT_START, and loaded at any
+         * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
+         * or the initial pagetable code below will need adjustment. */
+        .global start
 start:
 
-	/* zImage magic header, see:
-	 * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
-	 */
-	.rept 8
-	mov   r0, r0
-	.endr
-	b     past_zImage
+        /* zImage magic header, see:
+         * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
+         */
+        .rept 8
+        mov   r0, r0
+        .endr
+        b     past_zImage
 
-	.word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
-	.word 0x00000000             /* absolute load/run zImage address or
-	                              * 0 for PiC */
-	.word (_end - start)         /* zImage end address */
+        .word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
+        .word 0x00000000             /* absolute load/run zImage address or
+                                      * 0 for PIC */
+        .word (_end - start)         /* zImage end address */
 
 past_zImage:
-	cpsid aif                    /* Disable all interrupts */
+        cpsid aif                    /* Disable all interrupts */
 
-	/* Save the bootloader arguments in less-clobberable registers */
-	mov   r7, r1                 /* r7 := ARM-linux machine type */
-	mov   r8, r2                 /* r8 := ATAG base address */
+        /* Save the bootloader arguments in less-clobberable registers */
+        mov   r7, r1                 /* r7 := ARM-linux machine type */
+        mov   r8, r2                 /* r8 := ATAG base address */
 
-	/* Find out where we are */
-	ldr   r0, =start
-	adr   r9, start              /* r9  := paddr (start) */
-	sub   r10, r9, r0            /* r10 := phys-offset */
+        /* Find out where we are */
+        ldr   r0, =start
+        adr   r9, start              /* r9  := paddr (start) */
+        sub   r10, r9, r0            /* r10 := phys-offset */
 
-	/* Using the DTB in the .dtb section? */
+        /* Using the DTB in the .dtb section? */
 #ifdef CONFIG_DTB_FILE
-	ldr   r8, =_sdtb
-	add   r8, r10                /* r8 := paddr(DTB) */
+        ldr   r8, =_sdtb
+        add   r8, r10                /* r8 := paddr(DTB) */
 #endif
 
-	/* Are we the boot CPU? */
-	mov   r12, #0                /* r12 := CPU ID */
-	mrc   CP32(r0, MPIDR)
-	tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
-	beq   boot_cpu
-	tst   r0, #(1<<30)           /* Uniprocessor system? */
-	bne   boot_cpu
-	bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
-	beq   boot_cpu               /* If we're CPU 0, boot now */
-
-	/* Non-boot CPUs wait here to be woken up one at a time. */
-1:	dsb
-	ldr   r0, =smp_up_cpu        /* VA of gate */
-	add   r0, r0, r10            /* PA of gate */
-	ldr   r1, [r0]               /* Which CPU is being booted? */
-	teq   r1, r12                /* Is it us? */
-	wfene
-	bne   1b
+        /* Are we the boot CPU? */
+        mov   r12, #0                /* r12 := CPU ID */
+        mrc   CP32(r0, MPIDR)
+        tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
+        beq   boot_cpu
+        tst   r0, #(1<<30)           /* Uniprocessor system? */
+        bne   boot_cpu
+        bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
+        beq   boot_cpu               /* If we're CPU 0, boot now */
+
+        /* Non-boot CPUs wait here to be woken up one at a time. */
+1:      dsb
+        ldr   r0, =smp_up_cpu        /* VA of gate */
+        add   r0, r0, r10            /* PA of gate */
+        ldr   r1, [r0]               /* Which CPU is being booted? */
+        teq   r1, r12                /* Is it us? */
+        wfene
+        bne   1b
 
 boot_cpu:
 #ifdef EARLY_UART_ADDRESS
-	ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
-	teq   r12, #0                   /* CPU 0 sets up the UART too */
-	bleq  init_uart
-	PRINT("- CPU ")
-	mov   r0, r12
-	bl    putn
-	PRINT(" booting -\r\n")
+        ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
+        teq   r12, #0                   /* CPU 0 sets up the UART too */
+        bleq  init_uart
+        PRINT("- CPU ")
+        mov   r0, r12
+        bl    putn
+        PRINT(" booting -\r\n")
 #endif
 
-	/* Wake up secondary cpus */
-	teq   r12, #0
-	bleq  kick_cpus
-
-	/* Check that this CPU has Hyp mode */
-	mrc   CP32(r0, ID_PFR1)
-	and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
-	teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
-	beq   1f
-	PRINT("- CPU doesn't support the virtualization extensions -\r\n")
-	b     fail
+        /* Wake up secondary cpus */
+        teq   r12, #0
+        bleq  kick_cpus
+
+        /* Check that this CPU has Hyp mode */
+        mrc   CP32(r0, ID_PFR1)
+        and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
+        teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
+        beq   1f
+        PRINT("- CPU doesn't support the virtualization extensions -\r\n")
+        b     fail
 1:
-	/* Check if we're already in it */
-	mrs   r0, cpsr
-	and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
-	teq   r0, #0x1a              /* Hyp Mode? */
-	bne   1f
-	PRINT("- Started in Hyp mode -\r\n")
-	b     hyp
+        /* Check if we're already in it */
+        mrs   r0, cpsr
+        and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
+        teq   r0, #0x1a              /* Hyp Mode? */
+        bne   1f
+        PRINT("- Started in Hyp mode -\r\n")
+        b     hyp
 1:
-	/* Otherwise, it must have been Secure Supervisor mode */
-	mrc   CP32(r0, SCR)
-	tst   r0, #0x1               /* Not-Secure bit set? */
-	beq   1f
-	PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
-	b     fail
+        /* Otherwise, it must have been Secure Supervisor mode */
+        mrc   CP32(r0, SCR)
+        tst   r0, #0x1               /* Not-Secure bit set? */
+        beq   1f
+        PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
+        b     fail
 1:
-	/* OK, we're in Secure state. */
-	PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
-	ldr   r0, =enter_hyp_mode    /* VA of function */
-	adr   lr, hyp                /* Set return address for call */
-	add   pc, r0, r10            /* Call PA of function */
+        /* OK, we're in Secure state. */
+        PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
+        ldr   r0, =enter_hyp_mode    /* VA of function */
+        adr   lr, hyp                /* Set return address for call */
+        add   pc, r0, r10            /* Call PA of function */
 
 hyp:
 
-	/* Zero BSS On the boot CPU to avoid nasty surprises */
-	teq   r12, #0
-	bne   skip_bss
-
-	PRINT("- Zero BSS -\r\n")
-	ldr   r0, =__bss_start       /* Load start & end of bss */
-	ldr   r1, =__bss_end
-	add   r0, r0, r10            /* Apply physical offset */
-	add   r1, r1, r10
-	
-	mov   r2, #0
-1:	str   r2, [r0], #4
-	cmp   r0, r1
-	blo   1b
-
-skip_bss:	
-
-	PRINT("- Setting up control registers -\r\n")
-	
-	/* Read CPU ID */
-	mrc   CP32(r0, MIDR)
-	ldr   r1, =(MIDR_MASK)
-	and   r0, r0, r1
-	/* Is this a Cortex A15? */
-	ldr   r1, =(CORTEX_A15_ID)
-	teq   r0, r1
-	bleq  cortex_a15_init
-
-	/* Set up memory attribute type tables */
-	ldr   r0, =MAIR0VAL
-	ldr   r1, =MAIR1VAL
-	mcr   CP32(r0, MAIR0)
-	mcr   CP32(r1, MAIR1)
-	mcr   CP32(r0, HMAIR0)
-	mcr   CP32(r1, HMAIR1)
-
-	/* Set up the HTCR:
-	 * PT walks use Outer-Shareable accesses,
-	 * PT walks are write-back, no-write-allocate in both cache levels,
-	 * Full 32-bit address space goes through this table. */
-	ldr   r0, =0x80002500
-	mcr   CP32(r0, HTCR)
-
-	/* Set up the HSCTLR:
-	 * Exceptions in LE ARM,
-	 * Low-latency IRQs disabled,
-	 * Write-implies-XN disabled (for now),
-	 * D-cache disabled (for now),
-	 * I-cache enabled,
-	 * Alignment checking enabled,
-	 * MMU translation disabled (for now). */
-	ldr   r0, =(HSCTLR_BASE|SCTLR_A)
-	mcr   CP32(r0, HSCTLR)
-
-	/* Write Xen's PT's paddr into the HTTBR */
-	ldr   r4, =xen_pgtable
-	add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
-	mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
-	mcrr  CP64(r4, r5, HTTBR)
-
-	/* Non-boot CPUs don't need to rebuild the pagetable */
-	teq   r12, #0
-	bne   pt_ready
-	
-	/* console fixmap */
+        /* Zero BSS on the boot CPU to avoid nasty surprises */
+        teq   r12, #0
+        bne   skip_bss
+
+        PRINT("- Zero BSS -\r\n")
+        ldr   r0, =__bss_start       /* Load start & end of bss */
+        ldr   r1, =__bss_end
+        add   r0, r0, r10            /* Apply physical offset */
+        add   r1, r1, r10
+
+        mov   r2, #0
+1:      str   r2, [r0], #4
+        cmp   r0, r1
+        blo   1b
+
+skip_bss:
+
+        PRINT("- Setting up control registers -\r\n")
+
+        /* Read CPU ID */
+        mrc   CP32(r0, MIDR)
+        ldr   r1, =(MIDR_MASK)
+        and   r0, r0, r1
+        /* Is this a Cortex A15? */
+        ldr   r1, =(CORTEX_A15_ID)
+        teq   r0, r1
+        bleq  cortex_a15_init
+
+        /* Set up memory attribute type tables */
+        ldr   r0, =MAIR0VAL
+        ldr   r1, =MAIR1VAL
+        mcr   CP32(r0, MAIR0)
+        mcr   CP32(r1, MAIR1)
+        mcr   CP32(r0, HMAIR0)
+        mcr   CP32(r1, HMAIR1)
+
+        /* Set up the HTCR:
+         * PT walks use Outer-Shareable accesses,
+         * PT walks are write-back, no-write-allocate in both cache levels,
+         * Full 32-bit address space goes through this table. */
+        ldr   r0, =0x80002500
+        mcr   CP32(r0, HTCR)
+
+        /* Set up the HSCTLR:
+         * Exceptions in LE ARM,
+         * Low-latency IRQs disabled,
+         * Write-implies-XN disabled (for now),
+         * D-cache disabled (for now),
+         * I-cache enabled,
+         * Alignment checking enabled,
+         * MMU translation disabled (for now). */
+        ldr   r0, =(HSCTLR_BASE|SCTLR_A)
+        mcr   CP32(r0, HSCTLR)
+
+        /* Write Xen's PT's paddr into the HTTBR */
+        ldr   r4, =xen_pgtable
+        add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
+        mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
+        mcrr  CP64(r4, r5, HTTBR)
+
+        /* Non-boot CPUs don't need to rebuild the pagetable */
+        teq   r12, #0
+        bne   pt_ready
+
+        /* console fixmap */
 #ifdef EARLY_UART_ADDRESS
-	ldr   r1, =xen_fixmap
-	add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
-	mov   r3, #0
-	lsr   r2, r11, #12
-	lsl   r2, r2, #12            /* 4K aligned paddr of UART */
-	orr   r2, r2, #PT_UPPER(DEV_L3)
-	orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
-	strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+        ldr   r1, =xen_fixmap
+        add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
+        mov   r3, #0
+        lsr   r2, r11, #12
+        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
+        orr   r2, r2, #PT_UPPER(DEV_L3)
+        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
+        strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
 #endif
 
-	/* Build the baseline idle pagetable's first-level entries */
-	ldr   r1, =xen_second
-	add   r1, r1, r10            /* r1 := paddr (xen_second) */
-	mov   r3, #0x0
-	orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
-	orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
-	strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
-	add   r2, r2, #0x1000
-	strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
-	add   r2, r2, #0x1000
-	strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
-	add   r2, r2, #0x1000
-	strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
-
-	/* Now set up the second-level entries */
-	orr   r2, r9, #PT_UPPER(MEM)
-	orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
-	mov   r4, r9, lsr #18        /* Slot for paddr(start) */
-	strd  r2, r3, [r1, r4]       /* Map Xen there */
-	ldr   r4, =start
-	lsr   r4, #18                /* Slot for vaddr(start) */
-	strd  r2, r3, [r1, r4]       /* Map Xen there too */
-
-	/* xen_fixmap pagetable */
-	ldr   r2, =xen_fixmap
-	add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
-	orr   r2, r2, #PT_UPPER(PT)
-	orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
-	add   r4, r4, #8
-	strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
-
-	mov   r3, #0x0
-	lsr   r2, r8, #21
-	lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
-	orr   r2, r2, #PT_UPPER(MEM)
-	orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
-	add   r4, r4, #8
-	strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
+        /* Build the baseline idle pagetable's first-level entries */
+        ldr   r1, =xen_second
+        add   r1, r1, r10            /* r1 := paddr (xen_second) */
+        mov   r3, #0x0
+        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
+        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
+        strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
+        add   r2, r2, #0x1000
+        strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
+        add   r2, r2, #0x1000
+        strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
+        add   r2, r2, #0x1000
+        strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
+
+        /* Now set up the second-level entries */
+        orr   r2, r9, #PT_UPPER(MEM)
+        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
+        mov   r4, r9, lsr #18        /* Slot for paddr(start) */
+        strd  r2, r3, [r1, r4]       /* Map Xen there */
+        ldr   r4, =start
+        lsr   r4, #18                /* Slot for vaddr(start) */
+        strd  r2, r3, [r1, r4]       /* Map Xen there too */
+
+        /* xen_fixmap pagetable */
+        ldr   r2, =xen_fixmap
+        add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
+        orr   r2, r2, #PT_UPPER(PT)
+        orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
+        add   r4, r4, #8
+        strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
+
+        mov   r3, #0x0
+        lsr   r2, r8, #21
+        lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
+        orr   r2, r2, #PT_UPPER(MEM)
+        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
+        add   r4, r4, #8
+        strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
 
 pt_ready:
-	PRINT("- Turning on paging -\r\n")
-
-	ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
-	mrc   CP32(r0, HSCTLR)
-	orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
-	dsb                          /* Flush PTE writes and finish reads */
-	mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
-	isb                          /* Now, flush the icache */
-	mov   pc, r1                 /* Get a proper vaddr into PC */
+        PRINT("- Turning on paging -\r\n")
+
+        ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
+        mrc   CP32(r0, HSCTLR)
+        orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
+        dsb                          /* Flush PTE writes and finish reads */
+        mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
+        isb                          /* Now, flush the icache */
+        mov   pc, r1                 /* Get a proper vaddr into PC */
 paging:
 
 
 #ifdef EARLY_UART_ADDRESS
-	/* Use a virtual address to access the UART. */
-	ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
+        /* Use a virtual address to access the UART. */
+        ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
 #endif
 
-	PRINT("- Ready -\r\n")
-
-	/* The boot CPU should go straight into C now */
-	teq   r12, #0
-	beq   launch
-
-	/* Non-boot CPUs need to move on to the relocated pagetables */
-	mov   r0, #0
-	ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
-	add   r4, r4, r10            /* PA of it */
-	ldrd  r4, r5, [r4]           /* Actual value */
-	dsb
-	mcrr  CP64(r4, r5, HTTBR)
-	dsb
-	isb
-	mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
-	mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
-	mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
-	dsb                          /* Ensure completion of TLB+BP flush */
-	isb
-
-	/* Non-boot CPUs report that they've got this far */
-	ldr   r0, =ready_cpus
-1:	ldrex r1, [r0]               /*            { read # of ready CPUs } */
-	add   r1, r1, #1             /* Atomically { ++                   } */
-	strex r2, r1, [r0]           /*            { writeback            } */
-	teq   r2, #0
-	bne   1b
-	dsb
-	mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
-	dsb
-
-	/* Here, the non-boot CPUs must wait again -- they're now running on
-	 * the boot CPU's pagetables so it's safe for the boot CPU to
-	 * overwrite the non-relocated copy of Xen.  Once it's done that,
-	 * and brought up the memory allocator, non-boot CPUs can get their
-	 * own stacks and enter C. */
-1:	wfe
-	dsb
-	ldr   r0, =smp_up_cpu
-	ldr   r1, [r0]               /* Which CPU is being booted? */
-	teq   r1, r12                /* Is it us? */
-	bne   1b
-
-launch:	
-	ldr   r0, =init_stack        /* Find the boot-time stack */
-	ldr   sp, [r0]
-	add   sp, #STACK_SIZE        /* (which grows down from the top). */
-	sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
-	mov   r0, r10                /* Marshal args: - phys_offset */
-	mov   r1, r7                 /*               - machine type */
-	mov   r2, r8                 /*               - ATAG address */
-	movs  r3, r12                /*               - CPU ID */
-	beq   start_xen              /* and disappear into the land of C */
-	b     start_secondary        /* (to the appropriate entry point) */
+        PRINT("- Ready -\r\n")
+
+        /* The boot CPU should go straight into C now */
+        teq   r12, #0
+        beq   launch
+
+        /* Non-boot CPUs need to move on to the relocated pagetables */
+        mov   r0, #0
+        ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
+        add   r4, r4, r10            /* PA of it */
+        ldrd  r4, r5, [r4]           /* Actual value */
+        dsb
+        mcrr  CP64(r4, r5, HTTBR)
+        dsb
+        isb
+        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
+        mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
+        mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
+        dsb                          /* Ensure completion of TLB+BP flush */
+        isb
+
+        /* Non-boot CPUs report that they've got this far */
+        ldr   r0, =ready_cpus
+1:      ldrex r1, [r0]               /*            { read # of ready CPUs } */
+        add   r1, r1, #1             /* Atomically { ++                   } */
+        strex r2, r1, [r0]           /*            { writeback            } */
+        teq   r2, #0
+        bne   1b
+        dsb
+        mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
+        dsb
+
+        /* Here, the non-boot CPUs must wait again -- they're now running on
+         * the boot CPU's pagetables so it's safe for the boot CPU to
+         * overwrite the non-relocated copy of Xen.  Once it's done that,
+         * and brought up the memory allocator, non-boot CPUs can get their
+         * own stacks and enter C. */
+1:      wfe
+        dsb
+        ldr   r0, =smp_up_cpu
+        ldr   r1, [r0]               /* Which CPU is being booted? */
+        teq   r1, r12                /* Is it us? */
+        bne   1b
+
+launch:
+        ldr   r0, =init_stack        /* Find the boot-time stack */
+        ldr   sp, [r0]
+        add   sp, #STACK_SIZE        /* (which grows down from the top). */
+        sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
+        mov   r0, r10                /* Marshal args: - phys_offset */
+        mov   r1, r7                 /*               - machine type */
+        mov   r2, r8                 /*               - ATAG address */
+        movs  r3, r12                /*               - CPU ID */
+        beq   start_xen              /* and disappear into the land of C */
+        b     start_secondary        /* (to the appropriate entry point) */
 
 /* Fail-stop
  * r0: string explaining why */
-fail:	PRINT("- Boot failed -\r\n")
-1:	wfe
-	b     1b
+fail:   PRINT("- Boot failed -\r\n")
+1:      wfe
+        b     1b
 
 #ifdef EARLY_UART_ADDRESS
 
 /* Bring up the UART. Specific to the PL011 UART.
  * Clobbers r0-r2 */
 init_uart:
-	mov   r1, #0x0
-	str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
-	mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
-	str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
-	mov   r1, #0x60              /* 8n1 */
-	str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
-	ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
-	str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
-	adr   r0, 1f
-	b     puts
-1:	.asciz "- UART enabled -\r\n"
-	.align 4
+        mov   r1, #0x0
+        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
+        mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
+        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
+        mov   r1, #0x60              /* 8n1 */
+        str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
+        ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
+        str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
+        adr   r0, 1f
+        b     puts
+1:      .asciz "- UART enabled -\r\n"
+        .align 4
 
 /* Print early debug messages.  Specific to the PL011 UART.
  * r0: Nul-terminated string to print.
  * Clobbers r0-r2 */
 puts:
-	ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
-	tst   r2, #0x8               /* Check BUSY bit */
-	bne   puts                   /* Wait for the UART to be ready */
-	ldrb  r2, [r0], #1           /* Load next char */
-	teq   r2, #0                 /* Exit on nul */
-	moveq pc, lr
-	str   r2, [r11]              /* -> UARTDR (Data Register) */
-	b     puts
+        ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
+        tst   r2, #0x8               /* Check BUSY bit */
+        bne   puts                   /* Wait for the UART to be ready */
+        ldrb  r2, [r0], #1           /* Load next char */
+        teq   r2, #0                 /* Exit on nul */
+        moveq pc, lr
+        str   r2, [r11]              /* -> UARTDR (Data Register) */
+        b     puts
 
 /* Print a 32-bit number in hex.  Specific to the PL011 UART.
  * r0: Number to print.
  * clobbers r0-r3 */
 putn:
-	adr   r1, hex
-	mov   r3, #8
-1:	ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
-	tst   r2, #0x8               /* Check BUSY bit */
-	bne   1b                     /* Wait for the UART to be ready */
-	and   r2, r0, #0xf0000000    /* Mask off the top nybble */
-	ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
-	str   r2, [r11]              /* -> UARTDR (Data Register) */
-	lsl   r0, #4                 /* Roll it through one nybble at a time */
-	subs  r3, r3, #1
-	bne   1b
-	mov   pc, lr
-
-hex:	.ascii "0123456789abcdef"
-	.align 2
+        adr   r1, hex
+        mov   r3, #8
+1:      ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
+        tst   r2, #0x8               /* Check BUSY bit */
+        bne   1b                     /* Wait for the UART to be ready */
+        and   r2, r0, #0xf0000000    /* Mask off the top nybble */
+        ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
+        str   r2, [r11]              /* -> UARTDR (Data Register) */
+        lsl   r0, #4                 /* Roll it through one nybble at a time */
+        subs  r3, r3, #1
+        bne   1b
+        mov   pc, lr
+
+hex:    .ascii "0123456789abcdef"
+        .align 2
 
 #else  /* EARLY_UART_ADDRESS */
 
@@ -403,6 +403,13 @@ init_uart:
 .global early_puts
 early_puts:
 puts:
-putn:	mov   pc, lr
+putn:   mov   pc, lr
 
 #endif /* EARLY_UART_ADDRESS */
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
index 7c3b357..3dba38d 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/mode_switch.S
@@ -28,25 +28,25 @@
 /* wake up secondary cpus */
 .globl kick_cpus
 kick_cpus:
-	/* write start paddr to v2m sysreg FLAGSSET register */
-	ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
-	dsb
-	mov   r2, #0xffffffff
-	str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
-	dsb
-	ldr   r2, =start
-	add   r2, r2, r10
-	str   r2, [r0, #(V2M_SYS_FLAGSSET)]
-	dsb
-	/* send an interrupt */
-	ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
-	mov   r2, #0x1
-	str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
-	mov   r2, #0xfe0000
-	str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
-	dsb
-	str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
-	mov   pc, lr
+        /* write start paddr to v2m sysreg FLAGSSET register */
+        ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
+        dsb
+        mov   r2, #0xffffffff
+        str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
+        dsb
+        ldr   r2, =start
+        add   r2, r2, r10
+        str   r2, [r0, #(V2M_SYS_FLAGSSET)]
+        dsb
+        /* send an interrupt */
+        ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
+        mov   r2, #0x1
+        str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
+        mov   r2, #0xfe0000
+        str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
+        dsb
+        str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
+        mov   pc, lr
 
 
 /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
@@ -61,54 +61,61 @@ kick_cpus:
 
 .globl enter_hyp_mode
 enter_hyp_mode:
-	mov   r3, lr                 /* Put return address in non-banked reg */
-	cpsid aif, #0x16             /* Enter Monitor mode */
-	mrc   CP32(r0, SCR)
-	orr   r0, r0, #0x100         /* Set HCE */
-	orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
-	bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
-	mcr   CP32(r0, SCR)
-	/* Ugly: the system timer's frequency register is only
-	 * programmable in Secure state.  Since we don't know where its
-	 * memory-mapped control registers live, we can't find out the
-	 * right frequency.  Use the VE model's default frequency here. */
-	ldr   r0, =0x5f5e100         /* 100 MHz */
-	mcr   CP32(r0, CNTFRQ)
-	ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
-	mcr   CP32(r0, NSACR)
-	mov   r0, #GIC_BASE_ADDRESS
-	add   r0, r0, #GIC_DR_OFFSET
-	/* Disable the GIC distributor, on the boot CPU only */
-	mov   r1, #0
-	teq   r12, #0                /* Is this the boot CPU? */
-	streq r1, [r0]
-	/* Continuing ugliness: Set up the GIC so NS state owns interrupts,
-	 * The first 32 interrupts (SGIs & PPIs) must be configured on all
-	 * CPUs while the remainder are SPIs and only need to be done one, on
-	 * the boot CPU. */
-	add   r0, r0, #0x80          /* GICD_IGROUP0 */
-	mov   r2, #0xffffffff        /* All interrupts to group 1 */
-	teq   r12, #0                /* Boot CPU? */
-	str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
-	streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
-	streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
-	/* Disable the GIC CPU interface on all processors */
-	mov   r0, #GIC_BASE_ADDRESS
-	add   r0, r0, #GIC_CR_OFFSET
-	mov   r1, #0
-	str   r1, [r0]
-	/* Must drop priority mask below 0x80 before entering NS state */
-	ldr   r1, =0xff
-	str   r1, [r0, #0x4]         /* -> GICC_PMR */
-	/* Reset a few config registers */
-	mov   r0, #0
-	mcr   CP32(r0, FCSEIDR)
-	mcr   CP32(r0, CONTEXTIDR)
-	/* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
-	ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
-	mcr   CP32(r1, NSACR)
+        mov   r3, lr                 /* Put return address in non-banked reg */
+        cpsid aif, #0x16             /* Enter Monitor mode */
+        mrc   CP32(r0, SCR)
+        orr   r0, r0, #0x100         /* Set HCE */
+        orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
+        bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
+        mcr   CP32(r0, SCR)
+        /* Ugly: the system timer's frequency register is only
+         * programmable in Secure state.  Since we don't know where its
+         * memory-mapped control registers live, we can't find out the
+         * right frequency.  Use the VE model's default frequency here. */
+        ldr   r0, =0x5f5e100         /* 100 MHz */
+        mcr   CP32(r0, CNTFRQ)
+        ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
+        mcr   CP32(r0, NSACR)
+        mov   r0, #GIC_BASE_ADDRESS
+        add   r0, r0, #GIC_DR_OFFSET
+        /* Disable the GIC distributor, on the boot CPU only */
+        mov   r1, #0
+        teq   r12, #0                /* Is this the boot CPU? */
+        streq r1, [r0]
+        /* Continuing ugliness: Set up the GIC so NS state owns interrupts.
+         * The first 32 interrupts (SGIs & PPIs) must be configured on all
+         * CPUs while the remainder are SPIs and only need to be done once, on
+         * the boot CPU. */
+        add   r0, r0, #0x80          /* GICD_IGROUP0 */
+        mov   r2, #0xffffffff        /* All interrupts to group 1 */
+        teq   r12, #0                /* Boot CPU? */
+        str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
+        streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
+        streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
+        /* Disable the GIC CPU interface on all processors */
+        mov   r0, #GIC_BASE_ADDRESS
+        add   r0, r0, #GIC_CR_OFFSET
+        mov   r1, #0
+        str   r1, [r0]
+        /* Must drop priority mask below 0x80 before entering NS state */
+        ldr   r1, =0xff
+        str   r1, [r0, #0x4]         /* -> GICC_PMR */
+        /* Reset a few config registers */
+        mov   r0, #0
+        mcr   CP32(r0, FCSEIDR)
+        mcr   CP32(r0, CONTEXTIDR)
+        /* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
+        ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
+        mcr   CP32(r1, NSACR)
 
-	mrs   r0, cpsr               /* Copy the CPSR */
-	add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
-	msr   spsr_cxsf, r0          /* into the SPSR */
-	movs  pc, r3                 /* Exception-return into Hyp mode */
+        mrs   r0, cpsr               /* Copy the CPSR */
+        add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
+        msr   spsr_cxsf, r0          /* into the SPSR */
+        movs  pc, r3                 /* Exception-return into Hyp mode */
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/proc-ca15.S
index 5a5bf64..dcdd42e 100644
--- a/xen/arch/arm/proc-ca15.S
+++ b/xen/arch/arm/proc-ca15.S
@@ -21,8 +21,15 @@
 
 .globl cortex_a15_init
 cortex_a15_init:
-	/* Set up the SMP bit in ACTLR */
-	mrc   CP32(r0, ACTLR)
-	orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit*/
-	mcr   CP32(r0, ACTLR)
-	mov   pc, lr
+        /* Set up the SMP bit in ACTLR */
+        mrc   CP32(r0, ACTLR)
+        orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit */
+        mcr   CP32(r0, ACTLR)
+        mov   pc, lr
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:25 2012
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:33 +0000
Message-ID: <1355849376-26652-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/5] xen: arm: remove hard tabs from asm code.
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Run expand(1) over xen/arch/arm/.../*.S

Add emacs local vars block.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/entry.S       |  194 +++++++-------
 xen/arch/arm/head.S        |  613 ++++++++++++++++++++++----------------------
 xen/arch/arm/mode_switch.S |  145 ++++++-----
 xen/arch/arm/proc-ca15.S   |   17 +-
 4 files changed, 498 insertions(+), 471 deletions(-)
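
For reference, the conversion described in the commit message can be sketched
as a small shell loop. This is a hypothetical reproduction, not the exact
command line used for the patch: expand(1)'s default tab stops of 8 are
assumed, and a throwaway demo file under /tmp stands in for the real
xen/arch/arm/*.S sources.

```shell
# Hypothetical sketch of the conversion: expand(1) replaces each hard tab
# with spaces up to the next tab stop (every 8 columns by default).
# A demo file stands in for xen/arch/arm/*.S here.
mkdir -p /tmp/expand-demo
printf '\tmov   r0, r0\n' > /tmp/expand-demo/demo.S
for f in /tmp/expand-demo/*.S; do
    expand "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```

Running this leaves the file free of hard tabs, with the leading tab turned
into eight spaces, which matches the indentation style seen in the '+' lines
of the diff below.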

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index 83793c2..cbd1c48 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -2,79 +2,79 @@
 #include <asm/asm_defns.h>
 #include <public/xen.h>
 
-#define SAVE_ONE_BANKED(reg)	mrs r11, reg; str r11, [sp, #UREGS_##reg]
-#define RESTORE_ONE_BANKED(reg)	ldr r11, [sp, #UREGS_##reg]; msr reg, r11
+#define SAVE_ONE_BANKED(reg)    mrs r11, reg; str r11, [sp, #UREGS_##reg]
+#define RESTORE_ONE_BANKED(reg) ldr r11, [sp, #UREGS_##reg]; msr reg, r11
 
 #define SAVE_BANKED(mode) \
-	SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
+        SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
 
 #define RESTORE_BANKED(mode) \
-	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
+        RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
 
-#define SAVE_ALL							\
-	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
-	push {r0-r12}; /* Save R0-R12 */				\
-									\
-	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
-	str r11, [sp, #UREGS_pc];					\
-									\
-	str lr, [sp, #UREGS_lr];					\
-									\
-	add r11, sp, #UREGS_kernel_sizeof+4;				\
-	str r11, [sp, #UREGS_sp];					\
-									\
-	mrs r11, SPSR_hyp;						\
-	str r11, [sp, #UREGS_cpsr];					\
-	and r11, #PSR_MODE_MASK;					\
-	cmp r11, #PSR_MODE_HYP;						\
-	blne save_guest_regs
+#define SAVE_ALL                                                        \
+        sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
+        push {r0-r12}; /* Save R0-R12 */                                \
+                                                                        \
+        mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
+        str r11, [sp, #UREGS_pc];                                       \
+                                                                        \
+        str lr, [sp, #UREGS_lr];                                        \
+                                                                        \
+        add r11, sp, #UREGS_kernel_sizeof+4;                            \
+        str r11, [sp, #UREGS_sp];                                       \
+                                                                        \
+        mrs r11, SPSR_hyp;                                              \
+        str r11, [sp, #UREGS_cpsr];                                     \
+        and r11, #PSR_MODE_MASK;                                        \
+        cmp r11, #PSR_MODE_HYP;                                         \
+        blne save_guest_regs
 
 save_guest_regs:
-	ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
-	str r11, [sp, #UREGS_sp]
-	SAVE_ONE_BANKED(SP_usr)
-	/* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
-	SAVE_BANKED(svc)
-	SAVE_BANKED(abt)
-	SAVE_BANKED(und)
-	SAVE_BANKED(irq)
-	SAVE_BANKED(fiq)
-	SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
-	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
-	mov pc, lr
+        ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
+        str r11, [sp, #UREGS_sp]
+        SAVE_ONE_BANKED(SP_usr)
+        /* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
+        SAVE_BANKED(svc)
+        SAVE_BANKED(abt)
+        SAVE_BANKED(und)
+        SAVE_BANKED(irq)
+        SAVE_BANKED(fiq)
+        SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
+        SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
+        mov pc, lr
 
-#define DEFINE_TRAP_ENTRY(trap)						\
-	ALIGN;								\
-trap_##trap:								\
-	SAVE_ALL;							\
-	cpsie i; 	/* local_irq_enable */				\
-	adr lr, return_from_trap;					\
-	mov r0, sp;							\
-	mov r11, sp;							\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
-	b do_trap_##trap
+#define DEFINE_TRAP_ENTRY(trap)                                         \
+        ALIGN;                                                          \
+trap_##trap:                                                            \
+        SAVE_ALL;                                                       \
+        cpsie i;        /* local_irq_enable */                          \
+        adr lr, return_from_trap;                                       \
+        mov r0, sp;                                                     \
+        mov r11, sp;                                                    \
+        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
+        b do_trap_##trap
 
-#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
-	ALIGN;								\
-trap_##trap:								\
-	SAVE_ALL;							\
-	adr lr, return_from_trap;					\
-	mov r0, sp;							\
-	mov r11, sp;							\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
-	b do_trap_##trap
+#define DEFINE_TRAP_ENTRY_NOIRQ(trap)                                   \
+        ALIGN;                                                          \
+trap_##trap:                                                            \
+        SAVE_ALL;                                                       \
+        adr lr, return_from_trap;                                       \
+        mov r0, sp;                                                     \
+        mov r11, sp;                                                    \
+        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
+        b do_trap_##trap
 
 .globl hyp_traps_vector
-	.align 5
+        .align 5
 hyp_traps_vector:
-	.word 0				/* 0x00 - Reset */
-	b trap_undefined_instruction	/* 0x04 - Undefined Instruction */
-	b trap_supervisor_call		/* 0x08 - Supervisor Call */
-	b trap_prefetch_abort		/* 0x0c - Prefetch Abort */
-	b trap_data_abort		/* 0x10 - Data Abort */
-	b trap_hypervisor		/* 0x14 - Hypervisor */
-	b trap_irq			/* 0x18 - IRQ */
-	b trap_fiq			/* 0x1c - FIQ */
+        .word 0                         /* 0x00 - Reset */
+        b trap_undefined_instruction    /* 0x04 - Undefined Instruction */
+        b trap_supervisor_call          /* 0x08 - Supervisor Call */
+        b trap_prefetch_abort           /* 0x0c - Prefetch Abort */
+        b trap_data_abort               /* 0x10 - Data Abort */
+        b trap_hypervisor               /* 0x14 - Hypervisor */
+        b trap_irq                      /* 0x18 - IRQ */
+        b trap_fiq                      /* 0x1c - FIQ */
 
 DEFINE_TRAP_ENTRY(undefined_instruction)
 DEFINE_TRAP_ENTRY(supervisor_call)
@@ -85,38 +85,38 @@ DEFINE_TRAP_ENTRY_NOIRQ(irq)
 DEFINE_TRAP_ENTRY_NOIRQ(fiq)
 
 return_from_trap:
-	mov sp, r11
+        mov sp, r11
 ENTRY(return_to_new_vcpu)
-	ldr r11, [sp, #UREGS_cpsr]
-	and r11, #PSR_MODE_MASK
-	cmp r11, #PSR_MODE_HYP
-	beq return_to_hypervisor
-	/* Fall thru */
+        ldr r11, [sp, #UREGS_cpsr]
+        and r11, #PSR_MODE_MASK
+        cmp r11, #PSR_MODE_HYP
+        beq return_to_hypervisor
+        /* Fall thru */
 ENTRY(return_to_guest)
-	mov r11, sp
-	bic sp, #7 /* Align the stack pointer */
-	bl leave_hypervisor_tail /* Disables interrupts on return */
-	mov sp, r11
-	RESTORE_ONE_BANKED(SP_usr)
-	/* LR_usr is the same physical register as lr and is restored below */
-	RESTORE_BANKED(svc)
-	RESTORE_BANKED(abt)
-	RESTORE_BANKED(und)
-	RESTORE_BANKED(irq)
-	RESTORE_BANKED(fiq)
-	RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
-	RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
-	/* Fall thru */
+        mov r11, sp
+        bic sp, #7 /* Align the stack pointer */
+        bl leave_hypervisor_tail /* Disables interrupts on return */
+        mov sp, r11
+        RESTORE_ONE_BANKED(SP_usr)
+        /* LR_usr is the same physical register as lr and is restored below */
+        RESTORE_BANKED(svc)
+        RESTORE_BANKED(abt)
+        RESTORE_BANKED(und)
+        RESTORE_BANKED(irq)
+        RESTORE_BANKED(fiq)
+        RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
+        RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
+        /* Fall thru */
 ENTRY(return_to_hypervisor)
-	cpsid i
-	ldr lr, [sp, #UREGS_lr]
-	ldr r11, [sp, #UREGS_pc]
-	msr ELR_hyp, r11
-	ldr r11, [sp, #UREGS_cpsr]
-	msr SPSR_hyp, r11
-	pop {r0-r12}
-	add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
-	eret
+        cpsid i
+        ldr lr, [sp, #UREGS_lr]
+        ldr r11, [sp, #UREGS_pc]
+        msr ELR_hyp, r11
+        ldr r11, [sp, #UREGS_cpsr]
+        msr SPSR_hyp, r11
+        pop {r0-r12}
+        add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
+        eret
 
 /*
  * struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next)
@@ -127,9 +127,15 @@ ENTRY(return_to_hypervisor)
  * Returns prev in r0
  */
 ENTRY(__context_switch)
-	add     ip, r0, #VCPU_arch_saved_context
-	stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
+        add     ip, r0, #VCPU_arch_saved_context
+        stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
 
-	add     r4, r1, #VCPU_arch_saved_context
-	ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
+        add     r4, r1, #VCPU_arch_saved_context
+        ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
 
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
index 8e2e284..0d9a799 100644
--- a/xen/arch/arm/head.S
+++ b/xen/arch/arm/head.S
@@ -36,366 +36,366 @@
  * Clobbers r0-r3. */
 #ifdef EARLY_UART_ADDRESS
 #define PRINT(_s)       \
-	adr   r0, 98f ; \
-	bl    puts    ; \
-	b     99f     ; \
-98:	.asciz _s     ; \
-	.align 2      ; \
+        adr   r0, 98f ; \
+        bl    puts    ; \
+        b     99f     ; \
+98:     .asciz _s     ; \
+        .align 2      ; \
 99:
 #else
 #define PRINT(s)
 #endif
 
-	.arm
+        .arm
 
-	/* This must be the very first address in the loaded image.
-	 * It should be linked at XEN_VIRT_START, and loaded at any
-	 * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
-	 * or the initial pagetable code below will need adjustment. */
-	.global start
+        /* This must be the very first address in the loaded image.
+         * It should be linked at XEN_VIRT_START, and loaded at any
+         * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
+         * or the initial pagetable code below will need adjustment. */
+        .global start
 start:
 
-	/* zImage magic header, see:
-	 * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
-	 */
-	.rept 8
-	mov   r0, r0
-	.endr
-	b     past_zImage
+        /* zImage magic header, see:
+         * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
+         */
+        .rept 8
+        mov   r0, r0
+        .endr
+        b     past_zImage
 
-	.word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
-	.word 0x00000000             /* absolute load/run zImage address or
-	                              * 0 for PiC */
-	.word (_end - start)         /* zImage end address */
+        .word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
+        .word 0x00000000             /* absolute load/run zImage address or
+                                      * 0 for PiC */
+        .word (_end - start)         /* zImage end address */
 
 past_zImage:
-	cpsid aif                    /* Disable all interrupts */
+        cpsid aif                    /* Disable all interrupts */
 
-	/* Save the bootloader arguments in less-clobberable registers */
-	mov   r7, r1                 /* r7 := ARM-linux machine type */
-	mov   r8, r2                 /* r8 := ATAG base address */
+        /* Save the bootloader arguments in less-clobberable registers */
+        mov   r7, r1                 /* r7 := ARM-linux machine type */
+        mov   r8, r2                 /* r8 := ATAG base address */
 
-	/* Find out where we are */
-	ldr   r0, =start
-	adr   r9, start              /* r9  := paddr (start) */
-	sub   r10, r9, r0            /* r10 := phys-offset */
+        /* Find out where we are */
+        ldr   r0, =start
+        adr   r9, start              /* r9  := paddr (start) */
+        sub   r10, r9, r0            /* r10 := phys-offset */
 
-	/* Using the DTB in the .dtb section? */
+        /* Using the DTB in the .dtb section? */
 #ifdef CONFIG_DTB_FILE
-	ldr   r8, =_sdtb
-	add   r8, r10                /* r8 := paddr(DTB) */
+        ldr   r8, =_sdtb
+        add   r8, r10                /* r8 := paddr(DTB) */
 #endif
 
-	/* Are we the boot CPU? */
-	mov   r12, #0                /* r12 := CPU ID */
-	mrc   CP32(r0, MPIDR)
-	tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
-	beq   boot_cpu
-	tst   r0, #(1<<30)           /* Uniprocessor system? */
-	bne   boot_cpu
-	bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
-	beq   boot_cpu               /* If we're CPU 0, boot now */
-
-	/* Non-boot CPUs wait here to be woken up one at a time. */
-1:	dsb
-	ldr   r0, =smp_up_cpu        /* VA of gate */
-	add   r0, r0, r10            /* PA of gate */
-	ldr   r1, [r0]               /* Which CPU is being booted? */
-	teq   r1, r12                /* Is it us? */
-	wfene
-	bne   1b
+        /* Are we the boot CPU? */
+        mov   r12, #0                /* r12 := CPU ID */
+        mrc   CP32(r0, MPIDR)
+        tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
+        beq   boot_cpu
+        tst   r0, #(1<<30)           /* Uniprocessor system? */
+        bne   boot_cpu
+        bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
+        beq   boot_cpu               /* If we're CPU 0, boot now */
+
+        /* Non-boot CPUs wait here to be woken up one at a time. */
+1:      dsb
+        ldr   r0, =smp_up_cpu        /* VA of gate */
+        add   r0, r0, r10            /* PA of gate */
+        ldr   r1, [r0]               /* Which CPU is being booted? */
+        teq   r1, r12                /* Is it us? */
+        wfene
+        bne   1b
 
 boot_cpu:
 #ifdef EARLY_UART_ADDRESS
-	ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
-	teq   r12, #0                   /* CPU 0 sets up the UART too */
-	bleq  init_uart
-	PRINT("- CPU ")
-	mov   r0, r12
-	bl    putn
-	PRINT(" booting -\r\n")
+        ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
+        teq   r12, #0                   /* CPU 0 sets up the UART too */
+        bleq  init_uart
+        PRINT("- CPU ")
+        mov   r0, r12
+        bl    putn
+        PRINT(" booting -\r\n")
 #endif
 
-	/* Wake up secondary cpus */
-	teq   r12, #0
-	bleq  kick_cpus
-
-	/* Check that this CPU has Hyp mode */
-	mrc   CP32(r0, ID_PFR1)
-	and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
-	teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
-	beq   1f
-	PRINT("- CPU doesn't support the virtualization extensions -\r\n")
-	b     fail
+        /* Wake up secondary cpus */
+        teq   r12, #0
+        bleq  kick_cpus
+
+        /* Check that this CPU has Hyp mode */
+        mrc   CP32(r0, ID_PFR1)
+        and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
+        teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
+        beq   1f
+        PRINT("- CPU doesn't support the virtualization extensions -\r\n")
+        b     fail
 1:
-	/* Check if we're already in it */
-	mrs   r0, cpsr
-	and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
-	teq   r0, #0x1a              /* Hyp Mode? */
-	bne   1f
-	PRINT("- Started in Hyp mode -\r\n")
-	b     hyp
+        /* Check if we're already in it */
+        mrs   r0, cpsr
+        and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
+        teq   r0, #0x1a              /* Hyp Mode? */
+        bne   1f
+        PRINT("- Started in Hyp mode -\r\n")
+        b     hyp
 1:
-	/* Otherwise, it must have been Secure Supervisor mode */
-	mrc   CP32(r0, SCR)
-	tst   r0, #0x1               /* Not-Secure bit set? */
-	beq   1f
-	PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
-	b     fail
+        /* Otherwise, it must have been Secure Supervisor mode */
+        mrc   CP32(r0, SCR)
+        tst   r0, #0x1               /* Not-Secure bit set? */
+        beq   1f
+        PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
+        b     fail
 1:
-	/* OK, we're in Secure state. */
-	PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
-	ldr   r0, =enter_hyp_mode    /* VA of function */
-	adr   lr, hyp                /* Set return address for call */
-	add   pc, r0, r10            /* Call PA of function */
+        /* OK, we're in Secure state. */
+        PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
+        ldr   r0, =enter_hyp_mode    /* VA of function */
+        adr   lr, hyp                /* Set return address for call */
+        add   pc, r0, r10            /* Call PA of function */
 
 hyp:
 
-	/* Zero BSS On the boot CPU to avoid nasty surprises */
-	teq   r12, #0
-	bne   skip_bss
-
-	PRINT("- Zero BSS -\r\n")
-	ldr   r0, =__bss_start       /* Load start & end of bss */
-	ldr   r1, =__bss_end
-	add   r0, r0, r10            /* Apply physical offset */
-	add   r1, r1, r10
-	
-	mov   r2, #0
-1:	str   r2, [r0], #4
-	cmp   r0, r1
-	blo   1b
-
-skip_bss:	
-
-	PRINT("- Setting up control registers -\r\n")
-	
-	/* Read CPU ID */
-	mrc   CP32(r0, MIDR)
-	ldr   r1, =(MIDR_MASK)
-	and   r0, r0, r1
-	/* Is this a Cortex A15? */
-	ldr   r1, =(CORTEX_A15_ID)
-	teq   r0, r1
-	bleq  cortex_a15_init
-
-	/* Set up memory attribute type tables */
-	ldr   r0, =MAIR0VAL
-	ldr   r1, =MAIR1VAL
-	mcr   CP32(r0, MAIR0)
-	mcr   CP32(r1, MAIR1)
-	mcr   CP32(r0, HMAIR0)
-	mcr   CP32(r1, HMAIR1)
-
-	/* Set up the HTCR:
-	 * PT walks use Outer-Shareable accesses,
-	 * PT walks are write-back, no-write-allocate in both cache levels,
-	 * Full 32-bit address space goes through this table. */
-	ldr   r0, =0x80002500
-	mcr   CP32(r0, HTCR)
-
-	/* Set up the HSCTLR:
-	 * Exceptions in LE ARM,
-	 * Low-latency IRQs disabled,
-	 * Write-implies-XN disabled (for now),
-	 * D-cache disabled (for now),
-	 * I-cache enabled,
-	 * Alignment checking enabled,
-	 * MMU translation disabled (for now). */
-	ldr   r0, =(HSCTLR_BASE|SCTLR_A)
-	mcr   CP32(r0, HSCTLR)
-
-	/* Write Xen's PT's paddr into the HTTBR */
-	ldr   r4, =xen_pgtable
-	add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
-	mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
-	mcrr  CP64(r4, r5, HTTBR)
-
-	/* Non-boot CPUs don't need to rebuild the pagetable */
-	teq   r12, #0
-	bne   pt_ready
-	
-	/* console fixmap */
+        /* Zero BSS on the boot CPU to avoid nasty surprises */
+        teq   r12, #0
+        bne   skip_bss
+
+        PRINT("- Zero BSS -\r\n")
+        ldr   r0, =__bss_start       /* Load start & end of bss */
+        ldr   r1, =__bss_end
+        add   r0, r0, r10            /* Apply physical offset */
+        add   r1, r1, r10
+        
+        mov   r2, #0
+1:      str   r2, [r0], #4
+        cmp   r0, r1
+        blo   1b
+
+skip_bss:       
+
+        PRINT("- Setting up control registers -\r\n")
+        
+        /* Read CPU ID */
+        mrc   CP32(r0, MIDR)
+        ldr   r1, =(MIDR_MASK)
+        and   r0, r0, r1
+        /* Is this a Cortex A15? */
+        ldr   r1, =(CORTEX_A15_ID)
+        teq   r0, r1
+        bleq  cortex_a15_init
+
+        /* Set up memory attribute type tables */
+        ldr   r0, =MAIR0VAL
+        ldr   r1, =MAIR1VAL
+        mcr   CP32(r0, MAIR0)
+        mcr   CP32(r1, MAIR1)
+        mcr   CP32(r0, HMAIR0)
+        mcr   CP32(r1, HMAIR1)
+
+        /* Set up the HTCR:
+         * PT walks use Outer-Shareable accesses,
+         * PT walks are write-back, no-write-allocate in both cache levels,
+         * Full 32-bit address space goes through this table. */
+        ldr   r0, =0x80002500
+        mcr   CP32(r0, HTCR)
+
+        /* Set up the HSCTLR:
+         * Exceptions in LE ARM,
+         * Low-latency IRQs disabled,
+         * Write-implies-XN disabled (for now),
+         * D-cache disabled (for now),
+         * I-cache enabled,
+         * Alignment checking enabled,
+         * MMU translation disabled (for now). */
+        ldr   r0, =(HSCTLR_BASE|SCTLR_A)
+        mcr   CP32(r0, HSCTLR)
+
+        /* Write Xen's PT's paddr into the HTTBR */
+        ldr   r4, =xen_pgtable
+        add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
+        mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
+        mcrr  CP64(r4, r5, HTTBR)
+
+        /* Non-boot CPUs don't need to rebuild the pagetable */
+        teq   r12, #0
+        bne   pt_ready
+        
+        /* console fixmap */
 #ifdef EARLY_UART_ADDRESS
-	ldr   r1, =xen_fixmap
-	add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
-	mov   r3, #0
-	lsr   r2, r11, #12
-	lsl   r2, r2, #12            /* 4K aligned paddr of UART */
-	orr   r2, r2, #PT_UPPER(DEV_L3)
-	orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
-	strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
+        ldr   r1, =xen_fixmap
+        add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
+        mov   r3, #0
+        lsr   r2, r11, #12
+        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
+        orr   r2, r2, #PT_UPPER(DEV_L3)
+        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
+        strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
 #endif
 
-	/* Build the baseline idle pagetable's first-level entries */
-	ldr   r1, =xen_second
-	add   r1, r1, r10            /* r1 := paddr (xen_second) */
-	mov   r3, #0x0
-	orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
-	orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
-	strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
-	add   r2, r2, #0x1000
-	strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
-	add   r2, r2, #0x1000
-	strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
-	add   r2, r2, #0x1000
-	strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
-
-	/* Now set up the second-level entries */
-	orr   r2, r9, #PT_UPPER(MEM)
-	orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
-	mov   r4, r9, lsr #18        /* Slot for paddr(start) */
-	strd  r2, r3, [r1, r4]       /* Map Xen there */
-	ldr   r4, =start
-	lsr   r4, #18                /* Slot for vaddr(start) */
-	strd  r2, r3, [r1, r4]       /* Map Xen there too */
-
-	/* xen_fixmap pagetable */
-	ldr   r2, =xen_fixmap
-	add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
-	orr   r2, r2, #PT_UPPER(PT)
-	orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
-	add   r4, r4, #8
-	strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
-
-	mov   r3, #0x0
-	lsr   r2, r8, #21
-	lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
-	orr   r2, r2, #PT_UPPER(MEM)
-	orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
-	add   r4, r4, #8
-	strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
+        /* Build the baseline idle pagetable's first-level entries */
+        ldr   r1, =xen_second
+        add   r1, r1, r10            /* r1 := paddr (xen_second) */
+        mov   r3, #0x0
+        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
+        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
+        strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
+        add   r2, r2, #0x1000
+        strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
+        add   r2, r2, #0x1000
+        strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
+        add   r2, r2, #0x1000
+        strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
+
+        /* Now set up the second-level entries */
+        orr   r2, r9, #PT_UPPER(MEM)
+        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
+        mov   r4, r9, lsr #18        /* Slot for paddr(start) */
+        strd  r2, r3, [r1, r4]       /* Map Xen there */
+        ldr   r4, =start
+        lsr   r4, #18                /* Slot for vaddr(start) */
+        strd  r2, r3, [r1, r4]       /* Map Xen there too */
+
+        /* xen_fixmap pagetable */
+        ldr   r2, =xen_fixmap
+        add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
+        orr   r2, r2, #PT_UPPER(PT)
+        orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
+        add   r4, r4, #8
+        strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
+
+        mov   r3, #0x0
+        lsr   r2, r8, #21
+        lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
+        orr   r2, r2, #PT_UPPER(MEM)
+        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
+        add   r4, r4, #8
+        strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
 
 pt_ready:
-	PRINT("- Turning on paging -\r\n")
-
-	ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
-	mrc   CP32(r0, HSCTLR)
-	orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
-	dsb                          /* Flush PTE writes and finish reads */
-	mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
-	isb                          /* Now, flush the icache */
-	mov   pc, r1                 /* Get a proper vaddr into PC */
+        PRINT("- Turning on paging -\r\n")
+
+        ldr   r1, =paging            /* Explicit vaddr, not PC-relative */
+        mrc   CP32(r0, HSCTLR)
+        orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
+        dsb                          /* Flush PTE writes and finish reads */
+        mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
+        isb                          /* Now, flush the icache */
+        mov   pc, r1                 /* Get a proper vaddr into PC */
 paging:
 
 
 #ifdef EARLY_UART_ADDRESS
-	/* Use a virtual address to access the UART. */
-	ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
+        /* Use a virtual address to access the UART. */
+        ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
 #endif
 
-	PRINT("- Ready -\r\n")
-
-	/* The boot CPU should go straight into C now */
-	teq   r12, #0
-	beq   launch
-
-	/* Non-boot CPUs need to move on to the relocated pagetables */
-	mov   r0, #0
-	ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
-	add   r4, r4, r10            /* PA of it */
-	ldrd  r4, r5, [r4]           /* Actual value */
-	dsb
-	mcrr  CP64(r4, r5, HTTBR)
-	dsb
-	isb
-	mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
-	mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
-	mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
-	dsb                          /* Ensure completion of TLB+BP flush */
-	isb
-
-	/* Non-boot CPUs report that they've got this far */
-	ldr   r0, =ready_cpus
-1:	ldrex r1, [r0]               /*            { read # of ready CPUs } */
-	add   r1, r1, #1             /* Atomically { ++                   } */
-	strex r2, r1, [r0]           /*            { writeback            } */
-	teq   r2, #0
-	bne   1b
-	dsb
-	mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
-	dsb
-
-	/* Here, the non-boot CPUs must wait again -- they're now running on
-	 * the boot CPU's pagetables so it's safe for the boot CPU to
-	 * overwrite the non-relocated copy of Xen.  Once it's done that,
-	 * and brought up the memory allocator, non-boot CPUs can get their
-	 * own stacks and enter C. */
-1:	wfe
-	dsb
-	ldr   r0, =smp_up_cpu
-	ldr   r1, [r0]               /* Which CPU is being booted? */
-	teq   r1, r12                /* Is it us? */
-	bne   1b
-
-launch:	
-	ldr   r0, =init_stack        /* Find the boot-time stack */
-	ldr   sp, [r0]
-	add   sp, #STACK_SIZE        /* (which grows down from the top). */
-	sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
-	mov   r0, r10                /* Marshal args: - phys_offset */
-	mov   r1, r7                 /*               - machine type */
-	mov   r2, r8                 /*               - ATAG address */
-	movs  r3, r12                /*               - CPU ID */
-	beq   start_xen              /* and disappear into the land of C */
-	b     start_secondary        /* (to the appropriate entry point) */
+        PRINT("- Ready -\r\n")
+
+        /* The boot CPU should go straight into C now */
+        teq   r12, #0
+        beq   launch
+
+        /* Non-boot CPUs need to move on to the relocated pagetables */
+        mov   r0, #0
+        ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
+        add   r4, r4, r10            /* PA of it */
+        ldrd  r4, r5, [r4]           /* Actual value */
+        dsb
+        mcrr  CP64(r4, r5, HTTBR)
+        dsb
+        isb
+        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
+        mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
+        mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
+        dsb                          /* Ensure completion of TLB+BP flush */
+        isb
+
+        /* Non-boot CPUs report that they've got this far */
+        ldr   r0, =ready_cpus
+1:      ldrex r1, [r0]               /*            { read # of ready CPUs } */
+        add   r1, r1, #1             /* Atomically { ++                   } */
+        strex r2, r1, [r0]           /*            { writeback            } */
+        teq   r2, #0
+        bne   1b
+        dsb
+        mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
+        dsb
+
+        /* Here, the non-boot CPUs must wait again -- they're now running on
+         * the boot CPU's pagetables so it's safe for the boot CPU to
+         * overwrite the non-relocated copy of Xen.  Once it's done that,
+         * and brought up the memory allocator, non-boot CPUs can get their
+         * own stacks and enter C. */
+1:      wfe
+        dsb
+        ldr   r0, =smp_up_cpu
+        ldr   r1, [r0]               /* Which CPU is being booted? */
+        teq   r1, r12                /* Is it us? */
+        bne   1b
+
+launch:
+        ldr   r0, =init_stack        /* Find the boot-time stack */
+        ldr   sp, [r0]
+        add   sp, #STACK_SIZE        /* (which grows down from the top). */
+        sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
+        mov   r0, r10                /* Marshal args: - phys_offset */
+        mov   r1, r7                 /*               - machine type */
+        mov   r2, r8                 /*               - ATAG address */
+        movs  r3, r12                /*               - CPU ID */
+        beq   start_xen              /* and disappear into the land of C */
+        b     start_secondary        /* (to the appropriate entry point) */
 
 /* Fail-stop
  * r0: string explaining why */
-fail:	PRINT("- Boot failed -\r\n")
-1:	wfe
-	b     1b
+fail:   PRINT("- Boot failed -\r\n")
+1:      wfe
+        b     1b
 
 #ifdef EARLY_UART_ADDRESS
 
 /* Bring up the UART. Specific to the PL011 UART.
  * Clobbers r0-r2 */
 init_uart:
-	mov   r1, #0x0
-	str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
-	mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
-	str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
-	mov   r1, #0x60              /* 8n1 */
-	str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
-	ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
-	str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
-	adr   r0, 1f
-	b     puts
-1:	.asciz "- UART enabled -\r\n"
-	.align 4
+        mov   r1, #0x0
+        str   r1, [r11, #0x28]       /* -> UARTFBRD (Baud divisor fraction) */
+        mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
+        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
+        mov   r1, #0x60              /* 8n1 */
+        str   r1, [r11, #0x2c]       /* -> UARTLCR_H (Line control) */
+        ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
+        str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
+        adr   r0, 1f
+        b     puts
+1:      .asciz "- UART enabled -\r\n"
+        .align 4
 
 /* Print early debug messages.  Specific to the PL011 UART.
  * r0: Nul-terminated string to print.
  * Clobbers r0-r2 */
 puts:
-	ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
-	tst   r2, #0x8               /* Check BUSY bit */
-	bne   puts                   /* Wait for the UART to be ready */
-	ldrb  r2, [r0], #1           /* Load next char */
-	teq   r2, #0                 /* Exit on nul */
-	moveq pc, lr
-	str   r2, [r11]              /* -> UARTDR (Data Register) */
-	b     puts
+        ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
+        tst   r2, #0x8               /* Check BUSY bit */
+        bne   puts                   /* Wait for the UART to be ready */
+        ldrb  r2, [r0], #1           /* Load next char */
+        teq   r2, #0                 /* Exit on nul */
+        moveq pc, lr
+        str   r2, [r11]              /* -> UARTDR (Data Register) */
+        b     puts
 
 /* Print a 32-bit number in hex.  Specific to the PL011 UART.
  * r0: Number to print.
  * clobbers r0-r3 */
 putn:
-	adr   r1, hex
-	mov   r3, #8
-1:	ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
-	tst   r2, #0x8               /* Check BUSY bit */
-	bne   1b                     /* Wait for the UART to be ready */
-	and   r2, r0, #0xf0000000    /* Mask off the top nybble */
-	ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
-	str   r2, [r11]              /* -> UARTDR (Data Register) */
-	lsl   r0, #4                 /* Roll it through one nybble at a time */
-	subs  r3, r3, #1
-	bne   1b
-	mov   pc, lr
-
-hex:	.ascii "0123456789abcdef"
-	.align 2
+        adr   r1, hex
+        mov   r3, #8
+1:      ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
+        tst   r2, #0x8               /* Check BUSY bit */
+        bne   1b                     /* Wait for the UART to be ready */
+        and   r2, r0, #0xf0000000    /* Mask off the top nybble */
+        ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
+        str   r2, [r11]              /* -> UARTDR (Data Register) */
+        lsl   r0, #4                 /* Roll it through one nybble at a time */
+        subs  r3, r3, #1
+        bne   1b
+        mov   pc, lr
+
+hex:    .ascii "0123456789abcdef"
+        .align 2
 
 #else  /* EARLY_UART_ADDRESS */
 
@@ -403,6 +403,13 @@ init_uart:
 .global early_puts
 early_puts:
 puts:
-putn:	mov   pc, lr
+putn:   mov   pc, lr
 
 #endif /* EARLY_UART_ADDRESS */
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
index 7c3b357..3dba38d 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/mode_switch.S
@@ -28,25 +28,25 @@
 /* wake up secondary cpus */
 .globl kick_cpus
 kick_cpus:
-	/* write start paddr to v2m sysreg FLAGSSET register */
-	ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
-	dsb
-	mov   r2, #0xffffffff
-	str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
-	dsb
-	ldr   r2, =start
-	add   r2, r2, r10
-	str   r2, [r0, #(V2M_SYS_FLAGSSET)]
-	dsb
-	/* send an interrupt */
-	ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
-	mov   r2, #0x1
-	str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
-	mov   r2, #0xfe0000
-	str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
-	dsb
-	str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
-	mov   pc, lr
+        /* write start paddr to v2m sysreg FLAGSSET register */
+        ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
+        dsb
+        mov   r2, #0xffffffff
+        str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
+        dsb
+        ldr   r2, =start
+        add   r2, r2, r10
+        str   r2, [r0, #(V2M_SYS_FLAGSSET)]
+        dsb
+        /* send an interrupt */
+        ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
+        mov   r2, #0x1
+        str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
+        mov   r2, #0xfe0000
+        str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
+        dsb
+        str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
+        mov   pc, lr
 
 
 /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
@@ -61,54 +61,61 @@ kick_cpus:
 
 .globl enter_hyp_mode
 enter_hyp_mode:
-	mov   r3, lr                 /* Put return address in non-banked reg */
-	cpsid aif, #0x16             /* Enter Monitor mode */
-	mrc   CP32(r0, SCR)
-	orr   r0, r0, #0x100         /* Set HCE */
-	orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
-	bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
-	mcr   CP32(r0, SCR)
-	/* Ugly: the system timer's frequency register is only
-	 * programmable in Secure state.  Since we don't know where its
-	 * memory-mapped control registers live, we can't find out the
-	 * right frequency.  Use the VE model's default frequency here. */
-	ldr   r0, =0x5f5e100         /* 100 MHz */
-	mcr   CP32(r0, CNTFRQ)
-	ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
-	mcr   CP32(r0, NSACR)
-	mov   r0, #GIC_BASE_ADDRESS
-	add   r0, r0, #GIC_DR_OFFSET
-	/* Disable the GIC distributor, on the boot CPU only */
-	mov   r1, #0
-	teq   r12, #0                /* Is this the boot CPU? */
-	streq r1, [r0]
-	/* Continuing ugliness: Set up the GIC so NS state owns interrupts,
-	 * The first 32 interrupts (SGIs & PPIs) must be configured on all
-	 * CPUs while the remainder are SPIs and only need to be done one, on
-	 * the boot CPU. */
-	add   r0, r0, #0x80          /* GICD_IGROUP0 */
-	mov   r2, #0xffffffff        /* All interrupts to group 1 */
-	teq   r12, #0                /* Boot CPU? */
-	str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
-	streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
-	streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
-	/* Disable the GIC CPU interface on all processors */
-	mov   r0, #GIC_BASE_ADDRESS
-	add   r0, r0, #GIC_CR_OFFSET
-	mov   r1, #0
-	str   r1, [r0]
-	/* Must drop priority mask below 0x80 before entering NS state */
-	ldr   r1, =0xff
-	str   r1, [r0, #0x4]         /* -> GICC_PMR */
-	/* Reset a few config registers */
-	mov   r0, #0
-	mcr   CP32(r0, FCSEIDR)
-	mcr   CP32(r0, CONTEXTIDR)
-	/* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
-	ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
-	mcr   CP32(r1, NSACR)
+        mov   r3, lr                 /* Put return address in non-banked reg */
+        cpsid aif, #0x16             /* Enter Monitor mode */
+        mrc   CP32(r0, SCR)
+        orr   r0, r0, #0x100         /* Set HCE */
+        orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
+        bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
+        mcr   CP32(r0, SCR)
+        /* Ugly: the system timer's frequency register is only
+         * programmable in Secure state.  Since we don't know where its
+         * memory-mapped control registers live, we can't find out the
+         * right frequency.  Use the VE model's default frequency here. */
+        ldr   r0, =0x5f5e100         /* 100 MHz */
+        mcr   CP32(r0, CNTFRQ)
+        ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
+        mcr   CP32(r0, NSACR)
+        mov   r0, #GIC_BASE_ADDRESS
+        add   r0, r0, #GIC_DR_OFFSET
+        /* Disable the GIC distributor, on the boot CPU only */
+        mov   r1, #0
+        teq   r12, #0                /* Is this the boot CPU? */
+        streq r1, [r0]
+        /* Continuing ugliness: set up the GIC so that NS state owns
+         * interrupts.  The first 32 interrupts (SGIs & PPIs) must be
+         * configured on all CPUs, while the remainder are SPIs and only
+         * need to be done once, on the boot CPU. */
+        add   r0, r0, #0x80          /* GICD_IGROUP0 */
+        mov   r2, #0xffffffff        /* All interrupts to group 1 */
+        teq   r12, #0                /* Boot CPU? */
+        str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
+        streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
+        streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
+        /* Disable the GIC CPU interface on all processors */
+        mov   r0, #GIC_BASE_ADDRESS
+        add   r0, r0, #GIC_CR_OFFSET
+        mov   r1, #0
+        str   r1, [r0]
+        /* Must drop priority mask below 0x80 before entering NS state */
+        ldr   r1, =0xff
+        str   r1, [r0, #0x4]         /* -> GICC_PMR */
+        /* Reset a few config registers */
+        mov   r0, #0
+        mcr   CP32(r0, FCSEIDR)
+        mcr   CP32(r0, CONTEXTIDR)
+        /* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
+        ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
+        mcr   CP32(r1, NSACR)
 
-	mrs   r0, cpsr               /* Copy the CPSR */
-	add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
-	msr   spsr_cxsf, r0          /* into the SPSR */
-	movs  pc, r3                 /* Exception-return into Hyp mode */
+        mrs   r0, cpsr               /* Copy the CPSR */
+        add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
+        msr   spsr_cxsf, r0          /* into the SPSR */
+        movs  pc, r3                 /* Exception-return into Hyp mode */
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/proc-ca15.S
index 5a5bf64..dcdd42e 100644
--- a/xen/arch/arm/proc-ca15.S
+++ b/xen/arch/arm/proc-ca15.S
@@ -21,8 +21,15 @@
 
 .globl cortex_a15_init
 cortex_a15_init:
-	/* Set up the SMP bit in ACTLR */
-	mrc   CP32(r0, ACTLR)
-	orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit*/
-	mcr   CP32(r0, ACTLR)
-	mov   pc, lr
+        /* Set up the SMP bit in ACTLR */
+        mrc   CP32(r0, ACTLR)
+        orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit */
+        mcr   CP32(r0, ACTLR)
+        mov   pc, lr
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

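[Editorial note: the ldrex/strex loop in head.S above, which increments ready_cpus, is a load-exclusive/store-exclusive retry loop. As a rough illustration only (the function and counter below are ours, not Xen's), the same pattern expressed with C11 atomics, which compilers lower to exactly such an ldrex/strex loop on ARMv7:]

```c
#include <stdatomic.h>

/* Illustrative equivalent of the head.S sequence: each secondary CPU
 * atomically increments a shared ready-CPU counter. */
static atomic_uint ready_cpus;

unsigned int report_ready(void)
{
    /* atomic_fetch_add maps to an ldrex/strex retry loop on ARMv7:
     * ldrex reads the word and marks the address exclusive; strex
     * writes back and reports failure if another CPU intervened,
     * in which case the loop retries -- the teq/bne in the assembly. */
    return atomic_fetch_add(&ready_cpus, 1) + 1;
}
```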
From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0N0-00023r-H9; Tue, 18 Dec 2012 16:50:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0Mz-00023D-GU
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:50:17 +0000
Received: from [85.158.139.83:49271] by server-7.bemta-5.messagelabs.com id
	9D/5A-08009-8CE90D05; Tue, 18 Dec 2012 16:50:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355849406!28179612!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4032 invoked from network); 18 Dec 2012 16:50:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:50:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1064224"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:36 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-NN;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:32 +0000
Message-ID: <1355849376-26652-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/5] xen: arm: fix long lines in entry.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/entry.S |   66 +++++++++++++++++++++++++-------------------------
 1 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
index 1d6ff32..83793c2 100644
--- a/xen/arch/arm/entry.S
+++ b/xen/arch/arm/entry.S
@@ -11,22 +11,22 @@
 #define RESTORE_BANKED(mode) \
 	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
 
-#define SAVE_ALL											\
-	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */					\
-	push {r0-r12}; /* Save R0-R12 */								\
-													\
-	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */				\
-	str r11, [sp, #UREGS_pc];									\
-													\
-	str lr, [sp, #UREGS_lr];									\
-													\
-	add r11, sp, #UREGS_kernel_sizeof+4;								\
-	str r11, [sp, #UREGS_sp];									\
-													\
-	mrs r11, SPSR_hyp;										\
-	str r11, [sp, #UREGS_cpsr];									\
-	and r11, #PSR_MODE_MASK;									\
-	cmp r11, #PSR_MODE_HYP;										\
+#define SAVE_ALL							\
+	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
+	push {r0-r12}; /* Save R0-R12 */				\
+									\
+	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
+	str r11, [sp, #UREGS_pc];					\
+									\
+	str lr, [sp, #UREGS_lr];					\
+									\
+	add r11, sp, #UREGS_kernel_sizeof+4;				\
+	str r11, [sp, #UREGS_sp];					\
+									\
+	mrs r11, SPSR_hyp;						\
+	str r11, [sp, #UREGS_cpsr];					\
+	and r11, #PSR_MODE_MASK;					\
+	cmp r11, #PSR_MODE_HYP;						\
 	blne save_guest_regs
 
 save_guest_regs:
@@ -43,25 +43,25 @@ save_guest_regs:
 	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
 	mov pc, lr
 
-#define DEFINE_TRAP_ENTRY(trap)										\
-	ALIGN;												\
-trap_##trap:												\
-	SAVE_ALL;											\
-	cpsie i; 	/* local_irq_enable */								\
-	adr lr, return_from_trap;									\
-	mov r0, sp;											\
-	mov r11, sp;											\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
+#define DEFINE_TRAP_ENTRY(trap)						\
+	ALIGN;								\
+trap_##trap:								\
+	SAVE_ALL;							\
+	cpsie i; 	/* local_irq_enable */				\
+	adr lr, return_from_trap;					\
+	mov r0, sp;							\
+	mov r11, sp;							\
+	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
 	b do_trap_##trap
 
-#define DEFINE_TRAP_ENTRY_NOIRQ(trap)									\
-	ALIGN;												\
-trap_##trap:												\
-	SAVE_ALL;											\
-	adr lr, return_from_trap;									\
-	mov r0, sp;											\
-	mov r11, sp;											\
-	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
+#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
+	ALIGN;								\
+trap_##trap:								\
+	SAVE_ALL;							\
+	adr lr, return_from_trap;					\
+	mov r0, sp;							\
+	mov r11, sp;							\
+	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
 	b do_trap_##trap
 
 .globl hyp_traps_vector
-- 
1.7.2.5



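[Editorial note: the `bic sp, #7` in the trap-entry macros above rounds the stack pointer down to an 8-byte boundary, as the AAPCS requires at public interfaces, by clearing the low three bits. A minimal C sketch of that arithmetic; the helper name is illustrative:]

```c
#include <stdint.h>

/* Align an address down to 8 bytes by clearing the low three bits,
 * mirroring "bic sp, #7" (BIC = bit clear, i.e. sp & ~7). */
static inline uint32_t align_down_8(uint32_t sp)
{
    return sp & ~(uint32_t)7;
}
```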
From xen-devel-bounces@lists.xen.org Tue Dec 18 16:50:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 16:50:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0N0-00023z-Tb; Tue, 18 Dec 2012 16:50:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0Mz-00023F-IY
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 16:50:17 +0000
Received: from [85.158.139.83:56944] by server-2.bemta-5.messagelabs.com id
	7A/56-16162-8CE90D05; Tue, 18 Dec 2012 16:50:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355849406!28179612!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1904 invoked from network); 18 Dec 2012 16:50:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 16:50:10 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355097600"; 
   d="scan'208";a="1064222"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 16:49:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 11:49:36 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Tl0MK-0000wW-Pr;
	Tue, 18 Dec 2012 16:49:36 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 16:49:34 +0000
Message-ID: <1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
	drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This shortens an overly long line.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/head.S |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
index 0d9a799..eb54925 100644
--- a/xen/arch/arm/head.S
+++ b/xen/arch/arm/head.S
@@ -25,9 +25,12 @@
 #define ZIMAGE_MAGIC_NUMBER 0x016f2818
 
 #define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
+
+/* Second Level */
 #define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
-#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
-#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
+
+/* Third Level */
+#define PT_DEV 0xe73 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
 
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)
@@ -222,8 +225,8 @@ skip_bss:
         mov   r3, #0
         lsr   r2, r11, #12
         lsl   r2, r2, #12            /* 4K aligned paddr of UART */
-        orr   r2, r2, #PT_UPPER(DEV_L3)
-        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
+        orr   r2, r2, #PT_UPPER(DEV)
+        orr   r2, r2, #PT_LOWER(DEV) /* r2:r3 := 4K dev map including UART */
         strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
 #endif
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:04:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0af-00032V-Hw; Tue, 18 Dec 2012 17:04:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0ae-00032Q-Aq
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 17:04:24 +0000
Received: from [193.109.254.147:62023] by server-2.bemta-14.messagelabs.com id
	FC/FA-30744-712A0D05; Tue, 18 Dec 2012 17:04:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355850259!10452772!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3161 invoked from network); 18 Dec 2012 17:04:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:04:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="232215"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 17:04:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 17:04:16 +0000
Message-ID: <1355850255.14620.277.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Razvan Cojocaru <rzvncj@gmail.com>
Date: Tue, 18 Dec 2012 17:04:15 +0000
In-Reply-To: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-17 at 11:38 +0000, Razvan Cojocaru wrote:
> Add a xc_domain_hvm_get_mtrr_type() call to the libxc API,
> to support functionality similar to get_mtrr_type() (which
> is only available at the hypervisor level).

Do you have a user in mind for this new functionality?

> Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>
> 
> diff -r f50aab21f9f2 -r a23515aabc91 tools/libxc/xc_domain.c
> --- a/tools/libxc/xc_domain.c	Thu Dec 13 14:39:31 2012 +0000
> +++ b/tools/libxc/xc_domain.c	Mon Dec 17 12:53:11 2012 +0200
> @@ -379,6 +379,45 @@ int xc_domain_hvm_setcontext(xc_interfac
>      return ret;
>  }
>  
> +#define MTRR_PHYSMASK_VALID_BIT  11
> +#define MTRR_PHYSMASK_SHIFT      12
> +#define MTRR_PHYSBASE_TYPE_MASK  0xff   /* lowest 8 bits */
> +
> +int xc_domain_hvm_get_mtrr_type(xc_interface *xch, 
> +                                uint32_t domid,
> +                                unsigned long paddress,
> +                                uint8_t *type)
> +{

This version seems to do a lot less than get_mtrr_type() in the
hypervisor. Is that deliberate? Why aren't the fixed MTRR slots and
overlap handling required here?

> [...]

> diff -r f50aab21f9f2 -r a23515aabc91 tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h	Thu Dec 13 14:39:31 2012 +0000
> +++ b/tools/libxc/xenctrl.h	Mon Dec 17 12:53:11 2012 +0200
> @@ -633,6 +633,15 @@ int xc_domain_hvm_setcontext(xc_interfac
>                               uint32_t size);
>  
>  /**
> + * This function returns information about the MTRR type of
> + * a given guest physical address/

                                   ^ you mean . not / I think.

Ian.



From xen-devel-bounces@lists.xen.org Tue Dec 18 17:15:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0ko-0003Fv-ON; Tue, 18 Dec 2012 17:14:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0kn-0003Fq-Nq
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:14:53 +0000
Received: from [85.158.139.83:4843] by server-11.bemta-5.messagelabs.com id
	17/7A-31624-C84A0D05; Tue, 18 Dec 2012 17:14:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355850892!29875406!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3264 invoked from network); 18 Dec 2012 17:14:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:14:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="232500"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 17:14:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 17:14:51 +0000
Message-ID: <1355850890.14620.279.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 17:14:50 +0000
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Tim Deegan <tim@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping? Any comments on the ARM parts before I go through a rebase +
refresh cycle?

On Tue, 2012-12-04 at 11:56 +0000, Ian Campbell wrote:
> This was a short term hack to get something linking quickly, but its
> usefulness has now passed.
> 
> This series replaces everything in here with proper functions. In many
> cases these are still just stubs.
> 
> It seems to me that at least some of this stuff consists of x86-isms
> which should instead be removed from the common code.
> 
> This highlights two large missing pieces of functionality: wallclock
> time and cleaning up on domain destroy.
> 
> Ian.
> 
> 




From xen-devel-bounces@lists.xen.org Tue Dec 18 17:17:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0nP-0003P0-Ap; Tue, 18 Dec 2012 17:17:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0nN-0003Ov-I9
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:17:33 +0000
Received: from [85.158.143.35:23878] by server-3.bemta-4.messagelabs.com id
	57/73-18211-C25A0D05; Tue, 18 Dec 2012 17:17:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355851045!14236864!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1706 invoked from network); 18 Dec 2012 17:17:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="232557"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 17:17:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 17:17:23 +0000
Message-ID: <1355851042.14620.280.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 17:17:22 +0000
In-Reply-To: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping?
On Tue, 2012-12-04 at 15:57 +0000, Ian Campbell wrote:
> Eventually we will have arm64 as well.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  Config.mk                                    |    4 +++-
>  config/{arm.mk => arm32.mk}                  |    0
>  xen/Rules.mk                                 |    2 +-
>  xen/arch/arm/Makefile                        |    9 +++------
>  xen/arch/arm/Rules.mk                        |   13 ++++++++-----
>  xen/arch/arm/arm32/Makefile                  |    5 +++++
>  xen/arch/arm/{ => arm32}/asm-offsets.c       |    0
>  xen/arch/arm/{ => arm32}/entry.S             |    0
>  xen/arch/arm/{ => arm32}/head.S              |    0
>  xen/arch/arm/{ => arm32}/lib/Makefile        |    0
>  xen/arch/arm/{ => arm32}/lib/assembler.h     |    0
>  xen/arch/arm/{ => arm32}/lib/bitops.h        |    0
>  xen/arch/arm/{ => arm32}/lib/changebit.S     |    0
>  xen/arch/arm/{ => arm32}/lib/clearbit.S      |    0
>  xen/arch/arm/{ => arm32}/lib/copy_template.S |    0
>  xen/arch/arm/{ => arm32}/lib/div64.S         |    0
>  xen/arch/arm/{ => arm32}/lib/findbit.S       |    0
>  xen/arch/arm/{ => arm32}/lib/lib1funcs.S     |    0
>  xen/arch/arm/{ => arm32}/lib/lshrdi3.S       |    0
>  xen/arch/arm/{ => arm32}/lib/memcpy.S        |    0
>  xen/arch/arm/{ => arm32}/lib/memmove.S       |    0
>  xen/arch/arm/{ => arm32}/lib/memset.S        |    0
>  xen/arch/arm/{ => arm32}/lib/memzero.S       |    0
>  xen/arch/arm/{ => arm32}/lib/setbit.S        |    0
>  xen/arch/arm/{ => arm32}/lib/testchangebit.S |    0
>  xen/arch/arm/{ => arm32}/lib/testclearbit.S  |    0
>  xen/arch/arm/{ => arm32}/lib/testsetbit.S    |    0
>  xen/arch/arm/{ => arm32}/mode_switch.S       |    2 +-
>  xen/arch/arm/{ => arm32}/proc-ca15.S         |    0
>  xen/arch/arm/domain.c                        |    2 +-
>  xen/arch/arm/domain_build.c                  |    2 +-
>  xen/arch/arm/gic.c                           |    2 +-
>  xen/arch/arm/irq.c                           |    2 +-
>  xen/arch/arm/p2m.c                           |    2 +-
>  xen/arch/arm/setup.c                         |    2 +-
>  xen/arch/arm/smpboot.c                       |    2 +-
>  xen/arch/arm/traps.c                         |    2 +-
>  xen/arch/arm/vgic.c                          |    2 +-
>  xen/arch/arm/vtimer.c                        |    2 +-
>  xen/{arch/arm => include/asm-arm}/gic.h      |    6 ++----
>  40 files changed, 33 insertions(+), 28 deletions(-)
>  rename config/{arm.mk => arm32.mk} (100%)
>  create mode 100644 xen/arch/arm/arm32/Makefile
>  rename xen/arch/arm/{ => arm32}/asm-offsets.c (100%)
>  rename xen/arch/arm/{ => arm32}/entry.S (100%)
>  rename xen/arch/arm/{ => arm32}/head.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/Makefile (100%)
>  rename xen/arch/arm/{ => arm32}/lib/assembler.h (100%)
>  rename xen/arch/arm/{ => arm32}/lib/bitops.h (100%)
>  rename xen/arch/arm/{ => arm32}/lib/changebit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/clearbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/copy_template.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/div64.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/findbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/lib1funcs.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/lshrdi3.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memcpy.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memmove.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memset.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memzero.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/setbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/testchangebit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/testclearbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/testsetbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/mode_switch.S (99%)
>  rename xen/arch/arm/{ => arm32}/proc-ca15.S (100%)
>  rename xen/{arch/arm => include/asm-arm}/gic.h (98%)
> 
> diff --git a/Config.mk b/Config.mk
> index d99b9a1..8e35886 100644
> --- a/Config.mk
> +++ b/Config.mk
> @@ -14,7 +14,9 @@ debug ?= y
>  debug_symbols ?= $(debug)
> 
>  XEN_COMPILE_ARCH    ?= $(shell uname -m | sed -e s/i.86/x86_32/ \
> -                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ -e s/arm.*/arm/)
> +                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ \
> +                         -e s/armv7.*/arm32/)
> +
>  XEN_TARGET_ARCH     ?= $(XEN_COMPILE_ARCH)
>  XEN_OS              ?= $(shell uname -s)
> 
> diff --git a/config/arm.mk b/config/arm32.mk
> similarity index 100%
> rename from config/arm.mk
> rename to config/arm32.mk
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index f7cb8b2..c2db449 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -28,7 +28,7 @@ endif
>  # Set ARCH/SUBARCH appropriately.
>  override TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>  override TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> -                              sed -e 's/x86.*/x86/')
> +                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> 
>  TARGET := $(BASEDIR)/xen
> 
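[Archive note: the net effect of the two sed expressions quoted above — uname -m mapped to a subarch, then the subarch collapsed to the generic arch — can be checked from a shell with GNU sed (the \| alternation is a GNU extension); the sample machine strings are illustrative:]

```shell
# Illustrative check of the two-step arch mapping set up by the patch:
# uname -m output -> XEN_COMPILE_ARCH (subarch) -> TARGET_ARCH.
for m in i686 x86_64 armv7l; do
    sub=$(echo "$m" | sed -e 's/i.86/x86_32/' -e 's/i86pc/x86_32/' \
                          -e 's/amd64/x86_64/' -e 's/armv7.*/arm32/')
    arch=$(echo "$sub" | sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g')
    echo "$m -> $sub -> $arch"
done
```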
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index fd92b72..1b33767 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -1,8 +1,7 @@
> -subdir-y += lib
> +subdir-$(arm32) += arm32
> 
>  obj-y += dummy.o
>  obj-y += early_printk.o
> -obj-y += entry.o
>  obj-y += domain.o
>  obj-y += domctl.o
>  obj-y += sysctl.o
> @@ -12,8 +11,6 @@ obj-y += io.o
>  obj-y += irq.o
>  obj-y += kernel.o
>  obj-y += mm.o
> -obj-y += mode_switch.o
> -obj-y += proc-ca15.o
>  obj-y += p2m.o
>  obj-y += percpu.o
>  obj-y += guestcopy.o
> @@ -36,7 +33,7 @@ obj-y += dtb.o
>  AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
>  endif
> 
> -ALL_OBJS := head.o $(ALL_OBJS)
> +ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
> 
>  $(TARGET): $(TARGET)-syms $(TARGET).bin
>         # XXX: VE model loads by VMA so instead of
> @@ -81,7 +78,7 @@ $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
>             $(@D)/.$(@F).1.o -o $@
>         rm -f $(@D)/.$(@F).[0-9]*
> 
> -asm-offsets.s: asm-offsets.c
> +asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
>         $(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $<
> 
>  xen.lds: xen.lds.S
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index a45c654..f83bfee 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -12,16 +12,19 @@ CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
>  CFLAGS += -I$(BASEDIR)/include
> 
> -# Prevent floating-point variables from creeping into Xen.
> -CFLAGS += -msoft-float
> -
>  $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
>  $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
> 
>  arm := y
> 
> +ifeq ($(TARGET_SUBARCH),arm32)
> +# Prevent floating-point variables from creeping into Xen.
> +CFLAGS += -msoft-float
> +CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> +arm32 := y
> +arm64 := n
> +endif
> +
>  ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
>  CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
>  endif
> -
> -CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
> new file mode 100644
> index 0000000..20931fa
> --- /dev/null
> +++ b/xen/arch/arm/arm32/Makefile
> @@ -0,0 +1,5 @@
> +subdir-y += lib
> +
> +obj-y += entry.o
> +obj-y += mode_switch.o
> +obj-y += proc-ca15.o
> diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
> similarity index 100%
> rename from xen/arch/arm/asm-offsets.c
> rename to xen/arch/arm/arm32/asm-offsets.c
> diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/arm32/entry.S
> similarity index 100%
> rename from xen/arch/arm/entry.S
> rename to xen/arch/arm/arm32/entry.S
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/arm32/head.S
> similarity index 100%
> rename from xen/arch/arm/head.S
> rename to xen/arch/arm/arm32/head.S
> diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/arm32/lib/Makefile
> similarity index 100%
> rename from xen/arch/arm/lib/Makefile
> rename to xen/arch/arm/arm32/lib/Makefile
> diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/arm32/lib/assembler.h
> similarity index 100%
> rename from xen/arch/arm/lib/assembler.h
> rename to xen/arch/arm/arm32/lib/assembler.h
> diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/arm32/lib/bitops.h
> similarity index 100%
> rename from xen/arch/arm/lib/bitops.h
> rename to xen/arch/arm/arm32/lib/bitops.h
> diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/arm32/lib/changebit.S
> similarity index 100%
> rename from xen/arch/arm/lib/changebit.S
> rename to xen/arch/arm/arm32/lib/changebit.S
> diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/arm32/lib/clearbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/clearbit.S
> rename to xen/arch/arm/arm32/lib/clearbit.S
> diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/arm32/lib/copy_template.S
> similarity index 100%
> rename from xen/arch/arm/lib/copy_template.S
> rename to xen/arch/arm/arm32/lib/copy_template.S
> diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/arm32/lib/div64.S
> similarity index 100%
> rename from xen/arch/arm/lib/div64.S
> rename to xen/arch/arm/arm32/lib/div64.S
> diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/arm32/lib/findbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/findbit.S
> rename to xen/arch/arm/arm32/lib/findbit.S
> diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/arm32/lib/lib1funcs.S
> similarity index 100%
> rename from xen/arch/arm/lib/lib1funcs.S
> rename to xen/arch/arm/arm32/lib/lib1funcs.S
> diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/arm32/lib/lshrdi3.S
> similarity index 100%
> rename from xen/arch/arm/lib/lshrdi3.S
> rename to xen/arch/arm/arm32/lib/lshrdi3.S
> diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/arm32/lib/memcpy.S
> similarity index 100%
> rename from xen/arch/arm/lib/memcpy.S
> rename to xen/arch/arm/arm32/lib/memcpy.S
> diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/arm32/lib/memmove.S
> similarity index 100%
> rename from xen/arch/arm/lib/memmove.S
> rename to xen/arch/arm/arm32/lib/memmove.S
> diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/arm32/lib/memset.S
> similarity index 100%
> rename from xen/arch/arm/lib/memset.S
> rename to xen/arch/arm/arm32/lib/memset.S
> diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/arm32/lib/memzero.S
> similarity index 100%
> rename from xen/arch/arm/lib/memzero.S
> rename to xen/arch/arm/arm32/lib/memzero.S
> diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/arm32/lib/setbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/setbit.S
> rename to xen/arch/arm/arm32/lib/setbit.S
> diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/arm32/lib/testchangebit.S
> similarity index 100%
> rename from xen/arch/arm/lib/testchangebit.S
> rename to xen/arch/arm/arm32/lib/testchangebit.S
> diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/arm32/lib/testclearbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/testclearbit.S
> rename to xen/arch/arm/arm32/lib/testclearbit.S
> diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/arm32/lib/testsetbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/testsetbit.S
> rename to xen/arch/arm/arm32/lib/testsetbit.S
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/arm32/mode_switch.S
> similarity index 99%
> rename from xen/arch/arm/mode_switch.S
> rename to xen/arch/arm/arm32/mode_switch.S
> index 7c3b357..d550c33 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/arm32/mode_switch.S
> @@ -21,7 +21,7 @@
>  #include <asm/page.h>
>  #include <asm/platform_vexpress.h>
>  #include <asm/asm_defns.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
> 
>  /* XXX: Versatile Express specific code */
> diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/arm32/proc-ca15.S
> similarity index 100%
> rename from xen/arch/arm/proc-ca15.S
> rename to xen/arch/arm/arm32/proc-ca15.S
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index c5292c7..0875045 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -12,7 +12,7 @@
>  #include <asm/p2m.h>
>  #include <asm/irq.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
>  #include "vtimer.h"
>  #include "vpl011.h"
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index a9e7f43..aac92b3 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -11,7 +11,7 @@
>  #include <xen/libfdt/libfdt.h>
>  #include <xen/guest_access.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
>  #include "kernel.h"
> 
>  static unsigned int __initdata opt_dom0_max_vcpus;
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 0c6fab9..41824c9 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -29,7 +29,7 @@
>  #include <asm/p2m.h>
>  #include <asm/domain.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  /* Access to the GIC Distributor registers through the fixmap */
>  #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 72e83e6..c141d81 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -25,7 +25,7 @@
>  #include <xen/errno.h>
>  #include <xen/sched.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  static void enable_none(struct irq_desc *irq) { }
>  static unsigned int startup_none(struct irq_desc *irq) { return 0; }
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7ae4515..852f0d8 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -4,7 +4,7 @@
>  #include <xen/errno.h>
>  #include <xen/domain_page.h>
>  #include <asm/flushtlb.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  void dump_p2m_lookup(struct domain *d, paddr_t addr)
>  {
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 2076724..8f85ae6 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -39,7 +39,7 @@
>  #include <asm/setup.h>
>  #include <asm/vfp.h>
>  #include <asm/early_printk.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  static __used void init_done(void)
>  {
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 6555ac6..7b6ffa0 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -29,7 +29,7 @@
>  #include <xen/timer.h>
>  #include <xen/irq.h>
>  #include <asm/vfp.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  cpumask_t cpu_online_map;
>  EXPORT_SYMBOL(cpu_online_map);
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 19e2081..d01ff6d 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -35,7 +35,7 @@
> 
>  #include "io.h"
>  #include "vtimer.h"
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  /* The base of the stack must always be double-word aligned, which means
>   * that both the kernel half of struct cpu_user_regs (which is pushed in
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 3f7e757..7d1a5ad 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -27,7 +27,7 @@
>  #include <asm/current.h>
> 
>  #include "io.h"
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  #define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000
> 
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 490b021..1c45f4a 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -21,7 +21,7 @@
>  #include <xen/lib.h>
>  #include <xen/timer.h>
>  #include <xen/sched.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  extern s_time_t ticks_to_ns(uint64_t ticks);
>  extern uint64_t ns_to_ticks(s_time_t ns);
> diff --git a/xen/arch/arm/gic.h b/xen/include/asm-arm/gic.h
> similarity index 98%
> rename from xen/arch/arm/gic.h
> rename to xen/include/asm-arm/gic.h
> index 1bf1b02..bf30fbd 100644
> --- a/xen/arch/arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -1,6 +1,4 @@
>  /*
> - * xen/arch/arm/gic.h
> - *
>   * ARM Generic Interrupt Controller support
>   *
>   * Tim Deegan <tim@xen.org>
> @@ -17,8 +15,8 @@
>   * GNU General Public License for more details.
>   */
> 
> -#ifndef __ARCH_ARM_GIC_H__
> -#define __ARCH_ARM_GIC_H__
> +#ifndef __ASM_ARM_GIC_H__
> +#define __ASM_ARM_GIC_H__
> 
>  #define GICD_CTLR       (0x000/4)
>  #define GICD_TYPER      (0x004/4)
> --
> 1.7.2.5
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
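As context for the xen/Rules.mk hunk in the patch above: the extended sed expression maps both the new arm32 subarch and the anticipated arm64 subarch onto the common `arm` architecture directory, while the existing x86 collapsing is unchanged. A minimal standalone sketch of that mapping (GNU sed is assumed, since the `\(32\|64\)` alternation is a GNU BRE extension):

```shell
# Mirror the TARGET_ARCH derivation from xen/Rules.mk in the patch above:
# arm32 and arm64 both collapse to the common "arm" arch directory,
# while x86 variants keep collapsing to "x86" as before.
for XEN_TARGET_ARCH in x86_32 x86_64 arm32 arm64; do
    TARGET_ARCH=$(echo "$XEN_TARGET_ARCH" | \
        sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g')
    echo "$XEN_TARGET_ARCH -> $TARGET_ARCH"
done
# -> x86_32 -> x86
# -> x86_64 -> x86
# -> arm32 -> arm
# -> arm64 -> arm
```

This is why the build can refer to $(TARGET_ARCH) for shared xen/arch/arm code and $(TARGET_SUBARCH) for the arm32-only pieces such as head.S and the lib/ routines.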

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:17:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0nP-0003P0-Ap; Tue, 18 Dec 2012 17:17:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl0nN-0003Ov-I9
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:17:33 +0000
Received: from [85.158.143.35:23878] by server-3.bemta-4.messagelabs.com id
	57/73-18211-C25A0D05; Tue, 18 Dec 2012 17:17:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355851045!14236864!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1706 invoked from network); 18 Dec 2012 17:17:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="232557"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 17:17:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 17:17:23 +0000
Message-ID: <1355851042.14620.280.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Tue, 18 Dec 2012 17:17:22 +0000
In-Reply-To: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping?

On Tue, 2012-12-04 at 15:57 +0000, Ian Campbell wrote:
> Eventually we will have arm64 as well.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  Config.mk                                    |    4 +++-
>  config/{arm.mk => arm32.mk}                  |    0
>  xen/Rules.mk                                 |    2 +-
>  xen/arch/arm/Makefile                        |    9 +++------
>  xen/arch/arm/Rules.mk                        |   13 ++++++++-----
>  xen/arch/arm/arm32/Makefile                  |    5 +++++
>  xen/arch/arm/{ => arm32}/asm-offsets.c       |    0
>  xen/arch/arm/{ => arm32}/entry.S             |    0
>  xen/arch/arm/{ => arm32}/head.S              |    0
>  xen/arch/arm/{ => arm32}/lib/Makefile        |    0
>  xen/arch/arm/{ => arm32}/lib/assembler.h     |    0
>  xen/arch/arm/{ => arm32}/lib/bitops.h        |    0
>  xen/arch/arm/{ => arm32}/lib/changebit.S     |    0
>  xen/arch/arm/{ => arm32}/lib/clearbit.S      |    0
>  xen/arch/arm/{ => arm32}/lib/copy_template.S |    0
>  xen/arch/arm/{ => arm32}/lib/div64.S         |    0
>  xen/arch/arm/{ => arm32}/lib/findbit.S       |    0
>  xen/arch/arm/{ => arm32}/lib/lib1funcs.S     |    0
>  xen/arch/arm/{ => arm32}/lib/lshrdi3.S       |    0
>  xen/arch/arm/{ => arm32}/lib/memcpy.S        |    0
>  xen/arch/arm/{ => arm32}/lib/memmove.S       |    0
>  xen/arch/arm/{ => arm32}/lib/memset.S        |    0
>  xen/arch/arm/{ => arm32}/lib/memzero.S       |    0
>  xen/arch/arm/{ => arm32}/lib/setbit.S        |    0
>  xen/arch/arm/{ => arm32}/lib/testchangebit.S |    0
>  xen/arch/arm/{ => arm32}/lib/testclearbit.S  |    0
>  xen/arch/arm/{ => arm32}/lib/testsetbit.S    |    0
>  xen/arch/arm/{ => arm32}/mode_switch.S       |    2 +-
>  xen/arch/arm/{ => arm32}/proc-ca15.S         |    0
>  xen/arch/arm/domain.c                        |    2 +-
>  xen/arch/arm/domain_build.c                  |    2 +-
>  xen/arch/arm/gic.c                           |    2 +-
>  xen/arch/arm/irq.c                           |    2 +-
>  xen/arch/arm/p2m.c                           |    2 +-
>  xen/arch/arm/setup.c                         |    2 +-
>  xen/arch/arm/smpboot.c                       |    2 +-
>  xen/arch/arm/traps.c                         |    2 +-
>  xen/arch/arm/vgic.c                          |    2 +-
>  xen/arch/arm/vtimer.c                        |    2 +-
>  xen/{arch/arm => include/asm-arm}/gic.h      |    6 ++----
>  40 files changed, 33 insertions(+), 28 deletions(-)
>  rename config/{arm.mk => arm32.mk} (100%)
>  create mode 100644 xen/arch/arm/arm32/Makefile
>  rename xen/arch/arm/{ => arm32}/asm-offsets.c (100%)
>  rename xen/arch/arm/{ => arm32}/entry.S (100%)
>  rename xen/arch/arm/{ => arm32}/head.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/Makefile (100%)
>  rename xen/arch/arm/{ => arm32}/lib/assembler.h (100%)
>  rename xen/arch/arm/{ => arm32}/lib/bitops.h (100%)
>  rename xen/arch/arm/{ => arm32}/lib/changebit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/clearbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/copy_template.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/div64.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/findbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/lib1funcs.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/lshrdi3.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memcpy.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memmove.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memset.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/memzero.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/setbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/testchangebit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/testclearbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/lib/testsetbit.S (100%)
>  rename xen/arch/arm/{ => arm32}/mode_switch.S (99%)
>  rename xen/arch/arm/{ => arm32}/proc-ca15.S (100%)
>  rename xen/{arch/arm => include/asm-arm}/gic.h (98%)
> 
> diff --git a/Config.mk b/Config.mk
> index d99b9a1..8e35886 100644
> --- a/Config.mk
> +++ b/Config.mk
> @@ -14,7 +14,9 @@ debug ?= y
>  debug_symbols ?= $(debug)
> 
>  XEN_COMPILE_ARCH    ?= $(shell uname -m | sed -e s/i.86/x86_32/ \
> -                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ -e s/arm.*/arm/)
> +                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ \
> +                         -e s/armv7.*/arm32/)
> +
>  XEN_TARGET_ARCH     ?= $(XEN_COMPILE_ARCH)
>  XEN_OS              ?= $(shell uname -s)
> 
> diff --git a/config/arm.mk b/config/arm32.mk
> similarity index 100%
> rename from config/arm.mk
> rename to config/arm32.mk
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index f7cb8b2..c2db449 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -28,7 +28,7 @@ endif
>  # Set ARCH/SUBARCH appropriately.
>  override TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>  override TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> -                              sed -e 's/x86.*/x86/')
> +                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> 
>  TARGET := $(BASEDIR)/xen
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index fd92b72..1b33767 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -1,8 +1,7 @@
> -subdir-y += lib
> +subdir-$(arm32) += arm32
> 
>  obj-y += dummy.o
>  obj-y += early_printk.o
> -obj-y += entry.o
>  obj-y += domain.o
>  obj-y += domctl.o
>  obj-y += sysctl.o
> @@ -12,8 +11,6 @@ obj-y += io.o
>  obj-y += irq.o
>  obj-y += kernel.o
>  obj-y += mm.o
> -obj-y += mode_switch.o
> -obj-y += proc-ca15.o
>  obj-y += p2m.o
>  obj-y += percpu.o
>  obj-y += guestcopy.o
> @@ -36,7 +33,7 @@ obj-y += dtb.o
>  AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
>  endif
> 
> -ALL_OBJS := head.o $(ALL_OBJS)
> +ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
> 
>  $(TARGET): $(TARGET)-syms $(TARGET).bin
>         # XXX: VE model loads by VMA so instead of
> @@ -81,7 +78,7 @@ $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
>             $(@D)/.$(@F).1.o -o $@
>         rm -f $(@D)/.$(@F).[0-9]*
> 
> -asm-offsets.s: asm-offsets.c
> +asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
>         $(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $<
> 
>  xen.lds: xen.lds.S
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index a45c654..f83bfee 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -12,16 +12,19 @@ CFLAGS += -fno-builtin -fno-common -Wredundant-decls
>  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
>  CFLAGS += -I$(BASEDIR)/include
> 
> -# Prevent floating-point variables from creeping into Xen.
> -CFLAGS += -msoft-float
> -
>  $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
>  $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
> 
>  arm := y
> 
> +ifeq ($(TARGET_SUBARCH),arm32)
> +# Prevent floating-point variables from creeping into Xen.
> +CFLAGS += -msoft-float
> +CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> +arm32 := y
> +arm64 := n
> +endif
> +
>  ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
>  CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
>  endif
> -
> -CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
> new file mode 100644
> index 0000000..20931fa
> --- /dev/null
> +++ b/xen/arch/arm/arm32/Makefile
> @@ -0,0 +1,5 @@
> +subdir-y += lib
> +
> +obj-y += entry.o
> +obj-y += mode_switch.o
> +obj-y += proc-ca15.o
> diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
> similarity index 100%
> rename from xen/arch/arm/asm-offsets.c
> rename to xen/arch/arm/arm32/asm-offsets.c
> diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/arm32/entry.S
> similarity index 100%
> rename from xen/arch/arm/entry.S
> rename to xen/arch/arm/arm32/entry.S
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/arm32/head.S
> similarity index 100%
> rename from xen/arch/arm/head.S
> rename to xen/arch/arm/arm32/head.S
> diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/arm32/lib/Makefile
> similarity index 100%
> rename from xen/arch/arm/lib/Makefile
> rename to xen/arch/arm/arm32/lib/Makefile
> diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/arm32/lib/assembler.h
> similarity index 100%
> rename from xen/arch/arm/lib/assembler.h
> rename to xen/arch/arm/arm32/lib/assembler.h
> diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/arm32/lib/bitops.h
> similarity index 100%
> rename from xen/arch/arm/lib/bitops.h
> rename to xen/arch/arm/arm32/lib/bitops.h
> diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/arm32/lib/changebit.S
> similarity index 100%
> rename from xen/arch/arm/lib/changebit.S
> rename to xen/arch/arm/arm32/lib/changebit.S
> diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/arm32/lib/clearbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/clearbit.S
> rename to xen/arch/arm/arm32/lib/clearbit.S
> diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/arm32/lib/copy_template.S
> similarity index 100%
> rename from xen/arch/arm/lib/copy_template.S
> rename to xen/arch/arm/arm32/lib/copy_template.S
> diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/arm32/lib/div64.S
> similarity index 100%
> rename from xen/arch/arm/lib/div64.S
> rename to xen/arch/arm/arm32/lib/div64.S
> diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/arm32/lib/findbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/findbit.S
> rename to xen/arch/arm/arm32/lib/findbit.S
> diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/arm32/lib/lib1funcs.S
> similarity index 100%
> rename from xen/arch/arm/lib/lib1funcs.S
> rename to xen/arch/arm/arm32/lib/lib1funcs.S
> diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/arm32/lib/lshrdi3.S
> similarity index 100%
> rename from xen/arch/arm/lib/lshrdi3.S
> rename to xen/arch/arm/arm32/lib/lshrdi3.S
> diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/arm32/lib/memcpy.S
> similarity index 100%
> rename from xen/arch/arm/lib/memcpy.S
> rename to xen/arch/arm/arm32/lib/memcpy.S
> diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/arm32/lib/memmove.S
> similarity index 100%
> rename from xen/arch/arm/lib/memmove.S
> rename to xen/arch/arm/arm32/lib/memmove.S
> diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/arm32/lib/memset.S
> similarity index 100%
> rename from xen/arch/arm/lib/memset.S
> rename to xen/arch/arm/arm32/lib/memset.S
> diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/arm32/lib/memzero.S
> similarity index 100%
> rename from xen/arch/arm/lib/memzero.S
> rename to xen/arch/arm/arm32/lib/memzero.S
> diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/arm32/lib/setbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/setbit.S
> rename to xen/arch/arm/arm32/lib/setbit.S
> diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/arm32/lib/testchangebit.S
> similarity index 100%
> rename from xen/arch/arm/lib/testchangebit.S
> rename to xen/arch/arm/arm32/lib/testchangebit.S
> diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/arm32/lib/testclearbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/testclearbit.S
> rename to xen/arch/arm/arm32/lib/testclearbit.S
> diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/arm32/lib/testsetbit.S
> similarity index 100%
> rename from xen/arch/arm/lib/testsetbit.S
> rename to xen/arch/arm/arm32/lib/testsetbit.S
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/arm32/mode_switch.S
> similarity index 99%
> rename from xen/arch/arm/mode_switch.S
> rename to xen/arch/arm/arm32/mode_switch.S
> index 7c3b357..d550c33 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/arm32/mode_switch.S
> @@ -21,7 +21,7 @@
>  #include <asm/page.h>
>  #include <asm/platform_vexpress.h>
>  #include <asm/asm_defns.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
> 
>  /* XXX: Versatile Express specific code */
> diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/arm32/proc-ca15.S
> similarity index 100%
> rename from xen/arch/arm/proc-ca15.S
> rename to xen/arch/arm/arm32/proc-ca15.S
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index c5292c7..0875045 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -12,7 +12,7 @@
>  #include <asm/p2m.h>
>  #include <asm/irq.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
>  #include "vtimer.h"
>  #include "vpl011.h"
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index a9e7f43..aac92b3 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -11,7 +11,7 @@
>  #include <xen/libfdt/libfdt.h>
>  #include <xen/guest_access.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
>  #include "kernel.h"
> 
>  static unsigned int __initdata opt_dom0_max_vcpus;
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 0c6fab9..41824c9 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -29,7 +29,7 @@
>  #include <asm/p2m.h>
>  #include <asm/domain.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  /* Access to the GIC Distributor registers through the fixmap */
>  #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 72e83e6..c141d81 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -25,7 +25,7 @@
>  #include <xen/errno.h>
>  #include <xen/sched.h>
> 
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  static void enable_none(struct irq_desc *irq) { }
>  static unsigned int startup_none(struct irq_desc *irq) { return 0; }
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7ae4515..852f0d8 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -4,7 +4,7 @@
>  #include <xen/errno.h>
>  #include <xen/domain_page.h>
>  #include <asm/flushtlb.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  void dump_p2m_lookup(struct domain *d, paddr_t addr)
>  {
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 2076724..8f85ae6 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -39,7 +39,7 @@
>  #include <asm/setup.h>
>  #include <asm/vfp.h>
>  #include <asm/early_printk.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  static __used void init_done(void)
>  {
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 6555ac6..7b6ffa0 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -29,7 +29,7 @@
>  #include <xen/timer.h>
>  #include <xen/irq.h>
>  #include <asm/vfp.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  cpumask_t cpu_online_map;
>  EXPORT_SYMBOL(cpu_online_map);
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 19e2081..d01ff6d 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -35,7 +35,7 @@
> 
>  #include "io.h"
>  #include "vtimer.h"
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  /* The base of the stack must always be double-word aligned, which means
>   * that both the kernel half of struct cpu_user_regs (which is pushed in
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 3f7e757..7d1a5ad 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -27,7 +27,7 @@
>  #include <asm/current.h>
> 
>  #include "io.h"
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  #define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000
> 
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 490b021..1c45f4a 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -21,7 +21,7 @@
>  #include <xen/lib.h>
>  #include <xen/timer.h>
>  #include <xen/sched.h>
> -#include "gic.h"
> +#include <asm/gic.h>
> 
>  extern s_time_t ticks_to_ns(uint64_t ticks);
>  extern uint64_t ns_to_ticks(s_time_t ns);
> diff --git a/xen/arch/arm/gic.h b/xen/include/asm-arm/gic.h
> similarity index 98%
> rename from xen/arch/arm/gic.h
> rename to xen/include/asm-arm/gic.h
> index 1bf1b02..bf30fbd 100644
> --- a/xen/arch/arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -1,6 +1,4 @@
>  /*
> - * xen/arch/arm/gic.h
> - *
>   * ARM Generic Interrupt Controller support
>   *
>   * Tim Deegan <tim@xen.org>
> @@ -17,8 +15,8 @@
>   * GNU General Public License for more details.
>   */
> 
> -#ifndef __ARCH_ARM_GIC_H__
> -#define __ARCH_ARM_GIC_H__
> +#ifndef __ASM_ARM_GIC_H__
> +#define __ASM_ARM_GIC_H__
> 
>  #define GICD_CTLR       (0x000/4)
>  #define GICD_TYPER      (0x004/4)
> --
> 1.7.2.5
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:22:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0sI-0003gk-9e; Tue, 18 Dec 2012 17:22:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wdauchy@gmail.com>) id 1Tl0sG-0003gd-MI
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:22:36 +0000
Received: from [193.109.254.147:23870] by server-11.bemta-14.messagelabs.com
	id 30/3E-02659-C56A0D05; Tue, 18 Dec 2012 17:22:36 +0000
X-Env-Sender: wdauchy@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1355851355!10826873!1
X-Originating-IP: [209.85.214.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9009 invoked from network); 18 Dec 2012 17:22:35 -0000
Received: from mail-bk0-f54.google.com (HELO mail-bk0-f54.google.com)
	(209.85.214.54)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:22:35 -0000
Received: by mail-bk0-f54.google.com with SMTP id je9so489618bkc.41
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 09:22:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=KrgaY+mC1X4d4c3aFXPtSmqSptwwifpM2K/4C5hqzNc=;
	b=ZPkIqiRGeNbPpVFgo+hqiQ6tg49jJuIQoOifIR40fFAuU5gkN3Y6vMcdk5MHeBuxVP
	ZoJC4p12fNiZWCGpRj1uuyNjfejcS5qsnWJulmcs/r3Tmea2B/uGfO6vs9ClDLB+J7Km
	FuPT0/b4BdC2aWWUMy4oXLToN0/hhfu51yp5scEMAiXCsP8ZNyC4r+WZ8t/AGe9fu/HH
	lNQp66KqGxTUmMbMw/OYOj0mAJ5RrEruH2C1kDEOoEtg7J/wWVewsgEZKf1+3zMiuLN2
	RCELAbcuR0gb4wh++ErwLgvQdVuZHDEZQxZx/DSxofN+1IZTHEKmbGictLq9cLXHo7uh
	JtWw==
Received: by 10.204.149.12 with SMTP id r12mr1099631bkv.30.1355851354712; Tue,
	18 Dec 2012 09:22:34 -0800 (PST)
MIME-Version: 1.0
Received: by 10.204.13.71 with HTTP; Tue, 18 Dec 2012 09:22:13 -0800 (PST)
In-Reply-To: <50D0823B02000078000B10BD@nat28.tlf.novell.com>
References: <50D0823B02000078000B10BD@nat28.tlf.novell.com>
From: William Dauchy <wdauchy@gmail.com>
Date: Tue, 18 Dec 2012 18:22:13 +0100
Message-ID: <CAJ75kXZyfAf_xKdo6G7xW2Gjd2c76_MSFqR6K1WzX=dsVcuPDQ@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Lars Kurth <lars.kurth@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [ANNOUNCE] Xen 4.1.4 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On Tue, Dec 18, 2012 at 2:48 PM, Jan Beulich <JBeulich@suse.com> wrote:
> I am pleased to announce the release of Xen 4.1.4. This is
> available immediately from its mercurial repository:
> http://xenbits.xen.org/xen-4.1-testing.hg (tag RELEASE-4.1.4)

I can't find the RELEASE-4.1.4 tag at
http://xenbits.xen.org/hg/xen-4.1-testing.hg/tags;
the last tag there is 4.1.4-rc2 at the moment.

Regards,

--
William

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:23:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0si-0003kB-Mj; Tue, 18 Dec 2012 17:23:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tl0sh-0003jw-8u
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:23:03 +0000
Received: from [85.158.138.51:58989] by server-7.bemta-3.messagelabs.com id
	DE/E2-23008-676A0D05; Tue, 18 Dec 2012 17:23:02 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1355851381!19500413!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26802 invoked from network); 18 Dec 2012 17:23:01 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 17:23:01 -0000
Received: (qmail 20825 invoked from network); 18 Dec 2012 19:23:00 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	18 Dec 2012 19:23:00 +0200
Message-ID: <50D0A6B1.30702@gmail.com>
Date: Tue, 18 Dec 2012 19:24:01 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355850255.14620.277.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234426,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS;
	NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled], URL: [Enabled], URI DNSBL:
	[Disabled], SQMD: [Enabled, Hits: none, MD5:
	5a16fb50404be5890d7bb539defb7e9f.fuzzy.fzrbl.org], RTDA: [Enabled,
	Hit: No, Details: v1.4.6; Id: 2m1g3t9.17emt64bi.h1c1], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44425
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for the reply!

> Do you have a user in mind for this new functionality?

Yes: a userspace application that needs to look at this information
to decide whether a set of pages is interesting for monitoring.

> This version seems to do a lot less than get_mtrr_type() in the
> hypervisor. Is that deliberate? Why isn't the fixed mtrr slot and
> overlap handling required here?

It does do less, and that's somewhat deliberate :) Ideally it should
do everything that get_mtrr_type() does. It worked with my initial
test addresses, but clearly more is required if it is to provide the
full functionality of get_mtrr_type(). The code currently only iterates
through the var_ranges, not the fixed array, etc.
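For reference, the variable-range part of the lookup (the only part the
current patch covers) goes roughly like this. This is not the patch code,
just a minimal sketch of the Intel SDM matching and precedence rules, with
hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

#define MTRR_TYPE_UC 0
#define MTRR_TYPE_WT 4
#define MTRR_TYPE_WB 6
#define MTRR_PHYSMASK_VALID (1ULL << 11)

/* One variable-range MTRR pair (PHYSBASE/PHYSMASK MSR contents). */
struct var_range {
    uint64_t base;  /* low byte holds the memory type */
    uint64_t mask;  /* bit 11 is the valid bit */
};

/*
 * Return the effective type for addr from the variable ranges only,
 * or def_type if no valid range matches.  Fixed-range MTRRs and the
 * MTRRdefType enable bits are deliberately out of scope here.
 */
int var_range_type(const struct var_range *r, int nranges,
                   uint64_t addr, int def_type)
{
    int type = -1;

    for (int i = 0; i < nranges; i++) {
        if (!(r[i].mask & MTRR_PHYSMASK_VALID))
            continue;
        /* An address matches when addr AND mask == base AND mask. */
        uint64_t mask = r[i].mask & ~0xfffULL;
        if ((addr & mask) != (r[i].base & mask))
            continue;
        int t = r[i].base & 0xff;
        if (t == MTRR_TYPE_UC)
            return MTRR_TYPE_UC;      /* UC wins over everything */
        if (type == -1)
            type = t;
        else if ((type == MTRR_TYPE_WT && t == MTRR_TYPE_WB) ||
                 (type == MTRR_TYPE_WB && t == MTRR_TYPE_WT))
            type = MTRR_TYPE_WT;      /* WT wins over WB on overlap */
    }
    return type == -1 ? def_type : type;
}
```

A caller would pass the var_ranges pulled out of hvm_hw_mtrr; the overlap
handling above (UC dominates, WT beats WB) is exactly the part that needs
a clear mtrr_state/hvm_hw_mtrr correspondence to do properly.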

The overlap handling isn't there because there doesn't seem to be a 
clear correspondence between struct mtrr_state and struct hvm_hw_mtrr. 
I implemented as much of the get_mtrr_type() logic as was obvious using 
what mapping was clear between them.

Is having the full functionality in libxc feasible?

>>   /**
>> + * This function returns information about the MTRR type of
>> + * a given guest physical address/
>
>                                     ^ you mean . not / I think.

You're right, sorry for the typo.

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:23:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:23:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0t8-0003nE-44; Tue, 18 Dec 2012 17:23:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tl0t6-0003mv-K6
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 17:23:28 +0000
Received: from [85.158.143.99:57154] by server-1.bemta-4.messagelabs.com id
	40/9F-28401-F86A0D05; Tue, 18 Dec 2012 17:23:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355851407!23188104!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15164 invoked from network); 18 Dec 2012 17:23:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:23:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="232669"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 17:23:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 17:23:23 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tl0t1-0008KF-R5;
	Tue, 18 Dec 2012 17:23:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tl0t1-00082d-Ct;
	Tue, 18 Dec 2012 17:23:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14779-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 17:23:23 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14779: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14779 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14779/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)       broken REGR. vs. 14670

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14670
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14670

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                881c0a027c495dd35992346176a40d39a7666fb9
baseline version:
 linux                4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 881c0a027c495dd35992346176a40d39a7666fb9
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Mon Dec 17 10:56:46 2012 -0800

    Linux 3.0.57

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


    Linux 3.0.57

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:26:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0wK-00047n-Oj; Tue, 18 Dec 2012 17:26:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tl0wJ-00047g-Mq
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:26:47 +0000
Received: from [193.109.254.147:12695] by server-15.bemta-14.messagelabs.com
	id FB/6A-05116-757A0D05; Tue, 18 Dec 2012 17:26:47 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355851605!1770910!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20173 invoked from network); 18 Dec 2012 17:26:45 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-14.tower-27.messagelabs.com with SMTP;
	18 Dec 2012 17:26:45 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id BC467C5618D;
	Tue, 18 Dec 2012 17:26:34 +0000 (GMT)
Date: Tue, 18 Dec 2012 17:26:33 +0000
From: Alex Bligh <alex@alex.org.uk>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <A2D92E35B0C55903594CA2D7@Ximines.local>
In-Reply-To: <50D09C67.1010703@eu.citrix.com>
References: <CAFLBxZbXS1p6+e2N+0sTb=Fa4LHCafYKLLhuyKtkofiXQcMBaA@mail.gmail.com>
	<39A8C47B5D7DD22A52A3DA51@Ximines.local>
	<CAFLBxZY3zXaNQ93zq8YuKQm=+Hc7SoN2uVsjgJ1kZb_=N5CszA@mail.gmail.com>
	<008ECD032A8603F73C448ABD@Ximines.local>
	<50D09C67.1010703@eu.citrix.com>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Alex Bligh <alex@alex.org.uk>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 development update, 15 Oct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George,

--On 18 December 2012 16:40:07 +0000 George Dunlap 
<george.dunlap@eu.citrix.com> wrote:

> So it sounds like in this case you're talking about moving the disk to a
> different storage device connected to the same host, leaving the VM
> running on the same host.

Correct, though as you suggest below, to achieve the effect you
describe, what we do is ensure that disk access for both VMs is
maintained between the start and finish of the two migrations (in our
case via NFS rather than sshfs :-) ). So it gives the illusion of
working even when the migration is local disk to local disk.
Generalising this is left as an exercise for the reader.

Giving xl access to the qmp snapshot_blkdev_sync command and qemu-img
rebase would no doubt be useful to those using xl, though.
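
For illustration, the snapshot-then-rebase workflow being discussed can be sketched as the commands a tool like xl would have to issue. This is a hedged sketch, not existing xl functionality: the QMP wire command for a live external snapshot is `blockdev-snapshot-sync` (the HMP spelling is `snapshot_blkdev`), and the device name and file paths below are hypothetical. Actually delivering the QMP command over the monitor socket is omitted.

```python
import json


def qmp_command(execute, **arguments):
    # Build a QMP command dict; "arguments" is included only when non-empty.
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return cmd


# Step 0: QMP requires a capabilities handshake before other commands.
handshake = qmp_command("qmp_capabilities")

# Step 1: take a live external snapshot; the old image becomes a
# read-only backing file and can then be copied to the new storage.
snapshot = qmp_command(
    "blockdev-snapshot-sync",              # QMP counterpart of snapshot_blkdev
    device="ide0-hd0",                     # hypothetical device name
    snapshot_file="/new-storage/overlay.qcow2",
    format="qcow2",
)

# Step 2: once the backing file has been copied, re-point the overlay
# at its new location (run as an external command).
rebase_cmd = ["qemu-img", "rebase", "-b", "/new-storage/base.img",
              "/new-storage/overlay.qcow2"]

print(json.dumps(snapshot))
```
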

Alex

> What I was talking about in this was migrating the VM and storage
> together to a new host.  People who want this typically have all VMs on
> local storage, so (if I'm understanding you right) the snapshot trick
> won't work, because host A (where it's running) can't directly access
> host B's disk (to which we want to migrate it).
>
> Although I suppose one could always hack something together with sshfs or
> something. :-)



-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:28:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl0yG-0004GX-90; Tue, 18 Dec 2012 17:28:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tl0yE-0004GO-D9
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:28:46 +0000
Received: from [85.158.139.211:54763] by server-10.bemta-5.messagelabs.com id
	D8/37-13383-DC7A0D05; Tue, 18 Dec 2012 17:28:45 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355851723!20192043!1
X-Originating-IP: [209.85.223.169]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25282 invoked from network); 18 Dec 2012 17:28:44 -0000
Received: from mail-ie0-f169.google.com (HELO mail-ie0-f169.google.com)
	(209.85.223.169)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:28:44 -0000
Received: by mail-ie0-f169.google.com with SMTP id c14so1350168ieb.0
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 09:28:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=VfZSmxjxtOysC7yKrnacurnFbo2fmmcxap64KcGtqZA=;
	b=rEDRRXWy+WLJs+9QrBxRklRnWvWweRL2G0XHmIWsTSROqZ2jw7y/7aveTu67hS73ih
	CVRUyeGSKG4Gg58Q8UOAhXnI0Hg+OZ/0Aiq+hbDPMJFS+vB6REQr1rEFsdIAfFMh/Jbx
	WBIMJIOIgqHOHgTt4jKKc3PlXMsEwq77EhQwGQZ9Z/Ts/C3CZ3Q3F6ULYETFNXcWbR46
	RR6HGjQZmtxqEXKKabh0IdgWqHjn4RDqzD+NppM1Xcu72lmlhB5pro4/FPz9Zu/Bc/0F
	vdqza5eyy/03sk4wWz+bWGcUpQ2Js6fPLSZU1lMgiAzmy6owQr2wkprmhsxn1+0U5QFi
	cpeQ==
MIME-Version: 1.0
Received: by 10.50.185.230 with SMTP id ff6mr3668863igc.7.1355851723056; Tue,
	18 Dec 2012 09:28:43 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Tue, 18 Dec 2012 09:28:42 -0800 (PST)
Date: Wed, 19 Dec 2012 01:28:42 +0800
X-Google-Sender-Auth: vqAjqmOJFI9sP1J9j7WANwQzia4
Message-ID: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

I recently tried to play some 3D games on my Linux guest.
The game starts without problems, but it freezes the entire system
after some time (a minute or so?).
By that I mean both the host and the domU become unresponsive.
SSH freezes, and I had to shut the machine down with the power button.

I did not find anything obvious in the host log, but in the guest
I can find this:

Dec 18 20:28:38 debvm kernel: [    0.899860] resource map sanity check
conflict: 0xfeff5018 0xfeff7017 0xfeff7000 0xffffffff reserved
Dec 18 20:28:38 debvm kernel: [    0.899862] ------------[ cut here
]------------
Dec 18 20:28:38 debvm kernel: [    0.899869] WARNING: at
arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2c4/0x33c()
Dec 18 20:28:38 debvm kernel: [    0.899870] Hardware name: HVM domU
Dec 18 20:28:38 debvm kernel: [    0.899872] Info: mapping multiple
BARs. Your kernel is fine.
Dec 18 20:28:38 debvm kernel: [    0.899873] Modules linked in:
Dec 18 20:28:38 debvm kernel: [    0.899878] Pid: 1, comm: swapper/0
Not tainted 3.6.9 #4
Dec 18 20:28:38 debvm kernel: [    0.899892] Call Trace:
Dec 18 20:28:38 debvm kernel: [    0.899896]  [<ffffffff8103d194>] ?
warn_slowpath_common+0x76/0x8a
Dec 18 20:28:38 debvm kernel: [    0.899898]  [<ffffffff8103d240>] ?
warn_slowpath_fmt+0x45/0x4a
Dec 18 20:28:38 debvm kernel: [    0.899900]  [<ffffffff81032a6c>] ?
__ioremap_caller+0x2c4/0x33c
Dec 18 20:28:38 debvm kernel: [    0.899902]  [<ffffffff812c3be3>] ?
intel_opregion_setup+0x9c/0x201
Dec 18 20:28:38 debvm kernel: [    0.899904]  [<ffffffff812bcb75>] ?
intel_setup_gmbus+0x175/0x19d
Dec 18 20:28:38 debvm kernel: [    0.899907]  [<ffffffff8128a37a>] ?
i915_driver_load+0x548/0x90d
Dec 18 20:28:38 debvm kernel: [    0.899910]  [<ffffffff812ff804>] ?
setup_hpet_msi_remapped+0x20/0x20
Dec 18 20:28:38 debvm kernel: [    0.899912]  [<ffffffff81272706>] ?
drm_get_pci_dev+0x152/0x259
Dec 18 20:28:38 debvm kernel: [    0.899915]  [<ffffffff813d4883>] ?
_raw_spin_lock_irqsave+0x21/0x45
Dec 18 20:28:38 debvm kernel: [    0.899918]  [<ffffffff811d9ecc>] ?
local_pci_probe+0x5a/0xa0
Dec 18 20:28:38 debvm kernel: [    0.899920]  [<ffffffff811d9fcf>] ?
pci_device_probe+0xbd/0xe7
Dec 18 20:28:38 debvm kernel: [    0.899922]  [<ffffffff812cd887>] ?
driver_probe_device+0x1b0/0x1b0
Dec 18 20:28:38 debvm kernel: [    0.899923]  [<ffffffff812cd887>] ?
driver_probe_device+0x1b0/0x1b0
Dec 18 20:28:38 debvm kernel: [    0.899925]  [<ffffffff812cd769>] ?
driver_probe_device+0x92/0x1b0
Dec 18 20:28:38 debvm kernel: [    0.899926]  [<ffffffff812cd8da>] ?
__driver_attach+0x53/0x73
Dec 18 20:28:38 debvm kernel: [    0.899928]  [<ffffffff812cc06f>] ?
bus_for_each_dev+0x46/0x77
Dec 18 20:28:38 debvm kernel: [    0.899930]  [<ffffffff812ccf8f>] ?
bus_add_driver+0xd5/0x1f4
Dec 18 20:28:38 debvm kernel: [    0.899931]  [<ffffffff812cde14>] ?
driver_register+0x89/0x101
Dec 18 20:28:38 debvm kernel: [    0.899933]  [<ffffffff811d9336>] ?
__pci_register_driver+0x49/0xa3
Dec 18 20:28:38 debvm kernel: [    0.899935]  [<ffffffff816d55c7>] ?
ttm_init+0x63/0x63
Dec 18 20:28:38 debvm kernel: [    0.899937]  [<ffffffff81002085>] ?
do_one_initcall+0x75/0x12c
Dec 18 20:28:38 debvm kernel: [    0.899940]  [<ffffffff816a6cc2>] ?
kernel_init+0x13c/0x1c0
Dec 18 20:28:38 debvm kernel: [    0.899941]  [<ffffffff816a6565>] ?
do_early_param+0x83/0x83
Dec 18 20:28:38 debvm kernel: [    0.899943]  [<ffffffff813d9f44>] ?
kernel_thread_helper+0x4/0x10
Dec 18 20:28:38 debvm kernel: [    0.899945]  [<ffffffff816a6b86>] ?
start_kernel+0x3e1/0x3e1
Dec 18 20:28:38 debvm kernel: [    0.899947]  [<ffffffff813d9f40>] ?
gs_change+0x13/0x13
Dec 18 20:28:38 debvm kernel: [    0.899950] ---[ end trace
db461543ce599b44 ]---

I'm not sure whether this has anything to do with the freeze. It seems
to show up on every boot since I upgraded to Xen 4.2.1-rc2, and both
Debian kernels 3.2.32 and 3.6.9 produce the same log. But the
whole-system freeze happens only during gaming, which is much less
frequent, so I'm not sure the two are related. Anyway, could you
comment on what this log means?

I can find one of the mentioned addresses in the qemu-dm log:
pt_pci_write_config: [00:02:0] address=00fc val=0xfeff5000 len=4
igd_write_opregion: Map OpRegion: cd996018 -> feff5018
igd_write_opregion: [00:02:0] addr=fc len=2 val=feff5000
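
For what it's worth, the addresses in the warning line do describe an overlap: the kernel's resource map sanity check fires when an ioremap request crosses a resource boundary. A minimal sketch of that interval arithmetic, using the values from the log above (this is illustrative only, not the kernel's actual check):

```python
def overlaps(a_start, a_end, b_start, b_end):
    # True when the closed intervals [a_start, a_end] and
    # [b_start, b_end] intersect.
    return a_start <= b_end and b_start <= a_end


# Addresses taken from the "resource map sanity check conflict" line.
requested = (0xfeff5018, 0xfeff7017)   # region being ioremapped
reserved = (0xfeff7000, 0xffffffff)    # reserved region it collides with

# The tail of the requested mapping (0xfeff7000..0xfeff7017) falls
# inside the reserved region, which is what the check complains about.
print(overlaps(*requested, *reserved))
```
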

PS: I also run XBMC on a domU, and it plays back video with HW
acceleration (VAAPI) without any problem. XBMC is itself a
graphics-intensive program, but it runs on a pure HVM guest, while
the failing case is on PVHVM.

PS2: I also hit another instability yesterday. It happened while I
was compiling a kernel inside the domU: the host rebooted suddenly.
Since I was not using graphics at the time (the Xorg session was
idle; I was connected through SSH), this may be a different issue.

Thanks,
Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MJ-0004fi-Tr; Tue, 18 Dec 2012 17:53:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MI-0004fJ-5z
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:38 +0000
Received: from [85.158.143.99:12147] by server-1.bemta-4.messagelabs.com id
	AF/E8-28401-1ADA0D05; Tue, 18 Dec 2012 17:53:37 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355853216!30034680!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23706 invoked from network); 18 Dec 2012 17:53:36 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-15.tower-216.messagelabs.com with SMTP;
	18 Dec 2012 17:53:36 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrKki028159; Tue, 18 Dec 2012 17:53:20 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 1D274C2B15; Tue, 18 Dec 2012 17:53:18 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:16 +0000
Message-Id: <1355853196-23676-7-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 6/6] ARM: mach-virt: add SMP support using
	PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for SMP to mach-virt using the PSCI
infrastructure.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mach-virt/Kconfig   |  1 +
 arch/arm/mach-virt/Makefile  |  1 +
 arch/arm/mach-virt/platsmp.c | 58 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/mach-virt/virt.c    |  4 +++
 4 files changed, 64 insertions(+)
 create mode 100644 arch/arm/mach-virt/platsmp.c

diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
index a568a2a..8958f0d 100644
--- a/arch/arm/mach-virt/Kconfig
+++ b/arch/arm/mach-virt/Kconfig
@@ -3,6 +3,7 @@ config ARCH_VIRT
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select ARM_GIC
 	select ARM_ARCH_TIMER
+	select ARM_PSCI
 	select HAVE_SMP
 	select CPU_V7
 	select SPARSE_IRQ
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
index 7ddbfa6..042afc1 100644
--- a/arch/arm/mach-virt/Makefile
+++ b/arch/arm/mach-virt/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-y					:= virt.o
+obj-$(CONFIG_SMP)			+= platsmp.o
diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
new file mode 100644
index 0000000..e358beb
--- /dev/null
+++ b/arch/arm/mach-virt/platsmp.c
@@ -0,0 +1,58 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/of.h>
+
+#include <asm/psci.h>
+#include <asm/smp_plat.h>
+#include <asm/hardware/gic.h>
+
+extern void secondary_startup(void);
+
+static void __init virt_smp_init_cpus(void)
+{
+	set_smp_cross_call(gic_raise_softirq);
+}
+
+static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+static int __cpuinit virt_boot_secondary(unsigned int cpu,
+					 struct task_struct *idle)
+{
+	if (psci_ops.cpu_on)
+		return psci_ops.cpu_on(cpu_logical_map(cpu),
+				       __pa(secondary_startup));
+	return -ENODEV;
+}
+
+static void __cpuinit virt_secondary_init(unsigned int cpu)
+{
+	gic_secondary_init(0);
+}
+
+struct smp_operations __initdata virt_smp_ops = {
+	.smp_init_cpus		= virt_smp_init_cpus,
+	.smp_prepare_cpus	= virt_smp_prepare_cpus,
+	.smp_secondary_init	= virt_secondary_init,
+	.smp_boot_secondary	= virt_boot_secondary,
+};
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
index 174b9da..1d0a85a 100644
--- a/arch/arm/mach-virt/virt.c
+++ b/arch/arm/mach-virt/virt.c
@@ -20,6 +20,7 @@
 
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/smp.h>
 
 #include <asm/arch_timer.h>
 #include <asm/hardware/gic.h>
@@ -56,10 +57,13 @@ static struct sys_timer virt_timer = {
 	.init = virt_timer_init,
 };
 
+extern struct smp_operations virt_smp_ops;
+
 DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
 	.init_irq	= gic_init_irq,
 	.handle_irq     = gic_handle_irq,
 	.timer		= &virt_timer,
 	.init_machine	= virt_init,
+	.smp		= smp_ops(virt_smp_ops),
 	.dt_compat	= virt_dt_match,
 MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MK-0004gA-Rz; Tue, 18 Dec 2012 17:53:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MI-0004fI-T2
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:39 +0000
Received: from [85.158.138.51:40023] by server-7.bemta-3.messagelabs.com id
	D1/72-23008-2ADA0D05; Tue, 18 Dec 2012 17:53:38 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1355853217!29454275!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31937 invoked from network); 18 Dec 2012 17:53:37 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-5.tower-174.messagelabs.com with SMTP;
	18 Dec 2012 17:53:37 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrJki028148; Tue, 18 Dec 2012 17:53:19 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 7D814C2A9D; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:10 +0000
Message-Id: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 0/6] Add support for a fake,
	para-virtualised machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This is version three of the patches originally posted here:

  v1.) http://lists.infradead.org/pipermail/linux-arm-kernel/2012-December/135870.html
  v2.) http://lists.infradead.org/pipermail/linux-arm-kernel/2012-December/137750.html

Thanks to all those who have provided comments so far.
Changes for v3 include:

	* Ripped out *even more* SMP code by rebasing onto latest
	  mainline
	* Removed function-base property from device-tree binding
	* Annotated the low-level firmware invocation functions with
	  noinline to clarify intent
	* Minor cleanups

As usual, testing this relies on KVM support for PSCI, a magic kvmtool
and Mark Rutland's arch-timer patches.

Comments welcome,

Will


Marc Zyngier (1):
  ARM: Dummy Virtual Machine platform support

Will Deacon (5):
  ARM: opcodes: add missing include of linux/linkage.h
  ARM: opcodes: add opcodes definitions for ARM security extensions
  ARM: psci: add devicetree binding for describing PSCI firmware
  ARM: psci: add support for PSCI invocations from the kernel
  ARM: mach-virt: add SMP support using PSCI

 Documentation/devicetree/bindings/arm/psci.txt |  55 +++++++
 arch/arm/Kconfig                               |  12 ++
 arch/arm/Makefile                              |   1 +
 arch/arm/include/asm/opcodes-sec.h             |  24 +++
 arch/arm/include/asm/opcodes.h                 |   1 +
 arch/arm/include/asm/psci.h                    |  36 +++++
 arch/arm/kernel/Makefile                       |   1 +
 arch/arm/kernel/psci.c                         | 211 +++++++++++++++++++++++++
 arch/arm/mach-virt/Kconfig                     |  10 ++
 arch/arm/mach-virt/Makefile                    |   6 +
 arch/arm/mach-virt/platsmp.c                   |  58 +++++++
 arch/arm/mach-virt/virt.c                      |  69 ++++++++
 12 files changed, 484 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/psci.txt
 create mode 100644 arch/arm/include/asm/opcodes-sec.h
 create mode 100644 arch/arm/include/asm/psci.h
 create mode 100644 arch/arm/kernel/psci.c
 create mode 100644 arch/arm/mach-virt/Kconfig
 create mode 100644 arch/arm/mach-virt/Makefile
 create mode 100644 arch/arm/mach-virt/platsmp.c
 create mode 100644 arch/arm/mach-virt/virt.c

-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MK-0004fp-AX; Tue, 18 Dec 2012 17:53:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MI-0004fJ-KX
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:38 +0000
Received: from [85.158.143.35:19565] by server-1.bemta-4.messagelabs.com id
	CF/E8-28401-1ADA0D05; Tue, 18 Dec 2012 17:53:37 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355853216!11785960!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27182 invoked from network); 18 Dec 2012 17:53:36 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-7.tower-21.messagelabs.com with SMTP;
	18 Dec 2012 17:53:36 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrJki028149; Tue, 18 Dec 2012 17:53:19 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 91150C22D0; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:11 +0000
Message-Id: <1355853196-23676-2-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 1/6] ARM: opcodes: add missing include of
	linux/linkage.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

opcodes.h wants to declare an asmlinkage function, so we need to include
linux/linkage.h

Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/opcodes.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 74e211a..e796c59 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -10,6 +10,7 @@
 #define __ASM_ARM_OPCODES_H
 
 #ifndef __ASSEMBLY__
+#include <linux/linkage.h>
 extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 #endif
 
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MJ-0004fi-Tr; Tue, 18 Dec 2012 17:53:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MI-0004fJ-5z
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:38 +0000
Received: from [85.158.143.99:12147] by server-1.bemta-4.messagelabs.com id
	AF/E8-28401-1ADA0D05; Tue, 18 Dec 2012 17:53:37 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355853216!30034680!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23706 invoked from network); 18 Dec 2012 17:53:36 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-15.tower-216.messagelabs.com with SMTP;
	18 Dec 2012 17:53:36 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrKki028159; Tue, 18 Dec 2012 17:53:20 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 1D274C2B15; Tue, 18 Dec 2012 17:53:18 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:16 +0000
Message-Id: <1355853196-23676-7-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 6/6] ARM: mach-virt: add SMP support using
	PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for SMP to mach-virt using the PSCI
infrastructure.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mach-virt/Kconfig   |  1 +
 arch/arm/mach-virt/Makefile  |  1 +
 arch/arm/mach-virt/platsmp.c | 58 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/mach-virt/virt.c    |  4 +++
 4 files changed, 64 insertions(+)
 create mode 100644 arch/arm/mach-virt/platsmp.c

diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
index a568a2a..8958f0d 100644
--- a/arch/arm/mach-virt/Kconfig
+++ b/arch/arm/mach-virt/Kconfig
@@ -3,6 +3,7 @@ config ARCH_VIRT
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select ARM_GIC
 	select ARM_ARCH_TIMER
+	select ARM_PSCI
 	select HAVE_SMP
 	select CPU_V7
 	select SPARSE_IRQ
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
index 7ddbfa6..042afc1 100644
--- a/arch/arm/mach-virt/Makefile
+++ b/arch/arm/mach-virt/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-y					:= virt.o
+obj-$(CONFIG_SMP)			+= platsmp.o
diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
new file mode 100644
index 0000000..e358beb
--- /dev/null
+++ b/arch/arm/mach-virt/platsmp.c
@@ -0,0 +1,58 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/of.h>
+
+#include <asm/psci.h>
+#include <asm/smp_plat.h>
+#include <asm/hardware/gic.h>
+
+extern void secondary_startup(void);
+
+static void __init virt_smp_init_cpus(void)
+{
+	set_smp_cross_call(gic_raise_softirq);
+}
+
+static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
+{
+}
+
+static int __cpuinit virt_boot_secondary(unsigned int cpu,
+					 struct task_struct *idle)
+{
+	if (psci_ops.cpu_on)
+		return psci_ops.cpu_on(cpu_logical_map(cpu),
+				       __pa(secondary_startup));
+	return -ENODEV;
+}
+
+static void __cpuinit virt_secondary_init(unsigned int cpu)
+{
+	gic_secondary_init(0);
+}
+
+struct smp_operations __initdata virt_smp_ops = {
+	.smp_init_cpus		= virt_smp_init_cpus,
+	.smp_prepare_cpus	= virt_smp_prepare_cpus,
+	.smp_secondary_init	= virt_secondary_init,
+	.smp_boot_secondary	= virt_boot_secondary,
+};
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
index 174b9da..1d0a85a 100644
--- a/arch/arm/mach-virt/virt.c
+++ b/arch/arm/mach-virt/virt.c
@@ -20,6 +20,7 @@
 
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/smp.h>
 
 #include <asm/arch_timer.h>
 #include <asm/hardware/gic.h>
@@ -56,10 +57,13 @@ static struct sys_timer virt_timer = {
 	.init = virt_timer_init,
 };
 
+extern struct smp_operations virt_smp_ops;
+
 DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
 	.init_irq	= gic_init_irq,
 	.handle_irq     = gic_handle_irq,
 	.timer		= &virt_timer,
 	.init_machine	= virt_init,
+	.smp		= smp_ops(virt_smp_ops),
 	.dt_compat	= virt_dt_match,
 MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1ML-0004gK-8u; Tue, 18 Dec 2012 17:53:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MK-0004fe-7q
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:40 +0000
Received: from [85.158.139.83:7331] by server-13.bemta-5.messagelabs.com id
	6F/7E-10716-3ADA0D05; Tue, 18 Dec 2012 17:53:39 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355853218!23080153!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18256 invoked from network); 18 Dec 2012 17:53:38 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-16.tower-182.messagelabs.com with SMTP;
	18 Dec 2012 17:53:38 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrKki028158; Tue, 18 Dec 2012 17:53:20 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id F2C83C2B14; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:15 +0000
Message-Id: <1355853196-23676-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc Zyngier <marc.zyngier@arm.com>, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 5/6] ARM: Dummy Virtual Machine platform
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Marc Zyngier <marc.zyngier@arm.com>

Add support for the smallest, dumbest possible platform, to be
used as a guest for KVM or other hypervisors.

It only mandates a GIC and architected timers. Fits nicely with
a multiplatform zImage. Uses very little silicon area.
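For a sense of what such a guest's device tree might look like: the sketch below uses only the compatible strings that appear in this patch ("linux,dummy-virt" and "arm,cortex-a15-gic") plus the standard ARMv7 architected-timer binding; the unit addresses, register sizes and interrupt specifiers are made-up placeholders, not values from the series.

```dts
/dts-v1/;

/ {
	compatible = "linux,dummy-virt";
	#address-cells = <1>;
	#size-cells = <1>;

	/* Address and reg values are illustrative only. */
	intc: interrupt-controller@2c001000 {
		compatible = "arm,cortex-a15-gic";
		interrupt-controller;
		#interrupt-cells = <3>;
		reg = <0x2c001000 0x1000>, <0x2c002000 0x1000>;
	};

	timer {
		compatible = "arm,armv7-timer";
		interrupts = <1 13 0xf08>, <1 14 0xf08>;
	};
};
```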

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/Kconfig            |  2 ++
 arch/arm/Makefile           |  1 +
 arch/arm/mach-virt/Kconfig  |  9 +++++++
 arch/arm/mach-virt/Makefile |  5 ++++
 arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 82 insertions(+)
 create mode 100644 arch/arm/mach-virt/Kconfig
 create mode 100644 arch/arm/mach-virt/Makefile
 create mode 100644 arch/arm/mach-virt/virt.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 80d54b8..3443d89 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1130,6 +1130,8 @@ source "arch/arm/mach-versatile/Kconfig"
 source "arch/arm/mach-vexpress/Kconfig"
 source "arch/arm/plat-versatile/Kconfig"
 
+source "arch/arm/mach-virt/Kconfig"
+
 source "arch/arm/mach-vt8500/Kconfig"
 
 source "arch/arm/mach-w90x900/Kconfig"
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 30c443c..ea4f481 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -194,6 +194,7 @@ machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
 machine-$(CONFIG_ARCH_SPEAR13XX)	+= spear13xx
 machine-$(CONFIG_ARCH_SPEAR3XX)		+= spear3xx
 machine-$(CONFIG_MACH_SPEAR600)		+= spear6xx
+machine-$(CONFIG_ARCH_VIRT)		+= virt
 machine-$(CONFIG_ARCH_ZYNQ)		+= zynq
 machine-$(CONFIG_ARCH_SUNXI)		+= sunxi
 
diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
new file mode 100644
index 0000000..a568a2a
--- /dev/null
+++ b/arch/arm/mach-virt/Kconfig
@@ -0,0 +1,9 @@
+config ARCH_VIRT
+	bool "Dummy Virtual Machine" if ARCH_MULTI_V7
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select ARM_GIC
+	select ARM_ARCH_TIMER
+	select HAVE_SMP
+	select CPU_V7
+	select SPARSE_IRQ
+	select USE_OF
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
new file mode 100644
index 0000000..7ddbfa6
--- /dev/null
+++ b/arch/arm/mach-virt/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the linux kernel.
+#
+
+obj-y					:= virt.o
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
new file mode 100644
index 0000000..174b9da
--- /dev/null
+++ b/arch/arm/mach-virt/virt.c
@@ -0,0 +1,65 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Authors: Will Deacon <will.deacon@arm.com>,
+ *          Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+
+#include <asm/arch_timer.h>
+#include <asm/hardware/gic.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/time.h>
+
+const static struct of_device_id irq_match[] = {
+	{ .compatible = "arm,cortex-a15-gic", .data = gic_of_init, },
+	{}
+};
+
+static void __init gic_init_irq(void)
+{
+	of_irq_init(irq_match);
+}
+
+static void __init virt_init(void)
+{
+	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+
+static void __init virt_timer_init(void)
+{
+	WARN_ON(arch_timer_of_register() != 0);
+	WARN_ON(arch_timer_sched_clock_init() != 0);
+}
+
+static const char *virt_dt_match[] = {
+	"linux,dummy-virt",
+	NULL
+};
+
+static struct sys_timer virt_timer = {
+	.init = virt_timer_init,
+};
+
+DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
+	.init_irq	= gic_init_irq,
+	.handle_irq     = gic_handle_irq,
+	.timer		= &virt_timer,
+	.init_machine	= virt_init,
+	.dt_compat	= virt_dt_match,
+MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MJ-0004fa-IN; Tue, 18 Dec 2012 17:53:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MH-0004fI-LI
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:37 +0000
Received: from [85.158.137.99:6144] by server-7.bemta-3.messagelabs.com id
	49/62-23008-0ADA0D05; Tue, 18 Dec 2012 17:53:36 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355853215!14476650!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20605 invoked from network); 18 Dec 2012 17:53:36 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-7.tower-217.messagelabs.com with SMTP;
	18 Dec 2012 17:53:36 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrJki028150; Tue, 18 Dec 2012 17:53:19 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id A6EC5C2A9E; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:12 +0000
Message-Id: <1355853196-23676-3-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 2/6] ARM: opcodes: add opcodes definitions
	for ARM security extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The ARM security extensions introduced the smc instruction, which is not
supported by all versions of GAS.

This patch introduces opcodes-sec.h, so that smc is made available in a
similar manner to hvc.

Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/opcodes-sec.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 arch/arm/include/asm/opcodes-sec.h

diff --git a/arch/arm/include/asm/opcodes-sec.h b/arch/arm/include/asm/opcodes-sec.h
new file mode 100644
index 0000000..bc3a917
--- /dev/null
+++ b/arch/arm/include/asm/opcodes-sec.h
@@ -0,0 +1,24 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ */
+
+#ifndef __ASM_ARM_OPCODES_SEC_H
+#define __ASM_ARM_OPCODES_SEC_H
+
+#include <asm/opcodes.h>
+
+#define __SMC(imm4) __inst_arm_thumb32(					\
+	0xE1600070 | (((imm4) & 0xF) << 0),				\
+	0xF7F08000 | (((imm4) & 0xF) << 16)				\
+)
+
+#endif /* __ASM_ARM_OPCODES_SEC_H */
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1ML-0004gK-8u; Tue, 18 Dec 2012 17:53:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MK-0004fe-7q
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:40 +0000
Received: from [85.158.139.83:7331] by server-13.bemta-5.messagelabs.com id
	6F/7E-10716-3ADA0D05; Tue, 18 Dec 2012 17:53:39 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355853218!23080153!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18256 invoked from network); 18 Dec 2012 17:53:38 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-16.tower-182.messagelabs.com with SMTP;
	18 Dec 2012 17:53:38 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrKki028158; Tue, 18 Dec 2012 17:53:20 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id F2C83C2B14; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:15 +0000
Message-Id: <1355853196-23676-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc Zyngier <marc.zyngier@arm.com>, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 5/6] ARM: Dummy Virtual Machine platform
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Marc Zyngier <marc.zyngier@arm.com>

Add support for the smallest, dumbest possible platform, to be
used as a guest for KVM or other hypervisors.

It only mandates a GIC and architected timers. Fits nicely with
a multiplatform zImage. Uses very little silicon area.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/Kconfig            |  2 ++
 arch/arm/Makefile           |  1 +
 arch/arm/mach-virt/Kconfig  |  9 +++++++
 arch/arm/mach-virt/Makefile |  5 ++++
 arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 82 insertions(+)
 create mode 100644 arch/arm/mach-virt/Kconfig
 create mode 100644 arch/arm/mach-virt/Makefile
 create mode 100644 arch/arm/mach-virt/virt.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 80d54b8..3443d89 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1130,6 +1130,8 @@ source "arch/arm/mach-versatile/Kconfig"
 source "arch/arm/mach-vexpress/Kconfig"
 source "arch/arm/plat-versatile/Kconfig"
 
+source "arch/arm/mach-virt/Kconfig"
+
 source "arch/arm/mach-vt8500/Kconfig"
 
 source "arch/arm/mach-w90x900/Kconfig"
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 30c443c..ea4f481 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -194,6 +194,7 @@ machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
 machine-$(CONFIG_ARCH_SPEAR13XX)	+= spear13xx
 machine-$(CONFIG_ARCH_SPEAR3XX)		+= spear3xx
 machine-$(CONFIG_MACH_SPEAR600)		+= spear6xx
+machine-$(CONFIG_ARCH_VIRT)		+= virt
 machine-$(CONFIG_ARCH_ZYNQ)		+= zynq
 machine-$(CONFIG_ARCH_SUNXI)		+= sunxi
 
diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
new file mode 100644
index 0000000..a568a2a
--- /dev/null
+++ b/arch/arm/mach-virt/Kconfig
@@ -0,0 +1,9 @@
+config ARCH_VIRT
+	bool "Dummy Virtual Machine" if ARCH_MULTI_V7
+	select ARCH_WANT_OPTIONAL_GPIOLIB
+	select ARM_GIC
+	select ARM_ARCH_TIMER
+	select HAVE_SMP
+	select CPU_V7
+	select SPARSE_IRQ
+	select USE_OF
diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
new file mode 100644
index 0000000..7ddbfa6
--- /dev/null
+++ b/arch/arm/mach-virt/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the linux kernel.
+#
+
+obj-y					:= virt.o
diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
new file mode 100644
index 0000000..174b9da
--- /dev/null
+++ b/arch/arm/mach-virt/virt.c
@@ -0,0 +1,65 @@
+/*
+ * Dummy Virtual Machine - does what it says on the tin.
+ *
+ * Copyright (C) 2012 ARM Ltd
+ * Authors: Will Deacon <will.deacon@arm.com>,
+ *          Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+
+#include <asm/arch_timer.h>
+#include <asm/hardware/gic.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/time.h>
+
+static const struct of_device_id irq_match[] = {
+	{ .compatible = "arm,cortex-a15-gic", .data = gic_of_init, },
+	{}
+};
+
+static void __init gic_init_irq(void)
+{
+	of_irq_init(irq_match);
+}
+
+static void __init virt_init(void)
+{
+	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
+}
+
+static void __init virt_timer_init(void)
+{
+	WARN_ON(arch_timer_of_register() != 0);
+	WARN_ON(arch_timer_sched_clock_init() != 0);
+}
+
+static const char *virt_dt_match[] = {
+	"linux,dummy-virt",
+	NULL
+};
+
+static struct sys_timer virt_timer = {
+	.init = virt_timer_init,
+};
+
+DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
+	.init_irq	= gic_init_irq,
+	.handle_irq     = gic_handle_irq,
+	.timer		= &virt_timer,
+	.init_machine	= virt_init,
+	.dt_compat	= virt_dt_match,
+MACHINE_END
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
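
For context, a device tree that a hypervisor might hand to this platform could look roughly like the following. The "linux,dummy-virt" and "arm,cortex-a15-gic" compatible strings come from the patch; the register addresses, the timer binding, and the interrupt specifiers are illustrative assumptions only:

```dts
/dts-v1/;

/ {
	compatible = "linux,dummy-virt";
	#address-cells = <1>;
	#size-cells = <1>;

	gic: interrupt-controller@2c001000 {	/* address is hypothetical */
		compatible = "arm,cortex-a15-gic";
		interrupt-controller;
		#interrupt-cells = <3>;
		reg = <0x2c001000 0x1000>,
		      <0x2c002000 0x1000>;
	};

	timer {	/* architected timers, binding assumed */
		compatible = "arm,armv7-timer";
		interrupts = <1 13 0xf08>, <1 14 0xf08>;
	};
};
```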

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MK-0004fp-AX; Tue, 18 Dec 2012 17:53:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1MI-0004fJ-KX
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:38 +0000
Received: from [85.158.143.35:19565] by server-1.bemta-4.messagelabs.com id
	CF/E8-28401-1ADA0D05; Tue, 18 Dec 2012 17:53:37 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355853216!11785960!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27182 invoked from network); 18 Dec 2012 17:53:36 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-7.tower-21.messagelabs.com with SMTP;
	18 Dec 2012 17:53:36 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrJki028149; Tue, 18 Dec 2012 17:53:19 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id 91150C22D0; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:11 +0000
Message-Id: <1355853196-23676-2-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 1/6] ARM: opcodes: add missing include of
	linux/linkage.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

opcodes.h wants to declare an asmlinkage function, so we need to include
linux/linkage.h.

Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/opcodes.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 74e211a..e796c59 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -10,6 +10,7 @@
 #define __ASM_ARM_OPCODES_H
 
 #ifndef __ASSEMBLY__
+#include <linux/linkage.h>
 extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 #endif
 
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MM-0004gt-MM; Tue, 18 Dec 2012 17:53:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1ML-0004ft-3M
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:41 +0000
Received: from [85.158.139.211:11165] by server-10.bemta-5.messagelabs.com id
	B5/7A-13383-4ADA0D05; Tue, 18 Dec 2012 17:53:40 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355853218!20194989!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10474 invoked from network); 18 Dec 2012 17:53:38 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-3.tower-206.messagelabs.com with SMTP;
	18 Dec 2012 17:53:38 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrKki028155; Tue, 18 Dec 2012 17:53:20 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id DC5A6C2B13; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:14 +0000
Message-Id: <1355853196-23676-5-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 4/6] ARM: psci: add support for PSCI
	invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the Power State Coordination Interface
defined by ARM, allowing Linux to request CPU-centric power-management
operations from firmware implementing the PSCI protocol.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/Kconfig            |  10 +++
 arch/arm/include/asm/psci.h |  36 ++++++++
 arch/arm/kernel/Makefile    |   1 +
 arch/arm/kernel/psci.c      | 211 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 258 insertions(+)
 create mode 100644 arch/arm/include/asm/psci.h
 create mode 100644 arch/arm/kernel/psci.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 8c83d98..80d54b8 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1617,6 +1617,16 @@ config HOTPLUG_CPU
 	  Say Y here to experiment with turning CPUs off and on.  CPUs
 	  can be controlled through /sys/devices/system/cpu.
 
+config ARM_PSCI
+	bool "Support for the ARM Power State Coordination Interface (PSCI)"
+	depends on CPU_V7
+	help
+	  Say Y here if you want Linux to communicate with system firmware
+	  implementing the PSCI specification for CPU-centric power
+	  management operations described in ARM document number ARM DEN
+	  0022A ("Power State Coordination Interface System Software on
+	  ARM processors").
+
 config LOCAL_TIMERS
 	bool "Use local timer interrupts"
 	depends on SMP
diff --git a/arch/arm/include/asm/psci.h b/arch/arm/include/asm/psci.h
new file mode 100644
index 0000000..ce0dbe7
--- /dev/null
+++ b/arch/arm/include/asm/psci.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ */
+
+#ifndef __ASM_ARM_PSCI_H
+#define __ASM_ARM_PSCI_H
+
+#define PSCI_POWER_STATE_TYPE_STANDBY		0
+#define PSCI_POWER_STATE_TYPE_POWER_DOWN	1
+
+struct psci_power_state {
+	u16	id;
+	u8	type;
+	u8	affinity_level;
+};
+
+struct psci_operations {
+	int (*cpu_suspend)(struct psci_power_state state,
+			   unsigned long entry_point);
+	int (*cpu_off)(struct psci_power_state state);
+	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
+	int (*migrate)(unsigned long cpuid);
+};
+
+extern struct psci_operations psci_ops;
+
+#endif /* __ASM_ARM_PSCI_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 5bbec7b..5f3338e 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -82,5 +82,6 @@ obj-$(CONFIG_DEBUG_LL)	+= debug.o
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 
 obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
+obj-$(CONFIG_ARM_PSCI)		+= psci.o
 
 extra-y := $(head-y) vmlinux.lds
diff --git a/arch/arm/kernel/psci.c b/arch/arm/kernel/psci.c
new file mode 100644
index 0000000..3653164
--- /dev/null
+++ b/arch/arm/kernel/psci.c
@@ -0,0 +1,211 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ */
+
+#define pr_fmt(fmt) "psci: " fmt
+
+#include <linux/init.h>
+#include <linux/of.h>
+
+#include <asm/compiler.h>
+#include <asm/errno.h>
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+#include <asm/psci.h>
+
+struct psci_operations psci_ops;
+
+static int (*invoke_psci_fn)(u32, u32, u32, u32);
+
+enum psci_function {
+	PSCI_FN_CPU_SUSPEND,
+	PSCI_FN_CPU_ON,
+	PSCI_FN_CPU_OFF,
+	PSCI_FN_MIGRATE,
+	PSCI_FN_MAX,
+};
+
+static u32 psci_function_id[PSCI_FN_MAX];
+
+#define PSCI_RET_SUCCESS		0
+#define PSCI_RET_EOPNOTSUPP		-1
+#define PSCI_RET_EINVAL			-2
+#define PSCI_RET_EPERM			-3
+
+static int psci_to_linux_errno(int errno)
+{
+	switch (errno) {
+	case PSCI_RET_SUCCESS:
+		return 0;
+	case PSCI_RET_EOPNOTSUPP:
+		return -EOPNOTSUPP;
+	case PSCI_RET_EINVAL:
+		return -EINVAL;
+	case PSCI_RET_EPERM:
+		return -EPERM;
+	}
+
+	return -EINVAL;
+}
+
+#define PSCI_POWER_STATE_ID_MASK	0xffff
+#define PSCI_POWER_STATE_ID_SHIFT	0
+#define PSCI_POWER_STATE_TYPE_MASK	0x1
+#define PSCI_POWER_STATE_TYPE_SHIFT	16
+#define PSCI_POWER_STATE_AFFL_MASK	0x3
+#define PSCI_POWER_STATE_AFFL_SHIFT	24
+
+static u32 psci_power_state_pack(struct psci_power_state state)
+{
+	return	((state.id & PSCI_POWER_STATE_ID_MASK)
+			<< PSCI_POWER_STATE_ID_SHIFT)	|
+		((state.type & PSCI_POWER_STATE_TYPE_MASK)
+			<< PSCI_POWER_STATE_TYPE_SHIFT)	|
+		((state.affinity_level & PSCI_POWER_STATE_AFFL_MASK)
+			<< PSCI_POWER_STATE_AFFL_SHIFT);
+}
+
+/*
+ * The following two functions are invoked via the invoke_psci_fn pointer
+ * and will not be inlined, allowing us to piggyback on the AAPCS.
+ */
+static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
+					 u32 arg2)
+{
+	asm volatile(
+			__asmeq("%0", "r0")
+			__asmeq("%1", "r1")
+			__asmeq("%2", "r2")
+			__asmeq("%3", "r3")
+			__HVC(0)
+		: "+r" (function_id)
+		: "r" (arg0), "r" (arg1), "r" (arg2));
+
+	return function_id;
+}
+
+static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
+					 u32 arg2)
+{
+	asm volatile(
+			__asmeq("%0", "r0")
+			__asmeq("%1", "r1")
+			__asmeq("%2", "r2")
+			__asmeq("%3", "r3")
+			__SMC(0)
+		: "+r" (function_id)
+		: "r" (arg0), "r" (arg1), "r" (arg2));
+
+	return function_id;
+}
+
+static int psci_cpu_suspend(struct psci_power_state state,
+			    unsigned long entry_point)
+{
+	int err;
+	u32 fn, power_state;
+
+	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
+	power_state = psci_power_state_pack(state);
+	err = invoke_psci_fn(fn, power_state, entry_point, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_cpu_off(struct psci_power_state state)
+{
+	int err;
+	u32 fn, power_state;
+
+	fn = psci_function_id[PSCI_FN_CPU_OFF];
+	power_state = psci_power_state_pack(state);
+	err = invoke_psci_fn(fn, power_state, 0, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
+{
+	int err;
+	u32 fn;
+
+	fn = psci_function_id[PSCI_FN_CPU_ON];
+	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_migrate(unsigned long cpuid)
+{
+	int err;
+	u32 fn;
+
+	fn = psci_function_id[PSCI_FN_MIGRATE];
+	err = invoke_psci_fn(fn, cpuid, 0, 0);
+	return psci_to_linux_errno(err);
+}
+
+static const struct of_device_id psci_of_match[] __initconst = {
+	{ .compatible = "arm,psci",	},
+	{},
+};
+
+static int __init psci_init(void)
+{
+	struct device_node *np;
+	const char *method;
+	u32 id;
+
+	np = of_find_matching_node(NULL, psci_of_match);
+	if (!np)
+		return 0;
+
+	pr_info("probing function IDs from device-tree\n");
+
+	if (of_property_read_string(np, "method", &method)) {
+		pr_warning("missing \"method\" property\n");
+		goto out_put_node;
+	}
+
+	if (!strcmp("hvc", method)) {
+		invoke_psci_fn = __invoke_psci_fn_hvc;
+	} else if (!strcmp("smc", method)) {
+		invoke_psci_fn = __invoke_psci_fn_smc;
+	} else {
+		pr_warning("invalid \"method\" property: %s\n", method);
+		goto out_put_node;
+	}
+
+	if (!of_property_read_u32(np, "cpu_suspend", &id)) {
+		psci_function_id[PSCI_FN_CPU_SUSPEND] = id;
+		psci_ops.cpu_suspend = psci_cpu_suspend;
+	}
+
+	if (!of_property_read_u32(np, "cpu_off", &id)) {
+		psci_function_id[PSCI_FN_CPU_OFF] = id;
+		psci_ops.cpu_off = psci_cpu_off;
+	}
+
+	if (!of_property_read_u32(np, "cpu_on", &id)) {
+		psci_function_id[PSCI_FN_CPU_ON] = id;
+		psci_ops.cpu_on = psci_cpu_on;
+	}
+
+	if (!of_property_read_u32(np, "migrate", &id)) {
+		psci_function_id[PSCI_FN_MIGRATE] = id;
+		psci_ops.migrate = psci_migrate;
+	}
+
+out_put_node:
+	of_node_put(np);
+	return 0;
+}
+early_initcall(psci_init);
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:53:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1MM-0004gt-MM; Tue, 18 Dec 2012 17:53:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1ML-0004ft-3M
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:41 +0000
Received: from [85.158.139.211:11165] by server-10.bemta-5.messagelabs.com id
	B5/7A-13383-4ADA0D05; Tue, 18 Dec 2012 17:53:40 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355853218!20194989!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10474 invoked from network); 18 Dec 2012 17:53:38 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-3.tower-206.messagelabs.com with SMTP;
	18 Dec 2012 17:53:38 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrKki028155; Tue, 18 Dec 2012 17:53:20 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id DC5A6C2B13; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:14 +0000
Message-Id: <1355853196-23676-5-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 4/6] ARM: psci: add support for PSCI
	invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the Power State Coordination Interface
defined by ARM, allowing Linux to request CPU-centric power-management
operations from firmware implementing the PSCI protocol.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/Kconfig            |  10 +++
 arch/arm/include/asm/psci.h |  36 ++++++++
 arch/arm/kernel/Makefile    |   1 +
 arch/arm/kernel/psci.c      | 211 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 258 insertions(+)
 create mode 100644 arch/arm/include/asm/psci.h
 create mode 100644 arch/arm/kernel/psci.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 8c83d98..80d54b8 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1617,6 +1617,16 @@ config HOTPLUG_CPU
 	  Say Y here to experiment with turning CPUs off and on.  CPUs
 	  can be controlled through /sys/devices/system/cpu.
 
+config ARM_PSCI
+	bool "Support for the ARM Power State Coordination Interface (PSCI)"
+	depends on CPU_V7
+	help
+	  Say Y here if you want Linux to communicate with system firmware
+	  implementing the PSCI specification for CPU-centric power
+	  management operations described in ARM document number ARM DEN
+	  0022A ("Power State Coordination Interface System Software on
+	  ARM processors").
+
 config LOCAL_TIMERS
 	bool "Use local timer interrupts"
 	depends on SMP
diff --git a/arch/arm/include/asm/psci.h b/arch/arm/include/asm/psci.h
new file mode 100644
index 0000000..ce0dbe7
--- /dev/null
+++ b/arch/arm/include/asm/psci.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ */
+
+#ifndef __ASM_ARM_PSCI_H
+#define __ASM_ARM_PSCI_H
+
+#define PSCI_POWER_STATE_TYPE_STANDBY		0
+#define PSCI_POWER_STATE_TYPE_POWER_DOWN	1
+
+struct psci_power_state {
+	u16	id;
+	u8	type;
+	u8	affinity_level;
+};
+
+struct psci_operations {
+	int (*cpu_suspend)(struct psci_power_state state,
+			   unsigned long entry_point);
+	int (*cpu_off)(struct psci_power_state state);
+	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
+	int (*migrate)(unsigned long cpuid);
+};
+
+extern struct psci_operations psci_ops;
+
+#endif /* __ASM_ARM_PSCI_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 5bbec7b..5f3338e 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -82,5 +82,6 @@ obj-$(CONFIG_DEBUG_LL)	+= debug.o
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 
 obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
+obj-$(CONFIG_ARM_PSCI)		+= psci.o
 
 extra-y := $(head-y) vmlinux.lds
diff --git a/arch/arm/kernel/psci.c b/arch/arm/kernel/psci.c
new file mode 100644
index 0000000..3653164
--- /dev/null
+++ b/arch/arm/kernel/psci.c
@@ -0,0 +1,211 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ */
+
+#define pr_fmt(fmt) "psci: " fmt
+
+#include <linux/init.h>
+#include <linux/of.h>
+
+#include <asm/compiler.h>
+#include <asm/errno.h>
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+#include <asm/psci.h>
+
+struct psci_operations psci_ops;
+
+static int (*invoke_psci_fn)(u32, u32, u32, u32);
+
+enum psci_function {
+	PSCI_FN_CPU_SUSPEND,
+	PSCI_FN_CPU_ON,
+	PSCI_FN_CPU_OFF,
+	PSCI_FN_MIGRATE,
+	PSCI_FN_MAX,
+};
+
+static u32 psci_function_id[PSCI_FN_MAX];
+
+#define PSCI_RET_SUCCESS		0
+#define PSCI_RET_EOPNOTSUPP		-1
+#define PSCI_RET_EINVAL			-2
+#define PSCI_RET_EPERM			-3
+
+static int psci_to_linux_errno(int errno)
+{
+	switch (errno) {
+	case PSCI_RET_SUCCESS:
+		return 0;
+	case PSCI_RET_EOPNOTSUPP:
+		return -EOPNOTSUPP;
+	case PSCI_RET_EINVAL:
+		return -EINVAL;
+	case PSCI_RET_EPERM:
+		return -EPERM;
+	}
+
+	return -EINVAL;
+}
+
+#define PSCI_POWER_STATE_ID_MASK	0xffff
+#define PSCI_POWER_STATE_ID_SHIFT	0
+#define PSCI_POWER_STATE_TYPE_MASK	0x1
+#define PSCI_POWER_STATE_TYPE_SHIFT	16
+#define PSCI_POWER_STATE_AFFL_MASK	0x3
+#define PSCI_POWER_STATE_AFFL_SHIFT	24
+
+static u32 psci_power_state_pack(struct psci_power_state state)
+{
+	return	((state.id & PSCI_POWER_STATE_ID_MASK)
+			<< PSCI_POWER_STATE_ID_SHIFT)	|
+		((state.type & PSCI_POWER_STATE_TYPE_MASK)
+			<< PSCI_POWER_STATE_TYPE_SHIFT)	|
+		((state.affinity_level & PSCI_POWER_STATE_AFFL_MASK)
+			<< PSCI_POWER_STATE_AFFL_SHIFT);
+}
+
+/*
+ * The following two functions are invoked via the invoke_psci_fn pointer
+ * and will not be inlined, allowing us to piggyback on the AAPCS.
+ */
+static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
+					 u32 arg2)
+{
+	asm volatile(
+			__asmeq("%0", "r0")
+			__asmeq("%1", "r1")
+			__asmeq("%2", "r2")
+			__asmeq("%3", "r3")
+			__HVC(0)
+		: "+r" (function_id)
+		: "r" (arg0), "r" (arg1), "r" (arg2));
+
+	return function_id;
+}
+
+static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
+					 u32 arg2)
+{
+	asm volatile(
+			__asmeq("%0", "r0")
+			__asmeq("%1", "r1")
+			__asmeq("%2", "r2")
+			__asmeq("%3", "r3")
+			__SMC(0)
+		: "+r" (function_id)
+		: "r" (arg0), "r" (arg1), "r" (arg2));
+
+	return function_id;
+}
+
+static int psci_cpu_suspend(struct psci_power_state state,
+			    unsigned long entry_point)
+{
+	int err;
+	u32 fn, power_state;
+
+	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
+	power_state = psci_power_state_pack(state);
+	err = invoke_psci_fn(fn, power_state, entry_point, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_cpu_off(struct psci_power_state state)
+{
+	int err;
+	u32 fn, power_state;
+
+	fn = psci_function_id[PSCI_FN_CPU_OFF];
+	power_state = psci_power_state_pack(state);
+	err = invoke_psci_fn(fn, power_state, 0, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
+{
+	int err;
+	u32 fn;
+
+	fn = psci_function_id[PSCI_FN_CPU_ON];
+	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
+	return psci_to_linux_errno(err);
+}
+
+static int psci_migrate(unsigned long cpuid)
+{
+	int err;
+	u32 fn;
+
+	fn = psci_function_id[PSCI_FN_MIGRATE];
+	err = invoke_psci_fn(fn, cpuid, 0, 0);
+	return psci_to_linux_errno(err);
+}
+
+static const struct of_device_id psci_of_match[] __initconst = {
+	{ .compatible = "arm,psci",	},
+	{},
+};
+
+static int __init psci_init(void)
+{
+	struct device_node *np;
+	const char *method;
+	u32 id;
+
+	np = of_find_matching_node(NULL, psci_of_match);
+	if (!np)
+		return 0;
+
+	pr_info("probing function IDs from device-tree\n");
+
+	if (of_property_read_string(np, "method", &method)) {
+		pr_warning("missing \"method\" property\n");
+		goto out_put_node;
+	}
+
+	if (!strcmp("hvc", method)) {
+		invoke_psci_fn = __invoke_psci_fn_hvc;
+	} else if (!strcmp("smc", method)) {
+		invoke_psci_fn = __invoke_psci_fn_smc;
+	} else {
+		pr_warning("invalid \"method\" property: %s\n", method);
+		goto out_put_node;
+	}
+
+	if (!of_property_read_u32(np, "cpu_suspend", &id)) {
+		psci_function_id[PSCI_FN_CPU_SUSPEND] = id;
+		psci_ops.cpu_suspend = psci_cpu_suspend;
+	}
+
+	if (!of_property_read_u32(np, "cpu_off", &id)) {
+		psci_function_id[PSCI_FN_CPU_OFF] = id;
+		psci_ops.cpu_off = psci_cpu_off;
+	}
+
+	if (!of_property_read_u32(np, "cpu_on", &id)) {
+		psci_function_id[PSCI_FN_CPU_ON] = id;
+		psci_ops.cpu_on = psci_cpu_on;
+	}
+
+	if (!of_property_read_u32(np, "migrate", &id)) {
+		psci_function_id[PSCI_FN_MIGRATE] = id;
+		psci_ops.migrate = psci_migrate;
+	}
+
+out_put_node:
+	of_node_put(np);
+	return 0;
+}
+early_initcall(psci_init);
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:54:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1Mc-0004nx-DO; Tue, 18 Dec 2012 17:53:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1Tl1Ma-0004mU-QU
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:53:57 +0000
Received: from [193.109.254.147:58909] by server-9.bemta-14.messagelabs.com id
	E1/1D-24482-4BDA0D05; Tue, 18 Dec 2012 17:53:56 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355853215!10538560!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23539 invoked from network); 18 Dec 2012 17:53:36 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-12.tower-27.messagelabs.com with SMTP;
	18 Dec 2012 17:53:36 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	qBIHrJki028151; Tue, 18 Dec 2012 17:53:19 GMT
Received: by mudshark.cambridge.arm.com (Postfix, from userid 1000)
	id C158CC2B12; Tue, 18 Dec 2012 17:53:17 +0000 (GMT)
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Date: Tue, 18 Dec 2012 17:53:13 +0000
Message-Id: <1355853196-23676-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
Cc: dave.martin@linaro.org, arnd@arndb.de, nico@fluxnic.net,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	Will Deacon <will.deacon@arm.com>, xen-devel@lists.xen.org,
	robherring2@gmail.com
Subject: [Xen-devel] [PATCH v3 3/6] ARM: psci: add devicetree binding for
	describing PSCI firmware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a new devicetree binding for describing PSCI firmware
to Linux.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 Documentation/devicetree/bindings/arm/psci.txt | 55 ++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/psci.txt

diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
new file mode 100644
index 0000000..433afe9
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/psci.txt
@@ -0,0 +1,55 @@
+* Power State Coordination Interface (PSCI)
+
+Firmware implementing the PSCI functions described in ARM document number
+ARM DEN 0022A ("Power State Coordination Interface System Software on ARM
+processors") can be used by Linux to initiate various CPU-centric power
+operations.
+
+Issue A of the specification describes functions for CPU suspend, hotplug
+and migration of secure software.
+
+Functions are invoked by trapping to the privilege level of the PSCI
+firmware (specified as part of the binding below) and passing arguments
+in a manner similar to that specified by AAPCS:
+
+	 r0		=> 32-bit Function ID / return value
+	{r1 - r3}	=> Parameters
+
+Note that the immediate field of the trapping instruction must be set
+to #0.
+
+
+Main node required properties:
+
+ - compatible    : Must be "arm,psci"
+
+ - method        : The method of calling the PSCI firmware. Permitted
+                   values are:
+
+                   "smc" : SMC #0, with the register assignments specified
+		           in this binding.
+
+                   "hvc" : HVC #0, with the register assignments specified
+		           in this binding.
+
+Main node optional properties:
+
+ - cpu_suspend   : Function ID for CPU_SUSPEND operation
+
+ - cpu_off       : Function ID for CPU_OFF operation
+
+ - cpu_on        : Function ID for CPU_ON operation
+
+ - migrate       : Function ID for MIGRATE operation
+
+
+Example:
+
+	psci {
+		compatible	= "arm,psci";
+		method		= "smc";
+		cpu_suspend	= <0x95c10000>;
+		cpu_off		= <0x95c10001>;
+		cpu_on		= <0x95c10002>;
+		migrate		= <0x95c10003>;
+	};
-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 17:59:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 17:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1Ro-0005ex-7p; Tue, 18 Dec 2012 17:59:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1Tl1Rn-0005eh-9v
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 17:59:19 +0000
Received: from [85.158.137.99:46979] by server-15.bemta-3.messagelabs.com id
	6D/04-07921-6FEA0D05; Tue, 18 Dec 2012 17:59:18 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355853557!14853951!1
X-Originating-IP: [212.227.126.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODYgPT4gNjAxNDQ=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODYgPT4gNjAxNDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2795 invoked from network); 18 Dec 2012 17:59:17 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.186)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 17:59:17 -0000
Received: from klappe2.localnet
	(HSI-KBW-095-208-003-199.hsi5.kabel-badenwuerttemberg.de
	[95.208.3.199])
	by mrelayeu.kundenserver.de (node=mrbap4) with ESMTP (Nemesis)
	id 0LZfCu-1TNjq62h4O-00lUUp; Tue, 18 Dec 2012 18:59:03 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Will Deacon <will.deacon@arm.com>
Date: Tue, 18 Dec 2012 17:59:01 +0000
User-Agent: KMail/1.12.2 (Linux/3.5.0; KDE/4.3.2; x86_64; ; )
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
In-Reply-To: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
MIME-Version: 1.0
Message-Id: <201212181759.01676.arnd@arndb.de>
X-Provags-ID: V02:K0:/Xgub/Efpaw/Clc8KVRf/T0QyaDfH5h6x34m9kgre7m
	dn1dJkP0unMHB7XBHa+vrRNYm7ZXeu7Lu4CaSGxKh9raeMx/ed
	S3riP+nYu8EChCgDri3OTdkEU70jOkMxtA1KlGljGcMbcAzFqL
	Z+kqhMs+14kn0ZwO7PRe9mZ/vnFD4JeEnymHRS7Tu+4BUgQ+3p
	OhdgPnsnb1YLFa6+TENvNFfE9kH/UZhPekTp4EDiS8B7ulRX5d
	qDle/GRn7Adf8eXydkpgxSZJsOCv9ZyS6dEHIAxA8jUTv8g7m5
	V3a8muqV9ynGq5u4Vwo4YmIYfYrpK7j8ULYUMPVZqSDVFX7WUX
	VSSsMbt1Mb4SzzdvH9Sc=
Cc: dave.martin@linaro.org, nico@fluxnic.net, Marc.Zyngier@arm.com,
	devicetree-discuss@lists.ozlabs.org, xen-devel@lists.xen.org,
	robherring2@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 0/6] Add support for a fake,
	para-virtualised machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday 18 December 2012, Will Deacon wrote:
> This is version three of the patches originally posted here:
> 
>   v1.) http://lists.infradead.org/pipermail/linux-arm-kernel/2012-December/135870.html
>   v2.) http://lists.infradead.org/pipermail/linux-arm-kernel/2012-December/137750.html
> 
> Thanks to all those who have provided comments so far.
> Changes for v3 include:
> 
>         * Ripped out *even more* SMP code by rebasing onto latest
>           mainline
>         * Removed function-base property from device-tree binding
>         * Annotated the low-level firmware invocation functions with
>           noinline to clarify intent
>         * Minor cleanups
> 
> As usual, testing this relies on KVM support for PSCI, a magic kvmtool
> and Mark Rutland's arch-timer patches.
> 

Acked-by: Arnd Bergmann <arnd@arndb.de>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:01:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:01:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1Td-0005r3-P2; Tue, 18 Dec 2012 18:01:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1Tl1Tc-0005qw-Qi
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:01:13 +0000
Received: from [85.158.137.99:49781] by server-11.bemta-3.messagelabs.com id
	BF/F5-13335-36FA0D05; Tue, 18 Dec 2012 18:01:07 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355853664!19913957!1
X-Originating-IP: [199.106.114.251]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjEwNi4xMTQuMjUxID0+IDE4NjkyMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12721 invoked from network); 18 Dec 2012 18:01:05 -0000
Received: from wolverine02.qualcomm.com (HELO wolverine02.qualcomm.com)
	(199.106.114.251)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Dec 2012 18:01:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,309,1355126400"; d="scan'208";a="15014487"
Received: from pdmz-ns-snip_115.254.qualcomm.com (HELO mostmsg01.qualcomm.com)
	([199.106.115.254])
	by wolverine02.qualcomm.com with ESMTP/TLS/DHE-RSA-AES256-SHA;
	18 Dec 2012 10:01:02 -0800
Received: from [10.228.68.45] (pdmz-ns-snip_218_1.qualcomm.com [192.168.218.1])
	by mostmsg01.qualcomm.com (Postfix) with ESMTPA id 028BA10004B4;
	Tue, 18 Dec 2012 10:01:01 -0800 (PST)
Message-ID: <50D0AF5C.1070605@codeaurora.org>
Date: Tue, 18 Dec 2012 13:01:00 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.2.24) Gecko/20111108 Thunderbird/3.1.16
MIME-Version: 1.0
To: Will Deacon <will.deacon@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
	<20121218131401.GB22139@mudshark.cambridge.arm.com>
In-Reply-To: <20121218131401.GB22139@mudshark.cambridge.arm.com>
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Marc Zyngier <Marc.Zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Will,

On 12/18/2012 08:14 AM, Will Deacon wrote:
> Hi Stefano,
> 
> On Tue, Dec 18, 2012 at 12:04:38PM +0000, Stefano Stabellini wrote:
>> On Mon, 17 Dec 2012, Will Deacon wrote:
>>> From: Marc Zyngier <marc.zyngier@arm.com>
>>>
>>> Add support for the smallest, dumbest possible platform, to be
>>> used as a guest for KVM or other hypervisors.

[...]

>> Should it come along with a DTS?
> 
> The only things the platform needs are GIC, timers, memory and a CPU.

I assume multiple virtio-mmio peripherals are hiding behind what you seem to
be advertising here as plain old memory?

> Furthermore, the location, size, frequency etc properties of these aren't
> fixed, so a dts would be fairly useless because it will probably not match
> the particular mach-virt instance you're targetting.

I disagree. I think an example DTS would be fairly useful, if only for the
full list of peripherals you're using on the platform.
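
For concreteness, a minimal DTS for such a platform might look something like
the sketch below. Every address, interrupt number, and the virtio-mmio
placement here is an illustrative guess on my part, not taken from the patch
or from kvmtool:

```dts
/dts-v1/;

/ {
    model = "linux,dummy-virt";          /* illustrative name only */
    compatible = "linux,dummy-virt";
    #address-cells = <1>;
    #size-cells = <1>;

    cpus {
        #address-cells = <1>;
        #size-cells = <0>;
        cpu@0 {
            device_type = "cpu";
            compatible = "arm,cortex-a15";
            reg = <0>;
        };
    };

    memory@80000000 {
        device_type = "memory";
        reg = <0x80000000 0x10000000>;   /* base/size picked by the tool */
    };

    gic: interrupt-controller@2c001000 {
        compatible = "arm,cortex-a15-gic";
        interrupt-controller;
        #interrupt-cells = <3>;
        reg = <0x2c001000 0x1000>,       /* distributor */
              <0x2c002000 0x1000>;       /* CPU interface */
    };

    timer {
        compatible = "arm,armv7-timer";
        interrupts = <1 13 0xf08>, <1 14 0xf08>,
                     <1 11 0xf08>, <1 10 0xf08>;
    };

    virtio@10000000 {
        compatible = "virtio,mmio";
        reg = <0x10000000 0x200>;
        interrupts = <0 16 4>;
    };
};
```

A file like this could then be compiled with the device tree compiler, e.g.
`dtc -I dts -O dtb -o virt.dtb virt.dts`.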

> For kvmtool, I've been generating the device-tree at runtime based on how
> kvmtool is invoked and it's been working pretty well so far.

If you'd much prefer to post the command line, tools version, etc. that you're
using to generate the DTB, rather than the DTS, that'd be better than nothing.

It seems like Rob Herring's earlier question about whether the dummy platform
is really justified never got answered. I think sending a sample DTS out with
the patchset would help "highlight where we need to do more work on DT driving
the initialization."

Lastly, I'm somewhat curious: why a virtio-mmio console rather than DCC?

Thanks,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:18:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1js-0006GT-GN; Tue, 18 Dec 2012 18:18:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl1jq-0006GO-HF
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:17:58 +0000
Received: from [85.158.139.211:52371] by server-4.bemta-5.messagelabs.com id
	FA/87-14693-553B0D05; Tue, 18 Dec 2012 18:17:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355854675!20941086!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4024 invoked from network); 18 Dec 2012 18:17:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:17:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1148305"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:17:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:17:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl1jm-0002FX-Lm;
	Tue, 18 Dec 2012 18:17:54 +0000
Date: Tue, 18 Dec 2012 18:17:46 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181817250.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: arm: fix long lines in entry.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

There are actually no functional changes, right?
If so:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  xen/arch/arm/entry.S |   66 +++++++++++++++++++++++++-------------------------
>  1 files changed, 33 insertions(+), 33 deletions(-)
> 
> diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
> index 1d6ff32..83793c2 100644
> --- a/xen/arch/arm/entry.S
> +++ b/xen/arch/arm/entry.S
> @@ -11,22 +11,22 @@
>  #define RESTORE_BANKED(mode) \
>  	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
>  
> -#define SAVE_ALL											\
> -	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */					\
> -	push {r0-r12}; /* Save R0-R12 */								\
> -													\
> -	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */				\
> -	str r11, [sp, #UREGS_pc];									\
> -													\
> -	str lr, [sp, #UREGS_lr];									\
> -													\
> -	add r11, sp, #UREGS_kernel_sizeof+4;								\
> -	str r11, [sp, #UREGS_sp];									\
> -													\
> -	mrs r11, SPSR_hyp;										\
> -	str r11, [sp, #UREGS_cpsr];									\
> -	and r11, #PSR_MODE_MASK;									\
> -	cmp r11, #PSR_MODE_HYP;										\
> +#define SAVE_ALL							\
> +	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
> +	push {r0-r12}; /* Save R0-R12 */				\
> +									\
> +	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
> +	str r11, [sp, #UREGS_pc];					\
> +									\
> +	str lr, [sp, #UREGS_lr];					\
> +									\
> +	add r11, sp, #UREGS_kernel_sizeof+4;				\
> +	str r11, [sp, #UREGS_sp];					\
> +									\
> +	mrs r11, SPSR_hyp;						\
> +	str r11, [sp, #UREGS_cpsr];					\
> +	and r11, #PSR_MODE_MASK;					\
> +	cmp r11, #PSR_MODE_HYP;						\
>  	blne save_guest_regs
>  
>  save_guest_regs:
> @@ -43,25 +43,25 @@ save_guest_regs:
>  	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
>  	mov pc, lr
>  
> -#define DEFINE_TRAP_ENTRY(trap)										\
> -	ALIGN;												\
> -trap_##trap:												\
> -	SAVE_ALL;											\
> -	cpsie i; 	/* local_irq_enable */								\
> -	adr lr, return_from_trap;									\
> -	mov r0, sp;											\
> -	mov r11, sp;											\
> -	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
> +#define DEFINE_TRAP_ENTRY(trap)						\
> +	ALIGN;								\
> +trap_##trap:								\
> +	SAVE_ALL;							\
> +	cpsie i; 	/* local_irq_enable */				\
> +	adr lr, return_from_trap;					\
> +	mov r0, sp;							\
> +	mov r11, sp;							\
> +	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
>  	b do_trap_##trap
>  
> -#define DEFINE_TRAP_ENTRY_NOIRQ(trap)									\
> -	ALIGN;												\
> -trap_##trap:												\
> -	SAVE_ALL;											\
> -	adr lr, return_from_trap;									\
> -	mov r0, sp;											\
> -	mov r11, sp;											\
> -	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
> +#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
> +	ALIGN;								\
> +trap_##trap:								\
> +	SAVE_ALL;							\
> +	adr lr, return_from_trap;					\
> +	mov r0, sp;							\
> +	mov r11, sp;							\
> +	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
>  	b do_trap_##trap
>  
>  .globl hyp_traps_vector
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:18:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1ka-0006KP-Tr; Tue, 18 Dec 2012 18:18:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl1kZ-0006KD-IP
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:18:44 +0000
Received: from [85.158.143.99:15564] by server-2.bemta-4.messagelabs.com id
	2F/96-30861-283B0D05; Tue, 18 Dec 2012 18:18:42 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355854716!29083663!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25384 invoked from network); 18 Dec 2012 18:18:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:18:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1148419"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:18:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:18:32 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl1kO-0002Fn-8S;
	Tue, 18 Dec 2012 18:18:32 +0000
Date: Tue, 18 Dec 2012 18:18:23 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-2-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181817580.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-2-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/5] xen: arm: remove hard tabs from asm
	code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Run expand(1) over xen/arch/arm/.../*.S
> 
> Add emacs local vars block.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
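
As an aside, the transformation being acked is mechanical: expand(1) replaces
hard tabs with spaces at the default 8-column tab stops. A self-contained
illustration (using a throwaway sample file rather than the real
xen/arch/arm/*.S sources):

```shell
# expand(1) converts hard tabs to runs of spaces, padding to the next
# 8-column tab stop. Write a one-line sample with a hard tab, expand it,
# and show the result.
printf 'mov\tr0, sp\n' > sample.S
expand sample.S > sample.S.expanded
cat sample.S.expanded
# prints: mov     r0, sp
```

In the patch this was simply run over every .S file under xen/arch/arm/,
with the emacs local-variables block added afterwards.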

>  xen/arch/arm/entry.S       |  194 +++++++-------
>  xen/arch/arm/head.S        |  613 ++++++++++++++++++++++----------------------
>  xen/arch/arm/mode_switch.S |  145 ++++++-----
>  xen/arch/arm/proc-ca15.S   |   17 +-
>  4 files changed, 498 insertions(+), 471 deletions(-)
> 
> diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
> index 83793c2..cbd1c48 100644
> --- a/xen/arch/arm/entry.S
> +++ b/xen/arch/arm/entry.S
> @@ -2,79 +2,79 @@
>  #include <asm/asm_defns.h>
>  #include <public/xen.h>
> 
> -#define SAVE_ONE_BANKED(reg)   mrs r11, reg; str r11, [sp, #UREGS_##reg]
> -#define RESTORE_ONE_BANKED(reg)        ldr r11, [sp, #UREGS_##reg]; msr reg, r11
> +#define SAVE_ONE_BANKED(reg)    mrs r11, reg; str r11, [sp, #UREGS_##reg]
> +#define RESTORE_ONE_BANKED(reg) ldr r11, [sp, #UREGS_##reg]; msr reg, r11
> 
>  #define SAVE_BANKED(mode) \
> -       SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
> +        SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
> 
>  #define RESTORE_BANKED(mode) \
> -       RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
> +        RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
> 
> -#define SAVE_ALL                                                       \
> -       sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
> -       push {r0-r12}; /* Save R0-R12 */                                \
> -                                                                       \
> -       mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
> -       str r11, [sp, #UREGS_pc];                                       \
> -                                                                       \
> -       str lr, [sp, #UREGS_lr];                                        \
> -                                                                       \
> -       add r11, sp, #UREGS_kernel_sizeof+4;                            \
> -       str r11, [sp, #UREGS_sp];                                       \
> -                                                                       \
> -       mrs r11, SPSR_hyp;                                              \
> -       str r11, [sp, #UREGS_cpsr];                                     \
> -       and r11, #PSR_MODE_MASK;                                        \
> -       cmp r11, #PSR_MODE_HYP;                                         \
> -       blne save_guest_regs
> +#define SAVE_ALL                                                        \
> +        sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
> +        push {r0-r12}; /* Save R0-R12 */                                \
> +                                                                        \
> +        mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
> +        str r11, [sp, #UREGS_pc];                                       \
> +                                                                        \
> +        str lr, [sp, #UREGS_lr];                                        \
> +                                                                        \
> +        add r11, sp, #UREGS_kernel_sizeof+4;                            \
> +        str r11, [sp, #UREGS_sp];                                       \
> +                                                                        \
> +        mrs r11, SPSR_hyp;                                              \
> +        str r11, [sp, #UREGS_cpsr];                                     \
> +        and r11, #PSR_MODE_MASK;                                        \
> +        cmp r11, #PSR_MODE_HYP;                                         \
> +        blne save_guest_regs
> 
>  save_guest_regs:
> -       ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
> -       str r11, [sp, #UREGS_sp]
> -       SAVE_ONE_BANKED(SP_usr)
> -       /* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
> -       SAVE_BANKED(svc)
> -       SAVE_BANKED(abt)
> -       SAVE_BANKED(und)
> -       SAVE_BANKED(irq)
> -       SAVE_BANKED(fiq)
> -       SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
> -       SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
> -       mov pc, lr
> +        ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
> +        str r11, [sp, #UREGS_sp]
> +        SAVE_ONE_BANKED(SP_usr)
> +        /* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
> +        SAVE_BANKED(svc)
> +        SAVE_BANKED(abt)
> +        SAVE_BANKED(und)
> +        SAVE_BANKED(irq)
> +        SAVE_BANKED(fiq)
> +        SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
> +        SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
> +        mov pc, lr
> 
> -#define DEFINE_TRAP_ENTRY(trap)                                                \
> -       ALIGN;                                                          \
> -trap_##trap:                                                           \
> -       SAVE_ALL;                                                       \
> -       cpsie i;        /* local_irq_enable */                          \
> -       adr lr, return_from_trap;                                       \
> -       mov r0, sp;                                                     \
> -       mov r11, sp;                                                    \
> -       bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> -       b do_trap_##trap
> +#define DEFINE_TRAP_ENTRY(trap)                                         \
> +        ALIGN;                                                          \
> +trap_##trap:                                                            \
> +        SAVE_ALL;                                                       \
> +        cpsie i;        /* local_irq_enable */                          \
> +        adr lr, return_from_trap;                                       \
> +        mov r0, sp;                                                     \
> +        mov r11, sp;                                                    \
> +        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> +        b do_trap_##trap
> 
> -#define DEFINE_TRAP_ENTRY_NOIRQ(trap)                                  \
> -       ALIGN;                                                          \
> -trap_##trap:                                                           \
> -       SAVE_ALL;                                                       \
> -       adr lr, return_from_trap;                                       \
> -       mov r0, sp;                                                     \
> -       mov r11, sp;                                                    \
> -       bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> -       b do_trap_##trap
> +#define DEFINE_TRAP_ENTRY_NOIRQ(trap)                                   \
> +        ALIGN;                                                          \
> +trap_##trap:                                                            \
> +        SAVE_ALL;                                                       \
> +        adr lr, return_from_trap;                                       \
> +        mov r0, sp;                                                     \
> +        mov r11, sp;                                                    \
> +        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> +        b do_trap_##trap
> 
>  .globl hyp_traps_vector
> -       .align 5
> +        .align 5
>  hyp_traps_vector:
> -       .word 0                         /* 0x00 - Reset */
> -       b trap_undefined_instruction    /* 0x04 - Undefined Instruction */
> -       b trap_supervisor_call          /* 0x08 - Supervisor Call */
> -       b trap_prefetch_abort           /* 0x0c - Prefetch Abort */
> -       b trap_data_abort               /* 0x10 - Data Abort */
> -       b trap_hypervisor               /* 0x14 - Hypervisor */
> -       b trap_irq                      /* 0x18 - IRQ */
> -       b trap_fiq                      /* 0x1c - FIQ */
> +        .word 0                         /* 0x00 - Reset */
> +        b trap_undefined_instruction    /* 0x04 - Undefined Instruction */
> +        b trap_supervisor_call          /* 0x08 - Supervisor Call */
> +        b trap_prefetch_abort           /* 0x0c - Prefetch Abort */
> +        b trap_data_abort               /* 0x10 - Data Abort */
> +        b trap_hypervisor               /* 0x14 - Hypervisor */
> +        b trap_irq                      /* 0x18 - IRQ */
> +        b trap_fiq                      /* 0x1c - FIQ */
> 
>  DEFINE_TRAP_ENTRY(undefined_instruction)
>  DEFINE_TRAP_ENTRY(supervisor_call)
> @@ -85,38 +85,38 @@ DEFINE_TRAP_ENTRY_NOIRQ(irq)
>  DEFINE_TRAP_ENTRY_NOIRQ(fiq)
> 
>  return_from_trap:
> -       mov sp, r11
> +        mov sp, r11
>  ENTRY(return_to_new_vcpu)
> -       ldr r11, [sp, #UREGS_cpsr]
> -       and r11, #PSR_MODE_MASK
> -       cmp r11, #PSR_MODE_HYP
> -       beq return_to_hypervisor
> -       /* Fall thru */
> +        ldr r11, [sp, #UREGS_cpsr]
> +        and r11, #PSR_MODE_MASK
> +        cmp r11, #PSR_MODE_HYP
> +        beq return_to_hypervisor
> +        /* Fall thru */
>  ENTRY(return_to_guest)
> -       mov r11, sp
> -       bic sp, #7 /* Align the stack pointer */
> -       bl leave_hypervisor_tail /* Disables interrupts on return */
> -       mov sp, r11
> -       RESTORE_ONE_BANKED(SP_usr)
> -       /* LR_usr is the same physical register as lr and is restored below */
> -       RESTORE_BANKED(svc)
> -       RESTORE_BANKED(abt)
> -       RESTORE_BANKED(und)
> -       RESTORE_BANKED(irq)
> -       RESTORE_BANKED(fiq)
> -       RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
> -       RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
> -       /* Fall thru */
> +        mov r11, sp
> +        bic sp, #7 /* Align the stack pointer */
> +        bl leave_hypervisor_tail /* Disables interrupts on return */
> +        mov sp, r11
> +        RESTORE_ONE_BANKED(SP_usr)
> +        /* LR_usr is the same physical register as lr and is restored below */
> +        RESTORE_BANKED(svc)
> +        RESTORE_BANKED(abt)
> +        RESTORE_BANKED(und)
> +        RESTORE_BANKED(irq)
> +        RESTORE_BANKED(fiq)
> +        RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
> +        RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
> +        /* Fall thru */
>  ENTRY(return_to_hypervisor)
> -       cpsid i
> -       ldr lr, [sp, #UREGS_lr]
> -       ldr r11, [sp, #UREGS_pc]
> -       msr ELR_hyp, r11
> -       ldr r11, [sp, #UREGS_cpsr]
> -       msr SPSR_hyp, r11
> -       pop {r0-r12}
> -       add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
> -       eret
> +        cpsid i
> +        ldr lr, [sp, #UREGS_lr]
> +        ldr r11, [sp, #UREGS_pc]
> +        msr ELR_hyp, r11
> +        ldr r11, [sp, #UREGS_cpsr]
> +        msr SPSR_hyp, r11
> +        pop {r0-r12}
> +        add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
> +        eret
> 
>  /*
>   * struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next)
> @@ -127,9 +127,15 @@ ENTRY(return_to_hypervisor)
>   * Returns prev in r0
>   */
>  ENTRY(__context_switch)
> -       add     ip, r0, #VCPU_arch_saved_context
> -       stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
> +        add     ip, r0, #VCPU_arch_saved_context
> +        stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
> 
> -       add     r4, r1, #VCPU_arch_saved_context
> -       ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
> +        add     r4, r1, #VCPU_arch_saved_context
> +        ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
> 
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> index 8e2e284..0d9a799 100644
> --- a/xen/arch/arm/head.S
> +++ b/xen/arch/arm/head.S
> @@ -36,366 +36,366 @@
>   * Clobbers r0-r3. */
>  #ifdef EARLY_UART_ADDRESS
>  #define PRINT(_s)       \
> -       adr   r0, 98f ; \
> -       bl    puts    ; \
> -       b     99f     ; \
> -98:    .asciz _s     ; \
> -       .align 2      ; \
> +        adr   r0, 98f ; \
> +        bl    puts    ; \
> +        b     99f     ; \
> +98:     .asciz _s     ; \
> +        .align 2      ; \
>  99:
>  #else
>  #define PRINT(s)
>  #endif
> 
> -       .arm
> +        .arm
> 
> -       /* This must be the very first address in the loaded image.
> -        * It should be linked at XEN_VIRT_START, and loaded at any
> -        * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
> -        * or the initial pagetable code below will need adjustment. */
> -       .global start
> +        /* This must be the very first address in the loaded image.
> +         * It should be linked at XEN_VIRT_START, and loaded at any
> +         * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
> +         * or the initial pagetable code below will need adjustment. */
> +        .global start
>  start:
> 
> -       /* zImage magic header, see:
> -        * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
> -        */
> -       .rept 8
> -       mov   r0, r0
> -       .endr
> -       b     past_zImage
> +        /* zImage magic header, see:
> +         * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
> +         */
> +        .rept 8
> +        mov   r0, r0
> +        .endr
> +        b     past_zImage
> 
> -       .word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
> -       .word 0x00000000             /* absolute load/run zImage address or
> -                                     * 0 for PiC */
> -       .word (_end - start)         /* zImage end address */
> +        .word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
> +        .word 0x00000000             /* absolute load/run zImage address or
> +                                      * 0 for PiC */
> +        .word (_end - start)         /* zImage end address */
> 
>  past_zImage:
> -       cpsid aif                    /* Disable all interrupts */
> +        cpsid aif                    /* Disable all interrupts */
> 
> -       /* Save the bootloader arguments in less-clobberable registers */
> -       mov   r7, r1                 /* r7 := ARM-linux machine type */
> -       mov   r8, r2                 /* r8 := ATAG base address */
> +        /* Save the bootloader arguments in less-clobberable registers */
> +        mov   r7, r1                 /* r7 := ARM-linux machine type */
> +        mov   r8, r2                 /* r8 := ATAG base address */
> 
> -       /* Find out where we are */
> -       ldr   r0, =start
> -       adr   r9, start              /* r9  := paddr (start) */
> -       sub   r10, r9, r0            /* r10 := phys-offset */
> +        /* Find out where we are */
> +        ldr   r0, =start
> +        adr   r9, start              /* r9  := paddr (start) */
> +        sub   r10, r9, r0            /* r10 := phys-offset */
> 
> -       /* Using the DTB in the .dtb section? */
> +        /* Using the DTB in the .dtb section? */
>  #ifdef CONFIG_DTB_FILE
> -       ldr   r8, =_sdtb
> -       add   r8, r10                /* r8 := paddr(DTB) */
> +        ldr   r8, =_sdtb
> +        add   r8, r10                /* r8 := paddr(DTB) */
>  #endif
> 
> -       /* Are we the boot CPU? */
> -       mov   r12, #0                /* r12 := CPU ID */
> -       mrc   CP32(r0, MPIDR)
> -       tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
> -       beq   boot_cpu
> -       tst   r0, #(1<<30)           /* Uniprocessor system? */
> -       bne   boot_cpu
> -       bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
> -       beq   boot_cpu               /* If we're CPU 0, boot now */
> -
> -       /* Non-boot CPUs wait here to be woken up one at a time. */
> -1:     dsb
> -       ldr   r0, =smp_up_cpu        /* VA of gate */
> -       add   r0, r0, r10            /* PA of gate */
> -       ldr   r1, [r0]               /* Which CPU is being booted? */
> -       teq   r1, r12                /* Is it us? */
> -       wfene
> -       bne   1b
> +        /* Are we the boot CPU? */
> +        mov   r12, #0                /* r12 := CPU ID */
> +        mrc   CP32(r0, MPIDR)
> +        tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
> +        beq   boot_cpu
> +        tst   r0, #(1<<30)           /* Uniprocessor system? */
> +        bne   boot_cpu
> +        bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
> +        beq   boot_cpu               /* If we're CPU 0, boot now */
> +
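For readers following the MPIDR checks above: a core boots straight away if the multiprocessor extensions are absent (bit 31 clear), if the system is uniprocessor (bit 30 set), or if its CPU ID, with the top flag byte masked off as `bics` does, is zero. A rough Python model of that decision (names here are illustrative, not from the Xen source):

```python
MPIDR_MP_EXT = 1 << 31    # multiprocessor extensions implemented
MPIDR_UP     = 1 << 30    # uniprocessor system
MPIDR_FLAGS  = 0xff << 24 # top byte holds flags, not the CPU ID

def is_boot_cpu(mpidr):
    """Model of the checks above: returns (boots_now, cpu_id).

    Mirrors the tst/beq, tst/bne and bics/beq sequence: no MP
    extensions or a uniprocessor system means this is the only CPU;
    otherwise mask off the flag byte and boot only if the ID is 0."""
    if not (mpidr & MPIDR_MP_EXT):     # tst r0, #(1<<31); beq boot_cpu
        return True, 0
    if mpidr & MPIDR_UP:               # tst r0, #(1<<30); bne boot_cpu
        return True, 0
    cpu_id = mpidr & ~MPIDR_FLAGS & 0xffffffff   # bics r12, r0, #(0xff<<24)
    return cpu_id == 0, cpu_id         # beq boot_cpu
```

Non-zero IDs fall through to the wfe wait loop below, exactly as in the assembly.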
> +        /* Non-boot CPUs wait here to be woken up one at a time. */
> +1:      dsb
> +        ldr   r0, =smp_up_cpu        /* VA of gate */
> +        add   r0, r0, r10            /* PA of gate */
> +        ldr   r1, [r0]               /* Which CPU is being booted? */
> +        teq   r1, r12                /* Is it us? */
> +        wfene
> +        bne   1b
> 
>  boot_cpu:
>  #ifdef EARLY_UART_ADDRESS
> -       ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
> -       teq   r12, #0                   /* CPU 0 sets up the UART too */
> -       bleq  init_uart
> -       PRINT("- CPU ")
> -       mov   r0, r12
> -       bl    putn
> -       PRINT(" booting -\r\n")
> +        ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
> +        teq   r12, #0                   /* CPU 0 sets up the UART too */
> +        bleq  init_uart
> +        PRINT("- CPU ")
> +        mov   r0, r12
> +        bl    putn
> +        PRINT(" booting -\r\n")
>  #endif
> 
> -       /* Wake up secondary cpus */
> -       teq   r12, #0
> -       bleq  kick_cpus
> -
> -       /* Check that this CPU has Hyp mode */
> -       mrc   CP32(r0, ID_PFR1)
> -       and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
> -       teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
> -       beq   1f
> -       PRINT("- CPU doesn't support the virtualization extensions -\r\n")
> -       b     fail
> +        /* Wake up secondary cpus */
> +        teq   r12, #0
> +        bleq  kick_cpus
> +
> +        /* Check that this CPU has Hyp mode */
> +        mrc   CP32(r0, ID_PFR1)
> +        and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
> +        teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
> +        beq   1f
> +        PRINT("- CPU doesn't support the virtualization extensions -\r\n")
> +        b     fail
>  1:
> -       /* Check if we're already in it */
> -       mrs   r0, cpsr
> -       and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
> -       teq   r0, #0x1a              /* Hyp Mode? */
> -       bne   1f
> -       PRINT("- Started in Hyp mode -\r\n")
> -       b     hyp
> +        /* Check if we're already in it */
> +        mrs   r0, cpsr
> +        and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
> +        teq   r0, #0x1a              /* Hyp Mode? */
> +        bne   1f
> +        PRINT("- Started in Hyp mode -\r\n")
> +        b     hyp
>  1:
> -       /* Otherwise, it must have been Secure Supervisor mode */
> -       mrc   CP32(r0, SCR)
> -       tst   r0, #0x1               /* Not-Secure bit set? */
> -       beq   1f
> -       PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
> -       b     fail
> +        /* Otherwise, it must have been Secure Supervisor mode */
> +        mrc   CP32(r0, SCR)
> +        tst   r0, #0x1               /* Not-Secure bit set? */
> +        beq   1f
> +        PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
> +        b     fail
>  1:
> -       /* OK, we're in Secure state. */
> -       PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
> -       ldr   r0, =enter_hyp_mode    /* VA of function */
> -       adr   lr, hyp                /* Set return address for call */
> -       add   pc, r0, r10            /* Call PA of function */
> +        /* OK, we're in Secure state. */
> +        PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
> +        ldr   r0, =enter_hyp_mode    /* VA of function */
> +        adr   lr, hyp                /* Set return address for call */
> +        add   pc, r0, r10            /* Call PA of function */
> 
>  hyp:
> 
> -       /* Zero BSS On the boot CPU to avoid nasty surprises */
> -       teq   r12, #0
> -       bne   skip_bss
> -
> -       PRINT("- Zero BSS -\r\n")
> -       ldr   r0, =__bss_start       /* Load start & end of bss */
> -       ldr   r1, =__bss_end
> -       add   r0, r0, r10            /* Apply physical offset */
> -       add   r1, r1, r10
> -
> -       mov   r2, #0
> -1:     str   r2, [r0], #4
> -       cmp   r0, r1
> -       blo   1b
> -
> -skip_bss:
> -
> -       PRINT("- Setting up control registers -\r\n")
> -
> -       /* Read CPU ID */
> -       mrc   CP32(r0, MIDR)
> -       ldr   r1, =(MIDR_MASK)
> -       and   r0, r0, r1
> -       /* Is this a Cortex A15? */
> -       ldr   r1, =(CORTEX_A15_ID)
> -       teq   r0, r1
> -       bleq  cortex_a15_init
> -
> -       /* Set up memory attribute type tables */
> -       ldr   r0, =MAIR0VAL
> -       ldr   r1, =MAIR1VAL
> -       mcr   CP32(r0, MAIR0)
> -       mcr   CP32(r1, MAIR1)
> -       mcr   CP32(r0, HMAIR0)
> -       mcr   CP32(r1, HMAIR1)
> -
> -       /* Set up the HTCR:
> -        * PT walks use Outer-Shareable accesses,
> -        * PT walks are write-back, no-write-allocate in both cache levels,
> -        * Full 32-bit address space goes through this table. */
> -       ldr   r0, =0x80002500
> -       mcr   CP32(r0, HTCR)
> -
> -       /* Set up the HSCTLR:
> -        * Exceptions in LE ARM,
> -        * Low-latency IRQs disabled,
> -        * Write-implies-XN disabled (for now),
> -        * D-cache disabled (for now),
> -        * I-cache enabled,
> -        * Alignment checking enabled,
> -        * MMU translation disabled (for now). */
> -       ldr   r0, =(HSCTLR_BASE|SCTLR_A)
> -       mcr   CP32(r0, HSCTLR)
> -
> -       /* Write Xen's PT's paddr into the HTTBR */
> -       ldr   r4, =xen_pgtable
> -       add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
> -       mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
> -       mcrr  CP64(r4, r5, HTTBR)
> -
> -       /* Non-boot CPUs don't need to rebuild the pagetable */
> -       teq   r12, #0
> -       bne   pt_ready
> -
> -       /* console fixmap */
> +        /* Zero BSS on the boot CPU to avoid nasty surprises */
> +        teq   r12, #0
> +        bne   skip_bss
> +
> +        PRINT("- Zero BSS -\r\n")
> +        ldr   r0, =__bss_start       /* Load start & end of bss */
> +        ldr   r1, =__bss_end
> +        add   r0, r0, r10            /* Apply physical offset */
> +        add   r1, r1, r10
> +
> +        mov   r2, #0
> +1:      str   r2, [r0], #4
> +        cmp   r0, r1
> +        blo   1b
> +
> +skip_bss:
> +
> +        PRINT("- Setting up control registers -\r\n")
> +
> +        /* Read CPU ID */
> +        mrc   CP32(r0, MIDR)
> +        ldr   r1, =(MIDR_MASK)
> +        and   r0, r0, r1
> +        /* Is this a Cortex A15? */
> +        ldr   r1, =(CORTEX_A15_ID)
> +        teq   r0, r1
> +        bleq  cortex_a15_init
> +
> +        /* Set up memory attribute type tables */
> +        ldr   r0, =MAIR0VAL
> +        ldr   r1, =MAIR1VAL
> +        mcr   CP32(r0, MAIR0)
> +        mcr   CP32(r1, MAIR1)
> +        mcr   CP32(r0, HMAIR0)
> +        mcr   CP32(r1, HMAIR1)
> +
> +        /* Set up the HTCR:
> +         * PT walks use Outer-Shareable accesses,
> +         * PT walks are write-back, no-write-allocate in both cache levels,
> +         * Full 32-bit address space goes through this table. */
> +        ldr   r0, =0x80002500
> +        mcr   CP32(r0, HTCR)
> +
> +        /* Set up the HSCTLR:
> +         * Exceptions in LE ARM,
> +         * Low-latency IRQs disabled,
> +         * Write-implies-XN disabled (for now),
> +         * D-cache disabled (for now),
> +         * I-cache enabled,
> +         * Alignment checking enabled,
> +         * MMU translation disabled (for now). */
> +        ldr   r0, =(HSCTLR_BASE|SCTLR_A)
> +        mcr   CP32(r0, HSCTLR)
> +
> +        /* Write Xen's PT's paddr into the HTTBR */
> +        ldr   r4, =xen_pgtable
> +        add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
> +        mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
> +        mcrr  CP64(r4, r5, HTTBR)
> +
> +        /* Non-boot CPUs don't need to rebuild the pagetable */
> +        teq   r12, #0
> +        bne   pt_ready
> +
> +        /* console fixmap */
>  #ifdef EARLY_UART_ADDRESS
> -       ldr   r1, =xen_fixmap
> -       add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
> -       mov   r3, #0
> -       lsr   r2, r11, #12
> -       lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> -       orr   r2, r2, #PT_UPPER(DEV_L3)
> -       orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> -       strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> +        ldr   r1, =xen_fixmap
> +        add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
> +        mov   r3, #0
> +        lsr   r2, r11, #12
> +        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> +        orr   r2, r2, #PT_UPPER(DEV_L3)
> +        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> +        strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>  #endif
> 
> -       /* Build the baseline idle pagetable's first-level entries */
> -       ldr   r1, =xen_second
> -       add   r1, r1, r10            /* r1 := paddr (xen_second) */
> -       mov   r3, #0x0
> -       orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
> -       orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
> -       strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
> -       add   r2, r2, #0x1000
> -       strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
> -       add   r2, r2, #0x1000
> -       strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
> -       add   r2, r2, #0x1000
> -       strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
> -
> -       /* Now set up the second-level entries */
> -       orr   r2, r9, #PT_UPPER(MEM)
> -       orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
> -       mov   r4, r9, lsr #18        /* Slot for paddr(start) */
> -       strd  r2, r3, [r1, r4]       /* Map Xen there */
> -       ldr   r4, =start
> -       lsr   r4, #18                /* Slot for vaddr(start) */
> -       strd  r2, r3, [r1, r4]       /* Map Xen there too */
> -
> -       /* xen_fixmap pagetable */
> -       ldr   r2, =xen_fixmap
> -       add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
> -       orr   r2, r2, #PT_UPPER(PT)
> -       orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
> -       add   r4, r4, #8
> -       strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
> -
> -       mov   r3, #0x0
> -       lsr   r2, r8, #21
> -       lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
> -       orr   r2, r2, #PT_UPPER(MEM)
> -       orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
> -       add   r4, r4, #8
> -       strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
> +        /* Build the baseline idle pagetable's first-level entries */
> +        ldr   r1, =xen_second
> +        add   r1, r1, r10            /* r1 := paddr (xen_second) */
> +        mov   r3, #0x0
> +        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
> +        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
> +        strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
> +        add   r2, r2, #0x1000
> +        strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
> +        add   r2, r2, #0x1000
> +        strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
> +        add   r2, r2, #0x1000
> +        strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
> +
> +        /* Now set up the second-level entries */
> +        orr   r2, r9, #PT_UPPER(MEM)
> +        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
> +        mov   r4, r9, lsr #18        /* Slot for paddr(start) */
> +        strd  r2, r3, [r1, r4]       /* Map Xen there */
> +        ldr   r4, =start
> +        lsr   r4, #18                /* Slot for vaddr(start) */
> +        strd  r2, r3, [r1, r4]       /* Map Xen there too */
> +
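A note on the `lsr #18` above, since it looks odd at first glance: each second-level slot covers a 2MB superpage and each LPAE entry is 8 bytes, so the byte offset of an address's entry is `(addr >> 21) * 8`, which collapses to `addr >> 18` when the address is 2MB-aligned (as `paddr(start)` is). A quick sketch (function name is illustrative):

```python
def second_level_slot_offset(addr):
    """Byte offset into xen_second of the 8-byte entry mapping the
    2MB superpage containing addr.  For a 2MB-aligned address this
    equals addr >> 18, which is what the single 'lsr #18' computes."""
    slot = (addr & 0xffffffff) >> 21   # which 2MB superpage
    return slot * 8                    # 8 bytes per LPAE descriptor
```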
> +        /* xen_fixmap pagetable */
> +        ldr   r2, =xen_fixmap
> +        add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
> +        orr   r2, r2, #PT_UPPER(PT)
> +        orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
> +        add   r4, r4, #8
> +        strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
> +
> +        mov   r3, #0x0
> +        lsr   r2, r8, #21
> +        lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
> +        orr   r2, r2, #PT_UPPER(MEM)
> +        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
> +        add   r4, r4, #8
> +        strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
> 
>  pt_ready:
> -       PRINT("- Turning on paging -\r\n")
> -
> -       ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
> -       mrc   CP32(r0, HSCTLR)
> -       orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
> -       dsb                          /* Flush PTE writes and finish reads */
> -       mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
> -       isb                          /* Now, flush the icache */
> -       mov   pc, r1                 /* Get a proper vaddr into PC */
> +        PRINT("- Turning on paging -\r\n")
> +
> +        ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
> +        mrc   CP32(r0, HSCTLR)
> +        orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
> +        dsb                          /* Flush PTE writes and finish reads */
> +        mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
> +        isb                          /* Now, flush the icache */
> +        mov   pc, r1                 /* Get a proper vaddr into PC */
>  paging:
> 
> 
>  #ifdef EARLY_UART_ADDRESS
> -       /* Use a virtual address to access the UART. */
> -       ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
> +        /* Use a virtual address to access the UART. */
> +        ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
>  #endif
> 
> -       PRINT("- Ready -\r\n")
> -
> -       /* The boot CPU should go straight into C now */
> -       teq   r12, #0
> -       beq   launch
> -
> -       /* Non-boot CPUs need to move on to the relocated pagetables */
> -       mov   r0, #0
> -       ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
> -       add   r4, r4, r10            /* PA of it */
> -       ldrd  r4, r5, [r4]           /* Actual value */
> -       dsb
> -       mcrr  CP64(r4, r5, HTTBR)
> -       dsb
> -       isb
> -       mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
> -       mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
> -       mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
> -       dsb                          /* Ensure completion of TLB+BP flush */
> -       isb
> -
> -       /* Non-boot CPUs report that they've got this far */
> -       ldr   r0, =ready_cpus
> -1:     ldrex r1, [r0]               /*            { read # of ready CPUs } */
> -       add   r1, r1, #1             /* Atomically { ++                   } */
> -       strex r2, r1, [r0]           /*            { writeback            } */
> -       teq   r2, #0
> -       bne   1b
> -       dsb
> -       mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
> -       dsb
> -
> -       /* Here, the non-boot CPUs must wait again -- they're now running on
> -        * the boot CPU's pagetables so it's safe for the boot CPU to
> -        * overwrite the non-relocated copy of Xen.  Once it's done that,
> -        * and brought up the memory allocator, non-boot CPUs can get their
> -        * own stacks and enter C. */
> -1:     wfe
> -       dsb
> -       ldr   r0, =smp_up_cpu
> -       ldr   r1, [r0]               /* Which CPU is being booted? */
> -       teq   r1, r12                /* Is it us? */
> -       bne   1b
> -
> -launch:
> -       ldr   r0, =init_stack        /* Find the boot-time stack */
> -       ldr   sp, [r0]
> -       add   sp, #STACK_SIZE        /* (which grows down from the top). */
> -       sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
> -       mov   r0, r10                /* Marshal args: - phys_offset */
> -       mov   r1, r7                 /*               - machine type */
> -       mov   r2, r8                 /*               - ATAG address */
> -       movs  r3, r12                /*               - CPU ID */
> -       beq   start_xen              /* and disappear into the land of C */
> -       b     start_secondary        /* (to the appropriate entry point) */
> +        PRINT("- Ready -\r\n")
> +
> +        /* The boot CPU should go straight into C now */
> +        teq   r12, #0
> +        beq   launch
> +
> +        /* Non-boot CPUs need to move on to the relocated pagetables */
> +        mov   r0, #0
> +        ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
> +        add   r4, r4, r10            /* PA of it */
> +        ldrd  r4, r5, [r4]           /* Actual value */
> +        dsb
> +        mcrr  CP64(r4, r5, HTTBR)
> +        dsb
> +        isb
> +        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
> +        mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
> +        mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
> +        dsb                          /* Ensure completion of TLB+BP flush */
> +        isb
> +
> +        /* Non-boot CPUs report that they've got this far */
> +        ldr   r0, =ready_cpus
> +1:      ldrex r1, [r0]               /*            { read # of ready CPUs } */
> +        add   r1, r1, #1             /* Atomically { ++                   } */
> +        strex r2, r1, [r0]           /*            { writeback            } */
> +        teq   r2, #0
> +        bne   1b
> +        dsb
> +        mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
> +        dsb
> +
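The ldrex/strex retry loop above is ARM's load-exclusive/store-exclusive idiom: the store only succeeds (status register zero) if nothing touched the word since the exclusive load, otherwise the code loops back. The retry shape can be modelled roughly in Python like this (the `ExclusiveWord` class is a toy stand-in for the exclusive monitor, not a real API):

```python
import threading

class ExclusiveWord:
    """Toy model of one word of memory with LL/SC semantics."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load_exclusive(self):                 # ldrex r1, [r0]
        return self._value

    def store_exclusive(self, expected, new): # strex r2, r1, [r0]
        with self._lock:
            if self._value != expected:
                return 1                      # non-zero status: lost the race
            self._value = new
            return 0                          # status 0: store succeeded

def atomic_increment(word):
    """Mirror of the loop above: read, bump, retry on interference."""
    while True:
        old = word.load_exclusive()                      # ldrex
        if word.store_exclusive(old, old + 1) == 0:      # strex; teq r2, #0
            return old + 1                               # fall out of "bne 1b"
```

On ARMv7 this is the standard way to get an atomic read-modify-write without a lock.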
> +        /* Here, the non-boot CPUs must wait again -- they're now running on
> +         * the boot CPU's pagetables so it's safe for the boot CPU to
> +         * overwrite the non-relocated copy of Xen.  Once it's done that,
> +         * and brought up the memory allocator, non-boot CPUs can get their
> +         * own stacks and enter C. */
> +1:      wfe
> +        dsb
> +        ldr   r0, =smp_up_cpu
> +        ldr   r1, [r0]               /* Which CPU is being booted? */
> +        teq   r1, r12                /* Is it us? */
> +        bne   1b
> +
> +launch:
> +        ldr   r0, =init_stack        /* Find the boot-time stack */
> +        ldr   sp, [r0]
> +        add   sp, #STACK_SIZE        /* (which grows down from the top). */
> +        sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
> +        mov   r0, r10                /* Marshal args: - phys_offset */
> +        mov   r1, r7                 /*               - machine type */
> +        mov   r2, r8                 /*               - ATAG address */
> +        movs  r3, r12                /*               - CPU ID */
> +        beq   start_xen              /* and disappear into the land of C */
> +        b     start_secondary        /* (to the appropriate entry point) */
> 
>  /* Fail-stop
>   * r0: string explaining why */
> -fail:  PRINT("- Boot failed -\r\n")
> -1:     wfe
> -       b     1b
> +fail:   PRINT("- Boot failed -\r\n")
> +1:      wfe
> +        b     1b
> 
>  #ifdef EARLY_UART_ADDRESS
> 
>  /* Bring up the UART. Specific to the PL011 UART.
>   * Clobbers r0-r2 */
>  init_uart:
> -       mov   r1, #0x0
> -       str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
> -       mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
> -       str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
> -       mov   r1, #0x60              /* 8n1 */
> -       str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
> -       ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
> -       str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
> -       adr   r0, 1f
> -       b     puts
> -1:     .asciz "- UART enabled -\r\n"
> -       .align 4
> +        mov   r1, #0x0
> +        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
> +        mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
> +        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
> +        mov   r1, #0x60              /* 8n1 */
> +        str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
> +        ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
> +        str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
> +        adr   r0, 1f
> +        b     puts
> +1:      .asciz "- UART enabled -\r\n"
> +        .align 4
> 
>  /* Print early debug messages.  Specific to the PL011 UART.
>   * r0: Nul-terminated string to print.
>   * Clobbers r0-r2 */
>  puts:
> -       ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> -       tst   r2, #0x8               /* Check BUSY bit */
> -       bne   puts                   /* Wait for the UART to be ready */
> -       ldrb  r2, [r0], #1           /* Load next char */
> -       teq   r2, #0                 /* Exit on nul */
> -       moveq pc, lr
> -       str   r2, [r11]              /* -> UARTDR (Data Register) */
> -       b     puts
> +        ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> +        tst   r2, #0x8               /* Check BUSY bit */
> +        bne   puts                   /* Wait for the UART to be ready */
> +        ldrb  r2, [r0], #1           /* Load next char */
> +        teq   r2, #0                 /* Exit on nul */
> +        moveq pc, lr
> +        str   r2, [r11]              /* -> UARTDR (Data Register) */
> +        b     puts
> 
>  /* Print a 32-bit number in hex.  Specific to the PL011 UART.
>   * r0: Number to print.
>   * clobbers r0-r3 */
>  putn:
> -       adr   r1, hex
> -       mov   r3, #8
> -1:     ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> -       tst   r2, #0x8               /* Check BUSY bit */
> -       bne   1b                     /* Wait for the UART to be ready */
> -       and   r2, r0, #0xf0000000    /* Mask off the top nybble */
> -       ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
> -       str   r2, [r11]              /* -> UARTDR (Data Register) */
> -       lsl   r0, #4                 /* Roll it through one nybble at a time */
> -       subs  r3, r3, #1
> -       bne   1b
> -       mov   pc, lr
> -
> -hex:   .ascii "0123456789abcdef"
> -       .align 2
> +        adr   r1, hex
> +        mov   r3, #8
> +1:      ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> +        tst   r2, #0x8               /* Check BUSY bit */
> +        bne   1b                     /* Wait for the UART to be ready */
> +        and   r2, r0, #0xf0000000    /* Mask off the top nybble */
> +        ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
> +        str   r2, [r11]              /* -> UARTDR (Data Register) */
> +        lsl   r0, #4                 /* Roll it through one nybble at a time */
> +        subs  r3, r3, #1
> +        bne   1b
> +        mov   pc, lr
> +
> +hex:    .ascii "0123456789abcdef"
> +        .align 2
> 
>  #else  /* EARLY_UART_ADDRESS */
> 
> @@ -403,6 +403,13 @@ init_uart:
>  .global early_puts
>  early_puts:
>  puts:
> -putn:  mov   pc, lr
> +putn:   mov   pc, lr
> 
>  #endif /* EARLY_UART_ADDRESS */
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
> index 7c3b357..3dba38d 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/mode_switch.S
> @@ -28,25 +28,25 @@
>  /* wake up secondary cpus */
>  .globl kick_cpus
>  kick_cpus:
> -       /* write start paddr to v2m sysreg FLAGSSET register */
> -       ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
> -       dsb
> -       mov   r2, #0xffffffff
> -       str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
> -       dsb
> -       ldr   r2, =start
> -       add   r2, r2, r10
> -       str   r2, [r0, #(V2M_SYS_FLAGSSET)]
> -       dsb
> -       /* send an interrupt */
> -       ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
> -       mov   r2, #0x1
> -       str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
> -       mov   r2, #0xfe0000
> -       str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
> -       dsb
> -       str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
> -       mov   pc, lr
> +        /* write start paddr to v2m sysreg FLAGSSET register */
> +        ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
> +        dsb
> +        mov   r2, #0xffffffff
> +        str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
> +        dsb
> +        ldr   r2, =start
> +        add   r2, r2, r10
> +        str   r2, [r0, #(V2M_SYS_FLAGSSET)]
> +        dsb
> +        /* send an interrupt */
> +        ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
> +        mov   r2, #0x1
> +        str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
> +        mov   r2, #0xfe0000
> +        str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
> +        dsb
> +        str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
> +        mov   pc, lr
> 
> 
>  /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
> @@ -61,54 +61,61 @@ kick_cpus:
> 
>  .globl enter_hyp_mode
>  enter_hyp_mode:
> -       mov   r3, lr                 /* Put return address in non-banked reg */
> -       cpsid aif, #0x16             /* Enter Monitor mode */
> -       mrc   CP32(r0, SCR)
> -       orr   r0, r0, #0x100         /* Set HCE */
> -       orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
> -       bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
> -       mcr   CP32(r0, SCR)
> -       /* Ugly: the system timer's frequency register is only
> -        * programmable in Secure state.  Since we don't know where its
> -        * memory-mapped control registers live, we can't find out the
> -        * right frequency.  Use the VE model's default frequency here. */
> -       ldr   r0, =0x5f5e100         /* 100 MHz */
> -       mcr   CP32(r0, CNTFRQ)
> -       ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
> -       mcr   CP32(r0, NSACR)
> -       mov   r0, #GIC_BASE_ADDRESS
> -       add   r0, r0, #GIC_DR_OFFSET
> -       /* Disable the GIC distributor, on the boot CPU only */
> -       mov   r1, #0
> -       teq   r12, #0                /* Is this the boot CPU? */
> -       streq r1, [r0]
> -       /* Continuing ugliness: Set up the GIC so NS state owns interrupts,
> -        * The first 32 interrupts (SGIs & PPIs) must be configured on all
> -        * CPUs while the remainder are SPIs and only need to be done one, on
> -        * the boot CPU. */
> -       add   r0, r0, #0x80          /* GICD_IGROUP0 */
> -       mov   r2, #0xffffffff        /* All interrupts to group 1 */
> -       teq   r12, #0                /* Boot CPU? */
> -       str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
> -       streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
> -       streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
> -       /* Disable the GIC CPU interface on all processors */
> -       mov   r0, #GIC_BASE_ADDRESS
> -       add   r0, r0, #GIC_CR_OFFSET
> -       mov   r1, #0
> -       str   r1, [r0]
> -       /* Must drop priority mask below 0x80 before entering NS state */
> -       ldr   r1, =0xff
> -       str   r1, [r0, #0x4]         /* -> GICC_PMR */
> -       /* Reset a few config registers */
> -       mov   r0, #0
> -       mcr   CP32(r0, FCSEIDR)
> -       mcr   CP32(r0, CONTEXTIDR)
> -       /* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
> -       ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
> -       mcr   CP32(r1, NSACR)
> +        mov   r3, lr                 /* Put return address in non-banked reg */
> +        cpsid aif, #0x16             /* Enter Monitor mode */
> +        mrc   CP32(r0, SCR)
> +        orr   r0, r0, #0x100         /* Set HCE */
> +        orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
> +        bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
> +        mcr   CP32(r0, SCR)
> +        /* Ugly: the system timer's frequency register is only
> +         * programmable in Secure state.  Since we don't know where its
> +         * memory-mapped control registers live, we can't find out the
> +         * right frequency.  Use the VE model's default frequency here. */
> +        ldr   r0, =0x5f5e100         /* 100 MHz */
> +        mcr   CP32(r0, CNTFRQ)
> +        ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
> +        mcr   CP32(r0, NSACR)
> +        mov   r0, #GIC_BASE_ADDRESS
> +        add   r0, r0, #GIC_DR_OFFSET
> +        /* Disable the GIC distributor, on the boot CPU only */
> +        mov   r1, #0
> +        teq   r12, #0                /* Is this the boot CPU? */
> +        streq r1, [r0]
> +        /* Continuing ugliness: Set up the GIC so NS state owns interrupts,
> +         * The first 32 interrupts (SGIs & PPIs) must be configured on all
> +         * CPUs while the remainder are SPIs and only need to be done one, on
> +         * the boot CPU. */
> +        add   r0, r0, #0x80          /* GICD_IGROUP0 */
> +        mov   r2, #0xffffffff        /* All interrupts to group 1 */
> +        teq   r12, #0                /* Boot CPU? */
> +        str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
> +        streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
> +        streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
> +        /* Disable the GIC CPU interface on all processors */
> +        mov   r0, #GIC_BASE_ADDRESS
> +        add   r0, r0, #GIC_CR_OFFSET
> +        mov   r1, #0
> +        str   r1, [r0]
> +        /* Must drop priority mask below 0x80 before entering NS state */
> +        ldr   r1, =0xff
> +        str   r1, [r0, #0x4]         /* -> GICC_PMR */
> +        /* Reset a few config registers */
> +        mov   r0, #0
> +        mcr   CP32(r0, FCSEIDR)
> +        mcr   CP32(r0, CONTEXTIDR)
> +        /* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
> +        ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
> +        mcr   CP32(r1, NSACR)
> 
> -       mrs   r0, cpsr               /* Copy the CPSR */
> -       add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
> -       msr   spsr_cxsf, r0          /* into the SPSR */
> -       movs  pc, r3                 /* Exception-return into Hyp mode */
> +        mrs   r0, cpsr               /* Copy the CPSR */
> +        add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
> +        msr   spsr_cxsf, r0          /* into the SPSR */
> +        movs  pc, r3                 /* Exception-return into Hyp mode */
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/proc-ca15.S
> index 5a5bf64..dcdd42e 100644
> --- a/xen/arch/arm/proc-ca15.S
> +++ b/xen/arch/arm/proc-ca15.S
> @@ -21,8 +21,15 @@
> 
>  .globl cortex_a15_init
>  cortex_a15_init:
> -       /* Set up the SMP bit in ACTLR */
> -       mrc   CP32(r0, ACTLR)
> -       orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit*/
> -       mcr   CP32(r0, ACTLR)
> -       mov   pc, lr
> +        /* Set up the SMP bit in ACTLR */
> +        mrc   CP32(r0, ACTLR)
> +        orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit */
> +        mcr   CP32(r0, ACTLR)
> +        mov   pc, lr
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:18:56 2012
Date: Tue, 18 Dec 2012 18:18:23 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-2-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181817580.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-2-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/5] xen: arm: remove hard tabs from asm
	code.

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Run expand(1) over xen/arch/arm/.../*.S
> 
> Add emacs local vars block.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
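
For reference, the conversion the commit message describes can be
reproduced with something like the sketch below (the exact invocation is
not shown in the mail; the `-t 8` tab stop and the temp-file shuffle are
assumptions, since expand(1) does not edit in place):

```shell
# Replace hard tabs with spaces in an assembler file using expand(1).
# Tab stops every 8 columns preserve the original visual layout.
src=/tmp/example.S
printf 'putn:\tmov   pc, lr\n' > "$src"                # file with a hard tab
expand -t 8 "$src" > "$src.notabs" && mv "$src.notabs" "$src"
# Verify no hard tabs remain:
! grep -q "$(printf '\t')" "$src" && echo "no tabs"
```

In a tree the same loop would run over xen/arch/arm/*.S, with the emacs
local-variables block appended so future edits stay tab-free.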

>  xen/arch/arm/entry.S       |  194 +++++++-------
>  xen/arch/arm/head.S        |  613 ++++++++++++++++++++++----------------------
>  xen/arch/arm/mode_switch.S |  145 ++++++-----
>  xen/arch/arm/proc-ca15.S   |   17 +-
>  4 files changed, 498 insertions(+), 471 deletions(-)
> 
> diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
> index 83793c2..cbd1c48 100644
> --- a/xen/arch/arm/entry.S
> +++ b/xen/arch/arm/entry.S
> @@ -2,79 +2,79 @@
>  #include <asm/asm_defns.h>
>  #include <public/xen.h>
> 
> -#define SAVE_ONE_BANKED(reg)   mrs r11, reg; str r11, [sp, #UREGS_##reg]
> -#define RESTORE_ONE_BANKED(reg)        ldr r11, [sp, #UREGS_##reg]; msr reg, r11
> +#define SAVE_ONE_BANKED(reg)    mrs r11, reg; str r11, [sp, #UREGS_##reg]
> +#define RESTORE_ONE_BANKED(reg) ldr r11, [sp, #UREGS_##reg]; msr reg, r11
> 
>  #define SAVE_BANKED(mode) \
> -       SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
> +        SAVE_ONE_BANKED(SP_##mode) ; SAVE_ONE_BANKED(LR_##mode) ; SAVE_ONE_BANKED(SPSR_##mode)
> 
>  #define RESTORE_BANKED(mode) \
> -       RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
> +        RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
> 
> -#define SAVE_ALL                                                       \
> -       sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
> -       push {r0-r12}; /* Save R0-R12 */                                \
> -                                                                       \
> -       mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
> -       str r11, [sp, #UREGS_pc];                                       \
> -                                                                       \
> -       str lr, [sp, #UREGS_lr];                                        \
> -                                                                       \
> -       add r11, sp, #UREGS_kernel_sizeof+4;                            \
> -       str r11, [sp, #UREGS_sp];                                       \
> -                                                                       \
> -       mrs r11, SPSR_hyp;                                              \
> -       str r11, [sp, #UREGS_cpsr];                                     \
> -       and r11, #PSR_MODE_MASK;                                        \
> -       cmp r11, #PSR_MODE_HYP;                                         \
> -       blne save_guest_regs
> +#define SAVE_ALL                                                        \
> +        sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
> +        push {r0-r12}; /* Save R0-R12 */                                \
> +                                                                        \
> +        mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
> +        str r11, [sp, #UREGS_pc];                                       \
> +                                                                        \
> +        str lr, [sp, #UREGS_lr];                                        \
> +                                                                        \
> +        add r11, sp, #UREGS_kernel_sizeof+4;                            \
> +        str r11, [sp, #UREGS_sp];                                       \
> +                                                                        \
> +        mrs r11, SPSR_hyp;                                              \
> +        str r11, [sp, #UREGS_cpsr];                                     \
> +        and r11, #PSR_MODE_MASK;                                        \
> +        cmp r11, #PSR_MODE_HYP;                                         \
> +        blne save_guest_regs
> 
>  save_guest_regs:
> -       ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
> -       str r11, [sp, #UREGS_sp]
> -       SAVE_ONE_BANKED(SP_usr)
> -       /* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
> -       SAVE_BANKED(svc)
> -       SAVE_BANKED(abt)
> -       SAVE_BANKED(und)
> -       SAVE_BANKED(irq)
> -       SAVE_BANKED(fiq)
> -       SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
> -       SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
> -       mov pc, lr
> +        ldr r11, =0xffffffff  /* Clobber SP which is only valid for hypervisor frames. */
> +        str r11, [sp, #UREGS_sp]
> +        SAVE_ONE_BANKED(SP_usr)
> +        /* LR_usr is the same physical register as lr and is saved in SAVE_ALL */
> +        SAVE_BANKED(svc)
> +        SAVE_BANKED(abt)
> +        SAVE_BANKED(und)
> +        SAVE_BANKED(irq)
> +        SAVE_BANKED(fiq)
> +        SAVE_ONE_BANKED(R8_fiq); SAVE_ONE_BANKED(R9_fiq); SAVE_ONE_BANKED(R10_fiq)
> +        SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
> +        mov pc, lr
> 
> -#define DEFINE_TRAP_ENTRY(trap)                                                \
> -       ALIGN;                                                          \
> -trap_##trap:                                                           \
> -       SAVE_ALL;                                                       \
> -       cpsie i;        /* local_irq_enable */                          \
> -       adr lr, return_from_trap;                                       \
> -       mov r0, sp;                                                     \
> -       mov r11, sp;                                                    \
> -       bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> -       b do_trap_##trap
> +#define DEFINE_TRAP_ENTRY(trap)                                         \
> +        ALIGN;                                                          \
> +trap_##trap:                                                            \
> +        SAVE_ALL;                                                       \
> +        cpsie i;        /* local_irq_enable */                          \
> +        adr lr, return_from_trap;                                       \
> +        mov r0, sp;                                                     \
> +        mov r11, sp;                                                    \
> +        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> +        b do_trap_##trap
> 
> -#define DEFINE_TRAP_ENTRY_NOIRQ(trap)                                  \
> -       ALIGN;                                                          \
> -trap_##trap:                                                           \
> -       SAVE_ALL;                                                       \
> -       adr lr, return_from_trap;                                       \
> -       mov r0, sp;                                                     \
> -       mov r11, sp;                                                    \
> -       bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> -       b do_trap_##trap
> +#define DEFINE_TRAP_ENTRY_NOIRQ(trap)                                   \
> +        ALIGN;                                                          \
> +trap_##trap:                                                            \
> +        SAVE_ALL;                                                       \
> +        adr lr, return_from_trap;                                       \
> +        mov r0, sp;                                                     \
> +        mov r11, sp;                                                    \
> +        bic sp, #7; /* Align the stack pointer (noop on guest trap) */  \
> +        b do_trap_##trap
> 
>  .globl hyp_traps_vector
> -       .align 5
> +        .align 5
>  hyp_traps_vector:
> -       .word 0                         /* 0x00 - Reset */
> -       b trap_undefined_instruction    /* 0x04 - Undefined Instruction */
> -       b trap_supervisor_call          /* 0x08 - Supervisor Call */
> -       b trap_prefetch_abort           /* 0x0c - Prefetch Abort */
> -       b trap_data_abort               /* 0x10 - Data Abort */
> -       b trap_hypervisor               /* 0x14 - Hypervisor */
> -       b trap_irq                      /* 0x18 - IRQ */
> -       b trap_fiq                      /* 0x1c - FIQ */
> +        .word 0                         /* 0x00 - Reset */
> +        b trap_undefined_instruction    /* 0x04 - Undefined Instruction */
> +        b trap_supervisor_call          /* 0x08 - Supervisor Call */
> +        b trap_prefetch_abort           /* 0x0c - Prefetch Abort */
> +        b trap_data_abort               /* 0x10 - Data Abort */
> +        b trap_hypervisor               /* 0x14 - Hypervisor */
> +        b trap_irq                      /* 0x18 - IRQ */
> +        b trap_fiq                      /* 0x1c - FIQ */
> 
>  DEFINE_TRAP_ENTRY(undefined_instruction)
>  DEFINE_TRAP_ENTRY(supervisor_call)
> @@ -85,38 +85,38 @@ DEFINE_TRAP_ENTRY_NOIRQ(irq)
>  DEFINE_TRAP_ENTRY_NOIRQ(fiq)
> 
>  return_from_trap:
> -       mov sp, r11
> +        mov sp, r11
>  ENTRY(return_to_new_vcpu)
> -       ldr r11, [sp, #UREGS_cpsr]
> -       and r11, #PSR_MODE_MASK
> -       cmp r11, #PSR_MODE_HYP
> -       beq return_to_hypervisor
> -       /* Fall thru */
> +        ldr r11, [sp, #UREGS_cpsr]
> +        and r11, #PSR_MODE_MASK
> +        cmp r11, #PSR_MODE_HYP
> +        beq return_to_hypervisor
> +        /* Fall thru */
>  ENTRY(return_to_guest)
> -       mov r11, sp
> -       bic sp, #7 /* Align the stack pointer */
> -       bl leave_hypervisor_tail /* Disables interrupts on return */
> -       mov sp, r11
> -       RESTORE_ONE_BANKED(SP_usr)
> -       /* LR_usr is the same physical register as lr and is restored below */
> -       RESTORE_BANKED(svc)
> -       RESTORE_BANKED(abt)
> -       RESTORE_BANKED(und)
> -       RESTORE_BANKED(irq)
> -       RESTORE_BANKED(fiq)
> -       RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
> -       RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
> -       /* Fall thru */
> +        mov r11, sp
> +        bic sp, #7 /* Align the stack pointer */
> +        bl leave_hypervisor_tail /* Disables interrupts on return */
> +        mov sp, r11
> +        RESTORE_ONE_BANKED(SP_usr)
> +        /* LR_usr is the same physical register as lr and is restored below */
> +        RESTORE_BANKED(svc)
> +        RESTORE_BANKED(abt)
> +        RESTORE_BANKED(und)
> +        RESTORE_BANKED(irq)
> +        RESTORE_BANKED(fiq)
> +        RESTORE_ONE_BANKED(R8_fiq); RESTORE_ONE_BANKED(R9_fiq); RESTORE_ONE_BANKED(R10_fiq)
> +        RESTORE_ONE_BANKED(R11_fiq); RESTORE_ONE_BANKED(R12_fiq);
> +        /* Fall thru */
>  ENTRY(return_to_hypervisor)
> -       cpsid i
> -       ldr lr, [sp, #UREGS_lr]
> -       ldr r11, [sp, #UREGS_pc]
> -       msr ELR_hyp, r11
> -       ldr r11, [sp, #UREGS_cpsr]
> -       msr SPSR_hyp, r11
> -       pop {r0-r12}
> -       add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
> -       eret
> +        cpsid i
> +        ldr lr, [sp, #UREGS_lr]
> +        ldr r11, [sp, #UREGS_pc]
> +        msr ELR_hyp, r11
> +        ldr r11, [sp, #UREGS_cpsr]
> +        msr SPSR_hyp, r11
> +        pop {r0-r12}
> +        add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
> +        eret
> 
>  /*
>   * struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next)
> @@ -127,9 +127,15 @@ ENTRY(return_to_hypervisor)
>   * Returns prev in r0
>   */
>  ENTRY(__context_switch)
> -       add     ip, r0, #VCPU_arch_saved_context
> -       stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
> +        add     ip, r0, #VCPU_arch_saved_context
> +        stmia   ip!, {r4 - sl, fp, sp, lr}      /* Save register state */
> 
> -       add     r4, r1, #VCPU_arch_saved_context
> -       ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
> +        add     r4, r1, #VCPU_arch_saved_context
> +        ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
> 
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> index 8e2e284..0d9a799 100644
> --- a/xen/arch/arm/head.S
> +++ b/xen/arch/arm/head.S
> @@ -36,366 +36,366 @@
>   * Clobbers r0-r3. */
>  #ifdef EARLY_UART_ADDRESS
>  #define PRINT(_s)       \
> -       adr   r0, 98f ; \
> -       bl    puts    ; \
> -       b     99f     ; \
> -98:    .asciz _s     ; \
> -       .align 2      ; \
> +        adr   r0, 98f ; \
> +        bl    puts    ; \
> +        b     99f     ; \
> +98:     .asciz _s     ; \
> +        .align 2      ; \
>  99:
>  #else
>  #define PRINT(s)
>  #endif
> 
> -       .arm
> +        .arm
> 
> -       /* This must be the very first address in the loaded image.
> -        * It should be linked at XEN_VIRT_START, and loaded at any
> -        * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
> -        * or the initial pagetable code below will need adjustment. */
> -       .global start
> +        /* This must be the very first address in the loaded image.
> +         * It should be linked at XEN_VIRT_START, and loaded at any
> +         * 2MB-aligned address.  All of text+data+bss must fit in 2MB,
> +         * or the initial pagetable code below will need adjustment. */
> +        .global start
>  start:
> 
> -       /* zImage magic header, see:
> -        * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
> -        */
> -       .rept 8
> -       mov   r0, r0
> -       .endr
> -       b     past_zImage
> +        /* zImage magic header, see:
> +         * http://www.simtec.co.uk/products/SWLINUX/files/booting_article.html#d0e309
> +         */
> +        .rept 8
> +        mov   r0, r0
> +        .endr
> +        b     past_zImage
> 
> -       .word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
> -       .word 0x00000000             /* absolute load/run zImage address or
> -                                     * 0 for PiC */
> -       .word (_end - start)         /* zImage end address */
> +        .word ZIMAGE_MAGIC_NUMBER    /* Magic numbers to help the loader */
> +        .word 0x00000000             /* absolute load/run zImage address or
> +                                      * 0 for PiC */
> +        .word (_end - start)         /* zImage end address */
> 
>  past_zImage:
> -       cpsid aif                    /* Disable all interrupts */
> +        cpsid aif                    /* Disable all interrupts */
> 
> -       /* Save the bootloader arguments in less-clobberable registers */
> -       mov   r7, r1                 /* r7 := ARM-linux machine type */
> -       mov   r8, r2                 /* r8 := ATAG base address */
> +        /* Save the bootloader arguments in less-clobberable registers */
> +        mov   r7, r1                 /* r7 := ARM-linux machine type */
> +        mov   r8, r2                 /* r8 := ATAG base address */
> 
> -       /* Find out where we are */
> -       ldr   r0, =start
> -       adr   r9, start              /* r9  := paddr (start) */
> -       sub   r10, r9, r0            /* r10 := phys-offset */
> +        /* Find out where we are */
> +        ldr   r0, =start
> +        adr   r9, start              /* r9  := paddr (start) */
> +        sub   r10, r9, r0            /* r10 := phys-offset */
> 
> -       /* Using the DTB in the .dtb section? */
> +        /* Using the DTB in the .dtb section? */
>  #ifdef CONFIG_DTB_FILE
> -       ldr   r8, =_sdtb
> -       add   r8, r10                /* r8 := paddr(DTB) */
> +        ldr   r8, =_sdtb
> +        add   r8, r10                /* r8 := paddr(DTB) */
>  #endif
> 
> -       /* Are we the boot CPU? */
> -       mov   r12, #0                /* r12 := CPU ID */
> -       mrc   CP32(r0, MPIDR)
> -       tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
> -       beq   boot_cpu
> -       tst   r0, #(1<<30)           /* Uniprocessor system? */
> -       bne   boot_cpu
> -       bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
> -       beq   boot_cpu               /* If we're CPU 0, boot now */
> -
> -       /* Non-boot CPUs wait here to be woken up one at a time. */
> -1:     dsb
> -       ldr   r0, =smp_up_cpu        /* VA of gate */
> -       add   r0, r0, r10            /* PA of gate */
> -       ldr   r1, [r0]               /* Which CPU is being booted? */
> -       teq   r1, r12                /* Is it us? */
> -       wfene
> -       bne   1b
> +        /* Are we the boot CPU? */
> +        mov   r12, #0                /* r12 := CPU ID */
> +        mrc   CP32(r0, MPIDR)
> +        tst   r0, #(1<<31)           /* Multiprocessor extension supported? */
> +        beq   boot_cpu
> +        tst   r0, #(1<<30)           /* Uniprocessor system? */
> +        bne   boot_cpu
> +        bics  r12, r0, #(0xff << 24) /* Mask out flags to get CPU ID */
> +        beq   boot_cpu               /* If we're CPU 0, boot now */
> +
> +        /* Non-boot CPUs wait here to be woken up one at a time. */
> +1:      dsb
> +        ldr   r0, =smp_up_cpu        /* VA of gate */
> +        add   r0, r0, r10            /* PA of gate */
> +        ldr   r1, [r0]               /* Which CPU is being booted? */
> +        teq   r1, r12                /* Is it us? */
> +        wfene
> +        bne   1b
> 
>  boot_cpu:
>  #ifdef EARLY_UART_ADDRESS
> -       ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
> -       teq   r12, #0                   /* CPU 0 sets up the UART too */
> -       bleq  init_uart
> -       PRINT("- CPU ")
> -       mov   r0, r12
> -       bl    putn
> -       PRINT(" booting -\r\n")
> +        ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
> +        teq   r12, #0                   /* CPU 0 sets up the UART too */
> +        bleq  init_uart
> +        PRINT("- CPU ")
> +        mov   r0, r12
> +        bl    putn
> +        PRINT(" booting -\r\n")
>  #endif
> 
> -       /* Wake up secondary cpus */
> -       teq   r12, #0
> -       bleq  kick_cpus
> -
> -       /* Check that this CPU has Hyp mode */
> -       mrc   CP32(r0, ID_PFR1)
> -       and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
> -       teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
> -       beq   1f
> -       PRINT("- CPU doesn't support the virtualization extensions -\r\n")
> -       b     fail
> +        /* Wake up secondary cpus */
> +        teq   r12, #0
> +        bleq  kick_cpus
> +
> +        /* Check that this CPU has Hyp mode */
> +        mrc   CP32(r0, ID_PFR1)
> +        and   r0, r0, #0xf000        /* Bits 12-15 define virt extensions */
> +        teq   r0, #0x1000            /* Must == 0x1 or may be incompatible */
> +        beq   1f
> +        PRINT("- CPU doesn't support the virtualization extensions -\r\n")
> +        b     fail
>  1:
> -       /* Check if we're already in it */
> -       mrs   r0, cpsr
> -       and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
> -       teq   r0, #0x1a              /* Hyp Mode? */
> -       bne   1f
> -       PRINT("- Started in Hyp mode -\r\n")
> -       b     hyp
> +        /* Check if we're already in it */
> +        mrs   r0, cpsr
> +        and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
> +        teq   r0, #0x1a              /* Hyp Mode? */
> +        bne   1f
> +        PRINT("- Started in Hyp mode -\r\n")
> +        b     hyp
>  1:
> -       /* Otherwise, it must have been Secure Supervisor mode */
> -       mrc   CP32(r0, SCR)
> -       tst   r0, #0x1               /* Not-Secure bit set? */
> -       beq   1f
> -       PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
> -       b     fail
> +        /* Otherwise, it must have been Secure Supervisor mode */
> +        mrc   CP32(r0, SCR)
> +        tst   r0, #0x1               /* Not-Secure bit set? */
> +        beq   1f
> +        PRINT("- CPU is not in Hyp mode or Secure state -\r\n")
> +        b     fail
>  1:
> -       /* OK, we're in Secure state. */
> -       PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
> -       ldr   r0, =enter_hyp_mode    /* VA of function */
> -       adr   lr, hyp                /* Set return address for call */
> -       add   pc, r0, r10            /* Call PA of function */
> +        /* OK, we're in Secure state. */
> +        PRINT("- Started in Secure state -\r\n- Entering Hyp mode -\r\n")
> +        ldr   r0, =enter_hyp_mode    /* VA of function */
> +        adr   lr, hyp                /* Set return address for call */
> +        add   pc, r0, r10            /* Call PA of function */
> 
>  hyp:
> 
> -       /* Zero BSS On the boot CPU to avoid nasty surprises */
> -       teq   r12, #0
> -       bne   skip_bss
> -
> -       PRINT("- Zero BSS -\r\n")
> -       ldr   r0, =__bss_start       /* Load start & end of bss */
> -       ldr   r1, =__bss_end
> -       add   r0, r0, r10            /* Apply physical offset */
> -       add   r1, r1, r10
> -
> -       mov   r2, #0
> -1:     str   r2, [r0], #4
> -       cmp   r0, r1
> -       blo   1b
> -
> -skip_bss:
> -
> -       PRINT("- Setting up control registers -\r\n")
> -
> -       /* Read CPU ID */
> -       mrc   CP32(r0, MIDR)
> -       ldr   r1, =(MIDR_MASK)
> -       and   r0, r0, r1
> -       /* Is this a Cortex A15? */
> -       ldr   r1, =(CORTEX_A15_ID)
> -       teq   r0, r1
> -       bleq  cortex_a15_init
> -
> -       /* Set up memory attribute type tables */
> -       ldr   r0, =MAIR0VAL
> -       ldr   r1, =MAIR1VAL
> -       mcr   CP32(r0, MAIR0)
> -       mcr   CP32(r1, MAIR1)
> -       mcr   CP32(r0, HMAIR0)
> -       mcr   CP32(r1, HMAIR1)
> -
> -       /* Set up the HTCR:
> -        * PT walks use Outer-Shareable accesses,
> -        * PT walks are write-back, no-write-allocate in both cache levels,
> -        * Full 32-bit address space goes through this table. */
> -       ldr   r0, =0x80002500
> -       mcr   CP32(r0, HTCR)
> -
> -       /* Set up the HSCTLR:
> -        * Exceptions in LE ARM,
> -        * Low-latency IRQs disabled,
> -        * Write-implies-XN disabled (for now),
> -        * D-cache disabled (for now),
> -        * I-cache enabled,
> -        * Alignment checking enabled,
> -        * MMU translation disabled (for now). */
> -       ldr   r0, =(HSCTLR_BASE|SCTLR_A)
> -       mcr   CP32(r0, HSCTLR)
> -
> -       /* Write Xen's PT's paddr into the HTTBR */
> -       ldr   r4, =xen_pgtable
> -       add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
> -       mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
> -       mcrr  CP64(r4, r5, HTTBR)
> -
> -       /* Non-boot CPUs don't need to rebuild the pagetable */
> -       teq   r12, #0
> -       bne   pt_ready
> -
> -       /* console fixmap */
> +        /* Zero BSS On the boot CPU to avoid nasty surprises */
> +        teq   r12, #0
> +        bne   skip_bss
> +
> +        PRINT("- Zero BSS -\r\n")
> +        ldr   r0, =__bss_start       /* Load start & end of bss */
> +        ldr   r1, =__bss_end
> +        add   r0, r0, r10            /* Apply physical offset */
> +        add   r1, r1, r10
> +
> +        mov   r2, #0
> +1:      str   r2, [r0], #4
> +        cmp   r0, r1
> +        blo   1b
> +
> +skip_bss:
> +
> +        PRINT("- Setting up control registers -\r\n")
> +
> +        /* Read CPU ID */
> +        mrc   CP32(r0, MIDR)
> +        ldr   r1, =(MIDR_MASK)
> +        and   r0, r0, r1
> +        /* Is this a Cortex A15? */
> +        ldr   r1, =(CORTEX_A15_ID)
> +        teq   r0, r1
> +        bleq  cortex_a15_init
> +
> +        /* Set up memory attribute type tables */
> +        ldr   r0, =MAIR0VAL
> +        ldr   r1, =MAIR1VAL
> +        mcr   CP32(r0, MAIR0)
> +        mcr   CP32(r1, MAIR1)
> +        mcr   CP32(r0, HMAIR0)
> +        mcr   CP32(r1, HMAIR1)
> +
> +        /* Set up the HTCR:
> +         * PT walks use Outer-Shareable accesses,
> +         * PT walks are write-back, no-write-allocate in both cache levels,
> +         * Full 32-bit address space goes through this table. */
> +        ldr   r0, =0x80002500
> +        mcr   CP32(r0, HTCR)
> +
> +        /* Set up the HSCTLR:
> +         * Exceptions in LE ARM,
> +         * Low-latency IRQs disabled,
> +         * Write-implies-XN disabled (for now),
> +         * D-cache disabled (for now),
> +         * I-cache enabled,
> +         * Alignment checking enabled,
> +         * MMU translation disabled (for now). */
> +        ldr   r0, =(HSCTLR_BASE|SCTLR_A)
> +        mcr   CP32(r0, HSCTLR)
> +
> +        /* Write Xen's PT's paddr into the HTTBR */
> +        ldr   r4, =xen_pgtable
> +        add   r4, r4, r10            /* r4 := paddr (xen_pagetable) */
> +        mov   r5, #0                 /* r4:r5 is paddr (xen_pagetable) */
> +        mcrr  CP64(r4, r5, HTTBR)
> +
> +        /* Non-boot CPUs don't need to rebuild the pagetable */
> +        teq   r12, #0
> +        bne   pt_ready
> +
> +        /* console fixmap */
>  #ifdef EARLY_UART_ADDRESS
> -       ldr   r1, =xen_fixmap
> -       add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
> -       mov   r3, #0
> -       lsr   r2, r11, #12
> -       lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> -       orr   r2, r2, #PT_UPPER(DEV_L3)
> -       orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> -       strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> +        ldr   r1, =xen_fixmap
> +        add   r1, r1, r10            /* r1 := paddr (xen_fixmap) */
> +        mov   r3, #0
> +        lsr   r2, r11, #12
> +        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> +        orr   r2, r2, #PT_UPPER(DEV_L3)
> +        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> +        strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>  #endif
> 
> -       /* Build the baseline idle pagetable's first-level entries */
> -       ldr   r1, =xen_second
> -       add   r1, r1, r10            /* r1 := paddr (xen_second) */
> -       mov   r3, #0x0
> -       orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
> -       orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
> -       strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
> -       add   r2, r2, #0x1000
> -       strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
> -       add   r2, r2, #0x1000
> -       strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
> -       add   r2, r2, #0x1000
> -       strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
> -
> -       /* Now set up the second-level entries */
> -       orr   r2, r9, #PT_UPPER(MEM)
> -       orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
> -       mov   r4, r9, lsr #18        /* Slot for paddr(start) */
> -       strd  r2, r3, [r1, r4]       /* Map Xen there */
> -       ldr   r4, =start
> -       lsr   r4, #18                /* Slot for vaddr(start) */
> -       strd  r2, r3, [r1, r4]       /* Map Xen there too */
> -
> -       /* xen_fixmap pagetable */
> -       ldr   r2, =xen_fixmap
> -       add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
> -       orr   r2, r2, #PT_UPPER(PT)
> -       orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
> -       add   r4, r4, #8
> -       strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
> -
> -       mov   r3, #0x0
> -       lsr   r2, r8, #21
> -       lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
> -       orr   r2, r2, #PT_UPPER(MEM)
> -       orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
> -       add   r4, r4, #8
> -       strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
> +        /* Build the baseline idle pagetable's first-level entries */
> +        ldr   r1, =xen_second
> +        add   r1, r1, r10            /* r1 := paddr (xen_second) */
> +        mov   r3, #0x0
> +        orr   r2, r1, #PT_UPPER(PT)  /* r2:r3 := table map of xen_second */
> +        orr   r2, r2, #PT_LOWER(PT)  /* (+ rights for linear PT) */
> +        strd  r2, r3, [r4, #0]       /* Map it in slot 0 */
> +        add   r2, r2, #0x1000
> +        strd  r2, r3, [r4, #8]       /* Map 2nd page in slot 1 */
> +        add   r2, r2, #0x1000
> +        strd  r2, r3, [r4, #16]      /* Map 3rd page in slot 2 */
> +        add   r2, r2, #0x1000
> +        strd  r2, r3, [r4, #24]      /* Map 4th page in slot 3 */
> +
> +        /* Now set up the second-level entries */
> +        orr   r2, r9, #PT_UPPER(MEM)
> +        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB normal map of Xen */
> +        mov   r4, r9, lsr #18        /* Slot for paddr(start) */
> +        strd  r2, r3, [r1, r4]       /* Map Xen there */
> +        ldr   r4, =start
> +        lsr   r4, #18                /* Slot for vaddr(start) */
> +        strd  r2, r3, [r1, r4]       /* Map Xen there too */
> +
> +        /* xen_fixmap pagetable */
> +        ldr   r2, =xen_fixmap
> +        add   r2, r2, r10            /* r2 := paddr (xen_fixmap) */
> +        orr   r2, r2, #PT_UPPER(PT)
> +        orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
> +        add   r4, r4, #8
> +        strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
> +
> +        mov   r3, #0x0
> +        lsr   r2, r8, #21
> +        lsl   r2, r2, #21            /* 2MB-aligned paddr of DTB */
> +        orr   r2, r2, #PT_UPPER(MEM)
> +        orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
> +        add   r4, r4, #8
> +        strd  r2, r3, [r1, r4]       /* Map it in the early boot slot */
> 
>  pt_ready:
> -       PRINT("- Turning on paging -\r\n")
> -
> -       ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
> -       mrc   CP32(r0, HSCTLR)
> -       orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
> -       dsb                          /* Flush PTE writes and finish reads */
> -       mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
> -       isb                          /* Now, flush the icache */
> -       mov   pc, r1                 /* Get a proper vaddr into PC */
> +        PRINT("- Turning on paging -\r\n")
> +
> +        ldr   r1, =paging            /* Explicit vaddr, not RIP-relative */
> +        mrc   CP32(r0, HSCTLR)
> +        orr   r0, r0, #(SCTLR_M|SCTLR_C) /* Enable MMU and D-cache */
> +        dsb                          /* Flush PTE writes and finish reads */
> +        mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
> +        isb                          /* Now, flush the icache */
> +        mov   pc, r1                 /* Get a proper vaddr into PC */
>  paging:
> 
> 
>  #ifdef EARLY_UART_ADDRESS
> -       /* Use a virtual address to access the UART. */
> -       ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
> +        /* Use a virtual address to access the UART. */
> +        ldr   r11, =FIXMAP_ADDR(FIXMAP_CONSOLE)
>  #endif
> 
> -       PRINT("- Ready -\r\n")
> -
> -       /* The boot CPU should go straight into C now */
> -       teq   r12, #0
> -       beq   launch
> -
> -       /* Non-boot CPUs need to move on to the relocated pagetables */
> -       mov   r0, #0
> -       ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
> -       add   r4, r4, r10            /* PA of it */
> -       ldrd  r4, r5, [r4]           /* Actual value */
> -       dsb
> -       mcrr  CP64(r4, r5, HTTBR)
> -       dsb
> -       isb
> -       mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
> -       mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
> -       mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
> -       dsb                          /* Ensure completion of TLB+BP flush */
> -       isb
> -
> -       /* Non-boot CPUs report that they've got this far */
> -       ldr   r0, =ready_cpus
> -1:     ldrex r1, [r0]               /*            { read # of ready CPUs } */
> -       add   r1, r1, #1             /* Atomically { ++                   } */
> -       strex r2, r1, [r0]           /*            { writeback            } */
> -       teq   r2, #0
> -       bne   1b
> -       dsb
> -       mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
> -       dsb
> -
> -       /* Here, the non-boot CPUs must wait again -- they're now running on
> -        * the boot CPU's pagetables so it's safe for the boot CPU to
> -        * overwrite the non-relocated copy of Xen.  Once it's done that,
> -        * and brought up the memory allocator, non-boot CPUs can get their
> -        * own stacks and enter C. */
> -1:     wfe
> -       dsb
> -       ldr   r0, =smp_up_cpu
> -       ldr   r1, [r0]               /* Which CPU is being booted? */
> -       teq   r1, r12                /* Is it us? */
> -       bne   1b
> -
> -launch:
> -       ldr   r0, =init_stack        /* Find the boot-time stack */
> -       ldr   sp, [r0]
> -       add   sp, #STACK_SIZE        /* (which grows down from the top). */
> -       sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
> -       mov   r0, r10                /* Marshal args: - phys_offset */
> -       mov   r1, r7                 /*               - machine type */
> -       mov   r2, r8                 /*               - ATAG address */
> -       movs  r3, r12                /*               - CPU ID */
> -       beq   start_xen              /* and disappear into the land of C */
> -       b     start_secondary        /* (to the appropriate entry point) */
> +        PRINT("- Ready -\r\n")
> +
> +        /* The boot CPU should go straight into C now */
> +        teq   r12, #0
> +        beq   launch
> +
> +        /* Non-boot CPUs need to move on to the relocated pagetables */
> +        mov   r0, #0
> +        ldr   r4, =boot_httbr        /* VA of HTTBR value stashed by CPU 0 */
> +        add   r4, r4, r10            /* PA of it */
> +        ldrd  r4, r5, [r4]           /* Actual value */
> +        dsb
> +        mcrr  CP64(r4, r5, HTTBR)
> +        dsb
> +        isb
> +        mcr   CP32(r0, TLBIALLH)     /* Flush hypervisor TLB */
> +        mcr   CP32(r0, ICIALLU)      /* Flush I-cache */
> +        mcr   CP32(r0, BPIALL)       /* Flush branch predictor */
> +        dsb                          /* Ensure completion of TLB+BP flush */
> +        isb
> +
> +        /* Non-boot CPUs report that they've got this far */
> +        ldr   r0, =ready_cpus
> +1:      ldrex r1, [r0]               /*            { read # of ready CPUs } */
> +        add   r1, r1, #1             /* Atomically { ++                   } */
> +        strex r2, r1, [r0]           /*            { writeback            } */
> +        teq   r2, #0
> +        bne   1b
> +        dsb
> +        mcr   CP32(r0, DCCMVAC)      /* flush D-Cache */
> +        dsb
> +
> +        /* Here, the non-boot CPUs must wait again -- they're now running on
> +         * the boot CPU's pagetables so it's safe for the boot CPU to
> +         * overwrite the non-relocated copy of Xen.  Once it's done that,
> +         * and brought up the memory allocator, non-boot CPUs can get their
> +         * own stacks and enter C. */
> +1:      wfe
> +        dsb
> +        ldr   r0, =smp_up_cpu
> +        ldr   r1, [r0]               /* Which CPU is being booted? */
> +        teq   r1, r12                /* Is it us? */
> +        bne   1b
> +
> +launch:
> +        ldr   r0, =init_stack        /* Find the boot-time stack */
> +        ldr   sp, [r0]
> +        add   sp, #STACK_SIZE        /* (which grows down from the top). */
> +        sub   sp, #CPUINFO_sizeof    /* Make room for CPU save record */
> +        mov   r0, r10                /* Marshal args: - phys_offset */
> +        mov   r1, r7                 /*               - machine type */
> +        mov   r2, r8                 /*               - ATAG address */
> +        movs  r3, r12                /*               - CPU ID */
> +        beq   start_xen              /* and disappear into the land of C */
> +        b     start_secondary        /* (to the appropriate entry point) */
> 
>  /* Fail-stop
>   * r0: string explaining why */
> -fail:  PRINT("- Boot failed -\r\n")
> -1:     wfe
> -       b     1b
> +fail:   PRINT("- Boot failed -\r\n")
> +1:      wfe
> +        b     1b
> 
>  #ifdef EARLY_UART_ADDRESS
> 
>  /* Bring up the UART. Specific to the PL011 UART.
>   * Clobbers r0-r2 */
>  init_uart:
> -       mov   r1, #0x0
> -       str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
> -       mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
> -       str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
> -       mov   r1, #0x60              /* 8n1 */
> -       str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
> -       ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
> -       str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
> -       adr   r0, 1f
> -       b     puts
> -1:     .asciz "- UART enabled -\r\n"
> -       .align 4
> +        mov   r1, #0x0
> +        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor fraction) */
> +        mov   r1, #0x4               /* 7.3728MHz / 0x4 == 16 * 115200 */
> +        str   r1, [r11, #0x24]       /* -> UARTIBRD (Baud divisor integer) */
> +        mov   r1, #0x60              /* 8n1 */
> +        str   r1, [r11, #0x24]       /* -> UARTLCR_H (Line control) */
> +        ldr   r1, =0x00000301        /* RXE | TXE | UARTEN */
> +        str   r1, [r11, #0x30]       /* -> UARTCR (Control Register) */
> +        adr   r0, 1f
> +        b     puts
> +1:      .asciz "- UART enabled -\r\n"
> +        .align 4
> 
>  /* Print early debug messages.  Specific to the PL011 UART.
>   * r0: Nul-terminated string to print.
>   * Clobbers r0-r2 */
>  puts:
> -       ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> -       tst   r2, #0x8               /* Check BUSY bit */
> -       bne   puts                   /* Wait for the UART to be ready */
> -       ldrb  r2, [r0], #1           /* Load next char */
> -       teq   r2, #0                 /* Exit on nul */
> -       moveq pc, lr
> -       str   r2, [r11]              /* -> UARTDR (Data Register) */
> -       b     puts
> +        ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> +        tst   r2, #0x8               /* Check BUSY bit */
> +        bne   puts                   /* Wait for the UART to be ready */
> +        ldrb  r2, [r0], #1           /* Load next char */
> +        teq   r2, #0                 /* Exit on nul */
> +        moveq pc, lr
> +        str   r2, [r11]              /* -> UARTDR (Data Register) */
> +        b     puts
> 
>  /* Print a 32-bit number in hex.  Specific to the PL011 UART.
>   * r0: Number to print.
>   * clobbers r0-r3 */
>  putn:
> -       adr   r1, hex
> -       mov   r3, #8
> -1:     ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> -       tst   r2, #0x8               /* Check BUSY bit */
> -       bne   1b                     /* Wait for the UART to be ready */
> -       and   r2, r0, #0xf0000000    /* Mask off the top nybble */
> -       ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
> -       str   r2, [r11]              /* -> UARTDR (Data Register) */
> -       lsl   r0, #4                 /* Roll it through one nybble at a time */
> -       subs  r3, r3, #1
> -       bne   1b
> -       mov   pc, lr
> -
> -hex:   .ascii "0123456789abcdef"
> -       .align 2
> +        adr   r1, hex
> +        mov   r3, #8
> +1:      ldr   r2, [r11, #0x18]       /* <- UARTFR (Flag register) */
> +        tst   r2, #0x8               /* Check BUSY bit */
> +        bne   1b                     /* Wait for the UART to be ready */
> +        and   r2, r0, #0xf0000000    /* Mask off the top nybble */
> +        ldrb  r2, [r1, r2, lsr #28]  /* Convert to a char */
> +        str   r2, [r11]              /* -> UARTDR (Data Register) */
> +        lsl   r0, #4                 /* Roll it through one nybble at a time */
> +        subs  r3, r3, #1
> +        bne   1b
> +        mov   pc, lr
> +
> +hex:    .ascii "0123456789abcdef"
> +        .align 2
> 
>  #else  /* EARLY_UART_ADDRESS */
> 
> @@ -403,6 +403,13 @@ init_uart:
>  .global early_puts
>  early_puts:
>  puts:
> -putn:  mov   pc, lr
> +putn:   mov   pc, lr
> 
>  #endif /* EARLY_UART_ADDRESS */
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
> index 7c3b357..3dba38d 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/mode_switch.S
> @@ -28,25 +28,25 @@
>  /* wake up secondary cpus */
>  .globl kick_cpus
>  kick_cpus:
> -       /* write start paddr to v2m sysreg FLAGSSET register */
> -       ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
> -       dsb
> -       mov   r2, #0xffffffff
> -       str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
> -       dsb
> -       ldr   r2, =start
> -       add   r2, r2, r10
> -       str   r2, [r0, #(V2M_SYS_FLAGSSET)]
> -       dsb
> -       /* send an interrupt */
> -       ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
> -       mov   r2, #0x1
> -       str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
> -       mov   r2, #0xfe0000
> -       str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
> -       dsb
> -       str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
> -       mov   pc, lr
> +        /* write start paddr to v2m sysreg FLAGSSET register */
> +        ldr   r0, =(V2M_SYS_MMIO_BASE)        /* base V2M sysreg MMIO address */
> +        dsb
> +        mov   r2, #0xffffffff
> +        str   r2, [r0, #(V2M_SYS_FLAGSCLR)]
> +        dsb
> +        ldr   r2, =start
> +        add   r2, r2, r10
> +        str   r2, [r0, #(V2M_SYS_FLAGSSET)]
> +        dsb
> +        /* send an interrupt */
> +        ldr   r0, =(GIC_BASE_ADDRESS + GIC_DR_OFFSET) /* base GICD MMIO address */
> +        mov   r2, #0x1
> +        str   r2, [r0, #(GICD_CTLR * 4)]      /* enable distributor */
> +        mov   r2, #0xfe0000
> +        str   r2, [r0, #(GICD_SGIR * 4)]      /* send IPI to everybody */
> +        dsb
> +        str   r2, [r0, #(GICD_CTLR * 4)]      /* disable distributor */
> +        mov   pc, lr
> 
> 
>  /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
> @@ -61,54 +61,61 @@ kick_cpus:
> 
>  .globl enter_hyp_mode
>  enter_hyp_mode:
> -       mov   r3, lr                 /* Put return address in non-banked reg */
> -       cpsid aif, #0x16             /* Enter Monitor mode */
> -       mrc   CP32(r0, SCR)
> -       orr   r0, r0, #0x100         /* Set HCE */
> -       orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
> -       bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
> -       mcr   CP32(r0, SCR)
> -       /* Ugly: the system timer's frequency register is only
> -        * programmable in Secure state.  Since we don't know where its
> -        * memory-mapped control registers live, we can't find out the
> -        * right frequency.  Use the VE model's default frequency here. */
> -       ldr   r0, =0x5f5e100         /* 100 MHz */
> -       mcr   CP32(r0, CNTFRQ)
> -       ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
> -       mcr   CP32(r0, NSACR)
> -       mov   r0, #GIC_BASE_ADDRESS
> -       add   r0, r0, #GIC_DR_OFFSET
> -       /* Disable the GIC distributor, on the boot CPU only */
> -       mov   r1, #0
> -       teq   r12, #0                /* Is this the boot CPU? */
> -       streq r1, [r0]
> -       /* Continuing ugliness: Set up the GIC so NS state owns interrupts,
> -        * The first 32 interrupts (SGIs & PPIs) must be configured on all
> -        * CPUs while the remainder are SPIs and only need to be done one, on
> -        * the boot CPU. */
> -       add   r0, r0, #0x80          /* GICD_IGROUP0 */
> -       mov   r2, #0xffffffff        /* All interrupts to group 1 */
> -       teq   r12, #0                /* Boot CPU? */
> -       str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
> -       streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
> -       streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
> -       /* Disable the GIC CPU interface on all processors */
> -       mov   r0, #GIC_BASE_ADDRESS
> -       add   r0, r0, #GIC_CR_OFFSET
> -       mov   r1, #0
> -       str   r1, [r0]
> -       /* Must drop priority mask below 0x80 before entering NS state */
> -       ldr   r1, =0xff
> -       str   r1, [r0, #0x4]         /* -> GICC_PMR */
> -       /* Reset a few config registers */
> -       mov   r0, #0
> -       mcr   CP32(r0, FCSEIDR)
> -       mcr   CP32(r0, CONTEXTIDR)
> -       /* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
> -       ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
> -       mcr   CP32(r1, NSACR)
> +        mov   r3, lr                 /* Put return address in non-banked reg */
> +        cpsid aif, #0x16             /* Enter Monitor mode */
> +        mrc   CP32(r0, SCR)
> +        orr   r0, r0, #0x100         /* Set HCE */
> +        orr   r0, r0, #0xb1          /* Set SCD, AW, FW and NS */
> +        bic   r0, r0, #0xe           /* Clear EA, FIQ and IRQ */
> +        mcr   CP32(r0, SCR)
> +        /* Ugly: the system timer's frequency register is only
> +         * programmable in Secure state.  Since we don't know where its
> +         * memory-mapped control registers live, we can't find out the
> +         * right frequency.  Use the VE model's default frequency here. */
> +        ldr   r0, =0x5f5e100         /* 100 MHz */
> +        mcr   CP32(r0, CNTFRQ)
> +        ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
> +        mcr   CP32(r0, NSACR)
> +        mov   r0, #GIC_BASE_ADDRESS
> +        add   r0, r0, #GIC_DR_OFFSET
> +        /* Disable the GIC distributor, on the boot CPU only */
> +        mov   r1, #0
> +        teq   r12, #0                /* Is this the boot CPU? */
> +        streq r1, [r0]
> +        /* Continuing ugliness: Set up the GIC so NS state owns interrupts,
> +         * The first 32 interrupts (SGIs & PPIs) must be configured on all
> +         * CPUs while the remainder are SPIs and only need to be done one, on
> +         * the boot CPU. */
> +        add   r0, r0, #0x80          /* GICD_IGROUP0 */
> +        mov   r2, #0xffffffff        /* All interrupts to group 1 */
> +        teq   r12, #0                /* Boot CPU? */
> +        str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
> +        streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
> +        streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
> +        /* Disable the GIC CPU interface on all processors */
> +        mov   r0, #GIC_BASE_ADDRESS
> +        add   r0, r0, #GIC_CR_OFFSET
> +        mov   r1, #0
> +        str   r1, [r0]
> +        /* Must drop priority mask below 0x80 before entering NS state */
> +        ldr   r1, =0xff
> +        str   r1, [r0, #0x4]         /* -> GICC_PMR */
> +        /* Reset a few config registers */
> +        mov   r0, #0
> +        mcr   CP32(r0, FCSEIDR)
> +        mcr   CP32(r0, CONTEXTIDR)
> +        /* Allow non-secure access to coprocessors, FIQs, VFP and NEON */
> +        ldr   r1, =0x3fff            /* 14 CP bits set, all others clear */
> +        mcr   CP32(r1, NSACR)
> 
> -       mrs   r0, cpsr               /* Copy the CPSR */
> -       add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
> -       msr   spsr_cxsf, r0          /* into the SPSR */
> -       movs  pc, r3                 /* Exception-return into Hyp mode */
> +        mrs   r0, cpsr               /* Copy the CPSR */
> +        add   r0, r0, #0x4           /* 0x16 (Monitor) -> 0x1a (Hyp) */
> +        msr   spsr_cxsf, r0          /* into the SPSR */
> +        movs  pc, r3                 /* Exception-return into Hyp mode */
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/proc-ca15.S
> index 5a5bf64..dcdd42e 100644
> --- a/xen/arch/arm/proc-ca15.S
> +++ b/xen/arch/arm/proc-ca15.S
> @@ -21,8 +21,15 @@
> 
>  .globl cortex_a15_init
>  cortex_a15_init:
> -       /* Set up the SMP bit in ACTLR */
> -       mrc   CP32(r0, ACTLR)
> -       orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit*/
> -       mcr   CP32(r0, ACTLR)
> -       mov   pc, lr
> +        /* Set up the SMP bit in ACTLR */
> +        mrc   CP32(r0, ACTLR)
> +        orr   r0, r0, #(ACTLR_CA15_SMP) /* enable SMP bit */
> +        mcr   CP32(r0, ACTLR)
> +        mov   pc, lr
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:19:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1kk-0006Lc-I0; Tue, 18 Dec 2012 18:18:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marc.zyngier@arm.com>) id 1Tl1ki-0006LQ-Vv
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:18:53 +0000
Received: from [85.158.143.35:27973] by server-3.bemta-4.messagelabs.com id
	D3/B5-18211-C83B0D05; Tue, 18 Dec 2012 18:18:52 +0000
X-Env-Sender: marc.zyngier@arm.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355854727!14243781!1
X-Originating-IP: [91.220.42.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogOTEuMjIwLjQyLjQ0ID0+IDM0NDcwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8876 invoked from network); 18 Dec 2012 18:18:50 -0000
Received: from service87.mimecast.com (HELO service87.mimecast.com)
	(91.220.42.44) by server-8.tower-21.messagelabs.com with SMTP;
	18 Dec 2012 18:18:50 -0000
Received: from cam-owa1.Emea.Arm.com (fw-tnat.cambridge.arm.com
	[217.140.96.21]) by service87.mimecast.com;
	Tue, 18 Dec 2012 18:18:47 +0000
Received: from [10.1.70.21] ([10.1.255.212]) by cam-owa1.Emea.Arm.com with
	Microsoft SMTPSVC(6.0.3790.0); Tue, 18 Dec 2012 18:18:45 +0000
Message-ID: <50D0B384.40605@arm.com>
Date: Tue, 18 Dec 2012 18:18:44 +0000
From: Marc Zyngier <marc.zyngier@arm.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Christopher Covington <cov@codeaurora.org>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
	<20121218131401.GB22139@mudshark.cambridge.arm.com>
	<50D0AF5C.1070605@codeaurora.org>
In-Reply-To: <50D0AF5C.1070605@codeaurora.org>
X-Enigmail-Version: 1.4.6
X-OriginalArrivalTime: 18 Dec 2012 18:18:45.0308 (UTC)
	FILETIME=[1DEF2FC0:01CDDD4C]
X-MC-Unique: 112121818184701601
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Will Deacon <Will.Deacon@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Christopher,

On 18/12/12 18:01, Christopher Covington wrote:
> Hi Will,
> 
> On 12/18/2012 08:14 AM, Will Deacon wrote:
>> Hi Stefano,
>>
>> On Tue, Dec 18, 2012 at 12:04:38PM +0000, Stefano Stabellini wrote:
>>> On Mon, 17 Dec 2012, Will Deacon wrote:
>>>> From: Marc Zyngier <marc.zyngier@arm.com>
>>>>
>>>> Add support for the smallest, dumbest possible platform, to be
>>>> used as a guest for KVM or other hypervisors.
> 
> [...]
> 
>>> Should it come along with a DTS?
>>
>> The only things the platform needs are GIC, timers, memory and a CPU.
> 
> I assume multiple virtio-mmio peripherals are hiding behind what you seem to
> be advertising here as plain old memory?

No. Memory is memory. Virtio peripherals are created outside of the
memory range. They end up having rings and descriptors in memory, but
that's no different from what you have with a fairly complicated
DMA-capable hardware device.

Furthermore, while virtio-mmio is what we use with KVM, it could be
something radically different: Xen uses a somewhat different mechanism,
and it is not even required to boot the platform!

>> Furthermore, the location, size, frequency etc properties of these aren't
>> fixed, so a dts would be fairly useless because it will probably not match
>> the particular mach-virt instance you're targetting.
> 
> I disagree. I think an example DTS would be fairly useful, if only for the
> full list of peripherals you're using on the platform.

That's the whole point: we do not want to specify anything, because
there is no need to. You could have anything there, depending on your
hypervisor.

>> For kvmtool, I've been generating the device-tree at runtime based on how
>> kvmtool is invoked and it's been working pretty well so far.
> 
> If you'd much prefer to post the command line, tools version, etc. that you're
> using to generate the DTB, rather than the DTS, that'd be better than nothing.

Here's what kvmtool generated with the parameters I used a few minutes
ago:

/dts-v1/;

/memreserve/	0x000000008fff0000 0x0000000000001000;
/ {
	interrupt-parent = <0x1>;
	compatible = "linux,dummy-virt";
	#address-cells = <0x2>;
	#size-cells = <0x2>;

	chosen {
		bootargs = "console=hvc0,38400 root=/dev/vda1";
	};

	memory {
		device_type = "memory";
		reg = <0x0 0x80000000 0x0 0x40000000>;
	};

	cpus {
		#address-cells = <0x1>;
		#size-cells = <0x0>;

		cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a15";
			enable-method = "psci";
			reg = <0x0>;
		};

		cpu@1 {
			device_type = "cpu";
			compatible = "arm,cortex-a15";
			enable-method = "psci";
			reg = <0x1>;
		};
	};

	intc {
		compatible = "arm,cortex-a15-gic";
		#interrupt-cells = <0x3>;
		interrupt-controller;
		reg = <0x0 0x3ffff000 0x0 0x1000 0x0 0x3fffd000 0x0 0x2000>;
		phandle = <0x1>;
	};

	timer {
		compatible = "arm,armv7-timer";
		interrupts = <0x1 0xd 0x301 0x1 0xe 0x301 0x1 0xb 0x301 0x1 0xa 0x301>;
	};

	virtio@0 {
		compatible = "virtio,mmio";
		reg = <0x0 0x0 0x0 0x200>;
		interrupts = <0x0 0x0 0x1>;
	};

	virtio@200 {
		compatible = "virtio,mmio";
		reg = <0x0 0x200 0x0 0x200>;
		interrupts = <0x0 0x1 0x1>;
	};

	virtio@400 {
		compatible = "virtio,mmio";
		reg = <0x0 0x400 0x0 0x200>;
		interrupts = <0x0 0x2 0x1>;
	};

	virtio@600 {
		compatible = "virtio,mmio";
		reg = <0x0 0x600 0x0 0x200>;
		interrupts = <0x0 0x3 0x1>;
	};

	psci {
		compatible = "arm,psci";
		method = "hvc";
		cpu_suspend = <0x95c1ba5e>;
		cpu_off = <0x95c1ba5f>;
		cpu_on = <0x95c1ba60>;
		migrate = <0x95c1ba61>;
	};
};

Does it help?
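(Side note for anyone decoding those reg properties by hand: with the root
node's #address-cells = <0x2> and #size-cells = <0x2>, each reg entry is four
32-bit cells, two forming the 64-bit address and two the size. A rough sketch
of the decoding, where decode_reg is just an illustrative helper and not
anything kvmtool provides:)

```python
def decode_reg(cells, address_cells=2, size_cells=2):
    """Split a flat list of 32-bit reg cells into (address, size) pairs,
    joining the cells most-significant-first as the device tree requires."""
    def join(words):
        value = 0
        for w in words:
            value = (value << 32) | w
        return value

    step = address_cells + size_cells
    return [(join(cells[i:i + address_cells]),
             join(cells[i + address_cells:i + step]))
            for i in range(0, len(cells), step)]

# The intc node's reg property from the DTS above:
# GIC distributor at 0x3ffff000 (4K), CPU interface at 0x3fffd000 (8K).
gic_reg = [0x0, 0x3ffff000, 0x0, 0x1000, 0x0, 0x3fffd000, 0x0, 0x2000]
print([(hex(a), hex(s)) for a, s in decode_reg(gic_reg)])
# -> [('0x3ffff000', '0x1000'), ('0x3fffd000', '0x2000')]
```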

> It seems like Rob Herring's earlier question about whether the dummy platform
> is really justified never got answered. I think sending a sample DTS out with
> the patchset would help "highlight where we need to do more work on DT driving
> the initialization."
> 
> Lastly, I'm somewhat curious, why virtio-mmio console rather than DCC?

What would be the point of using DCC? We would have to trap on each access, and
then we'd have to invent yet another mechanism to channel the console to userspace.
Not to mention that I like to be able to actually input something on a console,
not just read from it.

	M.
-- 
Jazz is not dead. It just smells funny...


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:28:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1tU-0006pZ-UY; Tue, 18 Dec 2012 18:27:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl1tS-0006pU-PR
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:27:55 +0000
Received: from [85.158.137.99:5020] by server-5.bemta-3.messagelabs.com id
	16/4A-15136-9A5B0D05; Tue, 18 Dec 2012 18:27:53 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355855271!17646917!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25069 invoked from network); 18 Dec 2012 18:27:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:27:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1080097"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:27:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:27:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl1tN-0002P1-6I;
	Tue, 18 Dec 2012 18:27:49 +0000
Date: Tue, 18 Dec 2012 18:27:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355851042.14620.280.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212181824460.17523@kaball.uk.xensource.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
	<1355851042.14620.280.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It would be nice if the patch description spelled out what you are
doing. Something along these lines:

- move gic.h to include/asm-arm/gic.h;
- move assembly files (entry.S, head.S, mode_switch.S, proc-ca15.S,
lib/*) to xen/arch/arm/arm32;
- move asm-offsets.c to xen/arch/arm/arm32;
- make the appropriate Makefile changes.

Other than that the patch is OK.
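(As a sanity check on the sed expressions in the Config.mk and xen/Rules.mk
hunks quoted below, the mapping they implement can be sketched like this,
with re.sub standing in for sed; the function names are illustrative only:)

```python
import re

def xen_compile_arch(machine):
    """Mimic the XEN_COMPILE_ARCH sed chain from the patched Config.mk:
    i?86/i86pc -> x86_32, amd64 -> x86_64, armv7* -> arm32;
    anything else passes through unchanged."""
    for pattern, repl in [(r'i.86', 'x86_32'), (r'i86pc', 'x86_32'),
                          (r'amd64', 'x86_64'), (r'armv7.*', 'arm32')]:
        machine = re.sub(pattern, repl, machine)
    return machine

def target_arch(subarch):
    """Mimic TARGET_ARCH from the patched xen/Rules.mk: collapse
    x86_* to x86 and arm32/arm64 to arm."""
    return re.sub(r'arm(32|64)', 'arm', re.sub(r'x86.*', 'x86', subarch))

print(xen_compile_arch('armv7l'))   # -> arm32
print(target_arch('arm32'))         # -> arm
print(target_arch('x86_64'))        # -> x86
```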

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Ping?
> On Tue, 2012-12-04 at 15:57 +0000, Ian Campbell wrote:
> > Eventually we will have arm64 as well.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> >  Config.mk                                    |    4 +++-
> >  config/{arm.mk => arm32.mk}                  |    0
> >  xen/Rules.mk                                 |    2 +-
> >  xen/arch/arm/Makefile                        |    9 +++------
> >  xen/arch/arm/Rules.mk                        |   13 ++++++++-----
> >  xen/arch/arm/arm32/Makefile                  |    5 +++++
> >  xen/arch/arm/{ => arm32}/asm-offsets.c       |    0
> >  xen/arch/arm/{ => arm32}/entry.S             |    0
> >  xen/arch/arm/{ => arm32}/head.S              |    0
> >  xen/arch/arm/{ => arm32}/lib/Makefile        |    0
> >  xen/arch/arm/{ => arm32}/lib/assembler.h     |    0
> >  xen/arch/arm/{ => arm32}/lib/bitops.h        |    0
> >  xen/arch/arm/{ => arm32}/lib/changebit.S     |    0
> >  xen/arch/arm/{ => arm32}/lib/clearbit.S      |    0
> >  xen/arch/arm/{ => arm32}/lib/copy_template.S |    0
> >  xen/arch/arm/{ => arm32}/lib/div64.S         |    0
> >  xen/arch/arm/{ => arm32}/lib/findbit.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/lib1funcs.S     |    0
> >  xen/arch/arm/{ => arm32}/lib/lshrdi3.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/memcpy.S        |    0
> >  xen/arch/arm/{ => arm32}/lib/memmove.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/memset.S        |    0
> >  xen/arch/arm/{ => arm32}/lib/memzero.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/setbit.S        |    0
> >  xen/arch/arm/{ => arm32}/lib/testchangebit.S |    0
> >  xen/arch/arm/{ => arm32}/lib/testclearbit.S  |    0
> >  xen/arch/arm/{ => arm32}/lib/testsetbit.S    |    0
> >  xen/arch/arm/{ => arm32}/mode_switch.S       |    2 +-
> >  xen/arch/arm/{ => arm32}/proc-ca15.S         |    0
> >  xen/arch/arm/domain.c                        |    2 +-
> >  xen/arch/arm/domain_build.c                  |    2 +-
> >  xen/arch/arm/gic.c                           |    2 +-
> >  xen/arch/arm/irq.c                           |    2 +-
> >  xen/arch/arm/p2m.c                           |    2 +-
> >  xen/arch/arm/setup.c                         |    2 +-
> >  xen/arch/arm/smpboot.c                       |    2 +-
> >  xen/arch/arm/traps.c                         |    2 +-
> >  xen/arch/arm/vgic.c                          |    2 +-
> >  xen/arch/arm/vtimer.c                        |    2 +-
> >  xen/{arch/arm => include/asm-arm}/gic.h      |    6 ++----
> >  40 files changed, 33 insertions(+), 28 deletions(-)
> >  rename config/{arm.mk => arm32.mk} (100%)
> >  create mode 100644 xen/arch/arm/arm32/Makefile
> >  rename xen/arch/arm/{ => arm32}/asm-offsets.c (100%)
> >  rename xen/arch/arm/{ => arm32}/entry.S (100%)
> >  rename xen/arch/arm/{ => arm32}/head.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/Makefile (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/assembler.h (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/bitops.h (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/changebit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/clearbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/copy_template.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/div64.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/findbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/lib1funcs.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/lshrdi3.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memcpy.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memmove.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memset.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memzero.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/setbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/testchangebit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/testclearbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/testsetbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/mode_switch.S (99%)
> >  rename xen/arch/arm/{ => arm32}/proc-ca15.S (100%)
> >  rename xen/{arch/arm => include/asm-arm}/gic.h (98%)
> >
> > diff --git a/Config.mk b/Config.mk
> > index d99b9a1..8e35886 100644
> > --- a/Config.mk
> > +++ b/Config.mk
> > @@ -14,7 +14,9 @@ debug ?= y
> >  debug_symbols ?= $(debug)
> >
> >  XEN_COMPILE_ARCH    ?= $(shell uname -m | sed -e s/i.86/x86_32/ \
> > -                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ -e s/arm.*/arm/)
> > +                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ \
> > +                         -e s/armv7.*/arm32/)
> > +
> >  XEN_TARGET_ARCH     ?= $(XEN_COMPILE_ARCH)
> >  XEN_OS              ?= $(shell uname -s)
> >
> > diff --git a/config/arm.mk b/config/arm32.mk
> > similarity index 100%
> > rename from config/arm.mk
> > rename to config/arm32.mk
> > diff --git a/xen/Rules.mk b/xen/Rules.mk
> > index f7cb8b2..c2db449 100644
> > --- a/xen/Rules.mk
> > +++ b/xen/Rules.mk
> > @@ -28,7 +28,7 @@ endif
> >  # Set ARCH/SUBARCH appropriately.
> >  override TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
> >  override TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> > -                              sed -e 's/x86.*/x86/')
> > +                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> >
> >  TARGET := $(BASEDIR)/xen
> >
> > diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> > index fd92b72..1b33767 100644
> > --- a/xen/arch/arm/Makefile
> > +++ b/xen/arch/arm/Makefile
> > @@ -1,8 +1,7 @@
> > -subdir-y += lib
> > +subdir-$(arm32) += arm32
> >
> >  obj-y += dummy.o
> >  obj-y += early_printk.o
> > -obj-y += entry.o
> >  obj-y += domain.o
> >  obj-y += domctl.o
> >  obj-y += sysctl.o
> > @@ -12,8 +11,6 @@ obj-y += io.o
> >  obj-y += irq.o
> >  obj-y += kernel.o
> >  obj-y += mm.o
> > -obj-y += mode_switch.o
> > -obj-y += proc-ca15.o
> >  obj-y += p2m.o
> >  obj-y += percpu.o
> >  obj-y += guestcopy.o
> > @@ -36,7 +33,7 @@ obj-y += dtb.o
> >  AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
> >  endif
> >
> > -ALL_OBJS := head.o $(ALL_OBJS)
> > +ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
> >
> >  $(TARGET): $(TARGET)-syms $(TARGET).bin
> >         # XXX: VE model loads by VMA so instead of
> > @@ -81,7 +78,7 @@ $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
> >             $(@D)/.$(@F).1.o -o $@
> >         rm -f $(@D)/.$(@F).[0-9]*
> >
> > -asm-offsets.s: asm-offsets.c
> > +asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
> >         $(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $<
> >
> >  xen.lds: xen.lds.S
> > diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> > index a45c654..f83bfee 100644
> > --- a/xen/arch/arm/Rules.mk
> > +++ b/xen/arch/arm/Rules.mk
> > @@ -12,16 +12,19 @@ CFLAGS += -fno-builtin -fno-common -Wredundant-decls
> >  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> >  CFLAGS += -I$(BASEDIR)/include
> >
> > -# Prevent floating-point variables from creeping into Xen.
> > -CFLAGS += -msoft-float
> > -
> >  $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
> >  $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
> >
> >  arm := y
> >
> > +ifeq ($(TARGET_SUBARCH),arm32)
> > +# Prevent floating-point variables from creeping into Xen.
> > +CFLAGS += -msoft-float
> > +CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> > +arm32 := y
> > +arm64 := n
> > +endif
> > +
> >  ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
> >  CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
> >  endif
> > -
> > -CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> > diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
> > new file mode 100644
> > index 0000000..20931fa
> > --- /dev/null
> > +++ b/xen/arch/arm/arm32/Makefile
> > @@ -0,0 +1,5 @@
> > +subdir-y += lib
> > +
> > +obj-y += entry.o
> > +obj-y += mode_switch.o
> > +obj-y += proc-ca15.o
> > diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
> > similarity index 100%
> > rename from xen/arch/arm/asm-offsets.c
> > rename to xen/arch/arm/arm32/asm-offsets.c
> > diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/arm32/entry.S
> > similarity index 100%
> > rename from xen/arch/arm/entry.S
> > rename to xen/arch/arm/arm32/entry.S
> > diff --git a/xen/arch/arm/head.S b/xen/arch/arm/arm32/head.S
> > similarity index 100%
> > rename from xen/arch/arm/head.S
> > rename to xen/arch/arm/arm32/head.S
> > diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/arm32/lib/Makefile
> > similarity index 100%
> > rename from xen/arch/arm/lib/Makefile
> > rename to xen/arch/arm/arm32/lib/Makefile
> > diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/arm32/lib/assembler.h
> > similarity index 100%
> > rename from xen/arch/arm/lib/assembler.h
> > rename to xen/arch/arm/arm32/lib/assembler.h
> > diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/arm32/lib/bitops.h
> > similarity index 100%
> > rename from xen/arch/arm/lib/bitops.h
> > rename to xen/arch/arm/arm32/lib/bitops.h
> > diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/arm32/lib/changebit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/changebit.S
> > rename to xen/arch/arm/arm32/lib/changebit.S
> > diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/arm32/lib/clearbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/clearbit.S
> > rename to xen/arch/arm/arm32/lib/clearbit.S
> > diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/arm32/lib/copy_template.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/copy_template.S
> > rename to xen/arch/arm/arm32/lib/copy_template.S
> > diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/arm32/lib/div64.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/div64.S
> > rename to xen/arch/arm/arm32/lib/div64.S
> > diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/arm32/lib/findbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/findbit.S
> > rename to xen/arch/arm/arm32/lib/findbit.S
> > diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/arm32/lib/lib1funcs.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/lib1funcs.S
> > rename to xen/arch/arm/arm32/lib/lib1funcs.S
> > diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/arm32/lib/lshrdi3.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/lshrdi3.S
> > rename to xen/arch/arm/arm32/lib/lshrdi3.S
> > diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/arm32/lib/memcpy.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memcpy.S
> > rename to xen/arch/arm/arm32/lib/memcpy.S
> > diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/arm32/lib/memmove.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memmove.S
> > rename to xen/arch/arm/arm32/lib/memmove.S
> > diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/arm32/lib/memset.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memset.S
> > rename to xen/arch/arm/arm32/lib/memset.S
> > diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/arm32/lib/memzero.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memzero.S
> > rename to xen/arch/arm/arm32/lib/memzero.S
> > diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/arm32/lib/setbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/setbit.S
> > rename to xen/arch/arm/arm32/lib/setbit.S
> > diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/arm32/lib/testchangebit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/testchangebit.S
> > rename to xen/arch/arm/arm32/lib/testchangebit.S
> > diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/arm32/lib/testclearbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/testclearbit.S
> > rename to xen/arch/arm/arm32/lib/testclearbit.S
> > diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/arm32/lib/testsetbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/testsetbit.S
> > rename to xen/arch/arm/arm32/lib/testsetbit.S
> > diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/arm32/mode_switch.S
> > similarity index 99%
> > rename from xen/arch/arm/mode_switch.S
> > rename to xen/arch/arm/arm32/mode_switch.S
> > index 7c3b357..d550c33 100644
> > --- a/xen/arch/arm/mode_switch.S
> > +++ b/xen/arch/arm/arm32/mode_switch.S
> > @@ -21,7 +21,7 @@
> >  #include <asm/page.h>
> >  #include <asm/platform_vexpress.h>
> >  #include <asm/asm_defns.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >
> >  /* XXX: Versatile Express specific code */
> > diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/arm32/proc-ca15.S
> > similarity index 100%
> > rename from xen/arch/arm/proc-ca15.S
> > rename to xen/arch/arm/arm32/proc-ca15.S
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index c5292c7..0875045 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -12,7 +12,7 @@
> >  #include <asm/p2m.h>
> >  #include <asm/irq.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >  #include "vtimer.h"
> >  #include "vpl011.h"
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index a9e7f43..aac92b3 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -11,7 +11,7 @@
> >  #include <xen/libfdt/libfdt.h>
> >  #include <xen/guest_access.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >  #include "kernel.h"
> >
> >  static unsigned int __initdata opt_dom0_max_vcpus;
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 0c6fab9..41824c9 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -29,7 +29,7 @@
> >  #include <asm/p2m.h>
> >  #include <asm/domain.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  /* Access to the GIC Distributor registers through the fixmap */
> >  #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
> > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > index 72e83e6..c141d81 100644
> > --- a/xen/arch/arm/irq.c
> > +++ b/xen/arch/arm/irq.c
> > @@ -25,7 +25,7 @@
> >  #include <xen/errno.h>
> >  #include <xen/sched.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  static void enable_none(struct irq_desc *irq) { }
> >  static unsigned int startup_none(struct irq_desc *irq) { return 0; }
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 7ae4515..852f0d8 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -4,7 +4,7 @@
> >  #include <xen/errno.h>
> >  #include <xen/domain_page.h>
> >  #include <asm/flushtlb.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  void dump_p2m_lookup(struct domain *d, paddr_t addr)
> >  {
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> > index 2076724..8f85ae6 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -39,7 +39,7 @@
> >  #include <asm/setup.h>
> >  #include <asm/vfp.h>
> >  #include <asm/early_printk.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  static __used void init_done(void)
> >  {
> > diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> > index 6555ac6..7b6ffa0 100644
> > --- a/xen/arch/arm/smpboot.c
> > +++ b/xen/arch/arm/smpboot.c
> > @@ -29,7 +29,7 @@
> >  #include <xen/timer.h>
> >  #include <xen/irq.h>
> >  #include <asm/vfp.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  cpumask_t cpu_online_map;
> >  EXPORT_SYMBOL(cpu_online_map);
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 19e2081..d01ff6d 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -35,7 +35,7 @@
> >
> >  #include "io.h"
> >  #include "vtimer.h"
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  /* The base of the stack must always be double-word aligned, which means
> >   * that both the kernel half of struct cpu_user_regs (which is pushed in
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 3f7e757..7d1a5ad 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -27,7 +27,7 @@
> >  #include <asm/current.h>
> >
> >  #include "io.h"
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  #define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000
> >
> > diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> > index 490b021..1c45f4a 100644
> > --- a/xen/arch/arm/vtimer.c
> > +++ b/xen/arch/arm/vtimer.c
> > @@ -21,7 +21,7 @@
> >  #include <xen/lib.h>
> >  #include <xen/timer.h>
> >  #include <xen/sched.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  extern s_time_t ticks_to_ns(uint64_t ticks);
> >  extern uint64_t ns_to_ticks(s_time_t ns);
> > diff --git a/xen/arch/arm/gic.h b/xen/include/asm-arm/gic.h
> > similarity index 98%
> > rename from xen/arch/arm/gic.h
> > rename to xen/include/asm-arm/gic.h
> > index 1bf1b02..bf30fbd 100644
> > --- a/xen/arch/arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -1,6 +1,4 @@
> >  /*
> > - * xen/arch/arm/gic.h
> > - *
> >   * ARM Generic Interrupt Controller support
> >   *
> >   * Tim Deegan <tim@xen.org>
> > @@ -17,8 +15,8 @@
> >   * GNU General Public License for more details.
> >   */
> >
> > -#ifndef __ARCH_ARM_GIC_H__
> > -#define __ARCH_ARM_GIC_H__
> > +#ifndef __ASM_ARM_GIC_H__
> > +#define __ASM_ARM_GIC_H__
> >
> >  #define GICD_CTLR       (0x000/4)
> >  #define GICD_TYPER      (0x004/4)
> > --
> > 1.7.2.5
> >
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:28:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1tU-0006pZ-UY; Tue, 18 Dec 2012 18:27:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl1tS-0006pU-PR
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:27:55 +0000
Received: from [85.158.137.99:5020] by server-5.bemta-3.messagelabs.com id
	16/4A-15136-9A5B0D05; Tue, 18 Dec 2012 18:27:53 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355855271!17646917!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25069 invoked from network); 18 Dec 2012 18:27:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:27:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1080097"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:27:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:27:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl1tN-0002P1-6I;
	Tue, 18 Dec 2012 18:27:49 +0000
Date: Tue, 18 Dec 2012 18:27:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355851042.14620.280.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212181824460.17523@kaball.uk.xensource.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
	<1355851042.14620.280.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It would be nice to add to the description of the patch what you are
doing. Something along these lines:

- move gic.h to include/asm-arm/gic.h;
- move assembly files (entry.S, head.S, mode_switch.S, proc-ca15.S,
  lib/*) to xen/arch/arm/arm32;
- move asm-offsets.c to xen/arch/arm/arm32;
- make the appropriate Makefile changes.

Other than that the patch is OK.

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Ping?
> On Tue, 2012-12-04 at 15:57 +0000, Ian Campbell wrote:
> > Eventually we will have arm64 as well.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> >  Config.mk                                    |    4 +++-
> >  config/{arm.mk => arm32.mk}                  |    0
> >  xen/Rules.mk                                 |    2 +-
> >  xen/arch/arm/Makefile                        |    9 +++------
> >  xen/arch/arm/Rules.mk                        |   13 ++++++++-----
> >  xen/arch/arm/arm32/Makefile                  |    5 +++++
> >  xen/arch/arm/{ => arm32}/asm-offsets.c       |    0
> >  xen/arch/arm/{ => arm32}/entry.S             |    0
> >  xen/arch/arm/{ => arm32}/head.S              |    0
> >  xen/arch/arm/{ => arm32}/lib/Makefile        |    0
> >  xen/arch/arm/{ => arm32}/lib/assembler.h     |    0
> >  xen/arch/arm/{ => arm32}/lib/bitops.h        |    0
> >  xen/arch/arm/{ => arm32}/lib/changebit.S     |    0
> >  xen/arch/arm/{ => arm32}/lib/clearbit.S      |    0
> >  xen/arch/arm/{ => arm32}/lib/copy_template.S |    0
> >  xen/arch/arm/{ => arm32}/lib/div64.S         |    0
> >  xen/arch/arm/{ => arm32}/lib/findbit.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/lib1funcs.S     |    0
> >  xen/arch/arm/{ => arm32}/lib/lshrdi3.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/memcpy.S        |    0
> >  xen/arch/arm/{ => arm32}/lib/memmove.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/memset.S        |    0
> >  xen/arch/arm/{ => arm32}/lib/memzero.S       |    0
> >  xen/arch/arm/{ => arm32}/lib/setbit.S        |    0
> >  xen/arch/arm/{ => arm32}/lib/testchangebit.S |    0
> >  xen/arch/arm/{ => arm32}/lib/testclearbit.S  |    0
> >  xen/arch/arm/{ => arm32}/lib/testsetbit.S    |    0
> >  xen/arch/arm/{ => arm32}/mode_switch.S       |    2 +-
> >  xen/arch/arm/{ => arm32}/proc-ca15.S         |    0
> >  xen/arch/arm/domain.c                        |    2 +-
> >  xen/arch/arm/domain_build.c                  |    2 +-
> >  xen/arch/arm/gic.c                           |    2 +-
> >  xen/arch/arm/irq.c                           |    2 +-
> >  xen/arch/arm/p2m.c                           |    2 +-
> >  xen/arch/arm/setup.c                         |    2 +-
> >  xen/arch/arm/smpboot.c                       |    2 +-
> >  xen/arch/arm/traps.c                         |    2 +-
> >  xen/arch/arm/vgic.c                          |    2 +-
> >  xen/arch/arm/vtimer.c                        |    2 +-
> >  xen/{arch/arm => include/asm-arm}/gic.h      |    6 ++----
> >  40 files changed, 33 insertions(+), 28 deletions(-)
> >  rename config/{arm.mk => arm32.mk} (100%)
> >  create mode 100644 xen/arch/arm/arm32/Makefile
> >  rename xen/arch/arm/{ => arm32}/asm-offsets.c (100%)
> >  rename xen/arch/arm/{ => arm32}/entry.S (100%)
> >  rename xen/arch/arm/{ => arm32}/head.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/Makefile (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/assembler.h (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/bitops.h (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/changebit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/clearbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/copy_template.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/div64.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/findbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/lib1funcs.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/lshrdi3.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memcpy.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memmove.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memset.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/memzero.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/setbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/testchangebit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/testclearbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/lib/testsetbit.S (100%)
> >  rename xen/arch/arm/{ => arm32}/mode_switch.S (99%)
> >  rename xen/arch/arm/{ => arm32}/proc-ca15.S (100%)
> >  rename xen/{arch/arm => include/asm-arm}/gic.h (98%)
> >
> > diff --git a/Config.mk b/Config.mk
> > index d99b9a1..8e35886 100644
> > --- a/Config.mk
> > +++ b/Config.mk
> > @@ -14,7 +14,9 @@ debug ?= y
> >  debug_symbols ?= $(debug)
> >
> >  XEN_COMPILE_ARCH    ?= $(shell uname -m | sed -e s/i.86/x86_32/ \
> > -                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ -e s/arm.*/arm/)
> > +                         -e s/i86pc/x86_32/ -e s/amd64/x86_64/ \
> > +                         -e s/armv7.*/arm32/)
> > +
> >  XEN_TARGET_ARCH     ?= $(XEN_COMPILE_ARCH)
> >  XEN_OS              ?= $(shell uname -s)
> >
> > diff --git a/config/arm.mk b/config/arm32.mk
> > similarity index 100%
> > rename from config/arm.mk
> > rename to config/arm32.mk
> > diff --git a/xen/Rules.mk b/xen/Rules.mk
> > index f7cb8b2..c2db449 100644
> > --- a/xen/Rules.mk
> > +++ b/xen/Rules.mk
> > @@ -28,7 +28,7 @@ endif
> >  # Set ARCH/SUBARCH appropriately.
> >  override TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
> >  override TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> > -                              sed -e 's/x86.*/x86/')
> > +                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> >
> >  TARGET := $(BASEDIR)/xen
> >
> > diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> > index fd92b72..1b33767 100644
> > --- a/xen/arch/arm/Makefile
> > +++ b/xen/arch/arm/Makefile
> > @@ -1,8 +1,7 @@
> > -subdir-y += lib
> > +subdir-$(arm32) += arm32
> >
> >  obj-y += dummy.o
> >  obj-y += early_printk.o
> > -obj-y += entry.o
> >  obj-y += domain.o
> >  obj-y += domctl.o
> >  obj-y += sysctl.o
> > @@ -12,8 +11,6 @@ obj-y += io.o
> >  obj-y += irq.o
> >  obj-y += kernel.o
> >  obj-y += mm.o
> > -obj-y += mode_switch.o
> > -obj-y += proc-ca15.o
> >  obj-y += p2m.o
> >  obj-y += percpu.o
> >  obj-y += guestcopy.o
> > @@ -36,7 +33,7 @@ obj-y += dtb.o
> >  AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
> >  endif
> >
> > -ALL_OBJS := head.o $(ALL_OBJS)
> > +ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
> >
> >  $(TARGET): $(TARGET)-syms $(TARGET).bin
> >         # XXX: VE model loads by VMA so instead of
> > @@ -81,7 +78,7 @@ $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
> >             $(@D)/.$(@F).1.o -o $@
> >         rm -f $(@D)/.$(@F).[0-9]*
> >
> > -asm-offsets.s: asm-offsets.c
> > +asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
> >         $(CC) $(filter-out -flto,$(CFLAGS)) -S -o $@ $<
> >
> >  xen.lds: xen.lds.S
> > diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> > index a45c654..f83bfee 100644
> > --- a/xen/arch/arm/Rules.mk
> > +++ b/xen/arch/arm/Rules.mk
> > @@ -12,16 +12,19 @@ CFLAGS += -fno-builtin -fno-common -Wredundant-decls
> >  CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
> >  CFLAGS += -I$(BASEDIR)/include
> >
> > -# Prevent floating-point variables from creeping into Xen.
> > -CFLAGS += -msoft-float
> > -
> >  $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
> >  $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
> >
> >  arm := y
> >
> > +ifeq ($(TARGET_SUBARCH),arm32)
> > +# Prevent floating-point variables from creeping into Xen.
> > +CFLAGS += -msoft-float
> > +CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> > +arm32 := y
> > +arm64 := n
> > +endif
> > +
> >  ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
> >  CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
> >  endif
> > -
> > -CFLAGS += -mcpu=cortex-a15 -mfpu=vfpv3 -mfloat-abi=softfp
> > diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
> > new file mode 100644
> > index 0000000..20931fa
> > --- /dev/null
> > +++ b/xen/arch/arm/arm32/Makefile
> > @@ -0,0 +1,5 @@
> > +subdir-y += lib
> > +
> > +obj-y += entry.o
> > +obj-y += mode_switch.o
> > +obj-y += proc-ca15.o
> > diff --git a/xen/arch/arm/asm-offsets.c b/xen/arch/arm/arm32/asm-offsets.c
> > similarity index 100%
> > rename from xen/arch/arm/asm-offsets.c
> > rename to xen/arch/arm/arm32/asm-offsets.c
> > diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/arm32/entry.S
> > similarity index 100%
> > rename from xen/arch/arm/entry.S
> > rename to xen/arch/arm/arm32/entry.S
> > diff --git a/xen/arch/arm/head.S b/xen/arch/arm/arm32/head.S
> > similarity index 100%
> > rename from xen/arch/arm/head.S
> > rename to xen/arch/arm/arm32/head.S
> > diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/arm32/lib/Makefile
> > similarity index 100%
> > rename from xen/arch/arm/lib/Makefile
> > rename to xen/arch/arm/arm32/lib/Makefile
> > diff --git a/xen/arch/arm/lib/assembler.h b/xen/arch/arm/arm32/lib/assembler.h
> > similarity index 100%
> > rename from xen/arch/arm/lib/assembler.h
> > rename to xen/arch/arm/arm32/lib/assembler.h
> > diff --git a/xen/arch/arm/lib/bitops.h b/xen/arch/arm/arm32/lib/bitops.h
> > similarity index 100%
> > rename from xen/arch/arm/lib/bitops.h
> > rename to xen/arch/arm/arm32/lib/bitops.h
> > diff --git a/xen/arch/arm/lib/changebit.S b/xen/arch/arm/arm32/lib/changebit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/changebit.S
> > rename to xen/arch/arm/arm32/lib/changebit.S
> > diff --git a/xen/arch/arm/lib/clearbit.S b/xen/arch/arm/arm32/lib/clearbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/clearbit.S
> > rename to xen/arch/arm/arm32/lib/clearbit.S
> > diff --git a/xen/arch/arm/lib/copy_template.S b/xen/arch/arm/arm32/lib/copy_template.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/copy_template.S
> > rename to xen/arch/arm/arm32/lib/copy_template.S
> > diff --git a/xen/arch/arm/lib/div64.S b/xen/arch/arm/arm32/lib/div64.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/div64.S
> > rename to xen/arch/arm/arm32/lib/div64.S
> > diff --git a/xen/arch/arm/lib/findbit.S b/xen/arch/arm/arm32/lib/findbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/findbit.S
> > rename to xen/arch/arm/arm32/lib/findbit.S
> > diff --git a/xen/arch/arm/lib/lib1funcs.S b/xen/arch/arm/arm32/lib/lib1funcs.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/lib1funcs.S
> > rename to xen/arch/arm/arm32/lib/lib1funcs.S
> > diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/arm32/lib/lshrdi3.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/lshrdi3.S
> > rename to xen/arch/arm/arm32/lib/lshrdi3.S
> > diff --git a/xen/arch/arm/lib/memcpy.S b/xen/arch/arm/arm32/lib/memcpy.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memcpy.S
> > rename to xen/arch/arm/arm32/lib/memcpy.S
> > diff --git a/xen/arch/arm/lib/memmove.S b/xen/arch/arm/arm32/lib/memmove.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memmove.S
> > rename to xen/arch/arm/arm32/lib/memmove.S
> > diff --git a/xen/arch/arm/lib/memset.S b/xen/arch/arm/arm32/lib/memset.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memset.S
> > rename to xen/arch/arm/arm32/lib/memset.S
> > diff --git a/xen/arch/arm/lib/memzero.S b/xen/arch/arm/arm32/lib/memzero.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/memzero.S
> > rename to xen/arch/arm/arm32/lib/memzero.S
> > diff --git a/xen/arch/arm/lib/setbit.S b/xen/arch/arm/arm32/lib/setbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/setbit.S
> > rename to xen/arch/arm/arm32/lib/setbit.S
> > diff --git a/xen/arch/arm/lib/testchangebit.S b/xen/arch/arm/arm32/lib/testchangebit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/testchangebit.S
> > rename to xen/arch/arm/arm32/lib/testchangebit.S
> > diff --git a/xen/arch/arm/lib/testclearbit.S b/xen/arch/arm/arm32/lib/testclearbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/testclearbit.S
> > rename to xen/arch/arm/arm32/lib/testclearbit.S
> > diff --git a/xen/arch/arm/lib/testsetbit.S b/xen/arch/arm/arm32/lib/testsetbit.S
> > similarity index 100%
> > rename from xen/arch/arm/lib/testsetbit.S
> > rename to xen/arch/arm/arm32/lib/testsetbit.S
> > diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/arm32/mode_switch.S
> > similarity index 99%
> > rename from xen/arch/arm/mode_switch.S
> > rename to xen/arch/arm/arm32/mode_switch.S
> > index 7c3b357..d550c33 100644
> > --- a/xen/arch/arm/mode_switch.S
> > +++ b/xen/arch/arm/arm32/mode_switch.S
> > @@ -21,7 +21,7 @@
> >  #include <asm/page.h>
> >  #include <asm/platform_vexpress.h>
> >  #include <asm/asm_defns.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >
> >  /* XXX: Versatile Express specific code */
> > diff --git a/xen/arch/arm/proc-ca15.S b/xen/arch/arm/arm32/proc-ca15.S
> > similarity index 100%
> > rename from xen/arch/arm/proc-ca15.S
> > rename to xen/arch/arm/arm32/proc-ca15.S
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index c5292c7..0875045 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -12,7 +12,7 @@
> >  #include <asm/p2m.h>
> >  #include <asm/irq.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >  #include "vtimer.h"
> >  #include "vpl011.h"
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index a9e7f43..aac92b3 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -11,7 +11,7 @@
> >  #include <xen/libfdt/libfdt.h>
> >  #include <xen/guest_access.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >  #include "kernel.h"
> >
> >  static unsigned int __initdata opt_dom0_max_vcpus;
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 0c6fab9..41824c9 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -29,7 +29,7 @@
> >  #include <asm/p2m.h>
> >  #include <asm/domain.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  /* Access to the GIC Distributor registers through the fixmap */
> >  #define GICD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_GICD))
> > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > index 72e83e6..c141d81 100644
> > --- a/xen/arch/arm/irq.c
> > +++ b/xen/arch/arm/irq.c
> > @@ -25,7 +25,7 @@
> >  #include <xen/errno.h>
> >  #include <xen/sched.h>
> >
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  static void enable_none(struct irq_desc *irq) { }
> >  static unsigned int startup_none(struct irq_desc *irq) { return 0; }
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 7ae4515..852f0d8 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -4,7 +4,7 @@
> >  #include <xen/errno.h>
> >  #include <xen/domain_page.h>
> >  #include <asm/flushtlb.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  void dump_p2m_lookup(struct domain *d, paddr_t addr)
> >  {
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> > index 2076724..8f85ae6 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -39,7 +39,7 @@
> >  #include <asm/setup.h>
> >  #include <asm/vfp.h>
> >  #include <asm/early_printk.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  static __used void init_done(void)
> >  {
> > diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> > index 6555ac6..7b6ffa0 100644
> > --- a/xen/arch/arm/smpboot.c
> > +++ b/xen/arch/arm/smpboot.c
> > @@ -29,7 +29,7 @@
> >  #include <xen/timer.h>
> >  #include <xen/irq.h>
> >  #include <asm/vfp.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  cpumask_t cpu_online_map;
> >  EXPORT_SYMBOL(cpu_online_map);
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 19e2081..d01ff6d 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -35,7 +35,7 @@
> >
> >  #include "io.h"
> >  #include "vtimer.h"
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  /* The base of the stack must always be double-word aligned, which means
> >   * that both the kernel half of struct cpu_user_regs (which is pushed in
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 3f7e757..7d1a5ad 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -27,7 +27,7 @@
> >  #include <asm/current.h>
> >
> >  #include "io.h"
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  #define VGIC_DISTR_BASE_ADDRESS 0x000000002c001000
> >
> > diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> > index 490b021..1c45f4a 100644
> > --- a/xen/arch/arm/vtimer.c
> > +++ b/xen/arch/arm/vtimer.c
> > @@ -21,7 +21,7 @@
> >  #include <xen/lib.h>
> >  #include <xen/timer.h>
> >  #include <xen/sched.h>
> > -#include "gic.h"
> > +#include <asm/gic.h>
> >
> >  extern s_time_t ticks_to_ns(uint64_t ticks);
> >  extern uint64_t ns_to_ticks(s_time_t ns);
> > diff --git a/xen/arch/arm/gic.h b/xen/include/asm-arm/gic.h
> > similarity index 98%
> > rename from xen/arch/arm/gic.h
> > rename to xen/include/asm-arm/gic.h
> > index 1bf1b02..bf30fbd 100644
> > --- a/xen/arch/arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -1,6 +1,4 @@
> >  /*
> > - * xen/arch/arm/gic.h
> > - *
> >   * ARM Generic Interrupt Controller support
> >   *
> >   * Tim Deegan <tim@xen.org>
> > @@ -17,8 +15,8 @@
> >   * GNU General Public License for more details.
> >   */
> >
> > -#ifndef __ARCH_ARM_GIC_H__
> > -#define __ARCH_ARM_GIC_H__
> > +#ifndef __ASM_ARM_GIC_H__
> > +#define __ASM_ARM_GIC_H__
> >
> >  #define GICD_CTLR       (0x000/4)
> >  #define GICD_TYPER      (0x004/4)
> > --
> > 1.7.2.5
> >
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:31:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1wj-0006w8-HZ; Tue, 18 Dec 2012 18:31:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl1wi-0006w2-AX
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:31:16 +0000
Received: from [85.158.143.99:21781] by server-2.bemta-4.messagelabs.com id
	60/6F-30861-376B0D05; Tue, 18 Dec 2012 18:31:15 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1355855471!24852689!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28811 invoked from network); 18 Dec 2012 18:31:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:31:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1080556"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:31:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:31:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl1wb-0002Sl-2W;
	Tue, 18 Dec 2012 18:31:09 +0000
Date: Tue, 18 Dec 2012 18:31:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355850890.14620.279.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212181830400.17523@kaball.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1355850890.14620.279.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I don't have any comments, they all look pretty straightforward.

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Ping? Any comments on the ARM parts before I go through a rebase +
> refresh cycle?
> 
> On Tue, 2012-12-04 at 11:56 +0000, Ian Campbell wrote:
> > This was a short term hack to get something linking quickly, but its
> > usefulness has now passed.
> > 
> > This series replaces everything in here with proper functions. In many
> > cases these are still just stubs.
> > 
> > It seems to me that at least some of this stuff consists of x86-isms
> > which should instead be removed from the common code.
> > 
> > This highlights two large missing pieces of functionality: wallclock
> > time and cleaning up on domain destroy.
> > 
> > Ian.
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:33:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl1yq-00079F-Cr; Tue, 18 Dec 2012 18:33:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl1yo-000793-9k
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:33:26 +0000
Received: from [85.158.137.99:29943] by server-13.bemta-3.messagelabs.com id
	CA/D8-00465-2F6B0D05; Tue, 18 Dec 2012 18:33:22 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355855600!20014249!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18756 invoked from network); 18 Dec 2012 18:33:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:33:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1150747"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:33:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:33:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl1yh-0002Uy-Ml;
	Tue, 18 Dec 2012 18:33:19 +0000
Date: Tue, 18 Dec 2012 18:33:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> This shortens an overly long line.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

honestly I would rather keep it because it has been quite useful for
debugging in the past; once all the bugs have been fixed (TM) then we
can remove it ;-)


>  xen/arch/arm/head.S |   11 +++++++----
>  1 files changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> index 0d9a799..eb54925 100644
> --- a/xen/arch/arm/head.S
> +++ b/xen/arch/arm/head.S
> @@ -25,9 +25,12 @@
>  #define ZIMAGE_MAGIC_NUMBER 0x016f2818
>  
>  #define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
> +
> +/* Second Level */
>  #define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
> -#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
> -#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
> +
> +/* Third Level */
> +#define PT_DEV 0xe73 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
>  
>  #define PT_UPPER(x) (PT_##x & 0xf00)
>  #define PT_LOWER(x) (PT_##x & 0x0ff)
> @@ -222,8 +225,8 @@ skip_bss:
>          mov   r3, #0
>          lsr   r2, r11, #12
>          lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> -        orr   r2, r2, #PT_UPPER(DEV_L3)
> -        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> +        orr   r2, r2, #PT_UPPER(DEV)
> +        orr   r2, r2, #PT_LOWER(DEV) /* r2:r3 := 4K dev map including UART */
>          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>  #endif
>  
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:35:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl214-0007Jd-1B; Tue, 18 Dec 2012 18:35:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl212-0007JS-Qd
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:35:45 +0000
Received: from [85.158.143.99:51039] by server-3.bemta-4.messagelabs.com id
	EC/C0-18211-087B0D05; Tue, 18 Dec 2012 18:35:44 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355855742!24741827!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14616 invoked from network); 18 Dec 2012 18:35:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:35:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1081196"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:35:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:35:41 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl20z-0002X5-Bs;
	Tue, 18 Dec 2012 18:35:41 +0000
Date: Tue, 18 Dec 2012 18:35:32 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-4-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181835260.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-4-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: arm: reorder registers in struct
 cpu_user_regs.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> Primarily this is so that they are ordered in the same way as the
> mapping from arm64 x0..x31 registers to the arm32 registers, which is
> just less confusing for everyone going forward.
> 
> It also makes the implementation of select_user_regs in the next patch
> slightly simpler.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  xen/arch/arm/entry.S          |    4 ++--
>  xen/arch/arm/io.h             |    1 +
>  xen/arch/arm/traps.c          |    2 +-
>  xen/include/public/arch-arm.h |   11 +++++++----
>  4 files changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
> index cbd1c48..3611427 100644
> --- a/xen/arch/arm/entry.S
> +++ b/xen/arch/arm/entry.S
> @@ -12,7 +12,7 @@
>          RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
>  
>  #define SAVE_ALL                                                        \
> -        sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */      \
> +        sub sp, #(UREGS_SP_usr - UREGS_sp); /* SP, LR, SPSR, PC */      \
>          push {r0-r12}; /* Save R0-R12 */                                \
>                                                                          \
>          mrs r11, ELR_hyp;               /* ELR_hyp is return address. */\
> @@ -115,7 +115,7 @@ ENTRY(return_to_hypervisor)
>          ldr r11, [sp, #UREGS_cpsr]
>          msr SPSR_hyp, r11
>          pop {r0-r12}
> -        add sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */
> +        add sp, #(UREGS_SP_usr - UREGS_sp); /* SP, LR, SPSR, PC */
>          eret
>  
>  /*
> diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
> index 9a507f5..0933aa8 100644
> --- a/xen/arch/arm/io.h
> +++ b/xen/arch/arm/io.h
> @@ -21,6 +21,7 @@
>  
>  #include <xen/lib.h>
>  #include <asm/processor.h>
> +#include <asm/regs.h>
>  
>  typedef struct
>  {
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 19e2081..096dc0b 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -43,7 +43,7 @@
>   * stack) must be doubleword-aligned in size.  */
>  static inline void check_stack_alignment_constraints(void) {
>      BUILD_BUG_ON((sizeof (struct cpu_user_regs)) & 0x7);
> -    BUILD_BUG_ON((offsetof(struct cpu_user_regs, r8_fiq)) & 0x7);
> +    BUILD_BUG_ON((offsetof(struct cpu_user_regs, sp_usr)) & 0x7);
>      BUILD_BUG_ON((sizeof (struct cpu_info)) & 0x7);
>  }
>  
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index ff02d15..d8788f2 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -119,12 +119,15 @@ struct cpu_user_regs
>  
>      /* Outer guest frame only from here on... */
>  
> -    uint32_t r8_fiq, r9_fiq, r10_fiq, r11_fiq, r12_fiq;
> -
>      uint32_t sp_usr; /* LR_usr is the same register as LR, see above */
>  
> -    uint32_t sp_svc, sp_abt, sp_und, sp_irq, sp_fiq;
> -    uint32_t lr_svc, lr_abt, lr_und, lr_irq, lr_fiq;
> +    uint32_t sp_irq, lr_irq;
> +    uint32_t sp_svc, lr_svc;
> +    uint32_t sp_abt, lr_abt;
> +    uint32_t sp_und, lr_und;
> +
> +    uint32_t r8_fiq, r9_fiq, r10_fiq, r11_fiq, r12_fiq;
> +    uint32_t sp_fiq, lr_fiq;
>  
>      uint32_t spsr_svc, spsr_abt, spsr_und, spsr_irq, spsr_fiq;
>  
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:40:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl25P-0007XY-QC; Tue, 18 Dec 2012 18:40:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl25O-0007XR-Ec
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:40:14 +0000
Received: from [85.158.143.99:49339] by server-2.bemta-4.messagelabs.com id
	A4/65-30861-D88B0D05; Tue, 18 Dec 2012 18:40:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355856010!23020699!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20974 invoked from network); 18 Dec 2012 18:40:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:40:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1081947"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:40:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:40:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl25I-0002c2-U5;
	Tue, 18 Dec 2012 18:40:08 +0000
Date: Tue, 18 Dec 2012 18:40:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181838460.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> We weren't taking the guest mode (CPSR) into account and would always
> access the user version of the registers.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/traps.c       |   62 ++++++++++++++++++++++++++++++++++++++++++-
>  xen/arch/arm/vgic.c        |    4 +-
>  xen/arch/arm/vpl011.c      |    4 +-
>  xen/arch/arm/vtimer.c      |    8 +++---
>  xen/include/asm-arm/regs.h |    6 ++++
>  5 files changed, 74 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 096dc0b..e3c0290 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -73,6 +73,64 @@ static void print_xen_info(void)
>             debug, print_tainted(taint_str));
>  }
>  
> +uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg)
> +{
> +    BUG_ON( guest_mode(regs) );
> +
> +    /*
> +     * We rely heavily on the layout of cpu_user_regs to avoid having
> +     * to handle all of the registers individually. Use BUILD_BUG_ON to
> +     * ensure that things which expect are contiguous actually are.
> +     */
> +#define REGOFFS(R) offsetof(struct cpu_user_regs, R)
> +
> +    switch ( reg ) {
> +    case 0 ... 7: /* Unbanked registers */
> +        BUILD_BUG_ON(REGOFFS(r0) + 7*sizeof(uint32_t) != REGOFFS(r7));
> +        return &regs->r0 + reg;
> +    case 8 ... 12: /* Register banked in FIQ mode */
> +        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
> +        if ( fiq_mode(regs) )
> +            return &regs->r8_fiq + reg - 8;
> +        else
> +            return &regs->r8_fiq + reg - 8;

what's the point of this if?


> +    case 13 ... 14: /* Banked SP + LR registers */
> +        BUILD_BUG_ON(REGOFFS(sp_fiq) + 1*sizeof(uint32_t) != REGOFFS(lr_fiq));
> +        BUILD_BUG_ON(REGOFFS(sp_irq) + 1*sizeof(uint32_t) != REGOFFS(lr_irq));
> +        BUILD_BUG_ON(REGOFFS(sp_svc) + 1*sizeof(uint32_t) != REGOFFS(lr_svc));
> +        BUILD_BUG_ON(REGOFFS(sp_abt) + 1*sizeof(uint32_t) != REGOFFS(lr_abt));
> +        BUILD_BUG_ON(REGOFFS(sp_und) + 1*sizeof(uint32_t) != REGOFFS(lr_und));
> +        switch ( regs->cpsr & PSR_MODE_MASK )
> +        {
> +        case PSR_MODE_USR:
> +        case PSR_MODE_SYS: /* Sys regs are the usr regs */
> +            if ( reg == 13 )
> +                return &regs->sp_usr;
> +            else /* lr_usr == lr in a user frame */
> +                return &regs->lr;
> +        case PSR_MODE_FIQ:
> +            return &regs->sp_fiq + reg - 13;
> +        case PSR_MODE_IRQ:
> +            return &regs->sp_irq + reg - 13;
> +        case PSR_MODE_SVC:
> +            return &regs->sp_svc + reg - 13;
> +        case PSR_MODE_ABT:
> +            return &regs->sp_abt + reg - 13;
> +        case PSR_MODE_UND:
> +            return &regs->sp_und + reg - 13;
> +        case PSR_MODE_MON:
> +        case PSR_MODE_HYP:
> +        default:
> +            BUG();
> +        }
> +    case 15: /* PC */
> +        return &regs->pc;
> +    default:
> +        BUG();
> +    }
> +#undef REGOFFS
> +}
> +
>  static const char *decode_fsc(uint32_t fsc, int *level)
>  {
>      const char *msg = NULL;
> @@ -448,7 +506,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
>      switch ( code ) {
>      case 0xe0 ... 0xef:
>          reg = code - 0xe0;
> -        r = &regs->r0 + reg;
> +        r = select_user_reg(regs, reg);
>          printk("DOM%d: R%d = %#010"PRIx32" at %#010"PRIx32"\n",
>                 domid, reg, *r, regs->pc);
>          break;
> @@ -518,7 +576,7 @@ static void do_cp15_32(struct cpu_user_regs *regs,
>                         union hsr hsr)
>  {
>      struct hsr_cp32 cp32 = hsr.cp32;
> -    uint32_t *r = &regs->r0 + cp32.reg;
> +    uint32_t *r = select_user_reg(regs, cp32.reg);
>  
>      if ( !cp32.ccvalid ) {
>          dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n");
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 3f7e757..59780d2 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -160,7 +160,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      struct vgic_irq_rank *rank;
>      int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
>      int gicd_reg = REG(offset);
> @@ -372,7 +372,7 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      struct vgic_irq_rank *rank;
>      int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
>      int gicd_reg = REG(offset);
> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> index 1522667..7dcee90 100644
> --- a/xen/arch/arm/vpl011.c
> +++ b/xen/arch/arm/vpl011.c
> @@ -92,7 +92,7 @@ static int uart0_mmio_read(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      int offset = (int)(info->gpa - UART0_START);
>  
>      switch ( offset )
> @@ -114,7 +114,7 @@ static int uart0_mmio_write(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      int offset = (int)(info->gpa - UART0_START);
>  
>      switch ( offset )
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 490b021..07994b2 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -21,7 +21,7 @@
>  #include <xen/lib.h>
>  #include <xen/timer.h>
>  #include <xen/sched.h>
> -#include "gic.h"
> +#include <asm/regs.h>
>  
>  extern s_time_t ticks_to_ns(uint64_t ticks);
>  extern uint64_t ns_to_ticks(s_time_t ns);
> @@ -49,7 +49,7 @@ static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr)
>  {
>      struct vcpu *v = current;
>      struct hsr_cp32 cp32 = hsr.cp32;
> -    uint32_t *r = &regs->r0 + cp32.reg;
> +    uint32_t *r = select_user_reg(regs, cp32.reg);
>      s_time_t now;
>  
>      switch ( hsr.bits & HSR_CP32_REGS_MASK )
> @@ -101,8 +101,8 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
>  {
>      struct vcpu *v = current;
>      struct hsr_cp64 cp64 = hsr.cp64;
> -    uint32_t *r1 = &regs->r0 + cp64.reg1;
> -    uint32_t *r2 = &regs->r0 + cp64.reg2;
> +    uint32_t *r1 = select_user_reg(regs, cp64.reg1);
> +    uint32_t *r2 = select_user_reg(regs, cp64.reg2);
>      uint64_t ticks;
>      s_time_t now;
>  
> diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h
> index 54f6ed8..7486944 100644
> --- a/xen/include/asm-arm/regs.h
> +++ b/xen/include/asm-arm/regs.h
> @@ -30,6 +30,12 @@
>  
>  #define return_reg(v) ((v)->arch.cpu_info->guest_cpu_user_regs.r0)
>  
> +/*
> + * Returns a pointer to the given register value in regs, taking the
> + * processor mode (CPSR) into account.
> + */
> +extern uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg);
> +
>  #endif /* __ARM_REGS_H__ */
>  /*
>   * Local variables:
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:40:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl25P-0007XY-QC; Tue, 18 Dec 2012 18:40:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl25O-0007XR-Ec
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 18:40:14 +0000
Received: from [85.158.143.99:49339] by server-2.bemta-4.messagelabs.com id
	A4/65-30861-D88B0D05; Tue, 18 Dec 2012 18:40:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1355856010!23020699!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20974 invoked from network); 18 Dec 2012 18:40:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:40:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1081947"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:40:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:40:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl25I-0002c2-U5;
	Tue, 18 Dec 2012 18:40:08 +0000
Date: Tue, 18 Dec 2012 18:40:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1212181838460.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> We weren't taking the guest mode (CPSR) into account and would always
> access the user version of the registers.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/traps.c       |   62 ++++++++++++++++++++++++++++++++++++++++++-
>  xen/arch/arm/vgic.c        |    4 +-
>  xen/arch/arm/vpl011.c      |    4 +-
>  xen/arch/arm/vtimer.c      |    8 +++---
>  xen/include/asm-arm/regs.h |    6 ++++
>  5 files changed, 74 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 096dc0b..e3c0290 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -73,6 +73,64 @@ static void print_xen_info(void)
>             debug, print_tainted(taint_str));
>  }
>  
> +uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg)
> +{
> +    BUG_ON( !guest_mode(regs) );
> +
> +    /*
> +     * We rely heavily on the layout of cpu_user_regs to avoid having
> +     * to handle all of the registers individually. Use BUILD_BUG_ON to
> +     * ensure that things which we expect to be contiguous actually are.
> +     */
> +#define REGOFFS(R) offsetof(struct cpu_user_regs, R)
> +
> +    switch ( reg ) {
> +    case 0 ... 7: /* Unbanked registers */
> +        BUILD_BUG_ON(REGOFFS(r0) + 7*sizeof(uint32_t) != REGOFFS(r7));
> +        return &regs->r0 + reg;
> +    case 8 ... 12: /* Register banked in FIQ mode */
> +        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
> +        if ( fiq_mode(regs) )
> +            return &regs->r8_fiq + reg - 8;
> +        else
> +            return &regs->r8_fiq + reg - 8;

what's the point of this if?


> +    case 13 ... 14: /* Banked SP + LR registers */
> +        BUILD_BUG_ON(REGOFFS(sp_fiq) + 1*sizeof(uint32_t) != REGOFFS(lr_fiq));
> +        BUILD_BUG_ON(REGOFFS(sp_irq) + 1*sizeof(uint32_t) != REGOFFS(lr_irq));
> +        BUILD_BUG_ON(REGOFFS(sp_svc) + 1*sizeof(uint32_t) != REGOFFS(lr_svc));
> +        BUILD_BUG_ON(REGOFFS(sp_abt) + 1*sizeof(uint32_t) != REGOFFS(lr_abt));
> +        BUILD_BUG_ON(REGOFFS(sp_und) + 1*sizeof(uint32_t) != REGOFFS(lr_und));
> +        switch ( regs->cpsr & PSR_MODE_MASK )
> +        {
> +        case PSR_MODE_USR:
> +        case PSR_MODE_SYS: /* Sys regs are the usr regs */
> +            if ( reg == 13 )
> +                return &regs->sp_usr;
> +            else /* lr_usr == lr in a user frame */
> +                return &regs->lr;
> +        case PSR_MODE_FIQ:
> +            return &regs->sp_fiq + reg - 13;
> +        case PSR_MODE_IRQ:
> +            return &regs->sp_irq + reg - 13;
> +        case PSR_MODE_SVC:
> +            return &regs->sp_svc + reg - 13;
> +        case PSR_MODE_ABT:
> +            return &regs->sp_abt + reg - 13;
> +        case PSR_MODE_UND:
> +            return &regs->sp_und + reg - 13;
> +        case PSR_MODE_MON:
> +        case PSR_MODE_HYP:
> +        default:
> +            BUG();
> +        }
> +    case 15: /* PC */
> +        return &regs->pc;
> +    default:
> +        BUG();
> +    }
> +#undef REGOFFS
> +}
> +
>  static const char *decode_fsc(uint32_t fsc, int *level)
>  {
>      const char *msg = NULL;
> @@ -448,7 +506,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
>      switch ( code ) {
>      case 0xe0 ... 0xef:
>          reg = code - 0xe0;
> -        r = &regs->r0 + reg;
> +        r = select_user_reg(regs, reg);
>          printk("DOM%d: R%d = %#010"PRIx32" at %#010"PRIx32"\n",
>                 domid, reg, *r, regs->pc);
>          break;
> @@ -518,7 +576,7 @@ static void do_cp15_32(struct cpu_user_regs *regs,
>                         union hsr hsr)
>  {
>      struct hsr_cp32 cp32 = hsr.cp32;
> -    uint32_t *r = &regs->r0 + cp32.reg;
> +    uint32_t *r = select_user_reg(regs, cp32.reg);
>  
>      if ( !cp32.ccvalid ) {
>          dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n");
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 3f7e757..59780d2 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -160,7 +160,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      struct vgic_irq_rank *rank;
>      int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
>      int gicd_reg = REG(offset);
> @@ -372,7 +372,7 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      struct vgic_irq_rank *rank;
>      int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
>      int gicd_reg = REG(offset);
> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> index 1522667..7dcee90 100644
> --- a/xen/arch/arm/vpl011.c
> +++ b/xen/arch/arm/vpl011.c
> @@ -92,7 +92,7 @@ static int uart0_mmio_read(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      int offset = (int)(info->gpa - UART0_START);
>  
>      switch ( offset )
> @@ -114,7 +114,7 @@ static int uart0_mmio_write(struct vcpu *v, mmio_info_t *info)
>  {
>      struct hsr_dabt dabt = info->dabt;
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -    uint32_t *r = &regs->r0 + dabt.reg;
> +    uint32_t *r = select_user_reg(regs, dabt.reg);
>      int offset = (int)(info->gpa - UART0_START);
>  
>      switch ( offset )
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 490b021..07994b2 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -21,7 +21,7 @@
>  #include <xen/lib.h>
>  #include <xen/timer.h>
>  #include <xen/sched.h>
> -#include "gic.h"
> +#include <asm/regs.h>
>  
>  extern s_time_t ticks_to_ns(uint64_t ticks);
>  extern uint64_t ns_to_ticks(s_time_t ns);
> @@ -49,7 +49,7 @@ static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr)
>  {
>      struct vcpu *v = current;
>      struct hsr_cp32 cp32 = hsr.cp32;
> -    uint32_t *r = &regs->r0 + cp32.reg;
> +    uint32_t *r = select_user_reg(regs, cp32.reg);
>      s_time_t now;
>  
>      switch ( hsr.bits & HSR_CP32_REGS_MASK )
> @@ -101,8 +101,8 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
>  {
>      struct vcpu *v = current;
>      struct hsr_cp64 cp64 = hsr.cp64;
> -    uint32_t *r1 = &regs->r0 + cp64.reg1;
> -    uint32_t *r2 = &regs->r0 + cp64.reg2;
> +    uint32_t *r1 = select_user_reg(regs, cp64.reg1);
> +    uint32_t *r2 = select_user_reg(regs, cp64.reg2);
>      uint64_t ticks;
>      s_time_t now;
>  
> diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h
> index 54f6ed8..7486944 100644
> --- a/xen/include/asm-arm/regs.h
> +++ b/xen/include/asm-arm/regs.h
> @@ -30,6 +30,12 @@
>  
>  #define return_reg(v) ((v)->arch.cpu_info->guest_cpu_user_regs.r0)
>  
> +/*
> + * Returns a pointer to the given register value in regs, taking the
> + * processor mode (CPSR) into account.
> + */
> +extern uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg);
> +
>  #endif /* __ARM_REGS_H__ */
>  /*
>   * Local variables:
> -- 
> 1.7.2.5
> 


From xen-devel-bounces@lists.xen.org Tue Dec 18 18:46:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2As-0007mb-KG; Tue, 18 Dec 2012 18:45:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Aq-0007mW-LX
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:45:52 +0000
Received: from [85.158.139.211:8770] by server-10.bemta-5.messagelabs.com id
	19/2A-13383-FD9B0D05; Tue, 18 Dec 2012 18:45:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355856350!20943345!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22056 invoked from network); 18 Dec 2012 18:45:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:45:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083006"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:45:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:45:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Am-0002h5-Nv;
	Tue, 18 Dec 2012 18:45:48 +0000
Date: Tue, 18 Dec 2012 18:45:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 0/8] xen: ARM HDLCD video driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series introduces a very simple driver for the ARM HDLCD
Controller, which means that we can finally have something on the screen
while Xen is booting on ARM :)

The driver is capable of reading the mode property in the device tree
and setting up the HDLCD accordingly. It is also capable of setting the
required OSC5 timer to the right frequency for the pixel clock.

In order to reduce code duplication with x86, I tried to generalize the
existing vesa character rendering functions into an architecture-agnostic
framebuffer driver that can be used by both the vesa and hdlcd drivers.

I would very much appreciate it if you could take a close look at the
vesa changes, because I don't have any x86 test machines that boot in
vesa mode and therefore couldn't test them.
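As a rough illustration of what such a generic character renderer boils down to (all names and the 8x8 1-bpp glyph format below are invented; the series' actual fb.c will differ in detail): the driver supplies a linear pixel buffer, and the shared code blits glyph bitmaps into character cells.

```c
#include <assert.h>
#include <stdint.h>

/* Invented sketch of a generic framebuffer character renderer: copy an
 * 8x8 1-bpp glyph into a 32-bpp linear framebuffer at cell (cx, cy). */
#define FB_W 64        /* framebuffer width in pixels  */
#define FB_H 16        /* framebuffer height in pixels */

static uint32_t fb[FB_W * FB_H];

static void fb_putc_glyph(const uint8_t glyph[8], int cx, int cy,
                          uint32_t fg, uint32_t bg)
{
    for (int row = 0; row < 8; row++)
        for (int col = 0; col < 8; col++) {
            int x = cx * 8 + col, y = cy * 8 + row;
            /* MSB of each glyph byte is the leftmost pixel */
            fb[y * FB_W + x] = (glyph[row] & (0x80 >> col)) ? fg : bg;
        }
}
```

Keeping the renderer ignorant of how the buffer was obtained (VESA BIOS on x86, HDLCD DMA buffer on ARM) is what lets both drivers share it.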


Changes in v3:
- rename fb_cr to fb_carriage_return.


Changes in v2:
- rebase on latest xen-unstable;
- add support for multiple resolutions;
- add support to dynamically change the OSC5 motherboard timer;
- add the patch "preserve DTB mappings".



Stefano Stabellini (8):
      xen/arm: introduce early_ioremap
      xen: infrastructure to have cross-platform video drivers
      xen: introduce a generic framebuffer driver
      xen/vesa: use the new fb_* functions
      xen/arm: preserve DTB mappings
      xen/device_tree: introduce find_compatible_node
      xen/arm: introduce vexpress_syscfg
      xen/arm: introduce a driver for the ARM HDLCD controller

 xen/arch/arm/Makefile                   |    1 +
 xen/arch/arm/Rules.mk                   |    2 +
 xen/arch/arm/kernel.h                   |    2 +
 xen/arch/arm/mm.c                       |   44 +++++
 xen/arch/arm/platform_vexpress.c        |   97 +++++++++++
 xen/arch/arm/setup.c                    |    8 +-
 xen/arch/x86/Rules.mk                   |    1 +
 xen/common/device_tree.c                |   51 ++++++
 xen/drivers/Makefile                    |    2 +-
 xen/drivers/char/console.c              |   12 +-
 xen/drivers/video/Makefile              |   12 +-
 xen/drivers/video/arm_hdlcd.c           |  282 +++++++++++++++++++++++++++++++
 xen/drivers/video/fb.c                  |  209 +++++++++++++++++++++++
 xen/drivers/video/fb.h                  |   49 ++++++
 xen/drivers/video/modelines.h           |   69 ++++++++
 xen/drivers/video/vesa.c                |  179 +++-----------------
 xen/drivers/video/vga.c                 |   12 +-
 xen/include/asm-arm/config.h            |    4 +
 xen/include/asm-arm/mm.h                |    5 +-
 xen/include/asm-arm/page.h              |   23 +++
 xen/include/asm-arm/platform_vexpress.h |   23 +++
 xen/include/asm-x86/config.h            |    1 +
 xen/include/xen/device_tree.h           |    3 +
 xen/include/xen/vga.h                   |    9 +-
 xen/include/xen/video.h                 |   24 +++
 25 files changed, 940 insertions(+), 184 deletions(-)


Cheers,

Stefano


From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2Bz-0007qx-Ln; Tue, 18 Dec 2012 18:47:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bx-0007q2-LS
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:02 +0000
Received: from [85.158.139.211:35896] by server-12.bemta-5.messagelabs.com id
	D1/26-02275-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355856418!21081247!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30989 invoked from network); 18 Dec 2012 18:46:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:46:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153213"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-UL;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:39 +0000
Message-ID: <1355856402-26614-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 6/8] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a find_compatible_node function that can be used by device
drivers to find the node corresponding to their device in the device
tree.

Initialize device_tree_flattened early in start_xen, so that it is
available before setup_mm. Get rid of fdt in the process.

Also add device_tree_node_compatible to device_tree.h, which is currently
missing.
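The walker-plus-accumulator pattern the patch uses can be modelled standalone: a for-each iterator invokes a callback per node, and the callback records the first match and returns early on every later visit. The walker, names and data below are invented stand-ins, not the real device-tree API.

```c
#include <assert.h>
#include <string.h>

/* Invented model of _find_compatible_node's accumulator. */
struct match {
    const char *wanted;
    int found;
    int index;
};

static int visit(const char *name, int index, void *data)
{
    struct match *m = data;

    if (m->found)                  /* already matched: skip the rest */
        return 0;
    if (strcmp(name, m->wanted) == 0) {
        m->found = 1;
        m->index = index;          /* remember where the match was */
    }
    return 0;
}

/* Invented stand-in for device_tree_for_each_node: calls cb per entry,
 * stopping early only if the callback returns non-zero. */
static int walk(const char *const *names, int n,
                int (*cb)(const char *, int, void *), void *data)
{
    for (int i = 0; i < n; i++) {
        int ret = cb(names[i], i, data);
        if (ret)
            return ret;
    }
    return 0;
}
```

With this shape, the caller checks `found` after the walk, exactly as find_compatible_node does before copying the node, depth and cell counts out to its callers.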

Changes in v2:
- remove fdt;
- return early from _find_compatible_node, if a node has already been
found.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/setup.c          |    7 ++---
 xen/common/device_tree.c      |   51 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h |    3 ++
 3 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 06a878f..2c7ee5a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -196,7 +196,6 @@ void __init start_xen(unsigned long boot_phys_offset,
                       unsigned long atag_paddr,
                       unsigned long cpuid)
 {
-    void *fdt;
     size_t fdt_size;
     int cpus, i;
 
@@ -204,12 +203,12 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     smp_clear_cpu_maps();
 
-    fdt = (void *)BOOT_MISC_VIRT_START
+    device_tree_flattened = (void *)BOOT_MISC_VIRT_START
         + (atag_paddr & ((1 << SECOND_SHIFT) - 1));
-    fdt_size = device_tree_early_init(fdt);
+    fdt_size = device_tree_early_init(device_tree_flattened);
 
     cpus = smp_get_max_cpus();
-    cmdline_parse(device_tree_bootargs(fdt));
+    cmdline_parse(device_tree_bootargs(device_tree_flattened));
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 8b4ef2f..d4391f8 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -172,6 +172,57 @@ int device_tree_for_each_node(const void *fdt,
     return 0;
 }
 
+struct find_compat {
+    const char *compatible;
+    int found;
+    int node;
+    int depth;
+    u32 address_cells;
+    u32 size_cells;
+};
+
+static int _find_compatible_node(const void *fdt,
+                             int node, const char *name, int depth,
+                             u32 address_cells, u32 size_cells,
+                             void *data)
+{
+    struct find_compat *c = (struct find_compat *) data;
+
+    if ( c->found )
+        return 0;
+
+    if ( device_tree_node_compatible(fdt, node, c->compatible) )
+    {
+        c->found = 1;
+        c->node = node;
+        c->depth = depth;
+        c->address_cells = address_cells;
+        c->size_cells = size_cells;
+    }
+    return 0;
+}
+ 
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells)
+{
+    int ret;
+    struct find_compat c;
+    c.compatible = compatible;
+    c.found = 0;
+
+    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
+    if ( !c.found )
+        return ret;
+    else
+    {
+        *node = c.node;
+        *depth = c.depth;
+        *address_cells = c.address_cells;
+        *size_cells = c.size_cells;
+        return 1;
+    }
+}
+
 /**
  * device_tree_bootargs - return the bootargs (the Xen command line)
  * @fdt flat device tree.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index a0e3a97..5a75f0e 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -54,6 +54,9 @@ void device_tree_set_reg(u32 **cell, u32 address_cells, u32 size_cells,
                          u64 start, u64 size);
 u32 device_tree_get_u32(const void *fdt, int node, const char *prop_name);
 bool_t device_tree_node_matches(const void *fdt, int node, const char *match);
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match);
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells);
 int device_tree_for_each_node(const void *fdt,
                               device_tree_node_func func, void *data);
 const char *device_tree_bootargs(const void *fdt);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2Bz-0007qx-Ln; Tue, 18 Dec 2012 18:47:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bx-0007q2-LS
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:02 +0000
Received: from [85.158.139.211:35896] by server-12.bemta-5.messagelabs.com id
	D1/26-02275-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355856418!21081247!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30989 invoked from network); 18 Dec 2012 18:46:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:46:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153213"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-UL;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:39 +0000
Message-ID: <1355856402-26614-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 6/8] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a find_compatible_node function that can be used by device
drivers to find the node corresponding to their device in the device
tree.

Initialize device_tree_flattened early in start_xen, so that it is
available before setup_mm. Get rid of fdt in the process.

Also add device_tree_node_compatible to device_tree.h, which is currently
missing.

Changes in v2:
- remove fdt;
- return early from _find_compatible_node, if a node has already been
found.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/setup.c          |    7 ++---
 xen/common/device_tree.c      |   51 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/device_tree.h |    3 ++
 3 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 06a878f..2c7ee5a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -196,7 +196,6 @@ void __init start_xen(unsigned long boot_phys_offset,
                       unsigned long atag_paddr,
                       unsigned long cpuid)
 {
-    void *fdt;
     size_t fdt_size;
     int cpus, i;
 
@@ -204,12 +203,12 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     smp_clear_cpu_maps();
 
-    fdt = (void *)BOOT_MISC_VIRT_START
+    device_tree_flattened = (void *)BOOT_MISC_VIRT_START
         + (atag_paddr & ((1 << SECOND_SHIFT) - 1));
-    fdt_size = device_tree_early_init(fdt);
+    fdt_size = device_tree_early_init(device_tree_flattened);
 
     cpus = smp_get_max_cpus();
-    cmdline_parse(device_tree_bootargs(fdt));
+    cmdline_parse(device_tree_bootargs(device_tree_flattened));
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 8b4ef2f..d4391f8 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -172,6 +172,57 @@ int device_tree_for_each_node(const void *fdt,
     return 0;
 }
 
+struct find_compat {
+    const char *compatible;
+    int found;
+    int node;
+    int depth;
+    u32 address_cells;
+    u32 size_cells;
+};
+
+static int _find_compatible_node(const void *fdt,
+                             int node, const char *name, int depth,
+                             u32 address_cells, u32 size_cells,
+                             void *data)
+{
+    struct find_compat *c = (struct find_compat *) data;
+
+    if ( c->found )
+        return 0;
+
+    if ( device_tree_node_compatible(fdt, node, c->compatible) )
+    {
+        c->found = 1;
+        c->node = node;
+        c->depth = depth;
+        c->address_cells = address_cells;
+        c->size_cells = size_cells;
+    }
+    return 0;
+}
+ 
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells)
+{
+    int ret;
+    struct find_compat c;
+    c.compatible = compatible;
+    c.found = 0;
+
+    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
+    if ( !c.found )
+        return ret;
+    else
+    {
+        *node = c.node;
+        *depth = c.depth;
+        *address_cells = c.address_cells;
+        *size_cells = c.size_cells;
+        return 1;
+    }
+}
+
 /**
  * device_tree_bootargs - return the bootargs (the Xen command line)
  * @fdt flat device tree.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index a0e3a97..5a75f0e 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -54,6 +54,9 @@ void device_tree_set_reg(u32 **cell, u32 address_cells, u32 size_cells,
                          u64 start, u64 size);
 u32 device_tree_get_u32(const void *fdt, int node, const char *prop_name);
 bool_t device_tree_node_matches(const void *fdt, int node, const char *match);
+bool_t device_tree_node_compatible(const void *fdt, int node, const char *match);
+int find_compatible_node(const char *compatible, int *node, int *depth,
+                u32 *address_cells, u32 *size_cells);
 int device_tree_for_each_node(const void *fdt,
                               device_tree_node_func func, void *data);
 const char *device_tree_bootargs(const void *fdt);
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C1-0007rp-9Y; Tue, 18 Dec 2012 18:47:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bz-0007qS-5e
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:03 +0000
Received: from [85.158.139.211:50833] by server-8.bemta-5.messagelabs.com id
	88/75-15003-62AB0D05; Tue, 18 Dec 2012 18:47:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355856418!21081247!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31086 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153215"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-VX;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:41 +0000
Message-ID: <1355856402-26614-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 8/8] xen/arm: introduce a driver for the ARM
	HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Read the screen resolution setting from the device tree, find the
corresponding modeline in a small table of standard video modes, and set
the hardware accordingly.

Use vexpress_syscfg to configure the pixel clock.

Use the generic framebuffer functions to print on the screen.

Changes in v2:
- read mode from DT;
- support multiple resolutions;
- use vexpress_syscfg to set the pixclock.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Rules.mk         |    1 +
 xen/drivers/video/Makefile    |    1 +
 xen/drivers/video/arm_hdlcd.c |  282 +++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/modelines.h |   69 ++++++++++
 xen/include/asm-arm/config.h  |    2 +
 5 files changed, 355 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/arm_hdlcd.c
 create mode 100644 xen/drivers/video/modelines.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index fa9f9c1..9580e6b 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -8,6 +8,7 @@
 
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
+HAS_ARM_HDLCD := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 3b3eb43..8a6f5da 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
 obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
+obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
new file mode 100644
index 0000000..9e69856
--- /dev/null
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -0,0 +1,282 @@
+/*
+ * xen/drivers/video/arm_hdlcd.c
+ *
+ * Driver for ARM HDLCD Controller
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/delay.h>
+#include <asm/types.h>
+#include <asm/platform_vexpress.h>
+#include <xen/config.h>
+#include <xen/device_tree.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include "font.h"
+#include "fb.h"
+#include "modelines.h"
+
+#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
+
+#define HDLCD_INTMASK       (0x18/4)
+#define HDLCD_FBBASE        (0x100/4)
+#define HDLCD_LINELENGTH    (0x104/4)
+#define HDLCD_LINECOUNT     (0x108/4)
+#define HDLCD_LINEPITCH     (0x10C/4)
+#define HDLCD_BUS           (0x110/4)
+#define HDLCD_VSYNC         (0x200/4)
+#define HDLCD_VBACK         (0x204/4)
+#define HDLCD_VDATA         (0x208/4)
+#define HDLCD_VFRONT        (0x20C/4)
+#define HDLCD_HSYNC         (0x210/4)
+#define HDLCD_HBACK         (0x214/4)
+#define HDLCD_HDATA         (0x218/4)
+#define HDLCD_HFRONT        (0x21C/4)
+#define HDLCD_POLARITIES    (0x220/4)
+#define HDLCD_COMMAND       (0x230/4)
+#define HDLCD_PF            (0x240/4)
+#define HDLCD_RED           (0x244/4)
+#define HDLCD_GREEN         (0x248/4)
+#define HDLCD_BLUE          (0x24C/4)
+
+static void vga_noop_puts(const char *s) {}
+void (*video_puts)(const char *) = vga_noop_puts;
+
+static void hdlcd_flush(void)
+{
+    dsb();
+}
+
+static void set_color_masks(int bpp,
+                       int *red_shift, int *green_shift, int *blue_shift,
+                       int *red_size, int *green_size, int *blue_size)
+{
+    switch (bpp) {
+        case 2:
+            *red_shift = 0;
+            *green_shift = 5;
+            *blue_shift = 11;
+            *red_size = 5;
+            *green_size = 6;
+            *blue_size = 5;
+            break;
+        case 3:
+        case 4:
+            *red_shift = 0;
+            *green_shift = 8;
+            *blue_shift = 16;
+            *red_size = 8;
+            *green_size = 8;
+            *blue_size = 8;
+            break;
+        default:
+            BUG();
+            break;
+    }
+}
+
+static void set_pixclock(uint32_t pixclock)
+{
+    vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, V2M_SYS_CFG_OSC5, &pixclock);
+}
+
+void __init video_init(void)
+{
+    int node, depth;
+    u32 address_cells, size_cells;
+    struct fb_prop fbp;
+    unsigned char *lfb;
+    paddr_t hdlcd_start, hdlcd_size;
+    paddr_t framebuffer_start, framebuffer_size;
+    const struct fdt_property *prop;
+    const u32 *cell;
+    const char *mode_string;
+    char _mode_string[16];
+    int bpp;
+    int red_shift, green_shift, blue_shift;
+    int red_size, green_size, blue_size;
+    struct modeline *videomode = NULL;
+    int i;
+
+    if ( find_compatible_node("arm,hdlcd", &node, &depth,
+                &address_cells, &size_cells) <= 0 )
+        return;
+
+    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &hdlcd_start, &hdlcd_size); 
+
+    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &framebuffer_start, &framebuffer_size); 
+
+    mode_string = fdt_getprop(device_tree_flattened, node, "mode", NULL);
+    if ( !mode_string )
+    {
+        bpp = 4;
+        set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                &red_size, &green_size, &blue_size);
+        memcpy(_mode_string, "1280x1024@60", sizeof("1280x1024@60"));
+    }
+    else if ( strlen(mode_string) < strlen("800x600@60") )
+    {
+        printk("HDLCD: invalid modeline=%s\n", mode_string);
+        return;
+    } else {
+        char *s = strchr(mode_string, '-');
+        if ( !s )
+        {
+            printk("HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+                    mode_string);
+            bpp = 4;
+            set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                    &red_size, &green_size, &blue_size);
+            memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
+        } else {
+            if ( strlen(s) < 6 )
+            {
+                printk("HDLCD: invalid mode %s\n", mode_string);
+                return;
+            }
+            s++;
+            if ( !strncmp(s, "16", 2) )
+            {
+                bpp = 2;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "24", 2) )
+            {
+                bpp = 3;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "32", 2) )
+            {
+                bpp = 4;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            } else  {
+                printk("HDLCD: unsupported bpp %s\n", s);
+                return;
+            }
+            i = s - mode_string - 1;
+            memcpy(_mode_string, mode_string, i);
+            memcpy(_mode_string + i, mode_string + i + 3, 4);
+        }
+    }
+
+    for ( i = 0; i < ARRAY_SIZE(videomodes); i++ )
+    {
+        if ( !strcmp(_mode_string, videomodes[i].mode) )
+        {
+            videomode = &videomodes[i];
+            break;
+        }
+    }
+    if ( !videomode )
+    {
+        printk("HDLCD: unsupported videomode %s\n", _mode_string);
+        return;
+    }
+
+    if ( !hdlcd_start || !framebuffer_start )
+        return;
+
+    if ( framebuffer_size < bpp * videomode->xres * videomode->yres )
+    {
+        printk("HDLCD: the framebuffer is too small, disable the HDLCD driver\n");
+        return;
+    }
+
+    printk("Initializing HDLCD driver\n");
+
+    lfb = early_ioremap(framebuffer_start, framebuffer_size, DEV_WC);
+    if ( !lfb )
+    {
+        printk("Couldn't map the framebuffer\n");
+        return;
+    }
+    memset(lfb, 0x00, bpp * videomode->xres * videomode->yres);
+
+    /* uses FIXMAP_MISC */
+    set_pixclock(videomode->pixclock);
+
+    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
+    HDLCD[HDLCD_COMMAND] = 0;
+
+    HDLCD[HDLCD_LINELENGTH] = videomode->xres * bpp;
+    HDLCD[HDLCD_LINECOUNT] = videomode->yres - 1;
+    HDLCD[HDLCD_LINEPITCH] = videomode->xres * bpp;
+    HDLCD[HDLCD_PF] = ((bpp - 1) << 3);
+    HDLCD[HDLCD_INTMASK] = 0;
+    HDLCD[HDLCD_FBBASE] = framebuffer_start;
+    HDLCD[HDLCD_BUS] = 0xf00 | (1 << 4);
+    HDLCD[HDLCD_VBACK] = videomode->vback - 1;
+    HDLCD[HDLCD_VSYNC] = videomode->vsync - 1;
+    HDLCD[HDLCD_VDATA] = videomode->yres - 1;
+    HDLCD[HDLCD_VFRONT] = videomode->vfront - 1;
+    HDLCD[HDLCD_HBACK] = videomode->hback - 1;
+    HDLCD[HDLCD_HSYNC] = videomode->hsync - 1;
+    HDLCD[HDLCD_HDATA] = videomode->xres - 1;
+    HDLCD[HDLCD_HFRONT] = videomode->hfront - 1;
+    HDLCD[HDLCD_POLARITIES] = (1 << 2) | (1 << 3);
+    HDLCD[HDLCD_RED] = (red_size << 8) | red_shift;
+    HDLCD[HDLCD_GREEN] = (green_size << 8) | green_shift;
+    HDLCD[HDLCD_BLUE] = (blue_size << 8) | blue_shift;
+    HDLCD[HDLCD_COMMAND] = 1;
+    clear_fixmap(FIXMAP_MISC);
+
+    fbp.pixel_on = (((1 << red_size) - 1) << red_shift) |
+        (((1 << green_size) - 1) << green_shift) |
+        (((1 << blue_size) - 1) << blue_shift);
+    fbp.lfb = lfb;
+    fbp.font = &font_vga_8x16;
+    fbp.bits_per_pixel = bpp*8;
+    fbp.bytes_per_line = bpp*videomode->xres;
+    fbp.width = videomode->xres;
+    fbp.height = videomode->yres;
+    fbp.flush = hdlcd_flush;
+    fbp.text_columns = videomode->xres / 8;
+    fbp.text_rows = videomode->yres / 16;
+    if ( fb_init(fbp) < 0 )
+        return;
+    video_puts = fb_scroll_puts;
+}
+
+void video_endboot(void)
+{
+    if ( video_puts != vga_noop_puts )
+        fb_alloc();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/video/modelines.h b/xen/drivers/video/modelines.h
new file mode 100644
index 0000000..b91368d
--- /dev/null
+++ b/xen/drivers/video/modelines.h
@@ -0,0 +1,69 @@
+/*
+ * xen/drivers/video/modelines.h
+ *
+ * Timings for many popular monitor resolutions
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_MODELINES_H
+#define _XEN_MODELINES_H
+
+struct modeline {
+    const char* mode;  /* in the form 1280x1024@60 */
+    uint32_t pixclock; /* kHz */
+    uint32_t xres;
+    uint32_t hfront;   /* horizontal front porch in pixels */
+    uint32_t hsync;    /* horizontal sync pulse in pixels */
+    uint32_t hback;    /* horizontal back porch in pixels */
+    uint32_t yres;
+    uint32_t vfront;   /* vertical front porch in lines */
+    uint32_t vsync;    /* vertical sync pulse in lines */
+    uint32_t vback;    /* vertical back porch in lines */
+};
+
+struct modeline __initdata videomodes[] = {
+    { "640x480@60",   25175,  640,  16,   96,   48,   480,  11,   2,    31 },
+    { "640x480@72",   31500,  640,  24,   40,   128,  480,  9,    3,    28 },
+    { "640x480@75",   31500,  640,  16,   96,   48,   480,  11,   2,    32 },
+    { "640x480@85",   36000,  640,  32,   48,   112,  480,  1,    3,    25 },
+    { "800x600@56",   38100,  800,  32,   128,  128,  600,  1,    4,    14 },
+    { "800x600@60",   40000,  800,  40,   128,  88 ,  600,  1,    4,    23 },
+    { "800x600@72",   50000,  800,  56,   120,  64 ,  600,  37,   6,    23 },
+    { "800x600@75",   49500,  800,  16,   80,   160,  600,  1,    2,    21 },
+    { "800x600@85",   56250,  800,  32,   64,   152,  600,  1,    3,    27 },
+    { "1024x768@60",  65000,  1024, 24,   136,  160,  768,  3,    6,    29 },
+    { "1024x768@70",  75000,  1024, 24,   136,  144,  768,  3,    6,    29 },
+    { "1024x768@75",  78750,  1024, 16,   96,   176,  768,  1,    3,    28 },
+    { "1024x768@85",  94500,  1024, 48,   96,   208,  768,  1,    3,    36 },
+    { "1280x1024@60", 108000, 1280, 48,   112,  248,  1024, 1,    3,    38 },
+    { "1280x1024@75", 135000, 1280, 16,   144,  248,  1024, 1,    3,    38 },
+    { "1280x1024@85", 157500, 1280, 64,   160,  224,  1024, 1,    3,    44 },
+    { "1400x1050@60", 122610, 1400, 88,   152,  240,  1050, 1,    3,    33 },
+    { "1400x1050@75", 155850, 1400, 96,   152,  248,  1050, 1,    3,    42 },
+    { "1600x1200@60", 162000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@65", 175500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@70", 189000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@75", 202500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@85", 229500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1792x1344@60", 204800, 1792, 128,  200,  328,  1344, 1,    3,    46 },
+    { "1792x1344@75", 261000, 1792, 96,   216,  352,  1344, 1,    3,    69 },
+    { "1856x1392@60", 218300, 1856, 96,   224,  352,  1392, 1,    3,    43 },
+    { "1856x1392@75", 288000, 1856, 128,  224,  352,  1392, 1,    3,    104 },
+    { "1920x1200@75", 193160, 1920, 128,  208,  336,  1200, 1,    3,    38 },
+    { "1920x1440@60", 234000, 1920, 128,  208,  344,  1440, 1,    3,    56 },
+    { "1920x1440@75", 297000, 1920, 144,  224,  352,  1440, 1,    3,    56 },
+};
+
+#endif
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 87db0d1..d8aa66b 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 
 #define CONFIG_DOMAIN_PAGE 1
 
+#define CONFIG_VIDEO 1
+
 #define OPT_CONSOLE_STR "com1"
 
 #ifdef MAX_PHYS_CPUS
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C0-0007rZ-Rp; Tue, 18 Dec 2012 18:47:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bz-0007qR-4S
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:03 +0000
Received: from [85.158.138.51:33957] by server-6.bemta-3.messagelabs.com id
	77/EF-12154-62AB0D05; Tue, 18 Dec 2012 18:47:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355856420!21327608!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13422 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083193"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-QD;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:36 +0000
Message-ID: <1355856402-26614-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Abstract away from vesa.c the functions to handle a linear framebuffer
and print characters to it.
The corresponding functions are going to be removed from vesa.c in the
next patch.

Changes in v3:
- rename fb_cr to fb_carriage_return.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/Makefile |    1 +
 xen/drivers/video/fb.c     |  209 ++++++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/fb.h     |   49 ++++++++++
 3 files changed, 259 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/fb.c
 create mode 100644 xen/drivers/video/fb.h

diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 2993c39..3b3eb43 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -2,4 +2,5 @@ obj-$(HAS_VGA) := vga.o
 obj-$(HAS_VIDEO) += font_8x14.o
 obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/fb.c b/xen/drivers/video/fb.c
new file mode 100644
index 0000000..a4f0500
--- /dev/null
+++ b/xen/drivers/video/fb.c
@@ -0,0 +1,209 @@
+/******************************************************************************
+ * fb.c
+ *
+ * linear frame buffer handling.
+ */
+
+#include <xen/config.h>
+#include <xen/kernel.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include "fb.h"
+#include "font.h"
+
+#define MAX_XRES 1900
+#define MAX_YRES 1200
+#define MAX_BPP 4
+#define MAX_FONT_W 8
+#define MAX_FONT_H 16
+static __initdata unsigned int line_len[MAX_XRES / MAX_FONT_W];
+static __initdata unsigned char lbuf[MAX_XRES * MAX_BPP];
+static __initdata unsigned char text_buf[(MAX_XRES / MAX_FONT_W) * \
+                          (MAX_YRES / MAX_FONT_H)];
+
+struct fb_status {
+    struct fb_prop fbp;
+
+    unsigned char *lbuf, *text_buf;
+    unsigned int *line_len;
+    unsigned int xpos, ypos;
+};
+static struct fb_status fb;
+
+static void fb_show_line(
+    const unsigned char *text_line,
+    unsigned char *video_line,
+    unsigned int nr_chars,
+    unsigned int nr_cells)
+{
+    unsigned int i, j, b, bpp, pixel;
+
+    bpp = (fb.fbp.bits_per_pixel + 7) >> 3;
+
+    for ( i = 0; i < fb.fbp.font->height; i++ )
+    {
+        unsigned char *ptr = fb.lbuf;
+
+        for ( j = 0; j < nr_chars; j++ )
+        {
+            const unsigned char *bits = fb.fbp.font->data;
+            bits += ((text_line[j] * fb.fbp.font->height + i) *
+                     ((fb.fbp.font->width + 7) >> 3));
+            for ( b = fb.fbp.font->width; b--; )
+            {
+                pixel = (*bits & (1u<<b)) ? fb.fbp.pixel_on : 0;
+                memcpy(ptr, &pixel, bpp);
+                ptr += bpp;
+            }
+        }
+
+        memset(ptr, 0, (fb.fbp.width - nr_chars * fb.fbp.font->width) * bpp);
+        memcpy(video_line, fb.lbuf, nr_cells * fb.fbp.font->width * bpp);
+        video_line += fb.fbp.bytes_per_line;
+    }
+}
+
+/* Fast mode which redraws all modified parts of a 2D text buffer. */
+void fb_redraw_puts(const char *s)
+{
+    unsigned int i, min_redraw_y = fb.ypos;
+    char c;
+
+    /* Paste characters into text buffer. */
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            if ( ++fb.ypos >= fb.fbp.text_rows )
+            {
+                min_redraw_y = 0;
+                fb.ypos = fb.fbp.text_rows - 1;
+                memmove(fb.text_buf, fb.text_buf + fb.fbp.text_columns,
+                        fb.ypos * fb.fbp.text_columns);
+                memset(fb.text_buf + fb.ypos * fb.fbp.text_columns, 0, fb.xpos);
+            }
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++ + fb.ypos * fb.fbp.text_columns] = c;
+    }
+
+    /* Render modified section of text buffer to VESA linear framebuffer. */
+    for ( i = min_redraw_y; i <= fb.ypos; i++ )
+    {
+        const unsigned char *line = fb.text_buf + i * fb.fbp.text_columns;
+        unsigned int width;
+
+        for ( width = fb.fbp.text_columns; width; --width )
+            if ( line[width - 1] )
+                 break;
+        fb_show_line(line,
+                       fb.fbp.lfb + i * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                       width, max(fb.line_len[i], width));
+        fb.line_len[i] = width;
+    }
+
+    fb.fbp.flush();
+}
+
+/* Slower line-based scroll mode which interacts better with dom0. */
+void fb_scroll_puts(const char *s)
+{
+    unsigned int i;
+    char c;
+
+    while ( (c = *s++) != '\0' )
+    {
+        if ( (c == '\n') || (fb.xpos >= fb.fbp.text_columns) )
+        {
+            unsigned int bytes = (fb.fbp.width *
+                                  ((fb.fbp.bits_per_pixel + 7) >> 3));
+            unsigned char *src = fb.fbp.lfb + fb.fbp.font->height * fb.fbp.bytes_per_line;
+            unsigned char *dst = fb.fbp.lfb;
+
+            /* New line: scroll all previous rows up one line. */
+            for ( i = fb.fbp.font->height; i < fb.fbp.height; i++ )
+            {
+                memcpy(dst, src, bytes);
+                src += fb.fbp.bytes_per_line;
+                dst += fb.fbp.bytes_per_line;
+            }
+
+            /* Render new line. */
+            fb_show_line(
+                fb.text_buf,
+                fb.fbp.lfb + (fb.fbp.text_rows-1) * fb.fbp.font->height * fb.fbp.bytes_per_line,
+                fb.xpos, fb.fbp.text_columns);
+
+            fb.xpos = 0;
+        }
+
+        if ( c != '\n' )
+            fb.text_buf[fb.xpos++] = c;
+    }
+
+    fb.fbp.flush();
+}
+
+void fb_carriage_return(void)
+{
+    fb.xpos = 0;
+}
+
+int __init fb_init(struct fb_prop fbp)
+{
+    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
+    {
+        printk("Couldn't initialize a %ux%u framebuffer early.\n",
+               fbp.width, fbp.height);
+        return -EINVAL;
+    }
+
+    fb.fbp = fbp;
+    fb.lbuf = lbuf;
+    fb.text_buf = text_buf;
+    fb.line_len = line_len;
+    return 0;
+}
+
+int __init fb_alloc(void)
+{
+    fb.lbuf = NULL;
+    fb.text_buf = NULL;
+    fb.line_len = NULL;
+
+    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
+    if ( !fb.lbuf )
+        goto fail;
+
+    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
+    if ( !fb.text_buf )
+        goto fail;
+
+    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
+    if ( !fb.line_len )
+        goto fail;
+
+    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
+    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
+    memcpy(fb.line_len, line_len, fb.fbp.text_columns);
+
+    return 0;
+
+fail:
+    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
+                    "the framebuffer\n");
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+
+    return -ENOMEM;
+}
+
+void fb_free(void)
+{
+    xfree(fb.lbuf);
+    xfree(fb.text_buf);
+    xfree(fb.line_len);
+}
diff --git a/xen/drivers/video/fb.h b/xen/drivers/video/fb.h
new file mode 100644
index 0000000..0084ffa
--- /dev/null
+++ b/xen/drivers/video/fb.h
@@ -0,0 +1,49 @@
+/*
+ * xen/drivers/video/fb.h
+ *
+ * Cross-platform framebuffer library
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_FB_H
+#define _XEN_FB_H
+
+#include <xen/init.h>
+
+struct fb_prop {
+    const struct font_desc *font;
+    unsigned char *lfb;
+    unsigned int pixel_on;
+    uint16_t width, height;
+    uint16_t bytes_per_line;
+    uint16_t bits_per_pixel;
+    void (*flush)(void);
+
+    unsigned int text_columns;
+    unsigned int text_rows;
+};
+
+void fb_redraw_puts(const char *s);
+void fb_scroll_puts(const char *s);
+void fb_carriage_return(void);
+void fb_free(void);
+
+/* Initialize the framebuffer; can be called early (before xmalloc
+ * is available). */
+int __init fb_init(struct fb_prop fbp);
+/* fb_alloc allocates internal structures using xmalloc */
+int __init fb_alloc(void);
+
+#endif
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C0-0007rG-ER; Tue, 18 Dec 2012 18:47:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2By-0007qA-0p
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:02 +0000
Received: from [85.158.139.211:50793] by server-6.bemta-5.messagelabs.com id
	FA/AE-30498-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355856418!21081247!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31025 invoked from network); 18 Dec 2012 18:47:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153214"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-Uz;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:40 +0000
Message-ID: <1355856402-26614-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 7/8] xen/arm: introduce vexpress_syscfg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a Versatile Express-specific function to read and write
motherboard settings.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Makefile                   |    1 +
 xen/arch/arm/platform_vexpress.c        |   97 +++++++++++++++++++++++++++++++
 xen/include/asm-arm/platform_vexpress.h |   23 +++++++
 3 files changed, 121 insertions(+), 0 deletions(-)
 create mode 100644 xen/arch/arm/platform_vexpress.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4c61b04..24689c5 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -18,6 +18,7 @@ obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
 obj-y += physdev.o
+obj-y += platform_vexpress.o
 obj-y += setup.o
 obj-y += time.o
 obj-y += smpboot.o
diff --git a/xen/arch/arm/platform_vexpress.c b/xen/arch/arm/platform_vexpress.c
new file mode 100644
index 0000000..41e3806
--- /dev/null
+++ b/xen/arch/arm/platform_vexpress.c
@@ -0,0 +1,97 @@
+/*
+ * xen/arch/arm/platform_vexpress.c
+ *
+ * Versatile Express specific settings
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/platform_vexpress.h>
+#include <xen/mm.h>
+
+#define DCC_SHIFT      26
+#define FUNCTION_SHIFT 20
+#define SITE_SHIFT     16
+#define POSITION_SHIFT 12
+#define DEVICE_SHIFT   0
+
+int vexpress_syscfg(int write, int function, int device, uint32_t *data)
+{
+    uint32_t *syscfg = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC);
+    uint32_t stat;
+    int dcc = 0; /* DCC to access */
+    int site = 0; /* motherboard */
+    int position = 0; /* motherboard */
+
+    set_fixmap(FIXMAP_MISC, V2M_SYS_MMIO_BASE >> PAGE_SHIFT, DEV_SHARED);
+
+    if ( syscfg[V2M_SYS_CFGCTRL] & V2M_SYS_CFG_START )
+        return -1;
+
+    /* clear the complete bit in the V2M_SYS_CFGSTAT status register */
+    syscfg[V2M_SYS_CFGSTAT] = 0;
+
+    if ( write )
+    {
+        /* write data */
+        syscfg[V2M_SYS_CFGDATA] = *data;
+
+        /* set control register */
+        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | V2M_SYS_CFG_WRITE |
+            (dcc << DCC_SHIFT) | (function << FUNCTION_SHIFT) |
+            (site << SITE_SHIFT) | (position << POSITION_SHIFT) |
+            (device << DEVICE_SHIFT);
+
+        /* wait for complete flag to be set */
+        do {
+            stat = syscfg[V2M_SYS_CFGSTAT];
+            dsb();
+        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
+
+        /* check error status and return error flag if set */
+        if ( stat & V2M_SYS_CFG_ERROR )
+            return -1;
+    } else {
+        /* set control register */
+        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | (dcc << DCC_SHIFT) |
+            (function << FUNCTION_SHIFT) | (site << SITE_SHIFT) |
+            (position << POSITION_SHIFT) | (device << DEVICE_SHIFT);
+
+        /* wait for complete flag to be set */
+        do {
+            stat = syscfg[V2M_SYS_CFGSTAT];
+            dsb();
+        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
+
+        /* check error status flag and return error flag if set */
+        if ( stat & V2M_SYS_CFG_ERROR )
+            return -1;
+        else
+            /* read data */
+            *data = syscfg[V2M_SYS_CFGDATA];
+    }
+
+    clear_fixmap(FIXMAP_MISC);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/platform_vexpress.h b/xen/include/asm-arm/platform_vexpress.h
index 3556af3..407602d 100644
--- a/xen/include/asm-arm/platform_vexpress.h
+++ b/xen/include/asm-arm/platform_vexpress.h
@@ -6,6 +6,29 @@
 #define V2M_SYS_FLAGSSET      (0x30)
 #define V2M_SYS_FLAGSCLR      (0x34)
 
+#define V2M_SYS_CFGDATA       (0x00A0/4)
+#define V2M_SYS_CFGCTRL       (0x00A4/4)
+#define V2M_SYS_CFGSTAT       (0x00A8/4)
+
+#define V2M_SYS_CFG_START     (1<<31)
+#define V2M_SYS_CFG_WRITE     (1<<30)
+#define V2M_SYS_CFG_ERROR     (1<<1)
+#define V2M_SYS_CFG_COMPLETE  (1<<0)
+
+#define V2M_SYS_CFG_OSC_FUNC  1
+#define V2M_SYS_CFG_OSC0      0
+#define V2M_SYS_CFG_OSC1      1
+#define V2M_SYS_CFG_OSC2      2
+#define V2M_SYS_CFG_OSC3      3
+#define V2M_SYS_CFG_OSC4      4
+#define V2M_SYS_CFG_OSC5      5
+
+#ifndef __ASSEMBLY__
+#include <xen/inttypes.h>
+
+int vexpress_syscfg(int write, int function, int device, uint32_t *data);
+#endif
+
 #endif /* __ASM_ARM_PLATFORM_H */
 /*
  * Local variables:
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C1-0007rp-9Y; Tue, 18 Dec 2012 18:47:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bz-0007qS-5e
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:03 +0000
Received: from [85.158.139.211:50833] by server-8.bemta-5.messagelabs.com id
	88/75-15003-62AB0D05; Tue, 18 Dec 2012 18:47:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355856418!21081247!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31086 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153215"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-VX;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:41 +0000
Message-ID: <1355856402-26614-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 8/8] xen/arm: introduce a driver for the ARM
	HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Read the screen resolution from the device tree, find the
corresponding modeline in a small table of standard video modes, and
program the hardware accordingly.

Use vexpress_syscfg to configure the pixel clock.

Use the generic framebuffer functions to print on the screen.
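The mode strings the driver accepts look like "1280x1024-32@60" (resolution,
optional bits per pixel, refresh rate). A minimal standalone sketch of parsing
such a string; the helper name and the use of sscanf are illustrative
assumptions (the driver itself splits the string by hand and defaults to 32
bpp when the "-NN" part is absent):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical helper (not part of the patch): split a mode string of
 * the form "1280x1024-32@60" or "800x600@60" into its components. */
static int parse_mode(const char *s, unsigned *xres, unsigned *yres,
                      unsigned *bpp, unsigned *refresh)
{
    *bpp = 32;  /* driver's default when no "-NN" component is given */
    if ( sscanf(s, "%ux%u-%u@%u", xres, yres, bpp, refresh) == 4 )
        return 0;
    if ( sscanf(s, "%ux%u@%u", xres, yres, refresh) == 3 )
        return 0;
    return -1;
}
```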

Changes in v2:
- read mode from DT;
- support multiple resolutions;
- use vexpress_syscfg to set the pixclock.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Rules.mk         |    1 +
 xen/drivers/video/Makefile    |    1 +
 xen/drivers/video/arm_hdlcd.c |  282 +++++++++++++++++++++++++++++++++++++++++
 xen/drivers/video/modelines.h |   69 ++++++++++
 xen/include/asm-arm/config.h  |    2 +
 5 files changed, 355 insertions(+), 0 deletions(-)
 create mode 100644 xen/drivers/video/arm_hdlcd.c
 create mode 100644 xen/drivers/video/modelines.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index fa9f9c1..9580e6b 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -8,6 +8,7 @@
 
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
+HAS_ARM_HDLCD := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 3b3eb43..8a6f5da 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -4,3 +4,4 @@ obj-$(HAS_VIDEO) += font_8x16.o
 obj-$(HAS_VIDEO) += font_8x8.o
 obj-$(HAS_VIDEO) += fb.o
 obj-$(HAS_VGA) += vesa.o
+obj-$(HAS_ARM_HDLCD) += arm_hdlcd.o
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
new file mode 100644
index 0000000..9e69856
--- /dev/null
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -0,0 +1,282 @@
+/*
+ * xen/drivers/video/arm_hdlcd.c
+ *
+ * Driver for ARM HDLCD Controller
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/delay.h>
+#include <asm/types.h>
+#include <asm/platform_vexpress.h>
+#include <xen/config.h>
+#include <xen/device_tree.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include "font.h"
+#include "fb.h"
+#include "modelines.h"
+
+#define HDLCD ((volatile uint32_t *) FIXMAP_ADDR(FIXMAP_MISC))
+
+#define HDLCD_INTMASK       (0x18/4)
+#define HDLCD_FBBASE        (0x100/4)
+#define HDLCD_LINELENGTH    (0x104/4)
+#define HDLCD_LINECOUNT     (0x108/4)
+#define HDLCD_LINEPITCH     (0x10C/4)
+#define HDLCD_BUS           (0x110/4)
+#define HDLCD_VSYNC         (0x200/4)
+#define HDLCD_VBACK         (0x204/4)
+#define HDLCD_VDATA         (0x208/4)
+#define HDLCD_VFRONT        (0x20C/4)
+#define HDLCD_HSYNC         (0x210/4)
+#define HDLCD_HBACK         (0x214/4)
+#define HDLCD_HDATA         (0x218/4)
+#define HDLCD_HFRONT        (0x21C/4)
+#define HDLCD_POLARITIES    (0x220/4)
+#define HDLCD_COMMAND       (0x230/4)
+#define HDLCD_PF            (0x240/4)
+#define HDLCD_RED           (0x244/4)
+#define HDLCD_GREEN         (0x248/4)
+#define HDLCD_BLUE          (0x24C/4)
+
+static void vga_noop_puts(const char *s) {}
+void (*video_puts)(const char *) = vga_noop_puts;
+
+static void hdlcd_flush(void)
+{
+    dsb();
+}
+
+static void set_color_masks(int bpp,
+                       int *red_shift, int *green_shift, int *blue_shift,
+                       int *red_size, int *green_size, int *blue_size)
+{
+    switch (bpp) {
+        case 2:
+            *red_shift = 0;
+            *green_shift = 5;
+            *blue_shift = 11;
+            *red_size = 5;
+            *green_size = 6;
+            *blue_size = 5;
+            break;
+        case 3:
+        case 4:
+            *red_shift = 0;
+            *green_shift = 8;
+            *blue_shift = 16;
+            *red_size = 8;
+            *green_size = 8;
+            *blue_size = 8;
+            break;
+        default:
+            BUG();
+            break;
+    }
+}
+
+static void set_pixclock(uint32_t pixclock)
+{
+    vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, V2M_SYS_CFG_OSC5, &pixclock);
+}
+
+void __init video_init(void)
+{
+    int node, depth;
+    u32 address_cells, size_cells;
+    struct fb_prop fbp;
+    unsigned char *lfb;
+    paddr_t hdlcd_start, hdlcd_size;
+    paddr_t framebuffer_start, framebuffer_size;
+    const struct fdt_property *prop;
+    const u32 *cell;
+    const char *mode_string;
+    char _mode_string[16];
+    int bpp;
+    int red_shift, green_shift, blue_shift;
+    int red_size, green_size, blue_size;
+    struct modeline *videomode = NULL;
+    int i;
+
+    if ( find_compatible_node("arm,hdlcd", &node, &depth,
+                &address_cells, &size_cells) <= 0 )
+        return;
+
+    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &hdlcd_start, &hdlcd_size); 
+
+    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
+    if ( !prop )
+        return;
+
+    cell = (const u32 *)prop->data;
+    device_tree_get_reg(&cell, address_cells, size_cells,
+            &framebuffer_start, &framebuffer_size); 
+
+    mode_string = fdt_getprop(device_tree_flattened, node, "mode", NULL);
+    if ( !mode_string )
+    {
+        bpp = 4;
+        set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                &red_size, &green_size, &blue_size);
+        memcpy(_mode_string, "1280x1024@60", strlen("1280x1024@60") + 1);
+    }
+    else if ( strlen(mode_string) < strlen("800x600@60") )
+    {
+        printk("HDLCD: invalid modeline=%s\n", mode_string);
+        return;
+    } else {
+        char *s = strchr(mode_string, '-');
+        if ( !s )
+        {
+            printk("HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+                    mode_string);
+            bpp = 4;
+            set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                    &red_size, &green_size, &blue_size);
+            memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
+        } else {
+            if ( strlen(s) < 6 )
+            {
+                printk("HDLCD: invalid mode %s\n", mode_string);
+                return;
+            }
+            s++;
+            if ( !strncmp(s, "16", 2) )
+            {
+                bpp = 2;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "24", 2) )
+            {
+                bpp = 3;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            }
+            else if ( !strncmp(s, "32", 2) )
+            {
+                bpp = 4;
+                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
+                        &red_size, &green_size, &blue_size);
+            } else  {
+                printk("HDLCD: unsupported bpp %s\n", s);
+                return;
+            }
+            i = s - mode_string - 1;
+            memcpy(_mode_string, mode_string, i);
+            memcpy(_mode_string + i, mode_string + i + 3, 4);
+        }
+    }
+
+    for ( i = 0; i < ARRAY_SIZE(videomodes); i++ )
+    {
+        if ( !strcmp(_mode_string, videomodes[i].mode) )
+        {
+            videomode = &videomodes[i];
+            break;
+        }
+    }
+    if ( !videomode )
+    {
+        printk("HDLCD: unsupported videomode %s\n", _mode_string);
+        return;
+    }
+
+
+    if ( !hdlcd_start || !framebuffer_start )
+        return;
+
+    if ( framebuffer_size < bpp * videomode->xres * videomode->yres )
+    {
+        printk("HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        return;
+    }
+
+    printk("Initializing HDLCD driver\n");
+
+    lfb = early_ioremap(framebuffer_start, framebuffer_size, DEV_WC);
+    if ( !lfb )
+    {
+        printk("Couldn't map the framebuffer\n");
+        return;
+    }
+    memset(lfb, 0x00, bpp * videomode->xres * videomode->yres);
+
+    /* uses FIXMAP_MISC */
+    set_pixclock(videomode->pixclock);
+
+    set_fixmap(FIXMAP_MISC, hdlcd_start >> PAGE_SHIFT, DEV_SHARED);
+    HDLCD[HDLCD_COMMAND] = 0;
+
+    HDLCD[HDLCD_LINELENGTH] = videomode->xres * bpp;
+    HDLCD[HDLCD_LINECOUNT] = videomode->yres - 1;
+    HDLCD[HDLCD_LINEPITCH] = videomode->xres * bpp;
+    HDLCD[HDLCD_PF] = ((bpp - 1) << 3);
+    HDLCD[HDLCD_INTMASK] = 0;
+    HDLCD[HDLCD_FBBASE] = framebuffer_start;
+    HDLCD[HDLCD_BUS] = 0xf00 | (1 << 4);
+    HDLCD[HDLCD_VBACK] = videomode->vback - 1;
+    HDLCD[HDLCD_VSYNC] = videomode->vsync - 1;
+    HDLCD[HDLCD_VDATA] = videomode->yres - 1;
+    HDLCD[HDLCD_VFRONT] = videomode->vfront - 1;
+    HDLCD[HDLCD_HBACK] = videomode->hback - 1;
+    HDLCD[HDLCD_HSYNC] = videomode->hsync - 1;
+    HDLCD[HDLCD_HDATA] = videomode->xres - 1;
+    HDLCD[HDLCD_HFRONT] = videomode->hfront - 1;
+    HDLCD[HDLCD_POLARITIES] = (1 << 2) | (1 << 3);
+    HDLCD[HDLCD_RED] = (red_size << 8) | red_shift;
+    HDLCD[HDLCD_GREEN] = (green_size << 8) | green_shift;
+    HDLCD[HDLCD_BLUE] = (blue_size << 8) | blue_shift;
+    HDLCD[HDLCD_COMMAND] = 1;
+    clear_fixmap(FIXMAP_MISC);
+
+    fbp.pixel_on = (((1 << red_size) - 1) << red_shift) |
+        (((1 << green_size) - 1) << green_shift) |
+        (((1 << blue_size) - 1) << blue_shift);
+    fbp.lfb = lfb;
+    fbp.font = &font_vga_8x16;
+    fbp.bits_per_pixel = bpp*8;
+    fbp.bytes_per_line = bpp*videomode->xres;
+    fbp.width = videomode->xres;
+    fbp.height = videomode->yres;
+    fbp.flush = hdlcd_flush;
+    fbp.text_columns = videomode->xres / 8;
+    fbp.text_rows = videomode->yres / 16;
+    if ( fb_init(fbp) < 0 )
+            return;
+    video_puts = fb_scroll_puts;
+}
+
+void video_endboot(void)
+{
+    if ( video_puts != vga_noop_puts )
+        fb_alloc();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/video/modelines.h b/xen/drivers/video/modelines.h
new file mode 100644
index 0000000..b91368d
--- /dev/null
+++ b/xen/drivers/video/modelines.h
@@ -0,0 +1,69 @@
+/*
+ * xen/drivers/video/modelines.h
+ *
+ * Timings for many popular monitor resolutions
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _XEN_MODELINES_H
+#define _XEN_MODELINES_H
+
+struct modeline {
+    const char* mode;  /* in the form 1280x1024@60 */
+    uint32_t pixclock; /* kHz */
+    uint32_t xres;
+    uint32_t hfront;   /* horizontal front porch in pixels */
+    uint32_t hsync;    /* horizontal sync pulse in pixels */
+    uint32_t hback;    /* horizontal back porch in pixels */
+    uint32_t yres;
+    uint32_t vfront;   /* vertical front porch in lines */
+    uint32_t vsync;    /* vertical sync pulse in lines */
+    uint32_t vback;    /* vertical back porch in lines */
+};
+
+struct modeline __initdata videomodes[] = {
+    { "640x480@60",   25175,  640,  16,   96,   48,   480,  11,   2,    31 },
+    { "640x480@72",   31500,  640,  24,   40,   128,  480,  9,    3,    28 },
+    { "640x480@75",   31500,  640,  16,   96,   48,   480,  11,   2,    32 },
+    { "640x480@85",   36000,  640,  32,   48,   112,  480,  1,    3,    25 },
+    { "800x600@56",   38100,  800,  32,   128,  128,  600,  1,    4,    14 },
+    { "800x600@60",   40000,  800,  40,   128,  88 ,  600,  1,    4,    23 },
+    { "800x600@72",   50000,  800,  56,   120,  64 ,  600,  37,   6,    23 },
+    { "800x600@75",   49500,  800,  16,   80,   160,  600,  1,    2,    21 },
+    { "800x600@85",   56250,  800,  32,   64,   152,  600,  1,    3,    27 },
+    { "1024x768@60",  65000,  1024, 24,   136,  160,  768,  3,    6,    29 },
+    { "1024x768@70",  75000,  1024, 24,   136,  144,  768,  3,    6,    29 },
+    { "1024x768@75",  78750,  1024, 16,   96,   176,  768,  1,    3,    28 },
+    { "1024x768@85",  94500,  1024, 48,   96,   208,  768,  1,    3,    36 },
+    { "1280x1024@60", 108000, 1280, 48,   112,  248,  1024, 1,    3,    38 },
+    { "1280x1024@75", 135000, 1280, 16,   144,  248,  1024, 1,    3,    38 },
+    { "1280x1024@85", 157500, 1280, 64,   160,  224,  1024, 1,    3,    44 },
+    { "1400x1050@60", 122610, 1400, 88,   152,  240,  1050, 1,    3,    33 },
+    { "1400x1050@75", 155850, 1400, 96,   152,  248,  1050, 1,    3,    42 },
+    { "1600x1200@60", 162000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@65", 175500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@70", 189000, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@75", 202500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1600x1200@85", 229500, 1600, 64,   192,  304,  1200, 1,    3,    46 },
+    { "1792x1344@60", 204800, 1792, 128,  200,  328,  1344, 1,    3,    46 },
+    { "1792x1344@75", 261000, 1792, 96,   216,  352,  1344, 1,    3,    69 },
+    { "1856x1392@60", 218300, 1856, 96,   224,  352,  1392, 1,    3,    43 },
+    { "1856x1392@75", 288000, 1856, 128,  224,  352,  1392, 1,    3,    104 },
+    { "1920x1200@75", 193160, 1920, 128,  208,  336,  1200, 1,    3,    38 },
+    { "1920x1440@60", 234000, 1920, 128,  208,  344,  1440, 1,    3,    56 },
+    { "1920x1440@75", 297000, 1920, 144,  224,  352,  1440, 1,    3,    56 },
+};
+
+#endif
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 87db0d1..d8aa66b 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 
 #define CONFIG_DOMAIN_PAGE 1
 
+#define CONFIG_VIDEO 1
+
 #define OPT_CONSOLE_STR "com1"
 
 #ifdef MAX_PHYS_CPUS
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C0-0007rG-ER; Tue, 18 Dec 2012 18:47:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2By-0007qA-0p
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:02 +0000
Received: from [85.158.139.211:50793] by server-6.bemta-5.messagelabs.com id
	FA/AE-30498-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355856418!21081247!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31025 invoked from network); 18 Dec 2012 18:47:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153214"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-Uz;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:40 +0000
Message-ID: <1355856402-26614-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 7/8] xen/arm: introduce vexpress_syscfg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a Versatile Express-specific function to read and write
motherboard configuration settings.
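As a rough standalone sketch, the CFGCTRL word that starts a configuration
transaction is composed from bit fields; the shift constants are copied from
the patch, while the helper name and the fixed motherboard values
(dcc = site = position = 0, as in the patch) are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Field offsets within V2M_SYS_CFGCTRL, as defined in the patch. */
#define DCC_SHIFT      26
#define FUNCTION_SHIFT 20
#define SITE_SHIFT     16
#define POSITION_SHIFT 12
#define DEVICE_SHIFT   0

#define V2M_SYS_CFG_START (1u << 31)
#define V2M_SYS_CFG_WRITE (1u << 30)

/* Compose the control word for a motherboard configuration transaction
 * (dcc, site and position all zero, i.e. the motherboard itself). */
static uint32_t cfgctrl_word(int write, int function, int device)
{
    return V2M_SYS_CFG_START |
           (write ? V2M_SYS_CFG_WRITE : 0) |
           ((uint32_t)function << FUNCTION_SHIFT) |
           ((uint32_t)device << DEVICE_SHIFT);
}
```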

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/Makefile                   |    1 +
 xen/arch/arm/platform_vexpress.c        |   97 +++++++++++++++++++++++++++++++
 xen/include/asm-arm/platform_vexpress.h |   23 +++++++
 3 files changed, 121 insertions(+), 0 deletions(-)
 create mode 100644 xen/arch/arm/platform_vexpress.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4c61b04..24689c5 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -18,6 +18,7 @@ obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
 obj-y += physdev.o
+obj-y += platform_vexpress.o
 obj-y += setup.o
 obj-y += time.o
 obj-y += smpboot.o
diff --git a/xen/arch/arm/platform_vexpress.c b/xen/arch/arm/platform_vexpress.c
new file mode 100644
index 0000000..41e3806
--- /dev/null
+++ b/xen/arch/arm/platform_vexpress.c
@@ -0,0 +1,97 @@
+/*
+ * xen/arch/arm/platform_vexpress.c
+ *
+ * Versatile Express specific settings
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ * Copyright (c) 2012 Citrix Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/platform_vexpress.h>
+#include <xen/mm.h>
+
+#define DCC_SHIFT      26
+#define FUNCTION_SHIFT 20
+#define SITE_SHIFT     16
+#define POSITION_SHIFT 12
+#define DEVICE_SHIFT   0
+
+int vexpress_syscfg(int write, int function, int device, uint32_t *data)
+{
+    uint32_t *syscfg = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC);
+    uint32_t stat;
+    int dcc = 0; /* DCC to access */
+    int site = 0; /* motherboard */
+    int position = 0; /* motherboard */
+
+    set_fixmap(FIXMAP_MISC, V2M_SYS_MMIO_BASE >> PAGE_SHIFT, DEV_SHARED);
+
+    if ( syscfg[V2M_SYS_CFGCTRL] & V2M_SYS_CFG_START )
+        return -1;
+
+    /* clear the complete bit in the V2M_SYS_CFGSTAT status register */
+    syscfg[V2M_SYS_CFGSTAT] = 0;
+
+    if ( write )
+    {
+        /* write data */
+        syscfg[V2M_SYS_CFGDATA] = *data;
+
+        /* set control register */
+        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | V2M_SYS_CFG_WRITE |
+            (dcc << DCC_SHIFT) | (function << FUNCTION_SHIFT) |
+            (site << SITE_SHIFT) | (position << POSITION_SHIFT) |
+            (device << DEVICE_SHIFT);
+
+        /* wait for complete flag to be set */
+        do {
+            stat = syscfg[V2M_SYS_CFGSTAT];
+            dsb();
+        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
+
+        /* check error status and return error flag if set */
+        if ( stat & V2M_SYS_CFG_ERROR )
+            return -1;
+    } else {
+        /* set control register */
+        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | (dcc << DCC_SHIFT) |
+            (function << FUNCTION_SHIFT) | (site << SITE_SHIFT) |
+            (position << POSITION_SHIFT) | (device << DEVICE_SHIFT);
+
+        /* wait for complete flag to be set */
+        do {
+            stat = syscfg[V2M_SYS_CFGSTAT];
+            dsb();
+        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
+
+        /* check error status flag and return error flag if set */
+        if ( stat & V2M_SYS_CFG_ERROR )
+            return -1;
+        else
+            /* read data */
+            *data = syscfg[V2M_SYS_CFGDATA];
+    }
+
+    clear_fixmap(FIXMAP_MISC);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/platform_vexpress.h b/xen/include/asm-arm/platform_vexpress.h
index 3556af3..407602d 100644
--- a/xen/include/asm-arm/platform_vexpress.h
+++ b/xen/include/asm-arm/platform_vexpress.h
@@ -6,6 +6,29 @@
 #define V2M_SYS_FLAGSSET      (0x30)
 #define V2M_SYS_FLAGSCLR      (0x34)
 
+#define V2M_SYS_CFGDATA       (0x00A0/4)
+#define V2M_SYS_CFGCTRL       (0x00A4/4)
+#define V2M_SYS_CFGSTAT       (0x00A8/4)
+
+#define V2M_SYS_CFG_START     (1<<31)
+#define V2M_SYS_CFG_WRITE     (1<<30)
+#define V2M_SYS_CFG_ERROR     (1<<1)
+#define V2M_SYS_CFG_COMPLETE  (1<<0)
+
+#define V2M_SYS_CFG_OSC_FUNC  1
+#define V2M_SYS_CFG_OSC0      0
+#define V2M_SYS_CFG_OSC1      1
+#define V2M_SYS_CFG_OSC2      2
+#define V2M_SYS_CFG_OSC3      3
+#define V2M_SYS_CFG_OSC4      4
+#define V2M_SYS_CFG_OSC5      5
+
+#ifndef __ASSEMBLY__
+#include <xen/inttypes.h>
+
+int vexpress_syscfg(int write, int function, int device, uint32_t *data);
+#endif
+
 #endif /* __ASM_ARM_PLATFORM_H */
 /*
  * Local variables:
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C1-0007sT-Sd; Tue, 18 Dec 2012 18:47:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2C0-0007qB-EO
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:04 +0000
Received: from [85.158.139.83:13907] by server-2.bemta-5.messagelabs.com id
	D7/7B-16162-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355856418!28194012!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23349 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083192"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-SZ;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:38 +0000
Message-ID: <1355856402-26614-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 5/8] xen/arm: preserve DTB mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At the moment we destroy the DTB mappings in setup_pagetables and do
not restore them until setup_mm.

Keep the temporary DTB mapping until the new ones are created.
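For context, the mapping being preserved is a single second-level (2MB) entry.
A standalone sketch of how the index of that entry is derived from a virtual
address; this is an assumption based on LPAE 2MB block mappings, not code from
the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Assumption: with LPAE, each second-level entry covers 2MB, so the
 * index of the entry covering a virtual address is bits [29:21]. */
#define SECOND_SHIFT 21
#define LPAE_ENTRIES 512u

static unsigned second_index(uint32_t vaddr)
{
    return (vaddr >> SECOND_SHIFT) & (LPAE_ENTRIES - 1);
}
```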

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/kernel.h    |    2 ++
 xen/arch/arm/mm.c        |   12 ++++++++++++
 xen/arch/arm/setup.c     |    1 +
 xen/include/asm-arm/mm.h |    2 ++
 4 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index 4533568..a179ffb 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -38,4 +38,6 @@ struct kernel_info {
 int kernel_prepare(struct kernel_info *info);
 void kernel_load(struct kernel_info *info);
 
+extern char _sdtb[];
+
 #endif /* #ifdef __ARCH_ARM_KERNEL_H__ */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0d7a163..2410794 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -34,6 +34,7 @@
 #include <asm/current.h>
 #include <public/memory.h>
 #include <xen/sched.h>
+#include "kernel.h"
 
 struct domain *dom_xen, *dom_io;
 
@@ -295,12 +296,23 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
     /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
 
+    /* preserve the DTB mapping a little while longer */
+    pte = mfn_to_xen_entry(((unsigned long) _sdtb + boot_phys_offset) >> PAGE_SHIFT);
+    write_pte(xen_second + second_linear_offset(BOOT_MISC_VIRT_START), pte);
+
     /* From now on, no mapping may be both writable and executable. */
     WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);
     /* Flush everything after setting WXN bit. */
     flush_xen_text_tlb();
 }
 
+void __init destroy_dtb_mapping(void)
+{
+    /* destroy old DTB mapping */
+    xen_second[second_linear_offset(BOOT_MISC_VIRT_START)].bits = 0;
+    dsb();
+}
+
 /* MMU setup for secondary CPUS (which already have paging enabled) */
 void __cpuinit mmu_init_secondary_cpu(void)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..06a878f 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -229,6 +229,7 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     init_xen_time();
 
+    destroy_dtb_mapping();
     setup_mm(atag_paddr, fdt_size);
 
     /* Setup Hyp vector base */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 4ed5df6..6fa4308 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -139,6 +139,8 @@ extern unsigned long total_pages;
 
 /* Boot-time pagetable setup */
 extern void setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr);
+/* Destroy temporary DTB mapping */
+extern void destroy_dtb_mapping(void);
 /* MMU setup for seccondary CPUS (which already have paging enabled) */
 extern void __cpuinit mmu_init_secondary_cpu(void);
 /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory.
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C0-0007r7-2a; Tue, 18 Dec 2012 18:47:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bx-0007q8-TC
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:02 +0000
Received: from [85.158.139.83:13859] by server-11.bemta-5.messagelabs.com id
	4B/65-31624-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355856418!28194012!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23313 invoked from network); 18 Dec 2012 18:47:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083191"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-Pi;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:35 +0000
Message-ID: <1355856402-26614-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 2/8] xen: infrastructure to have
	cross-platform video drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- introduce a new HAS_VIDEO config variable;
- build xen/drivers/video/font* if HAS_VIDEO;
- rename vga_puts to video_puts;
- rename vga_init to video_init;
- rename vga_endboot to video_endboot.
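
The rename is worthwhile because video_puts is a function pointer that any
backend can take over, so generic console code never needs to know which
driver is active. The pattern can be sketched as below; every name here
(captured, noop_puts, text_puts) is an illustrative stand-in, not the actual
Xen code:

```c
#include <stdio.h>
#include <string.h>

static char captured[64];                 /* records what the backend printed */

static void noop_puts(const char *s) { (void)s; }  /* default: drop output */

static void text_puts(const char *s)      /* a pretend text-mode backend */
{
    snprintf(captured, sizeof(captured), "text:%s", s);
}

/* The single indirection point: console code always calls video_puts(),
 * and only the driver that successfully probed the hardware swaps it in. */
static void (*video_puts)(const char *) = noop_puts;

static void video_init(void)
{
    /* A real video_init() would probe VGA/VESA/etc. before choosing. */
    video_puts = text_puts;
}
```

Console writers such as __putstr() then stay backend-agnostic: they call
video_puts() unconditionally, and output is simply a no-op until some driver
registers itself.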

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/Rules.mk        |    1 +
 xen/arch/x86/Rules.mk        |    1 +
 xen/drivers/Makefile         |    2 +-
 xen/drivers/char/console.c   |   12 ++++++------
 xen/drivers/video/Makefile   |   10 +++++-----
 xen/drivers/video/vesa.c     |    4 ++--
 xen/drivers/video/vga.c      |   12 ++++++------
 xen/include/asm-x86/config.h |    1 +
 xen/include/xen/vga.h        |    9 +--------
 xen/include/xen/video.h      |   24 ++++++++++++++++++++++++
 10 files changed, 48 insertions(+), 28 deletions(-)
 create mode 100644 xen/include/xen/video.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index a45c654..fa9f9c1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -7,6 +7,7 @@
 #
 
 HAS_DEVICE_TREE := y
+HAS_VIDEO := y
 
 CFLAGS += -fno-builtin -fno-common -Wredundant-decls
 CFLAGS += -iwithprefix include -Werror -Wno-pointer-arith -pipe
diff --git a/xen/arch/x86/Rules.mk b/xen/arch/x86/Rules.mk
index 963850f..0a9d68d 100644
--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -3,6 +3,7 @@
 
 HAS_ACPI := y
 HAS_VGA  := y
+HAS_VIDEO  := y
 HAS_CPUFREQ := y
 HAS_PCI := y
 HAS_PASSTHROUGH := y
diff --git a/xen/drivers/Makefile b/xen/drivers/Makefile
index 7239375..9c70f20 100644
--- a/xen/drivers/Makefile
+++ b/xen/drivers/Makefile
@@ -3,4 +3,4 @@ subdir-$(HAS_CPUFREQ) += cpufreq
 subdir-$(HAS_PCI) += pci
 subdir-$(HAS_PASSTHROUGH) += passthrough
 subdir-$(HAS_ACPI) += acpi
-subdir-$(HAS_VGA) += video
+subdir-$(HAS_VIDEO) += video
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index ff360fe..1b7a593 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -21,7 +21,7 @@
 #include <xen/delay.h>
 #include <xen/guest_access.h>
 #include <xen/shutdown.h>
-#include <xen/vga.h>
+#include <xen/video.h>
 #include <xen/kexec.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
@@ -297,7 +297,7 @@ static void dump_console_ring_key(unsigned char key)
     buf[sofar] = '\0';
 
     sercon_puts(buf);
-    vga_puts(buf);
+    video_puts(buf);
 
     free_xenheap_pages(buf, order);
 }
@@ -383,7 +383,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
         spin_lock_irq(&console_lock);
 
         sercon_puts(kbuf);
-        vga_puts(kbuf);
+        video_puts(kbuf);
 
         if ( opt_console_to_ring )
         {
@@ -464,7 +464,7 @@ static void __putstr(const char *str)
     ASSERT(spin_is_locked(&console_lock));
 
     sercon_puts(str);
-    vga_puts(str);
+    video_puts(str);
 
     if ( !console_locks_busted )
     {
@@ -592,7 +592,7 @@ void __init console_init_preirq(void)
         if ( *p == ',' )
             p++;
         if ( !strncmp(p, "vga", 3) )
-            vga_init();
+            video_init();
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
@@ -694,7 +694,7 @@ void __init console_endboot(void)
         printk("\n");
     }
 
-    vga_endboot();
+    video_endboot();
 
     /*
      * If user specifies so, we fool the switch routine to redirect input
diff --git a/xen/drivers/video/Makefile b/xen/drivers/video/Makefile
index 6c3e5b4..2993c39 100644
--- a/xen/drivers/video/Makefile
+++ b/xen/drivers/video/Makefile
@@ -1,5 +1,5 @@
-obj-y := vga.o
-obj-$(CONFIG_X86) += font_8x14.o
-obj-$(CONFIG_X86) += font_8x16.o
-obj-$(CONFIG_X86) += font_8x8.o
-obj-$(CONFIG_X86) += vesa.o
+obj-$(HAS_VGA) := vga.o
+obj-$(HAS_VIDEO) += font_8x14.o
+obj-$(HAS_VIDEO) += font_8x16.o
+obj-$(HAS_VIDEO) += font_8x8.o
+obj-$(HAS_VGA) += vesa.o
diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index d0a83ff..aaf8b23 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -108,7 +108,7 @@ void __init vesa_init(void)
 
     memset(lfb, 0, vram_remap);
 
-    vga_puts = vesa_redraw_puts;
+    video_puts = vesa_redraw_puts;
 
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
@@ -193,7 +193,7 @@ void __init vesa_endboot(bool_t keep)
     if ( keep )
     {
         xpos = 0;
-        vga_puts = vesa_scroll_puts;
+        video_puts = vesa_scroll_puts;
     }
     else
     {
diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
index a98bd00..40e5963 100644
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -21,7 +21,7 @@ static unsigned char *video;
 
 static void vga_text_puts(const char *s);
 static void vga_noop_puts(const char *s) {}
-void (*vga_puts)(const char *) = vga_noop_puts;
+void (*video_puts)(const char *) = vga_noop_puts;
 
 /*
  * 'vga=<mode-specifier>[,keep]' where <mode-specifier> is one of:
@@ -62,7 +62,7 @@ void vesa_endboot(bool_t keep);
 #define vesa_endboot(x)   ((void)0)
 #endif
 
-void __init vga_init(void)
+void __init video_init(void)
 {
     char *p;
 
@@ -85,7 +85,7 @@ void __init vga_init(void)
         columns = vga_console_info.u.text_mode_3.columns;
         lines   = vga_console_info.u.text_mode_3.rows;
         memset(video, 0, columns * lines * 2);
-        vga_puts = vga_text_puts;
+        video_puts = vga_text_puts;
         break;
     case XEN_VGATYPE_VESA_LFB:
     case XEN_VGATYPE_EFI_LFB:
@@ -97,16 +97,16 @@ void __init vga_init(void)
     }
 }
 
-void __init vga_endboot(void)
+void __init video_endboot(void)
 {
-    if ( vga_puts == vga_noop_puts )
+    if ( video_puts == vga_noop_puts )
         return;
 
     printk("Xen is %s VGA console.\n",
            vgacon_keep ? "keeping" : "relinquishing");
 
     if ( !vgacon_keep )
-        vga_puts = vga_noop_puts;
+        video_puts = vga_noop_puts;
     else
     {
         int bus, devfn;
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 0c4868c..e8da4f7 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -38,6 +38,7 @@
 #define CONFIG_ACPI_CSTATE 1
 
 #define CONFIG_VGA 1
+#define CONFIG_VIDEO 1
 
 #define CONFIG_HOTPLUG 1
 #define CONFIG_HOTPLUG_CPU 1
diff --git a/xen/include/xen/vga.h b/xen/include/xen/vga.h
index cc690b9..f72b63d 100644
--- a/xen/include/xen/vga.h
+++ b/xen/include/xen/vga.h
@@ -9,17 +9,10 @@
 #ifndef _XEN_VGA_H
 #define _XEN_VGA_H
 
-#include <public/xen.h>
+#include <xen/video.h>
 
 #ifdef CONFIG_VGA
 extern struct xen_vga_console_info vga_console_info;
-void vga_init(void);
-void vga_endboot(void);
-extern void (*vga_puts)(const char *);
-#else
-#define vga_init()    ((void)0)
-#define vga_endboot() ((void)0)
-#define vga_puts(s)   ((void)0)
 #endif
 
 #endif /* _XEN_VGA_H */
diff --git a/xen/include/xen/video.h b/xen/include/xen/video.h
new file mode 100644
index 0000000..2e897f9
--- /dev/null
+++ b/xen/include/xen/video.h
@@ -0,0 +1,24 @@
+/*
+ *  video.h
+ *
+ *  This file is subject to the terms and conditions of the GNU General Public
+ *  License.  See the file COPYING in the main directory of this archive
+ *  for more details.
+ */
+
+#ifndef _XEN_VIDEO_H
+#define _XEN_VIDEO_H
+
+#include <public/xen.h>
+
+#ifdef CONFIG_VIDEO
+void video_init(void);
+extern void (*video_puts)(const char *);
+void video_endboot(void);
+#else
+#define video_init()    ((void)0)
+#define video_puts(s)   ((void)0)
+#define video_endboot() ((void)0)
+#endif
+
+#endif /* _XEN_VIDEO_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C2-0007si-A7; Tue, 18 Dec 2012 18:47:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2C0-0007qB-Uz
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:05 +0000
Received: from [85.158.139.83:9311] by server-2.bemta-5.messagelabs.com id
	89/7B-16162-62AB0D05; Tue, 18 Dec 2012 18:47:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355856418!28194012!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23402 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083194"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-S1;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:37 +0000
Message-ID: <1355856402-26614-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make use of the framebuffer functions previously introduced.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
 1 files changed, 26 insertions(+), 153 deletions(-)

diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index aaf8b23..9f24d03 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -13,20 +13,15 @@
 #include <asm/io.h>
 #include <asm/page.h>
 #include "font.h"
+#include "fb.h"
 
 #define vlfb_info    vga_console_info.u.vesa_lfb
-#define text_columns (vlfb_info.width / font->width)
-#define text_rows    (vlfb_info.height / font->height)
 
-static void vesa_redraw_puts(const char *s);
-static void vesa_scroll_puts(const char *s);
+static void lfb_flush(void);
 
-static unsigned char *lfb, *lbuf, *text_buf;
-static unsigned int *__initdata line_len;
+static unsigned char *lfb;
 static const struct font_desc *font;
 static bool_t vga_compat;
-static unsigned int pixel_on;
-static unsigned int xpos, ypos;
 
 static unsigned int vram_total;
 integer_param("vesa-ram", vram_total);
@@ -87,29 +82,26 @@ void __init vesa_early_init(void)
 
 void __init vesa_init(void)
 {
-    if ( !font )
-        goto fail;
-
-    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
-    if ( !lbuf )
-        goto fail;
+    struct fb_prop fbp;
 
-    text_buf = xzalloc_bytes(text_columns * text_rows);
-    if ( !text_buf )
-        goto fail;
+    if ( !font )
+        return;
 
-    line_len = xzalloc_array(unsigned int, text_columns);
-    if ( !line_len )
-        goto fail;
+    fbp.font = font;
+    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
+    fbp.bytes_per_line = vlfb_info.bytes_per_line;
+    fbp.width = vlfb_info.width;
+    fbp.height = vlfb_info.height;
+    fbp.flush = lfb_flush;
+    fbp.text_columns = vlfb_info.width / font->width;
+    fbp.text_rows = vlfb_info.height / font->height;
 
-    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
+    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);
     if ( !lfb )
-        goto fail;
+        return;
 
     memset(lfb, 0, vram_remap);
 
-    video_puts = vesa_redraw_puts;
-
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
            vlfb_info.lfb_base, lfb,
@@ -131,7 +123,7 @@ void __init vesa_init(void)
     {
         /* Light grey in truecolor. */
         unsigned int grey = 0xaaaaaaaa;
-        pixel_on = 
+        fbp.pixel_on = 
             ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
             ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
             ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
@@ -139,15 +131,14 @@ void __init vesa_init(void)
     else
     {
         /* White(ish) in default pseudocolor palette. */
-        pixel_on = 7;
+        fbp.pixel_on = 7;
     }
 
-    return;
-
- fail:
-    xfree(lbuf);
-    xfree(text_buf);
-    xfree(line_len);
+    if ( fb_init(fbp) < 0 )
+        return;
+    if ( fb_alloc() < 0 )
+        return;
+    video_puts = fb_redraw_puts;
 }
 
 #include <asm/mtrr.h>
@@ -192,8 +183,8 @@ void __init vesa_endboot(bool_t keep)
 {
     if ( keep )
     {
-        xpos = 0;
-        video_puts = vesa_scroll_puts;
+        video_puts = fb_scroll_puts;
+        fb_carriage_return();
     }
     else
     {
@@ -202,124 +193,6 @@ void __init vesa_endboot(bool_t keep)
             memset(lfb + i * vlfb_info.bytes_per_line, 0,
                    vlfb_info.width * bpp);
         lfb_flush();
+        fb_free();
     }
-
-    xfree(line_len);
-}
-
-/* Render one line of text to given linear framebuffer line. */
-static void vesa_show_line(
-    const unsigned char *text_line,
-    unsigned char *video_line,
-    unsigned int nr_chars,
-    unsigned int nr_cells)
-{
-    unsigned int i, j, b, bpp, pixel;
-
-    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
-
-    for ( i = 0; i < font->height; i++ )
-    {
-        unsigned char *ptr = lbuf;
-
-        for ( j = 0; j < nr_chars; j++ )
-        {
-            const unsigned char *bits = font->data;
-            bits += ((text_line[j] * font->height + i) *
-                     ((font->width + 7) >> 3));
-            for ( b = font->width; b--; )
-            {
-                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
-                memcpy(ptr, &pixel, bpp);
-                ptr += bpp;
-            }
-        }
-
-        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
-        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
-        video_line += vlfb_info.bytes_per_line;
-    }
-}
-
-/* Fast mode which redraws all modified parts of a 2D text buffer. */
-static void __init vesa_redraw_puts(const char *s)
-{
-    unsigned int i, min_redraw_y = ypos;
-    char c;
-
-    /* Paste characters into text buffer. */
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            if ( ++ypos >= text_rows )
-            {
-                min_redraw_y = 0;
-                ypos = text_rows - 1;
-                memmove(text_buf, text_buf + text_columns,
-                        ypos * text_columns);
-                memset(text_buf + ypos * text_columns, 0, xpos);
-            }
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++ + ypos * text_columns] = c;
-    }
-
-    /* Render modified section of text buffer to VESA linear framebuffer. */
-    for ( i = min_redraw_y; i <= ypos; i++ )
-    {
-        const unsigned char *line = text_buf + i * text_columns;
-        unsigned int width;
-
-        for ( width = text_columns; width; --width )
-            if ( line[width - 1] )
-                 break;
-        vesa_show_line(line,
-                       lfb + i * font->height * vlfb_info.bytes_per_line,
-                       width, max(line_len[i], width));
-        line_len[i] = width;
-    }
-
-    lfb_flush();
-}
-
-/* Slower line-based scroll mode which interacts better with dom0. */
-static void vesa_scroll_puts(const char *s)
-{
-    unsigned int i;
-    char c;
-
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            unsigned int bytes = (vlfb_info.width *
-                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
-            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
-            unsigned char *dst = lfb;
-            
-            /* New line: scroll all previous rows up one line. */
-            for ( i = font->height; i < vlfb_info.height; i++ )
-            {
-                memcpy(dst, src, bytes);
-                src += vlfb_info.bytes_per_line;
-                dst += vlfb_info.bytes_per_line;
-            }
-
-            /* Render new line. */
-            vesa_show_line(
-                text_buf,
-                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
-                xpos, text_columns);
-
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++] = c;
-    }
-
-    lfb_flush();
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2Bz-0007qU-2w; Tue, 18 Dec 2012 18:47:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2Bw-0007pw-UU
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:01 +0000
Received: from [85.158.139.83:13829] by server-13.bemta-5.messagelabs.com id
	F5/AE-10716-42AB0D05; Tue, 18 Dec 2012 18:47:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355856418!28194012!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23286 invoked from network); 18 Dec 2012 18:46:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:46:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083190"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-P6;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:34 +0000
Message-ID: <1355856402-26614-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 1/8] xen/arm: introduce early_ioremap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a function to map a range of physical memory into Xen virtual
memory. It does not need the domheap to be set up, and it will be used to
map the videoram.

Also add flush_xen_data_tlb_range, which flushes a range of virtual
addresses.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/mm.c            |   32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/config.h |    2 ++
 xen/include/asm-arm/mm.h     |    3 ++-
 xen/include/asm-arm/page.h   |   23 +++++++++++++++++++++++
 4 files changed, 59 insertions(+), 1 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 855f83d..0d7a163 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -367,6 +367,38 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
 }
 
+/* Map the physical memory range [start, start + len) into virtual
+ * memory and return the virtual address of the mapping.
+ * start has to be 2MB aligned.
+ * len has to be < EARLY_VMAP_END - EARLY_VMAP_START.
+ */
+void* early_ioremap(paddr_t start, size_t len, unsigned attributes)
+{
+    static unsigned long virt_start = EARLY_VMAP_START;
+    void* ret_addr = (void *)virt_start;
+    paddr_t end = start + len;
+
+    ASSERT(!(start & (~SECOND_MASK)));
+    ASSERT(!(virt_start & (~SECOND_MASK)));
+
+    /* The range we need to map is too big */
+    if ( virt_start + len >= EARLY_VMAP_END )
+        return NULL;
+
+    while ( start < end )
+    {
+        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
+        e.pt.ai = attributes;
+        write_pte(xen_second + second_table_offset(virt_start), e);
+
+        start += SECOND_SIZE;
+        virt_start += SECOND_SIZE;
+    }
+    flush_xen_data_tlb_range((unsigned long) ret_addr, len);
+
+    return ret_addr;
+}
+
 enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
 static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
 {
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 2a05539..87db0d1 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -73,9 +73,11 @@
 #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
 #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
 #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
+#define EARLY_VMAP_START       mk_unsigned_long(0x10000000)
 #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
 #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
 
+#define EARLY_VMAP_END         XENHEAP_VIRT_START
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index e95ece1..4ed5df6 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -150,7 +150,8 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
 extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
 /* Remove a mapping from a fixmap entry */
 extern void clear_fixmap(unsigned map);
-
+/* Map a 2MB-aligned physical range into virtual memory. */
+void* early_ioremap(paddr_t start, size_t len, unsigned attributes);
 
 #define mfn_valid(mfn)        ({                                              \
     unsigned long __m_f_n = (mfn);                                            \
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index d89261e..0790dda 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -328,6 +328,23 @@ static inline void flush_xen_data_tlb_va(unsigned long va)
                  : : "r" (va) : "memory");
 }
 
+/*
+ * Flush a range of hypervisor virtual-address mappings from the data TLB.
+ * This is not sufficient when changing code mappings or for self-modifying code.
+ */
+static inline void flush_xen_data_tlb_range(unsigned long va, unsigned long size)
+{
+    unsigned long end = va + size;
+    while ( va < end ) {
+        asm volatile("dsb;" /* Ensure preceding are visible */
+                STORE_CP32(0, TLBIMVAH)
+                "dsb;" /* Ensure completion of the TLB flush */
+                "isb;"
+                : : "r" (va) : "memory");
+        va += PAGE_SIZE;
+    }
+}
+
 /* Flush all non-hypervisor mappings from the TLB */
 static inline void flush_guest_tlb(void)
 {
@@ -418,8 +435,14 @@ static inline uint64_t gva_to_ipa(uint32_t va)
 #define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
 
 #define THIRD_SHIFT  PAGE_SHIFT
+#define THIRD_SIZE   (1u << THIRD_SHIFT)
+#define THIRD_MASK   (~(THIRD_SIZE - 1))
 #define SECOND_SHIFT (THIRD_SHIFT + LPAE_SHIFT)
+#define SECOND_SIZE   (1u << SECOND_SHIFT)
+#define SECOND_MASK   (~(SECOND_SIZE - 1))
 #define FIRST_SHIFT  (SECOND_SHIFT + LPAE_SHIFT)
+#define FIRST_SIZE   (1u << FIRST_SHIFT)
+#define FIRST_MASK   (~(FIRST_SIZE - 1))
 
 /* Calculate the offsets into the pagetables for a given VA */
 #define first_linear_offset(va) (va >> FIRST_SHIFT)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C1-0007sT-Sd; Tue, 18 Dec 2012 18:47:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2C0-0007qB-EO
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:04 +0000
Received: from [85.158.139.83:13907] by server-2.bemta-5.messagelabs.com id
	D7/7B-16162-52AB0D05; Tue, 18 Dec 2012 18:47:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355856418!28194012!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23349 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083192"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-SZ;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:38 +0000
Message-ID: <1355856402-26614-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 5/8] xen/arm: preserve DTB mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At the moment we destroy the DTB mappings in setup_pagetables and we
don't restore them until setup_mm.

Keep the temporary DTB mapping until we create the new ones in setup_mm.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/kernel.h    |    2 ++
 xen/arch/arm/mm.c        |   12 ++++++++++++
 xen/arch/arm/setup.c     |    1 +
 xen/include/asm-arm/mm.h |    2 ++
 4 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index 4533568..a179ffb 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -38,4 +38,6 @@ struct kernel_info {
 int kernel_prepare(struct kernel_info *info);
 void kernel_load(struct kernel_info *info);
 
+extern char _sdtb[];
+
 #endif /* #ifdef __ARCH_ARM_KERNEL_H__ */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0d7a163..2410794 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -34,6 +34,7 @@
 #include <asm/current.h>
 #include <public/memory.h>
 #include <xen/sched.h>
+#include "kernel.h"
 
 struct domain *dom_xen, *dom_io;
 
@@ -295,12 +296,23 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
     /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
 
+    /* preserve the DTB mapping a little while longer */
+    pte = mfn_to_xen_entry(((unsigned long) _sdtb + boot_phys_offset) >> PAGE_SHIFT);
+    write_pte(xen_second + second_linear_offset(BOOT_MISC_VIRT_START), pte);
+
     /* From now on, no mapping may be both writable and executable. */
     WRITE_CP32(READ_CP32(HSCTLR) | SCTLR_WXN, HSCTLR);
     /* Flush everything after setting WXN bit. */
     flush_xen_text_tlb();
 }
 
+void __init destroy_dtb_mapping(void)
+{
+    /* destroy old DTB mapping */
+    xen_second[second_linear_offset(BOOT_MISC_VIRT_START)].bits = 0;
+    dsb();
+}
+
 /* MMU setup for secondary CPUS (which already have paging enabled) */
 void __cpuinit mmu_init_secondary_cpu(void)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 2076724..06a878f 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -229,6 +229,7 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     init_xen_time();
 
+    destroy_dtb_mapping();
     setup_mm(atag_paddr, fdt_size);
 
     /* Setup Hyp vector base */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 4ed5df6..6fa4308 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -139,6 +139,8 @@ extern unsigned long total_pages;
 
 /* Boot-time pagetable setup */
 extern void setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr);
+/* Destroy temporary DTB mapping */
+extern void destroy_dtb_mapping(void);
 /* MMU setup for secondary CPUs (which already have paging enabled) */
 extern void __cpuinit mmu_init_secondary_cpu(void);
 /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2C2-0007si-A7; Tue, 18 Dec 2012 18:47:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2C0-0007qB-Uz
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:47:05 +0000
Received: from [85.158.139.83:9311] by server-2.bemta-5.messagelabs.com id
	89/7B-16162-62AB0D05; Tue, 18 Dec 2012 18:47:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355856418!28194012!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTI5MjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23402 invoked from network); 18 Dec 2012 18:47:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1083194"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:46:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:46:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2Bn-0002ik-S1;
	Tue, 18 Dec 2012 18:46:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Dec 2012 18:46:37 +0000
Message-ID: <1355856402-26614-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make use of the framebuffer functions previously introduced.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/drivers/video/vesa.c |  179 +++++++---------------------------------------
 1 files changed, 26 insertions(+), 153 deletions(-)

diff --git a/xen/drivers/video/vesa.c b/xen/drivers/video/vesa.c
index aaf8b23..9f24d03 100644
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -13,20 +13,15 @@
 #include <asm/io.h>
 #include <asm/page.h>
 #include "font.h"
+#include "fb.h"
 
 #define vlfb_info    vga_console_info.u.vesa_lfb
-#define text_columns (vlfb_info.width / font->width)
-#define text_rows    (vlfb_info.height / font->height)
 
-static void vesa_redraw_puts(const char *s);
-static void vesa_scroll_puts(const char *s);
+static void lfb_flush(void);
 
-static unsigned char *lfb, *lbuf, *text_buf;
-static unsigned int *__initdata line_len;
+static unsigned char *lfb;
 static const struct font_desc *font;
 static bool_t vga_compat;
-static unsigned int pixel_on;
-static unsigned int xpos, ypos;
 
 static unsigned int vram_total;
 integer_param("vesa-ram", vram_total);
@@ -87,29 +82,26 @@ void __init vesa_early_init(void)
 
 void __init vesa_init(void)
 {
-    if ( !font )
-        goto fail;
-
-    lbuf = xmalloc_bytes(vlfb_info.bytes_per_line);
-    if ( !lbuf )
-        goto fail;
+    struct fb_prop fbp;
 
-    text_buf = xzalloc_bytes(text_columns * text_rows);
-    if ( !text_buf )
-        goto fail;
+    if ( !font )
+        return;
 
-    line_len = xzalloc_array(unsigned int, text_columns);
-    if ( !line_len )
-        goto fail;
+    fbp.font = font;
+    fbp.bits_per_pixel = vlfb_info.bits_per_pixel;
+    fbp.bytes_per_line = vlfb_info.bytes_per_line;
+    fbp.width = vlfb_info.width;
+    fbp.height = vlfb_info.height;
+    fbp.flush = lfb_flush;
+    fbp.text_columns = vlfb_info.width / font->width;
+    fbp.text_rows = vlfb_info.height / font->height;
 
-    lfb = ioremap(vlfb_info.lfb_base, vram_remap);
+    fbp.lfb = lfb = ioremap(vlfb_info.lfb_base, vram_remap);
     if ( !lfb )
-        goto fail;
+        return;
 
     memset(lfb, 0, vram_remap);
 
-    video_puts = vesa_redraw_puts;
-
     printk(XENLOG_INFO "vesafb: framebuffer at %#x, mapped to 0x%p, "
            "using %uk, total %uk\n",
            vlfb_info.lfb_base, lfb,
@@ -131,7 +123,7 @@ void __init vesa_init(void)
     {
         /* Light grey in truecolor. */
         unsigned int grey = 0xaaaaaaaa;
-        pixel_on = 
+        fbp.pixel_on = 
             ((grey >> (32 - vlfb_info.  red_size)) << vlfb_info.  red_pos) |
             ((grey >> (32 - vlfb_info.green_size)) << vlfb_info.green_pos) |
             ((grey >> (32 - vlfb_info. blue_size)) << vlfb_info. blue_pos);
@@ -139,15 +131,14 @@ void __init vesa_init(void)
     else
     {
         /* White(ish) in default pseudocolor palette. */
-        pixel_on = 7;
+        fbp.pixel_on = 7;
     }
 
-    return;
-
- fail:
-    xfree(lbuf);
-    xfree(text_buf);
-    xfree(line_len);
+    if ( fb_init(fbp) < 0 )
+        return;
+    if ( fb_alloc() < 0 )
+        return;
+    video_puts = fb_redraw_puts;
 }
 
 #include <asm/mtrr.h>
@@ -192,8 +183,8 @@ void __init vesa_endboot(bool_t keep)
 {
     if ( keep )
     {
-        xpos = 0;
-        video_puts = vesa_scroll_puts;
+        video_puts = fb_scroll_puts;
+        fb_carriage_return();
     }
     else
     {
@@ -202,124 +193,6 @@ void __init vesa_endboot(bool_t keep)
             memset(lfb + i * vlfb_info.bytes_per_line, 0,
                    vlfb_info.width * bpp);
         lfb_flush();
+        fb_free();
     }
-
-    xfree(line_len);
-}
-
-/* Render one line of text to given linear framebuffer line. */
-static void vesa_show_line(
-    const unsigned char *text_line,
-    unsigned char *video_line,
-    unsigned int nr_chars,
-    unsigned int nr_cells)
-{
-    unsigned int i, j, b, bpp, pixel;
-
-    bpp = (vlfb_info.bits_per_pixel + 7) >> 3;
-
-    for ( i = 0; i < font->height; i++ )
-    {
-        unsigned char *ptr = lbuf;
-
-        for ( j = 0; j < nr_chars; j++ )
-        {
-            const unsigned char *bits = font->data;
-            bits += ((text_line[j] * font->height + i) *
-                     ((font->width + 7) >> 3));
-            for ( b = font->width; b--; )
-            {
-                pixel = (*bits & (1u<<b)) ? pixel_on : 0;
-                memcpy(ptr, &pixel, bpp);
-                ptr += bpp;
-            }
-        }
-
-        memset(ptr, 0, (vlfb_info.width - nr_chars * font->width) * bpp);
-        memcpy(video_line, lbuf, nr_cells * font->width * bpp);
-        video_line += vlfb_info.bytes_per_line;
-    }
-}
-
-/* Fast mode which redraws all modified parts of a 2D text buffer. */
-static void __init vesa_redraw_puts(const char *s)
-{
-    unsigned int i, min_redraw_y = ypos;
-    char c;
-
-    /* Paste characters into text buffer. */
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            if ( ++ypos >= text_rows )
-            {
-                min_redraw_y = 0;
-                ypos = text_rows - 1;
-                memmove(text_buf, text_buf + text_columns,
-                        ypos * text_columns);
-                memset(text_buf + ypos * text_columns, 0, xpos);
-            }
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++ + ypos * text_columns] = c;
-    }
-
-    /* Render modified section of text buffer to VESA linear framebuffer. */
-    for ( i = min_redraw_y; i <= ypos; i++ )
-    {
-        const unsigned char *line = text_buf + i * text_columns;
-        unsigned int width;
-
-        for ( width = text_columns; width; --width )
-            if ( line[width - 1] )
-                 break;
-        vesa_show_line(line,
-                       lfb + i * font->height * vlfb_info.bytes_per_line,
-                       width, max(line_len[i], width));
-        line_len[i] = width;
-    }
-
-    lfb_flush();
-}
-
-/* Slower line-based scroll mode which interacts better with dom0. */
-static void vesa_scroll_puts(const char *s)
-{
-    unsigned int i;
-    char c;
-
-    while ( (c = *s++) != '\0' )
-    {
-        if ( (c == '\n') || (xpos >= text_columns) )
-        {
-            unsigned int bytes = (vlfb_info.width *
-                                  ((vlfb_info.bits_per_pixel + 7) >> 3));
-            unsigned char *src = lfb + font->height * vlfb_info.bytes_per_line;
-            unsigned char *dst = lfb;
-            
-            /* New line: scroll all previous rows up one line. */
-            for ( i = font->height; i < vlfb_info.height; i++ )
-            {
-                memcpy(dst, src, bytes);
-                src += vlfb_info.bytes_per_line;
-                dst += vlfb_info.bytes_per_line;
-            }
-
-            /* Render new line. */
-            vesa_show_line(
-                text_buf,
-                lfb + (text_rows-1) * font->height * vlfb_info.bytes_per_line,
-                xpos, text_columns);
-
-            xpos = 0;
-        }
-
-        if ( c != '\n' )
-            text_buf[xpos++] = c;
-    }
-
-    lfb_flush();
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 18:51:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 18:51:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl2GK-0000jQ-Va; Tue, 18 Dec 2012 18:51:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Tl2GK-0000jC-A6
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 18:51:32 +0000
Received: from [85.158.143.35:38581] by server-3.bemta-4.messagelabs.com id
	68/9B-18211-33BB0D05; Tue, 18 Dec 2012 18:51:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355856689!5004087!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2862 invoked from network); 18 Dec 2012 18:51:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 18:51:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1153966"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 18:51:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 13:51:28 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tl2GG-0002oH-0v;
	Tue, 18 Dec 2012 18:51:28 +0000
Date: Tue, 18 Dec 2012 18:51:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1354785657.17165.41.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212181847430.17523@kaball.uk.xensource.com>
References: <1354732666-3132-2-git-send-email-stefano.stabellini@eu.citrix.com>
	<1354785657.17165.41.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: flush dcache after memcpy'ing the
 kernel image
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Dec 2012, Ian Campbell wrote:
> On Wed, 2012-12-05 at 18:37 +0000, Stefano Stabellini wrote:
> > After memcpy'ing the kernel in guest memory we need to flush the dcache
> > to make sure that the data actually reaches the memory before we start
> > executing guest code with caches disabled.
> > 
> > This fixes a boot time bug on the Cortex A15 Versatile Express that
> > usually shows up as follows:
> > 
> > (XEN) Hypervisor Trap. HSR=0x80000006 EC=0x20 IL=0 Syndrome=6
> > (XEN) Unexpected Trap: Hypervisor
> 
> That's a symptom of a thousand different problems though, since it's
> just a generic Instruction Abort from guest mode caused by a translation
> fault at stage 2. 
> 
> Anyhow this won't apply in top of "arm: support for initial modules
> (e.g. dom0) and DTB supplied in RAM", could you rebase on that please?

The patch applies fine on both xen/master and xen/staging.

Do you have a branch with "arm: support for initial modules (e.g. dom0)
and DTB supplied in RAM" somewhere?




> 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/kernel.c |    2 ++
> >  1 files changed, 2 insertions(+), 0 deletions(-)
> > 
> > diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> > index b4a823d..81818b1 100644
> > --- a/xen/arch/arm/kernel.c
> > +++ b/xen/arch/arm/kernel.c
> > @@ -53,6 +53,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
> >  
> >          set_fixmap(FIXMAP_MISC, p, attrindx);
> >          memcpy(dst, src + s, l);
> > +        flush_xen_dcache_va_range(dst, l);
> >  
> >          paddr += l;
> >          dst += l;
> > @@ -82,6 +83,7 @@ static void kernel_zimage_load(struct kernel_info *info)
> >  
> >          set_fixmap(FIXMAP_MISC, (flash+offs) >> PAGE_SHIFT, DEV_SHARED);
> >          memcpy(dst, src, PAGE_SIZE);
> > +        flush_xen_dcache_va_range(dst, PAGE_SIZE);
> >          clear_fixmap(FIXMAP_MISC);
> >  
> >          unmap_domain_page(dst);
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 19:42:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 19:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl32p-0001qy-9V; Tue, 18 Dec 2012 19:41:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tl32n-0001qt-84
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 19:41:37 +0000
Received: from [85.158.138.51:11447] by server-3.bemta-3.messagelabs.com id
	24/72-31588-DE6C0D05; Tue, 18 Dec 2012 19:41:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355859242!21332073!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30451 invoked from network); 18 Dec 2012 19:34:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 19:34:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1159846"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 19:34:02 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 14:34:01 -0500
Message-ID: <50D0C528.4090602@citrix.com>
Date: Tue, 18 Dec 2012 19:34:00 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
In-Reply-To: <20121218160704.GA3543@phenom.dumpdata.com>
Content-Type: multipart/mixed; boundary="------------070305080800070105090305"
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------070305080800070105090305
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit

On 18/12/12 16:07, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 18, 2012 at 01:51:49PM +0000, Mats Petersson wrote:
>> On 18/12/12 11:40, Ian Campbell wrote:
>>> On Tue, 2012-12-18 at 11:28 +0000, Mats Petersson wrote:
>>>> On 18/12/12 11:17, Ian Campbell wrote:
>>>>> On Mon, 2012-12-17 at 17:38 +0000, Mats Petersson wrote:
>>>>>> On 17/12/12 16:57, Ian Campbell wrote:
>>>>>>> On Fri, 2012-12-14 at 17:00 +0000, Mats Petersson wrote:
>>>>>>>> Ian, Konrad:
>>>>>>>> I took Konrad's latest version [I think] and applied my patch (which
>>>>>>>> needed some adjustments as there are other "post 3.7" changes to same
>>>>>>>> source code - including losing the xen_flush_tlb_all() ??)
>>>>>>>>
>>>>>>>> Attached are the patches:
>>>>>>>> arm-enlighten.patch, which updates the ARM code.
>>>>>>>> improve-pricmd.patch, which updates the privcmd code.
>>>>>>>>
>>>>>>>> Ian, can you have a look at the ARM code - which I quickly hacked
>>>>>>>> together, I haven't compiled it, and I certainly haven't tested it,
>>>>>>> There are a lot of build errors as you might expect (patch below, a few
>>>>>>> warnings remain). You can find a cross compiler at
>>>>>>> http://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.6.3/
>>>>>>>
>>>>>>> or you can use
>>>>>>> drall:/home/ianc/devel/cross/x86_64-gcc-4.6.0-nolibc_arm-unknown-linux-gnueabi.tar.bz2
>>>>>>>
>>>>>>> which is an older version from the same place.
>>>>>>>
>>>>>>> Anyway, the patch...
>>>>>>>> and
>>>>>>>> it needs further changes to make my changes actually make it more
>>>>>>>> efficient.
>>>>>>> Right, the benefit on PVH or ARM would be in batching the
>>>>>>> XENMEM_add_to_physmap_range calls. The batching of the
>>>>>>> apply_to_page_range which this patch adds isn't useful because there is
>>>>>>> no HYPERVISOR_mmu_update call to batch in this case. So basically this
>>>>>>> patch as it stands does a lot of needless work for no gain I'm afraid.
>>>>>> So, basically, what is an improvement on x86 isn't anything useful on
>>>>>> ARM, and you'd prefer to loop around in privcmd.c calling into
>>>>>> xen_remap_domain_mfn_range() a lot of times?
>>>>> Not at all. ARM (and PVH) still benefits from the interface change but
>>>>> the implementation of the benefit is totally different.
>>>>>
>>>>> For normal x86 PV you want to batch the HYPERVISOR_mmu_update.
>>>>>
>>>>> For both x86 PVH and ARM this hypercall  doesn't exist but instead there
>>>>> is a call to HYPERVISOR_memory_op XENMEM_add_to_physmap_range which is
>>>>> something which would benefit from batching.
>>>> So, you want me to fix that up?
>>> If you want to sure, yes please.
>>>
>>> But feel free to just make the existing code work with the interface,
>>> without adding any batching. That should be a much smaller change than
>>> what you proposed.
>>>
>>> (aside; I do wonder how much of this x86/arm code could be made generic)
>> I think, once it goes to PVH everywhere, quite a bit (as I believe
>> the hypercalls should be the same by then, right?)
>>
>> In the PVOPS kernel, it's probably a bit more job. I'm sure it can
>> be done, but with a bit more work.
>>
>> I think I'll do the minimal patch first, then, if I find some spare
>> time, work on the "batching" variant.
> OK. The batching is IMHO just using the multicall variant.
>
>>>> To make xentrace not work until it is fixed wouldn't be a terrible
>>>> thing, would it?
>>> On ARM I think it is fine (I doubt this is the only thing stopping
>>> xentrace from working). I suspect people would be less impressed with
>>> breaking xentrace on x86. For PVH it probably is a requirement for it to
>>> keep working, I'm not sure though.
>> Ok, ENOSYS it is for remap_range() then.
>>>>   Then we can remove that old gunk from x86 as well
>>>> (eventually).
>>>> Thanks. I was starting to wonder if I'd been teleported back to the time
>>>> when I struggled with pointers...
>>>> Maybe it needs a better comment.
>>> The other thing I had missed was that this was a pure increment and not
>>> taking the value at the same time, which also confused me.
>>>
>>> Splitting the increment out from the dereference usually makes these
>>> things clearer, I was obviously just being a bit hard of thinking
>>> yesterday!
>> No worries. I will see about making a more readable comment (and for
>> ARM, I can remove the whole if/else and just do the one increment
>> (based on above discussion), so should make the code better.
> You can use the v3.8 tree as your base - it has the required PVH and ARM
> patches. There is one bug (where dom0 crashes) - and I just sent
> a git pull for that in your Linus's tree:
>
>   git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8
Thanks. I have attached a patch for x86 (+generic changes) and one for arm.
Both compile and the x86 has been tested with localhost migration.

Ian: Can you review the code - I'm fairly sure it does the right thing, 
without too much "extra" code, and achieves "what we want". It's not 
optimized, but if the hypercalls are batched by multicall, it shouldn't 
be horrible. In fact I expect it will be marginally better than before, 
because it saves one level of calls with 7 or so arguments per mapped 
page since we've pushed the loop down a level to the do_remap_mfn() - 
I've kept the structure similar to x86, should we decide to merge the 
code into something common in the future. No doubt it will diverge, but 
starting with something similar gives it a little chance... ;)

Note: the purpose of this post is mainly to confirm that the ARM 
solution is "on track".

--
Mats

--------------070305080800070105090305
Content-Type: text/x-patch; name="xen-privcmd-remap-array-arm.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="xen-privcmd-remap-array-arm.patch"

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 7a32976..dc50a53 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 }
 
 struct remap_data {
-	xen_pfn_t fgmfn; /* foreign domain's gmfn */
+	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
 	pgprot_t prot;
 	domid_t  domid;
 	struct vm_area_struct *vma;
@@ -90,38 +90,120 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	unsigned long pfn = page_to_pfn(page);
 	pte_t pte = pfn_pte(pfn, info->prot);
 
-	if (map_foreign_page(pfn, info->fgmfn, info->domid))
+	/* TODO: We should really batch these updates. */
+	if (map_foreign_page(pfn, *info->fgmfn, info->domid))
 		return -EFAULT;
 	set_pte_at(info->vma->vm_mm, addr, ptep, pte);
+	info->fgmfn++;
 
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information (for PVH, not used currently)
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr,
+			pgprot_t prot, unsigned domid,
+			struct page **pages)
 {
 	int err;
 	struct remap_data data;
 
-	/* TBD: Batching, current sole caller only does page at a time */
-	if (nr > 1)
-		return -EINVAL;
+	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	data.fgmfn = mfn;
-	data.prot = prot;
+	data.prot  = prot;
 	data.domid = domid;
-	data.vma = vma;
-	data.index = 0;
+	data.vma   = vma;
 	data.pages = pages;
-	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
-				  remap_pte_fn, &data);
-	return err;
+	data.index = 0;
+
+	while (nr) {
+		unsigned long range = 1 << PAGE_SHIFT;
+
+		err = apply_to_page_range(vma->vm_mm, addr, range,
+					  remap_pte_fn, &data);
+		/* Warning: we probably do need to care about the error we
+		   get here. However, currently remap_pte_fn is only likely
+		   to return -EFAULT or some other "things are very bad"
+		   error code, which the rest of the calling code won't be
+		   able to fix up. So we just exit with the error we got.
+		*/
+		if (err)
+			return err;
+
+		nr--;
+		addr += range;
+	}
+	return 0;
+}
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information (for PVH, not used currently)
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid,
+			       struct page **pages)
+{
+	/* We BUG_ON because it is a programmer error to pass a NULL err_ptr;
+	 * otherwise it is very hard to detect, later on, that the actual
+	 * cause of a failure was "wrong memory was mapped in".
+	 * Note: This variant doesn't actually use err_ptr at the moment.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
+
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int nr, struct page **pages)
 {

--------------070305080800070105090305
Content-Type: text/x-patch; name="xen-privcmd-remap-array-x86.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="xen-privcmd-remap-array-x86.patch"

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..a67774f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	unsigned long *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contigious range, just update the mfn itself,
+	   else update pointer to be "next mfn". */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       unsigned long mfn, int nr,
-			       pgprot_t prot, unsigned domid)
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to a per-page error array, or NULL for a contiguous range
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that err_ptr is used to indicate whether *mfn
+ * is a list or a "first mfn of a contiguous range". */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			unsigned long *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr consecutive pages from it into this
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid)
+{
+	/* We BUG_ON because it's a programmer error to pass a NULL err_ptr:
+	 * the consequence shows up much later as "wrong memory was mapped in",
+	 * which is very hard to trace back to its actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..75f6e86 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but uses each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -250,7 +284,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -260,17 +294,20 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user_mfn;
 };
 
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 	int ret;
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain);
+	BUG_ON(nr < 0);
 
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	ret = xen_remap_domain_mfn_array(st->vma,
+					 st->va & PAGE_MASK,
+					 mfnp, nr,
+					 st->err,
+					 st->vma->vm_page_prot,
+					 st->domain);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..22cad75 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid);
 #endif /* INCLUDE_XEN_OPS_H */

--------------070305080800070105090305
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------070305080800070105090305--


From xen-devel-bounces@lists.xen.org Tue Dec 18 19:42:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 19:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl32p-0001qy-9V; Tue, 18 Dec 2012 19:41:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tl32n-0001qt-84
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 19:41:37 +0000
Received: from [85.158.138.51:11447] by server-3.bemta-3.messagelabs.com id
	24/72-31588-DE6C0D05; Tue, 18 Dec 2012 19:41:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355859242!21332073!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA0ODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30451 invoked from network); 18 Dec 2012 19:34:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 19:34:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,310,1355097600"; 
   d="scan'208";a="1159846"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	18 Dec 2012 19:34:02 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Tue, 18 Dec 2012 14:34:01 -0500
Message-ID: <50D0C528.4090602@citrix.com>
Date: Tue, 18 Dec 2012 19:34:00 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
In-Reply-To: <20121218160704.GA3543@phenom.dumpdata.com>
Content-Type: multipart/mixed; boundary="------------070305080800070105090305"
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------070305080800070105090305
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit

On 18/12/12 16:07, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 18, 2012 at 01:51:49PM +0000, Mats Petersson wrote:
>> On 18/12/12 11:40, Ian Campbell wrote:
>>> On Tue, 2012-12-18 at 11:28 +0000, Mats Petersson wrote:
>>>> On 18/12/12 11:17, Ian Campbell wrote:
>>>>> On Mon, 2012-12-17 at 17:38 +0000, Mats Petersson wrote:
>>>>>> On 17/12/12 16:57, Ian Campbell wrote:
>>>>>>> On Fri, 2012-12-14 at 17:00 +0000, Mats Petersson wrote:
>>>>>>>> Ian, Konrad:
>>>>>>>> I took Konrad's latest version [I think] and applied my patch (which
>>>>>>>> needed some adjustments as there are other "post 3.7" changes to same
>>>>>>>> source code - including losing the xen_flush_tlb_all() ??)
>>>>>>>>
>>>>>>>> Attached are the patches:
>>>>>>>> arm-enlighten.patch, which updates the ARM code.
>>>>>>>> improve-pricmd.patch, which updates the privcmd code.
>>>>>>>>
>>>>>>>> Ian, can you have a look at the ARM code - which I quickly hacked
>>>>>>>> together, I haven't compiled it, and I certainly haven't tested it,
>>>>>>> There are a lot of build errors as you might expect (patch below, a few
>>>>>>> warnings remain). You can find a cross compiler at
>>>>>>> http://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.6.3/
>>>>>>>
>>>>>>> or you can use
>>>>>>> drall:/home/ianc/devel/cross/x86_64-gcc-4.6.0-nolibc_arm-unknown-linux-gnueabi.tar.bz2
>>>>>>>
>>>>>>> which is an older version from the same place.
>>>>>>>
>>>>>>> Anyway, the patch...
>>>>>>>> and
>>>>>>>> it needs further changes to make my changes actually make it more
>>>>>>>> efficient.
>>>>>>> Right, the benefit on PVH or ARM would be in batching the
>>>>>>> XENMEM_add_to_physmap_range calls. The batching of the
>>>>>>> apply_to_page_range which this patch adds isn't useful because there is
>>>>>>> no HYPERVISOR_mmu_update call to batch in this case. So basically this
>>>>>>> patch as it stands does a lot of needless work for no gain I'm afraid.
>>>>>> So, basically, what is an improvement on x86 isn't anything useful on
>>>>>> ARM, and you'd prefer to loop around in privcmd.c calling into
>>>>>> xen_remap_domain_mfn_range() a lot of times?
>>>>> Not at all. ARM (and PVH) still benefits from the interface change but
>>>>> the implementation of the benefit is totally different.
>>>>>
>>>>> For normal x86 PV you want to batch the HYPERVISOR_mmu_update.
>>>>>
>>>>> For both x86 PVH and ARM this hypercall  doesn't exist but instead there
>>>>> is a call to HYPERVISOR_memory_op XENMEM_add_to_physmap_range which is
>>>>> something which would benefit from batching.
>>>> So, you want me to fix that up?
>>> If you want to sure, yes please.
>>>
>>> But feel free to just make the existing code work with the interface,
>>> without adding any batching. That should be a much smaller change than
>>> what you proposed.
>>>
>>> (aside; I do wonder how much of this x86/arm code could be made generic)
>> I think, once it goes to PVH everywhere, quite a bit (as I believe
>> the hypercalls should be the same by then, right?)
>>
>> In the PVOPS kernel, it's probably a bit more job. I'm sure it can
>> be done, but with a bit more work.
>>
>> I think I'll do the minimal patch first, then, if I find some spare
>> time, work on the "batching" variant.
> OK. The batching is IMHO just using the multicall variant.
>
>>>> To make xentrace not work until it is fixed wouldn't be a terrible
>>>> thing, would it?
>>> On ARM I think it is fine (I doubt this is the only thing stopping
>>> xentrace from working). I suspect people would be less impressed with
>>> breaking xentrace on x86. For PVH it probably is a requirement for it to
>>> keep working, I'm not sure though.
>> Ok, ENOSYS it is for remap_range() then.
>>>>   Then we can remove that old gunk from x86 as well
>>>> (eventually).
>>>> Thanks. I was starting to wonder if I'd been teleported back to the time
>>>> when I struggled with pointers...
>>>> Maybe it needs a better comment.
>>> The other thing I had missed was that this was a pure increment and not
>>> taking the value at the same time, which also confused me.
>>>
>>> Splitting the increment out from the dereference usually makes these
>>> things clearer, I was obviously just being a bit hard of thinking
>>> yesterday!
>> No worries. I will see about making a more readable comment (and for
>> ARM, I can remove the whole if/else and just do the one increment
>> (based on above discussion), so should make the code better.
> You can use the v3.8 tree as your base - it has the required PVH and ARM
> patches. There is one bug (where dom0 crashes) - and I just sent
> a git pull for that in your Linus's tree:
>
>   git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.8
Thanks. I have attached a patch for x86 (plus generic changes) and one
for ARM. Both compile, and the x86 one has been tested with localhost
migration.

Ian: can you review the code? I'm fairly sure it does the right thing,
without too much "extra" code, and achieves what we want. It's not
optimized, but if the hypercalls are batched via multicall it shouldn't
be horrible. In fact I expect it to be marginally better than before,
because pushing the loop down into do_remap_mfn() saves one level of
calls (with seven or so arguments) per mapped page. I've kept the
structure similar to the x86 one, should we decide to merge the code
into something common in the future. No doubt it will diverge, but
starting from something similar gives that a little chance... ;)

Note: the purpose of this post is mainly to confirm that the ARM
solution is "on track".

--
Mats

--------------070305080800070105090305
Content-Type: text/x-patch; name="xen-privcmd-remap-array-arm.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="xen-privcmd-remap-array-arm.patch"

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 7a32976..dc50a53 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 }
 
 struct remap_data {
-	xen_pfn_t fgmfn; /* foreign domain's gmfn */
+	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
 	pgprot_t prot;
 	domid_t  domid;
 	struct vm_area_struct *vma;
@@ -90,38 +90,120 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	unsigned long pfn = page_to_pfn(page);
 	pte_t pte = pfn_pte(pfn, info->prot);
 
-	if (map_foreign_page(pfn, info->fgmfn, info->domid))
+	/* TODO: We should really batch these updates */
+	if (map_foreign_page(pfn, *info->fgmfn, info->domid))
 		return -EFAULT;
 	set_pte_at(info->vma->vm_mm, addr, ptep, pte);
+	info->fgmfn++;
 
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information (for PVH, not used currently)
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr,
+			pgprot_t prot, unsigned domid,
+			struct page **pages)
 {
 	int err;
 	struct remap_data data;
 
-	/* TBD: Batching, current sole caller only does page at a time */
-	if (nr > 1)
-		return -EINVAL;
+	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	data.fgmfn = mfn;
-	data.prot = prot;
+	data.prot  = prot;
 	data.domid = domid;
-	data.vma = vma;
-	data.index = 0;
+	data.vma   = vma;
 	data.pages = pages;
-	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
-				  remap_pte_fn, &data);
-	return err;
+	data.index = 0;
+
+	while (nr) {
+		unsigned long range = 1 << PAGE_SHIFT;
+
+		err = apply_to_page_range(vma->vm_mm, addr, range,
+					  remap_pte_fn, &data);
+		/* Warning: We should probably care about which error we get
+		   here. However, currently, remap_pte_fn is only likely to
+		   return -EFAULT or some other "things are very bad" error
+		   code, which the rest of the calling code won't be able to
+		   fix up. So we just exit with the error we got.
+		*/
+		if (err)
+			return err;
+
+		nr--;
+		addr += range;
+	}
+	return 0;
+}
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information (for PVH, not used currently)
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in for any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid,
+			       struct page **pages)
+{
+	/* We BUG_ON because it's a programmer error to pass a NULL err_ptr:
+	 * the consequence shows up much later as "wrong memory was mapped in",
+	 * which is very hard to trace back to its actual cause.
+	 * Note: This variant doesn't actually use err_ptr at the moment.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
+
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int nr, struct page **pages)
 {

--------------070305080800070105090305
Content-Type: text/x-patch; name="xen-privcmd-remap-array-x86.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="xen-privcmd-remap-array-x86.patch"

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index dcf5f2d..a67774f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	unsigned long *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contigious range, just update the mfn itself,
+	   else update pointer to be "next mfn". */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,16 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       unsigned long mfn, int nr,
-			       pgprot_t prot, unsigned domid)
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of per-page error values, or NULL for a
+ *           contiguous range
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that err_ptr is used to indicate whether *mfn is an array
+ * of MFNs or the first mfn of a contiguous range. */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			unsigned long *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2515,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2526,19 +2557,92 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr consecutive pages from it into this kernel's
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in for any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid)
+{
+	/* We BUG_ON because it's a programmer error to pass a NULL err_ptr:
+	 * the consequence shows up much later as "wrong memory was mapped in",
+	 * which is very hard to trace back to its actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..75f6e86 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -151,6 +151,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but uses each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -250,7 +284,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -260,17 +294,20 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user_mfn;
 };
 
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 	int ret;
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain);
+	BUG_ON(nr < 0);
 
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	ret = xen_remap_domain_mfn_array(st->vma,
+					 st->va & PAGE_MASK,
+					 mfnp, nr,
+					 st->err,
+					 st->vma->vm_page_prot,
+					 st->domain);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -282,7 +319,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -378,8 +415,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..22cad75 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid);
 #endif /* INCLUDE_XEN_OPS_H */

--------------070305080800070105090305
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------070305080800070105090305--


From xen-devel-bounces@lists.xen.org Tue Dec 18 19:44:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 19:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl35D-0001zE-Rc; Tue, 18 Dec 2012 19:44:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tl35C-0001z3-0X
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 19:44:06 +0000
Received: from [85.158.139.83:60572] by server-13.bemta-5.messagelabs.com id
	39/5A-10716-587C0D05; Tue, 18 Dec 2012 19:44:05 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1355859843!24136620!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDk2OTM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25720 invoked from network); 18 Dec 2012 19:44:04 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 19:44:04 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355859780; x=1387395780;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=UW7KQkctiHUSpL8WX0yAiLPutfEp8zyvsVp/J38CTHM=;
	b=Yr0rXYckRJEiocPGfOEEjycUvfZDGxYajmCxyUh+vgByFMsSmj6POwsT
	+k9Mkh4BUIpZ26BsoIh1yJsHsfKmrEtFsvQg4gakvks6+b6mVKlh8X+JP
	fByiUMT42WZJ9SOhqQt0jFVYsW4/m+gMr0eHXI447CzDecpg67Rugay5u Q=;
X-IronPort-AV: E=Sophos;i="4.84,311,1355097600"; d="scan'208";a="496869561"
Received: from smtp-in-9002.sea19.amazon.com ([10.186.174.20])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 18 Dec 2012 19:42:57 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-9002.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBIJhuC0022390
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 18 Dec 2012 19:43:56 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.118) by
	ex10-hub-31005.ant.amazon.com (10.185.176.12) with Microsoft SMTP
	Server id 14.2.247.3; Tue, 18 Dec 2012 11:43:50 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Tue, 18 Dec 2012 11:43:50 -0800
Date: Tue, 18 Dec 2012 11:43:50 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121218194348.GB29382@u109add4315675089e695.ant.amazon.com>
References: <20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
	<20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
	<1355743598.14620.43.camel@zakaz.uk.xensource.com>
	<20121217200950.GA29382@u109add4315675089e695.ant.amazon.com>
	<1355824968.14620.143.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355824968.14620.143.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Palagummi,
	Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 10:02:48AM +0000, Ian Campbell wrote:
> On Mon, 2012-12-17 at 20:09 +0000, Matt Wilson wrote:
> > On Mon, Dec 17, 2012 at 11:26:38AM +0000, Ian Campbell wrote:
[...]
> > > Do you mean the ring or the actual buffers?
> > 
> > Sorry, the actual buffers.
> > 
> > > The current code tries to coalesce multiple small frags/heads because it
> > > is usually trivial but doesn't try too hard with multiple larger frags,
> > > since they take up most of a page by themselves anyway. I suppose this
> > > does waste a bit of buffer space and therefore could take more ring
> > > slots, but it's not clear to me how much this matters in practice (it
> > > might be tricky to measure this with any realistic workload).
> > 
> > In the case where we're consistently handling large heads (like when
> > using an MTU value of 9000 for streaming traffic), we're wasting 1/3 of
> > the available buffers.
> 
> Sorry if I missed this earlier in the thread, but how do we end up
> wasting so much?

I see SKBs with:
  skb_headlen(skb) == 8157
  offset_in_page(skb->data) == 64

when handling long streaming ingress flows from ixgbe with MTU (on the
NIC and both sides of the VIF) set to 9000. When all the SKBs making
up the flow have the above property, xen-netback uses three pages instead
of two. The first buffer gets 4032 bytes copied into it. The next
buffer gets 4096 bytes copied into it. The final buffer gets 29 bytes
copied into it. See this post in the archives for a more detailed
walk through netbk_gop_frag_copy():
  http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
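The 4032 + 4096 + 29 split described above can be reproduced with a small model. This is an illustrative simulation, not the actual netback code: the PAGE_SIZE-sized buffers, the source-page chunking, and the "don't split a large chunk across buffers" heuristic are assumptions reconstructed from this walkthrough; the real logic lives in netbk_gop_frag_copy()/start_new_rx_buffer().

```python
PAGE_SIZE = 4096

def buffers_used(head_len, data_offset, fill):
    """Model how a linear skb head is copied into page-sized grant-copy
    buffers. `fill=False` mimics the heuristic discussed in this thread;
    `fill=True` always tops up the current buffer before starting a new one."""
    # The head is contiguous but page-misaligned, so the copies are
    # bounded by source page edges: first chunk is PAGE_SIZE - offset.
    chunks, remaining, off = [], head_len, data_offset
    while remaining:
        n = min(remaining, PAGE_SIZE - off)
        chunks.append(n)
        remaining -= n
        off = 0
    used = [0]  # bytes copied into each destination buffer
    for c in chunks:
        while c:
            space = PAGE_SIZE - used[-1]
            if space == 0 or (not fill and c > space):
                # heuristic: a chunk that does not fit opens a new
                # buffer instead of being split across two copies
                used.append(0)
                space = PAGE_SIZE
            n = min(c, space)
            used[-1] += n
            c -= n
    return used

print(buffers_used(8157, 64, fill=False))  # [4032, 4096, 29] -> 3 buffers
print(buffers_used(8157, 64, fill=True))   # [4096, 4061]     -> 2 buffers
```

With the don't-split heuristic the model reproduces the three-buffer layout above; always filling each buffer gets the same 8157 bytes into two.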

> For an skb with 9000 bytes in the linear area, which must necessarily be
> contiguous, do we not fill the first two page sized buffers completely?
> The remaining 808 bytes must then have its own buffer. Hrm, I suppose
> that's about 27% wasted over the three pages. If we are doing something
> worse than that though then we have a bug somewhere (nb: older netbacks
> would only fill the first 2048 bytes of each buffer, the wastage is
> presumably phenomenal in that case ;-), MAX_BUFFER_OFFSET is now ==
> PAGE_SIZE though)

Sorry, I should have said 8157 bytes for my example. :-)

> Unless I've misunderstood this thread and we are considering packing
> data from multiple skbs into a single buffer? (i.e. the remaining
> 4096-808=3288 bytes in the third buffer above would contain data for the
> next skb). Does the ring protocol even allow for that possibility? It
> seems like a path to complexity to me.

No, I'm not suggesting that we come up with an extension to pack the
next skb into any remaining space left over from the current one. I
agree that would make for a lot of complexity managing the ratio of
meta slots to buffers, etc.

> > > The cost of splitting a copy into two should be low though, the copies
> > > are already batched into a single hypercall and I'd expect things to be
> > > mostly dominated by the data copy itself rather than the setup of each
> > > individual op, which would argue for splitting a copy in two if that
> > > helps fill the buffers.
> > 
> > That was my thought as well. We're testing a patch that does just this
> > now.
> > 
> > > The flip side is that once you get past the headers etc the paged frags
> > > likely tend to either bits and bobs (fine) or mostly whole pages. In the
> > > whole pages case trying to fill the buffers will result in every copy
> > > getting split. My gut tells me that the whole pages case probably
> > > dominates, but I'm not sure what the real world impact of splitting all
> > > the copies would be.
> > 
> > Right, I'm less concerned about the paged frags. It might make sense
> > to skip some space so that the copying can be page aligned. I suppose
> > it depends on how many different pages are in the list, and what the
> > total size is.
> > 
> > In practice I'd think it would be rare to see a paged SKB for ingress
> > traffic to domUs unless there is significant intra-host communication
> > (dom0->domU, domU->domU). When domU ingress traffic is originating
> > from an Ethernet device it shouldn't be paged. Paged SKBs would come
> > into play when a SKB is formed for transmit on an egress device that
> > is SG-capable. Or am I misunderstanding how paged SKBs are used these
> > days?
> 
> I think it depends on the hardware and/or driver. IIRC some devices push
> down frag zero into the device for RX DMA and then share it with the
> linear area (I think this might have something to do with making LRO or
> GRO easier/workable).
> 
> Also things such as GRO can commonly cause received skbs being passed up
> the stack to contain several frags.
> 
> I'm not quite sure how this works but in the case of s/w GRO I wouldn't
> be surprised if this resulted in a skb with lots of 1500 byte (i.e. wire
> MTU) frags. I think we would end up at least coalescing those two per
> buffer on transmit (3000/4096 = 73% filling the page).
> 
> Doing better would either need start_new_rx_buffer to always completely
> fill each buffer or to take a much more global view of the frags (e.g.
> taking the size of the next frag and how it fits into consideration
> too).
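The 73% figure quoted above checks out with quick arithmetic (assuming, as the quoted text does, wire-MTU 1500-byte frags coalesced two per PAGE_SIZE buffer):

```python
PAGE_SIZE = 4096
FRAG = 1500  # wire-MTU sized frags produced by s/w GRO (assumed)

frags_per_buffer = PAGE_SIZE // FRAG                  # 2 whole frags per page
fill_ratio = frags_per_buffer * FRAG / PAGE_SIZE      # 3000/4096
print(f"{fill_ratio:.0%}")  # 73%
```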

What's the down side to making start_new_rx_buffer() always try to
fill each buffer?

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 20:21:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 20:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl3fF-0002au-6v; Tue, 18 Dec 2012 20:21:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Tl3fE-0002ap-8T
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 20:21:20 +0000
Received: from [193.109.254.147:13600] by server-16.bemta-14.messagelabs.com
	id 98/5C-18932-F30D0D05; Tue, 18 Dec 2012 20:21:19 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355862078!10562121!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjU1NzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24424 invoked from network); 18 Dec 2012 20:21:18 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 20:21:18 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 30345274A;
	Tue, 18 Dec 2012 22:21:17 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id EFF0420062; Tue, 18 Dec 2012 22:21:16 +0200 (EET)
Date: Tue, 18 Dec 2012 22:21:16 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: William Dauchy <wdauchy@gmail.com>
Message-ID: <20121218202116.GL8912@reaktio.net>
References: <50D0823B02000078000B10BD@nat28.tlf.novell.com>
	<CAJ75kXZyfAf_xKdo6G7xW2Gjd2c76_MSFqR6K1WzX=dsVcuPDQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAJ75kXZyfAf_xKdo6G7xW2Gjd2c76_MSFqR6K1WzX=dsVcuPDQ@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Lars Kurth <lars.kurth@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [ANNOUNCE] Xen 4.1.4 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 06:22:13PM +0100, William Dauchy wrote:
> Hello,
> 
> On Tue, Dec 18, 2012 at 2:48 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > I am pleased to announce the release of Xen 4.1.4. This is
> > available immediately from its mercurial repository:
> > http://xenbits.xen.org/xen-4.1-testing.hg (tag RELEASE-4.1.4)
> 
> I can't find the RELEASE-4.1.4 tag
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/tags
> last one is 4.1.4-rc2 at the moment.
> 

It's in the staging tree:
http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg/tags

It hasn't passed the automated tests yet, so the push to the non-staging tree hasn't happened.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 20:24:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 20:24:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl3hT-0002gu-OO; Tue, 18 Dec 2012 20:23:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Tl3hS-0002gl-Cm
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 20:23:38 +0000
Received: from [85.158.143.35:33442] by server-2.bemta-4.messagelabs.com id
	15/11-30861-9C0D0D05; Tue, 18 Dec 2012 20:23:37 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355862216!14639797!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjU1NzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6335 invoked from network); 18 Dec 2012 20:23:37 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 20:23:37 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id E99801E73;
	Tue, 18 Dec 2012 22:23:35 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id B9CA9104001; Tue, 18 Dec 2012 22:23:35 +0200 (EET)
Date: Tue, 18 Dec 2012 22:23:35 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20121218202335.GM8912@reaktio.net>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 02:06:56PM +0000, George Dunlap wrote:
>    On Tue, Dec 18, 2012 at 1:58 PM, Ian Campbell <[1]Ian.Campbell@citrix.com>
>    wrote:
> 
>      On Tue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:
>      > One of the requests from the [2]xenorg.uservoice.com page that had a
>      > moderate amount of interest was to allow block devices to be resized.
>      > There's a description here:
>      >
>      >
>      [3]https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize
>      >
>      >
>      > I have no idea what this would take -- can anyone comment?
> 
>      Doesn't that already work? I thought this was patched in the PV block
>      drivers ages ago...
> 
>      Yes, [4]http://wiki.xen.org/wiki/XenParavirtOps lists it under 2.6.36.
> 
>      Maybe this is a missing feature of (lib)xl vs xend?
> 
>    "xm help" doesn't show a "block-resize" command, nor does grepping through
>    tools for "resize" turn up anything.
> 
>    Would someone be willing to do some investigation into whether such a
>    command is implemented in the protocol, and what it would take to get a
>    "xm block-resize" command working?  (Not necessarily do it, but have an
>    idea what probably needs to be done.)
> 

Resizing is supported automatically if you are running new enough blkback and blkfront!
Resize the backing LVM volume in dom0 and the domU will notice the disk has been resized.
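For example (an illustrative sketch, not from this thread: the volume group, volume, and guest device names are placeholders, and this assumes blkback/blkfront new enough to propagate the resize):

```shell
# In dom0: grow the LVM volume backing the guest's virtual disk
# (vg0/guest-disk is a placeholder name)
lvextend -L +10G /dev/vg0/guest-disk

# In the domU: the kernel notices the new capacity; the filesystem
# still has to be grown separately, e.g. for ext3/ext4
# (device name assumed):
resize2fs /dev/xvda1
```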

There has been discussion (and patches) on xen-devel a couple of years ago.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 20:24:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 20:24:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl3hT-0002gu-OO; Tue, 18 Dec 2012 20:23:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Tl3hS-0002gl-Cm
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 20:23:38 +0000
Received: from [85.158.143.35:33442] by server-2.bemta-4.messagelabs.com id
	15/11-30861-9C0D0D05; Tue, 18 Dec 2012 20:23:37 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355862216!14639797!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjU1NzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6335 invoked from network); 18 Dec 2012 20:23:37 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 20:23:37 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id E99801E73;
	Tue, 18 Dec 2012 22:23:35 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id B9CA9104001; Tue, 18 Dec 2012 22:23:35 +0200 (EET)
Date: Tue, 18 Dec 2012 22:23:35 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20121218202335.GM8912@reaktio.net>
References: <CAFLBxZa1XmDwMEkmwSV9G3bY98wU_dz5YgsgDD0ROCkYxfD+iA@mail.gmail.com>
	<1355839087.14620.219.camel@zakaz.uk.xensource.com>
	<CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZZk9=dtnE+1ro-ZCN-6SN5JicQdH7nmejVvJgLmiSNzJg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Resizing block devices live?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 02:06:56PM +0000, George Dunlap wrote:
>    On Tue, Dec 18, 2012 at 1:58 PM, Ian Campbell <[1]Ian.Campbell@citrix.com>
>    wrote:
> 
>      On Tue, 2012-12-18 at 13:52 +0000, George Dunlap wrote:
>      > One of the requests from the [2]xenorg.uservoice.com page that had a
>      > moderate amount of interest was to allow block devices to be resized.
>      > There's a description here:
>      >
>      >
>      [3]https://xenorg.uservoice.com/forums/172169-xen-development/suggestions/3140313-implement-block-device-resize
>      >
>      >
>      > I have no idea what this would take -- can anyone comment?
> 
>      Doesn't that already work? I thought this was patch in the PV block
>      drivers ages ago...
> 
>      Yes, [4]http://wiki.xen.org/wiki/XenParavirtOps lists it under 2.6.36.
> 
>      Maybe this is a missing feature of (lib)xl vs xend?
> 
>    "xm help" doesn't show a "block-resize" command, nor does grepping through
>    tools for "resize" turn up anything.
> 
>    Would someone be willing to do some investigation into whether such a
>    command is implemented in the protocol, and what it would take to get a
>    "xm block-resize" command working?  (Not necessarily do it, but have an
>    idea what probably needs to be done.)
> 

Resizing is supported automatically if you are running a new enough blkback and blkfront!
Resize the backing LVM volume in dom0 and the domU will notice the disk has been resized.

There was discussion (and patches) about this on xen-devel a couple of years ago.
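To illustrate the workflow described above, something along these lines (a command transcript, not meant to be run verbatim: the volume group, LV name and the domU's device node are hypothetical, and the filesystem inside the guest still has to be grown separately):

```
# In dom0: grow the backing LVM volume (vg0/guest-disk is a made-up name).
lvextend -L +10G /dev/vg0/guest-disk

# blkback picks up the new size and notifies blkfront; the domU kernel
# logs a capacity change for the virtual disk. Then, inside the domU,
# grow the filesystem online, e.g. for ext3/ext4 on /dev/xvda:
resize2fs /dev/xvda
```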

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 20:53:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 20:53:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl49g-00038s-JF; Tue, 18 Dec 2012 20:52:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl49e-00038n-OW
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 20:52:46 +0000
Received: from [85.158.138.51:13774] by server-8.bemta-3.messagelabs.com id
	33/61-01297-997D0D05; Tue, 18 Dec 2012 20:52:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355863960!10821196!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9864 invoked from network); 18 Dec 2012 20:52:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 20:52:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="236435"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 20:52:38 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 20:52:37 +0000
Message-ID: <1355863957.28610.40.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 18 Dec 2012 20:52:37 +0000
In-Reply-To: <alpine.DEB.2.02.1212181838460.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-5-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181838460.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:40 +0000, Stefano Stabellini wrote:
> > +    case 8 ... 12: /* Register banked in FIQ mode */
> > +        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
> > +        if ( fiq_mode(regs) )
> > +            return &regs->r8_fiq + reg - 8;
> > +        else
> > +            return &regs->r8_fiq + reg - 8;
> 
> what's the point of this if?

Oops, the else case shouldn't have the _fiq suffix!
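For the record, the fixed selector would presumably look like the sketch below. This is a minimal standalone illustration, not the real Xen code: the struct here is a hypothetical cut-down register file, and fiq_mode() is a stand-in for the real mode check.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical cut-down register file; the real struct in Xen has many
 * more fields, only the layout relied on here is reproduced. */
struct cpu_regs {
    uint32_t r8, r9, r10, r11, r12;                      /* usr bank */
    uint32_t r8_fiq, r9_fiq, r10_fiq, r11_fiq, r12_fiq;  /* FIQ bank */
    int in_fiq;                                          /* stand-in for fiq_mode() */
};

static int fiq_mode(const struct cpu_regs *regs)
{
    return regs->in_fiq;
}

/* Corrected selection for r8..r12: FIQ mode reads the banked copies,
 * every other mode reads the ordinary registers. */
static uint32_t *select_reg(struct cpu_regs *regs, int reg)
{
    if ( fiq_mode(regs) )
        return &regs->r8_fiq + (reg - 8);
    else
        return &regs->r8 + (reg - 8);   /* no _fiq suffix here */
}
```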

(seriously, please can you start trimming your quotes! I nearly missed
this single line comment among the 100 lines you needlessly quoted)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 20:55:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 20:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl4Ba-0003Dq-3k; Tue, 18 Dec 2012 20:54:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tl4BY-0003Di-Vy
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 20:54:45 +0000
Received: from [85.158.139.211:57665] by server-15.bemta-5.messagelabs.com id
	D6/BF-20523-418D0D05; Tue, 18 Dec 2012 20:54:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355864083!20574006!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16461 invoked from network); 18 Dec 2012 20:54:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 20:54:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="236449"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 20:54:43 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Tue, 18 Dec 2012 20:54:43 +0000
Message-ID: <1355864082.28610.41.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 18 Dec 2012 20:54:42 +0000
In-Reply-To: <alpine.DEB.2.02.1212181830400.17523@kaball.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1355850890.14620.279.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212181830400.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:31 +0000, Stefano Stabellini wrote:
> I don't have any comments, they all look pretty straightforward.

Is that an Acked-by: Stefano on all 15?




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 21:17:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 21:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl4Wn-0003bE-3c; Tue, 18 Dec 2012 21:16:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tl4Wl-0003b9-EQ
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 21:16:39 +0000
Received: from [85.158.139.211:63862] by server-13.bemta-5.messagelabs.com id
	C5/D3-10716-63DD0D05; Tue, 18 Dec 2012 21:16:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355865397!19249582!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8646 invoked from network); 18 Dec 2012 21:16:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 21:16:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="236708"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 21:16:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 21:16:37 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tl4Wj-0001Wz-6e;
	Tue, 18 Dec 2012 21:16:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tl4Wi-00038x-P5;
	Tue, 18 Dec 2012 21:16:36 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14780-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 21:16:36 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14780: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7537694671169627713=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7537694671169627713==
Content-Type: text/plain

flight 14780 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14780/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14776
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14776

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  d5c0389bf26c
baseline version:
 xen                  3664b0420dfa

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dario.faggioli@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=d5c0389bf26c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable d5c0389bf26c
+ branch=xen-unstable
+ revision=d5c0389bf26c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r d5c0389bf26c ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 4 changes to 4 files


--===============7537694671169627713==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7537694671169627713==--

From xen-devel-bounces@lists.xen.org Tue Dec 18 21:17:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 21:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl4Wn-0003bE-3c; Tue, 18 Dec 2012 21:16:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tl4Wl-0003b9-EQ
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 21:16:39 +0000
Received: from [85.158.139.211:63862] by server-13.bemta-5.messagelabs.com id
	C5/D3-10716-63DD0D05; Tue, 18 Dec 2012 21:16:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1355865397!19249582!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDExNzM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8646 invoked from network); 18 Dec 2012 21:16:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 21:16:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="236708"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Dec 2012 21:16:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Tue, 18 Dec 2012 21:16:37 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tl4Wj-0001Wz-6e;
	Tue, 18 Dec 2012 21:16:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tl4Wi-00038x-P5;
	Tue, 18 Dec 2012 21:16:36 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14780-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Dec 2012 21:16:36 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14780: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7537694671169627713=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7537694671169627713==
Content-Type: text/plain

flight 14780 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14780/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14776
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14776

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  d5c0389bf26c
baseline version:
 xen                  3664b0420dfa

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dario.faggioli@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=d5c0389bf26c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable d5c0389bf26c
+ branch=xen-unstable
+ revision=d5c0389bf26c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r d5c0389bf26c ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 4 changes to 4 files


--===============7537694671169627713==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7537694671169627713==--

From xen-devel-bounces@lists.xen.org Tue Dec 18 21:17:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 21:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl4XW-0003dk-Ht; Tue, 18 Dec 2012 21:17:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tl4XV-0003dX-4k
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 21:17:25 +0000
Received: from [85.158.143.99:16798] by server-1.bemta-4.messagelabs.com id
	9F/92-28401-46DD0D05; Tue, 18 Dec 2012 21:17:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355865442!24754691!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMjkwNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30256 invoked from network); 18 Dec 2012 21:17:23 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Dec 2012 21:17:23 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBILGHxl004751
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Dec 2012 21:16:18 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBILGGQ4004940
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Dec 2012 21:16:17 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBILGF4M029819; Tue, 18 Dec 2012 15:16:15 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 13:16:15 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 000471BF216; Tue, 18 Dec 2012 16:16:13 -0500 (EST)
Date: Tue, 18 Dec 2012 16:16:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121218211613.GA5697@phenom.dumpdata.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 02:25:16PM +0000, Stefano Stabellini wrote:
> On Thu, 13 Dec 2012, Jan Beulich wrote:
> > >>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > wrote:
> > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > >> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > >> > On Wed, 12 Dec 2012 17:15:23 -0800
> > >> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > >> > 
> > >> >> On Tue, 11 Dec 2012 12:10:19 +0000
> > >> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > >> >> 
> > >> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > >> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> > >> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > >> >> > 
> > >> >> > That's strange, because AFAIK Linux never edits the MSI-X
> > >> >> > entries directly: take a look at
> > >> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs; Linux only remaps
> > >> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> > >> >> > touch the real MSI-X table.
> > >> >> 
> > >> >> So, this is what's happening. The side effect of:
> > >> >> 
> > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> > >> >>                                 dev->msix_table.last) )
> > >> >>             WARN();
> > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> > >> >>                                 dev->msix_pba.last) )
> > >> >>             WARN();
> > >> >> 
> > >> >> in msix_capability_init() in Xen is that the dom0 EPT entries that
> > >> >> I've mapped go from RW to read-only. Then when dom0 accesses them,
> > >> >> I get an EPT violation. In the pure PV case, the PTE entry used to
> > >> >> access the iomem is RW, and the rangeset addition above doesn't
> > >> >> affect it. I don't understand why; looking into that now...
> > >> 
> > >> As far as I was able to tell back at the time when I implemented
> > >> this, existing code shouldn't have mappings for these tables in
> > >> place at the time these ranges get added here. But I noted in
> > >> the patch description that this is a potential issue (and may need
> > >> fixing if deemed severe enough - back then, apparently nobody
> > >> really cared, perhaps largely because passthrough to PV guests
> > >> isn't considered fully secure anyway).
> > >> 
> > >> Now - did that change? I.e. can you describe where the mappings
> > >> come from that cause the problem here?
> > > 
> > > The generic Linux MSI-X handling code does that, before calling the
> > > arch-specific MSI setup function, for which we have a Xen version
> > > (xen_initdom_setup_msi_irqs):
> > > 
> > > pci_enable_msix -> msix_capability_init -> msix_map_region
> > 
> > Ah, okay, (of course?) I had looked only at the forward ported
> > version of this. Is all that fiddling with the mask bits really
> > being suppressed properly when running under Xen? Otherwise
> > pv-ops is quite broken in this regard at present... And if it is,
> > I don't see what the respective ioremap() is good for here in
> > the Xen case.
> 
> Actually I think that you might be right: just looking at the code it
> seems that the mask bits get written to the table once as part of the
> initialization process:
> 
> pci_enable_msix -> msix_capability_init -> msix_program_entries
> 
> Unfortunately msix_program_entries is called a few lines after
> arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
> a pirq.
> However, after that is done, all the masking/unmasking is done via
> irq_mask, which we handle properly by masking/unmasking the
> corresponding event channels.
> 
> 
> Possible solutions off the top of my head:

There is also the option of piggybacking on Joerg's patches,
which introduce a new x86_msi_ops hook: compose_msi_msg.

See here: https://lkml.org/lkml/2012/8/20/432
(I think there was also a more recent one posted at some point).
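[Editorial note: the quoted msix_capability_init() snippet above can be sketched in miniature. This is a much-simplified userspace model of the mmio_ro_ranges idea — once the frames backing the MSI-X table/PBA are added to a read-only rangeset, any mapping of those frames must be treated as RO (hence the dom0 EPT entries going from RW to read-only). The structures and names below are illustrative, not Xen's real rangeset API.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for Xen's rangeset: a flat list of frame ranges
 * that must only ever be mapped read-only. */
struct range { unsigned long first, last; };

#define MAX_RANGES 16
static struct range ro_ranges[MAX_RANGES];
static size_t nr_ranges;

/* Returns 0 on success, -1 on failure (callers in the quoted Xen code
 * just WARN() when this fails). */
static int rangeset_add_range(unsigned long first, unsigned long last)
{
    if (nr_ranges == MAX_RANGES || first > last)
        return -1;
    ro_ranges[nr_ranges++] = (struct range){ first, last };
    return 0;
}

/* Would mappings of this frame have to be read-only? */
static bool mfn_is_mmio_ro(unsigned long frame)
{
    for (size_t i = 0; i < nr_ranges; i++)
        if (frame >= ro_ranges[i].first && frame <= ro_ranges[i].last)
            return true;
    return false;
}
```

The point of the thread is that dom0 already holds RW mappings of these frames when the range is registered, so the EPT entries get demoted and subsequent accesses fault.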

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 22:00:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 22:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl5Ca-0004ef-Pj; Tue, 18 Dec 2012 21:59:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1Tl5CZ-0004eY-5I
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 21:59:51 +0000
Received: from [85.158.143.99:58547] by server-2.bemta-4.messagelabs.com id
	1D/EB-30861-657E0D05; Tue, 18 Dec 2012 21:59:50 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355867988!17806861!1
X-Originating-IP: [209.85.212.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30982 invoked from network); 18 Dec 2012 21:59:49 -0000
Received: from mail-vb0-f46.google.com (HELO mail-vb0-f46.google.com)
	(209.85.212.46)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 21:59:49 -0000
Received: by mail-vb0-f46.google.com with SMTP id b13so1515324vby.5
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 13:59:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type:x-gm-message-state;
	bh=qTnhU+Z2X//hkiR0xzy53bpM5PTMsQtkiV+jffdpA6U=;
	b=a59FqtjOqyR4dco+wI816E2qgA9oYYGIpufxO8mrK/dQjsjaBOX7DbkPdfH76TNH3b
	rgLcO/jmgJBIXwFrLzwHJ6JYDqPl6c8bXazN7Na0oItBaMb+uVm5gJg9JU7LoqVsmNG5
	mPqIwaFvAFiUuvuYU7k1RYlLWjnBxddhe4sSeto5rbxFq1Rl0NodnYffC3plA3SnvZeQ
	IizHezl5FDmmkEEOO3URxAhLvynGPwvlPoZ8HIxe59pr/VVKFRAp81iiI263jGJl9dsx
	oT9hBTmAzqM/7Hri9NiRmJrnMxbocRhlvpy9uvoxV8zlEwBhJPfU7joAo5bVatBvNKBB
	PU/w==
X-Received: by 10.52.27.244 with SMTP id w20mr5003585vdg.44.1355867988499;
	Tue, 18 Dec 2012 13:59:48 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id n10sm2403872vde.9.2012.12.18.13.59.46
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 13:59:47 -0800 (PST)
Date: Tue, 18 Dec 2012 16:59:45 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <20121218101125.GC9072@mudshark.cambridge.arm.com>
Message-ID: <alpine.LFD.2.02.1212181657020.1263@xanadu.home>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-5-git-send-email-will.deacon@arm.com>
	<alpine.LFD.2.02.1212171537230.1263@xanadu.home>
	<20121218101125.GC9072@mudshark.cambridge.arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQlaBQDZgBoUap/5AaLI1fgZ9xWAfvYb+5o9OuV/wjf5Grqnc+C0HzQ04gkN7rxZeX2LIS6b
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 4/6] ARM: psci: add support for PSCI
 invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:
> On Mon, Dec 17, 2012 at 08:51:27PM +0000, Nicolas Pitre wrote:
> > On Mon, 17 Dec 2012, Will Deacon wrote:
> > > +static int psci_cpu_suspend(struct psci_power_state state,
> > > +			    unsigned long entry_point)
> > > +{
> > > +	int err;
> > > +	u32 fn, power_state;
> > > +
> > > +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> > > +	power_state = psci_power_state_pack(state);
> > > +	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);
> > 
> > Why do you need the u32 cast here?
> 
> That's a hangover from when entry_point was a void *. I'll fix that, thanks.

You didn't pass virtual pointers to the PSCI call, did you?  :-)
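[Editorial note: the psci_power_state_pack() call in the quoted suspend path folds the C struct into the single u32 register argument the PSCI firmware interface expects. A minimal sketch follows; the field widths and shift positions are illustrative assumptions, not the authoritative PSCI encoding, which is defined by the PSCI specification.]

```c
#include <stdint.h>

typedef uint32_t u32;
typedef uint16_t u16;
typedef uint8_t u8;

/* Mirrors the kind of struct the quoted patch passes to
 * psci_cpu_suspend(). */
struct psci_power_state {
    u16 id;
    u8 type;
    u8 affinity_level;
};

/* Illustrative field positions (hypothetical, chosen for this sketch). */
#define PSCI_POWER_STATE_ID_SHIFT    0
#define PSCI_POWER_STATE_TYPE_SHIFT  16
#define PSCI_POWER_STATE_AFFL_SHIFT  24

/* Pack the struct into the single u32 that invoke_psci_fn() hands to
 * firmware as the power_state argument. */
static u32 psci_power_state_pack(struct psci_power_state state)
{
    return ((u32)state.id << PSCI_POWER_STATE_ID_SHIFT) |
           ((u32)state.type << PSCI_POWER_STATE_TYPE_SHIFT) |
           ((u32)state.affinity_level << PSCI_POWER_STATE_AFFL_SHIFT);
}
```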

> > > +static int __init psci_init(void)
> > > +{
> > > +	struct device_node *np;
> > > +	const char *method;
> > > +	u32 base, id;
> > > +
> > > +	np = of_find_matching_node(NULL, psci_of_match);
> > > +	if (!np)
> > > +		return 0;
> > > +
> > > +	pr_info("probing function IDs from device-tree\n");
> > 
> > Having "probing function IDs from device-tree" in the middle of a kernel 
> > log isn't very informative.  Better make this more useful or remove it.
> > 
> > > +
> > > +	if (of_property_read_u32(np, "function-base", &base)) {
> > > +		pr_warning("missing \"function-base\" property\n");
> > 
> > Same thing here: this lacks context in a kernel log.
> > And so on for the other occurrences.
> 
> Actually, these are all prefixed with "psci: " thanks to the pr_fmt
> definition at the top of the file.

Ah, goodie!  No more issue then.
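[Editorial note: the pr_fmt mechanism Will refers to can be shown with a small userspace sketch. Defining pr_fmt() near the top of a kernel source file makes every pr_info()/pr_warning() in that file carry a subsystem prefix automatically; here an snprintf into a buffer stands in for the kernel log, and probe_message() is a helper invented for this sketch.]

```c
#include <stdio.h>
#include <string.h>

static char logbuf[128];

/* In the kernel this macro is defined before including printk.h; every
 * pr_*() in the file then expands with the prefix prepended by string
 * literal concatenation. */
#define pr_fmt(fmt) "psci: " fmt
#define pr_info(fmt, ...) \
    snprintf(logbuf, sizeof(logbuf), pr_fmt(fmt), ##__VA_ARGS__)

/* Returns the message as it would appear in the log. */
static const char *probe_message(void)
{
    pr_info("probing function IDs from device-tree\n");
    return logbuf;
}
```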


Nicolas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:
> On Mon, Dec 17, 2012 at 08:51:27PM +0000, Nicolas Pitre wrote:
> > On Mon, 17 Dec 2012, Will Deacon wrote:
> > > +static int psci_cpu_suspend(struct psci_power_state state,
> > > +			    unsigned long entry_point)
> > > +{
> > > +	int err;
> > > +	u32 fn, power_state;
> > > +
> > > +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> > > +	power_state = psci_power_state_pack(state);
> > > +	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);
> > 
> > Why do you need the u32 cast here?
> 
> That's a hangover from when entry_point was a void *. I'll fix that, thanks.

Hopefully you didn't pass virtual pointers to the PSCI call, did you?  :-)

> > > +static int __init psci_init(void)
> > > +{
> > > +	struct device_node *np;
> > > +	const char *method;
> > > +	u32 base, id;
> > > +
> > > +	np = of_find_matching_node(NULL, psci_of_match);
> > > +	if (!np)
> > > +		return 0;
> > > +
> > > +	pr_info("probing function IDs from device-tree\n");
> > 
> > Having "probing function IDs from device-tree" in the middle of a kernel 
> > log isn't very informative.  Better make this more useful or remove it.
> > 
> > > +
> > > +	if (of_property_read_u32(np, "function-base", &base)) {
> > > +		pr_warning("missing \"function-base\" property\n");
> > 
> > Same thing here: this lacks context in a kernel log.
> > And so on for the other occurrences.
> 
> Actually, these are all prefixed with "psci: " thanks to the pr_fmt
> definition at the top of the file.

Ah, goodie!  No more issues, then.


Nicolas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 22:18:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 22:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl5UF-0004y5-Hr; Tue, 18 Dec 2012 22:18:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Tl5UB-0004xx-Tf
	for xen-devel@lists.xen.org; Tue, 18 Dec 2012 22:18:05 +0000
Received: from [85.158.138.51:43847] by server-2.bemta-3.messagelabs.com id
	23/17-11239-69BE0D05; Tue, 18 Dec 2012 22:17:58 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355869075!27683234!1
X-Originating-IP: [209.85.212.46]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25540 invoked from network); 18 Dec 2012 22:17:56 -0000
Received: from mail-vb0-f46.google.com (HELO mail-vb0-f46.google.com)
	(209.85.212.46)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 22:17:56 -0000
Received: by mail-vb0-f46.google.com with SMTP id b13so1552660vby.33
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 14:17:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition
	:content-transfer-encoding:in-reply-to:user-agent;
	bh=IoD9LChxNaQ1OwfmZecxYxXR55H60/PIbgy3CAXvYkk=;
	b=z1FG4pgqUJ/LzgzXyG7+ozoUX7MZg1IkrZ2g0QnJMqcNF6gkMcaDFwcmZyq04ltFhD
	7e/adaq5ir8zhjioGkQ42tMswpFmzrJXMwo7tyXbH/4Wk5nmvzUzUUkY+jZA22g3aZFe
	lB389UAdFy4pmU1OspyKCKPBw4/tVH32Qj07bkzQpsLUD6xH2HGtipBDoGU7/jEH63Be
	9nmsvYDNHh/BAja8iGrSmyA8WSHO994poZ15nN74YCYrfKUZaW0dVFLS7Omc/k/I7oeP
	iTJa2YXJHq37Kqz1mIyAaNq2kMH2RoE9cX44V1N02haP1Rwn2dh+BrBdUjAYtVT8jzCT
	ZkrA==
X-Received: by 10.52.175.163 with SMTP id cb3mr4871258vdc.76.1355869074591;
	Tue, 18 Dec 2012 14:17:54 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id j3sm2451470vdv.0.2012.12.18.14.17.53
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 14:17:54 -0800 (PST)
Date: Tue, 18 Dec 2012 17:17:51 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Message-ID: <20121218221749.GA6332@phenom.dumpdata.com>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
	<49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SGV5IEFuZHJlcywKClRoYW5rcyBmb3IgeW91ciByZXNwb25zZS4gU29ycnkgZm9yIHRoZSByZWFs
bHkgbGF0ZSByZXNwb25zZSAtIEkKaGFkIGl0IGluIG15IHBvc3Rwb25lZCBtYWlsYm94IGFuZCB0
aG91Z2h0IGl0IGhhcyBiZWVuIHNlbnQKYWxyZWFkeS4KCk9uIE1vbiwgRGVjIDAzLCAyMDEyIGF0
IDEwOjI0OjQwUE0gLTA1MDAsIEFuZHJlcyBMYWdhci1DYXZpbGxhIHdyb3RlOgo+ID4gSSBlYXJs
aWVyIHByb21pc2VkIGEgY29tcGxldGUgYW5hbHlzaXMgb2YgdGhlIHByb2JsZW0KPiA+IGFkZHJl
c3NlZCBieSB0aGUgcHJvcG9zZWQgY2xhaW0gaHlwZXJjYWxsIGFzIHdlbGwgYXMKPiA+IGFuIGFu
YWx5c2lzIG9mIHRoZSBhbHRlcm5hdGUgc29sdXRpb25zLiAgSSBoYWQgbm90Cj4gPiB5ZXQgcHJv
dmlkZWQgdGhlc2UgYW5hbHlzZXMgd2hlbiBJIGFza2VkIGZvciBhcHByb3ZhbAo+ID4gdG8gY29t
bWl0IHRoZSBoeXBlcnZpc29yIHBhdGNoLCBzbyB0aGVyZSB3YXMgc3RpbGwKPiA+IGEgZ29vZCBh
bW91bnQgb2YgbWlzdW5kZXJzdGFuZGluZywgYW5kIEkgYW0gdHJ5aW5nCj4gPiB0byBmaXggdGhh
dCBoZXJlLgo+ID4gCj4gPiBJIGhhZCBob3BlZCB0aGlzIGVzc2F5IGNvdWxkIGJlIGJvdGggY29u
Y2lzZSBhbmQgY29tcGxldGUKPiA+IGJ1dCBxdWlja2x5IGZvdW5kIGl0IHRvIGJlIGltcG9zc2li
bGUgdG8gYmUgYm90aCBhdCB0aGUKPiA+IHNhbWUgdGltZS4gIFNvIEkgaGF2ZSBlcnJlZCBvbiB0
aGUgc2lkZSBvZiB2ZXJib3NpdHksCj4gPiBidXQgYWxzbyBoYXZlIGF0dGVtcHRlZCB0byBlbnN1
cmUgdGhhdCB0aGUgYW5hbHlzaXMKPiA+IGZsb3dzIHNtb290aGx5IGFuZCBpcyB1bmRlcnN0YW5k
YWJsZSB0byBhbnlvbmUgaW50ZXJlc3RlZAo+ID4gaW4gbGVhcm5pbmcgbW9yZSBhYm91dCBtZW1v
cnkgYWxsb2NhdGlvbiBpbiBYZW4uCj4gPiBJJ2QgYXBwcmVjaWF0ZSBmZWVkYmFjayBmcm9tIG90
aGVyIGRldmVsb3BlcnMgdG8gdW5kZXJzdGFuZAo+ID4gaWYgSSd2ZSBhbHNvIGFjaGlldmVkIHRo
YXQgZ29hbC4KPiA+IAo+ID4gSWFuLCBJYW4sIEdlb3JnZSwgYW5kIFRpbSAtLSBJIGhhdmUgdGFn
Z2VkIGEgZmV3Cj4gPiBvdXQtb2YtZmxvdyBxdWVzdGlvbnMgdG8geW91IHdpdGggW0lJR0ZdLiAg
SWYgSSBsb3NlCj4gPiB5b3UgYXQgYW55IHBvaW50LCBJJ2QgZXNwZWNpYWxseSBhcHByZWNpYXRl
IHlvdXIgZmVlZGJhY2sKPiA+IGF0IHRob3NlIHBvaW50cy4gIEkgdHJ1c3QgdGhhdCwgZmlyc3Qs
IHlvdSB3aWxsIHJlYWQKPiA+IHRoaXMgY29tcGxldGVseS4gIEFzIEkndmUgc2FpZCwgSSB1bmRl
cnN0YW5kIHRoYXQKPiA+IE9yYWNsZSdzIHBhcmFkaWdtIG1heSBkaWZmZXIgaW4gbWFueSB3YXlz
IGZyb20geW91cgo+ID4gb3duLCBzbyBJIGFsc28gdHJ1c3QgdGhhdCB5b3Ugd2lsbCByZWFkIGl0
IGNvbXBsZXRlbHkKPiA+IHdpdGggYW4gb3BlbiBtaW5kLgo+ID4gCj4gPiBUaGFua3MsCj4gPiBE
YW4KPiA+IAo+ID4gUFJPQkxFTSBTVEFURU1FTlQgT1ZFUlZJRVcKPiA+IAo+ID4gVGhlIGZ1bmRh
bWVudGFsIHByb2JsZW0gaXMgYSByYWNlOyB0d28gZW50aXRpZXMgYXJlCj4gPiBjb21wZXRpbmcg
Zm9yIHBhcnQgb3IgYWxsIG9mIGEgc2hhcmVkIHJlc291cmNlOiBpbiB0aGlzIGNhc2UsCj4gPiBw
aHlzaWNhbCBzeXN0ZW0gUkFNLiAgTm9ybWFsbHksIGEgbG9jayBpcyB1c2VkIHRvIG1lZGlhdGUK
PiA+IGEgcmFjZS4KPiA+IAo+ID4gRm9yIG1lbW9yeSBhbGxvY2F0aW9uIGluIFhlbiwgdGhlcmUg
YXJlIHR3byBzaWduaWZpY2FudAo+ID4gZW50aXRpZXMsIHRoZSB0b29sc3RhY2sgYW5kIHRoZSBo
eXBlcnZpc29yLiAgQW5kLCBpbgo+ID4gZ2VuZXJhbCB0ZXJtcywgdGhlcmUgYXJlIGN1cnJlbnRs
eSB0d28gaW1wb3J0YW50IGxvY2tzOgo+ID4gb25lIHVzZWQgaW4gdGhlIHRvb2xzdGFjayBmb3Ig
ZG9tYWluIGNyZWF0aW9uOwo+ID4gYW5kIG9uZSBpbiB0aGUgaHlwZXJ2aXNvciB1c2VkIGZvciB0
aGUgYnVkZHkgYWxsb2NhdG9yLgo+ID4gCj4gPiBDb25zaWRlcmluZyBmaXJzdCBvbmx5IGRvbWFp
biBjcmVhdGlvbiwgdGhlIHRvb2xzdGFjawo+ID4gbG9jayBpcyB0YWtlbiB0byBlbnN1cmUgdGhh
dCBkb21haW4gY3JlYXRpb24gaXMgc2VyaWFsaXplZC4KPiA+IFRoZSBsb2NrIGlzIHRha2VuIHdo
ZW4gZG9tYWluIGNyZWF0aW9uIHN0YXJ0cywgYW5kIHJlbGVhc2VkCj4gPiB3aGVuIGRvbWFpbiBj
cmVhdGlvbiBpcyBjb21wbGV0ZS4KPiA+IAo+ID4gQXMgc3lzdGVtIGFuZCBkb21haW4gbWVtb3J5
IHJlcXVpcmVtZW50cyBncm93LCB0aGUgYW1vdW50Cj4gPiBvZiB0aW1lIHRvIGFsbG9jYXRlIGFs
bCBuZWNlc3NhcnkgbWVtb3J5IHRvIGxhdW5jaCBhIGxhcmdlCj4gPiBkb21haW4gaXMgZ3Jvd2lu
ZyBhbmQgbWF5IG5vdyBleGNlZWQgc2V2ZXJhbCBtaW51dGVzLCBzbwo+ID4gdGhpcyBzZXJpYWxp
emF0aW9uIGlzIGluY3JlYXNpbmdseSBwcm9ibGVtYXRpYy4gIFRoZSByZXN1bHQKPiA+IGlzIGEg
Y3VzdG9tZXIgcmVwb3J0ZWQgcHJvYmxlbTogIElmIGEgY3VzdG9tZXIgd2FudHMgdG8KPiA+IGxh
dW5jaCB0d28gb3IgbW9yZSB2ZXJ5IGxhcmdlIGRvbWFpbnMsIHRoZSAid2FpdCB0aW1lIgo+ID4g
cmVxdWlyZWQgYnkgdGhlIHNlcmlhbGl6YXRpb24gaXMgdW5hY2NlcHRhYmxlLgo+ID4gCj4gPiBP
cmFjbGUgd291bGQgbGlrZSB0byBzb2x2ZSB0aGlzIHByb2JsZW0uICBBbmQgT3JhY2xlCj4gPiB3
b3VsZCBsaWtlIHRvIHNvbHZlIHRoaXMgcHJvYmxlbSBub3QganVzdCBmb3IgYSBzaW5nbGUKPiA+
IGN1c3RvbWVyIHNpdHRpbmcgaW4gZnJvbnQgb2YgYSBzaW5nbGUgbWFjaGluZSBjb25zb2xlLCBi
dXQKPiA+IGZvciB0aGUgdmVyeSBjb21wbGV4IGNhc2Ugb2YgYSBsYXJnZSBudW1iZXIgb2YgbWFj
aGluZXMsCj4gPiB3aXRoIHRoZSAiYWdlbnQiIG9uIGVhY2ggbWFjaGluZSB0YWtpbmcgaW5kZXBl
bmRlbnQKPiA+IGFjdGlvbnMgaW5jbHVkaW5nIGF1dG9tYXRpYyBsb2FkIGJhbGFuY2luZyBhbmQg
cG93ZXIKPiA+IG1hbmFnZW1lbnQgdmlhIG1pZ3JhdGlvbi4KPiBIaSBEYW4sCj4gYW4gaXNzdWUg
d2l0aCB5b3VyIHJlYXNvbmluZyB0aHJvdWdob3V0IGhhcyBiZWVuIHRoZSBjb25zdGFudCBpbnZv
Y2F0aW9uIG9mIHRoZSBtdWx0aSBob3N0IGVudmlyb25tZW50IGFzIGEganVzdGlmaWNhdGlvbiBm
b3IgeW91ciBwcm9wb3NhbC4gQnV0IHRoaXMgYXJndW1lbnQgaXMgbm90IHVzZWQgaW4geW91ciBw
cm9wb3NhbCBiZWxvdyBiZXlvbmQgdGhpcyBtZW50aW9uIGluIHBhc3NpbmcuIEZ1cnRoZXIsIHRo
ZXJlIGlzIG5vIHJlbGF0aW9uIGJldHdlZW4gd2hhdCB5b3UgYXJlIGNoYW5naW5nICh0aGUgaHlw
ZXJ2aXNvcikgYW5kIHdoYXQgeW91IGFyZSBjbGFpbWluZyBpdCBpcyBuZWVkZWQgZm9yIChtdWx0
aSBob3N0IFZNIG1hbmFnZW1lbnQpLgo+IAoKSGVoLiBJIGhhZG4ndCByZWFsaXplZCB0aGF0IHRo
ZSBlbWFpbHMgbmVlZCB0byBjb25mb3JtIHRvIGEKdGhlIHdheSBsZWdhbCBicmllZnMgYXJlIHdy
aXR0ZW4gaW4gVVMgOi0pIE1lYW5pbmcgdGhhdAplYWNoIHRvcGljIG11c3QgYmUgYWRkcmVzc2Vk
LgoKQW55aG93LCB0aGUgbXVsdGktaG9zdCBlbnYgb3IgYSBzaW5nbGUtaG9zdCBlbnYgaGFzIHRo
ZSBzYW1lCmlzc3VlIC0geW91IHRyeSB0byBsYXVuY2ggbXVsdGlwbGUgZ3Vlc3RzIGFuZCB5b3Ug
c29tZSBvZgp0aGVtIG1pZ2h0IG5vdCBsYXVuY2guCgpUaGUgY2hhbmdlcyB0aGF0IERhbiBpcyBw
cm9wb3NpbmcgKHRoZSBjbGFpbSBoeXBlcmNhbGwpCndvdWxkIHByb3ZpZGUgdGhlIGZ1bmN0aW9u
YWxpdHkgdG8gZml4IHRoaXMgcHJvYmxlbS4KCj4gCj4gPiAgKFRoaXMgY29tcGxleCBlbnZpcm9u
bWVudAo+ID4gaXMgc29sZCBieSBPcmFjbGUgdG9kYXk7IGl0IGlzIG5vdCBhICJmdXR1cmUgdmlz
aW9uIi4pCj4gPiAKPiA+IFtJSUdUXSBDb21wbGV0ZWx5IGlnbm9yaW5nIGFueSBwb3NzaWJsZSBz
b2x1dGlvbnMgdG8gdGhpcwo+ID4gcHJvYmxlbSwgaXMgZXZlcnlvbmUgaW4gYWdyZWVtZW50IHRo
YXQgdGhpcyBfaXNfIGEgcHJvYmxlbQo+ID4gdGhhdCBfbmVlZHNfIHRvIGJlIHNvbHZlZCB3aXRo
IF9zb21lXyBjaGFuZ2UgaW4gdGhlIFhlbgo+ID4gZWNvc3lzdGVtPwo+ID4gCj4gPiBTT01FIElN
UE9SVEFOVCBCQUNLR1JPVU5EIElORk9STUFUSU9OCj4gPiAKPiA+IEluIHRoZSBzdWJzZXF1ZW50
IGRpc2N1c3Npb24sIGl0IGlzIGltcG9ydGFudCB0bwo+ID4gdW5kZXJzdGFuZCBhIGZldyB0aGlu
Z3M6Cj4gPiAKPiA+IFdoaWxlIHRoZSB0b29sc3RhY2sgbG9jayBpcyBoZWxkLCBhbGxvY2F0aW5n
IG1lbW9yeSBmb3IKPiA+IHRoZSBkb21haW4gY3JlYXRpb24gcHJvY2VzcyBpcyBkb25lIGFzIGEg
c2VxdWVuY2Ugb2Ygb25lCj4gPiBvciBtb3JlIGh5cGVyY2FsbHMsIGVhY2ggYXNraW5nIHRoZSBo
eXBlcnZpc29yIHRvIGFsbG9jYXRlCj4gPiBvbmUgb3IgbW9yZSAtLSAiWCIgLS0gc2xhYnMgb2Yg
cGh5c2ljYWwgUkFNLCB3aGVyZSBhIHNsYWIKPiA+IGlzIDIqKk4gY29udGlndW91cyBhbGlnbmVk
IHBhZ2VzLCBhbHNvIGtub3duIGFzIGFuCj4gPiAib3JkZXIgTiIgYWxsb2NhdGlvbi4gIFdoaWxl
IHRoZSBoeXBlcmNhbGwgaXMgZGVmaW5lZAo+ID4gdG8gd29yayB3aXRoIGFueSB2YWx1ZSBvZiBO
LCBjb21tb24gdmFsdWVzIGFyZSBOPTAKPiA+IChpbmRpdmlkdWFsIHBhZ2VzKSwgTj05ICgiaHVn
ZXBhZ2VzIiBvciAic3VwZXJwYWdlcyIpLAo+ID4gYW5kIE49MTggKCIxR2lCIHBhZ2VzIikuICBT
bywgZm9yIGV4YW1wbGUsIGlmIHRoZSB0b29sc3RhY2sKPiA+IHJlcXVpcmVzIDIwMU1pQiBvZiBt
ZW1vcnksIGl0IHdpbGwgbWFrZSB0d28gaHlwZXJjYWxsczoKPiA+IE9uZSB3aXRoIFg9MTAwIGFu
ZCBOPTksIGFuZCBvbmUgd2l0aCBYPTEgYW5kIE49MC4KPiA+IAo+ID4gV2hpbGUgdGhlIHRvb2xz
dGFjayBtYXkgYXNrIGZvciBhIHNtYWxsZXIgbnVtYmVyIFggb2YKPiA+IG9yZGVyPT05IHNsYWJz
LCBzeXN0ZW0gZnJhZ21lbnRhdGlvbiBtYXkgdW5wcmVkaWN0YWJseQo+ID4gY2F1c2UgdGhlIGh5
cGVydmlzb3IgdG8gZmFpbCB0aGUgcmVxdWVzdCwgaW4gd2hpY2ggY2FzZQo+ID4gdGhlIHRvb2xz
dGFjayB3aWxsIGZhbGwgYmFjayB0byBhIHJlcXVlc3QgZm9yIDUxMipYCj4gPiBpbmRpdmlkdWFs
IHBhZ2VzLiAgSWYgdGhlcmUgaXMgc3VmZmljaWVudCBSQU0gaW4gdGhlIHN5c3RlbSwKPiA+IHRo
aXMgcmVxdWVzdCBmb3Igb3JkZXI9PTAgcGFnZXMgaXMgZ3VhcmFudGVlZCB0byBzdWNjZWVkLgo+
ID4gVGh1cyBmb3IgYSAxVGlCIGRvbWFpbiwgdGhlIGh5cGVydmlzb3IgbXVzdCBiZSBwcmVwYXJl
ZAo+ID4gdG8gYWxsb2NhdGUgdXAgdG8gMjU2TWkgaW5kaXZpZHVhbCBwYWdlcy4KPiA+IAo+ID4g
Tm90ZSBjYXJlZnVsbHkgdGhhdCB3aGVuIHRoZSB0b29sc3RhY2sgaHlwZXJjYWxsIGFza3MgZm9y
Cj4gPiAxMDAgc2xhYnMsIHRoZSBoeXBlcnZpc29yICJoZWFwbG9jayIgaXMgY3VycmVudGx5IHRh
a2VuCj4gPiBhbmQgcmVsZWFzZWQgMTAwIHRpbWVzLiAgU2ltaWxhcmx5LCBmb3IgMjU2TSBpbmRp
dmlkdWFsCj4gPiBwYWdlcy4uLiAyNTYgbWlsbGlvbiBzcGluX2xvY2stYWxsb2NfcGFnZS1zcGlu
X3VubG9ja3MuCj4gPiBUaGlzIG1lYW5zIHRoYXQgZG9tYWluIGNyZWF0aW9uIGlzIG5vdCAiYXRv
bWljIiBpbnNpZGUKPiA+IHRoZSBoeXBlcnZpc29yLCB3aGljaCBtZWFucyB0aGF0IHJhY2VzIGNh
biBhbmQgd2lsbCBzdGlsbAo+ID4gb2NjdXIuCj4gPiAKPiA+IFJVTElORyBPVVQgU09NRSBTSU1Q
TEUgU09MVVRJT05TCj4gPiAKPiA+IElzIHRoZXJlIGFuIGVsZWdhbnQgc2ltcGxlIHNvbHV0aW9u
IGhlcmU/Cj4gPiAKPiA+IExldCdzIGZpcnN0IGNvbnNpZGVyIHRoZSBwb3NzaWJpbGl0eSBvZiBy
ZW1vdmluZyB0aGUgdG9vbHN0YWNrCj4gPiBzZXJpYWxpemF0aW9uIGVudGlyZWx5IGFuZC9vciB0
aGUgcG9zc2liaWxpdHkgdGhhdCB0d28KPiA+IGluZGVwZW5kZW50IHRvb2xzdGFjayB0aHJlYWRz
IChvciAiYWdlbnRzIikgY2FuIHNpbXVsdGFuZW91c2x5Cj4gPiByZXF1ZXN0IGEgdmVyeSBsYXJn
ZSBkb21haW4gY3JlYXRpb24gaW4gcGFyYWxsZWwuICBBcyBkZXNjcmliZWQKPiA+IGFib3ZlLCB0
aGUgaHlwZXJ2aXNvcidzIGhlYXBsb2NrIGlzIGluc3VmZmljaWVudCB0byBzZXJpYWxpemUgUkFN
Cj4gPiBhbGxvY2F0aW9uLCBzbyB0aGUgdHdvIGRvbWFpbiBjcmVhdGlvbiBwcm9jZXNzZXMgcmFj
ZS4gIElmIHRoZXJlCj4gPiBpcyBzdWZmaWNpZW50IHJlc291cmNlIGZvciBlaXRoZXIgb25lIHRv
IGxhdW5jaCwgYnV0IGluc3VmZmljaWVudAo+ID4gcmVzb3VyY2UgZm9yIGJvdGggdG8gbGF1bmNo
LCB0aGUgd2lubmVyIG9mIHRoZSByYWNlIGlzIGluZGV0ZXJtaW5hdGUsCj4gPiBhbmQgb25lIG9y
IGJvdGggbGF1bmNoZXMgd2lsbCBmYWlsLCBwb3NzaWJseSBhZnRlciBvbmUgb3IgYm90aCAKPiA+
IGRvbWFpbiBjcmVhdGlvbiB0aHJlYWRzIGhhdmUgYmVlbiB3b3JraW5nIGZvciBzZXZlcmFsIG1p
bnV0ZXMuCj4gPiBUaGlzIGlzIGEgY2xhc3NpYyAiVE9DVE9VIiAodGltZS1vZi1jaGVjay10aW1l
LW9mLXVzZSkgcmFjZS4KPiA+IElmIGEgY3VzdG9tZXIgaXMgdW5oYXBweSB3YWl0aW5nIHNldmVy
YWwgbWludXRlcyB0byBsYXVuY2gKPiA+IGEgZG9tYWluLCB0aGV5IHdpbGwgYmUgZXZlbiBtb3Jl
IHVuaGFwcHkgd2FpdGluZyBmb3Igc2V2ZXJhbAo+ID4gbWludXRlcyB0byBiZSB0b2xkIHRoYXQg
b25lIG9yIGJvdGggb2YgdGhlIGxhdW5jaGVzIGhhcyBmYWlsZWQuCj4gPiBNdWx0aS1taW51dGUg
ZmFpbHVyZSBpcyBldmVuIG1vcmUgdW5hY2NlcHRhYmxlIGZvciBhbiBhdXRvbWF0ZWQKPiA+IGFn
ZW50IHRyeWluZyB0bywgZm9yIGV4YW1wbGUsIGV2YWN1YXRlIGEgbWFjaGluZSB0aGF0IHRoZQo+
ID4gZGF0YSBjZW50ZXIgYWRtaW5pc3RyYXRvciBuZWVkcyB0byBwb3dlcmN5Y2xlLgo+ID4gCj4g
PiBbSUlHVDogUGxlYXNlIGhvbGQgeW91ciBvYmplY3Rpb25zIGZvciBhIG1vbWVudC4uLiB0aGUg
cGFyYWdyYXBoCj4gPiBhYm92ZSBpcyBkaXNjdXNzaW5nIHRoZSBzaW1wbGUgc29sdXRpb24gb2Yg
cmVtb3ZpbmcgdGhlIHNlcmlhbGl6YXRpb247Cj4gPiB5b3VyIHN1Z2dlc3RlZCBzb2x1dGlvbiB3
aWxsIGJlIGRpc2N1c3NlZCBzb29uLl0KPiA+IAo+ID4gTmV4dCwgbGV0J3MgY29uc2lkZXIgdGhl
IHBvc3NpYmlsaXR5IG9mIGNoYW5naW5nIHRoZSBoZWFwbG9jawo+ID4gc3RyYXRlZ3kgaW4gdGhl
IGh5cGVydmlzb3Igc28gdGhhdCB0aGUgbG9jayBpcyBoZWxkIG5vdAo+ID4gZm9yIG9uZSBzbGFi
IGJ1dCBmb3IgdGhlIGVudGlyZSByZXF1ZXN0IG9mIE4gc2xhYnMuICBBcyB3aXRoCj4gPiBhbnkg
Y29yZSBoeXBlcnZpc29yIGxvY2ssIGhvbGRpbmcgdGhlIGhlYXBsb2NrIGZvciBhICJsb25nIHRp
bWUiCj4gPiBpcyB1bmFjY2VwdGFibGUuICBUbyBhIGh5cGVydmlzb3IsIHNldmVyYWwgbWludXRl
cyBpcyBhbiBldGVybml0eS4KPiA+IEFuZCwgaW4gYW55IGNhc2UsIGJ5IHNlcmlhbGl6aW5nIGRv
bWFpbiBjcmVhdGlvbiBpbiB0aGUgaHlwZXJ2aXNvciwKPiA+IHdlIGhhdmUgcmVhbGx5IG9ubHkg
bW92ZWQgdGhlIHByb2JsZW0gZnJvbSB0aGUgdG9vbHN0YWNrIGludG8KPiA+IHRoZSBoeXBlcnZp
c29yLCBub3Qgc29sdmVkIHRoZSBwcm9ibGVtLgo+ID4gCj4gPiBbSUlHVF0gQXJlIHdlIGluIGFn
cmVlbWVudCB0aGF0IHRoZXNlIHNpbXBsZSBzb2x1dGlvbnMgY2FuIGJlCj4gPiBzYWZlbHkgcnVs
ZWQgb3V0Pwo+ID4gCj4gPiBDQVBBQ0lUWSBBTExPQ0FUSU9OIFZTIFJBTSBBTExPQ0FUSU9OCj4g
PiAKPiA+IExvb2tpbmcgZm9yIGEgY3JlYXRpdmUgc29sdXRpb24sIG9uZSBtYXkgcmVhbGl6ZSB0
aGF0IGl0IGlzIHRoZQo+ID4gcGFnZSBhbGxvY2F0aW9uIC0tIGVzcGVjaWFsbHkgaW4gbGFyZ2Ug
cXVhbnRpdGllcyAtLSB0aGF0IGlzIHZlcnkKPiA+IHRpbWUtY29uc3VtaW5nLiAgQnV0LCB0aGlu
a2luZyBvdXRzaWRlIG9mIHRoZSBib3gsIGl0IGlzIG5vdAo+ID4gdGhlIGFjdHVhbCBwYWdlcyBv
ZiBSQU0gdGhhdCB3ZSBhcmUgcmFjaW5nIG9uLCBidXQgdGhlIHF1YW50aXR5IG9mIHBhZ2VzIHJl
cXVpcmVkIHRvIGxhdW5jaCBhIGRvbWFpbiEgIElmIHdlIGluc3RlYWQgaGF2ZSBhIHdheSB0bwo+
ID4gImNsYWltIiBhIHF1YW50aXR5IG9mIHBhZ2VzIGNoZWFwbHkgbm93IGFuZCB0aGVuIGFsbG9j
YXRlIHRoZSBhY3R1YWwKPiA+IHBoeXNpY2FsIFJBTSBwYWdlcyBsYXRlciwgd2UgaGF2ZSBjaGFu
Z2VkIHRoZSByYWNlIHRvIHJlcXVpcmUgb25seSBzZXJpYWxpemF0aW9uIG9mIHRoZSBjbGFpbWlu
ZyBwcm9jZXNzISAgSW4gb3RoZXIgd29yZHMsIGlmIHNvbWUgZW50aXR5Cj4gPiBrbm93cyB0aGUg
bnVtYmVyIG9mIHBhZ2VzIGF2YWlsYWJsZSBpbiB0aGUgc3lzdGVtLCBhbmQgY2FuICJjbGFpbSIK
PiA+IE4gcGFnZXMgZm9yIHRoZSBiZW5lZml0IG9mIGEgZG9tYWluIGJlaW5nIGxhdW5jaGVkLCB0
aGUgc3VjY2Vzc2Z1bCBsYXVuY2ggb2YgdGhlIGRvbWFpbiBjYW4gYmUgZW5zdXJlZC4gIFdlbGwu
Li4gdGhlIGRvbWFpbiBsYXVuY2ggbWF5Cj4gPiBzdGlsbCBmYWlsIGZvciBhbiB1bnJlbGF0ZWQg
cmVhc29uLCBidXQgbm90IGR1ZSB0byBhIG1lbW9yeSBUT0NUT1UKPiA+IHJhY2UuICBCdXQsIGlu
IHRoaXMgY2FzZSwgaWYgdGhlIGNvc3QgKGluIHRpbWUpIG9mIHRoZSBjbGFpbWluZwo+ID4gcHJv
Y2VzcyBpcyB2ZXJ5IHNtYWxsIGNvbXBhcmVkIHRvIHRoZSBjb3N0IG9mIHRoZSBkb21haW4gbGF1
bmNoLAo+ID4gd2UgaGF2ZSBzb2x2ZWQgdGhlIG1lbW9yeSBUT0NUT1UgcmFjZSB3aXRoIGhhcmRs
eSBhbnkgZGVsYXkgYWRkZWQKPiA+IHRvIGEgbm9uLW1lbW9yeS1yZWxhdGVkIGZhaWx1cmUgdGhh
dCB3b3VsZCBoYXZlIG9jY3VycmVkIGFueXdheS4KPiA+IAo+ID4gVGhpcyAiY2xhaW0iIHNvdW5k
cyBwcm9taXNpbmcuICBCdXQgd2UgaGF2ZSBtYWRlIGFuIGFzc3VtcHRpb24gdGhhdAo+ID4gYW4g
ImVudGl0eSIgaGFzIGNlcnRhaW4ga25vd2xlZGdlLiAgSW4gdGhlIFhlbiBzeXN0ZW0sIHRoYXQg
ZW50aXR5Cj4gPiBtdXN0IGJlIGVpdGhlciB0aGUgdG9vbHN0YWNrIG9yIHRoZSBoeXBlcnZpc29y
LiAgT3IsIGluIHRoZSBPcmFjbGUKPiA+IGVudmlyb25tZW50LCBhbiAiYWdlbnQiLi4uIGJ1dCBh
biBhZ2VudCBhbmQgYSB0b29sc3RhY2sgYXJlIHNpbWlsYXIKPiA+IGVub3VnaCBmb3Igb3VyIHB1
cnBvc2VzIHRoYXQgd2Ugd2lsbCBqdXN0IHVzZSB0aGUgbW9yZSBicm9hZGx5LXVzZWQKPiA+IHRl
cm0gInRvb2xzdGFjayIuICBJbiB1c2luZyB0aGlzIHRlcm0sIGhvd2V2ZXIsIGl0J3MgaW1wb3J0
YW50IHRvCj4gPiByZW1lbWJlciBpdCBpcyBuZWNlc3NhcnkgdG8gY29uc2lkZXIgdGhlIGV4aXN0
ZW5jZSBvZiBtdWx0aXBsZQo+ID4gdGhyZWFkcyB3aXRoaW4gdGhpcyB0b29sc3RhY2suCj4gPiAK
PiA+IE5vdyBJIHF1b3RlIElhbiBKYWNrc29uOiAiSXQgaXMgYSBrZXkgZGVzaWduIHByaW5jaXBs
ZSBvZiBhIHN5c3RlbQo+ID4gbGlrZSBYZW4gdGhhdCB0aGUgaHlwZXJ2aXNvciBzaG91bGQgcHJv
dmlkZSBvbmx5IHRob3NlIGZhY2lsaXRpZXMKPiA+IHdoaWNoIGFyZSBzdHJpY3RseSBuZWNlc3Nh
cnkuICBBbnkgZnVuY3Rpb25hbGl0eSB3aGljaCBjYW4gYmUKPiA+IHJlYXNvbmFibHkgcHJvdmlk
ZWQgb3V0c2lkZSB0aGUgaHlwZXJ2aXNvciBzaG91bGQgYmUgZXhjbHVkZWQKPiA+IGZyb20gaXQu
Igo+ID4gCj4gPiBTbyBsZXQncyBleGFtaW5lIHRoZSB0b29sc3RhY2sgZmlyc3QuCj4gPiAKPiA+
IFtJSUdUXSBTdGlsbCBhbGwgb24gdGhlIHNhbWUgcGFnZSAocHVuIGludGVuZGVkKT8KPiA+IAo+
ID4gVE9PTFNUQUNLLUJBU0VEIENBUEFDSVRZIEFMTE9DQVRJT04KPiA+IAo+ID4gRG9lcyB0aGUg
dG9vbHN0YWNrIGtub3cgaG93IG1hbnkgcGh5c2ljYWwgcGFnZXMgb2YgUkFNIGFyZSBhdmFpbGFi
bGU/Cj4gPiBZZXMsIGl0IGNhbiB1c2UgYSBoeXBlcmNhbGwgdG8gZmluZCBvdXQgdGhpcyBpbmZv
cm1hdGlvbiBhZnRlciBYZW4gYW5kCj4gPiBkb20wIGxhdW5jaCwgYnV0IGJlZm9yZSBpdCBsYXVu
Y2hlcyBhbnkgZG9tYWluLiAgVGhlbiBpZiBpdCBzdWJ0cmFjdHMKPiA+IHRoZSBudW1iZXIgb2Yg
cGFnZXMgdXNlZCB3aGVuIGl0IGxhdW5jaGVzIGEgZG9tYWluIGFuZCBpcyBhd2FyZSBvZgo+ID4g
d2hlbiBhbnkgZG9tYWluIGRpZXMsIGFuZCBhZGRzIHRoZW0gYmFjaywgdGhlIHRvb2xzdGFjayBo
YXMgYSBwcmV0dHkKPiA+IGdvb2QgZXN0aW1hdGUuICBJbiBhY3R1YWxpdHksIHRoZSB0b29sc3Rh
Y2sgZG9lc24ndCBfcmVhbGx5XyBrbm93IHRoZQo+ID4gZXhhY3QgbnVtYmVyIG9mIHBhZ2VzIHVz
ZWQgd2hlbiBhIGRvbWFpbiBpcyBsYXVuY2hlZCwgYnV0IHRoZXJlCj4gPiBpcyBhIHBvb3JseS1k
b2N1bWVudGVkICJmdXp6IGZhY3RvciIuLi4gdGhlIHRvb2xzdGFjayBrbm93cyB0aGUKPiA+IG51
bWJlciBvZiBwYWdlcyB3aXRoaW4gYSBmZXcgbWVnYWJ5dGVzLCB3aGljaCBpcyBwcm9iYWJseSBj
bG9zZSBlbm91Z2guCj4gPiAKPiA+IFRoaXMgaXMgYSBmYWlybHkgZ29vZCBkZXNjcmlwdGlvbiBv
ZiBob3cgdGhlIHRvb2xzdGFjayB3b3JrcyB0b2RheQo+ID4gYW5kIHRoZSBhY2NvdW50aW5nIHNl
ZW1zIHNpbXBsZSBlbm91Z2gsIHNvIGRvZXMgdG9vbHN0YWNrLWJhc2VkCj4gPiBjYXBhY2l0eSBh
bGxvY2F0aW9uIHNvbHZlIG91ciBvcmlnaW5hbCBwcm9ibGVtPyAgSXQgd291bGQgc2VlbSBzby4K
PiA+IEV2ZW4gaWYgdGhlcmUgYXJlIG11bHRpcGxlIHRocmVhZHMsIHRoZSBhY2NvdW50aW5nIC0t
IG5vdCB0aGUgZXh0ZW5kZWQKPiA+IHNlcXVlbmNlIG9mIHBhZ2UgYWxsb2NhdGlvbiBmb3IgdGhl
IGRvbWFpbiBjcmVhdGlvbiAtLSBjYW4gYmUKPiA+IHNlcmlhbGl6ZWQgYnkgYSBsb2NrIGluIHRo
ZSB0b29sc3RhY2suICBCdXQgbm90ZSBjYXJlZnVsbHksIGVpdGhlcgo+ID4gdGhlIHRvb2xzdGFj
ayBhbmQgdGhlIGh5cGVydmlzb3IgbXVzdCBhbHdheXMgYmUgaW4gc3luYyBvbiB0aGUKPiA+IG51
bWJlciBvZiBhdmFpbGFibGUgcGFnZXMgKHdpdGhpbiBhbiBhY2NlcHRhYmxlIG1hcmdpbiBvZiBl
cnJvcik7Cj4gPiBvciBhbnkgcXVlcnkgdG8gdGhlIGh5cGVydmlzb3IgX2FuZF8gdGhlIHRvb2xz
dGFjay1iYXNlZCBjbGFpbSBtdXN0Cj4gPiBiZSBwYWlyZWQgYXRvbWljYWxseSwgaS5lLiB0aGUg
dG9vbHN0YWNrIGxvY2sgbXVzdCBiZSBoZWxkIGFjcm9zcwo+ID4gYm90aC4gIE90aGVyd2lzZSB3
ZSBhZ2FpbiBoYXZlIGFub3RoZXIgVE9DVE9VIHJhY2UuIEludGVyZXN0aW5nLAo+ID4gYnV0IHBy
b2JhYmx5IG5vdCByZWFsbHkgYSBwcm9ibGVtLgo+ID4gCj4gPiBXYWl0LCBpc24ndCBpdCBwb3Nz
aWJsZSBmb3IgdGhlIHRvb2xzdGFjayB0byBkeW5hbWljYWxseSBjaGFuZ2UgdGhlCj4gPiBudW1i
ZXIgb2YgcGFnZXMgYXNzaWduZWQgdG8gYSBkb21haW4/ICBZZXMsIHRoaXMgaXMgb2Z0ZW4gY2Fs
bGVkCj4gPiBiYWxsb29uaW5nIGFuZCB0aGUgdG9vbHN0YWNrIGNhbiBkbyB0aGlzIHZpYSBhIGh5
cGVyY2FsbC4gIEJ1dAo+IAo+ID4gdGhhdCdzIHN0aWxsIE9LIGJlY2F1c2UgZWFjaCBjYWxsIGdv
ZXMgdGhyb3VnaCB0aGUgdG9vbHN0YWNrIGFuZAo+ID4gaXQgc2ltcGx5IG5lZWRzIHRvIGFkZCBt
b3JlIGFjY291bnRpbmcgZm9yIHdoZW4gaXQgdXNlcyBiYWxsb29uaW5nCj4gPiB0byBhZGp1c3Qg
dGhlIGRvbWFpbidzIG1lbW9yeSBmb290cHJpbnQuICBTbyB3ZSBhcmUgc3RpbGwgT0suCj4gPiAK
PiA+IEJ1dCB3YWl0IGFnYWluLi4uIHRoYXQgYnJpbmdzIHVwIGFuIGludGVyZXN0aW5nIHBvaW50
LiAgQXJlIHRoZXJlCj4gPiBhbnkgc2lnbmlmaWNhbnQgYWxsb2NhdGlvbnMgdGhhdCBhcmUgZG9u
ZSBpbiB0aGUgaHlwZXJ2aXNvciB3aXRob3V0Cj4gPiB0aGUga25vd2xlZGdlIGFuZC9vciBwZXJt
aXNzaW9uIG9mIHRoZSB0b29sc3RhY2s/ICBJZiBzbywgdGhlCj4gPiB0b29sc3RhY2sgbWF5IGJl
IG1pc3NpbmcgaW1wb3J0YW50IGluZm9ybWF0aW9uLgo+ID4gCj4gPiBTbyBhcmUgdGhlcmUgYW55
IHN1Y2ggYWxsb2NhdGlvbnM/ICBXZWxsLi4uIHllcy4gVGhlcmUgYXJlIGEgZmV3Lgo+ID4gTGV0
J3MgdGFrZSBhIG1vbWVudCB0byBlbnVtZXJhdGUgdGhlbToKPiA+IAo+ID4gQSkgSW4gTGludXgs
IGEgcHJpdmlsZWdlZCB1c2VyIGNhbiB3cml0ZSB0byBhIHN5c2ZzIGZpbGUgd2hpY2ggd3JpdGVz
Cj4gPiB0byB0aGUgYmFsbG9vbiBkcml2ZXIgd2hpY2ggbWFrZXMgaHlwZXJjYWxscyBmcm9tIHRo
ZSBndWVzdCBrZXJuZWwgdG8KPiAKPiBBIGZhaXJseSBiaXphcnJlIGxpbWl0YXRpb24gb2YgYSBi
YWxsb29uLWJhc2VkIGFwcHJvYWNoIHRvIG1lbW9yeSBtYW5hZ2VtZW50LiBXaHkgb24gZWFydGgg
c2hvdWxkIHRoZSBndWVzdCBiZSBhbGxvd2VkIHRvIGNoYW5nZSB0aGUgc2l6ZSBvZiBpdHMgYmFs
bG9vbiwgYW5kIHRoZXJlZm9yZSBpdHMgZm9vdHByaW50IG9uIHRoZSBob3N0LiBUaGlzIG1heSBi
ZSBqdXN0aWZpZWQgd2l0aCBhcmd1bWVudHMgcGVydGFpbmluZyB0byB0aGUgc3RhYmlsaXR5IG9m
IHRoZSBpbi1ndWVzdCB3b3JrbG9hZC4gV2hhdCB0aGV5IHJlYWxseSByZXZlYWwgYXJlIGxpbWl0
YXRpb25zIG9mIGJhbGxvb25pbmcuIEJ1dCB0aGUgaW5hZGVxdWFjeSBvZiB0aGUgYmFsbG9vbiBp
biBpdHNlbGYgZG9lc24ndCBhdXRvbWF0aWNhbGx5IHRyYW5zbGF0ZSBpbnRvIGp1c3RpZnlpbmcg
dGhlIG5lZWQgZm9yIGEgbmV3IGh5cGVyIGNhbGwuCgpXaHkgaXMgdGhpcyBhIGxpbWl0YXRpb24/
IFdoeSBzaG91bGRuJ3QgdGhlIGd1ZXN0IHRoZSBhbGxvd2VkIHRvIGNoYW5nZQppdHMgbWVtb3J5
IHVzYWdlPyBJdCBjYW4gZ28gdXAgYW5kIGRvd24gYXMgaXQgc2VlcyBmaXQuCkFuZCBpZiBpdCBn
b2VzIGRvd24gYW5kIGl0IGdldHMgYmV0dGVyIHBlcmZvcm1hbmNlIC0gd2VsbCwgd2h5IHNob3Vs
ZG4ndAppdCBkbyBpdD8KCkkgY29uY3VyIGl0IGlzIG9kZCAtIGJ1dCBpdCBoYXMgYmVlbiBsaWtl
IHRoYXQgZm9yIGRlY2FkZXMuCgoKPiAKPiA+IHRoZSBoeXBlcnZpc29yLCB3aGljaCBhZGp1c3Rz
IHRoZSBkb21haW4gbWVtb3J5IGZvb3RwcmludCwgd2hpY2ggY2hhbmdlcyB0aGUgbnVtYmVyIG9m
IGZyZWUgcGFnZXMgX3dpdGhvdXRfIHRoZSB0b29sc3RhY2sga25vd2xlZGdlLgo+ID4gVGhlIHRv
b2xzdGFjayBjb250cm9scyBjb25zdHJhaW50cyAoZXNzZW50aWFsbHkgYSBtaW5pbXVtIGFuZCBt
YXhpbXVtKQo+ID4gd2hpY2ggdGhlIGh5cGVydmlzb3IgZW5mb3JjZXMuICBUaGUgdG9vbHN0YWNr
IGNhbiBlbnN1cmUgdGhhdCB0aGUKPiA+IG1pbmltdW0gYW5kIG1heGltdW0gYXJlIGlkZW50aWNh
bCB0byBlc3NlbnRpYWxseSBkaXNhbGxvdyBMaW51eCBmcm9tCj4gPiB1c2luZyB0aGlzIGZ1bmN0
aW9uYWxpdHkuICBJbmRlZWQsIHRoaXMgaXMgcHJlY2lzZWx5IHdoYXQgQ2l0cml4J3MKPiA+IER5
bmFtaWMgTWVtb3J5IENvbnRyb2xsZXIgKERNQykgZG9lczogZW5mb3JjZSBtaW49PW1heCBzbyB0
aGF0IERNQyBhbHdheXMgaGFzIGNvbXBsZXRlIGNvbnRyb2wgYW5kLCBzbywga25vd2xlZGdlIG9m
IGFueSBkb21haW4gbWVtb3J5Cj4gPiBmb290cHJpbnQgY2hhbmdlcy4gIEJ1dCBETUMgaXMgbm90
IHByZXNjcmliZWQgYnkgdGhlIHRvb2xzdGFjaywKPiAKPiBOZWl0aGVyIGlzIGVuZm9yY2luZyBt
aW49PW1heC4gVGhpcyB3YXMgbXkgYXJndW1lbnQgd2hlbiBwcmV2aW91c2x5IGNvbW1lbnRpbmcg
b24gdGhpcyB0aHJlYWQuIFRoZSBmYWN0IHRoYXQgeW91IGhhdmUgZW5mb3JjZW1lbnQgb2YgYSBt
YXhpbXVtIGRvbWFpbiBhbGxvY2F0aW9uIGdpdmVzIHlvdSBhbiBleGNlbGxlbnQgdG9vbCB0byBr
ZWVwIGEgZG9tYWluJ3MgdW5zdXBlcnZpc2VkIGdyb3d0aCBhdCBiYXkuIFRoZSB0b29sc3RhY2sg
Y2FuIGNob29zZSBob3cgZmluZS1ncmFpbmVkLCBob3cgb2Z0ZW4gdG8gYmUgYWxlcnRlZCBhbmQg
c3RhbGwgdGhlIGRvbWFpbi4KClRoZXJlIGlzIGEgZG93bi1jYWxsIChzbyBldmVudHMpIHRvIHRo
ZSB0b29sLXN0YWNrIGZyb20gdGhlIGh5cGVydmlzb3Igd2hlbgp0aGUgZ3Vlc3QgdHJpZXMgdG8g
YmFsbG9vbiBpbi9vdXQ/IFNvIHRoZSBuZWVkIGZvciB0aGlzIHByb2JsZW0gYXJvc2UKYnV0IHRo
ZSBtZWNoYW5pc20gdG8gZGVhbCB3aXRoIGl0IGhhcyBiZWVuIHNoaWZ0ZWQgdG8gdGhlIHVzZXIt
c3BhY2UKdGhlbj8gV2hhdCB0byBkbyB3aGVuIHRoZSBndWVzdCBkb2VzIHRoaXMgaW4vb3V0IGJh
bGxvb24gYXQgZnJlcQppbnRlcnZhbHM/CgpJIGFtIG1pc3NpbmcgYWN0dWFsbHkgdGhlIHJlYXNv
bmluZyBiZWhpbmQgd2FudGluZyB0byBzdGFsbCB0aGUgZG9tYWluPwpJcyB0aGF0IHRvIGNvbXBy
ZXNzL3N3YXAgdGhlIHBhZ2VzIHRoYXQgdGhlIGd1ZXN0IHJlcXVlc3RzPyBNZWFuaW5nCmFuIHVz
ZXItc3BhY2UgZGFlbW9uIHRoYXQgZG9lcyAidGhpbmdzIiBhbmQgaGFzIG93bmVyc2hpcApvZiB0
aGUgcGFnZXM/Cgo+IAo+ID4gYW5kIHNvbWUgcmVhbCBPcmFjbGUgTGludXggY3VzdG9tZXJzIHVz
ZSBhbmQgZGVwZW5kIG9uIHRoZSBmbGV4aWJpbGl0eQo+ID4gcHJvdmlkZWQgYnkgaW4tZ3Vlc3Qg
YmFsbG9vbmluZy4gICBTbyBndWVzdC1wcml2aWxlZ2VkLXVzZXItZHJpdmVuLQo+ID4gYmFsbG9v
bmluZyBpcyBhIHBvdGVudGlhbCBpc3N1ZSBmb3IgdG9vbHN0YWNrLWJhc2VkIGNhcGFjaXR5IGFs
bG9jYXRpb24uCj4gPiAKPiA+IFtJSUdUOiBUaGlzIGlzIHdoeSBJIGhhdmUgYnJvdWdodCB1cCBE
TUMgc2V2ZXJhbCB0aW1lcyBhbmQgaGF2ZQo+ID4gY2FsbGVkIHRoaXMgdGhlICJDaXRyaXggbW9k
ZWwsIi4uIEknbSBub3QgdHJ5aW5nIHRvIGJlIHNuaXBweQo+ID4gb3IgaW1wdWduIHlvdXIgbW9y
YWxzIGFzIG1haW50YWluZXJzLl0KPiA+IAo+ID4gQikgWGVuJ3MgcGFnZSBzaGFyaW5nIGZlYXR1
cmUgaGFzIHNsb3dseSBiZWVuIGNvbXBsZXRlZCBvdmVyIGEgbnVtYmVyCj4gPiBvZiByZWNlbnQg
WGVuIHJlbGVhc2VzLiAgSXQgdGFrZXMgYWR2YW50YWdlIG9mIHRoZSBmYWN0IHRoYXQgbWFueQo+
ID4gcGFnZXMgb2Z0ZW4gY29udGFpbiBpZGVudGljYWwgZGF0YTsgdGhlIGh5cGVydmlzb3IgbWVy
Z2VzIHRoZW0gdG8gc2F2ZQo+IAo+IEdyZWF0IGNhcmUgaGFzIGJlZW4gdGFrZW4gZm9yIHRoaXMg
c3RhdGVtZW50IHRvIG5vdCBiZSBleGFjdGx5IHRydWUuIFRoZSBoeXBlcnZpc29yIGRpc2NhcmRz
IG9uZSBvZiB0d28gcGFnZXMgdGhhdCB0aGUgdG9vbHN0YWNrIHRlbGxzIGl0IHRvIChhbmQgcGF0
Y2hlcyB0aGUgcGh5c21hcCBvZiB0aGUgVk0gcHJldmlvdXNseSBwb2ludGluZyB0byB0aGUgZGlz
Y2FyZCBwYWdlKS4gSXQgZG9lc24ndCBtZXJnZSwgbm9yIGRvZXMgaXQgbG9vayBpbnRvIGNvbnRl
bnRzLiBUaGUgaHlwZXJ2aXNvciBkb2Vzbid0IGNhcmUgYWJvdXQgdGhlIHBhZ2UgY29udGVudHMu
IFRoaXMgaXMgZGVsaWJlcmF0ZSwgc28gYXMgdG8gYXZvaWQgc3B1cmlvdXMgY2xhaW1zIG9mICJ5
b3UgYXJlIHVzaW5nIHRlY2huaXF1ZSBYISIKPiAKCklzIHRoZSB0b29sc3RhY2sgKG9yIGEgZGFl
bW9uIGluIHVzZXJzcGFjZSkgZG9pbmcgdGhpcz8gSSB3b3VsZApoYXZlIHRob3VnaHQgdGhhdCB0
aGVyZSB3b3VsZCBiZSBzb21lIG9wdGltaXphdGlvbiB0byBkbyB0aGlzCnNvbWV3aGVyZT8KCj4g
PiBwaHlzaWNhbCBSQU0uICBXaGVuIGFueSAic2hhcmVkIiBwYWdlIGlzIHdyaXR0ZW4sIHRoZSBo
eXBlcnZpc29yCj4gPiAic3BsaXRzIiB0aGUgcGFnZSAoYWthLCBjb3B5LW9uLXdyaXRlKSBieSBh
bGxvY2F0aW5nIGEgbmV3IHBoeXNpY2FsCj4gPiBwYWdlLiAgVGhlcmUgaXMgYSBsb25nIGhpc3Rv
cnkgb2YgdGhpcyBmZWF0dXJlIGluIG90aGVyIHZpcnR1YWxpemF0aW9uCj4gPiBwcm9kdWN0cyBh
bmQgaXQgaXMga25vd24gdG8gYmUgcG9zc2libGUgdGhhdCwgdW5kZXIgbWFueSBjaXJjdW1zdGFu
Y2VzLCB0aG91c2FuZHMgb2Ygc3BsaXRzIG1heSBvY2N1ciBpbiBhbnkgZnJhY3Rpb24gb2YgYSBz
ZWNvbmQuICBUaGUKPiA+IGh5cGVydmlzb3IgZG9lcyBub3Qgbm90aWZ5IG9yIGFzayBwZXJtaXNz
aW9uIG9mIHRoZSB0b29sc3RhY2suCj4gPiBTbywgcGFnZS1zcGxpdHRpbmcgaXMgYW4gaXNzdWUg
Zm9yIHRvb2xzdGFjay1iYXNlZCBjYXBhY2l0eQo+ID4gYWxsb2NhdGlvbiwgYXQgbGVhc3QgYXMg
Y3VycmVudGx5IGNvZGVkIGluIFhlbi4KPiA+IAo+ID4gW0FuZHJlOiBQbGVhc2UgaG9sZCB5b3Vy
IG9iamVjdGlvbiBoZXJlIHVudGlsIHlvdSByZWFkIGZ1cnRoZXIuXQo+IAo+IE5hbWUgaXMgQW5k
cmVzLiBBbmQgcGxlYXNlIGNjIG1lIGlmIHlvdSdsbCBiZSBhZGRyZXNzaW5nIG1lIGRpcmVjdGx5
!
> 
> Note that I don't disagree with your previous statement in itself. Although "page-splitting" is fairly unique terminology, and confusing (at least to me). CoW works.

<nods>
> 
> > 
> > C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
> > toolstack for over three years.  It depends on an in-guest-kernel
> > adaptive technique to constantly adjust the domain memory footprint as
> > well as hooks in the in-guest-kernel to move data to and from the
> > hypervisor.  While the data is in the hypervisor's care, interesting
> > memory-load balancing between guests is done, including optional
> > compression and deduplication.  All of this has been in Xen since 2009
> > and has been awaiting changes in the (guest-side) Linux kernel. Those
> > changes are now merged into the mainstream kernel and are fully
> > functional in shipping distros.
> > 
> > While a complete description of tmem's guest<->hypervisor interaction
> > is beyond the scope of this document, it is important to understand
> > that any tmem-enabled guest kernel may unpredictably request thousands
> > or even millions of pages directly via hypercalls from the hypervisor in a fraction of a second with absolutely no interaction with the toolstack.  Further, the guest-side hypercalls that allocate pages
> > via the hypervisor are done in "atomic" code deep in the Linux mm
> > subsystem.
> > 
> > Indeed, if one truly understands tmem, it should become clear that
> > tmem is fundamentally incompatible with toolstack-based capacity
> > allocation. But let's stop discussing tmem for now and move on.
> 
> You have not discussed tmem pool thaw and freeze in this proposal.

Oooh, you know about it :-) Dan didn't want to go too verbose on
people. It is a bit of a rathole - and this hypercall would
allow us to deprecate said freeze/thaw calls.

> 
> > 
> > OK.  So with existing code both in Xen and Linux guests, there are
> > three challenges to toolstack-based capacity allocation.  We'd
> > really still like to do capacity allocation in the toolstack.  Can
> > something be done in the toolstack to "fix" these three cases?
> > 
> > Possibly.  But let's first look at hypervisor-based capacity
> > allocation: the proposed "XENMEM_claim_pages" hypercall.
> > 
> > HYPERVISOR-BASED CAPACITY ALLOCATION
> > 
> > The posted patch for the claim hypercall is quite simple, but let's
> > look at it in detail.  The claim hypercall is actually a subop
> > of an existing hypercall.  After checking parameters for validity,
> > a new function is called in the core Xen memory management code.
> > This function takes the hypervisor heaplock, checks for a few
> > special cases, does some arithmetic to ensure a valid claim, stakes
> > the claim, releases the hypervisor heaplock, and then returns.  To
> > review from earlier, the hypervisor heaplock protects _all_ page/slab
> > allocations, so we can be absolutely certain that there are no other
> > page allocation races.  This new function is about 35 lines of code,
> > not counting comments.
> > 
> > The patch includes two other significant changes to the hypervisor:
> > First, when any adjustment to a domain's memory footprint is made
> > (either through a toolstack-aware hypercall or one of the three
> > toolstack-unaware methods described above), the heaplock is
> > taken, arithmetic is done, and the heaplock is released.  This
> > is 12 lines of code.  Second, when any memory is allocated within
> > Xen, a check must be made (with the heaplock already held) to
> > determine if, given a previous claim, the domain has exceeded
> > its upper bound, maxmem.  This code is a single conditional test.
> > 
> > With some declarations, but not counting the copious comments,
> > all told, the new code provided by the patch is well under 100 lines.
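To make the accounting just described concrete, here is a toy model of it in Python (purely illustrative - the real code is C inside Xen): a claim is staked under the same lock that protects the allocator, and every later allocation is checked against the outstanding claims while that lock is held.

```python
# Toy model (illustrative, not the actual Xen code) of claim accounting:
# a claim is staked under the allocator lock, and each allocation is
# checked against outstanding claims while that lock is held.
import threading

class Heap:
    def __init__(self, total_free):
        self.lock = threading.Lock()   # stands in for the heaplock
        self.free = total_free
        self.claims = {}               # domain -> pages still claimed

    def claim(self, dom, pages):
        """Stake a claim; fails if unclaimed free pages are insufficient."""
        with self.lock:
            if pages > self.free - sum(self.claims.values()):
                return False
            self.claims[dom] = pages
            return True

    def alloc(self, dom, pages):
        """Allocate pages; a domain's own claim shrinks as it allocates."""
        with self.lock:
            claimed = self.claims.get(dom, 0)
            # pages beyond dom's own claim must fit outside all claims
            extra = max(0, pages - claimed)
            if extra > self.free - sum(self.claims.values()):
                return False
            self.free -= pages
            self.claims[dom] = max(0, claimed - pages)
            return True
```

The point of the model is the single conditional in alloc(): once a claim is staked, a claiming domain's allocations cannot be starved by other domains' allocations, because those are rejected when they would eat into claimed pages.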
> > 
> > What about the toolstack side?  First, it's important to note that
> > the toolstack changes are entirely optional.  If any toolstack
> > wishes either to not fix the original problem, or avoid toolstack-
> > unaware allocation completely by ignoring the functionality provided
> > by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
> > not use the new hypercall.
> 
> You are ruling out any other possibility here. In particular, but not limited to, use of max_pages.

The one max_pages check that comes to my mind is the one that Xapi
uses. That is, it has a daemon that sets the max_pages of all the
guests at some value so that it can squeeze in as many guests as
possible. It also balloons pages out of a guest to make space if
needed to launch a new one. The heuristic of how many pages or the ratio
of max/min looks to be proportional (so to make space for 1GB
for a guest, and say we have 10 guests, we will subtract
101MB from each guest - the extra 1MB is for extra overhead).
This depends on one hypercall that the 'xl' or 'xm' toolstacks do not
use - which sets the max_pages.
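In numbers, that proportional squeeze works out like this (a toy sketch; the function name and the flat 1MB-per-guest slack are illustrative, not Xapi's actual code):

```python
# Toy sketch of the proportional squeeze heuristic described above:
# to make room for a new guest, shave an equal share (plus ~1MB of
# slack per guest for overhead) off every running guest's target.
# Names and the slack value are illustrative, not Xapi's actual code.

def squeeze_targets(targets_mb, need_mb, slack_mb=1):
    """Return new ballooning targets that free at least need_mb."""
    per_guest = need_mb // len(targets_mb) + slack_mb
    return [t - per_guest for t in targets_mb]

# Making space for a 1000MB guest across 10 guests takes 101MB each:
new_targets = squeeze_targets([4096] * 10, need_mb=1000)
```

With 10 guests at a 4096MB target each, every target drops to 3995MB, freeing 1010MB in total - the 101MB-per-guest figure from the paragraph above.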
That code makes certain assumptions - that the guest will not go up/down
in its ballooning once the toolstack has decreed how much
memory the guest should use. It also assumes that the operations
are semi-atomic - and to make them so as much as it can - it executes
these operations in serial.

This goes back to the problem statement - if we try to parallelize
this we run into the problem that the amount of memory we thought
was free is not true anymore. The start of this email has a good
description of some of the issues.

In essence, max_pages does work - _if_ one does these operations
in serial. We are trying to make this work in parallel and without
any failures - and one way to do that, which is quite simplistic,
is the claim hypercall. It sets up a 'stake' of the amount of
memory that the hypervisor should reserve. This way other
guest creations/ballooning do not infringe on the 'claimed' amount.

I believe that with this hypercall Xapi can be made to do its operations
in parallel as well.

> 
> >  Second, it's very relevant to note that the Oracle product uses a combination of a proprietary "manager"
> > which oversees many machines, and the older open-source xm/xend
> > toolstack, for which the current Xen toolstack maintainers are no
> > longer accepting patches.
> > 
> > The preface of the published patch does suggest, however, some
> > straightforward pseudo-code, as follows:
> > 
> > Current toolstack domain creation memory allocation code fragment:
> > 
> > 1. call populate_physmap repeatedly to achieve mem=N memory
> > 2. if any populate_physmap call fails, report -ENOMEM up the stack
> > 3. memory is held until domain dies or the toolstack decreases it
> > 
> > Proposed toolstack domain creation memory allocation code fragment
> > (new code marked with "+"):
> > 
> > +  call claim for mem=N amount of memory
> > +  if claim succeeds:
> > 1.  call populate_physmap repeatedly to achieve mem=N memory (failsafe)
> > +  else
> > 2.  report -ENOMEM up the stack
> > +  claim is held until mem=N is achieved or the domain dies or
> >    is forced to 0 by a second hypercall
> > 3. memory is held until domain dies or the toolstack decreases it
> > 
> > Reviewing the pseudo-code, one can readily see that the toolstack
> > changes required to implement the hypercall are quite small.
> > 
> > To complete this discussion, it has been pointed out that
> > the proposed hypercall doesn't solve the original problem
> > for certain classes of legacy domains... but also neither
> > does it make the problem worse.  It has also been pointed
> > out that the proposed patch is not (yet) NUMA-aware.
> > 
> > Now let's return to the earlier question:  There are three
> > challenges to toolstack-based capacity allocation, which are
> > all handled easily by in-hypervisor capacity allocation. But we'd
> > really still like to do capacity allocation in the toolstack.
> > Can something be done in the toolstack to "fix" these three cases?
> > 
> > The answer is, of course, certainly... anything can be done in
> > software.  So, recalling Ian Jackson's stated requirement:
> > 
> > "Any functionality which can be reasonably provided outside the
> >  hypervisor should be excluded from it."
> > 
> > we are now left to evaluate the subjective term "reasonably".
> > 
> > CAN TOOLSTACK-BASED CAPACITY ALLOCATION OVERCOME THE ISSUES?
> > 
> > In earlier discussion on this topic, when page-splitting was raised
> > as a concern, some of the authors of Xen's page-sharing feature
> > pointed out that a mechanism could be designed such that "batches"
> > of pages were pre-allocated by the toolstack and provided to the
> > hypervisor to be utilized as needed for page-splitting.  Should the
> > batch run dry, the hypervisor could stop the domain that was provoking
> > the page-split until the toolstack could be consulted and the toolstack, at its leisure, could request the hypervisor to refill
> > the batch, which then allows the page-split-causing domain to proceed.
> > 
> > But this batch page-allocation isn't implemented in Xen today.
> > 
> > Andres Lagar-Cavilla says "... this is because of shortcomings in the
> > [Xen] mm layer and its interaction with wait queues, documented
> > elsewhere."  In other words, this batching proposal requires
> > significant changes to the hypervisor, which I think we
> > all agreed we were trying to avoid.
> 
> This is a misunderstanding. There is no connection between the batching proposal and what I was referring to in the quote. Certainly I never advocated for pre-allocations.
> 
> The "significant changes to the hypervisor" statement is FUD. Everyone you've addressed on this email makes significant changes to the hypervisor, under the proviso that they are necessary/useful changes.
> 
> The interactions between the mm layer and wait queues need fixing, sooner or later, claim hypercall or not. But they are not a blocker, they are essentially a race that may trigger under certain circumstances. That is why they remain a low priority fix.
> 
> > 
> > [Note to Andre: I'm not objecting to the need for this functionality
> > for page-sharing to work with proprietary kernels and DMC; just
> 
> Let me nip this at the bud. I use page sharing and other techniques in an environment that doesn't use Citrix's DMC, nor is focused only on proprietary kernels...
> 
> > pointing out that it, too, is dependent on further hypervisor changes.]
> 
> … with 4.2 Xen. It is not perfect and has limitations that I am trying to fix. But our product ships, and page sharing works for anyone who would want to consume it, independently of further hypervisor changes.
> 

I believe what Dan is saying is that it is not enabled by default.
Meaning it does not get executed by /etc/init.d/xencommons and
as such it never gets run (or does it now?) - unless one knows
about it - or it is enabled by default in a product. But perhaps
we are both mistaken? Is it enabled by default now on xen-unstable?

> > 
> > Such an approach makes sense in the min==max model enforced by
> > DMC but, again, DMC is not prescribed by the toolstack.
> > 
> > Further, this waitqueue solution for page-splitting only awkwardly
> > works around in-guest ballooning (probably only with more hypervisor
> > changes, TBD) and would be useless for tmem.  [IIGT: Please argue
> > this last point only if you feel confident you truly understand how
> > tmem works.]
> 
> I will argue though that "waitqueue solution … ballooning" is not true. Ballooning has never needed nor does it suddenly need now hypervisor wait queues.

It is the use case of parallel starts that we are trying to solve.
Worse - we want to start 16GB or 32GB guests and those seem to take
quite a bit of time.

> 
> > 
> > So this as-yet-unimplemented solution only really solves a part
> > of the problem.
> 
> As per the previous comments, I don't see your characterization as accurate.
> 
> Andres
> > 
> > Are there any other possibilities proposed?  Ian Jackson has
> > suggested a somewhat different approach:
> > 
> > Let me quote Ian Jackson again:
> > 
> > "Of course if it is really desired to have each guest make its own
> > decisions and simply for them to somehow agree to divvy up the
> > available resources, then even so a new hypervisor mechanism is
> > not needed.  All that is needed is a way for those guests to
> > synchronise their accesses and updates to shared records of the
> > available and in-use memory."
> > 
> > Ian then goes on to say:  "I don't have a detailed counter-proposal
> > design of course..."
> > 
> > This proposal is certainly possible, but I think most would agree that
> > it would require some fairly massive changes in OS memory management
> > design that would run contrary to many years of computing history.
> > It requires guest OS's to cooperate with each other about basic memory
> > management decisions.  And to work for tmem, it would require
> > communication from atomic code in the kernel to user-space, then communication from user-space in a guest to user-space-in-domain0
> > and then (presumably... I don't have a design either) back again.
> > One must also wonder what the performance impact would be.
> > 
> > CONCLUDING REMARKS
> > 
> > "Any functionality which can be reasonably provided outside the
> >  hypervisor should be excluded from it."
> > 
> > I think this document has described a real customer problem and
> > a good solution that could be implemented either in the toolstack
> > or in the hypervisor.  Memory allocation in existing Xen functionality
> > has been shown to interfere significantly with the toolstack-based
> > solution and suggested partial solutions to those issues either
> > require even more hypervisor work, or are completely undesigned and,
> > at least, call into question the definition of "reasonably".
> > 
> > The hypervisor-based solution has been shown to be extremely
> > simple, fits very logically with existing Xen memory management
> > mechanisms/code, and has been reviewed through several iterations
> > by Xen hypervisor experts.
> > 
> > While I understand completely the Xen maintainers' desire to
> > fend off unnecessary additions to the hypervisor, I believe
> > XENMEM_claim_pages is a reasonable and natural hypervisor feature
> > and I hope you will now Ack the patch.


Just as a summary, as this is getting to be a long thread - my
understanding has been that the hypervisor is supposed to be toolstack
independent.

Our first goal is to implement this in 'xend' as that
is what we use right now. The problem will of course be to find somebody
to review it :-(

We certainly want to implement this also in the 'xl' toolstack
as in the future that is what we want to use when we rebase
our product on Xen 4.2 or greater.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 22:18:28 2012
Date: Tue, 18 Dec 2012 17:17:51 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Message-ID: <20121218221749.GA6332@phenom.dumpdata.com>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
	<49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
In-Reply-To: <49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions

Hey Andres,

Thanks for your response. Sorry for the really late response - I
had it in my postponed mailbox and thought it had been sent
already.

On Mon, Dec 03, 2012 at 10:24:40PM -0500, Andres Lagar-Cavilla wrote:
> > I earlier promised a complete analysis of the problem
> > addressed by the proposed claim hypercall as well as
> > an analysis of the alternate solutions.  I had not
> > yet provided these analyses when I asked for approval
> > to commit the hypervisor patch, so there was still
> > a good amount of misunderstanding, and I am trying
> > to fix that here.
> > 
> > I had hoped this essay could be both concise and complete
> > but quickly found it to be impossible to be both at the
> > same time.  So I have erred on the side of verbosity,
> > but also have attempted to ensure that the analysis
> > flows smoothly and is understandable to anyone interested
> > in learning more about memory allocation in Xen.
> > I'd appreciate feedback from other developers to understand
> > if I've also achieved that goal.
> > 
> > Ian, Ian, George, and Tim -- I have tagged a few
> > out-of-flow questions to you with [IIGF].  If I lose
> > you at any point, I'd especially appreciate your feedback
> > at those points.  I trust that, first, you will read
> > this completely.  As I've said, I understand that
> > Oracle's paradigm may differ in many ways from your
> > own, so I also trust that you will read it completely
> > with an open mind.
> > 
> > Thanks,
> > Dan
> > 
> > PROBLEM STATEMENT OVERVIEW
> > 
> > The fundamental problem is a race; two entities are
> > competing for part or all of a shared resource: in this case,
> > physical system RAM.  Normally, a lock is used to mediate
> > a race.
> > 
> > For memory allocation in Xen, there are two significant
> > entities, the toolstack and the hypervisor.  And, in
> > general terms, there are currently two important locks:
> > one used in the toolstack for domain creation;
> > and one in the hypervisor used for the buddy allocator.
> > 
> > Considering first only domain creation, the toolstack
> > lock is taken to ensure that domain creation is serialized.
> > The lock is taken when domain creation starts, and released
> > when domain creation is complete.
> > 
> > As system and domain memory requirements grow, the amount
> > of time to allocate all necessary memory to launch a large
> > domain is growing and may now exceed several minutes, so
> > this serialization is increasingly problematic.  The result
> > is a customer reported problem:  If a customer wants to
> > launch two or more very large domains, the "wait time"
> > required by the serialization is unacceptable.
> > 
> > Oracle would like to solve this problem.  And Oracle
> > would like to solve this problem not just for a single
> > customer sitting in front of a single machine console, but
> > for the very complex case of a large number of machines,
> > with the "agent" on each machine taking independent
> > actions including automatic load balancing and power
> > management via migration.
> Hi Dan,
> an issue with your reasoning throughout has been the constant invocation of the multi host environment as a justification for your proposal. But this argument is not used in your proposal below beyond this mention in passing. Further, there is no relation between what you are changing (the hypervisor) and what you are claiming it is needed for (multi host VM management).
> 

Heh. I hadn't realized that the emails need to conform to
the way legal briefs are written in the US :-) Meaning that
each topic must be addressed.

Anyhow, the multi-host env or a single-host env has the same
issue - you try to launch multiple guests and some of
them might not launch.

The changes that Dan is proposing (the claim hypercall)
would provide the functionality to fix this problem.

> 
> >  (This complex environment
> > is sold by Oracle today; it is not a "future vision".)
> > 
> > [IIGT] Completely ignoring any possible solutions to this
> > problem, is everyone in agreement that this _is_ a problem
> > that _needs_ to be solved with _some_ change in the Xen
> > ecosystem?
> > 
> > SOME IMPORTANT BACKGROUND INFORMATION
> > 
> > In the subsequent discussion, it is important to
> > understand a few things:
> > 
> > While the toolstack lock is held, allocating memory for
> > the domain creation process is done as a sequence of one
> > or more hypercalls, each asking the hypervisor to allocate
> > one or more -- "X" -- slabs of physical RAM, where a slab
> > is 2**N contiguous aligned pages, also known as an
> > "order N" allocation.  While the hypercall is defined
> > to work with any value of N, common values are N=0
> > (individual pages), N=9 ("hugepages" or "superpages"),
> > and N=18 ("1GiB pages").  So, for example, if the toolstack
> > requires 201MiB of memory, it will make two hypercalls:
> > One with X=100 and N=9, and one with X=1 and N=0.
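For concreteness, the order-N arithmetic above can be checked with a few lines (an illustrative helper, not Xen code): an order-N slab is 2**N contiguous 4KiB pages, which is why the order-0 fallback for a 1TiB domain comes to 256Mi individual pages.

```python
# Illustrative helper (not Xen code) for the order-N slab arithmetic:
# an order-N slab is 2**N contiguous 4KiB pages.

PAGE_SHIFT = 12        # 4 KiB pages

def slabs_for(size_bytes, order):
    """Number of order-`order` slabs needed to cover size_bytes."""
    return (size_bytes >> PAGE_SHIFT) >> order

TiB = 1 << 40
order0 = slabs_for(TiB, 0)   # 268435456 == 256Mi single pages
order9 = slabs_for(TiB, 9)   # 524288   == 512Ki superpages
```

The 512*X fallback mentioned below is the same arithmetic: each failed order-9 slab is retried as 2**9 = 512 order-0 pages.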
dGFjayBtYXkgYXNrIGZvciBhIHNtYWxsZXIgbnVtYmVyIFggb2YKPiA+IG9yZGVyPT05IHNsYWJz
LCBzeXN0ZW0gZnJhZ21lbnRhdGlvbiBtYXkgdW5wcmVkaWN0YWJseQo+ID4gY2F1c2UgdGhlIGh5
cGVydmlzb3IgdG8gZmFpbCB0aGUgcmVxdWVzdCwgaW4gd2hpY2ggY2FzZQo+ID4gdGhlIHRvb2xz
dGFjayB3aWxsIGZhbGwgYmFjayB0byBhIHJlcXVlc3QgZm9yIDUxMipYCj4gPiBpbmRpdmlkdWFs
IHBhZ2VzLiAgSWYgdGhlcmUgaXMgc3VmZmljaWVudCBSQU0gaW4gdGhlIHN5c3RlbSwKPiA+IHRo
aXMgcmVxdWVzdCBmb3Igb3JkZXI9PTAgcGFnZXMgaXMgZ3VhcmFudGVlZCB0byBzdWNjZWVkLgo+
ID4gVGh1cyBmb3IgYSAxVGlCIGRvbWFpbiwgdGhlIGh5cGVydmlzb3IgbXVzdCBiZSBwcmVwYXJl
ZAo+ID4gdG8gYWxsb2NhdGUgdXAgdG8gMjU2TWkgaW5kaXZpZHVhbCBwYWdlcy4KPiA+IAo+ID4g
Tm90ZSBjYXJlZnVsbHkgdGhhdCB3aGVuIHRoZSB0b29sc3RhY2sgaHlwZXJjYWxsIGFza3MgZm9y
Cj4gPiAxMDAgc2xhYnMsIHRoZSBoeXBlcnZpc29yICJoZWFwbG9jayIgaXMgY3VycmVudGx5IHRh
a2VuCj4gPiBhbmQgcmVsZWFzZWQgMTAwIHRpbWVzLiAgU2ltaWxhcmx5LCBmb3IgMjU2TSBpbmRp
dmlkdWFsCj4gPiBwYWdlcy4uLiAyNTYgbWlsbGlvbiBzcGluX2xvY2stYWxsb2NfcGFnZS1zcGlu
X3VubG9ja3MuCj4gPiBUaGlzIG1lYW5zIHRoYXQgZG9tYWluIGNyZWF0aW9uIGlzIG5vdCAiYXRv
bWljIiBpbnNpZGUKPiA+IHRoZSBoeXBlcnZpc29yLCB3aGljaCBtZWFucyB0aGF0IHJhY2VzIGNh
biBhbmQgd2lsbCBzdGlsbAo+ID4gb2NjdXIuCj4gPiAKPiA+IFJVTElORyBPVVQgU09NRSBTSU1Q
TEUgU09MVVRJT05TCj4gPiAKPiA+IElzIHRoZXJlIGFuIGVsZWdhbnQgc2ltcGxlIHNvbHV0aW9u
IGhlcmU/Cj4gPiAKPiA+IExldCdzIGZpcnN0IGNvbnNpZGVyIHRoZSBwb3NzaWJpbGl0eSBvZiBy
emoving the toolstack
> > serialization entirely and/or the possibility that two
> > independent toolstack threads (or "agents") can simultaneously
> > request a very large domain creation in parallel.  As described
> > above, the hypervisor's heaplock is insufficient to serialize RAM
> > allocation, so the two domain creation processes race.  If there
> > is sufficient resource for either one to launch, but insufficient
> > resource for both to launch, the winner of the race is indeterminate,
> > and one or both launches will fail, possibly after one or both
> > domain creation threads have been working for several minutes.
> > This is a classic "TOCTOU" (time-of-check-time-of-use) race.
> > If a customer is unhappy waiting several minutes to launch
> > a domain, they will be even more unhappy waiting for several
> > minutes to be told that one or both of the launches has failed.
> > Multi-minute failure is even more unacceptable for an automated
> > agent trying to, for example, evacuate a machine that the
> > data center administrator needs to powercycle.
> > 
> > [IIGT: Please hold your objections for a moment... the paragraph
> > above is discussing the simple solution of removing the serialization;
> > your suggested solution will be discussed soon.]
> > 
> > Next, let's consider the possibility of changing the heaplock
> > strategy in the hypervisor so that the lock is held not
> > for one slab but for the entire request of N slabs.  As with
> > any core hypervisor lock, holding the heaplock for a "long time"
> > is unacceptable.  To a hypervisor, several minutes is an eternity.
> > And, in any case, by serializing domain creation in the hypervisor,
> > we have really only moved the problem from the toolstack into
> > the hypervisor, not solved the problem.
> > 
> > [IIGT] Are we in agreement that these simple solutions can be
> > safely ruled out?
> > 
> > CAPACITY ALLOCATION VS RAM ALLOCATION
> > 
> > Looking for a creative solution, one may realize that it is the
> > page allocation -- especially in large quantities -- that is very
> > time-consuming.  But, thinking outside of the box, it is not
> > the actual pages of RAM that we are racing on, but the quantity of pages required to launch a domain!  If we instead have a way to
> > "claim" a quantity of pages cheaply now and then allocate the actual
> > physical RAM pages later, we have changed the race to require only serialization of the claiming process!  In other words, if some entity
> > knows the number of pages available in the system, and can "claim"
> > N pages for the benefit of a domain being launched, the successful launch of the domain can be ensured.  Well... the domain launch may
> > still fail for an unrelated reason, but not due to a memory TOCTOU
> > race.  But, in this case, if the cost (in time) of the claiming
> > process is very small compared to the cost of the domain launch,
> > we have solved the memory TOCTOU race with hardly any delay added
> > to a non-memory-related failure that would have occurred anyway.
> > 
> > This "claim" sounds promising.  But we have made an assumption that
> > an "entity" has certain knowledge.  In the Xen system, that entity
> > must be either the toolstack or the hypervisor.  Or, in the Oracle
> > environment, an "agent"... but an agent and a toolstack are similar
> > enough for our purposes that we will just use the more broadly-used
> > term "toolstack".  In using this term, however, it's important to
> > remember it is necessary to consider the existence of multiple
> > threads within this toolstack.
> > 
> > Now I quote Ian Jackson: "It is a key design principle of a system
> > like Xen that the hypervisor should provide only those facilities
> > which are strictly necessary.  Any functionality which can be
> > reasonably provided outside the hypervisor should be excluded
> > from it."
> > 
> > So let's examine the toolstack first.
> > 
> > [IIGT] Still all on the same page (pun intended)?
> > 
> > TOOLSTACK-BASED CAPACITY ALLOCATION
> > 
> > Does the toolstack know how many physical pages of RAM are available?
> > Yes, it can use a hypercall to find out this information after Xen and
> > dom0 launch, but before it launches any domain.  Then if it subtracts
> > the number of pages used when it launches a domain and is aware of
> > when any domain dies, and adds them back, the toolstack has a pretty
> > good estimate.  In actuality, the toolstack doesn't _really_ know the
> > exact number of pages used when a domain is launched, but there
> > is a poorly-documented "fuzz factor"... the toolstack knows the
> > number of pages within a few megabytes, which is probably close enough.
> > 
> > This is a fairly good description of how the toolstack works today
> > and the accounting seems simple enough, so does toolstack-based
> > capacity allocation solve our original problem?  It would seem so.
> > Even if there are multiple threads, the accounting -- not the extended
> > sequence of page allocation for the domain creation -- can be
> > serialized by a lock in the toolstack.  But note carefully, either
> > the toolstack and the hypervisor must always be in sync on the
> > number of available pages (within an acceptable margin of error);
> > or any query to the hypervisor _and_ the toolstack-based claim must
> > be paired atomically, i.e. the toolstack lock must be held across
> > both.  Otherwise we again have another TOCTOU race. Interesting,
> > but probably not really a problem.
> > 
> > Wait, isn't it possible for the toolstack to dynamically change the
> > number of pages assigned to a domain?  Yes, this is often called
> > ballooning and the toolstack can do this via a hypercall.  But
> 
> > that's still OK because each call goes through the toolstack and
> > it simply needs to add more accounting for when it uses ballooning
> > to adjust the domain's memory footprint.  So we are still OK.
> > 
> > But wait again... that brings up an interesting point.  Are there
> > any significant allocations that are done in the hypervisor without
> > the knowledge and/or permission of the toolstack?  If so, the
> > toolstack may be missing important information.
> > 
> > So are there any such allocations?  Well... yes. There are a few.
> > Let's take a moment to enumerate them:
> > 
> > A) In Linux, a privileged user can write to a sysfs file which writes
> > to the balloon driver which makes hypercalls from the guest kernel to
> 
> A fairly bizarre limitation of a balloon-based approach to memory management. Why on earth should the guest be allowed to change the size of its balloon, and therefore its footprint on the host? This may be justified with arguments pertaining to the stability of the in-guest workload. What they really reveal are limitations of ballooning. But the inadequacy of the balloon in itself doesn't automatically translate into justifying the need for a new hypercall.

Why is this a limitation? Why shouldn't the guest be allowed to change
its memory usage? It can go up and down as it sees fit.
And if it goes down and it gets better performance - well, why shouldn't
it do it?

I concur it is odd - but it has been like that for decades.


> 
> > the hypervisor, which adjusts the domain memory footprint, which changes the number of free pages _without_ the toolstack's knowledge.
> > The toolstack controls constraints (essentially a minimum and maximum)
> > which the hypervisor enforces.  The toolstack can ensure that the
> > minimum and maximum are identical to essentially disallow Linux from
> > using this functionality.  Indeed, this is precisely what Citrix's
> > Dynamic Memory Controller (DMC) does: enforce min==max so that DMC always has complete control and, so, knowledge of any domain memory
> > footprint changes.  But DMC is not prescribed by the toolstack,
> 
> Neither is enforcing min==max. This was my argument when previously commenting on this thread. The fact that you have enforcement of a maximum domain allocation gives you an excellent tool to keep a domain's unsupervised growth at bay. The toolstack can choose how fine-grained, how often to be alerted and stall the domain.

Is there a down-call (so, events) to the tool-stack from the hypervisor when
the guest tries to balloon in/out? So the need for this problem arose
but the mechanism to deal with it has been shifted to user-space
then? What to do when the guest does this in/out balloon at frequent
intervals?

I am actually missing the reasoning behind wanting to stall the domain.
Is that to compress/swap the pages that the guest requests? Meaning
a user-space daemon that does "things" and has ownership
of the pages?

> 
> > and some real Oracle Linux customers use and depend on the flexibility
> > provided by in-guest ballooning.   So guest-privileged-user-driven-
> > ballooning is a potential issue for toolstack-based capacity allocation.
> > 
> > [IIGT: This is why I have brought up DMC several times and have
> > called this the "Citrix model,".. I'm not trying to be snippy
> > or impugn your morals as maintainers.]
> > 
> > B) Xen's page sharing feature has slowly been completed over a number
> > of recent Xen releases.  It takes advantage of the fact that many
> > pages often contain identical data; the hypervisor merges them to save
> 
> Great care has been taken for this statement to not be exactly true. The hypervisor discards one of two pages that the toolstack tells it to (and patches the physmap of the VM previously pointing to the discarded page). It doesn't merge, nor does it look into contents. The hypervisor doesn't care about the page contents. This is deliberate, so as to avoid spurious claims of "you are using technique X!"
> 

Is the toolstack (or a daemon in userspace) doing this? I would
have thought that there would be some optimization to do this
somewhere?

> > physical RAM.  When any "shared" page is written, the hypervisor
> > "splits" the page (aka, copy-on-write) by allocating a new physical
> > page.  There is a long history of this feature in other virtualization
> > products and it is known to be possible that, under many circumstances, thousands of splits may occur in any fraction of a second.  The
> > hypervisor does not notify or ask permission of the toolstack.
> > So, page-splitting is an issue for toolstack-based capacity
> > allocation, at least as currently coded in Xen.
> > 
> > [Andre: Please hold your objection here until you read further.]
> 
> Name is Andres. And please cc me if you'll be addressing me directly!
> 
> Note that I don't disagree with your previous statement in itself. Although "page-splitting" is fairly unique terminology, and confusing (at least to me). CoW works.

<nods>
> 
> > 
> > C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
> > toolstack for over three years.  It depends on an in-guest-kernel
> > adaptive technique to constantly adjust the domain memory footprint as
> > well as hooks in the in-guest-kernel to move data to and from the
> > hypervisor.  While the data is in the hypervisor's care, interesting
> > memory-load balancing between guests is done, including optional
> > compression and deduplication.  All of this has been in Xen since 2009
> > and has been awaiting changes in the (guest-side) Linux kernel. Those
> > changes are now merged into the mainstream kernel and are fully
> > functional in shipping distros.
> > 
> > While a complete description of tmem's guest<->hypervisor interaction
> > is beyond the scope of this document, it is important to understand
> > that any tmem-enabled guest kernel may unpredictably request thousands
> > or even millions of pages directly via hypercalls from the hypervisor in a fraction of a second with absolutely no interaction with the toolstack.  Further, the guest-side hypercalls that allocate pages
> > via the hypervisor are done in "atomic" code deep in the Linux mm
> > subsystem.
> > 
> > Indeed, if one truly understands tmem, it should become clear that
> > tmem is fundamentally incompatible with toolstack-based capacity
> > allocation. But let's stop discussing tmem for now and move on.
> 
> You have not discussed tmem pool thaw and freeze in this proposal.

Oooh, you know about it :-) Dan didn't want to go too verbose on
people. It is a bit of a rathole - and this hypercall would
allow us to deprecate said freeze/thaw calls.

> 
> > 
> > OK.  So with existing code both in Xen and Linux guests, there are
> > three challenges to toolstack-based capacity allocation.  We'd
> > really still like to do capacity allocation in the toolstack.  Can
> > something be done in the toolstack to "fix" these three cases?
> > 
> > Possibly.  But let's first look at hypervisor-based capacity
> > allocation: the proposed "XENMEM_claim_pages" hypercall.
> > 
> > HYPERVISOR-BASED CAPACITY ALLOCATION
> > 
> > The posted patch for the claim hypercall is quite simple, but let's
> > look at it in detail.  The claim hypercall is actually a subop
> > of an existing hypercall.  After checking parameters for validity,
> > a new function is called in the core Xen memory management code.
> > This function takes the hypervisor heaplock, checks for a few
> > special cases, does some arithmetic to ensure a valid claim, stakes
> > the claim, releases the hypervisor heaplock, and then returns.  To
> > review from earlier, the hypervisor heaplock protects _all_ page/slab
> > allocations, so we can be absolutely certain that there are no other
> > page allocation races.  This new function is about 35 lines of code,
> > not counting comments.
> > 
> > The patch includes two other significant changes to the hypervisor:
> > First, when any adjustment to a domain's memory footprint is made
> > (either through a toolstack-aware hypercall or one of the three
> > toolstack-unaware methods described above), the heaplock is
> > taken, arithmetic is done, and the heaplock is released.  This
> > is 12 lines of code.  Second, when any memory is allocated within
> > Xen, a check must be made (with the heaplock already held) to
> > determine if, given a previous claim, the domain has exceeded
> > its upper bound, maxmem.  This code is a single conditional test.
> > 
> > With some declarations, but not counting the copious comments,
> > all told, the new code provided by the patch is well under 100 lines.
> > 
> > What about the toolstack side?  First, it's important to note that
> > the toolstack changes are entirely optional.  If any toolstack
> > wishes either to not fix the original problem, or avoid toolstack-
> > unaware allocation completely by ignoring the functionality provided
> > by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
> > not use the new hypercall.
> 
> You are ruling out any other possibility here. In particular, but not limited to, use of max_pages.

The one max_pages check that comes to my mind is the one that Xapi
uses. That is, it has a daemon that sets the max_pages of all the
guests at some value so that it can squeeze in as many guests as
possible. It also balloons pages out of a guest to make space if
needed to launch. The heuristic of how many pages or the ratio
of max/min looks to be proportional (so to make space for 1GB
for a guest, and say we have 10 guests, we will subtract
101MB from each guest - the extra 1MB is for extra overhead).
This depends on one hypercall that the 'xl' or 'xm' toolstacks do not
use - which sets the max_pages.

That code makes certain assumptions - that the guest will not go up/down
in ballooning once the toolstack has decreed how much
memory the guest should use. It also assumes that the operations
are semi-atomic - and to make it so as much as it can - it executes
these operations in serial.

This goes back to the problem statement - if we try to parallelize
this we run into the problem that the amount of memory we thought
was free is not true anymore. The start of this email has a good
description of some of the issues.

In essence, max_pages does work - _if_ one does these operations
in serial. We are trying to make this work in parallel and without
any failures - and one way to do that, quite simplistic,
is the claim hypercall. It sets up a 'stake' of the amount of
memory that the hypervisor should reserve. This way other
guest creations/ballooning do not infringe on the 'claimed' amount.

I believe with this hypercall Xapi can be made to do its operations
in parallel as well.

> 
> >  Second, it's very relevant to note that the Oracle product uses a combination of a proprietary "manager"
> > which oversees many machines, and the older open-source xm/xend
> > toolstack, for which the current Xen toolstack maintainers are no
> > longer accepting patches.
> > 
> > The preface of the published patch does suggest, however, some
> > straightforward pseudo-code, as follows:
> > 
> > Current toolstack domain creation memory allocation code fragment:
> > 
> > 1. call populate_physmap repeatedly to achieve mem=N memory
> > 2. if any populate_physmap call fails, report -ENOMEM up the stack
> > 3. memory is held until domain dies or the toolstack decreases it
> > 
> > Proposed toolstack domain creation memory allocation code fragment
> > (new code marked with "+"):
> > 
> > +  call claim for mem=N amount of memory
> > +. if claim succeeds:
> > 1.  call populate_physmap repeatedly to achieve mem=N memory (failsafe)
> > +  else
> > 2.  report -ENOMEM up the stack
> > +  claim is held until mem=N is achieved or the domain dies or
> >    forced to 0 by a second hypercall
> > 3. memory is held until domain dies or the toolstack decreases it
> > 
> > Reviewing the pseudo-code, one can readily see that the toolstack
> > changes required to implement the hypercall are quite small.
> > 
> > To complete this discussion, it has been pointed out that
> > the proposed hypercall doesn't solve the original problem
> > for certain classes of legacy domains... but also neither
> > does it make the problem worse.  It has also been pointed
> > out that the proposed patch is not (yet) NUMA-aware.
> > 
> > Now let's return to the earlier question:  There are three
> > challenges to toolstack-based capacity allocation, which are
> > all handled easily by in-hypervisor capacity allocation. But we'd
> > really still like to do capacity allocation in the toolstack.
> > Can something be done in the toolstack to "fix" these three cases?
> > 
> > The answer is, of course, certainly... anything can be done in
> > software.  So, recalling Ian Jackson's stated requirement:
> > 
> > "Any functionality which can be reasonably provided outside the
> >  hypervisor should be excluded from it."
> > 
> > we are now left to evaluate the subjective term "reasonably".
> > 
> > CAN TOOLSTACK-BASED CAPACITY ALLOCATION OVERCOME THE ISSUES?
> > 
> > In earlier discussion on this topic, when page-splitting was raised
> > as a concern, some of the authors of Xen's page-sharing feature
> > pointed out that a mechanism could be designed such that "batches"
> > of pages were pre-allocated by the toolstack and provided to the
> > hypervisor to be utilized as needed for page-splitting.  Should the
> > batch run dry, the hypervisor could stop the domain that was provoking
> > the page-split until the toolstack could be consulted and the toolstack, at its leisure, could request the hypervisor to refill
> > the batch, which then allows the page-split-causing domain to proceed.
> > 
> > But this batch page-allocation isn't implemented in Xen today.
> > 
> > Andres Lagar-Cavilla says "... this is because of shortcomings in the
> > [Xen] mm layer and its interaction with wait queues, documented
> > elsewhere."  In other words, this batching proposal requires
> > significant changes to the hypervisor, which I think we
> > all agreed we were trying to avoid.
> 
> This is a misunderstanding. There is no connection between the batching proposal and what I was referring to in the quote. Certainly I never advocated for pre-allocations.
> 
> The "significant changes to the hypervisor" statement is FUD. Everyone you've addressed on this email makes significant changes to the hypervisor, under the proviso that they are necessary/useful changes.
> 
> The interactions between the mm layer and wait queues need fixing, sooner or later, claim hypercall or not. But they are not a blocker, they are essentially a race that may trigger under certain circumstances. That is why they remain a low priority fix.
> 
> > 
> > [Note to Andre: I'm not objecting to the need for this functionality
> > for page-sharing to work with proprietary kernels and DMC; just
> 
> Let me nip this in the bud. I use page sharing and other techniques in an environment that doesn't use Citrix's DMC, nor is focused only on proprietary kernels...
> 
> > pointing out that it, too, is dependent on further hypervisor changes.]
> 
> … with 4.2 Xen. It is not perfect and has limitations that I am trying to fix. But our product ships, and page sharing works for anyone who would want to consume it, independently of further hypervisor changes.
> 

I believe what Dan is saying is that it is not enabled by default.
Meaning it does not get executed by /etc/init.d/xencommons and
as such it never gets run (or does it now?) - unless one knows
about it - or it is enabled by default in a product. But perhaps
we are both mistaken? Is it enabled by default now on xen-unstable?

> > 
> > Such an approach makes sense in the min==max model enforced by
> > DMC but, again, DMC is not prescribed by the toolstack.
> > 
> > Further, this waitqueue solution for page-splitting only awkwardly
> > works around in-guest ballooning (probably only with more hypervisor
> > changes, TBD) and would be useless for tmem.  [IIGT: Please argue
> > this last point only if you feel confident you truly understand how
> > tmem works.]
> 
> I will argue though that "waitqueue solution … ballooning" is not true. Ballooning has never needed, nor does it suddenly need now, hypervisor wait queues.

It is the use case of parallel starts that we are trying to solve.
Worse - we want to start 16GB or 32GB guests and those seem to take
quite a bit of time.

> 
> > 
> > So this as-yet-unimplemented solution only really solves a part
> > of the problem.
> 
> As per the previous comments, I don't see your characterization as accurate.
> 
> Andres
> > 
> > Are there any other possibilities proposed?  Ian Jackson has
> > suggested a somewhat different approach:
> > 
> > Let me quote Ian Jackson again:
> > 
> > "Of course if it is really desired to have each guest make its own
> > decisions and simply for them to somehow agree to divvy up the
> > available resources, then even so a new hypervisor mechanism is
> > not needed.  All that is needed is a way for those guests to
> > synchronise their accesses and updates to shared records of the
> > available and in-use memory."
> > 
> > Ian then goes on to say:  "I don't have a detailed counter-proposal
> > design of course..."
> > 
> > This proposal is certainly possible, but I think most would agree that
> > it would require some fairly massive changes in OS memory management
> > design that would run contrary to many years of computing history.
> > It requires guest OS's to cooperate with each other about basic memory
> > management decisions.  And to work for tmem, it would require
> > communication from atomic code in the kernel to user-space, then communication from user-space in a guest to user-space-in-domain0
> > and then (presumably... I don't have a design either) back again.
> > One must also wonder what the performance impact would be.
> > 
> > CONCLUDING REMARKS
> > 
> > "Any functionality which can be reasonably provided outside the
> >  hypervisor should be excluded from it."
> > 
> > I think this document has described a real customer problem and
> > a good solution that could be implemented either in the toolstack
> > or in the hypervisor.  Memory allocation in existing Xen functionality
> > has been shown to interfere significantly with the toolstack-based
> > solution and suggested partial solutions to those issues either
> > require even more hypervisor work, or are completely undesigned and,
> > at least, call into question the definition of "reasonably".
> > 
> > The hypervisor-based solution has been shown to be extremely
> > simple, fits very logically with existing Xen memory management
> > mechanisms/code, and has been reviewed through several iterations
> > by Xen hypervisor experts.
> > 
> > While I understand completely the Xen maintainers' desire to
> > fend off unnecessary additions to the hypervisor, I believe
> > XENMEM_claim_pages is a reasonable and natural hypervisor feature
> > and I hope you will now Ack the patch.


Just as a summary, as this is getting to be a long thread - my
understanding has been that the hypervisor is supposed to be toolstack
independent.

Our first goal is to implement this in 'xend' as that
is what we use right now. Problem will be of course to find somebody
to review it :-(

We certainly want to implement this also in the 'xl' tool-stack
as in the future that is what we want to use when we rebase
our product on Xen 4.2 or greater.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 22:46:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 22:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl5vS-0005Jh-D8; Tue, 18 Dec 2012 22:46:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1Tl5vR-0005Jc-Ge
	for xen-devel@lists.xensource.com; Tue, 18 Dec 2012 22:46:13 +0000
Received: from [193.109.254.147:30960] by server-11.bemta-14.messagelabs.com
	id 87/96-02659-432F0D05; Tue, 18 Dec 2012 22:46:12 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355870771!9172607!1
X-Originating-IP: [209.85.220.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31606 invoked from network); 18 Dec 2012 22:46:12 -0000
Received: from mail-vc0-f174.google.com (HELO mail-vc0-f174.google.com)
	(209.85.220.174)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 22:46:12 -0000
Received: by mail-vc0-f174.google.com with SMTP id d16so1556728vcd.19
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Dec 2012 14:46:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:date:from:to:subject:message-id:mime-version
	:content-type:content-disposition:user-agent;
	bh=4oGsPq3s5YGVSSxeiwqJRyqk6OQk/hLH7kHNnJcHeOw=;
	b=Vsnf1oHvEvhTkmUgyyf30Tdb0hivvnpXWGKcvqp4fdqjaagCmjqC8u1zj6+mZ4IYuG
	/GDsamR4fvfaip6xItgmyHx4SmQLMzRnENTKx+Qy3DIs5kcjBI2qOBwE0fpT68M3b6BD
	shgXZbmB2N9jHD7s/PSKUC1XDdU+wA47AD2eIXObfGXYFfi6NYP+JJ4U80LclHDxsmTY
	zLEoaQj81MLzSxrhjEX/BitNCiiqt9w/2jxDdMx03/f8wjlGg4PlGi8Hknh1rQS5HFfq
	VIf7f6u7QM5SoCS2RpAI9j3v/l0Rp27ymUbLbII/qUbrc78We+go/8fOshPXbX1W6JZG
	fuvw==
X-Received: by 10.220.106.147 with SMTP id x19mr5774178vco.37.1355870770563;
	Tue, 18 Dec 2012 14:46:10 -0800 (PST)
Received: from phenom.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPS id cv19sm2509101vdb.5.2012.12.18.14.46.09
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 14:46:10 -0800 (PST)
Date: Tue, 18 Dec 2012 17:46:07 -0500
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: xiantao.zhang@intel.com, xen-devel@lists.xensource.com
Message-ID: <20121218224606.GA6918@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Subject: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

I was wondering if the ACPI PCI hotplug (so inserting a new PCIe card
in a server that supports said functionality) is something that Intel
has been testing or using?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 18 23:10:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Dec 2012 23:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl6IX-0005bf-Kr; Tue, 18 Dec 2012 23:10:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <2rushikeshj@gmail.com>)
	id 1Tl4zt-0004CZ-UX; Tue, 18 Dec 2012 21:46:46 +0000
Received: from [85.158.139.83:52567] by server-9.bemta-5.messagelabs.com id
	5B/E7-10690-544E0D05; Tue, 18 Dec 2012 21:46:45 +0000
X-Env-Sender: 2rushikeshj@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355867202!28310993!1
X-Originating-IP: [209.85.216.181]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 450 invoked from network); 18 Dec 2012 21:46:43 -0000
Received: from mail-qc0-f181.google.com (HELO mail-qc0-f181.google.com)
	(209.85.216.181)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Dec 2012 21:46:43 -0000
Received: by mail-qc0-f181.google.com with SMTP id x40so679330qcp.26
	for <multiple recipients>; Tue, 18 Dec 2012 13:46:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=sIXDVcvcdJahNOrh2NzOxy/Q0FshyKk9RdU1z3qmnxw=;
	b=MMb+uQ+APJSZYIyHjzjZaSowoTehNdzSl7K9iD9BPJb2vNj3yDnXJQP3ghWaxHhomw
	sNWXALWqbnlP4B2XGGVR2Gp/WkNY0h+pCxhonVA8I1OnYCOQOoaJTb9qVsRqlvNyWg7U
	tysdxjVwyQuf838MK9/j7Kr+kAXxV5lHTKaMig6SFNVwjdlyzBNXA17GUzklBnrgDjGC
	UcIe/T0Jf1QLE6MlTP8jCtIUzDUqB94OAid2wGDwryuEp0LcNySaaj0liqVRptrO4oyl
	FdmBDil7lKJ4l5QWVv+3W8lFFVN6iMjD8xw+zX7ClcejqKO/HaUJB83l21FulcPal8AB
	SBZw==
MIME-Version: 1.0
Received: by 10.49.121.40 with SMTP id lh8mr1717301qeb.30.1355867202190; Tue,
	18 Dec 2012 13:46:42 -0800 (PST)
Received: by 10.229.17.19 with HTTP; Tue, 18 Dec 2012 13:46:42 -0800 (PST)
Date: Wed, 19 Dec 2012 03:16:42 +0530
Message-ID: <CAO9XypUE6v8Ome4j7uDbqRaBQ2eSRjT53Bm_9z1yHW6mjRunRQ@mail.gmail.com>
From: Rushikesh Jadhav <2rushikeshj@gmail.com>
To: "xen-api@lists.xen.org" <xen-api@lists.xen.org>, xen-users@lists.xen.org,
	xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=047d7bdc1be4f2fcb904d1276eee
X-Mailman-Approved-At: Tue, 18 Dec 2012 23:10:04 +0000
Subject: [Xen-devel] XCP 1.1 host crash due to VM console connection
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bdc1be4f2fcb904d1276eee
Content-Type: multipart/alternative; boundary=047d7bdc1be4f2fcb304d1276eec

--047d7bdc1be4f2fcb304d1276eec
Content-Type: text/plain; charset=ISO-8859-1

Hi everyone,

We faced a dom0 crash on XCP1.1 and Xen 3.4.2.

I've attached the log file for reference. In dom0 we could find only
"Xc.Error" entries before the host rebooted. There is no information in
/var/crash/ or other log files.

Interestingly, the "INTERNAL_ERROR" message keeps growing in length until
the host reboots. Possibly a memory overlap? Could someone please help me
diagnose this and work out a way to avoid it in the future?
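For reference, the attached excerpt was produced with the filter shown on
its first line (`cat xensource.log | grep "INTERNAL_ERROR:" | grep "Xc.Error"`).
A sketch of that filter, with a length column added to show how each
successive Xc.Error message grows (the two sample lines below are
shortened stand-ins for the real /var/log/xensource.log entries):

```shell
# Filter the Xc.Error INTERNAL_ERROR lines and prefix each with its
# length, to confirm the error message really grows on every connection.
log=$(mktemp)
cat > "$log" <<'EOF'
[20121218T16:20:16.653Z|debug|xcp11|3064 inet-RPC|dispatcher] Got exception INTERNAL_ERROR: [ Xc.Error("getinfo failed: domain -1: hypercall 36 fail: 11: Resource temporarily unavailable (ret -1)") ]
[20121218T16:20:18.440Z|debug|xcp11|3065 inet-RPC|dispatcher] Got exception INTERNAL_ERROR: [ Xc.Error("getinfo failed: domain -1: getinfo failed: domain 8: hypercall 36 fail: 11: Resource temporarily unavailable (ret -1)") ]
EOF
grep 'INTERNAL_ERROR:' "$log" | grep 'Xc.Error' | awk '{ print length($0), $0 }'
rm -f "$log"
```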

Thank you for reading.

Regards,
Rushikesh

--047d7bdc1be4f2fcb304d1276eec
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div><div><div>Hi everyone,<br><br></div>We faced a d=
om0 crash on XCP1.1 and Xen 3.4.2. <br><br>I&#39;ve attached the log file f=
or reference. On dom0 we could find only &quot;Xc.Error&quot; and then rebo=
ot of host. There is no information in /var/crash/ or other log files. <br>
<br></div>Interestingly the &quot;INTERNAL_ERROR&quot; variable keeps growi=
ng until a length and then host reboots. Possible memory overlap ? Could so=
meone please help me diagnose this and sort out a way to avoid it in future=
 ?<br>
<br></div>Thank you for reading.<br><br></div>Regards,<br>Rushikesh<br><div=
><div><div><br><br><br></div></div></div></div>

--047d7bdc1be4f2fcb304d1276eec--
--047d7bdc1be4f2fcb904d1276eee
Content-Type: application/octet-stream; name="xensource.log"
Content-Disposition: attachment; filename="xensource.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_havk9nfc0

W3Jvb3RAeGNwMTEgbG9nXSMgY2F0IHhlbnNvdXJjZS5sb2cgfCAgZ3JlcCAiSU5URVJOQUxfRVJS
T1I6IiB8IGdyZXAgIlhjLkVycm9yIg0KWzIwMTIxMjE4VDE2OjIwOjE2LjY1M1p8ZGVidWd8eGNw
MTF8MzA2NCBpbmV0LVJQQ3xDb25uZWN0aW9uIHRvIFZNIGNvbnNvbGUgUjo0OGM4ZDdkNzQ3YWZ8
ZGlzcGF0Y2hlcl0gU2VydmVyX2hlbHBlcnMuZXhlYyBleGNlcHRpb25faGFuZGxlcjogR290IGV4
Y2VwdGlvbiBJTlRFUk5BTF9FUlJPUjogWyBYYy5FcnJvcigiZ2V0aW5mbyBmYWlsZWQ6IGRvbWFp
biAtMTogaHlwZXJjYWxsIDM2IGZhaWw6IDExOiBSZXNvdXJjZSB0ZW1wb3JhcmlseSB1bmF2YWls
YWJsZSAocmV0IC0xKSIpIF0NClsyMDEyMTIxOFQxNjoyMDoxOC40NDBafGRlYnVnfHhjcDExfDMw
NjUgaW5ldC1SUEN8Q29ubmVjdGlvbiB0byBWTSBjb25zb2xlIFI6ZTA0MDEyNDk4NTY1fGRpc3Bh
dGNoZXJdIFNlcnZlcl9oZWxwZXJzLmV4ZWMgZXhjZXB0aW9uX2hhbmRsZXI6IEdvdCBleGNlcHRp
b24gSU5URVJOQUxfRVJST1I6IFsgWGMuRXJyb3IoImdldGluZm8gZmFpbGVkOiBkb21haW4gLTE6
IGdldGluZm8gZmFpbGVkOiBkb21haW4gODogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiA4OiBnZXRp
bmZvIGZhaWxlZDogZG9tYWluIC0xOiBoeXBlcmNhbGwgMzYgZmFpbDogMTE6IFJlc291cmNlIHRl
bXBvcmFyaWx5IHVuYXZhaWxhYmxlIChyZXQgLTEpIikgXQ0KWzIwMTIxMjE4VDE2OjIwOjE5LjA5
OVp8ZGVidWd8eGNwMTF8MzA3MSBpbmV0LVJQQ3xDb25uZWN0aW9uIHRvIFZNIGNvbnNvbGUgUjox
NTM4M2FkZTQ3NjV8ZGlzcGF0Y2hlcl0gU2VydmVyX2hlbHBlcnMuZXhlYyBleGNlcHRpb25faGFu
ZGxlcjogR290IGV4Y2VwdGlvbiBJTlRFUk5BTF9FUlJPUjogWyBYYy5FcnJvcigiZ2V0aW5mbyBm
YWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWls
ZWQ6IGRvbWFpbiA4OiBnZXRpbmZvIGZhaWxlZDogZG9tYWluIDg6IGdldGluZm8gZmFpbGVkOiBk
b21haW4gLTE6IGh5cGVyY2FsbCAzNiBmYWlsOiAxMTogUmVzb3VyY2UgdGVtcG9yYXJpbHkgdW5h
dmFpbGFibGUgKHJldCAtMSkiKSBdDQpbMjAxMjEyMThUMTY6MjA6MTkuNjAwWnxkZWJ1Z3x4Y3Ax
MXwzMDc1IGluZXQtUlBDfENvbm5lY3Rpb24gdG8gVk0gY29uc29sZSBSOmRiZDM3Mjg2YmRjYXxk
aXNwYXRjaGVyXSBTZXJ2ZXJfaGVscGVycy5leGVjIGV4Y2VwdGlvbl9oYW5kbGVyOiBHb3QgZXhj
ZXB0aW9uIElOVEVSTkFMX0VSUk9SOiBbIFhjLkVycm9yKCJnZXRpbmZvIGZhaWxlZDogZG9tYWlu
IC0xOiBnZXRpbmZvIGZhaWxlZDogZG9tYWluIC0xOiBnZXRpbmZvIGZhaWxlZDogZG9tYWluIC0x
OiBnZXRpbmZvIGZhaWxlZDogZG9tYWluIDg6IGdldGluZm8gZmFpbGVkOiBkb21haW4gODogZ2V0
aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogaHlwZXJjYWxsIDM2IGZhaWw6IDExOiBSZXNvdXJjZSB0
ZW1wb3JhcmlseSB1bmF2YWlsYWJsZSAocmV0IC0xKSIpIF0NClsyMDEyMTIxOFQxNjoyMDoxOS45
NDlafGRlYnVnfHhjcDExfDMwNzYgaW5ldC1SUEN8Q29ubmVjdGlvbiB0byBWTSBjb25zb2xlIFI6
OGJkYzhlMjUwZjJkfGRpc3BhdGNoZXJdIFNlcnZlcl9oZWxwZXJzLmV4ZWMgZXhjZXB0aW9uX2hh
bmRsZXI6IEdvdCBleGNlcHRpb24gSU5URVJOQUxfRVJST1I6IFsgWGMuRXJyb3IoImdldGluZm8g
ZmFpbGVkOiBkb21haW4gLTE6IGdldGluZm8gZmFpbGVkOiBkb21haW4gLTE6IGdldGluZm8gZmFp
bGVkOiBkb21haW4gLTE6IGdldGluZm8gZmFpbGVkOiBkb21haW4gLTE6IGdldGluZm8gZmFpbGVk
OiBkb21haW4gODogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiA4OiBnZXRpbmZvIGZhaWxlZDogZG9t
YWluIC0xOiBoeXBlcmNhbGwgMzYgZmFpbDogMTE6IFJlc291cmNlIHRlbXBvcmFyaWx5IHVuYXZh
aWxhYmxlIChyZXQgLTEpIikgXQ0KWzIwMTIxMjE4VDE2OjIwOjIwLjMzMVp8ZGVidWd8eGNwMTF8
MzA3OSBpbmV0LVJQQ3xDb25uZWN0aW9uIHRvIFZNIGNvbnNvbGUgUjo1MmYwZTMxMjRkMDZ8ZGlz
cGF0Y2hlcl0gU2VydmVyX2hlbHBlcnMuZXhlYyBleGNlcHRpb25faGFuZGxlcjogR290IGV4Y2Vw
dGlvbiBJTlRFUk5BTF9FUlJPUjogWyBYYy5FcnJvcigiZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAt
MTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTog
Z2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0
aW5mbyBmYWlsZWQ6IGRvbWFpbiA4OiBnZXRpbmZvIGZhaWxlZDogZG9tYWluIDg6IGdldGluZm8g
ZmFpbGVkOiBkb21haW4gLTE6IGh5cGVyY2FsbCAzNiBmYWlsOiAxMTogUmVzb3VyY2UgdGVtcG9y
YXJpIikgXQ0KWzIwMTIxMjE4VDE2OjIwOjIwLjYyN1p8ZGVidWd8eGNwMTF8MzA4MyBpbmV0LVJQ
Q3xDb25uZWN0aW9uIHRvIFZNIGNvbnNvbGUgUjo3NmU4M2E3NmVjNWJ8ZGlzcGF0Y2hlcl0gU2Vy
dmVyX2hlbHBlcnMuZXhlYyBleGNlcHRpb25faGFuZGxlcjogR290IGV4Y2VwdGlvbiBJTlRFUk5B
TF9FUlJPUjogWyBYYy5FcnJvcigiZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBm
YWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWls
ZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6
IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiA4OiBnZXRpbmZvIGZhaWxlZDogZG9t
YWluIDg6IGdldGluZm8gZmFpbGVkOiBkb21haW4gLTE6IGh5cGVyY2FsbCAzNiBmIikgXQ0KWzIw
MTIxMjE4VDE2OjIwOjIxLjE4Nlp8ZGVidWd8eGNwMTF8MzA5OSBpbmV0LVJQQ3xDb25uZWN0aW9u
IHRvIFZNIGNvbnNvbGUgUjpiNGE1MWZlZTg3YzR8ZGlzcGF0Y2hlcl0gU2VydmVyX2hlbHBlcnMu
ZXhlYyBleGNlcHRpb25faGFuZGxlcjogR290IGV4Y2VwdGlvbiBJTlRFUk5BTF9FUlJPUjogWyBY
Yy5FcnJvcigiZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFp
biAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAt
MTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTog
Z2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiAtMTogZ2V0aW5mbyBmYWlsZWQ6IGRvbWFpbiA4OiBnZXRp
bmZvIGZhaWxlZDogZG9tYWluIDg6IGdldGluZm8gZmFpbGVkIikgXQ0K
--047d7bdc1be4f2fcb904d1276eee
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bdc1be4f2fcb904d1276eee--


From xen-devel-bounces@lists.xen.org Wed Dec 19 00:17:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 00:17:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl7Le-0006Yo-2T; Wed, 19 Dec 2012 00:17:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1Tl7Ld-0006Yj-1a
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 00:17:21 +0000
Received: from [85.158.137.99:4159] by server-10.bemta-3.messagelabs.com id
	B7/F2-07616-B8701D05; Wed, 19 Dec 2012 00:17:15 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-12.tower-217.messagelabs.com!1355876233!14580301!1
X-Originating-IP: [209.85.210.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3290 invoked from network); 19 Dec 2012 00:17:14 -0000
Received: from mail-ia0-f176.google.com (HELO mail-ia0-f176.google.com)
	(209.85.210.176)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 00:17:14 -0000
Received: by mail-ia0-f176.google.com with SMTP id y26so1170084iab.35
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Dec 2012 16:17:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=MgdnsE9/K0uDmbd7kUn3SZv6DDSSnpcXAsWhdngyzTA=;
	b=GoDBkaD7mulUliJ07AZOMnN+yxl9hPnOmLiIM+nXhpBdSm2j+Yrrg5H6hjieNg/8ah
	ZXdJzQFygwpXOfx9ck62mXaa3sKpfVP3FEVh1wQNFPE4HQ1txchQYq4w5GKtTwG0nZ1/
	XHSFszH0aZjuliF6UOVXPcw5hKRCkttHfMwXidylGKD3ZziFPrqlfli73PQrvlPrU+Ap
	VqNrAGXsoZq8Em+O7eBh0qj1zJeqB82Dojn7WZzfGcODOHGEE+b9s7CssD+Le1akYIcd
	h7xJMeWjk5B3y7RkeWYBZxAFlrnDECVbmBGiGxBK2+blJvz0rA2B7PSmAqkxhGE5qAwR
	vYyg==
MIME-Version: 1.0
Received: by 10.50.185.194 with SMTP id fe2mr4994090igc.60.1355876232425; Tue,
	18 Dec 2012 16:17:12 -0800 (PST)
Received: by 10.50.20.164 with HTTP; Tue, 18 Dec 2012 16:17:12 -0800 (PST)
X-Originating-IP: [59.167.234.130]
In-Reply-To: <1355844650.14620.257.camel@zakaz.uk.xensource.com>
References: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
	<20687.24486.142168.302026@mariner.uk.xensource.com>
	<CA+T2pCGGbJXppP6ZEgvfxOU+u0TAhzEnpphbVgC4m0AiN2x1fg@mail.gmail.com>
	<1355844650.14620.257.camel@zakaz.uk.xensource.com>
Date: Wed, 19 Dec 2012 11:17:12 +1100
Message-ID: <CAOzFzEj63zoMoC_gh2C8YWVCuyqG2WT=QHi60i97vpzcd6xUyA@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: William Pitcock <nenolod@dereferenced.org>
X-Gm-Message-State: ALoCoQk/meLAlYieP/WLcHEvnEWgvuc28ik5jUm2DSZHRoVp+ntRxuEEd4A46YvKNbxM0KKGSV/7
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19 December 2012 02:30, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-12-18 at 14:55 +0000, William Pitcock wrote:
>> Hello,
>>
>> On Mon, Dec 17, 2012 at 12:08 PM, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> > William Pitcock writes ("[Xen-devel] introducing python-xen"):
>> >> I would like to introduce the Python Xen library, which uses libxs and
>> >> libxc directly to provide some manipulation functions for the domains
>> >> running on a hypervisor.
>> >
>> > Thanks, that's interesting.
>> >
>> > However, we would recommend nowadays to build this kind of
>> > functionality on top of libxl.  The existing python bindings for libxl
>> > may need some work, but they're probably a good starting point.
>> >
>> > Ian.
>>
>> The plan is to eventually shift over to using the libxl bindings once
>> they become more mature.
>
> libxl itself is in a good state. But AFAIK no one is working on the
> Python bindings. Personally I don't think that what is there is a good
> basis for future work, it'd be better to tear them up and start again.
>
> I'm happy to give pointers etc about the libxl and IDL side of things,
> you probably know way more about the Python side than I do though... (I
> can cargo cult bindings together and that's about it ;-))
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

Hi,

I would be interested in assisting with the Python libxl bindings.
I did some preliminary work on interfacing with libxl via Cython, with
moderate success, prior to the overhaul of the async operations.
Now that the API has stabilized and async APIs are available, I would
like to take another go at it and integrate it with gevent/libev.
If anyone else is interested in collaborating on this, that would be awesome.
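The gevent/libev integration described above amounts to letting an external event loop watch the file descriptors that the library's async machinery exposes, and calling back into the library when they become ready. As a rough, pure-Python sketch of that pattern (the names `EventHub`, `fd_register`, and `fd_deregister` are illustrative stand-ins, not the actual libxl osevent hooks):

```python
import select


class EventHub:
    """Minimal model of how an external event loop (e.g. gevent/libev)
    could service a library's async fd events: the library asks the
    loop to watch fds, and the loop calls back when they are ready."""

    def __init__(self):
        self.watched = {}  # fd -> callback

    def fd_register(self, fd, callback):
        # The library would call this hook to have its fd watched.
        self.watched[fd] = callback

    def fd_deregister(self, fd):
        # Called by the library when it no longer needs the fd watched.
        self.watched.pop(fd, None)

    def run_once(self, timeout=0.1):
        # One poll iteration; real code would run inside gevent's hub
        # rather than blocking in select() directly.
        readable, _, _ = select.select(list(self.watched), [], [], timeout)
        for fd in readable:
            self.watched[fd](fd)
```

In a real binding the register/deregister hooks would be handed to libxl at setup time, and `run_once` would be replaced by watchers in gevent's own loop.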

Joseph.


-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 00:51:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 00:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl7rl-0006rd-Sa; Wed, 19 Dec 2012 00:50:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tl7rk-0006rY-Et
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 00:50:32 +0000
Received: from [85.158.139.211:22555] by server-14.bemta-5.messagelabs.com id
	AD/EA-09538-75F01D05; Wed, 19 Dec 2012 00:50:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355878229!20281475!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI1NTI2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4957 invoked from network); 19 Dec 2012 00:50:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 00:50:30 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJ0nTf2014746
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 00:49:30 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJ0nSsh008746
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 00:49:29 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJ0nS5t014122; Tue, 18 Dec 2012 18:49:28 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Dec 2012 16:49:28 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5A6251BF30E; Tue, 18 Dec 2012 19:49:27 -0500 (EST)
Date: Tue, 18 Dec 2012 19:49:27 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Lars Kurth <lars.kurth@xen.org>
Message-ID: <20121219004927.GA8195@phenom.dumpdata.com>
References: <50C0A5F7.7050405@xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C0A5F7.7050405@xen.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 2013 Xen event plan's : Call for Input/Action
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 02:04:39PM +0000, Lars Kurth wrote:
> Hi everybody,
> 
> *Next Xen Hackathon - vote on date/location needed*
> I have been working with a still to be disclosed vendor on a Xen
> Hackathon to be hosted in spring. The following options are on the
> table
> 
>  * Option 1: May 16/17, Dublin, Ireland (preferred by vendor)

I vote for Ireland.

>  * Option 2: 2 days in February 11-14, Munich - note that FOSDEM is Feb
>    2 & 3
>  * Option 3: 2 days in March 18-22, Munich
> 
> Given that we had a Hackathon in Munich already and that I know that
> quite a few Xen devs (including me) are on vacation in the middle of
> March, my preference would be option 1. But I still wanted to put
> this to you to get a community view.
> 
> *XenSummit EU, Oct 2013*
> I just signed the contracts to host XenSummit the week of Oct
> 21-25 at the Edinburgh International Conference Centre. The exact
> dates are still to be decided. That time coincides with the 10th
> birthday of Xen, depending on how one counts: the 1.0 release was on
> Oct 2nd 2003, and SOSP 2003, where Xen was presented, was held on
> October 21 2003.
> 
> *XenSummit US or Asia, April-June 2013*
> Given that the EU summit is some time away, it would make sense to
> host a XenSummit in either the US or Asia in late spring or early
> summer. Time-frame wise I was thinking of April to June, which
> should fit well with the Xen 4.3 release. The only problem is that
> there is no bigger event we could co-locate with. So I can only do
> this at a) significant cost to Xen.org or b) if a vendor in the
> community who has space to host 100-200 people and is based in the
> US or Asia would volunteer to provide their premises for a 2 day
> event. If you work for a vendor who can accommodate us, and you would
> be interested in hosting, please get in touch with me.
> 
> Best Regards
> Lars
> 
> 
> 

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 01:30:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 01:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tl8Tr-0002dZ-47; Wed, 19 Dec 2012 01:29:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tl8To-0002dU-SO
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 01:29:53 +0000
Received: from [85.158.143.99:23109] by server-3.bemta-4.messagelabs.com id
	F0/E0-18211-F8811D05; Wed, 19 Dec 2012 01:29:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355880590!29116062!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMTY5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17359 invoked from network); 19 Dec 2012 01:29:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 01:29:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,300,1355097600"; 
   d="scan'208";a="240415"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 01:29:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 01:29:50 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tl8Tm-0002vj-3A;
	Wed, 19 Dec 2012 01:29:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tl8Tl-0000fI-Ii;
	Wed, 19 Dec 2012 01:29:49 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14781-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 01:29:49 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14781: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14781 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14781/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-winxpsp3 12 guest-localmigrate/x10 fail REGR. vs. 14773

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14773

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  b13c5ee3c109
baseline version:
 xen                  516dbd9deb4f

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23432:b13c5ee3c109
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Dec 18 12:53:15 2012 +0000
    
    Added signature for changeset 12c4c4c0a715
    
    
changeset:   23431:0fa518f50193
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 18 13:36:34 2012 +0100
    
    Added tag RELEASE-4.1.4 for changeset 12c4c4c0a715
    
    
changeset:   23430:12c4c4c0a715
tag:         RELEASE-4.1.4
user:        Jan Beulich
date:        Tue Dec 18 13:35:59 2012 +0100
    
    update Xen version to 4.1.4
    
    
changeset:   23429:516dbd9deb4f
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Mon Dec 17 11:54:52 2012 +0000
    
    libxl: revert 23428:93e17b0cd035 "avoid blktap2 deadlock"
    
    This results in additional leakage in xenstore according to the
    automated tests.
    
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  b13c5ee3c109
baseline version:
 xen                  516dbd9deb4f

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23432:b13c5ee3c109
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Dec 18 12:53:15 2012 +0000
    
    Added signature for changeset 12c4c4c0a715
    
    
changeset:   23431:0fa518f50193
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Dec 18 13:36:34 2012 +0100
    
    Added tag RELEASE-4.1.4 for changeset 12c4c4c0a715
    
    
changeset:   23430:12c4c4c0a715
tag:         RELEASE-4.1.4
user:        Jan Beulich
date:        Tue Dec 18 13:35:59 2012 +0100
    
    update Xen version to 4.1.4
    
    
changeset:   23429:516dbd9deb4f
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Mon Dec 17 11:54:52 2012 +0000
    
    libxl: revert 23428:93e17b0cd035 "avoid blktap2 deadlock"
    
    This results in additional leakage in xenstore according to the
    automated tests.
    
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 04:11:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 04:11:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlAzW-0003zi-6p; Wed, 19 Dec 2012 04:10:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1TlAzU-0003zb-0q
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 04:10:44 +0000
Received: from [85.158.139.211:10857] by server-16.bemta-5.messagelabs.com id
	0C/F3-09208-34E31D05; Wed, 19 Dec 2012 04:10:43 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355890241!20239354!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 525 invoked from network); 19 Dec 2012 04:10:42 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 04:10:42 -0000
Received: by mail-vb0-f43.google.com with SMTP id fs19so1823861vbb.2
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 20:09:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type:x-gm-message-state;
	bh=gKbz5WZNVFiKAdyfEuNgbYV47cvY8LqckVMAXC21JLo=;
	b=iYmgS8vKECbfFoPCv+YxkatCndpPNT8tTe1GniGEctJl3HIXkUu6EggD4bKcuOVkuK
	PDIP4NvCVV7w4xau4K+Giu8T7Z6JfYHSJdtg0gIC7n9QJyMYuALGuk9vVS9Mb/Naqon6
	0d1teIVbq6GzXI6DWV2Bi0nJmBmQLVuFebBrpYZOHdhZPsDOR95X4SHS/qWKLnXyv0pv
	O5z3uecCMEwGSdfcmg05UU5Qf2F6bFb07c2L/DTiW2SnzwL6HPWOirfV2e1YApYCGLPL
	37kyiO2ZST96kxcpxe0RXkiGpHES34Xh9jhlftCQH+FNgoKforSRDkEtQJEJKkNyAL30
	Gk5g==
X-Received: by 10.220.108.142 with SMTP id f14mr6949393vcp.9.1355890181869;
	Tue, 18 Dec 2012 20:09:41 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id yf5sm3112657veb.13.2012.12.18.20.09.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 20:09:41 -0800 (PST)
Date: Tue, 18 Dec 2012 23:09:39 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355853196-23676-7-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212182308460.1263@xanadu.home>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
	<1355853196-23676-7-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQmUHBaL34rzhga13/S7nj22ZrfJ5VoATGjySm3dl1pENPKXs6UdXST4Wd/Pvje14qBBaOVo
Cc: Dave Martin <dave.martin@linaro.org>, Arnd Bergmann <arnd@arndb.de>,
	Marc.Zyngier@arm.com, devicetree-discuss@lists.ozlabs.org,
	xen-devel@lists.xen.org, robherring2@gmail.com,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 6/6] ARM: mach-virt: add SMP support
	using PSCI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:

> This patch adds support for SMP to mach-virt using the PSCI
> infrastructure.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Reviewed-by: Nicolas Pitre <nico@linaro.org>


> ---
>  arch/arm/mach-virt/Kconfig   |  1 +
>  arch/arm/mach-virt/Makefile  |  1 +
>  arch/arm/mach-virt/platsmp.c | 58 ++++++++++++++++++++++++++++++++++++++++++++
>  arch/arm/mach-virt/virt.c    |  4 +++
>  4 files changed, 64 insertions(+)
>  create mode 100644 arch/arm/mach-virt/platsmp.c
> 
> diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
> index a568a2a..8958f0d 100644
> --- a/arch/arm/mach-virt/Kconfig
> +++ b/arch/arm/mach-virt/Kconfig
> @@ -3,6 +3,7 @@ config ARCH_VIRT
>  	select ARCH_WANT_OPTIONAL_GPIOLIB
>  	select ARM_GIC
>  	select ARM_ARCH_TIMER
> +	select ARM_PSCI
>  	select HAVE_SMP
>  	select CPU_V7
>  	select SPARSE_IRQ
> diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
> index 7ddbfa6..042afc1 100644
> --- a/arch/arm/mach-virt/Makefile
> +++ b/arch/arm/mach-virt/Makefile
> @@ -3,3 +3,4 @@
>  #
>  
>  obj-y					:= virt.o
> +obj-$(CONFIG_SMP)			+= platsmp.o
> diff --git a/arch/arm/mach-virt/platsmp.c b/arch/arm/mach-virt/platsmp.c
> new file mode 100644
> index 0000000..e358beb
> --- /dev/null
> +++ b/arch/arm/mach-virt/platsmp.c
> @@ -0,0 +1,58 @@
> +/*
> + * Dummy Virtual Machine - does what it says on the tin.
> + *
> + * Copyright (C) 2012 ARM Ltd
> + * Author: Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/smp.h>
> +#include <linux/of.h>
> +
> +#include <asm/psci.h>
> +#include <asm/smp_plat.h>
> +#include <asm/hardware/gic.h>
> +
> +extern void secondary_startup(void);
> +
> +static void __init virt_smp_init_cpus(void)
> +{
> +	set_smp_cross_call(gic_raise_softirq);
> +}
> +
> +static void __init virt_smp_prepare_cpus(unsigned int max_cpus)
> +{
> +}
> +
> +static int __cpuinit virt_boot_secondary(unsigned int cpu,
> +					 struct task_struct *idle)
> +{
> +	if (psci_ops.cpu_on)
> +		return psci_ops.cpu_on(cpu_logical_map(cpu),
> +				       __pa(secondary_startup));
> +	return -ENODEV;
> +}
> +
> +static void __cpuinit virt_secondary_init(unsigned int cpu)
> +{
> +	gic_secondary_init(0);
> +}
> +
> +struct smp_operations __initdata virt_smp_ops = {
> +	.smp_init_cpus		= virt_smp_init_cpus,
> +	.smp_prepare_cpus	= virt_smp_prepare_cpus,
> +	.smp_secondary_init	= virt_secondary_init,
> +	.smp_boot_secondary	= virt_boot_secondary,
> +};
> diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
> index 174b9da..1d0a85a 100644
> --- a/arch/arm/mach-virt/virt.c
> +++ b/arch/arm/mach-virt/virt.c
> @@ -20,6 +20,7 @@
>  
>  #include <linux/of_irq.h>
>  #include <linux/of_platform.h>
> +#include <linux/smp.h>
>  
>  #include <asm/arch_timer.h>
>  #include <asm/hardware/gic.h>
> @@ -56,10 +57,13 @@ static struct sys_timer virt_timer = {
>  	.init = virt_timer_init,
>  };
>  
> +extern struct smp_operations virt_smp_ops;
> +
>  DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
>  	.init_irq	= gic_init_irq,
>  	.handle_irq     = gic_handle_irq,
>  	.timer		= &virt_timer,
>  	.init_machine	= virt_init,
> +	.smp		= smp_ops(virt_smp_ops),
>  	.dt_compat	= virt_dt_match,
>  MACHINE_END
> -- 
> 1.8.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 04:12:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 04:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlB0S-00044i-Mh; Wed, 19 Dec 2012 04:11:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1TlB0R-00044b-Tp
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 04:11:44 +0000
Received: from [85.158.143.99:28977] by server-3.bemta-4.messagelabs.com id
	F0/8C-18211-F7E31D05; Wed, 19 Dec 2012 04:11:43 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355890301!29957531!1
X-Originating-IP: [209.85.212.44]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 557 invoked from network); 19 Dec 2012 04:11:42 -0000
Received: from mail-vb0-f44.google.com (HELO mail-vb0-f44.google.com)
	(209.85.212.44)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 04:11:42 -0000
Received: by mail-vb0-f44.google.com with SMTP id fc26so1825115vbb.31
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 20:11:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type:x-gm-message-state;
	bh=aM2jnKrf3VA5+6nAV8nHzha7yLUBY1FneWKFvQD4HVA=;
	b=fxNTAmBWRvcjL5LpsQmIiT/3rhaU42HmPz9d5yhC0C9aRbzgizmjv4sJ/qY0eb6GON
	f7LRVznja2XpQNALdFnbCqjVHPezd8rZ4dPFkL8xsLSWKvGIzJ5xGHs1Xh05l/oWcIJV
	nrgMjwf8izIxX4ellokTfzN3tuCfkGjQUrtpwt0kgEqVTzEnffMq63zZp/FACQYUcB6Q
	hic3ARAo5u8SNukhItWtYbedvNY4ZdWkzB/AccupLkZBw/2Ww7U/W3eqToI7edJ39jPe
	JOeHWZyC2fDkvzfLQl1mbovGBp+/tZcZZaTdY6IaZMaPnMzdmSbYhL4EfRknGP2B52GJ
	b5fw==
X-Received: by 10.52.89.106 with SMTP id bn10mr5990027vdb.68.1355890301207;
	Tue, 18 Dec 2012 20:11:41 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id ey7sm3150653ved.0.2012.12.18.20.11.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 20:11:40 -0800 (PST)
Date: Tue, 18 Dec 2012 23:11:38 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355853196-23676-4-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212182311170.1263@xanadu.home>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
	<1355853196-23676-4-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQkJLehX3Zc/xIYCF2FQ6hnL4SzER/WsQvGugly3IkseEzRNzEKefbb7oBS8o+Twl4x1Zk5E
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc.Zyngier@arm.com,
	devicetree-discuss@lists.ozlabs.org, xen-devel@lists.xen.org,
	robherring2@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 3/6] ARM: psci: add devicetree binding
 for describing PSCI firmware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:

> This patch adds a new devicetree binding for describing PSCI firmware
> to Linux.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Acked-by: Nicolas Pitre <nico@linaro.org>


> ---
>  Documentation/devicetree/bindings/arm/psci.txt | 55 ++++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/arm/psci.txt
> 
> diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
> new file mode 100644
> index 0000000..433afe9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/psci.txt
> @@ -0,0 +1,55 @@
> +* Power State Coordination Interface (PSCI)
> +
> +Firmware implementing the PSCI functions described in ARM document number
> +ARM DEN 0022A ("Power State Coordination Interface System Software on ARM
> +processors") can be used by Linux to initiate various CPU-centric power
> +operations.
> +
> +Issue A of the specification describes functions for CPU suspend, hotplug
> +and migration of secure software.
> +
> +Functions are invoked by trapping to the privilege level of the PSCI
> +firmware (specified as part of the binding below) and passing arguments
> +in a manner similar to that specified by AAPCS:
> +
> +	 r0		=> 32-bit Function ID / return value
> +	{r1 - r3}	=> Parameters
> +
> +Note that the immediate field of the trapping instruction must be set
> +to #0.
> +
> +
> +Main node required properties:
> +
> + - compatible    : Must be "arm,psci"
> +
> + - method        : The method of calling the PSCI firmware. Permitted
> +                   values are:
> +
> +                   "smc" : SMC #0, with the register assignments specified
> +		           in this binding.
> +
> +                   "hvc" : HVC #0, with the register assignments specified
> +		           in this binding.
> +
> +Main node optional properties:
> +
> + - cpu_suspend   : Function ID for CPU_SUSPEND operation
> +
> + - cpu_off       : Function ID for CPU_OFF operation
> +
> + - cpu_on        : Function ID for CPU_ON operation
> +
> + - migrate       : Function ID for MIGRATE operation
> +
> +
> +Example:
> +
> +	psci {
> +		compatible	= "arm,psci";
> +		method		= "smc";
> +		cpu_suspend	= <0x95c10000>;
> +		cpu_off		= <0x95c10001>;
> +		cpu_on		= <0x95c10002>;
> +		migrate		= <0x95c10003>;
> +	};
> -- 
> 1.8.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
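The binding quoted in the message above specifies that a PSCI call traps to the firmware with a 32-bit function ID in r0 and up to three parameters in r1-r3. A minimal sketch of that register marshalling, as a plain-Python model rather than a real SMC/HVC trap, using the (example-only) function IDs from the quoted devicetree fragment:

```python
# Illustrative model of the PSCI register convention described in the
# binding: r0 carries the 32-bit function ID, r1-r3 carry parameters.
# The function IDs below are the hypothetical ones from the quoted
# devicetree example, not IDs defined by the specification itself.
PSCI_FN = {
    "cpu_suspend": 0x95C10000,
    "cpu_off":     0x95C10001,
    "cpu_on":      0x95C10002,
    "migrate":     0x95C10003,
}

def psci_call(fn_name, *params):
    """Build the register file that an 'smc #0' / 'hvc #0' trap would carry."""
    regs = {"r0": PSCI_FN[fn_name], "r1": 0, "r2": 0, "r3": 0}
    for reg, value in zip(("r1", "r2", "r3"), params):
        regs[reg] = value
    return regs

# CPU_ON takes the target CPU identifier and the secondary entry point,
# mirroring psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_startup))
# in the platsmp.c patch earlier in this thread.
regs = psci_call("cpu_on", 0x1, 0x80008000)
print(hex(regs["r0"]), hex(regs["r1"]), hex(regs["r2"]))
# → 0x95c10002 0x1 0x80008000
```

In the real kernel code the equivalent marshalling is done in assembly before the trapping instruction, whose immediate field must be #0 per the binding.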

	Tue, 18 Dec 2012 20:11:41 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id ey7sm3150653ved.0.2012.12.18.20.11.39
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 20:11:40 -0800 (PST)
Date: Tue, 18 Dec 2012 23:11:38 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355853196-23676-4-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212182311170.1263@xanadu.home>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
	<1355853196-23676-4-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQkJLehX3Zc/xIYCF2FQ6hnL4SzER/WsQvGugly3IkseEzRNzEKefbb7oBS8o+Twl4x1Zk5E
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc.Zyngier@arm.com,
	devicetree-discuss@lists.ozlabs.org, xen-devel@lists.xen.org,
	robherring2@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 3/6] ARM: psci: add devicetree binding
 for describing PSCI firmware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:

> This patch adds a new devicetree binding for describing PSCI firmware
> to Linux.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Acked-by: Nicolas Pitre <nico@linaro.org>


> ---
>  Documentation/devicetree/bindings/arm/psci.txt | 55 ++++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/arm/psci.txt
> 
> diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
> new file mode 100644
> index 0000000..433afe9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/psci.txt
> @@ -0,0 +1,55 @@
> +* Power State Coordination Interface (PSCI)
> +
> +Firmware implementing the PSCI functions described in ARM document number
> +ARM DEN 0022A ("Power State Coordination Interface System Software on ARM
> +processors") can be used by Linux to initiate various CPU-centric power
> +operations.
> +
> +Issue A of the specification describes functions for CPU suspend, hotplug
> +and migration of secure software.
> +
> +Functions are invoked by trapping to the privilege level of the PSCI
> +firmware (specified as part of the binding below) and passing arguments
> +in a manner similar to that specified by AAPCS:
> +
> +	 r0		=> 32-bit Function ID / return value
> +	{r1 - r3}	=> Parameters
> +
> +Note that the immediate field of the trapping instruction must be set
> +to #0.
> +
> +
> +Main node required properties:
> +
> + - compatible    : Must be "arm,psci"
> +
> + - method        : The method of calling the PSCI firmware. Permitted
> +                   values are:
> +
> +                   "smc" : SMC #0, with the register assignments specified
> +		           in this binding.
> +
> +                   "hvc" : HVC #0, with the register assignments specified
> +		           in this binding.
> +
> +Main node optional properties:
> +
> + - cpu_suspend   : Function ID for CPU_SUSPEND operation
> +
> + - cpu_off       : Function ID for CPU_OFF operation
> +
> + - cpu_on        : Function ID for CPU_ON operation
> +
> + - migrate       : Function ID for MIGRATE operation
> +
> +
> +Example:
> +
> +	psci {
> +		compatible	= "arm,psci";
> +		method		= "smc";
> +		cpu_suspend	= <0x95c10000>;
> +		cpu_off		= <0x95c10001>;
> +		cpu_on		= <0x95c10002>;
> +		migrate		= <0x95c10003>;
> +	};
> -- 
> 1.8.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 04:16:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 04:16:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlB4o-0004HC-DV; Wed, 19 Dec 2012 04:16:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1TlB4m-0004H7-G2
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 04:16:12 +0000
Received: from [85.158.137.99:38238] by server-13.bemta-3.messagelabs.com id
	F6/8E-00465-B8F31D05; Wed, 19 Dec 2012 04:16:11 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-11.tower-217.messagelabs.com!1355890568!16894085!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13930 invoked from network); 19 Dec 2012 04:16:09 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 04:16:09 -0000
Received: by mail-vb0-f45.google.com with SMTP id p1so1848588vbi.32
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 20:16:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type:x-gm-message-state;
	bh=S9A9WO/2fHeLATMc7JN24njWx454vJGq86cG2gh99Xo=;
	b=aQ/vGKvj7+R9s5M53tfVsxsycpOOoNY2Vj0bCVXJyXZP/radYBt79VXfBbuzLWp1hN
	bivmI4U+WQItCY2etsudXwiDnnH/XV+oynYnnrynVgTlYUdxMHAfJP1oYUSfSn8tr06B
	R68o2QPsRRDqqj7QyKYA58dj+67oilWjgZNe+7ruC7nVuNpSWIp+EjrHz1WAkyBdyWtN
	EGvlt+TN6tWr3EZMnl2gH9lmZ/L4peD4lx871k9izefpjcLmcgQlCBKsyt00Awmp0ZhF
	rMYFWlmBVXUca33sLpKOBRuiQ94amftzaG3RVM1jRp0sPs3DHWG/mNFTS4pBZqE9682l
	t4Mg==
X-Received: by 10.220.219.145 with SMTP id hu17mr6703202vcb.52.1355890568438; 
	Tue, 18 Dec 2012 20:16:08 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id ey7sm3159618ved.0.2012.12.18.20.16.07
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 20:16:07 -0800 (PST)
Date: Tue, 18 Dec 2012 23:16:06 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355853196-23676-5-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212182315440.1263@xanadu.home>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
	<1355853196-23676-5-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQmyVQmDiVZ671NJ6fYDiQolMvkQ9xJrtP1t8G14Sbk7pb/6zW7vUVC6lLvel27N17BlmzR/
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc.Zyngier@arm.com,
	devicetree-discuss@lists.ozlabs.org, xen-devel@lists.xen.org,
	robherring2@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 4/6] ARM: psci: add support for PSCI
 invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:

> This patch adds support for the Power State Coordination Interface
> defined by ARM, allowing Linux to request CPU-centric power-management
> operations from firmware implementing the PSCI protocol.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Acked-by: Nicolas Pitre <nico@linaro.org>


> ---
>  arch/arm/Kconfig            |  10 +++
>  arch/arm/include/asm/psci.h |  36 ++++++++
>  arch/arm/kernel/Makefile    |   1 +
>  arch/arm/kernel/psci.c      | 211 ++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 258 insertions(+)
>  create mode 100644 arch/arm/include/asm/psci.h
>  create mode 100644 arch/arm/kernel/psci.c
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 8c83d98..80d54b8 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1617,6 +1617,16 @@ config HOTPLUG_CPU
>  	  Say Y here to experiment with turning CPUs off and on.  CPUs
>  	  can be controlled through /sys/devices/system/cpu.
>  
> +config ARM_PSCI
> +	bool "Support for the ARM Power State Coordination Interface (PSCI)"
> +	depends on CPU_V7
> +	help
> +	  Say Y here if you want Linux to communicate with system firmware
> +	  implementing the PSCI specification for CPU-centric power
> +	  management operations described in ARM document number ARM DEN
> +	  0022A ("Power State Coordination Interface System Software on
> +	  ARM processors").
> +
>  config LOCAL_TIMERS
>  	bool "Use local timer interrupts"
>  	depends on SMP
> diff --git a/arch/arm/include/asm/psci.h b/arch/arm/include/asm/psci.h
> new file mode 100644
> index 0000000..ce0dbe7
> --- /dev/null
> +++ b/arch/arm/include/asm/psci.h
> @@ -0,0 +1,36 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2012 ARM Limited
> + */
> +
> +#ifndef __ASM_ARM_PSCI_H
> +#define __ASM_ARM_PSCI_H
> +
> +#define PSCI_POWER_STATE_TYPE_STANDBY		0
> +#define PSCI_POWER_STATE_TYPE_POWER_DOWN	1
> +
> +struct psci_power_state {
> +	u16	id;
> +	u8	type;
> +	u8	affinity_level;
> +};
> +
> +struct psci_operations {
> +	int (*cpu_suspend)(struct psci_power_state state,
> +			   unsigned long entry_point);
> +	int (*cpu_off)(struct psci_power_state state);
> +	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
> +	int (*migrate)(unsigned long cpuid);
> +};
> +
> +extern struct psci_operations psci_ops;
> +
> +#endif /* __ASM_ARM_PSCI_H */
> diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
> index 5bbec7b..5f3338e 100644
> --- a/arch/arm/kernel/Makefile
> +++ b/arch/arm/kernel/Makefile
> @@ -82,5 +82,6 @@ obj-$(CONFIG_DEBUG_LL)	+= debug.o
>  obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
>  
>  obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
> +obj-$(CONFIG_ARM_PSCI)		+= psci.o
>  
>  extra-y := $(head-y) vmlinux.lds
> diff --git a/arch/arm/kernel/psci.c b/arch/arm/kernel/psci.c
> new file mode 100644
> index 0000000..3653164
> --- /dev/null
> +++ b/arch/arm/kernel/psci.c
> @@ -0,0 +1,211 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2012 ARM Limited
> + *
> + * Author: Will Deacon <will.deacon@arm.com>
> + */
> +
> +#define pr_fmt(fmt) "psci: " fmt
> +
> +#include <linux/init.h>
> +#include <linux/of.h>
> +
> +#include <asm/compiler.h>
> +#include <asm/errno.h>
> +#include <asm/opcodes-sec.h>
> +#include <asm/opcodes-virt.h>
> +#include <asm/psci.h>
> +
> +struct psci_operations psci_ops;
> +
> +static int (*invoke_psci_fn)(u32, u32, u32, u32);
> +
> +enum psci_function {
> +	PSCI_FN_CPU_SUSPEND,
> +	PSCI_FN_CPU_ON,
> +	PSCI_FN_CPU_OFF,
> +	PSCI_FN_MIGRATE,
> +	PSCI_FN_MAX,
> +};
> +
> +static u32 psci_function_id[PSCI_FN_MAX];
> +
> +#define PSCI_RET_SUCCESS		0
> +#define PSCI_RET_EOPNOTSUPP		-1
> +#define PSCI_RET_EINVAL			-2
> +#define PSCI_RET_EPERM			-3
> +
> +static int psci_to_linux_errno(int errno)
> +{
> +	switch (errno) {
> +	case PSCI_RET_SUCCESS:
> +		return 0;
> +	case PSCI_RET_EOPNOTSUPP:
> +		return -EOPNOTSUPP;
> +	case PSCI_RET_EINVAL:
> +		return -EINVAL;
> +	case PSCI_RET_EPERM:
> +		return -EPERM;
> +	};
> +
> +	return -EINVAL;
> +}
> +
> +#define PSCI_POWER_STATE_ID_MASK	0xffff
> +#define PSCI_POWER_STATE_ID_SHIFT	0
> +#define PSCI_POWER_STATE_TYPE_MASK	0x1
> +#define PSCI_POWER_STATE_TYPE_SHIFT	16
> +#define PSCI_POWER_STATE_AFFL_MASK	0x3
> +#define PSCI_POWER_STATE_AFFL_SHIFT	24
> +
> +static u32 psci_power_state_pack(struct psci_power_state state)
> +{
> +	return	((state.id & PSCI_POWER_STATE_ID_MASK)
> +			<< PSCI_POWER_STATE_ID_SHIFT)	|
> +		((state.type & PSCI_POWER_STATE_TYPE_MASK)
> +			<< PSCI_POWER_STATE_TYPE_SHIFT)	|
> +		((state.affinity_level & PSCI_POWER_STATE_AFFL_MASK)
> +			<< PSCI_POWER_STATE_AFFL_SHIFT);
> +}
> +
> +/*
> + * The following two functions are invoked via the invoke_psci_fn pointer
> + * and will not be inlined, allowing us to piggyback on the AAPCS.
> + */
> +static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
> +					 u32 arg2)
> +{
> +	asm volatile(
> +			__asmeq("%0", "r0")
> +			__asmeq("%1", "r1")
> +			__asmeq("%2", "r2")
> +			__asmeq("%3", "r3")
> +			__HVC(0)
> +		: "+r" (function_id)
> +		: "r" (arg0), "r" (arg1), "r" (arg2));
> +
> +	return function_id;
> +}
> +
> +static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
> +					 u32 arg2)
> +{
> +	asm volatile(
> +			__asmeq("%0", "r0")
> +			__asmeq("%1", "r1")
> +			__asmeq("%2", "r2")
> +			__asmeq("%3", "r3")
> +			__SMC(0)
> +		: "+r" (function_id)
> +		: "r" (arg0), "r" (arg1), "r" (arg2));
> +
> +	return function_id;
> +}
> +
> +static int psci_cpu_suspend(struct psci_power_state state,
> +			    unsigned long entry_point)
> +{
> +	int err;
> +	u32 fn, power_state;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> +	power_state = psci_power_state_pack(state);
> +	err = invoke_psci_fn(fn, power_state, entry_point, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_cpu_off(struct psci_power_state state)
> +{
> +	int err;
> +	u32 fn, power_state;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_OFF];
> +	power_state = psci_power_state_pack(state);
> +	err = invoke_psci_fn(fn, power_state, 0, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
> +{
> +	int err;
> +	u32 fn;
> +
> +	fn = psci_function_id[PSCI_FN_CPU_ON];
> +	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static int psci_migrate(unsigned long cpuid)
> +{
> +	int err;
> +	u32 fn;
> +
> +	fn = psci_function_id[PSCI_FN_MIGRATE];
> +	err = invoke_psci_fn(fn, cpuid, 0, 0);
> +	return psci_to_linux_errno(err);
> +}
> +
> +static const struct of_device_id psci_of_match[] __initconst = {
> +	{ .compatible = "arm,psci",	},
> +	{},
> +};
> +
> +static int __init psci_init(void)
> +{
> +	struct device_node *np;
> +	const char *method;
> +	u32 id;
> +
> +	np = of_find_matching_node(NULL, psci_of_match);
> +	if (!np)
> +		return 0;
> +
> +	pr_info("probing function IDs from device-tree\n");
> +
> +	if (of_property_read_string(np, "method", &method)) {
> +		pr_warning("missing \"method\" property\n");
> +		goto out_put_node;
> +	}
> +
> +	if (!strcmp("hvc", method)) {
> +		invoke_psci_fn = __invoke_psci_fn_hvc;
> +	} else if (!strcmp("smc", method)) {
> +		invoke_psci_fn = __invoke_psci_fn_smc;
> +	} else {
> +		pr_warning("invalid \"method\" property: %s\n", method);
> +		goto out_put_node;
> +	}
> +
> +	if (!of_property_read_u32(np, "cpu_suspend", &id)) {
> +		psci_function_id[PSCI_FN_CPU_SUSPEND] = id;
> +		psci_ops.cpu_suspend = psci_cpu_suspend;
> +	}
> +
> +	if (!of_property_read_u32(np, "cpu_off", &id)) {
> +		psci_function_id[PSCI_FN_CPU_OFF] = id;
> +		psci_ops.cpu_off = psci_cpu_off;
> +	}
> +
> +	if (!of_property_read_u32(np, "cpu_on", &id)) {
> +		psci_function_id[PSCI_FN_CPU_ON] = id;
> +		psci_ops.cpu_on = psci_cpu_on;
> +	}
> +
> +	if (!of_property_read_u32(np, "migrate", &id)) {
> +		psci_function_id[PSCI_FN_MIGRATE] = id;
> +		psci_ops.migrate = psci_migrate;
> +	}
> +
> +out_put_node:
> +	of_node_put(np);
> +	return 0;
> +}
> +early_initcall(psci_init);
> -- 
> 1.8.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 04:18:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 04:18:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlB6D-0004Mw-TX; Wed, 19 Dec 2012 04:17:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1TlB6B-0004Mm-Hv
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 04:17:39 +0000
Received: from [85.158.139.83:63722] by server-8.bemta-5.messagelabs.com id
	97/92-15003-2EF31D05; Wed, 19 Dec 2012 04:17:38 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355890656!30500086!1
X-Originating-IP: [209.85.220.181]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2272 invoked from network); 19 Dec 2012 04:17:37 -0000
Received: from mail-vc0-f181.google.com (HELO mail-vc0-f181.google.com)
	(209.85.220.181)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 04:17:37 -0000
Received: by mail-vc0-f181.google.com with SMTP id gb30so1797225vcb.26
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 20:17:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type:x-gm-message-state;
	bh=N09BPL3uTuFSI6bu/pA/wGgoqftcwJrADH1xy5ByxsE=;
	b=bp61dqq2BoX0H0+MBZfAHsygm9YEWlIu+j+Ge/qIzUrPqDRcPjdwQsN8kadoJlFEwX
	HRmALQCmltfCdqbT/7RSSGoezep12MBBqWHzrRZBrFWgXWC2fgqFUpS0T6jAKDBRqkQE
	fw0ywM+CeKXKACZblBAhMMFtfM1nGBY06Mj5nj/nf5qUNfOfXAcKCYjzJTSZ8RdMC1f/
	MYGjXAWh8iVxt5QdH3i/vonTXtPHuYDiSpbfsQYZViOdbpYT9Hpd2K4Bi2VQlt+qp99e
	eRl4SCWFI97Lk8qTIGz9jrMbD0DMtiHPkdlrPPB8/I1lBYeS1YBev+wN9DFVkrRo9PCy
	TQJw==
X-Received: by 10.58.127.195 with SMTP id ni3mr7072529veb.30.1355890656210;
	Tue, 18 Dec 2012 20:17:36 -0800 (PST)
Received: from xanadu.home (modemcable203.213-202-24.mc.videotron.ca.
	[24.202.213.203])
	by mx.google.com with ESMTPS id dl18sm3342588vdb.2.2012.12.18.20.17.34
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 18 Dec 2012 20:17:35 -0800 (PST)
Date: Tue, 18 Dec 2012 23:17:34 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <1355853196-23676-6-git-send-email-will.deacon@arm.com>
Message-ID: <alpine.LFD.2.02.1212182316500.1263@xanadu.home>
References: <1355853196-23676-1-git-send-email-will.deacon@arm.com>
	<1355853196-23676-6-git-send-email-will.deacon@arm.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQmmiXkhSi0DiswhbwVvTO9HIg/pFpWbiJ6ICNy4Yg2tU7XW5cJP/z+XBL7CNQ15jfJbVD4Z
Cc: dave.martin@linaro.org, arnd@arndb.de, Marc Zyngier <marc.zyngier@arm.com>,
	devicetree-discuss@lists.ozlabs.org, xen-devel@lists.xen.org,
	robherring2@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 5/6] ARM: Dummy Virtual Machine platform
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Will Deacon wrote:

> From: Marc Zyngier <marc.zyngier@arm.com>
> 
> Add support for the smallest, dumbest possible platform, to be
> used as a guest for KVM or other hypervisors.
> 
> It only mandates a GIC and architected timers. Fits nicely with
> a multiplatform zImage. Uses very little silicon area.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Acked-by: Nicolas Pitre <nico@linaro.org>


> ---
>  arch/arm/Kconfig            |  2 ++
>  arch/arm/Makefile           |  1 +
>  arch/arm/mach-virt/Kconfig  |  9 +++++++
>  arch/arm/mach-virt/Makefile |  5 ++++
>  arch/arm/mach-virt/virt.c   | 65 +++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 82 insertions(+)
>  create mode 100644 arch/arm/mach-virt/Kconfig
>  create mode 100644 arch/arm/mach-virt/Makefile
>  create mode 100644 arch/arm/mach-virt/virt.c
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 80d54b8..3443d89 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1130,6 +1130,8 @@ source "arch/arm/mach-versatile/Kconfig"
>  source "arch/arm/mach-vexpress/Kconfig"
>  source "arch/arm/plat-versatile/Kconfig"
>  
> +source "arch/arm/mach-virt/Kconfig"
> +
>  source "arch/arm/mach-vt8500/Kconfig"
>  
>  source "arch/arm/mach-w90x900/Kconfig"
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 30c443c..ea4f481 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -194,6 +194,7 @@ machine-$(CONFIG_ARCH_SOCFPGA)		+= socfpga
>  machine-$(CONFIG_ARCH_SPEAR13XX)	+= spear13xx
>  machine-$(CONFIG_ARCH_SPEAR3XX)		+= spear3xx
>  machine-$(CONFIG_MACH_SPEAR600)		+= spear6xx
> +machine-$(CONFIG_ARCH_VIRT)		+= virt
>  machine-$(CONFIG_ARCH_ZYNQ)		+= zynq
>  machine-$(CONFIG_ARCH_SUNXI)		+= sunxi
>  
> diff --git a/arch/arm/mach-virt/Kconfig b/arch/arm/mach-virt/Kconfig
> new file mode 100644
> index 0000000..a568a2a
> --- /dev/null
> +++ b/arch/arm/mach-virt/Kconfig
> @@ -0,0 +1,9 @@
> +config ARCH_VIRT
> +	bool "Dummy Virtual Machine" if ARCH_MULTI_V7
> +	select ARCH_WANT_OPTIONAL_GPIOLIB
> +	select ARM_GIC
> +	select ARM_ARCH_TIMER
> +	select HAVE_SMP
> +	select CPU_V7
> +	select SPARSE_IRQ
> +	select USE_OF
> diff --git a/arch/arm/mach-virt/Makefile b/arch/arm/mach-virt/Makefile
> new file mode 100644
> index 0000000..7ddbfa6
> --- /dev/null
> +++ b/arch/arm/mach-virt/Makefile
> @@ -0,0 +1,5 @@
> +#
> +# Makefile for the linux kernel.
> +#
> +
> +obj-y					:= virt.o
> diff --git a/arch/arm/mach-virt/virt.c b/arch/arm/mach-virt/virt.c
> new file mode 100644
> index 0000000..174b9da
> --- /dev/null
> +++ b/arch/arm/mach-virt/virt.c
> @@ -0,0 +1,65 @@
> +/*
> + * Dummy Virtual Machine - does what it says on the tin.
> + *
> + * Copyright (C) 2012 ARM Ltd
> + * Authors: Will Deacon <will.deacon@arm.com>,
> + *          Marc Zyngier <marc.zyngier@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +
> +#include <asm/arch_timer.h>
> +#include <asm/hardware/gic.h>
> +#include <asm/mach/arch.h>
> +#include <asm/mach/time.h>
> +
> +static const struct of_device_id irq_match[] = {
> +	{ .compatible = "arm,cortex-a15-gic", .data = gic_of_init, },
> +	{}
> +};
> +
> +static void __init gic_init_irq(void)
> +{
> +	of_irq_init(irq_match);
> +}
> +
> +static void __init virt_init(void)
> +{
> +	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
> +}
> +
> +static void __init virt_timer_init(void)
> +{
> +	WARN_ON(arch_timer_of_register() != 0);
> +	WARN_ON(arch_timer_sched_clock_init() != 0);
> +}
> +
> +static const char *virt_dt_match[] = {
> +	"linux,dummy-virt",
> +	NULL
> +};
> +
> +static struct sys_timer virt_timer = {
> +	.init = virt_timer_init,
> +};
> +
> +DT_MACHINE_START(VIRT, "Dummy Virtual Machine")
> +	.init_irq	= gic_init_irq,
> +	.handle_irq     = gic_handle_irq,
> +	.timer		= &virt_timer,
> +	.init_machine	= virt_init,
> +	.dt_compat	= virt_dt_match,
> +MACHINE_END
> -- 
> 1.8.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 05:13:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 05:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlBxa-0005Bj-SV; Wed, 19 Dec 2012 05:12:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlBxY-0005Be-Vp
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 05:12:49 +0000
Received: from [85.158.137.99:16250] by server-4.bemta-3.messagelabs.com id
	09/53-31835-FCC41D05; Wed, 19 Dec 2012 05:12:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355893967!18291375!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMTY5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6669 invoked from network); 19 Dec 2012 05:12:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 05:12:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="241857"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 05:12:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 05:12:46 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlBxW-000412-Gq;
	Wed, 19 Dec 2012 05:12:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlBxW-0004W5-2o;
	Wed, 19 Dec 2012 05:12:46 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14782-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 05:12:46 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14782: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5839080161234684067=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5839080161234684067==
Content-Type: text/plain

flight 14782 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14782/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install   fail REGR. vs. 14780

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14780
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14780

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  b04de677de31
baseline version:
 xen                  d5c0389bf26c

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dario Faggioli <dario.faggioli@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 359 lines long.)


--===============5839080161234684067==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5839080161234684067==--

From xen-devel-bounces@lists.xen.org Wed Dec 19 05:13:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 05:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlBxa-0005Bj-SV; Wed, 19 Dec 2012 05:12:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlBxY-0005Be-Vp
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 05:12:49 +0000
Received: from [85.158.137.99:16250] by server-4.bemta-3.messagelabs.com id
	09/53-31835-FCC41D05; Wed, 19 Dec 2012 05:12:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355893967!18291375!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMTY5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6669 invoked from network); 19 Dec 2012 05:12:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 05:12:47 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="241857"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 05:12:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 05:12:46 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlBxW-000412-Gq;
	Wed, 19 Dec 2012 05:12:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlBxW-0004W5-2o;
	Wed, 19 Dec 2012 05:12:46 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14782-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 05:12:46 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14782: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5839080161234684067=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5839080161234684067==
Content-Type: text/plain

flight 14782 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14782/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install   fail REGR. vs. 14780

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14780
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14780

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  b04de677de31
baseline version:
 xen                  d5c0389bf26c

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dario Faggioli <dario.faggioli@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 359 lines long.)


--===============5839080161234684067==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5839080161234684067==--

From xen-devel-bounces@lists.xen.org Wed Dec 19 06:21:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 06:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlD1S-0005oI-Js; Wed, 19 Dec 2012 06:20:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlD1R-0005nt-ER
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 06:20:53 +0000
Received: from [85.158.139.83:20662] by server-7.bemta-5.messagelabs.com id
	C9/E5-08009-4CC51D05; Wed, 19 Dec 2012 06:20:52 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355898049!29935871!1
X-Originating-IP: [209.85.210.180]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16139 invoked from network); 19 Dec 2012 06:20:51 -0000
Received: from mail-ia0-f180.google.com (HELO mail-ia0-f180.google.com)
	(209.85.210.180)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 06:20:51 -0000
Received: by mail-ia0-f180.google.com with SMTP id t4so1373909iag.25
	for <xen-devel@lists.xen.org>; Tue, 18 Dec 2012 22:20:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=lhcEYRxsVdZvHd7fGiRyAbLgVStB217MrLML/dscupg=;
	b=pWWSghItwGGobBBA4PVNRBx5hNfbchx85OBbsQHvECWoOZalpWl5Tb2TfQ+MrBSs2r
	NC+HmAHOGwP6NklQzb59siuOcoxbJYqLoR8ULV2KCP5fyaGoedmt1tAEFzb80UwpjsVm
	lowAVLPciXjAFr20I/IFyGvhis1WW773mErRcTJQD6bh10Aqlu7shqou+61j0ny6tJ/y
	629gSjy1/VCCR1RRbsn5b6RmTHZu0ivrUrwcOPkQEoC7CLjKpbGFB6KAJy3mygy2alf0
	mynuhOuLAv71aZ1QeyPWBATo/aAD8OKcDeRt3gLjgqcBmh+J2JdM7R2ivoWsnN9gGtnh
	aE8w==
MIME-Version: 1.0
Received: by 10.42.48.147 with SMTP id s19mr4969604icf.18.1355898049431; Tue,
	18 Dec 2012 22:20:49 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Tue, 18 Dec 2012 22:20:49 -0800 (PST)
In-Reply-To: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
Date: Wed, 19 Dec 2012 14:20:49 +0800
X-Google-Sender-Auth: BptumU4vTZVMUE8BtuZqeXYKVg0
Message-ID: <CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, 
	Jean Guyader <jean.guyader@eu.citrix.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding Jean, the author of the opregion patch.

Jean, I believe the warning is due to the offset within the page.
To accommodate the offset, you would need to reserve another page for it.
Will the extra page cause any unexpected problems?

The original thread is about an instability issue that freezes the host outright.
I believe the warning above should not have such an effect.
What do you think? Any suggestions?

Thanks,
Timothy

On Wed, Dec 19, 2012 at 1:28 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> Hi Stefano,
>
> I recently tried to play some 3D games on my Linux guest.
> The game starts without problem, but it freezes the entire system after
> some time (a minute or so?).
> By this I mean both the host and the domU become unresponsive.
> SSH freezes and I had to shut down the machine with the power button.
>
> I did not find anything obvious in the host log, but in the guest
> log I can find this:
>
> Dec 18 20:28:38 debvm kernel: [    0.899860] resource map sanity check
> conflict: 0xfeff5018 0xfeff7017 0xfeff7000 0xffffffff reserved
> Dec 18 20:28:38 debvm kernel: [    0.899862] ------------[ cut here
> ]------------
> Dec 18 20:28:38 debvm kernel: [    0.899869] WARNING: at
> arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2c4/0x33c()
> Dec 18 20:28:38 debvm kernel: [    0.899870] Hardware name: HVM domU
> Dec 18 20:28:38 debvm kernel: [    0.899872] Info: mapping multiple
> BARs. Your kernel is fine.
> Dec 18 20:28:38 debvm kernel: [    0.899873] Modules linked in:
> Dec 18 20:28:38 debvm kernel: [    0.899878] Pid: 1, comm: swapper/0
> Not tainted 3.6.9 #4
> Dec 18 20:28:38 debvm kernel: [    0.899892] Call Trace:
> Dec 18 20:28:38 debvm kernel: [    0.899896]  [<ffffffff8103d194>] ?
> warn_slowpath_common+0x76/0x8a
> Dec 18 20:28:38 debvm kernel: [    0.899898]  [<ffffffff8103d240>] ?
> warn_slowpath_fmt+0x45/0x4a
> Dec 18 20:28:38 debvm kernel: [    0.899900]  [<ffffffff81032a6c>] ?
> __ioremap_caller+0x2c4/0x33c
> Dec 18 20:28:38 debvm kernel: [    0.899902]  [<ffffffff812c3be3>] ?
> intel_opregion_setup+0x9c/0x201
> Dec 18 20:28:38 debvm kernel: [    0.899904]  [<ffffffff812bcb75>] ?
> intel_setup_gmbus+0x175/0x19d
> Dec 18 20:28:38 debvm kernel: [    0.899907]  [<ffffffff8128a37a>] ?
> i915_driver_load+0x548/0x90d
> Dec 18 20:28:38 debvm kernel: [    0.899910]  [<ffffffff812ff804>] ?
> setup_hpet_msi_remapped+0x20/0x20
> Dec 18 20:28:38 debvm kernel: [    0.899912]  [<ffffffff81272706>] ?
> drm_get_pci_dev+0x152/0x259
> Dec 18 20:28:38 debvm kernel: [    0.899915]  [<ffffffff813d4883>] ?
> _raw_spin_lock_irqsave+0x21/0x45
> Dec 18 20:28:38 debvm kernel: [    0.899918]  [<ffffffff811d9ecc>] ?
> local_pci_probe+0x5a/0xa0
> Dec 18 20:28:38 debvm kernel: [    0.899920]  [<ffffffff811d9fcf>] ?
> pci_device_probe+0xbd/0xe7
> Dec 18 20:28:38 debvm kernel: [    0.899922]  [<ffffffff812cd887>] ?
> driver_probe_device+0x1b0/0x1b0
> Dec 18 20:28:38 debvm kernel: [    0.899923]  [<ffffffff812cd887>] ?
> driver_probe_device+0x1b0/0x1b0
> Dec 18 20:28:38 debvm kernel: [    0.899925]  [<ffffffff812cd769>] ?
> driver_probe_device+0x92/0x1b0
> Dec 18 20:28:38 debvm kernel: [    0.899926]  [<ffffffff812cd8da>] ?
> __driver_attach+0x53/0x73
> Dec 18 20:28:38 debvm kernel: [    0.899928]  [<ffffffff812cc06f>] ?
> bus_for_each_dev+0x46/0x77
> Dec 18 20:28:38 debvm kernel: [    0.899930]  [<ffffffff812ccf8f>] ?
> bus_add_driver+0xd5/0x1f4
> Dec 18 20:28:38 debvm kernel: [    0.899931]  [<ffffffff812cde14>] ?
> driver_register+0x89/0x101
> Dec 18 20:28:38 debvm kernel: [    0.899933]  [<ffffffff811d9336>] ?
> __pci_register_driver+0x49/0xa3
> Dec 18 20:28:38 debvm kernel: [    0.899935]  [<ffffffff816d55c7>] ?
> ttm_init+0x63/0x63
> Dec 18 20:28:38 debvm kernel: [    0.899937]  [<ffffffff81002085>] ?
> do_one_initcall+0x75/0x12c
> Dec 18 20:28:38 debvm kernel: [    0.899940]  [<ffffffff816a6cc2>] ?
> kernel_init+0x13c/0x1c0
> Dec 18 20:28:38 debvm kernel: [    0.899941]  [<ffffffff816a6565>] ?
> do_early_param+0x83/0x83
> Dec 18 20:28:38 debvm kernel: [    0.899943]  [<ffffffff813d9f44>] ?
> kernel_thread_helper+0x4/0x10
> Dec 18 20:28:38 debvm kernel: [    0.899945]  [<ffffffff816a6b86>] ?
> start_kernel+0x3e1/0x3e1
> Dec 18 20:28:38 debvm kernel: [    0.899947]  [<ffffffff813d9f40>] ?
> gs_change+0x13/0x13
> Dec 18 20:28:38 debvm kernel: [    0.899950] ---[ end trace
> db461543ce599b44 ]---
>
> I'm not sure if this has anything to do with the freeze. It seems to
> show up on every boot since I upgraded to Xen 4.2.1-rc2, and both
> Debian kernels 3.2.32 and 3.6.9 produce the same log. But the
> whole-system freeze happens only during gaming, which is much less
> frequent, so I'm not sure the two are related. Anyway, could you
> comment on what this log means?
>
> I can find one of the mentioned addresses in the qemu-dm log:
> pt_pci_write_config: [00:02:0] address=00fc val=0xfeff5000 len=4
> igd_write_opregion: Map OpRegion: cd996018 -> feff5018
> igd_write_opregion: [00:02:0] addr=fc len=2 val=feff5000
>
> PS: I also run XBMC in a domU and it plays back video with HW
> acceleration (VAAPI) without any problem. XBMC is itself a
> graphics-intensive program, but it runs in a pure HVM guest, while
> the failing case is a PVHVM guest.
>
> PS2: I also hit another instability yesterday, while I was compiling
> a kernel inside the domU: the host rebooted suddenly. Since I was not
> using graphics at the time (the Xorg session was idle; I was
> connected through SSH), this may be a different issue.
>
> Thanks,
> Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZI-0006gN-N0; Wed, 19 Dec 2012 07:59:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZH-0006f0-Io
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:55 +0000
Received: from [85.158.139.211:57394] by server-1.bemta-5.messagelabs.com id
	87/95-12813-9F371D05; Wed, 19 Dec 2012 07:59:53 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355903992!21137572!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8290 invoked from network); 19 Dec 2012 07:59:52 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 07:59:52 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287159"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:50 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:23 +0800
Message-Id: <1355946267-24227-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 06/10] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the GUEST_PDPTR registers need to be synced on each
virtual vmentry.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index c100730..7c55c51 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -826,7 +826,14 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
     vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
+    {
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZB-0006dh-Qa; Wed, 19 Dec 2012 07:59:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZA-0006dT-1R
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:48 +0000
Received: from [85.158.139.83:64685] by server-11.bemta-5.messagelabs.com id
	7E/A1-31624-3F371D05; Wed, 19 Dec 2012 07:59:47 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355903985!23147720!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11426 invoked from network); 19 Dec 2012 07:59:46 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:46 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:44 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287112"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:43 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:18 +0800
Message-Id: <1355946267-24227-2-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 01/10] nestedhap: Change hostcr3 and p2m->cr3
	to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

VMX doesn't have the concept of a host CR3 for the nested p2m;
only SVM does, so rename it using neutral wording.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c             |    6 +++---
 xen/arch/x86/hvm/svm/svm.c         |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    2 +-
 xen/arch/x86/mm/hap/nested_hap.c   |   15 ++++++++-------
 xen/arch/x86/mm/mm-locks.h         |    2 +-
 xen/arch/x86/mm/p2m.c              |   26 +++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h      |    4 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +-
 xen/include/asm-x86/p2m.h          |   16 ++++++++--------
 10 files changed, 39 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 40c1ab2..1cae8a8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4536,10 +4536,10 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v)
     return -EOPNOTSUPP;
 }
 
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v)
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v)
 {
-    if (hvm_funcs.nhvm_vcpu_hostcr3)
-        return hvm_funcs.nhvm_vcpu_hostcr3(v);
+    if (hvm_funcs.nhvm_vcpu_p2m_base)
+        return hvm_funcs.nhvm_vcpu_p2m_base(v);
     return -EOPNOTSUPP;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 55a5ae5..2c8504a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2003,7 +2003,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vcpu_vmexit = nsvm_vcpu_vmexit_inject,
     .nhvm_vcpu_vmexit_trap = nsvm_vcpu_vmexit_trap,
     .nhvm_vcpu_guestcr3 = nsvm_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3 = nsvm_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base = nsvm_vcpu_hostcr3,
     .nhvm_vcpu_asid = nsvm_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index aee1f9e..98309da 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1504,7 +1504,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3    = nvmx_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b005816..6d1a736 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -94,7 +94,7 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
     return 0;
 }
 
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v)
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
     /* TODO */
     ASSERT(0);
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 317875d..f9a5edc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -48,9 +48,10 @@
  *    1. If #NPF is from L1 guest, then we crash the guest VM (same as old 
  *       code)
  *    2. If #NPF is from L2 guest, then we continue from (3)
- *    3. Get h_cr3 from L1 guest. Map h_cr3 into L0 hypervisor address space.
- *    4. Walk the h_cr3 page table
- *    5.    - if not present, then we inject #NPF back to L1 guest and 
+ *    3. Get np2m base from L1 guest. Map np2m base into L0 hypervisor address space.
+ *    4. Walk the np2m's page table
+ *    5.    - if not present or permission check failure, then we inject #NPF back to 
+ *    L1 guest and 
  *            re-launch L1 guest (L1 guest will either treat this #NPF as MMIO,
  *            or fix its p2m table for L2 guest)
  *    6.    - if present, then we will get the a new translated value L1-GPA 
@@ -89,7 +90,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
     if (old_flags & _PAGE_PRESENT)
         flush_tlb_mask(p2m->dirty_cpumask);
-    
+
     paging_unlock(d);
 }
 
@@ -110,7 +111,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     /* If this p2m table has been flushed or recycled under our feet, 
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
-    if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
+    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
     {
         unsigned long gfn, mask;
         mfn_t mfn;
@@ -186,7 +187,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
     
-    nested_cr3 = nhvm_vcpu_hostcr3(v);
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
 
     pfec = PFEC_user_mode | PFEC_page_present;
     if (access_w)
@@ -221,7 +222,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 3700e32..1817f81 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -249,7 +249,7 @@ declare_mm_order_constraint(per_page_sharing)
  * A per-domain lock that protects the mapping from nested-CR3 to 
  * nested-p2m.  In particular it covers:
  * - the array of nested-p2m tables, and all LRU activity therein; and
- * - setting the "cr3" field of any p2m table to a non-CR3_EADDR value. 
+ * - setting the "cr3" field of any p2m table to a non-P2M_BASE_EADDR value. 
  *   (i.e. assigning a p2m table to be the shadow of that cr3 */
 
 /* PoD lock (per-p2m-table)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 258f46e..6a4bdd9 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -69,7 +69,7 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
 
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ept_p2m_init(p2m);
@@ -1433,7 +1433,7 @@ p2m_flush_table(struct p2m_domain *p2m)
     ASSERT(page_list_empty(&p2m->pod.single));
 
     /* This is no longer a valid nested p2m for any address space */
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
     
     /* Zap the top level of the trie */
     top = mfn_to_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
@@ -1471,7 +1471,7 @@ p2m_flush_nestedp2m(struct domain *d)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
+p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
     /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
      * this may change within the loop by an other (v)cpu.
@@ -1480,8 +1480,8 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     struct domain *d;
     struct p2m_domain *p2m;
 
-    /* Mask out low bits; this avoids collisions with CR3_EADDR */
-    cr3 &= ~(0xfffull);
+    /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
+    np2m_base &= ~(0xfffull);
 
     if (nv->nv_flushp2m && nv->nv_p2m) {
         nv->nv_p2m = NULL;
@@ -1493,14 +1493,14 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR )
+        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             nv->nv_flushp2m = 0;
             p2m_getlru_nestedp2m(d, p2m);
             nv->nv_p2m = p2m;
-            if (p2m->cr3 == CR3_EADDR)
+            if (p2m->np2m_base == P2M_BASE_EADDR)
                 hvm_asid_flush_vcpu(v);
-            p2m->cr3 = cr3;
+            p2m->np2m_base = np2m_base;
             cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
@@ -1515,7 +1515,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     nv->nv_p2m = p2m;
-    p2m->cr3 = cr3;
+    p2m->np2m_base = np2m_base;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
@@ -1531,7 +1531,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1549,15 +1549,15 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint32_t pfec_21 = *pfec;
-        uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
+        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, ncr3);
+        p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
+        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
                                        gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index fdb0f58..d3535b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -170,7 +170,7 @@ struct hvm_function_table {
                                 uint64_t exitcode);
     int (*nhvm_vcpu_vmexit_trap)(struct vcpu *v, struct hvm_trap *trap);
     uint64_t (*nhvm_vcpu_guestcr3)(struct vcpu *v);
-    uint64_t (*nhvm_vcpu_hostcr3)(struct vcpu *v);
+    uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v);
     uint32_t (*nhvm_vcpu_asid)(struct vcpu *v);
     int (*nhvm_vmcx_guest_intercepts_trap)(struct vcpu *v, 
                                unsigned int trapnr, int errcode);
@@ -475,7 +475,7 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v);
 /* returns l1 guest's cr3 that points to the page table used to
  * translate l2 guest physical address to l1 guest physical address.
  */
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v);
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v);
 /* returns the asid number l1 guest wants to use to run the l2 guest */
 uint32_t nhvm_vcpu_asid(struct vcpu *v);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..d97011d 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -99,7 +99,7 @@ int nvmx_vcpu_initialise(struct vcpu *v);
 void nvmx_vcpu_destroy(struct vcpu *v);
 int nvmx_vcpu_reset(struct vcpu *v);
 uint64_t nvmx_vcpu_guestcr3(struct vcpu *v);
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v);
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v);
 uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2bd2048..ce26594 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -197,17 +197,17 @@ struct p2m_domain {
 
     struct domain     *domain;   /* back pointer to domain */
 
-    /* Nested p2ms only: nested-CR3 value that this p2m shadows. 
-     * This can be cleared to CR3_EADDR under the per-p2m lock but
+    /* Nested p2ms only: nested p2m base value that this p2m shadows. 
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
      * needs both the per-p2m lock and the per-domain nestedp2m lock
      * to set it to any other value. */
-#define CR3_EADDR     (~0ULL)
-    uint64_t           cr3;
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
      * The host p2m hasolds the head of the list and the np2ms are 
      * threaded on in LRU order. */
-    struct list_head np2m_list; 
+    struct list_head   np2m_list; 
 
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
@@ -282,11 +282,11 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZI-0006gN-N0; Wed, 19 Dec 2012 07:59:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZH-0006f0-Io
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:55 +0000
Received: from [85.158.139.211:57394] by server-1.bemta-5.messagelabs.com id
	87/95-12813-9F371D05; Wed, 19 Dec 2012 07:59:53 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355903992!21137572!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8290 invoked from network); 19 Dec 2012 07:59:52 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 07:59:52 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287159"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:50 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:23 +0800
Message-Id: <1355946267-24227-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 06/10] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the four GUEST_PDPTR fields need to be synced to the
shadow VMCS on each virtual vmentry.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index c100730..7c55c51 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -826,7 +826,14 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
     vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZM-0006ib-4j; Wed, 19 Dec 2012 08:00:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZK-0006h3-8c
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:58 +0000
Received: from [85.158.143.99:58819] by server-2.bemta-4.messagelabs.com id
	1C/46-30861-DF371D05; Wed, 19 Dec 2012 07:59:57 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355903995!18912557!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20140 invoked from network); 19 Dec 2012 07:59:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-216.messagelabs.com with SMTP;
	19 Dec 2012 07:59:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287195"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:54 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:26 +0800
Message-Id: <1355946267-24227-10-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 09/10] nVMX: virtualize VPID capability to
	nested VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Virtualize VPID for the nested VMM: use the host's VPID
to emulate the guest's VPID. On each virtual vmentry, if
the guest's VPID has changed, allocate a new host VPID for
the L2 guest.
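The remapping described above can be sketched as a standalone model (this is not the Xen code itself; the struct fields mirror the patch's `guest_vpid` and per-vcpu ASID state, but the bump-a-counter allocator is invented purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the patch's nested-VPID state: Xen keeps
 * nvmx->guest_vpid plus a per-nested-vcpu hvm_asid (nv_n2asid). */
struct n2_state {
    uint32_t guest_vpid;   /* last VPID L1 programmed into the vVMCS */
    uint32_t host_asid;    /* host ASID currently backing L2 */
    uint32_t next_asid;    /* toy allocator cursor, for illustration only */
};

/* On each virtual vmentry: if L1 changed the VPID, drop the old host
 * ASID so stale L2 TLB entries cannot be reused (this models the
 * hvm_asid_flush_vcpu_asid() call in virtual_vmentry()). */
static uint32_t n2_asid_for_vmentry(struct n2_state *s, uint32_t new_vpid)
{
    if (s->guest_vpid != new_vpid) {
        s->host_asid = ++s->next_asid;  /* fresh host ASID for L2 */
        s->guest_vpid = new_vpid;
    }
    return s->host_asid;
}
```

An unchanged guest VPID keeps the same host ASID across vmentries; any change forces a fresh one, which is also how the INVVPID emulation in this patch gets away with a single coarse flush.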

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   10 +++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   55 +++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +
 3 files changed, 64 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 7af92cc..2144820 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2576,10 +2576,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
             update_guest_eip();
         break;
+    case EXIT_REASON_INVVPID:
+        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
          * running in guest context, and the CPU checks that before getting
@@ -2697,8 +2700,11 @@ void vmx_vmenter_helper(void)
 
     if ( !cpu_has_vmx_vpid )
         goto out;
+    if ( nestedhvm_vcpu_in_guestmode(curr) )
+        p_asid = &vcpu_nestedhvm(curr).nv_n2asid;
+    else
+        p_asid = &curr->arch.hvm_vcpu.n1asid;
 
-    p_asid = &curr->arch.hvm_vcpu.n1asid;
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
     new_asid = p_asid->asid;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b7c3639..f2d7039 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -42,6 +42,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 	goto out;
     }
     nvmx->ept.enabled = 0;
+    nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -848,6 +849,16 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     return ept_get_eptp(ept);
 }
 
+static bool_t nvmx_vpid_enabled(struct nestedvcpu *nvcpu)
+{
+    uint32_t second_cntl;
+
+    second_cntl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
+    if ( second_cntl & SECONDARY_EXEC_ENABLE_VPID )
+        return 1;
+    return 0;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -896,6 +907,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     if ( nestedhvm_paging_mode_hap(v) )
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
 
+    /* nested VPID support! */
+    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
+    {
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);
+        if ( nvmx->guest_vpid != new_vpid )
+        {
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
+            nvmx->guest_vpid = new_vpid;
+        }
+    }
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -1187,7 +1210,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
-        return X86EMUL_OKAY;        
+        return X86EMUL_OKAY;
     }
 
     launched = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
@@ -1402,6 +1425,36 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long vpid;
+    u64 inv_type;
+
+    if ( !cpu_has_vmx_vpid )
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &vpid, 0) != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG,"inv_type:%ld, vpid:%lx\n", inv_type, vpid);
+
+    switch ( inv_type ) {
+        /* Just invalidate all tlb entries for all types! */
+        case INVVPID_INDIVIDUAL_ADDR:
+        case INVVPID_SINGLE_CONTEXT:
+        case INVVPID_ALL_CONTEXT:
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
+            break;
+        default:
+            return X86EMUL_EXCEPTION;
+    }
+    vmreturn(regs, VMSUCCEED);
+
+    return X86EMUL_OKAY;
+}
+
 /*
  * Capability reporting
  */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index cf5ed9a..28dd727 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,6 +37,7 @@ struct nestedvmx {
         uint32_t exit_reason;
         uint32_t exit_qual;
     } ept;
+    uint32_t guest_vpid;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -191,6 +192,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
 int nvmx_handle_invept(struct cpu_user_regs *regs);
+int nvmx_handle_invvpid(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZH-0006fl-8E; Wed, 19 Dec 2012 07:59:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZG-0006f1-6U
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:54 +0000
Received: from [85.158.139.83:5516] by server-8.bemta-5.messagelabs.com id
	77/04-15003-7F371D05; Wed, 19 Dec 2012 07:59:51 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355903986!30528495!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3414 invoked from network); 19 Dec 2012 07:59:50 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:50 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:49 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287142"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:47 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:21 +0800
Message-Id: <1355946267-24227-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 04/10] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case by
making the related data structures and operations neutral
to both common EPT and nested EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    9 +++-
 xen/arch/x86/hvm/vmx/vmx.c         |   53 ++-----------------
 xen/arch/x86/mm/p2m-ept.c          |  104 ++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m.c              |   23 ++++++---
 xen/include/asm-x86/hvm/vmx/vmcs.h |   23 ++++----
 xen/include/asm-x86/hvm/vmx/vmx.h  |   10 +++-
 xen/include/asm-x86/p2m.h          |    4 ++
 7 files changed, 133 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..379b75c 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -941,8 +941,13 @@ static int construct_vmcs(struct vcpu *v)
         __vmwrite(TPR_THRESHOLD, 0);
     }
 
-    if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+    if ( paging_mode_hap(d) ) {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+        struct ept_data *ept = &p2m->ept;
+
+        ept->asr  = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        __vmwrite(EPT_POINTER, ept_get_eptp(ept));
+    }
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abfa90..d74aae0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -74,38 +74,19 @@ static void vmx_fpu_dirty_intercept(void);
 static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg_intercept(unsigned long vaddr);
-static void __ept_sync_domain(void *info);
 
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
 
-    /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
-
-    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
-
-    d->arch.hvm_domain.vmx.ept_control.asr  =
-        pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
-        return -ENOMEM;
-
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-    {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
         return rc;
-    }
 
     return 0;
 }
 
 static void vmx_domain_destroy(struct domain *d)
 {
-    if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +622,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = &p2m_get_hostp2m(d)->ept;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +632,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1216,33 +1198,6 @@ static void vmx_update_guest_efer(struct vcpu *v)
                    (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
-static void __ept_sync_domain(void *info)
-{
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
-}
-
-void ept_sync_domain(struct domain *d)
-{
-    /* Only if using EPT and this domain has some VCPUs to dirty. */
-    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
-        return;
-
-    ASSERT(local_irq_is_enabled());
-
-    /*
-     * Flush active cpus synchronously. Flush others the next time this domain
-     * is scheduled onto them. We accept the race of other CPUs adding to
-     * the ept_synced mask before on_selected_cpus() reads it, resulting in
-     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
-     */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
-                d->domain_dirty_cpumask, &cpu_online_map);
-
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
-}
-
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
             unsigned long intr_fields, int error_code)
 {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..e33f415 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept = &p2m->ept;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table.*/
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept = &p2m->ept;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept = &p2m->ept;
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,24 +784,76 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept = &p2m->ept;
+    if ( ept_get_asr(ept) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
+            ept_get_wl(ept), ot, nt);
+
+    ept_sync_domain(p2m);
+}
+
+static void __ept_sync_domain(void *info)
+{
+    struct ept_data *ept = &((struct p2m_domain *)info)->ept;
 
-    ept_sync_domain(d);
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept), 0);
 }
 
-void ept_p2m_init(struct p2m_domain *p2m)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept = &p2m->ept;
+    /* Only if using EPT and this domain has some VCPUs to dirty. */
+    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
+        return;
+
+    ASSERT(local_irq_is_enabled());
+
+    /*
+     * Flush active cpus synchronously. Flush others the next time this domain
+     * is scheduled onto them. We accept the race of other CPUs adding to
+     * the ept_synced mask before on_selected_cpus() reads it, resulting in
+     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
+     */
+    cpumask_and(ept_get_synced_mask(ept),
+                d->domain_dirty_cpumask, &cpu_online_map);
+
+    on_selected_cpus(ept_get_synced_mask(ept),
+                     __ept_sync_domain, p2m, 1);
+}
+
+int ept_p2m_init(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->audit_p2m = NULL;
+
+    /* Set the memory type used when accessing EPT paging structures. */
+    ept->ept_mt = EPT_DEFAULT_MT;
+
+    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
+    ept->ept_wl = 3;
+
+    if ( !zalloc_cpumask_var(&ept->synced_mask) )
+        return -ENOMEM;
+
+    on_each_cpu(__ept_sync_domain, p2m, 1);
+
+    return 0;
+}
+
+void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+    free_cpumask_var(ept->synced_mask);
 }
 
 static void ept_dump_p2m_table(unsigned char key)
@@ -811,6 +869,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept;
 
     for_each_domain(d)
     {
@@ -818,15 +877,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept = &p2m->ept;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6a4bdd9..1f59410 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -57,8 +57,10 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 
 /* Init the datastructures for later use by the p2m code */
-static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
+    int ret = 0;
+
     mm_rwlock_init(&p2m->lock);
     mm_lock_init(&p2m->pod.lock);
     INIT_LIST_HEAD(&p2m->np2m_list);
@@ -72,11 +74,11 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
-        ept_p2m_init(p2m);
+        ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
 
-    return;
+    return ret;
 }
 
 static int
@@ -119,7 +121,7 @@ int p2m_init(struct domain *d)
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
+    if ( rc )
         p2m_final_teardown(d);
     return rc;
 }
@@ -424,12 +426,16 @@ void p2m_teardown(struct p2m_domain *p2m)
 static void p2m_teardown_nestedp2m(struct domain *d)
 {
     uint8_t i;
+    struct p2m_domain *p2m;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         if ( !d->arch.nested_p2m[i] )
             continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
+        p2m = d->arch.nested_p2m[i];
+        free_cpumask_var(p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
+        xfree(p2m);
         d->arch.nested_p2m[i] = NULL;
     }
 }
@@ -437,9 +443,12 @@ static void p2m_teardown_nestedp2m(struct domain *d)
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    if ( d->arch.p2m )
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    if ( p2m )
     {
         free_cpumask_var(d->arch.p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
         xfree(d->arch.p2m);
         d->arch.p2m = NULL;
     }
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..2d38b43 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,27 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
+struct ept_data{
     union {
-        struct {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
-    cpumask_var_t ept_synced;
+    };
+    cpumask_var_t synced_mask;
+};
+
+struct vmx_domain {
+    unsigned long apic_access_mfn;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+#define ept_get_wl(ept)   ((ept)->ept_wl)
+#define ept_get_asr(ept)  ((ept)->asr)
+#define ept_get_eptp(ept) ((ept)->eptp)
+#define ept_get_synced_mask(ept) ((ept)->synced_mask)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index feaaa80..2600694 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -360,7 +360,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -422,12 +422,18 @@ void vmx_get_segment_register(struct vcpu *, enum x86_segment,
 void vmx_inject_extint(int trap);
 void vmx_inject_nmi(void);
 
-void ept_p2m_init(struct p2m_domain *p2m);
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ce26594..b6a84b6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,10 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    union {
+        struct ept_data ept;
+        /* NPT-equivalent structure could be added here. */
+    };
 };
 
 /* get host p2m table */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZH-0006fl-8E; Wed, 19 Dec 2012 07:59:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZG-0006f1-6U
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:54 +0000
Received: from [85.158.139.83:5516] by server-8.bemta-5.messagelabs.com id
	77/04-15003-7F371D05; Wed, 19 Dec 2012 07:59:51 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355903986!30528495!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3414 invoked from network); 19 Dec 2012 07:59:50 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:50 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:49 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287142"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:47 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:21 +0800
Message-Id: <1355946267-24227-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 04/10] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case by making the
related data structures and operations neutral to both common EPT
and nested EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    9 +++-
 xen/arch/x86/hvm/vmx/vmx.c         |   53 ++-----------------
 xen/arch/x86/mm/p2m-ept.c          |  104 ++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m.c              |   23 ++++++---
 xen/include/asm-x86/hvm/vmx/vmcs.h |   23 ++++----
 xen/include/asm-x86/hvm/vmx/vmx.h  |   10 +++-
 xen/include/asm-x86/p2m.h          |    4 ++
 7 files changed, 133 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..379b75c 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -941,8 +941,13 @@ static int construct_vmcs(struct vcpu *v)
         __vmwrite(TPR_THRESHOLD, 0);
     }
 
-    if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+    if ( paging_mode_hap(d) ) {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+        struct ept_data *ept = &p2m->ept;
+
+        ept->asr  = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        __vmwrite(EPT_POINTER, ept_get_eptp(ept));
+    }
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abfa90..d74aae0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -74,38 +74,19 @@ static void vmx_fpu_dirty_intercept(void);
 static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg_intercept(unsigned long vaddr);
-static void __ept_sync_domain(void *info);
 
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
 
-    /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
-
-    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
-
-    d->arch.hvm_domain.vmx.ept_control.asr  =
-        pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
-        return -ENOMEM;
-
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-    {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
         return rc;
-    }
 
     return 0;
 }
 
 static void vmx_domain_destroy(struct domain *d)
 {
-    if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +622,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = &p2m_get_hostp2m(d)->ept;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +632,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1216,33 +1198,6 @@ static void vmx_update_guest_efer(struct vcpu *v)
                    (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
-static void __ept_sync_domain(void *info)
-{
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
-}
-
-void ept_sync_domain(struct domain *d)
-{
-    /* Only if using EPT and this domain has some VCPUs to dirty. */
-    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
-        return;
-
-    ASSERT(local_irq_is_enabled());
-
-    /*
-     * Flush active cpus synchronously. Flush others the next time this domain
-     * is scheduled onto them. We accept the race of other CPUs adding to
-     * the ept_synced mask before on_selected_cpus() reads it, resulting in
-     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
-     */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
-                d->domain_dirty_cpumask, &cpu_online_map);
-
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
-}
-
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
             unsigned long intr_fields, int error_code)
 {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..e33f415 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept = &p2m->ept;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table.*/
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept = &p2m->ept;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept = &p2m->ept;
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,24 +784,76 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept = &p2m->ept;
+    if ( ept_get_asr(ept) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
+            ept_get_wl(ept), ot, nt);
+
+    ept_sync_domain(p2m);
+}
+
+static void __ept_sync_domain(void *info)
+{
+    struct ept_data *ept = &((struct p2m_domain *)info)->ept;
 
-    ept_sync_domain(d);
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept), 0);
 }
 
-void ept_p2m_init(struct p2m_domain *p2m)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept = &p2m->ept;
+    /* Only if using EPT and this domain has some VCPUs to dirty. */
+    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
+        return;
+
+    ASSERT(local_irq_is_enabled());
+
+    /*
+     * Flush active cpus synchronously. Flush others the next time this domain
+     * is scheduled onto them. We accept the race of other CPUs adding to
+     * the ept_synced mask before on_selected_cpus() reads it, resulting in
+     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
+     */
+    cpumask_and(ept_get_synced_mask(ept),
+                d->domain_dirty_cpumask, &cpu_online_map);
+
+    on_selected_cpus(ept_get_synced_mask(ept),
+                     __ept_sync_domain, p2m, 1);
+}
+
+int ept_p2m_init(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->audit_p2m = NULL;
+
+    /* Set the memory type used when accessing EPT paging structures. */
+    ept->ept_mt = EPT_DEFAULT_MT;
+
+    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
+    ept->ept_wl = 3;
+
+    if ( !zalloc_cpumask_var(&ept->synced_mask) )
+        return -ENOMEM;
+
+    on_each_cpu(__ept_sync_domain, p2m, 1);
+
+    return 0;
+}
+
+void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+    free_cpumask_var(ept->synced_mask);
 }
 
 static void ept_dump_p2m_table(unsigned char key)
@@ -811,6 +869,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept;
 
     for_each_domain(d)
     {
@@ -818,15 +877,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept = &p2m->ept;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6a4bdd9..1f59410 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -57,8 +57,10 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 
 /* Init the datastructures for later use by the p2m code */
-static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
+    int ret = 0;
+
     mm_rwlock_init(&p2m->lock);
     mm_lock_init(&p2m->pod.lock);
     INIT_LIST_HEAD(&p2m->np2m_list);
@@ -72,11 +74,11 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
-        ept_p2m_init(p2m);
+        ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
 
-    return;
+    return ret;
 }
 
 static int
@@ -119,7 +121,7 @@ int p2m_init(struct domain *d)
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
+    if ( rc )
         p2m_final_teardown(d);
     return rc;
 }
@@ -424,12 +426,16 @@ void p2m_teardown(struct p2m_domain *p2m)
 static void p2m_teardown_nestedp2m(struct domain *d)
 {
     uint8_t i;
+    struct p2m_domain *p2m;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         if ( !d->arch.nested_p2m[i] )
             continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
+        p2m = d->arch.nested_p2m[i];
+        free_cpumask_var(p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
+        xfree(p2m);
         d->arch.nested_p2m[i] = NULL;
     }
 }
@@ -437,9 +443,12 @@ static void p2m_teardown_nestedp2m(struct domain *d)
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    if ( d->arch.p2m )
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    if ( p2m )
     {
         free_cpumask_var(d->arch.p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
         xfree(d->arch.p2m);
         d->arch.p2m = NULL;
     }
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..2d38b43 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,27 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
+struct ept_data{
     union {
-        struct {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
-    cpumask_var_t ept_synced;
+    };
+    cpumask_var_t synced_mask;
+};
+
+struct vmx_domain {
+    unsigned long apic_access_mfn;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+#define ept_get_wl(ept)   ((ept)->ept_wl)
+#define ept_get_asr(ept)  ((ept)->asr)
+#define ept_get_eptp(ept) ((ept)->eptp)
+#define ept_get_synced_mask(ept) ((ept)->synced_mask)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index feaaa80..2600694 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -360,7 +360,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -422,12 +422,18 @@ void vmx_get_segment_register(struct vcpu *, enum x86_segment,
 void vmx_inject_extint(int trap);
 void vmx_inject_nmi(void);
 
-void ept_p2m_init(struct p2m_domain *p2m);
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ce26594..b6a84b6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,10 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    union {
+        struct ept_data ept;
+        /* NPT-equivalent structure could be added here. */
+    };
 };
 
 /* get host p2m table */
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZL-0006iK-PA; Wed, 19 Dec 2012 07:59:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZJ-0006gR-7K
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:57 +0000
Received: from [85.158.139.211:57551] by server-16.bemta-5.messagelabs.com id
	47/07-09208-CF371D05; Wed, 19 Dec 2012 07:59:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355903992!21137572!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8313 invoked from network); 19 Dec 2012 07:59:53 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 07:59:53 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:53 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287167"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:51 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:24 +0800
Message-Id: <1355946267-24227-8-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 07/10] nEPT: Use minimal permission for
	nested p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Emulate the permission check for the nested p2m. The current solution
is to use minimal permissions: once a permission violation is seen in
L0, determine whether it was caused by the guest EPT or the host EPT.
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |    4 +-
 xen/arch/x86/mm/hap/nested_ept.c        |    5 ++-
 xen/arch/x86/mm/hap/nested_hap.c        |   38 +++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/hvm.h           |    2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    6 ++--
 7 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 5dcb354..ab455a9 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
  */
 int
 nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint32_t pfec;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7c55c51..cb2c6e7 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1493,7 +1493,7 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
  */
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
@@ -1503,7 +1503,7 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
-    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn, p2m_acc,
                                 &exit_qual, &exit_reason);
     switch ( rc ) {
         case EPT_TRANSLATE_SUCCEED:
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 5f80d82..4b99281 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -215,8 +215,8 @@ out:
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason)
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason)
 {
     uint32_t rc, rwx_bits = 0;
     ept_walk_t gw;
@@ -251,6 +251,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
         if ( nept_permission_check(rwx_acc, rwx_bits) )
         {
             *l1gfn = gw.lxe[0].mfn;
+            *p2m_acc = (uint8_t)rwx_bits;
             break;
         }
         rc = EPT_TRANSLATE_VIOLATION;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 6d1264b..84dbf15 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -142,12 +142,12 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
  */
 static int
 nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
 
-    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order, p2m_acc,
         access_r, access_w, access_x);
 }
 
@@ -158,16 +158,15 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      p2m_type_t *p2mt,
+                      p2m_type_t *p2mt, p2m_access_t *p2ma,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
@@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
     p2m_type_t p2mt_10;
+    p2m_access_t p2ma_10 = p2m_access_rwx;
+    uint8_t p2ma_21;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
@@ -229,7 +230,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     /* ==> we have to walk L0 P2M */
     rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
-        &p2mt_10, &page_order_10,
+        &p2mt_10, &p2ma_10, &page_order_10,
         access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
@@ -250,10 +251,29 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     page_order_20 = min(page_order_21, page_order_10);
 
+    ASSERT(p2ma_10 <= p2m_access_n2rwx);
+    /* NOTE: if this assertion fails, a new access type needs handling here. */
+
+    switch ( p2ma_10 ) {
+    case p2m_access_n ... p2m_access_rwx:
+        break;
+    case p2m_access_rx2rw:
+        p2ma_10 = p2m_access_rx;
+        break;
+    case p2m_access_n2rwx:
+        p2ma_10 = p2m_access_n;
+        break;
+    default:
+        /* For safety, remove all permissions. */
+        gdprintk(XENLOG_ERR, "Unhandled p2m access type:%d\n", p2ma_10);
+        p2ma_10 = p2m_access_n;
+    }
+    /* Use minimal permission for nested p2m. */
+    p2ma_10 &= (p2m_access_t)p2ma_21;
+
     /* fix p2m_get_pagetable(nested_p2m) */
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
-        p2mt_10,
-        p2m_access_rwx /* FIXME: Should use minimum permission. */);
+        p2mt_10, p2ma_10);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 80f07e9..889e3c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -186,7 +186,7 @@ struct hvm_function_table {
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 0c90f30..748cc04 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -134,7 +134,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
 int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 661cd8a..55c0ad1 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -124,7 +124,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
@@ -207,7 +207,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason);
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZG-0006fV-Qv; Wed, 19 Dec 2012 07:59:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZF-0006eQ-3z
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:53 +0000
Received: from [85.158.138.51:55787] by server-7.bemta-3.messagelabs.com id
	E4/9B-23008-8F371D05; Wed, 19 Dec 2012 07:59:52 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1355903990!23171185!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8484 invoked from network); 19 Dec 2012 07:59:51 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-174.messagelabs.com with SMTP;
	19 Dec 2012 07:59:51 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287151"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:49 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:22 +0800
Message-Id: <1355946267-24227-6-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 05/10] nEPT: Try to enable EPT paging for L2
	guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Once EPT is found to be enabled by the L1 VMM, enable nested EPT
support for the L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        |   48 +++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
 3 files changed, 54 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d74aae0..e5be5a2 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
     .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
+    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
     .nhvm_intr_blocked    = nvmx_intr_blocked,
@@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
+    int ret;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
         _d.gpa = gpa;
         _d.qualification = qualification;
         _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
-        
+
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    if ( hvm_hap_nested_page_fault(gpa,
+    ret = hvm_hap_nested_page_fault(gpa,
                                    qualification & EPT_GLA_VALID       ? 1 : 0,
                                    qualification & EPT_GLA_VALID
                                      ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
                                    qualification & EPT_READ_VIOLATION  ? 1 : 0,
                                    qualification & EPT_WRITE_VIOLATION ? 1 : 0,
-                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
+                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
+    switch ( ret ) {
+    case 0:
+        break;
+    case 1:
         return;
+    case -1:
+        vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+        return;
+    }
 
     /* Everything else is an error. */
     mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 76cf757..c100730 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
 	goto out;
     }
+    nvmx->ept.enabled = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
 
 uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
-    /* TODO */
-    ASSERT(0);
-    return 0;
+    uint64_t eptp_base;
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
+    return eptp_base & PAGE_MASK; 
 }
 
 uint32_t nvmx_vcpu_asid(struct vcpu *v)
@@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
     return 0;
 }
 
+bool_t nvmx_ept_enabled(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    return !!(nvmx->ept.enabled);
+}
+
 static const enum x86_segment sreg_to_index[] = {
     [VMX_SREG_ES] = x86_seg_es,
     [VMX_SREG_CS] = x86_seg_cs,
@@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 }
 
 void nvmx_update_secondary_exec_control(struct vcpu *v,
-                                            unsigned long value)
+                                            unsigned long host_cntrl)
 {
     u32 shadow_cntrl;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
     shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
-    shadow_cntrl |= value;
-    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
+    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
+    shadow_cntrl |= host_cntrl;
+    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
 }
 
 static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
@@ -818,6 +830,17 @@ static void load_shadow_guest_state(struct vcpu *v)
     /* TODO: CR3 target control */
 }
 
+
+static uint64_t get_shadow_eptp(struct vcpu *v)
+{
+    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct ept_data *ept = &p2m->ept;
+
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    return ept_get_eptp(ept);
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -862,7 +885,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
 
-    /* TODO: EPT_POINTER */
+    /* Set up the virtual EPT pointer for the L2 guest. */
+    if ( nestedhvm_paging_mode_hap(v) )
+        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -915,8 +941,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
     /* Adjust exit_reason/exit_qualifciation for violation case */
     if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
                 EXIT_REASON_EPT_VIOLATION ) {
-        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
-        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
     }
 }
 
@@ -1480,8 +1506,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
         case EPT_TRANSLATE_VIOLATION:
         case EPT_TRANSLATE_MISCONFIG:
             rc = NESTEDHVM_PAGEFAULT_INJECT;
-            nvmx->ept_exit.exit_reason = exit_reason;
-            nvmx->ept_exit.exit_qual = exit_qual;
+            nvmx->ept.exit_reason = exit_reason;
+            nvmx->ept.exit_qual = exit_qual;
             break;
         case EPT_TRANSLATE_RETRY:
             rc = NESTEDHVM_PAGEFAULT_RETRY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 8eb377b..661cd8a 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -33,9 +33,10 @@ struct nestedvmx {
         u32           error_code;
     } intr;
     struct {
+        char     enabled;
         uint32_t exit_reason;
         uint32_t exit_qual;
-    } ept_exit;
+    } ept;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
                               unsigned int trap, int error_code);
 void nvmx_domain_relinquish_resources(struct domain *d);
 
+bool_t nvmx_ept_enabled(struct vcpu *v);
+
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Once EPT is found to be enabled by the L1 VMM, enable nested EPT support
for the L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        |   48 +++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
 3 files changed, 54 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d74aae0..e5be5a2 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
     .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
+    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
     .nhvm_intr_blocked    = nvmx_intr_blocked,
@@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
+    int ret;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
         _d.gpa = gpa;
         _d.qualification = qualification;
         _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
-        
+
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    if ( hvm_hap_nested_page_fault(gpa,
+    ret = hvm_hap_nested_page_fault(gpa,
                                    qualification & EPT_GLA_VALID       ? 1 : 0,
                                    qualification & EPT_GLA_VALID
                                      ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
                                    qualification & EPT_READ_VIOLATION  ? 1 : 0,
                                    qualification & EPT_WRITE_VIOLATION ? 1 : 0,
-                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
+                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
+    switch ( ret ) {
+    case 0:
+        break;
+    case 1:
         return;
+    case -1:
+        vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+        return;
+    }
 
     /* Everything else is an error. */
     mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 76cf757..c100730 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
 	goto out;
     }
+    nvmx->ept.enabled = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
 
 uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
-    /* TODO */
-    ASSERT(0);
-    return 0;
+    uint64_t eptp_base;
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
+    return eptp_base & PAGE_MASK;
 }
 
 uint32_t nvmx_vcpu_asid(struct vcpu *v)
@@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
     return 0;
 }
 
+bool_t nvmx_ept_enabled(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    return !!(nvmx->ept.enabled);
+}
+
 static const enum x86_segment sreg_to_index[] = {
     [VMX_SREG_ES] = x86_seg_es,
     [VMX_SREG_CS] = x86_seg_cs,
@@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 }
 
 void nvmx_update_secondary_exec_control(struct vcpu *v,
-                                            unsigned long value)
+                                            unsigned long host_cntrl)
 {
     u32 shadow_cntrl;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
     shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
-    shadow_cntrl |= value;
-    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
+    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
+    shadow_cntrl |= host_cntrl;
+    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
 }
 
 static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
@@ -818,6 +830,17 @@ static void load_shadow_guest_state(struct vcpu *v)
     /* TODO: CR3 target control */
 }
 
+
+static uint64_t get_shadow_eptp(struct vcpu *v)
+{
+    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct ept_data *ept = &p2m->ept;
+
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    return ept_get_eptp(ept);
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -862,7 +885,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
 
-    /* TODO: EPT_POINTER */
+    /* Set up virtual EPT for the L2 guest */
+    if ( nestedhvm_paging_mode_hap(v) )
+        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -915,8 +941,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
     /* Adjust exit_reason/exit_qualifciation for violation case */
     if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
                 EXIT_REASON_EPT_VIOLATION ) {
-        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
-        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
     }
 }
 
@@ -1480,8 +1506,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
         case EPT_TRANSLATE_VIOLATION:
         case EPT_TRANSLATE_MISCONFIG:
             rc = NESTEDHVM_PAGEFAULT_INJECT;
-            nvmx->ept_exit.exit_reason = exit_reason;
-            nvmx->ept_exit.exit_qual = exit_qual;
+            nvmx->ept.exit_reason = exit_reason;
+            nvmx->ept.exit_qual = exit_qual;
             break;
         case EPT_TRANSLATE_RETRY:
             rc = NESTEDHVM_PAGEFAULT_RETRY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 8eb377b..661cd8a 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -33,9 +33,10 @@ struct nestedvmx {
         u32           error_code;
     } intr;
     struct {
+        char     enabled;
         uint32_t exit_reason;
         uint32_t exit_qual;
-    } ept_exit;
+    } ept;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
                               unsigned int trap, int error_code);
 void nvmx_domain_relinquish_resources(struct domain *d);
 
+bool_t nvmx_ept_enabled(struct vcpu *v);
+
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZC-0006do-5z; Wed, 19 Dec 2012 07:59:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZA-0006dU-Im
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:48 +0000
Received: from [85.158.139.211:18949] by server-5.bemta-5.messagelabs.com id
	EB/C6-22648-3F371D05; Wed, 19 Dec 2012 07:59:47 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1355903984!18549811!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4838 invoked from network); 19 Dec 2012 07:59:45 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-9.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 07:59:45 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:43 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287099"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:42 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:17 +0800
Message-Id: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 00/10] Nested VMX: Add virtual EPT & VPID
	support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use EPT hardware for
the L2 guest's memory virtualization. This improves the L2 guest's
performance significantly: in our testing, some benchmarks show
a > 5x performance gain.

Changes from v1 (updated according to Tim's comments):
1. Patch 03: Enhance the virtual EPT walker logic.
2. Patch 04: Add a new field to struct p2m_domain and use it to store
   EPT-specific data. For the host p2m it holds the L1 VMM's EPT data;
   for a nested p2m it holds the nested EPT's data.
3. Patch 07: Strictly check the host p2m access type.
4. Other patches: assorted whitespace fixes.

Zhang Xiantao (10):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nVMX: virtualize VPID capability to nested VMM.
  nEPT: expose EPT & VPID capabilities to L1 VMM

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 ++++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    9 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   90 ++++-------
 xen/arch/x86/hvm/vmx/vvmx.c             |  213 ++++++++++++++++++++++--
 xen/arch/x86/mm/guest_walk.c            |   16 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  282 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   95 ++++++-----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |  104 +++++++++---
 xen/arch/x86/mm/p2m.c                   |   51 ++++---
 xen/arch/x86/mm/shadow/multi.c          |    2 +-
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   24 ++--
 xen/include/asm-x86/hvm/vmx/vmx.h       |   38 ++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   29 +++-
 xen/include/asm-x86/p2m.h               |   20 ++-
 22 files changed, 843 insertions(+), 195 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZL-0006ht-B0; Wed, 19 Dec 2012 07:59:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZJ-0006gT-83
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:57 +0000
Received: from [85.158.139.211:61566] by server-3.bemta-5.messagelabs.com id
	6F/7E-25441-CF371D05; Wed, 19 Dec 2012 07:59:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355903992!21137572!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8390 invoked from network); 19 Dec 2012 07:59:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 07:59:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287181"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:53 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:25 +0800
Message-Id: <1355946267-24227-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 08/10] nEPT: handle invept instruction from
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Add the INVEPT instruction emulation logic.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |    6 +++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c              |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e5be5a2..7af92cc 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2572,11 +2572,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
             update_guest_eip();
         break;
-
+    case EXIT_REASON_INVEPT:
+        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVEPT:
     case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index cb2c6e7..b7c3639 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1356,6 +1356,45 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_invept(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long eptp;
+    u64 inv_type;
+
+    if ( !cpu_has_vmx_ept )
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
+             != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG, "inv_type:%ld, eptp:%lx\n", inv_type, eptp);
+
+    switch ( inv_type ) {
+    case INVEPT_SINGLE_CONTEXT:
+        {
+            struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
+            if ( p2m )
+            {
+                p2m_flush(current, p2m);
+                ept_sync_domain(p2m);
+            }
+        }
+        break;
+    case INVEPT_ALL_CONTEXT:
+        p2m_flush_nestedp2m(current->domain);
+        __invept(INVEPT_ALL_CONTEXT, 0, 0);
+        break;
+    default:
+        return X86EMUL_EXCEPTION;
+    }
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 1f59410..17903ee 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1465,7 +1465,7 @@ p2m_flush_table(struct p2m_domain *p2m)
 void
 p2m_flush(struct vcpu *v, struct p2m_domain *p2m)
 {
-    ASSERT(v->domain == p2m->domain);
+    ASSERT(p2m && v->domain == p2m->domain);
     vcpu_nestedhvm(v).nv_p2m = NULL;
     p2m_flush_table(p2m);
     hvm_asid_flush_vcpu(v);
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 55c0ad1..cf5ed9a 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -190,6 +190,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs);
 int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+int nvmx_handle_invept(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZL-0006ht-B0; Wed, 19 Dec 2012 07:59:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZJ-0006gT-83
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:57 +0000
Received: from [85.158.139.211:61566] by server-3.bemta-5.messagelabs.com id
	6F/7E-25441-CF371D05; Wed, 19 Dec 2012 07:59:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355903992!21137572!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8390 invoked from network); 19 Dec 2012 07:59:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 07:59:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287181"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:53 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:25 +0800
Message-Id: <1355946267-24227-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 08/10] nEPT: handle invept instruction from
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Add the INVEPT instruction emulation logic.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         |    6 +++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c              |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 4 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e5be5a2..7af92cc 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2572,11 +2572,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
             update_guest_eip();
         break;
-
+    case EXIT_REASON_INVEPT:
+        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVEPT:
     case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index cb2c6e7..b7c3639 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1356,6 +1356,45 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_invept(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long eptp;
+    u64 inv_type;
+
+    if ( !cpu_has_vmx_ept )
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
+             != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG, "inv_type:%ld, eptp:%lx\n", inv_type, eptp);
+
+    switch ( inv_type ) {
+    case INVEPT_SINGLE_CONTEXT:
+        {
+            struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
+            if ( p2m )
+            {
+                p2m_flush(current, p2m);
+                ept_sync_domain(p2m);
+            }
+        }
+        break;
+    case INVEPT_ALL_CONTEXT:
+        p2m_flush_nestedp2m(current->domain);
+        __invept(INVEPT_ALL_CONTEXT, 0, 0);
+        break;
+    default:
+        return X86EMUL_EXCEPTION;
+    }
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 1f59410..17903ee 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1465,7 +1465,7 @@ p2m_flush_table(struct p2m_domain *p2m)
 void
 p2m_flush(struct vcpu *v, struct p2m_domain *p2m)
 {
-    ASSERT(v->domain == p2m->domain);
+    ASSERT(p2m && v->domain == p2m->domain);
     vcpu_nestedhvm(v).nv_p2m = NULL;
     p2m_flush_table(p2m);
     hvm_asid_flush_vcpu(v);
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 55c0ad1..cf5ed9a 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -190,6 +190,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs);
 int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+int nvmx_handle_invept(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZE-0006eK-4j; Wed, 19 Dec 2012 07:59:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZC-0006dT-Rn
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:51 +0000
Received: from [85.158.139.83:5407] by server-11.bemta-5.messagelabs.com id
	3D/C1-31624-6F371D05; Wed, 19 Dec 2012 07:59:50 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355903986!30528495!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3216 invoked from network); 19 Dec 2012 07:59:48 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:48 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287136"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:46 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:20 +0800
Message-Id: <1355946267-24227-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement the guest EPT page-table walker; some of the logic is based
on the shadow code's ia32e PT walker. During the walk, if a target
page is not present in memory, use the RETRY mechanism so the target
page gets a chance to be paged back in.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
 xen/arch/x86/mm/guest_walk.c        |   16 ++-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  276 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/arch/x86/mm/shadow/multi.c      |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h   |   28 ++++
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   14 ++
 12 files changed, 382 insertions(+), 10 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1cae8a8..3cd0075 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occured while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4495dd6..76cf757 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the EPT violation case */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
+                EXIT_REASON_EPT_VIOLATION ) {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1454,8 +1463,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    int rc;
+    unsigned long gfn;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+            rc = NESTEDHVM_PAGEFAULT_DONE;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = NESTEDHVM_PAGEFAULT_INJECT;
+            nvmx->ept_exit.exit_reason = exit_reason;
+            nvmx->ept_exit.exit_qual = exit_qual;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            rc = NESTEDHVM_PAGEFAULT_RETRY;
+            break;
+        case EPT_TRANSLATE_ERR_PAGE:
+            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "GUEST EPT translation error!\n");
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 0f08fb0..1c165c6 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
-                                   gfn_t gfn, 
+void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
-                                   uint32_t *rc) 
+                                   p2m_query_t q,
+                                   uint32_t *rc)
 {
     struct page_info *page;
     void *map;
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..5f80d82
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,276 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for guests in the nested case.
+ *
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Bits that must be reserved (zero) in entries at all levels */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+/*
+ * TODO: Leave this as 0 for now so the code compiles; real
+ * capabilities will be defined in subsequent patches.
+ */
+#define NEPT_VPID_CAP_BITS 0
+
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+bool_t nept_sp_entry(ept_entry_t e)
+{
+    return !!(e.sp);
+}
+
+static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level ) {
+    case 1:
+        break;
+    case 2 ... 3:
+        if (nept_sp_entry(e))
+            rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
+        else
+            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
+        break;
+    case 4:
+        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
+    break;
+    default:
+        printk("Unsupported EPT paging level: %d\n", level);
+    }
+    return !!(e.epte & rsv_bits);
+}
+
+/* EMT checking */
+static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
+{
+    if ( e.sp || level == 1 ) {
+        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
+                e.emt == EPT_EMT_RSV2 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_rwx_bits_check(ept_entry_t e) {
+    /* write-only or write/execute-only */
+    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
+
+    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
+        return 1;
+
+    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED))
+        return 1;
+
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
+{
+    return (nept_rsv_bits_check(e, level) ||
+                nept_emt_bits_check(e, level) ||
+                nept_rwx_bits_check(e));
+}
+
+static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
+}
+
+/* nept's non-present check */
+static bool_t nept_non_present_check(ept_entry_t e)
+{
+    if (e.epte & EPTE_RWX_MASK)
+        return 0;
+    return 1;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    return NEPT_VPID_CAP_BITS;
+}
+
+static int ept_lvl_table_offset(unsigned long gpa, int lvl)
+{
+    return (gpa >>(EPT_L4_PAGETABLE_SHIFT -(4 - lvl) * 9)) &
+                (EPT_PAGETABLE_ENTRIES -1 );
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
+{
+    int lvl;
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    mfn_t lxmfn;
+    ept_entry_t *lxp = NULL;
+
+    memset(gw, 0, sizeof(*gw));
+
+    for (lvl = 4; lvl > 0; lvl--)
+    {
+        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
+        if ( !lxp )
+            goto map_err;
+        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
+        unmap_domain_page(lxp);
+        put_page(mfn_to_page(mfn_x(lxmfn)));
+
+        if (nept_non_present_check(gw->lxe[lvl]))
+            goto non_present;
+
+        if (nept_misconfiguration_check(gw->lxe[lvl], lvl))
+            goto misconfig_err;
+
+        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
+        {
+            /* Generate a fake l1 table entry so callers don't all
+             * have to understand superpages. */
+            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
+            gfn_t start = _gfn(gw->lxe[lvl].mfn);
+            /* Increment the pfn by the right number of 4k pages. */
+            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
+                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
+            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
+                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
+            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
+            goto done;
+        }
+        if ( lvl > 1 )
+            base_gfn = _gfn(gw->lxe[lvl].mfn);
+    }
+
+    /* We reach here only if this was not a superpage entry. */
+    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
+    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto out;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+        ret = EPT_TRANSLATE_RETRY;
+    else
+        ret = EPT_TRANSLATE_ERR_PAGE;
+    goto out;
+
+misconfig_err:
+    ret =  EPT_TRANSLATE_MISCONFIG;
+    goto out;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+    /* fall through. */
+out:
+    return ret;
+}
+
+/* Translate an L2 guest address to an L1 gpa via the L1 EPT paging structure */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    ept_walk_t gw;
+    rwx_acc &= EPTE_RWX_MASK;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc ) {
+    case EPT_TRANSLATE_SUCCEED:
+        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                            EPTE_RWX_MASK;
+            *page_order = 9;
+        }
+        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG ) {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                    gw.lxe[1].epte & EPTE_RWX_MASK;
+            *page_order = 0;
+        }
+        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
+            *page_order = 18;
+        }
+        else
+        {
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+            BUG();
+        }
+        if ( nept_permission_check(rwx_acc, rwx_bits) )
+        {
+            *l1gfn = gw.lxe[0].mfn;
+            break;
+        }
+        rc = EPT_TRANSLATE_VIOLATION;
+    /* Fall through to EPT violation if permission check fails. */
+    case EPT_TRANSLATE_VIOLATION:
+        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+        *exit_reason = EXIT_REASON_EPT_VIOLATION;
+        break;
+
+    case EPT_TRANSLATE_ERR_PAGE:
+        break;
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = EPT_TRANSLATE_MISCONFIG;
+        *exit_qual = 0;
+        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 4967da1..409198c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
     /* Translate the GFN to an MFN */
     ASSERT(!paging_locked_by_me(v->domain));
     mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
-        
+
     if ( p2m_is_readonly(p2mt) )
     {
         put_gfn(v->domain, gfn);
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..db8a0b6 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..649c511 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..feaaa80 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -51,6 +51,11 @@ typedef union {
     u64 epte;
 } ept_entry_t;
 
+typedef struct {
+    /* use lxe[0] to save the result */
+    ept_entry_t lxe[5];
+} ept_walk_t;
+
 #define EPT_TABLE_ORDER         9
 #define EPTE_SUPER_PAGE_MASK    0x80
 #define EPTE_MFN_MASK           0xffffffffff000ULL
@@ -60,6 +65,28 @@ typedef union {
 #define EPTE_AVAIL1_SHIFT       8
 #define EPTE_EMT_SHIFT          3
 #define EPTE_IGMT_SHIFT         6
+#define EPTE_RWX_MASK           0x7
+#define EPTE_FLAG_MASK          0x7f
+
+#define EPT_EMT_UC              0
+#define EPT_EMT_WC              1
+#define EPT_EMT_RSV0            2
+#define EPT_EMT_RSV1            3
+#define EPT_EMT_WT              4
+#define EPT_EMT_WP              5
+#define EPT_EMT_WB              6
+#define EPT_EMT_RSV2            7
+
+typedef enum {
+    ept_access_n     = 0, /* No access permissions allowed */
+    ept_access_r     = 1,
+    ept_access_w     = 2,
+    ept_access_rw    = 3,
+    ept_access_x     = 4,
+    ept_access_rx    = 5,
+    ept_access_wx    = 6,
+    ept_access_all   = 7,
+} ept_access_t;
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
@@ -419,6 +446,7 @@ void update_guest_eip(void);
 #define _EPT_GLA_FAULT              8
 #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
 
+#define EPT_L4_PAGETABLE_SHIFT      39
 #define EPT_PAGETABLE_ENTRIES       512
 
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..8eb377b 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,12 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_SUCCEED   0
+#define EPT_TRANSLATE_VIOLATION 1
+#define EPT_TRANSLATE_ERR_PAGE  2
+#define EPT_TRANSLATE_MISCONFIG 3
+#define EPT_TRANSLATE_RETRY     4
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +202,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZE-0006eK-4j; Wed, 19 Dec 2012 07:59:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZC-0006dT-Rn
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:51 +0000
Received: from [85.158.139.83:5407] by server-11.bemta-5.messagelabs.com id
	3D/C1-31624-6F371D05; Wed, 19 Dec 2012 07:59:50 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355903986!30528495!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3216 invoked from network); 19 Dec 2012 07:59:48 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:48 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287136"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:46 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:20 +0800
Message-Id: <1355946267-24227-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implment guest EPT PT walker, some logic is based on shadow's
ia32e PT walker. During the PT walking, if the target pages are
not in memory, use RETRY mechanism and get a chance to let the
target page back.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
 xen/arch/x86/mm/guest_walk.c        |   16 ++-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  276 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/arch/x86/mm/shadow/multi.c      |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h   |   28 ++++
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   14 ++
 12 files changed, 382 insertions(+), 10 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1cae8a8..3cd0075 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occured while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4495dd6..76cf757 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
+                EXIT_REASON_EPT_VIOLATION ) {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1454,8 +1463,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    int rc;
+    unsigned long gfn;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+            rc = NESTEDHVM_PAGEFAULT_DONE;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = NESTEDHVM_PAGEFAULT_INJECT;
+            nvmx->ept_exit.exit_reason = exit_reason;
+            nvmx->ept_exit.exit_qual = exit_qual;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            rc = NESTEDHVM_PAGEFAULT_RETRY;
+            break;
+        case EPT_TRANSLATE_ERR_PAGE:
+            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "GUEST EPT translation error!\n");
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 0f08fb0..1c165c6 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
-                                   gfn_t gfn, 
+void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
-                                   uint32_t *rc) 
+                                   p2m_query_t q,
+                                   uint32_t *rc)
 {
     struct page_info *page;
     void *map;
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..5f80d82
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,276 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for guest in nested case.
+ *
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Bits that must be reserved (zero) in entries at all levels */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+/*
+ *TODO: Just leave it as 0 here for compile pass, will
+ * define real capabilities in the subsequent patches.
+ */
+#define NEPT_VPID_CAP_BITS 0
+
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+bool_t nept_sp_entry(ept_entry_t e)
+{
+    return !!(e.sp);
+}
+
+static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level ) {
+    case 1:
+        break;
+    case 2 ... 3:
+        if (nept_sp_entry(e))
+            rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
+        else
+            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
+        break;
+    case 4:
+        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
+    break;
+    default:
+        printk("Unsupported EPT paging level: %d\n", level);
+    }
+    return !!(e.epte & rsv_bits);
+}
+
+/* EMT field checking */
+static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
+{
+    if ( e.sp || level == 1 ) {
+        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
+                e.emt == EPT_EMT_RSV2 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_rwx_bits_check(ept_entry_t e) {
+    /* write-only or write/execute-only */
+    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
+
+    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
+        return 1;
+
+    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED))
+        return 1;
+
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
+{
+    return (nept_rsv_bits_check(e, level) ||
+                nept_emt_bits_check(e, level) ||
+                nept_rwx_bits_check(e));
+}
+
+static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
+}
+
+/* nept's non-present check */
+static bool_t nept_non_present_check(ept_entry_t e)
+{
+    if (e.epte & EPTE_RWX_MASK)
+        return 0;
+    return 1;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    return NEPT_VPID_CAP_BITS;
+}
+
+static int ept_lvl_table_offset(unsigned long gpa, int lvl)
+{
+    return (gpa >>(EPT_L4_PAGETABLE_SHIFT -(4 - lvl) * 9)) &
+                (EPT_PAGETABLE_ENTRIES -1 );
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
+{
+    int lvl;
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    mfn_t lxmfn;
+    ept_entry_t *lxp = NULL;
+
+    memset(gw, 0, sizeof(*gw));
+
+    for (lvl = 4; lvl > 0; lvl--)
+    {
+        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
+        if ( !lxp )
+            goto map_err;
+        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
+        unmap_domain_page(lxp);
+        put_page(mfn_to_page(mfn_x(lxmfn)));
+
+        if (nept_non_present_check(gw->lxe[lvl]))
+            goto non_present;
+
+        if (nept_misconfiguration_check(gw->lxe[lvl], lvl))
+            goto misconfig_err;
+
+        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
+        {
+            /* Generate a fake l1 table entry so callers don't all
+             * have to understand superpages. */
+            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
+            gfn_t start = _gfn(gw->lxe[lvl].mfn);
+            /* Increment the pfn by the right number of 4k pages. */
+            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
+                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
+            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
+                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
+            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
+            goto done;
+        }
+        if ( lvl > 1 )
+            base_gfn = _gfn(gw->lxe[lvl].mfn);
+    }
+
+/* We only reach here if no superpage entry was found. */
+    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
+    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto out;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+        ret = EPT_TRANSLATE_RETRY;
+    else
+        ret = EPT_TRANSLATE_ERR_PAGE;
+    goto out;
+
+misconfig_err:
+    ret =  EPT_TRANSLATE_MISCONFIG;
+    goto out;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+    /* fall through. */
+out:
+    return ret;
+}
+
+/* Translate an L2 guest address to an L1 gpa via the L1 EPT paging structure */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    ept_walk_t gw;
+    rwx_acc &= EPTE_RWX_MASK;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc ) {
+    case EPT_TRANSLATE_SUCCEED:
+        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                            EPTE_RWX_MASK;
+            *page_order = 9;
+        }
+        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG ) {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                    gw.lxe[1].epte & EPTE_RWX_MASK;
+            *page_order = 0;
+        }
+        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
+            *page_order = 18;
+        }
+        else
+        {
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+            BUG();
+        }
+        if ( nept_permission_check(rwx_acc, rwx_bits) )
+        {
+            *l1gfn = gw.lxe[0].mfn;
+            break;
+        }
+        rc = EPT_TRANSLATE_VIOLATION;
+    /* Fall through to EPT violation if permission check fails. */
+    case EPT_TRANSLATE_VIOLATION:
+        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+        *exit_reason = EXIT_REASON_EPT_VIOLATION;
+        break;
+
+    case EPT_TRANSLATE_ERR_PAGE:
+        break;
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = EPT_TRANSLATE_MISCONFIG;
+        *exit_qual = 0;
+        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 4967da1..409198c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
     /* Translate the GFN to an MFN */
     ASSERT(!paging_locked_by_me(v->domain));
     mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
-        
+
     if ( p2m_is_readonly(p2mt) )
     {
         put_gfn(v->domain, gfn);
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..db8a0b6 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..649c511 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..feaaa80 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -51,6 +51,11 @@ typedef union {
     u64 epte;
 } ept_entry_t;
 
+typedef struct {
+    /* use lxe[0] to save the result */
+    ept_entry_t lxe[5];
+} ept_walk_t;
+
 #define EPT_TABLE_ORDER         9
 #define EPTE_SUPER_PAGE_MASK    0x80
 #define EPTE_MFN_MASK           0xffffffffff000ULL
@@ -60,6 +65,28 @@ typedef union {
 #define EPTE_AVAIL1_SHIFT       8
 #define EPTE_EMT_SHIFT          3
 #define EPTE_IGMT_SHIFT         6
+#define EPTE_RWX_MASK           0x7
+#define EPTE_FLAG_MASK          0x7f
+
+#define EPT_EMT_UC              0
+#define EPT_EMT_WC              1
+#define EPT_EMT_RSV0            2
+#define EPT_EMT_RSV1            3
+#define EPT_EMT_WT              4
+#define EPT_EMT_WP              5
+#define EPT_EMT_WB              6
+#define EPT_EMT_RSV2            7
+
+typedef enum {
+    ept_access_n     = 0, /* No access permissions allowed */
+    ept_access_r     = 1,
+    ept_access_w     = 2,
+    ept_access_rw    = 3,
+    ept_access_x     = 4,
+    ept_access_rx    = 5,
+    ept_access_wx    = 6,
+    ept_access_all   = 7,
+} ept_access_t;
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
@@ -419,6 +446,7 @@ void update_guest_eip(void);
 #define _EPT_GLA_FAULT              8
 #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
 
+#define EPT_L4_PAGETABLE_SHIFT      39
 #define EPT_PAGETABLE_ENTRIES       512
 
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..8eb377b 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,12 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_SUCCEED   0
+#define EPT_TRANSLATE_VIOLATION 1
+#define EPT_TRANSLATE_ERR_PAGE  2
+#define EPT_TRANSLATE_MISCONFIG 3
+#define EPT_TRANSLATE_RETRY     4
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +202,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZO-0006jK-3S; Wed, 19 Dec 2012 08:00:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZL-0006hn-OL
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:59 +0000
Received: from [85.158.139.83:5942] by server-11.bemta-5.messagelabs.com id
	1A/12-31624-FF371D05; Wed, 19 Dec 2012 07:59:59 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355903997!26463627!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2932 invoked from network); 19 Dec 2012 07:59:58 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:58 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287202"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:56 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:27 +0800
Message-Id: <1355946267-24227-11-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 10/10] nEPT: expose EPT & VPID capabilities to
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Expose EPT's and VPID's basic features to the L1 VMM.
For EPT, the A/D bit feature is not supported.
For VPID, all features are exposed to the L1 VMM.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++++++++++--
 xen/arch/x86/mm/hap/nested_ept.c   |   19 ++++++++++++-------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 ++
 3 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f2d7039..0da81e3 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1485,6 +1485,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+    {
+        u32 default1_bits = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 1-settings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1506,12 +1508,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_PAUSE_EXITING |
                CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        data = gen_vmx_msr(data, VMX_PROCBASED_CTLS_DEFAULT1, host_data);
+
+        if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
+            default1_bits &= ~(CPU_BASED_CR3_LOAD_EXITING |
+                    CPU_BASED_CR3_STORE_EXITING | CPU_BASED_INVLPG_EXITING);
+
+        data = gen_vmx_msr(data, default1_bits, host_data);
         break;
+    }
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
-               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+               SECONDARY_EXEC_ENABLE_VPID |
+               SECONDARY_EXEC_ENABLE_EPT;
         data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
@@ -1564,6 +1574,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_MISC:
         gdprintk(XENLOG_WARNING, "VMX MSR %x not fully supported yet.\n", msr);
         break;
+    case MSR_IA32_VMX_EPT_VPID_CAP:
+        data = nept_get_ept_vpid_cap();
+        break;
     default:
         r = 0;
         break;
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 4b99281..5b60f37 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -43,12 +43,15 @@
 #define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
                      ~((1ull << paddr_bits) - 1))
 
-/*
- *TODO: Just leave it as 0 here for compile pass, will
- * define real capabilities in the subsequent patches.
- */
-#define NEPT_VPID_CAP_BITS 0
-
+#define NEPT_VPID_CAP_BITS  \
+        (VMX_EPT_INVEPT_ALL_CONTEXT | VMX_EPT_INVEPT_SINGLE_CONTEXT |   \
+        VMX_EPT_INVEPT_INSTRUCTION | VMX_EPT_SUPERPAGE_1GB |            \
+        VMX_EPT_SUPERPAGE_2MB | VMX_EPT_MEMORY_TYPE_WB |                \
+        VMX_EPT_MEMORY_TYPE_UC | VMX_EPT_WALK_LENGTH_4_SUPPORTED |      \
+        VMX_EPT_EXEC_ONLY_SUPPORTED | VMX_VPID_INVVPID_INSTRUCTION |    \
+        VMX_VPID_INVVPID_INDIVIDUAL_ADDR |                              \
+        VMX_VPID_INVVPID_SINGLE_CONTEXT | VMX_VPID_INVVPID_ALL_CONTEXT |\
+        VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL)
 
 #define NEPT_1G_ENTRY_FLAG (1 << 11)
 #define NEPT_2M_ENTRY_FLAG (1 << 10)
@@ -129,7 +132,9 @@ static bool_t nept_non_present_check(ept_entry_t e)
 
 uint64_t nept_get_ept_vpid_cap(void)
 {
-    return NEPT_VPID_CAP_BITS;
+    if ( cpu_has_vmx_ept && cpu_has_vmx_vpid )
+        return NEPT_VPID_CAP_BITS;
+    return 0;
 }
 
 static int ept_lvl_table_offset(unsigned long gpa, int lvl)
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 28dd727..1e7a6d7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -208,6 +208,8 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+uint64_t nept_get_ept_vpid_cap(void);
+
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
                         unsigned long *l1gfn, uint8_t *p2m_acc,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 28dd727..1e7a6d7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -208,6 +208,8 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+uint64_t nept_get_ept_vpid_cap(void);
+
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
                         unsigned long *l1gfn, uint8_t *p2m_acc,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:00:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEZC-0006dv-I0; Wed, 19 Dec 2012 07:59:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEZA-0006dT-Pz
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 07:59:49 +0000
Received: from [85.158.139.83:64763] by server-11.bemta-5.messagelabs.com id
	A6/B1-31624-4F371D05; Wed, 19 Dec 2012 07:59:48 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1355903986!30528495!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5NzUwMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3083 invoked from network); 19 Dec 2012 07:59:47 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-5.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 07:59:47 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Dec 2012 23:59:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="264287124"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 18 Dec 2012 23:59:44 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 03:44:19 +0800
Message-Id: <1355946267-24227-3-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 02/10] nestedhap: Change nested p2m's walker
	to vendor-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

EPT and NPT adopt different formats for each-level entry,
so change the walker functions to vendor-specific ones.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
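The hook pattern the patch introduces can be sketched as follows: a per-vendor walker registered in a function table, plus a common wrapper that asserts the hook is present before dispatching. The names and the identity translation below are simplified stand-ins for the real hvm_function_table plumbing, not Xen code.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical analogue of hvm_function_table's nhvm_hap_walk_L1_p2m hook. */
typedef int (*walk_fn)(unsigned long l2_gpa, unsigned long *l1_gpa);

struct fn_table {
    walk_fn walk_l1_p2m;   /* vendor-specific L1 p2m walker */
};

/* A stand-in vendor implementation: identity translation, "done" result. */
static int svm_walk(unsigned long l2_gpa, unsigned long *l1_gpa)
{
    *l1_gpa = l2_gpa;
    return 0;              /* analogue of NESTEDHVM_PAGEFAULT_DONE */
}

static struct fn_table funcs = { .walk_l1_p2m = svm_walk };

/* Common wrapper: the generic nested-HAP code calls through the table. */
static int walk_l1_p2m(unsigned long l2_gpa, unsigned long *l1_gpa)
{
    assert(funcs.walk_l1_p2m);
    return funcs.walk_l1_p2m(l2_gpa, l1_gpa);
}
```

This keeps the generic nested_hap.c code vendor-agnostic: SVM keeps the existing pagetable-style walk, while VMX can later plug in an EPT-format walker behind the same hook.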
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c              |    1 +
 xen/arch/x86/hvm/vmx/vmx.c              |    3 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |   13 +++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   46 +++++++++++--------------------
 xen/include/asm-x86/hvm/hvm.h           |    5 +++
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 ++
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    5 +++
 8 files changed, 76 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index ed0faa6..5dcb354 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1171,6 +1171,37 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
     return vcpu_nestedsvm(v).ns_hap_enabled;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    uint32_t pfec;
+    unsigned long nested_cr3, gfn;
+    
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
+
+    pfec = PFEC_user_mode | PFEC_page_present;
+    if (access_w)
+        pfec |= PFEC_write_access;
+    if (access_x)
+        pfec |= PFEC_insn_fetch;
+
+    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
+
+    if ( gfn == INVALID_GFN ) 
+        return NESTEDHVM_PAGEFAULT_INJECT;
+
+    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+    return NESTEDHVM_PAGEFAULT_DONE;
+}
+
+
 enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2c8504a..acd2d49 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2008,6 +2008,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
+    .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 98309da..4abfa90 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1511,7 +1511,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_intr_blocked    = nvmx_intr_blocked,
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap,
-    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled
+    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled,
+    .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6d1a736..4495dd6 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1445,6 +1445,19 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
     return 1;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    /*TODO:*/
+    return 0;
+}
+
 void nvmx_idtv_handling(void)
 {
     struct vcpu *v = current;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /*Walk nested p2m  */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEmC-0000RI-9X; Wed, 19 Dec 2012 08:13:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEmB-0000RD-7t
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 08:13:15 +0000
Received: from [85.158.139.83:46784] by server-13.bemta-5.messagelabs.com id
	DA/48-10716-A1771D05; Wed, 19 Dec 2012 08:13:14 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355904793!26465839!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMDM3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21872 invoked from network); 19 Dec 2012 08:13:13 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-7.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 08:13:13 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 19 Dec 2012 00:12:19 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="259518978"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 19 Dec 2012 00:13:11 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 00:13:11 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 00:13:11 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Wed, 19 Dec 2012 16:13:09 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>, "xen-devel@lists.xensource.com"
	<xen-devel@lists.xensource.com>
Thread-Topic: Xen 4.2 and PCI hotplug.
Thread-Index: AQHN3XF/WgpPKKZtrU+hLDd0DNirTJgfxZEg
Date: Wed, 19 Dec 2012 08:13:08 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
In-Reply-To: <20121218224606.GA6918@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Are you playing with Xen? So far, Xen doesn't support the PCIe device hot-plug feature yet. But if you want to hot-plug a PCIe device into a Xen guest, that should be supported well.
Xiantao 

> -----Original Message-----
> From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of Konrad
> Rzeszutek Wilk
> Sent: Wednesday, December 19, 2012 6:46 AM
> To: Zhang, Xiantao; xen-devel@lists.xensource.com
> Subject: Xen 4.2 and PCI hotplug.
> 
> Hey,
> 
> I was wondering if the ACPI PCI hotplug (so inserting a new PCIe card in a
> server that supports said functionality) is something that Intel has been
> testing or using?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEmC-0000RI-9X; Wed, 19 Dec 2012 08:13:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlEmB-0000RD-7t
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 08:13:15 +0000
Received: from [85.158.139.83:46784] by server-13.bemta-5.messagelabs.com id
	DA/48-10716-A1771D05; Wed, 19 Dec 2012 08:13:14 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355904793!26465839!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwMDM3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21872 invoked from network); 19 Dec 2012 08:13:13 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-7.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 08:13:13 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 19 Dec 2012 00:12:19 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="259518978"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 19 Dec 2012 00:13:11 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 00:13:11 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 00:13:11 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Wed, 19 Dec 2012 16:13:09 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>, "xen-devel@lists.xensource.com"
	<xen-devel@lists.xensource.com>
Thread-Topic: Xen 4.2 and PCI hotplug.
Thread-Index: AQHN3XF/WgpPKKZtrU+hLDd0DNirTJgfxZEg
Date: Wed, 19 Dec 2012 08:13:08 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
In-Reply-To: <20121218224606.GA6918@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Are you testing with Xen? So far, Xen does not yet support the PCIe device hot-plug feature at the host level. But if you want to hot-plug a PCIe device into a Xen guest, that should be supported well.
Xiantao
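
For guest-level hot-plug, the usual flow with the xl toolstack looks like the following sketch. The domain name and the device's BDF below are placeholders; substitute your own:

```shell
# Make the device assignable (detaches it from its dom0 driver).
# 0000:03:00.0 is a placeholder BDF.
xl pci-assignable-add 0000:03:00.0

# Hot-plug the device into a running guest named "guest1".
xl pci-attach guest1 0000:03:00.0

# Later, hot-unplug it again.
xl pci-detach guest1 0000:03:00.0
```

These commands need to run as root in dom0; the guest must have the pcifront/pciback plumbing available for the attach to complete.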

> -----Original Message-----
> From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of Konrad
> Rzeszutek Wilk
> Sent: Wednesday, December 19, 2012 6:46 AM
> To: Zhang, Xiantao; xen-devel@lists.xensource.com
> Subject: Xen 4.2 and PCI hotplug.
> 
> Hey,
> 
> I was wondering if the ACPI PCI hotplug (so inserting a new PCIe card in a
> server that supports said functionality) is something that Intel has been
> testing or using?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:23:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlEvj-0000bf-Ej; Wed, 19 Dec 2012 08:23:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TlEvg-0000ba-Ub
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 08:23:05 +0000
Received: from [85.158.139.83:53457] by server-7.bemta-5.messagelabs.com id
	33/91-08009-86971D05; Wed, 19 Dec 2012 08:23:04 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355905379!27769861!1
X-Originating-IP: [220.181.15.62]
X-SpamReason: No, hits=2.4 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDExMjQ0\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDExMjQ0\n,HTML_40_50,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3152 invoked from network); 19 Dec 2012 08:23:01 -0000
Received: from m15-62.126.com (HELO m15-62.126.com) (220.181.15.62)
	by server-4.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 08:23:01 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=KazlDcjZDKUq1BdirzA/u7n7vj4ZOrONafXs
	gUKADyA=; b=MN2G6Gmh0lzt4TJ9bey0MudMp1bOfd0o9Jz+gNNEROx4Qdsy2M9k
	AKjHSDkPH2qdbH9Ubli08tCVHVynxFUg8KnEcyteSKT1WElJkjiX+OKq+cRBYfsu
	5mIoV7NoRQSv0BZiiZbbDY0ewRsDtCdD5jfCjcBPFHnCQl9qcaoQrso=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr62
	(Coremail) ; Wed, 19 Dec 2012 16:22:54 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Wed, 19 Dec 2012 16:22:54 +0800 (CST)
From: hxkhust <hxkhust@126.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: /364b2Zvb3Rlcl9odG09MzA3MTo4MQ==
MIME-Version: 1.0
Message-ID: <1d22c86e.1162a.13bb2421883.Coremail.hxkhust@126.com>
X-CM-TRANSID: PsqowEAZQEJeedFQvKkbAA--.4884W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitBuKBUX9klwifAAAsI
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(about read operation in qemu-img-xen)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9190638590533725399=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9190638590533725399==
Content-Type: multipart/alternative; 
	boundary="----=_Part_269305_2121489187.1355905374339"

------=_Part_269305_2121489187.1355905374339
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi, guys,


What I am concerned about is the following code, from /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c:
static void qcow_aio_read_cb(void *opaque, int ret)
{
........
if (!acb->cluster_offset) {
        if (bs->backing_hd) {
            /* read from the base image */
            acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,                           //*************
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);   //**************
// I read what acb->buf points to here, but find the read operation has not finished yet.
            if (acb->hd_aiocb == NULL)
                goto fail;
        } else {
            /* Note: in this case, no need to wait */
            memset(acb->buf, 0, 512 * acb->n);
            goto redo;
        }
    } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
        /* add AIO support for compressed blocks ? */
        if (decompress_cluster(s, acb->cluster_offset) < 0)
            goto fail;
        memcpy(acb->buf,
               s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
        goto redo;
 .........
//********************************************************************************************
When the statement:
acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
has returned, the content that acb->buf points to has not yet been prepared. This is an asynchronous read operation. Could someone explain the principle or flow of this asynchronous read in this code? If you can describe it with reference to the Xen code, that would be very kind of you. I need to know when the data has been copied into the memory that acb->buf points to; this problem is important to me. As the title says, I have to solve it as soon as possible.


A newbie
------=_Part_269305_2121489187.1355905374339--



--===============9190638590533725399==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9190638590533725399==--



From xen-devel-bounces@lists.xen.org Wed Dec 19 08:25:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlExh-0000iV-1Q; Wed, 19 Dec 2012 08:25:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TlExf-0000iL-Bx
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 08:25:07 +0000
Received: from [193.109.254.147:26533] by server-11.bemta-14.messagelabs.com
	id 99/5A-02659-2E971D05; Wed, 19 Dec 2012 08:25:06 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355905465!3456296!1
X-Originating-IP: [220.181.15.62]
X-SpamReason: No, hits=1.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDExMjQ0\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDExMjQ0\n,HTML_50_60,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31736 invoked from network); 19 Dec 2012 08:24:27 -0000
Received: from m15-62.126.com (HELO m15-62.126.com) (220.181.15.62)
	by server-10.tower-27.messagelabs.com with SMTP;
	19 Dec 2012 08:24:27 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=ccKulrNyFxS+B101KZdk8FzmRp6wtRXskVdH
	JfCLzhc=; b=YKKa0IlZSe3Aa0p8qAqzCIFm0FkmpubTSFVB91ysKEe+nC0wjBbC
	vPIk1aRx3M3XOLdQWfkVkpdIJRTcdiHe1+L8KS+LKhtpFqX2ZGb8/oGAgsLzIaMx
	gw7091P3l6VzZXlwUIigmrhDnBXTJlOi0RjUmIw6ECbfUgZPsSm6vUo=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr62
	(Coremail) ; Wed, 19 Dec 2012 16:24:23 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Wed, 19 Dec 2012 16:24:23 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: 1gN+UWZvb3Rlcl9odG09MzIzNDo4MQ==
MIME-Version: 1.0
Message-ID: <5b77509e.1174a.13bb2437601.Coremail.hxkhust@126.com>
X-CM-TRANSID: PsqowED5IkW3edFQUasbAA--.1861W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitBuKBUX9klwifAABsJ
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(about read operation in qemu-img-xen)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3799715335036896564=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3799715335036896564==
Content-Type: multipart/alternative; 
	boundary="----=_Part_270381_109645334.1355905463809"

------=_Part_270381_109645334.1355905463809
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi, guys,


What I am concerned about is the following code, from /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c:
static void qcow_aio_read_cb(void *opaque, int ret)
{
........
if (!acb->cluster_offset) {
        if (bs->backing_hd) {
            /* read from the base image */
            acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,                           //*************
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);   //**************
// I read what acb->buf points to here, but find the read operation has not finished yet.
            if (acb->hd_aiocb == NULL)
                goto fail;
        } else {
            /* Note: in this case, no need to wait */
            memset(acb->buf, 0, 512 * acb->n);
            goto redo;
        }
    } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
        /* add AIO support for compressed blocks ? */
        if (decompress_cluster(s, acb->cluster_offset) < 0)
            goto fail;
        memcpy(acb->buf,
               s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
        goto redo;
 .........
//********************************************************************************************
When the statement:
acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
has returned, the content that acb->buf points to has not yet been prepared. This is an asynchronous read operation. Could someone explain the principle or flow of this asynchronous read in this code? If you can describe it with reference to the Xen code, that would be very kind of you. I need to know when the data has been copied into the memory that acb->buf points to; this problem is important to me. As the title says, I have to solve it as soon as possible.


A newbie



------=_Part_270381_109645334.1355905463809--



--===============3799715335036896564==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3799715335036896564==--



From xen-devel-bounces@lists.xen.org Wed Dec 19 08:25:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlExh-0000iV-1Q; Wed, 19 Dec 2012 08:25:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TlExf-0000iL-Bx
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 08:25:07 +0000
Received: from [193.109.254.147:26533] by server-11.bemta-14.messagelabs.com
	id 99/5A-02659-2E971D05; Wed, 19 Dec 2012 08:25:06 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355905465!3456296!1
X-Originating-IP: [220.181.15.62]
X-SpamReason: No, hits=1.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDExMjQ0\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDExMjQ0\n,HTML_50_60,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31736 invoked from network); 19 Dec 2012 08:24:27 -0000
Received: from m15-62.126.com (HELO m15-62.126.com) (220.181.15.62)
	by server-10.tower-27.messagelabs.com with SMTP;
	19 Dec 2012 08:24:27 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=ccKulrNyFxS+B101KZdk8FzmRp6wtRXskVdH
	JfCLzhc=; b=YKKa0IlZSe3Aa0p8qAqzCIFm0FkmpubTSFVB91ysKEe+nC0wjBbC
	vPIk1aRx3M3XOLdQWfkVkpdIJRTcdiHe1+L8KS+LKhtpFqX2ZGb8/oGAgsLzIaMx
	gw7091P3l6VzZXlwUIigmrhDnBXTJlOi0RjUmIw6ECbfUgZPsSm6vUo=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr62
	(Coremail) ; Wed, 19 Dec 2012 16:24:23 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Wed, 19 Dec 2012 16:24:23 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: 1gN+UWZvb3Rlcl9odG09MzIzNDo4MQ==
MIME-Version: 1.0
Message-ID: <5b77509e.1174a.13bb2437601.Coremail.hxkhust@126.com>
X-CM-TRANSID: PsqowED5IkW3edFQUasbAA--.1861W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitBuKBUX9klwifAABsJ
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(about read operation in qemu-img-xen)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3799715335036896564=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3799715335036896564==
Content-Type: multipart/alternative; 
	boundary="----=_Part_270381_109645334.1355905463809"

------=_Part_270381_109645334.1355905463809
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi guys,


What concerns me is the following (from /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c):
static void qcow_aio_read_cb(void *opaque, int ret)
{
........
    if (!acb->cluster_offset) {
        if (bs->backing_hd) {
            /* read from the base image */
            acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,                           //*************
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);   //**************
//I read what acb->buf points to here, but find the read has not finished yet.
            if (acb->hd_aiocb == NULL)
                goto fail;
        } else {
            /* Note: in this case, no need to wait */
            memset(acb->buf, 0, 512 * acb->n);
            goto redo;
        }
    } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
        /* add AIO support for compressed blocks ? */
        if (decompress_cluster(s, acb->cluster_offset) < 0)
            goto fail;
        memcpy(acb->buf,
               s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
        goto redo;
 .........
//*******************************************************************************************
When the statement:
acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
returns, the data that acb->buf points to has not yet been prepared, because this is an asynchronous read. Could someone explain the principle and flow of this asynchronous read operation, ideally in terms of the Xen/qemu code itself? That would be very kind of you. I need to know at what point the data has actually been copied into the memory that acb->buf points to; this problem is important to me and, as the subject says, I have to solve it as soon as possible.


A newbie



------=_Part_270381_109645334.1355905463809--



--===============3799715335036896564==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3799715335036896564==--



From xen-devel-bounces@lists.xen.org Wed Dec 19 08:35:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlF7I-000139-FN; Wed, 19 Dec 2012 08:35:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlF7G-000134-Pc
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:35:02 +0000
Received: from [85.158.143.99:29787] by server-2.bemta-4.messagelabs.com id
	99/7D-30861-63C71D05; Wed, 19 Dec 2012 08:35:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355906100!23262388!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10560 invoked from network); 19 Dec 2012 08:35:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 08:35:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 08:35:03 +0000
Message-Id: <50D1840D02000078000B154D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 08:08:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1355856402-26614-3-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 19:46, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> +int __init fb_init(struct fb_prop fbp)

Any reason to pass a structure by value here?

> +{
> +    if ( fbp.width > MAX_XRES || fbp.height > MAX_YRES )
> +    {
> +        printk("Couldn't initialize a %xx%x framebuffer early.\n",

%x to me seems to produce rather meaningless messages here;
customarily screen dimensions are expected to be decimal. Also,
in new code you should specify a log level.

> +                        fbp.width, fbp.height);
> +        return -EINVAL;
> +    }
> +
> +    fb.fbp = fbp;
> +    fb.lbuf = lbuf;
> +    fb.text_buf = text_buf;
> +    fb.line_len = line_len;
> +    return 0;
> +}
> +
> +int __init fb_alloc(void)
> +{
> +    fb.lbuf = NULL;
> +    fb.text_buf = NULL;
> +    fb.line_len = NULL;
> +
> +    fb.lbuf = xmalloc_bytes(fb.fbp.bytes_per_line);
> +    if ( !fb.lbuf )
> +        goto fail;
> +
> +    fb.text_buf = xzalloc_bytes(fb.fbp.text_columns * fb.fbp.text_rows);
> +    if ( !fb.text_buf )
> +        goto fail;
> +
> +    fb.line_len = xzalloc_array(unsigned int, fb.fbp.text_columns);
> +    if ( !fb.line_len )
> +        goto fail;
> +
> +    memcpy(fb.lbuf, lbuf, fb.fbp.bytes_per_line);
> +    memcpy(fb.text_buf, text_buf, fb.fbp.text_columns * fb.fbp.text_rows);
> +    memcpy(fb.line_len, line_len, fb.fbp.text_columns);
> +
> +    return 0;
> +
> +fail:
> +    printk(XENLOG_ERR "Couldn't allocate enough memory to drive "
> +                    "the framebuffer\n");

I think it was generally agreed that breaking up messages to fit 80
columns is undesirable.

> +    xfree(fb.lbuf);
> +    xfree(fb.text_buf);
> +    xfree(fb.line_len);

fb_free()?

> +
> +    return -ENOMEM;
> +}
> +
> +void fb_free(void)
> +{
> +    xfree(fb.lbuf);
> +    xfree(fb.text_buf);
> +    xfree(fb.line_len);
> +}
> --- /dev/null
> +++ b/xen/drivers/video/fb.h
> @@ -0,0 +1,49 @@
> +/*
> + * xen/drivers/video/fb.h
> + *
> + * Cross-platform framebuffer library
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + * Copyright (c) 2012 Citrix Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef _XEN_FB_H
> +#define _XEN_FB_H
> +
> +#include <xen/init.h>
> +
> +struct fb_prop {
> +    const struct font_desc *font;
> +    unsigned char *lfb;
> +    unsigned int pixel_on;
> +    uint16_t width, height;
> +    uint16_t bytes_per_line;
> +    uint16_t bits_per_pixel;
> +    void (*flush)(void);
> +
> +    unsigned int text_columns;
> +    unsigned int text_rows;
> +};
> +
> +void fb_redraw_puts(const char *s);
> +void fb_scroll_puts(const char *s);
> +void fb_carriage_return(void);
> +void fb_free(void);
> +
> +/* initialize the framebuffer, can be called early (before xmalloc is
> + * available) */
> +int __init fb_init(struct fb_prop fbp);
> +/* fb_alloc allocates internal structures using xmalloc */
> +int __init fb_alloc(void);

No __init annotations on declarations, please.

Jan

> +
> +#endif



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:36:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlF84-000169-UC; Wed, 19 Dec 2012 08:35:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlF83-00015z-D3
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 08:35:51 +0000
Received: from [85.158.143.99:35131] by server-2.bemta-4.messagelabs.com id
	7A/7E-30861-66C71D05; Wed, 19 Dec 2012 08:35:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355906149!29982202!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13690 invoked from network); 19 Dec 2012 08:35:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 08:35:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 08:35:51 +0000
Message-Id: <50D1855102000078000B1550@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 08:13:53 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	<xen-devel@lists.xensource.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1355856402-26614-3-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH v3 3/8] xen: introduce a generic framebuffer
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.12.12 at 19:46, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> Abstract away from vesa.c the functions to handle a linear framebuffer
> and print characters to it.
> The corresponding functions are going to be removed from vesa.c in the
> next patch.
> 
> Changes in v3:
> - rename fb_cr to fb_carriage_return.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Oh, and (sorry for not noticing this earlier) as this is about _linear_
frame buffers only, naming the types, functions, and variables
lfb_* rather than fb_* seems very desirable to me.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:36:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlF89-00016e-Ca; Wed, 19 Dec 2012 08:35:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlF88-00016S-RH
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:35:56 +0000
Received: from [85.158.143.99:3664] by server-3.bemta-4.messagelabs.com id
	83/24-18211-C6C71D05; Wed, 19 Dec 2012 08:35:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355906153!17856474!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2810 invoked from network); 19 Dec 2012 08:35:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 08:35:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 08:35:57 +0000
Message-Id: <50D186B502000078000B1557@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 08:19:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-2-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1355946267-24227-2-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, jun.nakajima@intel.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 01/10] nestedhap: Change hostcr3 and
 p2m->cr3 to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 20:44, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> VMX doesn't have the concept about host cr3 for nested p2m,
> and only SVM has, so change it to neutral words.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

You had an ack on this from Tim already, so unless you needed to
drop it because of non-trivial changes (which I see no indication
of either here or in 00/10), you should be adding such below your
S-o-b line to save committers from having to go hunt for prior
acks in the list archives. Same for at least 02/10 and 06/10.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:36:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlF89-00016e-Ca; Wed, 19 Dec 2012 08:35:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlF88-00016S-RH
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:35:56 +0000
Received: from [85.158.143.99:3664] by server-3.bemta-4.messagelabs.com id
	83/24-18211-C6C71D05; Wed, 19 Dec 2012 08:35:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355906153!17856474!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2810 invoked from network); 19 Dec 2012 08:35:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 08:35:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 08:35:57 +0000
Message-Id: <50D186B502000078000B1557@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 08:19:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-2-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1355946267-24227-2-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, jun.nakajima@intel.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 01/10] nestedhap: Change hostcr3 and
 p2m->cr3 to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 20:44, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> VMX doesn't have the concept about host cr3 for nested p2m,
> and only SVM has, so change it to netural words.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

You had an ack on this from Tim already, so unless you needed to
drop it because of non-trivial changes (of which I see no indication
either here or in 00/10), you should be adding it below your
S-o-b line, to save committers from having to go hunt for prior
acks in the list archives. Same for at least 02/10 and 06/10.
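[Archive editor's note: the convention Jan refers to is the kernel-style tag
block at the end of the commit message. A resend that carries a previously
granted ack simply stacks the tags below the submitter's S-o-b; the reviewer
name below is assumed from this thread, for illustration only:]

```text
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
```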

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:38:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlFA4-0001Mn-1C; Wed, 19 Dec 2012 08:37:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlFA2-0001MZ-6M
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:37:54 +0000
Received: from [85.158.143.35:64123] by server-3.bemta-4.messagelabs.com id
	47/76-18211-1EC71D05; Wed, 19 Dec 2012 08:37:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355906226!10129161!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29173 invoked from network); 19 Dec 2012 08:37:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 08:37:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 08:37:10 +0000
Message-Id: <50D188C902000078000B1572@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 08:28:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-9-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1355946267-24227-9-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, jun.nakajima@intel.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 08/10] nEPT: handle invept instruction
 from L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 20:44, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2572,11 +2572,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>          if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
>              update_guest_eip();
>          break;
> -
> +    case EXIT_REASON_INVEPT:
> +        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
> +            update_guest_eip();
> +        break;

In switch statements that are (or may grow) long, please don't
drop the blank lines between individual cases - instead of
dropping the line here, you would want to insert another one
below the new separately handled case.
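[Archive editor's note: a minimal sketch of the layout being asked for. The
constants and the `handle()` helper are made-up stand-ins, not Xen's real
EXIT_REASON_* values or handler; only the blank-line placement matters:]

```c
/* Illustrative stand-ins for Xen's EXIT_REASON_* constants. */
enum exit_reason { EXIT_VMRESUME, EXIT_INVEPT, EXIT_MWAIT };

/* Each separately handled case is set off by a blank line, so the
 * switch stays readable as new exit reasons accumulate over time. */
static const char *handle(enum exit_reason r)
{
    switch ( r )
    {
    case EXIT_VMRESUME:
        return "vmresume";

    case EXIT_INVEPT:
        return "invept";

    case EXIT_MWAIT:
        return "mwait";
    }

    return "unknown";
}
```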

>      case EXIT_REASON_MWAIT_INSTRUCTION:
>      case EXIT_REASON_MONITOR_INSTRUCTION:
>      case EXIT_REASON_GETSEC:
> -    case EXIT_REASON_INVEPT:
>      case EXIT_REASON_INVVPID:
>          /*
>           * We should never exit on GETSEC because CR4.SMXE is always 0 when
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1356,6 +1356,45 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
>      return X86EMUL_OKAY;
>  }
>  
> +int nvmx_handle_invept(struct cpu_user_regs *regs)
> +{
> +    struct vmx_inst_decoded decode;
> +    unsigned long eptp;
> +    u64 inv_type;
> +
> +    if ( !cpu_has_vmx_ept )
> +        return X86EMUL_EXCEPTION;
> +
> +    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
> +             != X86EMUL_OKAY )
> +        return X86EMUL_EXCEPTION;
> +
> +    inv_type = reg_read(regs, decode.reg2);
> +    gdprintk(XENLOG_DEBUG,"inv_type:%ld, eptp:%lx\n", inv_type, eptp);

An unconditional printk() on an operation potentially happening
quite frequently? Even with XENLOG_DEBUG this is not acceptable
imo.
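[Archive editor's note: one hedged way to address this objection is to budget
the debug output rather than print unconditionally. Everything below is an
illustrative sketch, not Xen code: `emulated_invept` and `dbg_budget` are
invented names, and real code would use gdprintk() or drop the message;
only the invept type encodings (single-context = 1, all-context = 2)
match the VMX definitions:]

```c
#include <stdio.h>

enum { INVEPT_SINGLE_CONTEXT = 1, INVEPT_ALL_CONTEXT = 2 };

/* Sketch of a hot-path emulation helper that prints at most a few
 * diagnostic lines instead of one per invocation. */
static int emulated_invept(unsigned long inv_type, unsigned long eptp)
{
    static unsigned int dbg_budget = 4;  /* emit at most 4 messages */

    if ( dbg_budget )
    {
        dbg_budget--;
        fprintf(stderr, "invept: type %lu eptp %#lx\n", inv_type, eptp);
    }

    switch ( inv_type )
    {
    case INVEPT_SINGLE_CONTEXT:
    case INVEPT_ALL_CONTEXT:
        /* Real code would flush the nested p2m / EPT here. */
        return 0;

    default:
        return -1;  /* the emulator would fail the instruction */
    }
}
```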

> +
> +    switch ( inv_type ) {
> +    case INVEPT_SINGLE_CONTEXT:
> +        {
> +            struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
> +            if ( p2m )
> +            {
> +	            p2m_flush(current, p2m);

Despite your comment in 00/10, there is still a whitespace issue
at least here (I didn't look that closely elsewhere).

> +                ept_sync_domain(p2m);
> +            }
> +        }
> +        break;
> +    case INVEPT_ALL_CONTEXT:
> +        p2m_flush_nestedp2m(current->domain);
> +        __invept(INVEPT_ALL_CONTEXT, 0, 0);
> +        break;
> +    default:
> +        return X86EMUL_EXCEPTION;
> +    }
> +    vmreturn(regs, VMSUCCEED);
> +    return X86EMUL_OKAY;
> +}
> +
> +
>  #define __emul_value(enable1, default1) \
>      ((enable1 | default1) << 32 | (default1))
>  
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1465,7 +1465,7 @@ p2m_flush_table(struct p2m_domain *p2m)
>  void
>  p2m_flush(struct vcpu *v, struct p2m_domain *p2m)
>  {
> -    ASSERT(v->domain == p2m->domain);
> +    ASSERT(p2m && v->domain == p2m->domain);

How is this change related to the rest of the patch?

Jan

>      vcpu_nestedhvm(v).nv_p2m = NULL;
>      p2m_flush_table(p2m);
>      hvm_asid_flush_vcpu(v);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:41:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlFDN-0001dz-N0; Wed, 19 Dec 2012 08:41:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlFDM-0001dr-Ak
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:41:20 +0000
Received: from [85.158.143.35:21569] by server-2.bemta-4.messagelabs.com id
	06/35-30861-FAD71D05; Wed, 19 Dec 2012 08:41:19 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1355906462!14307091!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjQwNjc5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26584 invoked from network); 19 Dec 2012 08:41:04 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-8.tower-21.messagelabs.com with SMTP;
	19 Dec 2012 08:41:04 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 19 Dec 2012 00:41:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="233752485"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by azsmga001.ch.intel.com with ESMTP; 19 Dec 2012 00:40:59 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 00:40:59 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Wed, 19 Dec 2012 16:40:50 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v2 01/10] nestedhap: Change hostcr3 and p2m->cr3 to
	meaningful words
Thread-Index: AQHN3cPo1ImzaQhoGk2J3pgpzlTnwJgfzOeQ
Date: Wed, 19 Dec 2012 08:40:50 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BA53F@SHSMSX101.ccr.corp.intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-2-git-send-email-xiantao.zhang@intel.com>
	<50D186B502000078000B1557@nat28.tlf.novell.com>
In-Reply-To: <50D186B502000078000B1557@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Dong, Eddie" <eddie.dong@intel.com>,
	"tim@xen.org" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 01/10] nestedhap: Change hostcr3 and
 p2m->cr3 to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks, I will send them again.
Xiantao

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, December 19, 2012 4:20 PM
> To: Zhang, Xiantao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org; keir@xen.org;
> tim@xen.org
> Subject: Re: [PATCH v2 01/10] nestedhap: Change hostcr3 and p2m->cr3 to
> meaningful words
> 
> >>> On 19.12.12 at 20:44, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> > From: Zhang Xiantao <xiantao.zhang@intel.com>
> >
> > VMX doesn't have the concept about host cr3 for nested p2m, and only
> > SVM has, so change it to netural words.
> >
> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> You had an ack on this from Tim already, so unless you needed to drop it
> because of non-trivial changes (which I see no indication of either here or in
> 00/10), you should be adding such below your S-o-b line to avoid committers
> from having to go hunt for prior acks in the list archives. Same for at least
> 02/10 and 06/10.
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 08:48:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 08:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlFJr-0001rN-NL; Wed, 19 Dec 2012 08:48:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlFJq-0001rI-FF
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:48:02 +0000
Received: from [85.158.143.99:49064] by server-3.bemta-4.messagelabs.com id
	49/83-18211-14F71D05; Wed, 19 Dec 2012 08:48:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355906880!29153305!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11934 invoked from network); 19 Dec 2012 08:48:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 08:48:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 08:48:05 +0000
Message-Id: <50D18D4E02000078000B15B3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 08:47:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> Are you playing with Xen ?  so far,  Xen doesn't support PCIe device hot-plug 
> feature yet.

When saying Xen, I assume you mean the pv-ops kernel instead? So
far I was under the impression that this worked even with the very
old 2.6.18 tree (as much or as little as hotplug there worked in the
native case). And given that there are no special requirements on
the hypervisor to make this work, it's not even obvious to me what
would be missing in the pv-ops kernel to make it work.

All that is of course with me not having any practical experience
with hotplug, due to the lack of capable hardware...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 09:15:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 09:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlFji-0002Gh-D4; Wed, 19 Dec 2012 09:14:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlFjg-0002Ga-H9
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 09:14:44 +0000
Received: from [85.158.139.83:17252] by server-5.bemta-5.messagelabs.com id
	FA/17-22648-38581D05; Wed, 19 Dec 2012 09:14:43 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355908468!27778977!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMyMjA3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24165 invoked from network); 19 Dec 2012 09:14:29 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-182.messagelabs.com with SMTP;
	19 Dec 2012 09:14:29 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 19 Dec 2012 01:14:24 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,313,1355126400"; d="scan'208";a="236310332"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 19 Dec 2012 01:14:15 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 01:14:15 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 01:14:15 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Wed, 19 Dec 2012 17:14:13 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] Xen 4.2 and PCI hotplug.
Thread-Index: AQHN3XF/WgpPKKZtrU+hLDd0DNirTJgfxZEg//+EhgCAAIjQ0A==
Date: Wed, 19 Dec 2012 09:14:13 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BA621@SHSMSX101.ccr.corp.intel.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
	<50D18D4E02000078000B15B3@nat28.tlf.novell.com>
In-Reply-To: <50D18D4E02000078000B15B3@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, December 19, 2012 4:48 PM
> To: Zhang, Xiantao
> Cc: Konrad Rzeszutek Wilk; xen-devel
> Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
> 
> >>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> wrote:
> > Are you playing with Xen ?  so far,  Xen doesn't support PCIe device
> > hot-plug feature yet.
> 
> When saying Xen, I assume you mean the pv-ops kernel instead? So far I was
> under the impression that this worked even with the very old 2.6.18 tree (as
> much or as little as hotplug there worked in the native case). And given that
> there are no special requirements on the hypervisor to make this work, it's
> not even obvious to me what would be missing in the pv-ops kernel to make
> it work. 
Oh, my fault!  Perhaps we don't need to do anything in the pv-ops kernel to support device hot-plug if the native system already supports it.  Actually, we didn't do such testing before, since it is a native feature, not a Xen-specific one.
Xiantao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:15:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlGg0-00030k-38; Wed, 19 Dec 2012 10:15:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlGfz-00030f-9V
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:14:59 +0000
Received: from [85.158.143.99:48003] by server-2.bemta-4.messagelabs.com id
	41/A7-30861-2A391D05; Wed, 19 Dec 2012 10:14:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355912065!23280089!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23238 invoked from network); 19 Dec 2012 10:14:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:14:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="247288"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 10:14:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 10:14:25 +0000
Message-ID: <1355912063.14620.286.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Razvan Cojocaru <rzvncj@gmail.com>
Date: Wed, 19 Dec 2012 10:14:23 +0000
In-Reply-To: <50D0A6B1.30702@gmail.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 17:24 +0000, Razvan Cojocaru wrote:
> Thanks for the reply!
> 
> > Do you have a user in mind for this new functionality?
> 
> Yes. An (userspace) application that needs to look at this information 
> to decide if a set of pages are interesting for monitoring.
> 
> > This version seems to do a lot less than get_mtrr_type() in the
> > hypervisor. Is that deliberate? Why isn't the fixed mtrr slot and
> > overlap handling required here?
> 
> It does do less. It's somewhat deliberate :), ideally it should have 
> done everything that get_mtrr_type() does. It did work with my initial 
> test addresses, but clearly more is required if it is to provide the 
> full functionality of get_mtrr_type(). The code currently only iterates 
> through the var_ranges, not the fixed array, etc.
> 
> The overlap handling isn't there because there doesn't seem to be a 
> clear correspondence between struct mtrr_state and struct hvm_hw_mtrr. 
> I implemented as much of the get_mtrr_type() logic as was obvious using 
> what mapping was clear between them.
> 
> Is having the full functionality in libxc feasible?

It would certainly be preferable to have libxc do it now rather than
every application doing it, or having to introduce cleverer versions of
the API in the future.

I don't know enough about MTRRs or how Xen represents them either
internally or at the hypercall interface to have a sensible opinion
about the feasibility but my gut feeling is that there's no reason it
shouldn't be.

From a brief look it seems like hvm_hw_mtrr is a subset of mtrr_state,
which makes sense since you would expect the internal state (struct
mtrr_state) to cache or precompute some stuff for efficiency while
emulating but you don't want to expose that in the architectural state
(struct hvm_hw_mtrr). libxc can (and should) probably recompute anything
it needs above what is in hvm_hw_mtrr on the fly (e.g. check for
overlaps).
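To make the "recompute on the fly" idea concrete, the per-address lookup could be sketched roughly as below: scan the variable ranges and resolve overlaps using the architectural precedence rules (UC dominates everything, WT wins over WB). This is only a sketch; struct var_range and the field layout here are hypothetical stand-ins, not libxc's or Xen's actual types, and the fixed-range MTRRs below 1MB are deliberately left out:

```c
#include <stdint.h>
#include <assert.h>

/* Memory type encodings as defined by the x86 MTRR architecture. */
enum { MTRR_UC = 0, MTRR_WC = 1, MTRR_WT = 4, MTRR_WP = 5, MTRR_WB = 6 };

/* Hypothetical minimal view of one variable-range MTRR pair
 * (IA32_MTRR_PHYSBASEn / IA32_MTRR_PHYSMASKn). */
struct var_range {
    uint64_t base;   /* PHYSBASE: base address, type in the low byte */
    uint64_t mask;   /* PHYSMASK: range mask, valid bit at bit 11 */
};

/* Resolve the effective type for a physical address against the
 * variable ranges, applying the architectural overlap rules:
 * UC dominates any other type, and WT wins over WB. */
static int var_range_type(const struct var_range *r, unsigned n,
                          uint64_t paddr, int def_type)
{
    int type = -1;   /* -1: no range matched yet */

    for (unsigned i = 0; i < n; i++) {
        if (!(r[i].mask & (1ULL << 11)))        /* valid bit clear */
            continue;
        uint64_t mask = r[i].mask & ~0xfffULL;
        if ((paddr & mask) != (r[i].base & mask))
            continue;                           /* address not in range */
        int t = r[i].base & 0xff;
        if (type == -1)
            type = t;
        else if (t == MTRR_UC || type == MTRR_UC)
            type = MTRR_UC;                     /* UC dominates */
        else if ((t == MTRR_WT && type == MTRR_WB) ||
                 (t == MTRR_WB && type == MTRR_WT))
            type = MTRR_WT;                     /* WT beats WB */
    }
    return type == -1 ? def_type : type;        /* fall back to default */
}
```

The point being that none of this needs extra hypercall-visible state: the overlap resolution is pure computation over what hvm_hw_mtrr already carries.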


Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:17:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:17:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlGhb-000356-Ix; Wed, 19 Dec 2012 10:16:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlGha-000350-3A
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:16:38 +0000
Received: from [85.158.139.83:64040] by server-8.bemta-5.messagelabs.com id
	2F/D3-15003-50491D05; Wed, 19 Dec 2012 10:16:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1355912196!27716564!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19117 invoked from network); 19 Dec 2012 10:16:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:16:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="247344"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 10:16:36 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 10:16:36 +0000
Message-ID: <1355912195.14620.288.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 10:16:35 +0000
In-Reply-To: <alpine.DEB.2.02.1212181817250.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181817250.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: arm: fix long lines in entry.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:17 +0000, Stefano Stabellini wrote:
> On Tue, 18 Dec 2012, Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> There are actually no functional changes, right?

Nope, just whitespace changes. I will amend the commit message to say
so.

> If so:
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Thanks.

> 
> >  xen/arch/arm/entry.S |   66 +++++++++++++++++++++++++-------------------------
> >  1 files changed, 33 insertions(+), 33 deletions(-)
> > 
> > diff --git a/xen/arch/arm/entry.S b/xen/arch/arm/entry.S
> > index 1d6ff32..83793c2 100644
> > --- a/xen/arch/arm/entry.S
> > +++ b/xen/arch/arm/entry.S
> > @@ -11,22 +11,22 @@
> >  #define RESTORE_BANKED(mode) \
> >  	RESTORE_ONE_BANKED(SP_##mode) ; RESTORE_ONE_BANKED(LR_##mode) ; RESTORE_ONE_BANKED(SPSR_##mode)
> >  
> > -#define SAVE_ALL											\
> > -	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */					\
> > -	push {r0-r12}; /* Save R0-R12 */								\
> > -													\
> > -	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */				\
> > -	str r11, [sp, #UREGS_pc];									\
> > -													\
> > -	str lr, [sp, #UREGS_lr];									\
> > -													\
> > -	add r11, sp, #UREGS_kernel_sizeof+4;								\
> > -	str r11, [sp, #UREGS_sp];									\
> > -													\
> > -	mrs r11, SPSR_hyp;										\
> > -	str r11, [sp, #UREGS_cpsr];									\
> > -	and r11, #PSR_MODE_MASK;									\
> > -	cmp r11, #PSR_MODE_HYP;										\
> > +#define SAVE_ALL							\
> > +	sub sp, #(UREGS_R8_fiq - UREGS_sp); /* SP, LR, SPSR, PC */	\
> > +	push {r0-r12}; /* Save R0-R12 */				\
> > +									\
> > +	mrs r11, ELR_hyp;		/* ELR_hyp is return address. */\
> > +	str r11, [sp, #UREGS_pc];					\
> > +									\
> > +	str lr, [sp, #UREGS_lr];					\
> > +									\
> > +	add r11, sp, #UREGS_kernel_sizeof+4;				\
> > +	str r11, [sp, #UREGS_sp];					\
> > +									\
> > +	mrs r11, SPSR_hyp;						\
> > +	str r11, [sp, #UREGS_cpsr];					\
> > +	and r11, #PSR_MODE_MASK;					\
> > +	cmp r11, #PSR_MODE_HYP;						\
> >  	blne save_guest_regs
> >  
> >  save_guest_regs:
> > @@ -43,25 +43,25 @@ save_guest_regs:
> >  	SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
> >  	mov pc, lr
> >  
> > -#define DEFINE_TRAP_ENTRY(trap)										\
> > -	ALIGN;												\
> > -trap_##trap:												\
> > -	SAVE_ALL;											\
> > -	cpsie i; 	/* local_irq_enable */								\
> > -	adr lr, return_from_trap;									\
> > -	mov r0, sp;											\
> > -	mov r11, sp;											\
> > -	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
> > +#define DEFINE_TRAP_ENTRY(trap)						\
> > +	ALIGN;								\
> > +trap_##trap:								\
> > +	SAVE_ALL;							\
> > +	cpsie i; 	/* local_irq_enable */				\
> > +	adr lr, return_from_trap;					\
> > +	mov r0, sp;							\
> > +	mov r11, sp;							\
> > +	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
> >  	b do_trap_##trap
> >  
> > -#define DEFINE_TRAP_ENTRY_NOIRQ(trap)									\
> > -	ALIGN;												\
> > -trap_##trap:												\
> > -	SAVE_ALL;											\
> > -	adr lr, return_from_trap;									\
> > -	mov r0, sp;											\
> > -	mov r11, sp;											\
> > -	bic sp, #7; /* Align the stack pointer (noop on guest trap) */					\
> > +#define DEFINE_TRAP_ENTRY_NOIRQ(trap)					\
> > +	ALIGN;								\
> > +trap_##trap:								\
> > +	SAVE_ALL;							\
> > +	adr lr, return_from_trap;					\
> > +	mov r0, sp;							\
> > +	mov r11, sp;							\
> > +	bic sp, #7; /* Align the stack pointer (noop on guest trap) */	\
> >  	b do_trap_##trap
> >  
> >  .globl hyp_traps_vector
> > -- 
> > 1.7.2.5
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:20:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlGlJ-0003Gb-9q; Wed, 19 Dec 2012 10:20:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlGlH-0003GQ-Mp
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:20:27 +0000
Received: from [193.109.254.147:37463] by server-3.bemta-14.messagelabs.com id
	9F/B1-26055-AE491D05; Wed, 19 Dec 2012 10:20:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355912425!10615022!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8009 invoked from network); 19 Dec 2012 10:20:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:20:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="247458"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 10:20:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 10:20:24 +0000
Message-ID: <1355912423.14620.291.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 10:20:23 +0000
In-Reply-To: <alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> On Tue, 18 Dec 2012, Ian Campbell wrote:
> > This shortens an overly long line.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> honestly I would rather keep it because it has been quite useful for
> debugging in the past; once all the bugs have been fixed (TM) then we
> can remove it ;-)

Can you not just re-add it for debug?

I mostly just want to get rid of the overlong line; I could nuke the
spaces from the comment (in all of them, not just this one) instead?

> 
> >  xen/arch/arm/head.S |   11 +++++++----
> >  1 files changed, 7 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> > index 0d9a799..eb54925 100644
> > --- a/xen/arch/arm/head.S
> > +++ b/xen/arch/arm/head.S
> > @@ -25,9 +25,12 @@
> >  #define ZIMAGE_MAGIC_NUMBER 0x016f2818
> >  
> >  #define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
> > +
> > +/* Second Level */
> >  #define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
> > -#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
> > -#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
> > +
> > +/* Third Level */
> > +#define PT_DEV 0xe73 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
> >  
> >  #define PT_UPPER(x) (PT_##x & 0xf00)
> >  #define PT_LOWER(x) (PT_##x & 0x0ff)
> > @@ -222,8 +225,8 @@ skip_bss:
> >          mov   r3, #0
> >          lsr   r2, r11, #12
> >          lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> > -        orr   r2, r2, #PT_UPPER(DEV_L3)
> > -        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> > +        orr   r2, r2, #PT_UPPER(DEV)
> > +        orr   r2, r2, #PT_LOWER(DEV) /* r2:r3 := 4K dev map including UART */
> >          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> >  #endif
> >  
> > -- 
> > 1.7.2.5
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:28:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlGtD-0003WF-DV; Wed, 19 Dec 2012 10:28:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlGtC-0003WA-S0
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:28:39 +0000
Received: from [85.158.137.99:36088] by server-15.bemta-3.messagelabs.com id
	28/00-07921-1D691D05; Wed, 19 Dec 2012 10:28:33 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355912911!17012930!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27164 invoked from network); 19 Dec 2012 10:28:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:28:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="1163046"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 10:28:30 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 05:28:30 -0500
Message-ID: <50D196CD.6080706@citrix.com>
Date: Wed, 19 Dec 2012 10:28:29 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <5b77509e.1174a.13bb2437601.Coremail.hxkhust@126.com>
In-Reply-To: <5b77509e.1174a.13bb2437601.Coremail.hxkhust@126.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(about read operation in qemu-img-xen)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 08:24, hxkhust wrote:
> Hi, guys,
>
> What concerns me is the following (which is in
> /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c):
> static void qcow_aio_read_cb(void *opaque, int ret)
> {
> ........
>     if (!acb->cluster_offset) {
>         if (bs->backing_hd) {
>             /* read from the base image */
>             acb->hd_aiocb = bdrv_aio_read(bs->backing_hd, //*************
>                 acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
>             //**************
>             // I read what acb->buf points to, but find the read
>             // operation is not finished.
>             if (acb->hd_aiocb == NULL)
>                 goto fail;
>         } else {
>             /* Note: in this case, no need to wait */
>             memset(acb->buf, 0, 512 * acb->n);
>             goto redo;
>         }
>     } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
>         /* add AIO support for compressed blocks ? */
>         if (decompress_cluster(s, acb->cluster_offset) < 0)
>             goto fail;
>         memcpy(acb->buf,
>                s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
>         goto redo;
> .........
> //********************************************************************************************
> when the statement:
> acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
> acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
> has completed, the content which acb->buf points to has not yet been
> prepared. This is an asynchronous read operation. Could someone explain
> the principle or process behind this asynchronous read in this code?
> If you can describe it using the Xen code, that would be very kind of
> you. I need to know when the data has been copied to the memory that
> acb->buf points to; this problem is important to me. As the title
> mentions, I have to solve it as soon as possible.
>
> A newbie
For reference:
http://www.catb.org/esr/faqs/smart-questions.html#urgent
http://www.catb.org/esr/faqs/smart-questions.html#goal

It would probably help if you described what you are trying to solve.
(Please take some time to read the WHOLE of the above page, as you may
actually learn something useful that will help you for the rest of your
life...)

I'm sure you are correct in that the buffer isn't (guaranteed to be)
filled in when the read function returns (it is also not guaranteed that
it's NOT filled in - if the read preceding this call asked for the
section of disk immediately preceding where this request is for, you
may well end up with the data already available, and thus the buffer is
filled in immediately).

If you read about asynchronous IO, for example here:
http://www.kernel.org/doc/man-pages/online/pages/man7/aio.7.html
you will find functions, such as aio_suspend, which are called to "wait
for IO to complete".

I'm not familiar with this particular piece of code, so I don't know
where the aio_suspend (or whatever similar function it uses) gets
called, but I'm pretty sure that's how it works.

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:28:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:28:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlGtI-0003Wh-QL; Wed, 19 Dec 2012 10:28:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlGtH-0003WU-PY
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:28:44 +0000
Received: from [85.158.138.51:33643] by server-1.bemta-3.messagelabs.com id
	42/6B-08906-DB691D05; Wed, 19 Dec 2012 10:28:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355912891!28364289!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7582 invoked from network); 19 Dec 2012 10:28:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:28:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="247715"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 10:28:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 10:28:11 +0000
Message-ID: <1355912890.14620.297.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 10:28:10 +0000
In-Reply-To: <alpine.DEB.2.02.1212181824460.17523@kaball.uk.xensource.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
	<1355851042.14620.280.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212181824460.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:27 +0000, Stefano Stabellini wrote:
> It would be nice to add to the description of the patch what you are
> doing. Something along these lines:
> 
> - move gic.h to include/asm-arm/gic.h;
> - move assembly files (entry.S, head.S, mode_switch.S, proc-ca15.S,
> lib/*) to xen/arch/arm/arm32;
> - move asm-offsets.c to xen/arch/arm/arm32;
> - make the appropriate Makefile changes.
> 
> Other than that the patch is OK.

Is this ok:

xen: arm: introduce arm32 as a subarch of arm.

- move 32-bit specific files into subarch specific arm32 subdirectory.
- move gic.h to xen/include/asm-arm (it is needed from both subarch
  and generic code).
- make the appropriate build and config file changes to support
  XEN_TARGET_ARCH=arm32.

This prepares us for an eventual 64-bit subarch.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:36:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlH07-0003rJ-P1; Wed, 19 Dec 2012 10:35:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlH05-0003rE-Qw
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 10:35:46 +0000
Received: from [85.158.139.83:64943] by server-7.bemta-5.messagelabs.com id
	43/29-08009-08891D05; Wed, 19 Dec 2012 10:35:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355913326!26633427!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7618 invoked from network); 19 Dec 2012 10:35:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:35:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,313,1355097600"; 
   d="scan'208";a="247957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 10:35:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 10:35:26 +0000
Message-ID: <1355913325.14620.303.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 10:35:25 +0000
In-Reply-To: <alpine.DEB.2.02.1212181847430.17523@kaball.uk.xensource.com>
References: <1354732666-3132-2-git-send-email-stefano.stabellini@eu.citrix.com>
	<1354785657.17165.41.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212181847430.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/arm: flush dcache after memcpy'ing the
	kernel image
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:51 +0000, Stefano Stabellini wrote:
> On Thu, 6 Dec 2012, Ian Campbell wrote:
> > On Wed, 2012-12-05 at 18:37 +0000, Stefano Stabellini wrote:
> > > After memcpy'ing the kernel in guest memory we need to flush the dcache
> > > to make sure that the data actually reaches the memory before we start
> > > executing guest code with caches disabled.
> > > 
> > > This fixes a boot time bug on the Cortex A15 Versatile Express that
> > > usually shows up as follows:
> > > 
> > > (XEN) Hypervisor Trap. HSR=0x80000006 EC=0x20 IL=0 Syndrome=6
> > > (XEN) Unexpected Trap: Hypervisor
> > 
> > That's a symptom of a thousand different problems though, since it's
> > just a generic Instruction Abort from guest mode caused by a translation
> > fault at stage 2. 
> > 
> > Anyhow this won't apply on top of "arm: support for initial modules
> > (e.g. dom0) and DTB supplied in RAM", could you rebase on that please?
> 
> The patch applies fine on both xen/master and xen/staging.
> 
> Do you have a branch with "arm: support for initial modules (e.g. dom0)
> and DTB supplied in RAM" somewhere?

I do now:
        git://xenbits.xen.org/people/ianc/xen-unstable.git boot-wrapper
        
You had some comments on the DTB parsing/filtering which I replied to.

Perhaps I should reorder the series to put the useful infrastructure
first (so it can be pushed) but leave the actual concrete DTB interface
until the end. The DTB stuff is really a PoC pending a real bootloader
protocol, and if no modules are given (which is how it will look if we
omit that bit) then Xen should fall back to the current method of
looking in flash for the kernel.

I think that would mean ordering this series as:

[PATCH 1/9] xen: arm: mark early_panic as a noreturn function
<bits of 2/9 which define the datastructures, but not the parsing stuff>
[PATCH 3/9] arm: avoid placing Xen over any modules.
<patches 4..8>
[PATCH 2/9] xen: arm: parse modules from DT during early boot.
[PATCH 9/9] xen: strip /chosen/modules/module@<N>/* from dom0 device tree


Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:42:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlH63-00043p-IH; Wed, 19 Dec 2012 10:41:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlH62-00043k-1x
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:41:54 +0000
Received: from [85.158.138.51:28067] by server-2.bemta-3.messagelabs.com id
	01/A2-11239-1F991D05; Wed, 19 Dec 2012 10:41:53 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355913711!21645061!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25655 invoked from network); 19 Dec 2012 10:41:52 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 10:41:52 -0000
Received: (qmail 22149 invoked from network); 19 Dec 2012 12:41:51 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 12:41:51 +0200
Message-ID: <50D19A2B.2050006@gmail.com>
Date: Wed, 19 Dec 2012 12:42:51 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355912063.14620.286.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234434,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 8bdb9f0568514fb024e567103ed47109.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id:
	2m1g3t8.17emt5i0a.226kk], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44438
Cc: Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> It would certainly be preferable to have libxc do it now rather than
> have every application do it, or have to introduce cleverer versions
> of the API in the future.

I agree.

> I don't know enough about MTRRs or how Xen represents them either
> internally or at the hypercall interface to have a sensible opinion
> about the feasibility but my gut feeling is that there's no reason it
> shouldn't be.

Unfortunately I'm far from being an expert on MTRRs either, so maybe 
someone can tell us what the equivalent of this:

uint8_t get_mtrr_type(struct mtrr_state *m, paddr_t pa)
{
     [...]
     if ( unlikely(!(m->enabled & 0x2)) )
         return MTRR_TYPE_UNCACHABLE;
     [...]
}

would be with libxc code using struct hvm_hw_mtrr? Or, more to the 
point, what the equivalent of m->enabled is in this:

struct hvm_hw_mtrr {
#define MTRR_VCNT 8
#define NUM_FIXED_MSR 11
     uint64_t msr_pat_cr;
     /* mtrr physbase & physmask msr pair*/
     uint64_t msr_mtrr_var[MTRR_VCNT*2];
     uint64_t msr_mtrr_fixed[NUM_FIXED_MSR];
     uint64_t msr_mtrr_cap;
     uint64_t msr_mtrr_def_type;
};

I'm also having a hard time figuring out how to map m->overlapped onto 
the hvm_hw_mtrr members. It's also quite possible (to me, at least, at 
this stage) that they're not needed, because at the libxc level they're 
assumed to have fixed values (i.e. 'overlapped' is assumed to be 0 and 
'enabled' to be 1).

Any help is appreciated; if we clear this up I'll modify and re-submit 
the patch.

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 10:50:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 10:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHDl-0004DU-LY; Wed, 19 Dec 2012 10:49:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TlHDk-0004DP-Kc
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:49:52 +0000
Received: from [85.158.139.211:15906] by server-9.bemta-5.messagelabs.com id
	36/8F-10690-FCB91D05; Wed, 19 Dec 2012 10:49:51 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355914174!20650192!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14654 invoked from network); 19 Dec 2012 10:49:36 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 10:49:36 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TlHDQ-000HKo-In; Wed, 19 Dec 2012 10:49:32 +0000
Date: Wed, 19 Dec 2012 10:49:32 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121219104932.GA65599@ocelot.phlegethon.org>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355912423.14620.291.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
	drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:20 +0000 on 19 Dec (1355912423), Ian Campbell wrote:
> On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> > On Tue, 18 Dec 2012, Ian Campbell wrote:
> > > This shortens an overly long line.
> > > 
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > honestly I would rather keep it because it has been quite useful for
> > debugging in the past once all the bugs have been fixed (TM) then we can
> > remove it ;-)
> 
> Can you not just re-add it for debug?
> 
> I mostly just want to get rid of the overlong line, I could nuke the
> spaces from the comment (in all of them, not just this one) instead?

Could you just remove the 'lev3: ' from the comment, pulling it in to
exactly 80 chars?  Your added 'second level' and 'third level' headings
make it redundant, and I'd rather not lose the spaces in the comments.

Tim.

> > 
> > >  xen/arch/arm/head.S |   11 +++++++----
> > >  1 files changed, 7 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> > > index 0d9a799..eb54925 100644
> > > --- a/xen/arch/arm/head.S
> > > +++ b/xen/arch/arm/head.S
> > > @@ -25,9 +25,12 @@
> > >  #define ZIMAGE_MAGIC_NUMBER 0x016f2818
> > >  
> > >  #define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
> > > +
> > > +/* Second Level */
> > >  #define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
> > > -#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
> > > -#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
> > > +
> > > +/* Third Level */
> > > +#define PT_DEV 0xe73 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
> > >  
> > >  #define PT_UPPER(x) (PT_##x & 0xf00)
> > >  #define PT_LOWER(x) (PT_##x & 0x0ff)
> > > @@ -222,8 +225,8 @@ skip_bss:
> > >          mov   r3, #0
> > >          lsr   r2, r11, #12
> > >          lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> > > -        orr   r2, r2, #PT_UPPER(DEV_L3)
> > > -        orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
> > > +        orr   r2, r2, #PT_UPPER(DEV)
> > > +        orr   r2, r2, #PT_LOWER(DEV) /* r2:r3 := 4K dev map including UART */
> > >          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> > >  #endif
> > >  
> > > -- 
> > > 1.7.2.5
> > > 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:00:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHNV-0004Qc-QN; Wed, 19 Dec 2012 10:59:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlHNU-0004QS-F9
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 10:59:56 +0000
Received: from [85.158.143.35:38480] by server-2.bemta-4.messagelabs.com id
	D2/2C-30861-C2E91D05; Wed, 19 Dec 2012 10:59:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1355914795!13073785!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6485 invoked from network); 19 Dec 2012 10:59:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 10:59:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="248622"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 10:59:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 10:59:54 +0000
Message-ID: <1355914793.14620.319.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 10:59:53 +0000
In-Reply-To: <50D0C528.4090602@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 19:34 +0000, Mats Petersson wrote:

> >> I think I'll do the minimal patch first, then, if I find some spare
> >> time, work on the "batching" variant.
> > OK. The batching is IMHO just using the multicall variant.

The XENMEM_add_to_physmap_range hypercall is itself batched (it contains
ranges of mfns etc), so we don't just want to batch the actual
hypercalls; we really want to make sure each hypercall processes a
batch, since that will let us optimise the flushes in the hypervisor.

I don't know if the multicall infrastructure allows for that, but it
would be pretty trivial to do it explicitly.
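The "one hypercall per batch" idea can be sketched in plain C. This is a
user-space simulation, not the real interface: fake_hypercall and
BATCH_SIZE are stand-ins for the actual XENMEM_add_to_physmap_range call
and whatever batch size the real code would choose.

```c
#include <assert.h>
#include <stddef.h>

#define BATCH_SIZE 16  /* illustrative batch size, not a Xen constant */

static int hypercalls_made;

/* Stand-in for one batched hypercall covering 'count' frames. */
static void fake_hypercall(const unsigned long *mfns, size_t count)
{
    (void)mfns;
    (void)count;
    hypercalls_made++;
}

/* Map 'nr' frames, issuing one hypercall per BATCH_SIZE-sized chunk
 * instead of one per frame, so the hypervisor gets a whole batch to
 * work on (and can flush once per batch rather than once per page). */
static int map_frames_batched(const unsigned long *mfns, size_t nr)
{
    size_t done = 0;

    while (done < nr) {
        size_t chunk = nr - done;

        if (chunk > BATCH_SIZE)
            chunk = BATCH_SIZE;
        fake_hypercall(mfns + done, chunk);
        done += chunk;
    }
    return 0;
}
```

For 100 frames this makes 7 hypercalls instead of 100, which is the
whole point of pushing the batch down into a single hypercall.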

I expect these patches will need to be folded into one to avoid a
bisectability hazard?

xen-privcmd-remap-array-arm.patch:
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 7a32976..dc50a53 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
>  }
>  
>  struct remap_data {
> -       xen_pfn_t fgmfn; /* foreign domain's gmfn */
> +       xen_pfn_t *fgmfn; /* foreign domain's gmfn */
>         pgprot_t prot;
>         domid_t  domid;
>         struct vm_area_struct *vma;
> @@ -90,38 +90,120 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>         unsigned long pfn = page_to_pfn(page);
>         pte_t pte = pfn_pte(pfn, info->prot);
>  
> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
> +       // TODO: We should really batch these updates
> +       if (map_foreign_page(pfn, *info->fgmfn, info->domid))
>                 return -EFAULT;
>         set_pte_at(info->vma->vm_mm, addr, ptep, pte);
> +       info->fgmfn++;

Looks good.
 
>         return 0;
>  }
>  
> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> -                              unsigned long addr,
> -                              xen_pfn_t mfn, int nr,
> -                              pgprot_t prot, unsigned domid,
> -                              struct page **pages)
> +/* do_remap_mfn() - helper function to map foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages
> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number entries in the MFN array
> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + * @pages:   page information (for PVH, not used currently)
> + *
> + * This function takes an array of mfns and maps nr pages from that into
> + * this kernel's memory. The owner of the pages is defined by domid. Where the
> + * pages are mapped is determined by addr, and vma is used for "accounting" of
> + * the pages.
> + *
> + * Return value is zero for success, negative for failure.
> + */
> +static int do_remap_mfn(struct vm_area_struct *vma,
> +                       unsigned long addr,
> +                       xen_pfn_t *mfn, int nr,
> +                       pgprot_t prot, unsigned domid,
> +                       struct page **pages)

Since xen_remap_domain_mfn_range isn't implemented on ARM, the only
caller is xen_remap_domain_mfn_array, so you might as well just call
this function ..._array.

> 
>  {
>         int err;
>         struct remap_data data;
>  
> -       /* TBD: Batching, current sole caller only does page at a time */
> -       if (nr > 1)
> -               return -EINVAL;
> +       BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));

Where does this restriction come from?

I think it is true of the x86 PV MMU (which has certain requirements
about how and when PTEs are changed), but I don't think ARM or PVH need
it, because they use two-level paging, so the PTEs are just normal
native ones with no extra restrictions.

Maybe it is useful to enforce similarity between PV and PVH/ARM though?


>         data.fgmfn = mfn;
> -       data.prot = prot;
> +       data.prot  = prot;
>         data.domid = domid;
> -       data.vma = vma;
> -       data.index = 0;
> +       data.vma   = vma;
> 
> 
>         data.pages = pages;
> -       err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
> -                                 remap_pte_fn, &data);
> -       return err;
> +       data.index = 0;
> +
> +       while (nr) {
> +               unsigned long range = 1 << PAGE_SHIFT;
> +
> +               err = apply_to_page_range(vma->vm_mm, addr, range,
> +                                         remap_pte_fn, &data);
> +               /* Warning: We do probably need to care about what error we
> +                  get here. However, currently, the remap_area_mfn_pte_fn is

                                                       ^ this isn't the name of the fn
> 
> +                  only likely to return EFAULT or some other "things are very
> +                  bad" error code, which the rest of the calling code won't
> +                  be able to fix up. So we just exit with the error we got.

I expect it is more important to accumulate the individual errors from
remap_pte_fn into err_ptr.

> +               */
> +               if (err)
> +                       return err;
> +
> +               nr--;
> +               addr += range;
> +       }
> 

This while loop (and therefore this change) is unnecessary. The single
call to apply_to_page_range is sufficient and, as your TODO notes, any
batching should happen in remap_pte_fn (which can handle both
accumulation and flushing when the batch is large enough).
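The per-page error accumulation being asked for can be sketched like
this. Again a user-space simulation: fake_map_one stands in for the real
map operation (it fails on odd mfns purely so the example has something
to record), and remap_array shows the shape, not the kernel code.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in for the per-page map operation; fails for odd mfns here
 * purely so the example has failures to record. */
static int fake_map_one(unsigned long mfn)
{
    return (mfn & 1) ? -ENOENT : 0;
}

/* Walk all pages in one pass, recording each page's result in err_ptr
 * instead of aborting on the first failure, so the caller can see
 * exactly which pages failed and retry just the -ENOENT entries. */
static int remap_array(const unsigned long *mfns, size_t nr, int *err_ptr)
{
    size_t i;
    int last_err = 0;

    for (i = 0; i < nr; i++) {
        err_ptr[i] = fake_map_one(mfns[i]);
        if (err_ptr[i])
            last_err = err_ptr[i];
    }
    return last_err;
}
```

This is the behaviour the documented -ENOENT retry contract implies:
every page gets an entry in err_ptr, and a failure on one page does not
prevent the rest from being attempted.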

> +       return 0;
> +}
> +
> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
> + * @vma:     the vma for the pages to be mapped into
> + * @addr:    the address at which to map the pages

physical address, right?

> + * @mfn:     pointer to array of MFNs to map
> + * @nr:      the number entries in the MFN array
> + * @err_ptr: pointer to array of integers, one per MFN, for an error
> + *           value for each page. The err_ptr must not be NULL.

Nothing seems to be filling this in?

> + * @prot:    page protection mask
> + * @domid:   id of the domain that we are mapping from
> + * @pages:   page information (for PVH, not used currently)

No such thing as PVH on ARM. Also pages is used by this code.

> 
> + *
> + * This function takes an array of mfns and maps nr pages from that into this
> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
> + * are mapped is determined by addr, and vma is used for "accounting" of the
> + * pages. The err_ptr array is filled in on any page that is not sucessfully
> 
>                                                                   successfully
> 
> + * mapped in.
> + *
> + * Return value is zero for success, negative ERRNO value for failure.
> + * Note that the error value -ENOENT is considered a "retry", so when this
> + * error code is seen, another call should be made with the list of pages that
> + * are marked as -ENOENT in the err_ptr array.
> + */
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              xen_pfn_t *mfn, int nr,
> +                              int *err_ptr, pgprot_t prot,
> +                              unsigned domid,
> +                              struct page **pages)
> +{
> +       /* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
> +        * and the consequences later is quite hard to detect what the actual
> +        * cause of "wrong memory was mapped in".
> +        * Note: This variant doesn't actually use err_ptr at the moment.

True ;-)

> +        */
> +       BUG_ON(err_ptr == NULL);
> +       return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
> +
> +/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +                              unsigned long addr,
> +                              xen_pfn_t mfn, int nr,
> +                              pgprot_t prot, unsigned domid,
> +                              struct page **pages)
> +{
> +       return -ENOSYS;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>  
> +
>  int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>                                int nr, struct page **pages)
>  {
> 


From xen-devel-bounces@lists.xen.org Wed Dec 19 11:20:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHgX-0004mc-Rc; Wed, 19 Dec 2012 11:19:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlHgW-0004mX-7F
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:19:36 +0000
Received: from [193.109.254.147:19445] by server-11.bemta-14.messagelabs.com
	id C0/35-02659-7C2A1D05; Wed, 19 Dec 2012 11:19:35 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355915972!9236806!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16275 invoked from network); 19 Dec 2012 11:19:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:19:33 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1239020"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 11:19:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 06:19:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlHgR-00089Y-J5;
	Wed, 19 Dec 2012 11:19:31 +0000
Date: Wed, 19 Dec 2012 11:19:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20121218211613.GA5697@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1212191116450.17523@kaball.uk.xensource.com>
References: <20121207174636.49c4f7eb@mantra.us.oracle.com>
	<50C5BCD602000078000AF51A@nat28.tlf.novell.com>
	<20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121218211613.GA5697@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 13, 2012 at 02:25:16PM +0000, Stefano Stabellini wrote:
> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > wrote:
> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > >> > On Wed, 12 Dec 2012 17:15:23 -0800
> > > >> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > >> > 
> > > >> >> On Tue, 11 Dec 2012 12:10:19 +0000
> > > >> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > >> >> 
> > > >> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > > >> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> > > >> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > >> >> > 
> > > >> >> > That's strange because AFAIK Linux is never editing the MSI-X
> > > >> >> > entries directly: give a look at
> > > >> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
> > > >> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> > > >> >> > touch the real MSI-X table.
> > > >> >> 
> > > >> >> So, this is what's happening. The side effect of :
> > > >> >> 
> > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> > > >> >>                                 dev->msix_table.last) )
> > > >> >>             WARN();
> > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> > > >> >>                                 dev->msix_pba.last) )
> > > >> >>             WARN();
> > > >> >> 
> > > >> >> in msix_capability_init() in xen is that the dom0 EPT entries that
> > > >> >> I've mapped are going from RW to read only. Then when dom0 accesses
> > > >> >> it, I get EPT violation. In case of pure PV, the PTE entry to access
> > > >> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
> > > >> >> don't understand why? Looking into that now...
> > > >> 
> > > >> As far as I was able to tell back at the time when I implemented
> > > >> this, existing code shouldn't have mappings for these tables in
> > > >> place at the time these ranges get added here. But I noted in
> > > >> the patch description that this is a potential issue (and may need
> > > >> fixing if deemed severe enough - back then, apparently nobody
> > > >> really cared, perhaps largely because passthrough to PV guests
> > > >> isn't considered fully secure anyway).
> > > >> 
> > > >> Now - did that change? I.e. can you describe where the mappings
> > > >> come from that cause the problem here?
> > > > 
> > > > The generic Linux MSI-X handling code does that, before calling the
> > > > arch specific msi setup function, for which we have a xen version
> > > > (xen_initdom_setup_msi_irqs):
> > > > 
> > > > pci_enable_msix -> msix_capability_init -> msix_map_region
> > > 
> > > Ah, okay, (of course?) I had looked only at the forward ported
> > > version of this. Is all that fiddling with the mask bits really
> > > being suppressed properly when running under Xen? Otherwise
> > > pv-ops is quite broken in this regard at present... And if it is,
> > > I don't see what the respective ioremap() is good for here in
> > > the Xen case.
> > 
> > Actually I think that you might be right: just looking at the code it
> > seems that the mask bits get written to the table once as part of the
> > initialization process:
> > 
> > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > 
> > Unfortunately msix_program_entries is called few lines after
> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
> > a pirq.
> > However after that is done, all the masking/unmask is done via irq_mask
> > that we handle properly masking/unmasking the corresponding event
> > channels.
> > 
> > 
> > Possible solutions on top of my head:
> 
> There is also the potential to piggyback on Joerg's patches
> that introduce a new x86_msi_ops: compose_msi_msg.
> 
> See here: https://lkml.org/lkml/2012/8/20/432
> (I think there was also a more recent one posted at some point).

Given that dom0 should never write to the MSI-X table, introducing a new
msi_op that replaces msix_program_entries (or at least the part of
msix_program_entries that masks all the entries) is the only solution
left.

List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 13, 2012 at 02:25:16PM +0000, Stefano Stabellini wrote:
> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > wrote:
> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > >> > On Wed, 12 Dec 2012 17:15:23 -0800
> > > >> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > >> > 
> > > >> >> On Tue, 11 Dec 2012 12:10:19 +0000
> > > >> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > >> >> 
> > > >> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > > >> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> > > >> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > >> >> > 
> > > >> >> > That's strange because AFAIK Linux is never editing the MSI-X
> > > >> >> > entries directly: give a look at
> > > >> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
> > > >> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> > > >> >> > touch the real MSI-X table.
> > > >> >> 
> > > >> >> So, this is what's happening. The side effect of :
> > > >> >> 
> > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> > > >> >>                                 dev->msix_table.last) )
> > > >> >>             WARN();
> > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> > > >> >>                                 dev->msix_pba.last) )
> > > >> >>             WARN();
> > > >> >> 
> > > >> >> in msix_capability_init() in xen is that the dom0 EPT entries that
> > > >> >> I've mapped are going from RW to read only. Then when dom0 accesses
> > > >> >> it, I get an EPT violation. In case of pure PV, the PTE entry to access
> > > >> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
> > > >> >> don't understand why. Looking into that now...
> > > >> 
> > > >> As far as I was able to tell back at the time when I implemented
> > > >> this, existing code shouldn't have mappings for these tables in
> > > >> place at the time these ranges get added here. But I noted in
> > > >> the patch description that this is a potential issue (and may need
> > > >> fixing if deemed severe enough - back then, apparently nobody
> > > >> really cared, perhaps largely because passthrough to PV guests
> > > >> isn't considered fully secure anyway).
> > > >> 
> > > >> Now - did that change? I.e. can you describe where the mappings
> > > >> come from that cause the problem here?
> > > > 
> > > > The generic Linux MSI-X handling code does that, before calling the
> > > > arch specific msi setup function, for which we have a xen version
> > > > (xen_initdom_setup_msi_irqs):
> > > > 
> > > > pci_enable_msix -> msix_capability_init -> msix_map_region
> > > 
> > > Ah, okay, (of course?) I had looked only at the forward ported
> > > version of this. Is all that fiddling with the mask bits really
> > > being suppressed properly when running under Xen? Otherwise
> > > pv-ops is quite broken in this regard at present... And if it is,
> > > I don't see what the respective ioremap() is good for here in
> > > the Xen case.
> > 
> > Actually I think that you might be right: just looking at the code it
> > seems that the mask bits get written to the table once as part of the
> > initialization process:
> > 
> > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > 
> > Unfortunately msix_program_entries is called a few lines after
> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
> > a pirq.
> > However, after that is done, all masking/unmasking goes through
> > irq_mask, which we handle properly by masking/unmasking the
> > corresponding event channels.
> > 
> > 
> > Possible solutions off the top of my head:
> 
> There is also the potential to piggyback on Joerg's patches
> that introduce a new x86_msi_ops: compose_msi_msg.
> 
> See here: https://lkml.org/lkml/2012/8/20/432
> (I think there was also a more recent one posted at some point).

Given that dom0 should never write to the MSI-X table, introducing a new
msi_ops that replaces msix_program_entries (or at least the part of
msix_program_entries that masks all the entries) is the only solution
left.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:22:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHiy-0004vH-Q3; Wed, 19 Dec 2012 11:22:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlHix-0004v0-22
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:22:07 +0000
Received: from [85.158.138.51:46848] by server-15.bemta-3.messagelabs.com id
	AE/1B-07921-E53A1D05; Wed, 19 Dec 2012 11:22:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355916125!21423524!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12155 invoked from network); 19 Dec 2012 11:22:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:22:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="249337"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 11:22:05 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 11:22:05 +0000
Message-ID: <1355916124.14620.328.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 19 Dec 2012 11:22:04 +0000
In-Reply-To: <20121219104932.GA65599@ocelot.phlegethon.org>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
	<20121219104932.GA65599@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 10:49 +0000, Tim Deegan wrote:
> At 10:20 +0000 on 19 Dec (1355912423), Ian Campbell wrote:
> > On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> > > On Tue, 18 Dec 2012, Ian Campbell wrote:
> > > > This shortens an overly long line.
> > > > 
> > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > 
> > > honestly I would rather keep it because it has been quite useful for
> > > debugging in the past; once all the bugs have been fixed (TM) then we can
> > > remove it ;-)
> > 
> > Can you not just re-add it for debug?
> > 
> > I mostly just want to get rid of the overlong line, I could nuke the
> > spaces from the comment (in all of them, not just this one) instead?
> 
> Could you just remove the 'lev3: ' from the comment, pulling it in to
> exactly 80 chars?  Your added 'second level' and 'third level' make it
> redundant, and I'd rather not lose the spaces in the comments.

I think that makes it exactly 80 characters, which is probably ok.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:22:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHit-0004um-D8; Wed, 19 Dec 2012 11:22:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlHis-0004ub-5I
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:22:02 +0000
Received: from [85.158.139.211:56707] by server-1.bemta-5.messagelabs.com id
	C2/E1-12813-953A1D05; Wed, 19 Dec 2012 11:22:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355916118!20656674!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4045 invoked from network); 19 Dec 2012 11:22:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:22:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1167070"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 11:21:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 06:21:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlHin-0008BO-7R;
	Wed, 19 Dec 2012 11:21:57 +0000
Date: Wed, 19 Dec 2012 11:21:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355864082.28610.41.camel@dagon.hellion.org.uk>
Message-ID: <alpine.DEB.2.02.1212191121350.17523@kaball.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1355850890.14620.279.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212181830400.17523@kaball.uk.xensource.com>
	<1355864082.28610.41.camel@dagon.hellion.org.uk>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, Ian Campbell wrote:
> On Tue, 2012-12-18 at 18:31 +0000, Stefano Stabellini wrote:
> > I don't have any comments, they all look pretty straightforward.
> 
> Is that an Acked-by: Stefano on all 15?
> 

Yes

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:23:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:23:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHkT-00054v-Aj; Wed, 19 Dec 2012 11:23:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlHkR-00054e-9y
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:23:39 +0000
Received: from [85.158.139.211:17342] by server-11.bemta-5.messagelabs.com id
	1C/9A-31624-AB3A1D05; Wed, 19 Dec 2012 11:23:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355916216!18689437!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28353 invoked from network); 19 Dec 2012 11:23:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:23:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1239505"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 11:23:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 06:23:35 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlHkN-0008DA-GM;
	Wed, 19 Dec 2012 11:23:35 +0000
Date: Wed, 19 Dec 2012 11:23:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355912890.14620.297.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1212191123050.17523@kaball.uk.xensource.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
	<1355851042.14620.280.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212181824460.17523@kaball.uk.xensource.com>
	<1355912890.14620.297.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Dec 2012, Ian Campbell wrote:
> On Tue, 2012-12-18 at 18:27 +0000, Stefano Stabellini wrote:
> > It would be nice to add to the description of the patch what you are
> > doing. Something along these lines:
> > 
> > - move gic.h to include/asm-arm/gic.h;
> > - move assembly files (entry.S, head.S, mode_switch.S, proc-ca15.S,
> > lib/*) to xen/arch/arm/arm32;
> > - move asm-offsets.c to xen/arch/arm/arm32;
> > - make the appropriate Makefile changes.
> > 
> > Other than that the patch is OK.
> 
> Is this ok:
> 
> xen: arm: introduce arm32 as a subarch of arm.
> 
> - move 32-bit specific files into subarch specific arm32 subdirectory.
> - move gic.h to xen/include/asm-arm (it is needed from both subarch
>   and generic code).
> - make the appropriate build and config file changes to support
>   XEN_TARGET_ARCH=arm32.
> 
> This prepares us for an eventual 64-bit subarch.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 

that's OK

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHoS-0005Mr-1n; Wed, 19 Dec 2012 11:27:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1TlHoQ-0005Mf-RV
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:27:47 +0000
Received: from [85.158.143.35:46749] by server-3.bemta-4.messagelabs.com id
	06/D3-18211-2B4A1D05; Wed, 19 Dec 2012 11:27:46 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1355916463!11246787!1
X-Originating-IP: [217.140.96.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18629 invoked from network); 19 Dec 2012 11:27:43 -0000
Received: from fw-tnat.cambridge.arm.com (HELO cam-smtp0.cambridge.arm.com)
	(217.140.96.21)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 11:27:43 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.79.58])
	by cam-smtp0.cambridge.arm.com (8.13.8/8.13.8) with ESMTP id
	qBJBRUTb004413; Wed, 19 Dec 2012 11:27:30 GMT
Date: Wed, 19 Dec 2012 11:27:28 +0000
From: Will Deacon <will.deacon@arm.com>
To: Nicolas Pitre <nicolas.pitre@linaro.org>
Message-ID: <20121219112728.GB26329@mudshark.cambridge.arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-5-git-send-email-will.deacon@arm.com>
	<alpine.LFD.2.02.1212171537230.1263@xanadu.home>
	<20121218101125.GC9072@mudshark.cambridge.arm.com>
	<alpine.LFD.2.02.1212181657020.1263@xanadu.home>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.LFD.2.02.1212181657020.1263@xanadu.home>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"robherring2@gmail.com" <robherring2@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 4/6] ARM: psci: add support for PSCI
 invocations from the kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 09:59:45PM +0000, Nicolas Pitre wrote:
> On Tue, 18 Dec 2012, Will Deacon wrote:
> > On Mon, Dec 17, 2012 at 08:51:27PM +0000, Nicolas Pitre wrote:
> > > On Mon, 17 Dec 2012, Will Deacon wrote:
> > > > +static int psci_cpu_suspend(struct psci_power_state state,
> > > > +			    unsigned long entry_point)
> > > > +{
> > > > +	int err;
> > > > +	u32 fn, power_state;
> > > > +
> > > > +	fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
> > > > +	power_state = psci_power_state_pack(state);
> > > > +	err = invoke_psci_fn(fn, power_state, (u32)entry_point, 0);
> > > 
> > > Why do you need the u32 cast here?
> > 
> > That's a hangover from when entry_point was a void *. I'll fix that, thanks.
> 
> Hopefully you didn't pass virtual pointers to the PSCI call, did you?  :-)

...and I'd have gotten away with it if it wasn't for those meddling kids!

It was also made worse by Marc's code working first time too (after I blamed
the firmware like any sane kernel hacker would do :)
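
For illustration, a minimal sketch of the cast-free call being discussed, with invoke_psci_fn stubbed out (the real one is an SMC/HVC trampoline; the stub, helper name, and function ID here are hypothetical, not the actual kernel code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t u32;

/* Stub standing in for the real SMC/HVC trampoline; it just records its
 * third argument so the caller's behaviour can be checked. */
static u32 last_entry;
static int invoke_psci_fn(u32 fn, u32 arg0, u32 arg1, u32 arg2)
{
    (void)fn; (void)arg0; (void)arg2;
    last_entry = arg1;
    return 0;
}

/* With entry_point declared unsigned long (a 32-bit physical address on
 * ARM, per Nicolas's point about not passing virtual pointers), the
 * (u32) cast is redundant: the argument converts implicitly. */
static int psci_cpu_suspend_sketch(u32 fn, u32 power_state,
                                   unsigned long entry_point)
{
    return invoke_psci_fn(fn, power_state, entry_point, 0);
}
```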

Will

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:29:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHpf-0005Sv-Iv; Wed, 19 Dec 2012 11:29:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlHpd-0005Sj-UE
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:29:02 +0000
Received: from [85.158.139.211:42513] by server-2.bemta-5.messagelabs.com id
	3E/53-16162-DF4A1D05; Wed, 19 Dec 2012 11:29:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355916540!20348740!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3228 invoked from network); 19 Dec 2012 11:29:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:29:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="249548"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 11:29:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 11:29:00 +0000
Message-ID: <1355916539.14620.332.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Razvan Cojocaru <rzvncj@gmail.com>
Date: Wed, 19 Dec 2012 11:28:59 +0000
In-Reply-To: <50D19A2B.2050006@gmail.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 10:42 +0000, Razvan Cojocaru wrote:
> > It would certainly be preferable to have libxc do it now rather than
> > every application having to do it, or having to introduce cleverer
> > versions of the API
> > in the future.
> 
> I agree.
> 
> > I don't know enough about MTRRs or how Xen represents them either
> > internally or at the hypercall interface to have a sensible opinion
> > about the feasibility but my gut feeling is that there's no reason it
> > shouldn't be.
> 
> Unfortunately I'm far from being an expert on MTRRs too,

There is a section in the Intel SDM (chapter 10.4 in my copy) which
explains the meanings of the registers, which I think are what is
exposed in hvm_hw_mtrr.

>  so maybe 
> someone can tell us what the equivalent of this:
> 
> uint8_t get_mtrr_type(struct mtrr_state *m, paddr_t pa)
> {
>      [...]
>      if ( unlikely(!(m->enabled & 0x2)) )
>          return MTRR_TYPE_UNCACHABLE;
>      [...]
> }
> 
> would be with libxc code using struct hvm_hw_mtrr? Or, more to the 
> point, what the equivalent of m->enabled is in this:

xen/arch/x86/hvm/mtrr.c:mtrr_def_type_msr_set() seems to be where it is
written, and it is called with hw_mtrr.msr_mtrr_def_type so it looks
like it can be derived from that value in hvm_hw_mtrr.
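
As a sketch of that derivation (assuming the architectural
IA32_MTRR_DEF_TYPE layout: bit 11 = E, bit 10 = FE, bits 7:0 = default
memory type; the helper names here are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: recover the mtrr_state 'enabled' field from the
 * msr_mtrr_def_type value saved in hvm_hw_mtrr.  Bit 1 of the result is
 * E (MTRRs enabled), bit 0 is FE (fixed-range enabled), matching the
 * (m->enabled & 0x2) test in get_mtrr_type(). */
static uint8_t mtrr_enabled_from_def_type(uint64_t msr_mtrr_def_type)
{
    return (uint8_t)((msr_mtrr_def_type >> 10) & 0x3);
}

/* The default memory type lives in the low byte of the same MSR. */
static uint8_t mtrr_def_type_from_msr(uint64_t msr_mtrr_def_type)
{
    return (uint8_t)(msr_mtrr_def_type & 0xff);
}
```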

> 
> struct hvm_hw_mtrr {
> #define MTRR_VCNT 8
> #define NUM_FIXED_MSR 11
>      uint64_t msr_pat_cr;
>      /* mtrr physbase & physmask msr pair*/
>      uint64_t msr_mtrr_var[MTRR_VCNT*2];
>      uint64_t msr_mtrr_fixed[NUM_FIXED_MSR];
>      uint64_t msr_mtrr_cap;
>      uint64_t msr_mtrr_def_type;
> };
> 
> I'm also having a hard time figuring out how to map m->overlapped on the 
> hvm_hw_mtrr members.

    m->overlapped = is_var_mtrr_overlapped(m);

Looks like that function contains the necessary logic.

>  It's also quite possible (to me, at least, at this 
> stage) that they're not needed because at libxc level they're assumed to 
> be set to some value (i.e. 'overlapped' is assumed to be 0, 'enabled' to 
> be 1).
> 
> Any help is appreciated, if we clear this up I'll modify and re-submit 
> the patch.
> 
> Thanks,
> Razvan Cojocaru



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:34:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHuP-0005kN-HK; Wed, 19 Dec 2012 11:33:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlHuO-0005kH-48
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:33:56 +0000
Received: from [85.158.139.211:9989] by server-9.bemta-5.messagelabs.com id
	64/48-10690-326A1D05; Wed, 19 Dec 2012 11:33:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355916834!19680372!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9423 invoked from network); 19 Dec 2012 11:33:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 11:33:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 11:33:53 +0000
Message-Id: <50D1B42F02000078000B1625@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 11:33:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ijc@hellion.org.uk>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
	<1354613604.2693.55.camel@zakaz.uk.xensource.com>
	<50BDD5A302000078000ADA26@nat28.tlf.novell.com>
	<1354615936.2693.59.camel@zakaz.uk.xensource.com>
In-Reply-To: <1354615936.2693.59.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Bastian Blank <waldi@debian.org>,
	"695056@bugs.debian.org" <695056@bugs.debian.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.12.12 at 11:12, Ian Campbell <ijc@hellion.org.uk> wrote:
> On Tue, 2012-12-04 at 09:51 +0000, Jan Beulich wrote:
>> >>> On 04.12.12 at 10:33, Ian Campbell <ijc@hellion.org.uk> wrote:
>> > On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
>> >> Package: src:xen
>> >> Version: 4.1.3-4
>> >> Severity: serious
>> >> 
>> >> The bzimage loader used in both libxc and the hypervisor lacks XZ
>> >> support. Debian kernels since 3.6 are compressed with XZ and can't be
>> >> loaded.
>> >> 
>> >> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
>> >> somewhere last year.
>> > 
>> > Indeed. Jan this would be a good candidate for a future 4.1.x I think
>> > (it was already in 4.2.0).
>> 
>> Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
>> towards an RC2 followed by a release within the next days),
>> and doing a feature addition like this in a .5 stable release
>> looks questionable to me in the context of how we managed
>> stable updates in the past. But I'm open to be outvoted of
>> course...
> 
> My thinking was that it is a reasonably self-contained patch (most of
> the code is in the new files) and without it people won't be able to use
> 4.1.x with their distro kernels at all, so it's something between a bug
> fix and a new feature.

Just did this on the hypervisor side (i.e. for Dom0). Unless told
otherwise I'm going to assume that IanJ will take care of the
DomU (tools side) part.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:34:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlHv2-0005nN-7b; Wed, 19 Dec 2012 11:34:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TlHv1-0005nB-2l
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 11:34:35 +0000
Received: from [85.158.137.99:30564] by server-10.bemta-3.messagelabs.com id
	44/96-07616-A46A1D05; Wed, 19 Dec 2012 11:34:34 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-217.messagelabs.com!1355916873!14702510!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27064 invoked from network); 19 Dec 2012 11:34:33 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 11:34:33 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:56463 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TlHzE-0006rZ-0d; Wed, 19 Dec 2012 12:38:56 +0100
Date: Wed, 19 Dec 2012 12:34:27 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <55633610.20121219123427@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1355844398.14620.254.camel@zakaz.uk.xensource.com>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Eric Dumazet <erdnetdev@gmail.com>, annie li <annie.li@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, December 18, 2012, 4:26:38 PM, you wrote:

> On Tue, 2012-12-18 at 15:12 +0000, Eric Dumazet wrote:
>> On Tue, 2012-12-18 at 13:51 +0000, Ian Campbell wrote:
>> > Using RX_COPY_THRESHOLD is incorrect if the SKB is actually smaller
>> > than that. We have already accounted for this in
>> > NETFRONT_SKB_CB(skb)->pull_to so use that instead.
>> > 
>> > Fixes WARN_ON from skb_try_coalesce.
>> > 
>> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> > Cc: Sander Eikelenboom <linux@eikelenboom.it>
>> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > Cc: annie li <annie.li@oracle.com>
>> > Cc: xen-devel@lists.xensource.com
>> > Cc: netdev@vger.kernel.org
>> > Cc: stable@kernel.org # 3.7.x only
>> > ---
>> >  drivers/net/xen-netfront.c |   15 +++++----------
>> >  1 files changed, 5 insertions(+), 10 deletions(-)
>> > 
>> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> > index caa0110..b06ef81 100644
>> > --- a/drivers/net/xen-netfront.c
>> > +++ b/drivers/net/xen-netfront.c
>> > @@ -971,17 +971,12 @@ err:
>> >              * overheads. Here, we add the size of the data pulled
>> >              * in xennet_fill_frags().
>> >              *
>> > -            * We also adjust for any unused space in the main
>> > -            * data area by subtracting (RX_COPY_THRESHOLD -
>> > -            * len). This is especially important with drivers
>> > -            * which split incoming packets into header and data,
>> > -            * using only 66 bytes of the main data area (see the
>> > -            * e1000 driver for example.)  On such systems,
>> > -            * without this last adjustement, our achievable
>> > -            * receive throughout using the standard receive
>> > -            * buffer size was cut by 25%(!!!).
>> > +            * We also adjust for the __pskb_pull_tail done in
>> > +            * handle_incoming_queue which pulls data from the
>> > +            * frags into the head area, which is already
>> > +            * accounted in RX_COPY_THRESHOLD.
>> >              */
>> > -           skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>> > +           skb->truesize += skb->data_len - NETFRONT_SKB_CB(skb)->pull_to;
>> >             skb->len += skb->data_len;
>> >  
>> >             if (rx->flags & XEN_NETRXF_csum_blank)
>> 
>> 
>> But skb truesize is not what you think.

> Indeed, it seems I was completely backwards about what it means!

>> You must account the exact memory used by this skb, not only the used
>> part of it.
>> 
>> At the very minimum, it should be
>> 
>> skb->truesize += skb->data_len;
>> 
>> But it really should be the allocated size of the fragment.
>> 
>> If its a page, then its a page, even if you use one single byte in it.

> So actually we want += PAGE_SIZE * skb_shinfo(skb)->nr_frags ?

> Sander, can you try that change?

Hi Ian,

It ran overnight and I haven't seen the warn_once trigger
(but I also didn't with the previous patch).

--
Sander

> Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:47:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlI7I-0006Bk-Av; Wed, 19 Dec 2012 11:47:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlI7H-0006Bf-A7
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:47:15 +0000
Received: from [85.158.138.51:15733] by server-1.bemta-3.messagelabs.com id
	B1/4B-08906-249A1D05; Wed, 19 Dec 2012 11:47:14 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1355917632!23324911!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12384 invoked from network); 19 Dec 2012 11:47:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:47:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1241270"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 11:47:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 06:47:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlI7C-00009E-Tz;
	Wed, 19 Dec 2012 11:47:10 +0000
Date: Wed, 19 Dec 2012 11:47:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWaxot1pzWP=-BJJ25y_jZC2uHdM8yGfqO6qbhSqe9dQfQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212191145250.17523@kaball.uk.xensource.com>
References: <CAKhsbWaxot1pzWP=-BJJ25y_jZC2uHdM8yGfqO6qbhSqe9dQfQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Zhang, 
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] Correctly expose PCH ISA bridge for IGD
 passthrough (was PCI-PCI bridge)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Dec 2012, G.R. wrote:
> Hi Stefano,
> 
> Per your request, I am resending your patch with my local modification
> to expose the PCH ISA bridge correctly to domU. This is the Xen part of
> the fix that lets the i915 driver properly detect the PCH version.
> Another patch on the i915 driver side is required too; I'll send that
> to the intel-gfx list separately. The combined patch set does fix the
> PCH detection issue for me.
> 
> Thanks,
> Timothy

Thanks! The patch looks OK.
You need to write a proper description and add your signed-off-by as
well as mine.

http://wiki.xen.org/wiki/Submitting_Xen_Patches


> diff --git a/hw/pci.c b/hw/pci.c
> index f051de1..d371bd7 100644
> --- a/hw/pci.c
> +++ b/hw/pci.c
> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>      }
>  }
> 
> -typedef struct {
> -    PCIDevice dev;
> -    PCIBus *bus;
> -} PCIBridge;
> -
>  void pci_bridge_write_config(
> PCIDevice *d,
>                               uint32_t address, uint32_t val, int len)
>  {
> diff --git a/hw/pci.h b/hw/pci.h
> index edc58b6..c2acab9 100644
> --- a/hw/pci.h
> +++ b/hw/pci.h
> @@ -222,6 +222,11 @@ struct PCIDevice {
>      int irq_state[4];
>  };
> 
> +typedef struct {
> +    PCIDevice dev;
> +    PCIBus *bus;
> +} PCIBridge;
> +
>  extern char direct_pci_str[];
>  extern int direct_pci_msitranslate;
>  extern int direct_pci_power_mgmt;
> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
> index c6f8869..de21f90 100644
> --- a/hw/pt-graphics.c
> +++ b/hw/pt-graphics.c
> @@ -3,6 +3,7 @@
>   */
> 
>  #include "pass-through.h"
> +#include "pci.h"
>  #include "pci/header.h"
>  #include "pci/pci.h"
> 
> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
> 
> -    if ( vid == PCI_VENDOR_ID_INTEL )
> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
> -                        pch_map_irq, "intel_bridge_1f");
> +    if (vid == PCI_VENDOR_ID_INTEL) {
> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL,
> pci_bridge_write_config);
> +
> +        pci_config_set_vendor_id(s->dev.config, vid);
> +        pci_config_set_device_id(s->dev.config, did);
> +
> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast
> back-to-back, 66MHz, no error
> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
> +        s->dev.config[PCI_REVISION] = rid;
> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
> +        s->dev.config[PCI_HEADER_TYPE] = 0x80;
> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
> +
> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
> +    }
>  }
> 
>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:48:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlI8h-0006GU-Qv; Wed, 19 Dec 2012 11:48:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlI8f-0006GK-W4
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:48:42 +0000
Received: from [85.158.139.83:38945] by server-4.bemta-5.messagelabs.com id
	D1/49-14693-999A1D05; Wed, 19 Dec 2012 11:48:41 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355917720!30385118!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24323 invoked from network); 19 Dec 2012 11:48:40 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 11:48:40 -0000
Received: (qmail 19356 invoked from network); 19 Dec 2012 13:48:36 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 13:48:36 +0200
Message-ID: <50D1A9D1.2020106@gmail.com>
Date: Wed, 19 Dec 2012 13:49:37 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355916539.14620.332.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234434,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 8288a65e9997f143afcb7917af0fb32f.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3t9.17epd43f4.2hvf],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44439
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> There is a section in the Intel SDM (chapter 10.4 in my copy) which
> explains the meanings etc of the registers, which are what is exposed in
> hvm_hw_mtrr I think.

Thanks, I'll look that up.

> xen/arch/x86/hvm/mtrr.c:mtrr_def_type_msr_set() seems to be where it is
> written, and it is called with hw_mtrr.msr_mtrr_def_type so it looks
> like it can be derived from that value in hvm_hw_mtrr.

Indeed it is, but this happens there:

uint8_t def_type = msr_content & 0xff;
uint8_t enabled = (msr_content >> 10) & 0x3;

So what ends up being put in def_type is only one byte of msr_content,
whereas enabled is some bits from another byte of msr_content. To make
matters worse, hw_mtrr.msr_mtrr_def_type is a uint64_t, so would that
mean that hw_mtrr.msr_mtrr_def_type is actually the whole of msr_content?

>> I'm also having a hard time figuring out how to map m->overlapped on the
>> hvm_hw_mtrr members.
>
>      m->overlapped = is_var_mtrr_overlapped(m);
>
> Looks like that function contains the necessary logic.

You're right, but what happens there is that that function depends on 
the get_mtrr_range() function, which in turn depends on the size_or_mask 
global variable, which is initialized in hvm_mtrr_pat_init(), which then 
depends on a global table, and so on. Putting that into libxc is pretty 
much putting the whole mtrr.c file there.

Is this amount of copy/paste code a good thing, and wouldn't it be less 
tedious and bug-prone to have that code in a single place, and just add 
overlapped and enabled to hw_mtrr before sending it out into userspace?

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:48:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlI8h-0006GU-Qv; Wed, 19 Dec 2012 11:48:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlI8f-0006GK-W4
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:48:42 +0000
Received: from [85.158.139.83:38945] by server-4.bemta-5.messagelabs.com id
	D1/49-14693-999A1D05; Wed, 19 Dec 2012 11:48:41 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1355917720!30385118!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24323 invoked from network); 19 Dec 2012 11:48:40 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 11:48:40 -0000
Received: (qmail 19356 invoked from network); 19 Dec 2012 13:48:36 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 13:48:36 +0200
Message-ID: <50D1A9D1.2020106@gmail.com>
Date: Wed, 19 Dec 2012 13:49:37 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355916539.14620.332.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234434,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 8288a65e9997f143afcb7917af0fb32f.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3t9.17epd43f4.2hvf],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44439
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> There is a section in the Intel SDM (chapter 10.4 in my copy) which
> explains the meanings etc of the registers, which are what is exposed in
> hvm_hw_mtrr I think.

Thanks, I'll look that up.

> xen/arch/x86/hvm/mtrr.c:mtrr_def_type_msr_set() seems to be where it is
> written, and it is called with hw_mtrr.msr_mtrr_def_type so it looks
> like it can be derived from that value in hvm_hw_mtrr.

Indeed it is, but this happens there:

uint8_t def_type = msr_content & 0xff;
uint8_t enabled = (msr_content >> 10) & 0x3;

So what ends up in def_type is only one byte of msr_content, 
whereas enabled is a couple of bits from another byte of msr_content. 
To make matters worse, hw_mtrr.msr_mtrr_def_type is a uint64_t, so would 
that mean that hw_mtrr.msr_mtrr_def_type is actually the whole of msr_content?

>> I'm also having a hard time figuring out how to map m->overlapped on the
>> hvm_hw_mtrr members.
>
>      m->overlapped = is_var_mtrr_overlapped(m);
>
> Looks like that function contains the necessary logic.

You're right, but that function depends on get_mtrr_range(), which in 
turn depends on the size_or_mask global variable, which is initialized 
in hvm_mtrr_pat_init(), which in turn depends on a global table, and so 
on. Putting that into libxc would pretty much mean putting the whole of 
mtrr.c there.

Is this amount of copy/paste code a good thing, and wouldn't it be less 
tedious and bug-prone to have that code in a single place, and just add 
overlapped and enabled to hw_mtrr before sending it out into userspace?

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:53:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlICj-0006VK-HQ; Wed, 19 Dec 2012 11:52:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlICh-0006V3-OM
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 11:52:52 +0000
Received: from [85.158.137.99:11469] by server-16.bemta-3.messagelabs.com id
	DE/F6-27634-E8AA1D05; Wed, 19 Dec 2012 11:52:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355917965!20120384!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16777 invoked from network); 19 Dec 2012 11:52:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 11:52:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="250222"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 11:52:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 11:52:44 +0000
Message-ID: <1355917963.14620.346.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 11:52:43 +0000
In-Reply-To: <1355856402-26614-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 1/8] xen/arm: introduce early_ioremap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> Introduce a function to map a range of physical memory into Xen virtual
> memory.
> It doesn't need domheap to be setup.
> It is going to be used to map the videoram.
> 
> Add flush_xen_data_tlb_range, that flushes a range of virtual addresses.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/mm.c            |   32 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/config.h |    2 ++
>  xen/include/asm-arm/mm.h     |    3 ++-
>  xen/include/asm-arm/page.h   |   23 +++++++++++++++++++++++
>  4 files changed, 59 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 855f83d..0d7a163 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -367,6 +367,38 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      frametable_virt_end = FRAMETABLE_VIRT_START + (nr_pages * sizeof(struct page_info));
>  }
>  
> +/* Map the physical memory range start -  start + len into virtual
> + * memory and return the virtual address of the mapping.
> + * start has to be 2MB aligned.
> + * len has to be < EARLY_VMAP_END - EARLY_VMAP_START.
> + */
> +void* early_ioremap(paddr_t start, size_t len, unsigned attributes)

Should be __init I think, to discourage its use later on. If/when we
need that, a proper vmap+ioremap should be implemented.

> +{
> +    static unsigned long virt_start = EARLY_VMAP_START;

Is the idea that if/when we implement proper vmap + ioremap support we
should initialise the vmap area with EARLY_VMAP_START..virt_start
already assigned and virt_start..EARLY_VMAP_END free (removing all the
EARLY_ I guess)?

At some point I suppose we will need to integrate this with
xen/common/vmap.c.

> +    void* ret_addr = (void *)virt_start;
> +    paddr_t end = start + len;
> +
> +    ASSERT(!(start & (~SECOND_MASK)));
> +    ASSERT(!(virt_start & (~SECOND_MASK)));
> +
> +    /* The range we need to map is too big */
> +    if ( virt_start + len >= EARLY_VMAP_END )
> +        return NULL;
> +
> +    while ( start < end )
> +    {
> +        lpae_t e = mfn_to_xen_entry(start >> PAGE_SHIFT);
> +        e.pt.ai = attributes;
> +        write_pte(xen_second + second_table_offset(virt_start), e);
> +
> +        start += SECOND_SIZE;
> +        virt_start += SECOND_SIZE;
> +    }
> +    flush_xen_data_tlb_range((unsigned long) ret_addr, len);

Just cast this in the return? At the moment you cast to void* only to
cast it back to unsigned long.

> +
> +    return ret_addr;
> +}
> +
>  enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
>  static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
>  {
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 2a05539..87db0d1 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -73,9 +73,11 @@
>  #define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
>  #define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
>  #define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
> +#define EARLY_VMAP_START       mk_unsigned_long(0x10000000)

There is a comment right above here which describes Xen virtual memory
layout, this needs updating as Tim requested before.

Also the convention is FOO_VIRT_START.

>  #define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
>  #define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
>  
> +#define EARLY_VMAP_END         XENHEAP_VIRT_START
>  #define HYPERVISOR_VIRT_START  XEN_VIRT_START
>  
>  #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index e95ece1..4ed5df6 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -150,7 +150,8 @@ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
>  extern void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes);
>  /* Remove a mapping from a fixmap entry */
>  extern void clear_fixmap(unsigned map);
> -
> +/* map a 2MB aligned physical range in virtual memory. */
> +void* early_ioremap(paddr_t start, size_t len, unsigned attributes);
>  
>  #define mfn_valid(mfn)        ({                                              \
>      unsigned long __m_f_n = (mfn);                                            \
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index d89261e..0790dda 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -328,6 +328,23 @@ static inline void flush_xen_data_tlb_va(unsigned long va)
>                   : : "r" (va) : "memory");
>  }
>  
> +/*
> + * Flush a range of VA's hypervisor mappings from the data TLB. This is not
> + * sufficient when changing code mappings or for self modifying code.
> + */
> +static inline void flush_xen_data_tlb_range(unsigned long va, unsigned long size)
> +{
> +    unsigned long end = va + size;
> +    while ( va < end ) {
> +        asm volatile("dsb;" /* Ensure preceding are visible */

Either this dsb should be on the next line (aligned with the following
lines) or the following lines should be indented further to match (we
have both styles in this file, but let's not have a 3rd).

Although it would probably be better to just call flush_xen_data_tlb_va
here?

The function should be flush_xen_data_tlb_range_va I think. I'm not sure
it is worth a new function though; perhaps just add size to the existing
flush_xen_data_tlb_va, there aren't too many callers?

While I was grepping I noticed flush_xen_data_tlb_va(dest_va) in
setup_pagetables(). dest_va has just been set up with a second level (2M)
mapping. Is it not a bit dodgy to only flush the first 4K of that?

> +                STORE_CP32(0, TLBIMVAH)
> +                "dsb;" /* Ensure completion of the TLB flush */
> +                "isb;"
> +                : : "r" (va) : "memory");
> +        va += PAGE_SIZE;
> +    }
> +}
> +
>  /* Flush all non-hypervisor mappings from the TLB */
>  static inline void flush_guest_tlb(void)
>  {
> @@ -418,8 +435,14 @@ static inline uint64_t gva_to_ipa(uint32_t va)
>  #define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
>  
>  #define THIRD_SHIFT  PAGE_SHIFT
> +#define THIRD_SIZE   (1u << THIRD_SHIFT)
> +#define THIRD_MASK   (~(THIRD_SIZE - 1))
>  #define SECOND_SHIFT (THIRD_SHIFT + LPAE_SHIFT)
> +#define SECOND_SIZE   (1u << SECOND_SHIFT)
> +#define SECOND_MASK   (~(SECOND_SIZE - 1))
>  #define FIRST_SHIFT  (SECOND_SHIFT + LPAE_SHIFT)
> +#define FIRST_SIZE   (1u << FIRST_SHIFT)
> +#define FIRST_MASK   (~(FIRST_SIZE - 1))

Whitespace is off here.

>  
>  /* Calculate the offsets into the pagetables for a given VA */
>  #define first_linear_offset(va) (va >> FIRST_SHIFT)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:56:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIGS-0006ew-7f; Wed, 19 Dec 2012 11:56:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TlIGQ-0006eq-KD
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:56:42 +0000
Received: from [85.158.139.83:20811] by server-1.bemta-5.messagelabs.com id
	A2/CD-12813-97BA1D05; Wed, 19 Dec 2012 11:56:41 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355918201!29993433!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8303 invoked from network); 19 Dec 2012 11:56:41 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-13.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 11:56:41 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TlIGJ-00019k-3q; Wed, 19 Dec 2012 11:56:35 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TlIGG-0006U7-4B; Wed, 19 Dec 2012 11:56:34 +0000
Message-ID: <1355918186.14620.349.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 19 Dec 2012 11:56:26 +0000
In-Reply-To: <50D1B42F02000078000B1625@nat28.tlf.novell.com>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
	<1354613604.2693.55.camel@zakaz.uk.xensource.com>
	<50BDD5A302000078000ADA26@nat28.tlf.novell.com>
	<1354615936.2693.59.camel@zakaz.uk.xensource.com>
	<50D1B42F02000078000B1625@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: Bastian Blank <waldi@debian.org>,
	"695056@bugs.debian.org" <695056@bugs.debian.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 11:33 +0000, Jan Beulich wrote:
> >>> On 04.12.12 at 11:12, Ian Campbell <ijc@hellion.org.uk> wrote:
> > On Tue, 2012-12-04 at 09:51 +0000, Jan Beulich wrote:
> >> >>> On 04.12.12 at 10:33, Ian Campbell <ijc@hellion.org.uk> wrote:
> >> > On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
> >> >> Package: src:xen
> >> >> Version: 4.1.3-4
> >> >> Severity: serious
> >> >> 
> >> >> The bzimage loader used in both libxc and the hypervisor lacks XZ
> >> >> support. Debian kernels since 3.6 are compressed with XZ and can't be
> >> >> loaded.
> >> >> 
> >> >> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
> >> >> somewhere last year.
> >> > 
> >> > Indeed. Jan this would be a good candidate for a future 4.1.x I think
> >> > (it was already in 4.2.0).
> >> 
> >> Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
> >> towards an RC2 followed by a release within the next days),
> >> and doing a feature addition like this in a .5 stable release
> >> looks questionable to me in the context of how we managed
> >> stable updates in the past. But I'm open to be outvoted of
> >> course...
> > 
> > My thinking was that it is a reasonably self contained patch (most of
> > the code is the new files) and without it people won't be able to use
> > 4.1.x with their distro kernels at all, so it's something between a bug
> > fix and a new feature.
> 
> Just did this on the hypervisor side (i.e. for Dom0). Unless told
> otherwise I'm going to assume that IanJ will take care of the
> DomU (tools side) part.

Which FWIW is 23002:eb64b8f8eebb, but it very likely requires portions of
23021:da7c950772ed, 23322:d9982136d8fa and 26115:37a8946eeb9d, which are
fixes (some security-sensitive) in that code.

Ian.

-- 
Ian Campbell
Current Noise: Katatonia - The Longest Year

Staff meeting in the conference room in %d minutes.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 11:56:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 11:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIGS-0006ew-7f; Wed, 19 Dec 2012 11:56:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1TlIGQ-0006eq-KD
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:56:42 +0000
Received: from [85.158.139.83:20811] by server-1.bemta-5.messagelabs.com id
	A2/CD-12813-97BA1D05; Wed, 19 Dec 2012 11:56:41 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355918201!29993433!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8303 invoked from network); 19 Dec 2012 11:56:41 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-13.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 11:56:41 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginmedia.com
	([86.6.25.227] helo=hopkins.hellion.org.uk)
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1TlIGJ-00019k-3q; Wed, 19 Dec 2012 11:56:35 +0000
Received: from firewall.ctxuk.citrix.com ([46.33.159.2] helo=[10.80.2.42])
	by hopkins.hellion.org.uk with esmtpsa (SSL3.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <ijc@hellion.org.uk>)
	id 1TlIGG-0006U7-4B; Wed, 19 Dec 2012 11:56:34 +0000
Message-ID: <1355918186.14620.349.camel@zakaz.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 19 Dec 2012 11:56:26 +0000
In-Reply-To: <50D1B42F02000078000B1625@nat28.tlf.novell.com>
References: <20121203184750.15579.68864.reportbug@lumphammer.waldi.eu.org>
	<1354613604.2693.55.camel@zakaz.uk.xensource.com>
	<50BDD5A302000078000ADA26@nat28.tlf.novell.com>
	<1354615936.2693.59.camel@zakaz.uk.xensource.com>
	<50D1B42F02000078000B1625@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.4-1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 46.33.159.2
X-SA-Exim-Mail-From: ijc@hellion.org.uk
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on hopkins.hellion.org.uk)
Cc: Bastian Blank <waldi@debian.org>,
	"695056@bugs.debian.org" <695056@bugs.debian.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Pkg-xen-devel] Bug#695056: xen - Missing support
 for XZ compressed bzimage kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 11:33 +0000, Jan Beulich wrote:
> >>> On 04.12.12 at 11:12, Ian Campbell <ijc@hellion.org.uk> wrote:
> > On Tue, 2012-12-04 at 09:51 +0000, Jan Beulich wrote:
> >> >>> On 04.12.12 at 10:33, Ian Campbell <ijc@hellion.org.uk> wrote:
> >> > On Mon, 2012-12-03 at 19:47 +0100, Bastian Blank wrote:
> >> >> Package: src:xen
> >> >> Version: 4.1.3-4
> >> >> Severity: serious
> >> >> 
> >> >> The bzimage loader used in both libxc and the hypervisor lacks XZ
> >> >> support. Debian kernels since 3.6 are compressed with XZ and can't be
> >> >> loaded.
> >> >> 
> >> >> Support for XZ compression was added in changeset 23002:eb64b8f8eebb
> >> >> somewhere last year.
> >> > 
> >> > Indeed. Jan this would be a good candidate for a future 4.1.x I think
> >> > (it was already in 4.2.0).
> >> 
> >> Hmm, I'm not really convinced - we're at 4.1.4 (I'm looking
> >> towards an RC2 followed by a release within the next days),
> >> and doing a feature addition like this in a .5 stable release
> >> looks questionable to me in the context of how we managed
> >> stable updates in the past. But I'm open to be outvoted of
> >> course...
> > 
> > My thinking was that it is a reasonably self contained patch (most of
> > the code is the new files) and without it people won't be able to use
> > 4.1.x with their distro kernels at all, so it's something between a bug
> > fix and a new feature.
> 
> Just did this on the hypervisor side (i.e. for Dom0). Unless told
> otherwise I'm going to assume that IanJ will take care of the
> DomU (tools side) part.

Which, FWIW, is 23002:eb64b8f8eebb, but it very likely requires portions
of 23021:da7c950772ed, 23322:d9982136d8fa and 26115:37a8946eeb9d, which
are fixes (some security-sensitive) in that code.

Ian.

-- 
Ian Campbell
Current Noise: Katatonia - The Longest Year

Staff meeting in the conference room in %d minutes.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:00:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIJc-0006pL-1f; Wed, 19 Dec 2012 12:00:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlIJa-0006pF-JP
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 11:59:58 +0000
Received: from [85.158.143.35:16419] by server-1.bemta-4.messagelabs.com id
	60/4E-28401-D3CA1D05; Wed, 19 Dec 2012 11:59:57 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355918396!16135168!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25735 invoked from network); 19 Dec 2012 11:59:57 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 11:59:57 -0000
Received: (qmail 14446 invoked from network); 19 Dec 2012 13:59:55 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 13:59:55 +0200
Message-ID: <50D1AC7A.70709@gmail.com>
Date: Wed, 19 Dec 2012 14:00:58 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com>
In-Reply-To: <50D1A9D1.2020106@gmail.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234434,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 6988029b9f4ec979b6cf935562e623b8.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3t9.17epd43f4.31fq],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44439
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> xen/arch/x86/hvm/mtrr.c:mtrr_def_type_msr_set() seems to be where it is
>> written, and it is called with hw_mtrr.msr_mtrr_def_type so it looks
>> like it can be derived from that value in hvm_hw_mtrr.
>
> Indeed it is, but this happens there:
>
> uint8_t def_type = msr_content & 0xff;
> uint8_t enabled = (msr_content >> 10) & 0x3;

Nevermind, found it :)

mtrr.c / static int hvm_save_mtrr_msr(struct domain *d, 
hvm_domain_context_t *h):

hw_mtrr.msr_mtrr_def_type = mtrr_state->def_type
     | (mtrr_state->enabled << 10);

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:04:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIO2-0007Lj-Qb; Wed, 19 Dec 2012 12:04:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TlIO1-0007Lb-CX
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 12:04:33 +0000
Received: from [85.158.139.83:14914] by server-9.bemta-5.messagelabs.com id
	20/03-10690-05DA1D05; Wed, 19 Dec 2012 12:04:32 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355918670!23195295!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7509 invoked from network); 19 Dec 2012 12:04:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:04:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1242451"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 12:04:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 07:04:29 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TlINx-0000Rk-RW;
	Wed, 19 Dec 2012 12:04:29 +0000
Message-ID: <1355918673.10526.3.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Yiming Zhang <sdiris@gmail.com>
Date: Wed, 19 Dec 2012 12:04:33 +0000
In-Reply-To: <001901cddca8$75110ac0$5f332040$@gmail.com>
References: <001901cddca8$75110ac0$5f332040$@gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Error in installing xen-4.2 on Debian
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-17 at 22:47 +0000, Yiming Zhang wrote:
> Dear all,
> 
> I want to install Xen 4.2 from the source files on Debian Squeeze.
> After “make” and “make install”, I update-grub and got the error
> message:
> 
> dpkg: version '/boot/xen.gz' has bad syntax: invalid character in
> version number
> 
> I googled the solution. Someone said to delete xen.gz (a symbolic link
> to xen.4.2.0.gz) but it doesn’t work for me. Someone said it is a bug
> of grub and so I installed the latest version, but after reboot my
> system get into a confusing grub interface and I don’t know how to
> continue. (thank god I am using vmware)
> 
> Can somebody help me? Thank you very much!

That error will not affect grub menu entry generation.

You will need a Dom0 kernel with CONFIG_XEN_DOM0=y and place config file
in /boot.


Wei.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:10:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:10:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIU0-0007Zf-QV; Wed, 19 Dec 2012 12:10:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlITz-0007ZX-CS
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:10:43 +0000
Received: from [85.158.139.83:60242] by server-12.bemta-5.messagelabs.com id
	C3/6B-02275-2CEA1D05; Wed, 19 Dec 2012 12:10:42 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355919039!26505838!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11704 invoked from network); 19 Dec 2012 12:10:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:10:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; d="scan'208,223";a="1171018"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 12:10:38 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 07:10:38 -0500
Message-ID: <50D1AEBD.1090202@citrix.com>
Date: Wed, 19 Dec 2012 12:10:37 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355914793.14620.319.camel@zakaz.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------080801040503010508030802"
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------080801040503010508030802
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit

New patch attached.

I haven't done the relevant spelling fixes etc, as I had a little mishap 
with git, and need to fix up a few things. Thought you may want to have 
a look over the ARM side meanwhile, and if this is OK, I will post an 
"official" V5 patch to the list.

Further comments below.

--
Mats
On 19/12/12 10:59, Ian Campbell wrote:
> On Tue, 2012-12-18 at 19:34 +0000, Mats Petersson wrote:
>
>>>> I think I'll do the minimal patch first, then, if I find some spare
>>>> time, work on the "batching" variant.
>>> OK. The batching is IMHO just using the multicall variant.
> The XENMEM_add_to_physmap_range is itself batched (it contains ranges of
> mfns etc), so we don't just want to batch the actual hypercalls; we
> really want to make sure each hypercall processes a batch, as this will
> let us optimise the flushes in the hypervisor.
>
> I don't know if the multicall infrastructure allows for that but it
> would be pretty trivial to do it explicitly
Yes, I'm sure it is. I would prefer to do that AFTER my x86 patch has 
gone in, if that's possible. (Or that someone who actually understands 
the ARM architecture and can test it on actual ARM does it...)
>
> I expect these patches will need to be folded into one to avoid a
> bisecability hazard?
Yes, that is certainly my plan. I just made two patches for ease of 
"what is ARM and what isn't" - but the final submission should be as 
one patch.
>
> xen-privcmd-remap-array-arm.patch:
>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>> index 7a32976..dc50a53 100644
>> --- a/arch/arm/xen/enlighten.c
>> +++ b/arch/arm/xen/enlighten.c
>> @@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
>>   }
>>   
>>   struct remap_data {
>> -       xen_pfn_t fgmfn; /* foreign domain's gmfn */
>> +       xen_pfn_t *fgmfn; /* foreign domain's gmfn */
>>          pgprot_t prot;
>>          domid_t  domid;
>>          struct vm_area_struct *vma;
>> @@ -90,38 +90,120 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>>          unsigned long pfn = page_to_pfn(page);
>>          pte_t pte = pfn_pte(pfn, info->prot);
>>   
>> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
>> +       // TODO: We should really batch these updates
>> +       if (map_foreign_page(pfn, *info->fgmfn, info->domid))
>>                  return -EFAULT;
>>          set_pte_at(info->vma->vm_mm, addr, ptep, pte);
>> +       info->fgmfn++;
> Looks good.
>   
>>          return 0;
>>   }
>>   
>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> -                              unsigned long addr,
>> -                              xen_pfn_t mfn, int nr,
>> -                              pgprot_t prot, unsigned domid,
>> -                              struct page **pages)
>> +/* do_remap_mfn() - helper function to map foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number entries in the MFN array
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + * @pages:   page information (for PVH, not used currently)
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into
>> + * this kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting" of
>> + * the pages.
>> + *
>> + * Return value is zero for success, negative for failure.
>> + */
>> +static int do_remap_mfn(struct vm_area_struct *vma,
>> +                       unsigned long addr,
>> +                       xen_pfn_t *mfn, int nr,
>> +                       pgprot_t prot, unsigned domid,
>> +                       struct page **pages)
> Since xen_remap_domain_mfn_range isn't implemented on ARM the only
> caller is xen_remap_domain_mfn_array so you might as well just call this
> function ..._array.
Yes, could do. As I stated in the commentary text, I tried to keep the 
code similar in structure to x86.
[Actually, one iteration of my code had two API functions, one of which 
called the other, but it was considered a better solution to make one 
common function, and have the two x86 functions call that one].
>>   {
>>          int err;
>>          struct remap_data data;
>>   
>> -       /* TBD: Batching, current sole caller only does page at a time */
>> -       if (nr > 1)
>> -               return -EINVAL;
>> +       BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
> Where does this restriction come from?
>
> I think it is true of X86 PV MMU (which has certain requirements about
> how and when PTEs are changed) but I don't think ARM or PVH need it
> because they use two level paging so the PTEs are just normal native
> ones with no extra restrictions.
>
> Maybe it is useful to enforce similarity between PV and PVH/ARM though?
I don't know if it's useful or not. I'm sure removing it will make 
little difference, but since the VM flags are set by the calling code, 
the privcmd.c or higher level code would have to be correct for whatever 
architecture they are on. Not sure if it is "helpful" to allow certain 
combinations in one arch, when it's not in another. My choice would be 
to keep the restriction until there is a good reason to remove it, but 
if you feel it is beneficial to remove it, feel free to say so.
[Perhaps the ARM code should have a comment to the effect of "not needed 
on PVH or ARM, kept for compatibility with PVOPS on x86" - so when PVOPS 
is "retired" in some years time, it can be removed]
>
>
>>          data.fgmfn = mfn;
>> -       data.prot = prot;
>> +       data.prot  = prot;
>>          data.domid = domid;
>> -       data.vma = vma;
>> -       data.index = 0;
>> +       data.vma   = vma;
>>
>>
>>          data.pages = pages;
>> -       err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
>> -                                 remap_pte_fn, &data);
>> -       return err;
>> +       data.index = 0;
>> +
>> +       while (nr) {
>> +               unsigned long range = 1 << PAGE_SHIFT;
>> +
>> +               err = apply_to_page_range(vma->vm_mm, addr, range,
>> +                                         remap_pte_fn, &data);
>> +               /* Warning: We do probably need to care about what error we
>> +                  get here. However, currently, the remap_area_mfn_pte_fn is
>                                                         ^ this isn't the name of the fn
Fixed.
>> +                  only likely to return EFAULT or some other "things are very
>> +                  bad" error code, which the rest of the calling code won't
>> +                  be able to fix up. So we just exit with the error we got.
> I expect it is more important to accumulate the individual errors from
> remap_pte_fn into err_ptr.
Yes, but since that exits on error with EFAULT, the calling code won't 
"accept" the errors, and thus the whole house of cards falls apart at 
this point.

There should probably be a task to fix this up properly, hence the 
comment. But right now, any error besides ENOENT is "unacceptable" to 
the callers of this code. If you want me to add this to the comment, I'm 
happy to. But as long as remap_pte_fn returns EFAULT on error, nothing 
will work after an error.
>
>> +               */
>> +               if (err)
>> +                       return err;
>> +
>> +               nr--;
>> +               addr += range;
>> +       }
>>
> This while loop (and therefore this change) is unnecessary. The single
> call to apply_to_page_range is sufficient and as your TODO notes any
> batching should happen in remap_pte_fn (which can handle both
> accumulation and flushing when the  batch is large enough).
Ah, I see how you mean. I have updated the code accordingly.
>
>> +       return 0;
>> +}
>> +
>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
> physical address, right?
Virtual, at least if we assume that in "st->va & PAGE_MASK," 'va' 
actually means virtual address - it would be rather devious to keep 
using the name va for a physical address - although I have seen such 
things in the past.

I shall amend the comments to say "virtual address" to be clearer. [Not 
done in the attached patch; I just realized when re-reading my comments 
that I probably should...]
>
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number entries in the MFN array
>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>> + *           value for each page. The err_ptr must not be NULL.
> Nothing seems to be filling this in?
As discussed above (and below).
>
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + * @pages:   page information (for PVH, not used currently)
> No such thing as PVH on ARM. Also pages is used by this code.
Removed part in ().
>
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
>> + * are mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages. The err_ptr array is filled in on any page that is not sucessfully
>>
>>                                                                    successfully
Thanks...
>>
>> + * mapped in.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + * Note that the error value -ENOENT is considered a "retry", so when this
>> + * error code is seen, another call should be made with the list of pages that
>> + * are marked as -ENOENT in the err_ptr array.
>> + */
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              xen_pfn_t *mfn, int nr,
>> +                              int *err_ptr, pgprot_t prot,
>> +                              unsigned domid,
>> +                              struct page **pages)
>> +{
>> +       /* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
>> +        * and the consequences later is quite hard to detect what the actual
>> +        * cause of "wrong memory was mapped in".
>> +        * Note: This variant doesn't actually use err_ptr at the moment.
> True ;-)
Do you prefer the "not used" comment moved up a bit?

--
Mats
>
>> +        */
>> +       BUG_ON(err_ptr == NULL);
>> +       return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
>> +}
>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>> +
>> +/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              xen_pfn_t mfn, int nr,
>> +                              pgprot_t prot, unsigned domid,
>> +                              struct page **pages)
>> +{
>> +       return -ENOSYS;
>>   }
>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>>   
>> +
>>   int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>>                                 int nr, struct page **pages)
>>   {
>>


--------------080801040503010508030802
Content-Type: text/x-patch; name="0001-Fixed-up-after-IanC-s-comments.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="0001-Fixed-up-after-IanC-s-comments.patch"

>From 7ec4da2e1a40963d459dec6c61e810e5badd390a Mon Sep 17 00:00:00 2001
From: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 11:58:23 +0000
Subject: [PATCH] Fixed up after IanC's comments.

---
 arch/arm/xen/enlighten.c |  104 +++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 90 insertions(+), 14 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 7a32976..2bf8556 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 }
 
 struct remap_data {
-	xen_pfn_t fgmfn; /* foreign domain's gmfn */
+	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
 	pgprot_t prot;
 	domid_t  domid;
 	struct vm_area_struct *vma;
@@ -90,38 +90,114 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	unsigned long pfn = page_to_pfn(page);
 	pte_t pte = pfn_pte(pfn, info->prot);
 
-	if (map_foreign_page(pfn, info->fgmfn, info->domid))
+	/* TODO: We should really batch these updates */
+	if (map_foreign_page(pfn, *info->fgmfn, info->domid))
 		return -EFAULT;
 	set_pte_at(info->vma->vm_mm, addr, ptep, pte);
+	info->fgmfn++;
 
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information.
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr,
+			pgprot_t prot, unsigned domid,
+			struct page **pages)
 {
 	int err;
 	struct remap_data data;
 
-	/* TBD: Batching, current sole caller only does page at a time */
-	if (nr > 1)
-		return -EINVAL;
+	/* Not strictly needed on ARM; kept for consistency with the x86 PV code. */
+	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	data.fgmfn = mfn;
-	data.prot = prot;
+	data.prot  = prot;
 	data.domid = domid;
-	data.vma = vma;
-	data.index = 0;
+	data.vma   = vma;
 	data.pages = pages;
-	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
+	data.index = 0;
+
+	err = apply_to_page_range(vma->vm_mm, addr,
+				  (unsigned long)nr << PAGE_SHIFT,
 				  remap_pte_fn, &data);
+	/* Warning: We probably do need to care about what error we
+	   get here. However, currently remap_pte_fn is only likely
+	   to return EFAULT or some other "things are very bad"
+	   error code, which the rest of the calling code won't be
+	   able to fix up. So we just exit with the error we got.
+	*/
 	return err;
 }
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid,
+			       struct page **pages)
+{
+	/* We BUG_ON because it is a programmer error to pass a NULL err_ptr;
+	 * without it, the eventual symptom ("wrong memory was mapped in")
+	 * would be very hard to trace back to its actual cause.
+	 * Note: This variant doesn't actually use err_ptr at the moment.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return -ENOSYS;
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
+
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int nr, struct page **pages)
 {
-- 
1.7.9.5


--------------080801040503010508030802
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080801040503010508030802--


From xen-devel-bounces@lists.xen.org Wed Dec 19 12:10:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:10:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIU0-0007Zf-QV; Wed, 19 Dec 2012 12:10:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlITz-0007ZX-CS
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:10:43 +0000
Received: from [85.158.139.83:60242] by server-12.bemta-5.messagelabs.com id
	C3/6B-02275-2CEA1D05; Wed, 19 Dec 2012 12:10:42 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355919039!26505838!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11704 invoked from network); 19 Dec 2012 12:10:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:10:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; d="scan'208,223";a="1171018"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 12:10:38 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 07:10:38 -0500
Message-ID: <50D1AEBD.1090202@citrix.com>
Date: Wed, 19 Dec 2012 12:10:37 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355914793.14620.319.camel@zakaz.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------080801040503010508030802"
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------080801040503010508030802
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit

New patch attached.

I haven't done the relevant spelling fixes etc, as I had a little mishap 
with git, and need to fix up a few things. I thought you might want to 
have a look over the ARM side meanwhile, and if this is OK, I will post an 
"official" V5 patch to the list.

Further comments below.

--
Mats
On 19/12/12 10:59, Ian Campbell wrote:
> On Tue, 2012-12-18 at 19:34 +0000, Mats Petersson wrote:
>
>>>> I think I'll do the minimal patch first, then, if I find some spare
>>>> time, work on the "batching" variant.
>>> OK. The batching is IMHO just using the multicall variant.
> The XENMEM_add_to_physmap_range is itself batched (it contains ranges of
> mfns etc), so we don't just want to batch the actual hypercalls; we
> really want to make sure each hypercall processes a batch, as this
> will let us optimise the flushes in the hypervisor.
>
> I don't know if the multicall infrastructure allows for that, but it
> would be pretty trivial to do it explicitly.
Yes, I'm sure it is. I would prefer to do that AFTER my x86 patch has 
gone in, if that's possible. (Or that someone who actually understands 
the ARM architecture and can test it on real ARM hardware does it...)
>
> I expect these patches will need to be folded into one to avoid a
> bisectability hazard?
Yes, that is certainly my plan. I just made two patches for ease of 
"what is ARM and what isn't" - but the final submission should be one patch.
>
> xen-privcmd-remap-array-arm.patch:
>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>> index 7a32976..dc50a53 100644
>> --- a/arch/arm/xen/enlighten.c
>> +++ b/arch/arm/xen/enlighten.c
>> @@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
>>   }
>>   
>>   struct remap_data {
>> -       xen_pfn_t fgmfn; /* foreign domain's gmfn */
>> +       xen_pfn_t *fgmfn; /* foreign domain's gmfn */
>>          pgprot_t prot;
>>          domid_t  domid;
>>          struct vm_area_struct *vma;
>> @@ -90,38 +90,120 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>>          unsigned long pfn = page_to_pfn(page);
>>          pte_t pte = pfn_pte(pfn, info->prot);
>>   
>> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
>> +       // TODO: We should really batch these updates
>> +       if (map_foreign_page(pfn, *info->fgmfn, info->domid))
>>                  return -EFAULT;
>>          set_pte_at(info->vma->vm_mm, addr, ptep, pte);
>> +       info->fgmfn++;
> Looks good.
>   
>>          return 0;
>>   }
>>   
>> -int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> -                              unsigned long addr,
>> -                              xen_pfn_t mfn, int nr,
>> -                              pgprot_t prot, unsigned domid,
>> -                              struct page **pages)
>> +/* do_remap_mfn() - helper function to map foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number entries in the MFN array
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + * @pages:   page information (for PVH, not used currently)
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into
>> + * this kernel's memory. The owner of the pages is defined by domid. Where the
>> + * pages are mapped is determined by addr, and vma is used for "accounting" of
>> + * the pages.
>> + *
>> + * Return value is zero for success, negative for failure.
>> + */
>> +static int do_remap_mfn(struct vm_area_struct *vma,
>> +                       unsigned long addr,
>> +                       xen_pfn_t *mfn, int nr,
>> +                       pgprot_t prot, unsigned domid,
>> +                       struct page **pages)
> Since xen_remap_domain_mfn_range isn't implemented on ARM the only
> caller is xen_remap_domain_mfn_array so you might as well just call this
> function ..._array.
Yes, could do. As I stated in the commentary text, I tried to keep the 
code similar in structure to x86.
[Actually, one iteration of my code had two API functions, one of which 
called the other, but it was considered a better solution to make one 
common function, and have the two x86 functions call that one].
>>   {
>>          int err;
>>          struct remap_data data;
>>   
>> -       /* TBD: Batching, current sole caller only does page at a time */
>> -       if (nr > 1)
>> -               return -EINVAL;
>> +       BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
> Where does this restriction come from?
>
> I think it is true of X86 PV MMU (which has certain requirements about
> how and when PTEs are changed) but I don't think ARM or PVH need it
> because they use two level paging so the PTEs are just normal native
> ones with no extra restrictions.
>
> Maybe it is useful to enforce similarity between PV and PVH/ARM though?
I don't know if it's useful or not. I'm sure removing it will make 
little difference, but since the VM flags are set by the calling code, 
the privcmd.c or higher level code would have to be correct for whatever 
architecture they are on. Not sure if it is "helpful" to allow certain 
combinations in one arch, when it's not in another. My choice would be 
to keep the restriction until there is a good reason to remove it, but 
if you feel it is beneficial to remove it, feel free to say so.
[Perhaps the ARM code should have a comment to the effect of "not needed 
on PVH or ARM, kept for compatibility with PVOPS on x86" - so when PVOPS 
is "retired" in some years' time, it can be removed]
>
>
>>          data.fgmfn = mfn;
>> -       data.prot = prot;
>> +       data.prot  = prot;
>>          data.domid = domid;
>> -       data.vma = vma;
>> -       data.index = 0;
>> +       data.vma   = vma;
>>
>>
>>          data.pages = pages;
>> -       err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
>> -                                 remap_pte_fn, &data);
>> -       return err;
>> +       data.index = 0;
>> +
>> +       while (nr) {
>> +               unsigned long range = 1 << PAGE_SHIFT;
>> +
>> +               err = apply_to_page_range(vma->vm_mm, addr, range,
>> +                                         remap_pte_fn, &data);
>> +               /* Warning: We do probably need to care about what error we
>> +                  get here. However, currently, the remap_area_mfn_pte_fn is
>                                                         ^ this isn't the name of the fn
Fixed.
>> +                  only likely to return EFAULT or some other "things are very
>> +                  bad" error code, which the rest of the calling code won't
>> +                  be able to fix up. So we just exit with the error we got.
> I expect it is more important to accumulate the individual errors from
> remap_pte_fn into err_ptr.
Yes, but since that exits on error with EFAULT, the calling code won't 
"accept" the errors, and thus the whole house of cards falls apart at 
this point.

There should probably be a task to fix this up properly, hence the 
comment. But right now, any error besides ENOENT is "unacceptable" to 
the callers of this code. If you want me to add this to the comment, I'm 
happy to. But as long as remap_pte_fn returns EFAULT on error, nothing 
will work after an error.
>
>> +               */
>> +               if (err)
>> +                       return err;
>> +
>> +               nr--;
>> +               addr += range;
>> +       }
>>
> This while loop (and therefore this change) is unnecessary. The single
> call to apply_to_page_range is sufficient and as your TODO notes any
> batching should happen in remap_pte_fn (which can handle both
> accumulation and flushing when the batch is large enough).
Ah, I see how you mean. I have updated the code accordingly.
>
>> +       return 0;
>> +}
>> +
>> +/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
>> + * @vma:     the vma for the pages to be mapped into
>> + * @addr:    the address at which to map the pages
> physical address, right?
Virtual, at least if we assume that in "st->va & PAGE_MASK" 'va' 
actually means virtual address - it would be rather devious to keep using 
the name va for a physical address - although I have seen such things in 
the past.

I shall amend the comments to say "virtual address" to be clearer. 
[Not done in the attached patch; I just realized, when re-reading my 
comments, that I probably should...]
>
>> + * @mfn:     pointer to array of MFNs to map
>> + * @nr:      the number entries in the MFN array
>> + * @err_ptr: pointer to array of integers, one per MFN, for an error
>> + *           value for each page. The err_ptr must not be NULL.
> Nothing seems to be filling this in?
As discussed above (and below).
>
>> + * @prot:    page protection mask
>> + * @domid:   id of the domain that we are mapping from
>> + * @pages:   page information (for PVH, not used currently)
> No such thing as PVH on ARM. Also pages is used by this code.
Removed part in ().
>
>> + *
>> + * This function takes an array of mfns and maps nr pages from that into this
>> + * kernel's memory. The owner of the pages is defined by domid. Where the pages
>> + * are mapped is determined by addr, and vma is used for "accounting" of the
>> + * pages. The err_ptr array is filled in on any page that is not sucessfully
>>
>>                                                                    successfully
Thanks...
>>
>> + * mapped in.
>> + *
>> + * Return value is zero for success, negative ERRNO value for failure.
>> + * Note that the error value -ENOENT is considered a "retry", so when this
>> + * error code is seen, another call should be made with the list of pages that
>> + * are marked as -ENOENT in the err_ptr array.
>> + */
>> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              xen_pfn_t *mfn, int nr,
>> +                              int *err_ptr, pgprot_t prot,
>> +                              unsigned domid,
>> +                              struct page **pages)
>> +{
>> +       /* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
>> +        * and the consequences later is quite hard to detect what the actual
>> +        * cause of "wrong memory was mapped in".
>> +        * Note: This variant doesn't actually use err_ptr at the moment.
> True ;-)
Do you prefer the "not used" comment moved up a bit?

--
Mats
>
>> +        */
>> +       BUG_ON(err_ptr == NULL);
>> +       return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
>> +}
>> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
>> +
>> +/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
>> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>> +                              unsigned long addr,
>> +                              xen_pfn_t mfn, int nr,
>> +                              pgprot_t prot, unsigned domid,
>> +                              struct page **pages)
>> +{
>> +       return -ENOSYS;
>>   }
>>   EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>>   
>> +
>>   int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>>                                 int nr, struct page **pages)
>>   {
>>


--------------080801040503010508030802
Content-Type: text/x-patch; name="0001-Fixed-up-after-IanC-s-comments.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="0001-Fixed-up-after-IanC-s-comments.patch"

>From 7ec4da2e1a40963d459dec6c61e810e5badd390a Mon Sep 17 00:00:00 2001
From: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 11:58:23 +0000
Subject: [PATCH] Fixed up after IanC's comments.

---
 arch/arm/xen/enlighten.c |  104 +++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 90 insertions(+), 14 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 7a32976..2bf8556 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -73,7 +73,7 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 }
 
 struct remap_data {
-	xen_pfn_t fgmfn; /* foreign domain's gmfn */
+	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
 	pgprot_t prot;
 	domid_t  domid;
 	struct vm_area_struct *vma;
@@ -90,38 +90,114 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	unsigned long pfn = page_to_pfn(page);
 	pte_t pte = pfn_pte(pfn, info->prot);
 
-	if (map_foreign_page(pfn, info->fgmfn, info->domid))
+	/* TODO: We should really batch these updates */
+	if (map_foreign_page(pfn, *info->fgmfn, info->domid))
 		return -EFAULT;
 	set_pte_at(info->vma->vm_mm, addr, ptep, pte);
+	info->fgmfn++;
 
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information.
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr,
+			pgprot_t prot, unsigned domid,
+			struct page **pages)
 {
 	int err;
 	struct remap_data data;
 
-	/* TBD: Batching, current sole caller only does page at a time */
-	if (nr > 1)
-		return -EINVAL;
+	/* Not strictly needed on ARM; kept for consistency with the x86 PV code. */
+	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	data.fgmfn = mfn;
-	data.prot = prot;
+	data.prot  = prot;
 	data.domid = domid;
-	data.vma = vma;
-	data.index = 0;
+	data.vma   = vma;
 	data.pages = pages;
-	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
+	data.index = 0;
+
+	err = apply_to_page_range(vma->vm_mm, addr,
+				  (unsigned long)nr << PAGE_SHIFT,
 				  remap_pte_fn, &data);
+	/* Warning: We probably do need to care about what error we
+	   get here. However, currently remap_pte_fn is only likely
+	   to return EFAULT or some other "things are very bad"
+	   error code, which the rest of the calling code won't be
+	   able to fix up. So we just exit with the error we got.
+	*/
 	return err;
 }
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid,
+			       struct page **pages)
+{
+	/* We BUG_ON because it is a programmer error to pass a NULL err_ptr;
+	 * without it, the eventual symptom ("wrong memory was mapped in")
+	 * would be very hard to trace back to its actual cause.
+	 * Note: This variant doesn't actually use err_ptr at the moment.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, prot, domid, pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return -ENOSYS;
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
+
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int nr, struct page **pages)
 {
-- 
1.7.9.5


--------------080801040503010508030802
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080801040503010508030802--


From xen-devel-bounces@lists.xen.org Wed Dec 19 12:22:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlIfW-0007qx-3T; Wed, 19 Dec 2012 12:22:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlIfU-0007qs-Ha
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:22:36 +0000
Received: from [85.158.143.35:60776] by server-1.bemta-4.messagelabs.com id
	07/F2-28401-B81B1D05; Wed, 19 Dec 2012 12:22:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355919749!13749847!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18343 invoked from network); 19 Dec 2012 12:22:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:22:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251014"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:22:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:22:26 +0000
Message-ID: <1355919745.14620.363.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 12:22:25 +0000
In-Reply-To: <50D1AEBD.1090202@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
	<50D1AEBD.1090202@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:

> >> +                  only likely to return EFAULT or some other "things are very
> >> +                  bad" error code, which the rest of the calling code won't
> >> +                  be able to fix up. So we just exit with the error we got.
> > I expect it is more important to accumulate the individual errors from
> > remap_pte_fn into err_ptr.
> Yes, but since that exits on error with EFAULT, the calling code won't
> "accept" the errors, and thus the whole house of cards falls apart at
> this point.
> 
> There should probably be a task to fix this up properly, hence the
> comment. But right now, any error besides ENOENT is "unacceptable" to
> the callers of this code. If you want me to add this to the comment, I'm
> happy to. But as long as remap_pte_fn returns EFAULT on error, nothing
> will work after an error.

Are you sure? privcmd.c has some special casing for ENOENT but it looks
like it should just pass other errors back to userspace.

In any case surely this needs fixing?

On the x86 side err_ptr holds the result of the mmu_update hypercall,
which can already be an error other than ENOENT, including EFAULT, ESRCH etc.
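A minimal sketch of the accumulation scheme under discussion (hypothetical names; the real code lives in the kernel's xen remap and privcmd paths, and `remap_one_frame` here merely stands in for remap_pte_fn): each frame's result is recorded in err_ptr and the loop keeps going, so the caller can tell per frame whether it hit ENOENT or something fatal.

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for remap_pte_fn: frames marked paged-out
 * "fail" with -ENOENT, everything else maps fine. */
static int remap_one_frame(unsigned long mfn, int paged_out)
{
    (void)mfn;
    return paged_out ? -ENOENT : 0;
}

/* Accumulate per-frame errors into err_ptr instead of aborting the
 * whole mapping on the first failure; return how many frames failed
 * so the caller (privcmd in the real code) can decide what to do. */
static int remap_range(const unsigned long *mfns, const int *paged_out,
                       int *err_ptr, size_t nr)
{
    size_t i;
    int failed = 0;

    for (i = 0; i < nr; i++) {
        err_ptr[i] = remap_one_frame(mfns[i], paged_out[i]);
        if (err_ptr[i] != 0)
            failed++;
    }
    return failed;
}
```

With this shape an EFAULT on one frame no longer prevents the remaining frames from being attempted, which is the behaviour being asked about.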

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:30:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:30:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlImM-00081N-6e; Wed, 19 Dec 2012 12:29:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlImK-00081I-8b
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:29:40 +0000
Received: from [193.109.254.147:19984] by server-12.bemta-14.messagelabs.com
	id 43/C3-06523-333B1D05; Wed, 19 Dec 2012 12:29:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355920178!1896077!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26477 invoked from network); 19 Dec 2012 12:29:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:29:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251201"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:29:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:29:38 +0000
Message-ID: <1355920175.14620.365.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:29:35 +0000
In-Reply-To: <1355856402-26614-4-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-4-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/8] xen/vesa: use the new fb_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> Make use of the framebuffer functions previously introduced.

Weren't you going to squash this into the previous patch, since it is
effectively code motion?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:37:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlItA-0008Gc-MX; Wed, 19 Dec 2012 12:36:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlIt9-0008GW-Bo
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:36:43 +0000
Received: from [85.158.143.99:20895] by server-2.bemta-4.messagelabs.com id
	6F/AE-30861-AD4B1D05; Wed, 19 Dec 2012 12:36:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355920602!29199289!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20524 invoked from network); 19 Dec 2012 12:36:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:36:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251340"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:36:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:36:41 +0000
Message-ID: <1355920600.14620.371.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:36:40 +0000
In-Reply-To: <1355856402-26614-5-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-5-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 5/8] xen/arm: preserve DTB mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> @@ -295,12 +296,23 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
>      /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
>  
> +    /* preserve the DTB mapping a little while longer */

Not so much "preserve" as "put back" after this function clobbered it.

I'm not convinced by this effectively open coding of a "stack" of
mappings in the BOOT_MISC_VIRT_START slot. IMHO this slot should only be
used for ephemeral mappings within individual functions or over short
spans of code -- otherwise there is plenty of potential for clashes
which lead to hard-to-decipher bugs.

Can we not map the boot fdt as and when we need it instead of trying to
preserve a mapping? Or maybe FIXMAP_BOOT_FDT?

> +    pte = mfn_to_xen_entry(((unsigned long) _sdtb + boot_phys_offset) >> PAGE_SHIFT);
> +    write_pte(xen_second + second_linear_offset(BOOT_MISC_VIRT_START), pte);

This use of _sdtb is only valid if CONFIG_DTB_FILE. You need to
propagate atag_paddr as passed to start_xen.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:48:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJ4Q-0008Ug-V6; Wed, 19 Dec 2012 12:48:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlJ4P-0008Ub-CR
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:48:21 +0000
Received: from [193.109.254.147:54822] by server-5.bemta-14.messagelabs.com id
	3E/A8-32031-497B1D05; Wed, 19 Dec 2012 12:48:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355921299!2214321!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20990 invoked from network); 19 Dec 2012 12:48:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:48:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251639"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:47:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:47:57 +0000
Message-ID: <1355921276.14620.380.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:47:56 +0000
In-Reply-To: <1355856402-26614-7-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-7-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 7/8] xen/arm: introduce vexpress_syscfg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> Introduce a Versatile Express specific function to read/write
> motherboard settings.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/Makefile                   |    1 +
>  xen/arch/arm/platform_vexpress.c        |   97 +++++++++++++++++++++++++++++++
>  xen/include/asm-arm/platform_vexpress.h |   23 +++++++

I wonder if we ought to start "platforms" subdirs under include and
arch? This is presumably the first of many. Going the plat-<foo> route
seems unnecessary, at least at this stage.

> +#include <asm/platform_vexpress.h>
> +#include <xen/mm.h>
> +
> +#define DCC_SHIFT      26
> +#define FUNCTION_SHIFT 20
> +#define SITE_SHIFT     16
> +#define POSITION_SHIFT 12
> +#define DEVICE_SHIFT   0
> +
> +int vexpress_syscfg(int write, int function, int device, uint32_t *data)
> +{
> +    uint32_t *syscfg = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC);
> +    uint32_t stat;
> +    int dcc = 0; /* DCC to access */
> +    int site = 0; /* motherboard */
> +    int position = 0; /* motherboard */

motherboard twice?

> +    set_fixmap(FIXMAP_MISC, V2M_SYS_MMIO_BASE >> PAGE_SHIFT, DEV_SHARED);
> +
> +    if ( syscfg[V2M_SYS_CFGCTRL] & V2M_SYS_CFG_START )
> +        return -1;
> +
> +    /* clear the complete bit in the V2M_SYS_CFGSTAT status register */

Do you mean clear all the bits or specifically the "completion" bit?

> +    syscfg[V2M_SYS_CFGSTAT] = 0;
> +
> +    if ( write )
> +    {
> +        /* write data */
> +        syscfg[V2M_SYS_CFGDATA] = *data;
> +
> +        /* set control register */
> +        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | V2M_SYS_CFG_WRITE |
> +            (dcc << DCC_SHIFT) | (function << FUNCTION_SHIFT) |
> +            (site << SITE_SHIFT) | (position << POSITION_SHIFT) |
> +            (device << DEVICE_SHIFT);

Most of this shifting is repeated below, perhaps do it once into a local
var to avoid them getting out of sync?
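A sketch of that suggestion (field names taken from the quoted patch; this is an illustration, not the patch itself): build the request word once, so the read and write branches cannot drift out of sync.

```c
#include <stdint.h>

#define DCC_SHIFT      26
#define FUNCTION_SHIFT 20
#define SITE_SHIFT     16
#define POSITION_SHIFT 12
#define DEVICE_SHIFT   0

/* Compute the common part of the V2M_SYS_CFGCTRL request word in
 * one place; the write path then just ORs in START and WRITE. */
static uint32_t syscfg_request(int dcc, int function, int site,
                               int position, int device)
{
    return ((uint32_t)dcc << DCC_SHIFT) |
           ((uint32_t)function << FUNCTION_SHIFT) |
           ((uint32_t)site << SITE_SHIFT) |
           ((uint32_t)position << POSITION_SHIFT) |
           ((uint32_t)device << DEVICE_SHIFT);
}
```

The control register store would then be `V2M_SYS_CFG_START | req` on the read path and `V2M_SYS_CFG_START | V2M_SYS_CFG_WRITE | req` on the write path.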

> +
> +        /* wait for complete flag to be set */
> +        do {
> +            stat = syscfg[V2M_SYS_CFGSTAT];
> +            dsb();
> +        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );

I assume there's no wait-for-event or way to relax this spin loop?

Since this is repeated below, a helper like "wait_for_complete" might be good.

Actually, this whole bit from "set control register" through the error
check is common to both the read and write cases, isn't it?
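A sketch of the factored-out completion wait (the register access is simulated with a plain pointer here; in the real driver it would be the fixmapped syscfg[V2M_SYS_CFGSTAT] with a dsb() inside the loop):

```c
#include <stdint.h>

#define V2M_SYS_CFG_ERROR    (1u << 1)
#define V2M_SYS_CFG_COMPLETE (1u << 0)

/* Spin until the COMPLETE flag shows up in the status register,
 * then report whether the ERROR flag was also set. 'stat_reg'
 * stands in for the memory-mapped V2M_SYS_CFGSTAT register. */
static int syscfg_wait(volatile const uint32_t *stat_reg)
{
    uint32_t stat;

    do {
        stat = *stat_reg;
        /* dsb() would go here on real hardware */
    } while (!(stat & V2M_SYS_CFG_COMPLETE));

    return (stat & V2M_SYS_CFG_ERROR) ? -1 : 0;
}
```

Both the read and write paths could then set the control register and call this one helper instead of duplicating the loop and the error check.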

> +
> +        /* check error status and return error flag if set */
> +        if ( stat & V2M_SYS_CFG_ERROR )
> +            return -1;
> +    } else {
> +        /* set control register */
> +        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | (dcc << DCC_SHIFT) |
> +            (function << FUNCTION_SHIFT) | (site << SITE_SHIFT) |
> +            (position << POSITION_SHIFT) | (device << DEVICE_SHIFT);
> +
> +        /* wait for complete flag to be set */
> +        do {
> +            stat = syscfg[V2M_SYS_CFGSTAT];
> +            dsb();
> +        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
> +
> +        /* check error status flag and return error flag if set */
> +        if ( stat & V2M_SYS_CFG_ERROR )
> +            return -1;
> +        else
> +            /* read data */
> +            *data = syscfg[V2M_SYS_CFGDATA];
> +    }
> +
> +    clear_fixmap(FIXMAP_MISC);
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-set-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/platform_vexpress.h b/xen/include/asm-arm/platform_vexpress.h
> index 3556af3..407602d 100644
> --- a/xen/include/asm-arm/platform_vexpress.h
> +++ b/xen/include/asm-arm/platform_vexpress.h
> @@ -6,6 +6,29 @@
>  #define V2M_SYS_FLAGSSET      (0x30)
>  #define V2M_SYS_FLAGSCLR      (0x34)
>  
> +#define V2M_SYS_CFGDATA       (0x00A0/4)
> +#define V2M_SYS_CFGCTRL       (0x00A4/4)
> +#define V2M_SYS_CFGSTAT       (0x00A8/4)

It'd be better to either define all registers in bytes (as the existing
FLAGS*) or in words (the new ones), and not mix the two...
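A sketch of the all-bytes alternative (illustrative only; `sys_readl`/`sys_writel` are hypothetical helpers): keep every offset in bytes like the existing FLAGS* definitions and do the scaling in one accessor rather than baking /4 into some of the constants.

```c
#include <stddef.h>
#include <stdint.h>

/* All offsets in bytes, matching the existing V2M_SYS_FLAGS* style. */
#define V2M_SYS_CFGDATA (0x00A0)
#define V2M_SYS_CFGCTRL (0x00A4)
#define V2M_SYS_CFGSTAT (0x00A8)

/* Scale byte offsets to 32-bit word indices in one place. */
static uint32_t sys_readl(const uint32_t *base, size_t byte_off)
{
    return base[byte_off / sizeof(uint32_t)];
}

static void sys_writel(uint32_t *base, size_t byte_off, uint32_t val)
{
    base[byte_off / sizeof(uint32_t)] = val;
}
```

That way a reader never has to remember which constants are pre-divided and which are not.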

> +
> +#define V2M_SYS_CFG_START     (1<<31)
> +#define V2M_SYS_CFG_WRITE     (1<<30)
> +#define V2M_SYS_CFG_ERROR     (1<<1)
> +#define V2M_SYS_CFG_COMPLETE  (1<<0)
> +
> +#define V2M_SYS_CFG_OSC_FUNC  1
> +#define V2M_SYS_CFG_OSC0      0
> +#define V2M_SYS_CFG_OSC1      1
> +#define V2M_SYS_CFG_OSC2      2
> +#define V2M_SYS_CFG_OSC3      3
> +#define V2M_SYS_CFG_OSC4      4
> +#define V2M_SYS_CFG_OSC5      5
> +
> +#ifndef __ASSEMBLY__
> +#include <xen/inttypes.h>
> +
> +int vexpress_syscfg(int write, int function, int device, uint32_t *data);
> +#endif
> +
>  #endif /* __ASM_ARM_PLATFORM_H */
>  /*
>   * Local variables:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:48:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJ4Q-0008Ug-V6; Wed, 19 Dec 2012 12:48:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlJ4P-0008Ub-CR
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:48:21 +0000
Received: from [193.109.254.147:54822] by server-5.bemta-14.messagelabs.com id
	3E/A8-32031-497B1D05; Wed, 19 Dec 2012 12:48:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355921299!2214321!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20990 invoked from network); 19 Dec 2012 12:48:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:48:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251639"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:47:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:47:57 +0000
Message-ID: <1355921276.14620.380.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:47:56 +0000
In-Reply-To: <1355856402-26614-7-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-7-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 7/8] xen/arm: introduce vexpress_syscfg
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> Introduce a Versatile Express specific function to read/write
> motherboard settings.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/Makefile                   |    1 +
>  xen/arch/arm/platform_vexpress.c        |   97 +++++++++++++++++++++++++++++++
>  xen/include/asm-arm/platform_vexpress.h |   23 +++++++

I wonder if we ought to start a "platforms" subdirectory under both
include/ and arch/? This is presumably the first of many platforms.
Going the plat-<foo> route seems unnecessary, at least at this stage.

> +#include <asm/platform_vexpress.h>
> +#include <xen/mm.h>
> +
> +#define DCC_SHIFT      26
> +#define FUNCTION_SHIFT 20
> +#define SITE_SHIFT     16
> +#define POSITION_SHIFT 12
> +#define DEVICE_SHIFT   0
> +
> +int vexpress_syscfg(int write, int function, int device, uint32_t *data)
> +{
> +    uint32_t *syscfg = (uint32_t *) FIXMAP_ADDR(FIXMAP_MISC);
> +    uint32_t stat;
> +    int dcc = 0; /* DCC to access */
> +    int site = 0; /* motherboard */
> +    int position = 0; /* motherboard */

motherboard twice?

> +    set_fixmap(FIXMAP_MISC, V2M_SYS_MMIO_BASE >> PAGE_SHIFT, DEV_SHARED);
> +
> +    if ( syscfg[V2M_SYS_CFGCTRL] & V2M_SYS_CFG_START )
> +        return -1;
> +
> +    /* clear the complete bit in the V2M_SYS_CFGSTAT status register */

Do you mean clear all the bits or specifically the "completion" bit?

> +    syscfg[V2M_SYS_CFGSTAT] = 0;
> +
> +    if ( write )
> +    {
> +        /* write data */
> +        syscfg[V2M_SYS_CFGDATA] = *data;
> +
> +        /* set control register */
> +        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | V2M_SYS_CFG_WRITE |
> +            (dcc << DCC_SHIFT) | (function << FUNCTION_SHIFT) |
> +            (site << SITE_SHIFT) | (position << POSITION_SHIFT) |
> +            (device << DEVICE_SHIFT);

Most of this shifting is repeated below, perhaps do it once into a local
var to avoid them getting out of sync?
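Something along these lines, say (just a sketch; the helper name is made up,
the shifts and flag values are from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define DCC_SHIFT      26
#define FUNCTION_SHIFT 20
#define SITE_SHIFT     16
#define POSITION_SHIFT 12
#define DEVICE_SHIFT   0
#define V2M_SYS_CFG_START (1u << 31)
#define V2M_SYS_CFG_WRITE (1u << 30)

/* Compute the shared part of the control word once, so the read and
 * write paths cannot drift out of sync; the write path just ORs in
 * V2M_SYS_CFG_WRITE on top. */
static uint32_t syscfg_ctrl(int dcc, int function, int site,
                            int position, int device)
{
    return V2M_SYS_CFG_START |
           ((uint32_t)dcc << DCC_SHIFT) |
           ((uint32_t)function << FUNCTION_SHIFT) |
           ((uint32_t)site << SITE_SHIFT) |
           ((uint32_t)position << POSITION_SHIFT) |
           ((uint32_t)device << DEVICE_SHIFT);
}
```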

> +
> +        /* wait for complete flag to be set */
> +        do {
> +            stat = syscfg[V2M_SYS_CFGSTAT];
> +            dsb();
> +        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );

I assume there's no wait-for-event or way to relax this spin loop?

Since this is repeated below, a helper "wait_for_complete" might be good.

Actually, this whole bit from "set control register" down to the error
check is common to both the read and write cases, isn't it?
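i.e. something like this (a sketch only; dsb() is stubbed out here so it
compiles stand-alone, and the helper name is made up):

```c
#include <assert.h>
#include <stdint.h>

#define V2M_SYS_CFGSTAT       (0x00A8/4)
#define V2M_SYS_CFG_ERROR     (1u << 1)
#define V2M_SYS_CFG_COMPLETE  (1u << 0)

/* In Xen this would be the ARM dsb() barrier; stubbed for the sketch. */
#define dsb() ((void)0)

/* Poll CFGSTAT until the COMPLETE bit is set, then return -1 if the
 * ERROR bit also came up, 0 on success. */
static int syscfg_wait_complete(volatile uint32_t *syscfg)
{
    uint32_t stat;

    do {
        stat = syscfg[V2M_SYS_CFGSTAT];
        dsb();
    } while ( !(stat & V2M_SYS_CFG_COMPLETE) );

    return (stat & V2M_SYS_CFG_ERROR) ? -1 : 0;
}
```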

> +
> +        /* check error status and return error flag if set */
> +        if ( stat & V2M_SYS_CFG_ERROR )
> +            return -1;
> +    } else {
> +        /* set control register */
> +        syscfg[V2M_SYS_CFGCTRL] = V2M_SYS_CFG_START | (dcc << DCC_SHIFT) |
> +            (function << FUNCTION_SHIFT) | (site << SITE_SHIFT) |
> +            (position << POSITION_SHIFT) | (device << DEVICE_SHIFT);
> +
> +        /* wait for complete flag to be set */
> +        do {
> +            stat = syscfg[V2M_SYS_CFGSTAT];
> +            dsb();
> +        } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
> +
> +        /* check error status flag and return error flag if set */
> +        if ( stat & V2M_SYS_CFG_ERROR )
> +            return -1;
> +        else
> +            /* read data */
> +            *data = syscfg[V2M_SYS_CFGDATA];
> +    }
> +
> +    clear_fixmap(FIXMAP_MISC);
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-set-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/platform_vexpress.h b/xen/include/asm-arm/platform_vexpress.h
> index 3556af3..407602d 100644
> --- a/xen/include/asm-arm/platform_vexpress.h
> +++ b/xen/include/asm-arm/platform_vexpress.h
> @@ -6,6 +6,29 @@
>  #define V2M_SYS_FLAGSSET      (0x30)
>  #define V2M_SYS_FLAGSCLR      (0x34)
>  
> +#define V2M_SYS_CFGDATA       (0x00A0/4)
> +#define V2M_SYS_CFGCTRL       (0x00A4/4)
> +#define V2M_SYS_CFGSTAT       (0x00A8/4)

It'd be better to either define all registers in bytes (as the existing
FLAGS*) or in words (as the new ones), and not to mix the two...
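e.g. keeping everything in bytes and dividing only at the access site
(a sketch; the SYSREG macro name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* All offsets in bytes, matching the existing FLAGS* definitions. */
#define V2M_SYS_CFGDATA  0x00A0
#define V2M_SYS_CFGCTRL  0x00A4
#define V2M_SYS_CFGSTAT  0x00A8

/* base is a uint32_t *; off is a byte offset into the register block. */
#define SYSREG(base, off) ((base)[(off) / 4])
```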

> +
> +#define V2M_SYS_CFG_START     (1<<31)
> +#define V2M_SYS_CFG_WRITE     (1<<30)
> +#define V2M_SYS_CFG_ERROR     (1<<1)
> +#define V2M_SYS_CFG_COMPLETE  (1<<0)
> +
> +#define V2M_SYS_CFG_OSC_FUNC  1
> +#define V2M_SYS_CFG_OSC0      0
> +#define V2M_SYS_CFG_OSC1      1
> +#define V2M_SYS_CFG_OSC2      2
> +#define V2M_SYS_CFG_OSC3      3
> +#define V2M_SYS_CFG_OSC4      4
> +#define V2M_SYS_CFG_OSC5      5
> +
> +#ifndef __ASSEMBLY__
> +#include <xen/inttypes.h>
> +
> +int vexpress_syscfg(int write, int function, int device, uint32_t *data);
> +#endif
> +
>  #endif /* __ASM_ARM_PLATFORM_H */
>  /*
>   * Local variables:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:50:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJ6E-00008I-EK; Wed, 19 Dec 2012 12:50:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlJ6D-00008A-3k
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 12:50:13 +0000
Received: from [85.158.139.83:9264] by server-15.bemta-5.messagelabs.com id
	00/02-20523-408B1D05; Wed, 19 Dec 2012 12:50:12 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355921409!26512872!1
X-Originating-IP: [209.85.223.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8369 invoked from network); 19 Dec 2012 12:50:10 -0000
Received: from mail-ie0-f169.google.com (HELO mail-ie0-f169.google.com)
	(209.85.223.169)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:50:10 -0000
Received: by mail-ie0-f169.google.com with SMTP id c14so2674517ieb.28
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 04:50:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Tap36NBCnsMH4IhYPWg7qUb5jdV5oXGFmg+/6z5vKI0=;
	b=KsY21NVaNHxe0nCJp/YHo95Bovf6m1YK/HAbyJAKIJZpssWaefT9zkfjQ7s9mYigbv
	k8KAx1Or4XWD5cRk93cBpQBy3jC2YZqHtjI0JDuJIt5Pu9jg4ET2DicjIdBK3DqWbqPE
	J3AX+AXwOD2c0qqE8ZbnU1W1OT/LtG6NiNnuH7dVmMU5UvqmdVKxXE5RhdEB/TMjJyCb
	4bmFH+1HSUYAEO8ihu5G6Nlsv7jWl2DdbUcsNQ+Q4y7KyMA2mGOkE61Ez3G/1vtnPGS0
	82Nad2qQgWW2nsEOhxcDCEzmArCyJwXsT7YmToOyQr5R//nd9+Gn8zZzaoa4Wjdsa72k
	uhEg==
MIME-Version: 1.0
Received: by 10.50.197.169 with SMTP id iv9mr6429210igc.32.1355921409120; Wed,
	19 Dec 2012 04:50:09 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Wed, 19 Dec 2012 04:50:08 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1212191145250.17523@kaball.uk.xensource.com>
References: <CAKhsbWaxot1pzWP=-BJJ25y_jZC2uHdM8yGfqO6qbhSqe9dQfQ@mail.gmail.com>
	<alpine.DEB.2.02.1212191145250.17523@kaball.uk.xensource.com>
Date: Wed, 19 Dec 2012 20:50:08 +0800
X-Google-Sender-Auth: 7aekN1p0WF8y_IDQ82BeVr85m_U
Message-ID: <CAKhsbWad1WYhoMW5kebxhKfBwfpuaowuCK4NTy1B-UKM-v2Bkg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, "Kay,
	Allen M" <allen.m.kay@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Correctly expose PCH ISA bridge for IGD
 passthrough (was PCI-PCI bridge)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 7:47 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Tue, 18 Dec 2012, G.R. wrote:
>> Hi Stefano,
>>
>> Per your request, I resend your patch with my local modification to
>> fix the PCH ISA bridge to be exposed correctly to domU. This is XEN
>> part of the fix to make i915 driver properly detect the PCH version.
>> Another patch for i915 driver side is required too. I'll send that to
>> intel-gfx list separately. The combined patch set does fix the PCH
>> detection issue for me.
>>
>> Thanks,
>> Timothy
>
> Thanks! The patch looks OK.
> You need to write a proper description and add your signed-off-by as
> well as mine.
>
> http://wiki.xen.org/wiki/Submitting_Xen_Patches
>
Thanks, I'll resend in a new thread.

>
>> diff --git a/hw/pci.c b/hw/pci.c
>> index f051de1..d371bd7 100644
>> --- a/hw/pci.c
>> +++ b/hw/pci.c
>> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>>      }
>>  }
>>
>> -typedef struct {
>> -    PCIDevice dev;
>> -    PCIBus *bus;
>> -} PCIBridge;
>> -
>>  void pci_bridge_write_config(
>> PCIDevice *d,
>>                               uint32_t address, uint32_t val, int len)
>>  {
>> diff --git a/hw/pci.h b/hw/pci.h
>> index edc58b6..c2acab9 100644
>> --- a/hw/pci.h
>> +++ b/hw/pci.h
>> @@ -222,6 +222,11 @@ struct PCIDevice {
>>      int irq_state[4];
>>  };
>>
>> +typedef struct {
>> +    PCIDevice dev;
>> +    PCIBus *bus;
>> +} PCIBridge;
>> +
>>  extern char direct_pci_str[];
>>  extern int direct_pci_msitranslate;
>>  extern int direct_pci_power_mgmt;
>> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
>> index c6f8869..de21f90 100644
>> --- a/hw/pt-graphics.c
>> +++ b/hw/pt-graphics.c
>> @@ -3,6 +3,7 @@
>>   */
>>
>>  #include "pass-through.h"
>> +#include "pci.h"
>>  #include "pci/header.h"
>>  #include "pci/pci.h"
>>
>> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
>>
>> -    if ( vid == PCI_VENDOR_ID_INTEL )
>> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
>> -                        pch_map_irq, "intel_bridge_1f");
>> +    if (vid == PCI_VENDOR_ID_INTEL) {
>> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
>> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL,
>> pci_bridge_write_config);
>> +
>> +        pci_config_set_vendor_id(s->dev.config, vid);
>> +        pci_config_set_device_id(s->dev.config, did);
>> +
>> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
>> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
>> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast
>> back-to-back, 66MHz, no error
>> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
>> +        s->dev.config[PCI_REVISION] = rid;
>> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
>> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
>> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
>> +        s->dev.config[PCI_HEADER_TYPE] = 0x80;
>> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
>> +
>> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
>> +    }
>>  }
>>
>>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:58:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJEP-0000PS-Gg; Wed, 19 Dec 2012 12:58:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlJEO-0000PN-DK
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:58:40 +0000
Received: from [85.158.138.51:37845] by server-13.bemta-3.messagelabs.com id
	D3/DF-00465-FF9B1D05; Wed, 19 Dec 2012 12:58:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355921917!29526887!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28682 invoked from network); 19 Dec 2012 12:58:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:58:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251848"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:58:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:58:37 +0000
Message-ID: <1355921915.14620.389.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:58:35 +0000
In-Reply-To: <1355856402-26614-8-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-8-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 8/8] xen/arm: introduce a driver for the
 ARM HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> +static void set_color_masks(int bpp,
> +                       int *red_shift, int *green_shift, int *blue_shift,
> +                       int *red_size, int *green_size, int *blue_size)

This is crying out for a pointer to a struct.
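e.g. gather the six out-parameters into one struct, and then the switch
below collapses into a table indexed by bpp (a sketch; names made up):

```c
#include <assert.h>

/* One struct instead of six int * out-parameters. */
struct color_masks {
    int red_shift, green_shift, blue_shift;
    int red_size, green_size, blue_size;
};

/* Indexed by bytes per pixel; entries taken from the switch below. */
static const struct color_masks color_masks_by_bpp[5] = {
    [2] = { 0, 5, 11, 5, 6, 5 },   /* 16bpp: RGB565 */
    [3] = { 0, 8, 16, 8, 8, 8 },   /* 24bpp: RGB888 */
    [4] = { 0, 8, 16, 8, 8, 8 },   /* 32bpp: XRGB8888 */
};
```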

> +{
> +    switch (bpp) {
> +        case 2:
> +            *red_shift = 0;
> +            *green_shift = 5;
> +            *blue_shift = 11;
> +            *red_size = 5;
> +            *green_size = 6;
> +            *blue_size = 5;
> +            break;
> +        case 3:
> +        case 4:
> +            *red_shift = 0;
> +            *green_shift = 8;
> +            *blue_shift = 16;
> +            *red_size = 8;
> +            *green_size = 8;
> +            *blue_size = 8;
> +            break;
> +        default:
> +            BUG();
> +            break;
> +    }
> +}
> +
> +static void set_pixclock(uint32_t pixclock)
> +{

Doesn't there need to be some sort of "are we running on a vexpress"
check here? e.g. a DTB compatibility node check or something?

> +    vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, V2M_SYS_CFG_OSC5, &pixclock);
> +}
> +
> +void __init video_init(void)
> +{
> +    int node, depth;
> +    u32 address_cells, size_cells;
> +    struct fb_prop fbp;
> +    unsigned char *lfb;
> +    paddr_t hdlcd_start, hdlcd_size;
> +    paddr_t framebuffer_start, framebuffer_size;
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +    const char *mode_string;
> +    char _mode_string[16];
> +    int bpp;
> +    int red_shift, green_shift, blue_shift;
> +    int red_size, green_size, blue_size;
> +    struct modeline *videomode = NULL;
> +    int i;
> +
> +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> +                &address_cells, &size_cells) <= 0 )
> +        return;
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &hdlcd_start, &hdlcd_size);
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &framebuffer_start, &framebuffer_size);
> +
> +    mode_string = fdt_getprop(device_tree_flattened, node, "mode", NULL);
> +    if ( !mode_string )
> +    {
> +        bpp = 4;
> +        set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                &red_size, &green_size, &blue_size);
> +        memcpy(_mode_string, "1280x1024@60", strlen("1280x1024@60"));

What associates mode_string with this _mode_string?

> +    }

Or should there be an else here? Otherwise can't mode_string be NULL?

> +    if ( strlen(mode_string) < strlen("800x600@60") )
> +    {
> +        printk("HDLCD: invalid modeline=%s\n", mode_string);
> +        return;
> +    } else {
> +        char *s = strchr(mode_string, '-');
> +        if ( !s )
> +        {
> +            printk("HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
> +                    mode_string);
> +            bpp = 4;
> +            set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                    &red_size, &green_size, &blue_size);
> +            memcpy(_mode_string, mode_string, strlen(mode_string) + 1);

What if strlen(mode_string)+1 > 16 ?

> +        } else {
> +                       if ( strlen(s) < 6 )
> +                       {
> +                               printk("HDLCD: invalid mode %s\n", mode_string);
> +                               return;
> +                       }

Indentation

> +            s++;
> +            if ( !strncmp(s, "16", 2) )
> +            {
> +                bpp = 2;
> +                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                        &red_size, &green_size, &blue_size);
> +            }
> +            else if ( !strncmp(s, "24", 2) )
> +            {
> +                bpp = 3;
> +                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                        &red_size, &green_size, &blue_size);
> +            }
> +            else if ( !strncmp(s, "32", 2) )
> +            {
> +                bpp = 4;
> +                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                        &red_size, &green_size, &blue_size);

This all smells like a lookup table to me.
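e.g. (a sketch only; the table and helper names are made up):

```c
#include <assert.h>
#include <string.h>

/* Map the two-character bpp suffix of the modeline to bytes per
 * pixel, replacing the strncmp ladder. */
static const struct { const char *suffix; int bpp; } bpp_table[] = {
    { "16", 2 },
    { "24", 3 },
    { "32", 4 },
};

static int parse_bpp(const char *s)
{
    unsigned int i;

    for ( i = 0; i < sizeof(bpp_table) / sizeof(bpp_table[0]); i++ )
        if ( !strncmp(s, bpp_table[i].suffix, 2) )
            return bpp_table[i].bpp;

    return -1; /* unsupported bpp */
}
```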

> +            } else  {

extra space here.

> +                printk("HDLCD: unsupported bpp %s\n", s);
> +                return;
> +            }
> +            i = s - mode_string - 1;
> +            memcpy(_mode_string, mode_string, i);
> +            memcpy(_mode_string + i, mode_string + i + 3, 4);
> +        }
> +    }
> +
> +    for ( i = 0; i < ARRAY_SIZE(videomodes); i++ )
> +    {
> +        if ( !strcmp(_mode_string, videomodes[i].mode) )
> +        {
> +            videomode = &videomodes[i];
> +            break;
> +        }
> +    }
> +    if ( !videomode )
> +    {
> +        printk("HDLCD: unsupported videomode %s\n", _mode_string);
> +        return;
> +    }
> +
> +
> +    if ( !hdlcd_start || !framebuffer_start )

Worth a message? Also, couldn't you have checked this much earlier
(before the mode parsing stuff)?

> +        return;
> +
> +    if ( framebuffer_size < bpp * videomode->xres * videomode->yres )
> +    {
> +        printk("HDLCD: the framebuffer is too small, disable the HDLCD driver\n");

"disable" or "disabling"?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 12:58:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 12:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJEP-0000PS-Gg; Wed, 19 Dec 2012 12:58:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlJEO-0000PN-DK
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 12:58:40 +0000
Received: from [85.158.138.51:37845] by server-13.bemta-3.messagelabs.com id
	D3/DF-00465-FF9B1D05; Wed, 19 Dec 2012 12:58:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355921917!29526887!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28682 invoked from network); 19 Dec 2012 12:58:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 12:58:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="251848"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 12:58:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 12:58:37 +0000
Message-ID: <1355921915.14620.389.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:58:35 +0000
In-Reply-To: <1355856402-26614-8-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-8-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 8/8] xen/arm: introduce a driver for the
 ARM HDLCD controller
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> +static void set_color_masks(int bpp,
> +                       int *red_shift, int *green_shift, int *blue_shift,
> +                       int *red_size, int *green_size, int *blue_size)

This is crying out for a pointer to a struct.

> +{
> +    switch (bpp) {
> +        case 2:
> +            *red_shift = 0;
> +            *green_shift = 5;
> +            *blue_shift = 11;
> +            *red_size = 5;
> +            *green_size = 6;
> +            *blue_size = 5;
> +            break;
> +        case 3:
> +        case 4:
> +            *red_shift = 0;
> +            *green_shift = 8;
> +            *blue_shift = 16;
> +            *red_size = 8;
> +            *green_size = 8;
> +            *blue_size = 8;
> +            break;
> +        default:
> +            BUG();
> +            break;
> +    }
> +}
> +
> +static void set_pixclock(uint32_t pixclock)
> +{

Doesn't there need to be some sort of "are we running on a vexpress"
check here? e.g. a DTB compatibility node check or something?
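
As a toy model of that check (the compatible string "arm,vexpress" and the standalone helper are assumptions; the real code would consult the flattened device tree): a DT "compatible" property is a NUL-separated list of strings, and the driver should only touch OSC5 when one of them matches.

```c
#include <string.h>
#include <stddef.h>
#include <assert.h>

/* Toy model: scan a DT "compatible" property (a NUL-separated string
 * list of length len) for an exact match against `what`. */
static int dt_is_compatible(const char *prop, size_t len, const char *what)
{
    size_t off = 0;

    while ( off < len )
    {
        if ( !strcmp(prop + off, what) )
            return 1;
        off += strlen(prop + off) + 1;  /* advance to next string */
    }
    return 0;
}
```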

> +    vexpress_syscfg(1, V2M_SYS_CFG_OSC_FUNC, V2M_SYS_CFG_OSC5, &pixclock);
> +}
> +
> +void __init video_init(void)
> +{
> +    int node, depth;
> +    u32 address_cells, size_cells;
> +    struct fb_prop fbp;
> +    unsigned char *lfb;
> +    paddr_t hdlcd_start, hdlcd_size;
> +    paddr_t framebuffer_start, framebuffer_size;
> +    const struct fdt_property *prop;
> +    const u32 *cell;
> +    const char *mode_string;
> +    char _mode_string[16];
> +    int bpp;
> +    int red_shift, green_shift, blue_shift;
> +    int red_size, green_size, blue_size;
> +    struct modeline *videomode = NULL;
> +    int i;
> +
> +    if ( find_compatible_node("arm,hdlcd", &node, &depth,
> +                &address_cells, &size_cells) <= 0 )
> +        return;
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "reg", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &hdlcd_start, &hdlcd_size);
> +
> +    prop = fdt_get_property(device_tree_flattened, node, "framebuffer", NULL);
> +    if ( !prop )
> +        return;
> +
> +    cell = (const u32 *)prop->data;
> +    device_tree_get_reg(&cell, address_cells, size_cells,
> +            &framebuffer_start, &framebuffer_size);
> +
> +    mode_string = fdt_getprop(device_tree_flattened, node, "mode", NULL);
> +    if ( !mode_string )
> +    {
> +        bpp = 4;
> +        set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                &red_size, &green_size, &blue_size);
> +        memcpy(_mode_string, "1280x1024@60", strlen("1280x1024@60"));

What associates mode_string with this _mode_string?

> +    }

Or should there be an else here? Otherwise can't mode_string be NULL?

> +    if ( strlen(mode_string) < strlen("800x600@60") )
> +    {
> +        printk("HDLCD: invalid modeline=%s\n", mode_string);
> +        return;
> +    } else {
> +        char *s = strchr(mode_string, '-');
> +        if ( !s )
> +        {
> +            printk("HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
> +                    mode_string);
> +            bpp = 4;
> +            set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                    &red_size, &green_size, &blue_size);
> +            memcpy(_mode_string, mode_string, strlen(mode_string) + 1);

What if strlen(mode_string)+1 > 16 ?
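
For illustration, a bounds-checked copy along these lines (a sketch; a real Xen patch would use the hypervisor's own string helpers rather than stdio) rejects over-long strings instead of overrunning the 16-byte buffer:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Sketch: copy src into a fixed-size buffer only if it fits, including
 * the NUL terminator; return -1 so the caller can print an error and
 * bail out instead of overflowing. */
static int copy_mode_string(char *dst, size_t dst_size, const char *src)
{
    if ( strlen(src) + 1 > dst_size )
        return -1;
    snprintf(dst, dst_size, "%s", src);
    return 0;
}
```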

> +        } else {
> +                       if ( strlen(s) < 6 )
> +                       {
> +                               printk("HDLCD: invalid mode %s\n", mode_string);
> +                               return;
> +                       }

Indentation

> +            s++;
> +            if ( !strncmp(s, "16", 2) )
> +            {
> +                bpp = 2;
> +                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                        &red_size, &green_size, &blue_size);
> +            }
> +            else if ( !strncmp(s, "24", 2) )
> +            {
> +                bpp = 3;
> +                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                        &red_size, &green_size, &blue_size);
> +            }
> +            else if ( !strncmp(s, "32", 2) )
> +            {
> +                bpp = 4;
> +                set_color_masks(bpp, &red_shift, &green_shift, &blue_shift,
> +                        &red_size, &green_size, &blue_size);

This all smells like a lookup table to me.
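
Something like this table (a sketch; names hypothetical) would replace the repeated branches, since the only thing that varies between them is the bpp value fed to set_color_masks():

```c
#include <string.h>
#include <assert.h>

/* Sketch: map the modeline's bpp suffix ("16"/"24"/"32") to bytes per
 * pixel via a table instead of an if/else chain. */
static const struct bpp_entry {
    const char *suffix;
    int bpp;                /* bytes per pixel */
} bpp_table[] = {
    { "16", 2 },
    { "24", 3 },
    { "32", 4 },
};

static int lookup_bpp(const char *s)
{
    unsigned int i;

    for ( i = 0; i < sizeof(bpp_table) / sizeof(bpp_table[0]); i++ )
        if ( !strncmp(s, bpp_table[i].suffix, 2) )
            return bpp_table[i].bpp;
    return -1;              /* unsupported bpp */
}
```

On a miss the caller prints the "unsupported bpp" message and returns; on a hit it calls set_color_masks() once with the result.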

> +            } else  {

extra space here.

> +                printk("HDLCD: unsupported bpp %s\n", s);
> +                return;
> +            }
> +            i = s - mode_string - 1;
> +            memcpy(_mode_string, mode_string, i);
> +            memcpy(_mode_string + i, mode_string + i + 3, 4);
> +        }
> +    }
> +
> +    for ( i = 0; i < ARRAY_SIZE(videomodes); i++ )
> +    {
> +        if ( !strcmp(_mode_string, videomodes[i].mode) )
> +        {
> +            videomode = &videomodes[i];
> +            break;
> +        }
> +    }
> +    if ( !videomode )
> +    {
> +        printk("HDLCD: unsupported videomode %s\n", _mode_string);
> +        return;
> +    }
> +
> +
> +    if ( !hdlcd_start || !framebuffer_start )

Worth a message? Also, couldn't you have checked this much earlier
(before the mode parsing stuff)?

> +        return;
> +
> +    if ( framebuffer_size < bpp * videomode->xres * videomode->yres )
> +    {
> +        printk("HDLCD: the framebuffer is too small, disable the HDLCD driver\n");

"disable" or "disabling" ?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:00:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:00:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJFy-0000Vw-0d; Wed, 19 Dec 2012 13:00:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TlJFw-0000Vn-F8
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:00:16 +0000
Received: from [85.158.138.51:54600] by server-9.bemta-3.messagelabs.com id
	81/8A-11948-F5AB1D05; Wed, 19 Dec 2012 13:00:15 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355922006!21440346!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30354 invoked from network); 19 Dec 2012 13:00:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:00:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1174981"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 12:59:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 07:59:47 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TlJFS-0001ES-VN;
	Wed, 19 Dec 2012 12:59:46 +0000
Message-ID: <50D1B8D0.4020500@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:53:36 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
	<49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
	<20121218221749.GA6332@phenom.dumpdata.com>
In-Reply-To: <20121218221749.GA6332@phenom.dumpdata.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/12 22:17, Konrad Rzeszutek Wilk wrote:
>> Hi Dan, an issue with your reasoning throughout has been the constant 
>> invocation of the multi host environment as a justification for your 
>> proposal. But this argument is not used in your proposal below beyond 
>> this mention in passing. Further, there is no relation between what 
>> you are changing (the hypervisor) and what you are claiming it is 
>> needed for (multi host VM management). 
> Heh. I hadn't realized that the emails need to conform to
> the way legal briefs are written in the US :-) Meaning that
> each topic must be addressed.

Every time we try to suggest alternatives, Dan goes on some rant about 
how we're on different planets, how we're all old-guard stuck in 
static-land thinking, and how we're focused on single-server use cases, 
but that multi-server use cases are so different.  That's not a one-off; 
Dan has several times brought up the multi-server case as a reason that 
a user-space version won't work.  But when it comes down to it, he 
(apparently) has barely mentioned it.  If it's such a key point, 
why does he not bring it up here?  It turns out we were right all along 
-- the whole multi-server thing has nothing to do with it.  That's the 
point Andres is getting at, I think.

(FYI I'm not wasting my time reading mail from Dan anymore on this 
subject.  As far as I can tell in this entire discussion he has never 
changed his mind or his core argument in response to anything anyone has 
said, nor has he understood better our ideas or where we are coming 
from.  He has only responded by generating more verbiage than anyone has 
the time to read and understand, much less respond to.  That's why I 
suggested to Dan that he ask someone else to take over the conversation.)

> Anyhow, the multi-host env or a single-host env has the same
> issue - you try to launch multiple guests and some of
> them might not launch.
>
> The changes that Dan is proposing (the claim hypercall)
> would provide the functionality to fix this problem.
>
>> A fairly bizarre limitation of a balloon-based approach to memory management. Why on earth should the guest be allowed to change the size of its balloon, and therefore its footprint on the host? This may be justified with arguments pertaining to the stability of the in-guest workload. What they really reveal are limitations of ballooning. But the inadequacy of the balloon in itself doesn't automatically translate into justifying the need for a new hypercall.
> Why is this a limitation? Why shouldn't the guest be allowed to change
> its memory usage? It can go up and down as it sees fit.
> And if it goes down and it gets better performance - well, why shouldn't
> it do it?
>
> I concur it is odd - but it has been like that for decades.

Well, it shouldn't be allowed to do it because it causes this problem 
you're having with creating guests in parallel.  Ultimately, that is the 
core of your problem.  So if you want us to solve the problem by 
implementing something in the hypervisor, then you need to justify why 
"Just don't have guests balloon down" is an unacceptable option.  Saying 
"why shouldn't it", and "it's been that way for decades*" isn't a good 
enough reason.

* Xen is only just 10, so "decades" is a bit of hyperbole. :-)

>
>
>>> the hypervisor, which adjusts the domain memory footprint, which changes the number of free pages _without_ the toolstack knowledge.
>>> The toolstack controls constraints (essentially a minimum and maximum)
>>> which the hypervisor enforces.  The toolstack can ensure that the
>>> minimum and maximum are identical to essentially disallow Linux from
>>> using this functionality.  Indeed, this is precisely what Citrix's
>>> Dynamic Memory Controller (DMC) does: enforce min==max so that DMC always has complete control and, so, knowledge of any domain memory
>>> footprint changes.  But DMC is not prescribed by the toolstack,
>> Neither is enforcing min==max. This was my argument when previously commenting on this thread. The fact that you have enforcement of a maximum domain allocation gives you an excellent tool to keep a domain's unsupervised growth at bay. The toolstack can choose how fine-grained, how often to be alerted and stall the domain.
> There is a down-call (so events) to the tool-stack from the hypervisor when
> the guest tries to balloon in/out? So the need for this arose
> but the mechanism to deal with it has been shifted to the user-space
> then? What to do when the guest does this in/out balloon at frequent
> intervals?
>
> I am missing actually the reasoning behind wanting to stall the domain?
> Is that to compress/swap the pages that the guest requests? Meaning
> a user-space daemon that does "things" and has ownership
> of the pages?
>
>>> and some real Oracle Linux customers use and depend on the flexibility
>>> provided by in-guest ballooning.   So guest-privileged-user-driven-
>>> ballooning is a potential issue for toolstack-based capacity allocation.
>>>
>>> [IIGT: This is why I have brought up DMC several times and have
>>> called this the "Citrix model,".. I'm not trying to be snippy
>>> or impugn your morals as maintainers.]
>>>
>>> B) Xen's page sharing feature has slowly been completed over a number
>>> of recent Xen releases.  It takes advantage of the fact that many
>>> pages often contain identical data; the hypervisor merges them to save
>> Great care has been taken for this statement to not be exactly true. The hypervisor discards one of two pages that the toolstack tells it to (and patches the physmap of the VM previously pointing to the discard page). It doesn't merge, nor does it look into contents. The hypervisor doesn't care about the page contents. This is deliberate, so as to avoid spurious claims of "you are using technique X!"
>>
> Is the toolstack (or a daemon in userspace) doing this? I would
> have thought that there would be some optimization to do this
> somewhere?
>
>>> physical RAM.  When any "shared" page is written, the hypervisor
>>> "splits" the page (aka, copy-on-write) by allocating a new physical
>>> page.  There is a long history of this feature in other virtualization
>>> products and it is known to be possible that, under many circumstances, thousands of splits may occur in any fraction of a second.  The
>>> hypervisor does not notify or ask permission of the toolstack.
>>> So, page-splitting is an issue for toolstack-based capacity
>>> allocation, at least as currently coded in Xen.
>>>
>>> [Andre: Please hold your objection here until you read further.]
>> Name is Andres. And please cc me if you'll be addressing me directly!
>>
>> Note that I don't disagree with your previous statement in itself. Although "page-splitting" is fairly unique terminology, and confusing (at least to me). CoW works.
> <nods>
>>> C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
>>> toolstack for over three years.  It depends on an in-guest-kernel
>>> adaptive technique to constantly adjust the domain memory footprint as
>>> well as hooks in the in-guest-kernel to move data to and from the
>>> hypervisor.  While the data is in the hypervisor's care, interesting
>>> memory-load balancing between guests is done, including optional
>>> compression and deduplication.  All of this has been in Xen since 2009
>>> and has been awaiting changes in the (guest-side) Linux kernel. Those
>>> changes are now merged into the mainstream kernel and are fully
>>> functional in shipping distros.
>>>
>>> While a complete description of tmem's guest<->hypervisor interaction
>>> is beyond the scope of this document, it is important to understand
>>> that any tmem-enabled guest kernel may unpredictably request thousands
>>> or even millions of pages directly via hypercalls from the hypervisor in a fraction of a second with absolutely no interaction with the toolstack.  Further, the guest-side hypercalls that allocate pages
>>> via the hypervisor are done in "atomic" code deep in the Linux mm
>>> subsystem.
>>>
>>> Indeed, if one truly understands tmem, it should become clear that
>>> tmem is fundamentally incompatible with toolstack-based capacity
>>> allocation. But let's stop discussing tmem for now and move on.
>> You have not discussed tmem pool thaw and freeze in this proposal.
> Oooh, you know about it :-) Dan didn't want to go too verbose on
> people. It is a bit of a rathole - and this hypercall would
> allow us to deprecate said freeze/thaw calls.
>
>>> OK.  So with existing code both in Xen and Linux guests, there are
>>> three challenges to toolstack-based capacity allocation.  We'd
>>> really still like to do capacity allocation in the toolstack.  Can
>>> something be done in the toolstack to "fix" these three cases?
>>>
>>> Possibly.  But let's first look at hypervisor-based capacity
>>> allocation: the proposed "XENMEM_claim_pages" hypercall.
>>>
>>> HYPERVISOR-BASED CAPACITY ALLOCATION
>>>
>>> The posted patch for the claim hypercall is quite simple, but let's
>>> look at it in detail.  The claim hypercall is actually a subop
>>> of an existing hypercall.  After checking parameters for validity,
>>> a new function is called in the core Xen memory management code.
>>> This function takes the hypervisor heaplock, checks for a few
>>> special cases, does some arithmetic to ensure a valid claim, stakes
>>> the claim, releases the hypervisor heaplock, and then returns.  To
>>> review from earlier, the hypervisor heaplock protects _all_ page/slab
>>> allocations, so we can be absolutely certain that there are no other
>>> page allocation races.  This new function is about 35 lines of code,
>>> not counting comments.
>>>
>>> The patch includes two other significant changes to the hypervisor:
>>> First, when any adjustment to a domain's memory footprint is made
>>> (either through a toolstack-aware hypercall or one of the three
>>> toolstack-unaware methods described above), the heaplock is
>>> taken, arithmetic is done, and the heaplock is released.  This
>>> is 12 lines of code.  Second, when any memory is allocated within
>>> Xen, a check must be made (with the heaplock already held) to
>>> determine if, given a previous claim, the domain has exceeded
>>> its upper bound, maxmem.  This code is a single conditional test.
>>>
>>> With some declarations, but not counting the copious comments,
>>> all told, the new code provided by the patch is well under 100 lines.
>>>
>>> What about the toolstack side?  First, it's important to note that
>>> the toolstack changes are entirely optional.  If any toolstack
>>> wishes either to not fix the original problem, or avoid toolstack-
>>> unaware allocation completely by ignoring the functionality provided
>>> by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
>>> not use the new hypercall.
>> You are ruling out any other possibility here. In particular, but not limited to, use of max_pages.
> The one max_page check that comes to my mind is the one that Xapi
> uses. That is it has a daemon that sets the max_pages of all the
> guests at some value so that it can squeeze in as many guests as
> possible. It also balloons pages out of a guest to make space if
> need to launch. The heuristic of how many pages or the ratio
> of max/min looks to be proportional (so to make space for 1GB
> for a guest, and say we have 10 guests, we will subtract
> 101MB from each guest - the extra 1MB is for extra overhead).
> This depends on one hypercall that 'xl' or 'xm' toolstack do not
> use - which sets the max_pages.
>
> That code makes certain assumptions - that the guest will not go up/down
> in the ballooning once the toolstack has decreed how much
> memory the guest should use. It also assumes that the operations
> are semi-atomic - and to make it so as much as it can - it executes
> these operations in serial.

No, the xapi code makes no such assumptions.  After it tells a guest to 
balloon down, it watches to see  what actually happens, and has 
heuristics to deal with "non-cooperative guests".  It does assume that 
if it sets max_pages lower than or equal to the current amount of used 
memory, that the hypervisor will not allow the guest to balloon up -- 
but that's a pretty safe assumption.  A guest can balloon down if it 
wants to, but as xapi does not consider that memory free, it will never 
use it.

BTW, I don't know if you realize this: Originally Xen would return an 
error if you tried to set max_pages below tot_pages.  But as a result of 
the DMC work, it was seen as useful to allow the toolstack to tell the 
hypervisor once, "Once the VM has ballooned down to X, don't let it 
balloon up above X anymore."

> This goes back to the problem statement - if we try to parallelize
> this we run into the problem that the amount of memory we thought
> was free is not true anymore. The start of this email has a good
> description of some of the issues.
>
> In essence, the max_pages does work - _if_ one does these operations
> in serial. We are trying to make this work in parallel and without
> any failures - for that we - one way that is quite simplistic
> is the claim hypercall. It sets up a 'stake' of the amount of
> memory that the hypervisor should reserve. This way other
> guest creations/ballooning do not infringe on the 'claimed' amount.

I'm not sure what you mean by "do these operations in serial" in this 
context.  Each of your "reservation hypercalls" has to happen in 
serial.  If we had a user-space daemon that was in charge of freeing up 
or reserving memory, each request to that daemon would happen in serial 
as well.  But once the allocation / reservation happened, the domain 
builds could happen in parallel.

> I believe with this hypercall the Xapi can be made to do its operations
> in parallel as well.

xapi can already boot guests in parallel when there's enough memory to 
do so -- what operations did you have in mind?

I haven't followed all of the discussion (for reasons mentioned above), 
but I think the alternative to Dan's solution is something like below.  
Maybe you can tell me why it's not very suitable:

Have one place in the user-space -- either in the toolstack, or a 
separate daemon -- that is responsible for knowing all the places where 
memory might be in use.  Memory can be in use either by Xen, or by one 
of several VMs, or in a tmem pool.

In your case, when not creating VMs, it can remove all limitations -- 
allow the guests or tmem to grow or shrink as much as they want.

When a request comes in for a certain amount of memory, it will go and 
set each VM's max_pages, and the max tmem pool size.  It can then check 
whether there is enough free memory to complete the allocation or not 
(since there's a race between checking how much memory a guest is using 
and setting max_pages).  If that succeeds, it can return "success".  If, 
while that VM is being built, another request comes in, it can again go 
around and set the max sizes lower.  It has to know how much of the 
memory is "reserved" for the first guest being built, but if there's 
enough left after that, it can return "success" and allow the second VM 
to start being built.

After the VMs are built, the toolstack can remove the limits again if it 
wants, again allowing the free flow of memory.
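As a toy model of the scheme just described (all numbers and function names hypothetical; a real implementation would go through libxc and watch xenstore rather than globals), the serialized user-space arbiter could look like:

```c
#include <assert.h>

#define HOST_PAGES 1000UL   /* hypothetical total host memory, in pages */

static unsigned long used;      /* pages already owned by built guests */
static unsigned long reserved;  /* pages promised to in-flight builds  */

/* Serialized entry point: clamp every guest's max_pages (elided here)
 * and grant the build only if the claim fits in what is still free. */
static int try_reserve(unsigned long pages)
{
    if ( used + reserved + pages > HOST_PAGES )
        return -1;              /* caller must wait or fail the build */
    reserved += pages;
    return 0;
}

/* Once the domain is built, its claim becomes real usage and the
 * arbiter can lift the temporary max_pages limits again. */
static void build_done(unsigned long pages)
{
    reserved -= pages;
    used += pages;
}
```

Requests to the arbiter are serial, but the domain builds they authorize can proceed in parallel.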

Do you see any problems with this scheme?  All it requires is for the 
toolstack to be able to temporarily set limits on both guests ballooning 
up and on tmem allocating more than a certain amount of memory.  We 
already have mechanisms for the first, so if we had a "max_pages" for 
tmem, then you'd have all the tools you need to implement it.

This is the point at which Dan says something about giant multi-host 
deployments, which has absolutely no bearing on the issue -- the 
reservation happens at a host level, whether it's in userspace or the 
hypervisor.

It's also where he goes on about how we're stuck in an old stodgy static 
world and he lives in a magical dynamic hippie world of peace and free 
love... er, free memory.  Which is also not true -- in the scenario I 
describe above, tmem is actively being used, and guests can actively 
balloon down and up, while the VM builds are happening.  In Dan's 
proposal, tmem and guests are prevented from allocating "reserved" 
memory by some complicated scheme inside the allocator; in the above 
proposal, tmem and guests are prevented from allocating "reserved" 
memory by simple hypervisor-enforced max_page settings.  The end result 
looks the same to me.

  -George




From xen-devel-bounces@lists.xen.org Wed Dec 19 13:00:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:00:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJFy-0000Vw-0d; Wed, 19 Dec 2012 13:00:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TlJFw-0000Vn-F8
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:00:16 +0000
Received: from [85.158.138.51:54600] by server-9.bemta-3.messagelabs.com id
	81/8A-11948-F5AB1D05; Wed, 19 Dec 2012 13:00:15 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1355922006!21440346!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30354 invoked from network); 19 Dec 2012 13:00:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:00:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="1174981"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 12:59:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 07:59:47 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TlJFS-0001ES-VN;
	Wed, 19 Dec 2012 12:59:46 +0000
Message-ID: <50D1B8D0.4020500@eu.citrix.com>
Date: Wed, 19 Dec 2012 12:53:36 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
	<49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
	<20121218221749.GA6332@phenom.dumpdata.com>
In-Reply-To: <20121218221749.GA6332@phenom.dumpdata.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/12/12 22:17, Konrad Rzeszutek Wilk wrote:
>> Hi Dan, an issue with your reasoning throughout has been the constant 
>> invocation of the multi host environment as a justification for your 
>> proposal. But this argument is not used in your proposal below beyond 
>> this mention in passing. Further, there is no relation between what 
>> you are changing (the hypervisor) and what you are claiming it is 
>> needed for (multi host VM management). 
> Heh. I hadn't realized that the emails need to conform to a
> the way legal briefs are written in US :-) Meaning that
> each topic must be addressed.

Every time we try to suggest alternatives, Dan goes on some rant about 
how we're on different planets, how we're all old-guard stuck in 
static-land thinking, and how we're focused on single-server use cases, 
but that multi-server use cases are so different.  That's not a one-off, 
Dan has brought up the multi-server case as a reason that a user-space 
version won't work several times.  But when it comes down to it, he 
(apparently) has barely mentioned it.  If it's such a key reason point, 
why does he not bring it up here?  It turns out we were right all along 
-- the whole multi-server thing has nothing to do with it.  That's the 
point Andres is getting at, I think.

(FYI I'm not wasting my time reading mail from Dan anymore on this 
subject.  As far as I can tell in this entire discussion he has never 
changed his mind or his core argument in response to anything anyone has 
said, nor has he understood better our ideas or where we are coming 
from.  He has only responded by generating more verbiage than anyone has 
the time to read and understand, much less respond to.  That's why I 
suggested to Dan that he ask someone else to take over the conversation.)

> Anyhow, the multi-host env or a single-host env has the same
> issue - you try to launch multiple guests and you some of
> them might not launch.
>
> The changes that Dan is proposing (the claim hypercall)
> would provide the functionality to fix this problem.
>
>> A fairly bizarre limitation of a balloon-based approach to memory management. Why on earth should the guest be allowed to change the size of its balloon, and therefore its footprint on the host. This may be justified with arguments pertaining to the stability of the in-guest workload. What they really reveal are limitations of ballooning. But the inadequacy of the balloon in itself doesn't automatically translate into justifying the need for a new hyper call.
> Why is this a limitation? Why shouldn't the guest the allowed to change
> its memory usage? It can go up and down as it sees fit.
> And if it goes down and it gets better performance - well, why shouldn't
> it do it?
>
> I concur it is odd - but it has been like that for decades.

Well, it shouldn't be allowed to do it because it causes this problem 
you're having with creating guests in parallel.  Ultimately, that is the 
core of your problem.  So if you want us to solve the problem by 
implementing something in the hypervisor, then you need to justify why 
"Just don't have guests balloon down" is an unacceptable option.  Saying 
"why shouldn't it", and "it's been that way for decades*" isn't a good 
enough reason.

* Xen is only just 10, so "decades" is a bit of hyperbole. :-)

>
>
>>> the hypervisor, which adjusts the domain memory footprint, which changes the number of free pages _without_ the toolstack knowledge.
>>> The toolstack controls constraints (essentially a minimum and maximum)
>>> which the hypervisor enforces.  The toolstack can ensure that the
>>> minimum and maximum are identical to essentially disallow Linux from
>>> using this functionality.  Indeed, this is precisely what Citrix's
>>> Dynamic Memory Controller (DMC) does: enforce min==max so that DMC always has complete control and, so, knowledge of any domain memory
>>> footprint changes.  But DMC is not prescribed by the toolstack,
>> Neither is enforcing min==max. This was my argument when previously commenting on this thread. The fact that you have enforcement of a maximum domain allocation gives you an excellent tool to keep a domain's unsupervised growth at bay. The toolstack can choose how fine-grained, how often to be alerted and stall the domain.
> There is a down-call (so events) to the tool-stack from the hypervisor when
> the guest tries to balloon in/out? So the need for this problem arose
> but the mechanism to deal with it has been shifted to the user-space
> then? What to do when the guest does this in/out ballooning at
> frequent intervals?
>
> Actually, I am missing the reasoning behind wanting to stall the domain.
> Is that to compress/swap the pages that the guest requests? Meaning
> a user-space daemon that does "things" and has ownership
> of the pages?
>
>>> and some real Oracle Linux customers use and depend on the flexibility
>>> provided by in-guest ballooning.   So guest-privileged-user-driven-
>>> ballooning is a potential issue for toolstack-based capacity allocation.
>>>
>>> [IIGT: This is why I have brought up DMC several times and have
>>> called this the "Citrix model". I'm not trying to be snippy
>>> or impugn your morals as maintainers.]
>>>
>>> B) Xen's page sharing feature has slowly been completed over a number
>>> of recent Xen releases.  It takes advantage of the fact that many
>>> pages often contain identical data; the hypervisor merges them to save
>> Great care has been taken for this statement to not be exactly true. The hypervisor discards one of two pages that the toolstack tells it to (and patches the physmap of the VM previously pointing to the discard page). It doesn't merge, nor does it look into contents. The hypervisor doesn't care about the page contents. This is deliberate, so as to avoid spurious claims of "you are using technique X!"
>>
> Is the toolstack (or a daemon in userspace) doing this? I would
> have thought that there would be some optimization to do this
> somewhere?
>
>>> physical RAM.  When any "shared" page is written, the hypervisor
>>> "splits" the page (aka, copy-on-write) by allocating a new physical
>>> page.  There is a long history of this feature in other virtualization
>>> products and it is known to be possible that, under many circumstances, thousands of splits may occur in any fraction of a second.  The
>>> hypervisor does not notify or ask permission of the toolstack.
>>> So, page-splitting is an issue for toolstack-based capacity
>>> allocation, at least as currently coded in Xen.
>>>
>>> [Andre: Please hold your objection here until you read further.]
>> Name is Andres. And please cc me if you'll be addressing me directly!
>>
>> Note that I don't disagree with your previous statement in itself. Although "page-splitting" is fairly unique terminology, and confusing (at least to me). CoW works.
> <nods>
>>> C) Transcendent Memory ("tmem") has existed in the Xen hypervisor and
>>> toolstack for over three years.  It depends on an in-guest-kernel
>>> adaptive technique to constantly adjust the domain memory footprint as
>>> well as hooks in the in-guest-kernel to move data to and from the
>>> hypervisor.  While the data is in the hypervisor's care, interesting
>>> memory-load balancing between guests is done, including optional
>>> compression and deduplication.  All of this has been in Xen since 2009
>>> and has been awaiting changes in the (guest-side) Linux kernel. Those
>>> changes are now merged into the mainstream kernel and are fully
>>> functional in shipping distros.
>>>
>>> While a complete description of tmem's guest<->hypervisor interaction
>>> is beyond the scope of this document, it is important to understand
>>> that any tmem-enabled guest kernel may unpredictably request thousands
>>> or even millions of pages directly via hypercalls from the hypervisor in a fraction of a second with absolutely no interaction with the toolstack.  Further, the guest-side hypercalls that allocate pages
>>> via the hypervisor are done in "atomic" code deep in the Linux mm
>>> subsystem.
>>>
>>> Indeed, if one truly understands tmem, it should become clear that
>>> tmem is fundamentally incompatible with toolstack-based capacity
>>> allocation. But let's stop discussing tmem for now and move on.
>> You have not discussed tmem pool thaw and freeze in this proposal.
> Oooh, you know about it :-) Dan didn't want to go too verbose on
> people. It is a bit of a rathole - and this hypercall would
> allow us to deprecate said freeze/thaw calls.
>
>>> OK.  So with existing code both in Xen and Linux guests, there are
>>> three challenges to toolstack-based capacity allocation.  We'd
>>> really still like to do capacity allocation in the toolstack.  Can
>>> something be done in the toolstack to "fix" these three cases?
>>>
>>> Possibly.  But let's first look at hypervisor-based capacity
>>> allocation: the proposed "XENMEM_claim_pages" hypercall.
>>>
>>> HYPERVISOR-BASED CAPACITY ALLOCATION
>>>
>>> The posted patch for the claim hypercall is quite simple, but let's
>>> look at it in detail.  The claim hypercall is actually a subop
>>> of an existing hypercall.  After checking parameters for validity,
>>> a new function is called in the core Xen memory management code.
>>> This function takes the hypervisor heaplock, checks for a few
>>> special cases, does some arithmetic to ensure a valid claim, stakes
>>> the claim, releases the hypervisor heaplock, and then returns.  To
>>> review from earlier, the hypervisor heaplock protects _all_ page/slab
>>> allocations, so we can be absolutely certain that there are no other
>>> page allocation races.  This new function is about 35 lines of code,
>>> not counting comments.
>>>
>>> The patch includes two other significant changes to the hypervisor:
>>> First, when any adjustment to a domain's memory footprint is made
>>> (either through a toolstack-aware hypercall or one of the three
>>> toolstack-unaware methods described above), the heaplock is
>>> taken, arithmetic is done, and the heaplock is released.  This
>>> is 12 lines of code.  Second, when any memory is allocated within
>>> Xen, a check must be made (with the heaplock already held) to
>>> determine if, given a previous claim, the domain has exceeded
>>> its upper bound, maxmem.  This code is a single conditional test.
>>>
>>> With some declarations, but not counting the copious comments,
>>> all told, the new code provided by the patch is well under 100 lines.
>>>
>>> What about the toolstack side?  First, it's important to note that
>>> the toolstack changes are entirely optional.  If any toolstack
>>> wishes either to not fix the original problem, or avoid toolstack-
>>> unaware allocation completely by ignoring the functionality provided
>>> by in-guest ballooning, page-sharing, and/or tmem, that toolstack need
>>> not use the new hypercall.
>> You are ruling out any other possibility here. In particular, but not limited to, use of max_pages.
> The one max_page check that comes to my mind is the one that Xapi
> uses. That is it has a daemon that sets the max_pages of all the
> guests at some value so that it can squeeze in as many guests as
> possible. It also balloons pages out of a guest to make space if
> it needs to launch one. The heuristic of how many pages or the ratio
> of max/min looks to be proportional (so to make space for 1GB
> for a guest, and say we have 10 guests, we will subtract
> 101MB from each guest - the extra 1MB is for extra overhead).
> This depends on one hypercall that 'xl' or 'xm' toolstack do not
> use - which sets the max_pages.
>
> That code makes certain assumptions - that the guest will not go up/down
> in ballooning once the toolstack has decreed how much
> memory the guest should use. It also assumes that the operations
> are semi-atomic - and to make it so as much as it can - it executes
> these operations in serial.

No, the xapi code makes no such assumptions.  After it tells a guest to 
balloon down, it watches to see  what actually happens, and has 
heuristics to deal with "non-cooperative guests".  It does assume that 
if it sets max_pages lower than or equal to the current amount of used 
memory, that the hypervisor will not allow the guest to balloon up -- 
but that's a pretty safe assumption.  A guest can balloon down if it 
wants to, but as xapi does not consider that memory free, it will never 
use it.

BTW, I don't know if you realize this: Originally Xen would return an 
error if you tried to set max_pages below tot_pages.  But as a result of 
the DMC work, it was seen as useful to allow the toolstack to tell the 
hypervisor once, "Once the VM has ballooned down to X, don't let it 
balloon up above X anymore."
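
The max_pages semantics described above can be sketched roughly like
this (a minimal, hypothetical model in plain C -- not the actual Xen
allocator code; the struct and function names are made up for
illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the semantics: the hypervisor refuses any
 * allocation that would push a domain's tot_pages above max_pages,
 * but setting max_pages *below* tot_pages is permitted and simply
 * prevents further growth -- the post-DMC behaviour described above. */
struct domain_mem {
    unsigned long tot_pages;  /* pages currently allocated to the domain */
    unsigned long max_pages;  /* toolstack-imposed ceiling */
};

/* Guest tries to balloon up by nr pages. */
static bool try_populate(struct domain_mem *d, unsigned long nr)
{
    if (d->tot_pages + nr > d->max_pages)
        return false;         /* hypervisor rejects the allocation */
    d->tot_pages += nr;
    return true;
}

/* Guest balloons down by nr pages; always permitted. */
static void balloon_down(struct domain_mem *d, unsigned long nr)
{
    d->tot_pages -= nr;
}
```

So a toolstack that clamps max_pages at or below current usage gets
exactly the "don't let it balloon up above X anymore" guarantee,
without ever forcing the guest down itself.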

> This goes back to the problem statement - if we try to parallelize
> this we run into the problem that the amount of memory we thought
> was free is not true anymore. The start of this email has a good
> description of some of the issues.
>
> In essence, the max_pages does work - _if_ one does these operations
> in serial. We are trying to make this work in parallel and without
> any failures - for that, one way that is quite simplistic
> is the claim hypercall. It sets up a 'stake' of the amount of
> memory that the hypervisor should reserve. This way other
> guest creations/ballooning do not infringe on the 'claimed' amount.

I'm not sure what you mean by "do these operations in serial" in this 
context.  Each of your "reservation hypercalls" has to happen in 
serial.  If we had a user-space daemon that was in charge of freeing up 
or reserving memory, each request to that daemon would happen in serial 
as well.  But once the allocation / reservation happened, the domain 
builds could happen in parallel.

> I believe with this hypercall the Xapi can be made to do its operations
> in parallel as well.

xapi can already boot guests in parallel when there's enough memory to 
do so -- what operations did you have in mind?

I haven't followed all of the discussion (for reasons mentioned above), 
but I think the alternative to Dan's solution is something like below.  
Maybe you can tell me why it's not very suitable:

Have one place in the user-space -- either in the toolstack, or a 
separate daemon -- that is responsible for knowing all the places where 
memory might be in use.  Memory can be in use either by Xen, or by one 
of several VMs, or in a tmem pool.

In your case, when not creating VMs, it can remove all limitations -- 
allow the guests or tmem to grow or shrink as much as they want.

When a request comes in for a certain amount of memory, it will go and 
set each VM's max_pages, and the max tmem pool size.  It can then check 
whether there is enough free memory to complete the allocation or not 
(since there's a race between checking how much memory a guest is using 
and setting max_pages).  If that succeeds, it can return "success".  If, 
while that VM is being built, another request comes in, it can again go 
around and set the max sizes lower.  It has to know how much of the 
memory is "reserved" for the first guest being built, but if there's 
enough left after that, it can return "success" and allow the second VM 
to start being built.

After the VMs are built, the toolstack can remove the limits again if it 
wants, again allowing the free flow of memory.

Do you see any problems with this scheme?  All it requires is for the 
toolstack to be able to temporarily set limits on both guests ballooning 
up and on tmem allocating more than a certain amount of memory.  We 
already have mechanisms for the first, so if we had a "max_pages" for 
tmem, then you'd have all the tools you need to implement it.
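
The scheme above can be sketched as bookkeeping in a single userspace
daemon (a hypothetical model, not xapi or any real toolstack code; in
practice the "clamp" step would be per-domain max_pages setting, e.g.
something like xc_domain_setmaxmem(), plus a tmem cap -- here both are
assumed to have already happened):

```c
#include <assert.h>
#include <stdbool.h>

/* One daemon tracks host free memory and outstanding "reservations"
 * for VMs currently being built.  Because every VM's max_pages and
 * the tmem pool size are assumed clamped before reserve() is called,
 * free_pages cannot shrink underneath us. */
struct host {
    unsigned long free_pages;      /* pages not owned by any VM or tmem */
    unsigned long reserved_pages;  /* promised to in-flight domain builds */
};

/* Serial request phase: stake out memory for one new VM build.
 * Builds themselves may then proceed in parallel. */
static bool reserve(struct host *h, unsigned long nr)
{
    if (h->free_pages - h->reserved_pages < nr)
        return false;              /* not enough unreserved memory */
    h->reserved_pages += nr;
    return true;
}

/* Build finished (built == true) or aborted: release the reservation;
 * on success the pages now belong to the new VM. */
static void finish_build(struct host *h, unsigned long nr, bool built)
{
    h->reserved_pages -= nr;
    if (built)
        h->free_pages -= nr;
}
```

The point is that only the reserve() step needs to be serialized; once
it returns success the limits enforced by the hypervisor keep the
reservation safe while builds overlap.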

This is the point at which Dan says something about giant multi-host 
deployments, which has absolutely no bearing on the issue -- the 
reservation happens at a host level, whether it's in userspace or the 
hypervisor.

It's also where he goes on about how we're stuck in an old stodgy static 
world and he lives in a magical dynamic hippie world of peace and free 
love... er, free memory.  Which is also not true -- in the scenario I 
describe above, tmem is actively being used, and guests can actively 
balloon down and up, while the VM builds are happening.  In Dan's 
proposal, tmem and guests are prevented from allocating "reserved" 
memory by some complicated scheme inside the allocator; in the above 
proposal, tmem and guests are prevented from allocating "reserved" 
memory by simple hypervisor-enforced max_page settings.  The end result 
looks the same to me.

  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:04:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:04:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJJx-0000nk-VR; Wed, 19 Dec 2012 13:04:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlJJv-0000nd-Qy
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:04:24 +0000
Received: from [85.158.138.51:5462] by server-3.bemta-3.messagelabs.com id
	73/99-31588-65BB1D05; Wed, 19 Dec 2012 13:04:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355922191!29679610!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29774 invoked from network); 19 Dec 2012 13:03:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 13:03:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 13:03:11 +0000
Message-Id: <50D1C91D02000078000B1690@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 13:03:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part4F7EC31D.0__="
Subject: [Xen-devel] [PATCH] x86: also print CRn register values upon double
	fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part4F7EC31D.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Do so by simply re-using _show_registers().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -225,6 +225,7 @@ void double_fault(void);
 void do_double_fault(struct cpu_user_regs *regs)
 {
     unsigned int cpu;
+    unsigned long crs[8];
=20
     watchdog_disable();
=20
@@ -235,22 +236,18 @@ void do_double_fault(struct cpu_user_reg
     /* Find information saved during fault and dump it to the console. */
     printk("*** DOUBLE FAULT ***\n");
     print_xen_info();
-    printk("CPU:    %d\nRIP:    %04x:[<%016lx>]",
-           cpu, regs->cs, regs->rip);
-    print_symbol(" %s", regs->rip);
-    printk("\nRFLAGS: %016lx\n", regs->rflags);
-    printk("rax: %016lx   rbx: %016lx   rcx: %016lx\n",
-           regs->rax, regs->rbx, regs->rcx);
-    printk("rdx: %016lx   rsi: %016lx   rdi: %016lx\n",
-           regs->rdx, regs->rsi, regs->rdi);
-    printk("rbp: %016lx   rsp: %016lx   r8:  %016lx\n",
-           regs->rbp, regs->rsp, regs->r8);
-    printk("r9:  %016lx   r10: %016lx   r11: %016lx\n",
-           regs->r9,  regs->r10, regs->r11);
-    printk("r12: %016lx   r13: %016lx   r14: %016lx\n",
-           regs->r12, regs->r13, regs->r14);
-    printk("r15: %016lx    cs: %016lx    ss: %016lx\n",
-           regs->r15, (long)regs->cs, (long)regs->ss);
+
+    crs[0] =3D read_cr0();
+    crs[2] =3D read_cr2();
+    crs[3] =3D read_cr3();
+    crs[4] =3D read_cr4();
+    regs->ds =3D read_segment_register(ds);
+    regs->es =3D read_segment_register(es);
+    regs->fs =3D read_segment_register(fs);
+    regs->gs =3D read_segment_register(gs);
+
+    printk("CPU:    %d\n", cpu);
+    _show_registers(regs, crs, CTXT_hypervisor, NULL);
     show_stack_overflow(cpu, regs->rsp);
=20
     panic("DOUBLE FAULT -- system shutdown\n");




--=__Part4F7EC31D.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part4F7EC31D.0__=--


From xen-devel-bounces@lists.xen.org Wed Dec 19 13:06:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJLj-0000uS-Gj; Wed, 19 Dec 2012 13:06:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlJLh-0000uM-WA
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:06:14 +0000
Received: from [85.158.143.35:21274] by server-3.bemta-4.messagelabs.com id
	F6/2B-18211-5CBB1D05; Wed, 19 Dec 2012 13:06:13 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355922370!12614141!1
X-Originating-IP: [209.85.223.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2030 invoked from network); 19 Dec 2012 13:06:12 -0000
Received: from mail-ie0-f174.google.com (HELO mail-ie0-f174.google.com)
	(209.85.223.174)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:06:12 -0000
Received: by mail-ie0-f174.google.com with SMTP id c11so2684305ieb.33
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 05:06:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:cc:content-type;
	bh=VBBjKnNfrcE5LuwnCRGOUOVCb+HduYkhPVBvZPFRz/8=;
	b=UUpIfjsqruOQj514E6gZ6rsUzCTwmU0Om2OoFpVXAtEwD7+5ceUL20mzR1nXlXEGqh
	ogdcl46U5WUZ9/U5I/QnlHF7RT9fT7Du5eRD1mNZBztZ4rQQtO+NdYMOafD+/YBhewfc
	VOV8ZWcQJZB2BXrwgSFOw8JvWfT9HX/j5x9gHWBQN0B+72B5KfJoc+hELmCObv6NvI9Z
	XeIW+V2h8bcwSM0U/ltI3xtAhghjEhhHUeqfaElVqywtzueDJubWknV0zeXyUnloVzvG
	lYbcU7L/mtkpdYDC3WoSqxVvzwfzmW9dLIyuA0b/pQheuqu+Pdbhj+0sDOEmzig5aGza
	TAXA==
MIME-Version: 1.0
Received: by 10.50.190.163 with SMTP id gr3mr6842413igc.28.1355922369908; Wed,
	19 Dec 2012 05:06:09 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Wed, 19 Dec 2012 05:06:09 -0800 (PST)
Date: Wed, 19 Dec 2012 21:06:09 +0800
X-Google-Sender-Auth: wdWxaMxrwWFTdtpKNBAYwrG5TBU
Message-ID: <CAKhsbWbezhvrAV=a=tvJmWosfeN0xsPb74sjqB6J9JViv2F7BQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: xen-devel <xen-devel@lists.xen.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2] qemu-xen:Correctly expose PCH ISA bridge for
	IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fix IGD passthrough logic to properly expose the PCH ISA bridge (instead
of exposing it as a PCI-PCI bridge). The i915 driver requires this to
correctly detect the PCH version and enable version-specific code
paths.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
Timothy Guo <firemeteor@users.sourceforge.net>

diff --git a/hw/pci.c b/hw/pci.c
index f051de1..d371bd7 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
     }
 }

-typedef struct {
-    PCIDevice dev;
-    PCIBus *bus;
-} PCIBridge;
-
 void pci_bridge_write_config(PCIDevice *d,
                              uint32_t address, uint32_t val, int len)
 {
diff --git a/hw/pci.h b/hw/pci.h
index edc58b6..c2acab9 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -222,6 +222,11 @@ struct PCIDevice {
     int irq_state[4];
 };

+typedef struct {
+    PCIDevice dev;
+    PCIBus *bus;
+} PCIBridge;
+
 extern char direct_pci_str[];
 extern int direct_pci_msitranslate;
 extern int direct_pci_power_mgmt;
diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
index c6f8869..de21f90 100644
--- a/hw/pt-graphics.c
+++ b/hw/pt-graphics.c
@@ -3,6 +3,7 @@
  */

 #include "pass-through.h"
+#include "pci.h"
 #include "pci/header.h"
 #include "pci/pci.h"

@@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
     did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
     rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);

-    if ( vid == PCI_VENDOR_ID_INTEL )
-        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
-                        pch_map_irq, "intel_bridge_1f");
+    if (vid == PCI_VENDOR_ID_INTEL) {
+        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
+                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
+
+        pci_config_set_vendor_id(s->dev.config, vid);
+        pci_config_set_device_id(s->dev.config, did);
+
+        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
+        s->dev.config[PCI_COMMAND + 1] = 0x00;
+        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
+        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
+        s->dev.config[PCI_REVISION] = rid;
+        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
+        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
+        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
+        s->dev.config[PCI_HEADER_TYPE] = 0x80;
+        s->dev.config[PCI_SEC_STATUS] = 0xa0;
+
+        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
+    }
 }

 uint32_t igd_read_opregion(struct pt_dev *pci_dev)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:06:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJLw-0000ve-Tz; Wed, 19 Dec 2012 13:06:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlJLv-0000vI-Jc
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 13:06:27 +0000
Received: from [85.158.137.99:46038] by server-9.bemta-3.messagelabs.com id
	EF/3A-11948-2DBB1D05; Wed, 19 Dec 2012 13:06:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-217.messagelabs.com!1355922385!16924748!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26693 invoked from network); 19 Dec 2012 13:06:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:06:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="252050"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 13:06:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 13:06:25 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlJLt-0006rO-B6;
	Wed, 19 Dec 2012 13:06:25 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlJLt-0000Qh-4q;
	Wed, 19 Dec 2012 13:06:25 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14783-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 13:06:25 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14783: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14783 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14783/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14773

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  b13c5ee3c109
baseline version:
 xen                  516dbd9deb4f

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=b13c5ee3c109
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing b13c5ee3c109
+ branch=xen-4.1-testing
+ revision=b13c5ee3c109
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r b13c5ee3c109 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 4 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:22:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJbV-0001iw-I7; Wed, 19 Dec 2012 13:22:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TlJbU-0001iq-8N
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:22:32 +0000
Received: from [85.158.139.83:32159] by server-5.bemta-5.messagelabs.com id
	DD/6F-22648-79FB1D05; Wed, 19 Dec 2012 13:22:31 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355923350!30581138!1
X-Originating-IP: [209.85.214.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8423 invoked from network); 19 Dec 2012 13:22:30 -0000
Received: from mail-bk0-f42.google.com (HELO mail-bk0-f42.google.com)
	(209.85.214.42)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:22:30 -0000
Received: by mail-bk0-f42.google.com with SMTP id ji2so979746bkc.15
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 05:22:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:user-agent:date:subject:from:to:message-id
	:thread-topic:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=j4ceWujiGfDAQNHau4C1+kF75QL5feQ/+fDNTeI02ko=;
	b=unCEzAE63XiQjFtyYRkPB7TvROGpmvLzZFLOA/beHUc5vfPfXL4HX8qj4l4qJ7gVFH
	2xUiZVrzhSdxQwk0h505np7sBnTABm1vLRhp1TW3asmnGBwE2EEC8IFgNTKkZ2SsK0C3
	GRIeaqJnWIUE+XfZM5QfqzDDEpA+u/FMnIy1L8JFALxeMKBvR+tJezUdK2HJeKOBVwPJ
	jEY7hFiMg2elkXCsRYuAh8jYuc5C2W8U43+SKsJzqYa6y8KG0pfD2UScN2Bvm8ee8DAQ
	+dWUwyNt7GAt398jMuUNZUg7b7q1og7YjUuX2+Xzy41Kvf+06GSv2WfkEEuUZeaos20X
	zxMw==
X-Received: by 10.204.148.19 with SMTP id n19mr2556934bkv.131.1355923349641;
	Wed, 19 Dec 2012 05:22:29 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id o7sm4150804bkv.13.2012.12.19.05.22.27
	(version=SSLv3 cipher=OTHER); Wed, 19 Dec 2012 05:22:28 -0800 (PST)
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Wed, 19 Dec 2012 13:22:23 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CCF7700F.55FDC%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86: also print CRn register values upon
	double fault
Thread-Index: Ac3d6+FEapWYLTuQSkeIW9faCBFS/g==
In-Reply-To: <50D1C91D02000078000B1690@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: also print CRn register values upon
 double fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/2012 13:03, "Jan Beulich" <JBeulich@suse.com> wrote:

> Do so by simply re-using _show_registers().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -225,6 +225,7 @@ void double_fault(void);
>  void do_double_fault(struct cpu_user_regs *regs)
>  {
>      unsigned int cpu;
> +    unsigned long crs[8];
>  
>      watchdog_disable();
>  
> @@ -235,22 +236,18 @@ void do_double_fault(struct cpu_user_reg
>      /* Find information saved during fault and dump it to the console. */
>      printk("*** DOUBLE FAULT ***\n");
>      print_xen_info();
> -    printk("CPU:    %d\nRIP:    %04x:[<%016lx>]",
> -           cpu, regs->cs, regs->rip);
> -    print_symbol(" %s", regs->rip);
> -    printk("\nRFLAGS: %016lx\n", regs->rflags);
> -    printk("rax: %016lx   rbx: %016lx   rcx: %016lx\n",
> -           regs->rax, regs->rbx, regs->rcx);
> -    printk("rdx: %016lx   rsi: %016lx   rdi: %016lx\n",
> -           regs->rdx, regs->rsi, regs->rdi);
> -    printk("rbp: %016lx   rsp: %016lx   r8:  %016lx\n",
> -           regs->rbp, regs->rsp, regs->r8);
> -    printk("r9:  %016lx   r10: %016lx   r11: %016lx\n",
> -           regs->r9,  regs->r10, regs->r11);
> -    printk("r12: %016lx   r13: %016lx   r14: %016lx\n",
> -           regs->r12, regs->r13, regs->r14);
> -    printk("r15: %016lx    cs: %016lx    ss: %016lx\n",
> -           regs->r15, (long)regs->cs, (long)regs->ss);
> +
> +    crs[0] = read_cr0();
> +    crs[2] = read_cr2();
> +    crs[3] = read_cr3();
> +    crs[4] = read_cr4();
> +    regs->ds = read_segment_register(ds);
> +    regs->es = read_segment_register(es);
> +    regs->fs = read_segment_register(fs);
> +    regs->gs = read_segment_register(gs);
> +
> +    printk("CPU:    %d\n", cpu);
> +    _show_registers(regs, crs, CTXT_hypervisor, NULL);
>      show_stack_overflow(cpu, regs->rsp);
>  
>      panic("DOUBLE FAULT -- system shutdown\n");
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:28:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlJgg-0001xc-9s; Wed, 19 Dec 2012 13:27:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlJge-0001xX-S1
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:27:52 +0000
Received: from [193.109.254.147:56554] by server-13.bemta-14.messagelabs.com
	id D0/E6-01725-8D0C1D05; Wed, 19 Dec 2012 13:27:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355923671!10561926!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24995 invoked from network); 19 Dec 2012 13:27:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:27:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,316,1355097600"; 
   d="scan'208";a="252646"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 13:27:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 13:27:51 +0000
Message-ID: <1355923669.14620.401.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 19 Dec 2012 13:27:49 +0000
In-Reply-To: <20121206120913.GM82725@ocelot.phlegethon.org>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<20121206120913.GM82725@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-06 at 12:09 +0000, Tim Deegan wrote:
> At 11:56 +0000 on 04 Dec (1354622173), Ian Campbell wrote:
> > This was a short term hack to get something linking quickly, but its
> > usefulness has now passed.
> > 
> > This series replaces everything in here with proper functions. In many
> > cases these are still just stubs.
> 
> For the arch/arm parts,
> 
> Acked-by: Tim Deegan <tim@xen.org>

I somehow missed this in my ping yesterday, sorry!

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:55:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlK7B-0002i7-PP; Wed, 19 Dec 2012 13:55:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TlK79-0002i2-G5
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:55:15 +0000
Received: from [85.158.137.99:25278] by server-5.bemta-3.messagelabs.com id
	B2/BF-15136-247C1D05; Wed, 19 Dec 2012 13:55:14 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1355925310!16985831!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4305 invoked from network); 19 Dec 2012 13:55:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:55:11 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="1254758"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 13:55:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 08:55:04 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TlK6y-00021m-7Q;
	Wed, 19 Dec 2012 13:55:04 +0000
Message-ID: <50D1C5C5.4070704@eu.citrix.com>
Date: Wed, 19 Dec 2012 13:48:53 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
	<49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
	<20121218221749.GA6332@phenom.dumpdata.com>
	<50D1B8D0.4020500@eu.citrix.com>
In-Reply-To: <50D1B8D0.4020500@eu.citrix.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
 problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 12:53, George Dunlap wrote:
> When a request comes in for a certain amount of memory, it will go and 
> set each VM's max_pages, and the max tmem pool size.  It can then 
> check whether there is enough free memory to complete the allocation 
> or not (since there's a race between checking how much memory a guest 
> is using and setting max_pages).  If that succeeds, it can return 
> "success".  If, while that VM is being built, another request comes 
> in, it can again go around and set the max sizes lower.  It has to 
> know how much of the memory is "reserved" for the first guest being 
> built, but if there's enough left after that, it can return "success" 
> and allow the second VM to start being built.
>
> After the VMs are built, the toolstack can remove the limits again if 
> it wants, again allowing the free flow of memory.
>
> Do you see any problems with this scheme?  All it requires is for the 
> toolstack to be able to temporarily set limits on both guests 
> ballooning up and on tmem allocating more than a certain amount of 
> memory.  We already have mechanisms for the first, so if we had a 
> "max_pages" for tmem, then you'd have all the tools you need to 
> implement it.

I should also point out, this scheme has some distinct *advantages*: 
Namely, that if there isn't enough free memory, such a daemon can easily 
be modified to *make* free memory by cranking down balloon targets 
and/or tmem pool size.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:57:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlK9F-0002nT-GJ; Wed, 19 Dec 2012 13:57:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1TlK9E-0002nD-I4; Wed, 19 Dec 2012 13:57:24 +0000
Received: from [85.158.143.99:6048] by server-1.bemta-4.messagelabs.com id
	74/95-28401-3C7C1D05; Wed, 19 Dec 2012 13:57:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1355925439!20557020!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7761 invoked from network); 19 Dec 2012 13:57:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:57:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253492"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 13:57:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 13:57:19 +0000
Message-ID: <1355925437.14620.409.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rob Hoes <Rob.Hoes@citrix.com>
Date: Wed, 19 Dec 2012 13:57:17 +0000
In-Reply-To: <7EA643C653F17F4C80DE959E978F10EDFA101107AB@LONPMAILBOX01.citrite.net>
References: <patchbomb.1353432200@cosworth.uk.xensource.com>
	<8195cb0ebac691ae94e9.1353432202@cosworth.uk.xensource.com>
	<7EA643C653F17F4C80DE959E978F10EDFA101107AB@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 02 of 15] libxl: Add
 LIBXL_SHUTDOWN_REASON_UNKNOWN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-11-29 at 16:23 +0000, Rob Hoes wrote:
> > libxl: Add LIBXL_SHUTDOWN_REASON_UNKNOWN
> > 
> > libxl_dominfo.shutdown_reason is valid iff (shutdown||dying). This is a bit
> > annoying when generating language bindings since it needs all sorts of
> > special casing. Just introduce an explicit value instead.
> > 
> > Signed-off-by: Ian Campbell <ian.cambell@citrix.com>
> 
> This change is very useful from an ocaml-bindings point of view.
> 
> Acked-by: Rob Hoes <rob.hoes@citrix.com>

Thanks. I think I'm actually going to defer on applying this one until
we have a clearer idea what direction the bindings are taking.

In particular if we decide to implement the "default" state with
None/Some (see comment on patch 15/15) then that may be more appropriate
than this change.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 13:59:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 13:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKBS-0002y2-2P; Wed, 19 Dec 2012 13:59:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKBR-0002xv-2r
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 13:59:41 +0000
Received: from [85.158.137.99:19729] by server-11.bemta-3.messagelabs.com id
	F9/F3-13335-C48C1D05; Wed, 19 Dec 2012 13:59:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355925552!20144655!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15266 invoked from network); 19 Dec 2012 13:59:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 13:59:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="1182788"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 13:59:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 08:59:12 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TlKAx-00025u-N1;
	Wed, 19 Dec 2012 13:59:11 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 13:59:11 +0000
Message-ID: <1355925551-14149-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools/tests: Restrict some tests to x86 only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

MCE injection and x86_emulator are clearly x86 specific.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/tests/Makefile |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index cc96cd3..adeb120 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -5,12 +5,12 @@ CFLAGS  += $(CFLAGS_libxenctrl)
 LDLIBS += $(LDLIBS_libxenctrl)
 
 SUBDIRS-y :=
-SUBDIRS-y += mce-test
+SUBDIRS-$(CONFIG_X86) += mce-test
 SUBDIRS-y += mem-sharing
 ifeq ($(XEN_TARGET_ARCH),__fixme__)
 SUBDIRS-y += regression
 endif
-SUBDIRS-y += x86_emulator
+SUBDIRS-$(CONFIG_X86) += x86_emulator
 SUBDIRS-y += xen-access
 
 .PHONY: all clean install distclean
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:03:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKEh-0003In-NS; Wed, 19 Dec 2012 14:03:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKEf-0003Ib-CF
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 14:03:01 +0000
Received: from [85.158.138.51:36091] by server-5.bemta-3.messagelabs.com id
	99/8D-15136-419C1D05; Wed, 19 Dec 2012 14:03:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355925762!29690144!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17468 invoked from network); 19 Dec 2012 14:02:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:02:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253657"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:02:42 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:02:42 +0000
Message-ID: <1355925761.14620.410.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 14:02:41 +0000
In-Reply-To: <alpine.DEB.2.02.1211271831560.5310@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1211271831560.5310@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: do not map vGIC twice for dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-11-27 at 18:36 +0000, Stefano Stabellini wrote:
> We don't need to manually set the P2M for the vGIC in construct_dom0,
> because we have already done it generally for every guest in gicv_setup.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

This fell through the cracks at some point, sorry. Now acked + applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:03:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKFM-0003M5-5H; Wed, 19 Dec 2012 14:03:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKFK-0003Lf-VY
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:03:43 +0000
Received: from [85.158.139.83:35424] by server-10.bemta-5.messagelabs.com id
	2D/5B-13383-E39C1D05; Wed, 19 Dec 2012 14:03:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355925781!26674983!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4928 invoked from network); 19 Dec 2012 14:03:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:03:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253671"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:03:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:03:01 +0000
Message-ID: <1355925780.14620.411.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 14:03:00 +0000
In-Reply-To: <alpine.DEB.2.02.1212181142070.17523@kaball.uk.xensource.com>
References: <1355757581-11845-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181142070.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: Call init_xen_time earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 11:42 +0000, Stefano Stabellini wrote:
> On Mon, 17 Dec 2012, Ian Campbell wrote:
> > If we panic before calling init_xen_time then the "Rebooting in 5
> > seconds" delay ends up calling udelay which uses cntfrq before it has
> > been initialised resulting in a divide by zero.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Applied, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:04:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:04:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKFW-0003Nj-Ht; Wed, 19 Dec 2012 14:03:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKFV-0003NS-OG
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:03:53 +0000
Received: from [85.158.143.35:38668] by server-3.bemta-4.messagelabs.com id
	29/1E-18211-849C1D05; Wed, 19 Dec 2012 14:03:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355925832!12622690!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26658 invoked from network); 19 Dec 2012 14:03:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:03:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253697"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:03:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:03:52 +0000
Message-ID: <1355925830.14620.412.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 14:03:50 +0000
In-Reply-To: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH 00/15] xen: arm: remove dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 11:56 +0000, Ian Campbell wrote:
> This was a short term hack to get something linking quickly, but its
> usefulness has now passed.
> 
> This series replaces everything in here with proper functions. In many
> cases these are still just stubs.
> 
> It seems to me that at least some of this stuff consists of x86-isms
> which should instead be removed from the common code.
> 
> This highlights two large missing pieces of functionality: wallclock
> time and cleaning up on domain destroy.

Applied 1..4,6..14 with Tim+Stefano's acks. Thanks.

5 was the patch which touched x86, which I need to refresh after Jan's
comment. 15 is the final removal of dummy.S, which needs 5.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:06:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKHt-0003gD-3t; Wed, 19 Dec 2012 14:06:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKHr-0003fy-If
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:06:19 +0000
Received: from [85.158.139.211:33131] by server-7.bemta-5.messagelabs.com id
	34/1A-08009-AD9C1D05; Wed, 19 Dec 2012 14:06:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355925962!21067451!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7492 invoked from network); 19 Dec 2012 14:06:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:06:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253832"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:04:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:04:25 +0000
Message-ID: <1355925864.14620.413.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 19 Dec 2012 14:04:24 +0000
In-Reply-To: <20121213123612.GE75286@ocelot.phlegethon.org>
References: <1354799399.17165.93.camel@zakaz.uk.xensource.com>
	<1354799451-16876-1-git-send-email-ian.campbell@citrix.com>
	<20121213123612.GE75286@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/9] xen: arm: mark early_panic as a
 noreturn function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-13 at 12:36 +0000, Tim Deegan wrote:
> At 13:10 +0000 on 06 Dec (1354799442), Ian Campbell wrote:
> > Otherwise gcc complains about variables being used when not
> > initialised when in fact that point is never reached.
> > 
> > There aren't any instances of this in tree right now, I noticed this
> > while developing another patch.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Tim Deegan <tim@xen.org>

Applied, thanks.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:06:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKI0-0003hR-GO; Wed, 19 Dec 2012 14:06:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKHz-0003hC-I1
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:06:27 +0000
Received: from [85.158.143.99:12187] by server-2.bemta-4.messagelabs.com id
	3D/3D-30861-2E9C1D05; Wed, 19 Dec 2012 14:06:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1355925985!23325003!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5880 invoked from network); 19 Dec 2012 14:06:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:06:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253880"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:06:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:06:24 +0000
Message-ID: <1355925983.14620.414.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 14:06:23 +0000
In-Reply-To: <alpine.DEB.2.02.1212191123050.17523@kaball.uk.xensource.com>
References: <1354636677-16271-1-git-send-email-ian.campbell@citrix.com>
	<1355851042.14620.280.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212181824460.17523@kaball.uk.xensource.com>
	<1355912890.14620.297.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1212191123050.17523@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: introduce arm32 as a subarch of
	arm.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 11:23 +0000, Stefano Stabellini wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> 
> that's OK

Then applied, thanks!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:06:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:06:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKID-0003ke-U9; Wed, 19 Dec 2012 14:06:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKIC-0003k9-LB
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 14:06:40 +0000
Received: from [85.158.138.51:60072] by server-12.bemta-3.messagelabs.com id
	FF/8C-27559-FE9C1D05; Wed, 19 Dec 2012 14:06:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355925999!28407690!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6621 invoked from network); 19 Dec 2012 14:06:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:06:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253890"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:06:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:06:38 +0000
Message-ID: <1355925997.14620.415.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 19 Dec 2012 14:06:37 +0000
In-Reply-To: <1355856402-26614-6-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1212181843200.17523@kaball.uk.xensource.com>
	<1355856402-26614-6-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 6/8] xen/device_tree: introduce
	find_compatible_node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 18:46 +0000, Stefano Stabellini wrote:
> Introduce a find_compatible_node function that can be used by device

> +static int _find_compatible_node(const void *fdt,
> +                             int node, const char *name, int depth,
> +                             u32 address_cells, u32 size_cells,
> +                             void *data)
> +{
> +    struct find_compat *c = (struct find_compat *) data;
> +
> +    if (  c->found  )
> +        return 0;

It'd be nice if returning e.g. 1 would cause device_tree_for_each_node
to stop walking the DTB and return immediately. Would make this function
cleaner and avoid pointlessly parsing the rest of the DTB.

> +
> +    if ( device_tree_node_compatible(fdt, node, c->compatible) )
> +    {
> +        c->found = 1;
> +        c->node = node;
> +        c->depth = depth;
> +        c->address_cells = address_cells;
> +        c->size_cells = size_cells;
> +    }
> +    return 0;
> +}
> + 
> +int find_compatible_node(const char *compatible, int *node, int *depth,
> +                u32 *address_cells, u32 *size_cells)
> +{
> +    int ret;
> +    struct find_compat c;
> +    c.compatible = compatible;
> +    c.found = 0;
> +
> +    ret = device_tree_for_each_node(device_tree_flattened, _find_compatible_node, &c);
> +    if ( !c.found )
> +        return ret;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:07:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKIy-0003vs-IH; Wed, 19 Dec 2012 14:07:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKIw-0003vG-GN
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:07:26 +0000
Received: from [85.158.143.35:3979] by server-3.bemta-4.messagelabs.com id
	02/B3-18211-D1AC1D05; Wed, 19 Dec 2012 14:07:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355926044!13765401!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10216 invoked from network); 19 Dec 2012 14:07:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:07:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,317,1355097600"; 
   d="scan'208";a="253925"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:07:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:07:24 +0000
Message-ID: <1355926042.14620.416.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 14:07:22 +0000
In-Reply-To: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] xen: arm: fix guest register access bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 16:49 +0000, Ian Campbell wrote:
> All the places where we currently access guest registers are subtly
> broken, they always access the usr copy of a banked register regardless
> of the guest VCPU mode. Luckily because fiq mode isn't used much this
> mostly affects the SP and LR registers which are not typically used in
> mmio instructions (which is most of the reason for this kind of
> emulation). However the effect of hitting the bug is going to be some
> pretty weird behaviour!
> 
> I've also included, mostly because they were in my branch and rebasing
> things over them is a faff, some patches to clean up the tabs and line
> lengths in entry.S.

Applied 1,2 & 4 w/ Stefano's ACK, thanks.

Will redo #3 as discussed and repost.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:34:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKjF-0004jr-SO; Wed, 19 Dec 2012 14:34:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1TlKjE-0004jc-HI; Wed, 19 Dec 2012 14:34:36 +0000
Received: from [85.158.139.83:33176] by server-14.bemta-5.messagelabs.com id
	D3/48-09538-B70D1D05; Wed, 19 Dec 2012 14:34:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1355927666!23225719!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8108 invoked from network); 19 Dec 2012 14:34:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:34:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="254795"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:34:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:34:25 +0000
Message-ID: <1355927664.14620.428.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 14:34:24 +0000
In-Reply-To: <601dc257a740d3a60476.1353432201@cosworth.uk.xensource.com>
References: <patchbomb.1353432200@cosworth.uk.xensource.com>
	<601dc257a740d3a60476.1353432201@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 01 of 15] libxl: move definition of
 libxl_domain_config into the IDL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-11-20 at 17:23 +0000, Ian Campbell wrote:
> # HG changeset patch
> # User Ian Campbell <ijc@hellion.org.uk>
> # Date 1353432136 0
> # Node ID 601dc257a740d3a6047667731007283a4dcb9600
> # Parent  c893596e2d4c7ddd62a3704ea5460be4e5be38df
> libxl: move definition of libxl_domain_config into the IDL
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> Posted during 4.2 freeze and deferred until 4.3...

And now, finally, committed.

> 
> diff -r c893596e2d4c -r 601dc257a740 tools/libxl/libxl.h
> --- a/tools/libxl/libxl.h       Tue Nov 20 17:22:10 2012 +0000
> +++ b/tools/libxl/libxl.h       Tue Nov 20 17:22:16 2012 +0000
> @@ -474,26 +474,6 @@ typedef struct {
> 
>  #define LIBXL_VERSION 0
> 
> -typedef struct {
> -    libxl_domain_create_info c_info;
> -    libxl_domain_build_info b_info;
> -
> -    int num_disks, num_nics, num_pcidevs, num_vfbs, num_vkbs, num_vtpms;
> -
> -    libxl_device_disk *disks;
> -    libxl_device_nic *nics;
> -    libxl_device_pci *pcidevs;
> -    libxl_device_vfb *vfbs;
> -    libxl_device_vkb *vkbs;
> -    libxl_device_vtpm *vtpms;
> -
> -    libxl_action_on_shutdown on_poweroff;
> -    libxl_action_on_shutdown on_reboot;
> -    libxl_action_on_shutdown on_watchdog;
> -    libxl_action_on_shutdown on_crash;
> -} libxl_domain_config;
> -char *libxl_domain_config_to_json(libxl_ctx *ctx, libxl_domain_config *p);
> -
>  /* context functions */
>  int libxl_ctx_alloc(libxl_ctx **pctx, int version,
>                      unsigned flags /* none currently defined */,
> diff -r c893596e2d4c -r 601dc257a740 tools/libxl/libxl_create.c
> --- a/tools/libxl/libxl_create.c        Tue Nov 20 17:22:10 2012 +0000
> +++ b/tools/libxl/libxl_create.c        Tue Nov 20 17:22:16 2012 +0000
> @@ -24,43 +24,6 @@
>  #include <xenguest.h>
>  #include <xen/hvm/hvm_info_table.h>
> 
> -void libxl_domain_config_init(libxl_domain_config *d_config)
> -{
> -    memset(d_config, 0, sizeof(*d_config));
> -    libxl_domain_create_info_init(&d_config->c_info);
> -    libxl_domain_build_info_init(&d_config->b_info);
> -}
> -
> -void libxl_domain_config_dispose(libxl_domain_config *d_config)
> -{
> -    int i;
> -
> -    for (i=0; i<d_config->num_disks; i++)
> -        libxl_device_disk_dispose(&d_config->disks[i]);
> -    free(d_config->disks);
> -
> -    for (i=0; i<d_config->num_nics; i++)
> -        libxl_device_nic_dispose(&d_config->nics[i]);
> -    free(d_config->nics);
> -
> -    for (i=0; i<d_config->num_pcidevs; i++)
> -        libxl_device_pci_dispose(&d_config->pcidevs[i]);
> -    free(d_config->pcidevs);
> -
> -    for (i=0; i<d_config->num_vfbs; i++)
> -        libxl_device_vfb_dispose(&d_config->vfbs[i]);
> -    free(d_config->vfbs);
> -
> -    for (i=0; i<d_config->num_vkbs; i++)
> -        libxl_device_vkb_dispose(&d_config->vkbs[i]);
> -    free(d_config->vkbs);
> -
> -    libxl_device_vtpm_list_free(d_config->vtpms, d_config->num_vtpms);
> -
> -    libxl_domain_create_info_dispose(&d_config->c_info);
> -    libxl_domain_build_info_dispose(&d_config->b_info);
> -}
> -
>  int libxl__domain_create_info_setdefault(libxl__gc *gc,
>                                           libxl_domain_create_info *c_info)
>  {
> diff -r c893596e2d4c -r 601dc257a740 tools/libxl/libxl_json.c
> --- a/tools/libxl/libxl_json.c  Tue Nov 20 17:22:10 2012 +0000
> +++ b/tools/libxl/libxl_json.c  Tue Nov 20 17:22:16 2012 +0000
> @@ -786,158 +786,6 @@ out:
>      return ret;
>  }
> 
> -yajl_gen_status libxl_domain_config_gen_json(yajl_gen hand,
> -                                             libxl_domain_config *p)
> -{
> -    yajl_gen_status s;
> -    int i;
> -
> -    s = yajl_gen_map_open(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"c_info",
> -                        sizeof("c_info")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = libxl_domain_create_info_gen_json(hand, &p->c_info);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"b_info",
> -                        sizeof("b_info")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = libxl_domain_build_info_gen_json(hand, &p->b_info);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"disks",
> -                        sizeof("disks")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = yajl_gen_array_open(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    for (i = 0; i < p->num_disks; i++) {
> -        s = libxl_device_disk_gen_json(hand, &p->disks[i]);
> -        if (s != yajl_gen_status_ok)
> -            goto out;
> -    }
> -    s = yajl_gen_array_close(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"nics",
> -                        sizeof("nics")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = yajl_gen_array_open(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    for (i = 0; i < p->num_nics; i++) {
> -        s = libxl_device_nic_gen_json(hand, &p->nics[i]);
> -        if (s != yajl_gen_status_ok)
> -            goto out;
> -    }
> -    s = yajl_gen_array_close(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"pcidevs",
> -                        sizeof("pcidevs")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = yajl_gen_array_open(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    for (i = 0; i < p->num_pcidevs; i++) {
> -        s = libxl_device_pci_gen_json(hand, &p->pcidevs[i]);
> -        if (s != yajl_gen_status_ok)
> -            goto out;
> -    }
> -    s = yajl_gen_array_close(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"vfbs",
> -                        sizeof("vfbs")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = yajl_gen_array_open(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    for (i = 0; i < p->num_vfbs; i++) {
> -        s = libxl_device_vfb_gen_json(hand, &p->vfbs[i]);
> -        if (s != yajl_gen_status_ok)
> -            goto out;
> -    }
> -    s = yajl_gen_array_close(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"vkbs",
> -                        sizeof("vkbs")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = yajl_gen_array_open(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    for (i = 0; i < p->num_vkbs; i++) {
> -        s = libxl_device_vkb_gen_json(hand, &p->vkbs[i]);
> -        if (s != yajl_gen_status_ok)
> -            goto out;
> -    }
> -    s = yajl_gen_array_close(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"on_poweroff",
> -                        sizeof("on_poweroff")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = libxl_action_on_shutdown_gen_json(hand, &p->on_poweroff);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"on_reboot",
> -                        sizeof("on_reboot")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = libxl_action_on_shutdown_gen_json(hand, &p->on_reboot);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"on_watchdog",
> -                        sizeof("on_watchdog")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = libxl_action_on_shutdown_gen_json(hand, &p->on_watchdog);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_string(hand, (const unsigned char *)"on_crash",
> -                        sizeof("on_crash")-1);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    s = libxl_action_on_shutdown_gen_json(hand, &p->on_crash);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -
> -    s = yajl_gen_map_close(hand);
> -    if (s != yajl_gen_status_ok)
> -        goto out;
> -    out:
> -    return s;
> -}
> -
> -char *libxl_domain_config_to_json(libxl_ctx *ctx, libxl_domain_config *p)
> -{
> -    return libxl__object_to_json(ctx, "libxl_domain_config",
> -                        (libxl__gen_json_callback)&libxl_domain_config_gen_json,
> -                        (void *)p);
> -}
> -
>  /*
>   * Local variables:
>   * mode: C
> diff -r c893596e2d4c -r 601dc257a740 tools/libxl/libxl_types.idl
> --- a/tools/libxl/libxl_types.idl       Tue Nov 20 17:22:10 2012 +0000
> +++ b/tools/libxl/libxl_types.idl       Tue Nov 20 17:22:16 2012 +0000
> @@ -401,6 +401,23 @@ libxl_device_vtpm = Struct("device_vtpm"
>      ("uuid",             libxl_uuid),
>  ])
> 
> +libxl_domain_config = Struct("domain_config", [
> +    ("c_info", libxl_domain_create_info),
> +    ("b_info", libxl_domain_build_info),
> +
> +    ("disks", Array(libxl_device_disk, "num_disks")),
> +    ("nics", Array(libxl_device_nic, "num_nics")),
> +    ("pcidevs", Array(libxl_device_pci, "num_pcidevs")),
> +    ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
> +    ("vkbs", Array(libxl_device_vkb, "num_vkbs")),
> +    ("vtpms", Array(libxl_device_vtpm, "num_vtpms")),
> +
> +    ("on_poweroff", libxl_action_on_shutdown),
> +    ("on_reboot", libxl_action_on_shutdown),
> +    ("on_watchdog", libxl_action_on_shutdown),
> +    ("on_crash", libxl_action_on_shutdown),
> +    ])
> +
>  libxl_diskinfo = Struct("diskinfo", [
>      ("backend", string),
>      ("backend_id", uint32),
> diff -r c893596e2d4c -r 601dc257a740 tools/ocaml/libs/xl/genwrap.py
> --- a/tools/ocaml/libs/xl/genwrap.py    Tue Nov 20 17:22:10 2012 +0000
> +++ b/tools/ocaml/libs/xl/genwrap.py    Tue Nov 20 17:22:16 2012 +0000
> @@ -283,6 +283,7 @@ if __name__ == '__main__':
>          "cpupoolinfo",
>          "domain_create_info",
>          "domain_build_info",
> +        "domain_config",
>          "vcpuinfo",
>          "event",
>          ]
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:37:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKlo-0004tR-F1; Wed, 19 Dec 2012 14:37:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TlKlm-0004tC-EX
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:37:14 +0000
Received: from [85.158.139.83:41239] by server-13.bemta-5.messagelabs.com id
	8D/20-10716-911D1D05; Wed, 19 Dec 2012 14:37:13 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1355927800!26531011!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19242 invoked from network); 19 Dec 2012 14:36:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:36:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="254907"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:36:40 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:36:39 +0000
Message-ID: <50D1D0E8.7050203@citrix.com>
Date: Wed, 19 Dec 2012 14:36:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
	<50D18D4E02000078000B15B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA621@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A48345644033BA621@SHSMSX101.ccr.corp.intel.com>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/2012 09:14, Zhang, Xiantao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Wednesday, December 19, 2012 4:48 PM
>> To: Zhang, Xiantao
>> Cc: Konrad Rzeszutek Wilk; xen-devel
>> Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
>>
>>>>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>> wrote:
>>> Are you playing with Xen?  So far, Xen doesn't support the PCIe device
>>> hot-plug feature yet.
>> When saying Xen, I assume you mean the pv-ops kernel instead? So far I was
>> under the impression that this worked even with the very old 2.6.18 tree (as
>> much or as little as hotplug there worked in the native case). And given that
>> there are no special requirements on the hypervisor to make this work, it's
>> not even obvious to me what would be missing in the pv-ops kernel to make
>> it work. 
> Oh, my fault!  Perhaps we don't need to do anything in the pv-ops kernel to support device hot-plug if the native system already supports it.  Actually, we didn't do such testing before, since it is a native feature, not a Xen-specific one. 
> Xiantao

My current understanding is that on boot, Xen scans the PCI bus, then
dom0 rescans it later.  If a hotplug event gets serviced by dom0, does
there not need to be some hypercall informing Xen that a new device has
appeared?  I expect PCI passthrough would not work correctly on a
hotplugged device which Xen is unaware of.

(But if I have got the wrong end of the stick, or this mechanism already
exists, please ignore me.  It just strikes me as a little Xen-specific,
even if the bulk of it is native.)

~Andrew
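
[Editor's note: a toy sketch of the bookkeeping described above. The
hypercall name and the hypervisor's device table here are purely
illustrative stand-ins, not Xen code; only the general idea (dom0 must
report a hotplugged device before passthrough can work) comes from the
thread.]

```python
class ToyHypervisor:
    """Illustrative model of a hypervisor's view of the PCI bus."""

    def __init__(self, boot_time_devices):
        # Devices the hypervisor discovered by scanning the bus at boot.
        self.known = set(boot_time_devices)

    def report_pci_device_add(self, bdf):
        # dom0 reports a hotplugged device (stand-in for a real hypercall).
        self.known.add(bdf)

    def assign_to_guest(self, bdf):
        # Passthrough of a device the hypervisor has never heard of fails.
        if bdf not in self.known:
            raise LookupError("device %s unknown to hypervisor" % bdf)
        return "assigned %s" % bdf

xen = ToyHypervisor({"0000:00:1f.0"})
try:
    xen.assign_to_guest("0000:03:00.0")   # hotplugged, not yet reported
except LookupError:
    pass                                  # fails, as the mail predicts
xen.report_pci_device_add("0000:03:00.0")
print(xen.assign_to_guest("0000:03:00.0"))
```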

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:45:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlKtr-0005Bu-Hp; Wed, 19 Dec 2012 14:45:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlKtq-0005Bp-SZ
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:45:35 +0000
Received: from [85.158.138.51:37418] by server-10.bemta-3.messagelabs.com id
	1E/46-07616-903D1D05; Wed, 19 Dec 2012 14:45:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1355928294!27800265!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17666 invoked from network); 19 Dec 2012 14:44:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:44:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1194987"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 14:43:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 09:43:54 -0500
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TlKsD-0002jb-Uf;
	Wed, 19 Dec 2012 14:43:53 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 14:43:53 +0000
Message-ID: <1355928233-15885-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1354628943.2693.80.camel@zakaz.uk.xensource.com>
References: <1354628943.2693.80.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] xen: remove nr_irqs_gsi from generic code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The concept is X86 specific.

AFAICT the generic concept here is the number of physical IRQs which
the current hardware has, so call this nr_hw_irqs.

Also, using "defined NR_IRQS" as a stand-in for x86 might have made
sense at one point, but it's just cleaner to push the necessary
definitions into asm/irq.h.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Keir (Xen.org) <keir@xen.org>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
--
v2: s/nr_hw_irqs/nr_static_irqs/g
---
 xen/arch/arm/dummy.S      |    2 --
 xen/common/domain.c       |    4 ++--
 xen/include/asm-arm/irq.h |    3 +++
 xen/include/asm-x86/irq.h |    4 ++++
 xen/include/xen/irq.h     |    8 --------
 xen/xsm/flask/hooks.c     |    4 ++--
 6 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
index 6416f94..a214fbf 100644
--- a/xen/arch/arm/dummy.S
+++ b/xen/arch/arm/dummy.S
@@ -6,5 +6,3 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
 	.globl x; \
 x:	mov pc, lr
 	
-/* PIRQ support */
-DUMMY(nr_irqs_gsi);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 12c8e24..2f8ef00 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -259,9 +259,9 @@ struct domain *domain_create(
         atomic_inc(&d->pause_count);
 
         if ( domid )
-            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
+            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
         else
-            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
+            d->nr_pirqs = nr_static_irqs + extra_dom0_irqs;
         if ( d->nr_pirqs > nr_irqs )
             d->nr_pirqs = nr_irqs;
 
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index abde839..bd6b54a 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -21,6 +21,9 @@ struct irq_cfg {
 #define NR_IRQS		1024
 #define nr_irqs NR_IRQS
 
+#define nr_irqs NR_IRQS
+#define nr_static_irqs NR_IRQS
+
 struct irq_desc;
 
 struct irq_desc *__irq_to_desc(int irq);
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index 5eefb94..7f5da06 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -11,6 +11,10 @@
 #include <irq_vectors.h>
 #include <asm/percpu.h>
 
+extern unsigned int nr_irqs_gsi;
+extern unsigned int nr_irqs;
+#define nr_static_irqs nr_irqs_gsi
+
 #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
 			     (1 << (irq)) & io_apic_irqs : \
 			     (irq) < nr_irqs_gsi)
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 5973cce..7386358 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
 
 #include <asm/irq.h>
 
-#ifdef NR_IRQS
-# define nr_irqs NR_IRQS
-# define nr_irqs_gsi NR_IRQS
-#else
-extern unsigned int nr_irqs_gsi;
-extern unsigned int nr_irqs;
-#endif
-
 struct msi_desc;
 /*
  * This is the "IRQ descriptor", which contains various information
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 0ca10d0..782e28c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct avc_audit_data *ad)
     struct irq_desc *desc = irq_to_desc(irq);
     if ( irq >= nr_irqs || irq < 0 )
         return -EINVAL;
-    if ( irq < nr_irqs_gsi ) {
+    if ( irq < nr_static_irqs ) {
         if (ad) {
             AVC_AUDIT_DATA_INIT(ad, IRQ);
             ad->irq = irq;
@@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     if ( rc )
         return rc;
 
-    if ( irq >= nr_irqs_gsi && msi ) {
+    if ( irq >= nr_static_irqs && msi ) {
         u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
         AVC_AUDIT_DATA_INIT(&ad, DEV);
         ad.device = machine_bdf;
-- 
1.7.9.1
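
[Editor's note: a hedged sketch of the sizing logic in the domain.c hunk
above, transcribed into Python for illustration. The constants are made
up for the example; only the min-clamped formula comes from the patch.]

```python
NR_STATIC_IRQS = 16    # e.g. nr_irqs_gsi on x86 (illustrative value)
NR_IRQS = 280          # total IRQs the hypervisor tracks (illustrative)
EXTRA_DOM0_IRQS = 256
EXTRA_DOMU_IRQS = 32

def nr_pirqs(domid):
    # domU gets a smaller slack than dom0; both are clamped to nr_irqs,
    # mirroring the two branches and the final clamp in domain_create().
    extra = EXTRA_DOMU_IRQS if domid else EXTRA_DOM0_IRQS
    return min(NR_STATIC_IRQS + extra, NR_IRQS)

print(nr_pirqs(0), nr_pirqs(1))
```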


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:54:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlL2Q-0005MC-Il; Wed, 19 Dec 2012 14:54:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlL2P-0005M7-16
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:54:25 +0000
Received: from [193.109.254.147:61192] by server-9.bemta-14.messagelabs.com id
	9B/AA-24482-025D1D05; Wed, 19 Dec 2012 14:54:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355928842!1889033!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32087 invoked from network); 19 Dec 2012 14:54:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:54:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="255457"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:54:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:54:02 +0000
Message-ID: <1355928840.14620.432.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Wed, 19 Dec 2012 14:54:00 +0000
In-Reply-To: <1355916124.14620.328.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
	<20121219104932.GA65599@ocelot.phlegethon.org>
	<1355916124.14620.328.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 11:22 +0000, Ian Campbell wrote:
> On Wed, 2012-12-19 at 10:49 +0000, Tim Deegan wrote:
> > At 10:20 +0000 on 19 Dec (1355912423), Ian Campbell wrote:
> > > On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> > > > On Tue, 18 Dec 2012, Ian Campbell wrote:
> > > > > This shortens an overly long line.
> > > > > 
> > > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > > 
> > > > honestly I would rather keep it because it has been quite useful for
> > > > debugging in the past once all the bugs have been fixed (TM) then we can
> > > > remove it ;-)
> > > 
> > > Can you not just re-add it for debug?
> > > 
> > > I mostly just want to get rid of the overlong line, I could nuke the
> > > spaces from the comment (in all of them, not just this one) instead?
> > 
> > Could you just remove the 'lev3: ' from the comment, pulling it in to
> > exactly 80 chars?  Your added 'second level' and 'third level' make it
> > redundant, and I'd rather not lose the spaces in the comments.
> 
> I think that makes it exactly 80 characters, which is probably ok.

It ends up as below, exactly 80 characters long. I think it's probably
not worth it.

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 93f4edb..be09418 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -24,10 +24,14 @@
 
 #define ZIMAGE_MAGIC_NUMBER 0x016f2818
 
-#define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
-#define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
-#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
-#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
+#define PT_PT     0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
+
+/* Second Level */
+#define PT_MEM    0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
+#define PT_DEV    0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
+
+/* Third Level */
+#define PT_DEV_L3 0xe73 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
 
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:54:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlL2Q-0005MC-Il; Wed, 19 Dec 2012 14:54:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlL2P-0005M7-16
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:54:25 +0000
Received: from [193.109.254.147:61192] by server-9.bemta-14.messagelabs.com id
	9B/AA-24482-025D1D05; Wed, 19 Dec 2012 14:54:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355928842!1889033!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32087 invoked from network); 19 Dec 2012 14:54:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:54:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="255457"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 14:54:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 14:54:02 +0000
Message-ID: <1355928840.14620.432.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Wed, 19 Dec 2012 14:54:00 +0000
In-Reply-To: <1355916124.14620.328.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
	<20121219104932.GA65599@ocelot.phlegethon.org>
	<1355916124.14620.328.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 11:22 +0000, Ian Campbell wrote:
> On Wed, 2012-12-19 at 10:49 +0000, Tim Deegan wrote:
> > At 10:20 +0000 on 19 Dec (1355912423), Ian Campbell wrote:
> > > On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> > > > On Tue, 18 Dec 2012, Ian Campbell wrote:
> > > > > This shortens an overly long line.
> > > > > 
> > > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > > 
> > > > honestly I would rather keep it because it has been quite useful for
> > > > debugging in the past; once all the bugs have been fixed (TM) we can
> > > > remove it ;-)
> > > 
> > > Can you not just re-add it for debug?
> > > 
> > > I mostly just want to get rid of the overlong line, I could nuke the
> > > spaces from the comment (in all of them, not just this one) instead?
> > 
> > Could you just remove the 'lev3: ' from the comment, pulling it in to
> > exactly 80 chars?  Your added 'second level' and 'third level' make it
> > redundant, and I'd rather not lose the spaces in the comments.
> 
> I think that makes it exactly 80 characters, which is probably ok.

It ends up as below, exactly 80 characters long. I think it's probably
not worth it.

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 93f4edb..be09418 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -24,10 +24,14 @@
 
 #define ZIMAGE_MAGIC_NUMBER 0x016f2818
 
-#define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
-#define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
-#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
-#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
+#define PT_PT     0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
+
+/* Second Level */
+#define PT_MEM    0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
+#define PT_DEV    0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
+
+/* Third Level */
+#define PT_DEV_L3 0xe73 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
 
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:56:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlL4A-0005Ss-At; Wed, 19 Dec 2012 14:56:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlL48-0005Sm-HK
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:56:12 +0000
Received: from [85.158.143.35:9729] by server-1.bemta-4.messagelabs.com id
	95/CD-28401-B85D1D05; Wed, 19 Dec 2012 14:56:11 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1355928961!13772753!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12261 invoked from network); 19 Dec 2012 14:56:02 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 14:56:02 -0000
Received: (qmail 30661 invoked from network); 19 Dec 2012 16:56:01 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 16:56:00 +0200
Message-ID: <50D1D5BD.8080001@gmail.com>
Date: Wed, 19 Dec 2012 16:57:01 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com>
In-Reply-To: <50D1A9D1.2020106@gmail.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234434,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 61410f6a28e579ea1cf9f4b9eac27425.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3ta.17epd43fm.dnrm],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44441
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>      m->overlapped = is_var_mtrr_overlapped(m);
>>
>> Looks like that function contains the necessary logic.
>
> You're right, but what happens there is that that function depends on
> the get_mtrr_range() function, which in turn depends on the size_or_mask
> global variable, which is initialized in hvm_mtrr_pat_init(), which then
> depends on a global table, and so on. Putting that into libxc is pretty
> much putting the whole mtrr.c file there.

This is where it gets tricky:

static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
                            uint64_t *base, uint64_t *end)
{
     [...]
     phys_addr = 36;

     if ( cpuid_eax(0x80000000) >= 0x80000008 )
         phys_addr = (uint8_t)cpuid_eax(0x80000008);

     size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
     [...]
}

specifically, in the cpuid_eax() call, which doesn't make much sense in 
dom0 userspace.

I did manage to take 'enabled' into account with what appears to be 
success, but if I've read the situation correctly, there's not much to 
do about 'overlap', unless we save it in hvm_save_mtrr_msr() like it's 
done with 'enabled'. What do you think?

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:57:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlL54-0005Z9-RZ; Wed, 19 Dec 2012 14:57:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlL52-0005Yn-V7
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:57:09 +0000
Received: from [85.158.137.99:22708] by server-6.bemta-3.messagelabs.com id
	A6/51-12154-FB5D1D05; Wed, 19 Dec 2012 14:57:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355929003!19811602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1975 invoked from network); 19 Dec 2012 14:56:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:56:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1265998"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 14:56:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 09:56:42 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TlL4c-0002up-0F;
	Wed, 19 Dec 2012 14:56:42 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 14:56:41 +0000
Message-ID: <1355929001-19183-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH V2] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We weren't taking the guest mode (CPSR) into account and would always
access the user version of the registers.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: Fix r8 vs r8_fiq thinko.
---
 xen/arch/arm/traps.c       |   62 ++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vgic.c        |    4 +-
 xen/arch/arm/vpl011.c      |    4 +-
 xen/arch/arm/vtimer.c      |    7 +++--
 xen/include/asm-arm/regs.h |    6 ++++
 5 files changed, 74 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index bddd7d4..f42e4e9 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -73,6 +73,64 @@ static void print_xen_info(void)
            debug, print_tainted(taint_str));
 }
 
+uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg)
+{
+    BUG_ON( guest_mode(regs) );
+
+    /*
+     * We rely heavily on the layout of cpu_user_regs to avoid having
+     * to handle all of the registers individually. Use BUILD_BUG_ON to
+     * ensure that things which we expect to be contiguous actually are.
+     */
+#define REGOFFS(R) offsetof(struct cpu_user_regs, R)
+
+    switch ( reg ) {
+    case 0 ... 7: /* Unbanked registers */
+        BUILD_BUG_ON(REGOFFS(r0) + 7*sizeof(uint32_t) != REGOFFS(r7));
+        return &regs->r0 + reg;
+    case 8 ... 12: /* Register banked in FIQ mode */
+        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
+        if ( fiq_mode(regs) )
+            return &regs->r8_fiq + reg - 8;
+        else
+            return &regs->r8 + reg - 8;
+    case 13 ... 14: /* Banked SP + LR registers */
+        BUILD_BUG_ON(REGOFFS(sp_fiq) + 1*sizeof(uint32_t) != REGOFFS(lr_fiq));
+        BUILD_BUG_ON(REGOFFS(sp_irq) + 1*sizeof(uint32_t) != REGOFFS(lr_irq));
+        BUILD_BUG_ON(REGOFFS(sp_svc) + 1*sizeof(uint32_t) != REGOFFS(lr_svc));
+        BUILD_BUG_ON(REGOFFS(sp_abt) + 1*sizeof(uint32_t) != REGOFFS(lr_abt));
+        BUILD_BUG_ON(REGOFFS(sp_und) + 1*sizeof(uint32_t) != REGOFFS(lr_und));
+        switch ( regs->cpsr & PSR_MODE_MASK )
+        {
+        case PSR_MODE_USR:
+        case PSR_MODE_SYS: /* Sys regs are the usr regs */
+            if ( reg == 13 )
+                return &regs->sp_usr;
+            else /* lr_usr == lr in a user frame */
+                return &regs->lr;
+        case PSR_MODE_FIQ:
+            return &regs->sp_fiq + reg - 13;
+        case PSR_MODE_IRQ:
+            return &regs->sp_irq + reg - 13;
+        case PSR_MODE_SVC:
+            return &regs->sp_svc + reg - 13;
+        case PSR_MODE_ABT:
+            return &regs->sp_abt + reg - 13;
+        case PSR_MODE_UND:
+            return &regs->sp_und + reg - 13;
+        case PSR_MODE_MON:
+        case PSR_MODE_HYP:
+        default:
+            BUG();
+        }
+    case 15: /* PC */
+        return &regs->pc;
+    default:
+        BUG();
+    }
+#undef REGOFFS
+}
+
 static const char *decode_fsc(uint32_t fsc, int *level)
 {
     const char *msg = NULL;
@@ -448,7 +506,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
     switch ( code ) {
     case 0xe0 ... 0xef:
         reg = code - 0xe0;
-        r = &regs->r0 + reg;
+        r = select_user_reg(regs, reg);
         printk("DOM%d: R%d = %#010"PRIx32" at %#010"PRIx32"\n",
                domid, reg, *r, regs->pc);
         break;
@@ -518,7 +576,7 @@ static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
 
     if ( !cp32.ccvalid ) {
         dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n");
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d1a5ad..39b9775 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -160,7 +160,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
@@ -372,7 +372,7 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 1522667..7dcee90 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -92,7 +92,7 @@ static int uart0_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
@@ -114,7 +114,7 @@ static int uart0_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 1c45f4a..fc452e3 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -22,6 +22,7 @@
 #include <xen/timer.h>
 #include <xen/sched.h>
 #include <asm/gic.h>
+#include <asm/regs.h>
 
 extern s_time_t ticks_to_ns(uint64_t ticks);
 extern uint64_t ns_to_ticks(s_time_t ns);
@@ -49,7 +50,7 @@ static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
     s_time_t now;
 
     switch ( hsr.bits & HSR_CP32_REGS_MASK )
@@ -101,8 +102,8 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = &regs->r0 + cp64.reg1;
-    uint32_t *r2 = &regs->r0 + cp64.reg2;
+    uint32_t *r1 = select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = select_user_reg(regs, cp64.reg2);
     uint64_t ticks;
     s_time_t now;
 
diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h
index 54f6ed8..7486944 100644
--- a/xen/include/asm-arm/regs.h
+++ b/xen/include/asm-arm/regs.h
@@ -30,6 +30,12 @@
 
 #define return_reg(v) ((v)->arch.cpu_info->guest_cpu_user_regs.r0)
 
+/*
+ * Returns a pointer to the given register value in regs, taking the
+ * processor mode (CPSR) into account.
+ */
+extern uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg);
+
 #endif /* __ARM_REGS_H__ */
 /*
  * Local variables:
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 14:57:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 14:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlL54-0005Z9-RZ; Wed, 19 Dec 2012 14:57:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlL52-0005Yn-V7
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 14:57:09 +0000
Received: from [85.158.137.99:22708] by server-6.bemta-3.messagelabs.com id
	A6/51-12154-FB5D1D05; Wed, 19 Dec 2012 14:57:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1355929003!19811602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1975 invoked from network); 19 Dec 2012 14:56:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 14:56:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1265998"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 14:56:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 09:56:42 -0500
Received: from drall.uk.xensource.com ([10.80.227.107])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1TlL4c-0002up-0F;
	Wed, 19 Dec 2012 14:56:42 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 14:56:41 +0000
Message-ID: <1355929001-19183-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: tim@xen.org, stefano.stabellini@citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH V2] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We weren't taking the guest mode (CPSR) into account and would always
access the user version of the registers.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: Fix r8 vs r8_fiq thinko.
---
 xen/arch/arm/traps.c       |   62 ++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vgic.c        |    4 +-
 xen/arch/arm/vpl011.c      |    4 +-
 xen/arch/arm/vtimer.c      |    7 +++--
 xen/include/asm-arm/regs.h |    6 ++++
 5 files changed, 74 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index bddd7d4..f42e4e9 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -73,6 +73,64 @@ static void print_xen_info(void)
            debug, print_tainted(taint_str));
 }
 
+uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg)
+{
+    BUG_ON( guest_mode(regs) );
+
+    /*
+     * We rely heavily on the layout of cpu_user_regs to avoid having
+     * to handle all of the registers individually. Use BUILD_BUG_ON to
+     * ensure that things which expect are contiguous actually are.
+     */
+#define REGOFFS(R) offsetof(struct cpu_user_regs, R)
+
+    switch ( reg ) {
+    case 0 ... 7: /* Unbanked registers */
+        BUILD_BUG_ON(REGOFFS(r0) + 7*sizeof(uint32_t) != REGOFFS(r7));
+        return &regs->r0 + reg;
+    case 8 ... 12: /* Register banked in FIQ mode */
+        BUILD_BUG_ON(REGOFFS(r8_fiq) + 4*sizeof(uint32_t) != REGOFFS(r12_fiq));
+        if ( fiq_mode(regs) )
+            return &regs->r8_fiq + reg - 8;
+        else
+            return &regs->r8 + reg - 8;
+    case 13 ... 14: /* Banked SP + LR registers */
+        BUILD_BUG_ON(REGOFFS(sp_fiq) + 1*sizeof(uint32_t) != REGOFFS(lr_fiq));
+        BUILD_BUG_ON(REGOFFS(sp_irq) + 1*sizeof(uint32_t) != REGOFFS(lr_irq));
+        BUILD_BUG_ON(REGOFFS(sp_svc) + 1*sizeof(uint32_t) != REGOFFS(lr_svc));
+        BUILD_BUG_ON(REGOFFS(sp_abt) + 1*sizeof(uint32_t) != REGOFFS(lr_abt));
+        BUILD_BUG_ON(REGOFFS(sp_und) + 1*sizeof(uint32_t) != REGOFFS(lr_und));
+        switch ( regs->cpsr & PSR_MODE_MASK )
+        {
+        case PSR_MODE_USR:
+        case PSR_MODE_SYS: /* Sys regs are the usr regs */
+            if ( reg == 13 )
+                return &regs->sp_usr;
+            else /* lr_usr == lr in a user frame */
+                return &regs->lr;
+        case PSR_MODE_FIQ:
+            return &regs->sp_fiq + reg - 13;
+        case PSR_MODE_IRQ:
+            return &regs->sp_irq + reg - 13;
+        case PSR_MODE_SVC:
+            return &regs->sp_svc + reg - 13;
+        case PSR_MODE_ABT:
+            return &regs->sp_abt + reg - 13;
+        case PSR_MODE_UND:
+            return &regs->sp_und + reg - 13;
+        case PSR_MODE_MON:
+        case PSR_MODE_HYP:
+        default:
+            BUG();
+        }
+    case 15: /* PC */
+        return &regs->pc;
+    default:
+        BUG();
+    }
+#undef REGOFFS
+}
+
 static const char *decode_fsc(uint32_t fsc, int *level)
 {
     const char *msg = NULL;
@@ -448,7 +506,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
     switch ( code ) {
     case 0xe0 ... 0xef:
         reg = code - 0xe0;
-        r = &regs->r0 + reg;
+        r = select_user_reg(regs, reg);
         printk("DOM%d: R%d = %#010"PRIx32" at %#010"PRIx32"\n",
                domid, reg, *r, regs->pc);
         break;
@@ -518,7 +576,7 @@ static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
 
     if ( !cp32.ccvalid ) {
         dprintk(XENLOG_ERR, "cp_15(32): need to handle invalid condition codes\n");
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d1a5ad..39b9775 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -160,7 +160,7 @@ static int vgic_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
@@ -372,7 +372,7 @@ static int vgic_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     struct vgic_irq_rank *rank;
     int offset = (int)(info->gpa - VGIC_DISTR_BASE_ADDRESS);
     int gicd_reg = REG(offset);
diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 1522667..7dcee90 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -92,7 +92,7 @@ static int uart0_mmio_read(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
@@ -114,7 +114,7 @@ static int uart0_mmio_write(struct vcpu *v, mmio_info_t *info)
 {
     struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-    uint32_t *r = &regs->r0 + dabt.reg;
+    uint32_t *r = select_user_reg(regs, dabt.reg);
     int offset = (int)(info->gpa - UART0_START);
 
     switch ( offset )
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 1c45f4a..fc452e3 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -22,6 +22,7 @@
 #include <xen/timer.h>
 #include <xen/sched.h>
 #include <asm/gic.h>
+#include <asm/regs.h>
 
 extern s_time_t ticks_to_ns(uint64_t ticks);
 extern uint64_t ns_to_ticks(s_time_t ns);
@@ -49,7 +50,7 @@ static int vtimer_emulate_32(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp32 cp32 = hsr.cp32;
-    uint32_t *r = &regs->r0 + cp32.reg;
+    uint32_t *r = select_user_reg(regs, cp32.reg);
     s_time_t now;
 
     switch ( hsr.bits & HSR_CP32_REGS_MASK )
@@ -101,8 +102,8 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
 {
     struct vcpu *v = current;
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = &regs->r0 + cp64.reg1;
-    uint32_t *r2 = &regs->r0 + cp64.reg2;
+    uint32_t *r1 = select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = select_user_reg(regs, cp64.reg2);
     uint64_t ticks;
     s_time_t now;
 
diff --git a/xen/include/asm-arm/regs.h b/xen/include/asm-arm/regs.h
index 54f6ed8..7486944 100644
--- a/xen/include/asm-arm/regs.h
+++ b/xen/include/asm-arm/regs.h
@@ -30,6 +30,12 @@
 
 #define return_reg(v) ((v)->arch.cpu_info->guest_cpu_user_regs.r0)
 
+/*
+ * Returns a pointer to the given register value in regs, taking the
+ * processor mode (CPSR) into account.
+ */
+extern uint32_t *select_user_reg(struct cpu_user_regs *regs, int reg);
+
 #endif /* __ARM_REGS_H__ */
 /*
  * Local variables:
-- 
1.7.2.5
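For readers skimming the patch: the point of the change above is that `&regs->r0 + dabt.reg` blindly indexes consecutive register slots, while select_user_reg consults the processor mode in CPSR to find the banked copy of the register. A minimal invented sketch of that idea - the struct layout, mode constants, and banking rules here are made up for illustration and do NOT match Xen's real struct cpu_user_regs or its select_user_reg():

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: real ARM banking also covers FIQ/IRQ/ABT/UND
 * modes and Xen's register frame layout differs. */
struct fake_regs {
    uint32_t r[13];   /* r0-r12: shared between USR and SVC here   */
    uint32_t sp_usr;  /* r13, banked per mode                      */
    uint32_t lr_usr;  /* r14, banked per mode                      */
    uint32_t sp_svc;
    uint32_t lr_svc;
    uint32_t cpsr;    /* bottom 5 bits encode the processor mode   */
};

#define PSR_MODE_MASK 0x1fU
#define PSR_MODE_USR  0x10U
#define PSR_MODE_SVC  0x13U

/* Pick the storage slot that actually holds register 'reg' for the
 * mode recorded in CPSR, instead of assuming a flat register file. */
static uint32_t *fake_select_user_reg(struct fake_regs *regs, int reg)
{
    if (reg <= 12)
        return &regs->r[reg];          /* r0-r12: not banked here */
    if ((regs->cpsr & PSR_MODE_MASK) == PSR_MODE_SVC)
        return reg == 13 ? &regs->sp_svc : &regs->lr_svc;
    return reg == 13 ? &regs->sp_usr : &regs->lr_usr;  /* USR/other */
}
```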


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:03:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLAk-0005xJ-L9; Wed, 19 Dec 2012 15:03:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlLAi-0005xC-RO
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:03:01 +0000
Received: from [85.158.143.35:24874] by server-3.bemta-4.messagelabs.com id
	D5/59-18211-427D1D05; Wed, 19 Dec 2012 15:03:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355929248!15742599!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5963 invoked from network); 19 Dec 2012 15:00:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:00:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="255663"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 15:00:48 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 15:00:48 +0000
Message-ID: <1355929247.14620.436.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Razvan Cojocaru <rzvncj@gmail.com>
Date: Wed, 19 Dec 2012 15:00:47 +0000
In-Reply-To: <50D1D5BD.8080001@gmail.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 14:57 +0000, Razvan Cojocaru wrote:
> >>      m->overlapped = is_var_mtrr_overlapped(m);
> >>
> >> Looks like that function contains the necessary logic.
> >
> > You're right, but what happens there is that that function depends on
> > the get_mtrr_range() function, which in turn depends on the size_or_mask
> > global variable, which is initialized in hvm_mtrr_pat_init(), which then
> > depends on a global table, and so on. Putting that into libxc is pretty
> > much putting the whole mtrr.c file there.
> 
> This is where it gets tricky:
> 
> static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
>                             uint64_t *base, uint64_t *end)
> {
>      [...]
>      phys_addr = 36;
> 
>      if ( cpuid_eax(0x80000000) >= 0x80000008 )
>          phys_addr = (uint8_t)cpuid_eax(0x80000008);
> 
>      size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
>      [...]
> }
> 
> specifically, in the cpuid_eax() call, which doesn't make much sense in 
> dom0 userspace.

The fact that get_mtrr_range is querying the underlying physical CPUID
suggests it has something to do with the translation from virtual to
physical MTRR and is therefore not something userspace needs to worry
about, but I'm only speculating.
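For reference, the size_or_mask line quoted above is plain bit arithmetic: keep every address bit at or above the CPU's physical address width, expressed in page-frame units. A standalone sketch, assuming a PAGE_SHIFT of 12 (4 KiB pages); note this sketch deliberately shifts a 64-bit value, whereas the quoted snippet shifts a plain int:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages assumed */

/* Mirrors the computation in the quoted get_mtrr_range():
 * size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1)
 * i.e. a mask of the frame-number bits the machine cannot address. */
static uint64_t size_or_mask_for(unsigned int phys_addr_bits)
{
    return ~(((uint64_t)1 << (phys_addr_bits - PAGE_SHIFT)) - 1);
}
```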

> I did manage to take 'enabled' into account with what appears to be 
> success, but if I've read the situation correctly, there's not much to 
> do about 'overlap', unless we save it in hvm_save_mtrr_msr() like it's 
> done with 'enabled'. What do you think?

It's not an architectural thing so I don't think it belongs in there.

TBH until someone figures out or explains what overlap actually is I
don't know if it even needs exporting or taking into account in
userspace.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:08:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLG7-00069N-Hw; Wed, 19 Dec 2012 15:08:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlLG5-00069G-S5
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:08:34 +0000
Received: from [85.158.143.99:9663] by server-1.bemta-4.messagelabs.com id
	62/90-28401-178D1D05; Wed, 19 Dec 2012 15:08:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355929712!30059787!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5667 invoked from network); 19 Dec 2012 15:08:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 15:08:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 15:08:32 +0000
Message-Id: <50D1E67F02000078000B1791@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 15:08:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
	<50D18D4E02000078000B15B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA621@SHSMSX101.ccr.corp.intel.com>
	<50D1D0E8.7050203@citrix.com>
In-Reply-To: <50D1D0E8.7050203@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 15:36, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 19/12/2012 09:14, Zhang, Xiantao wrote:
>>> -----Original Message-----
>>> From: Jan Beulich [mailto:JBeulich@suse.com]
>>> Sent: Wednesday, December 19, 2012 4:48 PM
>>> To: Zhang, Xiantao
>>> Cc: Konrad Rzeszutek Wilk; xen-devel
>>> Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
>>>
>>>>>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com>
>>> wrote:
>>>> Are you playing with Xen?  So far, Xen doesn't support the PCIe device
>>>> hot-plug feature yet.
>>> When saying Xen, I assume you mean the pv-ops kernel instead? So far I was
>>> under the impression that this worked even with the very old 2.6.18 tree (as
>>> much or as little as hotplug there worked in the native case). And given that
>>> there are no special requirements on the hypervisor to make this work, it's
>>> not even obvious to me what would be missing in the pv-ops kernel to make
>>> it work. 
>> Oh, my fault!  Perhaps we don't need to do anything in the pv-ops kernel to
>> support device hot-plug if the native system already supports it.  Actually,
>> we didn't do such testing before, since it is a native feature, not a
>> Xen-specific one.
>> Xiantao
> 
> My current understanding is that on boot, Xen scans the PCI bus, then
> dom0 rescans it later.  If a hotplug event gets serviced by dom0, does
> there not need to be some hypercall informing Xen that a new device has
> appeared?  I expect PCIPassthrough would not work correctly on a
> hotplugged device which Xen is unaware of.

Sure - such a hypercall exists and is - from all I can tell - being made
not only during the boot time bus scan, but also during hotplug
processing. See drivers/xen/pci.c.
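The flow described here - Xen keeps its own view of the PCI bus, and dom0 reports devices it discovers (at boot and on hotplug) so that later passthrough requests can find them - can be modelled in miniature. Everything below is an illustrative toy, not Xen's actual interface; the real reporting path is the physdev hypercall made from drivers/xen/pci.c:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_DEVS 16

/* Toy stand-in for the hypervisor's PCI device table. */
struct toy_dev { uint8_t bus, devfn; bool present; };
static struct toy_dev dev_table[MAX_DEVS];

/* dom0 -> "hypervisor": report a device found during a bus scan or a
 * hotplug event.  Returns 0 on success, -1 if the table is full. */
static int toy_pci_device_add(uint8_t bus, uint8_t devfn)
{
    for (int i = 0; i < MAX_DEVS; i++) {
        if (!dev_table[i].present) {
            dev_table[i] = (struct toy_dev){ bus, devfn, true };
            return 0;
        }
    }
    return -1;
}

/* Passthrough setup can only succeed for devices the hypervisor has
 * been told about - a hotplugged device that was never reported
 * simply isn't found. */
static bool toy_can_passthrough(uint8_t bus, uint8_t devfn)
{
    for (int i = 0; i < MAX_DEVS; i++)
        if (dev_table[i].present &&
            dev_table[i].bus == bus && dev_table[i].devfn == devfn)
            return true;
    return false;
}
```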

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:09:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLGQ-0006Aj-VW; Wed, 19 Dec 2012 15:08:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlLGP-0006AZ-HP
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:08:53 +0000
Received: from [85.158.143.99:48081] by server-1.bemta-4.messagelabs.com id
	FB/91-28401-488D1D05; Wed, 19 Dec 2012 15:08:52 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355929730!18992191!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29572 invoked from network); 19 Dec 2012 15:08:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:08:52 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1200178"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 15:08:49 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 10:08:49 -0500
Message-ID: <50D1D880.10701@citrix.com>
Date: Wed, 19 Dec 2012 15:08:48 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
	<50D1AEBD.1090202@citrix.com>
	<1355919745.14620.363.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355919745.14620.363.camel@zakaz.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 12:22, Ian Campbell wrote:
> On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:
>
>>>> +                  only likely to return EFAULT or some other "things are very
>>>> +                  bad" error code, which the rest of the calling code won't
>>>> +                  be able to fix up. So we just exit with the error we got.
>>> I expect it is more important to accumulate the individual errors from
>>> remap_pte_fn into err_ptr.
>> Yes, but since that exits on error with EFAULT, the calling code won't
>> "accept" the errors, and thus the whole house of cards fall apart at
>> this point.
>>
>> There should probably be a task to fix this up properly, hence the
>> comment. But right now, any error besides ENOENT is "unacceptable" by
>> the callers of this code. If you want me to add this to the comment, I'm
>> happy to. But as long as remap_pte_fn returns EFAULT on error, nothing
>> will work after an error.
> Are you sure? privcmd.c has some special casing for ENOENT but it looks
> like it should just pass through other errors back to userspace.
>
> In any case surely this needs fixing?
>
> On the X86 side err_ptr is the result of the mmupdate hypercall which
> can already be other than ENOENT, including EFAULT, ESRCH etc.
Yes, but the ONLY error that is "acceptable" (as in, doesn't lead to the 
calling code revoking the mapping and returning an error) is ENOENT.

Or at least, that's how I believe it SHOULD be - since only ENOENT is a 
"success" error code, anything else pretty much means that the operation 
requested didn't work properly. If you are aware of any use-case where 
EFAULT, ESRCH or other error codes would still result in a valid, usable 
memory mapping, please point it out - I have a fair understanding of the 
xc_* code, and although I may not know every piece of it, I'm fairly 
certain that is the expected behaviour.
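The policy under discussion - per-page errors are accumulated into err_ptr, but the only per-page status the callers tolerate is ENOENT, and any harder error (EFAULT, ESRCH, ...) makes them revoke the whole mapping - can be written down as a small decision function. This is a sketch of that policy as stated in the thread, not the actual privcmd or xc_* code:

```c
#include <assert.h>
#include <errno.h>

/* Decide the overall result of a batched mapping from the per-page
 * error array: 0 and -ENOENT entries are tolerated (success plus the
 * one benign error), anything else is a hard failure that should make
 * the caller revoke the whole mapping. */
static int overall_map_status(const int *err, int nr)
{
    for (int i = 0; i < nr; i++)
        if (err[i] != 0 && err[i] != -ENOENT)
            return err[i];   /* first hard error wins */
    return 0;
}
```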

--
Mats
>
> Ian.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:12:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:12:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLJc-0006PT-Iy; Wed, 19 Dec 2012 15:12:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlLJb-0006PJ-8J
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:12:11 +0000
Received: from [85.158.138.51:33442] by server-10.bemta-3.messagelabs.com id
	E0/BA-07616-A49D1D05; Wed, 19 Dec 2012 15:12:10 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355929812!29703159!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14411 invoked from network); 19 Dec 2012 15:10:13 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 15:10:13 -0000
Received: (qmail 11027 invoked from network); 19 Dec 2012 17:09:10 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 17:09:10 +0200
Message-ID: <50D1D8D4.8030300@gmail.com>
Date: Wed, 19 Dec 2012 17:10:12 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355929247.14620.436.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234434,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 147cb269f1d19cbc063511471ca18e52.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3t9.17epd43f4.efc7],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44441
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> I did manage to take 'enabled' into account with what appears to be
>> success, but if I've read the situation correctly, there's not much to
>> do about 'overlap', unless we save it in hvm_save_mtrr_msr() like it's
>> done with 'enabled'. What do you think?
>
> It's not an architectural thing so I don't think it belongs in there.
>
> TBH until someone figures out or explains what overlap actually is I
> don't know if it even needs exporting or taking into account in
> userspace.

Well, get_mtrr_range() is called by is_var_mtrr_overlapped(), the 
function that sets m->overlapped, and that flag is then used quite a 
lot in the logic of get_mtrr_type(), which this patch attempts to 
bring into userspace via libxc.

I would quite happily discount all checks against the overlap boolean 
argument (and my code seems to work like that), but I suspect whoever 
wrote get_mtrr_type() had good reason to check for that.
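For what it's worth, the overlap test itself reduces to checking whether 
two [base, base + size) physical ranges intersect; a minimal sketch (a 
hypothetical helper, not the hypervisor's is_var_mtrr_overlapped()):

```c
#include <stdint.h>

/* Two half-open physical ranges [base, base + size) overlap iff
 * each one starts before the other ends. */
static int ranges_overlap(uint64_t base1, uint64_t size1,
                          uint64_t base2, uint64_t size2)
{
    return base1 < base2 + size2 && base2 < base1 + size1;
}
```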

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:18:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLPR-0006i0-IQ; Wed, 19 Dec 2012 15:18:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1TlLPQ-0006hv-6w
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:18:12 +0000
Received: from [85.158.138.51:51189] by server-7.bemta-3.messagelabs.com id
	22/EB-23008-3BAD1D05; Wed, 19 Dec 2012 15:18:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355930290!29623049!1
X-Originating-IP: [209.85.214.53]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12885 invoked from network); 19 Dec 2012 15:18:10 -0000
Received: from mail-bk0-f53.google.com (HELO mail-bk0-f53.google.com)
	(209.85.214.53)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:18:10 -0000
Received: by mail-bk0-f53.google.com with SMTP id j5so1062326bkw.40
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 07:18:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:user-agent:date:subject:from:to:cc:message-id
	:thread-topic:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=N1kq26rGtz38REgwuXMgu1CXz0Iq3twvywb3SyBC+o8=;
	b=VO5xgTeB7rEoMP6GUeVXHKPhqHk9g6Or024tYsRe3xJHyW5oxFbri5F8bFCHS3FT9Z
	5tElM9OSP2KKeGMMwPv+jUee1hhXpdvQEKCTW95R7PvqYA9ewCtUQl+CMgy4KHlZuT8b
	C+bOT8QHs2hZoTsL758O++NTl62wNXCH3Si0nvdlSQq6qIMys1sNgFLbnOYXeGtn8mNt
	NFL3AmmHv2OSL0tW3a5jVYbtO8NUIN+DTkPfyseMiV8/2vh+bZs9+9uoKkocNsQnXNnj
	k07WkW4pozRMxpAX4/dRUnAUzb2+IriDB/lhOAlOZCr/m1IMWh09TMlHg9sZ5vGHMymu
	pprg==
X-Received: by 10.204.147.141 with SMTP id l13mr2754473bkv.43.1355930289714;
	Wed, 19 Dec 2012 07:18:09 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id o9sm4531020bko.15.2012.12.19.07.18.02
	(version=SSLv3 cipher=OTHER); Wed, 19 Dec 2012 07:18:08 -0800 (PST)
User-Agent: Microsoft-Entourage/12.35.0.121009
Date: Wed, 19 Dec 2012 15:18:00 +0000
From: Keir Fraser <keir@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CCF78B28.56019%keir@xen.org>
Thread-Topic: [PATCH] xen: remove nr_irqs_gsi from generic code
Thread-Index: Ac3d/AgKr/lrQiQ9yUudHcHsbI4qRA==
In-Reply-To: <1355928233-15885-1-git-send-email-ian.campbell@citrix.com>
Mime-version: 1.0
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove nr_irqs_gsi from generic code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/2012 14:43, "Ian Campbell" <ian.campbell@citrix.com> wrote:

> The concept is X86 specific.
> 
> AFAICT the generic concept here is the number of physical IRQs which
> the current hardware has, so call this nr_hw_irqs.
> 
> Also using "defined NR_IRQS" as a standin for x86 might have made
> sense at one point but its just cleaner to push the necessary
> definitions into asm/irq.h.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Keir (Xen.org) <keir@xen.org>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Acked-by: Keir Fraser <keir@xen.org>

> --
> v2: s/nr_hw_irqs/nr_static_irqs/g
> ---
>  xen/arch/arm/dummy.S      |    2 --
>  xen/common/domain.c       |    4 ++--
>  xen/include/asm-arm/irq.h |    3 +++
>  xen/include/asm-x86/irq.h |    4 ++++
>  xen/include/xen/irq.h     |    8 --------
>  xen/xsm/flask/hooks.c     |    4 ++--
>  6 files changed, 11 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
> index 6416f94..a214fbf 100644
> --- a/xen/arch/arm/dummy.S
> +++ b/xen/arch/arm/dummy.S
> @@ -6,5 +6,3 @@ x: .word 0xe7f000f0 /* Undefined instruction */
> .globl x; \
>  x: mov pc, lr
> 
> -/* PIRQ support */
> -DUMMY(nr_irqs_gsi);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 12c8e24..2f8ef00 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -259,9 +259,9 @@ struct domain *domain_create(
>          atomic_inc(&d->pause_count);
>  
>          if ( domid )
> -            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>          else
> -            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
> +            d->nr_pirqs = nr_static_irqs + extra_dom0_irqs;
>          if ( d->nr_pirqs > nr_irqs )
>              d->nr_pirqs = nr_irqs;
>  
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index abde839..bd6b54a 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -21,6 +21,9 @@ struct irq_cfg {
>  #define NR_IRQS  1024
>  #define nr_irqs NR_IRQS
>  
> +#define nr_irqs NR_IRQS
> +#define nr_static_irqs NR_IRQS
> +
>  struct irq_desc;
>  
>  struct irq_desc *__irq_to_desc(int irq);
> diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
> index 5eefb94..7f5da06 100644
> --- a/xen/include/asm-x86/irq.h
> +++ b/xen/include/asm-x86/irq.h
> @@ -11,6 +11,10 @@
>  #include <irq_vectors.h>
>  #include <asm/percpu.h>
>  
> +extern unsigned int nr_irqs_gsi;
> +extern unsigned int nr_irqs;
> +#define nr_static_irqs nr_irqs_gsi
> +
>  #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
>     (1 << (irq)) & io_apic_irqs : \
>     (irq) < nr_irqs_gsi)
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index 5973cce..7386358 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
>  
>  #include <asm/irq.h>
>  
> -#ifdef NR_IRQS
> -# define nr_irqs NR_IRQS
> -# define nr_irqs_gsi NR_IRQS
> -#else
> -extern unsigned int nr_irqs_gsi;
> -extern unsigned int nr_irqs;
> -#endif
> -
>  struct msi_desc;
>  /*
>   * This is the "IRQ descriptor", which contains various information
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 0ca10d0..782e28c 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct
> avc_audit_data *ad)
>      struct irq_desc *desc = irq_to_desc(irq);
>      if ( irq >= nr_irqs || irq < 0 )
>          return -EINVAL;
> -    if ( irq < nr_irqs_gsi ) {
> +    if ( irq < nr_static_irqs ) {
>          if (ad) {
>              AVC_AUDIT_DATA_INIT(ad, IRQ);
>              ad->irq = irq;
> @@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int
> irq, void *data)
>      if ( rc )
>          return rc;
>  
> -    if ( irq >= nr_irqs_gsi && msi ) {
> +    if ( irq >= nr_static_irqs && msi ) {
>          u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
>          AVC_AUDIT_DATA_INIT(&ad, DEV);
>          ad.device = machine_bdf;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:24:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLV3-0006wI-Kh; Wed, 19 Dec 2012 15:24:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TlLV2-0006wC-Gc
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:24:00 +0000
Received: from [85.158.143.99:19683] by server-1.bemta-4.messagelabs.com id
	CB/D6-28401-F0CD1D05; Wed, 19 Dec 2012 15:23:59 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355930635!29228700!1
X-Originating-IP: [220.181.15.60]
X-SpamReason: No, hits=2.4 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYwID0+IDU0MTU=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYwID0+IDU0MTU=\n,HTML_40_50,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29769 invoked from network); 19 Dec 2012 15:23:57 -0000
Received: from m15-60.126.com (HELO m15-60.126.com) (220.181.15.60)
	by server-5.tower-216.messagelabs.com with SMTP;
	19 Dec 2012 15:23:57 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=/xRuloJX9CNibycJLNKrCJ+bBJriN+6J65XF
	HhcDUrM=; b=K3y7KdJj5mRmPK4/cd7Zvxk+ONrpRYBemCjGXsDScB/JPOTCWRi8
	CJu8m6qNE7WuSSuu7dyILqQWbS/EebGbChoa35+sv6mEqNYDSbplb2rEjIchN4s6
	b9aoSSD20i7i4QWl0IDzlfFS9LLHmuCUqgWR9JYPWKs5rmRjNsUy4qQ=
Received: from hxkhust$126.com ( [202.114.0.254] ) by ajax-webmail-wmsvr60
	(Coremail) ; Wed, 19 Dec 2012 23:23:52 +0800 (CST)
X-Originating-IP: [202.114.0.254]
Date: Wed, 19 Dec 2012 23:23:52 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: A1WtKmZvb3Rlcl9odG09Mzk1Mzo4MQ==
MIME-Version: 1.0
Message-ID: <2cff694b.c956.13bb3c38014.Coremail.hxkhust@126.com>
X-CM-TRANSID: PMqowEB5jEAJ3NFQ+y8OAA--.4250W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitBSKBUX9kmFs1QAAsd
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!help!I wouldn't be able to meet the deadline!(qcow
 format image file read operation in qemu-img-xen)[updated]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8001003203864926959=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8001003203864926959==
Content-Type: multipart/alternative; 
	boundary="----=_Part_194202_1543841167.1355930632211"

------=_Part_194202_1543841167.1355930632211
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi guys,
While an HVM guest that uses a qcow format image file as its virtual disk is running, the qcow image file is read constantly. When that qcow image is based on a raw format image, the backing file (that raw image file) is read when needed. My purpose is to cache the data that is read from the backing file while the HVM is running.
What I am concerned with is the following (which is in /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c):
static void qcow_aio_read_cb(void *opaque, int ret)
{
........
if (!acb->cluster_offset) {
        if (bs->backing_hd) {
            /* read from the base image */
            acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,                           //*************
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);   //**************
// I read what acb->buf points to here, but find that the read operation has not yet finished.
            if (acb->hd_aiocb == NULL)
                goto fail;
        } else {
            /* Note: in this case, no need to wait */
            memset(acb->buf, 0, 512 * acb->n);
            goto redo;
        }
    } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
        /* add AIO support for compressed blocks ? */
        if (decompress_cluster(s, acb->cluster_offset) < 0)
            goto fail;
        memcpy(acb->buf,
               s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
        goto redo;
 .........
When the statement

    acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);

has completed, the content that acb->buf points to has not yet been prepared: this is an asynchronous read operation. Could someone explain the principle or mechanism of this asynchronous read, ideally in terms of the Xen code? I need to know at what point the data has actually been copied into the memory that acb->buf points to; this is important to me and, as the subject says, I have to solve it as soon as possible. Alternatively, could you tell me how to cache the data read from the backing file when a qcow image is used as a virtual disk by a running HVM?
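A toy model of what is going on (purely illustrative, not QEMU's actual 
AIO machinery; all names below are made up): submitting the read only 
queues it, and the buffer becomes valid only once the event loop has 
completed the request and invoked the completion callback.

```c
#include <string.h>
#include <stddef.h>

/* Toy model of asynchronous I/O: aio_submit() only queues the request
 * and returns immediately; the data "arrives" when aio_poll() (the
 * event loop) completes the request and runs the callback. */
typedef void (*aio_cb)(void *opaque, int ret);

struct aio_req {
    char *buf;
    const char *src;
    size_t len;
    aio_cb cb;
    void *opaque;
    int pending;
};

static struct aio_req queue;  /* one outstanding request, for simplicity */

static void aio_submit(char *buf, const char *src, size_t len,
                       aio_cb cb, void *opaque)
{
    queue = (struct aio_req){ buf, src, len, cb, opaque, 1 };
    /* returns immediately: buf has NOT been filled yet */
}

static void aio_poll(void)
{
    if (!queue.pending)
        return;
    memcpy(queue.buf, queue.src, queue.len);  /* data arrives here */
    queue.pending = 0;
    queue.cb(queue.opaque, 0);                /* buf is valid from now on */
}

static int done;
static void read_cb(void *opaque, int ret)
{
    (void)opaque; (void)ret;
    done = 1;     /* only now is it safe to look at the buffer */
}
```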


A newbie






------=_Part_194202_1543841167.1355930632211--



--===============8001003203864926959==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8001003203864926959==--



From xen-devel-bounces@lists.xen.org Wed Dec 19 15:24:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLV3-0006wI-Kh; Wed, 19 Dec 2012 15:24:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TlLV2-0006wC-Gc
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:24:00 +0000
Received: from [85.158.143.99:19683] by server-1.bemta-4.messagelabs.com id
	CB/D6-28401-F0CD1D05; Wed, 19 Dec 2012 15:23:59 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355930635!29228700!1
X-Originating-IP: [220.181.15.60]
X-SpamReason: No, hits=2.4 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYwID0+IDU0MTU=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYwID0+IDU0MTU=\n,HTML_40_50,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29769 invoked from network); 19 Dec 2012 15:23:57 -0000
Received: from m15-60.126.com (HELO m15-60.126.com) (220.181.15.60)
	by server-5.tower-216.messagelabs.com with SMTP;
	19 Dec 2012 15:23:57 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=/xRuloJX9CNibycJLNKrCJ+bBJriN+6J65XF
	HhcDUrM=; b=K3y7KdJj5mRmPK4/cd7Zvxk+ONrpRYBemCjGXsDScB/JPOTCWRi8
	CJu8m6qNE7WuSSuu7dyILqQWbS/EebGbChoa35+sv6mEqNYDSbplb2rEjIchN4s6
	b9aoSSD20i7i4QWl0IDzlfFS9LLHmuCUqgWR9JYPWKs5rmRjNsUy4qQ=
Received: from hxkhust$126.com ( [202.114.0.254] ) by ajax-webmail-wmsvr60
	(Coremail) ; Wed, 19 Dec 2012 23:23:52 +0800 (CST)
X-Originating-IP: [202.114.0.254]
Date: Wed, 19 Dec 2012 23:23:52 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: A1WtKmZvb3Rlcl9odG09Mzk1Mzo4MQ==
MIME-Version: 1.0
Message-ID: <2cff694b.c956.13bb3c38014.Coremail.hxkhust@126.com>
X-CM-TRANSID: PMqowEB5jEAJ3NFQ+y8OAA--.4250W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitBSKBUX9kmFs1QAAsd
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!help!I wouldn't be able to meet the deadline!(qcow
 format image file read operation in qemu-img-xen)[updated]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8001003203864926959=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8001003203864926959==
Content-Type: multipart/alternative; 
	boundary="----=_Part_194202_1543841167.1355930632211"

------=_Part_194202_1543841167.1355930632211
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi guys,
While an HVM that uses a qcow-format image file as its virtual disk is running, that qcow image is read constantly. When the qcow image is backed by a raw-format image, the backing file (that raw image) is read as needed. My goal is to cache the data read from the backing file while the HVM is running.
What concerns me is the following code (from /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c):
static void qcow_aio_read_cb(void *opaque, int ret)
{
........
    if (!acb->cluster_offset) {
        if (bs->backing_hd) {
            /* read from the base image */
            acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,                           //*************
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);   //**************
// If I read what acb->buf points to at this point, I find the read has not finished yet.
            if (acb->hd_aiocb == NULL)
                goto fail;
        } else {
            /* Note: in this case, no need to wait */
            memset(acb->buf, 0, 512 * acb->n);
            goto redo;
        }
    } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
        /* add AIO support for compressed blocks ? */
        if (decompress_cluster(s, acb->cluster_offset) < 0)
            goto fail;
        memcpy(acb->buf,
               s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
        goto redo;
 .........
When the statement

acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);

returns, the content that acb->buf points to has not yet been prepared; this is an asynchronous read. Could someone explain the principle and flow of this asynchronous read operation, ideally in terms of the Xen code? I need to know at what point the data has been copied into the memory that acb->buf points to; this problem is important to me, and as the title says, I have to solve it as soon as possible. Alternatively, could you tell me how to cache the data read from the backing file when a qcow image is used as the virtual disk of a running HVM?


A newbie






------=_Part_194202_1543841167.1355930632211--



--===============8001003203864926959==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8001003203864926959==--



From xen-devel-bounces@lists.xen.org Wed Dec 19 15:26:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:26:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLWn-00072I-64; Wed, 19 Dec 2012 15:25:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1TlLWl-00072B-Q3
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:25:48 +0000
Received: from [85.158.137.99:14387] by server-13.bemta-3.messagelabs.com id
	6E/5B-00465-67CD1D05; Wed, 19 Dec 2012 15:25:42 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355930738!17070708!1
X-Originating-IP: [199.106.114.254]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjEwNi4xMTQuMjU0ID0+IDM1NjgzNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20373 invoked from network); 19 Dec 2012 15:25:40 -0000
Received: from wolverine01.qualcomm.com (HELO wolverine01.qualcomm.com)
	(199.106.114.254)
	by server-10.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 15:25:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355126400"; d="scan'208";a="15144522"
Received: from pdmz-ns-snip_115_219.qualcomm.com (HELO mostmsg01.qualcomm.com)
	([199.106.115.219])
	by wolverine01.qualcomm.com with ESMTP/TLS/DHE-RSA-AES256-SHA;
	19 Dec 2012 07:25:37 -0800
Received: from [10.228.68.45] (pdmz-ns-snip_218_1.qualcomm.com [192.168.218.1])
	by mostmsg01.qualcomm.com (Postfix) with ESMTPA id D3A0110004B4;
	Wed, 19 Dec 2012 07:25:36 -0800 (PST)
Message-ID: <50D1DC6F.3010304@codeaurora.org>
Date: Wed, 19 Dec 2012 10:25:35 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.2.24) Gecko/20111108 Thunderbird/3.1.16
MIME-Version: 1.0
To: Marc Zyngier <marc.zyngier@arm.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
	<20121218131401.GB22139@mudshark.cambridge.arm.com>
	<50D0AF5C.1070605@codeaurora.org> <50D0B384.40605@arm.com>
In-Reply-To: <50D0B384.40605@arm.com>
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"nico@fluxnic.net" <nico@fluxnic.net>, Will Deacon <Will.Deacon@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/18/2012 01:18 PM, Marc Zyngier wrote:
> Hi Christopher,
> 
> On 18/12/12 18:01, Christopher Covington wrote:
>> Hi Will,
>>
>> On 12/18/2012 08:14 AM, Will Deacon wrote:
>>> Hi Stefano,
>>>

[...]

>>
>>> The only things the platform needs are GIC, timers, memory and a CPU.
>>
>> I assume multiple virtio-mmio peripherals are hiding behind what you seem to
>> be advertising here as plain old memory?
> 
> No. Memory is memory. Virtio peripherals are created outside of the 
> memory range. They end up having rings and descriptor in memory, but 
> that's not any different from what you have with a fairly complicated 
> DMA capable hardware device.

Sure, but I would consider such a device to be part of the platform (or
perhaps there's some better name to group together the set of devices that
are expected to modify memory?), and I was trying to fish for what additional
devices might be part of the platform on a regular basis, like what
console(s) and network interface(s).

> Here's what kvmtool has been seen to generate, with the parameters I used
> a few minutes ago:
>
> /dts-v1/;
> 
> /memreserve/	0x000000008fff0000 0x0000000000001000;
> / {
> 	interrupt-parent = <0x1>;
> 	compatible = "linux,dummy-virt";

Might it make sense to call this a generic ARM platform, using something
roughly in the direction of "linux,arm-generic" here and
s/mach-virt/mach-generic/ in the paths? Then any device-tree enabled ARMv7
platform using the generic timer and interrupt controller could reuse this
definition. This machine/platform seems like it could prove useful in
simulation and hardware scenarios. Would, for example, a fully-DT-enabled
Versatile Express machine converge on this definition? I wonder if it might
also be useful as a simple example with which to test out code sharing
between arch/arm and arch/arm64.

[...]
 
> Does it help?

Yes, thanks!

[...]
 
> What would be the point of using DCC?

It seemed like ARM-Ltd.-architected peripherals were picked for the timer and
interrupt controller, so I wondered why not for the console as well. As best
I'm aware, unless one ventures into the PrimeCell line with the PL011, DCC is
the closest match for an officially architected console mechanism.

> We would have to trap on each access...

Now that you point this out, this would indeed be fundamentally different from how
the coprocessor-register-accessed generic timer is handled, because the
virtualization extensions mean the hypervisor just needs to handle setup and
switching, but not intervention during normal operation.

This is drifting a bit off-topic to this particular patchset, but while I'm
on the topic of coprocessor register accesses in a virtualized environment,
is there a plan or are there existing mechanisms for handling the performance
counters? Do ID or cache or other register accesses need to be trapped?
Perhaps there are existing threads or portions of code on the topic that I've
overlooked.

> ...and then we'd have to invent yet another mechanism to channel the
> console to userspace.

What mechanism does virtio-mmio support in KVM use to channel the console to
userspace? Would there be a fundamental difference?

It seemed like Will was trying to frame the changes as _whether_ to support
various peripheral devices, because some of them aren't "needed" for an
artificial use case. I would rather frame the implementation decisions as
_which_ devices are going to be supported in the foreseeable future by your
work. Choosing a virtio-mmio console over DCC, semihosting, ram buffer, UART,
and whatever other alternatives there might be impacts someone trying to put
together a Linux image that boots on some combination of simulation,
virtualization, and hardware.

If one puts together a Linux image that uses a UART on hardware and a
virtio-mmio console in a virtualized environment, I would argue that this
image has some incrementally higher debug and maintenance cost than an image
that has the same console mechanism across all platforms. On the other hand,
perhaps the cost of implementing a uniform mechanism across all platforms is
higher still.

> Not to mention that I like to be able to actually input something on a console,
> not just read from it.

Perhaps there's some misunderstanding here. The Debug Communications Channel
that I'm familiar with is fully capable of both input and output.

http://infocenter.arm.com/help/topic/com.arm.doc.dui0471c/BEIHGIBB.html
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/tty/hvc/hvc_dcc.c

Thanks,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by
the Linux Foundation

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 19 15:28:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLZ6-0007BU-PO; Wed, 19 Dec 2012 15:28:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TlLZ5-0007BM-6A
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:28:11 +0000
Received: from [193.109.254.147:32686] by server-9.bemta-14.messagelabs.com id
	FA/EE-24482-A0DD1D05; Wed, 19 Dec 2012 15:28:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1355930888!1894381!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28574 invoked from network); 19 Dec 2012 15:28:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:28:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="256557"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 15:27:09 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 15:27:08 +0000
Message-ID: <50D1DCC5.5050309@citrix.com>
Date: Wed, 19 Dec 2012 15:27:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355929247.14620.436.camel@zakaz.uk.xensource.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Razvan Cojocaru <rzvncj@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/2012 15:00, Ian Campbell wrote:
> On Wed, 2012-12-19 at 14:57 +0000, Razvan Cojocaru wrote:
>>>>      m->overlapped = is_var_mtrr_overlapped(m);
>>>>
>>>> Looks like that function contains the necessary logic.
>>> You're right, but what happens there is that that function depends on
>>> the get_mtrr_range() function, which in turn depends on the size_or_mask
>>> global variable, which is initialized in hvm_mtrr_pat_init(), which then
>>> depends on a global table, and so on. Putting that into libxc is pretty
>>> much putting the whole mtrr.c file there.
>> This is where it gets tricky:
>>
>> static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
>>                             uint64_t *base, uint64_t *end)
>> {
>>      [...]
>>      phys_addr = 36;
>>
>>      if ( cpuid_eax(0x80000000) >= 0x80000008 )
>>          phys_addr = (uint8_t)cpuid_eax(0x80000008);
>>
>>      size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
>>      [...]
>> }
>>
>> specifically, in the cpuid_eax() call, which doesn't make much sense in 
>> dom0 userspace.
> The fact that get_mtrr_range is querying the underlying physical CPUID
> suggests it has something to do with the translation from virtual to
> physical MTRR and is therefore not something userspace needs to worry
> about, but I'm only speculating.

CPUID 0x80000008.EAX is the physical address size supported by the
processor (in bits).  Typical values on modern hardware are 40 or 48.

You need to know this information to work out which bits in the MTRR are
valid: which bits are reserved varies with the supported physical address
width.

Having said the above, this information is never going to change on the
same CPU, so it should be cached once to avoid repeated emulations of
CPUID.

~Andrew

>
>> I did manage to take 'enabled' into account with what appears to be 
>> success, but if I've read the situation correctly, there's not much to 
>> do about 'overlap', unless we save it in hvm_save_mtrr_msr() like it's 
>> done with 'enabled'. What do you think?
> It's not an architectural thing so I don't think it belongs in there.
>
> TBH until someone figures out or explains what overlap actually is I
> don't know if it even needs exporting or taking into account in
> userspace.
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:31:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLcQ-0007Sr-Dn; Wed, 19 Dec 2012 15:31:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <afaerber@suse.de>) id 1TlLcP-0007Sm-Cr
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:31:37 +0000
Received: from [85.158.137.99:39015] by server-5.bemta-3.messagelabs.com id
	FE/31-15136-6DDD1D05; Wed, 19 Dec 2012 15:31:34 +0000
X-Env-Sender: afaerber@suse.de
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355931093!15005119!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6443 invoked from network); 19 Dec 2012 15:31:34 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-6.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 15:31:34 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 63FC1A5212;
	Wed, 19 Dec 2012 16:31:31 +0100 (CET)
From: =?UTF-8?q?Andreas=20F=C3=A4rber?= <afaerber@suse.de>
To: qemu-devel@nongnu.org
Date: Wed, 19 Dec 2012 16:31:10 +0100
Message-Id: <1355931071-22100-7-git-send-email-afaerber@suse.de>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1355931071-22100-1-git-send-email-afaerber@suse.de>
References: <1355931071-22100-1-git-send-email-afaerber@suse.de>
MIME-Version: 1.0
Cc: "open list:X86" <xen-devel@lists.xensource.com>,
	=?UTF-8?q?Andreas=20F=C3=A4rber?= <afaerber@suse.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH qom-cpu 6/7] xen: Simplify halting of first CPU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VXNlIHRoZSBnbG9iYWwgZmlyc3RfY3B1IHZhcmlhYmxlIHRvIGhhbHQgdGhlIENQVSByYXRoZXIg
dGhhbiB1c2luZyBhCmxvY2FsIGZpcnN0X2NwdSBpbml0aWFsaXplZCBmcm9tIHFlbXVfZ2V0X2Nw
dSgwKS4KClRoaXMgd2lsbCBhbGxvdyB0byBjaGFuZ2UgcWVtdV9nZXRfY3B1KCkgcmV0dXJuIHR5
cGUgdG8gQ1BVU3RhdGUKZGVzcGl0ZSB1c2Ugb2YgdGhlIENQVV9DT01NT04gaGFsdGVkIGZpZWxk
IGluIHRoZSByZXNldCBoYW5kbGVyLgoKU2lnbmVkLW9mZi1ieTogQW5kcmVhcyBGw6RyYmVyIDxh
ZmFlcmJlckBzdXNlLmRlPgotLS0KIHhlbi1hbGwuYyB8ICAgIDQgKy0tLQogMSBEYXRlaSBnZcOk
bmRlcnQsIDEgWmVpbGUgaGluenVnZWbDvGd0KCspLCAzIFplaWxlbiBlbnRmZXJudCgtKQoKZGlm
ZiAtLWdpdCBhL3hlbi1hbGwuYyBiL3hlbi1hbGwuYwppbmRleCBkYWY0M2I5Li5lODdlZDdhIDEw
MDY0NAotLS0gYS94ZW4tYWxsLmMKKysrIGIveGVuLWFsbC5jCkBAIC01ODQsOSArNTg0LDcgQEAg
c3RhdGljIHZvaWQgeGVuX3Jlc2V0X3ZjcHUodm9pZCAqb3BhcXVlKQogCiB2b2lkIHhlbl92Y3B1
X2luaXQodm9pZCkKIHsKLSAgICBDUFVBcmNoU3RhdGUgKmZpcnN0X2NwdTsKLQotICAgIGlmICgo
Zmlyc3RfY3B1ID0gcWVtdV9nZXRfY3B1KDApKSkgeworICAgIGlmIChmaXJzdF9jcHUgIT0gTlVM
TCkgewogICAgICAgICBxZW11X3JlZ2lzdGVyX3Jlc2V0KHhlbl9yZXNldF92Y3B1LCBmaXJzdF9j
cHUpOwogICAgICAgICB4ZW5fcmVzZXRfdmNwdShmaXJzdF9jcHUpOwogICAgIH0KLS0gCjEuNy4x
MC40CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVu
LWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMu
eGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:32:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:32:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLcp-0007VW-0W; Wed, 19 Dec 2012 15:32:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlLcn-0007VC-6C
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:32:01 +0000
Received: from [85.158.139.211:37096] by server-1.bemta-5.messagelabs.com id
	9D/6A-12813-0FDD1D05; Wed, 19 Dec 2012 15:32:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355931117!18732900!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=2.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32451 invoked from network); 19 Dec 2012 15:31:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 15:31:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Dec 2012 15:31:56 +0000
Message-Id: <50D1EBFB02000078000B17B8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Wed, 19 Dec 2012 15:31:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1354628943.2693.80.camel@zakaz.uk.xensource.com>
	<1355928233-15885-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1355928233-15885-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Keir \(Xen.org\)" <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: remove nr_irqs_gsi from generic code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 15:43, Ian Campbell <ian.campbell@citrix.com> wrote:
> The concept is X86 specific.
> 
> AFAICT the generic concept here is the number of physical IRQs which
> the current hardware has, so call this nr_hw_irqs.
> 
> Also using "defined NR_IRQS" as a standin for x86 might have made
> sense at one point but its just cleaner to push the necessary
> definitions into asm/irq.h.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Keir (Xen.org) <keir@xen.org>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Acked-by: Jan Beulich <jbeulich@suse.com>

> --
> v2: s/nr_hw_irqs/nr_static_irqs/g
> ---
>  xen/arch/arm/dummy.S      |    2 --
>  xen/common/domain.c       |    4 ++--
>  xen/include/asm-arm/irq.h |    3 +++
>  xen/include/asm-x86/irq.h |    4 ++++
>  xen/include/xen/irq.h     |    8 --------
>  xen/xsm/flask/hooks.c     |    4 ++--
>  6 files changed, 11 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
> index 6416f94..a214fbf 100644
> --- a/xen/arch/arm/dummy.S
> +++ b/xen/arch/arm/dummy.S
> @@ -6,5 +6,3 @@ x:	.word 0xe7f000f0 /* Undefined instruction */
>  	.globl x; \
>  x:	mov pc, lr
>  	
> -/* PIRQ support */
> -DUMMY(nr_irqs_gsi);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 12c8e24..2f8ef00 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -259,9 +259,9 @@ struct domain *domain_create(
>          atomic_inc(&d->pause_count);
>  
>          if ( domid )
> -            d->nr_pirqs = nr_irqs_gsi + extra_domU_irqs;
> +            d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>          else
> -            d->nr_pirqs = nr_irqs_gsi + extra_dom0_irqs;
> +            d->nr_pirqs = nr_static_irqs + extra_dom0_irqs;
>          if ( d->nr_pirqs > nr_irqs )
>              d->nr_pirqs = nr_irqs;
>  
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index abde839..bd6b54a 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -21,6 +21,9 @@ struct irq_cfg {
>  #define NR_IRQS		1024
>  #define nr_irqs NR_IRQS
>  
> +#define nr_irqs NR_IRQS
> +#define nr_static_irqs NR_IRQS
> +
>  struct irq_desc;
>  
>  struct irq_desc *__irq_to_desc(int irq);
> diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
> index 5eefb94..7f5da06 100644
> --- a/xen/include/asm-x86/irq.h
> +++ b/xen/include/asm-x86/irq.h
> @@ -11,6 +11,10 @@
>  #include <irq_vectors.h>
>  #include <asm/percpu.h>
>  
> +extern unsigned int nr_irqs_gsi;
> +extern unsigned int nr_irqs;
> +#define nr_static_irqs nr_irqs_gsi
> +
>  #define IO_APIC_IRQ(irq)    (platform_legacy_irq(irq) ?    \
>  			     (1 << (irq)) & io_apic_irqs : \
>  			     (irq) < nr_irqs_gsi)
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index 5973cce..7386358 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -58,14 +58,6 @@ typedef const struct hw_interrupt_type hw_irq_controller;
>  
>  #include <asm/irq.h>
>  
> -#ifdef NR_IRQS
> -# define nr_irqs NR_IRQS
> -# define nr_irqs_gsi NR_IRQS
> -#else
> -extern unsigned int nr_irqs_gsi;
> -extern unsigned int nr_irqs;
> -#endif
> -
>  struct msi_desc;
>  /*
>   * This is the "IRQ descriptor", which contains various information
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 0ca10d0..782e28c 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -72,7 +72,7 @@ static int get_irq_sid(int irq, u32 *sid, struct 
> avc_audit_data *ad)
>      struct irq_desc *desc = irq_to_desc(irq);
>      if ( irq >= nr_irqs || irq < 0 )
>          return -EINVAL;
> -    if ( irq < nr_irqs_gsi ) {
> +    if ( irq < nr_static_irqs ) {
>          if (ad) {
>              AVC_AUDIT_DATA_INIT(ad, IRQ);
>              ad->irq = irq;
> @@ -699,7 +699,7 @@ static int flask_map_domain_pirq (struct domain *d, int 
> irq, void *data)
>      if ( rc )
>          return rc;
>  
> -    if ( irq >= nr_irqs_gsi && msi ) {
> +    if ( irq >= nr_static_irqs && msi ) {
>          u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
>          AVC_AUDIT_DATA_INIT(&ad, DEV);
>          ad.device = machine_bdf;
> -- 
> 1.7.9.1



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:33:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLda-0007bs-F1; Wed, 19 Dec 2012 15:32:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlLdZ-0007bj-GT
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:32:49 +0000
Received: from [193.109.254.147:24346] by server-14.bemta-14.messagelabs.com
	id EA/7F-10022-02ED1D05; Wed, 19 Dec 2012 15:32:48 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355931162!2236296!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25771 invoked from network); 19 Dec 2012 15:32:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:32:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1205015"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 15:32:42 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 10:32:42 -0500
Message-ID: <50D1DE19.1080709@citrix.com>
Date: Wed, 19 Dec 2012 15:32:41 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <2cff694b.c956.13bb3c38014.Coremail.hxkhust@126.com>
In-Reply-To: <2cff694b.c956.13bb3c38014.Coremail.hxkhust@126.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(qcow format image file read operation in qemu-img-xen)[updated]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 15:23, hxkhust wrote:
> Or could you tell me how to cache the data which is read from the
> backingfile when a qcow image is regarded as a virtual disk in a
> running HVM?
I take it the above single question is the effect of my previous reply?
Why did you have to "hide" that little extra question in the whole
previous e-mail?

Sorry, I don't know the answer to your question [I'm guessing, in general,
that the Dom0 will do that for you, subject to available space]; I'm just
pointing out that there is a minor difference between your previous and
current mail.

By caching, do you mean "load the entire file into RAM", or "if a read
is requested for the same piece of 'disk' multiple times, I want the
previous result to be stored and returned"?

--
Mats


From xen-devel-bounces@lists.xen.org Wed Dec 19 15:35:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLg0-0007rE-1j; Wed, 19 Dec 2012 15:35:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1TlLfy-0007r6-Vq
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:35:19 +0000
Received: from [85.158.143.99:38544] by server-3.bemta-4.messagelabs.com id
	65/A9-18211-6BED1D05; Wed, 19 Dec 2012 15:35:18 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355931316!22682784!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27928 invoked from network); 19 Dec 2012 15:35:16 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 15:35:16 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id C16F9400AF7;
	Wed, 19 Dec 2012 16:35:15 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 0agrrtVQGbeT; Wed, 19 Dec 2012 16:35:14 +0100 (CET)
Received: from [192.168.178.50]
	(host51-78-dynamic.2-87-r.retail.telecomitalia.it [87.2.78.51])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id 29477400630;
	Wed, 19 Dec 2012 16:35:14 +0100 (CET)
Message-ID: <50D1DEB0.9090306@tiscali.it>
Date: Wed, 19 Dec 2012 16:35:12 +0100
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] Test report for xen-unstable and qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8046501980912464354=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

--===============8046501980912464354==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms030808080308020806040406"

This is a digitally signed message in MIME format.

--------------ms030808080308020806040406
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: quoted-printable

Dom0:
Wheezy 64 bit with kernel from package linux-image-3.2.0-4-amd64 version 3.2.32-1, package blktap-dkms and all dependency packages for Xen
hg clone http://xenbits.xen.org/hg/xen-unstable.hg (in this build the changeset is 26286:d5c0389bf26c)
-------------------------
vi Config.mk
------------
PYTHON_PREFIX_ARG =
-------------------------
Added some patches:
- Add qxl vga interface support v5 (posted by other people on the mailing list)
- improve_make_deb (posted on the mailing list)
- tools_config_spice_test (not posted yet; enables the spice build on qemu-xen)
-------------------------
./configure
-------------------------
make deb

-------------------------
Issues solved from my previous test build:
-------------
- xl cd-insert segmentation fault
-------------------------

-------------------------
Old issues not solved from my previous test build:
-------------
- IMPORTANT - On restore the network is up but not working; I tried it with W7 Pro 64 bit with the latest gplpv build (357) on qemu-xen
- On qemu-xen there are empty floppy and cdrom devices that are not set in the configuration file (Linux and Windows)
- HVM Quantal domU with spice and qxl doesn't work; qemu crashes, but it works without qxl
On log:
(/usr/sbin/xl:5250): Spice-CRITICAL **: red_memslots.c:123:get_virt: slot_id 194 too big, addr=c2c2c2c2c2c2c2c2
-------------------------

-------------------------
New issues from my previous test build:
-------------
- xl cd-eject and cd-insert are not working; tried with W7 Pro 64 bit with the latest gplpv build (357) on qemu-xen, and also with Ubuntu 12.10
xl cd-eject QUANTALHVM hdb # On my previous test this was working - changeset 26094:c69bcb248128
libxl: error: libxl_device.c:269:libxl__device_disk_set_backend: no suitable backend for disk hdb
-------------------------

If you need other information, logs or tests, tell me and I'll run them and post the results.
Thanks for any reply.


--------------ms030808080308020806040406
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: Firma crittografica S/MIME

[base64 S/MIME signature omitted]
--------------ms030808080308020806040406--


--===============8046501980912464354==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--===============8046501980912464354==--


From xen-devel-bounces@lists.xen.org Wed Dec 19 15:38:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLjJ-00083s-MQ; Wed, 19 Dec 2012 15:38:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlLjH-00083k-T3
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:38:44 +0000
Received: from [85.158.143.35:11602] by server-3.bemta-4.messagelabs.com id
	68/AE-18211-38FD1D05; Wed, 19 Dec 2012 15:38:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1355931456!5124320!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13269 invoked from network); 19 Dec 2012 15:37:37 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 15:37:37 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJFbVCW007124
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 15:37:32 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJFbUWo019549
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 15:37:31 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJFbUt0005580; Wed, 19 Dec 2012 09:37:30 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 07:37:30 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DFF291BF762; Wed, 19 Dec 2012 10:37:28 -0500 (EST)
Date: Wed, 19 Dec 2012 10:37:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121219153728.GG10062@phenom.dumpdata.com>
References: <20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121218211613.GA5697@phenom.dumpdata.com>
	<alpine.DEB.2.02.1212191116450.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1212191116450.17523@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 11:19:22AM +0000, Stefano Stabellini wrote:
> On Tue, 18 Dec 2012, Konrad Rzeszutek Wilk wrote:
> > On Thu, Dec 13, 2012 at 02:25:16PM +0000, Stefano Stabellini wrote:
> > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > > >>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > wrote:
> > > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > > >> > On Wed, 12 Dec 2012 17:15:23 -0800
> > > > >> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > > >> > 
> > > > >> >> On Tue, 11 Dec 2012 12:10:19 +0000
> > > > >> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > > >> >> 
> > > > >> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > > > >> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> > > > >> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > >> >> > 
> > > > >> >> > That's strange because AFAIK Linux is never editing the MSI-X
> > > > >> >> > entries directly: give a look at
> > > > >> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
> > > > >> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> > > > >> >> > touch the real MSI-X table.
> > > > >> >> 
> > > > >> >> So, this is what's happening. The side effect of :
> > > > >> >> 
> > > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> > > > >> >>                                 dev->msix_table.last) )
> > > > >> >>             WARN();
> > > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> > > > >> >>                                 dev->msix_pba.last) )
> > > > >> >>             WARN();
> > > > >> >> 
> > > > >> >> in msix_capability_init() in xen is that the dom0 EPT entries that
> > > > >> >> I've mapped are going from RW to read only. Then when dom0 accesses
> > > > >> >> it, I get EPT violation. In case of pure PV, the PTE entry to access
> > > > >> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
> > > > >> >> don't understand why? Looking into that now...
> > > > >> 
> > > > >> As far as I was able to tell back at the time when I implemented
> > > > >> this, existing code shouldn't have mappings for these tables in
> > > > >> place at the time these ranges get added here. But I noted in
> > > > >> the patch description that this is a potential issue (and may need
> > > > >> fixing if deemed severe enough - back then, apparently nobody
> > > > >> really cared, perhaps largely because passthrough to PV guests
> > > > >> isn't considered fully secure anyway).
> > > > >> 
> > > > >> Now - did that change? I.e. can you describe where the mappings
> > > > >> come from that cause the problem here?
> > > > > 
> > > > > The generic Linux MSI-X handling code does that, before calling the
> > > > > arch specific msi setup function, for which we have a xen version
> > > > > (xen_initdom_setup_msi_irqs):
> > > > > 
> > > > > pci_enable_msix -> msix_capability_init -> msix_map_region
> > > > 
> > > > Ah, okay, (of course?) I had looked only at the forward ported
> > > > version of this. Is all that fiddling with the mask bits really
> > > > being suppressed properly when running under Xen? Otherwise
> > > > pv-ops is quite broken in this regard at present... And if it is,
> > > > I don't see what the respective ioremap() is good for here in
> > > > the Xen case.
> > > 
> > > Actually I think that you might be right: just looking at the code it
> > > seems that the mask bits get written to the table once as part of the
> > > initialization process:
> > > 
> > > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > > 
> > > Unfortunately msix_program_entries is called few lines after
> > > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
> > > a pirq.
> > > However after that is done, all the masking/unmask is done via irq_mask
> > > that we handle properly masking/unmasking the corresponding event
> > > channels.
> > > 
> > > 
> > > Possible solutions off the top of my head:
> > 
> > There is also the potential to piggyback on Joerg's patches
> > that introduce a new x86_msi_ops: compose_msi_msg.
> > 
> > See here: https://lkml.org/lkml/2012/8/20/432
> > (I think there was also a more recent one posted at some point).
> 
> Given that dom0 should never write to the MSI-X table, introducing a new

How does this work with QEMU setting up MSI and MSI-X on behalf of
guests? Or is that actually handled by Xen hypervisor?

> msi_ops that replaces msix_program_entries (or at least the part of
> msix_program_entries that masks all the entries) is the only solution
> left.

so this one (__msix_mask_irq):

        mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
        if (flag)
                mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
        writel(mask_bits, desc->mask_base + offset);





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:42:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLmh-0008FG-AB; Wed, 19 Dec 2012 15:42:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlLmf-0008F5-EH
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:42:13 +0000
Received: from [85.158.139.83:12932] by server-12.bemta-5.messagelabs.com id
	93/42-02275-450E1D05; Wed, 19 Dec 2012 15:42:12 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355931729!30035781!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15059 invoked from network); 19 Dec 2012 15:42:10 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 15:42:10 -0000
Received: (qmail 28404 invoked from network); 19 Dec 2012 17:42:08 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 17:42:08 +0200
Message-ID: <50D1E08D.6020208@gmail.com>
Date: Wed, 19 Dec 2012 17:43:09 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355929247.14620.436.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234447,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY;
	NN_LEGIT_S_SQARE_BRACKETS], SGN: [Enabled], URL: [Enabled], URI DNSBL:
	[Disabled], SQMD: [Enabled, Hits: none, MD5:
	88c4997b2daa09ea7b1bc10371054f2d.fuzzy.fzrbl.org], RTDA: [Enabled,
	Hit: No, Details: v1.4.6; Id: 2m1g3t9.17epd43f4.ghh5], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44442
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> TBH until someone figures out or explains what overlap actually is I
> don't know if it even needs exporting or taking into account in
> userspace.

http://www.mjmwired.net/kernel/Documentation/mtrr.txt

"Creating overlapping MTRRs:

%echo "base=0xfb000000 size=0x1000000 type=write-combining" >/proc/mtrr
%echo "base=0xfb000000 size=0x1000 type=uncachable" >/proc/mtrr

And the results: cat /proc/mtrr
reg00: base=0x00000000 (   0MB), size=  64MB: write-back, count=1
reg01: base=0xfb000000 (4016MB), size=  16MB: write-combining, count=1
reg02: base=0xfb000000 (4016MB), size=   4kB: uncachable, count=1"

Overlapping MTRRs are MTRRs with the same base address, but with 
different memory types applied over ranges of different sizes.

At least to my understanding.

Thanks,
Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:46:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:46:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLqF-0008T0-5O; Wed, 19 Dec 2012 15:45:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlLqD-0008St-D7
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:45:53 +0000
Received: from [85.158.143.99:30044] by server-1.bemta-4.messagelabs.com id
	5B/17-28401-031E1D05; Wed, 19 Dec 2012 15:45:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355931950!17935130!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22333 invoked from network); 19 Dec 2012 15:45:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:45:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="257118"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 15:45:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 15:45:49 +0000
Message-ID: <1355931948.14620.442.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 15:45:48 +0000
In-Reply-To: <50D1D880.10701@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
From xen-devel-bounces@lists.xen.org Wed Dec 19 15:46:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:46:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLqF-0008T0-5O; Wed, 19 Dec 2012 15:45:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlLqD-0008St-D7
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:45:53 +0000
Received: from [85.158.143.99:30044] by server-1.bemta-4.messagelabs.com id
	5B/17-28401-031E1D05; Wed, 19 Dec 2012 15:45:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355931950!17935130!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22333 invoked from network); 19 Dec 2012 15:45:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:45:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="257118"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 15:45:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 15:45:49 +0000
Message-ID: <1355931948.14620.442.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 15:45:48 +0000
In-Reply-To: <50D1D880.10701@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
	<50D1AEBD.1090202@citrix.com>
	<1355919745.14620.363.camel@zakaz.uk.xensource.com>
	<50D1D880.10701@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 15:08 +0000, Mats Petersson wrote:
> On 19/12/12 12:22, Ian Campbell wrote:
> > On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:
> >
> >>>> +                  only likely to return EFAULT or some other "things are very
> >>>> +                  bad" error code, which the rest of the calling code won't
> >>>> +                  be able to fix up. So we just exit with the error we got.
> >>> I expect it is more important to accumulate the individual errors from
> >>> remap_pte_fn into err_ptr.
> >> Yes, but since that exits on error with EFAULT, the calling code won't
> >> "accept" the errors, and thus the whole house of cards fall apart at
> >> this point.
> >>
> >> There should probably be a task to fix this up properly, hence the
> >> comment. But right now, any error besides ENOENT is "unacceptable" by
> >> the callers of this code. If you want me to add this to the comment, I'm
> >> happy to. But as long as remap_pte_fn returns EFAULT on error, nothing
> >> will work after an error.
> > Are you sure? privcmd.c has some special casing for ENOENT but it looks
> > like it should just pass through other errors back to userspace.
> >
> > In any case surely this needs fixing?
> >
> > On the X86 side err_ptr is the result of the mmupdate hypercall which
> > can already be other than ENOENT, including EFAULT, ESRCH etc.
> Yes, but the ONLY error that is "acceptable" (as in, doesn't lead to the 
> calling code revoking the mapping and returning an error) is ENOENT.

Hrm, probably the right thing is for map_foreign_page to propagate the
result of XENMEM_add_to_physmap_range and for remap_pte_fn to store it
in err_ptr. That -EFAULT thing just looks wrong to me.

> 
> Or at least, that's how I believe it SHOULD be - since only ENOENT is
> a "success" error code, anything else pretty much means that the
> operation requested didn't work properly. If you are aware of any
> use-case where EFAULT, ESRCH or other error codes would still result in
> a valid, usable memory mapping, please let me know - I have a fair
> understanding of the xc_* code, and although I may not know every piece
> of that code, I'm fairly certain that is the expected behaviour.
> 
> --
> Mats
> >
> > Ian.
> >
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
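The pattern being proposed in the thread above - recording each page's hypercall result in err_ptr and letting the caller treat only ENOENT as tolerable, rather than aborting the whole remap with -EFAULT on the first failure - can be sketched in isolation. This is a minimal model under assumed behaviour, not the actual privcmd code: map_one_page and map_range are hypothetical stand-ins for the per-page XENMEM_add_to_physmap_range call and the remap loop.

```c
#include <errno.h>

/* Hypothetical stand-in for mapping one foreign page: returns the
 * per-page result (0 on success, negative errno on failure).
 * Page 2 models a paged-out frame (-ENOENT, tolerable by the caller);
 * page 4 models a dead target domain (-ESRCH, fatal for the caller). */
static int map_one_page(int pfn)
{
    if (pfn == 2)
        return -ENOENT;
    if (pfn == 4)
        return -ESRCH;
    return 0;
}

/* Accumulate every page's result in err[] instead of bailing out of the
 * whole range on the first failure; return how many pages failed with
 * something other than -ENOENT, so the caller can decide whether to
 * revoke the mapping. */
static int map_range(int first_pfn, int nr, int *err)
{
    int fatal = 0;

    for (int i = 0; i < nr; i++) {
        err[i] = map_one_page(first_pfn + i);
        if (err[i] && err[i] != -ENOENT)
            fatal++; /* keep going; caller inspects err[] afterwards */
    }
    return fatal;
}
```

With this shape the full per-page error array can be handed back to the caller, which keeps ENOENT as the only "recoverable" case while still reporting EFAULT, ESRCH, etc. individually, matching the x86 mmu-update behaviour described in the thread.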

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:47:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLs0-00007e-LR; Wed, 19 Dec 2012 15:47:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlLry-00007V-UH
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:47:43 +0000
Received: from [85.158.143.35:43856] by server-3.bemta-4.messagelabs.com id
	07/3C-18211-E91E1D05; Wed, 19 Dec 2012 15:47:42 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355932058!15749264!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31943 invoked from network); 19 Dec 2012 15:47:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:47:39 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1277192"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 15:47:37 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 10:47:37 -0500
Message-ID: <50D1E198.8090205@citrix.com>
Date: Wed, 19 Dec 2012 15:47:36 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
	<50D1AEBD.1090202@citrix.com>
	<1355919745.14620.363.camel@zakaz.uk.xensource.com>
	<50D1D880.10701@citrix.com>
	<1355931948.14620.442.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355931948.14620.442.camel@zakaz.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 15:45, Ian Campbell wrote:
> On Wed, 2012-12-19 at 15:08 +0000, Mats Petersson wrote:
>> On 19/12/12 12:22, Ian Campbell wrote:
>>> On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:
>>>
>>>>>> +                  only likely to return EFAULT or some other "things are very
>>>>>> +                  bad" error code, which the rest of the calling code won't
>>>>>> +                  be able to fix up. So we just exit with the error we got.
>>>>> I expect it is more important to accumulate the individual errors from
>>>>> remap_pte_fn into err_ptr.
>>>> Yes, but since that exits on error with EFAULT, the calling code won't
>>>> "accept" the errors, and thus the whole house of cards fall apart at
>>>> this point.
>>>>
>>>> There should probably be a task to fix this up properly, hence the
>>>> comment. But right now, any error besides ENOENT is "unacceptable" by
>>>> the callers of this code. If you want me to add this to the comment, I'm
>>>> happy to. But as long as remap_pte_fn returns EFAULT on error, nothing
>>>> will work after an error.
>>> Are you sure? privcmd.c has some special casing for ENOENT but it looks
>>> like it should just pass through other errors back to userspace.
>>>
>>> In any case surely this needs fixing?
>>>
>>> On the X86 side err_ptr is the result of the mmupdate hypercall which
>>> can already be other than ENOENT, including EFAULT, ESRCH etc.
>> Yes, but the ONLY error that is "acceptable" (as in, doesn't lead to the
>> calling code revoking the mapping and returning an error) is ENOENT.
> Hrm, probably the right thing is for map_foreign_page to propagate the
> result of XENMEM_add_to_physmap_range and for remap_pte_fn to store it
> in err_ptr. That -EFAULT thing just looks wrong to me.
Ok, so you want me to fix that up, I suppose? I mean, I just copied the 
behaviour that was already there - just massaged the code around a bit...

--
Mats
>
>> Or at least, that's how I believe it SHOULD be - since only ENOENT is
>> a "success" error code, anything else pretty much means that the
>> operation requested didn't work properly. If you are aware of any
>> use-case where EFAULT, ESRCH or other error codes would still result in
>> a valid, usable memory mapping, please let me know - I have a fair
>> understanding of the xc_* code, and although I may not know every piece
>> of that code, I'm fairly certain that is the expected behaviour.
>>
>> --
>> Mats
>>> Ian.
>>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:52:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLw6-0000KG-BA; Wed, 19 Dec 2012 15:51:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlLw4-0000K9-T0
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 15:51:57 +0000
Received: from [85.158.143.35:11373] by server-1.bemta-4.messagelabs.com id
	AB/2F-28401-C92E1D05; Wed, 19 Dec 2012 15:51:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1355932291!10190927!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10562 invoked from network); 19 Dec 2012 15:51:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:51:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="257458"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 15:51:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 15:51:31 +0000
Message-ID: <1355932290.14620.446.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Wed, 19 Dec 2012 15:51:30 +0000
In-Reply-To: <50D1E198.8090205@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
	<50D1AEBD.1090202@citrix.com>
	<1355919745.14620.363.camel@zakaz.uk.xensource.com>
	<50D1D880.10701@citrix.com>
	<1355931948.14620.442.camel@zakaz.uk.xensource.com>
	<50D1E198.8090205@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 15:47 +0000, Mats Petersson wrote:
> On 19/12/12 15:45, Ian Campbell wrote:
> > On Wed, 2012-12-19 at 15:08 +0000, Mats Petersson wrote:
> >> On 19/12/12 12:22, Ian Campbell wrote:
> >>> On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:
> >>>
> >>>>>> +                  only likely to return EFAULT or some other "things are very
> >>>>>> +                  bad" error code, which the rest of the calling code won't
> >>>>>> +                  be able to fix up. So we just exit with the error we got.
> >>>>> I expect it is more important to accumulate the individual errors from
> >>>>> remap_pte_fn into err_ptr.
> >>>> Yes, but since that exits on error with EFAULT, the calling code won't
> >>>> "accept" the errors, and thus the whole house of cards fall apart at
> >>>> this point.
> >>>>
> >>>> There should probably be a task to fix this up properly, hence the
> >>>> comment. But right now, any error besides ENOENT is "unacceptable" by
> >>>> the callers of this code. If you want me to add this to the comment, I'm
> >>>> happy to. But as long as remap_pte_fn returns EFAULT on error, nothing
> >>>> will work after an error.
> >>> Are you sure? privcmd.c has some special casing for ENOENT but it looks
> >>> like it should just pass through other errors back to userspace.
> >>>
> >>> In any case surely this needs fixing?
> >>>
> >>> On the X86 side err_ptr is the result of the mmupdate hypercall which
> >>> can already be other than ENOENT, including EFAULT, ESRCH etc.
> >> Yes, but the ONLY error that is "acceptable" (as in, doesn't lead to the
> >> calling code revoking the mapping and returning an error) is ENOENT.
> > Hrm, probably the right thing is for map_foreign_page to propagate the
> > result of XENMEM_add_to_physmap_range and for remap_pte_fn to store it
> > in err_ptr. That -EFAULT thing just looks wrong to me.
> Ok, so you want me to fix that up, I suppose? I mean, I just copied the 
> behaviour that was already there - just massaged the code around a bit...

Yes please, it didn't really matter before but I think it matters after
your changes.

> 
> --
> Mats
> >
> >> Or at least, that's how I believe it SHOULD be - since only ENOENT is
> >> a "success" error code, anything else pretty much means that the
> >> operation requested didn't work properly. If you are aware of any
> >> use-case where EFAULT, ESRCH or other error codes would still result in
> >> a valid, usable memory mapping, please let me know - I have a fair
> >> understanding of the xc_* code, and although I may not know every piece
> >> of that code, I'm fairly certain that is the expected behaviour.
> >>
> >> --
> >> Mats
> >>> Ian.
> >>>
> >
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 15:54:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 15:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlLyB-0000RJ-Sw; Wed, 19 Dec 2012 15:54:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlLyA-0000R7-Qv
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 15:54:07 +0000
Received: from [193.109.254.147:36464] by server-13.bemta-14.messagelabs.com
	id E4/8F-01725-E13E1D05; Wed, 19 Dec 2012 15:54:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355932445!11042049!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15851 invoked from network); 19 Dec 2012 15:54:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:54:05 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="257569"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 15:54:05 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 15:54:05 +0000
Message-ID: <1355932443.14620.448.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Date: Wed, 19 Dec 2012 15:54:03 +0000
In-Reply-To: <50D1DCC5.5050309@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Razvan Cojocaru <rzvncj@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 15:27 +0000, Andrew Cooper wrote:
> On 19/12/2012 15:00, Ian Campbell wrote:
> > On Wed, 2012-12-19 at 14:57 +0000, Razvan Cojocaru wrote:
> >>>>      m->overlapped = is_var_mtrr_overlapped(m);
> >>>>
> >>>> Looks like that function contains the necessary logic.
> >>> You're right, but the problem is that that function depends on
> >>> the get_mtrr_range() function, which in turn depends on the size_or_mask
> >>> global variable, which is initialized in hvm_mtrr_pat_init(), which then
> >>> depends on a global table, and so on. Putting that into libxc is pretty
> >>> much putting the whole mtrr.c file there.
> >> This is where it gets tricky:
> >>
> >> static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
> >>                             uint64_t *base, uint64_t *end)
> >> {
> >>      [...]
> >>      phys_addr = 36;
> >>
> >>      if ( cpuid_eax(0x80000000) >= 0x80000008 )
> >>          phys_addr = (uint8_t)cpuid_eax(0x80000008);
> >>
> >>      size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
> >>      [...]
> >> }
> >>
> >> specifically, in the cpuid_eax() call, which doesn't make much sense in 
> >> dom0 userspace.
> > The fact that get_mtrr_range is querying the underlying physical CPUID
> > suggests it has something to do with the translation from virtual to
> > physical MTRR and is therefore not something userspace needs to worry
> > about, but I'm only speculating.
> 
> CPUID 0x80000008.EAX is the physical address size supported by the
> processor (in bits).  Typical values on modern hardware are 40 or 48.

I know what the bit is. This code seems to be leaking physical CPU
parameters into the virtual CPU state, and the question is whether
userspace needs to care about that. I suspect the answer is no.

What should matter for the guest state is the virtualised CPUID
0x80000008.EAX which, at least in theory, could be different (e.g. a
migrated guest?). 

Ian.
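For reference, the size_or_mask arithmetic quoted earlier in the thread can be checked standalone. A hedged sketch: the cpuid_eax() probe is replaced by constant address widths (since, per the discussion, userspace should not be querying the physical CPU), PAGE_SHIFT is the usual x86 value, and the shift is done in 64 bits - the quoted snippet shifts a plain int, which would overflow for widths of 44 bits or more.

```c
/* Hedged sketch of the size_or_mask computation from get_mtrr_range,
 * with cpuid_eax() replaced by constant sample widths. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* phys_addr is CPUID 0x80000008.EAX[7:0]: the physical address width
 * in bits (36 predates the leaf; 40 or 48 on modern hardware). */
static uint64_t size_or_mask(unsigned int phys_addr)
{
    /* 64-bit one so the shift is safe for any realistic width. */
    return ~((UINT64_C(1) << (phys_addr - PAGE_SHIFT)) - 1);
}
```

The result masks off the page-frame bits below the machine's physical address limit, which is exactly the kind of host-specific detail the virtualised CPUID value would hide from the guest.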


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:01:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlM5F-0001Ci-QL; Wed, 19 Dec 2012 16:01:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlM5E-0001Cd-4W
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 16:01:24 +0000
Received: from [85.158.143.35:47338] by server-2.bemta-4.messagelabs.com id
	49/73-30861-3D4E1D05; Wed, 19 Dec 2012 16:01:23 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355932767!15751043!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9791 invoked from network); 19 Dec 2012 15:59:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 15:59:32 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1210065"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 15:59:26 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 10:59:25 -0500
Message-ID: <50D1E45C.3090108@citrix.com>
Date: Wed, 19 Dec 2012 15:59:24 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50CB5B32.50406@citrix.com>
	<1355763448.14620.111.camel@zakaz.uk.xensource.com>
	<50CF587B.5090602@citrix.com>
	<1355829451.14620.188.camel@zakaz.uk.xensource.com>
	<50D05358.30303@citrix.com>
	<1355830856.14620.206.camel@zakaz.uk.xensource.com>
	<50D074F5.6060202@citrix.com>
	<20121218160704.GA3543@phenom.dumpdata.com>
	<50D0C528.4090602@citrix.com>
	<1355914793.14620.319.camel@zakaz.uk.xensource.com>
	<50D1AEBD.1090202@citrix.com>
	<1355919745.14620.363.camel@zakaz.uk.xensource.com>
	<50D1D880.10701@citrix.com>
	<1355931948.14620.442.camel@zakaz.uk.xensource.com>
	<50D1E198.8090205@citrix.com>
	<1355932290.14620.446.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355932290.14620.446.camel@zakaz.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ARM fixes for my improved privcmd patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 15:51, Ian Campbell wrote:
> On Wed, 2012-12-19 at 15:47 +0000, Mats Petersson wrote:
>> On 19/12/12 15:45, Ian Campbell wrote:
>>> On Wed, 2012-12-19 at 15:08 +0000, Mats Petersson wrote:
>>>> On 19/12/12 12:22, Ian Campbell wrote:
>>>>> On Wed, 2012-12-19 at 12:10 +0000, Mats Petersson wrote:
>>>>>
>>>>>>>> +                  only likely to return EFAULT or some other "things are very
>>>>>>>> +                  bad" error code, which the rest of the calling code won't
>>>>>>>> +                  be able to fix up. So we just exit with the error we got.
>>>>>>> I expect it is more important to accumulate the individual errors from
>>>>>>> remap_pte_fn into err_ptr.
>>>>>> Yes, but since that exits on error with EFAULT, the calling code won't
>>>>>> "accept" the errors, and thus the whole house of cards fall apart at
>>>>>> this point.
>>>>>>
>>>>>> There should probably be a task to fix this up properly, hence the
>>>>>> comment. But right now, any error besides ENOENT is "unacceptable" by
>>>>>> the callers of this code. If you want me to add this to the comment, I'm
>>>>>> happy to. But as long as remap_pte_fn returns EFAULT on error, nothing
>>>>>> will work after an error.
>>>>> Are you sure? privcmd.c has some special casing for ENOENT but it looks
>>>>> like it should just pass through other errors back to userspace.
>>>>>
>>>>> In any case surely this needs fixing?
>>>>>
>>>>> On the X86 side err_ptr is the result of the mmupdate hypercall which
>>>>> can already be other than ENOENT, including EFAULT, ESRCH etc.
>>>> Yes, but the ONLY error that is "acceptable" (as in, doesn't lead to the
>>>> calling code revoking the mapping and returning an error) is ENOENT.
>>> Hm, probably the right thing is for map_foreign_page to propagate the
>>> result of XENMEM_add_to_physmap_range and for remap_pte_fn to store it
>>> in err_ptr. That -EFAULT thing just looks wrong to me.
>> Ok, so you want me to fix that up, I suppose? I mean, I just copied the
>> behaviour that was already there - just massaged the code around a bit...
> Yes please, it didn't really matter before but I think it matters after
> your changes.
Ok, I will try to fix it. But since I can't test it, it will still be
"does it compile" testing... {It would be nice to understand what has
changed - as far as I can see, the old code was just as broken as the
new code}

--
Mats
>
>> --
>> Mats
>>>> Or at least, that's how I believe it SHOULD be - since only
>>>> ENOENT is a "success" error code, anything else pretty much means that
>>>> the operation requested didn't work properly. If you are aware of any
>>>> use-case where EFAULT, ESRCH or other error codes would still result in
>>>> a valid, usable memory mapping, please let me know. I have a fair
>>>> understanding of the xc_* code, and although I may not know every piece
>>>> of it, I'm fairly certain that is the expected behaviour.
>>>>
>>>> --
>>>> Mats
>>>>> Ian.
>>>>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:04:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlM88-0001Kl-EB; Wed, 19 Dec 2012 16:04:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlM86-0001Kd-B5
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:04:22 +0000
Received: from [85.158.139.83:12573] by server-6.bemta-5.messagelabs.com id
	C6/27-30498-585E1D05; Wed, 19 Dec 2012 16:04:21 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355933041!30039804!1
X-Originating-IP: [209.85.210.170]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16399 invoked from network); 19 Dec 2012 16:04:03 -0000
Received: from mail-ia0-f170.google.com (HELO mail-ia0-f170.google.com)
	(209.85.210.170)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:04:03 -0000
Received: by mail-ia0-f170.google.com with SMTP id i1so1884862iaa.15
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 08:04:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=zRXcgmDUD//7XZ2qKIuJYSzHV7vjbZLphUlAqP5oZ4M=;
	b=pXNogEePPJ2WrNEMa5WHLiOG+dtT0Dx8oa3n9Y22Yjx5GD8FXFzw2RpMZKNkUkap6f
	08jI/ThIm2y5r1fiDj/fxwav1Fk+9H1RD3pZ1d3kVUZ7BmDhdJZTXLbRJRIRMlhYGwA5
	1dreUkfOr6VPmTo4DurmnpfScn7EK6BQSbhHxnLTf2jEc92cKfz1OLq61d/CNN3LnheK
	K+BPcHa1MbhgHSs/MmPGRK7/XbHWpaG//N9oatbgGarcoAwSSXQe6kQhOPYOzs3JuYC/
	KxlaviPw5SJc7LjNTEoph5ZN+XTsjGK3hcECzIX6aoqkGgNAsXEM2NpDYj0YFOr+9CxV
	zH8w==
MIME-Version: 1.0
Received: by 10.50.214.38 with SMTP id nx6mr7162475igc.28.1355933041403; Wed,
	19 Dec 2012 08:04:01 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Wed, 19 Dec 2012 08:04:01 -0800 (PST)
In-Reply-To: <CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
Date: Thu, 20 Dec 2012 00:04:01 +0800
X-Google-Sender-Auth: PobWm4rWvsNxoCDEAcUO2SOHNwI
Message-ID: <CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 2:20 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
> Adding Jean, the author to the opregion patch.
>
> Jean, I believe the warning is due to the offset within the page.
> To accommodate the offset, you would need to reserve another page for it.
> Will the extra page cause any unexpected problem?
>
> The original thread is about an instability issue that directly freezes the host.
> I believe the warning above should not have such an effect.
> What do you think? Any suggestions?
>

Jean appears to be no longer reachable.
The warning I found turns out not to be relevant.
According to the OpRegion spec, the tail part is reserved and should
never be touched by the guest.
Anyway, I have a local fix to get rid of the warning, by reserving
one more page and mapping it when the host opregion is not page-aligned.
I'll send it to a separate thread.
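The alignment issue comes down to a little page arithmetic. A hedged sketch, not the actual patch: OPREGION_SIZE and opregion_pages() are assumed names. With the 8 KiB OpRegion at a page-aligned address two pages suffice, but an in-page offset (as in the 0xfeff5018 range from the warning below) pushes the tail across a third page boundary - hence the extra reserved page.

```c
/* Hedged sketch: pages needed to map an 8 KiB OpRegion whose host
 * physical address may not be page-aligned.  Names are illustrative. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE      4096u
#define OPREGION_SIZE  (8u * 1024u)   /* 8 KiB OpRegion */

/* Number of pages needed to cover OPREGION_SIZE bytes at 'addr',
 * accounting for the offset of 'addr' within its first page. */
static unsigned int opregion_pages(uint64_t addr)
{
    uint64_t offset = addr & (PAGE_SIZE - 1);
    return (unsigned int)((offset + OPREGION_SIZE + PAGE_SIZE - 1)
                          / PAGE_SIZE);
}
```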

Back to the topic. I updated to Xen 4.2.1 and tried three times tonight.
Two of the runs led to a total freeze with no error log available, after
playing a game for a couple of minutes.
The last try ended up with a GPU hang after 10+ minutes of game playing.
This is a guest-only hang. But I still have no way to check the GPU error
state even though it has been collected:

[ 1553.588076] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
[ 1553.592112] [drm] capturing error event; look for more information
in /debug/dri/0/i915_error_state
[ 1582.004075] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
[ 1597.220075] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
[ 1613.220074] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung

I'm wondering if the two symptoms are due to the same underlying cause.
But I guess a GPU hang caused by a guest driver issue should not freeze
the host. Is that true?

I'm going to try more with different configs -- different kernel
versions, with / without PVOPS, native run vs VM, etc.
But this is kind of blind since I have no clue at all. If you have
anything to suspect, it would be highly appreciated.

Thanks,
Timothy

> Thanks,
> Timothy
>
> On Wed, Dec 19, 2012 at 1:28 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>> Hi Stefano,
>>
>> I recently tried to play some 3D games on my linux guest.
>> The game starts without problem, but it freezes the entire system
>> after some time (a minute or so?).
>> By that I mean both the host and domU are not responsive anymore.
>> SSH freezes and I had to shut down the machine using the power button
>> directly.
>>
>> I did not find anything obvious from the host log. But from the guest,
>> I can find this:
>>
>> Dec 18 20:28:38 debvm kernel: [    0.899860] resource map sanity check
>> conflict: 0xfeff5018 0xfeff7017 0xfeff7000 0xffffffff reserved
>> Dec 18 20:28:38 debvm kernel: [    0.899862] ------------[ cut here
>> ]------------
>> Dec 18 20:28:38 debvm kernel: [    0.899869] WARNING: at
>> arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2c4/0x33c()
>> Dec 18 20:28:38 debvm kernel: [    0.899870] Hardware name: HVM domU
>> Dec 18 20:28:38 debvm kernel: [    0.899872] Info: mapping multiple
>> BARs. Your kernel is fine.
>> Dec 18 20:28:38 debvm kernel: [    0.899873] Modules linked in:
>> Dec 18 20:28:38 debvm kernel: [    0.899878] Pid: 1, comm: swapper/0
>> Not tainted 3.6.9 #4
>> Dec 18 20:28:38 debvm kernel: [    0.899892] Call Trace:
>> Dec 18 20:28:38 debvm kernel: [    0.899896]  [<ffffffff8103d194>] ?
>> warn_slowpath_common+0x76/0x8a
>> Dec 18 20:28:38 debvm kernel: [    0.899898]  [<ffffffff8103d240>] ?
>> warn_slowpath_fmt+0x45/0x4a
>> Dec 18 20:28:38 debvm kernel: [    0.899900]  [<ffffffff81032a6c>] ?
>> __ioremap_caller+0x2c4/0x33c
>> Dec 18 20:28:38 debvm kernel: [    0.899902]  [<ffffffff812c3be3>] ?
>> intel_opregion_setup+0x9c/0x201
>> Dec 18 20:28:38 debvm kernel: [    0.899904]  [<ffffffff812bcb75>] ?
>> intel_setup_gmbus+0x175/0x19d
>> Dec 18 20:28:38 debvm kernel: [    0.899907]  [<ffffffff8128a37a>] ?
>> i915_driver_load+0x548/0x90d
>> Dec 18 20:28:38 debvm kernel: [    0.899910]  [<ffffffff812ff804>] ?
>> setup_hpet_msi_remapped+0x20/0x20
>> Dec 18 20:28:38 debvm kernel: [    0.899912]  [<ffffffff81272706>] ?
>> drm_get_pci_dev+0x152/0x259
>> Dec 18 20:28:38 debvm kernel: [    0.899915]  [<ffffffff813d4883>] ?
>> _raw_spin_lock_irqsave+0x21/0x45
>> Dec 18 20:28:38 debvm kernel: [    0.899918]  [<ffffffff811d9ecc>] ?
>> local_pci_probe+0x5a/0xa0
>> Dec 18 20:28:38 debvm kernel: [    0.899920]  [<ffffffff811d9fcf>] ?
>> pci_device_probe+0xbd/0xe7
>> Dec 18 20:28:38 debvm kernel: [    0.899922]  [<ffffffff812cd887>] ?
>> driver_probe_device+0x1b0/0x1b0
>> Dec 18 20:28:38 debvm kernel: [    0.899923]  [<ffffffff812cd887>] ?
>> driver_probe_device+0x1b0/0x1b0
>> Dec 18 20:28:38 debvm kernel: [    0.899925]  [<ffffffff812cd769>] ?
>> driver_probe_device+0x92/0x1b0
>> Dec 18 20:28:38 debvm kernel: [    0.899926]  [<ffffffff812cd8da>] ?
>> __driver_attach+0x53/0x73
>> Dec 18 20:28:38 debvm kernel: [    0.899928]  [<ffffffff812cc06f>] ?
>> bus_for_each_dev+0x46/0x77
>> Dec 18 20:28:38 debvm kernel: [    0.899930]  [<ffffffff812ccf8f>] ?
>> bus_add_driver+0xd5/0x1f4
>> Dec 18 20:28:38 debvm kernel: [    0.899931]  [<ffffffff812cde14>] ?
>> driver_register+0x89/0x101
>> Dec 18 20:28:38 debvm kernel: [    0.899933]  [<ffffffff811d9336>] ?
>> __pci_register_driver+0x49/0xa3
>> Dec 18 20:28:38 debvm kernel: [    0.899935]  [<ffffffff816d55c7>] ?
>> ttm_init+0x63/0x63
>> Dec 18 20:28:38 debvm kernel: [    0.899937]  [<ffffffff81002085>] ?
>> do_one_initcall+0x75/0x12c
>> Dec 18 20:28:38 debvm kernel: [    0.899940]  [<ffffffff816a6cc2>] ?
>> kernel_init+0x13c/0x1c0
>> Dec 18 20:28:38 debvm kernel: [    0.899941]  [<ffffffff816a6565>] ?
>> do_early_param+0x83/0x83
>> Dec 18 20:28:38 debvm kernel: [    0.899943]  [<ffffffff813d9f44>] ?
>> kernel_thread_helper+0x4/0x10
>> Dec 18 20:28:38 debvm kernel: [    0.899945]  [<ffffffff816a6b86>] ?
>> start_kernel+0x3e1/0x3e1
>> Dec 18 20:28:38 debvm kernel: [    0.899947]  [<ffffffff813d9f40>] ?
>> gs_change+0x13/0x13
>> Dec 18 20:28:38 debvm kernel: [    0.899950] ---[ end trace
>> db461543ce599b44 ]---
>>
>> I'm not sure if this has anything to do with the freeze. This seems to
>> show up on every boot after I upgraded to xen version 4.2.1-rc2. Both
>> debian kernel 3.2.32 / 3.6.9 suffers from the same log. But whole
>> system freeze happens only during gaming, which is much less frequent.
>> So I'm not sure if the two are related. But anyway, could you comment
>> about what does this log mean?
>>
>> I can find the one of the mentioned address in the qemu_dm log:
>> pt_pci_write_config: [00:02:0] address=00fc val=0xfeff5000 len=4
>> igd_write_opregion: Map OpRegion: cd996018 -> feff5018
>> igd_write_opregion: [00:02:0] addr=fc len=2 val=feff5000
>>
>> PS: I also run xbmc on domU and it playbacks video under HW
>> acceleration (VAAPI) without any problem. XBMC by itself is also an
>> graphics intensive program. But this runs on an pure HVM guest, while
>> the failing case is on PVHVM.
>>
>> PS2: I also suffered another instability yesterday. It happens when I
>> was compiling kernel in side the domU. The host reboots suddenly.
>> Since I'm not using graphics at that time (Xorg session is idle, I
>> connected through SSH), this may be a different issue.
>>
>> Thanks,
>> Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:04:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlM88-0001Kl-EB; Wed, 19 Dec 2012 16:04:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlM86-0001Kd-B5
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:04:22 +0000
Received: from [85.158.139.83:12573] by server-6.bemta-5.messagelabs.com id
	C6/27-30498-585E1D05; Wed, 19 Dec 2012 16:04:21 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1355933041!30039804!1
X-Originating-IP: [209.85.210.170]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16399 invoked from network); 19 Dec 2012 16:04:03 -0000
Received: from mail-ia0-f170.google.com (HELO mail-ia0-f170.google.com)
	(209.85.210.170)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:04:03 -0000
Received: by mail-ia0-f170.google.com with SMTP id i1so1884862iaa.15
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 08:04:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=zRXcgmDUD//7XZ2qKIuJYSzHV7vjbZLphUlAqP5oZ4M=;
	b=pXNogEePPJ2WrNEMa5WHLiOG+dtT0Dx8oa3n9Y22Yjx5GD8FXFzw2RpMZKNkUkap6f
	08jI/ThIm2y5r1fiDj/fxwav1Fk+9H1RD3pZ1d3kVUZ7BmDhdJZTXLbRJRIRMlhYGwA5
	1dreUkfOr6VPmTo4DurmnpfScn7EK6BQSbhHxnLTf2jEc92cKfz1OLq61d/CNN3LnheK
	K+BPcHa1MbhgHSs/MmPGRK7/XbHWpaG//N9oatbgGarcoAwSSXQe6kQhOPYOzs3JuYC/
	KxlaviPw5SJc7LjNTEoph5ZN+XTsjGK3hcECzIX6aoqkGgNAsXEM2NpDYj0YFOr+9CxV
	zH8w==
MIME-Version: 1.0
Received: by 10.50.214.38 with SMTP id nx6mr7162475igc.28.1355933041403; Wed,
	19 Dec 2012 08:04:01 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Wed, 19 Dec 2012 08:04:01 -0800 (PST)
In-Reply-To: <CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
Date: Thu, 20 Dec 2012 00:04:01 +0800
X-Google-Sender-Auth: PobWm4rWvsNxoCDEAcUO2SOHNwI
Message-ID: <CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 2:20 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
> Adding Jean, the author of the opregion patch.
>
> Jean, I believe the warning is due to the offset within the page.
> To accommodate the offset, you would need to reserve another page for it.
> Will the extra page cause any unexpected problem?
>
> The original thread is about an instability issue that directly freezes the host.
> I believe the warning above should not have such an effect.
> What do you think? Any suggestions?
>

Jean appears to be unreachable now.
The warning I found turns out to be irrelevant: according to the
OpRegion spec, the tail part is reserved and should never be touched
by the guest.
Anyway, I have a local fix that gets rid of the warning by reserving
one more page and mapping it when the host OpRegion is not page-aligned.
I'll send it in a separate thread.
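The fix described above comes down to page-count arithmetic. A minimal
sketch (this is not the actual patch; `opregion_pages` is a hypothetical
helper, and the 8 KiB OpRegion size is taken from the spec):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Number of whole pages a mapping must cover so that a region of
 * `size` bytes starting at `host_addr` fits, even when host_addr is
 * not page-aligned.  A page-aligned 8 KiB OpRegion needs 2 pages; with
 * an in-page offset such as the 0x18 seen in the qemu-dm log above it
 * spills into a third page -- hence "reserving one more page". */
unsigned int opregion_pages(uint64_t host_addr, uint32_t size)
{
    uint32_t offset = (uint32_t)(host_addr & (PAGE_SIZE - 1));
    return (offset + size + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

For the host address cd996018 from the qemu-dm log, the 0x18 offset
pushes an 8 KiB region across three pages instead of two.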

Back to the topic. I updated to Xen 4.2.1 and tried three times tonight.
Two of the runs led to a total freeze, with no error log available,
after a couple of minutes of gameplay.
The last try ended up with a GPU hang after 10+ minutes of gameplay.
That one hung only the guest, but I still have no way to check the GPU
error state even though it has been collected:

[ 1553.588076] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
[ 1553.592112] [drm] capturing error event; look for more information
in /debug/dri/0/i915_error_state
[ 1582.004075] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
[ 1597.220075] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
[ 1613.220074] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
elapsed... GPU hung
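The log above says where the captured error state lives. A small shell
helper to locate that file, usable once the guest is responsive again,
might look like this (a sketch; `find_i915_error_state` is a
hypothetical name, and the /debug mount point is inferred from the log
-- most systems mount debugfs at /sys/kernel/debug):

```shell
#!/bin/sh
# Locate the i915 error state that the driver captured at hang time.
# The log's "/debug/dri/0/i915_error_state" suggests debugfs is mounted
# at /debug on this guest; /sys/kernel/debug is the common default.
find_i915_error_state() {
    for d in "$@"; do
        if [ -r "$d/dri/0/i915_error_state" ]; then
            printf '%s\n' "$d/dri/0/i915_error_state"
            return 0
        fi
    done
    return 1
}

# On a live guest one would then dump it with:
#   cat "$(find_i915_error_state /debug /sys/kernel/debug)"
```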

I'm wondering if the two symptoms are due to the same underlying cause.
But I would guess that a GPU hang caused by a guest driver issue should
not freeze the host. Is that true?

I'm going to experiment with different configurations -- different
kernel versions, with/without PVOPS, native run vs. VM, etc.
But this is rather blind work since I have no clue at all. If you have
any suspects, pointers would be highly appreciated.

Thanks,
Timothy

> Thanks,
> Timothy
>
> On Wed, Dec 19, 2012 at 1:28 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>> Hi Stefano,
>>
>> I recently tried to play some 3D games on my Linux guest.
>> The game starts without problems, but it freezes the entire system
>> after some time (a minute or so?).
>> By that I mean both the host and the domU become unresponsive.
>> SSH freezes, and I had to shut the machine down with the power button.
>>
>> I did not find anything obvious in the host log. But in the guest,
>> I can find this:
>>
>> Dec 18 20:28:38 debvm kernel: [    0.899860] resource map sanity check
>> conflict: 0xfeff5018 0xfeff7017 0xfeff7000 0xffffffff reserved
>> Dec 18 20:28:38 debvm kernel: [    0.899862] ------------[ cut here
>> ]------------
>> Dec 18 20:28:38 debvm kernel: [    0.899869] WARNING: at
>> arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2c4/0x33c()
>> Dec 18 20:28:38 debvm kernel: [    0.899870] Hardware name: HVM domU
>> Dec 18 20:28:38 debvm kernel: [    0.899872] Info: mapping multiple
>> BARs. Your kernel is fine.
>> Dec 18 20:28:38 debvm kernel: [    0.899873] Modules linked in:
>> Dec 18 20:28:38 debvm kernel: [    0.899878] Pid: 1, comm: swapper/0
>> Not tainted 3.6.9 #4
>> Dec 18 20:28:38 debvm kernel: [    0.899892] Call Trace:
>> Dec 18 20:28:38 debvm kernel: [    0.899896]  [<ffffffff8103d194>] ?
>> warn_slowpath_common+0x76/0x8a
>> Dec 18 20:28:38 debvm kernel: [    0.899898]  [<ffffffff8103d240>] ?
>> warn_slowpath_fmt+0x45/0x4a
>> Dec 18 20:28:38 debvm kernel: [    0.899900]  [<ffffffff81032a6c>] ?
>> __ioremap_caller+0x2c4/0x33c
>> Dec 18 20:28:38 debvm kernel: [    0.899902]  [<ffffffff812c3be3>] ?
>> intel_opregion_setup+0x9c/0x201
>> Dec 18 20:28:38 debvm kernel: [    0.899904]  [<ffffffff812bcb75>] ?
>> intel_setup_gmbus+0x175/0x19d
>> Dec 18 20:28:38 debvm kernel: [    0.899907]  [<ffffffff8128a37a>] ?
>> i915_driver_load+0x548/0x90d
>> Dec 18 20:28:38 debvm kernel: [    0.899910]  [<ffffffff812ff804>] ?
>> setup_hpet_msi_remapped+0x20/0x20
>> Dec 18 20:28:38 debvm kernel: [    0.899912]  [<ffffffff81272706>] ?
>> drm_get_pci_dev+0x152/0x259
>> Dec 18 20:28:38 debvm kernel: [    0.899915]  [<ffffffff813d4883>] ?
>> _raw_spin_lock_irqsave+0x21/0x45
>> Dec 18 20:28:38 debvm kernel: [    0.899918]  [<ffffffff811d9ecc>] ?
>> local_pci_probe+0x5a/0xa0
>> Dec 18 20:28:38 debvm kernel: [    0.899920]  [<ffffffff811d9fcf>] ?
>> pci_device_probe+0xbd/0xe7
>> Dec 18 20:28:38 debvm kernel: [    0.899922]  [<ffffffff812cd887>] ?
>> driver_probe_device+0x1b0/0x1b0
>> Dec 18 20:28:38 debvm kernel: [    0.899923]  [<ffffffff812cd887>] ?
>> driver_probe_device+0x1b0/0x1b0
>> Dec 18 20:28:38 debvm kernel: [    0.899925]  [<ffffffff812cd769>] ?
>> driver_probe_device+0x92/0x1b0
>> Dec 18 20:28:38 debvm kernel: [    0.899926]  [<ffffffff812cd8da>] ?
>> __driver_attach+0x53/0x73
>> Dec 18 20:28:38 debvm kernel: [    0.899928]  [<ffffffff812cc06f>] ?
>> bus_for_each_dev+0x46/0x77
>> Dec 18 20:28:38 debvm kernel: [    0.899930]  [<ffffffff812ccf8f>] ?
>> bus_add_driver+0xd5/0x1f4
>> Dec 18 20:28:38 debvm kernel: [    0.899931]  [<ffffffff812cde14>] ?
>> driver_register+0x89/0x101
>> Dec 18 20:28:38 debvm kernel: [    0.899933]  [<ffffffff811d9336>] ?
>> __pci_register_driver+0x49/0xa3
>> Dec 18 20:28:38 debvm kernel: [    0.899935]  [<ffffffff816d55c7>] ?
>> ttm_init+0x63/0x63
>> Dec 18 20:28:38 debvm kernel: [    0.899937]  [<ffffffff81002085>] ?
>> do_one_initcall+0x75/0x12c
>> Dec 18 20:28:38 debvm kernel: [    0.899940]  [<ffffffff816a6cc2>] ?
>> kernel_init+0x13c/0x1c0
>> Dec 18 20:28:38 debvm kernel: [    0.899941]  [<ffffffff816a6565>] ?
>> do_early_param+0x83/0x83
>> Dec 18 20:28:38 debvm kernel: [    0.899943]  [<ffffffff813d9f44>] ?
>> kernel_thread_helper+0x4/0x10
>> Dec 18 20:28:38 debvm kernel: [    0.899945]  [<ffffffff816a6b86>] ?
>> start_kernel+0x3e1/0x3e1
>> Dec 18 20:28:38 debvm kernel: [    0.899947]  [<ffffffff813d9f40>] ?
>> gs_change+0x13/0x13
>> Dec 18 20:28:38 debvm kernel: [    0.899950] ---[ end trace
>> db461543ce599b44 ]---
>>
>> I'm not sure if this has anything to do with the freeze. It seems to
>> show up on every boot since I upgraded to Xen 4.2.1-rc2. Both Debian
>> kernels, 3.2.32 and 3.6.9, produce the same log. But the whole-system
>> freeze happens only during gaming, which is much less frequent, so
>> I'm not sure the two are related. Anyway, could you comment on what
>> this log means?
>>
>> I can find one of the mentioned addresses in the qemu-dm log:
>> pt_pci_write_config: [00:02:0] address=00fc val=0xfeff5000 len=4
>> igd_write_opregion: Map OpRegion: cd996018 -> feff5018
>> igd_write_opregion: [00:02:0] addr=fc len=2 val=feff5000
>>
>> PS: I also run XBMC on a domU, and it plays back video with HW
>> acceleration (VAAPI) without any problem. XBMC is itself a
>> graphics-intensive program, but it runs on a pure HVM guest, while
>> the failing case is on PVHVM.
>>
>> PS2: I also hit another instability yesterday, while I was compiling
>> a kernel inside the domU: the host rebooted suddenly. Since I was
>> not using graphics at the time (the Xorg session was idle; I was
>> connected through SSH), this may be a different issue.
>>
>> Thanks,
>> Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:05:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlM8w-0001Pz-2V; Wed, 19 Dec 2012 16:05:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlM8u-0001Pl-PW
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:05:13 +0000
Received: from [85.158.137.99:35844] by server-9.bemta-3.messagelabs.com id
	BB/D9-11948-3B5E1D05; Wed, 19 Dec 2012 16:05:07 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-217.messagelabs.com!1355933101!20074158!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10836 invoked from network); 19 Dec 2012 16:05:07 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 16:05:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJG4vUW007937
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 16:05:00 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJG4uom000734
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 16:04:57 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJG4umG006598; Wed, 19 Dec 2012 10:04:56 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 08:04:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 779941BF762; Wed, 19 Dec 2012 11:04:55 -0500 (EST)
Date: Wed, 19 Dec 2012 11:04:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Message-ID: <20121219160455.GA12077@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355411537.8376.52.camel@iceland>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> Hi Konrad
> 
> I encountered a bug when trying to take a CPU offline and then bring
> it online again in HVM. As I'm not very familiar with HVM stuff, I
> cannot come up with a quick fix.

I took your two patches that you posted and they are in v3.8 now.

It seems that there are bugs in the offline/online code, though.

I did this:
# echo 0 > /sys/devices/system/cpu/cpu3/online
# echo 1 > /sys/devices/system/cpu/cpu3/online

This was in a PV guest, and it blows up (with or without your patches).

Have you seen something similar to this:

[  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
[  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
[  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
[  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1
[  106.170152] Call Trace:
[  106.170598]  [<ffffffff8109bcbd>] __schedule_bug+0x4d/0x60
[  106.171042]  [<ffffffff815be0fc>] __schedule+0x69c/0x760
[  106.171469]  [<ffffffff815be284>] schedule+0x24/0x70
[  106.171890]  [<ffffffff8103fbe9>] cpu_idle+0xc9/0xe0
[  106.172309]  [<ffffffff81033e79>] ? xen_irq_enable_direct_reloc+0x4/0x4
[  106.172726]  [<ffffffff815b1c5d>] cpu_bringup_and_idle+0xe/0x10
[  106.174533] BUG: scheduling while atomic: swapper/2/0/0x00000000
?

> 
> The HVM domU is configured with 4 vcpus. After booting to a command
> prompt, I do the following operations.
> 
> 
> With Debian's default 2.6.32-5-amd64 kernel, the last log is:
> 
>     Booting processor 3 APIC 0x6 ip 0x6000
> 
> With my own kernel, which is version 3.5, I get more logs:
> 
> [   44.047358] Booting Node 0 Processor 3 APIC 0x6
> [   44.061201] ------------[ cut here ]------------
> [   44.065186] kernel BUG at kernel/hrtimer.c:1259!
> [   44.065186] invalid opcode: 0000 [#1] SMP
> [   44.065186] CPU 3
> [   44.065186] Modules linked in:
> [   44.065186]
> [   44.065186] Pid: 0, comm: swapper/3 Not tainted 3.5.0-xen-evtchn+ #50 Xen HVM domU
> [   44.065186] RIP: 0010:[<ffffffff8105682e>]  [<ffffffff8105682e>] hrtimer_interrupt+0x24/0x1a5
> [   44.065186] RSP: 0000:ffff88000f463de8  EFLAGS: 00010046
> [   44.065186] RAX: ffffffff8105680a RBX: ffff88000f46e640 RCX: 00000000fffffffa
> [   44.065186] RDX: 00000000fffffffa RSI: 0000000000000000 RDI: ffff88000f46bd80
> [   44.065186] RBP: 0000000000000057 R08: ffff88000e000b40 R09: 0000000000000019
> [   44.065186] R10: 0000000000000000 R11: 0000000000000001 R12: ffff88000e6e8e00
> [   44.065186] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000
> [   44.065186] FS:  0000000000000000(0000) GS:ffff88000f460000(0000) knlGS:0000000000000000
> [   44.065186] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [   44.065186] CR2: 0000000000000000 CR3: 000000000181b000 CR4: 00000000000007e0
> [   44.065186] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [   44.065186] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [   44.065186] Process swapper/3 (pid: 0, threadinfo ffff88000e62e000, task ffff88000e62aea0)
> [   44.065186] Stack:
> [   44.065186]  0000000000000001 ffff88000f46e680 ffffffff81013711 00000008cfba9b27
> [   44.065186]  00000000fffffffa ffff88000e6e97c0 0000000000000057 ffff88000e6e8e00
> [   44.065186]  0000000000000000 0000000000000001 0000000000000000 ffffffff81006954
> [   44.065186] Call Trace:
> [   44.065186]  <IRQ>
> [   44.065186]  [<ffffffff81013711>] ? paravirt_sched_clock+0x5/0x8
> [   44.065186]  [<ffffffff81006954>] ? xen_timer_interrupt+0x26/0x162
> [   44.065186]  [<ffffffff8109a220>] ? check_for_new_grace_period.isra.32+0x90/0x9a
> [   44.065186]  [<ffffffff810956df>] ? handle_irq_event_percpu+0x32/0x1b0
> [   44.065186]  [<ffffffff8128f88b>] ? irq_get_handler_data+0x7/0x16
> [   44.065186]  [<ffffffff81097e39>] ? handle_percpu_irq+0x3a/0x4f
> [   44.065186]  [<ffffffff8128f9ec>] ? __xen_evtchn_do_upcall_l2+0x131/0x1c0
> [   44.065186]  [<ffffffff812913d3>] ? xen_evtchn_do_upcall+0x27/0x37
> [   44.065186]  [<ffffffff8140081a>] ? xen_hvm_callback_vector+0x6a/0x70
> [   44.065186]  <EOI>
> [   44.065186]  [<ffffffff81094b8f>] ? cpumask_next+0x17/0x19
> [   44.065186]  [<ffffffff813eb75b>] ? start_secondary+0x184/0x1e2
> [   44.065186]  [<ffffffff813eb757>] ? start_secondary+0x180/0x1e2
> [   44.065186]  [<ffffffff813eb5d7>] ? set_cpu_sibling_map+0x40e/0x40e
> [   44.065186] Code: 41 5d 41 5e 41 5f c3 41 57 41 56 41 55 41 54 55 53 48 c7 c3 40 e6 00 00 48 83 ec 28 65 48 03 1c 25 e8 db 00 00 83 7b 18 00 75 02 <0f> 0b 48
>  ff 43 20 48 bd ff ff ff ff ff ff ff 7f 41 be 03 00 00
> [   44.065186] RIP  [<ffffffff8105682e>] hrtimer_interrupt+0x24/0x1a5
> [   44.065186]  RSP <ffff88000f463de8>
> [   44.065186] ---[ end trace 9366352b116a03db ]---
> [   44.065186] Kernel panic - not syncing: Fatal exception in interrupt
> 
> And if I offline and online CPU 2 in 2.6.32-5-amd64:
> 
> [   27.933928] Booting processor 2 APIC 0x4 ip 0x6000
> [   25.708098] Initializing CPU#2
> [   25.708098] CPU: L1 I cache: 32K, L1 D cache: 32K
> [   25.708098] CPU: L2 cache: 6144K
> [   25.708098] CPU 2/0x4 -> Node 0
> [   25.708098] CPU: Physical Processor ID: 0
> [   25.708098] CPU: Processor Core ID: 4
> [   28.028234] CPU2: Intel(R) Core(TM)2 Quad  CPU   Q9450  @ 2.66GHz stepping 07
> [   28.069320] checking TSC synchronization [CPU#0 -> CPU#2]: passed.
> [   25.708098] installing Xen timer for CPU 2
> [   28.098101] CPU0 attaching NULL sched-domain.
> [   28.098106] CPU1 attaching NULL sched-domain.
> [   28.098110] CPU3 attaching NULL sched-domain.
> [   28.098092] ------------[ cut here ]------------
> [   28.098092] WARNING: at /build/buildd-linux-2.6_2.6.32-30-amd64-d4MbNM/linux-2.6-2.6.32/debian/build/source_amd64_none/kernel/irq/chip.c:88 unbind_from_irq+0
> x147/0x159()
> [   28.098092] Hardware name: HVM domU
> [   28.144127] CPU0 attaching sched-domain:
> [   28.144131]  domain 0: span 0-3 level CPU
> [   28.144133]   groups: 0 1 2 3
> [   28.144139] CPU1 attaching sched-domain:
> [   28.144142]  domain 0: span 0-3 level CPU
> [   28.144145]   groups: 1 2 3 0
> [   28.144150] CPU2 attaching sched-domain:
> [   28.144152]  domain 0: span 0-3 level CPU
> [   28.144155]   groups: 2 3 0 1
> [   28.144160] CPU3 attaching sched-domain:
> [   28.144162]  domain 0: span 0-3 level CPU
> [   28.144165]   groups: 3 0 1 2
> [   28.209159] Destroying IRQ18 without calling free_irq
> [   28.215985] Modules linked in: loop parport_pc parport psmouse evdev serio_raw snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr i2c_piix4 i2c_core butto
> n processor ext3 jbd mbcache ata_generic ata_piix libata floppy thermal thermal_sys xen_blkfront scsi_mod [last unloaded: scsi_wait_scan]
> [   28.224050] Pid: 0, comm: swapper Not tainted 2.6.32-5-amd64 #1
> [   28.224050] Call Trace:
> [   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
> [   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
> [   28.224050]  [<ffffffff8104dd7c>] ? warn_slowpath_common+0x77/0xa3
> [   28.224050]  [<ffffffff8104de04>] ? warn_slowpath_fmt+0x51/0x59
> [   28.224050]  [<ffffffff810e4493>] ? get_partial_node+0x15/0x85
> [   28.224050]  [<ffffffff811966fd>] ? kvasprintf+0x41/0x68
> [   28.224050]  [<ffffffff8109639e>] ? dynamic_irq_cleanup_x+0x4b/0xc2
> [   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
> [   28.224050]  [<ffffffff811ef5b7>] ? bind_virq_to_irqhandler+0x14c/0x15d
> [   28.224050]  [<ffffffff8100df77>] ? xen_timer_interrupt+0x0/0x18d
> [   28.224050]  [<ffffffff812f5121>] ? set_cpu_sibling_map+0x2f4/0x311
> [   28.224050]  [<ffffffff8100df0d>] ? xen_setup_timer+0x55/0xa2
> [   28.224050]  [<ffffffff8100df71>] ? xen_hvm_setup_cpu_clockevents+0x17/0x1d
> [   28.224050]  [<ffffffff812f52fc>] ? start_secondary+0x17c/0x185
> [   28.224050] ---[ end trace db1493923b5e103d ]---
> 
> The logs for CPU 2 in my 3.5 kernel are identical to those for CPU 3.
> 
> 
> Wei.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> [   44.065186]  [<ffffffff8109a220>] ? check_for_new_grace_period.isra.32+0x90/0x9a
> [   44.065186]  [<ffffffff810956df>] ? handle_irq_event_percpu+0x32/0x1b0
> [   44.065186]  [<ffffffff8128f88b>] ? irq_get_handler_data+0x7/0x16
> [   44.065186]  [<ffffffff81097e39>] ? handle_percpu_irq+0x3a/0x4f
> [   44.065186]  [<ffffffff8128f9ec>] ? __xen_evtchn_do_upcall_l2+0x131/0x1c0
> [   44.065186]  [<ffffffff812913d3>] ? xen_evtchn_do_upcall+0x27/0x37
> [   44.065186]  [<ffffffff8140081a>] ? xen_hvm_callback_vector+0x6a/0x70
> [   44.065186]  <EOI>
> [   44.065186]  [<ffffffff81094b8f>] ? cpumask_next+0x17/0x19
> [   44.065186]  [<ffffffff813eb75b>] ? start_secondary+0x184/0x1e2
> [   44.065186]  [<ffffffff813eb757>] ? start_secondary+0x180/0x1e2
> [   44.065186]  [<ffffffff813eb5d7>] ? set_cpu_sibling_map+0x40e/0x40e
> [   44.065186] Code: 41 5d 41 5e 41 5f c3 41 57 41 56 41 55 41 54 55 53 48 c7 c3 40 e6 00 00 48 83 ec 28 65 48 03 1c 25 e8 db 00 00 83 7b 18 00 75 02 <0f> 0b 48 ff 43 20 48 bd ff ff ff ff ff ff ff 7f 41 be 03 00 00
> [   44.065186] RIP  [<ffffffff8105682e>] hrtimer_interrupt+0x24/0x1a5
> [   44.065186]  RSP <ffff88000f463de8>
> [   44.065186] ---[ end trace 9366352b116a03db ]---
> [   44.065186] Kernel panic - not syncing: Fatal exception in interrupt
> 
> And if I offline then online cpu 2 in 2.6.32-5-amd64:
> 
> [   27.933928] Booting processor 2 APIC 0x4 ip 0x6000
> [   25.708098] Initializing CPU#2
> [   25.708098] CPU: L1 I cache: 32K, L1 D cache: 32K
> [   25.708098] CPU: L2 cache: 6144K
> [   25.708098] CPU 2/0x4 -> Node 0
> [   25.708098] CPU: Physical Processor ID: 0
> [   25.708098] CPU: Processor Core ID: 4
> [   28.028234] CPU2: Intel(R) Core(TM)2 Quad  CPU   Q9450  @ 2.66GHz stepping 07
> [   28.069320] checking TSC synchronization [CPU#0 -> CPU#2]: passed.
> [   25.708098] installing Xen timer for CPU 2
> [   28.098101] CPU0 attaching NULL sched-domain.
> [   28.098106] CPU1 attaching NULL sched-domain.
> [   28.098110] CPU3 attaching NULL sched-domain.
> [   28.098092] ------------[ cut here ]------------
> [   28.098092] WARNING: at /build/buildd-linux-2.6_2.6.32-30-amd64-d4MbNM/linux-2.6-2.6.32/debian/build/source_amd64_none/kernel/irq/chip.c:88 unbind_from_irq+0x147/0x159()
> [   28.098092] Hardware name: HVM domU
> [   28.144127] CPU0 attaching sched-domain:
> [   28.144131]  domain 0: span 0-3 level CPU
> [   28.144133]   groups: 0 1 2 3
> [   28.144139] CPU1 attaching sched-domain:
> [   28.144142]  domain 0: span 0-3 level CPU
> [   28.144145]   groups: 1 2 3 0
> [   28.144150] CPU2 attaching sched-domain:
> [   28.144152]  domain 0: span 0-3 level CPU
> [   28.144155]   groups: 2 3 0 1
> [   28.144160] CPU3 attaching sched-domain:
> [   28.144162]  domain 0: span 0-3 level CPU
> [   28.144165]   groups: 3 0 1 2
> [   28.209159] Destroying IRQ18 without calling free_irq
> [   28.215985] Modules linked in: loop parport_pc parport psmouse evdev serio_raw snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr i2c_piix4 i2c_core button processor ext3 jbd mbcache ata_generic ata_piix libata floppy thermal thermal_sys xen_blkfront scsi_mod [last unloaded: scsi_wait_scan]
> [   28.224050] Pid: 0, comm: swapper Not tainted 2.6.32-5-amd64 #1
> [   28.224050] Call Trace:
> [   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
> [   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
> [   28.224050]  [<ffffffff8104dd7c>] ? warn_slowpath_common+0x77/0xa3
> [   28.224050]  [<ffffffff8104de04>] ? warn_slowpath_fmt+0x51/0x59
> [   28.224050]  [<ffffffff810e4493>] ? get_partial_node+0x15/0x85
> [   28.224050]  [<ffffffff811966fd>] ? kvasprintf+0x41/0x68
> [   28.224050]  [<ffffffff8109639e>] ? dynamic_irq_cleanup_x+0x4b/0xc2
> [   28.224050]  [<ffffffff811ef131>] ? unbind_from_irq+0x147/0x159
> [   28.224050]  [<ffffffff811ef5b7>] ? bind_virq_to_irqhandler+0x14c/0x15d
> [   28.224050]  [<ffffffff8100df77>] ? xen_timer_interrupt+0x0/0x18d
> [   28.224050]  [<ffffffff812f5121>] ? set_cpu_sibling_map+0x2f4/0x311
> [   28.224050]  [<ffffffff8100df0d>] ? xen_setup_timer+0x55/0xa2
> [   28.224050]  [<ffffffff8100df71>] ? xen_hvm_setup_cpu_clockevents+0x17/0x1d
> [   28.224050]  [<ffffffff812f52fc>] ? start_secondary+0x17c/0x185
> [   28.224050] ---[ end trace db1493923b5e103d ]---
> 
> The logs for cpu 2 in my 3.5 kernel are identical to those for cpu 3.
> 
> 
> Wei.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:06:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlM9Z-0001Uz-HX; Wed, 19 Dec 2012 16:05:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TlM9X-0001Ui-UJ
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 16:05:52 +0000
Received: from [85.158.139.211:5328] by server-10.bemta-5.messagelabs.com id
	F4/5E-13383-FD5E1D05; Wed, 19 Dec 2012 16:05:51 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355933107!21223614!1
X-Originating-IP: [220.181.15.60]
X-SpamReason: No, hits=2.7 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYwID0+IDU0MTU=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYwID0+IDU0MTU=\n,HTML_40_50,HTML_MESSAGE,
	MANY_EXCLAMATIONS,MIME_BASE64_TEXT,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1096 invoked from network); 19 Dec 2012 16:05:10 -0000
Received: from m15-60.126.com (HELO m15-60.126.com) (220.181.15.60)
	by server-16.tower-206.messagelabs.com with SMTP;
	19 Dec 2012 16:05:10 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Cc:Subject:Content-Type:
	MIME-Version:Message-ID; bh=UKkIzP6/KE0BTJnYDi96N7ooly413VP8PRJ5
	JvbSJeM=; b=MNlIX4JPqk02Jx310bLf+tSy0AT+am6/0OVqcwLG3CD8CHu/C90y
	0lPBssCmxI2YdWxqXVcjM9YANc5gop8wKnsYt9SwfzyRqim+1OA8YatCbT+8Ng0K
	a6xt8LB7YFHH8/NjvJJY4qsNYw+sZjK8XbpdGQexI7fYm2Bs1BWfZ2Y=
Received: from hxkhust$126.com ( [202.114.0.254] ) by ajax-webmail-wmsvr60
	(Coremail) ; Thu, 20 Dec 2012 00:04:31 +0800 (CST)
X-Originating-IP: [202.114.0.254]
Date: Thu, 20 Dec 2012 00:04:31 +0800 (CST)
From: hxkhust <hxkhust@126.com>
To: mats.petersson@citrix.com
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: n2fibGZvb3Rlcl9odG09Mjc2NTo4MQ==
MIME-Version: 1.0
Message-ID: <4e6c46ce.cb93.13bb3e8b8fe.Coremail.hxkhust@126.com>
X-CM-TRANSID: PMqowEAZyEOQ5dFQMTMOAA--.4295W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbi4wGKBU3LlkLfbQABsM
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(qcow format image file read operation in qemu-img-xen)[updated]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1356638737172164270=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1356638737172164270==
Content-Type: multipart/alternative; 
	boundary="----=_Part_196415_450950715.1355933071613"

------=_Part_196415_450950715.1355933071613
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: base64

PkRhdGU6IFdlZCwgMTkgRGVjIDIwMTIgMTU6MzI6NDEgKzAwMDAKPkZyb206IE1hdHMgUGV0ZXJz
c29uIDxtYXRzLnBldGVyc3NvbkBjaXRyaXguY29tPgo+VG86IDx4ZW4tZGV2ZWxAbGlzdHMueGVu
Lm9yZz4KPlN1YmplY3Q6IFJlOiBbWGVuLWRldmVsXSAhISEhIWhlbHAhSSB3b3VsZG4ndCBiZSBh
YmxlIHRvIG1lZXQgdGhlCj4JZGVhZGxpbmUhKHFjb3cgZm9ybWF0IGltYWdlIGZpbGUgcmVhZCBv
cGVyYXRpb24gaW4KPglxZW11LWltZy14ZW4pW3VwZGF0ZWRdCj5NZXNzYWdlLUlEOiA8NTBEMURF
MTkuMTA4MDcwOUBjaXRyaXguY29tPgo+Q29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFyc2V0
PSJHQjIzMTIiCj4KPk9uIDE5LzEyLzEyIDE1OjIzLCBoeGtodXN0IHdyb3RlOgo+PiBPciBjb3Vs
ZCB5b3UgdGVsbCBtZSBob3cgdG8gY2FjaGUgdGhlIGRhdGEgd2hpY2ggaXMgcmVhZCBmcm9tIHRo
ZQo+PiBiYWNraW5nZmlsZSB3aGVuIGEgcWNvdyBpbWFnZSBpcyByZWdhcmRlZCBhcyBhIHZpcnR1
YWwgZGlzayBpbiBhCj4+IHJ1bm5pbmcgSFZNPwo+SSB0YWtlIGl0IHRoZSBhYm92ZSBzaW5nbGUg
cXVlc3Rpb24gaXMgdGhlIGVmZmVjdCBvZiBteSBwcmV2aW91cyByZXBseT8KPldoeSBkaWQgeW91
IGhhdmUgdG8gImhpZGUiIHRoYXQgbGl0dGxlIGV4dHJhIHF1ZXN0aW9uIGluIHRoZSB3aG9sZQo+
cHJldmlvdXMgZS1tYWlsPwo+Li4uLi4ueXNlLHRoYXQgaXMuQmVjYXVzZSBwcmV2aW91c2x5IHdo
YXQgcG9pbnQgb3V0IGlzIGEgZGV0YWlsIHF1ZXN0aW9uLCB0aGUgcmVhZGVyIG1heSBiZSBjb252
ZW5pZW50IHRvIGFuc3dlYXIgYW5kIGF0IGxlYXN0IGlucHV0IGxlc3Mgd29yZHMgdG8gbWUsIEkg
Z3Vlc3MuQW5kIGluIHRoaXMgd2F5IEkgY2FuIGdldCB0aGUgb25lIHRoYXQgaXMgbW9yZSBtYW5l
dXZlcmFibGUuCj5Tb3JyeSwgZG9uJ3Qga25vdyB0aGUgYW5zd2VyIHRvIHlvdXIgcXVlc3Rpb24g
W0knbSBndWVzc2luZywgaW4gZ2VuZXJhbCwKPnRoYXQgdGhlIERvbTAgd2lsbCBkbyB0aGF0IGZv
ciB5b3UsIHN1YmplY3QgdG8gYXZhaWxhYmxlIHNwYWNlXSwganVzdAo+cG9pbnRpbmcgb3V0IHRo
YXQgdGhlcmUgaXMgYSBtaW5vciBkaWZmZXJlbmNlIGJldHdlZW4geW91ciBwcmV2aW91cyBhbmQK
PmN1cnJlbnQgbWFpbC4KPnllYW4sdGhlIGRpZmZlcmVuY2UgaXMgbWlub3IuSSBoYXZlIG5vIHRp
bWUgYW5kIEknbSB3b3JyeSBhYm91dCB0aGUgcHJvYmxlbS5JIGhhdmUgc2VlbiB0aGF0IGEgbG90
IG9mIHF1ZXN0aW9ucyBoYXZlIG1vcmUgdGhhdCBvbmUgUmU6WFhYIG1haWxzKG1haWxzIHRvIGFu
c3dlciB0aGUgcXVlc3Rpb24pLEhvdyBjb3VsZCB0aGV5IGRvIHRoYXQ/ICAKPkJ5IGNhY2hpbmcs
IGRvIHlvdSBtZWFuICJsb2FkIHRoZSBlbnRpcmUgZmlsZSBpbnRvIFJBTSIsIG9yICJpZiBhIHJl
YWQKPmlzIHJlcXVlc3RlZCBmb3IgdGhlIHNhbWUgcGllY2Ugb2YgJ2Rpc2snIG11bHRpcGxlIHRp
bWVzLCBJIHdhbnQgdGhlCj5wcmV2aW91cyByZXN1bHQgdG8gYmUgc3RvcmVkIGFuZCByZXR1cm5l
ZCIuCj5JIG1lYW4gdGhlIGxhdHRlciBvbmUuQ291bGQgeW91IGdpdmUgbWUgc29tZSBwcm9wb3Nh
bD8KPi0tCj5NYXRz
------=_Part_196415_450950715.1355933071613
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: base64

PGRpdiBzdHlsZT0ibGluZS1oZWlnaHQ6MS43O2NvbG9yOiMwMDAwMDA7Zm9udC1zaXplOjE0cHg7
Zm9udC1mYW1pbHk6YXJpYWwiPjxwcmU+Jmd0O0RhdGU6Jm5ic3A7V2VkLCZuYnNwOzE5Jm5ic3A7
RGVjJm5ic3A7MjAxMiZuYnNwOzE1OjMyOjQxJm5ic3A7KzAwMDAKJmd0O0Zyb206Jm5ic3A7TWF0
cyZuYnNwO1BldGVyc3NvbiZuYnNwOyZsdDttYXRzLnBldGVyc3NvbkBjaXRyaXguY29tJmd0Owom
Z3Q7VG86Jm5ic3A7Jmx0O3hlbi1kZXZlbEBsaXN0cy54ZW4ub3JnJmd0OwomZ3Q7U3ViamVjdDom
bmJzcDtSZTombmJzcDtbWGVuLWRldmVsXSZuYnNwOyEhISEhaGVscCFJJm5ic3A7d291bGRuJ3Qm
bmJzcDtiZSZuYnNwO2FibGUmbmJzcDt0byZuYnNwO21lZXQmbmJzcDt0aGUKJmd0OwlkZWFkbGlu
ZSEocWNvdyZuYnNwO2Zvcm1hdCZuYnNwO2ltYWdlJm5ic3A7ZmlsZSZuYnNwO3JlYWQmbmJzcDtv
cGVyYXRpb24mbmJzcDtpbgomZ3Q7CXFlbXUtaW1nLXhlbilbdXBkYXRlZF0KJmd0O01lc3NhZ2Ut
SUQ6Jm5ic3A7Jmx0OzUwRDFERTE5LjEwODA3MDlAY2l0cml4LmNvbSZndDsKJmd0O0NvbnRlbnQt
VHlwZTombmJzcDt0ZXh0L3BsYWluOyZuYnNwO2NoYXJzZXQ9IkdCMjMxMiIKJmd0OwomZ3Q7T24m
bmJzcDsxOS8xMi8xMiZuYnNwOzE1OjIzLCZuYnNwO2h4a2h1c3QmbmJzcDt3cm90ZToKJmd0OyZn
dDsmbmJzcDtPciZuYnNwO2NvdWxkJm5ic3A7eW91Jm5ic3A7dGVsbCZuYnNwO21lJm5ic3A7aG93
Jm5ic3A7dG8mbmJzcDtjYWNoZSZuYnNwO3RoZSZuYnNwO2RhdGEmbmJzcDt3aGljaCZuYnNwO2lz
Jm5ic3A7cmVhZCZuYnNwO2Zyb20mbmJzcDt0aGUKJmd0OyZndDsmbmJzcDtiYWNraW5nZmlsZSZu
YnNwO3doZW4mbmJzcDthJm5ic3A7cWNvdyZuYnNwO2ltYWdlJm5ic3A7aXMmbmJzcDtyZWdhcmRl
ZCZuYnNwO2FzJm5ic3A7YSZuYnNwO3ZpcnR1YWwmbmJzcDtkaXNrJm5ic3A7aW4mbmJzcDthCiZn
dDsmZ3Q7Jm5ic3A7cnVubmluZyZuYnNwO0hWTT8KJmd0O0kmbmJzcDt0YWtlJm5ic3A7aXQmbmJz
cDt0aGUmbmJzcDthYm92ZSZuYnNwO3NpbmdsZSZuYnNwO3F1ZXN0aW9uJm5ic3A7aXMmbmJzcDt0
aGUmbmJzcDtlZmZlY3QmbmJzcDtvZiZuYnNwO215Jm5ic3A7cHJldmlvdXMmbmJzcDtyZXBseT8K
Jmd0O1doeSZuYnNwO2RpZCZuYnNwO3lvdSZuYnNwO2hhdmUmbmJzcDt0byZuYnNwOyJoaWRlIiZu
YnNwO3RoYXQmbmJzcDtsaXR0bGUmbmJzcDtleHRyYSZuYnNwO3F1ZXN0aW9uJm5ic3A7aW4mbmJz
cDt0aGUmbmJzcDt3aG9sZQomZ3Q7cHJldmlvdXMmbmJzcDtlLW1haWw/CiZndDs8L3ByZT48cHJl
Pi4uLi4uLnlzZSx0aGF0IGlzLjwvcHJlPjxwcmU+QmVjYXVzZSBwcmV2aW91c2x5IHdoYXQgcG9p
bnQgb3V0IGlzIGEgZGV0YWlsIHF1ZXN0aW9uLCB0aGUgcmVhZGVyIG1heSBiZSBjb252ZW5pZW50
PGZvbnQgZmFjZT0iYXJpYWwsIMvOzOUiIHNpemU9IjMiPjxzcGFuIHN0eWxlPSJsaW5lLWhlaWdo
dDogMjBweDsiPiB0byBhbnN3ZWFyIGFuZCBhdCBsZWFzdCBpbnB1dCBsZXNzIHdvcmRzIHRvIG1l
LCBJIGd1ZXNzLjwvc3Bhbj48L2ZvbnQ+PC9wcmU+PHByZT5BbmQgaW4gdGhpcyB3YXkgSSBjYW4g
Z2V0IHRoZSBvbmUgdGhhdCBpcyBtb3JlIG1hbmV1dmVyYWJsZS48L3ByZT48cHJlPgomZ3Q7U29y
cnksJm5ic3A7ZG9uJ3QmbmJzcDtrbm93Jm5ic3A7dGhlJm5ic3A7YW5zd2VyJm5ic3A7dG8mbmJz
cDt5b3VyJm5ic3A7cXVlc3Rpb24mbmJzcDtbSSdtJm5ic3A7Z3Vlc3NpbmcsJm5ic3A7aW4mbmJz
cDtnZW5lcmFsLAomZ3Q7dGhhdCZuYnNwO3RoZSZuYnNwO0RvbTAmbmJzcDt3aWxsJm5ic3A7ZG8m
bmJzcDt0aGF0Jm5ic3A7Zm9yJm5ic3A7eW91LCZuYnNwO3N1YmplY3QmbmJzcDt0byZuYnNwO2F2
YWlsYWJsZSZuYnNwO3NwYWNlXSwmbmJzcDtqdXN0CiZndDtwb2ludGluZyZuYnNwO291dCZuYnNw
O3RoYXQmbmJzcDt0aGVyZSZuYnNwO2lzJm5ic3A7YSZuYnNwO21pbm9yJm5ic3A7ZGlmZmVyZW5j
ZSZuYnNwO2JldHdlZW4mbmJzcDt5b3VyJm5ic3A7cHJldmlvdXMmbmJzcDthbmQKJmd0O2N1cnJl
bnQmbmJzcDttYWlsLgomZ3Q7PC9wcmU+PHByZT55ZWFuLHRoZSBkaWZmZXJlbmNlIGlzIG1pbm9y
LkkgaGF2ZSBubyB0aW1lIGFuZCBJJ20gd29ycnkgYWJvdXQgdGhlIHByb2JsZW0uPC9wcmU+PHBy
ZT5JIGhhdmUgc2VlbiB0aGF0IGEgbG90IG9mIHF1ZXN0aW9ucyBoYXZlIG1vcmUgdGhhdCBvbmUg
UmU6WFhYIG1haWxzKG1haWxzIHRvIGFuc3dlciB0aGUgcXVlc3Rpb24pLEhvdyBjb3VsZCB0aGV5
IGRvIHRoYXQ/PC9wcmU+PHByZT4gIAomZ3Q7QnkmbmJzcDtjYWNoaW5nLCZuYnNwO2RvJm5ic3A7
eW91Jm5ic3A7bWVhbiZuYnNwOyJsb2FkJm5ic3A7dGhlJm5ic3A7ZW50aXJlJm5ic3A7ZmlsZSZu
YnNwO2ludG8mbmJzcDtSQU0iLCZuYnNwO29yJm5ic3A7ImlmJm5ic3A7YSZuYnNwO3JlYWQKJmd0
O2lzJm5ic3A7cmVxdWVzdGVkJm5ic3A7Zm9yJm5ic3A7dGhlJm5ic3A7c2FtZSZuYnNwO3BpZWNl
Jm5ic3A7b2YmbmJzcDsnZGlzaycmbmJzcDttdWx0aXBsZSZuYnNwO3RpbWVzLCZuYnNwO0kmbmJz
cDt3YW50Jm5ic3A7dGhlCiZndDtwcmV2aW91cyZuYnNwO3Jlc3VsdCZuYnNwO3RvJm5ic3A7YmUm
bmJzcDtzdG9yZWQmbmJzcDthbmQmbmJzcDtyZXR1cm5lZCIuCiZndDs8L3ByZT48cHJlPkkgbWVh
biB0aGUgbGF0dGVyIG9uZS5Db3VsZCB5b3UgZ2l2ZSBtZSBzb21lIHByb3Bvc2FsPzwvcHJlPjxw
cmU+CiZndDstLQomZ3Q7TWF0czwvcHJlPjwvZGl2Pjxicj48YnI+PHNwYW4gdGl0bGU9Im5ldGVh
c2Vmb290ZXIiPjxzcGFuIGlkPSJuZXRlYXNlX21haWxfZm9vdGVyIj48L3NwYW4+PC9zcGFuPg==

------=_Part_196415_450950715.1355933071613--



--===============1356638737172164270==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1356638737172164270==--



From xen-devel-bounces@lists.xen.org Wed Dec 19 16:06:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMAP-0001cB-0F; Wed, 19 Dec 2012 16:06:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlMAN-0001bp-CI
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:06:43 +0000
Received: from [85.158.137.99:34545] by server-15.bemta-3.messagelabs.com id
	97/0A-07921-216E1D05; Wed, 19 Dec 2012 16:06:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355933130!20170079!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27993 invoked from network); 19 Dec 2012 16:05:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:05:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="257943"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 16:05:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 16:05:29 +0000
Message-ID: <1355933127.14620.451.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 16:05:27 +0000
In-Reply-To: <1355928233-15885-1-git-send-email-ian.campbell@citrix.com>
References: <1354628943.2693.80.camel@zakaz.uk.xensource.com>
	<1355928233-15885-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, "Keir \(Xen.org\)" <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove nr_irqs_gsi from generic code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 14:43 +0000, Ian Campbell wrote:
> The concept is X86 specific.
> 
> AFAICT the generic concept here is the number of physical IRQs which
> the current hardware has, so call this nr_hw_irqs.

I corrected this to:
AFAICT the generic concept here is the number of static physical IRQs
which the current hardware has, so call this nr_static_irqs.

> Also using "defined NR_IRQS" as a standin for x86 might have made
> sense at one point but its just cleaner to push the necessary
> definitions into asm/irq.h.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Keir (Xen.org) <keir@xen.org>
> Cc: Jan Beulich <JBeulich@suse.com>

Applied with Keir + Jan's ack.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> the current hardware has, so call this nr_hw_irqs.

I corrected this to:
AFAICT the generic concept here is the number of static physical IRQs
which the current hardware has, so call this nr_static_irqs.

> Also using "defined NR_IRQS" as a stand-in for x86 might have made
> sense at one point, but it's just cleaner to push the necessary
> definitions into asm/irq.h.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Keir (Xen.org) <keir@xen.org>
> Cc: Jan Beulich <JBeulich@suse.com>

Applied with Keir + Jan's ack.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:06:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMAX-0001dw-Et; Wed, 19 Dec 2012 16:06:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlMAV-0001dX-UF
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:06:52 +0000
Received: from [85.158.137.99:6516] by server-10.bemta-3.messagelabs.com id
	F6/25-07616-B16E1D05; Wed, 19 Dec 2012 16:06:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355933130!20170079!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28183 invoked from network); 19 Dec 2012 16:05:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:05:35 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="257948"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 16:05:34 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 16:05:34 +0000
Message-ID: <1355933133.14620.452.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 19 Dec 2012 16:05:33 +0000
In-Reply-To: <1354622199-27504-15-git-send-email-ian.campbell@citrix.com>
References: <1354622173.2693.72.camel@zakaz.uk.xensource.com>
	<1354622199-27504-15-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH 15/15] xen: arm: remove now empty dummy.S
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-04 at 11:56 +0000, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Applied with Tim + Stefano's Ack now that the nr_irqs_gsi thing is in.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:11:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMEq-0002C4-6F; Wed, 19 Dec 2012 16:11:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1TlMEo-0002B0-TO
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:11:19 +0000
Received: from [193.109.254.147:15195] by server-7.bemta-14.messagelabs.com id
	D2/16-08102-627E1D05; Wed, 19 Dec 2012 16:11:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1355933477!10665176!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24781 invoked from network); 19 Dec 2012 16:11:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:11:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="258162"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 16:11:17 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 16:11:17 +0000
Message-ID: <50D1E723.4070201@citrix.com>
Date: Wed, 19 Dec 2012 16:11:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355932443.14620.448.camel@zakaz.uk.xensource.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Razvan Cojocaru <rzvncj@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/2012 15:54, Ian Campbell wrote:
> On Wed, 2012-12-19 at 15:27 +0000, Andrew Cooper wrote:
>> On 19/12/2012 15:00, Ian Campbell wrote:
>>> On Wed, 2012-12-19 at 14:57 +0000, Razvan Cojocaru wrote:
>>>>>>      m->overlapped = is_var_mtrr_overlapped(m);
>>>>>>
>>>>>> Looks like that function contains the necessary logic.
>>>>> You're right, but what happens there is that that function depends on
>>>>> the get_mtrr_range() function, which in turn depends on the size_or_mask
>>>>> global variable, which is initialized in hvm_mtrr_pat_init(), which then
>>>>> depends on a global table, and so on. Putting that into libxc is pretty
>>>>> much putting the whole mtrr.c file there.
>>>> This is where it gets tricky:
>>>>
>>>> static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
>>>>                             uint64_t *base, uint64_t *end)
>>>> {
>>>>      [...]
>>>>      phys_addr = 36;
>>>>
>>>>      if ( cpuid_eax(0x80000000) >= 0x80000008 )
>>>>          phys_addr = (uint8_t)cpuid_eax(0x80000008);
>>>>
>>>>      size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
>>>>      [...]
>>>> }
>>>>
>>>> specifically, in the cpuid_eax() call, which doesn't make much sense in 
>>>> dom0 userspace.
>>> The fact that get_mtrr_range is querying the underlying physical CPUID
>>> suggests it has something to do with the translation from virtual to
>>> physical MTRR and is therefore not something userspace needs to worry
>>> about, but I'm only speculating.
>> CPUID 0x80000008.EAX is the physical address size supported by the
>> processor (in bits).  Typical values on modern hardware are 40 or 48.
> I know what the bit is. This code seems to be leaking physical CPU
> parameters into the virtual CPU state and the question is if userspace
> needs to care about that. I suspect the answer is no.
>
> What should matter for the guest state is the virtualised CPUID
> 0x80000008.EAX which, at least in theory, could be different (e.g. a
> migrated guest?). 
>
> Ian.
>

Ah - I see your concern.  Yes - it might well be different.  Is this
information passed in an HVM save record?  It does not appear to be
associated with the MTRR HVM save record.

An HVM domain booted on a processor with 48-bit physical addressing,
which makes use of the upper 8 bits when setting up the variable MTRRs
and is subsequently migrated to a different server with only 40-bit
physical addressing, is going to have problems.  Whether a protection
fault occurs or the top bits are simply ignored, stuff will break.

As a result, I think the virtualised CPUID value needs feature levelling
across hosts, or restrictions applied to where you can migrate a started
VM to.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:15:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMJ8-0002N2-5x; Wed, 19 Dec 2012 16:15:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlMJ7-0002Mv-0I
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:15:45 +0000
Received: from [85.158.143.99:63258] by server-2.bemta-4.messagelabs.com id
	5F/87-30861-038E1D05; Wed, 19 Dec 2012 16:15:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355933741!17940546!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2265 invoked from network); 19 Dec 2012 16:15:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:15:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="258290"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 16:15:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 16:15:40 +0000
Message-ID: <1355933739.14620.456.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Date: Wed, 19 Dec 2012 16:15:39 +0000
In-Reply-To: <50D1E723.4070201@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
	<50D1E723.4070201@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Razvan Cojocaru <rzvncj@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 16:11 +0000, Andrew Cooper wrote:
> On 19/12/2012 15:54, Ian Campbell wrote:
> > On Wed, 2012-12-19 at 15:27 +0000, Andrew Cooper wrote:
> >> On 19/12/2012 15:00, Ian Campbell wrote:
> >>> On Wed, 2012-12-19 at 14:57 +0000, Razvan Cojocaru wrote:
> >>>>>>      m->overlapped = is_var_mtrr_overlapped(m);
> >>>>>>
> >>>>>> Looks like that function contains the necessary logic.
> >>>>> You're right, but what happens there is that that function depends on
> >>>>> the get_mtrr_range() function, which in turn depends on the size_or_mask
> >>>>> global variable, which is initialized in hvm_mtrr_pat_init(), which then
> >>>>> depends on a global table, and so on. Putting that into libxc is pretty
> >>>>> much putting the whole mtrr.c file there.
> >>>> This is where it gets tricky:
> >>>>
> >>>> static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
> >>>>                             uint64_t *base, uint64_t *end)
> >>>> {
> >>>>      [...]
> >>>>      phys_addr = 36;
> >>>>
> >>>>      if ( cpuid_eax(0x80000000) >= 0x80000008 )
> >>>>          phys_addr = (uint8_t)cpuid_eax(0x80000008);
> >>>>
> >>>>      size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
> >>>>      [...]
> >>>> }
> >>>>
> >>>> specifically, in the cpuid_eax() call, which doesn't make much sense in 
> >>>> dom0 userspace.
> >>> The fact that get_mtrr_range is querying the underlying physical CPUID
> >>> suggests it has something to do with the translation from virtual to
> >>> physical MTRR and is therefore not something userspace needs to worry
> >>> about, but I'm only speculating.
> >> CPUID 0x80000008.EAX is the physical address size supported by the
> >> processor (in bits).  Typical values on modern hardware are 40 or 48.
> > I know what the bit is. This code seems to be leaking physical CPU
> > parameters into the virtual CPU state and the question is if userspace
> > needs to care about that. I suspect the answer is no.
> >
> > What should matter for the guest state is the virtualised CPUID
> > 0x80000008.EAX which, at least in theory, could be different (e.g. a
> > migrated guest?). 
> >
> > Ian.
> >
> 
> Ah - I see your concern.  Yes - it might well be different.  Is this
> information passed in an HVM save record?  It does not appear to be
> associated with the MTRR HVM save record.

No, this is the internal state, not the save record, but Razvan is
implementing the same logic in libxc, which must necessarily be based
on the HVM saved state only and not the internal emulation state.

> An HVM domain, booted on processor with 48bit physical addressing, which
> makes use of the upper 8 bits when setting up the variable MTRRs, which
> is subsequently migrated to a different server with only 40bit physical
> addressing is going to have problems.  Whether a protection fault occurs
> or the top bits are simply ignored, stuff will break.

This is HVM, so I would hope the hypervisor would catch it and do
something sensible before it hits the real (or VMCS) registers.

> As a result, I think the virtualised CPUID value needs feature levelling
> across hosts, or restrictions applied to where you can migrate a started
> VM to.

This seems likely. 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > migrated guest?). 
> >
> > Ian.
> >
> 
> Ah - I see your concern.  Yes - it might well be different.  Is this
> information passed in an HVM save record?  It does not appear to be
> associated with the MTRR HVM save record.

No, this is the internal state, not the save record, but Razvan is
implementing the same logic in libxc, which must necessarily be based on
the hvm saved state only and not the internal emulation state.
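
For reference, the mask computation from the quoted snippet boils down to
the following plain userspace sketch (illustrative only; function names are
invented here). Note that it queries the *host* CPUID, which is precisely
the leakage under discussion -- code in libxc would need the guest's
virtualised 0x80000008 leaf instead:

```c
#include <stdint.h>
#include <cpuid.h>

#define PAGE_SHIFT 12

/* Physical address width as the *host* CPU reports it -- which is exactly
 * the leakage being discussed: guest-visible state should come from the
 * virtualised 0x80000008 leaf instead. */
static unsigned int host_phys_bits(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx) &&
        eax >= 0x80000008 &&
        __get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
        return eax & 0xff;    /* EAX[7:0]: physical address bits */

    return 36;                /* architectural fallback, as in mtrr.c */
}

/* The size_or_mask derivation from the quoted snippet, widened to 64 bits
 * so it stays correct for address widths above 44 bits (the quoted code's
 * plain "1 <<" would overflow a 32-bit int there). */
static uint64_t size_or_mask(unsigned int phys_bits)
{
    return ~(((uint64_t)1 << (phys_bits - PAGE_SHIFT)) - 1);
}
```

For a 40-bit machine this yields 0xFFFFFFFFF0000000, i.e. everything above
bit 39 masked off at page granularity.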

> An HVM domain, booted on a processor with 48-bit physical addressing, which
> makes use of the upper 8 bits when setting up the variable MTRRs, and which
> is subsequently migrated to a different server with only 40-bit physical
> addressing, is going to have problems.  Whether a protection fault occurs
> or the top bits are simply ignored, stuff will break.

This is HVM, so I would hope the hypervisor would catch it and do
something sensible before it hits the real (or VMCS) registers.

> As a result, I think the virtualised CPUID value needs feature levelling
> across hosts, or restrictions applied to where you can migrate a started
> VM to.

This seems likely. 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:18:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMLK-0002TU-2O; Wed, 19 Dec 2012 16:18:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <erdnetdev@gmail.com>) id 1TlMLI-0002TN-Pf
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 16:18:00 +0000
Received: from [193.109.254.147:42145] by server-7.bemta-14.messagelabs.com id
	95/CE-08102-8B8E1D05; Wed, 19 Dec 2012 16:18:00 +0000
X-Env-Sender: erdnetdev@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1355933871!2242194!1
X-Originating-IP: [209.85.220.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26869 invoked from network); 19 Dec 2012 16:17:53 -0000
Received: from mail-pa0-f53.google.com (HELO mail-pa0-f53.google.com)
	(209.85.220.53)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:17:53 -0000
Received: by mail-pa0-f53.google.com with SMTP id hz1so1419363pad.12
	for <xen-devel@lists.xensource.com>;
	Wed, 19 Dec 2012 08:17:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:subject:from:to:cc:in-reply-to:references:content-type
	:date:message-id:mime-version:x-mailer:content-transfer-encoding;
	bh=dgbtxaidRlVrmkkeHD6bAkUo/Zu9gMoki3LdcKIkqAo=;
	b=v9W2QUN65NeVTCzO4hgTOpa+SgItPfUkMTy+8PuYDY13XACT5HpJB/ybgPGPjzj4Fs
	PYpECPQvn9ohDj9h6Ske56GkC0yLrvCtBSNAdASgqPMrd/X37RQIEUew9yuO+FiwUUkH
	DyRpKuqvLea32cvE2XGQbbTEKGtylGWojNVnl/j+ftQdp3+7PStMpyO4z/JI39WhYf4i
	L7AnhI7yuI9+fU/73W0Ep4fsmR8XuQsw4oF4iRK0eJLFPnQDg3LZPF7szDbruipThB4B
	aBcf376pWVSPBncBhd8GvEMG4eAi013fsZ3ez/x5LcCRBVNomrurGeAoMs+v5EhcKqG/
	OKUg==
X-Received: by 10.68.248.74 with SMTP id yk10mr20160691pbc.86.1355933871532;
	Wed, 19 Dec 2012 08:17:51 -0800 (PST)
Received: from [172.19.250.213] ([172.19.250.213])
	by mx.google.com with ESMTPS id qn3sm1177192pbb.56.2012.12.19.08.17.49
	(version=SSLv3 cipher=OTHER); Wed, 19 Dec 2012 08:17:50 -0800 (PST)
From: Eric Dumazet <erdnetdev@gmail.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <55633610.20121219123427@eikelenboom.it>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
Date: Wed, 19 Dec 2012 08:17:49 -0800
Message-ID: <1355933869.21834.13.camel@edumazet-glaptop>
Mime-Version: 1.0
X-Mailer: Evolution 2.28.3 
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

> Hi Ian,
> 
> It ran overnight and i haven't seen the warn_once trigger.
> (but i also didn't with the previous patch)
> 

As I said, the minimum value needed to avoid triggering the warning was
what Ian's patch used, but it was still not an accurate estimation.

Doing the real accounting might trigger slow transfers, or dropped
packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.
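
A toy model of the effect (not the kernel's actual skb->truesize logic;
limits and names are invented for the example): with a fixed per-socket
buffer budget, charging each packet a whole backing page instead of its
real length means the budget runs out far sooner for small packets.

```c
#define PAGE_SIZE 4096
#define SK_RCVBUF (64 * 1024)   /* hypothetical per-socket buffer limit */

/* How many pkt_len-byte packets fit under the buffer limit, depending on
 * whether each packet is charged its real size or the whole backing page
 * (the more honest truesize when each packet pins a page). */
static int packets_admitted(int pkt_len, int charge_full_page)
{
    int charge = charge_full_page ? PAGE_SIZE : pkt_len;
    return SK_RCVBUF / charge;
}
```

For 256-byte packets the exact charge admits 64*1024/256 = 256 packets,
while the full-page charge admits only 16, so the limit is hit 16x sooner.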

So the real question is: when accounting for full pages, do your
applications run as smoothly as before, with no huge performance
regression?




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:19:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMMj-0002Zv-IJ; Wed, 19 Dec 2012 16:19:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TlMMh-0002Zm-8k
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:19:27 +0000
Received: from [85.158.138.51:6843] by server-2.bemta-3.messagelabs.com id
	B7/EE-11239-E09E1D05; Wed, 19 Dec 2012 16:19:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355933964!10966136!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 692 invoked from network); 19 Dec 2012 16:19:25 -0000
Received: from unknown (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:19:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1282753"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 16:18:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 11:18:19 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TlMLb-00045c-3o;
	Wed, 19 Dec 2012 16:18:19 +0000
Message-ID: <1355933902.10526.27.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Dec 2012 16:18:22 +0000
In-Reply-To: <20121219160455.GA12077@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 16:04 +0000, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> > Hi Konrad
> > 
> > I encountered a bug when trying to bring a cpu offline then online it
> > again in HVM. As I'm not very familiar with HVM stuff I cannot come up
> > with a quick fix.
> 
> I took your two patches that you posted and they are in v3.8 now.
> 
> It seems that there are bugs in the offline/online code though.
> 
> I did this:
> # echo 0 > /sys/devices/system/cpu/cpu3/online
> # echo 1 > /sys/devices/system/cpu/cpu3/online
> 
> With a PV guest and it blows up (with or without your patches).
> 
> Have you seen something similar to this:
> 
> [  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
> [  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
> [  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
> [  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1
> [  106.170152] Call Trace:
> [  106.170598]  [<ffffffff8109bcbd>] __schedule_bug+0x4d/0x60
> [  106.171042]  [<ffffffff815be0fc>] __schedule+0x69c/0x760
> [  106.171469]  [<ffffffff815be284>] schedule+0x24/0x70
> [  106.171890]  [<ffffffff8103fbe9>] cpu_idle+0xc9/0xe0
> [  106.172309]  [<ffffffff81033e79>] ? xen_irq_enable_direct_reloc+0x4/0x4
> [  106.172726]  [<ffffffff815b1c5d>] cpu_bringup_and_idle+0xe/0x10
> [  106.174533] BUG: scheduling while atomic: swapper/2/0/0x00000000
> ?
> 

IIRC I didn't see this. I was using your xen.git kernel tree, not the
upstream one. The PV path was fixed after applying my patch; for the HVM
path I didn't have much idea. I didn't play much with online/offline
after fixing what I could.

Let me play with the upstream kernel and give you some feedback.


Wei


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:24:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMR2-0002u0-8e; Wed, 19 Dec 2012 16:23:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlMR1-0002tt-0j
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 16:23:55 +0000
Received: from [85.158.139.211:49721] by server-16.bemta-5.messagelabs.com id
	06/EE-09208-A1AE1D05; Wed, 19 Dec 2012 16:23:54 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355934232!20348417!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 446 invoked from network); 19 Dec 2012 16:23:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:23:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1283938"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 16:23:51 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 11:23:51 -0500
Message-ID: <50D1EA16.1070005@citrix.com>
Date: Wed, 19 Dec 2012 16:23:50 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: hxkhust <hxkhust@126.com>
References: <4e6c46ce.cb93.13bb3e8b8fe.Coremail.hxkhust@126.com>
In-Reply-To: <4e6c46ce.cb93.13bb3e8b8fe.Coremail.hxkhust@126.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(qcow format image file read operation in qemu-img-xen)[updated]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 16:04, hxkhust wrote:
> >Date: Wed, 19 Dec 2012 15:32:41 +0000
> >From: Mats Petersson <mats.petersson@citrix.com>
> >To: <xen-devel@lists.xen.org>
> >Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
> >	deadline!(qcow format image file read operation in
> >	qemu-img-xen)[updated]
> >Message-ID: <50D1DE19.1080709@citrix.com>
> >Content-Type: text/plain; charset="GB2312"
> >
> >On 19/12/12 15:23, hxkhust wrote:
> >> Or could you tell me how to cache the data which is read from the
> >> backingfile when a qcow image is regarded as a virtual disk in a
> >> running HVM?
> >I take it the above single question is the effect of my previous reply?
> >Why did you have to "hide" that little extra question in the whole
> >previous e-mail?
> >
> ......yes, that is.
> Because previously I pointed out a detailed question, so it would be convenient for the reader to answer with fewer words, I guess.
You are, I suppose, aware that most members of this list haven't even
replied to your original email, and most probably thought your re-send
was simply another copy of the same mail - given that there have been
about 100 mails to the list since, it's not inconceivable.

I very nearly missed the tiny difference between your first, second and
third posting.

> And in this way I can get an answer that is more practical.
> >Sorry, don't know the answer to your question [I'm guessing, in general,
> >that the Dom0 will do that for you, subject to available space], just
> >pointing out that there is a minor difference between your previous and
> >current mail.
> >
> Yeah, the difference is minor. I have no time and I'm worried about the problem.
> I have seen that a lot of questions get more than one Re: XXX mail (mails answering the question). How could they do that?
Not sure what you are trying to say here. It is your task to make it as
easy as possible for us to help you. Please read the links I sent
earlier. It will REALLY help you get better help to understand that if
you "ask smart questions" you get good answers. Included in that is "do
not send the same email several times whilst ignoring replies already
given".
>   
> >By caching, do you mean "load the entire file into RAM", or "if a read
> >is requested for the same piece of 'disk' multiple times, I want the
> >previous result to be stored and returned".
> >
> I mean the latter one. Could you give me some proposal?
Keep some sort of memory structure that records the previously read
data, along with "where the data came from" [beware that this needs to
be updated/cleared if there is a write]. If a read comes in for a
location on "disk" that you have cached, you can pretty much return
immediately.
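
A rough sketch of such a structure (illustrative only; the slot count,
sector size and all names are invented here, and a real qcow cache would
key on cluster offsets rather than fixed sectors):

```c
#include <stdint.h>
#include <string.h>

#define SECTOR 512
#define NSLOTS 64            /* tiny direct-mapped cache */

struct slot {
    int      valid;
    uint64_t offset;          /* where the data came from on "disk" */
    uint8_t  data[SECTOR];
};

static struct slot cache[NSLOTS];

/* Return the cached copy for this offset, or NULL on a miss. */
static struct slot *cache_lookup(uint64_t offset)
{
    struct slot *s = &cache[(offset / SECTOR) % NSLOTS];
    return (s->valid && s->offset == offset) ? s : NULL;
}

/* Record data just read from the backing file. */
static void cache_fill(uint64_t offset, const uint8_t *buf)
{
    struct slot *s = &cache[(offset / SECTOR) % NSLOTS];
    s->valid = 1;
    s->offset = offset;
    memcpy(s->data, buf, SECTOR);
}

/* On a write, the stale cached copy must be dropped (or rewritten). */
static void cache_invalidate(uint64_t offset)
{
    struct slot *s = &cache[(offset / SECTOR) % NSLOTS];
    if (s->valid && s->offset == offset)
        s->valid = 0;
}
```

On the read path you would try cache_lookup() first, fall back to the
backing file on a miss and then cache_fill(); the write path must call
cache_invalidate() before (or instead of) updating the backing file.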

Exactly what other problems you run into, I'm not sure - as I said
earlier, I'm not overly familiar with this code.

--
Mats
> >--
> >Mats
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:24:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMR2-0002u0-8e; Wed, 19 Dec 2012 16:23:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlMR1-0002tt-0j
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 16:23:55 +0000
Received: from [85.158.139.211:49721] by server-16.bemta-5.messagelabs.com id
	06/EE-09208-A1AE1D05; Wed, 19 Dec 2012 16:23:54 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355934232!20348417!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 446 invoked from network); 19 Dec 2012 16:23:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:23:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1283938"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 16:23:51 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 11:23:51 -0500
Message-ID: <50D1EA16.1070005@citrix.com>
Date: Wed, 19 Dec 2012 16:23:50 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: hxkhust <hxkhust@126.com>
References: <4e6c46ce.cb93.13bb3e8b8fe.Coremail.hxkhust@126.com>
In-Reply-To: <4e6c46ce.cb93.13bb3e8b8fe.Coremail.hxkhust@126.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
 deadline!(qcow format image file read operation in qemu-img-xen)[updated]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 16:04, hxkhust wrote:
> >Date: Wed, 19 Dec 2012 15:32:41 +0000
> >From: Mats Petersson <mats.petersson@citrix.com>
> >To: <xen-devel@lists.xen.org>
> >Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
> >	deadline!(qcow format image file read operation in
> >	qemu-img-xen)[updated]
> >Message-ID: <50D1DE19.1080709@citrix.com>
> >Content-Type: text/plain; charset="GB2312"
> >
> >On 19/12/12 15:23, hxkhust wrote:
> >> Or could you tell me how to cache the data which is read from the
> >> backingfile when a qcow image is regarded as a virtual disk in a
> >> running HVM?
> >I take it the above single question is the effect of my previous reply?
> >Why did you have to "hide" that little extra question in the whole
> >previous e-mail?
> >
> ......yes, that is.
> Because what I pointed out previously is a detailed question, I guessed it would be more convenient for a reader to answer, needing fewer words in reply.
You are, I suppose, aware that most members of this list haven't even
replied to your original email, and most probably thought your re-send
was simply another copy of the same mail. Given that there have been
about 100 mails to the list since, that's not inconceivable.

I very nearly missed the tiny difference between your first, second and
third posting.

> And in this way I can get the one that is more maneuverable.
> >Sorry, don't know the answer to your question [I'm guessing, in general,
> >that the Dom0 will do that for you, subject to available space], just
> >pointing out that there is a minor difference between your previous and
> >current mail.
> >
> yeah, the difference is minor. I have no time and I'm worried about the problem.
> I have seen that a lot of questions have more than one Re:XXX mail (mails answering the question). How could they do that?
Not sure what you are trying to say here. It is your task to make it as
easy as possible for us to help you. Please read the links I sent
earlier. It will REALLY help you get better answers to understand
that if you "ask smart questions" you get good answers. Included in that
is "do not send the same email several times whilst ignoring replies
already given".
>   
> >By caching, do you mean "load the entire file into RAM", or "if a read
> >is requested for the same piece of 'disk' multiple times, I want the
> >previous result to be stored and returned".
> >
> I mean the latter one.Could you give me some proposal?
Keep some sort of memory structure that records the previously read
data, along with "where the data came from" [beware that this needs to
be updated/cleared if there is a write]. If you can find a read for the
location on "disk" that you have cached, then you can pretty much return
immediately.

Exactly what other problems you run into, I'm not sure - as I said
earlier, I'm not overly familiar with this code.

--
Mats
> >--
> >Mats
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:28:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMVG-00032a-VB; Wed, 19 Dec 2012 16:28:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlMVF-00032T-OG
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:28:17 +0000
Received: from [85.158.138.51:56437] by server-16.bemta-3.messagelabs.com id
	A2/56-27634-C1BE1D05; Wed, 19 Dec 2012 16:28:12 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1355934490!10967537!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6032 invoked from network); 19 Dec 2012 16:28:11 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 16:28:11 -0000
Received: (qmail 473 invoked from network); 19 Dec 2012 18:28:09 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	19 Dec 2012 18:28:09 +0200
Message-ID: <50D1EB56.40400@gmail.com>
Date: Wed, 19 Dec 2012 18:29:10 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
	<50D1E723.4070201@citrix.com>
	<1355933739.14620.456.camel@zakaz.uk.xensource.com>
In-Reply-To: <1355933739.14620.456.camel@zakaz.uk.xensource.com>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234455,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 21588ff0f570d7cabb337f719b0611f0.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id: 2m1g3ta.17epd43fm.jpat],
	total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44443
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> Ah - I see your concern.  Yes - it might well be different.  Is this
>> information passed in an HVM save record?  It does not appear to be
>> associated with the MTRR HVM save record.
>
> No , this is the internal state not the save record but Razvan is
> implementing the same logic in libxc which must necessarily be based on
> the hvm saved state only and not the internal emulation state.

Exactly, and all I need is an extra variable in the save record (or 
simply a single bit in a safe place in one of the existing ones), 
telling me whether the MTRRs are overlapped or not. The CPUID code is just a 
part of the logic that finds this out in the hypervisor; and 
specifically it is a part that's better _left_in_ the hypervisor. That 
is in fact what I was trying to say a few messages ago, when I called 
the cpuid_eax() function the tricky part. :)

Do we agree that a bool_t overlapped should be added to struct 
hvm_hw_mtrr for this case?

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:43:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMjT-0003Re-Kb; Wed, 19 Dec 2012 16:42:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jun.nakajima@intel.com>) id 1TlMjS-0003RZ-5w
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 16:42:58 +0000
Received: from [85.158.143.35:35562] by server-2.bemta-4.messagelabs.com id
	24/A8-30861-19EE1D05; Wed, 19 Dec 2012 16:42:57 +0000
X-Env-Sender: jun.nakajima@intel.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355935374!4723711!1
X-Originating-IP: [192.55.52.89]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27029 invoked from network); 19 Dec 2012 16:42:54 -0000
Received: from mga05.intel.com (HELO fmsmga101.fm.intel.com) (192.55.52.89)
	by server-9.tower-21.messagelabs.com with SMTP;
	19 Dec 2012 16:42:54 -0000
Received: from mail-gg0-f198.google.com ([209.85.161.198])
	by mga01.intel.com with ESMTP/TLS/RC4-SHA; 19 Dec 2012 08:42:53 -0800
Received: by mail-gg0-f198.google.com with SMTP id f4so3226113ggn.1
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 08:42:52 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:mime-version:in-reply-to:references:date:message-id
	:subject:from:to:cc:content-type:x-gm-message-state;
	bh=cmhONlHKd80BNBfpE3CGGrSLpLVMR7ZTLBAIiiqTcyY=;
	b=HDppnA1GGWqbFjMbE1ZEMF9/x15WqMRQOOtkBjFzXzaXARSmyBvfmkmiS5E3yj9mao
	BxwNQOwEiRt1ItanxJFFpupP+62cgOoFKiHc0HOpRvMDjKXBfmtl7D4e/42fRtn8Z0CZ
	+ERJq5wKaikfAid/jp/sVrsVmjEn/JQ7/3RsXfh0fQ2Q601SQukEguzOO6H/CUjN+lu2
	MDyxdL6hZYSfG+WhhyaH8GNvGYZB/RHG0c0ZmI5dFXJf1k1Z2Z/0K0LgvPAItzPpT6CJ
	Y82KR0iE7K+Bwup++PFfhha/tj1ajJE5iL+MgHcVFwCftv47sSwFH8XSZl1NgW+qZQ5d
	9vcg==
X-Received: by 10.49.74.198 with SMTP id w6mr3369390qev.57.1355935372104;
	Wed, 19 Dec 2012 08:42:52 -0800 (PST)
MIME-Version: 1.0
Received: by 10.49.74.198 with SMTP id w6mr3369378qev.57.1355935371980; Wed,
	19 Dec 2012 08:42:51 -0800 (PST)
Received: by 10.229.27.148 with HTTP; Wed, 19 Dec 2012 08:42:51 -0800 (PST)
In-Reply-To: <1355946267-24227-4-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-4-git-send-email-xiantao.zhang@intel.com>
Date: Wed, 19 Dec 2012 08:42:51 -0800
Message-ID: <CAL54oT2Sv0W3gwa7-ufdc904WXB2rC0mLEskjQ4HUgkC+TY6mQ@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Xiantao Zhang <xiantao.zhang@intel.com>
X-Gm-Message-State: ALoCoQmsW8sJQAgV+1NfLwTklvun6qBJG2Af3FrP/hqxAGfu+87IDTnr8Ibd1mUHq9EA9IesKPGP8mGW8fZXdk4Dgh1OktXQJ+txPDAklD7n/wFtP6hRWzSl5xoy9F2tLZ1UiOyC1fGdjztf5kQHycFhltQACBA5Aw==
Cc: tim@xen.org, keir@xen.org, eddie.dong@intel.com, JBeulich@suse.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 11:44 AM, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
>
> Implement a guest EPT PT walker; some logic is based on the shadow
> ia32e PT walker. During the PT walk, if the target pages are
> not in memory, use the RETRY mechanism to get a chance to bring the
> target pages back.

It's just a programming style, but I would add 'break' for the default
case in the switch statements. Also, we should set an error code in some
of the switch statements. We should not depend on printed messages for
error handling or debugging.

>
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> ---
>  xen/arch/x86/hvm/hvm.c              |    1 +
>  xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
>  xen/arch/x86/mm/guest_walk.c        |   16 ++-
>  xen/arch/x86/mm/hap/Makefile        |    1 +
>  xen/arch/x86/mm/hap/nested_ept.c    |  276 +++++++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
>  xen/arch/x86/mm/shadow/multi.c      |    2 +-
>  xen/include/asm-x86/guest_pt.h      |    8 +
>  xen/include/asm-x86/hvm/nestedhvm.h |    1 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
>  xen/include/asm-x86/hvm/vmx/vmx.h   |   28 ++++
>  xen/include/asm-x86/hvm/vmx/vvmx.h  |   14 ++
>  12 files changed, 382 insertions(+), 10 deletions(-)
>  create mode 100644 xen/arch/x86/mm/hap/nested_ept.c
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 1cae8a8..3cd0075 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>                                               access_r, access_w, access_x);
>          switch (rv) {
>          case NESTEDHVM_PAGEFAULT_DONE:
> +        case NESTEDHVM_PAGEFAULT_RETRY:
>              return 1;
>          case NESTEDHVM_PAGEFAULT_L1_ERROR:
>              /* An error occured while translating gpa from
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 4495dd6..76cf757 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
>  {
>      int i;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> +    void *vvmcs = nvcpu->nv_vvmcx;
>
>      for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
>          shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
> +
> +    /* Adjust exit_reason/exit_qualification for violation case */
> +    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
> +                EXIT_REASON_EPT_VIOLATION ) {
> +        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
> +        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
> +    }
>  }
>
>  static void load_vvmcs_host_state(struct vcpu *v)
> @@ -1454,8 +1463,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>                        unsigned int *page_order,
>                        bool_t access_r, bool_t access_w, bool_t access_x)
>  {
> -    /*TODO:*/
> -    return 0;
> +    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
> +    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
> +    int rc;
> +    unsigned long gfn;
> +    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
> +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> +
> +    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
> +                                &exit_qual, &exit_reason);
> +    switch ( rc ) {
> +        case EPT_TRANSLATE_SUCCEED:
> +            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
> +            rc = NESTEDHVM_PAGEFAULT_DONE;
> +            break;
> +        case EPT_TRANSLATE_VIOLATION:
> +        case EPT_TRANSLATE_MISCONFIG:
> +            rc = NESTEDHVM_PAGEFAULT_INJECT;
> +            nvmx->ept_exit.exit_reason = exit_reason;
> +            nvmx->ept_exit.exit_qual = exit_qual;
> +            break;
> +        case EPT_TRANSLATE_RETRY:
> +            rc = NESTEDHVM_PAGEFAULT_RETRY;
> +            break;
> +        case EPT_TRANSLATE_ERR_PAGE:
> +            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
> +            break;
> +        default:
> +            gdprintk(XENLOG_ERR, "GUEST EPT translation error!\n");

Add a break here, and set rc to an error value.

> +    }
> +
> +    return rc;
>  }
>
>  void nvmx_idtv_handling(void)
> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> index 0f08fb0..1c165c6 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
>
>  /* If the map is non-NULL, we leave this function having
>   * acquired an extra ref on mfn_to_page(*mfn) */
> -static inline void *map_domain_gfn(struct p2m_domain *p2m,
> -                                   gfn_t gfn,
> +void *map_domain_gfn(struct p2m_domain *p2m,
> +                                   gfn_t gfn,
>                                     mfn_t *mfn,
>                                     p2m_type_t *p2mt,
> -                                   uint32_t *rc)
> +                                   p2m_query_t q,
> +                                   uint32_t *rc)
>  {
>      struct page_info *page;
>      void *map;
>
>      /* Translate the gfn, unsharing if shared */
>      page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
> -                                  P2M_ALLOC | P2M_UNSHARE);
> +                                  q);
>      if ( p2m_is_paging(*p2mt) )
>      {
>          ASSERT(!p2m_is_nestedp2m(p2m));
> @@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
>      return map;
>  }
>
> -
>  /* Walk the guest pagetables, after the manner of a hardware walker. */
>  /* Because the walk is essentially random, it can cause a deadlock
>   * warning in the p2m locking code. Highly unlikely this is an actual
> @@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>      uint32_t gflags, mflags, iflags, rc = 0;
>      int smep;
>      bool_t pse1G = 0, pse2M = 0;
> +    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
>
>      perfc_incr(guest_walk);
>      memset(gw, 0, sizeof(*gw));
> @@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>      l3p = map_domain_gfn(p2m,
>                           guest_l4e_get_gfn(gw->l4e),
>                           &gw->l3mfn,
> -                         &p2mt,
> +                         &p2mt,
> +                         qt,
>                           &rc);
>      if(l3p == NULL)
>          goto out;
> @@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>                           guest_l3e_get_gfn(gw->l3e),
>                           &gw->l2mfn,
>                           &p2mt,
> +                         qt,
>                           &rc);
>      if(l2p == NULL)
>          goto out;
> @@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>                               guest_l2e_get_gfn(gw->l2e),
>                               &gw->l1mfn,
>                               &p2mt,
> +                             qt,
>                               &rc);
>          if(l1p == NULL)
>              goto out;
> diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
> index 80a6bec..68f2bb5 100644
> --- a/xen/arch/x86/mm/hap/Makefile
> +++ b/xen/arch/x86/mm/hap/Makefile
> @@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
>  obj-y += guest_walk_3level.o
>  obj-$(x86_64) += guest_walk_4level.o
>  obj-y += nested_hap.o
> +obj-y += nested_ept.o
>
>  guest_walk_%level.o: guest_walk.c Makefile
>         $(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
> diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> new file mode 100644
> index 0000000..5f80d82
> --- /dev/null
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -0,0 +1,276 @@
> +/*
> + * nested_ept.c: Handling virtualized EPT for guest in nested case.
> + *
> + * Copyright (c) 2012, Intel Corporation
> + *  Xiantao Zhang <xiantao.zhang@intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +#include <asm/domain.h>
> +#include <asm/page.h>
> +#include <asm/paging.h>
> +#include <asm/p2m.h>
> +#include <asm/mem_event.h>
> +#include <public/mem_event.h>
> +#include <asm/mem_sharing.h>
> +#include <xen/event.h>
> +#include <asm/hap.h>
> +#include <asm/hvm/support.h>
> +
> +#include <asm/hvm/nestedhvm.h>
> +
> +#include "private.h"
> +
> +#include <asm/hvm/vmx/vmx.h>
> +#include <asm/hvm/vmx/vvmx.h>
> +
> +/* EPT always use 4-level paging structure */
> +#define GUEST_PAGING_LEVELS 4
> +#include <asm/guest_pt.h>
> +
> +/* Must-be-reserved bits in all level entries */
> +#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
> +                     ~((1ull << paddr_bits) - 1))
> +
> +/*
> + *TODO: Just leave it as 0 here for compile pass, will
> + * define real capabilities in the subsequent patches.
> + */
> +#define NEPT_VPID_CAP_BITS 0
> +
> +
> +#define NEPT_1G_ENTRY_FLAG (1 << 11)
> +#define NEPT_2M_ENTRY_FLAG (1 << 10)
> +#define NEPT_4K_ENTRY_FLAG (1 << 9)
> +
> +bool_t nept_sp_entry(ept_entry_t e)
> +{
> +    return !!(e.sp);
> +}
> +
> +static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
> +{
> +    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
> +
> +    switch ( level ) {
> +    case 1:
> +        break;
> +    case 2 ... 3:
> +        if (nept_sp_entry(e))
> +            rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
> +        else
> +            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
> +        break;
> +    case 4:
> +        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
> +    break;
> +    default:
> +        printk("Unsupported EPT paging level: %d\n", level);

break (or return) and we need to take a definitive action here.

> +    }
> +    return !!(e.epte & rsv_bits);
> +}
> +
> +/* EMT checking*/
> +static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
> +{
> +    if ( e.sp || level == 1 ) {
> +        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
> +                e.emt == EPT_EMT_RSV2 )
> +            return 1;
> +    }
> +    return 0;
> +}
> +
> +static bool_t nept_rwx_bits_check(ept_entry_t e) {
> +    /*write only or write/execute only*/
> +    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
> +
> +    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
> +        return 1;
> +
> +    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
> +                        VMX_EPT_EXEC_ONLY_SUPPORTED))
> +        return 1;
> +
> +    return 0;
> +}
> +
> +/* nept's misconfiguration check */
> +static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
> +{
> +    return (nept_rsv_bits_check(e, level) ||
> +                nept_emt_bits_check(e, level) ||
> +                nept_rwx_bits_check(e));
> +}
> +
> +static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
> +{
> +    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
> +}
> +
> +/* nept's non-present check */
> +static bool_t nept_non_present_check(ept_entry_t e)
> +{
> +    if (e.epte & EPTE_RWX_MASK)
> +        return 0;
> +    return 1;
> +}
> +
> +uint64_t nept_get_ept_vpid_cap(void)
> +{
> +    return NEPT_VPID_CAP_BITS;
> +}
> +
> +static int ept_lvl_table_offset(unsigned long gpa, int lvl)
> +{
> +    return (gpa >>(EPT_L4_PAGETABLE_SHIFT -(4 - lvl) * 9)) &
> +                (EPT_PAGETABLE_ENTRIES -1 );
> +}
> +
> +static uint32_t
> +nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
> +{
> +    int lvl;
> +    p2m_type_t p2mt;
> +    uint32_t rc = 0, ret = 0, gflags;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = d->arch.p2m;
> +    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
> +    mfn_t lxmfn;
> +    ept_entry_t *lxp = NULL;
> +
> +    memset(gw, 0, sizeof(*gw));
> +
> +    for (lvl = 4; lvl > 0; lvl--)
> +    {
> +        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
> +        if ( !lxp )
> +            goto map_err;
> +        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
> +        unmap_domain_page(lxp);
> +        put_page(mfn_to_page(mfn_x(lxmfn)));
> +
> +        if (nept_non_present_check(gw->lxe[lvl]))
> +            goto non_present;
> +
> +        if (nept_misconfiguration_check(gw->lxe[lvl], lvl))
> +            goto misconfig_err;
> +
> +        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
> +        {
> +            /* Generate a fake l1 table entry so callers don't all
> +             * have to understand superpages. */
> +            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
> +            gfn_t start = _gfn(gw->lxe[lvl].mfn);
> +            /* Increment the pfn by the right number of 4k pages. */
> +            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
> +                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
> +            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
> +                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
> +            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
> +            goto done;
> +        }
> +        if ( lvl > 1 )
> +            base_gfn = _gfn(gw->lxe[lvl].mfn);
> +    }
> +
> +    /* If this is not a super entry, we can reach here. */
> +    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
> +    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
> +
> +done:
> +    ret = EPT_TRANSLATE_SUCCEED;
> +    goto out;
> +
> +map_err:
> +    if ( rc == _PAGE_PAGED )
> +        ret = EPT_TRANSLATE_RETRY;
> +    else
> +        ret = EPT_TRANSLATE_ERR_PAGE;
> +    goto out;
> +
> +misconfig_err:
> +    ret =  EPT_TRANSLATE_MISCONFIG;
> +    goto out;
> +
> +non_present:
> +    ret = EPT_TRANSLATE_VIOLATION;
> +    /* fall through. */
> +out:
> +    return ret;
> +}
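As an aside, the fake-l1 arithmetic in the superpage branch above can be checked in isolation. A minimal sketch (the helper name is made up; it just mirrors the quoted mask/add logic):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the fake-l1 gfn computation for a superpage hit at level
 * lvl (2 = 2M, 3 = 1G): keep the superpage-aligned part of the
 * entry's frame number and add the 4K-page offset taken from the
 * guest address.  Mirrors the quoted code; not an actual Xen helper.
 */
static uint64_t fake_l1_gfn(uint64_t entry_gfn, uint64_t l2ga, int lvl)
{
    uint64_t mask = (1ULL << ((lvl - 1) * 9)) - 1; /* 4K pages per superpage */

    return (entry_gfn & ~mask) + ((l2ga >> 12) & mask);
}
```

E.g. a 2M entry covering gfns 0x200-0x3ff hit at guest address 0x234000 yields the 4K gfn 0x234.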
> +
> +/* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
> +
> +int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
> +                        unsigned int *page_order, uint32_t rwx_acc,
> +                        unsigned long *l1gfn, uint64_t *exit_qual,
> +                        uint32_t *exit_reason)
> +{
> +    uint32_t rc, rwx_bits = 0;
> +    ept_walk_t gw;
> +    rwx_acc &= EPTE_RWX_MASK;
> +
> +    *l1gfn = INVALID_GFN;
> +
> +    rc = nept_walk_tables(v, l2ga, &gw);
> +    switch ( rc ) {
> +    case EPT_TRANSLATE_SUCCEED:
> +        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
> +        {
> +            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
> +                            EPTE_RWX_MASK;
> +            *page_order = 9;
> +        }
> +        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG ) {
> +            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
> +                    gw.lxe[1].epte & EPTE_RWX_MASK;
> +            *page_order = 0;
> +        }
> +        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
> +        {
> +            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
> +            *page_order = 18;
> +        }
> +        else
> +        {
> +            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
> +            BUG();
> +        }
> +        if ( nept_permission_check(rwx_acc, rwx_bits) )
> +        {
> +            *l1gfn = gw.lxe[0].mfn;
> +            break;
> +        }
> +        rc = EPT_TRANSLATE_VIOLATION;
> +    /* Fall through to EPT violation if permission check fails. */
> +    case EPT_TRANSLATE_VIOLATION:
> +        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
> +        *exit_reason = EXIT_REASON_EPT_VIOLATION;
> +        break;
> +
> +    case EPT_TRANSLATE_ERR_PAGE:
> +        break;
> +    case EPT_TRANSLATE_MISCONFIG:
> +        rc = EPT_TRANSLATE_MISCONFIG;
> +        *exit_qual = 0;
> +        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
> +        break;
> +    case EPT_TRANSLATE_RETRY:
> +        break;
> +    default:
> +        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);

Same here: add a break and set a definite error value in the default case.

> +    }
> +    return rc;
> +}
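For reference, the page_order values returned above (0, 9, 18) correspond to 4K, 2M and 1G mappings respectively; a throwaway sketch of that relation (not patch code):

```c
#include <assert.h>
#include <stdint.h>

/* Mapping size in bytes for a given page order (4K base pages). */
static uint64_t order_to_bytes(unsigned int page_order)
{
    return 1ULL << (page_order + 12);
}
```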
> diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
> index 8787c91..6d1264b 100644
> --- a/xen/arch/x86/mm/hap/nested_hap.c
> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> @@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      /* let caller to handle these two cases */
>      switch (rv) {
>      case NESTEDHVM_PAGEFAULT_INJECT:
> -        return rv;
> +    case NESTEDHVM_PAGEFAULT_RETRY:
>      case NESTEDHVM_PAGEFAULT_L1_ERROR:
>          return rv;
>      case NESTEDHVM_PAGEFAULT_DONE:
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index 4967da1..409198c 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
>      /* Translate the GFN to an MFN */
>      ASSERT(!paging_locked_by_me(v->domain));
>      mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
> -
> +
>      if ( p2m_is_readonly(p2mt) )
>      {
>          put_gfn(v->domain, gfn);
> diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
> index 4e1dda0..db8a0b6 100644
> --- a/xen/include/asm-x86/guest_pt.h
> +++ b/xen/include/asm-x86/guest_pt.h
> @@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
>  #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
>  #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
>  #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
> +#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
> +
> +extern void *map_domain_gfn(struct p2m_domain *p2m,
> +                                   gfn_t gfn,
> +                                   mfn_t *mfn,
> +                                   p2m_type_t *p2mt,
> +                                   p2m_query_t q,
> +                                   uint32_t *rc);
>
>  extern uint32_t
>  guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
> diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
> index 91fde0b..649c511 100644
> --- a/xen/include/asm-x86/hvm/nestedhvm.h
> +++ b/xen/include/asm-x86/hvm/nestedhvm.h
> @@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
>  #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
>  #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
>  #define NESTEDHVM_PAGEFAULT_MMIO       4
> +#define NESTEDHVM_PAGEFAULT_RETRY      5
>  int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      bool_t access_r, bool_t access_w, bool_t access_x);
>
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index ef2c9c9..9a728b6 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
>
>  extern bool_t cpu_has_vmx_ins_outs_instr_info;
>
> +#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
>  #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
>  #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
>  #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index aa5b080..feaaa80 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -51,6 +51,11 @@ typedef union {
>      u64 epte;
>  } ept_entry_t;
>
> +typedef struct {
> +    /* use lxe[0] to save the result */
> +    ept_entry_t lxe[5];
> +} ept_walk_t;
> +
>  #define EPT_TABLE_ORDER         9
>  #define EPTE_SUPER_PAGE_MASK    0x80
>  #define EPTE_MFN_MASK           0xffffffffff000ULL
> @@ -60,6 +65,28 @@ typedef union {
>  #define EPTE_AVAIL1_SHIFT       8
>  #define EPTE_EMT_SHIFT          3
>  #define EPTE_IGMT_SHIFT         6
> +#define EPTE_RWX_MASK           0x7
> +#define EPTE_FLAG_MASK          0x7f
> +
> +#define EPT_EMT_UC              0
> +#define EPT_EMT_WC              1
> +#define EPT_EMT_RSV0            2
> +#define EPT_EMT_RSV1            3
> +#define EPT_EMT_WT              4
> +#define EPT_EMT_WP              5
> +#define EPT_EMT_WB              6
> +#define EPT_EMT_RSV2            7
> +
> +typedef enum {
> +    ept_access_n     = 0, /* No access permissions allowed */
> +    ept_access_r     = 1,
> +    ept_access_w     = 2,
> +    ept_access_rw    = 3,
> +    ept_access_x     = 4,
> +    ept_access_rx    = 5,
> +    ept_access_wx    = 6,
> +    ept_access_all   = 7,
> +} ept_access_t;
>
>  void vmx_asm_vmexit_handler(struct cpu_user_regs);
>  void vmx_asm_do_vmentry(void);
> @@ -419,6 +446,7 @@ void update_guest_eip(void);
>  #define _EPT_GLA_FAULT              8
>  #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
>
> +#define EPT_L4_PAGETABLE_SHIFT      39
>  #define EPT_PAGETABLE_ENTRIES       512
>
>  #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
> diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
> index 422f006..8eb377b 100644
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -32,6 +32,10 @@ struct nestedvmx {
>          unsigned long intr_info;
>          u32           error_code;
>      } intr;
> +    struct {
> +        uint32_t exit_reason;
> +        uint32_t exit_qual;
> +    } ept_exit;
>  };
>
>  #define vcpu_2_nvmx(v) (vcpu_nestedhvm(v).u.nvmx)
> @@ -109,6 +113,12 @@ void nvmx_domain_relinquish_resources(struct domain *d);
>  int nvmx_handle_vmxon(struct cpu_user_regs *regs);
>  int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
>
> +#define EPT_TRANSLATE_SUCCEED   0
> +#define EPT_TRANSLATE_VIOLATION 1
> +#define EPT_TRANSLATE_ERR_PAGE  2
> +#define EPT_TRANSLATE_MISCONFIG 3
> +#define EPT_TRANSLATE_RETRY     4
> +
>  int
>  nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>                        unsigned int *page_order,
> @@ -192,5 +202,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
>  int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
>                            unsigned int exit_reason);
>
> +int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
> +                        unsigned int *page_order, uint32_t rwx_acc,
> +                        unsigned long *l1gfn, uint64_t *exit_qual,
> +                        uint32_t *exit_reason);
>  #endif /* __ASM_X86_HVM_VVMX_H__ */
>
> --
> 1.7.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
Jun
Intel Open Source Technology Center


From xen-devel-bounces@lists.xen.org Wed Dec 19 16:43:13 2012
In-Reply-To: <1355946267-24227-4-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-4-git-send-email-xiantao.zhang@intel.com>
Date: Wed, 19 Dec 2012 08:42:51 -0800
Message-ID: <CAL54oT2Sv0W3gwa7-ufdc904WXB2rC0mLEskjQ4HUgkC+TY6mQ@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: tim@xen.org, keir@xen.org, eddie.dong@intel.com, JBeulich@suse.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 03/10] nested_ept: Implement guest ept's
	walker

On Wed, Dec 19, 2012 at 11:44 AM, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
>
> Implement the guest EPT page-table walker; some of the logic is based
> on the shadow code's ia32e PT walker. During the walk, if a target
> page is not in memory, use the RETRY mechanism so the target page
> gets a chance to be brought back.

It's just a programming style point, but I would add 'break' for the
default case in the switch statements. Also, we should set an error
code in some of those switch statements; we should not depend on
printed messages for error handling or debugging.
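Concretely, the shape I have in mind for those default cases is something like this (illustrative only; classify_rc is not a function in the patch, and the constants just mirror the patch's vvmx.h values):

```c
#include <assert.h>

/* Error codes as defined in the patch's vvmx.h. */
#define EPT_TRANSLATE_SUCCEED   0
#define EPT_TRANSLATE_ERR_PAGE  2

/*
 * Suggested pattern: the default case logs, sets a definite error
 * value, and breaks, instead of falling off the end of the switch
 * with whatever rc happened to hold.
 */
static int classify_rc(int rc)
{
    switch ( rc )
    {
    case EPT_TRANSLATE_SUCCEED:
        break;
    default:
        /* gdprintk(XENLOG_ERR, ...) in the real code */
        rc = EPT_TRANSLATE_ERR_PAGE;
        break;
    }
    return rc;
}
```

That way an unexpected value can never propagate out of the function as a success.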

>
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> ---
>  xen/arch/x86/hvm/hvm.c              |    1 +
>  xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
>  xen/arch/x86/mm/guest_walk.c        |   16 ++-
>  xen/arch/x86/mm/hap/Makefile        |    1 +
>  xen/arch/x86/mm/hap/nested_ept.c    |  276 +++++++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
>  xen/arch/x86/mm/shadow/multi.c      |    2 +-
>  xen/include/asm-x86/guest_pt.h      |    8 +
>  xen/include/asm-x86/hvm/nestedhvm.h |    1 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
>  xen/include/asm-x86/hvm/vmx/vmx.h   |   28 ++++
>  xen/include/asm-x86/hvm/vmx/vvmx.h  |   14 ++
>  12 files changed, 382 insertions(+), 10 deletions(-)
>  create mode 100644 xen/arch/x86/mm/hap/nested_ept.c
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 1cae8a8..3cd0075 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>                                               access_r, access_w, access_x);
>          switch (rv) {
>          case NESTEDHVM_PAGEFAULT_DONE:
> +        case NESTEDHVM_PAGEFAULT_RETRY:
>              return 1;
>          case NESTEDHVM_PAGEFAULT_L1_ERROR:
>              /* An error occured while translating gpa from
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 4495dd6..76cf757 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
>  {
>      int i;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> +    void *vvmcs = nvcpu->nv_vvmcx;
>
>      for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
>          shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
> +
> +    /* Adjust exit_reason/exit_qualification for the violation case */
> +    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
> +                EXIT_REASON_EPT_VIOLATION ) {
> +        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
> +        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
> +    }
>  }
>
>  static void load_vvmcs_host_state(struct vcpu *v)
> @@ -1454,8 +1463,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>                        unsigned int *page_order,
>                        bool_t access_r, bool_t access_w, bool_t access_x)
>  {
> -    /*TODO:*/
> -    return 0;
> +    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
> +    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
> +    int rc;
> +    unsigned long gfn;
> +    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
> +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> +
> +    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
> +                                &exit_qual, &exit_reason);
> +    switch ( rc ) {
> +        case EPT_TRANSLATE_SUCCEED:
> +            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
> +            rc = NESTEDHVM_PAGEFAULT_DONE;
> +            break;
> +        case EPT_TRANSLATE_VIOLATION:
> +        case EPT_TRANSLATE_MISCONFIG:
> +            rc = NESTEDHVM_PAGEFAULT_INJECT;
> +            nvmx->ept_exit.exit_reason = exit_reason;
> +            nvmx->ept_exit.exit_qual = exit_qual;
> +            break;
> +        case EPT_TRANSLATE_RETRY:
> +            rc = NESTEDHVM_PAGEFAULT_RETRY;
> +            break;
> +        case EPT_TRANSLATE_ERR_PAGE:
> +            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
> +            break;
> +        default:
> +            gdprintk(XENLOG_ERR, "GUEST EPT translation error!\n");

A break is missing here, and rc should be set to an error value.

> +    }
> +
> +    return rc;
>  }
>
>  void nvmx_idtv_handling(void)
> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> index 0f08fb0..1c165c6 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
>
>  /* If the map is non-NULL, we leave this function having
>   * acquired an extra ref on mfn_to_page(*mfn) */
> -static inline void *map_domain_gfn(struct p2m_domain *p2m,
> -                                   gfn_t gfn,
> +void *map_domain_gfn(struct p2m_domain *p2m,
> +                                   gfn_t gfn,
>                                     mfn_t *mfn,
>                                     p2m_type_t *p2mt,
> -                                   uint32_t *rc)
> +                                   p2m_query_t q,
> +                                   uint32_t *rc)
>  {
>      struct page_info *page;
>      void *map;
>
>      /* Translate the gfn, unsharing if shared */
>      page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
> -                                  P2M_ALLOC | P2M_UNSHARE);
> +                                  q);
>      if ( p2m_is_paging(*p2mt) )
>      {
>          ASSERT(!p2m_is_nestedp2m(p2m));
> @@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
>      return map;
>  }
>
> -
>  /* Walk the guest pagetables, after the manner of a hardware walker. */
>  /* Because the walk is essentially random, it can cause a deadlock
>   * warning in the p2m locking code. Highly unlikely this is an actual
> @@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>      uint32_t gflags, mflags, iflags, rc = 0;
>      int smep;
>      bool_t pse1G = 0, pse2M = 0;
> +    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
>
>      perfc_incr(guest_walk);
>      memset(gw, 0, sizeof(*gw));
> @@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>      l3p = map_domain_gfn(p2m,
>                           guest_l4e_get_gfn(gw->l4e),
>                           &gw->l3mfn,
> -                         &p2mt,
> +                         &p2mt,
> +                         qt,
>                           &rc);
>      if(l3p == NULL)
>          goto out;
> @@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>                           guest_l3e_get_gfn(gw->l3e),
>                           &gw->l2mfn,
>                           &p2mt,
> +                         qt,
>                           &rc);
>      if(l2p == NULL)
>          goto out;
> @@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>                               guest_l2e_get_gfn(gw->l2e),
>                               &gw->l1mfn,
>                               &p2mt,
> +                             qt,
>                               &rc);
>          if(l1p == NULL)
>              goto out;
> diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
> index 80a6bec..68f2bb5 100644
> --- a/xen/arch/x86/mm/hap/Makefile
> +++ b/xen/arch/x86/mm/hap/Makefile
> @@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
>  obj-y += guest_walk_3level.o
>  obj-$(x86_64) += guest_walk_4level.o
>  obj-y += nested_hap.o
> +obj-y += nested_ept.o
>
>  guest_walk_%level.o: guest_walk.c Makefile
>         $(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
> diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
> new file mode 100644
> index 0000000..5f80d82
> --- /dev/null
> +++ b/xen/arch/x86/mm/hap/nested_ept.c
> @@ -0,0 +1,276 @@
> +/*
> + * nested_ept.c: Handling virtualized EPT for a guest in the nested case.
> + *
> + * Copyright (c) 2012, Intel Corporation
> + *  Xiantao Zhang <xiantao.zhang@intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +#include <asm/domain.h>
> +#include <asm/page.h>
> +#include <asm/paging.h>
> +#include <asm/p2m.h>
> +#include <asm/mem_event.h>
> +#include <public/mem_event.h>
> +#include <asm/mem_sharing.h>
> +#include <xen/event.h>
> +#include <asm/hap.h>
> +#include <asm/hvm/support.h>
> +
> +#include <asm/hvm/nestedhvm.h>
> +
> +#include "private.h"
> +
> +#include <asm/hvm/vmx/vmx.h>
> +#include <asm/hvm/vmx/vvmx.h>
> +
> +/* EPT always uses a 4-level paging structure */
> +#define GUEST_PAGING_LEVELS 4
> +#include <asm/guest_pt.h>
> +
> +/* Bits that must be reserved in entries at all levels */
> +#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
> +                     ~((1ull << paddr_bits) - 1))
> +
> +/*
> + * TODO: Just leave it as 0 here so the code compiles; real
> + * capabilities will be defined in subsequent patches.
> + */
> +#define NEPT_VPID_CAP_BITS 0
> +
> +
> +#define NEPT_1G_ENTRY_FLAG (1 << 11)
> +#define NEPT_2M_ENTRY_FLAG (1 << 10)
> +#define NEPT_4K_ENTRY_FLAG (1 << 9)
> +
> +bool_t nept_sp_entry(ept_entry_t e)
> +{
> +    return !!(e.sp);
> +}
> +
> +static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
> +{
> +    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
> +
> +    switch ( level ) {
> +    case 1:
> +        break;
> +    case 2 ... 3:
> +        if (nept_sp_entry(e))
> +            rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
> +        else
> +            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
> +        break;
> +    case 4:
> +        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
> +    break;
> +    default:
> +        printk("Unsupported EPT paging level: %d\n", level);

A break (or return) is needed, and we should take a definitive action here rather than just printing a message.

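One definitive option: make an unexpected level report every bit as reserved, so any present entry at that level is flagged as misconfigured. A sketch (helper names are made up, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: for an unsupported paging level, return an all-ones
 * reserved mask so the reserved-bit check trips on any present entry,
 * rather than silently checking only the common reserved bits.
 */
static uint64_t rsv_bits_for_bad_level(void)
{
    return ~0ULL;
}

/* The reserved-bit test itself, as in the quoted function's return. */
static int rsv_check_trips(uint64_t epte, uint64_t rsv_bits)
{
    return (epte & rsv_bits) != 0;
}
```

With that, a bogus level deterministically surfaces as EPT_TRANSLATE_MISCONFIG instead of depending on the printk.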
> +    }
> +    return !!(e.epte & rsv_bits);
> +}
> +
> +/* EMT checking */
> +static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
> +{
> +    if ( e.sp || level == 1 ) {
> +        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
> +                e.emt == EPT_EMT_RSV2 )
> +            return 1;
> +    }
> +    return 0;
> +}
> +
> +static bool_t nept_rwx_bits_check(ept_entry_t e) {
> +    /* write-only or write/execute-only */
> +    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
> +
> +    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
> +        return 1;
> +
> +    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
> +                        VMX_EPT_EXEC_ONLY_SUPPORTED))
> +        return 1;
> +
> +    return 0;
> +}
> +
> +/* nept's misconfiguration check */
> +static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
> +{
> +    return (nept_rsv_bits_check(e, level) ||
> +                nept_emt_bits_check(e, level) ||
> +                nept_rwx_bits_check(e));
> +}
> +
> +static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
> +{
> +    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
> +}
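The check above passes exactly when none of the requested rwx bits is missing from the rights accumulated along the walk; the same logic in isolation (constant as in the patch, helper name made up):

```c
#include <assert.h>
#include <stdint.h>

#define EPTE_RWX_MASK 0x7

/*
 * Same logic as nept_permission_check in the patch: allowed iff no
 * requested right (rwx_acc) is absent from the granted rights
 * (rwx_bits).
 */
static int permission_ok(uint32_t rwx_acc, uint32_t rwx_bits)
{
    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
}
```

E.g. a read (0x1) against read+execute rights (0x5) passes, while a write (0x2) against the same rights fails.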
> +
> +/* nept's non-present check */
> +static bool_t nept_non_present_check(ept_entry_t e)
> +{
> +    if (e.epte & EPTE_RWX_MASK)
> +        return 0;
> +    return 1;
> +}
> +
> +uint64_t nept_get_ept_vpid_cap(void)
> +{
> +    return NEPT_VPID_CAP_BITS;
> +}
> +
> +static int ept_lvl_table_offset(unsigned long gpa, int lvl)
> +{
> +    return (gpa >>(EPT_L4_PAGETABLE_SHIFT -(4 - lvl) * 9)) &
> +                (EPT_PAGETABLE_ENTRIES -1 );
> +}
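The helper above pulls successive 9-bit index fields out of the guest address (level 4 = bits 47:39 down to level 1 = bits 20:12); the arithmetic can be sanity-checked standalone (same math as the quoted helper, renamed here):

```c
#include <assert.h>
#include <stdint.h>

#define EPT_L4_PAGETABLE_SHIFT 39
#define EPT_PAGETABLE_ENTRIES  512

/* Same arithmetic as the quoted ept_lvl_table_offset(). */
static int lvl_offset(uint64_t gpa, int lvl)
{
    return (gpa >> (EPT_L4_PAGETABLE_SHIFT - (4 - lvl) * 9)) &
           (EPT_PAGETABLE_ENTRIES - 1);
}
```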
> +
> +static uint32_t
> +nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
> +{
> +    int lvl;
> +    p2m_type_t p2mt;
> +    uint32_t rc = 0, ret = 0, gflags;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = d->arch.p2m;
> +    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
> +    mfn_t lxmfn;
> +    ept_entry_t *lxp = NULL;
> +
> +    memset(gw, 0, sizeof(*gw));
> +
> +    for (lvl = 4; lvl > 0; lvl--)
> +    {
> +        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
> +        if ( !lxp )
> +            goto map_err;
> +        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
> +        unmap_domain_page(lxp);
> +        put_page(mfn_to_page(mfn_x(lxmfn)));
> +
> +        if (nept_non_present_check(gw->lxe[lvl]))
> +            goto non_present;
> +
> +        if (nept_misconfiguration_check(gw->lxe[lvl], lvl))
> +            goto misconfig_err;
> +
> +        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
> +        {
> +            /* Generate a fake l1 table entry so callers don't all
> +             * have to understand superpages. */
> +            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
> +            gfn_t start = _gfn(gw->lxe[lvl].mfn);
> +            /* Increment the pfn by the right number of 4k pages. */
> +            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
> +                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
> +            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
> +                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
> +            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
> +            goto done;
> +        }
> +        if ( lvl > 1 )
> +            base_gfn = _gfn(gw->lxe[lvl].mfn);
> +    }
> +
> +    /* We can reach here only if this is not a superpage entry. */
> +    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
> +    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
> +
> +done:
> +    ret = EPT_TRANSLATE_SUCCEED;
> +    goto out;
> +
> +map_err:
> +    if ( rc == _PAGE_PAGED )
> +        ret = EPT_TRANSLATE_RETRY;
> +    else
> +        ret = EPT_TRANSLATE_ERR_PAGE;
> +    goto out;
> +
> +misconfig_err:
> +    ret =  EPT_TRANSLATE_MISCONFIG;
> +    goto out;
> +
> +non_present:
> +    ret = EPT_TRANSLATE_VIOLATION;
> +    /* fall through. */
> +out:
> +    return ret;
> +}
> +
> +/* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
> +
> +int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
> +                        unsigned int *page_order, uint32_t rwx_acc,
> +                        unsigned long *l1gfn, uint64_t *exit_qual,
> +                        uint32_t *exit_reason)
> +{
> +    uint32_t rc, rwx_bits = 0;
> +    ept_walk_t gw;
> +    rwx_acc &= EPTE_RWX_MASK;
> +
> +    *l1gfn = INVALID_GFN;
> +
> +    rc = nept_walk_tables(v, l2ga, &gw);
> +    switch ( rc ) {
> +    case EPT_TRANSLATE_SUCCEED:
> +        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
> +        {
> +            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
> +                            EPTE_RWX_MASK;
> +            *page_order = 9;
> +        }
> +        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG ) {
> +            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
> +                    gw.lxe[1].epte & EPTE_RWX_MASK;
> +            *page_order = 0;
> +        }
> +        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
> +        {
> +            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
> +            *page_order = 18;
> +        }
> +        else
> +        {
> +            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
> +            BUG();
> +        }
> +        if ( nept_permission_check(rwx_acc, rwx_bits) )
> +        {
> +            *l1gfn = gw.lxe[0].mfn;
> +            break;
> +        }
> +        rc = EPT_TRANSLATE_VIOLATION;
> +    /* Fall through to EPT violation if permission check fails. */
> +    case EPT_TRANSLATE_VIOLATION:
> +        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
> +        *exit_reason = EXIT_REASON_EPT_VIOLATION;
> +        break;
> +
> +    case EPT_TRANSLATE_ERR_PAGE:
> +        break;
> +    case EPT_TRANSLATE_MISCONFIG:
> +        rc = EPT_TRANSLATE_MISCONFIG;
> +        *exit_qual = 0;
> +        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
> +        break;
> +    case EPT_TRANSLATE_RETRY:
> +        break;
> +    default:
> +        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);

Same here: add a break and set a definite error value in the default case.

> +    }
> +    return rc;
> +}
> diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
> index 8787c91..6d1264b 100644
> --- a/xen/arch/x86/mm/hap/nested_hap.c
> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> @@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      /* let caller to handle these two cases */
>      switch (rv) {
>      case NESTEDHVM_PAGEFAULT_INJECT:
> -        return rv;
> +    case NESTEDHVM_PAGEFAULT_RETRY:
>      case NESTEDHVM_PAGEFAULT_L1_ERROR:
>          return rv;
>      case NESTEDHVM_PAGEFAULT_DONE:
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index 4967da1..409198c 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
>      /* Translate the GFN to an MFN */
>      ASSERT(!paging_locked_by_me(v->domain));
>      mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
> -
> +
>      if ( p2m_is_readonly(p2mt) )
>      {
>          put_gfn(v->domain, gfn);
> diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
> index 4e1dda0..db8a0b6 100644
> --- a/xen/include/asm-x86/guest_pt.h
> +++ b/xen/include/asm-x86/guest_pt.h
> @@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
>  #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
>  #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
>  #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
> +#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
> +
> +extern void *map_domain_gfn(struct p2m_domain *p2m,
> +                                   gfn_t gfn,
> +                                   mfn_t *mfn,
> +                                   p2m_type_t *p2mt,
> +                                   p2m_query_t q,
> +                                   uint32_t *rc);
>
>  extern uint32_t
>  guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
> diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
> index 91fde0b..649c511 100644
> --- a/xen/include/asm-x86/hvm/nestedhvm.h
> +++ b/xen/include/asm-x86/hvm/nestedhvm.h
> @@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
>  #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
>  #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
>  #define NESTEDHVM_PAGEFAULT_MMIO       4
> +#define NESTEDHVM_PAGEFAULT_RETRY      5
>  int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      bool_t access_r, bool_t access_w, bool_t access_x);
>
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index ef2c9c9..9a728b6 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
>
>  extern bool_t cpu_has_vmx_ins_outs_instr_info;
>
> +#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
>  #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
>  #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
>  #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index aa5b080..feaaa80 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -51,6 +51,11 @@ typedef union {
>      u64 epte;
>  } ept_entry_t;
>
> +typedef struct {
> +    /*use lxe[0] to save result */
> +    ept_entry_t lxe[5];
> +} ept_walk_t;
> +
>  #define EPT_TABLE_ORDER         9
>  #define EPTE_SUPER_PAGE_MASK    0x80
>  #define EPTE_MFN_MASK           0xffffffffff000ULL
> @@ -60,6 +65,28 @@ typedef union {
>  #define EPTE_AVAIL1_SHIFT       8
>  #define EPTE_EMT_SHIFT          3
>  #define EPTE_IGMT_SHIFT         6
> +#define EPTE_RWX_MASK           0x7
> +#define EPTE_FLAG_MASK          0x7f
> +
> +#define EPT_EMT_UC              0
> +#define EPT_EMT_WC              1
> +#define EPT_EMT_RSV0            2
> +#define EPT_EMT_RSV1            3
> +#define EPT_EMT_WT              4
> +#define EPT_EMT_WP              5
> +#define EPT_EMT_WB              6
> +#define EPT_EMT_RSV2            7
> +
> +typedef enum {
> +    ept_access_n     = 0, /* No access permissions allowed */
> +    ept_access_r     = 1,
> +    ept_access_w     = 2,
> +    ept_access_rw    = 3,
> +    ept_access_x     = 4,
> +    ept_access_rx    = 5,
> +    ept_access_wx    = 6,
> +    ept_access_all   = 7,
> +} ept_access_t;
>
>  void vmx_asm_vmexit_handler(struct cpu_user_regs);
>  void vmx_asm_do_vmentry(void);
> @@ -419,6 +446,7 @@ void update_guest_eip(void);
>  #define _EPT_GLA_FAULT              8
>  #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
>
> +#define EPT_L4_PAGETABLE_SHIFT      39
>  #define EPT_PAGETABLE_ENTRIES       512
>
>  #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
> diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
> index 422f006..8eb377b 100644
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -32,6 +32,10 @@ struct nestedvmx {
>          unsigned long intr_info;
>          u32           error_code;
>      } intr;
> +    struct {
> +        uint32_t exit_reason;
> +        uint32_t exit_qual;
> +    } ept_exit;
>  };
>
>  #define vcpu_2_nvmx(v) (vcpu_nestedhvm(v).u.nvmx)
> @@ -109,6 +113,12 @@ void nvmx_domain_relinquish_resources(struct domain *d);
>  int nvmx_handle_vmxon(struct cpu_user_regs *regs);
>  int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
>
> +#define EPT_TRANSLATE_SUCCEED   0
> +#define EPT_TRANSLATE_VIOLATION 1
> +#define EPT_TRANSLATE_ERR_PAGE  2
> +#define EPT_TRANSLATE_MISCONFIG 3
> +#define EPT_TRANSLATE_RETRY     4
> +
>  int
>  nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>                        unsigned int *page_order,
> @@ -192,5 +202,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
>  int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
>                            unsigned int exit_reason);
>
> +int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
> +                        unsigned int *page_order, uint32_t rwx_acc,
> +                        unsigned long *l1gfn, uint64_t *exit_qual,
> +                        uint32_t *exit_reason);
>  #endif /* __ASM_X86_HVM_VVMX_H__ */
>
> --
> 1.7.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
Jun
Intel Open Source Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 16:49:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 16:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlMpo-0003cU-Md; Wed, 19 Dec 2012 16:49:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlMpn-0003cO-2D
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 16:49:31 +0000
Received: from [85.158.143.35:14229] by server-2.bemta-4.messagelabs.com id
	C1/51-30861-A10F1D05; Wed, 19 Dec 2012 16:49:30 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355935769!4815656!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9941 invoked from network); 19 Dec 2012 16:49:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 16:49:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="259278"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 16:49:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 16:49:29 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlMpl-0008Gk-6Z;
	Wed, 19 Dec 2012 16:49:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlMpk-0003A2-VD;
	Wed, 19 Dec 2012 16:49:29 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14785-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 16:49:29 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14785: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3175494499515425707=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3175494499515425707==
Content-Type: text/plain

flight 14785 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14785/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14780
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14780

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  b04de677de31
baseline version:
 xen                  d5c0389bf26c

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Dario Faggioli <dario.faggioli@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=b04de677de31
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable b04de677de31
+ branch=xen-unstable
+ revision=b04de677de31
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r b04de677de31 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 15 changes to 12 files


--===============3175494499515425707==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3175494499515425707==--

From xen-devel-bounces@lists.xen.org Wed Dec 19 17:02:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 17:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlN2H-0004F0-P0; Wed, 19 Dec 2012 17:02:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TlN2G-0004Ev-SY
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 17:02:25 +0000
Received: from [85.158.143.35:26458] by server-2.bemta-4.messagelabs.com id
	F4/20-30861-023F1D05; Wed, 19 Dec 2012 17:02:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355936535!4725659!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26123 invoked from network); 19 Dec 2012 17:02:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 17:02:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1291301"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 17:01:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 12:01:54 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TlN1m-0004jv-5P;
	Wed, 19 Dec 2012 17:01:54 +0000
Message-ID: <1355936517.10526.28.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Dec 2012 17:01:57 +0000
In-Reply-To: <20121219160455.GA12077@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 16:04 +0000, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> > Hi Konrad
> > 
> > I encountered a bug when trying to bring offline a cpu then online it
> > again in HVM. As I'm not very familiar with HVM stuff, I cannot come up
> > with a quick fix.
> 
> I took your two patches that you posted and they are in v3.8 now.
> 
> It seems that there are bugs in the offline/online code though.
> 
> I did this:
> # echo 0 > /sys/devices/system/cpu/cpu3/online
> # echo 1 > /sys/devices/system/cpu/cpu3/online
> 
> With a PV guest and it blows up (with or without your patches).
> 
> Have you seen something similar to this:
> 
> [  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
> [  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
> [  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
> [  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1

Can you tell me which tree you're using? I cannot find cs:gb1849b3 in my
repository.


Wei.
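
For reference, the offline/online sequence Konrad quotes above can be wrapped in a small helper. This is a hypothetical sketch, not part of any Xen tooling: the sysfs path is taken as a parameter so the cycle can be exercised against a scratch file as well as the real node (on a live system it would be /sys/devices/system/cpu/cpuN/online, root required):

```shell
# Hypothetical helper: cycle a CPU offline and back online via its
# sysfs "online" attribute, then report the final state.
cpu_cycle() {
    online_file="$1"
    echo 0 > "$online_file"    # take the vCPU offline
    echo 1 > "$online_file"    # bring it back online
    cat "$online_file"         # print the final state (expect "1")
}
```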


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 17:16:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 17:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlNFw-0004VH-6D; Wed, 19 Dec 2012 17:16:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jun.nakajima@intel.com>) id 1TlNFu-0004VC-Mb
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 17:16:30 +0000
Received: from [85.158.143.35:29283] by server-3.bemta-4.messagelabs.com id
	0C/50-18211-D66F1D05; Wed, 19 Dec 2012 17:16:29 +0000
X-Env-Sender: jun.nakajima@intel.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355937387!11924384!1
X-Originating-IP: [134.134.136.21]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 629 invoked from network); 19 Dec 2012 17:16:28 -0000
Received: from mga06.intel.com (HELO orsmga101.jf.intel.com) (134.134.136.21)
	by server-7.tower-21.messagelabs.com with SMTP;
	19 Dec 2012 17:16:28 -0000
Received: from mail-ye0-f200.google.com ([209.85.213.200])
	by mga02.intel.com with ESMTP/TLS/RC4-SHA; 19 Dec 2012 09:16:26 -0800
Received: by mail-ye0-f200.google.com with SMTP id r13so3337840yen.7
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 09:16:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:mime-version:in-reply-to:references:date:message-id
	:subject:from:to:cc:content-type:x-gm-message-state;
	bh=6BuvPxIq2Z/41Ivr8CSzrmGbYBfROuXoP26OdlMkVEs=;
	b=kMwOb/INTUu8kAv3eN+VpeLzJWwsMi5b/1LsPRvEh9ecgttceAbGVmYBCnFBAI76pz
	ku9BK7sEHLOW3ZxdFAx0bFK6bJyBTomk5KaLpr0ZbZqfm1z9aR9pwX/kXSLusmYc8Gap
	pOVuJD7eHVVBU/BL4BGUGSEpIatoVBPzzUyrL7nWqa/Hl7OJCTUhiL/+ZmftOADKPWrB
	iHw/KNMuufMClVoPZQjEHxZYvp8WxUniGlFjP+CqBscRKDe7trDIW0B9lPPGk4g1MyNv
	h7t1A840MixiY1QuyVT+wf5Qxi+NAOU46Uuw7RnC8hyIZVW4QypMy6VikG8xwvUmULfd
	lgPQ==
X-Received: by 10.224.107.3 with SMTP id z3mr3185225qao.9.1355937385624;
	Wed, 19 Dec 2012 09:16:25 -0800 (PST)
MIME-Version: 1.0
Received: by 10.224.107.3 with SMTP id z3mr3185216qao.9.1355937385503; Wed, 19
	Dec 2012 09:16:25 -0800 (PST)
Received: by 10.229.27.148 with HTTP; Wed, 19 Dec 2012 09:16:25 -0800 (PST)
In-Reply-To: <1355946267-24227-6-git-send-email-xiantao.zhang@intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-6-git-send-email-xiantao.zhang@intel.com>
Date: Wed, 19 Dec 2012 09:16:25 -0800
Message-ID: <CAL54oT0ZKwYkFvktNj53gg3oXN_zjB=38Pi8Dzqno8nZqY57OQ@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Xiantao Zhang <xiantao.zhang@intel.com>
X-Gm-Message-State: ALoCoQlIEIoZTqZPwKF/J9I+JDjR7Ec/xA3erUEWS8TQaw2GYzrnDt8YbTyrGHjC5B4iydnsljFbCNU9E64N1B2wK5GqNFAuW8vty0Li4C+C4+V/I+dcg2d4xaV1Q7SYNEv2YSNhwFc+RcW90r1q0o3fJFN+gccLKQ==
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, JBeulich@suse.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 05/10] nEPT: Try to enable EPT paging for
	L2 guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Minor comments below.

On Wed, Dec 19, 2012 at 11:44 AM, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
>
> Once EPT is found to be enabled by the L1 VMM, enable nested EPT
> support for the L2 guest.
>
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
>  xen/arch/x86/hvm/vmx/vvmx.c        |   48 +++++++++++++++++++++++++++--------
>  xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
>  3 files changed, 54 insertions(+), 15 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index d74aae0..e5be5a2 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
>      .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
>      .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
>      .nhvm_vcpu_asid       = nvmx_vcpu_asid,
> +    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
>      .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
>      .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
>      .nhvm_intr_blocked    = nvmx_intr_blocked,
> @@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
>      unsigned long gla, gfn = gpa >> PAGE_SHIFT;
>      mfn_t mfn;
>      p2m_type_t p2mt;
> +    int ret;
>      struct domain *d = current->domain;
>
>      if ( tb_init_done )
> @@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
>          _d.gpa = gpa;
>          _d.qualification = qualification;
>          _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
> -
> +
>          __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
>      }
>
> -    if ( hvm_hap_nested_page_fault(gpa,
> +    ret = hvm_hap_nested_page_fault(gpa,
>                                     qualification & EPT_GLA_VALID       ? 1 : 0,
>                                     qualification & EPT_GLA_VALID
>                                       ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
>                                     qualification & EPT_READ_VIOLATION  ? 1 : 0,
>                                     qualification & EPT_WRITE_VIOLATION ? 1 : 0,
> -                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
> +                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
> +    switch ( ret ) {
> +    case 0:
> +        break;
> +    case 1:
>          return;
> +    case -1:
> +        vcpu_nestedhvm(current).nv_vmexit_pending = 1;

I think we should add some comments for this case (e.g. what it means,
what to do).


> +        return;
> +    }
>
>      /* Everything else is an error. */
>      mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 76cf757..c100730 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
>          gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
>         goto out;
>      }
> +    nvmx->ept.enabled = 0;
>      nvmx->vmxon_region_pa = 0;
>      nvcpu->nv_vvmcx = NULL;
>      nvcpu->nv_vvmcxaddr = VMCX_EADDR;
> @@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
>
>  uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
>  {
> -    /* TODO */
> -    ASSERT(0);
> -    return 0;
> +    uint64_t eptp_base;
> +    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> +
> +    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
> +    return eptp_base & PAGE_MASK;
>  }
>
>  uint32_t nvmx_vcpu_asid(struct vcpu *v)
> @@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
>      return 0;
>  }
>
> +bool_t nvmx_ept_enabled(struct vcpu *v)
> +{
> +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> +
> +    return !!(nvmx->ept.enabled);
> +}
> +
>  static const enum x86_segment sreg_to_index[] = {
>      [VMX_SREG_ES] = x86_seg_es,
>      [VMX_SREG_CS] = x86_seg_cs,
> @@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
>  }
>
>  void nvmx_update_secondary_exec_control(struct vcpu *v,
> -                                            unsigned long value)
> +                                            unsigned long host_cntrl)
>  {
>      u32 shadow_cntrl;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
>
>      shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
> -    shadow_cntrl |= value;
> -    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
> +    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
> +    shadow_cntrl |= host_cntrl;
> +    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
>  }
>
>  static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
> @@ -818,6 +830,17 @@ static void load_shadow_guest_state(struct vcpu *v)
>      /* TODO: CR3 target control */
>  }
>
> +
> +static uint64_t get_shadow_eptp(struct vcpu *v)
> +{
> +    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
> +    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
> +    struct ept_data *ept = &p2m->ept;
> +
> +    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> +    return ept_get_eptp(ept);
> +}
> +
>  static void virtual_vmentry(struct cpu_user_regs *regs)
>  {
>      struct vcpu *v = current;
> @@ -862,7 +885,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
>      /* updating host cr0 to sync TS bit */
>      __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
>
> -    /* TODO: EPT_POINTER */
> +    /* Setup virtual ETP for L2 guest*/
> +    if ( nestedhvm_paging_mode_hap(v) )
> +        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
> +
>  }
>
>  static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
> @@ -915,8 +941,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
>      /* Adjust exit_reason/exit_qualifciation for violation case */
>      if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
>                  EXIT_REASON_EPT_VIOLATION ) {
> -        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
> -        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
> +        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
> +        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
>      }
>  }
>
> @@ -1480,8 +1506,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
>          case EPT_TRANSLATE_VIOLATION:
>          case EPT_TRANSLATE_MISCONFIG:
>              rc = NESTEDHVM_PAGEFAULT_INJECT;
> -            nvmx->ept_exit.exit_reason = exit_reason;
> -            nvmx->ept_exit.exit_qual = exit_qual;
> +            nvmx->ept.exit_reason = exit_reason;
> +            nvmx->ept.exit_qual = exit_qual;
>              break;
>          case EPT_TRANSLATE_RETRY:
>              rc = NESTEDHVM_PAGEFAULT_RETRY;
> diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
> index 8eb377b..661cd8a 100644
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -33,9 +33,10 @@ struct nestedvmx {
>          u32           error_code;
>      } intr;
>      struct {
> +        char     enabled;

I think we should use bool_t, not char.

>          uint32_t exit_reason;
>          uint32_t exit_qual;
> -    } ept_exit;
> +    } ept;
>  };
>
>  #define vcpu_2_nvmx(v) (vcpu_nestedhvm(v).u.nvmx)
> @@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
>                                unsigned int trap, int error_code);
>  void nvmx_domain_relinquish_resources(struct domain *d);
>
> +bool_t nvmx_ept_enabled(struct vcpu *v);
> +
>  int nvmx_handle_vmxon(struct cpu_user_regs *regs);
>  int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
>
> --
> 1.7.1
>
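
For what it's worth, the commented switch Jun is asking for might look something like this. Self-contained sketch only: `struct fake_nestedvcpu` and the string results are illustrative stand-ins for the Xen types, and just the 0/1/-1 return convention of hvm_hap_nested_page_fault() is taken from the patch itself:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for the nested-vcpu state touched by the patch. */
struct fake_nestedvcpu { int nv_vmexit_pending; };

static const char *npf_dispatch(struct fake_nestedvcpu *nv, int ret)
{
    switch ( ret )
    {
    case 0:
        /* Fault not handled: fall through to the error path. */
        return "error";
    case 1:
        /* Handled entirely in L0: just resume the guest. */
        return "resume";
    case -1:
        /*
         * The violation occurred against the L1 guest's own nested
         * page tables, so queue a virtual VM exit and let L1 handle it.
         */
        nv->nv_vmexit_pending = 1;
        return "inject-to-L1";
    default:
        return "unreachable";
    }
}
```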



-- 
Jun
Intel Open Source Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 17:40:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 17:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlNcr-0004wV-TZ; Wed, 19 Dec 2012 17:40:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlNcq-0004wK-KZ
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 17:40:12 +0000
Received: from [85.158.138.51:23262] by server-7.bemta-3.messagelabs.com id
	2D/73-23008-BFBF1D05; Wed, 19 Dec 2012 17:40:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1355938805!29645630!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1558 invoked from network); 19 Dec 2012 17:40:07 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 17:40:07 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJHe22X011758
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 17:40:03 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJHe2uA015491
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 17:40:02 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJHe2pk003963; Wed, 19 Dec 2012 11:40:02 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 09:40:01 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E5FD41BF762; Wed, 19 Dec 2012 12:40:00 -0500 (EST)
Date: Wed, 19 Dec 2012 12:40:00 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Message-ID: <20121219174000.GA28570@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
	<1355936517.10526.28.camel@iceland>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355936517.10526.28.camel@iceland>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 05:01:57PM +0000, Wei Liu wrote:
> On Wed, 2012-12-19 at 16:04 +0000, Konrad Rzeszutek Wilk wrote:
> > On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> > > Hi Konrad
> > > 
> > > I encountered a bug when trying to bring a cpu offline then online it
> > > again in HVM. As I'm not very familiar with HVM stuff I cannot come up
> > > with a quick fix.
> > 
> > I took your two patches that you posted and they are in v3.8 now.
> > 
> > It seems that there are bugs in the offline/online code, though.
> > 
> > I did this:
> > # echo 0 > /sys/devices/system/cpu/cpu3/online
> > # echo 1 > /sys/devices/system/cpu/cpu3/online
> > 
> > With a PV guest and it blows up (with or without your patches).
> > 
> > Have you seen something similar to this:
> > 
> > [  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
> > [  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
> > [  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
> > [  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1
> 
> Can you tell me which tree you're using? I cannot find cs:gb1849b3 in my
> repository.

Oh, .. that might have been me messing around. I would just do v3.7 and
try that out.
> 
> 
> Wei.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 17:46:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 17:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlNix-0005Lf-UF; Wed, 19 Dec 2012 17:46:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TlNiw-0005LU-1y
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 17:46:30 +0000
Received: from [85.158.138.51:52808] by server-9.bemta-3.messagelabs.com id
	0B/B4-11948-57DF1D05; Wed, 19 Dec 2012 17:46:29 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-174.messagelabs.com!1355939188!29575635!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7639 invoked from network); 19 Dec 2012 17:46:28 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 17:46:28 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TlNit-000IXi-2M; Wed, 19 Dec 2012 17:46:27 +0000
Date: Wed, 19 Dec 2012 17:46:27 +0000
From: Tim Deegan <tim@xen.org>
To: Razvan Cojocaru <rzvncj@gmail.com>
Message-ID: <20121219174627.GA67643@ocelot.phlegethon.org>
References: <50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
	<50D1E723.4070201@citrix.com>
	<1355933739.14620.456.camel@zakaz.uk.xensource.com>
	<50D1EB56.40400@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D1EB56.40400@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
	call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:29 +0200 on 19 Dec (1355941750), Razvan Cojocaru wrote:
> >>Ah - I see your concern.  Yes - it might well be different.  Is this
> >>information passed in an HVM save record?  It does not appear to be
> >>associated with the MTRR HVM save record.
> >
> >No, this is the internal state, not the save record, but Razvan is
> >implementing the same logic in libxc which must necessarily be based on
> >the hvm saved state only and not the internal emulation state.
> 
> Exactly, and all I need is an extra variable in the save record (or 
> simply a single bit in a safe place in one of the existing ones), 
> telling me if the MTRRs are overlapped or not. The CPUID code is just a 
> part of the logic that finds this out in the hypervisor; and 
> specifically it is a part that's better _left_in_ the hypervisor. That 
> is in fact what I was trying to say a few messages ago, when I called 
> the cpuid_eax() function the tricky part. :)
> 
> Do we agree that a bool_t overlapped should be added to struct 
> hvm_hw_mtrr for this case?

Sorry, no.  The 'overlapped' bit is a piece of Xen implementation, and
not an architectural state of the VM, so it doesn't belong in the save
record.  It's trivial to recalculate in a user-space tool, and the
result can be cached (since you can also get a mem-event on MSR writes,
you don't have to pull all this MTRR state out of the guest except when
the MTRRs have been changed).

I'll take a proper look at this thread tomorrow and see if I can suggest
anything more helpful. 

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 17:52:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 17:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlNou-0005h3-P8; Wed, 19 Dec 2012 17:52:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TlNos-0005gx-HP
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 17:52:38 +0000
Received: from [85.158.137.99:3497] by server-15.bemta-3.messagelabs.com id
	A0/F6-07921-5EEF1D05; Wed, 19 Dec 2012 17:52:37 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1355939555!20120670!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTMyODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30693 invoked from network); 19 Dec 2012 17:52:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 17:52:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1233614"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 17:52:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 12:52:20 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TlNoa-0005pD-Gv;
	Wed, 19 Dec 2012 17:52:20 +0000
Message-ID: <1355939543.10526.30.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Dec 2012 17:52:23 +0000
In-Reply-To: <20121219174000.GA28570@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
	<1355936517.10526.28.camel@iceland>
	<20121219174000.GA28570@phenom.dumpdata.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 17:40 +0000, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 19, 2012 at 05:01:57PM +0000, Wei Liu wrote:
> > On Wed, 2012-12-19 at 16:04 +0000, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> > > > Hi Konrad
> > > > 
> > > > I encountered a bug when trying to bring a cpu offline then online it
> > > > again in HVM. As I'm not very familiar with HVM stuff I cannot come up
> > > > with a quick fix.
> > > 
> > > I took your two patches that you posted and they are in v3.8 now.
> > > 
> > > It seems that there are bugs in the offline/online code, though.
> > > 
> > > I did this:
> > > # echo 0 > /sys/devices/system/cpu/cpu3/online
> > > # echo 1 > /sys/devices/system/cpu/cpu3/online
> > > 
> > > With a PV guest and it blows up (with or without your patches).
> > > 
> > > Have you seen something similar to this:
> > > 
> > > [  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
> > > [  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
> > > [  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
> > > [  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1
> > 
> > Can you tell me which tree you're using? I cannot find cs:gb1849b3 in my
> > repository.
> 
> Oh, .. that might have been me messing around. I would just do v3.7 and
> try that out.
> >

Just played with 3.7 several times, no oops so far.

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 18:19:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 18:19:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOEZ-00069r-At; Wed, 19 Dec 2012 18:19:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1TlOEY-00069m-FX
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 18:19:10 +0000
Received: from [85.158.137.99:50337] by server-6.bemta-3.messagelabs.com id
	34/2F-12154-D1502D05; Wed, 19 Dec 2012 18:19:09 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355941147!15029305!1
X-Originating-IP: [209.85.210.174]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10025 invoked from network); 19 Dec 2012 18:19:08 -0000
Received: from mail-ia0-f174.google.com (HELO mail-ia0-f174.google.com)
	(209.85.210.174)
	by server-6.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 18:19:08 -0000
Received: by mail-ia0-f174.google.com with SMTP id y25so1982311iay.5
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 10:19:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=BTl3BoKyyWXR4rtW4VDTbTiqviFyJD8AddwZZlV9gLc=;
	b=fbGEJxdXRWsKT+rK0pjtdb6xXTTK6/AH9vpqCW+4sQUDLBbSlcmdj2hsBK3mRbAAtp
	I3fffPuPata2lUbArZ/fUEXSgrz6QRgV08JqD1Xk8bEMoIcjBHT3Mdfz6mqCM+qSXmpy
	pLCIFexDRkTd2HsfBR03wb1KrWEbitok9XnFh+f1Wpnq2iB5xrfZzW8LsiBvLRgsTVYU
	S3l46sfiS+K/T0lc73uKLpx2BLFsRW4xoG7VFbfxpBzdnEFFZvBk+cQvpzl5j9Q6XsUY
	kBLVOEVetb9QKDFYT+ay4AidfEpgJ7qNeYYwQKrU9CbFXp7ivRpLCeL+G0NyNBBeDD7J
	A8nw==
Received: by 10.43.17.199 with SMTP id qd7mr2211856icb.52.1355941146874; Wed,
	19 Dec 2012 10:19:06 -0800 (PST)
MIME-Version: 1.0
Received: by 10.64.18.165 with HTTP; Wed, 19 Dec 2012 10:18:46 -0800 (PST)
In-Reply-To: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Wed, 19 Dec 2012 10:18:46 -0800
Message-ID: <CAEBdQ92Cu+_EZWP2yDx1WCHSMM7fee-xtrO3XSFbCfhMeqHH6Q@mail.gmail.com>
To: "G.R." <firemeteor@users.sourceforge.net>
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 18, 2012 at 9:28 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> Hi Stefano,
>
> I recently tried to play some 3D games on my Linux guest.
> The game starts without problem, but it freezes the entire system after
> some time (a minute or so?).
> By that I mean both the host and the domU become unresponsive.
> SSH freezes, and I had to shut down the machine using the power button directly.
>
> I did not find anything obvious in the host log. But in the guest log,
> I can find this:
>
> Dec 18 20:28:38 debvm kernel: [    0.899860] resource map sanity check
> conflict: 0xfeff5018 0xfeff7017 0xfeff7000 0xffffffff reserved
> Dec 18 20:28:38 debvm kernel: [    0.899862] ------------[ cut here
> ]------------
> Dec 18 20:28:38 debvm kernel: [    0.899869] WARNING: at
> arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2c4/0x33c()
> Dec 18 20:28:38 debvm kernel: [    0.899870] Hardware name: HVM domU
> Dec 18 20:28:38 debvm kernel: [    0.899872] Info: mapping multiple
> BARs. Your kernel is fine.
> Dec 18 20:28:38 debvm kernel: [    0.899873] Modules linked in:
> Dec 18 20:28:38 debvm kernel: [    0.899878] Pid: 1, comm: swapper/0
> Not tainted 3.6.9 #4
> Dec 18 20:28:38 debvm kernel: [    0.899892] Call Trace:
> Dec 18 20:28:38 debvm kernel: [    0.899896]  [<ffffffff8103d194>] ?
> warn_slowpath_common+0x76/0x8a
> Dec 18 20:28:38 debvm kernel: [    0.899898]  [<ffffffff8103d240>] ?
> warn_slowpath_fmt+0x45/0x4a
> Dec 18 20:28:38 debvm kernel: [    0.899900]  [<ffffffff81032a6c>] ?
> __ioremap_caller+0x2c4/0x33c
> Dec 18 20:28:38 debvm kernel: [    0.899902]  [<ffffffff812c3be3>] ?
> intel_opregion_setup+0x9c/0x201
> Dec 18 20:28:38 debvm kernel: [    0.899904]  [<ffffffff812bcb75>] ?
> intel_setup_gmbus+0x175/0x19d
> Dec 18 20:28:38 debvm kernel: [    0.899907]  [<ffffffff8128a37a>] ?
> i915_driver_load+0x548/0x90d
> Dec 18 20:28:38 debvm kernel: [    0.899910]  [<ffffffff812ff804>] ?
> setup_hpet_msi_remapped+0x20/0x20
> Dec 18 20:28:38 debvm kernel: [    0.899912]  [<ffffffff81272706>] ?
> drm_get_pci_dev+0x152/0x259
> Dec 18 20:28:38 debvm kernel: [    0.899915]  [<ffffffff813d4883>] ?
> _raw_spin_lock_irqsave+0x21/0x45
> Dec 18 20:28:38 debvm kernel: [    0.899918]  [<ffffffff811d9ecc>] ?
> local_pci_probe+0x5a/0xa0
> Dec 18 20:28:38 debvm kernel: [    0.899920]  [<ffffffff811d9fcf>] ?
> pci_device_probe+0xbd/0xe7
> Dec 18 20:28:38 debvm kernel: [    0.899922]  [<ffffffff812cd887>] ?
> driver_probe_device+0x1b0/0x1b0
> Dec 18 20:28:38 debvm kernel: [    0.899923]  [<ffffffff812cd887>] ?
> driver_probe_device+0x1b0/0x1b0
> Dec 18 20:28:38 debvm kernel: [    0.899925]  [<ffffffff812cd769>] ?
> driver_probe_device+0x92/0x1b0
> Dec 18 20:28:38 debvm kernel: [    0.899926]  [<ffffffff812cd8da>] ?
> __driver_attach+0x53/0x73
> Dec 18 20:28:38 debvm kernel: [    0.899928]  [<ffffffff812cc06f>] ?
> bus_for_each_dev+0x46/0x77
> Dec 18 20:28:38 debvm kernel: [    0.899930]  [<ffffffff812ccf8f>] ?
> bus_add_driver+0xd5/0x1f4
> Dec 18 20:28:38 debvm kernel: [    0.899931]  [<ffffffff812cde14>] ?
> driver_register+0x89/0x101
> Dec 18 20:28:38 debvm kernel: [    0.899933]  [<ffffffff811d9336>] ?
> __pci_register_driver+0x49/0xa3
> Dec 18 20:28:38 debvm kernel: [    0.899935]  [<ffffffff816d55c7>] ?
> ttm_init+0x63/0x63
> Dec 18 20:28:38 debvm kernel: [    0.899937]  [<ffffffff81002085>] ?
> do_one_initcall+0x75/0x12c
> Dec 18 20:28:38 debvm kernel: [    0.899940]  [<ffffffff816a6cc2>] ?
> kernel_init+0x13c/0x1c0
> Dec 18 20:28:38 debvm kernel: [    0.899941]  [<ffffffff816a6565>] ?
> do_early_param+0x83/0x83
> Dec 18 20:28:38 debvm kernel: [    0.899943]  [<ffffffff813d9f44>] ?
> kernel_thread_helper+0x4/0x10
> Dec 18 20:28:38 debvm kernel: [    0.899945]  [<ffffffff816a6b86>] ?
> start_kernel+0x3e1/0x3e1
> Dec 18 20:28:38 debvm kernel: [    0.899947]  [<ffffffff813d9f40>] ?
> gs_change+0x13/0x13
> Dec 18 20:28:38 debvm kernel: [    0.899950] ---[ end trace
> db461543ce599b44 ]---
>
> I'm not sure if this has anything to do with the freeze. This seems to
> show up on every boot after I upgraded to xen version 4.2.1-rc2. Both
> Debian kernels 3.2.32 and 3.6.9 show the same log. But the whole-system
> freeze happens only during gaming, which is much less frequent.
> So I'm not sure whether the two are related. Anyway, could you comment
> on what this log means?
>
> I can find one of the mentioned addresses in the qemu-dm log:
> pt_pci_write_config: [00:02:0] address=00fc val=0xfeff5000 len=4
> igd_write_opregion: Map OpRegion: cd996018 -> feff5018
> igd_write_opregion: [00:02:0] addr=fc len=2 val=feff5000
>
> PS: I also run XBMC on a domU, and it plays back video with HW
> acceleration (VAAPI) without any problem. XBMC is itself a
> graphics-intensive program. But that runs on a pure HVM guest, while
> the failing case is on PVHVM.
>
> PS2: I also hit another instability yesterday. It happened while I
> was compiling a kernel inside the domU: the host rebooted suddenly.
> Since I was not using graphics at the time (the Xorg session was idle;
> I was connected through SSH), this may be a different issue.
>

Hi Timothy,

Could you send /proc/iomem, lspci -vvvv and the e820 from dmesg for this VM?

Thanks,
Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 18:33:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 18:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlORz-0006Z6-M5; Wed, 19 Dec 2012 18:33:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlORy-0006Z1-8q
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 18:33:02 +0000
Received: from [85.158.138.51:58733] by server-14.bemta-3.messagelabs.com id
	FF/82-27443-B5802D05; Wed, 19 Dec 2012 18:32:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355941976!21723620!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1602 invoked from network); 19 Dec 2012 18:32:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 18:32:58 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJIWiNB001676
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 18:32:46 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJIWgfj018757
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 18:32:43 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJIWeOT010241; Wed, 19 Dec 2012 12:32:42 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 10:32:40 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A01191C032B; Wed, 19 Dec 2012 13:32:39 -0500 (EST)
Date: Wed, 19 Dec 2012 13:32:39 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Todd Deshane <todd.deshane@xen.org>
Message-ID: <20121219183239.GB15037@phenom.dumpdata.com>
References: <CAC-_gzBaG9H2k6j1ihq=o6XgMYe-a_B4-wjAmoQ5TY-FRqVRwQ@mail.gmail.com>
	<20120117161208.GA21545@phenom.dumpdata.com>
	<CAMrPLWLeV0uDX7xA4BGAEd7K0aj9uH-Y5gyjuat7fPhVV=wRhw@mail.gmail.com>
	<20120209174210.GA14007@andromeda.dapyr.net>
	<4F3424C0.2050103@citrix.com>
	<CAMrPLW++8_18_U0Wnmw5X2E1w6Yh-bB1mb9fLv2n-_MiN7RhSA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAMrPLW++8_18_U0Wnmw5X2E1w6Yh-bB1mb9fLv2n-_MiN7RhSA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
	xen-devel mailing list <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] Sound not working properly on Xen Dom0,
 but works on native
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 09, 2012 at 09:07:38PM -0500, Todd Deshane wrote:
> On Thu, Feb 9, 2012 at 2:55 PM, David Vrabel <david.vrabel@citrix.com> wrote:
> > On 09/02/12 17:42, Konrad Rzeszutek Wilk wrote:
> >> On Sat, Jan 28, 2012 at 01:19:23PM -0500, Todd Deshane wrote:
> >>> On Tue, Jan 17, 2012 at 11:12 AM, Konrad Rzeszutek Wilk
> >>> <konrad.wilk@oracle.com> wrote:
> >>>> On Sun, Jan 15, 2012 at 07:12:52PM -0500, Todd Deshane wrote:
> >>>>> Hi,
> >>>>>
> >>>>> I'm doing some testing on Ubuntu 12.04 Alpha. All sound comes out as
> >>>>> static on the Dom0 system. (I can PCI-passthrough the audio card to a
> >>>>> DomU, and that works.) Native sound works fine.
> >>>>>
> >>>>> Linux kronos 3.2.0-8-generic-pae #15-Ubuntu SMP Wed Jan 11 15:34:57
> >>>>> UTC 2012 i686 i686 i386 GNU/Linux
> >>>>
> >>>> Did you try 64-bit dom0?
> >>>
> >>> 64bit Dom0 works perfectly.
> >>
> >> Aha! I have an inkling this is the
> >> 7a7546b377bdaa25ac77f33d9433c59f259b9688
> >
> > This is "x86: xen: size struct xen_spinlock to always fit in
> > arch_spinlock_t" from the 3.2 stable tree.
> >
> > There are many spinlocks in sound/ affected by the bug fixed by the
> > above. I don't know if any would cause sound distortion, but there were
> > plenty of deadlock opportunities.
> 
> Tried the latest Ubuntu kernel that includes that patch
> (http://archive.ubuntu.com/ubuntu/pool/main/l/linux/linux_3.2.0-15.24.diff.gz)
> and it still has the same problem.

I think this issue has been fixed, right? I believe this commit:
commit b5031ed1be0aa419250557123633453753181643
Author: Ronny Hegewald <ronny.hegewald@online.de>
Date:   Fri Aug 31 09:57:52 2012 +0000

    xen: Use correct masking in xen_swiotlb_alloc_coherent.
    

fixed it?

> 
> Thanks,
> Todd
> 
> -- 
> Todd Deshane
> http://www.linkedin.com/in/deshantm
> http://blog.xen.org/
> http://wiki.xen.org/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 18:47:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 18:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOfb-0006rD-3i; Wed, 19 Dec 2012 18:47:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlOfZ-0006r8-FO
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 18:47:05 +0000
Received: from [85.158.143.35:16072] by server-3.bemta-4.messagelabs.com id
	12/64-18211-8AB02D05; Wed, 19 Dec 2012 18:47:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355942823!4826110!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21259 invoked from network); 19 Dec 2012 18:47:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 18:47:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJIkvLl030066
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 18:46:58 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJIktrm028995
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 18:46:56 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJIktXA020822; Wed, 19 Dec 2012 12:46:55 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 10:46:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E67B81C032B; Wed, 19 Dec 2012 13:46:53 -0500 (EST)
Date: Wed, 19 Dec 2012 13:46:53 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121219184653.GD15037@phenom.dumpdata.com>
References: <1334595957-12552-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120425084524.GA17537@lst.de>
	<1335344565.28015.7.camel@zakaz.uk.xensource.com>
	<20120425102024.GA19800@lst.de>
	<alpine.DEB.2.00.1204251213480.26786@kaball-desktop>
	<20120425112335.GA20868@lst.de>
	<20120426154101.GD26830@phenom.dumpdata.com>
	<alpine.DEB.2.00.1205091342290.26786@kaball-desktop>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.00.1205091342290.26786@kaball-desktop>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "kwolf@redhat.com" <kwolf@redhat.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Christoph Hellwig <hch@lst.de>, Ian Campbell <Ian.Campbell@citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH] xen_disk: implement
 BLKIF_OP_FLUSH_DISKCACHE, remove BLKIF_OP_WRITE_BARRIER
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, May 09, 2012 at 01:42:41PM +0100, Stefano Stabellini wrote:
> On Thu, 26 Apr 2012, Konrad Rzeszutek Wilk wrote:
> > On Wed, Apr 25, 2012 at 01:23:35PM +0200, Christoph Hellwig wrote:
> > > On Wed, Apr 25, 2012 at 12:21:53PM +0100, Stefano Stabellini wrote:
> > > > That is true, in fact I couldn't figure out what I had to implement just
> > > > reading the comment. So I went through the blkback code and tried to
> > > > understand what I had to do, but I got it wrong.
> > > > 
> > > > Reading the code again, it seems to me that BLKIF_OP_FLUSH_DISKCACHE
> > > > is supposed to have the same semantics as REQ_FLUSH, which implies a
> > > > preflush if nr_segments > 0, not a postflush like I did.
> > > 
> > > It's worse - blkfront translates both a REQ_FLUSH or a REQ_FUA
> > > into BLKIF_OP_FLUSH_DISKCACHE.
> > 
> > I think that is what remained of the BARRIER request.
> > > 
> > > REQ_FLUSH either is a pre flush or a pure flush without a data transfer,
> > > and REQ_FUA is a post flush.  So to get the proper semantics you'll have
> > > to do both, _and_ sequence it so that no operation starts before the
> > > previous one finished.
> > 
> > If I were to emulate the SCSI SYNC command, which one would it be?
> > 
> > I think REQ_FLUSH? In which case I would think that blkfront needs to
> > get rid of the REQ_FUA part?
> > 
> 
> ping?

And just shy of 7 months later I answer :-)

I think you are right. Getting rid of REQ_FUA looks like the
right way. Oh, and blkfront already does that!

1290         err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
1291                             "feature-flush-cache", "%d", &flush,
1292                             NULL);
1293 
1294         if (!err && flush) {
1295                 info->feature_flush = REQ_FLUSH;
1296                 info->flush_op = BLKIF_OP_FLUSH_DISKCACHE;
1297         }
1298 

So what I am missing?
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 18:47:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 18:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOfb-0006rD-3i; Wed, 19 Dec 2012 18:47:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlOfZ-0006r8-FO
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 18:47:05 +0000
Received: from [85.158.143.35:16072] by server-3.bemta-4.messagelabs.com id
	12/64-18211-8AB02D05; Wed, 19 Dec 2012 18:47:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1355942823!4826110!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21259 invoked from network); 19 Dec 2012 18:47:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 18:47:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJIkvLl030066
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 18:46:58 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJIktrm028995
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 18:46:56 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJIktXA020822; Wed, 19 Dec 2012 12:46:55 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 10:46:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E67B81C032B; Wed, 19 Dec 2012 13:46:53 -0500 (EST)
Date: Wed, 19 Dec 2012 13:46:53 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121219184653.GD15037@phenom.dumpdata.com>
References: <1334595957-12552-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120425084524.GA17537@lst.de>
	<1335344565.28015.7.camel@zakaz.uk.xensource.com>
	<20120425102024.GA19800@lst.de>
	<alpine.DEB.2.00.1204251213480.26786@kaball-desktop>
	<20120425112335.GA20868@lst.de>
	<20120426154101.GD26830@phenom.dumpdata.com>
	<alpine.DEB.2.00.1205091342290.26786@kaball-desktop>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.00.1205091342290.26786@kaball-desktop>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "kwolf@redhat.com" <kwolf@redhat.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Christoph Hellwig <hch@lst.de>, Ian Campbell <Ian.Campbell@citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH] xen_disk: implement
 BLKIF_OP_FLUSH_DISKCACHE, remove BLKIF_OP_WRITE_BARRIER
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, May 09, 2012 at 01:42:41PM +0100, Stefano Stabellini wrote:
> On Thu, 26 Apr 2012, Konrad Rzeszutek Wilk wrote:
> > On Wed, Apr 25, 2012 at 01:23:35PM +0200, Christoph Hellwig wrote:
> > > On Wed, Apr 25, 2012 at 12:21:53PM +0100, Stefano Stabellini wrote:
> > > > That is true, in fact I couldn't figure out what I had to implement just
> > > > reading the comment. So I went through the blkback code and tried to
> > > > understand what I had to do, but I got it wrong.
> > > > 
> > > > Reading the code again it seems to me that BLKIF_OP_FLUSH_DISKCACHE
> > > > is supposed to have the same semantics as REQ_FLUSH, that implies a
> > > > preflush if nr_segments > 0, not a postflush like I did.
> > > 
> > > It's worse - blkfront translates both a REQ_FLUSH or a REQ_FUA
> > > into BLKIF_OP_FLUSH_DISKCACHE.
> > 
> > I think that is what remained of the BARRIER request.
> > > 
> > > REQ_FLUSH either is a pre flush or a pure flush without a data transfer,
> > > and REQ_FUA is a post flush.  So to get the proper semantics you'll have
> > > to do both, _and_ sequence it so that no operation starts before the
> > > previous one finished.
> > 
> > If I were to emulate the SCSI SYNC command which one would it be?
> > 
> > I think REQ_FLUSH? In which I would think that the blkfront needs to
> > get rid of the REQ_FUA part?
> > 
> 
> ping?

And just shy of 7 months later I answer :-)

I think you are right. Getting rid of REQ_FUA looks like the
right way. Oh, and blkfront already does that!

        err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
                            "feature-flush-cache", "%d", &flush,
                            NULL);

        if (!err && flush) {
                info->feature_flush = REQ_FLUSH;
                info->flush_op = BLKIF_OP_FLUSH_DISKCACHE;
        }

So what am I missing?
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 18:53:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 18:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOln-00071D-VD; Wed, 19 Dec 2012 18:53:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlOlm-000718-7t
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 18:53:30 +0000
Received: from [85.158.138.51:45506] by server-3.bemta-3.messagelabs.com id
	E4/3F-31588-62D02D05; Wed, 19 Dec 2012 18:53:26 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1355943204!21635170!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6786 invoked from network); 19 Dec 2012 18:53:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 18:53:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1312790"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 18:53:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 13:53:22 -0500
Received: from [10.80.3.146] (helo=localmatsp-T3500.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<mats.petersson@citrix.com>)	id 1TlOle-0006kE-JQ;
	Wed, 19 Dec 2012 18:53:22 +0000
From: Mats Petersson <mats.petersson@citrix.com>
To: <konrad@darnok.org>, <xen-devel@lists.xensource.com>,
	<Ian.Campbell@citrix.com>
Date: Wed, 19 Dec 2012 18:53:07 +0000
Message-ID: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: mats@planetcatfish.com, david.vrabel@citrix.com, JBeulich@suse.com,
	Mats Petersson <mats.petersson@citrix.com>
Subject: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes IOCTL_PRIVCMD_MMAPBATCH_V2 (and the older V1 version)
map multiple pages at a time (typically a page's worth of MFNs) rather
than one page at a time, even though the pages are non-consecutive MFNs.
The main change is to pass a pointer to an array of MFNs rather than a
single MFN. To support error reporting, we also pass an err_ptr.

Timings from a small test program that maps guest memory into Dom0
(repeating "Iterations" times, mapping the same first "Num Pages" each time):
Iterations    Num Pages    Time 3.7rc4  Time With this patch
5000          4096         76.107       37.027
10000         2048         75.703       37.177
20000         1024         75.893       37.247

Mapping guest memory into Dom0 now takes about half the original time.

When migrating a guest, this gives around a 15-40% improvement in
overall live-migration time (bigger guests show a bigger improvement)
and around a 30% improvement in "guest downtime".

Signed-off-by: Mats Petersson <mats.petersson@citrix.com>

---

V5 of the patch contains fixes for the ARM architecture, adding the same
API on that side (its absence would otherwise have caused a build break).
The ARM variant could still use a little performance work.

Also some minor formatting changes and a merge to the latest 3.8 branch.
No functional changes for x86.
---
 arch/arm/xen/enlighten.c |  113 +++++++++++++++++++++++++++++++-------
 arch/x86/xen/mmu.c       |  137 ++++++++++++++++++++++++++++++++++++++++------
 drivers/xen/privcmd.c    |   54 ++++++++++++++----
 include/xen/xen-ops.h    |    6 ++
 4 files changed, 264 insertions(+), 46 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 7a32976..967dea9 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -67,19 +67,19 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 	if (rc) {
 		pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
 			rc, lpfn, fgmfn);
-		return 1;
+		return rc;
 	}
 	return 0;
 }
 
 struct remap_data {
-	xen_pfn_t fgmfn; /* foreign domain's gmfn */
+	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
 	pgprot_t prot;
 	domid_t  domid;
 	struct vm_area_struct *vma;
 	int index;
 	struct page **pages;
-	struct xen_remap_mfn_info *info;
+	int *err_ptr;
 };
 
 static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
@@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	struct page *page = info->pages[info->index++];
 	unsigned long pfn = page_to_pfn(page);
 	pte_t pte = pfn_pte(pfn, info->prot);
-
-	if (map_foreign_page(pfn, info->fgmfn, info->domid))
-		return -EFAULT;
+	int err;
+	/* TODO: We should really batch these updates. */
+	err = map_foreign_page(pfn, *info->fgmfn, info->domid);
+	*info->err_ptr++ = err;
 	set_pte_at(info->vma->vm_mm, addr, ptep, pte);
+	info->fgmfn++;
 
-	return 0;
+	return err;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information.
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr, int *err_ptr,
+			pgprot_t prot, unsigned domid,
+			struct page **pages)
 {
 	int err;
 	struct remap_data data;
+	unsigned long range = nr << PAGE_SHIFT;
 
-	/* TBD: Batching, current sole caller only does page at a time */
-	if (nr > 1)
-		return -EINVAL;
+	/* Kept here for the purpose of making sure code doesn't break
+	   x86 PVOPS */
+	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	data.fgmfn = mfn;
-	data.prot = prot;
+	data.prot  = prot;
 	data.domid = domid;
-	data.vma = vma;
-	data.index = 0;
+	data.vma   = vma;
 	data.pages = pages;
-	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
+	data.index = 0;
+	data.err_ptr = err_ptr;
+
+	err = apply_to_page_range(vma->vm_mm, addr, range,
 				  remap_pte_fn, &data);
+	/* Warning: We probably do need to care about what error we
+	   get here. However, currently, the remap_pte_fn is only
+	   likely to return EFAULT or some other "things are very
+	   bad" error code, which the rest of the calling code won't
+	   be able to fix up. So we just exit with the error we got.
+	*/
 	return err;
 }
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid,
+			       struct page **pages)
+{
+	/* We BUG_ON because passing a NULL err_ptr is a programmer error,
+	 * and the later consequence, the wrong memory being mapped in,
+	 * would be very hard to trace back to its actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid, pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return -ENOSYS;
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 01de35c..323a2ab 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	xen_pfn_t *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contiguous range, just update the mfn itself,
+	   else update pointer to be "next mfn". */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,18 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
-
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that err_ptr is used to indicate whether *mfn
+ * is a list or a "first mfn of a contiguous range". */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2517,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous mapping. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2528,23 +2557,98 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr pages starting from it into this kernel's
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid, struct page **pages)
+{
+	/* We BUG_ON because passing a NULL err_ptr is a programmer error,
+	 * and the later consequence, the wrong memory being mapped in,
+	 * would be very hard to trace back to its actual cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+
 /* Returns: 0 success */
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages)
@@ -2555,3 +2659,4 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 	return -EINVAL;
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
+
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 0bbbccb..8f86a44 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -154,6 +154,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but use each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -258,7 +292,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -271,7 +305,7 @@ struct mmap_batch_state {
 /* auto translated dom0 note: if domU being created is PV, then mfn is
  * mfn(addr on bus). If it's auto xlated, then mfn is pfn (input to HAP).
  */
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
@@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		cur_page = pages[st->index++];
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain,
-					 &cur_page);
-
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	BUG_ON(nr < 0);
+	ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
+					 st->err, st->vma->vm_page_prot, 
+					 st->domain, &cur_page);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index d6fe062..d8c57cc 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -30,6 +30,12 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       xen_pfn_t mfn, int nr,
 			       pgprot_t prot, unsigned domid,
 			       struct page **pages);
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid, 
+			       struct page **pages);
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 18:53:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 18:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOln-00071D-VD; Wed, 19 Dec 2012 18:53:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1TlOlm-000718-7t
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 18:53:30 +0000
Received: from [85.158.138.51:45506] by server-3.bemta-3.messagelabs.com id
	E4/3F-31588-62D02D05; Wed, 19 Dec 2012 18:53:26 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1355943204!21635170!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTA4NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6786 invoked from network); 19 Dec 2012 18:53:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 18:53:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="1312790"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	19 Dec 2012 18:53:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 19 Dec 2012 13:53:22 -0500
Received: from [10.80.3.146] (helo=localmatsp-T3500.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<mats.petersson@citrix.com>)	id 1TlOle-0006kE-JQ;
	Wed, 19 Dec 2012 18:53:22 +0000
From: Mats Petersson <mats.petersson@citrix.com>
To: <konrad@darnok.org>, <xen-devel@lists.xensource.com>,
	<Ian.Campbell@citrix.com>
Date: Wed, 19 Dec 2012 18:53:07 +0000
Message-ID: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: mats@planetcatfish.com, david.vrabel@citrix.com, JBeulich@suse.com,
	Mats Petersson <mats.petersson@citrix.com>
Subject: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes the IOCTL_PRIVCMD_MMAPBATCH_V2 (and older V1 version)
map multiple (typically 4KB worth of mfn's) pages at a time rather than one page at
a time, despite the pages being non-consecutive MFNs. The main change
is to pass a pointer to an array of mfns, rather than one mfn. To
support error reporting, we also pass an err_ptr.

Using a small test program to map Guest memory into Dom0 (repeatedly
for "Iterations" mapping the same first "Num Pages")
Iterations    Num Pages    Time 3.7rc4  Time With this patch
5000          4096         76.107       37.027
10000         2048         75.703       37.177
20000         1024         75.893       37.247

Performance of mapping guest memory into Dom0 is about half of the
original time.

When migrating a guest, this gives around 15-40% improvement on
overall live-migration time - bigger guests show bigger improvement,
and around 30% improvment on "guest downtime".

Signed-off-by: Mats Petersson <mats.petersson@citrix.com>

---

V5 of the patch contains fixes to ARM architecture, to add the same
API on that side (which would otherwise have caused a build break).
The ARM variant can do with a little bit of performance improvement.

Some minor formatting changes, and a merge to the latest 3.8 branch.
No functional changes for x86.
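As background for the x86 side of the patch: the same helper serves
both the old contiguous-range API and the new mfn-array API by either
incrementing the mfn value itself or stepping through the array. This
can be sketched self-contained as follows (struct remap_state and
next_mfn are illustrative names for this sketch, not the patch's
remap_data/remap_area_mfn_pte_fn):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model: mfn points either at a single, incrementing frame
 * number (contiguous range) or at the current entry of an mfn array. */
struct remap_state {
	unsigned long *mfn;
	bool contiguous;
};

/* Return the frame to map next, then advance: bump the value itself
 * for a contiguous range, or step to the next array element. */
static unsigned long next_mfn(struct remap_state *st)
{
	unsigned long cur = *st->mfn;

	if (st->contiguous)
		(*st->mfn)++;
	else
		st->mfn++;
	return cur;
}
```

With this trick the range-based entry point can simply pass the
address of its single mfn and a "contiguous" flag, and no second copy
of the page-table walking code is needed.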
---
 arch/arm/xen/enlighten.c |  113 +++++++++++++++++++++++++++++++-------
 arch/x86/xen/mmu.c       |  137 ++++++++++++++++++++++++++++++++++++++++------
 drivers/xen/privcmd.c    |   54 ++++++++++++++----
 include/xen/xen-ops.h    |    6 ++
 4 files changed, 264 insertions(+), 46 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 7a32976..967dea9 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -67,19 +67,19 @@ static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 	if (rc) {
 		pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
 			rc, lpfn, fgmfn);
-		return 1;
+		return rc;
 	}
 	return 0;
 }
 
 struct remap_data {
-	xen_pfn_t fgmfn; /* foreign domain's gmfn */
+	xen_pfn_t *fgmfn; /* foreign domain's gmfn */
 	pgprot_t prot;
 	domid_t  domid;
 	struct vm_area_struct *vma;
 	int index;
 	struct page **pages;
-	struct xen_remap_mfn_info *info;
+	int *err_ptr;
 };
 
 static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
@@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
 	struct page *page = info->pages[info->index++];
 	unsigned long pfn = page_to_pfn(page);
 	pte_t pte = pfn_pte(pfn, info->prot);
-
-	if (map_foreign_page(pfn, info->fgmfn, info->domid))
-		return -EFAULT;
+	int err;
+	/* TODO: We should really batch these updates. */
+	err = map_foreign_page(pfn, *info->fgmfn, info->domid);
+	*info->err_ptr++ = err;
 	set_pte_at(info->vma->vm_mm, addr, ptep, pte);
+	info->fgmfn++;
 
-	return 0;
+	return err;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information.
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr, int *err_ptr,
+			pgprot_t prot, unsigned domid,
+			struct page **pages)
 {
 	int err;
 	struct remap_data data;
+	unsigned long range = nr << PAGE_SHIFT;
 
-	/* TBD: Batching, current sole caller only does page at a time */
-	if (nr > 1)
-		return -EINVAL;
+	/* Kept here for the purpose of making sure code doesn't break
+	   x86 PVOPS */
+	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	data.fgmfn = mfn;
-	data.prot = prot;
+	data.prot  = prot;
 	data.domid = domid;
-	data.vma = vma;
-	data.index = 0;
+	data.vma   = vma;
 	data.pages = pages;
-	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
+	data.index = 0;
+	data.err_ptr = err_ptr;
+
+	err = apply_to_page_range(vma->vm_mm, addr, range,
 				  remap_pte_fn, &data);
+	/* Warning: We probably do need to care about what error we
+	   get here. However, currently remap_pte_fn is only likely
+	   to return EFAULT or some other "things are very bad" error
+	   code, which the rest of the calling code won't be able
+	   to fix up. So we just exit with the error we got.
+	*/
 	return err;
 }
+
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ * @pages:   page information
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid,
+			       struct page **pages)
+{
+	/* We BUG_ON because it's a programmer error to pass a NULL
+	 * err_ptr, and without it the resulting "wrong memory was
+	 * mapped in" failures are very hard to trace to their cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid, pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+/* Not used in ARM. Use xen_remap_domain_mfn_array(). */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return -ENOSYS;
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 01de35c..323a2ab 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2477,7 +2477,8 @@ void __init xen_hvm_init_mmu_ops(void)
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
-	unsigned long mfn;
+	xen_pfn_t *mfn;
+	bool contiguous;
 	pgprot_t prot;
 	struct mmu_update *mmu_update;
 };
@@ -2486,7 +2487,13 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(pfn_pte(*rmd->mfn, rmd->prot));
+	/* If we have a contiguous range, just update the mfn itself,
+	   else advance the pointer to the next mfn. */
+	if (rmd->contiguous)
+		(*rmd->mfn)++;
+	else
+		rmd->mfn++;
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2495,18 +2502,34 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t mfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
-
-{
+/* do_remap_mfn() - helper function to map foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of per-MFN error values, or NULL
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into
+ * this kernel's memory. The owner of the pages is defined by domid. Where the
+ * pages are mapped is determined by addr, and vma is used for "accounting" of
+ * the pages.
+ *
+ * Return value is zero for success, negative for failure.
+ *
+ * Note that err_ptr is used to indicate whether *mfn
+ * is a list or a "first mfn of a contiguous range". */
+static int do_remap_mfn(struct vm_area_struct *vma,
+			unsigned long addr,
+			xen_pfn_t *mfn, int nr,
+			int *err_ptr, pgprot_t prot,
+			unsigned domid)
+{
+	int err, last_err = 0;
 	struct remap_data rmd;
 	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
 	unsigned long range;
-	int err = 0;
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
@@ -2517,9 +2540,15 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
+	/* We use err_ptr to indicate whether we are doing a contiguous
+	 * mapping or a discontiguous one. */
+	rmd.contiguous = !err_ptr;
 
 	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
+		int index = 0;
+		int done = 0;
+		int batch = min(REMAP_BATCH_SIZE, nr);
+		int batch_left = batch;
 		range = (unsigned long)batch << PAGE_SHIFT;
 
 		rmd.mmu_update = mmu_update;
@@ -2528,23 +2557,98 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
-		if (err < 0)
-			goto out;
+		/* We record the error for each page that gives an error, but
+		 * continue mapping until the whole set is done */
+		do {
+			err = HYPERVISOR_mmu_update(&mmu_update[index],
+						    batch_left, &done, domid);
+			if (err < 0) {
+				if (!err_ptr)
+					goto out;
+				/* increment done so we skip the error item */
+				done++;
+				last_err = err_ptr[index] = err;
+			}
+			batch_left -= done;
+			index += done;
+		} while (batch_left);
 
 		nr -= batch;
 		addr += range;
+		if (err_ptr)
+			err_ptr += batch;
 	}
 
-	err = 0;
+	err = last_err;
 out:
 
 	xen_flush_tlb_all();
 
 	return err;
 }
+
+/* xen_remap_domain_mfn_range() - Used to map foreign pages
+ * @vma:   the vma for the pages to be mapped into
+ * @addr:  the address at which to map the pages
+ * @mfn:   the first MFN to map
+ * @nr:    the number of consecutive mfns to map
+ * @prot:  page protection mask
+ * @domid: id of the domain that we are mapping from
+ *
+ * This function takes an mfn and maps nr consecutive pages from it into this kernel's
+ * memory. The owner of the pages is defined by domid. Where the pages are
+ * mapped is determined by addr, and vma is used for "accounting" of the
+ * pages.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ */
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t mfn, int nr,
+			       pgprot_t prot, unsigned domid,
+			       struct page **pages)
+{
+	return do_remap_mfn(vma, addr, &mfn, nr, NULL, prot, domid);
+}
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
+/* xen_remap_domain_mfn_array() - Used to map an array of foreign pages
+ * @vma:     the vma for the pages to be mapped into
+ * @addr:    the address at which to map the pages
+ * @mfn:     pointer to array of MFNs to map
+ * @nr:      the number of entries in the MFN array
+ * @err_ptr: pointer to array of integers, one per MFN, for an error
+ *           value for each page. The err_ptr must not be NULL.
+ * @prot:    page protection mask
+ * @domid:   id of the domain that we are mapping from
+ *
+ * This function takes an array of mfns and maps nr pages from that into this
+ * kernel's memory. The owner of the pages is defined by domid. Where the pages
+ * are mapped is determined by addr, and vma is used for "accounting" of the
+ * pages. The err_ptr array is filled in on any page that is not successfully
+ * mapped in.
+ *
+ * Return value is zero for success, negative ERRNO value for failure.
+ * Note that the error value -ENOENT is considered a "retry", so when this
+ * error code is seen, another call should be made with the list of pages that
+ * are marked as -ENOENT in the err_ptr array.
+ */
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid, struct page **pages)
+{
+	/* We BUG_ON because it's a programmer error to pass a NULL
+	 * err_ptr, and without it the resulting "wrong memory was
+	 * mapped in" failures are very hard to trace to their cause.
+	 */
+	BUG_ON(err_ptr == NULL);
+	return do_remap_mfn(vma, addr, mfn, nr, err_ptr, prot, domid);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
+
+
 /* Returns: 0 success */
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages)
@@ -2555,3 +2659,4 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 	return -EINVAL;
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
+
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 0bbbccb..8f86a44 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -154,6 +154,40 @@ static int traverse_pages(unsigned nelem, size_t size,
 	return ret;
 }
 
+/*
+ * Similar to traverse_pages, but use each page as a "block" of
+ * data to be processed as one unit.
+ */
+static int traverse_pages_block(unsigned nelem, size_t size,
+				struct list_head *pos,
+				int (*fn)(void *data, int nr, void *state),
+				void *state)
+{
+	void *pagedata;
+	unsigned pageidx;
+	int ret = 0;
+
+	BUG_ON(size > PAGE_SIZE);
+
+	pageidx = PAGE_SIZE;
+
+	while (nelem) {
+		int nr = (PAGE_SIZE/size);
+		struct page *page;
+		if (nr > nelem)
+			nr = nelem;
+		pos = pos->next;
+		page = list_entry(pos, struct page, lru);
+		pagedata = page_address(page);
+		ret = (*fn)(pagedata, nr, state);
+		if (ret)
+			break;
+		nelem -= nr;
+	}
+
+	return ret;
+}
+
 struct mmap_mfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -258,7 +292,7 @@ struct mmap_batch_state {
 	 *      0 for no errors
 	 *      1 if at least one error has happened (and no
 	 *          -ENOENT errors have happened)
-	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 *      -ENOENT if at least one -ENOENT has happened.
 	 */
 	int global_error;
 	/* An array for individual errors */
@@ -271,7 +305,7 @@ struct mmap_batch_state {
 /* auto translated dom0 note: if domU being created is PV, then mfn is
  * mfn(addr on bus). If it's auto xlated, then mfn is pfn (input to HAP).
  */
-static int mmap_batch_fn(void *data, void *state)
+static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
@@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		cur_page = pages[st->index++];
 
-	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-					 st->vma->vm_page_prot, st->domain,
-					 &cur_page);
-
-	/* Store error code for second pass. */
-	*(st->err++) = ret;
+	BUG_ON(nr < 0);
+	ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
+					 st->err, st->vma->vm_page_prot, 
+					 st->domain, &cur_page);
 
 	/* And see if it affects the global_error. */
 	if (ret < 0) {
@@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
 				st->global_error = 1;
 		}
 	}
-	st->va += PAGE_SIZE;
+	st->va += PAGE_SIZE * nr;
 
 	return 0;
 }
@@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 	state.err           = err_array;
 
 	/* mmap_batch_fn guarantees ret == 0 */
-	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state));
+	BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
+				    &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index d6fe062..d8c57cc 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -30,6 +30,12 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       xen_pfn_t mfn, int nr,
 			       pgprot_t prot, unsigned domid,
 			       struct page **pages);
+int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t *mfn, int nr,
+			       int *err_ptr, pgprot_t prot,
+			       unsigned domid, 
+			       struct page **pages);
 int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzn-0007Rw-JV; Wed, 19 Dec 2012 19:07:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzl-0007Rr-CI
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:07:57 +0000
Received: from [85.158.137.99:31027] by server-9.bemta-3.messagelabs.com id
	A9/9B-11948-C8012D05; Wed, 19 Dec 2012 19:07:56 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355944075!18425397!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32762 invoked from network); 19 Dec 2012 19:07:55 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:07:55 -0000
Received: by mail-wi0-f178.google.com with SMTP id hn3so1469371wib.17
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:07:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:message-id:user-agent:date:from
	:to:cc; bh=QokVDijDx9k8x+ocm0D0PNotl8rukv3kAV/mnU2nBCk=;
	b=l4ZutK9Entbrw/qqq8HTKUIZVoJm8PQ0CNuhw7SKw4q73iGaipVCQ1z1bGIKCuU6ZS
	eL5vPek4m2RSF86BO+Lsm7NrlML//Z7ulHl/bMwkJgW65py/RpWahBWpsaghXtqdu0u5
	wRObABJvmmEmz+OkCxOZh+RnknPCKwL25JxpnfsjXzNHFm6wuKiRuMtq3DRVYHGgDDZD
	ikhLSiL/3th7OVxyQgoqv93H/hPAWTCSzHKk4pNXFcLrgz6I8MsmJecrKfpISl8+fbw0
	ryaNmJKI3LOwqVA7y+TKzt9XbHvMZFp2v4lNj1ul4XJXwUpA45+uMa0W7PH5E8oKW2l+
	yK2w==
X-Received: by 10.194.21.70 with SMTP id t6mr13380521wje.42.1355944075353;
	Wed, 19 Dec 2012 11:07:55 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.07.53
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:07:54 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:16 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 00 of 10 v2] NUMA aware credit scheduling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Everyone,

Here is take 2 of the NUMA aware credit scheduling series. Sorry it took
a bit, but I had to take care of those nasty bugs causing scheduling
anomalies, as they were getting in the way and messing up the numbers
when I tried to evaluate the performance of this! :-)

I also rewrote most of the core of the two-step vcpu and node affinity
balancing algorithm, as per George's suggestion during the last round, to
try to squeeze out a little more performance improvement.

As already said repeatedly, what the series does is provide the (credit)
scheduler with knowledge of a domain's node-affinity. It will then always
try to run the domain's vCPUs on one of those nodes first, and only if
that turns out to be impossible does it fall back to the old behaviour.
(BTW, for any update on the status of my "quest" to improve NUMA support
in Xen, see http://wiki.xen.org/wiki/Xen_NUMA_Roadmap.)
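The two-step placement logic described above can be sketched as a small
self-contained function. This is an illustrative model only (pick_cpu
and its boolean-array signature are invented for the sketch, not the
actual sched_credit code, which works on cpumasks):

```c
#include <assert.h>
#include <stdbool.h>

/* Step 1: prefer an idle CPU inside the domain's node-affinity.
 * Step 2: fall back to the old behaviour, i.e. any idle CPU at all. */
static int pick_cpu(const bool *idle, const bool *node_affine, int ncpus)
{
	int i;

	/* First pass: idle CPUs on one of the domain's preferred nodes. */
	for (i = 0; i < ncpus; i++)
		if (idle[i] && node_affine[i])
			return i;

	/* Second pass: any idle CPU will do. */
	for (i = 0; i < ncpus; i++)
		if (idle[i])
			return i;

	return -1; /* nothing idle right now */
}
```

The fallback pass is what keeps node-affinity a preference rather than
a hard pin, which is where the flexibility over pinning comes from.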

I re-ran my usual benchmark, SpecJBB2005, plus some others, i.e., some
configurations of sysbench and lmbench. A little more about them follows:

 * SpecJBB is all about throughput, so pinning is likely the ideal solution.

 * Sysbench-memory measures the time it takes to write a fixed amount
   of memory (it is then the throughput that is reported). We expect
   locality to be important, but at the same time the potential
   imbalances due to pinning could have a say in it.

 * Lmbench-proc measures the time it takes for a process to fork a fixed
   number of children. This is much more about latency than throughput,
   with locality of memory accesses playing a smaller role and, again,
   imbalances due to pinning being a potential issue.

On a 2-node, 16-core system, where I can have 2 to 10 VMs (2 vCPUs each)
executing the benchmarks concurrently, here's what I get:

 ----------------------------------------------------
 | SpecJBB2005, throughput (the higher the better)  |
 ----------------------------------------------------
 | #VMs | No affinity |  Pinning  | NUMA scheduling |
 |    2 |   43451.853 | 49876.750 |    49693.653    |
 |    6 |   29368.589 | 33782.132 |    33692.936    |
 |   10 |   19138.934 | 21950.696 |    21413.311    |
 ----------------------------------------------------
 | Sysbench memory, throughput (the higher the better)
 ----------------------------------------------------
 | #VMs | No affinity |  Pinning  | NUMA scheduling |
 |    2 |  484.42167  | 552.32667 |    552.86167    |
 |    6 |  404.43667  | 440.00056 |    449.42611    |
 |   10 |  296.45600  | 315.51733 |    331.49067    |
 ----------------------------------------------------
 | LMBench proc, latency (the lower the better)     |
 ----------------------------------------------------
 | #VMs | No affinity |  Pinning  | NUMA scheduling |
 ----------------------------------------------------
 |    2 |  824.00437  | 749.51892 |    741.42952    |
 |    6 |  942.39442  | 985.02761 |    974.94700    |
 |   10 |  1254.3121  | 1363.0792 |    1301.2917    |
 ----------------------------------------------------

In terms of percentage performance increase/decrease, this means NUMA
aware scheduling does as follows, compared to no affinity at all and to
pinning:

     ----------------------------------
     | SpecJBB2005 (throughput)       |
     ----------------------------------
     | #VMs | No affinity |  Pinning  |
     |    2 |   +14.36%   |   -0.36%  |
     |    6 |   +14.72%   |   -0.26%  |
     |   10 |   +11.88%   |   -2.44%  |
     ----------------------------------
     | Sysbench memory (throughput)   |
     ----------------------------------
     | #VMs | No affinity |  Pinning  |
     |    2 |   +14.12%   |   +0.09%  |
     |    6 |   +11.12%   |   +2.14%  |
     |   10 |   +11.81%   |   +5.06%  |
     ----------------------------------
     | LMBench proc (latency)         |
     ----------------------------------
     | #VMs | No affinity |  Pinning  |
     ----------------------------------
     |    2 |   +10.02%   |   +1.07%  |
     |    6 |    +3.45%   |   +1.02%  |
     |   10 |    +2.94%   |   +4.53%  |
     ----------------------------------

The numbers seem to show we're being successful in taking advantage of
both the improved locality (when compared to no affinity) and the
greater flexibility the NUMA aware scheduling approach gives us (when
compared to pinning). In fact, where throughput alone is concerned (the
SpecJBB case), it behaves almost on par with pinning, and a lot better
than no affinity at all. Moreover, we even manage to do better than both
when latency comes a little more into the game and the imbalances caused
by pinning would make things worse than having no affinity at all, as in
the sysbench and, especially, the LMBench case.

Here are the patches included in the series. I '*'-ed the ones that
already received one or more acks during v1. However, some patches were
significantly reworked since then; in those cases I ignored the acks and
left them with my SOB only, as I think they definitely need to be
re-reviewed. :-)

 * [ 1/10] xen, libxc: rename xenctl_cpumap to xenctl_bitmap
 * [ 2/10] xen, libxc: introduce node maps and masks
   [ 3/10] xen: sched_credit: let the scheduler know about node-affinity
   [ 4/10] xen: allow for explicitly specifying node-affinity
 * [ 5/10] libxc: allow for explicitly specifying node-affinity
 * [ 6/10] libxl: allow for explicitly specifying node-affinity
   [ 7/10] libxl: optimize the calculation of how many VCPUs can run on a candidate
 * [ 8/10] libxl: automatic placement deals with node-affinity
 * [ 9/10] xl: add node-affinity to the output of `xl list`
   [10/10] docs: rearrange and update NUMA placement documentation

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzn-0007Rw-JV; Wed, 19 Dec 2012 19:07:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzl-0007Rr-CI
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:07:57 +0000
Received: from [85.158.137.99:31027] by server-9.bemta-3.messagelabs.com id
	A9/9B-11948-C8012D05; Wed, 19 Dec 2012 19:07:56 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355944075!18425397!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32762 invoked from network); 19 Dec 2012 19:07:55 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-14.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:07:55 -0000
Received: by mail-wi0-f178.google.com with SMTP id hn3so1469371wib.17
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:07:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:message-id:user-agent:date:from
	:to:cc; bh=QokVDijDx9k8x+ocm0D0PNotl8rukv3kAV/mnU2nBCk=;
	b=l4ZutK9Entbrw/qqq8HTKUIZVoJm8PQ0CNuhw7SKw4q73iGaipVCQ1z1bGIKCuU6ZS
	eL5vPek4m2RSF86BO+Lsm7NrlML//Z7ulHl/bMwkJgW65py/RpWahBWpsaghXtqdu0u5
	wRObABJvmmEmz+OkCxOZh+RnknPCKwL25JxpnfsjXzNHFm6wuKiRuMtq3DRVYHGgDDZD
	ikhLSiL/3th7OVxyQgoqv93H/hPAWTCSzHKk4pNXFcLrgz6I8MsmJecrKfpISl8+fbw0
	ryaNmJKI3LOwqVA7y+TKzt9XbHvMZFp2v4lNj1ul4XJXwUpA45+uMa0W7PH5E8oKW2l+
	yK2w==
X-Received: by 10.194.21.70 with SMTP id t6mr13380521wje.42.1355944075353;
	Wed, 19 Dec 2012 11:07:55 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.07.53
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:07:54 -0800 (PST)
MIME-Version: 1.0
Message-Id: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:16 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 00 of 10 v2] NUMA aware credit scheduling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Everyone,

Here it is the take 2 of the NUMA aware credit scheduling series. Sorry it took
a bit, but I had to take care of those nasty bugs causing scheduling anomalies,
as they were getting in the way and messing up the numbers when trying to
evaluate performances of this! :-)

I also rewrote most of the core of the two step vcpu and node affinity
balancing algorithm, as per George's suggestion during last round, to try to
squeeze a little bit more of performances improvement.

As already and repeatedly said, what the series does is providing the (credit)
scheduler with the knowledge of a domain's node-affinity. It will then always
try to run domain's vCPUs on one of those nodes first. Only if that turns out
to be impossible, it falls back to the old behaviour. (BTW, for any update on
the status of my "quest" about improving NUMA support in Xen, look
http://wiki.xen.org/wiki/Xen_NUMA_Roadmap.)

I rerun my usual benchmark, SpecJBB2005, plus some others, i.e., some
configurations of sysbench and lmbench. A little bit more about them follows:

 * SpecJBB is all about throughput, so pinning is likely the ideal solution.

 * Sysbench-memory is the time it takes for writing a fixed amount of memory
   (and then it is the throughput that is measured). What we expect is
   locality to be important, but at the same time the potential imbalances
   due to pinning could have a say in it.

 * Lmbench-proc is the time it takes for a process to fork a fixed amount of
   children. This is much more about latency than throughput, with locality
   of memory accesses playing a smaller role and, again, imbalances due to
   pinning being a potential issue.

On a 2 nodes, 16 cores system, where I can have 2 to 10 VMs (2 vCPUs each)
executing the benchmarks concurrently, here's what I get:

 ----------------------------------------------------
 | SpecJBB2005, throughput (the higher the better)  |
 ----------------------------------------------------
 | #VMs | No affinity |  Pinning  | NUMA scheduling |
 ----------------------------------------------------
 |    2 |   43451.853 | 49876.750 |    49693.653    |
 |    6 |   29368.589 | 33782.132 |    33692.936    |
 |   10 |   19138.934 | 21950.696 |    21413.311    |
 ----------------------------------------------------
 | Sysbench memory, throughput (the higher the better) |
 ----------------------------------------------------
 | #VMs | No affinity |  Pinning  | NUMA scheduling |
 ----------------------------------------------------
 |    2 |  484.42167  | 552.32667 |    552.86167    |
 |    6 |  404.43667  | 440.00056 |    449.42611    |
 |   10 |  296.45600  | 315.51733 |    331.49067    |
 ----------------------------------------------------
 | LMBench proc, latency (the lower the better)     |
 ----------------------------------------------------
 | #VMs | No affinity |  Pinning  | NUMA scheduling |
 ----------------------------------------------------
 |    2 |  824.00437  | 749.51892 |    741.42952    |
 |    6 |  942.39442  | 985.02761 |    974.94700    |
 |   10 |  1254.3121  | 1363.0792 |    1301.2917    |
 ----------------------------------------------------

Which, reasoning in terms of % performance increase/decrease, means NUMA aware
scheduling does as follows, compared to no affinity at all and to pinning:

     ----------------------------------
     | SpecJBB2005 (throughput)       |
     ----------------------------------
     | #VMs | No affinity |  Pinning  |
     ----------------------------------
     |    2 |   +14.36%   |   -0.36%  |
     |    6 |   +14.72%   |   -0.26%  |
     |   10 |   +11.88%   |   -2.44%  |
     ----------------------------------
     | Sysbench memory (throughput)   |
     ----------------------------------
     | #VMs | No affinity |  Pinning  |
     ----------------------------------
     |    2 |   +14.12%   |   +0.09%  |
     |    6 |   +11.12%   |   +2.14%  |
     |   10 |   +11.81%   |   +5.06%  |
     ----------------------------------
     | LMBench proc (latency)         |
     ----------------------------------
     | #VMs | No affinity |  Pinning  |
     ----------------------------------
     |    2 |   +10.02%   |   +1.07%  |
     |    6 |    +3.45%   |   +1.02%  |
     |   10 |    +2.94%   |   +4.53%  |
     ----------------------------------

The numbers seem to say we are successfully taking advantage of both the
improved locality (when compared to no affinity) and the greater flexibility
the NUMA aware scheduling approach gives us (when compared to pinning). In
fact, where throughput alone is concerned (the SpecJBB case), it behaves
almost on par with pinning, and a lot better than no affinity at all.
Moreover, we even manage to do better than both of them when latency comes a
little bit more into the game and the imbalances caused by pinning would make
things worse than not having any affinity, as in the sysbench and, especially,
the LMBench case.

Here are the patches included in the series. I '*'-ed the ones that already
received one or more acks during v1. However, some patches were significantly
reworked since then; in those cases I dropped the acks and left the patches
with my SOB only, as I think they definitely need to be re-reviewed. :-)

 * [ 1/10] xen, libxc: rename xenctl_cpumap to xenctl_bitmap
 * [ 2/10] xen, libxc: introduce node maps and masks
   [ 3/10] xen: sched_credit: let the scheduler know about node-affinity
   [ 4/10] xen: allow for explicitly specifying node-affinity
 * [ 5/10] libxc: allow for explicitly specifying node-affinity
 * [ 6/10] libxl: allow for explicitly specifying node-affinity
   [ 7/10] libxl: optimize the calculation of how many VCPUs can run on a candidate
 * [ 8/10] libxl: automatic placement deals with node-affinity
 * [ 9/10] xl: add node-affinity to the output of `xl list`
   [10/10] docs: rearrange and update NUMA placement documentation

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzt-0007SS-W7; Wed, 19 Dec 2012 19:08:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzs-0007SA-2z
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:04 +0000
Received: from [85.158.139.211:24239] by server-11.bemta-5.messagelabs.com id
	7B/50-31624-29012D05; Wed, 19 Dec 2012 19:08:02 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1355944081!18395774!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25142 invoked from network); 19 Dec 2012 19:08:01 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:01 -0000
Received: by mail-we0-f169.google.com with SMTP id t49so1173079wey.14
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:07:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=gMfE7DU4gn6ciyd7gz6hVj+8qsu3IgTNtjgayDOUh2s=;
	b=bHNVpnh0xtMFpHwE5O5p8g3zC5TCEcSa3TeASUXOe2LKd5hvebHCa1PtBT8Qop9wC6
	BZKn4VhzON7k0MQRYSsSPD8OfNFiniCiIF1X0P0uNPjhDPoHaOXslvSA1jV7fTQXwJ6e
	DzHQ9hKGF1bF3BfOKDSRzp1bXR4u/mL4IuoiN6nHMCh/WEZXk5pXXCILCD/k8CuhZekj
	rbwIckb4Ho3BFIi/5z7Gxx2J6m0vxxb5AQKSThQ6OY+acsYRF8AP+/KFQLcUakiG8/jg
	QG70vT60fzYvXMMiw8lJuq+ii72Fjadeb0c+elj4SUy30uc9lv0PYC6m8FL8a77S4Hwt
	6JTw==
X-Received: by 10.180.93.133 with SMTP id cu5mr5789887wib.32.1355944077675;
	Wed, 19 Dec 2012 11:07:57 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.07.55
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:07:56 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 9bcd5859fb9c7fa29dee8d04438a81432246114d
Message-Id: <9bcd5859fb9c7fa29dee.1355944037@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:17 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 01 of 10 v2] xen,
	libxc: rename xenctl_cpumap to xenctl_bitmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

More specifically, this patch:
 1. replaces xenctl_cpumap with xenctl_bitmap;
 2. provides bitmap_to_xenctl_bitmap and the reverse;
 3. re-implements cpumask_to_xenctl_bitmap and the reverse on top of
    the functions from #2.

Apart from #3, there are no functional changes; the interface is only
slightly affected.

This is in preparation for introducing NUMA node-affinity maps.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
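
[ Not part of the patch, just to illustrate the conversion logic it
generalizes: below is a minimal userspace model (hypothetical name
copy_clamped, plain byte arrays instead of guest handles) of the size
clamping and trailing-bit masking done by xenctl_bitmap_to_bitmap in
the hunks that follow: the guest supplies nr_elems bits, the hypervisor
wants nbits, only the overlap is copied, and bits past nr_elems in the
last byte are cleared. ]

```c
#include <stdint.h>
#include <string.h>

/* Copy a guest-provided bytemap of nr_elems bits into a local buffer
 * sized for nbits, clamping to the smaller of the two and clearing any
 * stray bits beyond nr_elems in the final byte (mirroring the
 * ~(0xff << (nr_elems & 7)) mask in the patch). Returns the number of
 * bytes actually copied. */
static unsigned int copy_clamped(uint8_t *dst, unsigned int nbits,
                                 const uint8_t *src, unsigned int nr_elems)
{
    unsigned int guest_bytes = (nr_elems + 7) / 8;
    unsigned int local_bytes = (nbits + 7) / 8;
    unsigned int copy_bytes  = guest_bytes < local_bytes ? guest_bytes
                                                         : local_bytes;

    memset(dst, 0, local_bytes);
    memcpy(dst, src, copy_bytes);

    /* Only mask when the whole guest map fit: otherwise the last
     * copied byte is a "full" byte of the (truncated) guest map. */
    if ( (nr_elems & 7) && guest_bytes == copy_bytes )
        dst[guest_bytes - 1] &= ~(0xff << (nr_elems & 7));

    return copy_bytes;
}
```

E.g., a guest map of 10 bits against a local size of 16 bits copies two
bytes and keeps only the low two bits of the second byte.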

diff --git a/tools/libxc/xc_cpupool.c b/tools/libxc/xc_cpupool.c
--- a/tools/libxc/xc_cpupool.c
+++ b/tools/libxc/xc_cpupool.c
@@ -90,7 +90,7 @@ xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_
     sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO;
     sysctl.u.cpupool_op.cpupool_id = poolid;
     set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
-    sysctl.u.cpupool_op.cpumap.nr_cpus = local_size * 8;
+    sysctl.u.cpupool_op.cpumap.nr_elems = local_size * 8;
 
     err = do_sysctl_save(xch, &sysctl);
 
@@ -184,7 +184,7 @@ xc_cpumap_t xc_cpupool_freeinfo(xc_inter
     sysctl.cmd = XEN_SYSCTL_cpupool_op;
     sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_FREEINFO;
     set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
-    sysctl.u.cpupool_op.cpumap.nr_cpus = mapsize * 8;
+    sysctl.u.cpupool_op.cpumap.nr_elems = mapsize * 8;
 
     err = do_sysctl_save(xch, &sysctl);
 
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -142,7 +142,7 @@ int xc_vcpu_setaffinity(xc_interface *xc
 
     set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
 
-    domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
+    domctl.u.vcpuaffinity.cpumap.nr_elems = cpusize * 8;
 
     ret = do_domctl(xch, &domctl);
 
@@ -182,7 +182,7 @@ int xc_vcpu_getaffinity(xc_interface *xc
     domctl.u.vcpuaffinity.vcpu = vcpu;
 
     set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
-    domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
+    domctl.u.vcpuaffinity.cpumap.nr_elems = cpusize * 8;
 
     ret = do_domctl(xch, &domctl);
 
diff --git a/tools/libxc/xc_tbuf.c b/tools/libxc/xc_tbuf.c
--- a/tools/libxc/xc_tbuf.c
+++ b/tools/libxc/xc_tbuf.c
@@ -134,7 +134,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
     bitmap_64_to_byte(bytemap, &mask64, sizeof (mask64) * 8);
 
     set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap);
-    sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8;
+    sysctl.u.tbuf_op.cpu_mask.nr_elems = sizeof(bytemap) * 8;
 
     ret = do_sysctl(xch, &sysctl);
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1457,8 +1457,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_m
             cpumap = &cpu_online_map;
         else
         {
-            ret = xenctl_cpumap_to_cpumask(&cmv,
-                                           &op->u.mc_inject_v2.cpumap);
+            ret = xenctl_bitmap_to_cpumask(&cmv, &op->u.mc_inject_v2.cpumap);
             if ( ret )
                 break;
             cpumap = cmv;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -375,7 +375,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
     {
         uint32_t cpu;
         uint64_t idletime, now = NOW();
-        struct xenctl_cpumap ctlmap;
+        struct xenctl_bitmap ctlmap;
         cpumask_var_t cpumap;
         XEN_GUEST_HANDLE(uint8) cpumap_bitmap;
         XEN_GUEST_HANDLE(uint64) idletimes;
@@ -388,11 +388,11 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
         if ( cpufreq_controller != FREQCTL_dom0_kernel )
             break;
 
-        ctlmap.nr_cpus  = op->u.getidletime.cpumap_nr_cpus;
+        ctlmap.nr_elems  = op->u.getidletime.cpumap_nr_cpus;
         guest_from_compat_handle(cpumap_bitmap,
                                  op->u.getidletime.cpumap_bitmap);
         ctlmap.bitmap.p = cpumap_bitmap.p; /* handle -> handle_64 conversion */
-        if ( (ret = xenctl_cpumap_to_cpumask(&cpumap, &ctlmap)) != 0 )
+        if ( (ret = xenctl_bitmap_to_cpumask(&cpumap, &ctlmap)) != 0 )
             goto out;
         guest_from_compat_handle(idletimes, op->u.getidletime.idletime);
 
@@ -411,7 +411,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PA
 
         op->u.getidletime.now = now;
         if ( ret == 0 )
-            ret = cpumask_to_xenctl_cpumap(&ctlmap, cpumap);
+            ret = cpumask_to_xenctl_bitmap(&ctlmap, cpumap);
         free_cpumask_var(cpumap);
 
         if ( ret == 0 && __copy_field_to_guest(u_xenpf_op, op, u.getidletime) )
diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -493,7 +493,7 @@ int cpupool_do_sysctl(struct xen_sysctl_
         op->cpupool_id = c->cpupool_id;
         op->sched_id = c->sched->sched_id;
         op->n_dom = c->n_dom;
-        ret = cpumask_to_xenctl_cpumap(&op->cpumap, c->cpu_valid);
+        ret = cpumask_to_xenctl_bitmap(&op->cpumap, c->cpu_valid);
         cpupool_put(c);
     }
     break;
@@ -588,7 +588,7 @@ int cpupool_do_sysctl(struct xen_sysctl_
 
     case XEN_SYSCTL_CPUPOOL_OP_FREEINFO:
     {
-        ret = cpumask_to_xenctl_cpumap(
+        ret = cpumask_to_xenctl_bitmap(
             &op->cpumap, &cpupool_free_cpus);
     }
     break;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -32,28 +32,29 @@
 static DEFINE_SPINLOCK(domctl_lock);
 DEFINE_SPINLOCK(vcpu_alloc_lock);
 
-int cpumask_to_xenctl_cpumap(
-    struct xenctl_cpumap *xenctl_cpumap, const cpumask_t *cpumask)
+int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
+                            const unsigned long *bitmap,
+                            unsigned int nbits)
 {
     unsigned int guest_bytes, copy_bytes, i;
     uint8_t zero = 0;
     int err = 0;
-    uint8_t *bytemap = xmalloc_array(uint8_t, (nr_cpu_ids + 7) / 8);
+    uint8_t *bytemap = xmalloc_array(uint8_t, (nbits + 7) / 8);
 
     if ( !bytemap )
         return -ENOMEM;
 
-    guest_bytes = (xenctl_cpumap->nr_cpus + 7) / 8;
-    copy_bytes  = min_t(unsigned int, guest_bytes, (nr_cpu_ids + 7) / 8);
+    guest_bytes = (xenctl_bitmap->nr_elems + 7) / 8;
+    copy_bytes  = min_t(unsigned int, guest_bytes, (nbits + 7) / 8);
 
-    bitmap_long_to_byte(bytemap, cpumask_bits(cpumask), nr_cpu_ids);
+    bitmap_long_to_byte(bytemap, bitmap, nbits);
 
     if ( copy_bytes != 0 )
-        if ( copy_to_guest(xenctl_cpumap->bitmap, bytemap, copy_bytes) )
+        if ( copy_to_guest(xenctl_bitmap->bitmap, bytemap, copy_bytes) )
             err = -EFAULT;
 
     for ( i = copy_bytes; !err && i < guest_bytes; i++ )
-        if ( copy_to_guest_offset(xenctl_cpumap->bitmap, i, &zero, 1) )
+        if ( copy_to_guest_offset(xenctl_bitmap->bitmap, i, &zero, 1) )
             err = -EFAULT;
 
     xfree(bytemap);
@@ -61,36 +62,58 @@ int cpumask_to_xenctl_cpumap(
     return err;
 }
 
-int xenctl_cpumap_to_cpumask(
-    cpumask_var_t *cpumask, const struct xenctl_cpumap *xenctl_cpumap)
+int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
+                            const struct xenctl_bitmap *xenctl_bitmap,
+                            unsigned int nbits)
 {
     unsigned int guest_bytes, copy_bytes;
     int err = 0;
-    uint8_t *bytemap = xzalloc_array(uint8_t, (nr_cpu_ids + 7) / 8);
+    uint8_t *bytemap = xzalloc_array(uint8_t, (nbits + 7) / 8);
 
     if ( !bytemap )
         return -ENOMEM;
 
-    guest_bytes = (xenctl_cpumap->nr_cpus + 7) / 8;
-    copy_bytes  = min_t(unsigned int, guest_bytes, (nr_cpu_ids + 7) / 8);
+    guest_bytes = (xenctl_bitmap->nr_elems + 7) / 8;
+    copy_bytes  = min_t(unsigned int, guest_bytes, (nbits + 7) / 8);
 
     if ( copy_bytes != 0 )
     {
-        if ( copy_from_guest(bytemap, xenctl_cpumap->bitmap, copy_bytes) )
+        if ( copy_from_guest(bytemap, xenctl_bitmap->bitmap, copy_bytes) )
             err = -EFAULT;
-        if ( (xenctl_cpumap->nr_cpus & 7) && (guest_bytes == copy_bytes) )
-            bytemap[guest_bytes-1] &= ~(0xff << (xenctl_cpumap->nr_cpus & 7));
+        if ( (xenctl_bitmap->nr_elems & 7) && (guest_bytes == copy_bytes) )
+            bytemap[guest_bytes-1] &= ~(0xff << (xenctl_bitmap->nr_elems & 7));
     }
 
-    if ( err )
-        /* nothing */;
-    else if ( alloc_cpumask_var(cpumask) )
-        bitmap_byte_to_long(cpumask_bits(*cpumask), bytemap, nr_cpu_ids);
+    if ( !err )
+        bitmap_byte_to_long(bitmap, bytemap, nbits);
+
+    xfree(bytemap);
+
+    return err;
+}
+
+int cpumask_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_cpumap,
+                             const cpumask_t *cpumask)
+{
+    return bitmap_to_xenctl_bitmap(xenctl_cpumap, cpumask_bits(cpumask),
+                                   nr_cpu_ids);
+}
+
+int xenctl_bitmap_to_cpumask(cpumask_var_t *cpumask,
+                             const struct xenctl_bitmap *xenctl_cpumap)
+{
+    int err = 0;
+
+    if ( alloc_cpumask_var(cpumask) ) {
+        err = xenctl_bitmap_to_bitmap(cpumask_bits(*cpumask), xenctl_cpumap,
+                                      nr_cpu_ids);
+        /* In case of error, cleanup is up to us, as the caller won't care! */
+        if ( err )
+            free_cpumask_var(*cpumask);
+    }
     else
         err = -ENOMEM;
 
-    xfree(bytemap);
-
     return err;
 }
 
@@ -583,7 +606,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         {
             cpumask_var_t new_affinity;
 
-            ret = xenctl_cpumap_to_cpumask(
+            ret = xenctl_bitmap_to_cpumask(
                 &new_affinity, &op->u.vcpuaffinity.cpumap);
             if ( !ret )
             {
@@ -593,7 +616,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         }
         else
         {
-            ret = cpumask_to_xenctl_cpumap(
+            ret = cpumask_to_xenctl_bitmap(
                 &op->u.vcpuaffinity.cpumap, v->cpu_affinity);
         }
     }
diff --git a/xen/common/trace.c b/xen/common/trace.c
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -384,7 +384,7 @@ int tb_control(xen_sysctl_tbuf_op_t *tbc
     {
         cpumask_var_t mask;
 
-        rc = xenctl_cpumap_to_cpumask(&mask, &tbc->cpu_mask);
+        rc = xenctl_bitmap_to_cpumask(&mask, &tbc->cpu_mask);
         if ( !rc )
         {
             cpumask_copy(&tb_cpu_mask, mask);
diff --git a/xen/include/public/arch-x86/xen-mca.h b/xen/include/public/arch-x86/xen-mca.h
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -414,7 +414,7 @@ struct xen_mc_mceinject {
 
 struct xen_mc_inject_v2 {
 	uint32_t flags;
-	struct xenctl_cpumap cpumap;
+	struct xenctl_bitmap cpumap;
 };
 #endif
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -284,7 +284,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvc
 /* XEN_DOMCTL_getvcpuaffinity */
 struct xen_domctl_vcpuaffinity {
     uint32_t  vcpu;              /* IN */
-    struct xenctl_cpumap cpumap; /* IN/OUT */
+    struct xenctl_bitmap cpumap; /* IN/OUT */
 };
 typedef struct xen_domctl_vcpuaffinity xen_domctl_vcpuaffinity_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpuaffinity_t);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -71,7 +71,7 @@ struct xen_sysctl_tbuf_op {
 #define XEN_SYSCTL_TBUFOP_disable      5
     uint32_t cmd;
     /* IN/OUT variables */
-    struct xenctl_cpumap cpu_mask;
+    struct xenctl_bitmap cpu_mask;
     uint32_t             evt_mask;
     /* OUT variables */
     uint64_aligned_t buffer_mfn;
@@ -532,7 +532,7 @@ struct xen_sysctl_cpupool_op {
     uint32_t domid;       /* IN: M              */
     uint32_t cpu;         /* IN: AR             */
     uint32_t n_dom;       /*            OUT: I  */
-    struct xenctl_cpumap cpumap; /*     OUT: IF */
+    struct xenctl_bitmap cpumap; /*     OUT: IF */
 };
 typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -851,9 +851,9 @@ typedef uint8_t xen_domain_handle_t[16];
 #endif
 
 #ifndef __ASSEMBLY__
-struct xenctl_cpumap {
+struct xenctl_bitmap {
     XEN_GUEST_HANDLE_64(uint8) bitmap;
-    uint32_t nr_cpus;
+    uint32_t nr_elems;
 };
 #endif
 
diff --git a/xen/include/xen/cpumask.h b/xen/include/xen/cpumask.h
--- a/xen/include/xen/cpumask.h
+++ b/xen/include/xen/cpumask.h
@@ -424,8 +424,8 @@ extern cpumask_t cpu_present_map;
 #define for_each_present_cpu(cpu)  for_each_cpu(cpu, &cpu_present_map)
 
 /* Copy to/from cpumap provided by control tools. */
-struct xenctl_cpumap;
-int cpumask_to_xenctl_cpumap(struct xenctl_cpumap *, const cpumask_t *);
-int xenctl_cpumap_to_cpumask(cpumask_var_t *, const struct xenctl_cpumap *);
+struct xenctl_bitmap;
+int cpumask_to_xenctl_bitmap(struct xenctl_bitmap *, const cpumask_t *);
+int xenctl_bitmap_to_cpumask(cpumask_var_t *, const struct xenctl_bitmap *);
 
 #endif /* __XEN_CPUMASK_H */
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -2,7 +2,7 @@
 # ! - needs translation
 # ? - needs checking
 ?	dom0_vga_console_info		xen.h
-?	xenctl_cpumap			xen.h
+?	xenctl_bitmap			xen.h
 ?	mmu_update			xen.h
 !	mmuext_op			xen.h
 !	start_info			xen.h


-        if ( (xenctl_cpumap->nr_cpus & 7) && (guest_bytes == copy_bytes) )
-            bytemap[guest_bytes-1] &= ~(0xff << (xenctl_cpumap->nr_cpus & 7));
+        if ( (xenctl_bitmap->nr_elems & 7) && (guest_bytes == copy_bytes) )
+            bytemap[guest_bytes-1] &= ~(0xff << (xenctl_bitmap->nr_elems & 7));
     }
 
-    if ( err )
-        /* nothing */;
-    else if ( alloc_cpumask_var(cpumask) )
-        bitmap_byte_to_long(cpumask_bits(*cpumask), bytemap, nr_cpu_ids);
+    if ( !err )
+        bitmap_byte_to_long(bitmap, bytemap, nbits);
+
+    xfree(bytemap);
+
+    return err;
+}
+
+int cpumask_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_cpumap,
+                             const cpumask_t *cpumask)
+{
+    return bitmap_to_xenctl_bitmap(xenctl_cpumap, cpumask_bits(cpumask),
+                                   nr_cpu_ids);
+}
+
+int xenctl_bitmap_to_cpumask(cpumask_var_t *cpumask,
+                             const struct xenctl_bitmap *xenctl_cpumap)
+{
+    int err = 0;
+
+    if ( alloc_cpumask_var(cpumask) ) {
+        err = xenctl_bitmap_to_bitmap(cpumask_bits(*cpumask), xenctl_cpumap,
+                                      nr_cpu_ids);
+        /* In case of error, cleanup is up to us, as the caller won't care! */
+        if ( err )
+            free_cpumask_var(*cpumask);
+    }
     else
         err = -ENOMEM;
 
-    xfree(bytemap);
-
     return err;
 }
 
@@ -583,7 +606,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         {
             cpumask_var_t new_affinity;
 
-            ret = xenctl_cpumap_to_cpumask(
+            ret = xenctl_bitmap_to_cpumask(
                 &new_affinity, &op->u.vcpuaffinity.cpumap);
             if ( !ret )
             {
@@ -593,7 +616,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         }
         else
         {
-            ret = cpumask_to_xenctl_cpumap(
+            ret = cpumask_to_xenctl_bitmap(
                 &op->u.vcpuaffinity.cpumap, v->cpu_affinity);
         }
     }
diff --git a/xen/common/trace.c b/xen/common/trace.c
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -384,7 +384,7 @@ int tb_control(xen_sysctl_tbuf_op_t *tbc
     {
         cpumask_var_t mask;
 
-        rc = xenctl_cpumap_to_cpumask(&mask, &tbc->cpu_mask);
+        rc = xenctl_bitmap_to_cpumask(&mask, &tbc->cpu_mask);
         if ( !rc )
         {
             cpumask_copy(&tb_cpu_mask, mask);
diff --git a/xen/include/public/arch-x86/xen-mca.h b/xen/include/public/arch-x86/xen-mca.h
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -414,7 +414,7 @@ struct xen_mc_mceinject {
 
 struct xen_mc_inject_v2 {
 	uint32_t flags;
-	struct xenctl_cpumap cpumap;
+	struct xenctl_bitmap cpumap;
 };
 #endif
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -284,7 +284,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvc
 /* XEN_DOMCTL_getvcpuaffinity */
 struct xen_domctl_vcpuaffinity {
     uint32_t  vcpu;              /* IN */
-    struct xenctl_cpumap cpumap; /* IN/OUT */
+    struct xenctl_bitmap cpumap; /* IN/OUT */
 };
 typedef struct xen_domctl_vcpuaffinity xen_domctl_vcpuaffinity_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpuaffinity_t);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -71,7 +71,7 @@ struct xen_sysctl_tbuf_op {
 #define XEN_SYSCTL_TBUFOP_disable      5
     uint32_t cmd;
     /* IN/OUT variables */
-    struct xenctl_cpumap cpu_mask;
+    struct xenctl_bitmap cpu_mask;
     uint32_t             evt_mask;
     /* OUT variables */
     uint64_aligned_t buffer_mfn;
@@ -532,7 +532,7 @@ struct xen_sysctl_cpupool_op {
     uint32_t domid;       /* IN: M              */
     uint32_t cpu;         /* IN: AR             */
     uint32_t n_dom;       /*            OUT: I  */
-    struct xenctl_cpumap cpumap; /*     OUT: IF */
+    struct xenctl_bitmap cpumap; /*     OUT: IF */
 };
 typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -851,9 +851,9 @@ typedef uint8_t xen_domain_handle_t[16];
 #endif
 
 #ifndef __ASSEMBLY__
-struct xenctl_cpumap {
+struct xenctl_bitmap {
     XEN_GUEST_HANDLE_64(uint8) bitmap;
-    uint32_t nr_cpus;
+    uint32_t nr_elems;
 };
 #endif
 
diff --git a/xen/include/xen/cpumask.h b/xen/include/xen/cpumask.h
--- a/xen/include/xen/cpumask.h
+++ b/xen/include/xen/cpumask.h
@@ -424,8 +424,8 @@ extern cpumask_t cpu_present_map;
 #define for_each_present_cpu(cpu)  for_each_cpu(cpu, &cpu_present_map)
 
 /* Copy to/from cpumap provided by control tools. */
-struct xenctl_cpumap;
-int cpumask_to_xenctl_cpumap(struct xenctl_cpumap *, const cpumask_t *);
-int xenctl_cpumap_to_cpumask(cpumask_var_t *, const struct xenctl_cpumap *);
+struct xenctl_bitmap;
+int cpumask_to_xenctl_bitmap(struct xenctl_bitmap *, const cpumask_t *);
+int xenctl_bitmap_to_cpumask(cpumask_var_t *, const struct xenctl_bitmap *);
 
 #endif /* __XEN_CPUMASK_H */
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -2,7 +2,7 @@
 # ! - needs translation
 # ? - needs checking
 ?	dom0_vga_console_info		xen.h
-?	xenctl_cpumap			xen.h
+?	xenctl_bitmap			xen.h
 ?	mmu_update			xen.h
 !	mmuext_op			xen.h
 !	start_info			xen.h

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzw-0007TE-05; Wed, 19 Dec 2012 19:08:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzu-0007SQ-B4
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:06 +0000
Received: from [85.158.139.211:24495] by server-3.bemta-5.messagelabs.com id
	B7/3F-25441-59012D05; Wed, 19 Dec 2012 19:08:05 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1355944082!21257991!1
X-Originating-IP: [74.125.82.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11143 invoked from network); 19 Dec 2012 19:08:02 -0000
Received: from mail-we0-f179.google.com (HELO mail-we0-f179.google.com)
	(74.125.82.179)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:02 -0000
Received: by mail-we0-f179.google.com with SMTP id r6so1134916wey.38
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:07:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=3wj16CZkpVw38ceCtgX8+4jOhOHx6Z2LGVJV3F8CoNw=;
	b=hC4+A6XDj5VLkz7iZCCuKLTp17dlu6khQthh/uv9hP9abYCn0H07qBvIgf8nVeYRBE
	o4wnNsbGCC4iNXP8VK/A6aE0kEtZycD8jpomWutdRScn//PtRWWtNlBWd7NkZtmv2LUd
	nIoIf72voYoTd1RVJC5xnMnpdShS9vcWqXtfSQs8Ote21Al9pLf9U7M8ANUMmipmLfH7
	UybnQBH+S2dCk1GHmzaNpZQtVeM36moB0iZHqsfzGYF0MYGPPUF8MYnC58OQuCtC3txA
	lqTgo9Z15WjLDWgffMSQyQxB9VuhKjsDaOG0iymm4wxuRwIorfFkxrh6XPzbJ95uUXXl
	XwiA==
X-Received: by 10.194.82.168 with SMTP id j8mr13480734wjy.15.1355944079716;
	Wed, 19 Dec 2012 11:07:59 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.07.57
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:07:58 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 4c57c8f1e7ad20c15b8cf0d2343fec6f89ef41f2
Message-Id: <4c57c8f1e7ad20c15b8c.1355944038@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:18 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 02 of 10 v2] xen,
	libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Following suit from cpumap and cpumask implementations.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -54,6 +54,11 @@ int xc_get_cpumap_size(xc_interface *xch
     return (xc_get_max_cpus(xch) + 7) / 8;
 }
 
+int xc_get_nodemap_size(xc_interface *xch)
+{
+    return (xc_get_max_nodes(xch) + 7) / 8;
+}
+
 xc_cpumap_t xc_cpumap_alloc(xc_interface *xch)
 {
     int sz;
@@ -64,6 +69,16 @@ xc_cpumap_t xc_cpumap_alloc(xc_interface
     return calloc(1, sz);
 }
 
+xc_nodemap_t xc_nodemap_alloc(xc_interface *xch)
+{
+    int sz;
+
+    sz = xc_get_nodemap_size(xch);
+    if (sz == 0)
+        return NULL;
+    return calloc(1, sz);
+}
+
 int xc_readconsolering(xc_interface *xch,
                        char *buffer,
                        unsigned int *pnr_chars,
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -330,12 +330,20 @@ int xc_get_cpumap_size(xc_interface *xch
 /* allocate a cpumap */
 xc_cpumap_t xc_cpumap_alloc(xc_interface *xch);
 
- /*
+/*
  * NODEMAP handling
  */
+typedef uint8_t *xc_nodemap_t;
+
 /* return maximum number of NUMA nodes the hypervisor supports */
 int xc_get_max_nodes(xc_interface *xch);
 
+/* return array size for nodemap */
+int xc_get_nodemap_size(xc_interface *xch);
+
+/* allocate a nodemap */
+xc_nodemap_t xc_nodemap_alloc(xc_interface *xch);
+
 /*
  * DOMAIN DEBUGGING FUNCTIONS
  */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -117,6 +117,30 @@ int xenctl_bitmap_to_cpumask(cpumask_var
     return err;
 }
 
+int nodemask_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_nodemap,
+                              const nodemask_t *nodemask)
+{
+    return bitmap_to_xenctl_bitmap(xenctl_nodemap, nodes_addr(*nodemask),
+                                   MAX_NUMNODES);
+}
+
+int xenctl_bitmap_to_nodemask(nodemask_t *nodemask,
+                              const struct xenctl_bitmap *xenctl_nodemap)
+{
+    int err = 0;
+
+    if ( alloc_nodemask_var(nodemask) ) {
+        err = xenctl_bitmap_to_bitmap(nodes_addr(*nodemask), xenctl_nodemap,
+                                      MAX_NUMNODES);
+        if ( err )
+            free_nodemask_var(*nodemask);
+    }
+    else
+        err = -ENOMEM;
+
+    return err;
+}
+
 static inline int is_free_domid(domid_t dom)
 {
     struct domain *d;
diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
--- a/xen/include/xen/nodemask.h
+++ b/xen/include/xen/nodemask.h
@@ -298,6 +298,53 @@ static inline int __nodemask_parse(const
 }
 #endif
 
+/*
+ * nodemask_var_t: struct nodemask for stack usage.
+ *
+ * See definition of cpumask_var_t in include/xen/cpumask.h.
+ */
+#if MAX_NUMNODES > 2 * BITS_PER_LONG
+#include <xen/xmalloc.h>
+
+typedef nodemask_t *nodemask_var_t;
+
+#define nr_nodemask_bits (BITS_TO_LONGS(MAX_NUMNODES) * BITS_PER_LONG)
+
+static inline bool_t alloc_nodemask_var(nodemask_var_t *mask)
+{
+	*(void **)mask = _xmalloc(nr_nodemask_bits / 8, sizeof(long));
+	return *mask != NULL;
+}
+
+static inline bool_t zalloc_nodemask_var(nodemask_var_t *mask)
+{
+	*(void **)mask = _xzalloc(nr_nodemask_bits / 8, sizeof(long));
+	return *mask != NULL;
+}
+
+static inline void free_nodemask_var(nodemask_var_t mask)
+{
+	xfree(mask);
+}
+#else
+typedef nodemask_t nodemask_var_t;
+
+static inline bool_t alloc_nodemask_var(nodemask_var_t *mask)
+{
+	return 1;
+}
+
+static inline bool_t zalloc_nodemask_var(nodemask_var_t *mask)
+{
+	nodes_clear(*mask);
+	return 1;
+}
+
+static inline void free_nodemask_var(nodemask_var_t mask)
+{
+}
+#endif
+
 #if MAX_NUMNODES > 1
 #define for_each_node_mask(node, mask)			\
 	for ((node) = first_node(mask);			\

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzy-0007Tu-Eb; Wed, 19 Dec 2012 19:08:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzw-0007SC-VP
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:09 +0000
Received: from [85.158.143.99:22833] by server-2.bemta-4.messagelabs.com id
	5B/5B-30861-89012D05; Wed, 19 Dec 2012 19:08:08 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355944086!24917979!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7294 invoked from network); 19 Dec 2012 19:08:07 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:07 -0000
Received: by mail-we0-f178.google.com with SMTP id x43so1155113wey.37
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=xKkDQcbpGUW14KnnFpXkE4I66iSWRogEDqabcIDb+IQ=;
	b=uJDyAiyFQYKEUd3C+SVFz8R4WjeJ6eaJJCgE+6TbxKKI5xdNyzQJmi91ysgKZDGsyw
	Njtsg7d7edgVihqkHCCpXHLuXl1qDAfrubYiQqXrqSHFwqXMMuHkSkkI7Yse6acI+tnU
	n7NOZnuMnG8UJu/i8poSp+ViNpwYk2CV9eELS2ZdirUIXDRr5F3gp3QWJ9WHCeoJSx/f
	csgenPZBkRU6Uq9AzG0L3y8rf7BxPqnzdHZ6Df3+I1R5pwz8f0Hc8DXG+XGHDKIyW/QO
	/jdhHMm9G4e4lm311uPT7ZRIByCchar+/escX5fZhkXM2bY/jYzABo0PVgq4rxvdXyB7
	opTA==
X-Received: by 10.180.87.73 with SMTP id v9mr5775346wiz.26.1355944086848;
	Wed, 19 Dec 2012 11:08:06 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.02
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:06 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 7e8c5e21c3ae1c267c231d615918816987a2dfe0
Message-Id: <7e8c5e21c3ae1c267c23.1355944040@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:20 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 04 of 10 v2] xen: allow for explicitly
	specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make it possible to pass the node-affinity of a domain to the hypervisor
from the upper layers, instead of always being computed automatically.

Note that this also required generalizing the Flask hooks for setting
and getting the affinity, so that they now deal with both vcpu and
node affinity.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * added the missing dummy hook for nodeaffinity;
 * let the permission renaming affect flask policies too.

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -47,8 +47,8 @@ class domain
     transition
     max_vcpus
     destroy
-    setvcpuaffinity
-	getvcpuaffinity
+    setaffinity
+	getaffinity
 	scheduler
 	getdomaininfo
 	getvcpuinfo
diff --git a/tools/flask/policy/policy/mls b/tools/flask/policy/policy/mls
--- a/tools/flask/policy/policy/mls
+++ b/tools/flask/policy/policy/mls
@@ -70,11 +70,11 @@ mlsconstrain domain transition
 	(( h1 dom h2 ) and (( l1 eq l2 ) or (t1 == mls_priv)));
 
 # all the domain "read" ops
-mlsconstrain domain { getvcpuaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
+mlsconstrain domain { getaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
 	((l1 dom l2) or (t1 == mls_priv));
 
 # all the domain "write" ops
-mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setvcpuaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
+mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
 	((l1 eq l2) or (t1 == mls_priv));
 
 # This is incomplete - similar constraints must be written for all classes
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -55,9 +55,9 @@ define(`create_domain_build_label', `
 # manage_domain(priv, target)
 #   Allow managing a running domain
 define(`manage_domain', `
-	allow $1 $2:domain { getdomaininfo getvcpuinfo getvcpuaffinity
+	allow $1 $2:domain { getdomaininfo getvcpuinfo getaffinity
 			getaddrsize pause unpause trigger shutdown destroy
-			setvcpuaffinity setdomainmaxmem };
+			setaffinity setdomainmaxmem };
 ')
 
 # migrate_domain_out(priv, target)
diff --git a/xen/common/domain.c b/xen/common/domain.c
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -222,6 +222,7 @@ struct domain *domain_create(
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
+    d->auto_node_affinity = 1;
 
     spin_lock_init(&d->shutdown_lock);
     d->shutdown_code = -1;
@@ -362,11 +363,26 @@ void domain_update_node_affinity(struct 
         cpumask_or(cpumask, cpumask, online_affinity);
     }
 
-    for_each_online_node ( node )
-        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
-            node_set(node, nodemask);
+    if ( d->auto_node_affinity )
+    {
+        /* Node-affinity is automatically computed from all vcpu-affinities */
+        for_each_online_node ( node )
+            if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
+                node_set(node, nodemask);
 
-    d->node_affinity = nodemask;
+        d->node_affinity = nodemask;
+    }
+    else
+    {
+        /* Node-affinity is provided by someone else, just filter out cpus
+         * that are either offline or not in the affinity of any vcpus. */
+        for_each_node_mask ( node, d->node_affinity )
+            if ( !cpumask_intersects(&node_to_cpumask(node), cpumask) )
+                node_clear(node, d->node_affinity);
+    }
+
+    sched_set_node_affinity(d, &d->node_affinity);
+
     spin_unlock(&d->node_affinity_lock);
 
     free_cpumask_var(online_affinity);
@@ -374,6 +390,36 @@ void domain_update_node_affinity(struct 
 }
 
 
+int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
+{
+    /* Being affine with no nodes is just wrong */
+    if ( nodes_empty(*affinity) )
+        return -EINVAL;
+
+    spin_lock(&d->node_affinity_lock);
+
+    /*
+     * Being/becoming explicitly affine to all nodes is not particularly
+     * useful. Let's take it as the `reset node affinity` command.
+     */
+    if ( nodes_full(*affinity) )
+    {
+        d->auto_node_affinity = 1;
+        goto out;
+    }
+
+    d->auto_node_affinity = 0;
+    d->node_affinity = *affinity;
+
+out:
+    spin_unlock(&d->node_affinity_lock);
+
+    domain_update_node_affinity(d);
+
+    return 0;
+}
+
+
 struct domain *get_domain_by_id(domid_t dom)
 {
     struct domain *d;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -609,6 +609,40 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
     }
     break;
 
+    case XEN_DOMCTL_setnodeaffinity:
+    case XEN_DOMCTL_getnodeaffinity:
+    {
+        domid_t dom = op->domain;
+        struct domain *d = rcu_lock_domain_by_id(dom);
+
+        ret = -ESRCH;
+        if ( d == NULL )
+            break;
+
+        ret = xsm_nodeaffinity(op->cmd, d);
+        if ( ret )
+            goto nodeaffinity_out;
+
+        if ( op->cmd == XEN_DOMCTL_setnodeaffinity )
+        {
+            nodemask_t new_affinity;
+
+            ret = xenctl_bitmap_to_nodemask(&new_affinity,
+                                            &op->u.nodeaffinity.nodemap);
+            if ( !ret )
+                ret = domain_set_node_affinity(d, &new_affinity);
+        }
+        else
+        {
+            ret = nodemask_to_xenctl_bitmap(&op->u.nodeaffinity.nodemap,
+                                            &d->node_affinity);
+        }
+
+    nodeaffinity_out:
+        rcu_unlock_domain(d);
+    }
+    break;
+
     case XEN_DOMCTL_setvcpuaffinity:
     case XEN_DOMCTL_getvcpuaffinity:
     {
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -217,6 +217,14 @@ static void cpuset_print(char *set, int 
     *set++ = '\0';
 }
 
+static void nodeset_print(char *set, int size, const nodemask_t *mask)
+{
+    *set++ = '[';
+    set += nodelist_scnprintf(set, size-2, mask);
+    *set++ = ']';
+    *set++ = '\0';
+}
+
 static void periodic_timer_print(char *str, int size, uint64_t period)
 {
     if ( period == 0 )
@@ -272,6 +280,9 @@ static void dump_domains(unsigned char k
 
         dump_pageframe_info(d);
                
+        nodeset_print(tmpstr, sizeof(tmpstr), &d->node_affinity);
+        printk("NODE affinity for domain %d: %s\n", d->domain_id, tmpstr);
+
         printk("VCPU information and callbacks for domain %u:\n",
                d->domain_id);
         for_each_vcpu ( d, v )
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -269,6 +269,33 @@ static inline void
     list_del_init(&svc->runq_elem);
 }
 
+/*
+ * Translates a node-affinity mask into a cpumask, so that it can be used
+ * during actual scheduling. The resulting mask contains all the cpus of all
+ * the nodes set in the original node-affinity mask.
+ *
+ * Note that any serialization needed to access the mask safely is the sole
+ * responsibility of the caller of this function/hook.
+ */
+static void csched_set_node_affinity(
+    const struct scheduler *ops,
+    struct domain *d,
+    nodemask_t *mask)
+{
+    struct csched_dom *sdom;
+    int node;
+
+    /* Skip idle domain since it doesn't even have a node_affinity_cpumask */
+    if ( unlikely(is_idle_domain(d)) )
+        return;
+
+    sdom = CSCHED_DOM(d);
+    cpumask_clear(sdom->node_affinity_cpumask);
+    for_each_node_mask( node, *mask )
+        cpumask_or(sdom->node_affinity_cpumask, sdom->node_affinity_cpumask,
+                   &node_to_cpumask(node));
+}
+
 #define for_each_csched_balance_step(__step) \
     for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
 
@@ -296,7 +323,8 @@ csched_balance_cpumask(const struct vcpu
 
         cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
 
-        if ( cpumask_full(sdom->node_affinity_cpumask) )
+        if ( cpumask_full(sdom->node_affinity_cpumask) ||
+             d->auto_node_affinity == 1 )
             return -1;
     }
     else /* step == CSCHED_BALANCE_CPU_AFFINITY */
@@ -1896,6 +1924,8 @@ const struct scheduler sched_credit_def 
     .adjust         = csched_dom_cntl,
     .adjust_global  = csched_sys_cntl,
 
+    .set_node_affinity  = csched_set_node_affinity,
+
     .pick_cpu       = csched_cpu_pick,
     .do_schedule    = csched_schedule,
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -590,6 +590,11 @@ int cpu_disable_scheduler(unsigned int c
     return ret;
 }
 
+void sched_set_node_affinity(struct domain *d, nodemask_t *mask)
+{
+    SCHED_OP(DOM2OP(d), set_node_affinity, d, mask);
+}
+
 int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
 {
     cpumask_t online_affinity;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -279,6 +279,16 @@ typedef struct xen_domctl_getvcpuinfo xe
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvcpuinfo_t);
 
 
+/* Get/set the NUMA node(s) with which the guest has affinity. */
+/* XEN_DOMCTL_setnodeaffinity */
+/* XEN_DOMCTL_getnodeaffinity */
+struct xen_domctl_nodeaffinity {
+    struct xenctl_bitmap nodemap;/* IN */
+};
+typedef struct xen_domctl_nodeaffinity xen_domctl_nodeaffinity_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_nodeaffinity_t);
+
+
 /* Get/set which physical cpus a vcpu can execute on. */
 /* XEN_DOMCTL_setvcpuaffinity */
 /* XEN_DOMCTL_getvcpuaffinity */
@@ -907,6 +917,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_audit_p2m                     65
 #define XEN_DOMCTL_set_virq_handler              66
 #define XEN_DOMCTL_set_broken_page_p2m           67
+#define XEN_DOMCTL_setnodeaffinity               68
+#define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -920,6 +932,7 @@ struct xen_domctl {
         struct xen_domctl_getpageframeinfo  getpageframeinfo;
         struct xen_domctl_getpageframeinfo2 getpageframeinfo2;
         struct xen_domctl_getpageframeinfo3 getpageframeinfo3;
+        struct xen_domctl_nodeaffinity      nodeaffinity;
         struct xen_domctl_vcpuaffinity      vcpuaffinity;
         struct xen_domctl_shadow_op         shadow_op;
         struct xen_domctl_max_mem           max_mem;
diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
--- a/xen/include/xen/nodemask.h
+++ b/xen/include/xen/nodemask.h
@@ -8,8 +8,9 @@
  * See detailed comments in the file linux/bitmap.h describing the
  * data type on which these nodemasks are based.
  *
- * For details of nodemask_scnprintf() and nodemask_parse(),
- * see bitmap_scnprintf() and bitmap_parse() in lib/bitmap.c.
+ * For details of nodemask_scnprintf(), nodelist_scnprintf() and
+ * nodemask_parse(), see bitmap_scnprintf() and bitmap_parse()
+ * in lib/bitmap.c.
  *
  * The available nodemask operations are:
  *
@@ -50,6 +51,7 @@
  * unsigned long *nodes_addr(mask)	Array of unsigned long's in mask
  *
  * int nodemask_scnprintf(buf, len, mask) Format nodemask for printing
+ * int nodelist_scnprintf(buf, len, mask) Format nodemask as a list for printing
  * int nodemask_parse(ubuf, ulen, mask)	Parse ascii string as nodemask
  *
  * for_each_node_mask(node, mask)	for-loop node over mask
@@ -292,6 +294,14 @@ static inline int __cycle_node(int n, co
 
 #define nodes_addr(src) ((src).bits)
 
+#define nodelist_scnprintf(buf, len, src) \
+			__nodelist_scnprintf((buf), (len), (src), MAX_NUMNODES)
+static inline int __nodelist_scnprintf(char *buf, int len,
+					const nodemask_t *srcp, int nbits)
+{
+	return bitmap_scnlistprintf(buf, len, srcp->bits, nbits);
+}
+
 #if 0
 #define nodemask_scnprintf(buf, len, src) \
 			__nodemask_scnprintf((buf), (len), &(src), MAX_NUMNODES)
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -184,6 +184,8 @@ struct scheduler {
                                     struct xen_domctl_scheduler_op *);
     int          (*adjust_global)  (const struct scheduler *,
                                     struct xen_sysctl_scheduler_op *);
+    void         (*set_node_affinity) (const struct scheduler *,
+                                       struct domain *, nodemask_t *);
     void         (*dump_settings)  (const struct scheduler *);
     void         (*dump_cpu_state) (const struct scheduler *, int);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -359,8 +359,12 @@ struct domain
     /* Various mem_events */
     struct mem_event_per_domain *mem_event;
 
-    /* Currently computed from union of all vcpu cpu-affinity masks. */
+    /*
+     * Can be specified by the user. If that is not the case, it is
+     * computed from the union of all the vcpu cpu-affinity masks.
+     */
     nodemask_t node_affinity;
+    int auto_node_affinity;
     unsigned int last_alloc_node;
     spinlock_t node_affinity_lock;
 };
@@ -429,6 +433,7 @@ static inline void get_knownalive_domain
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
 void domain_update_node_affinity(struct domain *d);
 
 struct domain *domain_create(
@@ -543,6 +548,7 @@ void sched_destroy_domain(struct domain 
 int sched_move_domain(struct domain *d, struct cpupool *c);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
+void sched_set_node_affinity(struct domain *, nodemask_t *);
 int  sched_id(void);
 void sched_tick_suspend(void);
 void sched_tick_resume(void);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -56,6 +56,7 @@ struct xsm_operations {
     int (*domain_create) (struct domain *d, u32 ssidref);
     int (*max_vcpus) (struct domain *d);
     int (*destroydomain) (struct domain *d);
+    int (*nodeaffinity) (int cmd, struct domain *d);
     int (*vcpuaffinity) (int cmd, struct domain *d);
     int (*scheduler) (struct domain *d);
     int (*getdomaininfo) (struct domain *d);
@@ -229,6 +230,11 @@ static inline int xsm_destroydomain (str
     return xsm_call(destroydomain(d));
 }
 
+static inline int xsm_nodeaffinity (int cmd, struct domain *d)
+{
+    return xsm_call(nodeaffinity(cmd, d));
+}
+
 static inline int xsm_vcpuaffinity (int cmd, struct domain *d)
 {
     return xsm_call(vcpuaffinity(cmd, d));
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -54,6 +54,11 @@ static int dummy_destroydomain (struct d
     return 0;
 }
 
+static int dummy_nodeaffinity (int cmd, struct domain *d)
+{
+    return 0;
+}
+
 static int dummy_vcpuaffinity (int cmd, struct domain *d)
 {
     return 0;
@@ -634,6 +639,7 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, domain_create);
     set_to_dummy_if_null(ops, max_vcpus);
     set_to_dummy_if_null(ops, destroydomain);
+    set_to_dummy_if_null(ops, nodeaffinity);
     set_to_dummy_if_null(ops, vcpuaffinity);
     set_to_dummy_if_null(ops, scheduler);
     set_to_dummy_if_null(ops, getdomaininfo);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -521,17 +521,19 @@ static int flask_destroydomain(struct do
                            DOMAIN__DESTROY);
 }
 
-static int flask_vcpuaffinity(int cmd, struct domain *d)
+static int flask_affinity(int cmd, struct domain *d)
 {
     u32 perm;
 
     switch ( cmd )
     {
     case XEN_DOMCTL_setvcpuaffinity:
-        perm = DOMAIN__SETVCPUAFFINITY;
+    case XEN_DOMCTL_setnodeaffinity:
+        perm = DOMAIN__SETAFFINITY;
         break;
     case XEN_DOMCTL_getvcpuaffinity:
-        perm = DOMAIN__GETVCPUAFFINITY;
+    case XEN_DOMCTL_getnodeaffinity:
+        perm = DOMAIN__GETAFFINITY;
         break;
     default:
         return -EPERM;
@@ -1473,7 +1475,8 @@ static struct xsm_operations flask_ops =
     .domain_create = flask_domain_create,
     .max_vcpus = flask_max_vcpus,
     .destroydomain = flask_destroydomain,
-    .vcpuaffinity = flask_vcpuaffinity,
+    .nodeaffinity = flask_affinity,
+    .vcpuaffinity = flask_affinity,
     .scheduler = flask_scheduler,
     .getdomaininfo = flask_getdomaininfo,
     .getvcpucontext = flask_getvcpucontext,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -37,8 +37,8 @@
    S_(SECCLASS_DOMAIN, DOMAIN__TRANSITION, "transition")
    S_(SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS, "max_vcpus")
    S_(SECCLASS_DOMAIN, DOMAIN__DESTROY, "destroy")
-   S_(SECCLASS_DOMAIN, DOMAIN__SETVCPUAFFINITY, "setvcpuaffinity")
-   S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUAFFINITY, "getvcpuaffinity")
+   S_(SECCLASS_DOMAIN, DOMAIN__SETAFFINITY, "setaffinity")
+   S_(SECCLASS_DOMAIN, DOMAIN__GETAFFINITY, "getaffinity")
    S_(SECCLASS_DOMAIN, DOMAIN__SCHEDULER, "scheduler")
    S_(SECCLASS_DOMAIN, DOMAIN__GETDOMAININFO, "getdomaininfo")
    S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUINFO, "getvcpuinfo")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -38,8 +38,8 @@
 #define DOMAIN__TRANSITION                        0x00000020UL
 #define DOMAIN__MAX_VCPUS                         0x00000040UL
 #define DOMAIN__DESTROY                           0x00000080UL
-#define DOMAIN__SETVCPUAFFINITY                   0x00000100UL
-#define DOMAIN__GETVCPUAFFINITY                   0x00000200UL
+#define DOMAIN__SETAFFINITY                       0x00000100UL
+#define DOMAIN__GETAFFINITY                       0x00000200UL
 #define DOMAIN__SCHEDULER                         0x00000400UL
 #define DOMAIN__GETDOMAININFO                     0x00000800UL
 #define DOMAIN__GETVCPUINFO                       0x00001000UL

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzy-0007Tu-Eb; Wed, 19 Dec 2012 19:08:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzw-0007SC-VP
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:09 +0000
Received: from [85.158.143.99:22833] by server-2.bemta-4.messagelabs.com id
	5B/5B-30861-89012D05; Wed, 19 Dec 2012 19:08:08 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355944086!24917979!1
X-Originating-IP: [74.125.82.178]
Received: (qmail 7294 invoked from network); 19 Dec 2012 19:08:07 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:07 -0000
Received: by mail-we0-f178.google.com with SMTP id x43so1155113wey.37
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:06 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.02
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:06 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 7e8c5e21c3ae1c267c231d615918816987a2dfe0
Message-Id: <7e8c5e21c3ae1c267c23.1355944040@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:20 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 04 of 10 v2] xen: allow for explicitly
	specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make it possible to pass the node-affinity of a domain to the hypervisor
from the upper layers, instead of it always being computed automatically.

Note that this also required generalizing the Flask hooks for setting
and getting the affinity, so that they now deal with both vcpu and
node affinity.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * added the missing dummy hook for nodeaffinity;
 * let the permission renaming affect flask policies too.

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -47,8 +47,8 @@ class domain
     transition
     max_vcpus
     destroy
-    setvcpuaffinity
-	getvcpuaffinity
+    setaffinity
+	getaffinity
 	scheduler
 	getdomaininfo
 	getvcpuinfo
diff --git a/tools/flask/policy/policy/mls b/tools/flask/policy/policy/mls
--- a/tools/flask/policy/policy/mls
+++ b/tools/flask/policy/policy/mls
@@ -70,11 +70,11 @@ mlsconstrain domain transition
 	(( h1 dom h2 ) and (( l1 eq l2 ) or (t1 == mls_priv)));
 
 # all the domain "read" ops
-mlsconstrain domain { getvcpuaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
+mlsconstrain domain { getaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
 	((l1 dom l2) or (t1 == mls_priv));
 
 # all the domain "write" ops
-mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setvcpuaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
+mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
 	((l1 eq l2) or (t1 == mls_priv));
 
 # This is incomplete - similar constraints must be written for all classes
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -55,9 +55,9 @@ define(`create_domain_build_label', `
 # manage_domain(priv, target)
 #   Allow managing a running domain
 define(`manage_domain', `
-	allow $1 $2:domain { getdomaininfo getvcpuinfo getvcpuaffinity
+	allow $1 $2:domain { getdomaininfo getvcpuinfo getaffinity
 			getaddrsize pause unpause trigger shutdown destroy
-			setvcpuaffinity setdomainmaxmem };
+			setaffinity setdomainmaxmem };
 ')
 
 # migrate_domain_out(priv, target)
diff --git a/xen/common/domain.c b/xen/common/domain.c
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -222,6 +222,7 @@ struct domain *domain_create(
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
+    d->auto_node_affinity = 1;
 
     spin_lock_init(&d->shutdown_lock);
     d->shutdown_code = -1;
@@ -362,11 +363,26 @@ void domain_update_node_affinity(struct 
         cpumask_or(cpumask, cpumask, online_affinity);
     }
 
-    for_each_online_node ( node )
-        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
-            node_set(node, nodemask);
+    if ( d->auto_node_affinity )
+    {
+        /* Node-affinity is automatically computed from all vcpu-affinities */
+        for_each_online_node ( node )
+            if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
+                node_set(node, nodemask);
 
-    d->node_affinity = nodemask;
+        d->node_affinity = nodemask;
+    }
+    else
+    {
+    /* Node-affinity is provided by someone else; just filter out cpus
+     * that are either offline or not in the affinity of any vcpu. */
+        for_each_node_mask ( node, d->node_affinity )
+            if ( !cpumask_intersects(&node_to_cpumask(node), cpumask) )
+                node_clear(node, d->node_affinity);
+    }
+
+    sched_set_node_affinity(d, &d->node_affinity);
+
     spin_unlock(&d->node_affinity_lock);
 
     free_cpumask_var(online_affinity);
@@ -374,6 +390,36 @@ void domain_update_node_affinity(struct 
 }
 
 
+int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
+{
+    /* Being affine with no nodes is just wrong */
+    if ( nodes_empty(*affinity) )
+        return -EINVAL;
+
+    spin_lock(&d->node_affinity_lock);
+
+    /*
+     * Being/becoming explicitly affine to all nodes is not particularly
+     * useful. Let's take it as the `reset node affinity` command.
+     */
+    if ( nodes_full(*affinity) )
+    {
+        d->auto_node_affinity = 1;
+        goto out;
+    }
+
+    d->auto_node_affinity = 0;
+    d->node_affinity = *affinity;
+
+out:
+    spin_unlock(&d->node_affinity_lock);
+
+    domain_update_node_affinity(d);
+
+    return 0;
+}
+
+
 struct domain *get_domain_by_id(domid_t dom)
 {
     struct domain *d;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -609,6 +609,40 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
     }
     break;
 
+    case XEN_DOMCTL_setnodeaffinity:
+    case XEN_DOMCTL_getnodeaffinity:
+    {
+        domid_t dom = op->domain;
+        struct domain *d = rcu_lock_domain_by_id(dom);
+
+        ret = -ESRCH;
+        if ( d == NULL )
+            break;
+
+        ret = xsm_nodeaffinity(op->cmd, d);
+        if ( ret )
+            goto nodeaffinity_out;
+
+        if ( op->cmd == XEN_DOMCTL_setnodeaffinity )
+        {
+            nodemask_t new_affinity;
+
+            ret = xenctl_bitmap_to_nodemask(&new_affinity,
+                                            &op->u.nodeaffinity.nodemap);
+            if ( !ret )
+                ret = domain_set_node_affinity(d, &new_affinity);
+        }
+        else
+        {
+            ret = nodemask_to_xenctl_bitmap(&op->u.nodeaffinity.nodemap,
+                                            &d->node_affinity);
+        }
+
+    nodeaffinity_out:
+        rcu_unlock_domain(d);
+    }
+    break;
+
     case XEN_DOMCTL_setvcpuaffinity:
     case XEN_DOMCTL_getvcpuaffinity:
     {
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -217,6 +217,14 @@ static void cpuset_print(char *set, int 
     *set++ = '\0';
 }
 
+static void nodeset_print(char *set, int size, const nodemask_t *mask)
+{
+    *set++ = '[';
+    set += nodelist_scnprintf(set, size-2, mask);
+    *set++ = ']';
+    *set++ = '\0';
+}
+
 static void periodic_timer_print(char *str, int size, uint64_t period)
 {
     if ( period == 0 )
@@ -272,6 +280,9 @@ static void dump_domains(unsigned char k
 
         dump_pageframe_info(d);
                
+        nodeset_print(tmpstr, sizeof(tmpstr), &d->node_affinity);
+        printk("NODE affinity for domain %d: %s\n", d->domain_id, tmpstr);
+
         printk("VCPU information and callbacks for domain %u:\n",
                d->domain_id);
         for_each_vcpu ( d, v )
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -269,6 +269,33 @@ static inline void
     list_del_init(&svc->runq_elem);
 }
 
+/*
+ * Translates a node-affinity mask into a cpumask, so that it can be used
+ * during actual scheduling. The resulting mask contains all the cpus of all
+ * the nodes set in the original node-affinity mask.
+ *
+ * Note that any serialization needed to access the mask safely is the sole
+ * responsibility of the caller of this function/hook.
+ */
+static void csched_set_node_affinity(
+    const struct scheduler *ops,
+    struct domain *d,
+    nodemask_t *mask)
+{
+    struct csched_dom *sdom;
+    int node;
+
+    /* Skip idle domain since it doesn't even have a node_affinity_cpumask */
+    if ( unlikely(is_idle_domain(d)) )
+        return;
+
+    sdom = CSCHED_DOM(d);
+    cpumask_clear(sdom->node_affinity_cpumask);
+    for_each_node_mask( node, *mask )
+        cpumask_or(sdom->node_affinity_cpumask, sdom->node_affinity_cpumask,
+                   &node_to_cpumask(node));
+}
+
 #define for_each_csched_balance_step(__step) \
     for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
 
@@ -296,7 +323,8 @@ csched_balance_cpumask(const struct vcpu
 
         cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
 
-        if ( cpumask_full(sdom->node_affinity_cpumask) )
+        if ( cpumask_full(sdom->node_affinity_cpumask) ||
+             d->auto_node_affinity == 1 )
             return -1;
     }
     else /* step == CSCHED_BALANCE_CPU_AFFINITY */
@@ -1896,6 +1924,8 @@ const struct scheduler sched_credit_def 
     .adjust         = csched_dom_cntl,
     .adjust_global  = csched_sys_cntl,
 
+    .set_node_affinity  = csched_set_node_affinity,
+
     .pick_cpu       = csched_cpu_pick,
     .do_schedule    = csched_schedule,
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -590,6 +590,11 @@ int cpu_disable_scheduler(unsigned int c
     return ret;
 }
 
+void sched_set_node_affinity(struct domain *d, nodemask_t *mask)
+{
+    SCHED_OP(DOM2OP(d), set_node_affinity, d, mask);
+}
+
 int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
 {
     cpumask_t online_affinity;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -279,6 +279,16 @@ typedef struct xen_domctl_getvcpuinfo xe
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvcpuinfo_t);
 
 
+/* Get/set the NUMA node(s) with which the guest has affinity. */
+/* XEN_DOMCTL_setnodeaffinity */
+/* XEN_DOMCTL_getnodeaffinity */
+struct xen_domctl_nodeaffinity {
+    struct xenctl_bitmap nodemap;/* IN */
+};
+typedef struct xen_domctl_nodeaffinity xen_domctl_nodeaffinity_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_nodeaffinity_t);
+
+
 /* Get/set which physical cpus a vcpu can execute on. */
 /* XEN_DOMCTL_setvcpuaffinity */
 /* XEN_DOMCTL_getvcpuaffinity */
@@ -907,6 +917,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_audit_p2m                     65
 #define XEN_DOMCTL_set_virq_handler              66
 #define XEN_DOMCTL_set_broken_page_p2m           67
+#define XEN_DOMCTL_setnodeaffinity               68
+#define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -920,6 +932,7 @@ struct xen_domctl {
         struct xen_domctl_getpageframeinfo  getpageframeinfo;
         struct xen_domctl_getpageframeinfo2 getpageframeinfo2;
         struct xen_domctl_getpageframeinfo3 getpageframeinfo3;
+        struct xen_domctl_nodeaffinity      nodeaffinity;
         struct xen_domctl_vcpuaffinity      vcpuaffinity;
         struct xen_domctl_shadow_op         shadow_op;
         struct xen_domctl_max_mem           max_mem;
diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
--- a/xen/include/xen/nodemask.h
+++ b/xen/include/xen/nodemask.h
@@ -8,8 +8,9 @@
  * See detailed comments in the file linux/bitmap.h describing the
  * data type on which these nodemasks are based.
  *
- * For details of nodemask_scnprintf() and nodemask_parse(),
- * see bitmap_scnprintf() and bitmap_parse() in lib/bitmap.c.
+ * For details of nodemask_scnprintf(), nodelist_scnprintf() and
+ * nodemask_parse(), see bitmap_scnprintf() and bitmap_parse()
+ * in lib/bitmap.c.
  *
  * The available nodemask operations are:
  *
@@ -50,6 +51,7 @@
  * unsigned long *nodes_addr(mask)	Array of unsigned long's in mask
  *
  * int nodemask_scnprintf(buf, len, mask) Format nodemask for printing
+ * int nodelist_scnprintf(buf, len, mask) Format nodemask as a list for printing
  * int nodemask_parse(ubuf, ulen, mask)	Parse ascii string as nodemask
  *
  * for_each_node_mask(node, mask)	for-loop node over mask
@@ -292,6 +294,14 @@ static inline int __cycle_node(int n, co
 
 #define nodes_addr(src) ((src).bits)
 
+#define nodelist_scnprintf(buf, len, src) \
+			__nodelist_scnprintf((buf), (len), (src), MAX_NUMNODES)
+static inline int __nodelist_scnprintf(char *buf, int len,
+					const nodemask_t *srcp, int nbits)
+{
+	return bitmap_scnlistprintf(buf, len, srcp->bits, nbits);
+}
+
 #if 0
 #define nodemask_scnprintf(buf, len, src) \
 			__nodemask_scnprintf((buf), (len), &(src), MAX_NUMNODES)
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -184,6 +184,8 @@ struct scheduler {
                                     struct xen_domctl_scheduler_op *);
     int          (*adjust_global)  (const struct scheduler *,
                                     struct xen_sysctl_scheduler_op *);
+    void         (*set_node_affinity) (const struct scheduler *,
+                                       struct domain *, nodemask_t *);
     void         (*dump_settings)  (const struct scheduler *);
     void         (*dump_cpu_state) (const struct scheduler *, int);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -359,8 +359,12 @@ struct domain
     /* Various mem_events */
     struct mem_event_per_domain *mem_event;
 
-    /* Currently computed from union of all vcpu cpu-affinity masks. */
+    /*
+     * Can be specified by the user. If that is not the case, it is
+     * computed from the union of all the vcpu cpu-affinity masks.
+     */
     nodemask_t node_affinity;
+    int auto_node_affinity;
     unsigned int last_alloc_node;
     spinlock_t node_affinity_lock;
 };
@@ -429,6 +433,7 @@ static inline void get_knownalive_domain
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
 void domain_update_node_affinity(struct domain *d);
 
 struct domain *domain_create(
@@ -543,6 +548,7 @@ void sched_destroy_domain(struct domain 
 int sched_move_domain(struct domain *d, struct cpupool *c);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
+void sched_set_node_affinity(struct domain *, nodemask_t *);
 int  sched_id(void);
 void sched_tick_suspend(void);
 void sched_tick_resume(void);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -56,6 +56,7 @@ struct xsm_operations {
     int (*domain_create) (struct domain *d, u32 ssidref);
     int (*max_vcpus) (struct domain *d);
     int (*destroydomain) (struct domain *d);
+    int (*nodeaffinity) (int cmd, struct domain *d);
     int (*vcpuaffinity) (int cmd, struct domain *d);
     int (*scheduler) (struct domain *d);
     int (*getdomaininfo) (struct domain *d);
@@ -229,6 +230,11 @@ static inline int xsm_destroydomain (str
     return xsm_call(destroydomain(d));
 }
 
+static inline int xsm_nodeaffinity (int cmd, struct domain *d)
+{
+    return xsm_call(nodeaffinity(cmd, d));
+}
+
 static inline int xsm_vcpuaffinity (int cmd, struct domain *d)
 {
     return xsm_call(vcpuaffinity(cmd, d));
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -54,6 +54,11 @@ static int dummy_destroydomain (struct d
     return 0;
 }
 
+static int dummy_nodeaffinity (int cmd, struct domain *d)
+{
+    return 0;
+}
+
 static int dummy_vcpuaffinity (int cmd, struct domain *d)
 {
     return 0;
@@ -634,6 +639,7 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, domain_create);
     set_to_dummy_if_null(ops, max_vcpus);
     set_to_dummy_if_null(ops, destroydomain);
+    set_to_dummy_if_null(ops, nodeaffinity);
     set_to_dummy_if_null(ops, vcpuaffinity);
     set_to_dummy_if_null(ops, scheduler);
     set_to_dummy_if_null(ops, getdomaininfo);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -521,17 +521,19 @@ static int flask_destroydomain(struct do
                            DOMAIN__DESTROY);
 }
 
-static int flask_vcpuaffinity(int cmd, struct domain *d)
+static int flask_affinity(int cmd, struct domain *d)
 {
     u32 perm;
 
     switch ( cmd )
     {
     case XEN_DOMCTL_setvcpuaffinity:
-        perm = DOMAIN__SETVCPUAFFINITY;
+    case XEN_DOMCTL_setnodeaffinity:
+        perm = DOMAIN__SETAFFINITY;
         break;
     case XEN_DOMCTL_getvcpuaffinity:
-        perm = DOMAIN__GETVCPUAFFINITY;
+    case XEN_DOMCTL_getnodeaffinity:
+        perm = DOMAIN__GETAFFINITY;
         break;
     default:
         return -EPERM;
@@ -1473,7 +1475,8 @@ static struct xsm_operations flask_ops =
     .domain_create = flask_domain_create,
     .max_vcpus = flask_max_vcpus,
     .destroydomain = flask_destroydomain,
-    .vcpuaffinity = flask_vcpuaffinity,
+    .nodeaffinity = flask_affinity,
+    .vcpuaffinity = flask_affinity,
     .scheduler = flask_scheduler,
     .getdomaininfo = flask_getdomaininfo,
     .getvcpucontext = flask_getvcpucontext,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -37,8 +37,8 @@
    S_(SECCLASS_DOMAIN, DOMAIN__TRANSITION, "transition")
    S_(SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS, "max_vcpus")
    S_(SECCLASS_DOMAIN, DOMAIN__DESTROY, "destroy")
-   S_(SECCLASS_DOMAIN, DOMAIN__SETVCPUAFFINITY, "setvcpuaffinity")
-   S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUAFFINITY, "getvcpuaffinity")
+   S_(SECCLASS_DOMAIN, DOMAIN__SETAFFINITY, "setaffinity")
+   S_(SECCLASS_DOMAIN, DOMAIN__GETAFFINITY, "getaffinity")
    S_(SECCLASS_DOMAIN, DOMAIN__SCHEDULER, "scheduler")
    S_(SECCLASS_DOMAIN, DOMAIN__GETDOMAININFO, "getdomaininfo")
    S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUINFO, "getvcpuinfo")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -38,8 +38,8 @@
 #define DOMAIN__TRANSITION                        0x00000020UL
 #define DOMAIN__MAX_VCPUS                         0x00000040UL
 #define DOMAIN__DESTROY                           0x00000080UL
-#define DOMAIN__SETVCPUAFFINITY                   0x00000100UL
-#define DOMAIN__GETVCPUAFFINITY                   0x00000200UL
+#define DOMAIN__SETAFFINITY                       0x00000100UL
+#define DOMAIN__GETAFFINITY                       0x00000200UL
 #define DOMAIN__SCHEDULER                         0x00000400UL
 #define DOMAIN__GETDOMAININFO                     0x00000800UL
 #define DOMAIN__GETVCPUINFO                       0x00001000UL

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzu-0007Sa-DV; Wed, 19 Dec 2012 19:08:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzs-0007SC-GE
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:04 +0000
Received: from [85.158.143.99:22669] by server-2.bemta-4.messagelabs.com id
	C2/5B-30861-39012D05; Wed, 19 Dec 2012 19:08:03 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355944082!24917972!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7152 invoked from network); 19 Dec 2012 19:08:02 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:02 -0000
Received: by mail-wg0-f54.google.com with SMTP id fg15so1100947wgb.33
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=T92QiZ3bfQrTT2AV01kQGLH9Afuv3Ghx1sJBKFLKBdk=;
	b=BMytaQGwARIGkQKO3bWDK73JyIHlK+Pa5pB6YRIlLogirem0vzFrAjwsHXPX2ZYMXy
	fgDusZNgX3/Dy8UH/u/TMLTjquWybKFIfs47ts1LUjpllR5trygld8aU7VunCK+tPbty
	ROWQYjo3LmnTTGddyVTqGo79UssSWgzVyX9fRf4NAor0qR/47tFgw6TDCthTtBReZtyO
	HdTLjsccRDaEwlHPeLw5V0tvdUVYHUElq8t98Ooot/P5th2ZO0weSEgMogmVQH5H1wLH
	CB1bhsbJwOJQgxC6Nwxcboesd1KVAxTf/GXcIKbQnVvb83n2zx7yvV0reNQ11eXe0w+g
	gF0w==
X-Received: by 10.180.19.99 with SMTP id d3mr13066620wie.4.1355944081949;
	Wed, 19 Dec 2012 11:08:01 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.07.59
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:01 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 06d2f322a6319d8ba212b50dc3cbee6ed71668da
Message-Id: <06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:19 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While vcpu-affinity tells where VCPUs must run, node-affinity tells
where they should, or rather prefer to, run. Respecting vcpu-affinity
remains mandatory, but node-affinity is not that strict: it only
expresses a preference, although honouring it will almost always
bring a significant performance benefit (especially as compared to
not having any affinity at all).

This change modifies the VCPU load balancing algorithm (for the
credit scheduler only), introducing a two-step logic.
During the first step, we use the node-affinity mask. The aim is
to give precedence to the CPUs where the domain is known to prefer
to run. If that fails to find a valid PCPU, the node-affinity is
simply ignored and, in the second step, we fall back to using
cpu-affinity only.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * CPU masks variables moved off the stack, as requested during
   review. As per the comments in the code, having them in the private
   (per-scheduler instance) struct could have been enough, but it would be
   racy (again, see comments). For that reason, use a per-CPU set of
   them instead (via per_cpu());
 * George suggested a different load balancing logic during v1's review. I
   think he was right, so I changed the old implementation in a way
   that closely resembles his suggestion. I rewrote most of this patch to
   introduce a more sensible and effective node-affinity handling logic.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -111,6 +111,33 @@
 
 
 /*
+ * Node Balancing
+ */
+#define CSCHED_BALANCE_CPU_AFFINITY     0
+#define CSCHED_BALANCE_NODE_AFFINITY    1
+#define CSCHED_BALANCE_LAST CSCHED_BALANCE_NODE_AFFINITY
+
+/*
+ * When building for a high number of CPUs, cpumask_var_t
+ * variables on stack are better avoided. However, we need them,
+ * in order to be able to consider both vcpu and node affinity.
+ * We also don't want to xmalloc()/xfree() them, as that would
+ * happen in critical code paths. Therefore, let's (pre)allocate
+ * some scratch space for them.
+ *
+ * Having one mask for each instance of the scheduler seems
+ * enough, and that would suggest putting it within `struct
+ * csched_private' below. However, we don't always hold the
+ * private scheduler lock when the mask itself would need to
+ * be used, leaving room for races. For that reason, we define
+ * and use a cpumask_t for each CPU. As preemption is not an
+ * issue here (we're holding the runqueue spin-lock!), that is
+ * both enough and safe.
+ */
+DEFINE_PER_CPU(cpumask_t, csched_balance_mask);
+#define scratch_balance_mask (this_cpu(csched_balance_mask))
+
+/*
  * Boot parameters
  */
 static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
@@ -159,6 +186,9 @@ struct csched_dom {
     struct list_head active_vcpu;
     struct list_head active_sdom_elem;
     struct domain *dom;
+    /* cpumask translated from the domain's node-affinity.
+     * Basically, the CPUs we prefer to be scheduled on. */
+    cpumask_var_t node_affinity_cpumask;
     uint16_t active_vcpu_count;
     uint16_t weight;
     uint16_t cap;
@@ -239,6 +269,42 @@ static inline void
     list_del_init(&svc->runq_elem);
 }
 
+#define for_each_csched_balance_step(__step) \
+    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
+
+/*
+ * Each csched-balance step has to use its own cpumask. This function
+ * determines which one, given the step, and copies it in mask. Notice
+ * that, in the case of the node-affinity balancing step, it also filters
+ * out from the node-affinity mask the cpus that are not part of vc's
+ * cpu-affinity, as we do not want to end up running a vcpu where it
+ * would like to run but is not allowed to!
+ *
+ * As an optimization, if a domain does not have any node-affinity at all
+ * (namely, its node-affinity is automatically computed), not only will
+ * the computed mask reflect its vcpu-affinity, but we also return -1 to
+ * let the caller know that it can skip the step or quit the loop (if it
+ * wants).
+ */
+static int
+csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
+{
+    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
+    {
+        struct domain *d = vc->domain;
+        struct csched_dom *sdom = CSCHED_DOM(d);
+
+        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
+
+        if ( cpumask_full(sdom->node_affinity_cpumask) )
+            return -1;
+    }
+    else /* step == CSCHED_BALANCE_CPU_AFFINITY */
+        cpumask_copy(mask, vc->cpu_affinity);
+
+    return 0;
+}
+
 static void burn_credits(struct csched_vcpu *svc, s_time_t now)
 {
     s_time_t delta;
@@ -266,67 +332,94 @@ static inline void
     struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask, idle_mask;
-    int idlers_empty;
+    int balance_step, idlers_empty;
 
     ASSERT(cur);
-    cpumask_clear(&mask);
-
     idlers_empty = cpumask_empty(prv->idlers);
 
     /*
-     * If the pcpu is idle, or there are no idlers and the new
-     * vcpu is a higher priority than the old vcpu, run it here.
-     *
-     * If there are idle cpus, first try to find one suitable to run
-     * new, so we can avoid preempting cur.  If we cannot find a
-     * suitable idler on which to run new, run it here, but try to
-     * find a suitable idler on which to run cur instead.
+     * Node and vcpu-affinity balancing loop. To speed things up, in case
+     * no node-affinity at all is present, scratch_balance_mask reflects
+     * the vcpu-affinity, and ret is -1, so that we then can quit the
+     * loop after only one step.
      */
-    if ( cur->pri == CSCHED_PRI_IDLE
-         || (idlers_empty && new->pri > cur->pri) )
+    for_each_csched_balance_step( balance_step )
     {
-        if ( cur->pri != CSCHED_PRI_IDLE )
-            SCHED_STAT_CRANK(tickle_idlers_none);
-        cpumask_set_cpu(cpu, &mask);
-    }
-    else if ( !idlers_empty )
-    {
-        /* Check whether or not there are idlers that can run new */
-        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
+        int ret, new_idlers_empty;
+
+        cpumask_clear(&mask);
 
         /*
-         * If there are no suitable idlers for new, and it's higher
-         * priority than cur, ask the scheduler to migrate cur away.
-         * We have to act like this (instead of just waking some of
-         * the idlers suitable for cur) because cur is running.
+         * If the pcpu is idle, or there are no idlers and the new
+         * vcpu is a higher priority than the old vcpu, run it here.
          *
-         * If there are suitable idlers for new, no matter priorities,
-         * leave cur alone (as it is running and is, likely, cache-hot)
-         * and wake some of them (which is waking up and so is, likely,
-         * cache cold anyway).
+         * If there are idle cpus, first try to find one suitable to run
+         * new, so we can avoid preempting cur.  If we cannot find a
+         * suitable idler on which to run new, run it here, but try to
+         * find a suitable idler on which to run cur instead.
          */
-        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
+        if ( cur->pri == CSCHED_PRI_IDLE
+             || (idlers_empty && new->pri > cur->pri) )
         {
-            SCHED_STAT_CRANK(tickle_idlers_none);
-            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
-            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
-            SCHED_STAT_CRANK(migrate_kicked_away);
-            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+            if ( cur->pri != CSCHED_PRI_IDLE )
+                SCHED_STAT_CRANK(tickle_idlers_none);
             cpumask_set_cpu(cpu, &mask);
         }
-        else if ( !cpumask_empty(&idle_mask) )
+        else if ( !idlers_empty )
         {
-            /* Which of the idlers suitable for new shall we wake up? */
-            SCHED_STAT_CRANK(tickle_idlers_some);
-            if ( opt_tickle_one_idle )
+            /* Are there idlers suitable for new (for this balance step)? */
+            ret = csched_balance_cpumask(new->vcpu, balance_step,
+                                         &scratch_balance_mask);
+            cpumask_and(&idle_mask, prv->idlers, &scratch_balance_mask);
+            new_idlers_empty = cpumask_empty(&idle_mask);
+
+            /*
+             * Let's not be too harsh! If there aren't idlers suitable
+             * for new in its node-affinity mask, make sure we check its
+             * vcpu-affinity as well, before taking final decisions.
+             */
+            if ( new_idlers_empty
+                 && (balance_step == CSCHED_BALANCE_NODE_AFFINITY && !ret) )
+                continue;
+
+            /*
+             * If there are no suitable idlers for new, and it's higher
+             * priority than cur, ask the scheduler to migrate cur away.
+             * We have to act like this (instead of just waking some of
+             * the idlers suitable for cur) because cur is running.
+             *
+             * If there are suitable idlers for new, no matter priorities,
+             * leave cur alone (as it is running and is, likely, cache-hot)
+             * and wake some of them (which is waking up and so is, likely,
+             * cache cold anyway).
+             */
+            if ( new_idlers_empty && new->pri > cur->pri )
             {
-                this_cpu(last_tickle_cpu) =
-                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
-                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
+                SCHED_STAT_CRANK(tickle_idlers_none);
+                SCHED_VCPU_STAT_CRANK(cur, kicked_away);
+                SCHED_VCPU_STAT_CRANK(cur, migrate_r);
+                SCHED_STAT_CRANK(migrate_kicked_away);
+                set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+                cpumask_set_cpu(cpu, &mask);
             }
-            else
-                cpumask_or(&mask, &mask, &idle_mask);
+            else if ( !new_idlers_empty )
+            {
+                /* Which of the idlers suitable for new shall we wake up? */
+                SCHED_STAT_CRANK(tickle_idlers_some);
+                if ( opt_tickle_one_idle )
+                {
+                    this_cpu(last_tickle_cpu) =
+                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
+                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
+                }
+                else
+                    cpumask_or(&mask, &mask, &idle_mask);
+            }
         }
+
+        /* Did we find anyone (or csched_balance_cpumask() says we're done)? */
+        if ( !cpumask_empty(&mask) || ret )
+            break;
     }
 
     if ( !cpumask_empty(&mask) )
@@ -475,15 +568,28 @@ static inline int
 }
 
 static inline int
-__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu)
+__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
 {
     /*
      * Don't pick up work that's in the peer's scheduling tail or hot on
-     * peer PCPU. Only pick up work that's allowed to run on our CPU.
+     * peer PCPU. Only pick up work that prefers and/or is allowed to run
+     * on our CPU.
      */
     return !vc->is_running &&
            !__csched_vcpu_is_cache_hot(vc) &&
-           cpumask_test_cpu(dest_cpu, vc->cpu_affinity);
+           cpumask_test_cpu(dest_cpu, mask);
+}
+
+static inline int
+__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
+{
+    /*
+     * Consent to migration if cpu is one of the idlers in the VCPU's
+     * affinity mask. In fact, if that is not the case, it just means it
+     * was some other CPU that was tickled and should hence come and pick
+     * the VCPU up. Migrating it to cpu would only make things worse.
+     */
+    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
 }
 
 static int
@@ -493,85 +599,98 @@ static int
     cpumask_t idlers;
     cpumask_t *online;
     struct csched_pcpu *spc = NULL;
+    int ret, balance_step;
     int cpu;
 
-    /*
-     * Pick from online CPUs in VCPU's affinity mask, giving a
-     * preference to its current processor if it's in there.
-     */
     online = cpupool_scheduler_cpumask(vc->domain->cpupool);
-    cpumask_and(&cpus, online, vc->cpu_affinity);
-    cpu = cpumask_test_cpu(vc->processor, &cpus)
-            ? vc->processor
-            : cpumask_cycle(vc->processor, &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+    for_each_csched_balance_step( balance_step )
+    {
+        /* Pick an online CPU from the proper affinity mask */
+        ret = csched_balance_cpumask(vc, balance_step, &cpus);
+        cpumask_and(&cpus, &cpus, online);
 
-    /*
-     * Try to find an idle processor within the above constraints.
-     *
-     * In multi-core and multi-threaded CPUs, not all idle execution
-     * vehicles are equal!
-     *
-     * We give preference to the idle execution vehicle with the most
-     * idling neighbours in its grouping. This distributes work across
-     * distinct cores first and guarantees we don't do something stupid
-     * like run two VCPUs on co-hyperthreads while there are idle cores
-     * or sockets.
-     *
-     * Notice that, when computing the "idleness" of cpu, we may want to
-     * discount vc. That is, iff vc is the currently running and the only
-     * runnable vcpu on cpu, we add cpu to the idlers.
-     */
-    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
-    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
-        cpumask_set_cpu(cpu, &idlers);
-    cpumask_and(&cpus, &cpus, &idlers);
-    cpumask_clear_cpu(cpu, &cpus);
+        /* If present, prefer vc's current processor */
+        cpu = cpumask_test_cpu(vc->processor, &cpus)
+                ? vc->processor
+                : cpumask_cycle(vc->processor, &cpus);
+        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
 
-    while ( !cpumask_empty(&cpus) )
-    {
-        cpumask_t cpu_idlers;
-        cpumask_t nxt_idlers;
-        int nxt, weight_cpu, weight_nxt;
-        int migrate_factor;
+        /*
+         * Try to find an idle processor within the above constraints.
+         *
+         * In multi-core and multi-threaded CPUs, not all idle execution
+         * vehicles are equal!
+         *
+         * We give preference to the idle execution vehicle with the most
+         * idling neighbours in its grouping. This distributes work across
+         * distinct cores first and guarantees we don't do something stupid
+         * like run two VCPUs on co-hyperthreads while there are idle cores
+         * or sockets.
+         *
+         * Notice that, when computing the "idleness" of cpu, we may want to
+         * discount vc. That is, iff vc is the currently running and the only
+         * runnable vcpu on cpu, we add cpu to the idlers.
+         */
+        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
+        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
+            cpumask_set_cpu(cpu, &idlers);
+        cpumask_and(&cpus, &cpus, &idlers);
+        /* If there are idlers and cpu is still not among them, pick one */
+        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
+            cpu = cpumask_cycle(cpu, &cpus);
+        cpumask_clear_cpu(cpu, &cpus);
 
-        nxt = cpumask_cycle(cpu, &cpus);
+        while ( !cpumask_empty(&cpus) )
+        {
+            cpumask_t cpu_idlers;
+            cpumask_t nxt_idlers;
+            int nxt, weight_cpu, weight_nxt;
+            int migrate_factor;
 
-        if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
-        {
-            /* We're on the same socket, so check the busy-ness of threads.
-             * Migrate if # of idlers is less at all */
-            ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
-            migrate_factor = 1;
-            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask, cpu));
-            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask, nxt));
-        }
-        else
-        {
-            /* We're on different sockets, so check the busy-ness of cores.
-             * Migrate only if the other core is twice as idle */
-            ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
-            migrate_factor = 2;
-            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
-            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
+            nxt = cpumask_cycle(cpu, &cpus);
+
+            if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
+            {
+                /* We're on the same socket, so check the busy-ness of threads.
+                 * Migrate if # of idlers is less at all */
+                ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
+                migrate_factor = 1;
+                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask,
+                            cpu));
+                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask,
+                            nxt));
+            }
+            else
+            {
+                /* We're on different sockets, so check the busy-ness of cores.
+                 * Migrate only if the other core is twice as idle */
+                ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
+                migrate_factor = 2;
+                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
+                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
+            }
+
+            weight_cpu = cpumask_weight(&cpu_idlers);
+            weight_nxt = cpumask_weight(&nxt_idlers);
+            /* smt_power_savings: consolidate work rather than spreading it */
+            if ( sched_smt_power_savings ?
+                 weight_cpu > weight_nxt :
+                 weight_cpu * migrate_factor < weight_nxt )
+            {
+                cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
+                spc = CSCHED_PCPU(nxt);
+                cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
+                cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
+            }
+            else
+            {
+                cpumask_andnot(&cpus, &cpus, &nxt_idlers);
+            }
         }
 
-        weight_cpu = cpumask_weight(&cpu_idlers);
-        weight_nxt = cpumask_weight(&nxt_idlers);
-        /* smt_power_savings: consolidate work rather than spreading it */
-        if ( sched_smt_power_savings ?
-             weight_cpu > weight_nxt :
-             weight_cpu * migrate_factor < weight_nxt )
-        {
-            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
-            spc = CSCHED_PCPU(nxt);
-            cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
-            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
-        }
-        else
-        {
-            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
-        }
+        /* Stop if cpu is idle (or if csched_balance_cpumask() says we can) */
+        if ( cpumask_test_cpu(cpu, &idlers) || ret )
+            break;
     }
 
     if ( commit && spc )
@@ -913,6 +1032,13 @@ csched_alloc_domdata(const struct schedu
     if ( sdom == NULL )
         return NULL;
 
+    if ( !alloc_cpumask_var(&sdom->node_affinity_cpumask) )
+    {
+        xfree(sdom);
+        return NULL;
+    }
+    cpumask_setall(sdom->node_affinity_cpumask);
+
     /* Initialize credit and weight */
     INIT_LIST_HEAD(&sdom->active_vcpu);
     sdom->active_vcpu_count = 0;
@@ -944,6 +1070,9 @@ csched_dom_init(const struct scheduler *
 static void
 csched_free_domdata(const struct scheduler *ops, void *data)
 {
+    struct csched_dom *sdom = data;
+
+    free_cpumask_var(sdom->node_affinity_cpumask);
     xfree(data);
 }
 
@@ -1240,9 +1369,10 @@ csched_tick(void *_cpu)
 }
 
 static struct csched_vcpu *
-csched_runq_steal(int peer_cpu, int cpu, int pri)
+csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
+    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, peer_cpu));
     const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
     struct csched_vcpu *speer;
     struct list_head *iter;
@@ -1265,11 +1395,24 @@ csched_runq_steal(int peer_cpu, int cpu,
             if ( speer->pri <= pri )
                 break;
 
-            /* Is this VCPU is runnable on our PCPU? */
+            /* Is this VCPU runnable on our PCPU? */
             vc = speer->vcpu;
             BUG_ON( is_idle_vcpu(vc) );
 
-            if (__csched_vcpu_is_migrateable(vc, cpu))
+            /*
+             * Retrieve the correct mask for this balance_step or, if we're
+             * dealing with node-affinity and the vcpu has no node affinity
+             * at all, just skip this vcpu. That is needed if we want to
+             * check whether we have any node-affine work to steal first
+             * (before considering vcpu-affine work).
+             */
+            if ( csched_balance_cpumask(vc, balance_step,
+                                        &scratch_balance_mask) )
+                continue;
+
+            if ( __csched_vcpu_is_migrateable(vc, cpu, &scratch_balance_mask)
+                 && __csched_vcpu_should_migrate(cpu, &scratch_balance_mask,
+                                                 prv->idlers) )
             {
                 /* We got a candidate. Grab it! */
                 TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
@@ -1295,7 +1438,8 @@ csched_load_balance(struct csched_privat
     struct csched_vcpu *speer;
     cpumask_t workers;
     cpumask_t *online;
-    int peer_cpu;
+    int peer_cpu, peer_node, bstep;
+    int node = cpu_to_node(cpu);
 
     BUG_ON( cpu != snext->vcpu->processor );
     online = cpupool_scheduler_cpumask(per_cpu(cpupool, cpu));
@@ -1312,42 +1456,68 @@ csched_load_balance(struct csched_privat
         SCHED_STAT_CRANK(load_balance_other);
 
     /*
-     * Peek at non-idling CPUs in the system, starting with our
-     * immediate neighbour.
+     * Let's look around for work to steal, taking both vcpu-affinity
+     * and node-affinity into account. More specifically, we check all
+     * the non-idle CPUs' runq, looking for:
+     *  1. any node-affine work to steal first,
+     *  2. if not finding anything, any vcpu-affine work to steal.
      */
-    cpumask_andnot(&workers, online, prv->idlers);
-    cpumask_clear_cpu(cpu, &workers);
-    peer_cpu = cpu;
+    for_each_csched_balance_step( bstep )
+    {
+        /*
+         * We peek at the non-idling CPUs in a node-wise fashion. In fact,
+         * it is more likely that we find some node-affine work on our same
+         * node, not to mention that migrating vcpus within the same node
+             * could well be expected to be cheaper than across nodes (memory
+         * stays local, there might be some node-wide cache[s], etc.).
+         */
+        peer_node = node;
+        do
+        {
+            /* Find out which CPUs are not idle in this node */
+            cpumask_andnot(&workers, online, prv->idlers);
+            cpumask_and(&workers, &workers, &node_to_cpumask(peer_node));
+            cpumask_clear_cpu(cpu, &workers);
 
-    while ( !cpumask_empty(&workers) )
-    {
-        peer_cpu = cpumask_cycle(peer_cpu, &workers);
-        cpumask_clear_cpu(peer_cpu, &workers);
+            if ( cpumask_empty(&workers) )
+                goto next_node;
 
-        /*
-         * Get ahold of the scheduler lock for this peer CPU.
-         *
-         * Note: We don't spin on this lock but simply try it. Spinning could
-         * cause a deadlock if the peer CPU is also load balancing and trying
-         * to lock this CPU.
-         */
-        if ( !pcpu_schedule_trylock(peer_cpu) )
-        {
-            SCHED_STAT_CRANK(steal_trylock_failed);
-            continue;
-        }
+            peer_cpu = cpumask_first(&workers);
+            do
+            {
+                /*
+                 * Get ahold of the scheduler lock for this peer CPU.
+                 *
+                 * Note: We don't spin on this lock but simply try it. Spinning
+                 * could cause a deadlock if the peer CPU is also load
+                 * balancing and trying to lock this CPU.
+                 */
+                if ( !pcpu_schedule_trylock(peer_cpu) )
+                {
+                    SCHED_STAT_CRANK(steal_trylock_failed);
+                    peer_cpu = cpumask_cycle(peer_cpu, &workers);
+                    continue;
+                }
 
-        /*
-         * Any work over there to steal?
-         */
-        speer = cpumask_test_cpu(peer_cpu, online) ?
-            csched_runq_steal(peer_cpu, cpu, snext->pri) : NULL;
-        pcpu_schedule_unlock(peer_cpu);
-        if ( speer != NULL )
-        {
-            *stolen = 1;
-            return speer;
-        }
+                /* Any work over there to steal? */
+                speer = cpumask_test_cpu(peer_cpu, online) ?
+                    csched_runq_steal(peer_cpu, cpu, snext->pri, bstep) : NULL;
+                pcpu_schedule_unlock(peer_cpu);
+
+                /* As soon as one vcpu is found, balancing ends */
+                if ( speer != NULL )
+                {
+                    *stolen = 1;
+                    return speer;
+                }
+
+                peer_cpu = cpumask_cycle(peer_cpu, &workers);
+
+            } while( peer_cpu != cpumask_first(&workers) );
+
+ next_node:
+            peer_node = cycle_node(peer_node, node_online_map);
+        } while( peer_node != node );
     }
 
  out:
diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
--- a/xen/include/xen/nodemask.h
+++ b/xen/include/xen/nodemask.h
@@ -41,6 +41,8 @@
  * int last_node(mask)			Number highest set bit, or MAX_NUMNODES
  * int first_unset_node(mask)		First node not set in mask, or 
  *					MAX_NUMNODES.
+ * int cycle_node(node, mask)		Next node cycling from 'node', or
+ *					MAX_NUMNODES
  *
  * nodemask_t nodemask_of_node(node)	Return nodemask with bit 'node' set
  * NODE_MASK_ALL			Initializer - all bits set
@@ -254,6 +256,16 @@ static inline int __first_unset_node(con
 			find_first_zero_bit(maskp->bits, MAX_NUMNODES));
 }
 
+#define cycle_node(n, src) __cycle_node((n), &(src), MAX_NUMNODES)
+static inline int __cycle_node(int n, const nodemask_t *maskp, int nbits)
+{
+    int nxt = __next_node(n, maskp, nbits);
+
+    if (nxt == nbits)
+        nxt = __first_node(maskp, nbits);
+    return nxt;
+}
+
 #define NODE_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(MAX_NUMNODES)
 
 #if MAX_NUMNODES <= BITS_PER_LONG

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzu-0007Sa-DV; Wed, 19 Dec 2012 19:08:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzs-0007SC-GE
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:04 +0000
Received: from [85.158.143.99:22669] by server-2.bemta-4.messagelabs.com id
	C2/5B-30861-39012D05; Wed, 19 Dec 2012 19:08:03 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1355944082!24917972!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7152 invoked from network); 19 Dec 2012 19:08:02 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:02 -0000
Received: by mail-wg0-f54.google.com with SMTP id fg15so1100947wgb.33
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=T92QiZ3bfQrTT2AV01kQGLH9Afuv3Ghx1sJBKFLKBdk=;
	b=BMytaQGwARIGkQKO3bWDK73JyIHlK+Pa5pB6YRIlLogirem0vzFrAjwsHXPX2ZYMXy
	fgDusZNgX3/Dy8UH/u/TMLTjquWybKFIfs47ts1LUjpllR5trygld8aU7VunCK+tPbty
	ROWQYjo3LmnTTGddyVTqGo79UssSWgzVyX9fRf4NAor0qR/47tFgw6TDCthTtBReZtyO
	HdTLjsccRDaEwlHPeLw5V0tvdUVYHUElq8t98Ooot/P5th2ZO0weSEgMogmVQH5H1wLH
	CB1bhsbJwOJQgxC6Nwxcboesd1KVAxTf/GXcIKbQnVvb83n2zx7yvV0reNQ11eXe0w+g
	gF0w==
X-Received: by 10.180.19.99 with SMTP id d3mr13066620wie.4.1355944081949;
	Wed, 19 Dec 2012 11:08:01 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.07.59
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:01 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 06d2f322a6319d8ba212b50dc3cbee6ed71668da
Message-Id: <06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:19 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As vcpu-affinity tells where VCPUs must run, node-affinity tells
where they should, or rather prefer to, run. While respecting
vcpu-affinity remains mandatory, node-affinity is not as strict: it
only expresses a preference, although honouring it will almost always
bring a significant performance benefit (especially as compared to
not having any affinity at all).

This change modifies the VCPU load balancing algorithm (for the
credit scheduler only), introducing a two-step logic.
During the first step, we use the node-affinity mask, aiming to
give precedence to the CPUs where it is known to be preferable
for the domain to run. If that fails to find a valid PCPU,
node-affinity is just ignored and, in the second step, we fall
back to using cpu-affinity only.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * CPU mask variables moved off the stack, as requested during
   review. As per the comments in the code, having them in the private
   (per-scheduler instance) struct could have been enough, but it would be
   racy (again, see comments). For that reason, use a global set of
   them instead (via per_cpu());
 * George suggested a different load balancing logic during v1's review. I
   think he was right, so I changed the old implementation in a way
   that closely resembles it. I rewrote most of this patch to introduce
   a more sensible and effective node-affinity handling logic.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -111,6 +111,33 @@
 
 
 /*
+ * Node Balancing
+ */
+#define CSCHED_BALANCE_CPU_AFFINITY     0
+#define CSCHED_BALANCE_NODE_AFFINITY    1
+#define CSCHED_BALANCE_LAST CSCHED_BALANCE_NODE_AFFINITY
+
+/*
+ * When building for high number of CPUs, cpumask_var_t
+ * variables on stack are better avoided. However, we need them,
+ * in order to be able to consider both vcpu and node affinity.
+ * We also don't want to xmalloc()/xfree() them, as that would
+ * happen in critical code paths. Therefore, let's (pre)allocate
+ * some scratch space for them.
+ *
+ * Having one mask for each instance of the scheduler seems
+ * enough, and that would suggest putting it within `struct
+ * csched_private' below. However, we don't always hold the
+ * private scheduler lock when the mask itself would need to
+ * be used, leaving room for races. For that reason, we define
+ * and use a cpumask_t for each CPU. As preemption is not an
+ * issue here (we're holding the runqueue spin-lock!), that is
+ * both enough and safe.
+ */
+DEFINE_PER_CPU(cpumask_t, csched_balance_mask);
+#define scratch_balance_mask (this_cpu(csched_balance_mask))
+
+/*
  * Boot parameters
  */
 static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
@@ -159,6 +186,9 @@ struct csched_dom {
     struct list_head active_vcpu;
     struct list_head active_sdom_elem;
     struct domain *dom;
+    /* cpumask translated from the domain's node-affinity.
+     * Basically, the CPUs we prefer to be scheduled on. */
+    cpumask_var_t node_affinity_cpumask;
     uint16_t active_vcpu_count;
     uint16_t weight;
     uint16_t cap;
@@ -239,6 +269,42 @@ static inline void
     list_del_init(&svc->runq_elem);
 }
 
+#define for_each_csched_balance_step(__step) \
+    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
+
+/*
+ * Each csched-balance step has to use its own cpumask. This function
+ * determines which one, given the step, and copies it in mask. Notice
+ * that, in case of node-affinity balancing step, it also filters out from
+ * the node-affinity mask the cpus that are not part of vc's cpu-affinity,
+ * as we do not want to end up running a vcpu where it would like, but
+ * is not allowed to!
+ *
+ * As an optimization, if a domain does not have any node-affinity at all
+ * (namely, its node affinity is automatically computed), not only the
+ * computed mask will reflect its vcpu-affinity, but we also return -1 to
+ * let the caller know that it can skip the step or quit the loop (if it
+ * wants).
+ */
+static int
+csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
+{
+    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
+    {
+        struct domain *d = vc->domain;
+        struct csched_dom *sdom = CSCHED_DOM(d);
+
+        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
+
+        if ( cpumask_full(sdom->node_affinity_cpumask) )
+            return -1;
+    }
+    else /* step == CSCHED_BALANCE_CPU_AFFINITY */
+        cpumask_copy(mask, vc->cpu_affinity);
+
+    return 0;
+}
+
 static void burn_credits(struct csched_vcpu *svc, s_time_t now)
 {
     s_time_t delta;
@@ -266,67 +332,94 @@ static inline void
     struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask, idle_mask;
-    int idlers_empty;
+    int balance_step, idlers_empty;
 
     ASSERT(cur);
-    cpumask_clear(&mask);
-
     idlers_empty = cpumask_empty(prv->idlers);
 
     /*
-     * If the pcpu is idle, or there are no idlers and the new
-     * vcpu is a higher priority than the old vcpu, run it here.
-     *
-     * If there are idle cpus, first try to find one suitable to run
-     * new, so we can avoid preempting cur.  If we cannot find a
-     * suitable idler on which to run new, run it here, but try to
-     * find a suitable idler on which to run cur instead.
+     * Node and vcpu-affinity balancing loop. To speed things up, in case
+     * no node-affinity at all is present, scratch_balance_mask reflects
+     * the vcpu-affinity, and ret is -1, so that we can then quit the
+     * loop after only one step.
      */
-    if ( cur->pri == CSCHED_PRI_IDLE
-         || (idlers_empty && new->pri > cur->pri) )
+    for_each_csched_balance_step( balance_step )
     {
-        if ( cur->pri != CSCHED_PRI_IDLE )
-            SCHED_STAT_CRANK(tickle_idlers_none);
-        cpumask_set_cpu(cpu, &mask);
-    }
-    else if ( !idlers_empty )
-    {
-        /* Check whether or not there are idlers that can run new */
-        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
+        int ret, new_idlers_empty;
+
+        cpumask_clear(&mask);
 
         /*
-         * If there are no suitable idlers for new, and it's higher
-         * priority than cur, ask the scheduler to migrate cur away.
-         * We have to act like this (instead of just waking some of
-         * the idlers suitable for cur) because cur is running.
+         * If the pcpu is idle, or there are no idlers and the new
+         * vcpu is a higher priority than the old vcpu, run it here.
          *
-         * If there are suitable idlers for new, no matter priorities,
-         * leave cur alone (as it is running and is, likely, cache-hot)
-         * and wake some of them (which is waking up and so is, likely,
-         * cache cold anyway).
+         * If there are idle cpus, first try to find one suitable to run
+         * new, so we can avoid preempting cur.  If we cannot find a
+         * suitable idler on which to run new, run it here, but try to
+         * find a suitable idler on which to run cur instead.
          */
-        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
+        if ( cur->pri == CSCHED_PRI_IDLE
+             || (idlers_empty && new->pri > cur->pri) )
         {
-            SCHED_STAT_CRANK(tickle_idlers_none);
-            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
-            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
-            SCHED_STAT_CRANK(migrate_kicked_away);
-            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+            if ( cur->pri != CSCHED_PRI_IDLE )
+                SCHED_STAT_CRANK(tickle_idlers_none);
             cpumask_set_cpu(cpu, &mask);
         }
-        else if ( !cpumask_empty(&idle_mask) )
+        else if ( !idlers_empty )
         {
-            /* Which of the idlers suitable for new shall we wake up? */
-            SCHED_STAT_CRANK(tickle_idlers_some);
-            if ( opt_tickle_one_idle )
+            /* Are there idlers suitable for new (for this balance step)? */
+            ret = csched_balance_cpumask(new->vcpu, balance_step,
+                                         &scratch_balance_mask);
+            cpumask_and(&idle_mask, prv->idlers, &scratch_balance_mask);
+            new_idlers_empty = cpumask_empty(&idle_mask);
+
+            /*
+             * Let's not be too harsh! If there aren't idlers suitable
+             * for new in its node-affinity mask, make sure we check its
+             * vcpu-affinity as well, before taking any final decision.
+             */
+            if ( new_idlers_empty
+                 && (balance_step == CSCHED_BALANCE_NODE_AFFINITY && !ret) )
+                continue;
+
+            /*
+             * If there are no suitable idlers for new, and it's higher
+             * priority than cur, ask the scheduler to migrate cur away.
+             * We have to act like this (instead of just waking some of
+             * the idlers suitable for cur) because cur is running.
+             *
+             * If there are suitable idlers for new, no matter priorities,
+             * leave cur alone (as it is running and is, likely, cache-hot)
+             * and wake some of them (which is waking up and so is, likely,
+             * cache cold anyway).
+             */
+            if ( new_idlers_empty && new->pri > cur->pri )
             {
-                this_cpu(last_tickle_cpu) =
-                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
-                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
+                SCHED_STAT_CRANK(tickle_idlers_none);
+                SCHED_VCPU_STAT_CRANK(cur, kicked_away);
+                SCHED_VCPU_STAT_CRANK(cur, migrate_r);
+                SCHED_STAT_CRANK(migrate_kicked_away);
+                set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
+                cpumask_set_cpu(cpu, &mask);
             }
-            else
-                cpumask_or(&mask, &mask, &idle_mask);
+            else if ( !new_idlers_empty )
+            {
+                /* Which of the idlers suitable for new shall we wake up? */
+                SCHED_STAT_CRANK(tickle_idlers_some);
+                if ( opt_tickle_one_idle )
+                {
+                    this_cpu(last_tickle_cpu) =
+                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
+                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
+                }
+                else
+                    cpumask_or(&mask, &mask, &idle_mask);
+            }
         }
+
+        /* Did we find anyone (or csched_balance_cpumask() says we're done)? */
+        if ( !cpumask_empty(&mask) || ret )
+            break;
     }
 
     if ( !cpumask_empty(&mask) )
@@ -475,15 +568,28 @@ static inline int
 }
 
 static inline int
-__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu)
+__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
 {
     /*
      * Don't pick up work that's in the peer's scheduling tail or hot on
-     * peer PCPU. Only pick up work that's allowed to run on our CPU.
+     * peer PCPU. Only pick up work that prefers and/or is allowed to run
+     * on our CPU.
      */
     return !vc->is_running &&
            !__csched_vcpu_is_cache_hot(vc) &&
-           cpumask_test_cpu(dest_cpu, vc->cpu_affinity);
+           cpumask_test_cpu(dest_cpu, mask);
+}
+
+static inline int
+__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
+{
+    /*
+     * Consent to migration if cpu is one of the idlers in the VCPU's
+     * affinity mask. In fact, if that is not the case, it just means it
+     * was some other CPU that was tickled and should hence come and pick
+     * VCPU up. Migrating it to cpu would only make things worse.
+     */
+    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
 }
 
 static int
@@ -493,85 +599,98 @@ static int
     cpumask_t idlers;
     cpumask_t *online;
     struct csched_pcpu *spc = NULL;
+    int ret, balance_step;
     int cpu;
 
-    /*
-     * Pick from online CPUs in VCPU's affinity mask, giving a
-     * preference to its current processor if it's in there.
-     */
     online = cpupool_scheduler_cpumask(vc->domain->cpupool);
-    cpumask_and(&cpus, online, vc->cpu_affinity);
-    cpu = cpumask_test_cpu(vc->processor, &cpus)
-            ? vc->processor
-            : cpumask_cycle(vc->processor, &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+    for_each_csched_balance_step( balance_step )
+    {
+        /* Pick an online CPU from the proper affinity mask */
+        ret = csched_balance_cpumask(vc, balance_step, &cpus);
+        cpumask_and(&cpus, &cpus, online);
 
-    /*
-     * Try to find an idle processor within the above constraints.
-     *
-     * In multi-core and multi-threaded CPUs, not all idle execution
-     * vehicles are equal!
-     *
-     * We give preference to the idle execution vehicle with the most
-     * idling neighbours in its grouping. This distributes work across
-     * distinct cores first and guarantees we don't do something stupid
-     * like run two VCPUs on co-hyperthreads while there are idle cores
-     * or sockets.
-     *
-     * Notice that, when computing the "idleness" of cpu, we may want to
-     * discount vc. That is, iff vc is the currently running and the only
-     * runnable vcpu on cpu, we add cpu to the idlers.
-     */
-    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
-    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
-        cpumask_set_cpu(cpu, &idlers);
-    cpumask_and(&cpus, &cpus, &idlers);
-    cpumask_clear_cpu(cpu, &cpus);
+        /* If present, prefer vc's current processor */
+        cpu = cpumask_test_cpu(vc->processor, &cpus)
+                ? vc->processor
+                : cpumask_cycle(vc->processor, &cpus);
+        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
 
-    while ( !cpumask_empty(&cpus) )
-    {
-        cpumask_t cpu_idlers;
-        cpumask_t nxt_idlers;
-        int nxt, weight_cpu, weight_nxt;
-        int migrate_factor;
+        /*
+         * Try to find an idle processor within the above constraints.
+         *
+         * In multi-core and multi-threaded CPUs, not all idle execution
+         * vehicles are equal!
+         *
+         * We give preference to the idle execution vehicle with the most
+         * idling neighbours in its grouping. This distributes work across
+         * distinct cores first and guarantees we don't do something stupid
+         * like run two VCPUs on co-hyperthreads while there are idle cores
+         * or sockets.
+         *
+         * Notice that, when computing the "idleness" of cpu, we may want to
+         * discount vc. That is, iff vc is the currently running and the only
+         * runnable vcpu on cpu, we add cpu to the idlers.
+         */
+        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
+        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
+            cpumask_set_cpu(cpu, &idlers);
+        cpumask_and(&cpus, &cpus, &idlers);
+        /* If there are idlers and cpu is still not among them, pick one */
+        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
+            cpu = cpumask_cycle(cpu, &cpus);
+        cpumask_clear_cpu(cpu, &cpus);
 
-        nxt = cpumask_cycle(cpu, &cpus);
+        while ( !cpumask_empty(&cpus) )
+        {
+            cpumask_t cpu_idlers;
+            cpumask_t nxt_idlers;
+            int nxt, weight_cpu, weight_nxt;
+            int migrate_factor;
 
-        if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
-        {
-            /* We're on the same socket, so check the busy-ness of threads.
-             * Migrate if # of idlers is less at all */
-            ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
-            migrate_factor = 1;
-            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask, cpu));
-            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask, nxt));
-        }
-        else
-        {
-            /* We're on different sockets, so check the busy-ness of cores.
-             * Migrate only if the other core is twice as idle */
-            ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
-            migrate_factor = 2;
-            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
-            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
+            nxt = cpumask_cycle(cpu, &cpus);
+
+            if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
+            {
+                /* We're on the same socket, so check the busy-ness of threads.
+                 * Migrate if # of idlers is less at all */
+                ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
+                migrate_factor = 1;
+                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask,
+                            cpu));
+                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask,
+                            nxt));
+            }
+            else
+            {
+                /* We're on different sockets, so check the busy-ness of cores.
+                 * Migrate only if the other core is twice as idle */
+                ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
+                migrate_factor = 2;
+                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
+                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
+            }
+
+            weight_cpu = cpumask_weight(&cpu_idlers);
+            weight_nxt = cpumask_weight(&nxt_idlers);
+            /* smt_power_savings: consolidate work rather than spreading it */
+            if ( sched_smt_power_savings ?
+                 weight_cpu > weight_nxt :
+                 weight_cpu * migrate_factor < weight_nxt )
+            {
+                cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
+                spc = CSCHED_PCPU(nxt);
+                cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
+                cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
+            }
+            else
+            {
+                cpumask_andnot(&cpus, &cpus, &nxt_idlers);
+            }
         }
 
-        weight_cpu = cpumask_weight(&cpu_idlers);
-        weight_nxt = cpumask_weight(&nxt_idlers);
-        /* smt_power_savings: consolidate work rather than spreading it */
-        if ( sched_smt_power_savings ?
-             weight_cpu > weight_nxt :
-             weight_cpu * migrate_factor < weight_nxt )
-        {
-            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
-            spc = CSCHED_PCPU(nxt);
-            cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
-            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
-        }
-        else
-        {
-            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
-        }
+        /* Stop if cpu is idle (or if csched_balance_cpumask() says we can) */
+        if ( cpumask_test_cpu(cpu, &idlers) || ret )
+            break;
     }
 
     if ( commit && spc )
@@ -913,6 +1032,13 @@ csched_alloc_domdata(const struct schedu
     if ( sdom == NULL )
         return NULL;
 
+    if ( !alloc_cpumask_var(&sdom->node_affinity_cpumask) )
+    {
+        xfree(sdom);
+        return NULL;
+    }
+    cpumask_setall(sdom->node_affinity_cpumask);
+
     /* Initialize credit and weight */
     INIT_LIST_HEAD(&sdom->active_vcpu);
     sdom->active_vcpu_count = 0;
@@ -944,6 +1070,9 @@ csched_dom_init(const struct scheduler *
 static void
 csched_free_domdata(const struct scheduler *ops, void *data)
 {
+    struct csched_dom *sdom = data;
+
+    free_cpumask_var(sdom->node_affinity_cpumask);
     xfree(data);
 }
 
@@ -1240,9 +1369,10 @@ csched_tick(void *_cpu)
 }
 
 static struct csched_vcpu *
-csched_runq_steal(int peer_cpu, int cpu, int pri)
+csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
+    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, peer_cpu));
     const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
     struct csched_vcpu *speer;
     struct list_head *iter;
@@ -1265,11 +1395,24 @@ csched_runq_steal(int peer_cpu, int cpu,
             if ( speer->pri <= pri )
                 break;
 
-            /* Is this VCPU is runnable on our PCPU? */
+            /* Is this VCPU runnable on our PCPU? */
             vc = speer->vcpu;
             BUG_ON( is_idle_vcpu(vc) );
 
-            if (__csched_vcpu_is_migrateable(vc, cpu))
+            /*
+             * Retrieve the correct mask for this balance_step or, if we're
+             * dealing with node-affinity and the vcpu has no node affinity
+             * at all, just skip this vcpu. That is needed if we want to
+             * check if we have any node-affine work to steal first (wrt
+             * any vcpu-affine work).
+             */
+            if ( csched_balance_cpumask(vc, balance_step,
+                                        &scratch_balance_mask) )
+                continue;
+
+            if ( __csched_vcpu_is_migrateable(vc, cpu, &scratch_balance_mask)
+                 && __csched_vcpu_should_migrate(cpu, &scratch_balance_mask,
+                                                 prv->idlers) )
             {
                 /* We got a candidate. Grab it! */
                 TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
@@ -1295,7 +1438,8 @@ csched_load_balance(struct csched_privat
     struct csched_vcpu *speer;
     cpumask_t workers;
     cpumask_t *online;
-    int peer_cpu;
+    int peer_cpu, peer_node, bstep;
+    int node = cpu_to_node(cpu);
 
     BUG_ON( cpu != snext->vcpu->processor );
     online = cpupool_scheduler_cpumask(per_cpu(cpupool, cpu));
@@ -1312,42 +1456,68 @@ csched_load_balance(struct csched_privat
         SCHED_STAT_CRANK(load_balance_other);
 
     /*
-     * Peek at non-idling CPUs in the system, starting with our
-     * immediate neighbour.
+     * Let's look around for work to steal, taking both vcpu-affinity
+     * and node-affinity into account. More specifically, we check all
+     * the non-idle CPUs' runq, looking for:
+     *  1. any node-affine work to steal first,
+     *  2. if not finding anything, any vcpu-affine work to steal.
      */
-    cpumask_andnot(&workers, online, prv->idlers);
-    cpumask_clear_cpu(cpu, &workers);
-    peer_cpu = cpu;
+    for_each_csched_balance_step( bstep )
+    {
+        /*
+         * We peek at the non-idling CPUs in a node-wise fashion. In fact,
+         * it is more likely that we find some node-affine work on our same
+         * node, not to mention that migrating vcpus within the same node
+         * could well be expected to be cheaper than going across nodes
+         * (memory
+         * stays local, there might be some node-wide cache[s], etc.).
+         */
+        peer_node = node;
+        do
+        {
+            /* Find out which CPUs in this node are not idle */
+            cpumask_andnot(&workers, online, prv->idlers);
+            cpumask_and(&workers, &workers, &node_to_cpumask(peer_node));
+            cpumask_clear_cpu(cpu, &workers);
 
-    while ( !cpumask_empty(&workers) )
-    {
-        peer_cpu = cpumask_cycle(peer_cpu, &workers);
-        cpumask_clear_cpu(peer_cpu, &workers);
+            if ( cpumask_empty(&workers) )
+                goto next_node;
 
-        /*
-         * Get ahold of the scheduler lock for this peer CPU.
-         *
-         * Note: We don't spin on this lock but simply try it. Spinning could
-         * cause a deadlock if the peer CPU is also load balancing and trying
-         * to lock this CPU.
-         */
-        if ( !pcpu_schedule_trylock(peer_cpu) )
-        {
-            SCHED_STAT_CRANK(steal_trylock_failed);
-            continue;
-        }
+            peer_cpu = cpumask_first(&workers);
+            do
+            {
+                /*
+                 * Get ahold of the scheduler lock for this peer CPU.
+                 *
+                 * Note: We don't spin on this lock but simply try it. Spinning
+                 * could cause a deadlock if the peer CPU is also load
+                 * balancing and trying to lock this CPU.
+                 */
+                if ( !pcpu_schedule_trylock(peer_cpu) )
+                {
+                    SCHED_STAT_CRANK(steal_trylock_failed);
+                    peer_cpu = cpumask_cycle(peer_cpu, &workers);
+                    continue;
+                }
 
-        /*
-         * Any work over there to steal?
-         */
-        speer = cpumask_test_cpu(peer_cpu, online) ?
-            csched_runq_steal(peer_cpu, cpu, snext->pri) : NULL;
-        pcpu_schedule_unlock(peer_cpu);
-        if ( speer != NULL )
-        {
-            *stolen = 1;
-            return speer;
-        }
+                /* Any work over there to steal? */
+                speer = cpumask_test_cpu(peer_cpu, online) ?
+                    csched_runq_steal(peer_cpu, cpu, snext->pri, bstep) : NULL;
+                pcpu_schedule_unlock(peer_cpu);
+
+                /* As soon as one vcpu is found, balancing ends */
+                if ( speer != NULL )
+                {
+                    *stolen = 1;
+                    return speer;
+                }
+
+                peer_cpu = cpumask_cycle(peer_cpu, &workers);
+
+            } while( peer_cpu != cpumask_first(&workers) );
+
+ next_node:
+            peer_node = cycle_node(peer_node, node_online_map);
+        } while( peer_node != node );
     }
 
  out:
diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
--- a/xen/include/xen/nodemask.h
+++ b/xen/include/xen/nodemask.h
@@ -41,6 +41,8 @@
  * int last_node(mask)			Number highest set bit, or MAX_NUMNODES
  * int first_unset_node(mask)		First node not set in mask, or 
  *					MAX_NUMNODES.
+ * int cycle_node(node, mask)		Next node cycling from 'node', or
+ *					MAX_NUMNODES
  *
  * nodemask_t nodemask_of_node(node)	Return nodemask with bit 'node' set
  * NODE_MASK_ALL			Initializer - all bits set
@@ -254,6 +256,16 @@ static inline int __first_unset_node(con
 			find_first_zero_bit(maskp->bits, MAX_NUMNODES));
 }
 
+#define cycle_node(n, src) __cycle_node((n), &(src), MAX_NUMNODES)
+static inline int __cycle_node(int n, const nodemask_t *maskp, int nbits)
+{
+    int nxt = __next_node(n, maskp, nbits);
+
+    if (nxt == nbits)
+        nxt = __first_node(maskp, nbits);
+    return nxt;
+}
+
 #define NODE_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(MAX_NUMNODES)
 
 #if MAX_NUMNODES <= BITS_PER_LONG

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlOzz-0007UH-SO; Wed, 19 Dec 2012 19:08:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlOzy-0007Tn-D6
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:10 +0000
Received: from [85.158.143.99:22851] by server-3.bemta-4.messagelabs.com id
	69/0F-18211-99012D05; Wed, 19 Dec 2012 19:08:09 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355944088!22709502!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9149 invoked from network); 19 Dec 2012 19:08:09 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:09 -0000
Received: by mail-we0-f174.google.com with SMTP id x10so1126258wey.19
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=lcuP49kyFOo5tVtXsEGffh6V5AE074+ZfQNo8f9wADQ=;
	b=Z5AD+foZNRHSyfUeokjFBGHW1c16CSVBEZ/JA1oxTCd3oIXRNbbx5GZgnYfPVmRssV
	G42ZS/yh9SxAhXYu/VGVYdCYQOSJMJijytGPQfOqFVrfyPyk/iTRVJsRkKTmXGdswmrz
	L7k2cRIAN86vvqzvk4Iy5fRrr8VJivf7NYkEEGFQHptFv15YneaHY5SaImvp+Ij0UX5m
	4KxoZ8IVCw90cgpyC4DvbTjIQsNgTNnEG90YvAPCgbTMPCxsqhkYG3WrbjSSu4P8ZTOM
	5H1z0yR8RdI6S769rJ65FqrgQz7O7xheevQzNFRtyZgLsT3aZzG7F0ZAWnXNEfUVnh8Z
	jhxA==
X-Received: by 10.194.23.37 with SMTP id j5mr13433585wjf.28.1355944088819;
	Wed, 19 Dec 2012 11:08:08 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.06
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:08 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 61299b4cdc2abbdf9bfb48e4f84e0ba6b16dfd0e
Message-Id: <61299b4cdc2abbdf9bfb.1355944041@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:21 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 05 of 10 v2] libxc: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

By providing the proper get/set interfaces and wiring them
up to the new domctls introduced by the previous commit.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -110,6 +110,83 @@ int xc_domain_shutdown(xc_interface *xch
 }
 
 
+int xc_domain_node_setaffinity(xc_interface *xch,
+                               uint32_t domid,
+                               xc_nodemap_t nodemap)
+{
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+    int ret = -1;
+    int nodesize;
+
+    nodesize = xc_get_nodemap_size(xch);
+    if (!nodesize)
+    {
+        PERROR("Could not get number of nodes");
+        goto out;
+    }
+
+    local = xc_hypercall_buffer_alloc(xch, local, nodesize);
+    if ( local == NULL )
+    {
+        PERROR("Could not allocate memory for setnodeaffinity domctl hypercall");
+        goto out;
+    }
+
+    domctl.cmd = XEN_DOMCTL_setnodeaffinity;
+    domctl.domain = (domid_t)domid;
+
+    memcpy(local, nodemap, nodesize);
+    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
+    domctl.u.nodeaffinity.nodemap.nr_elems = nodesize * 8;
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_buffer_free(xch, local);
+
+ out:
+    return ret;
+}
+
+int xc_domain_node_getaffinity(xc_interface *xch,
+                               uint32_t domid,
+                               xc_nodemap_t nodemap)
+{
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+    int ret = -1;
+    int nodesize;
+
+    nodesize = xc_get_nodemap_size(xch);
+    if (!nodesize)
+    {
+        PERROR("Could not get number of nodes");
+        goto out;
+    }
+
+    local = xc_hypercall_buffer_alloc(xch, local, nodesize);
+    if ( local == NULL )
+    {
+        PERROR("Could not allocate memory for getnodeaffinity domctl hypercall");
+        goto out;
+    }
+
+    domctl.cmd = XEN_DOMCTL_getnodeaffinity;
+    domctl.domain = (domid_t)domid;
+
+    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
+    domctl.u.nodeaffinity.nodemap.nr_elems = nodesize * 8;
+
+    ret = do_domctl(xch, &domctl);
+
+    memcpy(nodemap, local, nodesize);
+
+    xc_hypercall_buffer_free(xch, local);
+
+ out:
+    return ret;
+}
+
 int xc_vcpu_setaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -521,6 +521,32 @@ int xc_watchdog(xc_interface *xch,
 		uint32_t id,
 		uint32_t timeout);
 
+/**
+ * This function explicitly sets the host NUMA nodes the domain will
+ * have affinity with.
+ *
+ * @parm xch a handle to an open hypervisor interface.
+ * @parm domid the domain id one wants to set the affinity of.
+ * @parm nodemap the map of the affine nodes.
+ * @return 0 on success, -1 on failure.
+ */
+int xc_domain_node_setaffinity(xc_interface *xch,
+                               uint32_t domid,
+                               xc_nodemap_t nodemap);
+
+/**
+ * This function retrieves the host NUMA nodes the domain has
+ * affinity with.
+ *
+ * @parm xch a handle to an open hypervisor interface.
+ * @parm domid the domain id one wants to get the node affinity of.
+ * @parm nodemap the map of the affine nodes.
+ * @return 0 on success, -1 on failure.
+ */
+int xc_domain_node_getaffinity(xc_interface *xch,
+                               uint32_t domid,
+                               xc_nodemap_t nodemap);
+
 int xc_vcpu_setaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlP02-0007VR-Hr; Wed, 19 Dec 2012 19:08:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlP00-0007Tn-Rq
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:13 +0000
Received: from [85.158.143.99:22921] by server-3.bemta-4.messagelabs.com id
	5F/0F-18211-C9012D05; Wed, 19 Dec 2012 19:08:12 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1355944090!25026337!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18049 invoked from network); 19 Dec 2012 19:08:10 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:10 -0000
Received: by mail-we0-f178.google.com with SMTP id x43so1211175wey.9
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=NkBEWKJK1roB51U1qbz1k1TSwoifb9WXw9vw9BMIy7I=;
	b=JAdlItPj1cqlLkm9BMMg9oyEMx8fl0Gy+yD4bED6XvCzIIawtjWJsk4RifIs0y+4np
	9hq1hHPxv+y/KphxHwou5apuEeqj1aY83OROO1EdVcyoBmXrJ6tBuHpDP+/fUpEZQ8h8
	zGf5KEU0kAFzfOxdKu4iZn/y8xoPgvHYNpAN3n39rjFrw/vGJ0Rs9E+jQRUSmFXHKDdj
	mim43/vmeq5xP7LwJPCSRF7Mi9QBsNUMYvP9Op1sj71lLj74Nr/R9Iy9/TqTw6iNrtem
	TNOz4zVteNJtaiDBKIUcN8mVl7XOYdag4zeOo+XeERnJzWtcIYSC8C3Oi1sLBhMYMGCj
	RpPA==
X-Received: by 10.194.58.175 with SMTP id s15mr13442931wjq.31.1355944090680;
	Wed, 19 Dec 2012 11:08:10 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.08
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:10 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 3c196445edb57baadf4f33a8f99e3d35765fc836
Message-Id: <3c196445edb57baadf4f.1355944042@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:22 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 06 of 10 v2] libxl: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

By introducing a nodemap in libxl_domain_build_info and
providing the get/set methods to deal with it.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4142,6 +4142,26 @@ int libxl_set_vcpuaffinity_all(libxl_ctx
     return rc;
 }
 
+int libxl_domain_set_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
+                                  libxl_bitmap *nodemap)
+{
+    if (xc_domain_node_setaffinity(ctx->xch, domid, nodemap->map)) {
+        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "setting node affinity");
+        return ERROR_FAIL;
+    }
+    return 0;
+}
+
+int libxl_domain_get_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
+                                  libxl_bitmap *nodemap)
+{
+    if (xc_domain_node_getaffinity(ctx->xch, domid, nodemap->map)) {
+        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "getting node affinity");
+        return ERROR_FAIL;
+    }
+    return 0;
+}
+
 int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
 {
     GC_INIT(ctx);
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -861,6 +861,10 @@ int libxl_set_vcpuaffinity(libxl_ctx *ct
                            libxl_bitmap *cpumap);
 int libxl_set_vcpuaffinity_all(libxl_ctx *ctx, uint32_t domid,
                                unsigned int max_vcpus, libxl_bitmap *cpumap);
+int libxl_domain_set_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
+                                  libxl_bitmap *nodemap);
+int libxl_domain_get_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
+                                  libxl_bitmap *nodemap);
 int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap);
 
 libxl_scheduler libxl_get_scheduler(libxl_ctx *ctx);
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -184,6 +184,12 @@ int libxl__domain_build_info_setdefault(
 
     libxl_defbool_setdefault(&b_info->numa_placement, true);
 
+    if (!b_info->nodemap.size) {
+        if (libxl_node_bitmap_alloc(CTX, &b_info->nodemap, 0))
+            return ERROR_FAIL;
+        libxl_bitmap_set_any(&b_info->nodemap);
+    }
+
     if (b_info->max_memkb == LIBXL_MEMKB_DEFAULT)
         b_info->max_memkb = 32 * 1024;
     if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -230,6 +230,7 @@ int libxl__build_pre(libxl__gc *gc, uint
         if (rc)
             return rc;
     }
+    libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
 
     xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -261,6 +261,7 @@ libxl_domain_build_info = Struct("domain
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
     ("cpumap",          libxl_bitmap),
+    ("nodemap",         libxl_bitmap),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -230,6 +230,7 @@ int libxl__build_pre(libxl__gc *gc, uint
         if (rc)
             return rc;
     }
+    libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
 
     xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -261,6 +261,7 @@ libxl_domain_build_info = Struct("domain
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
     ("cpumap",          libxl_bitmap),
+    ("nodemap",         libxl_bitmap),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlP06-0007Xe-Bc; Wed, 19 Dec 2012 19:08:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlP04-0007WD-HK
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:16 +0000
Received: from [85.158.143.35:51339] by server-1.bemta-4.messagelabs.com id
	3E/C6-28401-F9012D05; Wed, 19 Dec 2012 19:08:15 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1355944094!16189559!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19626 invoked from network); 19 Dec 2012 19:08:14 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:14 -0000
Received: by mail-we0-f178.google.com with SMTP id x43so1211215wey.9
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=Jc8EfkcCUBy/W46uN7xOrrbMvSO6t2j7hlvbvDLbxY4=;
	b=EnhnbT7evZkHEocxSfLkTcNaY8QTyicLTbTnmDXJeEElpFl+tyeYqQIs6deEyqGFBQ
	5U5Xq8xP3YF261/BmWe+rVS0W8gihy9xmHvFc2AjtM4Pd/Lhfh2r23bZsDwGmTtLh1bG
	bBtLUETKP2OF05vN6o/zz8La1LmamrzLmoDrgD5hzBLfAaefo4+NtZelqw7QfvGHv2Y+
	VyvJlMjvgrzjn/DMnRBvpZfG/pqDQroYlkzBnNW3bPYVQtSp0dTH4WxtmB46xj1IIuUy
	ZFELQFpZBpsMWFKJiwTw6jXx/UjCc2t1LOUty8z7roqCu/EXFCt0SzUJwZb+F2GaVgMe
	sTng==
X-Received: by 10.180.103.136 with SMTP id fw8mr5799918wib.27.1355944094296;
	Wed, 19 Dec 2012 11:08:14 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.12
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:13 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: ff98e6bcc0dd18f6b97a276b8c0101eec7dd4581
Message-Id: <ff98e6bcc0dd18f6b97a.1355944044@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:24 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 08 of 10 v2] libxl: automatic placement deals
 with node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This basically means the following two things:
 1) during domain creation, it is the node-affinity of
    the domain --rather than the vcpu-affinities of its
    VCPUs-- that is affected by automatic placement;
 2) during automatic placement, when counting how many
    VCPUs are already "bound" to a placement candidate
    (as part of the process of choosing the best
    candidate), both vcpu-affinity and node-affinity
    are considered.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
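
The counting rule in point 2) can be sketched as a self-contained miniature of the nr_vcpus_on_nodes() logic changed below. This is a hypothetical illustration, not libxl code: bitmaps are plain uint32_t masks, and cpu_to_node[] plays the role of tinfo[k].node.

```c
#include <assert.h>
#include <stdint.h>

/* A vcpu is counted on a node only if (1) its vcpu-affinity includes
 * a pcpu on that node, (2) that pcpu is in the suitable cpumap, and
 * (3) the node is in the domain's node-affinity. seen_nodes ensures
 * each vcpu is counted at most once per node. Assumes <= 32 cpus and
 * <= 32 nodes for this sketch. */
static void count_vcpus_on_nodes(const uint32_t *vcpu_cpumaps, int nr_vcpus,
                                 uint32_t dom_nodemap, uint32_t suitable_cpumap,
                                 const int *cpu_to_node, int nr_cpus,
                                 int vcpus_on_node[])
{
    for (int j = 0; j < nr_vcpus; j++) {
        uint32_t seen_nodes = 0;
        for (int k = 0; k < nr_cpus; k++) {
            int node = cpu_to_node[k];
            if (!(vcpu_cpumaps[j] & (1u << k)))
                continue;  /* pcpu k not in this vcpu's vcpu-affinity */
            if ((suitable_cpumap & (1u << k)) &&
                (dom_nodemap & (1u << node)) &&
                !(seen_nodes & (1u << node))) {
                seen_nodes |= 1u << node;
                vcpus_on_node[node]++;
            }
        }
    }
}
```

With node-affinity restricted to node 0, vcpus whose cpumap spans both nodes still only contribute to node 0's count, which is the new behaviour the patch introduces.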

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -133,13 +133,13 @@ static int numa_place_domain(libxl__gc *
 {
     int found;
     libxl__numa_candidate candidate;
-    libxl_bitmap candidate_nodemap;
+    libxl_bitmap cpupool_nodemap;
     libxl_cpupoolinfo cpupool_info;
     int i, cpupool, rc = 0;
     uint32_t memkb;
 
     libxl__numa_candidate_init(&candidate);
-    libxl_bitmap_init(&candidate_nodemap);
+    libxl_bitmap_init(&cpupool_nodemap);
 
     /*
      * Extract the cpumap from the cpupool the domain belong to. In fact,
@@ -156,7 +156,7 @@ static int numa_place_domain(libxl__gc *
     rc = libxl_domain_need_memory(CTX, info, &memkb);
     if (rc)
         goto out;
-    if (libxl_node_bitmap_alloc(CTX, &candidate_nodemap, 0)) {
+    if (libxl_node_bitmap_alloc(CTX, &cpupool_nodemap, 0)) {
         rc = ERROR_FAIL;
         goto out;
     }
@@ -174,17 +174,19 @@ static int numa_place_domain(libxl__gc *
     if (found == 0)
         goto out;
 
-    /* Map the candidate's node map to the domain's info->cpumap */
-    libxl__numa_candidate_get_nodemap(gc, &candidate, &candidate_nodemap);
-    rc = libxl_nodemap_to_cpumap(CTX, &candidate_nodemap, &info->cpumap);
+    /* Map the candidate's node map to the domain's info->nodemap */
+    libxl__numa_candidate_get_nodemap(gc, &candidate, &info->nodemap);
+
+    /* Avoid trying to set the affinity to nodes that might be in the
+     * candidate's nodemap but out of our cpupool. */
+    rc = libxl_cpumap_to_nodemap(CTX, &cpupool_info.cpumap,
+                                 &cpupool_nodemap);
     if (rc)
         goto out;
 
-    /* Avoid trying to set the affinity to cpus that might be in the
-     * nodemap but not in our cpupool. */
-    libxl_for_each_set_bit(i, info->cpumap) {
-        if (!libxl_bitmap_test(&cpupool_info.cpumap, i))
-            libxl_bitmap_reset(&info->cpumap, i);
+    libxl_for_each_set_bit(i, info->nodemap) {
+        if (!libxl_bitmap_test(&cpupool_nodemap, i))
+            libxl_bitmap_reset(&info->nodemap, i);
     }
 
     LOG(DETAIL, "NUMA placement candidate with %d nodes, %d cpus and "
@@ -193,7 +195,7 @@ static int numa_place_domain(libxl__gc *
 
  out:
     libxl__numa_candidate_dispose(&candidate);
-    libxl_bitmap_dispose(&candidate_nodemap);
+    libxl_bitmap_dispose(&cpupool_nodemap);
     libxl_cpupoolinfo_dispose(&cpupool_info);
     return rc;
 }
@@ -211,10 +213,10 @@ int libxl__build_pre(libxl__gc *gc, uint
     /*
      * Check if the domain has any CPU affinity. If not, try to build
      * up one. In case numa_place_domain() find at least a suitable
-     * candidate, it will affect info->cpumap accordingly; if it
+     * candidate, it will affect info->nodemap accordingly; if it
      * does not, it just leaves it as it is. This means (unless
      * some weird error manifests) the subsequent call to
-     * libxl_set_vcpuaffinity_all() will do the actual placement,
+     * libxl_domain_set_nodeaffinity() will do the actual placement,
      * whatever that turns out to be.
      */
     if (libxl_defbool_val(info->numa_placement)) {
diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -184,7 +184,7 @@ static int nr_vcpus_on_nodes(libxl__gc *
                              int vcpus_on_node[])
 {
     libxl_dominfo *dinfo = NULL;
-    libxl_bitmap vcpu_nodemap;
+    libxl_bitmap dom_nodemap, vcpu_nodemap;
     int nr_doms, nr_cpus;
     int i, j, k;
 
@@ -197,6 +197,12 @@ static int nr_vcpus_on_nodes(libxl__gc *
         return ERROR_FAIL;
     }
 
+    if (libxl_node_bitmap_alloc(CTX, &dom_nodemap, 0) < 0) {
+        libxl_bitmap_dispose(&vcpu_nodemap);
+        libxl_dominfo_list_free(dinfo, nr_doms);
+        return ERROR_FAIL;
+    }
+
     for (i = 0; i < nr_doms; i++) {
         libxl_vcpuinfo *vinfo;
         int nr_dom_vcpus;
@@ -205,14 +211,21 @@ static int nr_vcpus_on_nodes(libxl__gc *
         if (vinfo == NULL)
             continue;
 
+        /* Retrieve the domain's node-affinity map */
+        libxl_domain_get_nodeaffinity(CTX, dinfo[i].domid, &dom_nodemap);
+
         for (j = 0; j < nr_dom_vcpus; j++) {
-            /* For each vcpu of each domain, increment the elements of
-             * the array corresponding to the nodes where the vcpu runs */
+            /*
+             * For each vcpu of each domain, it must have both vcpu-affinity
+             * and node-affinity to (a pcpu belonging to) a certain node to
+             * cause an increment in the corresponding element of the array.
+             */
             libxl_bitmap_set_none(&vcpu_nodemap);
             libxl_for_each_set_bit(k, vinfo[j].cpumap) {
                 int node = tinfo[k].node;
 
                 if (libxl_bitmap_test(suitable_cpumap, k) &&
+                    libxl_bitmap_test(&dom_nodemap, node) &&
                     !libxl_bitmap_test(&vcpu_nodemap, node)) {
                     libxl_bitmap_set(&vcpu_nodemap, node);
                     vcpus_on_node[node]++;
@@ -223,6 +236,7 @@ static int nr_vcpus_on_nodes(libxl__gc *
         libxl_vcpuinfo_list_free(vinfo, nr_dom_vcpus);
     }
 
+    libxl_bitmap_dispose(&dom_nodemap);
     libxl_bitmap_dispose(&vcpu_nodemap);
     libxl_dominfo_list_free(dinfo, nr_doms);
     return 0;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlP03-0007W4-Ub; Wed, 19 Dec 2012 19:08:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlP02-0007VA-6U
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:14 +0000
Received: from [85.158.139.83:14010] by server-5.bemta-5.messagelabs.com id
	FE/2C-22648-D9012D05; Wed, 19 Dec 2012 19:08:13 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355944092!27884562!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6174 invoked from network); 19 Dec 2012 19:08:12 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:12 -0000
Received: by mail-wg0-f52.google.com with SMTP id 12so1098364wgh.31
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=mqV9MMawgJpFURAcE+zoPLoBNwOnTmEx3PUidYGyunk=;
	b=xGCXkctw2NUJm9w3oL22dfWVBMG0dtYpZJYIUrQ1ZYPtzvsxTQ9ZoKGJ1ECTKf6hW8
	1ryMYcJpO5HytzBVt2+YkVWq/XyLyOfPGbAP2msrSda2/0Qcv/y+6fnw0LcKY1wbdvDY
	ZmF6ZD1nUynSsAWj78rPzwA4ed8X1Epd1tuBdtaRNQ3n1tpwpQMNd08FTZnhVjPXQZPa
	FlNmlTnYZ1SBow6yFFJ4NIbDVvn7aA4iOvMqNp19jsYmBYmM+UWVxtQnfgZuaDCFthgp
	8b49SfYiyRERQqJrdl+f8N9MNADo+7SYp2cv6SF8lO+jjtzeNmv1vhyV0wGCR75uU1ca
	RUtQ==
X-Received: by 10.194.85.234 with SMTP id k10mr13325864wjz.53.1355944092442;
	Wed, 19 Dec 2012 11:08:12 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.10
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:11 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 5dc2571ae5faef87977cbe56f6b6f9b787e60fd3
Message-Id: <5dc2571ae5faef87977c.1355944043@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:23 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 07 of 10 v2] libxl: optimize the calculation of
 how many VCPUs can run on a candidate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For choosing the best NUMA placement candidate, we need to figure out
how many VCPUs are runnable on each candidate. That requires going
through all the VCPUs of all the domains and checking their affinities.

With this change, instead of doing the above for each candidate, we do
it once and for all, populating an array while counting. This way, when
we later evaluate candidates, all we need to do is sum up the right
elements of that array.

This reduces the complexity of the overall algorithm, as it moves a
potentially expensive operation (for_each_vcpu_of_each_domain {})
out of the core placement loop, so that it is performed only once
instead of (potentially) tens or hundreds of times.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
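
The pay-off of precomputing the array can be sketched with a self-contained miniature of the reworked nodemap_to_nr_vcpus(): once vcpus_on_node[] is filled in, evaluating a candidate is just a sum over its set bits instead of a fresh walk of every vcpu of every domain. This is a hypothetical illustration, not libxl code; the nodemap is a plain uint32_t mask here.

```c
#include <assert.h>
#include <stdint.h>

/* Per-candidate cost after the patch: O(nr_nodes), using the
 * vcpus_on_node[] array populated once up front. Assumes <= 32
 * nodes for this sketch. */
static int nodemap_to_nr_vcpus(const int vcpus_on_node[], int nr_nodes,
                               uint32_t nodemap)
{
    int nr_vcpus = 0;
    for (int i = 0; i < nr_nodes; i++)
        if (nodemap & (1u << i))
            nr_vcpus += vcpus_on_node[i];
    return nr_vcpus;
}
```

The expensive for_each_vcpu_of_each_domain walk happens once, outside the candidate-generation loop; each of the (potentially hundreds of) candidates then costs only this cheap summation.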

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -165,15 +165,27 @@ static uint32_t nodemap_to_free_memkb(li
     return free_memkb;
 }
 
-/* Retrieve the number of vcpus able to run on the cpus of the nodes
- * that are part of the nodemap. */
-static int nodemap_to_nr_vcpus(libxl__gc *gc, libxl_cputopology *tinfo,
+/* Retrieve the number of vcpus able to run on the nodes in nodemap */
+static int nodemap_to_nr_vcpus(libxl__gc *gc, int vcpus_on_node[],
                                const libxl_bitmap *nodemap)
 {
+    int i, nr_vcpus = 0;
+
+    libxl_for_each_set_bit(i, *nodemap)
+        nr_vcpus += vcpus_on_node[i];
+
+    return nr_vcpus;
+}
+
+/* Number of vcpus able to run on the cpus of the various nodes
+ * (reported by filling the array vcpus_on_node[]). */
+static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
+                             const libxl_bitmap *suitable_cpumap,
+                             int vcpus_on_node[])
+{
     libxl_dominfo *dinfo = NULL;
     libxl_bitmap vcpu_nodemap;
     int nr_doms, nr_cpus;
-    int nr_vcpus = 0;
     int i, j, k;
 
     dinfo = libxl_list_domain(CTX, &nr_doms);
@@ -193,19 +205,17 @@ static int nodemap_to_nr_vcpus(libxl__gc
         if (vinfo == NULL)
             continue;
 
-        /* For each vcpu of each domain ... */
         for (j = 0; j < nr_dom_vcpus; j++) {
+            /* For each vcpu of each domain, increment the elements of
+             * the array corresponding to the nodes where the vcpu runs */
+            libxl_bitmap_set_none(&vcpu_nodemap);
+            libxl_for_each_set_bit(k, vinfo[j].cpumap) {
+                int node = tinfo[k].node;
 
-            /* Build up a map telling on which nodes the vcpu is runnable on */
-            libxl_bitmap_set_none(&vcpu_nodemap);
-            libxl_for_each_set_bit(k, vinfo[j].cpumap)
-                libxl_bitmap_set(&vcpu_nodemap, tinfo[k].node);
-
-            /* And check if that map has any intersection with our nodemap */
-            libxl_for_each_set_bit(k, vcpu_nodemap) {
-                if (libxl_bitmap_test(nodemap, k)) {
-                    nr_vcpus++;
-                    break;
+                if (libxl_bitmap_test(suitable_cpumap, k) &&
+                    !libxl_bitmap_test(&vcpu_nodemap, node)) {
+                    libxl_bitmap_set(&vcpu_nodemap, node);
+                    vcpus_on_node[node]++;
                 }
             }
         }
@@ -215,7 +225,7 @@ static int nodemap_to_nr_vcpus(libxl__gc
 
     libxl_bitmap_dispose(&vcpu_nodemap);
     libxl_dominfo_list_free(dinfo, nr_doms);
-    return nr_vcpus;
+    return 0;
 }
 
 /*
@@ -270,7 +280,7 @@ int libxl__get_numa_candidate(libxl__gc 
     libxl_numainfo *ninfo = NULL;
     int nr_nodes = 0, nr_suit_nodes, nr_cpus = 0;
     libxl_bitmap suitable_nodemap, nodemap;
-    int rc = 0;
+    int *vcpus_on_node, rc = 0;
 
     libxl_bitmap_init(&nodemap);
     libxl_bitmap_init(&suitable_nodemap);
@@ -281,6 +291,8 @@ int libxl__get_numa_candidate(libxl__gc 
     if (ninfo == NULL)
         return ERROR_FAIL;
 
+    GCNEW_ARRAY(vcpus_on_node, nr_nodes);
+
     /*
      * The good thing about this solution is that it is based on heuristics
      * (implemented in numa_cmpf() ), but we at least can evaluate it on
@@ -330,6 +342,19 @@ int libxl__get_numa_candidate(libxl__gc 
         goto out;
 
     /*
+     * Later on, we will try to figure out how many vcpus are runnable on
+     * each candidate (as a part of choosing the best one of them). That
+     * requires going through all the vcpus of all the domains and check
+     * their affinities. So, instead of doing that for each candidate,
+     * let's count here the number of vcpus runnable on each node, so that
+     * all we have to do later is summing up the right elements of the
+     * vcpus_on_node array.
+     */
+    rc = nr_vcpus_on_nodes(gc, tinfo, suitable_cpumap, vcpus_on_node);
+    if (rc)
+        goto out;
+
+    /*
      * If the minimum number of NUMA nodes is not explicitly specified
      * (i.e., min_nodes == 0), we try to figure out a sensible number of nodes
      * from where to start generating candidates, if possible (or just start
@@ -414,7 +439,8 @@ int libxl__get_numa_candidate(libxl__gc 
              * current best one (if any).
              */
             libxl__numa_candidate_put_nodemap(gc, &new_cndt, &nodemap);
-            new_cndt.nr_vcpus = nodemap_to_nr_vcpus(gc, tinfo, &nodemap);
+            new_cndt.nr_vcpus = nodemap_to_nr_vcpus(gc, vcpus_on_node,
+                                                    &nodemap);
             new_cndt.free_memkb = nodes_free_memkb;
             new_cndt.nr_nodes = libxl_bitmap_count_set(&nodemap);
             new_cndt.nr_cpus = nodes_cpus;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlP03-0007W4-Ub; Wed, 19 Dec 2012 19:08:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlP02-0007VA-6U
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:14 +0000
Received: from [85.158.139.83:14010] by server-5.bemta-5.messagelabs.com id
	FE/2C-22648-D9012D05; Wed, 19 Dec 2012 19:08:13 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1355944092!27884562!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6174 invoked from network); 19 Dec 2012 19:08:12 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:12 -0000
Received: by mail-wg0-f52.google.com with SMTP id 12so1098364wgh.31
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=mqV9MMawgJpFURAcE+zoPLoBNwOnTmEx3PUidYGyunk=;
	b=xGCXkctw2NUJm9w3oL22dfWVBMG0dtYpZJYIUrQ1ZYPtzvsxTQ9ZoKGJ1ECTKf6hW8
	1ryMYcJpO5HytzBVt2+YkVWq/XyLyOfPGbAP2msrSda2/0Qcv/y+6fnw0LcKY1wbdvDY
	ZmF6ZD1nUynSsAWj78rPzwA4ed8X1Epd1tuBdtaRNQ3n1tpwpQMNd08FTZnhVjPXQZPa
	FlNmlTnYZ1SBow6yFFJ4NIbDVvn7aA4iOvMqNp19jsYmBYmM+UWVxtQnfgZuaDCFthgp
	8b49SfYiyRERQqJrdl+f8N9MNADo+7SYp2cv6SF8lO+jjtzeNmv1vhyV0wGCR75uU1ca
	RUtQ==
X-Received: by 10.194.85.234 with SMTP id k10mr13325864wjz.53.1355944092442;
	Wed, 19 Dec 2012 11:08:12 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.10
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:11 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 5dc2571ae5faef87977cbe56f6b6f9b787e60fd3
Message-Id: <5dc2571ae5faef87977c.1355944043@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:23 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 07 of 10 v2] libxl: optimize the calculation of
 how many VCPUs can run on a candidate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To choose the best NUMA placement candidate, we need to figure out
how many VCPUs are runnable on each of them. That requires going through
all the VCPUs of all the domains and checking their affinities.

With this change, instead of doing the above for each candidate, we
do it once and for all, populating an array while counting. This way,
when we later evaluate candidates, all we need to do is sum up the
right elements of that array.

This reduces the complexity of the overall algorithm, as it moves a
potentially expensive operation (for_each_vcpu_of_each_domain {})
out of the core placement loop, so that it is performed only
once instead of (potentially) tens or hundreds of times.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -165,15 +165,27 @@ static uint32_t nodemap_to_free_memkb(li
     return free_memkb;
 }
 
-/* Retrieve the number of vcpus able to run on the cpus of the nodes
- * that are part of the nodemap. */
-static int nodemap_to_nr_vcpus(libxl__gc *gc, libxl_cputopology *tinfo,
+/* Retrieve the number of vcpus able to run on the nodes in nodemap */
+static int nodemap_to_nr_vcpus(libxl__gc *gc, int vcpus_on_node[],
                                const libxl_bitmap *nodemap)
 {
+    int i, nr_vcpus = 0;
+
+    libxl_for_each_set_bit(i, *nodemap)
+        nr_vcpus += vcpus_on_node[i];
+
+    return nr_vcpus;
+}
+
+/* Number of vcpus able to run on the cpus of the various nodes
+ * (reported by filling the array vcpus_on_node[]). */
+static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
+                             const libxl_bitmap *suitable_cpumap,
+                             int vcpus_on_node[])
+{
     libxl_dominfo *dinfo = NULL;
     libxl_bitmap vcpu_nodemap;
     int nr_doms, nr_cpus;
-    int nr_vcpus = 0;
     int i, j, k;
 
     dinfo = libxl_list_domain(CTX, &nr_doms);
@@ -193,19 +205,17 @@ static int nodemap_to_nr_vcpus(libxl__gc
         if (vinfo == NULL)
             continue;
 
-        /* For each vcpu of each domain ... */
         for (j = 0; j < nr_dom_vcpus; j++) {
+            /* For each vcpu of each domain, increment the elements of
+             * the array corresponding to the nodes where the vcpu runs */
+            libxl_bitmap_set_none(&vcpu_nodemap);
+            libxl_for_each_set_bit(k, vinfo[j].cpumap) {
+                int node = tinfo[k].node;
 
-            /* Build up a map telling on which nodes the vcpu is runnable on */
-            libxl_bitmap_set_none(&vcpu_nodemap);
-            libxl_for_each_set_bit(k, vinfo[j].cpumap)
-                libxl_bitmap_set(&vcpu_nodemap, tinfo[k].node);
-
-            /* And check if that map has any intersection with our nodemap */
-            libxl_for_each_set_bit(k, vcpu_nodemap) {
-                if (libxl_bitmap_test(nodemap, k)) {
-                    nr_vcpus++;
-                    break;
+                if (libxl_bitmap_test(suitable_cpumap, k) &&
+                    !libxl_bitmap_test(&vcpu_nodemap, node)) {
+                    libxl_bitmap_set(&vcpu_nodemap, node);
+                    vcpus_on_node[node]++;
                 }
             }
         }
@@ -215,7 +225,7 @@ static int nodemap_to_nr_vcpus(libxl__gc
 
     libxl_bitmap_dispose(&vcpu_nodemap);
     libxl_dominfo_list_free(dinfo, nr_doms);
-    return nr_vcpus;
+    return 0;
 }
 
 /*
@@ -270,7 +280,7 @@ int libxl__get_numa_candidate(libxl__gc 
     libxl_numainfo *ninfo = NULL;
     int nr_nodes = 0, nr_suit_nodes, nr_cpus = 0;
     libxl_bitmap suitable_nodemap, nodemap;
-    int rc = 0;
+    int *vcpus_on_node, rc = 0;
 
     libxl_bitmap_init(&nodemap);
     libxl_bitmap_init(&suitable_nodemap);
@@ -281,6 +291,8 @@ int libxl__get_numa_candidate(libxl__gc 
     if (ninfo == NULL)
         return ERROR_FAIL;
 
+    GCNEW_ARRAY(vcpus_on_node, nr_nodes);
+
     /*
      * The good thing about this solution is that it is based on heuristics
      * (implemented in numa_cmpf() ), but we at least can evaluate it on
@@ -330,6 +342,19 @@ int libxl__get_numa_candidate(libxl__gc 
         goto out;
 
     /*
+     * Later on, we will try to figure out how many vcpus are runnable on
+     * each candidate (as a part of choosing the best one of them). That
+     * requires going through all the vcpus of all the domains and check
+     * their affinities. So, instead of doing that for each candidate,
+     * let's count here the number of vcpus runnable on each node, so that
+     * all we have to do later is summing up the right elements of the
+     * vcpus_on_node array.
+     */
+    rc = nr_vcpus_on_nodes(gc, tinfo, suitable_cpumap, vcpus_on_node);
+    if (rc)
+        goto out;
+
+    /*
      * If the minimum number of NUMA nodes is not explicitly specified
      * (i.e., min_nodes == 0), we try to figure out a sensible number of nodes
      * from where to start generating candidates, if possible (or just start
@@ -414,7 +439,8 @@ int libxl__get_numa_candidate(libxl__gc 
              * current best one (if any).
              */
             libxl__numa_candidate_put_nodemap(gc, &new_cndt, &nodemap);
-            new_cndt.nr_vcpus = nodemap_to_nr_vcpus(gc, tinfo, &nodemap);
+            new_cndt.nr_vcpus = nodemap_to_nr_vcpus(gc, vcpus_on_node,
+                                                    &nodemap);
             new_cndt.free_memkb = nodes_free_memkb;
             new_cndt.nr_nodes = libxl_bitmap_count_set(&nodemap);
             new_cndt.nr_cpus = nodes_cpus;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlP0D-0007e7-Qc; Wed, 19 Dec 2012 19:08:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlP0B-0007bK-IV
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:23 +0000
Received: from [85.158.139.83:14416] by server-12.bemta-5.messagelabs.com id
	DE/44-02275-6A012D05; Wed, 19 Dec 2012 19:08:22 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1355944096!19248980!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26998 invoked from network); 19 Dec 2012 19:08:16 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:16 -0000
Received: by mail-wg0-f49.google.com with SMTP id 15so1149080wgd.16
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=kQXcHczE5IYI/1zsA0QwDYns5WY+PTp1JsZi3qK0rBM=;
	b=iRMNCxWG0QEUKGWcpVWJz0jEsnzdNgzUv9zWeFkWA5Zo40RA1JJlFGmqpluoKLYQdk
	vn91H3C0sYIJQklRtxFrCLlLXcChlmaXouF+JysL9XQQVLbplM6KDa6sdrXMhBlaxL1L
	wpF5MQnHXh7Ce/ze/29WnennCmuWzJrqSX2t5JPfRRgdWLl9xzoY05EY0124bXuJvDSU
	0seGY+Z2k+vnAJuu3nJUp+fyt/LKyEMoONgor3zy8GbogXr2sInQeF3GRI3CoV2BIXVQ
	iZkM7YuV4qCRuZDTjXC8KqXPAEvAsDWlxqu3qnw3s/fmQXLOGq49U+8WX7nPivZBuQP3
	YQ+g==
X-Received: by 10.180.97.68 with SMTP id dy4mr12988959wib.7.1355944096320;
	Wed, 19 Dec 2012 11:08:16 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.14
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:15 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: c862dee08c124ce080e05ea326d1d707449a0d1f
Message-Id: <c862dee08c124ce080e0.1355944045@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:25 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 09 of 10 v2] xl: add node-affinity to the output
	of `xl list`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Node affinity is now something that is under (some) control of the
user, so show it upon request, via the `-n' option, as part of the
output of `xl list'.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
---
Changes from v1:
 * print_{cpu,node}map() functions added instead of 'state variable'-izing
   print_bitmap().

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -2961,14 +2961,95 @@ out:
     }
 }
 
-static void list_domains(int verbose, int context, const libxl_dominfo *info, int nb_domain)
+/* If map is not full, prints it and returns 0. Returns 1 otherwise. */
+static int print_bitmap(uint8_t *map, int maplen, FILE *stream)
+{
+    int i;
+    uint8_t pmap = 0, bitmask = 0;
+    int firstset = 0, state = 0;
+
+    for (i = 0; i < maplen; i++) {
+        if (i % 8 == 0) {
+            pmap = *map++;
+            bitmask = 1;
+        } else bitmask <<= 1;
+
+        switch (state) {
+        case 0:
+        case 2:
+            if ((pmap & bitmask) != 0) {
+                firstset = i;
+                state++;
+            }
+            continue;
+        case 1:
+        case 3:
+            if ((pmap & bitmask) == 0) {
+                fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
+                if (i - 1 > firstset)
+                    fprintf(stream, "-%d", i - 1);
+                state = 2;
+            }
+            continue;
+        }
+    }
+    switch (state) {
+        case 0:
+            fprintf(stream, "none");
+            break;
+        case 2:
+            break;
+        case 1:
+            if (firstset == 0)
+                return 1;
+        case 3:
+            fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
+            if (i - 1 > firstset)
+                fprintf(stream, "-%d", i - 1);
+            break;
+    }
+
+    return 0;
+}
+
+static void print_cpumap(uint8_t *map, int maplen, FILE *stream)
+{
+    if (print_bitmap(map, maplen, stream))
+        fprintf(stream, "any cpu");
+}
+
+static void print_nodemap(uint8_t *map, int maplen, FILE *stream)
+{
+    if (print_bitmap(map, maplen, stream))
+        fprintf(stream, "any node");
+}
+
+static void list_domains(int verbose, int context, int numa, const libxl_dominfo *info, int nb_domain)
 {
     int i;
     static const char shutdown_reason_letters[]= "-rscw";
+    libxl_bitmap nodemap;
+    libxl_physinfo physinfo;
+
+    libxl_bitmap_init(&nodemap);
+    libxl_physinfo_init(&physinfo);
 
     printf("Name                                        ID   Mem VCPUs\tState\tTime(s)");
     if (verbose) printf("   UUID                            Reason-Code\tSecurity Label");
     if (context && !verbose) printf("   Security Label");
+    if (numa) {
+        if (libxl_node_bitmap_alloc(ctx, &nodemap, 0)) {
+            fprintf(stderr, "libxl_node_bitmap_alloc_failed.\n");
+            exit(1);
+        }
+        if (libxl_get_physinfo(ctx, &physinfo) != 0) {
+            fprintf(stderr, "libxl_physinfo failed.\n");
+            libxl_bitmap_dispose(&nodemap);
+            exit(1);
+        }
+
+        printf(" NODE Affinity");
+    }
     printf("\n");
     for (i = 0; i < nb_domain; i++) {
         char *domname;
@@ -3002,14 +3083,23 @@ static void list_domains(int verbose, in
             rc = libxl_flask_sid_to_context(ctx, info[i].ssidref, &buf,
                                             &size);
             if (rc < 0)
-                printf("  -");
+                printf("                -");
             else {
-                printf("  %s", buf);
+                printf(" %16s", buf);
                 free(buf);
             }
         }
+        if (numa) {
+            libxl_domain_get_nodeaffinity(ctx, info[i].domid, &nodemap);
+
+            putchar(' ');
+            print_nodemap(nodemap.map, physinfo.nr_nodes, stdout);
+        }
         putchar('\n');
     }
+
+    libxl_bitmap_dispose(&nodemap);
+    libxl_physinfo_dispose(&physinfo);
 }
 
 static void list_vm(void)
@@ -3890,12 +3980,14 @@ int main_list(int argc, char **argv)
     int opt, verbose = 0;
     int context = 0;
     int details = 0;
+    int numa = 0;
     int option_index = 0;
     static struct option long_options[] = {
         {"long", 0, 0, 'l'},
         {"help", 0, 0, 'h'},
         {"verbose", 0, 0, 'v'},
         {"context", 0, 0, 'Z'},
+        {"numa", 0, 0, 'n'},
         {0, 0, 0, 0}
     };
 
@@ -3904,7 +3996,7 @@ int main_list(int argc, char **argv)
     int nb_domain, rc;
 
     while (1) {
-        opt = getopt_long(argc, argv, "lvhZ", long_options, &option_index);
+        opt = getopt_long(argc, argv, "lvhZn", long_options, &option_index);
         if (opt == -1)
             break;
 
@@ -3921,6 +4013,9 @@ int main_list(int argc, char **argv)
         case 'Z':
             context = 1;
             break;
+        case 'n':
+            numa = 1;
+            break;
         default:
             fprintf(stderr, "option `%c' not supported.\n", optopt);
             break;
@@ -3956,7 +4051,7 @@ int main_list(int argc, char **argv)
     if (details)
         list_domains_details(info, nb_domain);
     else
-        list_domains(verbose, context, info, nb_domain);
+        list_domains(verbose, context, numa, info, nb_domain);
 
     if (info_free)
         libxl_dominfo_list_free(info, nb_domain);
@@ -4228,56 +4323,6 @@ int main_button_press(int argc, char **a
     return 0;
 }
 
-static void print_bitmap(uint8_t *map, int maplen, FILE *stream)
-{
-    int i;
-    uint8_t pmap = 0, bitmask = 0;
-    int firstset = 0, state = 0;
-
-    for (i = 0; i < maplen; i++) {
-        if (i % 8 == 0) {
-            pmap = *map++;
-            bitmask = 1;
-        } else bitmask <<= 1;
-
-        switch (state) {
-        case 0:
-        case 2:
-            if ((pmap & bitmask) != 0) {
-                firstset = i;
-                state++;
-            }
-            continue;
-        case 1:
-        case 3:
-            if ((pmap & bitmask) == 0) {
-                fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
-                if (i - 1 > firstset)
-                    fprintf(stream, "-%d", i - 1);
-                state = 2;
-            }
-            continue;
-        }
-    }
-    switch (state) {
-        case 0:
-            fprintf(stream, "none");
-            break;
-        case 2:
-            break;
-        case 1:
-            if (firstset == 0) {
-                fprintf(stream, "any cpu");
-                break;
-            }
-        case 3:
-            fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
-            if (i - 1 > firstset)
-                fprintf(stream, "-%d", i - 1);
-            break;
-    }
-}
-
 static void print_vcpuinfo(uint32_t tdomid,
                            const libxl_vcpuinfo *vcpuinfo,
                            uint32_t nr_cpus)
@@ -4301,7 +4346,7 @@ static void print_vcpuinfo(uint32_t tdom
     /*      TIM */
     printf("%9.1f  ", ((float)vcpuinfo->vcpu_time / 1e9));
     /* CPU AFFINITY */
-    print_bitmap(vcpuinfo->cpumap.map, nr_cpus, stdout);
+    print_cpumap(vcpuinfo->cpumap.map, nr_cpus, stdout);
     printf("\n");
 }
 
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -50,7 +50,8 @@ struct cmd_spec cmd_table[] = {
       "[options] [Domain]\n",
       "-l, --long              Output all VM details\n"
       "-v, --verbose           Prints out UUIDs and security context\n"
-      "-Z, --context           Prints out security context"
+      "-Z, --context           Prints out security context\n"
+      "-n, --numa              Prints out NUMA node affinity"
     },
     { "destroy",
       &main_destroy, 0, 1,

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

     /* CPU AFFINITY */
-    print_bitmap(vcpuinfo->cpumap.map, nr_cpus, stdout);
+    print_cpumap(vcpuinfo->cpumap.map, nr_cpus, stdout);
     printf("\n");
 }
 
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -50,7 +50,8 @@ struct cmd_spec cmd_table[] = {
       "[options] [Domain]\n",
       "-l, --long              Output all VM details\n"
       "-v, --verbose           Prints out UUIDs and security context\n"
-      "-Z, --context           Prints out security context"
+      "-Z, --context           Prints out security context\n"
+      "-n, --numa              Prints out NUMA node affinity"
     },
     { "destroy",
       &main_destroy, 0, 1,

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:08:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:08:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlP0b-0007tX-Gy; Wed, 19 Dec 2012 19:08:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlP0a-0007sT-0I
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:08:48 +0000
Received: from [85.158.139.211:22092] by server-15.bemta-5.messagelabs.com id
	6A/74-20523-FB012D05; Wed, 19 Dec 2012 19:08:47 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355944098!21243301!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=1.3 required=7.0 tests=BODY_RANDOM_LONG,
	GUARANTEED_100_PERCENT
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30984 invoked from network); 19 Dec 2012 19:08:18 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:08:18 -0000
Received: by mail-we0-f174.google.com with SMTP id x10so1126575wey.5
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:08:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:content-type:mime-version
	:content-transfer-encoding:subject:x-mercurial-node:message-id
	:in-reply-to:references:user-agent:date:from:to:cc;
	bh=mSpQDpdG5zCyckpnxACdiinRJ+iyL7ezQx7uCDwUXow=;
	b=ZDgQcFTVbwwyo1iPQO7Pmh/NKnfC+bcsMEu7c/v/2U98jBuJYYhfB3VnunTZlf0OHA
	jaSlRhmtm7IHfjbVtd1vzh77Fcpn74hDpmnvTyO+VaUFGs1+gVXui6FbGg12GvKr15yC
	7LQPFjyAIw0oK48MqnYd2NfKexaKLcjpXBQWMNpxe8N8GErK/lzHe8lsifATy71H7YB8
	KSkXqDKR/Z8q6wa/NJ3BRDw5NMl50e1q/AyZRGjROfEQqip3p/zOcGHlKzFoDQyN0b5q
	113dlifyAodUDYJiPULA3vQsVuJ4i9N98x+Du/6nSB50L2ioc8QQ9KaEBy9z7VNB3yv2
	F6cg==
X-Received: by 10.194.77.13 with SMTP id o13mr13359652wjw.58.1355944098171;
	Wed, 19 Dec 2012 11:08:18 -0800 (PST)
Received: from [127.0.1.1] (ip-142-87.sn1.eutelia.it. [62.94.142.87])
	by mx.google.com with ESMTPS id hi2sm15558992wib.10.2012.12.19.11.08.16
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 19 Dec 2012 11:08:17 -0800 (PST)
MIME-Version: 1.0
X-Mercurial-Node: 0c56584e6b2ce29a98ac694e3c5b55fc9348067c
Message-Id: <0c56584e6b2ce29a98ac.1355944046@Solace>
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
User-Agent: Mercurial-patchbomb/2.2.2
Date: Wed, 19 Dec 2012 20:07:26 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 10 of 10 v2] docs: rearrange and update NUMA
 placement documentation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To include the new concept of NUMA aware scheduling and its
impact.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/docs/misc/xl-numa-placement.markdown b/docs/misc/xl-numa-placement.markdown
--- a/docs/misc/xl-numa-placement.markdown
+++ b/docs/misc/xl-numa-placement.markdown
@@ -14,22 +14,67 @@ the memory directly attached to the set 
 
 The Xen hypervisor deals with NUMA machines by assigning to each domain
 a "node affinity", i.e., a set of NUMA nodes of the host from which they
-get their memory allocated.
+get their memory allocated. Also, even if the node affinity of a domain
+is allowed to change on-line, it is very important to "place" the domain
+correctly when it is first created, as most of its memory is allocated
+at that time and cannot (for now) be moved easily.
 
 NUMA awareness becomes very important as soon as many domains start
 running memory-intensive workloads on a shared host. In fact, the cost
 of accessing non node-local memory locations is very high, and the
 performance degradation is likely to be noticeable.
 
-## Guest Placement in xl ##
+For more information, have a look at the [Xen NUMA Introduction][numa_intro]
+page on the Wiki.
+
+### Placing via pinning and cpupools ###
+
+The simplest way of placing a domain on a NUMA node is statically pinning
+the domain's vCPUs to the pCPUs of the node. This goes under the name of
+CPU affinity and can be set through the "cpus=" option in the config file
+(more about this below). Another option is to pool together the pCPUs
+spanning the node and put the domain in such a cpupool with the "pool="
+config option (as documented in our [Wiki][cpupools_howto]).
+
+In both the above cases, the domain will not be able to execute outside
+the specified set of pCPUs for any reason, even if all those pCPUs are
+busy doing something else while other pCPUs sit idle.
+
+So, when doing this, local memory accesses are 100% guaranteed, but that
+may come at the cost of some load imbalance.
+
+### NUMA aware scheduling ###
+
+If the credit scheduler is in use, the concept of node affinity defined
+above does not only apply to memory. In fact, starting from Xen 4.3, the
+scheduler always tries to run the domain's vCPUs on one of the nodes in
+its node affinity. Only if that turns out to be impossible will it
+pick any free pCPU.
+
+This is, therefore, more flexible than CPU affinity, as the domain
+can still run anywhere; it just prefers some nodes over others.
+Locality of access is less guaranteed than in the pinning case, but that
+comes along with better chances to exploit all the host resources (e.g.,
+the pCPUs).
+
+In fact, if all the pCPUs in a domain's node affinity are busy, the
+domain can run elsewhere, and slower execution (due to remote memory
+accesses) is very likely still better than no execution at all, which
+is what would happen with pinning. For this reason, NUMA aware
+scheduling has the potential of bringing substantial performance
+benefits, although this will depend on the workload.
+
+## Guest placement in xl ##
 
 If using xl for creating and managing guests, it is very easy to ask for
 both manual or automatic placement of them across the host's NUMA nodes.
 
-Note that xm/xend does the very same thing, the only differences residing
-in the details of the heuristics adopted for the placement (see below).
+Note that xm/xend does a very similar thing, the only differences being
+the details of the heuristics adopted for automatic placement (see below),
+and the lack of support (in both xm/xend and the Xen versions where that
+was the default toolstack) for NUMA aware scheduling.
 
-### Manual Guest Placement with xl ###
+### Placing the guest manually ###
 
 Thanks to the "cpus=" option, it is possible to specify where a domain
 should be created and scheduled on, directly in its config file. This
@@ -41,14 +86,19 @@ This is very simple and effective, but r
 administrator to explicitly specify affinities for each and every domain,
 or Xen won't be able to guarantee the locality for their memory accesses.
 
-It is also possible to deal with NUMA by partitioning the system using
-cpupools. Again, this could be "The Right Answer" for many needs and
-occasions, but has to be carefully considered and setup by hand.
+Notice that this also pins the domain's vCPUs to the specified set of
+pCPUs, so it not only sets the domain's node affinity (its memory will
+come from the nodes to which the pCPUs belong), but at the same time
+forces the vCPUs of the domain to be scheduled on those same pCPUs.
 
-### Automatic Guest Placement with xl ###
+### Placing the guest automatically ###
 
 If no "cpus=" option is specified in the config file, libxl tries
 to figure out on its own on which node(s) the domain could fit best.
+If it finds one (or some), the domain's node affinity is set accordingly,
+and both memory allocations and NUMA aware scheduling (for the credit
+scheduler and starting from Xen 4.3) will comply with it.
+
 It is worthwhile noting that optimally fitting a set of VMs on the NUMA
 nodes of a host is an incarnation of the Bin Packing Problem. In fact,
 the various VMs with different memory sizes are the items to be packed,
@@ -81,7 +131,7 @@ largest amounts of free memory helps kee
 small, and maximizes the probability of being able to put more domains
 there.
 
-## Guest Placement within libxl ##
+## Guest placement in libxl ##
 
 xl achieves automatic NUMA placement because that is what libxl does
 by default. No API is provided (yet) for modifying the behaviour of
@@ -93,15 +143,34 @@ any placement from happening:
     libxl_defbool_set(&domain_build_info->numa_placement, false);
 
 Also, if `numa_placement` is set to `true`, the domain must not
-have any cpu affinity (i.e., `domain_build_info->cpumap` must
+have any CPU affinity (i.e., `domain_build_info->cpumap` must
 have all its bits set, as it is by default), or domain creation
 will fail returning `ERROR_INVAL`.
 
+Starting from Xen 4.3, in case automatic placement happens (and is
+successful), it will affect the domain's node affinity and _not_ its
+CPU affinity. Namely, the domain's vCPUs will not be pinned to any
+pCPU on the host, but the memory from the domain will come from the
+selected node(s) and the NUMA aware scheduling (if the credit scheduler
+is in use) will try to keep the domain there as much as possible.
+
 Beyond that, to look at and/or tweak the placement algorithm,
 search for "Automatic NUMA placement" in libxl\_internal.h.
 
 Note this may change in future versions of Xen/libxl.
 
+## Xen < 4.3 ##
+
+As NUMA aware scheduling is a new feature of Xen 4.3, things are a little
+bit different for earlier versions of Xen. If no "cpus=" option is specified
+and Xen 4.2 is in use, the automatic placement algorithm still runs, but
+the result is used to _pin_ the vCPUs of the domain to the output node(s).
+This is consistent with what was happening with xm/xend, which also
+affected the domain's CPU affinity.
+
+On versions of Xen earlier than 4.2 there is no automatic placement at
+all in xl or libxl, and hence no node or CPU affinity is affected.
+
 ## Limitations ##
 
 Analyzing various possible placement solutions is what makes the
@@ -109,3 +178,6 @@ algorithm flexible and quite effective. 
 it won't scale well to systems with arbitrary number of nodes.
 For this reason, automatic placement is disabled (with a warning)
 if it is requested on a host with more than 16 NUMA nodes.
+
+[numa_intro]: http://wiki.xen.org/wiki/Xen_NUMA_Introduction
+[cpupools_howto]: http://wiki.xen.org/wiki/Cpupools_Howto

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 19:24:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlPFb-0000fi-2i; Wed, 19 Dec 2012 19:24:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlPFZ-0000fd-8W
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:24:18 +0000
Received: from [85.158.143.99:63917] by server-2.bemta-4.messagelabs.com id
	61/43-30861-06412D05; Wed, 19 Dec 2012 19:24:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355945053!19025531!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15936 invoked from network); 19 Dec 2012 19:24:14 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 19:24:14 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJJO9hm022844
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 19:24:10 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJJO8SU004676
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 19:24:08 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJJO7sf030695; Wed, 19 Dec 2012 13:24:08 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 11:24:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8F9D51C032B; Wed, 19 Dec 2012 14:24:05 -0500 (EST)
Date: Wed, 19 Dec 2012 14:24:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Message-ID: <20121219192405.GA24729@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
	<1355936517.10526.28.camel@iceland>
	<20121219174000.GA28570@phenom.dumpdata.com>
	<1355939543.10526.30.camel@iceland>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Kj7319i9nmIyA2yE"
Content-Disposition: inline
In-Reply-To: <1355939543.10526.30.camel@iceland>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--Kj7319i9nmIyA2yE
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 19, 2012 at 05:52:23PM +0000, Wei Liu wrote:
> On Wed, 2012-12-19 at 17:40 +0000, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 19, 2012 at 05:01:57PM +0000, Wei Liu wrote:
> > > On Wed, 2012-12-19 at 16:04 +0000, Konrad Rzeszutek Wilk wrote:
> > > > On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> > > > > Hi Konrad
> > > > >
> > > > > I encountered a bug when trying to bring offline a cpu then online it
> > > > > again in HVM. As I'm not very familiar with HVM stuffs I cannot come up
> > > > > with a quick fix.
> > > >
> > > > I took your two patches that you posted and they are in v3.8 now.
> > > >
> > > > It seems that there are bugs in the offline/online code though.
> > > >
> > > > I did this:
> > > > # echo 0 > /sys/devices/system/cpu/cpu3/online
> > > > # echo 1 > /sys/devices/system/cpu/cpu3/online
> > > >
> > > > With a PV guest and it blows up (with or without your patches).
> > > >
> > > > Have you seen something similar to this:
> > > >
> > > > [  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
> > > > [  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
> > > > [  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
> > > > [  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1
> > > 
> > > Can you tell me which tree you're using? I cannot find cs:gb1849b3 in my
> > > repository.
> > 
> > Oh, .. that might have been me messing around. I would just do v3.7 and
> > try that out.
> > >
> 
> Just played with 3.7 several times, no oops so far.
> 
> Wei.
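(Aside: the `-00139-gb1849b3` suffix in the Tainted line above is `git describe`-style output: 139 commits past the base tag, then the abbreviated hash `b1849b3`. The leading `g` is a literal marker, which is why `cs:gb1849b3` matches nothing; the changeset to look up is `b1849b3`. A minimal sketch in POSIX shell, where `parse_describe` is a name made up for illustration:)

```shell
# Hypothetical helper: split a git-describe style version string into the
# commit count past the tag and the abbreviated hash (the "g" is literal).
parse_describe() {
    ver=$1                       # e.g. v3.5-rc3-00139-gb1849b3
    hash=${ver##*-g}             # text after the final "-g"
    rest=${ver%-g"$hash"}        # everything before "-g<hash>"
    count=${rest##*-}            # trailing count field
    echo "commits-past-tag=$count hash=$hash"
}

parse_describe v3.5-rc3-00139-gb1849b3
# prints: commits-past-tag=00139 hash=b1849b3
```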

I also tried v3.8 - same thing. Here is the serial output, and attached
is the config file.
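(The offline/online cycle under discussion can also be scripted; this is a sketch only, where `cycle_cpu` is a made-up helper that just prints the sysfs writes so they can be reviewed before piping them to `sh` as root on a box with a hot-pluggable cpu3:)

```shell
# Made-up helper: emit the two sysfs writes that take a CPU offline and
# bring it back online. Pipe the output to `sh` (as root) to perform them.
cycle_cpu() {
    node="/sys/devices/system/cpu/cpu$1/online"
    printf 'echo 0 > %s\n' "$node"   # take the CPU offline
    printf 'echo 1 > %s\n' "$node"   # bring it back online
}

cycle_cpu 3
```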


Trying 192.168.101.14...
Connected to maxsrv1.
Escape character is '^]'.

PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
boot:
Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
(Xen ASCII-art version banner, mangled in the serial capture)
(XEN) Xen version 4.1.3-rc1-pre (konrad@dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) Wed Dec 19 13:47:18 EST 2012
(XEN) Latest ChangeSet: Mon Feb 13 17:57:47 2012 +0000 23225:f2543f449a49
(XEN) Bootloader: unknown
(XEN) Command line: cpuinfo conring_size=1048576 dom0_mem=max:3G cpufreq=verbose,performance com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009ec00 (usable)
(XEN)  000000000009ec00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 0000000020000000 (usable)
(XEN)  0000000020000000 - 0000000020200000 (reserved)
(XEN)  0000000020200000 - 0000000040000000 (usable)
(XEN)  0000000040000000 - 0000000040200000 (reserved)
(XEN)  0000000040200000 - 00000000bad80000 (usable)
(XEN)  00000000bad80000 - 00000000badc9000 (ACPI NVS)
(XEN)  00000000badc9000 - 00000000badd1000 (ACPI data)
(XEN)  00000000badd1000 - 00000000badf4000 (reserved)
(XEN)  00000000badf4000 - 00000000badf6000 (usable)
(XEN)  00000000badf6000 - 00000000bae06000 (reserved)
(XEN)  00000000bae06000 - 00000000bae14000 (ACPI NVS)
(XEN)  00000000bae14000 - 00000000bae3c000 (reserved)
(XEN)  00000000bae3c000 - 00000000bae7f000 (ACPI NVS)
(XEN)  00000000bae7f000 - 00000000bb000000 (usable)
(XEN)  00000000bb800000 - 00000000bfa00000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed40000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0450, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT BADC9068, 0054 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP BADD0308, 00F4 (r4 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT BADC9150, 71B5 (r2 ALASKA    A M I       15 INTL 20051117)
(XEN) ACPI: FACS BAE0BF80, 0040
(XEN) ACPI: APIC BADD0400, 0072 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT BADD0478, 0102 (r1 AMICPU     PROC        1 MSFT  3000001)
(XEN) ACPI: MCFG BADD0580, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET BADD05C0, 0038 (r1 ALASKA    A M I  1072009 AMI.        4)
(XEN) ACPI: ASF! BADD05F8, 00A0 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) System RAM: 8104MB (8299140kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fcde0
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x01] enabled)
(XEN) Processor #1 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) Processor #3 6:10 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255
(XEN) PCI: Not using MMCONFIG.
(XEN) Table is not found!
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Initializing CPU#0
(XEN) Detected 3093.056 MHz processor.
(XEN) Initing memory sharing.
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) CPU0: Thermal monitoring enabled (TM1)
(XEN) Intel machine check reporting enabled
(XEN) I/O virtualisation disabled
(XEN) CPU0: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 1048576 KiB.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) EPT supports 2MB super page.
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging detected.
(XEN) Booting processor 1/1 eip 7c000
(XEN) Initializing CPU#1
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 3072K
(XEN) CPU1: Thermal monitoring enabled (TM1)
(XEN) CPU1: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Booting processor 2/2 eip 7c000
(XEN) Initializing CPU#2
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 3072K
(XEN) CPU2: Thermal monitoring enabled (TM1)
(XEN) CPU2: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Booting processor 3/3 eip 7c000
(XEN) Initializing CPU#3
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 3072K
(XEN) CPU3: Thermal monitoring enabled (TM1)
(XEN) CPU3: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Brought up 4 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x208f000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   (702942 pages to be allocated)
(XEN)  Init. ramdisk: 000000022f7de000->000000023fdff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8208f000
(XEN)  Init. ramdisk: ffffffff8208f000->ffffffff926b0200
(XEN)  Phys-Mach map: ffffffff926b1000->ffffffff92cb1000
(XEN)  Start info:    ffffffff92cb1000->ffffffff92cb14b4
(XEN)  Page tables:   ffffffff92cb2000->ffffffff92d4d000
(XEN)  Boot stack:    ffffffff92d4d000->ffffffff92d4e000
(XEN)  TOTAL:         ffffffff80000000->ffffffff93000000
(XEN)  ENTRY ADDRESS: ffffffff81aba210
(XEN) Dom0 has maximum 4 VCPUs
(XEN) Scrubbing Free RAM: ......................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 220kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.7.0upstream (konrad@phenom.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Wed Dec 1
[    0.000000] Command line: debug console=hvc0 loglevel=10 xen-pciback.hide=(01:00.0)
[    0.000000] Freeing 9e-100 pfn range: 98 pages freed
[    0.000000] 1-1 mapping on 9e->100
[    0.000000] Freeing 20000-20200 pfn range: 512 pages freed
[    0.000000] 1-1 mapping on 20000->20200
[    0.000000] Freeing 40000-40200 pfn range: 512 pages freed
[    0.000000] 1-1 mapping on 40000->40200
[    0.000000] Freeing bad80-badf4 pfn range: 116 pages freed
[    0.000000] 1-1 mapping on bad80->badf4
[    0.000000] Freeing badf6-bae7f pfn range: 137 pages freed
[    0.000000] 1-1 mapping on badf6->bae7f
[    0.000000] Freeing bb000-c0000 pfn range: 20480 pages freed
[    0.000000] 1-1 mapping on bb000->100000
[    0.000000] Released 21855 pages of unused memory
[    0.000000] Set 283999 page(s) to 1-1 mapping
[    0.000000] Populating 100000-10555f pfn range: 21855 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x000000000009dfff] usable
[    0.000000] Xen: [mem 0x000000000009ec00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x000000001fffffff] usable
[    0.000000] Xen: [mem 0x0000000020000000-0x00000000201fffff] reserved
[    0.000000] Xen: [mem 0x0000000020200000-0x000000003fffffff] usable
[    0.000000] Xen: [mem 0x0000000040000000-0x00000000401fffff] reserved
[    0.000000] Xen: [mem 0x0000000040200000-0x00000000bad7ffff] usable
[    0.000000] Xen: [mem 0x00000000bad80000-0x00000000badc8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000badc9000-0x00000000badd0fff] ACPI data
[    0.000000] Xen: [mem 0x00000000badd1000-0x00000000badf3fff] reserved
[    0.000000] Xen: [mem 0x00000000badf4000-0x00000000badf5fff] usable
[    0.000000] Xen: [mem 0x00000000badf6000-0x00000000bae05fff] reserved
[    0.000000] Xen: [mem 0x00000000bae06000-0x00000000bae13fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000bae14000-0x00000000bae3bfff] reserved
[    0.000000] Xen: [mem 0x00000000bae3c000-0x00000000bae7efff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000bae7f000-0x00000000baffffff] usable
[    0.000000] Xen: [mem 0x00000000bb800000-0x00000000bf9fffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed3ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000010555efff] usable
[    0.000000] Xen: [mem 0x000000010555f000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI 2.7 present.
[    0.000000] DMI: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0 03/14/2011
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x10555f max_arch_pfn = 0x400000000
[    0.000000] e820: last_pfn = 0xbb000 max_arch_pfn = 0x400000000
[    0.000000] initial memory mapped: [mem 0x00000000-0x126b0fff]
[    0.000000] Base memory trampoline at [ffff880000098000] 98000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0xbaffffff]
[    0.000000]  [mem 0x00000000-0xbaffffff] page 4k
[    0.000000] kernel direct mapping tables up to 0xbaffffff @ [mem 0x00a24000-0x00ffffff]
[    0.000000] xen: setting RW the range f66000 - 1000000
[    0.000000] init_memory_mapping: [mem 0x100000000-0x10555efff]
[    0.000000]  [mem 0x100000000-0x10555efff] page 4k
[    0.000000] kernel direct mapping tables up to 0x10555efff @ [mem 0xbafd3000-0xbaffffff]
[    0.000000] xen: setting RW the range bafff000 - bb000000
[    0.000000] RAMDISK: [mem 0x0208f000-0x126b0fff]
[    0.000000] ACPI: RSDP 00000000000f0450 00024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000badc9068 00054 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000badd0308 000F4 (v04 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000badc9150 071B5 (v02 ALASKA    A M I 00000015 INTL 20051117)
[    0.000000] ACPI: FACS 00000000bae0bf80 00040
[    0.000000] ACPI: APIC 00000000badd0400 00072 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000badd0478 00102 (v01 AMICPU     PROC 00000001 MSFT 03000001)
[    0.000000] ACPI: MCFG 00000000badd0580 0003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000badd05c0 00038 (v01 ALASKA    A M I 01072009 AMI. 00000004)
[    0.000000] ACPI: ASF! 00000000badd05f8 000A0 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000010555efff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x10555efff]
[    0.000000]   NODE_DATA [mem 0x10555b000-0x10555efff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x10555efff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009dfff]
[    0.000000]   node   0: [mem 0x00100000-0x1fffffff]
[    0.000000]   node   0: [mem 0x20200000-0x3fffffff]
[    0.000000]   node   0: [mem 0x40200000-0xbad7ffff]
[    0.000000]   node   0: [mem 0xbadf4000-0xbadf5fff]
[    0.000000]   node   0: [mem 0xbae7f000-0xbaffffff]
[    0.000000]   node   0: [mem 0x100000000-0x10555efff]
[    0.000000] On node 0 totalpages: 786416
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 1352 pages reserved
[    0.000000]   DMA zone: 2574 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 14280 pages used for memmap
[    0.000000]   DMA32 zone: 746299 pages, LIFO batch:31
[    0.000000]   Normal zone: 299 pages used for memmap
[    0.000000]   Normal zone: 21556 pages, LIFO batch:3
[    0.000000] ACPI: PM-Timer IO Port: 0x408
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 000000000009e000 - 000000000009f000
[    0.000000] PM: Registered nosave memory: 000000000009f000 - 0000000000100000
[    0.000000] PM: Registered nosave memory: 0000000020000000 - 0000000020200000
[    0.000000] PM: Registered nosave memory: 0000000040000000 - 0000000040200000
[    0.000000] PM: Registered nosave memory: 00000000bad80000 - 00000000badc9000
[    0.000000] PM: Registered nosave memory: 00000000badc9000 - 00000000badd1000
[    0.000000] PM: Registered nosave memory: 00000000badd1000 - 00000000badf4000
[    0.000000] PM: Registered nosave memory: 00000000badf6000 - 00000000bae06000
[    0.000000] PM: Registered nosave memory: 00000000bae06000 - 00000000bae14000
[    0.000000] PM: Registered nosave memory: 00000000bae14000 - 00000000bae3c000
[    0.000000] PM: Registered nosave memory: 00000000bae3c000 - 00000000bae7f000
[    0.000000] PM: Registered nosave memory: 00000000bb000000 - 00000000bb800000
[    0.000000] PM: Registered nosave memory: 00000000bb800000 - 00000000bfa00000
[    0.000000] PM: Registered nosave memory: 00000000bfa00000 - 00000000fec00000
[    0.000000] PM: Registered nosave memory: 00000000fec00000 - 00000000fec01000
[    0.000000] PM: Registered nosave memory: 00000000fec01000 - 00000000fed1c000
[    0.000000] PM: Registered nosave memory: 00000000fed1c000 - 00000000fed40000
[    0.000000] PM: Registered nosave memory: 00000000fed40000 - 00000000fee00000
[    0.000000] PM: Registered nosave memory: 00000000fee00000 - 00000000fee01000
[    0.000000] PM: Registered nosave memory: 00000000fee01000 - 00000000ff000000
[    0.000000] PM: Registered nosave memory: 00000000ff000000 - 0000000100000000
[    0.000000] e820: [mem 0xbfa00000-0xfebfffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.1.3-rc1-pre (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff880105200000 s84288 r8192 d22208 u524288
[    0.000000] pcpu-alloc: s84288 r8192 d22208 u524288 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3
[    1.359907] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 770429
[    1.359910] Policy zone: Normal
[    1.359912] Kernel command line: debug console=hvc0 loglevel=10 xen-pciback.hide=(01:00.0)
[    1.360197] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    1.360202] __ex_table already sorted, skipping sort
[    1.382802] software IO TLB [mem 0xb6d80000-0xbad7ffff] (64MB) mapped at [ffff8800b6d80000-ffff8800bad7ffff]
[    1.391629] Memory: 2740388k/4281724k available (6411k kernel code, 1136060k absent, 405276k reserved, 4480k data, 752k init)
[    1.391712] Hierarchical RCU implementation.
[    1.391715] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
[    1.391724] NR_IRQS:33024 nr_irqs:712 16
[    1.391792] xen: sci override: global_irq=9 trigger=0 polarity=0
[    1.391795] xen: registering gsi 9 triggering 0 polarity 0
[    1.391804] xen: --> pirq=9 -> irq=9 (gsi=9)
[    1.391809] xen: acpi sci 9
[    1.391813] xen: --> pirq=1 -> irq=1 (gsi=1)
[    1.391817] xen: --> pirq=2 -> irq=2 (gsi=2)
[    1.391820] xen: --> pirq=3 -> irq=3 (gsi=3)
[    1.391824] xen: --> pirq=4 -> irq=4 (gsi=4)
[    1.391827] xen: --> pirq=5 -> irq=5 (gsi=5)
[    1.391831] xen: --> pirq=6 -> irq=6 (gsi=6)
[    1.391835] xen: --> pirq=7 -> irq=7 (gsi=7)
[    1.391838] xen: --> pirq=8 -> irq=8 (gsi=8)
[    1.391842] xen: --> pirq=10 -> irq=10 (gsi=10)
[    1.391846] xen: --> pirq=11 -> irq=11 (gsi=11)
[    1.391849] xen: --> pirq=12 -> irq=12 (gsi=12)
[    1.391853] xen: --> pirq=13 -> irq=13 (gsi=13)
[    1.391857] xen: --> pirq=14 -> irq=14 (gsi=14)
[    1.391860] xen: --> pirq=15 -> irq=15 (gsi=15)
[    1.393249] Console: colour VGA+ 80x25
[    1.393530] console [hvc0] enabled
[    1.393557] Xen: using vcpuop timer interface
[    1.393562] installing Xen timer for CPU 0
[    1.393582] tsc: Detected 3093.056 MHz processor
[    1.393588] Calibrating delay loop (skipped), value calculated using timer frequency.. 6186.11 BogoMIPS (lpj=3093056)
[    1.393594] pid_max: default: 32768 minimum: 301
[    1.393631] Security Framework initialized
[    1.393635] SELinux:  Initializing.
[    1.393645] SELinux:  Starting in permissive mode
[    1.394172] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    1.395081] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    1.395386] Mount-cache hash table entries: 256
[    1.395611] Initializing cgroup subsys cpuacct
[    1.395615] Initializing cgroup subsys freezer
[    1.395672] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    1.395672] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    1.395679] CPU: Physical Processor ID: 0
[    1.395681] CPU: Processor Core ID: 0
[    1.395685] mce: CPU supports 7 MCE banks
[    1.395726] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[    1.395726] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
[    1.395726] tlb_flushall_shift: 5
[    1.395794] Freeing SMP alternatives: 24k freed
[    1.397997] ACPI: Core revision 20120913
[    1.626158] cpu 0 spinlock event irq 41
[    1.626183] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only.
[    1.626430] NMI watchdog: disabled (cpu0): hardware events not enabled
[    1.626585] installing Xen timer for CPU 1
[    1.626598] cpu 1 spinlock event irq 48
[    1.626920] installing Xen timer for CPU 2
[    1.626932] cpu 2 spinlock event irq 55
[    1.627217] installing Xen timer for CPU 3
[    1.627229] cpu 3 spinlock event irq 62
[    1.627332] Brought up 4 CPUs
[    1.629982] PM: Registering ACPI NVS region [mem 0xbad80000-0xbadc8fff] (299008 bytes)
[    1.629994] PM: Registering ACPI NVS region [mem 0xbae06000-0xbae13fff] (57344 bytes)
[    1.629998] PM: Registering ACPI NVS region [mem 0xbae3c000-0xbae7efff] (274432 bytes)
[    1.630167] kworker/u:0 (31) used greatest stack depth: 5968 bytes left
[    1.630195] Grant tables using version 2 layout.
[    1.630207] Grant table initialized
[    1.630239] RTC time: 18:55:25, date: 12/19/12
[    1.630315] NET: Registered protocol family 16
[    1.630558] kworker/u:0 (35) used greatest stack depth: 5504 bytes left
[    1.631082] ACPI: bus type pci registered
[    1.631568] dca service started, version 1.12.1
[    1.631632] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[    1.631637] PCI: not using MMCONFIG
[    1.631640] PCI: Using configuration type 1 for base access
[    1.641990] bio: create slab <bio-0> at 0
[    1.642189] ACPI: Added _OSI(Module Device)
[    1.642193] ACPI: Added _OSI(Processor Device)
[    1.642196] ACPI: Added _OSI(3.0 _SCP Extensions)
[    1.642200] ACPI: Added _OSI(Processor Aggregator Device)
[    1.646439] ACPI: EC: Look up EC in DSDT
[    1.650923] ACPI: Executed 1 blocks of module-level executable AML code
[    1.655607] ACPI: SSDT 00000000bae0ac18 0038C (v01    AMI      IST 00000001 MSFT 03000001)
[    1.656750] ACPI: Dynamic OEM Table Load:
[    1.656756] ACPI: SSDT           (null) 0038C (v01    AMI      IST 00000001 MSFT 03000001)
[    1.656786] ACPI: SSDT 00000000bae0be18 00084 (v01    AMI      CST 00000001 MSFT 03000001)
[    1.657840] ACPI: Dynamic OEM Table Load:
[    1.657846] ACPI: SSDT           (null) 00084 (v01    AMI      CST 00000001 MSFT 03000001)
[    1.658539] ACPI: Interpreter enabled
[    1.658544] ACPI: (supports S0 S1 S3 S4 S5)
[    1.658570] ACPI: Using IOAPIC for interrupt routing
[    1.658597] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[    1.658675] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources
[    1.709169] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    1.719728] ACPI: No dock devices found.
[    1.719735] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    1.719976] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    1.720261] PCI host bridge to bus 0000:00
[    1.720266] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.720270] pci_bus 0000:00: root bus resource [io  0x0000-0x03af]
[    1.720274] pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7]
[    1.720277] pci_bus 0000:00: root bus resource [io  0x03b0-0x03df]
[    1.720311] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    1.720314] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    1.720318] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff]
[    1.720322] pci_bus 0000:00: root bus resource [mem 0xbfa00000-0xffffffff]
[    1.720342] pci 0000:00:00.0: [8086:0100] type 00 class 0x060000
[    1.720448] pci 0000:00:01.0: [8086:0101] type 01 class 0x060400
[    1.720555] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    1.720607] pci 0000:00:02.0: [8086:0102] type 00 class 0x030000
[    1.720640] pci 0000:00:02.0: reg 10: [mem 0xfe000000-0xfe3fffff 64bit]
[    1.720658] pci 0000:00:02.0: reg 18: [mem 0xd0000000-0xdfffffff 64bit pref]
[    1.720672] pci 0000:00:02.0: reg 20: [io  0xf000-0xf03f]
[    1.720811] pci 0000:00:16.0: [8086:1c3a] type 00 class 0x078000
[    1.720855] pci 0000:00:16.0: reg 10: [mem 0xfe607000-0xfe60700f 64bit]
[    1.721005] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[    1.721070] pci 0000:00:1a.0: [8086:1c2d] type 00 class 0x0c0320
[    1.721111] pci 0000:00:1a.0: reg 10: [mem 0xfe606000-0xfe6063ff]
[    1.721282] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    1.721332] pci 0000:00:1b.0: [8086:1c20] type 00 class 0x040300
[    2.441647] ehci_hcd 0000:00:1a.0: port 1 reset complete, port enabled
[    2.441660] ehci_hcd 0000:00:1a.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
mount: mount point /proc/bus/usb does not exist
[    2.492442] usb 1-1: new high-speed USB device number 2 using ehci_hcd
mount: mount point /sys/kernel/config does not exist
[    2.509458] core_filesystem (1208) used greatest stack depth: 4952 bytes left
FATAL: Error inserting xen_kbdfront (/lib/modules/3.7.0upstream/kernel/drivers/input/misc/xen-kbdfront.ko): No such device
FATAL: Error inserting xen_fbfront (/lib/modules/3.7.0upstream/kernel/drivers/video/xen-fbfront.ko): No such device
[    2.543646] ehci_hcd 0000:00:1a.0: port 1 reset complete, port enabled
[    2.543667] ehci_hcd 0000:00:1a.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
[    2.545494] Initialising Xen virtual ethernet driver.
[    2.567446] udevd (1266): /proc/1266/oom_adj is deprecated, please use /proc/1266/oom_score_adj instead.
[    2.607548] ehci_hcd 0000:00:1a.0: set dev address 2 for port 1
[    2.607560] ehci_hcd 0000:00:1a.0: LPM: no device attached
[    2.607838] usb 1-1: udev 2, busnum 1, minor = 1
[    2.607845] usb 1-1: New USB device found, idVendor=8087, idProduct=0024
[    2.607852] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    2.609689] usb 1-1: usb_probe_device
[    2.609699] usb 1-1: configuration #1 chosen from 1 choice
[    2.611359] usb 1-1: adding 1-1:1.0 (config #1, interface 0)
[    2.611568] hub 1-1:1.0: usb_probe_interface
[    2.611575] hub 1-1:1.0: usb_probe_interface - got id
[    2.611581] hub 1-1:1.0: USB hub found
[    2.613339] hub 1-1:1.0: 4 ports detected
[    2.613347] hub 1-1:1.0: standalone hub
[    2.613352] hub 1-1:1.0: individual port power switching
[    2.613358] hub 1-1:1.0: individual port over-current protection
[    2.613363] hub 1-1:1.0: Single TT
[    2.613367] hub 1-1:1.0: TT requires at most 8 FS bit times (666 ns)
[    2.613371] hub 1-1:1.0: power on to power good time: 100ms
[    2.613685] hub 1-1:1.0: local power source is good
[    2.614336] hub 1-1:1.0: enabling power on all ports
[    2.614827] hub 2-0:1.0: state 7 ports 2 chg 0002 evt 0000
[    2.614843] hub 2-0:1.0: port 1, status 0501, change 0000, 480 Mb/s
[    2.639505] xen: registering gsi 16 triggering 0 polarity 1
[    2.639517] Already setup the GSI :16
[    2.651431] SCSI subsystem initialized
udevd-work[1305]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory
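(That udevd error is expected: cpu0 usually exposes no `online` node, since hot-unplugging the boot CPU is a separate, optional feature on x86. A sketch for listing which CPUs are actually hot-pluggable; `list_hotplug` and the `SYSFS_CPU` override are inventions for illustration, not anything from the report:)

```shell
# Made-up helper: report which CPUs expose an `online` node in sysfs.
# SYSFS_CPU can be overridden to point the walk at a test tree.
SYSFS_CPU=${SYSFS_CPU:-/sys/devices/system/cpu}

list_hotplug() {
    for d in "$SYSFS_CPU"/cpu[0-9]*; do
        [ -d "$d" ] || continue
        if [ -e "$d/online" ]; then
            echo "${d##*/}: hot-pluggable"
        else
            echo "${d##*/}: not hot-pluggable"
        fi
    done
}

list_hotplug
```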

[    2.660473] ACPI: bus type scsi registered
[    2.661221] xen: registering gsi 16 triggering 0 polarity 1
[    2.661228] Already setup the GSI :16
[    2.662192] pci 0000:00:00.0: Intel Sandybridge Chipset
[    2.662282] libata version 3.00 loaded.
[    2.662370] pci 0000:00:00.0: detected gtt size: 2097152K total, 262144K mappable
[    2.662601] atl1c 0000:03:00.0: version 1.0.1.0-NAPI
[    2.663465] pci 0000:00:00.0: detected 65536K stolen memory
[    2.663514] i915 0000:00:02.0: setting latency timer to 64
[    2.665638] ehci_hcd 0000:00:1d.0: port 1 reset complete, port enabled
[    2.665647] ehci_hcd 0000:00:1d.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
[    2.716412] usb 2-1: new high-speed USB device number 2 using ehci_hcd
[    2.717096] usb 1-1: link qh256-0001/ffff8800ac75ba40 start 1 [1/0 us]
[    2.767636] ehci_hcd 0000:00:1d.0: port 1 reset complete, port enabled
[    2.767650] ehci_hcd 0000:00:1d.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
[    2.777221] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
[    2.777230] [drm] Driver supports precise vblank timestamp query.
[    2.777299] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
Waiting for devices [  OK  ]
[    2.807610] [drm] Enabling RC6 states: RC6 on, RC6p off, RC6pp off
[    2.815015] ip (2328) used greatest stack depth: 3896 bytes left
[    2.830534] ehci_hcd 0000:00:1d.0: set dev address 2 for port 1
[    2.830546] ehci_hcd 0000:00:1d.0: LPM: no device attached
[    2.830780] usb 2-1: udev 2, busnum 2, minor = 129
[    2.830788] usb 2-1: New USB device found, idVendor=8087, idProduct=0024
[    2.830796] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    2.830980] usb 2-1: usb_probe_device
[    2.830989] usb 2-1: configuration #1 chosen from 1 choice
[    2.831416] usb 2-1: adding 2-1:1.0 (config #1, interface 0)
[    2.831531] hub 2-1:1.0: usb_probe_interface
[    2.831536] hub 2-1:1.0: usb_probe_interface - got id
[    2.831541] hub 2-1:1.0: USB hub found
[    2.831649] hub 2-1:1.0: 6 ports detected
[    2.831654] hub 2-1:1.0: standalone hub
[    2.831659] hub 2-1:1.0: individual port power switching
[    2.831663] hub 2-1:1.0: individual port over-current protection
[    2.831668] hub 2-1:1.0: Single TT
[    2.831672] hub 2-1:1.0: TT requires at most 8 FS bit times (666 ns)
[    2.831677] hub 2-1:1.0: power on to power good time: 100ms
[    2.831899] hub 2-1:1.0: local power source is good
[    2.833153] hub 2-1:1.0: enabling power on all ports
[    2.833934] hub 1-1:1.0: state 7 ports 4 chg 0000 evt 0000
[    2.834741] No connectors reported connected with modes
[    2.834749] [drm] Cannot find any crtc or sizes - going 1024x768
[    2.838554] fbcon: inteldrmfb (fb0) is primary device
[    2.917288] Console: switching to colour frame buffer device 128x48
[    2.934143] usb 2-1: link qh256-0001/ffff8800b0007340 start 1 [1/0 us]
[    2.934164] hub 2-1:1.0: state 7 ports 6 chg 0000 evt 0000
[    2.961792] fb0: inteldrmfb frame buffer device
[    2.961795] drm: registered panic notifier
[    2.961821] i915: No ACPI video bus found
[    2.962143] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[    2.962213] ata_piix 0000:00:1f.2: version 2.13
[    2.962243] xen: registering gsi 19 triggering 0 polarity 1
[    2.962275] xen: --> pirq=19 -> irq=19 (gsi=19)
[    2.962308] ata_piix 0000:00:1f.2: MAP [
[    2.962313]  P0 P2 P1 P3 ]
[    2.964634] modprobe (1419) used greatest stack depth: 3392 bytes left
[    3.112540] ata_piix 0000:00:1f.2: setting latency timer to 64
[    3.114540] scsi0 : ata_piix
[    3.115914] scsi1 : ata_piix
[    3.116906] ata1: SATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf0d0 irq 14
[    3.116923] ata2: SATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf0d8 irq 15
[    3.116975] xen: registering gsi 19 triggering 0 polarity 1
[    3.116990] Already setup the GSI :19
[    3.117015] ata_piix 0000:00:1f.5: MAP [
[    3.117024]  P0 -- P1 -- ]
[    3.267504] ata_piix 0000:00:1f.5: setting latency timer to 64
[    3.269836] scsi2 : ata_piix
[    3.271381] scsi3 : ata_piix
[    3.272665] ata3: SATA max UDMA/133 cmd 0xf0b0 ctl 0xf0a0 bmdma 0xf070 irq 19
[    3.272679] ata4: SATA max UDMA/133 cmd 0xf090 ctl 0xf080 bmdma 0xf078 irq 19
[    3.728513] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[    3.731935] ata3.00: ATAPI: HL-DT-ST DVDRAM GH24NS50, XP01, max UDMA/100
[    3.753810] ata3.00: configured for UDMA/100
[    4.123458] ata2.00: failed to resume link (SControl 0)
[    4.279470] ata4: failed to resume link (SControl 0)
[    4.290776] ata4: SATA link down (SStatus 4 SControl 0)
[    4.428514] ata1.01: failed to resume link (SControl 0)
[    4.579683] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[    4.579753] ata1.01: SATA link down (SStatus 0 SControl 0)
[    4.628360] ata1.00: ATA-7: ST3808110AS, 3.ADH, max UDMA/133
[    4.628375] ata1.00: 156250000 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    4.809185] ata1.00: failed to get Identify Device Data, Emask 0x1
[    5.025822] ata1.00: failed to get Identify Device Data, Emask 0x1
[    5.025872] ata1.00: configured for UDMA/133
[    5     ATA      ST3808110AS      3.AD PQ: 0 ANSI: 5
[    5.027796] ACPI: Invalid Power Resource to register!
[    5.130457] ata2.01: failed to resume link (SControl 0)
[    5.142312] ata2.00: SATA link down (SStatus 4 SControl 0)
[    5.142382] ata2.01: SATA link down (SStatus 0 SControl 0)
[    5.146256] scsi 2:0:0:0: CD-ROM            HL-DT-ST DVDRAM GH24NS50  XP01 PQ: 0 ANSI: 5

[    5.147459] ACPI: Invalid Power Resource to register!
[    5.156476] sd 0:0:0:0: [sda] 156250000 512-byte logical blocks: (80.0 GB/74.5 GiB)
[    5.156759] sd 0:0:0:0: [sda] Write Protect is off
[    5.156774] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    5.156878] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    5.166854] sr0: scsi3-mmc drive: 48x/48x writer dvd-ram cd/rw xa/form2 cdda tray
[    5.166864] cdrom: Uniform CD-ROM driver Revision: 3.20
[    5.167476] sr 2:0:0:0: Attached scsi CD-ROM sr0
[    5.208328]  sda: sda1 sda2 sda3 sda4
[    5.210047] sd 0:0:0:0: [sda] Attached SCSI disk
[    5.216268] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    5.216450] sr 2:0:0:0: Attached scsi generic sg1 type 5
Waiting for fb [  OK  ]
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7fb6c7d73000): Writting .. [1024:768]
Done!
FATAL: Module agpgart_intel not found.
[    5.546351] [drm] radeon kernel modesetting enabled.
[    5.552678] wmi: Mapper lo
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7fbf03f6e000): Writting .. [1024:768]
Done!
VGA: 0000:00:02.0
Waiting for network [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [    6.129588] atl1c 0000:03:00.0: atl1c: eth0 NIC Link is Up<1000 Mbps Full Duplex>
[    6.132986] device eth0 entered promiscuous mode
[  OK  ]
Bringing up interface switch:
Determining IP information for switch...[    6.213539] switch: port 1(eth0) entered forwarding state
[    6.213561] switch:  done.
[  OK  ]
Waiting for SSHd [  OK  ]
Generating keys ...
+-----------------+
+-----------------+
+-----------------+

Starting SSHd ...

    SSH started [2758]

        passwd = skl+3c

FATAL: Module dump_dma not found.
ERROR: Module dump_dma does not exist in /proc/modules
[    7.895459] Loading iSCSI transport class v2.0-870.
[    7.900281] iscsi: registered transport (tcp)
iscsistart: transport class version 2.0-870. iscsid version 2.0-872
Could not get list of targets from firmware.
Starting xenstored...Dec 19 18:55:31 tst007 xenstored: Checking store ...
Dec 19 18:55:31 tst007 xenstored: Checking store complete.

Setting domain 0 name...
Starting xenconsoled...
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:790: blktapctrl: v1.0.0
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blkta
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:792: Found driver: [ramdisk image (ram)]
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:792: Found driver: [qcow disk (qcow)]
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:792: Found driver: [qcow2 disk (qcow2)]
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl_linux.c:86: blktap0 open failed
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:859: couldn't open blktap interface
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:922: Unable to start blktapctrl
Dec 19 18:55:32 tst007 iscsid: transport class version 2.0-870. iscsid version 2.0-872
Dec 19 18:55:32 tst007 iscsid: iSCSI daemon with pid=2825 started!
[0:0:0:0]    disk    ATA      ST3808110AS      3.AD  /dev/sda
[2:0:0:0]    cd/dvd  HL-DT-ST DVDRAM GH24NS50  XP01  /dev/sr0
00:00.0 Host bridge: Intel Corporation Device 0100 (rev 09)
00:01.0 PCI bridge: Intel Corporation Device 0101 (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09)
00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04)
00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05)
00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5)
00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5)
00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation Device 1c5c (rev 05)
00:1f.2 IDE interface: Intel Corporation Cougar Point 4 port SATA IDE Controller (rev 05)
00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05)
00:1f.5 IDE interface: Intel Corporation Cougar Point 2 port SATA IDE Controller (rev 05)
01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Ethernet controller: Attansic Technology Corp. Device 1083 (rev c0)
           CPU0       CPU1       CPU2       CPU3
  1:          2          0          0          0  xen-pirq-ioapic-edge  i8042
  8:          1          0          0          0  xen-pirq-ioapic-edge  rtc0
  9:          0          0          0          0  xen-pirq-ioapic-level  acpi
 12:          4          0          0          0  xen-pirq-ioapic-edge  i8042
 14:         15          0          0          0  xen-pirq-ioapic-edge  ata_piix
 15:          0          0          0          0  xen-pirq-ioapic-edge  ata_piix
 16:         22          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb1
 18:          0          0          0          0  xen-pirq-ioapic-level  i801_smbus
 19:        109          0          0          0  xen-pirq-ioapic-level  ata_piix
 23:         26          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb2
 40:       4082          0          0          0  xen-percpu-virq      timer0
 41:          1          0          0          0  xen-percpu-ipi       spinlock0
 42:       3815          0          0          0  xen-percpu-ipi       resched0
 43:        243          0          0          0  xen-percpu-ipi       callfunc0
 44:          0          0          0          0  xen-percpu-virq      debug0
 45:         26          0          0          0  xen-percpu-ipi       callfuncsingle0
 46:          0          0          0          0  xen-percpu-ipi       irqwork0
 47:          0       2286          0          0  xen-percpu-virq      timer1
 48:          0          2          0          0  xen-percpu-ipi       spinlock1
 49:          0       3220          0          0  xen-percpu-ipi       resched1
 50:          0        357          0          0  xen-percpu-ipi       callfunc1
 51:          0          0          0          0  xen-percpu-virq      debug1
 52:          0        180          0          0  xen-percpu-ipi       callfuncsingle1
 53:          0          0          0          0  xen-percpu-ipi       irqwork1
 54:          0          0       3606          0  xen-percpu-virq      timer2
 55:          0          0          2          0  xen-percpu-ipi       spinlock2
 56:          0          0       3405          0  xen-percpu-ipi       resched2
 57:          0          0        231          0  xen-percpu-ipi       callfunc2
 58:          0          0          0          0  xen-percpu-virq      debug2
 59:          0          0         34          0  xen-percpu-ipi       callfuncsingle2
 60:          0          0          0          0  xen-percpu-ipi       irqwork2
 61:          0          0          0       2545  xen-percpu-virq      timer3
 62:          0          0          0          0  xen-percpu-ipi       spinlock3
 63:          0          0          0       2343  xen-percpu-ipi       resched3
 64:          0          0          0        348  xen-percpu-ipi       callfunc3
 65:          0          0          0          0  xen-percpu-virq      debug3
 66:          0          0          0        186  xen-percpu-ipi       callfuncsingle3
 67:          0          0          0          0  xen-percpu-ipi       irqwork3
 68:        169          0          0          0   xen-dyn-event     xenbus
 69:          0          0          0          0  xen-percpu-virq      xen-pcpu
 71:          0          0          0          0  xen-percpu-virq      mce
 72:        145          0          0          0  xen-percpu-virq      hvc_console
 73:          0          0          0          0  xen-pirq-msi       i915
 74:         19          0          0          0  xen-pirq-msi       eth0
 75:         87          0          0          0   xen-dyn-event     evtchn:xenstored
 76:          0          0          0          0   xen-dyn-event     evtchn:xenstored
NMI:          0          0          0          0   Non-maskable interrupts
LOC:          0          0          0          0   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
PMI:          0          0          0          0   Performance monitoring interrupts
IWI:          0          0          0          0   IRQ work interrupts
RTR:          0          0          0          0   APIC ICR read retries
RES:       3815       3221       3405       2343   Rescheduling interrupts
CAL:        269        537        265        534   Function call interrupts
TLB:          0          0          0          0   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:          1          1          1          1   Machine check polls
ERR:          0
MIS:          0
00000000-0000ffff : reserved
00010000-0009dfff : System RAM
0009e000-0009ebff : RAM buffer
0009ec00-000fffff : reserved
  000a0000-000bffff : PCI Bus 0000:00
  000c0000-000dffff : PCI Bus 0000:00
    000c0000-000cd7ff : Video ROM
    000cd800-000ce7ff : Adapter ROM
    000ce800-000cf7ff : Adapter ROM
  000f0000-000fffff : System ROM
00100000-1fffffff : System RAM
  01000000-01642f47 : Kernel code
  01642f48-01aa317f : Kernel data
  01b68000-01c71fff : Kernel bss
20000000-201fffff : reserved
20200000-3fffffff : System RAM
40000000-401fffff : reserved
40200000-bad7ffff : System RAM
bad80000-badc8fff : ACPI Non-volatile Storage
badc9000-badd0fff : ACPI Tables
badd1000-badf3fff : reserved
badf4000-badf5fff : System RAM
badf6000-bae05fff : reserved
bae06000-bae13fff : ACPI Non-volatile Storage
bae14000-bae3bfff : reserved
bae3c000-bae7efff : ACPI Non-volatile Storage
bae7f000-baffffff : System RAM
bb000000-bb7fffff : RAM buffer
bb800000-bf9fffff : reserved
bfa00000-ffffffff : PCI Bus 0000:00
  d0000000-dfffffff : 0000:00:02.0
  e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
    e0000000-efffffff : pnp 00:01
  fe000000-fe3fffff : 0000:00:02.0
  fe400000-fe4fffff : PCI Bus 0000:03
    fe400000-fe43ffff : 0000:03:00.0
      fe400000-fe43ffff : atl1c
  fe500000-fe5fffff : PCI Bus 0000:01
    fe500000-fe57ffff : 0000:01:00.0
    fe580000-fe5bffff : 0000:01:00.0
    fe5c0000-fe5dffff : 0000:01:00.0
    fe5e0000-fe5e3fff : 0000:01:00.0
  fe600000-fe603fff : 0000:00:1b.0
  fe604000-fe6040ff : 0000:00:1f.3
  fe605000-fe6053ff : 0000:00:1d.0
    fe605000-fe6053ff : ehci_hcd
  fe606000-fe6063ff : 0000:00:1a.0
    fe606000-fe6063ff : ehci_hcd
  fe607000-fe60700f : 0000:00:16.0
  fec00000-fec00fff : reserved
    fec00000-fec003ff : IOAPIC 0
  fed00000-fed003ff : HPET 0
  fed08000-fed08fff : pnp 00:0c
  fed10000-fed19fff : pnp 00:01
  fed1c000-fed3ffff : reserved
    fed1c000-fed1ffff : pnp 00:0c
    fed20000-fed3ffff : pnp 00:01
  fed90000-fed93fff : pnp 00:01
  fee00000-fee00fff : Local APIC
    fee00000-fee00fff : reserved
  ff000000-ffffffff : reserved
    ff000000-ffffffff : pnp 00:0c
100000000-10555efff : System RAM
10555f000-23fdfffff : Unusable memory
MemTotal:        3011524 kB
MemFree:         2635868 kB
Buffers:               0 kB
Cached:           285004 kB
SwapCached:            0 kB
Active:           103264 kB
Inactive:         197972 kB
Active(anon):      92564 kB
Inactive(anon):   114340 kB
Active(file):      10700 kB
Inactive(file):    83632 kB
Unevictable:       17360 kB
Mlocked:            5000 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         33520 kB
Mapped:             6464 kB
Shmem:            174728 kB
Slab:              25528 kB
SReclaimable:      12000 kB
SUnreclaim:        13528 kB
KernelStack:         808 kB
PageTables:         1508 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1505760 kB
Committed_AS:     278004 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      548756 kB
VmallocChunk:   34359187451 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     3151228 kB
DirectMap2M:           0 kB
19 Dec 18:55:36 ntpdate[2935]: adjust time server 17.171.4.13 offset 0.028462 sec
Wed Dec 19 18:55:37 UTC 2012
Dec 19 18:55:37 tst007 init: starting pid 2944, tty '/dev/tty0': '/bin/sh'
Dec 19 18:55:37 tst007 init: starting pid 2946, tty '/dev/ttyS0': '/bin/sh'


BusyBox v1.14.3 (2012-12-04 14:22:27 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

# 
Dec 19 18:55:37 tst007 init: starting pid 2947, tty '/dev/hvc0': '/bin/sh'
Dec 19 18:55:38 tst007 init: process '/bin/sh' (pid 2946) exited. Scheduling for restart.
Dec 19 18:55:38 tst007 init: starti
Dec 19 18:55:39 tst007 init: process '/bin/sh' (pid 2948) exited. Scheduling for restart.
Dec 19 18:55:39 tst007 init: starti/sh'
Dec 19 18:55:40 tst007 init: process '/bin/sh' (pid 2949) exited. Scheduling for restart.
Dec 19 18:55:40 tst007 init: starti/sh'
Dec 19 18:55:41 tst007 init: process '/bin/sh' (pid 2950) exited. Scheduling for restart.
Dec 19 18:55:41 tst007 init: starti/bin/sh'
Dec 19 18:55:42 tst007 init: process '/bin/sh' (pid 2951) exited. Scheduling for restart.
Dec 19 18:55:42 tst007 init: startityS0': '/bin/sh'
Dec 19 18:55:43 tst007 init: process '/bin/sh' (pid 2952) exited. Scheduling for restart.
Dec 19 18:55:43 tst007 init: startiev/ttyS0': '/bin/sh'
Dec 19 18:55:44 tst007 init: process '/bin/sh' (pid 2953) exited. Scheduling for restart.
Dec 19 18:55:44 tst007 init: starti tty '/dev/ttyS0': '/bin/sh'
[   21.237703] switch: port 1(eth0) entered forwarding state
Dec 19 18:55:45 tst007 init: process '/bin/sh' (pid 2954) exited. Scheduling for restart.
Dec 19 18:55:45 tst007 init: starti tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:46 tst007 init: process '/bin/sh' (pid 2955) exited. Scheduling for restart.
Dec 19 18:55:46 tst007 init: startiid 2956, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:46 tst007 sshd[2957]: WARNING: /etc/ssh/moduli does not exist, using fixed modulus
Dec 19 18:55:46 tst007 sshd[2958]: Connection closed by 192.168.101.16
Dec 19 18:55:47 tst007 init: process '/bin/sh' (pid 2956) exited. Scheduling for restart.
Dec 19 18:55:47 tst007 init: starting pid 2959, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:48 tst007 init: process '/bin/sh' (pid 2959) exited. Scheduling for restart.
Dec 19 18:55:48 tst007 init: starting pid 2960, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:49 tst007 init: process '/bin/sh' (pid 2960) exited. Scheduling for restart.
Dec 19 18:55:49 tst007 init: starting pid 2961, tty '/dev/ttyS0': '/bin/sh'
kill Dec 19 18:55:50 tst007 init: process '/bin/sh' (pid 2961) exited. Scheduling for restart.
Dec 19 18:55:50 tst007 init: starting pid 2962, tty '/dev/ttyS0': '/bin/sh'
-1 1
/bin/sh: 
Dec 19 18:55:51 tst007 init: process '/bin/sh' (pid 2962) exited. Scheduling for restart.
Dec 19 18:55:51 tst007 init: starting pid 2963, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:52 tst007 init: process '/bin/sh' (pid 2963) exited. Scheduling for restart.
Dec 19 18:55:52 tst007 init: starting pid 2964, tty '/dev/ttyS0': '/bin/sh'

/bin/sh: : not found
# kiDec 19 18:55:53 tst007 init: process '/bin/sh' (pid 2964) exited. Scheduling for restart.
Dec 19 18:55:53 tst007 init: starting pid 2965, tty '/dev/ttyS0': '/bin/sh'
ll -1 1
# Dec 19 18:55:54 tst007 init: reloading /etc/inittab

# echo 0 > /sys/bus
# find /sys -name cpu3/online
find: warning: Unix filenames usually don't contain slashes (though pathnames do).  That means that '-name `cpu3/online'' will probably evaluate to false all the time on this system.  You might find the '-wholename' test more useful, or perhaps '-samefile'.  Alternatively, if you are using GNU grep, you could use 'find ... -print0 | grep -FzZ `cpu3/online''.
# cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 1
initial apicid	: 1
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 2
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 2
apicid		: 2
initial apicid	: 2
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 2
apicid		: 3
initial apicid	: 3
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

# find /sys -name online
/sys/devices/system/cpu/cpu1/online
/sys/devices/system/cpu/cpu2/online
/sys/devices/system/cpu/cpu3/online
/sys/devices/system/cpu/online
/sys/devices/system/node/online
/sys/devices/system/xen_cpu/xen_cpu1/online
/sys/devices/system/xen_cpu/xen_cpu2/online
/sys/devices/system/xen_cpu/xen_cpu3/online
# cat /sys/devices/system/cpu/cpu3/online
1
# echo 0 > /sys/devices/system/cpu/cpu3/online
# 
# 
# echo 1 > /sys/devices/system/cpu/cpu3/online
[   73.324141] installing Xen timer for CPU 3
[   73.324236] cpu 3 spinlock event irqc_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   73.325026] Pid: 0, comm: swapper/3 Not tainted 3.7.0upstream #1
[   73.325033] Call Trace:
[   73.325047]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   73.325058]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   73.325074]  [<ffffffff81043d5d>] ? xen_force_evtchn_callback+0xd/0x10
[   73.325086]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   73.325097]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   73.325112]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   73.325116] BUG: unable to handle kernel NULL pointer dereference
[   73.325122]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   73.325133]  at 0000000000000004
[   73.325140] IP: [<ffffffff81367e7b>] acpi_processor_setup_cpuidle_cx+0x3f/0x105
[   73.325160] PGD aacf6067 PUD b0000067 PMD 0
[   73.325181] Oops: 0002 [#1] SMP
[   73.325194] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   73.325330] CPU 0
[   73.325337] Pid: 2947, comm: sh Tainted: G        W    3.7.0upstream #1 MSI MS-7680/H61M-P23 (MS-7680)
[   73.325345] RIP: e030:[<ffffffff81367e7b>]  [<ffffffff81367e7b>] acpi_processor_setup_cpuidle_cx+0x3f/0x105
[   73.325358] RSP: e02b:ffff8800ace7dcb8  EFLAGS: 00010202
[   73.325365] RAX: 0000000000010ae9 RBX: ffff880100f56c00 RCX: ffff880105380000
[   73.325374] RDX: 0000000000000003 RSI: 0000000000000200 RDI: ffff880100f56c00
[   73.325385] RBP: ffff8800ace7dcf8 R08: ffffffff81811ec0 R09: 0000000000000200
[   73.325395] R10: 0000000000000000 R11: 000000000000fffc R12: ffff880100f56c00
[   73.325406] R13: 00000000ffffff01 R14: 0000000000000000 R15: ffffffff81a8e1a0
[   73.325419] FS:  00007fb4785cd700(0000) GS:ffff880105200000(0000) knlGS:0000000000000000
[   73.325428] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[   73.325436] CR2: 0000000000000004 CR3: 00000000af063000 CR4: 0000000000002660
[   73.325443] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   73.325450] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   73.325457] Process sh (pid: 2947, threadinfo ffff8800ace7c000, task ffff8800aacd0810)
[   73.325463] Stack:
[   73.325468]  0000000000000150 ffff8800120d6200 ffff8800ace7dce8 ffff880100f56c00
[   73.325486]  0000000000000000 00000000ffffff01 0000000000000000 ffffffff81a8e1a0
[   73.325504]  ffff8800ace7dd28 ffffffff8136850b ffff8800ace7dd18 ffffffff810ad296
[   73.325521] Call Trace:
[   73.325559]  [<ffffffff8136850b>] acpi_processor_hotplug+0x7c/0x9f
[   73.325572]  [<ffffffff810ad296>] ? schedule_delayed_work_on+0x16/0x20
[   73.325583]  [<ffffffff81365d48>] acpi_cpu_soft_notify+0x90/0xca
[   73.325594]  [<ffffffff8163a96d>] notifier_call_chain+0x4d/0x70
[   73.325605]  [<ffffffff810bb1b9>] __raw_notifier_call_chain+0x9/0x10
[   73.325617]  [<ffffffff81093b2b>] __cpu_notify+0x1b/0x30
[   73.325627]  [<ffffffff8162d4d6>] _cpu_up+0x102/0x14a
[   73.325637]  [<ffffffff8162d5f7>] cpu_up+0xd9/0xec
[   73.325648]  [<ffffffff81613da4>] store_online+0x94/0xd0
[   73.325660]  [<ffffffff813f318b>] dev_attr_store+0x1b/0x20
[   73.325671]  [<ffffffff8120f644>] sysfs_write_file+0xf4/0x170
[   73.325682]  [<ffffffff8119ad28>] vfs_write+0xc8/0x190
[   73.325692]  [<ffffffff8119b57a>] sys_write+0x5a/0xa0
[   73.325702]  [<ffffffff8163eae9>] system_call_fastpath+0x16/0x1b
[   73.325708] Code: 89 fc 53 48 83 ec 18 8b 57 0c 89 d1 48 8b 0c cd 00 81 a9 81 4c 8b 34 08 8a 47 1c 84 c0 0f 89 ba 00 00 00 a8 01 0f 84 b2 00 00 00 <41> 89 56 04 83 3d 62 3a 73 00 00 b8 01 00 00 00 0f 45 05 56 3a
[   73.325900] RIP  [<ffffffff81367e7b>] acpi_processor_setup_cpuidle_cx+0x3f/0x105
[   73.325912]  RSP <ffff8800ace7dcb8>
[   73.325917] CR2: 0000000000000004
[   73.325939] ---[ end trace 81413fa6088e35ac ]---
Dec 19 18:56:37 tst007 init: process '/bin/sh' (pid 2947) exited. Scheduling for restart.
[   73.326476] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   73.326488] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   73.326638] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   73.326645] Call Trace:
[   73.326656]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   73.326667]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   73.326677]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   73.326686]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   73.326697]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   73.326707]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
Dec 19 18:56:37 tst007 init: starting pid 2978, tty '/dev/hvc0': '/bin/sh'


BusyBox v1.14.3 (2012-12-04 14:22:27 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

# 
[   74.567070] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   74.567084] Modules linked in: xen_evtchn iscsi_boot_sy xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   74.567257] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   74.567263] Call Trace:
[   74.567276]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   74.567286]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   74.567296]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   74.567306]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   74.567317]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   74.567327]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   75.066920] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   75.066933] Modules linked in: xen_evtchn iscsi_boot_sysi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   75.067106] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   75.067113] Call Trace:
[   75.067125]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   75.067135]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   75.067145]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   75.067154]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   75.067165]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   75.067175]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   76.566091] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   76.566105] Modules linked in: xen_evtchn iscsi_boot_syt xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   76.566277] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   76.566283] Call Trace:
[   76.566295]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   76.566305]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   76.566315]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   76.566325]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   76.566335]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   76.566345]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   77.065960] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   77.065974] Modules linked in: xen_evtchn iscsi_boot_sycsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   77.066146] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   77.066153] Call Trace:
[   77.066165]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   77.066175]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   77.066185]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   77.066194]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   77.066205]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   77.066215]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   77.323438] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   77.323453] Modules linked in: xen_evtchn iscsi_boot_sy2c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   77.323625] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   77.323631] Call Trace:
[   77.323643]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   77.323654]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   77.323663]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   77.323673]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   77.323684]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   77.323694]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   78.565081] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   78.565095] Modules linked in: xen_evtchn iscsi_boot_syb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   78.565267] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   78.565274] Call Trace:
[   78.565286]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   78.565296]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   78.565306]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   78.565315]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   78.565326]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   78.565336]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   79.064941] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   79.064955] Modules linked in: xen_evtchn iscsi_boot_synsport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   79.065127] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   79.065134] Call Trace:
[   79.065145]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   79.065156]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   79.065166]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   79.065175]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   79.065186]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   79.065196]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   80.564097] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   80.564111] Modules linked in: xen_evtchn iscsi_boot_syps sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   80.564283] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   80.564290] Call Trace:
[   80.564303]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   80.564313]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   80.564323]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   80.564333]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   80.564344]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   80.564354]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   81.063947] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   81.063961] Modules linked in: xen_evtchn iscsi_boot_syscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   81.064133] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   81.064140] Call Trace:
[   81.064151]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   81.064161]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   81.064171]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   81.064181]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   81.064192]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   81.064202]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   81.321438] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   81.321452] Modules linked in: xen_evtchn iscsi_boot_syum_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   81.321624] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   81.321631] Call Trace:
[   81.321643]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   81.321653]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   81.321663]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   81.321672]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   81.321683]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   81.321693]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   82.563080] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   82.563094] Modules linked in: xen_evtchn iscsi_boot_syfillrect syscopyarea xenfs xen_privcmd mperf
[   82.563266] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   82.563272] Call Trace:
[   82.563284]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   82.563294]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   82.563304]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   82.563314]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   82.563324]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   82.563334]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   83.062923] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   83.062937] Modules linked in: xen_evtchn iscsi_boot_sy2c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   83.063109] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   83.063116] Call Trace:
[   83.063127]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   83.063138]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   83.063148]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   83.063157]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   83.063168]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   83.063178]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   84.562098] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   84.562112] Modules linked in: xen_evtchn iscsi_boot_sy syscopyarea xenfs xen_privcmd mperf
[   84.562285] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   84.562291] Call Trace:
[   84.562303]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   84.562313]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   84.562323]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   84.562332]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   84.562343]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   84.562353]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   85.061964] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   85.061978] Modules linked in: xen_evtchn iscsi_boot_syuveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   85.062150] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   85.062157] Call Trace:
[   85.062169]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   85.062179]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   85.062189]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   85.062198]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   85.062209]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   85.062219]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   85.319437] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   85.319451] Modules linked in: xen_evtchn iscsi_boot_sysr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   85.319623] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   85.319629] Call Trace:
[   85.319641]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   85.319651]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   85.319661]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   85.319671]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   85.319682]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   85.319692]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   86.561083] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   86.561097] Modules linked in: xen_evtchn iscsi_boot_sy xenfs xen_privcmd mperf
[   86.561269] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   86.561275] Call Trace:
[   86.561287]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   86.561297]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   86.561307]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   86.561317]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   86.561327]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   86.561337]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   87.060941] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   87.060955] Modules linked in: xen_evtchn iscsi_boot_syi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   87.061128] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   87.061135] Call Trace:
[   87.061147]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   87.061157]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   87.061167]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   87.061177]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   87.061187]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   87.061197]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   88.560096] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   88.560110] Modules linked in: xen_evtchn iscsi_boot_syen_privcmd mperf
[   88.560282] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   88.560288] Call Trace:
[   88.560300]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   88.560311]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   88.560320]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   88.560330]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   88.560341]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   88.560351]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   89.059963] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   89.059977] Modules linked in: xen_evtchn iscsi_boot_sy ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   89.060153] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   89.060160] Call Trace:
[   89.060172]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   89.060182]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   89.060192]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   89.060201]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   89.060212]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   89.060222]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   89.317439] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   89.317452] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   89.317625] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   89.317631] Call Trace:
[   89.317643]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   89.317653]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   89.317663]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   89.317673]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   89.317684]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   89.317694]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   90.559083] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   90.559097] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   90.559269] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   90.559276] Call Trace:
[   90.559288]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   90.559298]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   90.559308]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   90.559318]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   90.559328]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   90.559338]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   91.058924] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   91.058937] Modules linked in: xen_evtchn iscsi_boot_sysr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   91.059110] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   91.059117] Call Trace:
[   91.059128]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   91.059139]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   91.059148]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   91.059158]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   91.059169]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   91.059179]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
telnet> Connection closed.
[Connecting to system 7 ]

--Kj7319i9nmIyA2yE
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="config-3.8.config"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.7.0 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_EXPERIMENTAL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION="upstream"
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
# CONFIG_TICK_CPU_ACCOUNTING is not set
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_MEMCG is not set
# CONFIG_CGROUP_HUGETLB is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
CONFIG_RT_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_SYSFS_DEPRECATED_V2 is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_ROOT_UID=0
CONFIG_INITRAMFS_ROOT_GID=0
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
# CONFIG_INITRAMFS_COMPRESSION_NONE is not set
CONFIG_INITRAMFS_COMPRESSION_GZIP=y
# CONFIG_INITRAMFS_COMPRESSION_BZIP2 is not set
# CONFIG_INITRAMFS_COMPRESSION_LZMA is not set
# CONFIG_INITRAMFS_COMPRESSION_XZ is not set
# CONFIG_INITRAMFS_COMPRESSION_LZO is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
# CONFIG_EXPERT is not set
CONFIG_HAVE_UID16=y
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_GENERIC_KERNEL_THREAD=y
CONFIG_GENERIC_KERNEL_EXECVE=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_MODULES_USE_ELF_RELA=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_PARAVIRT_GUEST=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_KVM_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=y
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
# CONFIG_EFI_STUB is not set
CONFIG_SECCOMP=y
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
# CONFIG_KEXEC_JUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
# CONFIG_ACPI_PROCFS_POWER is not set
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_PROC_EVENT=y
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_I2C=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
CONFIG_ACPI_BLACKLIST_YEAR=0
CONFIG_ACPI_DEBUG=y
# CONFIG_ACPI_DEBUG_FUNC_TRACE is not set
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
# CONFIG_ACPI_HOTPLUG_MEMORY is not set
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=y
# CONFIG_CPU_FREQ_STAT is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
CONFIG_X86_SPEEDSTEP_CENTRINO=m
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
# CONFIG_HOTPLUG_PCI_PCIE is not set
CONFIG_PCIEAER=y
CONFIG_PCIE_ECRC=y
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_IOAPIC is not set
CONFIG_PCI_LABEL=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
# CONFIG_HOTPLUG_PCI_ACPI is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
# CONFIG_IPV6_PRIVACY is not set
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_ADVANCED is not set

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_IRC=y
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
# CONFIG_NF_NAT_AMANDA is not set
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
# CONFIG_NF_NAT_TFTP is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_LOG=m
# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
CONFIG_IP_NF_MANGLE=y
# CONFIG_IP_NF_RAW is not set

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
# CONFIG_IP6_NF_RAW is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_HAVE_NET_DSA=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_NETPRIO_CGROUP is not set
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
# CONFIG_DEVTMPFS is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_DMA_SHARED_BUFFER=y

#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_VIRTIO_BLK=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_INTEL_MID_PTI is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
CONFIG_BLK_DEV_3W_XXXX_RAID=m
# CONFIG_SCSI_HPSA is not set
CONFIG_SCSI_3W_9XXX=m
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC7XXX_OLD=m
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
# CONFIG_SCSI_MVUMI is not set
CONFIG_SCSI_DPT_I2O=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS_LOGGING=y
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
# CONFIG_FCOE_FNIC is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_EATA=m
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=m
CONFIG_SCSI_GDTH=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
# CONFIG_SCSI_IPR_TRACE is not set
# CONFIG_SCSI_IPR_DUMP is not set
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
# CONFIG_TCM_QLA2XXX is not set
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_DC390T=m
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_SRP=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_SATA_INIC162X=m
# CONFIG_SATA_ACARD_AHCI is not set
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_HIGHBANK is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARASAN_CF is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
CONFIG_PATA_EFAR=m
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MARVELL=m
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
CONFIG_PATA_SCH=m
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=m
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
CONFIG_PATA_PCMCIA=m
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
# CONFIG_MULTICORE_RAID456 is not set
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_DM_THIN_PROVISIONING is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_RAID is not set
# CONFIG_DM_LOG_USERSPACE is not set
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_UEVENT is not set
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=m
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
# CONFIG_FIREWIRE is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
CONFIG_MII=m
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
# CONFIG_VXLAN is not set
CONFIG_NETCONSOLE=m
# CONFIG_NETCONSOLE_DYNAMIC is not set
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
CONFIG_SUNGEM_PHY=m
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
# CONFIG_PCMCIA_3C574 is not set
# CONFIG_PCMCIA_3C589 is not set
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
CONFIG_ATL1C=m
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
CONFIG_BNX2X=m
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
CONFIG_IGB_DCA=y
CONFIG_IGBVF=m
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=y
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_ZNET is not set
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=m
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
CONFIG_NET_PACKET_ENGINE=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
# CONFIG_SEEQ8005 is not set
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y

#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
CONFIG_MARVELL_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_POLLDEV is not set
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_WACOM is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
# CONFIG_VT_HW_CONSOLE_BINDING is not set
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=16
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_RSA is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_KGDB_NMI is not set
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_TPM=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set

#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=m
# CONFIG_TCG_TIS_I2C_INFINEON is not set
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_INTEL_MID is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_FAIR_SHARE is not set
CONFIG_STEP_WISE=y
# CONFIG_USER_SPACE is not set
# CONFIG_CPU_THERMAL is not set
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_LPC_SCH is not set
# CONFIG_LPC_ICH is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_TTM=m
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=m
CONFIG_DRM_RADEON_KMS=y
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I810 is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_STUB_POULSBO is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
# CONFIG_FB_BOOT_VESA_SUPPORT is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
# CONFIG_FB_WMT_GE_ROPS is not set
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=y
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_GEODE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_FB_AUO_K190X is not set
# CONFIG_EXYNOS_VIDEO is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_GENERIC=y
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630 is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LP855X is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
# CONFIG_LOGO is not set
# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=m
CONFIG_HIDRAW=y
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=m
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=m
# CONFIG_HID_ORTEK is not set
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=m
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_HID_SPEEDLINK is not set
CONFIG_HID_SUNPLUS=m
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=m
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB_ARCH_HAS_XHCI=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
CONFIG_USB_OHCI_HCD=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_CHIPIDEA is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
CONFIG_USB_PRINTER=y
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set

#
# USB Physical Layer drivers
#
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_LP5523 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA9633 is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_DELL_NETBOOKS is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_OT200 is not set
# CONFIG_LEDS_BLINKM is not set
# CONFIG_LEDS_TRIGGERS is not set

#
# LED Triggers
#
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y

#
# Reporting subsystems
#
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=y
# CONFIG_EDAC_MCE_INJ is not set
# CONFIG_EDAC_MM_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
# CONFIG_INTEL_MID_DMAC is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_TIMB_DMA is not set
# CONFIG_PCH_DMA is not set
CONFIG_DMA_ENGINE=y

#
# DMA Clients
#
CONFIG_NET_DMA=y
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DCA=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
CONFIG_VIRTIO=y

#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=y
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_STAGING=y
# CONFIG_ET131X is not set
# CONFIG_SLICOSS is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_ECHO is not set
# CONFIG_COMEDI is not set
# CONFIG_ASUS_OLED is not set
# CONFIG_RTS5139 is not set
# CONFIG_TRANZPORT is not set
# CONFIG_IDE_PHISON is not set
# CONFIG_DX_SEP is not set
CONFIG_ZRAM=y
# CONFIG_ZRAM_DEBUG is not set
CONFIG_ZCACHE=y
CONFIG_ZSMALLOC=y
# CONFIG_FB_SM7XX is not set
# CONFIG_CRYSTALHD is not set
# CONFIG_FB_XGI is not set
# CONFIG_ACPI_QUICKSTART is not set
# CONFIG_USB_ENESTORAGE is not set
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_USB_WPAN_HCD is not set
# CONFIG_WIMAX_GDM72XX is not set
CONFIG_NET_VENDOR_SILICOM=y
# CONFIG_SBYPASS is not set
# CONFIG_BPCTL is not set
# CONFIG_CED1401 is not set
# CONFIG_DGRP is not set
# CONFIG_SB105X is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_DELL_WMI is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WMI is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_ASUS_WMI is not set
CONFIG_ACPI_WMI=m
# CONFIG_MSI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_MXM_WMI=m
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_APPLE_GMUX is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers (EXPERIMENTAL)
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers (EXPERIMENTAL)
#
# CONFIG_VIRT_DRIVERS is not set
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_EFI_VARS=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=m
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
# CONFIG_FUSE_FS is not set
CONFIG_GENERIC_ACL=y

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
CONFIG_MAGIC_SYSRQ=y
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_SHIRQ is not set
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_HARDLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
CONFIG_DEBUG_STACK_USAGE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_LIST is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=21
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_LATENCYTOP is not set
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_EVENT_POWER_TRACING_DEPRECATED=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_FTRACE_SYSCALLS is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
# CONFIG_UPROBE_EVENT is not set
CONFIG_PROBE_EVENTS=y
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_LOW_LEVEL_TRAP is not set
# CONFIG_KGDB_KDB is not set
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_DEBUG is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# CONFIG_SECURITY_NETWORK_XFRM is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65534
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_IMA is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_ASYNC_TX_DISABLE_PQ_VAL_DMA=y
CONFIG_ASYNC_TX_DISABLE_XOR_VAL_DMA=y
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C_X86_64=y
CONFIG_CRYPTO_CRC32C_INTEL=m
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_X86_64 is not set
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=y

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
# CONFIG_ASYMMETRIC_KEY_TYPE is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_VHOST_NET=y
# CONFIG_TCM_VHOST is not set
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_PERCPU_RWSEM=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_T10DIF=m
# CONFIG_CRC_ITU_T is not set
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
# CONFIG_CRC8 is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
# CONFIG_AVERAGE is not set
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set

--Kj7319i9nmIyA2yE
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--Kj7319i9nmIyA2yE--


From xen-devel-bounces@lists.xen.org Wed Dec 19 19:24:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlPFb-0000fi-2i; Wed, 19 Dec 2012 19:24:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlPFZ-0000fd-8W
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:24:18 +0000
Received: from [85.158.143.99:63917] by server-2.bemta-4.messagelabs.com id
	61/43-30861-06412D05; Wed, 19 Dec 2012 19:24:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1355945053!19025531!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15936 invoked from network); 19 Dec 2012 19:24:14 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 19:24:14 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJJO9hm022844
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 19:24:10 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJJO8SU004676
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 19:24:08 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJJO7sf030695; Wed, 19 Dec 2012 13:24:08 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 11:24:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8F9D51C032B; Wed, 19 Dec 2012 14:24:05 -0500 (EST)
Date: Wed, 19 Dec 2012 14:24:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Message-ID: <20121219192405.GA24729@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
	<1355936517.10526.28.camel@iceland>
	<20121219174000.GA28570@phenom.dumpdata.com>
	<1355939543.10526.30.camel@iceland>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Kj7319i9nmIyA2yE"
Content-Disposition: inline
In-Reply-To: <1355939543.10526.30.camel@iceland>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--Kj7319i9nmIyA2yE
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 19, 2012 at 05:52:23PM +0000, Wei Liu wrote:
> On Wed, 2012-12-19 at 17:40 +0000, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 19, 2012 at 05:01:57PM +0000, Wei Liu wrote:
> > > On Wed, 2012-12-19 at 16:04 +0000, Konrad Rzeszutek Wilk wrote:
> > > > On Thu, Dec 13, 2012 at 03:12:17PM +0000, Wei Liu wrote:
> > > > > Hi Konrad
> > > > >
> > > > > I encountered a bug when trying to bring offline a cpu then online it
> > > > > again in HVM. As I'm not very familiar with HVM stuffs I cannot come up
> > > > > with a quick fix.
> > > >
> > > > I took your two patches that you posted and they are in v3.8 now.
> > > >
> > > > It seems that there are bugs in the offline/online code though.
> > > >
> > > > I did this:
> > > > # echo 0 > /sys/devices/system/cpu/cpu3/online
> > > > # echo 1 > /sys/devices/system/cpu/cpu3/online
> > > >
> > > > With a PV guest and it blows up (with or without your patches).
> > > >
> > > > Have you seen something similar to this:
> > > >
> > > > [  106.166795] BUG: scheduling while atomic: swapper/2/0/0x00000000
> > > > [  106.167168] microcode: CPU2 sig=0x206a7, pf=0x2, revision=0x17
> > > > [  106.167566] Modules linked in: sg sd_mod dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c radeon fbcon tileblit font bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd [last unloaded: dump_dma]
> > > > [  106.169286] Pid: 0, comm: swapper/2 Tainted: G           O 3.5.0-rc3upstream-00139-gb1849b3-dirty #1
> > >
> > > Can you tell me which tree you're using? I cannot find cs:gb1849b3 in my
> > > repository.
> >
> > Oh, .. that might have been me messing around. I would just do v3.7 and
> > try that out.
> > >
>
> Just played with 3.7 several times, no oops so far.
>
> Wei.

I also tried v3.8 - same thing. Here is the serial output. And attached
is the config file.


Trying 192.168.101.14...
Connected to maxsrv1.
Escape character is '^]'.

PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
boot:
Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
(XEN) Xen version 4.1.3-rc1-pre (konrad@dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) Wed Dec 19 13:47:18 EST 2012
(XEN) Latest ChangeSet: Mon Feb 13 17:57:47 2012 +0000 23225:f2543f449a49
(XEN) Bootloader: unknown
(XEN) Command line: cpuinfo conring_size=1048576 dom0_mem=max:3G cpufreq=verbose,performance com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009ec00 (usable)
(XEN)  000000000009ec00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 0000000020000000 (usable)
(XEN)  0000000020000000 - 0000000020200000 (reserved)
(XEN)  0000000020200000 - 0000000040000000 (usable)
(XEN)  0000000040000000 - 0000000040200000 (reserved)
(XEN)  0000000040200000 - 00000000bad80000 (usable)
(XEN)  00000000bad80000 - 00000000badc9000 (ACPI NVS)
(XEN)  00000000badc9000 - 00000000badd1000 (ACPI data)
(XEN)  00000000badd1000 - 00000000badf4000 (reserved)
(XEN)  00000000badf4000 - 00000000badf6000 (usable)
(XEN)  00000000badf6000 - 00000000bae06000 (reserved)
(XEN)  00000000bae06000 - 00000000bae14000 (ACPI NVS)
(XEN)  00000000bae14000 - 00000000bae3c000 (reserved)
(XEN)  00000000bae3c000 - 00000000bae7f000 (ACPI NVS)
(XEN)  00000000bae7f000 - 00000000bb000000 (usable)
(XEN)  00000000bb800000 - 00000000bfa00000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed40000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0450, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT BADC9068, 0054 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP BADD0308, 00F4 (r4 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT BADC9150, 71B5 (r2 ALASKA    A M I       15 INTL 20051117)
(XEN) ACPI: FACS BAE0BF80, 0040
(XEN) ACPI: APIC BADD0400, 0072 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT BADD0478, 0102 (r1 AMICPU     PROC        1 MSFT  3000001)
(XEN) ACPI: MCFG BADD0580, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET BADD05C0, 0038 (r1 ALASKA    A M I  1072009 AMI.        4)
(XEN) ACPI: ASF! BADD05F8, 00A0 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) System RAM: 8104MB (8299140kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fcde0
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x01] enabled)
(XEN) Processor #1 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) Processor #3 6:10 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255
(XEN) PCI: Not using MMCONFIG.
(XEN) Table is not found!
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Initializing CPU#0
(XEN) Detected 3093.056 MHz processor.
(XEN) Initing memory sharing.
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) CPU0: Thermal monitoring enabled (TM1)
(XEN) Intel machine check reporting enabled
(XEN) I/O virtualisation disabled
(XEN) CPU0: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 1048576 KiB.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) EPT supports 2MB super page.
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging detected.
(XEN) Booting processor 1/1 eip 7c000
(XEN) Initializing CPU#1
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 0
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 3072K
(XEN) CPU1: Thermal monitoring enabled (TM1)
(XEN) CPU1: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Booting processor 2/2 eip 7c000
(XEN) Initializing CPU#2
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 3072K
(XEN) CPU2: Thermal monitoring enabled (TM1)
(XEN) CPU2: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Booting processor 3/3 eip 7c000
(XEN) Initializing CPU#3
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 256K
(XEN) CPU: L3 cache: 3072K
(XEN) CPU3: Thermal monitoring enabled (TM1)
(XEN) CPU3: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz stepping 07
(XEN) Brought up 4 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x208f000
(XEN) PHYSICAL ME00 (702942 pages to be allocated)
(XEN)  Init. ramdisk: 000000022f7de000->000000023fdff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8208f000
(XEN)  Init. ramdisk: ffffffff8208f000->ffffffff926b0200
(XEN)  Phys-Mach map: ffffffff926b1000->ffffffff92cb1000
(XEN)  Start info:    ffffffff92cb1000->ffffffff92cb14b4
(XEN)  Page tables:   ffffffff92cb2000->ffffffff92d4d000
(XEN)  Boot stack:    ffffffff92d4d000->ffffffff92d4e000
(XEN)  TOTAL:         ffffffff80000000->ffffffff93000000
(XEN)  ENTRY ADDRESS: ffffffff81aba210
(XEN) Dom0 has maximum 4 VCPUs
(XEN) Scrubbing Free RAM: ......................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 220kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.7.0upstream (konrad@phenom.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Wed Dec 1
[    0.000000] Command line: debug console=hvc0 loglevel=10 xen-pciback.hide=(01:00.0)
[    0.000000] Freeing 9e-100 pfn range: 98 pages freed
[    0.000000] 1-1 mapping on 9e->100
[    0.000000] Freeing 20000-20200 pfn range: 512 pages freed
[    0.000000] 1-1 mapping on 20000->20200
[    0.000000] Freeing 40000-40200 pfn range: 512 pages freed
[    0.000000] 1-1 mapping on 40000->40200
[    0.000000] Freeing bad80-badf4 pfn range: 116 pages freed
[    0.000000] 1-1 mapping on bad80->badf4
[    0.000000] Freeing badf6-bae7f pfn range: 137 pages freed
[    0.000000] 1-1 mapping on badf6->bae7f
[    0.000000] Freeing bb000-c0000 pfn range: 20480 pages freed
[    0.000000] 1-1 mapping on bb000->100000
[    0.000000] Released 21855 pages of unused memory
[    0.000000] Set 283999 page(s) to 1-1 mapping
[    0.000000] Populating 100000-10555f pfn range: 21855 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x000000000009dfff] usable
[    0.000000] Xen: [mem 0x000000000009ec00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x000000001fffffff] usable
[    0.000000] Xen: [mem 0x0000000020000000-0x00000000201fffff] reserved
[    0.000000] Xen: [mem 0x0000000020200000-0x000000003fffffff] usable
[    0.000000] Xen: [mem 0x0000000040000000-0x00000000401fffff] reserved
[    0.000000] Xen: [mem 0x0000000040200000-0x00000000bad7ffff] usable
[    0.000000] Xen: [mem 0x00000000bad80000-0x00000000badc8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000badc9000-0x00000000badd0fff] ACPI data
[    0.000000] Xen: [mem 0x00000000badd1000-0x00000000badf3fff] reserved
[    0.000000] Xen: [mem 0x00000000badf4000-0x00000000badf5fff] usable
[    0.000000] Xen: [mem 0x00000000badf6000-0x00000000bae05fff] reserved
[    0.000000] Xen: [mem 0x00000000bae06000-0x00000000bae13fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000bae14000-0x00000000bae3bfff] reserved
[    0.000000] Xen: [mem 0x00000000bae3c000-0x00000000bae7efff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000bae7f000-0x00000000baffffff] usable
[    0.000000] Xen: [mem 0x00000000bb800000-0x00000000bf9fffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed3ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000010555efff] usable
[    0.000000] Xen: [mem 0x000000010555f000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI 2.7 present.
[    0.000000] DMI: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0 03/14/2011
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x10555f max_arch_pfn = 0x400000000
[    0.000000] e820: last_pfn = 0xbb000 max_arch_pfn = 0x400000000
[    0.000000] initial memory mapped: [mem 0x00000000-0x126b0fff]
[    0.000000] Base memory trampoline at [ffff880000098000] 98000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0xbaffffff]
[    0.000000]  [mem 0x00000000-0xbaffffff] page 4k
[    0.000000] kernel direct mapping tables up to 0xbaffffff @ [mem 0x00a24000-0x00ffffff]
[    0.000000] xen: setting RW the range f66000 - 1000000
[    0.000000] init_memory_mapping: [mem 0x100000000-0x10555efff]
[    0.000000]  [mem 0x100000000-0x10555efff] page 4k
[    0.000000] kernel direct mapping tables up to 0x10555efff @ [mem 0xbafd3000-0xbaffffff]
[    0.000000] xen: setting RW the range bafff000 - bb000000
[    0.000000] RAMDISK: [mem 0x0208f000-0x126b0fff]
[    0.000000] ACPI: RSDP 00000000000f0450 00024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000badc9068 00054 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000badd0308 000F4 (v04 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000badc9150 071B5 (v02 ALASKA    A M I 00000015 INTL 20051117)
[    0.000000] ACPI: FACS 00000000bae0bf80 00040
[    0.000000] ACPI: APIC 00000000badd0400 00072 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000badd0478 00102 (v01 AMICPU     PROC 00000001 MSFT 03000001)
[    0.000000] ACPI: MCFG 00000000badd0580 0003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000badd05c0 00038 (v01 ALASKA    A M I 01072009 AMI. 00000004)
[    0.000000] ACPI: ASF! 00000000badd05f8 000A0 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000010555efff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x10555efff]
[    0.000000]   NODE_DATA [mem 0x10555b000-0x10555efff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x10555efff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009dfff]
[    0.000000]   node   0: [mem 0x00100000-0x1fffffff]
[    0.000000]   node   0: [mem 0x20200000-0x3fffffff]
[    0.000000]   node   0: [mem 0x40200000-0xbad7ffff]
[    0.000000]   node   0: [mem 0xbadf4000-0xbadf5fff]
[    0.000000]   node   0: [mem 0xbae7f000-0xbaffffff]
[    0.000000]   node   0: [mem 0x100000000-0x10555efff]
[    0.000000] On node 0 totalpages: 786416
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 1352 pages reserved
[    0.000000]   DMA zone: 2574 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 14280 pages used for memmap
[    0.000000]   DMA32 zone: 746299 pages, LIFO batch:31
[    0.000000]   Normal zone: 299 pages used for memmap
[    0.000000]   Normal zone: 21556 pages, LIFO batch:3
[    0.000000] ACPI: PM-Timer IO Port: 0x408
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 000000000009e000 - 000000000009f000
[    0.000000] PM: Registered nosave memory: 000000000009f000 - 0000000000100000
[    0.000000] PM: Registered nosave memory: 0000000020000000 - 0000000020200000
[    0.000000] PM: Registered nosave memory: 0000000040000000 - 0000000040200000
[    0.000000] PM: Registered nosave memory: 00000000bad80000 - 00000000badc9000
[    0.000000] PM: Registered nosave memory: 00000000badc9000 - 00000000badd1000
[    0.000000] PM: Registered nosave memory: 00000000badd1000 - 00000000badf4000
[    0.000000] PM: Registered nosave memory: 00000000badf6000 - 00000000bae06000
[    0.000000] PM: Registered nosave memory: 00000000bae06000 - 00000000bae14000
[    0.000000] PM: Registered nosave memory: 00000000bae14000 - 00000000bae3c000
[    0.000000] PM: Registered nosave memory: 00000000bae3c000 - 00000000bae7f000
[    0.000000] PM: Registered nosave memory: 00000000bb000000 - 00000000bb800000
[    0.000000] PM: Registered nosave memory: 00000000bb800000 - 00000000bfa00000
[    0.000000] PM: Registered nosave memory: 00000000bfa00000 - 00000000fec00000
[    0.000000] PM: Registered nosave memory: 00000000fec00000 - 00000000fec01000
[    0.000000] PM: Registered nosave memory: 00000000fec01000 - 00000000fed1c000
[    0.000000] PM: Registered nosave memory: 00000000fed1c000 - 00000000fed40000
[    0.000000] PM: Registered nosave memory: 00000000fed40000 - 00000000fee00000
[    0.000000] PM: Registered nosave memory: 00000000fee00000 - 00000000fee01000
[    0.000000] PM: Registered nosave memory: 00000000fee01000 - 00000000ff000000
[    0.000000] PM: Registered nosave memory: 00000000ff000000 - 0000000100000000
[    0.000000] e820: [mem 0xbfa00000-0xfebfffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.1.3-rc1-pre (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff880105200000 s84288 r8192 d22208 u524288
[    0.000000] pcpu-alloc: s84288 r8192 d22208 u524288 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3
[    1.359907] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 770429
[    1.359910] Policy zone: Normal
[    1.359912] Kernel command line: debug console=hvc0 loglevel=10 xen-pciback.hide=(01:00.0)
[    1.360197] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    1.360202] __ex_table already sorted, skipping sort
[    1.382802] software IO TLB [mem 0xb6d80000-0xbad7ffff] (64MB) mapped at [ffff8800b6d80000-ffff8800bad7ffff]
[    1.391629] Memory: 2740388k/4281724k available (6411k kernel code, 1136060k absent, 405276k reserved, 4480k data, 752k init)
[    1.391712] Hierarchical RCU implementation.
[    1.391715] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
[    1.391724] NR_IRQS:33024 nr_irqs:712 16
[    1.391792] xen: sci override: global_irq=9 trigger=0 polarity=0
[    1.391795] xen: registering gsi 9 triggering 0 polarity 0
[    1.391804] xen: --> pirq=9 -> irq=9 (gsi=9)
[    1.391809] xen: acpi sci 9
[    1.391813] xen: --> pirq=1 -> irq=1 (gsi=1)
[    1.391817] xen: --> pirq=2 -> irq=2 (gsi=2)
[    1.391820] xen: --> pirq=3 -> irq=3 (gsi=3)
[    1.391824] xen: --> pirq=4 -> irq=4 (gsi=4)
[    1.391827] xen: --> pirq=5 -> irq=5 (gsi=5)
[    1.391831] xen: --> pirq=6 -> irq=6 (gsi=6)
[    1.391835] xen: --> pirq=7 -> irq=7 (gsi=7)
[    1.391838] xen: --> pirq=8 -> irq=8 (gsi=8)
[    1.391842] xen: --> pirq=10 -> irq=10 (gsi=10)
[    1.391846] xen: --> pirq=11 -> irq=11 (gsi=11)
[    1.391849] xen: --> pirq=12 -> irq=12 (gsi=12)
[    1.391853] xen: --> pirq=13 -> irq=13 (gsi=13)
[    1.391857] xen: --> pirq=14 -> irq=14 (gsi=14)
[    1.391860] xen: --> pirq=15 -> irq=15 (gsi=15)
[    1.393249] Console: colour VGA+ 80x25
[    1.393530] console [hvc0] enabled
[    1.393557] Xen: using vcpuop timer interface
[    1.393562] installing Xen timer for CPU 0
[    1.393582] tsc: Detected 3093.056 MHz processor
[    1.393588] Calibrating delay loop (skipped), value calculated using timer frequency.. 6186.11 BogoMIPS (lpj=3093056)
[    1.393594] pid_max: default: 32768 minimum: 301
[    1.393631] Security Framework initialized
[    1.393635] SELinux:  Initializing.
[    1.393645] SELinux:  Starting in permissive mode
[    1.394172] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    1.395081] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    1.395386] Mount-cache hash table entries: 256
[    1.395611] Initializing cgroup subsys cpuacct
[    1.395615] Initializing cgroup subsys freezer
[    1.395672] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    1.395672] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    1.395679] CPU: Physical Processor ID: 0
[    1.395681] CPU: Processor Core ID: 0
[    1.395685] mce: CPU supports 7 MCE banks
[    1.395726] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[    1.395726] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
[    1.395726] tlb_flushall_shift: 5
[    1.395794] Freeing SMP alternatives: 24k freed
[    1.397997] ACPI: Core revision 20120913
[    1.626158] cpu 0 spinlock event irq 41
[    1.626183] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only.
[    1.626430] NMI watchdog: disabled (cpu0): hardware events not enabled
[    1.626585] installing Xen timer for CPU 1
[    1.626598] cpu 1 spinlock event irq 48
[    1.626920] installing Xen timer for CPU 2
[    1.626932] cpu 2 spinlock event irq 55
[    1.627217] installing Xen timer for CPU 3
[    1.627229] cpu 3 spinlock event irq 62
[    1.627332] Brought up 4 CPUs
[    1.629982] PM: Registering ACPI NVS region [mem 0xbad80000-0xbadc8fff] (299008 bytes)
[    1.629994] PM: Registering ACPI NVS region [mem 0xbae06000-0xbae13fff] (57344 bytes)
[    1.629998] PM: Registering ACPI NVS region [mem 0xbae3c000-0xbae7efff] (274432 bytes)
[    1.630167] kworker/u:0 (31) used greatest stack depth: 5968 bytes left
[    1.630195] Grant tables using version 2 layout.
[    1.630207] Grant table initialized
[    1.630239] RTC time: 18:55:25, date: 12/19/12
[    1.630315] NET: Registered protocol family 16
[    1.630558] kworker/u:0 (35) used greatest stack depth: 5504 bytes left
[    1.631082] ACPI: bus type pci registered
[    1.631568] dca service started, version 1.12.1
[    1.631632] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[    1.631637] PCI: not using MMCONFIG
[    1.631640] PCI: Using configuration type 1 for base access
[    1.641990] bio: create slab <bio-0> at 0
[    1.642189] ACPI: Added _OSI(Module Device)
[    1.642193] ACPI: Added _OSI(Processor Device)
[    1.642196] ACPI: Added _OSI(3.0 _SCP Extensions)
[    1.642200] ACPI: Added _OSI(Processor Aggregator Device)
[    1.646439] ACPI: EC: Look up EC in DSDT
[    1.650923] ACPI: Executed 1 blocks of module-level executable AML code
[    1.655607] ACPI: SSDT 00000000bae0ac18 0038C (v01    AMI      IST 00000001 MSFT 03000001)
[    1.656750] ACPI: Dynamic OEM Table Load:
[    1.656756] ACPI: SSDT           (null) 0038C (v01    AMI      IST 00000001 MSFT 03000001)
[    1.656786] ACPI: SSDT 00000000bae0be18 00084 (v01    AMI      CST 00000001 MSFT 03000001)
[    1.657840] ACPI: Dynamic OEM Table Load:
[    1.657846] ACPI: SSDT           (null) 00084 (v01    AMI      CST 00000001 MSFT 03000001)
[    1.658539] ACPI: Interpreter enabled
[    1.658544] ACPI: (supports S0 S1 S3 S4 S5)
[    1.658570] ACPI: Using IOAPIC for interrupt routing
[    1.658597] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[    1.658675] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources
[    1.709169] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    1.719728] ACPI: No dock devices found.
[    1.719735] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    1.719976] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    1.720261] PCI host bridge to bus 0000:00
[    1.720266] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.720270] pci_bus 0000:00: root bus resource [io  0x0000-0x03af]
[    1.720274] pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7]
[    1.720277] pci_bus 0000:00: root bus resource [io  0x03b0-0x03df]
[    1.720311] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    1.720314] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    1.720318] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff]
[    1.720322] pci_bus 0000:00: root bus resource [mem 0xbfa00000-0xffffffff]
[    1.720342] pci 0000:00:00.0: [8086:0100] type 00 class 0x060000
[    1.720448] pci 0000:00:01.0: [8086:0101] type 01 class 0x060400
[    1.720555] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    1.720607] pci 0000:00:02.0: [8086:0102] type 00 class 0x030000
[    1.720640] pci 0000:00:02.0: reg 10: [mem 0xfe000000-0xfe3fffff 64bit]
[    1.720658] pci 0000:00:02.0: reg 18: [mem 0xd0000000-0xdfffffff 64bit pref]
[    1.720672] pci 0000:00:02.0: reg 20: [io  0xf000-0xf03f]
[    1.720811] pci 0000:00:16.0: [8086:1c3a] type 00 class 0x078000
[    1.720855] pci 0000:00:16.0: reg 10: [mem 0xfe607000-0xfe60700f 64bit]
[    1.721005] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[    1.721070] pci 0000:00:1a.0: [8086:1c2d] type 00 class 0x0c0320
[    1.721111] pci 0000:00:1a.0: reg 10: [mem 0xfe606000-0xfe6063ff]
[    1.721282] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    1.721332] pci 0000:00:1b.0: [8086:1c20] type 00 class 0x040300
[    2.441647] ehci_hcd 0000:00:1a.0: port 1 reset complete, port enabled
[    2.441660] ehci_hcd 0000:00:1a.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
mount: mount point /proc/bus/usb does not exist
[    2.492442] usb 1-1: new high-speed USB device number 2 using ehci_hcd
mount: mount point /sys/kernel/config does not exist
[    2.509458] core_filesystem (1208) used greatest stack depth: 4952 bytes left
FATAL: Error inserting xen_kbdfront (/lib/modules/3.7.0upstream/kernel/drivers/input/misc/xen-kbdfront.ko): No such device
FATAL: Error inserting xen_fbfront (/lib/modules/3.7.0upstream/kernel/drivers/video/xen-fbfront.ko): No such device
[    2.543646] ehci_hcd 0000:00:1a.0: port 1 reset complete, port enabled
[    2.543667] ehci_hcd 0000:00:1a.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
[    2.545494] Initialising Xen virtual ethernet driver.
[    2.567446] udevd (1266): /proc/1266/oom_adj is deprecated, please use /proc/1266/oom_score_adj instead.
[    2.607548] ehci_hcd 0000:00:1a.0: set dev address 2 for port 1
[    2.607560] ehci_hcd 0000:00:1a.0: LPM: no device attached
[    2.607838] usb 1-1: udev 2, busnum 1, minor = 1
[    2.607845] usb 1-1: New USB device found, idVendor=8087, idProduct=0024
[    2.607852] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    2.609689] usb 1-1: usb_probe_device
[    2.609699] usb 1-1: configuration #1 chosen from 1 choice
[    2.611359] usb 1-1: adding 1-1:1.0 (config #1, interface 0)
[    2.611568] hub 1-1:1.0: usb_probe_interface
[    2.611575] hub 1-1:1.0: usb_probe_interface - got id
[    2.611581] hub 1-1:1.0: USB hub found
[    2.613339] hub 1-1:1.0: 4 ports detected
[    2.613347] hub 1-1:1.0: standalone hub
[    2.613352] hub 1-1:1.0: individual port power switching
[    2.613358] hub 1-1:1.0: individual port over-current protection
[    2.613363] hub 1-1:1.0: Single TT
[    2.613367] hub 1-1:1.0: TT requires at most 8 FS bit times (666 ns)
[    2.613371] hub 1-1:1.0: power on to power good time: 100ms
[    2.613685] hub 1-1:1.0: local power source is good
[    2.614336] hub 1-1:1.0: enabling power on all ports
[    2.614827] hub 2-0:1.0: state 7 ports 2 chg 0002 evt 0000
[    2.614843] hub 2-0:1.0: port 1, status 0501, change 0000, 480 Mb/s
[    2.639505] xen: registering gsi 16 triggering 0 polarity 1
[    2.639517] Already setup the GSI :16
[    2.651431] SCSI subsystem initialized
udevd-work[1305]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory

[    2.660473] ACPI: bus type scsi registered
[    2.661221] xen: registering gsi 16 triggering 0 polarity 1
[    2.661228] Already setup the GSI :16
[    2.662192] pci 0000:00:00.0: Intel Sandybridge Chipset
[    2.662282] libata version 3.00 loaded.
[    2.662370] pci 0000:00:00.0: detected gtt size: 2097152K total, 262144K mappable
[    2.662601] atl1c 0000:03:00.0: version 1.0.1.0-NAPI
[    2.663465] pci 0000:00:00.0: detected 65536K stolen memory
[    2.663514] i915 0000:00:02.0: setting latency timer to 64
[    2.665638] ehci_hcd 0000:00:1d.0: port 1 reset complete, port enabled
[    2.665647] ehci_hcd 0000:00:1d.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
[    2.716412] usb 2-1: new high-speed USB device number 2 using ehci_hcd
[    2.717096] usb 1-1: link qh256-0001/ffff8800ac75ba40 start 1 [1/0 us]
[    2.767636] ehci_hcd 0000:00:1d.0: port 1 reset complete, port enabled
[    2.767650] ehci_hcd 0000:00:1d.0: GetStatus port:1 status 001005 0  ACK POWER sig=se0 PE CONNECT
[    2.777221] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
[    2.777230] [drm] Driver supports precise vblank timestamp query.
[    2.777299] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
Waiting for devices [  OK  ]
[    2.807610] [drm] Enabling RC6 states: RC6 on, RC6p off, RC6pp off
[    2.815015] ip (2328) used greatest stack depth: 3896 bytes left
[    2.830534] ehci_hcd 0000:00:1d.0: set dev address 2 for port 1
[    2.830546] ehci_hcd 0000:00:1d.0: LPM: no device attached
[    2.830780] usb 2-1: udev 2, busnum 2, minor = 129
[    2.830788] usb 2-1: New USB device found, idVendor=8087, idProduct=0024
[    2.830796] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    2.830980] usb 2-1: usb_probe_device
[    2.830989] usb 2-1: configuration #1 chosen from 1 choice
[    2.831416] usb 2-1: adding 2-1:1.0 (config #1, interface 0)
[    2.831531] hub 2-1:1.0: usb_probe_interface
[    2.831536] hub 2-1:1.0: usb_probe_interface - got id
[    2.831541] hub 2-1:1.0: USB hub found
[    2.831649] hub 2-1:1.0: 6 ports detected
[    2.831654] hub 2-1:1.0: standalone hub
[    2.831659] hub 2-1:1.0: individual port power switching
[    2.831663] hub 2-1:1.0: individual port over-current protection
[    2.831668] hub 2-1:1.0: Single TT
[    2.831672] hub 2-1:1.0: TT requires at most 8 FS bit times (666 ns)
[    2.831677] hub 2-1:1.0: power on to power good time: 100ms
[    2.831899] hub 2-1:1.0: local power source is good
[    2.833153] hub 2-1:1.0: enabling power on all ports
[    2.833934] hub 1-1:1.0: state 7 ports 4 chg 0000 evt 0000
[    2.834741] No connectors reported connected with modes
[    2.834749] [drm] Cannot find any crtc or sizes - going 1024x768
[    2.838554] fbcon: inteldrmfb (fb0) is primary device
[    2.917288] Console: switching to colour frame buffer device 128x48
[    2.934143] usb 2-1: link qh256-0001/ffff8800b0007340 start 1 [1/0 us]
[    2.934164] hub 2-1:1.0: state 7 ports 6 chg 0000 evt 0000
[    2.961792] fb0: inteldrmfb frame buffer device
[    2.961795] drm: registered panic notifier
[    2.961821] i915: No ACPI video bus found
[    2.962143] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[    2.962213] ata_piix 0000:00:1f.2: version 2.13
[    2.962243] xen: registering gsi 19 triggering 0 polarity 1
[    2.962275] xen: --> pirq=19 -> irq=19 (gsi=19)
[    2.962308] ata_piix 0000:00:1f.2: MAP [
[    2.962313]  P0 P2 P1 P3 ]
[    2.964634] modprobe (1419) used greatest stack depth: 3392 bytes left
[    3.112540] ata_piix 0000:00:1f.2: setting latency timer to 64
[    3.114540] scsi0 : ata_piix
[    3.115914] scsi1 : ata_piix
[    3.116906] ata1: SATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf0d0 irq 14
[    3.116923] ata2: SATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf0d8 irq 15
[    3.116975] xen: registering gsi 19 triggering 0 polarity 1
[    3.116990] Already setup the GSI :19
[    3.117015] ata_piix 0000:00:1f.5: MAP [
[    3.117024]  P0 -- P1 -- ]
[    3.267504] ata_piix 0000:00:1f.5: setting latency timer to 64
[    3.269836] scsi2 : ata_piix
[    3.271381] scsi3 : ata_piix
[    3.272665] ata3: SATA max UDMA/133 cmd 0xf0b0 ctl 0xf0a0 bmdma 0xf070 irq 19
[    3.272679] ata4: SATA max UDMA/133 cmd 0xf090 ctl 0xf080 bmdma 0xf078 irq 19
[    3.728513] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[    3.731935] ata3.00: ATAPI: HL-DT-ST DVDRAM GH24NS50, XP01, max UDMA/100
[    3.753810] ata3.00: configured for UDMA/100
[    4.123458] ata2.00: failed to resume link (SControl 0)
[    4.279470] ata4: failed to resume link (SControl 0)
[    4.290776] ata4: SATA link down (SStatus 4 SControl 0)
[    4.428514] ata1.01: failed to resume link (SControl 0)
[    4.579683] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[    4.579753] ata1.01: SATA link down (SStatus 0 SControl 0)
[    4.628360] ata1.00: ATA-7: ST3808110AS, 3.ADH, max UDMA/133
[    4.628375] ata1.00: 156250000 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    4.809185] ata1.00: failed to get Identify Device Data, Emask 0x1
[    5.025822] ata1.00: failed to get Identify Device Data, Emask 0x1
[    5.025872] ata1.00: configured for UDMA/133
[    5     ATA      ST3808110AS      3.AD PQ: 0 ANSI: 5
[    5.027796] ACPI: Invalid Power Resource to register!
[    5.130457] ata2.01: failed to resume link (SControl 0)
[    5.142312] ata2.00: SATA link down (SStatus 4 SControl 0)
[    5.142382] ata2.01: SATA link down (SStatus 0 SControl 0)
[    5.146256] scsi 2:0:0:0: CD-ROM            HL-DT-ST DVDRAM GH24NS50  XP01 PQ: 0 ANSI: 5

[    5.147459] ACPI: Invalid Power Resource to register!
[    5.156476] sd 0:0:0:0: [sda] 156250000 512-byte logical blocks: (80.0 GB/74.5 GiB)
[    5.156759] sd 0:0:0:0: [sda] Write Protect is off
[    5.156774] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    5.156878] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    5.166854] sr0: scsi3-mmc drive: 48x/48x writer dvd-ram cd/rw xa/form2 cdda tray
[    5.166864] cdrom: Uniform CD-ROM driver Revision: 3.20
[    5.167476] sr 2:0:0:0: Attached scsi CD-ROM sr0
[    5.208328]  sda: sda1 sda2 sda3 sda4
[    5.210047] sd 0:0:0:0: [sda] Attached SCSI disk
[    5.216268] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    5.216450] sr 2:0:0:0: Attached scsi generic sg1 type 5
Waiting for fb [  OK  ]
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7fb6c7d73000): Writting .. [1024:768]
Done!
FATAL: Module agpgart_intel not found.
[    5.546351] [drm] radeon kernel modesetting enabled.
[    5.552678] wmi: Mapper loaded
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7fbf03f6e000): Writting .. [1024:768]
Done!
VGA: 0000:00:02.0
Waiting for network [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [    6.129588] atl1c 0000:03:00.0: atl1c: eth0 NIC Link is Up<1000 Mbps Full Duplex>
[    6.132986] device eth0 entered promiscuous mode
[  OK  ]
Bringing up interface switch:
Determining IP information for switch...[    6.213539] switch: port 1(eth0) entered forwarding state
[    6.213561] switch:  done.
[  OK  ]
Waiting for SSHd [  OK  ]
Generating keys ...
+-----------------+
+-----------------+
+-----------------+

Starting SSHd ...

    SSH started [2758]

        passwd = skl+3c

FATAL: Module dump_dma not found.
ERROR: Module dump_dma does not exist in /proc/modules
[    7.895459] Loading iSCSI transport class v2.0-870.
[    7.900281] iscsi: registered transport (tcp)
iscsistart: transport class version 2.0-870. iscsid version 2.0-872
Could not get list of targets from firmware.
Starting xenstored...Dec 19 18:55:31 tst007 xenstored: Checking store ...
Dec 19 18:55:31 tst007 xenstored: Checking store complete.

Setting domain 0 name...
Starting xenconsoled...
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:790: blktapctrl: v1.0.0
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blkta
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:792: Found driver: [ramdisk image (ram)]
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:792: Found driver: [qcow disk (qcow)]
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:792: Found driver: [qcow2 disk (qcow2)]
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl_linux.c:86: blktap0 open failed
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:859: couldn't open blktap interface
Dec 19 18:55:32 tst007 BLKTAPCTRL[2863]: blktapctrl.c:922: Unable to start blktapctrl
Dec 19 18:55:32 tst007 iscsid: transport class version 2.0-870. iscsid version 2.0-872
Dec 19 18:55:32 tst007 iscsid: iSCSI daemon with pid=2825 started!
[0:0:0:0]    disk    ATA      ST3808110AS      3.AD  /dev/sda
[2:0:0:0]    cd/dvd  HL-DT-ST DVDRAM GH24NS50  XP01  /dev/sr0
00:00.0 Host bridge: Intel Corporation Device 0100 (rev 09)
00:01.0 PCI bridge: Intel Corporation Device 0101 (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09)
00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04)
00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05)
00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5)
00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5)
00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation Device 1c5c (rev 05)
00:1f.2 IDE interface: Intel Corporation Cougar Point 4 port SATA IDE Controller (rev 05)
00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05)
00:1f.5 IDE interface: Intel Corporation Cougar Point 2 port SATA IDE Controller (rev 05)
01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Ethernet controller: Attansic Technology Corp. Device 1083 (rev c0)
           CPU0       CPU1       CPU2       CPU3
  1:          2          0          0          0  xen-pirq-ioapic-edge  i8042
  8:          1          0          0          0  xen-pirq-ioapic-edge  rtc0
  9:          0          0          0          0  xen-pirq-ioapic-level  acpi
 12:          4          0          0          0  xen-pirq-ioapic-edge  i8042
 14:         15          0          0          0  xen-pirq-ioapic-edge  ata_piix
 15:          0          0          0          0  xen-pirq-ioapic-edge  ata_piix
 16:         22          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb1
 18:          0          0          0          0  xen-pirq-ioapic-level  i801_smbus
 19:        109          0          0          0  xen-pirq-ioapic-level  ata_piix
 23:         26          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb2
 40:       4082          0          0          0  xen-percpu-virq      timer0
 41:          1          0          0          0  xen-percpu-ipi       spinlock0
 42:       3815          0          0          0  xen-percpu-ipi       resched0
 43:        243          0          0          0  xen-percpu-ipi       callfunc0
 44:          0          0          0          0  xen-percpu-virq      debug0
 45:         26          0          0          0  xen-percpu-ipi       callfuncsingle0
 46:          0          0          0          0  xen-percpu-ipi       irqwork0
 47:          0       2286          0          0  xen-percpu-virq      timer1
 48:          0          2          0          0  xen-percpu-ipi       spinlock1
 49:          0       3220          0          0  xen-percpu-ipi       resched1
 50:          0        357          0          0  xen-percpu-ipi       callfunc1
 51:          0          0          0          0  xen-percpu-virq      debug1
 52:          0        180          0          0  xen-percpu-ipi       callfuncsingle1
 53:          0          0          0          0  xen-percpu-ipi       irqwork1
 54:          0          0       3606          0  xen-percpu-virq      timer2
 55:          0          0          2          0  xen-percpu-ipi       spinlock2
 56:          0          0       3405          0  xen-percpu-ipi       resched2
 57:          0          0        231          0  xen-percpu-ipi       callfunc2
 58:          0          0          0          0  xen-percpu-virq      debug2
 59:          0          0         34          0  xen-percpu-ipi       callfuncsingle2
 60:          0          0          0          0  xen-percpu-ipi       irqwork2
 61:          0          0          0       2545  xen-percpu-virq      timer3
 62:          0          0          0          0  xen-percpu-ipi       spinlock3
 63:          0          0          0       2343  xen-percpu-ipi       resched3
 64:          0          0          0        348  xen-percpu-ipi       callfunc3
 65:          0          0          0          0  xen-percpu-virq      debug3
 66:          0          0          0        186  xen-percpu-ipi       callfuncsingle3
 67:          0          0          0          0  xen-percpu-ipi       irqwork3
 68:        169          0          0          0   xen-dyn-event     xenbus
 69:          0          0          0          0  xen-percpu-virq      xen-pcpu
 71:          0          0          0          0  xen-percpu-virq      mce
 72:        145          0          0          0  xen-percpu-virq      hvc_console
 73:          0          0          0          0  xen-pirq-msi       i915
 74:         19          0          0          0  xen-pirq-msi       eth0
 75:         87          0          0          0   xen-dyn-event     evtchn:xenstored
 76:          0          0          0          0   xen-dyn-event     evtchn:xenstored
NMI:          0          0          0          0   Non-maskable interrupts
LOC:          0          0          0          0   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
PMI:          0          0          0          0   Performance monitoring interrupts
IWI:          0          0          0          0   IRQ work interrupts
RTR:          0          0          0          0   APIC ICR read retries
RES:       3815       3221       3405       2343   Rescheduling interrupts
CAL:        269        537        265        534   Function call interrupts
TLB:          0          0          0          0   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:          1          1          1          1   Machine check polls
ERR:          0
MIS:          0
00000000-0000ffff : reserved
00010000-0009dfff : System RAM
0009e000-0009ebff : RAM buffer
0009ec00-000fffff : reserved
  000a0000-000bffff : PCI Bus 0000:00
  000c0000-000dffff : PCI Bus 0000:00
    000c0000-000cd7ff : Video ROM
    000cd800-000ce7ff : Adapter ROM
    000ce800-000cf7ff : Adapter ROM
  000f0000-000fffff : System ROM
00100000-1fffffff : System RAM
  01000000-01642f47 : Kernel code
  01642f48-01aa317f : Kernel data
  01b68000-01c71fff : Kernel bss
20000000-201fffff : reserved
20200000-3fffffff : System RAM
40000000-401fffff : reserved
40200000-bad7ffff : System RAM
bad80000-badc8fff : ACPI Non-volatile Storage
badc9000-badd0fff : ACPI Tables
badd1000-badf3fff : reserved
badf4000-badf5fff : System RAM
badf6000-bae05fff : reserved
bae06000-bae13fff : ACPI Non-volatile Storage
bae14000-bae3bfff : reserved
bae3c000-bae7efff : ACPI Non-volatile Storage
bae7f000-baffffff : System RAM
bb000000-bb7fffff : RAM buffer
bb800000-bf9fffff : reserved
bfa00000-ffffffff : PCI Bus 0000:00
  d0000000-dfffffff : 0000:00:02.0
  e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
    e0000000-efffffff : pnp 00:01
  fe000000-fe3fffff : 0000:00:02.0
  fe400000-fe4fffff : PCI Bus 0000:03
    fe400000-fe43ffff : 0000:03:00.0
      fe400000-fe43ffff : atl1c
  fe500000-fe5fffff : PCI Bus 0000:01
    fe500000-fe57ffff : 0000:01:00.0
    fe580000-fe5bffff : 0000:01:00.0
    fe5c0000-fe5dffff : 0000:01:00.0
    fe5e0000-fe5e3fff : 0000:01:00.0
  fe600000-fe603fff : 0000:00:1b.0
  fe604000-fe6040ff : 0000:00:1f.3
  fe605000-fe6053ff : 0000:00:1d.0
    fe605000-fe6053ff : ehci_hcd
  fe606000-fe6063ff : 0000:00:1a.0
    fe606000-fe6063ff : ehci_hcd
  fe607000-fe60700f : 0000:00:16.0
  fec00000-fec00fff : reserved
    fec00000-fec003ff : IOAPIC 0
  fed00000-fed003ff : HPET 0
  fed08000-fed08fff : pnp 00:0c
  fed10000-fed19fff : pnp 00:01
  fed1c000-fed3ffff : reserved
    fed1c000-fed1ffff : pnp 00:0c
    fed20000-fed3ffff : pnp 00:01
  fed90000-fed93fff : pnp 00:01
  fee00000-fee00fff : Local APIC
    fee00000-fee00fff : reserved
  ff000000-ffffffff : reserved
    ff000000-ffffffff : pnp 00:0c
100000000-10555efff : System RAM
10555f000-23fdfffff : Unusable memory
MemTotal:        3011524 kB
MemFree:         2635868 kB
Buffers:               0 kB
Cached:           285004 kB
SwapCached:            0 kB
Active:           103264 kB
Inactive:         197972 kB
Active(anon):      92564 kB
Inactive(anon):   114340 kB
Active(file):      10700 kB
Inactive(file):    83632 kB
Unevictable:       17360 kB
Mlocked:            5000 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         33520 kB
Mapped:             6464 kB
Shmem:            174728 kB
Slab:              25528 kB
SReclaimable:      12000 kB
SUnreclaim:        13528 kB
KernelStack:         808 kB
PageTables:         1508 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1505760 kB
Committed_AS:     278004 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      548756 kB
VmallocChunk:   34359187451 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     3151228 kB
DirectMap2M:           0 kB
19 Dec 18:55:36 ntpdate[2935]: adjust time server 17.171.4.13 offset 0.028462 sec
Wed Dec 19 18:55:37 UTC 2012
Dec 19 18:55:37 tst007 init: starting pid 2944, tty '/dev/tty0': '/bin/sh'
Dec 19 18:55:37 tst007 init: starting pid 2946, tty '/dev/ttyS0': '/bin/sh'


BusyBox v1.14.3 (2012-12-04 14:22:27 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

#
Dec 19 18:55:37 tst007 init: starting pid 2947, tty '/dev/hvc0': '/bin/sh'
Dec 19 18:55:38 tst007 init: process '/bin/sh' (pid 2946) exited. Scheduling for restart.
Dec 19 18:55:38 tst007 init: starting pid 2948, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:39 tst007 init: process '/bin/sh' (pid 2948) exited. Scheduling for restart.
Dec 19 18:55:39 tst007 init: starting pid 2949, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:40 tst007 init: process '/bin/sh' (pid 2949) exited. Scheduling for restart.
Dec 19 18:55:40 tst007 init: starting pid 2950, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:41 tst007 init: process '/bin/sh' (pid 2950) exited. Scheduling for restart.
Dec 19 18:55:41 tst007 init: starting pid 2951, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:42 tst007 init: process '/bin/sh' (pid 2951) exited. Scheduling for restart.
Dec 19 18:55:42 tst007 init: starting pid 2952, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:43 tst007 init: process '/bin/sh' (pid 2952) exited. Scheduling for restart.
Dec 19 18:55:43 tst007 init: starting pid 2953, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:44 tst007 init: process '/bin/sh' (pid 2953) exited. Scheduling for restart.
Dec 19 18:55:44 tst007 init: starting pid 2954, tty '/dev/ttyS0': '/bin/sh'
[   21.237703] switch: port 1(eth0) entered forwarding state
Dec 19 18:55:45 tst007 init: process '/bin/sh' (pid 2954) exited. Scheduling for restart.
Dec 19 18:55:45 tst007 init: starting pid 2955, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:46 tst007 init: process '/bin/sh' (pid 2955) exited. Scheduling for restart.
Dec 19 18:55:46 tst007 init: starting pid 2956, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:46 tst007 sshd[2957]: WARNING: /etc/ssh/moduli does not exist, using fixed modulus
Dec 19 18:55:46 tst007 sshd[2958]: Connection closed by 192.168.101.16
Dec 19 18:55:47 tst007 init: process '/bin/sh' (pid 2956) exited. Scheduling for restart.
Dec 19 18:55:47 tst007 init: starting pid 2959, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:48 tst007 init: process '/bin/sh' (pid 2959) exited. Scheduling for restart.
Dec 19 18:55:48 tst007 init: starting pid 2960, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:49 tst007 init: process '/bin/sh' (pid 2960) exited. Scheduling for restart.
Dec 19 18:55:49 tst007 init: starting pid 2961, tty '/dev/ttyS0': '/bin/sh'
kill -1 1
Dec 19 18:55:50 tst007 init: process '/bin/sh' (pid 2961) exited. Scheduling for restart.
Dec 19 18:55:50 tst007 init: starting pid 2962, tty '/dev/ttyS0': '/bin/sh'
/bin/sh:
Dec 19 18:55:51 tst007 init: process '/bin/sh' (pid 2962) exited. Scheduling for restart.
Dec 19 18:55:51 tst007 init: starting pid 2963, tty '/dev/ttyS0': '/bin/sh'
Dec 19 18:55:52 tst007 init: process '/bin/sh' (pid 2963) exited. Scheduling for restart.
Dec 19 18:55:52 tst007 init: starting pid 2964, tty '/dev/ttyS0': '/bin/sh'

/bin/sh: : not found
# kill -1 1
Dec 19 18:55:53 tst007 init: process '/bin/sh' (pid 2964) exited. Scheduling for restart.
Dec 19 18:55:53 tst007 init: starting pid 2965, tty '/dev/ttyS0': '/bin/sh'
# Dec 19 18:55:54 tst007 init: reloading /etc/inittab

# echo 0 > /sys/bus
# find /sys -name cpu3/online
find: warning: Unix filenames usually don't contain slashes (though pathnames do).  That means that '-name `cpu3/online'' will probably evaluate to false all the time on this system.  You might find the '-wholename' test more useful, or perhaps '-samefile'.  Alternatively, if you are using GNU grep, you could use 'find ... -print0 | grep -FzZ `cpu3/online''.
# cat /proc/cpuinfo
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 1
initial apicid	: 1
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 2
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 2
apicid		: 2
initial apicid	: 2
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
stepping	: 7
microcode	: 0x12
cpu MHz		: 3093.056
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 2
apicid		: 3
initial apicid	: 3
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu de tsc msr pae mce cx8 apic sep mca cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl nonstop_tsc pni pclmulqdq est ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer hypervisor lahf_lm arat epb pln pts dtherm
bogomips	: 6186.11
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

# find /sys -name online
/sys/devices/system/cpu/cpu1/online
/sys/devices/system/cpu/cpu2/online
/sys/devices/system/cpu/cpu3/online
/sys/devices/system/cpu/online
/sys/devices/system/node/online
/sys/devices/system/xen_cpu/xen_cpu1/online
/sys/devices/system/xen_cpu/xen_cpu2/online
/sys/devices/system/xen_cpu/xen_cpu3/online
# cat /sys/devices/system/cpu/cpu3/online
1
# echo 0 > /sys/devices/system/cpu/cpu3/online
#
#
# echo 1 > /sys/devices/system/cpu/cpu3/online
[   73.324141] installing Xen timer for CPU 3
[   73.324236] cpu 3 spinlock event irqc_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   73.325026] Pid: 0, comm: swapper/3 Not tainted 3.7.0upstream #1
[   73.325033] Call Trace:
[   73.325047]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   73.325058]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   73.325074]  [<ffffffff81043d5d>] ? xen_force_evtchn_callback+0xd/0x10
[   73.325086]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   73.325097]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   73.325112]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   73.325116] BUG: unable to handle kernel NULL pointer dereference[   73.325122]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10

[   73.325133]  at 0000000000000004
[   73.325140] IP: [<ffffffff81367e7b>] acpi_processor_setup_cpuidle_cx+0x3f/0x105
[   73.325160] PGD aacf6067 PUD b0000067 PMD 0
[   73.325181] Oops: 0002 [#1] SMP
[   73.325194] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   73.325330] CPU 0
[   73.325337] Pid: 2947, comm: sh Tainted: G        W    3.7.0upstream #1 MSI MS-7680/H61M-P23 (MS-7680)
[   73.325345] RIP: e030:[<ffffffff81367e7b>]  [<ffffffff81367e7b>] acpi_processor_setup_cpuidle_cx+0x3f/0x105
[   73.325358] RSP: e02b:ffff8800ace7dcb8  EFLAGS: 00010202
[   73.325365] RAX: 0000000000010ae9 RBX: ffff880100f56c00 RCX: ffff880105380000
[   73.325374] RDX: 0000000000000003 RSI: 0000000000000200 RDI: ffff880100f56c00
[   73.325385] RBP: ffff8800ace7dcf8 R08: ffffffff81811ec0 R09: 0000000000000200
[   73.325395] R10: 0000000000000000 R11: 000000000000fffc R12: ffff880100f56c00
[   73.325406] R13: 00000000ffffff01 R14: 0000000000000000 R15: ffffffff81a8e1a0
[   73.325419] FS:  00007fb4785cd700(0000) GS:ffff880105200000(0000) knlGS:0000000000000000
[   73.325428] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[   73.325436] CR2: 0000000000000004 CR3: 00000000af063000 CR4: 0000000000002660
[   73.325443] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   73.325450] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   73.325457] Process sh (pid: 2947, threadinfo ffff8800ace7c000, task ffff8800aacd0810)
[   73.325463] Stack:
[   73.325468]  0000000000000150 ffff8800120d6200 ffff8800ace7dce8 ffff880100f56c00
[   73.325486]  0000000000000000 00000000ffffff01 0000000000000000 ffffffff81a8e1a0
[   73.325504]  ffff8800ace7dd28 ffffffff8136850b ffff8800ace7dd18 ffffffff810ad296
[   73.325521] Call Trace:
[   73.325559]  [<ffffffff8136850b>] acpi_processor_hotplug+0x7c/0x9f
[   73.325572]  [<ffffffff810ad296>] ? schedule_delayed_work_on+0x16/0x20
[   73.325583]  [<ffffffff81365d48>] acpi_cpu_soft_notify+0x90/0xca
[   73.325594]  [<ffffffff8163a96d>] notifier_call_chain+0x4d/0x70
[   73.325605]  [<ffffffff810bb1b9>] __raw_notifier_call_chain+0x9/0x10
[   73.325617]  [<ffffffff81093b2b>] __cpu_notify+0x1b/0x30
[   73.325627]  [<ffffffff8162d4d6>] _cpu_up+0x102/0x14a
[   73.325637]  [<ffffffff8162d5f7>] cpu_up+0xd9/0xec
[   73.325648]  [<ffffffff81613da4>] store_online+0x94/0xd0
[   73.325660]  [<ffffffff813f318b>] dev_attr_store+0x1b/0x20
[   73.325671]  [<ffffffff8120f644>] sysfs_write_file+0xf4/0x170
[   73.325682]  [<ffffffff8119ad28>] vfs_write+0xc8/0x190
[   73.325692]  [<ffffffff8119b57a>] sys_write+0x5a/0xa0
[   73.325702]  [<ffffffff8163eae9>] system_call_fastpath+0x16/0x1b
[   73.325708] Code: 89 fc 53 48 83 ec 18 8b 57 0c 89 d1 48 8b 0c cd 00 81 a9 81 4c 8b 34 08 8a 47 1c 84 c0 0f 89 ba 00 00 00 a8 01 0f 84 b2 00 00 00 <41> 89 56 04 83 3d 62 3a 73 00 00 b8 01 00 00 00 0f 45 05 56 3a
[   73.325900] RIP  [<ffffffff81367e7b>] acpi_processor_setup_cpuidle_cx+0x3f/0x105
[   73.325912]  RSP <ffff8800ace7dcb8>
[   73.325917] CR2: 0000000000000004
[   73.325939] ---[ end trace 81413fa6088e35ac ]---
Dec 19 18:56:37 [   73.326476] BUG: scheduling while atomic: swapper/3/0/0x00000000
tst007 [   73.326488] Modules linked in:init: process '/bin/sh' (pid 2947) exited. Scheduling for restart. xen_evtchn iscsi_boot_sysfs
 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   73.326638] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   73.326645] Call Trace:
[   73.326656]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   73.326667]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   73.326677]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   73.326686]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   73.326697]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   73.326707]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
Dec 19 18:56:37 tst007 init: starting pid 2978, tty '/dev/hvc0': '/bin/sh'


BusyBox v1.14.3 (2012-12-04 14:22:27 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

# [   74.567070] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   74.567084] Modules linked in: xen_evtchn iscsi_boot_sy xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   74.567257] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   74.567263] Call Trace:
[   74.567276]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   74.567286]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   74.567296]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   74.567306]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   74.567317]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   74.567327]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   75.066920] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   75.066933] Modules linked in: xen_evtchn iscsi_boot_sysi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   75.067106] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   75.067113] Call Trace:
[   75.067125]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   75.067135]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   75.067145]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   75.067154]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   75.067165]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   75.067175]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   76.566091] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   76.566105] Modules linked in: xen_evtchn iscsi_boot_syt xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   76.566277] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   76.566283] Call Trace:
[   76.566295]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   76.566305]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   76.566315]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   76.566325]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   76.566335]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   76.566345]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   77.065960] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   77.065974] Modules linked in: xen_evtchn iscsi_boot_sycsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   77.066146] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   77.066153] Call Trace:
[   77.066165]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   77.066175]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   77.066185]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   77.066194]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   77.066205]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   77.066215]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   77.323438] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   77.323453] Modules linked in: xen_evtchn iscsi_boot_sy2c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   77.323625] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   77.323631] Call Trace:
[   77.323643]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   77.323654]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   77.323663]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   77.323673]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   77.323684]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   77.323694]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   78.565081] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   78.565095] Modules linked in: xen_evtchn iscsi_boot_syb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   78.565267] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   78.565274] Call Trace:
[   78.565286]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   78.565296]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   78.565306]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   78.565315]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   78.565326]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   78.565336]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   79.064941] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   79.064955] Modules linked in: xen_evtchn iscsi_boot_synsport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   79.065127] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   79.065134] Call Trace:
[   79.065145]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   79.065156]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   79.065166]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   79.065175]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   79.065186]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   79.065196]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   80.564097] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   80.564111] Modules linked in: xen_evtchn iscsi_boot_syps sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   80.564283] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   80.564290] Call Trace:
[   80.564303]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   80.564313]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   80.564323]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   80.564333]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   80.564344]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   80.564354]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   81.063947] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   81.063961] Modules linked in: xen_evtchn iscsi_boot_syscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   81.064133] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   81.064140] Call Trace:
[   81.064151]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   81.064161]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   81.064171]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   81.064181]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   81.064192]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   81.064202]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   81.321438] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   81.321452] Modules linked in: xen_evtchn iscsi_boot_syum_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   81.321624] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   81.321631] Call Trace:
[   81.321643]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   81.321653]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   81.321663]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   81.321672]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   81.321683]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   81.321693]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   82.563080] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   82.563094] Modules linked in: xen_evtchn iscsi_boot_syfillrect syscopyarea xenfs xen_privcmd mperf
[   82.563266] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   82.563272] Call Trace:
[   82.563284]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   82.563294]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   82.563304]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   82.563314]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   82.563324]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   82.563334]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   83.062923] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   83.062937] Modules linked in: xen_evtchn iscsi_boot_sy2c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   83.063109] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   83.063116] Call Trace:
[   83.063127]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   83.063138]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   83.063148]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   83.063157]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   83.063168]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   83.063178]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   84.562098] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   84.562112] Modules linked in: xen_evtchn iscsi_boot_sy syscopyarea xenfs xen_privcmd mperf
[   84.562285] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   84.562291] Call Trace:
[   84.562303]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   84.562313]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   84.562323]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   84.562332]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   84.562343]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   84.562353]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   85.061964] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   85.061978] Modules linked in: xen_evtchn iscsi_boot_syuveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   85.062150] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   85.062157] Call Trace:
[   85.062169]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   85.062179]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   85.062189]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   85.062198]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   85.062209]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   85.062219]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   85.319437] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   85.319451] Modules linked in: xen_evtchn iscsi_boot_sysr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   85.319623] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   85.319629] Call Trace:
[   85.319641]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   85.319651]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   85.319661]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   85.319671]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   85.319682]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   85.319692]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   86.561083] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   86.561097] Modules linked in: xen_evtchn iscsi_boot_sy xenfs xen_privcmd mperf
[   86.561269] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   86.561275] Call Trace:
[   86.561287]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   86.561297]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   86.561307]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   86.561317]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   86.561327]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   86.561337]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   87.060941] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   87.060955] Modules linked in: xen_evtchn iscsi_boot_syi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   87.061128] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   87.061135] Call Trace:
[   87.061147]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   87.061157]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   87.061167]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   87.061177]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   87.061187]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   87.061197]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   88.560096] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   88.560110] Modules linked in: xen_evtchn iscsi_boot_syen_privcmd mperf
[   88.560282] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   88.560288] Call Trace:
[   88.560300]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   88.560311]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   88.560320]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   88.560330]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   88.560341]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   88.560351]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   89.059963] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   89.059977] Modules linked in: xen_evtchn iscsi_boot_sy ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   89.060153] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   89.060160] Call Trace:
[   89.060172]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   89.060182]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   89.060192]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   89.060201]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   89.060212]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   89.060222]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   89.317439] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   89.317452] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   89.317625] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   89.317631] Call Trace:
[   89.317643]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   89.317653]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   89.317663]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   89.317673]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   89.317684]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   89.317694]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   90.559083] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   90.559097] Modules linked in: xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c nouveau mxm_wmi wmi radeon ttm sg sr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   90.559269] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   90.559276] Call Trace:
[   90.559288]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   90.559298]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   90.559308]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   90.559318]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   90.559328]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   90.559338]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
[   91.058924] BUG: scheduling while atomic: swapper/3/0/0x00000000
[   91.058937] Modules linked in: xen_evtchn iscsi_boot_sysr_mod sd_mod cdrom ata_generic ata_piix libata i915 crc32c_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
[   91.059110] Pid: 0, comm: swapper/3 Tainted: G      D W    3.7.0upstream #1
[   91.059117] Call Trace:
[   91.059128]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
[   91.059139]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
[   91.059148]  [<ffffffff81635bb4>] schedule+0x24/0x70
[   91.059158]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
[   91.059169]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
[   91.059179]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
telnet> Connection closed.
[Connecting to system 7 ]

--Kj7319i9nmIyA2yE
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="config-3.8.config"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.7.0 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_EXPERIMENTAL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION="upstream"
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
# CONFIG_TICK_CPU_ACCOUNTING is not set
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_MEMCG is not set
# CONFIG_CGROUP_HUGETLB is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
CONFIG_RT_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_SYSFS_DEPRECATED_V2 is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_ROOT_UID=0
CONFIG_INITRAMFS_ROOT_GID=0
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
# CONFIG_INITRAMFS_COMPRESSION_NONE is not set
CONFIG_INITRAMFS_COMPRESSION_GZIP=y
# CONFIG_INITRAMFS_COMPRESSION_BZIP2 is not set
# CONFIG_INITRAMFS_COMPRESSION_LZMA is not set
# CONFIG_INITRAMFS_COMPRESSION_XZ is not set
# CONFIG_INITRAMFS_COMPRESSION_LZO is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
# CONFIG_EXPERT is not set
CONFIG_HAVE_UID16=y
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_GENERIC_KERNEL_THREAD=y
CONFIG_GENERIC_KERNEL_EXECVE=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_MODULES_USE_ELF_RELA=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_PARAVIRT_GUEST=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_KVM_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=y
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
# CONFIG_EFI_STUB is not set
CONFIG_SECCOMP=y
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
# CONFIG_KEXEC_JUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
# CONFIG_ACPI_PROCFS_POWER is not set
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_PROC_EVENT=y
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_I2C=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
CONFIG_ACPI_BLACKLIST_YEAR=0
CONFIG_ACPI_DEBUG=y
# CONFIG_ACPI_DEBUG_FUNC_TRACE is not set
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
# CONFIG_ACPI_HOTPLUG_MEMORY is not set
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=y
# CONFIG_CPU_FREQ_STAT is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
CONFIG_X86_SPEEDSTEP_CENTRINO=m
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
# CONFIG_HOTPLUG_PCI_PCIE is not set
CONFIG_PCIEAER=y
CONFIG_PCIE_ECRC=y
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_IOAPIC is not set
CONFIG_PCI_LABEL=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
# CONFIG_HOTPLUG_PCI_ACPI is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
# CONFIG_IPV6_PRIVACY is not set
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_ADVANCED is not set

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_IRC=y
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
# CONFIG_NF_NAT_AMANDA is not set
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
# CONFIG_NF_NAT_TFTP is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_LOG=m
# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
CONFIG_IP_NF_MANGLE=y
# CONFIG_IP_NF_RAW is not set

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
# CONFIG_IP6_NF_RAW is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_HAVE_NET_DSA=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_NETPRIO_CGROUP is not set
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
# CONFIG_DEVTMPFS is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_DMA_SHARED_BUFFER=y

#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_VIRTIO_BLK=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_INTEL_MID_PTI is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
CONFIG_BLK_DEV_3W_XXXX_RAID=m
# CONFIG_SCSI_HPSA is not set
CONFIG_SCSI_3W_9XXX=m
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC7XXX_OLD=m
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
# CONFIG_SCSI_MVUMI is not set
CONFIG_SCSI_DPT_I2O=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS_LOGGING=y
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
# CONFIG_FCOE_FNIC is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_EATA=m
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=m
CONFIG_SCSI_GDTH=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
# CONFIG_SCSI_IPR_TRACE is not set
# CONFIG_SCSI_IPR_DUMP is not set
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
# CONFIG_TCM_QLA2XXX is not set
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_DC390T=m
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_SRP=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_SATA_INIC162X=m
# CONFIG_SATA_ACARD_AHCI is not set
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_HIGHBANK is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARASAN_CF is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
CONFIG_PATA_EFAR=m
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MARVELL=m
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
CONFIG_PATA_SCH=m
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=m
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
CONFIG_PATA_PCMCIA=m
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
# CONFIG_MULTICORE_RAID456 is not set
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_DM_THIN_PROVISIONING is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_RAID is not set
# CONFIG_DM_LOG_USERSPACE is not set
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_UEVENT is not set
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=m
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
# CONFIG_FIREWIRE is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
CONFIG_MII=m
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
# CONFIG_VXLAN is not set
CONFIG_NETCONSOLE=m
# CONFIG_NETCONSOLE_DYNAMIC is not set
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
CONFIG_SUNGEM_PHY=m
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
# CONFIG_PCMCIA_3C574 is not set
# CONFIG_PCMCIA_3C589 is not set
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
CONFIG_ATL1C=m
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
CONFIG_BNX2X=m
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=m
CONFIG_IGB_DCA=y
CONFIG_IGBVF=m
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=y
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_ZNET is not set
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=m
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
CONFIG_NET_PACKET_ENGINE=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
# CONFIG_SEEQ8005 is not set
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y

#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
CONFIG_MARVELL_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_POLLDEV is not set
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_WACOM is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
# CONFIG_VT_HW_CONSOLE_BINDING is not set
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=16
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_RSA is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_KGDB_NMI is not set
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_TPM=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set

#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=m
# CONFIG_TCG_TIS_I2C_INFINEON is not set
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_INTEL_MID is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_FAIR_SHARE is not set
CONFIG_STEP_WISE=y
# CONFIG_USER_SPACE is not set
# CONFIG_CPU_THERMAL is not set
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_LPC_SCH is not set
# CONFIG_LPC_ICH is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_TTM=m
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=m
CONFIG_DRM_RADEON_KMS=y
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I810 is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_STUB_POULSBO is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
# CONFIG_FB_BOOT_VESA_SUPPORT is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
# CONFIG_FB_WMT_GE_ROPS is not set
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=y
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_GEODE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_FB_AUO_K190X is not set
# CONFIG_EXYNOS_VIDEO is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_GENERIC=y
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630 is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LP855X is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
# CONFIG_LOGO is not set
# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=m
CONFIG_HIDRAW=y
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=m
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=m
# CONFIG_HID_ORTEK is not set
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=m
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_HID_SPEEDLINK is not set
CONFIG_HID_SUNPLUS=m
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=m
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB_ARCH_HAS_XHCI=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
CONFIG_USB_OHCI_HCD=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_CHIPIDEA is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
CONFIG_USB_PRINTER=y
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set

#
# USB Physical Layer drivers
#
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_LP5523 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA9633 is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_DELL_NETBOOKS is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_OT200 is not set
# CONFIG_LEDS_BLINKM is not set
# CONFIG_LEDS_TRIGGERS is not set

#
# LED Triggers
#
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y

#
# Reporting subsystems
#
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=y
# CONFIG_EDAC_MCE_INJ is not set
# CONFIG_EDAC_MM_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
# CONFIG_INTEL_MID_DMAC is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_TIMB_DMA is not set
# CONFIG_PCH_DMA is not set
CONFIG_DMA_ENGINE=y

#
# DMA Clients
#
CONFIG_NET_DMA=y
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DCA=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
CONFIG_VIRTIO=y

#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=y
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_STAGING=y
# CONFIG_ET131X is not set
# CONFIG_SLICOSS is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_ECHO is not set
# CONFIG_COMEDI is not set
# CONFIG_ASUS_OLED is not set
# CONFIG_RTS5139 is not set
# CONFIG_TRANZPORT is not set
# CONFIG_IDE_PHISON is not set
# CONFIG_DX_SEP is not set
CONFIG_ZRAM=y
# CONFIG_ZRAM_DEBUG is not set
CONFIG_ZCACHE=y
CONFIG_ZSMALLOC=y
# CONFIG_FB_SM7XX is not set
# CONFIG_CRYSTALHD is not set
# CONFIG_FB_XGI is not set
# CONFIG_ACPI_QUICKSTART is not set
# CONFIG_USB_ENESTORAGE is not set
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_USB_WPAN_HCD is not set
# CONFIG_WIMAX_GDM72XX is not set
CONFIG_NET_VENDOR_SILICOM=y
# CONFIG_SBYPASS is not set
# CONFIG_BPCTL is not set
# CONFIG_CED1401 is not set
# CONFIG_DGRP is not set
# CONFIG_SB105X is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_DELL_WMI is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WMI is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_ASUS_WMI is not set
CONFIG_ACPI_WMI=m
# CONFIG_MSI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_MXM_WMI=m
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_APPLE_GMUX is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers (EXPERIMENTAL)
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers (EXPERIMENTAL)
#
# CONFIG_VIRT_DRIVERS is not set
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_EFI_VARS=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=m
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
# CONFIG_FUSE_FS is not set
CONFIG_GENERIC_ACL=y

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
CONFIG_MAGIC_SYSRQ=y
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_SHIRQ is not set
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_HARDLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
CONFIG_DEBUG_STACK_USAGE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_LIST is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=21
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_LATENCYTOP is not set
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_EVENT_POWER_TRACING_DEPRECATED=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_FTRACE_SYSCALLS is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
# CONFIG_UPROBE_EVENT is not set
CONFIG_PROBE_EVENTS=y
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_LOW_LEVEL_TRAP is not set
# CONFIG_KGDB_KDB is not set
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_DEBUG is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# CONFIG_SECURITY_NETWORK_XFRM is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65534
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_IMA is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_ASYNC_TX_DISABLE_PQ_VAL_DMA=y
CONFIG_ASYNC_TX_DISABLE_XOR_VAL_DMA=y
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C_X86_64=y
CONFIG_CRYPTO_CRC32C_INTEL=m
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_X86_64 is not set
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=y

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
# CONFIG_ASYMMETRIC_KEY_TYPE is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_VHOST_NET=y
# CONFIG_TCM_VHOST is not set
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_PERCPU_RWSEM=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_T10DIF=m
# CONFIG_CRC_ITU_T is not set
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
# CONFIG_CRC8 is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
# CONFIG_AVERAGE is not set
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set

--Kj7319i9nmIyA2yE
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--Kj7319i9nmIyA2yE--


From xen-devel-bounces@lists.xen.org Wed Dec 19 19:35:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlPQK-0000t6-NV; Wed, 19 Dec 2012 19:35:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlPQJ-0000sw-NT
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 19:35:23 +0000
Received: from [85.158.143.99:3265] by server-1.bemta-4.messagelabs.com id
	C2/D3-28401-BF612D05; Wed, 19 Dec 2012 19:35:23 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355945721!17962266!1
X-Originating-IP: [209.85.214.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13825 invoked from network); 19 Dec 2012 19:35:21 -0000
Received: from mail-bk0-f45.google.com (HELO mail-bk0-f45.google.com)
	(209.85.214.45)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 19:35:21 -0000
Received: by mail-bk0-f45.google.com with SMTP id jk13so1231445bkc.18
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 11:35:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:message-id:date:from:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=zKKCZDUs9a1du3YgqXu0B+g8C89G+o6OCImwU4m6nvQ=;
	b=DezMfgNjpHfjIZmoQ2sRMO2a7CRP5liIxN7x++5vW/ZofQgYMoaQq2+onsMXZ3TYaK
	dlqgShcB8X6zjhME0nxN5dlFJvW/8OZ2qSh7TuawI8BIi/zhClqTw2L7Wmrd4AhzrMLH
	k0gANb5ScXhG2oHNVAt9pwRfMLfO7sFxs3g6KCONfzzE3TgZshrJ8XJnCT6udSrcRSjM
	xe1ZJpS4zNzWYVtW3+LSDr8hNKZpTtYKlUCt2yhEjxIQuJQRzjU4Mj3/wPmMcr0LiFtB
	55I+WZEeF1orViuZTWTrn393BegnBtK1/oLdjU76RhV+Cm+VRhXBdIDudwhZa2L0i0XA
	nPsg==
X-Received: by 10.205.139.16 with SMTP id iu16mr3298495bkc.88.1355945721362;
	Wed, 19 Dec 2012 11:35:21 -0800 (PST)
Received: from [192.168.228.100] ([188.25.222.32])
	by mx.google.com with ESMTPS id 18sm5309572bkv.0.2012.12.19.11.35.20
	(version=SSLv3 cipher=OTHER); Wed, 19 Dec 2012 11:35:20 -0800 (PST)
Message-ID: <50D216F7.2050801@gmail.com>
Date: Wed, 19 Dec 2012 21:35:19 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121128 Thunderbird/10.0.11
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
	<50D1E723.4070201@citrix.com>
	<1355933739.14620.456.camel@zakaz.uk.xensource.com>
	<50D1EB56.40400@gmail.com>
	<20121219174627.GA67643@ocelot.phlegethon.org>
In-Reply-To: <20121219174627.GA67643@ocelot.phlegethon.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Sorry, no.  The 'overlapped' bit is a piece of xen implementation, and
> not an architectural state of the VM, so it doesn't belong in the save
> record.  It's trivial to recalculate in a user-space tool, and the

Is it trivial to recalculate that in a userspace tool? I thought the
only way to properly calculate that in the first place was to use
cpuid_eax() in the calculation, which is only available at hypervisor
level (hence much of this thread). Am I missing something?

> result can be cached (since you can also get a mem-event on MSR writes,
> you don't have to pull all this MTRR state out of the guest except when
> the MTRRs have been changed).

Unfortunately, due to the design of components I don't control, my
application does not have the luxury of waiting for an MSR write event.
(Has my MSR mem_event patch been acknowledged? Or was this already work
in progress? I had to patch my Xen source code to be able to get MSR
events...)

> I'll take a proper look at this thread tomorrow and see if I can suggest
> anything more helpful. 

Thank you, I appreciate the reply, and your time.

All the best,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Dec 19 19:38:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 19:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlPSn-00015R-9C; Wed, 19 Dec 2012 19:37:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <axboe@kernel.dk>) id 1TlPSl-00015M-Pn
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 19:37:55 +0000
Received: from [85.158.138.51:33643] by server-13.bemta-3.messagelabs.com id
	FF/7F-00465-29712D05; Wed, 19 Dec 2012 19:37:54 +0000
X-Env-Sender: axboe@kernel.dk
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355945872!28454324!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTA4NDY3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17426 invoked from network); 19 Dec 2012 19:37:53 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 19:37:53 -0000
Received: from 87-104-106-3-dynamic-customer.profibernet.dk ([87.104.106.3]
	helo=kernel.dk)
	by merlin.infradead.org with esmtpsa (Exim 4.76 #1 (Red Hat Linux))
	id 1TlPSh-0008EZ-He; Wed, 19 Dec 2012 19:37:51 +0000
Received: from [192.168.0.26] (lenny.home.kernel.dk [192.168.0.26])
	by kernel.dk (Postfix) with ESMTPA id E6E2C5634B;
	Wed, 19 Dec 2012 20:37:49 +0100 (CET)
Message-ID: <50D2178D.20900@kernel.dk>
Date: Wed, 19 Dec 2012 20:37:49 +0100
From: Jens Axboe <axboe@kernel.dk>
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <20121218051547.GA13450@phenom.dumpdata.com>
In-Reply-To: <20121218051547.GA13450@phenom.dumpdata.com>
X-Enigmail-Version: 1.4.6
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] Bug-fixes to xen-blkfront for v3.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-12-18 06:15, Konrad Rzeszutek Wilk wrote:
> Hey Jens,
> 
> Please git pull the following branch:
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.8
> 
> which has a bug-fix to the xen-blkfront and xen-blkback driver
> when using the persistent mode. An issue was discovered where LVM
> disks could not be read correctly and this fixes it. There
> is also a change in llist.h which has been blessed by akpm.

Pulled for 3.8, thanks.

-- 
Jens Axboe


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-12-18 06:15, Konrad Rzeszutek Wilk wrote:
> Hey Jens,
> 
> Please git pull the following branch:
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.8
> 
> which has a bug-fix to the xen-blkfront and xen-blkback drivers
> when using the persistent mode. An issue was discovered where LVM
> disks could not be read correctly, and this fixes it. There
> is also a change in llist.h which has been blessed by akpm.

Pulled for 3.8, thanks.

-- 
Jens Axboe


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:09:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlPx4-0001kH-Al; Wed, 19 Dec 2012 20:09:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlPx2-0001kC-LY
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 20:09:12 +0000
Received: from [85.158.143.35:29101] by server-3.bemta-4.messagelabs.com id
	32/AB-18211-8EE12D05; Wed, 19 Dec 2012 20:09:12 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355947750!16295889!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3602 invoked from network); 19 Dec 2012 20:09:11 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 20:09:11 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJK97WY017073
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 20:09:08 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJK967S004780
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 20:09:06 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJK95Ee029655; Wed, 19 Dec 2012 14:09:05 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 12:09:05 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 629E51C032B; Wed, 19 Dec 2012 15:09:04 -0500 (EST)
Date: Wed, 19 Dec 2012 15:09:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121219200904.GG15037@phenom.dumpdata.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
	<40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
	<50C85E9F02000078000AFD65@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C85E9F02000078000AFD65@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Dongxiao Xu <dongxiao.xu@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 12, 2012 at 09:38:23AM +0000, Jan Beulich wrote:
> >>> On 12.12.12 at 02:03, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
> >> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> >> On Tue, Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> >> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> >> > > What if this check was done in the routines that provide the
> >> > > software static buffers, and there try to provide a nice DMA
> >> > > contiguous swatch of pages?
> >> >
> >> > Yes, this approach also came to our mind, but it requires modifying the
> >> > driver itself. The driver would then have to avoid using such static
> >> > buffers (e.g., from kmalloc) for DMA, even if the buffer is physically
> >> > contiguous natively.
> >> 
> >> I am a bit lost here.
> >> 
> >> Is the issue you found only with drivers that do not use DMA API?
> >> Can you perhaps point me to the code that triggered this fix in the first
> >> place?
> > 
> > Yes, we met this issue on a specific SAS device/driver, which calls into
> > libata-core code; see the function ata_dev_read_id(), called from
> > ata_dev_reread_id() in drivers/ata/libata-core.c.
> > 
> > In the above function, the target buffer is (void *)dev->link->ap->sector_buf,
> > which is a 512-byte static buffer that unfortunately crosses a page
> > boundary.
> 
> I wonder whether such use of sg_init_one()/sg_set_buf() is correct
> in the first place. While there aren't any restrictions documented for
> its use, one clearly can't pass in whatever one wants (a pointer into
> vmalloc()-ed memory, for instance, won't work afaict).
> 
> I didn't go through all other users of it, but quite a few of the uses
> elsewhere look similarly questionable.
> 
> >> I am still not completely clear on what you had in mind. The one method I
> >> thought about that might help with this is to have Xen-SWIOTLB track which
> >> memory ranges were exchanged (so xen_swiotlb_fixup would save the *buf
> >> and the size for each call to xen_create_contiguous_region in a list or
> >> array).
> >> 
> >> When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called, they would
> >> consult said array/list to see if the region they retrieved crosses said 2MB
> >> chunks. If so.. and here I am unsure of what would be the best way to
> >> proceed.

And from finally looking at the code, I realize I misunderstood your initial description.

> > 
> > We thought we could solve the issue in several ways:
> > 
> > 1) Like the previous patch I sent out, we check the DMA region in
> > xen_swiotlb_map_page() and xen_swiotlb_map_sg_attr(), and if the DMA region
> > crosses a page boundary, we exchange the memory and copy the content.
> > However, this has a race condition: while copying the memory content (we
> > introduced two memory copies in the patch), some other code may also access
> > the page and encounter incorrect values.
> 
> That's why, after mapping a buffer (or SG list), one has to call the
> sync functions before looking at the data. Any race as you describe
> is therefore a programming error.

I am with Jan here. It looks like libata is depending on the dma_unmap
to do this. But it is unclear to me when ata_qc_complete is actually
called to unmap the buffer (and hence sync it).

> 
> Jan
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:16:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlQ3O-0001w0-5O; Wed, 19 Dec 2012 20:15:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlQ3L-0001vr-TD
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 20:15:44 +0000
Received: from [85.158.137.99:2721] by server-8.bemta-3.messagelabs.com id
	D3/6E-01297-E6022D05; Wed, 19 Dec 2012 20:15:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355948094!18430870!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32699 invoked from network); 19 Dec 2012 20:15:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 20:15:40 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJKEprF009177
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 20:14:52 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJKEous014838
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 20:14:50 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJKEntq022981; Wed, 19 Dec 2012 14:14:49 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 12:14:49 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 85D511C032B; Wed, 19 Dec 2012 15:14:48 -0500 (EST)
Date: Wed, 19 Dec 2012 15:14:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121219201448.GH15037@phenom.dumpdata.com>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
	<50C9FC0302000078000B0344@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C9FC0302000078000B0344@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
	backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 03:02:11PM +0000, Jan Beulich wrote:
> >>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> > backend_changed might be called multiple times, which will leak
> > be->mode. Make sure it will be called only once. Remove some unneeded
> > checks. Also, the be->mode string was leaked; release the memory on
> > device shutdown.
> 
> So I decided to make an attempt myself, retaining the current
> behavior of allowing multiple calls, yet not having to sprinkle
> around multiple kfree()-s for be->mode. Slightly re-structuring
> the function made this pretty straightforward.

<nods> Would it be possible to post a patch?

> 
> Jan
> 
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> > ---
> > 
> > Incorporate all comments from Jan.
> > Fold the one-line change to xen_blkbk_remove into this change.
> > Now it's compile-tested.
> > 
> >  drivers/block/xen-blkback/xenbus.c | 69 ++++++++++++++++++--------------------
> >  1 file changed, 33 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkback/xenbus.c 
> > b/drivers/block/xen-blkback/xenbus.c
> > index f58434c..5ca77c3 100644
> > --- a/drivers/block/xen-blkback/xenbus.c
> > +++ b/drivers/block/xen-blkback/xenbus.c
> > @@ -28,6 +28,7 @@ struct backend_info {
> >  	unsigned		major;
> >  	unsigned		minor;
> >  	char			*mode;
> > +	unsigned		alive;
> >  };
> >  
> >  static struct kmem_cache *xen_blkif_cachep;
> > @@ -366,6 +367,7 @@ static int xen_blkbk_remove(struct xenbus_device *dev)
> >  		be->blkif = NULL;
> >  	}
> >  
> > +	kfree(be->mode);
> >  	kfree(be);
> >  	dev_set_drvdata(&dev->dev, NULL);
> >  	return 0;
> > @@ -501,10 +503,14 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		= container_of(watch, struct backend_info, backend_watch);
> >  	struct xenbus_device *dev = be->dev;
> >  	int cdrom = 0;
> > -	char *device_type;
> > +	char *device_type, *p;
> > +	long handle;
> >  
> >  	DPRINTK("");
> >  
> > +	if (be->alive)
> > +		return;
> > +
> >  	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
> >  			   &major, &minor);
> >  	if (XENBUS_EXIST_ERR(err)) {
> > @@ -520,12 +526,7 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		return;
> >  	}
> >  
> > -	if ((be->major || be->minor) &&
> > -	    ((be->major != major) || (be->minor != minor))) {
> > -		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not 
> > supported.\n",
> > -			be->major, be->minor, major, minor);
> > -		return;
> > -	}
> > +	be->alive = 1;
> >  
> >  	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
> >  	if (IS_ERR(be->mode)) {
> > @@ -541,39 +542,35 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		kfree(device_type);
> >  	}
> >  
> > -	if (be->major == 0 && be->minor == 0) {
> > -		/* Front end dir is a number, which is used as the handle. */
> > -
> > -		char *p = strrchr(dev->otherend, '/') + 1;
> > -		long handle;
> > -		err = strict_strtoul(p, 0, &handle);
> > -		if (err)
> > -			return;
> > -
> > -		be->major = major;
> > -		be->minor = minor;
> > +	/* Front end dir is a number, which is used as the handle. */
> > +	p = strrchr(dev->otherend, '/') + 1;
> > +	err = strict_strtoul(p, 0, &handle);
> > +	if (err)
> > +		return;
> >  
> > -		err = xen_vbd_create(be->blkif, handle, major, minor,
> > -				 (NULL == strchr(be->mode, 'w')), cdrom);
> > -		if (err) {
> > -			be->major = 0;
> > -			be->minor = 0;
> > -			xenbus_dev_fatal(dev, err, "creating vbd structure");
> > -			return;
> > -		}
> > +	be->major = major;
> > +	be->minor = minor;
> >  
> > -		err = xenvbd_sysfs_addif(dev);
> > -		if (err) {
> > -			xen_vbd_free(&be->blkif->vbd);
> > -			be->major = 0;
> > -			be->minor = 0;
> > -			xenbus_dev_fatal(dev, err, "creating sysfs entries");
> > -			return;
> > -		}
> > +	err = xen_vbd_create(be->blkif, handle, major, minor,
> > +			 (NULL == strchr(be->mode, 'w')), cdrom);
> > +	if (err) {
> > +		be->major = 0;
> > +		be->minor = 0;
> > +		xenbus_dev_fatal(dev, err, "creating vbd structure");
> > +		return;
> > +	}
> >  
> > -		/* We're potentially connected now */
> > -		xen_update_blkif_status(be->blkif);
> > +	err = xenvbd_sysfs_addif(dev);
> > +	if (err) {
> > +		xen_vbd_free(&be->blkif->vbd);
> > +		be->major = 0;
> > +		be->minor = 0;
> > +		xenbus_dev_fatal(dev, err, "creating sysfs entries");
> > +		return;
> >  	}
> > +
> > +	/* We're potentially connected now */
> > +	xen_update_blkif_status(be->blkif);
> >  }
> >  
> >  
> > -- 
> > 1.8.0.1
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:16:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlQ3O-0001w0-5O; Wed, 19 Dec 2012 20:15:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlQ3L-0001vr-TD
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 20:15:44 +0000
Received: from [85.158.137.99:2721] by server-8.bemta-3.messagelabs.com id
	D3/6E-01297-E6022D05; Wed, 19 Dec 2012 20:15:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-217.messagelabs.com!1355948094!18430870!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32699 invoked from network); 19 Dec 2012 20:15:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Dec 2012 20:15:40 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJKEprF009177
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 20:14:52 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJKEous014838
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 20:14:50 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJKEntq022981; Wed, 19 Dec 2012 14:14:49 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 12:14:49 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 85D511C032B; Wed, 19 Dec 2012 15:14:48 -0500 (EST)
Date: Wed, 19 Dec 2012 15:14:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121219201448.GH15037@phenom.dumpdata.com>
References: <1355259026-16946-1-git-send-email-olaf@aepfle.de>
	<50C9FC0302000078000B0344@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50C9FC0302000078000B0344@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/blkback: prevent repeated
	backend_changed invocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 03:02:11PM +0000, Jan Beulich wrote:
> >>> On 11.12.12 at 21:50, Olaf Hering <olaf@aepfle.de> wrote:
> > backend_changed might be called multiple times, which will leak
> > be->mode. Make sure it will be called only once. Remove some unneeded
> > checks. Also the be->mode string was leaked, release the memory on
> > device shutdown.
> 
> So I decided to make an attempt myself, retaining the current
> behavior of allowing multiple calls, yet not having to sprinkle
> around multiple kfree()-s for be->mode. Slightly re-structuring
> the function made this pretty strait forward.

<nods> Would it be possible to post a patch?

> 
> Jan
> 
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> > ---
> > 
> > incorporate all comments from Jan.
> > fold the oneline change to xen_blkbk_remove into this change
> > now its compile tested.
> > 
> >  drivers/block/xen-blkback/xenbus.c | 69 ++++++++++++++++++--------------------
> >  1 file changed, 33 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkback/xenbus.c 
> > b/drivers/block/xen-blkback/xenbus.c
> > index f58434c..5ca77c3 100644
> > --- a/drivers/block/xen-blkback/xenbus.c
> > +++ b/drivers/block/xen-blkback/xenbus.c
> > @@ -28,6 +28,7 @@ struct backend_info {
> >  	unsigned		major;
> >  	unsigned		minor;
> >  	char			*mode;
> > +	unsigned		alive;
> >  };
> >  
> >  static struct kmem_cache *xen_blkif_cachep;
> > @@ -366,6 +367,7 @@ static int xen_blkbk_remove(struct xenbus_device *dev)
> >  		be->blkif = NULL;
> >  	}
> >  
> > +	kfree(be->mode);
> >  	kfree(be);
> >  	dev_set_drvdata(&dev->dev, NULL);
> >  	return 0;
> > @@ -501,10 +503,14 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		= container_of(watch, struct backend_info, backend_watch);
> >  	struct xenbus_device *dev = be->dev;
> >  	int cdrom = 0;
> > -	char *device_type;
> > +	char *device_type, *p;
> > +	long handle;
> >  
> >  	DPRINTK("");
> >  
> > +	if (be->alive)
> > +		return;
> > +
> >  	err = xenbus_scanf(XBT_NIL, dev->nodename, "physical-device", "%x:%x",
> >  			   &major, &minor);
> >  	if (XENBUS_EXIST_ERR(err)) {
> > @@ -520,12 +526,7 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		return;
> >  	}
> >  
> > -	if ((be->major || be->minor) &&
> > -	    ((be->major != major) || (be->minor != minor))) {
> > -		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
> > -			be->major, be->minor, major, minor);
> > -		return;
> > -	}
> > +	be->alive = 1;
> >  
> >  	be->mode = xenbus_read(XBT_NIL, dev->nodename, "mode", NULL);
> >  	if (IS_ERR(be->mode)) {
> > @@ -541,39 +542,35 @@ static void backend_changed(struct xenbus_watch *watch,
> >  		kfree(device_type);
> >  	}
> >  
> > -	if (be->major == 0 && be->minor == 0) {
> > -		/* Front end dir is a number, which is used as the handle. */
> > -
> > -		char *p = strrchr(dev->otherend, '/') + 1;
> > -		long handle;
> > -		err = strict_strtoul(p, 0, &handle);
> > -		if (err)
> > -			return;
> > -
> > -		be->major = major;
> > -		be->minor = minor;
> > +	/* Front end dir is a number, which is used as the handle. */
> > +	p = strrchr(dev->otherend, '/') + 1;
> > +	err = strict_strtoul(p, 0, &handle);
> > +	if (err)
> > +		return;
> >  
> > -		err = xen_vbd_create(be->blkif, handle, major, minor,
> > -				 (NULL == strchr(be->mode, 'w')), cdrom);
> > -		if (err) {
> > -			be->major = 0;
> > -			be->minor = 0;
> > -			xenbus_dev_fatal(dev, err, "creating vbd structure");
> > -			return;
> > -		}
> > +	be->major = major;
> > +	be->minor = minor;
> >  
> > -		err = xenvbd_sysfs_addif(dev);
> > -		if (err) {
> > -			xen_vbd_free(&be->blkif->vbd);
> > -			be->major = 0;
> > -			be->minor = 0;
> > -			xenbus_dev_fatal(dev, err, "creating sysfs entries");
> > -			return;
> > -		}
> > +	err = xen_vbd_create(be->blkif, handle, major, minor,
> > +			 (NULL == strchr(be->mode, 'w')), cdrom);
> > +	if (err) {
> > +		be->major = 0;
> > +		be->minor = 0;
> > +		xenbus_dev_fatal(dev, err, "creating vbd structure");
> > +		return;
> > +	}
> >  
> > -		/* We're potentially connected now */
> > -		xen_update_blkif_status(be->blkif);
> > +	err = xenvbd_sysfs_addif(dev);
> > +	if (err) {
> > +		xen_vbd_free(&be->blkif->vbd);
> > +		be->major = 0;
> > +		be->minor = 0;
> > +		xenbus_dev_fatal(dev, err, "creating sysfs entries");
> > +		return;
> >  	}
> > +
> > +	/* We're potentially connected now */
> > +	xen_update_blkif_status(be->blkif);
> >  }
> >  
> >  
> > -- 
> > 1.8.0.1
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:21:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlQ8J-000263-Td; Wed, 19 Dec 2012 20:20:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TlQ8J-00025w-14
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 20:20:51 +0000
Received: from [85.158.139.211:52668] by server-6.bemta-5.messagelabs.com id
	F0/66-30498-2A122D05; Wed, 19 Dec 2012 20:20:50 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1355948447!21114978!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzQ1MTQz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12929 invoked from network); 19 Dec 2012 20:20:48 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 20:20:48 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1355948448; x=1387484448;
	h=from:to:cc:subject:date:message-id:mime-version;
	bh=AR145FCsmYJsKtoB5mpsvccpNi5GMpUpm3nPjpWIWcE=;
	b=rg+3bUccyGir6sH3dc9s/oG0LRgkwUtYo2rNot7vIHbF2Cy+YvQsuzKw
	g1CZVtjvNZJLahiNO8TY3ACIj4RECR3TJBxg2f27s+F2sTIXAF/UqHZDT
	ch304fRT7fpdYeZrpSDY0wdG4nU2QXJlALGJCwF2tFg/uoEs8mxhqORqz 8=;
X-IronPort-AV: E=Sophos;i="4.84,320,1355097600"; d="scan'208";a="1086360172"
Received: from smtp-in-5102.iad5.amazon.com ([10.218.9.29])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 19 Dec 2012 20:20:41 +0000
Received: from ex10-hub-9001.ant.amazon.com (ex10-hub-9001.ant.amazon.com
	[10.185.137.58])
	by smtp-in-5102.iad5.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBJKKd33023464
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 19 Dec 2012 20:20:40 GMT
Received: from u109add4315675089e695.sea31.amazon.com (172.17.1.114) by
	ex10-hub-9001.ant.amazon.com (10.185.137.58) with Microsoft SMTP Server
	id 14.2.247.3; Wed, 19 Dec 2012 12:20:28 -0800
From: Matt Wilson <msw@amazon.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Dec 2012 20:20:14 +0000
Message-ID: <1355948414-7503-1-git-send-email-msw@amazon.com>
X-Mailer: git-send-email 1.7.4.5
MIME-Version: 1.0
Cc: Annie Li <annie.li@oracle.com>, Steven Noonan <snoonan@amazon.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
	table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit 85ff6acb075a484780b3d763fdf41596d8fc0970 (xen/granttable: Grant
tables V2 implementation) changed the GREFS_PER_GRANT_FRAME macro from
a constant to a conditional expression. The expression depends on
grant_table_version being appropriately set. Unfortunately, at init
time grant_table_version is still 0, so the "grant_table_version == 1"
check in the GREFS_PER_GRANT_FRAME conditional expression fails and the
macro returns the (smaller) number of grant references per frame for v2.

This causes gnttab_init() to allocate fewer pages for gnttab_list, as
a frame can hold only half as many v2 entries as v1 entries. After
gnttab_resume() is called, grant_table_version is appropriately
set. nr_init_grefs will then be miscalculated and gnttab_free_count
will hold a value larger than the actual number of free gref entries.

If a guest is heavily utilizing improperly initialized v1 grant
tables, memory corruption can occur. One common manifestation is
corruption of the vmalloc list, resulting in a poisoned pointer
dereference when accessing /proc/meminfo or /proc/vmallocinfo:

[   40.770064] BUG: unable to handle kernel paging request at 0000200200001407
[   40.770083] IP: [<ffffffff811a6fb0>] get_vmalloc_info+0x70/0x110
[   40.770102] PGD 0
[   40.770107] Oops: 0000 [#1] SMP
[   40.770114] CPU 10

This patch introduces a static variable, grefs_per_grant_frame, to
cache the calculated value. gnttab_init() now calls
gnttab_request_version() early so that grant_table_version and
grefs_per_grant_frame can be appropriately set. A few BUG_ON()s have
been added to prevent this type of bug from recurring in the future.

Signed-off-by: Matt Wilson <msw@amazon.com>
Reviewed-and-Tested-by: Steven Noonan <snoonan@amazon.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Annie Li <annie.li@oracle.com>
Cc: xen-devel@lists.xen.org
---
 drivers/xen/grant-table.c |   41 +++++++++++++++++++++++------------------
 1 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 043bf07..011fdc3 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -55,10 +55,6 @@
 /* External tools reserve first few grant table entries. */
 #define NR_RESERVED_ENTRIES 8
 #define GNTTAB_LIST_END 0xffffffff
-#define GREFS_PER_GRANT_FRAME \
-(grant_table_version == 1 ?                      \
-(PAGE_SIZE / sizeof(struct grant_entry_v1)) :   \
-(PAGE_SIZE / sizeof(union grant_entry_v2)))
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
@@ -153,6 +149,7 @@ static struct gnttab_ops *gnttab_interface;
 static grant_status_t *grstatus;
 
 static int grant_table_version;
+static int grefs_per_grant_frame;
 
 static struct gnttab_free_callback *gnttab_free_callback_list;
 
@@ -766,12 +763,14 @@ static int grow_gnttab_list(unsigned int more_frames)
 	unsigned int new_nr_grant_frames, extra_entries, i;
 	unsigned int nr_glist_frames, new_nr_glist_frames;
 
+	BUG_ON(grefs_per_grant_frame == 0);
+
 	new_nr_grant_frames = nr_grant_frames + more_frames;
-	extra_entries       = more_frames * GREFS_PER_GRANT_FRAME;
+	extra_entries       = more_frames * grefs_per_grant_frame;
 
-	nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
+	nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
 	new_nr_glist_frames =
-		(new_nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
+		(new_nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
 	for (i = nr_glist_frames; i < new_nr_glist_frames; i++) {
 		gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_ATOMIC);
 		if (!gnttab_list[i])
@@ -779,12 +778,12 @@ static int grow_gnttab_list(unsigned int more_frames)
 	}
 
 
-	for (i = GREFS_PER_GRANT_FRAME * nr_grant_frames;
-	     i < GREFS_PER_GRANT_FRAME * new_nr_grant_frames - 1; i++)
+	for (i = grefs_per_grant_frame * nr_grant_frames;
+	     i < grefs_per_grant_frame * new_nr_grant_frames - 1; i++)
 		gnttab_entry(i) = i + 1;
 
 	gnttab_entry(i) = gnttab_free_head;
-	gnttab_free_head = GREFS_PER_GRANT_FRAME * nr_grant_frames;
+	gnttab_free_head = grefs_per_grant_frame * nr_grant_frames;
 	gnttab_free_count += extra_entries;
 
 	nr_grant_frames = new_nr_grant_frames;
@@ -904,7 +903,8 @@ EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
-	return (nr_grant_frames * GREFS_PER_GRANT_FRAME + SPP - 1) / SPP;
+	BUG_ON(grefs_per_grant_frame == 0);
+	return (nr_grant_frames * grefs_per_grant_frame + SPP - 1) / SPP;
 }
 
 static int gnttab_map_frames_v1(xen_pfn_t *frames, unsigned int nr_gframes)
@@ -1068,6 +1068,7 @@ static void gnttab_request_version(void)
 	rc = HYPERVISOR_grant_table_op(GNTTABOP_set_version, &gsv, 1);
 	if (rc == 0 && gsv.version == 2) {
 		grant_table_version = 2;
+		grefs_per_grant_frame = PAGE_SIZE / sizeof(union grant_entry_v2);
 		gnttab_interface = &gnttab_v2_ops;
 	} else if (grant_table_version == 2) {
 		/*
@@ -1080,10 +1081,9 @@ static void gnttab_request_version(void)
 		panic("we need grant tables version 2, but only version 1 is available");
 	} else {
 		grant_table_version = 1;
+		grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1);
 		gnttab_interface = &gnttab_v1_ops;
 	}
-	printk(KERN_INFO "Grant tables using version %d layout.\n",
-		grant_table_version);
 }
 
 int gnttab_resume(void)
@@ -1092,6 +1092,8 @@ int gnttab_resume(void)
 	char *kmsg = "Failed to kmalloc pages for pv in hvm grant frames\n";
 
 	gnttab_request_version();
+	printk(KERN_INFO "Grant tables using version %d layout.\n",
+		grant_table_version);
 	max_nr_gframes = gnttab_max_grant_frames();
 	if (max_nr_gframes < nr_grant_frames)
 		return -ENOSYS;
@@ -1137,9 +1139,10 @@ static int gnttab_expand(unsigned int req_entries)
 	int rc;
 	unsigned int cur, extra;
 
+	BUG_ON(grefs_per_grant_frame == 0);
 	cur = nr_grant_frames;
-	extra = ((req_entries + (GREFS_PER_GRANT_FRAME-1)) /
-		 GREFS_PER_GRANT_FRAME);
+	extra = ((req_entries + (grefs_per_grant_frame-1)) /
+		 grefs_per_grant_frame);
 	if (cur + extra > gnttab_max_grant_frames())
 		return -ENOSPC;
 
@@ -1157,21 +1160,23 @@ int gnttab_init(void)
 	unsigned int nr_init_grefs;
 	int ret;
 
+	gnttab_request_version();
 	nr_grant_frames = 1;
 	boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	/* Determine the maximum number of frames required for the
 	 * grant reference free list on the current hypervisor.
 	 */
+	BUG_ON(grefs_per_grant_frame == 0);
 	max_nr_glist_frames = (boot_max_nr_grant_frames *
-			       GREFS_PER_GRANT_FRAME / RPP);
+			       grefs_per_grant_frame / RPP);
 
 	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
 			      GFP_KERNEL);
 	if (gnttab_list == NULL)
 		return -ENOMEM;
 
-	nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
+	nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
 	for (i = 0; i < nr_glist_frames; i++) {
 		gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_KERNEL);
 		if (gnttab_list[i] == NULL) {
@@ -1185,7 +1190,7 @@ int gnttab_init(void)
 		goto ini_nomem;
 	}
 
-	nr_init_grefs = nr_grant_frames * GREFS_PER_GRANT_FRAME;
+	nr_init_grefs = nr_grant_frames * grefs_per_grant_frame;
 
 	for (i = NR_RESERVED_ENTRIES; i < nr_init_grefs - 1; i++)
 		gnttab_entry(i) = i + 1;
-- 
1.7.4.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:29:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlQGY-0002OQ-74; Wed, 19 Dec 2012 20:29:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlQGW-0002OL-SG
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 20:29:21 +0000
Received: from [85.158.139.83:37692] by server-4.bemta-5.messagelabs.com id
	A7/6B-14693-0A322D05; Wed, 19 Dec 2012 20:29:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1355948958!26725522!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21961 invoked from network); 19 Dec 2012 20:29:19 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 20:29:19 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJKSVvS022345
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 20:28:31 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJKSU9H007975
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 20:28:30 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJKSTRT011656; Wed, 19 Dec 2012 14:28:29 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 12:28:29 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4FE721C032B; Wed, 19 Dec 2012 15:28:28 -0500 (EST)
Date: Wed, 19 Dec 2012 15:28:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <ijc@hellion.org.uk>
Message-ID: <20121219202828.GL15037@phenom.dumpdata.com>
References: <1353936077.5830.30.camel@zakaz.uk.xensource.com>
	<50B3805A02000078000AB1B8@nat28.tlf.novell.com>
	<1354711599.15296.191.camel@zakaz.uk.xensource.com>
	<20121205214741.GA1150@phenom.dumpdata.com>
	<1354782871.28777.12.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1354782871.28777.12.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	debian-kernel <debian-kernel@lists.debian.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvops microcode support for AMD FAM >= 15
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 06, 2012 at 08:34:31AM +0000, Ian Campbell wrote:
> (trim quote please...)
> On Wed, 2012-12-05 at 21:47 +0000, Konrad Rzeszutek Wilk wrote:
> > Do you want to prep a patch that I can stick in my 'microcode' branch?
> > .. That I will at some point try to upstream.
> 
> You might want to look back at the archives when Jeremy first tried to
> upstream this work, it was a vehement "No" and the resulting thread was
> not pretty.
> 
> Now that we have early loading via the hypervisor in 4.2 and Linux is
> finally in the process of growing its own early microcode loading
> solution I suspect the No would be even firmer.
> 
> It is on xenbits if you want it anyway:
> 
> git://xenbits.xen.org/people/ianc/linux-2.6.git debian/wheezy/microcode

Thx. Pulled it in my stable/misc branch.
> 
> About the only argument I can see for continuing to try upstreaming this
> stuff is that in
> http://www.gossamer-threads.com/lists/linux/kernel/1583630 Fenghua says:
> 
>         Note, however, that Linux users have gotten used to being able
>         to install a microcode patch in the field without having a
>         reboot; we support that model too.
> 
> i.e. this is an argument for keeping the previous scheme in parallel,
> which I suppose is an argument for supporting the same under Xen (I
> don't know if it's a good one, though).
> 
> Ian.
> 
> -- 
> Ian Campbell
> 
> 
> All the existing 2.0.x kernels are too buggy for 2.1.x to be the
> main goal.
> 		-- Alan Cox
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > .. That I will at some point try to upstream.
> 
> You might want to look back at the archives when Jeremy first tried to
> upstream this work, it was a vehement "No" and the resulting thread was
> not pretty.
> 
> Now that we have early loading via the hypervisor in 4.2 and Linux is
> finally in the process of growing its own early microcode loading
> solution I suspect the No would be even firmer.
> 
> It is on xenbits if you want it anyway:
> 
> git://xenbits.xen.org/people/ianc/linux-2.6.git debian/wheezy/microcode

Thx. Pulled it in my stable/misc branch.
> 
> About the only argument I can see for continuing to try upstreaming this
> stuff is that in
> http://www.gossamer-threads.com/lists/linux/kernel/1583630 Fenghua says:
> 
>         Note, however, that Linux users have gotten used to being able
>         to install a microcode patch in the field without having a
>         reboot; we support that model too.
> 
> i.e. this is an argument for keeping the previous scheme in parallel,
> which I suppose is an argument for supporting the same under Xen (I
> don't know if it's a good one though).
> 
> Ian.
> 
> -- 
> Ian Campbell
> 
> 
> All the existing 2.0.x kernels are too buggy for 2.1.x to be the
> main goal.
> 		-- Alan Cox
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:46:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlQWl-0002fE-Sk; Wed, 19 Dec 2012 20:46:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlQWl-0002f6-3H
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 20:46:07 +0000
Received: from [85.158.137.99:45308] by server-6.bemta-3.messagelabs.com id
	18/00-12154-D8722D05; Wed, 19 Dec 2012 20:46:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355949963!20200152!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30373 invoked from network); 19 Dec 2012 20:46:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 20:46:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="264381"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 20:46:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 20:46:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlQWh-0001Vp-3v;
	Wed, 19 Dec 2012 20:46:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlQWg-0004Hw-CK;
	Wed, 19 Dec 2012 20:46:02 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14789-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 20:46:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14789: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14789 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14789/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14670
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14670

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                881c0a027c495dd35992346176a40d39a7666fb9
baseline version:
 linux                4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=881c0a027c495dd35992346176a40d39a7666fb9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 881c0a027c495dd35992346176a40d39a7666fb9
+ branch=linux-3.0
+ revision=881c0a027c495dd35992346176a40d39a7666fb9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 881c0a027c495dd35992346176a40d39a7666fb9:tested/linux-3.0
Counting objects: 185, done.
Compressing objects: 100% (23/23), done.
Writing objects: 100% (134/134), 25.61 KiB, done.
Total 134 (delta 110), reused 134 (delta 110)
To xen@xenbits.xensource.com:git/linux-pvops.git
   4eb15b7..881c0a0  881c0a027c495dd35992346176a40d39a7666fb9 -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 20:46:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 20:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlQWl-0002fE-Sk; Wed, 19 Dec 2012 20:46:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlQWl-0002f6-3H
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 20:46:07 +0000
Received: from [85.158.137.99:45308] by server-6.bemta-3.messagelabs.com id
	18/00-12154-D8722D05; Wed, 19 Dec 2012 20:46:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355949963!20200152!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30373 invoked from network); 19 Dec 2012 20:46:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 20:46:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,318,1355097600"; 
   d="scan'208";a="264381"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 20:46:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Wed, 19 Dec 2012 20:46:03 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlQWh-0001Vp-3v;
	Wed, 19 Dec 2012 20:46:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlQWg-0004Hw-CK;
	Wed, 19 Dec 2012 20:46:02 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14789-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Dec 2012 20:46:02 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 14789: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14789 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14789/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14670
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14670

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                881c0a027c495dd35992346176a40d39a7666fb9
baseline version:
 linux                4eb15b7fe7ad2a055f79eb1056a0c2d3317ff6f0

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=881c0a027c495dd35992346176a40d39a7666fb9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 881c0a027c495dd35992346176a40d39a7666fb9
+ branch=linux-3.0
+ revision=881c0a027c495dd35992346176a40d39a7666fb9
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 881c0a027c495dd35992346176a40d39a7666fb9:tested/linux-3.0
Counting objects: 185, done.
Compressing objects: 100% (23/23), done.
Writing objects: 100% (134/134), 25.61 KiB, done.
Total 134 (delta 110), reused 134 (delta 110)
To xen@xenbits.xensource.com:git/linux-pvops.git
   4eb15b7..881c0a0  881c0a027c495dd35992346176a40d39a7666fb9 -> tested/linux-3.0
+ exit 0
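The push the log above performs boils down to publishing a validated commit to a `tested/<branch>` ref on a results repository. A minimal self-contained sketch of the same mechanics, using throwaway local repositories instead of the real xenbits remotes (all paths here are illustrative):

```shell
set -e
tmp=$(mktemp -d)
# A "work" repo standing in for /export/home/osstest/repos/linux
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email osstest@example.com && git config user.name osstest
echo ok > file && git add file && git commit -qm "tested revision"
sha=$(git rev-parse HEAD)
# A bare repo standing in for xen@xenbits.xensource.com:git/linux-pvops.git
git init -q --bare "$tmp/results.git"
# Equivalent of: git push <remote> <sha>:tested/linux-3.0
git push -q "$tmp/results.git" "$sha:refs/heads/tested/linux-3.0"
git --git-dir="$tmp/results.git" rev-parse tested/linux-3.0
```

Using the full `<sha>:refs/heads/tested/<branch>` refspec, as osstest does, means the push succeeds even when the `tested/*` branch does not yet exist on the remote.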

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 21:44:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 21:44:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlRRC-0003ZS-AV; Wed, 19 Dec 2012 21:44:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1TlRRA-0003ZH-RO; Wed, 19 Dec 2012 21:44:25 +0000
Received: from [85.158.137.99:34601] by server-10.bemta-3.messagelabs.com id
	E9/BA-07616-73532D05; Wed, 19 Dec 2012 21:44:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1355953461!15046686!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzIwMTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32370 invoked from network); 19 Dec 2012 21:44:22 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 21:44:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJLiH21029662
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 21:44:18 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJLiGPL027252
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 21:44:16 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJLiFku020364; Wed, 19 Dec 2012 15:44:15 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 13:44:15 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 64EAD1C032B; Wed, 19 Dec 2012 16:44:14 -0500 (EST)
Date: Wed, 19 Dec 2012 16:44:14 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Lars Kurth <lars.kurth@xen.org>, konrad@kernel.org
Message-ID: <20121219214414.GA25858@phenom.dumpdata.com>
References: <50A4F83D.4000205@xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50A4F83D.4000205@xen.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-arm@lists.xen.org, "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Getting into shape for GSOC 2013
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Nov 15, 2012 at 02:12:13PM +0000, Lars Kurth wrote:
> Hi everybody,
> 
> this is a gentle reminder to update
> http://wiki.xen.org/wiki/Xen_Development_Projects and to start

Let me put some extra items on the list.

> thinking about projects that are suitable for GSoC. The template to
> add items to the project page is below...
> 
> {{project
> |Project=Project description
> |Date=date of insert
> |Contact=Owner name
> |Desc=Description of
> |GSoC=Yes or No, or any other GSoC related comment
> }}
> 
> In 2012, we didn't make it into GSoC because we didn't have a good
> enough project list. We need to have a list of about 10 good project
> proposals for GSoC and present these nicely. It would be a real
> shame, if we didn't make it in 2013, in particular with some of the
> exciting work which is going on at the moment.
> 
> In 2011, when we made it our project list was at
> http://wiki.xen.org/wiki/Archived/GSoc_2011_Ideas ... Google has
> raised the bar, so we need to have
> a) Really good descriptions for our GSoC projects
> b) Pre-assign mentors to each project
> c) Ideally I would like to add a biography and interests section
> for each of our mentors. I can create a wiki template for mentors
> if it helps

Please do.
> 
> All this needs to be in place when we apply for GSoC as a mentoring
> organisation. The application deadline is likely in early February
> 2013, so starting on this now should leave us in good shape! If we

Aye. Feb.
> get a good list of projects together this year, we can tidy it up,
> iterate and improve in January!
> 
> Best Regards
> Lars
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 21:59:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 21:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlRfp-0003mE-UV; Wed, 19 Dec 2012 21:59:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlRfo-0003m9-NE
	for xen-devel@lists.xensource.com; Wed, 19 Dec 2012 21:59:32 +0000
Received: from [85.158.138.51:62975] by server-7.bemta-3.messagelabs.com id
	1C/50-23008-3C832D05; Wed, 19 Dec 2012 21:59:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1355954369!29749510!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8495 invoked from network); 19 Dec 2012 21:59:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 21:59:30 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJLxPdt024960
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 21:59:26 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJLxOdi022806
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 21:59:24 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJLxNUB008592; Wed, 19 Dec 2012 15:59:24 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 13:59:23 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C63861C032B; Wed, 19 Dec 2012 16:59:22 -0500 (EST)
Date: Wed, 19 Dec 2012 16:59:22 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andy Burns <xen.lists@burns.me.uk>
Message-ID: <20121219215922.GB12292@phenom.dumpdata.com>
References: <CA+LkAa=RHJCtpncP8AUSeR8Te3ZLMvRctuZqnv_NwUkCQ-im8Q@mail.gmail.com>
	<20120921172901.GA6821@phenom.dumpdata.com>
	<099949C5-C4C0-4D57-A778-849C94781372@gmail.com>
	<20120925141013.GC16478@phenom.dumpdata.com>
	<CAE1-PRdFxJWTKp6Qiaagh_9B8eOtK3aF=-8AT2CnrrT+KS2F0A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAE1-PRdFxJWTKp6Qiaagh_9B8eOtK3aF=-8AT2CnrrT+KS2F0A@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Xen + DVB = not working. memory allocation issue?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Oct 25, 2012 at 07:14:53PM +0100, Andy Burns wrote:
> On 25 September 2012 15:10, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> 
> > So try also limiting how much memory the hypervisor has to eliminate
> > this being a 4GB issue. Meaning on the _hypervisor_ line add 'mem=4G'.
> 
> I previously required that with xen 4.1.x but now I've upgraded to Xen
> 4.2 (and dom0 and domU to 3.6.x kernels) I no longer require it.
> 
> Has the OP got iommu=soft on the domU kernel command line, if it's a PV domU?
> 
> Also try 'pci=resource_alignment=BB:DD.F;BB:DD.F' etc on the
> hypervisor command line for your relevant PCI device(s)
> 
> Those tweaks are all I need to use these days for PCI passthrough :-)

Sorry for the late response - walking through my mailbox. Did this thread ever get resolved?
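For reference, the tweaks quoted above would typically land in the boot loader entry. A hypothetical GRUB2 stanza, just to show where each option goes; paths, version numbers and the BB:DD.F values are placeholders, not taken from this thread:

```
# /etc/grub.d/40_custom -- illustrative only; adjust for your system.
menuentry 'Xen (PCI passthrough tweaks)' {
    # 'mem=4G' goes on the hypervisor (xen.gz) line, as Konrad suggested:
    multiboot /boot/xen.gz mem=4G
    # dom0 kernel line; note that per the advice above, iommu=soft
    # belongs on the PV *domU* command line, not here:
    module /boot/vmlinuz ro root=/dev/sda1 pci=resource_alignment=BB:DD.F
    module /boot/initrd.img
}
```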

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 22:02:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 22:02:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlRhs-0003tl-FR; Wed, 19 Dec 2012 22:01:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlRhq-0003tf-Ep
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 22:01:38 +0000
Received: from [193.109.254.147:31612] by server-1.bemta-14.messagelabs.com id
	E3/74-15901-14932D05; Wed, 19 Dec 2012 22:01:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355954495!1952535!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI4Mjcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22324 invoked from network); 19 Dec 2012 22:01:36 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 22:01:36 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJM1Xr4027316
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 22:01:34 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJM1VbW027249
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 22:01:33 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJM1VVO028015; Wed, 19 Dec 2012 16:01:31 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 14:01:31 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 86D321C032B; Wed, 19 Dec 2012 17:01:30 -0500 (EST)
Date: Wed, 19 Dec 2012 17:01:30 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Vadim A. Misbakh-Soloviov" <mva@mva.name>
Message-ID: <20121219220130.GC12292@phenom.dumpdata.com>
References: <508BC02B.5040607@mva.name> <508BC903.8050104@mva.name>
	<508BCCB6.7040708@mva.name>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <508BCCB6.7040708@mva.name>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem with growing up dom0 memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Oct 27, 2012 at 06:59:50PM +0700, Vadim A. Misbakh-Soloviov wrote:
> Uhm. It seems, it is 4.1.3 (as you can see from xl info) on 3.5.3-kernel
> node, but I have all following issues even on 3.6.3-kernel node with 4.2.0.

I believe that might be related to the autoballoon functionality. Have
you tried disabling it?
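(Disabling it is a one-line change; a sketch, assuming the xl toolstack's
/etc/xen/xl.conf as shipped with Xen 4.2:

```
# /etc/xen/xl.conf
autoballoon=0
```

With that set, xl no longer shrinks dom0 automatically when creating guests,
so dom0's memory should stay where dom0_mem put it.)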
> 
> P.S.:
> By the way, I also have some network speed issues with domUs:
> 
> one_node ~mva % scp bkp.txz domU_on_another_node:/home/mva
> bkp.txz
> 
>                                       3%  592KB 259.7KB/s   0.0KB/s
> 01:12 ETA^CKilled by signal 2.
> one_node ~mva % scp bkp.txz another_node_itself:/home/mva
> bkp.txz
> 
>                                     100%   19MB   3.8MB/s   6.2MB/s
> 00:05
> 
> almost the same for reversed process (downloading from that places)
> it is rt8169 ethernet driver on "one_node", e1000e on "another_node" and
> I dunno which driver Xen uses for domUs, but:
> domU ~mva % dmesg|grep -i 'Xen.*eth'
> Initialising Xen virtual ethernet driver

There is a wiki page that explains this (Google for the Xen network wiki). I think you need to turn
TSO (or maybe it is another flag) off on the guest's adapter.
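(One way to try that, run inside the domU; 'eth0' is an assumption here,
substitute your own interface, and note the exact offload flag that matters
varies by setup:

```
# Show the current offload settings:
ethtool -k eth0
# Turn TCP segmentation offload off:
ethtool -K eth0 tso off
# Some setups also need checksum/scatter-gather offload disabled:
# ethtool -K eth0 tx off sg off
```
)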

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 22:02:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 22:02:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlRhs-0003tl-FR; Wed, 19 Dec 2012 22:01:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TlRhq-0003tf-Ep
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 22:01:38 +0000
Received: from [193.109.254.147:31612] by server-1.bemta-14.messagelabs.com id
	E3/74-15901-14932D05; Wed, 19 Dec 2012 22:01:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1355954495!1952535!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI4Mjcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22324 invoked from network); 19 Dec 2012 22:01:36 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Dec 2012 22:01:36 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBJM1Xr4027316
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Dec 2012 22:01:34 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBJM1VbW027249
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Dec 2012 22:01:33 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBJM1VVO028015; Wed, 19 Dec 2012 16:01:31 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 14:01:31 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 86D321C032B; Wed, 19 Dec 2012 17:01:30 -0500 (EST)
Date: Wed, 19 Dec 2012 17:01:30 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Vadim A. Misbakh-Soloviov" <mva@mva.name>
Message-ID: <20121219220130.GC12292@phenom.dumpdata.com>
References: <508BC02B.5040607@mva.name> <508BC903.8050104@mva.name>
	<508BCCB6.7040708@mva.name>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <508BCCB6.7040708@mva.name>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem with growing up dom0 memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Oct 27, 2012 at 06:59:50PM +0700, Vadim A. Misbakh-Soloviov wrote:
> Uhm. It seems it is 4.1.3 (as you can see from xl info) on a 3.5.3-kernel
> node, but I have all the following issues even on a 3.6.3-kernel node with 4.2.0.

I believe that might be related to the autoballoon functionality. Have
you tried disabling it?
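
For reference, a minimal sketch of how autoballooning is usually disabled
with the xl toolstack (the autoballoon option is from xl.conf; the
dom0_mem value below is an example, not taken from this thread):

```shell
# Disable autoballooning in the xl toolstack configuration
# (xl.conf; xm/xend setups use different knobs):
echo 'autoballoon=0' >> /etc/xen/xl.conf

# It also helps to fix dom0's memory at boot so ballooning is
# never needed -- add to the hypervisor's GRUB command line
# (1024M is an example value):
#   dom0_mem=1024M,max:1024M
```

With dom0_mem fixed and autoballoon off, xl no longer shrinks dom0 when
creating guests, which is the usual fix for dom0 memory changing
unexpectedly.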
> 
> P.S.:
> By the way, I also have some network speed issues with domUs:
> 
> one_node ~mva % scp bkp.txz domU_on_another_node:/home/mva
> bkp.txz
> 
>                                       3%  592KB 259.7KB/s   0.0KB/s
> 01:12 ETA^CKilled by signal 2.
> one_node ~mva % scp bkp.txz another_node_itself:/home/mva
> bkp.txz
> 
>                                     100%   19MB   3.8MB/s   6.2MB/s
> 00:05
> 
> almost the same for the reversed process (downloading from those places).
> It is the r8169 ethernet driver on "one_node", e1000e on "another_node", and
> I don't know which driver Xen uses for domUs, but:
> domU ~mva % dmesg|grep -i 'Xen.*eth'
> Initialising Xen virtual ethernet driver

There is a Wiki page that explains this (search for the Xen Network Wiki).
I think you need to turn TSO (or maybe it is another offload flag) off on
the guest's network adapter.
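
A hedged sketch of what that flag change typically looks like inside the
guest (eth0 is an assumed interface name, and the exact offload flag that
matters can differ):

```shell
# Inside the domU: list current offload settings on the
# xen-netfront interface (eth0 is an assumed name):
ethtool -k eth0

# Turn TCP segmentation offload off, as suggested above;
# tx checksum offload is the other flag commonly toggled:
ethtool -K eth0 tso off
ethtool -K eth0 tx off
```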

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 19 23:17:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 23:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlSsc-0004wJ-95; Wed, 19 Dec 2012 23:16:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlSsa-0004wE-RN
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 23:16:49 +0000
Received: from [85.158.137.99:36092] by server-7.bemta-3.messagelabs.com id
	13/98-23008-BDA42D05; Wed, 19 Dec 2012 23:16:43 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355959002!17847950!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyMzkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2187 invoked from network); 19 Dec 2012 23:16:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Dec 2012 23:16:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,320,1355097600"; d="asc'?scan'208";a="266296"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Dec 2012 23:16:42 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Wed, 19 Dec 2012 23:16:40 +0000
Message-ID: <1355958992.28419.5.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 20 Dec 2012 00:16:32 +0100
In-Reply-To: <patchbomb.1355944036@Solace>
References: <patchbomb.1355944036@Solace>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 00 of 10 v2] NUMA aware credit scheduling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4923731902198407065=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4923731902198407065==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-2DmLg/YH2Vcbtlue8HIa"

--=-2DmLg/YH2Vcbtlue8HIa
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-12-19 at 20:07 +0100, Dario Faggioli wrote:
> Which, reasoning in terms of % performance increase/decrease, means NUMA aware
> scheduling does as follows, as compared to no affinity at all and to pinning:
>
>      ----------------------------------
>      | SpecJBB2005 (throughput)       |
>      ----------------------------------
>      | #VMs | No affinity |  Pinning  |
>      |    2 |   +14.36%   |   -0.36%  |
>      |    6 |   +14.72%   |   -0.26%  |
>      |   10 |   +11.88%   |   -2.44%  |
>      ----------------------------------
>      | Sysbench memory (throughput)   |
>      ----------------------------------
>      | #VMs | No affinity |  Pinning  |
>      |    2 |   +14.12%   |   +0.09%  |
>      |    6 |   +11.12%   |   +2.14%  |
>      |   10 |   +11.81%   |   +5.06%  |
>      ----------------------------------
>      | LMBench proc (latency)         |
>      ----------------------------------
>      | #VMs | No affinity |  Pinning  |
>      ----------------------------------
>      |    2 |   +10.02%   |   +1.07%  |
>      |    6 |    +3.45%   |   +1.02%  |
>      |   10 |    +2.94%   |   +4.53%  |
>      ----------------------------------
>
Just to be sure, since I may not have picked the perfect wording: in the
table above, a +xx.yy% means NUMA aware scheduling (i.e., with this patch
series fully applied) performs xx.yy% _better_ than either 'No affinity'
or 'Pinning'. Conversely, a -zz.ww% means it performs zz.ww% worse.

Sorry, but the different combinations, and the presence of both throughput
values (where higher is better) and latency values (where lower is better),
made this a little tricky to present effectively. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-2DmLg/YH2Vcbtlue8HIa
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDSStAACgkQk4XaBE3IOsRHigCffv7N/X53wokDYQQBeC+/Cmmy
mfoAoKjrUWIKqV6XraDuDx5hy/gF6qVb
=Yu33
-----END PGP SIGNATURE-----

--=-2DmLg/YH2Vcbtlue8HIa--


--===============4923731902198407065==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4923731902198407065==--


From xen-devel-bounces@lists.xen.org Wed Dec 19 23:32:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Dec 2012 23:32:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlT6w-0005As-Qb; Wed, 19 Dec 2012 23:31:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>)
	id 1TlT6v-0005Ac-Aw; Wed, 19 Dec 2012 23:31:37 +0000
Received: from [193.109.254.147:48163] by server-2.bemta-14.messagelabs.com id
	59/4B-30744-85E42D05; Wed, 19 Dec 2012 23:31:36 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-9.tower-27.messagelabs.com!1355959885!9308274!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25262 invoked from network); 19 Dec 2012 23:31:25 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-9.tower-27.messagelabs.com with SMTP;
	19 Dec 2012 23:31:25 -0000
Received: from [62.94.142.87] (account d.faggioli@sssup.it HELO [192.168.0.20])
	by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 77352104; Thu, 20 Dec 2012 00:31:24 +0100
Message-ID: <1355959883.28419.8.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 20 Dec 2012 00:31:23 +0100
In-Reply-To: <20121219214414.GA25858@phenom.dumpdata.com>
References: <50A4F83D.4000205@xen.org>
	<20121219214414.GA25858@phenom.dumpdata.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>, konrad@kernel.org,
	Lars Kurth <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, xen-arm@lists.xen.org
Subject: Re: [Xen-devel] Getting into shape for GSOC 2013
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2091987669126155803=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============2091987669126155803==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-J1fTlwP9IWIxQmDJ71bf"


--=-J1fTlwP9IWIxQmDJ71bf
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-12-19 at 16:44 -0500, Konrad Rzeszutek Wilk wrote:
> > c) Ideally I would like to add biography and interest section for
> > all our mentors. I can create a wiki template for mentors if it
> > helps
>
> Please do.
>
Yep, I also think this would help. Perhaps we can put something like
that on each mentor's user page on the Wiki
(http://wiki.xen.org/wiki/User:Dariof).

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-J1fTlwP9IWIxQmDJ71bf
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDSTkwACgkQk4XaBE3IOsS8BQCeKBgUYgaprFPeymZXVQp6bh+t
L/MAoKjJghc+I3yHwm4B0OAdLzPSIBXO
=zT3s
-----END PGP SIGNATURE-----

--=-J1fTlwP9IWIxQmDJ71bf--



--===============2091987669126155803==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2091987669126155803==--



From xen-devel-bounces@lists.xen.org Thu Dec 20 01:13:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 01:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlUgd-0001gV-Jt; Thu, 20 Dec 2012 01:12:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlUgb-0001gQ-Ij
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 01:12:33 +0000
Received: from [85.158.137.99:59437] by server-8.bemta-3.messagelabs.com id
	89/E4-01297-00662D05; Thu, 20 Dec 2012 01:12:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355965951!17129338!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11251 invoked from network); 20 Dec 2012 01:12:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 01:12:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,320,1355097600"; 
   d="scan'208";a="267187"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 01:12:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 01:12:31 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlUgZ-00031u-6c;
	Thu, 20 Dec 2012 01:12:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlUgU-0005F1-L7;
	Thu, 20 Dec 2012 01:12:30 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14790-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 01:12:26 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14790: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14790 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14790/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 14783
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 14783

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14783
 test-amd64-amd64-xl-sedf     13 guest-localmigrate.2      fail REGR. vs. 14783

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  2ae6267371d8
baseline version:
 xen                  b13c5ee3c109

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Lasse Collin <lasse.collin@tukaani.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23436:2ae6267371d8
tag:         tip
user:        Lasse Collin <lasse.collin@tukaani.org>
date:        Wed Dec 19 12:29:25 2012 +0100
    
    XZ: Fix incorrect XZ_BUF_ERROR
    
    From: Lasse Collin <lasse.collin@tukaani.org>
    
    xz_dec_run() could incorrectly return XZ_BUF_ERROR if all of the
    following was true:
    
     - The caller knows how many bytes of output to expect and only
       provides that much output space.
    
     - When the last output bytes are decoded, the caller-provided input
       buffer ends right before the LZMA2 end of payload marker.  So LZMA2
       won't provide more output anymore, but it won't know it yet and
       thus won't return XZ_STREAM_END yet.
    
     - A BCJ filter is in use and it hasn't left any unfiltered bytes in
       the temp buffer.  This can happen with any BCJ filter, but in
       practice it's more likely with filters other than the x86 BCJ.
    
    This fixes <https://bugzilla.redhat.com/show_bug.cgi?id=735408>
    where Squashfs thinks that a valid file system is corrupt.
    
    This also fixes a similar bug in single-call mode where the
    uncompressed size of a block using BCJ + LZMA2 was 0 bytes and caller
    provided no output space.  Many empty .xz files don't contain any
    blocks and thus don't trigger this bug.
    
    This also tweaks a closely related detail: xz_dec_bcj_run() could call
    xz_dec_lzma2_run() to decode into temp buffer when it was known to be
    useless.  This was harmless although it wasted a minuscule number of
    CPU cycles.
    
    Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 23870:5c97b02f48fc
    xen-unstable date: Thu Sep 22 17:34:27 UTC 2011
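
    [Editor's note: the rule this fix restores -- a buffer error should only
    be reported once two consecutive calls have made no progress at all --
    can be sketched in plain C. The names below are purely illustrative,
    not the kernel's actual xz symbols:]

```c
#include <stddef.h>

/* Illustrative sketch (not kernel code): a streaming decoder should only
 * report a buffer error after two consecutive calls make no progress,
 * i.e. no input consumed and no output produced.  A single stalled call
 * is tolerated, because the decoder may simply be waiting to see the
 * end-of-payload marker. */
enum sketch_ret { SKETCH_OK, SKETCH_BUF_ERROR };

struct progress {
    int stalled_once;   /* previous call already made no progress */
};

static enum sketch_ret note_progress(struct progress *p,
                                     size_t in_used, size_t out_made)
{
    if (in_used == 0 && out_made == 0) {
        if (p->stalled_once)
            return SKETCH_BUF_ERROR;  /* genuinely stuck */
        p->stalled_once = 1;          /* allow one stalled call */
        return SKETCH_OK;
    }
    p->stalled_once = 0;              /* any progress resets the state */
    return SKETCH_OK;
}
```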
    
    
changeset:   23435:ee0ad4dab0a5
user:        Lasse Collin <lasse.collin@tukaani.org>
date:        Wed Dec 19 12:28:13 2012 +0100
    
    XZ decompressor: Fix decoding of empty LZMA2 streams
    
    From: Lasse Collin <lasse.collin@tukaani.org>
    
    The old code considered valid empty LZMA2 streams to be corrupt.
    Note that a typical empty .xz file has no LZMA2 data at all,
    and thus most .xz files having no uncompressed data are handled
    correctly even without this fix.
    
    Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 23869:db1ea4b127cd
    xen-unstable date: Thu Sep 22 17:33:48 UTC 2011
    
    
changeset:   23434:d72d30447ae8
user:        Jan Beulich <jbeulich@novell.com>
date:        Wed Dec 19 12:25:27 2012 +0100
    
    Add Dom0 xz kernel decompression
    
    Largely taken from Linux 2.6.38 and made build/work for Xen.
    
    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    xen-unstable changeset: 23001:9eb9948904cd
    xen-unstable date: Wed Mar  9 16:18:58 UTC 2011
    
    
changeset:   23433:7892ab82191b
user:        Jan Beulich
date:        Wed Dec 19 12:22:57 2012 +0100
    
    update Xen version to 4.1.5-pre
    
    
changeset:   23432:b13c5ee3c109
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Dec 18 12:53:15 2012 +0000
    
    Added signature for changeset 12c4c4c0a715
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 01:13:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 01:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlUgd-0001gV-Jt; Thu, 20 Dec 2012 01:12:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlUgb-0001gQ-Ij
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 01:12:33 +0000
Received: from [85.158.137.99:59437] by server-8.bemta-3.messagelabs.com id
	89/E4-01297-00662D05; Thu, 20 Dec 2012 01:12:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1355965951!17129338!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11251 invoked from network); 20 Dec 2012 01:12:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 01:12:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,320,1355097600"; 
   d="scan'208";a="267187"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 01:12:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 01:12:31 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlUgZ-00031u-6c;
	Thu, 20 Dec 2012 01:12:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlUgU-0005F1-L7;
	Thu, 20 Dec 2012 01:12:30 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14790-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 01:12:26 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14790: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14790 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14790/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 14783
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 14783

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14783
 test-amd64-amd64-xl-sedf     13 guest-localmigrate.2      fail REGR. vs. 14783

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass

version targeted for testing:
 xen                  2ae6267371d8
baseline version:
 xen                  b13c5ee3c109

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Lasse Collin <lasse.collin@tukaani.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23436:2ae6267371d8
tag:         tip
user:        Lasse Collin <lasse.collin@tukaani.org>
date:        Wed Dec 19 12:29:25 2012 +0100
    
    XZ: Fix incorrect XZ_BUF_ERROR
    
    From: Lasse Collin <lasse.collin@tukaani.org>
    
    xz_dec_run() could incorrectly return XZ_BUF_ERROR if all of the
    following was true:
    
     - The caller knows how many bytes of output to expect and only
       provides that much output space.
    
     - When the last output bytes are decoded, the caller-provided input
       buffer ends right before the LZMA2 end of payload marker.  So LZMA2
       won't provide more output anymore, but it won't know it yet and
       thus won't return XZ_STREAM_END yet.
    
     - A BCJ filter is in use and it hasn't left any unfiltered bytes in
       the temp buffer.  This can happen with any BCJ filter, but in
       practice it's more likely with filters other than the x86 BCJ.
    
    This fixes <https://bugzilla.redhat.com/show_bug.cgi?id=735408>
    where Squashfs thinks that a valid file system is corrupt.
    
    This also fixes a similar bug in single-call mode where the
    uncompressed size of a block using BCJ + LZMA2 was 0 bytes and caller
    provided no output space.  Many empty .xz files don't contain any
    blocks and thus don't trigger this bug.
    
    This also tweaks a closely related detail: xz_dec_bcj_run() could call
    xz_dec_lzma2_run() to decode into temp buffer when it was known to be
    useless.  This was harmless although it wasted a minuscule number of
    CPU cycles.
    
    Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 23870:5c97b02f48fc
    xen-unstable date: Thu Sep 22 17:34:27 UTC 2011
    
    
changeset:   23435:ee0ad4dab0a5
user:        Lasse Collin <lasse.collin@tukaani.org>
date:        Wed Dec 19 12:28:13 2012 +0100
    
    XZ decompressor: Fix decoding of empty LZMA2 streams
    
    From: Lasse Collin <lasse.collin@tukaani.org>
    
    The old code considered valid empty LZMA2 streams to be corrupt.
    Note that a typical empty .xz file has no LZMA2 data at all,
    and thus most .xz files having no uncompressed data are handled
    correctly even without this fix.
    
    Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset: 23869:db1ea4b127cd
    xen-unstable date: Thu Sep 22 17:33:48 UTC 2011
    
    
changeset:   23434:d72d30447ae8
user:        Jan Beulich <jbeulich@novell.com>
date:        Wed Dec 19 12:25:27 2012 +0100
    
    Add Dom0 xz kernel decompression
    
    Largely taken from Linux 2.6.38 and made build/work for Xen.
    
    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    xen-unstable changeset: 23001:9eb9948904cd
    xen-unstable date: Wed Mar  9 16:18:58 UTC 2011
    
    
changeset:   23433:7892ab82191b
user:        Jan Beulich
date:        Wed Dec 19 12:22:57 2012 +0100
    
    update Xen version to 4.1.5-pre
    
    
changeset:   23432:b13c5ee3c109
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Dec 18 12:53:15 2012 +0000
    
    Added signature for changeset 12c4c4c0a715
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 01:24:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 01:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlUrL-0001sC-QN; Thu, 20 Dec 2012 01:23:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TlUrJ-0001s7-VJ
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 01:23:38 +0000
Received: from [85.158.139.211:62083] by server-3.bemta-5.messagelabs.com id
	AD/4D-25441-99862D05; Thu, 20 Dec 2012 01:23:37 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355966615!18784910!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNjA4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11141 invoked from network); 20 Dec 2012 01:23:36 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-4.tower-206.messagelabs.com with SMTP;
	20 Dec 2012 01:23:36 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 19 Dec 2012 17:22:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="259934155"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 19 Dec 2012 17:23:19 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 17:23:19 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Thu, 20 Dec 2012 09:23:17 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Jan Beulich
	<JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
	for map_sg hook
Thread-Index: AQHN3iS3Yq7YP+NEuUuBhS1jhpm+BJgg4xjQ
Date: Thu, 20 Dec 2012 01:23:17 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FECC3EC@SHSMSX102.ccr.corp.intel.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
	<40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
	<50C85E9F02000078000AFD65@nat28.tlf.novell.com>
	<20121219200904.GG15037@phenom.dumpdata.com>
In-Reply-To: <20121219200904.GG15037@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Thursday, December 20, 2012 4:09 AM
> To: Jan Beulich
> Cc: Xu, Dongxiao; xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
> for map_sg hook
> 
> On Wed, Dec 12, 2012 at 09:38:23AM +0000, Jan Beulich wrote:
> > >>> On 12.12.12 at 02:03, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
> > >> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com] On Tue,
> > >> Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> > >> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > >> > > What if this check was done in the routines that provide the
> > >> > > software static buffers, and there try to provide a nice DMA
> > >> > > contiguous swatch of pages?
> > >> >
> > >> > Yes, this approach also came to our mind, but it needs to modify
> > >> > the driver itself.
> > >> > If so, it requires the driver not to use such static buffers
> > >> > (e.g., from kmalloc) for DMA even if the buffer is contiguous
> > >> > natively.
> > >>
> > >> I am a bit lost here.
> > >>
> > >> Is the issue you found only with drivers that do not use DMA API?
> > >> Can you perhaps point me to the code that triggered this fix in
> > >> the first place?
> > >
> > > Yes, we met this issue on a specific SAS device/driver, and it calls
> > > into libata-core code; you can refer to the function ata_dev_read_id(),
> > > called from ata_dev_reread_id() in drivers/ata/libata-core.c.
> > >
> > > In the above function, the target buffer is (void
> > > *)dev->link->ap->sector_buf, which is a 512-byte static buffer that
> > > unfortunately crosses a page boundary.
> >
> > I wonder whether such use of sg_init_one()/sg_set_buf() is correct in
> > the first place. While there aren't any restrictions documented for
> > its use, one clearly can't pass in whatever one wants (a pointer into
> > vmalloc()-ed memory, for instance, won't work afaict).
> >
> > I didn't go through all other users of it, but quite a few of the uses
> > elsewhere look similarly questionable.
> >
> > >> I am still not completely clear on what you had in mind. The one
> > >> method I thought about that might help in this is to have
> > >> Xen-SWIOTLB track which memory ranges were exchanged (so
> > >> xen_swiotlb_fixup would save the *buf and the size for each call to
> > >> xen_create_contiguous_region in a list or array).
> > >>
> > >> When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called they
> > >> would consult said array/list to see if the region they retrieved
> > >> crosses said 2MB chunks. If so... and here I am unsure of what would
> > >> be the best way to proceed.
> 
> And from finally looking at the code I misunderstood your initial description.
> 
> > >
> > > We thought we can solve the issue in several ways:
> > >
> > > 1) Like the previous patch I sent out, we check the DMA region in
> > > xen_swiotlb_map_page() and xen_swiotlb_map_sg_attr(), and if the DMA
> > > region crosses a page boundary, we exchange the memory and copy the
> > > content. However, it has a race condition: while copying the memory
> > > content (we introduced two memory copies in the patch), some other
> > > code may also visit the page and encounter incorrect values.
> >
> > That's why, after mapping a buffer (or SG list) one has to call the
> > sync functions before looking at data. Any race as described by you is
> > therefore a programming error.
> 
> I am with Jan here. It looks like libata is depending on the dma_unmap to
> do this. But it is unclear to me when ata_qc_complete is actually called to
> unmap the buffer (and hence sync it).

Sorry, maybe I am still not describing this issue clearly.

Take the libata case as an example: the static DMA buffer (dev->link->ap->sector_buf, shown as Data Structure B in the diagram below) is located in the following layout:

-------------------------------------Page boundary
<Data Structure A>
<Data Structure B>
-------------------------------------Page boundary
<Data Structure B (cross page)>
<Data Structure C>
-------------------------------------Page boundary

Where Structure B is our DMA target.

For Data Structure B itself, simultaneous access is not a concern; either a lock or the DMA sync functions will take care of it.
What we are not sure about is reads/writes of A and C from another processor: while we are copying the pages, another CPU may access A or C at the same time.
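
[Editor's note: the underlying condition -- a buffer whose first and last
bytes fall in different pages -- is easy to state in code. A minimal sketch
with an assumed hard-coded 4 KiB page size; this is illustrative, not the
actual swiotlb check:]

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096UL  /* assumed page size for illustration */

/* Returns 1 if [addr, addr+len) spans a page boundary.  Under Xen PV,
 * adjacent pseudo-physical pages need not be machine-contiguous, so a
 * buffer like the 512-byte sector_buf that straddles a boundary is not
 * safe for DMA without first exchanging it for contiguous machine
 * memory. */
static int crosses_page_boundary(uintptr_t addr, size_t len)
{
    if (len == 0)
        return 0;
    return (addr / SKETCH_PAGE_SIZE) !=
           ((addr + len - 1) / SKETCH_PAGE_SIZE);
}
```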

Thanks,
Dongxiao
> 
> >
> > Jan
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 01:24:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 01:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlUrL-0001sC-QN; Thu, 20 Dec 2012 01:23:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1TlUrJ-0001s7-VJ
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 01:23:38 +0000
Received: from [85.158.139.211:62083] by server-3.bemta-5.messagelabs.com id
	AD/4D-25441-99862D05; Thu, 20 Dec 2012 01:23:37 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1355966615!18784910!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzcwNjA4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11141 invoked from network); 20 Dec 2012 01:23:36 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-4.tower-206.messagelabs.com with SMTP;
	20 Dec 2012 01:23:36 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 19 Dec 2012 17:22:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="259934155"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 19 Dec 2012 17:23:19 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 17:23:19 -0800
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.85]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.88]) with mapi id
	14.01.0355.002; Thu, 20 Dec 2012 09:23:17 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Jan Beulich
	<JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
	for map_sg hook
Thread-Index: AQHN3iS3Yq7YP+NEuUuBhS1jhpm+BJgg4xjQ
Date: Thu, 20 Dec 2012 01:23:17 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FECC3EC@SHSMSX102.ccr.corp.intel.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
	<40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
	<50C85E9F02000078000AFD65@nat28.tlf.novell.com>
	<20121219200904.GG15037@phenom.dumpdata.com>
In-Reply-To: <20121219200904.GG15037@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Thursday, December 20, 2012 4:09 AM
> To: Jan Beulich
> Cc: Xu, Dongxiao; xen-devel@lists.xen.org; linux-kernel@vger.kernel.org
> Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
> for map_sg hook
> 
> On Wed, Dec 12, 2012 at 09:38:23AM +0000, Jan Beulich wrote:
> > >>> On 12.12.12 at 02:03, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
> > >> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com] On Tue,
> > >> Dec 11, 2012 at 06:39:35AM +0000, Xu, Dongxiao wrote:
> > >> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > >> > > What if this check was done in the routines that provide the
> > >> > > software static buffers and there try to provide a nice DMA
> > >> > > contiguous swatch of pages?
> > >> >
> > >> > Yes, this approach also came to our mind, which needs to modify
> > >> > the driver
> > >> itself.
> > >> > If so, it requires the driver not to use such static buffers
> > >> > (e.g., from kmalloc) for DMA, even if the buffer is contiguous
> > >> > natively.
> > >>
> > >> I am a bit lost here.
> > >>
> > >> Is the issue you found only with drivers that do not use DMA API?
> > >> Can you perhaps point me to the code that triggered this fix in the
> > >> first
> > > place?
> > >
> > > Yes, we met this issue on a specific SAS device/driver, and it calls
> > > into libata-core code, you can refer to function ata_dev_read_id()
> > > called from
> > > ata_dev_reread_id() in drivers/ata/libata-core.c.
> > >
> > > In the above function, the target buffer is (void
> > > *)dev->link->ap->sector_buf, which is 512 bytes static buffer and
> > > unfortunately it across the page boundary.
> >
> > I wonder whether such use of sg_init_one()/sg_set_buf() is correct in
> > the first place. While there aren't any restrictions documented for
> > its use, one clearly can't pass in whatever one wants (a pointer into
> > vmalloc()-ed memory, for instance, won't work afaict).
> >
> > I didn't go through all other users of it, but quite a few of the uses
> > elsewhere look similarly questionable.
> >
> > >> I am still not completely clear on what you had in mind. The one
> > >> method I thought about that might help in this is to have
> > >> Xen-SWIOTLB track which memory ranges were exchanged (so
> > >> xen_swiotlb_fixup would save the *buf and the size for each call to
> > >> xen_create_contiguous_region in a list or
> > > array).
> > >>
> > >> When xen_swiotlb_map/xen_swiotlb_map_sg_attrs are called, they
> > >> would consult said array/list to see if the region they retrieved
> > >> crosses said 2MB chunks. If so... and here I am unsure of what
> > >> would be the best way to proceed.
> 
> And from finally looking at the code I misunderstood your initial description.
> 
> > >
> > > We thought we can solve the issue in several ways:
> > >
> > > 1) Like the previous patch I sent out, we check the DMA region in
> > > xen_swiotlb_map_page() and xen_swiotlb_map_sg_attr(), and if DMA
> > > region crosses page boundary, we exchange the memory and copy the
> > > content. However it has race condition that when copying the memory
> > > content (we introduced two memory copies in the patch), some other
> > > code may also visit the page, which may encounter incorrect values.
> >
> > That's why, after mapping a buffer (or SG list) one has to call the
> > sync functions before looking at data. Any race as described by you is
> > therefore a programming error.
> 
> I am with Jan here. It looks like the libata is depending on the dma_unmap to
> do this. But it is unclear to me when the ata_qc_complete is actually called to
> unmap the buffer (and hence sync it).

Sorry, maybe I am still not describing this issue clearly.

Take the libata case as an example: the static DMA buffer (dev->link->ap->sector_buf, shown as Data Structure B in the diagram below) is laid out as follows:

-------------------------------------Page boundary
<Data Structure A>
<Data Structure B>
-------------------------------------Page boundary
<Data Structure B (cross page)>
<Data Structure C>
-------------------------------------Page boundary

Data Structure B is our DMA target.

For Data Structure B itself, we are not worried about simultaneous access; either locking or the sync functions will take care of it.
What we are not sure about is reads/writes of A and C from another processor: since we memory-copy the pages, some other CPU may be accessing A or C at the same time.
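The page-crossing condition described above can be sketched as a simple check. This is a standalone illustration, not code from the patch; the page size and function name are assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch: the libata problem is a small static buffer
 * (512 bytes here) whose start and end fall in different pages. A DMA
 * layer could detect that case with a check like this. */
#define PAGE_SIZE_ASSUMED 4096u

static int crosses_page_boundary(uintptr_t addr, size_t len)
{
    /* [addr, addr + len) spans a page boundary iff the starting offset
     * within the page plus the length exceeds the page size. */
    return (addr % PAGE_SIZE_ASSUMED) + len > PAGE_SIZE_ASSUMED;
}
```

A 512-byte buffer such as sector_buf crosses a boundary whenever it starts within the last 511 bytes of a page, which is exactly the layout sketched above.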

Thanks,
Dongxiao
> 
> >
> > Jan
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 01:28:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 01:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlUvE-0001zN-GT; Thu, 20 Dec 2012 01:27:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlUvC-0001zH-Mk
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 01:27:39 +0000
Received: from [85.158.139.211:43724] by server-9.bemta-5.messagelabs.com id
	68/5E-10690-98962D05; Thu, 20 Dec 2012 01:27:37 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1355966854!19777594!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9073 invoked from network); 20 Dec 2012 01:27:36 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-2.tower-206.messagelabs.com with SMTP;
	20 Dec 2012 01:27:36 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 17:27:33 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264725782"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 17:27:32 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 17:27:31 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Thu, 20 Dec 2012 09:27:30 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: "Nakajima, Jun" <jun.nakajima@intel.com>
Thread-Topic: [PATCH v2 05/10] nEPT: Try to enable EPT paging for L2 guest.
Thread-Index: AQHN3gyXK6Gaf1q0+kG0mdHVZWkJ5Zgg5fzQ
Date: Thu, 20 Dec 2012 01:27:29 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BB9EB@SHSMSX101.ccr.corp.intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-6-git-send-email-xiantao.zhang@intel.com>
	<CAL54oT0ZKwYkFvktNj53gg3oXN_zjB=38Pi8Dzqno8nZqY57OQ@mail.gmail.com>
In-Reply-To: <CAL54oT0ZKwYkFvktNj53gg3oXN_zjB=38Pi8Dzqno8nZqY57OQ@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Dong, Eddie" <eddie.dong@intel.com>,
	"tim@xen.org" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 05/10] nEPT: Try to enable EPT paging for
	L2 guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, Jun
Thanks, I will update the patches according to your comments.
Xiantao

> -----Original Message-----
> From: Nakajima, Jun [mailto:jun.nakajima@intel.com]
> Sent: Thursday, December 20, 2012 1:16 AM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xen.org; Dong, Eddie; keir@xen.org; JBeulich@suse.com;
> tim@xen.org
> Subject: Re: [PATCH v2 05/10] nEPT: Try to enable EPT paging for L2 guest.
> 
> Minor comments below.
> 
> On Wed, Dec 19, 2012 at 11:44 AM, Xiantao Zhang <xiantao.zhang@intel.com>
> wrote:
> > From: Zhang Xiantao <xiantao.zhang@intel.com>
> >
> > Once found EPT is enabled by L1 VMM, enabled nested EPT support for L2
> > guest.
> >
> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> > ---
> >  xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
> >  xen/arch/x86/hvm/vmx/vvmx.c        |   48
> +++++++++++++++++++++++++++--------
> >  xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
> >  3 files changed, 54 insertions(+), 15 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c
> b/xen/arch/x86/hvm/vmx/vmx.c
> > index d74aae0..e5be5a2 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly
> vmx_function_table = {
> >      .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
> >      .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
> >      .nhvm_vcpu_asid       = nvmx_vcpu_asid,
> > +    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
> >      .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
> >      .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
> >      .nhvm_intr_blocked    = nvmx_intr_blocked,
> > @@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long
> qualification, paddr_t gpa)
> >      unsigned long gla, gfn = gpa >> PAGE_SHIFT;
> >      mfn_t mfn;
> >      p2m_type_t p2mt;
> > +    int ret;
> >      struct domain *d = current->domain;
> >
> >      if ( tb_init_done )
> > @@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long
> qualification, paddr_t gpa)
> >          _d.gpa = gpa;
> >          _d.qualification = qualification;
> >          _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
> > -
> > +
> >          __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
> >      }
> >
> > -    if ( hvm_hap_nested_page_fault(gpa,
> > +    ret = hvm_hap_nested_page_fault(gpa,
> >                                     qualification & EPT_GLA_VALID       ? 1 : 0,
> >                                     qualification & EPT_GLA_VALID
> >                                       ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
> >                                     qualification & EPT_READ_VIOLATION  ? 1 : 0,
> >                                     qualification & EPT_WRITE_VIOLATION ? 1 : 0,
> > -                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
> > +                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
> > +    switch ( ret ) {
> > +    case 0:
> > +        break;
> > +    case 1:
> >          return;
> > +    case -1:
> > +        vcpu_nestedhvm(current).nv_vmexit_pending = 1;
> 
> I think we should add some comments for this case (e.g. what it means, what
> to do).
> 
> 
> > +        return;
> > +    }
> >
> >      /* Everything else is an error. */
> >      mfn = get_gfn_query_unlocked(d, gfn, &p2mt); diff --git
> > a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index
> > 76cf757..c100730 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
> >          gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
> >         goto out;
> >      }
> > +    nvmx->ept.enabled = 0;
> >      nvmx->vmxon_region_pa = 0;
> >      nvcpu->nv_vvmcx = NULL;
> >      nvcpu->nv_vvmcxaddr = VMCX_EADDR; @@ -96,9 +97,11 @@ uint64_t
> > nvmx_vcpu_guestcr3(struct vcpu *v)
> >
> >  uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)  {
> > -    /* TODO */
> > -    ASSERT(0);
> > -    return 0;
> > +    uint64_t eptp_base;
> > +    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> > +
> > +    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
> > +    return eptp_base & PAGE_MASK;
> >  }
> >
> >  uint32_t nvmx_vcpu_asid(struct vcpu *v) @@ -108,6 +111,13 @@ uint32_t
> > nvmx_vcpu_asid(struct vcpu *v)
> >      return 0;
> >  }
> >
> > +bool_t nvmx_ept_enabled(struct vcpu *v) {
> > +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> > +
> > +    return !!(nvmx->ept.enabled);
> > +}
> > +
> >  static const enum x86_segment sreg_to_index[] = {
> >      [VMX_SREG_ES] = x86_seg_es,
> >      [VMX_SREG_CS] = x86_seg_cs,
> > @@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v,
> > u32 host_cntrl)  }
> >
> >  void nvmx_update_secondary_exec_control(struct vcpu *v,
> > -                                            unsigned long value)
> > +                                            unsigned long host_cntrl)
> >  {
> >      u32 shadow_cntrl;
> >      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> > +    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
> >
> >      shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx,
> SECONDARY_VM_EXEC_CONTROL);
> > -    shadow_cntrl |= value;
> > -    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL,
> shadow_cntrl);
> > +    nvmx->ept.enabled = !!(shadow_cntrl &
> SECONDARY_EXEC_ENABLE_EPT);
> > +    shadow_cntrl |= host_cntrl;
> > +    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
> >  }
> >
> >  static void nvmx_update_pin_control(struct vcpu *v, unsigned long
> > host_cntrl) @@ -818,6 +830,17 @@ static void
> load_shadow_guest_state(struct vcpu *v)
> >      /* TODO: CR3 target control */
> >  }
> >
> > +
> > +static uint64_t get_shadow_eptp(struct vcpu *v) {
> > +    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
> > +    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
> > +    struct ept_data *ept = &p2m->ept;
> > +
> > +    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
> > +    return ept_get_eptp(ept);
> > +}
> > +
> >  static void virtual_vmentry(struct cpu_user_regs *regs)  {
> >      struct vcpu *v = current;
> > @@ -862,7 +885,10 @@ static void virtual_vmentry(struct cpu_user_regs
> *regs)
> >      /* updating host cr0 to sync TS bit */
> >      __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
> >
> > -    /* TODO: EPT_POINTER */
> > +    /* Setup virtual ETP for L2 guest*/
> > +    if ( nestedhvm_paging_mode_hap(v) )
> > +        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
> > +
> >  }
> >
> >  static void sync_vvmcs_guest_state(struct vcpu *v, struct
> > cpu_user_regs *regs) @@ -915,8 +941,8 @@ static void
> sync_vvmcs_ro(struct vcpu *v)
> >      /* Adjust exit_reason/exit_qualifciation for violation case */
> >      if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
> >                  EXIT_REASON_EPT_VIOLATION ) {
> > -        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx-
> >ept_exit.exit_qual);
> > -        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx-
> >ept_exit.exit_reason);
> > +        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
> > +        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
> >      }
> >  }
> >
> > @@ -1480,8 +1506,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t
> L2_gpa, paddr_t *L1_gpa,
> >          case EPT_TRANSLATE_VIOLATION:
> >          case EPT_TRANSLATE_MISCONFIG:
> >              rc = NESTEDHVM_PAGEFAULT_INJECT;
> > -            nvmx->ept_exit.exit_reason = exit_reason;
> > -            nvmx->ept_exit.exit_qual = exit_qual;
> > +            nvmx->ept.exit_reason = exit_reason;
> > +            nvmx->ept.exit_qual = exit_qual;
> >              break;
> >          case EPT_TRANSLATE_RETRY:
> >              rc = NESTEDHVM_PAGEFAULT_RETRY; diff --git
> > a/xen/include/asm-x86/hvm/vmx/vvmx.h
> > b/xen/include/asm-x86/hvm/vmx/vvmx.h
> > index 8eb377b..661cd8a 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> > @@ -33,9 +33,10 @@ struct nestedvmx {
> >          u32           error_code;
> >      } intr;
> >      struct {
> > +        char     enabled;
> 
> I think we should use bool_t, not char.
> 
> >          uint32_t exit_reason;
> >          uint32_t exit_qual;
> > -    } ept_exit;
> > +    } ept;
> >  };
> >
> >  #define vcpu_2_nvmx(v) (vcpu_nestedhvm(v).u.nvmx) @@ -110,6 +111,8
> @@
> > int nvmx_intercepts_exception(struct vcpu *v,
> >                                unsigned int trap, int error_code);
> > void nvmx_domain_relinquish_resources(struct domain *d);
> >
> > +bool_t nvmx_ept_enabled(struct vcpu *v);
> > +
> >  int nvmx_handle_vmxon(struct cpu_user_regs *regs);  int
> > nvmx_handle_vmxoff(struct cpu_user_regs *regs);
> >
> > --
> > 1.7.1
> >
> 
> 
> 
> --
> Jun
> Intel Open Source Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 02:39:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 02:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlW2m-0002rL-HS; Thu, 20 Dec 2012 02:39:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlW2k-0002rG-AC
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 02:39:30 +0000
Received: from [85.158.143.99:17362] by server-2.bemta-4.messagelabs.com id
	CE/18-30861-16A72D05; Thu, 20 Dec 2012 02:39:29 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355971167!30125430!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjQwNTM3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28256 invoked from network); 20 Dec 2012 02:39:28 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-9.tower-216.messagelabs.com with SMTP;
	20 Dec 2012 02:39:28 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 19 Dec 2012 18:39:25 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="234259674"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by azsmga001.ch.intel.com with ESMTP; 19 Dec 2012 18:39:24 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 18:39:12 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 19 Dec 2012 18:39:12 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Thu, 20 Dec 2012 10:39:04 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v2 08/10] nEPT: handle invept instruction from L1 VMM
Thread-Index: AQHN3cQXf8VG994U2EeGv2QlWu//TZgg+FLw
Date: Thu, 20 Dec 2012 02:39:03 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BBB04@SHSMSX101.ccr.corp.intel.com>
References: <1355946267-24227-1-git-send-email-xiantao.zhang@intel.com>
	<1355946267-24227-9-git-send-email-xiantao.zhang@intel.com>
	<50D188C902000078000B1572@nat28.tlf.novell.com>
In-Reply-To: <50D188C902000078000B1572@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Dong, Eddie" <eddie.dong@intel.com>,
	"tim@xen.org" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 08/10] nEPT: handle invept instruction
	from L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks, Jan! 
> 
> >>> On 19.12.12 at 20:44, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -2572,11 +2572,13 @@ void vmx_vmexit_handler(struct
> cpu_user_regs *regs)
> >          if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
> >              update_guest_eip();
> >          break;
> > -
> > +    case EXIT_REASON_INVEPT:
> > +        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
> > +            update_guest_eip();
> > +        break;
> 
> In (potentially going to become) long switch statements, please don't drop
> the blank lines between individual cases - instead of dropping the line here,
> you would want to insert another one below the new separately handled case.

Okay. 

> >      case EXIT_REASON_MWAIT_INSTRUCTION:
> >      case EXIT_REASON_MONITOR_INSTRUCTION:
> >      case EXIT_REASON_GETSEC:
> > -    case EXIT_REASON_INVEPT:
> >      case EXIT_REASON_INVVPID:
> >          /*
> >           * We should never exit on GETSEC because CR4.SMXE is always
> > 0 when
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1356,6 +1356,45 @@ int nvmx_handle_vmwrite(struct cpu_user_regs
> *regs)
> >      return X86EMUL_OKAY;
> >  }
> >
> > +int nvmx_handle_invept(struct cpu_user_regs *regs) {
> > +    struct vmx_inst_decoded decode;
> > +    unsigned long eptp;
> > +    u64 inv_type;
> > +
> > +    if ( !cpu_has_vmx_ept )
> > +        return X86EMUL_EXCEPTION;
> > +
> > +    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
> > +             != X86EMUL_OKAY )
> > +        return X86EMUL_EXCEPTION;
> > +
> > +    inv_type = reg_read(regs, decode.reg2);
> > +    gdprintk(XENLOG_DEBUG,"inv_type:%ld, eptp:%lx\n", inv_type,
> > + eptp);
> 
> An unconditional printk() on an operation potentially happening quite
> frequently? Even with XENLOG_DEBUG this is not acceptable imo.

Okay, I will remove it. 

> > +
> > +    switch ( inv_type ) {
> > +    case INVEPT_SINGLE_CONTEXT:
> > +        {
> > +            struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
> > +            if ( p2m )
> > +            {
> > +	            p2m_flush(current, p2m);
> 
> Despite your comment in 00/10, there still is a whitespace issue at least here
> (didn't look that closely elsewhere).

Fixed.

> > +                ept_sync_domain(p2m);
> > +            }
> > +        }
> > +        break;
> > +    case INVEPT_ALL_CONTEXT:
> > +        p2m_flush_nestedp2m(current->domain);
> > +        __invept(INVEPT_ALL_CONTEXT, 0, 0);
> > +        break;
> > +    default:
> > +        return X86EMUL_EXCEPTION;
> > +    }
> > +    vmreturn(regs, VMSUCCEED);
> > +    return X86EMUL_OKAY;
> > +}
> > +
> > +
> >  #define __emul_value(enable1, default1) \
> >      ((enable1 | default1) << 32 | (default1))
> >
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -1465,7 +1465,7 @@ p2m_flush_table(struct p2m_domain *p2m)  void
> > p2m_flush(struct vcpu *v, struct p2m_domain *p2m)  {
> > -    ASSERT(v->domain == p2m->domain);
> > +    ASSERT(p2m && v->domain == p2m->domain);
> 
> How is this change related to the rest of the patch? 
I will remove it, and let the caller check whether p2m is NULL. Originally, this was to fix a Xen boot issue.
Xiantao


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:42:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlX12-0003MX-KC; Thu, 20 Dec 2012 03:41:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1TlX10-0003MS-Uu
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:41:47 +0000
Received: from [85.158.137.99:21552] by server-14.bemta-3.messagelabs.com id
	F4/1D-27443-5F882D05; Thu, 20 Dec 2012 03:41:41 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355974899!12314180!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI4Mjcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14804 invoked from network); 20 Dec 2012 03:41:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 03:41:41 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBK3fbSj010362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Dec 2012 03:41:37 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBK3faCs023621
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 20 Dec 2012 03:41:37 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBK3fa1X027515; Wed, 19 Dec 2012 21:41:36 -0600
Received: from [192.168.1.101] (/124.68.9.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 19:41:36 -0800
Message-ID: <50D2890D.9080607@oracle.com>
Date: Thu, 20 Dec 2012 11:42:05 +0800
From: ANNIE LI <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: Matt Wilson <msw@amazon.com>
References: <1355948414-7503-1-git-send-email-msw@amazon.com>
In-Reply-To: <1355948414-7503-1-git-send-email-msw@amazon.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Steven Noonan <snoonan@amazon.com>, xen-devel@lists.xen.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
 table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 2012-12-20 4:20, Matt Wilson wrote:
> Commit 85ff6acb075a484780b3d763fdf41596d8fc0970 (xen/granttable: Grant
> tables V2 implementation) changed the GREFS_PER_GRANT_FRAME macro from
> a constant to a conditional expression. The expression depends on
> grant_table_version being appropriately set. Unfortunately, at init
> time grant_table_version will be 0. The GREFS_PER_GRANT_FRAME
> conditional expression checks for "grant_table_version == 1", and
> therefore returns the number of grant references per frame for v2.
>
> This causes gnttab_init() to allocate fewer pages for gnttab_list, as
> a frame can hold half as many v2 entries as v1 entries. After
> gnttab_resume() is called, grant_table_version is appropriately
> set. nr_init_grefs will then be miscalculated and gnttab_free_count
> will hold a value larger than the actual number of free gref entries.

Correct.

>
> If a guest is heavily utilizing improperly initialized v1 grant
> tables, memory corruption can occur. One common manifestation is
> corruption of the vmalloc list, resulting in a poisoned pointer
> dereference when accessing /proc/meminfo or /proc/vmallocinfo:
>
> [   40.770064] BUG: unable to handle kernel paging request at 0000200200001407
> [   40.770083] IP: [<ffffffff811a6fb0>] get_vmalloc_info+0x70/0x110
> [   40.770102] PGD 0
> [   40.770107] Oops: 0000 [#1] SMP
> [   40.770114] CPU 10
>
> This patch introduces a static variable, grefs_per_grant_frame, to
> cache the calculated value. gnttab_init() now calls
> gnttab_request_version() early so that grant_table_version and
> grefs_per_grant_frame can be appropriately set. A few BUG_ON()s have
> been added to prevent this type of bug from recurring in the future.

Thanks for posting this.
This is caused by grant_table_version not being initialized in gnttab_init()
before gnttab_resume() runs. How about only adding gnttab_request_version()
and a BUG_ON() check on grant_table_version in gnttab_init()? Then there would
be no need to change GREFS_PER_GRANT_FRAME into grefs_per_grant_frame or to
add more BUG_ON() checks on grefs_per_grant_frame.

For example, in gnttab_init()
....

+gnttab_request_version();
+BUG_ON(grant_table_version == 0);

....

Thanks
Annie
>
> Signed-off-by: Matt Wilson<msw@amazon.com>
> Reviewed-and-Tested-by: Steven Noonan<snoonan@amazon.com>
> Cc: Konrad Rzeszutek Wilk<konrad.wilk@oracle.com>
> Cc: Annie Li<annie.li@oracle.com>
> Cc: xen-devel@lists.xen.org
> ---
>   drivers/xen/grant-table.c |   41 +++++++++++++++++++++++------------------
>   1 files changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 043bf07..011fdc3 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -55,10 +55,6 @@
>   /* External tools reserve first few grant table entries. */
>   #define NR_RESERVED_ENTRIES 8
>   #define GNTTAB_LIST_END 0xffffffff
> -#define GREFS_PER_GRANT_FRAME \
> -(grant_table_version == 1 ?                      \
> -(PAGE_SIZE / sizeof(struct grant_entry_v1)) :   \
> -(PAGE_SIZE / sizeof(union grant_entry_v2)))
>
>   static grant_ref_t **gnttab_list;
>   static unsigned int nr_grant_frames;
> @@ -153,6 +149,7 @@ static struct gnttab_ops *gnttab_interface;
>   static grant_status_t *grstatus;
>
>   static int grant_table_version;
> +static int grefs_per_grant_frame;
>
>   static struct gnttab_free_callback *gnttab_free_callback_list;
>
> @@ -766,12 +763,14 @@ static int grow_gnttab_list(unsigned int more_frames)
>   	unsigned int new_nr_grant_frames, extra_entries, i;
>   	unsigned int nr_glist_frames, new_nr_glist_frames;
>
> +	BUG_ON(grefs_per_grant_frame == 0);
> +
>   	new_nr_grant_frames = nr_grant_frames + more_frames;
> -	extra_entries       = more_frames * GREFS_PER_GRANT_FRAME;
> +	extra_entries       = more_frames * grefs_per_grant_frame;
>
> -	nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
> +	nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
>   	new_nr_glist_frames =
> -		(new_nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
> +		(new_nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
>   	for (i = nr_glist_frames; i<  new_nr_glist_frames; i++) {
>   		gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_ATOMIC);
>   		if (!gnttab_list[i])
> @@ -779,12 +778,12 @@ static int grow_gnttab_list(unsigned int more_frames)
>   	}
>
>
> -	for (i = GREFS_PER_GRANT_FRAME * nr_grant_frames;
> -	     i<  GREFS_PER_GRANT_FRAME * new_nr_grant_frames - 1; i++)
> +	for (i = grefs_per_grant_frame * nr_grant_frames;
> +	     i<  grefs_per_grant_frame * new_nr_grant_frames - 1; i++)
>   		gnttab_entry(i) = i + 1;
>
>   	gnttab_entry(i) = gnttab_free_head;
> -	gnttab_free_head = GREFS_PER_GRANT_FRAME * nr_grant_frames;
> +	gnttab_free_head = grefs_per_grant_frame * nr_grant_frames;
>   	gnttab_free_count += extra_entries;
>
>   	nr_grant_frames = new_nr_grant_frames;
> @@ -904,7 +903,8 @@ EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>
>   static unsigned nr_status_frames(unsigned nr_grant_frames)
>   {
> -	return (nr_grant_frames * GREFS_PER_GRANT_FRAME + SPP - 1) / SPP;
> +	BUG_ON(grefs_per_grant_frame == 0);
> +	return (nr_grant_frames * grefs_per_grant_frame + SPP - 1) / SPP;
>   }
>
>   static int gnttab_map_frames_v1(xen_pfn_t *frames, unsigned int nr_gframes)
> @@ -1068,6 +1068,7 @@ static void gnttab_request_version(void)
>   	rc = HYPERVISOR_grant_table_op(GNTTABOP_set_version,&gsv, 1);
>   	if (rc == 0&&  gsv.version == 2) {
>   		grant_table_version = 2;
> +		grefs_per_grant_frame = PAGE_SIZE / sizeof(union grant_entry_v2);
>   		gnttab_interface =&gnttab_v2_ops;
>   	} else if (grant_table_version == 2) {
>   		/*
> @@ -1080,10 +1081,9 @@ static void gnttab_request_version(void)
>   		panic("we need grant tables version 2, but only version 1 is available");
>   	} else {
>   		grant_table_version = 1;
> +		grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1);
>   		gnttab_interface =&gnttab_v1_ops;
>   	}
> -	printk(KERN_INFO "Grant tables using version %d layout.\n",
> -		grant_table_version);
>   }
>
>   int gnttab_resume(void)
> @@ -1092,6 +1092,8 @@ int gnttab_resume(void)
>   	char *kmsg = "Failed to kmalloc pages for pv in hvm grant frames\n";
>
>   	gnttab_request_version();
> +	printk(KERN_INFO "Grant tables using version %d layout.\n",
> +		grant_table_version);
>   	max_nr_gframes = gnttab_max_grant_frames();
>   	if (max_nr_gframes<  nr_grant_frames)
>   		return -ENOSYS;
> @@ -1137,9 +1139,10 @@ static int gnttab_expand(unsigned int req_entries)
>   	int rc;
>   	unsigned int cur, extra;
>
> +	BUG_ON(grefs_per_grant_frame == 0);
>   	cur = nr_grant_frames;
> -	extra = ((req_entries + (GREFS_PER_GRANT_FRAME-1)) /
> -		 GREFS_PER_GRANT_FRAME);
> +	extra = ((req_entries + (grefs_per_grant_frame-1)) /
> +		 grefs_per_grant_frame);
>   	if (cur + extra>  gnttab_max_grant_frames())
>   		return -ENOSPC;
>
> @@ -1157,21 +1160,23 @@ int gnttab_init(void)
>   	unsigned int nr_init_grefs;
>   	int ret;
>
> +	gnttab_request_version();
>   	nr_grant_frames = 1;
>   	boot_max_nr_grant_frames = __max_nr_grant_frames();
>
>   	/* Determine the maximum number of frames required for the
>   	 * grant reference free list on the current hypervisor.
>   	 */
> +	BUG_ON(grefs_per_grant_frame == 0);
>   	max_nr_glist_frames = (boot_max_nr_grant_frames *
> -			       GREFS_PER_GRANT_FRAME / RPP);
> +			       grefs_per_grant_frame / RPP);
>
>   	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
>   			      GFP_KERNEL);
>   	if (gnttab_list == NULL)
>   		return -ENOMEM;
>
> -	nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
> +	nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
>   	for (i = 0; i<  nr_glist_frames; i++) {
>   		gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_KERNEL);
>   		if (gnttab_list[i] == NULL) {
> @@ -1185,7 +1190,7 @@ int gnttab_init(void)
>   		goto ini_nomem;
>   	}
>
> -	nr_init_grefs = nr_grant_frames * GREFS_PER_GRANT_FRAME;
> +	nr_init_grefs = nr_grant_frames * grefs_per_grant_frame;
>
>   	for (i = NR_RESERVED_ENTRIES; i<  nr_init_grefs - 1; i++)
>   		gnttab_entry(i) = i + 1;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:42:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlX12-0003MX-KC; Thu, 20 Dec 2012 03:41:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1TlX10-0003MS-Uu
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:41:47 +0000
Received: from [85.158.137.99:21552] by server-14.bemta-3.messagelabs.com id
	F4/1D-27443-5F882D05; Thu, 20 Dec 2012 03:41:41 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1355974899!12314180!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTI4Mjcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14804 invoked from network); 20 Dec 2012 03:41:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 03:41:41 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBK3fbSj010362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Dec 2012 03:41:37 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBK3faCs023621
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 20 Dec 2012 03:41:37 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBK3fa1X027515; Wed, 19 Dec 2012 21:41:36 -0600
Received: from [192.168.1.101] (/124.68.9.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Dec 2012 19:41:36 -0800
Message-ID: <50D2890D.9080607@oracle.com>
Date: Thu, 20 Dec 2012 11:42:05 +0800
From: ANNIE LI <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: Matt Wilson <msw@amazon.com>
References: <1355948414-7503-1-git-send-email-msw@amazon.com>
In-Reply-To: <1355948414-7503-1-git-send-email-msw@amazon.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Steven Noonan <snoonan@amazon.com>, xen-devel@lists.xen.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
 table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 2012-12-20 4:20, Matt Wilson wrote:
> Commit 85ff6acb075a484780b3d763fdf41596d8fc0970 (xen/granttable: Grant
> tables V2 implementation) changed the GREFS_PER_GRANT_FRAME macro from
> a constant to a conditional expression. The expression depends on
> grant_table_version being appropriately set. Unfortunately, at init
> time grant_table_version will be 0. The GREFS_PER_GRANT_FRAME
> conditional expression checks for "grant_table_version == 1", and
> therefore returns the number of grant references per frame for v2.
>
> This causes gnttab_init() to allocate fewer pages for gnttab_list, as
> a frame can old half the number of v2 entries than v1 entries. After
> gnttab_resume() is called, grant_table_version is appropriately
> set. nr_init_grefs will then be miscalculated and gnttab_free_count
> will hold a value larger than the actual number of free gref entries.

Correct.

>
> If a guest is heavily utilizing improperly initialized v1 grant
> tables, memory corruption can occur. One common manifestation is
> corruption of the vmalloc list, resulting in a poisoned pointer
> derefrence when accessing /proc/meminfo or /proc/vmallocinfo:
>
> [   40.770064] BUG: unable to handle kernel paging request at 0000200200001407
> [   40.770083] IP: [<ffffffff811a6fb0>] get_vmalloc_info+0x70/0x110
> [   40.770102] PGD 0
> [   40.770107] Oops: 0000 [#1] SMP
> [   40.770114] CPU 10
>
> This patch introduces a static variable, grefs_per_grant_frame, to
> cache the calculated value. gnttab_init() now calls
> gnttab_request_version() early so that grant_table_version and
> grefs_per_grant_frame can be appropriately set. A few BUG_ON()s have
> been added to prevent this type of bug from reoccurring in the future.

Thanks for posting this.
This is caused by not initializing grant_table_version in gnttab_init() 
before gnttab_resume(). How about only adding gnttab_request_version() 
and BUG_ON() to check grant_table_version in gnttab_init(), then no need 
to change GREFS_PER_GRANT_FRAME into grefs_per_grant_frame and add more 
BUG_ON() to check grefs_per_grant_frame?

For example, in gnttab_init()
....

+gnttab_request_version();
+BUG_ON(grant_table_version == 0);

....

Thanks
Annie
>
> Signed-off-by: Matt Wilson<msw@amazon.com>
> Reviewed-and-Tested-by: Steven Noonan<snoonan@amazon.com>
> Cc: Konrad Rzeszutek Wilk<konrad.wilk@oracle.com>
> Cc: Annie Li<annie.li@oracle.com>
> Cc: xen-devel@lists.xen.org
> ---
>   drivers/xen/grant-table.c |   41 +++++++++++++++++++++++------------------
>   1 files changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 043bf07..011fdc3 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -55,10 +55,6 @@
>   /* External tools reserve first few grant table entries. */
>   #define NR_RESERVED_ENTRIES 8
>   #define GNTTAB_LIST_END 0xffffffff
> -#define GREFS_PER_GRANT_FRAME \
> -(grant_table_version == 1 ?                      \
> -(PAGE_SIZE / sizeof(struct grant_entry_v1)) :   \
> -(PAGE_SIZE / sizeof(union grant_entry_v2)))
>
>   static grant_ref_t **gnttab_list;
>   static unsigned int nr_grant_frames;
> @@ -153,6 +149,7 @@ static struct gnttab_ops *gnttab_interface;
>   static grant_status_t *grstatus;
>
>   static int grant_table_version;
> +static int grefs_per_grant_frame;
>
>   static struct gnttab_free_callback *gnttab_free_callback_list;
>
> @@ -766,12 +763,14 @@ static int grow_gnttab_list(unsigned int more_frames)
>   	unsigned int new_nr_grant_frames, extra_entries, i;
>   	unsigned int nr_glist_frames, new_nr_glist_frames;
>
> +	BUG_ON(grefs_per_grant_frame == 0);
> +
>   	new_nr_grant_frames = nr_grant_frames + more_frames;
> -	extra_entries       = more_frames * GREFS_PER_GRANT_FRAME;
> +	extra_entries       = more_frames * grefs_per_grant_frame;
>
> -	nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
> +	nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
>   	new_nr_glist_frames =
> -		(new_nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
> +		(new_nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
>   	for (i = nr_glist_frames; i < new_nr_glist_frames; i++) {
>   		gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_ATOMIC);
>   		if (!gnttab_list[i])
> @@ -779,12 +778,12 @@ static int grow_gnttab_list(unsigned int more_frames)
>   	}
>
>
> -	for (i = GREFS_PER_GRANT_FRAME * nr_grant_frames;
> -	     i < GREFS_PER_GRANT_FRAME * new_nr_grant_frames - 1; i++)
> +	for (i = grefs_per_grant_frame * nr_grant_frames;
> +	     i < grefs_per_grant_frame * new_nr_grant_frames - 1; i++)
>   		gnttab_entry(i) = i + 1;
>
>   	gnttab_entry(i) = gnttab_free_head;
> -	gnttab_free_head = GREFS_PER_GRANT_FRAME * nr_grant_frames;
> +	gnttab_free_head = grefs_per_grant_frame * nr_grant_frames;
>   	gnttab_free_count += extra_entries;
>
>   	nr_grant_frames = new_nr_grant_frames;
> @@ -904,7 +903,8 @@ EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>
>   static unsigned nr_status_frames(unsigned nr_grant_frames)
>   {
> -	return (nr_grant_frames * GREFS_PER_GRANT_FRAME + SPP - 1) / SPP;
> +	BUG_ON(grefs_per_grant_frame == 0);
> +	return (nr_grant_frames * grefs_per_grant_frame + SPP - 1) / SPP;
>   }
>
>   static int gnttab_map_frames_v1(xen_pfn_t *frames, unsigned int nr_gframes)
> @@ -1068,6 +1068,7 @@ static void gnttab_request_version(void)
>   	rc = HYPERVISOR_grant_table_op(GNTTABOP_set_version, &gsv, 1);
>   	if (rc == 0 && gsv.version == 2) {
>   		grant_table_version = 2;
> +		grefs_per_grant_frame = PAGE_SIZE / sizeof(union grant_entry_v2);
>   		gnttab_interface = &gnttab_v2_ops;
>   	} else if (grant_table_version == 2) {
>   		/*
> @@ -1080,10 +1081,9 @@ static void gnttab_request_version(void)
>   		panic("we need grant tables version 2, but only version 1 is available");
>   	} else {
>   		grant_table_version = 1;
> +		grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1);
>   		gnttab_interface = &gnttab_v1_ops;
>   	}
> -	printk(KERN_INFO "Grant tables using version %d layout.\n",
> -		grant_table_version);
>   }
>
>   int gnttab_resume(void)
> @@ -1092,6 +1092,8 @@ int gnttab_resume(void)
>   	char *kmsg = "Failed to kmalloc pages for pv in hvm grant frames\n";
>
>   	gnttab_request_version();
> +	printk(KERN_INFO "Grant tables using version %d layout.\n",
> +		grant_table_version);
>   	max_nr_gframes = gnttab_max_grant_frames();
>   	if (max_nr_gframes < nr_grant_frames)
>   		return -ENOSYS;
> @@ -1137,9 +1139,10 @@ static int gnttab_expand(unsigned int req_entries)
>   	int rc;
>   	unsigned int cur, extra;
>
> +	BUG_ON(grefs_per_grant_frame == 0);
>   	cur = nr_grant_frames;
> -	extra = ((req_entries + (GREFS_PER_GRANT_FRAME-1)) /
> -		 GREFS_PER_GRANT_FRAME);
> +	extra = ((req_entries + (grefs_per_grant_frame-1)) /
> +		 grefs_per_grant_frame);
>   	if (cur + extra > gnttab_max_grant_frames())
>   		return -ENOSPC;
>
> @@ -1157,21 +1160,23 @@ int gnttab_init(void)
>   	unsigned int nr_init_grefs;
>   	int ret;
>
> +	gnttab_request_version();
>   	nr_grant_frames = 1;
>   	boot_max_nr_grant_frames = __max_nr_grant_frames();
>
>   	/* Determine the maximum number of frames required for the
>   	 * grant reference free list on the current hypervisor.
>   	 */
> +	BUG_ON(grefs_per_grant_frame == 0);
>   	max_nr_glist_frames = (boot_max_nr_grant_frames *
> -			       GREFS_PER_GRANT_FRAME / RPP);
> +			       grefs_per_grant_frame / RPP);
>
>   	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
>   			      GFP_KERNEL);
>   	if (gnttab_list == NULL)
>   		return -ENOMEM;
>
> -	nr_glist_frames = (nr_grant_frames * GREFS_PER_GRANT_FRAME + RPP - 1) / RPP;
> +	nr_glist_frames = (nr_grant_frames * grefs_per_grant_frame + RPP - 1) / RPP;
>   	for (i = 0; i < nr_glist_frames; i++) {
>   		gnttab_list[i] = (grant_ref_t *)__get_free_page(GFP_KERNEL);
>   		if (gnttab_list[i] == NULL) {
> @@ -1185,7 +1190,7 @@ int gnttab_init(void)
>   		goto ini_nomem;
>   	}
>
> -	nr_init_grefs = nr_grant_frames * GREFS_PER_GRANT_FRAME;
> +	nr_init_grefs = nr_grant_frames * grefs_per_grant_frame;
>
>   	for (i = NR_RESERVED_ENTRIES; i < nr_init_grefs - 1; i++)
>   		gnttab_entry(i) = i + 1;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:53:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXBq-0003YD-Qx; Thu, 20 Dec 2012 03:52:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlXBp-0003Y8-DT
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:52:57 +0000
Received: from [85.158.139.211:19377] by server-10.bemta-5.messagelabs.com id
	28/4F-13383-89B82D05; Thu, 20 Dec 2012 03:52:56 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1355975575!20762420!1
X-Originating-IP: [209.85.215.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20575 invoked from network); 20 Dec 2012 03:52:56 -0000
Received: from mail-la0-f49.google.com (HELO mail-la0-f49.google.com)
	(209.85.215.49)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 03:52:56 -0000
Received: by mail-la0-f49.google.com with SMTP id r15so2160366lag.22
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 19:52:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=ecDAaHq/60qZQ0Zn7WJUyTbpXT4xc1BKaINq6EkNNHQ=;
	b=Z4qKFehaOoRJ+UzA7ffDtPj6d/ykkvF4DPVFZHZocsD59WHZLgI/u5VAib6gXr/9k3
	BSklb0UhXtK+lO0tIavUEDdZlWFXP7/jJdqYLsA0JHYPBWYdFrsOJ3u5RJABuCBJaA4P
	QnjtK4OQWwf47oT4eciilBKbnHK+4+o2AXuoFxfmKLYwu86GaAvo/MOUIQR7nEQdgcRA
	sF1QZCeS6fvzm3Fsc3E5WG2JJOQ0pwETeYwBUdT7HJPxD1fkpepXc335Pi8B+kcz9Rk3
	1FWDvvbAtpjy+MC6h/QwoW5p7tdHSaiLTsvxI0h9ZzTyeOwMMUhRoZ83Ar/R2EuG1G5n
	d6jw==
MIME-Version: 1.0
Received: by 10.152.111.166 with SMTP id ij6mr7491972lab.47.1355975575183;
	Wed, 19 Dec 2012 19:52:55 -0800 (PST)
Received: by 10.112.99.197 with HTTP; Wed, 19 Dec 2012 19:52:55 -0800 (PST)
Date: Thu, 20 Dec 2012 11:52:55 +0800
X-Google-Sender-Auth: bxSS7aJhAtDu6Xk3yG7ZDsKy8mU
Message-ID: <CAKhsbWa65-uQgMi2epF1RZefJPL63p1+-MiHc7yDhXrMHev4JQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Jean Guyader <jean.guyader@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of resource
	conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the hvmloader part of the change that gets rid of the resource
conflict warning in the guest kernel.
The OpRegion may not always be page aligned.
As a result, one extra page is required to fully accommodate the
OpRegion in that case.
Just reserve one more page here.

Signed-off-by: Timothy Guo <firemeteor@users.sourceforge.net>

diff -r 11b4bc743b1f tools/firmware/hvmloader/e820.c
--- a/tools/firmware/hvmloader/e820.c    Mon Dec 17 14:59:11 2012 +0000
+++ b/tools/firmware/hvmloader/e820.c    Thu Dec 20 00:07:40 2012 +0800
@@ -142,11 +142,11 @@ int build_e820_table(struct e820entry *e
         nr++;

         e820[nr].addr = igd_opregion_base;
-        e820[nr].size = 2 * PAGE_SIZE;
+        e820[nr].size = 3 * PAGE_SIZE;
         e820[nr].type = E820_NVS;
         nr++;

-        e820[nr].addr = igd_opregion_base + 2 * PAGE_SIZE;
+        e820[nr].addr = igd_opregion_base + 3 * PAGE_SIZE;
         e820[nr].size = (uint32_t)-e820[nr].addr;
         e820[nr].type = E820_RESERVED;
         nr++;
diff -r 11b4bc743b1f tools/firmware/hvmloader/pci.c
--- a/tools/firmware/hvmloader/pci.c    Mon Dec 17 14:59:11 2012 +0000
+++ b/tools/firmware/hvmloader/pci.c    Thu Dec 20 00:07:40 2012 +0800
@@ -98,7 +98,7 @@ void pci_setup(void)
                 virtual_vga = VGA_pt;
                 if ( vendor_id == 0x8086 )
                 {
-                    igd_opregion_pgbase = mem_hole_alloc(2);
+                    igd_opregion_pgbase = mem_hole_alloc(3);
                     /*
                      * Write the the OpRegion offset to give the opregion
                      * address to the device model. The device model will trap


This is the qemu-xen part of the change that gets rid of the resource
conflict warning in the guest kernel.
If the host OpRegion is page aligned, two pages are sufficient.
Otherwise, we need to map one more page to have the region fully
accommodated -- the guest kernel maps this extra page and would
otherwise complain about a resource conflict.

Signed-off-by: Timothy Guo <firemeteor@users.sourceforge.net>

diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
index c6f8869..3f2b285 100644
--- a/hw/pt-graphics.c
+++ b/hw/pt-graphics.c
@@ -81,6 +81,7 @@ uint32_t igd_read_opregion(struct pt_dev *pci_dev)
 void igd_write_opregion(struct pt_dev *real_dev, uint32_t val)
 {
     uint32_t host_opregion = 0;
+    uint32_t map_size = 2;
     int ret;

     if ( igd_guest_opregion )
@@ -74,11 +93,13 @@ void igd_write_opregion(struct pt_dev *real_dev, uint32_t val)
     host_opregion = pt_pci_host_read(real_dev->pci_dev, PCI_INTEL_OPREGION, 4);
     igd_guest_opregion = (val & ~0xfff) | (host_opregion & 0xfff);
     PT_LOG("Map OpRegion: %x -> %x\n", host_opregion, igd_guest_opregion);
+    // If the opregion is not page-aligned, map one more page to fit the entire region.
+    map_size += (host_opregion & 0xfff) != 0;

     ret = xc_domain_memory_mapping(xc_handle, domid,
             igd_guest_opregion >> XC_PAGE_SHIFT,
             host_opregion >> XC_PAGE_SHIFT,
-            2,
+            map_size,
             DPCI_ADD_MAPPING);

     if ( ret != 0 )

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:57:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXFc-0003fO-Ge; Thu, 20 Dec 2012 03:56:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlXFb-0003fH-09
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:56:51 +0000
Received: from [85.158.139.211:34949] by server-13.bemta-5.messagelabs.com id
	A9/93-10716-28C82D05; Thu, 20 Dec 2012 03:56:50 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1355975808!21278328!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21495 invoked from network); 20 Dec 2012 03:56:49 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 03:56:49 -0000
Received: by mail-la0-f54.google.com with SMTP id j13so2219787lah.27
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 19:56:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=NQrHYN22dNugmzrMnVhfvc6hyZtxLHiba2jsF+wn8yY=;
	b=nxfGXuqoEEQv/N/ggSKeZW/mP6dAvfZuoj/z8g+8NLh88Z8nNvB0Z9uYWPS0tc4lvi
	Poiv10Oy5sQo6p1oVc9jl1TQEaS9wZKz/ZL7APQpQKDWcT2fWOzYJ3xQqdEoqdWKtMqe
	rN8xqL9Yuh/1Lj/rvn2SSep649FO0YWGWTSei5bWTZJYl7t0EUNA6AcpEtHlDg6WTPOj
	i/53UL2/lTbVaXmwdYMRl7/JvGBrh04vFneGOwBFXlZ0DICXJah4iJnTZH1Z5MwhdEvI
	ervLeNS3xgLKDpRGl1vhvEXxl7nHNooIdFYTC5oR4i6BPz3scC9Mgw+tuwhSrUG1wSkc
	twJw==
MIME-Version: 1.0
Received: by 10.112.40.129 with SMTP id x1mr3298377lbk.95.1355975808082; Wed,
	19 Dec 2012 19:56:48 -0800 (PST)
Received: by 10.112.99.197 with HTTP; Wed, 19 Dec 2012 19:56:48 -0800 (PST)
In-Reply-To: <CAKhsbWa65-uQgMi2epF1RZefJPL63p1+-MiHc7yDhXrMHev4JQ@mail.gmail.com>
References: <CAKhsbWa65-uQgMi2epF1RZefJPL63p1+-MiHc7yDhXrMHev4JQ@mail.gmail.com>
Date: Thu, 20 Dec 2012 11:56:48 +0800
X-Google-Sender-Auth: btPoLUBvMdhYHnzJTwYmEUQXExk
Message-ID: <CAKhsbWagEeZQjfq8U9U8dvtR_peCeC6QyRT0oPsj7ZUULoRunA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Jean.guyader@gmail.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Switching to a new address that can reach Jean.

On Thu, Dec 20, 2012 at 11:52 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> This is hvmloader part of the change that gets rid of the resource
> conflict warning in the guest kernel.
> The OpRegion may not always be page aligned.
> As a result one extra page is required to fully accommodate the
> OpRegion in that case.
> Just reserve one more page here.
>
> Signed-off-by: Timothy Guo <firemeteor@users.sourceforge.net>
>
> diff -r 11b4bc743b1f tools/firmware/hvmloader/e820.c
> --- a/tools/firmware/hvmloader/e820.c    Mon Dec 17 14:59:11 2012 +0000
> +++ b/tools/firmware/hvmloader/e820.c    Thu Dec 20 00:07:40 2012 +0800
> @@ -142,11 +142,11 @@ int build_e820_table(struct e820entry *e
>          nr++;
>
>          e820[nr].addr = igd_opregion_base;
> -        e820[nr].size = 2 * PAGE_SIZE;
> +        e820[nr].size = 3 * PAGE_SIZE;
>          e820[nr].type = E820_NVS;
>          nr++;
>
> -        e820[nr].addr = igd_opregion_base + 2 * PAGE_SIZE;
> +        e820[nr].addr = igd_opregion_base + 3 * PAGE_SIZE;
>          e820[nr].size = (uint32_t)-e820[nr].addr;
>          e820[nr].type = E820_RESERVED;
>          nr++;
> diff -r 11b4bc743b1f tools/firmware/hvmloader/pci.c
> --- a/tools/firmware/hvmloader/pci.c    Mon Dec 17 14:59:11 2012 +0000
> +++ b/tools/firmware/hvmloader/pci.c    Thu Dec 20 00:07:40 2012 +0800
> @@ -98,7 +98,7 @@ void pci_setup(void)
>                  virtual_vga = VGA_pt;
>                  if ( vendor_id == 0x8086 )
>                  {
> -                    igd_opregion_pgbase = mem_hole_alloc(2);
> +                    igd_opregion_pgbase = mem_hole_alloc(3);
>                      /*
>                       * Write the the OpRegion offset to give the opregion
>                       * address to the device model. The device model will trap
>
>
> This is qemu-xen part of the change that gets rid of the resource
> conflict warning in the guest kernel.
> If the host OpRegion is page aligned, two pages will be sufficient.
> Otherwise, we need to map one more page to have it fully accommodated
> -- the guest kernel would map this extra page and complain about
> resource conflict.
>
> Signed-off-by: Timothy Guo <firemeteor@users.sourceforge.net>
>
> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
> index c6f8869..3f2b285 100644
> --- a/hw/pt-graphics.c
> +++ b/hw/pt-graphics.c
> @@ -81,6 +81,7 @@ uint32_t igd_read_opregion(struct pt_dev *pci_dev)
>  void igd_write_opregion(struct pt_dev *real_dev, uint32_t val)
>  {
>      uint32_t host_opregion = 0;
> +    uint32_t map_size = 2;
>      int ret;
>
>      if ( igd_guest_opregion )
> @@ -74,11 +93,13 @@ void igd_write_opregion(struct pt_dev *real_dev, uint32_t val)
>      host_opregion = pt_pci_host_read(real_dev->pci_dev, PCI_INTEL_OPREGION, 4);
>      igd_guest_opregion = (val & ~0xfff) | (host_opregion & 0xfff);
>      PT_LOG("Map OpRegion: %x -> %x\n", host_opregion, igd_guest_opregion);
> +    // If the opregion is not page-aligned, map one more page to fit the entire region.
> +    map_size += (host_opregion & 0xfff) != 0;
>
>      ret = xc_domain_memory_mapping(xc_handle, domid,
>              igd_guest_opregion >> XC_PAGE_SHIFT,
>              host_opregion >> XC_PAGE_SHIFT,
> -            2,
> +            map_size,
>              DPCI_ADD_MAPPING);
>
>      if ( ret != 0 )

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIF-0003oj-4T; Thu, 20 Dec 2012 03:59:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXID-0003nM-Jy
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:34 +0000
Received: from [85.158.139.83:53920] by server-7.bemta-5.messagelabs.com id
	9E/5A-08009-42D82D05; Thu, 20 Dec 2012 03:59:32 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355975968!30670629!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 946 invoked from network); 20 Dec 2012 03:59:31 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 03:59:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775540"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:09 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:44 +0800
Message-Id: <1356018231-26440-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement a guest EPT page-table walker; some of the logic is based on
the shadow code's ia32e PT walker. During the walk, if the target pages
are not in memory, use the RETRY mechanism so that the target page gets
a chance to be brought back in.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   44 +++++-
 xen/arch/x86/mm/guest_walk.c        |   16 ++-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  280 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/arch/x86/mm/shadow/multi.c      |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    2 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h   |   28 ++++
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   15 ++
 12 files changed, 390 insertions(+), 10 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1cae8a8..3cd0075 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occurred while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4495dd6..f9e620c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
+                EXIT_REASON_EPT_VIOLATION ) {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1454,8 +1463,39 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    int rc;
+    unsigned long gfn;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+            rc = NESTEDHVM_PAGEFAULT_DONE;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = NESTEDHVM_PAGEFAULT_INJECT;
+            nvmx->ept_exit.exit_reason = exit_reason;
+            nvmx->ept_exit.exit_qual = exit_qual;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            rc = NESTEDHVM_PAGEFAULT_RETRY;
+            break;
+        case EPT_TRANSLATE_ERR_PAGE:
+            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
+            break;
+        default:
+            gdprintk(XENLOG_ERR, "GUEST EPT translation error!:%d\n", rc);
+            rc = NESTEDHVM_PAGEFAULT_UNHANDLED;
+            break;
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 0f08fb0..1c165c6 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
-                                   gfn_t gfn, 
+void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
-                                   uint32_t *rc) 
+                                   p2m_query_t q,
+                                   uint32_t *rc)
 {
     struct page_info *page;
     void *map;
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..c3e698c
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,280 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for guests in the nested case.
+ *
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Bits that must be zero (reserved) in entries at all levels */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+/*
+ * TODO: Just leave it as 0 here so the code compiles; the real
+ * capabilities will be defined in subsequent patches.
+ */
+#define NEPT_VPID_CAP_BITS 0
+
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+bool_t nept_sp_entry(ept_entry_t e)
+{
+    return !!(e.sp);
+}
+
+static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level ) {
+    case 1:
+        break;
+    case 2 ... 3:
+        if (nept_sp_entry(e))
+            rsv_bits |= ((1ull << (9 * (level - 1))) - 1) << PAGE_SHIFT;
+        else
+            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
+        break;
+    case 4:
+        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT paging level: %d\n", level);
+        BUG();
+        break;
+    }
+    return !!(e.epte & rsv_bits);
+}
+
+/* EMT checking */
+static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
+{
+    if ( e.sp || level == 1 ) {
+        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
+                e.emt == EPT_EMT_RSV2 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_rwx_bits_check(ept_entry_t e)
+{
+    /* write-only or write/execute-only */
+    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
+
+    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
+        return 1;
+
+    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED))
+        return 1;
+
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
+{
+    return (nept_rsv_bits_check(e, level) ||
+                nept_emt_bits_check(e, level) ||
+                nept_rwx_bits_check(e));
+}
+
+static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
+}
+
+/* nept's non-present check */
+static bool_t nept_non_present_check(ept_entry_t e)
+{
+    if (e.epte & EPTE_RWX_MASK)
+        return 0;
+    return 1;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    return NEPT_VPID_CAP_BITS;
+}
+
+static int ept_lvl_table_offset(unsigned long gpa, int lvl)
+{
+    return (gpa >> (EPT_L4_PAGETABLE_SHIFT - (4 - lvl) * 9)) &
+                (EPT_PAGETABLE_ENTRIES - 1);
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
+{
+    int lvl;
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    mfn_t lxmfn;
+    ept_entry_t *lxp = NULL;
+
+    memset(gw, 0, sizeof(*gw));
+
+    for (lvl = 4; lvl > 0; lvl--)
+    {
+        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
+        if ( !lxp )
+            goto map_err;
+        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
+        unmap_domain_page(lxp);
+        put_page(mfn_to_page(mfn_x(lxmfn)));
+
+        if (nept_non_present_check(gw->lxe[lvl]))
+            goto non_present;
+
+        if (nept_misconfiguration_check(gw->lxe[lvl], lvl))
+            goto misconfig_err;
+
+        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
+        {
+            /* Generate a fake l1 table entry so callers don't all
+             * have to understand superpages. */
+            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
+            gfn_t start = _gfn(gw->lxe[lvl].mfn);
+            /* Increment the pfn by the right number of 4k pages. */
+            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
+                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
+            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
+                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
+            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
+            goto done;
+        }
+        if ( lvl > 1 )
+            base_gfn = _gfn(gw->lxe[lvl].mfn);
+    }
+
+    /* We can only reach here if no superpage entry was encountered. */
+    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
+    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto out;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+        ret = EPT_TRANSLATE_RETRY;
+    else
+        ret = EPT_TRANSLATE_ERR_PAGE;
+    goto out;
+
+misconfig_err:
+    ret = EPT_TRANSLATE_MISCONFIG;
+    goto out;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+    /* fall through. */
+out:
+    return ret;
+}
+
+/* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    ept_walk_t gw;
+    rwx_acc &= EPTE_RWX_MASK;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc ) {
+    case EPT_TRANSLATE_SUCCEED:
+        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                            EPTE_RWX_MASK;
+            *page_order = 9;
+        }
+        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG ) {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                    gw.lxe[1].epte & EPTE_RWX_MASK;
+            *page_order = 0;
+        }
+        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
+            *page_order = 18;
+        }
+        else
+        {
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+            BUG();
+        }
+        if ( nept_permission_check(rwx_acc, rwx_bits) )
+        {
+            *l1gfn = gw.lxe[0].mfn;
+            break;
+        }
+        rc = EPT_TRANSLATE_VIOLATION;
+    /* Fall through to EPT violation if permission check fails. */
+    case EPT_TRANSLATE_VIOLATION:
+        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+        *exit_reason = EXIT_REASON_EPT_VIOLATION;
+        break;
+
+    case EPT_TRANSLATE_ERR_PAGE:
+        break;
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = EPT_TRANSLATE_MISCONFIG;
+        *exit_qual = 0;
+        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported ept translation type!:%d\n", rc);
+        rc = EPT_TRANSLATE_UNSUPPORTED;
+        break;
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 4967da1..409198c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
     /* Translate the GFN to an MFN */
     ASSERT(!paging_locked_by_me(v->domain));
     mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
-        
+
     if ( p2m_is_readonly(p2mt) )
     {
         put_gfn(v->domain, gfn);
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..db8a0b6 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..4c489d2 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -47,11 +47,13 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
     vcpu_nestedhvm(v).nv_guestmode = 0
 
 /* Nested paging */
+#define NESTEDHVM_PAGEFAULT_UNHANDLED -1
 #define NESTEDHVM_PAGEFAULT_DONE       0
 #define NESTEDHVM_PAGEFAULT_INJECT     1
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..feaaa80 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -51,6 +51,11 @@ typedef union {
     u64 epte;
 } ept_entry_t;
 
+typedef struct {
+    /*use lxe[0] to save result */
+    ept_entry_t lxe[5];
+} ept_walk_t;
+
 #define EPT_TABLE_ORDER         9
 #define EPTE_SUPER_PAGE_MASK    0x80
 #define EPTE_MFN_MASK           0xffffffffff000ULL
@@ -60,6 +65,28 @@ typedef union {
 #define EPTE_AVAIL1_SHIFT       8
 #define EPTE_EMT_SHIFT          3
 #define EPTE_IGMT_SHIFT         6
+#define EPTE_RWX_MASK           0x7
+#define EPTE_FLAG_MASK          0x7f
+
+#define EPT_EMT_UC              0
+#define EPT_EMT_WC              1
+#define EPT_EMT_RSV0            2
+#define EPT_EMT_RSV1            3
+#define EPT_EMT_WT              4
+#define EPT_EMT_WP              5
+#define EPT_EMT_WB              6
+#define EPT_EMT_RSV2            7
+
+typedef enum {
+    ept_access_n     = 0, /* No access permissions allowed */
+    ept_access_r     = 1,
+    ept_access_w     = 2,
+    ept_access_rw    = 3,
+    ept_access_x     = 4,
+    ept_access_rx    = 5,
+    ept_access_wx    = 6,
+    ept_access_all   = 7,
+} ept_access_t;
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
@@ -419,6 +446,7 @@ void update_guest_eip(void);
 #define _EPT_GLA_FAULT              8
 #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
 
+#define EPT_L4_PAGETABLE_SHIFT      39
 #define EPT_PAGETABLE_ENTRIES       512
 
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..245fddb 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,13 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_UNSUPPORTED  -1
+#define EPT_TRANSLATE_SUCCEED       0
+#define EPT_TRANSLATE_VIOLATION     1
+#define EPT_TRANSLATE_ERR_PAGE      2
+#define EPT_TRANSLATE_MISCONFIG     3
+#define EPT_TRANSLATE_RETRY         4
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +203,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXID-0003nW-G0; Thu, 20 Dec 2012 03:59:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIB-0003n5-Px
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:31 +0000
Received: from [85.158.139.83:53896] by server-10.bemta-5.messagelabs.com id
	E4/B2-13383-32D82D05; Thu, 20 Dec 2012 03:59:31 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355975968!30670629!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 938 invoked from network); 20 Dec 2012 03:59:30 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 03:59:30 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775529"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:04 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:41 +0800
Message-Id: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT & VPID
	support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use EPT hardware for the L2 guest's memory virtualization.
In this way, the L2 guest's performance can be improved significantly.
According to our testing, some benchmarks show a more than 5x performance gain.

Changes from v1:
Update the patches according to Tim's comments. 
1. Patch 03: Enhance the virtual EPT's walker logic.
2. Patch 04: Add a new field in struct p2m_domain, and use it to store
   EPT-specific data. For the host p2m, it saves the L1 VMM's EPT data,
   and for the nested p2m, it saves the nested EPT's data.
3. Patch 07: strictly check the host's p2m access type.
4. Other patches: some whitespace mangling fixes.

Changes from v2:
Addressed comments from Jan and Jun:
1. Add Acked-by message for reviewed patches by Tim. 
2. Fixed one whitespace mangling issue in PATCH 08
3. Add some comments to describe the meaning of 
   the return value of hvm_hap_nested_page_fault 
   in PATCH 05.
4. Add the logic for handling default case of two switch
   statements.

Zhang Xiantao (10):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nVMX: virtualize VPID capability to nested VMM.
  nEPT: expose EPT & VPID capabilities to L1 VMM

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 ++++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    9 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   91 ++++------
 xen/arch/x86/hvm/vmx/vvmx.c             |  215 ++++++++++++++++++++++--
 xen/arch/x86/mm/guest_walk.c            |   16 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  286 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   95 ++++++-----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |  104 +++++++++---
 xen/arch/x86/mm/p2m.c                   |   49 +++---
 xen/arch/x86/mm/shadow/multi.c          |    2 +-
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    2 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   24 ++--
 xen/include/asm-x86/hvm/vmx/vmx.h       |   38 ++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   30 +++-
 xen/include/asm-x86/p2m.h               |   20 ++-
 22 files changed, 852 insertions(+), 193 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c



From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIF-0003oj-4T; Thu, 20 Dec 2012 03:59:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXID-0003nM-Jy
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:34 +0000
Received: from [85.158.139.83:53920] by server-7.bemta-5.messagelabs.com id
	9E/5A-08009-42D82D05; Thu, 20 Dec 2012 03:59:32 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355975968!30670629!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 946 invoked from network); 20 Dec 2012 03:59:31 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 03:59:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775540"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:09 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:44 +0800
Message-Id: <1356018231-26440-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement a guest EPT PT walker; some of the logic is based on the
shadow code's ia32e PT walker. During the PT walk, if the target
pages are not in memory, use the RETRY mechanism to give the guest a
chance to bring the target page back.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   44 +++++-
 xen/arch/x86/mm/guest_walk.c        |   16 ++-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  280 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/arch/x86/mm/shadow/multi.c      |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    2 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h   |   28 ++++
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   15 ++
 12 files changed, 390 insertions(+), 10 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1cae8a8..3cd0075 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occured while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 4495dd6..f9e620c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -906,9 +906,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
+                EXIT_REASON_EPT_VIOLATION ) {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1454,8 +1463,39 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    int rc;
+    unsigned long gfn;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc ) {
+        case EPT_TRANSLATE_SUCCEED:
+            *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+            rc = NESTEDHVM_PAGEFAULT_DONE;
+            break;
+        case EPT_TRANSLATE_VIOLATION:
+        case EPT_TRANSLATE_MISCONFIG:
+            rc = NESTEDHVM_PAGEFAULT_INJECT;
+            nvmx->ept_exit.exit_reason = exit_reason;
+            nvmx->ept_exit.exit_qual = exit_qual;
+            break;
+        case EPT_TRANSLATE_RETRY:
+            rc = NESTEDHVM_PAGEFAULT_RETRY;
+            break;
+        case EPT_TRANSLATE_ERR_PAGE:
+            rc = NESTEDHVM_PAGEFAULT_L1_ERROR;
+            break;
+        default:
+            rc = NESTEDHVM_PAGEFAULT_UNHANDLED;
+            gdprintk(XENLOG_ERR, "Guest EPT translation error: %d\n", rc);
+            break;
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 0f08fb0..1c165c6 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
-                                   gfn_t gfn, 
+void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
-                                   uint32_t *rc) 
+                                   p2m_query_t q,
+                                   uint32_t *rc)
 {
     struct page_info *page;
     void *map;
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..c3e698c
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,280 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for a guest in the nested case.
+ *
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Bits that must be reserved in all levels' entries */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+/*
+ * TODO: Leave this as 0 for now so the code compiles; the real
+ * capabilities will be defined in subsequent patches.
+ */
+#define NEPT_VPID_CAP_BITS 0
+
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+bool_t nept_sp_entry(ept_entry_t e)
+{
+    return !!(e.sp);
+}
+
+static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level ) {
+    case 1:
+        break;
+    case 2 ... 3:
+        if (nept_sp_entry(e))
+            rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
+        else
+            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
+        break;
+    case 4:
+        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
+    break;
+    default:
+        gdprintk(XENLOG_ERR,"Unsupported EPT paging level: %d\n", level);
+        BUG();
+        break;
+    }
+    return !!(e.epte & rsv_bits);
+}
+
+/* EMT checking */
+static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
+{
+    if ( e.sp || level == 1 ) {
+        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
+                e.emt == EPT_EMT_RSV2 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_rwx_bits_check(ept_entry_t e) {
+    /*write only or write/execute only*/
+    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
+
+    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
+        return 1;
+
+    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED))
+        return 1;
+
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
+{
+    return (nept_rsv_bits_check(e, level) ||
+                nept_emt_bits_check(e, level) ||
+                nept_rwx_bits_check(e));
+}
+
+static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
+}
+
+/* nept's non-present check */
+static bool_t nept_non_present_check(ept_entry_t e)
+{
+    if (e.epte & EPTE_RWX_MASK)
+        return 0;
+    return 1;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    return NEPT_VPID_CAP_BITS;
+}
+
+static int ept_lvl_table_offset(unsigned long gpa, int lvl)
+{
+    return (gpa >>(EPT_L4_PAGETABLE_SHIFT -(4 - lvl) * 9)) &
+                (EPT_PAGETABLE_ENTRIES -1 );
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
+{
+    int lvl;
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    mfn_t lxmfn;
+    ept_entry_t *lxp = NULL;
+
+    memset(gw, 0, sizeof(*gw));
+
+    for (lvl = 4; lvl > 0; lvl--)
+    {
+        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
+        if ( !lxp )
+            goto map_err;
+        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
+        unmap_domain_page(lxp);
+        put_page(mfn_to_page(mfn_x(lxmfn)));
+
+        if (nept_non_present_check(gw->lxe[lvl]))
+            goto non_present;
+
+        if (nept_misconfiguration_check(gw->lxe[lvl], lvl))
+            goto misconfig_err;
+
+        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
+        {
+            /* Generate a fake l1 table entry so callers don't all
+             * have to understand superpages. */
+            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
+            gfn_t start = _gfn(gw->lxe[lvl].mfn);
+            /* Increment the pfn by the right number of 4k pages. */
+            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
+                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
+            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
+                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
+            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
+            goto done;
+        }
+        if ( lvl > 1 )
+            base_gfn = _gfn(gw->lxe[lvl].mfn);
+    }
+
+    /* We can only reach here if this is not a superpage entry. */
+    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
+    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto out;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+        ret = EPT_TRANSLATE_RETRY;
+    else
+        ret = EPT_TRANSLATE_ERR_PAGE;
+    goto out;
+
+misconfig_err:
+    ret =  EPT_TRANSLATE_MISCONFIG;
+    goto out;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+    /* fall through. */
+out:
+    return ret;
+}
+
+/* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    ept_walk_t gw;
+    rwx_acc &= EPTE_RWX_MASK;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc ) {
+    case EPT_TRANSLATE_SUCCEED:
+        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                            EPTE_RWX_MASK;
+            *page_order = 9;
+        }
+        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG ) {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                    gw.lxe[1].epte & EPTE_RWX_MASK;
+            *page_order = 0;
+        }
+        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
+            *page_order = 18;
+        }
+        else
+        {
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+            BUG();
+        }
+        if ( nept_permission_check(rwx_acc, rwx_bits) )
+        {
+            *l1gfn = gw.lxe[0].mfn;
+            break;
+        }
+        rc = EPT_TRANSLATE_VIOLATION;
+    /* Fall through to EPT violation if permission check fails. */
+    case EPT_TRANSLATE_VIOLATION:
+        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+        *exit_reason = EXIT_REASON_EPT_VIOLATION;
+        break;
+
+    case EPT_TRANSLATE_ERR_PAGE:
+        break;
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = EPT_TRANSLATE_MISCONFIG;
+        *exit_qual = 0;
+        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        break;
+    default:
+        rc = EPT_TRANSLATE_UNSUPPORTED;
+        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+        break;
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 4967da1..409198c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
     /* Translate the GFN to an MFN */
     ASSERT(!paging_locked_by_me(v->domain));
     mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
-        
+
     if ( p2m_is_readonly(p2mt) )
     {
         put_gfn(v->domain, gfn);
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..db8a0b6 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..4c489d2 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -47,11 +47,13 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
     vcpu_nestedhvm(v).nv_guestmode = 0
 
 /* Nested paging */
+#define NESTEDHVM_PAGEFAULT_UNHANDLED -1
 #define NESTEDHVM_PAGEFAULT_DONE       0
 #define NESTEDHVM_PAGEFAULT_INJECT     1
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..feaaa80 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -51,6 +51,11 @@ typedef union {
     u64 epte;
 } ept_entry_t;
 
+typedef struct {
+    /*use lxe[0] to save result */
+    ept_entry_t lxe[5];
+} ept_walk_t;
+
 #define EPT_TABLE_ORDER         9
 #define EPTE_SUPER_PAGE_MASK    0x80
 #define EPTE_MFN_MASK           0xffffffffff000ULL
@@ -60,6 +65,28 @@ typedef union {
 #define EPTE_AVAIL1_SHIFT       8
 #define EPTE_EMT_SHIFT          3
 #define EPTE_IGMT_SHIFT         6
+#define EPTE_RWX_MASK           0x7
+#define EPTE_FLAG_MASK          0x7f
+
+#define EPT_EMT_UC              0
+#define EPT_EMT_WC              1
+#define EPT_EMT_RSV0            2
+#define EPT_EMT_RSV1            3
+#define EPT_EMT_WT              4
+#define EPT_EMT_WP              5
+#define EPT_EMT_WB              6
+#define EPT_EMT_RSV2            7
+
+typedef enum {
+    ept_access_n     = 0, /* No access permissions allowed */
+    ept_access_r     = 1,
+    ept_access_w     = 2,
+    ept_access_rw    = 3,
+    ept_access_x     = 4,
+    ept_access_rx    = 5,
+    ept_access_wx    = 6,
+    ept_access_all   = 7,
+} ept_access_t;
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
@@ -419,6 +446,7 @@ void update_guest_eip(void);
 #define _EPT_GLA_FAULT              8
 #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
 
+#define EPT_L4_PAGETABLE_SHIFT      39
 #define EPT_PAGETABLE_ENTRIES       512
 
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..245fddb 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,13 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_UNSUPPORTED  -1
+#define EPT_TRANSLATE_SUCCEED       0
+#define EPT_TRANSLATE_VIOLATION     1
+#define EPT_TRANSLATE_ERR_PAGE      2
+#define EPT_TRANSLATE_MISCONFIG     3
+#define EPT_TRANSLATE_RETRY         4
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +203,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXID-0003nW-G0; Thu, 20 Dec 2012 03:59:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIB-0003n5-Px
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:31 +0000
Received: from [85.158.139.83:53896] by server-10.bemta-5.messagelabs.com id
	E4/B2-13383-32D82D05; Thu, 20 Dec 2012 03:59:31 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355975968!30670629!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 938 invoked from network); 20 Dec 2012 03:59:30 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 03:59:30 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775529"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:04 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:41 +0800
Message-Id: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT & VPID
	support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use EPT hardware for the L2 guest's memory virtualization.
In this way, L2 guest performance can be improved sharply.
In our testing, some benchmarks show a more than 5x performance gain.

Changes from v1:
Updated the patches according to Tim's comments.
1. Patch 03: Enhance the virtual EPT walker's logic.
2. Patch 04: Add a new field in struct p2m_domain, and use it to store
   EPT-specific data. For the host p2m, it saves the L1 VMM's EPT data,
   and for the nested p2m, it saves the nested EPT's data.
3. Patch 07: Strictly check the host p2m's access type.
4. Other patches: some whitespace mangling fixes.

Changes from v2:
Addressed comments from Jan and Jun:
1. Add Acked-by tags for the patches reviewed by Tim.
2. Fixed one whitespace mangling issue in PATCH 08.
3. Add some comments describing the meaning of the return value
   of hvm_hap_nested_page_fault in PATCH 05.
4. Add logic for handling the default case of two switch statements.

Zhang Xiantao (10):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nVMX: virtualize VPID capability to nested VMM.
  nEPT: expose EPT & VPID capabilities to L1 VMM

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 ++++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    9 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   91 ++++------
 xen/arch/x86/hvm/vmx/vvmx.c             |  215 ++++++++++++++++++++++--
 xen/arch/x86/mm/guest_walk.c            |   16 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  286 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   95 ++++++-----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |  104 +++++++++---
 xen/arch/x86/mm/p2m.c                   |   49 +++---
 xen/arch/x86/mm/shadow/multi.c          |    2 +-
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    2 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   24 ++--
 xen/include/asm-x86/hvm/vmx/vmx.h       |   38 ++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   30 +++-
 xen/include/asm-x86/p2m.h               |   20 ++-
 22 files changed, 852 insertions(+), 193 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXII-0003rX-JP; Thu, 20 Dec 2012 03:59:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIG-0003p5-AK
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:36 +0000
Received: from [85.158.138.51:50357] by server-4.bemta-3.messagelabs.com id
	5F/DA-31835-72D82D05; Thu, 20 Dec 2012 03:59:35 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1355975974!28489625!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5338 invoked from network); 20 Dec 2012 03:59:34 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-11.tower-174.messagelabs.com with SMTP;
	20 Dec 2012 03:59:34 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:33 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775567"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:17 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:50 +0800
Message-Id: <1356018231-26440-10-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 09/10] nVMX: virtualize VPID capability to
	nested VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Virtualize VPID for the nested VMM: use the host's VPID
to emulate the guest's VPID. On each virtual vmentry, if
the guest's VPID has changed, allocate a new host VPID for
the L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   11 ++++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   56 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +
 3 files changed, 65 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 94cac17..0e479f8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2578,10 +2578,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             update_guest_eip();
         break;
 
+    case EXIT_REASON_INVVPID:
+        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
+
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
          * running in guest context, and the CPU checks that before getting
@@ -2699,8 +2703,11 @@ void vmx_vmenter_helper(void)
 
     if ( !cpu_has_vmx_vpid )
         goto out;
+    if ( nestedhvm_vcpu_in_guestmode(curr) )
+        p_asid = &vcpu_nestedhvm(curr).nv_n2asid;
+    else
+        p_asid = &curr->arch.hvm_vcpu.n1asid;
 
-    p_asid = &curr->arch.hvm_vcpu.n1asid;
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
     new_asid = p_asid->asid;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 8346387..0e1a5ee 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -42,6 +42,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 	goto out;
     }
     nvmx->ept.enabled = 0;
+    nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -848,6 +849,16 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     return ept_get_eptp(ept);
 }
 
+static bool_t nvmx_vpid_enabled(struct nestedvcpu *nvcpu)
+{
+    uint32_t second_cntl;
+
+    second_cntl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
+    if ( second_cntl & SECONDARY_EXEC_ENABLE_VPID )
+        return 1;
+    return 0;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -896,6 +907,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     if ( nestedhvm_paging_mode_hap(v) )
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
 
+    /* nested VPID support! */
+    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
+    {
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);
+        if ( nvmx->guest_vpid != new_vpid )
+        {
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
+            nvmx->guest_vpid = new_vpid;
+        }
+    }
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -1187,7 +1210,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
-        return X86EMUL_OKAY;        
+        return X86EMUL_OKAY;
     }
 
     launched = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
@@ -1370,7 +1393,6 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
         return X86EMUL_EXCEPTION;
 
     inv_type = reg_read(regs, decode.reg2);
-    gdprintk(XENLOG_DEBUG,"inv_type:%ld, eptp:%lx\n", inv_type, eptp);
 
     switch ( inv_type ) {
     case INVEPT_SINGLE_CONTEXT:
@@ -1402,6 +1424,36 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long vpid;
+    u64 inv_type;
+
+    if ( !cpu_has_vmx_vpid )
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &vpid, 0) != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG,"inv_type:%ld, vpid:%lx\n", inv_type, vpid);
+
+    switch ( inv_type ) {
+        /* Just invalidate all tlb entries for all types! */
+        case INVVPID_INDIVIDUAL_ADDR:
+        case INVVPID_SINGLE_CONTEXT:
+        case INVVPID_ALL_CONTEXT:
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
+            break;
+        default:
+            return X86EMUL_EXCEPTION;
+    }
+    vmreturn(regs, VMSUCCEED);
+
+    return X86EMUL_OKAY;
+}
+
 /*
  * Capability reporting
  */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 03ab987..af702c4 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,6 +37,7 @@ struct nestedvmx {
         uint32_t exit_reason;
         uint32_t exit_qual;
     } ept;
+    uint32_t guest_vpid;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -192,6 +193,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
 int nvmx_handle_invept(struct cpu_user_regs *regs);
+int nvmx_handle_invvpid(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
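The core of the patch above is the per-vmentry VPID check in virtual_vmentry(): the VPID the L1 VMM wrote into the virtual VMCS is compared against a cached copy, and on a change the L2 ASID is flushed so the next vmentry allocates a fresh host ASID. A minimal, self-contained sketch of that logic follows; the struct layout and helper names here are simplified stand-ins, not the actual Xen types or API.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for Xen's hvm_vcpu_asid / nestedvmx state. */
struct asid_state { uint32_t generation; uint32_t asid; };
struct nvmx_state { uint32_t guest_vpid; struct asid_state n2asid; };

/* Flush: invalidate the ASID so the next vmentry allocates a new one
 * (in Xen this is hvm_asid_flush_vcpu_asid() on nv_n2asid). */
static void asid_flush(struct asid_state *a)
{
    a->asid = 0;
}

/* Mirrors the virtual_vmentry() hunk: if the guest-visible VPID changed,
 * drop the current L2 host ASID and cache the new guest VPID. */
static int vpid_vmentry_check(struct nvmx_state *nvmx, uint32_t new_vpid)
{
    if (nvmx->guest_vpid != new_vpid) {
        asid_flush(&nvmx->n2asid);
        nvmx->guest_vpid = new_vpid;
        return 1;   /* flushed: new host ASID on next vmentry */
    }
    return 0;       /* unchanged: keep the current host ASID */
}
```

The same coarse policy backs nvmx_handle_invvpid() in the patch: every INVVPID type is emulated by flushing the whole L2 ASID, which is correct (over-invalidation is always safe) at the cost of some TLB refills.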

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIG-0003pH-1K; Thu, 20 Dec 2012 03:59:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIE-0003nw-L5
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:34 +0000
Received: from [85.158.143.35:48008] by server-3.bemta-4.messagelabs.com id
	07/C1-18211-62D82D05; Thu, 20 Dec 2012 03:59:34 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355975972!12686780!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26891 invoked from network); 20 Dec 2012 03:59:33 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-12.tower-21.messagelabs.com with SMTP;
	20 Dec 2012 03:59:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775549"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:12 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:46 +0800
Message-Id: <1356018231-26440-6-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 05/10] nEPT: Try to enable EPT paging for L2
	guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Once EPT is found to be enabled by the L1 VMM, enable nested EPT
support for the L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        |   48 +++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
 3 files changed, 54 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d74aae0..ed8d532 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
     .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
+    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
     .nhvm_intr_blocked    = nvmx_intr_blocked,
@@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
+    int ret;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
         _d.gpa = gpa;
         _d.qualification = qualification;
         _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
-        
+
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    if ( hvm_hap_nested_page_fault(gpa,
+    ret = hvm_hap_nested_page_fault(gpa,
                                    qualification & EPT_GLA_VALID       ? 1 : 0,
                                    qualification & EPT_GLA_VALID
                                      ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
                                    qualification & EPT_READ_VIOLATION  ? 1 : 0,
                                    qualification & EPT_WRITE_VIOLATION ? 1 : 0,
-                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
+                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
+    switch ( ret ) {
+    case 0:         // Unhandled L1 EPT violation
+        break;
+    case 1:         // This violation is handled completely
         return;
+    case -1:        // This violation should be injected to the L1 VMM
+        vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+        return;
+    }
 
     /* Everything else is an error. */
     mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f9e620c..2ae6f6a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
 	goto out;
     }
+    nvmx->ept.enabled = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
 
 uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
-    /* TODO */
-    ASSERT(0);
-    return 0;
+    uint64_t eptp_base;
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
+    return eptp_base & PAGE_MASK; 
 }
 
 uint32_t nvmx_vcpu_asid(struct vcpu *v)
@@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
     return 0;
 }
 
+bool_t nvmx_ept_enabled(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    return !!(nvmx->ept.enabled);
+}
+
 static const enum x86_segment sreg_to_index[] = {
     [VMX_SREG_ES] = x86_seg_es,
     [VMX_SREG_CS] = x86_seg_cs,
@@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 }
 
 void nvmx_update_secondary_exec_control(struct vcpu *v,
-                                            unsigned long value)
+                                            unsigned long host_cntrl)
 {
     u32 shadow_cntrl;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
     shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
-    shadow_cntrl |= value;
-    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
+    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
+    shadow_cntrl |= host_cntrl;
+    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
 }
 
 static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
@@ -818,6 +830,17 @@ static void load_shadow_guest_state(struct vcpu *v)
     /* TODO: CR3 target control */
 }
 
+
+static uint64_t get_shadow_eptp(struct vcpu *v)
+{
+    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct ept_data *ept = &p2m->ept;
+
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    return ept_get_eptp(ept);
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -862,7 +885,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
 
-    /* TODO: EPT_POINTER */
+    /* Set up virtual EPT for the L2 guest */
+    if ( nestedhvm_paging_mode_hap(v) )
+        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -915,8 +941,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
     /* Adjust exit_reason/exit_qualifciation for violation case */
     if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
                 EXIT_REASON_EPT_VIOLATION ) {
-        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
-        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
     }
 }
 
@@ -1480,8 +1506,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
         case EPT_TRANSLATE_VIOLATION:
         case EPT_TRANSLATE_MISCONFIG:
             rc = NESTEDHVM_PAGEFAULT_INJECT;
-            nvmx->ept_exit.exit_reason = exit_reason;
-            nvmx->ept_exit.exit_qual = exit_qual;
+            nvmx->ept.exit_reason = exit_reason;
+            nvmx->ept.exit_qual = exit_qual;
             break;
         case EPT_TRANSLATE_RETRY:
             rc = NESTEDHVM_PAGEFAULT_RETRY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 245fddb..3114ec0 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -33,9 +33,10 @@ struct nestedvmx {
         u32           error_code;
     } intr;
     struct {
+        bool_t   enabled;
         uint32_t exit_reason;
         uint32_t exit_qual;
-    } ept_exit;
+    } ept;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
                               unsigned int trap, int error_code);
 void nvmx_domain_relinquish_resources(struct domain *d);
 
+bool_t nvmx_ept_enabled(struct vcpu *v);
+
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
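A key piece of the patch above is the new three-way return convention for hvm_hap_nested_page_fault() in ept_handle_violation(): 0 means unhandled (fall through to the error path), 1 means fully handled (resume the guest), and -1 means the violation belongs to the L1 VMM and a vmexit must be pended. A small sketch of that dispatch, with hypothetical names rather than the real Xen symbols:

```c
#include <assert.h>

/* Simplified model of the return-code convention the patch introduces
 * for hvm_hap_nested_page_fault(). */
enum fault_action {
    ACT_ERROR,      /* 0: unhandled L1 EPT violation, report an error */
    ACT_RESUME,     /* 1: handled completely, resume the guest */
    ACT_INJECT_L1   /* -1: reflect the EPT violation to the L1 VMM */
};

static enum fault_action dispatch_fault(int ret)
{
    switch (ret) {
    case 1:
        return ACT_RESUME;
    case -1:
        /* In Xen: vcpu_nestedhvm(current).nv_vmexit_pending = 1; */
        return ACT_INJECT_L1;
    case 0:
    default:
        return ACT_ERROR;
    }
}
```

The -1 case is what makes nested EPT work: faults caused by the L1 VMM's own EPT tables are not errors in L0 but events the L1 hypervisor expects to see.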

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIG-0003pH-1K; Thu, 20 Dec 2012 03:59:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIE-0003nw-L5
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:34 +0000
Received: from [85.158.143.35:48008] by server-3.bemta-4.messagelabs.com id
	07/C1-18211-62D82D05; Thu, 20 Dec 2012 03:59:34 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355975972!12686780!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26891 invoked from network); 20 Dec 2012 03:59:33 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-12.tower-21.messagelabs.com with SMTP;
	20 Dec 2012 03:59:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775549"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:12 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:46 +0800
Message-Id: <1356018231-26440-6-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 05/10] nEPT: Try to enable EPT paging for L2
	guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Once found EPT is enabled by L1 VMM, enabled nested EPT support
for L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        |   48 +++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
 3 files changed, 54 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d74aae0..ed8d532 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
     .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
+    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
     .nhvm_intr_blocked    = nvmx_intr_blocked,
@@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
+    int ret;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
         _d.gpa = gpa;
         _d.qualification = qualification;
         _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
-        
+
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    if ( hvm_hap_nested_page_fault(gpa,
+    ret = hvm_hap_nested_page_fault(gpa,
                                    qualification & EPT_GLA_VALID       ? 1 : 0,
                                    qualification & EPT_GLA_VALID
                                      ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
                                    qualification & EPT_READ_VIOLATION  ? 1 : 0,
                                    qualification & EPT_WRITE_VIOLATION ? 1 : 0,
-                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
+                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
+    switch ( ret ) {
+    case 0:         // Unhandled L1 EPT violation
+        break;
+    case 1:         // This violation is handled completly
         return;
+    case -1:        // This vioaltion should be injected to L1 VMM
+        vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+        return;
+    }
 
     /* Everything else is an error. */
     mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f9e620c..2ae6f6a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
 	goto out;
     }
+    nvmx->ept.enabled = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
 
 uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
-    /* TODO */
-    ASSERT(0);
-    return 0;
+    uint64_t eptp_base;
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
+    return eptp_base & PAGE_MASK;
 }
 
 uint32_t nvmx_vcpu_asid(struct vcpu *v)
@@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
     return 0;
 }
 
+bool_t nvmx_ept_enabled(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    return !!(nvmx->ept.enabled);
+}
+
 static const enum x86_segment sreg_to_index[] = {
     [VMX_SREG_ES] = x86_seg_es,
     [VMX_SREG_CS] = x86_seg_cs,
@@ -503,14 +513,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 }
 
 void nvmx_update_secondary_exec_control(struct vcpu *v,
-                                            unsigned long value)
+                                            unsigned long host_cntrl)
 {
     u32 shadow_cntrl;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
     shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
-    shadow_cntrl |= value;
-    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
+    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
+    shadow_cntrl |= host_cntrl;
+    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
 }
 
 static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
@@ -818,6 +830,17 @@ static void load_shadow_guest_state(struct vcpu *v)
     /* TODO: CR3 target control */
 }
 
+
+static uint64_t get_shadow_eptp(struct vcpu *v)
+{
+    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct ept_data *ept = &p2m->ept;
+
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    return ept_get_eptp(ept);
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -862,7 +885,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
 
-    /* TODO: EPT_POINTER */
+    /* Set up the virtual EPT pointer for the L2 guest. */
+    if ( nestedhvm_paging_mode_hap(v) )
+        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -915,8 +941,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
     /* Adjust exit_reason/exit_qualification for the violation case */
     if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) ==
                 EXIT_REASON_EPT_VIOLATION ) {
-        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
-        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
     }
 }
 
@@ -1480,8 +1506,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
         case EPT_TRANSLATE_VIOLATION:
         case EPT_TRANSLATE_MISCONFIG:
             rc = NESTEDHVM_PAGEFAULT_INJECT;
-            nvmx->ept_exit.exit_reason = exit_reason;
-            nvmx->ept_exit.exit_qual = exit_qual;
+            nvmx->ept.exit_reason = exit_reason;
+            nvmx->ept.exit_qual = exit_qual;
             break;
         case EPT_TRANSLATE_RETRY:
             rc = NESTEDHVM_PAGEFAULT_RETRY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 245fddb..3114ec0 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -33,9 +33,10 @@ struct nestedvmx {
         u32           error_code;
     } intr;
     struct {
+        bool_t   enabled;
         uint32_t exit_reason;
         uint32_t exit_qual;
-    } ept_exit;
+    } ept;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
                               unsigned int trap, int error_code);
 void nvmx_domain_relinquish_resources(struct domain *d);
 
+bool_t nvmx_ept_enabled(struct vcpu *v);
+
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIH-0003qV-6W; Thu, 20 Dec 2012 03:59:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIG-0003nw-2Z
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:36 +0000
Received: from [85.158.143.35:44959] by server-3.bemta-4.messagelabs.com id
	49/C1-18211-72D82D05; Thu, 20 Dec 2012 03:59:35 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355975972!12686780!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26936 invoked from network); 20 Dec 2012 03:59:34 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-12.tower-21.messagelabs.com with SMTP;
	20 Dec 2012 03:59:34 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:33 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775558"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:14 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:48 +0800
Message-Id: <1356018231-26440-8-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 07/10] nEPT: Use minimal permission for
	nested p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Emulate the permission check for the nested p2m. The current solution is to
use the minimal permission: once a permission violation is met in L0,
determine whether it was caused by the guest EPT or the host EPT.
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |    4 +-
 xen/arch/x86/mm/hap/nested_ept.c        |    5 ++-
 xen/arch/x86/mm/hap/nested_hap.c        |   38 +++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/hvm.h           |    2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    6 ++--
 7 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 5dcb354..ab455a9 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
  */
 int
 nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint32_t pfec;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 1f7de7a..b275044 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1493,7 +1493,7 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
  */
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
@@ -1503,7 +1503,7 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
-    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn, p2m_acc,
                                 &exit_qual, &exit_reason);
     switch ( rc ) {
         case EPT_TRANSLATE_SUCCEED:
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index c3e698c..447b5d5 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -217,8 +217,8 @@ out:
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason)
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason)
 {
     uint32_t rc, rwx_bits = 0;
     ept_walk_t gw;
@@ -253,6 +253,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
         if ( nept_permission_check(rwx_acc, rwx_bits) )
         {
             *l1gfn = gw.lxe[0].mfn;
+            *p2m_acc = (uint8_t)rwx_bits;
             break;
         }
         rc = EPT_TRANSLATE_VIOLATION;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 6d1264b..84dbf15 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -142,12 +142,12 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
  */
 static int
 nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
 
-    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order, p2m_acc,
         access_r, access_w, access_x);
 }
 
@@ -158,16 +158,15 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      p2m_type_t *p2mt,
+                      p2m_type_t *p2mt, p2m_access_t *p2ma,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma,
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
@@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
     p2m_type_t p2mt_10;
+    p2m_access_t p2ma_10 = p2m_access_rwx;
+    uint8_t p2ma_21;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
@@ -229,7 +230,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     /* ==> we have to walk L0 P2M */
     rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
-        &p2mt_10, &page_order_10,
+        &p2mt_10, &p2ma_10, &page_order_10,
         access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
@@ -250,10 +251,29 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     page_order_20 = min(page_order_21, page_order_10);
 
+    ASSERT(p2ma_10 <= p2m_access_n2rwx);
+    /* NOTE: if this assertion fails, the new access type needs handling here. */
+
+    switch ( p2ma_10 ) {
+    case p2m_access_n ... p2m_access_rwx:
+        break;
+    case p2m_access_rx2rw:
+        p2ma_10 = p2m_access_rx;
+        break;
+    case p2m_access_n2rwx:
+        p2ma_10 = p2m_access_n;
+        break;
+    default:
+        /* For safety, remove all permissions. */
+        gdprintk(XENLOG_ERR, "Unhandled p2m access type: %d\n", p2ma_10);
+        p2ma_10 = p2m_access_n;
+    }
+    /* Use minimal permission for nested p2m. */
+    p2ma_10 &= (p2m_access_t)p2ma_21;
+
     /* fix p2m_get_pagetable(nested_p2m) */
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
-        p2mt_10,
-        p2m_access_rwx /* FIXME: Should use minimum permission. */);
+        p2mt_10, p2ma_10);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 80f07e9..889e3c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -186,7 +186,7 @@ struct hvm_function_table {
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 0c90f30..748cc04 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -134,7 +134,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
 int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 3114ec0..e35e425 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -125,7 +125,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
@@ -208,7 +208,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason);
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIF-0003oy-Ih; Thu, 20 Dec 2012 03:59:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIE-0003nf-0w
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:34 +0000
Received: from [193.109.254.147:58189] by server-12.bemta-14.messagelabs.com
	id 41/D4-06523-52D82D05; Thu, 20 Dec 2012 03:59:33 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355975970!11091276!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25837 invoked from network); 20 Dec 2012 03:59:32 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 03:59:32 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:31 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775554"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:13 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:47 +0800
Message-Id: <1356018231-26440-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 06/10] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the GUEST_PDPTR fields need to be synced on each
virtual vmentry.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2ae6f6a..1f7de7a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -826,7 +826,14 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
     vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+         (v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIG-0003pg-Eb; Thu, 20 Dec 2012 03:59:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIF-0003nw-7F
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:35 +0000
Received: from [85.158.143.35:44927] by server-3.bemta-4.messagelabs.com id
	B7/C1-18211-62D82D05; Thu, 20 Dec 2012 03:59:34 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355975972!12686780!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26912 invoked from network); 20 Dec 2012 03:59:34 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-12.tower-21.messagelabs.com with SMTP;
	20 Dec 2012 03:59:34 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:33 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775561"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:16 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:49 +0800
Message-Id: <1356018231-26440-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 08/10] nEPT: handle invept instruction from
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Add the INVEPT instruction emulation logic.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
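
As background, INVEPT takes an invalidation type in a register operand and
an EPTP in a memory operand; the two architecturally defined types (type 1
flushes the mappings derived from a single EPTP, type 2 flushes all EPT
contexts) can be sketched in isolation as below. This is a stand-alone
illustration, not the Xen code: the type constants follow the Intel SDM
encodings, but the counter-based helper is invented for demonstration.

```c
#include <stdint.h>

/* INVEPT invalidation types, as encoded by the Intel SDM. */
#define INVEPT_SINGLE_CONTEXT 1
#define INVEPT_ALL_CONTEXT    2

/* Illustrative dispatcher mirroring the emulation flow: flush either the
 * mappings derived from one EPTP, or all EPT-derived mappings. Returns 0
 * on success, -1 for a reserved type (the emulator fails the instruction). */
static int invept_dispatch(uint64_t inv_type, uint64_t eptp,
                           int *single_flushes, int *global_flushes)
{
    switch ( inv_type )
    {
    case INVEPT_SINGLE_CONTEXT:
        (void)eptp;                /* flush translations tagged with eptp */
        (*single_flushes)++;
        return 0;
    case INVEPT_ALL_CONTEXT:
        (*global_flushes)++;       /* flush translations for every EPTP */
        return 0;
    default:
        return -1;                 /* reserved type: fail the instruction */
    }
}
```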
---
 xen/arch/x86/hvm/vmx/vmx.c         |    6 ++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   39 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 3 files changed, 45 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ed8d532..94cac17 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2573,10 +2573,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             update_guest_eip();
         break;
 
+    case EXIT_REASON_INVEPT:
+        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
+
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVEPT:
     case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b275044..8346387 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1356,6 +1356,45 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_invept(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long eptp;
+    u64 inv_type;
+
+    if ( !cpu_has_vmx_ept )
+        return X86EMUL_EXCEPTION;
+
+    if ( decode_vmx_inst(regs, &decode, &eptp, 0)
+             != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+    gdprintk(XENLOG_DEBUG, "inv_type:%lu, eptp:%lx\n", inv_type, eptp);
+
+    switch ( inv_type ) {
+    case INVEPT_SINGLE_CONTEXT:
+    {
+        struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
+        if ( p2m )
+        {
+            p2m_flush(current, p2m);
+            ept_sync_domain(p2m);
+        }
+        break;
+    }
+    case INVEPT_ALL_CONTEXT:
+        p2m_flush_nestedp2m(current->domain);
+        __invept(INVEPT_ALL_CONTEXT, 0, 0);
+        break;
+    default:
+        return X86EMUL_EXCEPTION;
+    }
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index e35e425..03ab987 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -191,6 +191,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs);
 int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+int nvmx_handle_invept(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXID-0003nj-TF; Thu, 20 Dec 2012 03:59:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIC-0003nA-Au
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:32 +0000
Received: from [193.109.254.147:47454] by server-16.bemta-14.messagelabs.com
	id E7/D0-18932-32D82D05; Thu, 20 Dec 2012 03:59:31 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355975970!11091276!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25703 invoked from network); 20 Dec 2012 03:59:30 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 03:59:30 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775533"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:07 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:43 +0800
Message-Id: <1356018231-26440-3-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 02/10] nestedhap: Change nested p2m's walker
	to vendor-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

EPT and NPT use different formats for their page-table entries at each
level, so make the walker functions vendor-specific.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
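
The incompatibility this patch addresses shows up in the leaf-entry
layouts: an EPT entry carries separate read/write/execute permission bits,
while an NPT entry reuses the ordinary x86 PTE format (present/writable
plus a no-execute bit). A simplified sketch, with bit positions per the
Intel SDM and AMD APM; the helper names and the reduction to an rwx triple
are illustrative only:

```c
#include <stdint.h>
#include <stdbool.h>

struct rwx { bool r, w, x; };

/* EPT leaf entry: bit 0 = read, bit 1 = write, bit 2 = execute. */
static struct rwx ept_entry_rwx(uint64_t e)
{
    return (struct rwx){ e & 1, (e >> 1) & 1, (e >> 2) & 1 };
}

/* NPT reuses the x86 PTE format: bit 0 = present (readable when present),
 * bit 1 = writable, bit 63 = no-execute. */
static struct rwx npt_entry_rwx(uint64_t e)
{
    bool p = e & 1;
    return (struct rwx){ p, p && ((e >> 1) & 1), p && !(e >> 63) };
}
```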
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c              |    1 +
 xen/arch/x86/hvm/vmx/vmx.c              |    3 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |   13 +++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   46 +++++++++++--------------------
 xen/include/asm-x86/hvm/hvm.h           |    5 +++
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 ++
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    5 +++
 8 files changed, 76 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index ed0faa6..5dcb354 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1171,6 +1171,37 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
     return vcpu_nestedsvm(v).ns_hap_enabled;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    uint32_t pfec;
+    unsigned long nested_cr3, gfn;
+    
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
+
+    pfec = PFEC_user_mode | PFEC_page_present;
+    if (access_w)
+        pfec |= PFEC_write_access;
+    if (access_x)
+        pfec |= PFEC_insn_fetch;
+
+    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
+
+    if ( gfn == INVALID_GFN ) 
+        return NESTEDHVM_PAGEFAULT_INJECT;
+
+    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+    return NESTEDHVM_PAGEFAULT_DONE;
+}
+
+
 enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2c8504a..acd2d49 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2008,6 +2008,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
+    .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 98309da..4abfa90 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1511,7 +1511,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_intr_blocked    = nvmx_intr_blocked,
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap,
-    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled
+    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled,
+    .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6d1a736..4495dd6 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1445,6 +1445,19 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
     return 1;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    /*TODO:*/
+    return 0;
+}
+
 void nvmx_idtv_handling(void)
 {
     struct vcpu *v = current;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /*Walk nested p2m  */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIF-0003oy-Ih; Thu, 20 Dec 2012 03:59:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIE-0003nf-0w
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:34 +0000
Received: from [193.109.254.147:58189] by server-12.bemta-14.messagelabs.com
	id 41/D4-06523-52D82D05; Thu, 20 Dec 2012 03:59:33 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355975970!11091276!3
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25837 invoked from network); 20 Dec 2012 03:59:32 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 03:59:32 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:31 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775554"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:13 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:47 +0800
Message-Id: <1356018231-26440-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 06/10] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the GUEST_PDPTR fields need to be synced on each
virtual vmentry.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
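
The PDPTRs are only architecturally live in 32-bit PAE paging, i.e. when
paging is enabled with CR4.PAE set and EFER.LMA clear; in 4-level long
mode the PDPTEs are fetched from memory instead. That predicate can be
sketched as follows (control-register bit positions per the Intel SDM;
the helper name is invented for illustration):

```c
#include <stdint.h>
#include <stdbool.h>

#define CR0_PG   (1UL << 31)
#define CR4_PAE  (1UL << 5)
#define EFER_LMA (1UL << 10)

/* True when the guest's address translation goes through the four
 * PDPTR registers, i.e. 32-bit PAE paging rather than 4-level long mode. */
static bool pdptrs_in_use(unsigned long cr0, unsigned long cr4,
                          uint64_t efer)
{
    return (cr0 & CR0_PG) && (cr4 & CR4_PAE) && !(efer & EFER_LMA);
}
```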
---
 xen/arch/x86/hvm/vmx/vvmx.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2ae6f6a..1f7de7a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -826,7 +826,14 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
     vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+        vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2c8504a..acd2d49 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2008,6 +2008,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
+    .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 98309da..4abfa90 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1511,7 +1511,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_intr_blocked    = nvmx_intr_blocked,
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap,
-    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled
+    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled,
+    .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6d1a736..4495dd6 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1445,6 +1445,19 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
     return 1;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    /*TODO:*/
+    return 0;
+}
+
 void nvmx_idtv_handling(void)
 {
     struct vcpu *v = current;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /*Walk nested p2m  */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIE-0003oC-GS; Thu, 20 Dec 2012 03:59:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXID-0003nA-4g
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:33 +0000
Received: from [193.109.254.147:47495] by server-16.bemta-14.messagelabs.com
	id C9/D0-18932-42D82D05; Thu, 20 Dec 2012 03:59:32 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355975970!11091276!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25763 invoked from network); 20 Dec 2012 03:59:31 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 03:59:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775546"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:10 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:45 +0800
Message-Id: <1356018231-26440-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 04/10] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case, by making
the related data structures and operations neutral to both the
common EPT and nested EPT code.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    9 +++-
 xen/arch/x86/hvm/vmx/vmx.c         |   53 ++-----------------
 xen/arch/x86/mm/p2m-ept.c          |  104 ++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m.c              |   23 ++++++---
 xen/include/asm-x86/hvm/vmx/vmcs.h |   23 ++++----
 xen/include/asm-x86/hvm/vmx/vmx.h  |   10 +++-
 xen/include/asm-x86/p2m.h          |    4 ++
 7 files changed, 133 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..379b75c 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -941,8 +941,13 @@ static int construct_vmcs(struct vcpu *v)
         __vmwrite(TPR_THRESHOLD, 0);
     }
 
-    if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+    if ( paging_mode_hap(d) ) {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+        struct ept_data *ept = &p2m->ept;
+
+        ept->asr  = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        __vmwrite(EPT_POINTER, ept_get_eptp(ept));
+    }
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abfa90..d74aae0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -74,38 +74,19 @@ static void vmx_fpu_dirty_intercept(void);
 static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg_intercept(unsigned long vaddr);
-static void __ept_sync_domain(void *info);
 
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
 
-    /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
-
-    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
-
-    d->arch.hvm_domain.vmx.ept_control.asr  =
-        pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
-        return -ENOMEM;
-
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-    {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
         return rc;
-    }
 
     return 0;
 }
 
 static void vmx_domain_destroy(struct domain *d)
 {
-    if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +622,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = &p2m_get_hostp2m(d)->ept;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +632,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1216,33 +1198,6 @@ static void vmx_update_guest_efer(struct vcpu *v)
                    (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
-static void __ept_sync_domain(void *info)
-{
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
-}
-
-void ept_sync_domain(struct domain *d)
-{
-    /* Only if using EPT and this domain has some VCPUs to dirty. */
-    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
-        return;
-
-    ASSERT(local_irq_is_enabled());
-
-    /*
-     * Flush active cpus synchronously. Flush others the next time this domain
-     * is scheduled onto them. We accept the race of other CPUs adding to
-     * the ept_synced mask before on_selected_cpus() reads it, resulting in
-     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
-     */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
-                d->domain_dirty_cpumask, &cpu_online_map);
-
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
-}
-
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
             unsigned long intr_fields, int error_code)
 {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..e33f415 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept = &p2m->ept;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table.*/
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept = &p2m->ept;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept = &p2m->ept;
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,24 +784,76 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept = &p2m->ept;
+    if ( ept_get_asr(ept) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
+            ept_get_wl(ept), ot, nt);
+
+    ept_sync_domain(p2m);
+}
+
+static void __ept_sync_domain(void *info)
+{
+    struct ept_data *ept = &((struct p2m_domain *)info)->ept;
 
-    ept_sync_domain(d);
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept), 0);
 }
 
-void ept_p2m_init(struct p2m_domain *p2m)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept = &p2m->ept;
+    /* Only if using EPT and this domain has some VCPUs to dirty. */
+    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
+        return;
+
+    ASSERT(local_irq_is_enabled());
+
+    /*
+     * Flush active cpus synchronously. Flush others the next time this domain
+     * is scheduled onto them. We accept the race of other CPUs adding to
+     * the ept_synced mask before on_selected_cpus() reads it, resulting in
+     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
+     */
+    cpumask_and(ept_get_synced_mask(ept),
+                d->domain_dirty_cpumask, &cpu_online_map);
+
+    on_selected_cpus(ept_get_synced_mask(ept),
+                     __ept_sync_domain, p2m, 1);
+}
+
+int ept_p2m_init(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->audit_p2m = NULL;
+
+    /* Set the memory type used when accessing EPT paging structures. */
+    ept->ept_mt = EPT_DEFAULT_MT;
+
+    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
+    ept->ept_wl = 3;
+
+    if ( !zalloc_cpumask_var(&ept->synced_mask) )
+        return -ENOMEM;
+
+    on_each_cpu(__ept_sync_domain, p2m, 1);
+
+    return 0;
+}
+
+void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+    free_cpumask_var(ept->synced_mask);
 }
 
 static void ept_dump_p2m_table(unsigned char key)
@@ -811,6 +869,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept;
 
     for_each_domain(d)
     {
@@ -818,15 +877,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept = &p2m->ept;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6a4bdd9..1f59410 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -57,8 +57,10 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 
 /* Init the datastructures for later use by the p2m code */
-static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
+    int ret = 0;
+
     mm_rwlock_init(&p2m->lock);
     mm_lock_init(&p2m->pod.lock);
     INIT_LIST_HEAD(&p2m->np2m_list);
@@ -72,11 +74,11 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
-        ept_p2m_init(p2m);
+        ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
 
-    return;
+    return ret;
 }
 
 static int
@@ -119,7 +121,7 @@ int p2m_init(struct domain *d)
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
+    if ( rc )
         p2m_final_teardown(d);
     return rc;
 }
@@ -424,12 +426,16 @@ void p2m_teardown(struct p2m_domain *p2m)
 static void p2m_teardown_nestedp2m(struct domain *d)
 {
     uint8_t i;
+    struct p2m_domain *p2m;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         if ( !d->arch.nested_p2m[i] )
             continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
+        p2m = d->arch.nested_p2m[i];
+        free_cpumask_var(p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
+        xfree(p2m);
         d->arch.nested_p2m[i] = NULL;
     }
 }
@@ -437,9 +443,12 @@ static void p2m_teardown_nestedp2m(struct domain *d)
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    if ( d->arch.p2m )
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    if ( p2m )
     {
         free_cpumask_var(d->arch.p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
         xfree(d->arch.p2m);
         d->arch.p2m = NULL;
     }
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..2d38b43 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,27 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
+struct ept_data{
     union {
-        struct {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
-    cpumask_var_t ept_synced;
+    };
+    cpumask_var_t synced_mask;
+};
+
+struct vmx_domain {
+    unsigned long apic_access_mfn;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+#define ept_get_wl(ept)   ((ept)->ept_wl)
+#define ept_get_asr(ept)  ((ept)->asr)
+#define ept_get_eptp(ept) ((ept)->eptp)
+#define ept_get_synced_mask(ept) ((ept)->synced_mask)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index feaaa80..2600694 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -360,7 +360,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -422,12 +422,18 @@ void vmx_get_segment_register(struct vcpu *, enum x86_segment,
 void vmx_inject_extint(int trap);
 void vmx_inject_nmi(void);
 
-void ept_p2m_init(struct p2m_domain *p2m);
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ce26594..b6a84b6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,10 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    union {
+        struct ept_data ept;
+        /* NPT-equivalent structure could be added here. */
+    };
 };
 
 /* get host p2m table */
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIJ-0003rw-0J; Thu, 20 Dec 2012 03:59:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXII-0003qv-5A
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:38 +0000
Received: from [85.158.139.83:54030] by server-15.bemta-5.messagelabs.com id
	38/B1-20523-92D82D05; Thu, 20 Dec 2012 03:59:37 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1355975976!29937592!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDI5ODE1MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31206 invoked from network); 20 Dec 2012 03:59:36 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-9.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 03:59:36 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 19 Dec 2012 19:59:35 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775571"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:19 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:51 +0800
Message-Id: <1356018231-26440-11-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 10/10] nEPT: expose EPT & VPID capabilities to
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Expose EPT's and VPID's basic features to the L1 VMM.
For EPT, the A/D bit feature is not exposed.
For VPID, all features are exposed to the L1 VMM.
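The capability-MSR encoding that gen_vmx_msr() builds on can be sketched as follows. This is a simplified stand-in, not Xen's implementation (the helper name and the omitted masking against host_data are assumptions): a VMX control capability MSR reports the must-be-one settings (default1, i.e. allowed-0) in its low 32 bits and the may-be-one (allowed-1) settings in its high 32 bits.

```c
#include <stdint.h>

/* Simplified sketch of a VMX control capability MSR, not Xen's exact
 * gen_vmx_msr(): low 32 bits = bits the guest must set to 1 (default1,
 * i.e. the allowed-0 settings), high 32 bits = bits the guest may set
 * to 1 (the allowed-1 settings).  Masking against the host's own MSR
 * value is omitted here. */
static uint64_t gen_vmx_msr_sketch(uint32_t allowed_1, uint32_t default1)
{
    /* A bit forced to 1 is necessarily also allowed to be 1. */
    return ((uint64_t)(allowed_1 | default1) << 32) | default1;
}
```

With this layout, clearing CPU_BASED_CR3_LOAD_EXITING and friends from default1_bits for the TRUE_PROCBASED_CTLS variant removes those bits only from the must-be-one half, which is what this patch's TRUE_PROCBASED_CTLS branch does.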

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++++++++++--
 xen/arch/x86/mm/hap/nested_ept.c   |   19 ++++++++++++-------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 ++
 3 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0e1a5ee..241e295 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1484,6 +1484,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+    {
+        u32 default1_bits = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 1-settings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1505,12 +1507,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_PAUSE_EXITING |
                CPU_BASED_RDPMC_EXITING |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        data = gen_vmx_msr(data, VMX_PROCBASED_CTLS_DEFAULT1, host_data);
+
+        if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
+            default1_bits &= ~(CPU_BASED_CR3_LOAD_EXITING |
+                    CPU_BASED_CR3_STORE_EXITING | CPU_BASED_INVLPG_EXITING);
+
+        data = gen_vmx_msr(data, default1_bits, host_data);
         break;
+    }
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-settings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
-               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+               SECONDARY_EXEC_ENABLE_VPID |
+               SECONDARY_EXEC_ENABLE_EPT;
         data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
@@ -1563,6 +1573,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_MISC:
         gdprintk(XENLOG_WARNING, "VMX MSR %x not fully supported yet.\n", msr);
         break;
+    case MSR_IA32_VMX_EPT_VPID_CAP:
+        data = nept_get_ept_vpid_cap();
+        break;
     default:
         r = 0;
         break;
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 447b5d5..b1738fa 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -43,12 +43,15 @@
 #define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
                      ~((1ull << paddr_bits) - 1))
 
-/*
- *TODO: Just leave it as 0 here for compile pass, will
- * define real capabilities in the subsequent patches.
- */
-#define NEPT_VPID_CAP_BITS 0
-
+#define NEPT_VPID_CAP_BITS  \
+        (VMX_EPT_INVEPT_ALL_CONTEXT | VMX_EPT_INVEPT_SINGLE_CONTEXT |   \
+        VMX_EPT_INVEPT_INSTRUCTION | VMX_EPT_SUPERPAGE_1GB |            \
+        VMX_EPT_SUPERPAGE_2MB | VMX_EPT_MEMORY_TYPE_WB |                \
+        VMX_EPT_MEMORY_TYPE_UC | VMX_EPT_WALK_LENGTH_4_SUPPORTED |      \
+        VMX_EPT_EXEC_ONLY_SUPPORTED | VMX_VPID_INVVPID_INSTRUCTION |    \
+        VMX_VPID_INVVPID_INDIVIDUAL_ADDR |                              \
+        VMX_VPID_INVVPID_SINGLE_CONTEXT | VMX_VPID_INVVPID_ALL_CONTEXT |\
+        VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL)
 
 #define NEPT_1G_ENTRY_FLAG (1 << 11)
 #define NEPT_2M_ENTRY_FLAG (1 << 10)
@@ -131,7 +134,9 @@ static bool_t nept_non_present_check(ept_entry_t e)
 
 uint64_t nept_get_ept_vpid_cap(void)
 {
-    return NEPT_VPID_CAP_BITS;
+    if ( cpu_has_vmx_ept && cpu_has_vmx_vpid )
+        return NEPT_VPID_CAP_BITS;
+    return 0;
 }
 
 static int ept_lvl_table_offset(unsigned long gpa, int lvl)
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index af702c4..ea33ed0 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -209,6 +209,8 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+uint64_t nept_get_ept_vpid_cap(void);
+
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
                         unsigned long *l1gfn, uint8_t *p2m_acc,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXIE-0003oC-GS; Thu, 20 Dec 2012 03:59:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXID-0003nA-4g
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:33 +0000
Received: from [193.109.254.147:47495] by server-16.bemta-14.messagelabs.com
	id C9/D0-18932-42D82D05; Thu, 20 Dec 2012 03:59:32 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1355975970!11091276!2
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25763 invoked from network); 20 Dec 2012 03:59:31 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 03:59:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775546"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:10 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:45 +0800
Message-Id: <1356018231-26440-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 04/10] EPT: Make EPT data structures and
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case: make the related
data structures and operations neutral so that they serve both ordinary
EPT and nested EPT.
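The EPTP layout that the new struct ept_data union models can be sketched as a plain shift-and-or mirroring its ept_mt/ept_wl/asr bitfields; the helper name here is illustrative, not part of the patch:

```c
#include <stdint.h>

/* Illustrative composition of an EPT pointer matching struct ept_data's
 * bitfields: ept_mt in bits 2:0, ept_wl (actual walk length - 1) in
 * bits 5:3, bits 11:6 reserved, and asr (the table root PFN) from
 * bit 12 upward. */
static uint64_t make_eptp(uint64_t asr_pfn, uint64_t wl, uint64_t mt)
{
    return (asr_pfn << 12) | (wl << 3) | mt;
}
```

For example, make_eptp(pfn, 3, 6 /* write-back */) corresponds to the ept->ept_wl = 3 and ept->ept_mt = EPT_DEFAULT_MT assignments in ept_p2m_init().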

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    9 +++-
 xen/arch/x86/hvm/vmx/vmx.c         |   53 ++-----------------
 xen/arch/x86/mm/p2m-ept.c          |  104 ++++++++++++++++++++++++++++--------
 xen/arch/x86/mm/p2m.c              |   23 ++++++---
 xen/include/asm-x86/hvm/vmx/vmcs.h |   23 ++++----
 xen/include/asm-x86/hvm/vmx/vmx.h  |   10 +++-
 xen/include/asm-x86/p2m.h          |    4 ++
 7 files changed, 133 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..379b75c 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -941,8 +941,13 @@ static int construct_vmcs(struct vcpu *v)
         __vmwrite(TPR_THRESHOLD, 0);
     }
 
-    if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+    if ( paging_mode_hap(d) ) {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+        struct ept_data *ept = &p2m->ept;
+
+        ept->asr  = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        __vmwrite(EPT_POINTER, ept_get_eptp(ept));
+    }
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abfa90..d74aae0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -74,38 +74,19 @@ static void vmx_fpu_dirty_intercept(void);
 static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg_intercept(unsigned long vaddr);
-static void __ept_sync_domain(void *info);
 
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
 
-    /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
-
-    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
-
-    d->arch.hvm_domain.vmx.ept_control.asr  =
-        pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
-        return -ENOMEM;
-
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-    {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
         return rc;
-    }
 
     return 0;
 }
 
 static void vmx_domain_destroy(struct domain *d)
 {
-    if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +622,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = &p2m_get_hostp2m(d)->ept;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +632,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1216,33 +1198,6 @@ static void vmx_update_guest_efer(struct vcpu *v)
                    (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
-static void __ept_sync_domain(void *info)
-{
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
-}
-
-void ept_sync_domain(struct domain *d)
-{
-    /* Only if using EPT and this domain has some VCPUs to dirty. */
-    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
-        return;
-
-    ASSERT(local_irq_is_enabled());
-
-    /*
-     * Flush active cpus synchronously. Flush others the next time this domain
-     * is scheduled onto them. We accept the race of other CPUs adding to
-     * the ept_synced mask before on_selected_cpus() reads it, resulting in
-     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
-     */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
-                d->domain_dirty_cpumask, &cpu_online_map);
-
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
-}
-
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
             unsigned long intr_fields, int error_code)
 {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..e33f415 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept = &p2m->ept;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table.*/
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept = &p2m->ept;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept = &p2m->ept;
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,24 +784,76 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept = &p2m->ept;
+    if ( ept_get_asr(ept) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
+            ept_get_wl(ept), ot, nt);
+
+    ept_sync_domain(p2m);
+}
+
+static void __ept_sync_domain(void *info)
+{
+    struct ept_data *ept = &((struct p2m_domain *)info)->ept;
 
-    ept_sync_domain(d);
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept), 0);
 }
 
-void ept_p2m_init(struct p2m_domain *p2m)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept = &p2m->ept;
+    /* Only if using EPT and this domain has some VCPUs to dirty. */
+    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
+        return;
+
+    ASSERT(local_irq_is_enabled());
+
+    /*
+     * Flush active cpus synchronously. Flush others the next time this domain
+     * is scheduled onto them. We accept the race of other CPUs adding to
+     * the ept_synced mask before on_selected_cpus() reads it, resulting in
+     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
+     */
+    cpumask_and(ept_get_synced_mask(ept),
+                d->domain_dirty_cpumask, &cpu_online_map);
+
+    on_selected_cpus(ept_get_synced_mask(ept),
+                     __ept_sync_domain, p2m, 1);
+}
+
+int ept_p2m_init(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->audit_p2m = NULL;
+
+    /* Set the memory type used when accessing EPT paging structures. */
+    ept->ept_mt = EPT_DEFAULT_MT;
+
+    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
+    ept->ept_wl = 3;
+
+    if ( !zalloc_cpumask_var(&ept->synced_mask) )
+        return -ENOMEM;
+
+    on_each_cpu(__ept_sync_domain, p2m, 1);
+
+    return 0;
+}
+
+void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+    free_cpumask_var(ept->synced_mask);
 }
 
 static void ept_dump_p2m_table(unsigned char key)
@@ -811,6 +869,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept;
 
     for_each_domain(d)
     {
@@ -818,15 +877,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept = &p2m->ept;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6a4bdd9..1f59410 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -57,8 +57,10 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 
 /* Init the datastructures for later use by the p2m code */
-static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
+    int ret = 0;
+
     mm_rwlock_init(&p2m->lock);
     mm_lock_init(&p2m->pod.lock);
     INIT_LIST_HEAD(&p2m->np2m_list);
@@ -72,11 +74,11 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
-        ept_p2m_init(p2m);
+        ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
 
-    return;
+    return ret;
 }
 
 static int
@@ -119,7 +121,7 @@ int p2m_init(struct domain *d)
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
+    if ( rc )
         p2m_final_teardown(d);
     return rc;
 }
@@ -424,12 +426,16 @@ void p2m_teardown(struct p2m_domain *p2m)
 static void p2m_teardown_nestedp2m(struct domain *d)
 {
     uint8_t i;
+    struct p2m_domain *p2m;
 
     for (i = 0; i < MAX_NESTEDP2M; i++) {
         if ( !d->arch.nested_p2m[i] )
             continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
+        p2m = d->arch.nested_p2m[i];
+        free_cpumask_var(p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
+        xfree(p2m);
         d->arch.nested_p2m[i] = NULL;
     }
 }
@@ -437,9 +443,12 @@ static void p2m_teardown_nestedp2m(struct domain *d)
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    if ( d->arch.p2m )
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    if ( p2m )
     {
         free_cpumask_var(d->arch.p2m->dirty_cpumask);
+        if ( hap_enabled(d) && cpu_has_vmx )
+            ept_p2m_uninit(p2m);
         xfree(d->arch.p2m);
         d->arch.p2m = NULL;
     }
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..2d38b43 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,27 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
+struct ept_data{
     union {
-        struct {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
-    cpumask_var_t ept_synced;
+    };
+    cpumask_var_t synced_mask;
+};
+
+struct vmx_domain {
+    unsigned long apic_access_mfn;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+#define ept_get_wl(ept)   ((ept)->ept_wl)
+#define ept_get_asr(ept)  ((ept)->asr)
+#define ept_get_eptp(ept) ((ept)->eptp)
+#define ept_get_synced_mask(ept) ((ept)->synced_mask)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index feaaa80..2600694 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -360,7 +360,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -422,12 +422,18 @@ void vmx_get_segment_register(struct vcpu *, enum x86_segment,
 void vmx_inject_extint(int trap);
 void vmx_inject_nmi(void);
 
-void ept_p2m_init(struct p2m_domain *p2m);
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ce26594..b6a84b6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,10 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    union {
+        struct ept_data ept;
+        /* NPT-equivalent structure could be added here. */
+    };
 };
 
 /* get host p2m table */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

                         unsigned long *l1gfn, uint8_t *p2m_acc,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 03:59:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 03:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXID-0003nO-3P; Thu, 20 Dec 2012 03:59:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlXIB-0003n4-9k
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 03:59:31 +0000
Received: from [85.158.139.83:53892] by server-6.bemta-5.messagelabs.com id
	15/B1-30498-22D82D05; Thu, 20 Dec 2012 03:59:30 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1355975968!30670629!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NTA5MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 917 invoked from network); 20 Dec 2012 03:59:29 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 03:59:29 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 19 Dec 2012 19:59:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,320,1355126400"; d="scan'208";a="264775531"
Received: from hax-build.sh.intel.com ([10.239.48.28])
	by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:06 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 20 Dec 2012 23:43:42 +0800
Message-Id: <1356018231-26440-2-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v3 01/10] nestedhap: Change hostcr3 and p2m->cr3
	to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

VMX has no concept of a host CR3 for the nested p2m; only SVM
does. Rename these fields and functions to neutral terms.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/hvm.c             |    6 +++---
 xen/arch/x86/hvm/svm/svm.c         |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    2 +-
 xen/arch/x86/mm/hap/nested_hap.c   |   15 ++++++++-------
 xen/arch/x86/mm/mm-locks.h         |    2 +-
 xen/arch/x86/mm/p2m.c              |   26 +++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h      |    4 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +-
 xen/include/asm-x86/p2m.h          |   16 ++++++++--------
 10 files changed, 39 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 40c1ab2..1cae8a8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4536,10 +4536,10 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v)
     return -EOPNOTSUPP;
 }
 
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v)
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v)
 {
-    if (hvm_funcs.nhvm_vcpu_hostcr3)
-        return hvm_funcs.nhvm_vcpu_hostcr3(v);
+    if (hvm_funcs.nhvm_vcpu_p2m_base)
+        return hvm_funcs.nhvm_vcpu_p2m_base(v);
     return -EOPNOTSUPP;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 55a5ae5..2c8504a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2003,7 +2003,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vcpu_vmexit = nsvm_vcpu_vmexit_inject,
     .nhvm_vcpu_vmexit_trap = nsvm_vcpu_vmexit_trap,
     .nhvm_vcpu_guestcr3 = nsvm_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3 = nsvm_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base = nsvm_vcpu_hostcr3,
     .nhvm_vcpu_asid = nsvm_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index aee1f9e..98309da 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1504,7 +1504,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3    = nvmx_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b005816..6d1a736 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -94,7 +94,7 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
     return 0;
 }
 
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v)
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
     /* TODO */
     ASSERT(0);
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 317875d..f9a5edc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -48,9 +48,10 @@
  *    1. If #NPF is from L1 guest, then we crash the guest VM (same as old 
  *       code)
  *    2. If #NPF is from L2 guest, then we continue from (3)
- *    3. Get h_cr3 from L1 guest. Map h_cr3 into L0 hypervisor address space.
- *    4. Walk the h_cr3 page table
- *    5.    - if not present, then we inject #NPF back to L1 guest and 
+ *    3. Get np2m base from L1 guest. Map np2m base into L0 hypervisor address space.
+ *    4. Walk the np2m's page table
+ *    5.    - if not present or the permission check fails, then we inject #NPF
+ *            back to L1 guest and
  *            re-launch L1 guest (L1 guest will either treat this #NPF as MMIO,
  *            or fix its p2m table for L2 guest)
 *    6.    - if present, then we will get a new translated value L1-GPA 
@@ -89,7 +90,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
     if (old_flags & _PAGE_PRESENT)
         flush_tlb_mask(p2m->dirty_cpumask);
-    
+
     paging_unlock(d);
 }
 
@@ -110,7 +111,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     /* If this p2m table has been flushed or recycled under our feet, 
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
-    if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
+    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
     {
         unsigned long gfn, mask;
         mfn_t mfn;
@@ -186,7 +187,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
     
-    nested_cr3 = nhvm_vcpu_hostcr3(v);
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
 
     pfec = PFEC_user_mode | PFEC_page_present;
     if (access_w)
@@ -221,7 +222,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 3700e32..1817f81 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -249,7 +249,7 @@ declare_mm_order_constraint(per_page_sharing)
  * A per-domain lock that protects the mapping from nested-CR3 to 
  * nested-p2m.  In particular it covers:
  * - the array of nested-p2m tables, and all LRU activity therein; and
- * - setting the "cr3" field of any p2m table to a non-CR3_EADDR value. 
+ * - setting the "cr3" field of any p2m table to a non-P2M_BASE_EADDR value. 
 *   (i.e. assigning a p2m table to be the shadow of that cr3) */
 
 /* PoD lock (per-p2m-table)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 258f46e..6a4bdd9 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -69,7 +69,7 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
 
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ept_p2m_init(p2m);
@@ -1433,7 +1433,7 @@ p2m_flush_table(struct p2m_domain *p2m)
     ASSERT(page_list_empty(&p2m->pod.single));
 
     /* This is no longer a valid nested p2m for any address space */
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
     
     /* Zap the top level of the trie */
     top = mfn_to_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
@@ -1471,7 +1471,7 @@ p2m_flush_nestedp2m(struct domain *d)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
+p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
     /* Use volatile to prevent gcc from caching nv->nv_p2m in a cpu register,
      * as this may change within the loop by another (v)cpu.
@@ -1480,8 +1480,8 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     struct domain *d;
     struct p2m_domain *p2m;
 
-    /* Mask out low bits; this avoids collisions with CR3_EADDR */
-    cr3 &= ~(0xfffull);
+    /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
+    np2m_base &= ~(0xfffull);
 
     if (nv->nv_flushp2m && nv->nv_p2m) {
         nv->nv_p2m = NULL;
@@ -1493,14 +1493,14 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR )
+        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             nv->nv_flushp2m = 0;
             p2m_getlru_nestedp2m(d, p2m);
             nv->nv_p2m = p2m;
-            if (p2m->cr3 == CR3_EADDR)
+            if (p2m->np2m_base == P2M_BASE_EADDR)
                 hvm_asid_flush_vcpu(v);
-            p2m->cr3 = cr3;
+            p2m->np2m_base = np2m_base;
             cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
@@ -1515,7 +1515,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     nv->nv_p2m = p2m;
-    p2m->cr3 = cr3;
+    p2m->np2m_base = np2m_base;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
@@ -1531,7 +1531,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1549,15 +1549,15 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint32_t pfec_21 = *pfec;
-        uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
+        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, ncr3);
+        p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
+        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
                                        gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index fdb0f58..d3535b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -170,7 +170,7 @@ struct hvm_function_table {
                                 uint64_t exitcode);
     int (*nhvm_vcpu_vmexit_trap)(struct vcpu *v, struct hvm_trap *trap);
     uint64_t (*nhvm_vcpu_guestcr3)(struct vcpu *v);
-    uint64_t (*nhvm_vcpu_hostcr3)(struct vcpu *v);
+    uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v);
     uint32_t (*nhvm_vcpu_asid)(struct vcpu *v);
     int (*nhvm_vmcx_guest_intercepts_trap)(struct vcpu *v, 
                                unsigned int trapnr, int errcode);
@@ -475,7 +475,7 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v);
 /* returns l1 guest's cr3 that points to the page table used to
  * translate l2 guest physical address to l1 guest physical address.
  */
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v);
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v);
 /* returns the asid number l1 guest wants to use to run the l2 guest */
 uint32_t nhvm_vcpu_asid(struct vcpu *v);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..d97011d 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -99,7 +99,7 @@ int nvmx_vcpu_initialise(struct vcpu *v);
 void nvmx_vcpu_destroy(struct vcpu *v);
 int nvmx_vcpu_reset(struct vcpu *v);
 uint64_t nvmx_vcpu_guestcr3(struct vcpu *v);
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v);
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v);
 uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2bd2048..ce26594 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -197,17 +197,17 @@ struct p2m_domain {
 
     struct domain     *domain;   /* back pointer to domain */
 
-    /* Nested p2ms only: nested-CR3 value that this p2m shadows. 
-     * This can be cleared to CR3_EADDR under the per-p2m lock but
+    /* Nested p2ms only: nested p2m base value that this p2m shadows. 
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
      * needs both the per-p2m lock and the per-domain nestedp2m lock
      * to set it to any other value. */
-#define CR3_EADDR     (~0ULL)
-    uint64_t           cr3;
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
      * The host p2m holds the head of the list and the np2ms are 
      * threaded on in LRU order. */
-    struct list_head np2m_list; 
+    struct list_head   np2m_list; 
 
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
@@ -282,11 +282,11 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 04:29:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 04:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlXkz-0005lL-Pi; Thu, 20 Dec 2012 04:29:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlXkx-0005lG-Uu
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 04:29:16 +0000
Received: from [85.158.139.83:58679] by server-16.bemta-5.messagelabs.com id
	0B/33-09208-B1492D05; Thu, 20 Dec 2012 04:29:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1355977752!28515058!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4431 invoked from network); 20 Dec 2012 04:29:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 04:29:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,320,1355097600"; 
   d="scan'208";a="268137"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 04:29:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 04:29:12 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlXku-0003zC-8K;
	Thu, 20 Dec 2012 04:29:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlXkt-0006h1-TR;
	Thu, 20 Dec 2012 04:29:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14791-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 04:29:12 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 14791: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14791 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14791/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14774
 test-amd64-amd64-xl-sedf     11 guest-localmigrate           fail   like 14774

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemut-win  13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  5acb5967d718
baseline version:
 xen                  11b4bc743b1f

------------------------------------------------------------
People who touched revisions under test:
  Dongxiao Xu <dongxiao.xu@intel.com>
  Jan Beulich <jbeulich@suse.com>
  Wei Huang <wei.huang2@amd.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=5acb5967d718
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 5acb5967d718
+ branch=xen-4.2-testing
+ revision=5acb5967d718
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.2-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.2-testing.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.2-testing.hg
+ hg push -r 5acb5967d718 ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.2-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 11 changes to 8 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 06:45:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 06:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlZsA-00078P-Ul; Thu, 20 Dec 2012 06:44:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1TlZs9-00078H-A9
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 06:44:49 +0000
Received: from [193.109.254.147:15340] by server-3.bemta-14.messagelabs.com id
	41/18-26055-0E3B2D05; Thu, 20 Dec 2012 06:44:48 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1355985887!934991!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDE5MTcxNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26003 invoked from network); 20 Dec 2012 06:44:48 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 06:44:48 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:Message-ID:Date:From:Organization:
	User-Agent:MIME-Version:To:CC:Subject:References:
	In-Reply-To:Content-Type:Content-Transfer-Encoding;
	b=GnW53VzaV+HBiMT2Sf+6UPRyPetDE1IFuOjmvVuqHRzBXeQPVNBndHje
	zs1qWIcTD0E+Ox3LFydmMJ60YeHxayWhjqmy0SM2wy4v3PwJzoVy6jO//
	7p0ajgLdK2eXath5cs1zgnOgB/hUuSLFeq/df28nObS5gtbd9ENrFeg21
	SbH5QHRDaCO5Z8FuMcmeBdiw1d7RbTHQNCkW/uBeeVSYLT0at0L0inlDL
	9aAdLKhJZ5a1V1QCTTM3Vx0VGz4k9;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1355985887; x=1387521887;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=HGBCE6YuKYA/Grh+4VQvOdxtFhQ6b+twm2lN0zFByZw=;
	b=t9zqw6QBSnBCLte1W8OsRglu48sdeoyTJbt2kuh3crrGdKkBwkXNg6Kh
	HV5/Xng95xUXl+53qlS04k9OqyoLS6C0/zC3MR9dF4xgqCK4QWdD0om+0
	7ATPjmHBL6Eyx+E39ftXqB2rWZoJyldED7PM57jzw8vds1Cx7x1vVTO6/
	RWcAG6e39lmRcmWLM/TAbzuAhC1kMnZ4zDOO3xYrieN+PZi2oKlJpryki
	2KobvLZJw3xVkRsPSLJtOAdVmzBTf;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="111762967"
Received: from abgdgate30u.abg.fsc.net ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 20 Dec 2012 07:44:46 +0100
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="153446063"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate30u.abg.fsc.net with SMTP; 20 Dec 2012 07:44:46 +0100
Received: from [172.17.21.50] (verdon.osd.mch.fsc.net [172.17.21.50])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id E2DB296A7E1;
	Thu, 20 Dec 2012 07:44:46 +0100 (CET)
Message-ID: <50D2B3DE.70206@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 07:44:46 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <06d2f322a6319d8ba212.1355944039@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 19.12.2012 20:07, schrieb Dario Faggioli:
> As vcpu-affinity tells where VCPUs must run, node-affinity tells
> where they should, or rather prefer to, run. While respecting
> vcpu-affinity remains mandatory, node-affinity is less strict: it
> only expresses a preference, although honouring it will almost
> always bring a significant performance benefit (especially as
> compared to not having any affinity at all).
>
> This change modifies the VCPU load balancing algorithm (for the
> credit scheduler only), introducing a two-step logic.
> During the first step, we use the node-affinity mask. The aim is
> to give precedence to the CPUs where it is known to be preferable
> for the domain to run. If that fails to find a valid PCPU, the
> node-affinity is just ignored and, in the second step, we fall
> back to using cpu-affinity only.
>
> Signed-off-by: Dario Faggioli<dario.faggioli@citrix.com>
> ---
> Changes from v1:
>   * CPU mask variables moved off the stack, as requested during
>     review. As per the comments in the code, having them in the private
>     (per-scheduler instance) struct could have been enough, but it would be
>     racy (again, see comments). For that reason, use a global set of
>     them (via per_cpu());

Wouldn't it be better to put the mask in the scheduler private per-pcpu area?
This could be applied to several other instances of cpu masks on the stack,
too.

>   * George suggested a different load balancing logic during v1's review. I
>     think he was right, so I changed the old implementation in a way
>     that resembles exactly that. I rewrote most of this patch to introduce
>     a more sensible and effective node-affinity handling logic.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -111,6 +111,33 @@
>
>
>   /*
> + * Node Balancing
> + */
> +#define CSCHED_BALANCE_CPU_AFFINITY     0
> +#define CSCHED_BALANCE_NODE_AFFINITY    1
> +#define CSCHED_BALANCE_LAST CSCHED_BALANCE_NODE_AFFINITY
> +
> +/*
> + * When building for high number of CPUs, cpumask_var_t
> + * variables on stack are better avoided. However, we need them,
> + * in order to be able to consider both vcpu and node affinity.
> + * We also don't want to xmalloc()/xfree() them, as that would
> + * happen in critical code paths. Therefore, let's (pre)allocate
> + * some scratch space for them.
> + *
> + * Having one mask for each instance of the scheduler seems
> + * enough, and that would suggest putting it within `struct
> + * csched_private' below. However, we don't always hold the
> + * private scheduler lock when the mask itself would need to
> + * be used, leaving room for races. For that reason, we define
> + * and use a cpumask_t for each CPU. As preemption is not an
> + * issue here (we're holding the runqueue spin-lock!), that is
> + * both enough and safe.
> + */
> +DEFINE_PER_CPU(cpumask_t, csched_balance_mask);
> +#define scratch_balance_mask (this_cpu(csched_balance_mask))
> +
> +/*
>    * Boot parameters
>    */
>   static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
> @@ -159,6 +186,9 @@ struct csched_dom {
>       struct list_head active_vcpu;
>       struct list_head active_sdom_elem;
>       struct domain *dom;
> +    /* cpumask translated from the domain's node-affinity.
> +     * Basically, the CPUs we prefer to be scheduled on. */
> +    cpumask_var_t node_affinity_cpumask;
>       uint16_t active_vcpu_count;
>       uint16_t weight;
>       uint16_t cap;
> @@ -239,6 +269,42 @@ static inline void
>       list_del_init(&svc->runq_elem);
>   }
>
> +#define for_each_csched_balance_step(__step) \
> +    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
> +
> +/*
> + * Each csched-balance step has to use its own cpumask. This function
> + * determines which one, given the step, and copies it in mask. Notice
> + * that, in case of node-affinity balancing step, it also filters out from
> + * the node-affinity mask the cpus that are not part of vc's cpu-affinity,
> + * as we do not want to end up running a vcpu where it would like to
> + * run, but is not allowed to!
> + *
> + * As an optimization, if a domain does not have any node-affinity at all
> + * (namely, its node-affinity is automatically computed), not only will the
> + * computed mask reflect its vcpu-affinity, but we also return -1, to
> + * let the caller know that it can skip the step or quit the loop (if
> + * it wants).
> + */
> +static int
> +csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
> +{
> +    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
> +    {
> +        struct domain *d = vc->domain;
> +        struct csched_dom *sdom = CSCHED_DOM(d);
> +
> +        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
> +
> +        if ( cpumask_full(sdom->node_affinity_cpumask) )
> +            return -1;
> +    }
> +    else /* step == CSCHED_BALANCE_CPU_AFFINITY */
> +        cpumask_copy(mask, vc->cpu_affinity);
> +
> +    return 0;
> +}
> +
>   static void burn_credits(struct csched_vcpu *svc, s_time_t now)
>   {
>       s_time_t delta;
> @@ -266,67 +332,94 @@ static inline void
>       struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask, idle_mask;
> -    int idlers_empty;
> +    int balance_step, idlers_empty;
>
>       ASSERT(cur);
> -    cpumask_clear(&mask);
> -
>       idlers_empty = cpumask_empty(prv->idlers);
>
>       /*
> -     * If the pcpu is idle, or there are no idlers and the new
> -     * vcpu is a higher priority than the old vcpu, run it here.
> -     *
> -     * If there are idle cpus, first try to find one suitable to run
> -     * new, so we can avoid preempting cur.  If we cannot find a
> -     * suitable idler on which to run new, run it here, but try to
> -     * find a suitable idler on which to run cur instead.
> +     * Node and vcpu-affinity balancing loop. To speed things up, in case
> +     * no node-affinity at all is present, scratch_balance_mask reflects
> +     * the vcpu-affinity, and ret is -1, so that we can then quit the
> +     * loop after only one step.
>        */
> -    if ( cur->pri == CSCHED_PRI_IDLE
> -         || (idlers_empty && new->pri > cur->pri) )
> +    for_each_csched_balance_step( balance_step )
>       {
> -        if ( cur->pri != CSCHED_PRI_IDLE )
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -        cpumask_set_cpu(cpu, &mask);
> -    }
> -    else if ( !idlers_empty )
> -    {
> -        /* Check whether or not there are idlers that can run new */
> -        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
> +        int ret, new_idlers_empty;
> +
> +        cpumask_clear(&mask);
>
>           /*
> -         * If there are no suitable idlers for new, and it's higher
> -         * priority than cur, ask the scheduler to migrate cur away.
> -         * We have to act like this (instead of just waking some of
> -         * the idlers suitable for cur) because cur is running.
> +         * If the pcpu is idle, or there are no idlers and the new
> +         * vcpu is a higher priority than the old vcpu, run it here.
>            *
> -         * If there are suitable idlers for new, no matter priorities,
> -         * leave cur alone (as it is running and is, likely, cache-hot)
> -         * and wake some of them (which is waking up and so is, likely,
> -         * cache cold anyway).
> +         * If there are idle cpus, first try to find one suitable to run
> +         * new, so we can avoid preempting cur.  If we cannot find a
> +         * suitable idler on which to run new, run it here, but try to
> +         * find a suitable idler on which to run cur instead.
>            */
> -        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
> +        if ( cur->pri == CSCHED_PRI_IDLE
> +             || (idlers_empty && new->pri > cur->pri) )
>           {
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> -            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> -            SCHED_STAT_CRANK(migrate_kicked_away);
> -            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +            if ( cur->pri != CSCHED_PRI_IDLE )
> +                SCHED_STAT_CRANK(tickle_idlers_none);
>               cpumask_set_cpu(cpu, &mask);
>           }
> -        else if ( !cpumask_empty(&idle_mask) )
> +        else if ( !idlers_empty )
>           {
> -            /* Which of the idlers suitable for new shall we wake up? */
> -            SCHED_STAT_CRANK(tickle_idlers_some);
> -            if ( opt_tickle_one_idle )
> +            /* Are there idlers suitable for new (for this balance step)? */
> +            ret = csched_balance_cpumask(new->vcpu, balance_step,
> +                                         &scratch_balance_mask);
> +            cpumask_and(&idle_mask, prv->idlers, &scratch_balance_mask);
> +            new_idlers_empty = cpumask_empty(&idle_mask);
> +
> +            /*
> +             * Let's not be too harsh! If there aren't idlers suitable
> +             * for new in its node-affinity mask, make sure we check its
> +             * vcpu-affinity as well, before taking final decisions.
> +             */
> +            if ( new_idlers_empty
> +                 && (balance_step == CSCHED_BALANCE_NODE_AFFINITY && !ret) )
> +                continue;
> +
> +            /*
> +             * If there are no suitable idlers for new, and it's higher
> +             * priority than cur, ask the scheduler to migrate cur away.
> +             * We have to act like this (instead of just waking some of
> +             * the idlers suitable for cur) because cur is running.
> +             *
> +             * If there are suitable idlers for new, no matter priorities,
> +             * leave cur alone (as it is running and is, likely, cache-hot)
> +             * and wake some of them (which is waking up and so is, likely,
> +             * cache cold anyway).
> +             */
> +            if ( new_idlers_empty && new->pri > cur->pri )
>               {
> -                this_cpu(last_tickle_cpu) =
> -                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> -                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                SCHED_STAT_CRANK(tickle_idlers_none);
> +                SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> +                SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> +                SCHED_STAT_CRANK(migrate_kicked_away);
> +                set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +                cpumask_set_cpu(cpu, &mask);
>               }
> -            else
> -                cpumask_or(&mask, &mask, &idle_mask);
> +            else if ( !new_idlers_empty )
> +            {
> +                /* Which of the idlers suitable for new shall we wake up? */
> +                SCHED_STAT_CRANK(tickle_idlers_some);
> +                if ( opt_tickle_one_idle )
> +                {
> +                    this_cpu(last_tickle_cpu) =
> +                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> +                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                }
> +                else
> +                    cpumask_or(&mask, &mask, &idle_mask);
> +            }
>           }
> +
> +        /* Did we find anyone (or csched_balance_cpumask() says we're done)? */
> +        if ( !cpumask_empty(&mask) || ret )
> +            break;
>       }
>
>       if ( !cpumask_empty(&mask) )
> @@ -475,15 +568,28 @@ static inline int
>   }
>
>   static inline int
> -__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu)
> +__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
>   {
>       /*
>        * Don't pick up work that's in the peer's scheduling tail or hot on
> -     * peer PCPU. Only pick up work that's allowed to run on our CPU.
> +     * peer PCPU. Only pick up work that prefers and/or is allowed to run
> +     * on our CPU.
>        */
>       return !vc->is_running &&
>              !__csched_vcpu_is_cache_hot(vc) &&
> -           cpumask_test_cpu(dest_cpu, vc->cpu_affinity);
> +           cpumask_test_cpu(dest_cpu, mask);
> +}
> +
> +static inline int
> +__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
> +{
> +    /*
> +     * Consent to migration if cpu is one of the idlers in the VCPU's
> +     * affinity mask. In fact, if that is not the case, it just means it
> +     * was some other CPU that was tickled and should hence come and pick
> +     * VCPU up. Migrating it to cpu would only make things worse.
> +     */
> +    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
>   }
>
>   static int
> @@ -493,85 +599,98 @@ static int
>       cpumask_t idlers;
>       cpumask_t *online;
>       struct csched_pcpu *spc = NULL;
> +    int ret, balance_step;
>       int cpu;
>
> -    /*
> -     * Pick from online CPUs in VCPU's affinity mask, giving a
> -     * preference to its current processor if it's in there.
> -     */
>       online = cpupool_scheduler_cpumask(vc->domain->cpupool);
> -    cpumask_and(&cpus, online, vc->cpu_affinity);
> -    cpu = cpumask_test_cpu(vc->processor, &cpus)
> -            ? vc->processor
> -            : cpumask_cycle(vc->processor, &cpus);
> -    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
> +    for_each_csched_balance_step( balance_step )
> +    {
> +        /* Pick an online CPU from the proper affinity mask */
> +        ret = csched_balance_cpumask(vc, balance_step, &cpus);
> +        cpumask_and(&cpus, &cpus, online);
>
> -    /*
> -     * Try to find an idle processor within the above constraints.
> -     *
> -     * In multi-core and multi-threaded CPUs, not all idle execution
> -     * vehicles are equal!
> -     *
> -     * We give preference to the idle execution vehicle with the most
> -     * idling neighbours in its grouping. This distributes work across
> -     * distinct cores first and guarantees we don't do something stupid
> -     * like run two VCPUs on co-hyperthreads while there are idle cores
> -     * or sockets.
> -     *
> -     * Notice that, when computing the "idleness" of cpu, we may want to
> -     * discount vc. That is, iff vc is the currently running and the only
> -     * runnable vcpu on cpu, we add cpu to the idlers.
> -     */
> -    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> -        cpumask_set_cpu(cpu, &idlers);
> -    cpumask_and(&cpus, &cpus, &idlers);
> -    cpumask_clear_cpu(cpu, &cpus);
> +        /* If present, prefer vc's current processor */
> +        cpu = cpumask_test_cpu(vc->processor, &cpus)
> +                ? vc->processor
> +                : cpumask_cycle(vc->processor, &cpus);
> +        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
>
> -    while ( !cpumask_empty(&cpus) )
> -    {
> -        cpumask_t cpu_idlers;
> -        cpumask_t nxt_idlers;
> -        int nxt, weight_cpu, weight_nxt;
> -        int migrate_factor;
> +        /*
> +         * Try to find an idle processor within the above constraints.
> +         *
> +         * In multi-core and multi-threaded CPUs, not all idle execution
> +         * vehicles are equal!
> +         *
> +         * We give preference to the idle execution vehicle with the most
> +         * idling neighbours in its grouping. This distributes work across
> +         * distinct cores first and guarantees we don't do something stupid
> +         * like run two VCPUs on co-hyperthreads while there are idle cores
> +         * or sockets.
> +         *
> +         * Notice that, when computing the "idleness" of cpu, we may want to
> +         * discount vc. That is, iff vc is the currently running and the only
> +         * runnable vcpu on cpu, we add cpu to the idlers.
> +         */
> +        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> +        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> +            cpumask_set_cpu(cpu, &idlers);
> +        cpumask_and(&cpus, &cpus, &idlers);
> +        /* If there are idlers and cpu is still not among them, pick one */
> +        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
> +            cpu = cpumask_cycle(cpu, &cpus);
> +        cpumask_clear_cpu(cpu, &cpus);
>
> -        nxt = cpumask_cycle(cpu, &cpus);
> +        while ( !cpumask_empty(&cpus) )
> +        {
> +            cpumask_t cpu_idlers;
> +            cpumask_t nxt_idlers;
> +            int nxt, weight_cpu, weight_nxt;
> +            int migrate_factor;
>
> -        if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
> -        {
> -            /* We're on the same socket, so check the busy-ness of threads.
> -             * Migrate if # of idlers is less at all */
> -            ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> -            migrate_factor = 1;
> -            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask, cpu));
> -            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask, nxt));
> -        }
> -        else
> -        {
> -            /* We're on different sockets, so check the busy-ness of cores.
> -             * Migrate only if the other core is twice as idle */
> -            ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> -            migrate_factor = 2;
> -            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
> -            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
> +            nxt = cpumask_cycle(cpu, &cpus);
> +
> +            if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
> +            {
> +                /* We're on the same socket, so check the busy-ness of threads.
> +                 * Migrate if # of idlers is less at all */
> +                ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> +                migrate_factor = 1;
> +                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask,
> +                            cpu));
> +                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask,
> +                            nxt));
> +            }
> +            else
> +            {
> +                /* We're on different sockets, so check the busy-ness of cores.
> +                 * Migrate only if the other core is twice as idle */
> +                ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> +                migrate_factor = 2;
> +                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
> +                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
> +            }
> +
> +            weight_cpu = cpumask_weight(&cpu_idlers);
> +            weight_nxt = cpumask_weight(&nxt_idlers);
> +            /* smt_power_savings: consolidate work rather than spreading it */
> +            if ( sched_smt_power_savings ?
> +                 weight_cpu > weight_nxt :
> +                 weight_cpu * migrate_factor < weight_nxt )
> +            {
> +                cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> +                spc = CSCHED_PCPU(nxt);
> +                cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> +                cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> +            }
> +            else
> +            {
> +                cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> +            }
>           }
>
> -        weight_cpu = cpumask_weight(&cpu_idlers);
> -        weight_nxt = cpumask_weight(&nxt_idlers);
> -        /* smt_power_savings: consolidate work rather than spreading it */
> -        if ( sched_smt_power_savings ?
> -             weight_cpu > weight_nxt :
> -             weight_cpu * migrate_factor < weight_nxt )
> -        {
> -            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> -            spc = CSCHED_PCPU(nxt);
> -            cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> -            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> -        }
> -        else
> -        {
> -            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> -        }
> +        /* Stop if cpu is idle (or if csched_balance_cpumask() says we can) */
> +        if ( cpumask_test_cpu(cpu, &idlers) || ret )
> +            break;
>       }
>
>       if ( commit && spc )
> @@ -913,6 +1032,13 @@ csched_alloc_domdata(const struct schedu
>       if ( sdom == NULL )
>           return NULL;
>
> +    if ( !alloc_cpumask_var(&sdom->node_affinity_cpumask) )
> +    {
> +        xfree(sdom);
> +        return NULL;
> +    }
> +    cpumask_setall(sdom->node_affinity_cpumask);
> +
>       /* Initialize credit and weight */
>       INIT_LIST_HEAD(&sdom->active_vcpu);
>       sdom->active_vcpu_count = 0;
> @@ -944,6 +1070,9 @@ csched_dom_init(const struct scheduler *
>   static void
>   csched_free_domdata(const struct scheduler *ops, void *data)
>   {
> +    struct csched_dom *sdom = data;
> +
> +    free_cpumask_var(sdom->node_affinity_cpumask);
>       xfree(data);
>   }
>
> @@ -1240,9 +1369,10 @@ csched_tick(void *_cpu)
>   }
>
>   static struct csched_vcpu *
> -csched_runq_steal(int peer_cpu, int cpu, int pri)
> +csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
>   {
>       const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
> +    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, peer_cpu));
>       const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
>       struct csched_vcpu *speer;
>       struct list_head *iter;
> @@ -1265,11 +1395,24 @@ csched_runq_steal(int peer_cpu, int cpu,
>               if ( speer->pri <= pri )
>                   break;
>
> -            /* Is this VCPU is runnable on our PCPU? */
> +            /* Is this VCPU runnable on our PCPU? */
>               vc = speer->vcpu;
>               BUG_ON( is_idle_vcpu(vc) );
>
> -            if (__csched_vcpu_is_migrateable(vc, cpu))
> +            /*
> +             * Retrieve the correct mask for this balance_step or, if we're
> +             * dealing with node-affinity and the vcpu has no node affinity
> +             * at all, just skip this vcpu. That is needed if we want to
> +             * check if we have any node-affine work to steal first (wrt
> +             * any vcpu-affine work).
> +             */
> +            if ( csched_balance_cpumask(vc, balance_step,
> +                                        &scratch_balance_mask) )
> +                continue;
> +
> +            if ( __csched_vcpu_is_migrateable(vc, cpu, &scratch_balance_mask)
> +                 && __csched_vcpu_should_migrate(cpu, &scratch_balance_mask,
> +                                                 prv->idlers) )
>               {
>                   /* We got a candidate. Grab it! */
>                   TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
> @@ -1295,7 +1438,8 @@ csched_load_balance(struct csched_privat
>       struct csched_vcpu *speer;
>       cpumask_t workers;
>       cpumask_t *online;
> -    int peer_cpu;
> +    int peer_cpu, peer_node, bstep;
> +    int node = cpu_to_node(cpu);
>
>       BUG_ON( cpu != snext->vcpu->processor );
>       online = cpupool_scheduler_cpumask(per_cpu(cpupool, cpu));
> @@ -1312,42 +1456,68 @@ csched_load_balance(struct csched_privat
>           SCHED_STAT_CRANK(load_balance_other);
>
>       /*
> -     * Peek at non-idling CPUs in the system, starting with our
> -     * immediate neighbour.
> +     * Let's look around for work to steal, taking both vcpu-affinity
> +     * and node-affinity into account. More specifically, we check all
> +     * the non-idle CPUs' runq, looking for:
> +     *  1. any node-affine work to steal first,
> +     *  2. if not finding anything, any vcpu-affine work to steal.
>        */
> -    cpumask_andnot(&workers, online, prv->idlers);
> -    cpumask_clear_cpu(cpu, &workers);
> -    peer_cpu = cpu;
> +    for_each_csched_balance_step( bstep )
> +    {
> +        /*
> +         * We peek at the non-idling CPUs in a node-wise fashion. In fact,
> +         * it is more likely that we find some node-affine work on our same
> +         * node, not to mention that migrating vcpus within the same node
> +         * could well be expected to be cheaper than across nodes (memory
> +         * stays local, there might be some node-wide cache[s], etc.).
> +         */
> +        peer_node = node;
> +        do
> +        {
> +            /* Find out what the !idle are in this node */
> +            cpumask_andnot(&workers, online, prv->idlers);
> +            cpumask_and(&workers, &workers, &node_to_cpumask(peer_node));
> +            cpumask_clear_cpu(cpu, &workers);
>
> -    while ( !cpumask_empty(&workers) )
> -    {
> -        peer_cpu = cpumask_cycle(peer_cpu, &workers);
> -        cpumask_clear_cpu(peer_cpu, &workers);
> +            if ( cpumask_empty(&workers) )
> +                goto next_node;
>
> -        /*
> -         * Get ahold of the scheduler lock for this peer CPU.
> -         *
> -         * Note: We don't spin on this lock but simply try it. Spinning could
> -         * cause a deadlock if the peer CPU is also load balancing and trying
> -         * to lock this CPU.
> -         */
> -        if ( !pcpu_schedule_trylock(peer_cpu) )
> -        {
> -            SCHED_STAT_CRANK(steal_trylock_failed);
> -            continue;
> -        }
> +            peer_cpu = cpumask_first(&workers);
> +            do
> +            {
> +                /*
> +                 * Get ahold of the scheduler lock for this peer CPU.
> +                 *
> +                 * Note: We don't spin on this lock but simply try it. Spinning
> +                 * could cause a deadlock if the peer CPU is also load
> +                 * balancing and trying to lock this CPU.
> +                 */
> +                if ( !pcpu_schedule_trylock(peer_cpu) )
> +                {
> +                    SCHED_STAT_CRANK(steal_trylock_failed);
> +                    peer_cpu = cpumask_cycle(peer_cpu, &workers);
> +                    continue;
> +                }
>
> -        /*
> -         * Any work over there to steal?
> -         */
> -        speer = cpumask_test_cpu(peer_cpu, online) ?
> -            csched_runq_steal(peer_cpu, cpu, snext->pri) : NULL;
> -        pcpu_schedule_unlock(peer_cpu);
> -        if ( speer != NULL )
> -        {
> -            *stolen = 1;
> -            return speer;
> -        }
> +                /* Any work over there to steal? */
> +                speer = cpumask_test_cpu(peer_cpu, online) ?
> +                    csched_runq_steal(peer_cpu, cpu, snext->pri, bstep) : NULL;
> +                pcpu_schedule_unlock(peer_cpu);
> +
> +                /* As soon as one vcpu is found, balancing ends */
> +                if ( speer != NULL )
> +                {
> +                    *stolen = 1;
> +                    return speer;
> +                }
> +
> +                peer_cpu = cpumask_cycle(peer_cpu, &workers);
> +
> +            } while( peer_cpu != cpumask_first(&workers) );
> +
> + next_node:
> +            peer_node = cycle_node(peer_node, node_online_map);
> +        } while( peer_node != node );
>       }
>
>    out:
> diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
> --- a/xen/include/xen/nodemask.h
> +++ b/xen/include/xen/nodemask.h
> @@ -41,6 +41,8 @@
>    * int last_node(mask)			Number highest set bit, or MAX_NUMNODES
>    * int first_unset_node(mask)		First node not set in mask, or
>    *					MAX_NUMNODES.
> + * int cycle_node(node, mask)		Next node cycling from 'node', or
> + *					MAX_NUMNODES
>    *
>    * nodemask_t nodemask_of_node(node)	Return nodemask with bit 'node' set
>    * NODE_MASK_ALL			Initializer - all bits set
> @@ -254,6 +256,16 @@ static inline int __first_unset_node(con
>   			find_first_zero_bit(maskp->bits, MAX_NUMNODES));
>   }
>
> +#define cycle_node(n, src) __cycle_node((n),&(src), MAX_NUMNODES)
> +static inline int __cycle_node(int n, const nodemask_t *maskp, int nbits)
> +{
> +    int nxt = __next_node(n, maskp, nbits);
> +
> +    if (nxt == nbits)
> +        nxt = __first_node(maskp, nbits);
> +    return nxt;
> +}
> +
>   #define NODE_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(MAX_NUMNODES)
>
>   #if MAX_NUMNODES <= BITS_PER_LONG
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <06d2f322a6319d8ba212.1355944039@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 19.12.2012 20:07, schrieb Dario Faggioli:
> As vcpu-affinity tells where VCPUs must run, node-affinity tells
> where they should run or, better, where they prefer to run. While
> respecting vcpu-affinity remains mandatory, node-affinity is not that
> strict: it only expresses a preference, although honouring it will
> almost always bring a significant performance benefit (especially
> when compared to not having any affinity at all).
>
> This change modifies the VCPU load balancing algorithm (for the
> credit scheduler only), introducing a two-step logic.
> During the first step, we use the node-affinity mask. The aim is
> to give precedence to the CPUs where it is known to be preferable
> for the domain to run. If that fails to find a valid PCPU, the
> node-affinity is just ignored and, in the second step, we fall
> back to using cpu-affinity only.
>
> Signed-off-by: Dario Faggioli<dario.faggioli@citrix.com>
> ---
> Changes from v1:
>   * CPU mask variables moved off the stack, as requested during
>     review. As per the comments in the code, having them in the private
>     (per-scheduler instance) struct could have been enough, but it would be
>     racy (again, see comments). For that reason, use a global bunch of
>     them (via per_cpu());

Wouldn't it be better to put the mask in the scheduler private per-pcpu area?
This could be applied to several other instances of cpu masks on the stack,
too.

>   * George suggested a different load balancing logic during v1's review. I
>     think he was right, and so I changed the old implementation in a way
>     that resembles exactly that. I rewrote most of this patch to introduce
>     a more sensible and effective node-affinity handling logic.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -111,6 +111,33 @@
>
>
>   /*
> + * Node Balancing
> + */
> +#define CSCHED_BALANCE_CPU_AFFINITY     0
> +#define CSCHED_BALANCE_NODE_AFFINITY    1
> +#define CSCHED_BALANCE_LAST CSCHED_BALANCE_NODE_AFFINITY
> +
> +/*
> + * When building for high number of CPUs, cpumask_var_t
> + * variables on stack are better avoided. However, we need them,
> + * in order to be able to consider both vcpu and node affinity.
> + * We also don't want to xmalloc()/xfree() them, as that would
> + * happen in critical code paths. Therefore, let's (pre)allocate
> + * some scratch space for them.
> + *
> + * Having one mask for each instance of the scheduler seems
> + * enough, and that would suggest putting it within `struct
> + * csched_private' below. However, we don't always hold the
> + * private scheduler lock when the mask itself would need to
> + * be used, leaving room for races. For that reason, we define
> + * and use a cpumask_t for each CPU. As preemption is not an
> + * issue here (we're holding the runqueue spin-lock!), that is
> + * both enough and safe.
> + */
> +DEFINE_PER_CPU(cpumask_t, csched_balance_mask);
> +#define scratch_balance_mask (this_cpu(csched_balance_mask))
> +
> +/*
>    * Boot parameters
>    */
>   static int __read_mostly sched_credit_tslice_ms = CSCHED_DEFAULT_TSLICE_MS;
> @@ -159,6 +186,9 @@ struct csched_dom {
>       struct list_head active_vcpu;
>       struct list_head active_sdom_elem;
>       struct domain *dom;
> +    /* cpumask translated from the domain's node-affinity.
> +     * Basically, the CPUs we prefer to be scheduled on. */
> +    cpumask_var_t node_affinity_cpumask;
>       uint16_t active_vcpu_count;
>       uint16_t weight;
>       uint16_t cap;
> @@ -239,6 +269,42 @@ static inline void
>       list_del_init(&svc->runq_elem);
>   }
>
> +#define for_each_csched_balance_step(__step) \
> +    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
> +
> +/*
> + * Each csched-balance step has to use its own cpumask. This function
> + * determines which one, given the step, and copies it in mask. Notice
> + * that, in case of node-affinity balancing step, it also filters out from
> + * the node-affinity mask the cpus that are not part of vc's cpu-affinity,
> + * as we do not want to end up running a vcpu where it would like to
> + * run, but is not allowed to!
> + *
> + * As an optimization, if a domain does not have any node-affinity at all
> + * (namely, its node-affinity is automatically computed), not only will
> + * the computed mask reflect its vcpu-affinity, but we also return -1 to
> + * let the caller know that it can skip the step or quit the loop (if it
> + * wants).
> + */
> +static int
> +csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
> +{
> +    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
> +    {
> +        struct domain *d = vc->domain;
> +        struct csched_dom *sdom = CSCHED_DOM(d);
> +
> +        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
> +
> +        if ( cpumask_full(sdom->node_affinity_cpumask) )
> +            return -1;
> +    }
> +    else /* step == CSCHED_BALANCE_CPU_AFFINITY */
> +        cpumask_copy(mask, vc->cpu_affinity);
> +
> +    return 0;
> +}
> +
>   static void burn_credits(struct csched_vcpu *svc, s_time_t now)
>   {
>       s_time_t delta;
> @@ -266,67 +332,94 @@ static inline void
>       struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask, idle_mask;
> -    int idlers_empty;
> +    int balance_step, idlers_empty;
>
>       ASSERT(cur);
> -    cpumask_clear(&mask);
> -
>       idlers_empty = cpumask_empty(prv->idlers);
>
>       /*
> -     * If the pcpu is idle, or there are no idlers and the new
> -     * vcpu is a higher priority than the old vcpu, run it here.
> -     *
> -     * If there are idle cpus, first try to find one suitable to run
> -     * new, so we can avoid preempting cur.  If we cannot find a
> -     * suitable idler on which to run new, run it here, but try to
> -     * find a suitable idler on which to run cur instead.
> +     * Node and vcpu-affinity balancing loop. To speed things up, in case
> +     * no node-affinity at all is present, scratch_balance_mask reflects
> +     * the vcpu-affinity, and ret is -1, so that we can then quit the
> +     * loop after only one step.
>        */
> -    if ( cur->pri == CSCHED_PRI_IDLE
> -         || (idlers_empty && new->pri > cur->pri) )
> +    for_each_csched_balance_step( balance_step )
>       {
> -        if ( cur->pri != CSCHED_PRI_IDLE )
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -        cpumask_set_cpu(cpu, &mask);
> -    }
> -    else if ( !idlers_empty )
> -    {
> -        /* Check whether or not there are idlers that can run new */
> -        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
> +        int ret, new_idlers_empty;
> +
> +        cpumask_clear(&mask);
>
>           /*
> -         * If there are no suitable idlers for new, and it's higher
> -         * priority than cur, ask the scheduler to migrate cur away.
> -         * We have to act like this (instead of just waking some of
> -         * the idlers suitable for cur) because cur is running.
> +         * If the pcpu is idle, or there are no idlers and the new
> +         * vcpu is a higher priority than the old vcpu, run it here.
>            *
> -         * If there are suitable idlers for new, no matter priorities,
> -         * leave cur alone (as it is running and is, likely, cache-hot)
> -         * and wake some of them (which is waking up and so is, likely,
> -         * cache cold anyway).
> +         * If there are idle cpus, first try to find one suitable to run
> +         * new, so we can avoid preempting cur.  If we cannot find a
> +         * suitable idler on which to run new, run it here, but try to
> +         * find a suitable idler on which to run cur instead.
>            */
> -        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
> +        if ( cur->pri == CSCHED_PRI_IDLE
> +             || (idlers_empty && new->pri > cur->pri) )
>           {
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> -            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> -            SCHED_STAT_CRANK(migrate_kicked_away);
> -            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +            if ( cur->pri != CSCHED_PRI_IDLE )
> +                SCHED_STAT_CRANK(tickle_idlers_none);
>               cpumask_set_cpu(cpu, &mask);
>           }
> -        else if ( !cpumask_empty(&idle_mask) )
> +        else if ( !idlers_empty )
>           {
> -            /* Which of the idlers suitable for new shall we wake up? */
> -            SCHED_STAT_CRANK(tickle_idlers_some);
> -            if ( opt_tickle_one_idle )
> +            /* Are there idlers suitable for new (for this balance step)? */
> +            ret = csched_balance_cpumask(new->vcpu, balance_step,
> +                                         &scratch_balance_mask);
> +            cpumask_and(&idle_mask, prv->idlers, &scratch_balance_mask);
> +            new_idlers_empty = cpumask_empty(&idle_mask);
> +
> +            /*
> +             * Let's not be too harsh! If there aren't idlers suitable
> +             * for new in its node-affinity mask, make sure we check its
> +             * vcpu-affinity as well, before taking any final decision.
> +             */
> +            if ( new_idlers_empty
> +                 && (balance_step == CSCHED_BALANCE_NODE_AFFINITY && !ret) )
> +                continue;
> +
> +            /*
> +             * If there are no suitable idlers for new, and it's higher
> +             * priority than cur, ask the scheduler to migrate cur away.
> +             * We have to act like this (instead of just waking some of
> +             * the idlers suitable for cur) because cur is running.
> +             *
> +             * If there are suitable idlers for new, no matter priorities,
> +             * leave cur alone (as it is running and is, likely, cache-hot)
> +             * and wake some of them (which is waking up and so is, likely,
> +             * cache cold anyway).
> +             */
> +            if ( new_idlers_empty && new->pri > cur->pri )
>               {
> -                this_cpu(last_tickle_cpu) =
> -                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> -                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                SCHED_STAT_CRANK(tickle_idlers_none);
> +                SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> +                SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> +                SCHED_STAT_CRANK(migrate_kicked_away);
> +                set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +                cpumask_set_cpu(cpu, &mask);
>               }
> -            else
> -                cpumask_or(&mask, &mask, &idle_mask);
> +            else if ( !new_idlers_empty )
> +            {
> +                /* Which of the idlers suitable for new shall we wake up? */
> +                SCHED_STAT_CRANK(tickle_idlers_some);
> +                if ( opt_tickle_one_idle )
> +                {
> +                    this_cpu(last_tickle_cpu) =
> +                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> +                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                }
> +                else
> +                    cpumask_or(&mask, &mask, &idle_mask);
> +            }
>           }
> +
> +        /* Did we find anyone (or csched_balance_cpumask() says we're done)? */
> +        if ( !cpumask_empty(&mask) || ret )
> +            break;
>       }
>
>       if ( !cpumask_empty(&mask) )
> @@ -475,15 +568,28 @@ static inline int
>   }
>
>   static inline int
> -__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu)
> +__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
>   {
>       /*
>        * Don't pick up work that's in the peer's scheduling tail or hot on
> -     * peer PCPU. Only pick up work that's allowed to run on our CPU.
> +     * peer PCPU. Only pick up work that prefers and/or is allowed to run
> +     * on our CPU.
>        */
>       return !vc->is_running &&
>              !__csched_vcpu_is_cache_hot(vc) &&
> -           cpumask_test_cpu(dest_cpu, vc->cpu_affinity);
> +           cpumask_test_cpu(dest_cpu, mask);
> +}
> +
> +static inline int
> +__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
> +{
> +    /*
> +     * Consent to migration if cpu is one of the idlers in the VCPU's
> +     * affinity mask. In fact, if that is not the case, it just means it
> +     * was some other CPU that was tickled and should hence come and pick
> +     * VCPU up. Migrating it to cpu would only make things worse.
> +     */
> +    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
>   }
>
>   static int
> @@ -493,85 +599,98 @@ static int
>       cpumask_t idlers;
>       cpumask_t *online;
>       struct csched_pcpu *spc = NULL;
> +    int ret, balance_step;
>       int cpu;
>
> -    /*
> -     * Pick from online CPUs in VCPU's affinity mask, giving a
> -     * preference to its current processor if it's in there.
> -     */
>       online = cpupool_scheduler_cpumask(vc->domain->cpupool);
> -    cpumask_and(&cpus, online, vc->cpu_affinity);
> -    cpu = cpumask_test_cpu(vc->processor, &cpus)
> -            ? vc->processor
> -            : cpumask_cycle(vc->processor, &cpus);
> -    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
> +    for_each_csched_balance_step( balance_step )
> +    {
> +        /* Pick an online CPU from the proper affinity mask */
> +        ret = csched_balance_cpumask(vc, balance_step, &cpus);
> +        cpumask_and(&cpus, &cpus, online);
>
> -    /*
> -     * Try to find an idle processor within the above constraints.
> -     *
> -     * In multi-core and multi-threaded CPUs, not all idle execution
> -     * vehicles are equal!
> -     *
> -     * We give preference to the idle execution vehicle with the most
> -     * idling neighbours in its grouping. This distributes work across
> -     * distinct cores first and guarantees we don't do something stupid
> -     * like run two VCPUs on co-hyperthreads while there are idle cores
> -     * or sockets.
> -     *
> -     * Notice that, when computing the "idleness" of cpu, we may want to
> -     * discount vc. That is, iff vc is the currently running and the only
> -     * runnable vcpu on cpu, we add cpu to the idlers.
> -     */
> -    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> -        cpumask_set_cpu(cpu, &idlers);
> -    cpumask_and(&cpus, &cpus, &idlers);
> -    cpumask_clear_cpu(cpu, &cpus);
> +        /* If present, prefer vc's current processor */
> +        cpu = cpumask_test_cpu(vc->processor, &cpus)
> +                ? vc->processor
> +                : cpumask_cycle(vc->processor, &cpus);
> +        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
>
> -    while ( !cpumask_empty(&cpus) )
> -    {
> -        cpumask_t cpu_idlers;
> -        cpumask_t nxt_idlers;
> -        int nxt, weight_cpu, weight_nxt;
> -        int migrate_factor;
> +        /*
> +         * Try to find an idle processor within the above constraints.
> +         *
> +         * In multi-core and multi-threaded CPUs, not all idle execution
> +         * vehicles are equal!
> +         *
> +         * We give preference to the idle execution vehicle with the most
> +         * idling neighbours in its grouping. This distributes work across
> +         * distinct cores first and guarantees we don't do something stupid
> +         * like run two VCPUs on co-hyperthreads while there are idle cores
> +         * or sockets.
> +         *
> +         * Notice that, when computing the "idleness" of cpu, we may want to
> +         * discount vc. That is, iff vc is the currently running and the only
> +         * runnable vcpu on cpu, we add cpu to the idlers.
> +         */
> +        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> +        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> +            cpumask_set_cpu(cpu, &idlers);
> +        cpumask_and(&cpus, &cpus, &idlers);
> +        /* If there are idlers and cpu is still not among them, pick one */
> +        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
> +            cpu = cpumask_cycle(cpu, &cpus);
> +        cpumask_clear_cpu(cpu, &cpus);
>
> -        nxt = cpumask_cycle(cpu, &cpus);
> +        while ( !cpumask_empty(&cpus) )
> +        {
> +            cpumask_t cpu_idlers;
> +            cpumask_t nxt_idlers;
> +            int nxt, weight_cpu, weight_nxt;
> +            int migrate_factor;
>
> -        if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
> -        {
> -            /* We're on the same socket, so check the busy-ness of threads.
> -             * Migrate if # of idlers is less at all */
> -            ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> -            migrate_factor = 1;
> -            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask, cpu));
> -            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask, nxt));
> -        }
> -        else
> -        {
> -            /* We're on different sockets, so check the busy-ness of cores.
> -             * Migrate only if the other core is twice as idle */
> -            ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> -            migrate_factor = 2;
> -            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
> -            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
> +            nxt = cpumask_cycle(cpu, &cpus);
> +
> +            if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
> +            {
> +                /* We're on the same socket, so check the busy-ness of threads.
> +                 * Migrate if # of idlers is less at all */
> +                ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> +                migrate_factor = 1;
> +                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask,
> +                            cpu));
> +                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask,
> +                            nxt));
> +            }
> +            else
> +            {
> +                /* We're on different sockets, so check the busy-ness of cores.
> +                 * Migrate only if the other core is twice as idle */
> +                ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> +                migrate_factor = 2;
> +                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
> +                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
> +            }
> +
> +            weight_cpu = cpumask_weight(&cpu_idlers);
> +            weight_nxt = cpumask_weight(&nxt_idlers);
> +            /* smt_power_savings: consolidate work rather than spreading it */
> +            if ( sched_smt_power_savings ?
> +                 weight_cpu > weight_nxt :
> +                 weight_cpu * migrate_factor < weight_nxt )
> +            {
> +                cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> +                spc = CSCHED_PCPU(nxt);
> +                cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> +                cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> +            }
> +            else
> +            {
> +                cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> +            }
>           }
>
> -        weight_cpu = cpumask_weight(&cpu_idlers);
> -        weight_nxt = cpumask_weight(&nxt_idlers);
> -        /* smt_power_savings: consolidate work rather than spreading it */
> -        if ( sched_smt_power_savings ?
> -             weight_cpu > weight_nxt :
> -             weight_cpu * migrate_factor < weight_nxt )
> -        {
> -            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> -            spc = CSCHED_PCPU(nxt);
> -            cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> -            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> -        }
> -        else
> -        {
> -            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> -        }
> +        /* Stop if cpu is idle (or if csched_balance_cpumask() says we can) */
> +        if ( cpumask_test_cpu(cpu, &idlers) || ret )
> +            break;
>       }
>
>       if ( commit && spc )
> @@ -913,6 +1032,13 @@ csched_alloc_domdata(const struct schedu
>       if ( sdom == NULL )
>           return NULL;
>
> +    if ( !alloc_cpumask_var(&sdom->node_affinity_cpumask) )
> +    {
> +        xfree(sdom);
> +        return NULL;
> +    }
> +    cpumask_setall(sdom->node_affinity_cpumask);
> +
>       /* Initialize credit and weight */
>       INIT_LIST_HEAD(&sdom->active_vcpu);
>       sdom->active_vcpu_count = 0;
> @@ -944,6 +1070,9 @@ csched_dom_init(const struct scheduler *
>   static void
>   csched_free_domdata(const struct scheduler *ops, void *data)
>   {
> +    struct csched_dom *sdom = data;
> +
> +    free_cpumask_var(sdom->node_affinity_cpumask);
>       xfree(data);
>   }
>
> @@ -1240,9 +1369,10 @@ csched_tick(void *_cpu)
>   }
>
>   static struct csched_vcpu *
> -csched_runq_steal(int peer_cpu, int cpu, int pri)
> +csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
>   {
>       const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
> +    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, peer_cpu));
>       const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
>       struct csched_vcpu *speer;
>       struct list_head *iter;
> @@ -1265,11 +1395,24 @@ csched_runq_steal(int peer_cpu, int cpu,
>               if ( speer->pri <= pri )
>                   break;
>
> -            /* Is this VCPU is runnable on our PCPU? */
> +            /* Is this VCPU runnable on our PCPU? */
>               vc = speer->vcpu;
>               BUG_ON( is_idle_vcpu(vc) );
>
> -            if (__csched_vcpu_is_migrateable(vc, cpu))
> +            /*
> +             * Retrieve the correct mask for this balance_step or, if we're
> +             * dealing with node-affinity and the vcpu has no node affinity
> +             * at all, just skip this vcpu. That is needed if we want to
> +             * check if we have any node-affine work to steal first (wrt
> +             * any vcpu-affine work).
> +             */
> +            if ( csched_balance_cpumask(vc, balance_step,
> +                                        &scratch_balance_mask) )
> +                continue;
> +
> +            if ( __csched_vcpu_is_migrateable(vc, cpu, &scratch_balance_mask)
> +                 && __csched_vcpu_should_migrate(cpu, &scratch_balance_mask,
> +                                                 prv->idlers) )
>               {
>                   /* We got a candidate. Grab it! */
>                   TRACE_3D(TRC_CSCHED_STOLEN_VCPU, peer_cpu,
> @@ -1295,7 +1438,8 @@ csched_load_balance(struct csched_privat
>       struct csched_vcpu *speer;
>       cpumask_t workers;
>       cpumask_t *online;
> -    int peer_cpu;
> +    int peer_cpu, peer_node, bstep;
> +    int node = cpu_to_node(cpu);
>
>       BUG_ON( cpu != snext->vcpu->processor );
>       online = cpupool_scheduler_cpumask(per_cpu(cpupool, cpu));
> @@ -1312,42 +1456,68 @@ csched_load_balance(struct csched_privat
>           SCHED_STAT_CRANK(load_balance_other);
>
>       /*
> -     * Peek at non-idling CPUs in the system, starting with our
> -     * immediate neighbour.
> +     * Let's look around for work to steal, taking both vcpu-affinity
> +     * and node-affinity into account. More specifically, we check all
> +     * the non-idle CPUs' runq, looking for:
> +     *  1. any node-affine work to steal first,
> +     *  2. if not finding anything, any vcpu-affine work to steal.
>        */
> -    cpumask_andnot(&workers, online, prv->idlers);
> -    cpumask_clear_cpu(cpu, &workers);
> -    peer_cpu = cpu;
> +    for_each_csched_balance_step( bstep )
> +    {
> +        /*
> +         * We peek at the non-idling CPUs in a node-wise fashion. In fact,
> +         * it is more likely that we find some node-affine work on our same
> +         * node, not to mention that migrating vcpus within the same node
> +         * can well be expected to be cheaper than across nodes (memory
> +         * stays local, there might be some node-wide cache[s], etc.).
> +         */
> +        peer_node = node;
> +        do
> +        {
> +            /* Find out which of the CPUs in this node are not idle */
> +            cpumask_andnot(&workers, online, prv->idlers);
> +            cpumask_and(&workers, &workers, &node_to_cpumask(peer_node));
> +            cpumask_clear_cpu(cpu, &workers);
>
> -    while ( !cpumask_empty(&workers) )
> -    {
> -        peer_cpu = cpumask_cycle(peer_cpu, &workers);
> -        cpumask_clear_cpu(peer_cpu, &workers);
> +            if ( cpumask_empty(&workers) )
> +                goto next_node;
>
> -        /*
> -         * Get ahold of the scheduler lock for this peer CPU.
> -         *
> -         * Note: We don't spin on this lock but simply try it. Spinning could
> -         * cause a deadlock if the peer CPU is also load balancing and trying
> -         * to lock this CPU.
> -         */
> -        if ( !pcpu_schedule_trylock(peer_cpu) )
> -        {
> -            SCHED_STAT_CRANK(steal_trylock_failed);
> -            continue;
> -        }
> +            peer_cpu = cpumask_first(&workers);
> +            do
> +            {
> +                /*
> +                 * Get ahold of the scheduler lock for this peer CPU.
> +                 *
> +                 * Note: We don't spin on this lock but simply try it. Spinning
> +                 * could cause a deadlock if the peer CPU is also load
> +                 * balancing and trying to lock this CPU.
> +                 */
> +                if ( !pcpu_schedule_trylock(peer_cpu) )
> +                {
> +                    SCHED_STAT_CRANK(steal_trylock_failed);
> +                    peer_cpu = cpumask_cycle(peer_cpu, &workers);
> +                    continue;
> +                }
>
> -        /*
> -         * Any work over there to steal?
> -         */
> -        speer = cpumask_test_cpu(peer_cpu, online) ?
> -            csched_runq_steal(peer_cpu, cpu, snext->pri) : NULL;
> -        pcpu_schedule_unlock(peer_cpu);
> -        if ( speer != NULL )
> -        {
> -            *stolen = 1;
> -            return speer;
> -        }
> +                /* Any work over there to steal? */
> +                speer = cpumask_test_cpu(peer_cpu, online) ?
> +                    csched_runq_steal(peer_cpu, cpu, snext->pri, bstep) : NULL;
> +                pcpu_schedule_unlock(peer_cpu);
> +
> +                /* As soon as one vcpu is found, balancing ends */
> +                if ( speer != NULL )
> +                {
> +                    *stolen = 1;
> +                    return speer;
> +                }
> +
> +                peer_cpu = cpumask_cycle(peer_cpu, &workers);
> +
> +            } while( peer_cpu != cpumask_first(&workers) );
> +
> + next_node:
> +            peer_node = cycle_node(peer_node, node_online_map);
> +        } while( peer_node != node );
>       }
>
>    out:
> diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
> --- a/xen/include/xen/nodemask.h
> +++ b/xen/include/xen/nodemask.h
> @@ -41,6 +41,8 @@
>    * int last_node(mask)			Number highest set bit, or MAX_NUMNODES
>    * int first_unset_node(mask)		First node not set in mask, or
>    *					MAX_NUMNODES.
> + * int cycle_node(node, mask)		Next node cycling from 'node', or
> + *					MAX_NUMNODES
>    *
>    * nodemask_t nodemask_of_node(node)	Return nodemask with bit 'node' set
>    * NODE_MASK_ALL			Initializer - all bits set
> @@ -254,6 +256,16 @@ static inline int __first_unset_node(con
>   			find_first_zero_bit(maskp->bits, MAX_NUMNODES));
>   }
>
> +#define cycle_node(n, src) __cycle_node((n), &(src), MAX_NUMNODES)
> +static inline int __cycle_node(int n, const nodemask_t *maskp, int nbits)
> +{
> +    int nxt = __next_node(n, maskp, nbits);
> +
> +    if (nxt == nbits)
> +        nxt = __first_node(maskp, nbits);
> +    return nxt;
> +}
> +
>   #define NODE_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(MAX_NUMNODES)
>
>   #if MAX_NUMNODES <= BITS_PER_LONG
>
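[For reference, the wrap-around behaviour that the quoted `__cycle_node()` hunk introduces can be reproduced with a plain bitmask. This is a self-contained sketch only: `next_bit()` and `first_bit()` below stand in for Xen's `__next_node()`/`__first_node()` and are not real Xen APIs.]

```c
/*
 * Sketch of cycle_node() semantics: find the next set bit after 'n',
 * wrapping to the first set bit when the end of the mask is reached.
 * Returns 'nbits' only when the mask is empty.
 */
static int next_bit(unsigned long mask, int n, int nbits)
{
    for (int i = n + 1; i < nbits; i++)
        if (mask & (1UL << i))
            return i;
    return nbits;            /* no further bit set: like __next_node() */
}

static int first_bit(unsigned long mask, int nbits)
{
    for (int i = 0; i < nbits; i++)
        if (mask & (1UL << i))
            return i;
    return nbits;            /* empty mask: like __first_node() */
}

static int cycle_bit(unsigned long mask, int n, int nbits)
{
    int nxt = next_bit(mask, n, nbits);

    if (nxt == nbits)        /* ran off the end: wrap around */
        nxt = first_bit(mask, nbits);
    return nxt;
}
```

[With bits {1, 4, 6} set (mask 0x52), cycling from 4 yields 6, and cycling from 6 wraps back to 1, which is exactly the node-iteration pattern the balancing loop above relies on.]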
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 06:56:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 06:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tla2h-0007NY-Bo; Thu, 20 Dec 2012 06:55:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tla2f-0007NT-Py
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 06:55:42 +0000
Received: from [85.158.143.99:23049] by server-2.bemta-4.messagelabs.com id
	C8/CD-30861-D66B2D05; Thu, 20 Dec 2012 06:55:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355986524!29306832!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10172 invoked from network); 20 Dec 2012 06:55:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 06:55:25 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="269967"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 06:55:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 06:55:24 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tla2O-0004mi-EB;
	Thu, 20 Dec 2012 06:55:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tla2O-0005Nj-7P;
	Thu, 20 Dec 2012 06:55:24 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14792-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 06:55:24 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14792: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3388756001777065721=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3388756001777065721==
Content-Type: text/plain

flight 14792 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14792/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14785

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14785
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14785

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  090cc3e20d3e
baseline version:
 xen                  b04de677de31

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <osp@andrep.de>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 532 lines long.)


--===============3388756001777065721==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3388756001777065721==--

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:16:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbIf-0000Bg-Dn; Thu, 20 Dec 2012 08:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlbId-0000Bb-Lp
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:16:15 +0000
Received: from [85.158.143.35:26360] by server-1.bemta-4.messagelabs.com id
	CA/DD-28401-E49C2D05; Thu, 20 Dec 2012 08:16:14 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355991373!4789429!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21672 invoked from network); 20 Dec 2012 08:16:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 08:16:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="271201"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 08:16:12 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 08:16:12 +0000
Message-ID: <1355991370.28419.15.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:16:10 +0100
In-Reply-To: <50D2B3DE.70206@ts.fujitsu.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D2B3DE.70206@ts.fujitsu.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5146542251294028542=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5146542251294028542==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-2+MsGxpSMVOkyiBZmytI"

--=-2+MsGxpSMVOkyiBZmytI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 07:44 +0100, Juergen Gross wrote:
> Am 19.12.2012 20:07, schrieb Dario Faggioli:
> > [...]
> >
> > This change modifies the VCPU load balancing algorithm (for the
> > credit scheduler only), introducing a two steps logic.
> > During the first step, we use the node-affinity mask. The aim is
> > giving precedence to the CPUs where it is known to be preferable
> > for the domain to run. If that fails in finding a valid PCPU, the
> > node-affinity is just ignored and, in the second step, we fall
> > back to using cpu-affinity only.
> >
> > Signed-off-by: Dario Faggioli<dario.faggioli@citrix.com>
> > ---
> > Changes from v1:
> >   * CPU masks variables moved off from the stack, as requested during
> >     review. As per the comments in the code, having them in the private
> >     (per-scheduler instance) struct could have been enough, but it would
> >     be racy (again, see comments). For that reason, use a global bunch
> >     of them (via per_cpu());
>
> Wouldn't it be better to put the mask in the scheduler private per-pcpu
> area? This could be applied to several other instances of cpu masks on
> the stack, too.
>
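[The two-step logic described in the quoted changelog can be condensed into a few lines. This is a sketch with plain bitmasks instead of cpumask_t; all names here are illustrative, not the actual Xen functions.]

```c
#define NO_CPU (-1)

/* Pick the lowest set bit of 'mask', or NO_CPU if the mask is empty. */
static int pick_cpu_in(unsigned long mask)
{
    for (int cpu = 0; cpu < (int)(sizeof(mask) * 8); cpu++)
        if (mask & (1UL << cpu))
            return cpu;
    return NO_CPU;
}

/*
 * Step 1: restrict the candidate CPUs to those in both the cpu-affinity
 * and the node-affinity. Step 2: if that intersection is empty, ignore
 * node-affinity and fall back to cpu-affinity alone.
 */
static int two_step_pick(unsigned long candidates,
                         unsigned long cpu_affinity,
                         unsigned long node_affinity)
{
    int cpu = pick_cpu_in(candidates & cpu_affinity & node_affinity);

    if (cpu == NO_CPU)
        cpu = pick_cpu_in(candidates & cpu_affinity);
    return cpu;
}
```

[E.g. with candidates 0xF0 and node-affinity 0x0F, step 1 finds nothing and the fallback picks CPU 4 from the cpu-affinity alone.]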
Yes, as I tried to explain, if it's per-CPU it should be fine, since
credit has one runq per CPU and hence the runq lock is enough for
serialization.

BTW, can you be a little more specific about where you're suggesting to
put it? I'm sorry, but I'm not sure I've understood what you mean by "the
scheduler private per-pcpu area"... Do you perhaps mean making it a
member of `struct csched_pcpu'?
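[For what it's worth, the alternative being asked about would look roughly as below: the scratch mask lives inside the scheduler's per-pcpu state rather than in a global DEFINE_PER_CPU() variable. Plain C with illustrative names only; these are not the actual Xen structures.]

```c
#include <assert.h>

#define NR_CPUS 4

/*
 * One scratch mask per CPU, kept inside the per-pcpu scheduler state
 * (the rough analogue of adding a member to struct csched_pcpu).
 * Either placement gives exactly one mask per CPU, so holding that
 * CPU's runqueue lock is enough to serialize access to it.
 */
struct sched_pcpu {
    int runq_lock_held;             /* stand-in for the real runq lock */
    unsigned long balance_mask;     /* scratch mask, one per CPU */
};

static struct sched_pcpu pcpus[NR_CPUS];

/* Valid only while the caller holds this CPU's runqueue lock. */
static unsigned long *scratch_mask(int cpu)
{
    assert(pcpus[cpu].runq_lock_held);
    return &pcpus[cpu].balance_mask;
}
```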

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-2+MsGxpSMVOkyiBZmytI
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDSyUoACgkQk4XaBE3IOsQ0ygCbBezWUrauMh3gF7zGj48dw3uG
HxQAoJ4JW9w9Y6S8+Njy9uuJX0l8akET
=q+7/
-----END PGP SIGNATURE-----

--=-2+MsGxpSMVOkyiBZmytI--


--===============5146542251294028542==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5146542251294028542==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 08:16:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbIf-0000Bg-Dn; Thu, 20 Dec 2012 08:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlbId-0000Bb-Lp
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:16:15 +0000
Received: from [85.158.143.35:26360] by server-1.bemta-4.messagelabs.com id
	CA/DD-28401-E49C2D05; Thu, 20 Dec 2012 08:16:14 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1355991373!4789429!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21672 invoked from network); 20 Dec 2012 08:16:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 08:16:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="271201"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 08:16:12 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 08:16:12 +0000
Message-ID: <1355991370.28419.15.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:16:10 +0100
In-Reply-To: <50D2B3DE.70206@ts.fujitsu.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D2B3DE.70206@ts.fujitsu.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5146542251294028542=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5146542251294028542==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-2+MsGxpSMVOkyiBZmytI"

--=-2+MsGxpSMVOkyiBZmytI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 07:44 +0100, Juergen Gross wrote:=20
> Am 19.12.2012 20:07, schrieb Dario Faggioli:
> > [...]=20
> >
> > This change modifies the VCPU load balancing algorithm (for the
> > credit scheduler only), introducing a two steps logic.
> > During the first step, we use the node-affinity mask. The aim is
> > giving precedence to the CPUs where it is known to be preferable
> > for the domain to run. If that fails in finding a valid PCPU, the
> > node-affinity is just ignored and, in the second step, we fall
> > back to using cpu-affinity only.
> >
> > Signed-off-by: Dario Faggioli<dario.faggioli@citrix.com>
> > ---
> > Changes from v1:
> >   * CPU masks variables moved off from the stack, as requested during
> >     review. As per the comments in the code, having them in the private
> >     (per-scheduler instance) struct could have been enough, but it woul=
d be
> >     racy (again, see comments). For that reason, use a global bunch of
> >     them of (via per_cpu());
>=20
> Wouldn't it be better to put the mask in the scheduler private per-pcpu a=
rea?
> This could be applied to several other instances of cpu masks on the stac=
k,
> too.
>=20
Yes, as I tried to explain, if it's per-cpu it should be fine, since
credit has one runq per CPU and hence the runq lock is enough for
serialization.
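
For illustration, the two-step pCPU selection described in the changelog
above can be sketched like this (a toy model: the helper names and the
64-bit word standing in for Xen's cpumask_t are made up for the example,
not taken from the actual sched_credit.c code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t cpumask_t;   /* toy stand-in for Xen's cpumask_t */

/* Pick a pCPU from the mask (here: lowest set bit), or -1 if empty. */
static int pick_cpu(cpumask_t mask)
{
    for (int cpu = 0; cpu < 64; cpu++)
        if (mask & (1ULL << cpu))
            return cpu;
    return -1;
}

/* Step 1: restrict cpu-affinity to node-affinity and try to pick there.
 * Step 2: if that yields no valid pCPU, ignore node-affinity and fall
 *         back to cpu-affinity alone. */
static int csched_pick_cpu(cpumask_t cpu_aff, cpumask_t node_aff)
{
    int cpu = pick_cpu(cpu_aff & node_aff);
    if (cpu < 0)
        cpu = pick_cpu(cpu_aff);
    return cpu;
}
```

Since the scratch mask used for the intersection is only touched with the
runq lock held, a per-pCPU copy is all the serialization this needs.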

BTW, can you be a little more specific about where you're suggesting
to put it? I'm sorry, but I'm not sure I follow what you mean by "the
scheduler private per-pcpu area"... Do you perhaps mean making it a
member of `struct csched_pcpu'?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-2+MsGxpSMVOkyiBZmytI
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDSyUoACgkQk4XaBE3IOsQ0ygCbBezWUrauMh3gF7zGj48dw3uG
HxQAoJ4JW9w9Y6S8+Njy9uuJX0l8akET
=q+7/
-----END PGP SIGNATURE-----

--=-2+MsGxpSMVOkyiBZmytI--


--===============5146542251294028542==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5146542251294028542==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 08:26:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbS7-0000NO-JZ; Thu, 20 Dec 2012 08:26:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1TlbS5-0000NJ-EL
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:26:01 +0000
Received: from [85.158.139.211:48762] by server-16.bemta-5.messagelabs.com id
	84/3E-09208-89BC2D05; Thu, 20 Dec 2012 08:26:00 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355991959!20479992!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDE5MTcxNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20252 invoked from network); 20 Dec 2012 08:26:00 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 08:26:00 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:Message-ID:Date:From:Organization:
	User-Agent:MIME-Version:To:CC:Subject:References:
	In-Reply-To:Content-Type:Content-Transfer-Encoding;
	b=GLijQwrtNsfK8NeLH8rr1TI+1H3OW+vyBcfa+pfhPOdnIBJBsXr7c/jY
	vkLQuN1qOqFzxkOdhDUTiXuh23gb0Jb2uWJ6h1R3MdnxF5ZFrV4u3bavW
	1Ux/S5LGlZ3DTweRwM3QTsD2fBwxkEvP+i2uP+F8gzhsEOMEMNVVH4Lhj
	9vdUAWrbfU0fWHZ2bZ/OiDWikzsOoltHsWh8/evlnN8q5RSOWzWQBH3Zz
	jGQuisIlOrHUVYTcEbKXLhdEGdmsL;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1355991960; x=1387527960;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=+zz96iGblbfBupEEtY8Oudf9Pso2afH5zYl+XAfgwbA=;
	b=FIVSOl1z0esruL9oa6UsEfL1856ErhNshTBS5q9gb2XTxWBKnJDcIMs+
	0Y9vU0Rlicpl+jdk+HtYyWOpwq61jVyFvvVoQ7k6feiAPD/cheN/x+evN
	xW7HdkuaqAtcWovTvR1gnPkp5EHZvh0gcGiCNqeyk2n2f0jJgghTk/rmu
	X+sxcEfI5ZjHfESICP4TO/EVurM8GPP/3yB+vCDVqQQf9rlmt6Ze43Cdc
	+bkAy+Baa0lrFfYNc98PB1gZXLZ2K;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="111773388"
Received: from abgdgate40u.abg.fsc.net ([172.25.138.90])
	by dgate20u.abg.fsc.net with ESMTP; 20 Dec 2012 09:25:59 +0100
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="154234655"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate40u.abg.fsc.net with SMTP; 20 Dec 2012 09:25:58 +0100
Received: from [172.17.21.50] (verdon.osd.mch.fsc.net [172.17.21.50])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id B69F6968EFA;
	Thu, 20 Dec 2012 09:25:58 +0100 (CET)
Message-ID: <50D2CB96.4030509@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:25:58 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>	<06d2f322a6319d8ba212.1355944039@Solace>	<50D2B3DE.70206@ts.fujitsu.com>
	<1355991370.28419.15.camel@Abyss>
In-Reply-To: <1355991370.28419.15.camel@Abyss>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 20.12.2012 09:16, schrieb Dario Faggioli:
> On Thu, 2012-12-20 at 07:44 +0100, Juergen Gross wrote:
>> Am 19.12.2012 20:07, schrieb Dario Faggioli:
>>> [...]
>>>
>>> This change modifies the VCPU load balancing algorithm (for the
>>> credit scheduler only), introducing a two-step logic.
>>> During the first step, we use the node-affinity mask. The aim is
>>> to give precedence to the CPUs where the domain is known to run
>>> best. If that fails to find a valid PCPU, the
>>> node-affinity is just ignored and, in the second step, we fall
>>> back to using cpu-affinity only.
>>>
>>> Signed-off-by: Dario Faggioli<dario.faggioli@citrix.com>
>>> ---
>>> Changes from v1:
>>>    * CPU mask variables moved off the stack, as requested during
>>>      review. As per the comments in the code, having them in the private
>>>      (per-scheduler instance) struct could have been enough, but it would be
>>>      racy (again, see comments). For that reason, use a global bunch of
>>>      them (via per_cpu());
>>
>> Wouldn't it be better to put the mask in the scheduler private per-pcpu area?
>> This could be applied to several other instances of cpu masks on the stack,
>> too.
>>
> Yes, as I tried to explain, if it's per-cpu it should be fine, since
> credit has one runq per CPU and hence the runq lock is enough for
> serialization.
>
> BTW, can you be a little more specific about where you're suggesting
> to put it? I'm sorry, but I'm not sure I follow what you mean by "the
> scheduler private per-pcpu area"... Do you perhaps mean making it a
> member of `struct csched_pcpu'?

Yes, that's what I would suggest.

Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:34:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:34:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbZU-0000X5-Mi; Thu, 20 Dec 2012 08:33:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlbZT-0000X0-4D
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:33:39 +0000
Received: from [85.158.143.35:35232] by server-3.bemta-4.messagelabs.com id
	80/AB-18211-26DC2D05; Thu, 20 Dec 2012 08:33:38 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1355992417!15819471!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12457 invoked from network); 20 Dec 2012 08:33:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 08:33:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="271573"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 08:33:37 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 08:33:36 +0000
Message-ID: <1355992411.28419.25.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:33:31 +0100
In-Reply-To: <50D2CB96.4030509@ts.fujitsu.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>	<50D2B3DE.70206@ts.fujitsu.com>
	<1355991370.28419.15.camel@Abyss> <50D2CB96.4030509@ts.fujitsu.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7148136116007305855=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7148136116007305855==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-soEKK+jj6u9847SlDWgB"

--=-soEKK+jj6u9847SlDWgB
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 08:25 +0000, Juergen Gross wrote:
> > BTW, can you be a little more specific about where you're suggesting
> > to put it? I'm sorry, but I'm not sure I follow what you mean by "the
> > scheduler private per-pcpu area"... Do you perhaps mean making it a
> > member of `struct csched_pcpu'?
>
> Yes, that's what I would suggest.
>
Ok then, functionally, that is going to be exactly the same thing as
where it is right now, i.e., a set of global per_cpu() variables. It is
possible that your solution brings some cache/locality benefits,
although that will very much depend on the specific case, architecture,
workload, etc.
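
As a rough sketch of what the `struct csched_pcpu' placement would look
like (everything besides the csched_pcpu name itself is illustrative,
with a plain 64-bit integer standing in for cpumask_t):

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS 8

typedef uint64_t cpumask_t;          /* toy stand-in for Xen's cpumask_t */

struct csched_pcpu {                 /* per-pCPU scheduler-private data */
    /* ... runqueue, etc. ... */
    cpumask_t balance_mask;          /* scratch mask for load balancing;
                                      * protected by this pCPU's runq lock */
};

static struct csched_pcpu pcpu_data[NR_CPUS];

/* Intersect node-affinity with cpu-affinity into this pCPU's scratch
 * mask and return a pointer to it, instead of building it on the stack
 * or in a global per_cpu() variable. */
static cpumask_t *csched_balance_cpumask(int cpu, cpumask_t node_aff,
                                         cpumask_t cpu_aff)
{
    struct csched_pcpu *spc = &pcpu_data[cpu];
    spc->balance_mask = node_aff & cpu_aff;
    return &spc->balance_mask;
}
```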

That being said, I'm definitely fine with it and can go for it.

It was Jan that suggested/asked to pull them out of the stack, and I
guess it's George's taste that we value most when hacking sched_*.c, so
let's see if they want to comment on this and then I'll decide. :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-soEKK+jj6u9847SlDWgB
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDSzVsACgkQk4XaBE3IOsRKHwCgpeiF6webHpgSFZx/8tYzlnBo
N1kAn00yydD9voxHlT+/h2G/tWBPHlDD
=WTHH
-----END PGP SIGNATURE-----

--=-soEKK+jj6u9847SlDWgB--


--===============7148136116007305855==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7148136116007305855==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 08:40:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbfL-0000i1-I4; Thu, 20 Dec 2012 08:39:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1TlbfK-0000hu-BW
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:39:42 +0000
Received: from [85.158.143.99:45601] by server-2.bemta-4.messagelabs.com id
	59/F9-30861-DCEC2D05; Thu, 20 Dec 2012 08:39:41 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355992780!30156448!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDE5MTcxNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11881 invoked from network); 20 Dec 2012 08:39:40 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 08:39:40 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:Message-ID:Date:From:Organization:
	User-Agent:MIME-Version:To:CC:Subject:References:
	In-Reply-To:Content-Type:Content-Transfer-Encoding;
	b=SHqPlBFm69TfHHC6+MkfBbolT5wxvqc3xW8+MbXL5zteZ+sEcbB7EHWM
	MLZas5LaiBJ4o44VfumIykWRMEwuLHtVbNwl+ZWU7zKJTmJwZeB7eCRJ7
	l7V/lDUVv5VBNcI3pJTIEg1XolW9QWmZiMyBtwn1BSRH4y6MbMIp5IvEt
	Gx7IMEzs3/MIc5E6oCwWyQkVDccV6SnaIcukeVJlYqh7CZKzUXBhnmquj
	4EVd3DqamLnLp+wZf9ROso4buI87q;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1355992781; x=1387528781;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=0/l6amK0BosO6EF2MJiWkXVK42mYxybvIHOIPyXD4TY=;
	b=E2Gs4/vKhPIhdWqzXZRxS0dSv7OdQz1oqVJpw39R2R0SnDpHIRFhkqCr
	ESeF8ld1mc8ftEfw+pI+EeKSqF91VqUkgV8QmiRmT3W3EIFsnlrxF2mAH
	oI8Msn0Qn9fnS2IRCM4w09ycE91ixAW9ppOfoyBdIwRQCBUSBVTu+KPAu
	jGxABTShC26JXcwrgbExxqkO/gbRGiswBt72HuU69ZQCaSkGhIebssj/K
	pxkbrQLMhu77E/vSqjp66XjONpUNL;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="111775381"
Received: from abgdgate40u.abg.fsc.net ([172.25.138.90])
	by dgate20u.abg.fsc.net with ESMTP; 20 Dec 2012 09:39:40 +0100
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="154235856"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate40u.abg.fsc.net with SMTP; 20 Dec 2012 09:39:40 +0100
Received: from [172.17.21.50] (verdon.osd.mch.fsc.net [172.17.21.50])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id D62F796C26C;
	Thu, 20 Dec 2012 09:39:39 +0100 (CET)
Message-ID: <50D2CECB.4030108@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:39:39 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>	<06d2f322a6319d8ba212.1355944039@Solace>	<50D2B3DE.70206@ts.fujitsu.com>	<1355991370.28419.15.camel@Abyss>
	<50D2CB96.4030509@ts.fujitsu.com> <1355992411.28419.25.camel@Abyss>
In-Reply-To: <1355992411.28419.25.camel@Abyss>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 20.12.2012 09:33, schrieb Dario Faggioli:
> On Thu, 2012-12-20 at 08:25 +0000, Juergen Gross wrote:
>>> BTW, can you be a little more specific about where you're suggesting
>>> to put it? I'm sorry, but I'm not sure I follow what you mean by "the
>>> scheduler private per-pcpu area"... Do you perhaps mean making it a
>>> member of `struct csched_pcpu'?
>>
>> Yes, that's what I would suggest.
>>
> Ok then, functionally, that is going to be exactly the same thing as
> where it is right now, i.e., a set of global per_cpu() variables. It is
> possible that your solution brings some cache/locality benefits,
> although that will very much depend on the specific case, architecture,
> workload, etc.

The space is only allocated if sched_credit is responsible for the pcpu.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:40:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbfL-0000i1-I4; Thu, 20 Dec 2012 08:39:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1TlbfK-0000hu-BW
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:39:42 +0000
Received: from [85.158.143.99:45601] by server-2.bemta-4.messagelabs.com id
	59/F9-30861-DCEC2D05; Thu, 20 Dec 2012 08:39:41 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1355992780!30156448!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDE5MTcxNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11881 invoked from network); 20 Dec 2012 08:39:40 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 08:39:40 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:Message-ID:Date:From:Organization:
	User-Agent:MIME-Version:To:CC:Subject:References:
	In-Reply-To:Content-Type:Content-Transfer-Encoding;
	b=SHqPlBFm69TfHHC6+MkfBbolT5wxvqc3xW8+MbXL5zteZ+sEcbB7EHWM
	MLZas5LaiBJ4o44VfumIykWRMEwuLHtVbNwl+ZWU7zKJTmJwZeB7eCRJ7
	l7V/lDUVv5VBNcI3pJTIEg1XolW9QWmZiMyBtwn1BSRH4y6MbMIp5IvEt
	Gx7IMEzs3/MIc5E6oCwWyQkVDccV6SnaIcukeVJlYqh7CZKzUXBhnmquj
	4EVd3DqamLnLp+wZf9ROso4buI87q;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1355992781; x=1387528781;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=0/l6amK0BosO6EF2MJiWkXVK42mYxybvIHOIPyXD4TY=;
	b=E2Gs4/vKhPIhdWqzXZRxS0dSv7OdQz1oqVJpw39R2R0SnDpHIRFhkqCr
	ESeF8ld1mc8ftEfw+pI+EeKSqF91VqUkgV8QmiRmT3W3EIFsnlrxF2mAH
	oI8Msn0Qn9fnS2IRCM4w09ycE91ixAW9ppOfoyBdIwRQCBUSBVTu+KPAu
	jGxABTShC26JXcwrgbExxqkO/gbRGiswBt72HuU69ZQCaSkGhIebssj/K
	pxkbrQLMhu77E/vSqjp66XjONpUNL;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="111775381"
Received: from abgdgate40u.abg.fsc.net ([172.25.138.90])
	by dgate20u.abg.fsc.net with ESMTP; 20 Dec 2012 09:39:40 +0100
X-IronPort-AV: E=Sophos;i="4.84,322,1355094000"; d="scan'208";a="154235856"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate40u.abg.fsc.net with SMTP; 20 Dec 2012 09:39:40 +0100
Received: from [172.17.21.50] (verdon.osd.mch.fsc.net [172.17.21.50])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id D62F796C26C;
	Thu, 20 Dec 2012 09:39:39 +0100 (CET)
Message-ID: <50D2CECB.4030108@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:39:39 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>	<06d2f322a6319d8ba212.1355944039@Solace>	<50D2B3DE.70206@ts.fujitsu.com>	<1355991370.28419.15.camel@Abyss>
	<50D2CB96.4030509@ts.fujitsu.com> <1355992411.28419.25.camel@Abyss>
In-Reply-To: <1355992411.28419.25.camel@Abyss>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 20.12.2012 09:33, schrieb Dario Faggioli:
> On Thu, 2012-12-20 at 08:25 +0000, Juergen Gross wrote:
>>> BTW, can you be a little bit more specific about where you're suggesting
>>> to put it? I'm sorry but I'm not sure I figured out what you mean by "the
>>> scheduler private per-pcpu area"... Do you perhaps mean making it a
>>> member of `struct csched_pcpu' ?
>>
>> Yes, that's what I would suggest.
>>
> Ok then, functionally, that is going to be exactly the same thing as
> where it is right now, i.e., a set of global per_cpu() variables. It is
> possible for your solution to bring some cache/locality benefits,
> although that will very much depend on the specific case, architecture,
> workload, etc.

The space is only allocated if sched_credit is responsible for the pcpu.
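The allocation difference Juergen points at can be sketched in C. All structure, field, and function names below are illustrative stand-ins modeled on the discussion, not the actual Xen sources: a global per_cpu()-style variable reserves storage for every pcpu unconditionally, while a field inside the scheduler-private `struct csched_pcpu` only exists while sched_credit owns the pcpu.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for a CPU mask. */
typedef struct { unsigned long bits; } cpumask_t;

#define NR_CPUS 8

/* Global per_cpu()-style storage: exists for every pcpu, whether or
 * not the credit scheduler is responsible for it. */
static cpumask_t balance_mask_global[NR_CPUS];

/* Scheduler-private per-pcpu area: the mask lives inside
 * struct csched_pcpu, which is only allocated when sched_credit
 * takes over the pcpu (e.g. on cpupool assignment). */
struct csched_pcpu {
    /* ... other credit-scheduler per-pcpu state ... */
    cpumask_t balance_mask;   /* allocated together with the struct */
};

static struct csched_pcpu *csched_pcpu_ptrs[NR_CPUS];

/* Called when the credit scheduler becomes responsible for a pcpu. */
static struct csched_pcpu *csched_alloc_pdata(int cpu)
{
    struct csched_pcpu *spc = calloc(1, sizeof(*spc));
    csched_pcpu_ptrs[cpu] = spc;
    return spc;
}

/* Called when the pcpu is released from the credit scheduler. */
static void csched_free_pdata(int cpu)
{
    free(csched_pcpu_ptrs[cpu]);
    csched_pcpu_ptrs[cpu] = NULL;
    (void)balance_mask_global; /* silence unused-variable warnings */
}
```

Functionally the two placements behave the same, as Dario notes; the difference is that the embedded field's storage comes and goes with the scheduler's other per-pcpu state instead of being reserved for all pcpus.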


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:44:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:44:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbjO-0000tr-Fj; Thu, 20 Dec 2012 08:43:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlbjN-0000tl-V4
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:43:54 +0000
Received: from [85.158.143.35:50081] by server-1.bemta-4.messagelabs.com id
	4B/4C-28401-9CFC2D05; Thu, 20 Dec 2012 08:43:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1355992909!12709595!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyNjUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20820 invoked from network); 20 Dec 2012 08:41:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 08:41:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="271705"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 08:41:47 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 08:41:47 +0000
Message-ID: <1355992906.14417.58.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Date: Thu, 20 Dec 2012 08:41:46 +0000
In-Reply-To: <5dc2571ae5faef87977c.1355944043@Solace>
References: <patchbomb.1355944036@Solace>
	<5dc2571ae5faef87977c.1355944043@Solace>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>, Matt Wilson <msw@amazon.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 07 of 10 v2] libxl: optimize the calculation
 of how many VCPUs can run on a candidate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 19:07 +0000, Dario Faggioli wrote:
> This reduces the complexity of the overall algorithm, as it moves a 

What was/is the complexity before/after this change? ISTR it was O(n^2)
or something along those lines before.
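The kind of optimization being discussed can be illustrated with a toy model; this is not the actual libxl code (names, data layout, and the counting scheme are invented for illustration). Rescanning every domain for every placement candidate costs O(candidates x domains); tallying each domain once into per-node counters replaces the rescans with a single pass over the domains:

```c
#include <assert.h>

#define NR_NODES 4

/* affinity[d] is a bitmask of NUMA nodes domain d's vcpus may use;
 * vcpus[d] is how many vcpus domain d has. out[n] receives the number
 * of vcpus that can run on node n. */

/* O(candidates * domains): rescan all domains for every node. */
static void count_naive(const unsigned affinity[], const int vcpus[],
                        int ndoms, int out[NR_NODES])
{
    for (int n = 0; n < NR_NODES; n++) {
        out[n] = 0;
        for (int d = 0; d < ndoms; d++)
            if (affinity[d] & (1u << n))
                out[n] += vcpus[d];
    }
}

/* One pass over the domains (times a small, fixed node count):
 * each domain is tallied once into every node it can use. */
static void count_precomputed(const unsigned affinity[], const int vcpus[],
                              int ndoms, int out[NR_NODES])
{
    for (int n = 0; n < NR_NODES; n++)
        out[n] = 0;
    for (int d = 0; d < ndoms; d++)
        for (int n = 0; n < NR_NODES; n++)
            if (affinity[d] & (1u << n))
                out[n] += vcpus[d];
}
```

Both routines produce identical counts; only the loop nesting, and hence the asymptotic cost as the candidate set grows, differs.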

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:54:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbtF-000161-KO; Thu, 20 Dec 2012 08:54:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhizhang@smu.edu.sg>) id 1TlD1U-0005oP-Gm
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 06:20:56 +0000
Received: from [85.158.143.99:46088] by server-1.bemta-4.messagelabs.com id
	F3/6C-28401-7CC51D05; Wed, 19 Dec 2012 06:20:55 +0000
X-Env-Sender: zhizhang@smu.edu.sg
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355898052!30088870!1
X-Originating-IP: [202.161.41.196]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjE2MS40MS4xOTYgPT4gMTEwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15969 invoked from network); 19 Dec 2012 06:20:54 -0000
Received: from ieg01.smu.edu.sg (HELO ieg03.smu.edu.sg) (202.161.41.196)
	by server-15.tower-216.messagelabs.com with SMTP;
	19 Dec 2012 06:20:54 -0000
X-AuditID: caa129d8-b7fa06d000006de2-79-50d15cc31eb5
Received: from EXCHPA01.staff.smu.edu.sg (exchpa01.staff.smu.edu.sg
	[172.18.225.19]) by ieg03.smu.edu.sg (SMU SMTP Gateway) with SMTP id
	93.94.28130.3CC51D05; Wed, 19 Dec 2012 14:20:51 +0800 (MYT)
Received: from EXCHPD01.staff.smu.edu.sg ([172.18.225.41]) by
	EXCHPA01.staff.smu.edu.sg ([fe80::45e2:b6d4:8ef5:2e07%11]) with mapi id
	14.01.0218.012; Wed, 19 Dec 2012 14:20:50 +0800
From: ZHANG Zhi <zhizhang@smu.edu.sg>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: multi-core VMM
Thread-Index: Ac3dsPu12m9Ys7DcRuG63sLSqNmYaw==
Date: Wed, 19 Dec 2012 06:20:49 +0000
Message-ID: <373C28B006B97A4BA77CD51D8F89698356C517BE@exchpd01>
Accept-Language: en-SG, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.106.128.55]
MIME-Version: 1.0
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFlrIKsWRmVeSWpSXmKPExsWyRuihsO7hmIsBBseuC1os+biYxYHR4+ju
	30wBjFFcNimpOZllqUX6dglcGVO3XmIp6Jar2Lv+EWMD4xPJLkZODgkBE4mHO+YyQthiEhfu
	rWfrYuTiEBI4xihxYeNUZghnJ6PE2uZrYFVsAqoSzz/+ZgKxRQTMJbYu2QIWFxYQl1j19wUz
	RFxGYuuG3ywQtp7El6UP2EBsFqDepSeOgfXyCthLLP3wFKyXUUBWoufsNrB6ZqA5c6fNYoW4
	SEBiyZ7zzBC2qMTLx/+g4ooSv47ehKrPkLjae58NYqagxMmZT1gmMArNQjJqFpKyWUjKIOI6
	Egt2f2KDsLUlli18zQxjnznwmAlZfAEj+ypGgczUdANjveLcUr3UlFK94vRNjMBYOLVQ88YO
	xj03rQ4xCnAwKvHwKqy/ECDEmlhWXJl7iFGSg0lJlLc8/GKAEF9SfkplRmJxRnxRaU5q8SFG
	CQ5mJRFeXlWgHG9KYmVValE+TEqag0VJnHeJEVBKID2xJDU7NbUgtQgmK8PBoSTBOycSKCtY
	lJqeWpGWmVOCkGbi4AQZzgM0PDEaZHhxQWJucWY6RP4UoyrHsWl3nzIKseTl56VKifMeAikS
	ACnKKM2Dm/OKURzoHWHeIyBreIDpDm7CK6DhTEDDg/QugAwvSURISTUwymclmZz9+Vh4SZXn
	xdVix6WuXlJqjvmfum9ffOuXuZv9/cR+bKlxVecoXuzgaufz9+Bt1VyDNd+zWTenrAl74Kye
	NNun7R2vFe8pZRuxXZpH62d4Sb3UOvSIy6/Z6KVBi+wx9mnrDHRkTiiYbfpTXyG+9qXa+/9R
	zkkzNse8qfFtsVT64zdPiaU4I9FQi7moOBEApFd2FTQDAAA=
X-Mailman-Approved-At: Thu, 20 Dec 2012 08:54:04 +0000
Subject: [Xen-devel] multi-core VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2617097884478671985=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2617097884478671985==
Content-Language: zh-CN
Content-Type: multipart/alternative;
	boundary="_000_373C28B006B97A4BA77CD51D8F89698356C517BEexchpd01_"

--_000_373C28B006B97A4BA77CD51D8F89698356C517BEexchpd01_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi, list,
A VMM provides a VMCS for each VM. How does the VMM assign system
resources to each VM? For example, in a multi-core environment, how can
I run the VMM on a two-core Intel processor while forcing a VM to
execute only on a specific core when initializing the guest OS? That is
to say, how can I make the VM believe that there is only one physical
core in the machine?
What should I do? Is it possible to assign a core when the VMM starts a
new VM, after the VMM has booted?
   Thanks a lot!
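[Editor's note: in Xen this is usually done with vcpu pinning. A minimal
sketch of a domain configuration, assuming the guest gets one vcpu
pinned to physical core 2 (the core number is just an example):]

```
# xl domain configuration fragment: one vcpu, pinned to pcpu 2, so the
# guest sees a single core and only ever runs on that core.
vcpus = 1
cpus = "2"
```

Pinning can also be changed after the domain has started, e.g.
`xl vcpu-pin <domain> 0 2` to pin the domain's vcpu 0 to pcpu 2.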


--_000_373C28B006B97A4BA77CD51D8F89698356C517BEexchpd01_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
	{font-family:\5B8B\4F53;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:\5B8B\4F53;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:"\@\5B8B\4F53";
	panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0cm;
	margin-bottom:.0001pt;
	text-align:justify;
	text-justify:inter-ideograph;
	font-size:10.5pt;
	font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-family:"Calibri","sans-serif";}
/* Page Definitions */
@page WordSection1
	{size:612.0pt 792.0pt;
	margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"ZH-CN" link=3D"blue" vlink=3D"purple" style=3D"text-justify-t=
rim:punctuation">
<div class=3D"WordSection1">
<p class=3D"MsoNormal"><span lang=3D"EN-US">Hi, list,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-indent:15.75pt"><span lang=3D"EN-US">A=
 VMM provides a VMCS for each VM. How does the VMM assign system resources =
for each VM? For example, in a multi-core environment, how can I enable the=
 VMM to run, say, on a two-core intel&#8217;s
 processor while I am able to force a VM to execute only on a specific core=
 upon initializing the Guest OS ? that is to say, how I can assure the VM t=
o believe that there is only one physical core running on this machine?<o:p=
></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-indent:15.75pt"><span lang=3D"EN-US">W=
hat should I do? Is it possible to assign a core when VMM starts a new VM a=
fter the VMM is booted ?<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">&nbsp;&nbsp; Thanks a lot! <o:p=
></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p>&nbsp;</o:p></span></p>
</div>
</body>
</html>

--_000_373C28B006B97A4BA77CD51D8F89698356C517BEexchpd01_--


--===============2617097884478671985==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2617097884478671985==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 08:54:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbtG-00016F-Pj; Thu, 20 Dec 2012 08:54:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james-xen@dingwall.me.uk>) id 1TlHNT-0004QS-WF
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:59:56 +0000
Received: from [85.158.143.99:15996] by server-2.bemta-4.messagelabs.com id
	DB/1C-30861-A2E91D05; Wed, 19 Dec 2012 10:59:54 +0000
X-Env-Sender: james-xen@dingwall.me.uk
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355914583!30132040!1
X-Originating-IP: [81.103.221.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_EXCESS_QP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14841 invoked from network); 19 Dec 2012 10:56:23 -0000
Received: from mtaout04-winn.ispmail.ntl.com (HELO
	mtaout04-winn.ispmail.ntl.com) (81.103.221.52)
	by server-15.tower-216.messagelabs.com with SMTP;
	19 Dec 2012 10:56:23 -0000
Received: from know-smtpout-1.server.virginmedia.net ([62.254.123.4])
	by mtaout04-winn.ispmail.ntl.com
	(InterMail vM.7.08.04.00 201-2186-134-20080326) with ESMTP id
	<20121219105613.TOFZ18581.mtaout04-winn.ispmail.ntl.com@know-smtpout-1.server.virginmedia.net>
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 10:56:13 +0000
Received: from [82.32.104.97] (helo=dingwall.me.uk)
	by know-smtpout-1.server.virginmedia.net with esmtp (Exim 4.63)
	(envelope-from <james-xen@dingwall.me.uk>) id 1TlHJV-0002hp-Vr
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 10:55:50 +0000
Received: (qmail 9856 invoked from network); 19 Dec 2012 10:55:49 -0000
Received: from apache0.xen.dingwall.me.uk (HELO
	webmail.private.dingwall.me.uk) (192.168.1.35)
	by mail0.xen.dingwall.me.uk with SMTP; 19 Dec 2012 10:55:49 -0000
MIME-Version: 1.0
Date: Wed, 19 Dec 2012 10:55:49 +0000
From: James Dingwall <james-xen@dingwall.me.uk>
To: <xen-devel@lists.xen.org>
In-Reply-To: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
References: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
Message-ID: <a4a53469917d35c93def9ef45527e277@imap.dingwall.me.uk>
X-Sender: james-xen@dingwall.me.uk
User-Agent: Roundcube Webmail/0.8.3
X-Cloudmark-Analysis: v=1.1 cv=GaEGOwq9FwezmTggA+b6yC6zDZF2HYaK6RN/tSqdnVA=
	c=1 sm=0 a=2-O05olGrMMA:10 a=AIb0U6W00TUA:10 a=jPJDawAOAc8A:10
	a=IkcTkHD0fZMA:10 a=mLnsDVdbAAAA:8 a=SsxfYb5wepEwjwS_qFgA:9
	a=QEXdDO2ut3YA:10 a=HpAAvcLHHh0Zw7uRqdWCyQ==:117
X-Mailman-Approved-At: Thu, 20 Dec 2012 08:54:04 +0000
Subject: Re: [Xen-devel]
 =?utf-8?q?kernel_log_flooded_with=3A_xen=5Fballoon=3A?=
 =?utf-8?q?_reserve=5Fadditional=5Fmemory=3A_add=5Fmemory=28=29_failed=3A_?=
 =?utf-8?q?-17?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-12-19 08:47, James Dingwall wrote:
> Hi,
>
> I have encountered an apparently benign error on two systems where
> the dom0 kernel log is flooded with messages like:
>
> [52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff] 
> cannot be added
> [52482.163860] xen_balloon: reserve_additional_memory: add_memory() 
> failed: -17
>
> The first line is from drivers/xen/xen-balloon.c, the second from
> mm/memory_hotplug.c
>
> The trigger for the messages seems to be the first time a Xen guest
> is shut down.  I have seen this with both a vanilla 3.6.7 kernel and
> a 3.5.0-18 kernel built from Ubuntu sources.  The Xen version is
> 4.2.0.  It is not clear why the dom0 kernel is trying to balloon up:
> the Xen command line specifies a fixed dom0 memory allocation,
> noselfballooning is specified for the kernel, and ballooning is also
> disabled in xend-config.sxp / xl.conf (one system uses xm, the other
> xl).
>
> xen command line:
> placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1 
> dom0_mem=max:6144m
>
> kernel command line:
> root=/dev/loop0 ro console=tty1 console=hvc0 earlyprintk=xen
> nomodeset noselfballooning
>
> Examining /proc/iomem shows that the dom0 memory allocation is
> actually 64 KiB short of 6144 MiB:
>
> cat /proc/iomem | grep System\ RAM
> 00010000-0009bfff : System RAM      [573440 bytes]
> 00100000-cb2dffff : System RAM      [3407740928 bytes]
> 100000000-1b4d83fff : System RAM    [3034071040 bytes]
>
> Total system RAM: 6442385408 bytes; 6 x 2^30 - 6442385408 = 65536
>
> The memory range indicated in the log message is "Unusable memory" in
> /proc/iomem:
> 1b4d84000-82fffffff : Unusable memory
>
> Another point of interest is that we have multiple "identical"
> hardware platforms (Dell T320) for the system running the 3.5.0-18
> kernel but only see this error on a slightly more recent system.
> Older systems show in /proc/iomem that all memory is System RAM.
>
> 100000000-82fffffff : System RAM  [older system BIOS 1.0]
>
> 100000000-1b4d83fff : System RAM  [newer system BIOS 1.3]
> 1b4d84000-82fffffff : Unusable memory
>
> The BIOS revision changed between the old and new systems, so I was
> wondering whether there is a whitelist which affects the impact of
> the kernel option:
> CONFIG_X86_RESERVE_LOW=64
> This is only a guess, since the amount of memory reserved is
> equivalent to the shortfall calculated above.  If this is the right
> area, perhaps the dom0 calculation of its memory entitlement needs to
> be taught not to try to hotplug the missing 64k when it has been
> reserved.
>
> If any other information would be useful then please let me know.

With some further investigation we have determined that the BIOS 
version does not seem to be a factor; the key point is actually the 
Xen command line.  The reason we had max: specified is that without it 
we could not boot the kernel/xen combination on an AMD platform.  I 
will do some further testing to see what dom0_mem=6144m,max:6144m, as 
suggested at http://wiki.xen.org/wiki/Xen_Best_Practices, gets us.

placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1 
dom0_mem=max:6144m
results in /proc/iomem having an unusable range, and top reports 
6083900k of memory in dom0.

placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1 
dom0_mem=6144m
gives no unusable range; top reports 5605976k of memory in dom0 and 
there are no log messages.
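The /proc/iomem accounting in this thread can be automated.  The sketch
below is a hypothetical helper (not part of any Xen or kernel tooling):
it totals the "System RAM" ranges and computes the shortfall against the
requested dom0_mem.  The sample ranges are the ones quoted earlier, with
the bracketed byte annotations stripped.

```python
# Hypothetical helper: total the "System RAM" ranges reported by
# /proc/iomem and compare them against the requested dom0_mem.
def system_ram_bytes(iomem_lines):
    total = 0
    for line in iomem_lines:
        rng, _, desc = line.partition(" : ")
        if desc.strip().startswith("System RAM"):
            start, end = (int(x, 16) for x in rng.strip().split("-"))
            total += end - start + 1  # /proc/iomem ranges are inclusive
    return total

sample = [
    "00010000-0009bfff : System RAM",
    "00100000-cb2dffff : System RAM",
    "100000000-1b4d83fff : System RAM",
]

dom0_mem = 6144 * 1024 * 1024          # dom0_mem=max:6144m
total = system_ram_bytes(sample)
print(total, dom0_mem - total)         # 6442385408 65536
```

On the dom0_mem=max:6144m configuration this reproduces the 64 KiB
(65536 byte) shortfall reported above.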


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:54:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlbtG-000168-4y; Thu, 20 Dec 2012 08:54:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james-xen@dingwall.me.uk>) id 1TlFJe-0001r0-RL
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:47:51 +0000
Received: from [85.158.143.99:48371] by server-2.bemta-4.messagelabs.com id
	26/CC-30861-63F71D05; Wed, 19 Dec 2012 08:47:50 +0000
X-Env-Sender: james-xen@dingwall.me.uk
X-Msg-Ref: server-5.tower-216.messagelabs.com!1355906869!29153271!1
X-Originating-IP: [81.103.221.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_EXCESS_QP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11369 invoked from network); 19 Dec 2012 08:47:49 -0000
Received: from mtaout03-winn.ispmail.ntl.com (HELO
	mtaout03-winn.ispmail.ntl.com) (81.103.221.49)
	by server-5.tower-216.messagelabs.com with SMTP;
	19 Dec 2012 08:47:49 -0000
Received: from know-smtpout-1.server.virginmedia.net ([62.254.123.3])
	by mtaout03-winn.ispmail.ntl.com
	(InterMail vM.7.08.04.00 201-2186-134-20080326) with ESMTP id
	<20121219084749.HSNO1579.mtaout03-winn.ispmail.ntl.com@know-smtpout-1.server.virginmedia.net>
	for <xen-devel@lists.xen.org>; Wed, 19 Dec 2012 08:47:49 +0000
Received: from [82.32.104.97] (helo=dingwall.me.uk)
	by know-smtpout-1.server.virginmedia.net with esmtp (Exim 4.63)
	(envelope-from <james-xen@dingwall.me.uk>) id 1TlFJC-0005tA-FZ
	for xen-devel@lists.xen.org; Wed, 19 Dec 2012 08:47:22 +0000
Received: (qmail 21451 invoked from network); 19 Dec 2012 08:47:22 -0000
Received: from apache0.xen.dingwall.me.uk (HELO
	webmail.private.dingwall.me.uk) (192.168.1.35)
	by mail0.xen.dingwall.me.uk with SMTP; 19 Dec 2012 08:47:22 -0000
MIME-Version: 1.0
Date: Wed, 19 Dec 2012 08:47:22 +0000
From: James Dingwall <james-xen@dingwall.me.uk>
To: <xen-devel@lists.xen.org>
Message-ID: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
X-Sender: james-xen@dingwall.me.uk
User-Agent: Roundcube Webmail/0.8.3
X-Cloudmark-Analysis: v=1.1 cv=GaEGOwq9FwezmTggA+b6yC6zDZF2HYaK6RN/tSqdnVA=
	c=1 sm=0 a=2-O05olGrMMA:10 a=3qZhIyWemNUA:10 a=jPJDawAOAc8A:10
	a=IkcTkHD0fZMA:10 a=qLKWQHS2zW8hgOzHqOgA:9 a=QEXdDO2ut3YA:10
	a=HpAAvcLHHh0Zw7uRqdWCyQ==:117
X-Mailman-Approved-At: Thu, 20 Dec 2012 08:54:04 +0000
Subject: [Xen-devel] =?utf-8?q?kernel_log_flooded_with=3A_xen=5Fballoon=3A?=
 =?utf-8?q?_reserve=5Fadditional=5Fmemory=3A_add=5Fmemory=28=29_failed=3A_?=
 =?utf-8?q?-17?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I have encountered an apparently benign error on two systems where the 
dom0 kernel log is flooded with messages like:

[52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff] cannot 
be added
[52482.163860] xen_balloon: reserve_additional_memory: add_memory() 
failed: -17

The first line is from drivers/xen/xen-balloon.c, the second from 
mm/memory_hotplug.c

The trigger for the messages seems to be the first time a Xen guest is 
shut down.  I have seen this with both a vanilla 3.6.7 kernel and a 
3.5.0-18 kernel built from Ubuntu sources.  The Xen version is 4.2.0.  
It is not clear why the dom0 kernel is trying to balloon up: the Xen 
command line specifies a fixed dom0 memory allocation, noselfballooning 
is specified for the kernel, and ballooning is also disabled in 
xend-config.sxp / xl.conf (one system uses xm, the other xl).

xen command line:
placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1 
dom0_mem=max:6144m

kernel command line:
root=/dev/loop0 ro console=tty1 console=hvc0 earlyprintk=xen nomodeset 
noselfballooning

Examining /proc/iomem shows that the dom0 memory allocation is 
actually 64 KiB short of 6144 MiB:

cat /proc/iomem | grep System\ RAM
00010000-0009bfff : System RAM      [573440 bytes]
00100000-cb2dffff : System RAM      [3407740928 bytes]
100000000-1b4d83fff : System RAM    [3034071040 bytes]

Total system RAM: 6442385408 bytes; 6 x 2^30 - 6442385408 = 65536

The memory range indicated in the log message is "Unusable memory" in 
/proc/iomem:
1b4d84000-82fffffff : Unusable memory
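The arithmetic above can be sanity-checked directly, using the range
sizes from this report (the link to CONFIG_X86_RESERVE_LOW is only the
guess discussed below):

```python
# Quick sanity check of the figures in this report: the three
# System RAM range sizes sum to 6442385408 bytes, which is 65536
# bytes (64 KiB) short of the requested 6144 MiB.
sizes = [573440, 3407740928, 3034071040]   # System RAM range sizes, bytes
total = sum(sizes)
shortfall = 6 * 2**30 - total              # requested 6144 MiB minus actual
print(total, shortfall)                    # 6442385408 65536
```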

Another point of interest is that we have multiple "identical" hardware 
platforms (Dell T320) for the system running the 3.5.0-18 kernel but 
only see this error on a slightly more recent system.  Older systems 
show in /proc/iomem that all memory is System RAM.

100000000-82fffffff : System RAM  [older system BIOS 1.0]

100000000-1b4d83fff : System RAM  [newer system BIOS 1.3]
1b4d84000-82fffffff : Unusable memory

The BIOS revision changed between the old and new systems, so I was 
wondering whether there is a whitelist which affects the impact of the 
kernel option:
CONFIG_X86_RESERVE_LOW=64
This is only a guess, since the amount of memory reserved is equivalent 
to the shortfall calculated above.  If this is the right area, perhaps 
the dom0 calculation of its memory entitlement needs to be taught not 
to try to hotplug the missing 64k when it has been reserved.

If any other information would be useful then please let me know.

Thanks,
James


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:56:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlbvd-0001MY-DM; Thu, 20 Dec 2012 08:56:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tlbvb-0001MC-8s
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:56:31 +0000
Received: from [85.158.139.211:37175] by server-7.bemta-5.messagelabs.com id
	56/8E-08009-CB2D2D05; Thu, 20 Dec 2012 08:56:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1355993779!21320783!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11973 invoked from network); 20 Dec 2012 08:56:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 08:56:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 08:58:28 +0000
Message-Id: <50D2E0C102000078000B1B1C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 08:56:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1354799322-6000-1-git-send-email-dongxiao.xu@intel.com>
	<20121207140852.GC3140@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FEBE741@SHSMSX102.ccr.corp.intel.com>
	<20121211170653.GG9347@localhost.localdomain>
	<40776A41FC278F40B59438AD47D147A90FEBFB64@SHSMSX102.ccr.corp.intel.com>
	<50C85E9F02000078000AFD65@nat28.tlf.novell.com>
	<20121219200904.GG15037@phenom.dumpdata.com>
	<40776A41FC278F40B59438AD47D147A90FECC3EC@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FECC3EC@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/swiotlb: Exchange to contiguous memory
 for map_sg hook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 02:23, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
> Sorry, maybe I am still not describing this issue clearly.

No, at least I understood you the way you re-describe below.

> Take the libata case as an example: the static DMA buffer
> (dev->link->ap->sector_buf; Data Structure B in the diagram below) is
> located in the following layout:
> 
> -------------------------------------Page boundary
> <Data Structure A>
> <Data Structure B>
> -------------------------------------Page boundary
> <Data Structure B (cross page)>
> <Data Structure C>
> -------------------------------------Page boundary
> 
> Where Structure B is our DMA target.
> 
> For Data Structure B we are not worried about simultaneous access; a
> lock or sync function will take care of it.
> What we are not sure about is reads/writes of A and C from another
> processor: we will be memory-copying the pages, and at the same time
> another CPU may access A/C.

The question is whether what libata does here is valid in the first
place - filling an SG list entry with something that crosses a page
boundary and is not a compound page.
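For illustration only (plain Python, not kernel code): the
page-crossing condition in question is a simple check on the buffer's
offset and length, assuming 4 KiB pages.

```python
# Does a buffer of `length` bytes starting at `offset` span more than
# one 4 KiB page?  This is the situation being questioned for libata's
# sector_buf SG entry.
PAGE_SIZE = 4096

def crosses_page_boundary(offset, length):
    first_page = offset // PAGE_SIZE
    last_page = (offset + length - 1) // PAGE_SIZE
    return first_page != last_page

# A 512-byte sector buffer that starts 256 bytes before a page boundary
# spills into the next page, like Data Structure B in the layout above:
print(crosses_page_boundary(4096 - 256, 512))   # True
print(crosses_page_boundary(0, 512))            # False
```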

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 08:59:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 08:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlbxr-0001YU-V3; Thu, 20 Dec 2012 08:58:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tlbxq-0001YO-PF
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 08:58:50 +0000
Received: from [85.158.139.211:7515] by server-4.bemta-5.messagelabs.com id
	FE/BB-14693-943D2D05; Thu, 20 Dec 2012 08:58:49 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1355993924!20485623!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5739 invoked from network); 20 Dec 2012 08:58:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 08:58:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="272110"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 08:58:44 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 08:58:44 +0000
Message-ID: <1355993922.28419.29.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 20 Dec 2012 09:58:42 +0100
In-Reply-To: <50D2CECB.4030108@ts.fujitsu.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>	<50D2B3DE.70206@ts.fujitsu.com>
	<1355991370.28419.15.camel@Abyss> <50D2CB96.4030509@ts.fujitsu.com>
	<1355992411.28419.25.camel@Abyss> <50D2CECB.4030108@ts.fujitsu.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6543412365262352577=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6543412365262352577==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-Pg9yqbX3DYQuLwIebycu"

--=-Pg9yqbX3DYQuLwIebycu
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 08:39 +0000, Juergen Gross wrote:
> > Ok then, functionally, that is going to be exactly the same thing as
> > where it is right now, i.e., a set of global per_cpu() variables. It is
> > possible for your solution to bring some cache/locality benefits,
> > although that will very much depend on single cases, architecture,
> > workload, etc.
> 
> The space is only allocated if sched_credit is responsible for the pcpu.
> 
Right, that's true too.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-Pg9yqbX3DYQuLwIebycu
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDS00IACgkQk4XaBE3IOsRWlQCglOZXjuy3MTGYaxqk9pW1WFfX
l8gAoIgMXzKaO+M3ZeBefZ0Ulz1BxcXm
=JC7e
-----END PGP SIGNATURE-----

--=-Pg9yqbX3DYQuLwIebycu--


--===============6543412365262352577==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6543412365262352577==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 09:10:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlc8y-0001w0-Gg; Thu, 20 Dec 2012 09:10:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tlc8w-0001vv-TT
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:10:19 +0000
Received: from [193.109.254.147:53962] by server-12.bemta-14.messagelabs.com
	id CC/B7-06523-AF5D2D05; Thu, 20 Dec 2012 09:10:18 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1355994502!10747034!1
X-Originating-IP: [209.85.223.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22122 invoked from network); 20 Dec 2012 09:08:24 -0000
Received: from mail-ie0-f172.google.com (HELO mail-ie0-f172.google.com)
	(209.85.223.172)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 09:08:24 -0000
Received: by mail-ie0-f172.google.com with SMTP id c13so4232942ieb.31
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 01:08:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=UCERLF145V8VDIk7U7QPq5zDgFRsT3o1uJjc6ei1ghE=;
	b=LJtSwspsCoae/Wshuz8Sy8zayHL2stwi7d4Zdbn1IdAPBIildzmDDQqFGjDw7GFOhA
	rolNJZwyPGF8pOVq8CLfKoLVU2N6mpVi37Yz8oLEbnxXvJdTXfJkZAFrRdixLDQqQLw3
	TEZFugCH3Upaji3hIQ3NJ761VVzogJqv0PS/VV4fcw+aJg06X7TImaw0tvHR4uM7Nl0r
	YfLEfmiV6BWhNzG6UAaVNlS+iLWPb7abvFlQF/sX9BF5BQscwTJv1+cR1H0OdsIN7Y5i
	5zKs6f8ru+HvtQuxZZQlt4n8AMT/1qEyyEVntfCqgOs3d4bv6GHutg80cGJbIubbi8Fp
	gcww==
MIME-Version: 1.0
Received: by 10.50.196.164 with SMTP id in4mr4637462igc.86.1355994501747; Thu,
	20 Dec 2012 01:08:21 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 01:08:21 -0800 (PST)
In-Reply-To: <373C28B006B97A4BA77CD51D8F89698356C517BE@exchpd01>
References: <373C28B006B97A4BA77CD51D8F89698356C517BE@exchpd01>
Date: Thu, 20 Dec 2012 17:08:21 +0800
X-Google-Sender-Auth: t5PYLUszOldqD8xbObAM71uA6V8
Message-ID: <CAKhsbWbvSNiCrDHuGpO=HgAXW0v-NX4_Ng+0JvTGPvf7eOFH2w@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: ZHANG Zhi <zhizhang@smu.edu.sg>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] multi-core VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I think that's possible; you can check the cpupool-related commands in
the xl manpage.
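For concreteness, a sketch of the xl invocations meant here (command names as in the xl manpage; the pool and domain names are invented, and exact configuration syntax varies across Xen versions):

```shell
# Carve out a cpupool containing only pcpu 2, then confine a guest to it:
xl cpupool-cpu-remove Pool-0 2     # detach pcpu 2 from the default pool
xl cpupool-create name="pool1"     # hypothetical inline pool definition
xl cpupool-cpu-add pool1 2         # hand pcpu 2 to the new pool
xl cpupool-migrate guest1 pool1    # move domain "guest1" into it

# Alternatively, just pin a vcpu within the current pool:
xl vcpu-pin guest1 0 2             # pin guest1's vcpu 0 to pcpu 2
```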

But I have a similar question: in a CPU like the i7, the 8 logical cores
are not fully equivalent. They are hyper-threads layered on top of CMP,
and the two HT siblings of a single physical core contend for pipeline
resources. The Linux kernel has scheduler optimizations specifically for
this case. My question is: how does the Xen hypervisor handle this? Will
such HT topology information be available to the VM? I guess the
behaviour may differ depending on whether you bind VCPUs to host cores
or not.

Also, is there any difference between dom0 and domU in this respect?

Thanks,
Timothy

On Wed, Dec 19, 2012 at 2:20 PM, ZHANG Zhi <zhizhang@smu.edu.sg> wrote:
> Hi, list,
>
> A VMM provides a VMCS for each VM. How does the VMM assign system
> resources for each VM? For example, in a multi-core environment, how can
> I enable the VMM to run, say, on a two-core Intel processor while being
> able to force a VM to execute only on a specific core upon initializing
> the guest OS? That is to say, how can I assure the VM believes that
> there is only one physical core running on this machine?
>
> What should I do? Is it possible to assign a core when the VMM starts a
> new VM after it is booted?
>
>    Thanks a lot!
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:20:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcIL-00028F-No; Thu, 20 Dec 2012 09:20:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlcIK-00028A-21
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:20:00 +0000
Received: from [85.158.143.35:20538] by server-1.bemta-4.messagelabs.com id
	93/20-28401-F38D2D05; Thu, 20 Dec 2012 09:19:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1355995110!11996864!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27790 invoked from network); 20 Dec 2012 09:18:30 -0000
Received: from unknown (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 09:18:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 09:17:24 +0000
Message-Id: <50D2E5B302000078000B1B39@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 09:17:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<9bcd5859fb9c7fa29dee.1355944037@Solace>
In-Reply-To: <9bcd5859fb9c7fa29dee.1355944037@Solace>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 01 of 10 v2] xen,
 libxc: rename xenctl_cpumap to xenctl_bitmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 20:07, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> More specifically:
>  1. replaces xenctl_cpumap with xenctl_bitmap
>  2. provides bitmap_to_xenctl_bitmap and the reverse;
>  3. re-implements cpumask_to_xenctl_bitmap with
>     bitmap_to_xenctl_bitmap and the reverse;
> 
> Other than #3, no functional changes. Interface only slightly
> affected.
> 
> This is in preparation of introducing NUMA node-affinity maps.

This (at least) lacks an adjustment to
tools/tests/mce-test/tools/xen-mceinj.c afaict.

Jan.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcJJ-0002C4-56; Thu, 20 Dec 2012 09:21:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlcJI-0002Bu-5v
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:21:00 +0000
Received: from [85.158.143.35:27317] by server-3.bemta-4.messagelabs.com id
	AF/6D-18211-B78D2D05; Thu, 20 Dec 2012 09:20:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1355995104!16354344!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6900 invoked from network); 20 Dec 2012 09:18:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 09:18:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 09:18:24 +0000
Message-Id: <50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 09:18:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<4c57c8f1e7ad20c15b8c.1355944038@Solace>
In-Reply-To: <4c57c8f1e7ad20c15b8c.1355944038@Solace>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 02 of 10 v2] xen,
 libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.12.12 at 20:07, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> --- a/xen/include/xen/nodemask.h
> +++ b/xen/include/xen/nodemask.h
> @@ -298,6 +298,53 @@ static inline int __nodemask_parse(const
>  }
>  #endif
>  
> +/*
> + * nodemask_var_t: struct nodemask for stack usage.
> + *
> + * See definition of cpumask_var_t in include/xen/cpumask.h.
> + */
> +#if MAX_NUMNODES > 2 * BITS_PER_LONG

Is that case reasonable to expect?

Jan

> +#include <xen/xmalloc.h>
> +
> +typedef nodemask_t *nodemask_var_t;
> +
> +#define nr_nodemask_bits (BITS_TO_LONGS(MAX_NUMNODES) * BITS_PER_LONG)
> +
> +static inline bool_t alloc_nodemask_var(nodemask_var_t *mask)
> +{
> +	*(void **)mask = _xmalloc(nr_nodemask_bits / 8, sizeof(long));
> +	return *mask != NULL;
> +}
> +
> +static inline bool_t zalloc_nodemask_var(nodemask_var_t *mask)
> +{
> +	*(void **)mask = _xzalloc(nr_nodemask_bits / 8, sizeof(long));
> +	return *mask != NULL;
> +}
> +
> +static inline void free_nodemask_var(nodemask_var_t mask)
> +{
> +	xfree(mask);
> +}
> +#else
> +typedef nodemask_t nodemask_var_t;
> +
> +static inline bool_t alloc_nodemask_var(nodemask_var_t *mask)
> +{
> +	return 1;
> +}
> +
> +static inline bool_t zalloc_nodemask_var(nodemask_var_t *mask)
> +{
> +	nodes_clear(*mask);
> +	return 1;
> +}
> +
> +static inline void free_nodemask_var(nodemask_var_t mask)
> +{
> +}
> +#endif
> +
>  #if MAX_NUMNODES > 1
>  #define for_each_node_mask(node, mask)			\
>  	for ((node) = first_node(mask);			\
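[Editorial note: the quoted patch switches between two representations of a node mask depending on whether it fits in a couple of machine words. A minimal standalone sketch of that pattern follows; `MAX_NODES`, `MASK_LONGS`, and a hard-coded `BITS_PER_LONG` of 64 (assuming an LP64 target) are illustrative stand-ins, not the Xen identifiers.]

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins: Xen's real code uses MAX_NUMNODES and its own
 * BITS_PER_LONG; 64 here assumes an LP64 target. */
#define MAX_NODES     64
#define BITS_PER_LONG 64
#define MASK_LONGS ((MAX_NODES + BITS_PER_LONG - 1) / BITS_PER_LONG)

typedef struct { unsigned long bits[MASK_LONGS]; } nodemask_t;

#if MAX_NODES > 2 * BITS_PER_LONG
/* Large masks: too big for comfortable on-stack use, so the "var"
 * type is a pointer and allocation really happens on the heap. */
typedef nodemask_t *nodemask_var_t;

static bool zalloc_nodemask_var(nodemask_var_t *mask)
{
    *mask = calloc(1, sizeof(nodemask_t)); /* zeroed allocation */
    return *mask != NULL;
}

static void free_nodemask_var(nodemask_var_t mask)
{
    free(mask);
}
#else
/* Small masks: a plain struct lives on the stack, "allocation"
 * merely clears the bits and cannot fail. */
typedef nodemask_t nodemask_var_t;

static bool zalloc_nodemask_var(nodemask_var_t *mask)
{
    memset(mask->bits, 0, sizeof(mask->bits));
    return true;
}

static void free_nodemask_var(nodemask_var_t mask)
{
    (void)mask; /* nothing to free in the stack-based variant */
}
#endif
```

Callers always write the alloc/free pair; the preprocessor decides at build time whether that pair does real work, which is the point of Jan's question about whether the large case is worth carrying.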




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:23:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcLJ-0002LA-Lq; Thu, 20 Dec 2012 09:23:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlcLI-0002Kx-1o
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:23:04 +0000
Received: from [85.158.139.211:17794] by server-2.bemta-5.messagelabs.com id
	29/54-16162-7F8D2D05; Thu, 20 Dec 2012 09:23:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1355995382!20433361!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4610 invoked from network); 20 Dec 2012 09:23:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 09:23:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 09:23:01 +0000
Message-Id: <50D2E70202000078000B1B51@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 09:22:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D2B3DE.70206@ts.fujitsu.com>
	<1355991370.28419.15.camel@Abyss> <50D2CB96.4030509@ts.fujitsu.com>
	<1355992411.28419.25.camel@Abyss>
In-Reply-To: <1355992411.28419.25.camel@Abyss>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 09:33, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> It was Jan that suggested/asked to pull them out of the stack, and I
> guess it's George's taste that we value most when hacking sched_*.c, so
> let's see if they want to comment on this and then I'll decide. :-)

Yes, I'm with Juergen here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:24:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:24:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcMR-0002S9-4f; Thu, 20 Dec 2012 09:24:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlcMP-0002Rt-EP
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:24:13 +0000
Received: from [85.158.137.99:31423] by server-13.bemta-3.messagelabs.com id
	1D/EA-00465-C39D2D05; Thu, 20 Dec 2012 09:24:12 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1355995451!17901655!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4037 invoked from network); 20 Dec 2012 09:24:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 09:24:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="272778"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 09:24:11 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 09:24:11 +0000
Message-ID: <1355995449.28419.40.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 20 Dec 2012 10:24:09 +0100
In-Reply-To: <1355992906.14417.58.camel@dagon.hellion.org.uk>
References: <patchbomb.1355944036@Solace>
	<5dc2571ae5faef87977c.1355944043@Solace>
	<1355992906.14417.58.camel@dagon.hellion.org.uk>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>, Matt Wilson <msw@amazon.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 07 of 10 v2] libxl: optimize the calculation
 of how many VCPUs can run on a candidate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5230301457008424114=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5230301457008424114==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-RTIWRx0K03yPaj+ZiJKG"

--=-RTIWRx0K03yPaj+ZiJKG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 08:41 +0000, Ian Campbell wrote:
> On Wed, 2012-12-19 at 19:07 +0000, Dario Faggioli wrote:
> > This reduces the complexity of the overall algorithm, as it moves a
>
> What was/is the complexity before/after this change? ISTR it was O(n^2)
> or something along those lines before.
>
Yes and no. Let me try to explain. Counting the number of vCPUs that can
run on (a set of) node(s) was and remains O(n_domains*n_domain_vcpus),
so, yes, sort of quadratic.

The upper bound of the number of candidates evaluated by the placement
algorithm is exponential in the number of NUMA nodes: O(2^n_nodes).

Before this change, we counted the number of vCPUs runnable on each
candidate during each step, so the overall complexity was:

O(2^n_nodes) * O(n_domains*n_domain_vcpus)

In this change I count the number of vCPUs runnable on each candidate
only once, and that happens outside the candidate generation loop, so
the overall complexity is:

O(n_domains*n_domain_vcpus) + O(2^n_nodes) = O(2^n_nodes)

Did I answer your question?

Dario
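[Editorial note: the hoisting described above can be sketched in a few lines of C. All names and the example data below are invented for illustration; this is not libxl code.]

```c
#define NR_NODES 4

/* Invented example data: vCPUs each of two domains can run on each node. */
static const int dom_vcpus_on_node[2][NR_NODES] = {
    { 2, 0, 1, 0 },
    { 0, 3, 0, 1 },
};

/* Step 1: one pass over all domains and their vCPUs, done once before
 * candidate generation -- O(n_domains * n_domain_vcpus). */
static void count_vcpus(int per_node[NR_NODES])
{
    for (int n = 0; n < NR_NODES; n++) {
        per_node[n] = 0;
        for (int d = 0; d < 2; d++)
            per_node[n] += dom_vcpus_on_node[d][n];
    }
}

/* Step 2: each of the up-to-2^n_nodes candidates (a bitmap of nodes)
 * is now scored by a cheap sum of precomputed counts, instead of
 * rescanning every domain inside the exponential loop. */
static int candidate_vcpus(const int per_node[NR_NODES], unsigned node_set)
{
    int total = 0;
    for (int n = 0; n < NR_NODES; n++)
        if (node_set & (1u << n))
            total += per_node[n];
    return total;
}
```

Moving the expensive count out of the candidate loop is what turns the product of the two costs into their sum.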

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-RTIWRx0K03yPaj+ZiJKG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDS2TkACgkQk4XaBE3IOsSkggCfWp3ZRDxhqXuExeiFytPX8eHN
SasAnjD2tFJLCh0P3tAxd3V7wjZMw36/
=qT7w
-----END PGP SIGNATURE-----

--=-RTIWRx0K03yPaj+ZiJKG--


--===============5230301457008424114==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5230301457008424114==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 09:35:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcX1-0002sQ-Mh; Thu, 20 Dec 2012 09:35:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlcX0-0002sL-7g
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:35:10 +0000
Received: from [85.158.143.99:7459] by server-2.bemta-4.messagelabs.com id
	7A/08-30861-DCBD2D05; Thu, 20 Dec 2012 09:35:09 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355996108!30281679!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24774 invoked from network); 20 Dec 2012 09:35:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 09:35:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="273203"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 09:35:08 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 09:35:07 +0000
Message-ID: <1355996105.28419.42.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 20 Dec 2012 10:35:05 +0100
In-Reply-To: <50D2E5B302000078000B1B39@nat28.tlf.novell.com>
References: <patchbomb.1355944036@Solace>
	<9bcd5859fb9c7fa29dee.1355944037@Solace>
	<50D2E5B302000078000B1B39@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 01 of 10 v2] xen,
 libxc: rename xenctl_cpumap to xenctl_bitmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7660995878527961928=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7660995878527961928==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-jft8GzqrbvQkg+sM/mUf"

--=-jft8GzqrbvQkg+sM/mUf
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 09:17 +0000, Jan Beulich wrote:
> >>> On 19.12.12 at 20:07, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> > More specifically:
> >  1. replaces xenctl_cpumap with xenctl_bitmap
> >  2. provides bitmap_to_xenctl_bitmap and the reverse;
> >  3. re-implements cpumask_to_xenctl_bitmap with
> >     bitmap_to_xenctl_bitmap and the reverse;
> >
> > Other than #3, no functional changes. Interface only slightly
> > affected.
> >
> > This is in preparation for introducing NUMA node-affinity maps.
>
> This (at least) lacks an adjustment to
> tools/tests/mce-test/tools/xen-mceinj.c afaict.
>
Indeed. I really think I was building those, but it looks like I wasn't.
Sorry for that, and thanks for pointing it out; will fix.

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-jft8GzqrbvQkg+sM/mUf
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDS28kACgkQk4XaBE3IOsQ0ZACdHBrNKuZuwQjbmLeEIHVon1DL
JhwAnRJfUTw/71vjKcy8RrGuO6xSh8vR
=HXkl
-----END PGP SIGNATURE-----

--=-jft8GzqrbvQkg+sM/mUf--


--===============7660995878527961928==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7660995878527961928==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 09:41:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:41:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcdE-00035N-IG; Thu, 20 Dec 2012 09:41:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlcdD-00035I-9m
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:41:35 +0000
Received: from [85.158.143.99:58278] by server-1.bemta-4.messagelabs.com id
	BE/41-28401-E4DD2D05; Thu, 20 Dec 2012 09:41:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1355996492!29266174!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23193 invoked from network); 20 Dec 2012 09:41:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 09:41:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 09:39:32 +0000
Message-Id: <50D2EAE302000078000B1B86@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 09:39:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-7-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1356018231-26440-7-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, jun.nakajima@intel.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 06/10] nEPT: Sync PDPTR fields if L2
 guest in PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 16:43, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> For a PAE L2 guest, the GUEST_PDPTR fields need to be synced on each
> virtual vmentry.
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |    9 ++++++++-
>  1 files changed, 8 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 2ae6f6a..1f7de7a 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -826,7 +826,14 @@ static void load_shadow_guest_state(struct vcpu *v)
>      vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
>      vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
>  
> -    /* TODO: PDPTRs for nested ept */
> +    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
> +                    (v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);

When I commented on the previous version's whitespace issue in one
of the patches, I really expected you to check all of the patches.
Yet here we have tabs again, the placement of the opening brace
isn't right either, and neither is the indentation of the continued
if() condition.

Please, before re-submitting, make sure you look through all of
the patches for further coding style issues.
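For reference, the hunk rewritten to the style being asked for would look roughly like this: four-space indentation instead of tabs, the opening brace on its own line, and the continued condition aligned under the first operand. The definitions here are hypothetical stubs standing in for the real Xen symbols, and the condition is mirrored from the patch as posted:

```c
/* Stub stand-ins for the real Xen definitions, for illustration only. */
#define EFER_LMA (1u << 10)
enum { GUEST_PDPTR0, GUEST_PDPTR1, GUEST_PDPTR2, GUEST_PDPTR3 };

static unsigned int fields_synced;

static void vvmcs_to_shadow(void *vvmcs, unsigned int field)
{
    (void)vvmcs;
    (void)field;
    fields_synced++;  /* count the fields copied to the shadow VMCS */
}

/* The hunk in Xen coding style: spaces (not tabs), opening brace on
 * its own line, continuation aligned with the first operand. */
static void sync_pdptrs(void *vvmcs, int ept_enabled, int pae_enabled,
                        unsigned int guest_efer)
{
    if ( ept_enabled && pae_enabled &&
         (guest_efer & EFER_LMA) )
    {
        vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
        vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
        vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
        vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
    }
}
```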

Jan

> +    }
> +
>      /* TODO: CR3 target control */
>  }
>  
> -- 
> 1.7.1




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:55:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcqE-0003Fu-UG; Thu, 20 Dec 2012 09:55:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlcqD-0003Fp-Bj
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:55:01 +0000
Received: from [85.158.143.99:30112] by server-1.bemta-4.messagelabs.com id
	FE/E5-28401-470E2D05; Thu, 20 Dec 2012 09:55:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1355997291!30285904!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22238 invoked from network); 20 Dec 2012 09:54:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 09:54:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 09:54:51 +0000
Message-Id: <50D2EE7A02000078000B1B91@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 09:54:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-9-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1356018231-26440-9-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, jun.nakajima@intel.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 08/10] nEPT: handle invept instruction
 from L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 16:43, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2573,10 +2573,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>              update_guest_eip();
>          break;
>  
> +    case EXIT_REASON_INVEPT:
> +        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
> +            update_guest_eip();
> +        break;
> +

I realize that you're just copying code written the same way
elsewhere, but: what if (here and elsewhere) X86EMUL_OKAY is not
returned (e.g. in the non-nested case)? Without the nested VMX code,
all of these would have ended up at the default case (crashing the
guest). IIUC, the correct action would be to inject an exception, at
least when X86EMUL_EXCEPTION is returned here; whether that's done
here or by the callee (perhaps better, as only the callee can know
_what_ exception to inject) is another thing to decide.

Also, taking nvmx_handle_vmclear() as an example, I see that it
produces exceptions in most cases, but I think all of the related
code needs auditing to ensure things are handled consistently _and_
completely. Constructs like

    if ( ... != X86EMUL_OKAY )
        return X86EMUL_EXCEPTION;

are definitely not okay, as further X86EMUL_* values can occur; if
you know that only those two can ever occur at a given place, then
ASSERT() so, making that clear to the reader without them having to
follow all code paths.
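The exhaustive handling being asked for can be sketched as a mini-model; the enum values and action strings below are illustrative stand-ins, not the real Xen definitions:

```c
/* Illustrative stand-ins for the x86 emulation return codes. */
enum x86emul_rc {
    X86EMUL_OKAY,
    X86EMUL_EXCEPTION,
    X86EMUL_RETRY,
    X86EMUL_UNHANDLEABLE,
};

/* Handle each outcome explicitly instead of collapsing everything
 * that is not OKAY into "raise an exception". */
static const char *vmexit_action(enum x86emul_rc rc)
{
    switch ( rc )
    {
    case X86EMUL_OKAY:
        return "advance-eip";
    case X86EMUL_EXCEPTION:
        return "inject-exception"; /* callee knows which exception */
    case X86EMUL_RETRY:
        return "retry";
    default:
        return "crash-guest";      /* genuinely unexpected value */
    }
}
```

Where a given call site can provably see only two of the values, replacing the extra cases with an ASSERT() documents that invariant instead of silently mapping unknown codes to an exception.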

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:55:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcqI-0003GS-J0; Thu, 20 Dec 2012 09:55:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlcqH-0003Fp-9C
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:55:05 +0000
Received: from [85.158.143.99:5520] by server-1.bemta-4.messagelabs.com id
	2A/06-28401-970E2D05; Thu, 20 Dec 2012 09:55:05 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355997303!22786049!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22287 invoked from network); 20 Dec 2012 09:55:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 09:55:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="273805"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 09:55:03 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 09:55:03 +0000
Message-ID: <1355997301.28419.45.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 20 Dec 2012 10:55:01 +0100
In-Reply-To: <50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
References: <patchbomb.1355944036@Solace>
	<4c57c8f1e7ad20c15b8c.1355944038@Solace>
	<50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 02 of 10 v2] xen,
 libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0759417941166137646=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0759417941166137646==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-LAFR/o8UxzcOD2Oq4bnM"

--=-LAFR/o8UxzcOD2Oq4bnM
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 09:18 +0000, Jan Beulich wrote:
> > +/*
> > + * nodemask_var_t: struct nodemask for stack usage.
> > + *
> > + * See definition of cpumask_var_t in include/xen/cpumask.h.
> > + */
> > +#if MAX_NUMNODES > 2 * BITS_PER_LONG
>
> Is that case reasonable to expect?
>
I really don't think so. Here I was just keeping the cpumask and
nodemask types aligned, but you're right: thinking more about it,
this is completely unnecessary. I'll kill this hunk.
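For context, the hunk being dropped mirrored the cpumask_var_t trick: embed the mask directly when it is small, and fall back to a pointer (plus a separate allocation) only when it would be too big for the stack. A hypothetical sketch, with constants chosen purely for illustration:

```c
#define BITS_PER_LONG 64          /* assumed 64-bit host */
#define MAX_NUMNODES  64          /* hypothetical configuration */

typedef struct {
    unsigned long bits[(MAX_NUMNODES + BITS_PER_LONG - 1) / BITS_PER_LONG];
} nodemask_t;

/* Small masks live directly on the stack; only a mask wider than
 * 2 * BITS_PER_LONG would justify the indirection -- the case being
 * judged unrealistic for node counts, hence dropping the hunk. */
#if MAX_NUMNODES > 2 * BITS_PER_LONG
typedef nodemask_t *nodemask_var_t;   /* would need an allocator */
#else
typedef nodemask_t nodemask_var_t;    /* plain stack object */
#endif
```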

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-LAFR/o8UxzcOD2Oq4bnM
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDS4HUACgkQk4XaBE3IOsQrIgCfaHXDGOlxqzG3nbGoOhhWNmIF
c74An2StF4iLjnvVbk34F/nxVvX+2d1u
=6U7C
-----END PGP SIGNATURE-----

--=-LAFR/o8UxzcOD2Oq4bnM--


--===============0759417941166137646==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0759417941166137646==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 09:55:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcqI-0003GS-J0; Thu, 20 Dec 2012 09:55:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlcqH-0003Fp-9C
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:55:05 +0000
Received: from [85.158.143.99:5520] by server-1.bemta-4.messagelabs.com id
	2A/06-28401-970E2D05; Thu, 20 Dec 2012 09:55:05 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1355997303!22786049!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22287 invoked from network); 20 Dec 2012 09:55:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 09:55:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; d="asc'?scan'208";a="273805"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 09:55:03 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 09:55:03 +0000
Message-ID: <1355997301.28419.45.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 20 Dec 2012 10:55:01 +0100
In-Reply-To: <50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
References: <patchbomb.1355944036@Solace>
	<4c57c8f1e7ad20c15b8c.1355944038@Solace>
	<50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 02 of 10 v2] xen,
 libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0759417941166137646=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0759417941166137646==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-LAFR/o8UxzcOD2Oq4bnM"

--=-LAFR/o8UxzcOD2Oq4bnM
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 09:18 +0000, Jan Beulich wrote:
> > +/*
> > + * nodemask_var_t: struct nodemask for stack usage.
> > + *
> > + * See definition of cpumask_var_t in include/xen/cpumask.h.
> > + */
> > +#if MAX_NUMNODES > 2 * BITS_PER_LONG
>
> Is that case reasonable to expect?
>
I really don't think so. Here I was just keeping the cpumask and
nodemask types aligned. But you're right: thinking about it some more,
this is completely unnecessary. I'll kill this hunk.
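For readers following along, the cpumask_var_t pattern this hunk was mirroring can be sketched as below. This is a minimal, self-contained illustration, not Xen's actual code; the constants and the `alloc_nodemask_var` helper are assumptions made for the sketch:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative constants: Xen derives BITS_PER_LONG from the target
 * architecture and configures MAX_NUMNODES; these values are assumed. */
#define BITS_PER_LONG 64
#define MAX_NUMNODES  64
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

typedef struct { unsigned long bits[BITS_TO_LONGS(MAX_NUMNODES)]; } nodemask_t;

#if MAX_NUMNODES > 2 * BITS_PER_LONG
/* Large mask: too big to put on the stack, so the "var" type is a
 * pointer and must be heap-allocated before use. */
typedef nodemask_t *nodemask_var_t;
static int alloc_nodemask_var(nodemask_var_t *m)
{
    return (*m = calloc(1, sizeof(nodemask_t))) != NULL;
}
#else
/* Small mask: a plain struct is cheap enough to live on the stack. */
typedef nodemask_t nodemask_var_t;
static int alloc_nodemask_var(nodemask_var_t *m)
{
    memset(m, 0, sizeof(*m));
    return 1;
}
#endif
```

The point behind the question is that MAX_NUMNODES never gets anywhere near 2 * BITS_PER_LONG in practice, so the pointer branch would be dead weight for nodemasks.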

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-LAFR/o8UxzcOD2Oq4bnM
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDS4HUACgkQk4XaBE3IOsQrIgCfaHXDGOlxqzG3nbGoOhhWNmIF
c74An2StF4iLjnvVbk34F/nxVvX+2d1u
=6U7C
-----END PGP SIGNATURE-----

--=-LAFR/o8UxzcOD2Oq4bnM--


--===============0759417941166137646==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0759417941166137646==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 09:57:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcsK-0003Sf-4o; Thu, 20 Dec 2012 09:57:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlcsI-0003SS-QD
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 09:57:10 +0000
Received: from [85.158.143.99:43492] by server-1.bemta-4.messagelabs.com id
	BB/09-28401-6F0E2D05; Thu, 20 Dec 2012 09:57:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1355997393!18036804!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16069 invoked from network); 20 Dec 2012 09:56:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 09:56:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 09:56:33 +0000
Message-Id: <50D2EEDF02000078000B1B94@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 09:56:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-10-git-send-email-xiantao.zhang@intel.com>
In-Reply-To: <1356018231-26440-10-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, tim@xen.org, eddie.dong@intel.com, jun.nakajima@intel.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 09/10] nVMX: virtualize VPID capability
 to nested VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 16:43, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> +int nvmx_handle_invvpid(struct cpu_user_regs *regs)
> +{
> +    struct vmx_inst_decoded decode;
> +    unsigned long vpid;
> +    u64 inv_type;
> +
> +    if ( !cpu_has_vmx_vpid )
> +        return X86EMUL_EXCEPTION;

Same problem here - you mustn't return X86EMUL_EXCEPTION
without also raising an exception.

Jan
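(For illustration, the pattern being requested — queue the fault for the guest before reporting it to the emulator — can be sketched as below. The types and helpers here are simplified stand-ins, not Xen's real API; the actual code would use hvm_inject_hw_exception() and the real X86EMUL_* return codes:)

```c
#include <assert.h>

/* Simplified stand-ins for Xen's emulator return codes and the
 * exception-injection helper. */
enum { X86EMUL_OKAY, X86EMUL_EXCEPTION };
#define TRAP_invalid_op 6

static int pending_trap = -1;

static void inject_hw_exception(int trap)
{
    pending_trap = trap;   /* queue the exception for the guest */
}

/* The pattern the review asks for: raise the fault, then report it. */
static int handle_invvpid(int cpu_has_vmx_vpid)
{
    if (!cpu_has_vmx_vpid) {
        inject_hw_exception(TRAP_invalid_op);  /* raise #UD first... */
        return X86EMUL_EXCEPTION;              /* ...then report it */
    }
    return X86EMUL_OKAY;
}
```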

> +
> +    if ( decode_vmx_inst(regs, &decode, &vpid, 0) != X86EMUL_OKAY )
> +        return X86EMUL_EXCEPTION;
> +
> +    inv_type = reg_read(regs, decode.reg2);
> +    gdprintk(XENLOG_DEBUG,"inv_type:%ld, vpid:%lx\n", inv_type, vpid);
> +
> +    switch ( inv_type ) {
> +        /* Just invalidate all tlb entries for all types! */
> +        case INVVPID_INDIVIDUAL_ADDR:
> +        case INVVPID_SINGLE_CONTEXT:
> +        case INVVPID_ALL_CONTEXT:
> +            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
> +            break;
> +        default:
> +            return X86EMUL_EXCEPTION;
> +    }
> +    vmreturn(regs, VMSUCCEED);
> +
> +    return X86EMUL_OKAY;
> +}
> +
>  /*
>   * Capability reporting
>   */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 09:57:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlcsd-0003Un-Hw; Thu, 20 Dec 2012 09:57:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1Tlcsb-0003UV-R6
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 09:57:30 +0000
Received: from [85.158.138.51:44181] by server-16.bemta-3.messagelabs.com id
	00/E7-27634-AF0E2D05; Thu, 20 Dec 2012 09:57:14 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355997430!21804163!1
X-Originating-IP: [220.181.15.8]
X-SpamReason: No, hits=1.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjggPT4gOTYwNw==\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjggPT4gOTYwNw==\n,HTML_50_60,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2733 invoked from network); 20 Dec 2012 09:57:11 -0000
Received: from m15-8.126.com (HELO m15-8.126.com) (220.181.15.8)
	by server-12.tower-174.messagelabs.com with SMTP;
	20 Dec 2012 09:57:11 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=P9ew/ZkZ+GDmDi9I7GzJPKcGHQS0wK7UM+KI
	FJvlRFE=; b=LK1IlclqzNk28LJrH1S0+TXpQcKg8oJ9YzZXMqiSa5IeYB3dFiav
	uiw5WBLTofkBvh4Y/c61Su+hByCAgznwkp1SOznYNSskWuPJJkNDQW6fBGLcC5Lb
	3T5/x5n2lk2GQyycJcrm0oNKaY/+d6T4iUYEr3kDyUXV/TmubEXCkHk=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr8
	(Coremail) ; Thu, 20 Dec 2012 17:57:08 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Thu, 20 Dec 2012 17:57:08 +0800 (CST)
From: hxkhust <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: yJGap2Zvb3Rlcl9odG09NjYwMDo4MQ==
MIME-Version: 1.0
Message-ID: <6e071a35.22ea4.13bb7bebb8a.Coremail.hxkhust@126.com>
X-CM-TRANSID: CMqowGD5oUP14NJQ38cWAA--.6039W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbikRiLBUX9jg1gywABsW
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] Two pressing questions about comments in the Xen code
 (help needed!)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4635322709385631691=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4635322709385631691==
Content-Type: multipart/alternative; 
	boundary="----=_Part_538764_430907640.1355997428618"

------=_Part_538764_430907640.1355997428618
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

I'm doing some research on the Xen 4.1 platform. I have two questions, which I have tried to phrase so that they are easy for the experts here to answer.
Question 1:
Here is some code from the file xen-4.1.2/tools/ioemu-qemu-xen/block-raw-posix.c:
....
 629 static BlockDriverAIOCB *raw_aio_read(BlockDriverState *bs,
 630         int64_t sector_num, uint8_t *buf, int nb_sectors,
 631         BlockDriverCompletionFunc *cb, void *opaque)
 632 { 
 633     RawAIOCB *acb;  
 634   
 635     /*
 636      * If O_DIRECT is used and the buffer is not aligned fall back
 637      * to synchronous IO.
 638      */
 639     BDRVRawState *s = bs->opaque;
 640 
 641     if (unlikely(s->aligned_buf != NULL && ((uintptr_t) buf % 512))) {
 642         QEMUBH *bh; 
 643         acb = qemu_aio_get(bs, cb, opaque);
 644         acb->ret = raw_pread(bs, 512 * sector_num, buf, 512 * nb_sectors);
 645         bh = qemu_bh_new(raw_aio_em_cb, acb);
 646         qemu_bh_schedule(bh);     
 647         return &acb->common;      
 648     }
 649                     
 650     acb = raw_aio_setup(bs, sector_num, buf, nb_sectors, cb, opaque);
 651     if (!acb)
 652         return NULL;
 653     if (qemu_paio_read(&acb->aiocb) < 0) {
 654         raw_aio_remove(acb);      
 655         return NULL;
 656     }
 657     return &acb->common;
 658 } 
......
The comment in this code says that if O_DIRECT is used and the buffer is not aligned, the function falls back to synchronous I/O. What does that mean? I want this function to return only once the exact data is present in the buffer that the buf parameter points to; in other words, I want to turn the asynchronous read into a synchronous one. I have already enabled O_DIRECT in xenstore.c, and that flag stays in effect all the way down to the function above.
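(Editorial note on what the quoted comment means: O_DIRECT requires the user buffer to be sector-aligned, so when the caller's buffer fails the `(uintptr_t) buf % 512` test the driver cannot hand it to the AIO path; it performs the read synchronously via raw_pread() and only signals completion through a bottom half. A minimal sketch of that alignment test — the helper name is invented for illustration, it is not QEMU's API:)

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Mirrors the "(uintptr_t) buf % 512" check in raw_aio_read():
 * O_DIRECT transfers must be aligned to the 512-byte sector size. */
static int buf_is_sector_aligned(const void *buf)
{
    return ((uintptr_t)buf % 512) == 0;
}
```

A caller that wants to stay on the fast path would allocate its buffer with something like `aligned_alloc(512, size)` so the check passes.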


Question 2:
Here is some code from /xen-4.1.2/tools/block-qcow.c:
...
594 static void qcow_aio_read_cb(void *opaque, int ret)
595 {
596     QCowAIOCB *acb = opaque;
597     BlockDriverState *bs = acb->common.bs;
598     BDRVQcowState *s = bs->opaque;
599     int index_in_cluster;
600 
601     acb->hd_aiocb = NULL;
602     if (ret < 0) {   
603     fail:            
604         acb->common.cb(acb->common.opaque, ret);
605         qemu_aio_release(acb);    
606         return;      
607     }
.........................
642     if (!acb->cluster_offset) {
643         if (bs->backing_hd) {
644             /* read from the base image */
645             acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
646                 acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
647             if (acb->hd_aiocb == NULL)
648                 goto fail;
649         } else {
650             /* Note: in this case, no need to wait */
651             memset(acb->buf, 0, 512 * acb->n);
652             goto redo;
653         }
654     } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
655         /* add AIO support for compressed blocks ? */
656         if (decompress_cluster(s, acb->cluster_offset) < 0)
657             goto fail;
658         memcpy(acb->buf,
659                s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
660         goto redo;
661     } else {
662         if ((acb->cluster_offset & 511) != 0) {
663             ret = -EIO;
664             goto fail;
665         }
666         acb->hd_aiocb = bdrv_aio_read(s->hd,
667                             (acb->cluster_offset >> 9) + index_in_cluster,
668                             acb->buf, acb->n, qcow_aio_read_cb, acb);
The comment on line 650 says "in this case, no need to wait".
What does that mean? What exactly would the code otherwise be waiting for?
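(Editorial note making the comment's logic concrete: a qcow cluster_offset of zero marks an unallocated cluster, and with no backing image the data for such a hole is defined to be all zeroes, so the handler can just memset() the buffer and continue immediately — there is no disk request whose completion it would have to wait for. A rough model, with names invented for illustration rather than taken from the real qcow code:)

```c
#include <assert.h>
#include <string.h>

/* Models the branch at line 650 of block-qcow.c: an unallocated
 * cluster with no backing image reads as zeroes, so no disk request
 * is issued and nothing asynchronous has to complete. */
static int read_unallocated_cluster(char *buf, size_t n_sectors,
                                    int has_backing)
{
    if (has_backing)
        return 0;                     /* would need an AIO read of the base image */
    memset(buf, 0, 512 * n_sectors);  /* hole: data is all zeroes */
    return 1;                         /* completed immediately, no wait */
}
```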


When I have some time, I will read the guide on how to ask smart questions that an expert sent me in an earlier mail.
Thanks.


A Newbie




------=_Part_538764_430907640.1355997428618--



--===============4635322709385631691==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4635322709385631691==--



From xen-devel-bounces@lists.xen.org Thu Dec 20 09:57:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 09:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlcsd-0003Un-Hw; Thu, 20 Dec 2012 09:57:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1Tlcsb-0003UV-R6
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 09:57:30 +0000
Received: from [85.158.138.51:44181] by server-16.bemta-3.messagelabs.com id
	00/E7-27634-AF0E2D05; Thu, 20 Dec 2012 09:57:14 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1355997430!21804163!1
X-Originating-IP: [220.181.15.8]
X-SpamReason: No, hits=1.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjggPT4gOTYwNw==\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjggPT4gOTYwNw==\n,HTML_50_60,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2733 invoked from network); 20 Dec 2012 09:57:11 -0000
Received: from m15-8.126.com (HELO m15-8.126.com) (220.181.15.8)
	by server-12.tower-174.messagelabs.com with SMTP;
	20 Dec 2012 09:57:11 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=P9ew/ZkZ+GDmDi9I7GzJPKcGHQS0wK7UM+KI
	FJvlRFE=; b=LK1IlclqzNk28LJrH1S0+TXpQcKg8oJ9YzZXMqiSa5IeYB3dFiav
	uiw5WBLTofkBvh4Y/c61Su+hByCAgznwkp1SOznYNSskWuPJJkNDQW6fBGLcC5Lb
	3T5/x5n2lk2GQyycJcrm0oNKaY/+d6T4iUYEr3kDyUXV/TmubEXCkHk=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr8
	(Coremail) ; Thu, 20 Dec 2012 17:57:08 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Thu, 20 Dec 2012 17:57:08 +0800 (CST)
From: hxkhust <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: yJGap2Zvb3Rlcl9odG09NjYwMDo4MQ==
MIME-Version: 1.0
Message-ID: <6e071a35.22ea4.13bb7bebb8a.Coremail.hxkhust@126.com>
X-CM-TRANSID: CMqowGD5oUP14NJQ38cWAA--.6039W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbikRiLBUX9jg1gywABsW
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!two pressing questions in the xen code notes(I'm
 dying!!!HELP!)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4635322709385631691=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4635322709385631691==
Content-Type: multipart/alternative; 
	boundary="----=_Part_538764_430907640.1355997428618"

------=_Part_538764_430907640.1355997428618
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

I'm doing some research on xen 4.1 platform.I have two question which I'm trying to make it convenient  for every expert to answer.
Question 1:
Here are some codes in the file xen-4.1.2/tools/ioemu-qemu-xen/block-raw-posix.c, 
....
 629 static BlockDriverAIOCB *raw_aio_read(BlockDriverState *bs,
 630         int64_t sector_num, uint8_t *buf, int nb_sectors,
 631         BlockDriverCompletionFunc *cb, void *opaque)
 632 { 
 633     RawAIOCB *acb;  
 634   
 635     /*
 636      * If O_DIRECT is used and the buffer is not aligned fall back
 637      * to synchronous IO.
 638      */
 639     BDRVRawState *s = bs->opaque;
 640 
 641     if (unlikely(s->aligned_buf != NULL && ((uintptr_t) buf % 512))) {
 642         QEMUBH *bh; 
 643         acb = qemu_aio_get(bs, cb, opaque);
 644         acb->ret = raw_pread(bs, 512 * sector_num, buf, 512 * nb_sectors);
 645         bh = qemu_bh_new(raw_aio_em_cb, acb);
 646         qemu_bh_schedule(bh);     
 647         return &acb->common;      
 648     }
 649                     
 650     acb = raw_aio_setup(bs, sector_num, buf, nb_sectors, cb, opaque);
 651     if (!acb)
 652         return NULL;
 653     if (qemu_paio_read(&acb->aiocb) < 0) {
 654         raw_aio_remove(acb);      
 655         return NULL;
 656     }
 657     return &acb->common;
 658 } 
......
In the notes between the codes , the sentence show that If O_DIRECT is used and the buffer is not aligned fall back  to synchronous IO. What does it mean?I just want to make these function returning with the exact data in its field which the parameter buf point to , that is to say, I want to make the asynchronous read into a synchronous one.I have make the  O_DIRECT used in file xenstore.c and the flags will be in effect along the way to this function above.


Question 2:
Here are some codes in /xen-4.1.2/tools/block-qcow.c
...
594 static void qcow_aio_read_cb(void *opaque, int ret)
595 {
596     QCowAIOCB *acb = opaque;
597     BlockDriverState *bs = acb->common.bs;
598     BDRVQcowState *s = bs->opaque;
599     int index_in_cluster;
600 
601     acb->hd_aiocb = NULL;
602     if (ret < 0) {   
603     fail:            
604         acb->common.cb(acb->common.opaque, ret);
605         qemu_aio_release(acb);    
606         return;      
607     }
.........................
642     if (!acb->cluster_offset) {
643         if (bs->backing_hd) {
644             /* read from the base image */
645             acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
646                 acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
647             if (acb->hd_aiocb == NULL)
648                 goto fail;
649         } else {
650             /* Note: in this case, no need to wait */
651             memset(acb->buf, 0, 512 * acb->n);
652             goto redo;
653         }
654     } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
655         /* add AIO support for compressed blocks ? */
656         if (decompress_cluster(s, acb->cluster_offset) < 0)
657             goto fail;
658         memcpy(acb->buf,
659                s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
660         goto redo;
661     } else {
662         if ((acb->cluster_offset & 511) != 0) {
663             ret = -EIO;
664             goto fail;
665         }
666         acb->hd_aiocb = bdrv_aio_read(s->hd,
667                             (acb->cluster_offset >> 9) + index_in_cluster,
668                             acb->buf, acb->n, qcow_aio_read_cb, acb);
In the code note on the line 650, it says that  "in this case, no need to wait".
what does it mean? Here the code just would wait for what?


If I have some time ,I will learn the whole direction about how to ask a smart question which a expert send to me in a earlier mail.
Thanks.


A Newbie




------=_Part_538764_430907640.1355997428618
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: 7bit

I'm doing some research on the Xen 4.1 platform. I have two questions, which I have tried to phrase so that they are convenient for the experts here to answer.

Question 1:
Here is some code from xen-4.1.2/tools/ioemu-qemu-xen/block-raw-posix.c:
....
 629 static BlockDriverAIOCB *raw_aio_read(BlockDriverState *bs,
 630         int64_t sector_num, uint8_t *buf, int nb_sectors,
 631         BlockDriverCompletionFunc *cb, void *opaque)
 632 {
 633     RawAIOCB *acb;
 634
 635     /*
 636      * If O_DIRECT is used and the buffer is not aligned fall back
 637      * to synchronous IO.
 638      */
 639     BDRVRawState *s = bs->opaque;
 640
 641     if (unlikely(s->aligned_buf != NULL && ((uintptr_t) buf % 512))) {
 642         QEMUBH *bh;
 643         acb = qemu_aio_get(bs, cb, opaque);
 644         acb->ret = raw_pread(bs, 512 * sector_num, buf, 512 * nb_sectors);
 645         bh = qemu_bh_new(raw_aio_em_cb, acb);
 646         qemu_bh_schedule(bh);
 647         return &acb->common;
 648     }
 649
 650     acb = raw_aio_setup(bs, sector_num, buf, nb_sectors, cb, opaque);
 651     if (!acb)
 652         return NULL;
 653     if (qemu_paio_read(&acb->aiocb) < 0) {
 654         raw_aio_remove(acb);
 655         return NULL;
 656     }
 657     return &acb->common;
 658 }
......
The comment in this code says: "If O_DIRECT is used and the buffer is not aligned fall back to synchronous IO." What does it mean? I want these functions to return with the requested data already present in the memory that the parameter buf points to; that is to say, I want to turn the asynchronous read into a synchronous one. I have enabled O_DIRECT in the file xenstore.c, and the flag stays in effect all the way down to the function above.

Question 2:
Here is some code from xen-4.1.2/tools/block-qcow.c:
...
594 static void qcow_aio_read_cb(void *opaque, int ret)
595 {
596     QCowAIOCB *acb = opaque;
597     BlockDriverState *bs = acb->common.bs;
598     BDRVQcowState *s = bs->opaque;
599     int index_in_cluster;
600
601     acb->hd_aiocb = NULL;
602     if (ret < 0) {
603     fail:
604         acb->common.cb(acb->common.opaque, ret);
605         qemu_aio_release(acb);
606         return;
607     }
.........................
642     if (!acb->cluster_offset) {
643         if (bs->backing_hd) {
644             /* read from the base image */
645             acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
646                 acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
647             if (acb->hd_aiocb == NULL)
648                 goto fail;
649         } else {
650             /* Note: in this case, no need to wait */
651             memset(acb->buf, 0, 512 * acb->n);
652             goto redo;
653         }
654     } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
655         /* add AIO support for compressed blocks ? */
656         if (decompress_cluster(s, acb->cluster_offset) < 0)
657             goto fail;
658         memcpy(acb->buf,
659                s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
660         goto redo;
661     } else {
662         if ((acb->cluster_offset & 511) != 0) {
663             ret = -EIO;
664             goto fail;
665         }
666         acb->hd_aiocb = bdrv_aio_read(s->hd,
667                         (acb->cluster_offset >> 9) + index_in_cluster,
668                         acb->buf, acb->n, qcow_aio_read_cb, acb);
The comment on line 650 says "in this case, no need to wait". What does it mean? What exactly would the code be waiting for here?

If I have some time, I will read the whole guide on how to ask a smart question that an expert sent me in an earlier mail.
Thanks.

A Newbie
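The fallback in the excerpt above hinges on a single predicate: with O_DIRECT, the kernel transfers data directly between the device and the user buffer, so the buffer address must be aligned to the sector size (512 bytes here); an unaligned buffer cannot be submitted asynchronously and forces the slow, blocking raw_pread() path. The check on line 641 can be sketched in isolation (the helper name is illustrative, not from qemu):

```c
#include <stdint.h>

#define SECTOR_SIZE 512

/* Mirrors the test on line 641: when O_DIRECT is in use, a buffer whose
 * address is not a multiple of the sector size cannot be handed to the
 * kernel for direct I/O, so the caller must fall back to a synchronous
 * (bounce-buffered) read. */
static int needs_sync_fallback(int using_o_direct, const void *buf)
{
    return using_o_direct && ((uintptr_t)buf % SECTOR_SIZE) != 0;
}
```

In userspace code, posix_memalign(&p, 512, len) is the usual way to obtain a buffer that passes this check.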
------=_Part_538764_430907640.1355997428618--



--===============4635322709385631691==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4635322709385631691==--



From xen-devel-bounces@lists.xen.org Thu Dec 20 10:03:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlcyJ-0003zE-Bn; Thu, 20 Dec 2012 10:03:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TlcyH-0003z9-4q
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 10:03:21 +0000
Received: from [85.158.137.99:59784] by server-14.bemta-3.messagelabs.com id
	AE/D5-27443-862E2D05; Thu, 20 Dec 2012 10:03:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1355997713!20272927!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11596 invoked from network); 20 Dec 2012 10:01:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 10:01:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="274049"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 10:01:53 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 10:01:53 +0000
Message-ID: <1355997710.26722.12.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Thu, 20 Dec 2012 10:01:50 +0000
In-Reply-To: <1355948414-7503-1-git-send-email-msw@amazon.com>
References: <1355948414-7503-1-git-send-email-msw@amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Annie Li <annie.li@oracle.com>, Steven Noonan <snoonan@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
 table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 20:20 +0000, Matt Wilson wrote:
> Commit 85ff6acb075a484780b3d763fdf41596d8fc0970 (xen/granttable: Grant
> tables V2 implementation) changed the GREFS_PER_GRANT_FRAME macro from
> a constant to a conditional expression. The expression depends on
> grant_table_version being appropriately set. Unfortunately, at init
> time grant_table_version will be 0. The GREFS_PER_GRANT_FRAME
> conditional expression checks for "grant_table_version == 1", and
> therefore returns the number of grant references per frame for v2.
> 
> This causes gnttab_init() to allocate fewer pages for gnttab_list, as
> a frame can hold only half as many v2 entries as v1 entries. After
> gnttab_resume() is called, grant_table_version is appropriately
> set. nr_init_grefs will then be miscalculated and gnttab_free_count
> will hold a value larger than the actual number of free gref entries.
> 
> If a guest is heavily utilizing improperly initialized v1 grant
> tables, memory corruption can occur.

Good catch!

[...]
> @@ -1080,10 +1081,9 @@ static void gnttab_request_version(void)
>  		panic("we need grant tables version 2, but only version 1 is available");
>  	} else {
>  		grant_table_version = 1;
> +		grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1);
>  		gnttab_interface = &gnttab_v1_ops;
>  	}
> -	printk(KERN_INFO "Grant tables using version %d layout.\n",
> -		grant_table_version);

You remove this here and re-add it at the gnttab_resume callsite, but I
don't see anything added at the gnttab_init callsite. It would be
useful to keep this print at start of day too.

Oh, I see, gnttab_init also calls gnttab_resume later so we get the
message from there, with gnttab_request_version getting called twice.

Can we avoid doing this twice by moving the tail of gnttab_resume
(everything except the gnttab_request_version) into a new gnttab_setup
and calling that from gnttab_resume and gnttab_init (instead of calling
resume)?

Ian.
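The miscalculation described in the commit message is simple arithmetic: a v1 grant entry is 8 bytes and a v2 entry is 16, so a 4096-byte frame holds 512 v1 entries but only 256 v2 entries, and with grant_table_version still 0 at init time the "== 1" test selects the smaller v2 count. A sketch with simplified entry layouts (treat the struct definitions as illustrative, not the exact Xen headers):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Simplified layouts: v1 entries are 8 bytes, v2 entries 16 bytes. */
struct grant_entry_v1 { uint16_t flags; uint16_t domid; uint32_t frame; };
struct grant_entry_v2 { uint16_t flags; uint16_t domid; uint32_t pad; uint64_t frame; };

/* The shape of the buggy macro: before gnttab_request_version() runs,
 * grant_table_version is 0, the "== 1" test fails, and the v2 (smaller)
 * count is returned, so gnttab_init() under-sizes gnttab_list for v1. */
static unsigned grefs_per_grant_frame(int grant_table_version)
{
    return grant_table_version == 1
        ? PAGE_SIZE / sizeof(struct grant_entry_v1)   /* 512 on 4K pages */
        : PAGE_SIZE / sizeof(struct grant_entry_v2);  /* 256 on 4K pages */
}
```

Matt's patch sidesteps the problem by replacing the macro with a variable (grefs_per_grant_frame) assigned at the same time as grant_table_version.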


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 10:05:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tld0Q-00044t-TK; Thu, 20 Dec 2012 10:05:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tld0Q-00044n-7i
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 10:05:34 +0000
Received: from [193.109.254.147:55755] by server-2.bemta-14.messagelabs.com id
	DF/C3-30744-DE2E2D05; Thu, 20 Dec 2012 10:05:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1355997931!10665524!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6341 invoked from network); 20 Dec 2012 10:05:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 10:05:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="274147"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 10:05:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 10:05:30 +0000
Message-ID: <1355997929.26722.17.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Thu, 20 Dec 2012 10:05:29 +0000
In-Reply-To: <20121218194348.GB29382@u109add4315675089e695.ant.amazon.com>
References: <20121204232305.GA5301@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13143D77@INHYMS111A.ca.com>
	<20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
	<20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
	<1355743598.14620.43.camel@zakaz.uk.xensource.com>
	<20121217200950.GA29382@u109add4315675089e695.ant.amazon.com>
	<1355824968.14620.143.camel@zakaz.uk.xensource.com>
	<20121218194348.GB29382@u109add4315675089e695.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Palagummi,
	Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-12-18 at 19:43 +0000, Matt Wilson wrote:
> On Tue, Dec 18, 2012 at 10:02:48AM +0000, Ian Campbell wrote:
> > On Mon, 2012-12-17 at 20:09 +0000, Matt Wilson wrote:
> > > On Mon, Dec 17, 2012 at 11:26:38AM +0000, Ian Campbell wrote:
> [...]
> > > > Do you mean the ring or the actual buffers?
> > > 
> > > Sorry, the actual buffers.
> > > 
> > > > The current code tries to coalesce multiple small frags/heads because it
> > > > is usually trivial but doesn't try too hard with multiple larger frags,
> > > > since they take up most of a page by themselves anyway. I suppose this
> > > > does waste a bit of buffer space and therefore could take more ring
> > > > slots, but it's not clear to me how much this matters in practice (it
> > > > might be tricky to measure this with any realistic workload).
> > > 
> > > In the case where we're consistently handling large heads (like when
> > > using an MTU value of 9000 for streaming traffic), we're wasting 1/3 of
> > > the available buffers.
> > 
> > Sorry if I missed this earlier in the thread, but how do we end up
> > wasting so much?
> 
> I see SKBs with:
>   skb_headlen(skb) == 8157
>   offset_in_page(skb->data) == 64
> 
> when handling long streaming ingress flows from ixgbe with MTU (on the
> NIC and both sides of the VIF) set to 9000. When all the SKBs making
> up the flow have the above property, xen-netback uses 3 pages instead
> of two. The first buffer gets 4032 bytes copied into it. The next
> buffer gets 4096 bytes copied into it. The final buffer gets 29 bytes
> copied into it. See this post in the archives for a more detailed
> walk through netbk_gop_frag_copy():
>   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html

Thanks. This certainly seems wrong for the head bit.

> What's the down side to making start_new_rx_buffer() always try to
> fill each buffer?

As we discussed earlier in the thread, it doubles the number of copy ops
per frag under some circumstances. My gut feeling is that this isn't
going to hurt, but that's just my gut.

It seems obviously right that the linear part of the SKB should always
fill entire buffers though. Perhaps the answer is to differentiate
between the skb->data and the frags?

Ian.
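The waste described above can be reproduced arithmetically: copying 8157 bytes that start at in-page offset 64, while preserving that offset in the first destination buffer, consumes three 4096-byte buffers (4032 + 4096 + 29), whereas packing every destination buffer full needs only two. A sketch of the two counting strategies (hypothetical helpers, not the netback code):

```c
#define PAGE 4096

/* Buffers consumed when the copy keeps the source's starting in-page
 * offset in the first destination buffer, as the discussion says
 * netbk_gop_frag_copy() effectively did for the linear head. */
static int buffers_preserving_offset(int offset, int len)
{
    int count = 0, room = PAGE - offset;  /* first buffer loses 'offset' bytes */
    while (len > 0) {
        int chunk = len < room ? len : room;
        len -= chunk;
        count++;
        room = PAGE;                      /* later buffers start at offset 0 */
    }
    return count;
}

/* Buffers consumed when every destination buffer is filled completely. */
static int buffers_packed(int len)
{
    return (len + PAGE - 1) / PAGE;
}
```

For the 8157-byte head at offset 64 from the thread, the first helper returns 3 and the second returns 2, which is the one-third overhead Matt reports.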


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 10:31:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TldPL-0004Qi-El; Thu, 20 Dec 2012 10:31:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TldPK-0004Qc-6C
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 10:31:18 +0000
Received: from [193.109.254.147:34977] by server-8.bemta-14.messagelabs.com id
	00/94-26341-5F8E2D05; Thu, 20 Dec 2012 10:31:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355999475!3612561!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22881 invoked from network); 20 Dec 2012 10:31:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 10:31:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 10:31:14 +0000
Message-Id: <50D2F6FF02000078000B1BC5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 10:31:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartFDCC72FF.2__="
Cc: Jens Axboe <axboe@kernel.dk>, Olaf Hering <olaf@aepfle.de>,
	linux-kernel@vger.kernel.org, xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] xen-blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=__PartFDCC72FF.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

"be->mode" is obtained from xenbus_read(), which does a kmalloc() for
the message body. The short string is never released, so do it along
with freeing "be" itself, and make sure the string isn't kept when
backend_changed() doesn't complete successfully (which made it
desirable to slightly re-structure that function, so that the error
cleanup can be done in one place).

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

---
 drivers/block/xen-blkback/xenbus.c |   49 ++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

--- 3.7/drivers/block/xen-blkback/xenbus.c
+++ 3.7-xen-blkback-mode-leak/drivers/block/xen-blkback/xenbus.c
@@ -366,6 +366,7 @@ static int xen_blkbk_remove(struct xenbu
 		be->blkif = NULL;
 	}
 
+	kfree(be->mode);
 	kfree(be);
 	dev_set_drvdata(&dev->dev, NULL);
 	return 0;
@@ -501,6 +502,7 @@ static void backend_changed(struct xenbu
 		= container_of(watch, struct backend_info, backend_watch);
 	struct xenbus_device *dev = be->dev;
 	int cdrom = 0;
+	unsigned long handle;
 	char *device_type;
 
 	DPRINTK("");
@@ -520,10 +522,10 @@ static void backend_changed(struct xenbu
 		return;
 	}
 
-	if ((be->major || be->minor) &&
-	    ((be->major != major) || (be->minor != minor))) {
-		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
-			be->major, be->minor, major, minor);
+	if (be->major | be->minor) {
+		if (be->major != major || be->minor != minor)
+			pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
+				be->major, be->minor, major, minor);
 		return;
 	}
 
@@ -541,36 +543,33 @@ static void backend_changed(struct xenbu
 		kfree(device_type);
 	}
 
-	if (be->major == 0 && be->minor == 0) {
-		/* Front end dir is a number, which is used as the handle. */
+	/* Front end dir is a number, which is used as the handle. */
+	err = strict_strtoul(strrchr(dev->otherend, '/') + 1, 0, &handle);
+	if (err)
+		return;
 
-		char *p = strrchr(dev->otherend, '/') + 1;
-		long handle;
-		err = strict_strtoul(p, 0, &handle);
-		if (err)
-			return;
+	be->major = major;
+	be->minor = minor;
 
-		be->major = major;
-		be->minor = minor;
-
-		err = xen_vbd_create(be->blkif, handle, major, minor,
-				 (NULL == strchr(be->mode, 'w')), cdrom);
-		if (err) {
-			be->major = 0;
-			be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating vbd structure");
-			return;
-		}
+	err = xen_vbd_create(be->blkif, handle, major, minor,
+			     !strchr(be->mode, 'w'), cdrom);
 
+	if (err)
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+	else {
 		err = xenvbd_sysfs_addif(dev);
 		if (err) {
 			xen_vbd_free(&be->blkif->vbd);
-			be->major = 0;
-			be->minor = 0;
 			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-			return;
 		}
+	}
 
+	if (err) {
+		kfree(be->mode);
+		be->mode = NULL;
+		be->major = 0;
+		be->minor = 0;
+	} else {
 		/* We're potentially connected now */
 		xen_update_blkif_status(be->blkif);
 	}




--=__PartFDCC72FF.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartFDCC72FF.2__=--


From xen-devel-bounces@lists.xen.org Thu Dec 20 10:31:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TldPL-0004Qi-El; Thu, 20 Dec 2012 10:31:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TldPK-0004Qc-6C
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 10:31:18 +0000
Received: from [193.109.254.147:34977] by server-8.bemta-14.messagelabs.com id
	00/94-26341-5F8E2D05; Thu, 20 Dec 2012 10:31:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1355999475!3612561!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22881 invoked from network); 20 Dec 2012 10:31:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 10:31:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 10:31:14 +0000
Message-Id: <50D2F6FF02000078000B1BC5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 10:31:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartFDCC72FF.2__="
Cc: Jens Axboe <axboe@kernel.dk>, Olaf Hering <olaf@aepfle.de>,
	linux-kernel@vger.kernel.org, xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] xen-blkback: do not leak mode property
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartFDCC72FF.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

"be->mode" is obtained from xenbus_read(), which does a kmalloc() for
the message body. The short string is never released, so do it along
with freeing "be" itself, and make sure the string isn't kept when
backend_changed() doesn't complete successfully (which made it
desirable to slightly re-structure that function, so that the error
cleanup can be done in one place).

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

---
 drivers/block/xen-blkback/xenbus.c |   49 ++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

--- 3.7/drivers/block/xen-blkback/xenbus.c
+++ 3.7-xen-blkback-mode-leak/drivers/block/xen-blkback/xenbus.c
@@ -366,6 +366,7 @@ static int xen_blkbk_remove(struct xenbu
 		be->blkif = NULL;
 	}
 
+	kfree(be->mode);
 	kfree(be);
 	dev_set_drvdata(&dev->dev, NULL);
 	return 0;
@@ -501,6 +502,7 @@ static void backend_changed(struct xenbu
 		= container_of(watch, struct backend_info, backend_watch);
 	struct xenbus_device *dev = be->dev;
 	int cdrom = 0;
+	unsigned long handle;
 	char *device_type;
 
 	DPRINTK("");
@@ -520,10 +522,10 @@ static void backend_changed(struct xenbu
 		return;
 	}
 
-	if ((be->major || be->minor) &&
-	    ((be->major != major) || (be->minor != minor))) {
-		pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
-			be->major, be->minor, major, minor);
+	if (be->major | be->minor) {
+		if (be->major != major || be->minor != minor)
+			pr_warn(DRV_PFX "changing physical device (from %x:%x to %x:%x) not supported.\n",
+				be->major, be->minor, major, minor);
 		return;
 	}
 
@@ -541,36 +543,33 @@ static void backend_changed(struct xenbu
 		kfree(device_type);
 	}
 
-	if (be->major == 0 && be->minor == 0) {
-		/* Front end dir is a number, which is used as the handle. */
+	/* Front end dir is a number, which is used as the handle. */
+	err = strict_strtoul(strrchr(dev->otherend, '/') + 1, 0, &handle);
+	if (err)
+		return;
 
-		char *p = strrchr(dev->otherend, '/') + 1;
-		long handle;
-		err = strict_strtoul(p, 0, &handle);
-		if (err)
-			return;
+	be->major = major;
+	be->minor = minor;
 
-		be->major = major;
-		be->minor = minor;
-
-		err = xen_vbd_create(be->blkif, handle, major, minor,
-				 (NULL == strchr(be->mode, 'w')), cdrom);
-		if (err) {
-			be->major = 0;
-			be->minor = 0;
-			xenbus_dev_fatal(dev, err, "creating vbd structure");
-			return;
-		}
+	err = xen_vbd_create(be->blkif, handle, major, minor,
+			     !strchr(be->mode, 'w'), cdrom);
 
+	if (err)
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+	else {
 		err = xenvbd_sysfs_addif(dev);
 		if (err) {
 			xen_vbd_free(&be->blkif->vbd);
-			be->major = 0;
-			be->minor = 0;
 			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-			return;
 		}
+	}
 
+	if (err) {
+		kfree(be->mode);
+		be->mode = NULL;
+		be->major = 0;
+		be->minor = 0;
+	} else {
 		/* We're potentially connected now */
 		xen_update_blkif_status(be->blkif);
 	}




--=__PartFDCC72FF.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartFDCC72FF.2__=--


From xen-devel-bounces@lists.xen.org Thu Dec 20 10:40:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TldYD-0004dj-Oc; Thu, 20 Dec 2012 10:40:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TldYC-0004de-Od
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 10:40:29 +0000
Received: from [85.158.143.35:28040] by server-1.bemta-4.messagelabs.com id
	C1/35-28401-B1BE2D05; Thu, 20 Dec 2012 10:40:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1356000022!15837267!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25031 invoked from network); 20 Dec 2012 10:40:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 10:40:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="275375"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 10:40:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 10:40:22 +0000
Message-ID: <1356000020.26722.39.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Thu, 20 Dec 2012 10:40:20 +0000
In-Reply-To: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've added Andres since I think this accumulation of ENOENT in err_ptr
is a paging-related thing, or at least I think he understands this
code ;-)

> @@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>         struct page *page = info->pages[info->index++];
>         unsigned long pfn = page_to_pfn(page);
>         pte_t pte = pfn_pte(pfn, info->prot);
> -
> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
> -               return -EFAULT;
> +       int err;
> +       // TODO: We should really batch these updates.
> +       err = map_foreign_page(pfn, *info->fgmfn, info->domid);
> +       *info->err_ptr++ = err;
>         set_pte_at(info->vma->vm_mm, addr, ptep, pte);
> +       info->fgmfn++;
> 
> -       return 0;
> +       return err;

This will cause apply_to_page_range to stop walking and return an error.
AIUI the intention of the err_ptr array is to accumulate the individual
success/error result for the entire range. The caller can then take care
of the successes/failures and ENOENTs as appropriate (in particular it
doesn't want to abort a batch because of an ENOENT; it wants to do as
much as possible).

On x86 (when err_ptr != NULL) you accumulate all of the errors from
HYPERVISOR_mmu_update rather than aborting on the first one, and this
seems correct to me.

>  int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 01de35c..323a2ab 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> [...]

> @@ -2528,23 +2557,98 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                 if (err)
>                         goto out;
> 
> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> -               if (err < 0)
> -                       goto out;
> +               /* We record the error for each page that gives an error, but
> +                * continue mapping until the whole set is done */
> +               do {
> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
> +                                                   batch_left, &done, domid);
> +                       if (err < 0) {
> +                               if (!err_ptr)
> +                                       goto out;
> +                               /* increment done so we skip the error item */
> +                               done++;
> +                               last_err = err_ptr[index] = err;
> +                       }
> +                       batch_left -= done;
> +                       index += done;
> +               } while (batch_left);
> 
>                 nr -= batch;
>                 addr += range;
> +               if (err_ptr)
> +                       err_ptr += batch;
>         }
> 
> -       err = 0;
> +       err = last_err;

This means that if you have 100 failures followed by one success you
return success overall. Is that intentional? That doesn't seem right.

>  out:
> 
>         xen_flush_tlb_all();
> 
>         return err;
>  }
[...]
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 0bbbccb..8f86a44 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
[...]
> @@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
>         if (xen_feature(XENFEAT_auto_translated_physmap))
>                 cur_page = pages[st->index++];
> 
> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -                                        st->vma->vm_page_prot, st->domain,
> -                                        &cur_page);
> -
> -       /* Store error code for second pass. */
> -       *(st->err++) = ret;
> +       BUG_ON(nr < 0);
> +       ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
> +                                        st->err, st->vma->vm_page_prot,
> +                                        st->domain, &cur_page);
> 
>         /* And see if it affects the global_error. */

The code which follows needs adjustment to cope with the fact that we
now batch. I think it needs to walk st->err and set global_error as
appropriate. This is related to my comment about the return value of
xen_remap_domain_mfn_range too.

I think rather than trying to fabricate some sort of meaningful error
code for an entire batch, xen_remap_domain_mfn_range should just return
an indication of whether there were any errors or not and leave it to
the caller to figure out the overall result by looking at the err array.

Perhaps return either the number of errors or the number of successes
(which turns the following if into either (ret) or (ret < nr)
respectively).

>         if (ret < 0) {
> @@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
>                                 st->global_error = 1;
>                 }
>         }
> -       st->va += PAGE_SIZE;
> +       st->va += PAGE_SIZE * nr;
> 
>         return 0;
>  }
> @@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>         state.err           = err_array;
> 
>         /* mmap_batch_fn guarantees ret == 0 */
> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> -                            &pagelist, mmap_batch_fn, &state));
> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> +                                   &pagelist, mmap_batch_fn, &state));

Can we make traverse_pages and _block common by passing a block size
parameter?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 10:40:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TldYD-0004dj-Oc; Thu, 20 Dec 2012 10:40:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TldYC-0004de-Od
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 10:40:29 +0000
Received: from [85.158.143.35:28040] by server-1.bemta-4.messagelabs.com id
	C1/35-28401-B1BE2D05; Thu, 20 Dec 2012 10:40:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1356000022!15837267!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25031 invoked from network); 20 Dec 2012 10:40:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 10:40:26 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="275375"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 10:40:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 10:40:22 +0000
Message-ID: <1356000020.26722.39.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Thu, 20 Dec 2012 10:40:20 +0000
In-Reply-To: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've added Andres since I think this accumulation of ENOENT in err_ptr
is a paging related thing, or at least I think he understands this
code ;-)

> @@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>         struct page *page = info->pages[info->index++];
>         unsigned long pfn = page_to_pfn(page);
>         pte_t pte = pfn_pte(pfn, info->prot);
> -
> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
> -               return -EFAULT;
> +       int err;
> +       // TODO: We should really batch these updates.
> +       err = map_foreign_page(pfn, *info->fgmfn, info->domid);
> +       *info->err_ptr++ = err;
>         set_pte_at(info->vma->vm_mm, addr, ptep, pte);
> +       info->fgmfn++;
> 
> -       return 0;
> +       return err;

This will cause apply_to_page_range to stop walking and return an error.
AIUI the intention of the err_ptr array is to accumulate the individual
success/error for the entire range. The caller can then take care of the
successes/failures and ENOENTs as appropriate (in particular it doesn't
want to abort a batch because of an ENOENT, it wants to do as much as
possible)

On x86 (when err_ptr != NULL) you accumulate all of the errors from
HYPERVISOR_mmu_update rather than aborting on the first one  and this
seems correct to me.

>  int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 01de35c..323a2ab 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> [...]

> @@ -2528,23 +2557,98 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                 if (err)
>                         goto out;
> 
> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> -               if (err < 0)
> -                       goto out;
> +               /* We record the error for each page that gives an error, but
> +                * continue mapping until the whole set is done */
> +               do {
> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
> +                                                   batch_left, &done, domid);
> +                       if (err < 0) {
> +                               if (!err_ptr)
> +                                       goto out;
> +                               /* increment done so we skip the error item */
> +                               done++;
> +                               last_err = err_ptr[index] = err;
> +                       }
> +                       batch_left -= done;
> +                       index += done;
> +               } while (batch_left);
> 
>                 nr -= batch;
>                 addr += range;
> +               if (err_ptr)
> +                       err_ptr += batch;
>         }
> 
> -       err = 0;
> +       err = last_err;

This means that if you have 100 failures followed by one success you
return success overall. Is that intentional? That doesn't seem right.

>  out:
> 
>         xen_flush_tlb_all();
> 
>         return err;
>  }
[...]
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 0bbbccb..8f86a44 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
[...]
> @@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
>         if (xen_feature(XENFEAT_auto_translated_physmap))
>                 cur_page = pages[st->index++];
> 
> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -                                        st->vma->vm_page_prot, st->domain,
> -                                        &cur_page);
> -
> -       /* Store error code for second pass. */
> -       *(st->err++) = ret;
> +       BUG_ON(nr < 0);
> +       ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
> +                                        st->err, st->vma->vm_page_prot,
> +                                        st->domain, &cur_page);
> 
>         /* And see if it affects the global_error. */

The code which follows needs adjustment to cope with the fact that we
now batch. I think it needs to walk st->err and set global_error as
appropriate. This is related to my comment about the return value of
xen_remap_domain_mfn_range too.

I think rather than trying to fabricate some sort of meaningful error
code for an entire batch xen_remap_domain_mfn_range should just return
an indication about whether there were any errors or not and leave it to
the caller to figure out the overall result by looking at the err array.

Perhaps return either the number of errors or the number of successes
(which turns the following if into either (ret) or (ret < nr)
respectively).

>         if (ret < 0) {
> @@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
>                                 st->global_error = 1;
>                 }
>         }
> -       st->va += PAGE_SIZE;
> +       st->va += PAGE_SIZE * nr;
> 
>         return 0;
>  }
> @@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>         state.err           = err_array;
> 
>         /* mmap_batch_fn guarantees ret == 0 */
> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> -                            &pagelist, mmap_batch_fn, &state));
> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> +                                   &pagelist, mmap_batch_fn, &state));

Can we make traverse_pages and _block common by passing a block size
parameter?
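
A unified traversal along those lines might look like this (a sketch with invented signatures, not the actual drivers/xen/privcmd.c code; the existing traverse_pages would become the block == 1 case):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical unified traversal: hand fn up to `block` items at a
 * time until all `num` items are consumed or fn reports an error. */
static int traverse(size_t num, size_t block, const int *items,
                    int (*fn)(const int *chunk, size_t n, void *state),
                    void *state)
{
    while (num) {
        size_t n = num < block ? num : block;
        int ret = fn(items, n, state);

        if (ret)
            return ret;
        items += n;
        num -= n;
    }
    return 0;
}

/* Example callback: accumulate the chunk into *state. */
static int sum_fn(const int *chunk, size_t n, void *state)
{
    size_t i;

    for (i = 0; i < n; i++)
        *(int *)state += chunk[i];
    return 0;
}
```

As Mats notes later in the thread, the cost is that every callback grows an extra chunk-length parameter.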

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 10:42:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TldZU-0004if-81; Thu, 20 Dec 2012 10:41:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TldZS-0004iX-Ia
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 10:41:46 +0000
Received: from [85.158.139.211:55975] by server-14.bemta-5.messagelabs.com id
	BF/C9-09538-96BE2D05; Thu, 20 Dec 2012 10:41:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1356000103!21341735!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 742 invoked from network); 20 Dec 2012 10:41:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 10:41:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="275417"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 10:41:43 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 10:41:42 +0000
Message-ID: <1356000101.26722.41.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: G.R. <firemeteor@users.sourceforge.net>
Date: Thu, 20 Dec 2012 10:41:41 +0000
In-Reply-To: <CAKhsbWagEeZQjfq8U9U8dvtR_peCeC6QyRT0oPsj7ZUULoRunA@mail.gmail.com>
References: <CAKhsbWa65-uQgMi2epF1RZefJPL63p1+-MiHc7yDhXrMHev4JQ@mail.gmail.com>
	<CAKhsbWagEeZQjfq8U9U8dvtR_peCeC6QyRT0oPsj7ZUULoRunA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding our qemu maintainer.

On Thu, 2012-12-20 at 03:56 +0000, G.R. wrote:
> Switch to a new address that can reach Jean.
> 
> On Thu, Dec 20, 2012 at 11:52 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> > This is hvmloader part of the change that gets rid of the resource
> > conflict warning in the guest kernel.
> > The OpRegion may not always be page aligned.

Is it worth detecting this and allocating 2 or 3 pages as required?

Is the OpRegion always 8192 bytes (two pages, but not necessarily
page aligned)?
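
Whether 2 or 3 pages are needed follows from the region's offset within its first page; a quick check of the arithmetic (plain C, assuming a 4 KiB page size and an 8 KiB OpRegion):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Pages spanned by a `size`-byte region starting at address `addr`:
 * 2 for an 8 KiB region that is page aligned, 3 otherwise. */
static unsigned long pages_spanned(unsigned long addr, unsigned long size)
{
    unsigned long offset = addr & (PAGE_SIZE - 1);

    return (offset + size + PAGE_SIZE - 1) / PAGE_SIZE;
}
```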

Do we need to worry about what is in the "slop" at either end of a 3
page region containing this? If they are sensitive registers then we may
have a problem.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 10:57:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 10:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tldob-0004z6-Mj; Thu, 20 Dec 2012 10:57:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TldoZ-0004z1-Vy
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 10:57:24 +0000
Received: from [85.158.137.99:21004] by server-7.bemta-3.messagelabs.com id
	E7/EA-23008-E0FE2D05; Thu, 20 Dec 2012 10:57:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1356001036!20217922!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26043 invoked from network); 20 Dec 2012 10:57:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 10:57:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="275823"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 10:57:16 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 10:57:16 +0000
Message-ID: <1356001034.26722.47.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 10:57:14 +0000
In-Reply-To: <osstest-14792-mainreport@xen.org>
References: <osstest-14792-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 14792: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 06:55 +0000, xen.org wrote:
> flight 14792 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/14792/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check  fail REGR. vs. 14785

2012-12-20 01:41:07 Z LEAKED [process 3183 tapdisk2] process: root      3183     1  0 01:39 ?        00:00:00 tapdisk2
2012-12-20 01:41:07 Z executing ssh ... root@10.80.248.97 xenstore-ls -fp
2012-12-20 01:41:07 Z executing ssh ... root@10.80.248.97 find /tmp /var/run /var/tmp /var/lib/xen ! -type d -print0 -ls -printf '\0'
2012-12-20 01:41:07 Z LEAKED [file /var/run/blktap-control/ctl3183] file:  89586    0 srwxr-xr-x   1 root     root            0 Dec 20 01:39 /var/run/blktap-control/ctl3183
2012-12-20 01:41:07 Z FAILURE: 2 leaked object(s)

The only tools change in the changesets being tested is:

changeset:   26315:613d97055444
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Wed Dec 19 14:33:24 2012 +0000
files:       tools/libxl/libxl.h tools/libxl/libxl_create.c tools/libxl/libxl_json.c tools/libxl/libxl_types.idl tools/ocaml/libs/xl/genwrap.py
description:
libxl: move definition of libxl_domain_config into the IDL

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Campbell <ian.campbell@citrix.com>

I've double-checked, and I think the previous hand-coded code and the
new auto-generated code are functionally identical, except that
libxl_domain_config_dispose now poisons the memory. So I don't think it
is that.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:00:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tldrn-00057r-AD; Thu, 20 Dec 2012 11:00:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tldrl-00057j-F8
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:00:41 +0000
Received: from [85.158.143.99:61293] by server-1.bemta-4.messagelabs.com id
	D6/76-28401-8DFE2D05; Thu, 20 Dec 2012 11:00:40 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1356001237!29344318!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11442 invoked from network); 20 Dec 2012 11:00:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:00:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="1392479"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 11:00:32 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 06:00:32 -0500
Message-ID: <50D2EFCE.8070706@citrix.com>
Date: Thu, 20 Dec 2012 11:00:30 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
	<1356000020.26722.39.camel@zakaz.uk.xensource.com>
In-Reply-To: <1356000020.26722.39.camel@zakaz.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/12 10:40, Ian Campbell wrote:
> I've added Andres since I think this accumulation of ENOENT in err_ptr
> is a paging related thing, or at least I think he understands this
> code ;-)
>
>> @@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>>          struct page *page = info->pages[info->index++];
>>          unsigned long pfn = page_to_pfn(page);
>>          pte_t pte = pfn_pte(pfn, info->prot);
>> -
>> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
>> -               return -EFAULT;
>> +       int err;
>> +       // TODO: We should really batch these updates.
>> +       err = map_foreign_page(pfn, *info->fgmfn, info->domid);
>> +       *info->err_ptr++ = err;
>>          set_pte_at(info->vma->vm_mm, addr, ptep, pte);
>> +       info->fgmfn++;
>>
>> -       return 0;
>> +       return err;
> This will cause apply_to_page_range to stop walking and return an error.
> AIUI the intention of the err_ptr array is to accumulate the individual
> success/error for the entire range. The caller can then take care of the
> successes/failures and ENOENTs as appropriate (in particular it doesn't
> want to abort a batch because of an ENOENT, it wants to do as much as
> possible)
>
> On x86 (when err_ptr != NULL) you accumulate all of the errors from
> HYPERVISOR_mmu_update rather than aborting on the first one  and this
> seems correct to me.
Ok, will rework that bit.
>
>>   int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index 01de35c..323a2ab 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> [...]
>> @@ -2528,23 +2557,98 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                  if (err)
>>                          goto out;
>>
>> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
>> -               if (err < 0)
>> -                       goto out;
>> +               /* We record the error for each page that gives an error, but
>> +                * continue mapping until the whole set is done */
>> +               do {
>> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
>> +                                                   batch_left, &done, domid);
>> +                       if (err < 0) {
>> +                               if (!err_ptr)
>> +                                       goto out;
>> +                               /* increment done so we skip the error item */
>> +                               done++;
>> +                               last_err = err_ptr[index] = err;
>> +                       }
>> +                       batch_left -= done;
>> +                       index += done;
>> +               } while (batch_left);
>>
>>                  nr -= batch;
>>                  addr += range;
>> +               if (err_ptr)
>> +                       err_ptr += batch;
>>          }
>>
>> -       err = 0;
>> +       err = last_err;
> This means that if you have 100 failures followed by one success you
> return success overall. Is that intentional? That doesn't seem right.
As far as I can see, it doesn't mean that: last_err is only set at the
beginning of the call (to zero) and when an error actually occurs.
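
The skip-and-retry loop's semantics can be checked with a user-space sketch (mmu_update_stub is a hypothetical stand-in for HYPERVISOR_mmu_update, which stops at the first failing entry and reports how many it completed):

```c
#include <assert.h>

/* Hypothetical hypercall stub: process entries until the first
 * failure, report how many succeeded via *done, return a negative
 * error (or 0 when everything succeeded). A negative op simulates
 * a failing entry. */
static int mmu_update_stub(const int *ops, int count, int *done)
{
    int i;

    for (i = 0; i < count; i++) {
        if (ops[i] < 0) {
            *done = i;
            return ops[i];
        }
    }
    *done = count;
    return 0;
}

/* Skip-and-retry loop from the patch: on error, record it against
 * the failing entry, step past it, and resubmit the remainder.
 * Returns the last error seen, or 0 for a clean batch. */
static int apply_batch(const int *ops, int count, int *err)
{
    int index = 0, left = count, last_err = 0;

    while (left) {
        int done = 0;
        int ret = mmu_update_stub(&ops[index], left, &done);

        if (ret < 0) {
            last_err = err[index + done] = ret;
            done++;             /* skip the failing entry */
        }
        left -= done;
        index += done;
    }
    return last_err;
}
```

Note that a success after a failure does not reset last_err, which is the point being made here.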

>
>>   out:
>>
>>          xen_flush_tlb_all();
>>
>>          return err;
>>   }
> [...]
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 0bbbccb..8f86a44 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
> [...]
>> @@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
>>          if (xen_feature(XENFEAT_auto_translated_physmap))
>>                  cur_page = pages[st->index++];
>>
>> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -                                        st->vma->vm_page_prot, st->domain,
>> -                                        &cur_page);
>> -
>> -       /* Store error code for second pass. */
>> -       *(st->err++) = ret;
>> +       BUG_ON(nr < 0);
>> +       ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
>> +                                        st->err, st->vma->vm_page_prot,
>> +                                        st->domain, &cur_page);
>>
>>          /* And see if it affects the global_error. */
> The code which follows needs adjustment to cope with the fact that we
> now batch. I think it needs to walk st->err and set global_error as
> appropriate. This is related to my comment about the return value of
> xen_remap_domain_mfn_range too.
The return value, as noted above, is "either 0 or the last error we 
saw". There should be no need to walk st->err, since we already know 
whether there was an error or not.
>
> I think rather than trying to fabricate some sort of meaningful error
> code for an entire batch xen_remap_domain_mfn_range should just return
> an indication about whether there were any errors or not and leave it to
> the caller to figure out the overall result by looking at the err array.
>
> Perhaps return either the number of errors or the number of successes
> (which turns the following if into either (ret) or (ret < nr)
> respectively).
I'm trying not to change how the code above expects things to work. 
Whilst it would be lovely to rewrite the entire code dealing with 
mapping memory, I don't think that's within the scope of my current 
project. And if I don't wish to rewrite all of libxc's memory management 
code, I don't want to alter what values are returned or when. The 
current code follows what WAS happening before my changes - which isn't 
exactly the most fantastic thing, and I think there may actually be bugs 
in there, such as:
     if (ret < 0) {
         if (ret == -ENOENT)
             st->global_error = -ENOENT;
         else {
             /* Record that at least one error has happened. */
             if (st->global_error == 0)
                 st->global_error = 1;
         }
     }
If we enter this once with -EFAULT, and after that with -ENOENT, 
global_error will say -ENOENT. I think knowing that we got an EFAULT is 
of "higher importance" than ENOENT, but that's how the old code was 
working, and I'm not sure I should change it at this point.
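
A variant that gives EFAULT (and other errors) precedence over ENOENT could look like the following sketch (hypothetical, not what the current code does; it uses the standard errno values):

```c
#include <assert.h>
#include <errno.h>

/*
 * Precedence-keeping variant: a later -ENOENT no longer overwrites a
 * more serious error already recorded in *global_error, while any
 * non-ENOENT error outranks an earlier -ENOENT.
 */
static void record_global_error(int *global_error, int ret)
{
    if (ret >= 0)
        return;
    if (ret == -ENOENT) {
        if (*global_error == 0)
            *global_error = -ENOENT;
    } else {
        *global_error = ret;    /* outranks any earlier -ENOENT */
    }
}
```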

>>          if (ret < 0) {
>> @@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
>>                                  st->global_error = 1;
>>                  }
>>          }
>> -       st->va += PAGE_SIZE;
>> +       st->va += PAGE_SIZE * nr;
>>
>>          return 0;
>>   }
>> @@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>          state.err           = err_array;
>>
>>          /* mmap_batch_fn guarantees ret == 0 */
>> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>> -                            &pagelist, mmap_batch_fn, &state));
>> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>> +                                   &pagelist, mmap_batch_fn, &state));
> Can we make traverse_pages and _block common by passing a block size
> parameter?
Yes of course. Is there much benefit from that? I understand that it's 
less code, but it also makes the original traverse_pages more complex. 
I'm not convinced it helps much - it's quite a small function, so not much 
extra code. Additionally, all of the callback functions will have to 
deal with an extra parameter (that is probably ignored in all but one 
place).

--
Mats
>
> Ian.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:00:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tldrn-00057r-AD; Thu, 20 Dec 2012 11:00:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tldrl-00057j-F8
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:00:41 +0000
Received: from [85.158.143.99:61293] by server-1.bemta-4.messagelabs.com id
	D6/76-28401-8DFE2D05; Thu, 20 Dec 2012 11:00:40 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1356001237!29344318!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11442 invoked from network); 20 Dec 2012 11:00:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:00:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="1392479"
Received: from ftlpex01cl01.citrite.net ([10.13.107.78])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 11:00:32 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 06:00:32 -0500
Message-ID: <50D2EFCE.8070706@citrix.com>
Date: Thu, 20 Dec 2012 11:00:30 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
	<1356000020.26722.39.camel@zakaz.uk.xensource.com>
In-Reply-To: <1356000020.26722.39.camel@zakaz.uk.xensource.com>
X-Originating-IP: [10.80.3.146]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/12 10:40, Ian Campbell wrote:
> I've added Andres since I think this accumulation of ENOENT in err_ptr
> is a paging related thing, or at least I think he understands this
> code ;-)
>
>> @@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
>>          struct page *page = info->pages[info->index++];
>>          unsigned long pfn = page_to_pfn(page);
>>          pte_t pte = pfn_pte(pfn, info->prot);
>> -
>> -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
>> -               return -EFAULT;
>> +       int err;
>> +       // TODO: We should really batch these updates.
>> +       err = map_foreign_page(pfn, *info->fgmfn, info->domid);
>> +       *info->err_ptr++ = err;
>>          set_pte_at(info->vma->vm_mm, addr, ptep, pte);
>> +       info->fgmfn++;
>>
>> -       return 0;
>> +       return err;
> This will cause apply_to_page_range to stop walking and return an error.
> AIUI the intention of the err_ptr array is to accumulate the individual
> success/error for the entire range. The caller can then take care of the
> successes/failures and ENOENTs as appropriate (in particular it doesn't
> want to abort a batch because of an ENOENT, it wants to do as much as
> possible)
>
> On x86 (when err_ptr != NULL) you accumulate all of the errors from
> HYPERVISOR_mmu_update rather than aborting on the first one, and this
> seems correct to me.
Ok, will rework that bit.
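The behaviour Ian is asking for — record each page's status in the error array and keep walking rather than returning the error up to apply_to_page_range — can be sketched in isolation. This is an illustrative sketch, not the actual kernel code: map_one_page(), map_range(), and the "mfn == 2 fails" rule are hypothetical stand-ins.

```c
#include <stddef.h>

/* Hypothetical per-page mapper: pretend pages with mfn == 2 are
 * paged out and fail with an ENOENT-like error (-2). */
static int map_one_page(unsigned long mfn)
{
        return (mfn == 2) ? -2 : 0;
}

/* Walk every page, record each result in err_ptr, and never abort
 * early: an error on one page must not stop the rest of the batch.
 * Returns the number of pages that failed. */
static int map_range(const unsigned long *mfns, size_t npages, int *err_ptr)
{
        size_t i;
        int nerr = 0;

        for (i = 0; i < npages; i++) {
                err_ptr[i] = map_one_page(mfns[i]);
                if (err_ptr[i] < 0)
                        nerr++;
                /* keep going regardless of err_ptr[i] */
        }
        return nerr;
}
```

The key difference from the patch as posted is that the per-page error is stored but not propagated as the walker's return value, so the walk covers the whole range.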
>
>>   int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index 01de35c..323a2ab 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> [...]
>> @@ -2528,23 +2557,98 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>>                  if (err)
>>                          goto out;
>>
>> -               err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
>> -               if (err < 0)
>> -                       goto out;
>> +               /* We record the error for each page that gives an error, but
>> +                * continue mapping until the whole set is done */
>> +               do {
>> +                       err = HYPERVISOR_mmu_update(&mmu_update[index],
>> +                                                   batch_left, &done, domid);
>> +                       if (err < 0) {
>> +                               if (!err_ptr)
>> +                                       goto out;
>> +                               /* increment done so we skip the error item */
>> +                               done++;
>> +                               last_err = err_ptr[index] = err;
>> +                       }
>> +                       batch_left -= done;
>> +                       index += done;
>> +               } while (batch_left);
>>
>>                  nr -= batch;
>>                  addr += range;
>> +               if (err_ptr)
>> +                       err_ptr += batch;
>>          }
>>
>> -       err = 0;
>> +       err = last_err;
> This means that if you have 100 failures followed by one success you
> return success overall. Is that intentional? That doesn't seem right.
As far as I can see, it doesn't mean that. last_err is only set at the 
beginning of the call (to zero) and when there is an error.
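Mats's point is that last_err starts at zero and is only ever overwritten by a failure, so a later success cannot mask an earlier error. A minimal illustration of that semantics (names are illustrative, not the kernel code):

```c
/* Sketch of the last_err convention discussed above: last_err is
 * initialised to 0 once and only written when an item fails, so the
 * return value is 0 only when *no* item in the batch failed. */
static int process_batch(const int *results, int n)
{
        int last_err = 0;  /* set once, at the start */
        int i;

        for (i = 0; i < n; i++) {
                if (results[i] < 0)
                        last_err = results[i];  /* only updated on error */
                /* a success leaves last_err untouched */
        }
        return last_err;
}
```

So "100 failures followed by one success" returns the last failure, not success, which matches Mats's reading.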

>
>>   out:
>>
>>          xen_flush_tlb_all();
>>
>>          return err;
>>   }
> [...]
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index 0bbbccb..8f86a44 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
> [...]
>> @@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
>>          if (xen_feature(XENFEAT_auto_translated_physmap))
>>                  cur_page = pages[st->index++];
>>
>> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -                                        st->vma->vm_page_prot, st->domain,
>> -                                        &cur_page);
>> -
>> -       /* Store error code for second pass. */
>> -       *(st->err++) = ret;
>> +       BUG_ON(nr < 0);
>> +       ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
>> +                                        st->err, st->vma->vm_page_prot,
>> +                                        st->domain, &cur_page);
>>
>>          /* And see if it affects the global_error. */
> The code which follows needs adjustment to cope with the fact that we
> now batch. I think it needs to walk st->err and set global_error as
> appropriate. This is related to my comment about the return value of
> xen_remap_domain_mfn_range too.
The return value, as commented above, is "either 0 or the last error we 
saw". There should be no need to walk st->err, since we already know 
whether there was an error or not.
>
> I think rather than trying to fabricate some sort of meaningful error
> code for an entire batch xen_remap_domain_mfn_range should just return
> an indication about whether there were any errors or not and leave it to
> the caller to figure out the overall result by looking at the err array.
>
> Perhaps return either the number of errors or the number of successes
> (which turns the following if into either (ret) or (ret < nr)
> respectively).
I'm trying to not change how the code above expects things to work. 
Whilst it would be lovely to rewrite the entire code dealing with 
mapping memory, I don't think that's within the scope of my current 
project. And if I don't wish to rewrite all of libxc's memory management 
code, I don't want to alter what values are returned or when. The 
current code follows what WAS happening before my changes - which isn't 
exactly the most fantastic thing, and I think there may actually be bugs 
in there, such as:
     if (ret < 0) {
         if (ret == -ENOENT)
             st->global_error = -ENOENT;
         else {
             /* Record that at least one error has happened. */
             if (st->global_error == 0)
                 st->global_error = 1;
         }
     }
If we enter this once with -EFAULT, and then after that with -ENOENT, 
global_error will say -ENOENT. I think knowing that we got an EFAULT is 
of "higher importance" than ENOENT, but that's how the old code was 
working, and I'm not sure I should change it at this point.

>>          if (ret < 0) {
>> @@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
>>                                  st->global_error = 1;
>>                  }
>>          }
>> -       st->va += PAGE_SIZE;
>> +       st->va += PAGE_SIZE * nr;
>>
>>          return 0;
>>   }
>> @@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>          state.err           = err_array;
>>
>>          /* mmap_batch_fn guarantees ret == 0 */
>> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
>> -                            &pagelist, mmap_batch_fn, &state));
>> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
>> +                                   &pagelist, mmap_batch_fn, &state));
> Can we make traverse_pages and _block common by passing a block size
> parameter?
Yes, of course. Is there much benefit from that? I understand that it's 
less code, but it also makes the original traverse_pages more complex. 
I'm not convinced it helps much - it's quite a small function, so not 
much extra code. Additionally, all of the callback functions would have 
to deal with an extra parameter (that is probably ignored in all but 
one place).
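For reference, Ian's suggestion — one traversal routine taking a block-size parameter, with the old single-item traverse_pages being the block == 1 case — could look roughly like this. All names (traverse_block, count_fn) are illustrative, not the actual privcmd.c helpers:

```c
#include <stddef.h>

typedef int (*traverse_fn)(void *items, int nr, void *state);

/* Walk nitems items of item_size bytes, handing the callback up to
 * 'block' items at a time; the final call may get a partial block.
 * traverse_pages would then be traverse_block(..., 1, ...). */
static int traverse_block(void *data, size_t item_size, size_t nitems,
                          size_t block, traverse_fn fn, void *state)
{
        char *p = data;
        size_t done = 0;

        while (done < nitems) {
                size_t nr = nitems - done;
                int ret;

                if (nr > block)
                        nr = block;     /* clamp to the block size */
                ret = fn(p, (int)nr, state);
                if (ret)
                        return ret;
                p += nr * item_size;
                done += nr;
        }
        return 0;
}

/* Example callback: count calls and total items seen.
 * state points at int[2]: {calls, items}. */
static int count_fn(void *items, int nr, void *state)
{
        int *counts = state;

        (void)items;
        counts[0]++;
        counts[1] += nr;
        return 0;
}
```

This shows the cost Mats mentions: every callback now takes an nr parameter even when it only ever sees one item.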

--
Mats
>
> Ian.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:21:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleBy-0005SW-BU; Thu, 20 Dec 2012 11:21:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TleBw-0005SR-Km
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:21:32 +0000
Received: from [85.158.139.83:60585] by server-3.bemta-5.messagelabs.com id
	DE/E8-25441-BB4F2D05; Thu, 20 Dec 2012 11:21:31 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-182.messagelabs.com!1356002490!28968458!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24244 invoked from network); 20 Dec 2012 11:21:31 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 11:21:31 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TleBt-000LEr-82; Thu, 20 Dec 2012 11:21:29 +0000
Date: Thu, 20 Dec 2012 11:21:29 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121220112129.GC80837@ocelot.phlegethon.org>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
	<20121219104932.GA65599@ocelot.phlegethon.org>
	<1355916124.14620.328.camel@zakaz.uk.xensource.com>
	<1355928840.14620.432.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355928840.14620.432.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
	drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:54 +0000 on 19 Dec (1355928840), Ian Campbell wrote:
> On Wed, 2012-12-19 at 11:22 +0000, Ian Campbell wrote:
> > On Wed, 2012-12-19 at 10:49 +0000, Tim Deegan wrote:
> > > At 10:20 +0000 on 19 Dec (1355912423), Ian Campbell wrote:
> > > > On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> > > > > On Tue, 18 Dec 2012, Ian Campbell wrote:
> > > > > > This shortens an overly long line.
> > > > > > 
> > > > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > > > 
> > > > > honestly I would rather keep it because it has been quite useful for
> > > > > debugging in the past once all the bugs have been fixed (TM) then we can
> > > > > remove it ;-)
> > > > 
> > > > Can you not just re-add it for debug?
> > > > 
> > > > I mostly just want to get rid of the overlong line, I could nuke the
> > > > spaces from the comment (in all of them, not just this one) instead?
> > > 
> > > Could you just remove the 'lev3: ' from the comment, pulling it in to
> > > exactly 80 chars?  Your added 'second level' and 'third level' make it
> > > redundant, and I'd rather not lose the spaces in the comments.
> > 
> > I think that makes it exactly 80 characters, which is probably ok.
> 
> It ends up as below, exactly 80 characters long. I think it's probably
> not worth it.

OK, how's this?

arm: trim pagetable flag definitions to fit in 80 characters

Signed-off-by: Tim Deegan <tim@xen.org>

diff -r 6f5c96855a9e xen/arch/arm/arm32/head.S
--- a/xen/arch/arm/arm32/head.S	Thu Dec 20 11:00:32 2012 +0100
+++ b/xen/arch/arm/arm32/head.S	Thu Dec 20 11:19:53 2012 +0000
@@ -24,10 +24,10 @@
 
 #define ZIMAGE_MAGIC_NUMBER 0x016f2818
 
-#define PT_PT  0xe7f /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=1, P=1 */
-#define PT_MEM 0xe7d /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=111, T=0, P=1 */
-#define PT_DEV 0xe71 /* nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=0, P=1 */
-#define PT_DEV_L3 0xe73 /* lev3: nG=1, AF=1, SH=10, AP=01, NS=1, ATTR=100, T=1, P=1 */
+#define PT_PT     0xe7f /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_MEM    0xe7d /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=0 P=1 */
+#define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
+#define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:26:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleG9-0005aC-2l; Thu, 20 Dec 2012 11:25:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TleG7-0005a5-69
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:25:51 +0000
Received: from [85.158.143.99:57651] by server-3.bemta-4.messagelabs.com id
	77/00-18211-EB5F2D05; Thu, 20 Dec 2012 11:25:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356002748!25012715!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28268 invoked from network); 20 Dec 2012 11:25:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:25:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="276663"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:25:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 11:25:47 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TleG3-0006LV-Vp;
	Thu, 20 Dec 2012 11:25:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TleG3-0008Dp-R7;
	Thu, 20 Dec 2012 11:25:47 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14793-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 11:25:47 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14793: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14793 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14793/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      5 xen-boot                    fail pass in 14790
 test-i386-i386-xl-qemut-win   7 windows-install             fail pass in 14790
 test-amd64-i386-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail in 14790 pass in 14793
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 14790 pass in 14793

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14783
 test-amd64-amd64-xl-sedf 13 guest-localmigrate.2 fail in 14790 REGR. vs. 14783

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop            fail in 14790 never pass

version targeted for testing:
 xen                  2ae6267371d8
baseline version:
 xen                  b13c5ee3c109

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Lasse Collin <lasse.collin@tukaani.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=2ae6267371d8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 2ae6267371d8
+ branch=xen-4.1-testing
+ revision=2ae6267371d8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 2ae6267371d8 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 4 changesets with 15 changes to 13 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:26:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleG9-0005aC-2l; Thu, 20 Dec 2012 11:25:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TleG7-0005a5-69
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:25:51 +0000
Received: from [85.158.143.99:57651] by server-3.bemta-4.messagelabs.com id
	77/00-18211-EB5F2D05; Thu, 20 Dec 2012 11:25:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356002748!25012715!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28268 invoked from network); 20 Dec 2012 11:25:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:25:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="276663"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:25:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 11:25:47 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TleG3-0006LV-Vp;
	Thu, 20 Dec 2012 11:25:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TleG3-0008Dp-R7;
	Thu, 20 Dec 2012 11:25:47 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14793-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 11:25:47 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 14793: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14793 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14793/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      5 xen-boot                    fail pass in 14790
 test-i386-i386-xl-qemut-win   7 windows-install             fail pass in 14790
 test-amd64-i386-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail in 14790 pass in 14793
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 14790 pass in 14793

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 14783
 test-amd64-amd64-xl-sedf 13 guest-localmigrate.2 fail in 14790 REGR. vs. 14783

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-qemut-win     16 leak-check/check             fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-qemut-win  13 guest-stop            fail in 14790 never pass

version targeted for testing:
 xen                  2ae6267371d8
baseline version:
 xen                  b13c5ee3c109

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Lasse Collin <lasse.collin@tukaani.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-i386-i386-qemut-win                                     fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-i386-i386-xl-qemut-win                                  fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=2ae6267371d8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 2ae6267371d8
+ branch=xen-4.1-testing
+ revision=2ae6267371d8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 2ae6267371d8 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 4 changesets with 15 changes to 13 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleHy-0005gQ-JE; Thu, 20 Dec 2012 11:27:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TleHw-0005gF-98
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:27:44 +0000
Received: from [85.158.137.99:5371] by server-8.bemta-3.messagelabs.com id
	80/A4-01297-F26F2D05; Thu, 20 Dec 2012 11:27:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1356002862!15393527!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17821 invoked from network); 20 Dec 2012 11:27:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:27:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="276697"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:26:42 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 11:26:41 +0000
Message-ID: <1356002800.26722.49.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Thu, 20 Dec 2012 11:26:40 +0000
In-Reply-To: <1356000020.26722.39.camel@zakaz.uk.xensource.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
	<1356000020.26722.39.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
 MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 10:40 +0000, Ian Campbell wrote:
> I've added Andres since I think this accumulation of ENOENT in err_ptr
> is a paging related thing, or at least I think he understands this
> code ;-)
> 
> > @@ -89,37 +89,112 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
> >         struct page *page = info->pages[info->index++];
> >         unsigned long pfn = page_to_pfn(page);
> >         pte_t pte = pfn_pte(pfn, info->prot);
> > -
> > -       if (map_foreign_page(pfn, info->fgmfn, info->domid))
> > -               return -EFAULT;
> > +       int err;
> > +       // TODO: We should really batch these updates.

This just made me realise that we need a way for
XENMEM_add_to_physmap_range to return errors for each slot. I'll look
into that in the new year.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:28:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:28:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleIZ-0005jo-1I; Thu, 20 Dec 2012 11:28:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TleIX-0005jW-Di
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:28:21 +0000
Received: from [193.109.254.147:49989] by server-16.bemta-14.messagelabs.com
	id 8C/D1-18932-456F2D05; Thu, 20 Dec 2012 11:28:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1356002702!2018969!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25168 invoked from network); 20 Dec 2012 11:25:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:25:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="276643"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:25:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 11:25:01 +0000
Message-ID: <1356002699.26722.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 20 Dec 2012 11:24:59 +0000
In-Reply-To: <20121220112129.GC80837@ocelot.phlegethon.org>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
	<20121219104932.GA65599@ocelot.phlegethon.org>
	<1355916124.14620.328.camel@zakaz.uk.xensource.com>
	<1355928840.14620.432.camel@zakaz.uk.xensource.com>
	<20121220112129.GC80837@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 11:21 +0000, Tim Deegan wrote:
> At 14:54 +0000 on 19 Dec (1355928840), Ian Campbell wrote:
> > On Wed, 2012-12-19 at 11:22 +0000, Ian Campbell wrote:
> > > On Wed, 2012-12-19 at 10:49 +0000, Tim Deegan wrote:
> > > > At 10:20 +0000 on 19 Dec (1355912423), Ian Campbell wrote:
> > > > > On Tue, 2012-12-18 at 18:33 +0000, Stefano Stabellini wrote:
> > > > > > On Tue, 18 Dec 2012, Ian Campbell wrote:
> > > > > > > This shortens an overly long line.
> > > > > > > 
> > > > > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > > > > 
> > > > > > honestly I would rather keep it because it has been quite useful for
> > > > > > debugging in the past once all the bugs have been fixed (TM) then we can
> > > > > > remove it ;-)
> > > > > 
> > > > > Can you not just re-add it for debug?
> > > > > 
> > > > > I mostly just want to get rid of the overlong line, I could nuke the
> > > > > spaces from the comment (in all of them, not just this one) instead?
> > > > 
> > > > Could you just remove the 'lev3: ' from the comment, pulling it in to
> > > > exactly 80 chars?  Your added 'second level' and 'third level' make it
> > > > redundant, and I'd rather not lose the spaces in the comments.
> > > 
> > > I think that makes it exactly 80 characters, which is probably ok.
> > 
> > It ends up as below, exactly 80 characters long. I think it's probably
> > not worth it.
> 
> OK, how's this?
> 
> arm: trim pagetable flag definitions to fit in 80 characters
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

Lateral thought ;-)

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:29:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleJF-0005pa-GB; Thu, 20 Dec 2012 11:29:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1TleJD-0005pJ-RA
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:29:04 +0000
Received: from [85.158.137.99:51209] by server-9.bemta-3.messagelabs.com id
	E9/BD-11948-A76F2D05; Thu, 20 Dec 2012 11:28:58 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-2.tower-217.messagelabs.com!1356002936!19945610!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26741 invoked from network); 20 Dec 2012 11:28:57 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-2.tower-217.messagelabs.com with SMTP;
	20 Dec 2012 11:28:57 -0000
Received: from [62.94.142.87] (account d.faggioli@sssup.it HELO [192.168.0.20])
	by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 77366107; Thu, 20 Dec 2012 12:28:56 +0100
Message-ID: <1356002935.28419.60.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: George Dunlap <dunlapg@umich.edu>
Date: Thu, 20 Dec 2012 12:28:55 +0100
In-Reply-To: <CAFLBxZaB46yiUBd8dj3QLtJQP5WCkdbhK9fvwHvCRNA_x8N1Ug@mail.gmail.com>
References: <CAFLBxZaB46yiUBd8dj3QLtJQP5WCkdbhK9fvwHvCRNA_x8N1Ug@mail.gmail.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 development update, 19 Nov
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1022389050011620581=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============1022389050011620581==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-qpV0NDJNibTNpg7OfIJL"


--=-qpV0NDJNibTNpg7OfIJL
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2012-11-19 at 16:29 +0000, George Dunlap wrote:
> It's been a few weeks since I've sent one out, so please send updates!
>=20
> This information will be mirrored on the Xen 4.3 Roadmap wiki page:
>  http://wiki.xen.org/wiki/Xen_Roadmap/4.3
>=20
Not sure whether this is the last DevUpdate you sent out... If not, sorry
for that. :-(

> =3D=3D Not yet complete =3D=3D
>=20
> * PVH mode (w/ Linux)
>   owner: mukesh@oracle
>   status (Linux): 3rd draft patches posted. =20
>   status (Xen): Patches being cleaned up for submission
>=20
> * Event channel scalability
>   owner: attilio@citrix
>   status: initial design proposed
>   Increase limit on event channels (currently 1024 for 32-bit guests,
>   4096 for 64-bit guests)
>=20
> * ARM server port
>   owner: ijc@citrix
> > > > > status: Core hypervisor patches accepted; Linux patches pending
>=20
> * NUMA scheduler affinity
>   critical
>   owner: dario@citrix
>   status: Patches posted
>=20
v2 posted, after major rewrite of some core aspects and full performance
re-evaluation. Review ongoing.

Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-qpV0NDJNibTNpg7OfIJL
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDS9ncACgkQk4XaBE3IOsTF7ACghdGnogCr5rAM/ltxceatNcND
JMgAn24m0P6tLSgn9BDmlL4SGcubIigH
=l7J6
-----END PGP SIGNATURE-----

--=-qpV0NDJNibTNpg7OfIJL--



--===============1022389050011620581==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1022389050011620581==--



From xen-devel-bounces@lists.xen.org Thu Dec 20 11:30:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleKQ-0005yq-1u; Thu, 20 Dec 2012 11:30:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TleKO-0005ye-32
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:30:16 +0000
Received: from [85.158.139.83:51361] by server-11.bemta-5.messagelabs.com id
	15/5E-31624-7C6F2D05; Thu, 20 Dec 2012 11:30:15 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-182.messagelabs.com!1356003012!28574041!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26483 invoked from network); 20 Dec 2012 11:30:13 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 11:30:13 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TleKK-000LGV-AR; Thu, 20 Dec 2012 11:30:12 +0000
Date: Thu, 20 Dec 2012 11:30:12 +0000
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20121220113012.GD80837@ocelot.phlegethon.org>
References: <1355929001-19183-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355929001-19183-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:56 +0000 on 19 Dec (1355929001), Ian Campbell wrote:
> We weren't taking the guest mode (CPSR) into account and would always
> access the user version of the registers.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:36:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleQf-0006Xi-SS; Thu, 20 Dec 2012 11:36:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TleQe-0006Xb-DD
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:36:44 +0000
Received: from [193.109.254.147:50481] by server-3.bemta-14.messagelabs.com id
	86/4C-26055-B48F2D05; Thu, 20 Dec 2012 11:36:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1356003169!10759286!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29283 invoked from network); 20 Dec 2012 11:32:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:32:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="276860"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:32:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 11:32:48 +0000
Message-ID: <1356003167.26722.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Thu, 20 Dec 2012 11:32:47 +0000
In-Reply-To: <50D2EFCE.8070706@citrix.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
	<1356000020.26722.39.camel@zakaz.uk.xensource.com>
	<50D2EFCE.8070706@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 11:00 +0000, Mats Petersson wrote:

> >> -       err = 0;
> >> +       err = last_err;
> > This means that if you have 100 failures followed by one success you
> > return success overall. Is that intentional? That doesn't seem right.
> As far as I see, it doesn't mean that. last_err is only set at the 
> beginning of the call (to zero) and if there is an error.

Ah, yes I missed that (but it still isn't right, see below).

> >> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> >> index 0bbbccb..8f86a44 100644
> >> --- a/drivers/xen/privcmd.c
> >> +++ b/drivers/xen/privcmd.c
> > [...]
> >> @@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
> >>          if (xen_feature(XENFEAT_auto_translated_physmap))
> >>                  cur_page = pages[st->index++];
> >>
> >> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> >> -                                        st->vma->vm_page_prot, st->domain,
> >> -                                        &cur_page);
> >> -
> >> -       /* Store error code for second pass. */
> >> -       *(st->err++) = ret;
> >> +       BUG_ON(nr < 0);
> >> +       ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
> >> +                                        st->err, st->vma->vm_page_prot,
> >> +                                        st->domain, &cur_page);
> >>
> >>          /* And see if it affects the global_error. */
> > The code which follows needs adjustment to cope with the fact that we
> > now batch. I think it needs to walk st->err and set global_error as
> > appropriate. This is related to my comment about the return value of
> > xen_remap_domain_mfn_range too.
> The return value, as commented above, is "either 0 or the last error we 
> saw". There should be no need to walk the st->err, as we know if there 
> was some error or not.

The code which follows tries to latch having seen ENOENT in preference
to other errors, so you don't need 0 or the last error, you need either
0, ENOENT if one occurred somewhere in the batch or an error!=ENOENT.
(or maybe it's vice versa; either way the last error isn't necessarily
what you need).

> > I think rather than trying to fabricate some sort of meaningful error
> > code for an entire batch xen_remap_domain_mfn_range should just return
> > an indication about whether there were any errors or not and leave it to
> > the caller to figure out the overall result by looking at the err array.
> >
> > Perhaps return either the number of errors or the number of successes
> > (which turns the following if into either (ret) or (ret < nr)
> > respectively).
> I'm trying to not change how the code above expects things to work. 
> Whilst it would be lovely to rewrite the entire code dealing with 
> mapping memory, I don't think that's within the scope of my current 
> project. And if I don't wish to rewrite all of libxc's memory management 

I'm not talking about changing the libxc or ioctl interface, only about
the internal-to-Linux interface between privcmd and mmu.c. In fact I'm
only actually asking you to change the return value of the new function
you are adding to the API.

> code, I don't want to alter what values are returned or when. The 
> current code follows what WAS happening before my changes - which isn't 
> exactly the most fantastic thing, and I think there may actually be bugs 
> in there, such as:
>      if (ret < 0) {
>          if (ret == -ENOENT)
>              st->global_error = -ENOENT;
>          else {
>              /* Record that at least one error has happened. */
>              if (st->global_error == 0)
>                  st->global_error = 1;
>          }
>      }
> if we enter this once with -EFAULT, and then after that with -ENOENT, 
> global_error will say -ENOENT. I think knowing that we got an EFAULT is 
> "higher importance" than ENOENT, but that's how the old code was 
> working, and I'm not sure I should change it at this point.

But I think you are changing it, by this behaviour of returning the last
error in the batch, which causes this code to behave differently. That's
what I'm asking you to avoid!

> 
> >>          if (ret < 0) {
> >> @@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
> >>                                  st->global_error = 1;
> >>                  }
> >>          }
> >> -       st->va += PAGE_SIZE;
> >> +       st->va += PAGE_SIZE * nr;
> >>
> >>          return 0;
> >>   }
> >> @@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> >>          state.err           = err_array;
> >>
> >>          /* mmap_batch_fn guarantees ret == 0 */
> >> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> >> -                            &pagelist, mmap_batch_fn, &state));
> >> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> >> +                                   &pagelist, mmap_batch_fn, &state));
> > Can we make traverse_pages and _block common by passing a block size
> > parameter?
> Yes of course. Is there much benefit from that? I understand that it's 
> less code, but it also makes the original traverse_pages more complex. 
> Not convinced it helps much - it's quite a small function, so not much 
> extra code. Additionally, all of the callback functions will have to 
> deal with an extra parameter (that is probably ignored in all but one 
> place).

It depends on what the combined version ends up looking like but in
general I'm in favour of one more generic function over several cloned
and hacked versions of the same thing.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:36:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleQf-0006Xi-SS; Thu, 20 Dec 2012 11:36:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1TleQe-0006Xb-DD
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:36:44 +0000
Received: from [193.109.254.147:50481] by server-3.bemta-14.messagelabs.com id
	86/4C-26055-B48F2D05; Thu, 20 Dec 2012 11:36:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1356003169!10759286!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29283 invoked from network); 20 Dec 2012 11:32:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:32:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="276860"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:32:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 11:32:48 +0000
Message-ID: <1356003167.26722.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mats Petersson <mats.petersson@citrix.com>
Date: Thu, 20 Dec 2012 11:32:47 +0000
In-Reply-To: <50D2EFCE.8070706@citrix.com>
References: <1355943187-4167-1-git-send-email-mats.petersson@citrix.com>
	<1356000020.26722.39.camel@zakaz.uk.xensource.com>
	<50D2EFCE.8070706@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"mats@planetcatfish.com" <mats@planetcatfish.com>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"konrad@darnok.org" <konrad@darnok.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5] xen/privcmd.c: improve performance of
	MMAPBATCH_V2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 11:00 +0000, Mats Petersson wrote:

> >> -       err = 0;
> >> +       err = last_err;
> > This means that if you have 100 failures followed by one success you
> > return success overall. Is that intentional? That doesn't seem right.
> As far as I see, it doesn't mean that. last_err is only set at the 
> beginning of the call (to zero) and if there is an error.

Ah, yes I missed that (but it still isn't right, see below).

> >> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> >> index 0bbbccb..8f86a44 100644
> >> --- a/drivers/xen/privcmd.c
> >> +++ b/drivers/xen/privcmd.c
> > [...]
> >> @@ -283,12 +317,10 @@ static int mmap_batch_fn(void *data, void *state)
> >>          if (xen_feature(XENFEAT_auto_translated_physmap))
> >>                  cur_page = pages[st->index++];
> >>
> >> -       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> >> -                                        st->vma->vm_page_prot, st->domain,
> >> -                                        &cur_page);
> >> -
> >> -       /* Store error code for second pass. */
> >> -       *(st->err++) = ret;
> >> +       BUG_ON(nr < 0);
> >> +       ret = xen_remap_domain_mfn_array(st->vma, st->va & PAGE_MASK, mfnp, nr,
> >> +                                        st->err, st->vma->vm_page_prot,
> >> +                                        st->domain, &cur_page);
> >>
> >>          /* And see if it affects the global_error. */
> > The code which follows needs adjustment to cope with the fact that we
> > now batch. I think it needs to walk st->err and set global_error as
> > appropriate. This is related to my comment about the return value of
> > xen_remap_domain_mfn_range too.
> The return value, as commented above, is "either 0 or the last error we 
> saw". There should be no need to walk the st->err, as we know if there 
> was some error or not.

The code which follows tries to latch having seen ENOENT in preference
to other errors, so you don't need 0 or the last error, you need either
0, ENOENT if one occurred somewhere in the batch or an error!=ENOENT.
(or maybe its vice versa, either way the last error isn't necessarily
what you need).

> > I think rather than trying to fabricate some sort of meaningful error
> > code for an entire batch xen_remap_domain_mfn_range should just return
> > an indication about whether there were any errors or not and leave it to
> > the caller to figure out the overall result by looking at the err array.
> >
> > Perhaps return either the number of errors or the number of successes
> > (which turns the following if into either (ret) or (ret < nr)
> > respectively).
> I'm trying to not change how the code above expects things to work. 
> Whilst it would be lovely to rewrite the entire code dealing with 
> mapping memory, I don't think that's within the scope of my current 
> project. And if I don't wish to rewrite all of libxc's memory management 

I'm not talking about changing the libxc or ioctl interface, only about
the internal-to-Linux interface between privcmd and mmu.c. In fact I'm
only actually asking you to change the return value of the new function
you are adding to the API.

> code, I don't want to alter what values are returned or when. The 
> current code follows what WAS happening before my changes - which isn't 
> exactly the most fantastic thing, and I think there may actually be bugs 
> in there, such as:
>      if (ret < 0) {
>          if (ret == -ENOENT)
>              st->global_error = -ENOENT;
>          else {
>              /* Record that at least one error has happened. */
>              if (st->global_error == 0)
>                  st->global_error = 1;
>          }
>      }
> if we enter this once with -EFAULT, and then after that with -ENOENT, 
> global_error will say -ENOENT. I think knowing that we got an EFAULT is 
> "higher importance" than ENOENT, but that's how the old code was 
> working, and I'm not sure I should change it at this point.

But I think you are changing it: this behaviour of returning the last
error in the batch causes this code to behave differently. That's what
I'm asking you to avoid!

> 
> >>          if (ret < 0) {
> >> @@ -300,7 +332,7 @@ static int mmap_batch_fn(void *data, void *state)
> >>                                  st->global_error = 1;
> >>                  }
> >>          }
> >> -       st->va += PAGE_SIZE;
> >> +       st->va += PAGE_SIZE * nr;
> >>
> >>          return 0;
> >>   }
> >> @@ -430,8 +462,8 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> >>          state.err           = err_array;
> >>
> >>          /* mmap_batch_fn guarantees ret == 0 */
> >> -       BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
> >> -                            &pagelist, mmap_batch_fn, &state));
> >> +       BUG_ON(traverse_pages_block(m.num, sizeof(xen_pfn_t),
> >> +                                   &pagelist, mmap_batch_fn, &state));
> > Can we make traverse_pages and _block common by passing a block size
> > parameter?
> Yes of course. Is there much benefit from that? I understand that it's 
> less code, but it also makes the original traverse_pages more complex. 
> Not convinced it helps much - it's quite a small function, so not much 
> extra code. Additionally, all of the callback functions will have to 
> deal with an extra parameter (that is probably ignored in all but one 
> place).

It depends on what the combined version ends up looking like but in
general I'm in favour of one more generic function over several cloned
and hacked versions of the same thing.
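[Editorial note: the combined-traversal idea can be sketched roughly as
below. The names, the flat-buffer walk, and the callback signature are
all illustrative; the real traverse_pages walks a page list, not a flat
array.]

```c
#include <stddef.h>

/* Sketch of a unified traversal: the caller passes how many records the
 * callback handles per invocation; block == 1 recovers the original
 * one-record-at-a-time behaviour. */
typedef int (*traverse_fn)(void *data, size_t nr, void *state);

static int traverse_records(size_t nr, size_t record_size, size_t block,
                            char *records, traverse_fn fn, void *state)
{
    size_t done = 0;

    while (done < nr) {
        size_t chunk = nr - done;
        int ret;

        if (chunk > block)
            chunk = block;
        ret = fn(records + done * record_size, chunk, state);
        if (ret)
            return ret;
        done += chunk;
    }
    return 0;
}

/* Demo callback: counts invocations and records seen. */
struct count_state { size_t calls; size_t records; };

static int count_cb(void *data, size_t nr, void *state)
{
    struct count_state *cs = state;
    (void)data;
    cs->calls++;
    cs->records += nr;
    return 0;
}
```

With nr = 10 and block = 4, the callback runs three times (chunks of
4, 4 and 2), which is the batching the patch is after.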

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:44:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:44:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleYF-0006jT-RG; Thu, 20 Dec 2012 11:44:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TleYE-0006jO-Ir
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 11:44:34 +0000
Received: from [193.109.254.147:27173] by server-15.bemta-14.messagelabs.com
	id C6/C4-05116-12AF2D05; Thu, 20 Dec 2012 11:44:33 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1356003705!10768781!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2206 invoked from network); 20 Dec 2012 11:41:47 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 11:41:47 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TleVV-000LIR-4g; Thu, 20 Dec 2012 11:41:45 +0000
Date: Thu, 20 Dec 2012 11:41:45 +0000
From: Tim Deegan <tim@xen.org>
To: Razvan Cojocaru <rzvncj@gmail.com>
Message-ID: <20121220114145.GE80837@ocelot.phlegethon.org>
References: <46990160b4a373ca96ba.1355755074@rcojocaru.dsd.ro>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <46990160b4a373ca96ba.1355755074@rcojocaru.dsd.ro>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH] mem_event: Add support for
	MEM_EVENT_REASON_MSR
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

At 16:37 +0200 on 17 Dec (1355762274), Razvan Cojocaru wrote:
> Add the new MEM_EVENT_REASON_MSR event type. Works similarly
> to the other register events, except event.gla always contains
> the MSR type (in addition to event.gfn, which holds the value).
> 
> Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>

I have two minor suggestions (below).

> diff -r f50aab21f9f2 -r 46990160b4a3 xen/include/public/hvm/params.h
> --- a/xen/include/public/hvm/params.h	Thu Dec 13 14:39:31 2012 +0000
> +++ b/xen/include/public/hvm/params.h	Mon Dec 17 16:37:18 2012 +0200
> @@ -141,6 +141,8 @@
>  #define HVM_PARAM_ACCESS_RING_PFN   28
>  #define HVM_PARAM_SHARING_RING_PFN  29
>  
> -#define HVM_NR_PARAMS          30
> +#define HVM_PARAM_MEMORY_EVENT_MSR  30

Can you put this up beside the other HVM_PARAM_MEMORY_EVENT_*
definitions, please? 

> +
> +#define HVM_NR_PARAMS          31
>  
>  #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
> diff -r f50aab21f9f2 -r 46990160b4a3 xen/include/public/mem_event.h
> --- a/xen/include/public/mem_event.h	Thu Dec 13 14:39:31 2012 +0000
> +++ b/xen/include/public/mem_event.h	Mon Dec 17 16:37:18 2012 +0200
> @@ -45,6 +45,7 @@
>  #define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
>  #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
>  #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
> +#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR type */

Can you add a comment here to say that MEM_EVENT_REASON_MSR doesn't
honour the HVMPME_onchangeonly bit?

With those two changes, you can add 

Acked-by: Tim Deegan <tim@xen.org> 

and repost.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TledK-0006sL-Jb; Thu, 20 Dec 2012 11:49:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TledI-0006sG-CT
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:49:48 +0000
Received: from [85.158.137.99:17498] by server-14.bemta-3.messagelabs.com id
	EE/DC-27443-85BF2D05; Thu, 20 Dec 2012 11:49:44 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1356004181!20227333!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19941 invoked from network); 20 Dec 2012 11:49:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:49:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="1397084"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 11:49:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 06:49:37 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Tled7-000577-CO;
	Thu, 20 Dec 2012 11:49:37 +0000
Date: Thu, 20 Dec 2012 11:49:27 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20121219153728.GG10062@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1212201145030.17523@kaball.uk.xensource.com>
References: <20121210184311.4fc11316@mantra.us.oracle.com>
	<alpine.DEB.2.02.1212111208260.17523@kaball.uk.xensource.com>
	<20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121218211613.GA5697@phenom.dumpdata.com>
	<alpine.DEB.2.02.1212191116450.17523@kaball.uk.xensource.com>
	<20121219153728.GG10062@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Dec 2012, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 19, 2012 at 11:19:22AM +0000, Stefano Stabellini wrote:
> > On Tue, 18 Dec 2012, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Dec 13, 2012 at 02:25:16PM +0000, Stefano Stabellini wrote:
> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > > > >>> On 13.12.12 at 13:19, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > > wrote:
> > > > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> > > > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > > > >> > On Wed, 12 Dec 2012 17:15:23 -0800
> > > > > >> > Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > > > > >> > 
> > > > > >> >> On Tue, 11 Dec 2012 12:10:19 +0000
> > > > > >> >> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > > > >> >> 
> > > > > >> >> > On Tue, 11 Dec 2012, Mukesh Rathor wrote:
> > > > > >> >> > > On Mon, 10 Dec 2012 09:43:34 +0000
> > > > > >> >> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > > >> >> > 
> > > > > >> >> > That's strange because AFAIK Linux is never editing the MSI-X
> > > > > >> >> > entries directly: give a look at
> > > > > >> >> > arch/x86/pci/xen.c:xen_initdom_setup_msi_irqs, Linux only remaps
> > > > > >> >> > MSIs into pirqs using hypercalls. Xen should be the only one to
> > > > > >> >> > touch the real MSI-X table.
> > > > > >> >> 
> > > > > >> >> So, this is what's happening. The side effect of :
> > > > > >> >> 
> > > > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_table.first,
> > > > > >> >>                                 dev->msix_table.last) )
> > > > > >> >>             WARN();
> > > > > >> >>         if ( rangeset_add_range(mmio_ro_ranges, dev->msix_pba.first,
> > > > > >> >>                                 dev->msix_pba.last) )
> > > > > >> >>             WARN();
> > > > > >> >> 
> > > > > >> >> in msix_capability_init() in xen is that the dom0 EPT entries that
> > > > > >> >> I've mapped are going from RW to read only. Then when dom0 accesses
> > > > > >> >> it, I get EPT violation. In case of pure PV, the PTE entry to access
> > > > > >> >> the iomem is RW, and the above rangeset adding doesn't affect it. I
> > > > > >> >> don't understand why? Looking into that now...
> > > > > >> 
> > > > > >> As far as I was able to tell back at the time when I implemented
> > > > > >> this, existing code shouldn't have mappings for these tables in
> > > > > >> place at the time these ranges get added here. But I noted in
> > > > > >> the patch description that this is a potential issue (and may need
> > > > > >> fixing if deemed severe enough - back then, apparently nobody
> > > > > >> really cared, perhaps largely because passthrough to PV guests
> > > > > >> isn't considered fully secure anyway).
> > > > > >> 
> > > > > >> Now - did that change? I.e. can you describe where the mappings
> > > > > >> come from that cause the problem here?
> > > > > > 
> > > > > > The generic Linux MSI-X handling code does that, before calling the
> > > > > > arch specific msi setup function, for which we have a xen version
> > > > > > (xen_initdom_setup_msi_irqs):
> > > > > > 
> > > > > > pci_enable_msix -> msix_capability_init -> msix_map_region
> > > > > 
> > > > > Ah, okay, (of course?) I had looked only at the forward ported
> > > > > version of this. Is all that fiddling with the mask bits really
> > > > > being suppressed properly when running under Xen? Otherwise
> > > > > pv-ops is quite broken in this regard at present... And if it is,
> > > > > I don't see what the respective ioremap() is good for here in
> > > > > the Xen case.
> > > > 
> > > > Actually I think that you might be right: just looking at the code it
> > > > seems that the mask bits get written to the table once as part of the
> > > > initialization process:
> > > > 
> > > > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > > > 
> > > > Unfortunately msix_program_entries is called few lines after
> > > > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
> > > > a pirq.
> > > > However after that is done, all the masking/unmask is done via irq_mask
> > > > that we handle properly masking/unmasking the corresponding event
> > > > channels.
> > > > 
> > > > 
> > > > Possible solutions on top of my head:
> > > 
> > > There is also the potential to piggyback on Joerg's patches
> > > that introduce a new x86_msi_ops: compose_msi_msg.
> > > 
> > > See here: https://lkml.org/lkml/2012/8/20/432
> > > (I think there was also a more recent one posted at some point).
> > 
> > Given that dom0 should never write to the MSI-X table, introducing a new
> 
> How does this work with QEMU setting up MSI and MSI-X on behalf of
> guests? Or is that actually handled by Xen hypervisor?

In the case of HVM guests, QEMU emulates the PCI config space and the
table, so it is OK for the guest to write to it.


> > msi_ops that replaces msix_program_entries (or at least the part of
> > msix_program_entries that masks all the entries) is the only solution 
> > left.
> 
> so this one (__msix_mask_irq):
> 
>          mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
>          if (flag)
>                  mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
>          writel(mask_bits, desc->mask_base + offset);
> 

Yes, that's the one. One could argue that __msix_mask_irq should call
mask_irq rather than writing to the table directly.
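[Editorial note: the ops-override approach under discussion (have Xen
dom0 mask via event channels instead of writing the MSI-X table) can be
sketched generically as below. Every name here is an illustrative
stand-in, not the real x86_msi_ops or kernel code.]

```c
/* An arch-overridable mask hook: the native path writes the per-entry
 * mask bit; the Xen dom0 path masks the event channel and never touches
 * the (read-only, in the PVH case) MSI-X table. */
struct msi_ops {
    void (*mask_irq)(unsigned int irq, int flag);
};

static unsigned long fake_msix_table[8];  /* stand-in for the MMIO table */
static unsigned long xen_evtchn_masked;   /* stand-in for evtchn mask state */

static void native_mask_irq(unsigned int irq, int flag)
{
    if (flag)
        fake_msix_table[irq] |= 1UL;
    else
        fake_msix_table[irq] &= ~1UL;
}

static void xen_mask_irq(unsigned int irq, int flag)
{
    if (flag)
        xen_evtchn_masked |= 1UL << irq;
    else
        xen_evtchn_masked &= ~(1UL << irq);
}

static struct msi_ops msi_ops = { native_mask_irq };
```

At boot a Xen dom0 would install xen_mask_irq into the ops table, so
the generic MSI code's mask calls no longer write the real table.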

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:53:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlegl-000718-BX; Thu, 20 Dec 2012 11:53:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tlegk-000711-GY
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:53:22 +0000
Received: from [85.158.143.35:10538] by server-1.bemta-4.messagelabs.com id
	EB/33-28401-13CF2D05; Thu, 20 Dec 2012 11:53:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1356004401!13886027!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20868 invoked from network); 20 Dec 2012 11:53:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:53:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="277429"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:53:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 11:53:20 +0000
Message-ID: <1356004399.26722.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 20 Dec 2012 11:53:19 +0000
In-Reply-To: <20121220113012.GD80837@ocelot.phlegethon.org>
References: <1355929001-19183-1-git-send-email-ian.campbell@citrix.com>
	<20121220113012.GD80837@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] xen: arm: fix guest register access.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 11:30 +0000, Tim Deegan wrote:
> At 14:56 +0000 on 19 Dec (1355929001), Ian Campbell wrote:
> > We weren't taking the guest mode (CPSR) into account and would always
> > access the user version of the registers.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Tim Deegan <tim@xen.org>

Applied, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:54:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlehh-00077B-QN; Thu, 20 Dec 2012 11:54:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tlehg-00076w-Du
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:54:20 +0000
Received: from [85.158.138.51:29033] by server-10.bemta-3.messagelabs.com id
	9E/AB-07616-B6CF2D05; Thu, 20 Dec 2012 11:54:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1356004453!27781955!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18337 invoked from network); 20 Dec 2012 11:54:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 11:54:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="277448"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 11:54:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 11:54:12 +0000
Message-ID: <1356004451.26722.57.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Thu, 20 Dec 2012 11:54:11 +0000
In-Reply-To: <1356002699.26722.48.camel@zakaz.uk.xensource.com>
References: <1355849351.14620.274.camel@zakaz.uk.xensource.com>
	<1355849376-26652-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1212181832100.17523@kaball.uk.xensource.com>
	<1355912423.14620.291.camel@zakaz.uk.xensource.com>
	<20121219104932.GA65599@ocelot.phlegethon.org>
	<1355916124.14620.328.camel@zakaz.uk.xensource.com>
	<1355928840.14620.432.camel@zakaz.uk.xensource.com>
	<20121220112129.GC80837@ocelot.phlegethon.org>
	<1356002699.26722.48.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: arm: head.S PT_DEV is unused,
 drop and rename PT_DEV_L3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 11:24 +0000, Ian Campbell wrote:
> On Thu, 2012-12-20 at 11:21 +0000, Tim Deegan wrote:

> > arm: trim pagetable flag definitions to fit in 80 characters
> > 
> > Signed-off-by: Tim Deegan <tim@xen.org>
> 
> Lateral thought ;-)
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:55:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TleiS-0007CK-8c; Thu, 20 Dec 2012 11:55:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TleiQ-0007C1-90
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:55:06 +0000
Received: from [193.109.254.147:15743] by server-7.bemta-14.messagelabs.com id
	09/BA-08102-99CF2D05; Thu, 20 Dec 2012 11:55:05 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356004496!9028350!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29429 invoked from network); 20 Dec 2012 11:54:56 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 11:54:56 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TleiF-000LKG-6t; Thu, 20 Dec 2012 11:54:55 +0000
Date: Thu, 20 Dec 2012 11:54:55 +0000
From: Tim Deegan <tim@xen.org>
To: Razvan Cojocaru <rzvncj@gmail.com>
Message-ID: <20121220115455.GF80837@ocelot.phlegethon.org>
References: <a23515aabc91bec6e9cf.1355744310@rcojocaru.dsd.ro>
	<1355850255.14620.277.camel@zakaz.uk.xensource.com>
	<50D0A6B1.30702@gmail.com>
	<1355912063.14620.286.camel@zakaz.uk.xensource.com>
	<50D19A2B.2050006@gmail.com>
	<1355916539.14620.332.camel@zakaz.uk.xensource.com>
	<50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D1D5BD.8080001@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
	call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:57 +0200 on 19 Dec (1355936221), Razvan Cojocaru wrote:
> >>     m->overlapped = is_var_mtrr_overlapped(m);
> >>
> >>Looks like that function contains the necessary logic.
> >
> >You're right, but what happens there is that that function depends on
> >the get_mtrr_range() function, which in turn depends on the size_or_mask
> >global variable, which is initialized in hvm_mtrr_pat_init(), which then
> >depends on a global table, and so on. Putting that into libxc is pretty
> >much putting the whole mtrr.c file there.
> 
> This is where it gets tricky:
> 
> static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
>                            uint64_t *base, uint64_t *end)
> {
>     [...]
>     phys_addr = 36;
> 
>     if ( cpuid_eax(0x80000000) >= 0x80000008 )
>         phys_addr = (uint8_t)cpuid_eax(0x80000008);
> 
>     size_or_mask = ~((1 << (phys_addr - PAGE_SHIFT)) - 1);
>     [...]
> }
> 
> specifically, in the cpuid_eax() call, which doesn't make much sense in 
> dom0 userspace.

You can execute the CPUID instruction from dom0 userspace, and get
exactly the same result as Xen does.

(It may, separately, be a bug that Xen uses the host CPUID here and not
a well-defined guest width.  If that gets fixed, the guest address width
will be made available in the save record and you can extract it from
there.)
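For reference, the width probe can be sketched from dom0 userspace like this. This is a hypothetical standalone fragment, not Xen code: it uses GCC/clang's cpuid.h on x86 (falling back to the legacy 36-bit default elsewhere) and widens the mask arithmetic to 64 bits, since the 32-bit `1 <<` in the quoted mtrr.c snippet would overflow for widths above 43:

```c
#include <stdint.h>
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
#endif

/* Mirror Xen's get_mtrr_range() width probe from userspace:
 * CPUID.80000008h:EAX[7:0] is the physical address width, default 36. */
static unsigned int phys_addr_bits(void)
{
    unsigned int width = 36;
#if defined(__x86_64__) || defined(__i386__)
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx)
        && eax >= 0x80000008
        && __get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
        width = (uint8_t)eax;
#endif
    return width;
}

/* size_or_mask as in Xen's mtrr.c, but computed in 64 bits. */
static uint64_t size_or_mask_for(unsigned int width)
{
    const unsigned int PAGE_SHIFT = 12;
    return ~(((uint64_t)1 << (width - PAGE_SHIFT)) - 1);
}
```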

But also, you don't necessarily need to calculate 'overlapped' for each
MTRR in advance.  It may make sense to do so for speed, as Xen does, but
you could also just handle overlapping MTRRs in your lookup function, by
comparing the address to all MTRRs and handling the case where it
matches more than one.
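To illustrate that second option, here is a minimal, hypothetical lookup that resolves overlaps at query time instead of precomputing Xen's 'overlapped' flag. The struct, the constants, and the simplified precedence rule (UC dominates; otherwise the lower type number wins, so WT beats WB) are assumptions for the sketch, not the full SDM rule or Xen's actual mtrr.c:

```c
#include <stdint.h>

/* Hypothetical, simplified variable-range MTRR pair: raw guest values of
 * MTRRphysBase (memory type in bits 7:0) and MTRRphysMask (valid bit 11). */
struct var_mtrr {
    uint64_t base;
    uint64_t mask;
};

#define MTRR_VALID (1ULL << 11)
#define ADDR_MASK  (~0xfffULL)  /* address bits sit above the 4K page offset */

/* Return the memory type for addr, checking every range and resolving
 * overlaps on the fly.  def_type is the default MTRR type. */
static int mtrr_type_for(const struct var_mtrr *m, int n,
                         uint64_t addr, int def_type)
{
    int type = -1;

    for (int i = 0; i < n; i++) {
        if (!(m[i].mask & MTRR_VALID))
            continue;
        /* addr falls in range i iff (addr & mask) == (base & mask) */
        if (((addr ^ m[i].base) & m[i].mask & ADDR_MASK) != 0)
            continue;
        int t = (int)(m[i].base & 0xff);
        if (type < 0)
            type = t;             /* first matching range */
        else if (t == 0 || type == 0)
            type = 0;             /* UC dominates any overlap */
        else if (t < type)
            type = t;             /* simplified: e.g. WT (4) over WB (6) */
    }
    return type < 0 ? def_type : type;
}
```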

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 11:57:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 11:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlekm-0007Rx-14; Thu, 20 Dec 2012 11:57:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tlekl-0007Rn-9R
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 11:57:31 +0000
Received: from [85.158.137.99:65490] by server-10.bemta-3.messagelabs.com id
	CC/B2-07616-A2DF2D05; Thu, 20 Dec 2012 11:57:30 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-217.messagelabs.com!1356004649!20228996!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25373 invoked from network); 20 Dec 2012 11:57:29 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 11:57:29 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tlekh-000LLD-Uu; Thu, 20 Dec 2012 11:57:27 +0000
Date: Thu, 20 Dec 2012 11:57:27 +0000
From: Tim Deegan <tim@xen.org>
To: Razvan Cojocaru <rzvncj@gmail.com>
Message-ID: <20121220115727.GG80837@ocelot.phlegethon.org>
References: <50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
	<50D1E723.4070201@citrix.com>
	<1355933739.14620.456.camel@zakaz.uk.xensource.com>
	<50D1EB56.40400@gmail.com>
	<20121219174627.GA67643@ocelot.phlegethon.org>
	<50D216F7.2050801@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D216F7.2050801@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
	call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 21:35 +0200 on 19 Dec (1355952919), Razvan Cojocaru wrote:
> > result can be cached (since you can also get a mem-event on MSR writes,
> > you don't have to pull all this MTRR state out of the giuest except when
> > the MTRRs have been changed).
> 
> Unfortunately, due to the design of components I don't control, my
> application does not have the luxury of waiting for an MSR write event.

Hmm.  In that case I guess you can't cache the MTRR state between
lookups, which may be a performance problem for you.

> (Has my MSR mem_event patch been acknowledged? Or was this already work
> in progress? I had to patch my Xen source code to be able to get MSR
> events...)

I've replied to that separately. 

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 12:00:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlenm-0007hh-Cq; Thu, 20 Dec 2012 12:00:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1Tlenk-0007hU-N2
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 12:00:37 +0000
Received: from [193.109.254.147:56334] by server-12.bemta-14.messagelabs.com
	id CB/98-06523-4EDF2D05; Thu, 20 Dec 2012 12:00:36 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1356004793!10762376!1
X-Originating-IP: [220.181.15.62]
X-SpamReason: No, hits=1.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDEwNjY5\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDEwNjY5\n,HTML_50_60,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11953 invoked from network); 20 Dec 2012 11:59:55 -0000
Received: from m15-62.126.com (HELO m15-62.126.com) (220.181.15.62)
	by server-12.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 11:59:55 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=zeWUoyCUFLz10uAzSw0s+INkqnlLxuL0QSqq
	cRX9DL4=; b=E6/WqMML9M32lOIPxsV3QxloIC8A+bcnYtZoH1LuODWjf+JfVynf
	P6c7RKVBJRU2eX2gCMkRh1DCbL8QLr0pkk8tCDwn8LU3aLGibSrxP7MrfeMrRtUG
	agxw5pF2K8NblEc19Vmx2UZxSIUlx2mS2Is2NxXlk1x8AiyN4v5/mMY=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr62
	(Coremail) ; Thu, 20 Dec 2012 19:59:51 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Thu, 20 Dec 2012 19:59:51 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: ZSsLA2Zvb3Rlcl9odG09Njg2Mjo4MQ==
MIME-Version: 1.0
Message-ID: <8fef2ae.3016d.13bb82f14c8.Coremail.hxkhust@126.com>
X-CM-TRANSID: PsqowEAJNEa3_dJQK2IeAA--.4973W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitQ6LBUX9kH+-0AACsP
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!!who could explain the two code notes existing in
 ioemu-qemu-xen codes (I'm dying!!!HELP!)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1089396372068235218=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1089396372068235218==
Content-Type: multipart/alternative; 
	boundary="----=_Part_744985_186671286.1356004791496"

------=_Part_744985_186671286.1356004791496
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

I'm doing some research on the Xen 4.1 platform, and I have two questions, which I have tried to phrase so that they are convenient for the experts here to answer.
Question 1:
Here is some code from the file xen-4.1.2/tools/ioemu-qemu-xen/block-raw-posix.c:
....
 629 static BlockDriverAIOCB *raw_aio_read(BlockDriverState *bs,
 630         int64_t sector_num, uint8_t *buf, int nb_sectors,
 631         BlockDriverCompletionFunc *cb, void *opaque)
 632 { 
 633     RawAIOCB *acb;  
 634   
 635     /*
 636      * If O_DIRECT is used and the buffer is not aligned fall back
 637      * to synchronous IO.
 638      */
 639     BDRVRawState *s = bs->opaque;
 640 
 641     if (unlikely(s->aligned_buf != NULL && ((uintptr_t) buf % 512))) {
 642         QEMUBH *bh; 
 643         acb = qemu_aio_get(bs, cb, opaque);
 644         acb->ret = raw_pread(bs, 512 * sector_num, buf, 512 * nb_sectors);
 645         bh = qemu_bh_new(raw_aio_em_cb, acb);
 646         qemu_bh_schedule(bh);     
 647         return &acb->common;      
 648     }
 649                     
 650     acb = raw_aio_setup(bs, sector_num, buf, nb_sectors, cb, opaque);
 651     if (!acb)
 652         return NULL;
 653     if (qemu_paio_read(&acb->aiocb) < 0) {
 654         raw_aio_remove(acb);      
 655         return NULL;
 656     }
 657     return &acb->common;
 658 } 
......
The comment in this code says: if O_DIRECT is used and the buffer is not aligned, fall back to synchronous I/O. What does that mean? I want this function to return only once the exact data is in the buffer that the parameter buf points to; that is to say, I want to turn the asynchronous read into a synchronous one. I have enabled O_DIRECT in the file xenstore.c, and that flag stays in effect all the way down to the function above.


Question 2:
Here is some code from xen-4.1.2/tools/block-qcow.c:
...
594 static void qcow_aio_read_cb(void *opaque, int ret)
595 {
596     QCowAIOCB *acb = opaque;
597     BlockDriverState *bs = acb->common.bs;
598     BDRVQcowState *s = bs->opaque;
599     int index_in_cluster;
600 
601     acb->hd_aiocb = NULL;
602     if (ret < 0) {   
603     fail:            
604         acb->common.cb(acb->common.opaque, ret);
605         qemu_aio_release(acb);    
606         return;      
607     }
.........................
642     if (!acb->cluster_offset) {
643         if (bs->backing_hd) {
644             /* read from the base image */
645             acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
646                 acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);
647             if (acb->hd_aiocb == NULL)
648                 goto fail;
649         } else {
650             /* Note: in this case, no need to wait */
651             memset(acb->buf, 0, 512 * acb->n);
652             goto redo;
653         }
654     } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
655         /* add AIO support for compressed blocks ? */
656         if (decompress_cluster(s, acb->cluster_offset) < 0)
657             goto fail;
658         memcpy(acb->buf,
659                s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
660         goto redo;
661     } else {
662         if ((acb->cluster_offset & 511) != 0) {
663             ret = -EIO;
664             goto fail;
665         }
666         acb->hd_aiocb = bdrv_aio_read(s->hd,
667                             (acb->cluster_offset >> 9) + index_in_cluster,
668                             acb->buf, acb->n, qcow_aio_read_cb, acb);
The code comment on line 650 says "in this case, no need to wait".
What does it mean? What exactly would the code otherwise be waiting for?


If I get some time, I will work through the whole guide on how to ask a smart question that an expert sent me in an earlier mail.
Thanks.


A Newbie







------=_Part_744985_186671286.1356004791496--



--===============1089396372068235218==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1089396372068235218==--



From xen-devel-bounces@lists.xen.org Thu Dec 20 12:11:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tley0-00082u-Ja; Thu, 20 Dec 2012 12:11:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tlexy-00082p-SX
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 12:11:10 +0000
Received: from [85.158.143.99:8289] by server-2.bemta-4.messagelabs.com id
	35/11-30861-E5003D05; Thu, 20 Dec 2012 12:11:10 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356005469!23471364!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10773 invoked from network); 20 Dec 2012 12:11:09 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 12:11:09 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tlexv-000LNf-8Q; Thu, 20 Dec 2012 12:11:07 +0000
Date: Thu, 20 Dec 2012 12:11:07 +0000
From: Tim Deegan <tim@xen.org>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Message-ID: <20121220121107.GH80837@ocelot.phlegethon.org>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-2-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356018231-26440-2-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: eddie.dong@intel.com, keir@xen.org, JBeulich@suse.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 01/10] nestedhap: Change hostcr3 and
	p2m->cr3 to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 23:43 +0800 on 20 Dec (1356047022), Xiantao Zhang wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> VMX doesn't have the concept about host cr3 for nested p2m,
> and only SVM has, so change it to netural words.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> Acded-by: Tim Deegan <tim@xen.org>

Ac_k_ed-by. :)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 12:18:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlf58-0008Dh-Gx; Thu, 20 Dec 2012 12:18:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tlf56-0008Db-LP
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 12:18:32 +0000
Received: from [85.158.138.51:57774] by server-6.bemta-3.messagelabs.com id
	29/01-12154-71203D05; Thu, 20 Dec 2012 12:18:31 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356005911!28557888!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25596 invoked from network); 20 Dec 2012 12:18:31 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 12:18:31 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tlf54-000LOi-Dz; Thu, 20 Dec 2012 12:18:30 +0000
Date: Thu, 20 Dec 2012 12:18:30 +0000
From: Tim Deegan <tim@xen.org>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Message-ID: <20121220121830.GI80837@ocelot.phlegethon.org>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-7-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356018231-26440-7-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: eddie.dong@intel.com, keir@xen.org, JBeulich@suse.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 06/10] nEPT: Sync PDPTR fields if L2
	guest in PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 23:43 +0800 on 20 Dec (1356047027), Xiantao Zhang wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> For a PAE L2 guest, the GUEST_PDPTR registers need to be synced on each
> virtual vmentry.
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Apart from the whitespace mangling that Jan pointed out, 

Acked-by: Tim Deegan <tim@xen.org>

> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |    9 ++++++++-
>  1 files changed, 8 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 2ae6f6a..1f7de7a 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -826,7 +826,14 @@ static void load_shadow_guest_state(struct vcpu *v)
>      vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
>      vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
>  
> -    /* TODO: PDPTRs for nested ept */
> +    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
> +                    (v->arch.hvm_vcpu.guest_efer & EFER_LMA) ) {
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
> +	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
> +    }
> +
>      /* TODO: CR3 target control */
>  }
>  
> -- 
> 1.7.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 12:35:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfLD-0008UH-7m; Thu, 20 Dec 2012 12:35:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1TlfLC-0008UC-0f
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 12:35:10 +0000
Received: from [85.158.143.35:16974] by server-3.bemta-4.messagelabs.com id
	88/3A-18211-DF503D05; Thu, 20 Dec 2012 12:35:09 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-14.tower-21.messagelabs.com!1356006901!15851585!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26023 invoked from network); 20 Dec 2012 12:35:02 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 12:35:02 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id A40BB4015CD;
	Thu, 20 Dec 2012 13:35:01 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id K-LCm3wmIHJ4; Thu, 20 Dec 2012 13:35:01 +0100 (CET)
Received: from [192.168.178.50]
	(host123-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.123])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id CFAF9400494;
	Thu, 20 Dec 2012 13:35:00 +0100 (CET)
Message-ID: <50D305F2.1060109@tiscali.it>
Date: Thu, 20 Dec 2012 13:34:58 +0100
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] Update Seabios on xen-unstable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1820892591039254165=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

--===============1820892591039254165==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms010603050708050903090603"

This is a digitally signed message in MIME format.

--------------ms010603050708050903090603
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: quoted-printable

I saw the good news that qemu-xen on xen-unstable has been updated to the
latest stable version (1.3); I have already started testing it and will
report any bugs I find.
I have seen no SeaBIOS update on xen-unstable yet, although the upstream
1.7.1 release, which includes all the vgabios work, has been available for
a few months.
Would it be possible to update SeaBIOS on xen-unstable, please?
I tried it myself some time ago, hoping it would resolve the qxl vga
problem, without success.
It probably needs some particular settings or modifications, which I am not
aware of, for correct integration of the vgabios with Xen.
Thanks for any reply.


--------------ms010603050708050903090603
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: Firma crittografica S/MIME

--------------ms010603050708050903090603--


--===============1820892591039254165==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1820892591039254165==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 12:52:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfbK-0000py-8v; Thu, 20 Dec 2012 12:51:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TlfbI-0000pX-U4
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 12:51:49 +0000
Received: from [193.109.254.147:23347] by server-15.bemta-14.messagelabs.com
	id 3E/E1-05116-4E903D05; Thu, 20 Dec 2012 12:51:48 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1356007898!3610078!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9373 invoked from network); 20 Dec 2012 12:51:39 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 12:51:39 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tlfb7-000LTS-Gv; Thu, 20 Dec 2012 12:51:37 +0000
Date: Thu, 20 Dec 2012 12:51:37 +0000
From: Tim Deegan <tim@xen.org>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Message-ID: <20121220125137.GJ80837@ocelot.phlegethon.org>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-4-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356018231-26440-4-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: eddie.dong@intel.com, keir@xen.org, JBeulich@suse.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 23:43 +0800 on 20 Dec (1356047024), Xiantao Zhang wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Implement a guest EPT page-table walker; some of the logic is based
> on the shadow code's ia32e PT walker. During the walk, if the target
> pages are not in memory, use the RETRY mechanism to get a chance to
> bring the target pages back.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

This is much nicer than v1, thanks.  I have some comments below, and the
whole thing needs to be checked for whitespace mangling.

> +static bool_t nept_rwx_bits_check(ept_entry_t e) {
> +    /*write only or write/execute only*/
> +    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
> +
> +    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
> +        return 1;
> +
> +    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
> +                        VMX_EPT_EXEC_ONLY_SUPPORTED))

In a later patch you add VMX_EPT_EXEC_ONLY_SUPPORTED to this field.  How
can that work when running on a CPU that doesn't support exec-only?  The
nested-ept tables will have exec-only mapping in them which the CPU will
reject.

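For illustration, the kind of check under discussion can be sketched as follows. This is a standalone sketch, not the patch's code: the constant values and the `vpid_cap` parameter are assumed here so the logic is testable outside Xen; the real definitions live in Xen's VMX/EPT headers.

```c
#include <stdint.h>

/* Values assumed for illustration only. */
#define EPTE_RWX_MASK               0x7u
#define VMX_EPT_EXEC_ONLY_SUPPORTED 0x1u

enum {
    ept_access_w  = 2,  /* write-only */
    ept_access_x  = 4,  /* execute-only */
    ept_access_wx = 6,  /* write/execute-only */
};

/* Returns 1 for RWX combinations a guest EPT entry may never carry:
 * write-only, write/execute-only, and - on hardware that does not
 * advertise exec-only support - execute-only.  Passing the capability
 * word in explicitly makes Tim's point concrete: the exec-only case is
 * only legal when the CPU actually reports it. */
static int nept_rwx_bits_bad(uint64_t epte, uint32_t vpid_cap)
{
    uint8_t rwx = epte & EPTE_RWX_MASK;

    if (rwx == ept_access_w || rwx == ept_access_wx)
        return 1;
    if (rwx == ept_access_x && !(vpid_cap & VMX_EPT_EXEC_ONLY_SUPPORTED))
        return 1;
    return 0;
}
```

With this shape, an exec-only entry (rwx == 4) is rejected on hardware without exec-only support and accepted on hardware with it, rather than being governed by a statically advertised capability bit.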
> +done:
> +    ret = EPT_TRANSLATE_SUCCEED;
> +    goto out;
> +
> +map_err:
> +    if ( rc == _PAGE_PAGED )
> +        ret = EPT_TRANSLATE_RETRY;
> +    else
> +        ret = EPT_TRANSLATE_ERR_PAGE;

What does this error code mean?  The caller just retries the faulting
instruction when it sees it, which sounds wrong.  Why not just return
EPT_TRANSLATE_MISCONFIG if the guest uses an unmappable frame for EPT
tables?

> +    default:
> +        rc = EPT_TRANSLATE_UNSUPPORTED;
> +        gdprintk(XENLOG_ERR, "Unsupported ept translation type!:%d\n", rc);

Just BUG() here and get rid of EPT_TRANSLATE_UNSUPPORTED and
NESTEDHVM_PAGEFAULT_UNHANDLED.  The function that provided rc is right
above and we can see it hasn't got any other return values.

> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
>      /* Translate the GFN to an MFN */
>      ASSERT(!paging_locked_by_me(v->domain));
>      mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
> -        
> +

This stray change should be dropped. 

> +typedef enum {
> +    ept_access_n     = 0, /* No access permissions allowed */
> +    ept_access_r     = 1,
> +    ept_access_w     = 2,
> +    ept_access_rw    = 3,
> +    ept_access_x     = 4,
> +    ept_access_rx    = 5,
> +    ept_access_wx    = 6,
> +    ept_access_all   = 7,
> +} ept_access_t;

This enum isn't used anywhere.  

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Dec 20 12:54:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfdS-0000zn-43; Thu, 20 Dec 2012 12:54:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TlfdP-0000zX-Mp
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 12:54:00 +0000
Received: from [85.158.143.35:29531] by server-3.bemta-4.messagelabs.com id
	7E/97-18211-66A03D05; Thu, 20 Dec 2012 12:53:58 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356007909!16380723!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29460 invoked from network); 20 Dec 2012 12:51:49 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-6.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 12:51:49 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:53363 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TlffY-0000fq-EA; Thu, 20 Dec 2012 13:56:12 +0100
Date: Thu, 20 Dec 2012 13:51:39 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1797374383.20121220135139@eikelenboom.it>
To: Eric Dumazet <erdnetdev@gmail.com>
In-Reply-To: <1355933869.21834.13.camel@edumazet-glaptop>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
MIME-Version: 1.0
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, December 19, 2012, 5:17:49 PM, you wrote:

> On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

>> Hi Ian,
>> 
>> It ran overnight and i haven't seen the warn_once trigger.
>> (but i also didn't with the previous patch)
>> 

> As I said, the minimum value that does not trigger the warning is what
> Ian's patch was doing, but it was still not an accurate estimation.

> Doing the real accounting might trigger slow transfers, or dropped
> packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.

> So the real question was: if accounting for full pages, do your
> applications run as smoothly as before, with no huge performance
> regression?

OK, I have added some extra debug info (see the diffs below). The code still uses the old calculation for truesize (in the hope of triggering the warn_on_once again), but it also calculates the variants IanC came up with.

I haven't got a clear test case that triggers the warn_on_once; it just happens every once in a while during my normal usage, and I'm not a netperf expert :-)
So at the moment I haven't been able to trigger the warn_on_once yet, but the results so far do seem to shed some light:

- The first variant (current code) seems to be the most efficient and a good estimation *most* of the time, but it sometimes triggers the warn_on_once in skb_try_coalesce.
- The first variant (current code) always seems to subtract from the truesize for small packets.
- The second variant seems to keep the truesize as-is for most of the small network traffic, but it also seems to work OK for larger packets.
- The third variant seems to be a pretty wasteful estimation.

So the last variant seems to be rather wasteful, and the second one the most accurate so far.

Eric:
     From the warn_on_once, delta should not be smaller than len, but presumably they should be as close together as possible.
     When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN?



[  116.965062] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  117.094538] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.094707] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.094869] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.095058] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.095216] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.096102] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.096311] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.096373] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.150398] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.150459] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.536901] eth0: mtu:1500 data_len:53642 len before:0 len after:53642 truesize before:896 truesize after:54282 nr_frags:14 variant1:53386(54282) variant2:53386(54282) variant3:57344(58240)
[  117.537463] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
[  117.537915] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
[  117.538543] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18634(19530) variant3:24576(25472)
[  117.539223] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
[  117.539283] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:2 variant1:7050(7946) variant2:7050(7946) variant3:8192(9088)
[  117.539403] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:2
[  117.540035] eth0: mtu:1500 data_len:4410 len before:0 len after:4410 truesize before:896 truesize after:5050 nr_frags:3 variant1:4154(5050) variant2:4304(5200) variant3:12288(13184)
[  117.540153] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
[  121.981917] net_ratelimit: 27 callbacks suppressed
[  121.981960] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  122.985019] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  123.988308] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  124.991961] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  125.995003] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  126.998324] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)



diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index c26e28b..8833e38 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -964,6 +964,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
        struct sk_buff_head tmpq;
        unsigned long flags;
        int err;
+       int tsz,len;

        spin_lock(&np->rx_lock);

@@ -1037,9 +1038,22 @@ err:
                 * receive throughout using the standard receive
                 * buffer size was cut by 25%(!!!).
                 */
-               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
+
+
+
+
+                tsz = skb->truesize;
+                len = skb->len;
+                /* skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags; */
+                skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
                skb->len += skb->data_len;

+               net_warn_ratelimited("%s: mtu:%d data_len:%d len before:%d len after:%d truesize before:%d truesize after:%d nr_frags:%d variant1:%d(%d) variant2:%d(%d) variant3:%d(%d) \n",
+                        skb->dev->name, skb->dev->mtu, skb->data_len, len,  skb->len,tsz, skb->truesize, skb_shinfo(skb)->nr_frags,
+                        skb->data_len - RX_COPY_THRESHOLD, tsz + skb->data_len - RX_COPY_THRESHOLD ,
+                        skb->data_len - NETFRONT_SKB_CB(skb)->pull_to, tsz + skb->data_len - NETFRONT_SKB_CB(skb)->pull_to,
+                        PAGE_SIZE * skb_shinfo(skb)->nr_frags, tsz + (PAGE_SIZE * skb_shinfo(skb)->nr_frags));
+
                if (rx->flags & XEN_NETRXF_csum_blank)
                        skb->ip_summed = CHECKSUM_PARTIAL;
                else if (rx->flags & XEN_NETRXF_data_validated)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ab989b..6d0cd86 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3471,6 +3471,16 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,

        WARN_ON_ONCE(delta < len);

+       if(delta < len) {
+               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA < LEN delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
+                        to->dev->name, from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
+       }
+
+       if (delta > len && delta - len > 100) {
+               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA - LEN > 100 delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
+                        to->dev->name,from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
+       }
+
        memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
               skb_shinfo(from)->frags,
               skb_shinfo(from)->nr_frags * sizeof(skb_frag_t));










_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 12:54:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfdS-0000zn-43; Thu, 20 Dec 2012 12:54:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1TlfdP-0000zX-Mp
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 12:54:00 +0000
Received: from [85.158.143.35:29531] by server-3.bemta-4.messagelabs.com id
	7E/97-18211-66A03D05; Thu, 20 Dec 2012 12:53:58 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356007909!16380723!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29460 invoked from network); 20 Dec 2012 12:51:49 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-6.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 12:51:49 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:53363 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1TlffY-0000fq-EA; Thu, 20 Dec 2012 13:56:12 +0100
Date: Thu, 20 Dec 2012 13:51:39 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1797374383.20121220135139@eikelenboom.it>
To: Eric Dumazet <erdnetdev@gmail.com>
In-Reply-To: <1355933869.21834.13.camel@edumazet-glaptop>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
MIME-Version: 1.0
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, December 19, 2012, 5:17:49 PM, you wrote:

> On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

>> Hi Ian,
>> 
>> It ran overnight and i haven't seen the warn_once trigger.
>> (but i also didn't with the previous patch)
>> 

> As I said, the miminum value to not trigger the warning was what Ian
> patch was doing, but it was still a not accurate estimation.

> Doing the real accounting might trigger slow transferts, or dropped
> packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.

> So the real question was : If accounting for full pages, is your
> applications run as smooth as before, with no huge performance
> regression ?

Ok i have added some extra debug info (see diff's below), the code still uses the old calculation for truesize (in the hope to trigger the warn_on_once again), but also calculates the variants IanC came up with.

I haven't got a clear test case to trigger the warn_on_once, it happens just every once in a while during my normal usage and i'm not a netperf expert :-)
So at the moment i haven't been able to trigger the warn_on_once yet, but the results so far do seem to shed some light ..

- The first variant (current code) seems to be the most effcient and a good estimation *most* of the the, but sometimes triggers the warn_on_once in skb_try_coalesce.
- The first variant (current code) seems to always substract from the truesize for small packets.
- The second variant always seems keep the truesize as is for most of the small network traffic, but it also seems to work ok for larger packets.
- The third variant seems to be a pretty wasteful estimation.

So the last variant seems to be rather wasteful, and the second one the most accurate so far.

Eric:
     From the warn_on_once, delta should be smaller than len, but probably they should be as close together as possible.
     When you say "accurate estimation", what would be a acceptable difference between DELTA and LEN ?



[  116.965062] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  117.094538] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.094707] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.094869] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.095058] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.095216] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.096102] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.096311] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.096373] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.150398] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.150459] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
[  117.536901] eth0: mtu:1500 data_len:53642 len before:0 len after:53642 truesize before:896 truesize after:54282 nr_frags:14 variant1:53386(54282) variant2:53386(54282) variant3:57344(58240)
[  117.537463] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
[  117.537915] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
[  117.538543] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18634(19530) variant3:24576(25472)
[  117.539223] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
[  117.539283] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:2 variant1:7050(7946) variant2:7050(7946) variant3:8192(9088)
[  117.539403] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:2
[  117.540035] eth0: mtu:1500 data_len:4410 len before:0 len after:4410 truesize before:896 truesize after:5050 nr_frags:3 variant1:4154(5050) variant2:4304(5200) variant3:12288(13184)
[  117.540153] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
[  121.981917] net_ratelimit: 27 callbacks suppressed
[  121.981960] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  122.985019] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  123.988308] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  124.991961] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  125.995003] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
[  126.998324] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
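The three candidate truesize adjustments printed in the log above can be reproduced with a small arithmetic sketch. RX_COPY_THRESHOLD is assumed to be 256 here, since that is the value consistent with the printed numbers (e.g. 53642 - 256 = 53386 for variant1); the helper names are illustrative, not from the driver:

```c
#include <assert.h>

#define RX_COPY_THRESHOLD 256   /* assumed; matches the log arithmetic */
#define PAGE_SIZE 4096L

/* variant1: the current accounting (data_len minus the copy threshold);
 * goes negative for packets smaller than the threshold, as in the
 * -202/-214 entries above. */
static long variant1(long data_len)
{
    return data_len - RX_COPY_THRESHOLD;
}

/* variant2: charge only the bytes left in the frags after pulling
 * pull_to bytes into the linear area. */
static long variant2(long data_len, long pull_to)
{
    return data_len - pull_to;
}

/* variant3: charge one full page per fragment. */
static long variant3(long nr_frags)
{
    return PAGE_SIZE * nr_frags;
}
```

For small packets (data_len 54, pull_to 54) variant1 goes negative while variant2 is zero; for large GSO packets all three agree in order of magnitude, with variant3 the most pessimistic.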

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index c26e28b..8833e38 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -964,6 +964,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
        struct sk_buff_head tmpq;
        unsigned long flags;
        int err;
+       int tsz, len;

        spin_lock(&np->rx_lock);

@@ -1037,9 +1038,19 @@ err:
                 * receive throughout using the standard receive
                 * buffer size was cut by 25%(!!!).
                 */
-               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
+
+                tsz = skb->truesize;
+                len = skb->len;
+                /* skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags; */
+                skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
                skb->len += skb->data_len;

+               net_warn_ratelimited("%s: mtu:%d data_len:%d len before:%d len after:%d truesize before:%d truesize after:%d nr_frags:%d variant1:%d(%d) variant2:%d(%d) variant3:%d(%d) \n",
+                        skb->dev->name, skb->dev->mtu, skb->data_len, len,  skb->len,tsz, skb->truesize, skb_shinfo(skb)->nr_frags,
+                        skb->data_len - RX_COPY_THRESHOLD, tsz + skb->data_len - RX_COPY_THRESHOLD ,
+                        skb->data_len - NETFRONT_SKB_CB(skb)->pull_to, tsz + skb->data_len - NETFRONT_SKB_CB(skb)->pull_to,
+                        PAGE_SIZE * skb_shinfo(skb)->nr_frags, tsz + (PAGE_SIZE * skb_shinfo(skb)->nr_frags));
+
                if (rx->flags & XEN_NETRXF_csum_blank)
                        skb->ip_summed = CHECKSUM_PARTIAL;
                else if (rx->flags & XEN_NETRXF_data_validated)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ab989b..6d0cd86 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3471,6 +3471,16 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,

        WARN_ON_ONCE(delta < len);

+       if (delta < len) {
+               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA < LEN delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
+                        to->dev->name, from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
+       }
+
+       if (delta > len && delta - len > 100) {
+               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA - LEN > 100 delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
+                        to->dev->name,from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
+       }
+
        memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
               skb_shinfo(from)->frags,
               skb_shinfo(from)->nr_frags * sizeof(skb_frag_t));

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 12:54:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 12:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlfe1-00013P-Iv; Thu, 20 Dec 2012 12:54:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tlfe0-00013A-Ca
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 12:54:36 +0000
Received: from [85.158.138.51:63111] by server-10.bemta-3.messagelabs.com id
	D1/BA-07616-B8A03D05; Thu, 20 Dec 2012 12:54:35 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1356008073!29845866!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30850 invoked from network); 20 Dec 2012 12:54:33 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 12:54:33 -0000
Received: (qmail 16014 invoked from network); 20 Dec 2012 14:54:31 +0200
Received: from rcojocaru.dsd.ro (10.10.14.59)
	by mail.bitdefender.com with SMTP; 20 Dec 2012 14:54:31 +0200
MIME-Version: 1.0
X-Mercurial-Node: e33d3d37dfbf24f3b4addb672b29e3f86b74f163
Message-Id: <e33d3d37dfbf24f3b4ad.1356008134@rcojocaru.dsd.ro>
User-Agent: Mercurial-patchbomb/2.4.1
Date: Thu, 20 Dec 2012 14:55:34 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
To: xen-devel@lists.xensource.com
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234647,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_GMAIL_WITH_XMAILER_ADN;
	NN_LEGIT_SUMM_400_WORDS; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS;
	NN_LEGIT_MAILING_LIST_TO], SGN: [Enabled], URL: [Enabled], URI DNSBL:
	[Disabled], SQMD: [Enabled, Hits: none, MD5:
	8cddb866bd5dbc42b65da58fd82e1961.fuzzy.fzrbl.org], RTDA: [Enabled,
	Hit: No, Details: v1.4.6; Id: 2m1g3t8.17epd43g5.2rii3], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44457
Cc: tim@xen.org
Subject: [Xen-devel] [PATCH V2] mem_event: Add support for
	MEM_EVENT_REASON_MSR
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the new MEM_EVENT_REASON_MSR event type. It works similarly
to the other register events, except that event.gla always contains
the MSR type (in addition to event.gfn, which holds the value).
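The gfn/gla packing described above can be sketched from a listener's point of view. The struct and helper below use hypothetical names (they are not from the Xen tree) and simply mirror the argument order used by hvm_memory_event_msr() in the patch:

```c
#include <assert.h>

/* Hypothetical consumer-side view of an MSR mem_event: per the patch,
 * the MSR value travels in the event's gfn field and the MSR type
 * (index) in gla. Names here are illustrative only. */
struct msr_event_view {
    unsigned long msr;    /* taken from event.gla */
    unsigned long value;  /* taken from event.gfn */
};

static struct msr_event_view decode_msr_event(unsigned long gfn,
                                              unsigned long gla)
{
    struct msr_event_view v;
    v.msr = gla;
    v.value = gfn;
    return v;
}
```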

Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>
Acked-by: Tim Deegan <tim@xen.org>

diff -r b04de677de31 -r e33d3d37dfbf xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Tue Dec 18 18:16:52 2012 +0000
+++ b/xen/arch/x86/hvm/hvm.c	Thu Dec 20 14:52:52 2012 +0200
@@ -2927,6 +2927,8 @@ int hvm_msr_write_intercept(unsigned int
     hvm_cpuid(1, &cpuid[0], &cpuid[1], &cpuid[2], &cpuid[3]);
     mtrr = !!(cpuid[3] & cpufeat_mask(X86_FEATURE_MTRR));
 
+    hvm_memory_event_msr(msr, msr_content);
+
     switch ( msr )
     {
     case MSR_EFER:
@@ -3857,6 +3859,7 @@ long do_hvm_op(unsigned long op, XEN_GUE
             case HVM_PARAM_MEMORY_EVENT_CR0:
             case HVM_PARAM_MEMORY_EVENT_CR3:
             case HVM_PARAM_MEMORY_EVENT_CR4:
+            case HVM_PARAM_MEMORY_EVENT_MSR:
                 if ( d == current->domain )
                     rc = -EPERM;
                 break;
@@ -4485,6 +4488,14 @@ void hvm_memory_event_cr4(unsigned long 
                            value, old, 0, 0);
 }
 
+void hvm_memory_event_msr(unsigned long msr, unsigned long value)
+{
+    hvm_memory_event_traps(current->domain->arch.hvm_domain
+                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
+                           MEM_EVENT_REASON_MSR,
+                           value, ~value, 1, msr);
+}
+
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
diff -r b04de677de31 -r e33d3d37dfbf xen/include/asm-x86/hvm/hvm.h
--- a/xen/include/asm-x86/hvm/hvm.h	Tue Dec 18 18:16:52 2012 +0000
+++ b/xen/include/asm-x86/hvm/hvm.h	Thu Dec 20 14:52:52 2012 +0200
@@ -448,6 +448,7 @@ int hvm_x2apic_msr_write(struct vcpu *v,
 void hvm_memory_event_cr0(unsigned long value, unsigned long old);
 void hvm_memory_event_cr3(unsigned long value, unsigned long old);
 void hvm_memory_event_cr4(unsigned long value, unsigned long old);
+void hvm_memory_event_msr(unsigned long msr, unsigned long value);
 /* Called for current VCPU on int3: returns -1 if no listener */
 int hvm_memory_event_int3(unsigned long gla);
 
diff -r b04de677de31 -r e33d3d37dfbf xen/include/public/hvm/params.h
--- a/xen/include/public/hvm/params.h	Tue Dec 18 18:16:52 2012 +0000
+++ b/xen/include/public/hvm/params.h	Thu Dec 20 14:52:52 2012 +0200
@@ -126,6 +126,7 @@
 #define HVM_PARAM_MEMORY_EVENT_CR4          22
 #define HVM_PARAM_MEMORY_EVENT_INT3         23
 #define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
+#define HVM_PARAM_MEMORY_EVENT_MSR          30
 
 #define HVMPME_MODE_MASK       (3 << 0)
 #define HVMPME_mode_disabled   0
@@ -141,6 +142,6 @@
 #define HVM_PARAM_ACCESS_RING_PFN   28
 #define HVM_PARAM_SHARING_RING_PFN  29
 
-#define HVM_NR_PARAMS          30
+#define HVM_NR_PARAMS          31
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
diff -r b04de677de31 -r e33d3d37dfbf xen/include/public/mem_event.h
--- a/xen/include/public/mem_event.h	Tue Dec 18 18:16:52 2012 +0000
+++ b/xen/include/public/mem_event.h	Thu Dec 20 14:52:52 2012 +0200
@@ -45,6 +45,8 @@
 #define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
 #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
 #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
+#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR type;
+                                             does NOT honour HVMPME_onchangeonly */
 
 typedef struct mem_event_st {
     uint32_t flags;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:02:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:02:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlfks-0001QC-Jo; Thu, 20 Dec 2012 13:01:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tlfkq-0001Q7-M5
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:01:40 +0000
Received: from [85.158.138.51:59709] by server-16.bemta-3.messagelabs.com id
	C4/42-27634-F2C03D05; Thu, 20 Dec 2012 13:01:35 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-174.messagelabs.com!1356008489!21748295!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12703 invoked from network); 20 Dec 2012 13:01:29 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 13:01:29 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Tlfke-000LVx-Mu; Thu, 20 Dec 2012 13:01:28 +0000
Date: Thu, 20 Dec 2012 13:01:28 +0000
From: Tim Deegan <tim@xen.org>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Message-ID: <20121220130128.GK80837@ocelot.phlegethon.org>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-5-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356018231-26440-5-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: eddie.dong@intel.com, keir@xen.org, JBeulich@suse.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 04/10] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 23:43 +0800 on 20 Dec (1356047025), Xiantao Zhang wrote:
> From: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> Share the current EPT logic with the nested EPT case, making
> the related data structures and operations neutral to common
> EPT and nested EPT.
> 
> Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>

Looking good.  I would ack this, except for one thing -- you've made
p2m_initialise() return an error code but not updated either of the
callers to use that code.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:03:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlfml-0001VH-4T; Thu, 20 Dec 2012 13:03:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tlfmj-0001V9-Us
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:03:38 +0000
Received: from [85.158.139.83:31852] by server-4.bemta-5.messagelabs.com id
	8C/31-14693-9AC03D05; Thu, 20 Dec 2012 13:03:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1356008616!23365704!1
X-Originating-IP: [209.85.214.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25587 invoked from network); 20 Dec 2012 13:03:36 -0000
Received: from mail-bk0-f43.google.com (HELO mail-bk0-f43.google.com)
	(209.85.214.43)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 13:03:36 -0000
Received: by mail-bk0-f43.google.com with SMTP id jf20so1672489bkc.30
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 05:03:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:user-agent:date:subject:from:to:cc:message-id
	:thread-topic:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=gUuo2LfoUR8irn9Wh7mnYGMcNKtkOfj07kl4ZdYwMQo=;
	b=xrgxpcnXmQmBDMlE86dT9ojxEEfyg75lQUcH3JFK+k5Bkny5DHsqoshpOSpljclUoB
	qp+SvfGChyLRLF1HviaFe5NGyDHvpY95BXLM8nGDqrcWI6sVBmPsRvsrRJ0XXjz5CHdi
	lZFoADlgnihfP42au0LfOrYhlvhCFwA1jOhYQXkimEGk6PmokFDR3P7C5/yYCOYagLN2
	/a3+L78Z76us7253hvMte2ETXt9HLQJKZGPDJCx+JLhdYfPK5EM03JFJgqcmhGzDc7ie
	N7+7njSL1HSixUNbuuYL3n2Uaw1CvSHpZkqwxqXMxKJ0C3FAP65bCR2JdtLQsrGJtxfw
	1zuQ==
X-Received: by 10.204.3.211 with SMTP id 19mr4476983bko.99.1356008615969;
	Thu, 20 Dec 2012 05:03:35 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id o9sm7051814bko.15.2012.12.20.05.03.34
	(version=SSLv3 cipher=OTHER); Thu, 20 Dec 2012 05:03:35 -0800 (PST)
User-Agent: Microsoft-Entourage/12.35.0.121009
Date: Thu, 20 Dec 2012 13:03:31 +0000
From: Keir Fraser <keir@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>,
	"G.R." <firemeteor@users.sourceforge.net>
Message-ID: <CCF8BD23.561A3%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
	resource conflict for OpRegion.
Thread-Index: Ac3esmj0zTxJongNz0Sl/j32Fy2Siw==
In-Reply-To: <1356000101.26722.41.camel@zakaz.uk.xensource.com>
Mime-version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/2012 10:41, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:

> Adding our qemu maintainer.
> 
> On Thu, 2012-12-20 at 03:56 +0000, G.R. wrote:
>> Switch to a new address that can reach Jean.
>> 
>> On Thu, Dec 20, 2012 at 11:52 AM, G.R. <firemeteor@users.sourceforge.net>
>> wrote:
>>> This is hvmloader part of the change that gets rid of the resource
>>> conflict warning in the guest kernel.
>>> The OpRegion may not always be page aligned.
> 
> Is it worth detecting this and allocating 2 or 3 pages as required?
> 
> The OpRegion is always 8096 bytes? (two pages, but not necessarily
> aligned)?
> 
> Do we need to worry about what is in the "slop" at either end of a 3
> page region containing this? If they are sensitive registers then we may
> have a problem.

In the hvmloader patch it is not worth it, I think: one extra page of
memory hole is hardly a scarce resource.

I don't know whether the qemu side is accurate enough. If the region is 8096
bytes then it is not necessarily the case that an unaligned start address
means we need three pages mapped.
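The point about an 8096-byte region can be checked with a little arithmetic sketch (PAGE_SIZE assumed to be 4096; the helper name is illustrative):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Number of pages touched by a region of 'size' bytes starting at an
 * address with page offset 'off' (0 <= off < PAGE_SIZE). */
static unsigned long pages_spanned(unsigned long off, unsigned long size)
{
    return (off + size + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

An 8096-byte OpRegion starting up to 96 bytes into a page still fits in two pages; only a larger offset forces a third. By contrast, a full 8192-byte region needs three pages for any non-zero offset.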

 -- Keir

> Ian.
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/2012 10:41, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:

> Adding our qemu maintainer.
> 
> On Thu, 2012-12-20 at 03:56 +0000, G.R. wrote:
>> Switch to a new address that can reach Jean.
>> 
>> On Thu, Dec 20, 2012 at 11:52 AM, G.R. <firemeteor@users.sourceforge.net>
>> wrote:
>>> This is hvmloader part of the change that gets rid of the resource
>>> conflict warning in the guest kernel.
>>> The OpRegion may not always be page aligned.
> 
> Is it worth detecting this and allocating 2 or 3 pages as required?
> 
> The OpRegion is always 8096 bytes? (two pages, but not necessarily
> aligned)?
> 
> Do we need to worry about what is in the "slop" at either end of a 3
> page region containing this? If they are sensitive registers then we may
> have a problem.

In the hvmloader patch it is not worth it, I think; one extra page of memory
hole is hardly a scarce resource.

I don't know whether the qemu side is accurate enough. If the region is 8096
bytes then it is not necessarily the case that an unaligned start address
means we need three pages mapped.
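Keir's point about unaligned starts can be made concrete with a small page-count calculation (an illustrative sketch, not code from either patch; the helper name and PAGE_SIZE constant are assumptions):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Illustrative helper (not from the patch): how many 4 KiB pages a
 * region of 'len' bytes starting at address 'addr' touches. A full
 * 8192-byte region needs 2 pages when page-aligned and 3 otherwise,
 * but a region slightly under two pages (e.g. 8096 bytes) still fits
 * in 2 pages for a small enough page offset -- the point made above. */
static unsigned pages_spanned(uint64_t addr, uint64_t len)
{
    uint64_t first = addr / PAGE_SIZE;
    uint64_t last = (addr + len - 1) / PAGE_SIZE;
    return (unsigned)(last - first + 1);
}
```

For example, an 8096-byte region at offset 0x60 into a page spans 2 pages, while the same region at offset 0x100 spans 3.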

 -- Keir

> Ian.
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:10:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlftZ-0001uF-Kt; Thu, 20 Dec 2012 13:10:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TlftX-0001u4-S7
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:10:39 +0000
Received: from [85.158.138.51:38879] by server-11.bemta-3.messagelabs.com id
	1E/59-13335-F4E03D05; Thu, 20 Dec 2012 13:10:39 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356009038!28567584!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10275 invoked from network); 20 Dec 2012 13:10:38 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 13:10:38 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TlftV-000LWy-8i; Thu, 20 Dec 2012 13:10:37 +0000
Date: Thu, 20 Dec 2012 13:10:37 +0000
From: Tim Deegan <tim@xen.org>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Message-ID: <20121220131037.GL80837@ocelot.phlegethon.org>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-8-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356018231-26440-8-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: eddie.dong@intel.com, keir@xen.org, JBeulich@suse.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 07/10] nEPT: Use minimal permission for
	nested p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 23:43 +0800 on 20 Dec (1356047028), Xiantao Zhang wrote:
> @@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      struct p2m_domain *p2m, *nested_p2m;
>      unsigned int page_order_21, page_order_10, page_order_20;
>      p2m_type_t p2mt_10;
> +    p2m_access_t p2ma_10 = p2m_access_rwx;
> +    uint8_t p2ma_21;
>  
>      p2m = p2m_get_hostp2m(d); /* L0 p2m */
>      nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
>  
>      /* walk the L1 P2M table */
> -    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
> +    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
>          access_r, access_w, access_x);

Once again, please either initialise p2ma_21 to rwx or have the SVM
version of this lookup set it to something sensible. 
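The defensive default being asked for looks roughly like this (a minimal stand-alone sketch; the enum and stub function are stand-ins for Xen's real p2m types and walk code, not the actual patch):

```c
/* Stand-in for Xen's p2m_access_t; values here are illustrative. */
typedef enum { p2m_access_n, p2m_access_rwx } p2m_access_t;

/* Sketch of the reviewed pattern: initialise the output access value
 * to the most permissive setting so a walk that never writes it
 * (e.g. an SVM path with no access notion) still yields a defined,
 * sensible result. */
static p2m_access_t walk_l1_p2m(int path_sets_access)
{
    p2m_access_t p2ma_21 = p2m_access_rwx;  /* safe default, per review */
    if (path_sets_access)
        p2ma_21 = p2m_access_n;             /* a path that does set it */
    return p2ma_21;
}
```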

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:12:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfvZ-00024z-4i; Thu, 20 Dec 2012 13:12:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlfvY-00024t-Dz
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:12:44 +0000
Received: from [85.158.138.51:36508] by server-16.bemta-3.messagelabs.com id
	01/61-27634-9CE03D05; Thu, 20 Dec 2012 13:12:41 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1356009159!21842829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25652 invoked from network); 20 Dec 2012 13:12:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 13:12:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="1404556"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 13:12:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 08:12:39 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlfvS-0006On-T3;
	Thu, 20 Dec 2012 13:12:38 +0000
Date: Thu, 20 Dec 2012 13:12:28 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Christopher Covington <cov@codeaurora.org>
In-Reply-To: <50D1DC6F.3010304@codeaurora.org>
Message-ID: <alpine.DEB.2.02.1212201310080.17523@kaball.uk.xensource.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
	<20121218131401.GB22139@mudshark.cambridge.arm.com>
	<50D0AF5C.1070605@codeaurora.org>
	<50D0B384.40605@arm.com> <50D1DC6F.3010304@codeaurora.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Marc Zyngier <marc.zyngier@arm.com>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Will Deacon <Will.Deacon@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Dec 2012, Christopher Covington wrote:
> > What would be the point of using DCC?
> 
> It seemed like ARM-Ltd.-architected peripherals were picked for the timer and
> interrupt controller, so I wondered why not for the console as well. As best
> I'm aware, unless one ventures into the PrimeCell line with the PL011, DCC is
> the closest match for an officially architected console mechanism.

I certainly wouldn't want to mandate the presence of any devices that
require emulation in the DT for a virtual machine, including DCC, as
simple as it might be.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:13:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:13:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfwX-0002D6-P4; Thu, 20 Dec 2012 13:13:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlfwW-0002Cr-Qf
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:13:45 +0000
Received: from [85.158.143.35:44953] by server-3.bemta-4.messagelabs.com id
	91/E7-18211-80F03D05; Thu, 20 Dec 2012 13:13:44 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1356009221!5252791!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15083 invoked from network); 20 Dec 2012 13:13:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 13:13:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="1404666"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 13:13:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 08:13:40 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlfwS-0006QT-Ns;
	Thu, 20 Dec 2012 13:13:40 +0000
Date: Thu, 20 Dec 2012 13:13:30 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWbezhvrAV=a=tvJmWosfeN0xsPb74sjqB6J9JViv2F7BQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212201313081.17523@kaball.uk.xensource.com>
References: <CAKhsbWbezhvrAV=a=tvJmWosfeN0xsPb74sjqB6J9JViv2F7BQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] qemu-xen:Correctly expose PCH ISA bridge
 for IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Dec 2012, G.R. wrote:
> Fix IGD passthrough logic to properly expose the PCH ISA bridge (instead
> of exposing it as a PCI-PCI bridge). The i915 driver requires this to
> correctly detect the PCH version and enable version-specific code
> paths.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
> Timothy Guo <firemeteor@users.sourceforge.net>

The patch is fine by me

> diff --git a/hw/pci.c b/hw/pci.c
> index f051de1..d371bd7 100644
> --- a/hw/pci.c
> +++ b/hw/pci.c
> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>      }
>  }
> 
> -typedef struct {
> -    PCIDevice dev;
> -    PCIBus *bus;
> -} PCIBridge;
> -
>  void pci_bridge_write_config(PCIDevice *d,
>                               uint32_t address, uint32_t val, int len)
>  {
> diff --git a/hw/pci.h b/hw/pci.h
> index edc58b6..c2acab9 100644
> --- a/hw/pci.h
> +++ b/hw/pci.h
> @@ -222,6 +222,11 @@ struct PCIDevice {
>      int irq_state[4];
>  };
> 
> +typedef struct {
> +    PCIDevice dev;
> +    PCIBus *bus;
> +} PCIBridge;
> +
>  extern char direct_pci_str[];
>  extern int direct_pci_msitranslate;
>  extern int direct_pci_power_mgmt;
> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
> index c6f8869..de21f90 100644
> --- a/hw/pt-graphics.c
> +++ b/hw/pt-graphics.c
> @@ -3,6 +3,7 @@
>   */
> 
>  #include "pass-through.h"
> +#include "pci.h"
>  #include "pci/header.h"
>  #include "pci/pci.h"
> 
> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
> 
> -    if ( vid == PCI_VENDOR_ID_INTEL )
> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
> -                        pch_map_irq, "intel_bridge_1f");
> +    if (vid == PCI_VENDOR_ID_INTEL) {
> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL, pci_bridge_write_config);
> +
> +        pci_config_set_vendor_id(s->dev.config, vid);
> +        pci_config_set_device_id(s->dev.config, did);
> +
> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
> +        s->dev.config[PCI_REVISION] = rid;
> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
> +        s->dev.config[PCI_HEADER_TYPE] = 0x80;
> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
> +
> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
> +    }
>  }
> 
>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)
> 
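For context on why the class code in the patch matters: the i915 driver identifies the PCH by scanning for an Intel device with the ISA-bridge class code and keying off its device ID, so a bridge exposed with a PCI-PCI class makes that scan fail. A hedged sketch of that detection shape (constants, struct, and function names are illustrative, not i915's actual code):

```c
#include <stdint.h>

#define PCI_VENDOR_ID_INTEL  0x8086
#define PCI_CLASS_BRIDGE_ISA 0x0601

/* Minimal stand-in for a PCI device as seen during bus enumeration. */
struct pci_dev_info {
    uint16_t vendor_id, device_id, class_code;
};

/* Sketch: return the device ID of the first Intel ISA bridge found,
 * or 0 if none -- roughly the shape of a PCH-detection scan. */
static uint16_t detect_pch_id(const struct pci_dev_info *devs, int n)
{
    for (int i = 0; i < n; i++)
        if (devs[i].vendor_id == PCI_VENDOR_ID_INTEL &&
            devs[i].class_code == PCI_CLASS_BRIDGE_ISA)
            return devs[i].device_id;
    return 0;  /* no ISA bridge exposed => detection fails */
}
```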

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:13:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:13:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlfwX-0002D6-P4; Thu, 20 Dec 2012 13:13:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1TlfwW-0002Cr-Qf
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:13:45 +0000
Received: from [85.158.143.35:44953] by server-3.bemta-4.messagelabs.com id
	91/E7-18211-80F03D05; Thu, 20 Dec 2012 13:13:44 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1356009221!5252791!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15083 invoked from network); 20 Dec 2012 13:13:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 13:13:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,322,1355097600"; 
   d="scan'208";a="1404666"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 13:13:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 08:13:40 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1TlfwS-0006QT-Ns;
	Thu, 20 Dec 2012 13:13:40 +0000
Date: Thu, 20 Dec 2012 13:13:30 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: G.R. <firemeteor@users.sourceforge.net>
In-Reply-To: <CAKhsbWbezhvrAV=a=tvJmWosfeN0xsPb74sjqB6J9JViv2F7BQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1212201313081.17523@kaball.uk.xensource.com>
References: <CAKhsbWbezhvrAV=a=tvJmWosfeN0xsPb74sjqB6J9JViv2F7BQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] qemu-xen:Correctly expose PCH ISA bridge
 for IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Dec 2012, G.R. wrote:
> Fix IGD passthrough logic to properly expose PCH ISA bridge (instead
> of exposing as pci-pci bridge). The i915 driver require this to
> correctly detect the PCH version and enable version specific code
> path.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
> Timothy Guo <firemeteor@users.sourceforge.net>

The patch is fine by me

> diff --git a/hw/pci.c b/hw/pci.c
> index f051de1..d371bd7 100644
> --- a/hw/pci.c
> +++ b/hw/pci.c
> @@ -871,11 +871,6 @@ void pci_unplug_netifs(void)
>      }
>  }
> 
> -typedef struct {
> -    PCIDevice dev;
> -    PCIBus *bus;
> -} PCIBridge;
> -
>  void pci_bridge_write_config(
> PCIDevice *d,
>                               uint32_t address, uint32_t val, int len)
>  {
> diff --git a/hw/pci.h b/hw/pci.h
> index edc58b6..c2acab9 100644
> --- a/hw/pci.h
> +++ b/hw/pci.h
> @@ -222,6 +222,11 @@ struct PCIDevice {
>      int irq_state[4];
>  };
> 
> +typedef struct {
> +    PCIDevice dev;
> +    PCIBus *bus;
> +} PCIBridge;
> +
>  extern char direct_pci_str[];
>  extern int direct_pci_msitranslate;
>  extern int direct_pci_power_mgmt;
> diff --git a/hw/pt-graphics.c b/hw/pt-graphics.c
> index c6f8869..de21f90 100644
> --- a/hw/pt-graphics.c
> +++ b/hw/pt-graphics.c
> @@ -3,6 +3,7 @@
>   */
> 
>  #include "pass-through.h"
> +#include "pci.h"
>  #include "pci/header.h"
>  #include "pci/pci.h"
> 
> @@ -40,9 +41,26 @@ void intel_pch_init(PCIBus *bus)
>      did = pt_pci_host_read(pci_dev_1f, PCI_DEVICE_ID, 2);
>      rid = pt_pci_host_read(pci_dev_1f, PCI_REVISION, 1);
> 
> -    if ( vid == PCI_VENDOR_ID_INTEL )
> -        pci_bridge_init(bus, PCI_DEVFN(0x1f, 0), vid, did, rid,
> -                        pch_map_irq, "intel_bridge_1f");
> +    if (vid == PCI_VENDOR_ID_INTEL) {
> +        PCIBridge *s = (PCIBridge *)pci_register_device(bus, "intel_bridge_1f",
> +                sizeof(PCIBridge), PCI_DEVFN(0x1f, 0), NULL,
> +                pci_bridge_write_config);
> +
> +        pci_config_set_vendor_id(s->dev.config, vid);
> +        pci_config_set_device_id(s->dev.config, did);
> +
> +        s->dev.config[PCI_COMMAND] = 0x06; // command = bus master, pci mem
> +        s->dev.config[PCI_COMMAND + 1] = 0x00;
> +        s->dev.config[PCI_STATUS] = 0xa0; // status = fast back-to-back, 66MHz, no error
> +        s->dev.config[PCI_STATUS + 1] = 0x00; // status = fast devsel
> +        s->dev.config[PCI_REVISION] = rid;
> +        s->dev.config[PCI_CLASS_PROG] = 0x00; // programming i/f
> +        pci_config_set_class(s->dev.config, PCI_CLASS_BRIDGE_ISA);
> +        s->dev.config[PCI_LATENCY_TIMER] = 0x10;
> +        s->dev.config[PCI_HEADER_TYPE] = 0x80;
> +        s->dev.config[PCI_SEC_STATUS] = 0xa0;
> +
> +        s->bus = pci_register_secondary_bus(&s->dev, pch_map_irq);
> +    }
>  }
> 
>  uint32_t igd_read_opregion(struct pt_dev *pci_dev)
> 
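[Editor's note] For context on why the class code matters, here is a minimal sketch (not the actual i915 code; the structure, function name, and the `0x1c4a`/mask values are illustrative assumptions): i915-style PCH detection scans for an Intel device with the ISA-bridge class code and keys its version-specific paths off that device's ID, so a bridge exposed with a generic PCI-PCI bridge class is never found.

```c
#include <assert.h>
#include <stdint.h>

#define PCI_CLASS_BRIDGE_ISA    0x0601
#define PCI_VENDOR_ID_INTEL     0x8086
/* Illustrative PCH generation prefix and mask (assumptions for the sketch). */
#define PCH_CPT_DEVICE_ID_TYPE  0x1c00
#define PCH_DEVICE_ID_MASK      0xff00

struct fake_pci_dev {
    uint16_t vendor_id;
    uint16_t device_id;
    uint16_t class_code;
};

/* Scan a device list for an Intel ISA bridge and return its PCH
 * generation prefix, or 0 if none is found (e.g. when the bridge is
 * exposed with a PCI-PCI bridge class code instead). */
static uint16_t detect_pch(const struct fake_pci_dev *devs, int n)
{
    for (int i = 0; i < n; i++) {
        if (devs[i].vendor_id == PCI_VENDOR_ID_INTEL &&
            devs[i].class_code == PCI_CLASS_BRIDGE_ISA)
            return devs[i].device_id & PCH_DEVICE_ID_MASK;
    }
    return 0;
}
```

This is why the patch moves the emulated 00:1f.0 device from the generic bridge class to PCI_CLASS_BRIDGE_ISA.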

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:25:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:25:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlg7Y-0002Zh-07; Thu, 20 Dec 2012 13:25:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marc.zyngier@arm.com>) id 1Tlg7W-0002Zb-9s
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:25:06 +0000
Received: from [85.158.139.83:28741] by server-14.bemta-5.messagelabs.com id
	8C/69-09538-1B113D05; Thu, 20 Dec 2012 13:25:05 +0000
X-Env-Sender: marc.zyngier@arm.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356009902!26835678!1
X-Originating-IP: [91.220.42.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogOTEuMjIwLjQyLjQ0ID0+IDM0Njg1OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17709 invoked from network); 20 Dec 2012 13:25:02 -0000
Received: from service87.mimecast.com (HELO service87.mimecast.com)
	(91.220.42.44) by server-6.tower-182.messagelabs.com with SMTP;
	20 Dec 2012 13:25:02 -0000
Received: from cam-owa2.Emea.Arm.com (fw-tnat.cambridge.arm.com
	[217.140.96.21]) by service87.mimecast.com;
	Thu, 20 Dec 2012 13:25:01 +0000
Received: from [10.1.70.138] ([10.1.255.212]) by cam-owa2.Emea.Arm.com with
	Microsoft SMTPSVC(6.0.3790.3959); Thu, 20 Dec 2012 13:25:01 +0000
Message-ID: <50D311AC.6060309@arm.com>
Date: Thu, 20 Dec 2012 13:25:00 +0000
From: Marc Zyngier <marc.zyngier@arm.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1355762141-29616-1-git-send-email-will.deacon@arm.com>
	<1355762141-29616-6-git-send-email-will.deacon@arm.com>
	<alpine.DEB.2.02.1212181202310.17523@kaball.uk.xensource.com>
	<20121218131401.GB22139@mudshark.cambridge.arm.com>
	<50D0AF5C.1070605@codeaurora.org> <50D0B384.40605@arm.com>
	<50D1DC6F.3010304@codeaurora.org>
	<alpine.DEB.2.02.1212201310080.17523@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1212201310080.17523@kaball.uk.xensource.com>
X-Enigmail-Version: 1.4.6
X-OriginalArrivalTime: 20 Dec 2012 13:25:01.0458 (UTC)
	FILETIME=[6A209720:01CDDEB5]
X-MC-Unique: 112122013250131801
Cc: "dave.martin@linaro.org" <dave.martin@linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>, "nico@fluxnic.net" <nico@fluxnic.net>,
	Will Deacon <Will.Deacon@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Christopher Covington <cov@codeaurora.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 5/6] ARM: Dummy Virtual Machine platform
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/12 13:12, Stefano Stabellini wrote:
> On Wed, 19 Dec 2012, Christopher Covington wrote:
>>> What would be the point of using DCC?
>>
>> It seemed like ARM-Ltd.-architected peripherals were picked for the timer and
>> interrupt controller, so I wondered why not for the console as well. As best
>> I'm aware, unless one ventures into the PrimeCell line with the PL011, DCC is
>> the closest match for an officially architected console mechanism.
> 
> I certainly wouldn't want to mandate the presence of any devices that
> require emulation in the DT for a virtual machine, including DCC, as
> simple as it might be.

Indeed. GIC and timers have explicit support for virtualization in
hardware. Emulating random IPs is up to the implementor, and given that
virtio already exists (and performs rather well), I don't feel the urge
to reinvent the wheel.

	M.
-- 
Jazz is not dead. It just smells funny...


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:31:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:31:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlgDJ-0002mH-PD; Thu, 20 Dec 2012 13:31:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlgDJ-0002m3-02
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:31:05 +0000
Received: from [193.109.254.147:27980] by server-4.bemta-14.messagelabs.com id
	78/53-15233-81313D05; Thu, 20 Dec 2012 13:31:04 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1356010261!10776560!1
X-Originating-IP: [209.85.223.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28931 invoked from network); 20 Dec 2012 13:31:02 -0000
Received: from mail-ie0-f175.google.com (HELO mail-ie0-f175.google.com)
	(209.85.223.175)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 13:31:02 -0000
Received: by mail-ie0-f175.google.com with SMTP id qd14so4470247ieb.6
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 05:31:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=o+TniMYVIxOG8f178nNqD4zvGdhoN0+6MDvP8zJyHbE=;
	b=HUU0+yEjI55gNyTOGpw3ImCPUqoxbE0Z6v3vxykKzeyt/ayKhnsJYaqGwng0RjSD/g
	awFz1TZ9JGCuQzgc1AYehMpVz+tyzp9tcMIp1OB1YNwdJnU9RWxuWs5VXiUYlLfi31Xz
	aN8aV6YVEpzikl3YmrgaUcdb8S0XoLOuKWCjm9em03ksGSCwF1wivlspTGqgGQ61q+Jg
	IY5f/M1IrcjZcg2TjsGc6nt6a8rBwjlYSlsEYJQhOX/QOBUTuP7fc6Ks47sFcCRewyAf
	ERsK5a7Qsgl+lzwfcXmg+Aio1e3ZPnsP2VOtDnGE8rrGc1EZGQ1+QT48LBTi4yPwT5Po
	GmfA==
MIME-Version: 1.0
Received: by 10.50.190.163 with SMTP id gr3mr10314563igc.28.1356010261019;
	Thu, 20 Dec 2012 05:31:01 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 05:31:00 -0800 (PST)
In-Reply-To: <CCF8BD23.561A3%keir@xen.org>
References: <1356000101.26722.41.camel@zakaz.uk.xensource.com>
	<CCF8BD23.561A3%keir@xen.org>
Date: Thu, 20 Dec 2012 21:31:00 +0800
X-Google-Sender-Auth: y_D0c6AKz-Qpy8uDXQkwzFyVKF8
Message-ID: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Keir Fraser <keir@xen.org>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 9:03 PM, Keir Fraser <keir@xen.org> wrote:
> On 20/12/2012 10:41, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
>
>> Adding our qemu maintainer.
>>
>> On Thu, 2012-12-20 at 03:56 +0000, G.R. wrote:
>>> Switch to a new address that can reach to Jean.
>>>
>>> On Thu, Dec 20, 2012 at 11:52 AM, G.R. <firemeteor@users.sourceforge.net>
>>> wrote:
>>>> This is hvmloader part of the change that gets rid of the resource
>>>> conflict warning in the guest kernel.
>>>> The OpRegion may not always be page aligned.
>>
>> Is it worth detecting this and allocating 2 or 3 pages as required?
>>
>> The OpRegion is always 8096 bytes? (two pages, but not necessarily
>> aligned)?
>>
>> Do we need to worry about what is in the "slop" at either end of a 3
>> page region containing this? If they are sensitive registers then we may
>> have a problem.
>
> In the hvmloader patch it is not worth it I think, one extra page of memory
> hole is hardly a scarce resource.
>
> I don't know whether the qemu side is accurate enough. If the region is 8096
> bytes then it is not necessarily the case that an unaligned start address
> means we need three pages mapped.
>

According to table 2.1 of the spec, the length is fixed at 8096 bytes.
http://intellinuxgraphics.org/ACPI_IGD_OpRegion_%20Spec.pdf

The last 768 (0x300) bytes are reserved.
If the page offset is larger than 0x300, the domU may not work
correctly. Otherwise, it's only a warning in the domU kernel log, which
looks a little scary. (And the hvmloader patch should be enough to
suppress it.)

If the concern is about security, the same argument should apply to the
first page (the portion before the page offset).
The problem is that I have no idea what is around the mapped page, and
I'm not sure who has that knowledge.

What's the standard way to handle such a mapping with an offset?
I expect this to be a common case, since the ioremap function in the
Linux kernel accepts this.
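[Editor's note] The two-versus-three-page question above comes down to arithmetic on the start offset. A hypothetical helper (not code from the patch) makes it concrete:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000ull

/* Number of 4 KiB pages touched by a region of `len` bytes starting at
 * physical address `base` (illustrative helper, not from the patch). */
static unsigned pages_spanned(uint64_t base, uint64_t len)
{
    uint64_t first = base & ~(PAGE_SIZE - 1);
    uint64_t last  = (base + len - 1) & ~(PAGE_SIZE - 1);
    return (unsigned)((last - first) / PAGE_SIZE) + 1;
}
```

With the 8096-byte (0x1fa0) length quoted above, a page offset of up to 0x60 keeps the region within two pages; any larger offset spills into a third. The 0xcd996018 address reported below has offset 0x18, so two pages suffice there.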

In my case, the host opregion is reported at 0xcd996018, which comes
from a much larger 'ACPI NVS' region as shown below:

(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009ec00 (usable)
(XEN)  000000000009ec00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 0000000020000000 (usable)
(XEN)  0000000020000000 - 0000000020200000 (reserved)
(XEN)  0000000020200000 - 0000000040004000 (usable)
(XEN)  0000000040004000 - 0000000040005000 (reserved)
(XEN)  0000000040005000 - 00000000cd103000 (usable)
(XEN)  00000000cd103000 - 00000000cd87b000 (reserved)
(XEN)  00000000cd87b000 - 00000000cd907000 (usable)
(XEN)  00000000cd907000 - 00000000cd9a8000 (ACPI NVS)      <======
(XEN)  00000000cd9a8000 - 00000000ce150000 (reserved)
(XEN)  00000000ce150000 - 00000000ce151000 (usable)
(XEN)  00000000ce151000 - 00000000ce194000 (ACPI NVS)
(XEN)  00000000ce194000 - 00000000cec15000 (usable)
(XEN)  00000000cec15000 - 00000000ceff2000 (reserved)
(XEN)  00000000ceff2000 - 00000000cf000000 (usable)
(XEN)  00000000cf800000 - 00000000dfa00000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000021f600000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT CD999080, 0084 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP CD9A28C0, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI Warning (tbfadt-0232): FADT (revision 5) is longer than ACPI 2.0 version, truncating length 0x10C to 0xF4 [20070126]
(XEN) ACPI: DSDT CD9991A0, 971F (r2 ALASKA    A M I       22 INTL 20051117)
(XEN) ACPI: FACS CD9A6080, 0040
(XEN) ACPI: APIC CD9A29D0, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT CD9A2A68, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: ASF! CD9A2AB0, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: MCFG CD9A2B58, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: AAFT CD9A2B98, 00EA (r1 ALASKA OEMAAFT   1072009 MSFT       97)
(XEN) ACPI: HPET CD9A2C88, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT CD9A2CC0, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT CD9A3030, 09AA (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT CD9A39E0, 0A92 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: DMAR CD9A4478, 00B8 (r1 INTEL      SNB         1 INTL        1)
(XEN) ACPI: BGRT CD9A4530, 0038 (r0 ALASKA    A M I  1072009 AMI     10013)
(XEN) System RAM: 7887MB (8077040kB)

Thanks,
Timothy

>  -- Keir
>
>> Ian.
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:39:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlgLi-0002xl-Pz; Thu, 20 Dec 2012 13:39:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlgLh-0002xg-2j
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:39:45 +0000
Received: from [85.158.138.51:12635] by server-14.bemta-3.messagelabs.com id
	3D/D2-27443-02513D05; Thu, 20 Dec 2012 13:39:44 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1356010783!23519305!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2319 invoked from network); 20 Dec 2012 13:39:43 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 13:39:43 -0000
Received: (qmail 11383 invoked from network); 20 Dec 2012 15:39:42 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 15:39:42 +0200
Message-ID: <50D3155D.3010205@gmail.com>
Date: Thu, 20 Dec 2012 15:40:45 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
From xen-devel-bounces@lists.xen.org Thu Dec 20 13:39:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlgLi-0002xl-Pz; Thu, 20 Dec 2012 13:39:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TlgLh-0002xg-2j
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:39:45 +0000
Received: from [85.158.138.51:12635] by server-14.bemta-3.messagelabs.com id
	3D/D2-27443-02513D05; Thu, 20 Dec 2012 13:39:44 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1356010783!23519305!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2319 invoked from network); 20 Dec 2012 13:39:43 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 13:39:43 -0000
Received: (qmail 11383 invoked from network); 20 Dec 2012 15:39:42 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 15:39:42 +0200
Message-ID: <50D3155D.3010205@gmail.com>
Date: Thu, 20 Dec 2012 15:40:45 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <50D1A9D1.2020106@gmail.com> <50D1D5BD.8080001@gmail.com>
	<1355929247.14620.436.camel@zakaz.uk.xensource.com>
	<50D1DCC5.5050309@citrix.com>
	<1355932443.14620.448.camel@zakaz.uk.xensource.com>
	<50D1E723.4070201@citrix.com>
	<1355933739.14620.456.camel@zakaz.uk.xensource.com>
	<50D1EB56.40400@gmail.com>
	<20121219174627.GA67643@ocelot.phlegethon.org>
	<50D216F7.2050801@gmail.com>
	<20121220115727.GG80837@ocelot.phlegethon.org>
In-Reply-To: <20121220115727.GG80837@ocelot.phlegethon.org>
X-BitDefender-Spam: No (0)
X-BitDefender-SpamStamp: Build: [Engines: 2.13.5.15620, Dats: 234654,
	Stamp: 3], Multi: [Enabled], BW: [Enabled], RBL DNSBL: [Disabled],
	APM: [Enabled, Score: 500, Flags: NN_LUXURY;
	NN_GMAIL_WITH_XMAILER_ADN; NN_LEGIT_VALID_REPLY; NN_NO_LINK_NMD;
	NN_EXEC_H_YAHOO_AND_GMAIL_NO_DOMAIN_KEY; NN_LEGIT_S_SQARE_BRACKETS],
	SGN: [Enabled], URL: [Enabled], URI DNSBL: [Disabled], SQMD: [Enabled, 
	Hits: none, MD5: 47537a186c1efdec62256f17d51df69d.fuzzy.fzrbl.org],
	RTDA: [Enabled, Hit: No, Details: v1.4.6; Id:
	2m1g3t9.17epd43f4.2t97k], total: 0(775)
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44457
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxc: Add xc_domain_hvm_get_mtrr_type()
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 >> Unfortunately, due to the design of components I don't control, my
 >> application does not have the luxury of waiting for an MSR write event.
 >
 > Hmm.  In that case I guess you can't cache the MTRR state between
 > lookups, which may be a performance problem for you.

Indeed; however, I do need the ability to pull the MTRR type for a given 
address from the hypervisor, via a userspace application.

I'll play around with the libxc implementation and see what I can come 
up with; if it all works out, I'll send the patch to xen-devel.
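
For context, the kind of lookup such a call would expose -- resolving a 
physical address against the variable-range MTRRs -- can be sketched as 
follows. This is an illustrative sketch only (Python for brevity): the 
PHYSBASE/PHYSMASK encoding follows the Intel SDM, but it deliberately 
ignores fixed-range MTRRs and the overlap/precedence rules, and it is not 
the libxc or hypervisor implementation.

```python
# Illustrative sketch: resolve the memory type for a physical address
# against variable-range MTRR (PHYSBASE, PHYSMASK) MSR pairs.
# Simplifications: no fixed-range MTRRs, no overlap/precedence rules.

MTRR_TYPE_UNCACHABLE = 0
MTRR_TYPE_WRITEBACK = 6

def mtrr_type(addr, var_ranges, default_type=MTRR_TYPE_WRITEBACK):
    """var_ranges: iterable of (physbase_msr, physmask_msr) values."""
    for base, mask in var_ranges:
        if not (mask >> 11) & 1:      # PHYSMASK valid bit clear: disabled
            continue
        phys_mask = mask & ~0xFFF     # drop the low control bits
        if (addr & phys_mask) == (base & phys_mask):
            return base & 0xFF        # memory type lives in PHYSBASE[7:0]
    return default_type

# Example: one uncacheable 256 MiB range at 0xE0000000 (a typical PCI hole),
# assuming a 36-bit physical address space.
size = 0x10000000
physbase = 0xE0000000 | MTRR_TYPE_UNCACHABLE
physmask = (~(size - 1) & 0xFFFFFFFFF) | (1 << 11)

print(mtrr_type(0xE0001000, [(physbase, physmask)]))  # 0 (UC)
print(mtrr_type(0x00100000, [(physbase, physmask)]))  # 6 (WB default)
```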

Thanks,
Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 13:53:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlgYP-0003BR-6a; Thu, 20 Dec 2012 13:52:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlgYN-0003BG-II
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:52:51 +0000
Received: from [85.158.138.51:58592] by server-4.bemta-3.messagelabs.com id
	91/CE-31835-C2813D05; Thu, 20 Dec 2012 13:52:44 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1356011560!21617883!1
X-Originating-IP: [209.85.210.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2338 invoked from network); 20 Dec 2012 13:52:41 -0000
Received: from mail-ia0-f182.google.com (HELO mail-ia0-f182.google.com)
	(209.85.210.182)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 13:52:41 -0000
Received: by mail-ia0-f182.google.com with SMTP id x2so2908937iad.27
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 05:52:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=ilHvsEXplf+Fd3PXeoW+UvqlK9As+vtpOiFW/KT2HY4=;
	b=OO3zgg3yREzXvNSIYY5zUxOuIGmpibGQ9pEovLyE31thpnCf76h7c907rwULY9U4FJ
	OK6xEnhs0xevykFD2TjZCNkITMCLwhVCVW4hk2OhsV4ph9Q4G4fPjmc15WFcOm+lesuf
	69Qwnw06Q4Go4eSu9UkHC9dUJNgRqyujerWXZRAu53dq4FZPoXbTYexZH+mZWhnetodd
	6/Un8gNhdpMgnNHVMBZesNNnW6/GT6nQLLyY237EHNpb0YEa3RAH9J9Gqw1U3rtRK+5s
	vQAmokBbMQtW4iRR/pOYqSwo4wFPcZJ+b4lIoXAATgDZPjUw1TiqcloH7Yzh14ZCQt85
	SKHw==
MIME-Version: 1.0
Received: by 10.50.197.169 with SMTP id iv9mr10007001igc.32.1356011559933;
	Thu, 20 Dec 2012 05:52:39 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 05:52:39 -0800 (PST)
In-Reply-To: <CAEBdQ92Cu+_EZWP2yDx1WCHSMM7fee-xtrO3XSFbCfhMeqHH6Q@mail.gmail.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAEBdQ92Cu+_EZWP2yDx1WCHSMM7fee-xtrO3XSFbCfhMeqHH6Q@mail.gmail.com>
Date: Thu, 20 Dec 2012 21:52:39 +0800
X-Google-Sender-Auth: sghevXCJUt3Gnlbbm8bB66xlJpA
Message-ID: <CAKhsbWbP-B+OzKiHrHWjbLaNhog1h3rXBbytONASZg3etRj=jg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Jean Guyader <jean.guyader@gmail.com>
Content-Type: multipart/mixed; boundary=14dae93404c55758c204d1490b31
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae93404c55758c204d1490b31
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Dec 20, 2012 at 2:18 AM, Jean Guyader <jean.guyader@gmail.com> wrote:

> Hi Timothy,
>
> Could you send /proc/iomem, lspci -vvvv and the e820 from dmesg for this VM?
>

Thanks Jean, here is the info you asked for.
Could I ask what this is about: the warning in the kernel log, or the host
freezing issue?
If it's about the former, I should mention that in the log I posted I had
already applied a local patch (sent in a separate thread you were involved
in) that reserves one more page in the e820.

/proc/iomem:
00000000-0000ffff : reserved
00010000-0009dfff : System RAM
0009e000-0009ffff : reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000ce3ff : Video ROM
000ce800-000cf1ff : Adapter ROM
000e0000-000fffff : reserved
  000f0000-000fffff : System ROM
00100000-dfffffff : System RAM
  01000000-013dcc77 : Kernel code
  013dcc78-0168f03f : Kernel data
  01727000-01804fff : Kernel bss
e0000000-fbffffff : PCI Bus 0000:00
  e0000000-efffffff : 0000:00:02.0
  f0000000-f0ffffff : 0000:00:03.0
    f0000000-f0ffffff : xen-platform-pci
  f1000000-f13fffff : 0000:00:02.0
  f1400000-f1403fff : i915 MCHBAR
  f1620000-f1623fff : 0000:00:05.0
    f1620000-f1623fff : ICH HD audio
  f1624000-f1624fff : 0000:00:06.0
    f1624000-f1624fff : ehci_hcd
fc000000-feff3fff : reserved
  fec00000-fec003ff : IOAPIC 0
  fed00000-fed003ff : HPET 0
  fee00000-fee00fff : Local APIC
feff4000-feff6fff : ACPI Non-volatile Storage
feff7000-ffffffff : reserved
100000000-11c7fffff : System RAM
11c800000-11fffffff : RAM buffer


dmesg lines with 'e820' in them:
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009e000-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000dfffffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000fc000000-0x00000000feff3fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000feff4000-0x00000000feff6fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x00000000feff7000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000011c7fffff] usable
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x11c800 max_arch_pfn = 0x400000000
[    0.000000] e820: last_pfn = 0xe0000 max_arch_pfn = 0x400000000
[    0.000000] e820: [mem 0xe0000000-0xfbffffff] available for PCI devices
[    0.439596] e820: reserve RAM buffer [mem 0x0009e000-0x0009ffff]
[    0.439597] e820: reserve RAM buffer [mem 0x11c800000-0x11fffffff]
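
(Quick sanity check, not part of the original report.) Summing the 
"usable" regions of the BIOS-e820 map above shows roughly how much RAM 
this VM was given; the three usable ranges below are copied verbatim 
from the dmesg output.

```python
# Sum the "usable" regions from the BIOS-e820 map quoted above.
# e820 ranges are inclusive, hence the +1.
import re

e820_usable = """
[mem 0x0000000000000000-0x000000000009dfff] usable
[mem 0x0000000000100000-0x00000000dfffffff] usable
[mem 0x0000000100000000-0x000000011c7fffff] usable
"""

total = 0
for lo, hi in re.findall(r"0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", e820_usable):
    total += int(hi, 16) - int(lo, 16) + 1

print(total, round(total / 2**20, 1))  # bytes and MiB: just under 4 GiB
```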

Please find the lspci -vvv log in the attachment; it's a bit lengthy.

--14dae93404c55758c204d1490b31
Content-Type: application/octet-stream; name="lspcivvv.debvm"
Content-Disposition: attachment; filename="lspcivvv.debvm"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_haxy653c0

MDA6MDAuMCBIb3N0IGJyaWRnZTogSW50ZWwgQ29ycG9yYXRpb24gWGVvbiBFMy0xMjAwIHYyLzNy
ZCBHZW4gQ29yZSBwcm9jZXNzb3IgRFJBTSBDb250cm9sbGVyIChyZXYgMDkpCglTdWJzeXN0ZW06
IEFTUm9jayBJbmNvcnBvcmF0aW9uIERldmljZSAwMTUwCglQaHlzaWNhbCBTbG90OiAwCglDb250
cm9sOiBJL08tIE1lbS0gQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQ
YXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2
TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0g
PE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMAoKMDA6MDEuMCBJU0EgYnJp
ZGdlOiBJbnRlbCBDb3Jwb3JhdGlvbiA4MjM3MVNCIFBJSVgzIElTQSBbTmF0b21hL1RyaXRvbiBJ
SV0KCVN1YnN5c3RlbTogUmVkIEhhdCwgSW5jIFFlbXUgdmlydHVhbCBtYWNoaW5lCglQaHlzaWNh
bCBTbG90OiAxCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJ
TlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJ
U3RhdHVzOiBDYXAtIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPW1lZGl1bSA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAw
CgowMDowMS4xIElERSBpbnRlcmZhY2U6IEludGVsIENvcnBvcmF0aW9uIDgyMzcxU0IgUElJWDMg
SURFIFtOYXRvbWEvVHJpdG9uIElJXSAocHJvZy1pZiA4MCBbTWFzdGVyXSkKCVN1YnN5c3RlbTog
WGVuU291cmNlLCBJbmMuIERldmljZSAwMDAxCglQaHlzaWNhbCBTbG90OiAxCglDb250cm9sOiBJ
L08rIE1lbS0gQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnIt
IFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6LSBV
REYtIEZhc3RCMkIrIFBhckVyci0gREVWU0VMPW1lZGl1bSA+VEFib3J0LSA8VEFib3J0LSA8TUFi
b3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiA2NAoJUmVnaW9uIDA6IFt2aXJ0dWFs
XSBNZW1vcnkgYXQgMDAwMDAxZjAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9OF0K
CVJlZ2lvbiAxOiBbdmlydHVhbF0gTWVtb3J5IGF0IDAwMDAwM2YwICh0eXBlIDMsIG5vbi1wcmVm
ZXRjaGFibGUpIFtzaXplPTFdCglSZWdpb24gMjogW3ZpcnR1YWxdIE1lbW9yeSBhdCAwMDAwMDE3
MCAoMzItYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT04XQoJUmVnaW9uIDM6IFt2aXJ0dWFs
XSBNZW1vcnkgYXQgMDAwMDAzNzAgKHR5cGUgMywgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9MV0K
CVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgYzE4MCBbc2l6ZT0xNl0KCUtlcm5lbCBkcml2ZXIgaW4g
dXNlOiBhdGFfcGlpeAoKMDA6MDEuMyBCcmlkZ2U6IEludGVsIENvcnBvcmF0aW9uIDgyMzcxQUIv
RUIvTUIgUElJWDQgQUNQSSAocmV2IDAxKQoJU3Vic3lzdGVtOiBSZWQgSGF0LCBJbmMgUWVtdSB2
aXJ0dWFsIG1hY2hpbmUKCVBoeXNpY2FsIFNsb3Q6IDEKCUNvbnRyb2w6IEkvTy0gTWVtLSBCdXNN
YXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNF
UlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcC0gNjZNSHotIFVERi0gRmFzdEIyQi0g
UGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBF
UlItIElOVHgtCglMYXRlbmN5OiAwCglJbnRlcnJ1cHQ6IHBpbiBBIHJvdXRlZCB0byBJUlEgOQoK
MDA6MDIuMCBWR0EgY29tcGF0aWJsZSBjb250cm9sbGVyOiBJbnRlbCBDb3Jwb3JhdGlvbiBYZW9u
IEUzLTEyMDAgdjIvM3JkIEdlbiBDb3JlIHByb2Nlc3NvciBHcmFwaGljcyBDb250cm9sbGVyIChy
ZXYgMDkpIChwcm9nLWlmIDAwIFtWR0EgY29udHJvbGxlcl0pCglTdWJzeXN0ZW06IEFTUm9jayBJ
bmNvcnBvcmF0aW9uIERldmljZSAwMTYyCglQaHlzaWNhbCBTbG90OiAyCglDb250cm9sOiBJL08r
IE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0
ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYt
IEZhc3RCMkIrIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0g
PlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogNjQKCUludGVycnVwdDogcGluIEEgcm91dGVk
IHRvIElSUSA3OAoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmMTAwMDAwMCAoNjQtYml0LCBub24tcHJl
ZmV0Y2hhYmxlKSBbc2l6ZT00TV0KCVJlZ2lvbiAyOiBNZW1vcnkgYXQgZTAwMDAwMDAgKDY0LWJp
dCwgcHJlZmV0Y2hhYmxlKSBbc2l6ZT0yNTZNXQoJUmVnaW9uIDQ6IEkvTyBwb3J0cyBhdCBjMTAw
IFtzaXplPTY0XQoJRXhwYW5zaW9uIFJPTSBhdCA8dW5hc3NpZ25lZD4gW2Rpc2FibGVkXQoJQ2Fw
YWJpbGl0aWVzOiBbOTBdIE1TSTogRW5hYmxlKyBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0LQoJ
CUFkZHJlc3M6IGZlZTM2MDAwICBEYXRhOiA0MzAwCglDYXBhYmlsaXRpZXM6IFtkMF0gUG93ZXIg
TWFuYWdlbWVudCB2ZXJzaW9uIDIKCQlGbGFnczogUE1FQ2xrLSBEU0krIEQxLSBEMi0gQXV4Q3Vy
cmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9T
b2Z0UnN0KyBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJS2VybmVsIGRyaXZlciBp
biB1c2U6IGk5MTUKCjAwOjAzLjAgVW5hc3NpZ25lZCBjbGFzcyBbZmY4MF06IFhlblNvdXJjZSwg
SW5jLiBYZW4gUGxhdGZvcm0gRGV2aWNlIChyZXYgMDEpCglTdWJzeXN0ZW06IFhlblNvdXJjZSwg
SW5jLiBYZW4gUGxhdGZvcm0gRGV2aWNlCglQaHlzaWNhbCBTbG90OiAzCglDb250cm9sOiBJL08r
IE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0
ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6LSBVREYt
IEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0g
PlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMAoJSW50ZXJydXB0OiBwaW4gQSByb3V0ZWQg
dG8gSVJRIDI4CglSZWdpb24gMDogSS9PIHBvcnRzIGF0IGMwMDAgW3NpemU9MjU2XQoJUmVnaW9u
IDE6IE1lbW9yeSBhdCBmMDAwMDAwMCAoMzItYml0LCBwcmVmZXRjaGFibGUpIFtzaXplPTE2TV0K
CUtlcm5lbCBkcml2ZXIgaW4gdXNlOiB4ZW4tcGxhdGZvcm0tcGNpCgowMDowNS4wIEF1ZGlvIGRl
dmljZTogSW50ZWwgQ29ycG9yYXRpb24gNyBTZXJpZXMvQzIxMCBTZXJpZXMgQ2hpcHNldCBGYW1p
bHkgSGlnaCBEZWZpbml0aW9uIEF1ZGlvIENvbnRyb2xsZXIgKHJldiAwNCkKCVN1YnN5c3RlbTog
QVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDg4OTIKCVBoeXNpY2FsIFNsb3Q6IDUKCUNvbnRy
b2w6IEkvTy0gTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBh
ckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcCsgNjZN
SHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8
TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwCglJbnRlcnJ1cHQ6IHBpbiBB
IHJvdXRlZCB0byBJUlEgODIKCVJlZ2lvbiAwOiBNZW1vcnkgYXQgZjE2MjAwMDAgKDY0LWJpdCwg
bm9uLXByZWZldGNoYWJsZSkgW3NpemU9MTZLXQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1h
bmFnZW1lbnQgdmVyc2lvbiAyCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMS0gRDItIEF1eEN1cnJl
bnQ9MG1BIFBNRShEMC0sRDEtLEQyLSxEM2hvdC0sRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29m
dFJzdCsgUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0wIFBNRS0KCUNhcGFiaWxpdGllczogWzYw
XSBNU0k6IEVuYWJsZSsgQ291bnQ9MS8xIE1hc2thYmxlLSA2NGJpdCsKCQlBZGRyZXNzOiAwMDAw
MDAwMGZlZTM1MDAwICBEYXRhOiA0MzAwCglDYXBhYmlsaXRpZXM6IFs3MF0gRXhwcmVzcyAodjEp
IFJvb3QgQ29tcGxleCBJbnRlZ3JhdGVkIEVuZHBvaW50LCBNU0kgMDAKCQlEZXZDYXA6CU1heFBh
eWxvYWQgMTI4IGJ5dGVzLCBQaGFudEZ1bmMgMCwgTGF0ZW5jeSBMMHMgPDY0bnMsIEwxIDwxdXMK
CQkJRXh0VGFnLSBSQkUtIEZMUmVzZXQtCgkJRGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3JyZWN0
YWJsZS0gTm9uLUZhdGFsLSBGYXRhbC0gVW5zdXBwb3J0ZWQtCgkJCVJseGRPcmQrIEV4dFRhZy0g
UGhhbnRGdW5jLSBBdXhQd3ItIE5vU25vb3AtCgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBNYXhS
ZWFkUmVxIDUxMiBieXRlcwoJCURldlN0YToJQ29yckVyci0gVW5jb3JyRXJyLSBGYXRhbEVyci0g
VW5zdXBwUmVxLSBBdXhQd3IrIFRyYW5zUGVuZC0KCQlMbmtDYXA6CVBvcnQgIzAsIFNwZWVkIHVu
a25vd24sIFdpZHRoIHgwLCBBU1BNIHVua25vd24sIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMK
CQkJQ2xvY2tQTS0gU3VycHJpc2UtIExMQWN0UmVwLSBCd05vdC0KCQlMbmtDdGw6CUFTUE0gRGlz
YWJsZWQ7IERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrLQoJCQlFeHRTeW5jaC0gQ2xvY2tQTS0g
QXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCB1bmtub3duLCBXaWR0
aCB4MCwgVHJFcnItIFRyYWluLSBTbG90Q2xrLSBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJ
S2VybmVsIGRyaXZlciBpbiB1c2U6IHNuZF9oZGFfaW50ZWwKCjAwOjA2LjAgVVNCIGNvbnRyb2xs
ZXI6IEludGVsIENvcnBvcmF0aW9uIDcgU2VyaWVzL0MyMTAgU2VyaWVzIENoaXBzZXQgRmFtaWx5
IFVTQiBFbmhhbmNlZCBIb3N0IENvbnRyb2xsZXIgIzIgKHJldiAwNCkgKHByb2ctaWYgMjAgW0VI
Q0ldKQoJU3Vic3lzdGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2UgMWUyZAoJUGh5c2lj
YWwgU2xvdDogNgoJQ29udHJvbDogSS9PLSBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1X
SU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0K
CVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0g
PlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTog
NjQsIENhY2hlIExpbmUgU2l6ZTogNjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRv
IElSUSA0MAoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmMTYyNDAwMCAoMzItYml0LCBub24tcHJlZmV0
Y2hhYmxlKSBbc2l6ZT00S10KCUNhcGFiaWxpdGllczogWzUwXSBQb3dlciBNYW5hZ2VtZW50IHZl
cnNpb24gMgoJCUZsYWdzOiBQTUVDbGstIERTSS0gRDEtIEQyLSBBdXhDdXJyZW50PTBtQSBQTUUo
RDAtLEQxLSxEMi0sRDNob3QtLEQzY29sZC0pCgkJU3RhdHVzOiBEMCBOb1NvZnRSc3QrIFBNRS1F
bmFibGUtIERTZWw9MCBEU2NhbGU9MCBQTUUtCglLZXJuZWwgZHJpdmVyIGluIHVzZTogZWhjaV9o
Y2QKCjAwOjFmLjAgSVNBIGJyaWRnZTogSW50ZWwgQ29ycG9yYXRpb24gSDc3IEV4cHJlc3MgQ2hp
cHNldCBMUEMgQ29udHJvbGxlciAocmV2IDA0KQoJU3Vic3lzdGVtOiBSZWQgSGF0LCBJbmMgRGV2
aWNlIDExMDAKCVBoeXNpY2FsIFNsb3Q6IDMxCglDb250cm9sOiBJL08tIE1lbSsgQnVzTWFzdGVy
KyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBG
YXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6KyBVREYtIEZhc3RCMkIrIFBhckVy
ci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJ
TlR4LQoJTGF0ZW5jeTogMTYKCVJlZ2lvbiAzOiBNZW1vcnkgYXQgPGlnbm9yZWQ+ICgzMi1iaXQs
IG5vbi1wcmVmZXRjaGFibGUpCgo=
--14dae93404c55758c204d1490b31
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae93404c55758c204d1490b31--


Q0ldKQoJU3Vic3lzdGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2UgMWUyZAoJUGh5c2lj
YWwgU2xvdDogNgoJQ29udHJvbDogSS9PLSBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1X
SU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0K
CVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0g
PlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTog
NjQsIENhY2hlIExpbmUgU2l6ZTogNjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRv
IElSUSA0MAoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmMTYyNDAwMCAoMzItYml0LCBub24tcHJlZmV0
Y2hhYmxlKSBbc2l6ZT00S10KCUNhcGFiaWxpdGllczogWzUwXSBQb3dlciBNYW5hZ2VtZW50IHZl
cnNpb24gMgoJCUZsYWdzOiBQTUVDbGstIERTSS0gRDEtIEQyLSBBdXhDdXJyZW50PTBtQSBQTUUo
RDAtLEQxLSxEMi0sRDNob3QtLEQzY29sZC0pCgkJU3RhdHVzOiBEMCBOb1NvZnRSc3QrIFBNRS1F
bmFibGUtIERTZWw9MCBEU2NhbGU9MCBQTUUtCglLZXJuZWwgZHJpdmVyIGluIHVzZTogZWhjaV9o
Y2QKCjAwOjFmLjAgSVNBIGJyaWRnZTogSW50ZWwgQ29ycG9yYXRpb24gSDc3IEV4cHJlc3MgQ2hp
cHNldCBMUEMgQ29udHJvbGxlciAocmV2IDA0KQoJU3Vic3lzdGVtOiBSZWQgSGF0LCBJbmMgRGV2
aWNlIDExMDAKCVBoeXNpY2FsIFNsb3Q6IDMxCglDb250cm9sOiBJL08tIE1lbSsgQnVzTWFzdGVy
KyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBG
YXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6KyBVREYtIEZhc3RCMkIrIFBhckVy
ci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJ
TlR4LQoJTGF0ZW5jeTogMTYKCVJlZ2lvbiAzOiBNZW1vcnkgYXQgPGlnbm9yZWQ+ICgzMi1iaXQs
IG5vbi1wcmVmZXRjaGFibGUpCgo=
--14dae93404c55758c204d1490b31
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae93404c55758c204d1490b31--


From xen-devel-bounces@lists.xen.org Thu Dec 20 13:56:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 13:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlgbq-0003Lp-0M; Thu, 20 Dec 2012 13:56:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Tlgbp-0003Lk-8O
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 13:56:25 +0000
Received: from [85.158.137.99:36072] by server-13.bemta-3.messagelabs.com id
	50/44-00465-80913D05; Thu, 20 Dec 2012 13:56:24 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-217.messagelabs.com!1356011761!14029410!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15500 invoked from network); 20 Dec 2012 13:56:02 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 13:56:02 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TlgbO-000LeK-Vb; Thu, 20 Dec 2012 13:55:58 +0000
Date: Thu, 20 Dec 2012 13:55:58 +0000
From: Tim Deegan <tim@xen.org>
To: Xiantao Zhang <xiantao.zhang@intel.com>
Message-ID: <20121220135558.GM80837@ocelot.phlegethon.org>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
User-Agent: Mutt/1.4.2.1i
Cc: eddie.dong@intel.com, keir@xen.org, JBeulich@suse.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT & VPID
	support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, 

At 23:43 +0800 on 20 Dec (1356047021), Xiantao Zhang wrote:
> Received: from hax-build.sh.intel.com ([10.239.48.28])
>         by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:04 -0800
> From: Xiantao Zhang <xiantao.zhang@intel.com>
> To: xen-devel@lists.xen.org
> Date: Thu, 20 Dec 2012 23:43:41 +0800

I think the clock on your computer or your email client is confused:
your email is datestamped about 12 hours in the future.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:20:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlgyp-0003hv-5M; Thu, 20 Dec 2012 14:20:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Tlgyn-0003hq-Ci
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 14:20:09 +0000
Received: from [85.158.143.35:34227] by server-2.bemta-4.messagelabs.com id
	68/32-30861-89E13D05; Thu, 20 Dec 2012 14:20:08 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1356013171!3929092!1
X-Originating-IP: [209.85.214.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28893 invoked from network); 20 Dec 2012 14:19:32 -0000
Received: from mail-bk0-f48.google.com (HELO mail-bk0-f48.google.com)
	(209.85.214.48)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 14:19:32 -0000
Received: by mail-bk0-f48.google.com with SMTP id jc3so1725373bkc.35
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 06:19:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:sender:user-agent:date:subject:from:to:cc:message-id
	:thread-topic:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=CMNK7lZ5wieDQMqcxY0jokl3VVt1JHvcG4y83HV3iQY=;
	b=TNYpgMKJOSAqCqMTgZPvn64ad2tnFkPEvuJoiXpJmboJLDfJrfquKULR33c1L/tojy
	QANWciM/5t725Dkomt53nXa0cEykzVBAnP43NjT8ekJa+1p6HHwHu13J+8qwEDasH7ns
	BObMOMbj+gfXj495LHLJpOPRqQ0T6FCaiOcsAmKe6OtlgVwgNn+7y7NFd3fQw4e1Rh61
	MVmKGaoXG1S0pglK3TPfnFAgK0kit+FaiOMSwpgjyjYZMHJG7FDuYL9vv1XEZuXUMdpU
	alxq0XjgobUkR4r9NoaHpmh1TTSyct7X7sho1NiJ4N5OM4JVk4RwYl1qvSjj0PG3sQog
	wP/w==
X-Received: by 10.204.154.202 with SMTP id p10mr4667054bkw.29.1356013170766;
	Thu, 20 Dec 2012 06:19:30 -0800 (PST)
Received: from [192.168.1.3] (host86-140-203-119.range86-140.btcentralplus.com.
	[86.140.203.119])
	by mx.google.com with ESMTPS id o7sm7286610bkv.13.2012.12.20.06.19.27
	(version=SSLv3 cipher=OTHER); Thu, 20 Dec 2012 06:19:29 -0800 (PST)
User-Agent: Microsoft-Entourage/12.35.0.121009
Date: Thu, 20 Dec 2012 14:19:22 +0000
From: Keir Fraser <keir@xen.org>
To: "G.R." <firemeteor@users.sourceforge.net>
Message-ID: <CCF8CEEA.561B5%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
	resource conflict for OpRegion.
Thread-Index: Ac3evQGPLXnnXjfOMESWS2OFstSfvw==
In-Reply-To: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
Mime-version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/2012 13:31, "G.R." <firemeteor@users.sourceforge.net> wrote:

> If the concern is about security, the same argument should apply to the
> first page (the portion before the page offset).
> The problem is that I have no idea what is around the mapped page. Not
> sure who has the knowledge.

Well we can't do better than mapping some whole number of pages, really.
Unless we trap to qemu on every access. I don't think we'd go there unless
there really were a known security issue. But mapping only the exact number
of pages we definitely need is a good principle.

> What's the standard flow to handle such a map with an offset?
> I expect this to be a common case, since the ioremap function in the Linux
> kernel accepts this.

map_size = ((host_opregion & 0xfff) + 8192 + 0xfff) >> 12

Possibly with suitable macros used instead of magic numbers (e.g., XC_PAGE_*
and a macro for the opregion size).

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:23:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlh2A-0003rB-PI; Thu, 20 Dec 2012 14:23:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tlh28-0003qx-IJ
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 14:23:36 +0000
Received: from [85.158.139.211:7475] by server-1.bemta-5.messagelabs.com id
	55/5D-12813-76F13D05; Thu, 20 Dec 2012 14:23:35 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356013413!18874101!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4432 invoked from network); 20 Dec 2012 14:23:34 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 14:23:34 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:53814 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tlh6N-00017K-4k; Thu, 20 Dec 2012 15:27:59 +0100
Date: Thu, 20 Dec 2012 15:23:26 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1457826869.20121220152326@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1797374383.20121220135139@eikelenboom.it>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
	<1797374383.20121220135139@eikelenboom.it>
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>, Eric Dumazet <erdnetdev@gmail.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, December 20, 2012, 1:51:39 PM, you wrote:


> Wednesday, December 19, 2012, 5:17:49 PM, you wrote:

>> On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

>>> Hi Ian,
>>> 
>>> It ran overnight and i haven't seen the warn_once trigger.
>>> (but i also didn't with the previous patch)
>>> 

>> As I said, the minimum value to not trigger the warning was what Ian's
>> patch was doing, but it was still not an accurate estimation.

>> Doing the real accounting might trigger slow transfers, or dropped
>> packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.

>> So the real question was: if accounting for full pages, do your
>> applications run as smoothly as before, with no huge performance
>> regression?

> Ok, I have added some extra debug info (see diffs below); the code still uses the old calculation for truesize (in the hope of triggering the warn_on_once again), but also calculates the variants IanC came up with.

> I haven't got a clear test case to trigger the warn_on_once, it happens just every once in a while during my normal usage and I'm not a netperf expert :-)
> So at the moment i haven't been able to trigger the warn_on_once yet, but the results so far do seem to shed some light ..

> - The first variant (current code) seems to be the most efficient and a good estimation *most* of the time, but sometimes triggers the warn_on_once in skb_try_coalesce.
> - The first variant (current code) seems to always subtract from the truesize for small packets.
> - The second variant always seems to keep the truesize as is for most of the small network traffic, but it also seems to work ok for larger packets.
> - The third variant seems to be a pretty wasteful estimation.

> So the last variant seems to be rather wasteful, and the second one the most accurate so far.

> Eric:
>      From the warn_on_once, delta should not be smaller than len, but probably they should be as close together as possible.
>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN?



> [  116.965062] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  117.094538] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.094707] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.094869] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.095058] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.095216] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.096102] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.096311] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.096373] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.150398] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.150459] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.536901] eth0: mtu:1500 data_len:53642 len before:0 len after:53642 truesize before:896 truesize after:54282 nr_frags:14 variant1:53386(54282) variant2:53386(54282) variant3:57344(58240)
> [  117.537463] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
> [  117.537915] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
> [  117.538543] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18634(19530) variant3:24576(25472)
> [  117.539223] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
> [  117.539283] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:2 variant1:7050(7946) variant2:7050(7946) variant3:8192(9088)
> [  117.539403] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:2
> [  117.540035] eth0: mtu:1500 data_len:4410 len before:0 len after:4410 truesize before:896 truesize after:5050 nr_frags:3 variant1:4154(5050) variant2:4304(5200) variant3:12288(13184)
> [  117.540153] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
> [  121.981917] net_ratelimit: 27 callbacks suppressed
> [  121.981960] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  122.985019] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  123.988308] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  124.991961] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  125.995003] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:23:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlh2A-0003rB-PI; Thu, 20 Dec 2012 14:23:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tlh28-0003qx-IJ
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 14:23:36 +0000
Received: from [85.158.139.211:7475] by server-1.bemta-5.messagelabs.com id
	55/5D-12813-76F13D05; Thu, 20 Dec 2012 14:23:35 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356013413!18874101!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4432 invoked from network); 20 Dec 2012 14:23:34 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 14:23:34 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:53814 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tlh6N-00017K-4k; Thu, 20 Dec 2012 15:27:59 +0100
Date: Thu, 20 Dec 2012 15:23:26 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1457826869.20121220152326@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1797374383.20121220135139@eikelenboom.it>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
	<1797374383.20121220135139@eikelenboom.it>
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>, Eric Dumazet <erdnetdev@gmail.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, December 20, 2012, 1:51:39 PM, you wrote:


> Wednesday, December 19, 2012, 5:17:49 PM, you wrote:

>> On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

>>> Hi Ian,
>>> 
>>> It ran overnight and I haven't seen the warn_on_once trigger.
>>> (but I also didn't with the previous patch)
>>> 

>> As I said, the minimum value that does not trigger the warning was what Ian's
>> patch was doing, but it was still not an accurate estimation.

>> Doing the real accounting might trigger slow transfers, or dropped
>> packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.

>> So the real question was: if accounting for full pages, do your
>> applications run as smoothly as before, with no huge performance
>> regression?

> Ok, I have added some extra debug info (see the diffs below); the code still uses the old calculation for truesize (in the hope of triggering the warn_on_once again), but also calculates the variants IanC came up with.

> I haven't got a clear test case to trigger the warn_on_once; it just happens every once in a while during my normal usage, and I'm not a netperf expert :-)
> So at the moment I haven't been able to trigger the warn_on_once yet, but the results so far do seem to shed some light ..

> - The first variant (current code) seems to be the most efficient and a good estimation *most* of the time, but sometimes triggers the warn_on_once in skb_try_coalesce.
> - The first variant (current code) seems to always subtract from the truesize for small packets.
> - The second variant seems to keep the truesize as-is for most of the small network traffic, but it also seems to work ok for larger packets.
> - The third variant seems to be a pretty wasteful estimation.

> So the last variant seems to be rather wasteful, and the second one the most accurate so far.

> Eric:
>      From the warn_on_once, delta should not be smaller than len, but probably they should be as close together as possible.
>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN?



> [  116.965062] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  117.094538] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.094707] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.094869] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.095058] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.095216] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.096102] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.096311] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.096373] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.150398] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.150459] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
> [  117.536901] eth0: mtu:1500 data_len:53642 len before:0 len after:53642 truesize before:896 truesize after:54282 nr_frags:14 variant1:53386(54282) variant2:53386(54282) variant3:57344(58240)
> [  117.537463] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
> [  117.537915] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
> [  117.538543] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18634(19530) variant3:24576(25472)
> [  117.539223] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
> [  117.539283] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:2 variant1:7050(7946) variant2:7050(7946) variant3:8192(9088)
> [  117.539403] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:2
> [  117.540035] eth0: mtu:1500 data_len:4410 len before:0 len after:4410 truesize before:896 truesize after:5050 nr_frags:3 variant1:4154(5050) variant2:4304(5200) variant3:12288(13184)
> [  117.540153] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
> [  121.981917] net_ratelimit: 27 callbacks suppressed
> [  121.981960] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  122.985019] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  123.988308] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  124.991961] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  125.995003] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> [  126.998324] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)



> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index c26e28b..8833e38 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -964,6 +964,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>         struct sk_buff_head tmpq;
>         unsigned long flags;
>         int err;
> +       int tsz,len;

>         spin_lock(&np->rx_lock);

> @@ -1037,9 +1038,22 @@ err:
>                  * receive throughout using the standard receive
>                  * buffer size was cut by 25%(!!!).
>                  */
> -               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
> +
> +
> +
> +
> +                tsz = skb->truesize;
> +                len = skb->len;
> +                /* skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags; */
> +                skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>                 skb->len += skb->data_len;

> +               net_warn_ratelimited("%s: mtu:%d data_len:%d len before:%d len after:%d truesize before:%d truesize after:%d nr_frags:%d variant1:%d(%d) variant2:%d(%d) variant3:%d(%d) \n",
> +                        skb->dev->name, skb->dev->mtu, skb->data_len, len,  skb->len,tsz, skb->truesize, skb_shinfo(skb)->nr_frags,
> +                        skb->data_len - RX_COPY_THRESHOLD, tsz + skb->data_len - RX_COPY_THRESHOLD ,
> +                        skb->data_len - NETFRONT_SKB_CB(skb)->pull_to, tsz + skb->data_len - NETFRONT_SKB_CB(skb)->pull_to,
> +                        PAGE_SIZE * skb_shinfo(skb)->nr_frags, tsz + (PAGE_SIZE * skb_shinfo(skb)->nr_frags));
> +
>                 if (rx->flags & XEN_NETRXF_csum_blank)
>                         skb->ip_summed = CHECKSUM_PARTIAL;
>                 else if (rx->flags & XEN_NETRXF_data_validated)
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 3ab989b..6d0cd86 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3471,6 +3471,16 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,

>         WARN_ON_ONCE(delta < len);

> +       if (delta < len) {
> +               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA < LEN delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
> +                        to->dev->name, from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
> +       }
> +
> +       if (delta > len && delta - len > 100) {
> +               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA - LEN > 100 delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
> +                        to->dev->name,from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
> +       }
> +
>         memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
>                skb_shinfo(from)->frags,
>                skb_shinfo(from)->nr_frags * sizeof(skb_frag_t));



Ok, I succeeded in triggering the warn_on_once, but it seems the extra debug info from netfront was just rate-limited away for the offending packet :(

Dec 20 15:17:33 media kernel: [  393.464062] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:33 media kernel: [  393.464438] eth0: mtu:1500 data_len:762 len before:0 len after:762 truesize before:896 truesize after:1402 nr_frags:1 variant1:506(1402) variant2:506(1402) variant3:4096(4992)
Dec 20 15:17:33 media kernel: [  393.465083] eth0: mtu:1500 data_len:118 len before:0 len after:118 truesize before:896 truesize after:758 nr_frags:1 variant1:-138(758) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:33 media kernel: [  393.466114] eth0: mtu:1500 data_len:118 len before:0 len after:118 truesize before:896 truesize after:758 nr_frags:1 variant1:-138(758) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:33 media kernel: [  393.467336] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  394.940211] ------------[ cut here ]------------
Dec 20 15:17:35 media kernel: [  394.940259] WARNING: at net/core/skbuff.c:3472 skb_try_coalesce+0x3fc/0x470()
Dec 20 15:17:35 media kernel: [  394.940282] Modules linked in:
Dec 20 15:17:35 media kernel: [  394.940306] Pid: 2632, comm: glusterfs Not tainted 3.7.0-rc0-20121220-netfrontdebug1 #1
Dec 20 15:17:35 media kernel: [  394.940330] Call Trace:
Dec 20 15:17:35 media kernel: [  394.940343]  <IRQ>  [<ffffffff8106889a>] warn_slowpath_common+0x7a/0xb0
Dec 20 15:17:35 media kernel: [  394.940384]  [<ffffffff810688e5>] warn_slowpath_null+0x15/0x20
Dec 20 15:17:35 media kernel: [  394.940409]  [<ffffffff8184298c>] skb_try_coalesce+0x3fc/0x470
Dec 20 15:17:35 media kernel: [  394.940434]  [<ffffffff818fb049>] tcp_try_coalesce+0x69/0xc0
Dec 20 15:17:35 media kernel: [  394.940458]  [<ffffffff818fb0f4>] tcp_queue_rcv+0x54/0x100
Dec 20 15:17:35 media kernel: [  394.940481]  [<ffffffff8190029f>] ? tcp_mtup_init+0x2f/0x90
Dec 20 15:17:35 media kernel: [  394.940504]  [<ffffffff818ffbdb>] tcp_rcv_established+0x2bb/0x6a0
Dec 20 15:17:35 media kernel: [  394.940528]  [<ffffffff8190839f>] ? tcp_v4_rcv+0x6cf/0xb10
Dec 20 15:17:35 media kernel: [  394.940551]  [<ffffffff81907985>] tcp_v4_do_rcv+0x135/0x480
Dec 20 15:17:35 media kernel: [  394.940576]  [<ffffffff819b3532>] ? _raw_spin_lock_nested+0x42/0x50
Dec 20 15:17:35 media kernel: [  394.940600]  [<ffffffff8190839f>] ? tcp_v4_rcv+0x6cf/0xb10
Dec 20 15:17:35 media kernel: [  394.940623]  [<ffffffff8190862d>] tcp_v4_rcv+0x95d/0xb10
Dec 20 15:17:35 media kernel: [  394.940666]  [<ffffffff810b5688>] ? lock_acquire+0xd8/0x100
Dec 20 15:17:35 media kernel: [  394.940694]  [<ffffffff818e4d6a>] ip_local_deliver_finish+0x11a/0x230
Dec 20 15:17:35 media kernel: [  394.940720]  [<ffffffff818e4c95>] ? ip_local_deliver_finish+0x45/0x230
Dec 20 15:17:35 media kernel: [  394.940745]  [<ffffffff818e4eb8>] ip_local_deliver+0x38/0x80
Dec 20 15:17:35 media kernel: [  394.940784]  [<ffffffff818e447a>] ip_rcv_finish+0x15a/0x630
Dec 20 15:17:35 media kernel: [  394.940807]  [<ffffffff818e4b68>] ip_rcv+0x218/0x300
Dec 20 15:17:35 media kernel: [  394.940829]  [<ffffffff8184bf2d>] __netif_receive_skb+0x65d/0x8d0
Dec 20 15:17:35 media kernel: [  394.940853]  [<ffffffff8184ba15>] ? __netif_receive_skb+0x145/0x8d0
Dec 20 15:17:35 media kernel: [  394.940889]  [<ffffffff810b192d>] ? trace_hardirqs_on+0xd/0x10
Dec 20 15:17:35 media kernel: [  394.940914]  [<ffffffff810fecbb>] ? free_hot_cold_page+0x1ab/0x1e0
Dec 20 15:17:35 media kernel: [  394.940939]  [<ffffffff8184e4f8>] netif_receive_skb+0x28/0xf0
Dec 20 15:17:35 media kernel: [  394.940964]  [<ffffffff81843e83>] ? __pskb_pull_tail+0x253/0x340
Dec 20 15:17:35 media kernel: [  394.941000]  [<ffffffff8164fbb5>] xennet_poll+0xae5/0xed0
Dec 20 15:17:35 media kernel: [  394.941024]  [<ffffffff81080081>] ? wake_up_worker+0x1/0x30
Dec 20 15:17:35 media kernel: [  394.941046]  [<ffffffff810b2fbc>] ? validate_chain+0x13c/0x1300
Dec 20 15:17:35 media kernel: [  394.941075]  [<ffffffff8184ed66>] net_rx_action+0x136/0x260
Dec 20 15:17:35 media kernel: [  394.941098]  [<ffffffff81070551>] ? __do_softirq+0x71/0x1a0
Dec 20 15:17:35 media kernel: [  394.941133]  [<ffffffff810705a9>] __do_softirq+0xc9/0x1a0
Dec 20 15:17:35 media kernel: [  394.941157]  [<ffffffff819b623c>] call_softirq+0x1c/0x30
Dec 20 15:17:35 media kernel: [  394.941179]  [<ffffffff8100fdc5>] do_softirq+0x85/0xf0
Dec 20 15:17:35 media kernel: [  394.941201]  [<ffffffff8107041e>] irq_exit+0x9e/0xd0
Dec 20 15:17:35 media kernel: [  394.941235]  [<ffffffff81430b1f>] xen_evtchn_do_upcall+0x2f/0x40
Dec 20 15:17:35 media kernel: [  394.941259]  [<ffffffff819b629e>] xen_do_hypervisor_callback+0x1e/0x30
Dec 20 15:17:35 media kernel: [  394.941279]  <EOI>  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
Dec 20 15:17:35 media kernel: [  394.941318]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
Dec 20 15:17:35 media kernel: [  394.941356]  [<ffffffff8100890d>] ? xen_force_evtchn_callback+0xd/0x10
Dec 20 15:17:35 media kernel: [  394.941381]  [<ffffffff810092b2>] ? check_events+0x12/0x20
Dec 20 15:17:35 media kernel: [  394.941405]  [<ffffffff81009259>] ? xen_irq_enable_direct_reloc+0x4/0x4
Dec 20 15:17:35 media kernel: [  394.941432]  [<ffffffff819b3f6c>] ? _raw_spin_unlock_irq+0x3c/0x70
Dec 20 15:17:35 media kernel: [  394.941473]  [<ffffffff81095f83>] ? finish_task_switch+0x83/0xe0
Dec 20 15:17:35 media kernel: [  394.941507]  [<ffffffff81095f46>] ? finish_task_switch+0x46/0xe0
Dec 20 15:17:35 media kernel: [  394.941533]  [<ffffffff819b2434>] ? __schedule+0x444/0x880
Dec 20 15:17:35 media kernel: [  394.941555]  [<ffffffff810b2fbc>] ? validate_chain+0x13c/0x1300
Dec 20 15:17:35 media kernel: [  394.941580]  [<ffffffff810b4c4b>] ? __lock_acquire+0x46b/0xdd0
Dec 20 15:17:35 media kernel: [  394.941614]  [<ffffffff810b4c4b>] ? __lock_acquire+0x46b/0xdd0
Dec 20 15:17:35 media kernel: [  394.941638]  [<ffffffff819aff95>] ? __mutex_unlock_slowpath+0x135/0x1d0
Dec 20 15:17:35 media kernel: [  394.941663]  [<ffffffff819b2904>] ? schedule+0x24/0x70
Dec 20 15:17:35 media kernel: [  394.941697]  [<ffffffff819b179d>] ? schedule_hrtimeout_range_clock+0x11d/0x140
Dec 20 15:17:35 media kernel: [  394.941725]  [<ffffffff810b5688>] ? lock_acquire+0xd8/0x100
Dec 20 15:17:35 media kernel: [  394.941748]  [<ffffffff8118a558>] ? ep_poll+0xf8/0x3a0
Dec 20 15:17:35 media kernel: [  394.941770]  [<ffffffff819b4015>] ? _raw_spin_unlock_irqrestore+0x75/0xa0
Dec 20 15:17:35 media kernel: [  394.941808]  [<ffffffff810b1818>] ? trace_hardirqs_on_caller+0xf8/0x200
Dec 20 15:17:35 media kernel: [  394.941833]  [<ffffffff819b17ce>] ? schedule_hrtimeout_range+0xe/0x10
Dec 20 15:17:35 media kernel: [  394.941856]  [<ffffffff8118a75a>] ? ep_poll+0x2fa/0x3a0
Dec 20 15:17:35 media kernel: [  394.941878]  [<ffffffff81098630>] ? try_to_wake_up+0x310/0x310
Dec 20 15:17:35 media kernel: [  394.941913]  [<ffffffff810b5b17>] ? lock_release+0x117/0x250
Dec 20 15:17:35 media kernel: [  394.941938]  [<ffffffff81165fd7>] ? fget_light+0xd7/0x140
Dec 20 15:17:35 media kernel: [  394.941959]  [<ffffffff81165f3a>] ? fget_light+0x3a/0x140
Dec 20 15:17:35 media kernel: [  394.941981]  [<ffffffff8118a8ce>] ? sys_epoll_wait+0xce/0xe0
Dec 20 15:17:35 media kernel: [  394.942015]  [<ffffffff819b4e69>] ? system_call_fastpath+0x16/0x1b
Dec 20 15:17:35 media kernel: [  394.942036] ---[ end trace 6f3a832c9e91c8af ]---
Dec 20 15:17:35 media kernel: [  394.942056] to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:22978 len:23168 from->truesize:23874 skb_headlen(from):0 skb_shinfo(to)->nr_frags:4 skb_shinfo(from)->nr_frags:6
Dec 20 15:17:35 media kernel: [  394.968199] to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:14290 len:14480 from->truesize:15186 skb_headlen(from):0 skb_shinfo(to)->nr_frags:13 skb_shinfo(from)->nr_frags:4
Dec 20 15:17:35 media kernel: [  395.262814] net_ratelimit: 371 callbacks suppressed
Dec 20 15:17:35 media kernel: [  395.262858] eth0: mtu:1500 data_len:90 len before:0 len after:90 truesize before:896 truesize after:730 nr_frags:1 variant1:-166(730) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.264767] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.266193] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.268422] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.271617] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.274794] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.278104] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.281319] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.284454] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.287797] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
Dec 20 15:17:35 media kernel: [  395.291121] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:39:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlhHa-00043l-Bn; Thu, 20 Dec 2012 14:39:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TlhHY-00043g-GV
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 14:39:32 +0000
Received: from [85.158.139.83:40701] by server-6.bemta-5.messagelabs.com id
	BE/4C-30498-32323D05; Thu, 20 Dec 2012 14:39:31 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1356014369!30768897!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15287 invoked from network); 20 Dec 2012 14:39:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 14:39:30 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="1415979"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 14:39:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 09:39:28 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TlhHU-0007en-IP;
	Thu, 20 Dec 2012 14:39:28 +0000
Message-ID: <50D321AC.6080400@eu.citrix.com>
Date: Thu, 20 Dec 2012 14:33:16 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1355944036@Solace>
	<4c57c8f1e7ad20c15b8c.1355944038@Solace>
	<50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
In-Reply-To: <50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>
Subject: Re: [Xen-devel] [PATCH 02 of 10 v2] xen,
	libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/12 09:18, Jan Beulich wrote:
>>>> On 19.12.12 at 20:07, Dario Faggioli <dario.faggioli@citrix.com> wrote:
>> --- a/xen/include/xen/nodemask.h
>> +++ b/xen/include/xen/nodemask.h
>> @@ -298,6 +298,53 @@ static inline int __nodemask_parse(const
>>   }
>>   #endif
>>   
>> +/*
>> + * nodemask_var_t: struct nodemask for stack usage.
>> + *
>> + * See definition of cpumask_var_t in include/xen//cpumask.h.
>> + */
>> +#if MAX_NUMNODES > 2 * BITS_PER_LONG
> Is that case reasonable to expect?

2 * BITS_PER_LONG is just going to be 128, right?  It wasn't too long 
ago that I would have considered 4096 cores a pretty unreasonable 
expectation.  Is there a particular reason you think this is going to be 
more than a few years away, and a particular harm in having the code 
here to begin with?

At the very least it should be replaced with something like this:

#if MAX_NUMNODES > 2 * BITS_PER_LONG
# error "MAX_NUMNODES exceeds fixed size nodemask; need to implement 
variable-length nodemasks"
#endif

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:50:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:50:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlhRn-0004NY-RG; Thu, 20 Dec 2012 14:50:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jajcus@jajcus.net>) id 1TlhRm-0004NS-Rz
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 14:50:06 +0000
Received: from [85.158.138.51:15477] by server-5.bemta-3.messagelabs.com id
	18/F3-15136-D9523D05; Thu, 20 Dec 2012 14:50:05 +0000
X-Env-Sender: jajcus@jajcus.net
X-Msg-Ref: server-5.tower-174.messagelabs.com!1356015004!29759769!1
X-Originating-IP: [84.205.176.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17175 invoked from network); 20 Dec 2012 14:50:05 -0000
Received: from tropek.jajcus.net (HELO tropek.jajcus.net) (84.205.176.49)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 14:50:05 -0000
Received: from localhost (jajo.ipv6.eggsoft.pl [IPv6:2001:6a0:117::1])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by tropek.jajcus.net (Postfix) with ESMTPSA id 38B505002;
	Thu, 20 Dec 2012 15:50:03 +0100 (CET)
Date: Thu, 20 Dec 2012 15:50:02 +0100
From: Jacek Konieczny <jajcus@jajcus.net>
To: James Dingwall <james-xen@dingwall.me.uk>
Message-ID: <20121220145001.GC16443@jajo.eggsoft>
Mail-Followup-To: James Dingwall <james-xen@dingwall.me.uk>,
	xen-devel@lists.xen.org
References: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] kernel log flooded with: xen_balloon:
 reserve_additional_memory: add_memory() failed: -17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 08:47:22AM +0000, James Dingwall wrote:
> Hi,
> 
> I have encountered an apparently benign error on two systems where the 
> dom0 kernel log is flooded with messages like:
> 
> [52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff] cannot 
> be added
> [52482.163860] xen_balloon: reserve_additional_memory: add_memory() 
> failed: -17

I have seen this bug too (under Xen 4.2.0).

I am using the following workaround to stop those messages:

cat /sys/devices/system/xen_memory/xen_memory0/info/current_kb > \
        /sys/devices/system/xen_memory/xen_memory0/target_kb

I have not verified yet if Xen 4.2.1 is also affected.

Greets,
        Jacek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:53:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:53:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlhUK-0004WH-DP; Thu, 20 Dec 2012 14:52:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlhUI-0004W6-L3
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 14:52:42 +0000
Received: from [85.158.137.99:56546] by server-16.bemta-3.messagelabs.com id
	CF/E9-27634-93623D05; Thu, 20 Dec 2012 14:52:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1356015160!20260406!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20401 invoked from network); 20 Dec 2012 14:52:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 14:52:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 14:52:33 +0000
Message-Id: <50D3344002000078000B1CF7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 14:52:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<4c57c8f1e7ad20c15b8c.1355944038@Solace>
	<50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
	<50D321AC.6080400@eu.citrix.com>
In-Reply-To: <50D321AC.6080400@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: MarcusGranado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, AnilMadhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>
Subject: Re: [Xen-devel] [PATCH 02 of 10 v2] xen,
 libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 15:33, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 20/12/12 09:18, Jan Beulich wrote:
>>>>> On 19.12.12 at 20:07, Dario Faggioli <dario.faggioli@citrix.com> wrote:
>>> --- a/xen/include/xen/nodemask.h
>>> +++ b/xen/include/xen/nodemask.h
>>> @@ -298,6 +298,53 @@ static inline int __nodemask_parse(const
>>>   }
>>>   #endif
>>>   
>>> +/*
>>> + * nodemask_var_t: struct nodemask for stack usage.
>>> + *
>>> + * See definition of cpumask_var_t in include/xen//cpumask.h.
>>> + */
>>> +#if MAX_NUMNODES > 2 * BITS_PER_LONG
>> Is that case reasonable to expect?
> 
> 2 * BITS_PER_LONG is just going to be 128, right?  It wasn't too long 
> ago that I would have considered 4096 cores a pretty unreasonable 
> expectation.  Is there a particular reason you think this is going to be 
> more than a few years away, and a particular harm in having the code 
> here to begin with?

I just don't see node counts grow even near as quickly as core/
thread ones.

> At very least it should be replaced with something like this:
> 
> #if MAX_NUMNODES > 2 * BITS_PER_LONG
> # error "MAX_NUMNODES exceeds fixed size nodemask; need to implement 
> variable-length nodemasks"
> #endif

Yes, if there is a limitation in the code, it being violated should be
detected at build time. But I'd suppose one can construct the
statically sized mask definition such that it copes with larger counts
(just at the expense of having larger data objects, perhaps
on-stack). Making sure to always pass pointers rather than objects
to functions will already eliminate a good part of the problem.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 14:59:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 14:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlhao-0004iS-9G; Thu, 20 Dec 2012 14:59:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tlhan-0004iN-1D
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 14:59:25 +0000
Received: from [85.158.137.99:25130] by server-6.bemta-3.messagelabs.com id
	C7/65-12154-CC723D05; Thu, 20 Dec 2012 14:59:24 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-217.messagelabs.com!1356015544!20232858!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30439 invoked from network); 20 Dec 2012 14:59:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-16.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	20 Dec 2012 14:59:04 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:53854 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tlheh-0001HK-Je; Thu, 20 Dec 2012 16:03:27 +0100
Date: Thu, 20 Dec 2012 15:58:55 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <849328952.20121220155855@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1457826869.20121220152326@eikelenboom.it>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
	<1797374383.20121220135139@eikelenboom.it>
	<1457826869.20121220152326@eikelenboom.it>
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>, Eric Dumazet <erdnetdev@gmail.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, December 20, 2012, 3:23:26 PM, you wrote:


> Thursday, December 20, 2012, 1:51:39 PM, you wrote:


>> Wednesday, December 19, 2012, 5:17:49 PM, you wrote:

>>> On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

>>>> Hi Ian,
>>>> 
>>>> It ran overnight and i haven't seen the warn_once trigger.
>>>> (but i also didn't with the previous patch)
>>>> 

>>> As I said, the minimum value to not trigger the warning was what Ian's
>>> patch was doing, but it was still not an accurate estimation.

>>> Doing the real accounting might trigger slow transfers, or dropped
>>> packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.

>>> So the real question was: if accounting for full pages, do your
>>> applications run as smoothly as before, with no huge performance
>>> regression?

>> Ok, I have added some extra debug info (see diffs below). The code still uses the old calculation for truesize (in the hope of triggering the warn_on_once again), but it also calculates the variants IanC came up with.

>> I haven't got a clear test case to trigger the warn_on_once; it just happens every once in a while during my normal usage, and I'm not a netperf expert :-)
>> So at the moment I haven't been able to trigger the warn_on_once yet, but the results so far do seem to shed some light ..

>> - The first variant (current code) seems to be the most efficient and a good estimation *most* of the time, but sometimes triggers the warn_on_once in skb_try_coalesce.
>> - The first variant (current code) seems to always subtract from the truesize for small packets.
>> - The second variant seems to keep the truesize as is for most of the small network traffic, but it also seems to work ok for larger packets.
>> - The third variant seems to be a pretty wasteful estimation.

>> So the last variant seems to be rather wasteful, and the second one the most accurate so far.

>> Eric:
>>      From the warn_on_once, delta should not be smaller than len, but probably they should be as close together as possible.
>>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN ?



>> [  116.965062] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  117.094538] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.094707] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.094869] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.095058] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.095216] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.096102] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.096311] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.096373] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.150398] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.150459] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.536901] eth0: mtu:1500 data_len:53642 len before:0 len after:53642 truesize before:896 truesize after:54282 nr_frags:14 variant1:53386(54282) variant2:53386(54282) variant3:57344(58240)
>> [  117.537463] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
>> [  117.537915] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
>> [  117.538543] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18634(19530) variant3:24576(25472)
>> [  117.539223] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
>> [  117.539283] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:2 variant1:7050(7946) variant2:7050(7946) variant3:8192(9088)
>> [  117.539403] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:2
>> [  117.540035] eth0: mtu:1500 data_len:4410 len before:0 len after:4410 truesize before:896 truesize after:5050 nr_frags:3 variant1:4154(5050) variant2:4304(5200) variant3:12288(13184)
>> [  117.540153] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
>> [  121.981917] net_ratelimit: 27 callbacks suppressed
>> [  121.981960] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  122.985019] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  123.988308] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  124.991961] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  125.995003] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  126.998324] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)



>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index c26e28b..8833e38 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -964,6 +964,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>>         struct sk_buff_head tmpq;
>>         unsigned long flags;
>>         int err;
>> +       int tsz,len;

>>         spin_lock(&np->rx_lock);

>> @@ -1037,9 +1038,22 @@ err:
>>                  * receive throughout using the standard receive
>>                  * buffer size was cut by 25%(!!!).
>>                  */
>> -               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>> +
>> +
>> +
>> +
>> +                tsz = skb->truesize;
>> +                len = skb->len;
>> +                /* skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags; */
>> +                skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>>                 skb->len += skb->data_len;

>> +               net_warn_ratelimited("%s: mtu:%d data_len:%d len before:%d len after:%d truesize before:%d truesize after:%d nr_frags:%d variant1:%d(%d) variant2:%d(%d) variant3:%d(%d) \n",
>> +                        skb->dev->name, skb->dev->mtu, skb->data_len, len,  skb->len,tsz, skb->truesize, skb_shinfo(skb)->nr_frags,
>> +                        skb->data_len - RX_COPY_THRESHOLD, tsz + skb->data_len - RX_COPY_THRESHOLD ,
>> +                        skb->data_len - NETFRONT_SKB_CB(skb)->pull_to, tsz + skb->data_len - NETFRONT_SKB_CB(skb)->pull_to,
>> +                        PAGE_SIZE * skb_shinfo(skb)->nr_frags, tsz + (PAGE_SIZE * skb_shinfo(skb)->nr_frags));
>> +
>>                 if (rx->flags & XEN_NETRXF_csum_blank)
>>                         skb->ip_summed = CHECKSUM_PARTIAL;
>>                 else if (rx->flags & XEN_NETRXF_data_validated)
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index 3ab989b..6d0cd86 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -3471,6 +3471,16 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,

>>         WARN_ON_ONCE(delta < len);

>> +       if(delta < len) {
>> +               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA < LEN delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
>> +                        to->dev->name, from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
>> +       }
>> +
>> +       if (delta > len && delta - len > 100) {
>> +               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA - LEN > 100 delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
>> +                        to->dev->name,from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
>> +       }
>> +
>>         memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
>>                skb_shinfo(from)->frags,
>>                skb_shinfo(from)->nr_frags * sizeof(skb_frag_t));



> Ok, I succeeded in triggering the warn_on_once, but it seems the extra debug info from netfront was just rate-limited away for the offending packet :(

> Dec 20 15:17:33 media kernel: [  393.464062] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.464438] eth0: mtu:1500 data_len:762 len before:0 len after:762 truesize before:896 truesize after:1402 nr_frags:1 variant1:506(1402) variant2:506(1402) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.465083] eth0: mtu:1500 data_len:118 len before:0 len after:118 truesize before:896 truesize after:758 nr_frags:1 variant1:-138(758) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.466114] eth0: mtu:1500 data_len:118 len before:0 len after:118 truesize before:896 truesize after:758 nr_frags:1 variant1:-138(758) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.467336] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  394.940211] ------------[ cut here ]------------
> Dec 20 15:17:35 media kernel: [  394.940259] WARNING: at net/core/skbuff.c:3472 skb_try_coalesce+0x3fc/0x470()
> Dec 20 15:17:35 media kernel: [  394.940282] Modules linked in:
> Dec 20 15:17:35 media kernel: [  394.940306] Pid: 2632, comm: glusterfs Not tainted 3.7.0-rc0-20121220-netfrontdebug1 #1
> Dec 20 15:17:35 media kernel: [  394.940330] Call Trace:
> Dec 20 15:17:35 media kernel: [  394.940343]  <IRQ>  [<ffffffff8106889a>] warn_slowpath_common+0x7a/0xb0
> Dec 20 15:17:35 media kernel: [  394.940384]  [<ffffffff810688e5>] warn_slowpath_null+0x15/0x20
> Dec 20 15:17:35 media kernel: [  394.940409]  [<ffffffff8184298c>] skb_try_coalesce+0x3fc/0x470
> Dec 20 15:17:35 media kernel: [  394.940434]  [<ffffffff818fb049>] tcp_try_coalesce+0x69/0xc0
> Dec 20 15:17:35 media kernel: [  394.940458]  [<ffffffff818fb0f4>] tcp_queue_rcv+0x54/0x100
> Dec 20 15:17:35 media kernel: [  394.940481]  [<ffffffff8190029f>] ? tcp_mtup_init+0x2f/0x90
> Dec 20 15:17:35 media kernel: [  394.940504]  [<ffffffff818ffbdb>] tcp_rcv_established+0x2bb/0x6a0
> Dec 20 15:17:35 media kernel: [  394.940528]  [<ffffffff8190839f>] ? tcp_v4_rcv+0x6cf/0xb10
> Dec 20 15:17:35 media kernel: [  394.940551]  [<ffffffff81907985>] tcp_v4_do_rcv+0x135/0x480
> Dec 20 15:17:35 media kernel: [  394.940576]  [<ffffffff819b3532>] ? _raw_spin_lock_nested+0x42/0x50
> Dec 20 15:17:35 media kernel: [  394.940600]  [<ffffffff8190839f>] ? tcp_v4_rcv+0x6cf/0xb10
> Dec 20 15:17:35 media kernel: [  394.940623]  [<ffffffff8190862d>] tcp_v4_rcv+0x95d/0xb10
> Dec 20 15:17:35 media kernel: [  394.940666]  [<ffffffff810b5688>] ? lock_acquire+0xd8/0x100
> Dec 20 15:17:35 media kernel: [  394.940694]  [<ffffffff818e4d6a>] ip_local_deliver_finish+0x11a/0x230
> Dec 20 15:17:35 media kernel: [  394.940720]  [<ffffffff818e4c95>] ? ip_local_deliver_finish+0x45/0x230
> Dec 20 15:17:35 media kernel: [  394.940745]  [<ffffffff818e4eb8>] ip_local_deliver+0x38/0x80
> Dec 20 15:17:35 media kernel: [  394.940784]  [<ffffffff818e447a>] ip_rcv_finish+0x15a/0x630
> Dec 20 15:17:35 media kernel: [  394.940807]  [<ffffffff818e4b68>] ip_rcv+0x218/0x300
> Dec 20 15:17:35 media kernel: [  394.940829]  [<ffffffff8184bf2d>] __netif_receive_skb+0x65d/0x8d0
> Dec 20 15:17:35 media kernel: [  394.940853]  [<ffffffff8184ba15>] ? __netif_receive_skb+0x145/0x8d0
> Dec 20 15:17:35 media kernel: [  394.940889]  [<ffffffff810b192d>] ? trace_hardirqs_on+0xd/0x10
> Dec 20 15:17:35 media kernel: [  394.940914]  [<ffffffff810fecbb>] ? free_hot_cold_page+0x1ab/0x1e0
> Dec 20 15:17:35 media kernel: [  394.940939]  [<ffffffff8184e4f8>] netif_receive_skb+0x28/0xf0
> Dec 20 15:17:35 media kernel: [  394.940964]  [<ffffffff81843e83>] ? __pskb_pull_tail+0x253/0x340
> Dec 20 15:17:35 media kernel: [  394.941000]  [<ffffffff8164fbb5>] xennet_poll+0xae5/0xed0
> Dec 20 15:17:35 media kernel: [  394.941024]  [<ffffffff81080081>] ? wake_up_worker+0x1/0x30
> Dec 20 15:17:35 media kernel: [  394.941046]  [<ffffffff810b2fbc>] ? validate_chain+0x13c/0x1300
> Dec 20 15:17:35 media kernel: [  394.941075]  [<ffffffff8184ed66>] net_rx_action+0x136/0x260
> Dec 20 15:17:35 media kernel: [  394.941098]  [<ffffffff81070551>] ? __do_softirq+0x71/0x1a0
> Dec 20 15:17:35 media kernel: [  394.941133]  [<ffffffff810705a9>] __do_softirq+0xc9/0x1a0
> Dec 20 15:17:35 media kernel: [  394.941157]  [<ffffffff819b623c>] call_softirq+0x1c/0x30
> Dec 20 15:17:35 media kernel: [  394.941179]  [<ffffffff8100fdc5>] do_softirq+0x85/0xf0
> Dec 20 15:17:35 media kernel: [  394.941201]  [<ffffffff8107041e>] irq_exit+0x9e/0xd0
> Dec 20 15:17:35 media kernel: [  394.941235]  [<ffffffff81430b1f>] xen_evtchn_do_upcall+0x2f/0x40
> Dec 20 15:17:35 media kernel: [  394.941259]  [<ffffffff819b629e>] xen_do_hypervisor_callback+0x1e/0x30
> Dec 20 15:17:35 media kernel: [  394.941279]  <EOI>  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
> Dec 20 15:17:35 media kernel: [  394.941318]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
> Dec 20 15:17:35 media kernel: [  394.941356]  [<ffffffff8100890d>] ? xen_force_evtchn_callback+0xd/0x10
> Dec 20 15:17:35 media kernel: [  394.941381]  [<ffffffff810092b2>] ? check_events+0x12/0x20
> Dec 20 15:17:35 media kernel: [  394.941405]  [<ffffffff81009259>] ? xen_irq_enable_direct_reloc+0x4/0x4
> Dec 20 15:17:35 media kernel: [  394.941432]  [<ffffffff819b3f6c>] ? _raw_spin_unlock_irq+0x3c/0x70
> Dec 20 15:17:35 media kernel: [  394.941473]  [<ffffffff81095f83>] ? finish_task_switch+0x83/0xe0
> Dec 20 15:17:35 media kernel: [  394.941507]  [<ffffffff81095f46>] ? finish_task_switch+0x46/0xe0
> Dec 20 15:17:35 media kernel: [  394.941533]  [<ffffffff819b2434>] ? __schedule+0x444/0x880
> Dec 20 15:17:35 media kernel: [  394.941555]  [<ffffffff810b2fbc>] ? validate_chain+0x13c/0x1300
> Dec 20 15:17:35 media kernel: [  394.941580]  [<ffffffff810b4c4b>] ? __lock_acquire+0x46b/0xdd0
> Dec 20 15:17:35 media kernel: [  394.941614]  [<ffffffff810b4c4b>] ? __lock_acquire+0x46b/0xdd0
> Dec 20 15:17:35 media kernel: [  394.941638]  [<ffffffff819aff95>] ? __mutex_unlock_slowpath+0x135/0x1d0
> Dec 20 15:17:35 media kernel: [  394.941663]  [<ffffffff819b2904>] ? schedule+0x24/0x70
> Dec 20 15:17:35 media kernel: [  394.941697]  [<ffffffff819b179d>] ? schedule_hrtimeout_range_clock+0x11d/0x140
> Dec 20 15:17:35 media kernel: [  394.941725]  [<ffffffff810b5688>] ? lock_acquire+0xd8/0x100
> Dec 20 15:17:35 media kernel: [  394.941748]  [<ffffffff8118a558>] ? ep_poll+0xf8/0x3a0
> Dec 20 15:17:35 media kernel: [  394.941770]  [<ffffffff819b4015>] ? _raw_spin_unlock_irqrestore+0x75/0xa0
> Dec 20 15:17:35 media kernel: [  394.941808]  [<ffffffff810b1818>] ? trace_hardirqs_on_caller+0xf8/0x200
> Dec 20 15:17:35 media kernel: [  394.941833]  [<ffffffff819b17ce>] ? schedule_hrtimeout_range+0xe/0x10
> Dec 20 15:17:35 media kernel: [  394.941856]  [<ffffffff8118a75a>] ? ep_poll+0x2fa/0x3a0
> Dec 20 15:17:35 media kernel: [  394.941878]  [<ffffffff81098630>] ? try_to_wake_up+0x310/0x310
> Dec 20 15:17:35 media kernel: [  394.941913]  [<ffffffff810b5b17>] ? lock_release+0x117/0x250
> Dec 20 15:17:35 media kernel: [  394.941938]  [<ffffffff81165fd7>] ? fget_light+0xd7/0x140
> Dec 20 15:17:35 media kernel: [  394.941959]  [<ffffffff81165f3a>] ? fget_light+0x3a/0x140
> Dec 20 15:17:35 media kernel: [  394.941981]  [<ffffffff8118a8ce>] ? sys_epoll_wait+0xce/0xe0
> Dec 20 15:17:35 media kernel: [  394.942015]  [<ffffffff819b4e69>] ? system_call_fastpath+0x16/0x1b
> Dec 20 15:17:35 media kernel: [  394.942036] ---[ end trace 6f3a832c9e91c8af ]---
> Dec 20 15:17:35 media kernel: [  394.942056] to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:22978 len:23168 from->truesize:23874 skb_headlen(from):0 skb_shinfo(to)->nr_frags:4 skb_shinfo(from)->nr_frags:6
> Dec 20 15:17:35 media kernel: [  394.968199] to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:14290 len:14480 from->truesize:15186 skb_headlen(from):0 skb_shinfo(to)->nr_frags:13 skb_shinfo(from)->nr_frags:4
> Dec 20 15:17:35 media kernel: [  395.262814] net_ratelimit: 371 callbacks suppressed
> Dec 20 15:17:35 media kernel: [  395.262858] eth0: mtu:1500 data_len:90 len before:0 len after:90 truesize before:896 truesize after:730 nr_frags:1 variant1:-166(730) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.264767] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.266193] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.268422] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.271617] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.274794] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.278104] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.281319] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.284454] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.287797] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.291121] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)


Hmm, perhaps a better example; I have indented some potentially interesting points:

        Dec 20 14:12:57 media kernel: [  794.895136] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.895431] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.895616] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18824(19720) variant3:24576(25472)
        Dec 20 14:12:57 media kernel: [  794.895804] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
        Dec 20 14:12:57 media kernel: [  794.895823] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:3 variant1:7050(7946) variant2:7050(7946) variant3:12288(13184)
        Dec 20 14:12:57 media kernel: [  794.895868] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:3
        Dec 20 14:12:57 media kernel: [  794.896133] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.896152] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  794.896200] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:1402 len:952 from->truesize:1658 skb_headlen(from):190 skb_shinfo(to)->nr_frags:6 skb_shinfo(from)->nr_frags:1
        Dec 20 14:12:57 media kernel: [  794.907232] eth0: mtu:1500 data_len:23234 len before:0 len after:23234 truesize before:896 truesize after:23874 nr_frags:7 variant1:22978(23874) variant2:22978(23874) variant3:28672(29568)
        Dec 20 14:12:57 media kernel: [  794.907517] eth0: mtu:1500 data_len:24682 len before:0 len after:24682 truesize before:896 truesize after:25322 nr_frags:7 variant1:24426(25322) variant2:24426(25322) variant3:28672(29568)
        Dec 20 14:12:57 media kernel: [  794.907693] eth0: mtu:1500 data_len:26130 len before:0 len after:26130 truesize before:896 truesize after:26770 nr_frags:7 variant1:25874(26770) variant2:25874(26770) variant3:28672(29568)
        Dec 20 14:12:57 media kernel: [  794.907882] eth0: mtu:1500 data_len:14546 len before:0 len after:14546 truesize before:896 truesize after:15186 nr_frags:5 variant1:14290(15186) variant2:14290(15186) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.907901] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
        Dec 20 14:12:57 media kernel: [  794.907938] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:13482 len:13032 from->truesize:13738 skb_headlen(from):190 skb_shinfo(to)->nr_frags:6 skb_shinfo(from)->nr_frags:4
        Dec 20 14:12:57 media kernel: [  794.908191] eth0: mtu:1500 data_len:29026 len before:0 len after:29026 truesize before:896 truesize after:29666 nr_frags:9 variant1:28770(29666) variant2:28880(29776) variant3:36864(37760)
        Dec 20 14:12:57 media kernel: [  794.908386] eth0: mtu:1500 data_len:30474 len before:0 len after:30474 truesize before:896 truesize after:31114 nr_frags:8 variant1:30218(31114) variant2:30218(31114) variant3:32768(33664)

A1) Here we have a packet with data_len: 5858, truesize set to 6498, and nr_frags: 2
        Dec 20 14:12:57 media kernel: [  794.908560] eth0: mtu:1500 data_len:5858 len before:0 len after:5858 truesize before:896 truesize after:6498 nr_frags:2 variant1:5602(6498) variant2:5602(6498) variant3:8192(9088)

        Dec 20 14:12:57 media kernel: [  794.908581] eth0: mtu:1500 data_len:26130 len before:0 len after:26130 truesize before:896 truesize after:26770 nr_frags:7 variant1:25874(26770) variant2:25874(26770) variant3:28672(29568)

A2) That seems to end up in skb_try_coalesce; from->nr_frags is still 2, and delta >> LEN in this case: no warning, but perhaps wasteful?
        Dec 20 14:12:57 media kernel: [  794.908616] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:6242 len:5792 from->truesize:6498 skb_headlen(from):190 skb_shinfo(to)->nr_frags:9 skb_shinfo(from)->nr_frags:2

        Dec 20 14:12:57 media kernel: [  794.908834] eth0: mtu:1500 data_len:33370 len before:0 len after:33370 truesize before:896 truesize after:34010 nr_frags:9 variant1:33114(34010) variant2:33114(34010) variant3:36864(37760)

B1) Here we have again a packet with data_len: 5858 and truesize set to 6498, but nr_frags: 3 this time.
        Dec 20 14:12:57 media kernel: [  794.908992] eth0: mtu:1500 data_len:5858 len before:0 len after:5858 truesize before:896 truesize after:6498 nr_frags:3 variant1:5602(6498) variant2:5792(6688) variant3:12288(13184)
        Dec 20 14:12:57 media kernel: [  794.909012] eth0: mtu:1500 data_len:29026 len before:0 len after:29026 truesize before:896 truesize after:29666 nr_frags:8 variant1:28770(29666) variant2:28770(29666) variant3:32768(33664)

B2) That seems to end up in skb_try_coalesce; from->nr_frags is now 2 instead of 3, and delta < LEN in this case, so it would have triggered the warn_on_once.
        Dec 20 14:12:57 media kernel: [  794.909040] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:5602 len:5792 from->truesize:6498 skb_headlen(from):0 skb_shinfo(to)->nr_frags:9 skb_shinfo(from)->nr_frags:2

        Dec 20 14:12:57 media kernel: [  794.909673] eth0: mtu:1500 data_len:1514 len before:0 len after:1514 truesize before:896 truesize after:2154 nr_frags:1 variant1:1258(2154) variant2:1258(2154) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  794.909692] eth0: mtu:1500 data_len:522 len before:0 len after:522 truesize before:896 truesize after:1162 nr_frags:1 variant1:266(1162) variant2:266(1162) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  794.909736] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:906 len:456 from->truesize:1162 skb_headlen(from):190 skb_shinfo(to)->nr_frags:2 skb_shinfo(from)->nr_frags:1
        Dec 20 14:12:57 media kernel: [  794.910205] eth0: mtu:1500 data_len:36266 len before:0 len after:36266 truesize before:896 truesize after:36906 nr_frags:10 variant1:36010(36906) variant2:36010(36906) variant3:40960(41856)
        Dec 20 14:12:57 media kernel: [  794.910706] eth0: mtu:1500 data_len:37714 len before:0 len after:37714 truesize before:896 truesize after:38354 nr_frags:10 variant1:37458(38354) variant2:37458(38354) variant3:40960(41856)
        Dec 20 14:12:57 media kernel: [  794.911472] eth0: mtu:1500 data_len:27578 len before:0 len after:27578 truesize before:896 truesize after:28218 nr_frags:8 variant1:27322(28218) variant2:27322(28218) variant3:32768(33664)
        Dec 20 14:12:57 media kernel: [  794.911695] eth0: mtu:1500 data_len:29026 len before:0 len after:29026 truesize before:896 truesize after:29666 nr_frags:9 variant1:28770(29666) variant2:28770(29666) variant3:36864(37760)
        Dec 20 14:12:57 media kernel: [  795.015511] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  795.015585] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:1402 len:952 from->truesize:1658 skb_headlen(from):190 skb_shinfo(to)->nr_frags:10 skb_shinfo(from)->nr_frags:1
        Dec 20 14:12:57 media kernel: [  795.015641] eth0: mtu:1500 data_len:10202 len before:0 len after:10202 truesize before:896 truesize after:10842 nr_frags:4 variant1:9946(10842) variant2:9946(10842) variant3:16384(17280)
        Dec 20 14:12:57 media kernel: [  795.015657] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
        Dec 20 14:12:58 media kernel: [  795.817824] net_ratelimit: 9 callbacks suppressed

--
Sander









_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



> Thursday, December 20, 2012, 1:51:39 PM, you wrote:


>> Wednesday, December 19, 2012, 5:17:49 PM, you wrote:

>>> On Wed, 2012-12-19 at 12:34 +0100, Sander Eikelenboom wrote:

>>>> Hi Ian,
>>>> 
>>>> It ran overnight and I haven't seen the warn_on_once trigger.
>>>> (but I also didn't with the previous patch)
>>>> 

>>> As I said, the minimum value needed to not trigger the warning was what
>>> Ian's patch was doing, but it was still not an accurate estimation.

>>> Doing the real accounting might trigger slow transfers, or dropped
>>> packets because of socket limits (SNDBUF / RCVBUF) being hit sooner.

>>> So the real question was: if accounting for full pages, do your
>>> applications run as smoothly as before, with no huge performance
>>> regression?

>> Ok, I have added some extra debug info (see diffs below); the code still uses the old calculation for truesize (in the hope of triggering the warn_on_once again), but also calculates the variants IanC came up with.

>> I haven't got a clear test case to trigger the warn_on_once; it happens just every once in a while during my normal usage, and I'm not a netperf expert :-)
>> So at the moment I haven't been able to trigger the warn_on_once yet, but the results so far do seem to shed some light ...

>> - The first variant (current code) seems to be the most efficient and a good estimation *most* of the time, but sometimes triggers the warn_on_once in skb_try_coalesce.
>> - The first variant (current code) also seems to always subtract from the truesize for small packets.
>> - The second variant always seems to keep the truesize as is for most of the small network traffic, but it also seems to work ok for larger packets.
>> - The third variant seems to be a pretty wasteful estimation.

>> So the last variant seems to be rather wasteful, and the second one the most accurate so far.

>> Eric:
>>      From the warn_on_once, delta should not be smaller than len, but probably they should be as close together as possible.
>>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN ?



>> [  116.965062] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  117.094538] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.094707] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.094869] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.095058] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.095216] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.096102] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.096311] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.096373] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.150398] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.150459] eth0: mtu:1500 data_len:54 len before:0 len after:54 truesize before:896 truesize after:694 nr_frags:1 variant1:-202(694) variant2:0(896) variant3:4096(4992)
>> [  117.536901] eth0: mtu:1500 data_len:53642 len before:0 len after:53642 truesize before:896 truesize after:54282 nr_frags:14 variant1:53386(54282) variant2:53386(54282) variant3:57344(58240)
>> [  117.537463] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
>> [  117.537915] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
>> [  117.538543] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18634(19530) variant3:24576(25472)
>> [  117.539223] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
>> [  117.539283] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:2 variant1:7050(7946) variant2:7050(7946) variant3:8192(9088)
>> [  117.539403] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:2
>> [  117.540035] eth0: mtu:1500 data_len:4410 len before:0 len after:4410 truesize before:896 truesize after:5050 nr_frags:3 variant1:4154(5050) variant2:4304(5200) variant3:12288(13184)
>> [  117.540153] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
>> [  121.981917] net_ratelimit: 27 callbacks suppressed
>> [  121.981960] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  122.985019] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  123.988308] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  124.991961] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  125.995003] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
>> [  126.998324] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)



>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index c26e28b..8833e38 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -964,6 +964,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>>         struct sk_buff_head tmpq;
>>         unsigned long flags;
>>         int err;
>> +       int tsz,len;

>>         spin_lock(&np->rx_lock);

>> @@ -1037,9 +1038,22 @@ err:
>>                  * receive throughout using the standard receive
>>                  * buffer size was cut by 25%(!!!).
>>                  */
>> -               skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>> +
>> +
>> +
>> +
>> +                tsz = skb->truesize;
>> +                len = skb->len;
>> +                /* skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags; */
>> +                skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>>                 skb->len += skb->data_len;

>> +               net_warn_ratelimited("%s: mtu:%d data_len:%d len before:%d len after:%d truesize before:%d truesize after:%d nr_frags:%d variant1:%d(%d) variant2:%d(%d) variant3:%d(%d) \n",
>> +                        skb->dev->name, skb->dev->mtu, skb->data_len, len,  skb->len,tsz, skb->truesize, skb_shinfo(skb)->nr_frags,
>> +                        skb->data_len - RX_COPY_THRESHOLD, tsz + skb->data_len - RX_COPY_THRESHOLD ,
>> +                        skb->data_len - NETFRONT_SKB_CB(skb)->pull_to, tsz + skb->data_len - NETFRONT_SKB_CB(skb)->pull_to,
>> +                        PAGE_SIZE * skb_shinfo(skb)->nr_frags, tsz + (PAGE_SIZE * skb_shinfo(skb)->nr_frags));
>> +
>>                 if (rx->flags & XEN_NETRXF_csum_blank)
>>                         skb->ip_summed = CHECKSUM_PARTIAL;
>>                 else if (rx->flags & XEN_NETRXF_data_validated)
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index 3ab989b..6d0cd86 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -3471,6 +3471,16 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,

>>         WARN_ON_ONCE(delta < len);

>> +       if(delta < len) {
>> +               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA < LEN delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
>> +                        to->dev->name, from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
>> +       }
>> +
>> +       if (delta > len && delta - len > 100) {
>> +               net_warn_ratelimited("to: %s from: %s  skb_try_coalesce: DELTA - LEN > 100 delta:%d len:%d from->truesize:%d skb_headlen(from):%d skb_shinfo(to)->nr_frags:%d skb_shinfo(from)->nr_frags:%d \n",
>> +                        to->dev->name,from->dev->name, delta, len, from->truesize, skb_headlen(from), skb_shinfo(to)->nr_frags, skb_shinfo(from)->nr_frags);
>> +       }
>> +
>>         memcpy(skb_shinfo(to)->frags + skb_shinfo(to)->nr_frags,
>>                skb_shinfo(from)->frags,
>>                skb_shinfo(from)->nr_frags * sizeof(skb_frag_t));



> Ok, I succeeded in triggering the warn_on_once, but it seems the extra debug info from netfront was just rate-limited away for the offending packet :(

> Dec 20 15:17:33 media kernel: [  393.464062] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.464438] eth0: mtu:1500 data_len:762 len before:0 len after:762 truesize before:896 truesize after:1402 nr_frags:1 variant1:506(1402) variant2:506(1402) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.465083] eth0: mtu:1500 data_len:118 len before:0 len after:118 truesize before:896 truesize after:758 nr_frags:1 variant1:-138(758) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.466114] eth0: mtu:1500 data_len:118 len before:0 len after:118 truesize before:896 truesize after:758 nr_frags:1 variant1:-138(758) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:33 media kernel: [  393.467336] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  394.940211] ------------[ cut here ]------------
> Dec 20 15:17:35 media kernel: [  394.940259] WARNING: at net/core/skbuff.c:3472 skb_try_coalesce+0x3fc/0x470()
> Dec 20 15:17:35 media kernel: [  394.940282] Modules linked in:
> Dec 20 15:17:35 media kernel: [  394.940306] Pid: 2632, comm: glusterfs Not tainted 3.7.0-rc0-20121220-netfrontdebug1 #1
> Dec 20 15:17:35 media kernel: [  394.940330] Call Trace:
> Dec 20 15:17:35 media kernel: [  394.940343]  <IRQ>  [<ffffffff8106889a>] warn_slowpath_common+0x7a/0xb0
> Dec 20 15:17:35 media kernel: [  394.940384]  [<ffffffff810688e5>] warn_slowpath_null+0x15/0x20
> Dec 20 15:17:35 media kernel: [  394.940409]  [<ffffffff8184298c>] skb_try_coalesce+0x3fc/0x470
> Dec 20 15:17:35 media kernel: [  394.940434]  [<ffffffff818fb049>] tcp_try_coalesce+0x69/0xc0
> Dec 20 15:17:35 media kernel: [  394.940458]  [<ffffffff818fb0f4>] tcp_queue_rcv+0x54/0x100
> Dec 20 15:17:35 media kernel: [  394.940481]  [<ffffffff8190029f>] ? tcp_mtup_init+0x2f/0x90
> Dec 20 15:17:35 media kernel: [  394.940504]  [<ffffffff818ffbdb>] tcp_rcv_established+0x2bb/0x6a0
> Dec 20 15:17:35 media kernel: [  394.940528]  [<ffffffff8190839f>] ? tcp_v4_rcv+0x6cf/0xb10
> Dec 20 15:17:35 media kernel: [  394.940551]  [<ffffffff81907985>] tcp_v4_do_rcv+0x135/0x480
> Dec 20 15:17:35 media kernel: [  394.940576]  [<ffffffff819b3532>] ? _raw_spin_lock_nested+0x42/0x50
> Dec 20 15:17:35 media kernel: [  394.940600]  [<ffffffff8190839f>] ? tcp_v4_rcv+0x6cf/0xb10
> Dec 20 15:17:35 media kernel: [  394.940623]  [<ffffffff8190862d>] tcp_v4_rcv+0x95d/0xb10
> Dec 20 15:17:35 media kernel: [  394.940666]  [<ffffffff810b5688>] ? lock_acquire+0xd8/0x100
> Dec 20 15:17:35 media kernel: [  394.940694]  [<ffffffff818e4d6a>] ip_local_deliver_finish+0x11a/0x230
> Dec 20 15:17:35 media kernel: [  394.940720]  [<ffffffff818e4c95>] ? ip_local_deliver_finish+0x45/0x230
> Dec 20 15:17:35 media kernel: [  394.940745]  [<ffffffff818e4eb8>] ip_local_deliver+0x38/0x80
> Dec 20 15:17:35 media kernel: [  394.940784]  [<ffffffff818e447a>] ip_rcv_finish+0x15a/0x630
> Dec 20 15:17:35 media kernel: [  394.940807]  [<ffffffff818e4b68>] ip_rcv+0x218/0x300
> Dec 20 15:17:35 media kernel: [  394.940829]  [<ffffffff8184bf2d>] __netif_receive_skb+0x65d/0x8d0
> Dec 20 15:17:35 media kernel: [  394.940853]  [<ffffffff8184ba15>] ? __netif_receive_skb+0x145/0x8d0
> Dec 20 15:17:35 media kernel: [  394.940889]  [<ffffffff810b192d>] ? trace_hardirqs_on+0xd/0x10
> Dec 20 15:17:35 media kernel: [  394.940914]  [<ffffffff810fecbb>] ? free_hot_cold_page+0x1ab/0x1e0
> Dec 20 15:17:35 media kernel: [  394.940939]  [<ffffffff8184e4f8>] netif_receive_skb+0x28/0xf0
> Dec 20 15:17:35 media kernel: [  394.940964]  [<ffffffff81843e83>] ? __pskb_pull_tail+0x253/0x340
> Dec 20 15:17:35 media kernel: [  394.941000]  [<ffffffff8164fbb5>] xennet_poll+0xae5/0xed0
> Dec 20 15:17:35 media kernel: [  394.941024]  [<ffffffff81080081>] ? wake_up_worker+0x1/0x30
> Dec 20 15:17:35 media kernel: [  394.941046]  [<ffffffff810b2fbc>] ? validate_chain+0x13c/0x1300
> Dec 20 15:17:35 media kernel: [  394.941075]  [<ffffffff8184ed66>] net_rx_action+0x136/0x260
> Dec 20 15:17:35 media kernel: [  394.941098]  [<ffffffff81070551>] ? __do_softirq+0x71/0x1a0
> Dec 20 15:17:35 media kernel: [  394.941133]  [<ffffffff810705a9>] __do_softirq+0xc9/0x1a0
> Dec 20 15:17:35 media kernel: [  394.941157]  [<ffffffff819b623c>] call_softirq+0x1c/0x30
> Dec 20 15:17:35 media kernel: [  394.941179]  [<ffffffff8100fdc5>] do_softirq+0x85/0xf0
> Dec 20 15:17:35 media kernel: [  394.941201]  [<ffffffff8107041e>] irq_exit+0x9e/0xd0
> Dec 20 15:17:35 media kernel: [  394.941235]  [<ffffffff81430b1f>] xen_evtchn_do_upcall+0x2f/0x40
> Dec 20 15:17:35 media kernel: [  394.941259]  [<ffffffff819b629e>] xen_do_hypervisor_callback+0x1e/0x30
> Dec 20 15:17:35 media kernel: [  394.941279]  <EOI>  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
> Dec 20 15:17:35 media kernel: [  394.941318]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
> Dec 20 15:17:35 media kernel: [  394.941356]  [<ffffffff8100890d>] ? xen_force_evtchn_callback+0xd/0x10
> Dec 20 15:17:35 media kernel: [  394.941381]  [<ffffffff810092b2>] ? check_events+0x12/0x20
> Dec 20 15:17:35 media kernel: [  394.941405]  [<ffffffff81009259>] ? xen_irq_enable_direct_reloc+0x4/0x4
> Dec 20 15:17:35 media kernel: [  394.941432]  [<ffffffff819b3f6c>] ? _raw_spin_unlock_irq+0x3c/0x70
> Dec 20 15:17:35 media kernel: [  394.941473]  [<ffffffff81095f83>] ? finish_task_switch+0x83/0xe0
> Dec 20 15:17:35 media kernel: [  394.941507]  [<ffffffff81095f46>] ? finish_task_switch+0x46/0xe0
> Dec 20 15:17:35 media kernel: [  394.941533]  [<ffffffff819b2434>] ? __schedule+0x444/0x880
> Dec 20 15:17:35 media kernel: [  394.941555]  [<ffffffff810b2fbc>] ? validate_chain+0x13c/0x1300
> Dec 20 15:17:35 media kernel: [  394.941580]  [<ffffffff810b4c4b>] ? __lock_acquire+0x46b/0xdd0
> Dec 20 15:17:35 media kernel: [  394.941614]  [<ffffffff810b4c4b>] ? __lock_acquire+0x46b/0xdd0
> Dec 20 15:17:35 media kernel: [  394.941638]  [<ffffffff819aff95>] ? __mutex_unlock_slowpath+0x135/0x1d0
> Dec 20 15:17:35 media kernel: [  394.941663]  [<ffffffff819b2904>] ? schedule+0x24/0x70
> Dec 20 15:17:35 media kernel: [  394.941697]  [<ffffffff819b179d>] ? schedule_hrtimeout_range_clock+0x11d/0x140
> Dec 20 15:17:35 media kernel: [  394.941725]  [<ffffffff810b5688>] ? lock_acquire+0xd8/0x100
> Dec 20 15:17:35 media kernel: [  394.941748]  [<ffffffff8118a558>] ? ep_poll+0xf8/0x3a0
> Dec 20 15:17:35 media kernel: [  394.941770]  [<ffffffff819b4015>] ? _raw_spin_unlock_irqrestore+0x75/0xa0
> Dec 20 15:17:35 media kernel: [  394.941808]  [<ffffffff810b1818>] ? trace_hardirqs_on_caller+0xf8/0x200
> Dec 20 15:17:35 media kernel: [  394.941833]  [<ffffffff819b17ce>] ? schedule_hrtimeout_range+0xe/0x10
> Dec 20 15:17:35 media kernel: [  394.941856]  [<ffffffff8118a75a>] ? ep_poll+0x2fa/0x3a0
> Dec 20 15:17:35 media kernel: [  394.941878]  [<ffffffff81098630>] ? try_to_wake_up+0x310/0x310
> Dec 20 15:17:35 media kernel: [  394.941913]  [<ffffffff810b5b17>] ? lock_release+0x117/0x250
> Dec 20 15:17:35 media kernel: [  394.941938]  [<ffffffff81165fd7>] ? fget_light+0xd7/0x140
> Dec 20 15:17:35 media kernel: [  394.941959]  [<ffffffff81165f3a>] ? fget_light+0x3a/0x140
> Dec 20 15:17:35 media kernel: [  394.941981]  [<ffffffff8118a8ce>] ? sys_epoll_wait+0xce/0xe0
> Dec 20 15:17:35 media kernel: [  394.942015]  [<ffffffff819b4e69>] ? system_call_fastpath+0x16/0x1b
> Dec 20 15:17:35 media kernel: [  394.942036] ---[ end trace 6f3a832c9e91c8af ]---
> Dec 20 15:17:35 media kernel: [  394.942056] to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:22978 len:23168 from->truesize:23874 skb_headlen(from):0 skb_shinfo(to)->nr_frags:4 skb_shinfo(from)->nr_frags:6
> Dec 20 15:17:35 media kernel: [  394.968199] to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:14290 len:14480 from->truesize:15186 skb_headlen(from):0 skb_shinfo(to)->nr_frags:13 skb_shinfo(from)->nr_frags:4
> Dec 20 15:17:35 media kernel: [  395.262814] net_ratelimit: 371 callbacks suppressed
> Dec 20 15:17:35 media kernel: [  395.262858] eth0: mtu:1500 data_len:90 len before:0 len after:90 truesize before:896 truesize after:730 nr_frags:1 variant1:-166(730) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.264767] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.266193] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.268422] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.271617] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.274794] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.278104] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.281319] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.284454] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.287797] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)
> Dec 20 15:17:35 media kernel: [  395.291121] eth0: mtu:1500 data_len:66 len before:0 len after:66 truesize before:896 truesize after:706 nr_frags:1 variant1:-190(706) variant2:0(896) variant3:4096(4992)


Hmm, perhaps a better example; I have indented some interesting points:

        Dec 20 14:12:57 media kernel: [  794.895136] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.895431] eth0: mtu:1500 data_len:17442 len before:0 len after:17442 truesize before:896 truesize after:18082 nr_frags:5 variant1:17186(18082) variant2:17186(18082) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.895616] eth0: mtu:1500 data_len:18890 len before:0 len after:18890 truesize before:896 truesize after:19530 nr_frags:6 variant1:18634(19530) variant2:18824(19720) variant3:24576(25472)
        Dec 20 14:12:57 media kernel: [  794.895804] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
        Dec 20 14:12:57 media kernel: [  794.895823] eth0: mtu:1500 data_len:7306 len before:0 len after:7306 truesize before:896 truesize after:7946 nr_frags:3 variant1:7050(7946) variant2:7050(7946) variant3:12288(13184)
        Dec 20 14:12:57 media kernel: [  794.895868] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:7690 len:7240 from->truesize:7946 skb_headlen(from):190 skb_shinfo(to)->nr_frags:5 skb_shinfo(from)->nr_frags:3
        Dec 20 14:12:57 media kernel: [  794.896133] eth0: mtu:1500 data_len:15994 len before:0 len after:15994 truesize before:896 truesize after:16634 nr_frags:5 variant1:15738(16634) variant2:15738(16634) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.896152] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  794.896200] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:1402 len:952 from->truesize:1658 skb_headlen(from):190 skb_shinfo(to)->nr_frags:6 skb_shinfo(from)->nr_frags:1
        Dec 20 14:12:57 media kernel: [  794.907232] eth0: mtu:1500 data_len:23234 len before:0 len after:23234 truesize before:896 truesize after:23874 nr_frags:7 variant1:22978(23874) variant2:22978(23874) variant3:28672(29568)
        Dec 20 14:12:57 media kernel: [  794.907517] eth0: mtu:1500 data_len:24682 len before:0 len after:24682 truesize before:896 truesize after:25322 nr_frags:7 variant1:24426(25322) variant2:24426(25322) variant3:28672(29568)
        Dec 20 14:12:57 media kernel: [  794.907693] eth0: mtu:1500 data_len:26130 len before:0 len after:26130 truesize before:896 truesize after:26770 nr_frags:7 variant1:25874(26770) variant2:25874(26770) variant3:28672(29568)
        Dec 20 14:12:57 media kernel: [  794.907882] eth0: mtu:1500 data_len:14546 len before:0 len after:14546 truesize before:896 truesize after:15186 nr_frags:5 variant1:14290(15186) variant2:14290(15186) variant3:20480(21376)
        Dec 20 14:12:57 media kernel: [  794.907901] eth0: mtu:1500 data_len:13098 len before:0 len after:13098 truesize before:896 truesize after:13738 nr_frags:4 variant1:12842(13738) variant2:12842(13738) variant3:16384(17280)
        Dec 20 14:12:57 media kernel: [  794.907938] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:13482 len:13032 from->truesize:13738 skb_headlen(from):190 skb_shinfo(to)->nr_frags:6 skb_shinfo(from)->nr_frags:4
        Dec 20 14:12:57 media kernel: [  794.908191] eth0: mtu:1500 data_len:29026 len before:0 len after:29026 truesize before:896 truesize after:29666 nr_frags:9 variant1:28770(29666) variant2:28880(29776) variant3:36864(37760)
        Dec 20 14:12:57 media kernel: [  794.908386] eth0: mtu:1500 data_len:30474 len before:0 len after:30474 truesize before:896 truesize after:31114 nr_frags:8 variant1:30218(31114) variant2:30218(31114) variant3:32768(33664)

A1) Here we have a packet with data_len:5858, truesize set to 6498 and nr_frags:2
        Dec 20 14:12:57 media kernel: [  794.908560] eth0: mtu:1500 data_len:5858 len before:0 len after:5858 truesize before:896 truesize after:6498 nr_frags:2 variant1:5602(6498) variant2:5602(6498) variant3:8192(9088)

        Dec 20 14:12:57 media kernel: [  794.908581] eth0: mtu:1500 data_len:26130 len before:0 len after:26130 truesize before:896 truesize after:26770 nr_frags:7 variant1:25874(26770) variant2:25874(26770) variant3:28672(29568)

A2) That seems to end up in skb_try_coalesce; from->nr_frags is still 2 and delta >> len in this case: no warning, but perhaps wasteful?
        Dec 20 14:12:57 media kernel: [  794.908616] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:6242 len:5792 from->truesize:6498 skb_headlen(from):190 skb_shinfo(to)->nr_frags:9 skb_shinfo(from)->nr_frags:2

        Dec 20 14:12:57 media kernel: [  794.908834] eth0: mtu:1500 data_len:33370 len before:0 len after:33370 truesize before:896 truesize after:34010 nr_frags:9 variant1:33114(34010) variant2:33114(34010) variant3:36864(37760)

B1) Here we again have a packet with data_len:5858 and truesize set to 6498, but nr_frags:3 this time.
        Dec 20 14:12:57 media kernel: [  794.908992] eth0: mtu:1500 data_len:5858 len before:0 len after:5858 truesize before:896 truesize after:6498 nr_frags:3 variant1:5602(6498) variant2:5792(6688) variant3:12288(13184)
        Dec 20 14:12:57 media kernel: [  794.909012] eth0: mtu:1500 data_len:29026 len before:0 len after:29026 truesize before:896 truesize after:29666 nr_frags:8 variant1:28770(29666) variant2:28770(29666) variant3:32768(33664)

B2) That seems to end up in skb_try_coalesce; from->nr_frags is now 2 instead of 3 and delta < len in this case, so it would have triggered the warn_on_once.
        Dec 20 14:12:57 media kernel: [  794.909040] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA < LEN delta:5602 len:5792 from->truesize:6498 skb_headlen(from):0 skb_shinfo(to)->nr_frags:9 skb_shinfo(from)->nr_frags:2

        Dec 20 14:12:57 media kernel: [  794.909673] eth0: mtu:1500 data_len:1514 len before:0 len after:1514 truesize before:896 truesize after:2154 nr_frags:1 variant1:1258(2154) variant2:1258(2154) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  794.909692] eth0: mtu:1500 data_len:522 len before:0 len after:522 truesize before:896 truesize after:1162 nr_frags:1 variant1:266(1162) variant2:266(1162) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  794.909736] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:906 len:456 from->truesize:1162 skb_headlen(from):190 skb_shinfo(to)->nr_frags:2 skb_shinfo(from)->nr_frags:1
        Dec 20 14:12:57 media kernel: [  794.910205] eth0: mtu:1500 data_len:36266 len before:0 len after:36266 truesize before:896 truesize after:36906 nr_frags:10 variant1:36010(36906) variant2:36010(36906) variant3:40960(41856)
        Dec 20 14:12:57 media kernel: [  794.910706] eth0: mtu:1500 data_len:37714 len before:0 len after:37714 truesize before:896 truesize after:38354 nr_frags:10 variant1:37458(38354) variant2:37458(38354) variant3:40960(41856)
        Dec 20 14:12:57 media kernel: [  794.911472] eth0: mtu:1500 data_len:27578 len before:0 len after:27578 truesize before:896 truesize after:28218 nr_frags:8 variant1:27322(28218) variant2:27322(28218) variant3:32768(33664)
        Dec 20 14:12:57 media kernel: [  794.911695] eth0: mtu:1500 data_len:29026 len before:0 len after:29026 truesize before:896 truesize after:29666 nr_frags:9 variant1:28770(29666) variant2:28770(29666) variant3:36864(37760)
        Dec 20 14:12:57 media kernel: [  795.015511] eth0: mtu:1500 data_len:1018 len before:0 len after:1018 truesize before:896 truesize after:1658 nr_frags:1 variant1:762(1658) variant2:762(1658) variant3:4096(4992)
        Dec 20 14:12:57 media kernel: [  795.015585] skbuff: to: (null) from: (null)  skb_try_coalesce: DELTA - LEN > 100 delta:1402 len:952 from->truesize:1658 skb_headlen(from):190 skb_shinfo(to)->nr_frags:10 skb_shinfo(from)->nr_frags:1
        Dec 20 14:12:57 media kernel: [  795.015641] eth0: mtu:1500 data_len:10202 len before:0 len after:10202 truesize before:896 truesize after:10842 nr_frags:4 variant1:9946(10842) variant2:9946(10842) variant3:16384(17280)
        Dec 20 14:12:57 media kernel: [  795.015657] eth0: mtu:1500 data_len:42 len before:0 len after:42 truesize before:896 truesize after:682 nr_frags:1 variant1:-214(682) variant2:0(896) variant3:4096(4992)
        Dec 20 14:12:58 media kernel: [  795.817824] net_ratelimit: 9 callbacks suppressed

--
Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 15:06:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 15:06:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlhhM-0004xa-AV; Thu, 20 Dec 2012 15:06:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlhhK-0004xV-V3
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:06:11 +0000
Received: from [85.158.139.211:6555] by server-11.bemta-5.messagelabs.com id
	58/2D-31624-26923D05; Thu, 20 Dec 2012 15:06:10 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1356015967!20551101!1
X-Originating-IP: [209.85.210.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4419 invoked from network); 20 Dec 2012 15:06:08 -0000
Received: from mail-ia0-f169.google.com (HELO mail-ia0-f169.google.com)
	(209.85.210.169)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 15:06:08 -0000
Received: by mail-ia0-f169.google.com with SMTP id r4so2931135iaj.14
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 07:06:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=mvyXmOpBLtyAUxdrEgCZJcoDk84EMwZWY53m6vY6RY4=;
	b=be3NA1Atoxh2dIWO8BDx/YFzSsks3/j7onKwTk+0cu1ChALZ2m0gNwODyZbzRJPJQk
	zov73mmGhBLKDRbLMHiEnPLeScy8HCUZz+h0PkQGpwpNEdRCXpyztFm3gHQjif/Is4D4
	yJE7FejkQwJzPc+vBK2l0EvjArixLy1eKrwYThYh3fkUM91W1wQ66Di0MrRSuPsG4nKo
	Y1XPvhvRF/9xWSi04y/p5s26R4YAWi3684PduVz5BWy5JS5dadtLuimpWV0C3Q32IjZ8
	AZdjISfcYHvrUntTE5/WxwLlDEBByrBB4YoDTQ/E/jpTYgRAyG3NIg/JQYP3m7i9Rra+
	UjvA==
MIME-Version: 1.0
Received: by 10.50.197.169 with SMTP id iv9mr10282744igc.32.1356015966981;
	Thu, 20 Dec 2012 07:06:06 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 07:06:06 -0800 (PST)
In-Reply-To: <CCF8CEEA.561B5%keir@xen.org>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
Date: Thu, 20 Dec 2012 23:06:06 +0800
X-Google-Sender-Auth: sudelMEUwZPjm7KbUibCqG_Ok5E
Message-ID: <CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Keir Fraser <keir@xen.org>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 10:19 PM, Keir Fraser <keir@xen.org> wrote:
> On 20/12/2012 13:31, "G.R." <firemeteor@users.sourceforge.net> wrote:
>
>> If concern is about security, the same argument should apply to the
>> first page (the portion before the page offset).
>> The problem is that I have no idea what is around the mapped page. Not
>> sure who has the knowledge.
>
> Well we can't do better than mapping some whole number of pages, really.
> Unless we trap to qemu on every access. I don't think we'd go there unless
> there really were a known security issue. But mapping only the exact number
> of pages we definitely need is a good principle.
>
>> What's the standard flow to handle such map with offset?
>> I expect this to be a common case, since the ioremap function in linux
>> kernel accept this.
>
> map_size = ((host_opregion & 0xfff) + 8096 + 0xfff) >> 12
>
Keir, I believe this expression should give the same result.
First of all, 8096 should be 8192 :-), and that part contributes a
constant 2 after the right shift.
The remaining part is ((host_opregion & 0xfff) + 0xfff) >> 12.
As long as (host_opregion & 0xfff) is non-zero, the sum falls within
[0x1000, 0x1ffe], which yields 1 after the right shift.
So as long as there is no known security risk (which I'm not sure
about), the patch should be fine.

> Possibly with suitable macros used instead of magic numbers (e.g., XC_PAGE_*
> and a macro for the opregion size).

I guess there is no predefined macro for the OpRegion size. And I guess
I need to define it twice, once in each code base?

>  -- Keir
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 15:14:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 15:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlhow-0005AL-B2; Thu, 20 Dec 2012 15:14:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Tlhou-0005AG-Pa
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:14:01 +0000
Received: from [85.158.137.99:48142] by server-16.bemta-3.messagelabs.com id
	2F/91-27634-33B23D05; Thu, 20 Dec 2012 15:13:55 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356016434!12418158!1
X-Originating-IP: [209.85.214.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15012 invoked from network); 20 Dec 2012 15:13:54 -0000
Received: from mail-bk0-f44.google.com (HELO mail-bk0-f44.google.com)
	(209.85.214.44)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 15:13:54 -0000
Received: by mail-bk0-f44.google.com with SMTP id w11so1751090bku.17
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 07:13:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=5f6sB7IyWa3MuTiSX4xBrBXkU2tnrh0kmxn7WPk7itU=;
	b=Xl0LaTIepjF7rhDxP5mP6HGBfL7cUSn/ZfmDvl1C7u/v5UGP5TAZ9XKPovo0Pqoqf4
	Rh0U7NU7+TcHgMIMOa9n3T7cX5cWeLD/j0lHAHpgG6ZnW020V4wO6e6mtD4c1I9flmuT
	ZsqU3AVHc5f1rmNtWcwOo9/MpXxNkCTn6uY39u2W5ReqetRqoyzXiSVtqDKM30Y4R7md
	5hP92D8dZ7ZUokr2SoDuiaBqwjcAaORUF7yXktA43e6WaouufpziWnI0iEA9gYw4QhYZ
	WRy3jsMD6RbDmzc7ym7dIguWERYde9CDwjbPShBLt7ULYh7dJ0TzWnq7cVf9nxKbYyw/
	c0/A==
Received: by 10.112.87.40 with SMTP id u8mr3965483lbz.50.1356016433416; Thu,
	20 Dec 2012 07:13:53 -0800 (PST)
MIME-Version: 1.0
Received: by 10.152.147.168 with HTTP; Thu, 20 Dec 2012 07:13:33 -0800 (PST)
In-Reply-To: <50D3344002000078000B1CF7@nat28.tlf.novell.com>
References: <patchbomb.1355944036@Solace>
	<4c57c8f1e7ad20c15b8c.1355944038@Solace>
	<50D2E5EF02000078000B1B3C@nat28.tlf.novell.com>
	<50D321AC.6080400@eu.citrix.com>
	<50D3344002000078000B1CF7@nat28.tlf.novell.com>
From: Dario Faggioli <dario.faggioli@citrix.com>
Date: Thu, 20 Dec 2012 16:13:33 +0100
X-Google-Sender-Auth: GaTqIMlHs6uNfGK0X_Fh0pIMjeE
Message-ID: <CAAWQectfkwsUsp9Cj9SpRvPfzwi8ii0UqgLxgxT1W8E2tPRVZw@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: MarcusGranado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, AnilMadhavapeddy <anil@recoil.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH 02 of 10 v2] xen,
 libxc: introduce node maps and masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 3:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> 2 * BITS_PER_LONG is just going to be 128, right?  It wasn't too long
>> ago that I would have considered 4096 cores a pretty unreasonable
>> expectation.  Is there a particular reason you think this is going to be
>> more than a few years away, and a particular harm in having the code
>> here to begin with?
>
> I just don't see node counts grow even near as quickly as core/
> thread ones.
>
Yep, same here, that's why, despite having put that code there, I agree
with Jan in removing it. :-)

That feeling matches with what we've been repeatedly told by hardware
vendors, about node count already saturating at around 8. Beyond that, a
different architectural approach will be needed to tackle scalability issues
(clusters? NoCs?).

Just for the record, Linux still deals with this in the same way it did when
we took these files from there (i.e., without this hunk):
 - cpumasks exist in both static and dynamically allocated (cpumask_var_t) forms
 - nodemasks are only and always statically allocated

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
---------------------------------------------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 15:35:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 15:35:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tli9V-0005PZ-A3; Thu, 20 Dec 2012 15:35:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tli9U-0005PU-8g
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:35:16 +0000
Received: from [85.158.143.99:55555] by server-1.bemta-4.messagelabs.com id
	5E/EB-28401-33033D05; Thu, 20 Dec 2012 15:35:15 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1356017711!23337428!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM2NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16442 invoked from network); 20 Dec 2012 15:35:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 15:35:13 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="1351271"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 15:35:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 10:35:10 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tli9O-0008VF-Kl;
	Thu, 20 Dec 2012 15:35:10 +0000
Message-ID: <50D32EBA.4020401@eu.citrix.com>
Date: Thu, 20 Dec 2012 15:28:58 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>	<50D2B3DE.70206@ts.fujitsu.com>
	<1355991370.28419.15.camel@Abyss> <50D2CB96.4030509@ts.fujitsu.com>
	<1355992411.28419.25.camel@Abyss> <50D2CECB.4030108@ts.fujitsu.com>
In-Reply-To: <50D2CECB.4030108@ts.fujitsu.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/12 08:39, Juergen Gross wrote:
> Am 20.12.2012 09:33, schrieb Dario Faggioli:
>> On Thu, 2012-12-20 at 08:25 +0000, Juergen Gross wrote:
>>>> BTW, can you be a little bit more specific about where you're suggesting
>>>> to put it? I'm sorry but I'm not sure I figured what you mean by "the
>>>> scheduler private per-pcpu area"... Do you perhaps mean making it a
>>>> member of `struct csched_pcpu' ?
>>> Yes, that's what I would suggest.
>>>
>> Ok then, functionally, that is going to be exactly the same thing as
>> where it is right now, i.e., a set of global per_cpu() variables. It is
>> possible for your solution to bring some cache/locality benefits,
>> although that will very much depends on single cases, architecture,
>> workload, etc.
> The space is only allocated if sched_credit is responsible for the pcpu.

That makes sense.
  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 15:39:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 15:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TliDe-0005YU-VW; Thu, 20 Dec 2012 15:39:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <erdnetdev@gmail.com>) id 1TliDd-0005YO-Ld
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 15:39:33 +0000
Received: from [85.158.139.83:19493] by server-7.bemta-5.messagelabs.com id
	C1/1E-08009-43133D05; Thu, 20 Dec 2012 15:39:32 +0000
X-Env-Sender: erdnetdev@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1356017970!30041824!1
X-Originating-IP: [209.85.210.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26873 invoked from network); 20 Dec 2012 15:39:32 -0000
Received: from mail-da0-f46.google.com (HELO mail-da0-f46.google.com)
	(209.85.210.46)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 15:39:32 -0000
Received: by mail-da0-f46.google.com with SMTP id p5so1570581dak.5
	for <xen-devel@lists.xensource.com>;
	Thu, 20 Dec 2012 07:39:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:subject:from:to:cc:in-reply-to:references:content-type
	:date:message-id:mime-version:x-mailer:content-transfer-encoding;
	bh=a/8x/BBfxJ43Y9f4NHL6cmVR186Z3o2v/aMluBVtV8U=;
	b=FUWx0HlwMPjFtiZSvTo8ISdOnzJPkOq94s1P873TUSrN/d1PG210En69iZFqAsWzTA
	xEJ3SbOziNN0gvzPhBQ8Ia3aejJTIR85a/UxjTh+295+XQUrIAqrxgOdyJ3Q2PfUt1KZ
	Gqh/6SdnQ+syBGsJhb4wxyINadamzgzZlXR9po9alO9wFSy2FMZ/jb/dhEeoaESHQHOX
	92J4aG4WJ1YBeDJLAqLy59TylcQLMIe71buq2YhTXyQKe/2nlz8bCnCFLJyVdn+RsVpe
	+LBx/55GWTripSnT+VK80f+7n2DfeujBVSHKvl5v0brKiy8sElULSNiLuLQut1LrBEnn
	JVhw==
X-Received: by 10.66.73.165 with SMTP id m5mr28351051pav.78.1356017970298;
	Thu, 20 Dec 2012 07:39:30 -0800 (PST)
Received: from [172.19.250.213] ([172.19.250.213])
	by mx.google.com with ESMTPS id f10sm2035765pav.18.2012.12.20.07.39.29
	(version=SSLv3 cipher=OTHER); Thu, 20 Dec 2012 07:39:29 -0800 (PST)
From: Eric Dumazet <erdnetdev@gmail.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1797374383.20121220135139@eikelenboom.it>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
	<1797374383.20121220135139@eikelenboom.it>
Date: Thu, 20 Dec 2012 07:39:28 -0800
Message-ID: <1356017968.21834.2859.camel@edumazet-glaptop>
Mime-Version: 1.0
X-Mailer: Evolution 2.28.3 
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 13:51 +0100, Sander Eikelenboom wrote:

> Eric:
>      From the warn_on_once, delta should be smaller than len, but probably they should be as close together as possible.
>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN?

I would use the most exact value, which is:

   skb->truesize += nr_frags * PAGE_SIZE;

Then, if we later spot a regression in some stacks, we can adapt the
limiting parameters. I did a lot of work in the GRO and TCP stacks to reduce
memory usage, and further changes are possible.

We really want to account for memory, because we want to control how memory
is used on our machines and not let some users consume more than the
amount allowed to them.
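The accounting rule suggested above can be sketched with a toy model (the struct and names here are illustrative stand-ins only; the real struct sk_buff is kernel-internal):

```c
#include <assert.h>

#define PAGE_SIZE 4096u

/* Toy stand-in for the kernel's sk_buff, for illustration only. */
struct skb_model {
    unsigned int len;       /* payload bytes */
    unsigned int truesize;  /* memory actually consumed by the skb */
};

/* The suggested rule: each frag occupies a full page, so charge a
 * full PAGE_SIZE per frag to truesize, however many bytes of the
 * page actually carry payload. */
void account_frag_pages(struct skb_model *skb, unsigned int nr_frags)
{
    skb->truesize += nr_frags * PAGE_SIZE;
}
```

Note that truesize can then exceed len; that is intentional, since truesize tracks memory consumed rather than payload carried.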




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 15:58:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 15:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TliVX-0005oa-Me; Thu, 20 Dec 2012 15:58:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james-xen@dingwall.me.uk>) id 1TliVV-0005oV-HJ
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:58:01 +0000
Received: from [193.109.254.147:15444] by server-5.bemta-14.messagelabs.com id
	59/9B-32031-88533D05; Thu, 20 Dec 2012 15:58:00 +0000
X-Env-Sender: james-xen@dingwall.me.uk
X-Msg-Ref: server-14.tower-27.messagelabs.com!1356019072!2030528!1
X-Originating-IP: [81.103.221.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_EXCESS_QP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24766 invoked from network); 20 Dec 2012 15:57:52 -0000
Received: from mtaout01-winn.ispmail.ntl.com (HELO
	mtaout01-winn.ispmail.ntl.com) (81.103.221.47)
	by server-14.tower-27.messagelabs.com with SMTP;
	20 Dec 2012 15:57:52 -0000
Received: from know-smtpout-1.server.virginmedia.net ([62.254.123.4])
	by mtaout01-winn.ispmail.ntl.com
	(InterMail vM.7.08.04.00 201-2186-134-20080326) with ESMTP id
	<20121220155752.YXWS10247.mtaout01-winn.ispmail.ntl.com@know-smtpout-1.server.virginmedia.net>
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 15:57:52 +0000
Received: from [82.32.104.97] (helo=dingwall.me.uk)
	by know-smtpout-1.server.virginmedia.net with esmtp (Exim 4.63)
	(envelope-from <james-xen@dingwall.me.uk>) id 1TliSp-0001yd-B2
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:55:15 +0000
Received: (qmail 25725 invoked from network); 20 Dec 2012 15:55:15 -0000
Received: from apache0.xen.dingwall.me.uk (HELO
	webmail.private.dingwall.me.uk) (192.168.1.35)
	by mail0.xen.dingwall.me.uk with SMTP; 20 Dec 2012 15:55:15 -0000
MIME-Version: 1.0
Date: Thu, 20 Dec 2012 15:55:15 +0000
From: James Dingwall <james-xen@dingwall.me.uk>
To: <xen-devel@lists.xen.org>
In-Reply-To: <20121220145001.GC16443@jajo.eggsoft>
References: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
	<20121220145001.GC16443@jajo.eggsoft>
Message-ID: <4d3255771dfa09868795ff3617ed8873@imap.dingwall.me.uk>
X-Sender: james-xen@dingwall.me.uk
User-Agent: Roundcube Webmail/0.8.3
X-Cloudmark-Analysis: v=1.1 cv=GaEGOwq9FwezmTggA+b6yC6zDZF2HYaK6RN/tSqdnVA=
	c=1 sm=0 a=2-O05olGrMMA:10 a=FVn6k7cUbX0A:10 a=jPJDawAOAc8A:10
	a=IkcTkHD0fZMA:10 a=aZY1jWKCBMqu9AwlxTUA:9 a=QEXdDO2ut3YA:10
	a=HpAAvcLHHh0Zw7uRqdWCyQ==:117
Cc: jajcus@jajcus.net
Subject: Re: [Xen-devel]
 =?utf-8?q?kernel_log_flooded_with=3A_xen=5Fballoon=3A?=
 =?utf-8?q?_reserve=5Fadditional=5Fmemory=3A_add=5Fmemory=28=29_failed=3A_?=
 =?utf-8?q?-17?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-12-20 14:50, Jacek Konieczny wrote:
> On Wed, Dec 19, 2012 at 08:47:22AM +0000, James Dingwall wrote:
>> Hi,
>>
>> I have encountered an apparently benign error on two systems where 
>> the
>> dom0 kernel log is flooded with messages like:
>>
>> [52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff] 
>> cannot
>> be added
>> [52482.163860] xen_balloon: reserve_additional_memory: add_memory()
>> failed: -17
>
> I have seen this bug too (under Xen 4.2.0).
>
> I am using the following workaround to stop those messages:
>
> cat /sys/devices/system/xen_memory/xen_memory0/info/current_kb > \
>         /sys/devices/system/xen_memory/xen_memory0/target_kb
>
> I have not verified yet if Xen 4.2.1 is also affected.

Thanks for the cat tip, that works for me too.  The issue is still
present in Xen 4.2.1 with kernel 3.7.1.

James
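The workaround quoted above simply writes the balloon driver's current size back as its target. A runnable sketch follows; it uses a mock directory, since the real sysfs files exist only in a Xen dom0 (XEN_MEM stands in for /sys/devices/system/xen_memory/xen_memory0):

```shell
#!/bin/sh
# Mock the xen_memory sysfs layout so this runs anywhere; in a real
# dom0, XEN_MEM would be /sys/devices/system/xen_memory/xen_memory0.
XEN_MEM=$(mktemp -d)/xen_memory0
mkdir -p "$XEN_MEM/info"
echo 4194304 > "$XEN_MEM/info/current_kb"   # pretend dom0 currently holds 4 GiB

# The workaround from the thread: set the balloon target to the current
# size, so the driver stops retrying add_memory() and flooding the log.
cat "$XEN_MEM/info/current_kb" > "$XEN_MEM/target_kb"

cat "$XEN_MEM/target_kb"   # prints 4194304
```

Once target_kb equals current_kb, the balloon driver has nothing to reserve, so the reserve_additional_memory retries (and their log messages) stop.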

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:00:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:00:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TliXE-0005tb-B0; Thu, 20 Dec 2012 15:59:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TliXC-0005tR-QK
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:59:47 +0000
Received: from [193.109.254.147:34617] by server-4.bemta-14.messagelabs.com id
	EB/5C-15233-2F533D05; Thu, 20 Dec 2012 15:59:46 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1356019184!3656198!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDE3NDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 525 invoked from network); 20 Dec 2012 15:59:45 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 15:59:45 -0000
Received: from compute1.internal (compute1.nyi.mail.srv.osa [10.202.2.41])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 77C9F20A18;
	Thu, 20 Dec 2012 10:59:43 -0500 (EST)
Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161])
	by compute1.internal (MEProxy); Thu, 20 Dec 2012 10:59:43 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=Fc
	q+dXfbw2NayrDpchGm0++VJY8=; b=afV8ET1Hia5gc+0OZXAksCYyHduHmeMg5f
	QjoNdyG93C/Bh3hBxGaNtfM8Zgv9TVDKV2mXOVBTycnzSFGAfWvDO6hJ3Im99A5G
	R0KF2O9+jYJGr0VLXzUpyAZ0mjnTehhCUTksQ6TrayFnB7MEb0L9C2KHXmx0IHnC
	PHM5CFc94=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=Fcq+
	dXfbw2NayrDpchGm0++VJY8=; b=TvBuVVwvx4XtEC1KJGCp4ZjSr3O8d3XJ8Zbv
	NrsrffckQ1Z7oF08+GSQwO4PJeE3/wizxsSVVzEB2n6V1BMTyU1VFNrHA0VTp31A
	t8LanwPvzGkS57ebJUra8G59YMap8qYKMprzKOy3866kp8G+ooG4y6WRFff9e/0I
	WduyqHI=
X-Sasl-enc: npFyVJd4riRzPXuAv/9BU4uJ7KaXhydeLZVeGn1ttA1m 1356019183
Received: from [10.137.1.17] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id D687E4827CC;
	Thu, 20 Dec 2012 10:59:42 -0500 (EST)
Message-ID: <50D335E6.902@invisiblethingslab.com>
Date: Thu, 20 Dec 2012 16:59:34 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
In-Reply-To: <50BDFA38.7030009@invisiblethingslab.com>
X-Enigmail-Version: 1.4.6
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1586710756914808593=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============1586710756914808593==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigFCACA693B299B38532B215E6"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigFCACA693B299B38532B215E6
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 04.12.2012 14:27, Marek Marczykowski wrote:
> On 03.12.2012 08:39, Jan Beulich wrote:
>>>>> On 30.11.12 at 17:18, Marek Marczykowski <marmarek@invisiblethingslab.com>
>> wrote:
>>> On 30.11.2012 17:12, Jan Beulich wrote:
>>>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12 5:07 PM >>>
>>>>> That was the rare case when resume worked at all... in most cases I've got
>>>>> a reboot, before anything appears on the screen (even the backlight is off) - xen
>>>>> panic? dom0 kernel panic?
>>>>
>>>> Without serial console we won't get very far from here.
>>>>
>>>>> I don't have a serial console, but I have a USB-to-serial port - is it possible to
>>>>> use it as the xen console (in xen 4.1.3)?
>>>>
>>>> Not that I'm aware of. But 4.1.x isn't very interesting from a development
>>>> perspective anyway. If you still had the same problems with 4.3-unstable,
>>>> then that'd be of much more interest to analyze, and you could use the
>>>> EHCI debug port (if one of your controllers has one) based serial console.
>>>
>>> Is it possible to use libxl from xen 4.1 with a newer hypervisor? My libxl is
>>> somehow patched and porting it to a newer version will require some effort.
>>
>> I don't think so, but I also don't think you need a libxl at all for the
>> purposes here (dealing with S3 is a Dom0-only thing).
>
> I've tested the xen 4.2 hypervisor (without serial console) and it also rebooted
> during resume. But it works with xen 4.1.2, so the problem was introduced
> between 4.1.2 and 4.1.3. Will try to get console messages from xen-unstable -
> perhaps this will give some hints.

It still doesn't work on xen 4.1.4; the system reboots at resume. Same on
4.3-unstable (today).
Can you give me some hints about using the EHCI debug port? I've found some info
on the coreboot page [1]. Is there any way to use it without a NET20DC device?
According to lspci my controller has debug port capability.

[1] http://www.coreboot.org/EHCI_Debug_Port

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enigFCACA693B299B38532B215E6
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ0zXpAAoJENuP0xzK19cs3UMH/3tZG75LmSQHj3DHH6BI0OQM
6YEKyWTA0PLNo4rKg4zw42kyNqisO43GCFR9cNBk+yOGkCgptl931+b6g2Q0E5d4
DOmUY4Rk96KmUYhLgLv4ACB6HAWr2lt4zVAusvbun+6XWhQRYG7phIhv8Es7hyr6
ZUmRf5Nq1yES4Jh35Lhag5x2etuGJuhHDvSxAbZ2nJv4WIslToS2mqJtq9B/Y6XY
4JzWnrPMUf065zKQ+Yc+5uuf+G/6Mscscug53YHFzYL3KNSRJFSrnItFc6/m0E3h
FOVzhMNDLjxJAdbvH96nczenYy3VGCNouoTSfhFkjkS0mFJkPTyd6c49t+tXy60=
=4jyq
-----END PGP SIGNATURE-----

--------------enigFCACA693B299B38532B215E6--


--===============1586710756914808593==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1586710756914808593==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 16:00:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:00:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TliXE-0005tb-B0; Thu, 20 Dec 2012 15:59:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TliXC-0005tR-QK
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 15:59:47 +0000
Received: from [193.109.254.147:34617] by server-4.bemta-14.messagelabs.com id
	EB/5C-15233-2F533D05; Thu, 20 Dec 2012 15:59:46 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1356019184!3656198!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDE3NDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 525 invoked from network); 20 Dec 2012 15:59:45 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 15:59:45 -0000
Received: from compute1.internal (compute1.nyi.mail.srv.osa [10.202.2.41])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 77C9F20A18;
	Thu, 20 Dec 2012 10:59:43 -0500 (EST)
Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161])
	by compute1.internal (MEProxy); Thu, 20 Dec 2012 10:59:43 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=Fc
	q+dXfbw2NayrDpchGm0++VJY8=; b=afV8ET1Hia5gc+0OZXAksCYyHduHmeMg5f
	QjoNdyG93C/Bh3hBxGaNtfM8Zgv9TVDKV2mXOVBTycnzSFGAfWvDO6hJ3Im99A5G
	R0KF2O9+jYJGr0VLXzUpyAZ0mjnTehhCUTksQ6TrayFnB7MEb0L9C2KHXmx0IHnC
	PHM5CFc94=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=Fcq+
	dXfbw2NayrDpchGm0++VJY8=; b=TvBuVVwvx4XtEC1KJGCp4ZjSr3O8d3XJ8Zbv
	NrsrffckQ1Z7oF08+GSQwO4PJeE3/wizxsSVVzEB2n6V1BMTyU1VFNrHA0VTp31A
	t8LanwPvzGkS57ebJUra8G59YMap8qYKMprzKOy3866kp8G+ooG4y6WRFff9e/0I
	WduyqHI=
X-Sasl-enc: npFyVJd4riRzPXuAv/9BU4uJ7KaXhydeLZVeGn1ttA1m 1356019183
Received: from [10.137.1.17] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id D687E4827CC;
	Thu, 20 Dec 2012 10:59:42 -0500 (EST)
Message-ID: <50D335E6.902@invisiblethingslab.com>
Date: Thu, 20 Dec 2012 16:59:34 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
In-Reply-To: <50BDFA38.7030009@invisiblethingslab.com>
X-Enigmail-Version: 1.4.6
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1586710756914808593=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============1586710756914808593==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigFCACA693B299B38532B215E6"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigFCACA693B299B38532B215E6
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 04.12.2012 14:27, Marek Marczykowski wrote:
> On 03.12.2012 08:39, Jan Beulich wrote:
>>>>> On 30.11.12 at 17:18, Marek Marczykowski <marmarek@invisiblethingsl=
ab.com>
>> wrote:
>>> On 30.11.2012 17:12, Jan Beulich wrote:
>>>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12 5:0=
7 PM >>>
>>>>> That was the rare case when resume worked at all... in most cased I=
've got
>>>>> reboot, before anything appear on the screen (even backlight is off=
) - xen
>>>>> panic? dom0 kernel panic?
>>>>
>>>> Without serial console we won't get very far from here.
>>>>
>>>>> I don't have serial console, but have USB-to-serial port - is it po=
ssible to
>>>>> use it as xen console (in xen 4.1.3)?
>>>>
>>>> Not that I'm aware of. But 4.1.x isn't very interesting from a devel=
opment
>>>> perspective anyway. If you had the same problems still with 4.3-unst=
able,
>>>> then that's be of much more interest to analyze, and you could use t=
he
>>>> EHCI debug port (if one of your controllers has one) based serial co=
nsole.
>>>
>>> Is it possible to use libxl from xen 4.1 with newer hypervisor? My li=
bxl is
>>> somehow patched and porting it to newer version will require some eff=
ort.
>>
>> I don't think so, but I also don't think you need a libxl at all for t=
he
>> purposes here (dealing with S3 is a Dom0-only thing).
>=20
> I've tested xen 4.2 hypervisor (without serial console) and also reboot=
ed
> during resume. But it works with xen 4.1.2, so the problem was introduc=
ed
> between 4.1.2 and 4.1.3. Will try to get console messages from xen-unst=
able -
> perhaps this will give some hints.

It still doesn't work on xen 4.1.4; the system reboots at resume. Same on
4.3-unstable (as of today).
Can you give me some hints about using the EHCI debug port? I've found some
info on the coreboot page[1]. Is there any way to use it without the NET20DC
device? According to lspci, my controller has the debug port capability.

[1] http://www.coreboot.org/EHCI_Debug_Port
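For reference, a rough sketch of how this might be wired up, assuming the controller really exposes a debug port and that the tree in use has the EHCI debug console support; the PCI address 00:1d.7 below is only an example and must be replaced with the controller's real address:

```shell
# Check whether the EHCI controller advertises a debug port
# (00:1d.7 is an example address - find yours with plain lspci first):
lspci -vv -s 00:1d.7 | grep -i "debug port"

# If it does, a Xen command line along these lines (in grub.cfg)
# should direct the hypervisor console at it:
#   multiboot /boot/xen.gz console=dbgp dbgp=ehci@pci00:1d.7
```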

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enigFCACA693B299B38532B215E6
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ0zXpAAoJENuP0xzK19cs3UMH/3tZG75LmSQHj3DHH6BI0OQM
6YEKyWTA0PLNo4rKg4zw42kyNqisO43GCFR9cNBk+yOGkCgptl931+b6g2Q0E5d4
DOmUY4Rk96KmUYhLgLv4ACB6HAWr2lt4zVAusvbun+6XWhQRYG7phIhv8Es7hyr6
ZUmRf5Nq1yES4Jh35Lhag5x2etuGJuhHDvSxAbZ2nJv4WIslToS2mqJtq9B/Y6XY
4JzWnrPMUf065zKQ+Yc+5uuf+G/6Mscscug53YHFzYL3KNSRJFSrnItFc6/m0E3h
FOVzhMNDLjxJAdbvH96nczenYy3VGCNouoTSfhFkjkS0mFJkPTyd6c49t+tXy60=
=4jyq
-----END PGP SIGNATURE-----

--------------enigFCACA693B299B38532B215E6--


--===============1586710756914808593==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1586710756914808593==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 16:00:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TliY1-0006MG-Q3; Thu, 20 Dec 2012 16:00:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TliY0-0006M8-Ev
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:00:36 +0000
Received: from [85.158.143.35:24558] by server-3.bemta-4.messagelabs.com id
	77/E0-18211-32633D05; Thu, 20 Dec 2012 16:00:35 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356019233!16401613!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7327 invoked from network); 20 Dec 2012 16:00:34 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 16:00:34 -0000
Received: by mail-la0-f43.google.com with SMTP id z14so3228569lag.2
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 08:00:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=lYr4BHr44A0MCdz/UNuL/9qwngWEetNRMS8Cnn1U4T0=;
	b=V1Kb3zsUakVnuzr7pCpkJZubTG7GPZFCNdFXthbamsF7ijpdLvgVttXlp0aOiH6Qw6
	L5Sb2GIJOuVoCpB78DgLLNVW17+z3N6SzeUcQiWghfIOnID+lw1IqcJfuQtANXNQFERl
	aY0a/x3fd7mBj0zSCL+Ip/5t/mOPxbZdck5MvR4BjcUSwseuV32FsVHZkPMDO+qpiLwp
	gQpzZDOdUaBqzhik2YY5ML+nT50nPTV5Qtl0BnY8EHfu2/4Pv5DJ1txuN8VeGk7t6/aO
	bH9hYNBPn6iBhR9p6S3Bq9Aa3adcrm2HpJKB/xSKtbHJK2wKD89lJHYAKkeM+hBVBxzY
	+dHw==
Received: by 10.112.23.234 with SMTP id p10mr4131509lbf.35.1356019232812; Thu,
	20 Dec 2012 08:00:32 -0800 (PST)
MIME-Version: 1.0
Received: by 10.152.147.168 with HTTP; Thu, 20 Dec 2012 08:00:12 -0800 (PST)
In-Reply-To: <50D32EBA.4020401@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D2B3DE.70206@ts.fujitsu.com> <1355991370.28419.15.camel@Abyss>
	<50D2CB96.4030509@ts.fujitsu.com> <1355992411.28419.25.camel@Abyss>
	<50D2CECB.4030108@ts.fujitsu.com> <50D32EBA.4020401@eu.citrix.com>
From: Dario Faggioli <dario.faggioli@citrix.com>
Date: Thu, 20 Dec 2012 17:00:12 +0100
X-Google-Sender-Auth: fTZYY7rlMVEmh6kyK0pvy5tt5bM
Message-ID: <CAAWQecs-pFf=CTJKYmgfonfUFRJnh1yv+QFbpUKYt2PbAvsYRQ@mail.gmail.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 4:28 PM, George Dunlap
<george.dunlap@eu.citrix.com> wrote:
>> The space is only allocated if sched_credit is responsible for the pcpu.
>
>
> That makes sense.
>  -George
>
Ok, already done here in my queue. When I repost, you'll find it there. :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
---------------------------------------------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:03:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tliaf-0006cy-IK; Thu, 20 Dec 2012 16:03:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tliad-0006ci-QW
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:03:20 +0000
Received: from [85.158.138.51:2070] by server-3.bemta-3.messagelabs.com id
	92/12-31588-6C633D05; Thu, 20 Dec 2012 16:03:18 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1356019394!29496165!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14536 invoked from network); 20 Dec 2012 16:03:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 16:03:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="1429926"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 16:02:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 11:02:52 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TliaC-0000TJ-6D;
	Thu, 20 Dec 2012 16:02:52 +0000
Message-ID: <50D33537.9050802@eu.citrix.com>
Date: Thu, 20 Dec 2012 15:56:39 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <06d2f322a6319d8ba212.1355944039@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> As vcpu-affinity tells where VCPUs must run, node-affinity tells
> where they should, or rather prefer, to run. While respecting vcpu-affinity
> remains mandatory, node-affinity is not that strict; it only expresses
> a preference, although honouring it will almost always
> bring a significant performance benefit (especially as compared to
> not having any affinity at all).
>
> This change modifies the VCPU load balancing algorithm (for the
> credit scheduler only), introducing a two-step logic.
> During the first step, we use the node-affinity mask. The aim is
> giving precedence to the CPUs where it is known to be preferable
> for the domain to run. If that fails in finding a valid PCPU, the
> node-affinity is just ignored and, in the second step, we fall
> back to using cpu-affinity only.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

This one has a lot of structural comments, so I'm going to send a couple 
of different mails as I go through it, so we can parallelize the 
discussion better. :-)

> ---
> Changes from v1:
>   * CPU masks variables moved off from the stack, as requested during
>     review. As per the comments in the code, having them in the private
>     (per-scheduler instance) struct could have been enough, but it would be
>     racy (again, see comments). For that reason, use a global bunch of
>     them (via per_cpu());
>   * George suggested a different load balancing logic during v1's review. I
>     think he was right and then I changed the old implementation in a way
>     that resembles exactly that. I rewrote most of this patch to introduce
>     a more sensible and effective node-affinity handling logic.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -111,6 +111,33 @@
>
>
>   /*
> + * Node Balancing
> + */
> +#define CSCHED_BALANCE_CPU_AFFINITY     0
> +#define CSCHED_BALANCE_NODE_AFFINITY    1
> +#define CSCHED_BALANCE_LAST CSCHED_BALANCE_NODE_AFFINITY
[snip]
> +#define for_each_csched_balance_step(__step) \
> +    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )

Why are we starting at the top and going down?  Is there any good reason 
for it?

Every time you do anything unexpected, you add to the cognitive load of 
the person reading your code, leaving less spare processing power or 
memory for other bits of the code, and increasing (slightly) the chance 
of making a mistake.  The most natural thing would be for someone to 
expect that the steps start at 0 and go up; just reversing it means it's 
that little bit harder to understand.  When you name it "LAST", it's 
even worse, because that would definitely imply that this step is going 
to be executed last.

So why not just have this be as follows?

for(step=0; step<CSCHED_BALANCE_MAX; step++)
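To illustrate, a minimal, self-contained sketch of such a forward-iterating macro; the constants are renumbered here on the assumption that the node-affinity step would then become step 0 (this is not the actual Xen code):

```c
#include <stddef.h>

/* Hypothetical renumbering: the preferred (node-affinity) step comes
 * first, so the loop can run in the natural 0..MAX-1 direction. */
#define CSCHED_BALANCE_NODE_AFFINITY  0
#define CSCHED_BALANCE_CPU_AFFINITY   1
#define CSCHED_BALANCE_MAX            2

#define for_each_csched_balance_step(step) \
    for ( (step) = 0; (step) < CSCHED_BALANCE_MAX; (step)++ )

/* Record the order in which the balance steps are visited. */
static size_t record_steps(int order[CSCHED_BALANCE_MAX])
{
    int step;
    size_t n = 0;

    for_each_csched_balance_step(step)
        order[n++] = step;

    return n;
}
```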

> +
> +/*
> + * Each csched-balance step has to use its own cpumask. This function
> + * determines which one, given the step, and copies it in mask. Notice
> + * that, in case of node-affinity balancing step, it also filters out from
> + * the node-affinity mask the cpus that are not part of vc's cpu-affinity,
> + * as we do not want to end up running a vcpu where it would like, but
> + * is not allowed to!
> + *
> + * As an optimization, if a domain does not have any node-affinity at all
> + * (namely, its node affinity is automatically computed), not only the
> + * computed mask will reflect its vcpu-affinity, but we also return -1 to
> + * let the caller know that he can skip the step or quit the loop (if he
> + * wants).
> + */
> +static int
> +csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
> +{
> +    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
> +    {
> +        struct domain *d = vc->domain;
> +        struct csched_dom *sdom = CSCHED_DOM(d);
> +
> +        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
> +
> +        if ( cpumask_full(sdom->node_affinity_cpumask) )
> +            return -1;

There's no optimization in having this comparison done here.  You're not 
reading something from a local variable that you've just calculated.  
But hiding this comparison inside this function, and disguising it as 
"returns -1", does increase the cognitive load on anybody trying to read 
and understand the code -- particularly, as how the return value is used 
is not really clear.

Also, when you use this value, effectively what you're doing is saying, 
"Actually, we just said we were doing the NODE_BALANCE step, but it 
turns out that the results of NODE_BALANCE and CPU_BALANCE will be the 
same, so we're just going to pretend that we've been doing the 
CPU_BALANCE step instead."  (See for example, "balance_step == 
CSCHED_BALANCE_NODE_AFFINITY && !ret" -- why the !ret in this clause?  
Because if !ret then we're not actually doing NODE_AFFINITY now, but 
CPU_AFFINITY.)  Another non-negligible chunk of cognitive load for 
someone reading the code to 1) figure out, and 2) keep in mind as she 
tries to analyze it.

I took a look at all the places which use this return value, and it 
seems like the best thing in each case would just be to have the 
*caller*, before getting into the loop, call 
cpumask_full(sdom->node_affinity_cpumask) and just skip the 
CSCHED_NODE_BALANCE step altogether if it's true.  (Example below.)

> @@ -266,67 +332,94 @@ static inline void
>       struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask, idle_mask;
> -    int idlers_empty;
> +    int balance_step, idlers_empty;
>
>       ASSERT(cur);
> -    cpumask_clear(&mask);
> -
>       idlers_empty = cpumask_empty(prv->idlers);
>
>       /*
> -     * If the pcpu is idle, or there are no idlers and the new
> -     * vcpu is a higher priority than the old vcpu, run it here.
> -     *
> -     * If there are idle cpus, first try to find one suitable to run
> -     * new, so we can avoid preempting cur.  If we cannot find a
> -     * suitable idler on which to run new, run it here, but try to
> -     * find a suitable idler on which to run cur instead.
> +     * Node and vcpu-affinity balancing loop. To speed things up, in case
> +     * no node-affinity at all is present, scratch_balance_mask reflects
> +     * the vcpu-affinity, and ret is -1, so that we then can quit the
> +     * loop after only one step.
>        */
> -    if ( cur->pri == CSCHED_PRI_IDLE
> -         || (idlers_empty && new->pri > cur->pri) )
> +    for_each_csched_balance_step( balance_step )
>       {
> -        if ( cur->pri != CSCHED_PRI_IDLE )
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -        cpumask_set_cpu(cpu, &mask);
> -    }
> -    else if ( !idlers_empty )
> -    {
> -        /* Check whether or not there are idlers that can run new */
> -        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
> +        int ret, new_idlers_empty;
> +
> +        cpumask_clear(&mask);
>
>           /*
> -         * If there are no suitable idlers for new, and it's higher
> -         * priority than cur, ask the scheduler to migrate cur away.
> -         * We have to act like this (instead of just waking some of
> -         * the idlers suitable for cur) because cur is running.
> +         * If the pcpu is idle, or there are no idlers and the new
> +         * vcpu is a higher priority than the old vcpu, run it here.
>            *
> -         * If there are suitable idlers for new, no matter priorities,
> -         * leave cur alone (as it is running and is, likely, cache-hot)
> -         * and wake some of them (which is waking up and so is, likely,
> -         * cache cold anyway).
> +         * If there are idle cpus, first try to find one suitable to run
> +         * new, so we can avoid preempting cur.  If we cannot find a
> +         * suitable idler on which to run new, run it here, but try to
> +         * find a suitable idler on which to run cur instead.
>            */
> -        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
> +        if ( cur->pri == CSCHED_PRI_IDLE
> +             || (idlers_empty && new->pri > cur->pri) )
>           {
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> -            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> -            SCHED_STAT_CRANK(migrate_kicked_away);
> -            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +            if ( cur->pri != CSCHED_PRI_IDLE )
> +                SCHED_STAT_CRANK(tickle_idlers_none);
>               cpumask_set_cpu(cpu, &mask);
>           }
> -        else if ( !cpumask_empty(&idle_mask) )
> +        else if ( !idlers_empty )
>           {
> -            /* Which of the idlers suitable for new shall we wake up? */
> -            SCHED_STAT_CRANK(tickle_idlers_some);
> -            if ( opt_tickle_one_idle )
> +            /* Are there idlers suitable for new (for this balance step)? */
> +            ret = csched_balance_cpumask(new->vcpu, balance_step,
> +                                         &scratch_balance_mask);
> +            cpumask_and(&idle_mask, prv->idlers, &scratch_balance_mask);
> +            new_idlers_empty = cpumask_empty(&idle_mask);
> +
> +            /*
> +             * Let's not be too harsh! If there aren't idlers suitable
> +             * for new in its node-affinity mask, make sure we check its
> +             * vcpu-affinity as well, before taking final decisions.
> +             */
> +            if ( new_idlers_empty
> +                 && (balance_step == CSCHED_BALANCE_NODE_AFFINITY && !ret) )
> +                continue;
> +
> +            /*
> +             * If there are no suitable idlers for new, and it's higher
> +             * priority than cur, ask the scheduler to migrate cur away.
> +             * We have to act like this (instead of just waking some of
> +             * the idlers suitable for cur) because cur is running.
> +             *
> +             * If there are suitable idlers for new, no matter priorities,
> +             * leave cur alone (as it is running and is, likely, cache-hot)
> +             * and wake some of them (which is waking up and so is, likely,
> +             * cache cold anyway).
> +             */
> +            if ( new_idlers_empty && new->pri > cur->pri )
>               {
> -                this_cpu(last_tickle_cpu) =
> -                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> -                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                SCHED_STAT_CRANK(tickle_idlers_none);
> +                SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> +                SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> +                SCHED_STAT_CRANK(migrate_kicked_away);
> +                set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +                cpumask_set_cpu(cpu, &mask);
>               }
> -            else
> -                cpumask_or(&mask, &mask, &idle_mask);
> +            else if ( !new_idlers_empty )
> +            {
> +                /* Which of the idlers suitable for new shall we wake up? */
> +                SCHED_STAT_CRANK(tickle_idlers_some);
> +                if ( opt_tickle_one_idle )
> +                {
> +                    this_cpu(last_tickle_cpu) =
> +                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> +                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                }
> +                else
> +                    cpumask_or(&mask, &mask, &idle_mask);
> +            }
>           }
> +
> +        /* Did we find anyone (or csched_balance_cpumask() says we're done)? */
> +        if ( !cpumask_empty(&mask) || ret )
> +            break;
>       }

The whole logic here is really convoluted and hard to read.  For 
example, if cur->pri==IDLE, then you will always just break of the loop 
after the first iteration.  In that case, why have the if() inside the 
loop to begin with?  And if idlers_empty is true but cur->pri >= 
new->pri, then you'll go through the loop two times, even though both 
times it will come up empty.  And, of course, the whole thing about the 
node affinity mask being checked inside csched_balance_cpumask(), but 
not used until the very end.

A much more straightforward way to arrange it would be:

if(cur->pri==IDLE &c &c)
{
   foo;
}
else if(!idlers_empty)
{
   if(cpumask_full(sdom->node_affinity_cpumask)
     balance_step=CSCHED_BALANCE_CPU_AFFINITY;
   else
     balance_step=CSCHED_BALANCE_NODE_AFFINITY;

   for(; balance_step <= CSCHED_BALANCE_MAX; balance_step++)
  {
  ...
  }
}

That seems a lot clearer to me -- does that make sense?
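In code, the caller-side check being suggested could be sketched like this (stand-in types and names, not the real Xen cpumask API):

```c
#include <stdbool.h>

#define CSCHED_BALANCE_NODE_AFFINITY  0
#define CSCHED_BALANCE_CPU_AFFINITY   1

/* Stand-in for the real cpumask machinery: a full mask means the
 * domain's node-affinity adds nothing beyond its vcpu-affinity. */
typedef unsigned long cpumask_t;
#define CPUMASK_FULL (~0UL)

static bool cpumask_full(const cpumask_t *m)
{
    return *m == CPUMASK_FULL;
}

/*
 * Decide, before entering the balancing loop, which step to start
 * from: if the node-affinity mask is full, the node step would give
 * the same result as the cpu step, so skip it altogether.
 */
static int first_balance_step(const cpumask_t *node_affinity_cpumask)
{
    return cpumask_full(node_affinity_cpumask)
               ? CSCHED_BALANCE_CPU_AFFINITY
               : CSCHED_BALANCE_NODE_AFFINITY;
}
```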

[To be continued...]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:03:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tliaf-0006cy-IK; Thu, 20 Dec 2012 16:03:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tliad-0006ci-QW
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:03:20 +0000
Received: from [85.158.138.51:2070] by server-3.bemta-3.messagelabs.com id
	92/12-31588-6C633D05; Thu, 20 Dec 2012 16:03:18 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1356019394!29496165!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14536 invoked from network); 20 Dec 2012 16:03:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 16:03:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="1429926"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 16:02:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 11:02:52 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TliaC-0000TJ-6D;
	Thu, 20 Dec 2012 16:02:52 +0000
Message-ID: <50D33537.9050802@eu.citrix.com>
Date: Thu, 20 Dec 2012 15:56:39 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <06d2f322a6319d8ba212.1355944039@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> As vcpu-affinity tells where VCPUs must run, node-affinity tells
> where they should or, better, prefer. While respecting vcpu-affinity
> remains mandatory, node-affinity is not that strict, it only expresses
> a preference, although honouring it is almost always true that will
> bring significant performances benefit (especially as compared to
> not having any affinity at all).
>
> This change modifies the VCPU load balancing algorithm (for the
> credit scheduler only), introducing a two steps logic.
> During the first step, we use the node-affinity mask. The aim is
> giving precedence to the CPUs where it is known to be preferable
> for the domain to run. If that fails in finding a valid PCPU, the
> node-affinity is just ignored and, in the second step, we fall
> back to using cpu-affinity only.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

This one has a lot of structural comments, so I'm going to send a couple 
of different mails as I'm going through it, so we can parallelize the 
discussion better. :-)

> ---
> Changes from v1:
>   * CPU mask variables moved off the stack, as requested during
>     review. As per the comments in the code, having them in the private
>     (per-scheduler instance) struct could have been enough, but it would be
>     racy (again, see comments). For that reason, use a global bunch of
>     them (via per_cpu());
>   * George suggested a different load balancing logic during v1's review. I
>     think he was right, so I changed the old implementation in a way
>     that resembles exactly that. I rewrote most of this patch to introduce
>     a more sensible and effective node-affinity handling logic.
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -111,6 +111,33 @@
>
>
>   /*
> + * Node Balancing
> + */
> +#define CSCHED_BALANCE_CPU_AFFINITY     0
> +#define CSCHED_BALANCE_NODE_AFFINITY    1
> +#define CSCHED_BALANCE_LAST CSCHED_BALANCE_NODE_AFFINITY
[snip]
> +#define for_each_csched_balance_step(__step) \
> +    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )

Why are we starting at the top and going down?  Is there any good reason 
for it?

Every time you do anything unexpected, you add to the cognitive load of 
the person reading your code, leaving less spare processing power or 
memory for other bits of the code, and increasing (slightly) the chance 
of making a mistake.  The most natural thing would be for someone to 
expect that the steps start at 0 and go up; just reversing it means it's 
that little bit harder to understand.  When you name it "LAST", it's 
even worse, because that would definitely imply that this step is going 
to be executed last.

So why not just have this be as follows?

for(step=0; step<CSCHED_BALANCE_MAX; step++)

> +
> +/*
> + * Each csched-balance step has to use its own cpumask. This function
> + * determines which one, given the step, and copies it in mask. Notice
> + * that, in case of node-affinity balancing step, it also filters out from
> + * the node-affinity mask the cpus that are not part of vc's cpu-affinity,
> + * as we do not want to end up running a vcpu where it would like, but
> + * is not allowed to!
> + *
> + * As an optimization, if a domain does not have any node-affinity at all
> + * (namely, its node affinity is automatically computed), not only the
> + * computed mask will reflect its vcpu-affinity, but we also return -1 to
> + * let the caller know that he can skip the step or quit the loop (if he
> + * wants).
> + */
> +static int
> +csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
> +{
> +    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
> +    {
> +        struct domain *d = vc->domain;
> +        struct csched_dom *sdom = CSCHED_DOM(d);
> +
> +        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
> +
> +        if ( cpumask_full(sdom->node_affinity_cpumask) )
> +            return -1;

There's no optimization in having this comparison done here.  You're not 
reading something from a local variable that you've just calculated.  
But hiding this comparison inside this function, and disguising it as 
"returns -1", does increase the cognitive load on anybody trying to read 
and understand the code -- particularly, as how the return value is used 
is not really clear.

Also, when you use this value, effectively what you're doing is saying, 
"Actually, we just said we were doing the NODE_BALANCE step, but it 
turns out that the results of NODE_BALANCE and CPU_BALANCE will be the 
same, so we're just going to pretend that we've been doing the 
CPU_BALANCE step instead."  (See for example, "balance_step == 
CSCHED_BALANCE_NODE_AFFINITY && !ret" -- why the !ret in this clause?  
Because if !ret then we're not actually doing NODE_AFFINITY now, but 
CPU_AFFINITY.)  Another non-negligible chunk of cognitive load for 
someone reading the code to 1) figure out, and 2) keep in mind as she 
tries to analyze it.

I took a look at all the places which use this return value, and it 
seems like the best thing in each case would just be to have the 
*caller*, before getting into the loop, call 
cpumask_full(sdom->node_affinity_cpumask) and just skip the 
CSCHED_NODE_BALANCE step altogether if it's true.  (Example below.)

> @@ -266,67 +332,94 @@ static inline void
>       struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>       cpumask_t mask, idle_mask;
> -    int idlers_empty;
> +    int balance_step, idlers_empty;
>
>       ASSERT(cur);
> -    cpumask_clear(&mask);
> -
>       idlers_empty = cpumask_empty(prv->idlers);
>
>       /*
> -     * If the pcpu is idle, or there are no idlers and the new
> -     * vcpu is a higher priority than the old vcpu, run it here.
> -     *
> -     * If there are idle cpus, first try to find one suitable to run
> -     * new, so we can avoid preempting cur.  If we cannot find a
> -     * suitable idler on which to run new, run it here, but try to
> -     * find a suitable idler on which to run cur instead.
> +     * Node and vcpu-affinity balancing loop. To speed things up, in case
> +     * no node-affinity at all is present, scratch_balance_mask reflects
> +     * the vcpu-affinity, and ret is -1, so that we then can quit the
> +     * loop after only one step.
>        */
> -    if ( cur->pri == CSCHED_PRI_IDLE
> -         || (idlers_empty && new->pri > cur->pri) )
> +    for_each_csched_balance_step( balance_step )
>       {
> -        if ( cur->pri != CSCHED_PRI_IDLE )
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -        cpumask_set_cpu(cpu, &mask);
> -    }
> -    else if ( !idlers_empty )
> -    {
> -        /* Check whether or not there are idlers that can run new */
> -        cpumask_and(&idle_mask, prv->idlers, new->vcpu->cpu_affinity);
> +        int ret, new_idlers_empty;
> +
> +        cpumask_clear(&mask);
>
>           /*
> -         * If there are no suitable idlers for new, and it's higher
> -         * priority than cur, ask the scheduler to migrate cur away.
> -         * We have to act like this (instead of just waking some of
> -         * the idlers suitable for cur) because cur is running.
> +         * If the pcpu is idle, or there are no idlers and the new
> +         * vcpu is a higher priority than the old vcpu, run it here.
>            *
> -         * If there are suitable idlers for new, no matter priorities,
> -         * leave cur alone (as it is running and is, likely, cache-hot)
> -         * and wake some of them (which is waking up and so is, likely,
> -         * cache cold anyway).
> +         * If there are idle cpus, first try to find one suitable to run
> +         * new, so we can avoid preempting cur.  If we cannot find a
> +         * suitable idler on which to run new, run it here, but try to
> +         * find a suitable idler on which to run cur instead.
>            */
> -        if ( cpumask_empty(&idle_mask) && new->pri > cur->pri )
> +        if ( cur->pri == CSCHED_PRI_IDLE
> +             || (idlers_empty && new->pri > cur->pri) )
>           {
> -            SCHED_STAT_CRANK(tickle_idlers_none);
> -            SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> -            SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> -            SCHED_STAT_CRANK(migrate_kicked_away);
> -            set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +            if ( cur->pri != CSCHED_PRI_IDLE )
> +                SCHED_STAT_CRANK(tickle_idlers_none);
>               cpumask_set_cpu(cpu, &mask);
>           }
> -        else if ( !cpumask_empty(&idle_mask) )
> +        else if ( !idlers_empty )
>           {
> -            /* Which of the idlers suitable for new shall we wake up? */
> -            SCHED_STAT_CRANK(tickle_idlers_some);
> -            if ( opt_tickle_one_idle )
> +            /* Are there idlers suitable for new (for this balance step)? */
> +            ret = csched_balance_cpumask(new->vcpu, balance_step,
> +                                         &scratch_balance_mask);
> +            cpumask_and(&idle_mask, prv->idlers, &scratch_balance_mask);
> +            new_idlers_empty = cpumask_empty(&idle_mask);
> +
> +            /*
> +             * Let's not be too harsh! If there aren't idlers suitable
> +             * for new in its node-affinity mask, make sure we check its
> +             * vcpu-affinity as well, before taking final decisions.
> +             */
> +            if ( new_idlers_empty
> +                 && (balance_step == CSCHED_BALANCE_NODE_AFFINITY && !ret) )
> +                continue;
> +
> +            /*
> +             * If there are no suitable idlers for new, and it's higher
> +             * priority than cur, ask the scheduler to migrate cur away.
> +             * We have to act like this (instead of just waking some of
> +             * the idlers suitable for cur) because cur is running.
> +             *
> +             * If there are suitable idlers for new, no matter priorities,
> +             * leave cur alone (as it is running and is, likely, cache-hot)
> +             * and wake some of them (which is waking up and so is, likely,
> +             * cache cold anyway).
> +             */
> +            if ( new_idlers_empty && new->pri > cur->pri )
>               {
> -                this_cpu(last_tickle_cpu) =
> -                    cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> -                cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                SCHED_STAT_CRANK(tickle_idlers_none);
> +                SCHED_VCPU_STAT_CRANK(cur, kicked_away);
> +                SCHED_VCPU_STAT_CRANK(cur, migrate_r);
> +                SCHED_STAT_CRANK(migrate_kicked_away);
> +                set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
> +                cpumask_set_cpu(cpu, &mask);
>               }
> -            else
> -                cpumask_or(&mask, &mask, &idle_mask);
> +            else if ( !new_idlers_empty )
> +            {
> +                /* Which of the idlers suitable for new shall we wake up? */
> +                SCHED_STAT_CRANK(tickle_idlers_some);
> +                if ( opt_tickle_one_idle )
> +                {
> +                    this_cpu(last_tickle_cpu) =
> +                        cpumask_cycle(this_cpu(last_tickle_cpu), &idle_mask);
> +                    cpumask_set_cpu(this_cpu(last_tickle_cpu), &mask);
> +                }
> +                else
> +                    cpumask_or(&mask, &mask, &idle_mask);
> +            }
>           }
> +
> +        /* Did we find anyone (or csched_balance_cpumask() says we're done)? */
> +        if ( !cpumask_empty(&mask) || ret )
> +            break;
>       }

The whole logic here is really convoluted and hard to read.  For 
example, if cur->pri==IDLE, then you will always just break of the loop 
after the first iteration.  In that case, why have the if() inside the 
loop to begin with?  And if idlers_empty is true but cur->pri >= 
new->pri, then you'll go through the loop two times, even though both 
times it will come up empty.  And, of course, the whole thing about the 
node affinity mask being checked inside csched_balance_cpumask(), but 
not used until the very end.

A much more straightforward way to arrange it would be:

if(cur->pri == IDLE &c &c)
{
   foo;
}
else if(!idlers_empty)
{
   if(cpumask_full(sdom->node_affinity_cpumask))
     balance_step=CSCHED_BALANCE_CPU_AFFINITY;
   else
     balance_step=CSCHED_BALANCE_NODE_AFFINITY;

   for(; balance_step <= CSCHED_BALANCE_MAX; balance_step++)
  {
  ...
  }
}

That seems a lot clearer to me -- does that make sense?

[To be continued...]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:05:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlicF-0006kk-36; Thu, 20 Dec 2012 16:04:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1TlicD-0006ka-It
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:04:57 +0000
Received: from [85.158.139.83:30912] by server-14.bemta-5.messagelabs.com id
	F8/A4-09538-82733D05; Thu, 20 Dec 2012 16:04:56 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-182.messagelabs.com!1356019459!30046092!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6294 invoked from network); 20 Dec 2012 16:04:19 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 16:04:19 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1TlibX-000LxW-25; Thu, 20 Dec 2012 16:04:15 +0000
Date: Thu, 20 Dec 2012 16:04:15 +0000
From: Tim Deegan <tim@xen.org>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Message-ID: <20121220160415.GO80837@ocelot.phlegethon.org>
References: <mailman.18000.1354568068.1399.xen-devel@lists.xen.org>
	<49049C00-89CA-4B43-9660-83B9ADC061E0@gridcentric.ca>
	<20121218221749.GA6332@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121218221749.GA6332@phenom.dumpdata.com>
User-Agent: Mutt/1.4.2.1i
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed XENMEM_claim_pages hypercall: Analysis of
	problem and alternate solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

At 17:17 -0500 on 18 Dec (1355851071), Konrad Rzeszutek Wilk wrote:
> In essence, max_pages does work - _if_ one does these operations
> in serial. We are trying to make this work in parallel and without
> any failures - one way to do that, which is quite simplistic, is
> the claim hypercall. It sets up a 'stake' of the amount of memory
> that the hypervisor should reserve. This way other guest creations
> / ballooning do not infringe on the 'claimed' amount.
> 
> I believe with this hypercall the Xapi can be made to do its operations
> in parallel as well.

The question of starting VMs in parallel seems like a red herring to me:
- TTBOMK Xapi already can start VMs in parallel.  Since it knows what
  constraints it's placed on existing VMs and what VMs it's currently
  building, there is nothing stopping it.  Indeed, AFAICS any toolstack
  that can guarantee enough RAM to build one VM at a time could do the
  same for multiple parallel builds with a bit of bookkeeping.
- Dan's stated problem (failure during VM build in the presence of
  unconstrained guest-controlled allocations) happens even if there is
  only one VM being created.

> > > Andres Lagar-Cavilla says "... this is because of shortcomings in the
> > > [Xen] mm layer and its interaction with wait queues, documented
> > > elsewhere."  In other words, this batching proposal requires
> > > significant changes to the hypervisor, which I think we
> > > all agreed we were trying to avoid.
> >
> > Let me nip this at the bud. I use page sharing and other techniques in an environment that doesn't use Citrix's DMC, nor is focused only on proprietary kernels...
> 
> I believe what Dan is saying is that it is not enabled by default.
> Meaning it does not get executed by /etc/init.d/xencommons and
> as such it never gets run (or does it now?) - unless one knows
> about it - or it is enabled by default in a product. But perhaps
> we are both mistaken? Is it enabled by default now on xen-unstable?

I think the point Dan was trying to make is that if you use page-sharing
to do overcommit, you can end up with the same problem that self-balloon
has: guest activity might consume all your RAM while you're trying to
build a new VM.

That could be fixed by a 'further hypervisor change' (constraining the
total amount of free memory that CoW unsharing can consume).  I suspect
that it can also be resolved by using d->max_pages on each shared-memory
VM to put a limit on how much memory they can (severally) consume.

> Just as a summary, as this is getting to be a long thread - my
> understanding has been that the hypervisor is supposed to be
> toolstack independent.

Let's keep calm.  If people were arguing "xl (or xapi) doesn't need this
so we shouldn't do it" that would certainly be wrong, but I don't think
that's the case.  At least I certainly hope not!

The discussion ought to be around the actual problem, which is (as far
as I can see) that in a system where guests are ballooning without
limits, VM creation failure can happen after a long delay.  In
particular it is the delay that is the problem, rather than the failure.
Some solutions that have been proposed so far:
 - don't do that, it's silly (possibly true but not helpful);
 - this reservation hypercall, to pull the failure forward;
 - make allocation faster to avoid the delay (a good idea anyway,
   but can it be made fast enough?);
 - use max_pages or similar to stop other VMs using all of RAM.

My own position remains that I can live with the reservation hypercall,
as long as it's properly done - including handling PV 32-bit and PV
superpage guests.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:12:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlij3-0007T8-26; Thu, 20 Dec 2012 16:12:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tlij1-0007Rs-2c
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 16:11:59 +0000
Received: from [193.109.254.147:14732] by server-15.bemta-14.messagelabs.com
	id BD/F5-05116-EC833D05; Thu, 20 Dec 2012 16:11:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356019916!9062733!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14425 invoked from network); 20 Dec 2012 16:11:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 16:11:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="285188"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 16:11:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 16:11:56 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tliiy-0008TF-7P;
	Thu, 20 Dec 2012 16:11:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tliix-0003jH-Tw;
	Thu, 20 Dec 2012 16:11:56 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14796-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 16:11:56 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14796: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0087300372272616675=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0087300372272616675==
Content-Type: text/plain

flight 14796 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14796/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 14792
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check fail in 14792 pass in 14796

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14785
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14785

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 14792 never pass

version targeted for testing:
 xen                  090cc3e20d3e
baseline version:
 xen                  b04de677de31

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <osp@andrep.de>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=090cc3e20d3e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 090cc3e20d3e
+ branch=xen-unstable
+ revision=090cc3e20d3e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 090cc3e20d3e ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 24 changesets with 95 changes to 65 files


--===============0087300372272616675==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0087300372272616675==--

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:12:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlij3-0007T8-26; Thu, 20 Dec 2012 16:12:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tlij1-0007Rs-2c
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 16:11:59 +0000
Received: from [193.109.254.147:14732] by server-15.bemta-14.messagelabs.com
	id BD/F5-05116-EC833D05; Thu, 20 Dec 2012 16:11:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356019916!9062733!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14425 invoked from network); 20 Dec 2012 16:11:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 16:11:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="285188"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 16:11:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 16:11:56 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tliiy-0008TF-7P;
	Thu, 20 Dec 2012 16:11:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tliix-0003jH-Tw;
	Thu, 20 Dec 2012 16:11:56 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14796-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 16:11:56 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14796: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0087300372272616675=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0087300372272616675==
Content-Type: text/plain

flight 14796 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14796/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 14792
 test-amd64-i386-qemut-rhel6hvm-intel 11 leak-check/check fail in 14792 pass in 14796

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14785
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14785

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 14792 never pass

version targeted for testing:
 xen                  090cc3e20d3e
baseline version:
 xen                  b04de677de31

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <osp@andrep.de>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=090cc3e20d3e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 090cc3e20d3e
+ branch=xen-unstable
+ revision=090cc3e20d3e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 090cc3e20d3e ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 24 changesets with 95 changes to 65 files


--===============0087300372272616675==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0087300372272616675==--

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:37:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:37:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlj7d-00088A-UY; Thu, 20 Dec 2012 16:37:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tlj7c-000885-Fz
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:37:24 +0000
Received: from [85.158.139.83:22637] by server-3.bemta-5.messagelabs.com id
	0C/EC-25441-3CE33D05; Thu, 20 Dec 2012 16:37:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356021442!26870239!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29884 invoked from network); 20 Dec 2012 16:37:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 16:37:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 16:37:22 +0000
Message-Id: <50D34CD002000078000B1D89@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 16:37:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Marek Marczykowski" <marmarek@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
In-Reply-To: <50D335E6.902@invisiblethingslab.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 16:59, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
> It still doesn't work on xen 4.1.4, system reboots at resume. Same on
> 4.3-unstable (today).
> Can you give me some hints about using EHCI debug port? I've found some info
> on coreboot page[1]. Is there any way to use it without NET20DC device?

No, you need such a special device (I'm not sure whether the one they
name there is the only option, as mine isn't named that way, iirc).

> According to lspci my controller have debug port capability.

You also need to check that your system's designers didn't put an
internal hub in between (which would prevent the debug port from
actually working).

> [1] http://www.coreboot.org/EHCI_Debug_Port 

Or Linux's Documentation/x86/earlyprintk.txt.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 16:54:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:54:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TljOD-0008NJ-Va; Thu, 20 Dec 2012 16:54:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TljOC-0008N7-D3
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:54:32 +0000
Received: from [193.109.254.147:58381] by server-8.bemta-14.messagelabs.com id
	1B/FF-26341-7C243D05; Thu, 20 Dec 2012 16:54:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1356022459!3662595!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6311 invoked from network); 20 Dec 2012 16:54:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Dec 2012 16:54:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Dec 2012 16:54:18 +0000
Message-Id: <50D350C902000078000B1D9C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 20 Dec 2012 16:54:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part00318EA9.0__="
Subject: [Xen-devel] [PATCH] x86: compat_show_guest_stack() should not
	truncate MFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part00318EA9.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Re-using "addr" here was a mistake, as it is a 32-bit quantity.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/compat/traps.c
+++ b/xen/arch/x86/x86_64/compat/traps.c
@@ -20,11 +20,12 @@ void compat_show_guest_stack(struct vcpu
     if ( v != current )
     {
         struct vcpu *vcpu;
+        unsigned long mfn;
 
         ASSERT(guest_kernel_mode(v, regs));
-        addr = read_cr3() >> PAGE_SHIFT;
+        mfn = read_cr3() >> PAGE_SHIFT;
         for_each_vcpu( v->domain, vcpu )
-            if ( pagetable_get_pfn(vcpu->arch.guest_table) == addr )
+            if ( pagetable_get_pfn(vcpu->arch.guest_table) == mfn )
                 break;
         if ( !vcpu )
         {




--=__Part00318EA9.0__=
Content-Type: text/plain; name="x86-compat-show-guest-stack-mfn.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-compat-show-guest-stack-mfn.patch"

x86: compat_show_guest_stack() should not truncate MFN

Re-using "addr" here was a mistake, as it is a 32-bit quantity.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/compat/traps.c
+++ b/xen/arch/x86/x86_64/compat/traps.c
@@ -20,11 +20,12 @@ void compat_show_guest_stack(struct vcpu
     if ( v != current )
     {
         struct vcpu *vcpu;
+        unsigned long mfn;
 
         ASSERT(guest_kernel_mode(v, regs));
-        addr = read_cr3() >> PAGE_SHIFT;
+        mfn = read_cr3() >> PAGE_SHIFT;
         for_each_vcpu( v->domain, vcpu )
-            if ( pagetable_get_pfn(vcpu->arch.guest_table) == addr )
+            if ( pagetable_get_pfn(vcpu->arch.guest_table) == mfn )
                 break;
         if ( !vcpu )
         {
--=__Part00318EA9.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part00318EA9.0__=--



--- a/xen/arch/x86/x86_64/compat/traps.c
+++ b/xen/arch/x86/x86_64/compat/traps.c
@@ -20,11 +20,12 @@ void compat_show_guest_stack(struct vcpu
     if ( v != current )
     {
         struct vcpu *vcpu;
+        unsigned long mfn;
 
         ASSERT(guest_kernel_mode(v, regs));
-        addr = read_cr3() >> PAGE_SHIFT;
+        mfn = read_cr3() >> PAGE_SHIFT;
         for_each_vcpu( v->domain, vcpu )
-            if ( pagetable_get_pfn(vcpu->arch.guest_table) == addr )
+            if ( pagetable_get_pfn(vcpu->arch.guest_table) == mfn )
                 break;
         if ( !vcpu )
         {




--=__Part00318EA9.0__=
Content-Type: text/plain; name="x86-compat-show-guest-stack-mfn.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-compat-show-guest-stack-mfn.patch"

x86: compat_show_guest_stack() should not truncate MFN

Re-using "addr" here was a mistake, as it is a 32-bit quantity.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/compat/traps.c
+++ b/xen/arch/x86/x86_64/compat/traps.c
@@ -20,11 +20,12 @@ void compat_show_guest_stack(struct vcpu
     if ( v != current )
     {
         struct vcpu *vcpu;
+        unsigned long mfn;
 
         ASSERT(guest_kernel_mode(v, regs));
-        addr = read_cr3() >> PAGE_SHIFT;
+        mfn = read_cr3() >> PAGE_SHIFT;
         for_each_vcpu( v->domain, vcpu )
-            if ( pagetable_get_pfn(vcpu->arch.guest_table) == addr )
+            if ( pagetable_get_pfn(vcpu->arch.guest_table) == mfn )
                 break;
         if ( !vcpu )
         {
--=__Part00318EA9.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part00318EA9.0__=--


From xen-devel-bounces@lists.xen.org Thu Dec 20 16:54:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 16:54:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TljOC-0008N8-Iw; Thu, 20 Dec 2012 16:54:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1TljOA-0008N2-RI
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 16:54:31 +0000
Received: from [85.158.137.99:45930] by server-14.bemta-3.messagelabs.com id
	EB/C4-27443-5C243D05; Thu, 20 Dec 2012 16:54:29 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1356022466!17175718!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8107 invoked from network); 20 Dec 2012 16:54:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 16:54:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,324,1355097600"; 
   d="scan'208";a="1438443"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 16:54:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 11:54:25 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1TljO5-0001EX-IK;
	Thu, 20 Dec 2012 16:54:25 +0000
Message-ID: <50D3414D.8080901@eu.citrix.com>
Date: Thu, 20 Dec 2012 16:48:13 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <06d2f322a6319d8ba212.1355944039@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
>   static inline int
> -__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu)
> +__csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu, cpumask_t *mask)
>   {
>       /*
>        * Don't pick up work that's in the peer's scheduling tail or hot on
> -     * peer PCPU. Only pick up work that's allowed to run on our CPU.
> +     * peer PCPU. Only pick up work that prefers and/or is allowed to run
> +     * on our CPU.
>        */
>       return !vc->is_running &&
>              !__csched_vcpu_is_cache_hot(vc) &&
> -           cpumask_test_cpu(dest_cpu, vc->cpu_affinity);
> +           cpumask_test_cpu(dest_cpu, mask);
> +}
> +
> +static inline int
> +__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
> +{
> +    /*
> +     * Consent to migration if cpu is one of the idlers in the VCPU's
> +     * affinity mask. In fact, if that is not the case, it just means it
> +     * was some other CPU that was tickled and should hence come and pick
> +     * VCPU up. Migrating it to cpu would only make things worse.
> +     */
> +    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
>   }

I don't get what this function is for.  The only time you call it is in 
csched_runq_steal(), immediately after calling 
__csched_vcpu_is_migrateable().  But is_migrateable() has already 
checked cpumask_test_cpu(cpu, mask).  So why do we need to check it again?

We could just replace this with cpumask_test_cpu(cpu, prv->idlers).  But 
that clause is going to be either true or false for every single 
iteration of all the loops, including the loops in 
csched_load_balance().  Wouldn't it make more sense to check it once in 
csched_load_balance(), rather than doing all those nested loops?

And in any case, looking at the caller of csched_load_balance(), it 
explicitly says to steal work if the next thing on the runqueue of cpu 
has a priority of TS_OVER.  That was chosen for a reason -- if you want 
to change that, you should change it there at the top (and make a 
justification for doing so), not deeply nested in a function like this.

Or am I completely missing something?

>   static struct csched_vcpu *
> -csched_runq_steal(int peer_cpu, int cpu, int pri)
> +csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
>   {
>       const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
> +    struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, peer_cpu));
>       const struct vcpu * const peer_vcpu = curr_on_cpu(peer_cpu);
>       struct csched_vcpu *speer;
>       struct list_head *iter;
> @@ -1265,11 +1395,24 @@ csched_runq_steal(int peer_cpu, int cpu,
>               if ( speer->pri <= pri )
>                   break;
>
> -            /* Is this VCPU is runnable on our PCPU? */
> +            /* Is this VCPU runnable on our PCPU? */
>               vc = speer->vcpu;
>               BUG_ON( is_idle_vcpu(vc) );
>
> -            if (__csched_vcpu_is_migrateable(vc, cpu))
> +            /*
> +             * Retrieve the correct mask for this balance_step or, if we're
> +             * dealing with node-affinity and the vcpu has no node affinity
> +             * at all, just skip this vcpu. That is needed if we want to
> +             * check if we have any node-affine work to steal first (wrt
> +             * any vcpu-affine work).
> +             */
> +            if ( csched_balance_cpumask(vc, balance_step,
> +                                        &scratch_balance_mask) )
> +                continue;

Again, I think for clarity the best thing to do here is:

if ( balance_step == NODE
     && cpumask_full(speer->sdom->node_affinity_cpumask) )
    continue;

csched_balance_cpumask();

/* Etc. */




> @@ -1295,7 +1438,8 @@ csched_load_balance(struct csched_privat
>       struct csched_vcpu *speer;
>       cpumask_t workers;
>       cpumask_t *online;
> -    int peer_cpu;
> +    int peer_cpu, peer_node, bstep;
> +    int node = cpu_to_node(cpu);
>
>       BUG_ON( cpu != snext->vcpu->processor );
>       online = cpupool_scheduler_cpumask(per_cpu(cpupool, cpu));
> @@ -1312,42 +1456,68 @@ csched_load_balance(struct csched_privat
>           SCHED_STAT_CRANK(load_balance_other);
>
>       /*
> -     * Peek at non-idling CPUs in the system, starting with our
> -     * immediate neighbour.
> +     * Let's look around for work to steal, taking both vcpu-affinity
> +     * and node-affinity into account. More specifically, we check all
> +     * the non-idle CPUs' runq, looking for:
> +     *  1. any node-affine work to steal first,
> +     *  2. if not finding anything, any vcpu-affine work to steal.
>        */
> -    cpumask_andnot(&workers, online, prv->idlers);
> -    cpumask_clear_cpu(cpu, &workers);
> -    peer_cpu = cpu;
> +    for_each_csched_balance_step( bstep )
> +    {
> +        /*
> +         * We peek at the non-idling CPUs in a node-wise fashion. In fact,
> +         * it is more likely that we find some node-affine work on our same
> +         * node, not to mention that migrating vcpus within the same node
> +     * could well be expected to be cheaper than across-nodes (memory
> +         * stays local, there might be some node-wide cache[s], etc.).
> +         */
> +        peer_node = node;
> +        do
> +        {
> +            /* Find out what the !idle are in this node */
> +            cpumask_andnot(&workers, online, prv->idlers);
> +            cpumask_and(&workers, &workers, &node_to_cpumask(peer_node));
> +            cpumask_clear_cpu(cpu, &workers);
>
> -    while ( !cpumask_empty(&workers) )
> -    {
> -        peer_cpu = cpumask_cycle(peer_cpu, &workers);
> -        cpumask_clear_cpu(peer_cpu, &workers);
> +            if ( cpumask_empty(&workers) )
> +                goto next_node;
>
> -        /*
> -         * Get ahold of the scheduler lock for this peer CPU.
> -         *
> -         * Note: We don't spin on this lock but simply try it. Spinning could
> -         * cause a deadlock if the peer CPU is also load balancing and trying
> -         * to lock this CPU.
> -         */
> -        if ( !pcpu_schedule_trylock(peer_cpu) )
> -        {
> -            SCHED_STAT_CRANK(steal_trylock_failed);
> -            continue;
> -        }
> +            peer_cpu = cpumask_first(&workers);
> +            do
> +            {
> +                /*
> +                 * Get ahold of the scheduler lock for this peer CPU.
> +                 *
> +                 * Note: We don't spin on this lock but simply try it. Spinning
> +                 * could cause a deadlock if the peer CPU is also load
> +                 * balancing and trying to lock this CPU.
> +                 */
> +                if ( !pcpu_schedule_trylock(peer_cpu) )
> +                {
> +                    SCHED_STAT_CRANK(steal_trylock_failed);
> +                    peer_cpu = cpumask_cycle(peer_cpu, &workers);
> +                    continue;
> +                }
>
> -        /*
> -         * Any work over there to steal?
> -         */
> -        speer = cpumask_test_cpu(peer_cpu, online) ?
> -            csched_runq_steal(peer_cpu, cpu, snext->pri) : NULL;
> -        pcpu_schedule_unlock(peer_cpu);
> -        if ( speer != NULL )
> -        {
> -            *stolen = 1;
> -            return speer;
> -        }
> +                /* Any work over there to steal? */
> +                speer = cpumask_test_cpu(peer_cpu, online) ?
> +                    csched_runq_steal(peer_cpu, cpu, snext->pri, bstep) : NULL;
> +                pcpu_schedule_unlock(peer_cpu);
> +
> +                /* As soon as one vcpu is found, balancing ends */
> +                if ( speer != NULL )
> +                {
> +                    *stolen = 1;
> +                    return speer;
> +                }
> +
> +                peer_cpu = cpumask_cycle(peer_cpu, &workers);
> +
> +            } while( peer_cpu != cpumask_first(&workers) );
> +
> + next_node:
> +            peer_node = cycle_node(peer_node, node_online_map);
> +        } while( peer_node != node );
>       }

These changes all look right.  But then, I'm a bit tired, so I'll give 
it another once-over tomorrow. :-)

[To be continued]



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 17:13:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 17:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TljgA-0000NR-9u; Thu, 20 Dec 2012 17:13:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Tljg9-0000NF-2S
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 17:13:05 +0000
Received: from [85.158.138.51:19630] by server-1.bemta-3.messagelabs.com id
	82/C6-08906-B1743D05; Thu, 20 Dec 2012 17:12:59 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1356023577!29810052!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17300 invoked from network); 20 Dec 2012 17:12:58 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 17:12:58 -0000
Received: by mail-la0-f47.google.com with SMTP id u2so3417968lag.34
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 09:12:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=J6D7nzR4K2atfYOjM7tKSbnhYZYDvTiKj4V+St4W62A=;
	b=sauAtNSlnajab05s+pjHXi4n125Wef7SIZRk3rO6aHWsZTaRaAB6tavl3jj3NVUOzO
	9OtWPUdtLr+I/xe+p9iRoCYj91itF5BbnY0QbsxTTTeRFXL2qStS5UQ9lal8BQtfpKQB
	Fwy9wQgawTKiePfwKGO17OE/hzdlTB1WAkawir55xvFkURfWF9uuORuXgZM+sgsKEcqI
	6QWEt2fO8/37yAlnezRuezZ26taLV009rhnLX1COUXQBhSNg4gK4EyGSqg8uFPp4BhkH
	65C9dXVDXw3jr8K+iMtp+kZj5nYYX4osblQhmi+VpqXPX1W0hTgAaSrIqEWxMiPmQiYn
	Kv3A==
Received: by 10.112.45.166 with SMTP id o6mr3997664lbm.44.1356023577253; Thu,
	20 Dec 2012 09:12:57 -0800 (PST)
MIME-Version: 1.0
Received: by 10.152.147.168 with HTTP; Thu, 20 Dec 2012 09:12:37 -0800 (PST)
In-Reply-To: <50D33537.9050802@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D33537.9050802@eu.citrix.com>
From: Dario Faggioli <dario.faggioli@citrix.com>
Date: Thu, 20 Dec 2012 18:12:37 +0100
X-Google-Sender-Auth: XW1T3CbusVVVKhYydgfACgi80Zc
Message-ID: <CAAWQecsaFz08SEZTS-X6oseZdtB8uwQGHWpHQfAKZ2ZwL-OZsw@mail.gmail.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 4:56 PM, George Dunlap
<george.dunlap@eu.citrix.com> wrote:
>> This change modifies the VCPU load balancing algorithm (for the
>> credit scheduler only), introducing a two-step logic.
>> During the first step, we use the node-affinity mask. The aim is
>> to give precedence to the CPUs where it is known to be preferable
>> for the domain to run. If that fails to find a valid PCPU, the
>> node-affinity is just ignored and, in the second step, we fall
>> back to using cpu-affinity only.
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
>
> This one has a lot of structural comments, so I'm going to send a couple of
> different mails as I'm going through it, so we can parallelize the discussion
> better. :-)
>
Ok.

>> +#define for_each_csched_balance_step(__step) \
>> +    for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
>
>
> Why are we starting at the top and going down?  Is there any good reason for
> it?
>
You're totally right; it looked like this in the very first RFC, when the
whole set of macros was different. I changed the macros but never
reconsidered this, and I agree that counting upwards would be more natural.

> So why not just have this be as follows?
>
> for(step=0; step<CSCHED_BALANCE_MAX; step++)
>
Will do.

>> +static int
>> +csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
>> +{
>> +    if ( step == CSCHED_BALANCE_NODE_AFFINITY )
>> +    {
>> +        struct domain *d = vc->domain;
>> +        struct csched_dom *sdom = CSCHED_DOM(d);
>> +
>> +        cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
>> +
>> +        if ( cpumask_full(sdom->node_affinity_cpumask) )
>> +            return -1;
>
>
> There's no optimization in having this comparison done here.  You're not
> reading something from a local variable that you've just calculated.  But
> hiding this comparison inside this function, and disguising it as "returns
> -1", does increase the cognitive load on anybody trying to read and
> understand the code -- particularly, as how the return value is used is not
> really clear.
>
Yes, I again agree this is ugly. The previous version was better from the
caller's point of view, but it had other downsides (IIRC it was messing with
the loop control variable directly), so we agreed on this '-1' thing.

> Also, when you use this value, effectively what you're doing is saying,
> "Actually, we just said we were doing the NODE_BALANCE step, but it turns
> out that the results of NODE_BALANCE and CPU_BALANCE will be the same, so
> we're just going to pretend that we've been doing the CPU_BALANCE step
> instead."  (See for example, "balance_step == CSCHED_BALANCE_NODE_AFFINITY
> && !ret" -- why the !ret in this clause?  Because if !ret then we're not
> actually doing NODE_AFFINITY now, but CPU_AFFINITY.)  Another non-negligible
> chunk of cognitive load for someone reading the code to 1) figure out, and
> 2) keep in mind as she tries to analyze it.
>
Totally agreed. :-)

> I took a look at all the places which use this return value, and it seems
> like the best thing in each case would just be to have the *caller*, before
> getting into the loop, call cpumask_full(sdom->node_affinity_cpumask) and
> just skip the CSCHED_NODE_BALANCE step altogether if it's true.  (Example
> below.)
>
I will think about it and see if I can find a nice solution for making that
happen (the point being that I'm not sure I like exposing and disseminating
that cpumask_full... thing so much, but I guess I can hide it under some
more macro-ing and stuff).

>> @@ -266,67 +332,94 @@ static inline void
>>       struct csched_vcpu * const cur = CSCHED_VCPU(curr_on_cpu(cpu));
>>       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
>>       cpumask_t mask, idle_mask;
>> -    int idlers_empty;
>> +    int balance_step, idlers_empty;
>>
>>       ASSERT(cur);
>> -    cpumask_clear(&mask);
>> -
>>       idlers_empty = cpumask_empty(prv->idlers);
>>
>>       /*
>> -     * If the pcpu is idle, or there are no idlers and the new
>> -     * vcpu is a higher priority than the old vcpu, run it here.
>> -     *
>> -     * If there are idle cpus, first try to find one suitable to run
>> -     * new, so we can avoid preempting cur.  If we cannot find a
>> -     * suitable idler on which to run new, run it here, but try to
>> -     * find a suitable idler on which to run cur instead.
>> +     * Node and vcpu-affinity balancing loop. To speed things up, in case
>> +     * no node-affinity at all is present, scratch_balance_mask reflects
>> +     * the vcpu-affinity, and ret is -1, so that we then can quit the
>> +     * loop after only one step.
>>        */
>> [snip]
>
> The whole logic here is really convoluted and hard to read.  For example, if
> cur->pri==IDLE, then you will always just break out of the loop after the first
> iteration.  In that case, why have the if() inside the loop to begin with?
> And if idlers_empty is true but cur->pri >= new->pri, then you'll go through
> the loop two times, even though both times it will come up empty.  And, of
> course, the whole thing about the node affinity mask being checked inside
> csched_balance_cpumask(), but not used until the very end.
>
I fear it looks convoluted and complex because, well, it _is_ quite complex!
However, I see your point, and there is definitely a chance that, although
complex, it ends up looking even more complex than it should. :-)

> A much more straightforward way to arrange it would be:
>
> if(cur->pri==IDLE &c &c)
> {
>   foo;
> }
> else if(!idlers_empty)
> {
>   if(cpumask_full(sdom->node_affinity_cpumask))
>     balance_step=CSCHED_BALANCE_CPU_AFFINITY;
>   else
>     balance_step=CSCHED_BALANCE_NODE_AFFINITY;
>
Yes, I think this can be taken out. I'll give this some more thought and
see if I can simplify the flow.

> [To be continued...]
>
Thanks for now :-)
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
---------------------------------------------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 17:48:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 17:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlkEO-0000z6-Ah; Thu, 20 Dec 2012 17:48:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TlkEM-0000z1-8d
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 17:48:26 +0000
Received: from [85.158.139.211:6240] by server-16.bemta-5.messagelabs.com id
	A8/5B-09208-96F43D05; Thu, 20 Dec 2012 17:48:25 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1356025703!18810346!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26583 invoked from network); 20 Dec 2012 17:48:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 17:48:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,325,1355097600"; 
   d="scan'208";a="1446713"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 17:48:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 12:48:18 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TlkEE-00020t-83;
	Thu, 20 Dec 2012 17:48:18 +0000
Message-ID: <1356025701.24056.3.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 20 Dec 2012 17:48:21 +0000
In-Reply-To: <20121219192405.GA24729@phenom.dumpdata.com>
References: <1355411537.8376.52.camel@iceland>
	<20121219160455.GA12077@phenom.dumpdata.com>
	<1355936517.10526.28.camel@iceland>
	<20121219174000.GA28570@phenom.dumpdata.com>
	<1355939543.10526.30.camel@iceland>
	<20121219192405.GA24729@phenom.dumpdata.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM bug: system crashes after offline online a vcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-12-19 at 19:24 +0000, Konrad Rzeszutek Wilk wrote:
> #
> #
> # echo 0 > /sys/devices/system/cpu/cpu3/online
> # echo 1 > /sys/devices/system/cpu/cpu3/online
> [   73.324141] installing Xen timer for CPU 3
> [   73.324236] cpu 3 spinlock event irqc_intel fbcon scsi_mod tileblit font bitblit softcursor atl1c drm_kms_helper video xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd mperf
> [   73.325026] Pid: 0, comm: swapper/3 Not tainted 3.7.0upstream #1
> [   73.325033] Call Trace:
> [   73.325047]  [<ffffffff810c2d1d>] __schedule_bug+0x4d/0x60
> [   73.325058]  [<ffffffff816359e2>] __schedule+0x6b2/0x7c0
> [   73.325074]  [<ffffffff81043d5d>] ? xen_force_evtchn_callback+0xd/0x10
> [   73.325086]  [<ffffffff81635bb4>] schedule+0x24/0x70
> [   73.325097]  [<ffffffff81056de9>] cpu_idle+0xc9/0xe0
> [   73.325112]  [<ffffffff810445b9>] ? xen_irq_enable_direct_reloc+0x4/0x4
> [   73.325116] BUG: unable to handle kernel NULL pointer dereference
> [   73.325122]  [<ffffffff8162572b>] cpu_bringup_and_idle+0xe/0x10
> 

The thread is a bit off-topic now since we're talking about PV at the
moment...

Please see cs 41bd956de3dfdc3a43708fe2e0c8096c69064a1e; it seems that
the imbalance still exists with CONFIG_PREEMPT_COUNT=y.

And I didn't manage to reproduce the NULL pointer dereference.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 18:15:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlkdj-0001Yn-Lp; Thu, 20 Dec 2012 18:14:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dominic.curran@citrix.com>) id 1Tlkdi-0001YQ-Iz
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 18:14:38 +0000
Received: from [85.158.143.99:17309] by server-2.bemta-4.messagelabs.com id
	84/27-30861-E8553D05; Thu, 20 Dec 2012 18:14:38 +0000
X-Env-Sender: dominic.curran@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1356027276!30256983!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM2NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4545 invoked from network); 20 Dec 2012 18:14:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:14:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="1376367"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 18:14:35 +0000
Received: from [10.216.133.46] (10.216.133.46) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 13:14:35 -0500
Message-ID: <50D3558A.1090105@citrix.com>
Date: Thu, 20 Dec 2012 10:14:34 -0800
From: dom <dominic.curran@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:15.0) Gecko/20120827 Thunderbird/15.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1351187729-4681-1-git-send-email-jean.guyader@citrix.com>
	<508E5C6B02000078000A5011@nat28.tlf.novell.com>
In-Reply-To: <508E5C6B02000078000A5011@nat28.tlf.novell.com>
X-Originating-IP: [10.216.133.46]
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 0/2] Add V4V to Xen (v8)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/29/2012 02:37 AM, Jan Beulich wrote:
>>>> On 25.10.12 at 19:55, Jean Guyader <jean.guyader@citrix.com> wrote:
>> v8 changes:
>>          - Move v4v private structures to v4v.c
>>          - fix padding
> I still spotted at least one bogus padding field (in struct
> v4v_ring_message_header, where no field is more than 4-byte
> aligned afaict). Did you really carefully walk through all of them?

Hi Jan,
I am going to try and pilot this home, as I believe JeanG will not.

I just had a couple of questions about your comments that I hope you
could help me with.


> Also, to validate the structures are really compatible between
> native and compat mode guests, I'd strongly recommend adding
> the leaf ones to xen/include/xlat.lst.

I'm sorry, I don't understand.  What is xlat.lst?  And how does it help?


> Further I don't think you sync-ed up your patches with the
> XEN_GUEST_HANDLE_PARAM() changes done for ARM, yet you
> also didn't mention that the patch set is against other than the
> tip of unstable.

Sorry, again I'm confused.  I thought all the Xen ARM changes went into
xen-unstable.
What other branches/trees do you think I need to post the v4v patch set against?


>
> There appear to be plenty left (space between function name and
> opening parenthesis, indentation inside switch statement, missing
> parenthesization of macro expansion, missing newline between
> declarations and statements are which I noticed without specifically
> looking for them).
>
>
OK. This I can do.
dom

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/29/2012 02:37 AM, Jan Beulich wrote:
>>>> On 25.10.12 at 19:55, Jean Guyader <jean.guyader@citrix.com> wrote:
>> v8 changes:
>>          - Move v4v private structures to v4v.c
>>          - fix padding
> I still spotted at least one bogus padding field (in struct
> v4v_ring_message_header, where no field is more than 4-byte
> aligned afaict). Did you really carefully walk through all of them?

Hi Jan,
I am going to try and pilot this home, as I believe JeanG will not.

I just had a couple of questions about your comments that I hope you
could help me with.


> Also, to validate the structures are really compatible between
> native and compat mode guests, I'd strongly recommend adding
> the leaf ones to xen/include/xlat.lst.

I'm sorry, I don't understand.  What is xlat.lst?  And how does it help?


> Further I don't think you sync-ed up your patches with the
> XEN_GUEST_HANDLE_PARAM() changes done for ARM, yet you
> also didn't mention that the patch set is against other than the
> tip of unstable.

Sorry, again I'm confused.  I thought all the Xen ARM changes went into
xen-unstable.  What other branches/trees do you think I need to post the v4v
patch set against?


>
> There appear to be plenty left (space between function name and
> opening parenthesis, indentation inside switch statements, missing
> parenthesization of macro expansions, missing newline between
> declarations and statements), all of which I noticed without
> specifically looking for them.
>
>
OK. This I can do.
dom

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 18:18:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlkhT-0001iz-Ac; Thu, 20 Dec 2012 18:18:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1TlkhR-0001ir-SF
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 18:18:30 +0000
Received: from [85.158.139.83:52465] by server-3.bemta-5.messagelabs.com id
	A8/C2-25441-57653D05; Thu, 20 Dec 2012 18:18:29 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1356027507!23415845!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22012 invoked from network); 20 Dec 2012 18:18:28 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:18:28 -0000
Received: by mail-la0-f47.google.com with SMTP id u2so3539916lag.34
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 10:18:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=luyy6rLm3ut4vubVOiFnKsO8cZ3gBPDdeUukw1ADOd8=;
	b=QmvxfwqyvCCV1SbtnKA2+ZIc6Gst4DhRplchpxqva9E1lbYtArhbYsVoQBa352slJT
	KFaopkLd/6AgLlQohixxRW4dvHgeIx5/b8xi5/NByXxgSMQ3pchBm0FnOrkTb+mZpAbp
	sl44kaeXr8DAkp72VphwUkJpv3tKjm3c4QBKXjIlWZok+PT7RUFJ00j9k7uokd+OGW6C
	njs6r4+5yBBuob1IWssq0VLrBG3rDbG/N4nA0DOtdEQP1k2M2p8In+K4mZvTnv+zjLdV
	N+egwRzWFPYzvFxs6KWu03qQ6ebL68tV3MfweI+TwS9ERevGfjrrGnzyBO45KTRN09gf
	MQZg==
Received: by 10.112.43.161 with SMTP id x1mr4265103lbl.32.1356027507484; Thu,
	20 Dec 2012 10:18:27 -0800 (PST)
MIME-Version: 1.0
Received: by 10.152.147.168 with HTTP; Thu, 20 Dec 2012 10:18:07 -0800 (PST)
In-Reply-To: <50D3414D.8080901@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D3414D.8080901@eu.citrix.com>
From: Dario Faggioli <dario.faggioli@citrix.com>
Date: Thu, 20 Dec 2012 19:18:07 +0100
X-Google-Sender-Auth: RitXMb7FmvshYkzRwft6X1ZpisQ
Message-ID: <CAAWQectVEihrayJj5n4SPGqA0QJSiC7s2x_oDW=KHyxukWpMSA@mail.gmail.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 5:48 PM, George Dunlap
<george.dunlap@eu.citrix.com> wrote:
> On 19/12/12 19:07, Dario Faggioli wrote:
>> +static inline int
>> +__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
>> +{
>> +    /*
>> +     * Consent to migration if cpu is one of the idlers in the VCPU's
>> +     * affinity mask. In fact, if that is not the case, it just means it
>> +     * was some other CPU that was tickled and should hence come and pick
>> +     * VCPU up. Migrating it to cpu would only make things worse.
>> +     */
>> +    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
>>   }
>
> And in any case, looking at the caller of csched_load_balance(), it
> explicitly says to steal work if the next thing on the runqueue of cpu has a
> priority of TS_OVER.  That was chosen for a reason -- if you want to change
> that, you should change it there at the top (and make a justification for
> doing so), not deeply nested in a function like this.
>
> Or am I completely missing something?
>
No, you're right. While trying to solve a nasty issue I was seeing, I
overlooked that I was changing the underlying logic up to that point... Thanks!

What I want to avoid is the following: a vcpu wakes up on the busy pcpu Y. As
a consequence, the idle pcpu X is tickled. Then, for some unrelated reason,
pcpu Z reschedules and, as it is about to go idle too, looks around for a vcpu
to steal, finds one in Y's runqueue and grabs it. Afterwards, when X gets the
IPI and schedules, it just does not find anyone to run and goes back to idling.

Now, suppose the vcpu has X, but *not* Z, in its node-affinity (while it has a
full vcpu-affinity, i.e., can run everywhere). In this case, a vcpu that could
have run on a pcpu within its node-affinity ends up executing outside of it.
That happens because the NODE_BALANCE_STEP in csched_load_balance(), when
called by Z, won't find anything suitable to steal (provided there isn't
actually any vcpu with node-affinity with Z waiting in any runqueue), while
the CPU_BALANCE_STEP will find our vcpu. :-(

So, what I wanted was something that could tell me whether the pcpu that is
stealing work is the one that was actually tickled to do so. I was using the
pcpu's idleness as a (cheap and easy to check) indication of that, but I now
see this has side effects that I did not want to cause in the first place.

Sorry for that; I probably spent so much time buried, as you were saying, in
the various nested loops and calls that I lost the context a little bit! :-P

Ok, I think the problem I was describing is real, and I've seen it happening
and causing performance degradation. However, as a good solution is going to
be more complex than I thought, I'd better repost without this function and
deal with it in a future separate patch (after having figured out the best
way of doing so). Is that fine with you?

> These changes all look right.
>
At least. :-)

> But then, I'm a bit tired, so I'll give it
> another once-over tomorrow. :-)
>
I can imagine, looking forward to your next comments.

Thanks a lot and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
---------------------------------------------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 18:20:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlkjI-0001rO-Ux; Thu, 20 Dec 2012 18:20:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TlkjG-0001r7-TH
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 18:20:23 +0000
Received: from [193.109.254.147:20480] by server-7.bemta-14.messagelabs.com id
	35/0F-08102-6E653D05; Thu, 20 Dec 2012 18:20:22 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1356027617!2385515!1
X-Originating-IP: [209.85.210.169]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6882 invoked from network); 20 Dec 2012 18:20:19 -0000
Received: from mail-ia0-f169.google.com (HELO mail-ia0-f169.google.com)
	(209.85.210.169)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:20:19 -0000
Received: by mail-ia0-f169.google.com with SMTP id r4so3193080iaj.28
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 10:20:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=tmrvjYjCjFbygMx+Z76SRr83O3VTFYhCyJrnmLaN8ok=;
	b=UJNYIikOnOT/AocMEBgtBM5VCRjPTJ7DjRaw4i0hI4FF+xPa2XSJCbFMlGh2kDdh+X
	OSHTv8yIcn455lm37+gSy3EdGd3BV/TEMf9TRsB4xGzqflIWUJDTl7cgqHBF9NR5koge
	s6U/fitpBeBiQRNtBQOlCUIu1D0hnC2KgwIvAYKU0bm7E+M49hSNRh4Si7L5ITgnDo6k
	8CDSkHhdzogJNTcH7DKOfmMnsH9r+JxFawF5uv5I+IHlob5uIOgusM1sZHhSaGMMVHBP
	JY1A0LY6UVUhRzwKEeGMeoLFjxF4DnEQbmbCEqNAIr+qMRnVQkEdP2qEMF2cI9lE7Ea/
	kcmA==
MIME-Version: 1.0
Received: by 10.50.56.139 with SMTP id a11mr6153466igq.86.1356027617533; Thu,
	20 Dec 2012 10:20:17 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 10:20:17 -0800 (PST)
In-Reply-To: <CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
	<CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
Date: Fri, 21 Dec 2012 02:20:17 +0800
X-Google-Sender-Auth: wPBW_AaUfLXfJQbN6cmx1US2HXY
Message-ID: <CAKhsbWau-Kou2Y5vYcetRJLM5nQWGc0w65SpG1YxtcY6+VcXxA@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jean Guyader <Jean.guyader@gmail.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 12:04 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
>>> PS2: I also suffered another instability yesterday. It happened while I
>>> was compiling a kernel inside the domU. The host rebooted suddenly.
>>> Since I wasn't using graphics at that time (the Xorg session was idle;
>>> I was connected through SSH), this may be a different issue.

I tried once more to rebuild the kernel in the Debian VM. It was a total mess
this time: the whole system (including dom0) unexpectedly rebooted several
times during the compilation. This destroyed the kernel tree, and I failed to
build the kernel. I suspect this has something to do with the disk driver,
since the reboots tend to happen during high disk load (like linking vmlinux).
I will run iozone to check tomorrow.

It seems that this issue has little to do with IGD passthrough.
I'm not sure whether it's the same issue as the host freezing during game play.
Maybe I should track them separately.

Thanks,
Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 18:28:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlkqb-0002Ag-Sf; Thu, 20 Dec 2012 18:27:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1Tlkqb-0002Ab-08
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 18:27:57 +0000
Received: from [85.158.137.99:13732] by server-4.bemta-3.messagelabs.com id
	9D/A2-31835-CA853D05; Thu, 20 Dec 2012 18:27:56 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1356028074!14939453!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5642 invoked from network); 20 Dec 2012 18:27:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:27:55 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="1452368"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:27:53 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Thu, 20 Dec 2012
	13:27:53 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: G.R. <firemeteor@users.sourceforge.net>, "Keir (Xen.org)" <keir@xen.org>
Date: Thu, 20 Dec 2012 13:27:46 -0500
Thread-Topic: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
	resource conflict for OpRegion.
Thread-Index: Ac3ew75QTFjGx++7S5qnRX4AfS6TFQAG6Tbg
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
In-Reply-To: <CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of G.R.
> Sent: Thursday, December 20, 2012 10:06 AM
> To: Keir (Xen.org)
> Cc: Stefano Stabellini; Ian Campbell; Jean.guyader@gmail.com; xen-devel
> Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
> resource conflict for OpRegion.
> 
> On Thu, Dec 20, 2012 at 10:19 PM, Keir Fraser <keir@xen.org> wrote:
> > On 20/12/2012 13:31, "G.R." <firemeteor@users.sourceforge.net> wrote:
> >
> >> If concern is about security, the same argument should apply to the
> >> first page (the portion before the page offset).
> >> The problem is that I have no idea what is around the mapped page.
> >> Not sure who has the knowledge.
> >
> > Well we can't do better than mapping some whole number of pages,
> really.
> > Unless we trap to qemu on every access. I don't think we'd go there
> > unless there really were a known security issue. But mapping only the
> > exact number of pages we definitely need is a good principle.
> >
> >> What's the standard flow to handle such a map with an offset?
> >> I expect this to be a common case, since the ioremap function in the
> >> Linux kernel accepts this.
> >
> > map_size = ((host_opregion & 0xfff) + 8096 + 0xfff) >> 12
> >
> Keir, I believe this expression should give the same result as the patch.
> First of all, 8096 should be 8192 :-), and that term contributes a
> constant 2 after the right shift.
> The remaining part is ((host_opregion & 0xfff) + 0xfff) >> 12. As long as
> the first sub-expression is non-zero, the sum ranges over [0x1000, 0x1ffe],
> and this yields 1 after the right shift.
> So as long as there is no known security risk (which I'm not sure about),
> the patch should be fine.
> 
> > Possibly with suitable macros used instead of magic numbers (e.g.,
> > XC_PAGE_* and a macro for the opregion size).
> 
> I guess there is no predefined macro for the OpRegion size. And I guess I
> need to define it in both code bases?
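As a sanity check of the page-count arithmetic discussed above, the expression can be sketched as a small C helper. `IGD_OPREGION_SIZE` is a hypothetical macro (the thread notes no predefined one exists), and the `XC_PAGE_*` names follow Keir's naming suggestion; this is an illustrative sketch, not the patch itself:

```c
#include <assert.h>
#include <stdint.h>

#define XC_PAGE_SHIFT     12
#define XC_PAGE_SIZE      (1UL << XC_PAGE_SHIFT)
#define IGD_OPREGION_SIZE 8192UL  /* hypothetical macro: 8 KiB, not 8096 */

/* Pages needed to cover `size` bytes starting at physical address `addr`,
 * rounding up for both the sub-page start offset and the tail. */
static unsigned long map_pages(uint64_t addr, unsigned long size)
{
    return ((addr & (XC_PAGE_SIZE - 1)) + size + XC_PAGE_SIZE - 1)
            >> XC_PAGE_SHIFT;
}
```

A page-aligned OpRegion then needs exactly two pages, and any non-zero offset adds a third, matching the [0x1000, 0x1ffe] analysis in the reply.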

In addition, we should think about defining the IGD OpRegion in ACPI per the spec (cited earlier). Guest drivers seem to find the region just by reading the ASLS register in the gfx device's config space, but it would be more correct to define it in ACPI too. Just a thought.
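Ross's point about ASLS can be illustrated with a minimal sketch. Reading config space through a stdio stream here stands in for real PCI config accessors (a kernel driver would use something like `pci_read_config_dword()`); the 0xFC offset is the ASLS register location per the Intel OpRegion spec, but treat the whole snippet as an assumption-laden illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IGD_ASLS_OFFSET 0xFC  /* ASLS: OpRegion base address register */

/* Read the 32-bit ASLS register from a stream positioned over the gfx
 * device's 256-byte PCI config space (e.g. sysfs's .../config file).
 * Returns the guest-physical OpRegion address, or 0 if none is set. */
static uint32_t igd_opregion_base(FILE *cfg)
{
    uint32_t asls = 0;
    if (fseek(cfg, IGD_ASLS_OFFSET, SEEK_SET) == 0)
        fread(&asls, sizeof(asls), 1, cfg);
    return asls;
}
```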

Ross

> 
> >  -- Keir
> >
> >
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllHA-00035A-1o; Thu, 20 Dec 2012 18:55:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllH8-00034x-Ad
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:22 +0000
Received: from [85.158.138.51:29353] by server-9.bemta-3.messagelabs.com id
	68/BD-11948-91F53D05; Thu, 20 Dec 2012 18:55:21 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1356029717!21893490!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28373 invoked from network); 20 Dec 2012 18:55:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="sf'?scan'208";a="1455993"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:19 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Thu, 20 Dec 2012
	13:55:19 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:11 -0500
Thread-Topic: [Xen-devel] [PATCH v4 01/04] HVM firmware passthrough HVM defs
	header
Thread-Index: Ac3e4pWRIPPycg3kTjiWJhqefHm7rQ==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 01/04] HVM firmware passthrough HVM defs
 header
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Add public HVM definitions header for firmware passthrough support (including
comment describing the feature's use). In addition this header is used to
collect the various xenstore string values that are used in HVMLOADER.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>
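For readers who don't decode the attachment, the header's pattern is a set of string macros naming xenstore keys; a few examples lifted from the attached patch (paths are relative to the guest's xenstore directory):

```c
#include <assert.h>
#include <string.h>

/* Excerpt of the xenstore key macros collected by the new header. */
#define HVM_XS_HVMLOADER        "hvmloader"
#define HVM_XS_BIOS             "hvmloader/bios"
#define HVM_XS_ACPI_PT_ADDRESS  "hvmloader/acpi/address"
#define HVM_XS_ACPI_PT_LENGTH   "hvmloader/acpi/length"
```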


--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_
Content-Type: application/octet-stream;
	name="hvm-firmware-passthrough-v4-01.patch"
Content-Description: hvm-firmware-passthrough-v4-01.patch
Content-Disposition: attachment;
	filename="hvm-firmware-passthrough-v4-01.patch"; size=4525;
	creation-date="Wed, 19 Dec 2012 19:29:35 GMT";
	modification-date="Thu, 20 Dec 2012 16:35:19 GMT"
Content-Transfer-Encoding: base64

QWRkIHB1YmxpYyBIVk0gZGVmaW5pdGlvbnMgaGVhZGVyIGZvciBmaXJtd2FyZSBwYXNzdGhyb3Vn
aCBzdXBwb3J0IChpbmNsdWRpbmcKY29tbWVudCBkZXNjcmliaW5nIHRoZSBmZWF0dXJlJ3MgdXNl
KS4gSW4gYWRkaXRpb24gdGhpcyBoZWFkZXIgaXMgdXNlZCB0bwpjb2xsZWN0IHRoZSB2YXJpb3Vz
IHhlbnN0b3JlIHN0cmluZyB2YWx1ZXMgdGhhdCBhcmUgdXNlZCBpbiBIVk1MT0FERVIuCgpTaWdu
ZWQtb2ZmLWJ5OiBSb3NzIFBoaWxpcHNvbiA8cm9zcy5waGlsaXBzb25AY2l0cml4LmNvbT4KCmRp
ZmYgLXIgMDkwY2MzZTIwZDNlIHRvb2xzL2luY2x1ZGUveGVuLXRvb2xzL2h2bV9kZWZzLmgKLS0t
IC9kZXYvbnVsbAlUaHUgSmFuIDAxIDAwOjAwOjAwIDE5NzAgKzAwMDAKKysrIGIvdG9vbHMvaW5j
bHVkZS94ZW4tdG9vbHMvaHZtX2RlZnMuaAlXZWQgRGVjIDE5IDE0OjIyOjQ4IDIwMTIgLTA1MDAK
QEAgLTAsMCArMSw4NSBAQAorLyoKKyAqIFBlcm1pc3Npb24gaXMgaGVyZWJ5IGdyYW50ZWQsIGZy
ZWUgb2YgY2hhcmdlLCB0byBhbnkgcGVyc29uIG9idGFpbmluZyBhIGNvcHkKKyAqIG9mIHRoaXMg
c29mdHdhcmUgYW5kIGFzc29jaWF0ZWQgZG9jdW1lbnRhdGlvbiBmaWxlcyAodGhlICJTb2Z0d2Fy
ZSIpLCB0bworICogZGVhbCBpbiB0aGUgU29mdHdhcmUgd2l0aG91dCByZXN0cmljdGlvbiwgaW5j
bHVkaW5nIHdpdGhvdXQgbGltaXRhdGlvbiB0aGUKKyAqIHJpZ2h0cyB0byB1c2UsIGNvcHksIG1v
ZGlmeSwgbWVyZ2UsIHB1Ymxpc2gsIGRpc3RyaWJ1dGUsIHN1YmxpY2Vuc2UsIGFuZC9vcgorICog
c2VsbCBjb3BpZXMgb2YgdGhlIFNvZnR3YXJlLCBhbmQgdG8gcGVybWl0IHBlcnNvbnMgdG8gd2hv
bSB0aGUgU29mdHdhcmUgaXMKKyAqIGZ1cm5pc2hlZCB0byBkbyBzbywgc3ViamVjdCB0byB0aGUg
Zm9sbG93aW5nIGNvbmRpdGlvbnM6CisgKgorICogVGhlIGFib3ZlIGNvcHlyaWdodCBub3RpY2Ug
YW5kIHRoaXMgcGVybWlzc2lvbiBub3RpY2Ugc2hhbGwgYmUgaW5jbHVkZWQgaW4KKyAqIGFsbCBj
b3BpZXMgb3Igc3Vic3RhbnRpYWwgcG9ydGlvbnMgb2YgdGhlIFNvZnR3YXJlLgorICoKKyAqIFRI
RSBTT0ZUV0FSRSBJUyBQUk9WSURFRCAiQVMgSVMiLCBXSVRIT1VUIFdBUlJBTlRZIE9GIEFOWSBL
SU5ELCBFWFBSRVNTIE9SCisgKiBJTVBMSUVELCBJTkNMVURJTkcgQlVUIE5PVCBMSU1JVEVEIFRP
IFRIRSBXQVJSQU5USUVTIE9GIE1FUkNIQU5UQUJJTElUWSwKKyAqIEZJVE5FU1MgRk9SIEEgUEFS
VElDVUxBUiBQVVJQT1NFIEFORCBOT05JTkZSSU5HRU1FTlQuIElOIE5PIEVWRU5UIFNIQUxMIFRI
RQorICogQVVUSE9SUyBPUiBDT1BZUklHSFQgSE9MREVSUyBCRSBMSUFCTEUgRk9SIEFOWSBDTEFJ
TSwgREFNQUdFUyBPUiBPVEhFUgorICogTElBQklMSVRZLCBXSEVUSEVSIElOIEFOIEFDVElPTiBP
RiBDT05UUkFDVCwgVE9SVCBPUiBPVEhFUldJU0UsIEFSSVNJTkcKKyAqIEZST00sIE9VVCBPRiBP
UiBJTiBDT05ORUNUSU9OIFdJVEggVEhFIFNPRlRXQVJFIE9SIFRIRSBVU0UgT1IgT1RIRVIKKyAq
IERFQUxJTkdTIElOIFRIRSBTT0ZUV0FSRS4KKyAqLworCisjaWZuZGVmIF9YRU5fVE9PTFNfSFZN
X0RFRlNfSF9fCisjZGVmaW5lIF9YRU5fVE9PTFNfSFZNX0RFRlNfSF9fCisKKyNkZWZpbmUgSFZN
X1hTX0hWTUxPQURFUiAgICAgICAgICAgICAgICJodm1sb2FkZXIiCisjZGVmaW5lIEhWTV9YU19C
SU9TICAgICAgICAgICAgICAgICAgICAiaHZtbG9hZGVyL2Jpb3MiCisjZGVmaW5lIEhWTV9YU19H
RU5FUkFUSU9OX0lEX0FERFJFU1MgICAiaHZtbG9hZGVyL2dlbmVyYXRpb24taWQtYWRkcmVzcyIK
KworLyogVGhlIGZvbGxvd2luZyB2YWx1ZXMgYWxsb3cgYWRkaXRpb25hbCBBQ1BJIHRhYmxlcyB0
byBiZSBhZGRlZCB0byB0aGUKKyAqIHZpcnR1YWwgQUNQSSBCSU9TIHRoYXQgaHZtbG9hZGVyIGNv
bnN0cnVjdHMuIFRoZSB2YWx1ZXMgc3BlY2lmeSB0aGUgZ3Vlc3QKKyAqIHBoeXNpY2FsIGFkZHJl
c3MgYW5kIGxlbmd0aCBvZiBhIGJsb2NrIG9mIEFDUEkgdGFibGVzIHRvIGFkZC4gVGhlIGZvcm1h
dCBvZgorICogdGhlIGJsb2NrIGlzIHNpbXBseSBjb25jYXRlbmF0ZWQgcmF3IHRhYmxlcyAod2hp
Y2ggc3BlY2lmeSB0aGVpciBvd24gbGVuZ3RoCisgKiBpbiB0aGUgQUNQSSBoZWFkZXIpLgorICov
CisjZGVmaW5lIEhWTV9YU19BQ1BJX1BUX0FERFJFU1MgICAgICAgICAiaHZtbG9hZGVyL2FjcGkv
YWRkcmVzcyIKKyNkZWZpbmUgSFZNX1hTX0FDUElfUFRfTEVOR1RIICAgICAgICAgICJodm1sb2Fk
ZXIvYWNwaS9sZW5ndGgiCisKKy8qIEFueSBudW1iZXIgb2YgU01CSU9TIHR5cGVzIGNhbiBiZSBw
YXNzZWQgdGhyb3VnaCB0byBhbiBIVk0gZ3Vlc3QgdXNpbmcKKyAqIHRoZSBmb2xsb3dpbmcgeGVu
c3RvcmUgdmFsdWVzLiBUaGUgdmFsdWVzIHNwZWNpZnkgdGhlIGd1ZXN0IHBoeXNpY2FsCisgKiBh
ZGRyZXNzIGFuZCBsZW5ndGggb2YgYSBibG9jayBvZiBTTUJJT1Mgc3RydWN0dXJlcyBmb3IgaHZt
bG9hZGVyIHRvIHVzZS4KKyAqIFRoZSBibG9jayBpcyBmb3JtYXR0ZWQgaW4gdGhlIGZvbGxvd2lu
ZyB3YXk6CisgKgorICogPGxlbmd0aD48c3RydWN0PjxsZW5ndGg+PHN0cnVjdD4uLi4KKyAqCisg
KiBFYWNoIGxlbmd0aCBzZXBhcmF0b3IgaXMgYSAzMmIgaW50ZWdlciBpbmRpY2F0aW5nIHRoZSBs
ZW5ndGggb2YgdGhlIG5leHQKKyAqIFNNQklPUyBzdHJ1Y3R1cmUuIEZvciBETVRGIGRlZmluZWQg
dHlwZXMgKDAgLSAxMjEpLCB0aGUgcGFzc2VkIGluIHN0cnVjdAorICogd2lsbCByZXBsYWNlIHRo
ZSBkZWZhdWx0IHN0cnVjdHVyZSBpbiBodm1sb2FkZXIuIEluIGFkZGl0aW9uLCBhbnkKKyAqIE9F
TS92ZW5kb3J0eXBlcyAoMTI4IC0gMjU1KSB3aWxsIGFsbCBiZSBhZGRlZC4KKyAqLworI2RlZmlu
ZSBIVk1fWFNfU01CSU9TX1BUX0FERFJFU1MgICAgICAgImh2bWxvYWRlci9zbWJpb3MvYWRkcmVz
cyIKKyNkZWZpbmUgSFZNX1hTX1NNQklPU19QVF9MRU5HVEggICAgICAgICJodm1sb2FkZXIvc21i
aW9zL2xlbmd0aCIKKworLyogU2V0IHRvIDEgdG8gZW5hYmxlIFNNQklPUyBkZWZhdWx0IHBvcnRh
YmxlIGJhdHRlcnkgKHR5cGUgMjIpIHZhbHVlcy4gKi8KKyNkZWZpbmUgSFZNX1hTX1NNQklPU19E
RUZBVUxUX0JBVFRFUlkgICJodm1sb2FkZXIvc21iaW9zL2RlZmF1bHRfYmF0dGVyeSIKKworLyog
VGhlIGZvbGxvd2luZyB4ZW5zdG9yZSB2YWx1ZXMgYXJlIHVzZWQgdG8gb3ZlcnJpZGUgc29tZSBv
ZiB0aGUgZGVmYXVsdAorICogc3RyaW5nIHZhbHVlcyBpbiB0aGUgU01CSU9TIHRhYmxlIGNvbnN0
cnVjdGVkIGluIGh2bWxvYWRlci4KKyAqLworI2RlZmluZSBIVk1fWFNfQklPU19TVFJJTkdTICAg
ICAgICAgICAgImJpb3Mtc3RyaW5ncyIKKyNkZWZpbmUgSFZNX1hTX0JJT1NfVkVORE9SICAgICAg
ICAgICAgICJiaW9zLXN0cmluZ3MvYmlvcy12ZW5kb3IiCisjZGVmaW5lIEhWTV9YU19CSU9TX1ZF
UlNJT04gICAgICAgICAgICAiYmlvcy1zdHJpbmdzL2Jpb3MtdmVyc2lvbiIKKyNkZWZpbmUgSFZN
X1hTX1NZU1RFTV9NQU5VRkFDVFVSRVIgICAgICJiaW9zLXN0cmluZ3Mvc3lzdGVtLW1hbnVmYWN0
dXJlciIKKyNkZWZpbmUgSFZNX1hTX1NZU1RFTV9QUk9EVUNUX05BTUUgICAgICJiaW9zLXN0cmlu
Z3Mvc3lzdGVtLXByb2R1Y3QtbmFtZSIKKyNkZWZpbmUgSFZNX1hTX1NZU1RFTV9WRVJTSU9OICAg
ICAgICAgICJiaW9zLXN0cmluZ3Mvc3lzdGVtLXZlcnNpb24iCisjZGVmaW5lIEhWTV9YU19TWVNU
RU1fU0VSSUFMX05VTUJFUiAgICAiYmlvcy1zdHJpbmdzL3N5c3RlbS1zZXJpYWwtbnVtYmVyIgor
I2RlZmluZSBIVk1fWFNfRU5DTE9TVVJFX01BTlVGQUNUVVJFUiAgImJpb3Mtc3RyaW5ncy9lbmNs
b3N1cmUtbWFudWZhY3R1cmVyIgorI2RlZmluZSBIVk1fWFNfRU5DTE9TVVJFX1NFUklBTF9OVU1C
RVIgImJpb3Mtc3RyaW5ncy9lbmNsb3N1cmUtc2VyaWFsLW51bWJlciIKKyNkZWZpbmUgSFZNX1hT
X0JBVFRFUllfTUFOVUZBQ1RVUkVSICAgICJiaW9zLXN0cmluZ3MvYmF0dGVyeS1tYW51ZmFjdHVy
ZXIiCisjZGVmaW5lIEhWTV9YU19CQVRURVJZX0RFVklDRV9OQU1FICAgICAiYmlvcy1zdHJpbmdz
L2JhdHRlcnktZGV2aWNlLW5hbWUiCisKKy8qIDEgdG8gOTkgT0VNIHN0cmluZ3MgY2FuIGJlIHNl
dCBpbiB4ZW5zdG9yZSB1c2luZyB2YWx1ZXMgb2YgdGhlIGZvcm0KKyAqIGJlbG93LiBUaGVzZSBz
dHJpbmdzIHdpbGwgYmUgbG9hZGVkIGludG8gdGhlIFNNQklPUyB0eXBlIDExIHN0cnVjdHVyZS4K
KyAqLworI2RlZmluZSBIVk1fWFNfT0VNX1NUUklOR1MgICAgICAgICAgICAgImJpb3Mtc3RyaW5n
cy9vZW0tJTAyZCIKKworI2VuZGlmIC8qIF9YRU5fVE9PTFNfSFZNX0RFRlNfSF9fICovCisKKy8q
CisgKiBMb2NhbCB2YXJpYWJsZXM6CisgKiBtb2RlOiBDCisgKiBjLXNldC1zdHlsZTogIkJTRCIK
KyAqIGMtYmFzaWMtb2Zmc2V0OiA0CisgKiB0YWItd2lkdGg6IDQKKyAqIGluZGVudC10YWJzLW1v
ZGU6IG5pbAorICogRW5kOgorICovCg==

--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_--


From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllHA-00035A-1o; Thu, 20 Dec 2012 18:55:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllH8-00034x-Ad
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:22 +0000
Received: from [85.158.138.51:29353] by server-9.bemta-3.messagelabs.com id
	68/BD-11948-91F53D05; Thu, 20 Dec 2012 18:55:21 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1356029717!21893490!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28373 invoked from network); 20 Dec 2012 18:55:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="sf'?scan'208";a="1455993"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:19 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Thu, 20 Dec 2012
	13:55:19 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:11 -0500
Thread-Topic: [Xen-devel] [PATCH v4 01/04] HVM firmware passthrough HVM defs
	header
Thread-Index: Ac3e4pWRIPPycg3kTjiWJhqefHm7rQ==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 01/04] HVM firmware passthrough HVM defs
 header
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Add public HVM definitions header for firmware passthrough support (includi=
ng
comment describing the feature's use). In addition this header is used to
collect the various xenstore string values that are used in HVMLOADER.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>


--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_
Content-Type: application/octet-stream;
	name="hvm-firmware-passthrough-v4-01.patch"
Content-Description: hvm-firmware-passthrough-v4-01.patch
Content-Disposition: attachment;
	filename="hvm-firmware-passthrough-v4-01.patch"; size=4525;
	creation-date="Wed, 19 Dec 2012 19:29:35 GMT";
	modification-date="Thu, 20 Dec 2012 16:35:19 GMT"
Content-Transfer-Encoding: base64

QWRkIHB1YmxpYyBIVk0gZGVmaW5pdGlvbnMgaGVhZGVyIGZvciBmaXJtd2FyZSBwYXNzdGhyb3Vn
aCBzdXBwb3J0IChpbmNsdWRpbmcKY29tbWVudCBkZXNjcmliaW5nIHRoZSBmZWF0dXJlJ3MgdXNl
KS4gSW4gYWRkaXRpb24gdGhpcyBoZWFkZXIgaXMgdXNlZCB0bwpjb2xsZWN0IHRoZSB2YXJpb3Vz
IHhlbnN0b3JlIHN0cmluZyB2YWx1ZXMgdGhhdCBhcmUgdXNlZCBpbiBIVk1MT0FERVIuCgpTaWdu
ZWQtb2ZmLWJ5OiBSb3NzIFBoaWxpcHNvbiA8cm9zcy5waGlsaXBzb25AY2l0cml4LmNvbT4KCmRp
ZmYgLXIgMDkwY2MzZTIwZDNlIHRvb2xzL2luY2x1ZGUveGVuLXRvb2xzL2h2bV9kZWZzLmgKLS0t
IC9kZXYvbnVsbAlUaHUgSmFuIDAxIDAwOjAwOjAwIDE5NzAgKzAwMDAKKysrIGIvdG9vbHMvaW5j
bHVkZS94ZW4tdG9vbHMvaHZtX2RlZnMuaAlXZWQgRGVjIDE5IDE0OjIyOjQ4IDIwMTIgLTA1MDAK
QEAgLTAsMCArMSw4NSBAQAorLyoKKyAqIFBlcm1pc3Npb24gaXMgaGVyZWJ5IGdyYW50ZWQsIGZy
ZWUgb2YgY2hhcmdlLCB0byBhbnkgcGVyc29uIG9idGFpbmluZyBhIGNvcHkKKyAqIG9mIHRoaXMg
c29mdHdhcmUgYW5kIGFzc29jaWF0ZWQgZG9jdW1lbnRhdGlvbiBmaWxlcyAodGhlICJTb2Z0d2Fy
ZSIpLCB0bworICogZGVhbCBpbiB0aGUgU29mdHdhcmUgd2l0aG91dCByZXN0cmljdGlvbiwgaW5j
bHVkaW5nIHdpdGhvdXQgbGltaXRhdGlvbiB0aGUKKyAqIHJpZ2h0cyB0byB1c2UsIGNvcHksIG1v
ZGlmeSwgbWVyZ2UsIHB1Ymxpc2gsIGRpc3RyaWJ1dGUsIHN1YmxpY2Vuc2UsIGFuZC9vcgorICog
c2VsbCBjb3BpZXMgb2YgdGhlIFNvZnR3YXJlLCBhbmQgdG8gcGVybWl0IHBlcnNvbnMgdG8gd2hv
bSB0aGUgU29mdHdhcmUgaXMKKyAqIGZ1cm5pc2hlZCB0byBkbyBzbywgc3ViamVjdCB0byB0aGUg
Zm9sbG93aW5nIGNvbmRpdGlvbnM6CisgKgorICogVGhlIGFib3ZlIGNvcHlyaWdodCBub3RpY2Ug
YW5kIHRoaXMgcGVybWlzc2lvbiBub3RpY2Ugc2hhbGwgYmUgaW5jbHVkZWQgaW4KKyAqIGFsbCBj
b3BpZXMgb3Igc3Vic3RhbnRpYWwgcG9ydGlvbnMgb2YgdGhlIFNvZnR3YXJlLgorICoKKyAqIFRI
RSBTT0ZUV0FSRSBJUyBQUk9WSURFRCAiQVMgSVMiLCBXSVRIT1VUIFdBUlJBTlRZIE9GIEFOWSBL
SU5ELCBFWFBSRVNTIE9SCisgKiBJTVBMSUVELCBJTkNMVURJTkcgQlVUIE5PVCBMSU1JVEVEIFRP
IFRIRSBXQVJSQU5USUVTIE9GIE1FUkNIQU5UQUJJTElUWSwKKyAqIEZJVE5FU1MgRk9SIEEgUEFS
VElDVUxBUiBQVVJQT1NFIEFORCBOT05JTkZSSU5HRU1FTlQuIElOIE5PIEVWRU5UIFNIQUxMIFRI
RQorICogQVVUSE9SUyBPUiBDT1BZUklHSFQgSE9MREVSUyBCRSBMSUFCTEUgRk9SIEFOWSBDTEFJ
TSwgREFNQUdFUyBPUiBPVEhFUgorICogTElBQklMSVRZLCBXSEVUSEVSIElOIEFOIEFDVElPTiBP
RiBDT05UUkFDVCwgVE9SVCBPUiBPVEhFUldJU0UsIEFSSVNJTkcKKyAqIEZST00sIE9VVCBPRiBP
UiBJTiBDT05ORUNUSU9OIFdJVEggVEhFIFNPRlRXQVJFIE9SIFRIRSBVU0UgT1IgT1RIRVIKKyAq
IERFQUxJTkdTIElOIFRIRSBTT0ZUV0FSRS4KKyAqLworCisjaWZuZGVmIF9YRU5fVE9PTFNfSFZN
X0RFRlNfSF9fCisjZGVmaW5lIF9YRU5fVE9PTFNfSFZNX0RFRlNfSF9fCisKKyNkZWZpbmUgSFZN
X1hTX0hWTUxPQURFUiAgICAgICAgICAgICAgICJodm1sb2FkZXIiCisjZGVmaW5lIEhWTV9YU19C
SU9TICAgICAgICAgICAgICAgICAgICAiaHZtbG9hZGVyL2Jpb3MiCisjZGVmaW5lIEhWTV9YU19H
RU5FUkFUSU9OX0lEX0FERFJFU1MgICAiaHZtbG9hZGVyL2dlbmVyYXRpb24taWQtYWRkcmVzcyIK
KworLyogVGhlIGZvbGxvd2luZyB2YWx1ZXMgYWxsb3cgYWRkaXRpb25hbCBBQ1BJIHRhYmxlcyB0
byBiZSBhZGRlZCB0byB0aGUKKyAqIHZpcnR1YWwgQUNQSSBCSU9TIHRoYXQgaHZtbG9hZGVyIGNv
bnN0cnVjdHMuIFRoZSB2YWx1ZXMgc3BlY2lmeSB0aGUgZ3Vlc3QKKyAqIHBoeXNpY2FsIGFkZHJl
c3MgYW5kIGxlbmd0aCBvZiBhIGJsb2NrIG9mIEFDUEkgdGFibGVzIHRvIGFkZC4gVGhlIGZvcm1h
dCBvZgorICogdGhlIGJsb2NrIGlzIHNpbXBseSBjb25jYXRlbmF0ZWQgcmF3IHRhYmxlcyAod2hp
Y2ggc3BlY2lmeSB0aGVpciBvd24gbGVuZ3RoCisgKiBpbiB0aGUgQUNQSSBoZWFkZXIpLgorICov
CisjZGVmaW5lIEhWTV9YU19BQ1BJX1BUX0FERFJFU1MgICAgICAgICAiaHZtbG9hZGVyL2FjcGkv
YWRkcmVzcyIKKyNkZWZpbmUgSFZNX1hTX0FDUElfUFRfTEVOR1RIICAgICAgICAgICJodm1sb2Fk
ZXIvYWNwaS9sZW5ndGgiCisKKy8qIEFueSBudW1iZXIgb2YgU01CSU9TIHR5cGVzIGNhbiBiZSBw
YXNzZWQgdGhyb3VnaCB0byBhbiBIVk0gZ3Vlc3QgdXNpbmcKKyAqIHRoZSBmb2xsb3dpbmcgeGVu
c3RvcmUgdmFsdWVzLiBUaGUgdmFsdWVzIHNwZWNpZnkgdGhlIGd1ZXN0IHBoeXNpY2FsCisgKiBh
ZGRyZXNzIGFuZCBsZW5ndGggb2YgYSBibG9jayBvZiBTTUJJT1Mgc3RydWN0dXJlcyBmb3IgaHZt
bG9hZGVyIHRvIHVzZS4KKyAqIFRoZSBibG9jayBpcyBmb3JtYXR0ZWQgaW4gdGhlIGZvbGxvd2lu
ZyB3YXk6CisgKgorICogPGxlbmd0aD48c3RydWN0PjxsZW5ndGg+PHN0cnVjdD4uLi4KKyAqCisg
KiBFYWNoIGxlbmd0aCBzZXBhcmF0b3IgaXMgYSAzMmIgaW50ZWdlciBpbmRpY2F0aW5nIHRoZSBs
ZW5ndGggb2YgdGhlIG5leHQKKyAqIFNNQklPUyBzdHJ1Y3R1cmUuIEZvciBETVRGIGRlZmluZWQg
dHlwZXMgKDAgLSAxMjEpLCB0aGUgcGFzc2VkIGluIHN0cnVjdAorICogd2lsbCByZXBsYWNlIHRo
ZSBkZWZhdWx0IHN0cnVjdHVyZSBpbiBodm1sb2FkZXIuIEluIGFkZGl0aW9uLCBhbnkKKyAqIE9F
TS92ZW5kb3J0eXBlcyAoMTI4IC0gMjU1KSB3aWxsIGFsbCBiZSBhZGRlZC4KKyAqLworI2RlZmlu
ZSBIVk1fWFNfU01CSU9TX1BUX0FERFJFU1MgICAgICAgImh2bWxvYWRlci9zbWJpb3MvYWRkcmVz
cyIKKyNkZWZpbmUgSFZNX1hTX1NNQklPU19QVF9MRU5HVEggICAgICAgICJodm1sb2FkZXIvc21i
aW9zL2xlbmd0aCIKKworLyogU2V0IHRvIDEgdG8gZW5hYmxlIFNNQklPUyBkZWZhdWx0IHBvcnRh
YmxlIGJhdHRlcnkgKHR5cGUgMjIpIHZhbHVlcy4gKi8KKyNkZWZpbmUgSFZNX1hTX1NNQklPU19E
RUZBVUxUX0JBVFRFUlkgICJodm1sb2FkZXIvc21iaW9zL2RlZmF1bHRfYmF0dGVyeSIKKworLyog
VGhlIGZvbGxvd2luZyB4ZW5zdG9yZSB2YWx1ZXMgYXJlIHVzZWQgdG8gb3ZlcnJpZGUgc29tZSBv
ZiB0aGUgZGVmYXVsdAorICogc3RyaW5nIHZhbHVlcyBpbiB0aGUgU01CSU9TIHRhYmxlIGNvbnN0
cnVjdGVkIGluIGh2bWxvYWRlci4KKyAqLworI2RlZmluZSBIVk1fWFNfQklPU19TVFJJTkdTICAg
ICAgICAgICAgImJpb3Mtc3RyaW5ncyIKKyNkZWZpbmUgSFZNX1hTX0JJT1NfVkVORE9SICAgICAg
ICAgICAgICJiaW9zLXN0cmluZ3MvYmlvcy12ZW5kb3IiCisjZGVmaW5lIEhWTV9YU19CSU9TX1ZF
UlNJT04gICAgICAgICAgICAiYmlvcy1zdHJpbmdzL2Jpb3MtdmVyc2lvbiIKKyNkZWZpbmUgSFZN
X1hTX1NZU1RFTV9NQU5VRkFDVFVSRVIgICAgICJiaW9zLXN0cmluZ3Mvc3lzdGVtLW1hbnVmYWN0
dXJlciIKKyNkZWZpbmUgSFZNX1hTX1NZU1RFTV9QUk9EVUNUX05BTUUgICAgICJiaW9zLXN0cmlu
Z3Mvc3lzdGVtLXByb2R1Y3QtbmFtZSIKKyNkZWZpbmUgSFZNX1hTX1NZU1RFTV9WRVJTSU9OICAg
ICAgICAgICJiaW9zLXN0cmluZ3Mvc3lzdGVtLXZlcnNpb24iCisjZGVmaW5lIEhWTV9YU19TWVNU
RU1fU0VSSUFMX05VTUJFUiAgICAiYmlvcy1zdHJpbmdzL3N5c3RlbS1zZXJpYWwtbnVtYmVyIgor
I2RlZmluZSBIVk1fWFNfRU5DTE9TVVJFX01BTlVGQUNUVVJFUiAgImJpb3Mtc3RyaW5ncy9lbmNs
b3N1cmUtbWFudWZhY3R1cmVyIgorI2RlZmluZSBIVk1fWFNfRU5DTE9TVVJFX1NFUklBTF9OVU1C
RVIgImJpb3Mtc3RyaW5ncy9lbmNsb3N1cmUtc2VyaWFsLW51bWJlciIKKyNkZWZpbmUgSFZNX1hT
X0JBVFRFUllfTUFOVUZBQ1RVUkVSICAgICJiaW9zLXN0cmluZ3MvYmF0dGVyeS1tYW51ZmFjdHVy
ZXIiCisjZGVmaW5lIEhWTV9YU19CQVRURVJZX0RFVklDRV9OQU1FICAgICAiYmlvcy1zdHJpbmdz
L2JhdHRlcnktZGV2aWNlLW5hbWUiCisKKy8qIDEgdG8gOTkgT0VNIHN0cmluZ3MgY2FuIGJlIHNl
dCBpbiB4ZW5zdG9yZSB1c2luZyB2YWx1ZXMgb2YgdGhlIGZvcm0KKyAqIGJlbG93LiBUaGVzZSBz
dHJpbmdzIHdpbGwgYmUgbG9hZGVkIGludG8gdGhlIFNNQklPUyB0eXBlIDExIHN0cnVjdHVyZS4K
KyAqLworI2RlZmluZSBIVk1fWFNfT0VNX1NUUklOR1MgICAgICAgICAgICAgImJpb3Mtc3RyaW5n
cy9vZW0tJTAyZCIKKworI2VuZGlmIC8qIF9YRU5fVE9PTFNfSFZNX0RFRlNfSF9fICovCisKKy8q
CisgKiBMb2NhbCB2YXJpYWJsZXM6CisgKiBtb2RlOiBDCisgKiBjLXNldC1zdHlsZTogIkJTRCIK
KyAqIGMtYmFzaWMtb2Zmc2V0OiA0CisgKiB0YWItd2lkdGg6IDQKKyAqIGluZGVudC10YWJzLW1v
ZGU6IG5pbAorICogRW5kOgorICovCg==

--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_008_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6460FTLPMAILBOX02_--


From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllH7-00034t-Lq; Thu, 20 Dec 2012 18:55:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllH6-00034n-B4
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:20 +0000
Received: from [85.158.138.51:32629] by server-8.bemta-3.messagelabs.com id
	6E/23-01297-71F53D05; Thu, 20 Dec 2012 18:55:19 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1356029717!21893490!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28271 invoked from network); 20 Dec 2012 18:55:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="1455987"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:16 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Thu, 20 Dec 2012
	13:55:16 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:10 -0500
Thread-Topic: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
Thread-Index: Ac3e4g5U+s/6ywiYRgm7G3gwiv/4vQ==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B645F@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: [Xen-devel]  [PATCH v4 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series introduces support for loading external blocks of firmware
into a guest. These blocks can currently contain SMBIOS and/or ACPI firmware
information that is used by HVMLOADER to modify a guest's virtual firmware at
startup. These modules are only used by HVMLOADER and are effectively
discarded after HVMLOADER has completed.

The domain building code in libxenguest is passed these firmware blocks
in the xc_hvm_build_args structure and loads them into the new guest,
returning the load address. They are loaded into what will become the guest's
low RAM area, just behind the load location for HVMLOADER. It is the caller's
job to write the base address and length values into xenstore using the paths
defined in the new hvm_defs.h header so HVMLOADER can locate the blocks.

Currently two types of firmware information are recognized and processed
by HVMLOADER, though this could be extended.

1. SMBIOS: The SMBIOS table building code will attempt to retrieve (for a
predefined set of structure types) any passed-in structures. If a match is
found, the passed-in table is used, overriding the default values. In
addition, the SMBIOS code will also enumerate and load any vendor-defined
structures (in the type range 128 - 255) that are passed in. See the
hvm_defs.h header for information on the format of this block.
2. ACPI: Static and secondary descriptor tables can be added to the set of
ACPI tables built by HVMLOADER. The ACPI builder code will enumerate passed-in
tables and add them at the end of the secondary table list. See the hvm_defs.h
header for information on the format of this block.

There are 4 patches in the series:
01 - Add HVM definitions header for firmware passthrough support.
02 - Xen control tools support for loading the firmware blocks.
03 - Passthrough support for SMBIOS.
04 - Passthrough support for ACPI.

Note this is version 4 of this patch set. Some of the differences from v3:
 - Generic module support removed, overall functionality was simplified.
 - Use of xenstore to supply firmware passthrough information to HVMLOADER.
 - Fixed issues pointed out in the SMBIOS processing code.
 - Created defines for the SMBIOS handles in use and switched to using
   the xenstore values in the new hvm_defs.h file.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>

(Based on xen-4.3 staging/unstable cs 26317)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllHC-00035r-Kl; Thu, 20 Dec 2012 18:55:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllHB-00035T-DY
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:25 +0000
Received: from [85.158.138.51:32876] by server-5.bemta-3.messagelabs.com id
	47/C4-15136-C1F53D05; Thu, 20 Dec 2012 18:55:24 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1356029717!21893490!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28622 invoked from network); 20 Dec 2012 18:55:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="sf'?scan'208";a="1456000"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:22 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Thu, 20 Dec 2012
	13:55:22 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:14 -0500
Thread-Topic: [Xen-devel] [PATCH v4 03/04] HVM firmware passthrough SMBIOS
	processing
Thread-Index: Ac3e4okifyciPK+wQzWEHdJDfWNJ/w==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 03/04] HVM firmware passthrough SMBIOS
 processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Passthrough support for the SMBIOS structures including three new DMTF defined
types and support for OEM defined tables. Passed in SMBIOS types override the
default internal values. Default values can be enabled for the new type 22
portable battery using a xenstore flag. All other new DMTF defined and OEM
structures will only be added to the SMBIOS table if passthrough values are
present.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>


--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_
Content-Type: application/octet-stream;
	name="hvm-firmware-passthrough-v4-03.patch"
Content-Description: hvm-firmware-passthrough-v4-03.patch
Content-Disposition: attachment;
	filename="hvm-firmware-passthrough-v4-03.patch"; size=19273;
	creation-date="Wed, 19 Dec 2012 19:37:38 GMT";
	modification-date="Thu, 20 Dec 2012 16:35:19 GMT"
Content-Transfer-Encoding: base64

UGFzc3Rocm91Z2ggc3VwcG9ydCBmb3IgdGhlIFNNQklPUyBzdHJ1Y3R1cmVzIGluY2x1ZGluZyB0
aHJlZSBuZXcgRE1URiBkZWZpbmVkCnR5cGVzIGFuZCBzdXBwb3J0IGZvciBPRU0gZGVmaW5lZCB0
YWJsZXMuIFBhc3NlZCBpbiBTTUJJT1MgdHlwZXMgb3ZlcnJpZGUgdGhlCmRlZmF1bHQgaW50ZXJu
YWwgdmFsdWVzLiBEZWZhdWx0IHZhbHVlcyBjYW4gYmUgZW5hYmxlZCBmb3IgdGhlIG5ldyB0eXBl
IDIyCnBvcnRhYmxlIGJhdHRlcnkgdXNpbmcgYSB4ZW5zdG9yZSBmbGFnLiBBbGwgb3RoZXIgbmV3
IERNVEYgZGVmaW5lZCBhbmQgT0VNCnN0cnVjdHVyZXMgd2lsbCBvbmx5IGJlIGFkZGVkIHRvIHRo
ZSBTTUJJT1MgdGFibGUgaWYgcGFzc3Rocm91Z2ggdmFsdWVzIGFyZQpwcmVzZW50LgoKU2lnbmVk
LW9mZi1ieTogUm9zcyBQaGlsaXBzb24gPHJvc3MucGhpbGlwc29uQGNpdHJpeC5jb20+CgpkaWZm
IC1yIDY1NDdmMjJmZDllMiB0b29scy9maXJtd2FyZS9odm1sb2FkZXIvc21iaW9zLmMKLS0tIGEv
dG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyL3NtYmlvcy5jCVRodSBEZWMgMjAgMTE6MTY6NTMgMjAx
MiAtMDUwMAorKysgYi90b29scy9maXJtd2FyZS9odm1sb2FkZXIvc21iaW9zLmMJVGh1IERlYyAy
MCAxMToxNzo1NCAyMDEyIC0wNTAwCkBAIC0yNiwxNiArMjYsMzggQEAKICNpbmNsdWRlICJzbWJp
b3NfdHlwZXMuaCIKICNpbmNsdWRlICJ1dGlsLmgiCiAjaW5jbHVkZSAiaHlwZXJjYWxsLmgiCisj
aW5jbHVkZSAieGVuLXRvb2xzL2h2bV9kZWZzLmgiCiAKKy8qIFNCTUlPUyBoYW5kbGUgYmFzZSB2
YWx1ZXMgKi8KKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMCAgIDB4MDAwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUxICAgMHgwMTAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTIg
ICAweDAyMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMyAgIDB4MDMwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEU0ICAgMHgwNDAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTEx
ICAweDBCMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMTYgIDB4MTAwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUxNyAgMHgxMTAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTE5
ICAweDEzMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMjAgIDB4MTQwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUyMiAgMHgxNjAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTMy
ICAweDIwMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMzkgIDB4MjcwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUxMjcgMHg3ZjAwCisKK3N0YXRpYyB2b2lkCitzbWJpb3NfcHRfaW5p
dCh2b2lkKTsKK3N0YXRpYyB2b2lkKgorZ2V0X3NtYmlvc19wdF9zdHJ1Y3QodWludDhfdCB0eXBl
LCB1aW50MzJfdCAqbGVuZ3RoX291dCk7CitzdGF0aWMgdm9pZAorZ2V0X2NwdV9tYW51ZmFjdHVy
ZXIoY2hhciAqYnVmLCBpbnQgbGVuKTsKIHN0YXRpYyBpbnQKIHdyaXRlX3NtYmlvc190YWJsZXMo
dm9pZCAqZXAsIHZvaWQgKnN0YXJ0LAogICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCB2Y3B1
cywgdWludDY0X3QgbWVtc2l6ZSwKICAgICAgICAgICAgICAgICAgICAgdWludDhfdCB1dWlkWzE2
XSwgY2hhciAqeGVuX3ZlcnNpb24sCiAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHhlbl9t
YWpvcl92ZXJzaW9uLCB1aW50MzJfdCB4ZW5fbWlub3JfdmVyc2lvbiwKICAgICAgICAgICAgICAg
ICAgICAgdW5zaWduZWQgKm5yX3N0cnVjdHMsIHVuc2lnbmVkICptYXhfc3RydWN0X3NpemUpOwot
Ci1zdGF0aWMgdm9pZAotZ2V0X2NwdV9tYW51ZmFjdHVyZXIoY2hhciAqYnVmLCBpbnQgbGVuKTsK
K3N0YXRpYyB1aW50NjRfdAorZ2V0X21lbXNpemUodm9pZCk7CiBzdGF0aWMgdm9pZAogc21iaW9z
X2VudHJ5X3BvaW50X2luaXQodm9pZCAqc3RhcnQsCiAgICAgICAgICAgICAgICAgICAgICAgICB1
aW50MTZfdCBtYXhfc3RydWN0dXJlX3NpemUsCkBAIC00OSw2ICs3MSw4IEBAIHN0YXRpYyB2b2lk
ICoKIHNtYmlvc190eXBlXzFfaW5pdCh2b2lkICpzdGFydCwgY29uc3QgY2hhciAqeGVuX3ZlcnNp
b24sIAogICAgICAgICAgICAgICAgICAgIHVpbnQ4X3QgdXVpZFsxNl0pOwogc3RhdGljIHZvaWQg
Kgorc21iaW9zX3R5cGVfMl9pbml0KHZvaWQgKnN0YXJ0KTsKK3N0YXRpYyB2b2lkICoKIHNtYmlv
c190eXBlXzNfaW5pdCh2b2lkICpzdGFydCk7CiBzdGF0aWMgdm9pZCAqCiBzbWJpb3NfdHlwZV80
X2luaXQodm9pZCAqc3RhcnQsIHVuc2lnbmVkIGludCBjcHVfbnVtYmVyLApAQCAtNjQsMTAgKzg4
LDczIEBAIHNtYmlvc190eXBlXzE5X2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KIHN0YXRpYyB2
b2lkICoKIHNtYmlvc190eXBlXzIwX2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl90IG1lbW9yeV9z
aXplX21iLCBpbnQgaW5zdGFuY2UpOwogc3RhdGljIHZvaWQgKgorc21iaW9zX3R5cGVfMjJfaW5p
dCh2b2lkICpzdGFydCk7CitzdGF0aWMgdm9pZCAqCiBzbWJpb3NfdHlwZV8zMl9pbml0KHZvaWQg
KnN0YXJ0KTsKIHN0YXRpYyB2b2lkICoKK3NtYmlvc190eXBlXzM5X2luaXQodm9pZCAqc3RhcnQp
Oworc3RhdGljIHZvaWQgKgorc21iaW9zX3R5cGVfdmVuZG9yX29lbV9pbml0KHZvaWQgKnN0YXJ0
KTsKK3N0YXRpYyB2b2lkICoKIHNtYmlvc190eXBlXzEyN19pbml0KHZvaWQgKnN0YXJ0KTsKIAor
c3RhdGljIHVpbnQzMl90ICpzbWJpb3NfcHRfYWRkciA9IE5VTEw7CitzdGF0aWMgdWludDMyX3Qg
c21iaW9zX3B0X2xlbmd0aCA9IDA7CisKK3N0YXRpYyB2b2lkCitzbWJpb3NfcHRfaW5pdCh2b2lk
KQoreworICAgIGNvbnN0IGNoYXIgKnM7CisKKyAgICBzID0geGVuc3RvcmVfcmVhZChIVk1fWFNf
U01CSU9TX1BUX0FERFJFU1MsIE5VTEwpOworICAgIGlmICggcyA9PSBOVUxMICkKKyAgICAgICAg
Z290byByZXNldDsKKworICAgIHNtYmlvc19wdF9hZGRyID0gKHVpbnQzMl90KikodWludDMyX3Qp
c3RydG9sbChzLCBOVUxMLCAwKTsKKyAgICBpZiAoIHNtYmlvc19wdF9hZGRyID09IE5VTEwgKQor
ICAgICAgICBnb3RvIHJlc2V0OworCisgICAgcyA9IHhlbnN0b3JlX3JlYWQoSFZNX1hTX1NNQklP
U19QVF9MRU5HVEgsIE5VTEwpOworICAgIGlmICggcyA9PSBOVUxMICkKKyAgICAgICAgZ290byBy
ZXNldDsKKworICAgIHNtYmlvc19wdF9sZW5ndGggPSAodWludDMyX3Qpc3RydG9sbChzLCBOVUxM
LCAwKTsKKyAgICBpZiAoIHNtYmlvc19wdF9sZW5ndGggPT0gMCApCisgICAgICAgIGdvdG8gcmVz
ZXQ7CisKKyAgICByZXR1cm47CisKK3Jlc2V0OgorICAgIHNtYmlvc19wdF9hZGRyID0gTlVMTDsK
KyAgICBzbWJpb3NfcHRfbGVuZ3RoID0gMDsKK30KKworc3RhdGljIHZvaWQqCitnZXRfc21iaW9z
X3B0X3N0cnVjdCh1aW50OF90IHR5cGUsIHVpbnQzMl90ICpsZW5ndGhfb3V0KQoreworICAgIHVp
bnQzMl90ICpzZXAgPSBzbWJpb3NfcHRfYWRkcjsKKyAgICB1aW50MzJfdCB0b3RhbCA9IDA7Cisg
ICAgdWludDhfdCAqcHRyOworCisgICAgaWYgKCBzZXAgPT0gTlVMTCApCisgICAgICAgIHJldHVy
biBOVUxMOworCisgICAgd2hpbGUgKCB0b3RhbCA8IHNtYmlvc19wdF9sZW5ndGggKQorICAgIHsK
KyAgICAgICAgcHRyID0gKHVpbnQ4X3QqKShzZXAgKyAxKTsKKyAgICAgICAgaWYgKCBwdHJbMF0g
PT0gdHlwZSApCisgICAgICAgIHsKKyAgICAgICAgICAgICpsZW5ndGhfb3V0ID0gKnNlcDsKKyAg
ICAgICAgICAgIHJldHVybiBwdHI7CisgICAgICAgIH0KKworICAgICAgICB0b3RhbCArPSAoKnNl
cCArIHNpemVvZih1aW50MzJfdCkpOworICAgICAgICBzZXAgPSAodWludDMyX3QqKShwdHIgKyAq
c2VwKTsKKyAgICB9CisKKyAgICByZXR1cm4gTlVMTDsKK30KKwogc3RhdGljIHZvaWQKIGdldF9j
cHVfbWFudWZhY3R1cmVyKGNoYXIgKmJ1ZiwgaW50IGxlbikKIHsKQEAgLTk3LDYgKzE4NCw4IEBA
IHdyaXRlX3NtYmlvc190YWJsZXModm9pZCAqZXAsIHZvaWQgKnN0YXIKICAgICBjaGFyIGNwdV9t
YW51ZmFjdHVyZXJbMTVdOwogICAgIGludCBpLCBucl9tZW1fZGV2czsKIAorICAgIHNtYmlvc19w
dF9pbml0KCk7CisKICAgICBnZXRfY3B1X21hbnVmYWN0dXJlcihjcHVfbWFudWZhY3R1cmVyLCAx
NSk7CiAKICAgICBwID0gKGNoYXIgKilzdGFydDsKQEAgLTExMiw2ICsyMDEsNyBAQCB3cml0ZV9z
bWJpb3NfdGFibGVzKHZvaWQgKmVwLCB2b2lkICpzdGFyCiAgICAgZG9fc3RydWN0KHNtYmlvc190
eXBlXzBfaW5pdChwLCB4ZW5fdmVyc2lvbiwgeGVuX21ham9yX3ZlcnNpb24sCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICB4ZW5fbWlub3JfdmVyc2lvbikpOwogICAgIGRvX3N0cnVj
dChzbWJpb3NfdHlwZV8xX2luaXQocCwgeGVuX3ZlcnNpb24sIHV1aWQpKTsKKyAgICBkb19zdHJ1
Y3Qoc21iaW9zX3R5cGVfMl9pbml0KHApKTsKICAgICBkb19zdHJ1Y3Qoc21iaW9zX3R5cGVfM19p
bml0KHApKTsKICAgICBmb3IgKCBjcHVfbnVtID0gMTsgY3B1X251bSA8PSB2Y3B1czsgY3B1X251
bSsrICkKICAgICAgICAgZG9fc3RydWN0KHNtYmlvc190eXBlXzRfaW5pdChwLCBjcHVfbnVtLCBj
cHVfbWFudWZhY3R1cmVyKSk7CkBAIC0xMzAsNyArMjIwLDEwIEBAIHdyaXRlX3NtYmlvc190YWJs
ZXModm9pZCAqZXAsIHZvaWQgKnN0YXIKICAgICAgICAgZG9fc3RydWN0KHNtYmlvc190eXBlXzIw
X2luaXQocCwgZGV2X21lbXNpemUsIGkpKTsKICAgICB9CiAKKyAgICBkb19zdHJ1Y3Qoc21iaW9z
X3R5cGVfMjJfaW5pdChwKSk7CiAgICAgZG9fc3RydWN0KHNtYmlvc190eXBlXzMyX2luaXQocCkp
OworICAgIGRvX3N0cnVjdChzbWJpb3NfdHlwZV8zOV9pbml0KHApKTsKKyAgICBkb19zdHJ1Y3Qo
c21iaW9zX3R5cGVfdmVuZG9yX29lbV9pbml0KHApKTsKICAgICBkb19zdHJ1Y3Qoc21iaW9zX3R5
cGVfMTI3X2luaXQocCkpOwogCiAjdW5kZWYgZG9fc3RydWN0CkBAIC0yODksMTIgKzM4MiwyMiBA
QCBzbWJpb3NfdHlwZV8wX2luaXQodm9pZCAqc3RhcnQsIGNvbnN0IGNoCiAgICAgc3RydWN0IHNt
Ymlvc190eXBlXzAgKnAgPSAoc3RydWN0IHNtYmlvc190eXBlXzAgKilzdGFydDsKICAgICBzdGF0
aWMgY29uc3QgY2hhciAqc21iaW9zX3JlbGVhc2VfZGF0ZSA9IF9fU01CSU9TX0RBVEVfXzsKICAg
ICBjb25zdCBjaGFyICpzOworICAgIHZvaWQgKnB0czsKKyAgICB1aW50MzJfdCBsZW5ndGg7CisK
KyAgICBwdHMgPSBnZXRfc21iaW9zX3B0X3N0cnVjdCgwLCAmbGVuZ3RoKTsKKyAgICBpZiAoIChw
dHMgIT0gTlVMTCkmJihsZW5ndGggPiAwKSApCisgICAgeworICAgICAgICBtZW1jcHkoc3RhcnQs
IHB0cywgbGVuZ3RoKTsKKyAgICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVf
VFlQRTA7CisgICAgICAgIHJldHVybiAoc3RhcnQgKyBsZW5ndGgpOworICAgIH0KIAogICAgIG1l
bXNldChwLCAwLCBzaXplb2YoKnApKTsKIAogICAgIHAtPmhlYWRlci50eXBlID0gMDsKICAgICBw
LT5oZWFkZXIubGVuZ3RoID0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8wKTsKLSAgICBwLT5o
ZWFkZXIuaGFuZGxlID0gMDsKKyAgICBwLT5oZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9U
WVBFMDsKIAogICAgIHAtPnZlbmRvcl9zdHIgPSAxOwogICAgIHAtPnZlcnNpb25fc3RyID0gMjsK
QEAgLTMxNSwxMSArNDE4LDExIEBAIHNtYmlvc190eXBlXzBfaW5pdCh2b2lkICpzdGFydCwgY29u
c3QgY2gKICAgICBwLT5lbWJlZGRlZF9jb250cm9sbGVyX21pbm9yID0gMHhmZjsKIAogICAgIHN0
YXJ0ICs9IHNpemVvZihzdHJ1Y3Qgc21iaW9zX3R5cGVfMCk7Ci0gICAgcyA9IHhlbnN0b3JlX3Jl
YWQoImJpb3Mtc3RyaW5ncy9iaW9zLXZlbmRvciIsICJYZW4iKTsKKyAgICBzID0geGVuc3RvcmVf
cmVhZChIVk1fWFNfQklPU19WRU5ET1IsICJYZW4iKTsKICAgICBzdHJjcHkoKGNoYXIgKilzdGFy
dCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsgMTsKIAotICAgIHMgPSB4ZW5zdG9yZV9y
ZWFkKCJiaW9zLXN0cmluZ3MvYmlvcy12ZXJzaW9uIiwgeGVuX3ZlcnNpb24pOworICAgIHMgPSB4
ZW5zdG9yZV9yZWFkKEhWTV9YU19CSU9TX1ZFUlNJT04sIHhlbl92ZXJzaW9uKTsKICAgICBzdHJj
cHkoKGNoYXIgKilzdGFydCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsgMTsKIApAQCAt
MzM4LDEyICs0NDEsMjIgQEAgc21iaW9zX3R5cGVfMV9pbml0KHZvaWQgKnN0YXJ0LCBjb25zdCBj
aAogICAgIGNoYXIgdXVpZF9zdHJbMzddOwogICAgIHN0cnVjdCBzbWJpb3NfdHlwZV8xICpwID0g
KHN0cnVjdCBzbWJpb3NfdHlwZV8xICopc3RhcnQ7CiAgICAgY29uc3QgY2hhciAqczsKKyAgICB2
b2lkICpwdHM7CisgICAgdWludDMyX3QgbGVuZ3RoOworCisgICAgcHRzID0gZ2V0X3NtYmlvc19w
dF9zdHJ1Y3QoMSwgJmxlbmd0aCk7CisgICAgaWYgKCAocHRzICE9IE5VTEwpJiYobGVuZ3RoID4g
MCkgKQorICAgIHsKKyAgICAgICAgbWVtY3B5KHN0YXJ0LCBwdHMsIGxlbmd0aCk7CisgICAgICAg
IHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUxOworICAgICAgICByZXR1cm4g
KHN0YXJ0ICsgbGVuZ3RoKTsKKyAgICB9CiAKICAgICBtZW1zZXQocCwgMCwgc2l6ZW9mKCpwKSk7
CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDE7CiAgICAgcC0+aGVhZGVyLmxlbmd0aCA9IHNpemVv
ZihzdHJ1Y3Qgc21iaW9zX3R5cGVfMSk7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9IDB4MTAwOwor
ICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUxOwogCiAgICAgcC0+bWFu
dWZhY3R1cmVyX3N0ciA9IDE7CiAgICAgcC0+cHJvZHVjdF9uYW1lX3N0ciA9IDI7CkBAIC0zNTgs
MjAgKzQ3MSwyMCBAQCBzbWJpb3NfdHlwZV8xX2luaXQodm9pZCAqc3RhcnQsIGNvbnN0IGNoCiAK
ICAgICBzdGFydCArPSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzEpOwogICAgIAotICAgIHMg
PSB4ZW5zdG9yZV9yZWFkKCJiaW9zLXN0cmluZ3Mvc3lzdGVtLW1hbnVmYWN0dXJlciIsICJYZW4i
KTsKKyAgICBzID0geGVuc3RvcmVfcmVhZChIVk1fWFNfU1lTVEVNX01BTlVGQUNUVVJFUiwgIlhl
biIpOwogICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0LCBzKTsKICAgICBzdGFydCArPSBzdHJsZW4o
cykgKyAxOwogCi0gICAgcyA9IHhlbnN0b3JlX3JlYWQoImJpb3Mtc3RyaW5ncy9zeXN0ZW0tcHJv
ZHVjdC1uYW1lIiwgIkhWTSBkb21VIik7CisgICAgcyA9IHhlbnN0b3JlX3JlYWQoSFZNX1hTX1NZ
U1RFTV9QUk9EVUNUX05BTUUsICJIVk0gZG9tVSIpOwogICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0
LCBzKTsKICAgICBzdGFydCArPSBzdHJsZW4ocykgKyAxOwogCi0gICAgcyA9IHhlbnN0b3JlX3Jl
YWQoImJpb3Mtc3RyaW5ncy9zeXN0ZW0tdmVyc2lvbiIsIHhlbl92ZXJzaW9uKTsKKyAgICBzID0g
eGVuc3RvcmVfcmVhZChIVk1fWFNfU1lTVEVNX1ZFUlNJT04sIHhlbl92ZXJzaW9uKTsKICAgICBz
dHJjcHkoKGNoYXIgKilzdGFydCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsgMTsKIAog
ICAgIHV1aWRfdG9fc3RyaW5nKHV1aWRfc3RyLCB1dWlkKTsgCi0gICAgcyA9IHhlbnN0b3JlX3Jl
YWQoImJpb3Mtc3RyaW5ncy9zeXN0ZW0tc2VyaWFsLW51bWJlciIsIHV1aWRfc3RyKTsKKyAgICBz
ID0geGVuc3RvcmVfcmVhZChIVk1fWFNfU1lTVEVNX1NFUklBTF9OVU1CRVIsIHV1aWRfc3RyKTsK
ICAgICBzdHJjcHkoKGNoYXIgKilzdGFydCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsg
MTsKIApAQCAtMzgwLDE3ICs0OTMsNTggQEAgc21iaW9zX3R5cGVfMV9pbml0KHZvaWQgKnN0YXJ0
LCBjb25zdCBjaAogICAgIHJldHVybiBzdGFydCsxOyAKIH0KIAorLyogVHlwZSAyIC0tIFN5c3Rl
bSBCb2FyZCAqLworc3RhdGljIHZvaWQgKgorc21iaW9zX3R5cGVfMl9pbml0KHZvaWQgKnN0YXJ0
KQoreworICAgIHN0cnVjdCBzbWJpb3NfdHlwZV8yICpwID0gKHN0cnVjdCBzbWJpb3NfdHlwZV8y
ICopc3RhcnQ7CisgICAgdWludDhfdCAqcHRyOworICAgIHZvaWQgKnB0czsKKyAgICB1aW50MzJf
dCBsZW5ndGg7CisKKyAgICBwdHMgPSBnZXRfc21iaW9zX3B0X3N0cnVjdCgyLCAmbGVuZ3RoKTsK
KyAgICBpZiAoIChwdHMgIT0gTlVMTCkmJihsZW5ndGggPiAwKSApCisgICAgeworICAgICAgICBt
ZW1jcHkoc3RhcnQsIHB0cywgbGVuZ3RoKTsKKyAgICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNN
QklPU19IQU5ETEVfVFlQRTI7CisKKyAgICAgICAgLyogU2V0IGN1cnJlbnQgY2hhc3NpcyBoYW5k
bGUgaWYgcHJlc2VudCAqLworICAgICAgICBpZiAoIHAtPmhlYWRlci5sZW5ndGggPiAxMyApCisg
ICAgICAgIHsKKyAgICAgICAgICAgIHB0ciA9ICgodWludDhfdCopc3RhcnQpICsgMTE7ICAgICAg
ICAgICAgCisgICAgICAgICAgICBpZiAoICooKHVpbnQxNl90KilwdHIpICE9IDAgKQorICAgICAg
ICAgICAgICAgICooKHVpbnQxNl90KilwdHIpID0gU01CSU9TX0hBTkRMRV9UWVBFMzsKKyAgICAg
ICAgfQorCisgICAgICAgIHJldHVybiAoc3RhcnQgKyBsZW5ndGgpOworICAgIH0KKworICAgIC8q
IE9ubHkgcHJlc2VudCB3aGVuIHBhc3NlZCBpbiAqLworICAgIHJldHVybiBzdGFydDsKK30KKwog
LyogVHlwZSAzIC0tIFN5c3RlbSBFbmNsb3N1cmUgKi8KIHN0YXRpYyB2b2lkICoKIHNtYmlvc190
eXBlXzNfaW5pdCh2b2lkICpzdGFydCkKIHsKICAgICBzdHJ1Y3Qgc21iaW9zX3R5cGVfMyAqcCA9
IChzdHJ1Y3Qgc21iaW9zX3R5cGVfMyAqKXN0YXJ0OworICAgIGNvbnN0IGNoYXIgKnM7CisgICAg
dm9pZCAqcHRzOworICAgIHVpbnQzMl90IGxlbmd0aDsKKworICAgIHB0cyA9IGdldF9zbWJpb3Nf
cHRfc3RydWN0KDMsICZsZW5ndGgpOworICAgIGlmICggKHB0cyAhPSBOVUxMKSYmKGxlbmd0aCA+
IDApICkKKyAgICB7CisgICAgICAgIG1lbWNweShzdGFydCwgcHRzLCBsZW5ndGgpOworICAgICAg
ICBwLT5oZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9UWVBFMzsKKyAgICAgICAgcmV0dXJu
IChzdGFydCArIGxlbmd0aCk7CisgICAgfQogICAgIAogICAgIG1lbXNldChwLCAwLCBzaXplb2Yo
KnApKTsKIAogICAgIHAtPmhlYWRlci50eXBlID0gMzsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0g
c2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8zKTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0gMHgz
MDA7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTM7CiAKICAgICBw
LT5tYW51ZmFjdHVyZXJfc3RyID0gMTsKICAgICBwLT50eXBlID0gMHgwMTsgLyogb3RoZXIgKi8K
QEAgLTQwNCw4ICs1NTgsMTkgQEAgc21iaW9zX3R5cGVfM19pbml0KHZvaWQgKnN0YXJ0KQogCiAg
ICAgc3RhcnQgKz0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8zKTsKICAgICAKLSAgICBzdHJj
cHkoKGNoYXIgKilzdGFydCwgIlhlbiIpOwotICAgIHN0YXJ0ICs9IHN0cmxlbigiWGVuIikgKyAx
OworICAgIHMgPSB4ZW5zdG9yZV9yZWFkKEhWTV9YU19FTkNMT1NVUkVfTUFOVUZBQ1RVUkVSLCAi
WGVuIik7CisgICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHMpOworICAgIHN0YXJ0ICs9IHN0cmxl
bihzKSArIDE7CisKKyAgICAvKiBObyBpbnRlcm5hbCBkZWZhdWx0cyBmb3IgdGhpcyBpZiB0aGUg
dmFsdWUgaXMgbm90IHNldCAqLworICAgIHMgPSB4ZW5zdG9yZV9yZWFkKEhWTV9YU19FTkNMT1NV
UkVfU0VSSUFMX05VTUJFUiwgTlVMTCk7CisgICAgaWYgKCAocyAhPSBOVUxMKSYmKCpzICE9ICdc
MCcpICkKKyAgICB7CisgICAgICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0LCBzKTsKKyAgICAgICAg
c3RhcnQgKz0gc3RybGVuKHMpICsgMTsKKyAgICAgICAgcC0+c2VyaWFsX251bWJlcl9zdHIgPSAy
OworICAgIH0KKwogICAgICooKHVpbnQ4X3QgKilzdGFydCkgPSAwOwogICAgIHJldHVybiBzdGFy
dCsxOwogfQpAQCAtNDIzLDcgKzU4OCw3IEBAIHNtYmlvc190eXBlXzRfaW5pdCgKIAogICAgIHAt
PmhlYWRlci50eXBlID0gNDsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0gc2l6ZW9mKHN0cnVjdCBz
bWJpb3NfdHlwZV80KTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0gMHg0MDAgKyBjcHVfbnVtYmVy
OworICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEU0ICsgY3B1X251bWJl
cjsKIAogICAgIHAtPnNvY2tldF9kZXNpZ25hdGlvbl9zdHIgPSAxOwogICAgIHAtPnByb2Nlc3Nv
cl90eXBlID0gMHgwMzsgLyogQ1BVICovCkBAIC00NjUsMTMgKzYzMCwyMyBAQCBzdGF0aWMgdm9p
ZCAqCiBzbWJpb3NfdHlwZV8xMV9pbml0KHZvaWQgKnN0YXJ0KSAKIHsKICAgICBzdHJ1Y3Qgc21i
aW9zX3R5cGVfMTEgKnAgPSAoc3RydWN0IHNtYmlvc190eXBlXzExICopc3RhcnQ7Ci0gICAgY2hh
ciBwYXRoWzIwXSA9ICJiaW9zLXN0cmluZ3Mvb2VtLVhYIjsKKyAgICBjaGFyIHBhdGhbMjBdOwog
ICAgIGNvbnN0IGNoYXIgKnM7CiAgICAgaW50IGk7CisgICAgdm9pZCAqcHRzOworICAgIHVpbnQz
Ml90IGxlbmd0aDsKKworICAgIHB0cyA9IGdldF9zbWJpb3NfcHRfc3RydWN0KDExLCAmbGVuZ3Ro
KTsKKyAgICBpZiAoIChwdHMgIT0gTlVMTCkmJihsZW5ndGggPiAwKSApCisgICAgeworICAgICAg
ICBtZW1jcHkoc3RhcnQsIHB0cywgbGVuZ3RoKTsKKyAgICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9
IFNNQklPU19IQU5ETEVfVFlQRTExOworICAgICAgICByZXR1cm4gKHN0YXJ0ICsgbGVuZ3RoKTsK
KyAgICB9CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDExOwogICAgIHAtPmhlYWRlci5sZW5ndGgg
PSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzExKTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0g
MHhCMDA7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTExOwogCiAg
ICAgcC0+Y291bnQgPSAwOwogCkBAIC00ODAsOCArNjU1LDcgQEAgc21iaW9zX3R5cGVfMTFfaW5p
dCh2b2lkICpzdGFydCkKICAgICAvKiBQdWxsIG91dCBhcyBtYW55IG9lbS0qIHN0cmluZ3Mgd2Ug
ZmluZCBpbiB4ZW5zdG9yZSAqLwogICAgIGZvciAoIGkgPSAxOyBpIDwgMTAwOyBpKysgKQogICAg
IHsKLSAgICAgICAgcGF0aFsoc2l6ZW9mIHBhdGgpIC0gM10gPSAnMCcgKyAoKGkgPCAxMCkgPyBp
IDogaSAvIDEwKTsKLSAgICAgICAgcGF0aFsoc2l6ZW9mIHBhdGgpIC0gMl0gPSAoaSA8IDEwKSA/
ICdcMCcgOiAnMCcgKyAoaSAlIDEwKTsKKyAgICAgICAgc25wcmludGYocGF0aCwgc2l6ZW9mKHBh
dGgpLCBIVk1fWFNfT0VNX1NUUklOR1MsIGkpOwogICAgICAgICBpZiAoICgocyA9IHhlbnN0b3Jl
X3JlYWQocGF0aCwgTlVMTCkpID09IE5VTEwpIHx8ICgqcyA9PSAnXDAnKSApCiAgICAgICAgICAg
ICBicmVhazsKICAgICAgICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHMpOwpAQCAtNTEwLDcgKzY4
NCw3IEBAIHNtYmlvc190eXBlXzE2X2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KICAgICBtZW1z
ZXQocCwgMCwgc2l6ZW9mKCpwKSk7CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDE2OwotICAgIHAt
PmhlYWRlci5oYW5kbGUgPSAweDEwMDA7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19I
QU5ETEVfVFlQRTE2OwogICAgIHAtPmhlYWRlci5sZW5ndGggPSBzaXplb2Yoc3RydWN0IHNtYmlv
c190eXBlXzE2KTsKICAgICAKICAgICBwLT5sb2NhdGlvbiA9IDB4MDE7IC8qIG90aGVyICovCkBA
IC01MzYsNyArNzEwLDcgQEAgc21iaW9zX3R5cGVfMTdfaW5pdCh2b2lkICpzdGFydCwgdWludDMy
XwogCiAgICAgcC0+aGVhZGVyLnR5cGUgPSAxNzsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0gc2l6
ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8xNyk7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9IDB4MTEw
MCArIGluc3RhbmNlOworICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUx
NyArIGluc3RhbmNlOwogCiAgICAgcC0+cGh5c2ljYWxfbWVtb3J5X2FycmF5X2hhbmRsZSA9IDB4
MTAwMDsKICAgICBwLT50b3RhbF93aWR0aCA9IDY0OwpAQCAtNTcxLDcgKzc0NSw3IEBAIHNtYmlv
c190eXBlXzE5X2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KIAogICAgIHAtPmhlYWRlci50eXBl
ID0gMTk7CiAgICAgcC0+aGVhZGVyLmxlbmd0aCA9IHNpemVvZihzdHJ1Y3Qgc21iaW9zX3R5cGVf
MTkpOwotICAgIHAtPmhlYWRlci5oYW5kbGUgPSAweDEzMDAgKyBpbnN0YW5jZTsKKyAgICBwLT5o
ZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9UWVBFMTkgKyBpbnN0YW5jZTsKIAogICAgIHAt
PnN0YXJ0aW5nX2FkZHJlc3MgPSBpbnN0YW5jZSA8PCAyNDsKICAgICBwLT5lbmRpbmdfYWRkcmVz
cyA9IHAtPnN0YXJ0aW5nX2FkZHJlc3MgKyAobWVtb3J5X3NpemVfbWIgPDwgMTApIC0gMTsKQEAg
LTU5Myw3ICs3NjcsNyBAQCBzbWJpb3NfdHlwZV8yMF9pbml0KHZvaWQgKnN0YXJ0LCB1aW50MzJf
CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDIwOwogICAgIHAtPmhlYWRlci5sZW5ndGggPSBzaXpl
b2Yoc3RydWN0IHNtYmlvc190eXBlXzIwKTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0gMHgxNDAw
ICsgaW5zdGFuY2U7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTIw
ICsgaW5zdGFuY2U7CiAKICAgICBwLT5zdGFydGluZ19hZGRyZXNzID0gaW5zdGFuY2UgPDwgMjQ7
CiAgICAgcC0+ZW5kaW5nX2FkZHJlc3MgPSBwLT5zdGFydGluZ19hZGRyZXNzICsgKG1lbW9yeV9z
aXplX21iIDw8IDEwKSAtIDE7CkBAIC02MDksNiArNzgzLDcxIEBAIHNtYmlvc190eXBlXzIwX2lu
aXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KICAgICByZXR1cm4gc3RhcnQrMjsKIH0KIAorLyogVHlw
ZSAyMiAtLSBQb3J0YWJsZSBCYXR0ZXJ5ICovCitzdGF0aWMgdm9pZCAqCitzbWJpb3NfdHlwZV8y
Ml9pbml0KHZvaWQgKnN0YXJ0KQoreworICAgIHN0cnVjdCBzbWJpb3NfdHlwZV8yMiAqcCA9IChz
dHJ1Y3Qgc21iaW9zX3R5cGVfMjIgKilzdGFydDsKKyAgICBzdGF0aWMgY29uc3QgY2hhciAqc21i
aW9zX3JlbGVhc2VfZGF0ZSA9IF9fU01CSU9TX0RBVEVfXzsKKyAgICBjb25zdCBjaGFyICpzOwor
ICAgIHZvaWQgKnB0czsKKyAgICB1aW50MzJfdCBsZW5ndGg7CisKKyAgICBwdHMgPSBnZXRfc21i
aW9zX3B0X3N0cnVjdCgyMiwgJmxlbmd0aCk7CisgICAgaWYgKCAocHRzICE9IE5VTEwpJiYobGVu
Z3RoID4gMCkgKQorICAgIHsKKyAgICAgICAgbWVtY3B5KHN0YXJ0LCBwdHMsIGxlbmd0aCk7Cisg
ICAgICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUyMjsKKyAgICAgICAg
cmV0dXJuIChzdGFydCArIGxlbmd0aCk7CisgICAgfQorCisgICAgcyA9IHhlbnN0b3JlX3JlYWQo
SFZNX1hTX1NNQklPU19ERUZBVUxUX0JBVFRFUlksICIwIik7CisgICAgaWYgKCBzdHJuY21wKHMs
ICIxIiwgMSkgIT0gMCApCisgICAgICAgIHJldHVybiBzdGFydDsKKworICAgIG1lbXNldChwLCAw
LCBzaXplb2YoKnApKTsKKworICAgIHAtPmhlYWRlci50eXBlID0gMjI7CisgICAgcC0+aGVhZGVy
Lmxlbmd0aCA9IHNpemVvZihzdHJ1Y3Qgc21iaW9zX3R5cGVfMjIpOworICAgIHAtPmhlYWRlci5o
YW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUyMjsKKworICAgIHAtPmxvY2F0aW9uX3N0ciA9IDE7
CisgICAgcC0+bWFudWZhY3R1cmVyX3N0ciA9IDI7CisgICAgcC0+bWFudWZhY3R1cmVyX2RhdGVf
c3RyID0gMzsKKyAgICBwLT5zZXJpYWxfbnVtYmVyX3N0ciA9IDA7CisgICAgcC0+ZGV2aWNlX25h
bWVfc3RyID0gNDsKKyAgICBwLT5kZXZpY2VfY2hlbWlzdHJ5ID0gMHgyOyAvKiB1bmtub3duICov
CisgICAgcC0+ZGV2aWNlX2NhcGFjaXR5ID0gMDsgLyogdW5rbm93biAqLworICAgIHAtPmRldmlj
ZV92b2x0YWdlID0gMDsgLyogdW5rbm93biAqLworICAgIHAtPnNiZHNfdmVyc2lvbl9udW1iZXIg
PSAwOworICAgIHAtPm1heF9lcnJvciA9IDB4ZmY7IC8qIHVua25vd24gKi8KKyAgICBwLT5zYmRz
X3NlcmlhbF9udW1iZXIgPSAwOworICAgIHAtPnNiZHNfbWFudWZhY3R1cmVyX2RhdGUgPSAwOwor
ICAgIHAtPnNiZHNfZGV2aWNlX2NoZW1pc3RyeSA9IDA7CisgICAgcC0+ZGVzaWduX2NhcGFjaXR5
X211bHRpcGxpZXIgPSAwOworICAgIHAtPm9lbV9zcGVjaWZpYyA9IDA7CisKKyAgICBzdGFydCAr
PSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzIyKTsKKworICAgIHN0cmNweSgoY2hhciAqKXN0
YXJ0LCAiUHJpbWFyeSIpOworICAgIHN0YXJ0ICs9IHN0cmxlbigiUHJpbWFyeSIpICsgMTsKKwor
ICAgIHMgPSB4ZW5zdG9yZV9yZWFkKEhWTV9YU19CQVRURVJZX01BTlVGQUNUVVJFUiwgIlhlbiIp
OworICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0LCBzKTsKKyAgICBzdGFydCArPSBzdHJsZW4ocykg
KyAxOworCisgICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHNtYmlvc19yZWxlYXNlX2RhdGUpOwor
ICAgIHN0YXJ0ICs9IHN0cmxlbihzbWJpb3NfcmVsZWFzZV9kYXRlKSArIDE7CisKKyAgICBzID0g
eGVuc3RvcmVfcmVhZChIVk1fWFNfQkFUVEVSWV9ERVZJQ0VfTkFNRSwgIlhFTi1WQkFUIik7Cisg
ICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHMpOworICAgIHN0YXJ0ICs9IHN0cmxlbihzKSArIDE7
CisKKyAgICAqKCh1aW50OF90ICopc3RhcnQpID0gMDsKKworICAgIHJldHVybiBzdGFydCsxOyAK
K30KKwogLyogVHlwZSAzMiAtLSBTeXN0ZW0gQm9vdCBJbmZvcm1hdGlvbiAqLwogc3RhdGljIHZv
aWQgKgogc21iaW9zX3R5cGVfMzJfaW5pdCh2b2lkICpzdGFydCkKQEAgLTYxOSw3ICs4NTgsNyBA
QCBzbWJpb3NfdHlwZV8zMl9pbml0KHZvaWQgKnN0YXJ0KQogCiAgICAgcC0+aGVhZGVyLnR5cGUg
PSAzMjsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8z
Mik7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9IDB4MjAwMDsKKyAgICBwLT5oZWFkZXIuaGFuZGxl
ID0gU01CSU9TX0hBTkRMRV9UWVBFMzI7CiAgICAgbWVtc2V0KHAtPnJlc2VydmVkLCAwLCA2KTsK
ICAgICBwLT5ib290X3N0YXR1cyA9IDA7IC8qIG5vIGVycm9ycyBkZXRlY3RlZCAqLwogICAgIApA
QCAtNjI4LDYgKzg2Nyw1OCBAQCBzbWJpb3NfdHlwZV8zMl9pbml0KHZvaWQgKnN0YXJ0KQogICAg
IHJldHVybiBzdGFydCsyOwogfQogCisvKiBUeXBlIDM5IC0tIFBvd2VyIFN1cHBseSAqLworc3Rh
dGljIHZvaWQgKgorc21iaW9zX3R5cGVfMzlfaW5pdCh2b2lkICpzdGFydCkKK3sKKyAgICBzdHJ1
Y3Qgc21iaW9zX3R5cGVfMzkgKnAgPSAoc3RydWN0IHNtYmlvc190eXBlXzM5ICopc3RhcnQ7Cisg
ICAgdm9pZCAqcHRzOworICAgIHVpbnQzMl90IGxlbmd0aDsKKworICAgIHB0cyA9IGdldF9zbWJp
b3NfcHRfc3RydWN0KDM5LCAmbGVuZ3RoKTsKKyAgICBpZiAoIChwdHMgIT0gTlVMTCkmJihsZW5n
dGggPiAwKSApCisgICAgeworICAgICAgICBtZW1jcHkoc3RhcnQsIHB0cywgbGVuZ3RoKTsKKyAg
ICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTM5OworICAgICAgICBy
ZXR1cm4gKHN0YXJ0ICsgbGVuZ3RoKTsKKyAgICB9CisKKyAgICAvKiBPbmx5IHByZXNlbnQgd2hl
biBwYXNzZWQgaW4gKi8KKyAgICByZXR1cm4gc3RhcnQ7Cit9CisKK3N0YXRpYyB2b2lkICoKK3Nt
Ymlvc190eXBlX3ZlbmRvcl9vZW1faW5pdCh2b2lkICpzdGFydCkKK3sKKyAgICB1aW50MzJfdCAq
c2VwID0gc21iaW9zX3B0X2FkZHI7CisgICAgdWludDMyX3QgdG90YWwgPSAwOworICAgIHVpbnQ4
X3QgKnB0cjsKKworICAgIGlmICggc2VwID09IE5VTEwgKQorICAgICAgICByZXR1cm4gc3RhcnQ7
CisKKyAgICB3aGlsZSAoIHRvdGFsIDwgc21iaW9zX3B0X2xlbmd0aCApCisgICAgeworICAgICAg
ICBwdHIgPSAodWludDhfdCopKHNlcCArIDEpOworICAgICAgICBpZiAoIHB0clswXSA+PSAxMjgg
KQorICAgICAgICB7CisgICAgICAgICAgICAvKiBWZW5kb3IvT0VNIHRhYmxlLCBjb3B5IGl0IGlu
LiBOb3RlIHRoZSBoYW5kbGUgdmFsdWVzIGNhbm5vdAorICAgICAgICAgICAgICogYmUgY2hhbmdl
ZCBzaW5jZSBpdCBpcyB1bmtub3duIHdoYXQgaXMgaW4gZWFjaCBvZiB0aGVzZSB0YWJsZXMKKyAg
ICAgICAgICAgICAqIGJ1dCB0aGV5IGNvdWxkIGNvbnRhaW4gaGFuZGxlIHJlZmVyZW5jZXMgdG8g
b3RoZXIgdGFibGVzLiBUaGlzCisgICAgICAgICAgICAgKiBtZWFucyBhIHNsaWdodCByaXNrIG9m
IGNvbGxpc2lvbiB3aXRoIHRoZSB0YWJsZXMgYWJvdmUgYnV0IHRoYXQKKyAgICAgICAgICAgICAq
IHdvdWxkIGhhdmUgdG8gYmUgZGVhbHQgd2l0aCBvbiBhIGNhc2UgYnkgY2FzZSBiYXNpcy4KKyAg
ICAgICAgICAgICAqLworICAgICAgICAgICAgbWVtY3B5KHN0YXJ0LCBwdHIsICpzZXApOworICAg
ICAgICAgICAgc3RhcnQgKz0gKnNlcDsKKyAgICAgICAgfQorCisgICAgICAgIHRvdGFsICs9ICgq
c2VwICsgc2l6ZW9mKHVpbnQzMl90KSk7CisgICAgICAgIHNlcCA9ICh1aW50MzJfdCopKHB0ciAr
ICpzZXApOworICAgIH0KKworICAgIHJldHVybiBzdGFydDsKK30KKwogLyogVHlwZSAxMjcgLS0g
RW5kIG9mIFRhYmxlICovCiBzdGF0aWMgdm9pZCAqCiBzbWJpb3NfdHlwZV8xMjdfaW5pdCh2b2lk
ICpzdGFydCkKQEAgLTYzOCw3ICs5MjksNyBAQCBzbWJpb3NfdHlwZV8xMjdfaW5pdCh2b2lkICpz
dGFydCkKIAogICAgIHAtPmhlYWRlci50eXBlID0gMTI3OwogICAgIHAtPmhlYWRlci5sZW5ndGgg
PSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzEyNyk7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9
IDB4N2YwMDsKKyAgICBwLT5oZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9UWVBFMTI3Owog
CiAgICAgc3RhcnQgKz0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8xMjcpOwogICAgICooKHVp
bnQxNl90ICopc3RhcnQpID0gMDsKZGlmZiAtciA2NTQ3ZjIyZmQ5ZTIgdG9vbHMvZmlybXdhcmUv
aHZtbG9hZGVyL3NtYmlvc190eXBlcy5oCi0tLSBhL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci9z
bWJpb3NfdHlwZXMuaAlUaHUgRGVjIDIwIDExOjE2OjUzIDIwMTIgLTA1MDAKKysrIGIvdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyL3NtYmlvc190eXBlcy5oCVRodSBEZWMgMjAgMTE6MTc6NTQgMjAx
MiAtMDUwMApAQCAtODQsNiArODQsMTUgQEAgc3RydWN0IHNtYmlvc190eXBlXzEgewogICAgIHVp
bnQ4X3QgZmFtaWx5X3N0cjsKIH0gX19hdHRyaWJ1dGVfXyAoKHBhY2tlZCkpOwogCisvKiBTTUJJ
T1MgdHlwZSAyIC0gQmFzZSBCb2FyZCBJbmZvcm1hdGlvbiAqLworc3RydWN0IHNtYmlvc190eXBl
XzIgeworICAgIHN0cnVjdCBzbWJpb3Nfc3RydWN0dXJlX2hlYWRlciBoZWFkZXI7CisgICAgdWlu
dDhfdCBtYW51ZmFjdHVyZXJfc3RyOworICAgIHVpbnQ4X3QgcHJvZHVjdF9uYW1lX3N0cjsKKyAg
ICB1aW50OF90IHZlcnNpb25fc3RyOworICAgIHVpbnQ4X3Qgc2VyaWFsX251bWJlcl9zdHI7Cit9
IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKwogLyogU01CSU9TIHR5cGUgMyAtIFN5c3RlbSBF
bmNsb3N1cmUgKi8KIHN0cnVjdCBzbWJpb3NfdHlwZV8zIHsKICAgICBzdHJ1Y3Qgc21iaW9zX3N0
cnVjdHVyZV9oZWFkZXIgaGVhZGVyOwpAQCAtMTczLDYgKzE4MiwyNiBAQCBzdHJ1Y3Qgc21iaW9z
X3R5cGVfMjAgewogICAgIHVpbnQ4X3QgaW50ZXJsZWF2ZWRfZGF0YV9kZXB0aDsKIH0gX19hdHRy
aWJ1dGVfXyAoKHBhY2tlZCkpOwogCisvKiBTTUJJT1MgdHlwZSAyMiAtIFBvcnRhYmxlIGJhdHRl
cnkgKi8KK3N0cnVjdCBzbWJpb3NfdHlwZV8yMiB7CisgICAgc3RydWN0IHNtYmlvc19zdHJ1Y3R1
cmVfaGVhZGVyIGhlYWRlcjsKKyAgICB1aW50OF90IGxvY2F0aW9uX3N0cjsKKyAgICB1aW50OF90
IG1hbnVmYWN0dXJlcl9zdHI7CisgICAgdWludDhfdCBtYW51ZmFjdHVyZXJfZGF0ZV9zdHI7Cisg
ICAgdWludDhfdCBzZXJpYWxfbnVtYmVyX3N0cjsKKyAgICB1aW50OF90IGRldmljZV9uYW1lX3N0
cjsKKyAgICB1aW50OF90IGRldmljZV9jaGVtaXN0cnk7CisgICAgdWludDE2X3QgZGV2aWNlX2Nh
cGFjaXR5OworICAgIHVpbnQxNl90IGRldmljZV92b2x0YWdlOworICAgIHVpbnQ4X3Qgc2Jkc192
ZXJzaW9uX251bWJlcjsKKyAgICB1aW50OF90IG1heF9lcnJvcjsKKyAgICB1aW50MTZfdCBzYmRz
X3NlcmlhbF9udW1iZXI7CisgICAgdWludDE2X3Qgc2Jkc19tYW51ZmFjdHVyZXJfZGF0ZTsKKyAg
ICB1aW50OF90IHNiZHNfZGV2aWNlX2NoZW1pc3RyeTsKKyAgICB1aW50OF90IGRlc2lnbl9jYXBh
Y2l0eV9tdWx0aXBsaWVyOworICAgIHVpbnQzMl90IG9lbV9zcGVjaWZpYzsKK30gX19hdHRyaWJ1
dGVfXyAoKHBhY2tlZCkpOworCiAvKiBTTUJJT1MgdHlwZSAzMiAtIFN5c3RlbSBCb290IEluZm9y
bWF0aW9uICovCiBzdHJ1Y3Qgc21iaW9zX3R5cGVfMzIgewogICAgIHN0cnVjdCBzbWJpb3Nfc3Ry
dWN0dXJlX2hlYWRlciBoZWFkZXI7CkBAIC0xODAsNiArMjA5LDI0IEBAIHN0cnVjdCBzbWJpb3Nf
dHlwZV8zMiB7CiAgICAgdWludDhfdCBib290X3N0YXR1czsKIH0gX19hdHRyaWJ1dGVfXyAoKHBh
Y2tlZCkpOwogCisvKiBTTUJJT1MgdHlwZSAzOSAtIFBvd2VyIFN1cHBseSAqLworc3RydWN0IHNt
Ymlvc190eXBlXzM5IHsKKyAgICBzdHJ1Y3Qgc21iaW9zX3N0cnVjdHVyZV9oZWFkZXIgaGVhZGVy
OworICAgIHVpbnQ4X3QgcG93ZXJfdW5pdF9ncm91cDsKKyAgICB1aW50OF90IGxvY2F0aW9uX3N0
cjsKKyAgICB1aW50OF90IGRldmljZV9uYW1lX3N0cjsKKyAgICB1aW50OF90IG1hbnVmYWN0dXJl
cl9zdHI7CisgICAgdWludDhfdCBzZXJpYWxfbnVtYmVyX3N0cjsKKyAgICB1aW50OF90IGFzc2V0
X3RhZ19udW1iZXJfc3RyOworICAgIHVpbnQ4X3QgbW9kZWxfcGFydF9udW1iZXJfc3RyOworICAg
IHVpbnQ4X3QgcmV2aXNpb25fbGV2ZWxfc3RyOworICAgIHVpbnQxNl90IG1heF9jYXBhY2l0eTsK
KyAgICB1aW50MTZfdCBjaGFyYWN0ZXJpc3RpY3M7CisgICAgdWludDE2X3QgaW5wdXRfdm9sdGFn
ZV9wcm9iZV9oYW5kbGU7CisgICAgdWludDE2X3QgY29vbGluZ19kZXZpY2VfaGFuZGxlOworICAg
IHVpbnQxNl90IGlucHV0X2N1cnJlbnRfcHJvYmVfaGFuZGxlOworfSBfX2F0dHJpYnV0ZV9fICgo
cGFja2VkKSk7CisKIC8qIFNNQklPUyB0eXBlIDEyNyAtLSBFbmQtb2YtdGFibGUgKi8KIHN0cnVj
dCBzbWJpb3NfdHlwZV8xMjcgewogICAgIHN0cnVjdCBzbWJpb3Nfc3RydWN0dXJlX2hlYWRlciBo
ZWFkZXI7Cg==

--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_--


From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllHC-00035r-Kl; Thu, 20 Dec 2012 18:55:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllHB-00035T-DY
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:25 +0000
Received: from [85.158.138.51:32876] by server-5.bemta-3.messagelabs.com id
	47/C4-15136-C1F53D05; Thu, 20 Dec 2012 18:55:24 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1356029717!21893490!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTExODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28622 invoked from network); 20 Dec 2012 18:55:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="sf'?scan'208";a="1456000"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:22 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Thu, 20 Dec 2012
	13:55:22 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:14 -0500
Thread-Topic: [Xen-devel] [PATCH v4 03/04] HVM firmware passthrough SMBIOS
	processing
Thread-Index: Ac3e4okifyciPK+wQzWEHdJDfWNJ/w==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 03/04] HVM firmware passthrough SMBIOS
 processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Passthrough support for the SMBIOS structures including three new DMTF defined
types and support for OEM defined tables. Passed in SMBIOS types override the
default internal values. Default values can be enabled for the new type 22
portable battery using a xenstore flag. All other new DMTF defined and OEM
structures will only be added to the SMBIOS table if passthrough values are
present.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>
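
For reference, the passthrough blob that the patch's get_smbios_pt_struct()
walks can be sketched as below. This is a minimal standalone illustration, not
the patch code itself: it assumes (as the patch does) that the passed-in region
is a sequence of entries, each a 32-bit little-endian length followed by that
many bytes of SMBIOS structure whose first byte is the structure type. The
helper name find_pt_struct is hypothetical.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Scan a passthrough blob for the first SMBIOS structure of a given type.
 * Layout assumed: [uint32_t len][len bytes of structure] repeated, where
 * byte 0 of each structure is its SMBIOS type. Returns a pointer to the
 * structure and its length, or NULL if no structure of that type exists. */
static const uint8_t *find_pt_struct(const uint8_t *blob, uint32_t blob_len,
                                     uint8_t type, uint32_t *length_out)
{
    uint32_t total = 0;

    while ( total + sizeof(uint32_t) <= blob_len )
    {
        uint32_t len;

        memcpy(&len, blob + total, sizeof(len));
        const uint8_t *tbl = blob + total + sizeof(len);

        if ( tbl[0] == type )
        {
            *length_out = len;
            return tbl;
        }

        /* Advance past this entry: its length prefix plus its payload. */
        total += len + sizeof(uint32_t);
    }

    return NULL;
}
```

A type-N init function that finds a match memcpy()s the structure into the
table being built and fixes up only the handle; on NULL it falls back to the
internal defaults (or emits nothing, for the new types).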


--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_
Content-Type: application/octet-stream;
	name="hvm-firmware-passthrough-v4-03.patch"
Content-Description: hvm-firmware-passthrough-v4-03.patch
Content-Disposition: attachment;
	filename="hvm-firmware-passthrough-v4-03.patch"; size=19273;
	creation-date="Wed, 19 Dec 2012 19:37:38 GMT";
	modification-date="Thu, 20 Dec 2012 16:35:19 GMT"
Content-Transfer-Encoding: base64

UGFzc3Rocm91Z2ggc3VwcG9ydCBmb3IgdGhlIFNNQklPUyBzdHJ1Y3R1cmVzIGluY2x1ZGluZyB0
aHJlZSBuZXcgRE1URiBkZWZpbmVkCnR5cGVzIGFuZCBzdXBwb3J0IGZvciBPRU0gZGVmaW5lZCB0
YWJsZXMuIFBhc3NlZCBpbiBTTUJJT1MgdHlwZXMgb3ZlcnJpZGUgdGhlCmRlZmF1bHQgaW50ZXJu
YWwgdmFsdWVzLiBEZWZhdWx0IHZhbHVlcyBjYW4gYmUgZW5hYmxlZCBmb3IgdGhlIG5ldyB0eXBl
IDIyCnBvcnRhYmxlIGJhdHRlcnkgdXNpbmcgYSB4ZW5zdG9yZSBmbGFnLiBBbGwgb3RoZXIgbmV3
IERNVEYgZGVmaW5lZCBhbmQgT0VNCnN0cnVjdHVyZXMgd2lsbCBvbmx5IGJlIGFkZGVkIHRvIHRo
ZSBTTUJJT1MgdGFibGUgaWYgcGFzc3Rocm91Z2ggdmFsdWVzIGFyZQpwcmVzZW50LgoKU2lnbmVk
LW9mZi1ieTogUm9zcyBQaGlsaXBzb24gPHJvc3MucGhpbGlwc29uQGNpdHJpeC5jb20+CgpkaWZm
IC1yIDY1NDdmMjJmZDllMiB0b29scy9maXJtd2FyZS9odm1sb2FkZXIvc21iaW9zLmMKLS0tIGEv
dG9vbHMvZmlybXdhcmUvaHZtbG9hZGVyL3NtYmlvcy5jCVRodSBEZWMgMjAgMTE6MTY6NTMgMjAx
MiAtMDUwMAorKysgYi90b29scy9maXJtd2FyZS9odm1sb2FkZXIvc21iaW9zLmMJVGh1IERlYyAy
MCAxMToxNzo1NCAyMDEyIC0wNTAwCkBAIC0yNiwxNiArMjYsMzggQEAKICNpbmNsdWRlICJzbWJp
b3NfdHlwZXMuaCIKICNpbmNsdWRlICJ1dGlsLmgiCiAjaW5jbHVkZSAiaHlwZXJjYWxsLmgiCisj
aW5jbHVkZSAieGVuLXRvb2xzL2h2bV9kZWZzLmgiCiAKKy8qIFNCTUlPUyBoYW5kbGUgYmFzZSB2
YWx1ZXMgKi8KKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMCAgIDB4MDAwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUxICAgMHgwMTAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTIg
ICAweDAyMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMyAgIDB4MDMwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEU0ICAgMHgwNDAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTEx
ICAweDBCMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMTYgIDB4MTAwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUxNyAgMHgxMTAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTE5
ICAweDEzMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMjAgIDB4MTQwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUyMiAgMHgxNjAwCisjZGVmaW5lIFNNQklPU19IQU5ETEVfVFlQRTMy
ICAweDIwMDAKKyNkZWZpbmUgU01CSU9TX0hBTkRMRV9UWVBFMzkgIDB4MjcwMAorI2RlZmluZSBT
TUJJT1NfSEFORExFX1RZUEUxMjcgMHg3ZjAwCisKK3N0YXRpYyB2b2lkCitzbWJpb3NfcHRfaW5p
dCh2b2lkKTsKK3N0YXRpYyB2b2lkKgorZ2V0X3NtYmlvc19wdF9zdHJ1Y3QodWludDhfdCB0eXBl
LCB1aW50MzJfdCAqbGVuZ3RoX291dCk7CitzdGF0aWMgdm9pZAorZ2V0X2NwdV9tYW51ZmFjdHVy
ZXIoY2hhciAqYnVmLCBpbnQgbGVuKTsKIHN0YXRpYyBpbnQKIHdyaXRlX3NtYmlvc190YWJsZXMo
dm9pZCAqZXAsIHZvaWQgKnN0YXJ0LAogICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCB2Y3B1
cywgdWludDY0X3QgbWVtc2l6ZSwKICAgICAgICAgICAgICAgICAgICAgdWludDhfdCB1dWlkWzE2
XSwgY2hhciAqeGVuX3ZlcnNpb24sCiAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHhlbl9t
YWpvcl92ZXJzaW9uLCB1aW50MzJfdCB4ZW5fbWlub3JfdmVyc2lvbiwKICAgICAgICAgICAgICAg
ICAgICAgdW5zaWduZWQgKm5yX3N0cnVjdHMsIHVuc2lnbmVkICptYXhfc3RydWN0X3NpemUpOwot
Ci1zdGF0aWMgdm9pZAotZ2V0X2NwdV9tYW51ZmFjdHVyZXIoY2hhciAqYnVmLCBpbnQgbGVuKTsK
K3N0YXRpYyB1aW50NjRfdAorZ2V0X21lbXNpemUodm9pZCk7CiBzdGF0aWMgdm9pZAogc21iaW9z
X2VudHJ5X3BvaW50X2luaXQodm9pZCAqc3RhcnQsCiAgICAgICAgICAgICAgICAgICAgICAgICB1
aW50MTZfdCBtYXhfc3RydWN0dXJlX3NpemUsCkBAIC00OSw2ICs3MSw4IEBAIHN0YXRpYyB2b2lk
ICoKIHNtYmlvc190eXBlXzFfaW5pdCh2b2lkICpzdGFydCwgY29uc3QgY2hhciAqeGVuX3ZlcnNp
b24sIAogICAgICAgICAgICAgICAgICAgIHVpbnQ4X3QgdXVpZFsxNl0pOwogc3RhdGljIHZvaWQg
Kgorc21iaW9zX3R5cGVfMl9pbml0KHZvaWQgKnN0YXJ0KTsKK3N0YXRpYyB2b2lkICoKIHNtYmlv
c190eXBlXzNfaW5pdCh2b2lkICpzdGFydCk7CiBzdGF0aWMgdm9pZCAqCiBzbWJpb3NfdHlwZV80
X2luaXQodm9pZCAqc3RhcnQsIHVuc2lnbmVkIGludCBjcHVfbnVtYmVyLApAQCAtNjQsMTAgKzg4
LDczIEBAIHNtYmlvc190eXBlXzE5X2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KIHN0YXRpYyB2
b2lkICoKIHNtYmlvc190eXBlXzIwX2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl90IG1lbW9yeV9z
aXplX21iLCBpbnQgaW5zdGFuY2UpOwogc3RhdGljIHZvaWQgKgorc21iaW9zX3R5cGVfMjJfaW5p
dCh2b2lkICpzdGFydCk7CitzdGF0aWMgdm9pZCAqCiBzbWJpb3NfdHlwZV8zMl9pbml0KHZvaWQg
KnN0YXJ0KTsKIHN0YXRpYyB2b2lkICoKK3NtYmlvc190eXBlXzM5X2luaXQodm9pZCAqc3RhcnQp
Oworc3RhdGljIHZvaWQgKgorc21iaW9zX3R5cGVfdmVuZG9yX29lbV9pbml0KHZvaWQgKnN0YXJ0
KTsKK3N0YXRpYyB2b2lkICoKIHNtYmlvc190eXBlXzEyN19pbml0KHZvaWQgKnN0YXJ0KTsKIAor
c3RhdGljIHVpbnQzMl90ICpzbWJpb3NfcHRfYWRkciA9IE5VTEw7CitzdGF0aWMgdWludDMyX3Qg
c21iaW9zX3B0X2xlbmd0aCA9IDA7CisKK3N0YXRpYyB2b2lkCitzbWJpb3NfcHRfaW5pdCh2b2lk
KQoreworICAgIGNvbnN0IGNoYXIgKnM7CisKKyAgICBzID0geGVuc3RvcmVfcmVhZChIVk1fWFNf
U01CSU9TX1BUX0FERFJFU1MsIE5VTEwpOworICAgIGlmICggcyA9PSBOVUxMICkKKyAgICAgICAg
Z290byByZXNldDsKKworICAgIHNtYmlvc19wdF9hZGRyID0gKHVpbnQzMl90KikodWludDMyX3Qp
c3RydG9sbChzLCBOVUxMLCAwKTsKKyAgICBpZiAoIHNtYmlvc19wdF9hZGRyID09IE5VTEwgKQor
ICAgICAgICBnb3RvIHJlc2V0OworCisgICAgcyA9IHhlbnN0b3JlX3JlYWQoSFZNX1hTX1NNQklP
U19QVF9MRU5HVEgsIE5VTEwpOworICAgIGlmICggcyA9PSBOVUxMICkKKyAgICAgICAgZ290byBy
ZXNldDsKKworICAgIHNtYmlvc19wdF9sZW5ndGggPSAodWludDMyX3Qpc3RydG9sbChzLCBOVUxM
LCAwKTsKKyAgICBpZiAoIHNtYmlvc19wdF9sZW5ndGggPT0gMCApCisgICAgICAgIGdvdG8gcmVz
ZXQ7CisKKyAgICByZXR1cm47CisKK3Jlc2V0OgorICAgIHNtYmlvc19wdF9hZGRyID0gTlVMTDsK
KyAgICBzbWJpb3NfcHRfbGVuZ3RoID0gMDsKK30KKworc3RhdGljIHZvaWQqCitnZXRfc21iaW9z
X3B0X3N0cnVjdCh1aW50OF90IHR5cGUsIHVpbnQzMl90ICpsZW5ndGhfb3V0KQoreworICAgIHVp
bnQzMl90ICpzZXAgPSBzbWJpb3NfcHRfYWRkcjsKKyAgICB1aW50MzJfdCB0b3RhbCA9IDA7Cisg
ICAgdWludDhfdCAqcHRyOworCisgICAgaWYgKCBzZXAgPT0gTlVMTCApCisgICAgICAgIHJldHVy
biBOVUxMOworCisgICAgd2hpbGUgKCB0b3RhbCA8IHNtYmlvc19wdF9sZW5ndGggKQorICAgIHsK
KyAgICAgICAgcHRyID0gKHVpbnQ4X3QqKShzZXAgKyAxKTsKKyAgICAgICAgaWYgKCBwdHJbMF0g
PT0gdHlwZSApCisgICAgICAgIHsKKyAgICAgICAgICAgICpsZW5ndGhfb3V0ID0gKnNlcDsKKyAg
ICAgICAgICAgIHJldHVybiBwdHI7CisgICAgICAgIH0KKworICAgICAgICB0b3RhbCArPSAoKnNl
cCArIHNpemVvZih1aW50MzJfdCkpOworICAgICAgICBzZXAgPSAodWludDMyX3QqKShwdHIgKyAq
c2VwKTsKKyAgICB9CisKKyAgICByZXR1cm4gTlVMTDsKK30KKwogc3RhdGljIHZvaWQKIGdldF9j
cHVfbWFudWZhY3R1cmVyKGNoYXIgKmJ1ZiwgaW50IGxlbikKIHsKQEAgLTk3LDYgKzE4NCw4IEBA
IHdyaXRlX3NtYmlvc190YWJsZXModm9pZCAqZXAsIHZvaWQgKnN0YXIKICAgICBjaGFyIGNwdV9t
YW51ZmFjdHVyZXJbMTVdOwogICAgIGludCBpLCBucl9tZW1fZGV2czsKIAorICAgIHNtYmlvc19w
dF9pbml0KCk7CisKICAgICBnZXRfY3B1X21hbnVmYWN0dXJlcihjcHVfbWFudWZhY3R1cmVyLCAx
NSk7CiAKICAgICBwID0gKGNoYXIgKilzdGFydDsKQEAgLTExMiw2ICsyMDEsNyBAQCB3cml0ZV9z
bWJpb3NfdGFibGVzKHZvaWQgKmVwLCB2b2lkICpzdGFyCiAgICAgZG9fc3RydWN0KHNtYmlvc190
eXBlXzBfaW5pdChwLCB4ZW5fdmVyc2lvbiwgeGVuX21ham9yX3ZlcnNpb24sCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICB4ZW5fbWlub3JfdmVyc2lvbikpOwogICAgIGRvX3N0cnVj
dChzbWJpb3NfdHlwZV8xX2luaXQocCwgeGVuX3ZlcnNpb24sIHV1aWQpKTsKKyAgICBkb19zdHJ1
Y3Qoc21iaW9zX3R5cGVfMl9pbml0KHApKTsKICAgICBkb19zdHJ1Y3Qoc21iaW9zX3R5cGVfM19p
bml0KHApKTsKICAgICBmb3IgKCBjcHVfbnVtID0gMTsgY3B1X251bSA8PSB2Y3B1czsgY3B1X251
bSsrICkKICAgICAgICAgZG9fc3RydWN0KHNtYmlvc190eXBlXzRfaW5pdChwLCBjcHVfbnVtLCBj
cHVfbWFudWZhY3R1cmVyKSk7CkBAIC0xMzAsNyArMjIwLDEwIEBAIHdyaXRlX3NtYmlvc190YWJs
ZXModm9pZCAqZXAsIHZvaWQgKnN0YXIKICAgICAgICAgZG9fc3RydWN0KHNtYmlvc190eXBlXzIw
X2luaXQocCwgZGV2X21lbXNpemUsIGkpKTsKICAgICB9CiAKKyAgICBkb19zdHJ1Y3Qoc21iaW9z
X3R5cGVfMjJfaW5pdChwKSk7CiAgICAgZG9fc3RydWN0KHNtYmlvc190eXBlXzMyX2luaXQocCkp
OworICAgIGRvX3N0cnVjdChzbWJpb3NfdHlwZV8zOV9pbml0KHApKTsKKyAgICBkb19zdHJ1Y3Qo
c21iaW9zX3R5cGVfdmVuZG9yX29lbV9pbml0KHApKTsKICAgICBkb19zdHJ1Y3Qoc21iaW9zX3R5
cGVfMTI3X2luaXQocCkpOwogCiAjdW5kZWYgZG9fc3RydWN0CkBAIC0yODksMTIgKzM4MiwyMiBA
QCBzbWJpb3NfdHlwZV8wX2luaXQodm9pZCAqc3RhcnQsIGNvbnN0IGNoCiAgICAgc3RydWN0IHNt
Ymlvc190eXBlXzAgKnAgPSAoc3RydWN0IHNtYmlvc190eXBlXzAgKilzdGFydDsKICAgICBzdGF0
aWMgY29uc3QgY2hhciAqc21iaW9zX3JlbGVhc2VfZGF0ZSA9IF9fU01CSU9TX0RBVEVfXzsKICAg
ICBjb25zdCBjaGFyICpzOworICAgIHZvaWQgKnB0czsKKyAgICB1aW50MzJfdCBsZW5ndGg7CisK
KyAgICBwdHMgPSBnZXRfc21iaW9zX3B0X3N0cnVjdCgwLCAmbGVuZ3RoKTsKKyAgICBpZiAoIChw
dHMgIT0gTlVMTCkmJihsZW5ndGggPiAwKSApCisgICAgeworICAgICAgICBtZW1jcHkoc3RhcnQs
IHB0cywgbGVuZ3RoKTsKKyAgICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVf
VFlQRTA7CisgICAgICAgIHJldHVybiAoc3RhcnQgKyBsZW5ndGgpOworICAgIH0KIAogICAgIG1l
bXNldChwLCAwLCBzaXplb2YoKnApKTsKIAogICAgIHAtPmhlYWRlci50eXBlID0gMDsKICAgICBw
LT5oZWFkZXIubGVuZ3RoID0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8wKTsKLSAgICBwLT5o
ZWFkZXIuaGFuZGxlID0gMDsKKyAgICBwLT5oZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9U
WVBFMDsKIAogICAgIHAtPnZlbmRvcl9zdHIgPSAxOwogICAgIHAtPnZlcnNpb25fc3RyID0gMjsK
QEAgLTMxNSwxMSArNDE4LDExIEBAIHNtYmlvc190eXBlXzBfaW5pdCh2b2lkICpzdGFydCwgY29u
c3QgY2gKICAgICBwLT5lbWJlZGRlZF9jb250cm9sbGVyX21pbm9yID0gMHhmZjsKIAogICAgIHN0
YXJ0ICs9IHNpemVvZihzdHJ1Y3Qgc21iaW9zX3R5cGVfMCk7Ci0gICAgcyA9IHhlbnN0b3JlX3Jl
YWQoImJpb3Mtc3RyaW5ncy9iaW9zLXZlbmRvciIsICJYZW4iKTsKKyAgICBzID0geGVuc3RvcmVf
cmVhZChIVk1fWFNfQklPU19WRU5ET1IsICJYZW4iKTsKICAgICBzdHJjcHkoKGNoYXIgKilzdGFy
dCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsgMTsKIAotICAgIHMgPSB4ZW5zdG9yZV9y
ZWFkKCJiaW9zLXN0cmluZ3MvYmlvcy12ZXJzaW9uIiwgeGVuX3ZlcnNpb24pOworICAgIHMgPSB4
ZW5zdG9yZV9yZWFkKEhWTV9YU19CSU9TX1ZFUlNJT04sIHhlbl92ZXJzaW9uKTsKICAgICBzdHJj
cHkoKGNoYXIgKilzdGFydCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsgMTsKIApAQCAt
MzM4LDEyICs0NDEsMjIgQEAgc21iaW9zX3R5cGVfMV9pbml0KHZvaWQgKnN0YXJ0LCBjb25zdCBj
aAogICAgIGNoYXIgdXVpZF9zdHJbMzddOwogICAgIHN0cnVjdCBzbWJpb3NfdHlwZV8xICpwID0g
KHN0cnVjdCBzbWJpb3NfdHlwZV8xICopc3RhcnQ7CiAgICAgY29uc3QgY2hhciAqczsKKyAgICB2
b2lkICpwdHM7CisgICAgdWludDMyX3QgbGVuZ3RoOworCisgICAgcHRzID0gZ2V0X3NtYmlvc19w
dF9zdHJ1Y3QoMSwgJmxlbmd0aCk7CisgICAgaWYgKCAocHRzICE9IE5VTEwpJiYobGVuZ3RoID4g
MCkgKQorICAgIHsKKyAgICAgICAgbWVtY3B5KHN0YXJ0LCBwdHMsIGxlbmd0aCk7CisgICAgICAg
IHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUxOworICAgICAgICByZXR1cm4g
KHN0YXJ0ICsgbGVuZ3RoKTsKKyAgICB9CiAKICAgICBtZW1zZXQocCwgMCwgc2l6ZW9mKCpwKSk7
CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDE7CiAgICAgcC0+aGVhZGVyLmxlbmd0aCA9IHNpemVv
ZihzdHJ1Y3Qgc21iaW9zX3R5cGVfMSk7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9IDB4MTAwOwor
ICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUxOwogCiAgICAgcC0+bWFu
dWZhY3R1cmVyX3N0ciA9IDE7CiAgICAgcC0+cHJvZHVjdF9uYW1lX3N0ciA9IDI7CkBAIC0zNTgs
MjAgKzQ3MSwyMCBAQCBzbWJpb3NfdHlwZV8xX2luaXQodm9pZCAqc3RhcnQsIGNvbnN0IGNoCiAK
ICAgICBzdGFydCArPSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzEpOwogICAgIAotICAgIHMg
PSB4ZW5zdG9yZV9yZWFkKCJiaW9zLXN0cmluZ3Mvc3lzdGVtLW1hbnVmYWN0dXJlciIsICJYZW4i
KTsKKyAgICBzID0geGVuc3RvcmVfcmVhZChIVk1fWFNfU1lTVEVNX01BTlVGQUNUVVJFUiwgIlhl
biIpOwogICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0LCBzKTsKICAgICBzdGFydCArPSBzdHJsZW4o
cykgKyAxOwogCi0gICAgcyA9IHhlbnN0b3JlX3JlYWQoImJpb3Mtc3RyaW5ncy9zeXN0ZW0tcHJv
ZHVjdC1uYW1lIiwgIkhWTSBkb21VIik7CisgICAgcyA9IHhlbnN0b3JlX3JlYWQoSFZNX1hTX1NZ
U1RFTV9QUk9EVUNUX05BTUUsICJIVk0gZG9tVSIpOwogICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0
LCBzKTsKICAgICBzdGFydCArPSBzdHJsZW4ocykgKyAxOwogCi0gICAgcyA9IHhlbnN0b3JlX3Jl
YWQoImJpb3Mtc3RyaW5ncy9zeXN0ZW0tdmVyc2lvbiIsIHhlbl92ZXJzaW9uKTsKKyAgICBzID0g
eGVuc3RvcmVfcmVhZChIVk1fWFNfU1lTVEVNX1ZFUlNJT04sIHhlbl92ZXJzaW9uKTsKICAgICBz
dHJjcHkoKGNoYXIgKilzdGFydCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsgMTsKIAog
ICAgIHV1aWRfdG9fc3RyaW5nKHV1aWRfc3RyLCB1dWlkKTsgCi0gICAgcyA9IHhlbnN0b3JlX3Jl
YWQoImJpb3Mtc3RyaW5ncy9zeXN0ZW0tc2VyaWFsLW51bWJlciIsIHV1aWRfc3RyKTsKKyAgICBz
ID0geGVuc3RvcmVfcmVhZChIVk1fWFNfU1lTVEVNX1NFUklBTF9OVU1CRVIsIHV1aWRfc3RyKTsK
ICAgICBzdHJjcHkoKGNoYXIgKilzdGFydCwgcyk7CiAgICAgc3RhcnQgKz0gc3RybGVuKHMpICsg
MTsKIApAQCAtMzgwLDE3ICs0OTMsNTggQEAgc21iaW9zX3R5cGVfMV9pbml0KHZvaWQgKnN0YXJ0
LCBjb25zdCBjaAogICAgIHJldHVybiBzdGFydCsxOyAKIH0KIAorLyogVHlwZSAyIC0tIFN5c3Rl
bSBCb2FyZCAqLworc3RhdGljIHZvaWQgKgorc21iaW9zX3R5cGVfMl9pbml0KHZvaWQgKnN0YXJ0
KQoreworICAgIHN0cnVjdCBzbWJpb3NfdHlwZV8yICpwID0gKHN0cnVjdCBzbWJpb3NfdHlwZV8y
ICopc3RhcnQ7CisgICAgdWludDhfdCAqcHRyOworICAgIHZvaWQgKnB0czsKKyAgICB1aW50MzJf
dCBsZW5ndGg7CisKKyAgICBwdHMgPSBnZXRfc21iaW9zX3B0X3N0cnVjdCgyLCAmbGVuZ3RoKTsK
KyAgICBpZiAoIChwdHMgIT0gTlVMTCkmJihsZW5ndGggPiAwKSApCisgICAgeworICAgICAgICBt
ZW1jcHkoc3RhcnQsIHB0cywgbGVuZ3RoKTsKKyAgICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNN
QklPU19IQU5ETEVfVFlQRTI7CisKKyAgICAgICAgLyogU2V0IGN1cnJlbnQgY2hhc3NpcyBoYW5k
bGUgaWYgcHJlc2VudCAqLworICAgICAgICBpZiAoIHAtPmhlYWRlci5sZW5ndGggPiAxMyApCisg
ICAgICAgIHsKKyAgICAgICAgICAgIHB0ciA9ICgodWludDhfdCopc3RhcnQpICsgMTE7ICAgICAg
ICAgICAgCisgICAgICAgICAgICBpZiAoICooKHVpbnQxNl90KilwdHIpICE9IDAgKQorICAgICAg
ICAgICAgICAgICooKHVpbnQxNl90KilwdHIpID0gU01CSU9TX0hBTkRMRV9UWVBFMzsKKyAgICAg
ICAgfQorCisgICAgICAgIHJldHVybiAoc3RhcnQgKyBsZW5ndGgpOworICAgIH0KKworICAgIC8q
IE9ubHkgcHJlc2VudCB3aGVuIHBhc3NlZCBpbiAqLworICAgIHJldHVybiBzdGFydDsKK30KKwog
LyogVHlwZSAzIC0tIFN5c3RlbSBFbmNsb3N1cmUgKi8KIHN0YXRpYyB2b2lkICoKIHNtYmlvc190
eXBlXzNfaW5pdCh2b2lkICpzdGFydCkKIHsKICAgICBzdHJ1Y3Qgc21iaW9zX3R5cGVfMyAqcCA9
IChzdHJ1Y3Qgc21iaW9zX3R5cGVfMyAqKXN0YXJ0OworICAgIGNvbnN0IGNoYXIgKnM7CisgICAg
dm9pZCAqcHRzOworICAgIHVpbnQzMl90IGxlbmd0aDsKKworICAgIHB0cyA9IGdldF9zbWJpb3Nf
cHRfc3RydWN0KDMsICZsZW5ndGgpOworICAgIGlmICggKHB0cyAhPSBOVUxMKSYmKGxlbmd0aCA+
IDApICkKKyAgICB7CisgICAgICAgIG1lbWNweShzdGFydCwgcHRzLCBsZW5ndGgpOworICAgICAg
ICBwLT5oZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9UWVBFMzsKKyAgICAgICAgcmV0dXJu
IChzdGFydCArIGxlbmd0aCk7CisgICAgfQogICAgIAogICAgIG1lbXNldChwLCAwLCBzaXplb2Yo
KnApKTsKIAogICAgIHAtPmhlYWRlci50eXBlID0gMzsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0g
c2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8zKTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0gMHgz
MDA7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTM7CiAKICAgICBw
LT5tYW51ZmFjdHVyZXJfc3RyID0gMTsKICAgICBwLT50eXBlID0gMHgwMTsgLyogb3RoZXIgKi8K
QEAgLTQwNCw4ICs1NTgsMTkgQEAgc21iaW9zX3R5cGVfM19pbml0KHZvaWQgKnN0YXJ0KQogCiAg
ICAgc3RhcnQgKz0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8zKTsKICAgICAKLSAgICBzdHJj
cHkoKGNoYXIgKilzdGFydCwgIlhlbiIpOwotICAgIHN0YXJ0ICs9IHN0cmxlbigiWGVuIikgKyAx
OworICAgIHMgPSB4ZW5zdG9yZV9yZWFkKEhWTV9YU19FTkNMT1NVUkVfTUFOVUZBQ1RVUkVSLCAi
WGVuIik7CisgICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHMpOworICAgIHN0YXJ0ICs9IHN0cmxl
bihzKSArIDE7CisKKyAgICAvKiBObyBpbnRlcm5hbCBkZWZhdWx0cyBmb3IgdGhpcyBpZiB0aGUg
dmFsdWUgaXMgbm90IHNldCAqLworICAgIHMgPSB4ZW5zdG9yZV9yZWFkKEhWTV9YU19FTkNMT1NV
UkVfU0VSSUFMX05VTUJFUiwgTlVMTCk7CisgICAgaWYgKCAocyAhPSBOVUxMKSYmKCpzICE9ICdc
MCcpICkKKyAgICB7CisgICAgICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0LCBzKTsKKyAgICAgICAg
c3RhcnQgKz0gc3RybGVuKHMpICsgMTsKKyAgICAgICAgcC0+c2VyaWFsX251bWJlcl9zdHIgPSAy
OworICAgIH0KKwogICAgICooKHVpbnQ4X3QgKilzdGFydCkgPSAwOwogICAgIHJldHVybiBzdGFy
dCsxOwogfQpAQCAtNDIzLDcgKzU4OCw3IEBAIHNtYmlvc190eXBlXzRfaW5pdCgKIAogICAgIHAt
PmhlYWRlci50eXBlID0gNDsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0gc2l6ZW9mKHN0cnVjdCBz
bWJpb3NfdHlwZV80KTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0gMHg0MDAgKyBjcHVfbnVtYmVy
OworICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEU0ICsgY3B1X251bWJl
cjsKIAogICAgIHAtPnNvY2tldF9kZXNpZ25hdGlvbl9zdHIgPSAxOwogICAgIHAtPnByb2Nlc3Nv
cl90eXBlID0gMHgwMzsgLyogQ1BVICovCkBAIC00NjUsMTMgKzYzMCwyMyBAQCBzdGF0aWMgdm9p
ZCAqCiBzbWJpb3NfdHlwZV8xMV9pbml0KHZvaWQgKnN0YXJ0KSAKIHsKICAgICBzdHJ1Y3Qgc21i
aW9zX3R5cGVfMTEgKnAgPSAoc3RydWN0IHNtYmlvc190eXBlXzExICopc3RhcnQ7Ci0gICAgY2hh
ciBwYXRoWzIwXSA9ICJiaW9zLXN0cmluZ3Mvb2VtLVhYIjsKKyAgICBjaGFyIHBhdGhbMjBdOwog
ICAgIGNvbnN0IGNoYXIgKnM7CiAgICAgaW50IGk7CisgICAgdm9pZCAqcHRzOworICAgIHVpbnQz
Ml90IGxlbmd0aDsKKworICAgIHB0cyA9IGdldF9zbWJpb3NfcHRfc3RydWN0KDExLCAmbGVuZ3Ro
KTsKKyAgICBpZiAoIChwdHMgIT0gTlVMTCkmJihsZW5ndGggPiAwKSApCisgICAgeworICAgICAg
ICBtZW1jcHkoc3RhcnQsIHB0cywgbGVuZ3RoKTsKKyAgICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9
IFNNQklPU19IQU5ETEVfVFlQRTExOworICAgICAgICByZXR1cm4gKHN0YXJ0ICsgbGVuZ3RoKTsK
KyAgICB9CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDExOwogICAgIHAtPmhlYWRlci5sZW5ndGgg
PSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzExKTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0g
MHhCMDA7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTExOwogCiAg
ICAgcC0+Y291bnQgPSAwOwogCkBAIC00ODAsOCArNjU1LDcgQEAgc21iaW9zX3R5cGVfMTFfaW5p
dCh2b2lkICpzdGFydCkKICAgICAvKiBQdWxsIG91dCBhcyBtYW55IG9lbS0qIHN0cmluZ3Mgd2Ug
ZmluZCBpbiB4ZW5zdG9yZSAqLwogICAgIGZvciAoIGkgPSAxOyBpIDwgMTAwOyBpKysgKQogICAg
IHsKLSAgICAgICAgcGF0aFsoc2l6ZW9mIHBhdGgpIC0gM10gPSAnMCcgKyAoKGkgPCAxMCkgPyBp
IDogaSAvIDEwKTsKLSAgICAgICAgcGF0aFsoc2l6ZW9mIHBhdGgpIC0gMl0gPSAoaSA8IDEwKSA/
ICdcMCcgOiAnMCcgKyAoaSAlIDEwKTsKKyAgICAgICAgc25wcmludGYocGF0aCwgc2l6ZW9mKHBh
dGgpLCBIVk1fWFNfT0VNX1NUUklOR1MsIGkpOwogICAgICAgICBpZiAoICgocyA9IHhlbnN0b3Jl
X3JlYWQocGF0aCwgTlVMTCkpID09IE5VTEwpIHx8ICgqcyA9PSAnXDAnKSApCiAgICAgICAgICAg
ICBicmVhazsKICAgICAgICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHMpOwpAQCAtNTEwLDcgKzY4
NCw3IEBAIHNtYmlvc190eXBlXzE2X2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KICAgICBtZW1z
ZXQocCwgMCwgc2l6ZW9mKCpwKSk7CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDE2OwotICAgIHAt
PmhlYWRlci5oYW5kbGUgPSAweDEwMDA7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19I
QU5ETEVfVFlQRTE2OwogICAgIHAtPmhlYWRlci5sZW5ndGggPSBzaXplb2Yoc3RydWN0IHNtYmlv
c190eXBlXzE2KTsKICAgICAKICAgICBwLT5sb2NhdGlvbiA9IDB4MDE7IC8qIG90aGVyICovCkBA
IC01MzYsNyArNzEwLDcgQEAgc21iaW9zX3R5cGVfMTdfaW5pdCh2b2lkICpzdGFydCwgdWludDMy
XwogCiAgICAgcC0+aGVhZGVyLnR5cGUgPSAxNzsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0gc2l6
ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8xNyk7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9IDB4MTEw
MCArIGluc3RhbmNlOworICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUx
NyArIGluc3RhbmNlOwogCiAgICAgcC0+cGh5c2ljYWxfbWVtb3J5X2FycmF5X2hhbmRsZSA9IDB4
MTAwMDsKICAgICBwLT50b3RhbF93aWR0aCA9IDY0OwpAQCAtNTcxLDcgKzc0NSw3IEBAIHNtYmlv
c190eXBlXzE5X2luaXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KIAogICAgIHAtPmhlYWRlci50eXBl
ID0gMTk7CiAgICAgcC0+aGVhZGVyLmxlbmd0aCA9IHNpemVvZihzdHJ1Y3Qgc21iaW9zX3R5cGVf
MTkpOwotICAgIHAtPmhlYWRlci5oYW5kbGUgPSAweDEzMDAgKyBpbnN0YW5jZTsKKyAgICBwLT5o
ZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9UWVBFMTkgKyBpbnN0YW5jZTsKIAogICAgIHAt
PnN0YXJ0aW5nX2FkZHJlc3MgPSBpbnN0YW5jZSA8PCAyNDsKICAgICBwLT5lbmRpbmdfYWRkcmVz
cyA9IHAtPnN0YXJ0aW5nX2FkZHJlc3MgKyAobWVtb3J5X3NpemVfbWIgPDwgMTApIC0gMTsKQEAg
LTU5Myw3ICs3NjcsNyBAQCBzbWJpb3NfdHlwZV8yMF9pbml0KHZvaWQgKnN0YXJ0LCB1aW50MzJf
CiAKICAgICBwLT5oZWFkZXIudHlwZSA9IDIwOwogICAgIHAtPmhlYWRlci5sZW5ndGggPSBzaXpl
b2Yoc3RydWN0IHNtYmlvc190eXBlXzIwKTsKLSAgICBwLT5oZWFkZXIuaGFuZGxlID0gMHgxNDAw
ICsgaW5zdGFuY2U7CisgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTIw
ICsgaW5zdGFuY2U7CiAKICAgICBwLT5zdGFydGluZ19hZGRyZXNzID0gaW5zdGFuY2UgPDwgMjQ7
CiAgICAgcC0+ZW5kaW5nX2FkZHJlc3MgPSBwLT5zdGFydGluZ19hZGRyZXNzICsgKG1lbW9yeV9z
aXplX21iIDw8IDEwKSAtIDE7CkBAIC02MDksNiArNzgzLDcxIEBAIHNtYmlvc190eXBlXzIwX2lu
aXQodm9pZCAqc3RhcnQsIHVpbnQzMl8KICAgICByZXR1cm4gc3RhcnQrMjsKIH0KIAorLyogVHlw
ZSAyMiAtLSBQb3J0YWJsZSBCYXR0ZXJ5ICovCitzdGF0aWMgdm9pZCAqCitzbWJpb3NfdHlwZV8y
Ml9pbml0KHZvaWQgKnN0YXJ0KQoreworICAgIHN0cnVjdCBzbWJpb3NfdHlwZV8yMiAqcCA9IChz
dHJ1Y3Qgc21iaW9zX3R5cGVfMjIgKilzdGFydDsKKyAgICBzdGF0aWMgY29uc3QgY2hhciAqc21i
aW9zX3JlbGVhc2VfZGF0ZSA9IF9fU01CSU9TX0RBVEVfXzsKKyAgICBjb25zdCBjaGFyICpzOwor
ICAgIHZvaWQgKnB0czsKKyAgICB1aW50MzJfdCBsZW5ndGg7CisKKyAgICBwdHMgPSBnZXRfc21i
aW9zX3B0X3N0cnVjdCgyMiwgJmxlbmd0aCk7CisgICAgaWYgKCAocHRzICE9IE5VTEwpJiYobGVu
Z3RoID4gMCkgKQorICAgIHsKKyAgICAgICAgbWVtY3B5KHN0YXJ0LCBwdHMsIGxlbmd0aCk7Cisg
ICAgICAgIHAtPmhlYWRlci5oYW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUyMjsKKyAgICAgICAg
cmV0dXJuIChzdGFydCArIGxlbmd0aCk7CisgICAgfQorCisgICAgcyA9IHhlbnN0b3JlX3JlYWQo
SFZNX1hTX1NNQklPU19ERUZBVUxUX0JBVFRFUlksICIwIik7CisgICAgaWYgKCBzdHJuY21wKHMs
ICIxIiwgMSkgIT0gMCApCisgICAgICAgIHJldHVybiBzdGFydDsKKworICAgIG1lbXNldChwLCAw
LCBzaXplb2YoKnApKTsKKworICAgIHAtPmhlYWRlci50eXBlID0gMjI7CisgICAgcC0+aGVhZGVy
Lmxlbmd0aCA9IHNpemVvZihzdHJ1Y3Qgc21iaW9zX3R5cGVfMjIpOworICAgIHAtPmhlYWRlci5o
YW5kbGUgPSBTTUJJT1NfSEFORExFX1RZUEUyMjsKKworICAgIHAtPmxvY2F0aW9uX3N0ciA9IDE7
CisgICAgcC0+bWFudWZhY3R1cmVyX3N0ciA9IDI7CisgICAgcC0+bWFudWZhY3R1cmVyX2RhdGVf
c3RyID0gMzsKKyAgICBwLT5zZXJpYWxfbnVtYmVyX3N0ciA9IDA7CisgICAgcC0+ZGV2aWNlX25h
bWVfc3RyID0gNDsKKyAgICBwLT5kZXZpY2VfY2hlbWlzdHJ5ID0gMHgyOyAvKiB1bmtub3duICov
CisgICAgcC0+ZGV2aWNlX2NhcGFjaXR5ID0gMDsgLyogdW5rbm93biAqLworICAgIHAtPmRldmlj
ZV92b2x0YWdlID0gMDsgLyogdW5rbm93biAqLworICAgIHAtPnNiZHNfdmVyc2lvbl9udW1iZXIg
PSAwOworICAgIHAtPm1heF9lcnJvciA9IDB4ZmY7IC8qIHVua25vd24gKi8KKyAgICBwLT5zYmRz
X3NlcmlhbF9udW1iZXIgPSAwOworICAgIHAtPnNiZHNfbWFudWZhY3R1cmVyX2RhdGUgPSAwOwor
ICAgIHAtPnNiZHNfZGV2aWNlX2NoZW1pc3RyeSA9IDA7CisgICAgcC0+ZGVzaWduX2NhcGFjaXR5
X211bHRpcGxpZXIgPSAwOworICAgIHAtPm9lbV9zcGVjaWZpYyA9IDA7CisKKyAgICBzdGFydCAr
PSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzIyKTsKKworICAgIHN0cmNweSgoY2hhciAqKXN0
YXJ0LCAiUHJpbWFyeSIpOworICAgIHN0YXJ0ICs9IHN0cmxlbigiUHJpbWFyeSIpICsgMTsKKwor
ICAgIHMgPSB4ZW5zdG9yZV9yZWFkKEhWTV9YU19CQVRURVJZX01BTlVGQUNUVVJFUiwgIlhlbiIp
OworICAgIHN0cmNweSgoY2hhciAqKXN0YXJ0LCBzKTsKKyAgICBzdGFydCArPSBzdHJsZW4ocykg
KyAxOworCisgICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHNtYmlvc19yZWxlYXNlX2RhdGUpOwor
ICAgIHN0YXJ0ICs9IHN0cmxlbihzbWJpb3NfcmVsZWFzZV9kYXRlKSArIDE7CisKKyAgICBzID0g
eGVuc3RvcmVfcmVhZChIVk1fWFNfQkFUVEVSWV9ERVZJQ0VfTkFNRSwgIlhFTi1WQkFUIik7Cisg
ICAgc3RyY3B5KChjaGFyICopc3RhcnQsIHMpOworICAgIHN0YXJ0ICs9IHN0cmxlbihzKSArIDE7
CisKKyAgICAqKCh1aW50OF90ICopc3RhcnQpID0gMDsKKworICAgIHJldHVybiBzdGFydCsxOyAK
K30KKwogLyogVHlwZSAzMiAtLSBTeXN0ZW0gQm9vdCBJbmZvcm1hdGlvbiAqLwogc3RhdGljIHZv
aWQgKgogc21iaW9zX3R5cGVfMzJfaW5pdCh2b2lkICpzdGFydCkKQEAgLTYxOSw3ICs4NTgsNyBA
QCBzbWJpb3NfdHlwZV8zMl9pbml0KHZvaWQgKnN0YXJ0KQogCiAgICAgcC0+aGVhZGVyLnR5cGUg
PSAzMjsKICAgICBwLT5oZWFkZXIubGVuZ3RoID0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8z
Mik7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9IDB4MjAwMDsKKyAgICBwLT5oZWFkZXIuaGFuZGxl
ID0gU01CSU9TX0hBTkRMRV9UWVBFMzI7CiAgICAgbWVtc2V0KHAtPnJlc2VydmVkLCAwLCA2KTsK
ICAgICBwLT5ib290X3N0YXR1cyA9IDA7IC8qIG5vIGVycm9ycyBkZXRlY3RlZCAqLwogICAgIApA
QCAtNjI4LDYgKzg2Nyw1OCBAQCBzbWJpb3NfdHlwZV8zMl9pbml0KHZvaWQgKnN0YXJ0KQogICAg
IHJldHVybiBzdGFydCsyOwogfQogCisvKiBUeXBlIDM5IC0tIFBvd2VyIFN1cHBseSAqLworc3Rh
dGljIHZvaWQgKgorc21iaW9zX3R5cGVfMzlfaW5pdCh2b2lkICpzdGFydCkKK3sKKyAgICBzdHJ1
Y3Qgc21iaW9zX3R5cGVfMzkgKnAgPSAoc3RydWN0IHNtYmlvc190eXBlXzM5ICopc3RhcnQ7Cisg
ICAgdm9pZCAqcHRzOworICAgIHVpbnQzMl90IGxlbmd0aDsKKworICAgIHB0cyA9IGdldF9zbWJp
b3NfcHRfc3RydWN0KDM5LCAmbGVuZ3RoKTsKKyAgICBpZiAoIChwdHMgIT0gTlVMTCkmJihsZW5n
dGggPiAwKSApCisgICAgeworICAgICAgICBtZW1jcHkoc3RhcnQsIHB0cywgbGVuZ3RoKTsKKyAg
ICAgICAgcC0+aGVhZGVyLmhhbmRsZSA9IFNNQklPU19IQU5ETEVfVFlQRTM5OworICAgICAgICBy
ZXR1cm4gKHN0YXJ0ICsgbGVuZ3RoKTsKKyAgICB9CisKKyAgICAvKiBPbmx5IHByZXNlbnQgd2hl
biBwYXNzZWQgaW4gKi8KKyAgICByZXR1cm4gc3RhcnQ7Cit9CisKK3N0YXRpYyB2b2lkICoKK3Nt
Ymlvc190eXBlX3ZlbmRvcl9vZW1faW5pdCh2b2lkICpzdGFydCkKK3sKKyAgICB1aW50MzJfdCAq
c2VwID0gc21iaW9zX3B0X2FkZHI7CisgICAgdWludDMyX3QgdG90YWwgPSAwOworICAgIHVpbnQ4
X3QgKnB0cjsKKworICAgIGlmICggc2VwID09IE5VTEwgKQorICAgICAgICByZXR1cm4gc3RhcnQ7
CisKKyAgICB3aGlsZSAoIHRvdGFsIDwgc21iaW9zX3B0X2xlbmd0aCApCisgICAgeworICAgICAg
ICBwdHIgPSAodWludDhfdCopKHNlcCArIDEpOworICAgICAgICBpZiAoIHB0clswXSA+PSAxMjgg
KQorICAgICAgICB7CisgICAgICAgICAgICAvKiBWZW5kb3IvT0VNIHRhYmxlLCBjb3B5IGl0IGlu
LiBOb3RlIHRoZSBoYW5kbGUgdmFsdWVzIGNhbm5vdAorICAgICAgICAgICAgICogYmUgY2hhbmdl
ZCBzaW5jZSBpdCBpcyB1bmtub3duIHdoYXQgaXMgaW4gZWFjaCBvZiB0aGVzZSB0YWJsZXMKKyAg
ICAgICAgICAgICAqIGJ1dCB0aGV5IGNvdWxkIGNvbnRhaW4gaGFuZGxlIHJlZmVyZW5jZXMgdG8g
b3RoZXIgdGFibGVzLiBUaGlzCisgICAgICAgICAgICAgKiBtZWFucyBhIHNsaWdodCByaXNrIG9m
IGNvbGxpc2lvbiB3aXRoIHRoZSB0YWJsZXMgYWJvdmUgYnV0IHRoYXQKKyAgICAgICAgICAgICAq
IHdvdWxkIGhhdmUgdG8gYmUgZGVhbHQgd2l0aCBvbiBhIGNhc2UgYnkgY2FzZSBiYXNpcy4KKyAg
ICAgICAgICAgICAqLworICAgICAgICAgICAgbWVtY3B5KHN0YXJ0LCBwdHIsICpzZXApOworICAg
ICAgICAgICAgc3RhcnQgKz0gKnNlcDsKKyAgICAgICAgfQorCisgICAgICAgIHRvdGFsICs9ICgq
c2VwICsgc2l6ZW9mKHVpbnQzMl90KSk7CisgICAgICAgIHNlcCA9ICh1aW50MzJfdCopKHB0ciAr
ICpzZXApOworICAgIH0KKworICAgIHJldHVybiBzdGFydDsKK30KKwogLyogVHlwZSAxMjcgLS0g
RW5kIG9mIFRhYmxlICovCiBzdGF0aWMgdm9pZCAqCiBzbWJpb3NfdHlwZV8xMjdfaW5pdCh2b2lk
ICpzdGFydCkKQEAgLTYzOCw3ICs5MjksNyBAQCBzbWJpb3NfdHlwZV8xMjdfaW5pdCh2b2lkICpz
dGFydCkKIAogICAgIHAtPmhlYWRlci50eXBlID0gMTI3OwogICAgIHAtPmhlYWRlci5sZW5ndGgg
PSBzaXplb2Yoc3RydWN0IHNtYmlvc190eXBlXzEyNyk7Ci0gICAgcC0+aGVhZGVyLmhhbmRsZSA9
IDB4N2YwMDsKKyAgICBwLT5oZWFkZXIuaGFuZGxlID0gU01CSU9TX0hBTkRMRV9UWVBFMTI3Owog
CiAgICAgc3RhcnQgKz0gc2l6ZW9mKHN0cnVjdCBzbWJpb3NfdHlwZV8xMjcpOwogICAgICooKHVp
bnQxNl90ICopc3RhcnQpID0gMDsKZGlmZiAtciA2NTQ3ZjIyZmQ5ZTIgdG9vbHMvZmlybXdhcmUv
aHZtbG9hZGVyL3NtYmlvc190eXBlcy5oCi0tLSBhL3Rvb2xzL2Zpcm13YXJlL2h2bWxvYWRlci9z
bWJpb3NfdHlwZXMuaAlUaHUgRGVjIDIwIDExOjE2OjUzIDIwMTIgLTA1MDAKKysrIGIvdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyL3NtYmlvc190eXBlcy5oCVRodSBEZWMgMjAgMTE6MTc6NTQgMjAx
MiAtMDUwMApAQCAtODQsNiArODQsMTUgQEAgc3RydWN0IHNtYmlvc190eXBlXzEgewogICAgIHVp
bnQ4X3QgZmFtaWx5X3N0cjsKIH0gX19hdHRyaWJ1dGVfXyAoKHBhY2tlZCkpOwogCisvKiBTTUJJ
T1MgdHlwZSAyIC0gQmFzZSBCb2FyZCBJbmZvcm1hdGlvbiAqLworc3RydWN0IHNtYmlvc190eXBl
XzIgeworICAgIHN0cnVjdCBzbWJpb3Nfc3RydWN0dXJlX2hlYWRlciBoZWFkZXI7CisgICAgdWlu
dDhfdCBtYW51ZmFjdHVyZXJfc3RyOworICAgIHVpbnQ4X3QgcHJvZHVjdF9uYW1lX3N0cjsKKyAg
ICB1aW50OF90IHZlcnNpb25fc3RyOworICAgIHVpbnQ4X3Qgc2VyaWFsX251bWJlcl9zdHI7Cit9
IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKwogLyogU01CSU9TIHR5cGUgMyAtIFN5c3RlbSBF
bmNsb3N1cmUgKi8KIHN0cnVjdCBzbWJpb3NfdHlwZV8zIHsKICAgICBzdHJ1Y3Qgc21iaW9zX3N0
cnVjdHVyZV9oZWFkZXIgaGVhZGVyOwpAQCAtMTczLDYgKzE4MiwyNiBAQCBzdHJ1Y3Qgc21iaW9z
X3R5cGVfMjAgewogICAgIHVpbnQ4X3QgaW50ZXJsZWF2ZWRfZGF0YV9kZXB0aDsKIH0gX19hdHRy
aWJ1dGVfXyAoKHBhY2tlZCkpOwogCisvKiBTTUJJT1MgdHlwZSAyMiAtIFBvcnRhYmxlIGJhdHRl
cnkgKi8KK3N0cnVjdCBzbWJpb3NfdHlwZV8yMiB7CisgICAgc3RydWN0IHNtYmlvc19zdHJ1Y3R1
cmVfaGVhZGVyIGhlYWRlcjsKKyAgICB1aW50OF90IGxvY2F0aW9uX3N0cjsKKyAgICB1aW50OF90
IG1hbnVmYWN0dXJlcl9zdHI7CisgICAgdWludDhfdCBtYW51ZmFjdHVyZXJfZGF0ZV9zdHI7Cisg
ICAgdWludDhfdCBzZXJpYWxfbnVtYmVyX3N0cjsKKyAgICB1aW50OF90IGRldmljZV9uYW1lX3N0
cjsKKyAgICB1aW50OF90IGRldmljZV9jaGVtaXN0cnk7CisgICAgdWludDE2X3QgZGV2aWNlX2Nh
cGFjaXR5OworICAgIHVpbnQxNl90IGRldmljZV92b2x0YWdlOworICAgIHVpbnQ4X3Qgc2Jkc192
ZXJzaW9uX251bWJlcjsKKyAgICB1aW50OF90IG1heF9lcnJvcjsKKyAgICB1aW50MTZfdCBzYmRz
X3NlcmlhbF9udW1iZXI7CisgICAgdWludDE2X3Qgc2Jkc19tYW51ZmFjdHVyZXJfZGF0ZTsKKyAg
ICB1aW50OF90IHNiZHNfZGV2aWNlX2NoZW1pc3RyeTsKKyAgICB1aW50OF90IGRlc2lnbl9jYXBh
Y2l0eV9tdWx0aXBsaWVyOworICAgIHVpbnQzMl90IG9lbV9zcGVjaWZpYzsKK30gX19hdHRyaWJ1
dGVfXyAoKHBhY2tlZCkpOworCiAvKiBTTUJJT1MgdHlwZSAzMiAtIFN5c3RlbSBCb290IEluZm9y
bWF0aW9uICovCiBzdHJ1Y3Qgc21iaW9zX3R5cGVfMzIgewogICAgIHN0cnVjdCBzbWJpb3Nfc3Ry
dWN0dXJlX2hlYWRlciBoZWFkZXI7CkBAIC0xODAsNiArMjA5LDI0IEBAIHN0cnVjdCBzbWJpb3Nf
dHlwZV8zMiB7CiAgICAgdWludDhfdCBib290X3N0YXR1czsKIH0gX19hdHRyaWJ1dGVfXyAoKHBh
Y2tlZCkpOwogCisvKiBTTUJJT1MgdHlwZSAzOSAtIFBvd2VyIFN1cHBseSAqLworc3RydWN0IHNt
Ymlvc190eXBlXzM5IHsKKyAgICBzdHJ1Y3Qgc21iaW9zX3N0cnVjdHVyZV9oZWFkZXIgaGVhZGVy
OworICAgIHVpbnQ4X3QgcG93ZXJfdW5pdF9ncm91cDsKKyAgICB1aW50OF90IGxvY2F0aW9uX3N0
cjsKKyAgICB1aW50OF90IGRldmljZV9uYW1lX3N0cjsKKyAgICB1aW50OF90IG1hbnVmYWN0dXJl
cl9zdHI7CisgICAgdWludDhfdCBzZXJpYWxfbnVtYmVyX3N0cjsKKyAgICB1aW50OF90IGFzc2V0
X3RhZ19udW1iZXJfc3RyOworICAgIHVpbnQ4X3QgbW9kZWxfcGFydF9udW1iZXJfc3RyOworICAg
IHVpbnQ4X3QgcmV2aXNpb25fbGV2ZWxfc3RyOworICAgIHVpbnQxNl90IG1heF9jYXBhY2l0eTsK
KyAgICB1aW50MTZfdCBjaGFyYWN0ZXJpc3RpY3M7CisgICAgdWludDE2X3QgaW5wdXRfdm9sdGFn
ZV9wcm9iZV9oYW5kbGU7CisgICAgdWludDE2X3QgY29vbGluZ19kZXZpY2VfaGFuZGxlOworICAg
IHVpbnQxNl90IGlucHV0X2N1cnJlbnRfcHJvYmVfaGFuZGxlOworfSBfX2F0dHJpYnV0ZV9fICgo
cGFja2VkKSk7CisKIC8qIFNNQklPUyB0eXBlIDEyNyAtLSBFbmQtb2YtdGFibGUgKi8KIHN0cnVj
dCBzbWJpb3NfdHlwZV8xMjcgewogICAgIHN0cnVjdCBzbWJpb3Nfc3RydWN0dXJlX2hlYWRlciBo
ZWFkZXI7Cg==

--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_007_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6462FTLPMAILBOX02_--


From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllHI-00037s-Mt; Thu, 20 Dec 2012 18:55:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllHG-00037A-Gw
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:30 +0000
Received: from [85.158.143.35:27541] by server-1.bemta-4.messagelabs.com id
	F5/1D-28401-12F53D05; Thu, 20 Dec 2012 18:55:29 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1356029721!3959573!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM2NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6420 invoked from network); 20 Dec 2012 18:55:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="sf'?scan'208";a="1382097"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:23 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Thu, 20 Dec 2012
	13:55:23 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:16 -0500
Thread-Topic: [Xen-devel] [PATCH v4 04/04] HVM firmware passthrough ACPI
	processing
Thread-Index: Ac3e4qwC4m/2qJorReq+5LbhBa5NzA==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 04/04] HVM firmware passthrough ACPI
 processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

ACPI table passthrough support allowing additional static tables and
SSDTs (AML code) to be loaded. These additional tables are added at the end
of the secondary table list in the RSDT/XSDT tables.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>


--_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_
Content-Type: application/octet-stream;
	name="hvm-firmware-passthrough-v4-04.patch"
Content-Description: hvm-firmware-passthrough-v4-04.patch
Content-Disposition: attachment;
	filename="hvm-firmware-passthrough-v4-04.patch"; size=2856;
	creation-date="Wed, 19 Dec 2012 19:37:38 GMT";
	modification-date="Wed, 19 Dec 2012 21:36:25 GMT"
Content-Transfer-Encoding: base64

QUNQSSB0YWJsZSBwYXNzdGhyb3VnaCBzdXBwb3J0IGFsbG93aW5nIGFkZGl0aW9uYWwgc3RhdGlj
IHRhYmxlcyBhbmQKU1NEVHMgKEFNTCBjb2RlKSB0byBiZSBsb2FkZWQuIFRoZXNlIGFkZGl0aW9u
YWwgdGFibGVzIGFyZSBhZGRlZCBhdCB0aGUgZW5kCm9mIHRoZSBzZWNvbmRhcnkgdGFibGUgbGlz
dCBpbiB0aGUgUlNEVC9YU0RUIHRhYmxlcy4KClNpZ25lZC1vZmYtYnk6IFJvc3MgUGhpbGlwc29u
IDxyb3NzLnBoaWxpcHNvbkBjaXRyaXguY29tPgoKZGlmZiAtciA3ZWRlZGVkOThmNzIgdG9vbHMv
ZmlybXdhcmUvaHZtbG9hZGVyL2FjcGkvYnVpbGQuYwotLS0gYS90b29scy9maXJtd2FyZS9odm1s
b2FkZXIvYWNwaS9idWlsZC5jCVdlZCBEZWMgMTkgMTY6MzE6MjggMjAxMiAtMDUwMAorKysgYi90
b29scy9maXJtd2FyZS9odm1sb2FkZXIvYWNwaS9idWlsZC5jCVdlZCBEZWMgMTkgMTY6MzY6MTUg
MjAxMiAtMDUwMApAQCAtMjMsNiArMjMsOSBAQAogI2luY2x1ZGUgInNzZHRfcG0uaCIKICNpbmNs
dWRlICIuLi9jb25maWcuaCIKICNpbmNsdWRlICIuLi91dGlsLmgiCisjaW5jbHVkZSAieGVuLXRv
b2xzL2h2bV9kZWZzLmgiCisKKyNkZWZpbmUgQUNQSV9NQVhfU0VDT05EQVJZX1RBQkxFUyAxNgog
CiAjZGVmaW5lIGFsaWduMTYoc3opICAgICAgICAoKChzeikgKyAxNSkgJiB+MTUpCiAjZGVmaW5l
IGZpeGVkX3N0cmNweShkLCBzKSBzdHJuY3B5KChkKSwgKHMpLCBzaXplb2YoZCkpCkBAIC0xOTgs
NiArMjAxLDUyIEBAIHN0YXRpYyBzdHJ1Y3QgYWNwaV8yMF93YWV0ICpjb25zdHJ1Y3Rfd2EKICAg
ICByZXR1cm4gd2FldDsKIH0KIAorc3RhdGljIGludCBjb25zdHJ1Y3RfcGFzc3Rocm91Z2hfdGFi
bGVzKHVuc2lnbmVkIGxvbmcgKnRhYmxlX3B0cnMsCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgaW50IG5yX3RhYmxlcykKK3sKKyAgICBjb25zdCBjaGFyICpzOworICAg
IHVpbnQ4X3QgKmFjcGlfcHRfYWRkcjsKKyAgICB1aW50MzJfdCBhY3BpX3B0X2xlbmd0aDsKKyAg
ICBzdHJ1Y3QgYWNwaV9oZWFkZXIgKmhlYWRlcjsKKyAgICBpbnQgbnJfYWRkZWQ7CisgICAgaW50
IG5yX21heCA9IChBQ1BJX01BWF9TRUNPTkRBUllfVEFCTEVTIC0gbnJfdGFibGVzIC0gMSk7Cisg
ICAgdWludDMyX3QgdG90YWwgPSAwOworICAgIHVpbnQ4X3QgKmJ1ZmZlcjsKKworICAgIHMgPSB4
ZW5zdG9yZV9yZWFkKEhWTV9YU19BQ1BJX1BUX0FERFJFU1MsIE5VTEwpOworICAgIGlmICggcyA9
PSBOVUxMICkKKyAgICAgICAgcmV0dXJuIDA7ICAgIAorCisgICAgYWNwaV9wdF9hZGRyID0gKHVp
bnQ4X3QqKSh1aW50MzJfdClzdHJ0b2xsKHMsIE5VTEwsIDApOworICAgIGlmICggYWNwaV9wdF9h
ZGRyID09IE5VTEwgKQorICAgICAgICByZXR1cm4gMDsKKworICAgIHMgPSB4ZW5zdG9yZV9yZWFk
KEhWTV9YU19BQ1BJX1BUX0xFTkdUSCwgTlVMTCk7CisgICAgaWYgKCBzID09IE5VTEwgKQorICAg
ICAgICByZXR1cm4gMDsKKworICAgIGFjcGlfcHRfbGVuZ3RoID0gKHVpbnQzMl90KXN0cnRvbGwo
cywgTlVMTCwgMCk7CisKKyAgICBmb3IgKCBucl9hZGRlZCA9IDA7IG5yX2FkZGVkIDwgbnJfbWF4
OyBucl9hZGRlZCsrICkKKyAgICB7ICAgICAgICAKKyAgICAgICAgaWYgKCAoYWNwaV9wdF9sZW5n
dGggLSB0b3RhbCkgPCBzaXplb2Yoc3RydWN0IGFjcGlfaGVhZGVyKSApCisgICAgICAgICAgICBi
cmVhazsKKworICAgICAgICBoZWFkZXIgPSAoc3RydWN0IGFjcGlfaGVhZGVyKilhY3BpX3B0X2Fk
ZHI7CisKKyAgICAgICAgYnVmZmVyID0gbWVtX2FsbG9jKGhlYWRlci0+bGVuZ3RoLCAxNik7Cisg
ICAgICAgIGlmICggYnVmZmVyID09IE5VTEwgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAg
IG1lbWNweShidWZmZXIsIGhlYWRlciwgaGVhZGVyLT5sZW5ndGgpOworCisgICAgICAgIHRhYmxl
X3B0cnNbbnJfdGFibGVzKytdID0gKHVuc2lnbmVkIGxvbmcpYnVmZmVyOworICAgICAgICB0b3Rh
bCArPSBoZWFkZXItPmxlbmd0aDsKKyAgICAgICAgYWNwaV9wdF9hZGRyICs9IGhlYWRlci0+bGVu
Z3RoOworICAgIH0KKworICAgIHJldHVybiBucl9hZGRlZDsKK30KKwogc3RhdGljIGludCBjb25z
dHJ1Y3Rfc2Vjb25kYXJ5X3RhYmxlcyh1bnNpZ25lZCBsb25nICp0YWJsZV9wdHJzLAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgYWNwaV9pbmZvICppbmZvKQog
ewpAQCAtMjkzLDYgKzM0Miw5IEBAIHN0YXRpYyBpbnQgY29uc3RydWN0X3NlY29uZGFyeV90YWJs
ZXModW4KICAgICAgICAgfQogICAgIH0KIAorICAgIC8qIExvYWQgYW55IGFkZGl0aW9uYWwgdGFi
bGVzIHBhc3NlZCB0aHJvdWdoLiAqLworICAgIG5yX3RhYmxlcyArPSBjb25zdHJ1Y3RfcGFzc3Ro
cm91Z2hfdGFibGVzKHRhYmxlX3B0cnMsIG5yX3RhYmxlcyk7CisKICAgICB0YWJsZV9wdHJzW25y
X3RhYmxlc10gPSAwOwogICAgIHJldHVybiBucl90YWJsZXM7CiB9CkBAIC0zMjcsNyArMzc5LDcg
QEAgdm9pZCBhY3BpX2J1aWxkX3RhYmxlcyhzdHJ1Y3QgYWNwaV9jb25maQogICAgIHN0cnVjdCBh
Y3BpXzEwX2ZhZHQgKmZhZHRfMTA7CiAgICAgc3RydWN0IGFjcGlfMjBfZmFjcyAqZmFjczsKICAg
ICB1bnNpZ25lZCBjaGFyICAgICAgICpkc2R0OwotICAgIHVuc2lnbmVkIGxvbmcgICAgICAgIHNl
Y29uZGFyeV90YWJsZXNbMTZdOworICAgIHVuc2lnbmVkIGxvbmcgICAgICAgIHNlY29uZGFyeV90
YWJsZXNbQUNQSV9NQVhfU0VDT05EQVJZX1RBQkxFU107CiAgICAgaW50ICAgICAgICAgICAgICAg
ICAgbnJfc2Vjb25kYXJpZXMsIGk7CiAgICAgdW5zaWduZWQgbG9uZyAgICAgICAgdm1fZ2lkX2Fk
ZHI7CiAK

--_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_--


ZHI7CiAK

--_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_006_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6463FTLPMAILBOX02_--


From xen-devel-bounces@lists.xen.org Thu Dec 20 18:55:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 18:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TllHH-00037S-8J; Thu, 20 Dec 2012 18:55:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TllHF-00036g-0b
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 18:55:29 +0000
Received: from [85.158.143.35:28724] by server-2.bemta-4.messagelabs.com id
	5F/9B-30861-02F53D05; Thu, 20 Dec 2012 18:55:28 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1356029721!3959573!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM2NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6375 invoked from network); 20 Dec 2012 18:55:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 18:55:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="sf'?scan'208";a="1382091"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 18:55:20 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Thu, 20 Dec 2012
	13:55:20 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Thu, 20 Dec 2012 13:55:13 -0500
Thread-Topic: [Xen-devel] [PATCH v4 02/04] HVM firmware passthrough control
	tools support
Thread-Index: Ac3e4pAi1zOGIpqfRCOJv/0SqsKiEA==
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 02/04] HVM firmware passthrough control tools
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Xen control tools support for loading the firmware passthrough blocks during
domain construction. SMBIOS and ACPI blocks are passed in using the new
xc_hvm_build_args structure. Each block is read and loaded into the new domain
address space behind the HVMLOADER image. The base address for the two blocks
is returned as an out parameter to the caller via the args structure.

Signed-off-by: Ross Philipson <ross.philipson@citrix.com>


--_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_
Content-Type: application/octet-stream;
	name="hvm-firmware-passthrough-v4-02.patch"
Content-Description: hvm-firmware-passthrough-v4-02.patch
Content-Disposition: attachment;
	filename="hvm-firmware-passthrough-v4-02.patch"; size=9522;
	creation-date="Wed, 19 Dec 2012 20:57:33 GMT";
	modification-date="Thu, 20 Dec 2012 18:22:16 GMT"
Content-Transfer-Encoding: base64

WGVuIGNvbnRyb2wgdG9vbHMgc3VwcG9ydCBmb3IgbG9hZGluZyB0aGUgZmlybXdhcmUgcGFzc3Ro
cm91Z2ggYmxvY2tzIGR1cmluZwpkb21haW4gY29uc3RydWN0aW9uLiBTTUJJT1MgYW5kIEFDUEkg
YmxvY2tzIGFyZSBwYXNzZWQgaW4gdXNpbmcgdGhlIG5ldwp4Y19odm1fYnVpbGRfYXJncyBzdHJ1
Y3R1cmUuIEVhY2ggYmxvY2sgaXMgcmVhZCBhbmQgbG9hZGVkIGludG8gdGhlIG5ldyBkb21haW4K
YWRkcmVzcyBzcGFjZSBiZWhpbmQgdGhlIEhWTUxPQURFUiBpbWFnZS4gVGhlIGJhc2UgYWRkcmVz
cyBmb3IgdGhlIHR3byBibG9ja3MKaXMgcmV0dXJuZWQgYXMgYW4gb3V0IHBhcmFtZXRlciB0byB0
aGUgY2FsbGVyIHZpYSB0aGUgYXJncyBzdHJ1Y3R1cmUuCgpTaWduZWQtb2ZmLWJ5OiBSb3NzIFBo
aWxpcHNvbiA8cm9zcy5waGlsaXBzb25AY2l0cml4LmNvbT4KCmRpZmYgLXIgNGE4OWVjNDYwMGE3
IHRvb2xzL2xpYnhjL3hjX2h2bV9idWlsZF9hcm0uYwotLS0gYS90b29scy9saWJ4Yy94Y19odm1f
YnVpbGRfYXJtLmMJV2VkIERlYyAxOSAxNjoyMzoyNyAyMDEyIC0wNTAwCisrKyBiL3Rvb2xzL2xp
YnhjL3hjX2h2bV9idWlsZF9hcm0uYwlXZWQgRGVjIDE5IDE2OjIzOjUyIDIwMTIgLTA1MDAKQEAg
LTIyLDcgKzIyLDcgQEAKICNpbmNsdWRlIDx4ZW5ndWVzdC5oPgogCiBpbnQgeGNfaHZtX2J1aWxk
KHhjX2ludGVyZmFjZSAqeGNoLCB1aW50MzJfdCBkb21pZCwKLSAgICAgICAgICAgICAgICAgY29u
c3Qgc3RydWN0IHhjX2h2bV9idWlsZF9hcmdzICpodm1fYXJncykKKyAgICAgICAgICAgICAgICAg
c3RydWN0IHhjX2h2bV9idWlsZF9hcmdzICpodm1fYXJncykKIHsKICAgICBlcnJubyA9IEVOT1NZ
UzsKICAgICByZXR1cm4gLTE7CmRpZmYgLXIgNGE4OWVjNDYwMGE3IHRvb2xzL2xpYnhjL3hjX2h2
bV9idWlsZF94ODYuYwotLS0gYS90b29scy9saWJ4Yy94Y19odm1fYnVpbGRfeDg2LmMJV2VkIERl
YyAxOSAxNjoyMzoyNyAyMDEyIC0wNTAwCisrKyBiL3Rvb2xzL2xpYnhjL3hjX2h2bV9idWlsZF94
ODYuYwlXZWQgRGVjIDE5IDE2OjIzOjUyIDIwMTIgLTA1MDAKQEAgLTQ5LDYgKzQ5LDQwIEBACiAj
ZGVmaW5lIE5SX1NQRUNJQUxfUEFHRVMgICAgIDgKICNkZWZpbmUgc3BlY2lhbF9wZm4oeCkgKDB4
ZmYwMDB1IC0gTlJfU1BFQ0lBTF9QQUdFUyArICh4KSkKIAorc3RhdGljIGludCBtb2R1bGVzX2lu
aXQoc3RydWN0IHhjX2h2bV9idWlsZF9hcmdzICphcmdzLAorICAgICAgICAgICAgICAgICAgICAg
ICAgdWludDY0X3QgdmVuZCwgc3RydWN0IGVsZl9iaW5hcnkgKmVsZiwKKyAgICAgICAgICAgICAg
ICAgICAgICAgIHVpbnQ2NF90ICptc3RhcnRfb3V0LCB1aW50NjRfdCAqbWVuZF9vdXQpCit7Cisj
ZGVmaW5lIE1PRFVMRV9BTElHTiAxVUwgPDwgNworI2RlZmluZSBNQl9BTElHTiAgICAgMVVMIDw8
IDIwCisjZGVmaW5lIE1LQUxJR04oeCwgYSkgKCgodWludDY0X3QpKHgpICsgKGEpIC0gMSkgJiB+
KHVpbnQ2NF90KSgoYSkgLSAxKSkKKyAgICB1aW50NjRfdCB0b3RhbF9sZW4gPSAwLCBvZmZzZXQx
ID0gMDsKKworICAgIGlmICggKGFyZ3MtPmFjcGlfbW9kdWxlLmxlbmd0aCA9PSAwKSYmKGFyZ3Mt
PnNtYmlvc19tb2R1bGUubGVuZ3RoID09IDApICkKKyAgICAgICAgcmV0dXJuIDA7CisKKyAgICAv
KiBGaW5kIHRoZSB0b3RhbCBsZW5ndGggZm9yIHRoZSBmaXJtd2FyZSBtb2R1bGVzIHdpdGggYSBy
ZWFzb25hYmxlIGxhcmdlCisgICAgICogYWxpZ25tZW50IHNpemUgdG8gYWxpZ24gZWFjaCB0aGUg
bW9kdWxlcy4KKyAgICAgKi8KKyAgICB0b3RhbF9sZW4gPSBNS0FMSUdOKGFyZ3MtPmFjcGlfbW9k
dWxlLmxlbmd0aCwgTU9EVUxFX0FMSUdOKTsKKyAgICBvZmZzZXQxID0gdG90YWxfbGVuOworICAg
IHRvdGFsX2xlbiArPSBNS0FMSUdOKGFyZ3MtPnNtYmlvc19tb2R1bGUubGVuZ3RoLCBNT0RVTEVf
QUxJR04pOworCisgICAgLyogV2FudCB0byBwbGFjZSB0aGUgbW9kdWxlcyAxTWIrY2hhbmdlIGJl
aGluZCB0aGUgbG9hZGVyIGltYWdlLiAqLworICAgICptc3RhcnRfb3V0ID0gTUtBTElHTihlbGYt
PnBlbmQsIE1CX0FMSUdOKSArIChNQl9BTElHTik7CisgICAgKm1lbmRfb3V0ID0gKm1zdGFydF9v
dXQgKyB0b3RhbF9sZW47CisKKyAgICBpZiAoICptZW5kX291dCA+IHZlbmQgKSAgICAKKyAgICAg
ICAgcmV0dXJuIC0xOworCisgICAgaWYgKCBhcmdzLT5hY3BpX21vZHVsZS5sZW5ndGggIT0gMCAp
CisgICAgICAgIGFyZ3MtPmFjcGlfbW9kdWxlLmd1ZXN0X2FkZHJfb3V0ID0gKm1zdGFydF9vdXQ7
CisgICAgaWYgKCBhcmdzLT5zbWJpb3NfbW9kdWxlLmxlbmd0aCAhPSAwICkKKyAgICAgICAgYXJn
cy0+c21iaW9zX21vZHVsZS5ndWVzdF9hZGRyX291dCA9ICptc3RhcnRfb3V0ICsgb2Zmc2V0MTsK
KworICAgIHJldHVybiAwOworfQorCiBzdGF0aWMgdm9pZCBidWlsZF9odm1faW5mbyh2b2lkICpo
dm1faW5mb19wYWdlLCB1aW50NjRfdCBtZW1fc2l6ZSwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHVpbnQ2NF90IG1taW9fc3RhcnQsIHVpbnQ2NF90IG1taW9fc2l6ZSkKIHsKQEAgLTg2LDkg
KzEyMCw4IEBAIHN0YXRpYyB2b2lkIGJ1aWxkX2h2bV9pbmZvKHZvaWQgKmh2bV9pbmYKICAgICBo
dm1faW5mby0+Y2hlY2tzdW0gPSAtc3VtOwogfQogCi1zdGF0aWMgaW50IGxvYWRlbGZpbWFnZSgK
LSAgICB4Y19pbnRlcmZhY2UgKnhjaCwKLSAgICBzdHJ1Y3QgZWxmX2JpbmFyeSAqZWxmLCB1aW50
MzJfdCBkb20sIHVuc2lnbmVkIGxvbmcgKnBhcnJheSkKK3N0YXRpYyBpbnQgbG9hZGVsZmltYWdl
KHhjX2ludGVyZmFjZSAqeGNoLCBzdHJ1Y3QgZWxmX2JpbmFyeSAqZWxmLAorICAgICAgICAgICAg
ICAgICAgICAgICAgdWludDMyX3QgZG9tLCB1bnNpZ25lZCBsb25nICpwYXJyYXkpCiB7CiAgICAg
cHJpdmNtZF9tbWFwX2VudHJ5X3QgKmVudHJpZXMgPSBOVUxMOwogICAgIHVuc2lnbmVkIGxvbmcg
cGZuX3N0YXJ0ID0gZWxmLT5wc3RhcnQgPj4gUEFHRV9TSElGVDsKQEAgLTEyNiw2ICsxNTksNjYg
QEAgc3RhdGljIGludCBsb2FkZWxmaW1hZ2UoCiAgICAgcmV0dXJuIHJjOwogfQogCitzdGF0aWMg
aW50IGxvYWRtb2R1bGVzKHhjX2ludGVyZmFjZSAqeGNoLAorICAgICAgICAgICAgICAgICAgICAg
ICBzdHJ1Y3QgeGNfaHZtX2J1aWxkX2FyZ3MgKmFyZ3MsCisgICAgICAgICAgICAgICAgICAgICAg
IHVpbnQ2NF90IG1zdGFydCwgdWludDY0X3QgbWVuZCwKKyAgICAgICAgICAgICAgICAgICAgICAg
dWludDMyX3QgZG9tLCB1bnNpZ25lZCBsb25nICpwYXJyYXkpCit7CisgICAgcHJpdmNtZF9tbWFw
X2VudHJ5X3QgKmVudHJpZXMgPSBOVUxMOworICAgIHVuc2lnbmVkIGxvbmcgcGZuX3N0YXJ0Owor
ICAgIHVuc2lnbmVkIGxvbmcgcGZuX2VuZDsKKyAgICBzaXplX3QgcGFnZXM7CisgICAgdWludDMy
X3QgaTsKKyAgICB1aW50OF90ICpkZXN0OworICAgIGludCByYyA9IC0xOworCisgICAgaWYgKCAo
bXN0YXJ0ID09IDApfHwobWVuZCA9PSAwKSApCisgICAgICAgIHJldHVybiAwOworCisgICAgcGZu
X3N0YXJ0ID0gKHVuc2lnbmVkIGxvbmcpKG1zdGFydCA+PiBQQUdFX1NISUZUKTsKKyAgICBwZm5f
ZW5kID0gKHVuc2lnbmVkIGxvbmcpKChtZW5kICsgUEFHRV9TSVpFIC0gMSkgPj4gUEFHRV9TSElG
VCk7CisgICAgcGFnZXMgPSBwZm5fZW5kIC0gcGZuX3N0YXJ0OworCisgICAgLyogTWFwIGFkZHJl
c3Mgc3BhY2UgZm9yIG1vZHVsZSBsaXN0LiAqLworICAgIGVudHJpZXMgPSBjYWxsb2MocGFnZXMs
IHNpemVvZihwcml2Y21kX21tYXBfZW50cnlfdCkpOworICAgIGlmICggZW50cmllcyA9PSBOVUxM
ICkKKyAgICAgICAgZ290byBlcnJvcl9vdXQ7CisKKyAgICBmb3IgKCBpID0gMDsgaSA8IHBhZ2Vz
OyBpKysgKQorICAgICAgICBlbnRyaWVzW2ldLm1mbiA9IHBhcnJheVsobXN0YXJ0ID4+IFBBR0Vf
U0hJRlQpICsgaV07CisKKyAgICBkZXN0ID0geGNfbWFwX2ZvcmVpZ25fcmFuZ2VzKAorICAgICAg
ICB4Y2gsIGRvbSwgcGFnZXMgPDwgUEFHRV9TSElGVCwgUFJPVF9SRUFEIHwgUFJPVF9XUklURSwg
MSA8PCBQQUdFX1NISUZULAorICAgICAgICBlbnRyaWVzLCBwYWdlcyk7CisgICAgaWYgKCBkZXN0
ID09IE5VTEwgKQorICAgICAgICBnb3RvIGVycm9yX291dDsKKworICAgIC8qIFplcm8gdGhlIHJh
bmdlIHNvIHBhZGRpbmcgaXMgY2xlYXIgYmV0d2VlbiBtb2R1bGVzICovCisgICAgbWVtc2V0KGRl
c3QsIDAsIHBhZ2VzIDw8IFBBR0VfU0hJRlQpOworCisgICAgLyogTG9hZCBtb2R1bGVzIGludG8g
cmFuZ2UgKi8gICAgCisgICAgaWYgKCBhcmdzLT5hY3BpX21vZHVsZS5sZW5ndGggIT0gMCApCisg
ICAgeworICAgICAgICBtZW1jcHkoZGVzdCwKKyAgICAgICAgICAgICAgIGFyZ3MtPmFjcGlfbW9k
dWxlLmRhdGEsCisgICAgICAgICAgICAgICBhcmdzLT5hY3BpX21vZHVsZS5sZW5ndGgpOworICAg
IH0KKyAgICBpZiAoIGFyZ3MtPnNtYmlvc19tb2R1bGUubGVuZ3RoICE9IDAgKQorICAgIHsKKyAg
ICAgICAgbWVtY3B5KGRlc3QgKyAoYXJncy0+c21iaW9zX21vZHVsZS5ndWVzdF9hZGRyX291dCAt
IG1zdGFydCksCisgICAgICAgICAgICAgICBhcmdzLT5zbWJpb3NfbW9kdWxlLmRhdGEsCisgICAg
ICAgICAgICAgICBhcmdzLT5zbWJpb3NfbW9kdWxlLmxlbmd0aCk7CisgICAgfQorCisgICAgbXVu
bWFwKGRlc3QsIHBhZ2VzIDw8IFBBR0VfU0hJRlQpOworICAgIHJjID0gMDsKKworIGVycm9yX291
dDoKKyAgICBmcmVlKGVudHJpZXMpOworCisgICAgcmV0dXJuIHJjOworfQorCiAvKgogICogQ2hl
Y2sgd2hldGhlciB0aGVyZSBleGlzdHMgbW1pbyBob2xlIGluIHRoZSBzcGVjaWZpZWQgbWVtb3J5
IHJhbmdlLgogICogUmV0dXJucyAxIGlmIGV4aXN0cywgZWxzZSByZXR1cm5zIDAuCkBAIC0xNDAs
NyArMjMzLDcgQEAgc3RhdGljIGludCBjaGVja19tbWlvX2hvbGUodWludDY0X3Qgc3RhcgogfQog
CiBzdGF0aWMgaW50IHNldHVwX2d1ZXN0KHhjX2ludGVyZmFjZSAqeGNoLAotICAgICAgICAgICAg
ICAgICAgICAgICB1aW50MzJfdCBkb20sIGNvbnN0IHN0cnVjdCB4Y19odm1fYnVpbGRfYXJncyAq
YXJncywKKyAgICAgICAgICAgICAgICAgICAgICAgdWludDMyX3QgZG9tLCBzdHJ1Y3QgeGNfaHZt
X2J1aWxkX2FyZ3MgKmFyZ3MsCiAgICAgICAgICAgICAgICAgICAgICAgIGNoYXIgKmltYWdlLCB1
bnNpZ25lZCBsb25nIGltYWdlX3NpemUpCiB7CiAgICAgeGVuX3Bmbl90ICpwYWdlX2FycmF5ID0g
TlVMTDsKQEAgLTE1Myw2ICsyNDYsNyBAQCBzdGF0aWMgaW50IHNldHVwX2d1ZXN0KHhjX2ludGVy
ZmFjZSAqeGNoCiAgICAgdWludDMyX3QgKmlkZW50X3B0OwogICAgIHN0cnVjdCBlbGZfYmluYXJ5
IGVsZjsKICAgICB1aW50NjRfdCB2X3N0YXJ0LCB2X2VuZDsKKyAgICB1aW50NjRfdCBtX3N0YXJ0
ID0gMCwgbV9lbmQgPSAwOwogICAgIGludCByYzsKICAgICB4ZW5fY2FwYWJpbGl0aWVzX2luZm9f
dCBjYXBzOwogICAgIHVuc2lnbmVkIGxvbmcgc3RhdF9ub3JtYWxfcGFnZXMgPSAwLCBzdGF0XzJt
Yl9wYWdlcyA9IDAsIApAQCAtMTc4LDExICsyNzIsMTkgQEAgc3RhdGljIGludCBzZXR1cF9ndWVz
dCh4Y19pbnRlcmZhY2UgKnhjaAogICAgICAgICBnb3RvIGVycm9yX291dDsKICAgICB9CiAKKyAg
ICBpZiAoIG1vZHVsZXNfaW5pdChhcmdzLCB2X2VuZCwgJmVsZiwgJm1fc3RhcnQsICZtX2VuZCkg
IT0gMCApCisgICAgeworICAgICAgICBFUlJPUigiSW5zdWZmaWNpZW50IHNwYWNlIHRvIGxvYWQg
bW9kdWxlcy4iKTsKKyAgICAgICAgZ290byBlcnJvcl9vdXQ7CisgICAgfQorCiAgICAgSVBSSU5U
RigiVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6XG4iCiAgICAgICAgICAgICAiICBMb2FkZXI6
ICAgICAgICAlMDE2IlBSSXg2NCItPiUwMTYiUFJJeDY0IlxuIgorICAgICAgICAgICAgIiAgTW9k
dWxlczogICAgICAgJTAxNiJQUkl4NjQiLT4lMDE2IlBSSXg2NCJcbiIKICAgICAgICAgICAgICIg
IFRPVEFMOiAgICAgICAgICUwMTYiUFJJeDY0Ii0+JTAxNiJQUkl4NjQiXG4iCiAgICAgICAgICAg
ICAiICBFTlRSWSBBRERSRVNTOiAlMDE2IlBSSXg2NCJcbiIsCiAgICAgICAgICAgICBlbGYucHN0
YXJ0LCBlbGYucGVuZCwKKyAgICAgICAgICAgIG1fc3RhcnQsIG1fZW5kLAogICAgICAgICAgICAg
dl9zdGFydCwgdl9lbmQsCiAgICAgICAgICAgICBlbGZfdXZhbCgmZWxmLCBlbGYuZWhkciwgZV9l
bnRyeSkpOwogCkBAIC0zMzcsNiArNDM5LDkgQEAgc3RhdGljIGludCBzZXR1cF9ndWVzdCh4Y19p
bnRlcmZhY2UgKnhjaAogICAgIGlmICggbG9hZGVsZmltYWdlKHhjaCwgJmVsZiwgZG9tLCBwYWdl
X2FycmF5KSAhPSAwICkKICAgICAgICAgZ290byBlcnJvcl9vdXQ7CiAKKyAgICBpZiAoIGxvYWRt
b2R1bGVzKHhjaCwgYXJncywgbV9zdGFydCwgbV9lbmQsIGRvbSwgcGFnZV9hcnJheSkgIT0gMCAp
CisgICAgICAgIGdvdG8gZXJyb3Jfb3V0OyAgICAKKwogICAgIGlmICggKGh2bV9pbmZvX3BhZ2Ug
PSB4Y19tYXBfZm9yZWlnbl9yYW5nZSgKICAgICAgICAgICAgICAgeGNoLCBkb20sIFBBR0VfU0la
RSwgUFJPVF9SRUFEIHwgUFJPVF9XUklURSwKICAgICAgICAgICAgICAgSFZNX0lORk9fUEZOKSkg
PT0gTlVMTCApCkBAIC00MTMsNyArNTE4LDcgQEAgc3RhdGljIGludCBzZXR1cF9ndWVzdCh4Y19p
bnRlcmZhY2UgKnhjaAogICogQ3JlYXRlIGEgZG9tYWluIGZvciBhIHZpcnR1YWxpemVkIExpbnV4
LCB1c2luZyBmaWxlcy9maWxlbmFtZXMuCiAgKi8KIGludCB4Y19odm1fYnVpbGQoeGNfaW50ZXJm
YWNlICp4Y2gsIHVpbnQzMl90IGRvbWlkLAotICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3Qg
eGNfaHZtX2J1aWxkX2FyZ3MgKmh2bV9hcmdzKQorICAgICAgICAgICAgICAgICBzdHJ1Y3QgeGNf
aHZtX2J1aWxkX2FyZ3MgKmh2bV9hcmdzKQogewogICAgIHN0cnVjdCB4Y19odm1fYnVpbGRfYXJn
cyBhcmdzID0gKmh2bV9hcmdzOwogICAgIHZvaWQgKmltYWdlOwpAQCAtNDQxLDYgKzU0NiwxNSBA
QCBpbnQgeGNfaHZtX2J1aWxkKHhjX2ludGVyZmFjZSAqeGNoLCB1aW50CiAKICAgICBzdHMgPSBz
ZXR1cF9ndWVzdCh4Y2gsIGRvbWlkLCAmYXJncywgaW1hZ2UsIGltYWdlX3NpemUpOwogCisgICAg
aWYgKCFzdHMpCisgICAgeworICAgICAgICAvKiBSZXR1cm4gbW9kdWxlIGxvYWQgYWRkcmVzc2Vz
IHRvIGNhbGxlciAqLworICAgICAgICBodm1fYXJncy0+YWNwaV9tb2R1bGUuZ3Vlc3RfYWRkcl9v
dXQgPSAKKyAgICAgICAgICAgIGFyZ3MuYWNwaV9tb2R1bGUuZ3Vlc3RfYWRkcl9vdXQ7CisgICAg
ICAgIGh2bV9hcmdzLT5zbWJpb3NfbW9kdWxlLmd1ZXN0X2FkZHJfb3V0ID0gCisgICAgICAgICAg
ICBhcmdzLnNtYmlvc19tb2R1bGUuZ3Vlc3RfYWRkcl9vdXQ7CisgICAgfQorCiAgICAgZnJlZShp
bWFnZSk7CiAKICAgICByZXR1cm4gc3RzOwpAQCAtNDYxLDYgKzU3NSw3IEBAIGludCB4Y19odm1f
YnVpbGRfdGFyZ2V0X21lbSh4Y19pbnRlcmZhY2UKIHsKICAgICBzdHJ1Y3QgeGNfaHZtX2J1aWxk
X2FyZ3MgYXJncyA9IHt9OwogCisgICAgbWVtc2V0KCZhcmdzLCAwLCBzaXplb2Yoc3RydWN0IHhj
X2h2bV9idWlsZF9hcmdzKSk7CiAgICAgYXJncy5tZW1fc2l6ZSA9ICh1aW50NjRfdCltZW1zaXpl
IDw8IDIwOwogICAgIGFyZ3MubWVtX3RhcmdldCA9ICh1aW50NjRfdCl0YXJnZXQgPDwgMjA7CiAg
ICAgYXJncy5pbWFnZV9maWxlX25hbWUgPSBpbWFnZV9uYW1lOwpkaWZmIC1yIDRhODllYzQ2MDBh
NyB0b29scy9saWJ4Yy94ZW5ndWVzdC5oCi0tLSBhL3Rvb2xzL2xpYnhjL3hlbmd1ZXN0LmgJV2Vk
IERlYyAxOSAxNjoyMzoyNyAyMDEyIC0wNTAwCisrKyBiL3Rvb2xzL2xpYnhjL3hlbmd1ZXN0LmgJ
V2VkIERlYyAxOSAxNjoyMzo1MiAyMDEyIC0wNTAwCkBAIC0yMTEsMTEgKzIxMSwyMyBAQCBpbnQg
eGNfbGludXhfYnVpbGRfbWVtKHhjX2ludGVyZmFjZSAqeGNoCiAgICAgICAgICAgICAgICAgICAg
ICAgIHVuc2lnbmVkIGludCBjb25zb2xlX2V2dGNobiwKICAgICAgICAgICAgICAgICAgICAgICAg
dW5zaWduZWQgbG9uZyAqY29uc29sZV9tZm4pOwogCitzdHJ1Y3QgeGNfaHZtX2Zpcm13YXJlX21v
ZHVsZSB7CisgICAgdWludDhfdCAgKmRhdGE7CisgICAgdWludDMyX3QgIGxlbmd0aDsKKyAgICB1
aW50NjRfdCAgZ3Vlc3RfYWRkcl9vdXQ7Cit9OworCiBzdHJ1Y3QgeGNfaHZtX2J1aWxkX2FyZ3Mg
ewogICAgIHVpbnQ2NF90IG1lbV9zaXplOyAgICAgICAgICAgLyogTWVtb3J5IHNpemUgaW4gYnl0
ZXMuICovCiAgICAgdWludDY0X3QgbWVtX3RhcmdldDsgICAgICAgICAvKiBNZW1vcnkgdGFyZ2V0
IGluIGJ5dGVzLiAqLwogICAgIHVpbnQ2NF90IG1taW9fc2l6ZTsgICAgICAgICAgLyogU2l6ZSBv
ZiB0aGUgTU1JTyBob2xlIGluIGJ5dGVzLiAqLwogICAgIGNvbnN0IGNoYXIgKmltYWdlX2ZpbGVf
bmFtZTsgLyogRmlsZSBuYW1lIG9mIHRoZSBpbWFnZSB0byBsb2FkLiAqLworCisgICAgLyogRXh0
cmEgQUNQSSB0YWJsZXMgcGFzc2VkIHRvIEhWTUxPQURFUiAqLworICAgIHN0cnVjdCB4Y19odm1f
ZmlybXdhcmVfbW9kdWxlIGFjcGlfbW9kdWxlOworCisgICAgLyogRXh0cmEgU01CSU9TIHN0cnVj
dHVyZXMgcGFzc2VkIHRvIEhWTUxPQURFUiAqLworICAgIHN0cnVjdCB4Y19odm1fZmlybXdhcmVf
bW9kdWxlIHNtYmlvc19tb2R1bGU7CiB9OwogCiAvKioKQEAgLTIyOCw3ICsyNDAsNyBAQCBzdHJ1
Y3QgeGNfaHZtX2J1aWxkX2FyZ3MgewogICogYXJlIG9wdGlvbmFsLgogICovCiBpbnQgeGNfaHZt
X2J1aWxkKHhjX2ludGVyZmFjZSAqeGNoLCB1aW50MzJfdCBkb21pZCwKLSAgICAgICAgICAgICAg
ICAgY29uc3Qgc3RydWN0IHhjX2h2bV9idWlsZF9hcmdzICpodm1fYXJncyk7CisgICAgICAgICAg
ICAgICAgIHN0cnVjdCB4Y19odm1fYnVpbGRfYXJncyAqaHZtX2FyZ3MpOwogCiBpbnQgeGNfaHZt
X2J1aWxkX3RhcmdldF9tZW0oeGNfaW50ZXJmYWNlICp4Y2gsCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgdWludDMyX3QgZG9taWQsCmRpZmYgLXIgNGE4OWVjNDYwMGE3IHRvb2xzL2xpYnhj
L3hnX3ByaXZhdGUuYwotLS0gYS90b29scy9saWJ4Yy94Z19wcml2YXRlLmMJV2VkIERlYyAxOSAx
NjoyMzoyNyAyMDEyIC0wNTAwCisrKyBiL3Rvb2xzL2xpYnhjL3hnX3ByaXZhdGUuYwlXZWQgRGVj
IDE5IDE2OjIzOjUyIDIwMTIgLTA1MDAKQEAgLTE5Miw3ICsxOTIsNyBAQCB1bnNpZ25lZCBsb25n
IGNzdW1fcGFnZSh2b2lkICpwYWdlKQogX19hdHRyaWJ1dGVfXygod2VhaykpIAogICAgIGludCB4
Y19odm1fYnVpbGQoeGNfaW50ZXJmYWNlICp4Y2gsCiAgICAgICAgICAgICAgICAgICAgICB1aW50
MzJfdCBkb21pZCwKLSAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0cnVjdCB4Y19odm1fYnVp
bGRfYXJncyAqaHZtX2FyZ3MpCisgICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgeGNfaHZtX2J1
aWxkX2FyZ3MgKmh2bV9hcmdzKQogewogICAgIGVycm5vID0gRU5PU1lTOwogICAgIHJldHVybiAt
MTsK

--_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_--


X3QgaTsKKyAgICB1aW50OF90ICpkZXN0OworICAgIGludCByYyA9IC0xOworCisgICAgaWYgKCAo
bXN0YXJ0ID09IDApfHwobWVuZCA9PSAwKSApCisgICAgICAgIHJldHVybiAwOworCisgICAgcGZu
X3N0YXJ0ID0gKHVuc2lnbmVkIGxvbmcpKG1zdGFydCA+PiBQQUdFX1NISUZUKTsKKyAgICBwZm5f
ZW5kID0gKHVuc2lnbmVkIGxvbmcpKChtZW5kICsgUEFHRV9TSVpFIC0gMSkgPj4gUEFHRV9TSElG
VCk7CisgICAgcGFnZXMgPSBwZm5fZW5kIC0gcGZuX3N0YXJ0OworCisgICAgLyogTWFwIGFkZHJl
c3Mgc3BhY2UgZm9yIG1vZHVsZSBsaXN0LiAqLworICAgIGVudHJpZXMgPSBjYWxsb2MocGFnZXMs
IHNpemVvZihwcml2Y21kX21tYXBfZW50cnlfdCkpOworICAgIGlmICggZW50cmllcyA9PSBOVUxM
ICkKKyAgICAgICAgZ290byBlcnJvcl9vdXQ7CisKKyAgICBmb3IgKCBpID0gMDsgaSA8IHBhZ2Vz
OyBpKysgKQorICAgICAgICBlbnRyaWVzW2ldLm1mbiA9IHBhcnJheVsobXN0YXJ0ID4+IFBBR0Vf
U0hJRlQpICsgaV07CisKKyAgICBkZXN0ID0geGNfbWFwX2ZvcmVpZ25fcmFuZ2VzKAorICAgICAg
ICB4Y2gsIGRvbSwgcGFnZXMgPDwgUEFHRV9TSElGVCwgUFJPVF9SRUFEIHwgUFJPVF9XUklURSwg
MSA8PCBQQUdFX1NISUZULAorICAgICAgICBlbnRyaWVzLCBwYWdlcyk7CisgICAgaWYgKCBkZXN0
ID09IE5VTEwgKQorICAgICAgICBnb3RvIGVycm9yX291dDsKKworICAgIC8qIFplcm8gdGhlIHJh
bmdlIHNvIHBhZGRpbmcgaXMgY2xlYXIgYmV0d2VlbiBtb2R1bGVzICovCisgICAgbWVtc2V0KGRl
c3QsIDAsIHBhZ2VzIDw8IFBBR0VfU0hJRlQpOworCisgICAgLyogTG9hZCBtb2R1bGVzIGludG8g
cmFuZ2UgKi8gICAgCisgICAgaWYgKCBhcmdzLT5hY3BpX21vZHVsZS5sZW5ndGggIT0gMCApCisg
ICAgeworICAgICAgICBtZW1jcHkoZGVzdCwKKyAgICAgICAgICAgICAgIGFyZ3MtPmFjcGlfbW9k
dWxlLmRhdGEsCisgICAgICAgICAgICAgICBhcmdzLT5hY3BpX21vZHVsZS5sZW5ndGgpOworICAg
IH0KKyAgICBpZiAoIGFyZ3MtPnNtYmlvc19tb2R1bGUubGVuZ3RoICE9IDAgKQorICAgIHsKKyAg
ICAgICAgbWVtY3B5KGRlc3QgKyAoYXJncy0+c21iaW9zX21vZHVsZS5ndWVzdF9hZGRyX291dCAt
IG1zdGFydCksCisgICAgICAgICAgICAgICBhcmdzLT5zbWJpb3NfbW9kdWxlLmRhdGEsCisgICAg
ICAgICAgICAgICBhcmdzLT5zbWJpb3NfbW9kdWxlLmxlbmd0aCk7CisgICAgfQorCisgICAgbXVu
bWFwKGRlc3QsIHBhZ2VzIDw8IFBBR0VfU0hJRlQpOworICAgIHJjID0gMDsKKworIGVycm9yX291
dDoKKyAgICBmcmVlKGVudHJpZXMpOworCisgICAgcmV0dXJuIHJjOworfQorCiAvKgogICogQ2hl
Y2sgd2hldGhlciB0aGVyZSBleGlzdHMgbW1pbyBob2xlIGluIHRoZSBzcGVjaWZpZWQgbWVtb3J5
IHJhbmdlLgogICogUmV0dXJucyAxIGlmIGV4aXN0cywgZWxzZSByZXR1cm5zIDAuCkBAIC0xNDAs
NyArMjMzLDcgQEAgc3RhdGljIGludCBjaGVja19tbWlvX2hvbGUodWludDY0X3Qgc3RhcgogfQog
CiBzdGF0aWMgaW50IHNldHVwX2d1ZXN0KHhjX2ludGVyZmFjZSAqeGNoLAotICAgICAgICAgICAg
ICAgICAgICAgICB1aW50MzJfdCBkb20sIGNvbnN0IHN0cnVjdCB4Y19odm1fYnVpbGRfYXJncyAq
YXJncywKKyAgICAgICAgICAgICAgICAgICAgICAgdWludDMyX3QgZG9tLCBzdHJ1Y3QgeGNfaHZt
X2J1aWxkX2FyZ3MgKmFyZ3MsCiAgICAgICAgICAgICAgICAgICAgICAgIGNoYXIgKmltYWdlLCB1
bnNpZ25lZCBsb25nIGltYWdlX3NpemUpCiB7CiAgICAgeGVuX3Bmbl90ICpwYWdlX2FycmF5ID0g
TlVMTDsKQEAgLTE1Myw2ICsyNDYsNyBAQCBzdGF0aWMgaW50IHNldHVwX2d1ZXN0KHhjX2ludGVy
ZmFjZSAqeGNoCiAgICAgdWludDMyX3QgKmlkZW50X3B0OwogICAgIHN0cnVjdCBlbGZfYmluYXJ5
IGVsZjsKICAgICB1aW50NjRfdCB2X3N0YXJ0LCB2X2VuZDsKKyAgICB1aW50NjRfdCBtX3N0YXJ0
ID0gMCwgbV9lbmQgPSAwOwogICAgIGludCByYzsKICAgICB4ZW5fY2FwYWJpbGl0aWVzX2luZm9f
dCBjYXBzOwogICAgIHVuc2lnbmVkIGxvbmcgc3RhdF9ub3JtYWxfcGFnZXMgPSAwLCBzdGF0XzJt
Yl9wYWdlcyA9IDAsIApAQCAtMTc4LDExICsyNzIsMTkgQEAgc3RhdGljIGludCBzZXR1cF9ndWVz
dCh4Y19pbnRlcmZhY2UgKnhjaAogICAgICAgICBnb3RvIGVycm9yX291dDsKICAgICB9CiAKKyAg
ICBpZiAoIG1vZHVsZXNfaW5pdChhcmdzLCB2X2VuZCwgJmVsZiwgJm1fc3RhcnQsICZtX2VuZCkg
IT0gMCApCisgICAgeworICAgICAgICBFUlJPUigiSW5zdWZmaWNpZW50IHNwYWNlIHRvIGxvYWQg
bW9kdWxlcy4iKTsKKyAgICAgICAgZ290byBlcnJvcl9vdXQ7CisgICAgfQorCiAgICAgSVBSSU5U
RigiVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6XG4iCiAgICAgICAgICAgICAiICBMb2FkZXI6
ICAgICAgICAlMDE2IlBSSXg2NCItPiUwMTYiUFJJeDY0IlxuIgorICAgICAgICAgICAgIiAgTW9k
dWxlczogICAgICAgJTAxNiJQUkl4NjQiLT4lMDE2IlBSSXg2NCJcbiIKICAgICAgICAgICAgICIg
IFRPVEFMOiAgICAgICAgICUwMTYiUFJJeDY0Ii0+JTAxNiJQUkl4NjQiXG4iCiAgICAgICAgICAg
ICAiICBFTlRSWSBBRERSRVNTOiAlMDE2IlBSSXg2NCJcbiIsCiAgICAgICAgICAgICBlbGYucHN0
YXJ0LCBlbGYucGVuZCwKKyAgICAgICAgICAgIG1fc3RhcnQsIG1fZW5kLAogICAgICAgICAgICAg
dl9zdGFydCwgdl9lbmQsCiAgICAgICAgICAgICBlbGZfdXZhbCgmZWxmLCBlbGYuZWhkciwgZV9l
bnRyeSkpOwogCkBAIC0zMzcsNiArNDM5LDkgQEAgc3RhdGljIGludCBzZXR1cF9ndWVzdCh4Y19p
bnRlcmZhY2UgKnhjaAogICAgIGlmICggbG9hZGVsZmltYWdlKHhjaCwgJmVsZiwgZG9tLCBwYWdl
X2FycmF5KSAhPSAwICkKICAgICAgICAgZ290byBlcnJvcl9vdXQ7CiAKKyAgICBpZiAoIGxvYWRt
b2R1bGVzKHhjaCwgYXJncywgbV9zdGFydCwgbV9lbmQsIGRvbSwgcGFnZV9hcnJheSkgIT0gMCAp
CisgICAgICAgIGdvdG8gZXJyb3Jfb3V0OyAgICAKKwogICAgIGlmICggKGh2bV9pbmZvX3BhZ2Ug
PSB4Y19tYXBfZm9yZWlnbl9yYW5nZSgKICAgICAgICAgICAgICAgeGNoLCBkb20sIFBBR0VfU0la
RSwgUFJPVF9SRUFEIHwgUFJPVF9XUklURSwKICAgICAgICAgICAgICAgSFZNX0lORk9fUEZOKSkg
PT0gTlVMTCApCkBAIC00MTMsNyArNTE4LDcgQEAgc3RhdGljIGludCBzZXR1cF9ndWVzdCh4Y19p
bnRlcmZhY2UgKnhjaAogICogQ3JlYXRlIGEgZG9tYWluIGZvciBhIHZpcnR1YWxpemVkIExpbnV4
LCB1c2luZyBmaWxlcy9maWxlbmFtZXMuCiAgKi8KIGludCB4Y19odm1fYnVpbGQoeGNfaW50ZXJm
YWNlICp4Y2gsIHVpbnQzMl90IGRvbWlkLAotICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3Qg
eGNfaHZtX2J1aWxkX2FyZ3MgKmh2bV9hcmdzKQorICAgICAgICAgICAgICAgICBzdHJ1Y3QgeGNf
aHZtX2J1aWxkX2FyZ3MgKmh2bV9hcmdzKQogewogICAgIHN0cnVjdCB4Y19odm1fYnVpbGRfYXJn
cyBhcmdzID0gKmh2bV9hcmdzOwogICAgIHZvaWQgKmltYWdlOwpAQCAtNDQxLDYgKzU0NiwxNSBA
QCBpbnQgeGNfaHZtX2J1aWxkKHhjX2ludGVyZmFjZSAqeGNoLCB1aW50CiAKICAgICBzdHMgPSBz
ZXR1cF9ndWVzdCh4Y2gsIGRvbWlkLCAmYXJncywgaW1hZ2UsIGltYWdlX3NpemUpOwogCisgICAg
aWYgKCFzdHMpCisgICAgeworICAgICAgICAvKiBSZXR1cm4gbW9kdWxlIGxvYWQgYWRkcmVzc2Vz
IHRvIGNhbGxlciAqLworICAgICAgICBodm1fYXJncy0+YWNwaV9tb2R1bGUuZ3Vlc3RfYWRkcl9v
dXQgPSAKKyAgICAgICAgICAgIGFyZ3MuYWNwaV9tb2R1bGUuZ3Vlc3RfYWRkcl9vdXQ7CisgICAg
ICAgIGh2bV9hcmdzLT5zbWJpb3NfbW9kdWxlLmd1ZXN0X2FkZHJfb3V0ID0gCisgICAgICAgICAg
ICBhcmdzLnNtYmlvc19tb2R1bGUuZ3Vlc3RfYWRkcl9vdXQ7CisgICAgfQorCiAgICAgZnJlZShp
bWFnZSk7CiAKICAgICByZXR1cm4gc3RzOwpAQCAtNDYxLDYgKzU3NSw3IEBAIGludCB4Y19odm1f
YnVpbGRfdGFyZ2V0X21lbSh4Y19pbnRlcmZhY2UKIHsKICAgICBzdHJ1Y3QgeGNfaHZtX2J1aWxk
X2FyZ3MgYXJncyA9IHt9OwogCisgICAgbWVtc2V0KCZhcmdzLCAwLCBzaXplb2Yoc3RydWN0IHhj
X2h2bV9idWlsZF9hcmdzKSk7CiAgICAgYXJncy5tZW1fc2l6ZSA9ICh1aW50NjRfdCltZW1zaXpl
IDw8IDIwOwogICAgIGFyZ3MubWVtX3RhcmdldCA9ICh1aW50NjRfdCl0YXJnZXQgPDwgMjA7CiAg
ICAgYXJncy5pbWFnZV9maWxlX25hbWUgPSBpbWFnZV9uYW1lOwpkaWZmIC1yIDRhODllYzQ2MDBh
NyB0b29scy9saWJ4Yy94ZW5ndWVzdC5oCi0tLSBhL3Rvb2xzL2xpYnhjL3hlbmd1ZXN0LmgJV2Vk
IERlYyAxOSAxNjoyMzoyNyAyMDEyIC0wNTAwCisrKyBiL3Rvb2xzL2xpYnhjL3hlbmd1ZXN0LmgJ
V2VkIERlYyAxOSAxNjoyMzo1MiAyMDEyIC0wNTAwCkBAIC0yMTEsMTEgKzIxMSwyMyBAQCBpbnQg
eGNfbGludXhfYnVpbGRfbWVtKHhjX2ludGVyZmFjZSAqeGNoCiAgICAgICAgICAgICAgICAgICAg
ICAgIHVuc2lnbmVkIGludCBjb25zb2xlX2V2dGNobiwKICAgICAgICAgICAgICAgICAgICAgICAg
dW5zaWduZWQgbG9uZyAqY29uc29sZV9tZm4pOwogCitzdHJ1Y3QgeGNfaHZtX2Zpcm13YXJlX21v
ZHVsZSB7CisgICAgdWludDhfdCAgKmRhdGE7CisgICAgdWludDMyX3QgIGxlbmd0aDsKKyAgICB1
aW50NjRfdCAgZ3Vlc3RfYWRkcl9vdXQ7Cit9OworCiBzdHJ1Y3QgeGNfaHZtX2J1aWxkX2FyZ3Mg
ewogICAgIHVpbnQ2NF90IG1lbV9zaXplOyAgICAgICAgICAgLyogTWVtb3J5IHNpemUgaW4gYnl0
ZXMuICovCiAgICAgdWludDY0X3QgbWVtX3RhcmdldDsgICAgICAgICAvKiBNZW1vcnkgdGFyZ2V0
IGluIGJ5dGVzLiAqLwogICAgIHVpbnQ2NF90IG1taW9fc2l6ZTsgICAgICAgICAgLyogU2l6ZSBv
ZiB0aGUgTU1JTyBob2xlIGluIGJ5dGVzLiAqLwogICAgIGNvbnN0IGNoYXIgKmltYWdlX2ZpbGVf
bmFtZTsgLyogRmlsZSBuYW1lIG9mIHRoZSBpbWFnZSB0byBsb2FkLiAqLworCisgICAgLyogRXh0
cmEgQUNQSSB0YWJsZXMgcGFzc2VkIHRvIEhWTUxPQURFUiAqLworICAgIHN0cnVjdCB4Y19odm1f
ZmlybXdhcmVfbW9kdWxlIGFjcGlfbW9kdWxlOworCisgICAgLyogRXh0cmEgU01CSU9TIHN0cnVj
dHVyZXMgcGFzc2VkIHRvIEhWTUxPQURFUiAqLworICAgIHN0cnVjdCB4Y19odm1fZmlybXdhcmVf
bW9kdWxlIHNtYmlvc19tb2R1bGU7CiB9OwogCiAvKioKQEAgLTIyOCw3ICsyNDAsNyBAQCBzdHJ1
Y3QgeGNfaHZtX2J1aWxkX2FyZ3MgewogICogYXJlIG9wdGlvbmFsLgogICovCiBpbnQgeGNfaHZt
X2J1aWxkKHhjX2ludGVyZmFjZSAqeGNoLCB1aW50MzJfdCBkb21pZCwKLSAgICAgICAgICAgICAg
ICAgY29uc3Qgc3RydWN0IHhjX2h2bV9idWlsZF9hcmdzICpodm1fYXJncyk7CisgICAgICAgICAg
ICAgICAgIHN0cnVjdCB4Y19odm1fYnVpbGRfYXJncyAqaHZtX2FyZ3MpOwogCiBpbnQgeGNfaHZt
X2J1aWxkX3RhcmdldF9tZW0oeGNfaW50ZXJmYWNlICp4Y2gsCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgdWludDMyX3QgZG9taWQsCmRpZmYgLXIgNGE4OWVjNDYwMGE3IHRvb2xzL2xpYnhj
L3hnX3ByaXZhdGUuYwotLS0gYS90b29scy9saWJ4Yy94Z19wcml2YXRlLmMJV2VkIERlYyAxOSAx
NjoyMzoyNyAyMDEyIC0wNTAwCisrKyBiL3Rvb2xzL2xpYnhjL3hnX3ByaXZhdGUuYwlXZWQgRGVj
IDE5IDE2OjIzOjUyIDIwMTIgLTA1MDAKQEAgLTE5Miw3ICsxOTIsNyBAQCB1bnNpZ25lZCBsb25n
IGNzdW1fcGFnZSh2b2lkICpwYWdlKQogX19hdHRyaWJ1dGVfXygod2VhaykpIAogICAgIGludCB4
Y19odm1fYnVpbGQoeGNfaW50ZXJmYWNlICp4Y2gsCiAgICAgICAgICAgICAgICAgICAgICB1aW50
MzJfdCBkb21pZCwKLSAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0cnVjdCB4Y19odm1fYnVp
bGRfYXJncyAqaHZtX2FyZ3MpCisgICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgeGNfaHZtX2J1
aWxkX2FyZ3MgKmh2bV9hcmdzKQogewogICAgIGVycm5vID0gRU5PU1lTOwogICAgIHJldHVybiAt
MTsK

--_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_009_831D55AF5A11D64C9B4B43F59EEBF720A31F6B6461FTLPMAILBOX02_--


From xen-devel-bounces@lists.xen.org Thu Dec 20 19:42:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 19:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlm0c-0004SX-6k; Thu, 20 Dec 2012 19:42:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tlm0a-0004SQ-8D
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 19:42:20 +0000
Received: from [85.158.139.211:11053] by server-8.bemta-5.messagelabs.com id
	41/F1-15003-B1A63D05; Thu, 20 Dec 2012 19:42:19 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1356032535!20527851!1
X-Originating-IP: [209.85.220.180]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15115 invoked from network); 20 Dec 2012 19:42:16 -0000
Received: from mail-vc0-f180.google.com (HELO mail-vc0-f180.google.com)
	(209.85.220.180)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 19:42:16 -0000
Received: by mail-vc0-f180.google.com with SMTP id p16so4184586vcq.39
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 11:41:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=iz0kqulMpTpgjmD0WbSRhYEZYcWKuKkMCTOuiJricWo=;
	b=rBLB9LBhTKn9+BBBQzjrVgZ+jwsmpnMuLp0eN16DdncKiLbFaVy/2UXlvJ5HoKuxFo
	Zh33gkHb/XEkwBiMLfGLknwOTPNnfNS3bSniOPJiPl6zDTFC6tZUzRL52qw6JD+9JHsQ
	kjeSJy0rvIC+g4kmI8v4C2uydEgZa8PSCPnkL/IvAkpVu+xJFr8qaJDBM/sJX7i6jWNQ
	bVJRRQe0rF+lR7AGlzwC+WJpRz+9yo2+chi++ObecDUJq/Iq49wvHv4bzucQLta07vtg
	FiXr9F58yt8dkcL21KmF4gNECiFdFCpEhA+P+FijVPbqQiR2EAD0GbyTlxrClNexW7Rq
	hFjA==
MIME-Version: 1.0
Received: by 10.220.106.147 with SMTP id x19mr16224999vco.37.1356032475572;
	Thu, 20 Dec 2012 11:41:15 -0800 (PST)
Received: by 10.58.187.84 with HTTP; Thu, 20 Dec 2012 11:41:15 -0800 (PST)
In-Reply-To: <50D335E6.902@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
Date: Thu, 20 Dec 2012 14:41:15 -0500
X-Google-Sender-Auth: e6mzax8x0rH-WG__jyUQHFiUJDo
Message-ID: <CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Content-Type: multipart/mixed; boundary=f46d042fdea202ab6204d14deace
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--f46d042fdea202ab6204d14deace
Content-Type: multipart/alternative; boundary=f46d042fdea202ab5d04d14deacc

--f46d042fdea202ab5d04d14deacc
Content-Type: text/plain; charset=ISO-8859-1

See if the attached patch helps.
This was a patch I forgot was still in my queue, that Jan had some
objections about (though I don't recall specifically what they were).
I had found that the scheduler was removing all cpus during suspend, and
this seemed to help with that, IIRC.
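For reference, this is a decoded copy of the attached sched.patch (the
base64 attachment below), reproduced here for readability. It is a
one-line change to the early-return condition in cpu_disable_scheduler(),
so that the function no longer returns early when
system_state == SYS_STATE_suspend:

--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -546,7 +546,7 @@ int cpu_disable_scheduler(unsigned int cpu)
     bool_t affinity_broken;
 
     c = per_cpu(cpupool, cpu);
-    if ( (c == NULL) || (system_state == SYS_STATE_suspend) )
+    if ( c == NULL )
         return ret;
 
     for_each_domain ( d )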

That said - I'm starting to be forced to look back into S3 issues, as while
I thought they were all resolved, they are not.
I have an Ivybridge laptop that I didn't have back in May that exhibits
some issues very similar to those that I was trying to solve with Jan in
the thread I pasted above.

This particular machine goes to sleep, and when it resumes, the disk light
flashes briefly, and then the sleep LED goes back to pulsing.

Not much to go on, at this point.


/btg



On Thu, Dec 20, 2012 at 10:59 AM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

> On 04.12.2012 14:27, Marek Marczykowski wrote:
> > On 03.12.2012 08:39, Jan Beulich wrote:
> >>>>> On 30.11.12 at 17:18, Marek Marczykowski <
> marmarek@invisiblethingslab.com>
> >> wrote:
> >>> On 30.11.2012 17:12, Jan Beulich wrote:
> >>>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12
> 5:07 PM >>>
> >>>>> That was the rare case when resume worked at all... in most cases
> I've got
> >>>>> reboot, before anything appears on the screen (even backlight is off)
> - xen
> >>>>> panic? dom0 kernel panic?
> >>>>
> >>>> Without serial console we won't get very far from here.
> >>>>
> >>>>> I don't have serial console, but have USB-to-serial port - is it
> possible to
> >>>>> use it as xen console (in xen 4.1.3)?
> >>>>
> >>>> Not that I'm aware of. But 4.1.x isn't very interesting from a
> development
> >>>> perspective anyway. If you had the same problems still with
> 4.3-unstable,
> >>>> then that'd be of much more interest to analyze, and you could use the
> >>>> EHCI debug port (if one of your controllers has one) based serial
> console.
> >>>
> >>> Is it possible to use libxl from xen 4.1 with newer hypervisor? My
> libxl is
> >>> somehow patched and porting it to newer version will require some
> effort.
> >>
> >> I don't think so, but I also don't think you need a libxl at all for the
> >> purposes here (dealing with S3 is a Dom0-only thing).
> >
> > I've tested xen 4.2 hypervisor (without serial console) and also rebooted
> > during resume. But it works with xen 4.1.2, so the problem was introduced
> > between 4.1.2 and 4.1.3. Will try to get console messages from
> xen-unstable -
> > perhaps this will give some hints.
>
> It still doesn't work on xen 4.1.4, system reboots at resume. Same on
> 4.3-unstable (today).
> Can you give me some hints about using EHCI debug port? I've found some
> info
> on coreboot page[1]. Is there any way to use it without NET20DC device?
> According to lspci my controller has debug port capability.
>
> [1] http://www.coreboot.org/EHCI_Debug_Port
>
> --
> Best Regards / Pozdrawiam,
> Marek Marczykowski
> Invisible Things Lab
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--f46d042fdea202ab5d04d14deacc
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

See if the attached patch helps.<div>This was a patch I forgot was still in=
 my queue, that Jan had some objections about (though I don&#39;t recall sp=
ecifically what they were).</div><div>I had found that the scheduler was rem=
oving all cpus during suspend, and this seemed to help with that, IIRC.</di=
v>
<div><br></div><div>That said - I&#39;m starting to be forced to look back =
into S3 issues, as while I thought they were all resolved, they are not.</d=
iv><div>I have an Ivybridge laptop that I didn&#39;t have back in May that =
exhibits some issues very similar to those that I was trying to solve with =
Jan in the thread I pasted above.</div>
<div><br></div><div>This particular machine goes to sleep, and when it resu=
mes, the disk light flashes briefly, and then the sleep LED goes back to pu=
lsing.</div><div><br></div><div>Not much to go on, at this point.</div>
<div><br></div><div><br></div><div>/btg</div><div><br></div><div><br></div>=
<div><br><div class=3D"gmail_quote">On Thu, Dec 20, 2012 at 10:59 AM, Marek=
 Marczykowski <span dir=3D"ltr">&lt;<a href=3D"mailto:marmarek@invisiblethi=
ngslab.com" target=3D"_blank">marmarek@invisiblethingslab.com</a>&gt;</span=
> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"HOEnZb"><div class=3D"h5">On 0=
4.12.2012 14:27, Marek Marczykowski wrote:<br>
&gt; On 03.12.2012 08:39, Jan Beulich wrote:<br>
&gt;&gt;&gt;&gt;&gt; On 30.11.12 at 17:18, Marek Marczykowski &lt;<a href=
=3D"mailto:marmarek@invisiblethingslab.com">marmarek@invisiblethingslab.com=
</a>&gt;<br>
&gt;&gt; wrote:<br>
&gt;&gt;&gt; On 30.11.2012 17:12, Jan Beulich wrote:<br>
&gt;&gt;&gt;&gt;&gt;&gt;&gt; Marek Marczykowski &lt;<a href=3D"mailto:marma=
rek@invisiblethingslab.com">marmarek@invisiblethingslab.com</a>&gt; 11/30/1=
2 5:07 PM &gt;&gt;&gt;<br>
&gt;&gt;&gt;&gt;&gt; That was the rare case when resume worked at all... in=
 most cases I&#39;ve got<br>
&gt;&gt;&gt;&gt;&gt; reboot, before anything appears on the screen (even bac=
klight is off) - xen<br>
&gt;&gt;&gt;&gt;&gt; panic? dom0 kernel panic?<br>
&gt;&gt;&gt;&gt;<br>
&gt;&gt;&gt;&gt; Without serial console we won&#39;t get very far from here=
.<br>
&gt;&gt;&gt;&gt;<br>
&gt;&gt;&gt;&gt;&gt; I don&#39;t have serial console, but have USB-to-seria=
l port - is it possible to<br>
&gt;&gt;&gt;&gt;&gt; use it as xen console (in xen 4.1.3)?<br>
&gt;&gt;&gt;&gt;<br>
&gt;&gt;&gt;&gt; Not that I&#39;m aware of. But 4.1.x isn&#39;t very intere=
sting from a development<br>
&gt;&gt;&gt;&gt; perspective anyway. If you had the same problems still wit=
h 4.3-unstable,<br>
&gt;&gt;&gt;&gt; then that&#39;d be of much more interest to analyze, and y=
ou could use the<br>
&gt;&gt;&gt;&gt; EHCI debug port (if one of your controllers has one) based=
 serial console.<br>
&gt;&gt;&gt;<br>
&gt;&gt;&gt; Is it possible to use libxl from xen 4.1 with newer hypervisor=
? My libxl is<br>
&gt;&gt;&gt; somehow patched and porting it to newer version will require s=
ome effort.<br>
&gt;&gt;<br>
&gt;&gt; I don&#39;t think so, but I also don&#39;t think you need a libxl =
at all for the<br>
&gt;&gt; purposes here (dealing with S3 is a Dom0-only thing).<br>
&gt;<br>
&gt; I&#39;ve tested xen 4.2 hypervisor (without serial console) and also r=
ebooted<br>
&gt; during resume. But it works with xen 4.1.2, so the problem was introdu=
ced<br>
&gt; between 4.1.2 and 4.1.3. Will try to get console messages from xen-uns=
table -<br>
&gt; perhaps this will give some hints.<br>
<br>
</div></div>It still doesn&#39;t work on xen 4.1.4, system reboots at resum=
e. Same on<br>
4.3-unstable (today).<br>
Can you give me some hints about using EHCI debug port? I&#39;ve found some=
 info<br>
on coreboot page[1]. Is there any way to use it without NET20DC device?<br>
According to lspci my controller has debug port capability.<br>
<br>
[1] <a href=3D"http://www.coreboot.org/EHCI_Debug_Port" target=3D"_blank">h=
ttp://www.coreboot.org/EHCI_Debug_Port</a><br>
<div class=3D"HOEnZb"><div class=3D"h5"><br>
--<br>
Best Regards / Pozdrawiam,<br>
Marek Marczykowski<br>
Invisible Things Lab<br>
<br>
</div></div><br>_______________________________________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
<br></blockquote></div><br></div>

--f46d042fdea202ab5d04d14deacc--
--f46d042fdea202ab6204d14deace
Content-Type: application/octet-stream; name="sched.patch"
Content-Disposition: attachment; filename="sched.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hayak4je1

ZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vc2NoZWR1bGUuYyBiL3hlbi9jb21tb24vc2NoZWR1bGUu
YwppbmRleCBiZTM5YjIwLi41NDg0ZjI1IDEwMDY0NAotLS0gYS94ZW4vY29tbW9uL3NjaGVkdWxl
LmMKKysrIGIveGVuL2NvbW1vbi9zY2hlZHVsZS5jCkBAIC01NDYsNyArNTQ2LDcgQEAgaW50IGNw
dV9kaXNhYmxlX3NjaGVkdWxlcih1bnNpZ25lZCBpbnQgY3B1KQogICAgIGJvb2xfdCBhZmZpbml0
eV9icm9rZW47CiAKICAgICBjID0gcGVyX2NwdShjcHVwb29sLCBjcHUpOwotICAgIGlmICggKGMg
PT0gTlVMTCkgfHwgKHN5c3RlbV9zdGF0ZSA9PSBTWVNfU1RBVEVfc3VzcGVuZCkgKQorICAgIGlm
ICggYyA9PSBOVUxMICkKICAgICAgICAgcmV0dXJuIHJldDsKIAogICAgIGZvcl9lYWNoX2RvbWFp
biAoIGQgKQo=
--f46d042fdea202ab6204d14deace
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--f46d042fdea202ab6204d14deace--


From xen-devel-bounces@lists.xen.org Thu Dec 20 19:42:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 19:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlm0c-0004SX-6k; Thu, 20 Dec 2012 19:42:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tlm0a-0004SQ-8D
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 19:42:20 +0000
Received: from [85.158.139.211:11053] by server-8.bemta-5.messagelabs.com id
	41/F1-15003-B1A63D05; Thu, 20 Dec 2012 19:42:19 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1356032535!20527851!1
X-Originating-IP: [209.85.220.180]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15115 invoked from network); 20 Dec 2012 19:42:16 -0000
Received: from mail-vc0-f180.google.com (HELO mail-vc0-f180.google.com)
	(209.85.220.180)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 19:42:16 -0000
Received: by mail-vc0-f180.google.com with SMTP id p16so4184586vcq.39
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 11:41:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=iz0kqulMpTpgjmD0WbSRhYEZYcWKuKkMCTOuiJricWo=;
	b=rBLB9LBhTKn9+BBBQzjrVgZ+jwsmpnMuLp0eN16DdncKiLbFaVy/2UXlvJ5HoKuxFo
	Zh33gkHb/XEkwBiMLfGLknwOTPNnfNS3bSniOPJiPl6zDTFC6tZUzRL52qw6JD+9JHsQ
	kjeSJy0rvIC+g4kmI8v4C2uydEgZa8PSCPnkL/IvAkpVu+xJFr8qaJDBM/sJX7i6jWNQ
	bVJRRQe0rF+lR7AGlzwC+WJpRz+9yo2+chi++ObecDUJq/Iq49wvHv4bzucQLta07vtg
	FiXr9F58yt8dkcL21KmF4gNECiFdFCpEhA+P+FijVPbqQiR2EAD0GbyTlxrClNexW7Rq
	hFjA==
MIME-Version: 1.0
Received: by 10.220.106.147 with SMTP id x19mr16224999vco.37.1356032475572;
	Thu, 20 Dec 2012 11:41:15 -0800 (PST)
Received: by 10.58.187.84 with HTTP; Thu, 20 Dec 2012 11:41:15 -0800 (PST)
In-Reply-To: <50D335E6.902@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
Date: Thu, 20 Dec 2012 14:41:15 -0500
X-Google-Sender-Auth: e6mzax8x0rH-WG__jyUQHFiUJDo
Message-ID: <CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Content-Type: multipart/mixed; boundary=f46d042fdea202ab6204d14deace
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--f46d042fdea202ab6204d14deace
Content-Type: multipart/alternative; boundary=f46d042fdea202ab5d04d14deacc

--f46d042fdea202ab5d04d14deacc
Content-Type: text/plain; charset=ISO-8859-1

See if the attached patch helps.
This was a patch I forgot was still in my queue, that Jan had some
objections about (though I don't recall specifically what they were)
I had found that the scheduler was removing all cpus during suspend, and
this seemed to help with that, IIRC.

That said - I'm starting to be forced to look back into S3 issues, as while
I thought they were all resolved, they are not.
I have an Ivybridge laptop that I didn't have back in May that exhibits
some issues very similar to those that I was trying to solve with Jan in
the thread I pasted above.

This particular machine goes to sleep, and when it resumes, the disk light
flashes briefly, and then the sleep led goes back to pulsing.

Not much to go on, at this point.


/btg



On Thu, Dec 20, 2012 at 10:59 AM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

> On 04.12.2012 14:27, Marek Marczykowski wrote:
> > On 03.12.2012 08:39, Jan Beulich wrote:
> >>>>> On 30.11.12 at 17:18, Marek Marczykowski <
> marmarek@invisiblethingslab.com>
> >> wrote:
> >>> On 30.11.2012 17:12, Jan Beulich wrote:
> >>>>>>> Marek Marczykowski <marmarek@invisiblethingslab.com> 11/30/12
> 5:07 PM >>>
> >>>>> That was the rare case when resume worked at all... in most cases
> I've got a
> >>>>> reboot, before anything appears on the screen (even backlight is off)
> - xen
> >>>>> panic? dom0 kernel panic?
> >>>>
> >>>> Without serial console we won't get very far from here.
> >>>>
> >>>>> I don't have serial console, but have USB-to-serial port - is it
> possible to
> >>>>> use it as xen console (in xen 4.1.3)?
> >>>>
> >>>> Not that I'm aware of. But 4.1.x isn't very interesting from a
> development
> >>>> perspective anyway. If you had the same problems still with
> 4.3-unstable,
> >>>> then that'd be of much more interest to analyze, and you could use the
> >>>> EHCI debug port (if one of your controllers has one) based serial
> console.
> >>>
> >>> Is it possible to use libxl from xen 4.1 with a newer hypervisor? My
> libxl is
> >>> somewhat patched, and porting it to a newer version will require some
> effort.
> >>
> >> I don't think so, but I also don't think you need a libxl at all for the
> >> purposes here (dealing with S3 is a Dom0-only thing).
> >
> > I've tested the xen 4.2 hypervisor (without serial console) and it also
> > rebooted during resume. But it works with xen 4.1.2, so the problem was
> > introduced between 4.1.2 and 4.1.3. Will try to get console messages from
> xen-unstable -
> > perhaps this will give some hints.
>
> It still doesn't work on xen 4.1.4, system reboots at resume. Same on
> 4.3-unstable (today).
> Can you give me some hints about using the EHCI debug port? I've found some
> info
> on the coreboot page[1]. Is there any way to use it without a NET20DC device?
> According to lspci my controller has the debug port capability.
>
> [1] http://www.coreboot.org/EHCI_Debug_Port
>
> --
> Best Regards / Pozdrawiam,
> Marek Marczykowski
> Invisible Things Lab
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--f46d042fdea202ab5d04d14deacc--
--f46d042fdea202ab6204d14deace
Content-Type: application/octet-stream; name="sched.patch"
Content-Disposition: attachment; filename="sched.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hayak4je1

ZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vc2NoZWR1bGUuYyBiL3hlbi9jb21tb24vc2NoZWR1bGUu
YwppbmRleCBiZTM5YjIwLi41NDg0ZjI1IDEwMDY0NAotLS0gYS94ZW4vY29tbW9uL3NjaGVkdWxl
LmMKKysrIGIveGVuL2NvbW1vbi9zY2hlZHVsZS5jCkBAIC01NDYsNyArNTQ2LDcgQEAgaW50IGNw
dV9kaXNhYmxlX3NjaGVkdWxlcih1bnNpZ25lZCBpbnQgY3B1KQogICAgIGJvb2xfdCBhZmZpbml0
eV9icm9rZW47CiAKICAgICBjID0gcGVyX2NwdShjcHVwb29sLCBjcHUpOwotICAgIGlmICggKGMg
PT0gTlVMTCkgfHwgKHN5c3RlbV9zdGF0ZSA9PSBTWVNfU1RBVEVfc3VzcGVuZCkgKQorICAgIGlm
ICggYyA9PSBOVUxMICkKICAgICAgICAgcmV0dXJuIHJldDsKIAogICAgIGZvcl9lYWNoX2RvbWFp
biAoIGQgKQo=
--f46d042fdea202ab6204d14deace
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--f46d042fdea202ab6204d14deace--


From xen-devel-bounces@lists.xen.org Thu Dec 20 19:44:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 19:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlm2j-0004an-Ut; Thu, 20 Dec 2012 19:44:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1Tlm2i-0004af-Rj
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 19:44:33 +0000
Received: from [193.109.254.147:33135] by server-11.bemta-14.messagelabs.com
	id E5/AA-02659-0AA63D05; Thu, 20 Dec 2012 19:44:32 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1356032670!8569902!1
X-Originating-IP: [209.85.212.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25760 invoked from network); 20 Dec 2012 19:44:31 -0000
Received: from mail-vb0-f50.google.com (HELO mail-vb0-f50.google.com)
	(209.85.212.50)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 19:44:31 -0000
Received: by mail-vb0-f50.google.com with SMTP id fr13so4203191vbb.37
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 11:44:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=5rwuc80rdfmJUrfxMqDWyGrDwdluUgUjiRuBkD6m5eU=;
	b=HIu62woGtytk8iPV070cxobSO7vWhWhYAH9VSSki9FOJ8NYm4DC73tzPODhdkkVOsl
	baINMSX0RcNU6j6EBII7/Dpf3Bc8DxV4JHcJi6pX1Jf4azMM5tX5jnMsY9PlO+LqxFev
	aAnF2qX8uFYa8yA88sBkFphkuRHaJWCjIen4ngoVEgUgPVZlWoxmxJr6wsiS+ofTVoW6
	KIw1riNkXDADk06NGywx26SFef3DGPtl2kiT2AxrBL0ruITn2POGVMtpVrwMJVuP0QP+
	5owt0bqxmq++q5t74c7Cpxffx2hv/4waxjh8OB9pWjNJmkWu82c8Xtfx3kslkT0oeaeN
	7i3A==
Received: by 10.220.150.84 with SMTP id x20mr16154565vcv.73.1356032669883;
	Thu, 20 Dec 2012 11:44:29 -0800 (PST)
MIME-Version: 1.0
Received: by 10.220.225.136 with HTTP; Thu, 20 Dec 2012 11:44:09 -0800 (PST)
In-Reply-To: <CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 20 Dec 2012 11:44:09 -0800
Message-ID: <CAEBdQ91YD-KNUqoax8oBLsh5_3nAL4V+EbPTE4qWkHPbn-=afg@mail.gmail.com>
To: "G.R." <firemeteor@users.sourceforge.net>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 7:06 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> On Thu, Dec 20, 2012 at 10:19 PM, Keir Fraser <keir@xen.org> wrote:
>> On 20/12/2012 13:31, "G.R." <firemeteor@users.sourceforge.net> wrote:
>>
>>> If concern is about security, the same argument should apply to the
>>> first page (the portion before the page offset).
>>> The problem is that I have no idea what is around the mapped page. Not
>>> sure who has the knowledge.
>>
>> Well we can't do better than mapping some whole number of pages, really.
>> Unless we trap to qemu on every access. I don't think we'd go there unless
>> there really were a known security issue. But mapping only the exact number
>> of pages we definitely need is a good principle.
>>
>>> What's the standard flow to handle such map with offset?
>>> I expect this to be a common case, since the ioremap function in linux
>>> kernel accept this.
>>
>> map_size = ((host_opregion & 0xfff) + 8096 + 0xfff) >> 12
>>
> Keir, I believe this expression should give the same result.
> First of all, 8096 should be 8192 :-), and that part should contribute
> a constant 2 after the right shift.
> The remaining part is ((host_opregion & 0xfff) + 0xfff) >> 12.
> As long as the first sub-expression is non-zero, the result of the add
> should lie in [0x1000, 0x1ffe],
> and this will yield '1' after the right shift.
> So as long as there is no known security risk (which I'm not sure about),
> the patch should be fine.
>
>> Possibly with suitable macros used instead of magic numbers (e.g., XC_PAGE_*
>> and a macro for the opregion size).
>
> I guess there is no predefined macro for the OpRegion size. And I guess I
> need to define it twice, once in each codebase?
>

Changing the OpRegion mapping to 3 pages is probably the right thing to do here.
If down the road we find some scary security problem, we can emulate the device
in Qemu.

Thanks for looking into this.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 19:50:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 19:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlm8d-0004rD-QN; Thu, 20 Dec 2012 19:50:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Tlm8c-0004r8-6L
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 19:50:38 +0000
Received: from [85.158.138.51:58985] by server-6.bemta-3.messagelabs.com id
	43/04-12154-D0C63D05; Thu, 20 Dec 2012 19:50:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1356033036!27851479!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20797 invoked from network); 20 Dec 2012 19:50:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 19:50:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="289751"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 19:50:35 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 20 Dec 2012 19:50:35 +0000
Message-ID: <1356033034.14417.67.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Keir Fraser <keir@xen.org>
Date: Thu, 20 Dec 2012 19:50:34 +0000
In-Reply-To: <CCF8BD23.561A3%keir@xen.org>
References: <CCF8BD23.561A3%keir@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"G.R." <firemeteor@users.sourceforge.net>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-20 at 13:03 +0000, Keir Fraser wrote:
> On 20/12/2012 10:41, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
> > Do we need to worry about what is in the "slop" at either end of a 3
> > page region containing this? If they are sensitive registers then we may
> > have a problem.
> 
> In the hvmloader patch it is not worth it, I think; one extra page of memory
> hole is hardly a scarce resource.

I didn't mean the wastage, I meant the contents (registers) at the
physical addresses either immediately before or after the OpRegion.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 20:28:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 20:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlmiy-0005LR-2V; Thu, 20 Dec 2012 20:28:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tlmiw-0005LM-GD
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 20:28:10 +0000
Received: from [85.158.139.211:4712] by server-1.bemta-5.messagelabs.com id
	F0/D8-12813-9D473D05; Thu, 20 Dec 2012 20:28:09 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1356035287!19912143!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM2NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15923 invoked from network); 20 Dec 2012 20:28:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 20:28:08 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="1395444"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	20 Dec 2012 20:27:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 20 Dec 2012 15:27:58 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tlmik-0004lF-1A;
	Thu, 20 Dec 2012 20:27:58 +0000
Message-ID: <50D37359.9080001@eu.citrix.com>
Date: Thu, 20 Dec 2012 20:21:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
In-Reply-To: <06d2f322a6319d8ba212.1355944039@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> +static inline int
> +__csched_vcpu_should_migrate(int cpu, cpumask_t *mask, cpumask_t *idlers)
> +{
> +    /*
> +     * Consent to migration if cpu is one of the idlers in the VCPU's
> +     * affinity mask. In fact, if that is not the case, it just means it
> +     * was some other CPU that was tickled and should hence come and pick
> +     * VCPU up. Migrating it to cpu would only make things worse.
> +     */
> +    return cpumask_test_cpu(cpu, idlers) && cpumask_test_cpu(cpu, mask);
>   }
>
>   static int
> @@ -493,85 +599,98 @@ static int
>       cpumask_t idlers;
>       cpumask_t *online;
>       struct csched_pcpu *spc = NULL;
> +    int ret, balance_step;
>       int cpu;
>
> -    /*
> -     * Pick from online CPUs in VCPU's affinity mask, giving a
> -     * preference to its current processor if it's in there.
> -     */
>       online = cpupool_scheduler_cpumask(vc->domain->cpupool);
> -    cpumask_and(&cpus, online, vc->cpu_affinity);
> -    cpu = cpumask_test_cpu(vc->processor, &cpus)
> -            ? vc->processor
> -            : cpumask_cycle(vc->processor, &cpus);
> -    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
> +    for_each_csched_balance_step( balance_step )
> +    {
> +        /* Pick an online CPU from the proper affinity mask */
> +        ret = csched_balance_cpumask(vc, balance_step, &cpus);
> +        cpumask_and(&cpus, &cpus, online);
>
> -    /*
> -     * Try to find an idle processor within the above constraints.
> -     *
> -     * In multi-core and multi-threaded CPUs, not all idle execution
> -     * vehicles are equal!
> -     *
> -     * We give preference to the idle execution vehicle with the most
> -     * idling neighbours in its grouping. This distributes work across
> -     * distinct cores first and guarantees we don't do something stupid
> -     * like run two VCPUs on co-hyperthreads while there are idle cores
> -     * or sockets.
> -     *
> -     * Notice that, when computing the "idleness" of cpu, we may want to
> -     * discount vc. That is, iff vc is the currently running and the only
> -     * runnable vcpu on cpu, we add cpu to the idlers.
> -     */
> -    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> -    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> -        cpumask_set_cpu(cpu, &idlers);
> -    cpumask_and(&cpus, &cpus, &idlers);
> -    cpumask_clear_cpu(cpu, &cpus);
> +        /* If present, prefer vc's current processor */
> +        cpu = cpumask_test_cpu(vc->processor, &cpus)
> +                ? vc->processor
> +                : cpumask_cycle(vc->processor, &cpus);
> +        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
>
> -    while ( !cpumask_empty(&cpus) )
> -    {
> -        cpumask_t cpu_idlers;
> -        cpumask_t nxt_idlers;
> -        int nxt, weight_cpu, weight_nxt;
> -        int migrate_factor;
> +        /*
> +         * Try to find an idle processor within the above constraints.
> +         *
> +         * In multi-core and multi-threaded CPUs, not all idle execution
> +         * vehicles are equal!
> +         *
> +         * We give preference to the idle execution vehicle with the most
> +         * idling neighbours in its grouping. This distributes work across
> +         * distinct cores first and guarantees we don't do something stupid
> +         * like run two VCPUs on co-hyperthreads while there are idle cores
> +         * or sockets.
> +         *
> +         * Notice that, when computing the "idleness" of cpu, we may want to
> +         * discount vc. That is, iff vc is the currently running and the only
> +         * runnable vcpu on cpu, we add cpu to the idlers.
> +         */
> +        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> +        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> +            cpumask_set_cpu(cpu, &idlers);
> +        cpumask_and(&cpus, &cpus, &idlers);
> +        /* If there are idlers and cpu is still not among them, pick one */
> +        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
> +            cpu = cpumask_cycle(cpu, &cpus);

This seems to be an addition to the algorithm -- and it is particularly 
well hidden in this kind of "indent a big section that's almost exactly 
the same" change. I think this at least needs to be called out in the 
changelog message, and perhaps be put in a separate patch.

Can you comment on why you think it's necessary?  Was there a 
particular problem you were seeing?

> +        cpumask_clear_cpu(cpu, &cpus);
>
> -        nxt = cpumask_cycle(cpu, &cpus);
> +        while ( !cpumask_empty(&cpus) )
> +        {
> +            cpumask_t cpu_idlers;
> +            cpumask_t nxt_idlers;
> +            int nxt, weight_cpu, weight_nxt;
> +            int migrate_factor;
>
> -        if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
> -        {
> -            /* We're on the same socket, so check the busy-ness of threads.
> -             * Migrate if # of idlers is less at all */
> -            ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> -            migrate_factor = 1;
> -            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask, cpu));
> -            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask, nxt));
> -        }
> -        else
> -        {
> -            /* We're on different sockets, so check the busy-ness of cores.
> -             * Migrate only if the other core is twice as idle */
> -            ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> -            migrate_factor = 2;
> -            cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
> -            cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
> +            nxt = cpumask_cycle(cpu, &cpus);
> +
> +            if ( cpumask_test_cpu(cpu, per_cpu(cpu_core_mask, nxt)) )
> +            {
> +                /* We're on the same socket, so check the busy-ness of threads.
> +                 * Migrate if # of idlers is less at all */
> +                ASSERT( cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> +                migrate_factor = 1;
> +                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_sibling_mask,
> +                            cpu));
> +                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_sibling_mask,
> +                            nxt));
> +            }
> +            else
> +            {
> +                /* We're on different sockets, so check the busy-ness of cores.
> +                 * Migrate only if the other core is twice as idle */
> +                ASSERT( !cpumask_test_cpu(nxt, per_cpu(cpu_core_mask, cpu)) );
> +                migrate_factor = 2;
> +                cpumask_and(&cpu_idlers, &idlers, per_cpu(cpu_core_mask, cpu));
> +                cpumask_and(&nxt_idlers, &idlers, per_cpu(cpu_core_mask, nxt));
> +            }
> +
> +            weight_cpu = cpumask_weight(&cpu_idlers);
> +            weight_nxt = cpumask_weight(&nxt_idlers);
> +            /* smt_power_savings: consolidate work rather than spreading it */
> +            if ( sched_smt_power_savings ?
> +                 weight_cpu > weight_nxt :
> +                 weight_cpu * migrate_factor < weight_nxt )
> +            {
> +                cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> +                spc = CSCHED_PCPU(nxt);
> +                cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> +                cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> +            }
> +            else
> +            {
> +                cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> +            }
>           }
>
> -        weight_cpu = cpumask_weight(&cpu_idlers);
> -        weight_nxt = cpumask_weight(&nxt_idlers);
> -        /* smt_power_savings: consolidate work rather than spreading it */
> -        if ( sched_smt_power_savings ?
> -             weight_cpu > weight_nxt :
> -             weight_cpu * migrate_factor < weight_nxt )
> -        {
> -            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> -            spc = CSCHED_PCPU(nxt);
> -            cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> -            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> -        }
> -        else
> -        {
> -            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> -        }
> +        /* Stop if cpu is idle (or if csched_balance_cpumask() says we can) */
> +        if ( cpumask_test_cpu(cpu, &idlers) || ret )
> +            break;

Right -- OK, I think everything looks good here, except the "return -1 
from csched_balance_cpumask" thing.  I think it would be better if we 
explicitly checked cpumask_full(...->node_affinity_cpumask) and skipped 
the NODE step if that's the case.

Also -- and sorry to have to ask this kind of thing, but after sorting 
through the placement algorithm my head hurts -- under what 
circumstances would "cpumask_test_cpu(cpu, &idlers)" be false at this 
point?  It seems like the only possibility would be if:
( (vc->processor was not in the original &cpus [1])
   || !IS_RUNQ_IDLE(vc->processor) )
&& (there are no idlers in the original &cpus)

Which I suppose matches the time when we want to move on from 
looking at NODE affinity and look for CPU affinity.

[1] This could happen either if the vcpu/node affinity has changed, or 
if we're currently running outside our node affinity and we're doing the 
NODE step.

OK -- I think I've convinced myself that this is OK as well (apart from 
the hidden check).  I'll come back to look at your response to the load 
balancing thing tomorrow.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 21:20:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 21:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlnX7-0005n9-Ih; Thu, 20 Dec 2012 21:20:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TlnX6-0005n4-5L
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 21:20:00 +0000
Received: from [85.158.138.51:26379] by server-9.bemta-3.messagelabs.com id
	C2/84-11948-FF083D05; Thu, 20 Dec 2012 21:19:59 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356038396!28628472!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29446 invoked from network); 20 Dec 2012 21:19:58 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 21:19:58 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1356038398; x=1387574398;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=ZGy36LsXKx1kQaRm4SFFqsLUd1xA36j8hZbHkJ7tZlM=;
	b=CAEMcgAPbWXl+k4aFGOxrxQymhnltuIeL7OaAZ/2fa6cyoULI0dipSWF
	cjqcOOx71APM9fTdbVW3vrLthsL/C86SochR9CyOb/rFWHY7uR2L25tXg
	r7fln3rurONb3wRK2F4YBOMB8uJOL+wkyugi9oFZLTiyzbu59NOwPmth+ 0=;
X-IronPort-AV: E=Sophos;i="4.84,327,1355097600"; d="scan'208";a="358259668"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 20 Dec 2012 21:19:54 +0000
Received: from ex10-hub-9001.ant.amazon.com (ex10-hub-9001.ant.amazon.com
	[10.185.137.58])
	by smtp-in-1101.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBKLJqsX009216
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 20 Dec 2012 21:19:53 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9001.ant.amazon.com (10.185.137.58) with Microsoft SMTP Server
	id 14.2.247.3; Thu, 20 Dec 2012 13:19:47 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 20 Dec 2012 13:19:47 -0800
Date: Thu, 20 Dec 2012 13:19:47 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121220211945.GA15542@u109add4315675089e695.ant.amazon.com>
References: <1355948414-7503-1-git-send-email-msw@amazon.com>
	<1355997710.26722.12.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355997710.26722.12.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Annie Li <annie.li@oracle.com>, Steven Noonan <snoonan@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
 table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 10:01:50AM +0000, Ian Campbell wrote:
> On Wed, 2012-12-19 at 20:20 +0000, Matt Wilson wrote:
> > Commit 85ff6acb075a484780b3d763fdf41596d8fc0970 (xen/granttable: Grant
> > tables V2 implementation) changed the GREFS_PER_GRANT_FRAME macro from
> > a constant to a conditional expression. The expression depends on
> > grant_table_version being appropriately set. Unfortunately, at init
> > time grant_table_version will be 0. The GREFS_PER_GRANT_FRAME
> > conditional expression checks for "grant_table_version == 1", and
> > therefore returns the number of grant references per frame for v2.
> > 
> > This causes gnttab_init() to allocate fewer pages for gnttab_list, as
> > a frame can hold only half as many v2 entries as v1 entries. After
> > gnttab_resume() is called, grant_table_version is appropriately
> > set. nr_init_grefs will then be miscalculated and gnttab_free_count
> > will hold a value larger than the actual number of free gref entries.
> > 
> > If a guest is heavily utilizing improperly initialized v1 grant
> > tables, memory corruption can occur.
> 
> Good catch!
> 
> [...]
> @@ -1080,10 +1081,9 @@ static void gnttab_request_version(void)
> >  		panic("we need grant tables version 2, but only version 1 is available");
> >  	} else {
> >  		grant_table_version = 1;
> > +		grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1);
> >  		gnttab_interface = &gnttab_v1_ops;
> >  	}
> > -	printk(KERN_INFO "Grant tables using version %d layout.\n",
> > -		grant_table_version);
> 
> You remove this here, and re-add it in the gnttab_resume callsite but I
> don't  see anything added at the gnttab_init callsite. It would be
> useful to keep this print at start of day too.
>
> Oh, I see, gnttab_init also calls gnttab_resume later so we get the
> message from there, with gnttab_request_version getting called twice.

Right, I was just avoiding printing the "Grant tables using version 1
layout." message at init time.
 
> Can we avoid doing this twice by moving the tail of gnttab_resume
> (everything except the gnttab_request_version) into a new gnttab_setup
> and calling that from gnttab_resume and gnttab_init (instead of calling
> resume)?

Sure, I can do that.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 21:20:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 21:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlnX7-0005n9-Ih; Thu, 20 Dec 2012 21:20:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1TlnX6-0005n4-5L
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 21:20:00 +0000
Received: from [85.158.138.51:26379] by server-9.bemta-3.messagelabs.com id
	C2/84-11948-FF083D05; Thu, 20 Dec 2012 21:19:59 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356038396!28628472!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29446 invoked from network); 20 Dec 2012 21:19:58 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 21:19:58 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1356038398; x=1387574398;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=ZGy36LsXKx1kQaRm4SFFqsLUd1xA36j8hZbHkJ7tZlM=;
	b=CAEMcgAPbWXl+k4aFGOxrxQymhnltuIeL7OaAZ/2fa6cyoULI0dipSWF
	cjqcOOx71APM9fTdbVW3vrLthsL/C86SochR9CyOb/rFWHY7uR2L25tXg
	r7fln3rurONb3wRK2F4YBOMB8uJOL+wkyugi9oFZLTiyzbu59NOwPmth+ 0=;
X-IronPort-AV: E=Sophos;i="4.84,327,1355097600"; d="scan'208";a="358259668"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 20 Dec 2012 21:19:54 +0000
Received: from ex10-hub-9001.ant.amazon.com (ex10-hub-9001.ant.amazon.com
	[10.185.137.58])
	by smtp-in-1101.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBKLJqsX009216
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 20 Dec 2012 21:19:53 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9001.ant.amazon.com (10.185.137.58) with Microsoft SMTP Server
	id 14.2.247.3; Thu, 20 Dec 2012 13:19:47 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 20 Dec 2012 13:19:47 -0800
Date: Thu, 20 Dec 2012 13:19:47 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121220211945.GA15542@u109add4315675089e695.ant.amazon.com>
References: <1355948414-7503-1-git-send-email-msw@amazon.com>
	<1355997710.26722.12.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355997710.26722.12.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Annie Li <annie.li@oracle.com>, Steven Noonan <snoonan@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
 table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 10:01:50AM +0000, Ian Campbell wrote:
> On Wed, 2012-12-19 at 20:20 +0000, Matt Wilson wrote:
> > Commit 85ff6acb075a484780b3d763fdf41596d8fc0970 (xen/granttable: Grant
> > tables V2 implementation) changed the GREFS_PER_GRANT_FRAME macro from
> > a constant to a conditional expression. The expression depends on
> > grant_table_version being appropriately set. Unfortunately, at init
> > time grant_table_version will be 0. The GREFS_PER_GRANT_FRAME
> > conditional expression checks for "grant_table_version == 1"; since the
> > version is still 0 at that point, it returns the number of grant
> > references per frame for v2.
> > 
> > This causes gnttab_init() to allocate fewer pages for gnttab_list, as
> > a frame can hold only half as many v2 entries as v1 entries. After
> > gnttab_resume() is called, grant_table_version is appropriately
> > set. nr_init_grefs will then be miscalculated and gnttab_free_count
> > will hold a value larger than the actual number of free gref entries.
> > 
> > If a guest is heavily utilizing improperly initialized v1 grant
> > tables, memory corruption can occur.
> 
> Good catch!
> 
> [...]
> @@ -1080,10 +1081,9 @@ static void gnttab_request_version(void)
> >  		panic("we need grant tables version 2, but only version 1 is available");
> >  	} else {
> >  		grant_table_version = 1;
> > +		grefs_per_grant_frame = PAGE_SIZE / sizeof(struct grant_entry_v1);
> >  		gnttab_interface = &gnttab_v1_ops;
> >  	}
> > -	printk(KERN_INFO "Grant tables using version %d layout.\n",
> > -		grant_table_version);
> 
> You remove this here and re-add it at the gnttab_resume callsite, but I
> don't see anything added at the gnttab_init callsite. It would be
> useful to keep this print at start of day too.
>
> Oh, I see, gnttab_init also calls gnttab_resume later so we get the
> message from there, with gnttab_request_version getting called twice.

Right, I was just avoiding printing the "Grant tables using version 1
layout." message at init time.
 
> Can we avoid doing this twice by moving the tail of gnttab_resume
> (everything except the gnttab_request_version) into a new gnttab_setup
> and calling that from gnttab_resume and gnttab_init (instead of calling
> resume)?

Sure, I can do that.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 21:25:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 21:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlnbc-0005zi-AA; Thu, 20 Dec 2012 21:24:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tlnba-0005zb-5y
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 21:24:38 +0000
Received: from [85.158.138.51:63939] by server-12.bemta-3.messagelabs.com id
	F4/3E-27559-51283D05; Thu, 20 Dec 2012 21:24:37 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1356038674!29831437!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDExMTk1Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28944 invoked from network); 20 Dec 2012 21:24:36 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 21:24:36 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1356038676; x=1387574676;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=rWUtlqRnmYaRIUJIToLIu0aH8IJCFzaYkYyd5PTPYC8=;
	b=WApgZ+T38cYIhPAK+Z28RbJl135syMW62n6Ou3m0svTUHOjBNuEq0l9T
	CwToPV+UmfNKAlXcukeSgMApNyorU3ls539Y4rVIEi5xHK+zTvvediiAf
	ltm25txCq3y2r8F+X60eNg0bzh65xJgYRDA1+sGNcf/iDvm5s4KKSJ53F g=;
X-IronPort-AV: E=Sophos;i="4.84,327,1355097600"; d="scan'208";a="424022211"
Received: from smtp-in-5102.iad5.amazon.com ([10.218.9.29])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 20 Dec 2012 21:24:32 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-5102.iad5.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBKLOVCt007736
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 20 Dec 2012 21:24:32 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 20 Dec 2012 13:24:10 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 20 Dec 2012 13:24:10 -0800
Date: Thu, 20 Dec 2012 13:24:10 -0800
From: Matt Wilson <msw@amazon.com>
To: ANNIE LI <annie.li@oracle.com>
Message-ID: <20121220212410.GB15542@u109add4315675089e695.ant.amazon.com>
References: <1355948414-7503-1-git-send-email-msw@amazon.com>
	<50D2890D.9080607@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D2890D.9080607@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Steven Noonan <snoonan@amazon.com>, xen-devel@lists.xen.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: correctly initialize grant
 table version 1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 11:42:05AM +0800, ANNIE LI wrote:
> 
> Thanks for posting this.
> This is caused by grant_table_version not being initialized in
> gnttab_init() before gnttab_resume() runs. How about only adding
> gnttab_request_version() and a BUG_ON() to check grant_table_version
> in gnttab_init()? Then there would be no need to change
> GREFS_PER_GRANT_FRAME into grefs_per_grant_frame or to add more
> BUG_ON() checks for grefs_per_grant_frame.
> 
> For example, in gnttab_init()
> ....
> 
> +gnttab_request_version();
> +BUG_ON(grant_table_version == 0);
> 

Having GREFS_PER_GRANT_FRAME evaluate a conditional every time it's
used makes me uneasy. I know I added excessive BUG_ON()s, but my
intent was just to be defensive. GREFS_PER_GRANT_FRAME is used in the
conditional part of for (...) loops in several locations. I think it's
better to have this as a variable instead of a conditional expression.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 21:42:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 21:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlnsl-0006HV-4F; Thu, 20 Dec 2012 21:42:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Tlnsj-0006HQ-FV
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 21:42:21 +0000
Received: from [85.158.139.83:63823] by server-8.bemta-5.messagelabs.com id
	5C/F4-15003-C3683D05; Thu, 20 Dec 2012 21:42:20 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1356039738!30573095!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gOTk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12869 invoked from network); 20 Dec 2012 21:42:19 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 21:42:19 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt;
	s=amazon201209; t=1356039739; x=1387575739;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=qleajJHS5SAwWHZBsOiY/EemZxRU404yOcpWMxqVfrI=;
	b=g4Qqwuii4dpWQaHDL0vvDqdf2TPj9IVm66f0p4Y/+oW5mOoaj9Tu/Zk/
	tP+sqWroWulj24+3wKE8OboL8CtZylhSud9k5IxwtObDT3atJIt70xHmq
	gkNTA35hMiNfiun1EY0g67pFxikOjZXl0KXOB39Oc8InWAWvgdw9wV5L+ E=;
X-IronPort-AV: E=Sophos;i="4.84,327,1355097600"; d="scan'208";a="358272584"
Received: from smtp-in-0105.sea3.amazon.com ([10.224.19.45])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 20 Dec 2012 21:42:12 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-0105.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	qBKLgAuN005974
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 20 Dec 2012 21:42:12 GMT
Received: from u109add4315675089e695.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 20 Dec 2012 13:42:04 -0800
Received: by u109add4315675089e695.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 20 Dec 2012 13:42:04 -0800
Date: Thu, 20 Dec 2012 13:42:04 -0800
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20121220214203.GA21709@u109add4315675089e695.ant.amazon.com>
References: <20121206053521.GA3482@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C13145668@INHYMS111A.ca.com>
	<20121211213437.GA29869@u109add4315675089e695.ant.amazon.com>
	<7D7C26B1462EB14CB0E7246697A18C1314611A@INHYMS111A.ca.com>
	<20121214185304.GA9236@u109add4315675089e695.ant.amazon.com>
	<1355743598.14620.43.camel@zakaz.uk.xensource.com>
	<20121217200950.GA29382@u109add4315675089e695.ant.amazon.com>
	<1355824968.14620.143.camel@zakaz.uk.xensource.com>
	<20121218194348.GB29382@u109add4315675089e695.ant.amazon.com>
	<1355997929.26722.17.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1355997929.26722.17.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Palagummi,
	Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 10:05:29AM +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 19:43 +0000, Matt Wilson wrote:
[...]
> > I see SKBs with:
> >   skb_headlen(skb) == 8157
> >   offset_in_page(skb->data) == 64
> > 
> > when handling long streaming ingress flows from ixgbe with MTU (on the
> > NIC and both sides of the VIF) set to 9000. When all the SKBs making
> > up the flow have the above property, xen-netback uses 3 pages instead
> > of two. The first buffer gets 4032 bytes copied into it. The next
> > buffer gets 4096 bytes copied into it. The final buffer gets 29 bytes
> > copied into it. See this post in the archives for a more detailed
> > walk through netbk_gop_frag_copy():
> >   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
> 
> Thanks. This certainly seems wrong for the head bit.
> 
> > What's the down side to making start_new_rx_buffer() always try to
> > fill each buffer?
> 
> As we discussed earlier in the thread, it doubles the number of copy ops
> per frag under some circumstances; my gut is that this isn't going to
> hurt, but that's just my gut.
> 
> It seems obviously right that the linear part of the SKB should always
> fill entire buffers though. Perhaps the answer is to differentiate
> between the skb->data and the frags?

We've written a patch that does exactly that. It's stable and performs
well in our testing so far. I'll need to forward port it to the latest
Linux tree, test it there, and post.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 20 22:11:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 22:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TloKk-0006gB-Ni; Thu, 20 Dec 2012 22:11:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TloKi-0006g6-TC
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 22:11:17 +0000
Received: from [85.158.137.99:8494] by server-3.bemta-3.messagelabs.com id
	42/BB-31588-40D83D05; Thu, 20 Dec 2012 22:11:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1356041474!15479832!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23058 invoked from network); 20 Dec 2012 22:11:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 22:11:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="291223"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 22:11:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 22:11:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TloKg-0002Na-1s;
	Thu, 20 Dec 2012 22:11:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TloKf-0002uD-LD;
	Thu, 20 Dec 2012 22:11:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14801-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 22:11:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14801: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0252397141243366470=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0252397141243366470==
Content-Type: text/plain

flight 14801 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14801/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore fail REGR. vs. 14796

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14796
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14796

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  6f5c96855a9e
baseline version:
 xen                  090cc3e20d3e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26318:6f5c96855a9e
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 20 11:00:32 2012 +0100
    
    x86: also print CRn register values upon double fault
    
    Do so by simply re-using _show_registers().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
changeset:   26317:090cc3e20d3e
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Wed Dec 19 16:04:50 2012 +0000
    
    xen: arm: remove now empty dummy.S
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config and saving the stripped value
    in a variable. In xenstore_process_event we then compare the
    "params" value freshly read from xenstore (not stripped) with the
    stripped value saved in the variable, which triggers a medium
    change (even if there isn't any), since we are comparing something
    like aio:/path/to/file with /path/to/file. This only happens once,
    since xenstore_parse_domain_config is the only place where the
    prefix is stripped. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just when the
    guest is booting, which leads to the guest not being able to boot
    because the BIOS is not able to access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
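The prefix mismatch described in the commit message can be sketched as follows. This is a hypothetical illustration in Python, not the actual qemu C code; `strip_prefix` merely stands in for the stripping done in xenstore_parse_domain_config.

```python
# Hypothetical sketch of the bug described above: the value saved at
# parse time has its "aio:" prefix stripped, but the value later read
# back from xenstore on a watch event does not, so a plain string
# comparison always reports a (spurious) medium change.

def strip_prefix(params):
    """Stand-in for the prefix stripping in xenstore_parse_domain_config."""
    proto, sep, path = params.partition(":")
    return path if sep else params

saved = strip_prefix("aio:/path/to/file")    # stored stripped: "/path/to/file"
incoming = "aio:/path/to/file"               # later read raw from xenstore

buggy_change = incoming != saved                # True: spurious medium change
fixed_change = strip_prefix(incoming) != saved  # False once both are stripped
```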


--===============0252397141243366470==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0252397141243366470==--

From xen-devel-bounces@lists.xen.org Thu Dec 20 22:11:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 22:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TloKk-0006gB-Ni; Thu, 20 Dec 2012 22:11:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TloKi-0006g6-TC
	for xen-devel@lists.xensource.com; Thu, 20 Dec 2012 22:11:17 +0000
Received: from [85.158.137.99:8494] by server-3.bemta-3.messagelabs.com id
	42/BB-31588-40D83D05; Thu, 20 Dec 2012 22:11:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-217.messagelabs.com!1356041474!15479832!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEyODA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23058 invoked from network); 20 Dec 2012 22:11:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 22:11:15 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; 
   d="scan'208";a="291223"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Dec 2012 22:11:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Thu, 20 Dec 2012 22:11:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TloKg-0002Na-1s;
	Thu, 20 Dec 2012 22:11:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TloKf-0002uD-LD;
	Thu, 20 Dec 2012 22:11:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14801-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Dec 2012 22:11:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14801: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0252397141243366470=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0252397141243366470==
Content-Type: text/plain

flight 14801 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14801/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore fail REGR. vs. 14796

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14796
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14796

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  6f5c96855a9e
baseline version:
 xen                  090cc3e20d3e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26318:6f5c96855a9e
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 20 11:00:32 2012 +0100
    
    x86: also print CRn register values upon double fault
    
    Do so by simply re-using _show_registers().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
changeset:   26317:090cc3e20d3e
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Wed Dec 19 16:04:50 2012 +0000
    
    xen: arm: remove now empty dummy.S
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config and saving the stripped value
    in a variable. In xenstore_process_event we then compare the
    "params" value freshly read from xenstore (not stripped) with the
    stripped value saved in the variable, which triggers a medium
    change (even if there isn't any), since we are comparing something
    like aio:/path/to/file with /path/to/file. This only happens once,
    since xenstore_parse_domain_config is the only place where the
    prefix is stripped. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just when the
    guest is booting, which leads to the guest not being able to boot
    because the BIOS is not able to access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


--===============0252397141243366470==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0252397141243366470==--

From xen-devel-bounces@lists.xen.org Thu Dec 20 23:03:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 23:03:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlp8M-00073o-QY; Thu, 20 Dec 2012 23:02:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nehal.bandi@gmail.com>) id 1Tlp8K-00073j-CK
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 23:02:32 +0000
Received: from [85.158.143.35:7582] by server-2.bemta-4.messagelabs.com id
	E6/3B-30861-70993D05; Thu, 20 Dec 2012 23:02:31 +0000
X-Env-Sender: nehal.bandi@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1356044548!4970009!1
X-Originating-IP: [209.85.212.46]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7858 invoked from network); 20 Dec 2012 23:02:29 -0000
Received: from mail-vb0-f46.google.com (HELO mail-vb0-f46.google.com)
	(209.85.212.46)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 23:02:29 -0000
Received: by mail-vb0-f46.google.com with SMTP id b13so4353344vby.19
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 15:02:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=VKSnahPRaJd5/1BfIoBY0HVB/JyC7u2eAtmlXqWZoms=;
	b=Uog/MrIpACWhHTWpQXj3cyhmoXmfn8ssMLtknfpe5uny3vzUx0s79aS/UnYNL6Gv/9
	F7SQyDKD980MJAk9KvEOzcBuz57+l0GktTRznz8jWYd2nx4R8hapL61Dy27rbBaVeZ1h
	ee2P2mWZW6srH3WHNJOaWJvTSfrCGh1oQbxj1GbimFky7+mHOdtBQD8NA8hQuZflddp0
	xe6vth+PNTt5ZAb/JjrMClBs1X2Weuknn81q8vrd9mQF6DGdTIgxjgyzH3fFmeaG4rKh
	1ItFfGieiqRtIRa7XD89fs0Whl5w2iKJ5hnHop185mhQaznLNlnbgCH8/SGzrhBeQgEz
	s9Hg==
MIME-Version: 1.0
Received: by 10.58.187.84 with SMTP id fq20mr17526350vec.25.1356044548274;
	Thu, 20 Dec 2012 15:02:28 -0800 (PST)
Received: by 10.58.190.68 with HTTP; Thu, 20 Dec 2012 15:02:28 -0800 (PST)
Date: Thu, 20 Dec 2012 15:02:28 -0800
Message-ID: <CAK+Tetx=ZRXWumP-VG+V0zGZhBXoxMLgz2UA6ok467vw97ZRJg@mail.gmail.com>
From: Nehal Bandi <nehal.bandi@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] xen domU core dump question.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8192509445341710909=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8192509445341710909==
Content-Type: multipart/alternative; boundary=047d7b6d806299793f04d150b96b

--047d7b6d806299793f04d150b96b
Content-Type: text/plain; charset=ISO-8859-1

Hello,

I have some questions about xen dump-core, which is an ELF format, and I am
sure this is the best place to ask for advice, because I doubt the xen-users
list will be very helpful for this.

http://xenbits.xen.org/docs/unstable/misc/dump-core-format.txt

One of the sections in the core dump ELF format is 'xen_pages', which I
think is the domain's memory.

I wanted to extract the HVM domain's memory from a xen domain core dump.

The following is read from a core dump of a win-7 domU with 2 GB of memory.

========================================================
readelf -S win7_xen.core

There are 7 section headers, starting at offset 0x40:
Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0] <no-name>         NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] <no-name>         STRTAB           0000000000000000  00000000803f6fa0
       0000000000000048  0000000000000000           0     0     0
  [ 2] <no-name>         NOTE             0000000000000000  00000200
       0000000000000564  0000000000000000           0     0     0
  [ 3] <no-name>         PROGBITS         0000000000000000  00000764
       0000000000001430  0000000000001430           0     0     8
  [ 4] <no-name>         PROGBITS         0000000000000000  00001b94
       0000000000001000  0000000000001000           0     0     8
  [ 5] <no-name>         PROGBITS         0000000000000000  00003000
       000000007fff4000  0000000000001000           0     0     4096
  [ 6] <no-name>         PROGBITS         0000000000000000  7fff7000
       00000000003fffa0  0000000000000008           0     0     8

=====================================

In the above ELF image I extracted section 5 (xen_pages); its size is
2 GB - 48 KB (12 pages less than the full 2 GB).



A Windows 64-bit domain dump with 4 GB of memory.

Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0]                   NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1]                   STRTAB           0000000000000000  00000001007f6fa0
       0000000000000048  0000000000000000           0     0     0
  [ 2]                   NOTE             0000000000000000  00000200
       0000000000000564  0000000000000000           0     0     0
  [ 3]                   PROGBITS         0000000000000000  00000764
       0000000000001430  0000000000001430           0     0     8
  [ 4]                   PROGBITS         0000000000000000  00001b94
       0000000000001000  0000000000001000           0     0     8
  [ 5]                   PROGBITS         0000000000000000  00003000
       00000000ffff4000  0000000000001000           0     0     4096
  [ 6]                   PROGBITS         0000000000000000  00000000ffff7000
       00000000007fffa0  0000000000000008           0     0     8
============================================================

In the above ELF, again section 5 is 4 GB - 48 KB (again 12 pages less
than the 4 GB of domain memory).


The questions I have are:

Is my understanding of 'xen_pages' correct; is it really VM memory?

Are these missing 48 KB (12 pages) xen-specific? Can I get those pages
from xen?

Thank you in advance.

-Regards,
Nehal

--047d7b6d806299793f04d150b96b--


--===============8192509445341710909==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8192509445341710909==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 23:03:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 23:03:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlp8M-00073o-QY; Thu, 20 Dec 2012 23:02:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nehal.bandi@gmail.com>) id 1Tlp8K-00073j-CK
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 23:02:32 +0000
Received: from [85.158.143.35:7582] by server-2.bemta-4.messagelabs.com id
	E6/3B-30861-70993D05; Thu, 20 Dec 2012 23:02:31 +0000
X-Env-Sender: nehal.bandi@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1356044548!4970009!1
X-Originating-IP: [209.85.212.46]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7858 invoked from network); 20 Dec 2012 23:02:29 -0000
Received: from mail-vb0-f46.google.com (HELO mail-vb0-f46.google.com)
	(209.85.212.46)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Dec 2012 23:02:29 -0000
Received: by mail-vb0-f46.google.com with SMTP id b13so4353344vby.19
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 15:02:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=VKSnahPRaJd5/1BfIoBY0HVB/JyC7u2eAtmlXqWZoms=;
	b=Uog/MrIpACWhHTWpQXj3cyhmoXmfn8ssMLtknfpe5uny3vzUx0s79aS/UnYNL6Gv/9
	F7SQyDKD980MJAk9KvEOzcBuz57+l0GktTRznz8jWYd2nx4R8hapL61Dy27rbBaVeZ1h
	ee2P2mWZW6srH3WHNJOaWJvTSfrCGh1oQbxj1GbimFky7+mHOdtBQD8NA8hQuZflddp0
	xe6vth+PNTt5ZAb/JjrMClBs1X2Weuknn81q8vrd9mQF6DGdTIgxjgyzH3fFmeaG4rKh
	1ItFfGieiqRtIRa7XD89fs0Whl5w2iKJ5hnHop185mhQaznLNlnbgCH8/SGzrhBeQgEz
	s9Hg==
MIME-Version: 1.0
Received: by 10.58.187.84 with SMTP id fq20mr17526350vec.25.1356044548274;
	Thu, 20 Dec 2012 15:02:28 -0800 (PST)
Received: by 10.58.190.68 with HTTP; Thu, 20 Dec 2012 15:02:28 -0800 (PST)
Date: Thu, 20 Dec 2012 15:02:28 -0800
Message-ID: <CAK+Tetx=ZRXWumP-VG+V0zGZhBXoxMLgz2UA6ok467vw97ZRJg@mail.gmail.com>
From: Nehal Bandi <nehal.bandi@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] xen domU core dump question.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8192509445341710909=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8192509445341710909==
Content-Type: multipart/alternative; boundary=047d7b6d806299793f04d150b96b

--047d7b6d806299793f04d150b96b
Content-Type: text/plain; charset=ISO-8859-1

Hello,

I have some questions about xen dump-core, which produces an ELF file, and
I am sure this is the best place to ask for advice, because I doubt the
xen-users list will be very helpful for this.

http://xenbits.xen.org/docs/unstable/misc/dump-core-format.txt

One of the sections in the core dump ELF format is 'xen_pages', which I
think is the domain's memory.

I wanted to extract the HVM domain's memory from a xen domain core dump.

The following is what is read from a core dump of a win-7 domU with 2 GB memory.

========================================================
readelf -S win7_xen.core

There are 7 section headers, starting at offset 0x40:
Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0] <no-name>         NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] <no-name>         STRTAB           0000000000000000  00000000803f6fa0
       0000000000000048  0000000000000000           0     0     0
  [ 2] <no-name>         NOTE             0000000000000000  00000200
       0000000000000564  0000000000000000           0     0     0
  [ 3] <no-name>         PROGBITS         0000000000000000  00000764
       0000000000001430  0000000000001430           0     0     8
  [ 4] <no-name>         PROGBITS         0000000000000000  00001b94
       0000000000001000  0000000000001000           0     0     8
  [ 5] <no-name>         PROGBITS         0000000000000000  00003000
       *000000007fff4000*  0000000000001000           0     0     4096
  [ 6] <no-name>         PROGBITS         0000000000000000  7fff7000
       00000000003fffa0  0000000000000008           0     0     8

=====================================

In the above ELF image, the size of section 5 (xen_pages) that I extracted
is *2GB - 48 KB* (12 pages less than the full 2GB).



Windows 64-bit domain dump with 4 GB memory.

Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0]                   NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1]                   STRTAB           0000000000000000  00000001007f6fa0
       0000000000000048  0000000000000000           0     0     0
  [ 2]                   NOTE             0000000000000000  00000200
       0000000000000564  0000000000000000           0     0     0
  [ 3]                   PROGBITS         0000000000000000  00000764
       0000000000001430  0000000000001430           0     0     8
  [ 4]                   PROGBITS         0000000000000000  00001b94
       0000000000001000  0000000000001000           0     0     8
  [ 5]                   PROGBITS         0000000000000000  00003000
       *00000000ffff4000*  0000000000001000           0     0     4096
  [ 6]                   PROGBITS         0000000000000000  00000000ffff7000
       00000000007fffa0  0000000000000008           0     0     8
============================================================

In the above ELF, section 5 is again *4GB - 48KB* (again 12 pages less
than the 4GB domain memory).


The questions I have are:

Is my understanding of 'xen_pages' correct - is it really VM memory?

Are these missing 48KB (12 pages) Xen-specific? Can I get those pages
from Xen?

Thank you in advance.

-Regards,
Nehal

--047d7b6d806299793f04d150b96b--


--===============8192509445341710909==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8192509445341710909==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 23:17:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 23:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlpMh-0007Gf-Dd; Thu, 20 Dec 2012 23:17:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TlpMf-0007Ga-Rj
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 23:17:22 +0000
Received: from [85.158.138.51:37443] by server-7.bemta-3.messagelabs.com id
	15/B0-23008-C7C93D05; Thu, 20 Dec 2012 23:17:16 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1356045434!29922192!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDE3NDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 557 invoked from network); 20 Dec 2012 23:17:15 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 23:17:15 -0000
Received: from compute2.internal (compute2.nyi.mail.srv.osa [10.202.2.42])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id B71A420A6A;
	Thu, 20 Dec 2012 18:17:14 -0500 (EST)
Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161])
	by compute2.internal (MEProxy); Thu, 20 Dec 2012 18:17:14 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=yD
	Sj+fs/9SCoGvGZLzuXPYxkX08=; b=GI/N3pPuc5+SI4j3F7NU0wmLUD4ppaxwM6
	xIQ6/cFWHnrJ7Wz+gxSUPanNBXRm7FaRnlXOl/eZGnw+LLNN+l4hdAqPkogO7O41
	grLtWI43Q2nfKXfPpsRNM0zu9IJ04e/5Uju05x4TJpJkUDq2HGq1LNe4zGJ2HLGP
	ctOhCEle0=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=yDSj
	+fs/9SCoGvGZLzuXPYxkX08=; b=ndcVfRMUToDaqf6nNFsqiaBThrZ+ivcv91Zh
	G+JYp9ndlpXYHp+2Zgc9WrUDWvwlMpXLV5zXHUaHAkzpSrEk+i5B93GAWDuHg2fx
	C0HQGNFRpMUYutgSHjMBSuaMui6xt5DgvvL/rcGIdGqe0fnjfi3ZW4o9ax8NE5fw
	uCGegnQ=
X-Sasl-enc: JYvv+HMuHspEqc//kjZzov78aCWKzHq4Lf2SjHjniKyx 1356045434
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id 12A414827D5;
	Thu, 20 Dec 2012 18:17:13 -0500 (EST)
Message-ID: <50D39C73.906@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 00:17:07 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
In-Reply-To: <CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
X-Enigmail-Version: 1.4.5
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1744327927895738521=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============1744327927895738521==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigFB036A0B5D2380FA3E5E1AA9"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigFB036A0B5D2380FA3E5E1AA9
Content-Type: multipart/mixed;
 boundary="------------080109000601000203000902"

This is a multi-part message in MIME format.
--------------080109000601000203000902
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 20.12.2012 20:41, Ben Guthro wrote:
> See if the attached patch helps.
> This was a patch I forgot was still in my queue, that Jan had some
> objections about (though I don't recall specifically what they were)
> I had found that the scheduler was removing all cpus during suspend, and
> this seemed to help with that, IIRC.
> 
> That said - I'm starting to be forced to look back into S3 issues, as while
> I thought they were all resolved, they are not.
> I have an Ivybridge laptop that I didn't have back in May that exhibits
> some issues very similar to those that I was trying to solve with Jan in
> the thread I pasted above.
> 
> This particular machine goes to sleep, and when it resumes, the disk light
> flashes briefly, and then the sleep led goes back to pulsing.

With this patch applied (on 4.1.4 and 4.3-unstable), I've got the original
symptom - only CPU0 active after ACPI S3. I didn't get a reboot after a few
tries. "xl debug-key r" output attached.
Anyway, it looks like the bug was introduced somewhere between 4.1.2 and
4.1.3 - on 4.1.2 none of the above problems happened.

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab

--------------080109000601000203000902
Content-Type: text/plain; charset=UTF-8;
 name="xl-debug-key-r--only-cpu0-active.txt"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="xl-debug-key-r--only-cpu0-active.txt"

KFhFTikgc2NoZWRfc210X3Bvd2VyX3NhdmluZ3M6IGRpc2FibGVkCihYRU4pIE5PVz0weDAw
MDAwMDU5MkMzRjYyOUQKKFhFTikgSWRsZSBjcHVwb29sOgooWEVOKSBTY2hlZHVsZXI6IFNN
UCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCihYRU4pIGluZm86CihYRU4pIAluY3B1cyAg
ICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIgICAgICAgICAgICAgPSAwCihYRU4pIAlj
cmVkaXQgICAgICAgICAgICAgPSAxMjAwCihYRU4pIAljcmVkaXQgYmFsYW5jZSAgICAgPSA0
NjYKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDUxMgooWEVOKSAJcnVucV9zb3J0ICAg
ICAgICAgID0gMzcwOQooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0
c2xpY2UgICAgICAgICAgICAgPSAzMG1zCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAx
MAooWEVOKSAJdGlja3MgcGVyIHRzbGljZSAgID0gMwooWEVOKSAJbWlncmF0aW9uIGRlbGF5
ICAgID0gMHVzCihYRU4pIGlkbGVyczogMDAwMDAwMDAsMDAwMDAwMDAsMDAwMDAwMDAsMDAw
MDAwMGUKKFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJICAxOiBbMC4xXSBwcmk9MCBmbGFn
cz0wIGNwdT0wIGNyZWRpdD0yMDggW3c9NTEyXQooWEVOKSBDcHVwb29sIDA6CihYRU4pIFNj
aGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKKFhFTikgaW5mbzoKKFhF
TikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1hc3RlciAgICAgICAgICAgICA9
IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDEyMDAKKFhFTikgCWNyZWRpdCBiYWxh
bmNlICAgICA9IDQ2NgooWEVOKSAJd2VpZ2h0ICAgICAgICAgICAgID0gNTEyCihYRU4pIAly
dW5xX3NvcnQgICAgICAgICAgPSAzNzA5CihYRU4pIAlkZWZhdWx0LXdlaWdodCAgICAgPSAy
NTYKKFhFTikgCXRzbGljZSAgICAgICAgICAgICA9IDMwbXMKKFhFTikgCWNyZWRpdHMgcGVy
IG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNlICAgPSAzCihYRU4pIAltaWdy
YXRpb24gZGVsYXkgICAgPSAwdXMKKFhFTikgaWRsZXJzOiAwMDAwMDAwMCwwMDAwMDAwMCww
MDAwMDAwMCwwMDAwMDAwZQooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IFswLjFd
IHByaT0wIGZsYWdzPTAgY3B1PTAgY3JlZGl0PTIwOCBbdz01MTJdCihYRU4pIENQVVswMF0g
IHNvcnQ9MzcwOCwgc2libGluZz0wMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAw
MywgY29yZT0wMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAwZgooWEVOKSAJcnVu
OiBbMC4xXSBwcmk9MCBmbGFncz0wIGNwdT0wIGNyZWRpdD0yMDggW3c9NTEyXQooWEVOKSAJ
ICAxOiBbMzI3NjcuMF0gcHJpPS02NCBmbGFncz0wIGNwdT0wCgo=
--------------080109000601000203000902--

--------------enigFB036A0B5D2380FA3E5E1AA9
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ05xzAAoJENuP0xzK19csmFAH/1BSgdlrMaN/uD6hmetMWrHw
3zozTE69KQTrXR3o4Z46lOuxEcb+OLn64wgnqgbR4KpqzaxtPUaadL2zXBzYM+sq
oAqbncs9wjYB3DxCdfqMSUzWHIrILSKkVR76vD2jmt8tDBv4kmyKx+RFR84ebtcZ
bKSFxRrz3XwiSTsVcxL0Tqt9UGCpnu0QGTsG/qwSQ4xZGgPk1CZnG18XAfCmWoK2
mJJ54gZn7Wunqut2Sm8pzRJg5VvGumLHTeqzPS6GZe7fgXfKtugxswfjV8o8niDs
IKsUh/KVEHg4jQfvwT7nzovkrA0z5BZ1T2AlkmjXiiEs9EhgwBcCl1QJNsKuknc=
=987B
-----END PGP SIGNATURE-----

--------------enigFB036A0B5D2380FA3E5E1AA9--


--===============1744327927895738521==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1744327927895738521==--


From xen-devel-bounces@lists.xen.org Thu Dec 20 23:17:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Dec 2012 23:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlpMh-0007Gf-Dd; Thu, 20 Dec 2012 23:17:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TlpMf-0007Ga-Rj
	for xen-devel@lists.xen.org; Thu, 20 Dec 2012 23:17:22 +0000
Received: from [85.158.138.51:37443] by server-7.bemta-3.messagelabs.com id
	15/B0-23008-C7C93D05; Thu, 20 Dec 2012 23:17:16 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1356045434!29922192!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDE3NDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 557 invoked from network); 20 Dec 2012 23:17:15 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Dec 2012 23:17:15 -0000
Received: from compute2.internal (compute2.nyi.mail.srv.osa [10.202.2.42])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id B71A420A6A;
	Thu, 20 Dec 2012 18:17:14 -0500 (EST)
Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161])
	by compute2.internal (MEProxy); Thu, 20 Dec 2012 18:17:14 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=yD
	Sj+fs/9SCoGvGZLzuXPYxkX08=; b=GI/N3pPuc5+SI4j3F7NU0wmLUD4ppaxwM6
	xIQ6/cFWHnrJ7Wz+gxSUPanNBXRm7FaRnlXOl/eZGnw+LLNN+l4hdAqPkogO7O41
	grLtWI43Q2nfKXfPpsRNM0zu9IJ04e/5Uju05x4TJpJkUDq2HGq1LNe4zGJ2HLGP
	ctOhCEle0=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=yDSj
	+fs/9SCoGvGZLzuXPYxkX08=; b=ndcVfRMUToDaqf6nNFsqiaBThrZ+ivcv91Zh
	G+JYp9ndlpXYHp+2Zgc9WrUDWvwlMpXLV5zXHUaHAkzpSrEk+i5B93GAWDuHg2fx
	C0HQGNFRpMUYutgSHjMBSuaMui6xt5DgvvL/rcGIdGqe0fnjfi3ZW4o9ax8NE5fw
	uCGegnQ=
X-Sasl-enc: JYvv+HMuHspEqc//kjZzov78aCWKzHq4Lf2SjHjniKyx 1356045434
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id 12A414827D5;
	Thu, 20 Dec 2012 18:17:13 -0500 (EST)
Message-ID: <50D39C73.906@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 00:17:07 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
In-Reply-To: <CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
X-Enigmail-Version: 1.4.5
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1744327927895738521=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============1744327927895738521==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigFB036A0B5D2380FA3E5E1AA9"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigFB036A0B5D2380FA3E5E1AA9
Content-Type: multipart/mixed;
 boundary="------------080109000601000203000902"

This is a multi-part message in MIME format.
--------------080109000601000203000902
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 20.12.2012 20:41, Ben Guthro wrote:
> See if the attached patch helps.
> This was a patch I forgot was still in my queue, that Jan had some
> objections about (though I don't recall specifically what they were)
> I had found that the scheduler was removing all cpus during suspend, and
> this seemed to help with that, IIRC.
>
> That said - I'm starting to be forced to look back into S3 issues, as while
> I thought they were all resolved, they are not.
> I have an Ivybridge laptop that I didn't have back in May that exhibits
> some issues very similar to those that I was trying to solve with Jan in
> the thread I pasted above.
>
> This particular machine goes to sleep, and when it resumes, the disk light
> flashes briefly, and then the sleep led goes back to pulsing.

With this patch applied (on 4.1.4 and 4.3-unstable), I've got the original
symptom - only CPU0 active after ACPI S3. I didn't get a reboot after a few
tries. "xl debug-key r" output attached.
Anyway, it looks like the bug was introduced somewhere between 4.1.2 and
4.1.3 - on 4.1.2 none of the above problems happened.

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab

--------------080109000601000203000902
Content-Type: text/plain; charset=UTF-8;
 name="xl-debug-key-r--only-cpu0-active.txt"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="xl-debug-key-r--only-cpu0-active.txt"

KFhFTikgc2NoZWRfc210X3Bvd2VyX3NhdmluZ3M6IGRpc2FibGVkCihYRU4pIE5PVz0weDAw
MDAwMDU5MkMzRjYyOUQKKFhFTikgSWRsZSBjcHVwb29sOgooWEVOKSBTY2hlZHVsZXI6IFNN
UCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCihYRU4pIGluZm86CihYRU4pIAluY3B1cyAg
ICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIgICAgICAgICAgICAgPSAwCihYRU4pIAlj
cmVkaXQgICAgICAgICAgICAgPSAxMjAwCihYRU4pIAljcmVkaXQgYmFsYW5jZSAgICAgPSA0
NjYKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDUxMgooWEVOKSAJcnVucV9zb3J0ICAg
ICAgICAgID0gMzcwOQooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0
c2xpY2UgICAgICAgICAgICAgPSAzMG1zCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAx
MAooWEVOKSAJdGlja3MgcGVyIHRzbGljZSAgID0gMwooWEVOKSAJbWlncmF0aW9uIGRlbGF5
ICAgID0gMHVzCihYRU4pIGlkbGVyczogMDAwMDAwMDAsMDAwMDAwMDAsMDAwMDAwMDAsMDAw
MDAwMGUKKFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJICAxOiBbMC4xXSBwcmk9MCBmbGFn
cz0wIGNwdT0wIGNyZWRpdD0yMDggW3c9NTEyXQooWEVOKSBDcHVwb29sIDA6CihYRU4pIFNj
aGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKKFhFTikgaW5mbzoKKFhF
TikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1hc3RlciAgICAgICAgICAgICA9
IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDEyMDAKKFhFTikgCWNyZWRpdCBiYWxh
bmNlICAgICA9IDQ2NgooWEVOKSAJd2VpZ2h0ICAgICAgICAgICAgID0gNTEyCihYRU4pIAly
dW5xX3NvcnQgICAgICAgICAgPSAzNzA5CihYRU4pIAlkZWZhdWx0LXdlaWdodCAgICAgPSAy
NTYKKFhFTikgCXRzbGljZSAgICAgICAgICAgICA9IDMwbXMKKFhFTikgCWNyZWRpdHMgcGVy
IG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNlICAgPSAzCihYRU4pIAltaWdy
YXRpb24gZGVsYXkgICAgPSAwdXMKKFhFTikgaWRsZXJzOiAwMDAwMDAwMCwwMDAwMDAwMCww
MDAwMDAwMCwwMDAwMDAwZQooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IFswLjFd
IHByaT0wIGZsYWdzPTAgY3B1PTAgY3JlZGl0PTIwOCBbdz01MTJdCihYRU4pIENQVVswMF0g
IHNvcnQ9MzcwOCwgc2libGluZz0wMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAw
MywgY29yZT0wMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAwMCwwMDAwMDAwZgooWEVOKSAJcnVu
OiBbMC4xXSBwcmk9MCBmbGFncz0wIGNwdT0wIGNyZWRpdD0yMDggW3c9NTEyXQooWEVOKSAJ
ICAxOiBbMzI3NjcuMF0gcHJpPS02NCBmbGFncz0wIGNwdT0wCgo=
--------------080109000601000203000902--

--------------enigFB036A0B5D2380FA3E5E1AA9
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ05xzAAoJENuP0xzK19csmFAH/1BSgdlrMaN/uD6hmetMWrHw
3zozTE69KQTrXR3o4Z46lOuxEcb+OLn64wgnqgbR4KpqzaxtPUaadL2zXBzYM+sq
oAqbncs9wjYB3DxCdfqMSUzWHIrILSKkVR76vD2jmt8tDBv4kmyKx+RFR84ebtcZ
bKSFxRrz3XwiSTsVcxL0Tqt9UGCpnu0QGTsG/qwSQ4xZGgPk1CZnG18XAfCmWoK2
mJJ54gZn7Wunqut2Sm8pzRJg5VvGumLHTeqzPS6GZe7fgXfKtugxswfjV8o8niDs
IKsUh/KVEHg4jQfvwT7nzovkrA0z5BZ1T2AlkmjXiiEs9EhgwBcCl1QJNsKuknc=
=987B
-----END PGP SIGNATURE-----

--------------enigFB036A0B5D2380FA3E5E1AA9--


--===============1744327927895738521==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1744327927895738521==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 00:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 00:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlqKR-00088D-Ci; Fri, 21 Dec 2012 00:19:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlqKP-000888-HB
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 00:19:05 +0000
Received: from [85.158.139.211:45859] by server-6.bemta-5.messagelabs.com id
	55/F2-30498-8FAA3D05; Fri, 21 Dec 2012 00:19:04 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356049143!21420074!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMDMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5266 invoked from network); 21 Dec 2012 00:19:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 00:19:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="asc'?scan'208";a="292127"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 00:19:03 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 00:19:02 +0000
Message-ID: <1356049122.15403.34.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 01:18:42 +0100
In-Reply-To: <50D37359.9080001@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D37359.9080001@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>, Dan
	Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8130245919937370525=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8130245919937370525==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-MBicSVpLP1NUV/hbLCR7"

--=-MBicSVpLP1NUV/hbLCR7
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 20:21 +0000, George Dunlap wrote:
> > -    /*
> > -     * Pick from online CPUs in VCPU's affinity mask, giving a
> > -     * preference to its current processor if it's in there.
> > -     */
> >       online = cpupool_scheduler_cpumask(vc->domain->cpupool);
> > -    cpumask_and(&cpus, online, vc->cpu_affinity);
> > -    cpu = cpumask_test_cpu(vc->processor, &cpus)
> > -            ? vc->processor
> > -            : cpumask_cycle(vc->processor, &cpus);
> > -    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
> > +    for_each_csched_balance_step( balance_step )
> > +    {
> > +        /* Pick an online CPU from the proper affinity mask */
> > +        ret = csched_balance_cpumask(vc, balance_step, &cpus);
> > +        cpumask_and(&cpus, &cpus, online);
> >
> > -    /*
> > -     * Try to find an idle processor within the above constraints.
> > -     *
> > -     * In multi-core and multi-threaded CPUs, not all idle execution
> > -     * vehicles are equal!
> > -     *
> > -     * We give preference to the idle execution vehicle with the most
> > -     * idling neighbours in its grouping. This distributes work across
> > -     * distinct cores first and guarantees we don't do something stupid
> > -     * like run two VCPUs on co-hyperthreads while there are idle cores
> > -     * or sockets.
> > -     *
> > -     * Notice that, when computing the "idleness" of cpu, we may want to
> > -     * discount vc. That is, iff vc is the currently running and the only
> > -     * runnable vcpu on cpu, we add cpu to the idlers.
> > -     */
> > -    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> > -    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> > -        cpumask_set_cpu(cpu, &idlers);
> > -    cpumask_and(&cpus, &cpus, &idlers);
> > -    cpumask_clear_cpu(cpu, &cpus);
> > +        /* If present, prefer vc's current processor */
> > +        cpu = cpumask_test_cpu(vc->processor, &cpus)
> > +                ? vc->processor
> > +                : cpumask_cycle(vc->processor, &cpus);
> > +        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
> >
> > -    while ( !cpumask_empty(&cpus) )
> > -    {
> > -        cpumask_t cpu_idlers;
> > -        cpumask_t nxt_idlers;
> > -        int nxt, weight_cpu, weight_nxt;
> > -        int migrate_factor;
> > +        /*
> > +         * Try to find an idle processor within the above constraints.
> > +         *
> > +         * In multi-core and multi-threaded CPUs, not all idle execution
> > +         * vehicles are equal!
> > +         *
> > +         * We give preference to the idle execution vehicle with the most
> > +         * idling neighbours in its grouping. This distributes work across
> > +         * distinct cores first and guarantees we don't do something stupid
> > +         * like run two VCPUs on co-hyperthreads while there are idle cores
> > +         * or sockets.
> > +         *
> > +         * Notice that, when computing the "idleness" of cpu, we may want to
> > +         * discount vc. That is, iff vc is the currently running and the only
> > +         * runnable vcpu on cpu, we add cpu to the idlers.
> > +         */
> > +        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> > +        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
> > +            cpumask_set_cpu(cpu, &idlers);
> > +        cpumask_and(&cpus, &cpus, &idlers);
> > +        /* If there are idlers and cpu is still not among them, pick one */
> > +        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
> > +            cpu = cpumask_cycle(cpu, &cpus);
>
> This seems to be an addition to the algorithm -- particularly hidden in
> this kind of "indent a big section that's almost exactly the same", I
> think this at least needs to be called out in the changelog message,
> perhaps put in a separate patch.
>
You're right, it is an addition, although a minor enough one (at least
from the amount-of-code point of view; the effect of not having it was
pretty bad! :-P) that I thought it could "hide" here. :-)

But I guess I can put it in a separate patch.

> Can you comment on why you think it's necessary?  Was there a
> particular problem you were seeing?
>
Yep. Suppose vc is for some reason running on a pcpu which is outside
its node-affinity, but that now some pcpus within vc's node-affinity
have become idle. What we would like is for vc to start running there as
soon as possible, so we expect this call to _csched_pick_cpu() to
determine that.

What happens is that we do not use vc->processor (as it is outside of
vc's node-affinity), and 'cpu' gets set to cpumask_cycle(vc->processor,
&cpus), where 'cpus' is the result of cpumask_and(&cpus, balance_mask,
online). Let's also suppose that 'cpu' now points to a busy thread with
an idle sibling, and that there aren't any other idle pcpus (either
cores or threads). Now, the algorithm evaluates the idleness of 'cpu'
and compares it with the idleness of all the other pcpus, and it won't
find anything better than 'cpu' itself: all the other pcpus except its
sibling thread are busy, while its sibling thread has the very same
idleness 'cpu' has (2 threads, 1 idle, 1 busy).

The net effect is vc being moved to 'cpu', which is busy, while it
could have been moved to 'cpu''s sibling thread, which is indeed idle.

The if() I added fixes this by making sure that the reference cpu is an
idle one (if that is possible).

I hope I've explained it correctly, and sorry if it is a little bit
tricky, especially to explain like this (although, believe me, it was
tricky to hunt it down too! :-P). I've seen it happen and I'm almost
sure I kept a trace somewhere, so let me know if you want to see the
"smoking gun". :-)

> > -        weight_cpu = cpumask_weight(&cpu_idlers);
> > -        weight_nxt = cpumask_weight(&nxt_idlers);
> > -        /* smt_power_savings: consolidate work rather than spreading it */
> > -        if ( sched_smt_power_savings ?
> > -             weight_cpu > weight_nxt :
> > -             weight_cpu * migrate_factor < weight_nxt )
> > -        {
> > -            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> > -            spc = CSCHED_PCPU(nxt);
> > -            cpu = cpumask_cycle(spc->idle_bias, &nxt_idlers);
> > -            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu));
> > -        }
> > -        else
> > -        {
> > -            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> > -        }
> > +        /* Stop if cpu is idle (or if csched_balance_cpumask() says we can) */
> > +        if ( cpumask_test_cpu(cpu, &idlers) || ret )
> > +            break;
>
> Right -- OK, I think everything looks good here, except the "return -1
> from csched_balance_cpumask" thing.  I think it would be better if we
> explicitly checked cpumask_full(...->node_affinity_cpumask) and skipped
> the NODE step if that's the case.
>
Yep, will do this, or something along these lines, all over the place.
Thanks.

> Also -- and sorry to have to ask this kind of thing, but after sorting
> through the placement algorithm my head hurts -- under what
> circumstances would "cpumask_test_cpu(cpu, &idlers)" be false at this
> point?  It seems like the only possibility would be if:
> ( (vc->processor was not in the original &cpus [1])
>    || !IS_RUNQ_IDLE(vc->processor) )
> && (there are no idlers in the original &cpus)
>
> Which I suppose probably matches the time when we want to move on from
> looking at NODE affinity and look for CPU affinity.
>
> [1] This could happen either if the vcpu/node affinity has changed, or
> if we're currently running outside our node affinity and we're doing the
> NODE step.
>
> OK -- I think I've convinced myself that this is OK as well (apart from
> the hidden check).  I'll come back to look at your response to the load
> balancing thing tomorrow.
>
Mmm... Sorry, not sure I follow; does this mean that you figured out
and understood why I need that 'if(){break;}'? It sounds like it, but I
can't be sure (my head hurts a bit too, after having written that
thing! :-D).

If not, consider that, for example, it can be false if all the pcpus in
the mask for this step are busy. If this step is the node-affinity
step, I do _not_ want to exit the balancing loop then, so that the
other pcpus in the vcpu-affinity get checked too. OTOH, if I don't put
a break somewhere, even when an idle pcpu is found during the
node-affinity step, the loop will just go on and check all the other
pcpus in the vcpu-affinity, which would defeat the purpose of having
tried to balance here. Does that make sense?

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-MBicSVpLP1NUV/hbLCR7
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDTquIACgkQk4XaBE3IOsTfswCffr3p5ICKYK3fsD0FrZAf8lnE
wWwAn26/WVy6jT/O49/qY9yFZsgZm9mM
=8hhW
-----END PGP SIGNATURE-----

--=-MBicSVpLP1NUV/hbLCR7--


--===============8130245919937370525==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8130245919937370525==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 00:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 00:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlqKR-00088D-Ci; Fri, 21 Dec 2012 00:19:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1TlqKP-000888-HB
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 00:19:05 +0000
Received: from [85.158.139.211:45859] by server-6.bemta-5.messagelabs.com id
	55/F2-30498-8FAA3D05; Fri, 21 Dec 2012 00:19:04 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356049143!21420074!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMDMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5266 invoked from network); 21 Dec 2012 00:19:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 00:19:03 -0000
X-IronPort-AV: E=Sophos;i="4.84,326,1355097600"; d="asc'?scan'208";a="292127"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 00:19:03 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 00:19:02 +0000
Message-ID: <1356049122.15403.34.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 01:18:42 +0100
In-Reply-To: <50D37359.9080001@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D37359.9080001@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>, Dan
	Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8130245919937370525=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8130245919937370525==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-MBicSVpLP1NUV/hbLCR7"

--=-MBicSVpLP1NUV/hbLCR7
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-12-20 at 20:21 +0000, George Dunlap wrote:=20
> > -    /*
> > -     * Pick from online CPUs in VCPU's affinity mask, giving a
> > -     * preference to its current processor if it's in there.
> > -     */
> >       online =3D cpupool_scheduler_cpumask(vc->domain->cpupool);
> > -    cpumask_and(&cpus, online, vc->cpu_affinity);
> > -    cpu =3D cpumask_test_cpu(vc->processor, &cpus)
> > -            ? vc->processor
> > -            : cpumask_cycle(vc->processor, &cpus);
> > -    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
> > +    for_each_csched_balance_step( balance_step )
> > +    {
> > +        /* Pick an online CPU from the proper affinity mask */
> > +        ret =3D csched_balance_cpumask(vc, balance_step, &cpus);
> > +        cpumask_and(&cpus, &cpus, online);
> >
> > -    /*
> > -     * Try to find an idle processor within the above constraints.
> > -     *=20
> > -     * In multi-core and multi-threaded CPUs, not all idle execution
> > -     * vehicles are equal!
> > -     *=20
> > -     * We give preference to the idle execution vehicle with the most
> > -     * idling neighbours in its grouping. This distributes work across
> > -     * distinct cores first and guarantees we don't do something stupi=
d
> > -     * like run two VCPUs on co-hyperthreads while there are idle core=
s
> > -     * or sockets.
> > -     *=20
> > -     * Notice that, when computing the "idleness" of cpu, we may want =
to
> > -     * discount vc. That is, iff vc is the currently running and the o=
nly
> > -     * runnable vcpu on cpu, we add cpu to the idlers.
> > -     */=20
> > -    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
> > -    if ( vc->processor =3D=3D cpu && IS_RUNQ_IDLE(cpu) )
> > -        cpumask_set_cpu(cpu, &idlers);
> > -    cpumask_and(&cpus, &cpus, &idlers);
> > -    cpumask_clear_cpu(cpu, &cpus);
> > +        /* If present, prefer vc's current processor */
> > +        cpu =3D cpumask_test_cpu(vc->processor, &cpus)
> > +                ? vc->processor
> > +                : cpumask_cycle(vc->processor, &cpus);=20
> > +        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) =
);
> >=20
> > -    while ( !cpumask_empty(&cpus) )
> > -    {
> > -        cpumask_t cpu_idlers;
> > -        cpumask_t nxt_idlers;
> > -        int nxt, weight_cpu, weight_nxt;
> > -        int migrate_factor;
> > +        /*
> > +         * Try to find an idle processor within the above constraints.
> > +         *
> > +         * In multi-core and multi-threaded CPUs, not all idle executi=
on
> > +         * vehicles are equal!
> > +         *
> > +         * We give preference to the idle execution vehicle with the m=
ost
> > +         * idling neighbours in its grouping. This distributes work ac=
ross
> > +         * distinct cores first and guarantees we don't do something s=
tupid
> > +         * like run two VCPUs on co-hyperthreads while there are idle =
cores
> > +         * or sockets.
> > +         *
> > +         * Notice that, when computing the "idleness" of cpu, we may w=
ant to
> > +         * discount vc. That is, iff vc is the currently running and t=
he only
> > +         * runnable vcpu on cpu, we add cpu to the idlers.
> > +         */
> > +        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers=
);
> > +        if ( vc->processor =3D=3D cpu && IS_RUNQ_IDLE(cpu) )
> > +            cpumask_set_cpu(cpu, &idlers);
> > +        cpumask_and(&cpus, &cpus, &idlers);
> > +        /* If there are idlers and cpu is still not among them, pick o=
ne */
> > +        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
> > +            cpu =3D cpumask_cycle(cpu, &cpus);
>=20
> This seems to be an addition to the algorithm -- particularly hidden in=
=20
> this kind of "indent a big section that's almost exactly the same", I=20
> think this at least needs to be called out in the changelog message,=20
> perhaps put in a separate patch.
>=20
You're right, it is an addition, although a minor enough one (at least
from the amount of code point of view, the effect of not having it was
pretty bad! :-P) that I thought it can "hide" here. :-)

But I guess I can put it in a separate patch.

> Can you comment on to why you think it's necessary?  Was there a=20
> particular problem you were seeing?
>=20
Yep. Suppose vc is for some reason running on a pcpu which is outside
its node-affinity, but that now some pcpus within vc's node-affinity
have become idle. What we would like is vc start running there as soon
as possible, so we expect this call to _csched_pick_cpu() to determine
that.

What happens is we do not use vc->processor (as it is outside of vc's
node-affinity) and 'cpu' gets set to cpumask_cycle(vc->processor,
&cpus), where 'cpu' is the cpumask_and(&cpus, balance_mask, online).
This means 'cpu'. Let's also suppose that 'cpu' now points to a busy
thread but with an idle sibling, and that there aren't any other idle
pcpus (either core or threads). Now, the algorithm evaluates the
idleness of 'cpu', and compares it with the idleness of all the other
pcpus, and it won't find anything better that 'cpu' itself, as all the
other pcpus except its sibling thread are busy, while its sibling thread
has the very same idleness it has (2 threads, 1 idle 1 busy).

The neat effect is vc being moved to 'cpu', which is busy, while it
could have been moved to 'cpu''s sibling thread, which is indeed idle.

The if() I added fixes this by making sure that the reference cpu is an
idle one (if that is possible).

I hope I've explained it correctly, and sorry if it is a little bit
tricky, especially to explain like this (although, believe me, it was
tricky to hunt it out too! :-P). I've seen that happening and I'm almost
sure I kept a trace somewhere, so let me know if you want to see the
"smoking gun". :-)

> > -        weight_cpu =3D cpumask_weight(&cpu_idlers);
> > -        weight_nxt =3D cpumask_weight(&nxt_idlers);
> > -        /* smt_power_savings: consolidate work rather than spreading i=
t */
> > -        if ( sched_smt_power_savings ?
> > -             weight_cpu > weight_nxt :
> > -             weight_cpu * migrate_factor < weight_nxt )
> > -        {
> > -            cpumask_and(&nxt_idlers, &cpus, &nxt_idlers);
> > -            spc =3D CSCHED_PCPU(nxt);
> > -            cpu =3D cpumask_cycle(spc->idle_bias, &nxt_idlers);
> > -            cpumask_andnot(&cpus, &cpus, per_cpu(cpu_sibling_mask, cpu=
));
> > -        }
> > -        else
> > -        {
> > -            cpumask_andnot(&cpus, &cpus, &nxt_idlers);
> > -        }
> > +        /* Stop if cpu is idle (or if csched_balance_cpumask() says we=
 can) */
> > +        if ( cpumask_test_cpu(cpu, &idlers) || ret )
> > +            break;
>=20
> Right -- OK, I think everything looks good here, except the "return -1=
=20
> from csched_balance_cpumask" thing.  I think it would be better if we=20
> explicitly checked cpumask_full(...->node_affinity_cpumask) and skipped=
=20
> the NODE step if that's the case.
>=20
Yep. Will do this, or something along this line, all over the place.
Thanks.

> Also -- and sorry to have to ask this kind of thing, but after sorting=
=20
> through the placement algorithm my head hurts -- under what=20
> circumstances would "cpumask_test_cpu(cpu, &idlers)" be false at this=20
> point?  It seems like the only possibility would be if:
> ( (vc->processor was not in the original &cpus [1])
>    || !IS_RUNQ_IDLE(vc->processor) )
> && (there are no idlers in the original &cpus)
>=20
> Which I suppose probably matches the time when we want to move on from=
=20
> looking at NODE affinity and look for CPU affinity.
>=20
> [1] This could happen either if the vcpu/node affinity has changed, or=
=20
> if we're currently running outside our node affinity and we're doing the=
=20
> NODE step.
>=20
> OK -- I think I've convinced myself that this is OK as well (apart from=
=20
> the hidden check).  I'll come back to look at your response to the load=
=20
> balancing thing tomorrow.
>=20
Mmm... Sorry, I'm not sure I follow: does this mean that you figured out
and understood why I need that 'if(){break;}'? It sounds like it, but I
can't be sure (my head hurts a bit too, after having written that
thing! :-D).

If not, consider that, for example, it can be false if all the pcpus in
the mask for this step are busy; if this step is the node-affinity step,
I do _not_ want to exit the balancing loop at that point, but rather go
on and check the other pcpus in the vcpu-affinity. OTOH, if I don't put
a break somewhere, then even if an idle pcpu is found during the
node-affinity balancing step, the loop would just go on and check all
the other pcpus in the vcpu-affinity, which would defeat the purpose of
trying to do the balancing here. Does that make sense?
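For what it's worth, the two-step structure being discussed (try the narrower node-affinity mask first, stop as soon as an idle pcpu turns up, otherwise fall through to the full vcpu-affinity) can be sketched roughly as below. The names and the plain-bitmask representation are illustrative only, not the actual csched code:

```c
enum { BALANCE_NODE = 0, BALANCE_CPU = 1, NR_STEPS = 2 };

/* Return the lowest-numbered pcpu set in both masks, or -1 if none. */
static int find_idler(unsigned mask, unsigned idlers)
{
    unsigned both = mask & idlers;
    for (int c = 0; c < 32; c++)
        if (both & (1u << c))
            return c;
    return -1;
}

/* Two-step pick: node-affinity first, then the wider vcpu-affinity.
 * The early return plays the role of the 'if(){break;}' above: without
 * it, the scan would continue over the whole vcpu-affinity even after
 * the narrower node-affinity step had already found an idle pcpu. */
int pick_cpu(unsigned node_aff, unsigned vcpu_aff, unsigned idlers)
{
    for (int step = 0; step < NR_STEPS; step++) {
        unsigned mask = (step == BALANCE_NODE) ? (node_aff & vcpu_aff)
                                               : vcpu_aff;
        int cpu = find_idler(mask, idlers);
        if (cpu >= 0)
            return cpu;  /* found an idler: stop balancing */
    }
    return -1;  /* every pcpu in the vcpu-affinity is busy */
}
```

With node-affinity 0x3 and an idler at pcpu 1, the first step wins; with only pcpu 2 idle, the node step finds nothing and the vcpu-affinity step picks it up.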

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Dec 21 01:04:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 01:04:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlr1U-0003vP-Q0; Fri, 21 Dec 2012 01:03:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1Tlr1T-0003vK-2x
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 01:03:35 +0000
Received: from [85.158.143.35:38988] by server-3.bemta-4.messagelabs.com id
	2C/4A-18211-665B3D05; Fri, 21 Dec 2012 01:03:34 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1356051813!11449562!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20587 invoked from network); 21 Dec 2012 01:03:33 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 01:03:33 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 5C4E784080
	for <xen-devel@lists.xensource.com>;
	Fri, 21 Dec 2012 02:03:32 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id 7qoi-ScoiDAg for <xen-devel@lists.xensource.com>;
	Fri, 21 Dec 2012 02:03:32 +0100 (CET)
Received: from type.ipv6 (youpi.is-a-geek.org [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id EAFD18407B
	for <xen-devel@lists.xensource.com>;
	Fri, 21 Dec 2012 02:03:31 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.80)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1Tlr1O-0002AW-JJ
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 02:03:30 +0100
Date: Fri, 21 Dec 2012 02:03:30 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: xen-devel@lists.xensource.com
Message-ID: <20121221010330.GB6132@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	xen-devel@lists.xensource.com
References: <E1TiHWp-00077m-3w@xenbits.xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E1TiHWp-00077m-3w@xenbits.xen.org>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Subject: [Xen-devel] mini-os: Notify shutdown through weak function call
 instead of wake queue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To allow for more flexibility, this notifies domain shutdown through a
function call rather than a wake queue, so that the application uses a
wake queue only if it wishes to.

Signed-off-by: Samuel Thibault <samuel.thibaut@ens-lyon.org>

diff -r 090cc3e20d3e extras/mini-os/include/kernel.h
--- a/extras/mini-os/include/kernel.h	Wed Dec 19 16:04:50 2012 +0000
+++ b/extras/mini-os/include/kernel.h	Fri Dec 21 02:01:46 2012 +0100
@@ -1,9 +1,6 @@
 #ifndef _KERNEL_H_
 #define _KERNEL_H_
 
-extern unsigned int do_shutdown;
-extern unsigned int shutdown_reason;
-extern struct wait_queue_head shutdown_queue;
 extern void do_exit(void) __attribute__((noreturn));
 extern void stop_kernel(void);
 
diff -r 090cc3e20d3e extras/mini-os/kernel.c
--- a/extras/mini-os/kernel.c	Wed Dec 19 16:04:50 2012 +0000
+++ b/extras/mini-os/kernel.c	Fri Dec 21 02:01:46 2012 +0100
@@ -48,12 +48,6 @@
 
 uint8_t xen_features[XENFEAT_NR_SUBMAPS * 32];
 
-#ifdef CONFIG_XENBUS
-unsigned int do_shutdown = 0;
-unsigned int shutdown_reason;
-DECLARE_WAIT_QUEUE_HEAD(shutdown_queue);
-#endif
-
 void setup_xen_features(void)
 {
     xen_feature_info_t fi;
@@ -71,12 +65,19 @@
 }
 
 #ifdef CONFIG_XENBUS
+/* This should be overridden by the application we are linked against. */
+__attribute__((weak)) void app_shutdown(unsigned reason)
+{
+    printk("Shutdown requested: %d\n", reason);
+}
+
 static void shutdown_thread(void *p)
 {
     const char *path = "control/shutdown";
     const char *token = path;
     xenbus_event_queue events = NULL;
     char *shutdown, *err;
+    unsigned int shutdown_reason;
     xenbus_watch_path_token(XBT_NIL, path, token, &events);
     while ((err = xenbus_read(XBT_NIL, path, &shutdown)) != NULL)
     {
@@ -94,10 +95,7 @@
     else
         /* Unknown */
         shutdown_reason = SHUTDOWN_crash;
-    wmb();
-    do_shutdown = 1;
-    wmb();
-    wake_up(&shutdown_queue);
+    app_shutdown(shutdown_reason);
 }
 #endif
 
diff -r 090cc3e20d3e extras/mini-os/test.c
--- a/extras/mini-os/test.c	Wed Dec 19 16:04:50 2012 +0000
+++ b/extras/mini-os/test.c	Fri Dec 21 02:01:46 2012 +0100
@@ -45,6 +45,10 @@
 #include <xen/features.h>
 #include <xen/version.h>
 
+static unsigned int do_shutdown = 0;
+static unsigned int shutdown_reason;
+static DECLARE_WAIT_QUEUE_HEAD(shutdown_queue);
+
 static struct netfront_dev *net_dev;
 static struct semaphore net_sem = __SEMAPHORE_INITIALIZER(net_sem, 0);
 
@@ -487,6 +491,15 @@
 #endif
 }
 
+void app_shutdown(unsigned reason)
+{
+    shutdown_reason = reason;
+    wmb();
+    do_shutdown = 1;
+    wmb();
+    wake_up(&shutdown_queue);
+}
+
 static void shutdown_thread(void *p)
 {
     DEFINE_WAIT(w);


From xen-devel-bounces@lists.xen.org Fri Dec 21 01:15:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 01:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlrCh-00046y-3b; Fri, 21 Dec 2012 01:15:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlrCf-00046t-6w
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 01:15:09 +0000
Received: from [85.158.143.99:32610] by server-3.bemta-4.messagelabs.com id
	71/5D-18211-C18B3D05; Fri, 21 Dec 2012 01:15:08 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1356052505!29381798!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjM5OTEw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18258 invoked from network); 21 Dec 2012 01:15:06 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-13.tower-216.messagelabs.com with SMTP;
	21 Dec 2012 01:15:06 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 20 Dec 2012 17:14:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,327,1355126400"; d="scan'208";a="234916003"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 20 Dec 2012 17:14:49 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 20 Dec 2012 17:14:49 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Fri, 21 Dec 2012 09:14:45 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v3 08/10] nEPT: handle invept instruction from L1 VMM
Thread-Index: AQHN3pgVW/gjgy5bqkC75F/XLyKjZZgicUOw
Date: Fri, 21 Dec 2012 01:14:45 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BD64A@SHSMSX101.ccr.corp.intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-9-git-send-email-xiantao.zhang@intel.com>
	<50D2EE7A02000078000B1B91@nat28.tlf.novell.com>
In-Reply-To: <50D2EE7A02000078000B1B91@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Dong, Eddie" <eddie.dong@intel.com>,
	"tim@xen.org" <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v3 08/10] nEPT: handle invept instruction
	from L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, December 20, 2012 5:55 PM
> To: Zhang, Xiantao
> Cc: Dong, Eddie; Nakajima, Jun; xen-devel@lists.xen.org; keir@xen.org;
> tim@xen.org
> Subject: Re: [PATCH v3 08/10] nEPT: handle invept instruction from L1 VMM
> 
> >>> On 20.12.12 at 16:43, Xiantao Zhang <xiantao.zhang@intel.com> wrote:
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -2573,10 +2573,14 @@ void vmx_vmexit_handler(struct
> cpu_user_regs *regs)
> >              update_guest_eip();
> >          break;
> >
> > +    case EXIT_REASON_INVEPT:
> > +        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
> > +            update_guest_eip();
> > +        break;
> > +
> 
> I realize that you're just copying code written the same way elsewhere, but:
> What if (here and elsewhere) X86EMUL_OKAY is not being returned (e.g. in
> the non-nested case)? Without the nested VMX code, all of these would
> have ended up at the default case (crashing the guest). Iiuc the correct action
> would be to inject an exception at least when X86EMUL_EXCEPTION is being
> returned here - whether that's done here or (perhaps better, as only it can
> know _what_ exception to inject) by the callee is another thing to decide.
> 
> Also, at the example of nvmx_handle_vmclear() I see that it produces
> exceptions in most of the cases, but I think all of the related code needs
> auditing that things are being handled consistently _and_ completely
> (constructs like
> 
>     if ( ... != X86EMUL_OKAY )
>         return X86EMUL_EXCEPTION;
> 
> are definitely not okay, as there are further X86EMUL_* values that can
> occur; if you know only the two must ever occur at a given place, ASSERT() so,
> making things clear to the reader without having to follow all code paths).
Hi Jan,
I think it is better for the callee to be responsible for handling the exception before returning X86EMUL_EXCEPTION to its caller.
So, for the two newly-introduced functions, I will add the logic to handle their possible exceptions before they return.
Xiantao
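A rough sketch of the convention being agreed on here, where the callee injects the exception itself and only then returns X86EMUL_EXCEPTION, while the dispatcher asserts that nothing else can come back, might look like the following. The enum values mirror the x86 emulator interface, but the handler is a stand-in, not the real nvmx code:

```c
#include <assert.h>

enum x86emul_rc {
    X86EMUL_OKAY,
    X86EMUL_EXCEPTION,
    X86EMUL_RETRY,
    X86EMUL_UNHANDLEABLE
};

/* Stand-in for nvmx_handle_invept(): on failure it is assumed to have
 * already injected the appropriate exception into the guest before
 * returning X86EMUL_EXCEPTION, per the discussion above. */
static enum x86emul_rc handle_invept_stub(int ok)
{
    return ok ? X86EMUL_OKAY : X86EMUL_EXCEPTION;
}

/* Exit-reason dispatch: advance the guest RIP only on success, and
 * make the "only these two values can occur here" claim explicit
 * with an ASSERT-style check, as Jan suggests. */
int dispatch_invept(int ok)
{
    enum x86emul_rc rc = handle_invept_stub(ok);
    assert(rc == X86EMUL_OKAY || rc == X86EMUL_EXCEPTION);
    return rc == X86EMUL_OKAY;
}
```

The assertion documents the invariant for the reader instead of forcing them to trace every return path of the callee.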





From xen-devel-bounces@lists.xen.org Fri Dec 21 01:28:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 01:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlrOx-0004K9-JZ; Fri, 21 Dec 2012 01:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1TlrOw-0004K4-JI
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 01:27:50 +0000
Received: from [85.158.143.99:18632] by server-1.bemta-4.messagelabs.com id
	34/B9-28401-51BB3D05; Fri, 21 Dec 2012 01:27:49 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356053268!23560335!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMyNDkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13696 invoked from network); 21 Dec 2012 01:27:48 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-12.tower-216.messagelabs.com with SMTP;
	21 Dec 2012 01:27:48 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 20 Dec 2012 17:27:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,327,1355126400"; d="scan'208";a="260486932"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga002.jf.intel.com with ESMTP; 20 Dec 2012 17:27:47 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 20 Dec 2012 17:27:47 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 20 Dec 2012 17:27:46 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Fri, 21 Dec 2012 09:27:43 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT &
	VPID support to L1 VMM
Thread-Index: AQHN3rnVZNf+wHscSEeSvyGACTp8x5gic97g
Date: Fri, 21 Dec 2012 01:27:43 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BD692@SHSMSX101.ccr.corp.intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<20121220135558.GM80837@ocelot.phlegethon.org>
In-Reply-To: <20121220135558.GM80837@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"Dong, Eddie" <eddie.dong@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang, 
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT & VPID
 support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Oops! It seems my dev machine has the wrong date setting. Anyway, it tells us the End of the World is not real, because we have some mails sent out after the END. :)
Xiantao

> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, December 20, 2012 9:56 PM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xen.org; keir@xen.org; Nakajima, Jun; Dong, Eddie;
> JBeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT &
> VPID support to L1 VMM
> 
> Hi,
> 
> At 23:43 +0800 on 20 Dec (1356047021), Xiantao Zhang wrote:
> > Received: from hax-build.sh.intel.com ([10.239.48.28])
> >         by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:04 -0800
> > From: Xiantao Zhang <xiantao.zhang@intel.com>
> > To: xen-devel@lists.xen.org
> > Date: Thu, 20 Dec 2012 23:43:41 +0800
> 
> I think the clock on your computer or your email client is confused:
> your email is datestamped about 12 hours in the future.
> 
> Tim.


Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"Dong, Eddie" <eddie.dong@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang, 
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT & VPID
 support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Oops! It seems my dev machine has the wrong date setting. Anyway, it tells us the End of the World is not real, because we have some mails sent out after the END. :)
Xiantao

> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, December 20, 2012 9:56 PM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xen.org; keir@xen.org; Nakajima, Jun; Dong, Eddie;
> JBeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH v3 00/10] Nested VMX: Add virtual EPT &
> VPID support to L1 VMM
> 
> Hi,
> 
> At 23:43 +0800 on 20 Dec (1356047021), Xiantao Zhang wrote:
> > Received: from hax-build.sh.intel.com ([10.239.48.28])
> >         by fmsmga001.fm.intel.com with ESMTP; 19 Dec 2012 19:59:04 -0800
> > From: Xiantao Zhang <xiantao.zhang@intel.com>
> > To: xen-devel@lists.xen.org
> > Date: Thu, 20 Dec 2012 23:43:41 +0800
> 
> I think the clock on your computer or your email client is confused:
> your email is datestamped about 12 hours in the future.
> 
> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 03:52:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 03:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tlte9-0006VV-Kg; Fri, 21 Dec 2012 03:51:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tlte8-0006VQ-Ef
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 03:51:40 +0000
Received: from [193.109.254.147:12855] by server-16.bemta-14.messagelabs.com
	id 78/6C-18932-BCCD3D05; Fri, 21 Dec 2012 03:51:39 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356061897!9108577!1
X-Originating-IP: [209.85.210.175]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27878 invoked from network); 21 Dec 2012 03:51:38 -0000
Received: from mail-ia0-f175.google.com (HELO mail-ia0-f175.google.com)
	(209.85.210.175)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 03:51:38 -0000
Received: by mail-ia0-f175.google.com with SMTP id z3so3463973iad.20
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 19:51:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=jxEfNUXjDnewjGtpHQGty8Fvy0dY01bMyVkbPEUzOm4=;
	b=Vv67XWCvHE+RjUBrNOXO0h8Z22lLiroL4UgUGYIUZpCKwnSgt4ONmD895cHIO9lkxK
	iCAHqiGV7yWmqkQRVne9rF1mOno67FHc4R7CVaE/eMBKnya/585EpNwOIeNw6SOhqrAI
	pcasEQ9HDV3Tq7OpE9hae5huGz/BVaaQ2L3tQ/3/Hca2ySXMbMJGYlHemvEOH6nlQf5q
	FAGoub5+/gI9DbvOtCvKWMGCTnvp1NK+Ca3lyqJs40T0ekoxkM0DZqSn7LqFZilLgQRx
	m2zHK00Lz0tOmOpW6Lwk43nGiKFib25lZW9dl26KFqEQsBQ0I/FbZ/diZnvoSrKTCLh2
	L6tg==
MIME-Version: 1.0
Received: by 10.50.40.133 with SMTP id x5mr12364042igk.32.1356061896866; Thu,
	20 Dec 2012 19:51:36 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 19:51:36 -0800 (PST)
In-Reply-To: <1356033034.14417.67.camel@dagon.hellion.org.uk>
References: <CCF8BD23.561A3%keir@xen.org>
	<1356033034.14417.67.camel@dagon.hellion.org.uk>
Date: Fri, 21 Dec 2012 11:51:36 +0800
X-Google-Sender-Auth: XQ8-kEqNLr34nzIa-aLyydVP2gg
Message-ID: <CAKhsbWa7B2Y3vHWSqX9Rh4hmnaOr3ZDHEN0FwP6fwU3YwwwwHg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 21, 2012 at 3:50 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-12-20 at 13:03 +0000, Keir Fraser wrote:
>> On 20/12/2012 10:41, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
>> > Do we need to worry about what is in the "slop" at either end of a 3
>> > page region containing this? If they are sensitive registers then we may
>> > have a problem.
>>
>> In the hvmloader patch it is not worth it I think, one extra page of memory
>> hole is hardly a scarce resource.
>
> I didn't mean the wastage, I meant the contents (registers) at the
> physical addresses either immediately before or after the OpRegion.
>
The bad news is that we have no idea.
In section 2.3, the spec says the firmware is responsible for
creating the OpRegion at boot, so the layout may be firmware-specific.

But I think we need to get confirmation from ACPI / firmware
experts. Who would that be?

Please remember that even though the bundling potentially worsens the
case, the potential hole is already open in the current code base.
I think you should evaluate the risk whether or not this patch
is accepted.

Thanks,
Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 03:58:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 03:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tltk3-0006dK-El; Fri, 21 Dec 2012 03:57:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tltk2-0006dE-0T
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 03:57:46 +0000
Received: from [85.158.139.83:62678] by server-5.bemta-5.messagelabs.com id
	84/10-22648-93ED3D05; Fri, 21 Dec 2012 03:57:45 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1356062264!28683827!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMDMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 671 invoked from network); 21 Dec 2012 03:57:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 03:57:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,327,1355097600"; 
   d="scan'208";a="293194"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 03:57:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 03:57:43 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tltjz-00046d-Pl;
	Fri, 21 Dec 2012 03:57:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tltjz-0006XJ-Lf;
	Fri, 21 Dec 2012 03:57:43 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14802-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Dec 2012 03:57:43 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14802: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0644332909978883995=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0644332909978883995==
Content-Type: text/plain

flight 14802 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14802/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14796
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14796

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  6f5c96855a9e
baseline version:
 xen                  090cc3e20d3e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=6f5c96855a9e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 6f5c96855a9e
+ branch=xen-unstable
+ revision=6f5c96855a9e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 6f5c96855a9e ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files


--===============0644332909978883995==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0644332909978883995==--

From xen-devel-bounces@lists.xen.org Fri Dec 21 03:58:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 03:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tltk3-0006dK-El; Fri, 21 Dec 2012 03:57:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tltk2-0006dE-0T
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 03:57:46 +0000
Received: from [85.158.139.83:62678] by server-5.bemta-5.messagelabs.com id
	84/10-22648-93ED3D05; Fri, 21 Dec 2012 03:57:45 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1356062264!28683827!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMDMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 671 invoked from network); 21 Dec 2012 03:57:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 03:57:44 -0000
X-IronPort-AV: E=Sophos;i="4.84,327,1355097600"; 
   d="scan'208";a="293194"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 03:57:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 03:57:43 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Tltjz-00046d-Pl;
	Fri, 21 Dec 2012 03:57:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Tltjz-0006XJ-Lf;
	Fri, 21 Dec 2012 03:57:43 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14802-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Dec 2012 03:57:43 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14802: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0644332909978883995=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0644332909978883995==
Content-Type: text/plain

flight 14802 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14802/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14796
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14796

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  6f5c96855a9e
baseline version:
 xen                  090cc3e20d3e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=6f5c96855a9e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 6f5c96855a9e
+ branch=xen-unstable
+ revision=6f5c96855a9e
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 6f5c96855a9e ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
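The trace above shows cri-lock-repos's take-the-lock-then-re-exec idiom: a guard variable (OSSTEST_REPOS_LOCK_LOCKED) records which lock the current process already holds; if it does not match, the script re-executes itself under with-lock-ex, and on the second pass the guard matches and execution continues. A minimal sketch of that guard logic (the echo stands in for the real `exec with-lock-ex -w ...` call, so this sketch does not actually take a lock):

```shell
# Guard-variable pattern from the trace above; illustrative only.
REPOS_LOCK=/export/home/osstest/repos/lock

reexec_needed () {
    # True when no parent invocation has taken the lock named in $1 yet.
    [ "x$OSSTEST_REPOS_LOCK_LOCKED" != "x$1" ]
}

if reexec_needed "$REPOS_LOCK"; then
    # The real script does: exec with-lock-ex -w "$REPOS_LOCK" "$0" "$@"
    echo "would re-exec under lock $REPOS_LOCK"
else
    echo "lock already held; proceeding"
fi
```

The `x` prefixes guard against empty or dash-leading operands, which is why the trace prints `'[' x '!=' x/export/home/osstest/repos/lock ']'`.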


--===============0644332909978883995==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0644332909978883995==--

From xen-devel-bounces@lists.xen.org Fri Dec 21 04:00:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 04:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tltly-0006it-2V; Fri, 21 Dec 2012 03:59:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tltlw-0006ig-Ez
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 03:59:44 +0000
Received: from [85.158.143.35:9112] by server-3.bemta-4.messagelabs.com id
	E6/BD-18211-FAED3D05; Fri, 21 Dec 2012 03:59:43 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1356062381!4900579!1
X-Originating-IP: [209.85.210.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8341 invoked from network); 21 Dec 2012 03:59:43 -0000
Received: from mail-ia0-f174.google.com (HELO mail-ia0-f174.google.com)
	(209.85.210.174)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 03:59:43 -0000
Received: by mail-ia0-f174.google.com with SMTP id y25so3510620iay.19
	for <xen-devel@lists.xen.org>; Thu, 20 Dec 2012 19:59:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=6kk0e1z19F5+1BIFglhSK6L6ESh+QHOdK68ksQ3LWCU=;
	b=yjAr1sbcDj3Sh15Onn5povWnFvqoGtt8H1+7jZ9dYeS7/Qee0OLFTdxPIYh42WqfBZ
	VEbxUXh5gfaA8AO8RWqPJHxZPIiLpUSKIoW9G9bI/NafiRjyvhD0AhoHvdW5/3YRzp3Z
	n1NEFtlUvDnNX2HqE2zyXNsjCVU5gajMM3aiXTyw3PoxDDiRotVW5mOFmD0pgQkR902t
	6p4wCSBdzX4urTxILBl9v1RsMl5f9UqJZSz6Flo2SeF9IstA3xC6MNYdjwVO0yN2D2Bx
	V3volfeU+0ZFERbANDLp9U1OFHFhsrv4C1GvVrbmx5U6rhUKaLBCcMODeH8pAzQgLLf9
	bEwA==
MIME-Version: 1.0
Received: by 10.43.92.72 with SMTP id bp8mr10764145icc.49.1356062381483; Thu,
	20 Dec 2012 19:59:41 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Thu, 20 Dec 2012 19:59:41 -0800 (PST)
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
Date: Fri, 21 Dec 2012 11:59:41 +0800
X-Google-Sender-Auth: FBOGcJv1p0XbDeJ2ywHZvMYyiEM
Message-ID: <CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Ross Philipson <Ross.Philipson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 21, 2012 at 2:27 AM, Ross Philipson
<Ross.Philipson@citrix.com> wrote:


>> > Possibly with suitable macros used instead of magic numbers (e.g.,
>> > XC_PAGE_* and a macro for the opregion size).
>>
>> I guess there is no predefined macro for OpRegion size. And I guess I
>> need to define it twice, once in each codebase?
>
> In addition we should think about defining the IGD OpRegion in ACPI per the spec (cited earlier). Guest drivers seem to find the region just by reading the ASLS register in the gfx device's config space but it would be more correct to define it in ACPI too. Just a thought.

Is this a requirement for the patch to be accepted? Or are you saying
that this should not be IGD-passthrough specific?
I'm not sure what you are referring to by 'ACPI' here. Is it the spec itself
or a header in your source code?
I'm sorry to ask, but I'm just an unlucky user trying to fix my box.

The ASLS register is just the documented way to communicate the
OpRegion location; you can find it in the spec.
There are a lot of details in the spec, but as long as we are not
going to emulate it, only the size is relevant here, I believe.
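Since no predefined macro exists, one option suggested earlier in the thread is to define the size constant once and derive the page count from it. A sketch in shell arithmetic with illustrative names (IGD_OPREGION_SIZE and XC_PAGE_SIZE here are stand-ins, not from the actual patch; the Intel OpRegion spec the thread cites describes an 8 KiB region, and Xen pages are 4 KiB):

```shell
# Illustrative constants, not taken from the actual patch:
XC_PAGE_SIZE=4096        # assumption: 4 KiB pages
IGD_OPREGION_SIZE=8192   # assumption: 8 KiB OpRegion, per the spec cited
# Round up to whole pages, as any mapping call would need:
IGD_OPREGION_PAGES=$(( (IGD_OPREGION_SIZE + XC_PAGE_SIZE - 1) / XC_PAGE_SIZE ))
echo "OpRegion spans $IGD_OPREGION_PAGES page(s)"
```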

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 04:52:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 04:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tluaw-0007Hk-C6; Fri, 21 Dec 2012 04:52:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1Tluav-0007Hf-31
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 04:52:25 +0000
Received: from [85.158.139.211:58889] by server-3.bemta-5.messagelabs.com id
	FB/3C-25441-80BE3D05; Fri, 21 Dec 2012 04:52:24 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1356065542!20922608!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDI1MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11452 invoked from network); 21 Dec 2012 04:52:23 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 04:52:23 -0000
Received: from compute6.internal (compute6.nyi.mail.srv.osa [10.202.2.46])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 8BB83209DD;
	Thu, 20 Dec 2012 23:52:22 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute6.internal (MEProxy); Thu, 20 Dec 2012 23:52:22 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=G7
	+bCkWbqCAj0zeXlN59DM3OMiM=; b=dlKLJaZjlK9Y/bf4KHZwpNVR3JkSLtVy7h
	J4CKRI6YYgW89PxIM7xMyhbtTOJZiiu9LsbKixbwJC4tD185xByRVq5Kiou7Xz4q
	PrsqtwKPNMpZUVK5ZUKlqKqGHHqdBNpyF0cWWDfcLYh23eCgcU7k8w5A8V7pV2mi
	esZYSgYYs=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=G7+b
	CkWbqCAj0zeXlN59DM3OMiM=; b=u2PxtVKqdMr9T/SfSe6Fio/NGrxaEPKk6w2b
	bvtTlOgmwyzShocr0NH7RCMp9Bpc5NtB2dUsXi1LGKGyKgBjJ8iQ/Bw963b/MxnJ
	FVdVOTl/c/kW4YLrMnScDtFZZ+Y88DKughMOpOYBYUCvN5SEtwcZI9FglIdEQw7D
	saS9jw4=
X-Sasl-enc: Y2z0JRqxIpR3/qwU6tnNiCyGA7LxqK52X6jh6JKX3cUe 1356065542
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id CCAC78E06ED;
	Thu, 20 Dec 2012 23:52:21 -0500 (EST)
Message-ID: <50D3EB03.4000109@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 05:52:19 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
In-Reply-To: <50D39C73.906@invisiblethingslab.com>
X-Enigmail-Version: 1.4.5
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0708413466450974634=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============0708413466450974634==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enig22D19562742C823DDA95F67B"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig22D19562742C823DDA95F67B
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 21.12.2012 00:17, Marek Marczykowski wrote:
> On 20.12.2012 20:41, Ben Guthro wrote:
>> See if the attached patch helps.
>> This was a patch I forgot was still in my queue, that Jan had some
>> objections about (though I don't recall specifically what they were)
>> I had found that the scheduler was removing all cpus during suspend, and
>> this seemed to help with that, IIRC.
>>
>> That said - I'm starting to be forced to look back into S3 issues, as while
>> I thought they were all resolved, they are not.
>> I have an Ivybridge laptop that I didn't have back in May that exhibits
>> some issues very similar to those that I was trying to solve with Jan in
>> the thread I pasted above.
>>
>> This particular machine goes to sleep, and when it resumes, the disk light
>> flashes briefly, and then the sleep led goes back to pulsing.
>
> With this patch applied (on 4.1.4 and 4.3-unstable), I've got the original symptom
> - only CPU0 active after ACPI S3. Didn't get a reboot after a few tries.
> "xl debug-key r" output attached.
> Anyway it looks like the bug was introduced somewhere between 4.1.2 and 4.1.3 - on
> 4.1.2 none of the above problems happened.

I've done a bisect on xen 4.1 and found the problematic commit:
http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427

Unfortunately the parent of the corresponding commit in xen-unstable still has the
same problem, so there must be something more wrong.
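The bisect described here is a binary search over ordered revisions, with a build-and-test at each midpoint; `hg bisect` automates the bookkeeping. A toy sketch of the search itself (the revision list and the is_bad predicate are made up for illustration, not Xen history):

```shell
# Toy bisect: find the first revision for which is_bad succeeds.
revs="r1 r2 r3 r4 r5 r6 r7 r8"
is_bad () {
    # Made-up predicate: pretend r6 introduced the bug.
    case $1 in r6|r7|r8) return 0 ;; *) return 1 ;; esac
}
set -- $revs
lo=1 hi=$#
while [ "$lo" -lt "$hi" ]; do
    mid=$(( (lo + hi) / 2 ))
    eval "rev=\${$mid}"
    if is_bad "$rev"; then hi=$mid; else lo=$(( mid + 1 )); fi
done
eval "culprit=\${$lo}"
echo "first bad revision: $culprit"
```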

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enig22D19562742C823DDA95F67B
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ0+sDAAoJENuP0xzK19csrvgIAJaiRVAXNTRllFZAyBUnrIzb
qLGr+jbobLhNoL4PqFnRNKBsCCPEo9KBPXo52kdOk3ObQ3yXMqzaIaUgpr2ry4DI
x8Yg8Zk/kjg273Qa3xXkz8I+d75uC704LVaJj3GhM9h0B/ITL7K5UHKv/UzGGLnY
Ed7cyrJ4KclJGdbDmjN411fp2xT0EZB7hmtHw66TngDIRqA9FnF+Yp8pkc9UiORH
eg/pAcDbaFyVs/ep0v+XTHAlwFDoBAamblVWPrU1g71+owq26I54Yhlqjui9Gxiv
0NnCUkm5nSLo9iHmti9hWh894/QqvQ6ZUxh/XEl8pCAnnLQslO2tZcGB1FUh9MM=
=Ua7I
-----END PGP SIGNATURE-----

--------------enig22D19562742C823DDA95F67B--


--===============0708413466450974634==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0708413466450974634==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 08:30:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 08:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlxzE-0000q1-AY; Fri, 21 Dec 2012 08:29:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TlxzC-0000pu-QY
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 08:29:43 +0000
Received: from [85.158.143.99:20464] by server-1.bemta-4.messagelabs.com id
	19/4C-28401-6FD14D05; Fri, 21 Dec 2012 08:29:42 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356078581!25141742!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMDMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7830 invoked from network); 21 Dec 2012 08:29:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 08:29:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="295392"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 08:29:40 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 08:29:40 +0000
Message-ID: <50D41DF3.306@citrix.com>
Date: Fri, 21 Dec 2012 09:29:39 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Create a iSCSI DomU with disks in another DomU running
 on the same Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I'm trying to use an unusual setup that consists in having a DomU
serve iSCSI targets to the Dom0, which then uses these targets as disks
for other DomUs. I've tried to set up this iSCSI target DomU using both
Debian Squeeze/Wheezy (with kernels 2.6.32 and 3.2) and iSCSI
Enterprise Target (IET), and when launching the DomU I get these
messages from Xen:

(XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
(XEN) Xen WARN at mm.c:1926
(XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
(XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
(XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
(XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
(XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802bfba8:
(XEN)    ffff830141405000 8000000000000003 7400000000000001 0000000000145028
(XEN)    ffff82f6028a0510 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
(XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
(XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc3f0
(XEN)    0000000000000001 ffffffffffff8000 0000000000000002 ffff83011d555000
(XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
(XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
(XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
(XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
(XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000005802bfd38 ffff82c4802b8000
(XEN)    ffff82c400000000 0000000000000001 ffffc90000028b10 ffffc90000028b10
(XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 0000000000145028
(XEN)    000000000011cf7c 0000000000001000 0000000000157e68 0000000000007ff0
(XEN)    000000000000027e 000000000042000d 0000000000020b50 ffff8300dfdf0000
(XEN)    ffff82c4802bfd78 ffffc90000028ac0 ffffc90000028ac0 ffff880185f6fd58
(XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
(XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
(XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 ffff8300dfb03000
(XEN)    ffff8300dfdf0000 0000150e11a417f8 0000000000000002 ffff82c480300948
(XEN) Xen call trace:
(XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
(XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
(XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
(XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
(XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
(XEN)    
(XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
(XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
(XEN) Xen WARN at mm.c:1926
(XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
(XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
(XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
(XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
(XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802bfba8:
(XEN)    ffff830141405000 8000000000000003 7400000000000001 000000000014581d
(XEN)    ffff82f6028b03b0 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
(XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
(XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc308
(XEN)    0000000000000000 ffffffffffff8000 0000000000000001 ffff83011d555000
(XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
(XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
(XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
(XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
(XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000002802bfd38 ffff82c4802b8000
(XEN)    ffffffff00000000 0000000000000001 ffffc90000028b60 ffffc90000028b60
(XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 000000000014581d
(XEN)    00000000000deb3e 0000000000001000 0000000000157e68 000000000b507ff0
(XEN)    0000000000000261 000000000042000d 00000000000204b0 ffffc90000028b38
(XEN)    0000000000000002 ffffc90000028b38 ffffc90000028b38 ffff880185f6fd58
(XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
(XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
(XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 0000000000000086
(XEN)    ffff82c4802bfe28 ffff82c480125eae ffff83019e60c000 0000000000000286
(XEN) Xen call trace:
(XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
(XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
(XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
(XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
(XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
(XEN)    
(XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.

(Note that I've added a WARN() to mm.c:1925 to see where the
get_page call was coming from).

Connecting the iSCSI disks to another Dom0 works fine, so this
problem only happens when trying to connect the disks to the
Dom0 where the DomU is running.
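(For context, the final attachment step here is just an ordinary xl disk line
in the consumer DomU's configuration, pointing at the block device that the
iSCSI initiator created in Dom0; the device path below is illustrative, not
taken from the report:)

```
disk = [ 'phy:/dev/sdb,xvda,w' ]
```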

I've replaced the Linux DomU serving iSCSI targets with a
NetBSD DomU, and the problem disappears: I'm able to
attach the targets shared by the DomU to the Dom0 without
issues.

The problem seems to come from netfront/netback. Does anyone
have a clue about what might cause this bad interaction
between IET and netfront/netback?

Thanks, Roger.


From xen-devel-bounces@lists.xen.org Fri Dec 21 08:45:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 08:45:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlyDx-00012q-TX; Fri, 21 Dec 2012 08:44:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlyDv-00012l-KX
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 08:44:55 +0000
Received: from [85.158.143.35:57104] by server-2.bemta-4.messagelabs.com id
	D0/0D-30861-68124D05; Fri, 21 Dec 2012 08:44:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1356079493!12123631!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5213 invoked from network); 21 Dec 2012 08:44:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 08:44:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 08:44:53 +0000
Message-Id: <50D42F9302000078000B1F75@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 08:44:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "dom" <dominic.curran@citrix.com>
References: <1351187729-4681-1-git-send-email-jean.guyader@citrix.com>
	<508E5C6B02000078000A5011@nat28.tlf.novell.com>
	<50D3558A.1090105@citrix.com>
In-Reply-To: <50D3558A.1090105@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 0/2] Add V4V to Xen (v8)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.12.12 at 19:14, dom <dominic.curran@citrix.com> wrote:
> On 10/29/2012 02:37 AM, Jan Beulich wrote:
>>>>> On 25.10.12 at 19:55, Jean Guyader <jean.guyader@citrix.com> wrote:
>> Also, to validate the structures are really compatible between
>> native and compat mode guests, I'd strongly recommend adding
>> the leaf ones to xen/include/xlat.lst.
> 
> I'm sorry I don't understand.  What is xlat.lst ? And how does it help ?

This file lists types that need either translation, or verification
that no translation is needed (i.e. that structure layouts and type
sizes are identical), for compatibility mode guest support.
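(By way of illustration, xlat.lst entries are one per type: a leading "?"
requests only a layout check, while "!" requests generated translation code.
The type and header names below are made-up examples, not a suggestion for
the v4v structures:)

```
?	example_struct_a		example.h
!	example_struct_b		example.h
```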

>> Further I don't think you sync-ed up your patches with the
>> XEN_GUEST_HANDLE_PARAM() changes done for ARM, yet you
>> also didn't mention that the patch set is against other than the
>> tip of unstable.
> 
> Sorry, again I'm confused.  I thought all the Xen ARM changes went into
> xen-unstable.
> What other branches/trees do you think I need to post the v4v patch set 
> against ?

That's my point: The ARM changes went in, yet this patch didn't
get sync-ed up accordingly (it ought to use the _PARAM() variant
in all relevant places).

Jan



From xen-devel-bounces@lists.xen.org Fri Dec 21 08:57:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 08:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlyPS-0001Ei-5b; Fri, 21 Dec 2012 08:56:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1TlyPQ-0001Ed-Nj
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 08:56:48 +0000
Received: from [193.109.254.147:44204] by server-10.bemta-14.messagelabs.com
	id EA/46-13263-F4424D05; Fri, 21 Dec 2012 08:56:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1356080147!2126163!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29636 invoked from network); 21 Dec 2012 08:55:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 08:55:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 08:55:46 +0000
Message-Id: <50D4322102000078000B1F80@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 08:55:45 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>,
	"Marek Marczykowski" <marmarek@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
In-Reply-To: <50D3EB03.4000109@invisiblethingslab.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
> On 21.12.2012 00:17, Marek Marczykowski wrote:
>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got the original
>> symptom - only CPU0 active after ACPI S3. Didn't get a reboot after a few tries.
>> "xl debug-key r" output attached.
>> Anyway it looks like the bug was introduced somehow between 4.1.2 and 4.1.3 -
>> on 4.1.2 none of the above problems happened.
> 
> I've done a bisect on xen 4.1 and found the problematic commit:
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427 

In that case, using "sched_ratelimit_us=0" should make the
problem go away again - did you try that?
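(For reference, that is a Xen hypervisor command line option, so it goes on
Xen's entry in the bootloader configuration; on a GRUB 2 distro the usual spot
would be something like the following - the variable name and the regeneration
command vary per distro:)

```
# /etc/default/grub
GRUB_CMDLINE_XEN_DEFAULT="sched_ratelimit_us=0"
# then regenerate, e.g. update-grub or grub2-mkconfig -o /boot/grub2/grub.cfg,
# and reboot
```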

> Unfortunately, in xen-unstable the parent of the corresponding commit still has
> the same problem, so there must be something more wrong.

Which is why we'd need to be really certain that the c/s you
spotted really is relevant here (to me it doesn't look like it has
any effect on resume at all).

Jan



Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
> On 21.12.2012 00:17, Marek Marczykowski wrote:
>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got the original
>> symptom - only CPU0 active after ACPI S3. Didn't get a reboot after a few tries.
>> "xl debug-key r" output attached.
>> Anyway it looks like the bug was introduced somewhere between 4.1.2 and 4.1.3 - on
>> 4.1.2 none of the above problems happened.
> 
> I've done a bisect on xen 4.1 and found the problematic commit:
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427

In that case, using "sched_ratelimit_us=0" should make the
problem go away again - did you try that?
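
For reference, sched_ratelimit_us is a Xen hypervisor command-line option,
so it goes on the xen.gz line of the boot loader entry. A sketch of a GRUB2
menu entry (the kernel and initrd paths here are illustrative, not taken
from this thread):

```
multiboot /boot/xen.gz sched_ratelimit_us=0
module /boot/vmlinuz root=/dev/sda1 ro
module /boot/initrd.img
```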

> Unfortunately in xen-unstable the parent of the corresponding commit still has the
> same problem, so there must be something more wrong.

Which is why we'd need to be really certain that the c/s you
spotted really is relevant here (to me it doesn't look like it
has any effect on resume at all).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 10:00:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 10:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TlzOv-00025F-86; Fri, 21 Dec 2012 10:00:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TlzOt-00025A-L8
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 10:00:20 +0000
Received: from [85.158.139.211:40494] by server-12.bemta-5.messagelabs.com id
	9E/39-02275-23334D05; Fri, 21 Dec 2012 10:00:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1356084018!19977404!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4827 invoked from network); 21 Dec 2012 10:00:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 10:00:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="297546"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 10:00:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 10:00:17 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TlzOr-0006XS-Sk;
	Fri, 21 Dec 2012 10:00:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TlzOr-0004BI-Ic;
	Fri, 21 Dec 2012 10:00:17 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14805-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Dec 2012 10:00:17 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14805: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14805 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14805/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14802
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14802

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6f5c96855a9e
baseline version:
 xen                  6f5c96855a9e

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 11:21:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 11:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm0fC-0002bS-Jm; Fri, 21 Dec 2012 11:21:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Tm0fA-0002bN-Ri
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 11:21:13 +0000
Received: from [85.158.137.99:52147] by server-9.bemta-3.messagelabs.com id
	1C/DF-11948-32644D05; Fri, 21 Dec 2012 11:21:07 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-217.messagelabs.com!1356088866!18687535!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21560 invoked from network); 21 Dec 2012 11:21:06 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-14.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	21 Dec 2012 11:21:06 -0000
Received: from 252-69-ftth.on.nl ([88.159.69.252]:61123 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Tm0jO-00029k-RQ; Fri, 21 Dec 2012 12:25:34 +0100
Date: Fri, 21 Dec 2012 12:21:00 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1609010645.20121221122100@eikelenboom.it>
To: Eric Dumazet <erdnetdev@gmail.com>
In-Reply-To: <1356017968.21834.2859.camel@edumazet-glaptop>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
	<1797374383.20121220135139@eikelenboom.it>
	<1356017968.21834.2859.camel@edumazet-glaptop>
MIME-Version: 1.0
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, December 20, 2012, 4:39:28 PM, you wrote:

> On Thu, 2012-12-20 at 13:51 +0100, Sander Eikelenboom wrote:

>> Eric:
>>      From the warn_on_once, delta should be smaller than len, but probably they should be as close together as possible.
>>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN ?

> I would use the most exact value, which is:

>    skb->truesize += nr_frags * PAGE_SIZE;

> Then, if we later spot a regression in some stacks, adapt the
> limiting parameters. I did a lot of work in the GRO and TCP stacks to
> reduce memory usage, and further changes are possible.

> We really want to account memory, because we want to control how memory
> is used on our machines and not let some users use more than the
> amount that was allowed to them.

Hi Eric and Ian,

I have run some netperf tests (although I'm not an expert, so I'm not sure I have done the right tests).
If you have better tests, please do say so.


"current" is with netfront as is        (skb->truesize += skb->data_len - RX_COPY_THRESHOLD;)
"patched" is with IanC's latest patch   (skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;)

Tested between domU and dom0 (bridged) on a system with only one guest. The results don't seem to differ very much.

+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  

current  87380  16384  16384    60.00    954438.38   
patched  87380  16384  16384    60.00    975236.19  


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380   2048   2048    60.00    17614.79   
patched  87380   2048   2048    60.00    17207.46 


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50 -M 1432 -m 1432
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380   2048   1432    60.00      35.28   
patched  87380   2048   1432    60.00      35.28 


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380  18000  18000    60.00    157762.45   
patched  87380  18000  18000    60.00    158606.02


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000 -M 1432 -m 1432
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380  18000   1432    60.00    78567.39   
patched  87380  18000   1432    60.00    78329.98


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current 212992   65507   60.00      248771      0    265238.24
current 212992           60.00      214267           228450.01
patched 212992   65507   60.00      251188      0    267814.90
patched 212992           60.00      235101           250662.67


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current   2048    2048   60.00     1329653      0    44321.73
current 212992           60.00     1329650           44321.62
patched   2048    2048   60.00     1363257      0    45441.68
patched 212992           60.00     1363253           45441.57


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50 -M 1432 -m 1432
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current   2048    1432   60.00     1516249      0    35339.61
current 212992           60.00     1516247           35339.56
patched   2048    1432   60.00     1483705      0    34581.11
patched 212992           60.00     1483701           34581.01


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current  18000   18000   60.00      540410     26    158322.98
current 212992           60.00      540349           158305.24
patched  18000   18000   60.00      555449     32    162728.98
patched 212992           60.00      555392           162712.28


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000 -M 1432 -m 1432
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current  18000    1432   60.00     5144189      0    119896.95
current 212992           60.00     5138354           119760.96
patched  18000    1432   60.00     5104540      0    118972.85
patched 212992           60.00     5099802           118862.44

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, December 20, 2012, 4:39:28 PM, you wrote:

> On Thu, 2012-12-20 at 13:51 +0100, Sander Eikelenboom wrote:

>> Eric:
>>      From the warn_on_once, delta should be smaller than len, but probably they should be as close together as possible.
>>      When you say "accurate estimation", what would be an acceptable difference between DELTA and LEN ?

> I would use the most exact value, which is :

>    skb->truesize += nr_frags * PAGE_SIZE;

> Then, if we can spot later a regression in some stacks, adapt the
> limiting parameters. I did a lot of work in GRO and TCP stack to reduce
> the memory, and further changes are possible.

> We really want to account memory, because we want to control how memory
> is used on our machines and don't let some users use more than the
> amount that was allowed to them.

Hi Eric and Ian,

I have run some netperf tests (although I'm not an expert, so I'm not sure I have done the right tests).
If you have better tests, please do say so.


"current" is with netfront as is        (skb->truesize += skb->data_len - RX_COPY_THRESHOLD;)
"patched" is with IanC's latest patch   (skb->truesize += PAGE_SIZE * skb_shinfo(skb)->nr_frags;)

Tested between domU and dom0 (bridged) on a system with only one guest. The results don't seem to differ very much.

+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  

current  87380  16384  16384    60.00    954438.38   
patched  87380  16384  16384    60.00    975236.19  


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380   2048   2048    60.00    17614.79   
patched  87380   2048   2048    60.00    17207.46 


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50 -M 1432 -m 1432
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380   2048   1432    60.00      35.28   
patched  87380   2048   1432    60.00      35.28 


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380  18000  18000    60.00    157762.45   
patched  87380  18000  18000    60.00    158606.02


+ netperf -H 192.168.1.1 -t TCP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000 -M 1432 -m 1432
        Recv   Send    Send                          
        Socket Socket  Message  Elapsed              
        Size   Size    Size     Time     Throughput  
        bytes  bytes   bytes    secs.    KBytes/sec  
        
current  87380  18000   1432    60.00    78567.39   
patched  87380  18000   1432    60.00    78329.98


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current 212992   65507   60.00      248771      0    265238.24
current 212992           60.00      214267           228450.01
patched 212992   65507   60.00      251188      0    267814.90
patched 212992           60.00      235101           250662.67


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current   2048    2048   60.00     1329653      0    44321.73
current 212992           60.00     1329650           44321.62
patched   2048    2048   60.00     1363257      0    45441.68
patched 212992           60.00     1363253           45441.57


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 50 -M 1432 -m 1432
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current   2048    1432   60.00     1516249      0    35339.61
current 212992           60.00     1516247           35339.56
patched   2048    1432   60.00     1483705      0    34581.11
patched 212992           60.00     1483701           34581.01


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current  18000   18000   60.00      540410     26    158322.98
current 212992           60.00      540349           158305.24
patched  18000   18000   60.00      555449     32    162728.98
patched 212992           60.00      555392           162712.28


+ netperf -H 192.168.1.1 -t UDP_STREAM -fK -i10,5 -l 60 -I95,5 -P1 -v2 -- -s 9000 -M 1432 -m 1432
        Socket  Message  Elapsed      Messages                
        Size    Size     Time         Okay Errors   Throughput
        bytes   bytes    secs            #      #   KBytes/sec
        
current  18000    1432   60.00     5144189      0    119896.95
current 212992           60.00     5138354           119760.96
patched  18000    1432   60.00     5104540      0    118972.85
patched 212992           60.00     5099802           118862.44
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 12:30:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 12:30:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm1jd-0003Dp-9I; Fri, 21 Dec 2012 12:29:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tm1jb-0003Dk-BC
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 12:29:51 +0000
Received: from [85.158.139.211:54338] by server-15.bemta-5.messagelabs.com id
	74/71-20523-E3654D05; Fri, 21 Dec 2012 12:29:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1356092990!17237895!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12744 invoked from network); 21 Dec 2012 12:29:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 12:29:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="302319"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 12:29:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 12:29:49 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tm1jZ-0007iX-Ri; Fri, 21 Dec 2012 12:29:49 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tm1jZ-0002gB-O3;
	Fri, 21 Dec 2012 12:29:49 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20692.22077.630173.432927@mariner.uk.xensource.com>
Date: Fri, 21 Dec 2012 12:29:49 +0000
To: Joseph Glanville <joseph.glanville@orionvm.com.au>
In-Reply-To: <CAOzFzEj63zoMoC_gh2C8YWVCuyqG2WT=QHi60i97vpzcd6xUyA@mail.gmail.com>
References: <CA+T2pCGVssZKH_hbFEz_WTGAzpb=eYrF7Xyr8N1RbztRz+-1YQ@mail.gmail.com>
	<20687.24486.142168.302026@mariner.uk.xensource.com>
	<CA+T2pCGGbJXppP6ZEgvfxOU+u0TAhzEnpphbVgC4m0AiN2x1fg@mail.gmail.com>
	<1355844650.14620.257.camel@zakaz.uk.xensource.com>
	<CAOzFzEj63zoMoC_gh2C8YWVCuyqG2WT=QHi60i97vpzcd6xUyA@mail.gmail.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: William Pitcock <nenolod@dereferenced.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] introducing python-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Joseph Glanville writes ("Re: [Xen-devel] introducing python-xen"):
> I would be interested in assisting with the Python libxl bindings.
> I made some preliminary work at interfacing with libxl via Cython with
> moderate success prior to the overhaul WRT async ops.
> Now that the API has been stabilized and async APIs are available I
> would like to take another go at it and integrate it with
> gevent/libev.
> If any others are interested in collaborating on this that would be awesome.

Providing the glue between the libxl event system and gevent and/or
libev would be useful in itself.

But do people using Python always use one of those ?  Do we need to
allow the Python programmer to choose their event loop ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 12:33:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 12:33:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm1mp-0003Ks-T6; Fri, 21 Dec 2012 12:33:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tm1mo-0003Kk-RI
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 12:33:11 +0000
Received: from [85.158.139.211:42025] by server-14.bemta-5.messagelabs.com id
	84/E2-09538-60754D05; Fri, 21 Dec 2012 12:33:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1356093189!17238502!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28715 invoked from network); 21 Dec 2012 12:33:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 12:33:09 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="302407"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 12:33:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 12:33:08 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tm1mm-0007jW-Ie; Fri, 21 Dec 2012 12:33:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tm1mm-0002gV-D2;
	Fri, 21 Dec 2012 12:33:08 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20692.22276.126075.949318@mariner.uk.xensource.com>
Date: Fri, 21 Dec 2012 12:33:08 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <03b4c57dd562e5477615.1355843929@cosworth.uk.xensource.com>
References: <patchbomb.1355843926@cosworth.uk.xensource.com>
	<03b4c57dd562e5477615.1355843929@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3 V2] xl: SWITCH_FOREACH_OPT handles
 special options directly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH 3 of 3 V2] xl: SWITCH_FOREACH_OPT handles special options directly"):
> xl: SWITCH_FOREACH_OPT handles special options directly.
> 
> This removes the need for the "case 0: case 2:" boilerplate in every
> main_foo() but at the expense of a return in a macro which I find
> (mildly) distasteful.

I agree that it's best avoided.  I think it would be better simply
to have the actual function call exit().

Having the option parser terminate the program when necessary is far
simpler and causes no problems.  (If we didn't have an atexit handler,
we could provide a wrapper for exit for xl's internal callers.)

The reason why generic option handling libraries don't typically call
exit is that they don't know what the application they're in might
want by way of exit statuses, stdout/err handling, pre-exit cleanup,
or whatever.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 12:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 12:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm1yy-0003aA-7F; Fri, 21 Dec 2012 12:45:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tm1yw-0003a5-PO
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 12:45:42 +0000
Received: from [85.158.143.35:26288] by server-1.bemta-4.messagelabs.com id
	A2/0A-28401-6F954D05; Fri, 21 Dec 2012 12:45:42 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356093844!16500080!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9538 invoked from network); 21 Dec 2012 12:44:04 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 12:44:04 -0000
Received: (qmail 12020 invoked from network); 21 Dec 2012 14:44:03 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	21 Dec 2012 14:44:03 +0200
Message-ID: <50D459D3.4040101@gmail.com>
Date: Fri, 21 Dec 2012 14:45:07 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44469
Subject: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I need access to some MSR values that are not currently being saved in 
struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE 
and MSR_IA32_ENERGY_PERF_BIAS.

The way I'm approaching this is that I'll patch xen/arch/x86/hvm/vmx/vmx.c 
and xen/arch/x86/hvm/svm/svm.c, and add this in vmx_save_cpu_state() and 
svm_save_cpu_state(), respectively:

hvm_msr_read_intercept(MSR_IA32_MC0_CTL, &data->msr_mc0_ctl);

and so on, for the other registers (after adding the msr_mc0_ctl member 
to struct hvm_hw_cpu, of course). I would also have to do the reverse 
operation (using hvm_msr_write_intercept()) in vmx_load_cpu_state().

My questions:

1. Does it seem architecturally sound to perform the described 
modifications? Can I use hvm_msr_xxx_intercept() for both the VMX and 
the SVM code?

2. It seems repetitive to have duplicated code in both 
vmx_save_cpu_state() and svm_save_cpu_state(); does it still make sense 
to structure it that way (in case, for example, the SVM way to 
retrieve that register changes in the future)?

3. Do I need to do additional things so that I won't break anything else?

4. Is there a better way to achieve what I'm after?

Thanks,
Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 12:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 12:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm1yy-0003aA-7F; Fri, 21 Dec 2012 12:45:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tm1yw-0003a5-PO
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 12:45:42 +0000
Received: from [85.158.143.35:26288] by server-1.bemta-4.messagelabs.com id
	A2/0A-28401-6F954D05; Fri, 21 Dec 2012 12:45:42 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356093844!16500080!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9538 invoked from network); 21 Dec 2012 12:44:04 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 12:44:04 -0000
Received: (qmail 12020 invoked from network); 21 Dec 2012 14:44:03 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	21 Dec 2012 14:44:03 +0200
Message-ID: <50D459D3.4040101@gmail.com>
Date: Fri, 21 Dec 2012 14:45:07 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44469
Subject: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I need access to some MSR values that are not currently being saved in 
struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE 
and MSR_IA32_ENERGY_PERF_BIAS.

The way I'm approaching this is that I'll patch xen/arch/x86/hvm/vmx/vmx.c 
and xen/arch/x86/hvm/svm/svm.c, and add this in vmx_save_cpu_state() and 
svm_save_cpu_state(), respectively:

hvm_msr_read_intercept(MSR_IA32_MC0_CTL, &data->msr_mc0_ctl);

and so on, for the other registers (after adding the msr_mc0_ctl member 
to struct hvm_hw_cpu, of course). I would also have to do the reverse 
operation (using hvm_msr_write_intercept()) in vmx_load_cpu_state().

My questions:

1. Does it seem architecturally sound to perform the described 
modifications? Can I use hvm_msr_xxx_intercept() for both the VMX and 
the SVM code?

2. It seems repetitive to have duplicated code in both 
vmx_save_cpu_state() and svm_save_cpu_state(); does it make sense 
to have it like that anyway (in case, for example, the SVM way to 
retrieve that register changes in the future)?

3. Do I need to do additional things so that I won't break anything else?

4. Is there a better way to achieve what I'm after?

Thanks,
Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 13:16:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm2SY-0003sn-Rr; Fri, 21 Dec 2012 13:16:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tm2SX-0003si-Gb
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:16:17 +0000
Received: from [193.109.254.147:56335] by server-2.bemta-14.messagelabs.com id
	19/40-30744-02164D05; Fri, 21 Dec 2012 13:16:16 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1356095774!10906017!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26392 invoked from network); 21 Dec 2012 13:16:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 13:16:16 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="1467281"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 13:16:14 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 08:16:13 -0500
Message-ID: <50D4611C.6030206@citrix.com>
Date: Fri, 21 Dec 2012 13:16:12 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50D459D3.4040101@gmail.com>
In-Reply-To: <50D459D3.4040101@gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/12/12 12:45, Razvan Cojocaru wrote:
> Hello,
>
> I need access to some MSR values that are not currently being saved in
> struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE
> and MSR_IA32_ENERGY_PERF_BIAS.
>
> The way I'm approaching this is that I'll patch xen/arch/x86/hvm/vmx/vmx.c
> and xen/arch/x86/hvm/svm/svm.c, and add this in vmx_save_cpu_state() and
> svm_save_cpu_state(), respectively:
>
> hvm_msr_read_intercept(MSR_IA32_MC0_CTL, &data->msr_mc0_ctl);
>
> and so on, for the other registers (after adding the msr_mc0_ctl member
> to struct hvm_hw_cpu, of course). I would also have to do the reverse
> operation (using hvm_msr_write_intercept()) in vmx_load_cpu_state().
>
> My questions:
>
> 1. Does it seem architecturally sound to perform the described
> modifications? Can I use hvm_msr_xxx_intercept() for both the VMX and
> the SVM code?
>
> 2. It seems repetitive to have duplicated code in both
> vmx_save_cpu_state() and svm_save_cpu_state(), does it make more sense
> to have it like that anyway (in case, for example, the SVM way to
> retrieve that register could change in the future)?
>
> 3. Do I need to do additional things so that I won't break anything else?
>
> 4. Is there a better way to achieve what I'm after?
>
> Thanks,
> Razvan Cojocaru
>
I'm not sure I understand what you are trying to achieve (nor am I 
convinced I know how to help you; if I don't understand the question 
sufficiently, I certainly can't advise you on what you can/should do or 
can't/shouldn't do), but which MSRs are we talking about - the guest 
MSRs or the host MSRs?

In general, Xen is responsible for "real" Machine Check interrupts 
(correctable ones), and will then forward this information to the guest 
if it has enabled MCE in its code (via the MCE_SOFTIRQ).

Normally, reading MSRs in user mode is not allowed on bare metal, so I'm 
not sure why you expect this to work in the guest (or Dom0) on top of 
Xen. But maybe you don't actually mean userspace as opposed to "kernel 
mode"?

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 13:25:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm2b8-00043y-SJ; Fri, 21 Dec 2012 13:25:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tm2b7-00043t-Rd
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:25:10 +0000
Received: from [85.158.139.83:17102] by server-11.bemta-5.messagelabs.com id
	F6/B1-31624-53364D05; Fri, 21 Dec 2012 13:25:09 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1356096307!30922141!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26002 invoked from network); 21 Dec 2012 13:25:08 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 13:25:08 -0000
Received: (qmail 1760 invoked from network); 21 Dec 2012 15:25:06 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	21 Dec 2012 15:25:06 +0200
Message-ID: <50D46373.10802@gmail.com>
Date: Fri, 21 Dec 2012 15:26:11 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <50D459D3.4040101@gmail.com> <50D4611C.6030206@citrix.com>
In-Reply-To: <50D4611C.6030206@citrix.com>
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44470
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, thanks for the reply!

> I'm not sure I understand what you are trying to achieve (nor am I
> convinced I know how to help you; if I don't understand the question
> sufficiently, I certainly can't advise you on what you can/should do or
> can't/shouldn't do), but which MSRs are we talking about - the guest
> MSRs or the host MSRs?

Sorry if I've not been clear. I want to access the MSRs of a Xen HVM 
guest, from a userspace application running in dom0, with the help of libxc.

Libxc already allows me to inspect the values of several registers, 
including a handful of MSRs, if I call:

xc_domain_hvm_getcontext_partial(xch, domain_id, HVM_SAVE_CODE(CPU),
                                  instance, &hw_ctxt, sizeof hw_ctxt);

and then examine, for example, hw_ctxt.msr_lstar.

What I'd like is to be able to check hw_ctxt.msr_mc0_ctl, for example, 
after the xc_domain_hvm_getcontext_partial().

> Normally, reading MSR's in usermode is not allowed on bare-metal, so not
> sure why you expect this to work in the guest (or Dom0) on top of Xen.
> But maybe you don't actually mean userspace as opposed to "kernel mode"?

I mean accessing a domU's MSRs from dom0 userspace.

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 13:34:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm2jV-0004FC-SG; Fri, 21 Dec 2012 13:33:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1Tm2jU-0004F7-Be
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:33:48 +0000
Received: from [85.158.139.211:35891] by server-11.bemta-5.messagelabs.com id
	2D/74-31624-B3564D05; Fri, 21 Dec 2012 13:33:47 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1356096825!21398634!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDMyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25557 invoked from network); 21 Dec 2012 13:33:46 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 13:33:46 -0000
Received: from compute5.internal (compute5.nyi.mail.srv.osa [10.202.2.45])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 55AC620B15;
	Fri, 21 Dec 2012 08:33:45 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute5.internal (MEProxy); Fri, 21 Dec 2012 08:33:45 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=Iv
	xF1qEoLg5fzZtgFjQuA40samM=; b=UTza448iOSrUEhlbgZE91WVk5w/4K3cCDr
	5mA4iW1s3TvLxh/PmZvuYGkQ3h72xZI4Q97Lr3RRx04kX9RS4k/A8fPuR0mwyGmB
	aHMHesXir9wfjwCP0SAwMBcK6JhzVYNMYQgTp5gakQE9xmoS0Bli2EootWDoVrP1
	rjDJ/jRcA=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=IvxF
	1qEoLg5fzZtgFjQuA40samM=; b=UBWrVAcUSyfiBbG/I8R7nM/BamUvOCCdEk9u
	dA41HtJtXwO0BQLhsfeMx01gfLfnFxEAi5boc8org6j/yqfoeuUbMJY5bWLhneoi
	F+J/TklTSCC9muiemIfZXEy7Rk2rf5DtUfI/fQcguzUyCboRMvkeSpuI9G9me4Kq
	zVjcygo=
X-Sasl-enc: fLeUm8yT6+Br9bvmN9ngXgHGCgCz9YB82cxBDMfzOPDh 1356096825
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id 9B1DD8E06EC;
	Fri, 21 Dec 2012 08:33:44 -0500 (EST)
Message-ID: <50D46534.2010304@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 14:33:40 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
In-Reply-To: <50D4322102000078000B1F80@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.5
Cc: Ben Guthro <ben@guthro.net>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6756500380364205732=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============6756500380364205732==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enig402B9D1679D83C05483AA8CA"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig402B9D1679D83C05483AA8CA
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 21.12.2012 09:55, Jan Beulich wrote:
>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got the
>>> original symptom - only CPU0 active after ACPI S3. Didn't get a reboot
>>> after a few tries. "xl debug-key r" output attached.
>>> Anyway, it looks like the bug was introduced somewhere between 4.1.2 and
>>> 4.1.3 - on 4.1.2 none of the above problems happened.
>>
>> I've done a bisect on xen 4.1 and found the problematic commit:
>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427
>
> In that case, using "sched_ratelimit_us=0" should make the
> problem go away again - did you try that?

This option does not help when trying 4.1.4, but it does help when trying
xen compiled from the above c/s.

>> Unfortunately, in xen-unstable the parent of the corresponding commit
>> still has the same problem, so there must be something more wrong.
>
> Which is why we'd need to be really certain that the c/s you
> spotted really is relevant here (to me it doesn't look like having
> an effect on resume at all).

So perhaps there is another problematic commit, which was backported to 4.1
later (than where it is in xen-unstable).

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab
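
For anyone who wants to try the "sched_ratelimit_us=0" workaround Jan
mentions above: it is a Xen hypervisor boot parameter, so it goes on Xen's
command line rather than the kernel's. A sketch for a GRUB2-based system
(the file location and variable name vary by distro, so treat this as
illustrative):

```
# /etc/default/grub -- append to the Xen command line:
GRUB_CMDLINE_XEN_DEFAULT="sched_ratelimit_us=0"

# then regenerate grub.cfg, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```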


--------------enig402B9D1679D83C05483AA8CA
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1GU1AAoJENuP0xzK19csBA8H/i1yKq5VPr0QFy4V4QkN2kEq
4ylSiLI+W5YWHsqOjc1JLgw00nTQQxNCUE/KTZWJ44Ho9gb8cymFXWL/bYgm/15d
qShC5tUssba5lE7ZLCaZPIravavbPkPIem6yjHDZhLndBAUsLOCV2kfEFAe/bvVv
CCBHlRA0NTJX4URoyk27iCi/xD/chJjdWvkAa0o2Am/0e5l6g5b9MxUOCbHMt0tl
lkMKezh6iNBggC1m0WgG6bqgw+SWnyhj//KFoSrV2YKRJ6RUH6OtBB4QtGd/u8c3
AF+6/EoQhEccqhdtu5V5R7H7PGtJgvCgtE/GhO8IDUj1/TO/eIiaCd7SFu2hxc0=
=bYdT
-----END PGP SIGNATURE-----

--------------enig402B9D1679D83C05483AA8CA--


--===============6756500380364205732==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6756500380364205732==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 13:34:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm2jV-0004FC-SG; Fri, 21 Dec 2012 13:33:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1Tm2jU-0004F7-Be
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:33:48 +0000
Received: from [85.158.139.211:35891] by server-11.bemta-5.messagelabs.com id
	2D/74-31624-B3564D05; Fri, 21 Dec 2012 13:33:47 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1356096825!21398634!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDMyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25557 invoked from network); 21 Dec 2012 13:33:46 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 13:33:46 -0000
Received: from compute5.internal (compute5.nyi.mail.srv.osa [10.202.2.45])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 55AC620B15;
	Fri, 21 Dec 2012 08:33:45 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute5.internal (MEProxy); Fri, 21 Dec 2012 08:33:45 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=Iv
	xF1qEoLg5fzZtgFjQuA40samM=; b=UTza448iOSrUEhlbgZE91WVk5w/4K3cCDr
	5mA4iW1s3TvLxh/PmZvuYGkQ3h72xZI4Q97Lr3RRx04kX9RS4k/A8fPuR0mwyGmB
	aHMHesXir9wfjwCP0SAwMBcK6JhzVYNMYQgTp5gakQE9xmoS0Bli2EootWDoVrP1
	rjDJ/jRcA=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=IvxF
	1qEoLg5fzZtgFjQuA40samM=; b=UBWrVAcUSyfiBbG/I8R7nM/BamUvOCCdEk9u
	dA41HtJtXwO0BQLhsfeMx01gfLfnFxEAi5boc8org6j/yqfoeuUbMJY5bWLhneoi
	F+J/TklTSCC9muiemIfZXEy7Rk2rf5DtUfI/fQcguzUyCboRMvkeSpuI9G9me4Kq
	zVjcygo=
X-Sasl-enc: fLeUm8yT6+Br9bvmN9ngXgHGCgCz9YB82cxBDMfzOPDh 1356096825
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id 9B1DD8E06EC;
	Fri, 21 Dec 2012 08:33:44 -0500 (EST)
Message-ID: <50D46534.2010304@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 14:33:40 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
In-Reply-To: <50D4322102000078000B1F80@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.5
Cc: Ben Guthro <ben@guthro.net>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6756500380364205732=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============6756500380364205732==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enig402B9D1679D83C05483AA8CA"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig402B9D1679D83C05483AA8CA
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 21.12.2012 09:55, Jan Beulich wrote:
>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got origina=
l=20
>> symptom
>>> - only CPU0 active after ACPI S3. Didn't get reboot after few tries.
>>> "xl debug-key r" output attached.
>>> Anyway it looks like bug was introduced somehow between 4.1.2 and 4.1.3 - on
>>> 4.1.2 none of above problems happened.
>>
>> I've done bisect on xen 4.1 and found problematic commit:
>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427
>
> In that case, using "sched_ratelimit_us=0" should make the
> problem go away again - did you try that?

This option does not help when trying 4.1.4, but does when trying xen compiled
from above c/s.

>> Unfortunately in xen-unstable parent of corresponding commit still have the
>> same problem, so there must be something more wrong.
>
> Which is why we'd need to be really certain that the c/s you
> spotted really is relevant here (to me it doesn't look like having
> an effect on resume at all).

So perhaps there is another problematic commit, which was backported to 4.1
later (than it is in xen-unstable).

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enig402B9D1679D83C05483AA8CA
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1GU1AAoJENuP0xzK19csBA8H/i1yKq5VPr0QFy4V4QkN2kEq
4ylSiLI+W5YWHsqOjc1JLgw00nTQQxNCUE/KTZWJ44Ho9gb8cymFXWL/bYgm/15d
qShC5tUssba5lE7ZLCaZPIravavbPkPIem6yjHDZhLndBAUsLOCV2kfEFAe/bvVv
CCBHlRA0NTJX4URoyk27iCi/xD/chJjdWvkAa0o2Am/0e5l6g5b9MxUOCbHMt0tl
lkMKezh6iNBggC1m0WgG6bqgw+SWnyhj//KFoSrV2YKRJ6RUH6OtBB4QtGd/u8c3
AF+6/EoQhEccqhdtu5V5R7H7PGtJgvCgtE/GhO8IDUj1/TO/eIiaCd7SFu2hxc0=
=bYdT
-----END PGP SIGNATURE-----

--------------enig402B9D1679D83C05483AA8CA--


--===============6756500380364205732==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6756500380364205732==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 13:35:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm2kc-0004I6-Ax; Fri, 21 Dec 2012 13:34:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1Tm2kb-0004Hx-2t
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:34:57 +0000
Received: from [85.158.139.83:30135] by server-10.bemta-5.messagelabs.com id
	D9/9A-13383-08564D05; Fri, 21 Dec 2012 13:34:56 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1356096894!23631944!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDMyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8472 invoked from network); 21 Dec 2012 13:34:55 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 13:34:55 -0000
Received: from compute6.internal (compute6.nyi.mail.srv.osa [10.202.2.46])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 1B78320B9D;
	Fri, 21 Dec 2012 08:34:54 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute6.internal (MEProxy); Fri, 21 Dec 2012 08:34:54 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=8t
	50T+HahAmyMhahbTlGG3WpPzs=; b=CqYivFLdABi5khG/hSlCnOf4DCtPcoPq2T
	VKcQnMEJrcfKHIEUGk9VG6kp2k5Bbe+Zy8lW2/cZb7CYgrOlwTDEd+CWhk9vjaSa
	ZIiBkZZeyh1wKw4gGuV+jx4WHD6G5gzsgj5eT+v/g8HZE07jv+4h9C4BZv896yLF
	NpfX7Pmvg=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=8t50
	T+HahAmyMhahbTlGG3WpPzs=; b=oKOdR0RvG/oV3oF2QJi4z4jNpQngtYx57pH4
	uwb3+x9Whqih7+9BYwp3WP0ZLoxKQv0pbV5fvUWezNrJR6w4lS883MTpC8p0IlHh
	WedqMIrK0uX04zkmXhQa2Tdce00ECZ3s9UQlfGNSGH0JBTm4Gj/3Wtf8NOeGhEDz
	GFaQE1I=
X-Sasl-enc: mfwesTpzgE+Sbvmu4U3FfgODmq0SQUOQ8cAVHFdq/Ag/ 1356096893
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id 6FB508E06F4;
	Fri, 21 Dec 2012 08:34:53 -0500 (EST)
Message-ID: <50D46579.2020609@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 14:34:49 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
In-Reply-To: <50D46534.2010304@invisiblethingslab.com>
X-Enigmail-Version: 1.4.5
Cc: Ben Guthro <ben@guthro.net>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4161286562440673312=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============4161286562440673312==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigBA1FDAE48EBEA22E36DDE4B8"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigBA1FDAE48EBEA22E36DDE4B8
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 21.12.2012 14:33, Marek Marczykowski wrote:
> On 21.12.2012 09:55, Jan Beulich wrote:
>>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
>>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got original
>>> symptom
>>>> - only CPU0 active after ACPI S3. Didn't get reboot after few tries.
>>>> "xl debug-key r" output attached.
>>>> Anyway it looks like bug was introduced somehow between 4.1.2 and 4.1.3 - on
>>>> 4.1.2 none of above problems happened.
>>>
>>> I've done bisect on xen 4.1 and found problematic commit:
>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427
>>
>> In that case, using "sched_ratelimit_us=0" should make the
>> problem go away again - did you try that?
>
> This option does not help when trying 4.1.4, but does when trying xen compiled
> from above c/s.
>
>>> Unfortunately in xen-unstable parent of corresponding commit still have the
>>> same problem, so there must be something more wrong.
>>
>> Which is why we'd need to be really certain that the c/s you
>> spotted really is relevant here (to me it doesn't look like having
>> an effect on resume at all).
>
> So perhaps there is another problematic commit, which was backported to 4.1
> later (than it is in xen-unstable).

Or the above commit only triggers some other bug...

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enigBA1FDAE48EBEA22E36DDE4B8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1GV6AAoJENuP0xzK19csnVcH/jKuxI4q6jmErB3N1dB5XA0p
jtpe9cQU6RksdYEYa/ddcYbCo9Hlmy64eFYQIQ4KsdZwunDVAJPlAZM+X7hufGTp
uSHWui7n+eOKxZOn8S7/+e54cW5ofL3Lg1BrON2JzD04PhUPsaTPADF3L1eJgjZF
KCEb5Me3Ym2Z1bWHR9sZZL/aBXQRYPYyRgGGf2NZvxMxpKZB6Q1ns9od5ZPHThBS
QHsJRXQYkS6g8e+Tgutd5EHbuGjuFsuIbhl8wnrF2psCV7kCjowolf5oXMsUarJG
nbUGX+U8vtMGDWBlJkc/TUwrcmsu39qkrrv2ExL20RIO7cLSOZUXD3Rz4xmK/wI=
=bR7L
-----END PGP SIGNATURE-----

--------------enigBA1FDAE48EBEA22E36DDE4B8--


--===============4161286562440673312==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4161286562440673312==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 13:43:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm2sQ-0004ad-FP; Fri, 21 Dec 2012 13:43:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tm2sP-0004aY-KZ
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:43:01 +0000
Received: from [85.158.143.99:30492] by server-3.bemta-4.messagelabs.com id
	30/C0-18211-46764D05; Fri, 21 Dec 2012 13:43:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1356097379!23470413!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25709 invoked from network); 21 Dec 2012 13:43:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 13:43:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 13:42:59 +0000
Message-Id: <50D4757202000078000B2042@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 13:42:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Marek Marczykowski" <marmarek@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
In-Reply-To: <50D46534.2010304@invisiblethingslab.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Ben Guthro <ben@guthro.net>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 14:33, Marek Marczykowski <marmarek@invisiblethingslab.com>
wrote:
> On 21.12.2012 09:55, Jan Beulich wrote:
>>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> 
> wrote:
>>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got original 
>>> symptom
>>>> - only CPU0 active after ACPI S3. Didn't get reboot after few tries.
>>>> "xl debug-key r" output attached.
>>>> Anyway it looks like bug was introduced somehow between 4.1.2 and 4.1.3 - on
>>>> 4.1.2 none of above problems happened.
>>>
>>> I've done bisect on xen 4.1 and found problematic commit:
>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427 
>> 
>> In that case, using "sched_ratelimit_us=0" should make the
>> problem go away again - did you try that?
> 
> This option does not help when trying 4.1.4, but does when trying xen 
> compiled
> from above c/s.

Which basically calls for you to do another bisection round, this
time with that option in place.

But for the moment, as said earlier, it escapes me how that change
would have an impact on resume (especially since we have no logs).
Will need to look at this in closer detail after New Year (and George,
maybe you could take a look too).

You don't happen to use "dom0_vcpus_pin", do you?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 13:54:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 13:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm339-0004m8-L8; Fri, 21 Dec 2012 13:54:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tm337-0004m0-QO
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:54:06 +0000
Received: from [85.158.137.99:21819] by server-10.bemta-3.messagelabs.com id
	41/B6-07616-8F964D05; Fri, 21 Dec 2012 13:54:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1356098039!20139311!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11333 invoked from network); 21 Dec 2012 13:53:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 13:53:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 13:53:58 +0000
Message-Id: <50D4780602000078000B204E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 13:53:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Razvan Cojocaru" <rzvncj@gmail.com>
References: <50D459D3.4040101@gmail.com>
In-Reply-To: <50D459D3.4040101@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 13:45, Razvan Cojocaru <rzvncj@gmail.com> wrote:
> I need access to some MSR values that are not currently being saved in 
> struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE 
> and MSR_IA32_ENERGY_PERF_BIAS.

I can't see why, and this is quite likely related to the reason for
them not being accessible in the first place.

> The way I'm approaching this is that I'll patch xen/arch/x86/hvm/vmx/vmx.c 
> and xen/arch/x86/hvm/svm/svm.c, and add this in vmx_save_cpu_state() and 
> svm_save_cpu_state(), respectively:
> 
> hvm_msr_read_intercept(MSR_IA32_MC0_CTL, &data->msr_mc0_ctl);
> 
> and so on, for the other registers (after adding the msr_mc0_ctl member 
> to struct hvm_hw_cpu, of course). I would also have to do the reverse 
> operation (using hvm_msr_write_intercept()) in vmx_load_cpu_state().
> 
> My questions:
> 
> 1. Does it seem architecturally sound to perform the described 
> modifications?

Not to me, no. These records are used for save/restore/migrate,
and so far there hasn't been a need to include the MSRs you
mention here.

> Can I use hvm_msr_xxx_intercept() for both the VMX and 
> the SVM code?

For MSR_IA32_MC0_CTL, why not? These should be fine for
anything that is architectural to x86.

The other two MSRs are Intel specific iirc, and hence wouldn't
be validly dealt with in vendor independent code.

> 2. It seems repetitive to have duplicated code in both 
> vmx_save_cpu_state() and svm_save_cpu_state(), does it make more sense 
> to have it like that anyway (in case, for example, the SVM way to 
> retrieve that register could change in the future)?

If what you want were explained well enough to seem reasonable,
and if the amount of code duplication would become significant, then
consolidating what is common between both would certainly make
sense.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 14:00:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm38a-0004us-Er; Fri, 21 Dec 2012 13:59:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1Tm38Y-0004uj-Fd
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 13:59:42 +0000
Received: from [85.158.143.99:56834] by server-3.bemta-4.messagelabs.com id
	A2/E6-18211-D4B64D05; Fri, 21 Dec 2012 13:59:41 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1356098380!25223053!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDMyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32384 invoked from network); 21 Dec 2012 13:59:41 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-4.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 13:59:41 -0000
Received: from compute1.internal (compute1.nyi.mail.srv.osa [10.202.2.41])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id BE8CD20B2C;
	Fri, 21 Dec 2012 08:59:39 -0500 (EST)
Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161])
	by compute1.internal (MEProxy); Fri, 21 Dec 2012 08:59:39 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=4d
	KqX8ScwmlbTe5PkGjshy/nVKY=; b=nDgLWujmrokUElIpyhlBItnkZKpgG9Ud0G
	xkr/Bdtn+kWJSOf7Rj8tipS1eqz5d0/5E0aCqqqEb0ULMv/AX+iwcDdC1iI51DSL
	cxlTP3V4wRkR3Oe3RmNp6ZDmx8iq0xjfA5fnp/tVVTxPL0J6xmk9k8yXac4uM+B+
	klOMA1CyA=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=4dKq
	X8ScwmlbTe5PkGjshy/nVKY=; b=BRw415R63S1UMydedok4nvjAojAWwR78UPtO
	fjy6GTgQfqeqzqIQgzSZjZ/CkcOuenst6vfaQkCWGohO6BxClJ0wIwfx9Le9gYRq
	b7NSg4KZh64zr/1qscbvn+xWO2D2MAwFqbOBWtnhUcy+/M4Iee0bXEU/E6UbDpuy
	RxzQxHw=
X-Sasl-enc: 6wwN8BGQMZwECoE2E+rHyyBmxsCMl1MI91T/8Hp+ZYLk 1356098379
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id 885D94824FF;
	Fri, 21 Dec 2012 08:59:38 -0500 (EST)
Message-ID: <50D46B47.8000003@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 14:59:35 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
In-Reply-To: <50D4757202000078000B2042@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.5
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Ben Guthro <ben@guthro.net>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5385511490373877254=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============5385511490373877254==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enig03C55FF69E3365B249A81FEB"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enig03C55FF69E3365B249A81FEB
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21.12.2012 14:42, Jan Beulich wrote:
>>>> On 21.12.12 at 14:33, Marek Marczykowski <marmarek@invisiblethingslab.com>
> wrote:
>> On 21.12.2012 09:55, Jan Beulich wrote:
>>>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com> 
>> wrote:
>>>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got original 
>>>> symptom
>>>>> - only CPU0 active after ACPI S3. Didn't get reboot after few tries.
>>>>> "xl debug-key r" output attached.
>>>>> Anyway it looks like bug was introduced somehow between 4.1.2 and 4.1.3 - on
>>>>> 4.1.2 none of above problems happened.
>>>>
>>>> I've done bisect on xen 4.1 and found problematic commit:
>>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427 
>>>
>>> In that case, using "sched_ratelimit_us=0" should make the
>>> problem go away again - did you try that?
>>
>> This option does not help when trying 4.1.4, but does when trying xen 
>> compiled
>> from above c/s.
> 
> Which basically calls for you doing another bisection round, this
> time with that option in place.

Will try, but not sure if will have time for it before Christmas.

> 
> But for the moment, as said earlier, it escapes me how that change
> would have an impact on resume (the more that we have no logs).
> Will need to look at this in closer detail after New Year (and George,
> maybe you could take a look too).
> 
> You don't happen to use "dom0_vcpus_pin", do you?

I don't. Should I try?

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enig03C55FF69E3365B249A81FEB
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1GtHAAoJENuP0xzK19csCPcH/0NJXVn+wd/Vnic1Uin06em2
la139Y2BCz2nlqOaWE5gTBupNOjiDeyadZrtUQspOElWh/xwKKYi4aQxRQyDD+xC
uel3mnsY+yQb+itLi2IDgIVrwdlFufhR1Uo0JLYvbRyDMsq13tC+ZpPAhOU+DNz3
5A9jLGvpr+9kbL17lYC9z90hQglzFJaBr9mvvDlcgyLOYGRnsej/ib/+PJmRnWNi
na55GkTNqTp5cGYk+FdeZwIPCN/BGvWsOehHwkSJXlpAvDgvS4nVSvMyTJWIBuIG
QTHl1VFAiF5jDEIsSC11477t0ZbKmbbJutlty78bMsKV4sYQKBNeSsTQJoBUJx0=
=md5w
-----END PGP SIGNATURE-----

--------------enig03C55FF69E3365B249A81FEB--


--===============5385511490373877254==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5385511490373877254==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 14:03:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:03:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3C9-00058q-9r; Fri, 21 Dec 2012 14:03:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tm3C7-00058e-Ai
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:03:23 +0000
Received: from [85.158.139.211:26221] by server-9.bemta-5.messagelabs.com id
	3A/52-10690-A2C64D05; Fri, 21 Dec 2012 14:03:22 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356098600!21502301!1
X-Originating-IP: [209.85.220.172]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19476 invoked from network); 21 Dec 2012 14:03:21 -0000
Received: from mail-vc0-f172.google.com (HELO mail-vc0-f172.google.com)
	(209.85.220.172)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 14:03:21 -0000
Received: by mail-vc0-f172.google.com with SMTP id fw7so5124113vcb.17
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 06:03:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=8nlO6RKLIInwFd1oj09sQtkkdC5QBTGufCQPN3WDVnY=;
	b=rWKV8IL39rxjPKM5dwzlpbuQ1a6J45TNZnL2pWtelga8XNfBHZu5u34+3cSMppW92F
	7G5HBzU/hLxOcVwd+ThF20zvNBgwA1QNyyIUEpSz1GUeTKpNHyWuFRP3ZqsLl4zfnfmd
	vBZi5jdewxacTZkA9zs39Ri21APJwxQzc0jGiH5rpWXpBUB4b1PWnKItkqYQtXAie6Rd
	RtkojZkUhH96Wkq5MGaz0qsp9RZrNN13C1NpQh4e6QYTsbr1xqL35elroR/196Mf8s1w
	VOQ387GP2XWcnt65O9GMWpx4jz5ZX9AOarrOjrmm0FNEbBJI0RBTEEL2SCgpTR2UviR6
	sYyg==
MIME-Version: 1.0
Received: by 10.58.155.99 with SMTP id vv3mr20361499veb.50.1356098600181; Fri,
	21 Dec 2012 06:03:20 -0800 (PST)
Received: by 10.58.187.84 with HTTP; Fri, 21 Dec 2012 06:03:20 -0800 (PST)
In-Reply-To: <50D46B47.8000003@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 09:03:20 -0500
X-Google-Sender-Auth: A_Z3EBPOYsSbbLIL1gY_n-Bx9QU
Message-ID: <CAOvdn6Vm6_yW2LL8hxGzps8Jm61ihgNYDtKhOj91EurKFPWe4Q@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5684548786536126418=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5684548786536126418==
Content-Type: multipart/alternative; boundary=047d7b67002d581e9704d15d4f8f

--047d7b67002d581e9704d15d4f8f
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 21, 2012 at 8:59 AM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

> > You don't happen to use "dom0_vcpus_pin", do you?
>
> I don't. Should I try?


Probably not.

I ran into issues with it the last go-around with Jan whilst trying to
debug S3 issues.
It turned out it got in the way of some scheduling stuff.
S3 will break the CPU affinity set up by that pinning, and it can lead to
some unexpected behavior, and/or crashing upon resume.

I'd recommend against it.


/btg

--047d7b67002d581e9704d15d4f8f--


--===============5684548786536126418==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5684548786536126418==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 14:03:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:03:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3C9-00058q-9r; Fri, 21 Dec 2012 14:03:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tm3C7-00058e-Ai
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:03:23 +0000
Received: from [85.158.139.211:26221] by server-9.bemta-5.messagelabs.com id
	3A/52-10690-A2C64D05; Fri, 21 Dec 2012 14:03:22 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356098600!21502301!1
X-Originating-IP: [209.85.220.172]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19476 invoked from network); 21 Dec 2012 14:03:21 -0000
Received: from mail-vc0-f172.google.com (HELO mail-vc0-f172.google.com)
	(209.85.220.172)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 14:03:21 -0000
Received: by mail-vc0-f172.google.com with SMTP id fw7so5124113vcb.17
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 06:03:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=8nlO6RKLIInwFd1oj09sQtkkdC5QBTGufCQPN3WDVnY=;
	b=rWKV8IL39rxjPKM5dwzlpbuQ1a6J45TNZnL2pWtelga8XNfBHZu5u34+3cSMppW92F
	7G5HBzU/hLxOcVwd+ThF20zvNBgwA1QNyyIUEpSz1GUeTKpNHyWuFRP3ZqsLl4zfnfmd
	vBZi5jdewxacTZkA9zs39Ri21APJwxQzc0jGiH5rpWXpBUB4b1PWnKItkqYQtXAie6Rd
	RtkojZkUhH96Wkq5MGaz0qsp9RZrNN13C1NpQh4e6QYTsbr1xqL35elroR/196Mf8s1w
	VOQ387GP2XWcnt65O9GMWpx4jz5ZX9AOarrOjrmm0FNEbBJI0RBTEEL2SCgpTR2UviR6
	sYyg==
MIME-Version: 1.0
Received: by 10.58.155.99 with SMTP id vv3mr20361499veb.50.1356098600181; Fri,
	21 Dec 2012 06:03:20 -0800 (PST)
Received: by 10.58.187.84 with HTTP; Fri, 21 Dec 2012 06:03:20 -0800 (PST)
In-Reply-To: <50D46B47.8000003@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 09:03:20 -0500
X-Google-Sender-Auth: A_Z3EBPOYsSbbLIL1gY_n-Bx9QU
Message-ID: <CAOvdn6Vm6_yW2LL8hxGzps8Jm61ihgNYDtKhOj91EurKFPWe4Q@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5684548786536126418=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5684548786536126418==
Content-Type: multipart/alternative; boundary=047d7b67002d581e9704d15d4f8f

--047d7b67002d581e9704d15d4f8f
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 21, 2012 at 8:59 AM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

> > You don't happen to use "dom0_vcpus_pin", do you?
>
> I don't. Should I try?


Probably not.

I ran into issues with it the last go-around with Jan whilst trying to
debug S3 issues.
It turned out it got in the way of some scheduling stuff.
S3 will break the CPU affinity set up by that pinning, and it can lead to
some unexpected behavior, and/or crashing upon resume.

I'd recommend against it.


/btg
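
For context, "dom0_vcpus_pin" is a Xen hypervisor boot option (it pins each
dom0 vCPU 1:1 to the matching physical CPU), so enabling or removing it means
editing the hypervisor command line and rebooting. A sketch of where it would
sit in a grub1 entry; the file names and versions below are illustrative
assumptions, not taken from this thread:

```
# Illustrative grub (menu.lst) entry -- paths/versions are assumptions.
# dom0_vcpus_pin goes on the Xen hypervisor line, not the dom0 kernel line:
title Xen 4.1.3
    kernel /boot/xen-4.1.3.gz dom0_mem=1024M dom0_vcpus_pin
    module /boot/vmlinuz-3.2.0 root=/dev/sda1 ro
    module /boot/initrd.img-3.2.0
```

Dropping "dom0_vcpus_pin" from that line (and rebooting) is how the pinning
discussed here would be disabled.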

--047d7b67002d581e9704d15d4f8f--


--===============5684548786536126418==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5684548786536126418==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 14:03:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3CO-0005AF-Nj; Fri, 21 Dec 2012 14:03:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm3CN-0005A2-K6
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:03:39 +0000
Received: from [193.109.254.147:25106] by server-4.bemta-14.messagelabs.com id
	89/0B-15233-A3C64D05; Fri, 21 Dec 2012 14:03:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1356098616!8656431!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4591 invoked from network); 21 Dec 2012 14:03:38 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 14:03:38 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLE3MJL030023
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 14:03:23 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLE3M4u024155
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 14:03:22 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLE3LFx026105; Fri, 21 Dec 2012 08:03:22 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 06:03:21 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ADCDC1C032B; Fri, 21 Dec 2012 09:03:20 -0500 (EST)
Date: Fri, 21 Dec 2012 09:03:20 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20121221140320.GD25526@phenom.dumpdata.com>
References: <50D41DF3.306@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D41DF3.306@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Create a iSCSI DomU with disks in another DomU
 running on the same Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 21, 2012 at 09:29:39AM +0100, Roger Pau Monné wrote:
> Hello,
> 
> I'm trying to use a strange setup that consists in having a DomU
> serving iSCSI targets to the Dom0, which will use these targets as disks
> for other DomUs. I've tried to set up this iSCSI target DomU using both
> Debian Squeeze/Wheezy (with kernels 2.6.32 and 3.2) and iSCSI
> Enterprise Target (IET), and when launching the DomU I get these messages
> from Xen:
> 
> (XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
> (XEN) Xen WARN at mm.c:1926
> (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
> (XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
> (XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
> (XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
> (XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
> (XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff82c4802bfba8:
> (XEN)    ffff830141405000 8000000000000003 7400000000000001 0000000000145028
> (XEN)    ffff82f6028a0510 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
> (XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
> (XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc3f0
> (XEN)    0000000000000001 ffffffffffff8000 0000000000000002 ffff83011d555000
> (XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
> (XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
> (XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
> (XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
> (XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000005802bfd38 ffff82c4802b8000
> (XEN)    ffff82c400000000 0000000000000001 ffffc90000028b10 ffffc90000028b10
> (XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 0000000000145028
> (XEN)    000000000011cf7c 0000000000001000 0000000000157e68 0000000000007ff0
> (XEN)    000000000000027e 000000000042000d 0000000000020b50 ffff8300dfdf0000
> (XEN)    ffff82c4802bfd78 ffffc90000028ac0 ffffc90000028ac0 ffff880185f6fd58
> (XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
> (XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
> (XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 ffff8300dfb03000
> (XEN)    ffff8300dfdf0000 0000150e11a417f8 0000000000000002 ffff82c480300948
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
> (XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
> (XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
> (XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
> (XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
> (XEN)
> (XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
> (XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
> (XEN) Xen WARN at mm.c:1926
> (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
> (XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
> (XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
> (XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
> (XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
> (XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff82c4802bfba8:
> (XEN)    ffff830141405000 8000000000000003 7400000000000001 000000000014581d
> (XEN)    ffff82f6028b03b0 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
> (XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
> (XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc308
> (XEN)    0000000000000000 ffffffffffff8000 0000000000000001 ffff83011d555000
> (XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
> (XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
> (XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
> (XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
> (XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000002802bfd38 ffff82c4802b8000
> (XEN)    ffffffff00000000 0000000000000001 ffffc90000028b60 ffffc90000028b60
> (XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 000000000014581d
> (XEN)    00000000000deb3e 0000000000001000 0000000000157e68 000000000b507ff0
> (XEN)    0000000000000261 000000000042000d 00000000000204b0 ffffc90000028b38
> (XEN)    0000000000000002 ffffc90000028b38 ffffc90000028b38 ffff880185f6fd58
> (XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
> (XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
> (XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 0000000000000086
> (XEN)    ffff82c4802bfe28 ffff82c480125eae ffff83019e60c000 0000000000000286
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
> (XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
> (XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
> (XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
> (XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
> (XEN)
> (XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
> 
> (Note that I've added a WARN() to mm.c:1925 to see where the
> get_page call was coming from.)
> 
> Connecting the iSCSI disks to another Dom0 works fine, so this
> problem only happens when trying to connect the disks to the
> Dom0 where the DomU is running.

Is this happening when the 'disks' are exported to the domUs?
Are they exported via QEMU or xen-blkback?

> 
> I've replaced the Linux DomU serving iSCSI targets with a
> NetBSD DomU, and the problem disappears: I'm able to
> attach the targets shared by the DomU to the Dom0 without
> issues.
> 
> The problem seems to come from netfront/netback; does anyone
> have a clue about what might cause this bad interaction
> between IET and netfront/netback?

Or it might be that we are re-using the PFN for blkback/blkfront
via the m2p overrides and overwriting the netfront/netback
m2p overrides?

Is this with an HVM domU or PV domU?

> 
> Thanks, Roger.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

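For readers decoding the trace above: the rd/od values in the mm.c error line
are domain pointers, and caf/taf appear to be the page's count_info and
type_info words. A small, hypothetical Python helper (the function and field
names are mine, not from the thread; only the log format itself is taken from
the messages above) that pulls those fields out of such a line:

```python
import re

# Parse a Xen mm.c "Error pfn" line into named hex fields.
# Hypothetical helper; field names follow the log line shown above.
LINE_RE = re.compile(
    r"Error pfn (?P<pfn>[0-9a-f]+): "
    r"rd=(?P<rd>[0-9a-f]+), od=(?P<od>[0-9a-f]+), "
    r"caf=(?P<caf>[0-9a-f]+), taf=(?P<taf>[0-9a-f]+)"
)

def parse_mm_error(line):
    """Return a dict of integer field values, or None if no match."""
    m = LINE_RE.search(line)
    if not m:
        return None
    return {k: int(v, 16) for k, v in m.groupdict().items()}

sample = ("(XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, "
          "od=ffff830141405000, caf=8000000000000003, taf=7400000000000001")
fields = parse_mm_error(sample)
print(hex(fields["pfn"]))  # -> 0x157e68
```

Comparing the parsed caf/taf words across repeated occurrences of the error
is a quick way to see whether the page's count/type state changes between
warnings.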
From xen-devel-bounces@lists.xen.org Fri Dec 21 14:08:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:08:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3H6-0005SV-Mf; Fri, 21 Dec 2012 14:08:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tm3H5-0005SO-M1
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:08:31 +0000
Received: from [193.109.254.147:15871] by server-14.bemta-14.messagelabs.com
	id 25/FE-10022-E5D64D05; Fri, 21 Dec 2012 14:08:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1356098747!10830946!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6231 invoked from network); 21 Dec 2012 14:05:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 14:05:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 14:05:47 +0000
Message-Id: <50D47ACC02000078000B2079@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 14:05:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Marek Marczykowski" <marmarek@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
In-Reply-To: <50D46B47.8000003@invisiblethingslab.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Ben Guthro <ben@guthro.net>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 14:59, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
> On 21.12.2012 14:42, Jan Beulich wrote:
>> You don't happen to use "dom0_vcpus_pin", do you?
> 
> I don't. Should I try?

No - IIRC that was the cause of Ben's problems.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 14:11:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3K4-0005bb-AZ; Fri, 21 Dec 2012 14:11:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1Tm3K2-0005b6-Ru
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:11:35 +0000
Received: from [85.158.143.35:19345] by server-2.bemta-4.messagelabs.com id
	C0/37-30861-61E64D05; Fri, 21 Dec 2012 14:11:34 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1356098871!11508913!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31418 invoked from network); 21 Dec 2012 14:07:52 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 14:07:52 -0000
Received: (qmail 19930 invoked from network); 21 Dec 2012 16:07:50 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	21 Dec 2012 16:07:50 +0200
Message-ID: <50D46D75.1010800@gmail.com>
Date: Fri, 21 Dec 2012 16:08:53 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50D459D3.4040101@gmail.com>
	<50D4780602000078000B204E@nat28.tlf.novell.com>
In-Reply-To: <50D4780602000078000B204E@nat28.tlf.novell.com>
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44470
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> I need access to some MSR values that are not currently being saved in
>> struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE
>> and MSR_IA32_ENERGY_PERF_BIAS.
>
> I can't see why, and this is quite likely related to the reason for
> them not being accessible in the first place.

Because they offer heuristic information about the type of OS running in 
the domU, and there are classes of applications interested in that.

> Not to me, no. These records are use for save/restore/migrate,
> and so far there hasn't been a need to include here the MSRs you
> mention.

Interesting. I'd assumed that those would be saved (and perhaps ignored 
on restore if they became irrelevant) in a migration scenario.

> For MSR_IA32_MC0_CTL, why not? These should be fine for
> anything that is architectural to x86.
>
> The other two MSRs are Intel specific iirc, and hence wouldn't
> be validly dealt with in vendor independent code.

I agree, however, as I've said before, _some_ applications _are_ 
interested in vendor-specific information for heuristic purposes.

Whether or not this would make an interesting patch for Xen, I would 
appreciate any advice on how best to implement said functionality (if 
only for MSR_IA32_MC0_CTL).

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 14:36:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3hW-0005tZ-FI; Fri, 21 Dec 2012 14:35:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm3hU-0005tU-Tc
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:35:49 +0000
Received: from [85.158.139.83:59753] by server-16.bemta-5.messagelabs.com id
	F9/AB-09208-4C374D05; Fri, 21 Dec 2012 14:35:48 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1356100544!24603204!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3687 invoked from network); 21 Dec 2012 14:35:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 14:35:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="1475133"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 14:35:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 09:35:22 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm3h4-0004HZ-H9;
	Fri, 21 Dec 2012 14:35:22 +0000
Message-ID: <50D47235.4090106@eu.citrix.com>
Date: Fri, 21 Dec 2012 14:29:09 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D3414D.8080901@eu.citrix.com>
	<CAAWQectVEihrayJj5n4SPGqA0QJSiC7s2x_oDW=KHyxukWpMSA@mail.gmail.com>
In-Reply-To: <CAAWQectVEihrayJj5n4SPGqA0QJSiC7s2x_oDW=KHyxukWpMSA@mail.gmail.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/12 18:18, Dario Faggioli wrote:
> On Thu, Dec 20, 2012 at 5:48 PM, George Dunlap
> <george.dunlap@eu.citrix.com> wrote:
>> And in any case, looking at the caller of csched_load_balance(), it
>> explicitly says to steal work if the next thing on the runqueue of cpu has a
>> priority of TS_OVER.  That was chosen for a reason -- if you want to change
>> that, you should change it there at the top (and make a justification for
>> doing so), not deeply nested in a function like this.
>>
>> Or am I completely missing something?
>>
> No, you're right. Trying to solve a nasty issue I was seeing, I overlooked I was
> changing the underlying logic until that point... Thanks!
>
> What I want to avoid is the following: a vcpu wakes-up on the busy pcpu Y. As
> a consequence, the idle pcpu X is tickled. Then, for any unrelated reason, pcpu
> Z reschedules and, as it would go idle too, it looks around for any
> vcpu to steal,
> finds one in Y's runqueue and grabs it. Afterward, when X gets the IPI and
> schedules, it just does not find anyone to run and goes back idling.
>
> Now, suppose the vcpu has X, but *not* Z, in its node-affinity (while
> it has a full
> vcpu-affinity, i.e., can run everywhere). In this case, a vcpu that
> could have run on
> a pcpu in its node-affinity, executes outside from it. That happens because,
> the NODE_BALANCE_STEP in csched_load_balance(), when called by Z, won't
> find anything suitable to steal (provided there actually isn't any
> vcpu waiting in
> any runqueue with node-affinity with Z), while the CPU_BALANCE_STEP will
> find our vcpu. :-(
>
> So, what I wanted is something that could tell me whether the pcpu which is
> stealing work is the one that has actually been tickled to do so. I
> was then using
> the pcpu idleness as a (cheap and easy to check) indication of that,
> but I now see
> this is having side effects I in the first place did not want to cause.
>
> Sorry for that, I probably spent so much time buried, as you were
> saying, in the
> various nested loops and calls, that I lost the context a little bit! :-P

OK, that makes sense -- I figured it was something like that.  Don't 
feel too bad about missing that connection -- we're all fairly blind to 
our own code, and I only caught it because I was trying to figure out 
what was going on.  That's why we do patch review. :-)

Honestly, the whole "steal work" idea seemed a bit backwards to begin 
with, but now that we're not just dealing with "possible" and "not 
possible", but with "better" and "worse", the work-stealing method of 
load balancing sort of falls down.

It does make sense to do the load-balancing work on idle cpus rather 
than already-busy cpus; but I wonder if what should happen instead is 
that before idling, a pcpu chooses a "busy" pcpu and does a global load 
balancing for it -- i.e., pcpu 1 will look at pcpu 5's runqueue, and 
consider moving away the vcpus on the runqueue not just to itself but to 
any available cpu.

That way, in your example, Z might wake up, look at X's runqueue, and 
say, "This would probably run well on Y -- I'll migrate it there."

But that's kind of a half-baked idea at this point.

> Ok, I think the problem I was describing is real, and I've seen it happening and
> causing performance degradation. However, as I think a good solution
> is going to
> be more complex than I thought, I'd better repost without this
> function and deal with
> it in a future separate patch (after having figured out the best way
> of doing so). Is
> that fine with you?

Yes, that's fine.  Thanks, Dario.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 14:40:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3lR-000611-8U; Fri, 21 Dec 2012 14:39:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tm3lQ-00060w-Au
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:39:52 +0000
Received: from [85.158.143.35:22931] by server-3.bemta-4.messagelabs.com id
	C6/0E-18211-7B474D05; Fri, 21 Dec 2012 14:39:51 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1356100777!4045085!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28810 invoked from network); 21 Dec 2012 14:39:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 14:39:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="1553686"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 14:39:37 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 09:39:37 -0500
Message-ID: <50D474A8.6080500@citrix.com>
Date: Fri, 21 Dec 2012 14:39:36 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50D459D3.4040101@gmail.com>
	<50D4780602000078000B204E@nat28.tlf.novell.com>
	<50D46D75.1010800@gmail.com>
In-Reply-To: <50D46D75.1010800@gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/12/12 14:08, Razvan Cojocaru wrote:
>>> I need access to some MSR values that are not currently being saved in
>>> struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE
>>> and MSR_IA32_ENERGY_PERF_BIAS.
>> I can't see why, and this is quite likely related to the reason for
>> them not being accessible in the first place.
> Because they offer heuristic information about the type of OS running in
> the domU domain, and there are classes of applications interested in that.
The handling of MC0_CTL for the guest is this:
/xen/arch/x86/cpu/mcheck/vmce.c: line 14:
     case MSR_IA32_MC0_CTL:
         /*
          * if guest crazy clear any bit of MCi_CTL,
          * treat it as not implement and ignore write change it.
          */
         break;
That is, whatever the guest writes to MC0_CTL is completely ignored.

So, if you're trying to figure out what guest is running by looking at 
MC0_CTL, then I guess tough - Xen doesn't store that at all.

Like I said before, the MC interrupt is handled in Xen, and then forwarded 
to the guest as an IRQ.
(Obviously, an MCE is generally fatal to the system, so the guest isn't at 
all involved in those).

If you want to save some more Machine Check related information, it 
probably should go into:
static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
in the same vmce.c file as above (although I'm not 100% sure this is 
part of "partial" saves).
Currently, this saves only the MCI_CTL2 register values.
>> Not to me, no. These records are use for save/restore/migrate,
>> and so far there hasn't been a need to include here the MSRs you
>> mention.
> Interesting. I've assumed that those would be saved (and perhaps ignored
> on restore if they became irrelevant) in a migration scenario.
I presume guests would just be happy with the defaults Xen provides. 
Otherwise, we'd have bug reports...

--
Mats
>> For MSR_IA32_MC0_CTL, why not? These should be fine for
>> anything that is architectural to x86.
>>
>> The other two MSRs are Intel specific iirc, and hence wouldn't
>> be validly dealt with in vendor independent code.
> I agree, however, as I've said before, _some_ applications _are_
> interested in vendor-specific information for heuristic purposes.
>
> Whether this would make a interesting patch for Xen or not, I would
> appreciate any advice on how best to implement said functionality (if
> only for MSR_IA32_MC0_CTL).
>
> Thanks,
> Razvan Cojocaru
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 14:40:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3lR-000611-8U; Fri, 21 Dec 2012 14:39:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1Tm3lQ-00060w-Au
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:39:52 +0000
Received: from [85.158.143.35:22931] by server-3.bemta-4.messagelabs.com id
	C6/0E-18211-7B474D05; Fri, 21 Dec 2012 14:39:51 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1356100777!4045085!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28810 invoked from network); 21 Dec 2012 14:39:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 14:39:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="1553686"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 14:39:37 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 09:39:37 -0500
Message-ID: <50D474A8.6080500@citrix.com>
Date: Fri, 21 Dec 2012 14:39:36 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50D459D3.4040101@gmail.com>
	<50D4780602000078000B204E@nat28.tlf.novell.com>
	<50D46D75.1010800@gmail.com>
In-Reply-To: <50D46D75.1010800@gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/12/12 14:08, Razvan Cojocaru wrote:
>>> I need access to some MSR values that are not currently being saved in
>>> struct hvm_hw_cpu. Among them are MSR_IA32_MC0_CTL, MSR_IA32_MISC_ENABLE
>>> and MSR_IA32_ENERGY_PERF_BIAS.
>> I can't see why, and this is quite likely related to the reason for
>> them not being accessible in the first place.
> Because they offer heuristic information about the type of OS running in
> the domU, and there are classes of applications interested in that.
The handling of MC0_CTL for the guest is this (xen/arch/x86/cpu/mcheck/vmce.c, line 14):
     case MSR_IA32_MC0_CTL:
         /*
          * if guest crazy clear any bit of MCi_CTL,
          * treat it as not implement and ignore write change it.
          */
         break;
That is, whatever the guest writes to MC0_CTL is completely ignored.

So, if you're trying to figure out which guest is running by looking at
MC0_CTL, then I guess it's tough luck - Xen doesn't store that value at all.

Like I said before, the MC interrupt is handled in Xen, and then forwarded
to the guest as an IRQ.
(Obviously, an MCE is generally fatal to the system, so the guest isn't
involved in those at all.)

If you want to save some more Machine Check related information, it
should probably go into:
static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
in the same vmce.c file as above (although I'm not 100% sure this is
part of "partial" saves).
Currently, this saves only the MCi_CTL2 register values.
>> Not to me, no. These records are used for save/restore/migrate,
>> and so far there hasn't been a need to include the MSRs you
>> mention here.
> Interesting. I'd assumed that those would be saved (and perhaps ignored
> on restore if they became irrelevant) in a migration scenario.
I presume guests would just be happy with the defaults Xen provides. 
Otherwise, we'd have bug reports...

--
Mats
>> For MSR_IA32_MC0_CTL, why not? These should be fine for
>> anything that is architectural to x86.
>>
>> The other two MSRs are Intel specific iirc, and hence wouldn't
>> be validly dealt with in vendor independent code.
> I agree, however, as I've said before, _some_ applications _are_
> interested in vendor-specific information for heuristic purposes.
>
> Whether this would make an interesting patch for Xen or not, I would
> appreciate any advice on how best to implement said functionality (if
> only for MSR_IA32_MC0_CTL).
>
> Thanks,
> Razvan Cojocaru
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 14:47:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 14:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm3sj-0006Da-6c; Fri, 21 Dec 2012 14:47:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm3sh-0006DV-Mv
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 14:47:24 +0000
Received: from [85.158.138.51:3423] by server-9.bemta-3.messagelabs.com id
	A2/9D-11948-A7674D05; Fri, 21 Dec 2012 14:47:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1356101241!27977498!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29660 invoked from network); 21 Dec 2012 14:47:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 14:47:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="305837"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 14:47:21 +0000
Received: from [192.168.1.30] (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 14:47:21 +0000
Message-ID: <50D47678.2050903@citrix.com>
Date: Fri, 21 Dec 2012 15:47:20 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <50D41DF3.306@citrix.com>
	<20121221140320.GD25526@phenom.dumpdata.com>
In-Reply-To: <20121221140320.GD25526@phenom.dumpdata.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Create a iSCSI DomU with disks in another DomU
 running on the same Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/12/12 15:03, Konrad Rzeszutek Wilk wrote:
> On Fri, Dec 21, 2012 at 09:29:39AM +0100, Roger Pau Monné wrote:
>> Hello,
>>
>> I'm trying to use a strange setup that consists of having a DomU
>> serve iSCSI targets to the Dom0, which will then use those targets as
>> disks for other DomUs. I've tried to set up this iSCSI target DomU
>> using both Debian Squeeze/Wheezy (with kernels 2.6.32 and 3.2) and
>> iSCSI Enterprise Target (IET), and when launching the DomU I get these
>> messages from Xen:
>>
>> (XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
>> (XEN) Xen WARN at mm.c:1926
>> (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
>> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
>> (XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
>> (XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
>> (XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
>> (XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
>> (XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
>> (XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
>> (XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen stack trace from rsp=ffff82c4802bfba8:
>> (XEN)    ffff830141405000 8000000000000003 7400000000000001 0000000000145028
>> (XEN)    ffff82f6028a0510 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
>> (XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
>> (XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc3f0
>> (XEN)    0000000000000001 ffffffffffff8000 0000000000000002 ffff83011d555000
>> (XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
>> (XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
>> (XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
>> (XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
>> (XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000005802bfd38 ffff82c4802b8000
>> (XEN)    ffff82c400000000 0000000000000001 ffffc90000028b10 ffffc90000028b10
>> (XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 0000000000145028
>> (XEN)    000000000011cf7c 0000000000001000 0000000000157e68 0000000000007ff0
>> (XEN)    000000000000027e 000000000042000d 0000000000020b50 ffff8300dfdf0000
>> (XEN)    ffff82c4802bfd78 ffffc90000028ac0 ffffc90000028ac0 ffff880185f6fd58
>> (XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
>> (XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
>> (XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 ffff8300dfb03000
>> (XEN)    ffff8300dfdf0000 0000150e11a417f8 0000000000000002 ffff82c480300948
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
>> (XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
>> (XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
>> (XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
>> (XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
>> (XEN)
>> (XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
>> (XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
>> (XEN) Xen WARN at mm.c:1926
>> (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
>> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
>> (XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
>> (XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
>> (XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
>> (XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
>> (XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
>> (XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
>> (XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen stack trace from rsp=ffff82c4802bfba8:
>> (XEN)    ffff830141405000 8000000000000003 7400000000000001 000000000014581d
>> (XEN)    ffff82f6028b03b0 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
>> (XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
>> (XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc308
>> (XEN)    0000000000000000 ffffffffffff8000 0000000000000001 ffff83011d555000
>> (XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
>> (XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
>> (XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
>> (XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
>> (XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000002802bfd38 ffff82c4802b8000
>> (XEN)    ffffffff00000000 0000000000000001 ffffc90000028b60 ffffc90000028b60
>> (XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 000000000014581d
>> (XEN)    00000000000deb3e 0000000000001000 0000000000157e68 000000000b507ff0
>> (XEN)    0000000000000261 000000000042000d 00000000000204b0 ffffc90000028b38
>> (XEN)    0000000000000002 ffffc90000028b38 ffffc90000028b38 ffff880185f6fd58
>> (XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
>> (XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
>> (XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 0000000000000086
>> (XEN)    ffff82c4802bfe28 ffff82c480125eae ffff83019e60c000 0000000000000286
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
>> (XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
>> (XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
>> (XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
>> (XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
>> (XEN)
>> (XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
>>
>> (Note that I've added a WARN() to mm.c:1925 to see where the
>> get_page call was coming from).
>>
>> Connecting the iSCSI disks to another Dom0 works fine, so this
>> problem only happens when trying to connect the disks to the
>> Dom0 where the DomU is running.
>
> Is this happening when the 'disks' are exported to the domUs?
> Are they exported via QEMU or xen-blkback?

The iSCSI disks are connected to the DomUs using blkback, and this is
happening when the DomU tries to access its disks.

>>
>> I've replaced the Linux DomU serving iSCSI targets with a
>> NetBSD DomU, and the problem disappears: I'm able to
>> attach the targets shared by the DomU to the Dom0 without
>> issues.
>>
>> The problem seems to come from netfront/netback, does anyone
>> have a clue about what might cause this bad interaction
>> between IET and netfront/netback?
>
> Or it might be that we are re-using the PFN for blkback/blkfront
> and using the m2p overrides and overwriting the netfront/netback
> m2p overrides?

What's strange is that this doesn't happen when the domain serving the
targets is a NetBSD PV guest. There are also problems when blkback is not
used (see below), so I guess the problem is between netfront/netback and IET.

> Is this with an HVM domU or PV domU?

Both domains (the domain holding the iSCSI targets, and the created
guests) are PV.

Also, I forgot to mention this in the previous email: if I just
connect the iSCSI disks to the Dom0, I don't see any errors from Xen,
but the Dom0 kernel starts complaining:

[70272.569607] sd 14:0:0:0: [sdc]
[70272.569611] Sense Key : Medium Error [current]
[70272.569619] Info fld=0x0
[70272.569623] sd 14:0:0:0: [sdc]
[70272.569627] Add. Sense: Unrecovered read error
[70272.569633] sd 14:0:0:0: [sdc] CDB:
[70272.569637] Read(10): 28 00 00 00 00 00 00 00 08 00
[70272.569662] end_request: critical target error, dev sdc, sector 0
[70277.571208] sd 14:0:0:0: [sdc] Unhandled sense code
[70277.571220] sd 14:0:0:0: [sdc]
[70277.571224] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[70277.571229] sd 14:0:0:0: [sdc]
[70277.571233] Sense Key : Medium Error [current]
[70277.571241] Info fld=0x0
[70277.571245] sd 14:0:0:0: [sdc]
[70277.571249] Add. Sense: Unrecovered read error
[70277.571255] sd 14:0:0:0: [sdc] CDB:
[70277.571259] Read(10): 28 00 00 00 00 00 00 00 08 00
[70277.571284] end_request: critical target error, dev sdc, sector 0
[70282.572768] sd 14:0:0:0: [sdc] Unhandled sense code
[70282.572781] sd 14:0:0:0: [sdc]
[70282.572785] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[70282.572790] sd 14:0:0:0: [sdc]
[70282.572794] Sense Key : Medium Error [current]
[70282.572802] Info fld=0x0
[70282.572806] sd 14:0:0:0: [sdc]
[70282.572810] Add. Sense: Unrecovered read error
[70282.572816] sd 14:0:0:0: [sdc] CDB:
[70282.572820] Read(10): 28 00 00 00 00 00 00 00 08 00
[70282.572846] end_request: critical target error, dev sdc, sector 0
[70287.574397] sd 14:0:0:0: [sdc] Unhandled sense code
[70287.574409] sd 14:0:0:0: [sdc]
[70287.574413] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[70287.574418] sd 14:0:0:0: [sdc]
[70287.574422] Sense Key : Medium Error [current]
[70287.574430] Info fld=0x0
[70287.574434] sd 14:0:0:0: [sdc]
[70287.574438] Add. Sense: Unrecovered read error
[70287.574445] sd 14:0:0:0: [sdc] CDB:
[70287.574448] Read(10): 28 00 00 00 00 00 00 00 08 00
[70287.574474] end_request: critical target error, dev sdc, sector 0

When I try to attach the targets to another Dom0, everything works fine;
the problem only happens when the iSCSI target is a DomU and the disks
are attached from the Dom0 on the same machine.

Thanks, Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Dec 21 15:03:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm48V-0006V1-2T; Fri, 21 Dec 2012 15:03:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm48T-0006Uw-MT
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:03:42 +0000
Received: from [85.158.137.99:31671] by server-10.bemta-3.messagelabs.com id
	B5/26-07616-C4A74D05; Fri, 21 Dec 2012 15:03:40 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-11.tower-217.messagelabs.com!1356102218!17321955!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29917 invoked from network); 21 Dec 2012 15:03:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:03:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,329,1355097600"; 
   d="scan'208";a="1478129"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 15:02:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 10:02:38 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm47S-0004hP-9Q;
	Fri, 21 Dec 2012 15:02:38 +0000
Message-ID: <50D47898.2060909@eu.citrix.com>
Date: Fri, 21 Dec 2012 14:56:24 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D37359.9080001@eu.citrix.com> <1356049122.15403.34.camel@Abyss>
In-Reply-To: <1356049122.15403.34.camel@Abyss>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/12/12 00:18, Dario Faggioli wrote:
> On Thu, 2012-12-20 at 20:21 +0000, George Dunlap wrote:
>>> -    /*
>>> -     * Pick from online CPUs in VCPU's affinity mask, giving a
>>> -     * preference to its current processor if it's in there.
>>> -     */
>>>        online = cpupool_scheduler_cpumask(vc->domain->cpupool);
>>> -    cpumask_and(&cpus, online, vc->cpu_affinity);
>>> -    cpu = cpumask_test_cpu(vc->processor, &cpus)
>>> -            ? vc->processor
>>> -            : cpumask_cycle(vc->processor, &cpus);
>>> -    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
>>> +    for_each_csched_balance_step( balance_step )
>>> +    {
>>> +        /* Pick an online CPU from the proper affinity mask */
>>> +        ret = csched_balance_cpumask(vc, balance_step, &cpus);
>>> +        cpumask_and(&cpus, &cpus, online);
>>>
>>> -    /*
>>> -     * Try to find an idle processor within the above constraints.
>>> -     *
>>> -     * In multi-core and multi-threaded CPUs, not all idle execution
>>> -     * vehicles are equal!
>>> -     *
>>> -     * We give preference to the idle execution vehicle with the most
>>> -     * idling neighbours in its grouping. This distributes work across
>>> -     * distinct cores first and guarantees we don't do something stupid
>>> -     * like run two VCPUs on co-hyperthreads while there are idle cores
>>> -     * or sockets.
>>> -     *
>>> -     * Notice that, when computing the "idleness" of cpu, we may want to
>>> -     * discount vc. That is, iff vc is the currently running and the only
>>> -     * runnable vcpu on cpu, we add cpu to the idlers.
>>> -     */
>>> -    cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
>>> -    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
>>> -        cpumask_set_cpu(cpu, &idlers);
>>> -    cpumask_and(&cpus, &cpus, &idlers);
>>> -    cpumask_clear_cpu(cpu, &cpus);
>>> +        /* If present, prefer vc's current processor */
>>> +        cpu = cpumask_test_cpu(vc->processor, &cpus)
>>> +                ? vc->processor
>>> +                : cpumask_cycle(vc->processor, &cpus);
>>> +        ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
>>>
>>> -    while ( !cpumask_empty(&cpus) )
>>> -    {
>>> -        cpumask_t cpu_idlers;
>>> -        cpumask_t nxt_idlers;
>>> -        int nxt, weight_cpu, weight_nxt;
>>> -        int migrate_factor;
>>> +        /*
>>> +         * Try to find an idle processor within the above constraints.
>>> +         *
>>> +         * In multi-core and multi-threaded CPUs, not all idle execution
>>> +         * vehicles are equal!
>>> +         *
>>> +         * We give preference to the idle execution vehicle with the most
>>> +         * idling neighbours in its grouping. This distributes work across
>>> +         * distinct cores first and guarantees we don't do something stupid
>>> +         * like run two VCPUs on co-hyperthreads while there are idle cores
>>> +         * or sockets.
>>> +         *
>>> +         * Notice that, when computing the "idleness" of cpu, we may want to
>>> +         * discount vc. That is, iff vc is the currently running and the only
>>> +         * runnable vcpu on cpu, we add cpu to the idlers.
>>> +         */
>>> +        cpumask_and(&idlers, &cpu_online_map, CSCHED_PRIV(ops)->idlers);
>>> +        if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
>>> +            cpumask_set_cpu(cpu, &idlers);
>>> +        cpumask_and(&cpus, &cpus, &idlers);
>>> +        /* If there are idlers and cpu is still not among them, pick one */
>>> +        if ( !cpumask_empty(&cpus) && !cpumask_test_cpu(cpu, &cpus) )
>>> +            cpu = cpumask_cycle(cpu, &cpus);
>> This seems to be an addition to the algorithm -- particularly hidden in
>> this kind of "indent a big section that's almost exactly the same", I
>> think this at least needs to be called out in the changelog message,
>> perhaps put in a separate patch.
>>
> You're right, it is an addition, although a minor enough one (at least
> from the amount-of-code point of view; the effect of not having it was
> pretty bad! :-P) that I thought it could "hide" here. :-)
>
> But I guess I can put it in a separate patch.
>
>> Can you comment on why you think it's necessary?  Was there a
>> particular problem you were seeing?
>>
> Yep. Suppose vc is for some reason running on a pcpu which is outside
> its node-affinity, but that now some pcpus within vc's node-affinity
> have become idle. What we would like is for vc to start running there
> as soon as possible, so we expect this call to _csched_pick_cpu() to
> determine that.
>
> What happens is we do not use vc->processor (as it is outside of vc's
> node-affinity) and 'cpu' gets set to cpumask_cycle(vc->processor,
> &cpus), where '&cpus' is the result of cpumask_and(&cpus, balance_mask,
> online). Let's also suppose that 'cpu' now points to a busy
> thread but with an idle sibling, and that there aren't any other idle
> pcpus (either core or threads). Now, the algorithm evaluates the
> idleness of 'cpu', and compares it with the idleness of all the other
> pcpus, and it won't find anything better than 'cpu' itself, as all the
> other pcpus except its sibling thread are busy, while its sibling thread
> has the very same idleness it has (2 threads, 1 idle 1 busy).
>
> The net effect is vc being moved to 'cpu', which is busy, while it
> could have been moved to 'cpu''s sibling thread, which is indeed idle.
>
> The if() I added fixes this by making sure that the reference cpu is an
> idle one (if that is possible).
>
> I hope I've explained it correctly, and sorry if it is a little bit
> tricky, especially to explain like this (although, believe me, it was
> tricky to hunt it out too! :-P). I've seen that happening and I'm almost
> sure I kept a trace somewhere, so let me know if you want to see the
> "smoking gun". :-)
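The scenario above can be reduced to a tiny, self-contained sketch. Plain unsigned ints stand in for Xen's cpumask_t here, and mask_test()/mask_cycle()/prefer_idler() are hypothetical stand-ins, not the real cpumask API:

```c
#include <assert.h>

/* Toy 8-cpu masks: bit i set means cpu i is in the mask. */
static int mask_test(int cpu, unsigned m) { return (m >> cpu) & 1; }

/* Next set bit strictly after 'start', wrapping (like cpumask_cycle). */
static int mask_cycle(int start, unsigned m)
{
    for (int i = 1; i <= 8; i++) {
        int c = (start + i) % 8;
        if (mask_test(c, m))
            return c;
    }
    return -1; /* empty mask */
}

/* The added if(): when 'cpus' contains idlers but the reference 'cpu'
 * is not one of them, hop to an idler before weighing "idleness". */
static int prefer_idler(int cpu, unsigned cpus, unsigned idlers)
{
    unsigned idle_cpus = cpus & idlers;
    if (idle_cpus != 0 && !mask_test(cpu, idle_cpus))
        cpu = mask_cycle(cpu, idle_cpus);
    return cpu;
}
```

In the two-sibling-threads case: cpus = {0,1}, only cpu 1 idle, and the cycle landed on busy cpu 0; prefer_idler(0, 0x3, 0x2) moves the reference to idle cpu 1, whereas without the check the algorithm would keep comparing idleness against busy cpu 0 and never improve on it.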

No, the change looks quite plausible.  I guess it's not obvious that the 
balancing code will never migrate from one thread to another thread.  
(That whole algorithm could do with some commenting -- I may submit a 
patch once this series is in.)

I'm really glad you've had the opportunity to take a close look at these 
kinds of things.
>> Also -- and sorry to have to ask this kind of thing, but after sorting
>> through the placement algorithm my head hurts -- under what
>> circumstances would "cpumask_test_cpu(cpu, &idlers)" be false at this
>> point?  It seems like the only possibility would be if:
>> ( (vc->processor was not in the original &cpus [1])
>>     || !IS_RUNQ_IDLE(vc->processor) )
>> && (there are no idlers in the original &cpus)
>>
>> Which I suppose probably matches the time when we want to move on from
>> looking at NODE affinity and look for CPU affinity.
>>
>> [1] This could happen either if the vcpu/node affinity has changed, or
>> if we're currently running outside our node affinity and we're doing the
>> NODE step.
>>
>> OK -- I think I've convinced myself that this is OK as well (apart from
>> the hidden check).  I'll come back to look at your response to the load
>> balancing thing tomorrow.
>>
> Mmm... Sorry, not sure I follow, does this mean that you figured out
> and understood why I need that 'if(){break;}' ? Sounds like so, but I
> can't be sure (my head hurts a bit too, after having written that
> thing! :-D).

Well I always understood why we needed the break -- the purpose is to 
avoid the second run through when it's not necessary.  What I was doing, 
in a sort of "thinking out loud" fashion, was seeing under what 
conditions that break might actually happen.  Like the analysis with 
vcpu_should_migrate(), it might have turned out to be redundant, or to 
have missed some cases.  But I think it doesn't, so it's fine. :-)
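For reference, the shape of that two-pass loop with the early break can be sketched standalone (hypothetical code, with unsigned ints for cpumasks and lowest_set() standing in for the real placement logic; not the actual sched_credit.c implementation):

```c
/* Model of for_each_csched_balance_step(): try the narrow NODE-affinity
 * mask first, and only fall through to the wider CPU-affinity mask when
 * the first pass yields no usable cpu. */
static int lowest_set(unsigned m)
{
    for (int i = 0; i < 8; i++)
        if ((m >> i) & 1)
            return i;
    return -1;
}

static int pick_cpu(unsigned node_affine, unsigned cpu_affine, unsigned online)
{
    int cpu = -1;
    for (int step = 0; step < 2; step++) {
        unsigned cpus = (step == 0 ? node_affine : cpu_affine) & online;
        if (cpus == 0)
            continue;           /* nothing usable at this step */
        cpu = lowest_set(cpus); /* stand-in for the real placement logic */
        break;                  /* the early break: skip the wider pass */
    }
    return cpu;
}
```

So the break only fires after a pass that actually produced a candidate; when the node-affine set is empty (e.g. all its cpus offline), the loop falls through to the CPU-affinity pass.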

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 15:24:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4SD-0006jL-4l; Fri, 21 Dec 2012 15:24:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm4SB-0006jG-Rk
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:24:04 +0000
Received: from [85.158.139.211:12432] by server-15.bemta-5.messagelabs.com id
	B2/D3-20523-31F74D05; Fri, 21 Dec 2012 15:24:03 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1356103440!21415916!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13977 invoked from network); 21 Dec 2012 15:24:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:24:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1560423"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 15:23:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 10:23:58 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm4S6-00051l-OV;
	Fri, 21 Dec 2012 15:23:58 +0000
Message-ID: <50D47D99.9090209@eu.citrix.com>
Date: Fri, 21 Dec 2012 15:17:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<7e8c5e21c3ae1c267c23.1355944040@Solace>
In-Reply-To: <7e8c5e21c3ae1c267c23.1355944040@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 04 of 10 v2] xen: allow for explicitly
	specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> Make it possible to pass the node-affinity of a domain to the hypervisor
> from the upper layers, instead of always being computed automatically.
>
> Note that this also required generalizing the Flask hooks for setting
> and getting the affinity, so that they now deal with both vcpu and
> node affinity.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

I can't comment on the XSM stuff -- is any part of the "getvcpuaffinity" 
stuff for XSM a public interface that needs to be backwards-compatible?  
I.e., is s/vcpu//; OK from an interface point of view?

WRT everything else:
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> Changes from v1:
>   * added the missing dummy hook for nodeaffinity;
>   * let the permission renaming affect flask policies too.
>
> diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
> --- a/tools/flask/policy/policy/flask/access_vectors
> +++ b/tools/flask/policy/policy/flask/access_vectors
> @@ -47,8 +47,8 @@ class domain
>       transition
>       max_vcpus
>       destroy
> -    setvcpuaffinity
> -       getvcpuaffinity
> +    setaffinity
> +       getaffinity
>          scheduler
>          getdomaininfo
>          getvcpuinfo
> diff --git a/tools/flask/policy/policy/mls b/tools/flask/policy/policy/mls
> --- a/tools/flask/policy/policy/mls
> +++ b/tools/flask/policy/policy/mls
> @@ -70,11 +70,11 @@ mlsconstrain domain transition
>          (( h1 dom h2 ) and (( l1 eq l2 ) or (t1 == mls_priv)));
>
>   # all the domain "read" ops
> -mlsconstrain domain { getvcpuaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
> +mlsconstrain domain { getaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
>          ((l1 dom l2) or (t1 == mls_priv));
>
>   # all the domain "write" ops
> -mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setvcpuaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
> +mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
>          ((l1 eq l2) or (t1 == mls_priv));
>
>   # This is incomplete - similar constraints must be written for all classes
> diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
> --- a/tools/flask/policy/policy/modules/xen/xen.if
> +++ b/tools/flask/policy/policy/modules/xen/xen.if
> @@ -55,9 +55,9 @@ define(`create_domain_build_label', `
>   # manage_domain(priv, target)
>   #   Allow managing a running domain
>   define(`manage_domain', `
> -       allow $1 $2:domain { getdomaininfo getvcpuinfo getvcpuaffinity
> +       allow $1 $2:domain { getdomaininfo getvcpuinfo getaffinity
>                          getaddrsize pause unpause trigger shutdown destroy
> -                       setvcpuaffinity setdomainmaxmem };
> +                       setaffinity setdomainmaxmem };
>   ')
>
>   # migrate_domain_out(priv, target)
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -222,6 +222,7 @@ struct domain *domain_create(
>
>       spin_lock_init(&d->node_affinity_lock);
>       d->node_affinity = NODE_MASK_ALL;
> +    d->auto_node_affinity = 1;
>
>       spin_lock_init(&d->shutdown_lock);
>       d->shutdown_code = -1;
> @@ -362,11 +363,26 @@ void domain_update_node_affinity(struct
>           cpumask_or(cpumask, cpumask, online_affinity);
>       }
>
> -    for_each_online_node ( node )
> -        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
> -            node_set(node, nodemask);
> +    if ( d->auto_node_affinity )
> +    {
> +        /* Node-affinity is automatically computed from all vcpu-affinities */
> +        for_each_online_node ( node )
> +            if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
> +                node_set(node, nodemask);
>
> -    d->node_affinity = nodemask;
> +        d->node_affinity = nodemask;
> +    }
> +    else
> +    {
> +        /* Node-affinity is provided by someone else, just filter out cpus
> +         * that are either offline or not in the affinity of any vcpus. */
> +        for_each_node_mask ( node, d->node_affinity )
> +            if ( !cpumask_intersects(&node_to_cpumask(node), cpumask) )
> +                node_clear(node, d->node_affinity);
> +    }
> +
> +    sched_set_node_affinity(d, &d->node_affinity);
> +
>       spin_unlock(&d->node_affinity_lock);
>
>       free_cpumask_var(online_affinity);
> @@ -374,6 +390,36 @@ void domain_update_node_affinity(struct
>   }
>
>
> +int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
> +{
> +    /* Being affine with no nodes is just wrong */
> +    if ( nodes_empty(*affinity) )
> +        return -EINVAL;
> +
> +    spin_lock(&d->node_affinity_lock);
> +
> +    /*
> +     * Being/becoming explicitly affine to all nodes is not particularly
> +     * useful. Let's take it as the `reset node affinity` command.
> +     */
> +    if ( nodes_full(*affinity) )
> +    {
> +        d->auto_node_affinity = 1;
> +        goto out;
> +    }
> +
> +    d->auto_node_affinity = 0;
> +    d->node_affinity = *affinity;
> +
> +out:
> +    spin_unlock(&d->node_affinity_lock);
> +
> +    domain_update_node_affinity(d);
> +
> +    return 0;
> +}
> +
> +
>   struct domain *get_domain_by_id(domid_t dom)
>   {
>       struct domain *d;
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -609,6 +609,40 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>       }
>       break;
>
> +    case XEN_DOMCTL_setnodeaffinity:
> +    case XEN_DOMCTL_getnodeaffinity:
> +    {
> +        domid_t dom = op->domain;
> +        struct domain *d = rcu_lock_domain_by_id(dom);
> +
> +        ret = -ESRCH;
> +        if ( d == NULL )
> +            break;
> +
> +        ret = xsm_nodeaffinity(op->cmd, d);
> +        if ( ret )
> +            goto nodeaffinity_out;
> +
> +        if ( op->cmd == XEN_DOMCTL_setnodeaffinity )
> +        {
> +            nodemask_t new_affinity;
> +
> +            ret = xenctl_bitmap_to_nodemask(&new_affinity,
> +                                            &op->u.nodeaffinity.nodemap);
> +            if ( !ret )
> +                ret = domain_set_node_affinity(d, &new_affinity);
> +        }
> +        else
> +        {
> +            ret = nodemask_to_xenctl_bitmap(&op->u.nodeaffinity.nodemap,
> +                                            &d->node_affinity);
> +        }
> +
> +    nodeaffinity_out:
> +        rcu_unlock_domain(d);
> +    }
> +    break;
> +
>       case XEN_DOMCTL_setvcpuaffinity:
>       case XEN_DOMCTL_getvcpuaffinity:
>       {
> diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -217,6 +217,14 @@ static void cpuset_print(char *set, int
>       *set++ = '\0';
>   }
>
> +static void nodeset_print(char *set, int size, const nodemask_t *mask)
> +{
> +    *set++ = '[';
> +    set += nodelist_scnprintf(set, size-2, mask);
> +    *set++ = ']';
> +    *set++ = '\0';
> +}
> +
>   static void periodic_timer_print(char *str, int size, uint64_t period)
>   {
>       if ( period == 0 )
> @@ -272,6 +280,9 @@ static void dump_domains(unsigned char k
>
>           dump_pageframe_info(d);
>
> +        nodeset_print(tmpstr, sizeof(tmpstr), &d->node_affinity);
> +        printk("NODE affinity for domain %d: %s\n", d->domain_id, tmpstr);
> +
>           printk("VCPU information and callbacks for domain %u:\n",
>                  d->domain_id);
>           for_each_vcpu ( d, v )
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -269,6 +269,33 @@ static inline void
>       list_del_init(&svc->runq_elem);
>   }
>
> +/*
> + * Translates a node-affinity mask into a cpumask, so that we can use it
> + * during actual scheduling. The result contains all the cpus of all the
> + * nodes set in the original node-affinity mask.
> + *
> + * Note that any serialization needed to access the mask safely is the
> + * complete responsibility of the caller of this function/hook.
> + */
> +static void csched_set_node_affinity(
> +    const struct scheduler *ops,
> +    struct domain *d,
> +    nodemask_t *mask)
> +{
> +    struct csched_dom *sdom;
> +    int node;
> +
> +    /* Skip idle domain since it doesn't even have a node_affinity_cpumask */
> +    if ( unlikely(is_idle_domain(d)) )
> +        return;
> +
> +    sdom = CSCHED_DOM(d);
> +    cpumask_clear(sdom->node_affinity_cpumask);
> +    for_each_node_mask( node, *mask )
> +        cpumask_or(sdom->node_affinity_cpumask, sdom->node_affinity_cpumask,
> +                   &node_to_cpumask(node));
> +}
> +
>   #define for_each_csched_balance_step(__step) \
>       for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
>
> @@ -296,7 +323,8 @@ csched_balance_cpumask(const struct vcpu
>
>           cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
>
> -        if ( cpumask_full(sdom->node_affinity_cpumask) )
> +        if ( cpumask_full(sdom->node_affinity_cpumask) ||
> +             d->auto_node_affinity == 1 )
>               return -1;
>       }
>       else /* step == CSCHED_BALANCE_CPU_AFFINITY */
> @@ -1896,6 +1924,8 @@ const struct scheduler sched_credit_def
>       .adjust         = csched_dom_cntl,
>       .adjust_global  = csched_sys_cntl,
>
> +    .set_node_affinity  = csched_set_node_affinity,
> +
>       .pick_cpu       = csched_cpu_pick,
>       .do_schedule    = csched_schedule,
>
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -590,6 +590,11 @@ int cpu_disable_scheduler(unsigned int c
>       return ret;
>   }
>
> +void sched_set_node_affinity(struct domain *d, nodemask_t *mask)
> +{
> +    SCHED_OP(DOM2OP(d), set_node_affinity, d, mask);
> +}
> +
>   int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
>   {
>       cpumask_t online_affinity;
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -279,6 +279,16 @@ typedef struct xen_domctl_getvcpuinfo xe
>   DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvcpuinfo_t);
>
>
> +/* Get/set the NUMA node(s) with which the guest has affinity. */
> +/* XEN_DOMCTL_setnodeaffinity */
> +/* XEN_DOMCTL_getnodeaffinity */
> +struct xen_domctl_nodeaffinity {
> +    struct xenctl_bitmap nodemap;/* IN */
> +};
> +typedef struct xen_domctl_nodeaffinity xen_domctl_nodeaffinity_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_nodeaffinity_t);
> +
> +
>   /* Get/set which physical cpus a vcpu can execute on. */
>   /* XEN_DOMCTL_setvcpuaffinity */
>   /* XEN_DOMCTL_getvcpuaffinity */
> @@ -907,6 +917,8 @@ struct xen_domctl {
>   #define XEN_DOMCTL_audit_p2m                     65
>   #define XEN_DOMCTL_set_virq_handler              66
>   #define XEN_DOMCTL_set_broken_page_p2m           67
> +#define XEN_DOMCTL_setnodeaffinity               68
> +#define XEN_DOMCTL_getnodeaffinity               69
>   #define XEN_DOMCTL_gdbsx_guestmemio            1000
>   #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>   #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -920,6 +932,7 @@ struct xen_domctl {
>           struct xen_domctl_getpageframeinfo  getpageframeinfo;
>           struct xen_domctl_getpageframeinfo2 getpageframeinfo2;
>           struct xen_domctl_getpageframeinfo3 getpageframeinfo3;
> +        struct xen_domctl_nodeaffinity      nodeaffinity;
>           struct xen_domctl_vcpuaffinity      vcpuaffinity;
>           struct xen_domctl_shadow_op         shadow_op;
>           struct xen_domctl_max_mem           max_mem;
> diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
> --- a/xen/include/xen/nodemask.h
> +++ b/xen/include/xen/nodemask.h
> @@ -8,8 +8,9 @@
>    * See detailed comments in the file linux/bitmap.h describing the
>    * data type on which these nodemasks are based.
>    *
> - * For details of nodemask_scnprintf() and nodemask_parse(),
> - * see bitmap_scnprintf() and bitmap_parse() in lib/bitmap.c.
> + * For details of nodemask_scnprintf(), nodelist_scnprintf() and
> + * nodemask_parse(), see bitmap_scnprintf() and bitmap_parse()
> + * in lib/bitmap.c.
>    *
>    * The available nodemask operations are:
>    *
> @@ -50,6 +51,7 @@
>    * unsigned long *nodes_addr(mask)     Array of unsigned long's in mask
>    *
>    * int nodemask_scnprintf(buf, len, mask) Format nodemask for printing
> + * int nodelist_scnprintf(buf, len, mask) Format nodemask as a list for printing
>    * int nodemask_parse(ubuf, ulen, mask)        Parse ascii string as nodemask
>    *
>    * for_each_node_mask(node, mask)      for-loop node over mask
> @@ -292,6 +294,14 @@ static inline int __cycle_node(int n, co
>
>   #define nodes_addr(src) ((src).bits)
>
> +#define nodelist_scnprintf(buf, len, src) \
> +                       __nodelist_scnprintf((buf), (len), (src), MAX_NUMNODES)
> +static inline int __nodelist_scnprintf(char *buf, int len,
> +                                       const nodemask_t *srcp, int nbits)
> +{
> +       return bitmap_scnlistprintf(buf, len, srcp->bits, nbits);
> +}
> +
>   #if 0
>   #define nodemask_scnprintf(buf, len, src) \
>                          __nodemask_scnprintf((buf), (len), &(src), MAX_NUMNODES)
> diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
> --- a/xen/include/xen/sched-if.h
> +++ b/xen/include/xen/sched-if.h
> @@ -184,6 +184,8 @@ struct scheduler {
>                                       struct xen_domctl_scheduler_op *);
>       int          (*adjust_global)  (const struct scheduler *,
>                                       struct xen_sysctl_scheduler_op *);
> +    void         (*set_node_affinity) (const struct scheduler *,
> +                                       struct domain *, nodemask_t *);
>       void         (*dump_settings)  (const struct scheduler *);
>       void         (*dump_cpu_state) (const struct scheduler *, int);
>
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -359,8 +359,12 @@ struct domain
>       /* Various mem_events */
>       struct mem_event_per_domain *mem_event;
>
> -    /* Currently computed from union of all vcpu cpu-affinity masks. */
> +    /*
> +     * Can be specified by the user. If that is not the case, it is
> +     * computed from the union of all the vcpu cpu-affinity masks.
> +     */
>       nodemask_t node_affinity;
> +    int auto_node_affinity;
>       unsigned int last_alloc_node;
>       spinlock_t node_affinity_lock;
>   };
> @@ -429,6 +433,7 @@ static inline void get_knownalive_domain
>       ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
>   }
>
> +int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
>   void domain_update_node_affinity(struct domain *d);
>
>   struct domain *domain_create(
> @@ -543,6 +548,7 @@ void sched_destroy_domain(struct domain
>   int sched_move_domain(struct domain *d, struct cpupool *c);
>   long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
>   long sched_adjust_global(struct xen_sysctl_scheduler_op *);
> +void sched_set_node_affinity(struct domain *, nodemask_t *);
>   int  sched_id(void);
>   void sched_tick_suspend(void);
>   void sched_tick_resume(void);
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -56,6 +56,7 @@ struct xsm_operations {
>       int (*domain_create) (struct domain *d, u32 ssidref);
>       int (*max_vcpus) (struct domain *d);
>       int (*destroydomain) (struct domain *d);
> +    int (*nodeaffinity) (int cmd, struct domain *d);
>       int (*vcpuaffinity) (int cmd, struct domain *d);
>       int (*scheduler) (struct domain *d);
>       int (*getdomaininfo) (struct domain *d);
> @@ -229,6 +230,11 @@ static inline int xsm_destroydomain (str
>       return xsm_call(destroydomain(d));
>   }
>
> +static inline int xsm_nodeaffinity (int cmd, struct domain *d)
> +{
> +    return xsm_call(nodeaffinity(cmd, d));
> +}
> +
>   static inline int xsm_vcpuaffinity (int cmd, struct domain *d)
>   {
>       return xsm_call(vcpuaffinity(cmd, d));
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -54,6 +54,11 @@ static int dummy_destroydomain (struct d
>       return 0;
>   }
>
> +static int dummy_nodeaffinity (int cmd, struct domain *d)
> +{
> +    return 0;
> +}
> +
>   static int dummy_vcpuaffinity (int cmd, struct domain *d)
>   {
>       return 0;
> @@ -634,6 +639,7 @@ void xsm_fixup_ops (struct xsm_operation
>       set_to_dummy_if_null(ops, domain_create);
>       set_to_dummy_if_null(ops, max_vcpus);
>       set_to_dummy_if_null(ops, destroydomain);
> +    set_to_dummy_if_null(ops, nodeaffinity);
>       set_to_dummy_if_null(ops, vcpuaffinity);
>       set_to_dummy_if_null(ops, scheduler);
>       set_to_dummy_if_null(ops, getdomaininfo);
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -521,17 +521,19 @@ static int flask_destroydomain(struct do
>                              DOMAIN__DESTROY);
>   }
>
> -static int flask_vcpuaffinity(int cmd, struct domain *d)
> +static int flask_affinity(int cmd, struct domain *d)
>   {
>       u32 perm;
>
>       switch ( cmd )
>       {
>       case XEN_DOMCTL_setvcpuaffinity:
> -        perm = DOMAIN__SETVCPUAFFINITY;
> +    case XEN_DOMCTL_setnodeaffinity:
> +        perm = DOMAIN__SETAFFINITY;
>           break;
>       case XEN_DOMCTL_getvcpuaffinity:
> -        perm = DOMAIN__GETVCPUAFFINITY;
> +    case XEN_DOMCTL_getnodeaffinity:
> +        perm = DOMAIN__GETAFFINITY;
>           break;
>       default:
>           return -EPERM;
> @@ -1473,7 +1475,8 @@ static struct xsm_operations flask_ops =
>       .domain_create = flask_domain_create,
>       .max_vcpus = flask_max_vcpus,
>       .destroydomain = flask_destroydomain,
> -    .vcpuaffinity = flask_vcpuaffinity,
> +    .nodeaffinity = flask_affinity,
> +    .vcpuaffinity = flask_affinity,
>       .scheduler = flask_scheduler,
>       .getdomaininfo = flask_getdomaininfo,
>       .getvcpucontext = flask_getvcpucontext,
> diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
> --- a/xen/xsm/flask/include/av_perm_to_string.h
> +++ b/xen/xsm/flask/include/av_perm_to_string.h
> @@ -37,8 +37,8 @@
>      S_(SECCLASS_DOMAIN, DOMAIN__TRANSITION, "transition")
>      S_(SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS, "max_vcpus")
>      S_(SECCLASS_DOMAIN, DOMAIN__DESTROY, "destroy")
> -   S_(SECCLASS_DOMAIN, DOMAIN__SETVCPUAFFINITY, "setvcpuaffinity")
> -   S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUAFFINITY, "getvcpuaffinity")
> +   S_(SECCLASS_DOMAIN, DOMAIN__SETAFFINITY, "setaffinity")
> +   S_(SECCLASS_DOMAIN, DOMAIN__GETAFFINITY, "getaffinity")
>      S_(SECCLASS_DOMAIN, DOMAIN__SCHEDULER, "scheduler")
>      S_(SECCLASS_DOMAIN, DOMAIN__GETDOMAININFO, "getdomaininfo")
>      S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUINFO, "getvcpuinfo")
> diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
> --- a/xen/xsm/flask/include/av_permissions.h
> +++ b/xen/xsm/flask/include/av_permissions.h
> @@ -38,8 +38,8 @@
>   #define DOMAIN__TRANSITION                        0x00000020UL
>   #define DOMAIN__MAX_VCPUS                         0x00000040UL
>   #define DOMAIN__DESTROY                           0x00000080UL
> -#define DOMAIN__SETVCPUAFFINITY                   0x00000100UL
> -#define DOMAIN__GETVCPUAFFINITY                   0x00000200UL
> +#define DOMAIN__SETAFFINITY                       0x00000100UL
> +#define DOMAIN__GETAFFINITY                       0x00000200UL
>   #define DOMAIN__SCHEDULER                         0x00000400UL
>   #define DOMAIN__GETDOMAININFO                     0x00000800UL
>   #define DOMAIN__GETVCPUINFO                       0x00001000UL


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 15:24:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4SD-0006jL-4l; Fri, 21 Dec 2012 15:24:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm4SB-0006jG-Rk
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:24:04 +0000
Received: from [85.158.139.211:12432] by server-15.bemta-5.messagelabs.com id
	B2/D3-20523-31F74D05; Fri, 21 Dec 2012 15:24:03 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1356103440!21415916!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13977 invoked from network); 21 Dec 2012 15:24:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:24:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1560423"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 15:23:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 10:23:58 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm4S6-00051l-OV;
	Fri, 21 Dec 2012 15:23:58 +0000
Message-ID: <50D47D99.9090209@eu.citrix.com>
Date: Fri, 21 Dec 2012 15:17:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<7e8c5e21c3ae1c267c23.1355944040@Solace>
In-Reply-To: <7e8c5e21c3ae1c267c23.1355944040@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 04 of 10 v2] xen: allow for explicitly
	specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> Make it possible to pass the node-affinity of a domain to the hypervisor
> from the upper layers, instead of always being computed automatically.
>
> Note that this also required generalizing the Flask hooks for setting
> and getting the affinity, so that they now deal with both vcpu and
> node affinity.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

I can't comment on the XSM stuff -- is any part of the "getvcpuaffinity" 
stuff for XSM a public interface that needs to be backwards-compatible?  
I.e., is s/vcpu//; OK from an interface point of view?

WRT everything else:
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
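
The set/reset convention this patch introduces in domain_set_node_affinity() (an empty node mask is rejected with -EINVAL, a full mask is treated as a "revert to automatic placement" command, anything else becomes the explicit affinity) can be modelled in isolation. The following is a simplified standalone sketch with invented types (a plain 8-bit mask standing in for nodemask_t), not the hypervisor code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the hypervisor's nodemask handling:
 * an 8-node system represented as a plain bitmask (invented types,
 * not Xen's nodemask_t). */
#define NODE_MASK_ALL ((uint8_t)0xff)

struct dom {
    uint8_t node_affinity;   /* bit i set => affine to node i */
    int auto_node_affinity;  /* 1 => computed from vcpu affinities */
};

/* Mirrors the decision logic of domain_set_node_affinity():
 * - empty mask -> error (being affine to no nodes is meaningless)
 * - full mask  -> "reset": go back to automatic computation
 * - otherwise  -> record the explicit, user-provided affinity */
static int set_node_affinity(struct dom *d, uint8_t affinity)
{
    if (affinity == 0)
        return -22;  /* -EINVAL */

    if (affinity == NODE_MASK_ALL) {
        d->auto_node_affinity = 1;
        return 0;
    }

    d->auto_node_affinity = 0;
    d->node_affinity = affinity;
    return 0;
}
```

(In the real patch the locking and the follow-up call to domain_update_node_affinity() do the rest of the work; the sketch only shows the three-way classification of the incoming mask.)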

> ---
> Changes from v1:
>   * added the missing dummy hook for nodeaffinity;
>   * let the permission renaming affect flask policies too.
>
> diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
> --- a/tools/flask/policy/policy/flask/access_vectors
> +++ b/tools/flask/policy/policy/flask/access_vectors
> @@ -47,8 +47,8 @@ class domain
>       transition
>       max_vcpus
>       destroy
> -    setvcpuaffinity
> -       getvcpuaffinity
> +    setaffinity
> +       getaffinity
>          scheduler
>          getdomaininfo
>          getvcpuinfo
> diff --git a/tools/flask/policy/policy/mls b/tools/flask/policy/policy/mls
> --- a/tools/flask/policy/policy/mls
> +++ b/tools/flask/policy/policy/mls
> @@ -70,11 +70,11 @@ mlsconstrain domain transition
>          (( h1 dom h2 ) and (( l1 eq l2 ) or (t1 == mls_priv)));
>
>   # all the domain "read" ops
> -mlsconstrain domain { getvcpuaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
> +mlsconstrain domain { getaffinity getdomaininfo getvcpuinfo getvcpucontext getaddrsize getextvcpucontext }
>          ((l1 dom l2) or (t1 == mls_priv));
>
>   # all the domain "write" ops
> -mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setvcpuaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
> +mlsconstrain domain { setvcpucontext pause unpause resume create max_vcpus destroy setaffinity scheduler setdomainmaxmem setdomainhandle setdebugging hypercall settime set_target shutdown setaddrsize trigger setextvcpucontext }
>          ((l1 eq l2) or (t1 == mls_priv));
>
>   # This is incomplete - similar constraints must be written for all classes
> diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
> --- a/tools/flask/policy/policy/modules/xen/xen.if
> +++ b/tools/flask/policy/policy/modules/xen/xen.if
> @@ -55,9 +55,9 @@ define(`create_domain_build_label', `
>   # manage_domain(priv, target)
>   #   Allow managing a running domain
>   define(`manage_domain', `
> -       allow $1 $2:domain { getdomaininfo getvcpuinfo getvcpuaffinity
> +       allow $1 $2:domain { getdomaininfo getvcpuinfo getaffinity
>                          getaddrsize pause unpause trigger shutdown destroy
> -                       setvcpuaffinity setdomainmaxmem };
> +                       setaffinity setdomainmaxmem };
>   ')
>
>   # migrate_domain_out(priv, target)
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -222,6 +222,7 @@ struct domain *domain_create(
>
>       spin_lock_init(&d->node_affinity_lock);
>       d->node_affinity = NODE_MASK_ALL;
> +    d->auto_node_affinity = 1;
>
>       spin_lock_init(&d->shutdown_lock);
>       d->shutdown_code = -1;
> @@ -362,11 +363,26 @@ void domain_update_node_affinity(struct
>           cpumask_or(cpumask, cpumask, online_affinity);
>       }
>
> -    for_each_online_node ( node )
> -        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
> -            node_set(node, nodemask);
> +    if ( d->auto_node_affinity )
> +    {
> +        /* Node-affinity is automatically computed from all vcpu-affinities */
> +        for_each_online_node ( node )
> +            if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
> +                node_set(node, nodemask);
>
> -    d->node_affinity = nodemask;
> +        d->node_affinity = nodemask;
> +    }
> +    else
> +    {
> +        /* Node-affinity is provided by someone else; just filter out cpus
> +         * that are either offline or not in the affinity of any vcpus. */
> +        for_each_node_mask ( node, d->node_affinity )
> +            if ( !cpumask_intersects(&node_to_cpumask(node), cpumask) )
> +                node_clear(node, d->node_affinity);
> +    }
> +
> +    sched_set_node_affinity(d, &d->node_affinity);
> +
>       spin_unlock(&d->node_affinity_lock);
>
>       free_cpumask_var(online_affinity);
> @@ -374,6 +390,36 @@ void domain_update_node_affinity(struct
>   }
>
>
> +int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
> +{
> +    /* Being affine with no nodes is just wrong */
> +    if ( nodes_empty(*affinity) )
> +        return -EINVAL;
> +
> +    spin_lock(&d->node_affinity_lock);
> +
> +    /*
> +     * Being/becoming explicitly affine to all nodes is not particularly
> +     * useful. Let's take it as the `reset node affinity` command.
> +     */
> +    if ( nodes_full(*affinity) )
> +    {
> +        d->auto_node_affinity = 1;
> +        goto out;
> +    }
> +
> +    d->auto_node_affinity = 0;
> +    d->node_affinity = *affinity;
> +
> +out:
> +    spin_unlock(&d->node_affinity_lock);
> +
> +    domain_update_node_affinity(d);
> +
> +    return 0;
> +}
> +
> +
>   struct domain *get_domain_by_id(domid_t dom)
>   {
>       struct domain *d;
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -609,6 +609,40 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>       }
>       break;
>
> +    case XEN_DOMCTL_setnodeaffinity:
> +    case XEN_DOMCTL_getnodeaffinity:
> +    {
> +        domid_t dom = op->domain;
> +        struct domain *d = rcu_lock_domain_by_id(dom);
> +
> +        ret = -ESRCH;
> +        if ( d == NULL )
> +            break;
> +
> +        ret = xsm_nodeaffinity(op->cmd, d);
> +        if ( ret )
> +            goto nodeaffinity_out;
> +
> +        if ( op->cmd == XEN_DOMCTL_setnodeaffinity )
> +        {
> +            nodemask_t new_affinity;
> +
> +            ret = xenctl_bitmap_to_nodemask(&new_affinity,
> +                                            &op->u.nodeaffinity.nodemap);
> +            if ( !ret )
> +                ret = domain_set_node_affinity(d, &new_affinity);
> +        }
> +        else
> +        {
> +            ret = nodemask_to_xenctl_bitmap(&op->u.nodeaffinity.nodemap,
> +                                            &d->node_affinity);
> +        }
> +
> +    nodeaffinity_out:
> +        rcu_unlock_domain(d);
> +    }
> +    break;
> +
>       case XEN_DOMCTL_setvcpuaffinity:
>       case XEN_DOMCTL_getvcpuaffinity:
>       {
> diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -217,6 +217,14 @@ static void cpuset_print(char *set, int
>       *set++ = '\0';
>   }
>
> +static void nodeset_print(char *set, int size, const nodemask_t *mask)
> +{
> +    *set++ = '[';
> +    set += nodelist_scnprintf(set, size-2, mask);
> +    *set++ = ']';
> +    *set++ = '\0';
> +}
> +
>   static void periodic_timer_print(char *str, int size, uint64_t period)
>   {
>       if ( period == 0 )
> @@ -272,6 +280,9 @@ static void dump_domains(unsigned char k
>
>           dump_pageframe_info(d);
>
> +        nodeset_print(tmpstr, sizeof(tmpstr), &d->node_affinity);
> +        printk("NODE affinity for domain %d: %s\n", d->domain_id, tmpstr);
> +
>           printk("VCPU information and callbacks for domain %u:\n",
>                  d->domain_id);
>           for_each_vcpu ( d, v )
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -269,6 +269,33 @@ static inline void
>       list_del_init(&svc->runq_elem);
>   }
>
> +/*
> + * Translates a node-affinity mask into a cpumask, so that we can use it
> + * during actual scheduling. The result contains all the cpus of all the
> + * nodes set in the original node-affinity mask.
> + *
> + * Note that any serialization needed to access the mask safely is the
> + * complete responsibility of the caller of this function/hook.
> + */
> +static void csched_set_node_affinity(
> +    const struct scheduler *ops,
> +    struct domain *d,
> +    nodemask_t *mask)
> +{
> +    struct csched_dom *sdom;
> +    int node;
> +
> +    /* Skip idle domain since it doesn't even have a node_affinity_cpumask */
> +    if ( unlikely(is_idle_domain(d)) )
> +        return;
> +
> +    sdom = CSCHED_DOM(d);
> +    cpumask_clear(sdom->node_affinity_cpumask);
> +    for_each_node_mask( node, *mask )
> +        cpumask_or(sdom->node_affinity_cpumask, sdom->node_affinity_cpumask,
> +                   &node_to_cpumask(node));
> +}
> +
>   #define for_each_csched_balance_step(__step) \
>       for ( (__step) = CSCHED_BALANCE_LAST; (__step) >= 0; (__step)-- )
>
> @@ -296,7 +323,8 @@ csched_balance_cpumask(const struct vcpu
>
>           cpumask_and(mask, sdom->node_affinity_cpumask, vc->cpu_affinity);
>
> -        if ( cpumask_full(sdom->node_affinity_cpumask) )
> +        if ( cpumask_full(sdom->node_affinity_cpumask) ||
> +             d->auto_node_affinity == 1 )
>               return -1;
>       }
>       else /* step == CSCHED_BALANCE_CPU_AFFINITY */
> @@ -1896,6 +1924,8 @@ const struct scheduler sched_credit_def
>       .adjust         = csched_dom_cntl,
>       .adjust_global  = csched_sys_cntl,
>
> +    .set_node_affinity  = csched_set_node_affinity,
> +
>       .pick_cpu       = csched_cpu_pick,
>       .do_schedule    = csched_schedule,
>
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -590,6 +590,11 @@ int cpu_disable_scheduler(unsigned int c
>       return ret;
>   }
>
> +void sched_set_node_affinity(struct domain *d, nodemask_t *mask)
> +{
> +    SCHED_OP(DOM2OP(d), set_node_affinity, d, mask);
> +}
> +
>   int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
>   {
>       cpumask_t online_affinity;
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -279,6 +279,16 @@ typedef struct xen_domctl_getvcpuinfo xe
>   DEFINE_XEN_GUEST_HANDLE(xen_domctl_getvcpuinfo_t);
>
>
> +/* Get/set the NUMA node(s) with which the guest has affinity. */
> +/* XEN_DOMCTL_setnodeaffinity */
> +/* XEN_DOMCTL_getnodeaffinity */
> +struct xen_domctl_nodeaffinity {
> +    struct xenctl_bitmap nodemap;/* IN */
> +};
> +typedef struct xen_domctl_nodeaffinity xen_domctl_nodeaffinity_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_nodeaffinity_t);
> +
> +
>   /* Get/set which physical cpus a vcpu can execute on. */
>   /* XEN_DOMCTL_setvcpuaffinity */
>   /* XEN_DOMCTL_getvcpuaffinity */
> @@ -907,6 +917,8 @@ struct xen_domctl {
>   #define XEN_DOMCTL_audit_p2m                     65
>   #define XEN_DOMCTL_set_virq_handler              66
>   #define XEN_DOMCTL_set_broken_page_p2m           67
> +#define XEN_DOMCTL_setnodeaffinity               68
> +#define XEN_DOMCTL_getnodeaffinity               69
>   #define XEN_DOMCTL_gdbsx_guestmemio            1000
>   #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>   #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -920,6 +932,7 @@ struct xen_domctl {
>           struct xen_domctl_getpageframeinfo  getpageframeinfo;
>           struct xen_domctl_getpageframeinfo2 getpageframeinfo2;
>           struct xen_domctl_getpageframeinfo3 getpageframeinfo3;
> +        struct xen_domctl_nodeaffinity      nodeaffinity;
>           struct xen_domctl_vcpuaffinity      vcpuaffinity;
>           struct xen_domctl_shadow_op         shadow_op;
>           struct xen_domctl_max_mem           max_mem;
> diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
> --- a/xen/include/xen/nodemask.h
> +++ b/xen/include/xen/nodemask.h
> @@ -8,8 +8,9 @@
>    * See detailed comments in the file linux/bitmap.h describing the
>    * data type on which these nodemasks are based.
>    *
> - * For details of nodemask_scnprintf() and nodemask_parse(),
> - * see bitmap_scnprintf() and bitmap_parse() in lib/bitmap.c.
> + * For details of nodemask_scnprintf(), nodelist_scnprintf() and
> + * nodemask_parse(), see bitmap_scnprintf() and bitmap_parse()
> + * in lib/bitmap.c.
>    *
>    * The available nodemask operations are:
>    *
> @@ -50,6 +51,7 @@
>    * unsigned long *nodes_addr(mask)     Array of unsigned long's in mask
>    *
>    * int nodemask_scnprintf(buf, len, mask) Format nodemask for printing
> + * int nodelist_scnprintf(buf, len, mask) Format nodemask as a list for printing
>    * int nodemask_parse(ubuf, ulen, mask)        Parse ascii string as nodemask
>    *
>    * for_each_node_mask(node, mask)      for-loop node over mask
> @@ -292,6 +294,14 @@ static inline int __cycle_node(int n, co
>
>   #define nodes_addr(src) ((src).bits)
>
> +#define nodelist_scnprintf(buf, len, src) \
> +                       __nodelist_scnprintf((buf), (len), (src), MAX_NUMNODES)
> +static inline int __nodelist_scnprintf(char *buf, int len,
> +                                       const nodemask_t *srcp, int nbits)
> +{
> +       return bitmap_scnlistprintf(buf, len, srcp->bits, nbits);
> +}
> +
>   #if 0
>   #define nodemask_scnprintf(buf, len, src) \
>                          __nodemask_scnprintf((buf), (len), &(src), MAX_NUMNODES)
> diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
> --- a/xen/include/xen/sched-if.h
> +++ b/xen/include/xen/sched-if.h
> @@ -184,6 +184,8 @@ struct scheduler {
>                                       struct xen_domctl_scheduler_op *);
>       int          (*adjust_global)  (const struct scheduler *,
>                                       struct xen_sysctl_scheduler_op *);
> +    void         (*set_node_affinity) (const struct scheduler *,
> +                                       struct domain *, nodemask_t *);
>       void         (*dump_settings)  (const struct scheduler *);
>       void         (*dump_cpu_state) (const struct scheduler *, int);
>
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -359,8 +359,12 @@ struct domain
>       /* Various mem_events */
>       struct mem_event_per_domain *mem_event;
>
> -    /* Currently computed from union of all vcpu cpu-affinity masks. */
> +    /*
> +     * Can be specified by the user. If that is not the case, it is
> +     * computed from the union of all the vcpu cpu-affinity masks.
> +     */
>       nodemask_t node_affinity;
> +    int auto_node_affinity;
>       unsigned int last_alloc_node;
>       spinlock_t node_affinity_lock;
>   };
> @@ -429,6 +433,7 @@ static inline void get_knownalive_domain
>       ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
>   }
>
> +int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
>   void domain_update_node_affinity(struct domain *d);
>
>   struct domain *domain_create(
> @@ -543,6 +548,7 @@ void sched_destroy_domain(struct domain
>   int sched_move_domain(struct domain *d, struct cpupool *c);
>   long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
>   long sched_adjust_global(struct xen_sysctl_scheduler_op *);
> +void sched_set_node_affinity(struct domain *, nodemask_t *);
>   int  sched_id(void);
>   void sched_tick_suspend(void);
>   void sched_tick_resume(void);
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -56,6 +56,7 @@ struct xsm_operations {
>       int (*domain_create) (struct domain *d, u32 ssidref);
>       int (*max_vcpus) (struct domain *d);
>       int (*destroydomain) (struct domain *d);
> +    int (*nodeaffinity) (int cmd, struct domain *d);
>       int (*vcpuaffinity) (int cmd, struct domain *d);
>       int (*scheduler) (struct domain *d);
>       int (*getdomaininfo) (struct domain *d);
> @@ -229,6 +230,11 @@ static inline int xsm_destroydomain (str
>       return xsm_call(destroydomain(d));
>   }
>
> +static inline int xsm_nodeaffinity (int cmd, struct domain *d)
> +{
> +    return xsm_call(nodeaffinity(cmd, d));
> +}
> +
>   static inline int xsm_vcpuaffinity (int cmd, struct domain *d)
>   {
>       return xsm_call(vcpuaffinity(cmd, d));
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -54,6 +54,11 @@ static int dummy_destroydomain (struct d
>       return 0;
>   }
>
> +static int dummy_nodeaffinity (int cmd, struct domain *d)
> +{
> +    return 0;
> +}
> +
>   static int dummy_vcpuaffinity (int cmd, struct domain *d)
>   {
>       return 0;
> @@ -634,6 +639,7 @@ void xsm_fixup_ops (struct xsm_operation
>       set_to_dummy_if_null(ops, domain_create);
>       set_to_dummy_if_null(ops, max_vcpus);
>       set_to_dummy_if_null(ops, destroydomain);
> +    set_to_dummy_if_null(ops, nodeaffinity);
>       set_to_dummy_if_null(ops, vcpuaffinity);
>       set_to_dummy_if_null(ops, scheduler);
>       set_to_dummy_if_null(ops, getdomaininfo);
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -521,17 +521,19 @@ static int flask_destroydomain(struct do
>                              DOMAIN__DESTROY);
>   }
>
> -static int flask_vcpuaffinity(int cmd, struct domain *d)
> +static int flask_affinity(int cmd, struct domain *d)
>   {
>       u32 perm;
>
>       switch ( cmd )
>       {
>       case XEN_DOMCTL_setvcpuaffinity:
> -        perm = DOMAIN__SETVCPUAFFINITY;
> +    case XEN_DOMCTL_setnodeaffinity:
> +        perm = DOMAIN__SETAFFINITY;
>           break;
>       case XEN_DOMCTL_getvcpuaffinity:
> -        perm = DOMAIN__GETVCPUAFFINITY;
> +    case XEN_DOMCTL_getnodeaffinity:
> +        perm = DOMAIN__GETAFFINITY;
>           break;
>       default:
>           return -EPERM;
> @@ -1473,7 +1475,8 @@ static struct xsm_operations flask_ops =
>       .domain_create = flask_domain_create,
>       .max_vcpus = flask_max_vcpus,
>       .destroydomain = flask_destroydomain,
> -    .vcpuaffinity = flask_vcpuaffinity,
> +    .nodeaffinity = flask_affinity,
> +    .vcpuaffinity = flask_affinity,
>       .scheduler = flask_scheduler,
>       .getdomaininfo = flask_getdomaininfo,
>       .getvcpucontext = flask_getvcpucontext,
> diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
> --- a/xen/xsm/flask/include/av_perm_to_string.h
> +++ b/xen/xsm/flask/include/av_perm_to_string.h
> @@ -37,8 +37,8 @@
>      S_(SECCLASS_DOMAIN, DOMAIN__TRANSITION, "transition")
>      S_(SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS, "max_vcpus")
>      S_(SECCLASS_DOMAIN, DOMAIN__DESTROY, "destroy")
> -   S_(SECCLASS_DOMAIN, DOMAIN__SETVCPUAFFINITY, "setvcpuaffinity")
> -   S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUAFFINITY, "getvcpuaffinity")
> +   S_(SECCLASS_DOMAIN, DOMAIN__SETAFFINITY, "setaffinity")
> +   S_(SECCLASS_DOMAIN, DOMAIN__GETAFFINITY, "getaffinity")
>      S_(SECCLASS_DOMAIN, DOMAIN__SCHEDULER, "scheduler")
>      S_(SECCLASS_DOMAIN, DOMAIN__GETDOMAININFO, "getdomaininfo")
>      S_(SECCLASS_DOMAIN, DOMAIN__GETVCPUINFO, "getvcpuinfo")
> diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
> --- a/xen/xsm/flask/include/av_permissions.h
> +++ b/xen/xsm/flask/include/av_permissions.h
> @@ -38,8 +38,8 @@
>   #define DOMAIN__TRANSITION                        0x00000020UL
>   #define DOMAIN__MAX_VCPUS                         0x00000040UL
>   #define DOMAIN__DESTROY                           0x00000080UL
> -#define DOMAIN__SETVCPUAFFINITY                   0x00000100UL
> -#define DOMAIN__GETVCPUAFFINITY                   0x00000200UL
> +#define DOMAIN__SETAFFINITY                       0x00000100UL
> +#define DOMAIN__GETAFFINITY                       0x00000200UL
>   #define DOMAIN__SCHEDULER                         0x00000400UL
>   #define DOMAIN__GETDOMAININFO                     0x00000800UL
>   #define DOMAIN__GETVCPUINFO                       0x00001000UL


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 15:26:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:26:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4UA-0006pz-Rw; Fri, 21 Dec 2012 15:26:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm4U9-0006pm-3i
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:26:05 +0000
Received: from [85.158.143.99:11729] by server-1.bemta-4.messagelabs.com id
	4F/6C-28401-C8F74D05; Fri, 21 Dec 2012 15:26:04 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356103561!25222633!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16199 invoked from network); 21 Dec 2012 15:26:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:26:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1481932"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 15:26:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 10:26:00 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm4U4-00053U-C1;
	Fri, 21 Dec 2012 15:26:00 +0000
Message-ID: <50D47E12.3040803@eu.citrix.com>
Date: Fri, 21 Dec 2012 15:19:46 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<61299b4cdc2abbdf9bfb.1355944041@Solace>
In-Reply-To: <61299b4cdc2abbdf9bfb.1355944041@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 05 of 10 v2] libxc: allow for explicitly
	specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> By providing the proper get/set interface and wiring them
> to the new domctl-s from the previous commit.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

I haven't done a detailed review, but everything looks OK:

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -110,6 +110,83 @@ int xc_domain_shutdown(xc_interface *xch
>   }
>   
>   
> +int xc_domain_node_setaffinity(xc_interface *xch,
> +                               uint32_t domid,
> +                               xc_nodemap_t nodemap)
> +{
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> +    int ret = -1;
> +    int nodesize;
> +
> +    nodesize = xc_get_nodemap_size(xch);
> +    if (!nodesize)
> +    {
> +        PERROR("Could not get number of nodes");
> +        goto out;
> +    }
> +
> +    local = xc_hypercall_buffer_alloc(xch, local, nodesize);
> +    if ( local == NULL )
> +    {
> +        PERROR("Could not allocate memory for setnodeaffinity domctl hypercall");
> +        goto out;
> +    }
> +
> +    domctl.cmd = XEN_DOMCTL_setnodeaffinity;
> +    domctl.domain = (domid_t)domid;
> +
> +    memcpy(local, nodemap, nodesize);
> +    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
> +    domctl.u.nodeaffinity.nodemap.nr_elems = nodesize * 8;
> +
> +    ret = do_domctl(xch, &domctl);
> +
> +    xc_hypercall_buffer_free(xch, local);
> +
> + out:
> +    return ret;
> +}
> +
> +int xc_domain_node_getaffinity(xc_interface *xch,
> +                               uint32_t domid,
> +                               xc_nodemap_t nodemap)
> +{
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> +    int ret = -1;
> +    int nodesize;
> +
> +    nodesize = xc_get_nodemap_size(xch);
> +    if (!nodesize)
> +    {
> +        PERROR("Could not get number of nodes");
> +        goto out;
> +    }
> +
> +    local = xc_hypercall_buffer_alloc(xch, local, nodesize);
> +    if ( local == NULL )
> +    {
> +        PERROR("Could not allocate memory for getnodeaffinity domctl hypercall");
> +        goto out;
> +    }
> +
> +    domctl.cmd = XEN_DOMCTL_getnodeaffinity;
> +    domctl.domain = (domid_t)domid;
> +
> +    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
> +    domctl.u.nodeaffinity.nodemap.nr_elems = nodesize * 8;
> +
> +    ret = do_domctl(xch, &domctl);
> +
> +    memcpy(nodemap, local, nodesize);
> +
> +    xc_hypercall_buffer_free(xch, local);
> +
> + out:
> +    return ret;
> +}
> +
>   int xc_vcpu_setaffinity(xc_interface *xch,
>                           uint32_t domid,
>                           int vcpu,
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -521,6 +521,32 @@ int xc_watchdog(xc_interface *xch,
>   		uint32_t id,
>   		uint32_t timeout);
>   
> +/**
> + * This function explicitly sets the host NUMA nodes the domain will
> + * have affinity with.
> + *
> + * @parm xch a handle to an open hypervisor interface.
> + * @parm domid the domain id one wants to set the affinity of.
> + * @parm nodemap the map of the affine nodes.
> + * @return 0 on success, -1 on failure.
> + */
> +int xc_domain_node_setaffinity(xc_interface *xch,
> +                               uint32_t domid,
> +                               xc_nodemap_t nodemap);
> +
> +/**
> + * This function retrieves the host NUMA nodes the domain has
> + * affinity with.
> + *
> + * @parm xch a handle to an open hypervisor interface.
> + * @parm domid the domain id one wants to get the node affinity of.
> + * @parm nodemap the map of the affine nodes.
> + * @return 0 on success, -1 on failure.
> + */
> +int xc_domain_node_getaffinity(xc_interface *xch,
> +                               uint32_t domid,
> +                               xc_nodemap_t nodemap);
> +
>   int xc_vcpu_setaffinity(xc_interface *xch,
>                           uint32_t domid,
>                           int vcpu,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 15:31:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4Z8-00071i-Jh; Fri, 21 Dec 2012 15:31:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1Tm4Z7-00071S-1e
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:31:13 +0000
Received: from [85.158.143.35:24584] by server-3.bemta-4.messagelabs.com id
	EE/54-18211-0C084D05; Fri, 21 Dec 2012 15:31:12 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1356103832!12169533!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDMyNDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22006 invoked from network); 21 Dec 2012 15:30:33 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 15:30:33 -0000
Received: from compute3.internal (compute3.nyi.mail.srv.osa [10.202.2.43])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id D3E702022B;
	Fri, 21 Dec 2012 10:30:31 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute3.internal (MEProxy); Fri, 21 Dec 2012 10:30:31 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=4s
	l+j8DYk+sFfIyhbA8jVI3Idm0=; b=N5Xpy/T8pPlw9KDPNPrwRo8J9K+uRDQCU0
	+4UD+7ur/oQjxif2a9r3u+OZBq5gC/IJv/CjP/Hc0xCdssUpXvkEcxEQ+G0mejdA
	Pi4IyuAyv2+3KwUcRK554PZDoF7n3M2nYQ5J9vGo1xKm6jVrgs6IkmT+qCQXOsSK
	Bv11dCs7I=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=4sl+
	j8DYk+sFfIyhbA8jVI3Idm0=; b=VC3idzzPkc6AuM6Z5nfGZVXZEb9dZ3MkVFbz
	4BqQwlNGKSpihoiB3x6hESH5i9FYW5KH74HfuF99Or022FLPEparjX1Rh96sWeW2
	TBw++3kmlGFUa6tFfKkUL1c5IFb9zNPcXargR31rDS2pcJ7jqHTuWIhZ5QXl/Huo
	jotYaL4=
X-Sasl-enc: xAAQosqJImUzrdv6xbZwovTcoF0xQMXqx20Q9+Sq7mXc 1356103831
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id F3BB58E008A;
	Fri, 21 Dec 2012 10:30:30 -0500 (EST)
Message-ID: <50D48090.6060603@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 16:30:24 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121009 Thunderbird/16.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
In-Reply-To: <50D46B47.8000003@invisiblethingslab.com>
X-Enigmail-Version: 1.4.5
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Ben Guthro <ben@guthro.net>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3338158056025678669=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============3338158056025678669==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigA76CED10A7D5AEBBAEC9A69B"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigA76CED10A7D5AEBBAEC9A69B
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 21.12.2012 14:59, Marek Marczykowski wrote:
> On 21.12.2012 14:42, Jan Beulich wrote:
>>>>> On 21.12.12 at 14:33, Marek Marczykowski <marmarek@invisiblethingslab.com>
>> wrote:
>>> On 21.12.2012 09:55, Jan Beulich wrote:
>>>>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com>
>>> wrote:
>>>>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>>>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got the original
>>>>>> symptom - only CPU0 active after ACPI S3. Didn't get a reboot after a few tries.
>>>>>> "xl debug-key r" output attached.
>>>>>> Anyway, it looks like the bug was introduced somewhere between 4.1.2 and
>>>>>> 4.1.3 - on 4.1.2 none of the above problems happened.
>>>>>
>>>>> I've done a bisect on xen 4.1 and found the problematic commit:
>>>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427
>>>>
>>>> In that case, using "sched_ratelimit_us=0" should make the
>>>> problem go away again - did you try that?
>>>
>>> This option does not help when trying 4.1.4, but does when trying xen
>>> compiled from the above c/s.
>>
>> Which basically calls for you doing another bisection round, this
>> time with that option in place.
>
> Will try, but not sure if I will have time for it before Christmas.

The next bisection (this time with sched_ratelimit_us=0) gives this commit:
http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f

I've also tried the xen-4.1 tip with Ben's patch and sched_ratelimit_us=0,
but it didn't work well - I got all vCPUs on pCPU 0 instead of a reboot (the
more common effect). Not sure if this is really a different effect or just
non-deterministic behavior of the same bug.

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enigA76CED10A7D5AEBBAEC9A69B
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1ICRAAoJENuP0xzK19csIc8H/R1ofBMCOCXRUrAZr0sFWykv
QkU9vVuml1j0awCH14956kH98u/F1GAOKjI0YUegoIx1/WFxoLuSVEjsOi1qdcF6
AIKktJid9wjKZTVm0okAmrhENYW2N5syLlDa5GOeVq0fbXc9ATn2rjUhxaMBHK8N
ha0KMgNEQZN9mHrTpT8d3jVKSkiHo8n88y5+gjtHxqZtPIKntpLXLvXRmHwPMMvT
+uAkuxiSgO7GoiDKIV1A3d9r4uCaauCiXVVVkctTBt47GrZFx8xidboayTvHB4vJ
nFZUQFtpKXyup8+ENnlPY7IqTUe/CSTzBIsaekMrSHQQflPLXtQkggvKp1lX5Ns=
=70yj
-----END PGP SIGNATURE-----

--------------enigA76CED10A7D5AEBBAEC9A69B--


--===============3338158056025678669==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3338158056025678669==--



On 21.12.2012 14:59, Marek Marczykowski wrote:
> On 21.12.2012 14:42, Jan Beulich wrote:
>>>>> On 21.12.12 at 14:33, Marek Marczykowski <marmarek@invisiblethingslab.com>
>> wrote:
>>> On 21.12.2012 09:55, Jan Beulich wrote:
>>>>>>> On 21.12.12 at 05:52, Marek Marczykowski <marmarek@invisiblethingslab.com>
>>> wrote:
>>>>> On 21.12.2012 00:17, Marek Marczykowski wrote:
>>>>>> With this patch applied (on 4.1.4 and 4.3-unstable), I've got the
>>>>> original symptom
>>>>>> - only CPU0 active after ACPI S3. Didn't get a reboot after a few tries.
>>>>>> "xl debug-key r" output attached.
>>>>>> Anyway, it looks like the bug was introduced somewhere between 4.1.2 and
>>>>>> 4.1.3 - on 4.1.2 none of the above problems happened.
>>>>>
>>>>> I've done a bisect on xen 4.1 and found the problematic commit:
>>>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/1f95b55ef427
>>>>
>>>> In that case, using "sched_ratelimit_us=0" should make the
>>>> problem go away again - did you try that?
>>>
>>> This option does not help when trying 4.1.4, but does when trying xen
>>> compiled from the above c/s.
>>
>> Which basically calls for you doing another bisection round, this
>> time with that option in place.
>
> Will try, but not sure if I will have time for it before Christmas.

Next bisection (this time with sched_ratelimit_us=0) gives this commit:

http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f

I've also tried the xen-4.1 tip with Ben's patch and sched_ratelimit_us=0, but
it didn't work well - I got all vCPUs on pCPU 0 instead of a reboot (the more
common effect). Not sure if this is really a different effect or just
non-deterministic behavior of the same bug.
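[Editorial note for anyone reproducing this bisection: sched_ratelimit_us is a
Xen hypervisor command-line parameter, so it goes on the xen.gz line of the
boot entry, not on the dom0 kernel line. A minimal sketch follows; the file
path and the loglvl option are illustrative, not taken from this thread.]

```shell
# Compose the hypervisor command line for a bisection boot entry.
# Only sched_ratelimit_us=0 is the option under discussion here;
# loglvl=all is just an illustrative extra flag.
xen_cmdline="loglvl=all sched_ratelimit_us=0"
echo "multiboot /boot/xen.gz ${xen_cmdline}"
```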

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enigA76CED10A7D5AEBBAEC9A69B
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1ICRAAoJENuP0xzK19csIc8H/R1ofBMCOCXRUrAZr0sFWykv
QkU9vVuml1j0awCH14956kH98u/F1GAOKjI0YUegoIx1/WFxoLuSVEjsOi1qdcF6
AIKktJid9wjKZTVm0okAmrhENYW2N5syLlDa5GOeVq0fbXc9ATn2rjUhxaMBHK8N
ha0KMgNEQZN9mHrTpT8d3jVKSkiHo8n88y5+gjtHxqZtPIKntpLXLvXRmHwPMMvT
+uAkuxiSgO7GoiDKIV1A3d9r4uCaauCiXVVVkctTBt47GrZFx8xidboayTvHB4vJ
nFZUQFtpKXyup8+ENnlPY7IqTUe/CSTzBIsaekMrSHQQflPLXtQkggvKp1lX5Ns=
=70yj
-----END PGP SIGNATURE-----

--------------enigA76CED10A7D5AEBBAEC9A69B--


--===============3338158056025678669==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3338158056025678669==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 15:37:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4eS-0007DW-D9; Fri, 21 Dec 2012 15:36:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm4eQ-0007DR-IS
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:36:42 +0000
Received: from [85.158.143.99:65115] by server-3.bemta-4.messagelabs.com id
	C6/DA-18211-90284D05; Fri, 21 Dec 2012 15:36:41 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1356104199!30518014!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16272 invoked from network); 21 Dec 2012 15:36:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:36:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1562396"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 15:36:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 10:36:38 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm4eM-0005Gf-JV;
	Fri, 21 Dec 2012 15:36:38 +0000
Message-ID: <50D48091.5070409@eu.citrix.com>
Date: Fri, 21 Dec 2012 15:30:25 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<3c196445edb57baadf4f.1355944042@Solace>
In-Reply-To: <3c196445edb57baadf4f.1355944042@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 06 of 10 v2] libxl: allow for explicitly
	specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> By introducing a nodemap in libxl_domain_build_info and
> providing the get/set methods to deal with it.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

I think you'll probably need to add a line like the following:

#define LIBXL_HAVE_NODEAFFINITY 1

So that people wanting to build against different versions of the 
library can behave appropriately.  But IanC or IanJ would be the final 
word on that, I think.

  -George

>
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -4142,6 +4142,26 @@ int libxl_set_vcpuaffinity_all(libxl_ctx
>       return rc;
>   }
>   
> +int libxl_domain_set_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
> +                                  libxl_bitmap *nodemap)
> +{
> +    if (xc_domain_node_setaffinity(ctx->xch, domid, nodemap->map)) {
> +        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "setting node affinity");
> +        return ERROR_FAIL;
> +    }
> +    return 0;
> +}
> +
> +int libxl_domain_get_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
> +                                  libxl_bitmap *nodemap)
> +{
> +    if (xc_domain_node_getaffinity(ctx->xch, domid, nodemap->map)) {
> +        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "getting node affinity");
> +        return ERROR_FAIL;
> +    }
> +    return 0;
> +}
> +
>   int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
>   {
>       GC_INIT(ctx);
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -861,6 +861,10 @@ int libxl_set_vcpuaffinity(libxl_ctx *ct
>                              libxl_bitmap *cpumap);
>   int libxl_set_vcpuaffinity_all(libxl_ctx *ctx, uint32_t domid,
>                                  unsigned int max_vcpus, libxl_bitmap *cpumap);
> +int libxl_domain_set_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
> +                                  libxl_bitmap *nodemap);
> +int libxl_domain_get_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
> +                                  libxl_bitmap *nodemap);
>   int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap);
>   
>   libxl_scheduler libxl_get_scheduler(libxl_ctx *ctx);
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -184,6 +184,12 @@ int libxl__domain_build_info_setdefault(
>   
>       libxl_defbool_setdefault(&b_info->numa_placement, true);
>   
> +    if (!b_info->nodemap.size) {
> +        if (libxl_node_bitmap_alloc(CTX, &b_info->nodemap, 0))
> +            return ERROR_FAIL;
> +        libxl_bitmap_set_any(&b_info->nodemap);
> +    }
> +
>       if (b_info->max_memkb == LIBXL_MEMKB_DEFAULT)
>           b_info->max_memkb = 32 * 1024;
>       if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -230,6 +230,7 @@ int libxl__build_pre(libxl__gc *gc, uint
>           if (rc)
>               return rc;
>       }
> +    libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
>       libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
>   
>       xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -261,6 +261,7 @@ libxl_domain_build_info = Struct("domain
>       ("max_vcpus",       integer),
>       ("avail_vcpus",     libxl_bitmap),
>       ("cpumap",          libxl_bitmap),
> +    ("nodemap",         libxl_bitmap),
>       ("numa_placement",  libxl_defbool),
>       ("tsc_mode",        libxl_tsc_mode),
>       ("max_memkb",       MemKB),


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 15:54:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4ve-0007T8-4v; Fri, 21 Dec 2012 15:54:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tm4vc-0007T3-Fh
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:54:28 +0000
Received: from [85.158.138.51:11843] by server-4.bemta-3.messagelabs.com id
	22/6F-31835-33684D05; Fri, 21 Dec 2012 15:54:27 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1356105265!19987527!1
X-Originating-IP: [209.85.212.46]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16109 invoked from network); 21 Dec 2012 15:54:26 -0000
Received: from mail-vb0-f46.google.com (HELO mail-vb0-f46.google.com)
	(209.85.212.46)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:54:26 -0000
Received: by mail-vb0-f46.google.com with SMTP id b13so5171233vby.19
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 07:54:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=41whjhO4Oh3HNi1u/cw82e3Rp8FLsWhtRDR/0W46Xus=;
	b=tCukfuz3Y2jt7jH65nW5OE3kMaRu2FMVWa5TOlGFO+cY2FMvYQ3WTZprnjdKkIXWba
	fE8UqrfDf6sDk+n5/SqG81oNYUtHEybQdOOGSyhBEOd/Hrdk7nTsuo2/HXzMF3J6Xr9i
	k3a5m/uvCGM3t/cD8u7PFs0obGwajFlQHB0Cxk+YQPsLxp5QGEYpLX9tH6OBsLwoy3jW
	SRrqDuzB7eOhs9WyLhDOlcSduGjQFqG4ixlkl7gKFIZ9Jk/e7h0VMANS5GzDcf4lp6oe
	GvYOp3Ij5EL2km+wsN/CXs45R8Z/ZiPX8wo/AMWGUVVW923e21HEEXPcXCLDIZSltCZe
	iV0A==
MIME-Version: 1.0
Received: by 10.58.161.113 with SMTP id xr17mr21411125veb.3.1356105265131;
	Fri, 21 Dec 2012 07:54:25 -0800 (PST)
Received: by 10.58.187.84 with HTTP; Fri, 21 Dec 2012 07:54:25 -0800 (PST)
In-Reply-To: <50D48090.6060603@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 10:54:25 -0500
X-Google-Sender-Auth: OVV-Al-0FaUcy7Jo1LlMOoUzKVE
Message-ID: <CAOvdn6WHJpZCTPp-9LUAT61CX8d29nt0g6j7aqbKYtUGNHb5dg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8161649877002519200=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8161649877002519200==
Content-Type: multipart/alternative; boundary=047d7b6dc8689b314004d15edc17

--047d7b6dc8689b314004d15edc17
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 21, 2012 at 10:30 AM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

>
> > Will try, but not sure if will have time for it before Christmas.
>
> Next bisection (this time with sched_ratelimit_us=0) gives this commit:
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f
>
>
This is interesting, as this is the same changeset (in its xen-unstable
equivalent) that I came up with as problematic.

The xen-unstable equivalent patch is here:

http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=269f543ea750ed567d18f2e819e5d5ce58eda5c5

My patch simply reverts part of that patch, specifically the hunk
in xen/common/schedule.c.

--047d7b6dc8689b314004d15edc17--


--===============8161649877002519200==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8161649877002519200==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 15:54:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4ve-0007T8-4v; Fri, 21 Dec 2012 15:54:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tm4vc-0007T3-Fh
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:54:28 +0000
Received: from [85.158.138.51:11843] by server-4.bemta-3.messagelabs.com id
	22/6F-31835-33684D05; Fri, 21 Dec 2012 15:54:27 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1356105265!19987527!1
X-Originating-IP: [209.85.212.46]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16109 invoked from network); 21 Dec 2012 15:54:26 -0000
Received: from mail-vb0-f46.google.com (HELO mail-vb0-f46.google.com)
	(209.85.212.46)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:54:26 -0000
Received: by mail-vb0-f46.google.com with SMTP id b13so5171233vby.19
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 07:54:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=41whjhO4Oh3HNi1u/cw82e3Rp8FLsWhtRDR/0W46Xus=;
	b=tCukfuz3Y2jt7jH65nW5OE3kMaRu2FMVWa5TOlGFO+cY2FMvYQ3WTZprnjdKkIXWba
	fE8UqrfDf6sDk+n5/SqG81oNYUtHEybQdOOGSyhBEOd/Hrdk7nTsuo2/HXzMF3J6Xr9i
	k3a5m/uvCGM3t/cD8u7PFs0obGwajFlQHB0Cxk+YQPsLxp5QGEYpLX9tH6OBsLwoy3jW
	SRrqDuzB7eOhs9WyLhDOlcSduGjQFqG4ixlkl7gKFIZ9Jk/e7h0VMANS5GzDcf4lp6oe
	GvYOp3Ij5EL2km+wsN/CXs45R8Z/ZiPX8wo/AMWGUVVW923e21HEEXPcXCLDIZSltCZe
	iV0A==
MIME-Version: 1.0
Received: by 10.58.161.113 with SMTP id xr17mr21411125veb.3.1356105265131;
	Fri, 21 Dec 2012 07:54:25 -0800 (PST)
Received: by 10.58.187.84 with HTTP; Fri, 21 Dec 2012 07:54:25 -0800 (PST)
In-Reply-To: <50D48090.6060603@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
Date: Fri, 21 Dec 2012 10:54:25 -0500
X-Google-Sender-Auth: OVV-Al-0FaUcy7Jo1LlMOoUzKVE
Message-ID: <CAOvdn6WHJpZCTPp-9LUAT61CX8d29nt0g6j7aqbKYtUGNHb5dg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8161649877002519200=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8161649877002519200==
Content-Type: multipart/alternative; boundary=047d7b6dc8689b314004d15edc17

--047d7b6dc8689b314004d15edc17
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 21, 2012 at 10:30 AM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

>
> > Will try, but not sure if will have time for it before Christmas.
>
> Next bisection (this time with sched_ratelimit_us=0) gives this commit:
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f
>
>
This is interesting, as it is the equivalent changeset in xen-unstable that
I identified as problematic.

The xen-unstable equivalent patch is here:

http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=269f543ea750ed567d18f2e819e5d5ce58eda5c5

My patch simply reverts part of that patch: specifically, the hunk
in xen/common/schedule.c.

--047d7b6dc8689b314004d15edc17--


--===============8161649877002519200==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8161649877002519200==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 15:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 15:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm4wC-0007VF-IZ; Fri, 21 Dec 2012 15:55:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1Tm4wB-0007V5-SU
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 15:55:04 +0000
Received: from [85.158.139.83:11099] by server-9.bemta-5.messagelabs.com id
	B6/9A-10690-75684D05; Fri, 21 Dec 2012 15:55:03 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1356105300!28786381!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15646 invoked from network); 21 Dec 2012 15:55:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 15:55:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1565042"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 15:54:59 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Fri, 21 Dec 2012
	10:54:59 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: G.R. <firemeteor@users.sourceforge.net>
Date: Fri, 21 Dec 2012 10:55:02 -0500
Thread-Topic: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
	resource conflict for OpRegion.
Thread-Index: Ac3fL5y2orLf10N9RYmfeOUit8a+6AAYnZRA
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
	<CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
In-Reply-To: <CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: firemeteor.guo@gmail.com [mailto:firemeteor.guo@gmail.com] On
> Behalf Of G.R.
> Sent: Thursday, December 20, 2012 11:00 PM
> To: Ross Philipson
> Cc: Keir (Xen.org); Stefano Stabellini; Ian Campbell;
> Jean.guyader@gmail.com; xen-devel
> Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
> resource conflict for OpRegion.
> 
> On Fri, Dec 21, 2012 at 2:27 AM, Ross Philipson
> <Ross.Philipson@citrix.com> wrote:
> 
> 
> >> > Possibly with suitable macros used instead of magic numbers (e.g.,
> >> > XC_PAGE_* and a macro for the opregion size).
> >>
> >> I guess there is no predefined macro for OpRegion size. And I guess I
> >> would need to define it twice, once in each codebase?
> >
> > In addition we should think about defining the IGD OpRegion in ACPI
> per the spec (cited earlier). Guest drivers seem to find the region just
> by reading the ASLS register in the gfx device's config space but it
> would be more correct to define it in ACPI too. Just a thought.
> 
> Is it a requirement for the patch to be accepted? Or, are you saying
> that this should not be IGD passthrough specific?
> I'm not sure what you refer to by 'ACPI' here. Is it the spec itself or
> header in your source code?
> I'm sorry to ask but I'm just an unlucky user trying to fix my box.
> 
> The ASLS register is just the documented way to communicate the OpRegion
> you can find in the spec.
> There are a lot of details in the spec. But as long as we are not going
> to emulate it, only the size is relevant here, I believe.

I guess all I am pointing out is that the IGD OpRegion is supposed to be defined in ACPI (that is actually why it is called an OpRegion). On all the systems I have seen it is in the DSDT (look for IDGP and IGDM). The OpRegion declaration actually tells you how big the region is and where its base is. It should probably only be there in the case where you are passing in the igfx device, but this could be done by loading an auxiliary SSDT table. In addition to the IGD definition, the BIOSes on these systems also define other NVS regions related to graphics which might be useful, at least in determining their size and layout.

I have been thinking about this in relation to our igfx passthrough support. We see some odd behaviors here and there with igfx pt and the guest drivers (which we have no control over in, say, the Windows case). I don't currently have hard evidence that these missing BIOS bits are causing any specific problems, but they are an inconsistency wrt the spec and on my list of suspects.
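
The ASLS lookup mentioned above can be sketched as follows. This is an illustrative Python snippet, not code from the thread; the 0xFC register offset is taken from the public IGD OpRegion specification, and the sysfs path in the docstring is an assumption about where one might obtain a raw config-space dump on Linux:

```python
import struct

# ASLS register offset in the IGD's PCI config space, per the public
# IGD OpRegion spec (treat as an assumption here, not verified in-thread).
ASLS_OFFSET = 0xFC

def opregion_base(config_space: bytes) -> int:
    """Return the physical base address of the OpRegion, i.e. the 32-bit
    little-endian value held by the ASLS register.

    'config_space' is the raw 256-byte PCI config space of the Intel
    graphics device (e.g. read from
    /sys/bus/pci/devices/0000:00:02.0/config -- hypothetical path).
    """
    (base,) = struct.unpack_from("<I", config_space, ASLS_OFFSET)
    return base
```

This is how guest drivers find the region without consulting ACPI at all, which is why passthrough can appear to work even when the DSDT/SSDT definition is missing.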

Thanks
Ross


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:01:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm52W-0008I9-Rt; Fri, 21 Dec 2012 16:01:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tm52V-0008I2-O2
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:01:35 +0000
Received: from [85.158.138.51:57681] by server-13.bemta-3.messagelabs.com id
	1D/D6-00465-9D784D05; Fri, 21 Dec 2012 16:01:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1356105688!27988957!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12642 invoked from network); 21 Dec 2012 16:01:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 16:01:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 16:01:27 +0000
Message-Id: <50D495E602000078000B2111@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 16:01:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Razvan Cojocaru" <rzvncj@gmail.com>
References: <50D459D3.4040101@gmail.com>
	<50D4780602000078000B204E@nat28.tlf.novell.com>
	<50D46D75.1010800@gmail.com>
In-Reply-To: <50D46D75.1010800@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to get a few MSR values from userspace?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 15:08, Razvan Cojocaru <rzvncj@gmail.com> wrote:
> Whether this would make an interesting patch for Xen or not, I would
> appreciate any advice on how best to implement said functionality (if 
> only for MSR_IA32_MC0_CTL).

I'd make this a new hvm_op.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:04:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm54k-0008Nx-Ca; Fri, 21 Dec 2012 16:03:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Tm54j-0008Ns-Nv
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:03:53 +0000
Received: from [85.158.138.51:3116] by server-4.bemta-3.messagelabs.com id
	3B/BD-31835-86884D05; Fri, 21 Dec 2012 16:03:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356105831!28757737!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30511 invoked from network); 21 Dec 2012 16:03:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 16:03:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Dec 2012 16:03:51 +0000
Message-Id: <50D4967602000078000B2114@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 21 Dec 2012 16:03:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
In-Reply-To: <50D48090.6060603@invisiblethingslab.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Marek Marczykowski <marmarek@invisiblethingslab.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.12.12 at 16:30, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
> Next bisection (this time with sched_ratelimit_us=0) gives this commit:
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f 

Ben, wasn't this where your bisection ended up too?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:07:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm58S-00007P-53; Fri, 21 Dec 2012 16:07:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm58P-00007F-JT
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:07:42 +0000
Received: from [85.158.143.35:34313] by server-1.bemta-4.messagelabs.com id
	F6/7C-28401-C4984D05; Fri, 21 Dec 2012 16:07:40 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1356105974!16001368!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24007 invoked from network); 21 Dec 2012 16:06:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:06:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1488599"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 16:06:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 11:06:13 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm56z-0005k0-Mp;
	Fri, 21 Dec 2012 16:06:13 +0000
Message-ID: <50D48780.70302@eu.citrix.com>
Date: Fri, 21 Dec 2012 16:00:00 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<5dc2571ae5faef87977c.1355944043@Solace>
In-Reply-To: <5dc2571ae5faef87977c.1355944043@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 07 of 10 v2] libxl: optimize the calculation
 of how many VCPUs can run on a candidate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> For choosing the best NUMA placement candidate, we need to figure out
> how many VCPUs are runnable on each of them. That requires going through
> all the VCPUs of all the domains and checking their affinities.
>
> With this change, instead of doing the above for each candidate, we
> do it once and for all, populating an array while counting. This way,
> when we later evaluate candidates, all we need to do is sum up the
> right elements of the array itself.
>
> This reduces the complexity of the overall algorithm, as it moves a
> potentially expensive operation (for_each_vcpu_of_each_domain {})
> out of the core placement loop, so that it is performed only
> once instead of (potentially) tens or hundreds of times.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

You know this code best. :-)  I've looked it over and just have one 
minor suggestion:

>           for (j = 0; j < nr_dom_vcpus; j++) {
> +            /* For each vcpu of each domain, increment the elements of
> +             * the array corresponding to the nodes where the vcpu runs */
> +            libxl_bitmap_set_none(&vcpu_nodemap);
> +            libxl_for_each_set_bit(k, vinfo[j].cpumap) {
> +                int node = tinfo[k].node;

I think I might rename "vcpu_nodemap" to something that better suggests 
how it fits with the algorithm -- for instance, "counted_nodemap" or 
"nodes_counted" -- something to suggest that this is how we avoid 
counting the same vcpu on the same node multiple times.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm58m-00009C-IM; Fri, 21 Dec 2012 16:08:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm58l-00008x-3B
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:08:03 +0000
Received: from [85.158.143.99:39804] by server-3.bemta-4.messagelabs.com id
	64/6E-18211-26984D05; Fri, 21 Dec 2012 16:08:02 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1356106081!25336644!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5738 invoked from network); 21 Dec 2012 16:08:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:08:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="307957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:08:01 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:08:00 +0000
Message-ID: <1356106078.15403.59.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:07:58 +0100
In-Reply-To: <50D47235.4090106@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D3414D.8080901@eu.citrix.com>
	<CAAWQectVEihrayJj5n4SPGqA0QJSiC7s2x_oDW=KHyxukWpMSA@mail.gmail.com>
	<50D47235.4090106@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2358770386315652851=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2358770386315652851==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-VHCcvgheZ7clcavNtEty"

--=-VHCcvgheZ7clcavNtEty
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 14:29 +0000, George Dunlap wrote:
> > Sorry for that, I probably spent so much time buried, as you were
> > saying, in the various nested loops and calls, that I lost the
> > context a little bit! :-P
>
> OK, that makes sense -- I figured it was something like that.  Don't
> feel too bad about missing that connection -- we're all fairly blind to
> our own code, and I only caught it because I was trying to figure out
> what was going on.
>
Yeah, thanks, and no, I won't let this get me down too much, even if this
was quite a big one. After all, that's what we have patch review for!

> That's why we do patch review. :-)
>
Hehe, I see we agree. :-)

> Honestly, the whole "steal work" idea seemed a bit backwards to begin
> with, but now that we're not just dealing with "possible" and "not
> possible", but with "better" and "worse", the work-stealing method of
> load balancing sort of falls down.
>
> [snip]
>
> But that's kind of a half-baked idea at this point.
>
Yes, this whole work-stealing machinery may need rethinking a bit. However,
let's get something sane in for NUMA load balancing ASAP, as we planned,
and then we'll see whether/how to rework it with both simplicity and
effectiveness in mind.

> > Ok, I think the problem I was describing is real, and I've seen it
> > happening and causing performance degradation. However, as I think a
> > good solution is going to be more complex than I thought, I'd better
> > repost without this function and deal with it in a future separate
> > patch (after having figured out the best way of doing so). Is that
> > fine with you?
>
> Yes, that's fine.
>
Ok, I'll sort out all your comments and try to post v3 in early January,
so that you'll find it in your inbox as soon as you're back from
vacation! :-)

>   Thanks, Dario.
>
Thanks to you,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-VHCcvgheZ7clcavNtEty
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUiV4ACgkQk4XaBE3IOsSnFgCdE3L7W3LWJbMHf+dht8l/hfty
1swAnjJGfft3xiAYaFLBX6IjJfVozfED
=/LeZ
-----END PGP SIGNATURE-----

--=-VHCcvgheZ7clcavNtEty--


--===============2358770386315652851==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2358770386315652851==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 16:15:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5GB-0000SU-HB; Fri, 21 Dec 2012 16:15:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5G9-0000SP-Ai
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:15:41 +0000
Received: from [85.158.139.83:28446] by server-12.bemta-5.messagelabs.com id
	E0/D9-02275-C2B84D05; Fri, 21 Dec 2012 16:15:40 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1356106474!24619112!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24656 invoked from network); 21 Dec 2012 16:14:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:14:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="308089"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:13:35 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:13:34 +0000
Message-ID: <1356106411.15403.64.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:13:31 +0100
In-Reply-To: <50D47898.2060909@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<06d2f322a6319d8ba212.1355944039@Solace>
	<50D37359.9080001@eu.citrix.com>
	<1356049122.15403.34.camel@Abyss> <50D47898.2060909@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the
 scheduler know about node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2106059499683019071=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2106059499683019071==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-AFFfeewK2he1RbaLryrE"

--=-AFFfeewK2he1RbaLryrE
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 14:56 +0000, George Dunlap wrote:
> > I hope I've explained it correctly, and sorry if it is a little bit
> > tricky, especially to explain like this (although, believe me, it was
> > tricky to hunt it out too! :-P). I've seen that happening and I'm
> > almost sure I kept a trace somewhere, so let me know if you want to
> > see the "smoking gun". :-)
>
> No, the change looks quite plausible.  I guess it's not obvious that the
> balancing code will never migrate from one thread to another thread.
>
It was far from obvious to figure out this was happening, yes. :-)

> (That whole algorithm could do with some commenting -- I may submit a
> patch once this series is in.)
>
Nice.

> I'm really glad you've had the opportunity to take a close look at these
> kinds of things.
>
Yeah, well, I'm happy to; scheduling never stops entertaining me, even
(or especially) when it requires my brain cells to work out so hard! :-D

> What I was doing, in a sort of "thinking out loud" fashion, was seeing
> under what conditions that break might actually happen.  Like the
> analysis with vcpu_should_migrate(), it might have turned out to be
> redundant, or to have missed some cases.
>
Yep, I agree, it's another aspect of the patch-review model which is
really helpful.

Thanks,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-AFFfeewK2he1RbaLryrE
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUiqwACgkQk4XaBE3IOsQjlwCeIlqrt9qakzoHn7pphKtH9yhC
OsgAnjPYmsO3flwOHN9oRvVSaGf/Y4cw
=eT9A
-----END PGP SIGNATURE-----

--=-AFFfeewK2he1RbaLryrE--


--===============2106059499683019071==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2106059499683019071==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 16:17:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:17:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5I4-0000Xm-0x; Fri, 21 Dec 2012 16:17:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5I3-0000Xd-5m
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:17:39 +0000
Received: from [85.158.139.83:40357] by server-11.bemta-5.messagelabs.com id
	6E/BB-31624-2AB84D05; Fri, 21 Dec 2012 16:17:38 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1356106657!29183720!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7145 invoked from network); 21 Dec 2012 16:17:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:17:37 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="308214"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:17:37 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:17:36 +0000
Message-ID: <1356106655.15403.68.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:17:35 +0100
In-Reply-To: <50D47D99.9090209@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<7e8c5e21c3ae1c267c23.1355944040@Solace>
	<50D47D99.9090209@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 04 of 10 v2] xen: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3119554692575500966=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3119554692575500966==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-DoBd5iIsos/1Q95y7xUx"

--=-DoBd5iIsos/1Q95y7xUx
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 15:17 +0000, George Dunlap wrote:
> On 19/12/12 19:07, Dario Faggioli wrote:
> > Make it possible to pass the node-affinity of a domain to the hypervisor
> > from the upper layers, instead of always being computed automatically.
> >
> > Note that this also required generalizing the Flask hooks for setting
> > and getting the affinity, so that they now deal with both vcpu and
> > node affinity.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
> I can't comment on the XSM stuff
>
Right, it's the part I'm most weak on too... Daniel had a go at this
patch during v1's review, and he raised a couple of points that I think
I addressed; let's see if he thinks there's anything else.

> -- is any part of the "getvcpuaffinity"
> stuff for XSM a public interface that needs to be backwards-compatible?
> I.e., is s/vcpu//; OK from an interface point of view?
>
Mmm... Good point, I hadn't thought about that. This was here in v1 and
Daniel explicitly said he was fine with the renaming instead of adding a
new hook, but mostly from the "semantic" point of view; I'm not sure
whether backward compatibility is an issue... Daniel, what do you think?

> WRT everything else:
> Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>
Ok. Thanks,
Dario


<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-DoBd5iIsos/1Q95y7xUx
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUi58ACgkQk4XaBE3IOsQUZwCeMb++YyvQ/LeOgqK3nrvBrxdx
YawAniiBWO+dCfZYI7VS+0SVf9vOfMi/
=g3j5
-----END PGP SIGNATURE-----

--=-DoBd5iIsos/1Q95y7xUx--


--===============3119554692575500966==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3119554692575500966==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 16:19:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5JK-0000dq-Gg; Fri, 21 Dec 2012 16:18:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tm5JJ-0000dg-5v
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:18:57 +0000
Received: from [85.158.139.83:47816] by server-4.bemta-5.messagelabs.com id
	1F/D9-14693-0FB84D05; Fri, 21 Dec 2012 16:18:56 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1356106729!30373264!1
X-Originating-IP: [209.85.220.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23207 invoked from network); 21 Dec 2012 16:18:50 -0000
Received: from mail-vc0-f172.google.com (HELO mail-vc0-f172.google.com)
	(209.85.220.172)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:18:50 -0000
Received: by mail-vc0-f172.google.com with SMTP id fw7so5430102vcb.3
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 08:18:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=references:from:mime-version:in-reply-to:date:message-id:subject:to
	:cc:content-type;
	bh=n92v5AyvK7jRPZlnOhcpGhHs/7AAZYKlrqmymYhITH0=;
	b=FhqMfDDYuxkvTjfjt/dqdS+zkiE01dHjfQN+Pdw9M5tdSR7xY1PouTU74RZNE/vbCf
	vGG33idIy8db0iiO4TOdT5m9wQ11CM/0l4o8hjdFffwtWESIiWH1jkT3BFTXoUDojY5n
	sDGRlo3nXbE43ec6Ssm4Yrt00gKRgracjTp+nPxEuvO7Tzlh7mnzYZIPlVV5k4GTJBcq
	dCU1vfAtP8C8aVCED6/2S2k/pv8nbJ7aTgtTlaMxL89F7kNjd2Y5dPIHN3jjn+QL95RV
	XymmUPuxqoJ1Al6s5FADeRWcjMhGsAZRGQeSKyYdyelHo22sIsoekU8SD2Biw1uEMrSE
	HlDg==
Received: by 10.52.32.106 with SMTP id h10mr18184617vdi.1.1356106728671; Fri,
	21 Dec 2012 08:18:48 -0800 (PST)
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
	<50D4967602000078000B2114@nat28.tlf.novell.com>
From: Ben Guthro <ben.guthro@gmail.com>
Mime-Version: 1.0 (1.0)
In-Reply-To: <50D4967602000078000B2114@nat28.tlf.novell.com>
Date: Fri, 21 Dec 2012 11:18:48 -0500
Message-ID: <3368417890369848263@unknownmsgid>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ben Guthro <ben@guthro.net>,
	Marek Marczykowski <marmarek@invisiblethingslab.com>
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Dec 21, 2012, at 11:03 AM, Jan Beulich <JBeulich@suse.com> wrote:

>>>> On 21.12.12 at 16:30, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
>> Next bisection (this time with sched_ratelimit_us=0) gives this commit:
>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f
>
> Ben, wasn't this where your bisection ended up too?

Yes, for the dom0_pin_vcpus issue.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:19:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5JR-0000ee-Tr; Fri, 21 Dec 2012 16:19:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5JQ-0000eI-4u
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:19:04 +0000
Received: from [85.158.138.51:32930] by server-2.bemta-3.messagelabs.com id
	0E/71-11239-7FB84D05; Fri, 21 Dec 2012 16:19:03 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1356106742!21794420!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3156 invoked from network); 21 Dec 2012 16:19:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:19:02 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="308242"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:19:02 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:19:01 +0000
Message-ID: <1356106739.15403.70.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:18:59 +0100
In-Reply-To: <50D48091.5070409@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<3c196445edb57baadf4f.1355944042@Solace>
	<50D48091.5070409@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 06 of 10 v2] libxl: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2492437288553829706=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2492437288553829706==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-h5ClSqlEhGwC1M4GwUkS"

--=-h5ClSqlEhGwC1M4GwUkS
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 15:30 +0000, George Dunlap wrote:
> On 19/12/12 19:07, Dario Faggioli wrote:
> > By introducing a nodemap in libxl_domain_build_info and
> > providing the get/set methods to deal with it.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>
> I think you'll probably need to add a line like the following:
>
> #define LIBXL_HAVE_NODEAFFINITY 1
>
> So that people wanting to build against different versions of the
> library can behave appropriately.
>
I see what you mean.

> But IanC or IanJ would be the final
> word on that, I think.
>
Well, let's see if they want to comment. In any case, it shouldn't be a
big deal to add this. Thanks for pointing that out.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-h5ClSqlEhGwC1M4GwUkS
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUi/MACgkQk4XaBE3IOsTaOACdE+Q/N2EtK1lgO263tbl+lq7D
ZMIAoKwiDz4I8dMYSgoXh2Fjk9NF+jGl
=o5Wm
-----END PGP SIGNATURE-----

--=-h5ClSqlEhGwC1M4GwUkS--


--===============2492437288553829706==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2492437288553829706==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 16:23:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5NT-000136-PU; Fri, 21 Dec 2012 16:23:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5NT-000130-5o
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:23:15 +0000
Received: from [85.158.137.99:41721] by server-16.bemta-3.messagelabs.com id
	D2/EF-27634-2FC84D05; Fri, 21 Dec 2012 16:23:14 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1356106992!15085263!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22554 invoked from network); 21 Dec 2012 16:23:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:23:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="308316"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:23:12 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:23:11 +0000
Message-ID: <1356106990.15403.74.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:23:10 +0100
In-Reply-To: <50D48780.70302@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<5dc2571ae5faef87977c.1355944043@Solace> <50D48780.70302@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 07 of 10 v2] libxl: optimize the calculation
 of how many VCPUs can run on a candidate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7081345392282909730=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7081345392282909730==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-jdzAz/ddGjKie18pXv8l"

--=-jdzAz/ddGjKie18pXv8l
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 16:00 +0000, George Dunlap wrote:=20
> On 19/12/12 19:07, Dario Faggioli wrote:
> > For choosing the best NUMA placement candidate, we need to figure out
> > how many VCPUs are runnable on each of them. That requires going through
> > all the VCPUs of all the domains and check their affinities.
> >
> > With this change, instead of doing the above for each candidate, we
> > do it once for all, populating an array while counting. This way, when
> > we later are evaluating candidates, all we need is summing up the right
> > elements of the array itself.
> >
> > This reduces the complexity of the overall algorithm, as it moves a
> > potentially expensive operation (for_each_vcpu_of_each_domain {})
> > outside from the core placement loop, so that it is performed only
> > once instead of (potentially) tens or hundreds of times.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>=20
> You know this code best. :-)  I've looked it over and just have one=20
> minor suggestion:
>=20
Well, I certainly spent quite a bit of time on it, and it still is in
need of some more, but again, this change only speeds things up at no
"functional cost", so (despite this not being a critical path) I really
think it is something we want.

BTW, thanks for taking a look.

> >           for (j =3D 0; j < nr_dom_vcpus; j++) {
> > +            /* For each vcpu of each domain, increment the elements of
> > +             * the array corresponding to the nodes where the vcpu runs */
> > +            libxl_bitmap_set_none(&vcpu_nodemap);
> > +            libxl_for_each_set_bit(k, vinfo[j].cpumap) {
> > +                int node =3D tinfo[k].node;
>=20
> I think I might rename "vcpu_nodemap" to something that suggests better
> how it fits with the algorithm -- for instance, "counted_nodemap" or=20
> "nodes_counted" -- something to suggest that this is how we avoid=20
> counting the same vcpu on the same node multiple times.
>=20
Good point, I'll go for something like that.

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-jdzAz/ddGjKie18pXv8l
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUjO4ACgkQk4XaBE3IOsRvvwCcDw9Y5W7qYhDesszGiGSuFB6J
EDcAni9MK9RVXUL0SYe/5+sKA1DGXe+d
=Sqoi
-----END PGP SIGNATURE-----

--=-jdzAz/ddGjKie18pXv8l--


--===============7081345392282909730==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7081345392282909730==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 16:27:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:27:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5RG-0001Cp-Gy; Fri, 21 Dec 2012 16:27:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5RF-0001Ck-9k
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:27:09 +0000
Received: from [85.158.138.51:36604] by server-12.bemta-3.messagelabs.com id
	D8/A1-27559-CDD84D05; Fri, 21 Dec 2012 16:27:08 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1356107226!21935391!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25633 invoked from network); 21 Dec 2012 16:27:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:27:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="308424"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:27:06 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:27:05 +0000
Message-ID: <1356107224.15403.78.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:27:04 +0100
In-Reply-To: <50D47E12.3040803@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<61299b4cdc2abbdf9bfb.1355944041@Solace>
	<50D47E12.3040803@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 05 of 10 v2] libxc: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2945805627579981142=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2945805627579981142==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-iziipxIqShOkNz8FzTAF"

--=-iziipxIqShOkNz8FzTAF
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 15:19 +0000, George Dunlap wrote:=20
> On 19/12/12 19:07, Dario Faggioli wrote:
> > By providing the proper get/set interface and wiring them
> > to the new domctl-s from the previous commit.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>=20
> I haven't done a detailed review, but everything looks OK:
>=20
> Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>=20
Ok. Also, let me use this e-mail to thank you properly for the thorough,
useful and especially quick review! All you said was really useful; I'll
do my best to address the points you raised properly and equally
quickly.

See you in 2013 (I'm going on vacation today) with v3!  :-D

Thanks again and Regards,
Dario

> >
> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -110,6 +110,83 @@ int xc_domain_shutdown(xc_interface *xch
> >   }
> >  =20
> >  =20
> > +int xc_domain_node_setaffinity(xc_interface *xch,
> > +                               uint32_t domid,
> > +                               xc_nodemap_t nodemap)
> > +{
> > +    DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    int ret =3D -1;
> > +    int nodesize;
> > +
> > +    nodesize =3D xc_get_nodemap_size(xch);
> > +    if (!nodesize)
> > +    {
> > +        PERROR("Could not get number of nodes");
> > +        goto out;
> > +    }
> > +
> > +    local =3D xc_hypercall_buffer_alloc(xch, local, nodesize);
> > +    if ( local =3D=3D NULL )
> > +    {
> > +        PERROR("Could not allocate memory for setnodeaffinity domctl hypercall");
> > +        goto out;
> > +    }
> > +
> > +    domctl.cmd =3D XEN_DOMCTL_setnodeaffinity;
> > +    domctl.domain =3D (domid_t)domid;
> > +
> > +    memcpy(local, nodemap, nodesize);
> > +    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
> > +    domctl.u.nodeaffinity.nodemap.nr_elems =3D nodesize * 8;
> > +
> > +    ret =3D do_domctl(xch, &domctl);
> > +
> > +    xc_hypercall_buffer_free(xch, local);
> > +
> > + out:
> > +    return ret;
> > +}
> > +
> > +int xc_domain_node_getaffinity(xc_interface *xch,
> > +                               uint32_t domid,
> > +                               xc_nodemap_t nodemap)
> > +{
> > +    DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    int ret =3D -1;
> > +    int nodesize;
> > +
> > +    nodesize =3D xc_get_nodemap_size(xch);
> > +    if (!nodesize)
> > +    {
> > +        PERROR("Could not get number of nodes");
> > +        goto out;
> > +    }
> > +
> > +    local =3D xc_hypercall_buffer_alloc(xch, local, nodesize);
> > +    if ( local =3D=3D NULL )
> > +    {
> > +        PERROR("Could not allocate memory for getnodeaffinity domctl hypercall");
> > +        goto out;
> > +    }
> > +
> > +    domctl.cmd =3D XEN_DOMCTL_getnodeaffinity;
> > +    domctl.domain =3D (domid_t)domid;
> > +
> > +    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
> > +    domctl.u.nodeaffinity.nodemap.nr_elems =3D nodesize * 8;
> > +
> > +    ret =3D do_domctl(xch, &domctl);
> > +
> > +    memcpy(nodemap, local, nodesize);
> > +
> > +    xc_hypercall_buffer_free(xch, local);
> > +
> > + out:
> > +    return ret;
> > +}
> > +
> >   int xc_vcpu_setaffinity(xc_interface *xch,
> >                           uint32_t domid,
> >                           int vcpu,
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -521,6 +521,32 @@ int xc_watchdog(xc_interface *xch,
> >   		uint32_t id,
> >   		uint32_t timeout);
> >  =20
> > +/**
> > + * This function explicitly sets the host NUMA nodes the domain will
> > + * have affinity with.
> > + *
> > + * @parm xch a handle to an open hypervisor interface.
> > + * @parm domid the domain id one wants to set the affinity of.
> > + * @parm nodemap the map of the affine nodes.
> > + * @return 0 on success, -1 on failure.
> > + */
> > +int xc_domain_node_setaffinity(xc_interface *xch,
> > +                               uint32_t domind,
> > +                               xc_nodemap_t nodemap);
> > +
> > +/**
> > + * This function retrieves the host NUMA nodes the domain has
> > + * affinity with.
> > + *
> > + * @parm xch a handle to an open hypervisor interface.
> > + * @parm domid the domain id one wants to get the node affinity of.
> > + * @parm nodemap the map of the affine nodes.
> > + * @return 0 on success, -1 on failure.
> > + */
> > +int xc_domain_node_getaffinity(xc_interface *xch,
> > +                               uint32_t domind,
> > +                               xc_nodemap_t nodemap);
> > +
> >   int xc_vcpu_setaffinity(xc_interface *xch,
> >                           uint32_t domid,
> >                           int vcpu,
>=20
>=20
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-iziipxIqShOkNz8FzTAF
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUjdgACgkQk4XaBE3IOsRVXACfQu7y0peugPYfguuT+2kx1gOp
xgIAn144+LZ0z7zNHe39TA1F0nDOlIAk
=AIui
-----END PGP SIGNATURE-----

--=-iziipxIqShOkNz8FzTAF--


--===============2945805627579981142==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2945805627579981142==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 16:27:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:27:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5RG-0001Cp-Gy; Fri, 21 Dec 2012 16:27:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5RF-0001Ck-9k
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:27:09 +0000
Received: from [85.158.138.51:36604] by server-12.bemta-3.messagelabs.com id
	D8/A1-27559-CDD84D05; Fri, 21 Dec 2012 16:27:08 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1356107226!21935391!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25633 invoked from network); 21 Dec 2012 16:27:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:27:07 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="308424"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:27:06 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:27:05 +0000
Message-ID: <1356107224.15403.78.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:27:04 +0100
In-Reply-To: <50D47E12.3040803@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<61299b4cdc2abbdf9bfb.1355944041@Solace>
	<50D47E12.3040803@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Anil
	Madhavapeddy <anil@recoil.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 05 of 10 v2] libxc: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2945805627579981142=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2945805627579981142==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-iziipxIqShOkNz8FzTAF"

--=-iziipxIqShOkNz8FzTAF
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 15:19 +0000, George Dunlap wrote:=20
> On 19/12/12 19:07, Dario Faggioli wrote:
> > By providing the proper get/set interface and wiring them
> > to the new domctl-s from the previous commit.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
>=20
> I haven't done a detailed review, but everything looks OK:
>=20
> Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>=20
Ok. Also, let me use this e-mail to thank you properly for the thorough,
useful and especially quick review! All you said was really useful, I'll
do my best do address the points you raised in a proper and equally
quick manner.

See you in 2013 (I'm going on vacation today) with v3!  :-D

Thanks again and Regards,
Dario

> >
> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -110,6 +110,83 @@ int xc_domain_shutdown(xc_interface *xch
> >   }
> >  =20
> >  =20
> > +int xc_domain_node_setaffinity(xc_interface *xch,
> > +                               uint32_t domid,
> > +                               xc_nodemap_t nodemap)
> > +{
> > +    DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    int ret =3D -1;
> > +    int nodesize;
> > +
> > +    nodesize =3D xc_get_nodemap_size(xch);
> > +    if (!nodesize)
> > +    {
> > +        PERROR("Could not get number of nodes");
> > +        goto out;
> > +    }
> > +
> > +    local =3D xc_hypercall_buffer_alloc(xch, local, nodesize);
> > +    if ( local =3D=3D NULL )
> > +    {
> > +        PERROR("Could not allocate memory for setnodeaffinity domctl h=
ypercall");
> > +        goto out;
> > +    }
> > +
> > +    domctl.cmd =3D XEN_DOMCTL_setnodeaffinity;
> > +    domctl.domain =3D (domid_t)domid;
> > +
> > +    memcpy(local, nodemap, nodesize);
> > +    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
> > +    domctl.u.nodeaffinity.nodemap.nr_elems =3D nodesize * 8;
> > +
> > +    ret =3D do_domctl(xch, &domctl);
> > +
> > +    xc_hypercall_buffer_free(xch, local);
> > +
> > + out:
> > +    return ret;
> > +}
> > +
> > +int xc_domain_node_getaffinity(xc_interface *xch,
> > +                               uint32_t domid,
> > +                               xc_nodemap_t nodemap)
> > +{
> > +    DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    int ret =3D -1;
> > +    int nodesize;
> > +
> > +    nodesize =3D xc_get_nodemap_size(xch);
> > +    if (!nodesize)
> > +    {
> > +        PERROR("Could not get number of nodes");
> > +        goto out;
> > +    }
> > +
> > +    local =3D xc_hypercall_buffer_alloc(xch, local, nodesize);
> > +    if ( local =3D=3D NULL )
> > +    {
> > +        PERROR("Could not allocate memory for getnodeaffinity domctl h=
ypercall");
> > +        goto out;
> > +    }
> > +
> > +    domctl.cmd =3D XEN_DOMCTL_getnodeaffinity;
> > +    domctl.domain =3D (domid_t)domid;
> > +
> > +    set_xen_guest_handle(domctl.u.nodeaffinity.nodemap.bitmap, local);
> > +    domctl.u.nodeaffinity.nodemap.nr_elems =3D nodesize * 8;
> > +
> > +    ret =3D do_domctl(xch, &domctl);
> > +
> > +    memcpy(nodemap, local, nodesize);
> > +
> > +    xc_hypercall_buffer_free(xch, local);
> > +
> > + out:
> > +    return ret;
> > +}
> > +
> >   int xc_vcpu_setaffinity(xc_interface *xch,
> >                           uint32_t domid,
> >                           int vcpu,
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -521,6 +521,32 @@ int xc_watchdog(xc_interface *xch,
> >   		uint32_t id,
> >   		uint32_t timeout);
> >  =20
> > +/**
> > + * This function explicitly sets the host NUMA nodes the domain will
> > + * have affinity with.
> > + *
> > + * @parm xch a handle to an open hypervisor interface.
> > + * @parm domid the domain id one wants to set the affinity of.
> > + * @parm nodemap the map of the affine nodes.
> > + * @return 0 on success, -1 on failure.
> > + */
> > +int xc_domain_node_setaffinity(xc_interface *xch,
> > +                               uint32_t domind,
> > +                               xc_nodemap_t nodemap);
> > +
> > +/**
> > + * This function retrieves the host NUMA nodes the domain has
> > + * affinity with.
> > + *
> > + * @parm xch a handle to an open hypervisor interface.
> > + * @parm domid the domain id one wants to get the node affinity of.
> > + * @parm nodemap the map of the affine nodes.
> > + * @return 0 on success, -1 on failure.
> > + */
> > +int xc_domain_node_getaffinity(xc_interface *xch,
> > +                               uint32_t domid,
> > +                               xc_nodemap_t nodemap);
> > +
> >   int xc_vcpu_setaffinity(xc_interface *xch,
> >                           uint32_t domid,
> >                           int vcpu,
>
>
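The wrapper quoted above treats the nodemap as a plain byte array with one bit per NUMA node, which is why the domctl's `nr_elems` is set to `nodesize * 8`. The bit-addressing involved can be shown in isolation; this is a standalone sketch in plain C, not libxc code, and the helper names are illustrative only:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Bytes needed for a bitmap carrying one bit per node, rounded up. */
static unsigned int nodemap_size(unsigned int nr_nodes)
{
    return (nr_nodes + 7) / 8;
}

/* Mark node as part of the affinity map. */
static void nodemap_set(uint8_t *map, unsigned int node)
{
    map[node / 8] |= 1u << (node % 8);
}

/* Return 1 if node is set in the map, 0 otherwise. */
static int nodemap_test(const uint8_t *map, unsigned int node)
{
    return (map[node / 8] >> (node % 8)) & 1;
}
```

So a `nodesize`-byte hypercall buffer always describes exactly `nodesize * 8` map elements, even when the last few bits are padding.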
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)





From xen-devel-bounces@lists.xen.org Fri Dec 21 16:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5c0-0001Qd-OG; Fri, 21 Dec 2012 16:38:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm5by-0001QY-J3
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:38:14 +0000
Received: from [85.158.139.83:45802] by server-7.bemta-5.messagelabs.com id
	35/21-08009-57094D05; Fri, 21 Dec 2012 16:38:13 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1356107891!30375859!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26924 invoked from network); 21 Dec 2012 16:38:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:38:12 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1572674"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 16:38:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 11:38:10 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm5So-0006Fb-Ej;
	Fri, 21 Dec 2012 16:28:46 +0000
Message-ID: <50D48CC8.6080801@eu.citrix.com>
Date: Fri, 21 Dec 2012 16:22:32 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<ff98e6bcc0dd18f6b97a.1355944044@Solace>
In-Reply-To: <ff98e6bcc0dd18f6b97a.1355944044@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 08 of 10 v2] libxl: automatic placement
	deals with node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> Which basically means the following two things:
>   1) during domain creation, it is the node-affinity of
>      the domain --rather than the vcpu-affinities of its
>      VCPUs-- that is affected by automatic placement;
>   2) during automatic placement, when counting how many
>      VCPUs are already "bound" to a placement candidate
>      (as part of the process of choosing the best
>      candidate), both vcpu-affinity and node-affinity
>      are considered.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

Re-confirming Ack.
  -George

>
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -133,13 +133,13 @@ static int numa_place_domain(libxl__gc *
>   {
>       int found;
>       libxl__numa_candidate candidate;
> -    libxl_bitmap candidate_nodemap;
> +    libxl_bitmap cpupool_nodemap;
>       libxl_cpupoolinfo cpupool_info;
>       int i, cpupool, rc = 0;
>       uint32_t memkb;
>   
>       libxl__numa_candidate_init(&candidate);
> -    libxl_bitmap_init(&candidate_nodemap);
> +    libxl_bitmap_init(&cpupool_nodemap);
>   
>       /*
>        * Extract the cpumap from the cpupool the domain belong to. In fact,
> @@ -156,7 +156,7 @@ static int numa_place_domain(libxl__gc *
>       rc = libxl_domain_need_memory(CTX, info, &memkb);
>       if (rc)
>           goto out;
> -    if (libxl_node_bitmap_alloc(CTX, &candidate_nodemap, 0)) {
> +    if (libxl_node_bitmap_alloc(CTX, &cpupool_nodemap, 0)) {
>           rc = ERROR_FAIL;
>           goto out;
>       }
> @@ -174,17 +174,19 @@ static int numa_place_domain(libxl__gc *
>       if (found == 0)
>           goto out;
>   
> -    /* Map the candidate's node map to the domain's info->cpumap */
> -    libxl__numa_candidate_get_nodemap(gc, &candidate, &candidate_nodemap);
> -    rc = libxl_nodemap_to_cpumap(CTX, &candidate_nodemap, &info->cpumap);
> +    /* Map the candidate's node map to the domain's info->nodemap */
> +    libxl__numa_candidate_get_nodemap(gc, &candidate, &info->nodemap);
> +
> +    /* Avoid trying to set the affinity to nodes that might be in the
> +     * candidate's nodemap but out of our cpupool. */
> +    rc = libxl_cpumap_to_nodemap(CTX, &cpupool_info.cpumap,
> +                                 &cpupool_nodemap);
>       if (rc)
>           goto out;
>   
> -    /* Avoid trying to set the affinity to cpus that might be in the
> -     * nodemap but not in our cpupool. */
> -    libxl_for_each_set_bit(i, info->cpumap) {
> -        if (!libxl_bitmap_test(&cpupool_info.cpumap, i))
> -            libxl_bitmap_reset(&info->cpumap, i);
> +    libxl_for_each_set_bit(i, info->nodemap) {
> +        if (!libxl_bitmap_test(&cpupool_nodemap, i))
> +            libxl_bitmap_reset(&info->nodemap, i);
>       }
>   
>       LOG(DETAIL, "NUMA placement candidate with %d nodes, %d cpus and "
> @@ -193,7 +195,7 @@ static int numa_place_domain(libxl__gc *
>   
>    out:
>       libxl__numa_candidate_dispose(&candidate);
> -    libxl_bitmap_dispose(&candidate_nodemap);
> +    libxl_bitmap_dispose(&cpupool_nodemap);
>       libxl_cpupoolinfo_dispose(&cpupool_info);
>       return rc;
>   }
> @@ -211,10 +213,10 @@ int libxl__build_pre(libxl__gc *gc, uint
>       /*
>        * Check if the domain has any CPU affinity. If not, try to build
>        * up one. In case numa_place_domain() find at least a suitable
> -     * candidate, it will affect info->cpumap accordingly; if it
> +     * candidate, it will affect info->nodemap accordingly; if it
>        * does not, it just leaves it as it is. This means (unless
>        * some weird error manifests) the subsequent call to
> -     * libxl_set_vcpuaffinity_all() will do the actual placement,
> +     * libxl_domain_set_nodeaffinity() will do the actual placement,
>        * whatever that turns out to be.
>        */
>       if (libxl_defbool_val(info->numa_placement)) {
> diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
> --- a/tools/libxl/libxl_numa.c
> +++ b/tools/libxl/libxl_numa.c
> @@ -184,7 +184,7 @@ static int nr_vcpus_on_nodes(libxl__gc *
>                                int vcpus_on_node[])
>   {
>       libxl_dominfo *dinfo = NULL;
> -    libxl_bitmap vcpu_nodemap;
> +    libxl_bitmap dom_nodemap, vcpu_nodemap;
>       int nr_doms, nr_cpus;
>       int i, j, k;
>   
> @@ -197,6 +197,12 @@ static int nr_vcpus_on_nodes(libxl__gc *
>           return ERROR_FAIL;
>       }
>   
> +    if (libxl_node_bitmap_alloc(CTX, &dom_nodemap, 0) < 0) {
> +        libxl_bitmap_dispose(&vcpu_nodemap);
> +        libxl_dominfo_list_free(dinfo, nr_doms);
> +        return ERROR_FAIL;
> +    }
> +
>       for (i = 0; i < nr_doms; i++) {
>           libxl_vcpuinfo *vinfo;
>           int nr_dom_vcpus;
> @@ -205,14 +211,21 @@ static int nr_vcpus_on_nodes(libxl__gc *
>           if (vinfo == NULL)
>               continue;
>   
> +        /* Retrieve the domain's node-affinity map */
> +        libxl_domain_get_nodeaffinity(CTX, dinfo[i].domid, &dom_nodemap);
> +
>           for (j = 0; j < nr_dom_vcpus; j++) {
> -            /* For each vcpu of each domain, increment the elements of
> -             * the array corresponding to the nodes where the vcpu runs */
> +            /*
> +             * For each vcpu of each domain, it must have both vcpu-affinity
> +             * and node-affinity to (a pcpu belonging to) a certain node to
> +             * cause an increment in the corresponding element of the array.
> +             */
>               libxl_bitmap_set_none(&vcpu_nodemap);
>               libxl_for_each_set_bit(k, vinfo[j].cpumap) {
>                   int node = tinfo[k].node;
>   
>                   if (libxl_bitmap_test(suitable_cpumap, k) &&
> +                    libxl_bitmap_test(&dom_nodemap, node) &&
>                       !libxl_bitmap_test(&vcpu_nodemap, node)) {
>                       libxl_bitmap_set(&vcpu_nodemap, node);
>                       vcpus_on_node[node]++;
> @@ -223,6 +236,7 @@ static int nr_vcpus_on_nodes(libxl__gc *
>           libxl_vcpuinfo_list_free(vinfo, nr_dom_vcpus);
>       }
>   
> +    libxl_bitmap_dispose(&dom_nodemap);
>       libxl_bitmap_dispose(&vcpu_nodemap);
>       libxl_dominfo_list_free(dinfo, nr_doms);
>       return 0;

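The counting rule the nr_vcpus_on_nodes() hunk above implements is: a vcpu contributes at most once per node, and only to nodes it can reach through its vcpu-affinity *and* that appear in its domain's node-affinity. A self-contained sketch of that rule, using plain arrays instead of libxl bitmaps (all names here are illustrative, not libxl API):

```c
#include <assert.h>

#define MAX_NODES 8

/*
 * For one vcpu: bump vcpus_on_node[n] once for every node n that
 * (a) hosts at least one pcpu set in the vcpu's cpumap,
 * (b) is set in the domain's node-affinity map, and
 * (c) has not already been counted for this vcpu.
 */
static void count_vcpu(const int *cpu_to_node, int nr_cpus,
                       const int *vcpu_cpumap,   /* 1 = affine to pcpu k */
                       const int *dom_nodemap,   /* 1 = node-affine      */
                       int *vcpus_on_node)
{
    int seen[MAX_NODES] = { 0 };
    int k;

    for (k = 0; k < nr_cpus; k++) {
        int node = cpu_to_node[k];

        if (vcpu_cpumap[k] && dom_nodemap[node] && !seen[node]) {
            seen[node] = 1;          /* one increment per node per vcpu */
            vcpus_on_node[node]++;
        }
    }
}
```

This is why the patch allocates and fills `dom_nodemap` before the per-vcpu loop: without the node-affinity test, a vcpu pinned to cpus of a node the domain will never run on would still inflate that node's load estimate.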

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:40:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5e8-0001W0-9v; Fri, 21 Dec 2012 16:40:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm5e6-0001Vu-1P
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:40:26 +0000
Received: from [85.158.143.35:22848] by server-2.bemta-4.messagelabs.com id
	49/EB-30861-9F094D05; Fri, 21 Dec 2012 16:40:25 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1356108022!12895430!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18973 invoked from network); 21 Dec 2012 16:40:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:40:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1573033"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 16:40:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 11:40:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm5dy-0006Tz-RC;
	Fri, 21 Dec 2012 16:40:18 +0000
Message-ID: <50D48F7D.5010001@eu.citrix.com>
Date: Fri, 21 Dec 2012 16:34:05 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<c862dee08c124ce080e0.1355944045@Solace>
In-Reply-To: <c862dee08c124ce080e0.1355944045@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 09 of 10 v2] xl: add node-affinity to the
 output of `xl list`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> Node-affinity is now something that is under (some) control of the
> user, so show it upon request as part of the output of `xl list'
> by the `-n' option.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
> ---
> Changes from v1:
>   * print_{cpu,node}map() functions added instead of 'state variable'-izing
>     print_bitmap().
>
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -2961,14 +2961,95 @@ out:
>       }
>   }
>   
> -static void list_domains(int verbose, int context, const libxl_dominfo *info, int nb_domain)
> +/* If map is not full, prints it and returns 0. Returns 1 otherwise. */
> +static int print_bitmap(uint8_t *map, int maplen, FILE *stream)
> +{
> +    int i;
> +    uint8_t pmap = 0, bitmask = 0;
> +    int firstset = 0, state = 0;
> +
> +    for (i = 0; i < maplen; i++) {
> +        if (i % 8 == 0) {
> +            pmap = *map++;
> +            bitmask = 1;
> +        } else bitmask <<= 1;
> +
> +        switch (state) {
> +        case 0:
> +        case 2:
> +            if ((pmap & bitmask) != 0) {
> +                firstset = i;
> +                state++;
> +            }
> +            continue;
> +        case 1:
> +        case 3:
> +            if ((pmap & bitmask) == 0) {
> +                fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> +                if (i - 1 > firstset)
> +                    fprintf(stream, "-%d", i - 1);
> +                state = 2;
> +            }
> +            continue;
> +        }
> +    }
> +    switch (state) {
> +        case 0:
> +            fprintf(stream, "none");
> +            break;
> +        case 2:
> +            break;
> +        case 1:
> +            if (firstset == 0)
> +                return 1;
> +        case 3:
> +            fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> +            if (i - 1 > firstset)
> +                fprintf(stream, "-%d", i - 1);
> +            break;
> +    }
> +
> +    return 0;
> +}

Just checking -- is the print_bitmap() thing pure code motion? If so, 
would you mind saying that explicitly in the commit message, just to 
save people time when reading this patch?

Other than that, looks OK to me -- I haven't done a detailed review of 
the output layout however.

  -George
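For reference, the behaviour print_bitmap() is expected to preserve across the code motion is range compression: bits {0,1,2,5} render as "0-2,5", an empty map as "none", and a fully-set map makes the function return 1 so the caller can print "any cpu"/"any node" instead. A standalone harness with the same semantics, writing to a buffer rather than a FILE* (this is a sketch, not the xl implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Render the set bits of a maplen-bit map as comma-separated ranges,
 * e.g. bits {0,1,2,5} -> "0-2,5"; an empty map renders as "none".
 * Returns 1 iff every bit is set, mirroring print_bitmap()'s contract.
 */
static int format_bitmap(const uint8_t *map, int maplen,
                         char *out, size_t outsz)
{
    int i, firstset = -1, nset = 0;
    size_t len = 0;

    out[0] = '\0';
    for (i = 0; i <= maplen; i++) {
        int bit = (i < maplen) && ((map[i / 8] >> (i % 8)) & 1);

        if (bit) {
            nset++;
            if (firstset < 0)
                firstset = i;        /* open a new run */
        } else if (firstset >= 0) {  /* close the current run */
            len += snprintf(out + len, outsz - len, "%s%d",
                            len ? "," : "", firstset);
            if (i - 1 > firstset)
                len += snprintf(out + len, outsz - len, "-%d", i - 1);
            firstset = -1;
        }
    }
    if (nset == 0)
        snprintf(out, outsz, "none");
    return nset == maplen;
}
```

Comparing this against both the moved copy and the original helps confirm the motion really is behaviour-preserving.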

> +
> +static void print_cpumap(uint8_t *map, int maplen, FILE *stream)
> +{
> +    if (print_bitmap(map, maplen, stream))
> +        fprintf(stream, "any cpu");
> +}
> +
> +static void print_nodemap(uint8_t *map, int maplen, FILE *stream)
> +{
> +    if (print_bitmap(map, maplen, stream))
> +        fprintf(stream, "any node");
> +}
> +
> +static void list_domains(int verbose, int context, int numa, const libxl_dominfo *info, int nb_domain)
>   {
>       int i;
>       static const char shutdown_reason_letters[]= "-rscw";
> +    libxl_bitmap nodemap;
> +    libxl_physinfo physinfo;
> +
> +    libxl_bitmap_init(&nodemap);
> +    libxl_physinfo_init(&physinfo);
>   
>       printf("Name                                        ID   Mem VCPUs\tState\tTime(s)");
>       if (verbose) printf("   UUID                            Reason-Code\tSecurity Label");
>       if (context && !verbose) printf("   Security Label");
> +    if (numa) {
> +        if (libxl_node_bitmap_alloc(ctx, &nodemap, 0)) {
> +            fprintf(stderr, "libxl_node_bitmap_alloc failed.\n");
> +            exit(1);
> +        }
> +        if (libxl_get_physinfo(ctx, &physinfo) != 0) {
> +            fprintf(stderr, "libxl_physinfo failed.\n");
> +            libxl_bitmap_dispose(&nodemap);
> +            exit(1);
> +        }
> +
> +        printf(" NODE Affinity");
> +    }
>       printf("\n");
>       for (i = 0; i < nb_domain; i++) {
>           char *domname;
> @@ -3002,14 +3083,23 @@ static void list_domains(int verbose, in
>               rc = libxl_flask_sid_to_context(ctx, info[i].ssidref, &buf,
>                                               &size);
>               if (rc < 0)
> -                printf("  -");
> +                printf("                -");
>               else {
> -                printf("  %s", buf);
> +                printf(" %16s", buf);
>                   free(buf);
>               }
>           }
> +        if (numa) {
> +            libxl_domain_get_nodeaffinity(ctx, info[i].domid, &nodemap);
> +
> +            putchar(' ');
> +            print_nodemap(nodemap.map, physinfo.nr_nodes, stdout);
> +        }
>           putchar('\n');
>       }
> +
> +    libxl_bitmap_dispose(&nodemap);
> +    libxl_physinfo_dispose(&physinfo);
>   }
>   
>   static void list_vm(void)
> @@ -3890,12 +3980,14 @@ int main_list(int argc, char **argv)
>       int opt, verbose = 0;
>       int context = 0;
>       int details = 0;
> +    int numa = 0;
>       int option_index = 0;
>       static struct option long_options[] = {
>           {"long", 0, 0, 'l'},
>           {"help", 0, 0, 'h'},
>           {"verbose", 0, 0, 'v'},
>           {"context", 0, 0, 'Z'},
> +        {"numa", 0, 0, 'n'},
>           {0, 0, 0, 0}
>       };
>   
> @@ -3904,7 +3996,7 @@ int main_list(int argc, char **argv)
>       int nb_domain, rc;
>   
>       while (1) {
> -        opt = getopt_long(argc, argv, "lvhZ", long_options, &option_index);
> +        opt = getopt_long(argc, argv, "lvhZn", long_options, &option_index);
>           if (opt == -1)
>               break;
>   
> @@ -3921,6 +4013,9 @@ int main_list(int argc, char **argv)
>           case 'Z':
>               context = 1;
>               break;
> +        case 'n':
> +            numa = 1;
> +            break;
>           default:
>               fprintf(stderr, "option `%c' not supported.\n", optopt);
>               break;
> @@ -3956,7 +4051,7 @@ int main_list(int argc, char **argv)
>       if (details)
>           list_domains_details(info, nb_domain);
>       else
> -        list_domains(verbose, context, info, nb_domain);
> +        list_domains(verbose, context, numa, info, nb_domain);
>   
>       if (info_free)
>           libxl_dominfo_list_free(info, nb_domain);
> @@ -4228,56 +4323,6 @@ int main_button_press(int argc, char **a
>       return 0;
>   }
>   
> -static void print_bitmap(uint8_t *map, int maplen, FILE *stream)
> -{
> -    int i;
> -    uint8_t pmap = 0, bitmask = 0;
> -    int firstset = 0, state = 0;
> -
> -    for (i = 0; i < maplen; i++) {
> -        if (i % 8 == 0) {
> -            pmap = *map++;
> -            bitmask = 1;
> -        } else bitmask <<= 1;
> -
> -        switch (state) {
> -        case 0:
> -        case 2:
> -            if ((pmap & bitmask) != 0) {
> -                firstset = i;
> -                state++;
> -            }
> -            continue;
> -        case 1:
> -        case 3:
> -            if ((pmap & bitmask) == 0) {
> -                fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> -                if (i - 1 > firstset)
> -                    fprintf(stream, "-%d", i - 1);
> -                state = 2;
> -            }
> -            continue;
> -        }
> -    }
> -    switch (state) {
> -        case 0:
> -            fprintf(stream, "none");
> -            break;
> -        case 2:
> -            break;
> -        case 1:
> -            if (firstset == 0) {
> -                fprintf(stream, "any cpu");
> -                break;
> -            }
> -        case 3:
> -            fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> -            if (i - 1 > firstset)
> -                fprintf(stream, "-%d", i - 1);
> -            break;
> -    }
> -}
> -
>   static void print_vcpuinfo(uint32_t tdomid,
>                              const libxl_vcpuinfo *vcpuinfo,
>                              uint32_t nr_cpus)
> @@ -4301,7 +4346,7 @@ static void print_vcpuinfo(uint32_t tdom
>       /*      TIM */
>       printf("%9.1f  ", ((float)vcpuinfo->vcpu_time / 1e9));
>       /* CPU AFFINITY */
> -    print_bitmap(vcpuinfo->cpumap.map, nr_cpus, stdout);
> +    print_cpumap(vcpuinfo->cpumap.map, nr_cpus, stdout);
>       printf("\n");
>   }
>   
> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -50,7 +50,8 @@ struct cmd_spec cmd_table[] = {
>         "[options] [Domain]\n",
>         "-l, --long              Output all VM details\n"
>         "-v, --verbose           Prints out UUIDs and security context\n"
> -      "-Z, --context           Prints out security context"
> +      "-Z, --context           Prints out security context\n"
> +      "-n, --numa              Prints out NUMA node affinity"
>       },
>       { "destroy",
>         &main_destroy, 0, 1,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:40:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5e8-0001W0-9v; Fri, 21 Dec 2012 16:40:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Tm5e6-0001Vu-1P
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:40:26 +0000
Received: from [85.158.143.35:22848] by server-2.bemta-4.messagelabs.com id
	49/EB-30861-9F094D05; Fri, 21 Dec 2012 16:40:25 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1356108022!12895430!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18973 invoked from network); 21 Dec 2012 16:40:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:40:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1573033"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	21 Dec 2012 16:40:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 21 Dec 2012 11:40:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Tm5dy-0006Tz-RC;
	Fri, 21 Dec 2012 16:40:18 +0000
Message-ID: <50D48F7D.5010001@eu.citrix.com>
Date: Fri, 21 Dec 2012 16:34:05 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <patchbomb.1355944036@Solace>
	<c862dee08c124ce080e0.1355944045@Solace>
In-Reply-To: <c862dee08c124ce080e0.1355944045@Solace>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 09 of 10 v2] xl: add node-affinity to the
 output of `xl list`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/12 19:07, Dario Faggioli wrote:
> Node-affinity is now something that is under (some) control of the
> user, so show it upon request as part of the output of `xl list'
> by the `-n' option.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
> ---
> Changes from v1:
>   * print_{cpu,node}map() functions added instead of 'state variable'-izing
>     print_bitmap().
>
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -2961,14 +2961,95 @@ out:
>       }
>   }
>   
> -static void list_domains(int verbose, int context, const libxl_dominfo *info, int nb_domain)
> +/* If map is not full, prints it and returns 0. Returns 1 otherwise. */
> +static int print_bitmap(uint8_t *map, int maplen, FILE *stream)
> +{
> +    int i;
> +    uint8_t pmap = 0, bitmask = 0;
> +    int firstset = 0, state = 0;
> +
> +    for (i = 0; i < maplen; i++) {
> +        if (i % 8 == 0) {
> +            pmap = *map++;
> +            bitmask = 1;
> +        } else bitmask <<= 1;
> +
> +        switch (state) {
> +        case 0:
> +        case 2:
> +            if ((pmap & bitmask) != 0) {
> +                firstset = i;
> +                state++;
> +            }
> +            continue;
> +        case 1:
> +        case 3:
> +            if ((pmap & bitmask) == 0) {
> +                fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> +                if (i - 1 > firstset)
> +                    fprintf(stream, "-%d", i - 1);
> +                state = 2;
> +            }
> +            continue;
> +        }
> +    }
> +    switch (state) {
> +        case 0:
> +            fprintf(stream, "none");
> +            break;
> +        case 2:
> +            break;
> +        case 1:
> +            if (firstset == 0)
> +                return 1;
> +        case 3:
> +            fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> +            if (i - 1 > firstset)
> +                fprintf(stream, "-%d", i - 1);
> +            break;
> +    }
> +
> +    return 0;
> +}

Just checking -- is the print_bitmap() thing pure code motion? If so, 
would you mind saying that explicitly in the commit message, just to 
save people time when reading this patch?

Other than that, looks OK to me -- I haven't done a detailed review of 
the output layout however.

  -George

> +
> +static void print_cpumap(uint8_t *map, int maplen, FILE *stream)
> +{
> +    if (print_bitmap(map, maplen, stream))
> +        fprintf(stream, "any cpu");
> +}
> +
> +static void print_nodemap(uint8_t *map, int maplen, FILE *stream)
> +{
> +    if (print_bitmap(map, maplen, stream))
> +        fprintf(stream, "any node");
> +}
> +
> +static void list_domains(int verbose, int context, int numa, const libxl_dominfo *info, int nb_domain)
>   {
>       int i;
>       static const char shutdown_reason_letters[]= "-rscw";
> +    libxl_bitmap nodemap;
> +    libxl_physinfo physinfo;
> +
> +    libxl_bitmap_init(&nodemap);
> +    libxl_physinfo_init(&physinfo);
>   
>       printf("Name                                        ID   Mem VCPUs\tState\tTime(s)");
>       if (verbose) printf("   UUID                            Reason-Code\tSecurity Label");
>       if (context && !verbose) printf("   Security Label");
> +    if (numa) {
> +        if (libxl_node_bitmap_alloc(ctx, &nodemap, 0)) {
> +            fprintf(stderr, "libxl_node_bitmap_alloc failed.\n");
> +            exit(1);
> +        }
> +        if (libxl_get_physinfo(ctx, &physinfo) != 0) {
> +            fprintf(stderr, "libxl_physinfo failed.\n");
> +            libxl_bitmap_dispose(&nodemap);
> +            exit(1);
> +        }
> +
> +        printf(" NODE Affinity");
> +    }
>       printf("\n");
>       for (i = 0; i < nb_domain; i++) {
>           char *domname;
> @@ -3002,14 +3083,23 @@ static void list_domains(int verbose, in
>               rc = libxl_flask_sid_to_context(ctx, info[i].ssidref, &buf,
>                                               &size);
>               if (rc < 0)
> -                printf("  -");
> +                printf("                -");
>               else {
> -                printf("  %s", buf);
> +                printf(" %16s", buf);
>                   free(buf);
>               }
>           }
> +        if (numa) {
> +            libxl_domain_get_nodeaffinity(ctx, info[i].domid, &nodemap);
> +
> +            putchar(' ');
> +            print_nodemap(nodemap.map, physinfo.nr_nodes, stdout);
> +        }
>           putchar('\n');
>       }
> +
> +    libxl_bitmap_dispose(&nodemap);
> +    libxl_physinfo_dispose(&physinfo);
>   }
>   
>   static void list_vm(void)
> @@ -3890,12 +3980,14 @@ int main_list(int argc, char **argv)
>       int opt, verbose = 0;
>       int context = 0;
>       int details = 0;
> +    int numa = 0;
>       int option_index = 0;
>       static struct option long_options[] = {
>           {"long", 0, 0, 'l'},
>           {"help", 0, 0, 'h'},
>           {"verbose", 0, 0, 'v'},
>           {"context", 0, 0, 'Z'},
> +        {"numa", 0, 0, 'n'},
>           {0, 0, 0, 0}
>       };
>   
> @@ -3904,7 +3996,7 @@ int main_list(int argc, char **argv)
>       int nb_domain, rc;
>   
>       while (1) {
> -        opt = getopt_long(argc, argv, "lvhZ", long_options, &option_index);
> +        opt = getopt_long(argc, argv, "lvhZn", long_options, &option_index);
>           if (opt == -1)
>               break;
>   
> @@ -3921,6 +4013,9 @@ int main_list(int argc, char **argv)
>           case 'Z':
>               context = 1;
>               break;
> +        case 'n':
> +            numa = 1;
> +            break;
>           default:
>               fprintf(stderr, "option `%c' not supported.\n", optopt);
>               break;
> @@ -3956,7 +4051,7 @@ int main_list(int argc, char **argv)
>       if (details)
>           list_domains_details(info, nb_domain);
>       else
> -        list_domains(verbose, context, info, nb_domain);
> +        list_domains(verbose, context, numa, info, nb_domain);
>   
>       if (info_free)
>           libxl_dominfo_list_free(info, nb_domain);
> @@ -4228,56 +4323,6 @@ int main_button_press(int argc, char **a
>       return 0;
>   }
>   
> -static void print_bitmap(uint8_t *map, int maplen, FILE *stream)
> -{
> -    int i;
> -    uint8_t pmap = 0, bitmask = 0;
> -    int firstset = 0, state = 0;
> -
> -    for (i = 0; i < maplen; i++) {
> -        if (i % 8 == 0) {
> -            pmap = *map++;
> -            bitmask = 1;
> -        } else bitmask <<= 1;
> -
> -        switch (state) {
> -        case 0:
> -        case 2:
> -            if ((pmap & bitmask) != 0) {
> -                firstset = i;
> -                state++;
> -            }
> -            continue;
> -        case 1:
> -        case 3:
> -            if ((pmap & bitmask) == 0) {
> -                fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> -                if (i - 1 > firstset)
> -                    fprintf(stream, "-%d", i - 1);
> -                state = 2;
> -            }
> -            continue;
> -        }
> -    }
> -    switch (state) {
> -        case 0:
> -            fprintf(stream, "none");
> -            break;
> -        case 2:
> -            break;
> -        case 1:
> -            if (firstset == 0) {
> -                fprintf(stream, "any cpu");
> -                break;
> -            }
> -        case 3:
> -            fprintf(stream, "%s%d", state > 1 ? "," : "", firstset);
> -            if (i - 1 > firstset)
> -                fprintf(stream, "-%d", i - 1);
> -            break;
> -    }
> -}
> -
>   static void print_vcpuinfo(uint32_t tdomid,
>                              const libxl_vcpuinfo *vcpuinfo,
>                              uint32_t nr_cpus)
> @@ -4301,7 +4346,7 @@ static void print_vcpuinfo(uint32_t tdom
>       /*      TIM */
>       printf("%9.1f  ", ((float)vcpuinfo->vcpu_time / 1e9));
>       /* CPU AFFINITY */
> -    print_bitmap(vcpuinfo->cpumap.map, nr_cpus, stdout);
> +    print_cpumap(vcpuinfo->cpumap.map, nr_cpus, stdout);
>       printf("\n");
>   }
>   
> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -50,7 +50,8 @@ struct cmd_spec cmd_table[] = {
>         "[options] [Domain]\n",
>         "-l, --long              Output all VM details\n"
>         "-v, --verbose           Prints out UUIDs and security context\n"
> -      "-Z, --context           Prints out security context"
> +      "-Z, --context           Prints out security context\n"
> +      "-n, --numa              Prints out NUMA node affinity"
>       },
>       { "destroy",
>         &main_destroy, 0, 1,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:49:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5mo-0001qy-M5; Fri, 21 Dec 2012 16:49:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1Tm5mn-0001qt-Bx
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:49:25 +0000
Received: from [85.158.143.35:62992] by server-1.bemta-4.messagelabs.com id
	BB/C4-28401-31394D05; Fri, 21 Dec 2012 16:49:23 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1356108555!4980543!1
X-Originating-IP: [209.85.210.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4216 invoked from network); 21 Dec 2012 16:49:17 -0000
Received: from mail-ia0-f176.google.com (HELO mail-ia0-f176.google.com)
	(209.85.210.176)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:49:17 -0000
Received: by mail-ia0-f176.google.com with SMTP id y26so4058898iab.7
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 08:49:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=L5JpJ+Xg5Z6D/iqXF4ZyETv8MYKEOW2hwzb2Cho7sjs=;
	b=DsNvqmZyHaZxzAY8OmapGeph7vddjdZlPerSkb6JUb21ogRMcOdMZYlxQLKFOTPCwJ
	Cnvy2J7JA8/XQdNL6yTtsgxAxVHGb4v+e1Z1oZIj8hzEmGQtma0X2wkdA76QP7iovRhE
	4FLfYPCRGL4zqgTSj+Y54tiHvE5CPOODgg5Gt11zfdMqJXE11wgev0yzuJupkJCYl/CX
	B2o4MSW7tUTrP9ibnZwXd3Xs5gXtM1zFyV2z1CmQ3vYpJAQENo7j0fdjl8bBNneSuhdI
	G60/hAOPlEtqZYnFLn6VYasijNM2jLQVzvorUdXiFYBa8sJudmcpURPMAwOeBsPQRfq2
	qlWQ==
MIME-Version: 1.0
Received: by 10.50.190.163 with SMTP id gr3mr13947834igc.28.1356108555374;
	Fri, 21 Dec 2012 08:49:15 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Fri, 21 Dec 2012 08:49:15 -0800 (PST)
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
	<CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
Date: Sat, 22 Dec 2012 00:49:15 +0800
X-Google-Sender-Auth: ay-mhYv777EViJT3De6z0VtT-CA
Message-ID: <CAKhsbWbtYA50rCCbAR_5=Bt+5g7Kb_BkKrVo5WF+ZdmO2o8pCw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Ross Philipson <Ross.Philipson@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> >> I guess there is no predefined macro for OpRegion size. And I guess I
>> >> need to define it twice for both code?
>> >
>> > In addition we should think about defining the IGD OpRegion in ACPI
>> per the spec (cited earlier). Guest drivers seem to find the region just
>> by reading the ASLS register in the gfx device's config space but it
>> would be more correct to define it in ACPI too. Just a thought.
>>
>> Is it a requirement for the patch to be accepted? Or, are you saying
>> that this should not be IGD passthrough specific?
>> I'm not sure what you refer to by 'ACPI' here. Is it the spec itself or
>> a header in your source code?
>> I'm sorry to ask, but I'm just an unlucky user trying to fix my box.
>>
>> The ASLS register is just the documented way to communicate the OpRegion
>> you can find in the spec.
>> There are a lot of details in the spec. But as long as we are not going
>> to emulate it, only the size is relevant here, I believe.
>
> I guess all I am pointing out is that the IGD OpRegion is supposed to be defined in ACPI (that is actually why it is called an OpRegion). On all the systems I have seen it is in the DSDT (look for IDGP and IGDM). The OpRegion declaration actually tells you how big the region is and where its base is. It should probably only be there in the case where you are passing in the igfx device, but this could be done by loading an auxiliary SSDT table. In addition to the IGD definition, the BIOSes on these systems also define other NVS regions related to graphics which might be useful, at least in determining their size and layout.

Great! So you are the expert on this firmware. I also tried to
grep through these ACPI tables but did not find anything useful.
I can see that IGDM has a matching size, but the base refers to an
entry called 'ASLB', which is not directly visible from the dump.

The Intel OpRegion spec gave me the impression that it was part of the
ACPI spec, and you are saying that it is not! What astonishing news
to me.

> I have been thinking about this in relation to our igfx passthrough support. We see some odd behaviors here and there with igfx pt and the guest drivers (which we have no control over in, say, the Windows case). I don't currently have hard evidence that these missing BIOS bits are causing any specific problems, but they are an inconsistency with respect to the spec and on my list of suspects.

Yes, my Windows 7 guest is totally broken with IGD passthrough (much worse
than the Linux status).
Before I bought my current build, sources like wikis seemed to suggest
that IGD was the first card that works.
And now it seems the AMD cards are the best choice for pass-through.
Sad news for me.

Thanks,
Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:55:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5rv-00021P-Ff; Fri, 21 Dec 2012 16:54:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5rt-00021I-OK
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:54:41 +0000
Received: from [85.158.138.51:7223] by server-15.bemta-3.messagelabs.com id
	DC/79-07921-15494D05; Fri, 21 Dec 2012 16:54:41 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1356108880!29967523!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked

From xen-devel-bounces@lists.xen.org Fri Dec 21 16:55:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 16:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5rv-00021P-Ff; Fri, 21 Dec 2012 16:54:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm5rt-00021I-OK
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 16:54:41 +0000
Received: from [85.158.138.51:7223] by server-15.bemta-3.messagelabs.com id
	DC/79-07921-15494D05; Fri, 21 Dec 2012 16:54:41 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1356108880!29967523!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3020 invoked from network); 21 Dec 2012 16:54:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 16:54:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="309117"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 16:54:40 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 16:54:39 +0000
Message-ID: <1356108878.15403.87.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Dec 2012 17:54:38 +0100
In-Reply-To: <50D48F7D.5010001@eu.citrix.com>
References: <patchbomb.1355944036@Solace>
	<c862dee08c124ce080e0.1355944045@Solace>
	<50D48F7D.5010001@eu.citrix.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>, Dan
	Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 09 of 10 v2] xl: add node-affinity to the
 output of `xl list`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0630302041379138723=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0630302041379138723==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-kCLvYiQeJBRRXsDPoSaY"

--=-kCLvYiQeJBRRXsDPoSaY
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 16:34 +0000, George Dunlap wrote:=20
> Just checking -- is the print_bitmap() thing pure code motion? If so,=20
> would you mind saying that explicitly in the commit message, just to=20
> save people time when reading this patch?
>=20
It's _mostly_ code motion, but I had to hack it a tiny little bit to
make it possible to use the same function for printing cpu and node
bitmaps (basically, when all bits are set, the function was printing
"any cpu" itself, which wasn't fitting well with node maps).

But yeah, I should have mentioned what I just put here in the changelog.
Will do in v3.
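
To make the tweak above concrete, here is a minimal sketch (hypothetical
names, not the actual libxl/xl code) of a generic bitmap printer that does
NOT special-case "all bits set" itself, leaving the "any cpu"/"any node"
decision to the caller:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static int bit_set(const unsigned char *map, int i)
{
    return map[i / 8] & (1 << (i % 8));
}

static int bitmap_all_set(const unsigned char *map, int nbits)
{
    for (int i = 0; i < nbits; i++)
        if (!bit_set(map, i))
            return 0;
    return 1;
}

/* Render the set bits as "0-2,5"-style ranges into buf. */
static void print_bitmap(const unsigned char *map, int nbits,
                         char *buf, size_t len)
{
    size_t off = 0;
    int first = 1;

    buf[0] = '\0';
    for (int i = 0; i < nbits; i++) {
        if (bit_set(map, i)) {
            int j = i;
            /* collapse a run of consecutive set bits into "i-j" */
            while (j + 1 < nbits && bit_set(map, j + 1))
                j++;
            off += snprintf(buf + off, len - off, "%s%d",
                            first ? "" : ",", i);
            if (j > i)
                off += snprintf(buf + off, len - off, "-%d", j);
            first = 0;
            i = j;
        }
    }
}
```

A cpu-map caller would check bitmap_all_set() and emit "any cpu" itself,
while a node-map caller would emit "any node" instead, so the shared
helper stays agnostic.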

> Other than that, looks OK to me -- I haven't done a detailed review of=
=20
> the output layout however.
>=20
That's the most tricky part of this patch! :-)

For your and others' convenience, here's how it looks on my testbox
(sorry for the extra-long lines, but that's not my fault!):

root@Zhaman:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0   375    16     r-----     2=
25.1
vm1                                          1   960     2     -b----      =
34.0
root@Zhaman:~# xl list -Z
Name                                        ID   Mem VCPUs	State	Time(s)   =
Security Label
Domain-0                                     0   375    16     r-----     2=
25.4                -
vm1                                          1   960     2     -b----      =
34.0                -
root@Zhaman:~# xl list -v
Name                                        ID   Mem VCPUs	State	Time(s)   =
UUID                            Reason-Code	Security Label
Domain-0                                     0   375    16     r-----     2=
26.6 00000000-0000-0000-0000-000000000000        -                -
vm1                                          1   960     2     -b----      =
34.2 e36429cc-d2a2-4da7-b21d-b053f725e7a7        -                -

root@Zhaman:~# xl list -n
Name                                        ID   Mem VCPUs	State	Time(s) NO=
DE Affinity
Domain-0                                     0   375    16     r-----     2=
26.8 any node
vm1                                          1   960     2     -b----      =
34.2 0
root@Zhaman:~# xl list -nZ
Name                                        ID   Mem VCPUs	State	Time(s)   =
Security Label NODE Affinity
Domain-0                                     0   375    16     r-----     2=
26.9                - any node
vm1                                          1   960     2     -b----      =
34.2                - 0
root@Zhaman:~# xl list -nv
Name                                        ID   Mem VCPUs	State	Time(s)   =
UUID                            Reason-Code	Security Label NODE Affinity
Domain-0                                     0   375    16     r-----     2=
27.0 00000000-0000-0000-0000-000000000000        -                - any nod=
e
vm1                                          1   960     2     -b----      =
34.2 e36429cc-d2a2-4da7-b21d-b053f725e7a7        -                - 0

Reasonable enough?

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-kCLvYiQeJBRRXsDPoSaY
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUlE4ACgkQk4XaBE3IOsSy9gCfTqDOQiTXE83+LTkgTtsAAQ1d
9P4AnjWiV4M40zW+dKyNLxcRdEWuQFxv
=mELr
-----END PGP SIGNATURE-----

--=-kCLvYiQeJBRRXsDPoSaY--


--===============0630302041379138723==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0630302041379138723==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xY-0002Dq-EN; Fri, 21 Dec 2012 17:00:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cf-79
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [85.158.143.99:37986] by server-2.bemta-4.messagelabs.com id
	00/0E-30861-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!4
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25748 invoked from network); 21 Dec 2012 17:00:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309221"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:19 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:19 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:07 +0100
Message-ID: <1356109208-6830-10-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 09/10] hotplug: document new hotplug
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

TWVudGlvbiB0aGUgbmV3IGRpc2tzcGVjIHBhcmFtZXRlciBhbmQgYWRkIGEgZG9jdW1lbnQgZXhw
bGFpbmluZyB0aGUKbmV3IGhvdHBsdWcgaW50ZXJmYWNlLgoKU2lnbmVkLW9mZi1ieTogUm9nZXIg
UGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+Ci0tLQogZG9jcy9taXNjL2xpYnhsLWhv
dHBsdWctaW50ZXJmYWNlLnR4dCB8ICAxNTYgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrCiBkb2NzL21pc2MveGwtZGlzay1jb25maWd1cmF0aW9uLnR4dCAgIHwgICAxMSArKysKIDIg
ZmlsZXMgY2hhbmdlZCwgMTY3IGluc2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUg
bW9kZSAxMDA2NDQgZG9jcy9taXNjL2xpYnhsLWhvdHBsdWctaW50ZXJmYWNlLnR4dAoKZGlmZiAt
LWdpdCBhL2RvY3MvbWlzYy9saWJ4bC1ob3RwbHVnLWludGVyZmFjZS50eHQgYi9kb2NzL21pc2Mv
bGlieGwtaG90cGx1Zy1pbnRlcmZhY2UudHh0Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAw
MDAwMDAuLjUxODdlNDQKLS0tIC9kZXYvbnVsbAorKysgYi9kb2NzL21pc2MvbGlieGwtaG90cGx1
Zy1pbnRlcmZhY2UudHh0CkBAIC0wLDAgKzEsMTU2IEBACisgICAgICAgICAgICAgICAgICAgICAt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLQorICAgICAgICAgICAgICAgICAgICAgTElCWEwgSE9UUExV
RyBJTlRFUkZBQ0UKKyAgICAgICAgICAgICAgICAgICAgIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
CisKK1RoaXMgZG9jdW1lbnQgc3BlY2lmaWVzIHRoZSBuZXcgbGlieGwgaG90cGx1ZyBpbnRlcmZh
Y2UuIFRoaXMgbmV3CitpbnRlcmZhY2UgaGFzIGJlZW4gZGVzaWduZWQgdG8gb3BlcmF0ZSBiZXR0
ZXIgd2l0aCBjb21wbGV4IGhvdHBsdWcKK3NjcmlwdHMsIHRoYXQgbmVlZCB0byBwZXJmb3JtIHNl
dmVyYWwgb3BlcmF0aW9ucyBhbmQgY2FuIHRha2UgYQorY29uc2lkZXJhYmxlIHRpbWUgdG8gZXhl
Y3V0ZS4KKworSG90cGx1ZyBzY3JpcHRzIGFyZSBleHBlY3RlZCB0byB0YWtlIGEgcGFyYW1ldGVy
LCBwYXNzZWQgYnkgdGhlIGNhbGxlcgorYW5kIHByb3ZpZGUgYSBibG9jayBkZXZpY2UgYXMgb3V0
cHV0LCB0aGF0IHdpbGwgYmUgYXR0YWNoZWQgdG8gdGhlIGd1ZXN0LgorCisKKworPT09PT09PT09
PT09PT09PT09PT09CitFTlZJUk9OTUVOVCBWQVJJQUJMRVMKKz09PT09PT09PT09PT09PT09PT09
PQorCisKK1RoZSBmb2xsb3dpbmcgZW52aXJvbm1lbnQgdmFyaWFibGVzIHdpbGwgYmUgc2V0IGJl
Zm9yZSBjYWxsaW5nCit0aGUgaG90cGx1ZyBzY3JpcHQuCisKKworSE9UUExVR19QQVRICistLS0t
LS0tLS0tLS0KKworUG9pbnRzIHRvIHRoZSB4ZW5zdG9yZSBkaXJlY3RvcnkgdGhhdCBob2xkcyBp
bmZvcm1hdGlvbiByZWxhdGl2ZQordG8gdGhpcyBob3RwbHVnIHNjcmlwdC4gQXQgcHJlc2VudCBv
bmx5IG9uZSBwYXJhbWV0ZXIgaXMgcGFzc2VkIGJ5Cit0aGUgdG9vbHN0YWNrLCB0aGUgInBhcmFt
cyIgeGVuc3RvcmUgZW50cnkgd2hpY2ggY29udGFpbnMgdGhlICJ0YXJnZXQiCitsaW5lIHNwZWNp
ZmllZCBpbiB0aGUgZGlza3NwZWMgeGwgZGlzayBjb25maWd1cmF0aW9uIChwZGV2X3BhdGggaW4K
K2xpYnhsX2RldmljZV9kaXNrIHN0cnVjdCkuCisKK1RoaXMgeGVuc3RvcmUgZGlyZWN0b3J5IHdp
bGwgYmUgdXNlZCB0byBjb21tdW5pY2F0ZSBiZXR3ZWVuIHRoZQoraG90cGx1ZyBzY3JpcHQgYW5k
IHRoZSB0b29sc3RhY2ssIGFuZCBpdCBjYW4gYWxzbyBiZSB1c2VkIGJ5IHRoZQoraG90cGx1ZyBz
Y3JpcHQgdG8gc3RvcmUgdGVtcG9yYXJ5IGluZm9ybWF0aW9uLiBUaGlzIGRpcmVjdG9yeSBpcyBj
cmVhdGVkCitiZWZvcmUgY2FsbGluZyB0aGUgInByZXBhcmUiIG9wZXJhdGlvbiwgYW5kIHRoZSB0
b29sc3RhY2sgZ3VhcmFudGVlcwordGhhdCBpdCB3aWxsIG5vdCBiZSByZW1vdmVkIGJlZm9yZSB0
aGUgInVucHJlcGFyZSIgb3BlcmF0aW9uIGhhcyBiZWVuCitmaW5pc2hlZC4gQWZ0ZXIgdGhhdCwg
dGhlIHRvb2xzdGFjayB3aWxsIHRha2UgdGhlIGFwcHJvcHJpYXRlIGFjdGlvbnMKK3RvIHJlbW92
ZSBpdC4gVGhlIHRvb2xzdGFjayBndWFyYW50ZWVzIHRoYXQgSE9UUExVR19QQVRIIHdpbGwgYWx3
YXlzCitwb2ludCB0byBhIHZhbGlkIHhlbnN0b3JlIHBhdGggZm9yIGFsbCBvcGVyYXRpb25zLgor
CitUaGUgcGF0aCBvZiB0aGlzIGRpcmVjdG9yeSBmb2xsb3dzIHRoZSBzeW50YXg6CisKKy9sb2Nh
bC9kb21haW4vPGxvY2FsX2RvbWlkPi9saWJ4bC9ob3RwbHVnLzxndWVzdF9kb21pZD4vPGRldmlj
ZV9pZD4KKworKE5vdGUgdGhhdCB0aGVyZSBpcyBubyBlbmQgc2xhc2ggYXBwZW5kZWQgdG8gSE9U
UExVR19QQVRIKQorCisKK0JBQ0tFTkRfUEFUSAorLS0tLS0tLS0tLS0tCisKK1BvaW50cyB0byB0
aGUgeGVuc3RvcmUgYmFja2VuZCBwYXRoIG9mIHRoZSBjb3JyZXNwb25kaW5nIGJsb2NrIGRldmlj
ZS4KK1NpbmNlIGhvdHBsdWcgc2NyaXB0cyBhcmUgYWx3YXlzIGV4ZWN1dGVkIGluIHRoZSBEb21h
aW4gdGhhdCBhY3RzIGFzCitiYWNrZW5kIGZvciBhIGRldmljZSwgaXQgd2lsbCBhbHdheXMgaGF2
ZSB0aGUgZm9sbG93aW5nIHN5bnRheDoKKworL2xvY2FsL2RvbWFpbi88bG9jYWxfZG9tYWluPi9i
YWNrZW5kL3ZiZC88Z3Vlc3RfZG9taWQ+LzxkZXZpY2VfaWQ+CisKKyhOb3RlIHRoYXQgdGhlcmUg
aXMgbm8gZW5kIHNsYXNoIGFwcGVuZGVkIHRvIEJBQ0tFTkRfUEFUSCkKKworVGhpcyBlbnZpcm9u
bWVudCB2YXJpYWJsZSBpcyBub3Qgc2V0IGZvciBhbGwgb3BlcmF0aW9ucywgc2luY2Ugc29tZQor
aG90cGx1ZyBvcGVyYXRpb25zIGFyZSBleGVjdXRlZCBiZWZvcmUgdGhlIGJhY2tlbmQgeGVuc3Rv
cmUgaXMgc2V0IHVwLgorCisKKworPT09PT09PT09PT09PT09PT09PT09PT0KK0NPTU1BTkQgTElO
RSBQQVJBTUVURVJTCis9PT09PT09PT09PT09PT09PT09PT09PQorCisKK1NjcmlwdCB3aWxsIGJl
IGNhbGxlZCB3aXRoIG9ubHkgb25lIHBhcmFtZXRlciwgdGhhdCBpcyBlaXRoZXIgcHJlcGFyZSwK
K2FkZCwgcmVtb3ZlLCB1bnByZXBhcmUsIGxvY2FsYXR0YWNoIG9yIGxvY2FsZGV0YWNoLgorCisK
K1BSRVBBUkUKKy0tLS0tLS0KKworVGhpcyBpcyB0aGUgZmlyc3Qgb3BlcmF0aW9uIHRoYXQgdGhl
IGhvdHBsdWcgc2NyaXB0IHdpbGwgYmUgcmVxdWVzdGVkIHRvCitleGVjdXRlLiBUaGlzIG9wZXJh
dGlvbiBpcyBleGVjdXRlZCBiZWZvcmUgdGhlIGRpc2sgaXMgY29ubmVjdGVkLCB0bworZ2l2ZSB0
aGUgaG90cGx1ZyBzY3JpcHQgdGhlIGNoYW5jZSB0byBvZmZsb2FkIHNvbWUgd29yayBmcm9tIHRo
ZSAiYWRkIgorb3BlcmF0aW9uLCB0aGF0IGlzIHBlcmZvcm1lZCBsYXRlci4KKworQkFDS0VORF9Q
QVRIOiBub3QgdmFsaWQKKworRXhwZWN0ZWQgb3V0cHV0OgorTm9uZQorCisKK0FERAorLS0tCisK
K1RoaXMgb3BlcmF0aW9uIHNob3VsZCBjb25uZWN0IHRoZSBkZXZpY2UgdG8gdGhlIGRvbWFpbi4g
V2lsbCBvbmx5IGJlIGNhbGxlZAorYWZ0ZXIgdGhlICJwcmVwYXJlIiBvcGVyYXRpb24gaGFzIGZp
bmlzaGVkIHN1Y2Nlc3NmdWxseS4KKworQkFDS0VORF9QQVRIOiB2YWxpZAorCitFeHBlY3RlZCBv
dXRwdXQ6CitCQUNLRU5EX1BBVEgvcGh5c2ljYWwtZGV2aWNlID0gYmxvY2sgZGV2aWNlIG1ham9y
Om1pbm9yCitCQUNLRU5EX1BBVEgvcGFyYW1zID0gYmxvY2sgZGV2aWNlIHBhdGgKK0hPVFBMVUdf
UEFUSC9wZGV2ID0gYmxvY2sgZGV2aWNlIHBhdGgKKworCitSRU1PVkUKKy0tLS0tLQorCisKK0Rp
c2Nvbm5lY3RzIGEgYmxvY2sgZGV2aWNlIGZyb20gYSBkb21haW4uIFdpbGwgb25seSBiZSBjYWxs
ZWQKK2FmdGVyIHRoZSAicHJlcGFyZSIgb3BlcmF0aW9uIGhhcyBmaW5pc2hlZCBzdWNjZXNzZnVs
bHkuIEltcGxlbWVudGF0aW9ucworc2hvdWxkIHRha2UgaW50byBhY2NvdW50IHRoYXQgdGhlICJy
ZW1vdmUiIG9wZXJhdGlvbiB3aWxsIGFsc28gYmUgY2FsbGVkIGlmCit0aGUgImFkZCIgb3BlcmF0
aW9uIGhhcyBmYWlsZWQuCisKK0JBQ0tFTkRfUEFUSDogdmFsaWQKKworRXhwZWN0ZWQgb3V0cHV0
OgorTm9uZQorCisKK0xPQ0FMQVRUQUNICistLS0tLS0tLS0tLQorCisKK0NyZWF0ZXMgYSBibG9j
ayBkZXZpY2UgaW4gdGhlIGN1cnJlbnQgZG9tYWluIHRoYXQgcG9pbnRzIHRvIHRoZSBndWVzdAor
ZGlzayBkZXZpY2UuIFdpbGwgb25seSBiZSBjYWxsZWQgYWZ0ZXIgdGhlICJwcmVwYXJlIiBvcGVy
YXRpb24gaGFzCitmaW5pc2hlZCBzdWNjZXNzZnVsbHkuCisKK0JBQ0tFTkRfUEFUSDogbm90IHZh
bGlkCisKK0V4cGVjdGVkIG91dHB1dDoKK0hPVFBMVUdfUEFUSC9wZGV2ID0gYmxvY2sgZGV2aWNl
IHBhdGgKKworCitMT0NBTERFVEFDSAorLS0tLS0tLS0tLS0KKworCitEaXNjb25uZWN0cyBhIGRl
dmljZSAocHJldmlvdXNseSBjb25uZWN0ZWQgd2l0aCB0aGUgbG9jYWxhdHRhY2gKK29wZXJhdGlv
bikgZnJvbSB0aGUgY3VycmVudCBkb21haW4uIFdpbGwgb25seSBiZSBjYWxsZWQgYWZ0ZXIKK3Ro
ZSAicHJlcGFyZSIgb3BlcmF0aW9uIGhhcyBmaW5pc2hlZCBzdWNjZXNzZnVsbHkuIEltcGxlbWVu
dGF0aW9ucworc2hvdWxkIHRha2UgaW50byBhY2NvdW50IHRoYXQgdGhlICJsb2NhbGRldGFjaCIg
b3BlcmF0aW9uIHdpbGwKK2Fsc28gYmUgY2FsbGVkIGlmIHRoZSAibG9jYWxhdHRhY2giIG9wZXJh
dGlvbiBoYXMgZmFpbGVkLgorCitCQUNLRU5EX1BBVEg6IG5vdCB2YWxpZAorCitFeHBlY3RlZCBv
dXRwdXQ6CitOb25lCisKKworVU5QUkVQQVJFCistLS0tLS0tLS0KKworCitQZXJmb3JtcyB0aGUg
bmVjZXNzYXJ5IGFjdGlvbnMgdG8gdW5wcmVwYXJlIHRoZSBkZXZpY2UuCisKK0JBQ0tFTkRfUEFU
SDogbm90IHZhbGlkCisKK0V4cGVjdGVkIG91dHB1dDoKK05vbmUKZGlmZiAtLWdpdCBhL2RvY3Mv
bWlzYy94bC1kaXNrLWNvbmZpZ3VyYXRpb24udHh0IGIvZG9jcy9taXNjL3hsLWRpc2stY29uZmln
dXJhdGlvbi50eHQKaW5kZXggODZjMTZiZS4uZDc4YWNkYiAxMDA2NDQKLS0tIGEvZG9jcy9taXNj
L3hsLWRpc2stY29uZmlndXJhdGlvbi50eHQKKysrIGIvZG9jcy9taXNjL3hsLWRpc2stY29uZmln
dXJhdGlvbi50eHQKQEAgLTE2Niw2ICsxNjYsMTcgQEAgaW5mb3JtYXRpb24gdG8gYmUgaW50ZXJw
cmV0ZWQgYnkgdGhlIGV4ZWN1dGFibGUgcHJvZ3JhbSA8c2NyaXB0PiwKIFRoZXNlIHNjcmlwdHMg
YXJlIG5vcm1hbGx5IGNhbGxlZCAiYmxvY2stPHNjcmlwdD4iLgogCiAKK21ldGhvZD08c2NyaXB0
PgorLS0tLS0tLS0tLS0tLS0tCisKK1NwZWNpZmllcyB0aGF0IDx0YXJnZXQ+IGlzIG5vdCBhIG5v
cm1hbCBob3N0IHBhdGgsIGJ1dCByYXRoZXIKK2luZm9ybWF0aW9uIHRvIGJlIGludGVycHJldGVk
IGJ5IHRoZSBleGVjdXRhYmxlIHByb2dyYW0gPHNjcmlwdD4sCisobG9va2VkIGZvciBpbiAvZXRj
L3hlbi9zY3JpcHRzLCBpZiBpdCBkb2Vzbid0IGNvbnRhaW4gYSBzbGFzaCkuCisKK1RoZSBzY3Jp
cHQgcGFzc2VkIGFzIHBhcmFtZXRlciBzaG91bGQgc3VwcG9ydCB0aGUgbmV3IGhvdHBsdWcKK3Nj
cmlwdCBpbnRlcmZhY2UsIHdoaWNoIGlzIGRlZmluZWQgaW4gbGlieGwtaG90cGx1Zy1pbnRlcmZh
Y2UudHh0LgorCisKIAogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT0KIERFUFJFQ0FURUQgUEFSQU1FVEVSUywgUFJFRklYRVMgQU5EIFNZTlRBWEVTCi0tIAoxLjcu
Ny41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9y
ZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xb-0002G6-LO; Fri, 21 Dec 2012 17:00:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xY-0002Da-Ky
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:32 +0000
Received: from [193.109.254.147:27513] by server-16.bemta-14.messagelabs.com
	id AB/5C-18932-0B594D05; Fri, 21 Dec 2012 17:00:32 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11609 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309212"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:16 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:15 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 17:59:59 +0100
Message-ID: <1356109208-6830-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 01/10] libxl: libxl__prepare_ao_device
	should reset num_exec
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

bnVtX2V4ZWMgd2FzIG5vdCBjbGVhcmVkIHdoZW4gY2FsbGluZyBsaWJ4bF9fcHJlcGFyZV9hb19k
ZXZpY2UuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYyB8ICAgIDEgKwogMSBmaWxlcyBj
aGFuZ2VkLCAxIGluc2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9v
bHMvbGlieGwvbGlieGxfZGV2aWNlLmMgYi90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwppbmRl
eCA1MWRkMDZlLi41OGQzZjM1IDEwMDY0NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9kZXZpY2Uu
YworKysgYi90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwpAQCAtNDA5LDYgKzQwOSw3IEBAIHZv
aWQgbGlieGxfX3ByZXBhcmVfYW9fZGV2aWNlKGxpYnhsX19hbyAqYW8sIGxpYnhsX19hb19kZXZp
Y2UgKmFvZGV2KQogICAgIGFvZGV2LT5hbyA9IGFvOwogICAgIGFvZGV2LT5yYyA9IDA7CiAgICAg
YW9kZXYtPmRldiA9IE5VTEw7CisgICAgYW9kZXYtPm51bV9leGVjID0gMDsKICAgICAvKiBJbml0
aWFsaXplIHRpbWVyIGZvciBRRU1VIEJvZGdlIGFuZCBob3RwbHVnIGV4ZWN1dGlvbiAqLwogICAg
IGxpYnhsX19ldl90aW1lX2luaXQoJmFvZGV2LT50aW1lb3V0KTsKICAgICBhb2Rldi0+YWN0aXZl
ID0gMTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZl
bEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xa-0002Ew-Bv; Fri, 21 Dec 2012 17:00:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xX-0002D9-6w
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:31 +0000
Received: from [193.109.254.147:4051] by server-4.bemta-14.messagelabs.com id
	05/1A-15233-EA594D05; Fri, 21 Dec 2012 17:00:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!7
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11690 invoked from network); 21 Dec 2012 17:00:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309222"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:19 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:19 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:08 +0100
Message-ID: <1356109208-6830-11-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 10/10] hotplug/Linux: add iscsi block
	hotplug script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBob3RwbHVnIHNjcmlwdCBoYXMgYmVlbiB0ZXN0ZWQgd2l0aCBJRVQgYW5kIE5ldEJTRCBp
U0NTSSB0YXJnZXRzLAp3aXRob3V0IGF1dGhlbnRpY2F0aW9uLgoKQXV0aGVudGljYXRpb24gcGFy
YW1ldGVycywgaW5jbHVkaW5nIHRoZSBwYXNzd29yZCBhcmUgcGFzc2VkIGFzCnBhcmFtZXRlcnMg
dG8gaXNjc2lhZG0sIHdoaWNoIGlzIG5vdCByZWNvbW1lbmRlZCBiZWNhdXNlIG90aGVyIHVzZXJz
Cm9mIHRoZSBzeXN0ZW0gY2FuIHNlZSB0aGVtLiBUaGlzIHBhcmFtZXRlcnMgY291bGQgYWxzbyBi
ZSBzZXQgYnkKZWRpdGluZyBhIGNvcnJlc3BvbmRpbmcgZmlsZSBkaXJlY3RseSwgYnV0IHRoZSBs
b2NhdGlvbiBvZiB0aGlzIGZpbGUKc2VlbXMgdG8gYmUgZGlmZmVyZW50IGRlcGVuZGluZyBvbiB0
aGUgZGlzdHJpYnV0aW9uIHVzZWQuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29scy9ob3RwbHVnL0xpbnV4L01ha2VmaWxlICAg
IHwgICAgMSArCiB0b29scy9ob3RwbHVnL0xpbnV4L2Jsb2NrLWlzY3NpIHwgIDI1NCArKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKIDIgZmlsZXMgY2hhbmdlZCwgMjU1IGlu
c2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvaG90
cGx1Zy9MaW51eC9ibG9jay1pc2NzaQoKZGlmZiAtLWdpdCBhL3Rvb2xzL2hvdHBsdWcvTGludXgv
TWFrZWZpbGUgYi90b29scy9ob3RwbHVnL0xpbnV4L01ha2VmaWxlCmluZGV4IDA2MDU1NTkuLjk4
Yzc3MzggMTAwNjQ0Ci0tLSBhL3Rvb2xzL2hvdHBsdWcvTGludXgvTWFrZWZpbGUKKysrIGIvdG9v
bHMvaG90cGx1Zy9MaW51eC9NYWtlZmlsZQpAQCAtMjEsNiArMjEsNyBAQCBYRU5fU0NSSVBUUyAr
PSBibGt0YXAKIFhFTl9TQ1JJUFRTICs9IHhlbi1ob3RwbHVnLWNsZWFudXAKIFhFTl9TQ1JJUFRT
ICs9IGV4dGVybmFsLWRldmljZS1taWdyYXRlCiBYRU5fU0NSSVBUUyArPSB2c2NzaQorWEVOX1ND
UklQVFMgKz0gYmxvY2staXNjc2kKIFhFTl9TQ1JJUFRfREFUQSA9IHhlbi1zY3JpcHQtY29tbW9u
LnNoIGxvY2tpbmcuc2ggbG9nZ2luZy5zaAogWEVOX1NDUklQVF9EQVRBICs9IHhlbi1ob3RwbHVn
LWNvbW1vbi5zaCB4ZW4tbmV0d29yay1jb21tb24uc2ggdmlmLWNvbW1vbi5zaAogWEVOX1NDUklQ
VF9EQVRBICs9IGJsb2NrLWNvbW1vbi5zaApkaWZmIC0tZ2l0IGEvdG9vbHMvaG90cGx1Zy9MaW51
eC9ibG9jay1pc2NzaSBiL3Rvb2xzL2hvdHBsdWcvTGludXgvYmxvY2staXNjc2kKbmV3IGZpbGUg
bW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMC4uZGRiOTZjNQotLS0gL2Rldi9udWxsCisrKyBiL3Rv
b2xzL2hvdHBsdWcvTGludXgvYmxvY2staXNjc2kKQEAgLTAsMCArMSwyNTQgQEAKKyMhL2Jpbi9z
aAorIworIyBPcGVuLWlTQ1NJIFhlbiBibG9jayBkZXZpY2UgaG90cGx1ZyBzY3JpcHQKKyMKKyMg
QXV0aG9yIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgorIworIyBUaGlz
IHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29y
IG1vZGlmeQorIyBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBMZXNzZXIgR2VuZXJhbCBQ
dWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQKKyMgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbjsgdmVyc2lvbiAyLjEgb25seS4gd2l0aCB0aGUgc3BlY2lhbAorIyBleGNlcHRpb24gb24g
bGlua2luZyBkZXNjcmliZWQgaW4gZmlsZSBMSUNFTlNFLgorIworIyBUaGlzIHByb2dyYW0gaXMg
ZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyMgYnV0IFdJ
VEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK
KyMgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAg
U2VlIHRoZQorIyBHTlUgTGVzc2VyIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0
YWlscy4KKworcmVtb3ZlX2xhYmVsKCkKK3sKKyAgICBlY2hvICQxIHwgc2VkICJzL15cKCIkMiJc
KS8vIgorfQorCitjaGVja190b29scygpCit7CisgICAgaWYgISB0eXBlIGlzY3NpYWRtID4gL2Rl
di9udWxsIDI+JjE7IHRoZW4KKyAgICAgICAgZWNobyAiVW5hYmxlIHRvIGZpbmQgaXNjc2lhZG0g
dG9vbCIKKyAgICAgICAgcmV0dXJuIDEKKyAgICBmaQorICAgIGlmIFsgIiRtdWx0aXBhdGgiID0g
InkiIF0gJiYgISB0eXBlIG11bHRpcGF0aCA+IC9kZXYvbnVsbCAyPiYxOyB0aGVuCisgICAgICAg
IGVjaG8gIlVuYWJsZSB0byBmaW5kIG11bHRpcGF0aCIKKyAgICAgICAgcmV0dXJuIDEKKyAgICBm
aQorfQorCisjIFNldHMgdGhlIGZvbGxvd2luZyBnbG9iYWwgdmFyaWFibGVzIGJhc2VkIG9uIHRo
ZSBwYXJhbXMgZmllbGQgcGFzc2VkIGluIGFzCisjIGEgcGFyYW1ldGVyOiBpcW4sIHBvcnRhbCwg
YXV0aF9tZXRob2QsIHVzZXIsIG11bHRpcGF0aCwgcGFzc3dvcmQKK3BhcnNlX3RhcmdldCgpCit7
CisgICAgIyBzZXQgbXVsdGlwYXRoIGRlZmF1bHQgdmFsdWUKKyAgICBtdWx0aXBhdGg9Im4iCisg
ICAgZm9yIHBhcmFtIGluICQoZWNobyAiJDEiIHwgdHIgIiwiICJcbiIpCisgICAgZG8KKyAgICAg
ICAgaWYgWyAtbiAiJHBhc3N3b3JkIiBdOyB0aGVuCisgICAgICAgICAgICBwYXNzd29yZD0iJHBh
c3N3b3JkLCRwYXJhbSIKKyAgICAgICAgICAgIGNvbnRpbnVlCisgICAgICAgIGZpCisgICAgICAg
IGNhc2UgJHBhcmFtIGluCisgICAgICAgIGlxbj0qKQorICAgICAgICAgICAgaXFuPSQocmVtb3Zl
X2xhYmVsICRwYXJhbSAiaXFuPSIpCisgICAgICAgICAgICA7OworICAgICAgICBwb3J0YWw9KikK
KyAgICAgICAgICAgIHBvcnRhbD0kKHJlbW92ZV9sYWJlbCAkcGFyYW0gInBvcnRhbD0iKQorICAg
ICAgICAgICAgOzsKKyAgICAgICAgYXV0aF9tZXRob2Q9KikKKyAgICAgICAgICAgIGF1dGhfbWV0
aG9kPSQocmVtb3ZlX2xhYmVsICRwYXJhbSAiYXV0aF9tZXRob2Q9IikKKyAgICAgICAgICAgIDs7
CisgICAgICAgIHVzZXI9KikKKyAgICAgICAgICAgIHVzZXI9JChyZW1vdmVfbGFiZWwgJHBhcmFt
ICJ1c2VyPSIpCisgICAgICAgICAgICA7OworICAgICAgICBtdWx0aXBhdGg9KikKKyAgICAgICAg
ICAgIG11bHRpcGF0aD0kKHJlbW92ZV9sYWJlbCAkcGFyYW0gIm11bHRpcGF0aD0iKQorICAgICAg
ICAgICAgOzsKKyAgICAgICAgcGFzc3dvcmQ9KikKKyAgICAgICAgICAgIHBhc3N3b3JkPSQocmVt
b3ZlX2xhYmVsICRwYXJhbSAicGFzc3dvcmQ9IikKKyAgICAgICAgICAgIDs7CisgICAgICAgIGVz
YWMKKyAgICBkb25lCisgICAgaWYgWyAteiAiJGlxbiIgXSB8fCBbIC16ICIkcG9ydGFsIiBdOyB0
aGVuCisgICAgICAgIGVjaG8gImlxbj0gYW5kIHBvcnRhbD0gYXJlIHJlcXVpcmVkIHBhcmFtZXRl
cnMiCisgICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICBpZiBbICIkbXVsdGlwYXRoIiAhPSAi
eSIgXSAmJiBbICIkbXVsdGlwYXRoIiAhPSAibiIgXTsgdGhlbgorICAgICAgICBlY2hvICJtdWx0
aXBhdGggdmFsaWQgdmFsdWVzIGFyZSB5IGFuZCBuLCAkbXVsdGlwYXRoIG5vdCBhIHZhbGlkIHZh
bHVlIgorICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgcmV0dXJuIDAKK30KKworIyBPdXRw
dXRzIHRoZSBibG9jayBkZXZpY2UgbWFqb3I6bWlub3IKK2RldmljZV9tYWpvcl9taW5vcigpCit7
CisgICAgc3RhdCAtTCAtYyAldDolVCAiJDEiCit9CisKKyMgU2V0cyAkZGV2IHRvIHBvaW50IHRv
IHRoZSBkZXZpY2UgYXNzb2NpYXRlZCB3aXRoIHRoZSB2YWx1ZSBpbiBpcW4KK2ZpbmRfZGV2aWNl
KCkKK3sKKyAgICB3aGlsZSBbICEgLWUgL2Rldi9kaXNrL2J5LXBhdGgvKiIkaXFuIi1sdW4tMCBd
OyBkbworICAgICAgICBzbGVlcCAwLjEKKyAgICBkb25lCisgICAgc2RkZXY9JChyZWFkbGluayAt
ZiAvZGV2L2Rpc2svYnktcGF0aC8qIiRpcW4iLWx1bi0wKQorICAgIGlmIFsgISAtYiAiJHNkZGV2
IiBdOyB0aGVuCisgICAgICAgIGVjaG8gIlVuYWJsZSB0byBmaW5kIGF0dGFjaGVkIGRldmljZSBw
YXRoIgorICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgaWYgWyAiJG11bHRpcGF0aCIgPSAi
eSIgXTsgdGhlbgorICAgICAgICBtZGV2PSQobXVsdGlwYXRoIC1sbCAiJHNkZGV2IiB8IGhlYWQg
LTEgfCBhd2sgJ3sgcHJpbnQgJDF9JykKKyAgICAgICAgaWYgWyAhIC1iIC9kZXYvbWFwcGVyLyIk
bWRldiIgXTsgdGhlbgorICAgICAgICAgICAgZWNobyAiVW5hYmxlIHRvIGZpbmQgYXR0YWNoZWQg
ZGV2aWNlIG11bHRpcGF0aCIKKyAgICAgICAgICAgIHJldHVybiAxCisgICAgICAgIGZpCisgICAg
ICAgIGRldj0iL2Rldi9tYXBwZXIvJG1kZXYiCisgICAgZWxzZQorICAgICAgICBkZXY9IiRzZGRl
diIKKyAgICBmaQorICAgIHJldHVybiAwCit9CisKKyMgQXR0YWNoZXMgdGhlIHRhcmdldCAkaXFu
IGluICRwb3J0YWwgYW5kIHNldHMgJGRldiB0byBwb2ludCB0byB0aGUKKyMgbXVsdGlwYXRoIGRl
dmljZQorYXR0YWNoKCkKK3sKKyAgICBpc2NzaWFkbSAtbSBub2RlIC0tdGFyZ2V0bmFtZSAiJGlx
biIgLXAgIiRwb3J0YWwiIC0tbG9naW4gPiAvZGV2L251bGwgMj4mMQorICAgIGlmIFsgJD8gLW5l
IDAgXTsgdGhlbgorICAgICAgICBlY2hvICJVbmFibGUgdG8gY29ubmVjdCB0byB0YXJnZXQiCisg
ICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICBmaW5kX2RldmljZQorICAgIGlmIFsgJD8gLW5l
IDAgXTsgdGhlbgorICAgICAgICBlY2hvICJVbmFibGUgdG8gZmluZCBpU0NTSSBkZXZpY2UiCisg
ICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICByZXR1cm4gMAorfQorCisjIERpc2NvdmVycyB0
YXJnZXRzIGluICRwb3J0YWwgYW5kIGNoZWNrcyB0aGF0ICRpcW4gaXMgb25lIG9mIHRob3NlIHRh
cmdldHMKKyMgQWxzbyBzZXRzIHRoZSBhdXRoIHBhcmFtZXRlcnMgdG8gYXR0YWNoIHRoZSBkZXZp
Y2UKK3ByZXBhcmUoKQoreworICAgICMgQ2hlY2sgaWYgdGFyZ2V0IGlzIGFscmVhZHkgb3BlbmVk
CisgICAgaXNjc2lhZG0gLW0gc2Vzc2lvbiAyPiYxIHwgZ3JlcCAtcSAiJGlxbiIKKyAgICBpZiBb
ICQ/IC1lcSAwIF07IHRoZW4KKyAgICAgICAgZWNobyAiRGV2aWNlIGFscmVhZHkgb3BlbmVkIgor
ICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgIyBEaXNjb3ZlciBwb3J0YWwgdGFyZ2V0cwor
ICAgIGlzY3NpYWRtIC1tIGRpc2NvdmVyeSAtdCBzdCAtcCAkcG9ydGFsIDI+JjEgfCBncmVwIC1x
ICIkaXFuIgorICAgIGlmIFsgJD8gLW5lIDAgXTsgdGhlbgorICAgICAgICBlY2hvICJObyBtYXRj
aGluZyB0YXJnZXQgaXFuIGZvdW5kIgorICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgIyBT
ZXQgYXV0aCBtZXRob2QgaWYgbmVjZXNzYXJ5CisgICAgaWYgWyAtbiAiJGF1dGhfbWV0aG9kIiBd
IHx8IFsgLW4gIiR1c2VyIiBdIHx8IFsgLW4gIiRwYXNzd29yZCIgXTsgdGhlbgorICAgICAgICBp
c2NzaWFkbSAtbSBub2RlIC0tdGFyZ2V0bmFtZSAiJGlxbiIgLXAgIiRwb3J0YWwiIC0tb3A9dXBk
YXRlIC0tbmFtZSBcCisgICAgICAgICAgICBub2RlLnNlc3Npb24uYXV0aC5hdXRobWV0aG9kIC0t
dmFsdWU9IiRhdXRoX21ldGhvZCIgXAorICAgICAgICAgICAgPiAvZGV2L251bGwgMj4mMSAmJiBc
CisgICAgICAgIGlzY3NpYWRtIC1tIG5vZGUgLS10YXJnZXRuYW1lICIkaXFuIiAtcCAiJHBvcnRh
bCIgLS1vcD11cGRhdGUgLS1uYW1lIFwKKyAgICAgICAgICAgIG5vZGUuc2Vzc2lvbi5hdXRoLnVz
ZXJuYW1lIC0tdmFsdWU9IiR1c2VyIiBcCisgICAgICAgICAgICA+IC9kZXYvbnVsbCAyPiYxICYm
IFwKKyAgICAgICAgaXNjc2lhZG0gLW0gbm9kZSAtLXRhcmdldG5hbWUgIiRpcW4iIC1wICIkcG9y
dGFsIiAtLW9wPXVwZGF0ZSAtLW5hbWUgXAorICAgICAgICAgICAgbm9kZS5zZXNzaW9uLmF1dGgu
cGFzc3dvcmQgLS12YWx1ZT0iJHBhc3N3b3JkIiBcCisgICAgICAgICAgICA+IC9kZXYvbnVsbCAy
PiYxCisgICAgICAgIGlmIFsgJD8gLW5lIDAgXTsgdGhlbgorICAgICAgICAgICAgZWNobyAiVW5h
YmxlIHRvIHNldCBhdXRoZW50aWNhdGlvbiBwYXJhbWV0ZXJzIgorICAgICAgICAgICAgcmV0dXJu
IDEKKyAgICAgICAgZmkKKyAgICBmaQorICAgIHJldHVybiAwCit9CisKKyMgQXR0YWNoZXMgdGhl
IGRldmljZSBhbmQgd3JpdGVzIHhlbnN0b3JlIGJhY2tlbmQgZW50cmllcyB0byBjb25uZWN0Cisj
IHRoZSBkZXZpY2UKK2FkZCgpCit7CisgICAgbG9jYWxhdHRhY2gKKyAgICBpZiBbICQ/IC1uZSAw
IF07IHRoZW4KKyAgICAgICAgZWNobyAiRmFpbGVkIHRvIGF0dGFjaCBkZXZpY2UiCisgICAgICAg
IHJldHVybiAxCisgICAgZmkKKyAgICBtbT0kKGRldmljZV9tYWpvcl9taW5vciAiJGRldiIpCisg
ICAgeGVuc3RvcmUtd3JpdGUgIiRCQUNLRU5EX1BBVEgvcGh5c2ljYWwtZGV2aWNlIiAiJG1tIgor
ICAgIHhlbnN0b3JlLXdyaXRlICIkQkFDS0VORF9QQVRIL3BhcmFtcyIgIiRkZXYiCisgICAgcmV0
dXJuIDAKK30KKworIyBEaXNjb25uZWN0cyB0aGUgZGV2aWNlCityZW1vdmUoKQoreworICAgIGZp
bmRfZGV2aWNlCisgICAgaWYgWyAkPyAtbmUgMCBdOyB0aGVuCisgICAgICAgIGVjaG8gIlVuYWJs
ZSB0byBmaW5kIGRldmljZSIKKyAgICAgICAgcmV0dXJuIDEKKyAgICBmaQorICAgIGlzY3NpYWRt
IC1tIG5vZGUgLS10YXJnZXRuYW1lICIkaXFuIiAtcCAiJHBvcnRhbCIgLS1sb2dvdXQgPiAvZGV2
L251bGwgMj4mMQorICAgIGlmIFsgJD8gLW5lIDAgXTsgdGhlbgorICAgICAgICBlY2hvICJVbmFi
bGUgdG8gZGlzY29ubmVjdCB0YXJnZXQiCisgICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICBy
ZXR1cm4gMAorfQorCisjIFJlbW92ZXMgdGhlIGF1dGggcGFyYW1zIHNldCBieSBwcmVwYXJlCit1
bnByZXBhcmUoKQoreworICAgIGlmIFsgLW4gIiRhdXRoX21ldGhvZCIgXSB8fCBbIC1uICIkdXNl
ciIgXSB8fCBbIC1uICIkcGFzc3dvcmQiIF07IHRoZW4KKyAgICAgICAgaXNjc2lhZG0gLW0gbm9k
ZSAtLXRhcmdldG5hbWUgIiRpcW4iIC1wICIkcG9ydGFsIiAtLW9wPWRlbGV0ZSAtLW5hbWUgXAor
ICAgICAgICAgICAgbm9kZS5zZXNzaW9uLmF1dGguYXV0aG1ldGhvZCA+IC9kZXYvbnVsbCAyPiYx
CisgICAgICAgIGlzY3NpYWRtIC1tIG5vZGUgLS10YXJnZXRuYW1lICIkaXFuIiAtcCAiJHBvcnRh
bCIgLS1vcD1kZWxldGUgLS1uYW1lIFwKKyAgICAgICAgICAgIG5vZGUuc2Vzc2lvbi5hdXRoLnVz
ZXJuYW1lID4gL2Rldi9udWxsIDI+JjEKKyAgICAgICAgaXNjc2lhZG0gLW0gbm9kZSAtLXRhcmdl
dG5hbWUgIiRpcW4iIC1wICIkcG9ydGFsIiAtLW9wPWRlbGV0ZSAtLW5hbWUgXAorICAgICAgICAg
ICAgbm9kZS5zZXNzaW9uLmF1dGgucGFzc3dvcmQgPiAvZGV2L251bGwgMj4mMQorICAgIGZpCit9
CisKKyMgQXR0YWNoZXMgdGhlIGRldmljZSB0byB0aGUgY3VycmVudCBkb21haW4sIGFuZCB3cml0
ZXMgdGhlIHJlc3VsdGluZworIyBibG9jayBkZXZpY2UgdG8geGVuc3RvcmUKK2xvY2FsYXR0YWNo
KCkKK3sKKyAgICBhdHRhY2gKKyAgICBpZiBbICQ/IC1uZSAwIF07IHRoZW4KKyAgICAgICAgZWNo
byAiRmFpbGVkIHRvIGF0dGFjaCBkZXZpY2UiCisgICAgICAgIHJldHVybiAxCisgICAgZmkKKyAg
ICB4ZW5zdG9yZS13cml0ZSAiJEhPVFBMVUdfUEFUSC9wZGV2IiAiJGRldiIKKyAgICByZXR1cm4g
MAorfQorCitjb21tYW5kPSQxCit0YXJnZXQ9JCh4ZW5zdG9yZS1yZWFkICRIT1RQTFVHX1BBVEgv
cGFyYW1zKQorCitpZiBbIC16ICIkdGFyZ2V0IiBdOyB0aGVuCisgICAgZWNobyAiTm8gaW5mb3Jt
YXRpb24gYWJvdXQgdGhlIHRhcmdldCIKKyAgICBleGl0IDEKK2ZpCisKK2NoZWNrX3Rvb2xzIHx8
IGV4aXQgMQorCitwYXJzZV90YXJnZXQgIiR0YXJnZXQiCisKK2Nhc2UgJGNvbW1hbmQgaW4KK3By
ZXBhcmUpCisgICAgcHJlcGFyZQorICAgIGV4aXQgJD8KKyAgICA7OworYWRkKQorICAgIGFkZAor
ICAgIGV4aXQgJD8KKyAgICA7OworcmVtb3ZlKQorICAgIHJlbW92ZQorICAgIGV4aXQgJD8KKyAg
ICA7OwordW5wcmVwYXJlKQorICAgIHVucHJlcGFyZQorICAgIGV4aXQgJD8KKyAgICA7OworbG9j
YWxhdHRhY2gpCisgICAgbG9jYWxhdHRhY2gKKyAgICBleGl0ICQ/CisgICAgOzsKK2xvY2FsZGV0
YWNoKQorICAgIHJlbW92ZQorICAgIGV4aXQgJD8KKyAgICA7OworZXNhYwotLSAKMS43LjcuNSAo
QXBwbGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0
cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xX-0002DV-Lp; Fri, 21 Dec 2012 17:00:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xV-0002Cc-SO
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [85.158.143.99:34459] by server-3.bemta-4.messagelabs.com id
	43/E0-18211-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25700 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309219"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:18 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:18 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:05 +0100
Message-ID: <1356109208-6830-8-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 07/10] libxl: add local attach support for
	new hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkcyBzdXBwb3J0IGZvciBsb2NhbGx5IGF0dGFjaGluZyBkaXNrcyB0aGF0IHVzZSB0aGUgbmV3
IGhvdHBsdWcKc2NyaXB0IGludGVyZmFjZSwgYnkgY2FsbGluZyB0aGUgbG9jYWxhdHRhY2gvbG9j
YWxkZXRhY2ggb3BlcmF0aW9ucwphbmQgcmV0dXJuaW5nIGEgYmxvY2sgZGV2aWNlIHRoYXQgY2Fu
IGJlIHVzZWQgdG8gZXhlY3V0ZSBweWdydWIuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9u
bsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bC5jICAgICAg
ICAgICAgfCAgIDg4ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tLS0KIHRv
b2xzL2xpYnhsL2xpYnhsX2Jvb3Rsb2FkZXIuYyB8ICAgIDEgKwogdG9vbHMvbGlieGwvbGlieGxf
aW50ZXJuYWwuaCAgIHwgICAgMSArCiAzIGZpbGVzIGNoYW5nZWQsIDc3IGluc2VydGlvbnMoKyks
IDEzIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsLmMgYi90b29s
cy9saWJ4bC9saWJ4bC5jCmluZGV4IDI5YjI3NjUuLmIyMWVlZmMgMTAwNjQ0Ci0tLSBhL3Rvb2xz
L2xpYnhsL2xpYnhsLmMKKysrIGIvdG9vbHMvbGlieGwvbGlieGwuYwpAQCAtMjY1Nyw2ICsyNjU3
LDcgQEAgdm9pZCBsaWJ4bF9fZGV2aWNlX2Rpc2tfbG9jYWxfaW5pdGlhdGVfYXR0YWNoKGxpYnhs
X19lZ2MgKmVnYywKICAgICBjb25zdCBsaWJ4bF9kZXZpY2VfZGlzayAqaW5fZGlzayA9IGRscy0+
aW5fZGlzazsKICAgICBsaWJ4bF9kZXZpY2VfZGlzayAqZGlzayA9ICZkbHMtPmRpc2s7CiAgICAg
Y29uc3QgY2hhciAqYmxrZGV2X3N0YXJ0ID0gZGxzLT5ibGtkZXZfc3RhcnQ7CisgICAgdWludDMy
X3QgZG9taWQgPSBkbHMtPmRvbWlkOwogCiAgICAgYXNzZXJ0KGluX2Rpc2stPnBkZXZfcGF0aCk7
CiAKQEAgLTI2NzMsNiArMjY3NCwyMyBAQCB2b2lkIGxpYnhsX19kZXZpY2VfZGlza19sb2NhbF9p
bml0aWF0ZV9hdHRhY2gobGlieGxfX2VnYyAqZWdjLAogICAgICAgICBjYXNlIExJQlhMX0RJU0tf
QkFDS0VORF9QSFk6CiAgICAgICAgICAgICBMSUJYTF9fTE9HKGN0eCwgTElCWExfX0xPR19ERUJV
RywgImxvY2FsbHkgYXR0YWNoaW5nIFBIWSBkaXNrICVzIiwKICAgICAgICAgICAgICAgICAgICAg
ICAgZGlzay0+cGRldl9wYXRoKTsKKyAgICAgICAgICAgIGlmIChkaXNrLT5ob3RwbHVnX3ZlcnNp
b24gIT0gMCkgeworICAgICAgICAgICAgICAgIGRpc2stPnZkZXYgPSBsaWJ4bF9fc3RyZHVwKGdj
LCBpbl9kaXNrLT52ZGV2KTsKKyAgICAgICAgICAgICAgICBsaWJ4bF9fcHJlcGFyZV9hb19kZXZp
Y2UoYW8sICZkbHMtPmFvZGV2KTsKKyAgICAgICAgICAgICAgICBHQ05FVyhkbHMtPmFvZGV2LmRl
dik7CisgICAgICAgICAgICAgICAgcmMgPSBsaWJ4bF9fZGV2aWNlX2Zyb21fZGlzayhnYywgZG9t
aWQsIGRpc2ssCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBk
bHMtPmFvZGV2LmRldik7CisgICAgICAgICAgICAgICAgaWYgKHJjICE9IDApIHsKKyAgICAgICAg
ICAgICAgICAgICAgTE9HKEVSUk9SLCAiSW52YWxpZCBvciB1bnN1cHBvcnRlZCB2aXJ0dWFsIGRp
c2sgIgorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICJpZGVudGlmaWVyICVzIiwgZGlz
ay0+dmRldik7CisgICAgICAgICAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAgICAgICAgICAg
IH0KKyAgICAgICAgICAgICAgICBkbHMtPmFvZGV2LmFjdGlvbiA9IERFVklDRV9MT0NBTEFUVEFD
SDsKKyAgICAgICAgICAgICAgICBkbHMtPmFvZGV2LmhvdHBsdWdfdmVyc2lvbiA9IGRpc2stPmhv
dHBsdWdfdmVyc2lvbjsKKyAgICAgICAgICAgICAgICBkbHMtPmFvZGV2LmNhbGxiYWNrID0gbG9j
YWxfZGV2aWNlX2F0dGFjaF9jYjsKKyAgICAgICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2hvdHBs
dWcoZWdjLCAmZGxzLT5hb2Rldik7CisgICAgICAgICAgICAgICAgcmV0dXJuOworICAgICAgICAg
ICAgfQogICAgICAgICAgICAgZGV2ID0gZGlzay0+cGRldl9wYXRoOwogICAgICAgICAgICAgYnJl
YWs7CiAgICAgICAgIGNhc2UgTElCWExfRElTS19CQUNLRU5EX1RBUDoKQEAgLTI3MzUsNyArMjc1
Myw4IEBAIHN0YXRpYyB2b2lkIGxvY2FsX2RldmljZV9hdHRhY2hfY2IobGlieGxfX2VnYyAqZWdj
LCBsaWJ4bF9fYW9fZGV2aWNlICphb2RldikKIHsKICAgICBTVEFURV9BT19HQyhhb2Rldi0+YW8p
OwogICAgIGxpYnhsX19kaXNrX2xvY2FsX3N0YXRlICpkbHMgPSBDT05UQUlORVJfT0YoYW9kZXYs
ICpkbHMsIGFvZGV2KTsKLSAgICBjaGFyICpkZXYgPSBOVUxMLCAqYmVfcGF0aCA9IE5VTEw7Cisg
ICAgY2hhciAqZGV2ID0gTlVMTCwgKmJlX3BhdGggPSBOVUxMLCAqaG90cGx1Z19wYXRoOworICAg
IGNvbnN0IGNoYXIgKnBkZXYgPSBOVUxMOwogICAgIGludCByYzsKICAgICBsaWJ4bF9fZGV2aWNl
IGRldmljZTsKICAgICBsaWJ4bF9kZXZpY2VfZGlzayAqZGlzayA9ICZkbHMtPmRpc2s7CkBAIC0y
NzQ4LDIwICsyNzY3LDQ1IEBAIHN0YXRpYyB2b2lkIGxvY2FsX2RldmljZV9hdHRhY2hfY2IobGli
eGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW9fZGV2aWNlICphb2RldikKICAgICAgICAgICAgICAgICAg
ICAgYW9kZXYtPmRldi0+ZGV2aWQpOwogICAgICAgICBnb3RvIG91dDsKICAgICB9CisgICAgc3dp
dGNoIChkaXNrLT5iYWNrZW5kKSB7CisgICAgY2FzZSBMSUJYTF9ESVNLX0JBQ0tFTkRfUEhZOgor
ICAgICAgICAvKiBGZXRjaCBob3RwbHVnIG91dHB1dCB0byBvYnRhaW4gdGhlIGJsb2NrIGRldmlj
ZSAqLworICAgICAgICByYyA9IGxpYnhsX19kZXZpY2VfZnJvbV9kaXNrKGdjLCBkbHMtPmRvbWlk
LCBkaXNrLCAmZGV2aWNlKTsKKyAgICAgICAgaWYgKHJjKQorICAgICAgICAgICAgZ290byBvdXQ7
CisgICAgICAgIGhvdHBsdWdfcGF0aCA9IGxpYnhsX19kZXZpY2VfeHNfaG90cGx1Z19wYXRoKGdj
LCAmZGV2aWNlKTsKKyAgICAgICAgcmMgPSBsaWJ4bF9feHNfcmVhZF9jaGVja2VkKGdjLCBYQlRf
TlVMTCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIEdDU1BSSU5URigiJXMv
cGRldiIsIGhvdHBsdWdfcGF0aCksCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAmcGRldik7CisgICAgICAgIGlmIChyYykKKyAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAg
ICBpZiAoIXBkZXYpIHsKKyAgICAgICAgICAgIHJjID0gRVJST1JfRkFJTDsKKyAgICAgICAgICAg
IGdvdG8gb3V0OworICAgICAgICB9CisgICAgICAgIExPRyhERUJVRywgImF0dGFjaGVkIGxvY2Fs
IGJsb2NrIGRldmljZSAlcyIsIHBkZXYpOworICAgICAgICBkbHMtPmRpc2twYXRoID0gbGlieGxf
X3N0cmR1cChnYywgcGRldik7CisgICAgICAgIGJyZWFrOworICAgIGNhc2UgTElCWExfRElTS19C
QUNLRU5EX1FESVNLOgorICAgICAgICBkZXYgPSBHQ1NQUklOVEYoIi9kZXYvJXMiLCBkaXNrLT52
ZGV2KTsKKyAgICAgICAgTE9HKERFQlVHLCAibG9jYWxseSBhdHRhY2hpbmcgcWRpc2sgJXMiLCBk
ZXYpOwogCi0gICAgZGV2ID0gR0NTUFJJTlRGKCIvZGV2LyVzIiwgZGlzay0+dmRldik7Ci0gICAg
TE9HKERFQlVHLCAibG9jYWxseSBhdHRhY2hpbmcgcWRpc2sgJXMiLCBkZXYpOworICAgICAgICBy
YyA9IGxpYnhsX19kZXZpY2VfZnJvbV9kaXNrKGdjLCBMSUJYTF9UT09MU1RBQ0tfRE9NSUQsIGRp
c2ssICZkZXZpY2UpOworICAgICAgICBpZiAocmMgPCAwKQorICAgICAgICAgICAgZ290byBvdXQ7
CisgICAgICAgIGJlX3BhdGggPSBsaWJ4bF9fZGV2aWNlX2JhY2tlbmRfcGF0aChnYywgJmRldmlj
ZSk7CisgICAgICAgIHJjID0gbGlieGxfX3dhaXRfZm9yX2JhY2tlbmQoZ2MsIGJlX3BhdGgsICI0
Iik7CisgICAgICAgIGlmIChyYyA8IDApCisgICAgICAgICAgICBnb3RvIG91dDsKIAotICAgIHJj
ID0gbGlieGxfX2RldmljZV9mcm9tX2Rpc2soZ2MsIExJQlhMX1RPT0xTVEFDS19ET01JRCwgZGlz
aywgJmRldmljZSk7Ci0gICAgaWYgKHJjIDwgMCkKLSAgICAgICAgZ290byBvdXQ7Ci0gICAgYmVf
cGF0aCA9IGxpYnhsX19kZXZpY2VfYmFja2VuZF9wYXRoKGdjLCAmZGV2aWNlKTsKLSAgICByYyA9
IGxpYnhsX193YWl0X2Zvcl9iYWNrZW5kKGdjLCBiZV9wYXRoLCAiNCIpOwotICAgIGlmIChyYyA8
IDApCisgICAgICAgIGlmIChkZXYgIT0gTlVMTCkKKyAgICAgICAgICAgIGRscy0+ZGlza3BhdGgg
PSBsaWJ4bF9fc3RyZHVwKGdjLCBkZXYpOworICAgICAgICBicmVhazsKKyAgICBkZWZhdWx0Ogor
ICAgICAgICBMT0coRVJST1IsICJpbnZhbGlkIGRpc2sgYmFja2VuZCBmb3IgbG9jYWwgYXR0YWNo
IGNhbGxiYWNrIik7CisgICAgICAgIHJjID0gRVJST1JfRkFJTDsKICAgICAgICAgZ290byBvdXQ7
Ci0KLSAgICBpZiAoZGV2ICE9IE5VTEwpCi0gICAgICAgIGRscy0+ZGlza3BhdGggPSBsaWJ4bF9f
c3RyZHVwKGdjLCBkZXYpOworICAgIH0KIAogICAgIGRscy0+Y2FsbGJhY2soZWdjLCBkbHMsIDAp
OwogICAgIHJldHVybjsKQEAgLTI3ODUsMTEgKzI4MjksMjkgQEAgdm9pZCBsaWJ4bF9fZGV2aWNl
X2Rpc2tfbG9jYWxfaW5pdGlhdGVfZGV0YWNoKGxpYnhsX19lZ2MgKmVnYywKICAgICBsaWJ4bF9k
ZXZpY2VfZGlzayAqZGlzayA9ICZkbHMtPmRpc2s7CiAgICAgbGlieGxfX2RldmljZSAqZGV2aWNl
OwogICAgIGxpYnhsX19hb19kZXZpY2UgKmFvZGV2ID0gJmRscy0+YW9kZXY7CisgICAgdWludDMy
X3QgZG9taWQgPSBkbHMtPmRvbWlkOwogICAgIGxpYnhsX19wcmVwYXJlX2FvX2RldmljZShhbywg
YW9kZXYpOwogCiAgICAgaWYgKCFkbHMtPmRpc2twYXRoKSBnb3RvIG91dDsKIAogICAgIHN3aXRj
aCAoZGlzay0+YmFja2VuZCkgeworICAgICAgICBjYXNlIExJQlhMX0RJU0tfQkFDS0VORF9QSFk6
CisgICAgICAgICAgICBpZiAoZGlzay0+aG90cGx1Z192ZXJzaW9uICE9IDApIHsKKyAgICAgICAg
ICAgICAgICBHQ05FVyhhb2Rldi0+ZGV2KTsKKyAgICAgICAgICAgICAgICByYyA9IGxpYnhsX19k
ZXZpY2VfZnJvbV9kaXNrKGdjLCBkb21pZCwgZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGFvZGV2LT5kZXYpOworICAgICAgICAgICAgICAgIGlmIChy
YyAhPSAwKSB7CisgICAgICAgICAgICAgICAgICAgIExPRyhFUlJPUiwgIkludmFsaWQgb3IgdW5z
dXBwb3J0ZWQgdmlydHVhbCBkaXNrICIKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
aWRlbnRpZmllciAlcyIsIGRpc2stPnZkZXYpOworICAgICAgICAgICAgICAgICAgICBnb3RvIG91
dDsKKyAgICAgICAgICAgICAgICB9CisgICAgICAgICAgICAgICAgYW9kZXYtPmFjdGlvbiA9IERF
VklDRV9MT0NBTERFVEFDSDsKKyAgICAgICAgICAgICAgICBhb2Rldi0+aG90cGx1Z192ZXJzaW9u
ID0gZGlzay0+aG90cGx1Z192ZXJzaW9uOworICAgICAgICAgICAgICAgIGFvZGV2LT5jYWxsYmFj
ayA9IGxvY2FsX2RldmljZV9kZXRhY2hfY2I7CisgICAgICAgICAgICAgICAgbGlieGxfX2Rldmlj
ZV9ob3RwbHVnKGVnYywgYW9kZXYpOworICAgICAgICAgICAgICAgIHJldHVybjsKKyAgICAgICAg
ICAgIH0KKyAgICAgICAgICAgIGJyZWFrOwogICAgICAgICBjYXNlIExJQlhMX0RJU0tfQkFDS0VO
RF9RRElTSzoKICAgICAgICAgICAgIGlmIChkaXNrLT52ZGV2ICE9IE5VTEwpIHsKICAgICAgICAg
ICAgICAgICBHQ05FVyhkZXZpY2UpOwpAQCAtMjgyOCw3ICsyODkwLDcgQEAgc3RhdGljIHZvaWQg
bG9jYWxfZGV2aWNlX2RldGFjaF9jYihsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hb19kZXZpY2Ug
KmFvZGV2KQogCiAgICAgaWYgKGFvZGV2LT5yYykgewogICAgICAgICBMT0dFKEVSUk9SLCAidW5h
YmxlIHRvICVzICVzIHdpdGggaWQgJXUiLAotICAgICAgICAgICAgICAgICAgICBhb2Rldi0+YWN0
aW9uID09IERFVklDRV9DT05ORUNUID8gImFkZCIgOiAicmVtb3ZlIiwKKyAgICAgICAgICAgICAg
ICAgICAgbGlieGxfX2RldmljZV9ob3RwbHVnX2FjdGlvbihnYywgYW9kZXYtPmFjdGlvbiksCiAg
ICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2Vfa2luZF90b19zdHJpbmcoYW9kZXYtPmRl
di0+a2luZCksCiAgICAgICAgICAgICAgICAgICAgIGFvZGV2LT5kZXYtPmRldmlkKTsKICAgICAg
ICAgZ290byBvdXQ7CmRpZmYgLS1naXQgYS90b29scy9saWJ4bC9saWJ4bF9ib290bG9hZGVyLmMg
Yi90b29scy9saWJ4bC9saWJ4bF9ib290bG9hZGVyLmMKaW5kZXggZTEwM2VlOS4uZmYyZTQ4YyAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfYm9vdGxvYWRlci5jCisrKyBiL3Rvb2xzL2xp
YnhsL2xpYnhsX2Jvb3Rsb2FkZXIuYwpAQCAtMzgxLDYgKzM4MSw3IEBAIHZvaWQgbGlieGxfX2Jv
b3Rsb2FkZXJfcnVuKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2Jvb3Rsb2FkZXJfc3RhdGUgKmJs
KQogICAgIGJsLT5kbHMuYW8gPSBhbzsKICAgICBibC0+ZGxzLmluX2Rpc2sgPSBibC0+ZGlzazsK
ICAgICBibC0+ZGxzLmJsa2Rldl9zdGFydCA9IGluZm8tPmJsa2Rldl9zdGFydDsKKyAgICBibC0+
ZGxzLmRvbWlkID0gZG9taWQ7CiAgICAgYmwtPmRscy5jYWxsYmFjayA9IGJvb3Rsb2FkZXJfZGlz
a19hdHRhY2hlZF9jYjsKICAgICBsaWJ4bF9fZGV2aWNlX2Rpc2tfbG9jYWxfaW5pdGlhdGVfYXR0
YWNoKGVnYywgJmJsLT5kbHMpOwogICAgIHJldHVybjsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhs
L2xpYnhsX2ludGVybmFsLmggYi90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCmluZGV4IDE2
OTA3ZmYuLjY3YzU2YTYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmgK
KysrIGIvdG9vbHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaApAQCAtMjEzMCw2ICsyMTMwLDcgQEAg
c3RydWN0IGxpYnhsX19kaXNrX2xvY2FsX3N0YXRlIHsKICAgICBsaWJ4bF9kZXZpY2VfZGlzayBk
aXNrOwogICAgIGNvbnN0IGNoYXIgKmJsa2Rldl9zdGFydDsKICAgICBsaWJ4bF9fZGlza19sb2Nh
bF9zdGF0ZV9jYWxsYmFjayAqY2FsbGJhY2s7CisgICAgdWludDMyX3QgZG9taWQ7CiAgICAgLyog
ZmlsbGVkIGJ5IGxpYnhsX19kZXZpY2VfZGlza19sb2NhbF9pbml0aWF0ZV9hdHRhY2ggKi8KICAg
ICBjaGFyICpkaXNrcGF0aDsKICAgICAvKiBwcml2YXRlIGZvciBpbXBsZW1lbnRhdGlvbiBvZiBs
b2NhbCBkZXRhY2ggKi8KLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0
Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xb-0002Fl-8i; Fri, 21 Dec 2012 17:00:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xY-0002Da-9G
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:32 +0000
Received: from [193.109.254.147:27461] by server-16.bemta-14.messagelabs.com
	id A4/5C-18932-EA594D05; Fri, 21 Dec 2012 17:00:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!6
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11647 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309217"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:18 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:18 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:04 +0100
Message-ID: <1356109208-6830-7-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 06/10] xl: add support for new hotplug
	interface to block-attach/detach
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Q2FsbCBwcmVwYXJlIGJlZm9yZSBhdHRhY2hpbmcgYSBibG9jayBkZXZpY2UuCgpTaWduZWQtb2Zm
LWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29scy9s
aWJ4bC94bF9jbWRpbXBsLmMgfCAgICA1ICsrKysrCiAxIGZpbGVzIGNoYW5nZWQsIDUgaW5zZXJ0
aW9ucygrKSwgMCBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS90b29scy9saWJ4bC94bF9jbWRp
bXBsLmMgYi90b29scy9saWJ4bC94bF9jbWRpbXBsLmMKaW5kZXggNGI3NWZjMy4uNzgxMjAxYSAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwveGxfY21kaW1wbC5jCisrKyBiL3Rvb2xzL2xpYnhsL3hs
X2NtZGltcGwuYwpAQCAtNTYzMSw2ICs1NjMxLDExIEBAIGludCBtYWluX2Jsb2NrYXR0YWNoKGlu
dCBhcmdjLCBjaGFyICoqYXJndikKICAgICAgICAgcmV0dXJuIDA7CiAgICAgfQogCisgICAgaWYg
KGxpYnhsX2RldmljZV9kaXNrX3ByZXBhcmUoY3R4LCBmZV9kb21pZCwgJmRpc2ssIDApKSB7Cisg
ICAgICAgIGZwcmludGYoc3RkZXJyLCAibGlieGxfZGV2aWNlX2Rpc2tfcHJlcGFyZSBmYWlsZWQu
XG4iKTsKKyAgICAgICAgcmV0dXJuIDE7CisgICAgfQorCiAgICAgaWYgKGxpYnhsX2RldmljZV9k
aXNrX2FkZChjdHgsIGZlX2RvbWlkLCAmZGlzaywgMCkpIHsKICAgICAgICAgZnByaW50ZihzdGRl
cnIsICJsaWJ4bF9kZXZpY2VfZGlza19hZGQgZmFpbGVkLlxuIik7CiAgICAgfQotLSAKMS43Ljcu
NSAoQXBwbGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcK
aHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xb-0002G6-LO; Fri, 21 Dec 2012 17:00:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xY-0002Da-Ky
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:32 +0000
Received: from [193.109.254.147:27513] by server-16.bemta-14.messagelabs.com
	id AB/5C-18932-0B594D05; Fri, 21 Dec 2012 17:00:32 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11609 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309212"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:16 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:15 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 17:59:59 +0100
Message-ID: <1356109208-6830-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 01/10] libxl: libxl__prepare_ao_device
	should reset num_exec
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

bnVtX2V4ZWMgd2FzIG5vdCBjbGVhcmVkIHdoZW4gY2FsbGluZyBsaWJ4bF9fcHJlcGFyZV9hb19k
ZXZpY2UuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYyB8ICAgIDEgKwogMSBmaWxlcyBj
aGFuZ2VkLCAxIGluc2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9v
bHMvbGlieGwvbGlieGxfZGV2aWNlLmMgYi90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwppbmRl
eCA1MWRkMDZlLi41OGQzZjM1IDEwMDY0NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9kZXZpY2Uu
YworKysgYi90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwpAQCAtNDA5LDYgKzQwOSw3IEBAIHZv
aWQgbGlieGxfX3ByZXBhcmVfYW9fZGV2aWNlKGxpYnhsX19hbyAqYW8sIGxpYnhsX19hb19kZXZp
Y2UgKmFvZGV2KQogICAgIGFvZGV2LT5hbyA9IGFvOwogICAgIGFvZGV2LT5yYyA9IDA7CiAgICAg
YW9kZXYtPmRldiA9IE5VTEw7CisgICAgYW9kZXYtPm51bV9leGVjID0gMDsKICAgICAvKiBJbml0
aWFsaXplIHRpbWVyIGZvciBRRU1VIEJvZGdlIGFuZCBob3RwbHVnIGV4ZWN1dGlvbiAqLwogICAg
IGxpYnhsX19ldl90aW1lX2luaXQoJmFvZGV2LT50aW1lb3V0KTsKICAgICBhb2Rldi0+YWN0aXZl
ID0gMTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZl
bEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xa-0002Ew-Bv; Fri, 21 Dec 2012 17:00:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xX-0002D9-6w
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:31 +0000
Received: from [193.109.254.147:4051] by server-4.bemta-14.messagelabs.com id
	05/1A-15233-EA594D05; Fri, 21 Dec 2012 17:00:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!7
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11690 invoked from network); 21 Dec 2012 17:00:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309222"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:19 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:19 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:08 +0100
Message-ID: <1356109208-6830-11-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 10/10] hotplug/Linux: add iscsi block
	hotplug script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBob3RwbHVnIHNjcmlwdCBoYXMgYmVlbiB0ZXN0ZWQgd2l0aCBJRVQgYW5kIE5ldEJTRCBp
U0NTSSB0YXJnZXRzLAp3aXRob3V0IGF1dGhlbnRpY2F0aW9uLgoKQXV0aGVudGljYXRpb24gcGFy
YW1ldGVycywgaW5jbHVkaW5nIHRoZSBwYXNzd29yZCBhcmUgcGFzc2VkIGFzCnBhcmFtZXRlcnMg
dG8gaXNjc2lhZG0sIHdoaWNoIGlzIG5vdCByZWNvbW1lbmRlZCBiZWNhdXNlIG90aGVyIHVzZXJz
Cm9mIHRoZSBzeXN0ZW0gY2FuIHNlZSB0aGVtLiBUaGlzIHBhcmFtZXRlcnMgY291bGQgYWxzbyBi
ZSBzZXQgYnkKZWRpdGluZyBhIGNvcnJlc3BvbmRpbmcgZmlsZSBkaXJlY3RseSwgYnV0IHRoZSBs
b2NhdGlvbiBvZiB0aGlzIGZpbGUKc2VlbXMgdG8gYmUgZGlmZmVyZW50IGRlcGVuZGluZyBvbiB0
aGUgZGlzdHJpYnV0aW9uIHVzZWQuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29scy9ob3RwbHVnL0xpbnV4L01ha2VmaWxlICAg
IHwgICAgMSArCiB0b29scy9ob3RwbHVnL0xpbnV4L2Jsb2NrLWlzY3NpIHwgIDI1NCArKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKIDIgZmlsZXMgY2hhbmdlZCwgMjU1IGlu
c2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvaG90
cGx1Zy9MaW51eC9ibG9jay1pc2NzaQoKZGlmZiAtLWdpdCBhL3Rvb2xzL2hvdHBsdWcvTGludXgv
TWFrZWZpbGUgYi90b29scy9ob3RwbHVnL0xpbnV4L01ha2VmaWxlCmluZGV4IDA2MDU1NTkuLjk4
Yzc3MzggMTAwNjQ0Ci0tLSBhL3Rvb2xzL2hvdHBsdWcvTGludXgvTWFrZWZpbGUKKysrIGIvdG9v
bHMvaG90cGx1Zy9MaW51eC9NYWtlZmlsZQpAQCAtMjEsNiArMjEsNyBAQCBYRU5fU0NSSVBUUyAr
PSBibGt0YXAKIFhFTl9TQ1JJUFRTICs9IHhlbi1ob3RwbHVnLWNsZWFudXAKIFhFTl9TQ1JJUFRT
ICs9IGV4dGVybmFsLWRldmljZS1taWdyYXRlCiBYRU5fU0NSSVBUUyArPSB2c2NzaQorWEVOX1ND
UklQVFMgKz0gYmxvY2staXNjc2kKIFhFTl9TQ1JJUFRfREFUQSA9IHhlbi1zY3JpcHQtY29tbW9u
LnNoIGxvY2tpbmcuc2ggbG9nZ2luZy5zaAogWEVOX1NDUklQVF9EQVRBICs9IHhlbi1ob3RwbHVn
LWNvbW1vbi5zaCB4ZW4tbmV0d29yay1jb21tb24uc2ggdmlmLWNvbW1vbi5zaAogWEVOX1NDUklQ
VF9EQVRBICs9IGJsb2NrLWNvbW1vbi5zaApkaWZmIC0tZ2l0IGEvdG9vbHMvaG90cGx1Zy9MaW51
eC9ibG9jay1pc2NzaSBiL3Rvb2xzL2hvdHBsdWcvTGludXgvYmxvY2staXNjc2kKbmV3IGZpbGUg
bW9kZSAxMDA2NDQKaW5kZXggMDAwMDAwMC4uZGRiOTZjNQotLS0gL2Rldi9udWxsCisrKyBiL3Rv
b2xzL2hvdHBsdWcvTGludXgvYmxvY2staXNjc2kKQEAgLTAsMCArMSwyNTQgQEAKKyMhL2Jpbi9z
aAorIworIyBPcGVuLWlTQ1NJIFhlbiBibG9jayBkZXZpY2UgaG90cGx1ZyBzY3JpcHQKKyMKKyMg
QXV0aG9yIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgorIworIyBUaGlz
IHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29y
IG1vZGlmeQorIyBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBMZXNzZXIgR2VuZXJhbCBQ
dWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQKKyMgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbjsgdmVyc2lvbiAyLjEgb25seS4gd2l0aCB0aGUgc3BlY2lhbAorIyBleGNlcHRpb24gb24g
bGlua2luZyBkZXNjcmliZWQgaW4gZmlsZSBMSUNFTlNFLgorIworIyBUaGlzIHByb2dyYW0gaXMg
ZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyMgYnV0IFdJ
VEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK
KyMgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAg
U2VlIHRoZQorIyBHTlUgTGVzc2VyIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0
YWlscy4KKworcmVtb3ZlX2xhYmVsKCkKK3sKKyAgICBlY2hvICQxIHwgc2VkICJzL15cKCIkMiJc
KS8vIgorfQorCitjaGVja190b29scygpCit7CisgICAgaWYgISB0eXBlIGlzY3NpYWRtID4gL2Rl
di9udWxsIDI+JjE7IHRoZW4KKyAgICAgICAgZWNobyAiVW5hYmxlIHRvIGZpbmQgaXNjc2lhZG0g
dG9vbCIKKyAgICAgICAgcmV0dXJuIDEKKyAgICBmaQorICAgIGlmIFsgIiRtdWx0aXBhdGgiID0g
InkiIF0gJiYgISB0eXBlIG11bHRpcGF0aCA+IC9kZXYvbnVsbCAyPiYxOyB0aGVuCisgICAgICAg
IGVjaG8gIlVuYWJsZSB0byBmaW5kIG11bHRpcGF0aCIKKyAgICAgICAgcmV0dXJuIDEKKyAgICBm
aQorfQorCisjIFNldHMgdGhlIGZvbGxvd2luZyBnbG9iYWwgdmFyaWFibGVzIGJhc2VkIG9uIHRo
ZSBwYXJhbXMgZmllbGQgcGFzc2VkIGluIGFzCisjIGEgcGFyYW1ldGVyOiBpcW4sIHBvcnRhbCwg
YXV0aF9tZXRob2QsIHVzZXIsIG11bHRpcGF0aCwgcGFzc3dvcmQKK3BhcnNlX3RhcmdldCgpCit7
CisgICAgIyBzZXQgbXVsdGlwYXRoIGRlZmF1bHQgdmFsdWUKKyAgICBtdWx0aXBhdGg9Im4iCisg
ICAgZm9yIHBhcmFtIGluICQoZWNobyAiJDEiIHwgdHIgIiwiICJcbiIpCisgICAgZG8KKyAgICAg
ICAgaWYgWyAtbiAiJHBhc3N3b3JkIiBdOyB0aGVuCisgICAgICAgICAgICBwYXNzd29yZD0iJHBh
c3N3b3JkLCRwYXJhbSIKKyAgICAgICAgICAgIGNvbnRpbnVlCisgICAgICAgIGZpCisgICAgICAg
IGNhc2UgJHBhcmFtIGluCisgICAgICAgIGlxbj0qKQorICAgICAgICAgICAgaXFuPSQocmVtb3Zl
X2xhYmVsICRwYXJhbSAiaXFuPSIpCisgICAgICAgICAgICA7OworICAgICAgICBwb3J0YWw9KikK
KyAgICAgICAgICAgIHBvcnRhbD0kKHJlbW92ZV9sYWJlbCAkcGFyYW0gInBvcnRhbD0iKQorICAg
ICAgICAgICAgOzsKKyAgICAgICAgYXV0aF9tZXRob2Q9KikKKyAgICAgICAgICAgIGF1dGhfbWV0
aG9kPSQocmVtb3ZlX2xhYmVsICRwYXJhbSAiYXV0aF9tZXRob2Q9IikKKyAgICAgICAgICAgIDs7
CisgICAgICAgIHVzZXI9KikKKyAgICAgICAgICAgIHVzZXI9JChyZW1vdmVfbGFiZWwgJHBhcmFt
ICJ1c2VyPSIpCisgICAgICAgICAgICA7OworICAgICAgICBtdWx0aXBhdGg9KikKKyAgICAgICAg
ICAgIG11bHRpcGF0aD0kKHJlbW92ZV9sYWJlbCAkcGFyYW0gIm11bHRpcGF0aD0iKQorICAgICAg
ICAgICAgOzsKKyAgICAgICAgcGFzc3dvcmQ9KikKKyAgICAgICAgICAgIHBhc3N3b3JkPSQocmVt
b3ZlX2xhYmVsICRwYXJhbSAicGFzc3dvcmQ9IikKKyAgICAgICAgICAgIDs7CisgICAgICAgIGVz
YWMKKyAgICBkb25lCisgICAgaWYgWyAteiAiJGlxbiIgXSB8fCBbIC16ICIkcG9ydGFsIiBdOyB0
aGVuCisgICAgICAgIGVjaG8gImlxbj0gYW5kIHBvcnRhbD0gYXJlIHJlcXVpcmVkIHBhcmFtZXRl
cnMiCisgICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICBpZiBbICIkbXVsdGlwYXRoIiAhPSAi
eSIgXSAmJiBbICIkbXVsdGlwYXRoIiAhPSAibiIgXTsgdGhlbgorICAgICAgICBlY2hvICJtdWx0
aXBhdGggdmFsaWQgdmFsdWVzIGFyZSB5IGFuZCBuLCAkbXVsdGlwYXRoIG5vdCBhIHZhbGlkIHZh
bHVlIgorICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgcmV0dXJuIDAKK30KKworIyBPdXRw
dXRzIHRoZSBibG9jayBkZXZpY2UgbWFqb3I6bWlub3IKK2RldmljZV9tYWpvcl9taW5vcigpCit7
CisgICAgc3RhdCAtTCAtYyAldDolVCAiJDEiCit9CisKKyMgU2V0cyAkZGV2IHRvIHBvaW50IHRv
IHRoZSBkZXZpY2UgYXNzb2NpYXRlZCB3aXRoIHRoZSB2YWx1ZSBpbiBpcW4KK2ZpbmRfZGV2aWNl
KCkKK3sKKyAgICB3aGlsZSBbICEgLWUgL2Rldi9kaXNrL2J5LXBhdGgvKiIkaXFuIi1sdW4tMCBd
OyBkbworICAgICAgICBzbGVlcCAwLjEKKyAgICBkb25lCisgICAgc2RkZXY9JChyZWFkbGluayAt
ZiAvZGV2L2Rpc2svYnktcGF0aC8qIiRpcW4iLWx1bi0wKQorICAgIGlmIFsgISAtYiAiJHNkZGV2
IiBdOyB0aGVuCisgICAgICAgIGVjaG8gIlVuYWJsZSB0byBmaW5kIGF0dGFjaGVkIGRldmljZSBw
YXRoIgorICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgaWYgWyAiJG11bHRpcGF0aCIgPSAi
eSIgXTsgdGhlbgorICAgICAgICBtZGV2PSQobXVsdGlwYXRoIC1sbCAiJHNkZGV2IiB8IGhlYWQg
LTEgfCBhd2sgJ3sgcHJpbnQgJDF9JykKKyAgICAgICAgaWYgWyAhIC1iIC9kZXYvbWFwcGVyLyIk
bWRldiIgXTsgdGhlbgorICAgICAgICAgICAgZWNobyAiVW5hYmxlIHRvIGZpbmQgYXR0YWNoZWQg
ZGV2aWNlIG11bHRpcGF0aCIKKyAgICAgICAgICAgIHJldHVybiAxCisgICAgICAgIGZpCisgICAg
ICAgIGRldj0iL2Rldi9tYXBwZXIvJG1kZXYiCisgICAgZWxzZQorICAgICAgICBkZXY9IiRzZGRl
diIKKyAgICBmaQorICAgIHJldHVybiAwCit9CisKKyMgQXR0YWNoZXMgdGhlIHRhcmdldCAkaXFu
IGluICRwb3J0YWwgYW5kIHNldHMgJGRldiB0byBwb2ludCB0byB0aGUKKyMgbXVsdGlwYXRoIGRl
dmljZQorYXR0YWNoKCkKK3sKKyAgICBpc2NzaWFkbSAtbSBub2RlIC0tdGFyZ2V0bmFtZSAiJGlx
biIgLXAgIiRwb3J0YWwiIC0tbG9naW4gPiAvZGV2L251bGwgMj4mMQorICAgIGlmIFsgJD8gLW5l
IDAgXTsgdGhlbgorICAgICAgICBlY2hvICJVbmFibGUgdG8gY29ubmVjdCB0byB0YXJnZXQiCisg
ICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICBmaW5kX2RldmljZQorICAgIGlmIFsgJD8gLW5l
IDAgXTsgdGhlbgorICAgICAgICBlY2hvICJVbmFibGUgdG8gZmluZCBpU0NTSSBkZXZpY2UiCisg
ICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICByZXR1cm4gMAorfQorCisjIERpc2NvdmVycyB0
YXJnZXRzIGluICRwb3J0YWwgYW5kIGNoZWNrcyB0aGF0ICRpcW4gaXMgb25lIG9mIHRob3NlIHRh
cmdldHMKKyMgQWxzbyBzZXRzIHRoZSBhdXRoIHBhcmFtZXRlcnMgdG8gYXR0YWNoIHRoZSBkZXZp
Y2UKK3ByZXBhcmUoKQoreworICAgICMgQ2hlY2sgaWYgdGFyZ2V0IGlzIGFscmVhZHkgb3BlbmVk
CisgICAgaXNjc2lhZG0gLW0gc2Vzc2lvbiAyPiYxIHwgZ3JlcCAtcSAiJGlxbiIKKyAgICBpZiBb
ICQ/IC1lcSAwIF07IHRoZW4KKyAgICAgICAgZWNobyAiRGV2aWNlIGFscmVhZHkgb3BlbmVkIgor
ICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgIyBEaXNjb3ZlciBwb3J0YWwgdGFyZ2V0cwor
ICAgIGlzY3NpYWRtIC1tIGRpc2NvdmVyeSAtdCBzdCAtcCAkcG9ydGFsIDI+JjEgfCBncmVwIC1x
ICIkaXFuIgorICAgIGlmIFsgJD8gLW5lIDAgXTsgdGhlbgorICAgICAgICBlY2hvICJObyBtYXRj
aGluZyB0YXJnZXQgaXFuIGZvdW5kIgorICAgICAgICByZXR1cm4gMQorICAgIGZpCisgICAgIyBT
ZXQgYXV0aCBtZXRob2QgaWYgbmVjZXNzYXJ5CisgICAgaWYgWyAtbiAiJGF1dGhfbWV0aG9kIiBd
IHx8IFsgLW4gIiR1c2VyIiBdIHx8IFsgLW4gIiRwYXNzd29yZCIgXTsgdGhlbgorICAgICAgICBp
c2NzaWFkbSAtbSBub2RlIC0tdGFyZ2V0bmFtZSAiJGlxbiIgLXAgIiRwb3J0YWwiIC0tb3A9dXBk
YXRlIC0tbmFtZSBcCisgICAgICAgICAgICBub2RlLnNlc3Npb24uYXV0aC5hdXRobWV0aG9kIC0t
dmFsdWU9IiRhdXRoX21ldGhvZCIgXAorICAgICAgICAgICAgPiAvZGV2L251bGwgMj4mMSAmJiBc
CisgICAgICAgIGlzY3NpYWRtIC1tIG5vZGUgLS10YXJnZXRuYW1lICIkaXFuIiAtcCAiJHBvcnRh
bCIgLS1vcD11cGRhdGUgLS1uYW1lIFwKKyAgICAgICAgICAgIG5vZGUuc2Vzc2lvbi5hdXRoLnVz
ZXJuYW1lIC0tdmFsdWU9IiR1c2VyIiBcCisgICAgICAgICAgICA+IC9kZXYvbnVsbCAyPiYxICYm
IFwKKyAgICAgICAgaXNjc2lhZG0gLW0gbm9kZSAtLXRhcmdldG5hbWUgIiRpcW4iIC1wICIkcG9y
dGFsIiAtLW9wPXVwZGF0ZSAtLW5hbWUgXAorICAgICAgICAgICAgbm9kZS5zZXNzaW9uLmF1dGgu
cGFzc3dvcmQgLS12YWx1ZT0iJHBhc3N3b3JkIiBcCisgICAgICAgICAgICA+IC9kZXYvbnVsbCAy
PiYxCisgICAgICAgIGlmIFsgJD8gLW5lIDAgXTsgdGhlbgorICAgICAgICAgICAgZWNobyAiVW5h
YmxlIHRvIHNldCBhdXRoZW50aWNhdGlvbiBwYXJhbWV0ZXJzIgorICAgICAgICAgICAgcmV0dXJu
IDEKKyAgICAgICAgZmkKKyAgICBmaQorICAgIHJldHVybiAwCit9CisKKyMgQXR0YWNoZXMgdGhl
IGRldmljZSBhbmQgd3JpdGVzIHhlbnN0b3JlIGJhY2tlbmQgZW50cmllcyB0byBjb25uZWN0Cisj
IHRoZSBkZXZpY2UKK2FkZCgpCit7CisgICAgbG9jYWxhdHRhY2gKKyAgICBpZiBbICQ/IC1uZSAw
IF07IHRoZW4KKyAgICAgICAgZWNobyAiRmFpbGVkIHRvIGF0dGFjaCBkZXZpY2UiCisgICAgICAg
IHJldHVybiAxCisgICAgZmkKKyAgICBtbT0kKGRldmljZV9tYWpvcl9taW5vciAiJGRldiIpCisg
ICAgeGVuc3RvcmUtd3JpdGUgIiRCQUNLRU5EX1BBVEgvcGh5c2ljYWwtZGV2aWNlIiAiJG1tIgor
ICAgIHhlbnN0b3JlLXdyaXRlICIkQkFDS0VORF9QQVRIL3BhcmFtcyIgIiRkZXYiCisgICAgcmV0
dXJuIDAKK30KKworIyBEaXNjb25uZWN0cyB0aGUgZGV2aWNlCityZW1vdmUoKQoreworICAgIGZp
bmRfZGV2aWNlCisgICAgaWYgWyAkPyAtbmUgMCBdOyB0aGVuCisgICAgICAgIGVjaG8gIlVuYWJs
ZSB0byBmaW5kIGRldmljZSIKKyAgICAgICAgcmV0dXJuIDEKKyAgICBmaQorICAgIGlzY3NpYWRt
IC1tIG5vZGUgLS10YXJnZXRuYW1lICIkaXFuIiAtcCAiJHBvcnRhbCIgLS1sb2dvdXQgPiAvZGV2
L251bGwgMj4mMQorICAgIGlmIFsgJD8gLW5lIDAgXTsgdGhlbgorICAgICAgICBlY2hvICJVbmFi
bGUgdG8gZGlzY29ubmVjdCB0YXJnZXQiCisgICAgICAgIHJldHVybiAxCisgICAgZmkKKyAgICBy
ZXR1cm4gMAorfQorCisjIFJlbW92ZXMgdGhlIGF1dGggcGFyYW1zIHNldCBieSBwcmVwYXJlCit1
bnByZXBhcmUoKQoreworICAgIGlmIFsgLW4gIiRhdXRoX21ldGhvZCIgXSB8fCBbIC1uICIkdXNl
ciIgXSB8fCBbIC1uICIkcGFzc3dvcmQiIF07IHRoZW4KKyAgICAgICAgaXNjc2lhZG0gLW0gbm9k
ZSAtLXRhcmdldG5hbWUgIiRpcW4iIC1wICIkcG9ydGFsIiAtLW9wPWRlbGV0ZSAtLW5hbWUgXAor
ICAgICAgICAgICAgbm9kZS5zZXNzaW9uLmF1dGguYXV0aG1ldGhvZCA+IC9kZXYvbnVsbCAyPiYx
CisgICAgICAgIGlzY3NpYWRtIC1tIG5vZGUgLS10YXJnZXRuYW1lICIkaXFuIiAtcCAiJHBvcnRh
bCIgLS1vcD1kZWxldGUgLS1uYW1lIFwKKyAgICAgICAgICAgIG5vZGUuc2Vzc2lvbi5hdXRoLnVz
ZXJuYW1lID4gL2Rldi9udWxsIDI+JjEKKyAgICAgICAgaXNjc2lhZG0gLW0gbm9kZSAtLXRhcmdl
dG5hbWUgIiRpcW4iIC1wICIkcG9ydGFsIiAtLW9wPWRlbGV0ZSAtLW5hbWUgXAorICAgICAgICAg
ICAgbm9kZS5zZXNzaW9uLmF1dGgucGFzc3dvcmQgPiAvZGV2L251bGwgMj4mMQorICAgIGZpCit9
CisKKyMgQXR0YWNoZXMgdGhlIGRldmljZSB0byB0aGUgY3VycmVudCBkb21haW4sIGFuZCB3cml0
ZXMgdGhlIHJlc3VsdGluZworIyBibG9jayBkZXZpY2UgdG8geGVuc3RvcmUKK2xvY2FsYXR0YWNo
KCkKK3sKKyAgICBhdHRhY2gKKyAgICBpZiBbICQ/IC1uZSAwIF07IHRoZW4KKyAgICAgICAgZWNo
byAiRmFpbGVkIHRvIGF0dGFjaCBkZXZpY2UiCisgICAgICAgIHJldHVybiAxCisgICAgZmkKKyAg
ICB4ZW5zdG9yZS13cml0ZSAiJEhPVFBMVUdfUEFUSC9wZGV2IiAiJGRldiIKKyAgICByZXR1cm4g
MAorfQorCitjb21tYW5kPSQxCit0YXJnZXQ9JCh4ZW5zdG9yZS1yZWFkICRIT1RQTFVHX1BBVEgv
cGFyYW1zKQorCitpZiBbIC16ICIkdGFyZ2V0IiBdOyB0aGVuCisgICAgZWNobyAiTm8gaW5mb3Jt
YXRpb24gYWJvdXQgdGhlIHRhcmdldCIKKyAgICBleGl0IDEKK2ZpCisKK2NoZWNrX3Rvb2xzIHx8
IGV4aXQgMQorCitwYXJzZV90YXJnZXQgIiR0YXJnZXQiCisKK2Nhc2UgJGNvbW1hbmQgaW4KK3By
ZXBhcmUpCisgICAgcHJlcGFyZQorICAgIGV4aXQgJD8KKyAgICA7OworYWRkKQorICAgIGFkZAor
ICAgIGV4aXQgJD8KKyAgICA7OworcmVtb3ZlKQorICAgIHJlbW92ZQorICAgIGV4aXQgJD8KKyAg
ICA7OwordW5wcmVwYXJlKQorICAgIHVucHJlcGFyZQorICAgIGV4aXQgJD8KKyAgICA7OworbG9j
YWxhdHRhY2gpCisgICAgbG9jYWxhdHRhY2gKKyAgICBleGl0ICQ/CisgICAgOzsKK2xvY2FsZGV0
YWNoKQorICAgIHJlbW92ZQorICAgIGV4aXQgJD8KKyAgICA7OworZXNhYwotLSAKMS43LjcuNSAo
QXBwbGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0
cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xY-0002Dq-EN; Fri, 21 Dec 2012 17:00:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cf-79
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [85.158.143.99:37986] by server-2.bemta-4.messagelabs.com id
	00/0E-30861-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!4
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25748 invoked from network); 21 Dec 2012 17:00:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:29 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309221"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:19 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:19 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:07 +0100
Message-ID: <1356109208-6830-10-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 09/10] hotplug: document new hotplug
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

TWVudGlvbiB0aGUgbmV3IGRpc2tzcGVjIHBhcmFtZXRlciBhbmQgYWRkIGEgZG9jdW1lbnQgZXhw
bGFpbmluZyB0aGUKbmV3IGhvdHBsdWcgaW50ZXJmYWNlLgoKU2lnbmVkLW9mZi1ieTogUm9nZXIg
UGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+Ci0tLQogZG9jcy9taXNjL2xpYnhsLWhv
dHBsdWctaW50ZXJmYWNlLnR4dCB8ICAxNTYgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrCiBkb2NzL21pc2MveGwtZGlzay1jb25maWd1cmF0aW9uLnR4dCAgIHwgICAxMSArKysKIDIg
ZmlsZXMgY2hhbmdlZCwgMTY3IGluc2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUg
bW9kZSAxMDA2NDQgZG9jcy9taXNjL2xpYnhsLWhvdHBsdWctaW50ZXJmYWNlLnR4dAoKZGlmZiAt
LWdpdCBhL2RvY3MvbWlzYy9saWJ4bC1ob3RwbHVnLWludGVyZmFjZS50eHQgYi9kb2NzL21pc2Mv
bGlieGwtaG90cGx1Zy1pbnRlcmZhY2UudHh0Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAw
MDAwMDAuLjUxODdlNDQKLS0tIC9kZXYvbnVsbAorKysgYi9kb2NzL21pc2MvbGlieGwtaG90cGx1
Zy1pbnRlcmZhY2UudHh0CkBAIC0wLDAgKzEsMTU2IEBACisgICAgICAgICAgICAgICAgICAgICAt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLQorICAgICAgICAgICAgICAgICAgICAgTElCWEwgSE9UUExV
RyBJTlRFUkZBQ0UKKyAgICAgICAgICAgICAgICAgICAgIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
CisKK1RoaXMgZG9jdW1lbnQgc3BlY2lmaWVzIHRoZSBuZXcgbGlieGwgaG90cGx1ZyBpbnRlcmZh
Y2UuIFRoaXMgbmV3CitpbnRlcmZhY2UgaGFzIGJlZW4gZGVzaWduZWQgdG8gb3BlcmF0ZSBiZXR0
ZXIgd2l0aCBjb21wbGV4IGhvdHBsdWcKK3NjcmlwdHMsIHRoYXQgbmVlZCB0byBwZXJmb3JtIHNl
dmVyYWwgb3BlcmF0aW9ucyBhbmQgY2FuIHRha2UgYQorY29uc2lkZXJhYmxlIHRpbWUgdG8gZXhl
Y3V0ZS4KKworSG90cGx1ZyBzY3JpcHRzIGFyZSBleHBlY3RlZCB0byB0YWtlIGEgcGFyYW1ldGVy
LCBwYXNzZWQgYnkgdGhlIGNhbGxlcgorYW5kIHByb3ZpZGUgYSBibG9jayBkZXZpY2UgYXMgb3V0
cHV0LCB0aGF0IHdpbGwgYmUgYXR0YWNoZWQgdG8gdGhlIGd1ZXN0LgorCisKKworPT09PT09PT09
PT09PT09PT09PT09CitFTlZJUk9OTUVOVCBWQVJJQUJMRVMKKz09PT09PT09PT09PT09PT09PT09
PQorCisKK1RoZSBmb2xsb3dpbmcgZW52aXJvbm1lbnQgdmFyaWFibGVzIHdpbGwgYmUgc2V0IGJl
Zm9yZSBjYWxsaW5nCit0aGUgaG90cGx1ZyBzY3JpcHQuCisKKworSE9UUExVR19QQVRICistLS0t
LS0tLS0tLS0KKworUG9pbnRzIHRvIHRoZSB4ZW5zdG9yZSBkaXJlY3RvcnkgdGhhdCBob2xkcyBp
bmZvcm1hdGlvbiByZWxhdGl2ZQordG8gdGhpcyBob3RwbHVnIHNjcmlwdC4gQXQgcHJlc2VudCBv
bmx5IG9uZSBwYXJhbWV0ZXIgaXMgcGFzc2VkIGJ5Cit0aGUgdG9vbHN0YWNrLCB0aGUgInBhcmFt
cyIgeGVuc3RvcmUgZW50cnkgd2hpY2ggY29udGFpbnMgdGhlICJ0YXJnZXQiCitsaW5lIHNwZWNp
ZmllZCBpbiB0aGUgZGlza3NwZWMgeGwgZGlzayBjb25maWd1cmF0aW9uIChwZGV2X3BhdGggaW4K
K2xpYnhsX2RldmljZV9kaXNrIHN0cnVjdCkuCisKK1RoaXMgeGVuc3RvcmUgZGlyZWN0b3J5IHdp
bGwgYmUgdXNlZCB0byBjb21tdW5pY2F0ZSBiZXR3ZWVuIHRoZQoraG90cGx1ZyBzY3JpcHQgYW5k
IHRoZSB0b29sc3RhY2ssIGFuZCBpdCBjYW4gYWxzbyBiZSB1c2VkIGJ5IHRoZQoraG90cGx1ZyBz
Y3JpcHQgdG8gc3RvcmUgdGVtcG9yYXJ5IGluZm9ybWF0aW9uLiBUaGlzIGRpcmVjdG9yeSBpcyBj
cmVhdGVkCitiZWZvcmUgY2FsbGluZyB0aGUgInByZXBhcmUiIG9wZXJhdGlvbiwgYW5kIHRoZSB0
b29sc3RhY2sgZ3VhcmFudGVlcwordGhhdCBpdCB3aWxsIG5vdCBiZSByZW1vdmVkIGJlZm9yZSB0
aGUgInVucHJlcGFyZSIgb3BlcmF0aW9uIGhhcyBiZWVuCitmaW5pc2hlZC4gQWZ0ZXIgdGhhdCwg
dGhlIHRvb2xzdGFjayB3aWxsIHRha2UgdGhlIGFwcHJvcHJpYXRlIGFjdGlvbnMKK3RvIHJlbW92
ZSBpdC4gVGhlIHRvb2xzdGFjayBndWFyYW50ZWVzIHRoYXQgSE9UUExVR19QQVRIIHdpbGwgYWx3
YXlzCitwb2ludCB0byBhIHZhbGlkIHhlbnN0b3JlIHBhdGggZm9yIGFsbCBvcGVyYXRpb25zLgor
CitUaGUgcGF0aCBvZiB0aGlzIGRpcmVjdG9yeSBmb2xsb3dzIHRoZSBzeW50YXg6CisKKy9sb2Nh
bC9kb21haW4vPGxvY2FsX2RvbWlkPi9saWJ4bC9ob3RwbHVnLzxndWVzdF9kb21pZD4vPGRldmlj
ZV9pZD4KKworKE5vdGUgdGhhdCB0aGVyZSBpcyBubyBlbmQgc2xhc2ggYXBwZW5kZWQgdG8gSE9U
UExVR19QQVRIKQorCisKK0JBQ0tFTkRfUEFUSAorLS0tLS0tLS0tLS0tCisKK1BvaW50cyB0byB0
aGUgeGVuc3RvcmUgYmFja2VuZCBwYXRoIG9mIHRoZSBjb3JyZXNwb25kaW5nIGJsb2NrIGRldmlj
ZS4KK1NpbmNlIGhvdHBsdWcgc2NyaXB0cyBhcmUgYWx3YXlzIGV4ZWN1dGVkIGluIHRoZSBEb21h
aW4gdGhhdCBhY3RzIGFzCitiYWNrZW5kIGZvciBhIGRldmljZSwgaXQgd2lsbCBhbHdheXMgaGF2
ZSB0aGUgZm9sbG93aW5nIHN5bnRheDoKKworL2xvY2FsL2RvbWFpbi88bG9jYWxfZG9tYWluPi9i
YWNrZW5kL3ZiZC88Z3Vlc3RfZG9taWQ+LzxkZXZpY2VfaWQ+CisKKyhOb3RlIHRoYXQgdGhlcmUg
aXMgbm8gZW5kIHNsYXNoIGFwcGVuZGVkIHRvIEJBQ0tFTkRfUEFUSCkKKworVGhpcyBlbnZpcm9u
bWVudCB2YXJpYWJsZSBpcyBub3Qgc2V0IGZvciBhbGwgb3BlcmF0aW9ucywgc2luY2Ugc29tZQor
aG90cGx1ZyBvcGVyYXRpb25zIGFyZSBleGVjdXRlZCBiZWZvcmUgdGhlIGJhY2tlbmQgeGVuc3Rv
cmUgaXMgc2V0IHVwLgorCisKKworPT09PT09PT09PT09PT09PT09PT09PT0KK0NPTU1BTkQgTElO
RSBQQVJBTUVURVJTCis9PT09PT09PT09PT09PT09PT09PT09PQorCisKK1NjcmlwdCB3aWxsIGJl
IGNhbGxlZCB3aXRoIG9ubHkgb25lIHBhcmFtZXRlciwgdGhhdCBpcyBlaXRoZXIgcHJlcGFyZSwK
K2FkZCwgcmVtb3ZlLCB1bnByZXBhcmUsIGxvY2FsYXR0YWNoIG9yIGxvY2FsZGV0YWNoLgorCisK
K1BSRVBBUkUKKy0tLS0tLS0KKworVGhpcyBpcyB0aGUgZmlyc3Qgb3BlcmF0aW9uIHRoYXQgdGhl
IGhvdHBsdWcgc2NyaXB0IHdpbGwgYmUgcmVxdWVzdGVkIHRvCitleGVjdXRlLiBUaGlzIG9wZXJh
dGlvbiBpcyBleGVjdXRlZCBiZWZvcmUgdGhlIGRpc2sgaXMgY29ubmVjdGVkLCB0bworZ2l2ZSB0
aGUgaG90cGx1ZyBzY3JpcHQgdGhlIGNoYW5jZSB0byBvZmZsb2FkIHNvbWUgd29yayBmcm9tIHRo
ZSAiYWRkIgorb3BlcmF0aW9uLCB0aGF0IGlzIHBlcmZvcm1lZCBsYXRlci4KKworQkFDS0VORF9Q
QVRIOiBub3QgdmFsaWQKKworRXhwZWN0ZWQgb3V0cHV0OgorTm9uZQorCisKK0FERAorLS0tCisK
K1RoaXMgb3BlcmF0aW9uIHNob3VsZCBjb25uZWN0IHRoZSBkZXZpY2UgdG8gdGhlIGRvbWFpbi4g
V2lsbCBvbmx5IGJlIGNhbGxlZAorYWZ0ZXIgdGhlICJwcmVwYXJlIiBvcGVyYXRpb24gaGFzIGZp
bmlzaGVkIHN1Y2Nlc3NmdWxseS4KKworQkFDS0VORF9QQVRIOiB2YWxpZAorCitFeHBlY3RlZCBv
dXRwdXQ6CitCQUNLRU5EX1BBVEgvcGh5c2ljYWwtZGV2aWNlID0gYmxvY2sgZGV2aWNlIG1ham9y
Om1pbm9yCitCQUNLRU5EX1BBVEgvcGFyYW1zID0gYmxvY2sgZGV2aWNlIHBhdGgKK0hPVFBMVUdf
UEFUSC9wZGV2ID0gYmxvY2sgZGV2aWNlIHBhdGgKKworCitSRU1PVkUKKy0tLS0tLQorCisKK0Rp
c2Nvbm5lY3RzIGEgYmxvY2sgZGV2aWNlIGZyb20gYSBkb21haW4uIFdpbGwgb25seSBiZSBjYWxs
ZWQKK2FmdGVyIHRoZSAicHJlcGFyZSIgb3BlcmF0aW9uIGhhcyBmaW5pc2hlZCBzdWNjZXNzZnVs
bHkuIEltcGxlbWVudGF0aW9ucworc2hvdWxkIHRha2UgaW50byBhY2NvdW50IHRoYXQgdGhlICJy
ZW1vdmUiIG9wZXJhdGlvbiB3aWxsIGFsc28gYmUgY2FsbGVkIGlmCit0aGUgImFkZCIgb3BlcmF0
aW9uIGhhcyBmYWlsZWQuCisKK0JBQ0tFTkRfUEFUSDogdmFsaWQKKworRXhwZWN0ZWQgb3V0cHV0
OgorTm9uZQorCisKK0xPQ0FMQVRUQUNICistLS0tLS0tLS0tLQorCisKK0NyZWF0ZXMgYSBibG9j
ayBkZXZpY2UgaW4gdGhlIGN1cnJlbnQgZG9tYWluIHRoYXQgcG9pbnRzIHRvIHRoZSBndWVzdAor
ZGlzayBkZXZpY2UuIFdpbGwgb25seSBiZSBjYWxsZWQgYWZ0ZXIgdGhlICJwcmVwYXJlIiBvcGVy
YXRpb24gaGFzCitmaW5pc2hlZCBzdWNjZXNzZnVsbHkuCisKK0JBQ0tFTkRfUEFUSDogbm90IHZh
bGlkCisKK0V4cGVjdGVkIG91dHB1dDoKK0hPVFBMVUdfUEFUSC9wZGV2ID0gYmxvY2sgZGV2aWNl
IHBhdGgKKworCitMT0NBTERFVEFDSAorLS0tLS0tLS0tLS0KKworCitEaXNjb25uZWN0cyBhIGRl
dmljZSAocHJldmlvdXNseSBjb25uZWN0ZWQgd2l0aCB0aGUgbG9jYWxhdHRhY2gKK29wZXJhdGlv
bikgZnJvbSB0aGUgY3VycmVudCBkb21haW4uIFdpbGwgb25seSBiZSBjYWxsZWQgYWZ0ZXIKK3Ro
ZSAicHJlcGFyZSIgb3BlcmF0aW9uIGhhcyBmaW5pc2hlZCBzdWNjZXNzZnVsbHkuIEltcGxlbWVu
dGF0aW9ucworc2hvdWxkIHRha2UgaW50byBhY2NvdW50IHRoYXQgdGhlICJsb2NhbGRldGFjaCIg
b3BlcmF0aW9uIHdpbGwKK2Fsc28gYmUgY2FsbGVkIGlmIHRoZSAibG9jYWxhdHRhY2giIG9wZXJh
dGlvbiBoYXMgZmFpbGVkLgorCitCQUNLRU5EX1BBVEg6IG5vdCB2YWxpZAorCitFeHBlY3RlZCBv
dXRwdXQ6CitOb25lCisKKworVU5QUkVQQVJFCistLS0tLS0tLS0KKworCitQZXJmb3JtcyB0aGUg
bmVjZXNzYXJ5IGFjdGlvbnMgdG8gdW5wcmVwYXJlIHRoZSBkZXZpY2UuCisKK0JBQ0tFTkRfUEFU
SDogbm90IHZhbGlkCisKK0V4cGVjdGVkIG91dHB1dDoKK05vbmUKZGlmZiAtLWdpdCBhL2RvY3Mv
bWlzYy94bC1kaXNrLWNvbmZpZ3VyYXRpb24udHh0IGIvZG9jcy9taXNjL3hsLWRpc2stY29uZmln
dXJhdGlvbi50eHQKaW5kZXggODZjMTZiZS4uZDc4YWNkYiAxMDA2NDQKLS0tIGEvZG9jcy9taXNj
L3hsLWRpc2stY29uZmlndXJhdGlvbi50eHQKKysrIGIvZG9jcy9taXNjL3hsLWRpc2stY29uZmln
dXJhdGlvbi50eHQKQEAgLTE2Niw2ICsxNjYsMTcgQEAgaW5mb3JtYXRpb24gdG8gYmUgaW50ZXJw
cmV0ZWQgYnkgdGhlIGV4ZWN1dGFibGUgcHJvZ3JhbSA8c2NyaXB0PiwKIFRoZXNlIHNjcmlwdHMg
YXJlIG5vcm1hbGx5IGNhbGxlZCAiYmxvY2stPHNjcmlwdD4iLgogCiAKK21ldGhvZD08c2NyaXB0
PgorLS0tLS0tLS0tLS0tLS0tCisKK1NwZWNpZmllcyB0aGF0IDx0YXJnZXQ+IGlzIG5vdCBhIG5v
cm1hbCBob3N0IHBhdGgsIGJ1dCByYXRoZXIKK2luZm9ybWF0aW9uIHRvIGJlIGludGVycHJldGVk
IGJ5IHRoZSBleGVjdXRhYmxlIHByb2dyYW0gPHNjcmlwdD4sCisobG9va2VkIGZvciBpbiAvZXRj
L3hlbi9zY3JpcHRzLCBpZiBpdCBkb2Vzbid0IGNvbnRhaW4gYSBzbGFzaCkuCisKK1RoZSBzY3Jp
cHQgcGFzc2VkIGFzIHBhcmFtZXRlciBzaG91bGQgc3VwcG9ydCB0aGUgbmV3IGhvdHBsdWcKK3Nj
cmlwdCBpbnRlcmZhY2UsIHdoaWNoIGlzIGRlZmluZWQgaW4gbGlieGwtaG90cGx1Zy1pbnRlcmZh
Y2UudHh0LgorCisKIAogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT0KIERFUFJFQ0FURUQgUEFSQU1FVEVSUywgUFJFRklYRVMgQU5EIFNZTlRBWEVTCi0tIAoxLjcu
Ny41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9y
ZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xX-0002DV-Lp; Fri, 21 Dec 2012 17:00:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xV-0002Cc-SO
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [85.158.143.99:34459] by server-3.bemta-4.messagelabs.com id
	43/E0-18211-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25700 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309219"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:18 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:18 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:05 +0100
Message-ID: <1356109208-6830-8-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 07/10] libxl: add local attach support for
	new hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkcyBzdXBwb3J0IGZvciBsb2NhbGx5IGF0dGFjaGluZyBkaXNrcyB0aGF0IHVzZSB0aGUgbmV3
IGhvdHBsdWcKc2NyaXB0IGludGVyZmFjZSwgYnkgY2FsbGluZyB0aGUgbG9jYWxhdHRhY2gvbG9j
YWxkZXRhY2ggb3BlcmF0aW9ucwphbmQgcmV0dXJuaW5nIGEgYmxvY2sgZGV2aWNlIHRoYXQgY2Fu
IGJlIHVzZWQgdG8gZXhlY3V0ZSBweWdydWIuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9u
bsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bC5jICAgICAg
ICAgICAgfCAgIDg4ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tLS0KIHRv
b2xzL2xpYnhsL2xpYnhsX2Jvb3Rsb2FkZXIuYyB8ICAgIDEgKwogdG9vbHMvbGlieGwvbGlieGxf
aW50ZXJuYWwuaCAgIHwgICAgMSArCiAzIGZpbGVzIGNoYW5nZWQsIDc3IGluc2VydGlvbnMoKyks
IDEzIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsLmMgYi90b29s
cy9saWJ4bC9saWJ4bC5jCmluZGV4IDI5YjI3NjUuLmIyMWVlZmMgMTAwNjQ0Ci0tLSBhL3Rvb2xz
L2xpYnhsL2xpYnhsLmMKKysrIGIvdG9vbHMvbGlieGwvbGlieGwuYwpAQCAtMjY1Nyw2ICsyNjU3
LDcgQEAgdm9pZCBsaWJ4bF9fZGV2aWNlX2Rpc2tfbG9jYWxfaW5pdGlhdGVfYXR0YWNoKGxpYnhs
X19lZ2MgKmVnYywKICAgICBjb25zdCBsaWJ4bF9kZXZpY2VfZGlzayAqaW5fZGlzayA9IGRscy0+
aW5fZGlzazsKICAgICBsaWJ4bF9kZXZpY2VfZGlzayAqZGlzayA9ICZkbHMtPmRpc2s7CiAgICAg
Y29uc3QgY2hhciAqYmxrZGV2X3N0YXJ0ID0gZGxzLT5ibGtkZXZfc3RhcnQ7CisgICAgdWludDMy
X3QgZG9taWQgPSBkbHMtPmRvbWlkOwogCiAgICAgYXNzZXJ0KGluX2Rpc2stPnBkZXZfcGF0aCk7
CiAKQEAgLTI2NzMsNiArMjY3NCwyMyBAQCB2b2lkIGxpYnhsX19kZXZpY2VfZGlza19sb2NhbF9p
bml0aWF0ZV9hdHRhY2gobGlieGxfX2VnYyAqZWdjLAogICAgICAgICBjYXNlIExJQlhMX0RJU0tf
QkFDS0VORF9QSFk6CiAgICAgICAgICAgICBMSUJYTF9fTE9HKGN0eCwgTElCWExfX0xPR19ERUJV
RywgImxvY2FsbHkgYXR0YWNoaW5nIFBIWSBkaXNrICVzIiwKICAgICAgICAgICAgICAgICAgICAg
ICAgZGlzay0+cGRldl9wYXRoKTsKKyAgICAgICAgICAgIGlmIChkaXNrLT5ob3RwbHVnX3ZlcnNp
b24gIT0gMCkgeworICAgICAgICAgICAgICAgIGRpc2stPnZkZXYgPSBsaWJ4bF9fc3RyZHVwKGdj
LCBpbl9kaXNrLT52ZGV2KTsKKyAgICAgICAgICAgICAgICBsaWJ4bF9fcHJlcGFyZV9hb19kZXZp
Y2UoYW8sICZkbHMtPmFvZGV2KTsKKyAgICAgICAgICAgICAgICBHQ05FVyhkbHMtPmFvZGV2LmRl
dik7CisgICAgICAgICAgICAgICAgcmMgPSBsaWJ4bF9fZGV2aWNlX2Zyb21fZGlzayhnYywgZG9t
aWQsIGRpc2ssCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBk
bHMtPmFvZGV2LmRldik7CisgICAgICAgICAgICAgICAgaWYgKHJjICE9IDApIHsKKyAgICAgICAg
ICAgICAgICAgICAgTE9HKEVSUk9SLCAiSW52YWxpZCBvciB1bnN1cHBvcnRlZCB2aXJ0dWFsIGRp
c2sgIgorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICJpZGVudGlmaWVyICVzIiwgZGlz
ay0+dmRldik7CisgICAgICAgICAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAgICAgICAgICAg
IH0KKyAgICAgICAgICAgICAgICBkbHMtPmFvZGV2LmFjdGlvbiA9IERFVklDRV9MT0NBTEFUVEFD
SDsKKyAgICAgICAgICAgICAgICBkbHMtPmFvZGV2LmhvdHBsdWdfdmVyc2lvbiA9IGRpc2stPmhv
dHBsdWdfdmVyc2lvbjsKKyAgICAgICAgICAgICAgICBkbHMtPmFvZGV2LmNhbGxiYWNrID0gbG9j
YWxfZGV2aWNlX2F0dGFjaF9jYjsKKyAgICAgICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2hvdHBs
dWcoZWdjLCAmZGxzLT5hb2Rldik7CisgICAgICAgICAgICAgICAgcmV0dXJuOworICAgICAgICAg
ICAgfQogICAgICAgICAgICAgZGV2ID0gZGlzay0+cGRldl9wYXRoOwogICAgICAgICAgICAgYnJl
YWs7CiAgICAgICAgIGNhc2UgTElCWExfRElTS19CQUNLRU5EX1RBUDoKQEAgLTI3MzUsNyArMjc1
Myw4IEBAIHN0YXRpYyB2b2lkIGxvY2FsX2RldmljZV9hdHRhY2hfY2IobGlieGxfX2VnYyAqZWdj
LCBsaWJ4bF9fYW9fZGV2aWNlICphb2RldikKIHsKICAgICBTVEFURV9BT19HQyhhb2Rldi0+YW8p
OwogICAgIGxpYnhsX19kaXNrX2xvY2FsX3N0YXRlICpkbHMgPSBDT05UQUlORVJfT0YoYW9kZXYs
ICpkbHMsIGFvZGV2KTsKLSAgICBjaGFyICpkZXYgPSBOVUxMLCAqYmVfcGF0aCA9IE5VTEw7Cisg
ICAgY2hhciAqZGV2ID0gTlVMTCwgKmJlX3BhdGggPSBOVUxMLCAqaG90cGx1Z19wYXRoOworICAg
IGNvbnN0IGNoYXIgKnBkZXYgPSBOVUxMOwogICAgIGludCByYzsKICAgICBsaWJ4bF9fZGV2aWNl
IGRldmljZTsKICAgICBsaWJ4bF9kZXZpY2VfZGlzayAqZGlzayA9ICZkbHMtPmRpc2s7CkBAIC0y
NzQ4LDIwICsyNzY3LDQ1IEBAIHN0YXRpYyB2b2lkIGxvY2FsX2RldmljZV9hdHRhY2hfY2IobGli
eGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW9fZGV2aWNlICphb2RldikKICAgICAgICAgICAgICAgICAg
ICAgYW9kZXYtPmRldi0+ZGV2aWQpOwogICAgICAgICBnb3RvIG91dDsKICAgICB9CisgICAgc3dp
dGNoIChkaXNrLT5iYWNrZW5kKSB7CisgICAgY2FzZSBMSUJYTF9ESVNLX0JBQ0tFTkRfUEhZOgor
ICAgICAgICAvKiBGZXRjaCBob3RwbHVnIG91dHB1dCB0byBvYnRhaW4gdGhlIGJsb2NrIGRldmlj
ZSAqLworICAgICAgICByYyA9IGxpYnhsX19kZXZpY2VfZnJvbV9kaXNrKGdjLCBkbHMtPmRvbWlk
LCBkaXNrLCAmZGV2aWNlKTsKKyAgICAgICAgaWYgKHJjKQorICAgICAgICAgICAgZ290byBvdXQ7
CisgICAgICAgIGhvdHBsdWdfcGF0aCA9IGxpYnhsX19kZXZpY2VfeHNfaG90cGx1Z19wYXRoKGdj
LCAmZGV2aWNlKTsKKyAgICAgICAgcmMgPSBsaWJ4bF9feHNfcmVhZF9jaGVja2VkKGdjLCBYQlRf
TlVMTCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIEdDU1BSSU5URigiJXMv
cGRldiIsIGhvdHBsdWdfcGF0aCksCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAmcGRldik7CisgICAgICAgIGlmIChyYykKKyAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAg
ICBpZiAoIXBkZXYpIHsKKyAgICAgICAgICAgIHJjID0gRVJST1JfRkFJTDsKKyAgICAgICAgICAg
IGdvdG8gb3V0OworICAgICAgICB9CisgICAgICAgIExPRyhERUJVRywgImF0dGFjaGVkIGxvY2Fs
IGJsb2NrIGRldmljZSAlcyIsIHBkZXYpOworICAgICAgICBkbHMtPmRpc2twYXRoID0gbGlieGxf
X3N0cmR1cChnYywgcGRldik7CisgICAgICAgIGJyZWFrOworICAgIGNhc2UgTElCWExfRElTS19C
QUNLRU5EX1FESVNLOgorICAgICAgICBkZXYgPSBHQ1NQUklOVEYoIi9kZXYvJXMiLCBkaXNrLT52
ZGV2KTsKKyAgICAgICAgTE9HKERFQlVHLCAibG9jYWxseSBhdHRhY2hpbmcgcWRpc2sgJXMiLCBk
ZXYpOwogCi0gICAgZGV2ID0gR0NTUFJJTlRGKCIvZGV2LyVzIiwgZGlzay0+dmRldik7Ci0gICAg
TE9HKERFQlVHLCAibG9jYWxseSBhdHRhY2hpbmcgcWRpc2sgJXMiLCBkZXYpOworICAgICAgICBy
YyA9IGxpYnhsX19kZXZpY2VfZnJvbV9kaXNrKGdjLCBMSUJYTF9UT09MU1RBQ0tfRE9NSUQsIGRp
c2ssICZkZXZpY2UpOworICAgICAgICBpZiAocmMgPCAwKQorICAgICAgICAgICAgZ290byBvdXQ7
CisgICAgICAgIGJlX3BhdGggPSBsaWJ4bF9fZGV2aWNlX2JhY2tlbmRfcGF0aChnYywgJmRldmlj
ZSk7CisgICAgICAgIHJjID0gbGlieGxfX3dhaXRfZm9yX2JhY2tlbmQoZ2MsIGJlX3BhdGgsICI0
Iik7CisgICAgICAgIGlmIChyYyA8IDApCisgICAgICAgICAgICBnb3RvIG91dDsKIAotICAgIHJj
ID0gbGlieGxfX2RldmljZV9mcm9tX2Rpc2soZ2MsIExJQlhMX1RPT0xTVEFDS19ET01JRCwgZGlz
aywgJmRldmljZSk7Ci0gICAgaWYgKHJjIDwgMCkKLSAgICAgICAgZ290byBvdXQ7Ci0gICAgYmVf
cGF0aCA9IGxpYnhsX19kZXZpY2VfYmFja2VuZF9wYXRoKGdjLCAmZGV2aWNlKTsKLSAgICByYyA9
IGxpYnhsX193YWl0X2Zvcl9iYWNrZW5kKGdjLCBiZV9wYXRoLCAiNCIpOwotICAgIGlmIChyYyA8
IDApCisgICAgICAgIGlmIChkZXYgIT0gTlVMTCkKKyAgICAgICAgICAgIGRscy0+ZGlza3BhdGgg
PSBsaWJ4bF9fc3RyZHVwKGdjLCBkZXYpOworICAgICAgICBicmVhazsKKyAgICBkZWZhdWx0Ogor
ICAgICAgICBMT0coRVJST1IsICJpbnZhbGlkIGRpc2sgYmFja2VuZCBmb3IgbG9jYWwgYXR0YWNo
IGNhbGxiYWNrIik7CisgICAgICAgIHJjID0gRVJST1JfRkFJTDsKICAgICAgICAgZ290byBvdXQ7
Ci0KLSAgICBpZiAoZGV2ICE9IE5VTEwpCi0gICAgICAgIGRscy0+ZGlza3BhdGggPSBsaWJ4bF9f
c3RyZHVwKGdjLCBkZXYpOworICAgIH0KIAogICAgIGRscy0+Y2FsbGJhY2soZWdjLCBkbHMsIDAp
OwogICAgIHJldHVybjsKQEAgLTI3ODUsMTEgKzI4MjksMjkgQEAgdm9pZCBsaWJ4bF9fZGV2aWNl
X2Rpc2tfbG9jYWxfaW5pdGlhdGVfZGV0YWNoKGxpYnhsX19lZ2MgKmVnYywKICAgICBsaWJ4bF9k
ZXZpY2VfZGlzayAqZGlzayA9ICZkbHMtPmRpc2s7CiAgICAgbGlieGxfX2RldmljZSAqZGV2aWNl
OwogICAgIGxpYnhsX19hb19kZXZpY2UgKmFvZGV2ID0gJmRscy0+YW9kZXY7CisgICAgdWludDMy
X3QgZG9taWQgPSBkbHMtPmRvbWlkOwogICAgIGxpYnhsX19wcmVwYXJlX2FvX2RldmljZShhbywg
YW9kZXYpOwogCiAgICAgaWYgKCFkbHMtPmRpc2twYXRoKSBnb3RvIG91dDsKIAogICAgIHN3aXRj
aCAoZGlzay0+YmFja2VuZCkgeworICAgICAgICBjYXNlIExJQlhMX0RJU0tfQkFDS0VORF9QSFk6
CisgICAgICAgICAgICBpZiAoZGlzay0+aG90cGx1Z192ZXJzaW9uICE9IDApIHsKKyAgICAgICAg
ICAgICAgICBHQ05FVyhhb2Rldi0+ZGV2KTsKKyAgICAgICAgICAgICAgICByYyA9IGxpYnhsX19k
ZXZpY2VfZnJvbV9kaXNrKGdjLCBkb21pZCwgZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGFvZGV2LT5kZXYpOworICAgICAgICAgICAgICAgIGlmIChy
YyAhPSAwKSB7CisgICAgICAgICAgICAgICAgICAgIExPRyhFUlJPUiwgIkludmFsaWQgb3IgdW5z
dXBwb3J0ZWQgdmlydHVhbCBkaXNrICIKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
aWRlbnRpZmllciAlcyIsIGRpc2stPnZkZXYpOworICAgICAgICAgICAgICAgICAgICBnb3RvIG91
dDsKKyAgICAgICAgICAgICAgICB9CisgICAgICAgICAgICAgICAgYW9kZXYtPmFjdGlvbiA9IERF
VklDRV9MT0NBTERFVEFDSDsKKyAgICAgICAgICAgICAgICBhb2Rldi0+aG90cGx1Z192ZXJzaW9u
ID0gZGlzay0+aG90cGx1Z192ZXJzaW9uOworICAgICAgICAgICAgICAgIGFvZGV2LT5jYWxsYmFj
ayA9IGxvY2FsX2RldmljZV9kZXRhY2hfY2I7CisgICAgICAgICAgICAgICAgbGlieGxfX2Rldmlj
ZV9ob3RwbHVnKGVnYywgYW9kZXYpOworICAgICAgICAgICAgICAgIHJldHVybjsKKyAgICAgICAg
ICAgIH0KKyAgICAgICAgICAgIGJyZWFrOwogICAgICAgICBjYXNlIExJQlhMX0RJU0tfQkFDS0VO
RF9RRElTSzoKICAgICAgICAgICAgIGlmIChkaXNrLT52ZGV2ICE9IE5VTEwpIHsKICAgICAgICAg
ICAgICAgICBHQ05FVyhkZXZpY2UpOwpAQCAtMjgyOCw3ICsyODkwLDcgQEAgc3RhdGljIHZvaWQg
bG9jYWxfZGV2aWNlX2RldGFjaF9jYihsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hb19kZXZpY2Ug
KmFvZGV2KQogCiAgICAgaWYgKGFvZGV2LT5yYykgewogICAgICAgICBMT0dFKEVSUk9SLCAidW5h
YmxlIHRvICVzICVzIHdpdGggaWQgJXUiLAotICAgICAgICAgICAgICAgICAgICBhb2Rldi0+YWN0
aW9uID09IERFVklDRV9DT05ORUNUID8gImFkZCIgOiAicmVtb3ZlIiwKKyAgICAgICAgICAgICAg
ICAgICAgbGlieGxfX2RldmljZV9ob3RwbHVnX2FjdGlvbihnYywgYW9kZXYtPmFjdGlvbiksCiAg
ICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2Vfa2luZF90b19zdHJpbmcoYW9kZXYtPmRl
di0+a2luZCksCiAgICAgICAgICAgICAgICAgICAgIGFvZGV2LT5kZXYtPmRldmlkKTsKICAgICAg
ICAgZ290byBvdXQ7CmRpZmYgLS1naXQgYS90b29scy9saWJ4bC9saWJ4bF9ib290bG9hZGVyLmMg
Yi90b29scy9saWJ4bC9saWJ4bF9ib290bG9hZGVyLmMKaW5kZXggZTEwM2VlOS4uZmYyZTQ4YyAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfYm9vdGxvYWRlci5jCisrKyBiL3Rvb2xzL2xp
YnhsL2xpYnhsX2Jvb3Rsb2FkZXIuYwpAQCAtMzgxLDYgKzM4MSw3IEBAIHZvaWQgbGlieGxfX2Jv
b3Rsb2FkZXJfcnVuKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2Jvb3Rsb2FkZXJfc3RhdGUgKmJs
KQogICAgIGJsLT5kbHMuYW8gPSBhbzsKICAgICBibC0+ZGxzLmluX2Rpc2sgPSBibC0+ZGlzazsK
ICAgICBibC0+ZGxzLmJsa2Rldl9zdGFydCA9IGluZm8tPmJsa2Rldl9zdGFydDsKKyAgICBibC0+
ZGxzLmRvbWlkID0gZG9taWQ7CiAgICAgYmwtPmRscy5jYWxsYmFjayA9IGJvb3Rsb2FkZXJfZGlz
a19hdHRhY2hlZF9jYjsKICAgICBsaWJ4bF9fZGV2aWNlX2Rpc2tfbG9jYWxfaW5pdGlhdGVfYXR0
YWNoKGVnYywgJmJsLT5kbHMpOwogICAgIHJldHVybjsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhs
L2xpYnhsX2ludGVybmFsLmggYi90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCmluZGV4IDE2
OTA3ZmYuLjY3YzU2YTYgMTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmgK
KysrIGIvdG9vbHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaApAQCAtMjEzMCw2ICsyMTMwLDcgQEAg
c3RydWN0IGxpYnhsX19kaXNrX2xvY2FsX3N0YXRlIHsKICAgICBsaWJ4bF9kZXZpY2VfZGlzayBk
aXNrOwogICAgIGNvbnN0IGNoYXIgKmJsa2Rldl9zdGFydDsKICAgICBsaWJ4bF9fZGlza19sb2Nh
bF9zdGF0ZV9jYWxsYmFjayAqY2FsbGJhY2s7CisgICAgdWludDMyX3QgZG9taWQ7CiAgICAgLyog
ZmlsbGVkIGJ5IGxpYnhsX19kZXZpY2VfZGlza19sb2NhbF9pbml0aWF0ZV9hdHRhY2ggKi8KICAg
ICBjaGFyICpkaXNrcGF0aDsKICAgICAvKiBwcml2YXRlIGZvciBpbXBsZW1lbnRhdGlvbiBvZiBs
b2NhbCBkZXRhY2ggKi8KLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0
Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xa-0002FV-TR; Fri, 21 Dec 2012 17:00:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xX-0002Cs-Tg
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:32 +0000
Received: from [193.109.254.147:27433] by server-8.bemta-14.messagelabs.com id
	B5/42-26341-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11585 invoked from network); 21 Dec 2012 17:00:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309211"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:15 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:15 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 17:59:58 +0100
Message-ID: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH RFC 00/10] libxl: new hotplug calling convention
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series implements a new hotplug calling convention for libxl.

The aim of this new convention is to reduce the blackout phase of 
migration when using complex hotplug scripts, like iSCSI or other 
kinds of storage backends that might have a non-trivial setup time.

There are some issues that I would like to discuss. The first is 
that the pdev_path field in libxl_device_disk is no longer used to 
store a physical path, since the diskspec "target" can now contain 
arbitrary information used to connect a block device.

To solve this I would like to introduce a new field in 
libxl_device_disk called "target", which will be used to store the 
diskspec target parameter. This can later be copied to pdev_path 
when using the old hotplug calling convention.

The second issue is related to iSCSI and iscsiadm, specifically the 
way authentication parameters are set, which is done either with 
command-line parameters or by editing files (which each distro seems 
to place in a different location). I will look into this to see if 
we can find a suitable solution.

Thanks for the comments, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xb-0002Fl-8i; Fri, 21 Dec 2012 17:00:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xY-0002Da-9G
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:32 +0000
Received: from [193.109.254.147:27461] by server-16.bemta-14.messagelabs.com
	id A4/5C-18932-EA594D05; Fri, 21 Dec 2012 17:00:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!6
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11647 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309217"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:18 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:18 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:04 +0100
Message-ID: <1356109208-6830-7-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 06/10] xl: add support for new hotplug
	interface to block-attach/detach
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Q2FsbCBwcmVwYXJlIGJlZm9yZSBhdHRhY2hpbmcgYSBibG9jayBkZXZpY2UuCgpTaWduZWQtb2Zm
LWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29scy9s
aWJ4bC94bF9jbWRpbXBsLmMgfCAgICA1ICsrKysrCiAxIGZpbGVzIGNoYW5nZWQsIDUgaW5zZXJ0
aW9ucygrKSwgMCBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS90b29scy9saWJ4bC94bF9jbWRp
bXBsLmMgYi90b29scy9saWJ4bC94bF9jbWRpbXBsLmMKaW5kZXggNGI3NWZjMy4uNzgxMjAxYSAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwveGxfY21kaW1wbC5jCisrKyBiL3Rvb2xzL2xpYnhsL3hs
X2NtZGltcGwuYwpAQCAtNTYzMSw2ICs1NjMxLDExIEBAIGludCBtYWluX2Jsb2NrYXR0YWNoKGlu
dCBhcmdjLCBjaGFyICoqYXJndikKICAgICAgICAgcmV0dXJuIDA7CiAgICAgfQogCisgICAgaWYg
KGxpYnhsX2RldmljZV9kaXNrX3ByZXBhcmUoY3R4LCBmZV9kb21pZCwgJmRpc2ssIDApKSB7Cisg
ICAgICAgIGZwcmludGYoc3RkZXJyLCAibGlieGxfZGV2aWNlX2Rpc2tfcHJlcGFyZSBmYWlsZWQu
XG4iKTsKKyAgICAgICAgcmV0dXJuIDE7CisgICAgfQorCiAgICAgaWYgKGxpYnhsX2RldmljZV9k
aXNrX2FkZChjdHgsIGZlX2RvbWlkLCAmZGlzaywgMCkpIHsKICAgICAgICAgZnByaW50ZihzdGRl
cnIsICJsaWJ4bF9kZXZpY2VfZGlza19hZGQgZmFpbGVkLlxuIik7CiAgICAgfQotLSAKMS43Ljcu
NSAoQXBwbGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcK
aHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xZ-0002EW-O5; Fri, 21 Dec 2012 17:00:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cq-Np
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:31 +0000
Received: from [193.109.254.147:4027] by server-10.bemta-14.messagelabs.com id
	C6/76-13263-EA594D05; Fri, 21 Dec 2012 17:00:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!4
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11636 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309214"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:17 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:16 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:01 +0100
Message-ID: <1356109208-6830-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 03/10] libxl: add new "method" parameter to
	xl disk config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBuZXcgZGlza3NwZWMgcGFyYW1ldGVycyB3aWxsIHNldCBzY3JpcHQgdG8gdGhlIHZhbHVl
IHBhc3NlZCBpbgptZXRob2QsIGFuZCBob3RwbHVnX3ZlcnNpb24gdG8gMS4KClRoaXMgcGF0Y2gg
YWRkcyB0aGUgYmFzaWMgc3VwcG9ydCB0byBoYW5kbGUgdGhpcyBuZXcgc2NyaXB0cywgYnkKYWRk
aW5nIHRoZSBwcmVwYXJlL3VucHJlcGFyZSBwcml2YXRlIGxpYnhsIGZ1bmN0aW9ucywgYW5kIG1v
ZGlmeWluZwp0aGUgY3VycmVudCBkb21haW4gY3JlYXRpb24vZGVzdHJ1Y3Rpb24gdG8gY2FsbCB0
aGlzIGZ1bmN0aW9ucyBhdCB0aGUKYXBwcm9wcmlhdGUgc3BvdHMuCgpUaGlzIHBhdGNoIGFsc28g
YWRkcyBzb21lIGhlbHBlcnMsIHRoYXQgd2lsbCBiZSB1c2VkIGhlcmUgYW5kIGluIGxhdGVyCnBh
dGNoZXMuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bC5jICAgICAgICAgIHwgIDEzMSArKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0KIHRvb2xzL2xpYnhsL2xpYnhsX2NyZWF0
ZS5jICAgfCAgIDYyICsrKysrKysrKysrKysrKysrKystCiB0b29scy9saWJ4bC9saWJ4bF9kZXZp
Y2UuYyAgIHwgICA4MCArKysrKysrKysrKysrKysrKysrLS0tLS0tCiB0b29scy9saWJ4bC9saWJ4
bF9pbnRlcm5hbC5oIHwgICAzNCArKysrKysrKysrKwogdG9vbHMvbGlieGwvbGlieGxfdHlwZXMu
aWRsICB8ICAgIDEgKwogdG9vbHMvbGlieGwvbGlieGx1X2Rpc2tfbC5sICB8ICAgIDIgKwogNiBm
aWxlcyBjaGFuZ2VkLCAyODggaW5zZXJ0aW9ucygrKSwgMjIgZGVsZXRpb25zKC0pCgpkaWZmIC0t
Z2l0IGEvdG9vbHMvbGlieGwvbGlieGwuYyBiL3Rvb2xzL2xpYnhsL2xpYnhsLmMKaW5kZXggOGQ5
MjFiYy4uZTE0MTM3OSAxMDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGwuYworKysgYi90b29s
cy9saWJ4bC9saWJ4bC5jCkBAIC0xOTk5LDYgKzE5OTksMTA4IEBAIGludCBsaWJ4bF9fZGV2aWNl
X2Zyb21fZGlzayhsaWJ4bF9fZ2MgKmdjLCB1aW50MzJfdCBkb21pZCwKICAgICByZXR1cm4gMDsK
IH0KIAordm9pZCBsaWJ4bF9fZGV2aWNlX2Rpc2tfcHJlcGFyZShsaWJ4bF9fZWdjICplZ2MsIHVp
bnQzMl90IGRvbWlkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kZXZp
Y2VfZGlzayAqZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2Fv
X2RldmljZSAqYW9kZXYpCit7CisgICAgU1RBVEVfQU9fR0MoYW9kZXYtPmFvKTsKKyAgICBjaGFy
ICpob3RwbHVnX3BhdGg7CisgICAgY2hhciAqc2NyaXB0X3BhdGg7CisgICAgaW50IHJjOworICAg
IHhzX3RyYW5zYWN0aW9uX3QgdCA9IFhCVF9OVUxMOworCisgICAgaWYgKGRpc2stPmhvdHBsdWdf
dmVyc2lvbiA9PSAwKSB7CisgICAgICAgIGFvZGV2LT5jYWxsYmFjayhlZ2MsIGFvZGV2KTsKKyAg
ICAgICAgcmV0dXJuOworICAgIH0KKworICAgIEdDTkVXKGFvZGV2LT5kZXYpOworICAgIHJjID0g
bGlieGxfX2RldmljZV9mcm9tX2Rpc2soZ2MsIGRvbWlkLCBkaXNrLCBhb2Rldi0+ZGV2KTsKKyAg
ICBpZiAocmMgIT0gMCkgeworICAgICAgICBMT0coRVJST1IsICJJbnZhbGlkIG9yIHVuc3VwcG9y
dGVkIHZpcnR1YWwgZGlzayBpZGVudGlmaWVyICVzIiwKKyAgICAgICAgICAgICAgICAgICBkaXNr
LT52ZGV2KTsKKyAgICAgICAgZ290byBlcnJvcjsKKyAgICB9CisKKyAgICBzY3JpcHRfcGF0aCA9
IGxpYnhsX19hYnNfcGF0aChnYywgZGlzay0+c2NyaXB0LAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX194ZW5fc2NyaXB0X2Rpcl9wYXRoKCkpOworCisgICAgaG90cGx1
Z19wYXRoID0gbGlieGxfX2RldmljZV94c19ob3RwbHVnX3BhdGgoZ2MsIGFvZGV2LT5kZXYpOwor
ICAgIGZvciAoOzspIHsKKyAgICAgICAgcmMgPSBsaWJ4bF9feHNfdHJhbnNhY3Rpb25fc3RhcnQo
Z2MsICZ0KTsKKyAgICAgICAgaWYgKHJjKSBnb3RvIGVycm9yOworCisgICAgICAgIHJjID0gbGli
eGxfX3hzX3dyaXRlX2NoZWNrZWQoZ2MsIHQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBHQ1NQUklOVEYoIiVzL3BhcmFtcyIsIGhvdHBsdWdfcGF0aCksCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBkaXNrLT5wZGV2X3BhdGgpOworICAgICAgICBpZiAocmMpCisg
ICAgICAgICAgICBnb3RvIGVycm9yOworCisgICAgICAgIHJjID0gbGlieGxfX3hzX3dyaXRlX2No
ZWNrZWQoZ2MsIHQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBHQ1NQUklOVEYo
IiVzL3NjcmlwdCIsIGhvdHBsdWdfcGF0aCksCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBzY3JpcHRfcGF0aCk7CisgICAgICAgIGlmIChyYykKKyAgICAgICAgICAgIGdvdG8gZXJy
b3I7CisKKyAgICAgICAgcmMgPSBsaWJ4bF9feHNfdHJhbnNhY3Rpb25fY29tbWl0KGdjLCAmdCk7
CisgICAgICAgIGlmICghcmMpIGJyZWFrOworICAgICAgICBpZiAocmMgPCAwKSBnb3RvIGVycm9y
OworICAgIH0KKworICAgIGFvZGV2LT5hY3Rpb24gPSBERVZJQ0VfUFJFUEFSRTsKKyAgICBhb2Rl
di0+aG90cGx1Z192ZXJzaW9uID0gZGlzay0+aG90cGx1Z192ZXJzaW9uOworICAgIGxpYnhsX19k
ZXZpY2VfaG90cGx1ZyhlZ2MsIGFvZGV2KTsKKyAgICByZXR1cm47CisKK2Vycm9yOgorICAgIGFz
c2VydChyYyk7CisgICAgbGlieGxfX3hzX3RyYW5zYWN0aW9uX2Fib3J0KGdjLCAmdCk7CisgICAg
YW9kZXYtPnJjID0gcmM7CisgICAgYW9kZXYtPmNhbGxiYWNrKGVnYywgYW9kZXYpOworICAgIHJl
dHVybjsKK30KKwordm9pZCBsaWJ4bF9fZGV2aWNlX2Rpc2tfdW5wcmVwYXJlKGxpYnhsX19lZ2Mg
KmVnYywgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfZGV2aWNlX2Rpc2sgKmRpc2ssCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpCit7CisgICAgU1RBVEVfQU9fR0MoYW9kZXYtPmFv
KTsKKyAgICBjaGFyICpob3RwbHVnX3BhdGg7CisgICAgaW50IHJjOworCisgICAgaWYgKGRpc2st
PmhvdHBsdWdfdmVyc2lvbiA9PSAwKSB7CisgICAgICAgIGFvZGV2LT5jYWxsYmFjayhlZ2MsIGFv
ZGV2KTsKKyAgICAgICAgcmV0dXJuOworICAgIH0KKworICAgIEdDTkVXKGFvZGV2LT5kZXYpOwor
ICAgIHJjID0gbGlieGxfX2RldmljZV9mcm9tX2Rpc2soZ2MsIGRvbWlkLCBkaXNrLCBhb2Rldi0+
ZGV2KTsKKyAgICBpZiAocmMgIT0gMCkgeworICAgICAgICBMT0coRVJST1IsICJJbnZhbGlkIG9y
IHVuc3VwcG9ydGVkIHZpcnR1YWwgZGlzayBpZGVudGlmaWVyICVzIiwKKyAgICAgICAgICAgICAg
ICAgICBkaXNrLT52ZGV2KTsKKyAgICAgICAgZ290byBlcnJvcjsKKyAgICB9CisKKyAgICBob3Rw
bHVnX3BhdGggPSBsaWJ4bF9fZGV2aWNlX3hzX2hvdHBsdWdfcGF0aChnYywgYW9kZXYtPmRldik7
CisgICAgaWYgKCFsaWJ4bF9feHNfcmVhZChnYywgWEJUX05VTEwsIGhvdHBsdWdfcGF0aCkpIHsK
KyAgICAgICAgTE9HKERFQlVHLCAidW5hYmxlIHRvIHVucHJlcGFyZSBkZXZpY2UgYmVjYXVzZSAl
cyBkb2Vzbid0IGV4aXN0IiwKKyAgICAgICAgICAgICAgICAgICBob3RwbHVnX3BhdGgpOworICAg
ICAgICBhb2Rldi0+Y2FsbGJhY2soZWdjLCBhb2Rldik7CisgICAgICAgIHJldHVybjsKKyAgICB9
CisKKyAgICBhb2Rldi0+YWN0aW9uID0gREVWSUNFX1VOUFJFUEFSRTsKKyAgICBhb2Rldi0+aG90
cGx1Z192ZXJzaW9uID0gZGlzay0+aG90cGx1Z192ZXJzaW9uOworICAgIGxpYnhsX19kZXZpY2Vf
aG90cGx1ZyhlZ2MsIGFvZGV2KTsKKyAgICByZXR1cm47CisKK2Vycm9yOgorICAgIGFzc2VydChy
Yyk7CisgICAgYW9kZXYtPnJjID0gcmM7CisgICAgYW9kZXYtPmNhbGxiYWNrKGVnYywgYW9kZXYp
OworICAgIHJldHVybjsKK30KKwogLyogU3BlY2lmaWMgZnVuY3Rpb24gY2FsbGVkIGRpcmVjdGx5
IG9ubHkgYnkgbG9jYWwgZGlzayBhdHRhY2gsCiAgKiBhbGwgb3RoZXIgdXNlcnMgc2hvdWxkIGlu
c3RlYWQgdXNlIHRoZSByZWd1bGFyCiAgKiBsaWJ4bF9fZGV2aWNlX2Rpc2tfYWRkIHdyYXBwZXIK
QEAgLTIwNTgsOCArMjE2MCwxNSBAQCBzdGF0aWMgdm9pZCBkZXZpY2VfZGlza19hZGQobGlieGxf
X2VnYyAqZWdjLCB1aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICBkZXYgPSBkaXNrLT5w
ZGV2X3BhdGg7CiAKICAgICAgICAgZG9fYmFja2VuZF9waHk6Ci0gICAgICAgICAgICAgICAgZmxl
eGFycmF5X2FwcGVuZChiYWNrLCAicGFyYW1zIik7Ci0gICAgICAgICAgICAgICAgZmxleGFycmF5
X2FwcGVuZChiYWNrLCBkZXYpOworICAgICAgICAgICAgICAgIGlmIChkaXNrLT5ob3RwbHVnX3Zl
cnNpb24gPT0gMCkgeworICAgICAgICAgICAgICAgICAgICAvKgorICAgICAgICAgICAgICAgICAg
ICAgKiBJZiB0aGUgbmV3IGhvdHBsdWcgdmVyc2lvbiBpcyB1c2VkIHBhcmFtcyBpcworICAgICAg
ICAgICAgICAgICAgICAgKiBzdG9yZWQgdW5kZXIgYSBwcml2YXRlIHBhdGgsIHNpbmNlIGl0IGNh
biBjb250YWluCisgICAgICAgICAgICAgICAgICAgICAqIGRhdGEgdGhhdCB0aGUgZ3Vlc3Qgc2hv
dWxkIG5vdCBzZWUuCisgICAgICAgICAgICAgICAgICAgICAqLworICAgICAgICAgICAgICAgICAg
ICBmbGV4YXJyYXlfYXBwZW5kKGJhY2ssICJwYXJhbXMiKTsKKyAgICAgICAgICAgICAgICAgICAg
ZmxleGFycmF5X2FwcGVuZChiYWNrLCBkZXYpOworICAgICAgICAgICAgICAgIH0KIAogICAgICAg
ICAgICAgICAgIHNjcmlwdCA9IGxpYnhsX19hYnNfcGF0aChnYywgZGlzay0+c2NyaXB0PzogImJs
b2NrIiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX3hl
bl9zY3JpcHRfZGlyX3BhdGgoKSk7CkBAIC0yMTUyLDYgKzIyNjEsNyBAQCBzdGF0aWMgdm9pZCBk
ZXZpY2VfZGlza19hZGQobGlieGxfX2VnYyAqZWdjLCB1aW50MzJfdCBkb21pZCwKIAogICAgIGFv
ZGV2LT5kZXYgPSBkZXZpY2U7CiAgICAgYW9kZXYtPmFjdGlvbiA9IERFVklDRV9DT05ORUNUOwor
ICAgIGFvZGV2LT5ob3RwbHVnX3ZlcnNpb24gPSBkaXNrLT5ob3RwbHVnX3ZlcnNpb247CiAgICAg
bGlieGxfX3dhaXRfZGV2aWNlX2Nvbm5lY3Rpb24oZWdjLCBhb2Rldik7CiAKICAgICByYyA9IDA7
CkBAIC0yMjQ1LDYgKzIzNTUsNyBAQCBpbnQgbGlieGxfdmRldl90b19kZXZpY2VfZGlzayhsaWJ4
bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsCiAgICAgR0NfSU5JVChjdHgpOwogICAgIGNoYXIg
KmRvbXBhdGgsICpwYXRoOwogICAgIGludCBkZXZpZCA9IGxpYnhsX19kZXZpY2VfZGlza19kZXZf
bnVtYmVyKHZkZXYsIE5VTEwsIE5VTEwpOworICAgIGxpYnhsX19kZXZpY2UgZGV2OwogICAgIGlu
dCByYyA9IEVSUk9SX0ZBSUw7CiAKICAgICBpZiAoZGV2aWQgPCAwKQpAQCAtMjI2Myw2ICsyMzc0
LDIyIEBAIGludCBsaWJ4bF92ZGV2X3RvX2RldmljZV9kaXNrKGxpYnhsX2N0eCAqY3R4LCB1aW50
MzJfdCBkb21pZCwKICAgICAgICAgZ290byBvdXQ7CiAKICAgICByYyA9IGxpYnhsX19kZXZpY2Vf
ZGlza19mcm9tX3hzX2JlKGdjLCBwYXRoLCBkaXNrKTsKKyAgICBpZiAocmMpIHsKKyAgICAgICAg
TE9HKEVSUk9SLCAidW5hYmxlIHRvIHBhcnNlIGRpc2sgZGV2aWNlIGZyb20gcGF0aCAlcyIsIHBh
dGgpOworICAgICAgICBnb3RvIG91dDsKKyAgICB9CisKKyAgICAvKiBDaGVjayBpZiB0aGUgZGV2
aWNlIGlzIHVzaW5nIHRoZSBuZXcgaG90cGx1ZyBpbnRlcmZhY2UgKi8KKyAgICByYyA9IGxpYnhs
X19kZXZpY2VfZnJvbV9kaXNrKGdjLCBkb21pZCwgZGlzaywgJmRldik7CisgICAgaWYgKHJjKSB7
CisgICAgICAgIExPRyhFUlJPUiwgImludmFsaWQgb3IgdW5zdXBwb3J0ZWQgdmlydHVhbCBkaXNr
IGlkZW50aWZpZXIgJXMiLAorICAgICAgICAgICAgICAgICAgICBkaXNrLT52ZGV2KTsKKyAgICAg
ICAgZ290byBvdXQ7CisgICAgfQorICAgIHBhdGggPSBsaWJ4bF9fZGV2aWNlX3hzX2hvdHBsdWdf
cGF0aChnYywgJmRldik7CisgICAgaWYgKGxpYnhsX194c19yZWFkKGdjLCBYQlRfTlVMTCwgcGF0
aCkpCisgICAgICAgIGRpc2stPmhvdHBsdWdfdmVyc2lvbiA9IDE7CisKIG91dDoKICAgICBHQ19G
UkVFOwogICAgIHJldHVybiByYzsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2NyZWF0
ZS5jIGIvdG9vbHMvbGlieGwvbGlieGxfY3JlYXRlLmMKaW5kZXggOWQyMDA4Ni4uYzE3Njk2NyAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfY3JlYXRlLmMKKysrIGIvdG9vbHMvbGlieGwv
bGlieGxfY3JlYXRlLmMKQEAgLTU5MSw2ICs1OTEsMTAgQEAgc3RhdGljIGludCBzdG9yZV9saWJ4
bF9lbnRyeShsaWJ4bF9fZ2MgKmdjLCB1aW50MzJfdCBkb21pZCwKICAqLwogCiAvKiBFdmVudCBj
YWxsYmFja3MsIGluIHRoaXMgb3JkZXI6ICovCisKK3N0YXRpYyB2b2lkIGRvbWNyZWF0ZV9sYXVu
Y2hfYm9vdGxvYWRlcihsaWJ4bF9fZWdjICplZ2MsCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgbGlieGxfX211bHRpZGV2ICptdWx0aWRldiwKKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgcmV0KTsKIHN0YXRpYyB2b2lkIGRvbWNyZWF0
ZV9kZXZtb2RlbF9zdGFydGVkKGxpYnhsX19lZ2MgKmVnYywKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGxpYnhsX19kbV9zcGF3bl9zdGF0ZSAqZG1zcywKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCByYyk7CkBAIC02MjcsNiArNjMxLDEy
IEBAIHN0YXRpYyB2b2lkIGRvbWNyZWF0ZV9kZXN0cnVjdGlvbl9jYihsaWJ4bF9fZWdjICplZ2Ms
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2RvbWFpbl9kZXN0
cm95X3N0YXRlICpkZHMsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50
IHJjKTsKIAorLyogSWYgY3JlYXRpb24gaXMgbm90IHN1Y2Nlc3NmdWwsIHRoaXMgY2FsbGJhY2sg
d2lsbCBiZSBleGVjdXRlZAorICogd2hlbiBkZXZpY2VzIGhhdmUgYmVlbiB1bnByZXBhcmVkICov
CitzdGF0aWMgdm9pZCBkb21jcmVhdGVfdW5wcmVwYXJlX2NiKGxpYnhsX19lZ2MgKmVnYywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX211bHRpZGV2ICptdWx0aWRl
diwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IHJldCk7CisKIHN0YXRp
YyB2b2lkIGluaXRpYXRlX2RvbWFpbl9jcmVhdGUobGlieGxfX2VnYyAqZWdjLAogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fZG9tYWluX2NyZWF0ZV9zdGF0ZSAqZGNz
KQogewpAQCAtNjM3LDcgKzY0Nyw2IEBAIHN0YXRpYyB2b2lkIGluaXRpYXRlX2RvbWFpbl9jcmVh
dGUobGlieGxfX2VnYyAqZWdjLAogCiAgICAgLyogY29udmVuaWVuY2UgYWxpYXNlcyAqLwogICAg
IGxpYnhsX2RvbWFpbl9jb25maWcgKmNvbnN0IGRfY29uZmlnID0gZGNzLT5ndWVzdF9jb25maWc7
Ci0gICAgY29uc3QgaW50IHJlc3RvcmVfZmQgPSBkY3MtPnJlc3RvcmVfZmQ7CiAgICAgbWVtc2V0
KCZkY3MtPmJ1aWxkX3N0YXRlLCAwLCBzaXplb2YoZGNzLT5idWlsZF9zdGF0ZSkpOwogCiAgICAg
ZG9taWQgPSAwOwpAQCAtNjcwLDYgKzY3OSwzMyBAQCBzdGF0aWMgdm9pZCBpbml0aWF0ZV9kb21h
aW5fY3JlYXRlKGxpYnhsX19lZ2MgKmVnYywKICAgICAgICAgaWYgKHJldCkgZ290byBlcnJvcl9v
dXQ7CiAgICAgfQogCisgICAgbGlieGxfX211bHRpZGV2X2JlZ2luKGFvLCAmZGNzLT5tdWx0aWRl
dik7CisgICAgZGNzLT5tdWx0aWRldi5jYWxsYmFjayA9IGRvbWNyZWF0ZV9sYXVuY2hfYm9vdGxv
YWRlcjsKKyAgICBsaWJ4bF9fcHJlcGFyZV9kaXNrcyhlZ2MsIGFvLCBkb21pZCwgZF9jb25maWcs
ICZkY3MtPm11bHRpZGV2KTsKKyAgICBsaWJ4bF9fbXVsdGlkZXZfcHJlcGFyZWQoZWdjLCAmZGNz
LT5tdWx0aWRldiwgMCk7CisgICAgcmV0dXJuOworCitlcnJvcl9vdXQ6CisgICAgYXNzZXJ0KHJl
dCk7CisgICAgZG9tY3JlYXRlX2NvbXBsZXRlKGVnYywgZGNzLCByZXQpOworfQorCitzdGF0aWMg
dm9pZCBkb21jcmVhdGVfbGF1bmNoX2Jvb3Rsb2FkZXIobGlieGxfX2VnYyAqZWdjLAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19tdWx0aWRldiAqbXVsdGlk
ZXYsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IHJldCkKK3sK
KyAgICBsaWJ4bF9fZG9tYWluX2NyZWF0ZV9zdGF0ZSAqZGNzID0gQ09OVEFJTkVSX09GKG11bHRp
ZGV2LCAqZGNzLCBtdWx0aWRldik7CisgICAgU1RBVEVfQU9fR0MoZGNzLT5hbyk7CisKKyAgICAv
KiBjb252ZW5pZW5jZSBhbGlhc2VzICovCisgICAgY29uc3QgaW50IHJlc3RvcmVfZmQgPSBkY3Mt
PnJlc3RvcmVfZmQ7CisgICAgbGlieGxfZG9tYWluX2NvbmZpZyAqY29uc3QgZF9jb25maWcgPSBk
Y3MtPmd1ZXN0X2NvbmZpZzsKKworICAgIGlmIChyZXQpIHsKKyAgICAgICAgTE9HKEVSUk9SLCAi
dW5hYmxlIHRvIHByZXBhcmUgZGV2aWNlcyIpOworICAgICAgICBnb3RvIGVycm9yX291dDsKKyAg
ICB9CisKICAgICBkY3MtPmJsLmFvID0gYW87CiAgICAgbGlieGxfZGV2aWNlX2Rpc2sgKmJvb3Rk
aXNrID0KICAgICAgICAgZF9jb25maWctPm51bV9kaXNrcyA+IDAgPyAmZF9jb25maWctPmRpc2tz
WzBdIDogTlVMTDsKQEAgLTEyMDIsMTEgKzEyMzgsMzUgQEAgc3RhdGljIHZvaWQgZG9tY3JlYXRl
X2Rlc3RydWN0aW9uX2NiKGxpYnhsX19lZ2MgKmVnYywKIHsKICAgICBTVEFURV9BT19HQyhkZHMt
PmFvKTsKICAgICBsaWJ4bF9fZG9tYWluX2NyZWF0ZV9zdGF0ZSAqZGNzID0gQ09OVEFJTkVSX09G
KGRkcywgKmRjcywgZGRzKTsKKyAgICB1aW50MzJfdCBkb21pZCA9IGRjcy0+Z3Vlc3RfZG9taWQ7
CisgICAgbGlieGxfZG9tYWluX2NvbmZpZyAqY29uc3QgZF9jb25maWcgPSBkY3MtPmd1ZXN0X2Nv
bmZpZzsKIAogICAgIGlmIChyYykKICAgICAgICAgTE9HKEVSUk9SLCAidW5hYmxlIHRvIGRlc3Ry
b3kgZG9tYWluICV1IGZvbGxvd2luZyBmYWlsZWQgY3JlYXRpb24iLAogICAgICAgICAgICAgICAg
ICAgIGRkcy0+ZG9taWQpOwogCisgICAgLyoKKyAgICAgKiBXZSBtaWdodCBoYXZlIGRldmljZXMg
dGhhdCBoYXZlIGJlZW4gcHJlcGFyZWQsIGJ1dCB3aXRoIG5vCisgICAgICogZnJvbnRlbmQgeGVu
c3RvcmUgZW50cmllcywgc28gZG9tYWluIGRlc3RydWN0aW9uIGZhaWxzIHRvCisgICAgICogZmlu
ZCB0aGVtLCB0aGF0IGlzIHdoeSB3ZSBoYXZlIHRvIHVucHJlcGFyZSB0aGVtIG1hbnVhbGx5Lgor
ICAgICAqLworICAgIGxpYnhsX19tdWx0aWRldl9iZWdpbihhbywgJmRjcy0+bXVsdGlkZXYpOwor
ICAgIGRjcy0+bXVsdGlkZXYuY2FsbGJhY2sgPSBkb21jcmVhdGVfdW5wcmVwYXJlX2NiOworICAg
IGxpYnhsX191bnByZXBhcmVfZGlza3MoZWdjLCBhbywgZG9taWQsIGRfY29uZmlnLCAmZGNzLT5t
dWx0aWRldik7CisgICAgbGlieGxfX211bHRpZGV2X3ByZXBhcmVkKGVnYywgJmRjcy0+bXVsdGlk
ZXYsIDApOworICAgIHJldHVybjsKK30KKworc3RhdGljIHZvaWQgZG9tY3JlYXRlX3VucHJlcGFy
ZV9jYihsaWJ4bF9fZWdjICplZ2MsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGxpYnhsX19tdWx0aWRldiAqbXVsdGlkZXYsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIGludCByZXQpCit7CisgICAgbGlieGxfX2RvbWFpbl9jcmVhdGVfc3RhdGUgKmRjcyA9
IENPTlRBSU5FUl9PRihtdWx0aWRldiwgKmRjcywgbXVsdGlkZXYpOworICAgIFNUQVRFX0FPX0dD
KGRjcy0+YW8pOworCisgICAgaWYgKHJldCkKKyAgICAgICAgTE9HKEVSUk9SLCAidW5hYmxlIHRv
IHVucHJlcGFyZSBkZXZpY2VzIik7CisKICAgICBkY3MtPmNhbGxiYWNrKGVnYywgZGNzLCBFUlJP
Ul9GQUlMLCBkY3MtPmd1ZXN0X2RvbWlkKTsKIH0KIApkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwv
bGlieGxfZGV2aWNlLmMgYi90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwppbmRleCA4NWM5OTUz
Li5jZTk5ZDk5IDEwMDY0NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYworKysgYi90
b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwpAQCAtNTIxLDE4ICs1MjEsMjEgQEAgdm9pZCBsaWJ4
bF9fbXVsdGlkZXZfcHJlcGFyZWQobGlieGxfX2VnYyAqZWdjLAogCiAvKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqLwogCi0vKiBNYWNybyBmb3IgZGVmaW5pbmcgdGhlIGZ1bmN0aW9ucyB0aGF0IHdpbGwg
YWRkIGEgYnVuY2ggb2YgZGlza3Mgd2hlbgorLyogTWFjcm8gZm9yIGRlZmluaW5nIHRoZSBmdW5j
dGlvbnMgdGhhdCB3aWxsIG9wZXJhdGUgYSBidW5jaCBvZiBkZXZpY2VzIHdoZW4KICAqIGluc2lk
ZSBhbiBhc3luYyBvcCB3aXRoIG11bHRpZGV2LgogICogVGhpcyBtYWNybyBpcyBhZGRlZCB0byBw
cmV2ZW50IHJlcGV0aXRpb24gb2YgY29kZS4KICAqCiAgKiBUaGUgZm9sbG93aW5nIGZ1bmN0aW9u
cyBhcmUgZGVmaW5lZDoKKyAqIGxpYnhsX19wcmVwYXJlX2Rpc2tzCisgKiBsaWJ4bF9fdW5wcmVw
YXJlX2Rpc2tzCiAgKiBsaWJ4bF9fYWRkX2Rpc2tzCiAgKiBsaWJ4bF9fYWRkX25pY3MKICAqIGxp
YnhsX19hZGRfdnRwbXMKICAqLwogCi0jZGVmaW5lIERFRklORV9ERVZJQ0VTX0FERCh0eXBlKSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgdm9pZCBsaWJ4bF9f
YWRkXyMjdHlwZSMjcyhsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hbyAqYW8sIHVpbnQzMl90IGRv
bWlkLCBcCisjZGVmaW5lIERFRklORV9ERVZJQ0VTX0ZVTkModHlwZSwgb3ApICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgdm9pZCBsaWJ4bF9fIyNvcCMjXyMjdHlwZSMj
cyhsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hbyAqYW8sICAgICAgICBcCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICB1aW50MzJfdCBkb21pZCwgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fY29uZmlnICpk
X2NvbmZpZywgICAgICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4
bF9fbXVsdGlkZXYgKm11bHRpZGV2KSAgICAgICAgICAgICAgICBcCiAgICAgeyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CkBAIC01NDAsMTYgKzU0MywxOSBAQCB2b2lkIGxpYnhsX19tdWx0aWRldl9wcmVwYXJlZChsaWJ4
bF9fZWdjICplZ2MsCiAgICAgICAgIGludCBpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgIGZvciAoaSA9IDA7IGkgPCBk
X2NvbmZpZy0+bnVtXyMjdHlwZSMjczsgaSsrKSB7ICAgICAgICAgICAgICAgICBcCiAgICAgICAg
ICAgICBsaWJ4bF9fYW9fZGV2aWNlICphb2RldiA9IGxpYnhsX19tdWx0aWRldl9wcmVwYXJlKG11
bHRpZGV2KTsgIFwKLSAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfIyN0eXBlIyNfYWRkKGVnYywg
ZG9taWQsICZkX2NvbmZpZy0+dHlwZSMjc1tpXSwgXAorICAgICAgICAgICAgbGlieGxfX2Rldmlj
ZV8jI3R5cGUjI18jI29wKGVnYywgZG9taWQsICZkX2NvbmZpZy0+dHlwZSMjc1tpXSwgXAogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYW9kZXYpOyAgICAgICAgICAgICAg
ICAgICAgICAgICAgXAogICAgICAgICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgIH0KIAotREVGSU5FX0RFVklDRVNf
QUREKGRpc2spCi1ERUZJTkVfREVWSUNFU19BREQobmljKQotREVGSU5FX0RFVklDRVNfQUREKHZ0
cG0pCitERUZJTkVfREVWSUNFU19GVU5DKGRpc2ssIGFkZCkKK0RFRklORV9ERVZJQ0VTX0ZVTkMo
bmljLCBhZGQpCitERUZJTkVfREVWSUNFU19GVU5DKHZ0cG0sIGFkZCkKIAotI3VuZGVmIERFRklO
RV9ERVZJQ0VTX0FERAorREVGSU5FX0RFVklDRVNfRlVOQyhkaXNrLCBwcmVwYXJlKQorREVGSU5F
X0RFVklDRVNfRlVOQyhkaXNrLCB1bnByZXBhcmUpCisKKyN1bmRlZiBERUZJTkVfREVWSUNFU19G
VU5DCiAKIC8qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKiovCiAKQEAgLTU1Nyw2ICs1NjMsNyBAQCBpbnQg
bGlieGxfX2RldmljZV9kZXN0cm95KGxpYnhsX19nYyAqZ2MsIGxpYnhsX19kZXZpY2UgKmRldikK
IHsKICAgICBjb25zdCBjaGFyICpiZV9wYXRoID0gbGlieGxfX2RldmljZV9iYWNrZW5kX3BhdGgo
Z2MsIGRldik7CiAgICAgY29uc3QgY2hhciAqZmVfcGF0aCA9IGxpYnhsX19kZXZpY2VfZnJvbnRl
bmRfcGF0aChnYywgZGV2KTsKKyAgICBjb25zdCBjaGFyICpob3RwbHVnX3BhdGggPSBsaWJ4bF9f
ZGV2aWNlX3hzX2hvdHBsdWdfcGF0aChnYywgZGV2KTsKICAgICBjb25zdCBjaGFyICp0YXBkaXNr
X3BhdGggPSBHQ1NQUklOVEYoIiVzLyVzIiwgYmVfcGF0aCwgInRhcGRpc2stcGFyYW1zIik7CiAg
ICAgY29uc3QgY2hhciAqdGFwZGlza19wYXJhbXM7CiAgICAgeHNfdHJhbnNhY3Rpb25fdCB0ID0g
MDsKQEAgLTU3Miw2ICs1NzksNyBAQCBpbnQgbGlieGxfX2RldmljZV9kZXN0cm95KGxpYnhsX19n
YyAqZ2MsIGxpYnhsX19kZXZpY2UgKmRldikKIAogICAgICAgICBsaWJ4bF9feHNfcGF0aF9jbGVh
bnVwKGdjLCB0LCBmZV9wYXRoKTsKICAgICAgICAgbGlieGxfX3hzX3BhdGhfY2xlYW51cChnYywg
dCwgYmVfcGF0aCk7CisgICAgICAgIGxpYnhsX194c19wYXRoX2NsZWFudXAoZ2MsIHQsIGhvdHBs
dWdfcGF0aCk7CiAKICAgICAgICAgcmMgPSBsaWJ4bF9feHNfdHJhbnNhY3Rpb25fY29tbWl0KGdj
LCAmdCk7CiAgICAgICAgIGlmICghcmMpIGJyZWFrOwpAQCAtNjQ0LDYgKzY1MiwxNCBAQCB2b2lk
IGxpYnhsX19kZXZpY2VzX2Rlc3Ryb3kobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fZGV2aWNlc19y
ZW1vdmVfc3RhdGUgKmRycykKICAgICAgICAgICAgICAgICAgICAgY29udGludWU7CiAgICAgICAg
ICAgICAgICAgfQogICAgICAgICAgICAgICAgIGFvZGV2ID0gbGlieGxfX211bHRpZGV2X3ByZXBh
cmUobXVsdGlkZXYpOworICAgICAgICAgICAgICAgIC8qCisgICAgICAgICAgICAgICAgICogQ2hl
Y2sgaWYgdGhlIGRldmljZSBoYXMgaG90cGx1ZyBlbnRyaWVzIGluIHhlbnN0b3JlLAorICAgICAg
ICAgICAgICAgICAqIHdoaWNoIHdvdWxkIG1lYW4gaXQncyB1c2luZyB0aGUgbmV3IGhvdHBsdWcg
Y2FsbGluZworICAgICAgICAgICAgICAgICAqIGNvbnZlbnRpb24uCisgICAgICAgICAgICAgICAg
ICovCisgICAgICAgICAgICAgICAgcGF0aCA9IGxpYnhsX19kZXZpY2VfeHNfaG90cGx1Z19wYXRo
KGdjLCBkZXYpOworICAgICAgICAgICAgICAgIGlmIChsaWJ4bF9feHNfcmVhZChnYywgWEJUX05V
TEwsIHBhdGgpKQorICAgICAgICAgICAgICAgICAgICBhb2Rldi0+aG90cGx1Z192ZXJzaW9uID0g
MTsKICAgICAgICAgICAgICAgICBhb2Rldi0+YWN0aW9uID0gREVWSUNFX0RJU0NPTk5FQ1Q7CiAg
ICAgICAgICAgICAgICAgYW9kZXYtPmRldiA9IGRldjsKICAgICAgICAgICAgICAgICBhb2Rldi0+
Zm9yY2UgPSBkcnMtPmZvcmNlOwpAQCAtNjk2LDggKzcxMiw2IEBAIHN0YXRpYyB2b2lkIGRldmlj
ZV9iYWNrZW5kX2NhbGxiYWNrKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2V2X2RldnN0YXRlICpk
cywKIHN0YXRpYyB2b2lkIGRldmljZV9iYWNrZW5kX2NsZWFudXAobGlieGxfX2djICpnYywKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYp
OwogCi1zdGF0aWMgdm9pZCBkZXZpY2VfaG90cGx1ZyhsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19h
b19kZXZpY2UgKmFvZGV2KTsKLQogc3RhdGljIHZvaWQgZGV2aWNlX2hvdHBsdWdfdGltZW91dF9j
YihsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19ldl90aW1lICpldiwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IHRpbWV2YWwgKnJlcXVlc3RlZF9hYnMp
OwogCkBAIC03MzIsNyArNzQ2LDcgQEAgdm9pZCBsaWJ4bF9fd2FpdF9kZXZpY2VfY29ubmVjdGlv
bihsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hb19kZXZpY2UgKmFvZGV2KQogICAgICAgICAgKiBJ
ZiBRZW11IGlzIHJ1bm5pbmcsIGl0IHdpbGwgc2V0IHRoZSBzdGF0ZSBvZiB0aGUgZGV2aWNlIHRv
CiAgICAgICAgICAqIDQgZGlyZWN0bHksIHdpdGhvdXQgd2FpdGluZyBpbiBzdGF0ZSAyIGZvciBh
bnkgaG90cGx1ZyBleGVjdXRpb24uCiAgICAgICAgICAqLwotICAgICAgICBkZXZpY2VfaG90cGx1
ZyhlZ2MsIGFvZGV2KTsKKyAgICAgICAgbGlieGxfX2RldmljZV9ob3RwbHVnKGVnYywgYW9kZXYp
OwogICAgICAgICByZXR1cm47CiAgICAgfQogCkBAIC04NjQsNyArODc4LDcgQEAgc3RhdGljIHZv
aWQgZGV2aWNlX3FlbXVfdGltZW91dChsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19ldl90aW1lICpl
diwKICAgICByYyA9IGxpYnhsX194c193cml0ZV9jaGVja2VkKGdjLCBYQlRfTlVMTCwgc3RhdGVf
cGF0aCwgIjYiKTsKICAgICBpZiAocmMpIGdvdG8gb3V0OwogCi0gICAgZGV2aWNlX2hvdHBsdWco
ZWdjLCBhb2Rldik7CisgICAgbGlieGxfX2RldmljZV9ob3RwbHVnKGVnYywgYW9kZXYpOwogICAg
IHJldHVybjsKIAogb3V0OgpAQCAtODkzLDcgKzkwNyw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9i
YWNrZW5kX2NhbGxiYWNrKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2V2X2RldnN0YXRlICpkcywK
ICAgICAgICAgZ290byBvdXQ7CiAgICAgfQogCi0gICAgZGV2aWNlX2hvdHBsdWcoZWdjLCBhb2Rl
dik7CisgICAgbGlieGxfX2RldmljZV9ob3RwbHVnKGVnYywgYW9kZXYpOwogICAgIHJldHVybjsK
IAogb3V0OgpAQCAtOTA4LDcgKzkyMiw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9iYWNrZW5kX2Ns
ZWFudXAobGlieGxfX2djICpnYywgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpCiAgICAgbGlieGxf
X2V2X2RldnN0YXRlX2NhbmNlbChnYywgJmFvZGV2LT5iYWNrZW5kX2RzKTsKIH0KIAotc3RhdGlj
IHZvaWQgZGV2aWNlX2hvdHBsdWcobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW9fZGV2aWNlICph
b2RldikKK3ZvaWQgbGlieGxfX2RldmljZV9ob3RwbHVnKGxpYnhsX19lZ2MgKmVnYywgbGlieGxf
X2FvX2RldmljZSAqYW9kZXYpCiB7CiAgICAgU1RBVEVfQU9fR0MoYW9kZXYtPmFvKTsKICAgICBj
aGFyICpiZV9wYXRoID0gbGlieGxfX2RldmljZV9iYWNrZW5kX3BhdGgoZ2MsIGFvZGV2LT5kZXYp
OwpAQCAtMTAxMiwxMCArMTAyNiwxMyBAQCBzdGF0aWMgdm9pZCBkZXZpY2VfaG90cGx1Z19jaGls
ZF9kZWF0aF9jYihsaWJ4bF9fZWdjICplZ2MsCiAgICAgICAgIGlmIChob3RwbHVnX2Vycm9yKQog
ICAgICAgICAgICAgTE9HKEVSUk9SLCAic2NyaXB0OiAlcyIsIGhvdHBsdWdfZXJyb3IpOwogICAg
ICAgICBhb2Rldi0+cmMgPSBFUlJPUl9GQUlMOwotICAgICAgICBpZiAoYW9kZXYtPmFjdGlvbiA9
PSBERVZJQ0VfQ09OTkVDVCkKKyAgICAgICAgaWYgKGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX0NP
Tk5FQ1QgfHwKKyAgICAgICAgICAgIGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX1BSRVBBUkUgfHwK
KyAgICAgICAgICAgIGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX0xPQ0FMQVRUQUNIKQogICAgICAg
ICAgICAgLyoKLSAgICAgICAgICAgICAqIE9ubHkgZmFpbCBvbiBkZXZpY2UgY29ubmVjdGlvbiwg
b24gZGlzY29ubmVjdGlvbgotICAgICAgICAgICAgICogaWdub3JlIGVycm9yLCBhbmQgY29udGlu
dWUgd2l0aCB0aGUgcmVtb3ZlIHByb2Nlc3MKKyAgICAgICAgICAgICAqIE9ubHkgZmFpbCBvbiBk
ZXZpY2UgY29ubmVjdGlvbiwgcHJlcGFyZSBvciBsb2NhbGF0dGFjaCwKKyAgICAgICAgICAgICAq
IG9uIG90aGVyIGNhc2VzIGlnbm9yZSBlcnJvciwgYW5kIGNvbnRpbnVlIHdpdGggdGhlIHJlbW92
ZQorICAgICAgICAgICAgICogcHJvY2Vzcy4KICAgICAgICAgICAgICAqLwogICAgICAgICAgICAg
IGdvdG8gZXJyb3I7CiAgICAgfQpAQCAtMTAyNSw3ICsxMDQyLDcgQEAgc3RhdGljIHZvaWQgZGV2
aWNlX2hvdHBsdWdfY2hpbGRfZGVhdGhfY2IobGlieGxfX2VnYyAqZWdjLAogICAgICAqIGRldmlj
ZV9ob3RwbHVnX2RvbmUgYnJlYWtpbmcgdGhlIGxvb3AuCiAgICAgICovCiAgICAgYW9kZXYtPm51
bV9leGVjKys7Ci0gICAgZGV2aWNlX2hvdHBsdWcoZWdjLCBhb2Rldik7CisgICAgbGlieGxfX2Rl
dmljZV9ob3RwbHVnKGVnYywgYW9kZXYpOwogCiAgICAgcmV0dXJuOwogCkBAIC0xMDQxLDggKzEw
NTgsMzMgQEAgc3RhdGljIHZvaWQgZGV2aWNlX2hvdHBsdWdfZG9uZShsaWJ4bF9fZWdjICplZ2Ms
IGxpYnhsX19hb19kZXZpY2UgKmFvZGV2KQogCiAgICAgZGV2aWNlX2hvdHBsdWdfY2xlYW4oZ2Ms
IGFvZGV2KTsKIAotICAgIC8qIENsZWFuIHhlbnN0b3JlIGlmIGl0J3MgYSBkaXNjb25uZWN0aW9u
ICovCiAgICAgaWYgKGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX0RJU0NPTk5FQ1QpIHsKKyAgICAg
ICAgc3dpdGNoIChhb2Rldi0+aG90cGx1Z192ZXJzaW9uKSB7CisgICAgICAgIGNhc2UgMDoKKyAg
ICAgICAgICAgIC8qIENsZWFuIHhlbnN0b3JlIGlmIGl0J3MgYSBkaXNjb25uZWN0aW9uICovCisg
ICAgICAgICAgICByYyA9IGxpYnhsX19kZXZpY2VfZGVzdHJveShnYywgYW9kZXYtPmRldik7Cisg
ICAgICAgICAgICBpZiAoIWFvZGV2LT5yYykKKyAgICAgICAgICAgICAgICBhb2Rldi0+cmMgPSBy
YzsKKyAgICAgICAgICAgIGJyZWFrOworICAgICAgICBjYXNlIDE6CisgICAgICAgICAgICAvKgor
ICAgICAgICAgICAgICogQ2hhaW4gdW5wcmVwYXJlIGhvdHBsdWcgZXhlY3V0aW9uCisgICAgICAg
ICAgICAgKiBhZnRlciBkaXNjb25uZWN0aW9uIG9mIGRldmljZS4KKyAgICAgICAgICAgICAqLwor
ICAgICAgICAgICAgYW9kZXYtPm51bV9leGVjID0gMDsKKyAgICAgICAgICAgIGFvZGV2LT5hY3Rp
b24gPSBERVZJQ0VfVU5QUkVQQVJFOworICAgICAgICAgICAgbGlieGxfX2RldmljZV9ob3RwbHVn
KGVnYywgYW9kZXYpOworICAgICAgICAgICAgcmV0dXJuOworICAgICAgICBkZWZhdWx0OgorICAg
ICAgICAgICAgTE9HKEVSUk9SLCAidW5rbm93biBob3RwbHVnIHNjcmlwdCB2ZXJzaW9uICglZCki
LAorICAgICAgICAgICAgICAgICAgICAgICBhb2Rldi0+aG90cGx1Z192ZXJzaW9uKTsKKyAgICAg
ICAgICAgIGlmICghYW9kZXYtPnJjKQorICAgICAgICAgICAgICAgIGFvZGV2LT5yYyA9IEVSUk9S
X0ZBSUw7CisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgfQorICAgIH0KKyAgICAvKiBDbGVh
biBob3RwbHVnIHhlbnN0b3JlIHBhdGggKi8KKyAgICBpZiAoYW9kZXYtPmFjdGlvbiA9PSBERVZJ
Q0VfVU5QUkVQQVJFKSB7CiAgICAgICAgIHJjID0gbGlieGxfX2RldmljZV9kZXN0cm95KGdjLCBh
b2Rldi0+ZGV2KTsKICAgICAgICAgaWYgKCFhb2Rldi0+cmMpCiAgICAgICAgICAgICBhb2Rldi0+
cmMgPSByYzsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmggYi90b29s
cy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCmluZGV4IGM0MWU2MDguLjE2OTA3ZmYgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmgKKysrIGIvdG9vbHMvbGlieGwvbGlieGxf
aW50ZXJuYWwuaApAQCAtMjAxOCw2ICsyMDE4LDEyIEBAIHN0cnVjdCBsaWJ4bF9fbXVsdGlkZXYg
ewogX2hpZGRlbiB2b2lkIGxpYnhsX19kZXZpY2VfZGlza19hZGQobGlieGxfX2VnYyAqZWdjLCB1
aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhs
X2RldmljZV9kaXNrICpkaXNrLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfX2FvX2RldmljZSAqYW9kZXYpOworX2hpZGRlbiB2b2lkIGxpYnhsX19kZXZpY2VfZGlz
a19wcmVwYXJlKGxpYnhsX19lZ2MgKmVnYywgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZGV2aWNlX2Rpc2sgKmRpc2ssCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9k
ZXYpOworX2hpZGRlbiB2b2lkIGxpYnhsX19kZXZpY2VfZGlza191bnByZXBhcmUobGlieGxfX2Vn
YyAqZWdjLCB1aW50MzJfdCBkb21pZCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGxpYnhsX2RldmljZV9kaXNrICpkaXNrLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpOwogCiAvKiBBTyBv
cGVyYXRpb24gdG8gY29ubmVjdCBhIG5pYyBkZXZpY2UgKi8KIF9oaWRkZW4gdm9pZCBsaWJ4bF9f
ZGV2aWNlX25pY19hZGQobGlieGxfX2VnYyAqZWdjLCB1aW50MzJfdCBkb21pZCwKQEAgLTIwOTQs
NiArMjEwMCwxOSBAQCBfaGlkZGVuIGludCBsaWJ4bF9fZ2V0X2hvdHBsdWdfc2NyaXB0X2luZm8o
bGlieGxfX2djICpnYywgbGlieGxfX2RldmljZSAqZGV2LAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfYWN0aW9uIGFjdGlvbiwKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgbnVtX2V4ZWMsIGludCBo
b3RwbHVnX3ZlcnNpb24pOwogCisvKgorICogbGlieGxfX2RldmljZV9ob3RwbHVnIHJ1bnMgdGhl
IGhvdHBsdWcgc2NyaXB0IGFzc29jaWF0ZWQKKyAqIHdpdGggdGhlIGRldmljZSBwYXNzZWQgaW4g
YW9kZXYtPmRldi4KKyAqCisgKiBUaGUgbGlieGxfX2FvX2RldmljZSBwYXNzZWQgdG8gdGhpcyBm
dW5jdGlvbiBzaG91bGQgYmUKKyAqIHByZXBhcmVkIHVzaW5nIGxpYnhsX19wcmVwYXJlX2FvX2Rl
dmljZSBwcmlvciB0byBjYWxsaW5nCisgKiB0aGlzIGZ1bmN0aW9uLgorICoKKyAqIE9uY2UgZmlu
aXNoZWQsIGFvZGV2LT5jYWxsYmFjayB3aWxsIGJlIGV4ZWN1dGVkLgorICovCitfaGlkZGVuIHZv
aWQgbGlieGxfX2RldmljZV9ob3RwbHVnKGxpYnhsX19lZ2MgKmVnYywKKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpOworCiAvKi0tLS0t
IGxvY2FsIGRpc2sgYXR0YWNoOiBhdHRhY2ggYSBkaXNrIGxvY2FsbHkgdG8gcnVuIHRoZSBib290
bG9hZGVyIC0tLS0tKi8KIAogdHlwZWRlZiBzdHJ1Y3QgbGlieGxfX2Rpc2tfbG9jYWxfc3RhdGUg
bGlieGxfX2Rpc2tfbG9jYWxfc3RhdGU7CkBAIC0yNDY3LDYgKzI0ODYsMjEgQEAgX2hpZGRlbiB2
b2lkIGxpYnhsX19hZGRfZGlza3MobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW8gKmFvLCB1aW50
MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RvbWFpbl9j
b25maWcgKmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX211
bHRpZGV2ICptdWx0aWRldik7CiAKK19oaWRkZW4gdm9pZCBsaWJ4bF9fcHJlcGFyZV9kaXNrcyhs
aWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hbyAqYW8sCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25maWcsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgbGlieGxfX211bHRpZGV2ICptdWx0aWRldik7CisKK19oaWRkZW4gdm9pZCBs
aWJ4bF9fcHJlcGFyZV91bmRpc2tzKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2FvICphbywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IGRvbWlkLAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25m
aWcsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fbXVsdGlkZXYg
Km11bHRpZGV2KTsKKworX2hpZGRlbiB2b2lkIGxpYnhsX191bnByZXBhcmVfZGlza3MobGlieGxf
X2VnYyAqZWdjLCBsaWJ4bF9fYW8gKmFvLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBsaWJ4bF9kb21haW5fY29uZmlnICpkX2NvbmZpZywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX19tdWx0aWRldiAqbXVsdGlkZXYpOworCiBfaGlkZGVuIHZvaWQg
bGlieGxfX2FkZF9uaWNzKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2FvICphbywgdWludDMyX3Qg
ZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RvbWFpbl9jb25maWcg
KmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fbXVsdGlkZXYg
Km11bHRpZGV2KTsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX3R5cGVzLmlkbCBiL3Rv
b2xzL2xpYnhsL2xpYnhsX3R5cGVzLmlkbAppbmRleCA3ZWFjNGE4Li4yNzJmZjU0IDEwMDY0NAot
LS0gYS90b29scy9saWJ4bC9saWJ4bF90eXBlcy5pZGwKKysrIGIvdG9vbHMvbGlieGwvbGlieGxf
dHlwZXMuaWRsCkBAIC0zNjYsNiArMzY2LDcgQEAgbGlieGxfZGV2aWNlX2Rpc2sgPSBTdHJ1Y3Qo
ImRldmljZV9kaXNrIiwgWwogICAgICgicmVtb3ZhYmxlIiwgaW50ZWdlciksCiAgICAgKCJyZWFk
d3JpdGUiLCBpbnRlZ2VyKSwKICAgICAoImlzX2Nkcm9tIiwgaW50ZWdlciksCisgICAgKCJob3Rw
bHVnX3ZlcnNpb24iLCBpbnRlZ2VyKSwKICAgICBdKQogCiBsaWJ4bF9kZXZpY2VfbmljID0gU3Ry
dWN0KCJkZXZpY2VfbmljIiwgWwpkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwvbGlieGx1X2Rpc2tf
bC5sIGIvdG9vbHMvbGlieGwvbGlieGx1X2Rpc2tfbC5sCmluZGV4IGJlZTE2YTEuLjQ0ODZjMWEg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsdV9kaXNrX2wubAorKysgYi90b29scy9saWJ4
bC9saWJ4bHVfZGlza19sLmwKQEAgLTE3Miw2ICsxNzIsOCBAQCBiYWNrZW5kdHlwZT1bXixdKiw/
IHsgU1RSSVAoJywnKTsgc2V0YmFja2VuZHR5cGUoRFBDLEZST01FUVVBTFMpOyB9CiAKIHZkZXY9
W14sXSosPwl7IFNUUklQKCcsJyk7IFNBVkVTVFJJTkcoInZkZXYiLCB2ZGV2LCBGUk9NRVFVQUxT
KTsgfQogc2NyaXB0PVteLF0qLD8JeyBTVFJJUCgnLCcpOyBTQVZFU1RSSU5HKCJzY3JpcHQiLCBz
Y3JpcHQsIEZST01FUVVBTFMpOyB9CittZXRob2Q9W14sXSosPwl7IFNUUklQKCcsJyk7IFNBVkVT
VFJJTkcoInNjcmlwdCIsIHNjcmlwdCwgRlJPTUVRVUFMUyk7CisJCSAgRFBDLT5kaXNrLT5ob3Rw
bHVnX3ZlcnNpb24gPSAxOyB9CiAKICAvKiB0aGUgdGFyZ2V0IG1hZ2ljIHBhcmFtZXRlciwgZWF0
cyB0aGUgcmVzdCBvZiB0aGUgc3RyaW5nICovCiAKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikK
CgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2
ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4u
b3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xY-0002E7-S4; Fri, 21 Dec 2012 17:00:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cp-Ky
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [193.109.254.147:4023] by server-2.bemta-14.messagelabs.com id
	D5/D5-30744-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!5
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11644 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309215"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:17 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:17 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:02 +0100
Message-ID: <1356109208-6830-5-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 04/10] libxl: add prepare/unprepare
	operations to the libxl public interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHB1YmxpYyBmdW5jdGlvbnMgZm9yIHRoZSBwcmVwYXJlL3VucHJlcGFyZSBkaXNrIG9wZXJh
dGlvbnMuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bC5jIHwgICAyNiArKysrKysrKysrKysrKystLS0t
LS0tLS0tLQogdG9vbHMvbGlieGwvbGlieGwuaCB8ICAgIDggKysrKysrKysKIDIgZmlsZXMgY2hh
bmdlZCwgMjMgaW5zZXJ0aW9ucygrKSwgMTEgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9v
bHMvbGlieGwvbGlieGwuYyBiL3Rvb2xzL2xpYnhsL2xpYnhsLmMKaW5kZXggZTE0MTM3OS4uODIw
MzRmMSAxMDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGwuYworKysgYi90b29scy9saWJ4bC9s
aWJ4bC5jCkBAIC0xNjg4LDE5ICsxNjg4LDE5IEBAIGludCBsaWJ4bF92bmN2aWV3ZXJfZXhlYyhs
aWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIGludCBhdXRvcGFzcykKIC8qKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKiovCiAKIC8qIGdlbmVyaWMgY2FsbGJhY2sgZm9yIGRldmljZXMgdGhhdCBvbmx5
IG5lZWQgdG8gc2V0IGFvX2NvbXBsZXRlICovCi1zdGF0aWMgdm9pZCBkZXZpY2VfYWRkcm1fYW9j
b21wbGV0ZShsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hb19kZXZpY2UgKmFvZGV2KQorc3RhdGlj
IHZvaWQgZGV2aWNlX2FvY29tcGxldGUobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW9fZGV2aWNl
ICphb2RldikKIHsKICAgICBTVEFURV9BT19HQyhhb2Rldi0+YW8pOwogCiAgICAgaWYgKGFvZGV2
LT5yYykgewogICAgICAgICBpZiAoYW9kZXYtPmRldikgewogICAgICAgICAgICAgTE9HKEVSUk9S
LCAidW5hYmxlIHRvICVzICVzIHdpdGggaWQgJXUiLAotICAgICAgICAgICAgICAgICAgICAgICAg
YW9kZXYtPmFjdGlvbiA9PSBERVZJQ0VfQ09OTkVDVCA/ICJhZGQiIDogInJlbW92ZSIsCisgICAg
ICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2hvdHBsdWdfYWN0aW9uKGdjLCBhb2Rl
di0+YWN0aW9uKSwKICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2Vfa2luZF90
b19zdHJpbmcoYW9kZXYtPmRldi0+a2luZCksCiAgICAgICAgICAgICAgICAgICAgICAgICBhb2Rl
di0+ZGV2LT5kZXZpZCk7CiAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgICBMT0coRVJST1Is
ICJ1bmFibGUgdG8gJXMgZGV2aWNlIiwKLSAgICAgICAgICAgICAgICAgICAgICAgYW9kZXYtPmFj
dGlvbiA9PSBERVZJQ0VfQ09OTkVDVCA/ICJhZGQiIDogInJlbW92ZSIpOworICAgICAgICAgICAg
ICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2hvdHBsdWdfYWN0aW9uKGdjLCBhb2Rldi0+YWN0aW9u
KSk7CiAgICAgICAgIH0KICAgICAgICAgZ290byBvdXQ7CiAgICAgfQpAQCAtMzQ4MCw3ICszNDgw
LDcgQEAgb3V0OgogICAgICAgICBsaWJ4bF9fcHJlcGFyZV9hb19kZXZpY2UoYW8sIGFvZGV2KTsg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBhb2Rldi0+YWN0aW9uID0gREVW
SUNFX0RJU0NPTk5FQ1Q7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBh
b2Rldi0+ZGV2ID0gZGV2aWNlOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAotICAgICAgICBhb2Rldi0+Y2FsbGJhY2sgPSBkZXZpY2VfYWRkcm1fYW9jb21wbGV0
ZTsgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBhb2Rldi0+Y2FsbGJhY2sgPSBkZXZp
Y2VfYW9jb21wbGV0ZTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBhb2Rl
di0+Zm9yY2UgPSBmOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAogICAgICAgICBsaWJ4bF9faW5pdGlhdGVfZGV2aWNlX3JlbW92ZShlZ2MsIGFvZGV2KTsg
ICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXApAQCAtMzUyMSwxMCArMzUy
MSwxMiBAQCBERUZJTkVfREVWSUNFX1JFTU9WRSh2dHBtLCBkZXN0cm95LCAxKQogICogbGlieGxf
ZGV2aWNlX2Rpc2tfYWRkCiAgKiBsaWJ4bF9kZXZpY2VfbmljX2FkZAogICogbGlieGxfZGV2aWNl
X3Z0cG1fYWRkCisgKiBsaWJ4bF9kZXZpY2VfZGlza19wcmVwYXJlCisgKiBsaWJ4bF9kZXZpY2Vf
ZGlza191bnByZXBhcmUKICAqLwogCi0jZGVmaW5lIERFRklORV9ERVZJQ0VfQUREKHR5cGUpICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgaW50IGxpYnhsX2Rl
dmljZV8jI3R5cGUjI19hZGQobGlieGxfY3R4ICpjdHgsICAgICAgICAgICAgICAgICAgICAgICBc
CisjZGVmaW5lIERFRklORV9ERVZJQ0VfRlVOQyh0eXBlLCBvcCkgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCisgICAgaW50IGxpYnhsX2RldmljZV8jI3R5cGUjI18jI29wKGxp
YnhsX2N0eCAqY3R4LCAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgIHVpbnQzMl90IGRv
bWlkLCBsaWJ4bF9kZXZpY2VfIyN0eXBlICp0eXBlLCAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgICAgIGNvbnN0IGxpYnhsX2FzeW5jb3BfaG93ICphb19ob3cpICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCiAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCkBAIC0zNTMzLDggKzM1MzUsOCBAQCBE
RUZJTkVfREVWSUNFX1JFTU9WRSh2dHBtLCBkZXN0cm95LCAxKQogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAog
ICAgICAgICBHQ05FVyhhb2Rldik7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgXAogICAgICAgICBsaWJ4bF9fcHJlcGFyZV9hb19kZXZpY2UoYW8sIGFv
ZGV2KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAotICAgICAgICBhb2Rldi0+Y2FsbGJh
Y2sgPSBkZXZpY2VfYWRkcm1fYW9jb21wbGV0ZTsgICAgICAgICAgICAgICAgICAgICAgXAotICAg
ICAgICBsaWJ4bF9fZGV2aWNlXyMjdHlwZSMjX2FkZChlZ2MsIGRvbWlkLCB0eXBlLCBhb2Rldik7
ICAgICAgICAgICAgXAorICAgICAgICBhb2Rldi0+Y2FsbGJhY2sgPSBkZXZpY2VfYW9jb21wbGV0
ZTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBsaWJ4bF9fZGV2aWNlXyMj
dHlwZSMjXyMjb3AoZWdjLCBkb21pZCwgdHlwZSwgYW9kZXYpOyAgICAgICAgICAgXAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgXAogICAgICAgICByZXR1cm4gQU9fSU5QUk9HUkVTUzsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgIH0KQEAgLTM1NDIsMTMgKzM1NDQsMTUg
QEAgREVGSU5FX0RFVklDRV9SRU1PVkUodnRwbSwgZGVzdHJveSwgMSkKIC8qIERlZmluZSBhbGxh
ZGQgZnVuY3Rpb25zIGFuZCB1bmRlZiB0aGUgbWFjcm8gKi8KIAogLyogZGlzayAqLwotREVGSU5F
X0RFVklDRV9BREQoZGlzaykKK0RFRklORV9ERVZJQ0VfRlVOQyhkaXNrLCBhZGQpCitERUZJTkVf
REVWSUNFX0ZVTkMoZGlzaywgcHJlcGFyZSkKK0RFRklORV9ERVZJQ0VfRlVOQyhkaXNrLCB1bnBy
ZXBhcmUpCiAKIC8qIG5pYyAqLwotREVGSU5FX0RFVklDRV9BREQobmljKQorREVGSU5FX0RFVklD
RV9GVU5DKG5pYywgYWRkKQogCiAvKiB2dHBtICovCi1ERUZJTkVfREVWSUNFX0FERCh2dHBtKQor
REVGSU5FX0RFVklDRV9GVU5DKHZ0cG0sIGFkZCkKIAogI3VuZGVmIERFRklORV9ERVZJQ0VfQURE
CiAKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsLmggYi90b29scy9saWJ4bC9saWJ4bC5o
CmluZGV4IGUyYmE1NDkuLmYzNDI2ZjcgMTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsLmgK
KysrIGIvdG9vbHMvbGlieGwvbGlieGwuaApAQCAtNzA3LDYgKzcwNywxMCBAQCB2b2lkIGxpYnhs
X3Z0cG1pbmZvX2xpc3RfZnJlZShsaWJ4bF92dHBtaW5mbyAqLCBpbnQgbnJfdnRwbXMpOwogICov
CiAKIC8qIERpc2tzICovCitpbnQgbGlieGxfZGV2aWNlX2Rpc2tfcHJlcGFyZShsaWJ4bF9jdHgg
KmN0eCwgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4
bF9kZXZpY2VfZGlzayAqZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0
IGxpYnhsX2FzeW5jb3BfaG93ICphb19ob3cpCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBMSUJYTF9FWFRFUk5BTF9DQUxMRVJTX09OTFk7CiBpbnQgbGlieGxfZGV2aWNlX2Rpc2tfYWRk
KGxpYnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfZGV2aWNlX2Rpc2sgKmRpc2ssCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGNv
bnN0IGxpYnhsX2FzeW5jb3BfaG93ICphb19ob3cpCkBAIC03MTksNiArNzIzLDEwIEBAIGludCBs
aWJ4bF9kZXZpY2VfZGlza19kZXN0cm95KGxpYnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwK
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RldmljZV9kaXNrICpkaXNrLAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgbGlieGxfYXN5bmNvcF9ob3cgKmFv
X2hvdykKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIExJQlhMX0VYVEVSTkFMX0NBTExF
UlNfT05MWTsKK2ludCBsaWJ4bF9kZXZpY2VfZGlza191bnByZXBhcmUobGlieGxfY3R4ICpjdHgs
IHVpbnQzMl90IGRvbWlkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9k
ZXZpY2VfZGlzayAqZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qg
bGlieGxfYXN5bmNvcF9ob3cgKmFvX2hvdykKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgTElCWExfRVhURVJOQUxfQ0FMTEVSU19PTkxZOwogCiBsaWJ4bF9kZXZpY2VfZGlzayAqbGli
eGxfZGV2aWNlX2Rpc2tfbGlzdChsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIGludCAq
bnVtKTsKIGludCBsaWJ4bF9kZXZpY2VfZGlza19nZXRpbmZvKGxpYnhsX2N0eCAqY3R4LCB1aW50
MzJfdCBkb21pZCwKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhl
bi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xZ-0002EH-8z; Fri, 21 Dec 2012 17:00:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Ck-JM
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [193.109.254.147:4024] by server-13.bemta-14.messagelabs.com id
	57/FD-01725-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11630 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309213"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:16 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:16 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:00 +0100
Message-ID: <1356109208-6830-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 02/10] libxl: add new hotplug interface
	support to hotplug script callers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHN1cHBvcnQgdG8gY2FsbCBob3RwbHVnIHNjcmlwdHMgdGhhdCB1c2UgdGhlIG5ldyBob3Rw
bHVnIGludGVyZmFjZQp0byB0aGUgbG93LWxldmVsIGZ1bmN0aW9ucyBpbiBjaGFyZ2Ugb2YgY2Fs
bGluZyBob3RwbHVnIHNjcmlwdHMuIFRoaXMKbmV3IGNhbGxpbmcgY29udmVudGlvbiB3aWxsIGJl
IHVzZWQgd2hlbiB0aGUgaG90cGx1Z192ZXJzaW9uIGFvZGV2CmZpZWxkIGlzIDEuCgpTaWduZWQt
b2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KLS0tCiB0b29s
cy9saWJ4bC9saWJ4bF9kZXZpY2UuYyAgIHwgICAzMyArKysrKysrKysrKysrKysrKysrLQogdG9v
bHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaCB8ICAgMzMgKysrKysrKysrKysrKysrKysrLS0KIHRv
b2xzL2xpYnhsL2xpYnhsX2xpbnV4LmMgICAgfCAgIDY4ICsrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrLS0tLS0tLS0tLQogdG9vbHMvbGlieGwvbGlieGxfbmV0YnNkLmMgICB8ICAgIDIg
Ky0KIDQgZmlsZXMgY2hhbmdlZCwgMTE1IGluc2VydGlvbnMoKyksIDIxIGRlbGV0aW9ucygtKQoK
ZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2RldmljZS5jIGIvdG9vbHMvbGlieGwvbGli
eGxfZGV2aWNlLmMKaW5kZXggNThkM2YzNS4uODVjOTk1MyAxMDA2NDQKLS0tIGEvdG9vbHMvbGli
eGwvbGlieGxfZGV2aWNlLmMKKysrIGIvdG9vbHMvbGlieGwvbGlieGxfZGV2aWNlLmMKQEAgLTgz
LDYgKzgzLDM1IEBAIG91dDoKICAgICByZXR1cm4gcmM7CiB9CiAKK2NoYXIgKmxpYnhsX19kZXZp
Y2VfaG90cGx1Z19hY3Rpb24obGlieGxfX2djICpnYywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgbGlieGxfX2RldmljZV9hY3Rpb24gYWN0aW9uKQoreworICAgIHN3aXRjaCAo
YWN0aW9uKSB7CisgICAgY2FzZSBERVZJQ0VfQ09OTkVDVDoKKyAgICAgICAgcmV0dXJuICJhZGQi
OworICAgIGNhc2UgREVWSUNFX0RJU0NPTk5FQ1Q6CisgICAgICAgIHJldHVybiAicmVtb3ZlIjsK
KyAgICBjYXNlIERFVklDRV9QUkVQQVJFOgorICAgICAgICByZXR1cm4gInByZXBhcmUiOworICAg
IGNhc2UgREVWSUNFX1VOUFJFUEFSRToKKyAgICAgICAgcmV0dXJuICJ1bnByZXBhcmUiOworICAg
IGNhc2UgREVWSUNFX0xPQ0FMQVRUQUNIOgorICAgICAgICByZXR1cm4gImxvY2FsYXR0YWNoIjsK
KyAgICBjYXNlIERFVklDRV9MT0NBTERFVEFDSDoKKyAgICAgICAgcmV0dXJuICJsb2NhbGRldGFj
aCI7CisgICAgZGVmYXVsdDoKKyAgICAgICAgTE9HKEVSUk9SLCAiaW52YWxpZCBhY3Rpb24gKCVk
KSBmb3IgZGV2aWNlIiwgYWN0aW9uKTsKKyAgICAgICAgYnJlYWs7CisgICAgfQorICAgIHJldHVy
biBOVUxMOworfQorCitjaGFyICpsaWJ4bF9fZGV2aWNlX3hzX2hvdHBsdWdfcGF0aChsaWJ4bF9f
Z2MgKmdjLCBsaWJ4bF9fZGV2aWNlICpkZXYpCit7CisgICAgcmV0dXJuIEdDU1BSSU5URigiL2xv
Y2FsL2RvbWFpbi8ldS9saWJ4bC9ob3RwbHVnLyV1LyV1IiwKKyAgICAgICAgICAgICAgICAgICAg
IGRldi0+YmFja2VuZF9kb21pZCwgZGV2LT5kb21pZCwgZGV2LT5kZXZpZCk7Cit9CisKIGludCBs
aWJ4bF9fZGV2aWNlX2dlbmVyaWNfYWRkKGxpYnhsX19nYyAqZ2MsIHhzX3RyYW5zYWN0aW9uX3Qg
dCwKICAgICAgICAgbGlieGxfX2RldmljZSAqZGV2aWNlLCBjaGFyICoqYmVudHMsIGNoYXIgKipm
ZW50cykKIHsKQEAgLTQxMCw2ICs0MzksNyBAQCB2b2lkIGxpYnhsX19wcmVwYXJlX2FvX2Rldmlj
ZShsaWJ4bF9fYW8gKmFvLCBsaWJ4bF9fYW9fZGV2aWNlICphb2RldikKICAgICBhb2Rldi0+cmMg
PSAwOwogICAgIGFvZGV2LT5kZXYgPSBOVUxMOwogICAgIGFvZGV2LT5udW1fZXhlYyA9IDA7Cisg
ICAgYW9kZXYtPmhvdHBsdWdfdmVyc2lvbiA9IDA7CiAgICAgLyogSW5pdGlhbGl6ZSB0aW1lciBm
b3IgUUVNVSBCb2RnZSBhbmQgaG90cGx1ZyBleGVjdXRpb24gKi8KICAgICBsaWJ4bF9fZXZfdGlt
ZV9pbml0KCZhb2Rldi0+dGltZW91dCk7CiAgICAgYW9kZXYtPmFjdGl2ZSA9IDE7CkBAIC04OTEs
NyArOTIxLDggQEAgc3RhdGljIHZvaWQgZGV2aWNlX2hvdHBsdWcobGlieGxfX2VnYyAqZWdjLCBs
aWJ4bF9fYW9fZGV2aWNlICphb2RldikKICAgICAgKiBhbmQgcmV0dXJuIHRoZSBuZWNlc3Nhcnkg
YXJncy9lbnYgdmFycyBmb3IgZXhlY3V0aW9uICovCiAgICAgaG90cGx1ZyA9IGxpYnhsX19nZXRf
aG90cGx1Z19zY3JpcHRfaW5mbyhnYywgYW9kZXYtPmRldiwgJmFyZ3MsICZlbnYsCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhb2Rldi0+YWN0aW9uLAotICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYW9kZXYtPm51bV9leGVj
KTsKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGFvZGV2LT5u
dW1fZXhlYywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGFv
ZGV2LT5ob3RwbHVnX3ZlcnNpb24pOwogICAgIHN3aXRjaCAoaG90cGx1ZykgewogICAgIGNhc2Ug
MDoKICAgICAgICAgLyogbm8gaG90cGx1ZyBzY3JpcHQgdG8gZXhlY3V0ZSAqLwpkaWZmIC0tZ2l0
IGEvdG9vbHMvbGlieGwvbGlieGxfaW50ZXJuYWwuaCBiL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVy
bmFsLmgKaW5kZXggMGIzOGUzZS4uYzQxZTYwOCAxMDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGli
eGxfaW50ZXJuYWwuaAorKysgYi90b29scy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCkBAIC0xODI5
LDEyICsxODI5LDE5IEBAIF9oaWRkZW4gY29uc3QgY2hhciAqbGlieGxfX3J1bl9kaXJfcGF0aCh2
b2lkKTsKIAogLyotLS0tLSBkZXZpY2UgYWRkaXRpb24vcmVtb3ZhbCAtLS0tLSovCiAKLS8qIEFj
dGlvbiB0byBwZXJmb3JtIChlaXRoZXIgY29ubmVjdCBvciBkaXNjb25uZWN0KSAqLworLyogQWN0
aW9uIHRvIHBlcmZvcm0gKi8KIHR5cGVkZWYgZW51bSB7CiAgICAgREVWSUNFX0NPTk5FQ1QsCi0g
ICAgREVWSUNFX0RJU0NPTk5FQ1QKKyAgICBERVZJQ0VfRElTQ09OTkVDVCwKKyAgICBERVZJQ0Vf
UFJFUEFSRSwKKyAgICBERVZJQ0VfVU5QUkVQQVJFLAorICAgIERFVklDRV9MT0NBTEFUVEFDSCwK
KyAgICBERVZJQ0VfTE9DQUxERVRBQ0gKIH0gbGlieGxfX2RldmljZV9hY3Rpb247CiAKK19oaWRk
ZW4gY2hhciAqbGlieGxfX2RldmljZV9ob3RwbHVnX2FjdGlvbihsaWJ4bF9fZ2MgKmdjLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfYWN0
aW9uIGFjdGlvbik7CisKIHR5cGVkZWYgc3RydWN0IGxpYnhsX19hb19kZXZpY2UgbGlieGxfX2Fv
X2RldmljZTsKIHR5cGVkZWYgc3RydWN0IGxpYnhsX19tdWx0aWRldiBsaWJ4bF9fbXVsdGlkZXY7
CiB0eXBlZGVmIHZvaWQgbGlieGxfX2RldmljZV9jYWxsYmFjayhsaWJ4bF9fZWdjKiwgbGlieGxf
X2FvX2RldmljZSopOwpAQCAtMTg1Miw2ICsxODU5LDE0IEBAIHR5cGVkZWYgdm9pZCBsaWJ4bF9f
ZGV2aWNlX2NhbGxiYWNrKGxpYnhsX19lZ2MqLCBsaWJ4bF9fYW9fZGV2aWNlKik7CiAgKiBPbmNl
IF9wcmVwYXJlIGhhcyBiZWVuIGNhbGxlZCBvbiBhIGxpYnhsX19hb19kZXZpY2UsIGl0IGlzIHNh
ZmUgdG8ganVzdAogICogZGlzY2FyZCB0aGlzIHN0cnVjdCwgdGhlcmUncyBubyBuZWVkIHRvIGNh
bGwgYW55IGRlc3Ryb3kgZnVuY3Rpb24uCiAgKiBfcHJlcGFyZSBjYW4gYWxzbyBiZSBjYWxsZWQg
bXVsdGlwbGUgdGltZXMgd2l0aCB0aGUgc2FtZSBsaWJ4bF9fYW9fZGV2aWNlLgorICoKKyAqIElm
IGhvdHBsdWdfdmVyc2lvbiBmaWVsZCBpcyAxLCB0aGUgbmV3IGhvdHBsdWcgc2NyaXB0IGNhbGxp
bmcgY29udmVudGlvbgorICogd2lsbCBiZSB1c2VkIHRvIGNhbGwgdGhlIGhvdHBsdWcgc2NyaXB0
LiBUaGlzIG5ldyBjb252ZW50aW9uIHByb3ZpZGVzCisgKiB0d28gbmV3IGFjdGlvbnMgdG8gaG90
cGx1ZyBzY3JpcHRzLCAicHJlcGFyZSIsICJ1bnByZXBhcmUiLCAibG9jYWxhdHRhY2giCisgKiBh
bmQgImxvY2FsZGV0YWNoIi4gVGhpcyBuZXcgYWN0aW9ucyBoYXZlIGJlZW4gYWRkZWQgdG8gb2Zm
bG9hZCB3b3JrIGRvbmUKKyAqIGJ5IGhvdHBsdWcgc2NyaXB0cyBkdXJpbmcgdGhlIGJsYWNrb3V0
IHBoYXNlIG9mIG1pZ3JhdGlvbi4gInByZXBhcmUiIGlzCisgKiBjYWxsZWQgYmVmb3JlIHRoZSBy
ZW1vdGUgZG9tYWluIGlzIHBhdXNlZCwgc28gYXMgbXVjaCBvcGVyYXRpb25zIGFzCisgKiBwb3Nz
aWJsZSBzaG91bGQgYmUgZG9uZSBpbiB0aGlzIHBoYXNlLgogICovCiBfaGlkZGVuIHZvaWQgbGli
eGxfX3ByZXBhcmVfYW9fZGV2aWNlKGxpYnhsX19hbyAqYW8sIGxpYnhsX19hb19kZXZpY2UgKmFv
ZGV2KTsKIApAQCAtMTg2Miw2ICsxODc3LDcgQEAgc3RydWN0IGxpYnhsX19hb19kZXZpY2Ugewog
ICAgIGxpYnhsX19kZXZpY2UgKmRldjsKICAgICBpbnQgZm9yY2U7CiAgICAgbGlieGxfX2Rldmlj
ZV9jYWxsYmFjayAqY2FsbGJhY2s7CisgICAgaW50IGhvdHBsdWdfdmVyc2lvbjsKICAgICAvKiBy
ZXR1cm4gdmFsdWUsIHplcm9lZCBieSB1c2VyIG9uIGVudHJ5LCBpcyB2YWxpZCBvbiBjYWxsYmFj
ayAqLwogICAgIGludCByYzsKICAgICAvKiBwcml2YXRlIGZvciBtdWx0aWRldiAqLwpAQCAtMjA0
NSw2ICsyMDYxLDE3IEBAIF9oaWRkZW4gdm9pZCBsaWJ4bF9faW5pdGlhdGVfZGV2aWNlX3JlbW92
ZShsaWJ4bF9fZWdjICplZ2MsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpOwogCiAvKgorICogbGlieGxfX2RldmljZV94
c19ob3RwbHVnX3BhdGggcmV0dXJucyB0aGUgeGVuc3RvcmUgaG90cGx1ZworICogcGF0aCB0aGF0
IGlzIHVzZWQgdG8gc2hhcmUgaW5mb3JtYXRpb24gd2l0aCB0aGUgaG90cGx1ZworICogc2NyaXB0
LgorICoKKyAqIFRoaXMgcGF0aCBpcyBvbmx5IHVzZWQgYnkgbmV3IGhvdHBsdWcgc2NyaXB0cywg
dGhhdCBhcmUKKyAqIHNwZWNpZmllZCB1c2luZyAibWV0aG9kIiBpbnN0ZWFkIG9mICJzY3JpcHQi
IGluIHRoZSBkaXNrCisgKiBwYXJhbWV0ZXJzLgorICovCitfaGlkZGVuIGNoYXIgKmxpYnhsX19k
ZXZpY2VfeHNfaG90cGx1Z19wYXRoKGxpYnhsX19nYyAqZ2MsIGxpYnhsX19kZXZpY2UgKmRldik7
CisKKy8qCiAgKiBsaWJ4bF9fZ2V0X2hvdHBsdWdfc2NyaXB0X2luZm8gcmV0dXJucyB0aGUgYXJn
cyBhbmQgZW52IHRoYXQgc2hvdWxkCiAgKiBiZSBwYXNzZWQgdG8gdGhlIGhvdHBsdWcgc2NyaXB0
IGZvciB0aGUgcmVxdWVzdGVkIGRldmljZS4KICAqCkBAIC0yMDY1LDcgKzIwOTIsNyBAQCBfaGlk
ZGVuIHZvaWQgbGlieGxfX2luaXRpYXRlX2RldmljZV9yZW1vdmUobGlieGxfX2VnYyAqZWdjLAog
X2hpZGRlbiBpbnQgbGlieGxfX2dldF9ob3RwbHVnX3NjcmlwdF9pbmZvKGxpYnhsX19nYyAqZ2Ms
IGxpYnhsX19kZXZpY2UgKmRldiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBjaGFyICoqKmFyZ3MsIGNoYXIgKioqZW52LAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfYWN0aW9uIGFjdGlvbiwKLSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgbnVtX2V4ZWMpOworICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCBudW1fZXhlYywgaW50
IGhvdHBsdWdfdmVyc2lvbik7CiAKIC8qLS0tLS0gbG9jYWwgZGlzayBhdHRhY2g6IGF0dGFjaCBh
IGRpc2sgbG9jYWxseSB0byBydW4gdGhlIGJvb3Rsb2FkZXIgLS0tLS0qLwogCmRpZmYgLS1naXQg
YS90b29scy9saWJ4bC9saWJ4bF9saW51eC5jIGIvdG9vbHMvbGlieGwvbGlieGxfbGludXguYwpp
bmRleCAxZmVkM2NkLi40M2YyM2RkIDEwMDY0NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9saW51
eC5jCisrKyBiL3Rvb2xzL2xpYnhsL2xpYnhsX2xpbnV4LmMKQEAgLTE5MCwzMiArMTkwLDY4IEBA
IG91dDoKIAogc3RhdGljIGludCBsaWJ4bF9faG90cGx1Z19kaXNrKGxpYnhsX19nYyAqZ2MsIGxp
YnhsX19kZXZpY2UgKmRldiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjaGFyICoq
KmFyZ3MsIGNoYXIgKioqZW52LAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhs
X19kZXZpY2VfYWN0aW9uIGFjdGlvbikKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBs
aWJ4bF9fZGV2aWNlX2FjdGlvbiBhY3Rpb24sCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgaW50IGhvdHBsdWdfdmVyc2lvbikKIHsKICAgICBjaGFyICpiZV9wYXRoID0gbGlieGxfX2Rl
dmljZV9iYWNrZW5kX3BhdGgoZ2MsIGRldik7Ci0gICAgY2hhciAqc2NyaXB0OworICAgIGNoYXIg
KmhvdHBsdWdfcGF0aCA9IGxpYnhsX19kZXZpY2VfeHNfaG90cGx1Z19wYXRoKGdjLCBkZXYpOwor
ICAgIGNoYXIgKnNjcmlwdCwgKnNhY3Rpb247CiAgICAgaW50IG5yID0gMCwgcmMgPSAwOworICAg
IGNvbnN0IGludCBlbnZfYXJyYXlzaXplID0gNTsKKyAgICBjb25zdCBpbnQgYXJnX2FycmF5c2l6
ZSA9IDM7CiAKLSAgICBzY3JpcHQgPSBsaWJ4bF9feHNfcmVhZChnYywgWEJUX05VTEwsCi0gICAg
ICAgICAgICAgICAgICAgICAgICAgICAgR0NTUFJJTlRGKCIlcy8lcyIsIGJlX3BhdGgsICJzY3Jp
cHQiKSk7Ci0gICAgaWYgKCFzY3JpcHQpIHsKLSAgICAgICAgTE9HRVYoRVJST1IsIGVycm5vLCAi
dW5hYmxlIHRvIHJlYWQgc2NyaXB0IGZyb20gJXMiLCBiZV9wYXRoKTsKKyAgICBzd2l0Y2ggKGhv
dHBsdWdfdmVyc2lvbikgeworICAgIGNhc2UgMDoKKyAgICAgICAgc2NyaXB0ID0gbGlieGxfX3hz
X3JlYWQoZ2MsIFhCVF9OVUxMLCBHQ1NQUklOVEYoIiVzL3NjcmlwdCIsIGJlX3BhdGgpKTsKKyAg
ICAgICAgaWYgKCFzY3JpcHQpIHsKKyAgICAgICAgICAgIExPR0VWKEVSUk9SLCBlcnJubywgInVu
YWJsZSB0byByZWFkIHNjcmlwdCBmcm9tICVzIiwgYmVfcGF0aCk7CisgICAgICAgICAgICByYyA9
IEVSUk9SX0ZBSUw7CisgICAgICAgICAgICBnb3RvIGVycm9yOworICAgICAgICB9CisgICAgICAg
ICplbnYgPSBnZXRfaG90cGx1Z19lbnYoZ2MsIHNjcmlwdCwgZGV2KTsKKyAgICAgICAgaWYgKCEq
ZW52KSB7CisgICAgICAgICAgICByYyA9IEVSUk9SX0ZBSUw7CisgICAgICAgICAgICBnb3RvIGVy
cm9yOworICAgICAgICB9CisgICAgICAgIGJyZWFrOworICAgIGNhc2UgMToKKyAgICAgICAgLyog
VGhlIG5ldyBob3RwbHVnIGNhbGxpbmcgY29udmVudGlvbiBvbmx5IHVzZXMgdHdvIEVOViB2YXJp
YWJsZXM6CisgICAgICAgICAqIEJBQ0tFTkRfUEFUSDogcGF0aCB0byB4ZW5zdG9yZSBiYWNrZW5k
IG9mIHRoZSByZWxhdGVkIGRldmljZS4KKyAgICAgICAgICogSE9UUExVR19QQVRIOiBwYXRoIHRv
IHRoZSB4ZW5zdG9yZSBkaXJlY3RvcnkgdGhhdCBjYW4gYmUgdXNlZCB0bworICAgICAgICAgKiBw
YXNzIGV4dHJhIHBhcmFtZXRlcnMgdG8gdGhlIHNjcmlwdC4KKyAgICAgICAgICovCisgICAgICAg
IHNjcmlwdCA9IGxpYnhsX194c19yZWFkKGdjLCBYQlRfTlVMTCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgR0NTUFJJTlRGKCIlcy9zY3JpcHQiLCBob3RwbHVnX3BhdGgpKTsKKyAg
ICAgICAgaWYgKCFzY3JpcHQpIHsKKyAgICAgICAgICAgIExPR0VWKEVSUk9SLCBlcnJubywgInVu
YWJsZSB0byByZWFkIHNjcmlwdCBmcm9tICVzIiwgaG90cGx1Z19wYXRoKTsKKyAgICAgICAgICAg
IHJjID0gRVJST1JfRkFJTDsKKyAgICAgICAgICAgIGdvdG8gZXJyb3I7CisgICAgICAgIH0KKyAg
ICAgICAgR0NORVdfQVJSQVkoKmVudiwgZW52X2FycmF5c2l6ZSk7CisgICAgICAgICgqZW52KVtu
cisrXSA9ICJCQUNLRU5EX1BBVEgiOworICAgICAgICAoKmVudilbbnIrK10gPSBiZV9wYXRoOwor
ICAgICAgICAoKmVudilbbnIrK10gPSAiSE9UUExVR19QQVRIIjsKKyAgICAgICAgKCplbnYpW25y
KytdID0gaG90cGx1Z19wYXRoOworICAgICAgICAoKmVudilbbnIrK10gPSBOVUxMOworICAgICAg
ICBhc3NlcnQobnIgPT0gZW52X2FycmF5c2l6ZSk7CisgICAgICAgIG5yID0gMDsKKyAgICAgICAg
YnJlYWs7CisgICAgZGVmYXVsdDoKKyAgICAgICAgTE9HKEVSUk9SLCAidW5rbm93biBob3RwbHVn
IHNjcmlwdCB2ZXJzaW9uICVkIiwgaG90cGx1Z192ZXJzaW9uKTsKICAgICAgICAgcmMgPSBFUlJP
Ul9GQUlMOwogICAgICAgICBnb3RvIGVycm9yOwogICAgIH0KIAotICAgICplbnYgPSBnZXRfaG90
cGx1Z19lbnYoZ2MsIHNjcmlwdCwgZGV2KTsKLSAgICBpZiAoISplbnYpIHsKKyAgICBHQ05FV19B
UlJBWSgqYXJncywgYXJnX2FycmF5c2l6ZSk7CisgICAgKCphcmdzKVtucisrXSA9IHNjcmlwdDsK
KyAgICBzYWN0aW9uID0gbGlieGxfX2RldmljZV9ob3RwbHVnX2FjdGlvbihnYywgYWN0aW9uKTsK
KyAgICBpZiAoIXNhY3Rpb24pIHsKICAgICAgICAgcmMgPSBFUlJPUl9GQUlMOwogICAgICAgICBn
b3RvIGVycm9yOwogICAgIH0KLQotICAgIGNvbnN0IGludCBhcnJheXNpemUgPSAzOwotICAgIEdD
TkVXX0FSUkFZKCphcmdzLCBhcnJheXNpemUpOwotICAgICgqYXJncylbbnIrK10gPSBzY3JpcHQ7
Ci0gICAgKCphcmdzKVtucisrXSA9IGFjdGlvbiA9PSBERVZJQ0VfQ09OTkVDVCA/ICJhZGQiIDog
InJlbW92ZSI7CisgICAgKCphcmdzKVtucisrXSA9IHNhY3Rpb247CiAgICAgKCphcmdzKVtucisr
XSA9IE5VTEw7Ci0gICAgYXNzZXJ0KG5yID09IGFycmF5c2l6ZSk7CisgICAgYXNzZXJ0KG5yID09
IGFyZ19hcnJheXNpemUpOwogCiAgICAgcmMgPSAxOwogCkBAIC0yMjYsNyArMjYyLDcgQEAgZXJy
b3I6CiBpbnQgbGlieGxfX2dldF9ob3RwbHVnX3NjcmlwdF9pbmZvKGxpYnhsX19nYyAqZ2MsIGxp
YnhsX19kZXZpY2UgKmRldiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY2hh
ciAqKiphcmdzLCBjaGFyICoqKmVudiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfX2RldmljZV9hY3Rpb24gYWN0aW9uLAotICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpbnQgbnVtX2V4ZWMpCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGludCBudW1fZXhlYywgaW50IGhvdHBsdWdfdmVyc2lvbikKIHsKICAgICBjaGFyICpkaXNh
YmxlX3VkZXYgPSBsaWJ4bF9feHNfcmVhZChnYywgWEJUX05VTEwsIERJU0FCTEVfVURFVl9QQVRI
KTsKICAgICBpbnQgcmM7CkBAIC0yNDMsNyArMjc5LDcgQEAgaW50IGxpYnhsX19nZXRfaG90cGx1
Z19zY3JpcHRfaW5mbyhsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9fZGV2aWNlICpkZXYsCiAgICAgICAg
ICAgICByYyA9IDA7CiAgICAgICAgICAgICBnb3RvIG91dDsKICAgICAgICAgfQotICAgICAgICBy
YyA9IGxpYnhsX19ob3RwbHVnX2Rpc2soZ2MsIGRldiwgYXJncywgZW52LCBhY3Rpb24pOworICAg
ICAgICByYyA9IGxpYnhsX19ob3RwbHVnX2Rpc2soZ2MsIGRldiwgYXJncywgZW52LCBhY3Rpb24s
IGhvdHBsdWdfdmVyc2lvbik7CiAgICAgICAgIGJyZWFrOwogICAgIGNhc2UgTElCWExfX0RFVklD
RV9LSU5EX1ZJRjoKICAgICAgICAgLyoKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX25l
dGJzZC5jIGIvdG9vbHMvbGlieGwvbGlieGxfbmV0YnNkLmMKaW5kZXggOTU4NzgzMy4uODA2MWU3
YSAxMDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfbmV0YnNkLmMKKysrIGIvdG9vbHMvbGli
eGwvbGlieGxfbmV0YnNkLmMKQEAgLTYyLDcgKzYyLDcgQEAgb3V0OgogaW50IGxpYnhsX19nZXRf
aG90cGx1Z19zY3JpcHRfaW5mbyhsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9fZGV2aWNlICpkZXYsCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNoYXIgKioqYXJncywgY2hhciAqKipl
bnYsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfYWN0
aW9uIGFjdGlvbiwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IG51bV9l
eGVjKQorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgbnVtX2V4ZWMsIGlu
dCBob3RwbHVnX3ZlcnNpb24pCiB7CiAgICAgY2hhciAqZGlzYWJsZV91ZGV2ID0gbGlieGxfX3hz
X3JlYWQoZ2MsIFhCVF9OVUxMLCBESVNBQkxFX1VERVZfUEFUSCk7CiAgICAgaW50IHJjOwotLSAK
MS43LjcuNSAoQXBwbGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhl
bi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xY-0002E7-S4; Fri, 21 Dec 2012 17:00:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cp-Ky
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [193.109.254.147:4023] by server-2.bemta-14.messagelabs.com id
	D5/D5-30744-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!5
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11644 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309215"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:17 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:17 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:02 +0100
Message-ID: <1356109208-6830-5-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 04/10] libxl: add prepare/unprepare
	operations to the libxl public interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHB1YmxpYyBmdW5jdGlvbnMgZm9yIHRoZSBwcmVwYXJlL3VucHJlcGFyZSBkaXNrIG9wZXJh
dGlvbnMuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bC5jIHwgICAyNiArKysrKysrKysrKysrKystLS0t
LS0tLS0tLQogdG9vbHMvbGlieGwvbGlieGwuaCB8ICAgIDggKysrKysrKysKIDIgZmlsZXMgY2hh
bmdlZCwgMjMgaW5zZXJ0aW9ucygrKSwgMTEgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvdG9v
bHMvbGlieGwvbGlieGwuYyBiL3Rvb2xzL2xpYnhsL2xpYnhsLmMKaW5kZXggZTE0MTM3OS4uODIw
MzRmMSAxMDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGwuYworKysgYi90b29scy9saWJ4bC9s
aWJ4bC5jCkBAIC0xNjg4LDE5ICsxNjg4LDE5IEBAIGludCBsaWJ4bF92bmN2aWV3ZXJfZXhlYyhs
aWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIGludCBhdXRvcGFzcykKIC8qKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKiovCiAKIC8qIGdlbmVyaWMgY2FsbGJhY2sgZm9yIGRldmljZXMgdGhhdCBvbmx5
IG5lZWQgdG8gc2V0IGFvX2NvbXBsZXRlICovCi1zdGF0aWMgdm9pZCBkZXZpY2VfYWRkcm1fYW9j
b21wbGV0ZShsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hb19kZXZpY2UgKmFvZGV2KQorc3RhdGlj
IHZvaWQgZGV2aWNlX2FvY29tcGxldGUobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW9fZGV2aWNl
ICphb2RldikKIHsKICAgICBTVEFURV9BT19HQyhhb2Rldi0+YW8pOwogCiAgICAgaWYgKGFvZGV2
LT5yYykgewogICAgICAgICBpZiAoYW9kZXYtPmRldikgewogICAgICAgICAgICAgTE9HKEVSUk9S
LCAidW5hYmxlIHRvICVzICVzIHdpdGggaWQgJXUiLAotICAgICAgICAgICAgICAgICAgICAgICAg
YW9kZXYtPmFjdGlvbiA9PSBERVZJQ0VfQ09OTkVDVCA/ICJhZGQiIDogInJlbW92ZSIsCisgICAg
ICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2hvdHBsdWdfYWN0aW9uKGdjLCBhb2Rl
di0+YWN0aW9uKSwKICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2Vfa2luZF90
b19zdHJpbmcoYW9kZXYtPmRldi0+a2luZCksCiAgICAgICAgICAgICAgICAgICAgICAgICBhb2Rl
di0+ZGV2LT5kZXZpZCk7CiAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgICBMT0coRVJST1Is
ICJ1bmFibGUgdG8gJXMgZGV2aWNlIiwKLSAgICAgICAgICAgICAgICAgICAgICAgYW9kZXYtPmFj
dGlvbiA9PSBERVZJQ0VfQ09OTkVDVCA/ICJhZGQiIDogInJlbW92ZSIpOworICAgICAgICAgICAg
ICAgICAgICAgICBsaWJ4bF9fZGV2aWNlX2hvdHBsdWdfYWN0aW9uKGdjLCBhb2Rldi0+YWN0aW9u
KSk7CiAgICAgICAgIH0KICAgICAgICAgZ290byBvdXQ7CiAgICAgfQpAQCAtMzQ4MCw3ICszNDgw
LDcgQEAgb3V0OgogICAgICAgICBsaWJ4bF9fcHJlcGFyZV9hb19kZXZpY2UoYW8sIGFvZGV2KTsg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBhb2Rldi0+YWN0aW9uID0gREVW
SUNFX0RJU0NPTk5FQ1Q7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBh
b2Rldi0+ZGV2ID0gZGV2aWNlOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAotICAgICAgICBhb2Rldi0+Y2FsbGJhY2sgPSBkZXZpY2VfYWRkcm1fYW9jb21wbGV0
ZTsgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBhb2Rldi0+Y2FsbGJhY2sgPSBkZXZp
Y2VfYW9jb21wbGV0ZTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBhb2Rl
di0+Zm9yY2UgPSBmOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAogICAgICAgICBsaWJ4bF9faW5pdGlhdGVfZGV2aWNlX3JlbW92ZShlZ2MsIGFvZGV2KTsg
ICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXApAQCAtMzUyMSwxMCArMzUy
MSwxMiBAQCBERUZJTkVfREVWSUNFX1JFTU9WRSh2dHBtLCBkZXN0cm95LCAxKQogICogbGlieGxf
ZGV2aWNlX2Rpc2tfYWRkCiAgKiBsaWJ4bF9kZXZpY2VfbmljX2FkZAogICogbGlieGxfZGV2aWNl
X3Z0cG1fYWRkCisgKiBsaWJ4bF9kZXZpY2VfZGlza19wcmVwYXJlCisgKiBsaWJ4bF9kZXZpY2Vf
ZGlza191bnByZXBhcmUKICAqLwogCi0jZGVmaW5lIERFRklORV9ERVZJQ0VfQUREKHR5cGUpICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgaW50IGxpYnhsX2Rl
dmljZV8jI3R5cGUjI19hZGQobGlieGxfY3R4ICpjdHgsICAgICAgICAgICAgICAgICAgICAgICBc
CisjZGVmaW5lIERFRklORV9ERVZJQ0VfRlVOQyh0eXBlLCBvcCkgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCisgICAgaW50IGxpYnhsX2RldmljZV8jI3R5cGUjI18jI29wKGxp
YnhsX2N0eCAqY3R4LCAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgIHVpbnQzMl90IGRv
bWlkLCBsaWJ4bF9kZXZpY2VfIyN0eXBlICp0eXBlLCAgICAgICAgICAgICAgICAgICAgICBcCiAg
ICAgICAgIGNvbnN0IGxpYnhsX2FzeW5jb3BfaG93ICphb19ob3cpICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCiAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCkBAIC0zNTMzLDggKzM1MzUsOCBAQCBE
RUZJTkVfREVWSUNFX1JFTU9WRSh2dHBtLCBkZXN0cm95LCAxKQogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAog
ICAgICAgICBHQ05FVyhhb2Rldik7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgXAogICAgICAgICBsaWJ4bF9fcHJlcGFyZV9hb19kZXZpY2UoYW8sIGFv
ZGV2KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAotICAgICAgICBhb2Rldi0+Y2FsbGJh
Y2sgPSBkZXZpY2VfYWRkcm1fYW9jb21wbGV0ZTsgICAgICAgICAgICAgICAgICAgICAgXAotICAg
ICAgICBsaWJ4bF9fZGV2aWNlXyMjdHlwZSMjX2FkZChlZ2MsIGRvbWlkLCB0eXBlLCBhb2Rldik7
ICAgICAgICAgICAgXAorICAgICAgICBhb2Rldi0+Y2FsbGJhY2sgPSBkZXZpY2VfYW9jb21wbGV0
ZTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBsaWJ4bF9fZGV2aWNlXyMj
dHlwZSMjXyMjb3AoZWdjLCBkb21pZCwgdHlwZSwgYW9kZXYpOyAgICAgICAgICAgXAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgXAogICAgICAgICByZXR1cm4gQU9fSU5QUk9HUkVTUzsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgIH0KQEAgLTM1NDIsMTMgKzM1NDQsMTUg
QEAgREVGSU5FX0RFVklDRV9SRU1PVkUodnRwbSwgZGVzdHJveSwgMSkKIC8qIERlZmluZSBhbGxh
ZGQgZnVuY3Rpb25zIGFuZCB1bmRlZiB0aGUgbWFjcm8gKi8KIAogLyogZGlzayAqLwotREVGSU5F
X0RFVklDRV9BREQoZGlzaykKK0RFRklORV9ERVZJQ0VfRlVOQyhkaXNrLCBhZGQpCitERUZJTkVf
REVWSUNFX0ZVTkMoZGlzaywgcHJlcGFyZSkKK0RFRklORV9ERVZJQ0VfRlVOQyhkaXNrLCB1bnBy
ZXBhcmUpCiAKIC8qIG5pYyAqLwotREVGSU5FX0RFVklDRV9BREQobmljKQorREVGSU5FX0RFVklD
RV9GVU5DKG5pYywgYWRkKQogCiAvKiB2dHBtICovCi1ERUZJTkVfREVWSUNFX0FERCh2dHBtKQor
REVGSU5FX0RFVklDRV9GVU5DKHZ0cG0sIGFkZCkKIAogI3VuZGVmIERFRklORV9ERVZJQ0VfQURE
CiAKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsLmggYi90b29scy9saWJ4bC9saWJ4bC5o
CmluZGV4IGUyYmE1NDkuLmYzNDI2ZjcgMTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsLmgK
KysrIGIvdG9vbHMvbGlieGwvbGlieGwuaApAQCAtNzA3LDYgKzcwNywxMCBAQCB2b2lkIGxpYnhs
X3Z0cG1pbmZvX2xpc3RfZnJlZShsaWJ4bF92dHBtaW5mbyAqLCBpbnQgbnJfdnRwbXMpOwogICov
CiAKIC8qIERpc2tzICovCitpbnQgbGlieGxfZGV2aWNlX2Rpc2tfcHJlcGFyZShsaWJ4bF9jdHgg
KmN0eCwgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4
bF9kZXZpY2VfZGlzayAqZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0
IGxpYnhsX2FzeW5jb3BfaG93ICphb19ob3cpCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBMSUJYTF9FWFRFUk5BTF9DQUxMRVJTX09OTFk7CiBpbnQgbGlieGxfZGV2aWNlX2Rpc2tfYWRk
KGxpYnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfZGV2aWNlX2Rpc2sgKmRpc2ssCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGNv
bnN0IGxpYnhsX2FzeW5jb3BfaG93ICphb19ob3cpCkBAIC03MTksNiArNzIzLDEwIEBAIGludCBs
aWJ4bF9kZXZpY2VfZGlza19kZXN0cm95KGxpYnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwK
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RldmljZV9kaXNrICpkaXNrLAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgbGlieGxfYXN5bmNvcF9ob3cgKmFv
X2hvdykKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIExJQlhMX0VYVEVSTkFMX0NBTExF
UlNfT05MWTsKK2ludCBsaWJ4bF9kZXZpY2VfZGlza191bnByZXBhcmUobGlieGxfY3R4ICpjdHgs
IHVpbnQzMl90IGRvbWlkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9k
ZXZpY2VfZGlzayAqZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qg
bGlieGxfYXN5bmNvcF9ob3cgKmFvX2hvdykKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgTElCWExfRVhURVJOQUxfQ0FMTEVSU19PTkxZOwogCiBsaWJ4bF9kZXZpY2VfZGlzayAqbGli
eGxfZGV2aWNlX2Rpc2tfbGlzdChsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIGludCAq
bnVtKTsKIGludCBsaWJ4bF9kZXZpY2VfZGlza19nZXRpbmZvKGxpYnhsX2N0eCAqY3R4LCB1aW50
MzJfdCBkb21pZCwKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhl
bi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xZ-0002EW-O5; Fri, 21 Dec 2012 17:00:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cq-Np
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:31 +0000
Received: from [193.109.254.147:4027] by server-10.bemta-14.messagelabs.com id
	C6/76-13263-EA594D05; Fri, 21 Dec 2012 17:00:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!4
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11636 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309214"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:17 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:16 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:01 +0100
Message-ID: <1356109208-6830-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 03/10] libxl: add new "method" parameter to
	xl disk config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBuZXcgZGlza3NwZWMgcGFyYW1ldGVycyB3aWxsIHNldCBzY3JpcHQgdG8gdGhlIHZhbHVl
IHBhc3NlZCBpbgptZXRob2QsIGFuZCBob3RwbHVnX3ZlcnNpb24gdG8gMS4KClRoaXMgcGF0Y2gg
YWRkcyB0aGUgYmFzaWMgc3VwcG9ydCB0byBoYW5kbGUgdGhpcyBuZXcgc2NyaXB0cywgYnkKYWRk
aW5nIHRoZSBwcmVwYXJlL3VucHJlcGFyZSBwcml2YXRlIGxpYnhsIGZ1bmN0aW9ucywgYW5kIG1v
ZGlmeWluZwp0aGUgY3VycmVudCBkb21haW4gY3JlYXRpb24vZGVzdHJ1Y3Rpb24gdG8gY2FsbCB0
aGlzIGZ1bmN0aW9ucyBhdCB0aGUKYXBwcm9wcmlhdGUgc3BvdHMuCgpUaGlzIHBhdGNoIGFsc28g
YWRkcyBzb21lIGhlbHBlcnMsIHRoYXQgd2lsbCBiZSB1c2VkIGhlcmUgYW5kIGluIGxhdGVyCnBh
dGNoZXMuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KLS0tCiB0b29scy9saWJ4bC9saWJ4bC5jICAgICAgICAgIHwgIDEzMSArKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0KIHRvb2xzL2xpYnhsL2xpYnhsX2NyZWF0
ZS5jICAgfCAgIDYyICsrKysrKysrKysrKysrKysrKystCiB0b29scy9saWJ4bC9saWJ4bF9kZXZp
Y2UuYyAgIHwgICA4MCArKysrKysrKysrKysrKysrKysrLS0tLS0tCiB0b29scy9saWJ4bC9saWJ4
bF9pbnRlcm5hbC5oIHwgICAzNCArKysrKysrKysrKwogdG9vbHMvbGlieGwvbGlieGxfdHlwZXMu
aWRsICB8ICAgIDEgKwogdG9vbHMvbGlieGwvbGlieGx1X2Rpc2tfbC5sICB8ICAgIDIgKwogNiBm
aWxlcyBjaGFuZ2VkLCAyODggaW5zZXJ0aW9ucygrKSwgMjIgZGVsZXRpb25zKC0pCgpkaWZmIC0t
Z2l0IGEvdG9vbHMvbGlieGwvbGlieGwuYyBiL3Rvb2xzL2xpYnhsL2xpYnhsLmMKaW5kZXggOGQ5
MjFiYy4uZTE0MTM3OSAxMDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGwuYworKysgYi90b29s
cy9saWJ4bC9saWJ4bC5jCkBAIC0xOTk5LDYgKzE5OTksMTA4IEBAIGludCBsaWJ4bF9fZGV2aWNl
X2Zyb21fZGlzayhsaWJ4bF9fZ2MgKmdjLCB1aW50MzJfdCBkb21pZCwKICAgICByZXR1cm4gMDsK
IH0KIAordm9pZCBsaWJ4bF9fZGV2aWNlX2Rpc2tfcHJlcGFyZShsaWJ4bF9fZWdjICplZ2MsIHVp
bnQzMl90IGRvbWlkLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kZXZp
Y2VfZGlzayAqZGlzaywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2Fv
X2RldmljZSAqYW9kZXYpCit7CisgICAgU1RBVEVfQU9fR0MoYW9kZXYtPmFvKTsKKyAgICBjaGFy
ICpob3RwbHVnX3BhdGg7CisgICAgY2hhciAqc2NyaXB0X3BhdGg7CisgICAgaW50IHJjOworICAg
IHhzX3RyYW5zYWN0aW9uX3QgdCA9IFhCVF9OVUxMOworCisgICAgaWYgKGRpc2stPmhvdHBsdWdf
dmVyc2lvbiA9PSAwKSB7CisgICAgICAgIGFvZGV2LT5jYWxsYmFjayhlZ2MsIGFvZGV2KTsKKyAg
ICAgICAgcmV0dXJuOworICAgIH0KKworICAgIEdDTkVXKGFvZGV2LT5kZXYpOworICAgIHJjID0g
bGlieGxfX2RldmljZV9mcm9tX2Rpc2soZ2MsIGRvbWlkLCBkaXNrLCBhb2Rldi0+ZGV2KTsKKyAg
ICBpZiAocmMgIT0gMCkgeworICAgICAgICBMT0coRVJST1IsICJJbnZhbGlkIG9yIHVuc3VwcG9y
dGVkIHZpcnR1YWwgZGlzayBpZGVudGlmaWVyICVzIiwKKyAgICAgICAgICAgICAgICAgICBkaXNr
LT52ZGV2KTsKKyAgICAgICAgZ290byBlcnJvcjsKKyAgICB9CisKKyAgICBzY3JpcHRfcGF0aCA9
IGxpYnhsX19hYnNfcGF0aChnYywgZGlzay0+c2NyaXB0LAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX194ZW5fc2NyaXB0X2Rpcl9wYXRoKCkpOworCisgICAgaG90cGx1
Z19wYXRoID0gbGlieGxfX2RldmljZV94c19ob3RwbHVnX3BhdGgoZ2MsIGFvZGV2LT5kZXYpOwor
ICAgIGZvciAoOzspIHsKKyAgICAgICAgcmMgPSBsaWJ4bF9feHNfdHJhbnNhY3Rpb25fc3RhcnQo
Z2MsICZ0KTsKKyAgICAgICAgaWYgKHJjKSBnb3RvIGVycm9yOworCisgICAgICAgIHJjID0gbGli
eGxfX3hzX3dyaXRlX2NoZWNrZWQoZ2MsIHQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBHQ1NQUklOVEYoIiVzL3BhcmFtcyIsIGhvdHBsdWdfcGF0aCksCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBkaXNrLT5wZGV2X3BhdGgpOworICAgICAgICBpZiAocmMpCisg
ICAgICAgICAgICBnb3RvIGVycm9yOworCisgICAgICAgIHJjID0gbGlieGxfX3hzX3dyaXRlX2No
ZWNrZWQoZ2MsIHQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBHQ1NQUklOVEYo
IiVzL3NjcmlwdCIsIGhvdHBsdWdfcGF0aCksCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBzY3JpcHRfcGF0aCk7CisgICAgICAgIGlmIChyYykKKyAgICAgICAgICAgIGdvdG8gZXJy
b3I7CisKKyAgICAgICAgcmMgPSBsaWJ4bF9feHNfdHJhbnNhY3Rpb25fY29tbWl0KGdjLCAmdCk7
CisgICAgICAgIGlmICghcmMpIGJyZWFrOworICAgICAgICBpZiAocmMgPCAwKSBnb3RvIGVycm9y
OworICAgIH0KKworICAgIGFvZGV2LT5hY3Rpb24gPSBERVZJQ0VfUFJFUEFSRTsKKyAgICBhb2Rl
di0+aG90cGx1Z192ZXJzaW9uID0gZGlzay0+aG90cGx1Z192ZXJzaW9uOworICAgIGxpYnhsX19k
ZXZpY2VfaG90cGx1ZyhlZ2MsIGFvZGV2KTsKKyAgICByZXR1cm47CisKK2Vycm9yOgorICAgIGFz
c2VydChyYyk7CisgICAgbGlieGxfX3hzX3RyYW5zYWN0aW9uX2Fib3J0KGdjLCAmdCk7CisgICAg
YW9kZXYtPnJjID0gcmM7CisgICAgYW9kZXYtPmNhbGxiYWNrKGVnYywgYW9kZXYpOworICAgIHJl
dHVybjsKK30KKwordm9pZCBsaWJ4bF9fZGV2aWNlX2Rpc2tfdW5wcmVwYXJlKGxpYnhsX19lZ2Mg
KmVnYywgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfZGV2aWNlX2Rpc2sgKmRpc2ssCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpCit7CisgICAgU1RBVEVfQU9fR0MoYW9kZXYtPmFv
KTsKKyAgICBjaGFyICpob3RwbHVnX3BhdGg7CisgICAgaW50IHJjOworCisgICAgaWYgKGRpc2st
PmhvdHBsdWdfdmVyc2lvbiA9PSAwKSB7CisgICAgICAgIGFvZGV2LT5jYWxsYmFjayhlZ2MsIGFv
ZGV2KTsKKyAgICAgICAgcmV0dXJuOworICAgIH0KKworICAgIEdDTkVXKGFvZGV2LT5kZXYpOwor
ICAgIHJjID0gbGlieGxfX2RldmljZV9mcm9tX2Rpc2soZ2MsIGRvbWlkLCBkaXNrLCBhb2Rldi0+
ZGV2KTsKKyAgICBpZiAocmMgIT0gMCkgeworICAgICAgICBMT0coRVJST1IsICJJbnZhbGlkIG9y
IHVuc3VwcG9ydGVkIHZpcnR1YWwgZGlzayBpZGVudGlmaWVyICVzIiwKKyAgICAgICAgICAgICAg
ICAgICBkaXNrLT52ZGV2KTsKKyAgICAgICAgZ290byBlcnJvcjsKKyAgICB9CisKKyAgICBob3Rw
bHVnX3BhdGggPSBsaWJ4bF9fZGV2aWNlX3hzX2hvdHBsdWdfcGF0aChnYywgYW9kZXYtPmRldik7
CisgICAgaWYgKCFsaWJ4bF9feHNfcmVhZChnYywgWEJUX05VTEwsIGhvdHBsdWdfcGF0aCkpIHsK
KyAgICAgICAgTE9HKERFQlVHLCAidW5hYmxlIHRvIHVucHJlcGFyZSBkZXZpY2UgYmVjYXVzZSAl
cyBkb2Vzbid0IGV4aXN0IiwKKyAgICAgICAgICAgICAgICAgICBob3RwbHVnX3BhdGgpOworICAg
ICAgICBhb2Rldi0+Y2FsbGJhY2soZWdjLCBhb2Rldik7CisgICAgICAgIHJldHVybjsKKyAgICB9
CisKKyAgICBhb2Rldi0+YWN0aW9uID0gREVWSUNFX1VOUFJFUEFSRTsKKyAgICBhb2Rldi0+aG90
cGx1Z192ZXJzaW9uID0gZGlzay0+aG90cGx1Z192ZXJzaW9uOworICAgIGxpYnhsX19kZXZpY2Vf
aG90cGx1ZyhlZ2MsIGFvZGV2KTsKKyAgICByZXR1cm47CisKK2Vycm9yOgorICAgIGFzc2VydChy
Yyk7CisgICAgYW9kZXYtPnJjID0gcmM7CisgICAgYW9kZXYtPmNhbGxiYWNrKGVnYywgYW9kZXYp
OworICAgIHJldHVybjsKK30KKwogLyogU3BlY2lmaWMgZnVuY3Rpb24gY2FsbGVkIGRpcmVjdGx5
IG9ubHkgYnkgbG9jYWwgZGlzayBhdHRhY2gsCiAgKiBhbGwgb3RoZXIgdXNlcnMgc2hvdWxkIGlu
c3RlYWQgdXNlIHRoZSByZWd1bGFyCiAgKiBsaWJ4bF9fZGV2aWNlX2Rpc2tfYWRkIHdyYXBwZXIK
QEAgLTIwNTgsOCArMjE2MCwxNSBAQCBzdGF0aWMgdm9pZCBkZXZpY2VfZGlza19hZGQobGlieGxf
X2VnYyAqZWdjLCB1aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICBkZXYgPSBkaXNrLT5w
ZGV2X3BhdGg7CiAKICAgICAgICAgZG9fYmFja2VuZF9waHk6Ci0gICAgICAgICAgICAgICAgZmxl
eGFycmF5X2FwcGVuZChiYWNrLCAicGFyYW1zIik7Ci0gICAgICAgICAgICAgICAgZmxleGFycmF5
X2FwcGVuZChiYWNrLCBkZXYpOworICAgICAgICAgICAgICAgIGlmIChkaXNrLT5ob3RwbHVnX3Zl
cnNpb24gPT0gMCkgeworICAgICAgICAgICAgICAgICAgICAvKgorICAgICAgICAgICAgICAgICAg
ICAgKiBJZiB0aGUgbmV3IGhvdHBsdWcgdmVyc2lvbiBpcyB1c2VkIHBhcmFtcyBpcworICAgICAg
ICAgICAgICAgICAgICAgKiBzdG9yZWQgdW5kZXIgYSBwcml2YXRlIHBhdGgsIHNpbmNlIGl0IGNh
biBjb250YWluCisgICAgICAgICAgICAgICAgICAgICAqIGRhdGEgdGhhdCB0aGUgZ3Vlc3Qgc2hv
dWxkIG5vdCBzZWUuCisgICAgICAgICAgICAgICAgICAgICAqLworICAgICAgICAgICAgICAgICAg
ICBmbGV4YXJyYXlfYXBwZW5kKGJhY2ssICJwYXJhbXMiKTsKKyAgICAgICAgICAgICAgICAgICAg
ZmxleGFycmF5X2FwcGVuZChiYWNrLCBkZXYpOworICAgICAgICAgICAgICAgIH0KIAogICAgICAg
ICAgICAgICAgIHNjcmlwdCA9IGxpYnhsX19hYnNfcGF0aChnYywgZGlzay0+c2NyaXB0PzogImJs
b2NrIiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX3hl
bl9zY3JpcHRfZGlyX3BhdGgoKSk7CkBAIC0yMTUyLDYgKzIyNjEsNyBAQCBzdGF0aWMgdm9pZCBk
ZXZpY2VfZGlza19hZGQobGlieGxfX2VnYyAqZWdjLCB1aW50MzJfdCBkb21pZCwKIAogICAgIGFv
ZGV2LT5kZXYgPSBkZXZpY2U7CiAgICAgYW9kZXYtPmFjdGlvbiA9IERFVklDRV9DT05ORUNUOwor
ICAgIGFvZGV2LT5ob3RwbHVnX3ZlcnNpb24gPSBkaXNrLT5ob3RwbHVnX3ZlcnNpb247CiAgICAg
bGlieGxfX3dhaXRfZGV2aWNlX2Nvbm5lY3Rpb24oZWdjLCBhb2Rldik7CiAKICAgICByYyA9IDA7
CkBAIC0yMjQ1LDYgKzIzNTUsNyBAQCBpbnQgbGlieGxfdmRldl90b19kZXZpY2VfZGlzayhsaWJ4
bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsCiAgICAgR0NfSU5JVChjdHgpOwogICAgIGNoYXIg
KmRvbXBhdGgsICpwYXRoOwogICAgIGludCBkZXZpZCA9IGxpYnhsX19kZXZpY2VfZGlza19kZXZf
bnVtYmVyKHZkZXYsIE5VTEwsIE5VTEwpOworICAgIGxpYnhsX19kZXZpY2UgZGV2OwogICAgIGlu
dCByYyA9IEVSUk9SX0ZBSUw7CiAKICAgICBpZiAoZGV2aWQgPCAwKQpAQCAtMjI2Myw2ICsyMzc0
LDIyIEBAIGludCBsaWJ4bF92ZGV2X3RvX2RldmljZV9kaXNrKGxpYnhsX2N0eCAqY3R4LCB1aW50
MzJfdCBkb21pZCwKICAgICAgICAgZ290byBvdXQ7CiAKICAgICByYyA9IGxpYnhsX19kZXZpY2Vf
ZGlza19mcm9tX3hzX2JlKGdjLCBwYXRoLCBkaXNrKTsKKyAgICBpZiAocmMpIHsKKyAgICAgICAg
TE9HKEVSUk9SLCAidW5hYmxlIHRvIHBhcnNlIGRpc2sgZGV2aWNlIGZyb20gcGF0aCAlcyIsIHBh
dGgpOworICAgICAgICBnb3RvIG91dDsKKyAgICB9CisKKyAgICAvKiBDaGVjayBpZiB0aGUgZGV2
aWNlIGlzIHVzaW5nIHRoZSBuZXcgaG90cGx1ZyBpbnRlcmZhY2UgKi8KKyAgICByYyA9IGxpYnhs
X19kZXZpY2VfZnJvbV9kaXNrKGdjLCBkb21pZCwgZGlzaywgJmRldik7CisgICAgaWYgKHJjKSB7
CisgICAgICAgIExPRyhFUlJPUiwgImludmFsaWQgb3IgdW5zdXBwb3J0ZWQgdmlydHVhbCBkaXNr
IGlkZW50aWZpZXIgJXMiLAorICAgICAgICAgICAgICAgICAgICBkaXNrLT52ZGV2KTsKKyAgICAg
ICAgZ290byBvdXQ7CisgICAgfQorICAgIHBhdGggPSBsaWJ4bF9fZGV2aWNlX3hzX2hvdHBsdWdf
cGF0aChnYywgJmRldik7CisgICAgaWYgKGxpYnhsX194c19yZWFkKGdjLCBYQlRfTlVMTCwgcGF0
aCkpCisgICAgICAgIGRpc2stPmhvdHBsdWdfdmVyc2lvbiA9IDE7CisKIG91dDoKICAgICBHQ19G
UkVFOwogICAgIHJldHVybiByYzsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2NyZWF0
ZS5jIGIvdG9vbHMvbGlieGwvbGlieGxfY3JlYXRlLmMKaW5kZXggOWQyMDA4Ni4uYzE3Njk2NyAx
MDA2NDQKLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfY3JlYXRlLmMKKysrIGIvdG9vbHMvbGlieGwv
bGlieGxfY3JlYXRlLmMKQEAgLTU5MSw2ICs1OTEsMTAgQEAgc3RhdGljIGludCBzdG9yZV9saWJ4
bF9lbnRyeShsaWJ4bF9fZ2MgKmdjLCB1aW50MzJfdCBkb21pZCwKICAqLwogCiAvKiBFdmVudCBj
YWxsYmFja3MsIGluIHRoaXMgb3JkZXI6ICovCisKK3N0YXRpYyB2b2lkIGRvbWNyZWF0ZV9sYXVu
Y2hfYm9vdGxvYWRlcihsaWJ4bF9fZWdjICplZ2MsCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgbGlieGxfX211bHRpZGV2ICptdWx0aWRldiwKKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgcmV0KTsKIHN0YXRpYyB2b2lkIGRvbWNyZWF0
ZV9kZXZtb2RlbF9zdGFydGVkKGxpYnhsX19lZ2MgKmVnYywKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGxpYnhsX19kbV9zcGF3bl9zdGF0ZSAqZG1zcywKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCByYyk7CkBAIC02MjcsNiArNjMxLDEy
IEBAIHN0YXRpYyB2b2lkIGRvbWNyZWF0ZV9kZXN0cnVjdGlvbl9jYihsaWJ4bF9fZWdjICplZ2Ms
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2RvbWFpbl9kZXN0
cm95X3N0YXRlICpkZHMsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50
IHJjKTsKIAorLyogSWYgY3JlYXRpb24gaXMgbm90IHN1Y2Nlc3NmdWwsIHRoaXMgY2FsbGJhY2sg
d2lsbCBiZSBleGVjdXRlZAorICogd2hlbiBkZXZpY2VzIGhhdmUgYmVlbiB1bnByZXBhcmVkICov
CitzdGF0aWMgdm9pZCBkb21jcmVhdGVfdW5wcmVwYXJlX2NiKGxpYnhsX19lZ2MgKmVnYywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX211bHRpZGV2ICptdWx0aWRl
diwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IHJldCk7CisKIHN0YXRp
YyB2b2lkIGluaXRpYXRlX2RvbWFpbl9jcmVhdGUobGlieGxfX2VnYyAqZWdjLAogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fZG9tYWluX2NyZWF0ZV9zdGF0ZSAqZGNz
KQogewpAQCAtNjM3LDcgKzY0Nyw2IEBAIHN0YXRpYyB2b2lkIGluaXRpYXRlX2RvbWFpbl9jcmVh
dGUobGlieGxfX2VnYyAqZWdjLAogCiAgICAgLyogY29udmVuaWVuY2UgYWxpYXNlcyAqLwogICAg
IGxpYnhsX2RvbWFpbl9jb25maWcgKmNvbnN0IGRfY29uZmlnID0gZGNzLT5ndWVzdF9jb25maWc7
Ci0gICAgY29uc3QgaW50IHJlc3RvcmVfZmQgPSBkY3MtPnJlc3RvcmVfZmQ7CiAgICAgbWVtc2V0
KCZkY3MtPmJ1aWxkX3N0YXRlLCAwLCBzaXplb2YoZGNzLT5idWlsZF9zdGF0ZSkpOwogCiAgICAg
ZG9taWQgPSAwOwpAQCAtNjcwLDYgKzY3OSwzMyBAQCBzdGF0aWMgdm9pZCBpbml0aWF0ZV9kb21h
aW5fY3JlYXRlKGxpYnhsX19lZ2MgKmVnYywKICAgICAgICAgaWYgKHJldCkgZ290byBlcnJvcl9v
dXQ7CiAgICAgfQogCisgICAgbGlieGxfX211bHRpZGV2X2JlZ2luKGFvLCAmZGNzLT5tdWx0aWRl
dik7CisgICAgZGNzLT5tdWx0aWRldi5jYWxsYmFjayA9IGRvbWNyZWF0ZV9sYXVuY2hfYm9vdGxv
YWRlcjsKKyAgICBsaWJ4bF9fcHJlcGFyZV9kaXNrcyhlZ2MsIGFvLCBkb21pZCwgZF9jb25maWcs
ICZkY3MtPm11bHRpZGV2KTsKKyAgICBsaWJ4bF9fbXVsdGlkZXZfcHJlcGFyZWQoZWdjLCAmZGNz
LT5tdWx0aWRldiwgMCk7CisgICAgcmV0dXJuOworCitlcnJvcl9vdXQ6CisgICAgYXNzZXJ0KHJl
dCk7CisgICAgZG9tY3JlYXRlX2NvbXBsZXRlKGVnYywgZGNzLCByZXQpOworfQorCitzdGF0aWMg
dm9pZCBkb21jcmVhdGVfbGF1bmNoX2Jvb3Rsb2FkZXIobGlieGxfX2VnYyAqZWdjLAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19tdWx0aWRldiAqbXVsdGlk
ZXYsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IHJldCkKK3sK
KyAgICBsaWJ4bF9fZG9tYWluX2NyZWF0ZV9zdGF0ZSAqZGNzID0gQ09OVEFJTkVSX09GKG11bHRp
ZGV2LCAqZGNzLCBtdWx0aWRldik7CisgICAgU1RBVEVfQU9fR0MoZGNzLT5hbyk7CisKKyAgICAv
KiBjb252ZW5pZW5jZSBhbGlhc2VzICovCisgICAgY29uc3QgaW50IHJlc3RvcmVfZmQgPSBkY3Mt
PnJlc3RvcmVfZmQ7CisgICAgbGlieGxfZG9tYWluX2NvbmZpZyAqY29uc3QgZF9jb25maWcgPSBk
Y3MtPmd1ZXN0X2NvbmZpZzsKKworICAgIGlmIChyZXQpIHsKKyAgICAgICAgTE9HKEVSUk9SLCAi
dW5hYmxlIHRvIHByZXBhcmUgZGV2aWNlcyIpOworICAgICAgICBnb3RvIGVycm9yX291dDsKKyAg
ICB9CisKICAgICBkY3MtPmJsLmFvID0gYW87CiAgICAgbGlieGxfZGV2aWNlX2Rpc2sgKmJvb3Rk
aXNrID0KICAgICAgICAgZF9jb25maWctPm51bV9kaXNrcyA+IDAgPyAmZF9jb25maWctPmRpc2tz
WzBdIDogTlVMTDsKQEAgLTEyMDIsMTEgKzEyMzgsMzUgQEAgc3RhdGljIHZvaWQgZG9tY3JlYXRl
X2Rlc3RydWN0aW9uX2NiKGxpYnhsX19lZ2MgKmVnYywKIHsKICAgICBTVEFURV9BT19HQyhkZHMt
PmFvKTsKICAgICBsaWJ4bF9fZG9tYWluX2NyZWF0ZV9zdGF0ZSAqZGNzID0gQ09OVEFJTkVSX09G
KGRkcywgKmRjcywgZGRzKTsKKyAgICB1aW50MzJfdCBkb21pZCA9IGRjcy0+Z3Vlc3RfZG9taWQ7
CisgICAgbGlieGxfZG9tYWluX2NvbmZpZyAqY29uc3QgZF9jb25maWcgPSBkY3MtPmd1ZXN0X2Nv
bmZpZzsKIAogICAgIGlmIChyYykKICAgICAgICAgTE9HKEVSUk9SLCAidW5hYmxlIHRvIGRlc3Ry
b3kgZG9tYWluICV1IGZvbGxvd2luZyBmYWlsZWQgY3JlYXRpb24iLAogICAgICAgICAgICAgICAg
ICAgIGRkcy0+ZG9taWQpOwogCisgICAgLyoKKyAgICAgKiBXZSBtaWdodCBoYXZlIGRldmljZXMg
dGhhdCBoYXZlIGJlZW4gcHJlcGFyZWQsIGJ1dCB3aXRoIG5vCisgICAgICogZnJvbnRlbmQgeGVu
c3RvcmUgZW50cmllcywgc28gZG9tYWluIGRlc3RydWN0aW9uIGZhaWxzIHRvCisgICAgICogZmlu
ZCB0aGVtLCB0aGF0IGlzIHdoeSB3ZSBoYXZlIHRvIHVucHJlcGFyZSB0aGVtIG1hbnVhbGx5Lgor
ICAgICAqLworICAgIGxpYnhsX19tdWx0aWRldl9iZWdpbihhbywgJmRjcy0+bXVsdGlkZXYpOwor
ICAgIGRjcy0+bXVsdGlkZXYuY2FsbGJhY2sgPSBkb21jcmVhdGVfdW5wcmVwYXJlX2NiOworICAg
IGxpYnhsX191bnByZXBhcmVfZGlza3MoZWdjLCBhbywgZG9taWQsIGRfY29uZmlnLCAmZGNzLT5t
dWx0aWRldik7CisgICAgbGlieGxfX211bHRpZGV2X3ByZXBhcmVkKGVnYywgJmRjcy0+bXVsdGlk
ZXYsIDApOworICAgIHJldHVybjsKK30KKworc3RhdGljIHZvaWQgZG9tY3JlYXRlX3VucHJlcGFy
ZV9jYihsaWJ4bF9fZWdjICplZ2MsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGxpYnhsX19tdWx0aWRldiAqbXVsdGlkZXYsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIGludCByZXQpCit7CisgICAgbGlieGxfX2RvbWFpbl9jcmVhdGVfc3RhdGUgKmRjcyA9
IENPTlRBSU5FUl9PRihtdWx0aWRldiwgKmRjcywgbXVsdGlkZXYpOworICAgIFNUQVRFX0FPX0dD
KGRjcy0+YW8pOworCisgICAgaWYgKHJldCkKKyAgICAgICAgTE9HKEVSUk9SLCAidW5hYmxlIHRv
IHVucHJlcGFyZSBkZXZpY2VzIik7CisKICAgICBkY3MtPmNhbGxiYWNrKGVnYywgZGNzLCBFUlJP
Ul9GQUlMLCBkY3MtPmd1ZXN0X2RvbWlkKTsKIH0KIApkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwv
bGlieGxfZGV2aWNlLmMgYi90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwppbmRleCA4NWM5OTUz
Li5jZTk5ZDk5IDEwMDY0NAotLS0gYS90b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYworKysgYi90
b29scy9saWJ4bC9saWJ4bF9kZXZpY2UuYwpAQCAtNTIxLDE4ICs1MjEsMjEgQEAgdm9pZCBsaWJ4
bF9fbXVsdGlkZXZfcHJlcGFyZWQobGlieGxfX2VnYyAqZWdjLAogCiAvKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqLwogCi0vKiBNYWNybyBmb3IgZGVmaW5pbmcgdGhlIGZ1bmN0aW9ucyB0aGF0IHdpbGwg
YWRkIGEgYnVuY2ggb2YgZGlza3Mgd2hlbgorLyogTWFjcm8gZm9yIGRlZmluaW5nIHRoZSBmdW5j
dGlvbnMgdGhhdCB3aWxsIG9wZXJhdGUgYSBidW5jaCBvZiBkZXZpY2VzIHdoZW4KICAqIGluc2lk
ZSBhbiBhc3luYyBvcCB3aXRoIG11bHRpZGV2LgogICogVGhpcyBtYWNybyBpcyBhZGRlZCB0byBw
cmV2ZW50IHJlcGV0aXRpb24gb2YgY29kZS4KICAqCiAgKiBUaGUgZm9sbG93aW5nIGZ1bmN0aW9u
cyBhcmUgZGVmaW5lZDoKKyAqIGxpYnhsX19wcmVwYXJlX2Rpc2tzCisgKiBsaWJ4bF9fdW5wcmVw
YXJlX2Rpc2tzCiAgKiBsaWJ4bF9fYWRkX2Rpc2tzCiAgKiBsaWJ4bF9fYWRkX25pY3MKICAqIGxp
YnhsX19hZGRfdnRwbXMKICAqLwogCi0jZGVmaW5lIERFRklORV9ERVZJQ0VTX0FERCh0eXBlKSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCi0gICAgdm9pZCBsaWJ4bF9f
YWRkXyMjdHlwZSMjcyhsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hbyAqYW8sIHVpbnQzMl90IGRv
bWlkLCBcCisjZGVmaW5lIERFRklORV9ERVZJQ0VTX0ZVTkModHlwZSwgb3ApICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgdm9pZCBsaWJ4bF9fIyNvcCMjXyMjdHlwZSMj
cyhsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hbyAqYW8sICAgICAgICBcCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICB1aW50MzJfdCBkb21pZCwgICAgICAgICAgICAgICAgICAgICAgICAg
ICBcCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fY29uZmlnICpk
X2NvbmZpZywgICAgICAgICAgICBcCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4
bF9fbXVsdGlkZXYgKm11bHRpZGV2KSAgICAgICAgICAgICAgICBcCiAgICAgeyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CkBAIC01NDAsMTYgKzU0MywxOSBAQCB2b2lkIGxpYnhsX19tdWx0aWRldl9wcmVwYXJlZChsaWJ4
bF9fZWdjICplZ2MsCiAgICAgICAgIGludCBpOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgIGZvciAoaSA9IDA7IGkgPCBk
X2NvbmZpZy0+bnVtXyMjdHlwZSMjczsgaSsrKSB7ICAgICAgICAgICAgICAgICBcCiAgICAgICAg
ICAgICBsaWJ4bF9fYW9fZGV2aWNlICphb2RldiA9IGxpYnhsX19tdWx0aWRldl9wcmVwYXJlKG11
bHRpZGV2KTsgIFwKLSAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfIyN0eXBlIyNfYWRkKGVnYywg
ZG9taWQsICZkX2NvbmZpZy0+dHlwZSMjc1tpXSwgXAorICAgICAgICAgICAgbGlieGxfX2Rldmlj
ZV8jI3R5cGUjI18jI29wKGVnYywgZG9taWQsICZkX2NvbmZpZy0+dHlwZSMjc1tpXSwgXAogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYW9kZXYpOyAgICAgICAgICAgICAg
ICAgICAgICAgICAgXAogICAgICAgICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgIH0KIAotREVGSU5FX0RFVklDRVNf
QUREKGRpc2spCi1ERUZJTkVfREVWSUNFU19BREQobmljKQotREVGSU5FX0RFVklDRVNfQUREKHZ0
cG0pCitERUZJTkVfREVWSUNFU19GVU5DKGRpc2ssIGFkZCkKK0RFRklORV9ERVZJQ0VTX0ZVTkMo
bmljLCBhZGQpCitERUZJTkVfREVWSUNFU19GVU5DKHZ0cG0sIGFkZCkKIAotI3VuZGVmIERFRklO
RV9ERVZJQ0VTX0FERAorREVGSU5FX0RFVklDRVNfRlVOQyhkaXNrLCBwcmVwYXJlKQorREVGSU5F
X0RFVklDRVNfRlVOQyhkaXNrLCB1bnByZXBhcmUpCisKKyN1bmRlZiBERUZJTkVfREVWSUNFU19G
VU5DCiAKIC8qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKiovCiAKQEAgLTU1Nyw2ICs1NjMsNyBAQCBpbnQg
bGlieGxfX2RldmljZV9kZXN0cm95KGxpYnhsX19nYyAqZ2MsIGxpYnhsX19kZXZpY2UgKmRldikK
IHsKICAgICBjb25zdCBjaGFyICpiZV9wYXRoID0gbGlieGxfX2RldmljZV9iYWNrZW5kX3BhdGgo
Z2MsIGRldik7CiAgICAgY29uc3QgY2hhciAqZmVfcGF0aCA9IGxpYnhsX19kZXZpY2VfZnJvbnRl
bmRfcGF0aChnYywgZGV2KTsKKyAgICBjb25zdCBjaGFyICpob3RwbHVnX3BhdGggPSBsaWJ4bF9f
ZGV2aWNlX3hzX2hvdHBsdWdfcGF0aChnYywgZGV2KTsKICAgICBjb25zdCBjaGFyICp0YXBkaXNr
X3BhdGggPSBHQ1NQUklOVEYoIiVzLyVzIiwgYmVfcGF0aCwgInRhcGRpc2stcGFyYW1zIik7CiAg
ICAgY29uc3QgY2hhciAqdGFwZGlza19wYXJhbXM7CiAgICAgeHNfdHJhbnNhY3Rpb25fdCB0ID0g
MDsKQEAgLTU3Miw2ICs1NzksNyBAQCBpbnQgbGlieGxfX2RldmljZV9kZXN0cm95KGxpYnhsX19n
YyAqZ2MsIGxpYnhsX19kZXZpY2UgKmRldikKIAogICAgICAgICBsaWJ4bF9feHNfcGF0aF9jbGVh
bnVwKGdjLCB0LCBmZV9wYXRoKTsKICAgICAgICAgbGlieGxfX3hzX3BhdGhfY2xlYW51cChnYywg
dCwgYmVfcGF0aCk7CisgICAgICAgIGxpYnhsX194c19wYXRoX2NsZWFudXAoZ2MsIHQsIGhvdHBs
dWdfcGF0aCk7CiAKICAgICAgICAgcmMgPSBsaWJ4bF9feHNfdHJhbnNhY3Rpb25fY29tbWl0KGdj
LCAmdCk7CiAgICAgICAgIGlmICghcmMpIGJyZWFrOwpAQCAtNjQ0LDYgKzY1MiwxNCBAQCB2b2lk
IGxpYnhsX19kZXZpY2VzX2Rlc3Ryb3kobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fZGV2aWNlc19y
ZW1vdmVfc3RhdGUgKmRycykKICAgICAgICAgICAgICAgICAgICAgY29udGludWU7CiAgICAgICAg
ICAgICAgICAgfQogICAgICAgICAgICAgICAgIGFvZGV2ID0gbGlieGxfX211bHRpZGV2X3ByZXBh
cmUobXVsdGlkZXYpOworICAgICAgICAgICAgICAgIC8qCisgICAgICAgICAgICAgICAgICogQ2hl
Y2sgaWYgdGhlIGRldmljZSBoYXMgaG90cGx1ZyBlbnRyaWVzIGluIHhlbnN0b3JlLAorICAgICAg
ICAgICAgICAgICAqIHdoaWNoIHdvdWxkIG1lYW4gaXQncyB1c2luZyB0aGUgbmV3IGhvdHBsdWcg
Y2FsbGluZworICAgICAgICAgICAgICAgICAqIGNvbnZlbnRpb24uCisgICAgICAgICAgICAgICAg
ICovCisgICAgICAgICAgICAgICAgcGF0aCA9IGxpYnhsX19kZXZpY2VfeHNfaG90cGx1Z19wYXRo
KGdjLCBkZXYpOworICAgICAgICAgICAgICAgIGlmIChsaWJ4bF9feHNfcmVhZChnYywgWEJUX05V
TEwsIHBhdGgpKQorICAgICAgICAgICAgICAgICAgICBhb2Rldi0+aG90cGx1Z192ZXJzaW9uID0g
MTsKICAgICAgICAgICAgICAgICBhb2Rldi0+YWN0aW9uID0gREVWSUNFX0RJU0NPTk5FQ1Q7CiAg
ICAgICAgICAgICAgICAgYW9kZXYtPmRldiA9IGRldjsKICAgICAgICAgICAgICAgICBhb2Rldi0+
Zm9yY2UgPSBkcnMtPmZvcmNlOwpAQCAtNjk2LDggKzcxMiw2IEBAIHN0YXRpYyB2b2lkIGRldmlj
ZV9iYWNrZW5kX2NhbGxiYWNrKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2V2X2RldnN0YXRlICpk
cywKIHN0YXRpYyB2b2lkIGRldmljZV9iYWNrZW5kX2NsZWFudXAobGlieGxfX2djICpnYywKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYp
OwogCi1zdGF0aWMgdm9pZCBkZXZpY2VfaG90cGx1ZyhsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19h
b19kZXZpY2UgKmFvZGV2KTsKLQogc3RhdGljIHZvaWQgZGV2aWNlX2hvdHBsdWdfdGltZW91dF9j
YihsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19ldl90aW1lICpldiwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IHRpbWV2YWwgKnJlcXVlc3RlZF9hYnMp
OwogCkBAIC03MzIsNyArNzQ2LDcgQEAgdm9pZCBsaWJ4bF9fd2FpdF9kZXZpY2VfY29ubmVjdGlv
bihsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hb19kZXZpY2UgKmFvZGV2KQogICAgICAgICAgKiBJ
ZiBRZW11IGlzIHJ1bm5pbmcsIGl0IHdpbGwgc2V0IHRoZSBzdGF0ZSBvZiB0aGUgZGV2aWNlIHRv
CiAgICAgICAgICAqIDQgZGlyZWN0bHksIHdpdGhvdXQgd2FpdGluZyBpbiBzdGF0ZSAyIGZvciBh
bnkgaG90cGx1ZyBleGVjdXRpb24uCiAgICAgICAgICAqLwotICAgICAgICBkZXZpY2VfaG90cGx1
ZyhlZ2MsIGFvZGV2KTsKKyAgICAgICAgbGlieGxfX2RldmljZV9ob3RwbHVnKGVnYywgYW9kZXYp
OwogICAgICAgICByZXR1cm47CiAgICAgfQogCkBAIC04NjQsNyArODc4LDcgQEAgc3RhdGljIHZv
aWQgZGV2aWNlX3FlbXVfdGltZW91dChsaWJ4bF9fZWdjICplZ2MsIGxpYnhsX19ldl90aW1lICpl
diwKICAgICByYyA9IGxpYnhsX194c193cml0ZV9jaGVja2VkKGdjLCBYQlRfTlVMTCwgc3RhdGVf
cGF0aCwgIjYiKTsKICAgICBpZiAocmMpIGdvdG8gb3V0OwogCi0gICAgZGV2aWNlX2hvdHBsdWco
ZWdjLCBhb2Rldik7CisgICAgbGlieGxfX2RldmljZV9ob3RwbHVnKGVnYywgYW9kZXYpOwogICAg
IHJldHVybjsKIAogb3V0OgpAQCAtODkzLDcgKzkwNyw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9i
YWNrZW5kX2NhbGxiYWNrKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2V2X2RldnN0YXRlICpkcywK
ICAgICAgICAgZ290byBvdXQ7CiAgICAgfQogCi0gICAgZGV2aWNlX2hvdHBsdWcoZWdjLCBhb2Rl
dik7CisgICAgbGlieGxfX2RldmljZV9ob3RwbHVnKGVnYywgYW9kZXYpOwogICAgIHJldHVybjsK
IAogb3V0OgpAQCAtOTA4LDcgKzkyMiw3IEBAIHN0YXRpYyB2b2lkIGRldmljZV9iYWNrZW5kX2Ns
ZWFudXAobGlieGxfX2djICpnYywgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpCiAgICAgbGlieGxf
X2V2X2RldnN0YXRlX2NhbmNlbChnYywgJmFvZGV2LT5iYWNrZW5kX2RzKTsKIH0KIAotc3RhdGlj
IHZvaWQgZGV2aWNlX2hvdHBsdWcobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW9fZGV2aWNlICph
b2RldikKK3ZvaWQgbGlieGxfX2RldmljZV9ob3RwbHVnKGxpYnhsX19lZ2MgKmVnYywgbGlieGxf
X2FvX2RldmljZSAqYW9kZXYpCiB7CiAgICAgU1RBVEVfQU9fR0MoYW9kZXYtPmFvKTsKICAgICBj
aGFyICpiZV9wYXRoID0gbGlieGxfX2RldmljZV9iYWNrZW5kX3BhdGgoZ2MsIGFvZGV2LT5kZXYp
OwpAQCAtMTAxMiwxMCArMTAyNiwxMyBAQCBzdGF0aWMgdm9pZCBkZXZpY2VfaG90cGx1Z19jaGls
ZF9kZWF0aF9jYihsaWJ4bF9fZWdjICplZ2MsCiAgICAgICAgIGlmIChob3RwbHVnX2Vycm9yKQog
ICAgICAgICAgICAgTE9HKEVSUk9SLCAic2NyaXB0OiAlcyIsIGhvdHBsdWdfZXJyb3IpOwogICAg
ICAgICBhb2Rldi0+cmMgPSBFUlJPUl9GQUlMOwotICAgICAgICBpZiAoYW9kZXYtPmFjdGlvbiA9
PSBERVZJQ0VfQ09OTkVDVCkKKyAgICAgICAgaWYgKGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX0NP
Tk5FQ1QgfHwKKyAgICAgICAgICAgIGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX1BSRVBBUkUgfHwK
KyAgICAgICAgICAgIGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX0xPQ0FMQVRUQUNIKQogICAgICAg
ICAgICAgLyoKLSAgICAgICAgICAgICAqIE9ubHkgZmFpbCBvbiBkZXZpY2UgY29ubmVjdGlvbiwg
b24gZGlzY29ubmVjdGlvbgotICAgICAgICAgICAgICogaWdub3JlIGVycm9yLCBhbmQgY29udGlu
dWUgd2l0aCB0aGUgcmVtb3ZlIHByb2Nlc3MKKyAgICAgICAgICAgICAqIE9ubHkgZmFpbCBvbiBk
ZXZpY2UgY29ubmVjdGlvbiwgcHJlcGFyZSBvciBsb2NhbGF0dGFjaCwKKyAgICAgICAgICAgICAq
IG9uIG90aGVyIGNhc2VzIGlnbm9yZSBlcnJvciwgYW5kIGNvbnRpbnVlIHdpdGggdGhlIHJlbW92
ZQorICAgICAgICAgICAgICogcHJvY2Vzcy4KICAgICAgICAgICAgICAqLwogICAgICAgICAgICAg
IGdvdG8gZXJyb3I7CiAgICAgfQpAQCAtMTAyNSw3ICsxMDQyLDcgQEAgc3RhdGljIHZvaWQgZGV2
aWNlX2hvdHBsdWdfY2hpbGRfZGVhdGhfY2IobGlieGxfX2VnYyAqZWdjLAogICAgICAqIGRldmlj
ZV9ob3RwbHVnX2RvbmUgYnJlYWtpbmcgdGhlIGxvb3AuCiAgICAgICovCiAgICAgYW9kZXYtPm51
bV9leGVjKys7Ci0gICAgZGV2aWNlX2hvdHBsdWcoZWdjLCBhb2Rldik7CisgICAgbGlieGxfX2Rl
dmljZV9ob3RwbHVnKGVnYywgYW9kZXYpOwogCiAgICAgcmV0dXJuOwogCkBAIC0xMDQxLDggKzEw
NTgsMzMgQEAgc3RhdGljIHZvaWQgZGV2aWNlX2hvdHBsdWdfZG9uZShsaWJ4bF9fZWdjICplZ2Ms
IGxpYnhsX19hb19kZXZpY2UgKmFvZGV2KQogCiAgICAgZGV2aWNlX2hvdHBsdWdfY2xlYW4oZ2Ms
IGFvZGV2KTsKIAotICAgIC8qIENsZWFuIHhlbnN0b3JlIGlmIGl0J3MgYSBkaXNjb25uZWN0aW9u
ICovCiAgICAgaWYgKGFvZGV2LT5hY3Rpb24gPT0gREVWSUNFX0RJU0NPTk5FQ1QpIHsKKyAgICAg
ICAgc3dpdGNoIChhb2Rldi0+aG90cGx1Z192ZXJzaW9uKSB7CisgICAgICAgIGNhc2UgMDoKKyAg
ICAgICAgICAgIC8qIENsZWFuIHhlbnN0b3JlIGlmIGl0J3MgYSBkaXNjb25uZWN0aW9uICovCisg
ICAgICAgICAgICByYyA9IGxpYnhsX19kZXZpY2VfZGVzdHJveShnYywgYW9kZXYtPmRldik7Cisg
ICAgICAgICAgICBpZiAoIWFvZGV2LT5yYykKKyAgICAgICAgICAgICAgICBhb2Rldi0+cmMgPSBy
YzsKKyAgICAgICAgICAgIGJyZWFrOworICAgICAgICBjYXNlIDE6CisgICAgICAgICAgICAvKgor
ICAgICAgICAgICAgICogQ2hhaW4gdW5wcmVwYXJlIGhvdHBsdWcgZXhlY3V0aW9uCisgICAgICAg
ICAgICAgKiBhZnRlciBkaXNjb25uZWN0aW9uIG9mIGRldmljZS4KKyAgICAgICAgICAgICAqLwor
ICAgICAgICAgICAgYW9kZXYtPm51bV9leGVjID0gMDsKKyAgICAgICAgICAgIGFvZGV2LT5hY3Rp
b24gPSBERVZJQ0VfVU5QUkVQQVJFOworICAgICAgICAgICAgbGlieGxfX2RldmljZV9ob3RwbHVn
KGVnYywgYW9kZXYpOworICAgICAgICAgICAgcmV0dXJuOworICAgICAgICBkZWZhdWx0OgorICAg
ICAgICAgICAgTE9HKEVSUk9SLCAidW5rbm93biBob3RwbHVnIHNjcmlwdCB2ZXJzaW9uICglZCki
LAorICAgICAgICAgICAgICAgICAgICAgICBhb2Rldi0+aG90cGx1Z192ZXJzaW9uKTsKKyAgICAg
ICAgICAgIGlmICghYW9kZXYtPnJjKQorICAgICAgICAgICAgICAgIGFvZGV2LT5yYyA9IEVSUk9S
X0ZBSUw7CisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgfQorICAgIH0KKyAgICAvKiBDbGVh
biBob3RwbHVnIHhlbnN0b3JlIHBhdGggKi8KKyAgICBpZiAoYW9kZXYtPmFjdGlvbiA9PSBERVZJ
Q0VfVU5QUkVQQVJFKSB7CiAgICAgICAgIHJjID0gbGlieGxfX2RldmljZV9kZXN0cm95KGdjLCBh
b2Rldi0+ZGV2KTsKICAgICAgICAgaWYgKCFhb2Rldi0+cmMpCiAgICAgICAgICAgICBhb2Rldi0+
cmMgPSByYzsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmggYi90b29s
cy9saWJ4bC9saWJ4bF9pbnRlcm5hbC5oCmluZGV4IGM0MWU2MDguLjE2OTA3ZmYgMTAwNjQ0Ci0t
LSBhL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmgKKysrIGIvdG9vbHMvbGlieGwvbGlieGxf
aW50ZXJuYWwuaApAQCAtMjAxOCw2ICsyMDE4LDEyIEBAIHN0cnVjdCBsaWJ4bF9fbXVsdGlkZXYg
ewogX2hpZGRlbiB2b2lkIGxpYnhsX19kZXZpY2VfZGlza19hZGQobGlieGxfX2VnYyAqZWdjLCB1
aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhs
X2RldmljZV9kaXNrICpkaXNrLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bGlieGxfX2FvX2RldmljZSAqYW9kZXYpOworX2hpZGRlbiB2b2lkIGxpYnhsX19kZXZpY2VfZGlz
a19wcmVwYXJlKGxpYnhsX19lZ2MgKmVnYywgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZGV2aWNlX2Rpc2sgKmRpc2ssCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9k
ZXYpOworX2hpZGRlbiB2b2lkIGxpYnhsX19kZXZpY2VfZGlza191bnByZXBhcmUobGlieGxfX2Vn
YyAqZWdjLCB1aW50MzJfdCBkb21pZCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGxpYnhsX2RldmljZV9kaXNrICpkaXNrLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpOwogCiAvKiBBTyBv
cGVyYXRpb24gdG8gY29ubmVjdCBhIG5pYyBkZXZpY2UgKi8KIF9oaWRkZW4gdm9pZCBsaWJ4bF9f
ZGV2aWNlX25pY19hZGQobGlieGxfX2VnYyAqZWdjLCB1aW50MzJfdCBkb21pZCwKQEAgLTIwOTQs
NiArMjEwMCwxOSBAQCBfaGlkZGVuIGludCBsaWJ4bF9fZ2V0X2hvdHBsdWdfc2NyaXB0X2luZm8o
bGlieGxfX2djICpnYywgbGlieGxfX2RldmljZSAqZGV2LAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX19kZXZpY2VfYWN0aW9uIGFjdGlvbiwKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgbnVtX2V4ZWMsIGludCBo
b3RwbHVnX3ZlcnNpb24pOwogCisvKgorICogbGlieGxfX2RldmljZV9ob3RwbHVnIHJ1bnMgdGhl
IGhvdHBsdWcgc2NyaXB0IGFzc29jaWF0ZWQKKyAqIHdpdGggdGhlIGRldmljZSBwYXNzZWQgaW4g
YW9kZXYtPmRldi4KKyAqCisgKiBUaGUgbGlieGxfX2FvX2RldmljZSBwYXNzZWQgdG8gdGhpcyBm
dW5jdGlvbiBzaG91bGQgYmUKKyAqIHByZXBhcmVkIHVzaW5nIGxpYnhsX19wcmVwYXJlX2FvX2Rl
dmljZSBwcmlvciB0byBjYWxsaW5nCisgKiB0aGlzIGZ1bmN0aW9uLgorICoKKyAqIE9uY2UgZmlu
aXNoZWQsIGFvZGV2LT5jYWxsYmFjayB3aWxsIGJlIGV4ZWN1dGVkLgorICovCitfaGlkZGVuIHZv
aWQgbGlieGxfX2RldmljZV9ob3RwbHVnKGxpYnhsX19lZ2MgKmVnYywKKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgbGlieGxfX2FvX2RldmljZSAqYW9kZXYpOworCiAvKi0tLS0t
IGxvY2FsIGRpc2sgYXR0YWNoOiBhdHRhY2ggYSBkaXNrIGxvY2FsbHkgdG8gcnVuIHRoZSBib290
bG9hZGVyIC0tLS0tKi8KIAogdHlwZWRlZiBzdHJ1Y3QgbGlieGxfX2Rpc2tfbG9jYWxfc3RhdGUg
bGlieGxfX2Rpc2tfbG9jYWxfc3RhdGU7CkBAIC0yNDY3LDYgKzI0ODYsMjEgQEAgX2hpZGRlbiB2
b2lkIGxpYnhsX19hZGRfZGlza3MobGlieGxfX2VnYyAqZWdjLCBsaWJ4bF9fYW8gKmFvLCB1aW50
MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RvbWFpbl9j
b25maWcgKmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfX211
bHRpZGV2ICptdWx0aWRldik7CiAKK19oaWRkZW4gdm9pZCBsaWJ4bF9fcHJlcGFyZV9kaXNrcyhs
aWJ4bF9fZWdjICplZ2MsIGxpYnhsX19hbyAqYW8sCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25maWcsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgbGlieGxfX211bHRpZGV2ICptdWx0aWRldik7CisKK19oaWRkZW4gdm9pZCBs
aWJ4bF9fcHJlcGFyZV91bmRpc2tzKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2FvICphbywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IGRvbWlkLAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25m
aWcsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fbXVsdGlkZXYg
Km11bHRpZGV2KTsKKworX2hpZGRlbiB2b2lkIGxpYnhsX191bnByZXBhcmVfZGlza3MobGlieGxf
X2VnYyAqZWdjLCBsaWJ4bF9fYW8gKmFvLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgdWludDMyX3QgZG9taWQsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBsaWJ4bF9kb21haW5fY29uZmlnICpkX2NvbmZpZywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX19tdWx0aWRldiAqbXVsdGlkZXYpOworCiBfaGlkZGVuIHZvaWQg
bGlieGxfX2FkZF9uaWNzKGxpYnhsX19lZ2MgKmVnYywgbGlieGxfX2FvICphbywgdWludDMyX3Qg
ZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpYnhsX2RvbWFpbl9jb25maWcg
KmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9fbXVsdGlkZXYg
Km11bHRpZGV2KTsKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX3R5cGVzLmlkbCBiL3Rv
b2xzL2xpYnhsL2xpYnhsX3R5cGVzLmlkbAppbmRleCA3ZWFjNGE4Li4yNzJmZjU0IDEwMDY0NAot
LS0gYS90b29scy9saWJ4bC9saWJ4bF90eXBlcy5pZGwKKysrIGIvdG9vbHMvbGlieGwvbGlieGxf
dHlwZXMuaWRsCkBAIC0zNjYsNiArMzY2LDcgQEAgbGlieGxfZGV2aWNlX2Rpc2sgPSBTdHJ1Y3Qo
ImRldmljZV9kaXNrIiwgWwogICAgICgicmVtb3ZhYmxlIiwgaW50ZWdlciksCiAgICAgKCJyZWFk
d3JpdGUiLCBpbnRlZ2VyKSwKICAgICAoImlzX2Nkcm9tIiwgaW50ZWdlciksCisgICAgKCJob3Rw
bHVnX3ZlcnNpb24iLCBpbnRlZ2VyKSwKICAgICBdKQogCiBsaWJ4bF9kZXZpY2VfbmljID0gU3Ry
dWN0KCJkZXZpY2VfbmljIiwgWwpkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwvbGlieGx1X2Rpc2tf
bC5sIGIvdG9vbHMvbGlieGwvbGlieGx1X2Rpc2tfbC5sCmluZGV4IGJlZTE2YTEuLjQ0ODZjMWEg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsdV9kaXNrX2wubAorKysgYi90b29scy9saWJ4
bC9saWJ4bHVfZGlza19sLmwKQEAgLTE3Miw2ICsxNzIsOCBAQCBiYWNrZW5kdHlwZT1bXixdKiw/
IHsgU1RSSVAoJywnKTsgc2V0YmFja2VuZHR5cGUoRFBDLEZST01FUVVBTFMpOyB9CiAKIHZkZXY9
W14sXSosPwl7IFNUUklQKCcsJyk7IFNBVkVTVFJJTkcoInZkZXYiLCB2ZGV2LCBGUk9NRVFVQUxT
KTsgfQogc2NyaXB0PVteLF0qLD8JeyBTVFJJUCgnLCcpOyBTQVZFU1RSSU5HKCJzY3JpcHQiLCBz
Y3JpcHQsIEZST01FUVVBTFMpOyB9CittZXRob2Q9W14sXSosPwl7IFNUUklQKCcsJyk7IFNBVkVT
VFJJTkcoInNjcmlwdCIsIHNjcmlwdCwgRlJPTUVRVUFMUyk7CisJCSAgRFBDLT5kaXNrLT5ob3Rw
bHVnX3ZlcnNpb24gPSAxOyB9CiAKICAvKiB0aGUgdGFyZ2V0IG1hZ2ljIHBhcmFtZXRlciwgZWF0
cyB0aGUgcmVzdCBvZiB0aGUgc3RyaW5nICovCiAKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikK
CgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2
ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4u
b3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xY-0002De-0i; Fri, 21 Dec 2012 17:00:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cc-DX
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [85.158.143.99:34496] by server-3.bemta-4.messagelabs.com id
	77/E0-18211-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25737 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309220"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:19 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:18 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:06 +0100
Message-ID: <1356109208-6830-9-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 08/10] libxl: add new hotplug interface
	support for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

UmVhZHMgdGhlIHJlc3VsdHMgb2YgdGhlIGhvdHBsdWcgc2NyaXB0IGV4ZWN1dGlvbiBhbmQgY2hh
bmdlcwpwZGV2X3BhdGggbGlieGxfZGV2aWNlX2Rpc2sgdG8gcG9pbnQgdG8gdGhlIGJsb2NrIGRl
dmljZS4gVGhpcyB3aWxsIGJlCnVzZWQgbGF0ZXIgd2hlbiBRZW11IGlzIGxhdW5jaGVkLgoKU2ln
bmVkLW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+Ci0tLQog
dG9vbHMvbGlieGwvbGlieGxfY3JlYXRlLmMgfCAgIDMzICsrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKwogMSBmaWxlcyBjaGFuZ2VkLCAzMyBpbnNlcnRpb25zKCspLCAwIGRlbGV0aW9u
cygtKQoKZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL2xpYnhsX2NyZWF0ZS5jIGIvdG9vbHMvbGli
eGwvbGlieGxfY3JlYXRlLmMKaW5kZXggYzE3Njk2Ny4uMWQ1NDEwZiAxMDA2NDQKLS0tIGEvdG9v
bHMvbGlieGwvbGlieGxfY3JlYXRlLmMKKysrIGIvdG9vbHMvbGlieGwvbGlieGxfY3JlYXRlLmMK
QEAgLTk2NCw2ICs5NjQsMTAgQEAgc3RhdGljIHZvaWQgZG9tY3JlYXRlX2xhdW5jaF9kbShsaWJ4
bF9fZWdjICplZ2MsIGxpYnhsX19tdWx0aWRldiAqbXVsdGlkZXYsCiAgICAgbGlieGxfX2RvbWFp
bl9jcmVhdGVfc3RhdGUgKmRjcyA9IENPTlRBSU5FUl9PRihtdWx0aWRldiwgKmRjcywgbXVsdGlk
ZXYpOwogICAgIFNUQVRFX0FPX0dDKGRjcy0+YW8pOwogICAgIGludCBpOworICAgIGNoYXIgKmhv
dHBsdWdfcGF0aDsKKyAgICBjb25zdCBjaGFyICpwZGV2OworICAgIGxpYnhsX19kZXZpY2UgZGV2
OworICAgIGxpYnhsX2RldmljZV9kaXNrICpkaXNrOwogCiAgICAgLyogY29udmVuaWVuY2UgYWxp
YXNlcyAqLwogICAgIGNvbnN0IHVpbnQzMl90IGRvbWlkID0gZGNzLT5ndWVzdF9kb21pZDsKQEAg
LTk3NSw2ICs5NzksMzUgQEAgc3RhdGljIHZvaWQgZG9tY3JlYXRlX2xhdW5jaF9kbShsaWJ4bF9f
ZWdjICplZ2MsIGxpYnhsX19tdWx0aWRldiAqbXVsdGlkZXYsCiAgICAgICAgIGdvdG8gZXJyb3Jf
b3V0OwogICAgIH0KIAorICAgIC8qCisgICAgICogSWYgdXNpbmcgdGhlIG5ldyBob3RwbHVnIGlu
dGVyZmFjZSBjaGFuZ2UgZGlza3MgcGRldl9wYXRoCisgICAgICogdG8gcG9pbnQgdG8gdGhlIHJl
YWwgcGh5c2ljYWwgcGF0aCwgc28gaXQgY2FuIGJlIHVzZWQgYnkgUWVtdS4KKyAgICAgKi8KKyAg
ICBmb3IgKGkgPSAwOyBpIDwgZF9jb25maWctPm51bV9kaXNrczsgaSsrKSB7CisgICAgICAgIGRp
c2sgPSAmZF9jb25maWctPmRpc2tzW2ldOworICAgICAgICBpZiAoZGlzay0+aG90cGx1Z192ZXJz
aW9uICE9IDApIHsKKyAgICAgICAgICAgIC8qIFVwZGF0ZSBwZGV2X3BhdGggKi8KKyAgICAgICAg
ICAgIHJldCA9IGxpYnhsX19kZXZpY2VfZnJvbV9kaXNrKGdjLCBkb21pZCwgZGlzaywgJmRldik7
CisgICAgICAgICAgICBpZiAocmV0ICE9IDApIHsKKyAgICAgICAgICAgICAgICBMT0coRVJST1Is
ICJJbnZhbGlkIG9yIHVuc3VwcG9ydGVkIHZpcnR1YWwgZGlzayBpZGVudGlmaWVyICVzIiwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGRpc2stPnZkZXYpOworICAgICAgICAgICAgICAgIGdv
dG8gZXJyb3Jfb3V0OworICAgICAgICAgICAgfQorICAgICAgICAgICAgaG90cGx1Z19wYXRoID0g
bGlieGxfX2RldmljZV94c19ob3RwbHVnX3BhdGgoZ2MsICZkZXYpOworICAgICAgICAgICAgcmV0
ID0gbGlieGxfX3hzX3JlYWRfY2hlY2tlZChnYywgWEJUX05VTEwsCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIEdDU1BSSU5URigiJXMvcGRldiIsIGhvdHBsdWdfcGF0
aCksCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICZwZGV2KTsKKyAg
ICAgICAgICAgIGlmIChyZXQpCisgICAgICAgICAgICAgICAgZ290byBlcnJvcl9vdXQ7CisgICAg
ICAgICAgICBpZiAoIXBkZXYpIHsKKyAgICAgICAgICAgICAgICByZXQgPSBFUlJPUl9GQUlMOwor
ICAgICAgICAgICAgICAgIGdvdG8gZXJyb3Jfb3V0OworICAgICAgICAgICAgfQorICAgICAgICAg
ICAgZnJlZShkaXNrLT5wZGV2X3BhdGgpOworICAgICAgICAgICAgZGlzay0+cGRldl9wYXRoID0g
bGlieGxfX3N0cmR1cChOT0dDLCBwZGV2KTsKKyAgICAgICAgfQorICAgIH0KKwogICAgIGZvciAo
aSA9IDA7IGkgPCBkX2NvbmZpZy0+Yl9pbmZvLm51bV9pb3BvcnRzOyBpKyspIHsKICAgICAgICAg
bGlieGxfaW9wb3J0X3JhbmdlICppbyA9ICZkX2NvbmZpZy0+Yl9pbmZvLmlvcG9ydHNbaV07CiAK
LS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0
cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xa-0002FV-TR; Fri, 21 Dec 2012 17:00:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xX-0002Cs-Tg
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:32 +0000
Received: from [193.109.254.147:27433] by server-8.bemta-14.messagelabs.com id
	B5/42-26341-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11585 invoked from network); 21 Dec 2012 17:00:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309211"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:15 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:15 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 17:59:58 +0100
Message-ID: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH RFC 00/10] libxl: new hotplug calling convention
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series implements a new hotplug calling convention for libxl.

The aim of this new convention is to reduce the blackout phase of
migration when using complex hotplug scripts, such as iSCSI or other
kinds of storage backends that might have a non-trivial setup time.

There are some issues that I would like to discuss. The first one is
that the pdev_path field in libxl_device_disk is no longer used to
store a physical path, since the diskspec "target" can now contain
arbitrary information needed to connect a block device.

To solve this I would like to introduce a new field in
libxl_device_disk called "target", which will be used to store the
diskspec target parameter. This can later be copied to pdev_path when
the old hotplug calling convention is in use.
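The fallback described above, copying the diskspec target into
pdev_path when a disk uses the old hotplug calling convention, could
be sketched roughly as follows. The struct and helper below are
hypothetical simplifications for illustration only, not the actual
libxl IDL definitions:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-in for libxl_device_disk; only the
 * fields discussed above, with names following this proposal. */
typedef struct {
    char *pdev_path;      /* physical path used by old-style scripts */
    char *target;         /* proposed field: raw diskspec "target" */
    int hotplug_version;  /* 0 = old convention, 1 = new convention */
} demo_device_disk;

/* With the old hotplug calling convention the target is a plain
 * physical path, so copy it into pdev_path if that is still unset. */
static void demo_disk_fixup_pdev_path(demo_device_disk *disk)
{
    if (disk->hotplug_version == 0 && !disk->pdev_path && disk->target) {
        size_t len = strlen(disk->target) + 1;
        disk->pdev_path = malloc(len);
        if (disk->pdev_path)
            memcpy(disk->pdev_path, disk->target, len);
    }
}
```

With the new convention (hotplug_version == 1) the helper leaves
pdev_path alone, since the hotplug script is responsible for
publishing the block device path.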

The second issue is related to iSCSI and iscsiadm, specifically the
way to set authentication parameters, which is done either with
command line parameters or by editing configuration files (which each
distro seems to place in a different location). I will look into this
to see if we can find a suitable solution.

Thanks for the comments, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xZ-0002EH-8z; Fri, 21 Dec 2012 17:00:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Ck-JM
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [193.109.254.147:4024] by server-13.bemta-14.messagelabs.com id
	57/FD-01725-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356109227!9192953!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11630 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309213"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:16 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:16 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:00 +0100
Message-ID: <1356109208-6830-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 02/10] libxl: add new hotplug interface
	support to hotplug script callers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support to call hotplug scripts that use the new hotplug interface
to the low-level functions in charge of calling hotplug scripts. This
new calling convention will be used when the hotplug_version aodev
field is 1.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/libxl/libxl_device.c   |   33 +++++++++++++++++++-
 tools/libxl/libxl_internal.h |   33 ++++++++++++++++++--
 tools/libxl/libxl_linux.c    |   68 +++++++++++++++++++++++++++++----------
 tools/libxl/libxl_netbsd.c   |    2 +-
 4 files changed, 115 insertions(+), 21 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 58d3f35..85c9953 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -83,6 +83,35 @@ out:
     return rc;
 }
 
+char *libxl__device_hotplug_action(libxl__gc *gc,
+                                   libxl__device_action action)
+{
+    switch (action) {
+    case DEVICE_CONNECT:
+        return "add";
+    case DEVICE_DISCONNECT:
+        return "remove";
+    case DEVICE_PREPARE:
+        return "prepare";
+    case DEVICE_UNPREPARE:
+        return "unprepare";
+    case DEVICE_LOCALATTACH:
+        return "localattach";
+    case DEVICE_LOCALDETACH:
+        return "localdetach";
+    default:
+        LOG(ERROR, "invalid action (%d) for device", action);
+        break;
+    }
+    return NULL;
+}
+
+char *libxl__device_xs_hotplug_path(libxl__gc *gc, libxl__device *dev)
+{
+    return GCSPRINTF("/local/domain/%u/libxl/hotplug/%u/%u",
+                     dev->backend_domid, dev->domid, dev->devid);
+}
+
 int libxl__device_generic_add(libxl__gc *gc, xs_transaction_t t,
         libxl__device *device, char **bents, char **fents)
 {
@@ -410,6 +439,7 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
     aodev->rc = 0;
     aodev->dev = NULL;
     aodev->num_exec = 0;
+    aodev->hotplug_version = 0;
     /* Initialize timer for QEMU Bodge and hotplug execution */
     libxl__ev_time_init(&aodev->timeout);
     aodev->active = 1;
@@ -891,7 +921,8 @@ static void device_hotplug(libxl__egc *egc, libxl__ao_device *aodev)
      * and return the necessary args/env vars for execution */
     hotplug = libxl__get_hotplug_script_info(gc, aodev->dev, &args, &env,
                                              aodev->action,
-                                             aodev->num_exec);
+                                             aodev->num_exec,
+                                             aodev->hotplug_version);
     switch (hotplug) {
     case 0:
         /* no hotplug script to execute */
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 0b38e3e..c41e608 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1829,12 +1829,19 @@ _hidden const char *libxl__run_dir_path(void);
 
 /*----- device addition/removal -----*/
 
-/* Action to perform (either connect or disconnect) */
+/* Action to perform */
 typedef enum {
     DEVICE_CONNECT,
-    DEVICE_DISCONNECT
+    DEVICE_DISCONNECT,
+    DEVICE_PREPARE,
+    DEVICE_UNPREPARE,
+    DEVICE_LOCALATTACH,
+    DEVICE_LOCALDETACH
 } libxl__device_action;
 
+_hidden char *libxl__device_hotplug_action(libxl__gc *gc,
+                                           libxl__device_action action);
+
 typedef struct libxl__ao_device libxl__ao_device;
 typedef struct libxl__multidev libxl__multidev;
 typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
@@ -1852,6 +1859,14 @@ typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
  * Once _prepare has been called on a libxl__ao_device, it is safe to just
  * discard this struct, there's no need to call any destroy function.
  * _prepare can also be called multiple times with the same libxl__ao_device.
+ *
+ * If hotplug_version field is 1, the new hotplug script calling convention
+ * will be used to call the hotplug script. This new convention provides
+ * two new actions to hotplug scripts, "prepare", "unprepare", "localattach"
+ * and "localdetach". This new actions have been added to offload work done
+ * by hotplug scripts during the blackout phase of migration. "prepare" is
+ * called before the remote domain is paused, so as much operations as
+ * possible should be done in this phase.
  */
 _hidden void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev);
 
@@ -1862,6 +1877,7 @@ struct libxl__ao_device {
     libxl__device *dev;
     int force;
     libxl__device_callback *callback;
+    int hotplug_version;
     /* return value, zeroed by user on entry, is valid on callback */
     int rc;
     /* private for multidev */
@@ -2045,6 +2061,17 @@ _hidden void libxl__initiate_device_remove(libxl__egc *egc,
                                            libxl__ao_device *aodev);
 
 /*
+ * libxl__device_xs_hotplug_path returns the xenstore hotplug
+ * path that is used to share information with the hotplug
+ * script.
+ *
+ * This path is only used by new hotplug scripts, that are
+ * specified using "method" instead of "script" in the disk
+ * parameters.
+ */
+_hidden char *libxl__device_xs_hotplug_path(libxl__gc *gc, libxl__device *dev);
+
+/*
  * libxl__get_hotplug_script_info returns the args and env that should
  * be passed to the hotplug script for the requested device.
  *
@@ -2065,7 +2092,7 @@ _hidden void libxl__initiate_device_remove(libxl__egc *egc,
 _hidden int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
                                            char ***args, char ***env,
                                            libxl__device_action action,
-                                           int num_exec);
+                                           int num_exec, int hotplug_version);
 
 /*----- local disk attach: attach a disk locally to run the bootloader -----*/
 
diff --git a/tools/libxl/libxl_linux.c b/tools/libxl/libxl_linux.c
index 1fed3cd..43f23dd 100644
--- a/tools/libxl/libxl_linux.c
+++ b/tools/libxl/libxl_linux.c
@@ -190,32 +190,68 @@ out:
 
 static int libxl__hotplug_disk(libxl__gc *gc, libxl__device *dev,
                                char ***args, char ***env,
-                               libxl__device_action action)
+                               libxl__device_action action,
+                               int hotplug_version)
 {
     char *be_path = libxl__device_backend_path(gc, dev);
-    char *script;
+    char *hotplug_path = libxl__device_xs_hotplug_path(gc, dev);
+    char *script, *saction;
     int nr = 0, rc = 0;
+    const int env_arraysize = 5;
+    const int arg_arraysize = 3;
 
-    script = libxl__xs_read(gc, XBT_NULL,
-                            GCSPRINTF("%s/%s", be_path, "script"));
-    if (!script) {
-        LOGEV(ERROR, errno, "unable to read script from %s", be_path);
+    switch (hotplug_version) {
+    case 0:
+        script = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/script", be_path));
+        if (!script) {
+            LOGEV(ERROR, errno, "unable to read script from %s", be_path);
+            rc = ERROR_FAIL;
+            goto error;
+        }
+        *env = get_hotplug_env(gc, script, dev);
+        if (!*env) {
+            rc = ERROR_FAIL;
+            goto error;
+        }
+        break;
+    case 1:
+        /* The new hotplug calling convention only uses two ENV variables:
+         * BACKEND_PATH: path to xenstore backend of the related device.
+         * HOTPLUG_PATH: path to the xenstore directory that can be used to
+         * pass extra parameters to the script.
+         */
+        script = libxl__xs_read(gc, XBT_NULL,
+                                GCSPRINTF("%s/script", hotplug_path));
+        if (!script) {
+            LOGEV(ERROR, errno, "unable to read script from %s", hotplug_path);
+            rc = ERROR_FAIL;
+            goto error;
+        }
+        GCNEW_ARRAY(*env, env_arraysize);
+        (*env)[nr++] = "BACKEND_PATH";
+        (*env)[nr++] = be_path;
+        (*env)[nr++] = "HOTPLUG_PATH";
+        (*env)[nr++] = hotplug_path;
+        (*env)[nr++] = NULL;
+        assert(nr == env_arraysize);
+        nr = 0;
+        break;
+    default:
+        LOG(ERROR, "unknown hotplug script version %d", hotplug_version);
         rc = ERROR_FAIL;
         goto error;
     }
 
-    *env = get_hotplug_env(gc, script, dev);
-    if (!*env) {
+    GCNEW_ARRAY(*args, arg_arraysize);
+    (*args)[nr++] = script;
+    saction = libxl__device_hotplug_action(gc, action);
+    if (!saction) {
         rc = ERROR_FAIL;
         goto error;
     }
-
-    const int arraysize = 3;
-    GCNEW_ARRAY(*args, arraysize);
-    (*args)[nr++] = script;
-    (*args)[nr++] = action == DEVICE_CONNECT ? "add" : "remove";
+    (*args)[nr++] = saction;
     (*args)[nr++] = NULL;
-    assert(nr == arraysize);
+    assert(nr == arg_arraysize);
 
     rc = 1;
 
@@ -226,7 +262,7 @@ error:
 int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
                                    char ***args, char ***env,
                                    libxl__device_action action,
-                                   int num_exec)
+                                   int num_exec, int hotplug_version)
 {
     char *disable_udev = libxl__xs_read(gc, XBT_NULL, DISABLE_UDEV_PATH);
     int rc;
@@ -243,7 +279,7 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
             rc = 0;
             goto out;
         }
-        rc = libxl__hotplug_disk(gc, dev, args, env, action);
+        rc = libxl__hotplug_disk(gc, dev, args, env, action, hotplug_version);
         break;
     case LIBXL__DEVICE_KIND_VIF:
         /*
diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
index 9587833..8061e7a 100644
--- a/tools/libxl/libxl_netbsd.c
+++ b/tools/libxl/libxl_netbsd.c
@@ -62,7 +62,7 @@ out:
 int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
                                    char ***args, char ***env,
                                    libxl__device_action action,
-                                   int num_exec)
+                                   int num_exec, int hotplug_version)
 {
     char *disable_udev = libxl__xs_read(gc, XBT_NULL, DISABLE_UDEV_PATH);
     int rc;
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
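The calling convention this patch introduces can also be seen from the script side. Below is a minimal, hypothetical sketch of a hotplug script written against the hotplug_version == 1 convention; it is not part of this series, and every name in it is illustrative. Under the new convention the script receives the action ("add", "remove", "prepare", "unprepare", "localattach" or "localdetach") as its first argument, and libxl exports exactly two environment variables, BACKEND_PATH and HOTPLUG_PATH.

```shell
#!/bin/sh
# Hypothetical hotplug script for the new (hotplug_version == 1)
# calling convention: $1 carries the action, and libxl provides only
# BACKEND_PATH (xenstore path of the device backend) and HOTPLUG_PATH
# (xenstore directory for extra parameters) in the environment.

handle_action() {
    case "$1" in
        add|remove|prepare|unprepare|localattach|localdetach)
            # A real script would act on the device here; this sketch
            # only reports what it was asked to do.
            echo "$1 backend=${BACKEND_PATH:-?} hotplug=${HOTPLUG_PATH:-?}"
            ;;
        *)
            echo "unknown action: $1" >&2
            return 1
            ;;
    esac
}

handle_action "${1:-add}"
```

Since "prepare" is called before the remote domain is paused during migration, a script following this convention should front-load as much of its work as possible into that action.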

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xY-0002De-0i; Fri, 21 Dec 2012 17:00:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xW-0002Cc-DX
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:30 +0000
Received: from [85.158.143.99:34496] by server-3.bemta-4.messagelabs.com id
	77/E0-18211-DA594D05; Fri, 21 Dec 2012 17:00:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!3
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25737 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309220"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:19 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:18 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:06 +0100
Message-ID: <1356109208-6830-9-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 08/10] libxl: add new hotplug interface
	support for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Reads the results of the hotplug script execution and changes
pdev_path libxl_device_disk to point to the block device. This will be
used later when Qemu is launched.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/libxl/libxl_create.c |   33 +++++++++++++++++++++++++++++++++
 1 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index c176967..1d5410f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -964,6 +964,10 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
     libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
+    char *hotplug_path;
+    const char *pdev;
+    libxl__device dev;
+    libxl_device_disk *disk;
 
     /* convenience aliases */
     const uint32_t domid = dcs->guest_domid;
@@ -975,6 +979,35 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         goto error_out;
     }
 
+    /*
+     * If using the new hotplug interface change disks pdev_path
+     * to point to the real physical path, so it can be used by Qemu.
+     */
+    for (i = 0; i < d_config->num_disks; i++) {
+        disk = &d_config->disks[i];
+        if (disk->hotplug_version != 0) {
+            /* Update pdev_path */
+            ret = libxl__device_from_disk(gc, domid, disk, &dev);
+            if (ret != 0) {
+                LOG(ERROR, "Invalid or unsupported virtual disk identifier %s",
+                           disk->vdev);
+                goto error_out;
+            }
+            hotplug_path = libxl__device_xs_hotplug_path(gc, &dev);
+            ret = libxl__xs_read_checked(gc, XBT_NULL,
+                                         GCSPRINTF("%s/pdev", hotplug_path),
+                                         &pdev);
+            if (ret)
+                goto error_out;
+            if (!pdev) {
+                ret = ERROR_FAIL;
+                goto error_out;
+            }
+            free(disk->pdev_path);
+            disk->pdev_path = libxl__strdup(NOGC, pdev);
+        }
+    }
+
     for (i = 0; i < d_config->b_info.num_ioports; i++) {
         libxl_ioport_range *io = &d_config->b_info.ioports[i];
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
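The xenstore read in this patch implies a contract with the hotplug script: after a successful attach, the script must leave the physical device path in "pdev" under its hotplug directory, where libxl picks it up and copies it into disk->pdev_path before QEMU is launched. A hypothetical sketch of that script-side step follows; the path and device names are illustrative, and the real xenstore-write utility is stubbed out with echo so the sketch is self-contained.

```shell
#!/bin/sh
# Hypothetical script-side counterpart to this patch: publish the
# attached block device under $HOTPLUG_PATH/pdev so libxl can read it
# back with libxl__xs_read_checked(). A real script would invoke the
# xenstore-write utility; here it is stubbed so the sketch runs anywhere.

xenstore_write() {
    echo "xenstore-write $1 $2"
}

publish_pdev() {
    # $1: the block device the script attached, e.g. /dev/loop0
    xenstore_write "${HOTPLUG_PATH}/pdev" "$1"
}

# Example path matching libxl__device_xs_hotplug_path()'s layout
HOTPLUG_PATH=/local/domain/0/libxl/hotplug/5/51712
publish_pdev /dev/loop0
```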

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xW-0002Cr-8f; Fri, 21 Dec 2012 17:00:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xV-0002Cb-H4
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:29 +0000
Received: from [85.158.143.99:34436] by server-1.bemta-4.messagelabs.com id
	66/1E-28401-CA594D05; Fri, 21 Dec 2012 17:00:28 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25685 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309216"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:18 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:17 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:03 +0100
Message-ID: <1356109208-6830-6-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 05/10] libxl: add disk specific remove
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a specific macro to generate libxl_device_disk_{remove/destroy}
functions that passes the hotplug_version field down to the aodev
struct.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/libxl/libxl.c |   35 +++++++++++++++++++++++++++++++++--
 1 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 82034f1..29b2765 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -3489,11 +3489,41 @@ out:
         return AO_INPROGRESS;                                          \
     }
 
+/* Specific for disk devices, that have to set aodev->hotplug_version */
+#define DEFINE_DISK_REMOVE(type, removedestroy, f)                      \
+    int libxl_device_##type##_##removedestroy(libxl_ctx *ctx,           \
+        uint32_t domid, libxl_device_##type *type,                      \
+        const libxl_asyncop_how *ao_how)                                \
+    {                                                                   \
+        AO_CREATE(ctx, domid, ao_how);                                  \
+        libxl__device *device;                                          \
+        libxl__ao_device *aodev;                                        \
+        int rc;                                                         \
+                                                                        \
+        GCNEW(device);                                                  \
+        rc = libxl__device_from_##type(gc, domid, type, device);        \
+        if (rc != 0) goto out;                                          \
+                                                                        \
+        GCNEW(aodev);                                                   \
+        libxl__prepare_ao_device(ao, aodev);                            \
+        aodev->action = DEVICE_DISCONNECT;                              \
+        aodev->dev = device;                                            \
+        aodev->callback = device_aocomplete;                            \
+        aodev->force = f;                                               \
+        aodev->hotplug_version = type->hotplug_version;                 \
+        LOG(DEBUG, "hotplug version: %d", aodev->hotplug_version);      \
+        libxl__initiate_device_remove(egc, aodev);                      \
+                                                                        \
+    out:                                                                \
+        if (rc) return AO_ABORT(rc);                                    \
+        return AO_INPROGRESS;                                           \
+    }
+
 /* Define all remove/destroy functions and undef the macro */
 
 /* disk */
-DEFINE_DEVICE_REMOVE(disk, remove, 0)
-DEFINE_DEVICE_REMOVE(disk, destroy, 1)
+DEFINE_DISK_REMOVE(disk, remove, 0)
+DEFINE_DISK_REMOVE(disk, destroy, 1)
 
 /* nic */
 DEFINE_DEVICE_REMOVE(nic, remove, 0)
@@ -3512,6 +3542,7 @@ DEFINE_DEVICE_REMOVE(vfb, destroy, 1)
 DEFINE_DEVICE_REMOVE(vtpm, remove, 0)
 DEFINE_DEVICE_REMOVE(vtpm, destroy, 1)
 
+#undef DEFINE_DISK_REMOVE
 #undef DEFINE_DEVICE_REMOVE
 
/*******************************************************************************/
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5xW-0002Cr-8f; Fri, 21 Dec 2012 17:00:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tm5xV-0002Cb-H4
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:00:29 +0000
Received: from [85.158.143.99:34436] by server-1.bemta-4.messagelabs.com id
	66/1E-28401-CA594D05; Fri, 21 Dec 2012 17:00:28 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109228!23684325!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25685 invoked from network); 21 Dec 2012 17:00:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309216"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:00:18 +0000
Received: from mac.citrite.net (10.31.3.233) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:00:17 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 21 Dec 2012 18:00:03 +0100
Message-ID: <1356109208-6830-6-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
References: <1356109208-6830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 05/10] libxl: add disk specific remove
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a specific macro to generate libxl_device_disk_{remove/destroy}
functions that passes the hotplug_version field down to the aodev
struct.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/libxl/libxl.c |   35 +++++++++++++++++++++++++++++++++--
 1 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 82034f1..29b2765 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -3489,11 +3489,41 @@ out:
         return AO_INPROGRESS;                                           \
     }
 
+/* Specific for disk devices, that have to set aodev->hotplug_version */
+#define DEFINE_DISK_REMOVE(type, removedestroy, f)                      \
+    int libxl_device_##type##_##removedestroy(libxl_ctx *ctx,           \
+        uint32_t domid, libxl_device_##type *type,                      \
+        const libxl_asyncop_how *ao_how)                                \
+    {                                                                   \
+        AO_CREATE(ctx, domid, ao_how);                                  \
+        libxl__device *device;                                          \
+        libxl__ao_device *aodev;                                        \
+        int rc;                                                         \
+                                                                        \
+        GCNEW(device);                                                  \
+        rc = libxl__device_from_##type(gc, domid, type, device);        \
+        if (rc != 0) goto out;                                          \
+                                                                        \
+        GCNEW(aodev);                                                   \
+        libxl__prepare_ao_device(ao, aodev);                            \
+        aodev->action = DEVICE_DISCONNECT;                              \
+        aodev->dev = device;                                            \
+        aodev->callback = device_aocomplete;                            \
+        aodev->force = f;                                               \
+        aodev->hotplug_version = type->hotplug_version;                 \
+        LOG(DEBUG, "hotplug version: %d", aodev->hotplug_version);      \
+        libxl__initiate_device_remove(egc, aodev);                      \
+                                                                        \
+    out:                                                                \
+        if (rc) return AO_ABORT(rc);                                    \
+        return AO_INPROGRESS;                                           \
+    }
+
 /* Define all remove/destroy functions and undef the macro */
 
 /* disk */
-DEFINE_DEVICE_REMOVE(disk, remove, 0)
-DEFINE_DEVICE_REMOVE(disk, destroy, 1)
+DEFINE_DISK_REMOVE(disk, remove, 0)
+DEFINE_DISK_REMOVE(disk, destroy, 1)
 
 /* nic */
 DEFINE_DEVICE_REMOVE(nic, remove, 0)
@@ -3512,6 +3542,7 @@ DEFINE_DEVICE_REMOVE(vfb, destroy, 1)
 DEFINE_DEVICE_REMOVE(vtpm, remove, 0)
 DEFINE_DEVICE_REMOVE(vtpm, destroy, 1)
 
+#undef DEFINE_DISK_REMOVE
 #undef DEFINE_DEVICE_REMOVE
 
 /******************************************************************************/
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:03:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm5zt-0003G3-AK; Fri, 21 Dec 2012 17:02:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tm5zr-0003FZ-Io
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:02:55 +0000
Received: from [85.158.143.99:46257] by server-2.bemta-4.messagelabs.com id
	80/50-30861-E3694D05; Fri, 21 Dec 2012 17:02:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1356109374!18267203!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22134 invoked from network); 21 Dec 2012 17:02:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:02:54 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309296"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:02:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 17:02:54 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tm5zq-0000wf-7H; Fri, 21 Dec 2012 17:02:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tm5zq-0002wb-3K;
	Fri, 21 Dec 2012 17:02:54 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20692.38462.2976.430854@mariner.uk.xensource.com>
Date: Fri, 21 Dec 2012 17:02:54 +0000
To: Dario Faggioli <dario.faggioli@citrix.com>
In-Reply-To: <1356106739.15403.70.camel@Abyss>
References: <patchbomb.1355944036@Solace>
	<3c196445edb57baadf4f.1355944042@Solace>
	<50D48091.5070409@eu.citrix.com> <1356106739.15403.70.camel@Abyss>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 06 of 10 v2] libxl: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli writes ("Re: [Xen-devel] [PATCH 06 of 10 v2] libxl: allow for explicitly specifying node-affinity"):
> On Fri, 2012-12-21 at 15:30 +0000, George Dunlap wrote: 
> > I think you'll probably need to add a line like the following:
> > 
> > #define LIBXL_HAVE_NODEAFFINITY 1
> > 
> > So that people wanting to build against different versions of the 
> > library can behave appropriately.  
> >
> I see what you mean.

I think this is a good idea.

I'm afraid I won't be able to do proper justice to your series until
the new year.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:03:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm60J-0003OL-On; Fri, 21 Dec 2012 17:03:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1Tm60I-0003Ny-V9
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:03:23 +0000
Received: from [85.158.138.51:54349] by server-4.bemta-3.messagelabs.com id
	E0/D7-31835-A5694D05; Fri, 21 Dec 2012 17:03:22 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1356109391!23711139!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 444 invoked from network); 21 Dec 2012 17:03:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:03:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="1497134"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:03:10 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Fri, 21 Dec 2012
	12:03:10 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: G.R. <firemeteor@users.sourceforge.net>
Date: Fri, 21 Dec 2012 12:03:13 -0500
Thread-Topic: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
	resource conflict for OpRegion.
Thread-Index: Ac3fmx1VnvRqdp5pSwabm5teC+++3QAAG3tQ
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6567@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
	<CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
	<CAKhsbWbtYA50rCCbAR_5=Bt+5g7Kb_BkKrVo5WF+ZdmO2o8pCw@mail.gmail.com>
In-Reply-To: <CAKhsbWbtYA50rCCbAR_5=Bt+5g7Kb_BkKrVo5WF+ZdmO2o8pCw@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: firemeteor.guo@gmail.com [mailto:firemeteor.guo@gmail.com] On
> Behalf Of G.R.
> Sent: Friday, December 21, 2012 11:49 AM
> To: Ross Philipson
> Cc: Keir (Xen.org); Stefano Stabellini; Ian Campbell;
> Jean.guyader@gmail.com; xen-devel
> Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
> resource conflict for OpRegion.
> 
> >> >> I guess there is no predefined macro for OpRegion size. And I
> >> >> guess I need to define it twice for both code?
> >> >
> >> > In addition we should think about defining the IGD OpRegion in ACPI
> >> per the spec (cited earlier). Guest drivers seem to find the region
> >> just by reading the ASLS register in the gfx device's config space
> >> but it would be more correct to define it in ACPI too. Just a
> thought.
> >>
> >> Is it a requirement for the patch to be accepted? Or, are you saying
> >> that this should not be IGD passthrough specific?
> >> I'm not sure what you refer to by 'ACPI' here. Is it the spec itself
> >> or header in your source code?
> >> I'm sorry to ask but I'm just an unlucky user trying to fix my box.
> >>
> >> The ASLS register is just the documented way to communicate the
> >> OpRegion you can find in the spec.
> >> There are a lot of details in the spec. But as long as we are not
> >> going to emulate it, only the size is relevant here, I believe.
> >
> > I guess all I am pointing out is that the IGD OpRegion is supposed to
> be defined in ACPI (that is actually why it is called an OpRegion). On
> all the systems I have seen it is in the DSDT (look for IDGP and IGDM).
> The OpRegion declaration actually tells you how big the region is and
> where its base is. It should probably only be there in the case where
> you are passing in the igfx device but this could be done by loading an
> auxiliary SSDT table. In addition to the IGD definition, the BIOSes on
> these systems also define other NVS regions related to graphics which
> might be useful, at least in determining their size and layout.
> 
> Great! So you are the expert about these firmwares. I also tried to grep
> in these ACPI tables but did not find anything useful.
> I can see that IGDM has a matching size, but the base refers to an entry
> called 'ASLB', which is not directly visible from the dump.

ASLB is a field in another OpRegion that defines other areas of the gfx
related NVS regions. So the value will be present at runtime and you can
query it via the ACPI object. But the same value should be reported in
the ASLS register in the PCI config space for the igfx device which is a
lot easier to get at.

> 
> The intel opregion spec gives me the impression that it's part of the
> ACPI spec. And you are saying that it is not! What astonishing news to
> me.

I think the spec is saying that the IGD OpRegion should be defined in ACPI
using standard ACPI entities like OpRegions and Fields.

> 
> > I have been thinking about this in relationship to our igfx
> passthrough support. We see some odd behaviors here and there with igfx
> pt and the guest drivers (which we have no control over in say the
> Windows case). I don't currently have hard evidence that these missing
> BIOS bits are causing any specific problems but they are an
> inconsistency wrt the spec and on my list of suspects.
> 
> Yes, my win 7 guest is totally broken with IGD passthrough (much worse
> than linux status).
> Before I bought my current build, sources like wikis seemed to mention
> that IGD is the first card that works.
> And now, it seems the AMD cards are the best choice for pass-through.
> Sad news for me.

Let me just clarify that up to now we have been successful in passing in
igfx cards without having to surface any of these ACPI bits. I was just
mentioning that this is an inconsistency and might be worth investigating
at some point. More importantly I am pointing out that if you are trying
to find out information like the location/size/layout of the IGD OpRegion,
you can get that information from the host BIOS. That sounded like what
your original issues centered around. Sorry if I confused things.

> 
> Thanks,
> Timothy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:05:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm62f-0003xh-HF; Fri, 21 Dec 2012 17:05:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tm62d-0003xQ-RU
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:05:47 +0000
Received: from [85.158.143.99:59209] by server-1.bemta-4.messagelabs.com id
	D3/73-28401-BE694D05; Fri, 21 Dec 2012 17:05:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356109546!23685012!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11513 invoked from network); 21 Dec 2012 17:05:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:05:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; 
   d="scan'208";a="309350"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:05:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 17:05:45 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tm62c-0000xd-0q; Fri, 21 Dec 2012 17:05:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tm62b-0002xF-TX;
	Fri, 21 Dec 2012 17:05:45 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20692.38633.811300.565536@mariner.uk.xensource.com>
Date: Fri, 21 Dec 2012 17:05:45 +0000
To: Ian Campbell <ian.campbell@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1355925551-14149-1-git-send-email-ian.campbell@citrix.com>
References: <1355925551-14149-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: stefano.stabellini@citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/tests: Restrict some tests to x86 only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[Xen-devel] [PATCH] tools/tests: Restrict some tests to x86 only"):
> MCE injection and x86_emulator are clearly x86 specific.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
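
[Editorial note: restricting a tools/tests subdirectory to x86 is typically a one-line Makefile change; the fragment below is illustrative only — the variable names follow the usual Xen tools convention and are not necessarily the patch's exact text.]

```make
# Illustrative only: build these test directories on x86 builds alone.
SUBDIRS-$(CONFIG_X86) += mce-test
SUBDIRS-$(CONFIG_X86) += x86_emulator
```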

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:10:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm66m-0004Im-Cs; Fri, 21 Dec 2012 17:10:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Tm66k-0004Ic-Vt
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:10:03 +0000
Received: from [85.158.139.211:40615] by server-9.bemta-5.messagelabs.com id
	4D/BF-10690-AE794D05; Fri, 21 Dec 2012 17:10:02 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356109801!19043374!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11909 invoked from network); 21 Dec 2012 17:10:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:10:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,330,1355097600"; d="asc'?scan'208";a="309452"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:10:01 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 21 Dec 2012 17:10:00 +0000
Message-ID: <1356109799.15403.90.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 21 Dec 2012 18:09:59 +0100
In-Reply-To: <20692.38462.2976.430854@mariner.uk.xensource.com>
References: <patchbomb.1355944036@Solace>
	<3c196445edb57baadf4f.1355944042@Solace>	<50D48091.5070409@eu.citrix.com>
	<1356106739.15403.70.camel@Abyss>
	<20692.38462.2976.430854@mariner.uk.xensource.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Dan Magenheimer <dan.magenheimer@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH 06 of 10 v2] libxl: allow for explicitly
 specifying node-affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6702014978574094799=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6702014978574094799==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-FTvcDXtwk8hu047y2FNG"

--=-FTvcDXtwk8hu047y2FNG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-12-21 at 17:02 +0000, Ian Jackson wrote:=20
> > On Fri, 2012-12-21 at 15:30 +0000, George Dunlap wrote:=20
> > > I think you'll probably need to add a line like the following:
> > >=20
> > > #define LIBXL_HAVE_NODEAFFINITY 1
> > >=20
> > > So that people wanting to build against different versions of the=20
> > > library can behave appropriately. =20
> > >
> > I see what you mean.
>=20
> I think this is a good idea.
>=20
Ok then.

> I'm afraid I won't be able to do proper justice to your series until
> the new year.
>=20
Yep, I know you're away, and that's fine; my bad I couldn't post it
earlier. :-)

I'll be happy to see your comments when you're back.

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-FTvcDXtwk8hu047y2FNG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlDUl+cACgkQk4XaBE3IOsQnYgCfZBITmd7AxsYa3VHbyA3tQwDp
Ab4AoJgg0eKOgsrM1KonNyjSp43LDm9u
=lZoh
-----END PGP SIGNATURE-----

--=-FTvcDXtwk8hu047y2FNG--


--===============6702014978574094799==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6702014978574094799==--


From xen-devel-bounces@lists.xen.org Fri Dec 21 17:11:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:11:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm682-0004QW-St; Fri, 21 Dec 2012 17:11:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tm680-0004QJ-RO
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:11:20 +0000
Received: from [85.158.143.99:22126] by server-3.bemta-4.messagelabs.com id
	8E/DA-18211-83894D05; Fri, 21 Dec 2012 17:11:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1356109878!25254186!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12750 invoked from network); 21 Dec 2012 17:11:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:11:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,331,1355097600"; 
   d="scan'208";a="309511"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:11:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 17:11:18 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tm67y-0000zK-Fo; Fri, 21 Dec 2012 17:11:18 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tm67y-0002y1-CC;
	Fri, 21 Dec 2012 17:11:18 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20692.38966.234380.57763@mariner.uk.xensource.com>
Date: Fri, 21 Dec 2012 17:11:18 +0000
To: "Jan Beulich" <JBeulich@suse.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <50C2200202000078000AF0AC@nat28.tlf.novell.com>
References: <50C2200202000078000AF0AC@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Charles Arnold <CARNOLD@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] compile flags overrides
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("[Xen-devel] compile flags overrides"):
> I don't see how one is supposed to override (e.g. optimization level,
> or add to) CFLAGS for the hypervisor part of the build though.

Why not APPEND_CFLAGS=-O0 or whatever?

Ian.
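
[Editorial note: the mechanism Ian is pointing at — the build appends
APPEND_CFLAGS after its own flags, and with gcc the last optimization
option wins, so APPEND_CFLAGS=-O0 effectively overrides -O2. Below is a
simplified, illustrative fragment, not the actual Config.mk.]

```make
# Simplified sketch of the append hook: defaults first, user additions last,
# e.g. `make APPEND_CFLAGS=-O0` yields "... -O2 -Wall -O0".
CFLAGS := -O2 -Wall
CFLAGS += $(APPEND_CFLAGS)
```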

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:17:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm6EE-0004ip-QA; Fri, 21 Dec 2012 17:17:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm6EC-0004ii-Gu
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 17:17:45 +0000
Received: from [85.158.143.99:42899] by server-1.bemta-4.messagelabs.com id
	7C/3C-28401-7B994D05; Fri, 21 Dec 2012 17:17:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356110260!20922945!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzQ4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17915 invoked from network); 21 Dec 2012 17:17:41 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 17:17:41 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLHHL8G027357
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 17:17:22 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLHHK5u024801
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 17:17:20 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLHHHGm015473; Fri, 21 Dec 2012 11:17:17 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 09:17:16 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 56F061C032B; Fri, 21 Dec 2012 12:17:15 -0500 (EST)
Date: Fri, 21 Dec 2012 12:17:15 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Message-ID: <20121221171715.GE25526@phenom.dumpdata.com>
References: <1353421985-25398-1-git-send-email-matthew.fioravante@jhuapl.edu>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1353421985-25398-1-git-send-email-matthew.fioravante@jhuapl.edu>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: jeremy@goop.org, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mail@srajiv.net,
	tpmdd-devel@lists.sourceforge.net, key@linux.vnet.ibm.com
Subject: Re: [Xen-devel] [PATCH v3] add xen-tpmfront.ko: Xen Virtual TPM
	frontend driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Nov 20, 2012 at 09:33:05AM -0500, Matthew Fioravante wrote:

Hey Matthew,

Sorry for the late response. 


I ran this through checkpatch and got these:


CHECK: spinlock_t definition without comment
#476: FILE: drivers/char/tpm/xen-tpmfront_if.c:62:
+	spinlock_t tx_lock;

CHECK: Alignment should match open parenthesis
#520: FILE: drivers/char/tpm/xen-tpmfront_if.c:106:
+tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
+		int isuserbuffer)

CHECK: Alignment should match open parenthesis
#673: FILE: drivers/char/tpm/xen-tpmfront_if.c:259:
+		gnttab_end_foreign_access(tp->ring_ref,
+				0, (unsigned long)tp->tx);

WARNING: Avoid CamelCase: <XenbusStateConnected>
#727: FILE: drivers/char/tpm/xen-tpmfront_if.c:313:
+	xenbus_switch_state(dev, XenbusStateConnected);

CHECK: Alignment should match open parenthesis
#1014: FILE: drivers/char/tpm/xen-tpmfront_if.c:600:
+		gnttab_grant_foreign_access_ref(tx->ref,
+				tp->backend_id,

CHECK: memory barrier without comment
#1023: FILE: drivers/char/tpm/xen-tpmfront_if.c:609:
+	mb();

CHECK: Alignment should match open parenthesis
#1085: FILE: drivers/char/tpm/xen-tpmfront_if.c:671:
+	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
+				&gref_head) < 0) {

CHECK: Alignment should match open parenthesis
#1197: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:89:
+transmission_set_req_buffer(struct transmission *t,
+		unsigned char *buffer, size_t len)

CHECK: Alignment should match open parenthesis
#1217: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:109:
+transmission_set_res_buffer(struct transmission *t,
+		const unsigned char *buffer, size_t len)

CHECK: Alignment should match open parenthesis
#1348: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:240:
+		interruptible_sleep_on_timeout(&vtpms->resp_wait_queue,
+				1000);

CHECK: Alignment should match open parenthesis
#1402: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:294:
+		if (time_after(jiffies,
+					vtpms->disconnect_time + HZ * 10)) {

CHECK: Blank lines aren't necessary after an open brace '{'
#1412: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:304:
+		if (_vtpm_send_queued(chip) == 0) {
+

CHECK: Alignment should match open parenthesis
#1490: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:382:
+				list_add(&qt->next,
+						&vtpms->queued_requests);

CHECK: Alignment should match open parenthesis
#1551: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:443:
+	if (vtpms->current_response ||
+			0 != (vtpms->flags & DATAEX_FLAG_QUEUED_ONLY)) {

CHECK: spinlock_t definition without comment
#1666: FILE: drivers/char/tpm/xen-tpmfront_vtpm.h:9:
+	spinlock_t           req_list_lock;

CHECK: spinlock_t definition without comment
#1672: FILE: drivers/char/tpm/xen-tpmfront_vtpm.h:15:
+	spinlock_t           resp_list_lock;

total: 0 errors, 1 warnings, 15 checks, 1387 lines checked

/home/konrad/tpm has style problems, please review.


I like your description - and I think you should have it
in the git commit _and_ in a Documentation/*somewhere*

> This patch ports the xen vtpm frontend driver for linux
> from the linux-2.6.18-xen.hg tree to linux-stable. This
> driver is designed to be used with the mini-os tpmback driver
> in Xen as part of the new mini-os virtual tpm subsystem.
> 
> Included in this commit message is the first draft of the
> vtpm documentation.  See docs/misc/vtpm.txt for an updated
> version. Contact the xen-devel@lists.xen.org mailing list
> for details.
> 
> Copyright (c) 2010-2012 United States Government, as represented by
> the Secretary of Defense.  All rights reserved.
> November 12 2012
> Authors: Matthew Fioravante (JHUAPL),
> 
> This document describes the virtual Trusted Platform Module (vTPM)
> subsystem for Xen. The reader is assumed to have familiarity with
> building and installing Xen, Linux, and a basic understanding of the
> TPM and vTPM concepts.
> 
> ------------------------------
> INTRODUCTION
> ------------------------------
> The goal of this work is to provide a TPM functionality to a virtual
> guest operating system (a DomU).  This allows programs to interact
> with a TPM in a virtual system the same way they interact with a TPM
> on the physical system.  Each guest gets its own unique, emulated,
> software TPM.  However, all of the vTPM's secrets (keys, NVRAM, etc.)
> are managed by a vTPM Manager domain, which seals the secrets to
> the Physical TPM.  Thus, the vTPM subsystem extends the chain of
> trust rooted in the hardware TPM to virtual machines in Xen.
> Each major component of vTPM is implemented as a separate domain,
> providing secure separation guaranteed by the hypervisor. The
> vTPM domains are implemented in mini-os to reduce memory and
> processor overhead.
> 
> This mini-os vTPM subsystem was built on top of the previous vTPM
> work done by IBM and Intel corporation.
> 
> ------------------------------
> DESIGN OVERVIEW
> ------------------------------
> 
> The architecture of vTPM is described below:
> 
> +------------------+
> |    Linux DomU    | ...
> |       |  ^       |
> |       v  |       |
> |   xen-tpmfront   |
> +------------------+
>         |  ^
>         v  |
> +------------------+
> | mini-os/tpmback  |
> |       |  ^       |
> |       v  |       |
> |  vtpm-stubdom    | ...
> |       |  ^       |
> |       v  |       |
> | mini-os/tpmfront |
> +------------------+
>         |  ^
>         v  |
> +------------------+
> | mini-os/tpmback  |
> |       |  ^       |
> |       v  |       |
> |   vtpmmgrdom     |
> |       |  ^       |
> |       v  |       |
> | mini-os/tpm_tis  |
> +------------------+
>         |  ^
>         v  |
> +------------------+
> |   Hardware TPM   |
> +------------------+
>  * Linux DomU: The Linux based guest that wants to use a vTPM. There
>                may be more than one of these.
> 
>  * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This
>                     driver provides vTPM access to a para-virtualized
>                     Linux based DomU.
> 
>  * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend
>                     driver connects to this backend driver to facilitate
>                     communications between the Linux DomU and its vTPM.
>                     This driver is also used by vtpmmgrdom to communicate
>                     with vtpm-stubdom.
> 
>  * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is
>                  a one to one mapping between running vtpm-stubdom
>                  instances and logical vtpms on the system. The vTPM
>                  Platform Configuration Registers (PCRs) are all
>                  initialized to zero.
> 
>  * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os
>                      domain vtpm-stubdom uses this driver to
>                      communicate with vtpmmgrdom. This driver could
>                      also be used separately to implement a mini-os
>                      domain that wishes to use a vTPM of
>                      its own.
> 
>  * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
>                There is only one vTPM manager and it should be running
>                during the entire lifetime of the machine.  This domain
>                regulates access to the physical TPM on the system and
>                secures the persistent state of each vTPM.
> 
>  * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification
>                     (TIS) driver. This driver is used by vtpmmgrdom to talk
>                     directly to the hardware TPM. Communication is
>                     facilitated by mapping hardware memory pages into
>                     vtpmmgrdom.
> 
>  * Hardware TPM: The physical TPM that is soldered onto the motherboard.
> 
> ------------------------------
> INSTALLATION
> ------------------------------
> 
> Prerequisites:
> --------------
> You must have an x86 machine with a TPM on the motherboard.
> The only software requirement for compiling vTPM is cmake.
> You must use libxl to manage domains with vTPMs. 'xm' is
> deprecated and does not support vTPM.
> 
> Compiling the XEN tree:
> -----------------------
> 
> Compile and install the XEN tree as usual. Be sure to build and install
> the stubdom tree.
> 
> Compiling the LINUX dom0 kernel:
> --------------------------------
> 
> The Linux dom0 kernel has no special prerequisites.
> 
> Compiling the LINUX domU kernel:
> --------------------------------
> 
> The domU kernel used by domains with vtpms must
> include the xen-tpmfront.ko driver. It can be built
> directly into the kernel or as a module.
> 
> CONFIG_TCG_TPM=y
> CONFIG_TCG_XEN=y
> 
> ------------------------------
> VTPM MANAGER SETUP
> ------------------------------
> 
> Manager disk image setup:
> -------------------------
> 
> The vTPM Manager requires a disk image to store its
> encrypted data. The image does not require a filesystem
> and can live anywhere on the host disk. The image does not need
> to be large. 8 to 16 MB should be sufficient.
> 
> Manager config file:
> --------------------
> 
> The vTPM Manager domain (vtpmmgrdom) must be started like
> any other Xen virtual machine and requires a config file.
> The manager requires a disk image for storage and permission
> to access the hardware memory pages for the TPM. An
> example configuration looks like the following.
> 
> kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
> memory=16
> disk=["file:/var/vtpmmgrdom.img,hda,w"]
> name="vtpmmgrdom"
> iomem=["fed40,5"]
> 
> The iomem line tells xl to allow access to the TPM
> IO memory pages, which are 5 pages that start at
> 0xfed40000.
> 
> Starting and stopping the manager:
> ----------------------------------
> 
> The vTPM manager should be started at boot; you may wish to
> create an init script to do this.
> 
> Once initialization is complete you should see the following:
> INFO[VTPM]: Waiting for commands from vTPM's:
> 
> To shutdown the manager you must destroy it. To avoid data corruption,
> only destroy the manager when you see the above "Waiting for commands"
> message. This ensures the disk is in a consistent state.
> 
> ------------------------------
> VTPM AND LINUX PVM SETUP
> ------------------------------
> 
> In the following examples we will assume we have a Linux
> guest named "domu" with its associated configuration
> located at /home/user/domu. Its vTPM will be named
> domu-vtpm.
> 
> vTPM disk image setup:
> ----------------------
> 
> The vTPM requires a disk image to store its persistent
> data. The image does not require a filesystem. The image
> does not need to be large. 8 MB should be sufficient.
> 
> vTPM config file:
> -----------------
> 
> The vTPM domain requires a configuration file like
> any other domain. The vTPM requires a disk image for
> storage and a TPM frontend driver to communicate
> with the manager. An example configuration is given:
> 
> kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
> memory=8
> disk=["file:/home/user/domu/vtpm.img,hda,w"]
> name="domu-vtpm"
> vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
> 
> The vtpm= line sets up the TPM frontend driver. The backend must be
> set to vtpmmgrdom. You are required to generate a uuid for this vtpm,
> using the uuidgen unix program or some other method. The uuid
> uniquely identifies this vtpm to the manager.
> 
> If you wish to clear the vTPM data you can either recreate the
> disk image or change the uuid.
> 
> Linux Guest config file:
> ------------------------
> 
> The Linux guest config file needs to be modified to include
> the Linux tpmfront driver. Add the following line:
> 
> vtpm=["backend=domu-vtpm"]
> 
> Currently only paravirtualized guests are supported.
> 
> Launching and shut down:
> ------------------------
> 
> To launch a Linux guest with a vTPM we first have to start the vTPM
> domain.
> 
> After initialization is complete, you should see the following:
> Info: Waiting for frontend domain to connect..
> 
> Next, launch the Linux guest.
> 
> If xen-tpmfront was compiled as a module, be sure to load it
> in the guest.
> 
> After the Linux domain boots and the xen-tpmfront driver is loaded,
> you should see the following on the vtpm console:
> 
> Info: VTPM attached to Frontend X/Y
> 
> If you have trousers and tpm_tools installed on the guest, you can test
> the vtpm.
> 
> On guest:
> 
> The version command should return the following:
>   TPM 1.2 Version Info:
>   Chip Version:        1.2.0.7
>   Spec Level:          2
>   Errata Revision:     1
>   TPM Vendor ID:       ETHZ
>   TPM Version:         01010000
>   Manufacturer Info:   4554485a
> 
> You should also see the command being sent to the vtpm console as well
> as the vtpm saving its state. You should see the vtpm key being
> encrypted and stored on the vtpmmgrdom console.
> 
> To shutdown the guest and its vtpm, you just have to shutdown the guest
> normally. As soon as the guest vm disconnects, the vtpm will shut itself
> down automatically.
> 
> You may wish to write a script to start your vtpm and guest together.
> 
> ------------------------------
> MORE INFORMATION
> ------------------------------
> 
> See stubdom/vtpmmgr/README for more details about how
> the manager domain works, how to use it, and its command line
> parameters.
> 
> See stubdom/vtpm/README for more specifics about how vtpm-stubdom
> operates and the command line options it accepts.
> 
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  drivers/char/tpm/Kconfig             |   11 +
>  drivers/char/tpm/Makefile            |    2 +
>  drivers/char/tpm/tpm.h               |   10 +
>  drivers/char/tpm/xen-tpmfront_if.c   |  688 ++++++++++++++++++++++++++++++++++
>  drivers/char/tpm/xen-tpmfront_vtpm.c |  543 +++++++++++++++++++++++++++
>  drivers/char/tpm/xen-tpmfront_vtpm.h |   55 +++
>  include/xen/interface/io/tpmif.h     |   65 ++++
>  7 files changed, 1374 insertions(+)
>  create mode 100644 drivers/char/tpm/xen-tpmfront_if.c
>  create mode 100644 drivers/char/tpm/xen-tpmfront_vtpm.c
>  create mode 100644 drivers/char/tpm/xen-tpmfront_vtpm.h
>  create mode 100644 include/xen/interface/io/tpmif.h
> 
> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> index 915875e..23d272f 100644
> --- a/drivers/char/tpm/Kconfig
> +++ b/drivers/char/tpm/Kconfig
> @@ -81,4 +81,15 @@ config TCG_IBMVTPM
>  	  will be accessible from within Linux.  To compile this driver
>  	  as a module, choose M here; the module will be called tpm_ibmvtpm.
>  
> +config TCG_XEN
> +	tristate "XEN TPM Interface"
> +	depends on TCG_TPM && XEN
> +	---help---
> +	  If you want to make TPM support available to a Xen user domain,
> +	  say Yes and it will be accessible from within Linux. See
> +	  the manpages for xl, xl.conf, and docs/misc/vtpm.txt in
> +	  the Xen source repository for more details.
> +	  To compile this driver as a module, choose M here; the module
> +	  will be called xen-tpmfront.
> +
>  endif # TCG_TPM
> diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
> index 5b3fc8b..0161f05 100644
> --- a/drivers/char/tpm/Makefile
> +++ b/drivers/char/tpm/Makefile
> @@ -17,3 +17,5 @@ obj-$(CONFIG_TCG_NSC) += tpm_nsc.o
>  obj-$(CONFIG_TCG_ATMEL) += tpm_atmel.o
>  obj-$(CONFIG_TCG_INFINEON) += tpm_infineon.o
>  obj-$(CONFIG_TCG_IBMVTPM) += tpm_ibmvtpm.o
> +obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
> +xen-tpmfront-y = xen-tpmfront_if.o xen-tpmfront_vtpm.o
> diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> index 8ef7649..b575892 100644
> --- a/drivers/char/tpm/tpm.h
> +++ b/drivers/char/tpm/tpm.h
> @@ -328,6 +328,16 @@ extern int tpm_pm_resume(struct device *);
>  extern int wait_for_tpm_stat(struct tpm_chip *, u8, unsigned long,
>  			     wait_queue_head_t *);
>  
> +static inline void *chip_get_private(const struct tpm_chip *chip)
> +{
> +	return chip->vendor.data;
> +}
> +
> +static inline void chip_set_private(struct tpm_chip *chip, void *priv)
> +{
> +	chip->vendor.data = priv;
> +}
> +
>  #ifdef CONFIG_ACPI
>  extern int tpm_add_ppi(struct kobject *);
>  extern void tpm_remove_ppi(struct kobject *);
> diff --git a/drivers/char/tpm/xen-tpmfront_if.c b/drivers/char/tpm/xen-tpmfront_if.c
> new file mode 100644
> index 0000000..ba7fad8
> --- /dev/null
> +++ b/drivers/char/tpm/xen-tpmfront_if.c
> @@ -0,0 +1,688 @@
> +/*
> + * Copyright (c) 2005, IBM Corporation
> + *
> + * Author: Stefan Berger, stefanb@us.ibm.com
> + * Grant table support: Mahadevan Gomathisankaran
> + *
> + * This code has been derived from drivers/xen/netfront/netfront.c
> + *
> + * Copyright (c) 2002-2004, K A Fraser
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/err.h>
> +#include <linux/interrupt.h>
> +#include <linux/mutex.h>
> +#include <linux/uaccess.h>
> +#include <xen/events.h>
> +#include <xen/interface/grant_table.h>
> +#include <xen/interface/io/tpmif.h>
> +#include <xen/grant_table.h>
> +#include <xen/xenbus.h>
> +#include <xen/page.h>
> +#include "tpm.h"
> +#include "xen-tpmfront_vtpm.h"
> +
> +#define GRANT_INVALID_REF 0
> +
> +/* local structures */
> +struct tpm_private {
> +	struct tpm_chip *chip;
> +
> +	struct tpmif_tx_interface *tx;
> +	atomic_t refcnt;
> +	unsigned int evtchn;
> +	u8 is_connected;
> +	u8 is_suspended;

Perhaps those two could be 'bool'?
> +
> +	spinlock_t tx_lock;
> +
> +	struct tx_buffer *tx_buffers[TPMIF_TX_RING_SIZE];
> +
> +	atomic_t tx_busy;
> +	void *tx_remember;
> +
> +	domid_t backend_id;
> +	wait_queue_head_t wait_q;
> +
> +	struct xenbus_device *dev;
> +	int ring_ref;
> +};
> +
> +struct tx_buffer {
> +	unsigned int size;	/* available space in data */
> +	unsigned int len;	/* used space in data */
> +	unsigned char *data;	/* pointer to a page */
> +};
> +
> +
> +/* locally visible variables */
> +static grant_ref_t gref_head;
> +static struct tpm_private *my_priv;

Should 'my_priv' have a lock?

> +
> +/* local function prototypes */
> +static irqreturn_t tpmif_int(int irq,
> +		void *tpm_priv);
> +static void tpmif_rx_action(unsigned long unused);
> +static int tpmif_connect(struct xenbus_device *dev,
> +		struct tpm_private *tp,
> +		domid_t domid);
> +static DECLARE_TASKLET(tpmif_rx_tasklet, tpmif_rx_action, 0);

Why make it a tasklet and not a thread? That way you could also
drop the GFP_ATOMIC allocations and use GFP_KERNEL instead.

> +static int tpmif_allocate_tx_buffers(struct tpm_private *tp);
> +static void tpmif_free_tx_buffers(struct tpm_private *tp);
> +static void tpmif_set_connected_state(struct tpm_private *tp,
> +		u8 newstate);
> +static int tpm_xmit(struct tpm_private *tp,
> +		const u8 *buf, size_t count, int userbuffer,
> +		void *remember);
> +static void destroy_tpmring(struct tpm_private *tp);
> +
> +static inline int
> +tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
> +		int isuserbuffer)
> +{
> +	int copied = len;
> +
> +	if (len > txb->size)
> +		copied = txb->size;

 'len' is an 'int' and 'txb->size' is an 'unsigned int'.
In the comparison 'len > txb->size' the usual arithmetic
conversions promote 'len' to unsigned, so a 'len' of -1
becomes UINT_MAX and the comparison is always true. We
would then copy txb->size bytes instead of 0, and
depending on 'src' this might blow up.

Perhaps a check that len is greater than zero? Or maybe
not using 'int' for 'len' at all?


> +	if (isuserbuffer) {
> +		if (copy_from_user(txb->data, src, copied))
> +			return -EFAULT;
> +	} else {
> +		memcpy(txb->data, src, copied);
> +	}
> +	txb->len = len;
> +	return copied;

I think just making it 'unsigned int' would be easier.

> +}
> +
> +static inline struct tx_buffer *tx_buffer_alloc(void)
> +{
> +	struct tx_buffer *txb;
> +
> +	txb = kzalloc(sizeof(struct tx_buffer), GFP_KERNEL);
> +	if (!txb)
> +		return NULL;
> +
> +	txb->len = 0;
> +	txb->size = PAGE_SIZE;
> +	txb->data = (unsigned char *)__get_free_page(GFP_KERNEL);
> +	if (txb->data == NULL) {
> +		kfree(txb);
> +		txb = NULL;
> +	}
> +
> +	return txb;
> +}
> +
> +
> +static inline void tx_buffer_free(struct tx_buffer *txb)
> +{
> +	if (txb) {
> +		free_page((long)txb->data);
> +		kfree(txb);
> +	}
> +}
> +
> +/**************************************************************
> +  Utility function for the tpm_private structure
> + **************************************************************/
> +static void tpm_private_init(struct tpm_private *tp)
> +{
> +	spin_lock_init(&tp->tx_lock);
> +	init_waitqueue_head(&tp->wait_q);
> +	atomic_set(&tp->refcnt, 1);
> +}
> +
> +static void tpm_private_put(void)
> +{
> +	if (!atomic_dec_and_test(&my_priv->refcnt))
> +		return;

So no lock - what happens if you have _two_ threads
calling this? The first decrements the refcnt, decides
it's time to clean up - goes through the kfree and sets
my_priv to NULL. At the same instant the other thread
calls 'atomic_dec_and_test' on my_priv, which has just
been set to NULL. Boom!

Unless there is some lock held _before_ tpm_private_put
is called - in which case you should add a comment about it.
> +
> +	tpmif_free_tx_buffers(my_priv);
> +	kfree(my_priv);
> +	my_priv = NULL;
> +}
> +
> +static struct tpm_private *tpm_private_get(void)
> +{
> +	int err;
> +
> +	if (my_priv) {
> +		atomic_inc(&my_priv->refcnt);
> +		return my_priv;
> +	}
> +
> +	my_priv = kzalloc(sizeof(struct tpm_private), GFP_KERNEL);
> +	if (!my_priv)
> +		return NULL;
> +
> +	tpm_private_init(my_priv);
> +	err = tpmif_allocate_tx_buffers(my_priv);
> +	if (err < 0)
> +		tpm_private_put();
> +
> +	return my_priv;
> +}
> +
> +/**************************************************************
> +
> +  The interface to let the tpm plugin register its callback
> +  function and send data to another partition using this module
> +
> + **************************************************************/
> +
> +static DEFINE_MUTEX(suspend_lock);
> +/*
> + * Send data via this module by calling this function
> + */
> +int vtpm_vd_send(struct tpm_private *tp,

I think this can be static?

> +		const u8 *buf, size_t count, void *ptr)
> +{
> +	int sent;
> +
> +	mutex_lock(&suspend_lock);
> +	sent = tpm_xmit(tp, buf, count, 0, ptr);
> +	mutex_unlock(&suspend_lock);
> +
> +	return sent;
> +}
> +
> +/**************************************************************
> +  XENBUS support code
> + **************************************************************/
> +
> +static int setup_tpmring(struct xenbus_device *dev,
> +		struct tpm_private *tp)
> +{
> +	struct tpmif_tx_interface *sring;
> +	int err;
> +
> +	tp->ring_ref = GRANT_INVALID_REF;
> +
> +	sring = (void *)__get_free_page(GFP_KERNEL);
> +	if (!sring) {
> +		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
> +		return -ENOMEM;
> +	}
> +	tp->tx = sring;
> +
> +	err = xenbus_grant_ring(dev, virt_to_mfn(tp->tx));
> +	if (err < 0) {
> +		free_page((unsigned long)sring);
> +		tp->tx = NULL;
> +		xenbus_dev_fatal(dev, err, "allocating grant reference");
> +		goto fail;
> +	}
> +	tp->ring_ref = err;
> +
> +	err = tpmif_connect(dev, tp, dev->otherend_id);
> +	if (err)
> +		goto fail;
> +
> +	return 0;
> +fail:
> +	destroy_tpmring(tp);
> +	return err;
> +}
> +
> +
> +static void destroy_tpmring(struct tpm_private *tp)
> +{
> +	tpmif_set_connected_state(tp, 0);
> +
> +	if (tp->ring_ref != GRANT_INVALID_REF) {
> +		gnttab_end_foreign_access(tp->ring_ref,
> +				0, (unsigned long)tp->tx);
> +		tp->ring_ref = GRANT_INVALID_REF;
> +		tp->tx = NULL;

Shouldn't you free_page it first? Looking at the code below, if
it fails at:

	err = tpmif_connect(dev, tp, dev->otherend_id);

then we go to 'destroy_tpmring' which would tear down the
connection. But we still have tp->tx page allocated at
that point?

> +	}
> +
> +	if (tp->evtchn)
> +		unbind_from_irqhandler(irq_from_evtchn(tp->evtchn), tp);
> +
> +	tp->evtchn = GRANT_INVALID_REF;
> +}
> +
> +
> +static int talk_to_backend(struct xenbus_device *dev,
> +		struct tpm_private *tp)
> +{
> +	const char *message = NULL;
> +	int err;
> +	struct xenbus_transaction xbt;
> +
> +	err = setup_tpmring(dev, tp);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "setting up ring");
> +		goto out;
> +	}
> +
> +again:
> +	err = xenbus_transaction_start(&xbt);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "starting transaction");
> +		goto destroy_tpmring;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename,
> +			"ring-ref", "%u", tp->ring_ref);
> +	if (err) {
> +		message = "writing ring-ref";
> +		goto abort_transaction;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename, "event-channel", "%u",
> +			tp->evtchn);
> +	if (err) {
> +		message = "writing event-channel";
> +		goto abort_transaction;
> +	}
> +
> +	err = xenbus_transaction_end(xbt, 0);
> +	if (err == -EAGAIN)
> +		goto again;
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "completing transaction");
> +		goto destroy_tpmring;
> +	}
> +
> +	xenbus_switch_state(dev, XenbusStateConnected);
> +
> +	return 0;
> +
> +abort_transaction:
> +	xenbus_transaction_end(xbt, 1);
> +	if (message)
> +		xenbus_dev_error(dev, err, "%s", message);
> +destroy_tpmring:
> +	destroy_tpmring(tp);
> +out:
> +	return err;
> +}
> +
> +/**
> + * Callback received when the backend's state changes.
> + */
> +static void backend_changed(struct xenbus_device *dev,
> +		enum xenbus_state backend_state)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +
> +	switch (backend_state) {
> +	case XenbusStateInitialising:
> +	case XenbusStateInitWait:
> +	case XenbusStateInitialised:
> +	case XenbusStateReconfiguring:
> +	case XenbusStateReconfigured:
> +	case XenbusStateUnknown:
> +		break;
> +
> +	case XenbusStateConnected:
> +		tpmif_set_connected_state(tp, 1);

Can that '1' be a true or false?

> +		break;
> +
> +	case XenbusStateClosing:
> +		tpmif_set_connected_state(tp, 0);

The same here.
> +		xenbus_frontend_closed(dev);
> +		break;
> +
> +	case XenbusStateClosed:
> +		tpmif_set_connected_state(tp, 0);

And here.
> +		if (tp->is_suspended == 0)
> +			device_unregister(&dev->dev);
> +		xenbus_frontend_closed(dev);
> +		break;
> +	}
> +}
> +
> +static int tpmfront_probe(struct xenbus_device *dev,
> +		const struct xenbus_device_id *id)
> +{
> +	int err;
> +	int handle;
> +	struct tpm_private *tp = tpm_private_get();
> +
> +	if (!tp)
> +		return -ENOMEM;
> +
> +	tp->chip = init_vtpm(&dev->dev, tp);
> +	if (IS_ERR(tp->chip))
> +		return PTR_ERR(tp->chip);
> +
> +	err = xenbus_scanf(XBT_NIL, dev->nodename,
> +			"handle", "%i", &handle);

Not '%li'? Ah, I guess not, since it is 'int'. But why not
'long'? (Xen netback looks to be using 'long'.)

> +	if (XENBUS_EXIST_ERR(err))
> +		return err;
> +
> +	if (err < 0) {
> +		xenbus_dev_fatal(dev, err, "reading virtual-device");
> +		return err;
> +	}
> +
> +	tp->dev = dev;
> +
> +	err = talk_to_backend(dev, tp);
> +	if (err) {
> +		tpm_private_put();
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +
> +static int __devexit tpmfront_remove(struct xenbus_device *dev)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +	destroy_tpmring(tp);
> +	cleanup_vtpm(&dev->dev);
> +	return 0;
> +}
> +
> +static int tpmfront_suspend(struct xenbus_device *dev)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +	u32 ctr;
> +
> +	/* Take the lock, preventing any application from sending. */
> +	mutex_lock(&suspend_lock);
> +	tp->is_suspended = 1;

bool?

> +
> +	for (ctr = 0; atomic_read(&tp->tx_busy); ctr++) {
> +		/* Wait for a request to be responded to. */
> +		interruptible_sleep_on_timeout(&tp->wait_q, 100);

So if the request never comes back (b/c the backend crashed), then
what should we do?

> +	}
> +

Why no 'mutex_unlock' here?

> +	return 0;
> +}
> +
> +static int tpmfront_suspend_finish(struct tpm_private *tp)
> +{

No mutex_lock ?

> +	tp->is_suspended = 0;
> +	/* Allow applications to send again. */

So you are holding the mutex across the different backend
states? Can you explain when this function gets called as a result
of tpmfront_suspend?

> +	mutex_unlock(&suspend_lock);
> +	return 0;
> +}
> +
> +static int tpmfront_resume(struct xenbus_device *dev)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +	destroy_tpmring(tp);
> +	return talk_to_backend(dev, tp);
> +}
> +
> +static int tpmif_connect(struct xenbus_device *dev,
> +		struct tpm_private *tp,
> +		domid_t domid)
> +{
> +	int err;
> +
> +	tp->backend_id = domid;
> +	tp->evtchn = GRANT_INVALID_REF;
> +
> +	err = xenbus_alloc_evtchn(dev, &tp->evtchn);
> +	if (err)
> +		return err;
> +
> +	err = bind_evtchn_to_irqhandler(tp->evtchn, tpmif_int,
> +			0, "tpmif", tp);
> +	if (err <= 0)
> +		return err;
> +
> +	return 0;
> +}
> +
> +static const struct xenbus_device_id tpmfront_ids[] = {
> +	{ "vtpm" },

Hm, I thought it would be 'vtpm2'?

> +	{ "" }
> +};
> +MODULE_ALIAS("xen:vtpm");
> +
> +static DEFINE_XENBUS_DRIVER(tpmfront, ,
> +		.probe = tpmfront_probe,
> +		.remove =  __devexit_p(tpmfront_remove),
> +		.resume = tpmfront_resume,
> +		.otherend_changed = backend_changed,
> +		.suspend = tpmfront_suspend,
> +		);
> +
> +static int tpmif_allocate_tx_buffers(struct tpm_private *tp)

> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < TPMIF_TX_RING_SIZE; i++) {
> +		tp->tx_buffers[i] = tx_buffer_alloc();
> +		if (!tp->tx_buffers[i]) {
> +			tpmif_free_tx_buffers(tp);
> +			return -ENOMEM;
> +		}
> +	}
> +	return 0;
> +}
> +
> +static void tpmif_free_tx_buffers(struct tpm_private *tp)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < TPMIF_TX_RING_SIZE; i++)
> +		tx_buffer_free(tp->tx_buffers[i]);
> +}
> +
> +static void tpmif_rx_action(unsigned long priv)
> +{
> +	struct tpm_private *tp = (struct tpm_private *)priv;
> +	int i = 0;
> +	unsigned int received;
> +	unsigned int offset = 0;
> +	u8 *buffer;
> +	struct tpmif_tx_request *tx = &tp->tx->ring[i].req;
> +
> +	atomic_set(&tp->tx_busy, 0);
> +	wake_up_interruptible(&tp->wait_q);
> +
> +	received = tx->size;

No check whether the size is out of whack? Say, bigger than PAGE_SIZE?
The ring could have been corrupted.

> +
> +	buffer = kmalloc(received, GFP_ATOMIC);

No 'kzalloc' ?

> +	if (!buffer)
> +		return;
> +
> +	for (i = 0; i < TPMIF_TX_RING_SIZE && offset < received; i++) {
> +		struct tx_buffer *txb = tp->tx_buffers[i];
> +		struct tpmif_tx_request *tx;
> +		unsigned int tocopy;
> +
> +		tx = &tp->tx->ring[i].req;
> +		tocopy = tx->size;

So.. on the first loop iteration you read tx->size from the same
slot that 'received' came from. But on later iterations tx->size
comes from other requests. If the first tx->size was 8 bytes and
the next request also claimed 8, wouldn't we overflow 'buffer',
which was only allocated to hold 8 bytes?

> +		if (tocopy > PAGE_SIZE)
> +			tocopy = PAGE_SIZE;
> +
> +		memcpy(&buffer[offset], txb->data, tocopy);
> +
> +		gnttab_release_grant_reference(&gref_head, tx->ref);
> +
> +		offset += tocopy;
> +	}
> +
> +	vtpm_vd_recv(tp->chip, buffer, received, tp->tx_remember);
> +	kfree(buffer);
> +}
> +
> +
> +static irqreturn_t tpmif_int(int irq, void *tpm_priv)
> +{
> +	struct tpm_private *tp = tpm_priv;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tp->tx_lock, flags);
> +	tpmif_rx_tasklet.data = (unsigned long)tp;
> +	tasklet_schedule(&tpmif_rx_tasklet);
> +	spin_unlock_irqrestore(&tp->tx_lock, flags);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +
> +static int tpm_xmit(struct tpm_private *tp,
> +		const u8 *buf, size_t count, int isuserbuffer,
> +		void *remember)
> +{
> +	struct tpmif_tx_request *tx;
> +	int i;
> +	unsigned int offset = 0;
> +
> +	spin_lock_irq(&tp->tx_lock);
> +
> +	if (unlikely(atomic_read(&tp->tx_busy))) {
> +		spin_unlock_irq(&tp->tx_lock);
> +		return -EBUSY;
> +	}
> +
> +	if (tp->is_connected != 1) {

Use a bool here.
> +		spin_unlock_irq(&tp->tx_lock);
> +		return -EIO;
> +	}
> +
> +	for (i = 0; count > 0 && i < TPMIF_TX_RING_SIZE; i++) {
> +		struct tx_buffer *txb = tp->tx_buffers[i];
> +		int copied;
> +
> +		if (!txb) {
> +			spin_unlock_irq(&tp->tx_lock);
> +			return -EFAULT;
> +		}
> +
> +		copied = tx_buffer_copy(txb, &buf[offset], count,
> +				isuserbuffer);
> +		if (copied < 0) {
> +			/* An error occurred */
> +			spin_unlock_irq(&tp->tx_lock);
> +			return copied;
> +		}
> +		count -= copied;
> +		offset += copied;
> +
> +		tx = &tp->tx->ring[i].req;
> +		tx->addr = virt_to_machine(txb->data).maddr;
> +		tx->size = txb->len;
> +		tx->unused = 0;
> +
> +		/* Get the granttable reference for this page. */
> +		tx->ref = gnttab_claim_grant_reference(&gref_head);
> +		if (tx->ref == -ENOSPC) {
> +			spin_unlock_irq(&tp->tx_lock);
> +			return -ENOSPC;
> +		}
> +		gnttab_grant_foreign_access_ref(tx->ref,
> +				tp->backend_id,
> +				virt_to_mfn(txb->data),
> +				0 /*RW*/);
> +		wmb();
> +	}
> +
> +	atomic_set(&tp->tx_busy, 1);
> +	tp->tx_remember = remember;
> +
> +	mb();
> +
> +	notify_remote_via_evtchn(tp->evtchn);
> +
> +	spin_unlock_irq(&tp->tx_lock);
> +	return offset;
> +}
> +
> +
> +static void tpmif_notify_upperlayer(struct tpm_private *tp)
> +{
> +	/* Notify upper layer about the state of the connection to the BE. */
> +	vtpm_vd_status(tp->chip, (tp->is_connected
> +				? TPM_VD_STATUS_CONNECTED
> +				: TPM_VD_STATUS_DISCONNECTED));
> +}
> +
> +
> +static void tpmif_set_connected_state(struct tpm_private *tp, u8 is_connected)
> +{
> +	/*
> +	 * Don't notify upper layer if we are in suspend mode and
> +	 * should disconnect - assumption is that we will resume
> +	 * The mutex keeps apps from sending.
> +	 */
> +	if (is_connected == 0 && tp->is_suspended == 1)

These could be bools as well.

> +		return;
> +
> +	/*
> +	 * Unlock the mutex if we are connected again
> +	 * after being suspended - now resuming.
> +	 * This also removes the suspend state.
> +	 */
> +	if (is_connected == 1 && tp->is_suspended == 1)
> +		tpmfront_suspend_finish(tp);
> +
> +	if (is_connected != tp->is_connected) {
> +		tp->is_connected = is_connected;
> +		tpmif_notify_upperlayer(tp);
> +	}
> +}
> +
> +
> +
> +/* =================================================================
> + * Initialization function.
> + * =================================================================
> + */
> +
> +
> +static int __init tpmif_init(void)
> +{
> +	struct tpm_private *tp;
> +
> +	if (!xen_domain())
> +		return -ENODEV;

So this can run in HVM, PV, and dom0. You tested it in all of
the cases?

> +
> +	tp = tpm_private_get();
> +	if (!tp)
> +		return -ENOMEM;
> +
> +	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
> +				&gref_head) < 0) {
> +		tpm_private_put();
> +		return -EFAULT;
> +	}
> +
> +	return xenbus_register_frontend(&tpmfront_driver);
> +}
> +module_init(tpmif_init);
> +
> +static void __exit tpmif_exit(void)
> +{
> +	xenbus_unregister_driver(&tpmfront_driver);
> +	gnttab_free_grant_references(gref_head);
> +	tpm_private_put();
> +}
> +module_exit(tpmif_exit);
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> diff --git a/drivers/char/tpm/xen-tpmfront_vtpm.c b/drivers/char/tpm/xen-tpmfront_vtpm.c
> new file mode 100644
> index 0000000..d70f1df
> --- /dev/null
> +++ b/drivers/char/tpm/xen-tpmfront_vtpm.c
> @@ -0,0 +1,543 @@
> +/*
> + * Copyright (C) 2006 IBM Corporation
> + *
> + * Authors:
> + * Stefan Berger <stefanb@us.ibm.com>
> + *
> + * Generic device driver part for device drivers in a virtualized
> + * environment.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation, version 2 of the
> + * License.
> + *
> + */
> +
> +#include <linux/uaccess.h>
> +#include <linux/list.h>
> +#include <linux/device.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include "tpm.h"
> +#include "xen-tpmfront_vtpm.h"
> +
> +/* read status bits */
> +enum {
> +	STATUS_BUSY = 0x01,
> +	STATUS_DATA_AVAIL = 0x02,
> +	STATUS_READY = 0x04
> +};

And for what variable is this used? Is this
related to the 'struct transmission' ?

> +
> +struct transmission {
> +	struct list_head next;
> +
> +	unsigned char *request;
> +	size_t  request_len;
> +	size_t  request_buflen;
> +
> +	unsigned char *response;
> +	size_t  response_len;
> +	size_t  response_buflen;
> +
> +	unsigned int flags;

I presume this is for DATAEX_FLAG_QUEUED_ONLY?
> +};
> +
> +enum {
> +	TRANSMISSION_FLAG_WAS_QUEUED = 0x1

Hmm, for which variable is this?

> +};
> +
> +
> +enum {
> +	DATAEX_FLAG_QUEUED_ONLY = 0x1

You should have one for the default entry of '0' as well.
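i.e. something like this (DATAEX_FLAG_NONE is just a suggested name):

```c
enum {
	DATAEX_FLAG_NONE        = 0x0,	/* explicit default */
	DATAEX_FLAG_QUEUED_ONLY = 0x1
};
```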
> +};
> +
> +
> +/* local variables */
> +
> +/* local function prototypes */
> +static int _vtpm_send_queued(struct tpm_chip *chip);
> +
> +
> +/* =============================================================
> + * Some utility functions
> + * =============================================================
> + */
> +static void vtpm_state_init(struct vtpm_state *vtpms)
> +{
> +	vtpms->current_request = NULL;
> +	spin_lock_init(&vtpms->req_list_lock);
> +	init_waitqueue_head(&vtpms->req_wait_queue);
> +	INIT_LIST_HEAD(&vtpms->queued_requests);
> +
> +	vtpms->current_response = NULL;
> +	spin_lock_init(&vtpms->resp_list_lock);
> +	init_waitqueue_head(&vtpms->resp_wait_queue);
> +
> +	vtpms->disconnect_time = jiffies;
> +}
> +
> +
> +static inline struct transmission *transmission_alloc(void)
> +{
> +	return kzalloc(sizeof(struct transmission), GFP_ATOMIC);

GFP_ATOMIC? Not GFP_KERNEL? Ah, that is b/c you are using a tasklet.
Why not use a thread?

> +}
> +
> +static unsigned char *
> +transmission_set_req_buffer(struct transmission *t,
> +		unsigned char *buffer, size_t len)
> +{
> +	if (t->request_buflen < len) {
> +		kfree(t->request);
> +		t->request = kmalloc(len, GFP_KERNEL);
> +		if (!t->request) {
> +			t->request_buflen = 0;
> +			return NULL;
> +		}
> +		t->request_buflen = len;
> +	}
> +
> +	memcpy(t->request, buffer, len);
> +	t->request_len = len;
> +
> +	return t->request;
> +}
> +
> +static unsigned char *
> +transmission_set_res_buffer(struct transmission *t,
> +		const unsigned char *buffer, size_t len)
> +{
> +	if (t->response_buflen < len) {
> +		kfree(t->response);
> +		t->response = kmalloc(len, GFP_ATOMIC);
> +		if (!t->response) {
> +			t->response_buflen = 0;
> +			return NULL;
> +		}
> +		t->response_buflen = len;
> +	}
> +
> +	memcpy(t->response, buffer, len);
> +	t->response_len = len;
> +
> +	return t->response;
> +}

You could collapse these two functions, and in the
'struct transmission' have something like this:

struct payload {
	unsigned char* data;
	ssize_t	len;
	ssize_t buflen;
};

struct transmission {
	struct list_head next;
	struct payload request;
	struct payload response;
	...
}

And then just pass in 'struct payload' to one of those
functions.
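For instance (untested userspace sketch -- malloc/free stand in for
kmalloc/kfree, and the kernel version would also want a gfp_t argument,
since one caller uses GFP_KERNEL and the other GFP_ATOMIC):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct payload {
	unsigned char *data;
	size_t len;     /* bytes currently stored */
	size_t buflen;  /* bytes allocated */
};

/* One function replaces both transmission_set_req_buffer() and
 * transmission_set_res_buffer(): grow the buffer only if it is too
 * small, then copy the new contents in. */
static unsigned char *payload_set(struct payload *p,
				  const unsigned char *buffer, size_t len)
{
	if (p->buflen < len) {
		free(p->data);
		p->data = malloc(len);
		if (!p->data) {
			p->buflen = 0;
			return NULL;
		}
		p->buflen = len;
	}
	memcpy(p->data, buffer, len);
	p->len = len;
	return p->data;
}
```

Then 'struct transmission' just carries a 'struct payload request' and
a 'struct payload response', and both call sites use the one helper.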

> +static inline void transmission_free(struct transmission *t)
> +{
> +	kfree(t->request);
> +	kfree(t->response);
> +	kfree(t);
> +}
> +
> +/* =============================================================
> + * Interface with the lower layer driver
> + * =============================================================
> + */
> +/*
> + * Lower layer uses this function to make a response available.
> + */
> +int vtpm_vd_recv(const struct tpm_chip *chip,
> +		const unsigned char *buffer, size_t count,
> +		void *ptr)
> +{
> +	unsigned long flags;
> +	int ret_size = 0;
> +	struct transmission *t;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	/*
> +	 * The list with requests must contain one request
> +	 * only and the element there must be the one that
> +	 * was passed to me from the front-end.
> +	 */
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	if (vtpms->current_request != ptr) {
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		return 0;
> +	}
> +	t = vtpms->current_request;
> +	if (t) {
> +		transmission_free(t);
> +		vtpms->current_request = NULL;
> +	}
> +
> +	t = transmission_alloc();
> +	if (t) {
> +		if (!transmission_set_res_buffer(t, buffer, count)) {
> +			transmission_free(t);
> +			spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +			return -ENOMEM;
> +		}
> +		ret_size = count;
> +		vtpms->current_response = t;
> +		wake_up_interruptible(&vtpms->resp_wait_queue);
> +	}
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +
> +	return ret_size;
> +}
> +
> +
> +/*
> + * Lower layer indicates its status (connected/disconnected)
> + */
> +void vtpm_vd_status(const struct tpm_chip *chip, u8 vd_status)
> +{
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	vtpms->vd_status = vd_status;
> +	if ((vtpms->vd_status & TPM_VD_STATUS_CONNECTED) == 0)
> +		vtpms->disconnect_time = jiffies;
> +}
> +
> +/* =============================================================
> + * Interface with the generic TPM driver
> + * =============================================================
> + */
> +static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
> +{
> +	int rc = 0;
> +	unsigned long flags;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	/*
> +	 * Check if the previous operation only queued the command
> +	 * In this case there won't be a response, so I just
> +	 * return from here and reset that flag. In any other
> +	 * case I should receive a response from the back-end.
> +	 */
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	if ((vtpms->flags & DATAEX_FLAG_QUEUED_ONLY) != 0) {
> +		vtpms->flags &= ~DATAEX_FLAG_QUEUED_ONLY;
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		/*
> +		 * The first few commands (measurements) must be
> +		 * queued since it might not be possible to talk to the
> +		 * TPM, yet.
> +		 * Return a response of up to 30 '0's.

Is 30 a magic constant? Can you use a #define?

> +		 */
> +
> +		count = min_t(size_t, count, 30);
> +		memset(buf, 0x0, count);
> +		return count;
> +	}
> +	/*
> +	 * Check whether something is in the responselist and if
> +	 * there's nothing in the list wait for something to appear.
> +	 */
> +
> +	if (!vtpms->current_response) {
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		interruptible_sleep_on_timeout(&vtpms->resp_wait_queue,
> +				1000);
> +		spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	}
> +
> +	if (vtpms->current_response) {
> +		struct transmission *t = vtpms->current_response;
> +		vtpms->current_response = NULL;
> +		rc = min(count, t->response_len);
> +		memcpy(buf, t->response, rc);
> +		transmission_free(t);
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +	return rc;
> +}
> +
> +static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
> +{
> +	int rc = 0;
> +	unsigned long flags;
> +	struct transmission *t = transmission_alloc();
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	if (!t)
> +		return -ENOMEM;
> +	/*
> +	 * If there's a current request, it must be the
> +	 * previous request that has timed out.
> +	 */
> +	spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +	if (vtpms->current_request != NULL) {
> +		dev_warn(chip->dev, "Sending although there is a request outstanding.\n"
> +				"         Previous request must have timed out.\n");
> +		transmission_free(vtpms->current_request);
> +		vtpms->current_request = NULL;
> +	}
> +	spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +
> +	/*
> +	 * Queue the packet if the driver below is not
> +	 * ready, yet, or there is any packet already
> +	 * in the queue.
> +	 * If the driver below is ready, unqueue all
> +	 * packets first before sending our current
> +	 * packet.
> +	 * For each unqueued packet, except for the
> +	 * last (=current) packet, call the function
> +	 * tpm_xen_recv to wait for the response to come
> +	 * back.
> +	 */
> +	if ((vtpms->vd_status & TPM_VD_STATUS_CONNECTED) == 0) {
> +		if (time_after(jiffies,
> +					vtpms->disconnect_time + HZ * 10)) {
> +			rc = -ENOENT;
> +		} else {
> +			goto queue_it;
> +		}
> +	} else {
> +		/*
> +		 * Send all queued packets.
> +		 */
> +		if (_vtpm_send_queued(chip) == 0) {
> +
> +			vtpms->current_request = t;
> +
> +			rc = vtpm_vd_send(vtpms->tpm_private,
> +					buf,
> +					count,
> +					t);
> +			/*
> +			 * The generic TPM driver will call
> +			 * the function to receive the response.
> +			 */
> +			if (rc < 0) {
> +				vtpms->current_request = NULL;
> +				goto queue_it;
> +			}
> +		} else {
> +queue_it:
> +			if (!transmission_set_req_buffer(t, buf, count)) {
> +				transmission_free(t);
> +				rc = -ENOMEM;
> +				goto exit;
> +			}
> +			/*
> +			 * An error occurred. Don't event try
> +			 * to send the current request. Just
> +			 * queue it.
> +			 */
> +			spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +			vtpms->flags |= DATAEX_FLAG_QUEUED_ONLY;
> +			list_add_tail(&t->next, &vtpms->queued_requests);
> +			spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +		}
> +	}
> +
> +exit:
> +	return rc;
> +}
> +
> +
> +/*
> + * Send all queued requests.
> + */
> +static int _vtpm_send_queued(struct tpm_chip *chip)
> +{
> +	int rc;
> +	int error = 0;
> +	unsigned long flags;
> +	unsigned char buffer[1];
> +	struct vtpm_state *vtpms;
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +
> +	while (!list_empty(&vtpms->queued_requests)) {
> +		/*
> +		 * Need to dequeue them.
> +		 * Read the result into a dummy buffer.
> +		 */
> +		struct transmission *qt = (struct transmission *)
> +			vtpms->queued_requests.next;
> +		list_del(&qt->next);
> +		vtpms->current_request = qt;
> +		spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +
> +		rc = vtpm_vd_send(vtpms->tpm_private,
> +				qt->request,
> +				qt->request_len,
> +				qt);
> +
> +		if (rc < 0) {
> +			spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +			qt = vtpms->current_request;
> +			if (qt != NULL) {
> +				/*
> +				 * requeue it at the beginning
> +				 * of the list
> +				 */
> +				list_add(&qt->next,
> +						&vtpms->queued_requests);
> +			}
> +			vtpms->current_request = NULL;
> +			error = 1;
> +			break;
> +		}
> +		/*
> +		 * After this point qt is not valid anymore!
> +		 * It is freed when the front-end is delivering
> +		 * the data by calling tpm_recv
> +		 */
> +		/*
> +		 * Receive response into provided dummy buffer
> +		 */
> +		rc = vtpm_recv(chip, buffer, sizeof(buffer));
> +		spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +
> +	return error;
> +}
> +
> +static void vtpm_cancel(struct tpm_chip *chip)
> +{
> +	unsigned long flags;
> +	struct vtpm_state *vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +
> +	if (!vtpms->current_response && vtpms->current_request) {
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		interruptible_sleep_on(&vtpms->resp_wait_queue);
> +		spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	}
> +
> +	if (vtpms->current_response) {
> +		struct transmission *t = vtpms->current_response;
> +		vtpms->current_response = NULL;
> +		transmission_free(t);
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +}
> +
> +static u8 vtpm_status(struct tpm_chip *chip)
> +{
> +	u8 rc = 0;
> +	unsigned long flags;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	/*
> +	 * Data are available if:
> +	 *  - there's a current response
> +	 *  - the last packet was queued only (this is fake, but necessary to
> +	 *      get the generic TPM layer to call the receive function.)
> +	 */
> +	if (vtpms->current_response ||
> +			0 != (vtpms->flags & DATAEX_FLAG_QUEUED_ONLY)) {
> +		rc = STATUS_DATA_AVAIL;
> +	} else if (!vtpms->current_response && !vtpms->current_request) {
> +		rc = STATUS_READY;
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +	return rc;
> +}
> +
> +static const struct file_operations vtpm_ops = {
> +	.owner = THIS_MODULE,
> +	.llseek = no_llseek,
> +	.open = tpm_open,
> +	.read = tpm_read,
> +	.write = tpm_write,
> +	.release = tpm_release,
> +};
> +
> +static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL);
> +static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL);
> +static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL);
> +static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
> +static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
> +static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated,
> +		NULL);
> +static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
> +static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
> +
> +static struct attribute *vtpm_attrs[] = {
> +	&dev_attr_pubek.attr,
> +	&dev_attr_pcrs.attr,
> +	&dev_attr_enabled.attr,
> +	&dev_attr_active.attr,
> +	&dev_attr_owned.attr,
> +	&dev_attr_temp_deactivated.attr,
> +	&dev_attr_caps.attr,
> +	&dev_attr_cancel.attr,
> +	NULL,
> +};
> +
> +static struct attribute_group vtpm_attr_grp = { .attrs = vtpm_attrs };
> +
> +#define TPM_LONG_TIMEOUT   (10 * 60 * HZ)
> +
> +static struct tpm_vendor_specific tpm_vtpm = {
> +	.recv = vtpm_recv,
> +	.send = vtpm_send,
> +	.cancel = vtpm_cancel,
> +	.status = vtpm_status,
> +	.req_complete_mask = STATUS_BUSY | STATUS_DATA_AVAIL,
> +	.req_complete_val  = STATUS_DATA_AVAIL,
> +	.req_canceled = STATUS_READY,
> +	.attr_group = &vtpm_attr_grp,
> +	.miscdev = {
> +		.fops = &vtpm_ops,
> +	},
> +	.duration = {
> +		TPM_LONG_TIMEOUT,
> +		TPM_LONG_TIMEOUT,
> +		TPM_LONG_TIMEOUT,
> +	},
> +};
> +
> +struct tpm_chip *init_vtpm(struct device *dev,
> +		struct tpm_private *tp)
> +{
> +	long rc;
> +	struct tpm_chip *chip;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = kzalloc(sizeof(struct vtpm_state), GFP_KERNEL);
> +	if (!vtpms)
> +		return ERR_PTR(-ENOMEM);
> +
> +	vtpm_state_init(vtpms);
> +	vtpms->tpm_private = tp;
> +
> +	chip = tpm_register_hardware(dev, &tpm_vtpm);
> +	if (!chip) {
> +		rc = -ENODEV;
> +		goto err_free_mem;
> +	}
> +
> +	chip_set_private(chip, vtpms);
> +
> +	return chip;
> +
> +err_free_mem:
> +	kfree(vtpms);
> +
> +	return ERR_PTR(rc);
> +}
> +
> +void cleanup_vtpm(struct device *dev)
> +{
> +	struct tpm_chip *chip = dev_get_drvdata(dev);
> +	struct vtpm_state *vtpms = (struct vtpm_state *)chip_get_private(chip);
> +	tpm_remove_hardware(dev);
> +	kfree(vtpms);
> +}
> diff --git a/drivers/char/tpm/xen-tpmfront_vtpm.h b/drivers/char/tpm/xen-tpmfront_vtpm.h
> new file mode 100644
> index 0000000..16cf323
> --- /dev/null
> +++ b/drivers/char/tpm/xen-tpmfront_vtpm.h
> @@ -0,0 +1,55 @@
> +#ifndef XEN_TPMFRONT_VTPM_H
> +#define XEN_TPMFRONT_VTPM_H
> +
> +struct tpm_chip;
> +struct tpm_private;
> +
> +struct vtpm_state {
> +	struct transmission *current_request;
> +	spinlock_t           req_list_lock;
> +	wait_queue_head_t    req_wait_queue;
> +
> +	struct list_head     queued_requests;
> +
> +	struct transmission *current_response;
> +	spinlock_t           resp_list_lock;
> +	wait_queue_head_t    resp_wait_queue;
> +
> +	u8                   vd_status;

There seems to be only two states: disconnected or connected.
Why not just make it a 'bool'?

> +	u8                   flags;

What are the flag options? Where are they enumerated?
> +
> +	unsigned long        disconnect_time;
> +
> +	/*
> +	 * The following is a private structure of the underlying
> +	 * driver. It is passed as parameter in the send function.
> +	 */
> +	struct tpm_private *tpm_private;
> +};
> +
> +
> +enum vdev_status {
> +	TPM_VD_STATUS_DISCONNECTED = 0x0,
> +	TPM_VD_STATUS_CONNECTED = 0x1
> +};
> +
> +/* this function is called from tpm_vtpm.c */

OK, then why does it need to be in this header file?

> +int vtpm_vd_send(struct tpm_private *tp,
> +		const u8 *buf, size_t count, void *ptr);
> +
> +/* these functions are offered by tpm_vtpm.c */
> +struct tpm_chip *init_vtpm(struct device *,
> +		struct tpm_private *);
> +void cleanup_vtpm(struct device *);
> +int vtpm_vd_recv(const struct tpm_chip *chip,
> +		const unsigned char *buffer, size_t count, void *ptr);
> +void vtpm_vd_status(const struct tpm_chip *, u8 status);
> +
> +static inline struct tpm_private *tpm_private_from_dev(struct device *dev)
> +{
> +	struct tpm_chip *chip = dev_get_drvdata(dev);
> +	struct vtpm_state *vtpms = (struct vtpm_state *)chip->vendor.data;
> +	return vtpms->tpm_private;
> +}
> +
> +#endif
> diff --git a/include/xen/interface/io/tpmif.h b/include/xen/interface/io/tpmif.h
> new file mode 100644
> index 0000000..c9e7294
> --- /dev/null
> +++ b/include/xen/interface/io/tpmif.h
> @@ -0,0 +1,65 @@
> +/******************************************************************************
> + * tpmif.h
> + *
> + * TPM I/O interface for Xen guest OSes.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2005, IBM Corporation
> + *
> + * Author: Stefan Berger, stefanb@us.ibm.com
> + * Grant table support: Mahadevan Gomathisankaran
> + *
> + * This code has been derived from tools/libxc/xen/io/netif.h
> + *
> + * Copyright (c) 2003-2004, Keir Fraser
> + */
> +
> +#ifndef __XEN_PUBLIC_IO_TPMIF_H__
> +#define __XEN_PUBLIC_IO_TPMIF_H__
> +
> +#include "../grant_table.h"
> +
> +struct tpmif_tx_request {
> +	unsigned long addr;   /* Machine address of packet.   */

unsigned long on 32-bit is 4 bytes, but on 64-bit is 8 bytes.

> +	grant_ref_t ref;      /* grant table access reference */

This entry is 4 bytes (uint32_t).
> +	uint16_t unused;
> +	uint16_t size;        /* Packet size in bytes.        */

And these are both 2 bytes.

The structure on 64-bit is: 8+4+2+2 = 16 bytes.
On 32-bit: 4+4+2+2 = 12 bytes.

The big problem is the alignment of the 'unsigned long', which
is '4' on 32-bit, so if you were to do 32-bit/64-bit communication,
the structure would not align. See here:


32-bit:
12
0==addr, 4==ref 8==unused, 10==size

64-bit:
16
0==addr, 8==ref 12==unused, 14==size

From:
        printf("%zu\n", sizeof(struct tpmif_tx_request));
        printf("%zu==addr, %zu==ref %zu==unused, %zu==size\n",
                offsetof(struct tpmif_tx_request, addr),
                offsetof(struct tpmif_tx_request, ref),
                offsetof(struct tpmif_tx_request, unused),
                offsetof(struct tpmif_tx_request, size));


Use uint64_t instead of 'unsigned long'; that will fix the
alignment issue.

> +};
> +struct tpmif_tx_request;
> +
> +/*
> + * The TPMIF_TX_RING_SIZE defines the number of pages the
> + * front-end and backend can exchange (= size of array).
> + */
> +#define TPMIF_TX_RING_SIZE 1

Are you sure? It looks to be the number of array entries; that is
how it is used in the code as well.

> +
> +/* This structure must fit in a memory page. */

Would it make sense to have a BUILD_BUG_ON to make sure?
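Something along these lines (userspace sketch: _Static_assert stands in
for the kernel's BUILD_BUG_ON, and the 4096 here is an assumed x86
PAGE_SIZE; grant_ref_t is assumed to be uint32_t):

```c
#include <stdint.h>

#define PAGE_SIZE 4096		/* assumption: x86 page size */
#define TPMIF_TX_RING_SIZE 1

typedef uint32_t grant_ref_t;	/* assumption: matches grant_table.h */

struct tpmif_tx_request {
	unsigned long addr;
	grant_ref_t ref;
	uint16_t unused;
	uint16_t size;
};

struct tpmif_ring {
	struct tpmif_tx_request req;
};

struct tpmif_tx_interface {
	struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
};

/* Fails the build if the shared interface ever outgrows one page; in
 * the kernel this would be BUILD_BUG_ON(sizeof(...) > PAGE_SIZE). */
_Static_assert(sizeof(struct tpmif_tx_interface) <= PAGE_SIZE,
	       "tpmif_tx_interface must fit in a single page");
```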

> +
> +struct tpmif_ring {
> +	struct tpmif_tx_request req;
> +};
> +struct tpmif_ring;
> +
> +struct tpmif_tx_interface {
> +	struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
> +};
> +struct tpmif_tx_interface;
> +
> +#endif
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:17:59 2012
Date: Fri, 21 Dec 2012 12:17:15 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Message-ID: <20121221171715.GE25526@phenom.dumpdata.com>
In-Reply-To: <1353421985-25398-1-git-send-email-matthew.fioravante@jhuapl.edu>
Cc: jeremy@goop.org, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mail@srajiv.net,
	tpmdd-devel@lists.sourceforge.net, key@linux.vnet.ibm.com
Subject: Re: [Xen-devel] [PATCH v3] add xen-tpmfront.ko: Xen Virtual TPM
	frontend driver

On Tue, Nov 20, 2012 at 09:33:05AM -0500, Matthew Fioravante wrote:

Hey Matthew,

Sorry for the late response. 


I ran this through checkpatch and got these:


CHECK: spinlock_t definition without comment
#476: FILE: drivers/char/tpm/xen-tpmfront_if.c:62:
+	spinlock_t tx_lock;

CHECK: Alignment should match open parenthesis
#520: FILE: drivers/char/tpm/xen-tpmfront_if.c:106:
+tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
+		int isuserbuffer)

CHECK: Alignment should match open parenthesis
#673: FILE: drivers/char/tpm/xen-tpmfront_if.c:259:
+		gnttab_end_foreign_access(tp->ring_ref,
+				0, (unsigned long)tp->tx);

WARNING: Avoid CamelCase: <XenbusStateConnected>
#727: FILE: drivers/char/tpm/xen-tpmfront_if.c:313:
+	xenbus_switch_state(dev, XenbusStateConnected);

CHECK: Alignment should match open parenthesis
#1014: FILE: drivers/char/tpm/xen-tpmfront_if.c:600:
+		gnttab_grant_foreign_access_ref(tx->ref,
+				tp->backend_id,

CHECK: memory barrier without comment
#1023: FILE: drivers/char/tpm/xen-tpmfront_if.c:609:
+	mb();

CHECK: Alignment should match open parenthesis
#1085: FILE: drivers/char/tpm/xen-tpmfront_if.c:671:
+	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
+				&gref_head) < 0) {

CHECK: Alignment should match open parenthesis
#1197: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:89:
+transmission_set_req_buffer(struct transmission *t,
+		unsigned char *buffer, size_t len)

CHECK: Alignment should match open parenthesis
#1217: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:109:
+transmission_set_res_buffer(struct transmission *t,
+		const unsigned char *buffer, size_t len)

CHECK: Alignment should match open parenthesis
#1348: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:240:
+		interruptible_sleep_on_timeout(&vtpms->resp_wait_queue,
+				1000);

CHECK: Alignment should match open parenthesis
#1402: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:294:
+		if (time_after(jiffies,
+					vtpms->disconnect_time + HZ * 10)) {

CHECK: Blank lines aren't necessary after an open brace '{'
#1412: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:304:
+		if (_vtpm_send_queued(chip) == 0) {
+

CHECK: Alignment should match open parenthesis
#1490: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:382:
+				list_add(&qt->next,
+						&vtpms->queued_requests);

CHECK: Alignment should match open parenthesis
#1551: FILE: drivers/char/tpm/xen-tpmfront_vtpm.c:443:
+	if (vtpms->current_response ||
+			0 != (vtpms->flags & DATAEX_FLAG_QUEUED_ONLY)) {

CHECK: spinlock_t definition without comment
#1666: FILE: drivers/char/tpm/xen-tpmfront_vtpm.h:9:
+	spinlock_t           req_list_lock;

CHECK: spinlock_t definition without comment
#1672: FILE: drivers/char/tpm/xen-tpmfront_vtpm.h:15:
+	spinlock_t           resp_list_lock;

total: 0 errors, 1 warnings, 15 checks, 1387 lines checked

/home/konrad/tpm has style problems, please review.


I like your description - and I think you should have it
in the git commit _and_ in a Documentation/*somewhere*

> This patch ports the xen vtpm frontend driver for linux
> from the linux-2.6.18-xen.hg tree to linux-stable. This
> driver is designed be used with the mini-os tpmback driver
> in Xen as part of the new mini-os virtual tpm subsystem.
> 
> Included in this commit message is the first draft of the
> vtpm documentation.  See docs/misc/vtpm.txt for an updated
> version. Contact the xen-devel@lists.xen.org mailing list
> for details.
> 
> Copyright (c) 2010-2012 United States Government, as represented by
> the Secretary of Defense.  All rights reserved.
> November 12 2012
> Authors: Matthew Fioravante (JHUAPL),
> 
> This document describes the virtual Trusted Platform Module (vTPM)
> subsystem
> for Xen. The reader is assumed to have familiarity with building and
> installing
> Xen, Linux, and a basic understanding of the TPM and vTPM concepts.
> 
> ------------------------------
> INTRODUCTION
> ------------------------------
> The goal of this work is to provide a TPM functionality to a virtual
> guest operating system (a DomU).  This allows programs to interact
> with a TPM in a virtual system the same way they interact with a TPM
> on the physical system.  Each guest gets its own unique, emulated,
> software TPM.  However, each of the vTPM's secrets (Keys, NVRAM, etc)
> are managed by a vTPM Manager domain, which seals the secrets to
> the Physical TPM.  Thus, the vTPM subsystem extends the chain of
> trust rooted in the hardware TPM to virtual machines in Xen.
> Each major component of vTPM is implemented as a separate domain,
> providing secure separation guaranteed by the hypervisor. The
> vTPM domains are implemented in mini-os to reduce memory and
> processor overhead.
> 
> This mini-os vTPM subsystem was built on top of the previous vTPM
> work done by IBM and Intel corporation.
> 
> ------------------------------
> DESIGN OVERVIEW
> ------------------------------
> 
> The architecture of vTPM is described below:
> 
> +------------------+
> |    Linux DomU    | ...
> |       |  ^       |
> |       v  |       |
> |   xen-tpmfront   |
> +------------------+
>         |  ^
>         v  |
> +------------------+
> | mini-os/tpmback  |
> |       |  ^       |
> |       v  |       |
> |  vtpm-stubdom    | ...
> |       |  ^       |
> |       v  |       |
> | mini-os/tpmfront |
> +------------------+
>         |  ^
>         v  |
> +------------------+
> | mini-os/tpmback  |
> |       |  ^       |
> |       v  |       |
> |   vtpmmgrdom     |
> |       |  ^       |
> |       v  |       |
> | mini-os/tpm_tis  |
> +------------------+
>         |  ^
>         v  |
> +------------------+
> |   Hardware TPM   |
> +------------------+
>  * Linux DomU: The Linux based guest that wants to use a vTPM. There
>                may be more than one of these.
> 
>  * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This
>                     driver provides vTPM access to a para-virtualized
>                     Linux based DomU.
> 
>  * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend
>                     driver connects to this backend driver to facilitate
>                     communications between the Linux DomU and its vTPM.
>                     This driver is also used by vtpmmgrdom to communicate
>                     with vtpm-stubdom.
> 
>  * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is
>                  a one to one mapping between running vtpm-stubdom
>                  instances and logical vtpms on the system. The vTPM
>                  Platform Configuration Registers (PCRs) are all
> 		 initialized to zero.
> 
>  * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os
>                      domain vtpm-stubdom uses this driver to
>                      communicate with vtpmmgrdom. This driver could
>                      also be used separately to implement a mini-os
>                      domain that wishes to use a vTPM of
>                      its own.
> 
>  * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
>                There is only one vTPM manager and it should be running
>                during the entire lifetime of the machine.  This domain
>                regulates access to the physical TPM on the system and
>                secures the persistent state of each vTPM.
> 
>  * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification
>                     (TIS) driver. This driver used by vtpmmgrdom to talk
>                     directly to the hardware TPM. Communication is
>                     facilitated by mapping hardware memory pages into
>                     vtpmmgrdom.
> 
>  * Hardware TPM: The physical TPM that is soldered onto the motherboard.
> 
> ------------------------------
> INSTALLATION
> ------------------------------
> 
> Prerequisites:
> --------------
> You must have an x86 machine with a TPM on the motherboard.
> The only software requirement to compiling vTPM is cmake.
> You must use libxl to manage domains with vTPMs. 'xm' is
> deprecated and does not support vTPM.
> 
> Compiling the XEN tree:
> -----------------------
> 
> Compile and install the XEN tree as usual. Be sure to build and install
> the stubdom tree.
> 
> Compiling the LINUX dom0 kernel:
> --------------------------------
> 
> The Linux dom0 kernel has no special prerequisites.
> 
> Compiling the LINUX domU kernel:
> --------------------------------
> 
> The domU kernel used by domains with vtpms must
> include the xen-tpmfront.ko driver. It can be built
> directly into the kernel or as a module.
> 
> CONFIG_TCG_TPM=y
> CONFIG_TCG_XEN=y
> 
> ------------------------------
> VTPM MANAGER SETUP
> ------------------------------
> 
> Manager disk image setup:
> -------------------------
> 
> The vTPM Manager requires a disk image to store its
> encrypted data. The image does not require a filesystem
> and can live anywhere on the host disk. The image does not
> need to be large; 8 to 16 MB should be sufficient.
> 
> Manager config file:
> --------------------
> 
> The vTPM Manager domain (vtpmmgrdom) must be started like
> any other Xen virtual machine and requires a config file.
> The manager requires a disk image for storage and permission
> to access the hardware memory pages for the TPM. An
> example configuration looks like the following.
> 
> kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
> memory=16
> disk=["file:/var/vtpmmgrdom.img,hda,w"]
> name="vtpmmgrdom"
> iomem=["fed40,5"]
> 
> The iomem line tells xl to allow access to the TPM
> IO memory pages, which are 5 pages that start at
> 0xfed40000.
> 
> Starting and stopping the manager:
> ----------------------------------
> 
> The vTPM manager should be started at boot; you may wish to
> create an init script to do this.
> 
> Once initialization is complete, you should see the following:
> INFO[VTPM]: Waiting for commands from vTPM's:
> 
> To shut down the manager you must destroy it. To avoid data corruption,
> only destroy the manager when you see the above "Waiting for commands"
> message. This ensures the disk is in a consistent state.
> 
> ------------------------------
> VTPM AND LINUX PVM SETUP
> ------------------------------
> 
> In the following examples we will assume we have a Linux
> guest named "domu" with its associated configuration
> located at /home/user/domu. Its vTPM will be named
> domu-vtpm.
> 
> vTPM disk image setup:
> ----------------------
> 
> The vTPM requires a disk image to store its persistent
> data. The image does not require a filesystem. The image
> does not need to be large; 8 MB should be sufficient.
> 
> vTPM config file:
> -----------------
> 
> The vTPM domain requires a configuration file like
> any other domain. The vTPM requires a disk image for
> storage and a TPM frontend driver to communicate
> with the manager. An example configuration is given:
> 
> kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
> memory=8
> disk=["file:/home/user/domu/vtpm.img,hda,w"]
> name="domu-vtpm"
> vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
> 
> The vtpm= line sets up the TPM frontend driver. The backend must be
> set to vtpmmgrdom. You are required to generate a uuid for this vTPM;
> you can use the uuidgen unix program or some other method to create
> one. The uuid uniquely identifies this vTPM to the manager.
> 
> If you wish to clear the vTPM data you can either recreate the
> disk image or change the uuid.
> 
> Linux Guest config file:
> ------------------------
> 
> The Linux guest config file needs to be modified to include
> the Linux tpmfront driver. Add the following line:
> 
> vtpm=["backend=domu-vtpm"]
> 
> Currently only paravirtualized guests are supported.
> 
> Launching and shut down:
> ------------------------
> 
> To launch a Linux guest with a vTPM we first have to start the vTPM
> domain.
> 
> After initialization is complete, you should see the following:
> Info: Waiting for frontend domain to connect..
> 
> Next, launch the Linux guest.
> 
> If xen-tpmfront was compiled as a module, be sure to load it
> in the guest.
> 
> After the Linux domain boots and the xen-tpmfront driver is loaded,
> you should see the following on the vtpm console:
> 
> Info: VTPM attached to Frontend X/Y
> 
> If you have trousers and tpm_tools installed on the guest, you can test
> the vtpm.
> 
> On guest:
> 
> The version command should return the following:
>   TPM 1.2 Version Info:
>   Chip Version:        1.2.0.7
>   Spec Level:          2
>   Errata Revision:     1
>   TPM Vendor ID:       ETHZ
>   TPM Version:         01010000
>   Manufacturer Info:   4554485a
> 
> You should also see the command being sent to the vtpm console as well
> as the vtpm saving its state. You should see the vtpm key being
> encrypted and stored on the vtpmmgrdom console.
> 
> To shut down the guest and its vTPM, you just have to shut down the
> guest normally. As soon as the guest VM disconnects, the vTPM will
> shut itself down automatically.
> 
> You may wish to write a script to start your vtpm and guest together.
> 
> ------------------------------
> MORE INFORMATION
> ------------------------------
> 
> See stubdom/vtpmmgr/README for more details about how
> the manager domain works, how to use it, and its command line
> parameters.
> 
> See stubdom/vtpm/README for more specifics about how vtpm-stubdom
> operates and the command line options it accepts.
> 
> Signed-off-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
> ---
>  drivers/char/tpm/Kconfig             |   11 +
>  drivers/char/tpm/Makefile            |    2 +
>  drivers/char/tpm/tpm.h               |   10 +
>  drivers/char/tpm/xen-tpmfront_if.c   |  688 ++++++++++++++++++++++++++++++++++
>  drivers/char/tpm/xen-tpmfront_vtpm.c |  543 +++++++++++++++++++++++++++
>  drivers/char/tpm/xen-tpmfront_vtpm.h |   55 +++
>  include/xen/interface/io/tpmif.h     |   65 ++++
>  7 files changed, 1374 insertions(+)
>  create mode 100644 drivers/char/tpm/xen-tpmfront_if.c
>  create mode 100644 drivers/char/tpm/xen-tpmfront_vtpm.c
>  create mode 100644 drivers/char/tpm/xen-tpmfront_vtpm.h
>  create mode 100644 include/xen/interface/io/tpmif.h
> 
> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> index 915875e..23d272f 100644
> --- a/drivers/char/tpm/Kconfig
> +++ b/drivers/char/tpm/Kconfig
> @@ -81,4 +81,15 @@ config TCG_IBMVTPM
>  	  will be accessible from within Linux.  To compile this driver
>  	  as a module, choose M here; the module will be called tpm_ibmvtpm.
>  
> +config TCG_XEN
> +	tristate "XEN TPM Interface"
> +	depends on TCG_TPM && XEN
> +	---help---
> +	  If you want to make TPM support available to a Xen user domain,
> +	  say Yes and it will be accessible from within Linux. See
> +	  the manpages for xl, xl.conf, and docs/misc/vtpm.txt in
> +	  the Xen source repository for more details.
> +	  To compile this driver as a module, choose M here; the module
> +	  will be called xen-tpmfront.
> +
>  endif # TCG_TPM
> diff --git a/drivers/char/tpm/Makefile b/drivers/char/tpm/Makefile
> index 5b3fc8b..0161f05 100644
> --- a/drivers/char/tpm/Makefile
> +++ b/drivers/char/tpm/Makefile
> @@ -17,3 +17,5 @@ obj-$(CONFIG_TCG_NSC) += tpm_nsc.o
>  obj-$(CONFIG_TCG_ATMEL) += tpm_atmel.o
>  obj-$(CONFIG_TCG_INFINEON) += tpm_infineon.o
>  obj-$(CONFIG_TCG_IBMVTPM) += tpm_ibmvtpm.o
> +obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
> +xen-tpmfront-y = xen-tpmfront_if.o xen-tpmfront_vtpm.o
> diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> index 8ef7649..b575892 100644
> --- a/drivers/char/tpm/tpm.h
> +++ b/drivers/char/tpm/tpm.h
> @@ -328,6 +328,16 @@ extern int tpm_pm_resume(struct device *);
>  extern int wait_for_tpm_stat(struct tpm_chip *, u8, unsigned long,
>  			     wait_queue_head_t *);
>  
> +static inline void *chip_get_private(const struct tpm_chip *chip)
> +{
> +	return chip->vendor.data;
> +}
> +
> +static inline void chip_set_private(struct tpm_chip *chip, void *priv)
> +{
> +	chip->vendor.data = priv;
> +}
> +
>  #ifdef CONFIG_ACPI
>  extern int tpm_add_ppi(struct kobject *);
>  extern void tpm_remove_ppi(struct kobject *);
> diff --git a/drivers/char/tpm/xen-tpmfront_if.c b/drivers/char/tpm/xen-tpmfront_if.c
> new file mode 100644
> index 0000000..ba7fad8
> --- /dev/null
> +++ b/drivers/char/tpm/xen-tpmfront_if.c
> @@ -0,0 +1,688 @@
> +/*
> + * Copyright (c) 2005, IBM Corporation
> + *
> + * Author: Stefan Berger, stefanb@us.ibm.com
> + * Grant table support: Mahadevan Gomathisankaran
> + *
> + * This code has been derived from drivers/xen/netfront/netfront.c
> + *
> + * Copyright (c) 2002-2004, K A Fraser
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/err.h>
> +#include <linux/interrupt.h>
> +#include <linux/mutex.h>
> +#include <linux/uaccess.h>
> +#include <xen/events.h>
> +#include <xen/interface/grant_table.h>
> +#include <xen/interface/io/tpmif.h>
> +#include <xen/grant_table.h>
> +#include <xen/xenbus.h>
> +#include <xen/page.h>
> +#include "tpm.h"
> +#include "xen-tpmfront_vtpm.h"
> +
> +#define GRANT_INVALID_REF 0
> +
> +/* local structures */
> +struct tpm_private {
> +	struct tpm_chip *chip;
> +
> +	struct tpmif_tx_interface *tx;
> +	atomic_t refcnt;
> +	unsigned int evtchn;
> +	u8 is_connected;
> +	u8 is_suspended;

Perhaps those two could be 'bool'?
> +
> +	spinlock_t tx_lock;
> +
> +	struct tx_buffer *tx_buffers[TPMIF_TX_RING_SIZE];
> +
> +	atomic_t tx_busy;
> +	void *tx_remember;
> +
> +	domid_t backend_id;
> +	wait_queue_head_t wait_q;
> +
> +	struct xenbus_device *dev;
> +	int ring_ref;
> +};
> +
> +struct tx_buffer {
> +	unsigned int size;	/* available space in data */
> +	unsigned int len;	/* used space in data */
> +	unsigned char *data;	/* pointer to a page */
> +};
> +
> +
> +/* locally visible variables */
> +static grant_ref_t gref_head;
> +static struct tpm_private *my_priv;

Should 'my_priv' have a lock?

> +
> +/* local function prototypes */
> +static irqreturn_t tpmif_int(int irq,
> +		void *tpm_priv);
> +static void tpmif_rx_action(unsigned long unused);
> +static int tpmif_connect(struct xenbus_device *dev,
> +		struct tpm_private *tp,
> +		domid_t domid);
> +static DECLARE_TASKLET(tpmif_rx_tasklet, tpmif_rx_action, 0);

Why make it a tasklet and not a thread? That way you could also
drop the GFP_ATOMIC allocations and use GFP_KERNEL.

> +static int tpmif_allocate_tx_buffers(struct tpm_private *tp);
> +static void tpmif_free_tx_buffers(struct tpm_private *tp);
> +static void tpmif_set_connected_state(struct tpm_private *tp,
> +		u8 newstate);
> +static int tpm_xmit(struct tpm_private *tp,
> +		const u8 *buf, size_t count, int userbuffer,
> +		void *remember);
> +static void destroy_tpmring(struct tpm_private *tp);
> +
> +static inline int
> +tx_buffer_copy(struct tx_buffer *txb, const u8 *src, int len,
> +		int isuserbuffer)
> +{
> +	int copied = len;
> +
> +	if (len > txb->size)
> +		copied = txb->size;

 'len' is an 'int' and 'txb->size' is an 'unsigned int'.
If 'len' is -1, the usual arithmetic conversions promote it
to 'unsigned int', so it becomes UINT_MAX, which is always
greater than txb->size. We would then copy txb->size bytes
instead of rejecting the request, and depending on 'src'
this might blow up.

Perhaps a check whether len is less than or equal to zero?
Or maybe not using 'int' for 'len' at all?


> +	if (isuserbuffer) {
> +		if (copy_from_user(txb->data, src, copied))
> +			return -EFAULT;
> +	} else {
> +		memcpy(txb->data, src, copied);
> +	}
> +	txb->len = len;
> +	return copied;

I think just making it 'unsigned int' would be easier.

> +}
> +
> +static inline struct tx_buffer *tx_buffer_alloc(void)
> +{
> +	struct tx_buffer *txb;
> +
> +	txb = kzalloc(sizeof(struct tx_buffer), GFP_KERNEL);
> +	if (!txb)
> +		return NULL;
> +
> +	txb->len = 0;
> +	txb->size = PAGE_SIZE;
> +	txb->data = (unsigned char *)__get_free_page(GFP_KERNEL);
> +	if (txb->data == NULL) {
> +		kfree(txb);
> +		txb = NULL;
> +	}
> +
> +	return txb;
> +}
> +
> +
> +static inline void tx_buffer_free(struct tx_buffer *txb)
> +{
> +	if (txb) {
> +		free_page((long)txb->data);
> +		kfree(txb);
> +	}
> +}
> +
> +/**************************************************************
> +  Utility function for the tpm_private structure
> + **************************************************************/
> +static void tpm_private_init(struct tpm_private *tp)
> +{
> +	spin_lock_init(&tp->tx_lock);
> +	init_waitqueue_head(&tp->wait_q);
> +	atomic_set(&tp->refcnt, 1);
> +}
> +
> +static void tpm_private_put(void)
> +{
> +	if (!atomic_dec_and_test(&my_priv->refcnt))
> +		return;

So no lock - so what happens if you have _two_ threads
calling this? The first decrements the refcnt and decides
it's time to clean up - goes through the kfree and sets
my_priv = NULL. At the same instant the other thread calls
'atomic_dec_and_test' on my_priv, which has just been set
to NULL. Boom!

Unless there is some lock being held _before_ tpm_private_put
is called - in which case you should add a comment about it.
> +
> +	tpmif_free_tx_buffers(my_priv);
> +	kfree(my_priv);
> +	my_priv = NULL;
> +}
> +
> +static struct tpm_private *tpm_private_get(void)
> +{
> +	int err;
> +
> +	if (my_priv) {
> +		atomic_inc(&my_priv->refcnt);
> +		return my_priv;
> +	}
> +
> +	my_priv = kzalloc(sizeof(struct tpm_private), GFP_KERNEL);
> +	if (!my_priv)
> +		return NULL;
> +
> +	tpm_private_init(my_priv);
> +	err = tpmif_allocate_tx_buffers(my_priv);
> +	if (err < 0)
> +		tpm_private_put();
> +
> +	return my_priv;
> +}
> +
> +/**************************************************************
> +
> +  The interface to let the tpm plugin register its callback
> +  function and send data to another partition using this module
> +
> + **************************************************************/
> +
> +static DEFINE_MUTEX(suspend_lock);
> +/*
> + * Send data via this module by calling this function
> + */
> +int vtpm_vd_send(struct tpm_private *tp,

I think this can be static?

> +		const u8 *buf, size_t count, void *ptr)
> +{
> +	int sent;
> +
> +	mutex_lock(&suspend_lock);
> +	sent = tpm_xmit(tp, buf, count, 0, ptr);
> +	mutex_unlock(&suspend_lock);
> +
> +	return sent;
> +}
> +
> +/**************************************************************
> +  XENBUS support code
> + **************************************************************/
> +
> +static int setup_tpmring(struct xenbus_device *dev,
> +		struct tpm_private *tp)
> +{
> +	struct tpmif_tx_interface *sring;
> +	int err;
> +
> +	tp->ring_ref = GRANT_INVALID_REF;
> +
> +	sring = (void *)__get_free_page(GFP_KERNEL);
> +	if (!sring) {
> +		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
> +		return -ENOMEM;
> +	}
> +	tp->tx = sring;
> +
> +	err = xenbus_grant_ring(dev, virt_to_mfn(tp->tx));
> +	if (err < 0) {
> +		free_page((unsigned long)sring);
> +		tp->tx = NULL;
> +		xenbus_dev_fatal(dev, err, "allocating grant reference");
> +		goto fail;
> +	}
> +	tp->ring_ref = err;
> +
> +	err = tpmif_connect(dev, tp, dev->otherend_id);
> +	if (err)
> +		goto fail;
> +
> +	return 0;
> +fail:
> +	destroy_tpmring(tp);
> +	return err;
> +}
> +
> +
> +static void destroy_tpmring(struct tpm_private *tp)
> +{
> +	tpmif_set_connected_state(tp, 0);
> +
> +	if (tp->ring_ref != GRANT_INVALID_REF) {
> +		gnttab_end_foreign_access(tp->ring_ref,
> +				0, (unsigned long)tp->tx);
> +		tp->ring_ref = GRANT_INVALID_REF;
> +		tp->tx = NULL;

Shouldn't you free_page it first? Looking at the code below, if
it fails at:

	err = tpmif_connect(dev, tp, dev->otherend_id);

then we go to 'destroy_tpmring' which would tear down the
connection. But we still have tp->tx page allocated at
that point?

> +	}
> +
> +	if (tp->evtchn)
> +		unbind_from_irqhandler(irq_from_evtchn(tp->evtchn), tp);
> +
> +	tp->evtchn = GRANT_INVALID_REF;
> +}
> +
> +
> +static int talk_to_backend(struct xenbus_device *dev,
> +		struct tpm_private *tp)
> +{
> +	const char *message = NULL;
> +	int err;
> +	struct xenbus_transaction xbt;
> +
> +	err = setup_tpmring(dev, tp);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "setting up ring");
> +		goto out;
> +	}
> +
> +again:
> +	err = xenbus_transaction_start(&xbt);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "starting transaction");
> +		goto destroy_tpmring;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename,
> +			"ring-ref", "%u", tp->ring_ref);
> +	if (err) {
> +		message = "writing ring-ref";
> +		goto abort_transaction;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename, "event-channel", "%u",
> +			tp->evtchn);
> +	if (err) {
> +		message = "writing event-channel";
> +		goto abort_transaction;
> +	}
> +
> +	err = xenbus_transaction_end(xbt, 0);
> +	if (err == -EAGAIN)
> +		goto again;
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "completing transaction");
> +		goto destroy_tpmring;
> +	}
> +
> +	xenbus_switch_state(dev, XenbusStateConnected);
> +
> +	return 0;
> +
> +abort_transaction:
> +	xenbus_transaction_end(xbt, 1);
> +	if (message)
> +		xenbus_dev_error(dev, err, "%s", message);
> +destroy_tpmring:
> +	destroy_tpmring(tp);
> +out:
> +	return err;
> +}
> +
> +/**
> + * Callback received when the backend's state changes.
> + */
> +static void backend_changed(struct xenbus_device *dev,
> +		enum xenbus_state backend_state)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +
> +	switch (backend_state) {
> +	case XenbusStateInitialising:
> +	case XenbusStateInitWait:
> +	case XenbusStateInitialised:
> +	case XenbusStateReconfiguring:
> +	case XenbusStateReconfigured:
> +	case XenbusStateUnknown:
> +		break;
> +
> +	case XenbusStateConnected:
> +		tpmif_set_connected_state(tp, 1);

Can that '1' be a true or false?

> +		break;
> +
> +	case XenbusStateClosing:
> +		tpmif_set_connected_state(tp, 0);

The same here.
> +		xenbus_frontend_closed(dev);
> +		break;
> +
> +	case XenbusStateClosed:
> +		tpmif_set_connected_state(tp, 0);

And here.
> +		if (tp->is_suspended == 0)
> +			device_unregister(&dev->dev);
> +		xenbus_frontend_closed(dev);
> +		break;
> +	}
> +}
> +
> +static int tpmfront_probe(struct xenbus_device *dev,
> +		const struct xenbus_device_id *id)
> +{
> +	int err;
> +	int handle;
> +	struct tpm_private *tp = tpm_private_get();
> +
> +	if (!tp)
> +		return -ENOMEM;
> +
> +	tp->chip = init_vtpm(&dev->dev, tp);
> +	if (IS_ERR(tp->chip))
> +		return PTR_ERR(tp->chip);
> +
> +	err = xenbus_scanf(XBT_NIL, dev->nodename,
> +			"handle", "%i", &handle);

Not '%li'? Ah, I guess not, since it is 'int'. But why not
'long'? (Xen netback looks to be using 'long'.)

> +	if (XENBUS_EXIST_ERR(err))
> +		return err;
> +
> +	if (err < 0) {
> +		xenbus_dev_fatal(dev, err, "reading virtual-device");
> +		return err;
> +	}
> +
> +	tp->dev = dev;
> +
> +	err = talk_to_backend(dev, tp);
> +	if (err) {
> +		tpm_private_put();
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +
> +static int __devexit tpmfront_remove(struct xenbus_device *dev)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +	destroy_tpmring(tp);
> +	cleanup_vtpm(&dev->dev);
> +	return 0;
> +}
> +
> +static int tpmfront_suspend(struct xenbus_device *dev)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +	u32 ctr;
> +
> +	/* Take the lock, preventing any application from sending. */
> +	mutex_lock(&suspend_lock);
> +	tp->is_suspended = 1;

bool?

> +
> +	for (ctr = 0; atomic_read(&tp->tx_busy); ctr++) {
> +		/* Wait for a request to be responded to. */
> +		interruptible_sleep_on_timeout(&tp->wait_q, 100);

So if the request never comes back (b/c the backend crashed), then
what should we do?

> +	}
> +

Why no 'mutex_unlock'?

> +	return 0;
> +}
> +
> +static int tpmfront_suspend_finish(struct tpm_private *tp)
> +{

No mutex_lock?

> +	tp->is_suspended = 0;
> +	/* Allow applications to send again. */

So you are holding on the mutex across the different backend
states? Can you explain when this function gets called as a result
of tpmfront_suspend?

> +	mutex_unlock(&suspend_lock);
> +	return 0;
> +}
> +
> +static int tpmfront_resume(struct xenbus_device *dev)
> +{
> +	struct tpm_private *tp = tpm_private_from_dev(&dev->dev);
> +	destroy_tpmring(tp);
> +	return talk_to_backend(dev, tp);
> +}
> +
> +static int tpmif_connect(struct xenbus_device *dev,
> +		struct tpm_private *tp,
> +		domid_t domid)
> +{
> +	int err;
> +
> +	tp->backend_id = domid;
> +	tp->evtchn = GRANT_INVALID_REF;
> +
> +	err = xenbus_alloc_evtchn(dev, &tp->evtchn);
> +	if (err)
> +		return err;
> +
> +	err = bind_evtchn_to_irqhandler(tp->evtchn, tpmif_int,
> +			0, "tpmif", tp);
> +	if (err <= 0)
> +		return err;
> +
> +	return 0;
> +}
> +
> +static const struct xenbus_device_id tpmfront_ids[] = {
> +	{ "vtpm" },

Hm, I thought it would be 'vtpm2'?

> +	{ "" }
> +};
> +MODULE_ALIAS("xen:vtpm");
> +
> +static DEFINE_XENBUS_DRIVER(tpmfront, ,
> +		.probe = tpmfront_probe,
> +		.remove =  __devexit_p(tpmfront_remove),
> +		.resume = tpmfront_resume,
> +		.otherend_changed = backend_changed,
> +		.suspend = tpmfront_suspend,
> +		);
> +
> +static int tpmif_allocate_tx_buffers(struct tpm_private *tp)

> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < TPMIF_TX_RING_SIZE; i++) {
> +		tp->tx_buffers[i] = tx_buffer_alloc();
> +		if (!tp->tx_buffers[i]) {
> +			tpmif_free_tx_buffers(tp);
> +			return -ENOMEM;
> +		}
> +	}
> +	return 0;
> +}
> +
> +static void tpmif_free_tx_buffers(struct tpm_private *tp)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < TPMIF_TX_RING_SIZE; i++)
> +		tx_buffer_free(tp->tx_buffers[i]);
> +}
> +
> +static void tpmif_rx_action(unsigned long priv)
> +{
> +	struct tpm_private *tp = (struct tpm_private *)priv;
> +	int i = 0;
> +	unsigned int received;
> +	unsigned int offset = 0;
> +	u8 *buffer;
> +	struct tpmif_tx_request *tx = &tp->tx->ring[i].req;
> +
> +	atomic_set(&tp->tx_busy, 0);
> +	wake_up_interruptible(&tp->wait_q);
> +
> +	received = tx->size;

No check if the size is out of whack? Say bigger than a PAGE_SIZE?
The ring could have been corrupted.

> +
> +	buffer = kmalloc(received, GFP_ATOMIC);

No 'kzalloc' ?

> +	if (!buffer)
> +		return;
> +
> +	for (i = 0; i < TPMIF_TX_RING_SIZE && offset < received; i++) {
> +		struct tx_buffer *txb = tp->tx_buffers[i];
> +		struct tpmif_tx_request *tx;
> +		unsigned int tocopy;
> +
> +		tx = &tp->tx->ring[i].req;
> +		tocopy = tx->size;

So.. on the first loop you get the tx->size from the same place as
what 'received' got. But on the second loop, the tx->size is from another
request. What if the first tx->size was 8 bytes, the next request
had 8 as well. Wouldn't we end up crashing as buffer was only allocated
to hold 8 bytes?

> +		if (tocopy > PAGE_SIZE)
> +			tocopy = PAGE_SIZE;
> +
> +		memcpy(&buffer[offset], txb->data, tocopy);
> +
> +		gnttab_release_grant_reference(&gref_head, tx->ref);
> +
> +		offset += tocopy;
> +	}
> +
> +	vtpm_vd_recv(tp->chip, buffer, received, tp->tx_remember);
> +	kfree(buffer);
> +}
> +
> +
> +static irqreturn_t tpmif_int(int irq, void *tpm_priv)
> +{
> +	struct tpm_private *tp = tpm_priv;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tp->tx_lock, flags);
> +	tpmif_rx_tasklet.data = (unsigned long)tp;
> +	tasklet_schedule(&tpmif_rx_tasklet);
> +	spin_unlock_irqrestore(&tp->tx_lock, flags);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +
> +static int tpm_xmit(struct tpm_private *tp,
> +		const u8 *buf, size_t count, int isuserbuffer,
> +		void *remember)
> +{
> +	struct tpmif_tx_request *tx;
> +	int i;
> +	unsigned int offset = 0;
> +
> +	spin_lock_irq(&tp->tx_lock);
> +
> +	if (unlikely(atomic_read(&tp->tx_busy))) {
> +		spin_unlock_irq(&tp->tx_lock);
> +		return -EBUSY;
> +	}
> +
> +	if (tp->is_connected != 1) {

Use a bool here.
> +		spin_unlock_irq(&tp->tx_lock);
> +		return -EIO;
> +	}
> +
> +	for (i = 0; count > 0 && i < TPMIF_TX_RING_SIZE; i++) {
> +		struct tx_buffer *txb = tp->tx_buffers[i];
> +		int copied;
> +
> +		if (!txb) {
> +			spin_unlock_irq(&tp->tx_lock);
> +			return -EFAULT;
> +		}
> +
> +		copied = tx_buffer_copy(txb, &buf[offset], count,
> +				isuserbuffer);
> +		if (copied < 0) {
> +			/* An error occurred */
> +			spin_unlock_irq(&tp->tx_lock);
> +			return copied;
> +		}
> +		count -= copied;
> +		offset += copied;
> +
> +		tx = &tp->tx->ring[i].req;
> +		tx->addr = virt_to_machine(txb->data).maddr;
> +		tx->size = txb->len;
> +		tx->unused = 0;
> +
> +		/* Get the granttable reference for this page. */
> +		tx->ref = gnttab_claim_grant_reference(&gref_head);
> +		if (tx->ref == -ENOSPC) {
> +			spin_unlock_irq(&tp->tx_lock);
> +			return -ENOSPC;
> +		}
> +		gnttab_grant_foreign_access_ref(tx->ref,
> +				tp->backend_id,
> +				virt_to_mfn(txb->data),
> +				0 /*RW*/);
> +		wmb();
> +	}
> +
> +	atomic_set(&tp->tx_busy, 1);
> +	tp->tx_remember = remember;
> +
> +	mb();
> +
> +	notify_remote_via_evtchn(tp->evtchn);
> +
> +	spin_unlock_irq(&tp->tx_lock);
> +	return offset;
> +}
> +
> +
> +static void tpmif_notify_upperlayer(struct tpm_private *tp)
> +{
> +	/* Notify upper layer about the state of the connection to the BE. */
> +	vtpm_vd_status(tp->chip, (tp->is_connected
> +				? TPM_VD_STATUS_CONNECTED
> +				: TPM_VD_STATUS_DISCONNECTED));
> +}
> +
> +
> +static void tpmif_set_connected_state(struct tpm_private *tp, u8 is_connected)
> +{
> +	/*
> +	 * Don't notify upper layer if we are in suspend mode and
> +	 * should disconnect - assumption is that we will resume
> +	 * The mutex keeps apps from sending.
> +	 */
> +	if (is_connected == 0 && tp->is_suspended == 1)

Bool's.

> +		return;
> +
> +	/*
> +	 * Unlock the mutex if we are connected again
> +	 * after being suspended - now resuming.
> +	 * This also removes the suspend state.
> +	 */
> +	if (is_connected == 1 && tp->is_suspended == 1)
> +		tpmfront_suspend_finish(tp);
> +
> +	if (is_connected != tp->is_connected) {
> +		tp->is_connected = is_connected;
> +		tpmif_notify_upperlayer(tp);
> +	}
> +}
> +
> +
> +
> +/* =================================================================
> + * Initialization function.
> + * =================================================================
> + */
> +
> +
> +static int __init tpmif_init(void)
> +{
> +	struct tpm_private *tp;
> +
> +	if (!xen_domain())
> +		return -ENODEV;

So this can run in HVM, PV, and dom0. You tested it in all of
the cases?

> +
> +	tp = tpm_private_get();
> +	if (!tp)
> +		return -ENOMEM;
> +
> +	if (gnttab_alloc_grant_references(TPMIF_TX_RING_SIZE,
> +				&gref_head) < 0) {
> +		tpm_private_put();
> +		return -EFAULT;
> +	}
> +
> +	return xenbus_register_frontend(&tpmfront_driver);
> +}
> +module_init(tpmif_init);
> +
> +static void __exit tpmif_exit(void)
> +{
> +	xenbus_unregister_driver(&tpmfront_driver);
> +	gnttab_free_grant_references(gref_head);
> +	tpm_private_put();
> +}
> +module_exit(tpmif_exit);
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> diff --git a/drivers/char/tpm/xen-tpmfront_vtpm.c b/drivers/char/tpm/xen-tpmfront_vtpm.c
> new file mode 100644
> index 0000000..d70f1df
> --- /dev/null
> +++ b/drivers/char/tpm/xen-tpmfront_vtpm.c
> @@ -0,0 +1,543 @@
> +/*
> + * Copyright (C) 2006 IBM Corporation
> + *
> + * Authors:
> + * Stefan Berger <stefanb@us.ibm.com>
> + *
> + * Generic device driver part for device drivers in a virtualized
> + * environment.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation, version 2 of the
> + * License.
> + *
> + */
> +
> +#include <linux/uaccess.h>
> +#include <linux/list.h>
> +#include <linux/device.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include "tpm.h"
> +#include "xen-tpmfront_vtpm.h"
> +
> +/* read status bits */
> +enum {
> +	STATUS_BUSY = 0x01,
> +	STATUS_DATA_AVAIL = 0x02,
> +	STATUS_READY = 0x04
> +};

And for what variable is this used? Is this
related to the 'struct transmission' ?

> +
> +struct transmission {
> +	struct list_head next;
> +
> +	unsigned char *request;
> +	size_t  request_len;
> +	size_t  request_buflen;
> +
> +	unsigned char *response;
> +	size_t  response_len;
> +	size_t  response_buflen;
> +
> +	unsigned int flags;

I presume this is for DATAEX_FLAG_QUEUED_ONLY?
> +};
> +
> +enum {
> +	TRANSMISSION_FLAG_WAS_QUEUED = 0x1

Hmm, for which variable is this?

> +};
> +
> +
> +enum {
> +	DATAEX_FLAG_QUEUED_ONLY = 0x1

You should have one for the default entry of '0' as well.
> +};
> +
> +
> +/* local variables */
> +
> +/* local function prototypes */
> +static int _vtpm_send_queued(struct tpm_chip *chip);
> +
> +
> +/* =============================================================
> + * Some utility functions
> + * =============================================================
> + */
> +static void vtpm_state_init(struct vtpm_state *vtpms)
> +{
> +	vtpms->current_request = NULL;
> +	spin_lock_init(&vtpms->req_list_lock);
> +	init_waitqueue_head(&vtpms->req_wait_queue);
> +	INIT_LIST_HEAD(&vtpms->queued_requests);
> +
> +	vtpms->current_response = NULL;
> +	spin_lock_init(&vtpms->resp_list_lock);
> +	init_waitqueue_head(&vtpms->resp_wait_queue);
> +
> +	vtpms->disconnect_time = jiffies;
> +}
> +
> +
> +static inline struct transmission *transmission_alloc(void)
> +{
> +	return kzalloc(sizeof(struct transmission), GFP_ATOMIC);

GFP_ATOMIC? Not GFP_KERNEL? Ah, that is b/c you are using a tasklet.
Why not use a thread?

> +}
> +
> +static unsigned char *
> +transmission_set_req_buffer(struct transmission *t,
> +		unsigned char *buffer, size_t len)
> +{
> +	if (t->request_buflen < len) {
> +		kfree(t->request);
> +		t->request = kmalloc(len, GFP_KERNEL);
> +		if (!t->request) {
> +			t->request_buflen = 0;
> +			return NULL;
> +		}
> +		t->request_buflen = len;
> +	}
> +
> +	memcpy(t->request, buffer, len);
> +	t->request_len = len;
> +
> +	return t->request;
> +}
> +
> +static unsigned char *
> +transmission_set_res_buffer(struct transmission *t,
> +		const unsigned char *buffer, size_t len)
> +{
> +	if (t->response_buflen < len) {
> +		kfree(t->response);
> +		t->response = kmalloc(len, GFP_ATOMIC);
> +		if (!t->response) {
> +			t->response_buflen = 0;
> +			return NULL;
> +		}
> +		t->response_buflen = len;
> +	}
> +
> +	memcpy(t->response, buffer, len);
> +	t->response_len = len;
> +
> +	return t->response;
> +}

You could collapse these two functions, and in the
'struct transmission' have something like this:

struct payload {
	unsigned char* data;
	ssize_t	len;
	ssize_t buflen;
};

struct transmission {
	struct list_head next;
	struct payload request;
	struct payload response;
	...
}

And then just pass in 'struct payload' to one of those
functions.

> +static inline void transmission_free(struct transmission *t)
> +{
> +	kfree(t->request);
> +	kfree(t->response);
> +	kfree(t);
> +}
> +
> +/* =============================================================
> + * Interface with the lower layer driver
> + * =============================================================
> + */
> +/*
> + * Lower layer uses this function to make a response available.
> + */
> +int vtpm_vd_recv(const struct tpm_chip *chip,
> +		const unsigned char *buffer, size_t count,
> +		void *ptr)
> +{
> +	unsigned long flags;
> +	int ret_size = 0;
> +	struct transmission *t;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	/*
> +	 * The list with requests must contain one request
> +	 * only and the element there must be the one that
> +	 * was passed to me from the front-end.
> +	 */
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	if (vtpms->current_request != ptr) {
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		return 0;
> +	}
> +	t = vtpms->current_request;
> +	if (t) {
> +		transmission_free(t);
> +		vtpms->current_request = NULL;
> +	}
> +
> +	t = transmission_alloc();
> +	if (t) {
> +		if (!transmission_set_res_buffer(t, buffer, count)) {
> +			transmission_free(t);
> +			spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +			return -ENOMEM;
> +		}
> +		ret_size = count;
> +		vtpms->current_response = t;
> +		wake_up_interruptible(&vtpms->resp_wait_queue);
> +	}
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +
> +	return ret_size;
> +}
> +
> +
> +/*
> + * Lower layer indicates its status (connected/disconnected)
> + */
> +void vtpm_vd_status(const struct tpm_chip *chip, u8 vd_status)
> +{
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	vtpms->vd_status = vd_status;
> +	if ((vtpms->vd_status & TPM_VD_STATUS_CONNECTED) == 0)
> +		vtpms->disconnect_time = jiffies;
> +}
> +
> +/* =============================================================
> + * Interface with the generic TPM driver
> + * =============================================================
> + */
> +static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
> +{
> +	int rc = 0;
> +	unsigned long flags;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	/*
> +	 * Check if the previous operation only queued the command
> +	 * In this case there won't be a response, so I just
> +	 * return from here and reset that flag. In any other
> +	 * case I should receive a response from the back-end.
> +	 */
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	if ((vtpms->flags & DATAEX_FLAG_QUEUED_ONLY) != 0) {
> +		vtpms->flags &= ~DATAEX_FLAG_QUEUED_ONLY;
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		/*
> +		 * The first few commands (measurements) must be
> +		 * queued since it might not be possible to talk to the
> +		 * TPM, yet.
> +		 * Return a response of up to 30 '0's.

Is 30 a magic constant? Can you use a #define?
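Something like this, say (the name is invented; the value 30 is taken from
the patch):

```c
/* Length of the all-zero dummy response returned while the
 * command could only be queued (TPM not reachable yet). */
#define VTPM_QUEUED_DUMMY_RESP_LEN 30
```

and then `count = min_t(size_t, count, VTPM_QUEUED_DUMMY_RESP_LEN);`.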

> +		 */
> +
> +		count = min_t(size_t, count, 30);
> +		memset(buf, 0x0, count);
> +		return count;
> +	}
> +	/*
> +	 * Check whether something is in the responselist and if
> +	 * there's nothing in the list wait for something to appear.
> +	 */
> +
> +	if (!vtpms->current_response) {
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		interruptible_sleep_on_timeout(&vtpms->resp_wait_queue,
> +				1000);
> +		spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	}
> +
> +	if (vtpms->current_response) {
> +		struct transmission *t = vtpms->current_response;
> +		vtpms->current_response = NULL;
> +		rc = min(count, t->response_len);
> +		memcpy(buf, t->response, rc);
> +		transmission_free(t);
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +	return rc;
> +}
> +
> +static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
> +{
> +	int rc = 0;
> +	unsigned long flags;
> +	struct transmission *t = transmission_alloc();
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	if (!t)
> +		return -ENOMEM;
> +	/*
> +	 * If there's a current request, it must be the
> +	 * previous request that has timed out.
> +	 */
> +	spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +	if (vtpms->current_request != NULL) {
> +		dev_warn(chip->dev, "Sending although there is a request outstanding.\n"
> +				"         Previous request must have timed out.\n");
> +		transmission_free(vtpms->current_request);
> +		vtpms->current_request = NULL;
> +	}
> +	spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +
> +	/*
> +	 * Queue the packet if the driver below is not
> +	 * ready, yet, or there is any packet already
> +	 * in the queue.
> +	 * If the driver below is ready, unqueue all
> +	 * packets first before sending our current
> +	 * packet.
> +	 * For each unqueued packet, except for the
> +	 * last (=current) packet, call the function
> +	 * tpm_xen_recv to wait for the response to come
> +	 * back.
> +	 */
> +	if ((vtpms->vd_status & TPM_VD_STATUS_CONNECTED) == 0) {
> +		if (time_after(jiffies,
> +					vtpms->disconnect_time + HZ * 10)) {
> +			rc = -ENOENT;
> +		} else {
> +			goto queue_it;
> +		}
> +	} else {
> +		/*
> +		 * Send all queued packets.
> +		 */
> +		if (_vtpm_send_queued(chip) == 0) {
> +
> +			vtpms->current_request = t;
> +
> +			rc = vtpm_vd_send(vtpms->tpm_private,
> +					buf,
> +					count,
> +					t);
> +			/*
> +			 * The generic TPM driver will call
> +			 * the function to receive the response.
> +			 */
> +			if (rc < 0) {
> +				vtpms->current_request = NULL;
> +				goto queue_it;
> +			}
> +		} else {
> +queue_it:
> +			if (!transmission_set_req_buffer(t, buf, count)) {
> +				transmission_free(t);
> +				rc = -ENOMEM;
> +				goto exit;
> +			}
> +			/*
> +			 * An error occurred. Don't even try
> +			 * to send the current request. Just
> +			 * queue it.
> +			 */
> +			spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +			vtpms->flags |= DATAEX_FLAG_QUEUED_ONLY;
> +			list_add_tail(&t->next, &vtpms->queued_requests);
> +			spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +		}
> +	}
> +
> +exit:
> +	return rc;
> +}
> +
> +
> +/*
> + * Send all queued requests.
> + */
> +static int _vtpm_send_queued(struct tpm_chip *chip)
> +{
> +	int rc;
> +	int error = 0;
> +	unsigned long flags;
> +	unsigned char buffer[1];
> +	struct vtpm_state *vtpms;
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +
> +	while (!list_empty(&vtpms->queued_requests)) {
> +		/*
> +		 * Need to dequeue them.
> +		 * Read the result into a dummy buffer.
> +		 */
> +		struct transmission *qt = (struct transmission *)
> +			vtpms->queued_requests.next;
> +		list_del(&qt->next);
> +		vtpms->current_request = qt;
> +		spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +
> +		rc = vtpm_vd_send(vtpms->tpm_private,
> +				qt->request,
> +				qt->request_len,
> +				qt);
> +
> +		if (rc < 0) {
> +			spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +			qt = vtpms->current_request;
> +			if (qt != NULL) {
> +				/*
> +				 * requeue it at the beginning
> +				 * of the list
> +				 */
> +				list_add(&qt->next,
> +						&vtpms->queued_requests);
> +			}
> +			vtpms->current_request = NULL;
> +			error = 1;
> +			break;
> +		}
> +		/*
> +		 * After this point qt is not valid anymore!
> +		 * It is freed when the front-end is delivering
> +		 * the data by calling tpm_recv
> +		 */
> +		/*
> +		 * Receive response into provided dummy buffer
> +		 */
> +		rc = vtpm_recv(chip, buffer, sizeof(buffer));
> +		spin_lock_irqsave(&vtpms->req_list_lock, flags);
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->req_list_lock, flags);
> +
> +	return error;
> +}
> +
> +static void vtpm_cancel(struct tpm_chip *chip)
> +{
> +	unsigned long flags;
> +	struct vtpm_state *vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +
> +	if (!vtpms->current_response && vtpms->current_request) {
> +		spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +		interruptible_sleep_on(&vtpms->resp_wait_queue);
> +		spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	}
> +
> +	if (vtpms->current_response) {
> +		struct transmission *t = vtpms->current_response;
> +		vtpms->current_response = NULL;
> +		transmission_free(t);
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +}
> +
> +static u8 vtpm_status(struct tpm_chip *chip)
> +{
> +	u8 rc = 0;
> +	unsigned long flags;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = (struct vtpm_state *)chip_get_private(chip);
> +
> +	spin_lock_irqsave(&vtpms->resp_list_lock, flags);
> +	/*
> +	 * Data are available if:
> +	 *  - there's a current response
> +	 *  - the last packet was queued only (this is fake, but necessary to
> +	 *      get the generic TPM layer to call the receive function.)
> +	 */
> +	if (vtpms->current_response ||
> +			0 != (vtpms->flags & DATAEX_FLAG_QUEUED_ONLY)) {
> +		rc = STATUS_DATA_AVAIL;
> +	} else if (!vtpms->current_response && !vtpms->current_request) {
> +		rc = STATUS_READY;
> +	}
> +
> +	spin_unlock_irqrestore(&vtpms->resp_list_lock, flags);
> +	return rc;
> +}
> +
> +static const struct file_operations vtpm_ops = {
> +	.owner = THIS_MODULE,
> +	.llseek = no_llseek,
> +	.open = tpm_open,
> +	.read = tpm_read,
> +	.write = tpm_write,
> +	.release = tpm_release,
> +};
> +
> +static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL);
> +static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL);
> +static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL);
> +static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
> +static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
> +static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated,
> +		NULL);
> +static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
> +static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
> +
> +static struct attribute *vtpm_attrs[] = {
> +	&dev_attr_pubek.attr,
> +	&dev_attr_pcrs.attr,
> +	&dev_attr_enabled.attr,
> +	&dev_attr_active.attr,
> +	&dev_attr_owned.attr,
> +	&dev_attr_temp_deactivated.attr,
> +	&dev_attr_caps.attr,
> +	&dev_attr_cancel.attr,
> +	NULL,
> +};
> +
> +static struct attribute_group vtpm_attr_grp = { .attrs = vtpm_attrs };
> +
> +#define TPM_LONG_TIMEOUT   (10 * 60 * HZ)
> +
> +static struct tpm_vendor_specific tpm_vtpm = {
> +	.recv = vtpm_recv,
> +	.send = vtpm_send,
> +	.cancel = vtpm_cancel,
> +	.status = vtpm_status,
> +	.req_complete_mask = STATUS_BUSY | STATUS_DATA_AVAIL,
> +	.req_complete_val  = STATUS_DATA_AVAIL,
> +	.req_canceled = STATUS_READY,
> +	.attr_group = &vtpm_attr_grp,
> +	.miscdev = {
> +		.fops = &vtpm_ops,
> +	},
> +	.duration = {
> +		TPM_LONG_TIMEOUT,
> +		TPM_LONG_TIMEOUT,
> +		TPM_LONG_TIMEOUT,
> +	},
> +};
> +
> +struct tpm_chip *init_vtpm(struct device *dev,
> +		struct tpm_private *tp)
> +{
> +	long rc;
> +	struct tpm_chip *chip;
> +	struct vtpm_state *vtpms;
> +
> +	vtpms = kzalloc(sizeof(struct vtpm_state), GFP_KERNEL);
> +	if (!vtpms)
> +		return ERR_PTR(-ENOMEM);
> +
> +	vtpm_state_init(vtpms);
> +	vtpms->tpm_private = tp;
> +
> +	chip = tpm_register_hardware(dev, &tpm_vtpm);
> +	if (!chip) {
> +		rc = -ENODEV;
> +		goto err_free_mem;
> +	}
> +
> +	chip_set_private(chip, vtpms);
> +
> +	return chip;
> +
> +err_free_mem:
> +	kfree(vtpms);
> +
> +	return ERR_PTR(rc);
> +}
> +
> +void cleanup_vtpm(struct device *dev)
> +{
> +	struct tpm_chip *chip = dev_get_drvdata(dev);
> +	struct vtpm_state *vtpms = (struct vtpm_state *)chip_get_private(chip);
> +	tpm_remove_hardware(dev);
> +	kfree(vtpms);
> +}
> diff --git a/drivers/char/tpm/xen-tpmfront_vtpm.h b/drivers/char/tpm/xen-tpmfront_vtpm.h
> new file mode 100644
> index 0000000..16cf323
> --- /dev/null
> +++ b/drivers/char/tpm/xen-tpmfront_vtpm.h
> @@ -0,0 +1,55 @@
> +#ifndef XEN_TPMFRONT_VTPM_H
> +#define XEN_TPMFRONT_VTPM_H
> +
> +struct tpm_chip;
> +struct tpm_private;
> +
> +struct vtpm_state {
> +	struct transmission *current_request;
> +	spinlock_t           req_list_lock;
> +	wait_queue_head_t    req_wait_queue;
> +
> +	struct list_head     queued_requests;
> +
> +	struct transmission *current_response;
> +	spinlock_t           resp_list_lock;
> +	wait_queue_head_t    resp_wait_queue;
> +
> +	u8                   vd_status;

There seem to be only two states: disconnected or connected.
Why not just make it a 'bool'?

> +	u8                   flags;

What are the flag options? Where are they enumerated?
> +
> +	unsigned long        disconnect_time;
> +
> +	/*
> +	 * The following is a private structure of the underlying
> +	 * driver. It is passed as parameter in the send function.
> +	 */
> +	struct tpm_private *tpm_private;
> +};
> +
> +
> +enum vdev_status {
> +	TPM_VD_STATUS_DISCONNECTED = 0x0,
> +	TPM_VD_STATUS_CONNECTED = 0x1
> +};
> +
> +/* this function is called from tpm_vtpm.c */

OK, then why does it need to be in this header file?

> +int vtpm_vd_send(struct tpm_private *tp,
> +		const u8 *buf, size_t count, void *ptr);
> +
> +/* these functions are offered by tpm_vtpm.c */
> +struct tpm_chip *init_vtpm(struct device *,
> +		struct tpm_private *);
> +void cleanup_vtpm(struct device *);
> +int vtpm_vd_recv(const struct tpm_chip *chip,
> +		const unsigned char *buffer, size_t count, void *ptr);
> +void vtpm_vd_status(const struct tpm_chip *, u8 status);
> +
> +static inline struct tpm_private *tpm_private_from_dev(struct device *dev)
> +{
> +	struct tpm_chip *chip = dev_get_drvdata(dev);
> +	struct vtpm_state *vtpms = (struct vtpm_state *)chip->vendor.data;
> +	return vtpms->tpm_private;
> +}
> +
> +#endif
> diff --git a/include/xen/interface/io/tpmif.h b/include/xen/interface/io/tpmif.h
> new file mode 100644
> index 0000000..c9e7294
> --- /dev/null
> +++ b/include/xen/interface/io/tpmif.h
> @@ -0,0 +1,65 @@
> +/******************************************************************************
> + * tpmif.h
> + *
> + * TPM I/O interface for Xen guest OSes.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Copyright (c) 2005, IBM Corporation
> + *
> + * Author: Stefan Berger, stefanb@us.ibm.com
> + * Grant table support: Mahadevan Gomathisankaran
> + *
> + * This code has been derived from tools/libxc/xen/io/netif.h
> + *
> + * Copyright (c) 2003-2004, Keir Fraser
> + */
> +
> +#ifndef __XEN_PUBLIC_IO_TPMIF_H__
> +#define __XEN_PUBLIC_IO_TPMIF_H__
> +
> +#include "../grant_table.h"
> +
> +struct tpmif_tx_request {
> +	unsigned long addr;   /* Machine address of packet.   */

unsigned long on 32-bit is 4 bytes, but on 64-bit is 8 bytes.

> +	grant_ref_t ref;      /* grant table access reference */

This entry is 4 bytes (uint32_t).
> +	uint16_t unused;
> +	uint16_t size;        /* Packet size in bytes.        */

And these are both 2 bytes.

The structure on 64-bit is: 8+4+2+2 = 16
On 32-bit: 4+4+2+2 = 12.

The big problem is the alignment of the 'unsigned long', which
is '4' on 32-bit, so if you were to do 32-bit/64-bit communication,
the structure would not align. See here:


32-bit (size 12):
0==addr, 4==ref, 8==unused, 10==size

64-bit (size 16):
0==addr, 8==ref, 12==unused, 14==size

From:
        printf("%zu\n", sizeof(struct tpmif_tx_request));
        printf("%zu==addr, %zu==ref, %zu==unused, %zu==size\n",
                offsetof(struct tpmif_tx_request, addr),
                offsetof(struct tpmif_tx_request, ref),
                offsetof(struct tpmif_tx_request, unused),
                offsetof(struct tpmif_tx_request, size));


Use uint64_t instead of 'unsigned long'; that will fix the
alignment issue.
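A quick userspace check of the fixed-width layout (the struct name here
is hypothetical and grant_ref_t is modeled as uint32_t); with uint64_t
the offsets come out the same on 32-bit and 64-bit:

```c
#include <stdint.h>
#include <stddef.h>

/* Fixed-width variant of tpmif_tx_request: identical layout on
 * 32-bit and 64-bit, so a 32-bit frontend and a 64-bit backend
 * agree on every field offset. */
struct tpmif_tx_request_fixed {
	uint64_t addr;     /* machine address of packet    */
	uint32_t ref;      /* grant table access reference */
	uint16_t unused;
	uint16_t size;     /* packet size in bytes         */
};

/* 0==addr, 8==ref, 12==unused, 14==size; total 16 bytes. */
_Static_assert(sizeof(struct tpmif_tx_request_fixed) == 16, "size");
_Static_assert(offsetof(struct tpmif_tx_request_fixed, ref) == 8, "ref");
_Static_assert(offsetof(struct tpmif_tx_request_fixed, unused) == 12, "unused");
_Static_assert(offsetof(struct tpmif_tx_request_fixed, size) == 14, "size field");
```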

> +};
> +struct tpmif_tx_request;
> +
> +/*
> + * The TPMIF_TX_RING_SIZE defines the number of pages the
> + * front-end and backend can exchange (= size of array).
> + */
> +#define TPMIF_TX_RING_SIZE 1

Are you sure? It looks like the number of array entries; that is
how it is used in the code as well.

> +
> +/* This structure must fit in a memory page. */

Would it make sense to have a BUILD_BUG_ON to make sure?
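In the kernel this would be a one-liner in the init path, e.g.
BUILD_BUG_ON(sizeof(struct tpmif_tx_interface) > PAGE_SIZE). A userspace
sketch of the same compile-time guarantee (the "_sketch" names and the
4 KiB page size are assumptions):

```c
#include <stdint.h>

#define TPMIF_TX_RING_SIZE 1
#define ASSUMED_PAGE_SIZE 4096   /* assumption: 4 KiB pages */

struct tpmif_tx_request_sketch {
	uint64_t addr;
	uint32_t ref;
	uint16_t unused;
	uint16_t size;
};

struct tpmif_ring_sketch {
	struct tpmif_tx_request_sketch req;
};

struct tpmif_tx_interface_sketch {
	struct tpmif_ring_sketch ring[TPMIF_TX_RING_SIZE];
};

/* Userspace stand-in for BUILD_BUG_ON: compilation fails if the
 * shared interface ever outgrows one page. */
_Static_assert(sizeof(struct tpmif_tx_interface_sketch) <= ASSUMED_PAGE_SIZE,
	       "tpmif_tx_interface must fit in a memory page");
```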

> +
> +struct tpmif_ring {
> +	struct tpmif_tx_request req;
> +};
> +struct tpmif_ring;
> +
> +struct tpmif_tx_interface {
> +	struct tpmif_ring ring[TPMIF_TX_RING_SIZE];
> +};
> +struct tpmif_tx_interface;
> +
> +#endif
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:26:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm6MG-0004uu-1U; Fri, 21 Dec 2012 17:26:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1Tm6ME-0004uo-D7
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:26:02 +0000
Received: from [85.158.143.99:5739] by server-1.bemta-4.messagelabs.com id
	49/02-28401-9AB94D05; Fri, 21 Dec 2012 17:26:01 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1356110759!30419130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTM5ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15860 invoked from network); 21 Dec 2012 17:26:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:26:01 -0000
X-IronPort-AV: E=Sophos;i="4.84,331,1355097600"; 
   d="scan'208";a="1500429"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:25:59 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Fri, 21 Dec 2012
	12:25:59 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Ross Philipson <Ross.Philipson@citrix.com>, G.R.
	<firemeteor@users.sourceforge.net>
Date: Fri, 21 Dec 2012 12:26:03 -0500
Thread-Topic: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
	resource conflict for OpRegion.
Thread-Index: Ac3fmx1VnvRqdp5pSwabm5teC+++3QAAG3tQAAEP8tA=
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6579@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
	<CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
	<CAKhsbWbtYA50rCCbAR_5=Bt+5g7Kb_BkKrVo5WF+ZdmO2o8pCw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6567@FTLPMAILBOX02.citrite.net>
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6567@FTLPMAILBOX02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Yes, my win 7 guest is totally broken with IGD passthrough (much worse
> > than linux status).
> > Before I bought my current build, sources like wikis seems to mention
> > that IGD is the first card that works.
> > And now, it seems the AMD cards are the best choice for pass-through.
> > Sad news for me.
> 
> Let me just clarify that up to now we have been successful in passing in
> igfx cards without having to surface any of these ACPI bits. I was just
> mentioning that this is an inconsistency and might be worth
> investigating at some point. More importantly I am pointing out that if
> you are trying to find out information like the location/size/layout of
> the IGD OpRegion, you can get that information from the host BIOS. That
> sounded like what your original issues centered around. Sorry if I
> confused things.

Oh and I forgot to add. In addition there are other OpRegions defined like
GNVS that can give you an idea of what might be just before and after the
IGD regions when it is not page aligned.

> 
> >
> > Thanks,
> > Timothy
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:35:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm6VK-0005LP-06; Fri, 21 Dec 2012 17:35:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm6VI-0005LG-EI
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:35:24 +0000
Received: from [85.158.138.51:28776] by server-15.bemta-3.messagelabs.com id
	2D/2A-07921-BDD94D05; Fri, 21 Dec 2012 17:35:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1356111318!29940182!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5157 invoked from network); 21 Dec 2012 17:35:20 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 17:35:20 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLHZGU5031480
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 17:35:17 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLHZFjD028863
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 17:35:16 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLHZEcQ027572; Fri, 21 Dec 2012 11:35:15 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 09:35:14 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 739F21C032B; Fri, 21 Dec 2012 12:35:13 -0500 (EST)
Date: Fri, 21 Dec 2012 12:35:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20121221173513.GB27893@phenom.dumpdata.com>
References: <50D41DF3.306@citrix.com>
	<20121221140320.GD25526@phenom.dumpdata.com>
	<50D47678.2050903@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D47678.2050903@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Create a iSCSI DomU with disks in another DomU
 running on the same Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 21, 2012 at 03:47:20PM +0100, Roger Pau Monn=E9 wrote:
> On 21/12/12 15:03, Konrad Rzeszutek Wilk wrote:
> > On Fri, Dec 21, 2012 at 09:29:39AM +0100, Roger Pau Monn=E9 wrote:
> >> Hello,
> >>
> >> I'm trying to use a strange setup, that consists in having a DomU
> >> serving iSCSI targets to the Dom0, that will use this targets as disks
> >> for other DomUs. I've tried to set up this iSCSI target DomU using both
> >> Debian Squeeze/Wheezy (with kernels 2.6.32 and 3.2) and ISCSI
> >> Enterprise Target (IET), and when launching the DomU I get this messag=
es
> >> from Xen:
> >>
> >> (XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
> >> (XEN) Xen WARN at mm.c:1926
> >> (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
> >> (XEN) CPU:    0
> >> (XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
> >> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
> >> (XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
> >> (XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
> >> (XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
> >> (XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
> >> (XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
> >> (XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
> >> (XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
> >> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> >> (XEN) Xen stack trace from rsp=ffff82c4802bfba8:
> >> (XEN)    ffff830141405000 8000000000000003 7400000000000001 0000000000145028
> >> (XEN)    ffff82f6028a0510 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
> >> (XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
> >> (XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc3f0
> >> (XEN)    0000000000000001 ffffffffffff8000 0000000000000002 ffff83011d555000
> >> (XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
> >> (XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
> >> (XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
> >> (XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
> >> (XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000005802bfd38 ffff82c4802b8000
> >> (XEN)    ffff82c400000000 0000000000000001 ffffc90000028b10 ffffc90000028b10
> >> (XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 0000000000145028
> >> (XEN)    000000000011cf7c 0000000000001000 0000000000157e68 0000000000007ff0
> >> (XEN)    000000000000027e 000000000042000d 0000000000020b50 ffff8300dfdf0000
> >> (XEN)    ffff82c4802bfd78 ffffc90000028ac0 ffffc90000028ac0 ffff880185f6fd58
> >> (XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
> >> (XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
> >> (XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
> >> (XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 ffff8300dfb03000
> >> (XEN)    ffff8300dfdf0000 0000150e11a417f8 0000000000000002 ffff82c480300948
> >> (XEN) Xen call trace:
> >> (XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
> >> (XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
> >> (XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
> >> (XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
> >> (XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
> >> (XEN)
> >> (XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
> >> (XEN) mm.c:1925:d0 Error pfn 157e68: rd=ffff83019e60c000, od=ffff830141405000, caf=8000000000000003, taf=7400000000000001
> >> (XEN) Xen WARN at mm.c:1926
> >> (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
> >> (XEN) CPU:    0
> >> (XEN) RIP:    e008:[<ffff82c48016ea17>] get_page+0xd5/0x101
> >> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
> >> (XEN) rax: 0000000000000000   rbx: ffff830141405000   rcx: 0000000000000000
> >> (XEN) rdx: ffff82c480300920   rsi: 000000000000000a   rdi: ffff82c4802766e8
> >> (XEN) rbp: ffff82c4802bfbf8   rsp: ffff82c4802bfba8   r8:  0000000000000004
> >> (XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000001
> >> (XEN) r12: 0000000000157e68   r13: ffff83019e60c000   r14: 7400000000000001
> >> (XEN) r15: 8000000000000003   cr0: 000000008005003b   cr4: 00000000000026f0
> >> (XEN) cr3: 000000011c180000   cr2: 00007f668d1eb000
> >> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> >> (XEN) Xen stack trace from rsp=ffff82c4802bfba8:
> >> (XEN)    ffff830141405000 8000000000000003 7400000000000001 000000000014581d
> >> (XEN)    ffff82f6028b03b0 ffff83019e60c000 ffff82f602afcd00 ffff82c4802bfd28
> >> (XEN)    ffff82c4802bfd18 0000000000157e68 ffff82c4802bfc58 ffff82c480109ba3
> >> (XEN)    ffffffffffffffff 0000000000000000 ffff83011c977fb8 0000000061dfc308
> >> (XEN)    0000000000000000 ffffffffffff8000 0000000000000001 ffff83011d555000
> >> (XEN)    ffff83019e60c000 0000000000000000 ffff82c4802bfd98 ffff82c48010c607
> >> (XEN)    ffff82c4802bfd34 ffff82c4802bfd30 ffff82c400000001 000000000011cf90
> >> (XEN)    0000000000000000 ffff82c4802b8000 ffff82c4802b8000 ffff82c4802b8000
> >> (XEN)    ffff82c4802b8000 ffff82c4802bfd5c 000000029e60c000 ffff82c480300920
> >> (XEN)    ffff82c4802b8000 ffff82c4802bfd38 00000002802bfd38 ffff82c4802b8000
> >> (XEN)    ffffffff00000000 0000000000000001 ffffc90000028b60 ffffc90000028b60
> >> (XEN)    ffff8300dfb03000 0000000000000000 0000000000000000 000000000014581d
> >> (XEN)    00000000000deb3e 0000000000001000 0000000000157e68 000000000b507ff0
> >> (XEN)    0000000000000261 000000000042000d 00000000000204b0 ffffc90000028b38
> >> (XEN)    0000000000000002 ffffc90000028b38 ffffc90000028b38 ffff880185f6fd58
> >> (XEN)    ffff880185f6fd78 0000000000000005 ffff82c4802bfef8 ffff82c48010eb65
> >> (XEN)    ffff82c4802bfdc8 ffff82c480300960 ffff82c4802bfe18 ffff82c480181831
> >> (XEN)    000000000006df66 000032cfdc175ce6 0000000000000000 0000000000000000
> >> (XEN)    0000000000000000 0000000000000005 ffff82c4802bfe28 0000000000000086
> >> (XEN)    ffff82c4802bfe28 ffff82c480125eae ffff83019e60c000 0000000000000286
> >> (XEN) Xen call trace:
> >> (XEN)    [<ffff82c48016ea17>] get_page+0xd5/0x101
> >> (XEN)    [<ffff82c480109ba3>] __get_paged_frame+0xbf/0x162
> >> (XEN)    [<ffff82c48010c607>] gnttab_copy+0x4c6/0x91a
> >> (XEN)    [<ffff82c48010eb65>] do_grant_table_op+0x12ad/0x1b23
> >> (XEN)    [<ffff82c48022280b>] syscall_enter+0xeb/0x145
> >> (XEN)
> >> (XEN) grant_table.c:2076:d0 source frame ffffffffffffffff invalid.
> >>
> >> (Note that I've added a WARN() to mm.c:1925 to see where the
> >> get_page call was coming from).
> >>
> >> Connecting the iSCSI disks to another Dom0 works fine, so this
> >> problem only happens when trying to connect the disks to the
> >> Dom0 where the DomU is running.
> >
> > Is this happening when the 'disks' are exported to the domUs?
> > Are they exported via QEMU or xen-blkback?
>
> The iSCSI disks are connected to the DomUs using blkback, and this is
> happening when the DomU tries to access its disks.
>
> >>
> >> I've replaced the Linux DomU serving iSCSI targets with a
> >> NetBSD DomU, and the problem disappears; I'm able to
> >> attach the targets shared by the DomU to the Dom0 without
> >> issues.
> >>
> >> The problem seems to come from netfront/netback. Does anyone
> >> have a clue about what might cause this bad interaction
> >> between IET and netfront/netback?
> >
> > Or it might be that we are re-using the PFN for blkback/blkfront
> > and using the m2p overrides and overwriting the netfront/netback
> > m2p overrides?
>
> What's strange is that this doesn't happen when the domain that has the
> targets is a NetBSD PV. There are also problems when blkback is not used
> (see below), so I guess the problem is between netfront/netback and IET.
>
> > Is this with an HVM domU or PV domU?
>
> Both domains (the domain holding the iSCSI targets, and the created
> guests) are PV.
>
> Also, I forgot to mention this in the previous email: if I just
> connect the iSCSI disks to the Dom0, I don't see any errors from Xen,
> but the Dom0 kernel starts complaining:
>
> [70272.569607] sd 14:0:0:0: [sdc]
> [70272.569611] Sense Key : Medium Error [current]
> [70272.569619] Info fld=0x0
> [70272.569623] sd 14:0:0:0: [sdc]
> [70272.569627] Add. Sense: Unrecovered read error
> [70272.569633] sd 14:0:0:0: [sdc] CDB:
> [70272.569637] Read(10): 28 00 00 00 00 00 00 00 08 00
> [70272.569662] end_request: critical target error, dev sdc, sector 0
> [70277.571208] sd 14:0:0:0: [sdc] Unhandled sense code
> [70277.571220] sd 14:0:0:0: [sdc]
> [70277.571224] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [70277.571229] sd 14:0:0:0: [sdc]
> [70277.571233] Sense Key : Medium Error [current]
> [70277.571241] Info fld=0x0
> [70277.571245] sd 14:0:0:0: [sdc]
> [70277.571249] Add. Sense: Unrecovered read error
> [70277.571255] sd 14:0:0:0: [sdc] CDB:
> [70277.571259] Read(10): 28 00 00 00 00 00 00 00 08 00
> [70277.571284] end_request: critical target error, dev sdc, sector 0
> [70282.572768] sd 14:0:0:0: [sdc] Unhandled sense code
> [70282.572781] sd 14:0:0:0: [sdc]
> [70282.572785] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [70282.572790] sd 14:0:0:0: [sdc]
> [70282.572794] Sense Key : Medium Error [current]
> [70282.572802] Info fld=0x0
> [70282.572806] sd 14:0:0:0: [sdc]
> [70282.572810] Add. Sense: Unrecovered read error
> [70282.572816] sd 14:0:0:0: [sdc] CDB:
> [70282.572820] Read(10): 28 00 00 00 00 00 00 00 08 00
> [70282.572846] end_request: critical target error, dev sdc, sector 0
> [70287.574397] sd 14:0:0:0: [sdc] Unhandled sense code
> [70287.574409] sd 14:0:0:0: [sdc]
> [70287.574413] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
> [70287.574418] sd 14:0:0:0: [sdc]
> [70287.574422] Sense Key : Medium Error [current]
> [70287.574430] Info fld=0x0
> [70287.574434] sd 14:0:0:0: [sdc]
> [70287.574438] Add. Sense: Unrecovered read error
> [70287.574445] sd 14:0:0:0: [sdc] CDB:
> [70287.574448] Read(10): 28 00 00 00 00 00 00 00 08 00
> [70287.574474] end_request: critical target error, dev sdc, sector 0
>
> When I try to attach the targets to another Dom0, everything works fine;
> the problem only happens when the iSCSI target is a DomU and the disks
> are attached from the Dom0 on the same machine.
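For readers unfamiliar with the setup, the configuration Roger describes can be sketched roughly as below. All IQNs, addresses, and device paths are illustrative placeholders, not taken from his actual configuration:

```
# In the iSCSI target DomU: /etc/iet/ietd.conf (IET syntax; example values)
Target iqn.2012-12.com.example:storage.lun1
    Lun 0 Path=/dev/xvdb,Type=blockio

# In Dom0: discover and log in to the target served by that DomU (open-iscsi)
#   iscsiadm -m discovery -t sendtargets -p 192.168.1.10
#   iscsiadm -m node -T iqn.2012-12.com.example:storage.lun1 \
#       -p 192.168.1.10 --login

# The resulting SCSI disk is then handed to another guest, e.g. in its
# xl/xm config:
#   disk = [ 'phy:/dev/sdc,xvda,w' ]
```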

I think we are just swizzling the PFNs with a different MFN when you
do the domU -> domX transfer, using two ring protocols. Weird though, as
the m2p code has WARN_ON(PagePrivate(..)) checks to catch this sort of
thing.

What happens if the Dom0/DomUs are all running 3.8 with the persistent
grant patches?

>
> Thanks, Roger.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 17:48:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 17:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm6hm-0005jR-8L; Fri, 21 Dec 2012 17:48:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Tm6hl-0005jJ-1h
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 17:48:17 +0000
Received: from [85.158.138.51:19070] by server-2.bemta-3.messagelabs.com id
	0A/44-11239-FD0A4D05; Fri, 21 Dec 2012 17:48:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1356112094!20002100!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMTcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11726 invoked from network); 21 Dec 2012 17:48:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 17:48:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,331,1355097600"; 
   d="scan'208";a="310302"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 17:48:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 17:48:12 +0000
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Tm6hg-0001CX-7Q; Fri, 21 Dec 2012 17:48:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Tm6hg-00030e-0c;
	Fri, 21 Dec 2012 17:48:12 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20692.41178.370551.263683@mariner.uk.xensource.com>
Date: Fri, 21 Dec 2012 17:48:10 +0000
To: Toby Karyadi <toby.karyadi@gmail.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <50BD01DE.1050809@gmail.com>
References: <1354210308-23251-1-git-send-email-roger.pau@citrix.com>
	<20121129185635.GA1045@asim.lip6.fr> <50B8718C.1090405@citrix.com>
	<20121130085241.GC311@asim.lip6.fr> <50B87A7E.5030001@citrix.com>
	<20121130094143.GA10993@asim.lip6.fr> <50B88325.1050009@citrix.com>
	<1354271552.6269.110.camel@zakaz.uk.xensource.com>
	<20121130103823.GA9562@asim.lip6.fr>
	<1354272201.6269.113.camel@zakaz.uk.xensource.com>
	<20121130105058.GA3457@asim.lip6.fr> <50BD01DE.1050809@gmail.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Manuel Bouyer <bouyer@antioche.eu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 0/3] Add support for NetBSD gntdev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Toby Karyadi writes ("Re: [Xen-devel] [PATCH v2 0/3] Add support for NetBSD gntdev"):
> Hopefully I don't bore you to death if you read this far, but the way 
> the libxl/xl is going seems somewhat ridiculous, where they are trying 
> to insert more and more policy vs functionality,

The proposal here AIUI is just about what the default should be.

We have taken the view that the admin should not /need/ to specify
explicitly which software components to assemble together to get the
block devices to work.

That doesn't mean that the admin won't be enabled to specify exactly
what they want (and to keep all the pieces if it doesn't work).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> the libxl/xl is going seems somewhat ridiculous, where they are trying 
> to insert more and more policy vs functionality,

The proposal here AIUI is just about what the default should be.

We have taken the view that the admin should not /need/ to specify
explicitly which software components to assemble together to get the
block devices to work.

That doesn't mean that the admin won't be able to specify exactly
what they want (and to keep all the pieces if it doesn't work).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:21:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm89p-0006Zg-Ar; Fri, 21 Dec 2012 19:21:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm89o-0006Zb-Au
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:21:20 +0000
Received: from [193.109.254.147:37987] by server-3.bemta-14.messagelabs.com id
	49/F2-26055-FA6B4D05; Fri, 21 Dec 2012 19:21:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1356117675!10939996!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1200 invoked from network); 21 Dec 2012 19:21:17 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 19:21:17 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLJL5oj002655
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 19:21:06 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLJL35Y006381
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 19:21:03 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLJL2CO001836; Fri, 21 Dec 2012 13:21:03 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 11:21:02 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B85291C032B; Fri, 21 Dec 2012 14:21:01 -0500 (EST)
Date: Fri, 21 Dec 2012 14:21:01 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Message-ID: <20121221192101.GA30562@phenom.dumpdata.com>
References: <1353550823-10009-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E2DFA42@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E2DFA42@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "konrad@kernel.org" <konrad@kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/xen : Fix the wrong check in pciback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 13, 2012 at 08:25:54AM +0000, Zhang, Yang Z wrote:
> Zhang, Yang Z wrote on 2012-11-22:
> > From: Yang Zhang <yang.z.zhang@Intel.com>
> > 
> > Fix the wrong check in pciback.
> > 
> > Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> > ---
> >  drivers/xen/xen-pciback/pciback.h |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > diff --git a/drivers/xen/xen-pciback/pciback.h
> > b/drivers/xen/xen-pciback/pciback.h index a7def01..f72af87 100644 ---
> > a/drivers/xen/xen-pciback/pciback.h +++
> > b/drivers/xen/xen-pciback/pciback.h @@ -124,7 +124,7 @@ static inline
> > int xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
> >  static inline void xen_pcibk_release_pci_dev(struct xen_pcibk_device *pdev,
> >  					     struct pci_dev *dev)
> >  {
> > -	if (xen_pcibk_backend && xen_pcibk_backend->free)
> > +	if (xen_pcibk_backend && xen_pcibk_backend->release)
> >  		return xen_pcibk_backend->release(pdev, dev);
> >  }
> > --
> > 1.7.1.1
> Any comment?

Hm, I don't seem to have it in my queue, weird. Putting it in now.
> 
> Best regards,
> Yang
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:38:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:38:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8Pq-0006lr-VT; Fri, 21 Dec 2012 19:37:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tm8Pp-0006lm-HA
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:37:53 +0000
Received: from [85.158.139.211:47902] by server-14.bemta-5.messagelabs.com id
	0C/23-09538-09AB4D05; Fri, 21 Dec 2012 19:37:52 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-13.tower-206.messagelabs.com!1356118672!17291725!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28341 invoked from network); 21 Dec 2012 19:37:52 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-13.tower-206.messagelabs.com with SMTP;
	21 Dec 2012 19:37:52 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id A1DB6C5617F;
	Fri, 21 Dec 2012 19:37:41 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 21 Dec 2012 19:37:26 +0000
Message-Id: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] [RFC] QEMU: Enabling live-migrate on HVM on qemu-xen
	device model in 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a COMPILE TESTED ONLY RFC for a backport of the qemu elements
of:
  http://marc.info/?l=qemu-devel&m=134920288412400&w=2

This is a companion patchset to the libxl elements I published
here:
  http://marc.info/?l=xen-devel&m=135539631713838&w=4
which in turn are a backport of:
  http://marc.info/?l=xen-devel&m=134944750724252

Stefano: I know you said you were going to have a look at this, but
I thought I'd have a go. It compiles, which is about all I can say
for it. Feel free to throw away and redo.

Comments very welcome.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:38:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8Q1-0006mL-Db; Fri, 21 Dec 2012 19:38:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tm8Pz-0006mD-Sv
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:38:04 +0000
Received: from [85.158.139.211:48158] by server-3.bemta-5.messagelabs.com id
	BE/58-25441-B9AB4D05; Fri, 21 Dec 2012 19:38:03 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-5.tower-206.messagelabs.com!1356118682!21421964!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=1.0 required=7.0 tests=BODY_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25856 invoked from network); 21 Dec 2012 19:38:02 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Dec 2012 19:38:02 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 595D3C56190;
	Fri, 21 Dec 2012 19:38:02 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 21 Dec 2012 19:37:27 +0000
Message-Id: <1356118651-4882-2-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
References: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] [PATCH 1/5] QMP,
	Introduce xen-set-global-dirty-log command.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This command is used during a migration of a guest under Xen. It calls
cpu_physical_memory_set_dirty_tracking.

Backport of 39f42439d0629d3921629dc4b38e68df8f2f7b83

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 qapi-schema.json |   13 +++++++++++++
 qmp-commands.hx  |   24 ++++++++++++++++++++++++
 xen-all.c        |    6 ++++++
 xen-stub.c       |    5 +++++
 4 files changed, 48 insertions(+), 0 deletions(-)

diff --git a/qapi-schema.json b/qapi-schema.json
index a669e98..bb0d7c5 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -905,3 +905,16 @@
 # Since: 1.1
 ##
 { 'command': 'xen-save-devices-state', 'data': {'filename': 'str'} }
+
+##
+# @xen-set-global-dirty-log
+#
+# Enable or disable the global dirty log mode.
+#
+# @enable: true to enable, false to disable.
+#
+# Returns: nothing
+#
+# Since: 1.3
+##
+{ 'command': 'xen-set-global-dirty-log', 'data': { 'enable': 'bool' } }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index bf1df49..0de68df 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -472,6 +472,30 @@ Example:
 EQMP
 
     {
+        .name       = "xen-set-global-dirty-log",
+        .args_type  = "enable:b",
+        .mhandler.cmd_new = qmp_marshal_input_xen_set_global_dirty_log,
+    },
+
+SQMP
+xen-set-global-dirty-log
+-------
+
+Enable or disable the global dirty log mode.
+
+Arguments:
+
+- "enable": Enable it or disable it.
+
+Example:
+
+-> { "execute": "xen-set-global-dirty-log",
+     "arguments": { "enable": true } }
+<- { "return": {} }
+
+EQMP
+
+    {
         .name       = "migrate",
         .args_type  = "detach:-d,blk:-b,inc:-i,uri:s",
         .params     = "[-d] [-b] [-i] uri",
diff --git a/xen-all.c b/xen-all.c
index 3256509..6b4e511 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -12,6 +12,7 @@
 #include "hw/pc.h"
 #include "hw/xen_common.h"
 #include "hw/xen_backend.h"
+#include "qmp-commands.h"
 
 #include "range.h"
 #include "xen-mapcache.h"
@@ -524,6 +525,11 @@ static CPUPhysMemoryClient xen_cpu_phys_memory_client = {
     .log_stop = xen_log_stop,
 };
 
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+    cpu_physical_memory_set_dirty_tracking(!!enable);
+}
+
 /* VCPU Operations, MMIO, IO ring ... */
 
 static void xen_reset_vcpu(void *opaque)
diff --git a/xen-stub.c b/xen-stub.c
index efe2ab5..25317ec 100644
--- a/xen-stub.c
+++ b/xen-stub.c
@@ -8,6 +8,7 @@
 
 #include "qemu-common.h"
 #include "hw/xen.h"
+#include "qmp-commands.h"
 
 void xenstore_store_pv_console_info(int i, CharDriverState *chr)
 {
@@ -43,3 +44,7 @@ int xen_init(void)
 {
     return -ENOSYS;
 }
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+}
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:38:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8Q4-0006mx-0Y; Fri, 21 Dec 2012 19:38:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tm8Q2-0006mf-JV
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:38:06 +0000
Received: from [85.158.139.211:61032] by server-2.bemta-5.messagelabs.com id
	BB/E5-16162-D9AB4D05; Fri, 21 Dec 2012 19:38:05 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-9.tower-206.messagelabs.com!1356118685!18968608!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30077 invoked from network); 21 Dec 2012 19:38:05 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-9.tower-206.messagelabs.com with SMTP;
	21 Dec 2012 19:38:05 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id F3B19C5618D;
	Fri, 21 Dec 2012 19:38:04 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 21 Dec 2012 19:37:28 +0000
Message-Id: <1356118651-4882-3-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
References: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] [PATCH 2/5] xen: Introduce xen_modified_memory.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function is to be used during live migration. Every write access to guest
memory should call this function so that the Xen tools know which pages are
dirty.

Backport of 910b38e4dc4c37683c8b821e75a7f4cf095e4b21

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 hw/xen.h   |    1 +
 xen-all.c  |   21 +++++++++++++++++++++
 xen-stub.c |    4 ++++
 3 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/hw/xen.h b/hw/xen.h
index 2162111..359a275 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -45,6 +45,7 @@ void xenstore_store_pv_console_info(int i, struct CharDriverState *chr);
 
 #if defined(NEED_CPU_H) && !defined(CONFIG_USER_ONLY)
 void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size);
+void xen_modified_memory(ram_addr_t start, ram_addr_t length);
 #endif
 
 #if defined(CONFIG_XEN) && CONFIG_XEN_CTRL_INTERFACE_VERSION < 400
diff --git a/xen-all.c b/xen-all.c
index 6b4e511..121289d 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -1135,3 +1135,24 @@ void destroy_hvm_domain(bool reboot)
         xc_interface_close(xc_handle);
     }
 }
+
+void xen_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+    if (unlikely(cpu_physical_memory_get_dirty_tracking())) {
+        int rc;
+        ram_addr_t start_pfn, nb_pages;
+
+        if (length == 0) {
+            length = TARGET_PAGE_SIZE;
+        }
+        start_pfn = start >> TARGET_PAGE_BITS;
+        nb_pages = ((start + length + TARGET_PAGE_SIZE - 1) >> TARGET_PAGE_BITS)
+            - start_pfn;
+        rc = xc_hvm_modified_memory(xen_xc, xen_domid, start_pfn, nb_pages);
+        if (rc) {
+            fprintf(stderr,
+                    "%s failed for "RAM_ADDR_FMT" ("RAM_ADDR_FMT"): %i, %s\n",
+                    __func__, start, nb_pages, rc, strerror(-rc));
+        }
+    }
+}
diff --git a/xen-stub.c b/xen-stub.c
index 25317ec..7b54477 100644
--- a/xen-stub.c
+++ b/xen-stub.c
@@ -48,3 +48,7 @@ int xen_init(void)
 void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
+
+void xen_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+}
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:38:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:38:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8QE-0006oh-Q5; Fri, 21 Dec 2012 19:38:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tm8QC-0006oC-Tv
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:38:17 +0000
Received: from [85.158.137.99:59053] by server-9.bemta-3.messagelabs.com id
	44/74-11948-8AAB4D05; Fri, 21 Dec 2012 19:38:16 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-11.tower-217.messagelabs.com!1356118695!17354621!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10651 invoked from network); 21 Dec 2012 19:38:15 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-11.tower-217.messagelabs.com with SMTP;
	21 Dec 2012 19:38:15 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 4D319C56192;
	Fri, 21 Dec 2012 19:38:15 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 21 Dec 2012 19:37:30 +0000
Message-Id: <1356118651-4882-5-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
References: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] [PATCH 4/5] exec, memory: Call to xen_modified_memory.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds calls to xen_modified_memory to notify Xen about dirty bits
during migration.

Backport of e226939de5814527a21396903b08c3d0ed989558

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 exec.c   |    4 ++++
 memory.c |    4 ++++
 2 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/exec.c b/exec.c
index 511777b..401f9bc 100644
--- a/exec.c
+++ b/exec.c
@@ -2988,6 +2988,9 @@ ram_addr_t qemu_ram_alloc_from_ptr(DeviceState *dev, const char *name,
     memset(ram_list.phys_dirty + (new_block->offset >> TARGET_PAGE_BITS),
            0xff, size >> TARGET_PAGE_BITS);
 
+    if (xen_enabled())
+        xen_modified_memory(new_block->offset, size);
+
     if (kvm_enabled())
         kvm_setup_guest_memory(new_block->host, size);
 
@@ -3961,6 +3964,7 @@ static void invalidate_and_set_dirty(target_phys_addr_t addr,
         /* set dirty bit */
         cpu_physical_memory_set_dirty_flags(addr, (0xff & ~CODE_DIRTY_FLAG));
     }
+    xen_modified_memory(addr, length);
 }
 
 void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
diff --git a/memory.c b/memory.c
index 7c20a07..6e0c596 100644
--- a/memory.c
+++ b/memory.c
@@ -16,6 +16,7 @@
 #include "ioport.h"
 #include "bitops.h"
 #include "kvm.h"
+#include "hw/xen.h"
 #include <assert.h>
 
 unsigned memory_region_transaction_depth = 0;
@@ -1065,6 +1066,9 @@ bool memory_region_get_dirty(MemoryRegion *mr, target_phys_addr_t addr,
 void memory_region_set_dirty(MemoryRegion *mr, target_phys_addr_t addr)
 {
     assert(mr->terminates);
+    if (xen_enabled())
+        xen_modified_memory((mr->ram_addr + addr) & TARGET_PAGE_MASK,
+                            TARGET_PAGE_SIZE);
     return cpu_physical_memory_set_dirty(mr->ram_addr + addr);
 }
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:38:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8QF-0006ou-7O; Fri, 21 Dec 2012 19:38:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tm8QE-0006oQ-3Z
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:38:18 +0000
Received: from [85.158.138.51:64205] by server-12.bemta-3.messagelabs.com id
	B5/AD-27559-9AAB4D05; Fri, 21 Dec 2012 19:38:17 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-7.tower-174.messagelabs.com!1356118696!20011396!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23963 invoked from network); 21 Dec 2012 19:38:16 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-7.tower-174.messagelabs.com with SMTP;
	21 Dec 2012 19:38:16 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 5722CC56193;
	Fri, 21 Dec 2012 19:38:16 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 21 Dec 2012 19:37:31 +0000
Message-Id: <1356118651-4882-6-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
References: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] [PATCH 5/5] xen: Set the vram dirty when an error occurs.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the call to xc_hvm_track_dirty_vram() fails, we set the dirty bit on all of
the video RAM. This case happens during migration.

Backport of 8aba7dc02d5660df7e7d8651304b3079908358be

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 xen-all.c |   20 ++++++++++++++++++--
 1 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 121289d..dbd759c 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -470,7 +470,21 @@ static int xen_sync_dirty_bitmap(XenIOState *state,
     rc = xc_hvm_track_dirty_vram(xen_xc, xen_domid,
                                  start_addr >> TARGET_PAGE_BITS, npages,
                                  bitmap);
-    if (rc) {
+    if (rc < 0) {
+        if (rc != -ENODATA) {
+            ram_addr_t addr, end;
+
+            xen_modified_memory(start_addr, size);
+            
+            end = TARGET_PAGE_ALIGN(start_addr + size);
+            for (addr = start_addr & TARGET_PAGE_MASK; addr < end; addr += TARGET_PAGE_SIZE) {
+                cpu_physical_memory_set_dirty(addr);
+            }
+
+            DPRINTF("xen: track_dirty_vram failed (0x" TARGET_FMT_plx
+                    ", 0x" TARGET_FMT_plx "): %s\n",
+                    start_addr, start_addr + size, strerror(-rc));
+        }
         return rc;
     }
 
@@ -479,7 +493,9 @@ static int xen_sync_dirty_bitmap(XenIOState *state,
         while (map != 0) {
             j = ffsl(map) - 1;
             map &= ~(1ul << j);
-            cpu_physical_memory_set_dirty(vram_offset + (i * width + j) * TARGET_PAGE_SIZE);
+            target_phys_addr_t todirty = vram_offset + (i * width + j) * TARGET_PAGE_SIZE;
+            xen_modified_memory(todirty, TARGET_PAGE_SIZE);
+            cpu_physical_memory_set_dirty(todirty);
         };
     }
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:38:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8QD-0006oP-Dk; Fri, 21 Dec 2012 19:38:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tm8QC-0006o8-4Q
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:38:16 +0000
Received: from [85.158.137.99:13563] by server-13.bemta-3.messagelabs.com id
	3B/87-00465-7AAB4D05; Fri, 21 Dec 2012 19:38:15 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-3.tower-217.messagelabs.com!1356118694!14252469!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24755 invoked from network); 21 Dec 2012 19:38:14 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-3.tower-217.messagelabs.com with SMTP;
	21 Dec 2012 19:38:14 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id ED58FC56191;
	Fri, 21 Dec 2012 19:38:13 +0000 (GMT)
From: Alex Bligh <alex@alex.org.uk>
To: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 21 Dec 2012 19:37:29 +0000
Message-Id: <1356118651-4882-4-git-send-email-alex@alex.org.uk>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
References: <1356118651-4882-1-git-send-email-alex@alex.org.uk>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Alex Bligh <alex@alex.org.uk>
Subject: [Xen-devel] [PATCH 3/5] exec: Introduce helper to set dirty flags.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This new helper/hook is used in the next patch to add an extra call in a single
place.

Backport of 51d7a9eb2b64e787c90bea1027308087eac22065

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 exec.c |   45 +++++++++++++++++----------------------------
 1 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/exec.c b/exec.c
index 6c206ff..511777b 100644
--- a/exec.c
+++ b/exec.c
@@ -3951,6 +3951,18 @@ int cpu_memory_rw_debug(CPUState *env, target_ulong addr,
 }
 
 #else
+
+static void invalidate_and_set_dirty(target_phys_addr_t addr,
+                                     target_phys_addr_t length)
+{
+    if (!cpu_physical_memory_is_dirty(addr)) {
+        /* invalidate code */
+        tb_invalidate_phys_page_range(addr, addr + length, 0);
+        /* set dirty bit */
+        cpu_physical_memory_set_dirty_flags(addr, (0xff & ~CODE_DIRTY_FLAG));
+    }
+}
+
 void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                             int len, int is_write)
 {
@@ -4003,13 +4015,7 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                 /* RAM case */
                 ptr = qemu_get_ram_ptr(addr1);
                 memcpy(ptr, buf, l);
-                if (!cpu_physical_memory_is_dirty(addr1)) {
-                    /* invalidate code */
-                    tb_invalidate_phys_page_range(addr1, addr1 + l, 0);
-                    /* set dirty bit */
-                    cpu_physical_memory_set_dirty_flags(
-                        addr1, (0xff & ~CODE_DIRTY_FLAG));
-                }
+                invalidate_and_set_dirty(addr1, l);
                 qemu_put_ram_ptr(ptr);
             }
         } else {
@@ -4081,6 +4087,7 @@ void cpu_physical_memory_write_rom(target_phys_addr_t addr,
             /* ROM/RAM case */
             ptr = qemu_get_ram_ptr(addr1);
             memcpy(ptr, buf, l);
+	    invalidate_and_set_dirty(addr1, l);
             qemu_put_ram_ptr(ptr);
         }
         len -= l;
@@ -4211,13 +4218,7 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
                 l = TARGET_PAGE_SIZE;
                 if (l > access_len)
                     l = access_len;
-                if (!cpu_physical_memory_is_dirty(addr1)) {
-                    /* invalidate code */
-                    tb_invalidate_phys_page_range(addr1, addr1 + l, 0);
-                    /* set dirty bit */
-                    cpu_physical_memory_set_dirty_flags(
-                        addr1, (0xff & ~CODE_DIRTY_FLAG));
-                }
+                invalidate_and_set_dirty(addr1, l);
                 addr1 += l;
                 access_len -= l;
             }
@@ -4561,13 +4562,7 @@ static inline void stl_phys_internal(target_phys_addr_t addr, uint32_t val,
             stl_p(ptr, val);
             break;
         }
-        if (!cpu_physical_memory_is_dirty(addr1)) {
-            /* invalidate code */
-            tb_invalidate_phys_page_range(addr1, addr1 + 4, 0);
-            /* set dirty bit */
-            cpu_physical_memory_set_dirty_flags(addr1,
-                (0xff & ~CODE_DIRTY_FLAG));
-        }
+        invalidate_and_set_dirty(addr1, 4);
     }
 }
 
@@ -4639,13 +4634,7 @@ static inline void stw_phys_internal(target_phys_addr_t addr, uint32_t val,
             stw_p(ptr, val);
             break;
         }
-        if (!cpu_physical_memory_is_dirty(addr1)) {
-            /* invalidate code */
-            tb_invalidate_phys_page_range(addr1, addr1 + 2, 0);
-            /* set dirty bit */
-            cpu_physical_memory_set_dirty_flags(addr1,
-                (0xff & ~CODE_DIRTY_FLAG));
-        }
+        invalidate_and_set_dirty(addr1, 2);
     }
 }
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This new helper/hook is used in the next patch to add an extra call in a single
place.

Backport of 51d7a9eb2b64e787c90bea1027308087eac22065

Signed-off-by: Alex Bligh <alex@alex.org.uk>
---
 exec.c |   45 +++++++++++++++++----------------------------
 1 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/exec.c b/exec.c
index 6c206ff..511777b 100644
--- a/exec.c
+++ b/exec.c
@@ -3951,6 +3951,18 @@ int cpu_memory_rw_debug(CPUState *env, target_ulong addr,
 }
 
 #else
+
+static void invalidate_and_set_dirty(target_phys_addr_t addr,
+                                     target_phys_addr_t length)
+{
+    if (!cpu_physical_memory_is_dirty(addr)) {
+        /* invalidate code */
+        tb_invalidate_phys_page_range(addr, addr + length, 0);
+        /* set dirty bit */
+        cpu_physical_memory_set_dirty_flags(addr, (0xff & ~CODE_DIRTY_FLAG));
+    }
+}
+
 void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                             int len, int is_write)
 {
@@ -4003,13 +4015,7 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                 /* RAM case */
                 ptr = qemu_get_ram_ptr(addr1);
                 memcpy(ptr, buf, l);
-                if (!cpu_physical_memory_is_dirty(addr1)) {
-                    /* invalidate code */
-                    tb_invalidate_phys_page_range(addr1, addr1 + l, 0);
-                    /* set dirty bit */
-                    cpu_physical_memory_set_dirty_flags(
-                        addr1, (0xff & ~CODE_DIRTY_FLAG));
-                }
+                invalidate_and_set_dirty(addr1, l);
                 qemu_put_ram_ptr(ptr);
             }
         } else {
@@ -4081,6 +4087,7 @@ void cpu_physical_memory_write_rom(target_phys_addr_t addr,
             /* ROM/RAM case */
             ptr = qemu_get_ram_ptr(addr1);
             memcpy(ptr, buf, l);
+	    invalidate_and_set_dirty(addr1, l);
             qemu_put_ram_ptr(ptr);
         }
         len -= l;
@@ -4211,13 +4218,7 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
                 l = TARGET_PAGE_SIZE;
                 if (l > access_len)
                     l = access_len;
-                if (!cpu_physical_memory_is_dirty(addr1)) {
-                    /* invalidate code */
-                    tb_invalidate_phys_page_range(addr1, addr1 + l, 0);
-                    /* set dirty bit */
-                    cpu_physical_memory_set_dirty_flags(
-                        addr1, (0xff & ~CODE_DIRTY_FLAG));
-                }
+                invalidate_and_set_dirty(addr1, l);
                 addr1 += l;
                 access_len -= l;
             }
@@ -4561,13 +4562,7 @@ static inline void stl_phys_internal(target_phys_addr_t addr, uint32_t val,
             stl_p(ptr, val);
             break;
         }
-        if (!cpu_physical_memory_is_dirty(addr1)) {
-            /* invalidate code */
-            tb_invalidate_phys_page_range(addr1, addr1 + 4, 0);
-            /* set dirty bit */
-            cpu_physical_memory_set_dirty_flags(addr1,
-                (0xff & ~CODE_DIRTY_FLAG));
-        }
+        invalidate_and_set_dirty(addr1, 4);
     }
 }
 
@@ -4639,13 +4634,7 @@ static inline void stw_phys_internal(target_phys_addr_t addr, uint32_t val,
             stw_p(ptr, val);
             break;
         }
-        if (!cpu_physical_memory_is_dirty(addr1)) {
-            /* invalidate code */
-            tb_invalidate_phys_page_range(addr1, addr1 + 2, 0);
-            /* set dirty bit */
-            cpu_physical_memory_set_dirty_flags(addr1,
-                (0xff & ~CODE_DIRTY_FLAG));
-        }
+        invalidate_and_set_dirty(addr1, 2);
     }
 }
 
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:39:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8R0-00076c-Lp; Fri, 21 Dec 2012 19:39:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm8Qy-000769-Vj
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:39:05 +0000
Received: from [85.158.143.99:46675] by server-1.bemta-4.messagelabs.com id
	F3/96-28401-8DAB4D05; Fri, 21 Dec 2012 19:39:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1356118742!30431051!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30946 invoked from network); 21 Dec 2012 19:39:03 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 19:39:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLJcw34019331
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 19:38:59 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLJcwQ8022214
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 19:38:58 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLJcw8v021740; Fri, 21 Dec 2012 13:38:58 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 11:38:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 112661C032B; Fri, 21 Dec 2012 14:38:57 -0500 (EST)
Date: Fri, 21 Dec 2012 14:38:57 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "G.R." <firemeteor@users.sourceforge.net>
Message-ID: <20121221193857.GC30562@phenom.dumpdata.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
	<CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 12:04:01AM +0800, G.R. wrote:
> On Wed, Dec 19, 2012 at 2:20 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
> > Adding Jean, the author of the opregion patch.
> >
> > Jean, I believe the warning is due to the offset within the page.
> > To accommodate the offset, you would need to reserve another page for it.
> > Will the extra page cause any unexpected problem?
> >
> > The original thread is about an instability issue that directly freezes the host.
> > I believe the warning above should not have such an effect.
> > What do you think? Any suggestions?
> >
> 
> Jean appears to be no longer reachable.
> The warning I found turns out not to be relevant.
> According to the OpRegion spec, the tail part is reserved and should
> never be touched by the guest.
> Anyway, I have a local fix to get rid of the warning, by reserving
> one more page and mapping it when the host opregion is not page-aligned.
> I'll send it in a separate thread.
> 
> Back to the topic. I updated to Xen 4.2.1 and tried three times tonight.
> Two of them led to a total freeze, with no error log available, after
> a couple of minutes of game play.
> The last try ended with a GPU hang after 10+ minutes of game play.
> That is a guest-only hang, but I still have no way to check the GPU error
> state even though it has been collected:
> 
> [ 1553.588076] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
> elapsed... GPU hung
> [ 1553.592112] [drm] capturing error event; look for more information
> in /debug/dri/0/i915_error_state
> [ 1582.004075] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
> elapsed... GPU hung
> [ 1597.220075] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
> elapsed... GPU hung
> [ 1613.220074] [drm:i915_hangcheck_hung] *ERROR* Hangcheck timer
> elapsed... GPU hung

Those also appear on bare metal (Linus actually mentioned this).

> 
> I'm wondering if the two symptoms are due to the same underlying cause.
> But I guess a GPU hang caused by a guest driver issue should not freeze
> the host. Is that true?

It shouldn't. Is the machine usable while this guest is frozen?
> 
> I'm going to try more with different configs -- different kernel
> versions, with/without PVOPS, native run vs. VM, etc.
> But this is kind of blind since I have no clue at all. If you have
> anything to suspect, pointers would be highly appreciated.
> 
> Thanks,
> Timothy
> 
> > Thanks,
> > Timothy
> >
> > On Wed, Dec 19, 2012 at 1:28 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> >> Hi Stefano,
> >>
> >> I recently tried to play some 3D games on my linux guest.
> >> The game starts without problems but freezes the entire system after
> >> some time (a minute or so?).
> >> By that I mean both the host and the domU are no longer responsive.
> >> SSH freezes and I had to shut down the machine using the power button.
> >>
> >> I did not find anything obvious from the host log. But from the guest,
> >> I can find this:
> >>
> >> Dec 18 20:28:38 debvm kernel: [    0.899860] resource map sanity check
> >> conflict: 0xfeff5018 0xfeff7017 0xfeff7000 0xffffffff reserved
> >> Dec 18 20:28:38 debvm kernel: [    0.899862] ------------[ cut here
> >> ]------------
> >> Dec 18 20:28:38 debvm kernel: [    0.899869] WARNING: at
> >> arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2c4/0x33c()
> >> Dec 18 20:28:38 debvm kernel: [    0.899870] Hardware name: HVM domU
> >> Dec 18 20:28:38 debvm kernel: [    0.899872] Info: mapping multiple
> >> BARs. Your kernel is fine.
> >> Dec 18 20:28:38 debvm kernel: [    0.899873] Modules linked in:
> >> Dec 18 20:28:38 debvm kernel: [    0.899878] Pid: 1, comm: swapper/0
> >> Not tainted 3.6.9 #4
> >> Dec 18 20:28:38 debvm kernel: [    0.899892] Call Trace:
> >> Dec 18 20:28:38 debvm kernel: [    0.899896]  [<ffffffff8103d194>] ?
> >> warn_slowpath_common+0x76/0x8a
> >> Dec 18 20:28:38 debvm kernel: [    0.899898]  [<ffffffff8103d240>] ?
> >> warn_slowpath_fmt+0x45/0x4a
> >> Dec 18 20:28:38 debvm kernel: [    0.899900]  [<ffffffff81032a6c>] ?
> >> __ioremap_caller+0x2c4/0x33c
> >> Dec 18 20:28:38 debvm kernel: [    0.899902]  [<ffffffff812c3be3>] ?
> >> intel_opregion_setup+0x9c/0x201
> >> Dec 18 20:28:38 debvm kernel: [    0.899904]  [<ffffffff812bcb75>] ?
> >> intel_setup_gmbus+0x175/0x19d
> >> Dec 18 20:28:38 debvm kernel: [    0.899907]  [<ffffffff8128a37a>] ?
> >> i915_driver_load+0x548/0x90d
> >> Dec 18 20:28:38 debvm kernel: [    0.899910]  [<ffffffff812ff804>] ?
> >> setup_hpet_msi_remapped+0x20/0x20
> >> Dec 18 20:28:38 debvm kernel: [    0.899912]  [<ffffffff81272706>] ?
> >> drm_get_pci_dev+0x152/0x259
> >> Dec 18 20:28:38 debvm kernel: [    0.899915]  [<ffffffff813d4883>] ?
> >> _raw_spin_lock_irqsave+0x21/0x45
> >> Dec 18 20:28:38 debvm kernel: [    0.899918]  [<ffffffff811d9ecc>] ?
> >> local_pci_probe+0x5a/0xa0
> >> Dec 18 20:28:38 debvm kernel: [    0.899920]  [<ffffffff811d9fcf>] ?
> >> pci_device_probe+0xbd/0xe7
> >> Dec 18 20:28:38 debvm kernel: [    0.899922]  [<ffffffff812cd887>] ?
> >> driver_probe_device+0x1b0/0x1b0
> >> Dec 18 20:28:38 debvm kernel: [    0.899923]  [<ffffffff812cd887>] ?
> >> driver_probe_device+0x1b0/0x1b0
> >> Dec 18 20:28:38 debvm kernel: [    0.899925]  [<ffffffff812cd769>] ?
> >> driver_probe_device+0x92/0x1b0
> >> Dec 18 20:28:38 debvm kernel: [    0.899926]  [<ffffffff812cd8da>] ?
> >> __driver_attach+0x53/0x73
> >> Dec 18 20:28:38 debvm kernel: [    0.899928]  [<ffffffff812cc06f>] ?
> >> bus_for_each_dev+0x46/0x77
> >> Dec 18 20:28:38 debvm kernel: [    0.899930]  [<ffffffff812ccf8f>] ?
> >> bus_add_driver+0xd5/0x1f4
> >> Dec 18 20:28:38 debvm kernel: [    0.899931]  [<ffffffff812cde14>] ?
> >> driver_register+0x89/0x101
> >> Dec 18 20:28:38 debvm kernel: [    0.899933]  [<ffffffff811d9336>] ?
> >> __pci_register_driver+0x49/0xa3
> >> Dec 18 20:28:38 debvm kernel: [    0.899935]  [<ffffffff816d55c7>] ?
> >> ttm_init+0x63/0x63
> >> Dec 18 20:28:38 debvm kernel: [    0.899937]  [<ffffffff81002085>] ?
> >> do_one_initcall+0x75/0x12c
> >> Dec 18 20:28:38 debvm kernel: [    0.899940]  [<ffffffff816a6cc2>] ?
> >> kernel_init+0x13c/0x1c0
> >> Dec 18 20:28:38 debvm kernel: [    0.899941]  [<ffffffff816a6565>] ?
> >> do_early_param+0x83/0x83
> >> Dec 18 20:28:38 debvm kernel: [    0.899943]  [<ffffffff813d9f44>] ?
> >> kernel_thread_helper+0x4/0x10
> >> Dec 18 20:28:38 debvm kernel: [    0.899945]  [<ffffffff816a6b86>] ?
> >> start_kernel+0x3e1/0x3e1
> >> Dec 18 20:28:38 debvm kernel: [    0.899947]  [<ffffffff813d9f40>] ?
> >> gs_change+0x13/0x13
> >> Dec 18 20:28:38 debvm kernel: [    0.899950] ---[ end trace
> >> db461543ce599b44 ]---
> >>
> >> I'm not sure if this has anything to do with the freeze. It seems to
> >> show up on every boot since I upgraded to Xen 4.2.1-rc2. Both
> >> Debian kernels 3.2.32 and 3.6.9 produce the same log. But the whole
> >> system freeze happens only during gaming, which is much less frequent,
> >> so I'm not sure if the two are related. Anyway, could you comment
> >> on what this log means?
> >>
> >> I can find one of the mentioned addresses in the qemu_dm log:
> >> pt_pci_write_config: [00:02:0] address=00fc val=0xfeff5000 len=4
> >> igd_write_opregion: Map OpRegion: cd996018 -> feff5018
> >> igd_write_opregion: [00:02:0] addr=fc len=2 val=feff5000
> >>
> >> PS: I also run XBMC in a domU and it plays back video with HW
> >> acceleration (VAAPI) without any problem. XBMC is itself a
> >> graphics-intensive program. But that runs on a pure HVM guest, while
> >> the failing case is on PVHVM.
> >>
> >> PS2: I also suffered another instability yesterday. It happened when I
> >> was compiling a kernel inside the domU. The host rebooted suddenly.
> >> Since I was not using graphics at the time (the Xorg session was idle;
> >> I connected through SSH), this may be a different issue.
> >>
> >> Thanks,
> >> Timothy
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:39:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:39:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8RU-0007H3-3T; Fri, 21 Dec 2012 19:39:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm8RS-0007GO-5E
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:39:34 +0000
Received: from [85.158.139.211:6107] by server-4.bemta-5.messagelabs.com id
	37/C4-14693-5FAB4D05; Fri, 21 Dec 2012 19:39:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1356118771!20731724!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31445 invoked from network); 21 Dec 2012 19:39:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 19:39:32 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLJdS40019899
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 19:39:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLJdSsq028921
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 19:39:28 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLJdR0l021984; Fri, 21 Dec 2012 13:39:27 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 11:39:28 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F054D1C032B; Fri, 21 Dec 2012 14:39:26 -0500 (EST)
Date: Fri, 21 Dec 2012 14:39:26 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "G.R." <firemeteor@users.sourceforge.net>
Message-ID: <20121221193926.GD30562@phenom.dumpdata.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
	<CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
	<CAKhsbWau-Kou2Y5vYcetRJLM5nQWGc0w65SpG1YxtcY6+VcXxA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAKhsbWau-Kou2Y5vYcetRJLM5nQWGc0w65SpG1YxtcY6+VcXxA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xen.org>, Jean Guyader <Jean.guyader@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 21, 2012 at 02:20:17AM +0800, G.R. wrote:
> On Thu, Dec 20, 2012 at 12:04 AM, G.R. <firemeteor@users.sourceforge.net> wrote:
> >>> PS2: I also suffered another instability yesterday. It happened when I
> >>> was compiling a kernel inside the domU. The host rebooted suddenly.
> >>> Since I wasn't using graphics at that time (the Xorg session was idle; I
> >>> connected through SSH), this may be a different issue.
> 
> I tried once more to rebuild the kernel in the debian VM. It was a total
> mess this time.
> The whole system (including dom0) unexpectedly rebooted several times
> during the compilation.
> This corrupted the kernel tree and I failed to build the kernel.
> I suspect this has something to do with the disk driver, since the reboots
> tend to happen under high disk load (e.g. while linking vmlinux).

Is the AHCI controller sharing the same interrupt line as the IGD?
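One way to check from dom0 is /proc/interrupts. The snippet below runs the check against a fabricated sample line in that file's format (the IRQ number, counts, and the ahci/i915 pairing are all illustrative, not taken from this system):

```shell
# On a real dom0 you would run:  grep -E 'ahci|i915' /proc/interrupts
# Here we test a fabricated sample line in /proc/interrupts format:
sample=' 16:      1234      5678   IO-APIC-fasteoi   ahci, i915'
# Two device names on the same line means they share that interrupt line.
if printf '%s\n' "$sample" | grep -q 'ahci.*i915'; then
    echo "shared interrupt line: AHCI and IGD (i915) on the same IRQ"
fi
```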

> Will run iozone to check tomorrow.
> 
> It seems that this issue has little to do with IGD passthrough.
> I'm not sure if it's the same issue for the host freezing during game play.
> Maybe I should track them separately.
> 
> Thanks,
> Timothy
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 19:45:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 19:45:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8XU-0007yl-6a; Fri, 21 Dec 2012 19:45:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm8XS-0007yf-Jd
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 19:45:46 +0000
Received: from [193.109.254.147:26925] by server-9.bemta-14.messagelabs.com id
	11/21-24482-96CB4D05; Fri, 21 Dec 2012 19:45:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1356119143!2203883!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5704 invoked from network); 21 Dec 2012 19:45:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 19:45:44 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLJjZhQ024917
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 19:45:35 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLJjYnn001946
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 19:45:35 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLJjYZP017335; Fri, 21 Dec 2012 13:45:34 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 11:45:34 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5D9F21C032B; Fri, 21 Dec 2012 14:45:33 -0500 (EST)
Date: Fri, 21 Dec 2012 14:45:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ross Philipson <Ross.Philipson@citrix.com>
Message-ID: <20121221194533.GE30562@phenom.dumpdata.com>
References: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B645F@FTLPMAILBOX02.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B645F@FTLPMAILBOX02.citrite.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 20, 2012 at 01:55:10PM -0500, Ross Philipson wrote:
> This patch series introduces support for loading external blocks of firmware
> into a guest. These blocks can currently contain SMBIOS and/or ACPI firmware
> information that is used by HVMLOADER to modify a guest's virtual firmware at
> startup. These modules are only used by HVMLOADER and are effectively
> discarded after HVMLOADER has completed.
> 
> The domain-building code in libxenguest is passed these firmware blocks
> in the xc_hvm_build_args structure and loads them into the new guest,
> returning the load address. The loading is done in what will become the
> guest's low RAM area, just behind the load location for HVMLOADER. After
> their use by HVMLOADER they are effectively discarded. It is the caller's
> job to write the base address and length values into xenstore, using the
> paths defined in the new hvm_defs.h header, so HVMLOADER can locate the blocks.
> 
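The caller-side publishing step described above might be sketched as follows. Note this is a dry run that only prints the commands, and the xenstore paths, domain id, and values are hypothetical stand-ins for the ones actually defined in hvm_defs.h:

```shell
DOMID=5                        # example domain id, not from the patch series
BASE=0x9F000; LEN=0x400        # made-up values returned by the domain builder
# Hypothetical paths; the real ones come from hvm_defs.h.
for kv in "smbios-fw/address $BASE" "smbios-fw/length $LEN"; do
    set -- $kv   # split "path value" into $1 and $2
    echo xenstore-write "/local/domain/$DOMID/hvmloader/$1" "$2"
done
```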

Are there patches to plug this into 'xl'?

> Currently two types of firmware information are recognized and processed
> in HVMLOADER, though this could be extended.
> 
> 1. SMBIOS: The SMBIOS table-building code will attempt to retrieve (for a
> predefined set of structure types) any passed-in structures. If a match is
> found, the passed-in table will be used, overriding the default values. In
> addition, the SMBIOS code will also enumerate and load any vendor-defined
> structures (in the type range 128 - 255) that are passed in. See the
> hvm_defs.h header for information on the format of this block.
> 2. ACPI: Static and secondary descriptor tables can be added to the set of
> ACPI tables built by HVMLOADER. The ACPI builder code will enumerate passed-in
> tables and add them at the end of the secondary table list. See the hvm_defs.h
> header for information on the format of this block.
> 
> There are 4 patches in the series:
> 01 - Add HVM definitions header for firmware passthrough support.
> 02 - Xen control tools support for loading the firmware blocks.
> 03 - Passthrough support for SMBIOS.
> 04 - Passthrough support for ACPI.
> 
> Note: this is version 4 of this patch set. Some of the differences from v3:
>  - Generic module support removed, overall functionality was simplified.
>  - Use of xenstore to supply firmware passthrough information to HVMLOADER.
>  - Fixed issues pointed out in the SMBIOS processing code.
>  - Created defines for the SMBIOS handles in use and switched to using
>    the xenstore values in the new hvm_defs.h file.
> 
> Signed-off-by: Ross Philipson <ross.philipson@citrix.com>
> 
> (Based on xen-4.3 staging/unstable cs 26317)
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 20:00:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 20:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8lC-0008Cs-Kl; Fri, 21 Dec 2012 19:59:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm8lA-0008Cn-Tq
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:59:57 +0000
Received: from [85.158.138.51:39180] by server-1.bemta-3.messagelabs.com id
	94/9F-08906-CBFB4D05; Fri, 21 Dec 2012 19:59:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1356119993!11321246!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2279 invoked from network); 21 Dec 2012 19:59:55 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 19:59:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLJxpVp004744
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 19:59:52 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLJxoc6024030
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 19:59:51 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLJxogm001663; Fri, 21 Dec 2012 13:59:50 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 11:59:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 066E21C032B; Fri, 21 Dec 2012 14:59:49 -0500 (EST)
Date: Fri, 21 Dec 2012 14:59:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>, daniel.kiper@oracle.com
Message-ID: <20121221195948.GF30562@phenom.dumpdata.com>
References: <50CD7A57.5030107@cxl.epac.to>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50CD7A57.5030107@cxl.epac.to>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Dec 16, 2012 at 02:37:59AM -0500, Xiao-Long Chen wrote:
> Hi Xen developers,
> 
> I have been having a problem where only one CPU core is detected.
> I'm using a Lenovo W520 with UEFI firmware and an Intel Core i7
> 2720QM (4 physical cores, 8 logical with hyper-threading). When I boot, I
> see "Dom0 has maximum 1 VCPUs", or something similar, scroll by.

Ooh.
> 
> In addition, only 10 GB out of my 12 GB of memory is recognized and
> ACPI is not working properly (CPU frequency scaling and battery info
> are not reported).
> 
> This problem has been reported before (for the same laptop too) here:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
> unfortunately, the person who sent the message didn't reply with more
> information.
> 
> Here are the outputs of dmesg with and without Xen:
> 
> Without Xen: http://paste.kde.org/626222/raw/
> With Xen: http://paste.kde.org/626228/raw/

Hm, this:
[    0.000000] ACPI BIOS Bug: Error: A valid RSDP was not found (20120913/tbxfroot-219)

is a problem. The workaround mentioned on the mailing list is to boot with
acpi_rsdp=0xbabfe000 on the command line.
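In GRUB 2 that would mean appending the parameter to the boot entry, roughly as below. The file names and version numbers are illustrative, and whether the option belongs on the hypervisor line, the dom0 kernel line, or both is an assumption to verify on this machine:

```
# /etc/grub.d/40_custom fragment (illustrative paths and kernel version)
menuentry 'Xen (acpi_rsdp workaround)' {
    multiboot /boot/xen.gz acpi_rsdp=0xbabfe000
    module    /boot/vmlinuz-3.7 root=/dev/sda2 ro acpi_rsdp=0xbabfe000
    module    /boot/initrd-3.7.img
}
```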

> 
> and some information from xl:
> 
> xl vcpu-list: http://paste.kde.org/626234/raw/
> xl info: http://paste.kde.org/626240/raw/
> xl dmesg: http://paste.kde.org/626246/raw/

And Xen has the same issue:

(XEN) ACPI Error (tbxfroot-0218): A valid RSDP was not found [20070126]

And it looks to be running in a legacy state - with no calls to EFI.
> 
> This is my first time venturing into Xen territory, so please let me
> know if there's any other information needed.

Did you try to boot xen.efi by itself - without using the GRUB loader?
There is a nice writeup of how to do this in docs/misc/efi.markdown.
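As a rough sketch of what that document describes: xen.efi reads a xen.cfg placed next to it on the EFI system partition. The section names below follow that writeup, but the kernel/initrd file names and options are illustrative, so treat efi.markdown as authoritative:

```
[global]
default=xen

[xen]
options=console=vga loglvl=all
kernel=vmlinuz-3.7 root=/dev/sda2 ro
ramdisk=initrd-3.7.img
```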

Also CC-ing Daniel if he has some ideas.
> 
> I'm not sure if it helps, but I'd also like to point out that I had a
> similar problem with Linux back around July 20, 2011, when I got my
> computer. I could work around the issue by booting with "noapic". I'm
> not sure which kernel version fixed the issue as I booted with that
> option for quite a while. The kernel version, at the time, was 3.0.
> 
> I am glad to help in any way I can. I'm excited to get Xen working!
> I'm comfortable with compiling the kernel or Xen if necessary.
> 
> Thanks in advance!
> 
> Xiao-Long Chen
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 20:00:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 20:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm8lC-0008Cs-Kl; Fri, 21 Dec 2012 19:59:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm8lA-0008Cn-Tq
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 19:59:57 +0000
Received: from [85.158.138.51:39180] by server-1.bemta-3.messagelabs.com id
	94/9F-08906-CBFB4D05; Fri, 21 Dec 2012 19:59:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1356119993!11321246!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2279 invoked from network); 21 Dec 2012 19:59:55 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 19:59:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLJxpVp004744
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 19:59:52 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLJxoc6024030
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 19:59:51 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLJxogm001663; Fri, 21 Dec 2012 13:59:50 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 11:59:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 066E21C032B; Fri, 21 Dec 2012 14:59:49 -0500 (EST)
Date: Fri, 21 Dec 2012 14:59:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>, daniel.kiper@oracle.com
Message-ID: <20121221195948.GF30562@phenom.dumpdata.com>
References: <50CD7A57.5030107@cxl.epac.to>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50CD7A57.5030107@cxl.epac.to>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Dec 16, 2012 at 02:37:59AM -0500, Xiao-Long Chen wrote:
> Hi Xen developers,
> 
> I have been having a problem where only one CPU core is being detected.
> I'm using a Lenovo W520 with a UEFI firmware and an Intel Core i7
> 2720qm (4 real cores, 4 virtual cores). When I boot, I see
> "Dom0 has maximum 1 VCPUs", or something similar, scroll by.

Ooh.
> 
> In addition, only 10 GB out of my 12 GB of memory is recognized and
> ACPI is not working properly (CPU frequency scaling and battery info
> are not reported).
> 
> This problem has been reported before (for the same laptop too) here:
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
> unfortunately, the person who sent the message didn't reply with more
> information.
> 
> Here are the outputs of dmesg with and without Xen:
> 
> Without Xen: http://paste.kde.org/626222/raw/
> With Xen: http://paste.kde.org/626228/raw/

Hm, this:
[    0.000000] ACPI BIOS Bug: Error: A valid RSDP was not found (20120913/tbxfroot-219)

is a problem. A workaround mentioned on the mailing list was to pass
acpi_rsdp=0xbabfe000 on the command line.
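For reference, a hedged sketch of where such a parameter would go in a
GRUB2 menu entry. The paths and version numbers are placeholders, and the
placement of acpi_rsdp= on the dom0 kernel line (rather than the Xen line)
is an assumption to verify against your setup:

```
# Hypothetical grub.cfg fragment -- adjust paths and versions to your system.
multiboot /boot/xen.gz dom0_mem=max:6144M
module    /boot/vmlinuz-3.7.0 root=/dev/sda1 ro acpi_rsdp=0xbabfe000
module    /boot/initrd-3.7.0.img
```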

> 
> and some information from xl:
> 
> xl vcpu-list: http://paste.kde.org/626234/raw/
> xl info: http://paste.kde.org/626240/raw/
> xl dmesg: http://paste.kde.org/626246/raw/

And Xen has the same issue:

(XEN) ACPI Error (tbxfroot-0218): A valid RSDP was not found [20070126]

And it looks to be running in a legacy state - with no calls to EFI.
> 
> This is my first time venturing into Xen territory, so please let me
> know if there's any other information needed.

Did you try to boot xen.efi by itself - without using the GRUB loader?
There is a nice writeup of how to do this in docs/misc/efi.markdown.
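For anyone trying this, docs/misc/efi.markdown describes a xen.cfg placed
next to xen.efi on the EFI system partition. A minimal sketch (the file
names and options here are placeholders, not a tested configuration):

```
[global]
default=xen

[xen]
options=console=vga dom0_mem=max:6144M
kernel=vmlinuz-3.7.0 root=/dev/sda1 ro console=hvc0
ramdisk=initrd-3.7.0.img
```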

Also CC-ing Daniel if he has some ideas.
> 
> I'm not sure if it helps, but I'd also like to point out that I had a
> similar problem with Linux back around July 20, 2011, when I got my
> computer. I could work around the issue by booting with "noapic". I'm
> not sure which kernel version fixed the issue as I booted with that
> option for quite a while. The kernel version, at the time, was 3.0.
> 
> I am glad to help in any way I can. I'm excited to get Xen working!
> I'm comfortable with compiling the kernel or Xen if necessary.
> 
> Thanks in advance!
> 
> Xiao-Long Chen
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 20:18:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 20:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm93G-0008WB-EJ; Fri, 21 Dec 2012 20:18:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm93E-0008W6-SF
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 20:18:37 +0000
Received: from [85.158.139.83:12260] by server-12.bemta-5.messagelabs.com id
	EA/44-02275-C14C4D05; Fri, 21 Dec 2012 20:18:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356121114!27044854!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13147 invoked from network); 21 Dec 2012 20:18:35 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 20:18:35 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLKISSc021405
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 20:18:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLKIQui000746
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 20:18:27 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLKIQsD013725; Fri, 21 Dec 2012 14:18:26 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 12:18:26 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DBA731C032B; Fri, 21 Dec 2012 15:18:24 -0500 (EST)
Date: Fri, 21 Dec 2012 15:18:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Al Viro <viro@ZenIV.linux.org.uk>, dgdegra@tycho.nsa.gov,
	xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com
Message-ID: <20121221201824.GA31554@phenom.dumpdata.com>
References: <20121215181211.GV4939@ZenIV.linux.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20121215181211.GV4939@ZenIV.linux.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] oopsable race in xen-gntdev (unsafe vma access)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Dec 15, 2012 at 06:12:11PM +0000, Al Viro wrote:
> 	1) find_vma() is *not* safe without ->mmap_sem and its result may
> very well be freed just as it's returned to caller.  IOW,
> gntdev_ioctl_get_offset_for_vaddr() is racy and may end up with
> dereferencing freed memory.
> 
> 	2) gntdev_vma_close() is putting NULL into map->vma with only
> ->mmap_sem held by caller.  Things like
>                 if (!map->vma)
>                         continue;
>                 if (map->vma->vm_start >= end)
>                         continue;
>                 if (map->vma->vm_end <= start)
> done with just priv->lock held are racy.
> 
> 	I'm not familiar with the code, but it looks like we need to
> protect gntdev_vma_close() guts with the same spinlock and probably
> hold ->mmap_sem shared around the "find_vma()+get to map->{index,count}"
> in the ioctl.  Or replace the logics in ioctl with search through the
> list of grant_map under the same spinlock...
> 
> 	Comments?
Hey Al,

Thank you for your analysis.

CC-ing Daniel, David and Stefano. I recall we had some priv->lock movement
in the past, and there is also interaction with another piece of code -
the balloon code - so we had better be careful not to blow anything up.

Al, it is around holidays and folks are mostly gone - so this will take
a bit of time to get sorted out.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 20:24:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 20:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm98q-0000Ed-7K; Fri, 21 Dec 2012 20:24:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm98p-0000EX-1k
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 20:24:23 +0000
Received: from [193.109.254.147:30495] by server-14.bemta-14.messagelabs.com
	id 92/6E-10022-675C4D05; Fri, 21 Dec 2012 20:24:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1356121449!2178710!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzQ4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30054 invoked from network); 21 Dec 2012 20:24:11 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 20:24:11 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLKO1Cx012367
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 20:24:02 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLKO07f003825
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 20:24:01 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLKO0Pt008802; Fri, 21 Dec 2012 14:24:00 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 12:24:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A857F1C032B; Fri, 21 Dec 2012 15:23:59 -0500 (EST)
Date: Fri, 21 Dec 2012 15:23:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: James Dingwall <james-xen@dingwall.me.uk>
Message-ID: <20121221202359.GB31554@phenom.dumpdata.com>
References: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
	<a4a53469917d35c93def9ef45527e277@imap.dingwall.me.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <a4a53469917d35c93def9ef45527e277@imap.dingwall.me.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] kernel log flooded with: xen_balloon:
 reserve_additional_memory: add_memory() failed: -17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 10:55:49AM +0000, James Dingwall wrote:
> On 2012-12-19 08:47, James Dingwall wrote:
> >Hi,
> >
> >I have encountered an apparently benign error on two systems where
> >the dom0 kernel log is flooded with messages like:
> >
> >[52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff]
> >cannot be added
> >[52482.163860] xen_balloon: reserve_additional_memory:
> >add_memory() failed: -17
> >
> >The first line is from drivers/xen/xen-balloon.c, the second from
> >mm/memory_hotplug.c
> >
> >The trigger for the messages seems to be the first occasion that a
> >Xen guest is shutdown.  I have noted this in a vanilla 3.6.7 and
> >kernel 3.5.0-18 built from Ubuntu sources.  Xen version is 4.2.0.  It
> >is not clear why the dom0 kernel is trying to balloon up, as the
> >Xen
> >command line specifies a fixed dom0 memory allocation and
> >noselfballooning is specified for the kernel and ballooning is also
> >disabled in the xend-config.sxp / xl.conf (one system using xm,
> >another xl)
> >
> >xen command line:
> >placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1
> >dom0_mem=max:6144m
> >
> >kernel command line:
> >root=/dev/loop0 ro console=tty1 console=hvc0 earlyprintk=xen
> >nomodeset noselfballooning
> >
> >Examining /proc/iomem does show that the dom0 memory allocation is
> >actually 64 KiB short of 6144 MiB:
> >
> >cat /proc/iomem | grep System\ RAM
> >00010000-0009bfff : System RAM      [573440 bytes]
> >00100000-cb2dffff : System RAM      [3407740928 bytes]
> >100000000-1b4d83fff : System RAM    [3034071040 bytes]
> >
> >Total System RAM: 6442385408 bytes; 6x2^30 - 6442385408 = 65536
> >
> >The memory range indicated in the log message is "Unusable memory" in
> >/proc/iomem:
> >1b4d84000-82fffffff : Unusable memory
> >
> >Another point of interest is that we have multiple "identical"
> >hardware platforms (Dell T320) for the system running the 3.5.0-18
> >kernel but only see this error on a slightly more recent system.
> >Older systems show in /proc/iomem that all memory is System RAM.
> >
> >100000000-82fffffff : System RAM  [older system BIOS 1.0]

Wow. That is a lot of memory :-)

> >
> >100000000-1b4d83fff : System RAM  [newer system BIOS 1.3]
> >1b4d84000-82fffffff : Unusable memory
> >
> >The BIOS revision between the old and new has changed so I was
> >wondering if it is possible that there is a whitelist which affects
> >the impact of the kernel option:
> >CONFIG_X86_RESERVE_LOW=64
> >This is only a guess since the amount of memory reserved is
> >equivalent to the short fall calculated above.  If this is the right
> >area perhaps the dom0 calculation for its memory entitlement needs to
> >be taught not to try to hotplug the missing 64k when it has been
> >reserved.
> >
> >If any other information would be useful then please let me know.
> 
> With some further investigation we have determined that the
> different BIOS version does not seem to be a factor and the key
> point is actually the Xen command line.  The reason that we had max:
> specified is that without it we could not boot the kernel/xen
> combination on an AMD platform.  I will do some further testing to

What happened? Did you report it in another email on this mailing list?

> see what result dom0_mem=6144m,max:6144m, as suggested at
> http://wiki.xen.org/wiki/Xen_Best_Practices, gets us.
> 
> placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1
> dom0_mem=max:6144m

I think you don't need xsave=0 anymore? It was only needed with
a Fedora stock kernel (and the underlying issue, I believe, is
now fixed).

> results in /proc/iomem having an unusable range and top reports
> 6083900k of memory in dom0
> 
> placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1
> dom0_mem=6144m
> no unusable range, top reports 5605976k of memory in dom0 and no log
> messages

Hmm..

If you were to run this with 'loglevel=8 debug' under both
dom0_mem=6144m and dom0_mem=max:6144m, could you send the
'xl dmesg' and 'dmesg' outputs for each?
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 20:25:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 20:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm9A6-0000Jk-Mp; Fri, 21 Dec 2012 20:25:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm9A5-0000Jc-N3
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 20:25:41 +0000
Received: from [85.158.137.99:51457] by server-16.bemta-3.messagelabs.com id
	D2/A9-27634-0C5C4D05; Fri, 21 Dec 2012 20:25:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-217.messagelabs.com!1356121533!20185836!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23403 invoked from network); 21 Dec 2012 20:25:35 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 20:25:35 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLKPQei026931
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 20:25:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLKPQMS001960
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 20:25:26 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLKPP5c008596; Fri, 21 Dec 2012 14:25:25 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 12:25:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A6FA91C032B; Fri, 21 Dec 2012 15:25:24 -0500 (EST)
Date: Fri, 21 Dec 2012 15:25:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: James Dingwall <james-xen@dingwall.me.uk>
Message-ID: <20121221202524.GC31554@phenom.dumpdata.com>
References: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: daniel.kiper@oracle.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] kernel log flooded with: xen_balloon:
 reserve_additional_memory: add_memory() failed: -17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 08:47:22AM +0000, James Dingwall wrote:
> Hi,
> 
> I have encountered an apparently benign error on two systems where
> the dom0 kernel log is flooded with messages like:
> 
> [52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff]
> cannot be added
> [52482.163860] xen_balloon: reserve_additional_memory: add_memory()
> failed: -17

Daniel tells me it is due to the recent changes in the balloon code.
CC-ing him here.

> 
> The first line is from drivers/xen/xen-balloon.c, the second from
> mm/memory_hotplug.c
> 
> The trigger for the messages seems to be the first occasion that a
> Xen guest is shut down.  I have noted this in a vanilla 3.6.7 and
> a 3.5.0-18 kernel built from Ubuntu sources.  Xen version is 4.2.0.
> It is not clear why the dom0 kernel is trying to balloon up, as
> the Xen command line specifies a fixed dom0 memory allocation,
> noselfballooning is specified for the kernel, and ballooning is also
> disabled in xend-config.sxp / xl.conf (one system using xm,
> another xl).
> 
> xen command line:
> placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1
> dom0_mem=max:6144m
> 
> kernel command line:
> root=/dev/loop0 ro console=tty1 console=hvc0 earlyprintk=xen
> nomodeset noselfballooning
> 
> Examining /proc/iomem does show that the dom0 memory allocation is
> actually 64 KiB short of 6144 MiB:
> 
> cat /proc/iomem | grep System\ RAM
> 00010000-0009bfff : System RAM      [573440 bytes]
> 00100000-cb2dffff : System RAM      [3407740928 bytes]
> 100000000-1b4d83fff : System RAM    [3034071040 bytes]
> 
> Total system RAM: 6442385408 bytes; shortfall = 6 * 2^30 - 6442385408 = 65536
> 
> The memory range indicated in the log message is "Unusable memory"
> in /proc/iomem:
> 1b4d84000-82fffffff : Unusable memory
> 
> Another point of interest is that we have multiple "identical"
> hardware platforms (Dell T320) for the system running the 3.5.0-18
> kernel but only see this error on a slightly more recent system.
> Older systems show in /proc/iomem that all memory is System RAM.
> 
> 100000000-82fffffff : System RAM  [older system BIOS 1.0]
> 
> 100000000-1b4d83fff : System RAM  [newer system BIOS 1.3]
> 1b4d84000-82fffffff : Unusable memory
> 
> The BIOS revision changed between the old and new systems, so I was
> wondering whether there is a whitelist which affects the impact of
> the kernel option:
> CONFIG_X86_RESERVE_LOW=64
> This is only a guess, since the amount of memory reserved is
> equivalent to the shortfall calculated above.  If this is the right
> area, perhaps the dom0 calculation for its memory entitlement needs
> to be taught not to try to hotplug the missing 64k when it has
> been reserved.
> 
> If any other information would be useful then please let me know.
> 
> Thanks,
> James
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:02:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm9jN-0000jW-LX; Fri, 21 Dec 2012 21:02:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm9jL-0000jO-Td
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 21:02:08 +0000
Received: from [85.158.138.51:17297] by server-1.bemta-3.messagelabs.com id
	89/1D-08906-A4EC4D05; Fri, 21 Dec 2012 21:02:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1356123720!28156194!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25732 invoked from network); 21 Dec 2012 21:02:01 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 21:02:01 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLL1uXI024630
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:01:57 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLL1tFl022934
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:01:55 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLL1t5M030942; Fri, 21 Dec 2012 15:01:55 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:01:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9FF651C032B; Fri, 21 Dec 2012 16:01:53 -0500 (EST)
Date: Fri, 21 Dec 2012 16:01:53 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20121221210153.GA32115@phenom.dumpdata.com>
References: <20121212171523.332a0a89@mantra.us.oracle.com>
	<20121212174312.68146c02@mantra.us.oracle.com>
	<50C9BF1E02000078000B00FC@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131216190.17523@kaball.uk.xensource.com>
	<50C9E88F02000078000B02A1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1212131409040.17523@kaball.uk.xensource.com>
	<20121218211613.GA5697@phenom.dumpdata.com>
	<alpine.DEB.2.02.1212191116450.17523@kaball.uk.xensource.com>
	<20121219153728.GG10062@phenom.dumpdata.com>
	<alpine.DEB.2.02.1212201145030.17523@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1212201145030.17523@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PVH]: Help: msi.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > > > > pci_enable_msix -> msix_capability_init -> msix_program_entries
> > > > > 
> > > > > Unfortunately msix_program_entries is called few lines after
> > > > > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI as
> > > > > a pirq.
> > > > > However after that is done, all the masking/unmask is done via irq_mask
> > > > > that we handle properly masking/unmasking the corresponding event
> > > > > channels.
> > > > > 
> > > > > 
> > > > > Possible solutions on top of my head:
> > > > 
> > > > There is also the potential to piggyback on Joerg's patches
> > > > that introduce a new x86_msi_ops: compose_msi_msg.
> > > > 
> > > > See here: https://lkml.org/lkml/2012/8/20/432
> > > > (I think there was also a more recent one posted at some point).
> > > 
> > > Given that dom0 should never write to the MSI-X table, introducing a new
> > 
> > How does this work with QEMU setting up MSI and MSI-X on behalf of
> > guests? Or is that actually handled by Xen hypervisor?
> 
> In the case of HVM guests, QEMU emulates the PCI config space and the
> table, so it is OK for the guest to write to it.
> 
> 
> > > msi_ops that replaces msix_program_entries (or at least the part of
> > > msix_program_entries that masks all the entried) is the only solution
> > > left.
> > 
> > so this one (__msix_mask_irq):
> > 
> >          mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
> >          if (flag)
> >                  mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
> >          writel(mask_bits, desc->mask_base + offset);
> > 
> 
> Yes, that's the one. One could argue that __msix_mask_irq should call
> mask_irq rather than writing to the table directly.

You mean 'irq_mask'? Not really - that is within the IOAPIC domain.

To be more generic it should then also encompass the other usages -
that is, the 'readl' and 'writel' users.

My understanding of why we have been fortunate enough to have this
working right now is that the hypercall we do beforehand writes the
'pirq' into the MSI-X BAR, and that is the value the Linux kernel
later reads back (via readl) and ends up re-writing.

The other thing we can do, to entirely bypass the msi.c writes, is to
have xen_initdom_setup_msi_irqs make desc->mask_base point to
somewhere safe. Meaning point to a page we allocate when
we set up the IRQs and fill with whatever we want
(which I guess would be the pirq values we just got).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:10:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm9qr-0000tg-Oz; Fri, 21 Dec 2012 21:09:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm9qq-0000ta-54
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 21:09:52 +0000
Received: from [85.158.143.99:8095] by server-2.bemta-4.messagelabs.com id
	CB/54-30861-F10D4D05; Fri, 21 Dec 2012 21:09:51 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1356124189!19358967!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzQ4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27924 invoked from network); 21 Dec 2012 21:09:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 21:09:50 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLL9k2d017716
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:09:46 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLL9jnQ011823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:09:45 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLL9ico004712; Fri, 21 Dec 2012 15:09:44 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:09:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B30A91C032B; Fri, 21 Dec 2012 16:09:43 -0500 (EST)
Date: Fri, 21 Dec 2012 16:09:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121221210943.GA32523@phenom.dumpdata.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
	<50D18D4E02000078000B15B3@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D18D4E02000078000B15B3@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 08:47:58AM +0000, Jan Beulich wrote:
> >>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> > Are you playing with Xen ?  so far,  Xen doesn't support PCIe device hot-plug 
> > feature yet.
> 
> When saying Xen, I assume you mean the pv-ops kernel instead? So

I am looking at the PCI AER code, which can do native hotplug. From
the Linux side it ought to work right -- but there is some scanning
of said new PCI device "before" the notifications are sent (which
we latch onto to do the hypercall). At least that is from a cursory
read of the code.

I presume that "before" the hypercall is sent, Xen would just
ignore the 0xcf8 reads, but after the hypercall it would populate
appropriately whatever it needs. However, it would never get to that
point because Linux couldn't read the device.

I might of course be completely off-base, hence asking whether
Intel has done any sort of testing on this.

> far I was under the impression that this worked even with the very
> old 2.6.18 tree (as much or as little as hotplug there worked in the
> native case). And given that there are no special requirements on
> the hypervisor to make this work, it's not even obvious to me what
> would be missing in the pv-ops kernel to make it work.
> 
> All that is of course with me not having any practical experience
> with hotplug, due to the lack of capable hardware...
> 
> Jan
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:10:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm9qr-0000tg-Oz; Fri, 21 Dec 2012 21:09:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm9qq-0000ta-54
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 21:09:52 +0000
Received: from [85.158.143.99:8095] by server-2.bemta-4.messagelabs.com id
	CB/54-30861-F10D4D05; Fri, 21 Dec 2012 21:09:51 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1356124189!19358967!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzQ4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27924 invoked from network); 21 Dec 2012 21:09:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 21:09:50 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLL9k2d017716
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:09:46 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLL9jnQ011823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:09:45 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLL9ico004712; Fri, 21 Dec 2012 15:09:44 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:09:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B30A91C032B; Fri, 21 Dec 2012 16:09:43 -0500 (EST)
Date: Fri, 21 Dec 2012 16:09:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121221210943.GA32523@phenom.dumpdata.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
	<50D18D4E02000078000B15B3@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D18D4E02000078000B15B3@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 08:47:58AM +0000, Jan Beulich wrote:
> >>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> > Are you playing with Xen ?  so far,  Xen doesn't support PCIe device hot-plug 
> > feature yet.
> 
> When saying Xen, I assume you mean the pv-ops kernel instead? So

I am looking at PCI AER, which can do native hotplug. From the
Linux side it ought to work right - but there is some scanning
of said new PCI device "before" the notifications are sent (the
notifications which we latch onto and make the hypercall from). At
least that is from a cursory read of the code.

I presume that "before" the hypercall is sent, Xen would just
ignore the 0xcf8 reads, but after the hypercall it would populate
appropriately whatever it needs. Linux, however, would never get to
that point because it couldn't read the device in the first place.
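For readers unfamiliar with the mechanism being discussed: the legacy PCI config cycle that Xen traps works by writing a BDF-encoded address dword to port 0xcf8 and then reading the data from 0xcfc. A minimal sketch of that encoding (illustrative only - this is not Xen or Linux code):

```python
# Sketch of the CONFIG_ADDRESS dword written to port 0xcf8 in the
# legacy PCI config mechanism. Xen traps these accesses from dom0.

def cf8_address(bus, dev, fn, reg):
    """Build the CONFIG_ADDRESS value for bus:dev.fn, register reg."""
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8
    return (0x80000000        # enable bit
            | bus << 16       # bus number, bits 23:16
            | dev << 11       # device number, bits 15:11
            | fn << 8         # function number, bits 10:8
            | (reg & 0xFC))   # register offset, dword-aligned

# Address used to read config space of device 00:01.2, register 0:
print(hex(cf8_address(0, 1, 2, 0)))  # 0x80000a00
```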

I might of course be completely off-base - hence asking whether
Intel has done any sort of testing on this.

> far I was under the impression that this worked even with the very
> old 2.6.18 tree (as much or as little as hotplug there worked in the
> native case). And given that there are no special requirements on
> the hypervisor to make this work, it's not even obvious to me what
> would be missing in the pv-ops kernel to make it work.
> 
> All that is of course with me not having any practical experience
> with hotplug, due to the lack of capable hardware...
> 
> Jan
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:14:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tm9uh-00013k-EN; Fri, 21 Dec 2012 21:13:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Tm9uf-00013d-OP
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 21:13:50 +0000
Received: from [85.158.138.51:15285] by server-11.bemta-3.messagelabs.com id
	37/93-13335-801D4D05; Fri, 21 Dec 2012 21:13:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1356124418!27430746!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzU4ODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10888 invoked from network); 21 Dec 2012 21:13:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Dec 2012 21:13:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLLDaAB020719
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:13:37 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLLDaic008412
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:13:36 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLLDZ2s015066; Fri, 21 Dec 2012 15:13:35 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:13:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B85751C032B; Fri, 21 Dec 2012 16:13:34 -0500 (EST)
Date: Fri, 21 Dec 2012 16:13:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20121221211334.GB32523@phenom.dumpdata.com>
References: <20121218224606.GA6918@phenom.dumpdata.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA490@SHSMSX101.ccr.corp.intel.com>
	<50D18D4E02000078000B15B3@nat28.tlf.novell.com>
	<B6C2EB9186482D47BD0C5A9A48345644033BA621@SHSMSX101.ccr.corp.intel.com>
	<50D1D0E8.7050203@citrix.com>
	<50D1E67F02000078000B1791@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50D1E67F02000078000B1791@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 19, 2012 at 03:08:31PM +0000, Jan Beulich wrote:
> >>> On 19.12.12 at 15:36, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > On 19/12/2012 09:14, Zhang, Xiantao wrote:
> >>> -----Original Message-----
> >>> From: Jan Beulich [mailto:JBeulich@suse.com]
> >>> Sent: Wednesday, December 19, 2012 4:48 PM
> >>> To: Zhang, Xiantao
> >>> Cc: Konrad Rzeszutek Wilk; xen-devel
> >>> Subject: Re: [Xen-devel] Xen 4.2 and PCI hotplug.
> >>>
> >>>>>> On 19.12.12 at 09:13, "Zhang, Xiantao" <xiantao.zhang@intel.com>
> >>> wrote:
> >>>> Are you playing with Xen ?  so far,  Xen doesn't support PCIe device
> >>>> hot-plug feature yet.
> >>> When saying Xen, I assume you mean the pv-ops kernel instead? So far I was
> >>> under the impression that this worked even with the very old 2.6.18 tree (as
> >>> much or as little as hotplug there worked in the native case). And given 
> > that
> >>> there are no special requirements on the hypervisor to make this work, it's
> >>> not even obvious to me what would be missing in the pv-ops kernel to make
> >>> it work. 
> >> Oh, my fault!  Perhaps we don't need to do anything for pv-ops kernel to 
> > support device hot-plug if native system has it supported.  Actually, we 
> > didn't do such testings before, since it is a native feature, not a 
> > Xen-specific one. 
> >> Xiantao
> > 
> > My current understanding is that on boot, Xen scans the PCI bus, then
> > dom0 rescans it later.  If a hotplug event gets serviced by dom0, does
> > there not need to be some hypercall informing Xen that a new device has
> > appeared?  I expect PCIPassthrough would not work correctly on a
> > hotplugged device which Xen is unaware of.
> 
> Sure - such a hypercall exists and is - from all I can tell - being made
> not only during the boot time bus scan, but also during hotplug
> processing. See drivers/xen/pci.c.

Right. What I am not sure about - and this is why I am asking Intel - is
whether that notifier gets called _after_ dom0 has interrogated the new
PCI device.

And if so, whether Xen (which traps the 0xcf8 accesses) would refuse to
let dom0 actually interrogate the new BDF because it hasn't been told
about that BDF's existence.

This is all speculation - and I might very well be completely wrong about
all of this - hence this email thread.

Testing would of course tell for sure, but I am not sure if I even have
such hardware.
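The ordering concern above can be sketched as a toy model: if Xen filters config-space reads to BDFs it has been told about, and dom0 scans the hotplugged device before issuing the "device add" hypercall, the scan sees all-ones. Purely illustrative - the class and method names here are invented, not the real Xen interface:

```python
# Toy model of the suspected hotplug ordering problem. Names are
# made up for illustration; this is not the actual hypercall API.

class ToyXen:
    def __init__(self):
        self.known_bdfs = set()

    def pci_device_add(self, bdf):
        # Stand-in for the hypercall that tells Xen a device exists.
        self.known_bdfs.add(bdf)

    def cfg_read(self, bdf):
        # Stand-in for a trapped 0xcf8/0xcfc config read: unknown
        # devices read as all-ones, just like an empty slot.
        return 0x8086 if bdf in self.known_bdfs else 0xFFFF

xen = ToyXen()
bdf = (0, 3, 0)                    # bus 0, device 3, function 0
print(hex(xen.cfg_read(bdf)))      # 0xffff - scan before the hypercall fails
xen.pci_device_add(bdf)
print(hex(xen.cfg_read(bdf)))      # 0x8086 - readable once Xen is notified
```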
> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:25:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmA5h-0001GT-KT; Fri, 21 Dec 2012 21:25:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1TmA5g-0001GL-3c; Fri, 21 Dec 2012 21:25:12 +0000
Received: from [85.158.139.83:6807] by server-13.bemta-5.messagelabs.com id
	AA/68-10716-7B3D4D05; Fri, 21 Dec 2012 21:25:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1356125109!23589953!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTMyMzE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24359 invoked from network); 21 Dec 2012 21:25:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 21:25:10 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLLOwqB010908
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:24:59 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLLOwHR023575
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:24:58 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLLOvJp022384; Fri, 21 Dec 2012 15:24:57 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:24:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 377B11C032B; Fri, 21 Dec 2012 16:24:56 -0500 (EST)
Date: Fri, 21 Dec 2012 16:24:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Fan, Huaxiang" <hufan@websense.com>
Message-ID: <20121221212456.GA521@phenom.dumpdata.com>
References: <E71FC5D6F96C3C4B93FC8FF942D924C682F44808@SBJEXCH1B.websense.com>
	<20121022135023.GU8912@reaktio.net>
	<E71FC5D6F96C3C4B93FC8FF942D924C682F45DE3@SBJEXCH1B.websense.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E71FC5D6F96C3C4B93FC8FF942D924C682F45DE3@SBJEXCH1B.websense.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] e820_host problems
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Oct 24, 2012 at 06:02:11AM +0000, Fan, Huaxiang wrote:
> Hi Pasi,
> 
> Thanks for your reply. I've tried the latest stable kernel 3.6.3. The situation is better, but still not perfect.
> When I assign 6144M to domu wcg, the output of 'xl list' only indicates 5110M allocated for domu wcg.
> When I log on to domu wcg, the total memory is *limited within 3G*. I suspect the e820 map is wrong.

It looks like there is a bug in libxl when assembling the e820 map.
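The symptom reported above is consistent with a map that loses the RAM displaced by the PCI hole: only ranges typed as RAM count toward usable memory, so if the assembled e820 never re-adds that RAM above 4 GiB, the guest sees far less than was assigned. A toy illustration (the entries and sizes here are invented, not the actual libxl output):

```python
# Toy e820 maps showing how a mis-assembled map caps guest memory.
# Entry format: (start, size, type); sizes are invented for illustration.

E820_RAM = 1       # usable RAM
E820_RESERVED = 2  # e.g. the PCI MMIO hole
GiB = 1 << 30

def usable_ram(e820):
    """Sum the sizes of all RAM-typed entries."""
    return sum(size for start, size, typ in e820 if typ == E820_RAM)

# Broken map: the RAM displaced by the hole is never re-added above 4 GiB.
broken = [(0, 3 * GiB, E820_RAM), (3 * GiB, 1 * GiB, E820_RESERVED)]
# Correct map: the remaining assigned memory reappears above 4 GiB.
fixed = broken + [(4 * GiB, 3 * GiB, E820_RAM)]

print(usable_ram(broken) // GiB)  # 3 - matches the "limited within 3G" symptom
print(usable_ram(fixed) // GiB)   # 6
```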

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:28:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmA8F-0001TP-QN; Fri, 21 Dec 2012 21:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TmA8E-0001TK-IQ
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 21:27:50 +0000
Received: from [85.158.143.99:3148] by server-1.bemta-4.messagelabs.com id
	24/D8-28401-554D4D05; Fri, 21 Dec 2012 21:27:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356125267!20943719!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzQ4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10045 invoked from network); 21 Dec 2012 21:27:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 21:27:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLLRjA4032341
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:27:46 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLLRjt8015843
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:27:45 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLLRisb023981; Fri, 21 Dec 2012 15:27:44 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:27:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 78B4D1C032B; Fri, 21 Dec 2012 16:27:43 -0500 (EST)
Date: Fri, 21 Dec 2012 16:27:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Wu, GabrielX" <gabrielx.wu@intel.com>
Message-ID: <20121221212743.GB521@phenom.dumpdata.com>
References: <E4558C0C96688748837EB1B05BEED75A0FD83B81@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E4558C0C96688748837EB1B05BEED75A0FD83B81@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Zhou, Chao" <chao.zhou@intel.com>, "Ren, Yongjie" <yongjie.ren@intel.com>,
	"Liu, SongtaoX" <songtaox.liu@intel.com>, "Liu,
	RongrongX" <rongrongx.liu@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] VMX status report. Xen:26127 & Dom0:3.6.5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Nov 12, 2012 at 02:16:44AM +0000, Wu, GabrielX wrote:
> Hi all,
> 
> This is the test report of xen-unstable tree, no new issue found and no issue fixed.
> 
> Version Info:
> =================================================================
> xen-changeset:   26127:bd78e5630a5b
> Dom0:          linux.git  3.6.5
> =================================================================
> 
> New issue(0)
> ==============
> 
> Fixed issue(0)
> ==============
> 
> Old issues(9)
> ==============
> 1. [ACPI] Dom0 can't resume from S3 sleep
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> 2. [XL]"xl vcpu-set" causes dom0 crash or panic

This should be fixed in v3.8.

>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> 3. [VT-D]fail to detach NIC from guest  
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> 4. Sometimes Xen panic on ia32pae Sandybridge when restore guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> 5. after detaching a VF from a guest, shutdown the guest is very slow
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> 6. Poor performance when do guest save/restore and migration with linux 3.x dom0
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784

And this one as well.
> 7. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

And this one too.
> 8. Dom0 cannot be shutdown before PCI device detachment from guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826

> 9. xl pci-list shows one PCI device (PF or VF) could be assigned to two different guests
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1834
> 
> Best Regards,
> Ronghui Wu(Gabriel)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 21:28:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 21:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmA8F-0001TP-QN; Fri, 21 Dec 2012 21:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TmA8E-0001TK-IQ
	for xen-devel@lists.xen.org; Fri, 21 Dec 2012 21:27:50 +0000
Received: from [85.158.143.99:3148] by server-1.bemta-4.messagelabs.com id
	24/D8-28401-554D4D05; Fri, 21 Dec 2012 21:27:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356125267!20943719!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzQ4MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10045 invoked from network); 21 Dec 2012 21:27:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 21:27:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBLLRjA4032341
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Dec 2012 21:27:46 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBLLRjt8015843
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Dec 2012 21:27:45 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBLLRisb023981; Fri, 21 Dec 2012 15:27:44 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 13:27:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 78B4D1C032B; Fri, 21 Dec 2012 16:27:43 -0500 (EST)
Date: Fri, 21 Dec 2012 16:27:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Wu, GabrielX" <gabrielx.wu@intel.com>
Message-ID: <20121221212743.GB521@phenom.dumpdata.com>
References: <E4558C0C96688748837EB1B05BEED75A0FD83B81@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E4558C0C96688748837EB1B05BEED75A0FD83B81@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Zhou, Chao" <chao.zhou@intel.com>, "Ren, Yongjie" <yongjie.ren@intel.com>,
	"Liu, SongtaoX" <songtaox.liu@intel.com>, "Liu,
	RongrongX" <rongrongx.liu@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] VMX status report. Xen:26127 & Dom0:3.6.5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Nov 12, 2012 at 02:16:44AM +0000, Wu, GabrielX wrote:
> Hi all,
> 
> This is the test report for the xen-unstable tree; no new issues were found and no issues were fixed.
> 
> Version Info:
> =================================================================
> xen-changeset:   26127:bd78e5630a5b
> Dom0:          linux.git  3.6.5
> =================================================================
> 
> New issue(0)
> ==============
> 
> Fixed issue(0)
> ==============
> 
> Old issues(9)
> ==============
> 1. [ACPI] Dom0 can't resume from S3 sleep
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> 2. [XL]"xl vcpu-set" causes dom0 crash or panic

This should be fixed in v3.8.

>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> 3. [VT-D]fail to detach NIC from guest  
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> 4. Sometimes Xen panics on ia32pae SandyBridge when restoring a guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> 5. After detaching a VF from a guest, shutting down the guest is very slow
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> 6. Poor performance when doing guest save/restore and migration with a Linux 3.x dom0
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784

And this one as well.
> 7. 'xl vcpu-set' can't decrease the vCPU number of an HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

And this one too.
> 8. Dom0 cannot be shut down before PCI device detachment from the guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826

> 9. xl pci-list shows one PCI device (PF or VF) could be assigned to two different guests
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1834
> 
> Best Regards,
> Ronghui Wu (Gabriel)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 21 23:11:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 23:11:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmBju-0002Ly-Ml; Fri, 21 Dec 2012 23:10:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TmBjs-0002Lt-K2
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 23:10:48 +0000
Received: from [85.158.143.35:20944] by server-1.bemta-4.messagelabs.com id
	17/02-28401-77CE4D05; Fri, 21 Dec 2012 23:10:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1356131443!5218933!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31895 invoked from network); 21 Dec 2012 23:10:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 23:10:43 -0000
X-IronPort-AV: E=Sophos;i="4.84,332,1355097600"; 
   d="scan'208";a="313372"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 23:10:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 21 Dec 2012 23:10:42 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TmBjm-0002tj-6L;
	Fri, 21 Dec 2012 23:10:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TmBjl-0001Wi-Lb;
	Fri, 21 Dec 2012 23:10:41 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14806-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Dec 2012 23:10:41 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14806: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5174903107291566093=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5174903107291566093==
Content-Type: text/plain

flight 14806 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14806/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 14805

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14805
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14805

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  6f5c96855a9e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   26319:c4114a042410
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Dec 21 17:05:38 2012 +0000
    
    tools/tests: Restrict some tests to x86 only
    
    MCE injection and x86_emulator are clearly x86 specific.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   26318:6f5c96855a9e
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Dec 20 11:00:32 2012 +0100
    
    x86: also print CRn register values upon double fault
    
    Do so by simply re-using _show_registers().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
========================================
commit 6a0cf3786f1964fdf5a17f88f26cb499f4e89c81
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Dec 6 12:35:58 2012 +0000

    qemu-stubdom: prevent useless medium change
    
    qemu-stubdom was stripping the prefix from the "params" xenstore
    key in xenstore_parse_domain_config, which was then saved stripped in
    a variable. In xenstore_process_event we compare the "param" from
    xenstore (not stripped) with the stripped "param" saved in the
    variable, which leads to a medium change (even if there isn't any),
    since we are comparing something like aio:/path/to/file with
    /path/to/file. This only happens one time, since
    xenstore_parse_domain_config is the only place where we strip the
    prefix. The result of this bug is the following:
    
    xs_read_watch() -> /local/domain/0/backend/qdisk/19/5632/params hdc
    close(7)
    close blk: backend=/local/domain/0/backend/qdisk/19/5632
    node=/local/domain/19/device/vbd/5632
    (XEN) HVM18: HVM Loader
    (XEN) HVM18: Detected Xen v4.3-unstable
    (XEN) HVM18: Xenbus rings @0xfeffc000, event channel 4
    (XEN) HVM18: System requested ROMBIOS
    (XEN) HVM18: CPU speed is 2400 MHz
    (XEN) irq.c:270: Dom18 PCI link 0 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 0 routed to IRQ5
    (XEN) irq.c:270: Dom18 PCI link 1 changed 0 -> 10
    (XEN) HVM18: PCI-ISA link 1 routed to IRQ10
    (XEN) irq.c:270: Dom18 PCI link 2 changed 0 -> 11
    (XEN) HVM18: PCI-ISA link 2 routed to IRQ11
    (XEN) irq.c:270: Dom18 PCI link 3 changed 0 -> 5
    (XEN) HVM18: PCI-ISA link 3 routed to IRQ5
    (XEN) HVM18: pci dev 01:3 INTA->IRQ10
    (XEN) HVM18: pci dev 03:0 INTA->IRQ5
    (XEN) HVM18: pci dev 04:0 INTA->IRQ5
    (XEN) HVM18: pci dev 02:0 bar 10 size lx: 02000000
    (XEN) HVM18: pci dev 03:0 bar 14 size lx: 01000000
    (XEN) HVM18: pci dev 02:0 bar 14 size lx: 00001000
    (XEN) HVM18: pci dev 03:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 10 size lx: 00000100
    (XEN) HVM18: pci dev 04:0 bar 14 size lx: 00000100
    (XEN) HVM18: pci dev 01:1 bar 20 size lx: 00000010
    (XEN) HVM18: Multiprocessor initialisation:
    (XEN) HVM18:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
    (XEN) HVM18: Testing HVM environment:
    (XEN) HVM18:  - REP INSB across page boundaries ... passed
    (XEN) HVM18:  - GS base MSRs and SWAPGS ... passed
    (XEN) HVM18: Passed 2 of 2 tests
    (XEN) HVM18: Writing SMBIOS tables ...
    (XEN) HVM18: Loading ROMBIOS ...
    (XEN) HVM18: 9660 bytes of ROMBIOS high-memory extensions:
    (XEN) HVM18:   Relocating to 0xfc001000-0xfc0035bc ... done
    (XEN) HVM18: Creating MP tables ...
    (XEN) HVM18: Loading Cirrus VGABIOS ...
    (XEN) HVM18: Loading PCI Option ROM ...
    (XEN) HVM18:  - Manufacturer: http://ipxe.org
    (XEN) HVM18:  - Product name: iPXE
    (XEN) HVM18: Option ROMs:
    (XEN) HVM18:  c0000-c8fff: VGA BIOS
    (XEN) HVM18:  c9000-d8fff: Etherboot ROM
    (XEN) HVM18: Loading ACPI ...
    (XEN) HVM18: vm86 TSS at fc00f680
    (XEN) HVM18: BIOS map:
    (XEN) HVM18:  f0000-fffff: Main BIOS
    (XEN) HVM18: E820 table:
    (XEN) HVM18:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
    (XEN) HVM18:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
    (XEN) HVM18:  HOLE: 00000000:000a0000 - 00000000:000e0000
    (XEN) HVM18:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
    (XEN) HVM18:  [03]: 00000000:00100000 - 00000000:3f800000: RAM
    (XEN) HVM18:  HOLE: 00000000:3f800000 - 00000000:fc000000
    (XEN) HVM18:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
    (XEN) HVM18: Invoking ROMBIOS ...
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) stdvga.c:147:d18 entering stdvga and caching modes
    (XEN) HVM18: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
    (XEN) HVM18: Bochs BIOS - build: 06/23/99
    (XEN) HVM18: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
    (XEN) HVM18: Options: apmbios pcibios eltorito PMM
    (XEN) HVM18:
    (XEN) HVM18: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
    (XEN) HVM18: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
    (XEN) HVM18: IDE time out
    (XEN) HVM18: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
    (XEN) HVM18: IDE time out
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: Press F12 for boot menu.
    (XEN) HVM18:
    (XEN) HVM18: Booting from CD-Rom...
    (XEN) HVM18: ata_is_ready returned 1
    (XEN) HVM18: CDROM boot failure code : 0003
    (XEN) HVM18: Boot from CD-Rom failed: could not read the boot disk
    (XEN) HVM18:
    (XEN) HVM18:
    (XEN) HVM18: No bootable device.
    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just when the
    guest is booting, which leads to the guest not being able to boot
    because the BIOS is not able to access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


--===============5174903107291566093==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5174903107291566093==--

    (XEN) HVM18: Powering off in 30 seconds.
    ******************* BLKFRONT for /local/domain/19/device/vbd/5632 **********
    
    backend at /local/domain/0/backend/qdisk/19/5632
    Failed to read
    /local/domain/0/backend/qdisk/19/5632/feature-flush-cache.
    284420 sectors of 512 bytes
    **************************
    blk_open(/local/domain/19/device/vbd/5632) -> 7
    
    As seen in this trace, the medium change happens just as the
    guest is booting, which leaves the guest unable to boot because
    the BIOS cannot access the device.
    
    This is a regression from Xen 4.1, which is able to boot from "file:/"
    based backends when using stubdomains.
    
    [ By inspection, this patch does not change the flow for the
      non-stubdom case. -iwj]
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


--===============5174903107291566093==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5174903107291566093==--

From xen-devel-bounces@lists.xen.org Fri Dec 21 23:32:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Dec 2012 23:32:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmC4S-0002cX-TZ; Fri, 21 Dec 2012 23:32:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TmC4R-0002cS-03
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 23:32:03 +0000
Received: from [85.158.137.99:5215] by server-5.bemta-3.messagelabs.com id
	9C/F4-15136-071F4D05; Fri, 21 Dec 2012 23:32:00 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1356132718!15119519!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE1MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27945 invoked from network); 21 Dec 2012 23:32:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Dec 2012 23:32:00 -0000
X-IronPort-AV: E=Sophos;i="4.84,332,1355097600"; 
   d="scan'208";a="1624432"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Dec 2012 23:31:54 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Fri, 21 Dec 2012
	18:31:54 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 21 Dec 2012 18:31:58 -0500
Thread-Topic: [Xen-devel]  [PATCH v4 00/04] HVM firmware passthrough
Thread-Index: Ac3fs8PBGQnfs9F+QweJ6zZffg11PQAHwcXw
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B65DA@FTLPMAILBOX02.citrite.net>
References: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B645F@FTLPMAILBOX02.citrite.net>
	<20121221194533.GE30562@phenom.dumpdata.com>
In-Reply-To: <20121221194533.GE30562@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, December 21, 2012 2:46 PM
> To: Ross Philipson
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
> 
> On Thu, Dec 20, 2012 at 01:55:10PM -0500, Ross Philipson wrote:
> > This patch series introduces support for loading external blocks of
> > firmware into a guest. These blocks can currently contain SMBIOS
> > and/or ACPI firmware information that is used by HVMLOADER to modify a
> > guest's virtual firmware at startup. These modules are only used by
> > HVMLOADER and are effectively discarded after HVMLOADER has completed.
> >
> > The domain building code in libxenguest is passed these firmware
> > blocks in the xc_hvm_build_args structure and loads them into the new
> > guest, returning the load address. The loading is done in what will
> > become the guest's low RAM area, just behind the load location for
> > HVMLOADER. After their use by HVMLOADER they are effectively
> > discarded. It is the caller's job to write the base address and length
> > values into xenstore, using the paths defined in the new hvm_defs.h
> > header, so HVMLOADER can locate the blocks.
> >
> 
> Are there patches to plug this in the 'xl'?
> 

So far there are only patches to expose it at the xc layer. Nothing else
seems to use the xc_hvm_build() call (only xc_hvm_build_target_mem()).
Since the use of this feature seems dependent on a user's particular
needs, I am not sure how it could generically be built into xl. Any
suggestions are welcome though and I could post subsequent patches.
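As a rough model of the placement scheme the series describes (firmware blobs laid page-aligned into low guest RAM behind HVMLOADER, with each load address returned to the caller), a minimal sketch might look like this; the names, the base address, and the alignment policy here are illustrative assumptions, not the actual libxenguest code:

```python
PAGE = 0x1000  # x86 page size


def place_modules(modules, base):
    """Model of the domain builder laying firmware blobs into low guest
    RAM: each (name, length) pair is placed at the next page-aligned
    address, and its load address is recorded for the caller."""
    placed = {}
    addr = base
    for name, length in modules:
        placed[name] = addr
        addr += -(-length // PAGE) * PAGE  # round length up to a page
    return placed


# The caller would then record each address/length pair in xenstore,
# under the paths defined in hvm_defs.h, so HVMLOADER can locate the
# blobs when the guest boots.
layout = place_modules([("smbios", 11), ("acpi", 9)], base=0x100000)
```

With the inputs above, `layout` maps "smbios" to 0x100000 and "acpi" to 0x101000, i.e. each blob occupies at least one whole page.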

Thanks
Ross

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 02:13:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 02:13:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmEZy-0007xw-5P; Sat, 22 Dec 2012 02:12:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1TmEZw-0007xp-3a
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 02:12:44 +0000
Received: from [85.158.143.99:23853] by server-2.bemta-4.messagelabs.com id
	8B/8A-30861-A1715D05; Sat, 22 Dec 2012 02:12:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1356142360!19376677!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzU4ODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16431 invoked from network); 22 Dec 2012 02:12:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Dec 2012 02:12:42 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBM2CR22022543
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 22 Dec 2012 02:12:28 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBM2CQjD016298
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 22 Dec 2012 02:12:27 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBM2CQvv023420; Fri, 21 Dec 2012 20:12:26 -0600
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Dec 2012 18:12:26 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3E25C1C032B; Fri, 21 Dec 2012 21:12:25 -0500 (EST)
Date: Fri, 21 Dec 2012 21:12:25 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ross Philipson <Ross.Philipson@citrix.com>
Message-ID: <20121222021225.GA3468@phenom.dumpdata.com>
References: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B645F@FTLPMAILBOX02.citrite.net>
	<20121221194533.GE30562@phenom.dumpdata.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B65DA@FTLPMAILBOX02.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B65DA@FTLPMAILBOX02.citrite.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 21, 2012 at 06:31:58PM -0500, Ross Philipson wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, December 21, 2012 2:46 PM
> > To: Ross Philipson
> > Cc: xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
> > 
> > On Thu, Dec 20, 2012 at 01:55:10PM -0500, Ross Philipson wrote:
> > > This patch series introduces support for loading external blocks of
> > > firmware into a guest. These blocks can currently contain SMBIOS
> > > and/or ACPI firmware information that is used by HVMLOADER to modify a
> > > guest's virtual firmware at startup. These modules are only used by
> > > HVMLOADER and are effectively discarded after HVMLOADER has completed.
> > >
> > > The domain building code in libxenguest is passed these firmware
> > > blocks in the xc_hvm_build_args structure and loads them into the new
> > > guest, returning the load address. The loading is done in what will
> > > become the guest's low RAM area, just behind the load location for
> > > HVMLOADER. After their use by HVMLOADER they are effectively
> > > discarded. It is the caller's job to write the base address and length
> > > values into xenstore, using the paths defined in the new hvm_defs.h
> > > header, so HVMLOADER can locate the blocks.
> > >
> > 
> > Are there patches to plug this in the 'xl'?
> > 
> 
> So far there are only patches to expose it at the xc layer. Nothing else
> seems to use the xc_hvm_build() call (only xc_hvm_build_target_mem()).
> Since the use of this feature seems dependent on a user's particular
> needs, I am not sure how it could generically be built into xl. Any
> suggestions are welcome though and I could post subsequent patches.

I was thinking something like this:

firmware="nvidia.bin"
acpi_dsdt="acpi.dsdt"

?
> 
> Thanks
> Ross

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 02:31:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 02:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmErh-0008BS-2Q; Sat, 22 Dec 2012 02:31:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chenxiaolong@cxl.epac.to>) id 1TmErf-0008BN-CV
	for xen-devel@lists.xen.org; Sat, 22 Dec 2012 02:31:03 +0000
Received: from [85.158.143.99:48478] by server-2.bemta-4.messagelabs.com id
	E2/DD-30861-66B15D05; Sat, 22 Dec 2012 02:31:02 +0000
X-Env-Sender: chenxiaolong@cxl.epac.to
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356143459!25274855!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6440 invoked from network); 22 Dec 2012 02:31:00 -0000
Received: from mail-ye0-f173.google.com (HELO mail-ye0-f173.google.com)
	(209.85.213.173)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 02:31:00 -0000
Received: by mail-ye0-f173.google.com with SMTP id l5so1058501yen.4
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 18:30:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=jFlRZmrkU3S8rutX6O5uhCOEsdYWC5wqpO4tk8DG4L8=;
	b=Yk8dkV5QACe3ZBnovWoMq3k63UCegE643GepyjxSRDd2EVcsnWnLlbvfdRojidFtzc
	VZsuUTknU2QhRtSJv/mpVDdV4EvbVrkfFRo2PWH8EReI9ePHy09DQ/BAhlur+xSrkmJd
	cWBNPILnpWHwQiC8mCRZF6fDx8aLsU2Pes77oBoHKCfep6dHDoUxv7UkMTAajhLqlCav
	/GyO/Sfw2TBimuhSGPH0WEz15uObVbml2j092bFsoLVvPAJqJTRXGnIZCc2f7rjKGYUm
	SAIHKF+3i+8CjFxxaohmFFQDYD/YSw6SCMq9ymTOr9GyMbhjFw1W6UcojPSRpAvI1orn
	Uc7g==
MIME-Version: 1.0
Received: by 10.236.127.83 with SMTP id c59mr14465379yhi.123.1356143459298;
	Fri, 21 Dec 2012 18:30:59 -0800 (PST)
Received: by 10.101.36.1 with HTTP; Fri, 21 Dec 2012 18:30:59 -0800 (PST)
X-Originating-IP: [76.190.150.115]
In-Reply-To: <20121221195948.GF30562@phenom.dumpdata.com>
References: <50CD7A57.5030107@cxl.epac.to>
	<20121221195948.GF30562@phenom.dumpdata.com>
Date: Fri, 21 Dec 2012 21:30:59 -0500
Message-ID: <CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
From: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Gm-Message-State: ALoCoQmb0Uu94/D3RnyKvCaakNeM19PPE5e4mrFgsq6mWfCW7oHCVV/oZ9n3r9jPlsNoUFwPmkO+
Cc: daniel.kiper@oracle.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4226319208321583876=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4226319208321583876==
Content-Type: multipart/alternative; boundary=20cf301af3e127ec7104d167c1a6

--20cf301af3e127ec7104d167c1a6
Content-Type: text/plain; charset=ISO-8859-1

Thanks for the reply! I'll respond inline.

On Fri, Dec 21, 2012 at 2:59 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Sun, Dec 16, 2012 at 02:37:59AM -0500, Xiao-Long Chen wrote:
> > Hi Xen developers,
> >
> > I have been having a problem where only one CPU core is being detected.
> > I'm using a Lenovo W520 with a UEFI firmware and an Intel Core i7
> > 2720qm (4 real cores, 4 virtual cores). When I boot, I see
> > "Dom0 has maximum 1 VCPUs", or something similar, scroll by.
>
> Ooh.

>
> > In addition, only 10 GB out of my 12 GB of memory is recognized and
> > ACPI is not working properly (CPU frequency scaling and battery info
> > are not reported).
> >
> > This problem has been reported before (for the same laptop too) here:
> > http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
> > unfortunately, the person who sent the message didn't reply with more
> > information.
> >
> > Here are the outputs of dmesg with and without Xen:
> >
> > Without Xen: http://paste.kde.org/626222/raw/
> > With Xen: http://paste.kde.org/626228/raw/
>
> Hm, this:
> [    0.000000] ACPI BIOS Bug: Error: A valid RSDP was not found
> (20120913/tbxfroot-219)
>
> is a problem. The workaround was mentioned on the mailing list to use
> the acpi_rsdp=0xbabfe000
>

I tried booting with this, but the kernel immediately crashed
(I think). I booted with 'acpi_rsdp=0xbabfe000' and without 'quiet',
and the system hung while loading the initramfs. I could not see any
response on the system and could not ssh in.
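For anyone trying the same workaround: the parameter belongs on the dom0 kernel line of the GRUB entry. A typical GRUB2 stanza for booting Xen would look like the following (the file names, versions and root device are placeholders, not the exact entry on this machine):

```
menuentry 'Xen / Linux' {
    multiboot /boot/xen.gz
    module /boot/vmlinuz-3.7.1 root=/dev/sda1 ro acpi_rsdp=0xbabfe000
    module /boot/initramfs-3.7.1.img
}
```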


>
> >
> > and some information from xl:
> >
> > xl vcpu-list: http://paste.kde.org/626234/raw/
> > xl info: http://paste.kde.org/626240/raw/
> > xl dmesg: http://paste.kde.org/626246/raw/
>
> And Xen has the same issue:
>
> (XEN) ACPI Error (tbxfroot-0218): A valid RSDP was not found [20070126]
>
> And it looks to be running in a legacy state - with no calls to EFI.
> >
> > This is my first time venturing into Xen territory, so please let me
> > know if there's any other information needed.
>
> Did you try to boot xen.efi by itself - without using the GRUB loader?
> There is a nice writeup of how to do this in docs/misc/efi.markdown.
>

Booting from xen.efi, I see "Dom0 has maximum 8 VCPUs" as expected, but
Linux is still seeing only one core. In addition, my keyboard and mouse
do not work (internal laptop PS/2). I was able to get the dmesg and
'xl dmesg' outputs from ssh though:

dmesg (no xen): http://paste.kde.org/629684/raw/
dmesg (xen.efi): http://paste.kde.org/629690/raw/
xl dmesg: http://paste.kde.org/629696/raw/

The system is incredibly sluggish, just as when booted with GRUB, but
that may just be due to only one core being available.


>
> Also CC-ing Daniel if he has some ideas.
>

Thanks a lot!

Xiao-Long Chen


> >
> > I'm not sure if it helps, but I'd also like to point out that I had a
> > similar problem with Linux back around July 20, 2011, when I got my
> > computer. I could work around the issue by booting with "noapic". I'm
> > not sure which kernel version fixed the issue as I booted with that
> > option for quite a while. The kernel version, at the time, was 3.0.
> >
> > I am glad to help in any way I can. I'm excited to get Xen working!
> > I'm comfortable with compiling the kernel or Xen if necessary.
> >
> > Thanks in advance!
> >
> > Xiao-Long Chen
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >
>

--20cf301af3e127ec7104d167c1a6--


--===============4226319208321583876==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4226319208321583876==--


From xen-devel-bounces@lists.xen.org Sat Dec 22 02:31:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 02:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmErh-0008BS-2Q; Sat, 22 Dec 2012 02:31:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chenxiaolong@cxl.epac.to>) id 1TmErf-0008BN-CV
	for xen-devel@lists.xen.org; Sat, 22 Dec 2012 02:31:03 +0000
Received: from [85.158.143.99:48478] by server-2.bemta-4.messagelabs.com id
	E2/DD-30861-66B15D05; Sat, 22 Dec 2012 02:31:02 +0000
X-Env-Sender: chenxiaolong@cxl.epac.to
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356143459!25274855!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6440 invoked from network); 22 Dec 2012 02:31:00 -0000
Received: from mail-ye0-f173.google.com (HELO mail-ye0-f173.google.com)
	(209.85.213.173)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 02:31:00 -0000
Received: by mail-ye0-f173.google.com with SMTP id l5so1058501yen.4
	for <xen-devel@lists.xen.org>; Fri, 21 Dec 2012 18:30:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=jFlRZmrkU3S8rutX6O5uhCOEsdYWC5wqpO4tk8DG4L8=;
	b=Yk8dkV5QACe3ZBnovWoMq3k63UCegE643GepyjxSRDd2EVcsnWnLlbvfdRojidFtzc
	VZsuUTknU2QhRtSJv/mpVDdV4EvbVrkfFRo2PWH8EReI9ePHy09DQ/BAhlur+xSrkmJd
	cWBNPILnpWHwQiC8mCRZF6fDx8aLsU2Pes77oBoHKCfep6dHDoUxv7UkMTAajhLqlCav
	/GyO/Sfw2TBimuhSGPH0WEz15uObVbml2j092bFsoLVvPAJqJTRXGnIZCc2f7rjKGYUm
	SAIHKF+3i+8CjFxxaohmFFQDYD/YSw6SCMq9ymTOr9GyMbhjFw1W6UcojPSRpAvI1orn
	Uc7g==
MIME-Version: 1.0
Received: by 10.236.127.83 with SMTP id c59mr14465379yhi.123.1356143459298;
	Fri, 21 Dec 2012 18:30:59 -0800 (PST)
Received: by 10.101.36.1 with HTTP; Fri, 21 Dec 2012 18:30:59 -0800 (PST)
X-Originating-IP: [76.190.150.115]
In-Reply-To: <20121221195948.GF30562@phenom.dumpdata.com>
References: <50CD7A57.5030107@cxl.epac.to>
	<20121221195948.GF30562@phenom.dumpdata.com>
Date: Fri, 21 Dec 2012 21:30:59 -0500
Message-ID: <CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
From: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Gm-Message-State: ALoCoQmb0Uu94/D3RnyKvCaakNeM19PPE5e4mrFgsq6mWfCW7oHCVV/oZ9n3r9jPlsNoUFwPmkO+
Cc: daniel.kiper@oracle.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4226319208321583876=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4226319208321583876==
Content-Type: multipart/alternative; boundary=20cf301af3e127ec7104d167c1a6

--20cf301af3e127ec7104d167c1a6
Content-Type: text/plain; charset=ISO-8859-1

Thanks for the reply! I'll respond inline.

On Fri, Dec 21, 2012 at 2:59 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Sun, Dec 16, 2012 at 02:37:59AM -0500, Xiao-Long Chen wrote:
> > Hi Xen developers,
> >
> > I have been having a problem where only one CPU core is being detected.
> > I'm using a Lenovo W520 with a UEFI firmware and an Intel Core i7
> > 2720qm (4 real cores, 4 virtual cores). When I boot, I see
> > "Dom0 has maximum 1 VCPUs", or something similar, scroll by.
>
> Ooh.

>
> > In addition, only 10 GB out of my 12 GB of memory is recognized and
> > ACPI is not working properly (CPU frequency scaling and battery info
> > are not reported).
> >
> > This problem has been reported before (for the same laptop too) here:
> > http://lists.xen.org/archives/html/xen-devel/2012-06/msg00087.html but
> > unfortunately, the person who sent the message didn't reply with more
> > information.
> >
> > Here are the outputs of dmesg with and without Xen:
> >
> > Without Xen: http://paste.kde.org/626222/raw/
> > With Xen: http://paste.kde.org/626228/raw/
>
> Hm, this:
> [    0.000000] ACPI BIOS Bug: Error: A valid RSDP was not found
> (20120913/tbxfroot-219)
>
> is a problem. The workaround was mentioned on the mailing list to use
> the acpi_rsdp=0xbabfe000
>

I tried booting with this, but the kernel immediately crashed
(I think). I booted with 'acpi_rsdp=0xbabfe000' and without 'quiet' and
the system hangs while loading the initramfs. I could not see any sort
of response on the system and could not ssh in.
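[Editorial aside, not from the original thread: one way to find the address to pass via acpi_rsdp= is to read the EFI system table that Linux exposes when booted natively. A minimal sketch, assuming this kernel provides /sys/firmware/efi/systab; the sample line and address below are illustrative, not the real values from this machine:]

```shell
# On a native EFI boot, /sys/firmware/efi/systab contains lines such as
# "ACPI20=0x<addr>" giving the physical address of the ACPI 2.0 RSDP.
# We use a hard-coded sample line here; on a real system it would come from:
#   systab_line="$(grep '^ACPI20=' /sys/firmware/efi/systab)"
systab_line='ACPI20=0xbabfe014'    # illustrative sample only

# Strip the "ACPI20=" prefix to get the raw address.
rsdp_addr="${systab_line#ACPI20=}"

echo "boot with: acpi_rsdp=${rsdp_addr}"
```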


>
> >
> > and some information from xl:
> >
> > xl vcpu-list: http://paste.kde.org/626234/raw/
> > xl info: http://paste.kde.org/626240/raw/
> > xl dmesg: http://paste.kde.org/626246/raw/
>
> And Xen has the same issue:
>
> (XEN) ACPI Error (tbxfroot-0218): A valid RSDP was not found [20070126]
>
> And it looks to be running in a legacy state - with no calls to EFI.
> >
> > This is my first time venturing into Xen territory, so please let me
> > know if there's any other information needed.
>
> Did you try to boot xen.efi by itself - without using the GRUB loader?
> There is a nice writeup of how to do this in docs/misc/efi.markdown.
>

Booting from xen.efi, I see "Dom0 has maximum 8 VCPUs" as expected, but
Linux is still seeing only one core. In addition, my keyboard and mouse
do not work (internal laptop PS/2). I was able to get the dmesg and
'xl dmesg' outputs from ssh though:

dmesg (no xen): http://paste.kde.org/629684/raw/
dmesg (xen.efi): http://paste.kde.org/629690/raw/
xl dmesg: http://paste.kde.org/629696/raw/

The system is just as incredibly sluggish as when booted with GRUB, but that
may just be due to one core being available.
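[Editorial aside, not from the original thread: docs/misc/efi.markdown describes the config file that xen.efi reads at boot, with a [global] section selecting a named section that supplies kernel= and ramdisk= lines. A sketch of what such a file might look like; the section name, file names, and options here are placeholders, not this reporter's actual configuration:]

```shell
# Write an illustrative xen.efi configuration file. Section and file
# names ("example", vmlinuz, initrd.img) are hypothetical placeholders.
cat > xen-example.cfg <<'EOF'
[global]
default=example

[example]
options=console=vga loglvl=all
kernel=vmlinuz root=/dev/sda2 ro
ramdisk=initrd.img
EOF

# Show which section xen.efi would select by default.
grep '^default=' xen-example.cfg
```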


>
> Also CC-ing Daniel if he has some ideas.
>

Thanks a lot!

Xiao-Long Chen


> >
> > I'm not sure if it helps, but I'd also like to point out that I had a
> > similar problem with Linux back around July 20, 2011, when I got my
> > computer. I could work around the issue by booting with "noapic". I'm
> > not sure which kernel version fixed the issue as I booted with that
> > option for quite a while. The kernel version, at the time, was 3.0.
> >
> > I am glad to help in any way I can. I'm excited to get Xen working!
> > I'm comfortable with compiling the kernel or xen if necessary.
> >
> > Thanks in advance!
> >
> > Xiao-Long Chen
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >
>

--20cf301af3e127ec7104d167c1a6--


--===============4226319208321583876==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4226319208321583876==--


From xen-devel-bounces@lists.xen.org Sat Dec 22 04:24:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 04:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmGcd-0000Qe-Q4; Sat, 22 Dec 2012 04:23:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TmGcb-0000QZ-P3
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 04:23:38 +0000
Received: from [85.158.143.35:44792] by server-3.bemta-4.messagelabs.com id
	AF/F7-18211-9C535D05; Sat, 22 Dec 2012 04:23:37 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356150216!16471221!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17012 invoked from network); 22 Dec 2012 04:23:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 04:23:36 -0000
X-IronPort-AV: E=Sophos;i="4.84,335,1355097600"; 
   d="scan'208";a="315463"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Dec 2012 04:23:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 22 Dec 2012 04:23:35 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TmGcZ-0004Qn-RY;
	Sat, 22 Dec 2012 04:23:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TmGcZ-0006wc-LS;
	Sat, 22 Dec 2012 04:23:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14807-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Dec 2012 04:23:35 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14807: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7666863622432817969=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7666863622432817969==
Content-Type: text/plain

flight 14807 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14807/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14805
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14805

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  6f5c96855a9e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=c4114a042410
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable c4114a042410
+ branch=xen-unstable
+ revision=c4114a042410
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r c4114a042410 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files


--===============7666863622432817969==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7666863622432817969==--

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7666863622432817969=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7666863622432817969==
Content-Type: text/plain

flight 14807 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14807/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14805
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14805

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  6f5c96855a9e

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=c4114a042410
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable c4114a042410
+ branch=xen-unstable
+ revision=c4114a042410
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/hg/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r c4114a042410 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files


--===============7666863622432817969==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7666863622432817969==--

From xen-devel-bounces@lists.xen.org Sat Dec 22 10:14:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 10:14:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmM5Q-0003A6-0p; Sat, 22 Dec 2012 10:13:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TmM5O-00039z-3x
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 10:13:42 +0000
Received: from [85.158.143.99:51052] by server-3.bemta-4.messagelabs.com id
	AA/87-18211-5D785D05; Sat, 22 Dec 2012 10:13:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1356171220!29571361!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzMzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4505 invoked from network); 22 Dec 2012 10:13:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 10:13:40 -0000
X-IronPort-AV: E=Sophos;i="4.84,336,1355097600"; 
   d="scan'208";a="316748"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Dec 2012 10:13:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 22 Dec 2012 10:13:39 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TmM5L-0006Aj-P7;
	Sat, 22 Dec 2012 10:13:39 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TmM5L-00046n-HN;
	Sat, 22 Dec 2012 10:13:39 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-14810-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Dec 2012 10:13:39 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14810: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14810 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14810/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14807
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14807

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  c4114a042410

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 13:01:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 13:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmOgj-0004TU-0m; Sat, 22 Dec 2012 13:00:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TmOgh-0004TP-Tt
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 13:00:24 +0000
Received: from [85.158.143.35:10531] by server-1.bemta-4.messagelabs.com id
	F7/40-28401-7EEA5D05; Sat, 22 Dec 2012 13:00:23 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1356181213!5052430!1
X-Originating-IP: [220.181.15.25]
X-SpamReason: No, hits=0.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjI1ID0+IDg4NzM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjI1ID0+IDg4NzM=\n,HTML_20_30,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15367 invoked from network); 22 Dec 2012 13:00:18 -0000
Received: from m15-25.126.com (HELO m15-25.126.com) (220.181.15.25)
	by server-9.tower-21.messagelabs.com with SMTP;
	22 Dec 2012 13:00:18 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=+Y1cs038tYCzhYaz+Vt3tIiRmDt+gg1NZMu5
	Op7TEdA=; b=hebkIKOm/F5KFlZ9Wa9vHZfI+x/+HaeUgxfgys6Cjrp1JI5/GnbN
	ttvaBBG63g9F3f5zg+RoQ1lDJJRaPgz2hTr74cUY9VOMamMSK6qWP4oTfrbfQ+Jv
	/1x1+xONMvkYvTyQDvHbpn5XABnVHeL4JugjojvxnAe9akfwl0dOVW8=
Received: from hxkhust$126.com ( [202.114.0.254] ) by ajax-webmail-wmsvr25
	(Coremail) ; Sat, 22 Dec 2012 21:00:10 +0800 (CST)
X-Originating-IP: [202.114.0.254]
Date: Sat, 22 Dec 2012 21:00:10 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: NBF5RGZvb3Rlcl9odG09MTQ2NTo4MQ==
MIME-Version: 1.0
Message-ID: <2d703237.1651d.13bc2b306fc.Coremail.hxkhust@126.com>
X-CM-TRANSID: GcqowGAJskPcrtVQsM0VAA--.36982W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitACNBUX9kpIf+QABsj
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] how to run a qcow format para-virtualized machine with
 blktap modules used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7872355589879657866=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7872355589879657866==
Content-Type: multipart/alternative; 
	boundary="----=_Part_348207_456150542.1356181210875"

------=_Part_348207_456150542.1356181210875
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi guys,
First of all, I'm working on Xen 4.1 and Xen 3.4, and I need help that applies to both versions.
As I understand it, para-virtualized machines use blktap for their disk reads and writes, while fully-virtualized machines use QEMU. I have seen that there are block-qcow.c and block-qcow2.c in blktap/drivers/, and I would like to use them for my VM's disk I/O. My plan is to create a raw-format image as the backing file and then create qcow-format images based on that raw image. I understand this requires para-virtualized machines.
Here is what I have tried:
I created a para-virtualized machine with a config file that specifies vmlinuz and initrd. When the VM's OS installation finished, I switched to the pygrub bootloader in place of vmlinuz and initrd. Then I created a qcow-format image based on the image just created, but the VM could not be started normally.
So I failed.
Could anyone help me create a para-virtualized machine that uses a qcow-format image (based on a backing file) as its virtual disk?
I need your help, and the deadline is close; in fact, tomorrow is my deadline.
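The setup being described can be sketched roughly like this; note that the image paths, the xm-style config syntax, and the "tap:qcow:" disk prefix are illustrative assumptions, and details differ between Xen 3.4 and 4.1:

```shell
# Create a raw backing image (the guest OS is installed into this first),
# then a qcow overlay that records only the differences from the backing file.
# All paths are illustrative.
qemu-img create -f raw  /var/lib/xen/images/base.img 10G
qemu-img create -f qcow -b /var/lib/xen/images/base.img \
        /var/lib/xen/images/guest.qcow

# A minimal PV guest config (xm syntax) booting the overlay via pygrub:
cat > /etc/xen/guest.cfg <<'EOF'
name       = "guest"
memory     = 512
bootloader = "/usr/bin/pygrub"
disk       = [ "tap:qcow:/var/lib/xen/images/guest.qcow,xvda,w" ]
vif        = [ "bridge=xenbr0" ]
EOF

xm create /etc/xen/guest.cfg
```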







------=_Part_348207_456150542.1356181210875--



--===============7872355589879657866==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7872355589879657866==--



From xen-devel-bounces@lists.xen.org Sat Dec 22 14:17:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 14:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmPt5-0005Jj-Gu; Sat, 22 Dec 2012 14:17:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liuw@liuw.name>) id 1TmPt3-0005Je-Kn
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 14:17:13 +0000
Received: from [85.158.143.35:29137] by server-3.bemta-4.messagelabs.com id
	10/88-18211-8E0C5D05; Sat, 22 Dec 2012 14:17:12 +0000
X-Env-Sender: liuw@liuw.name
X-Msg-Ref: server-2.tower-21.messagelabs.com!1356185832!5141296!1
X-Originating-IP: [209.85.212.180]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25198 invoked from network); 22 Dec 2012 14:17:12 -0000
Received: from mail-wi0-f180.google.com (HELO mail-wi0-f180.google.com)
	(209.85.212.180)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 14:17:12 -0000
Received: by mail-wi0-f180.google.com with SMTP id hj13so3319808wib.7
	for <xen-devel@lists.xensource.com>;
	Sat, 22 Dec 2012 06:17:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-gm-message-state;
	bh=n2dHCqBxStLLahCs+kQki8uvPCitqGcWLK6JCDrGKP0=;
	b=T8EPUMUKJRipNhrtd2XBAWJUylP1knu/8/lkgOhMylCTsMgC/HzuRkQexFPiXaivJ7
	rF+2qXsurFf1T1+31BIquZhsvPejfWeHNtRGY4xco18LMHnk4QmLzHPgGew1uAARLVJA
	eo2FVsdaXZmtxpf1pID5wpspgkPndiaBIOOPOzvBGSSCotEFeuxKUFDnvSE+0DTR0ljQ
	Tde3Ty65JZfr0NUeYfDEldmpGvE2d/G76u4+kb6xZyl4/DFew6TVrGRKE9PZ6PAPXsMg
	6XO3JTn9ShPHVWsJVBrWKbVdouY0w8kKN8dt1yuDZJKEbmQ1KsGw9OGn4/MHXW0uVQMd
	/z/Q==
Received: by 10.194.179.34 with SMTP id dd2mr28686297wjc.1.1356185831978; Sat,
	22 Dec 2012 06:17:11 -0800 (PST)
MIME-Version: 1.0
Received: by 10.180.146.103 with HTTP; Sat, 22 Dec 2012 06:16:41 -0800 (PST)
In-Reply-To: <2d703237.1651d.13bc2b306fc.Coremail.hxkhust@126.com>
References: <2d703237.1651d.13bc2b306fc.Coremail.hxkhust@126.com>
From: Wei Liu <liuw@liuw.name>
Date: Sat, 22 Dec 2012 14:16:41 +0000
Message-ID: <CAOsiSVU-4r=rWiHMthtki_Dt54L4MxE8M6NowJnhuuaQkhoNGg@mail.gmail.com>
To: hxkhust <hxkhust@126.com>
X-Gm-Message-State: ALoCoQnOR6V3JX1CLFaraK1SpIvSFHAB7Um3tV8WxeBL0bzOogdT3gTL+rQLKGtUCBz86vz2uUWm
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] how to run a qcow format para-virtualized machine
 with blktap modules used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8696655136378654423=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8696655136378654423==
Content-Type: multipart/alternative; boundary=089e0141a002c3b68c04d1719ee6

--089e0141a002c3b68c04d1719ee6
Content-Type: text/plain; charset=UTF-8

On Sat, Dec 22, 2012 at 1:00 PM, hxkhust <hxkhust@126.com> wrote:

>
> who could help me to create a para-virtualized machine which take a qcow
> format image as its virtual disk (and this qcow image is based on a backing
> file )?
> I need your help and the deadline is coming,exactly tomorrow is my
> deadline.
>
>
IIRC the toolstack will fall back to using QEMU if there is no blktap module
in the Dom0 kernel.


Wei.
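A quick way to check which backend the toolstack will actually use is to look for a blktap module in Dom0 before starting the guest. The module names below are assumptions and vary by kernel:

```shell
# Check whether a blktap kernel module is loaded in Dom0
# ('blktap' / 'blktap2' are the common names; treat these as assumptions).
lsmod | grep -i blktap || echo "no blktap module loaded"

# Try loading it; if the module is unavailable, expect the toolstack
# to fall back to QEMU for qcow images on PV guests.
modprobe blktap 2>/dev/null || echo "blktap module not available"
```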

--089e0141a002c3b68c04d1719ee6--


--===============8696655136378654423==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8696655136378654423==--



From xen-devel-bounces@lists.xen.org Sat Dec 22 14:47:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 14:47:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmQLX-0005a6-5K; Sat, 22 Dec 2012 14:46:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rick.jones2@hp.com>) id 1Tm7Po-0006Ep-36
	for xen-devel@lists.xensource.com; Fri, 21 Dec 2012 18:33:48 +0000
Received: from [85.158.139.211:2093] by server-9.bemta-5.messagelabs.com id
	3D/76-10690-B8BA4D05; Fri, 21 Dec 2012 18:33:47 +0000
X-Env-Sender: rick.jones2@hp.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1356114825!20073205!1
X-Originating-IP: [15.201.24.19]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUuMjAxLjI0LjE5ID0+IDExNjE4NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26366 invoked from network); 21 Dec 2012 18:33:47 -0000
Received: from g4t0016.houston.hp.com (HELO g4t0016.houston.hp.com)
	(15.201.24.19)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Dec 2012 18:33:47 -0000
Received: from g4t0018.houston.hp.com (g4t0018.houston.hp.com [16.234.32.27])
	by g4t0016.houston.hp.com (Postfix) with ESMTP id 7C99E14265;
	Fri, 21 Dec 2012 18:33:45 +0000 (UTC)
Received: from [16.103.148.51] (tardy.usa.hp.com [16.103.148.51])
	by g4t0018.houston.hp.com (Postfix) with ESMTP id B1A1F10134;
	Fri, 21 Dec 2012 18:33:44 +0000 (UTC)
Message-ID: <50D4AB87.8050601@hp.com>
Date: Fri, 21 Dec 2012 10:33:43 -0800
From: Rick Jones <rick.jones2@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: Sander Eikelenboom <linux@eikelenboom.it>
References: <1355838711-5473-1-git-send-email-ian.campbell@citrix.com>
	<1355843525.9380.18.camel@edumazet-glaptop>
	<1355844398.14620.254.camel@zakaz.uk.xensource.com>
	<55633610.20121219123427@eikelenboom.it>
	<1355933869.21834.13.camel@edumazet-glaptop>
	<1797374383.20121220135139@eikelenboom.it>
	<1356017968.21834.2859.camel@edumazet-glaptop>
	<1609010645.20121221122100@eikelenboom.it>
In-Reply-To: <1609010645.20121221122100@eikelenboom.it>
X-Mailman-Approved-At: Sat, 22 Dec 2012 14:46:38 +0000
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	annie li <annie.li@oracle.com>, Eric Dumazet <erdnetdev@gmail.com>
Subject: Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'm guessing that truesize checks matter more on the "inbound" path than 
the outbound path?  If that is indeed the case, then instead of, or in 
addition to, using the -s option to set the local (netperf side) socket 
buffer size, you should use the -S option to set the remote (netserver 
side) socket buffer size.
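In netperf terms that would look something like the following; the hostname is illustrative, and -s/-S are test-specific options, so they go after the "--" separator:

```shell
# TCP_STREAM sends local -> remote, so for the inbound (receive) path it is
# the remote netserver's buffer that matters, hence -S rather than only -s.
# Host and buffer sizes are illustrative.
netperf -H guest.example.com -t TCP_STREAM -- -s 128K -S 128K

# Or exercise the inbound path directly with TCP_MAERTS (remote sends to us):
netperf -H guest.example.com -t TCP_MAERTS -- -s 128K -S 128K
```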

happy benchmarking,

rick jones

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 15:48:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 15:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmRJI-0006Gk-PV; Sat, 22 Dec 2012 15:48:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jajcus@jajcus.net>) id 1TmRJH-0006Gf-30
	for xen-devel@lists.xen.org; Sat, 22 Dec 2012 15:48:23 +0000
Received: from [85.158.139.211:64798] by server-8.bemta-5.messagelabs.com id
	FC/66-15003-546D5D05; Sat, 22 Dec 2012 15:48:21 +0000
X-Env-Sender: jajcus@jajcus.net
X-Msg-Ref: server-10.tower-206.messagelabs.com!1356191300!18772715!1
X-Originating-IP: [84.205.176.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21511 invoked from network); 22 Dec 2012 15:48:20 -0000
Received: from tropek.jajcus.net (HELO tropek.jajcus.net) (84.205.176.49)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Dec 2012 15:48:20 -0000
Received: from localhost (lolek.nigdzie [10.253.0.124])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by tropek.jajcus.net (Postfix) with ESMTPSA id 748485002;
	Sat, 22 Dec 2012 16:48:19 +0100 (CET)
Date: Sat, 22 Dec 2012 16:48:19 +0100
From: Jacek Konieczny <jajcus@jajcus.net>
To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
Message-ID: <20121222154818.GC2417@lolek.nigdzie>
Mail-Followup-To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	daniel.kiper@oracle.com, xen-devel@lists.xen.org
References: <50CD7A57.5030107@cxl.epac.to>
	<20121221195948.GF30562@phenom.dumpdata.com>
	<CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: daniel.kiper@oracle.com, xen-devel@lists.xen.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On Fri, Dec 21, 2012 at 09:30:59PM -0500, Xiao-Long Chen wrote:
> > Hm, this:
> > [    0.000000] ACPI BIOS Bug: Error: A valid RSDP was not found
> > (20120913/tbxfroot-219)
> >
> > is a problem. The workaround was mentioned on the mailing list to use
> > the acpi_rsdp=0xbabfe000

> I tried booting with this, but the kernel immediately crashed
> (I think). I booted with 'acpi_rsdp=0xbabfe000' and without 'quiet' and
> the system hangs while loading the initramfs. I could not see any sort
> of response on the system and could not ssh in.

I posted that workaround, and it was not literally 'acpi_rsdp=0xbabfe000',
but 'acpi_rsdp=the-value-xen-reports' (0xbabfe000 was just the value for
one specific machine).

> > Did you try to boot xen.efi by itself - without using the GRUB loader?
> > There is a nice writeup of how to do this in docs/misc/efi.markdown.

GRUB2 may still be used, but only for chain-loading xen.efi. This
gives little gain over booting xen.efi directly from the firmware,
though.
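
For reference, a GRUB2 menu entry that chain-loads xen.efi could look
roughly like this (the filesystem UUID and EFI path are illustrative, not
taken from this thread):

```
menuentry 'Xen (chain-loaded xen.efi)' {
    insmod part_gpt
    insmod fat
    search --no-floppy --fs-uuid --set=root 1234-ABCD
    chainloader /EFI/xen/xen.efi
}
```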

> Booting from xen.efi, I see "Dom0 has maximum 8 VCPUs" as expected,

Now Xen can find the ACPI RSDP.

> but Linux is still seeing only one core.

That is because Xen doesn't pass all the information obtained via EFI to
the kernel.

> xl dmesg: http://paste.kde.org/629696/raw/

the important part:

> (XEN) ACPI: RSDP BABFE014, 0024 (r2 LENOVO)

Try booting xen.efi and passing 'acpi_rsdp=0xBABFE014' on the kernel
command line.
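
When booting via xen.efi, the dom0 kernel command line comes from the EFI
configuration file described in docs/misc/efi.markdown; a sketch (kernel and
ramdisk filenames here are illustrative):

```
[global]
default=xen

[xen]
options=console=vga
kernel=vmlinuz-3.7 root=/dev/sda1 acpi_rsdp=0xBABFE014
ramdisk=initrd-3.7
```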

This should fix other problems with the kernel too, as ACPI is important
for initializing many kernel subsystems.

Greets,
        Jacek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 15:50:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 15:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmRKu-0006LL-9Y; Sat, 22 Dec 2012 15:50:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1TmRKs-0006L8-DB
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 15:50:02 +0000
Received: from [193.109.254.147:3509] by server-5.bemta-14.messagelabs.com id
	F5/81-32031-9A6D5D05; Sat, 22 Dec 2012 15:50:01 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356191396!9270101!1
X-Originating-IP: [220.181.15.25]
X-SpamReason: No, hits=0.6 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjI1ID0+IDg4NzM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjI1ID0+IDg4NzM=\n,HTML_60_70,HTML_MESSAGE,
	MIME_BASE64_TEXT
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11400 invoked from network); 22 Dec 2012 15:49:59 -0000
Received: from m15-25.126.com (HELO m15-25.126.com) (220.181.15.25)
	by server-2.tower-27.messagelabs.com with SMTP;
	22 Dec 2012 15:49:59 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Cc:Subject:In-Reply-To:
	References:Content-Type:MIME-Version:Message-ID; bh=/zGkYfbHEo8a
	ZdxgKndGsQ4p7R3YhqeFsgCZe0VJ5/A=; b=LsEy5dGCvILKaLxvv7MOWPkf0evf
	DUKAmPet4ooXdB3VB05FmsSRoNAts6ZJxba6Gqn+nG8754c/k8i7+RslWycTcLez
	p/v56jtaGrH9GPJN23yBmnrfZ5151QyGJJgU+mp0rDRrZOe6B0he41nnfJtwHkpX
	0pdLleOXy26eJRc=
Received: from hxkhust$126.com ( [202.114.0.254] ) by ajax-webmail-wmsvr25
	(Coremail) ; Sat, 22 Dec 2012 23:49:53 +0800 (CST)
X-Originating-IP: [202.114.0.254]
Date: Sat, 22 Dec 2012 23:49:53 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: "Wei Liu" <liuw@liuw.name>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
In-Reply-To: <CAOsiSVU-4r=rWiHMthtki_Dt54L4MxE8M6NowJnhuuaQkhoNGg@mail.gmail.com>
References: <2d703237.1651d.13bc2b306fc.Coremail.hxkhust@126.com>
	<CAOsiSVU-4r=rWiHMthtki_Dt54L4MxE8M6NowJnhuuaQkhoNGg@mail.gmail.com>
X-CM-CTRLDATA: OiW/sGZvb3Rlcl9odG09MTQ1MDo4MQ==
MIME-Version: 1.0
Message-ID: <72668005.1734c.13bc34e676b.Coremail.hxkhust@126.com>
X-CM-TRANSID: GcqowGAJskOi1tVQUOEVAA--.37249W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitACNBUX9kpIf+QAEsm
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] how to run a qcow format para-virtualized machine
 with blktap modules used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1808482713202839998=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1808482713202839998==
Content-Type: multipart/alternative; 
	boundary="----=_Part_361825_1125234748.1356191393643"

------=_Part_361825_1125234748.1356191393643
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: base64

CnFlbXUgcHV6emxlZCBtZSAud2hhdCBJIHdhbnQgdG8gZG8gaXMgdG8gdXNlIGJsa3RhcCBtb2R1
bGVzLkkgbmVlZCBteSBxY293IGZvcm1hdCBWTSBydW5uaW5nIHdpdGggYmxrdGFwIG9mZmVyaW5n
IHN1cHBvcnQuCgoKCgoK1NogMjAxMi0xMi0yMiAyMjoxNjo0MaOsIldlaSBMaXUiIDxsaXV3QGxp
dXcubmFtZT4g0LS1wKO6CgpPbiBTYXQsIERlYyAyMiwgMjAxMiBhdCAxOjAwIFBNLCBoeGtodXN0
IDxoeGtodXN0QDEyNi5jb20+IHdyb3RlOgoKCgp3aG8gY291bGQgaGVscCBtZSB0byBjcmVhdGUg
YSBwYXJhLXZpcnR1YWxpemVkIG1hY2hpbmUgd2hpY2ggdGFrZSBhIHFjb3cgZm9ybWF0IGltYWdl
IGFzIGl0cyB2aXJ0dWFsIGRpc2sgKGFuZCB0aGlzIHFjb3cgaW1hZ2UgaXMgYmFzZWQgb24gYSBi
YWNraW5nIGZpbGUgKT8KSSBuZWVkIHlvdXIgaGVscCBhbmQgdGhlIGRlYWRsaW5lIGlzIGNvbWlu
ZyxleGFjdGx5IHRvbW9ycm93IGlzIG15IGRlYWRsaW5lLgoKCgoKSUlSQyB0aGUgdG9vbHN0YWNr
IHdpbGwgZmFsbCBiYWNrIHRvIHVzZSBRRU1VIGlmIHRoZXJlIGlzIG5vIGJsa3RhcCBtb2R1bGUg
aW4gdGhlIERvbTAga2VybmVsLgoKCgoKV2VpLiAK
------=_Part_361825_1125234748.1356191393643
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: base64

PGRpdiBzdHlsZT0ibGluZS1oZWlnaHQ6MS43O2NvbG9yOiMwMDAwMDA7Zm9udC1zaXplOjE0cHg7
Zm9udC1mYW1pbHk6YXJpYWwiPjxicj5xZW11IHB1enpsZWQgbWUgLndoYXQgSSB3YW50IHRvIGRv
IGlzIHRvIHVzZSBibGt0YXAgbW9kdWxlcy5JIG5lZWQgbXkgcWNvdyBmb3JtYXQgVk0gcnVubmlu
ZyB3aXRoIGJsa3RhcCBvZmZlcmluZyBzdXBwb3J0Ljxicj48YnI+PGJyPjxicj48ZGl2PjwvZGl2
PjxkaXYgaWQ9ImRpdk5ldGVhc2VNYWlsQ2FyZCI+PC9kaXY+PGJyPtTaIDIwMTItMTItMjIgMjI6
MTY6NDGjrCJXZWkmbmJzcDtMaXUiJm5ic3A7Jmx0O2xpdXdAbGl1dy5uYW1lJmd0OyDQtLXAo7o8
YnI+IDxibG9ja3F1b3RlIGlkPSJpc1JlcGx5Q29udGVudCIgc3R5bGU9IlBBRERJTkctTEVGVDog
MWV4OyBNQVJHSU46IDBweCAwcHggMHB4IDAuOGV4OyBCT1JERVItTEVGVDogI2NjYyAxcHggc29s
aWQiPjxkaXYgZGlyPSJsdHIiPk9uIFNhdCwgRGVjIDIyLCAyMDEyIGF0IDE6MDAgUE0sIGh4a2h1
c3QgPHNwYW4gZGlyPSJsdHIiPiZsdDs8YSBocmVmPSJtYWlsdG86aHhraHVzdEAxMjYuY29tIiB0
YXJnZXQ9Il9ibGFuayI+aHhraHVzdEAxMjYuY29tPC9hPiZndDs8L3NwYW4+IHdyb3RlOjxicj48
ZGl2IGNsYXNzPSJnbWFpbF9leHRyYSI+PGRpdiBjbGFzcz0iZ21haWxfcXVvdGUiPjxibG9ja3F1
b3RlIGNsYXNzPSJnbWFpbF9xdW90ZSIgc3R5bGU9Im1hcmdpbjowIDAgMCAuOGV4O2JvcmRlci1s
ZWZ0OjFweCAjY2NjIHNvbGlkO3BhZGRpbmctbGVmdDoxZXgiPgoKPGRpdiBzdHlsZT0ibGluZS1o
ZWlnaHQ6MS43O2ZvbnQtc2l6ZToxNHB4O2ZvbnQtZmFtaWx5OmFyaWFsIj48ZGl2IHN0eWxlPSJs
aW5lLWhlaWdodDoxLjc7Zm9udC1zaXplOjE0cHg7Zm9udC1mYW1pbHk6YXJpYWwiPjxicj48ZGl2
PndobyBjb3VsZCBoZWxwIG1lIHRvIGNyZWF0ZSBhIHBhcmEtdmlydHVhbGl6ZWQgbWFjaGluZSB3
aGljaCB0YWtlIGEgcWNvdyBmb3JtYXQgaW1hZ2UgYXMgaXRzIHZpcnR1YWwgZGlzayAoYW5kIHRo
aXMgcWNvdyBpbWFnZSBpcyBiYXNlZCBvbiBhIGJhY2tpbmcgZmlsZSApPzwvZGl2PgoKPGRpdj5J
IG5lZWQgeW91ciBoZWxwIGFuZCB0aGUgZGVhZGxpbmUgaXMgY29taW5nLGV4YWN0bHkgdG9tb3Jy
b3cgaXMgbXkgZGVhZGxpbmUuPC9kaXY+PGRpdj48YnI+PC9kaXY+PC9kaXY+PC9kaXY+PC9ibG9j
a3F1b3RlPjxkaXY+PGJyPjwvZGl2PjxkaXYgc3R5bGU9IiI+SUlSQyB0aGUgdG9vbHN0YWNrIHdp
bGwgZmFsbCBiYWNrIHRvIHVzZSBRRU1VIGlmIHRoZXJlIGlzIG5vIGJsa3RhcCBtb2R1bGUgaW4g
dGhlIERvbTAga2VybmVsLjwvZGl2PgoKPGRpdiBzdHlsZT0iIj48YnI+PC9kaXY+PGRpdiBzdHls
ZT0iIj48YnI+PC9kaXY+PGRpdiBzdHlsZT0iIj5XZWkuJm5ic3A7PGJyPjwvZGl2PjwvZGl2Pjwv
ZGl2PjwvZGl2Pgo8L2Jsb2NrcXVvdGU+PC9kaXY+PGJyPjxicj48c3BhbiB0aXRsZT0ibmV0ZWFz
ZWZvb3RlciI+PHNwYW4gaWQ9Im5ldGVhc2VfbWFpbF9mb290ZXIiPjwvc3Bhbj48L3NwYW4+
------=_Part_361825_1125234748.1356191393643--



--===============1808482713202839998==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1808482713202839998==--



From xen-devel-bounces@lists.xen.org Sat Dec 22 17:51:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 17:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmTDR-0007SC-Bm; Sat, 22 Dec 2012 17:50:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtux@126.com>)
	id 1TmTDP-0007S4-CY; Sat, 22 Dec 2012 17:50:27 +0000
Received: from [85.158.143.35:55336] by server-3.bemta-4.messagelabs.com id
	61/E1-18211-2E2F5D05; Sat, 22 Dec 2012 17:50:26 +0000
X-Env-Sender: gbtux@126.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1356198618!14717504!1
X-Originating-IP: [220.181.15.44]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ0ID0+IDE1NDA1\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ0ID0+IDE1NDA1\n,HTML_30_40,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31200 invoked from network); 22 Dec 2012 17:50:21 -0000
Received: from m15-44.126.com (HELO m15-44.126.com) (220.181.15.44)
	by server-8.tower-21.messagelabs.com with SMTP;
	22 Dec 2012 17:50:21 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Cc:Subject:Content-Type:
	MIME-Version:Message-ID; bh=EORfEO+nZnHfP/+AxZVlVPruaM5L1L6RbSyG
	tHwhd5c=; b=nNQ0DZhroC//2+tnIVqINhGKaGarZQDkRxUy04DwcsREXfBckNN+
	V0nT49TiLk+JCg9Tf23wzCnYnBztLtfbTiNoRpVFnrwngh/uWXRnTHT/yq/vzt6Y
	rr5wa1fjYM8x90AA0u2exWw7aT7qiLwxmjHZvA2HB6YeLj0Y0bwQ8SU=
Received: from gbtux$126.com ( [124.16.139.198] ) by ajax-webmail-wmsvr44
	(Coremail) ; Sun, 23 Dec 2012 01:50:16 +0800 (CST)
X-Originating-IP: [124.16.139.198]
Date: Sun, 23 Dec 2012 01:50:16 +0800 (CST)
From: gavin <gbtux@126.com>
To: xen-users@lists.xen.org
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: uyON3GZvb3Rlcl9odG09NTcyOjgx
MIME-Version: 1.0
Message-ID: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
X-CM-TRANSID: LMqowECpOkPZ8tVQkxIgAA--.9945W
X-CM-SenderInfo: pjew35a6rslhhfrp/1tbiGAONnEl1wbHvxgAAsv
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] How to use the vTPM backend driver in the pv-ops kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4552473553753376626=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4552473553753376626==
Content-Type: multipart/alternative; 
	boundary="----=_Part_1778_1751747937.1356198616038"

------=_Part_1778_1751747937.1356198616038
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

 Hi,

I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the config file of a pv-ops kernel such as 2.6.32.50, although this option exists in the config file of kernel 2.6.18.8. I also cannot find the vTPM backend driver (such as linux-2.6.18-xen.hg/drivers/xen/tpmback) in the pv-ops kernel.
So, how can I configure and use the vTPM backend driver in kernel 2.6.32?
Thank you for any advice.



Best Regards,
Gavin
------=_Part_1778_1751747937.1356198616038
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: 7bit

<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial">&nbsp;Hi,<br><br>I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the config file of a pv-ops kernel such as 2.6.32.50, although this option exists in the config file of kernel 2.6.18.8. I also cannot find the vTPM backend driver (such as linux-2.6.18-xen.hg/drivers/xen/tpmback) in the pv-ops kernel.<br>So, how can I configure and use the vTPM backend driver in kernel 2.6.32?<br>Thank you for any advice.<br><br><br><div>Best Regards,<div>Gavin</div></div></div><br><br><span title="neteasefooter"><span id="netease_mail_footer"></span></span>
------=_Part_1778_1751747937.1356198616038--



--===============4552473553753376626==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4552473553753376626==--



From xen-devel-bounces@lists.xen.org Sat Dec 22 18:54:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 18:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmUD7-0008C7-Ts; Sat, 22 Dec 2012 18:54:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chenxiaolong@cxl.epac.to>) id 1TmUD5-0008C2-R3
	for xen-devel@lists.xen.org; Sat, 22 Dec 2012 18:54:12 +0000
Received: from [85.158.138.51:27189] by server-13.bemta-3.messagelabs.com id
	72/6A-00465-2D106D05; Sat, 22 Dec 2012 18:54:10 +0000
X-Env-Sender: chenxiaolong@cxl.epac.to
X-Msg-Ref: server-4.tower-174.messagelabs.com!1356202448!30066434!1
X-Originating-IP: [209.85.160.179]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22149 invoked from network); 22 Dec 2012 18:54:09 -0000
Received: from mail-gh0-f179.google.com (HELO mail-gh0-f179.google.com)
	(209.85.160.179)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 18:54:09 -0000
Received: by mail-gh0-f179.google.com with SMTP id r14so489904ghr.10
	for <xen-devel@lists.xen.org>; Sat, 22 Dec 2012 10:54:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:content-type:x-gm-message-state;
	bh=Vbi2kMErCJWaAwOh+jDik8JeijnG9pymnhH5cRWTKO4=;
	b=DeEg17lUOucCA/xgyloCNGhKguD28T9IA3BbBaegzzkeUw0IRBTVnot8noxeMjt1nZ
	p0I2/CtTqWPRjZYVXYBUmvcJOllhYeFDKnBogqafFkafgPZ9wzfs+vfP4PDrkstbenQD
	OkDwq3W+Oc0l/PDoCaKDUogAeegbygbcMMQFCU7tlAxA0LBK5qZA1N2zSurUUIyWMEty
	sc7ohC+EK4HoTDo+btcGzgyn8l/oXl/wOF7p4A2Fq6XTangdln5NddWuKAjxd7jycBYR
	9PKVe1/sb4UjNXkqU2WVO7noEWl6c08KYYMgDbcuDR5nCh8Od63QKqTrOWNPJbelmklO
	U2+g==
MIME-Version: 1.0
Received: by 10.236.168.164 with SMTP id k24mr16341636yhl.27.1356202448146;
	Sat, 22 Dec 2012 10:54:08 -0800 (PST)
Received: by 10.101.36.1 with HTTP; Sat, 22 Dec 2012 10:54:07 -0800 (PST)
X-Originating-IP: [2001:470:8:c14:224:d7ff:fe24:7a9c]
In-Reply-To: <20121222154818.GC2417@lolek.nigdzie>
References: <50CD7A57.5030107@cxl.epac.to>
	<20121221195948.GF30562@phenom.dumpdata.com>
	<CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
	<20121222154818.GC2417@lolek.nigdzie>
Date: Sat, 22 Dec 2012 13:54:07 -0500
Message-ID: <CALnNU9PsKrYETpuW97hGCiGp2nthYbD-=NZjiXbOW5dJ268Jbg@mail.gmail.com>
From: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, daniel.kiper@oracle.com,
	xen-devel@lists.xen.org
X-Gm-Message-State: ALoCoQkrJOQicHDfSdglTE/o0heAMAMB+j4DxioB3pJvqL4s3T8W1aZS2od9n273ctqJQsjf2+Nb
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5615196009048596967=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5615196009048596967==
Content-Type: multipart/alternative; boundary=20cf3040edb02a4fc404d1757df0

--20cf3040edb02a4fc404d1757df0
Content-Type: text/plain; charset=ISO-8859-1

Thanks for the reply!

On Sat, Dec 22, 2012 at 10:48 AM, Jacek Konieczny <jajcus@jajcus.net> wrote:

> Hi,
>
> On Fri, Dec 21, 2012 at 09:30:59PM -0500, Xiao-Long Chen wrote:
> > > Hm, this:
> > > [    0.000000] ACPI BIOS Bug: Error: A valid RSDP was not found
> > > (20120913/tbxfroot-219)
> > >
> > > is a problem. The workaround was mentioned on the mailing list to use
> > > the acpi_rsdp=0xbabfe000
>
> > I tried booting with this, but the kernel immediately crashed
> > (I think). I booted with 'acpi_rsdp=0xbabfe000' and without 'quiet' and
> > the system hangs while loading the initramfs. I could not see any sort
> > of response on the system and could not ssh in.
>
> I posted the workaround, and it was not 'acpi_rsdp=0xbabfe000',
> but 'acpi_rsdp=the-value-xen-reports' (it could be 0xbabfe000 for a
> specific machine)
>
> > > Did you try to boot xen.efi by itself - without using the GRUB loader?
> > > There is a nice writeup of how to do this in docs/misc/efi.markdown.
>
> GRUB2 still may be used, but only for chain-loading the xen.efi. This
> gives little gain over booting xen.efi directly from the firmware,
> though.
>

Good to know. I currently boot xen.efi from the efi shell.


>
> > Booting from xen.efi, I see "Dom0 has maximum 8 VCPUs" as expected,
>
> Now xen can find the ACPI RSDP
>
> > but Linux is still seeing only one core.
>
> That is because Xen doesn't pass all the information obtained via EFI to
> the kernel.
>
> > xl dmesg: http://paste.kde.org/629696/raw/
>
> the important part:
>
> > (XEN) ACPI: RSDP BABFE014, 0024 (r2 LENOVO)
>
> Try booting xen.efi and passing 'acpi_rsdp=0xBABFE014' to the kernel via
> the kernel command-line.
>

I rebooted into xen.efi to make sure that it still reported:

(XEN) ACPI: RSDP BABFE014, 0024 (r2 LENOVO)

then I changed my xen.efi config to this: http://paste.kde.org/629996/raw/
which, unfortunately, still causes the hang.


>
> This should fix other problems with the kernel too, as ACPI is important
> for initializing many kernel subsystems.
>
> Greets,
>         Jacek
>

Best regards,
Xiao-Long Chen

--20cf3040edb02a4fc404d1757df0
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Thanks for the reply!<br><br><div class=3D"gmail_quote">On Sat, Dec 22, 201=
2 at 10:48 AM, Jacek Konieczny <span dir=3D"ltr">&lt;<a href=3D"mailto:jajc=
us@jajcus.net" target=3D"_blank">jajcus@jajcus.net</a>&gt;</span> wrote:<br=
><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1=
px #ccc solid;padding-left:1ex">
Hi,<br>
<div class=3D"im"><br>
On Fri, Dec 21, 2012 at 09:30:59PM -0500, Xiao-Long Chen wrote:<br>
&gt; &gt; Hm, this:<br>
&gt; &gt; [ =A0 =A00.000000] ACPI BIOS Bug: Error: A valid RSDP was not fou=
nd<br>
&gt; &gt; (20120913/tbxfroot-219)<br>
&gt; &gt;<br>
&gt; &gt; is a problem. The workaround was mentioned on the mailing list to=
 use<br>
&gt; &gt; the acpi_rsdp=3D0xbabfe000<br>
<br>
&gt; I tried booting with this, but the kernel immediately crashed<br>
&gt; (I think). I booted with &#39;acpi_rsdp=3D0xbabfe000&#39; and without =
&#39;quiet&#39; and<br>
&gt; the system hangs while loading the initramfs. I could not see any sort=
<br>
&gt; of response on the system and could not ssh in.<br>
<br>
</div>I posted the workaround, and it was not &#39;acpi_rsdp=3D0xbabfe000&#=
39;,<br>
but &#39;acpi_rsdp=3Dthe-value-xen-reports&#39; (it could be 0xbabfe000 for=
 a<br>
specific machine)<br>
<div class=3D"im"><br>
&gt; &gt; Did you try to boot xen.efi by itself - without using the GRUB lo=
ader?<br>
&gt; &gt; There is a nice writeup of how to do this in docs/misc/efi.markdo=
wn.<br>
<br>
</div>GRUB2 still may be used, but only for chain-loading the xen.efi. This=
<br>
give little gain over booting xen.efi directly from the firmware,<br>
though.<br></blockquote><div><br>Good to know. I currently boot xen.efi fro=
m the efi shell.<br>=A0</div><blockquote class=3D"gmail_quote" style=3D"mar=
gin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=3D"im"><br>
&gt; Booting from xen.efi, I see &quot;Dom0 has maximum 8 VCPUs&quot; as ex=
pected,<br>
<br>
</div>Now xen can find the ACPI RSDP<br>
<div class=3D"im"><br>
&gt; but Linux is still seeing only one core.<br>
<br>
</div>That is because Xen doesn&#39;t pass all the information obtained via=
 EFI to<br>
the kernel.<br>
<br>
&gt; xl dmesg: <a href=3D"http://paste.kde.org/629696/raw/" target=3D"_blan=
k">http://paste.kde.org/629696/raw/</a><br>
<br>
the important part:<br>
<br>
&gt; (XEN) ACPI: RSDP BABFE014, 0024 (r2 LENOVO)<br>
<br>
Try booting xen.efi and passing &#39;acpi_rsdp=3D0xBABFE014&#39; to the ker=
nel via<br>
the kernel command-line.<br></blockquote><div><br>I rebooted into xen.efi t=
o make sure that it still reported:<br><br>(XEN) ACPI: RSDP BABFE014, 0024 =
(r2 LENOVO)<br><br>then I changed my xen.efi config to this: <a href=3D"htt=
p://paste.kde.org/629996/raw/">http://paste.kde.org/629996/raw/</a><br>
which, unfortunately, still causes the hang.<br>=A0</div><blockquote class=
=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padd=
ing-left:1ex">
<br>
This should fix other problems with the kernel too, as ACPI is important<br=
>
for initializing many kernel subsystems.<br>
<br>
Greets,<br>
=A0 =A0 =A0 =A0 Jacek<br>
</blockquote></div><br>Best regards,<br>Xiao-Long Chen<br>

--20cf3040edb02a4fc404d1757df0--


--===============5615196009048596967==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5615196009048596967==--


From xen-devel-bounces@lists.xen.org Sat Dec 22 21:28:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 21:28:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmWbR-0000qs-1o; Sat, 22 Dec 2012 21:27:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jajcus@jajcus.net>) id 1TmWbP-0000qn-4B
	for xen-devel@lists.xen.org; Sat, 22 Dec 2012 21:27:27 +0000
Received: from [85.158.139.83:7493] by server-7.bemta-5.messagelabs.com id
	FB/2D-08009-EB526D05; Sat, 22 Dec 2012 21:27:26 +0000
X-Env-Sender: jajcus@jajcus.net
X-Msg-Ref: server-12.tower-182.messagelabs.com!1356211643!28903155!1
X-Originating-IP: [84.205.176.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16393 invoked from network); 22 Dec 2012 21:27:25 -0000
Received: from tropek.jajcus.net (HELO tropek.jajcus.net) (84.205.176.49)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Dec 2012 21:27:25 -0000
Received: from localhost (lolek.nigdzie [10.253.0.124])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by tropek.jajcus.net (Postfix) with ESMTPSA id 061D15002;
	Sat, 22 Dec 2012 22:27:21 +0100 (CET)
Date: Sat, 22 Dec 2012 22:27:20 +0100
From: Jacek Konieczny <jajcus@jajcus.net>
To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
Message-ID: <20121222212720.GA7550@lolek.nigdzie>
Mail-Followup-To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	daniel.kiper@oracle.com, xen-devel@lists.xen.org
References: <50CD7A57.5030107@cxl.epac.to>
	<20121221195948.GF30562@phenom.dumpdata.com>
	<CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
	<20121222154818.GC2417@lolek.nigdzie>
	<CALnNU9PsKrYETpuW97hGCiGp2nthYbD-=NZjiXbOW5dJ268Jbg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CALnNU9PsKrYETpuW97hGCiGp2nthYbD-=NZjiXbOW5dJ268Jbg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: daniel.kiper@oracle.com, xen-devel@lists.xen.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Dec 22, 2012 at 01:54:07PM -0500, Xiao-Long Chen wrote:
 
> I rebooted into xen.efi to make sure that it still reported:
> 
> (XEN) ACPI: RSDP BABFE014, 0024 (r2 LENOVO)
> 
> then I changed my xen.efi config to this: http://paste.kde.org/629996/raw/
> which, unfortunately, still causes the hang.

Have you tried 'acpi_rsdp=0xbabfe014' instead of 'acpi_rsdp=babfe014'?

Do you have the kernel console output from the failed boot with these
settings?

Also, I don't think 'video=vesa:off vga=normal' options make much sense
under EFI and Xen.

Greets,
        Jacek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 22:04:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 22:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmXAy-00019a-3k; Sat, 22 Dec 2012 22:04:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1TmXAx-00019S-BT; Sat, 22 Dec 2012 22:04:11 +0000
Received: from [85.158.138.51:40430] by server-2.bemta-3.messagelabs.com id
	9E/D2-11239-A5E26D05; Sat, 22 Dec 2012 22:04:10 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-174.messagelabs.com!1356213849!23696809!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NTg5NTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18841 invoked from network); 22 Dec 2012 22:04:09 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Dec 2012 22:04:09 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 0318E24FC;
	Sun, 23 Dec 2012 00:04:08 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 59DC920066; Sun, 23 Dec 2012 00:04:08 +0200 (EET)
Date: Sun, 23 Dec 2012 00:04:08 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: gavin <gbtux@126.com>
Message-ID: <20121222220407.GP8912@reaktio.net>
References: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Dec 23, 2012 at 01:50:16AM +0800, gavin wrote:
>     Hi,
> 
>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the
>    config file of pv-ops kernel, such as kernel 2.6.32.50. However, this
>    option exists in the config file of kernel version 2.6.18.8. I also cannot
>    find the vTPM backend driver (such as
>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
>    So, how can I configure and use the vTPM backend driver in kernel 2.6.32?
>    Thank you for any advice.
> 

I don't think vtpm drivers were ported to 2.6.32 pvops.
Recently there has been work on porting the drivers to upstream Linux 3.x, 
but they aren't merged yet iirc.

If you need to use them with 2.6.32 you need to port them yourself.. 

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> 
>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the
>    config file of pv-ops kernel, such as kernel 2.6.32.50. However, this
>    option exists in the config file of kernel version 2.6.18.8. I also cannot
>    find the vTPM backend driver (such as
>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
>    So, how can I configure and use the vTPM backend driver in kernel 2.6.32?
>    Thank you for any advice.
> 

I don't think the vTPM drivers were ported to 2.6.32 pvops.
Recently there has been work on porting the drivers to upstream Linux 3.x,
but they aren't merged yet, IIRC.

If you need to use them with 2.6.32, you will need to port them yourself.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 22 23:46:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Dec 2012 23:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmYlX-0001zp-Sp; Sat, 22 Dec 2012 23:46:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1TmYlW-0001zk-Qo
	for xen-devel@lists.xensource.com; Sat, 22 Dec 2012 23:46:03 +0000
Received: from [85.158.137.99:36812] by server-13.bemta-3.messagelabs.com id
	AA/78-00465-53646D05; Sat, 22 Dec 2012 23:45:57 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1356219955!14356555!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQyNTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30625 invoked from network); 22 Dec 2012 23:45:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Dec 2012 23:45:56 -0000
X-IronPort-AV: E=Sophos;i="4.84,338,1355097600"; 
   d="scan'208";a="1613568"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Dec 2012 23:45:54 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Sat, 22 Dec 2012
	18:45:54 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Sat, 22 Dec 2012 18:46:00 -0500
Thread-Topic: [Xen-devel]  [PATCH v4 00/04] HVM firmware passthrough
Thread-Index: Ac3f6dIiDmmQJN1+QlSXz/maIJfz4gAs75YQ
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B65F0@FTLPMAILBOX02.citrite.net>
References: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B645F@FTLPMAILBOX02.citrite.net>
	<20121221194533.GE30562@phenom.dumpdata.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B65DA@FTLPMAILBOX02.citrite.net>
	<20121222021225.GA3468@phenom.dumpdata.com>
In-Reply-To: <20121222021225.GA3468@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, December 21, 2012 9:12 PM
> To: Ross Philipson
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
> 
> On Fri, Dec 21, 2012 at 06:31:58PM -0500, Ross Philipson wrote:
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > Sent: Friday, December 21, 2012 2:46 PM
> > > To: Ross Philipson
> > > Cc: xen-devel@lists.xensource.com
> > > Subject: Re: [Xen-devel] [PATCH v4 00/04] HVM firmware passthrough
> > >
> > > On Thu, Dec 20, 2012 at 01:55:10PM -0500, Ross Philipson wrote:
> > > > This patch series introduces support for loading external blocks of
> > > > firmware into a guest. These blocks can currently contain SMBIOS
> > > > and/or ACPI firmware information that is used by HVMLOADER to
> > > > modify a guest's virtual firmware at startup. These modules are only
> > > > used by HVMLOADER and are effectively discarded after HVMLOADER has
> > > > completed.
> > > >
> > > > The domain building code in libxenguest is passed these firmware
> > > > blocks in the xc_hvm_build_args structure and loads them into the
> > > > new guest, returning the load address. The loading is done in what
> > > > will become the guest's low RAM area, just behind the load location
> > > > for HVMLOADER. After their use by HVMLOADER they are effectively
> > > > discarded. It is the caller's job to store the base address and
> > > > length values in xenstore, using the paths defined in the new
> > > > hvm_defs.h header, so that HVMLOADER can locate the blocks.
> > > >
> > >
> > > Are there patches to plug this in the 'xl'?
> > >
> >
> > So far there are only patches to expose it at the xc layer. Nothing
> > else seems to use the xc_hvm_build() call (only
> > xc_hvm_build_target_mem()). Since the use of this feature seems
> > dependent on a user's particular needs, I am not sure how it could
> > generically be built into xl. Any suggestions are welcome though and
> > I could post subsequent patches.
> 
> I was thinking something like this:
> 
> firmware="nvidia.bin"
> acpi_dsdt="acpi.dsdt"
>

You are talking about values in the config file that libxl consumes? (I
am not terribly familiar with libxl.) Yes, I could do something like
that. I don't think anyone would, in general, override the entire DSDT
(though I could add that support if desired), so I would probably do
something like:

smbios_pt="smbios_structures.bin"
acpi_pt="acpi_tables.bin"

In the process, should I switch libxl to call xc_hvm_build()
rather than xc_hvm_build_target_mem()?
 
> ?
> >
> > Thanks
> > Ross

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 23 01:56:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 01:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tmamq-0007BY-AE; Sun, 23 Dec 2012 01:55:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chenxiaolong@cxl.epac.to>) id 1Tmamp-0007BT-7r
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 01:55:31 +0000
Received: from [85.158.139.211:5104] by server-7.bemta-5.messagelabs.com id
	26/86-08009-29466D05; Sun, 23 Dec 2012 01:55:30 +0000
X-Env-Sender: chenxiaolong@cxl.epac.to
X-Msg-Ref: server-5.tower-206.messagelabs.com!1356227724!21523659!1
X-Originating-IP: [209.85.161.181]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31595 invoked from network); 23 Dec 2012 01:55:26 -0000
Received: from mail-gg0-f181.google.com (HELO mail-gg0-f181.google.com)
	(209.85.161.181)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Dec 2012 01:55:26 -0000
Received: by mail-gg0-f181.google.com with SMTP id s6so1123337ggc.12
	for <xen-devel@lists.xen.org>; Sat, 22 Dec 2012 17:55:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:content-type:x-gm-message-state;
	bh=UIOACh27wAXZ/Ur9tKhh11HYcp5LcwQy9Ibd9T2HGrU=;
	b=amZvteiXIqo6mJm+M9y0nrguiKqNazxrViCcYZodAjtMaqbqcmHYuYOFywtOKPCXyw
	nSkr8d07lexhz8whRqfW5fb3g4GgjP+wpbePTl7fu+pUYpXo0cNsxhK7kRFMGbnKwI2g
	LpDdh2dbNt6RCrFWA1W9oeWlB9BhRwtC08/eO2+H/jTKPnB0jVvcDwcH5mdIUC1tSd/D
	DbJKd5hXOhXnAApE+LHEJCYwkAz2aK7A0dCT+7ULYWI77Bdd42xo5MDagAIZgtZSzOYt
	MeHYICzKdDejvWpnnxykMWboLC2mKU+5x7SH4UGOMZ4/mPL47YKvjudUIh9wT5HfEBn2
	Jlpw==
MIME-Version: 1.0
Received: by 10.236.168.164 with SMTP id k24mr16936758yhl.27.1356227724454;
	Sat, 22 Dec 2012 17:55:24 -0800 (PST)
Received: by 10.101.36.1 with HTTP; Sat, 22 Dec 2012 17:55:23 -0800 (PST)
X-Originating-IP: [2001:470:8:c14:224:d7ff:fe24:7a9c]
In-Reply-To: <20121222212720.GA7550@lolek.nigdzie>
References: <50CD7A57.5030107@cxl.epac.to>
	<20121221195948.GF30562@phenom.dumpdata.com>
	<CALnNU9PRWft=hyuHNm0-eqN58o22OMuLOD8Qy5AUMPqPC6b54A@mail.gmail.com>
	<20121222154818.GC2417@lolek.nigdzie>
	<CALnNU9PsKrYETpuW97hGCiGp2nthYbD-=NZjiXbOW5dJ268Jbg@mail.gmail.com>
	<20121222212720.GA7550@lolek.nigdzie>
Date: Sat, 22 Dec 2012 20:55:23 -0500
Message-ID: <CALnNU9Ojq5BpD6zpP5d7jZOgo=Mp3BCZ8G-jM3ibY60P9h5rTg@mail.gmail.com>
From: Xiao-Long Chen <chenxiaolong@cxl.epac.to>
To: Xiao-Long Chen <chenxiaolong@cxl.epac.to>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, daniel.kiper@oracle.com,
	xen-devel@lists.xen.org
X-Gm-Message-State: ALoCoQkM6XXQegitbvurh4CkKLFRCP4XB6G8q5hbFAPSB1QefgDiRcV78ez40vgyuY/KPZPttGb1
Subject: Re: [Xen-devel] Only one CPU core detected when booting on UEFI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6837564844615770211=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6837564844615770211==
Content-Type: multipart/alternative; boundary=20cf3040edb0c02a3b04d17b5f9f

--20cf3040edb0c02a3b04d17b5f9f
Content-Type: text/plain; charset=ISO-8859-1

On Sat, Dec 22, 2012 at 4:27 PM, Jacek Konieczny <jajcus@jajcus.net> wrote:

> On Sat, Dec 22, 2012 at 01:54:07PM -0500, Xiao-Long Chen wrote:
>
> > I rebooted into xen.efi to make sure that it still reported:
> >
> > (XEN) ACPI: RSDP BABFE014, 0024 (r2 LENOVO)
> >
> > then I changed my xen.efi config to this:
> http://paste.kde.org/629996/raw/
> > which, unfortunately, still causes the hang.
>
> Have you tried 'acpi_rsdp=0xbabfe014' instead of 'acpi_rsdp=babfe014'?
>
> Do you have the kernel console output from the failed boot with these
> settings?
>

Unfortunately, that doesn't help either. I don't see any output from
the failed boot. Without the acpi_rsdp option, the kernel starts
printing out messages as soon as xen.efi's output is finished
displaying. With acpi_rsdp, the screen clears after xen.efi's messages
scroll by and then nothing happens.


>
> Also, I don't think 'video=vesa:off vga=normal' options make much sense
> under EFI and Xen.
>

Thanks, I took those out. Those were added automatically by the
proprietary NVIDIA drivers and I copied them into the xen.efi config.

Cheers,
Xiao-Long Chen
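For reference, the 0x-prefixed form suggested above goes on Xen's option line in the xen.efi configuration file. A minimal sketch, assuming the standard xen.cfg section layout; the kernel and initrd file names are illustrative, and the RSDP address is the one from the boot log quoted earlier:

```
[global]
default=xen

[xen]
options=console=vga loglvl=all acpi_rsdp=0xbabfe014
kernel=vmlinuz root=/dev/sda2 ro
ramdisk=initramfs.img
```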

--20cf3040edb0c02a3b04d17b5f9f--


--===============6837564844615770211==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6837564844615770211==--


From xen-devel-bounces@lists.xen.org Sun Dec 23 02:50:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 02:50:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmbdR-0007rR-Ss; Sun, 23 Dec 2012 02:49:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <marmarek@invisiblethingslab.com>) id 1TmbdP-0007rM-Qt
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 02:49:52 +0000
Received: from [85.158.138.51:33208] by server-16.bemta-3.messagelabs.com id
	3E/CA-27634-A4176D05; Sun, 23 Dec 2012 02:49:46 +0000
X-Env-Sender: marmarek@invisiblethingslab.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1356230984!30016833!1
X-Originating-IP: [66.111.4.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTExLjQuMjggPT4gNDQ1NDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25048 invoked from network); 23 Dec 2012 02:49:45 -0000
Received: from out4-smtp.messagingengine.com (HELO
	out4-smtp.messagingengine.com) (66.111.4.28)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Dec 2012 02:49:45 -0000
Received: from compute4.internal (compute4.nyi.mail.srv.osa [10.202.2.44])
	by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id A13B1207C2;
	Sat, 22 Dec 2012 21:49:44 -0500 (EST)
Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160])
	by compute4.internal (MEProxy); Sat, 22 Dec 2012 21:49:44 -0500
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=message-id:date:from:mime-version:to
	:cc:subject:references:in-reply-to:content-type; s=mesmtp; bh=ga
	Vae9U7EtOqDdAnlzFa1QshJ0A=; b=VoMhHqwLgF+3Vc58Wqx7o/RrWwdpixmEVp
	K4WzKgBCgZwkuMtHjm9i9UB6t3w5KrIvW9CzjFrCB+Yl9mSAK/1KzFeidSQgo270
	GleP7kGT0FJQQFQ8kJ+RgYdGzdsmveVWqGQn1rzTnlYexfRxY3UzS431e1gw8dNG
	8ECUn/cLY=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=
	messagingengine.com; h=message-id:date:from:mime-version:to:cc
	:subject:references:in-reply-to:content-type; s=smtpout; bh=gaVa
	e9U7EtOqDdAnlzFa1QshJ0A=; b=mApN9MC5qwEej1TifOLxRg+K+LwGU0Lst3k5
	YfXxpkVkg2TkNg/vRCpvFYFztht+ARb8qK8b9rXU4HzNmMpWbSB6jt4HEBNKE6xK
	JZzSMSkUl901SL5b84FLaA7vqbV+WJROOuAPEa6mRqiBQPdw56mFsvvkz6IE1v07
	oyhlReQ=
X-Sasl-enc: 1fMoc+MhaRPv5zn0lTAYxtnLsJ8OGSwPDGJuqMxuuVMu 1356230984
Received: from [10.137.2.11] (unknown [89.67.97.211])
	by mail.messagingengine.com (Postfix) with ESMTPA id A79A88E03CB;
	Sat, 22 Dec 2012 21:49:43 -0500 (EST)
Message-ID: <50D6713C.2000202@invisiblethingslab.com>
Date: Sun, 23 Dec 2012 03:49:32 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ben Guthro <ben.guthro@gmail.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
	<50D4967602000078000B2114@nat28.tlf.novell.com>
	<3368417890369848263@unknownmsgid>
In-Reply-To: <3368417890369848263@unknownmsgid>
X-Enigmail-Version: 1.4.6
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Ben Guthro <ben@guthro.net>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3024064646789977410=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============3024064646789977410==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigFD4DDD5D428F3243AD29A492"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigFD4DDD5D428F3243AD29A492
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 21.12.2012 17:18, Ben Guthro wrote:
> On Dec 21, 2012, at 11:03 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>>>>> On 21.12.12 at 16:30, Marek Marczykowski <marmarek@invisiblethingslab.com> wrote:
>>> Next bisection (this time with sched_ratelimit_us=0) gives this commit:
>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f
>>
>> Ben, wasn't this where your bisection ended up too?
> 
> Yes, for the dom0_pin_vcpus issue.

Ok, I can confirm that on the xen-4.1-testing tip with the above commit
completely reverted, the problem has gone away, even without
sched_ratelimit_us=0. With Ben's patch (a partial revert) no reboot is
observed, but still sometimes only pCPU0 is used after resume.

-- 
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


--------------enigFD4DDD5D428F3243AD29A492
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ1nFEAAoJENuP0xzK19cs2G4H+wcOr9W8lh/cyoV8TcQE0VgS
/R0jH4XoL3y/hsFLVAiVDjnPlKMnG6seDJ4n5PjxTGRYJNeSX9w1bQ12t89nC/yc
dK7xSb5TIIF2pEV8Cf9K0tHhStXLxLMzBQKI7OekX9Of73eiXdyrD1O3Hbt84kTL
aNW3PKh7DPy/jWK6kSnvXRojpL67vZIiO/tgIeBYKsVssVmB64PaJogI/VusxE0F
Llvye2qYrubYxT5DRcMWKMUzpbnnGyl4bba6xnxjLZ2Cgc+kvsFjT0Bi3IUphfmd
r161bfU5hmw6NyotO6ayj/K0kTokNddh5AGiGRVBhYmoMejsdEyExB68EOylvgk=
=X2aM
-----END PGP SIGNATURE-----

--------------enigFD4DDD5D428F3243AD29A492--


--===============3024064646789977410==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3024064646789977410==--


From xen-devel-bounces@lists.xen.org Sun Dec 23 05:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 05:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmeFs-0000uc-5e; Sun, 23 Dec 2012 05:37:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TmeFq-0000uX-SP
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 05:37:43 +0000
Received: from [85.158.139.83:59547] by server-3.bemta-5.messagelabs.com id
	B8/10-25441-5A896D05; Sun, 23 Dec 2012 05:37:41 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1356241060!23696492!1
X-Originating-IP: [209.85.210.181]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13483 invoked from network); 23 Dec 2012 05:37:41 -0000
Received: from mail-ia0-f181.google.com (HELO mail-ia0-f181.google.com)
	(209.85.210.181)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Dec 2012 05:37:41 -0000
Received: by mail-ia0-f181.google.com with SMTP id s32so5061854iak.40
	for <xen-devel@lists.xen.org>; Sat, 22 Dec 2012 21:37:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=7yTIG/BCIfRhklNcSwN6tZhWHVOT9YNd5qThsB7D+j8=;
	b=vXowzdB3J99l/vLo/ogrPnKlcwfYUE7DCBnMDJAznsGMmKGEYWslMeT2G+8kZPwS5G
	8uPM8NXJjJcfD7mY66viO5wFSZyNloBHQKs3s4A6ozfz9IbLIbUAld+vWOLofbrQtoHc
	hsA0AnfsTeQkD4fp9nX9eLO9aWzSRIws5k82CSYjHWsEJi2WIPlepZPJ1gRtBwhei5Hv
	ZfsdKkXnFEsC3gOMWhZRqEIKc69klFYBab99FuXUo87ppAf4Kp1R9IznESM0pw7b6IOT
	HeGLNn4ST1dGaOo+VKSYtsIGrAyNYL5X/1UKVmJiSDPF4QymnzKmGcYUvu3c/E10Ec9C
	DRaw==
MIME-Version: 1.0
Received: by 10.50.56.139 with SMTP id a11mr11959532igq.86.1356241059904; Sat,
	22 Dec 2012 21:37:39 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Sat, 22 Dec 2012 21:37:39 -0800 (PST)
In-Reply-To: <20121221193926.GD30562@phenom.dumpdata.com>
References: <CAKhsbWbhg7h8okoGcerkW9DZ94WGBn4BDuY9mWn08_h2iVr4-w@mail.gmail.com>
	<CAKhsbWYkeRqWccEHgjdk40aoWWPM_fJNk3K0EsfbbGzKcoyG_g@mail.gmail.com>
	<CAKhsbWYchBmZega7Xy6NiNW4B+2vy1cK+r=d_bD1KLbW2_xWSw@mail.gmail.com>
	<CAKhsbWau-Kou2Y5vYcetRJLM5nQWGc0w65SpG1YxtcY6+VcXxA@mail.gmail.com>
	<20121221193926.GD30562@phenom.dumpdata.com>
Date: Sun, 23 Dec 2012 13:37:39 +0800
X-Google-Sender-Auth: GWs1m34ghh3UoXUtnwLjY8-44Bw
Message-ID: <CAKhsbWaMRr=xDnXsAhYHJELsJJ1ooXjiAiUhkmC-qsSNV2KQEg@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Jean Guyader <Jean.guyader@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] System freeze with IGD passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Is the AHCI controller sharing the same interrupt line as the IGD?
>

Thanks for your help, Konrad.
I did some more experiments, and this turns out to be my own mistake again.

So basically the instability comes directly from the hardware: it panics
whenever heavy load is present, whether gaming or kernel compilation.
The direct cause of this instability is that I had applied an
under-voltage setting to my processor, which I had almost forgotten about.
That configuration works fine on a native build -- it passes stress
testing with prime95.
However, virtualization seems more demanding and does not run reliably
at that voltage setting.

After removing the under-voltage trick, the virtualized system works just fine.
So all known functionality issues with the Linux build have been solved.
Thank you all, and my apologies for wasting your time.

Thanks,
Timothy

PS: The bad news is that this instability fix does not help the
win7 guest in any way.
It's as broken as before.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 23 06:11:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 06:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmemW-0001Gs-2N; Sun, 23 Dec 2012 06:11:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TmemT-0001Gk-EH
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 06:11:25 +0000
Received: from [85.158.143.35:57760] by server-1.bemta-4.messagelabs.com id
	BD/90-28401-C80A6D05; Sun, 23 Dec 2012 06:11:24 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1356243068!5257600!1
X-Originating-IP: [209.85.223.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7386 invoked from network); 23 Dec 2012 06:11:09 -0000
Received: from mail-ie0-f178.google.com (HELO mail-ie0-f178.google.com)
	(209.85.223.178)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Dec 2012 06:11:09 -0000
Received: by mail-ie0-f178.google.com with SMTP id c12so7892350ieb.37
	for <xen-devel@lists.xen.org>; Sat, 22 Dec 2012 22:11:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=cmeG5Z9dgPp3vNgSvf2UGvC74/VsqR069HoprRoHU1Y=;
	b=c6U3bO3ET7LaMfoBG0ORQUH6i7ckszctZToz+flEnkydeaZ2p0SrU1e5kpBNyRGVOw
	Q7Hcckf75/NlhWbI2NUyBb3VKhzNZ4lwlD0og0+dRm55E4+uROmMncVrhBmC04E6Ggh7
	90ZkUULLjUyhGLIiT6yf9FRUDz+kFiwmgxVChzL4CthgwprVWyUpAK3a/lLCoBAhx9s1
	fCIcksV3VRlmaPodWkvIE92a8ZXZnHFkB8DpMSjSuG5kKM8kRQKNeQrAbjFWA5WorddL
	I4mw9bwTW+JryzQd2agjgZeWStr/h410iChWTNNWN3W7pNVTTkpi1RtdkshsCDQcwjYy
	iH6Q==
MIME-Version: 1.0
Received: by 10.50.214.38 with SMTP id nx6mr16762639igc.28.1356243067618; Sat,
	22 Dec 2012 22:11:07 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Sat, 22 Dec 2012 22:11:07 -0800 (PST)
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6579@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
	<CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
	<CAKhsbWbtYA50rCCbAR_5=Bt+5g7Kb_BkKrVo5WF+ZdmO2o8pCw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6567@FTLPMAILBOX02.citrite.net>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6579@FTLPMAILBOX02.citrite.net>
Date: Sun, 23 Dec 2012 14:11:07 +0800
X-Google-Sender-Auth: 9X3CRnQvGqa8DLaF7UcKwvP03PI
Message-ID: <CAKhsbWb9LLjOg2pkT+TxJU3NzB4vYeRzVq29xW9iWoP_DHiSMw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Ross Philipson <Ross.Philipson@citrix.com>
Content-Type: multipart/mixed; boundary=14dae93404ed46460704d17ef284
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae93404ed46460704d17ef284
Content-Type: text/plain; charset=ISO-8859-1

On Sat, Dec 22, 2012 at 1:26 AM, Ross Philipson
<Ross.Philipson@citrix.com> wrote:
>> > Yes, my win 7 guest is totally broken with IGD passthrough (much worse
>> > than linux status).
>> > Before I bought my current build, sources like wikis seems to mention
>> > that IGD is the first card that works.
>> > And now, it seems the AMD cards are the best choice for pass-through.
>> > Sad news for me.
>>
>> Let me just clarify that up to now we have been successful in passing in
>> igfx cards without having to surface any of these ACPI bits. I was just
>> mentioning that this is an inconsistency and might be worth
>> investigating at some point.

So you are able to get a working win7 domU with IGD passthrough? That's amazing.
Currently I just have a working linux domU with IGD passthrough (I just
solved the last known functionality issue).
But the win7 domU keeps hitting BSODs during the early boot stage with IGD passed through.
The BSOD varies from time to time, with or without the Intel gfx driver installed.
But all the BSODs are more or less related to memory corruption.
I have begun to suspect this may have something to do with the BIOS / firmware.
(Your working systems are based on Intel boards, right? Mine is an ASRock H77M-ITX.)
But I don't have enough knowledge to triage the issue (all I can do so
far is analyze the core dump with KDB).

I'll start a separate thread about this and keep this thread focused.
Hope you could give me some hints in that thread.

>> More importantly I am pointing out that if
>> you are trying to find out information like the location/size/layout of
>> the IGD OpRegion, you can get that information from the host BIOS. That
>> sounded like what your original issues centered around. Sorry if I
>> confused things.
>

Ross, your help is highly appreciated. I think it's not you who
confused things.
The problem is on my side; I'm far from familiar with all this
ACPI / BIOS related stuff.
I dumped and disassembled the ACPI table, but I have no idea how to read
the output...
I attached the DSDT.dsl dumped from my system, in case you would like
to take a quick look.
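(For reference, the dump-and-disassemble step above can be sketched as
follows. `acpidump` and `iasl` are the ACPICA tools, run as root; the
inline ASL fragment is a made-up illustration for the grep step, not my
actual DSDT:)

```shell
# Dump the raw ACPI tables and decompile the DSDT (requires root and
# the ACPICA tools; commented out here since they need real hardware):
#   acpidump -b          # writes one .dat file per table (dsdt.dat, ...)
#   iasl -d dsdt.dat     # decompiles dsdt.dat into dsdt.dsl (ASL source)

# Once dsdt.dsl exists, OperationRegion declarations (OpRegions) can be
# located with grep. Demonstrated on a minimal, made-up ASL fragment:
cat > /tmp/sample.dsl <<'EOF'
DefinitionBlock ("", "DSDT", 2, "VENDOR", "MODEL", 0x00000001)
{
    OperationRegion (IGDM, SystemMemory, 0xDEADB000, 0x2000)
    OperationRegion (GNVS, SystemMemory, 0xDEAD0000, 0x1000)
}
EOF
grep -c "OperationRegion" /tmp/sample.dsl   # prints 2
```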

Just one more question -- is the layout specific to the BIOS, or common?
I wonder how we can judge the security risk if the layout is not constant.


> Oh and I forgot to add. In addition there are other OpRegions defined like
> GNVS that can give you an idea of what might be just before and after the
> IGD regions when it is not page aligned.

Thanks for your hint, I've seen this -- many of these.
The only issue is that I don't understand what they mean. :-(
I think I need to dig into the specs when I get some spare time.

--14dae93404ed46460704d17ef284
Content-Type: application/x-gzip; name="DSDT.dsl.gz"
Content-Disposition: attachment; filename="DSDT.dsl.gz"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hb1rebqo0

H4sICJqb1lAAA0RTRFQuZHNsAOxdXXOjxtK+z6+gfGWnVLuAPn3uYABJtSBNBGs7qVRcio29emNL
eyR5s3tS+9/fGZBsoHsQI2NQElE5Pt6mZ/rpj+kehmH8/scflB+V4XwdPigGoUOFLB4/L+bhfK0Y
y5tPs3V4s35ahpzJ8FzFmq2mq1X4+PtDuFS+hMvVbDFXdFVT1bbeY0yc74Xnm7K4UyzfCt7dTtcN
Jfj0pFjhDeNXtOZ/2H/NNm+rx+140/Fydj+bTx+UYMokKINwehsu/8Pv8Muf3c+nHI2yuU543yfb
2244v19/Ul4u9avKrvOu5iinzV6n1z3bsk7CL7MIeoJV394kn8KbP1ZPj4mbVmt7c2x7ytBKCDkx
XMP/YJwkGWL0G7YTQ2FNUvdT4mOUqqrrDGXrGSL3w4xbeSvtZDgK3BNw+2LjBPWrzpygaVpXOW03
u7qqn3cihd//YIV3s/lszbjMh8XNH8ppZLd308eHk8bGhg1Fbzyr0thibiTAnf3w1w9csP11HS65
i06D0ThoKF64/rS4Hf/+f2f87vv3isoC5/7pkUXQKtOAWoQjytI6CK2N0FoIrYnQdISmITQ1SyNO
38rSBriSmlBJf3QFO7kcWg0+yHgPmXtDyzClrDiwxr5UA5v4cir8ek0n1+8I/ai+u6aUZBH/eu2b
1+8oGarvhvYweMf+d7EDUNTDaPoYMvtQbczjSrfPEtThWDOjaNPP1Qzd5XQtQyUb7laWjnJfbrjb
WTrKbfuc6jhJqu/qtKH8Ei4XKapnRrxqu6dm6FHPeopKt9wtNUN3R5ye7qNPt32rGTrG7XtDyqmm
nqQaNLaTY5NoFGfubbRXMz2RSYyymaIPaLDpywJ9sXtkc09zWmorec9x/V/iPNIB7fzJFp+lkcy9
CYn1bGXoATEnWxw9cM8NMJ2oHdvS6akAA7VjOdwr2XuuQTf4bHDP7tNn7OeZex4ZPN/LtrsINlhs
6xzew3VmhXnA6VZqzBi+b8KYNMYBRjWuECq1ByxqxvMw5ZQBJYBIbeohRHuCEIkPiMPggw3Fe6Yd
h42THmxONHwyXThWQONRnKJ+pPEoaaapl5uRlqSa/gBiMG0oyrQHUJI5GUBBZuAgcgYkHrgp4s9x
DuqkiLFb1W4GZgy+l8EZ93qeARUbxciginnNjFYxL8lAiKVZGWkxr53uF8mXTFjEqmlpJaJBqqUN
M3ZgVjTtmJhu7rp2RNUz1EgvPe0DlxgRtZXhjfTS2xlqBFbvZKhWRE37YWBGGNJpkFEjDE0tQ40G
blPPUL2ImsIbTAI46C6twAZExmnBQGTUIYzEPrEs0IHlBwYMD8t3xzAQGJVAl9Nh4EOf+yaJqSlg
vhGbXEsBc30/dkQKgz8eR5m6mXIPteLc3koFoz/xh7C2EVbyOLWjpbXwkTrtDChavYcB7Hfs+Aiy
gHoO7CEgKNVzNJSqo9QmQp3EGNKO932kW99HevX9Jkzrfgum5fFGTraOjqn3MvfP1OWxt5kfwZp4
0TechJDx53A55Y8dk/CeP6Kc9kcXrK3/bbUOH73wcbH8xnsi1rnRtrVeJFGzNv50ZuHD7baJMf9m
3Nw0FJc9vjQUugxX4fLLRkr8WBLJ839mvIqidRrKM5F52eHEXoJGJ56K0LQszSdoWx2hNbM0l3zA
2rYQWhvQeqrFaU092eHQ8gHj5QTQLLMPaMFgDMAYJAAKM1oAZPgXgBYQ7QLSdEjzKaCRCZRhBb6N
0AA+RgPGZzSg2/Ya392twrVyGtWBxP2JfTHKtpnYPnAi0k8v2Y9BhwTaYQRtSCgMOEKBfuxpy4M0
zwHBQDpdN8s4MnhhyRjbMxCaCWguhW0diyBtEdpwAmi+5wPDXDZ7cIR5BNCGfRjogesDoxLDAjag
CI09/togLYwwoo9yWkMLxNzQ0oBPGFHHiE2M2MKIbUD84F+oGBEYbXu9RGqnm4xUNksCRjUnATAW
q+FgJBquAUaY644vAW0wHBTAZSdx2Z4B5NmboE25gRFdQIS9d1vJ3j3bgXnFD4AlkH5S45yVfxDi
jAb69sYTECsB6YO2lMZDJuVWRvwJjks6ydL6gRPFRLuTJuqAOLRskE4YowYYzTjE0xhdGPfQToaR
yoe+awK9hiZM+ENqQFpwAcKM0YAGQwpzy9B0EdrQADQfadtSzwGtjdA6CK2L0CiBelgeCLqho4EA
GlyQMcjrV0iyYUSYbBgRJhtGhMmGEWGyYcQORuxixB4g9qMZeTYfX4Dg314vQWRrySAa+wT4aGRf
wRpjwjTIaGCCgMgzk/Is3waTAZuMAa2PFHsS/AxrqaoBvgtnBCYAjAbwMxrAz2hgYsJoYCYJ9dTY
9DxpRDY/h668GECi4VmQyAoUDA9GhOHBiDA8bNPw4TyG+jAR2jZBiBcI8Qrrc2z6cLwwIhwvjAjH
CyPC8cKIcLwwIjQII0KDMCI0yPPNF3+1UzXbCDwQSBShGcEEJDqK0UYDD6AITAOmbEYcAOIkGMIi
6I9BkWDTUAQ2QnNc8HBgspk5GNN9D8jw+w6c+12OP4B86iIPSL4N2toIjWK0S5gr6ATOTT0sZ3tY
zvawnO1hOdvDcraH5WwPy9kemrM3Ayg9De4jM5Yr5wrSrJ8RvgKzBu3cSgW66wCBQw9mYGrBqRv9
0IeBbnjAIwZRkUdgDdCYCJBtGa1ItjX1pFJXAwLK4dWAgukM1lEqDVwFH+EIZUSY+a48CjmR/klq
kuxSD4w5rFEnVTStCTCd53WAmYyxAQa+71oTmN9JsElOEel7/M4QLCH5rdEFvoTUtc97yXXRzQJS
3KDoAlIAHzz8q82jcRJX/LaT987zCxvqo8XaD5ez6cPsf+Fttl9/vVjylzDLezYJ4P2dZW/xlwAW
u8kRvNwc3imnZDG/nYR3Y/br9ZgvhDIVpg/q2QvXXymvDTeMyunJ5Wx+u/hzpeiqqp0kGsBGWSRa
FklC9R2C3kmKau4tqiMnqLO3oHM5Qef7CtJ0OUEEF5T+bSPYv1l8Zu2ufTMbnJul94nKwo9Ob/6Y
3kf9b1+nQBgJJrW1Cy9fRNb4Cz4HyS583Rohu6MPIu60qozpjZCN5yEOzKoZGH8pgCMj9SNr4siM
ipC1pMOsKmSWNLJBzchqHwAiYOIBULczxQOgqqFpHOwAECEThllVBUAErP48m4NMEGZVDU1TOsz6
FSE7l0ZmV4Ssc7BDU4Ss9gogAlb/0MxBJhiaZkXIyMGGmQiZMMzqNln9YZaDrN4KED8EHGKYCZHV
HWZCYLWHWR6ymsNMvmjWjUwYZlXFv3zRrGoAyBfNqgbAwa4bCJHVns1EwOrPZjnIas5m+huF2fO/
vr/A2Hz1cUCLugIyf+ld9rxJblFXDKxZFTDBqBEj0ytEho0aMTK1bGSSyblCZJKLumJk3aqQ1T4A
JBd1D8CZ0gOg9KEpuahboc3kFnUrLACSi7oH4EzpMCt9aEou6oqRdcpGJrmoK0bWKhuZ5PNphUNT
7vm0wjCTfD6tcGhKPp+KkWllI5Nc1K0wzOQWdQ/AZPWHmeSibnVDU3ZRt7owk1zUrS7MZBd1qwsz
2UXdCsOstKJZGTLZMCs9/ksrmqUPgNKKZukD4GDXDSQXdSt0puSi7gE4s/5sJrmoWzjMnv8FFnXp
RG01REhftah7mK9BDvIVyGG+/nijVx/iWDQOJBaryt4HmbkPM2u/UcbOzYvtQ4jFmlOQMC/WnIEE
ebGSl6/SebGEF68HEYs1pyDZvFhVBpLLi+XOGd/oRWtuXuwcQizWnILq3c4snRfrnFqL82Kxwpab
Fw8iFmtOQfW+hTnMd/Bv9PYlNy92DyEWa05B9e5Ylc6LNT/dv2qnam5ePIhYrDkF1bvQfpiL7G+0
wJ6bF3uHEIvH9cXj+qJxILF4XF88ri+yvHh+CLF4XF88ri8aBxKLx/XF4/oiy4v2IcTicX3xuL5o
HEgsHtcXj+uLLC86hxCLx/XF4/qicSCxeFxfPK4vsryopWJx+7fFIMK3zosi7lrzYiVnpUnnxUrO
o5TOixWd3rbHeUcVnXgnf95RJTG/13lHdTtTHGYVHca6xwkhVZ2qIkJW9yGBQmA1z/DykdV7TrLa
PMwHiBxk9S465wATh1lVeTYHmSDMipUm8QTOOJAJnHguWu6BEiU9TJR7LkJZDxPlnj1T1sNE2Sc1
lPdtc9mnW5T2bXPZh6iU921zZc6UDrOyD14q72vA0r+glJvAHcAHlDUvi8hP4Cr8glhyAlehN+Um
cBXGv+QErsI8KzmBK1yaxBM4OuE6178afNzhJ/NA8Y/d4XcQsXjc4VdCbvz77/AzDiEWjzv8ZPLi
P3aH30HE4nGHXwl58e+/w888hFg87vCTyYv/2B1+BxGLxx1+JeTFv/8OP3IIsXjc4SeTF/+xO/wO
IhaPO/wObim7hh1+Phv6k3C1eFrehEH4+PlhumZ0UUAOJz8pp274JXxoKMbNevYldBd/NhT/03QZ
3jaUMwDmr2aj1Wg3Og1NbWhaQ9MbWquhtVFIxsNsutpiYj9N8S0ivmWJb9niW474Vl98a/Byywq/
zG64RclQ+I49tvn1YMhqgD1bTYe3yukJHVHVUHvZvye/4SUIb1PAa1iTRhQU6bteuP60YK1/M0f8
bwyx/0aLtR8uZ9OH2f/C211ROQnXT8u5cgo7/v4DKufaNEf7i+EgWfzlStro+5HbRqzvNZ0E8jiG
d8yHQ+IhoQwoSdz8DzgB3Aj2ZBuKtcnwjz+Hy+l6tphPwnv2UzkdmB99FnxkeE0W87vZfWOTB1iS
0FQ13ZczCx9ut02sy8Xy1ri54QZxFzd/sE6W4Spcfgl32eQZy93dKlzzZNZSz5DMY1ObeV5RNOQe
v6J72E2bmhN+U1cFLTPSe5h0b1BAegttGEvXuoWkt1Hd+8T9kCc900cL68NS8zVI99FBcdArcR/0
yv8lMnOOhXSs0tCr2EKdYsBQ91jDPYPDGkoFRxc1y7N6WCeevQkAkWHSEnq5ErAIo546yDN8pn8N
daynuTudh7eTka3jsvU9Zesyspu47OaespsystEBSb3WnrJbMrLbuOz2nrLbMrI7uOzOnrI7MrK7
onGU5jPQfPKMAxvRwcePVjSizwtJMIm8BHcjoZiuJFcHrPAMAt+OBEjNDjwyYJMe/9tqHT564eNi
+Y3PDBzbYnODaPbHHlkEs4S4qfltHUpPEtD5mfnRUfOfLWBH/OLTFPNpNXp6/D1cKqfbDuhycft0
Ey4bijebO7Ov/HnDm37d/EYXKyu8WdyGDdTI0fYNtZGivX+v9JfT+dPDdDlbf5NpNpnO2TMdgzF7
fHoUNuSPcnjD6dfchojEgAFdPUSe3gSVqLWGtHbD+f36Ex592ENbNFMcjuVtr9jz9WwZRmrmOSKt
Y2FHgGbFHMEu4nSxhjsdASQWd0Qstpdqne+IhhJ8+xz6a9b7DXQK90dsZq0jtG1KYFrXfCPlNMw3
ktbI0FhD9ph8P38M50Lb9JBGObb5G0WotWeExis/sOEbR6jTzLR+TYRGTomLDuaYhBMwH5Hpzadw
+vtDyGvG9PaSWfoNPGTsm0PMejykg9a7PGTc3rKKvYrQbWcAf3evkX29Rpr1jKvW0WvM+MAKRb3W
PXqtPq/19vWaefRafV4j+3qtppnH0WvxbHE/r1nHulaj1/ata9axrtXotX3rmnWsazV6bd+6Zh3r
Wn1es/eta/axrtXotX3rmn2sazV6bd+6Zh/rWo1e27eu2ce6Vp/XnH3rmlOP1zTQ+t/oNdisiNcc
26jDa45twtZle+2nyr32YhQpr73EsMzb3k3D2HuSXksBlXvbm8Vbpte+C/a4kokf7XEtvsGVLMPp
OuQBsNn3Ee/TYDrwPc2mdwXDxV8vlqFy6j/9vl5Ob9bst0+zu/Vkdv+J/U5tdxTtK+E7qPj++TNR
N2LJ/CR40x29RvJ4Hp6Jeon29HqaW3RPbwzUQpB2CQt/FZORQBvvx8XZkN3AHJ1r//dp+hCjjHWR
w2rO1lmbNts9DmJyWQgrxibAyjcUvt6S5x0mtlXMkijbbksOSrJkS+eWbBWzJMomtKReQkya7Cfp
FbMkyrbTknpZMdniu0xIr5glUTaxJUuIScIyICHFLImy7bZkWTHZJtySpJglUTahJZslxKTdaihW
sTyJs+20ZLOsmOyc9ziIQpbE2cSWLCEmHVYbrWJ5EmfbbcmyYrLb4ZYslidxNqElWyXEpKppHF+x
TImz7bRlq6yo7DUjrMVsKZcpWyVEpao1dSa4WK7E2Xbbsqy4POf7Da1iuRJnE9qyXUpcttis0i6W
LXG2nbZslxWX51aPgyhkS5xNbMtS4rLD5pV2sXyJs+22ZVlxaRjclsXyJc4mtGWnlLjssf+3i+VL
nG2nLTtlxaXZ5bYsli9xNrEtS4nLcza3tIvlS5xtty3LikvS4rYsli9xNqEt1VJsabLZpVMsX+Js
O22plmVLi885nGL5EmdDsObZhrC67GkeonRuq07USrhoI2hl81boug1fpnHDu7VyGn8oxVdpRMCM
29vkEg/HsWFtxKdE4DIij42XL27zvE5r80F4Q3HdcLXa3Bj7P/vxDNVqnkn6FNecP8d6fqEIxNm+
A4r9sAqlkP0kRqaLkOW1tFXeEnPQzpaRTCx6EtZ4XiRtvSyxioFGzu1HMpcbL8bf9PGTQFQVcaLY
XBGIREB+TASkUF/oIH6hTtohOaO+llRfLB035fMgicYMb78x4nakoF7IOYKAOzH3/IH4W75+dNaC
+XR3xz/JS/8dCVz99z8qXEnlx/f8X+rXtslt3oosb0W/N5vR82038oUT/SQRD/a5Zdxd77k7vnzI
Oo0ad1tRMy1awolCshmLie52MoctnyHq+R8pxY6SiO+SYOKK716RUZB7DMXYZ2q1ZJboNyFjLO+Z
idzFzfRBLZKYY87teRDEutRkWvHluaiVLtmqF7Vq4gn6t6sBeUdY9ETqYOM2/1iNqD1lJozaMxez
n3r0s4metZGlSGRVDpc5a6icnlzO5reLP1eKrmp69rCV/E62HW1rUiI68G7yu+JXZIIr33azn84C
ufObZcg/u4vFipnx3AapO6ZJ3KENhaeGwm7dRDaPMjYg2KjLLRk8rFhwseGHsxnz2+3oZMnDLs5q
5bEmVRzZVzTXfWLXpSQ68dXLkyvI+RwOh7FPFI4WvFREQFg22Exp941D3mFCKXx6XLw7fo0GtL8r
qgVmEaDiWez1sKhn74YldyfHuy5z08uQ0vJMm3M2z3Lr5OeUjKT/4kgSY+81UDR1Dyjb4R95FK8t
gI2VCIKzbUuJqJC+qmikzN7K01UORsYy/s3iM1P112vffJc+YQyHtp188FOpdp2BJVbuuQj/xvsp
NK/cCqYlCaayguM/eVaKxogj8zUuRTCVFRz/XYZSNEamfvkalyKYygpmSFsladyS1bgUwVRWMEPa
LknjtqzGpQimsoIZ0k5JGndkNS5FMJUVzJB2S9K4K6txKYKprGCGtFeSxj1ZjUsRTGUFM6TnJWl8
LqtxKYKprGCG1C5JY1tW41IEU1nBDKlTksaOrMalCKayguO/gFKKxoasxqUIprKC479tUIrGpqzG
pQimsoLjU8tL0Rh5TsvXuBTBVCA4B8b2TOeAZlfVoTDh2c5E1bAlxOQZxvx5X8AQ7e/edYaeWP14
W3lTj3bu863+09vx/OEbvlufX9FZgS3wAcJmq7piTgXvP+K2vF1b1DZnOzy/EN9AEtj3/qqgYJ1I
heK1H7wi3fAFFhZJSGoVt0miha8YciAnGzK/oPk8p4tt5LuUmAUjPzp9nIeAFn0lhbwTflnDKGoy
cLwlX/t4x0HxH1r6GOz4zRF/BYRbafPmItuFMf9W4LzLfJwpzM9nfurCc4L5RY0J4eOkl8djFuAh
BXis3TwIfuGZpVGfdgG5TgGefgGewT74O8JTXQX8liS/ncd/9dE34fmt20swZJ/H3egDMiHhlzgA
cyqQI3qJlWgnrkTb6zkNWsP8j46K4eUXXz6NBwI/VrsRDYq891a7sdEdH0QVx/ZySr8v8EVRTLs+
0iqOKfbVJHB3/NUO+Z75Jf3XPVAx4jcj/EKKevIC34jFukabGxg8QWrfXqmNNwXYXzaFRAKi90jb
cGQ1s0gf2yjhOF8XJT6PEq2MKAFGjN/UFjSiM5vf+mH8kZ05Y7bhTcT7EJKXFT6/gS7CvnHXdi/B
6wf/dn72ehM+v1Z8yU6vfq2YmI8hayjJK38ICTch7QUFmYwVgyL1njNR2AQC37awRR+mVl7ZzERl
Mw+ysuV4v6bKZv6LKptZbWUz961sr4ySY2V79eB/i8pmHiubzJ3dlU3g4zevbIL9Mfx6q8pGEpWN
HGRleyWmN6hs5F9U2Ui1lY3sW9lem5b/9ZXt1YP/LSobOVY2mTu7K5tVU2VDdkltr7eqbFaislkH
WdkEziiK6f/b+9bmtnFk0e/3V7jm02TLlaWopz+KpGTpRg8eUY59Tm0llZ0oGdfJJHMcz7lJbe1/
v2iAFEGyGw+CethmV+1sLALdjUaj0QAajQPMbNELmtmi485sUd2ZzVFL2pnNefAfYmaL2pnN5ot+
ZkNiuwAOPrMh0bAZHGpmm0gz2+QsZzaiM0x5OsDMNnlBM9vkuDPbpO7M5qgl7czmPPgPMbNN2pnN
5ot+ZrMONGtoZkNuPWRwqJltKs1s07Oc2YjOMOXpADPb9AXNbNPjzmzTujObo5a0M5vz4D/EzDZt
ZzabL/qZ7fpEMxtyuy2DQ81s19LMdn2WMxvRGaY8HWBmu35BM9v1cWe267ozm6OWtDOb8+A/xMx2
3c5sNl/0MxuSHhPg4DMbcos5g0PNbDNpZpud5cxGdIYpTweY2WYvaGabHXdmm9Wd2Ry1pJ3ZnAf/
IWa2WTuz2Xwxyh1Yuc25iEPP/A5nlnmSVzK/tamZiva3+Xrkbc35OvJUtw/Z947J7cScVkDS2own
K35rUIGLf+9SCMJgzAuMTLokcz2i5RgZhLjoSJfD9zzK5UjrGN9mp6kDzNeMY/5AXWdAX2UHEFfS
Lwu/Gb0kZ4ZA8aKchKBzWfoNLsV/uf/8FWymprLvIZU1N+kB6OFtJ7xRx1F4NILDC6+DVT6i8K66
jsKjERxeeF50WuGFrsOWRvAUhy2z0Be/ht/++JPNoP+8/3L/+JOnoQj++r788P0RXu3kj1V+2j2M
3jPxatz2f/WIebxaTTFrTG9nSJSZxawxX2290bFmDetcKACQuhd5ztUoH4qonyV8r5MTBQBvpl1H
zeIJkg7acnrveF1NR/HdBDpNiSglng04SHcqXqLNgOe3cevP9A2Dg/enaKRd5hkATWLy4qMK0tMY
HafM0LN4rIjL0eMAUCeskcF2fQKgXEAdvYEOCzDyU+1MQdZ7VmodU4vKZBcDfXWld3kxi7ea3QU5
ezvjI2kmUfn+EQ+wHh1uPfSsGGQuL3LKL6k3yqp/QFaxBxgcWO0as0p/0Y6M6nMnNFrFfDqP5+7L
Ze88l8tl19Xa76YRHGHR4p920eL3XIVHIngBwhu5Co9E8AKEF7oKj0Tw/IXXdbV5NIIXIDxXm0cj
eAHCc7V5NIIXIDxXm0cjeP7CG7vaPBrBCxCeq82jEbwA4bnaPBrBCxCeq82jETx/4QWuNo9G8AKE
52rzaAQvQHiuNo9G8AKE52rzaATPXni9yNHmKRA8SeFt/mP1bfrlw+fv2shZv4kz8gWaRsRyqzj0
yGeIzZLcnnRDeeK6M0UieJIKaCW8nqvwaATPX3gD1+A1GsERhHfi4LWBa/AajeAFCK/vKjwSwQsQ
3tBVeCSC5y+8oesSl0bw/IU3chUejeD5C+/KdxQejeD5Cy9wFR6N4Nn7eQPXYatA8BRDxW2E13G9
HqNCcIQbHicV3pSBk/BUCJ67zWuFVwK7bT3XW20KBIcXXr93WuH1HXfjFQiOMGGMTiq8vqvmKRAc
Xnjlmf7Is+3AcT9PheCp+Xl2++ibrfs2enCeEddPe8nevDUyP6EZNXFCs50vNw1cjjtL1eq5qhaN
4AiqdVovgU1UjsKjERgJD13cnFB45uPSa2JchreR26XV+WrbnXqRblzan7Y2dYXV0ml1jL9RIDAb
y5hGnUYdTW/LOl6VTW+8MT2c6K4RmiUV85QXWw9ydVV/J5Qmffjbog3fy0vm684Rgy06HYRaXu5l
ZrtxmrROnO2mFV4BWuEdaH7KDKd7NgeerW6xgNwTvyZxZy1uZk8hv1iW6SH/wH6uPYVVsguKNvA9
get4rrm0Tdbu8dqEGS3WDn4+7kq1h7z2wijfoBCCntVycQ1v+zvsqSAUzCjup0NfsrlLkRSiZu+M
galO3d4Jee26vTOF2gsz+UHjTVgtFzfGvtAxY5A9gDXMKq2GcAOicLy9vIg//PbfHz5zTekQb3/R
XSy0S/WZSgcIwDOAkl9F2lQFbjhleZqfvf7ZsqbjfHBK1qzmtL8edz8ufl3ebEWSFmL8ZlPfZDW9
5vlunaa+8W//89c9DOw92elUsbDa2+gRmzDmq+iuyaIiy+4iWtXKuDO5A3m4ugJ7rsdjHdeb3Zfd
h++p6GqxvIi3S/cuBEVgFvV6EQkRKhwT7uOs47SnfX1SYhCqao8qs+cqPBoh3PB1o6sQ5F2Gfbbm
BnYZOp7Xj7ye22wHC1wVGgJFJcvvfA2iSn5+f9z9MWc+1d5NJe4qpF6EqAZuhWGiXwCFqNmoABdf
laQ3Gm957txq6lxFe+dfP+5+ZDxzIgJNY5xnkKcP9oZU+mAAZor4WkaXjriI0++ocCbhVJvimJXx
Dcp0Dcr0DMr0DcoMDMoMdWUykGR1pZJV+Cac2uPskimhAcbhdmOPc6DEOV+PZzqcrMxCX2am7ff5
eqEtU+V/qOZ/VUcmw54KZ7QMtTKp4pwo+dxcT5Tpw9My2vHFymhlyMpoxxcrox1frIx2fLEy2vHF
ymjHFyszspf5VClz4S2ocbIyWpmzMlqZszJambMyWpmzMlqZszJambMyQ+tZLPNrwCFz92syFyLa
Pew+rWFvAybJbCks3L1ablfET3aO6HtmTj63v65+p+zuZc9fwHrL2ePTHe6oXeuMFb5PoGsjvN0z
XyfbS/kdlQwDFyZbgbDvGiloqJztcdr+hZDKKzJS0zWn41orJ7UPya8tw9GeDFFrmJqVWoeLUbhi
OuYfb6RXttOFLwB71LCZztjIFmpQZv9RtfEGwDUdfGsuwfojned/F2tE9l9mAedfi6/uqNI3KyxA
vovBcIKZU9jCXOWF7yiiv1LqGosBfqQoqa+x2cDJk9zYrKanfKpMZX+VW7ThJum8wKNap+Cs5k8b
zeOLaFtj8UyC9o0E4yAmAOwMRSjsFX9+a6mqVjw86WRnL6wxymoYNR/mIOpYhK7W49WIHRmaSeZo
LiYrXRSE346tU4+tVnjPyDCZJ6YBwAa9D6MXnkdibSNS9WODXlTrccukrIZQE5bJt7BMfm6ZfAvL
5EuWSVkNowbnJ+uuNZMhr2bN5JQz2dWYz15rPk9vAS5+nXz8vKvxQqq7G9GT3AgivISuxkfdzLoa
H3UbZbWiQotqAyagxZI4CqWricFKvBGerwghruSIK0JivUO7VwD5agfK6Urn8SkdlQPGi/71z8eH
D78xZqpPtQINw/2b8r5FtnVDu3EAPD5ppVocqpevZapi7552iwEOsn9TfitYSV+9VwA9ncfY5RsC
sFuQn+l2XFf+BY7pFYHgmPriILHKdodu9c6ZxrSso+e/9taqFKJld1yd7yw3YV1KB/od7Zs++t1G
CLcQ8YXO+4qsjb68r9Lshh6Ag54Rz1frFtDF6tLksq+uXEjrqSsX1Pt2F2cV1V4cAB9TwuiLcCyY
MQxGFDf5eU0xG40VL5ADcNutlZ9ee7AXwpdmL4TzBuznr8J5g3pWATj4HndxTjDgRytuvb7VEDcg
dRY3GPLzELdwMk34UTOkmRgA4Cwhs8cQYGUy30r7zgqnDEB+VTGVtp9N+q6iKnaiL/t4Gq6aPo5h
U4f79Eha2XQrQnFqSNr3dDtCW5W07VuFbaer8o2CraITLKaEdDrYGk4HxalgazgVZET8nMhCtdwo
EvFzIjNFrXSuUXSGWumxOWZrZvRqzS9HWGNo6CtFqdZrS1ECMidRqm31IUVpMlc4LB6M5weruaHW
vKARR735oJbFT+brN02F2oC9pNm7/f3+y45HeBM7oWoSAMnjhy9feNi+2f0hYQHVFHVhM/uAbf2g
yiLYhe2dmJ2Z73dm0rUz1NerZoGzV7oj/Ron5nv9WEwajoBPFn7sFgG/1zZFkEBGlW/+OfpnWb/2
oV+7U3PvXO5fkZQKsJzYMW+oNT3D1ji59ZRJMyAjzlXiJVy8wcOaJBOYmKg4Ltp0RP4CWH5hs9fu
n399xhv0DvIWvNaZysrFijcBv06xv1gBAd/cTSNEn7qwolpj1xPeBNHWPnzX81Xhu0j5rmV5ZZj3
myDZ2l/0AFMlhYFlK4E+Ya4U17W+7HZ/Apehwk7x0SgYNZpj0lnULbIy1VnRp0auoklT6jTHymex
uUdFC0AMQ/B7+K71d5o5swGbcfYmmBJHW2p+AMC0CmriGoEwr/kvtZcWy6RJpvxmmBJnKmlHrpP/
TG/5Rx3pTEX62a9/qsLVsdgGuAlk0oaM2XCjkqCeBYCqJGGNbcqFwU5ks0wKF5aP3/yHs+Cx2J+j
zIPJfzkLLguSHF6mOd7yX86CyacmyozJ0LNh0v6LwzaG0why7A83nWuMuG0vaWYKxH4PRrVngwKb
+wCY/Jcm3ZR0R6jsRXat3RVEBFdTs8mjsEO5r6CPUylV0G9H7It2DSJTTFzJA/hTnXP0p/zWn7Jl
AeBJ+FNPY4KVuMwmjsnUxlmx/+IapVRfpg6NtZ4B5DEp79QukCmh5hJ2MYlUS1iCuXRVS+y8IZWk
fbLbRvbJbq32yYhsafJmQLFLe96rV1Q8Lql7Bll3ekO9qqhlrtlJkHcm6JuYZrjSfaI9v8QJiKrZ
BmcthXMWHS1CNsUxzd2yaaB3y2r6RUUTMOC+qsz7wKTa/oBH59zLA1Xbo4q9cLMhkU+L4iI7pKzI
eKQPiKztQWWH+na+8os71PCWL576J92dFlUsdqdxRZ2vImVyh2icJn8waVcpxU/nMq3uzGUGJql9
FtFKm66iiKsTULiW09gSl0/upzNcRqldJFwDCtf1Yq3ki323pTVW8K1N21HERafoYWPIUp50ap75
eqbENV8vLGnRaXTmK1u+6VQ3b4Mxcaai8ByY4eqYpL9Teg77ZH3EAtagmIrFu0ZYTDPzmdHeP4Ew
d3uZZLVdef0u+TJJJV9+I0lf5ptkK5LQ1cvx8j6aN5AdGTRLdYCcRxYNePZG080PGOomWCcNYgUV
rBml0Uyq6f2zFxZZEtQYAcyvJAKA1+D0nI0eicHVxBRJ/VTjAkHddysBNDsp5auKs/vPv5veVQT4
V1cdaGLfa07v6OiRPPVeI26CAkjPY9S71fSe32pqIIivKVuqu+VD3CoHSGOLFSXsY4snrrHFyhSr
1JcGY4vrpXhNsYjEVQ1PaAZIHeYzcC7cFfqdvGxO/zBOEvVOt25W0wbIpTrWSVUqLjxsE30tBuTt
kzSZxcZmFYGcUYU0kUGtgB4Ax3RsaX/oL8Lp97z3J4tigazwYe3wAuTCiswjK9VTMYA27tOaQ2Fb
muHwuHf3THhWnUto7cXh84qvwnHcYD54k2zpYCwgn6oAKSOApl4Trd3cLRprrZkOIFkrzSxkNWT7
SKQg0/WRSIETdoxu397FR/NKxT6f0Q0NSeZDA5kXp0ugY8JMNiINb3+IgE9dDZmXDudFdzGpsVcH
7H2Ng3kClRNU95QJuV5wfyCY6vsC4ODzHtpcKobIDjVAPQcyg5pCA6jrTugXO1bj1GwgOaxjVvz2
1PE2Oa8MV2/pkkTTW/JdA0UxzSIoi/ei8qZZF4x25ZS4dfTFZbc1TowCIQC0/owY1nQoj9Flp+S3
b38yXP+wO0YAosRVWMVJSTjfvH4fb27rn5ZkvsU1w8KPECLSrilOTTbLJHQ6NrF6BZZ6se6UmYXL
G5XW6ftoBId/irPuLqvC4bESnu87Co9GcAThTU4rvJ7jI+UKBEcQXnha4Q1cNY9GcIQXdP0TC6/v
KjwSwRGEF5xWeENXzaMRHEF4J7Z5I9fZlkZwBOFhmXaPKTzXCYNGcAThdU8svJGr8EgEL0DzQlfh
kQiev+Zdudo8GsHzX2GMXWdbGsHzX2FMXDWPRvDsNa8XOQpPgeCprTCIsKtTPlrTbtsUoN22abdt
nsK2Tbt4LkC7eG4Xz09hCdMungvQLp7bxfNT8PPaxXMB2sVzu3g+m8Vzs7cO82j8YBKKVwum0/ov
aGtfiOINI784xCZKpGtkaa/+Uvlpf194HbtdGBaRLz1N5Avv3WNvjkxdTReN4EmaLvNHaTvEjU5k
ECt0K078N8661fW6Ot1C6gTHucC+tzfzNVwyA4vtqe7/mA16uB11IntDJ5ahSRMRjqcc+QPXkU8j
eP4LtYHrFgGN4EkKz8JsWj3cmw6QuJkBkjx+eHiMdn/uvn5kUpoKMyJi8Wvndp/e/9h9BOVRlgIg
xsz448eH3ffvJtVr9huAulcsG1HV3XNohLkSAlCKCKCYPr5+LOhPPT+++Xh2OnnrAePZgah9PDt4
PIcJaO/ZBbQzRpDnJC1dr2kN12vaIesc2PVinlfreglo+CKBxQTo15gBT3Ny3rqIBWhdxCO5iHYj
5EDbY40uV5/c9tgT9cZtfUBC0wCenhNIJ8s/oBMIRGs5gctjOoEKtvbuoBeXhmiVbjoAxtFGqGln
Ask9qgTzTm6yhXCdxdzNlXjY1ucBzGE8D4l31/VpK8cbD679Wt37zerGVF2j/rxJAvv+jFh3Iob6
QP3ZVdzDNWuhX6uFyHxwoBb2nFvYrdVCJBfBgVoYOrewV6uFSJr2A7Vw4tzCvnULx0cdh33nFg5q
tfBo49CnM5mYtnBYq4VHG4e+Yx9u4rK+GbQwxGf/bLMoji8v4g+//feHz+lj66aN83huGOITJC5C
P/EUkegX2DapiqXKTeWlgfguZK5+HM7fh9++frr/nCVOggfHqtXT3KOi0vjrz4aT+HfIJOkLL+ER
QB3zJP4d3cO3Ulnlo7qcbpdibJzcWTKmfGEXcA2I77M4tBUCmcV/HMRavuG7T3yPo9CxvrXcyFcZ
qmV9UpGQsmTnx4leRkVcVyNd33apsT2LJ0pi8TL9brIyUI7T24f7x934O5gM5J0j07F6pVVjqiWT
+TSyFGvfvDuvQocuUOtkvEzMuyCbBGes4/gkWHNlxmjW3DKABwGP+/w1n59ojjNQt8uMIkA5bVXt
DKYA55ZT1ezXc94V6DnsCuB1Tf08+zVziK9GWj+v9fNSxlo/r/XzWj+vDK2f1/p5JLR+npaj5+Dn
9R38PLyuqZ9nf3IQ4nuyrZ/X+nkpY62f1/p5rZ9XhtbPa/08Elo/T8vRc/DzBg5+Hl7X1M+zj58I
8ZPp1s9r/byUsdbPa/281s8rQ+vntX4eCa2fp+XoOfh5Qwc/D69r6ufZR5GGeJxs6+e1fl7KWOvn
tX5e6+eVofXzWj+PhNbP03L0HPy8kYOfh9c19fPs79KAn9dv/bwMWj+vwljr57V+XuvnlaH181o/
j4TWz9Ny9Bz8vCsHPw+va+rn2d8oBj9v0Pp5GbR+XoWx1s9r/bzWzytD6+e1fh4JrZ+n5eg5+HkT
Bz8Pr2vq541q+XnD1s/LoPXzKoy1fl7r57V+XhlaP6/180ho/TwtR8/Bz5s6+Hl4XbPsuZNrz9bP
8zra7LmnlaXvIEu8ruo5isl1bNq2oiT5A361u8067SHvNnVyztN229ih2/C6prK0Ti3EZalOA3pa
WQYOssTrmsrS+vq+NzhzcxI6yBKvayTLwIusr8jBowdlWebEJEJZ0vj31/EkL14ksO+ABc/RruyA
Kmv/eJ8Er9lS2Xu9iMPgdTJfz8op90sNz+nxjOmW9Fjh+08/oUkZXcgMz8Xim5LtNkEWEphjZKul
bzeBHX+9hvjz7ciGDZHt2pGdNES2Z0e23xDZvhVZnuq5CbIDO7JNtXaIk0UY2M8VTP0p06N4jjcs
v8GVlr2Z84ckxmP0q3iX64cnURTMpYwVH9HImalsAc7X79lslPz8/rj7Y76+5K/ZitdPOpLdTbcU
ROHg5+NOs/dXbP12E8/50pna28oX8B65sSCVGRiUqezFMCb4S0TUzqtUl9w3k8qQ+1FSGXIzQioT
GZSZGJSZ6svIHSrpL6YSUVElvFHHS9/8QFQiqqcSfBtoZMrRDFdSnKNZPY5m5hzFS+Aj5yheBvDo
049RddCIotb8VPvPJ/cXpTKo7vKNr9LvzFHqqva7cpxdVI+xujc8l74Jzh6674rVZXyGxX03qV/U
Isb3OrViJrePddvGuJgTQzEPUNMHda9KvyfhPDHEOTAb8tdxUZ2vY6HOg+rwEkWt1fn6xkPNL/u9
Q/zuE7930d/naxz/fI3jn69x/PM1in8vOoV5v14QbVwQbVwQbYz9HtW/2LEKKz/Eh44/MtMT+uxN
KlMxP9cB0d6AaG9AtDfA+7TKg1+V+fwt0e9viX5/S/T7W1yvbrwe8Xuf+H1A/D4k9A3HP1/j+Odr
HP98jeNfEPwvCP4XBP8LnP8MJNNe0SXW9hEhkyvi9zHxe0C0Hcc/X+P452sc/3yN418Q/C8I/hcE
/4uUfyObvAlhOS1s8nL3x7eHn+yvTRjsH9AtW2ZRIbr99vCRm+a6fgb4Vnpz0DUr1at69LN4nFCn
w/B7v1peefItU6saqQwrZUzLvyfjLXpAmSwD9PdZNFYeaBa4q3hnm7iD1t7EPvF7l/i9R/zeJ34f
EL8Pid9Hpq3sVw9hb2IPrX1DtP6GaP0N0fobovU3ROtviNbfEK2/IVp/E18Rv+M6cRPjOnQTh8Tv
ESl1496YqLcIsk2P8k5Btq1xvRivNNsaecTUVXW71uadraplqryRhr+uqdyYmczCjnELomoLqou/
2zeTctjUwC8vRwGy9QmvkFtmddAUbp+p/o5vb8LipFISA4DNa69V8vwt4ofPyJkGzmoeEPD9kvOn
OzEAQI/glfjFUbsJfkoaSbfGQYD0/rzhrmTSOySZTM1nN8FKh1RWd/wR+/1u5gY7W9UfA6Mv8ALs
pcGMnTYkhqaV02N4xtoATD0yADi71sQYCWVroghZgHipGSBTBmiz1dFdPvAXUSNSZ3hKYa+KB8XV
Ug/++vRp91DembTHA/D3v12A4b7429/hL+/HSBxZeWb/NUQ/2qPvih3y0AS9ErkiCEivDdAXVtqQ
D+4O0Wu67tcMcgDrga6mm9Nm+PQR1+ZIAQwGvkUxAwOQFVMrhbpZkjEgup8jwVUAwNooAJj0kJVx
0CMFMDcSZvgADmwschK5wZh0GjIYAOpYRkPtoY0HTUJvVpAYAQAzs4LHGGTQ2hWDYq1dMUAK8Lzs
SvTc7QqSJh3A2K4Q9QFau2JQrLUrBkgBWrtShTO2K8SINLYrihHd2hWDYq1dMUAK8LzsyuTp2BXy
0/jL/YfvF78mAQQaJNH1WwNTECXLy4teE4oGe/KLyf/89eGL2Ju/tNQQPQUZKpoyhS4a8o6a8kXt
mIcHD/l/Rzx4dswjd3uBoabkZHJtCcT8wwkEPOD1ipMccy2a8N+H/OBlOjIi8m/i4kMGeomk9n/7
3qN20MuQnlZsv82/Pu4+Qw+x/oL4MUCiry53tES2KRtQ0qOO2NxrWn+ywZbpqHIDsT4ZAKYvQ+MK
GhuxL2ZUSnuJNINzlZve4mbQmNz0JYykaq/rYigdQtdlKjAtcPM0NSRjRwogUxEzY5SBmUYDGGs1
QD3WrbTajPOTjdiqjvFdzUP3vtr/KENTMtQ4YLTvBGCmuHUXdkgebgDjhR1RH6Bd2BkUaxd2BkgB
nsnCLniOCzu/Xdi1CzsS2oVdu7ArwrnKrV3YuVIoU2kXdipoF3ZOpADahZ0M0sIOSbwPYLywI+oD
tAs7g2Ltws4AKUC7sKvCuSzs2hO7dmFHQ7uwaxd2RThXubULO1cKZSrtwk4F7cLOiRRAu7CTQVrY
IS9tABgv7Ij6AO3CzqBYu7AzQArQLuyqcC4Lu/bErl3Y0dAu7NqFXRHOVW7tws6VQplKu7BTQbuw
cyIF0C7sZJAWdiPHhR1RH6Bd2BkUaxd2BkgB2oVdFU54d1f9C5nh7oRpBqXsJZo0g+OnmmZwYNYJ
bZrBNs1gm2awBG2awT20aQZp9G2awTbNoApa1791/Qsk2jSDKbRpBl2KtXaltSsFEk85HRj5qQ0C
bs+K27NiM2jPimU4V7m1Z8WuFMpU2rNiFbRnxU6kANqzYhnaPM9laBd27cKuXdihJAwWdm0QcLuw
o6Fd2LULuyKcq9zahZ0rhTKVdmGngnZh50QKoF3YydA+tFGGdmHXLuzahzaMSbR5ng2QArR2pbUr
7eUCYxJtmkEDpACtXWntSmtXjEk8uUtLdzPpVovmzlLP4M7SXbzZlu8sRSKKvVQ1vbMkKtS/sxRv
fLiWdNH1sUtLG395SX/sqmp205om/RfezCN+38nuslOd8wL1AC4P3PEVFz3fkr8SJwIiNpj/u8PP
BYIePxcINQO3PGDH4siB61yf9/CAnwj4HJ3f5ecCAS/ToUcRshGDt1C5N4WYcno3g+jDeJ0wEXRt
+jB82H143HHdTbUZThIu02kujG4R407VYX0y4nWQE2yuK8w0GBwL4MJbM7UCdiQyCGuIFGUtZaZi
Th52GND1bemyFrPGfv2Y4eAtNiXNLxOmdbuqjWV6QE2SyeLiV+rmC1qD2r+kidxZEVEoOj/EMlF0
oKh0aghjBZ1Z1AW+k3p5Ufmxa95NhTuWzPJ/+ELYPt6X3GSDr8nBU9fg/PLvlxd3s3hpgD8rLqw/
+y9e1pZp/0hM+5xp4m5DfluWYb5JAmwcFv9CtWdyTtqDKwXZb3h36CQmutlBZM/r/u9mdhPokJrf
/50l8fnf/zVfVDvf/wX9P93939K0y4cKMetmoG5xYQjNv37c/cgEqgmaUCzO9av79pJyYRHgcf8r
HOTROY1eUh5c5UFBI5eVe61LysKfDu4fU2862j3sPq2ZHqfaJvpIaBtf5zBO3s6TI44BjoaRzNYF
SuoGeq+6jm2ATrK8yLynt7z07cnW9F40a3pFpEJre4vQ2l7c9qZoyK3aJ2p73QbB+RpfZI1iZHyJ
tU1rfC8aN7691viWoTW+Ssf3GRpfh0FwvsYXMWVGxpcwga3xvWjc+I5a41uG1vgqPV+SyNM1vg6D
4EyNb1Jzw5cO1WyN70WDxrfb7vii0Nrel7Pj6zoGztf01t3xJaJZW9N70azpbXd8MWht78vZ8XUe
BOdrfOvu+BIPW7bG96Jx49vu+FagNb4vZ8fXeRCcr/Gtu+NLPD7VGt+Lxo1vu+Nbgdb4vpwdX+dB
cFrjq0B++mtus2gyzYtr7rkFBvfcZtF4U77n1uOqSdxzExVqX3OLwjdj6mmuPZOfPn3fPab2ASnH
cCxVOPjzXni9xIL2CKOd4e/0sct2y4kgcN5KlIy3nrESTZkOSbttOLqOFbq+Gl1wk1ih62pUPFkG
cUXFuX0NcRUXFdyfn8PuY879cKJVEaQJcVBugu9hJ/n7JkAF9yZgWp4EYAFYE2zbEMyZC/Hz++Pu
j/ma/ev3+0+Pi92nR+jycXYyprA8AkHw83FXq0mzZMsH58hs9PvY6J+F6xWFg31bUt+2d8mY+hax
0aj41iHpBdGG+hZPQvLbRsFLwgjyrjV6HTFJ7piS+bZXqJLtZnvxq/WFWfz+DTL9F31oNt6qtdIi
3o+ALTBAMcgi4koz9J+qCOsk6H4Fod6IF1lVi4BIWN3YQiRwB02os1gi4S0AOPht42QDOmB9Y7ym
DojrbsfXgvVDpgkgSI0+sM7unbSzo8L8noGitzGxUv19uwksb5e/0DEvrsPjffECzEJ0xKnhSZiF
pzBNHNhy3LaWQ6kGfOMjS6MBPRYVllaFYtx93tx//v1RzrzxSqqptDvhs7U7t63dKRQ2szsn1oeH
VKHFelAsiVJ95mOghlEyN0zB4lb5eMJJDRPrvu7JDVNaJGGyWX/avzOgtDFSUhHEhKWFypvPedoh
2HyGFa6i0X3Sy779/f4LK7K45nvdDxXGU65Mu25P0ZuOPXUik4yySGiS74pnw2HkvXq1x2B7iBLt
fnvY/bH7+pjlSqF3xZPHD1++AMtd4o4zsW2eb+bX5PHNfEGmAQLQJ+cmONt3ApxX0xZl/rUoI+KA
ibfTWj0A6KYrdTrDSWm1aLmBLJ74/BwsNpZOoNjz3QY3Uzm1nNfxvFcX/0IOuo5rm/1znM1py2ho
yhzNGC5lQ/OlNF3KFGuW/KhMVV2NEJ5LOvKFzhIH9wb2TM50RRmzrKcWu+/fM5PHZuaSDZI5aae9
Kpz9tCc2+wu9qZylmpstTzUFQSMPNAdtIVOrIpyl2oS9OMOR3mgSnzXpHTPp9A47QL7sdn9qHsaR
c2VmOfk00UfqeA6j53hM8xAD1EhPaZzq0MgJs5xGXTSiTsLPJi1mxlKjGqEzlwdWBhNrXP2FtDwV
vaayKzObWdfwHF7RPPo5Kb0kyaHdWJ+9GJ1uYooDXiw1jc/QPLxBxCPi6wf9PC5HHkkcJr99+xNC
d5LgdRzOvdeLOEyzb+asVEJGwjgJspCR5e6Pbw8/edRONBqOJ51ROVQkDRMRlcZff2qiRIoi2GzD
Oyw2LAkCHp0xrP6ORmaw333i9y7xe4/4vU/8PiB+HxK/j4jfr4jf0egQ9ntA/B4Sv0fE7xPi9yn/
XVKf/T/3nlsMmtchQ+aKXSovn5JFfPdK9VVawWOEb8dvrAkLX6OIGwAsD1O3RDcioVw8uUP8ZCLg
/9vj/aefrA4/ovfQ9Ibl8Fb5r4pRRvZiNCTQsEMkEmw+kSPB4mWQlGPZ9jFfUNQg5qvKKx/N5fhs
kDsVAsqfSSj9DvLHyjN9UIaSSoFk3XIgWXwbbA3r9lCtrAp0EW8KoXXLsHLpIxMoL1pboL3ysGXa
rZRnxZ6ygWjY+JHceEkE5alkE2fXCHKeTYNsS5NAObj2qhJcq+PEPxtOumfDSe9sOOmfDSeDs+Fk
eDacjE7DScWcFpxU+E/lhsSoEHudmtZ/YPU0ljbnPByLcN7yDFQ2lmGAl+MtIS5OLLaiTjkgvfzb
NBLlMONMhUaH40VlBgiD6m+LbfW3aVT9DaFXmEGvF9U617Pqb2/C6m9L5Lewg/zmG/GVdn+qRpmy
3txFbylfMe/u/cJ+Oq0u69NrDvyV5+KaLzsF+vgxOzPi3y8voHBeDH0sujsdvaLGjHJ7LPfqCi5i
EQP+PnWBZLWSRDo/5Sit7K2dVJwT3/xgKN908ektl+rCHd3UoLdeEBbLwlIjKbLatd3WJFlWU8RZ
71Ksq5EVm9BTN4FuBoDyyXU1B3iTJqom6ZEC5E3r65sGoH7vT/uqvJ4j3DAoe88cOUDe5IFZkwH0
D8Jrm27OIS4CbW/bEQHIRTE0FwWAXhyadyEth4/+Fmz+r/0/My9LblhpJtxsYDHa08yEB5jsSvu4
iqkOvBd0Xwhm8WJAUKnzSm/ZZVvIMKGbTnlZHXD9lNu6cxGb1LGZSqstK+/MAlhN6NWjKJKFVLpB
mQepiFbCeyaMpIyzI9HbSxuNvyKOuatS19IR0kdb7rj7hs9TNZ+o22IMlqnsA8u7w9q+SUaPjnoz
Z8Ovz4bwLxvhohuEDo6aX5MNXBWNqKZtp3pcZ/8BXJ3brpv6wErR7Cwv76UpFUmhpovQpnoLQJF/
ghhJzmzx7nTiitYlWyZU3XLgU3xUz+qvQ3CD46l6S480Q6wWuDkuALkDYDfkzPzrwthpgj+FLgMQ
/ea+ejtov83OvN8a4M92GQLQ9Cr+qhEtqGEWzJBnBMzUzBwngL2ZANCrHICR2gFYs2tjNgBM1sp1
+7i5XZGj9LHRkAU4kz5ukN+6hgjgULtq40a1p/DudKajA096d3r/ozLtpT31jANz/bXDDSDrMRzc
mNMx02UAY30GqMW+0OtD8O86AgAOvYUaHETrcL33Mb0fmOq9HRcZJ3b6b08DQB4HSys9AjAfCwBW
4wGgdnPEuDhke5oaRQAHmPfQ8ULtIbmRKpPLRoc/sSBnTzIjaz9I6tECKOy3dmyVC8BuwABYDxoA
p+alO+lHaJ+BoywDpmW9Z65l/jPXsiO0r2mjDnBA1w413dFRtNzZhNcjnZGvN8jq0wRwN+kA9gMO
oNagA3BurouJB7Bvr6WpB2jE3AM8YY2sZR4BnqhGHrG9p51EzjniRvyX/98+sCZajnkWFTKwBg+y
xkNv4rE3K4feZF865Bef/NIlv/TIL33yy6DyJZPCqs5tsHgJjYUmo2E/8bLDP3eozz7/7FOfu/xz
l/rc45971Oc+/9ynPg/454H0GbmqtqpzVU2oAMgGJ831AGRDfOZSWVJS4RoBsiE+c6ksKalw3QDZ
EJ+5VJZFqUiyKd8ziCfXlXFwmhsPwAn8Jy6zU7mQEF1P74pXEISB9n5Mq/dfReE89bv5BS9OOw9v
D8oX1xarN2+1dzSPLsMzufvFODmTu1+Mk/O4+1VmInvhgSmo8fsTcNRafsQka0wcpm9WWb2gAZ7k
BL3cbnAb4N07frdn9n4Svk7YP40CCSdhsgUEA7N8lPEYrmgeuln7lL/3/3v/kbVPbtkyHMfiKOny
Ak29WWmheXSYLrNW8S9UQO+jtXLix4ln0cRf99HEEJkNaSyTCZH3dL/eKdSB60hEpgg6II4Z7o8b
kZNrxti3joaDSlR2ibqvGEXrqJ6eSY2ZR2Ps4IcQBCuMNcJJoQpLllU0jxRhrXkkMSu3UCb80AYm
JlG0EHhAh+YRsTTVZBzL2IkWfiPsMDyCnXoJ0HJ2ug2x0xXsIG/GWrHTa4idnmCHeJPQmJ1+Q+z0
BTt9R3YGDbEzEOwQr6QbszNsiJ2hYId4N9iYnVFD7IwEO8RLmsbZW7NZBWwVld8LZypNhbqMO0av
KNItyzNk403B8Un5Srlr1gHXLLV9eXJCzp1C3vsEf6ygkftUlZnVtYe90Pyi0AgDaSI0xQtSh5Ws
eo6gsfhlLHRWOal7LPKvlrsHC/fXdE+32D3EhNFY95ysD9UTq2kfdlV9SCPplpEosr5JmoB0hqkm
lF8mBNBoQs/o/eDjasLJ1EXt+JiqS68Jdekp1YXG0kOwUPeBJKVD+t1U6fr2StcvKh3hkZ2h0p1M
M9U+sKlm9pvQzH4jmtlXaiaNpY9gwcwegKTfiIqZ6vfAXr8HRf0mXPynqt8nGwTqlZfpIBg0MQgG
jQyCQSODYKAcBDSWAYIFM+YA0lBCtNl0KA3th9KwOJSI5emzHkonG2/qrQXT8TZsYrwNGxlvw0bG
27CR8TZUjjcayxDBgk1RANKoRQaO6ajFrmZpRu2oOGqJXZx21J5saKu36UyH9qiJoT1qZGiPGhna
o0aG9qiRoT1SDm0aywjBgk28AJKBQMaoqYHALv6anc7dzqnwXto2ZDzzyvjYMGlJhkazt4vzAUri
eVUyyjPn7Eg+isqEqkT254bicJ5OLk/zKHcVjP+aG/PHe3Ugo8RPgIX5Sy2XQzfnB7CQjFp9AEuz
l7EWRlEipIke1SuoXztQ5z0pnnuSD8mT6yUff8Ppq+w5M2lsJau7iBQ1dXtP3zkMq93h675ibdEl
2gAEmnUysiD0BLwq/FHz1GqVbCdM+Oy/jQQNSIbCP76hqHs2nr8ISx33HNxSUE7EES1F4amTeZQ4
CtP6DbOKpcJk8lIslV1cRsVSWYvueVoqusFBuHCfUyUHrFNje6dffnai+LlbzsMtg4hjUtSGXLKq
z2P1Z/X6rlN+4KH0+Ur9eaL87GM5y6XP5TdhSp+j+kJln8sP+JQ+h8rPPeq5ivTzQP05UH7WaEtf
3d99dX/31f096Jmu5NVjbuloZIR9Th9kzm+Z5Slf+QOnhQ+qTCf04BzPV9v0OhegwasD5O/tMlrB
ZrtoxDj9hzZgmWY/M044M6beW/f43lvdUELJeyPDB7FfG/TeqM2bE3lvzQgzOJIw9/5eQ1J8Rl6g
XThsxQvEBPoCvUBF8yWL1zu+xasbrSwNUjJCGfu1QYtHbTSfzuI1IMyjW7yGpPiMLJ5dxH3F4mEC
bS2e/Kdk8frHt3h1L0RIg5S8BIH92qDFow7FTmfxGhDm0S1eQ1J8RhbP7lJPxeJhAm0tnvynZPEG
x7d4de9cSYOUvGeF/dqgxaMO8E9n8RoQ5tEtXkNSfEYWz+7eYMXiYQJtLZ78p2Txhse3eHWvdUqD
lLzKif3aoMWjgo1OZ/EaEObRLV5DUnxGFs/uanLF4mECbS2e/Kdk8UbHt3h1b45Lg9T4tjhAgxaP
erzidBavAWEe3eI1JMVnZPFIfTazeJhAW4uHNR3STthnOZp//e1h98fu66OIha5yhKZBmnpTxSOc
IqB7lIopK5f+PxJ5WRr3iz1m25NuioAmCFwk0zk6ze4JaPZOQLN/ApqDE9AcnoDm6Bg06UBOwhTB
dNhQwjW1pZGlkb4bW08a3o8OYvs08g/HDraKUZzWoOhgqepSrG+n6lKsb6XqUqxvo+pSrG+h6lKs
b5/qUqxvnYwp0gOYsE2rJ2ebqMyXSoorF9uE3pPS3qRzME01Cda3TDUJ1jdMNQnWt0s1CdY3SzUJ
1rdKNQnWN0qmBI39pTT/8bt3efZjmpNKlu9lOIuLWb69H710uVrlM832LSqNv/7kub5X36hs32qJ
7HnaZ//mT0uiZbfjZAlZwDuqsPUSMjSTXPGvwp8V2cyvI0PZpHIRFczkQsskb0LHp+TBZYF8u56/
jahvWT3szsL1DTzucHGhuiwh8dVT8YXd5rheTlcqvkr4RxT+Yrkxysc4WUx4xngzHCit6ySZkDJO
koB/QxuaTBLzhgakwnMcWE+F0duFBQFSg0rl+li5RQC5zk0lGYREb3CBdH3bsccGfPLz++Puj+Xu
j28PPy8BVcDfpPLKydnz8besP/6S+bXQUfROUDL/r4loB/Jx/XayIT8m6Ue/j93YeZt+xaleK78u
g/UdSTZariPy457xfMx5qC5Gm+g/SSxhsk3oj5O3Wxv6Pk6fb+VRTeTuIv2xq/rYU33sqz4OVB+H
qo8jWlqxop1hrGgn+0i3k32k28k+0u1kH+l2so90O9lHRTvHqnaOVe0cq9o5VrVzrGrnWNXOsaqd
Y0U7V6p2rlR6u1Lp7UqltyuV3q5UertS6e1KpbfMJsfkx+08ou0T8yFp+xQu5oqPUfiGtrZ3yS35
kZkkhcFarWmDtVLZwQxya+bj1jQJ57RbMZmGpPt0J75h7tn1JJmaT9K+h/pM8XizpA2Xqo+rJLp4
28cqCTI1Cmk1Cmf01DteJHPyYxAqOI+nc7q/w2DxVoU2lRa2HAnjKS3LiepjvLjhHPWvcH6XAc0v
8wtpvOHUZiru4R14/RZy5zAFver0USqzCT1mgyjiHcz0b1T4WHIERea0KNgGhcxpHe3tAuIKO4R/
oT93sfECqY9CrE/Zh4mHf+hil505IpwwQ0R86EwJniYUs1OUWUIUh/zZ+zH0CPmwD3h7hx7RrCHV
B8NJKddU6cq4UB42PWyVeZWryqNJ0U/lvppQiQugM3tIWixWvCnaEZX2gNOuxmE8D9qKRBXw6bxa
rVfV5CYMRSAN8nH1dpE9O4Fp+QZmr+qdzbRuuNmmafMKX7MDjGvmiSij1ehAt+sgGjcQ6CbcFpcn
UqB1w6tL7rvgKKTCQnGAKF0023KFXrEKOKq2i84GadIsH3YZz7FZipyQiuwXX/n7cuBgMpM5nU5F
QI+6fVKl5Pf7T4+L3Sfmm0S7h/QFMJHXUfgJ82C7fcWjqjrkfloGaqoQdATYLlOH+Ez7gLr2ptWt
eZyEulaBDCqCn8fjbZoDV9tzH6XuhnSctQiKlGFpj9oQ3GcArdPKYD7mRHtaoifre+oCkL7vt2+n
9aSyfbtMh/7ZSoW6JKCVijgwUbfqLhcLz+FTQ4bi/ENEzJpWh3OQuoocLcXo6RjRU1nY0t/CuZ69
DddgceEwQmluU7Pcf2VlUbmUj6k+47rqIzRd3SigOE8Sgk01GQBZI7yujhrR7ArPxxQv9lgngFa8
b5K3pp4QK9o5WKsMJbeP5Ge+udEh/z7sOgiDs/CpTfQ591M9bzoaTrX288m54CacWgis2cFE5bV6
2q2yX1Y8hVbVdtTPulW1XdDCSpDHPYJTqm0hd082959/fyyhmIq7NNwxBS+1ERN0MrHa+7AyGiET
he1XowJIWy7O3qiI1DL99Apg+aKOggk9IwAcr2BFJJCFf6uRAlyzUqKeuix+e098edJaRF0efNom
B3tZQ92qoqGYprtET1oI9quUqhBi1TpkP55li7u316NXAktt+6JHDQyOFb0EEO32lw3VhRUDmeSE
7/yIM0PYDHra+mK/7CLkIvYt+Kp8Pu0QN04zNoqTvOd1JvQ1VzU3Co4Y5uhVNjPAbodKDagv5F1t
B7b4josZW09Ztzq1Ly6fdauoF3a0rUJ2rg+xVX6qHZwO9aCI2jeVTAG9GlejKXOkME7OLqe8+6je
sAOg7QqA0rZYsWPS2RmoN6BkUPm/1BcHe1loi1q2T9wwPsvtmY7jqa/HT57exk/Gn7Lc8MXxa66R
iYBPYu8LF2x2aBVE44tfmdWDuND6dM0f9M5muSAM7OiW5IjVkq1uEs4Rl6mIJEkQB7RQBKJsq0Ws
ExDE0SbSZdBBcqGApFmNi1/h6oS5hL/sdn+yGTNZIINEcT2QJEU1il/ZqNWo9KEUNrsIHPhbpQ03
L7sHnpEsm1eimdc8ltuv0Uzodabjpg2zvdZZeOEFrsiQuiwO+6Dh1SLV9Ewimj57tyZ/s2b/2qXV
eRIcOeb9vU7+Uwh/GHnZqziFXwf2yZ1Yv9x/+snEHc49/ioO+Zgi9muNLFsZwevpnSVBwpzKz7cy
bTMWMBQWCoApTPkXtKU44mILYUvWaqgZ28XrmUvyA674cDGEttDXXEJ8QkD2symu+FaxdeYqOVkX
Zw1fSCinxXSkonvOTj1aNBYEfo0YwUxbyZFfrnHqXbifo+tdcHts2GJ+ntNkPD7eZIyQIhoF74LV
m6VSSjADiNsx0gZE/s6YudWvkw4BuuSAM2V1aFp5HNxEx1NicaBZt4grPGLdojsI4xUm5QrE7FJu
WJbVznV7RPgCnLDyiNYMI4AUZH3Jr0qdeDcEayV11mWHGSBv7ciktQDqFgNoW23Nnwg/bIC9Wsef
mOJqdM1ec3V9aqO5Rn15fM3V71WaIQZoWjEAmtdbUyui569JxVWpmqXiNr39fI5qazCtmGEGaFwx
AA6iuA1NCLYb7o0tfEX0+jS9LTZUtYYYQjKCPM6HRpPLrseTtqBbhUYtM8iwrFi00RgkLqP7/73/
yP5/+deXx/s/v/zMdk14pIN48Pfygv0DrpLTOgAiEpfNZRGp60j3vSgxATSmCFWxNREtJqQFt/DV
I0S2bnRjAQ5yAKVcfagpG+0IlZrXxP5zEs7j+kvetKNFZiKiozXJffka8zpZzs3Spxs3MHte4d27
5WTp6xokLrS+n8H2zuT++4c5E80v8Sr2Qq/zC8JZWv5mjl6ezUuEm4Qp7mb3/dtfD7/ttrs//vzy
4ZH9biomkZWq60/vf+wYS5vdh4+3D/ePu0tSvUTmKmEaMvj73yEK4AF2VIMPCu2GUBqfrrvYff38
+DtZG1f5Oi3onU0LkOfi9x3rtx1bp2N7tVsgbl0erGP3LzMk27Hboxbz64gIyqLnD+ktEOIcWPMQ
hXHy3X07mQ47Pt7Bj3DkZyji1SzdoppO2f/S0HlQvsHYkw+NoDApixKWaYaly7HUfuMDjHEt0cJg
17mXxPz6PoKcoj2bHAhpdprNhPnCwV+fPu0eqGezqSQSHpIVhDjfK7hqObWOYi9S7QP9/W8XMFIv
/vZ3wcuEu7nDPvy3e8V9XR7tMfL57/wq9GDIs55iOW6qqEd71COeqPFqKG5/wn+DCb/+z1feAy//
JZrS7pjdoWUuL/zAQy+kkJ943n57+JgmlBQ9LaIKpl6iCOcsOPPqopnqAvLakTmilXXWPTKWd+/i
ybX3mv0nfr1YvXmrvGCmRiu3DLd3mtbJCEgvXf+sTfFfKa00R/L7eCPQ5s2IH779xmapbxDWEN+I
faFOatPYhNjJtjgv/vVvoo4o7VvV8bO7vBZ1ROmeVR1Rum9VR5QeWNUZZHsLFnVE6ZGiTtp5y78e
4eb58mZ7JwqLPqykjI0mgZeljJ2vxUpcSq6TDWleKvj5uFMkic31IwquefLDkcwRRrpTJH2VnVeV
aLNiYF9MaV/xbIEDJfF4s0XanU9LKXFRLNpTV9OOR95MStubdUV2Gju6m/GDy/LUmSPQ7jYUTUm2
fn7IvJWRF4lNmCnPGPMqjdm4BM6kc3DJmOi2hcwJeh53bKQz1WxPCHtZz46n0iGmKVPAEsUU5hFY
M9W1ZkqwRDJlKamUikAO2vdKqfRJDAmuZKUPfGTEiWL70a7W+SRZxpWxvvcW43lIxUX8q9wKIdVr
VuUV/ol9kU6m+VZ7ISCo2AMw3LLgRu/HeCwLMPtXYW9MWT0sV6+0dEvG91RamkbhFnpVpldsFsNc
bun+mGG8Hqe50AqThqWy5pFx0RZSuOdRbNsQgi04fp2bv92MY1Zhs+V6Xr7wJPsYRTb5mquw8MoZ
LkTlZfZZ+rVAYl+k/0rd0uzy7xSylU6jbaxtWyHFSuwP6bZJLdyrxu1YBAfReiE6n6NnuhbgWlG0
F3tp7MsyMpie7CXIVaV0X6cqHSlCbxJul6aiESzyGmb9vprcxSo+uDqukzBEQ1iqfPzjfRK8hsDI
16tZfF3euCq5z1UC5XByJYF4OakSINppOwwL7xaNw205RlIdjjcJ16ta4XiCXt7GRRwGr2fvJ+Hr
cDpe6TcLqNZbDexj2qeeVkEyDqWYXb0a7jUrnF5Hdj1RjBf/B1t3vYbV1ev3MTwoUS9BzV//fHz4
8NtjFR23NcUf6cVnzKMP6e/8St7BKDR2lndYNgGOIG/LN2pNjIP2PEzczsSMwwLe0rU0NRQeNq7n
1D5z3V1oqJfTgyDv1yLZBSdVd8um/MxuiYJIwIH8aH+Iiv6cxaxzCgytTeB6qWexvojWEMwcwR2K
BnpWwkZbPW4x89qzaDJ9HYVvErcgBX79pYqVSFBgsH8oMLruH6KegDkq3kIRaw04JsQz3/ui6RIx
CjaXfLGmKW6GWbEdWdBOLnihAvT+ZuPXVcqkrXZGlf5VsLqhEwYppnX1iHsb37LOYf+hTJJW1zVY
DzLTANgGIKnlsDmIHDYHloNixt3EnQhVF+SUrKK7m9jrYMNGSc93oudb0+s60eta0+s50etZ0+s7
0etb0xs40RtY0xua0qvOg1ZWEGV3SM0KFgNsdMoGjEwbgDQlO7YjHlYocoE84VB4uuHfhY3f/bW0
CWR676p2ngoRnF52ulS+vlON89yXkiIQxElwuokszlqzwPEsHHobTLuvKiLIfy0fmfAbyUiK19KR
CbYfQEgyT7EShXhEu2qspes3yuGuIEdT/ijvz0HsLD4P4VwgESDa0ZOxiF8e0be/Y9L+zqHb33Fu
v1+z/b5J+/1Dt993bn+3Zvu7Ju3vHrr9Xef292q2v2fS/t6h299zbn+/Zvv7Ju3vH7r9fef2D2q2
f2DS/sGh2z9wbv+wZvuHJu0fHrr9Q9P2V/+lOPc1mutJXrVzNHE6hG3NogEF2AI36MAhVtApJwhT
Vkm2vEpiXsXnVHwbKj6n4pepyD5b9ZClfF/OZO0j0RxvScdIU490KDT10Im4qISVfZKajULGnFGj
7OuhPo7mpBuO2DQRToULUMlyLsWGV/dIt5vC2Tw6IMTKaMsWY9pBtkca6ZAK9qJtOYVZmWZpR38f
pFqoqONmbtzEGYp0n09tL81SAGfav5WlXxafMF/NVXHzKE8isREzWWzcVge3FETwfg1PzaL5zfE1
OlS4+PWX2/uvH7/9v+8XvoffWFIeRHDuOhh3JdlpCF8k8UmJ+/WI++7EX9dsd9eZ9KAe4YEz4at6
hK9owvhfEkPZEISQFXSfo8pCwSa8gz3i17N4cqewU1DINynUNSnUowvxM+6MpURTxjco0zUo0yuV
waQbLyf1pcsq66VrUKhrUqhnUqhvUmhgUmhoUmhEF5J6PF7qe1xfpmtQpmdQpm9QZmBQZmhQZlQq
g12i+Ed58hWXoiCitbyFnH7Z3MaX1KO7tfaGAfYbwwy9KocW5RSxMZQ+aSf4Q3bQq7m9k6STJZzG
0kLxCLF0m7hQKwtNtqzWzUPMbKr10iA0vFrxkYdiyrDOK2P/puBeFiRZDSqwXkDIWb6lFFl0Mq4p
m/uSHW9NcC9V0jKGOr1QXD0IKEc0u9oeR5uQ0kHFhfaKkyRfZieeiQ5upp7+unNVxnaXhNNLOtYX
hEU98mKx4nJwtZOPyfHoyXFc5wJ2y7Edx3Wu6btzPJ1Efk2O6XquHK+/fvmpYPjqaZkKxnGvX4/j
XnASEU9ranHngFqs1olJzWHXOdGwO61pK2UFqGSGME9aIGIZIulau3AZxG3iTbBB/CnJN3z3jm+m
bsKAJwTwJlQdms4whHAJlI7wIrfBOKCK0Gjhmus2WCBXOuTNSYHbKkCmuFmJUkC2YmhG4WbzUifn
5QzCWkUKWbywggATcKQjEM1TAmRhmoA/Yr6qjsAkzgnghWkCXaaKdzoC8Z0g0BlThRUEWAvukv+i
CKSLDeirbNTHrDhVJ1spAHarc4Vgs11pbtAVdtb5TaF5JMJbxD1oGLeFF9WoU79C0H4UQQhm8U4d
tZ2fEvWbIOpbEu02QbRrSbTXBNGeJdF+E0T7lkQHTRAdWBIdNkF0aEl01ATRUZWoRDrbDLiOJ+SJ
1MK7Mj+RqhlwXQy0fj1DLnXizRRx2dUsec3EZRfjse3Y8q3ZMg7fLoZt27HVtWbLOMq7GN1tx1bP
mi3jYPBiELgdW31rtoxjxoux4nZsDazZMg4tL8aE27E1tGbLOGC8GOltx9bIgK1qRcgrhVXES/IM
VMbFUZOFlkStCFqyMrCRwzdmy5VvkOgmlcksNGSdlTRkHW4CmpW8XoxXZo3smDeS30NmNbJrwfDP
IiNy2ofKDraULKJas/rIlTQlXlYnveROf2ihCswrXrAVJ4FRiLxTpFw0prejDdBoKlXPiwuC4cl5
K3QqMlDzLLFQzprAES487LE9AJP7vMqrXwJJ+Rf7x5HMRae/iVNSNp9QNv8QyuY3o2wUGk0la2Xz
m1Y2vyFlQ+/9CSTlX5pQNkp01srWJZStewhl6zajbBQaTSVrZes2rWzdhpQNvfQpkJR/aULZKNFZ
K1uPULbeIZSt14yyUWg0layVrde0svUaUjb0xq9AUv6lCWWjRGetbH1C2fqHULZ+M8pGodFUsla2
ftPK1m9I2dDr3gJJ+ZcmlI0SnbWyDQhlGxxC2QbNKBuFRlPJWtkGTSvboCFlQ+/6CyTlX5pQNkp0
1so2JJRteAhlGzajbBQaTSVrZRs2rWxDe2WbfxLX0WyZkHfZ6ERQTq8n2SWRyNtp9uthxg2lBdbj
ZkSMm9Ehxs2omXFDodFUsh43o6bHzej5jxsyd0neTrNfDzNuKC2wuFXzfsFPxOyi3IUyXMcTZGNV
utaW8zqfzLev2f/eWhwo7OsY5KmViG5Xa3062C2SykonpoHlaWspX5/Iq5jw1KzS+2fm8kgRhHNb
xoc1Li76hYyNSXCTvJ4V0hMipNaLzqTeibS42W104GT8FpxLek7yUgJVn0rLSafkpPO/lXtcl4qT
NmvNpuA0ygNolHbT+tZGs9kFIcsdrgqQDfDcswuqRc5oBQbHqvgoUt1H4a0SRauxKMl8PbM/v8R4
LbCQ3XkpSnYbLyk+xNUXGwu0DZfTwqsAAAYamb/c1u14YeT1yMwGkuTTOzfJlinFzdf7377B87C/
sPZcdF77aWvlezrkHZ30A4/YVd3PKfJtHFctotbtIpRFdDIZ6U5EJ0sdn/+z8iTHdrkMsic5RCMu
Czx6P4BujiAN0BTVlE90VIU0DsOEP9VBvEu2/vTp+07Ec45elcps+SOC1brbIER/r+IUQWXFym+h
++GtnvKHKPugnP81DxvSkzKjmz7MPNJnG1FNy/JfhsaejQnkQUTcBqsfUbR8Trb4L4UpxMxQVXGZ
iye/JcP+3JYtXqasvKjm9aiqEOar/+Djraxa0XgrflfqRvZOobVumD4dSM+a5ScDxzx2aspvAkT8
9YGueIOATxFd/jXg0Wi9ieLJwPJzgWNexRdpwKIsZD57HuxK/MKR97v4FKxPtCOM8fa9Ry0Ws7sI
3+ZfH3efQWKQ+JENdKiEO42ZoCW0tuNBekYSeciSrg8A14C4mDqEUJBxVv6l5rvrosV18odnDf+F
--14dae93404ed46460704d17ef284
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae93404ed46460704d17ef284--


From xen-devel-bounces@lists.xen.org Sun Dec 23 06:11:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 06:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmemW-0001Gs-2N; Sun, 23 Dec 2012 06:11:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <firemeteor.guo@gmail.com>) id 1TmemT-0001Gk-EH
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 06:11:25 +0000
Received: from [85.158.143.35:57760] by server-1.bemta-4.messagelabs.com id
	BD/90-28401-C80A6D05; Sun, 23 Dec 2012 06:11:24 +0000
X-Env-Sender: firemeteor.guo@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1356243068!5257600!1
X-Originating-IP: [209.85.223.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7386 invoked from network); 23 Dec 2012 06:11:09 -0000
Received: from mail-ie0-f178.google.com (HELO mail-ie0-f178.google.com)
	(209.85.223.178)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Dec 2012 06:11:09 -0000
Received: by mail-ie0-f178.google.com with SMTP id c12so7892350ieb.37
	for <xen-devel@lists.xen.org>; Sat, 22 Dec 2012 22:11:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=cmeG5Z9dgPp3vNgSvf2UGvC74/VsqR069HoprRoHU1Y=;
	b=c6U3bO3ET7LaMfoBG0ORQUH6i7ckszctZToz+flEnkydeaZ2p0SrU1e5kpBNyRGVOw
	Q7Hcckf75/NlhWbI2NUyBb3VKhzNZ4lwlD0og0+dRm55E4+uROmMncVrhBmC04E6Ggh7
	90ZkUULLjUyhGLIiT6yf9FRUDz+kFiwmgxVChzL4CthgwprVWyUpAK3a/lLCoBAhx9s1
	fCIcksV3VRlmaPodWkvIE92a8ZXZnHFkB8DpMSjSuG5kKM8kRQKNeQrAbjFWA5WorddL
	I4mw9bwTW+JryzQd2agjgZeWStr/h410iChWTNNWN3W7pNVTTkpi1RtdkshsCDQcwjYy
	iH6Q==
MIME-Version: 1.0
Received: by 10.50.214.38 with SMTP id nx6mr16762639igc.28.1356243067618; Sat,
	22 Dec 2012 22:11:07 -0800 (PST)
Received: by 10.64.20.4 with HTTP; Sat, 22 Dec 2012 22:11:07 -0800 (PST)
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31F6B6579@FTLPMAILBOX02.citrite.net>
References: <CAKhsbWZ2mCUj6=XM9XDYZqMEei3PcFC7WXKiK48Y5KH+5s_MgQ@mail.gmail.com>
	<CCF8CEEA.561B5%keir@xen.org>
	<CAKhsbWZJTsPrRwwygGoG+G1aHWTqMjeECc3zdTRJLJ9QdP9=VA@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6455@FTLPMAILBOX02.citrite.net>
	<CAKhsbWb25i14jy1dT2-U3qDS6VZmGLzd5qkkCQoftkwf7dMsPw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6545@FTLPMAILBOX02.citrite.net>
	<CAKhsbWbtYA50rCCbAR_5=Bt+5g7Kb_BkKrVo5WF+ZdmO2o8pCw@mail.gmail.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6567@FTLPMAILBOX02.citrite.net>
	<831D55AF5A11D64C9B4B43F59EEBF720A31F6B6579@FTLPMAILBOX02.citrite.net>
Date: Sun, 23 Dec 2012 14:11:07 +0800
X-Google-Sender-Auth: 9X3CRnQvGqa8DLaF7UcKwvP03PI
Message-ID: <CAKhsbWb9LLjOg2pkT+TxJU3NzB4vYeRzVq29xW9iWoP_DHiSMw@mail.gmail.com>
From: "G.R." <firemeteor@users.sourceforge.net>
To: Ross Philipson <Ross.Philipson@citrix.com>
Content-Type: multipart/mixed; boundary=14dae93404ed46460704d17ef284
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean.guyader@gmail.com" <Jean.guyader@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] hvmloader / qemu-xen: Getting rid of
 resource conflict for OpRegion.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae93404ed46460704d17ef284
Content-Type: text/plain; charset=ISO-8859-1

On Sat, Dec 22, 2012 at 1:26 AM, Ross Philipson
<Ross.Philipson@citrix.com> wrote:
>> > Yes, my win 7 guest is totally broken with IGD passthrough (much worse
>> > than linux status).
>> > Before I bought my current build, sources like wikis seems to mention
>> > that IGD is the first card that works.
>> > And now, it seems the AMD cards are the best choice for pass-through.
>> > Sad news for me.
>>
>> Let me just clarify that up to now we have been successful in passing in
>> igfx cards without having to surface any of these ACPI bits. I was just
>> mentioning that this is an inconsistency and might be worth
>> investigating at some point.

So you are able to get a working win7 domU with IGD passthrough? That's amazing.
Currently I just have a working linux domU with IGD passthrough. (I just
solved the last known functionality issue.)
But the win7 domU keeps hitting BSODs during the early boot stage with IGD passed through.
The BSOD varies from time to time, with or without the intel gfx driver installed.
But all the BSODs are more or less related to memory corruption.
I've begun to suspect this may have something to do with the bios / firmware.
(Your working systems are based on intel boards, right? Mine is an Asrock H77M-ITX.)
But I don't have enough knowledge to triage the issue (all I can do so
far is analyze core dumps with KDB).

I'll start a separate thread about this and keep this thread focused.
Hope you could give me some hints in that thread.

>> More importantly I am pointing out that if
>> you are trying to find out information like the location/size/layout of
>> the IGD OpRegion, you can get that information from the host BIOS. That
>> sounded like what your original issues centered around. Sorry if I
>> confused things.
>

Ross, your help is highly appreciated. I think it's not you that
confused things.
The problem comes from my side; I'm far from familiar with all this
ACPI / BIOS related stuff.
I dumped and disassembled the ACPI tables, but have no idea how to read
the output...
I attached the DSDT.dsl dumped from my system, in case you would like
to take a quick look.
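For reference, the parts of the disassembly that matter here are the
OperationRegion declarations. A rough sketch of what the IGD OpRegion
declaration can look like in a DSDT.dsl (the region name IGDM and the
base-address name ASLB are illustrative placeholders; the field names follow
the Intel IGD OpRegion spec, and real tables declare many more fields):

```asl
// Hypothetical excerpt -- names, sizes and offsets vary between BIOSes.
OperationRegion (IGDM, SystemMemory, ASLB, 0x2000)
Field (IGDM, AnyAcc, Lock, Preserve)
{
    SIGN, 128,  // "IntelGraphicsMem" signature
    SIZE, 32,   // OpRegion size
    OVER, 32,   // OpRegion structure version
}
```

Grepping the .dsl for OperationRegion gives the declared base and length of
each region, which is what lets you see what sits next to the IGD OpRegion
when it is not page aligned.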

Just one more question -- is the layout specific to the bios, or common across systems?
I wonder how we can judge the security risk if the layout is not constant.


> Oh and I forgot to add. In addition there are other OpRegions defined like
> GNVS that can give you an idea of what might be just before and after the
> IGD regions when it is not page aligned.

Thanks for your hint, I've seen this -- many of these.
The only issue is that I don't understand what they mean. :-(
I think I need to dig into the specs when I get some spare time.

--14dae93404ed46460704d17ef284
Content-Type: application/x-gzip; name="DSDT.dsl.gz"
Content-Disposition: attachment; filename="DSDT.dsl.gz"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hb1rebqo0

b5K3pp4QK9o5WKsMJbeP5Ge+udEh/z7sOgiDs/CpTfQ591M9bzoaTrX288m54CacWgis2cFE5bV6
2q2yX1Y8hVbVdtTPulW1XdDCSpDHPYJTqm0hd082959/fyyhmIq7NNwxBS+1ERN0MrHa+7AyGiET
he1XowJIWy7O3qiI1DL99Apg+aKOggk9IwAcr2BFJJCFf6uRAlyzUqKeuix+e098edJaRF0efNom
B3tZQ92qoqGYprtET1oI9quUqhBi1TpkP55li7u316NXAktt+6JHDQyOFb0EEO32lw3VhRUDmeSE
7/yIM0PYDHra+mK/7CLkIvYt+Kp8Pu0QN04zNoqTvOd1JvQ1VzU3Co4Y5uhVNjPAbodKDagv5F1t
B7b4josZW09Ztzq1Ly6fdauoF3a0rUJ2rg+xVX6qHZwO9aCI2jeVTAG9GlejKXOkME7OLqe8+6je
sAOg7QqA0rZYsWPS2RmoN6BkUPm/1BcHe1loi1q2T9wwPsvtmY7jqa/HT57exk/Gn7Lc8MXxa66R
iYBPYu8LF2x2aBVE44tfmdWDuND6dM0f9M5muSAM7OiW5IjVkq1uEs4Rl6mIJEkQB7RQBKJsq0Ws
ExDE0SbSZdBBcqGApFmNi1/h6oS5hL/sdn+yGTNZIINEcT2QJEU1il/ZqNWo9KEUNrsIHPhbpQ03
L7sHnpEsm1eimdc8ltuv0Uzodabjpg2zvdZZeOEFrsiQuiwO+6Dh1SLV9Ewimj57tyZ/s2b/2qXV
eRIcOeb9vU7+Uwh/GHnZqziFXwf2yZ1Yv9x/+snEHc49/ioO+Zgi9muNLFsZwevpnSVBwpzKz7cy
bTMWMBQWCoApTPkXtKU44mILYUvWaqgZ28XrmUvyA674cDGEttDXXEJ8QkD2symu+FaxdeYqOVkX
Zw1fSCinxXSkonvOTj1aNBYEfo0YwUxbyZFfrnHqXbifo+tdcHts2GJ+ntNkPD7eZIyQIhoF74LV
m6VSSjADiNsx0gZE/s6YudWvkw4BuuSAM2V1aFp5HNxEx1NicaBZt4grPGLdojsI4xUm5QrE7FJu
WJbVznV7RPgCnLDyiNYMI4AUZH3Jr0qdeDcEayV11mWHGSBv7ciktQDqFgNoW23Nnwg/bIC9Wsef
mOJqdM1ec3V9aqO5Rn15fM3V71WaIQZoWjEAmtdbUyui569JxVWpmqXiNr39fI5qazCtmGEGaFwx
AA6iuA1NCLYb7o0tfEX0+jS9LTZUtYYYQjKCPM6HRpPLrseTtqBbhUYtM8iwrFi00RgkLqP7/73/
yP5/+deXx/s/v/zMdk14pIN48Pfygv0DrpLTOgAiEpfNZRGp60j3vSgxATSmCFWxNREtJqQFt/DV
I0S2bnRjAQ5yAKVcfagpG+0IlZrXxP5zEs7j+kvetKNFZiKiozXJffka8zpZzs3Spxs3MHte4d27
5WTp6xokLrS+n8H2zuT++4c5E80v8Sr2Qq/zC8JZWv5mjl6ezUuEm4Qp7mb3/dtfD7/ttrs//vzy
4ZH9biomkZWq60/vf+wYS5vdh4+3D/ePu0tSvUTmKmEaMvj73yEK4AF2VIMPCu2GUBqfrrvYff38
+DtZG1f5Oi3onU0LkOfi9x3rtx1bp2N7tVsgbl0erGP3LzMk27Hboxbz64gIyqLnD+ktEOIcWPMQ
hXHy3X07mQ47Pt7Bj3DkZyji1SzdoppO2f/S0HlQvsHYkw+NoDApixKWaYaly7HUfuMDjHEt0cJg
17mXxPz6PoKcoj2bHAhpdprNhPnCwV+fPu0eqGezqSQSHpIVhDjfK7hqObWOYi9S7QP9/W8XMFIv
/vZ3wcuEu7nDPvy3e8V9XR7tMfL57/wq9GDIs55iOW6qqEd71COeqPFqKG5/wn+DCb/+z1feAy//
JZrS7pjdoWUuL/zAQy+kkJ943n57+JgmlBQ9LaIKpl6iCOcsOPPqopnqAvLakTmilXXWPTKWd+/i
ybX3mv0nfr1YvXmrvGCmRiu3DLd3mtbJCEgvXf+sTfFfKa00R/L7eCPQ5s2IH779xmapbxDWEN+I
faFOatPYhNjJtjgv/vVvoo4o7VvV8bO7vBZ1ROmeVR1Rum9VR5QeWNUZZHsLFnVE6ZGiTtp5y78e
4eb58mZ7JwqLPqykjI0mgZeljJ2vxUpcSq6TDWleKvj5uFMkic31IwquefLDkcwRRrpTJH2VnVeV
aLNiYF9MaV/xbIEDJfF4s0XanU9LKXFRLNpTV9OOR95MStubdUV2Gju6m/GDy/LUmSPQ7jYUTUm2
fn7IvJWRF4lNmCnPGPMqjdm4BM6kc3DJmOi2hcwJeh53bKQz1WxPCHtZz46n0iGmKVPAEsUU5hFY
M9W1ZkqwRDJlKamUikAO2vdKqfRJDAmuZKUPfGTEiWL70a7W+SRZxpWxvvcW43lIxUX8q9wKIdVr
VuUV/ol9kU6m+VZ7ISCo2AMw3LLgRu/HeCwLMPtXYW9MWT0sV6+0dEvG91RamkbhFnpVpldsFsNc
bun+mGG8Hqe50AqThqWy5pFx0RZSuOdRbNsQgi04fp2bv92MY1Zhs+V6Xr7wJPsYRTb5mquw8MoZ
LkTlZfZZ+rVAYl+k/0rd0uzy7xSylU6jbaxtWyHFSuwP6bZJLdyrxu1YBAfReiE6n6NnuhbgWlG0
F3tp7MsyMpie7CXIVaV0X6cqHSlCbxJul6aiESzyGmb9vprcxSo+uDqukzBEQ1iqfPzjfRK8hsDI
16tZfF3euCq5z1UC5XByJYF4OakSINppOwwL7xaNw205RlIdjjcJ16ta4XiCXt7GRRwGr2fvJ+Hr
cDpe6TcLqNZbDexj2qeeVkEyDqWYXb0a7jUrnF5Hdj1RjBf/B1t3vYbV1ev3MTwoUS9BzV//fHz4
8NtjFR23NcUf6cVnzKMP6e/8St7BKDR2lndYNgGOIG/LN2pNjIP2PEzczsSMwwLe0rU0NRQeNq7n
1D5z3V1oqJfTgyDv1yLZBSdVd8um/MxuiYJIwIH8aH+Iiv6cxaxzCgytTeB6qWexvojWEMwcwR2K
BnpWwkZbPW4x89qzaDJ9HYVvErcgBX79pYqVSFBgsH8oMLruH6KegDkq3kIRaw04JsQz3/ui6RIx
CjaXfLGmKW6GWbEdWdBOLnihAvT+ZuPXVcqkrXZGlf5VsLqhEwYppnX1iHsb37LOYf+hTJJW1zVY
DzLTANgGIKnlsDmIHDYHloNixt3EnQhVF+SUrKK7m9jrYMNGSc93oudb0+s60eta0+s50etZ0+s7
0etb0xs40RtY0xua0qvOg1ZWEGV3SM0KFgNsdMoGjEwbgDQlO7YjHlYocoE84VB4uuHfhY3f/bW0
CWR676p2ngoRnF52ulS+vlON89yXkiIQxElwuokszlqzwPEsHHobTLuvKiLIfy0fmfAbyUiK19KR
CbYfQEgyT7EShXhEu2qspes3yuGuIEdT/ijvz0HsLD4P4VwgESDa0ZOxiF8e0be/Y9L+zqHb33Fu
v1+z/b5J+/1Dt993bn+3Zvu7Ju3vHrr9Xef292q2v2fS/t6h299zbn+/Zvv7Ju3vH7r9fef2D2q2
f2DS/sGh2z9wbv+wZvuHJu0fHrr9Q9P2V/+lOPc1mutJXrVzNHE6hG3NogEF2AI36MAhVtApJwhT
Vkm2vEpiXsXnVHwbKj6n4pepyD5b9ZClfF/OZO0j0RxvScdIU490KDT10Im4qISVfZKajULGnFGj
7OuhPo7mpBuO2DQRToULUMlyLsWGV/dIt5vC2Tw6IMTKaMsWY9pBtkca6ZAK9qJtOYVZmWZpR38f
pFqoqONmbtzEGYp0n09tL81SAGfav5WlXxafMF/NVXHzKE8isREzWWzcVge3FETwfg1PzaL5zfE1
OlS4+PWX2/uvH7/9v+8XvoffWFIeRHDuOhh3JdlpCF8k8UmJ+/WI++7EX9dsd9eZ9KAe4YEz4at6
hK9owvhfEkPZEISQFXSfo8pCwSa8gz3i17N4cqewU1DINynUNSnUowvxM+6MpURTxjco0zUo0yuV
waQbLyf1pcsq66VrUKhrUqhnUqhvUmhgUmhoUmhEF5J6PF7qe1xfpmtQpmdQpm9QZmBQZmhQZlQq
g12i+Ed58hWXoiCitbyFnH7Z3MaX1KO7tfaGAfYbwwy9KocW5RSxMZQ+aSf4Q3bQq7m9k6STJZzG
0kLxCLF0m7hQKwtNtqzWzUPMbKr10iA0vFrxkYdiyrDOK2P/puBeFiRZDSqwXkDIWb6lFFl0Mq4p
m/uSHW9NcC9V0jKGOr1QXD0IKEc0u9oeR5uQ0kHFhfaKkyRfZieeiQ5upp7+unNVxnaXhNNLOtYX
hEU98mKx4nJwtZOPyfHoyXFc5wJ2y7Edx3Wu6btzPJ1Efk2O6XquHK+/fvmpYPjqaZkKxnGvX4/j
XnASEU9ranHngFqs1olJzWHXOdGwO61pK2UFqGSGME9aIGIZIulau3AZxG3iTbBB/CnJN3z3jm+m
bsKAJwTwJlQdms4whHAJlI7wIrfBOKCK0Gjhmus2WCBXOuTNSYHbKkCmuFmJUkC2YmhG4WbzUifn
5QzCWkUKWbywggATcKQjEM1TAmRhmoA/Yr6qjsAkzgnghWkCXaaKdzoC8Z0g0BlThRUEWAvukv+i
CKSLDeirbNTHrDhVJ1spAHarc4Vgs11pbtAVdtb5TaF5JMJbxD1oGLeFF9WoU79C0H4UQQhm8U4d
tZ2fEvWbIOpbEu02QbRrSbTXBNGeJdF+E0T7lkQHTRAdWBIdNkF0aEl01ATRUZWoRDrbDLiOJ+SJ
1MK7Mj+RqhlwXQy0fj1DLnXizRRx2dUsec3EZRfjse3Y8q3ZMg7fLoZt27HVtWbLOMq7GN1tx1bP
mi3jYPBiELgdW31rtoxjxoux4nZsDazZMg4tL8aE27E1tGbLOGC8GOltx9bIgK1qRcgrhVXES/IM
VMbFUZOFlkStCFqyMrCRwzdmy5VvkOgmlcksNGSdlTRkHW4CmpW8XoxXZo3smDeS30NmNbJrwfDP
IiNy2ofKDraULKJas/rIlTQlXlYnveROf2ihCswrXrAVJ4FRiLxTpFw0prejDdBoKlXPiwuC4cl5
K3QqMlDzLLFQzprAES487LE9AJP7vMqrXwJJ+Rf7x5HMRae/iVNSNp9QNv8QyuY3o2wUGk0la2Xz
m1Y2vyFlQ+/9CSTlX5pQNkp01srWJZStewhl6zajbBQaTSVrZes2rWzdhpQNvfQpkJR/aULZKNFZ
K1uPULbeIZSt14yyUWg0layVrde0svUaUjb0xq9AUv6lCWWjRGetbH1C2fqHULZ+M8pGodFUsla2
ftPK1m9I2dDr3gJJ+ZcmlI0SnbWyDQhlGxxC2QbNKBuFRlPJWtkGTSvboCFlQ+/6CyTlX5pQNkp0
1so2JJRteAhlGzajbBQaTSVrZRs2rWxDe2WbfxLX0WyZkHfZ6ERQTq8n2SWRyNtp9uthxg2lBdbj
ZkSMm9Ehxs2omXFDodFUsh43o6bHzej5jxsyd0neTrNfDzNuKC2wuFXzfsFPxOyi3IUyXMcTZGNV
utaW8zqfzLev2f/eWhwo7OsY5KmViG5Xa3062C2SykonpoHlaWspX5/Iq5jw1KzS+2fm8kgRhHNb
xoc1Li76hYyNSXCTvJ4V0hMipNaLzqTeibS42W104GT8FpxLek7yUgJVn0rLSafkpPO/lXtcl4qT
NmvNpuA0ygNolHbT+tZGs9kFIcsdrgqQDfDcswuqRc5oBQbHqvgoUt1H4a0SRauxKMl8PbM/v8R4
LbCQ3XkpSnYbLyk+xNUXGwu0DZfTwqsAAAYamb/c1u14YeT1yMwGkuTTOzfJlinFzdf7377B87C/
sPZcdF77aWvlezrkHZ30A4/YVd3PKfJtHFctotbtIpRFdDIZ6U5EJ0sdn/+z8iTHdrkMsic5RCMu
Czx6P4BujiAN0BTVlE90VIU0DsOEP9VBvEu2/vTp+07Ec45elcps+SOC1brbIER/r+IUQWXFym+h
++GtnvKHKPugnP81DxvSkzKjmz7MPNJnG1FNy/JfhsaejQnkQUTcBqsfUbR8Trb4L4UpxMxQVXGZ
iye/JcP+3JYtXqasvKjm9aiqEOar/+Djraxa0XgrflfqRvZOobVumD4dSM+a5ScDxzx2aspvAkT8
9YGueIOATxFd/jXg0Wi9ieLJwPJzgWNexRdpwKIsZD57HuxK/MKR97v4FKxPtCOM8fa9Ry0Ws7sI
3+ZfH3efQWKQ+JENdKiEO42ZoCW0tuNBekYSeciSrg8A14C4mDqEUJBxVv6l5rvrosV18odnDf+F
zaa/mDpUJKdqWlWOkQdYzFABSKoR7R7EQja9YcxUJTsMZhqzXU6Jzsxgv4Tr+Lw48fppqTgUvASr
YVR6PYWcVcz2qEsXssmOueWbThUyAlDLCUCy9xpRKBKwA+RN919a0zH2Oo2xR6aSN+TO9LVP6gs5
ngHUzUDHteoNAj1KgDRfRTzvkPkq6iMHQPJbqIpqCyIWvgy5oTGxHFINwyEEUGcYAZgJzVhfATQ6
C5C9qQGMZvkheJfTsxnGDtTRjR/VV6X6A+ilgw6D8vNf9VAD7HNabxIDtVG3FkDbYnPW0Jb3DVpu
TgJgbxD8okHomhGyIwZgYSAcqxhXMLAxGeT70qa2plTTwuaUapZGs/JtbAzqmrAM7LrZ3EOQwcC2
ZbDvie7xeuKZiRBrDpYCmAK75uw7rG/bYVLtGp0G4NpxAHatBdi3eJq+8Vwav5qVGgXS9GyhFwD6
Ccy+pNGUl4GdDOssDtwoArjpqYShpq4CNKGvAPatByjrbacZvQWor7sA5lppX9pKjwFqa1Z1Iq8h
UJuhbFzUvnfM2DCWrblMUfe4fDzQDCkAaS628IzPzoruJTU8uBltcCcRA8vdRQUKwx1HBYYna+jr
eZgZWBiWDOx2OhUYWpGbQhNOFIBbc412tspQo7VmO7Y4MZvSR/AWULtdfv7lMKQBDmy/AfY2vNet
a8MlNA52XMLiYFiKvNQ1cM3xsn/gog4bdsMBwHpIANjr5qHWAGYl9aXUJeiv9QK4nSIYTYMncEwy
lIMoJjyeoc+jHQYiOZ14YYonIYGnibwfQ547p1eOJsJR54EUV/t0gKwyR9rnYRMiBdeUr6ZEtrN+
RAvb/CXRLKiCTPEAoAqsIKxaybx3lLcG9LEGWS8qJ1p1J8LaRiExQ/0EcI9j0J5VqZuSb9O3sQdq
J8pmmm+lqEN05ECBunNG4U+tBanShxjbIkqz2G1tnOJ8G7OZgxV9H377+un+M/TzZJQe+u6rpZGK
ojDPhGoVqgjA41X75UDWOJrzD1JjaF5FPCwaDDz1KpnBsuhKXsmaZRECWwqxlRisBKjPkahQ+VWG
X+arbeh1PDk+LQvmrr7a0O0g0eC20eMvJkj8PGNtq3zGy02dYHZebfz1pxXL4zDkolaHAduHiOeX
aQQFZFpQ+8CK4HK8coYAjwyn6wCoI8QBiHlGPb8YmffyDZuq9Nsg7BcQhE2uD1Qx2MMpIQ2NKgKc
Mvraa6Ov2+jrNvpai6iNbzZCDtDGNxegjW9WFTib+GZDwetKtOHNWmjDm9vwZhLsw5tdpdJGSePg
GCU9PUacdDNRw2cXg5bL8IixvM1Fsp5dnEgbVaqDs48qtYtWOvmIbkMl2lAJBNpQCQFtqES59PPY
LWtDJZ7wnuNTC5VA09xlx/yQ0486mJNP+Yun+N7VLyU1IPO6AZieyuLt3psVeEHk3bt3ipyOxpk0
BbtBBybFauHsKwwS8quvrOur6obT8UrxdRlt6K9SWmDk6+T/ThR0l+E4pr/Gi+Wd4usknCm/LlRf
V28VX+fKr3GiohsnKrrJVtHeDc9kSX0VKVCprwkbU/TXtwTmbBxMWBej7xpngHuQeJZp3FDd/n7/
JdUlW+cmefzw5Qv3TJGM4QCEWU7HqfBvccJKk5iZpGC89XTiUJimcdk0SeW5ecKnCxsbhfME4BBd
kVrtf+iangbPXK/eJnnwDBU6o+Y2AykB5KScqzEDkaOWTvhYxeVX8j7ucXWS0BJXV4ErKYYDZWCq
bJ1DKhu6Mmh1TRwMkH3qW+tHT4HLUT/8Q+oH7gSfQkGK/6KiQcH9KPuShWjQcRBGnud1jKJBO30q
N3BRLvXC+qwyQlMOMiT8ruEgR+WOd/OBrfO7ZpG1XvGwslduZunEUP+nICLQgFOSJB1CH5IOSrva
QP4IROEX5ByT+Clv/Su5+YKzLsVZ15AzOEg+DGuVjshY65mzVt7Ec2Yt46GvURq49V9babKBEG8T
7gMj4yCnBMICh5KSQilDfcKQlisUi62IEklvo6kpCMzXlWKFcZe17nb8Rt+6MnJWqYxc4lz5NePu
Vi7y7//DuPr/ZmcST+87BQA=
--14dae93404ed46460704d17ef284
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae93404ed46460704d17ef284--


From xen-devel-bounces@lists.xen.org Sun Dec 23 07:31:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 07:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tmg1Y-0002M9-SI; Sun, 23 Dec 2012 07:31:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tmg1X-0002M4-0x
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 07:31:03 +0000
Received: from [85.158.139.83:33072] by server-12.bemta-5.messagelabs.com id
	20/F0-02275-633B6D05; Sun, 23 Dec 2012 07:31:02 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1356247856!28258523!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQyNzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10655 invoked from network); 23 Dec 2012 07:30:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Dec 2012 07:30:58 -0000
X-IronPort-AV: E=Sophos;i="4.84,340,1355097600"; d="log'?scan'208";a="1623872"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	23 Dec 2012 07:30:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Sun, 23 Dec 2012 02:30:55 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1Tmg1P-0008VS-Iu;
	Sun, 23 Dec 2012 07:30:55 +0000
Message-ID: <1356247857.24056.44.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <keir@xen.org>, <tim@xen.org>,
	<ian.jackson@citrix.com>
Date: Sun, 23 Dec 2012 07:30:57 +0000
Content-Type: multipart/mixed; boundary="=-D15UhDipBQKP/Xssc/uH"
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com
Subject: [Xen-devel] Creating guest with less than 4MB RAM causes Xen to
 crash immediately
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-D15UhDipBQKP/Xssc/uH
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Keir, Tim and IanJ,

When a user creates a guest with less than 4MB of memory, Xen crashes
immediately. That's because:

      * libxc rounds the memory up to 4MB and builds page tables for
        the guest, see xc_dom_x86.c:count_pgtables. So virt_pgtab_end
        is at least 0x400000.
      * When Xen tries to pin the guest's page tables, it scans through
        them and invokes arch/x86/mm.c:get_page_type, which has an
        assertion that if any error happens the rc should only be
        -EINVAL; otherwise Xen crashes.
      * For a guest with less than 4MB of memory, some of the entries
        in the page tables are not valid, causing __get_page_type to
        fail with -EBUSY, which makes the assertion fail.

In practice few people will create a DomU with less than 4MB of RAM. I
encountered this because I was trying to boot up ~3K+ mini-os instances.

Xen console log and xl output files attached.


Wei.

--=-D15UhDipBQKP/Xssc/uH
Content-Disposition: attachment; filename="mini-2m-crashed-xen.log"
Content-Type: text/x-log; name="mini-2m-crashed-xen.log"; charset="UTF-8"
Content-Transfer-Encoding: 7bit


(XEN) mm.c:794:d0 pg_owner 1 l1e_owner 1, but real_pg_owner 0
(XEN) mm.c:865:d0 Error getting mfn a11 (pfn 77ed1) from L1 entry 0000000000a11023 for l1e_owner=1, pg_owner=1
(XEN) mm.c:1163:d0 Failure in alloc_l1_table: entry 0
(XEN) mm.c:2021:d0 Error while validating mfn 11e7f0 (pfn 72) for type 1000000000000000: caf=8000000000000003 taf=1000000000000001
(XEN) Assertion 'rc == -EINVAL' failed at mm.c:2337
(XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c4801780df>] get_page_type+0x1c/0x36
(XEN) RFLAGS: 0000000000010216   CONTEXT: hypervisor
(XEN) rax: 00000000fffffff0   rbx: 0000000000000000   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 000000000000000a   rdi: ffff82c48027d6e0
(XEN) rbp: ffff82c4802c7a08   rsp: ffff82c4802c7a08   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000010
(XEN) r12: 0000000000000000   r13: ffff82f6023cfe00   r14: 1000000000000000
(XEN) r15: 000000000011e7f0   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000012b92c000   cr2: ffff880002d8b300
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802c7a08:
(XEN)    ffff82c4802c7a38 ffff82c4801786f5 0000000000000000 0000000000000067
(XEN)    ffff83012b49a000 000000000011e7f2 ffff82c4802c7a78 ffff82c480179414
(XEN)    ffff880002d8b300 ffff83012b49a000 0000000000000001 ffff82f6023cfe40
(XEN)    2100000000000001 ffff83011e7f2008 ffff82c4802c7b48 ffff82c4801776b0
(XEN)    ffff82c4802c7ad8 0000000000000082 ffff83011a768b38 000000000000003e
(XEN)    0000000000000000 2000000000000000 2000000000000001 ffff82c4802c0000
(XEN)    ffff83011e7f2000 000000000011e7f2 ffff82c400000001 0000000000000001
(XEN)    000000000011e7f2 ffff82c400000000 0000000000000000 000000000011e7f2
(XEN)    ffff83012b49a000 ffff82f6023cfe40 2000000000000000 0000000000000001
(XEN)    0000000000000001 ffff82f6023cfe40 2000000000000000 000000000011e7f2
(XEN)    ffff82c4802c7b58 ffff82c4801780c1 ffff82c4802c7b88 ffff82c4801786e2
(XEN)    0000000000000001 0000000000000067 ffff83012b49a000 000000000011e7f3
(XEN)    ffff82c4802c7bc8 ffff82c4801787e8 ffff83012b49a000 0000000000000000
(XEN)    3000000000000000 ffff82f6023cfe60 3100000000000001 3000000000000001
(XEN)    ffff82c4802c7c98 ffff82c480177969 ffff830100000000 ffff82c4802c7c08
(XEN)    0000000000000002 0000000000000001 ffff82c4802e0000 ffff82c4802c7c88
(XEN)    ffff82c4802c7c18 ffff82c48018046f 000000000011e7f3 ffff83011e7f3000
(XEN)    ffff82c400000001 ffff83012b49a000 000000000011e7f3 ffff82c400000001
(XEN)    0000000000000000 000000000011e7f3 ffff83012b49a000 ffff82f6023cfe60
(XEN)    3000000000000000 0000000000000001 0000000000000001 ffff82f6023cfe60
(XEN) Xen call trace:
(XEN)    [<ffff82c4801780df>] get_page_type+0x1c/0x36
(XEN)    [<ffff82c4801786f5>] get_page_and_type_from_pagenr+0x85/0xbb
(XEN)    [<ffff82c480179414>] get_page_from_l2e+0xc0/0x239
(XEN)    [<ffff82c4801776b0>] __get_page_type+0xa3a/0x143d
(XEN)    [<ffff82c4801780c1>] get_page_type_preemptible+0xe/0x10
(XEN)    [<ffff82c4801786e2>] get_page_and_type_from_pagenr+0x72/0xbb
(XEN)    [<ffff82c4801787e8>] get_page_from_l3e+0xbd/0x1cb
(XEN)    [<ffff82c480177969>] __get_page_type+0xcf3/0x143d
(XEN)    [<ffff82c4801780c1>] get_page_type_preemptible+0xe/0x10
(XEN)    [<ffff82c4801786e2>] get_page_and_type_from_pagenr+0x72/0xbb
(XEN)    [<ffff82c480178999>] get_page_from_l4e+0xa3/0x19d
(XEN)    [<ffff82c480177bf0>] __get_page_type+0xf7a/0x143d
(XEN)    [<ffff82c4801780c1>] get_page_type_preemptible+0xe/0x10
(XEN)    [<ffff82c48017b06c>] do_mmuext_op+0x377/0x181d
(XEN)    [<ffff82c480229b5b>] syscall_enter+0xeb/0x145
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'rc == -EINVAL' failed at mm.c:2337
(XEN) ****************************************
(XEN)
(XEN) Manual reset required ('noreboot' specified)

--=-D15UhDipBQKP/Xssc/uH
Content-Disposition: attachment; filename="mini-2m-crashed-xen-xl.log"
Content-Type: text/x-log; name="mini-2m-crashed-xen-xl.log"; charset="UTF-8"
Content-Transfer-Encoding: 7bit


Parsing config from domain_config
libxl: detail: libxl_dom.c:192:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 1930 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="", features="(null)"
domainbuilder: detail: xc_dom_kernel_file: filename="mini-os.gz"
domainbuilder: detail: xc_dom_malloc_filemap    : 103 kB
domainbuilder: detail: xc_dom_malloc            : 2354 kB
domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x19db4 -> 0x24c87e
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.3, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x0 memsz=0x69a7c
xc: detail: elf_parse_binary: memory: 0x0 -> 0x69a7c
xc: detail: elf_xen_parse: __xen_guest: "GUEST_OS=Mini-OS,XEN_VER=xen-3.0,VIRT_BASE=0x0,ELF_PADDR_OFFSET=0x0,HYPERCALL_PAGE=0x2,LOADER=generic"
xc: detail: elf_xen_parse_guest_info: GUEST_OS="Mini-OS"
xc: detail: elf_xen_parse_guest_info: XEN_VER="xen-3.0"
xc: detail: elf_xen_parse_guest_info: VIRT_BASE="0x0"
xc: detail: elf_xen_parse_guest_info: ELF_PADDR_OFFSET="0x0"
xc: detail: elf_xen_parse_guest_info: HYPERCALL_PAGE="0x2"
xc: detail: elf_xen_parse_guest_info: LOADER="generic"
xc: detail: elf_xen_addr_calc_check: addresses:
xc: detail:     virt_base        = 0x0
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0x0
xc: detail:     virt_kstart      = 0x0
xc: detail:     virt_kend        = 0x69a7c
xc: detail:     virt_entry       = 0x0
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0x0 -> 0x69a7c
domainbuilder: detail: xc_dom_mem_init: mem 2 MB, pages 0x200 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x200 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x0 -> 0x6a000  (pfn 0x0 + 0x6a pages)
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x0+0x6a at 0x7f71e8461000
xc: detail: elf_load_binary: phdr 0 at 0x0x7f71e8461000 -> 0x0x7f71e847d480
domainbuilder: detail: xc_dom_alloc_segment:   phys2mach    : 0x6a000 -> 0x6b000  (pfn 0x6a + 0x1 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x6a+0x1 at 0x7f71e84ef000
domainbuilder: detail: xc_dom_alloc_page   :   start info   : 0x6b000 (pfn 0x6b)
domainbuilder: detail: xc_dom_alloc_page   :   xenstore     : 0x6c000 (pfn 0x6c)
domainbuilder: detail: xc_dom_alloc_page   :   console      : 0x6d000 (pfn 0x6d)
domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0x0000000000000000 -> 0x0000ffffffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0x0000000000000000 -> 0x0000007fffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0x0000000000000000 -> 0x000000003fffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0x0000000000000000 -> 0x00000000003fffff, 2 table(s)
domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0x6e000 -> 0x73000  (pfn 0x6e + 0x5 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x6e+0x5 at 0x7f71e845c000
domainbuilder: detail: xc_dom_alloc_page   :   boot stack   : 0x73000 (pfn 0x73)
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x74000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x400000
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x200
domainbuilder: detail: clear_page: pfn 0x6d, mfn 0x11e7f5
domainbuilder: detail: clear_page: pfn 0x6c, mfn 0x11e7f6
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x6b+0x1 at 0x7f71e8459000
domainbuilder: detail: start_info_x86_64: called
domainbuilder: detail: setup_hypercall_page: vaddr=0x2000 pfn=0x2
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 2363 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 103 kB
domainbuilder: detail:       domU mmap          : 452 kB

--=-D15UhDipBQKP/Xssc/uH
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-D15UhDipBQKP/Xssc/uH--


domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x0 memsz=0x69a7c
xc: detail: elf_parse_binary: memory: 0x0 -> 0x69a7c
xc: detail: elf_xen_parse: __xen_guest: "GUEST_OS=Mini-OS,XEN_VER=xen-3.0,VIRT_BASE=0x0,ELF_PADDR_OFFSET=0x0,HYPERCALL_PAGE=0x2,LOADER=generic"
xc: detail: elf_xen_parse_guest_info: GUEST_OS="Mini-OS"
xc: detail: elf_xen_parse_guest_info: XEN_VER="xen-3.0"
xc: detail: elf_xen_parse_guest_info: VIRT_BASE="0x0"
xc: detail: elf_xen_parse_guest_info: ELF_PADDR_OFFSET="0x0"
xc: detail: elf_xen_parse_guest_info: HYPERCALL_PAGE="0x2"
xc: detail: elf_xen_parse_guest_info: LOADER="generic"
xc: detail: elf_xen_addr_calc_check: addresses:
xc: detail:     virt_base        = 0x0
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0x0
xc: detail:     virt_kstart      = 0x0
xc: detail:     virt_kend        = 0x69a7c
xc: detail:     virt_entry       = 0x0
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0x0 -> 0x69a7c
domainbuilder: detail: xc_dom_mem_init: mem 2 MB, pages 0x200 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x200 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x0 -> 0x6a000  (pfn 0x0 + 0x6a pages)
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x0+0x6a at 0x7f71e8461000
xc: detail: elf_load_binary: phdr 0 at 0x0x7f71e8461000 -> 0x0x7f71e847d480
domainbuilder: detail: xc_dom_alloc_segment:   phys2mach    : 0x6a000 -> 0x6b000  (pfn 0x6a + 0x1 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x6a+0x1 at 0x7f71e84ef000
domainbuilder: detail: xc_dom_alloc_page   :   start info   : 0x6b000 (pfn 0x6b)
domainbuilder: detail: xc_dom_alloc_page   :   xenstore     : 0x6c000 (pfn 0x6c)
domainbuilder: detail: xc_dom_alloc_page   :   console      : 0x6d000 (pfn 0x6d)
domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0x0000000000000000 -> 0x0000ffffffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0x0000000000000000 -> 0x0000007fffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0x0000000000000000 -> 0x000000003fffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0x0000000000000000 -> 0x00000000003fffff, 2 table(s)
domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0x6e000 -> 0x73000  (pfn 0x6e + 0x5 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x6e+0x5 at 0x7f71e845c000
domainbuilder: detail: xc_dom_alloc_page   :   boot stack   : 0x73000 (pfn 0x73)
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x74000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x400000
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x200
domainbuilder: detail: clear_page: pfn 0x6d, mfn 0x11e7f5
domainbuilder: detail: clear_page: pfn 0x6c, mfn 0x11e7f6
domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x6b+0x1 at 0x7f71e8459000
domainbuilder: detail: start_info_x86_64: called
domainbuilder: detail: setup_hypercall_page: vaddr=0x2000 pfn=0x2
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 2363 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 103 kB
domainbuilder: detail:       domU mmap          : 452 kB

--=-D15UhDipBQKP/Xssc/uH
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-D15UhDipBQKP/Xssc/uH--


From xen-devel-bounces@lists.xen.org Sun Dec 23 10:42:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 10:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tmj0N-0003sY-3t; Sun, 23 Dec 2012 10:42:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carsten@schiers.de>) id 1Tmj0L-0003sT-Ju
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 10:42:02 +0000
Received: from [85.158.139.211:25006] by server-12.bemta-5.messagelabs.com id
	60/29-02275-8FFD6D05; Sun, 23 Dec 2012 10:42:00 +0000
X-Env-Sender: carsten@schiers.de
X-Msg-Ref: server-6.tower-206.messagelabs.com!1356259319!21693434!1
X-Originating-IP: [194.117.254.36]
X-SpamReason: No, hits=0.2 required=7.0 tests=FROM_EXCESS_QP,
	MIME_QP_LONG_LINE,SUBJECT_EXCESS_QP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23974 invoked from network); 23 Dec 2012 10:41:59 -0000
Received: from www.zeus06.de (HELO mail.zeus06.de) (194.117.254.36)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Dec 2012 10:41:59 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=simple; d=mail.zeus06.de; h=subject
	:from:to:cc:date:mime-version:content-type
	:content-transfer-encoding:in-reply-to:references:message-id; s=
	beta; bh=sVdXX1ZKctFLrXi4iYcFiX5MbwFGIDaGhF4C1O5uJtw=; b=IB/8M/t
	abhJIQlG64X/imeCcZmGDYQZj+S0a5kBZSC/OCvDlR6QUQNi9KZPb1665+QbU96r
	jO3ZCj9e6qPXyguoDS9yrBM/LQgM968oaaiC/53tcLfEVcpHdAfB6jC4+X3Mbc39
	CA8Ldvm22XWxcmO/NFhf065FKdC6Y/bWPJ4E=
Received: (qmail 1003 invoked from network); 23 Dec 2012 11:41:58 +0100
Received: from unknown (HELO uhura.zz) (l3s6271p1@84.46.14.115)
	by mail.zeus06.de with ESMTPA; 23 Dec 2012 11:41:58 +0100
Received: from localhost (localhost [127.0.0.1])
	by uhura.zz (Postfix) with ESMTP id CAB4A2C697;
	Sun, 23 Dec 2012 11:41:57 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at schiers.de
Received: from uhura.zz ([127.0.0.1])
	by localhost (uhura.space.zz [127.0.0.1]) (amavisd-new, port 10024)
	with LMTP id TUCytbnER8OA; Sun, 23 Dec 2012 11:41:50 +0100 (CET)
Received: from uhura.space.zz (localhost [127.0.0.1])
	by uhura.zz (Postfix) with ESMTP id 6EB272C541;
	Sun, 23 Dec 2012 11:41:50 +0100 (CET)
From: Carsten Schiers <carsten@schiers.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	daniel.kiper@oracle.com <daniel.kiper@oracle.com>
Date: Sun, 23 Dec 2012 11:41:50 +0100
Mime-Version: 1.0
In-Reply-To: <20121221202524.GC31554@phenom.dumpdata.com>
References: <d779abba78844d3701963b5ce5bdae92@imap.dingwall.me.uk>
X-Priority: 3 (Normal)
X-Mailer: Zarafa 7.0.8-35178
Thread-Index: Ac3g+hsL16wiviymQ8G3VKKJIMII0w==
Message-Id: <zarafa.50d6dfee.2f5e.28087f9c51461565@uhura.space.zz>
Cc: James Dingwall <james-xen@dingwall.me.uk>,
	xen-devel@lists.xen.org <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel log flooded with: xen_balloon:
 reserve_additional_memory: add_memory() failed: -17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi folks,

I had the same messages flooding logs, but it was in DomUs. It came together with e.g.:

    System RAM resource [mem 0x20000000-0x3fffffff] cannot be added

I am not 100% sure, but it was only for DomUs with PCI and PCIe devices passed through.

Xen 4.2.1, Dom0&DomU Kernel 3.7.1. Xen command line with dom0_mem=2G and dom0_mem=2G,max:2G.
Machine is an Intel 64-bit 16 GB installation.

Difficult to check deeper, as it made me so nervous that I reverted to Xen 4.1, but I could certainly re-install it after Xmas, if it is helping...

BR,
Carsten.


-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On behalf of Konrad Rzeszutek Wilk
Sent: Friday, 21 December 2012 21:25
To: James Dingwall
Cc: daniel.kiper@oracle.com; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] kernel log flooded with: xen_balloon: reserve_additional_memory: add_memory() failed: -17

On Wed, Dec 19, 2012 at 08:47:22AM +0000, James Dingwall wrote:
> Hi,
> 
> I have encountered an apparently benign error on two systems where the 
> dom0 kernel log is flooded with messages like:
> 
> [52482.163855] System RAM resource [mem 0x1b8000000-0x1bfffffff] cannot be added
> [52482.163860] xen_balloon: reserve_additional_memory: add_memory() failed: -17
> 
> The first line is from drivers/xen/xen-balloon.c, the second from 
> mm/memory_hotplug.c

Daniel tells me it is due to the recent changes in the balloon code.
CC-ing him here.

> 
> The trigger for the messages seems to be the first occasion that a Xen 
> guest is shut down.  I have noted this in a vanilla 3.6.7 and kernel 
> 3.5.0-18 built from Ubuntu sources.  Xen version is 4.2.0.
> It is not clear why the dom0 kernel is trying to balloon up, as the 
> Xen command line specifies a fixed dom0 memory allocation, 
> noselfballooning is specified for the kernel, and ballooning is also 
> disabled in the xend-config.sxp / xl.conf (one system using xm, 
> another xl).
> 
> xen command line:
> placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1 
> dom0_mem=max:6144m
> 
> kernel command line:
> root=/dev/loop0 ro console=tty1 console=hvc0 earlyprintk=xen nomodeset 
> noselfballooning
> 
> Examining /proc/iomem does show that the dom0 memory allocation is 
> actually 64 KiB short of 6144 MiB:
> 
> cat /proc/iomem | grep System\ RAM
> 00010000-0009bfff : System RAM      [573440 bytes]
> 00100000-cb2dffff : System RAM      [3407740928 bytes]
> 100000000-1b4d83fff : System RAM    [3034071040 bytes]
> 
> Total system RAM: 6442385408 bytes; 6x2^30 - 6442385408 = 65536
> 
> The memory range indicated in the log message is "Unusable memory"
> in /proc/iomem:
> 1b4d84000-82fffffff : Unusable memory
> 
> Another point of interest is that we have multiple "identical"
> hardware platforms (Dell T320) for the system running the 3.5.0-18 
> kernel but only see this error on a slightly more recent system.
> Older systems show in /proc/iomem that all memory is System RAM.
> 
> 100000000-82fffffff : System RAM  [older system BIOS 1.0]
> 
> 100000000-1b4d83fff : System RAM  [newer system BIOS 1.3]
> 1b4d84000-82fffffff : Unusable memory
> 
> The BIOS revision between the old and new has changed, so I was 
> wondering if it is possible that there is a whitelist which affects 
> the impact of the kernel option:
> CONFIG_X86_RESERVE_LOW=64
> This is only a guess, since the amount of memory reserved is equivalent 
> to the shortfall calculated above.  If this is the right area, perhaps 
> the dom0 calculation for its memory entitlement needs to be taught 
> not to try and hotplug the missing 64k when it has been reserved.
> 
> If any other information would be useful then please let me know.
> 
> Thanks,
> James
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

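[Editorial note: James's shortfall arithmetic can be double-checked by summing the "System RAM" ranges from the /proc/iomem excerpt quoted above. A minimal sketch follows; parse_system_ram() is a hypothetical helper written for this note, not part of any Xen or kernel tool, and note that /proc/iomem ranges are inclusive, hence the "+ 1".]

```python
# Re-check the 64 KiB shortfall from the /proc/iomem excerpt quoted above.
# parse_system_ram() is a hypothetical helper, not an existing tool.

IOMEM_EXCERPT = """\
00010000-0009bfff : System RAM
00100000-cb2dffff : System RAM
100000000-1b4d83fff : System RAM
"""

def parse_system_ram(iomem_text: str) -> int:
    """Return the total size in bytes of all 'System RAM' resource ranges."""
    total = 0
    for line in iomem_text.splitlines():
        resource, _, name = line.partition(" : ")
        if name.strip() != "System RAM":
            continue
        start, end = (int(part, 16) for part in resource.strip().split("-"))
        total += end - start + 1  # /proc/iomem ranges are inclusive
    return total

total = parse_system_ram(IOMEM_EXCERPT)
print(total)              # 6442385408
print(6 * 2**30 - total)  # 65536, the 64 KiB James attributes to CONFIG_X86_RESERVE_LOW=64
```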


From xen-devel-bounces@lists.xen.org Sun Dec 23 13:05:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 13:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmlEl-0004n3-I7; Sun, 23 Dec 2012 13:05:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtux@126.com>)
	id 1TmlEj-0004mv-1V; Sun, 23 Dec 2012 13:05:01 +0000
Received: from [85.158.137.99:50845] by server-6.bemta-3.messagelabs.com id
	E3/01-12154-77107D05; Sun, 23 Dec 2012 13:04:55 +0000
X-Env-Sender: gbtux@126.com
X-Msg-Ref: server-13.tower-217.messagelabs.com!1356267891!20604731!1
X-Originating-IP: [220.181.15.56]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjU2ID0+IDkxNjg=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjU2ID0+IDkxNjg=\n,HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7013 invoked from network); 23 Dec 2012 13:04:53 -0000
Received: from m15-56.126.com (HELO m15-56.126.com) (220.181.15.56)
	by server-13.tower-217.messagelabs.com with SMTP;
	23 Dec 2012 13:04:53 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Cc:Subject:In-Reply-To:
	References:Content-Type:MIME-Version:Message-ID; bh=THIy5bp3faYE
	KbOnrYwAYIjq9qn3XwYJ/DxfYtQf0As=; b=kXz9sndO/2SwEedC6bBNBrJ70IW2
	aAY78e8M9yNhIA6uUyFC1xmN1t/DPJdPfw2lghhKMg0tRylQcknWltzO3kk2bWP5
	UnE/ayBeNxMyRENuNYoFhMAe5F15dyyQ8TAPgtxQHd6LCicHnXqxMEidLoLwDOtk
	z6h7e6Z+towtV2k=
Received: from gbtux$126.com ( [124.16.139.198] ) by ajax-webmail-wmsvr56
	(Coremail) ; Sun, 23 Dec 2012 21:04:42 +0800 (CST)
X-Originating-IP: [124.16.139.198]
Date: Sun, 23 Dec 2012 21:04:42 +0800 (CST)
From: gavin  <gbtux@126.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121210(21001.5125.5051) Copyright (c) 2002-2012 www.mailtech.cn
	126com
In-Reply-To: <20121222220407.GP8912@reaktio.net>
References: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
	<20121222220407.GP8912@reaktio.net>
X-CM-CTRLDATA: EOjmomZvb3Rlcl9odG09MTk2MTo4MQ==
MIME-Version: 1.0
Message-ID: <8fe6315.6fa2.13bc7dd8763.Coremail.gbtux@126.com>
X-CM-TRANSID: OMqowEAZKUVrAddQOG8dAA--.12323W
X-CM-SenderInfo: pjew35a6rslhhfrp/1tbiGAqOnEl1wbq5AQACs9
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2517206486562354085=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2517206486562354085==
Content-Type: multipart/alternative; 
	boundary="----=_Part_109271_786584118.1356267882339"

------=_Part_109271_786584118.1356267882339
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi Pasi,


Thank you very much for your information.


Best Regards,
Gavin


At 2012-12-23 06:04:08, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>On Sun, Dec 23, 2012 at 01:50:16AM +0800, gavin wrote:
>>     Hi,
>> 
>>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the
>>    config file of pv-ops kernel, such as kernel 2.6.32.50. However, this
>>    option exists in the config file of kernel version 2.6.18.8. I also cannot
>>    find the vTPM backend driver (such as
>>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
>>    So, how can I configure and use the vTPM backend driver in kernel 2.6.32?
>>    Thank you for any advice.
>> 
>
>I don't think vtpm drivers were ported to 2.6.32 pvops.
>Recently there has been work on porting the drivers to upstream Linux 3.x, 
>but they aren't merged yet iirc.
>
>If you need to use them with 2.6.32 you need to port them yourself.. 
>
>-- Pasi
>
------=_Part_109271_786584118.1356267882339--




--===============2517206486562354085==--



Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2517206486562354085=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2517206486562354085==
Content-Type: multipart/alternative; 
	boundary="----=_Part_109271_786584118.1356267882339"

------=_Part_109271_786584118.1356267882339
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: base64

SGkgUGFzaSwKCgpUaGFuayB5b3UgdmVyeSBtdWNoIGZvciB5b3VyIGluZm9ybWF0aW9uLgoKCkJl
c3QgUmVnYXJkcywKR2F2aW4KCgpBdCAyMDEyLTEyLTIzIDA2OjA0OjA4LCJQYXNpIEvDpHJra8Ok
aW5lbiIgPHBhc2lrQGlraS5maT4gd3JvdGU6Cgo+T24gU3VuLCBEZWMgMjMsIDIwMTIgYXQgMDE6
NTA6MTZBTSArMDgwMCwgZ2F2aW4gd3JvdGU6Cj4+ICAgICBIaSwKPj4gCj4+ICAgIEkgY2Fubm90
IGZpbmQgdGhlIHZUUE0gY29uZmlnIG9wdGlvbiBDT05GSUdfWEVOX1RQTURFVl9CQUNLRU5EIGlu
IHRoZQo+PiAgICBjb25maWcgZmlsZSBvZiBwdi1vcHMga2VybmVsLCBzdWNoIGFzIGtlcm5lbCAy
LjYuMzIuNTAuIEhvd2V2ZXIsIHRoaXMKPj4gICAgb3B0aW9uIGV4aXN0cyBpbiB0aGUgY29uZmln
IGZpbGUgb2Yga2VybmVsIHZlcnNpb24gMi42LjE4LjguIEkgYWxzbyBjYW5ub3QKPj4gICAgZmlu
ZCB0aGUgdlRQTSBiYWNrZWQgZHJpdmVyIChzdWNoIGFzCj4+ICAgIGxpbnV4LTIuNi4xOC14ZW4u
aGcvZHJpdmVycy94ZW4vdHBtYmFjayApIGluIHRoZSBwdi1vcHMga2VybmVsLgo+PiAgICBTbywg
aG93IGNhbiBJIGNvbmZpZ3VyZSBhbmQgdXNlIHRoZSB2VFBNIGJhY2tlbmQgZHJpdmVyIGluIGtl
cm5lbCAyLjYuMzI/Cj4+ICAgIFRoYW5rIHlvdSBmb3IgYW55IGFkdmljZS4KPj4gCj4KPkkgZG9u
J3QgdGhpbmsgdnRwbSBkcml2ZXJzIHdlcmUgcG9ydGVkIHRvIDIuNi4zMiBwdm9wcy4KPlJlY2Vu
dGx5IHRoZXJlIGhhcyBiZWVuIHdvcmsgb24gcG9ydGluZyB0aGUgZHJpdmVycyB0byB1cHN0cmVh
bSBMaW51eCAzLngsIAo+YnV0IHRoZXkgYXJlbid0IG1lcmdlZCB5ZXQgaWlyYy4KPgo+SWYgeW91
IG5lZWQgdG8gdXNlIHRoZW0gd2l0aCAyLjYuMzIgeW91IG5lZWQgdG8gcG9ydCB0aGVtIHlvdXJz
ZWxmLi4gCj4KPi0tIFBhc2kKPgo=
------=_Part_109271_786584118.1356267882339
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: base64

PGRpdiBzdHlsZT0ibGluZS1oZWlnaHQ6MS43O2NvbG9yOiMwMDAwMDA7Zm9udC1zaXplOjE0cHg7
Zm9udC1mYW1pbHk6YXJpYWwiPjxkaXY+SGkgUGFzaSw8L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2
PlRoYW5rIHlvdSB2ZXJ5IG11Y2ggZm9yIHlvdXIgaW5mb3JtYXRpb24uPC9kaXY+PGRpdj48YnI+
PC9kaXY+PGRpdj48ZGl2PkJlc3QgUmVnYXJkcyw8ZGl2PkdhdmluPC9kaXY+PGRpdj48YnI+PC9k
aXY+PC9kaXY+PC9kaXY+QXQmbmJzcDsyMDEyLTEyLTIzJm5ic3A7MDY6MDQ6MDgsIlBhc2kmbmJz
cDtLw6Rya2vDpGluZW4iJm5ic3A7Jmx0O3Bhc2lrQGlraS5maSZndDsmbmJzcDt3cm90ZTo8YnI+
PHByZT4mZ3Q7T24mbmJzcDtTdW4sJm5ic3A7RGVjJm5ic3A7MjMsJm5ic3A7MjAxMiZuYnNwO2F0
Jm5ic3A7MDE6NTA6MTZBTSZuYnNwOyswODAwLCZuYnNwO2dhdmluJm5ic3A7d3JvdGU6CiZndDsm
Z3Q7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7SGksCiZndDsmZ3Q7Jm5ic3A7CiZndDsm
Z3Q7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7SSZuYnNwO2Nhbm5vdCZuYnNwO2ZpbmQmbmJzcDt0
aGUmbmJzcDt2VFBNJm5ic3A7Y29uZmlnJm5ic3A7b3B0aW9uJm5ic3A7Q09ORklHX1hFTl9UUE1E
RVZfQkFDS0VORCZuYnNwO2luJm5ic3A7dGhlCiZndDsmZ3Q7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Y29uZmlnJm5ic3A7ZmlsZSZuYnNwO29mJm5ic3A7cHYtb3BzJm5ic3A7a2VybmVsLCZuYnNw
O3N1Y2gmbmJzcDthcyZuYnNwO2tlcm5lbCZuYnNwOzIuNi4zMi41MC4mbmJzcDtIb3dldmVyLCZu
YnNwO3RoaXMKJmd0OyZndDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDtvcHRpb24mbmJzcDtleGlz
dHMmbmJzcDtpbiZuYnNwO3RoZSZuYnNwO2NvbmZpZyZuYnNwO2ZpbGUmbmJzcDtvZiZuYnNwO2tl
cm5lbCZuYnNwO3ZlcnNpb24mbmJzcDsyLjYuMTguOC4mbmJzcDtJJm5ic3A7YWxzbyZuYnNwO2Nh
bm5vdAomZ3Q7Jmd0OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwO2ZpbmQmbmJzcDt0aGUmbmJzcDt2
VFBNJm5ic3A7YmFja2VkJm5ic3A7ZHJpdmVyJm5ic3A7KHN1Y2gmbmJzcDthcwomZ3Q7Jmd0OyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwO2xpbnV4LTIuNi4xOC14ZW4uaGcvZHJpdmVycy94ZW4vdHBt
YmFjayZuYnNwOykmbmJzcDtpbiZuYnNwO3RoZSZuYnNwO3B2LW9wcyZuYnNwO2tlcm5lbC4KJmd0
OyZndDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDtTbywmbmJzcDtob3cmbmJzcDtjYW4mbmJzcDtJ
Jm5ic3A7Y29uZmlndXJlJm5ic3A7YW5kJm5ic3A7dXNlJm5ic3A7dGhlJm5ic3A7dlRQTSZuYnNw
O2JhY2tlbmQmbmJzcDtkcml2ZXImbmJzcDtpbiZuYnNwO2tlcm5lbCZuYnNwOzIuNi4zMj8KJmd0
OyZndDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDtUaGFuayZuYnNwO3lvdSZuYnNwO2ZvciZuYnNw
O2FueSZuYnNwO2FkdmljZS4KJmd0OyZndDsmbmJzcDsKJmd0OwomZ3Q7SSZuYnNwO2Rvbid0Jm5i
c3A7dGhpbmsmbmJzcDt2dHBtJm5ic3A7ZHJpdmVycyZuYnNwO3dlcmUmbmJzcDtwb3J0ZWQmbmJz
cDt0byZuYnNwOzIuNi4zMiZuYnNwO3B2b3BzLgomZ3Q7UmVjZW50bHkmbmJzcDt0aGVyZSZuYnNw
O2hhcyZuYnNwO2JlZW4mbmJzcDt3b3JrJm5ic3A7b24mbmJzcDtwb3J0aW5nJm5ic3A7dGhlJm5i
c3A7ZHJpdmVycyZuYnNwO3RvJm5ic3A7dXBzdHJlYW0mbmJzcDtMaW51eCZuYnNwOzMueCwmbmJz
cDsKJmd0O2J1dCZuYnNwO3RoZXkmbmJzcDthcmVuJ3QmbmJzcDttZXJnZWQmbmJzcDt5ZXQmbmJz
cDtpaXJjLgomZ3Q7CiZndDtJZiZuYnNwO3lvdSZuYnNwO25lZWQmbmJzcDt0byZuYnNwO3VzZSZu
YnNwO3RoZW0mbmJzcDt3aXRoJm5ic3A7Mi42LjMyJm5ic3A7eW91Jm5ic3A7bmVlZCZuYnNwO3Rv
Jm5ic3A7cG9ydCZuYnNwO3RoZW0mbmJzcDt5b3Vyc2VsZi4uJm5ic3A7CiZndDsKJmd0Oy0tJm5i
c3A7UGFzaQomZ3Q7CjwvcHJlPjwvZGl2Pjxicj48YnI+PHNwYW4gdGl0bGU9Im5ldGVhc2Vmb290
ZXIiPjxzcGFuIGlkPSJuZXRlYXNlX21haWxfZm9vdGVyIj48L3NwYW4+PC9zcGFuPg==
------=_Part_109271_786584118.1356267882339--



--===============2517206486562354085==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2517206486562354085==--



From xen-devel-bounces@lists.xen.org Sun Dec 23 13:45:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 13:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tmlrm-0005PO-Ff; Sun, 23 Dec 2012 13:45:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tmlrk-0005PJ-JX
	for xen-devel@lists.xen.org; Sun, 23 Dec 2012 13:45:20 +0000
Received: from [85.158.143.99:4569] by server-1.bemta-4.messagelabs.com id
	66/A7-28401-FEA07D05; Sun, 23 Dec 2012 13:45:19 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1356270317!23843107!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31325 invoked from network); 23 Dec 2012 13:45:18 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Dec 2012 13:45:18 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so5330210iac.18
	for <xen-devel@lists.xen.org>; Sun, 23 Dec 2012 05:45:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qMh7e7Q/xHoRmK9Ppngv6dJikBM6E7mrPmPh2iHHFrs=;
	b=zHaJ506Drpq/F11uJNMO+6VulyxD3ujTb2iFzGioqYUHxrvJr/tclvnXT8Xx2nXgE5
	RxM1vEEdiFkwXePwcoN4xdlckLRbSFqlzI9ckEcMf6uX73sEyc4EBGVPIDcnIlg3bxo1
	FruMBEyBAcWaINCUAlLa624G7Ekahgo1iKjkWsWe8kZParyeAvIGWI6XEGVj3YNpvx2N
	PdnECJkrQqosTMuEIdEZIHv+utVpJa1FFDVeolRCy4HvWYqd3JC+4GmGkzTkKirZ07VK
	v48fh9upgfB7Oh/jt3lq0GI7iIL4ZBeKqmISCysklMq3NgCzx3dzGYb3V03p6l0tIp6l
	aoow==
MIME-Version: 1.0
Received: by 10.50.178.106 with SMTP id cx10mr18570566igc.24.1356270316750;
	Sun, 23 Dec 2012 05:45:16 -0800 (PST)
Received: by 10.231.93.74 with HTTP; Sun, 23 Dec 2012 05:45:16 -0800 (PST)
In-Reply-To: <50D6713C.2000202@invisiblethingslab.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
	<50D4967602000078000B2114@nat28.tlf.novell.com>
	<3368417890369848263@unknownmsgid>
	<50D6713C.2000202@invisiblethingslab.com>
Date: Sun, 23 Dec 2012 08:45:16 -0500
Message-ID: <CAOvdn6UOkaZFb3Dtu_=TRCs6ZhH5yM=JjH2a-yJHAUYSzdWqPw@mail.gmail.com>
From: Ben Guthro <ben.guthro@gmail.com>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1680363967131937285=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1680363967131937285==
Content-Type: multipart/alternative; boundary=e89a8f5036c873064804d1854a3b

--e89a8f5036c873064804d1854a3b
Content-Type: text/plain; charset=ISO-8859-1

Interesting.

I had started by reverting the commit entirely, but settled on only
reverting the part causing the scheduling issues.
I'm not sure if I was as thorough in my testing this fix, across a lot of
laptop generations.

I'll test reverting the full commit in the new year, and report back.

I think that, at a minimum - the commit should get some scrutiny by people
who might understand the subtleties, and/or unintended side effects better
than I.


-Ben

On Sat, Dec 22, 2012 at 9:49 PM, Marek Marczykowski <
marmarek@invisiblethingslab.com> wrote:

> On 21.12.2012 17:18, Ben Guthro wrote:
> > On Dec 21, 2012, at 11:03 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >>>>> On 21.12.12 at 16:30, Marek Marczykowski <
> marmarek@invisiblethingslab.com> wrote:
> >>> Next bisection (this time with sched_ratelimit_us=0) gives this commit:
> >>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f
> >>
> >> Ben, wasn't this where your bisection ended up too?
> >
> > Yes, for the dom0_pin_vcpus issue.
>
> Ok, I can confirm that on xen-4.1-testing tip with above commit reverted
> completely problem has gone away, even without sched_ratelimit_us=0. With
> Ben's patch (partially revert) no reboot observed, but still sometimes only
> pCPU0 is used after resume.
>
> --
> Best Regards / Pozdrawiam,
> Marek Marczykowski
> Invisible Things Lab
>
>

--e89a8f5036c873064804d1854a3b
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Interesting.<div><br></div><div>I had started by reverting the commit entir=
ely, but settled on only reverting the part causing the scheduling issues.<=
/div><div>I&#39;m not sure if I was as thorough in my testing this fix, acr=
oss a lot of laptop generations.</div>
<div><br></div><div>I&#39;ll test reverting the full commit in the new year=
, and report back.</div><div><br></div><div>I think that, at a minimum - th=
e commit should get some scrutiny by people who might understand the subtle=
ties, and/or unintended side effects better than I.</div>
<div><br></div><div><br></div><div>-Ben<br><br><div class=3D"gmail_quote">O=
n Sat, Dec 22, 2012 at 9:49 PM, Marek Marczykowski <span dir=3D"ltr">&lt;<a=
 href=3D"mailto:marmarek@invisiblethingslab.com" target=3D"_blank">marmarek=
@invisiblethingslab.com</a>&gt;</span> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"HOEnZb"><div class=3D"h5">On 2=
1.12.2012 17:18, Ben Guthro wrote:<br>
&gt; On Dec 21, 2012, at 11:03 AM, Jan Beulich &lt;<a href=3D"mailto:JBeuli=
ch@suse.com">JBeulich@suse.com</a>&gt; wrote:<br>
&gt;<br>
&gt;&gt;&gt;&gt;&gt; On 21.12.12 at 16:30, Marek Marczykowski &lt;<a href=
=3D"mailto:marmarek@invisiblethingslab.com">marmarek@invisiblethingslab.com=
</a>&gt; wrote:<br>
&gt;&gt;&gt; Next bisection (this time with sched_ratelimit_us=3D0) gives t=
his commit:<br>
&gt;&gt;&gt; <a href=3D"http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d6=
7e4d12723f" target=3D"_blank">http://xenbits.xen.org/hg/xen-4.1-testing.hg/=
rev/d67e4d12723f</a><br>
&gt;&gt;<br>
&gt;&gt; Ben, wasn&#39;t this where your bisection ended up too?<br>
&gt;<br>
&gt; Yes, for the dom0_pin_vcpus issue.<br>
<br>
</div></div>Ok, I can confirm that on xen-4.1-testing tip with above commit=
 reverted<br>
completely problem has gone away, even without sched_ratelimit_us=3D0. With=
<br>
Ben&#39;s patch (partially revert) no reboot observed, but still sometimes =
only<br>
pCPU0 is used after resume.<br>
<div class=3D"HOEnZb"><div class=3D"h5"><br>
--<br>
Best Regards / Pozdrawiam,<br>
Marek Marczykowski<br>
Invisible Things Lab<br>
<br>
</div></div></blockquote></div><br></div>

--e89a8f5036c873064804d1854a3b--


--===============1680363967131937285==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1680363967131937285==--


From xen-devel-bounces@lists.xen.org Sun Dec 23 19:07:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Dec 2012 19:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmqsP-0007M0-8B; Sun, 23 Dec 2012 19:06:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpoussin@reactos.org>) id 1TmnXC-00067u-7X
	for xen-devel@lists.xensource.com; Sun, 23 Dec 2012 15:32:14 +0000
Received: from [85.158.143.35:60131] by server-2.bemta-4.messagelabs.com id
	F0/B8-30861-DF327D05; Sun, 23 Dec 2012 15:32:13 +0000
X-Env-Sender: hpoussin@reactos.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1356276729!11415136!1
X-Originating-IP: [212.27.42.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEyLjI3LjQyLjEgPT4gMTc1MDQ4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23077 invoked from network); 23 Dec 2012 15:32:10 -0000
Received: from smtp1-g21.free.fr (HELO smtp1-g21.free.fr) (212.27.42.1)
	by server-11.tower-21.messagelabs.com with SMTP;
	23 Dec 2012 15:32:10 -0000
Received: from localhost.localdomain (unknown [82.227.227.196])
	by smtp1-g21.free.fr (Postfix) with ESMTP id EE8499401C5;
	Sun, 23 Dec 2012 16:32:04 +0100 (CET)
From: =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>
To: qemu-devel@nongnu.org
Date: Sun, 23 Dec 2012 16:32:42 +0100
Message-Id: <1356276769-7357-3-git-send-email-hpoussin@reactos.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356276769-7357-1-git-send-email-hpoussin@reactos.org>
References: <1356276769-7357-1-git-send-email-hpoussin@reactos.org>
MIME-Version: 1.0
X-Mailman-Approved-At: Sun, 23 Dec 2012 19:06:19 +0000
Cc: =?UTF-8?q?Andreas=20F=C3=A4rber?= <andreas.faerber@web.de>,
	"open list:X86" <xen-devel@lists.xensource.com>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [Xen-devel] [RFC 2/8] xen_platform: do not use old_portio-style
	callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

U2lnbmVkLW9mZi1ieTogSGVydsOpIFBvdXNzaW5lYXUgPGhwb3Vzc2luQHJlYWN0b3Mub3JnPgot
LS0KIGh3L3hlbl9wbGF0Zm9ybS5jIHwgICAyMSArKysrKysrKysrLS0tLS0tLS0tLS0KIDEgZmls
ZSBjaGFuZ2VkLCAxMCBpbnNlcnRpb25zKCspLCAxMSBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS9ody94ZW5fcGxhdGZvcm0uYyBiL2h3L3hlbl9wbGF0Zm9ybS5jCmluZGV4IGE1NGU3YTIuLmFk
N2NiMDYgMTAwNjQ0Ci0tLSBhL2h3L3hlbl9wbGF0Zm9ybS5jCisrKyBiL2h3L3hlbl9wbGF0Zm9y
bS5jCkBAIC0yODAsNyArMjgwLDggQEAgc3RhdGljIHZvaWQgcGxhdGZvcm1fZml4ZWRfaW9wb3J0
X2luaXQoUENJWGVuUGxhdGZvcm1TdGF0ZSogcykKIAogLyogWGVuIFBsYXRmb3JtIFBDSSBEZXZp
Y2UgKi8KIAotc3RhdGljIHVpbnQzMl90IHhlbl9wbGF0Zm9ybV9pb3BvcnRfcmVhZGIodm9pZCAq
b3BhcXVlLCB1aW50MzJfdCBhZGRyKQorc3RhdGljIHVpbnQ2NF90IHhlbl9wbGF0Zm9ybV9pb3Bv
cnRfcmVhZGIodm9pZCAqb3BhcXVlLCBod2FkZHIgYWRkciwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIGlmIChhZGRy
ID09IDApIHsKICAgICAgICAgcmV0dXJuIHBsYXRmb3JtX2ZpeGVkX2lvcG9ydF9yZWFkYihvcGFx
dWUsIDApOwpAQCAtMjg5LDMwICsyOTAsMjggQEAgc3RhdGljIHVpbnQzMl90IHhlbl9wbGF0Zm9y
bV9pb3BvcnRfcmVhZGIodm9pZCAqb3BhcXVlLCB1aW50MzJfdCBhZGRyKQogICAgIH0KIH0KIAot
c3RhdGljIHZvaWQgeGVuX3BsYXRmb3JtX2lvcG9ydF93cml0ZWIodm9pZCAqb3BhcXVlLCB1aW50
MzJfdCBhZGRyLCB1aW50MzJfdCB2YWwpCitzdGF0aWMgdm9pZCB4ZW5fcGxhdGZvcm1faW9wb3J0
X3dyaXRlYih2b2lkICpvcGFxdWUsIGh3YWRkciBhZGRyLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgdWludDY0X3QgdmFsLCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKICAg
ICBQQ0lYZW5QbGF0Zm9ybVN0YXRlICpzID0gb3BhcXVlOwogCiAgICAgc3dpdGNoIChhZGRyKSB7
CiAgICAgY2FzZSAwOiAvKiBQbGF0Zm9ybSBmbGFncyAqLwotICAgICAgICBwbGF0Zm9ybV9maXhl
ZF9pb3BvcnRfd3JpdGViKG9wYXF1ZSwgMCwgdmFsKTsKKyAgICAgICAgcGxhdGZvcm1fZml4ZWRf
aW9wb3J0X3dyaXRlYihvcGFxdWUsIDAsICh1aW50MzJfdCl2YWwpOwogICAgICAgICBicmVhazsK
ICAgICBjYXNlIDg6Ci0gICAgICAgIGxvZ193cml0ZWIocywgdmFsKTsKKyAgICAgICAgbG9nX3dy
aXRlYihzLCAodWludDMyX3QpdmFsKTsKICAgICAgICAgYnJlYWs7CiAgICAgZGVmYXVsdDoKICAg
ICAgICAgYnJlYWs7CiAgICAgfQogfQogCi1zdGF0aWMgTWVtb3J5UmVnaW9uUG9ydGlvIHhlbl9w
Y2lfcG9ydGlvW10gPSB7Ci0gICAgeyAwLCAweDEwMCwgMSwgLnJlYWQgPSB4ZW5fcGxhdGZvcm1f
aW9wb3J0X3JlYWRiLCB9LAotICAgIHsgMCwgMHgxMDAsIDEsIC53cml0ZSA9IHhlbl9wbGF0Zm9y
bV9pb3BvcnRfd3JpdGViLCB9LAotICAgIFBPUlRJT19FTkRfT0ZfTElTVCgpCi19OwotCiBzdGF0
aWMgY29uc3QgTWVtb3J5UmVnaW9uT3BzIHhlbl9wY2lfaW9fb3BzID0gewotICAgIC5vbGRfcG9y
dGlvID0geGVuX3BjaV9wb3J0aW8sCisgICAgLnJlYWQgID0geGVuX3BsYXRmb3JtX2lvcG9ydF9y
ZWFkYiwKKyAgICAud3JpdGUgPSB4ZW5fcGxhdGZvcm1faW9wb3J0X3dyaXRlYiwKKyAgICAuaW1w
bC5taW5fYWNjZXNzX3NpemUgPSAxLAorICAgIC5pbXBsLm1heF9hY2Nlc3Nfc2l6ZSA9IDEsCiB9
OwogCiBzdGF0aWMgdm9pZCBwbGF0Zm9ybV9pb3BvcnRfYmFyX3NldHVwKFBDSVhlblBsYXRmb3Jt
U3RhdGUgKmQpCi0tIAoxLjcuMTAuNAoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhl
bi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

cnRfcmVhZGIodm9pZCAqb3BhcXVlLCBod2FkZHIgYWRkciwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBzaXplKQogewogICAgIGlmIChhZGRy
ID09IDApIHsKICAgICAgICAgcmV0dXJuIHBsYXRmb3JtX2ZpeGVkX2lvcG9ydF9yZWFkYihvcGFx
dWUsIDApOwpAQCAtMjg5LDMwICsyOTAsMjggQEAgc3RhdGljIHVpbnQzMl90IHhlbl9wbGF0Zm9y
bV9pb3BvcnRfcmVhZGIodm9pZCAqb3BhcXVlLCB1aW50MzJfdCBhZGRyKQogICAgIH0KIH0KIAot
c3RhdGljIHZvaWQgeGVuX3BsYXRmb3JtX2lvcG9ydF93cml0ZWIodm9pZCAqb3BhcXVlLCB1aW50
MzJfdCBhZGRyLCB1aW50MzJfdCB2YWwpCitzdGF0aWMgdm9pZCB4ZW5fcGxhdGZvcm1faW9wb3J0
X3dyaXRlYih2b2lkICpvcGFxdWUsIGh3YWRkciBhZGRyLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgdWludDY0X3QgdmFsLCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKICAg
ICBQQ0lYZW5QbGF0Zm9ybVN0YXRlICpzID0gb3BhcXVlOwogCiAgICAgc3dpdGNoIChhZGRyKSB7
CiAgICAgY2FzZSAwOiAvKiBQbGF0Zm9ybSBmbGFncyAqLwotICAgICAgICBwbGF0Zm9ybV9maXhl
ZF9pb3BvcnRfd3JpdGViKG9wYXF1ZSwgMCwgdmFsKTsKKyAgICAgICAgcGxhdGZvcm1fZml4ZWRf
aW9wb3J0X3dyaXRlYihvcGFxdWUsIDAsICh1aW50MzJfdCl2YWwpOwogICAgICAgICBicmVhazsK
ICAgICBjYXNlIDg6Ci0gICAgICAgIGxvZ193cml0ZWIocywgdmFsKTsKKyAgICAgICAgbG9nX3dy
aXRlYihzLCAodWludDMyX3QpdmFsKTsKICAgICAgICAgYnJlYWs7CiAgICAgZGVmYXVsdDoKICAg
ICAgICAgYnJlYWs7CiAgICAgfQogfQogCi1zdGF0aWMgTWVtb3J5UmVnaW9uUG9ydGlvIHhlbl9w
Y2lfcG9ydGlvW10gPSB7Ci0gICAgeyAwLCAweDEwMCwgMSwgLnJlYWQgPSB4ZW5fcGxhdGZvcm1f
aW9wb3J0X3JlYWRiLCB9LAotICAgIHsgMCwgMHgxMDAsIDEsIC53cml0ZSA9IHhlbl9wbGF0Zm9y
bV9pb3BvcnRfd3JpdGViLCB9LAotICAgIFBPUlRJT19FTkRfT0ZfTElTVCgpCi19OwotCiBzdGF0
aWMgY29uc3QgTWVtb3J5UmVnaW9uT3BzIHhlbl9wY2lfaW9fb3BzID0gewotICAgIC5vbGRfcG9y
dGlvID0geGVuX3BjaV9wb3J0aW8sCisgICAgLnJlYWQgID0geGVuX3BsYXRmb3JtX2lvcG9ydF9y
ZWFkYiwKKyAgICAud3JpdGUgPSB4ZW5fcGxhdGZvcm1faW9wb3J0X3dyaXRlYiwKKyAgICAuaW1w
bC5taW5fYWNjZXNzX3NpemUgPSAxLAorICAgIC5pbXBsLm1heF9hY2Nlc3Nfc2l6ZSA9IDEsCiB9
OwogCiBzdGF0aWMgdm9pZCBwbGF0Zm9ybV9pb3BvcnRfYmFyX3NldHVwKFBDSVhlblBsYXRmb3Jt
U3RhdGUgKmQpCi0tIAoxLjcuMTAuNAoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhl
bi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==
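Editorial aside: the base64 payload above is the patch itself. The conversion it performs, per the subject line, is replacing an `old_portio`-style dispatch table with direct `MemoryRegionOps` `.read`/`.write` callbacks. A standalone sketch of that shape follows; the struct here is a minimal stand-in for QEMU's real `MemoryRegionOps` (normally from `memory.h`), and the callback bodies are simplified placeholders, not the patch's actual logic:

```c
#include <stdint.h>

/* Minimal stand-ins so the sketch compiles without QEMU headers. */
typedef uint64_t hwaddr;

typedef struct {
    uint64_t (*read)(void *opaque, hwaddr addr, unsigned size);
    void (*write)(void *opaque, hwaddr addr, uint64_t val, unsigned size);
    struct { unsigned min_access_size, max_access_size; } impl;
} MemoryRegionOps;

static uint8_t platform_flags; /* toy backing state for address 0 */

/* New-style callback: addr and size come from the memory API
 * instead of the old_portio dispatch table. */
static uint64_t ioport_readb(void *opaque, hwaddr addr, unsigned size)
{
    (void)opaque; (void)size;
    return addr == 0 ? platform_flags : 0xff;
}

static void ioport_writeb(void *opaque, hwaddr addr, uint64_t val,
                          unsigned size)
{
    (void)opaque; (void)size;
    if (addr == 0)
        platform_flags = (uint8_t)val;
}

/* Mirrors the patch's ops table: byte-wide accesses only. */
static const MemoryRegionOps pci_io_ops = {
    .read  = ioport_readb,
    .write = ioport_writeb,
    .impl.min_access_size = 1,
    .impl.max_access_size = 1,
};
```

The `.impl` access-size bounds tell the memory core to split wider guest accesses into byte operations, which is why the byte-oriented callbacks can ignore `size`.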

From xen-devel-bounces@lists.xen.org Mon Dec 24 02:19:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 02:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tmxcp-0005NZ-Bn; Mon, 24 Dec 2012 02:18:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1Tmxcn-0005NU-Me
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 02:18:41 +0000
Received: from [85.158.143.99:5351] by server-3.bemta-4.messagelabs.com id
	68/A8-18211-08BB7D05; Mon, 24 Dec 2012 02:18:40 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1356315519!23220216!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6676 invoked from network); 24 Dec 2012 02:18:40 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-11.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 02:18:40 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 23 Dec 2012 18:18:38 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,343,1355126400"; d="scan'208";a="269245766"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 23 Dec 2012 18:18:37 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 23 Dec 2012 18:18:37 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 23 Dec 2012 18:18:37 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.85]) with mapi id
	14.01.0355.002; Mon, 24 Dec 2012 10:18:35 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: test report for xen-unstable tree with upstream QEMU
Thread-Index: Ac3hfK0opfC1vypmRGWIBv40kvyMsA==
Date: Mon, 24 Dec 2012 02:18:34 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A10238527@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dai, Yan" <yan.dai@intel.com>, "XU,
	YONGWEIX" <yongweix.xu@intel.com>, "Liu,
	RongrongX" <rongrongx.liu@intel.com>, "Liu,
	SongtaoX" <songtaox.liu@intel.com>, "Zhou, Chao" <chao.zhou@intel.com>
Subject: [Xen-devel] test report for xen-unstable tree with upstream QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,
We did some testing for Xen-unstable tree with upstream QEMU.
We covered basic guest booting up, power management, VT-d, SR-IOV features.
We found 4 new bugs which only exist in upstream QEMU not in qemu-xen.

We followed the below wiki page to use upstream QEMU.
http://wiki.xen.org/wiki/QEMU_Upstream

test tree:
xen-unstable-tree.hg: C/S 26193 (about 20 days ago)
qemu.git: commit e9bff10f8db (about 20 days ago)
Dom0: Linux 3.6.9 release version.

test machine:
Intel Westmere-EP and SandyBridge-EP systems.

new bugs (which don't exist with qemu-xen-unstable tree):
1. 'maxvcpus=NUM' item is not supported in upstream QEMU
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1837
  -- This blocks vCPU hot-plug for HVM guests.
2. Guest console hangs after save/restore or live-migration when setting 'hpet=0' in guest config file
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1838
3. 'xen_platform_pci=0' setting cannot make the guest use emulated PCI devices by default
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1839
4. Guest free memory with upstream qemu is 14MB lower than that with qemu-xen-unstable.git
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1836
  -- This might not be a bug; we are just curious to know what the additional 14MB of memory is for.
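Editorial aside: the three items exercised by bugs 1-3 are xl guest-config options. A minimal illustrative HVM config fragment touching them might look like the following (values are invented for illustration, not taken from the report):

```
# illustrative xl HVM guest config fragment
builder = "hvm"
vcpus = 2
maxvcpus = 4          # bug 1: not supported with upstream QEMU
hpet = 0              # bug 2: console hang after save/restore
xen_platform_pci = 0  # bug 3: guest does not fall back to emulated devices
```

Per the wiki page cited above, the device model selection in the guest config determines whether upstream QEMU or qemu-xen-traditional is used.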


Best Regards,
     Yongjie (Jay)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 02:25:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 02:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TmxjK-0005WX-D0; Mon, 24 Dec 2012 02:25:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1TmxjI-0005WS-3a
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 02:25:24 +0000
Received: from [193.109.254.147:46402] by server-2.bemta-14.messagelabs.com id
	1C/10-30744-31DB7D05; Mon, 24 Dec 2012 02:25:23 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1356315921!6458612!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1NjQ1Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29199 invoked from network); 24 Dec 2012 02:25:22 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-5.tower-27.messagelabs.com with SMTP;
	24 Dec 2012 02:25:22 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 23 Dec 2012 18:25:20 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,343,1355126400"; d="scan'208";a="266340232"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga001.fm.intel.com with ESMTP; 23 Dec 2012 18:24:58 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 23 Dec 2012 18:24:58 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 23 Dec 2012 18:24:58 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 24 Dec 2012 10:24:28 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: (updated) test report for xen-unstable tree with upstream QEMU
Thread-Index: Ac3hfcctG/yMzf4rSna8JLixxC4VNA==
Date: Mon, 24 Dec 2012 02:24:27 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A10238547@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dai, Yan" <yan.dai@intel.com>, "XU, YONGWEIX" <yongweix.xu@intel.com>,
	"Liu, RongrongX" <rongrongx.liu@intel.com>, "Liu,
	SongtaoX" <songtaox.liu@intel.com>, "Zhou, Chao" <chao.zhou@intel.com>
Subject: [Xen-devel] (updated) test report for xen-unstable tree with
	upstream QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,
We did some testing for Xen-unstable tree with upstream QEMU.
We covered basic guest booting up, power management, VT-d, SR-IOV features.
We found 4 new bugs which only exist in upstream QEMU not in qemu-xen.

We followed the below wiki page to use upstream QEMU.
http://wiki.xen.org/wiki/QEMU_Upstream

test tree:
xen-unstable-tree.hg: C/S 26193 (about 20 days ago)
qemu.git: commit e9bff10f8db (about 20 days ago)
Dom0: Linux 3.6.9 release version.

test machine:
Intel Westmere-EP and SandyBridge-EP systems.

new bugs (which don't exist with qemu-xen-unstable tree):
1. 'maxvcpus=NUM' item is not supported in upstream QEMU
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1837
  -- This blocks vCPU hot-plug for HVM guests.
2. Guest console hangs after save/restore or live-migration when setting 'hpet=0' in guest config file
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1838
3. 'xen_platform_pci=0' setting cannot make the guest use emulated PCI devices by default
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1839
4. Guest free memory with upstream qemu is 14MB lower than that with qemu-xen-unstable.git
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1836
  -- This might not be a bug; we are just curious to know what the additional 14MB of memory is for.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 09:02:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 09:02:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn3uq-00009Q-AX; Mon, 24 Dec 2012 09:01:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn3uo-00009K-4l
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 09:01:42 +0000
Received: from [85.158.139.83:25985] by server-10.bemta-5.messagelabs.com id
	09/43-13383-4F918D05; Mon, 24 Dec 2012 09:01:40 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1356339698!23886980!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18502 invoked from network); 24 Dec 2012 09:01:38 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-11.tower-182.messagelabs.com with SMTP;
	24 Dec 2012 09:01:38 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 01:01:37 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,346,1355126400"; d="scan'208";a="269352120"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga002.fm.intel.com with ESMTP; 24 Dec 2012 01:01:36 -0800
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 24 Dec 2012 01:01:36 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 24 Dec 2012 01:01:36 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 24 Dec 2012 17:01:34 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
	walker
Thread-Index: AQHN3rDKECgI9Bwg2E+NdSX9ZaD3v5gnpETA
Date: Mon, 24 Dec 2012 09:01:33 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BF540@SHSMSX101.ccr.corp.intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-4-git-send-email-xiantao.zhang@intel.com>
	<20121220125137.GJ80837@ocelot.phlegethon.org>
In-Reply-To: <20121220125137.GJ80837@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"Dong, Eddie" <eddie.dong@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang, 
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
 walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, December 20, 2012 8:52 PM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xen.org; keir@xen.org; Nakajima, Jun; Dong, Eddie;
> JBeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest
> ept's walker
> 
> At 23:43 +0800 on 20 Dec (1356047024), Xiantao Zhang wrote:
> > From: Zhang Xiantao <xiantao.zhang@intel.com>
> >
> > Implement guest EPT PT walker; some logic is based on shadow's ia32e PT
> > walker. During the PT walk, if the target pages are not in memory,
> > use the RETRY mechanism to get a chance to bring the target page back.
> >
> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> This is much nicer than v1, thanks.  I have some comments below, and the
> whole thing needs to be checked for whitespace mangling.
> 
> > +static bool_t nept_rwx_bits_check(ept_entry_t e) {
> > +    /*write only or write/execute only*/
> > +    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
> > +
> > +    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
> > +        return 1;
> > +
> > +    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
> > +                        VMX_EPT_EXEC_ONLY_SUPPORTED))
> 
> In a later patch you add VMX_EPT_EXEC_ONLY_SUPPORTED to this field.
> How can that work when running on a CPU that doesn't support exec-only?
> The nested-ept tables will have exec-only mapping in them which the CPU
> will reject.

Fixed, and it will consult the host's capability first. 

> > +done:
> > +    ret = EPT_TRANSLATE_SUCCEED;
> > +    goto out;
> > +
> > +map_err:
> > +    if ( rc == _PAGE_PAGED )
> > +        ret = EPT_TRANSLATE_RETRY;
> > +    else
> > +        ret = EPT_TRANSLATE_ERR_PAGE;
> 
> What does this error code mean?  The caller just retries the faulting
> instruction when it sees it, which sounds wrong.  Why not just return
> EPT_TRANSLATE_MISCONFIG if the guest uses an unmappable frame for EPT
> tables?

Okay. Although this doesn't fully follow the SDM, injecting an EPT misconfiguration in this case is a better way than hanging there. 

> > +    default:
> > +        rc = EPT_TRANSLATE_UNSUPPORTED;
> > +        gdprintk(XENLOG_ERR, "Unsupported ept translation
> > + type!:%d\n", rc);
> 
> Just BUG() here and get rid of EPT_TRANSLATE_UNSUPPORTED and
> NESTEDHVM_PAGEFAULT_UNHANDLED.  The function that provided rc is
> right above and we can see it hasn't got any other return values.

Okay, this is also what version 1 does. 

> > --- a/xen/arch/x86/mm/shadow/multi.c
> > +++ b/xen/arch/x86/mm/shadow/multi.c
> > @@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
> >      /* Translate the GFN to an MFN */
> >      ASSERT(!paging_locked_by_me(v->domain));
> >      mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
> > -
> > +
> 
> This stray change should be dropped.


Dropped. 

> > +typedef enum {
> > +    ept_access_n     = 0, /* No access permissions allowed */
> > +    ept_access_r     = 1,
> > +    ept_access_w     = 2,
> > +    ept_access_rw    = 3,
> > +    ept_access_x     = 4,
> > +    ept_access_rx    = 5,
> > +    ept_access_wx    = 6,
> > +    ept_access_all   = 7,
> > +} ept_access_t;
> 
> This enum isn't used anywhere.

Actually, it is used in the function nept_rwx_bits_check. :)
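Editorial aside: the check under discussion, as quoted from the patch above, can be restated standalone. The enum values and `EPTE_RWX_MASK` name mirror the patch; folding the exec-only capability into a plain parameter (rather than the patch's `NEPT_VPID_CAP_BITS` field) is a simplification for illustration:

```c
#include <stdint.h>

/* EPT entry R/W/X permission combinations, as in the quoted patch. */
typedef enum {
    ept_access_n   = 0, /* no access permitted */
    ept_access_r   = 1,
    ept_access_w   = 2,
    ept_access_rw  = 3,
    ept_access_x   = 4,
    ept_access_rx  = 5,
    ept_access_wx  = 6,
    ept_access_all = 7,
} ept_access_t;

#define EPTE_RWX_MASK 0x7u

/* Return 1 when the permission combination is invalid: write-only,
 * write+execute-only, or execute-only on a CPU without exec-only
 * support (the capability is a plain flag here for illustration). */
static int rwx_bits_invalid(uint64_t epte, int exec_only_supported)
{
    uint8_t rwx_bits = epte & EPTE_RWX_MASK;

    if (rwx_bits == ept_access_w || rwx_bits == ept_access_wx)
        return 1;
    if (rwx_bits == ept_access_x && !exec_only_supported)
        return 1;
    return 0;
}
```

This also illustrates Tim's point: whether `ept_access_x` alone is legal depends on a CPU capability, so the guest-visible capability bits must be derived from what the host actually supports.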

Xiantao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

To: Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
	walker
Thread-Index: AQHN3rDKECgI9Bwg2E+NdSX9ZaD3v5gnpETA
Date: Mon, 24 Dec 2012 09:01:33 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644033BF540@SHSMSX101.ccr.corp.intel.com>
References: <1356018231-26440-1-git-send-email-xiantao.zhang@intel.com>
	<1356018231-26440-4-git-send-email-xiantao.zhang@intel.com>
	<20121220125137.GJ80837@ocelot.phlegethon.org>
In-Reply-To: <20121220125137.GJ80837@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"Dong, Eddie" <eddie.dong@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang, 
	Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest ept's
 walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, December 20, 2012 8:52 PM
> To: Zhang, Xiantao
> Cc: xen-devel@lists.xen.org; keir@xen.org; Nakajima, Jun; Dong, Eddie;
> JBeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH v3 03/10] nested_ept: Implement guest
> ept's walker
> 
> At 23:43 +0800 on 20 Dec (1356047024), Xiantao Zhang wrote:
> > From: Zhang Xiantao <xiantao.zhang@intel.com>
> >
> > Implment guest EPT PT walker, some logic is based on shadow's ia32e PT
> > walker. During the PT walking, if the target pages are not in memory,
> > use RETRY mechanism and get a chance to let the target page back.
> >
> > Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
> 
> This is much nicer than v1, thanks.  I have some comments below, and the
> whole thing needs to be checked for whitespace mangling.
> 
> > +static bool_t nept_rwx_bits_check(ept_entry_t e) {
> > +    /*write only or write/execute only*/
> > +    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
> > +
> > +    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
> > +        return 1;
> > +
> > +    if ( rwx_bits == ept_access_x && !(NEPT_VPID_CAP_BITS &
> > +                        VMX_EPT_EXEC_ONLY_SUPPORTED))
> 
> In a later patch you add VMX_EPT_EXEC_ONLY_SUPPORTED to this field.
> How can that work when running on a CPU that doesn't support exec-only?
> The nested-ept tables will have exec-only mapping in them which the CPU
> will reject.

Fixed; it now consults the host's capability first. 

> > +done:
> > +    ret = EPT_TRANSLATE_SUCCEED;
> > +    goto out;
> > +
> > +map_err:
> > +    if ( rc == _PAGE_PAGED )
> > +        ret = EPT_TRANSLATE_RETRY;
> > +    else
> > +        ret = EPT_TRANSLATE_ERR_PAGE;
> 
> What does this error code mean?  The caller just retries the faulting
> instruction when it sees it, which sounds wrong.  Why not just return
> EPT_TRANSLATE_MISCONFIG if the guest uses an unmappable frame for EPT
> tables?

Okay. Although this doesn't fully follow the SDM, injecting an EPT misconfiguration in this case is a better approach than hanging there. 

> > +    default:
> > +        rc = EPT_TRANSLATE_UNSUPPORTED;
> > +        gdprintk(XENLOG_ERR, "Unsupported ept translation
> > + type!:%d\n", rc);
> 
> Just BUG() here and get rid of EPT_TRANSLATE_UNSUPPORTED and
> NESTEDHVM_PAGEFAULT_UNHANDLED.  The function that provided rc is
> right above and we can see it hasn't got any other return values.

Okay, this is also what version 1 does. 

> > --- a/xen/arch/x86/mm/shadow/multi.c
> > +++ b/xen/arch/x86/mm/shadow/multi.c
> > @@ -4582,7 +4582,7 @@ static mfn_t emulate_gva_to_mfn(struct vcpu *v,
> >      /* Translate the GFN to an MFN */
> >      ASSERT(!paging_locked_by_me(v->domain));
> >      mfn = get_gfn(v->domain, _gfn(gfn), &p2mt);
> > -
> > +
> 
> This stray change should be dropped.


Dropped. 

> > +typedef enum {
> > +    ept_access_n     = 0, /* No access permissions allowed */
> > +    ept_access_r     = 1,
> > +    ept_access_w     = 2,
> > +    ept_access_rw    = 3,
> > +    ept_access_x     = 4,
> > +    ept_access_rx    = 5,
> > +    ept_access_wx    = 6,
> > +    ept_access_all   = 7,
> > +} ept_access_t;
> 
> This enum isn't used anywhere.

Actually, it is used in the function nept_rwx_bits_check. :)

Xiantao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 09:07:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 09:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn40F-0000NG-CU; Mon, 24 Dec 2012 09:07:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1Tn40D-0000N7-T3
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 09:07:17 +0000
Received: from [85.158.143.99:54792] by server-3.bemta-4.messagelabs.com id
	3F/74-18211-54B18D05; Mon, 24 Dec 2012 09:07:17 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356340035!21147642!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMzNDAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4492 invoked from network); 24 Dec 2012 09:07:16 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 09:07:16 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 24 Dec 2012 01:07:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,346,1355126400"; d="scan'208";a="262017424"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 24 Dec 2012 01:07:09 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 24 Dec 2012 01:07:07 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 24 Dec 2012 01:07:07 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 24 Dec 2012 17:07:05 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: test report for Xen 4.2.1 release
Thread-Index: Ac3htgdRgBvt22bCQBK0LDYKyG+Q4w==
Date: Mon, 24 Dec 2012 09:07:04 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A10238730@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dai, Yan" <yan.dai@intel.com>, "XU,
	YONGWEIX" <yongweix.xu@intel.com>, "Liu,
	RongrongX" <rongrongx.liu@intel.com>, "Liu,
	SongtaoX" <songtaox.liu@intel.com>, "Zhou, Chao" <chao.zhou@intel.com>
Subject: [Xen-devel] test report for Xen 4.2.1 release
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,
We did some testing of the Xen 4.2.1 RC1, RC2 and final RELEASE versions on Intel SandyBridge and Westmere platforms.
We found no regressions; no bugs from our bug tracking list were fixed in this release.
We covered basic Windows & Linux guest boot-up, live migration, power management, VT-d, SR-IOV, and vCPU hot-plug features.

Xen: 4.2.1 RC1/RC2/RELEASE (xen-4.2-testing.hg tree)
Dom0: Linux 3.7.1


Best Regards,
     Yongjie (Jay)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zh-00025B-L6; Mon, 24 Dec 2012 14:27:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zf-00024Z-A6
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:03 +0000
Received: from [85.158.137.99:13527] by server-4.bemta-3.messagelabs.com id
	41/98-31835-63668D05; Mon, 24 Dec 2012 14:27:02 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356359217!12860986!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20588 invoked from network); 24 Dec 2012 14:26:59 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-217.messagelabs.com with SMTP;
	24 Dec 2012 14:26:59 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524929"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:56 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:27 +0800
Message-Id: <1356359194-5321-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement a guest EPT page-table walker; some of the logic is based
on the shadow code's ia32e PT walker. During the walk, if a target
page is not in memory, use the RETRY mechanism to give the page a
chance to be paged back in.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
 xen/arch/x86/mm/guest_walk.c        |   16 ++-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  287 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h   |   31 ++++
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   13 ++
 11 files changed, 394 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f63ee52..bd7314f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
             /* An error occured while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 53f6a4d..1d3090d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -939,9 +939,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) == EXIT_REASON_EPT_VIOLATION )
+    {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1488,8 +1497,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    int rc;
+    unsigned long gfn;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc )
+    {
+    case EPT_TRANSLATE_SUCCEED:
+        *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+        rc = NESTEDHVM_PAGEFAULT_DONE;
+        break;
+    case EPT_TRANSLATE_VIOLATION:
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = NESTEDHVM_PAGEFAULT_INJECT;
+        nvmx->ept_exit.exit_reason = exit_reason;
+        nvmx->ept_exit.exit_qual = exit_qual;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        rc = NESTEDHVM_PAGEFAULT_RETRY;
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Guest EPT translation error: %d\n", rc);
+        BUG();
+        break;
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 0f08fb0..1c165c6 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
-                                   gfn_t gfn, 
+void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
-                                   uint32_t *rc) 
+                                   p2m_query_t q,
+                                   uint32_t *rc)
 {
     struct page_info *page;
     void *map;
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..1463d81
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,287 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for a guest in the nested case.
+ *
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Bits that must be reserved (zero) in entries at all levels */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) - 1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+/*
+ * TODO: Leave this as 0 for now so the code compiles; the real
+ * capabilities are defined in subsequent patches.
+ */
+#define NEPT_VPID_CAP_BITS 0
+
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+bool_t nept_sp_entry(ept_entry_t e)
+{
+    return !!(e.sp);
+}
+
+static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level )
+    {
+    case 1:
+        break;
+    case 2 ... 3:
+        if ( nept_sp_entry(e) )
+            rsv_bits |= ((1ull << (9 * (level - 1))) - 1) << PAGE_SHIFT;
+        else
+            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
+        break;
+    case 4:
+        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
+    break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT paging level: %d\n", level);
+        BUG();
+        break;
+    }
+    return !!(e.epte & rsv_bits);
+}
+
+/* EMT checking */
+static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
+{
+    if ( e.sp || level == 1 )
+    {
+        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
+                e.emt == EPT_EMT_RSV2 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
+}
+
+/* nept's non-present check */
+static bool_t nept_non_present_check(ept_entry_t e)
+{
+    if ( e.epte & EPTE_RWX_MASK )
+        return 0;
+    return 1;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    uint64_t caps = NEPT_VPID_CAP_BITS;
+
+    if ( !cpu_has_vmx_ept_exec_only_supported )
+        caps &= ~VMX_EPT_EXEC_ONLY_SUPPORTED;
+    return caps;
+}
+
+static bool_t nept_rwx_bits_check(ept_entry_t e)
+{
+    /* write-only or write/execute-only */
+    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
+
+    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
+        return 1;
+
+    if ( rwx_bits == ept_access_x && !(nept_get_ept_vpid_cap() &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED) )
+        return 1;
+
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
+{
+    return (nept_rsv_bits_check(e, level) ||
+                nept_emt_bits_check(e, level) ||
+                nept_rwx_bits_check(e));
+}
+
+static int ept_lvl_table_offset(unsigned long gpa, int lvl)
+{
+    return (gpa >> (EPT_L4_PAGETABLE_SHIFT - (4 - lvl) * 9)) &
+                (EPT_PAGETABLE_ENTRIES - 1);
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
+{
+    int lvl;
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    mfn_t lxmfn;
+    ept_entry_t *lxp = NULL;
+
+    memset(gw, 0, sizeof(*gw));
+
+    for (lvl = 4; lvl > 0; lvl--)
+    {
+        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
+        if ( !lxp )
+            goto map_err;
+        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
+        unmap_domain_page(lxp);
+        put_page(mfn_to_page(mfn_x(lxmfn)));
+
+        if ( nept_non_present_check(gw->lxe[lvl]) )
+            goto non_present;
+
+        if ( nept_misconfiguration_check(gw->lxe[lvl], lvl) )
+            goto misconfig_err;
+
+        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
+        {
+            /* Generate a fake l1 table entry so callers don't all
+             * have to understand superpages. */
+            unsigned long gfn_lvl_mask = (1ull << ((lvl - 1) * 9)) - 1;
+            gfn_t start = _gfn(gw->lxe[lvl].mfn);
+            /* Increment the pfn by the right number of 4k pages. */
+            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
+                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
+            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
+                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
+            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
+            goto done;
+        }
+        if ( lvl > 1 )
+            base_gfn = _gfn(gw->lxe[lvl].mfn);
+    }
+
+    /* We only reach here if this is not a superpage entry. */
+    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
+    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto out;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+    {
+        ret = EPT_TRANSLATE_RETRY;
+        goto out;
+    }
+    /* fall through to misconfig error */
+misconfig_err:
+    ret = EPT_TRANSLATE_MISCONFIG;
+    goto out;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+    /* fall through. */
+out:
+    return ret;
+}
+
+/* Translate an L2 guest address to an L1 gpa via the L1 EPT paging structure */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    ept_walk_t gw;
+    rwx_acc &= EPTE_RWX_MASK;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc )
+    {
+    case EPT_TRANSLATE_SUCCEED:
+        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                            EPTE_RWX_MASK;
+            *page_order = 9;
+        }
+        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                    gw.lxe[1].epte & EPTE_RWX_MASK;
+            *page_order = 0;
+        }
+        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
+            *page_order = 18;
+        }
+        else
+        {
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+            BUG();
+        }
+        if ( nept_permission_check(rwx_acc, rwx_bits) )
+        {
+            *l1gfn = gw.lxe[0].mfn;
+            break;
+        }
+        rc = EPT_TRANSLATE_VIOLATION;
+    /* Fall through to EPT violation if permission check fails. */
+    case EPT_TRANSLATE_VIOLATION:
+        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+        *exit_reason = EXIT_REASON_EPT_VIOLATION;
+        break;
+
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = EPT_TRANSLATE_MISCONFIG;
+        *exit_qual = 0;
+        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+        BUG();
+        break;
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..db8a0b6 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..649c511 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..c73946f 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -51,6 +51,22 @@ typedef union {
     u64 epte;
 } ept_entry_t;
 
+typedef struct {
+    /* Use lxe[0] to save the walk result. */
+    ept_entry_t lxe[5];
+} ept_walk_t;
+
+typedef enum {
+    ept_access_n     = 0, /* No access permissions allowed */
+    ept_access_r     = 1, /* Read only */
+    ept_access_w     = 2, /* Write only */
+    ept_access_rw    = 3, /* Read & Write */
+    ept_access_x     = 4, /* Exec Only */
+    ept_access_rx    = 5, /* Read & Exec */
+    ept_access_wx    = 6, /* Write & Exec */
+    ept_access_all   = 7, /* Full permissions */
+} ept_access_t;
+
 #define EPT_TABLE_ORDER         9
 #define EPTE_SUPER_PAGE_MASK    0x80
 #define EPTE_MFN_MASK           0xffffffffff000ULL
@@ -60,6 +76,17 @@ typedef union {
 #define EPTE_AVAIL1_SHIFT       8
 #define EPTE_EMT_SHIFT          3
 #define EPTE_IGMT_SHIFT         6
+#define EPTE_RWX_MASK           0x7
+#define EPTE_FLAG_MASK          0x7f
+
+#define EPT_EMT_UC              0
+#define EPT_EMT_WC              1
+#define EPT_EMT_RSV0            2
+#define EPT_EMT_RSV1            3
+#define EPT_EMT_WT              4
+#define EPT_EMT_WP              5
+#define EPT_EMT_WB              6
+#define EPT_EMT_RSV2            7
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
@@ -191,6 +218,9 @@ void vmx_update_secondary_exec_control(struct vcpu *v);
 
 extern u64 vmx_ept_vpid_cap;
 
+#define cpu_has_vmx_ept_exec_only_supported        \
+    (vmx_ept_vpid_cap & VMX_EPT_EXEC_ONLY_SUPPORTED)
+
 #define cpu_has_vmx_ept_wl4_supported           \
     (vmx_ept_vpid_cap & VMX_EPT_WALK_LENGTH_4_SUPPORTED)
 #define cpu_has_vmx_ept_mt_uc                   \
@@ -419,6 +449,7 @@ void update_guest_eip(void);
 #define _EPT_GLA_FAULT              8
 #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
 
+#define EPT_L4_PAGETABLE_SHIFT      39
 #define EPT_PAGETABLE_ENTRIES       512
 
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..97554bf 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_SUCCEED       0
+#define EPT_TRANSLATE_VIOLATION     1
+#define EPT_TRANSLATE_MISCONFIG     2
+#define EPT_TRANSLATE_RETRY         3
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +201,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zb-000245-4i; Mon, 24 Dec 2012 14:26:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zZ-00023z-Pz
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:26:58 +0000
Received: from [85.158.139.83:10033] by server-16.bemta-5.messagelabs.com id
	D2/55-09208-03668D05; Mon, 24 Dec 2012 14:26:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1356359215!27132525!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5952 invoked from network); 24 Dec 2012 14:26:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-182.messagelabs.com with SMTP;
	24 Dec 2012 14:26:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524896"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:52 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:24 +0800
Message-Id: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 00/10] Nested VMX: Add virtual EPT & VPID
	support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use the EPT hardware for the L2 guest's memory virtualization.
This can improve the L2 guest's performance significantly:
according to our testing, some benchmarks show a > 5x performance gain.

Changes from v1:
Updated the patches according to Tim's comments.
1. Patch 03: Enhance the virtual EPT walker's logic.
2. Patch 04: Add a new field in struct p2m_domain and use it to store
   EPT-specific data. For the host p2m it saves the L1 VMM's EPT data,
   and for nested p2ms it saves the nested EPT's data.
3. Patch 07: Strictly check the host's p2m access type.
4. Other patches: some whitespace mangling fixes.

Changes from v2:
Addressed comments from Jan and Jun:
1. Added Acked-by tags for the patches already reviewed by Tim.
2. Fixed one whitespace mangling issue in Patch 08.
3. Added comments describing the meaning of the return value of
   hvm_hap_nested_page_fault in Patch 05.
4. Added logic to handle the default case of two switch statements.

Changes from v3:
1. Re-checked all patches for whitespace mangling issues.

2. Addressed Jan's comments on Patch 08 and Patch 09: once X86EMUL_EXCEPTION
   is returned, the callee is responsible for handling the exception before
   it returns.

3. Addressed Tim's comments on Patch 03, Patch 04 and Patch 07:
   Patch 03: If the host doesn't support the exec-only capability, don't
             expose this feature to the L1 VMM. If mapping the guest's EPT
             table fails, inject an EPT misconfiguration error to L1.
   Patch 04: Re-organize the p2m's and nested p2m's {init/teardown} logic.
   Patch 07: Initialize p2ma_21 -> p2m_access_rwx, so as not to change SVM's
             behavior.

Zhang Xiantao (10):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nVMX: virtualize VPID capability to nested VMM.
  nEPT: expose EPT & VPID capabilities to L1 VMM

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 ++++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    8 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   91 ++++------
 xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++++--
 xen/arch/x86/mm/guest_walk.c            |   16 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  298 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   96 ++++++-----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |  104 +++++++++---
 xen/arch/x86/mm/p2m.c                   |  159 +++++++++++------
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   24 ++--
 xen/include/asm-x86/hvm/vmx/vmx.h       |   41 ++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   28 +++-
 xen/include/asm-x86/p2m.h               |   20 ++-
 21 files changed, 932 insertions(+), 226 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zp-00029h-Fj; Mon, 24 Dec 2012 14:27:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zn-00028L-U0
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:12 +0000
Received: from [85.158.143.99:9547] by server-3.bemta-4.messagelabs.com id
	C5/79-18211-F3668D05; Mon, 24 Dec 2012 14:27:11 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8379 invoked from network); 24 Dec 2012 14:27:08 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:08 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:06 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524989"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:05 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:33 +0800
Message-Id: <1356359194-5321-10-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 09/10] nVMX: virtualize VPID capability to
	nested VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Virtualize VPID for the nested VMM: use the host's VPID to
emulate the guest's VPID. On each virtual vmentry, if the
guest's VPID has changed, allocate a new host VPID for the
L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   11 ++++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   52 +++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 94cac17..0e479f8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2578,10 +2578,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             update_guest_eip();
         break;
 
+    case EXIT_REASON_INVVPID:
+        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
+
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
          * running in guest context, and the CPU checks that before getting
@@ -2699,8 +2703,11 @@ void vmx_vmenter_helper(void)
 
     if ( !cpu_has_vmx_vpid )
         goto out;
+    if ( nestedhvm_vcpu_in_guestmode(curr) )
+        p_asid = &vcpu_nestedhvm(curr).nv_n2asid;
+    else
+        p_asid = &curr->arch.hvm_vcpu.n1asid;
 
-    p_asid = &curr->arch.hvm_vcpu.n1asid;
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
     new_asid = p_asid->asid;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index c31f7ba..c54ee44 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -42,6 +42,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 	goto out;
     }
     nvmx->ept.enabled = 0;
+    nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -882,6 +883,16 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     return ept_get_eptp(ept);
 }
 
+static bool_t nvmx_vpid_enabled(struct nestedvcpu *nvcpu)
+{
+    uint32_t second_cntl;
+
+    second_cntl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
+    if ( second_cntl & SECONDARY_EXEC_ENABLE_VPID )
+        return 1;
+    return 0;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -930,6 +941,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     if ( nestedhvm_paging_mode_hap(v) )
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
 
+    /* Nested VPID support. */
+    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
+    {
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);
+        if ( nvmx->guest_vpid != new_vpid )
+        {
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
+            nvmx->guest_vpid = new_vpid;
+        }
+    }
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -1221,7 +1244,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
-        return X86EMUL_OKAY;        
+        return X86EMUL_OKAY;
     }
 
     launched = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
@@ -1433,6 +1456,33 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long vpid;
+    u64 inv_type;
+
+    if ( decode_vmx_inst(regs, &decode, &vpid, 0) != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+
+    switch ( inv_type ) {
+    /* Just invalidate all TLB entries for all types. */
+    case INVVPID_INDIVIDUAL_ADDR:
+    case INVVPID_SINGLE_CONTEXT:
+    case INVVPID_ALL_CONTEXT:
+        hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
+        break;
+    default:
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
 /*
  * Capability reporting
  */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index e671635..d1368a3 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,6 +37,7 @@ struct nestedvmx {
         uint32_t exit_reason;
         uint32_t exit_qual;
     } ept;
+    uint32_t guest_vpid;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -190,6 +191,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
 int nvmx_handle_invept(struct cpu_user_regs *regs);
+int nvmx_handle_invvpid(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zh-00025B-L6; Mon, 24 Dec 2012 14:27:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zf-00024Z-A6
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:03 +0000
Received: from [85.158.137.99:13527] by server-4.bemta-3.messagelabs.com id
	41/98-31835-63668D05; Mon, 24 Dec 2012 14:27:02 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356359217!12860986!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20588 invoked from network); 24 Dec 2012 14:26:59 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-217.messagelabs.com with SMTP;
	24 Dec 2012 14:26:59 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524929"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:56 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:27 +0800
Message-Id: <1356359194-5321-4-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 03/10] nested_ept: Implement guest ept's
	walker
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Implement the guest EPT PT walker; some of the logic is based on the
shadow code's ia32e PT walker. During the walk, if a target page is
not in memory, use the RETRY mechanism to give the target page a
chance to be paged back in.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/hvm.c              |    1 +
 xen/arch/x86/hvm/vmx/vvmx.c         |   42 +++++-
 xen/arch/x86/mm/guest_walk.c        |   16 ++-
 xen/arch/x86/mm/hap/Makefile        |    1 +
 xen/arch/x86/mm/hap/nested_ept.c    |  287 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c    |    2 +-
 xen/include/asm-x86/guest_pt.h      |    8 +
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |    1 +
 xen/include/asm-x86/hvm/vmx/vmx.h   |   31 ++++
 xen/include/asm-x86/hvm/vmx/vvmx.h  |   13 ++
 11 files changed, 394 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f63ee52..bd7314f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1324,6 +1324,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
+        case NESTEDHVM_PAGEFAULT_RETRY:
             return 1;
         case NESTEDHVM_PAGEFAULT_L1_ERROR:
            /* An error occurred while translating gpa from
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 53f6a4d..1d3090d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -939,9 +939,18 @@ static void sync_vvmcs_ro(struct vcpu *v)
 {
     int i;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
 
     for ( i = 0; i < ARRAY_SIZE(vmcs_ro_field); i++ )
         shadow_to_vvmcs(nvcpu->nv_vvmcx, vmcs_ro_field[i]);
+
+    /* Adjust exit_reason/exit_qualification for the violation case. */
+    if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) == EXIT_REASON_EPT_VIOLATION )
+    {
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+    }
 }
 
 static void load_vvmcs_host_state(struct vcpu *v)
@@ -1488,8 +1497,37 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
-    /*TODO:*/
-    return 0;
+    int rc;
+    unsigned long gfn;
+    uint64_t exit_qual = __vmread(EXIT_QUALIFICATION);
+    uint32_t exit_reason = EXIT_REASON_EPT_VIOLATION;
+    uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+                                &exit_qual, &exit_reason);
+    switch ( rc )
+    {
+    case EPT_TRANSLATE_SUCCEED:
+        *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+        rc = NESTEDHVM_PAGEFAULT_DONE;
+        break;
+    case EPT_TRANSLATE_VIOLATION:
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = NESTEDHVM_PAGEFAULT_INJECT;
+        nvmx->ept_exit.exit_reason = exit_reason;
+        nvmx->ept_exit.exit_qual = exit_qual;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        rc = NESTEDHVM_PAGEFAULT_RETRY;
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Guest EPT translation error: %d\n", rc);
+        BUG();
+        break;
+    }
+
+    return rc;
 }
 
 void nvmx_idtv_handling(void)
diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index 0f08fb0..1c165c6 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -88,18 +88,19 @@ static uint32_t set_ad_bits(void *guest_p, void *walk_p, int set_dirty)
 
 /* If the map is non-NULL, we leave this function having 
  * acquired an extra ref on mfn_to_page(*mfn) */
-static inline void *map_domain_gfn(struct p2m_domain *p2m,
-                                   gfn_t gfn, 
+void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
                                    mfn_t *mfn,
                                    p2m_type_t *p2mt,
-                                   uint32_t *rc) 
+                                   p2m_query_t q,
+                                   uint32_t *rc)
 {
     struct page_info *page;
     void *map;
 
     /* Translate the gfn, unsharing if shared */
     page = get_page_from_gfn_p2m(p2m->domain, p2m, gfn_x(gfn), p2mt, NULL,
-                                  P2M_ALLOC | P2M_UNSHARE);
+                                  q);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -128,7 +129,6 @@ static inline void *map_domain_gfn(struct p2m_domain *p2m,
     return map;
 }
 
-
 /* Walk the guest pagetables, after the manner of a hardware walker. */
 /* Because the walk is essentially random, it can cause a deadlock 
  * warning in the p2m locking code. Highly unlikely this is an actual
@@ -149,6 +149,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     uint32_t gflags, mflags, iflags, rc = 0;
     int smep;
     bool_t pse1G = 0, pse2M = 0;
+    p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
 
     perfc_incr(guest_walk);
     memset(gw, 0, sizeof(*gw));
@@ -188,7 +189,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
     l3p = map_domain_gfn(p2m, 
                          guest_l4e_get_gfn(gw->l4e), 
                          &gw->l3mfn,
-                         &p2mt, 
+                         &p2mt,
+                         qt, 
                          &rc); 
     if(l3p == NULL)
         goto out;
@@ -249,6 +251,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                          guest_l3e_get_gfn(gw->l3e), 
                          &gw->l2mfn,
                          &p2mt, 
+                         qt,
                          &rc); 
     if(l2p == NULL)
         goto out;
@@ -322,6 +325,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
                              guest_l2e_get_gfn(gw->l2e), 
                              &gw->l1mfn,
                              &p2mt,
+                             qt,
                              &rc);
         if(l1p == NULL)
             goto out;
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 80a6bec..68f2bb5 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,6 +3,7 @@ obj-y += guest_walk_2level.o
 obj-y += guest_walk_3level.o
 obj-$(x86_64) += guest_walk_4level.o
 obj-y += nested_hap.o
+obj-y += nested_ept.o
 
 guest_walk_%level.o: guest_walk.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
new file mode 100644
index 0000000..1463d81
--- /dev/null
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -0,0 +1,287 @@
+/*
+ * nested_ept.c: Handling virtualized EPT for guests in the nested case.
+ *
+ * Copyright (c) 2012, Intel Corporation
+ *  Xiantao Zhang <xiantao.zhang@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#include <asm/domain.h>
+#include <asm/page.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <asm/mem_event.h>
+#include <public/mem_event.h>
+#include <asm/mem_sharing.h>
+#include <xen/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/support.h>
+
+#include <asm/hvm/nestedhvm.h>
+
+#include "private.h"
+
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vvmx.h>
+
+/* EPT always uses a 4-level paging structure */
+#define GUEST_PAGING_LEVELS 4
+#include <asm/guest_pt.h>
+
+/* Bits that must be reserved in entries at all levels */
+#define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
+                     ~((1ull << paddr_bits) - 1))
+
+/*
+ * TODO: Just leave this as 0 for now so the code compiles; the real
+ * capabilities will be defined in subsequent patches.
+ */
+#define NEPT_VPID_CAP_BITS 0
+
+
+#define NEPT_1G_ENTRY_FLAG (1 << 11)
+#define NEPT_2M_ENTRY_FLAG (1 << 10)
+#define NEPT_4K_ENTRY_FLAG (1 << 9)
+
+bool_t nept_sp_entry(ept_entry_t e)
+{
+    return !!(e.sp);
+}
+
+static bool_t nept_rsv_bits_check(ept_entry_t e, uint32_t level)
+{
+    uint64_t rsv_bits = EPT_MUST_RSV_BITS;
+
+    switch ( level )
+    {
+    case 1:
+        break;
+    case 2 ... 3:
+        if ( nept_sp_entry(e) )
+            rsv_bits |=  ((1ull << (9 * (level -1 ))) -1) << PAGE_SHIFT;
+        else
+            rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK;
+        break;
+    case 4:
+        rsv_bits |= EPTE_EMT_MASK | EPTE_IGMT_MASK | EPTE_SUPER_PAGE_MASK;
+    break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT paging level: %d\n", level);
+        BUG();
+        break;
+    }
+    return !!(e.epte & rsv_bits);
+}
+
+/* EMT checking */
+static bool_t nept_emt_bits_check(ept_entry_t e, uint32_t level)
+{
+    if ( e.sp || level == 1 )
+    {
+        if ( e.emt == EPT_EMT_RSV0 || e.emt == EPT_EMT_RSV1 ||
+                e.emt == EPT_EMT_RSV2 )
+            return 1;
+    }
+    return 0;
+}
+
+static bool_t nept_permission_check(uint32_t rwx_acc, uint32_t rwx_bits)
+{
+    return !(EPTE_RWX_MASK & rwx_acc & ~rwx_bits);
+}
+
+/* nept's non-present check */
+static bool_t nept_non_present_check(ept_entry_t e)
+{
+    if ( e.epte & EPTE_RWX_MASK )
+        return 0;
+    return 1;
+}
+
+uint64_t nept_get_ept_vpid_cap(void)
+{
+    uint64_t caps = NEPT_VPID_CAP_BITS;
+
+    if ( !cpu_has_vmx_ept_exec_only_supported )
+        caps &= ~VMX_EPT_EXEC_ONLY_SUPPORTED;
+    return caps;
+}
+
+static bool_t nept_rwx_bits_check(ept_entry_t e)
+{
+    /* write-only or write/execute-only */
+    uint8_t rwx_bits = e.epte & EPTE_RWX_MASK;
+
+    if ( rwx_bits == ept_access_w || rwx_bits == ept_access_wx )
+        return 1;
+
+    if ( rwx_bits == ept_access_x && !(nept_get_ept_vpid_cap() &
+                        VMX_EPT_EXEC_ONLY_SUPPORTED) )
+        return 1;
+
+    return 0;
+}
+
+/* nept's misconfiguration check */
+static bool_t nept_misconfiguration_check(ept_entry_t e, uint32_t level)
+{
+    return (nept_rsv_bits_check(e, level) ||
+                nept_emt_bits_check(e, level) ||
+                nept_rwx_bits_check(e));
+}
+
+static int ept_lvl_table_offset(unsigned long gpa, int lvl)
+{
+    return (gpa >>(EPT_L4_PAGETABLE_SHIFT -(4 - lvl) * 9)) &
+                (EPT_PAGETABLE_ENTRIES -1 );
+}
+
+static uint32_t
+nept_walk_tables(struct vcpu *v, unsigned long l2ga, ept_walk_t *gw)
+{
+    int lvl;
+    p2m_type_t p2mt;
+    uint32_t rc = 0, ret = 0, gflags;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = d->arch.p2m;
+    gfn_t base_gfn = _gfn(nhvm_vcpu_p2m_base(v) >> PAGE_SHIFT);
+    mfn_t lxmfn;
+    ept_entry_t *lxp = NULL;
+
+    memset(gw, 0, sizeof(*gw));
+
+    for (lvl = 4; lvl > 0; lvl--)
+    {
+        lxp = map_domain_gfn(p2m, base_gfn, &lxmfn, &p2mt, P2M_ALLOC, &rc);
+        if ( !lxp )
+            goto map_err;
+        gw->lxe[lvl] = lxp[ept_lvl_table_offset(l2ga, lvl)];
+        unmap_domain_page(lxp);
+        put_page(mfn_to_page(mfn_x(lxmfn)));
+
+        if ( nept_non_present_check(gw->lxe[lvl]) )
+            goto non_present;
+
+        if ( nept_misconfiguration_check(gw->lxe[lvl], lvl) )
+            goto misconfig_err;
+
+        if ( (lvl == 2 || lvl == 3) && nept_sp_entry(gw->lxe[lvl]) )
+        {
+            /* Generate a fake l1 table entry so callers don't all
+             * have to understand superpages. */
+            unsigned long gfn_lvl_mask =  (1ull << ((lvl - 1) * 9)) - 1;
+            gfn_t start = _gfn(gw->lxe[lvl].mfn);
+            /* Increment the pfn by the right number of 4k pages. */
+            start = _gfn((gfn_x(start) & ~gfn_lvl_mask) +
+                     ((l2ga >> PAGE_SHIFT) & gfn_lvl_mask));
+            gflags = (gw->lxe[lvl].epte & EPTE_FLAG_MASK) |
+                    (lvl == 3 ? NEPT_1G_ENTRY_FLAG: NEPT_2M_ENTRY_FLAG);
+            gw->lxe[0].epte = (gfn_x(start) << PAGE_SHIFT) | gflags;
+            goto done;
+        }
+        if ( lvl > 1 )
+            base_gfn = _gfn(gw->lxe[lvl].mfn);
+    }
+
+    /* We can only reach here if no superpage entry was found. */
+    gflags = (gw->lxe[1].epte & EPTE_FLAG_MASK) | NEPT_4K_ENTRY_FLAG;
+    gw->lxe[0].epte = (gw->lxe[1].epte & PAGE_MASK) | gflags;
+
+done:
+    ret = EPT_TRANSLATE_SUCCEED;
+    goto out;
+
+map_err:
+    if ( rc == _PAGE_PAGED )
+    {
+        ret = EPT_TRANSLATE_RETRY;
+        goto out;
+    }
+    /* fall through to misconfig error */
+misconfig_err:
+    ret =  EPT_TRANSLATE_MISCONFIG;
+    goto out;
+
+non_present:
+    ret = EPT_TRANSLATE_VIOLATION;
+    /* fall through. */
+out:
+    return ret;
+}
+
+/* Translate a L2 guest address to L1 gpa via L1 EPT paging structure */
+
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason)
+{
+    uint32_t rc, rwx_bits = 0;
+    ept_walk_t gw;
+    rwx_acc &= EPTE_RWX_MASK;
+
+    *l1gfn = INVALID_GFN;
+
+    rc = nept_walk_tables(v, l2ga, &gw);
+    switch ( rc )
+    {
+    case EPT_TRANSLATE_SUCCEED:
+        if ( likely(gw.lxe[0].epte & NEPT_2M_ENTRY_FLAG) )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                            EPTE_RWX_MASK;
+            *page_order = 9;
+        }
+        else if ( gw.lxe[0].epte & NEPT_4K_ENTRY_FLAG )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte & gw.lxe[2].epte &
+                    gw.lxe[1].epte & EPTE_RWX_MASK;
+            *page_order = 0;
+        }
+        else if ( gw.lxe[0].epte & NEPT_1G_ENTRY_FLAG  )
+        {
+            rwx_bits = gw.lxe[4].epte & gw.lxe[3].epte  & EPTE_RWX_MASK;
+            *page_order = 18;
+        }
+        else
+        {
+            gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n");
+            BUG();
+        }
+        if ( nept_permission_check(rwx_acc, rwx_bits) )
+        {
+            *l1gfn = gw.lxe[0].mfn;
+            break;
+        }
+        rc = EPT_TRANSLATE_VIOLATION;
+    /* Fall through to EPT violation if permission check fails. */
+    case EPT_TRANSLATE_VIOLATION:
+        *exit_qual = (*exit_qual & 0xffffffc0) | (rwx_bits << 3) | rwx_acc;
+        *exit_reason = EXIT_REASON_EPT_VIOLATION;
+        break;
+
+    case EPT_TRANSLATE_MISCONFIG:
+        rc = EPT_TRANSLATE_MISCONFIG;
+        *exit_qual = 0;
+        *exit_reason = EXIT_REASON_EPT_MISCONFIG;
+        break;
+    case EPT_TRANSLATE_RETRY:
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unsupported EPT translation type: %d\n", rc);
+        BUG();
+        break;
+    }
+    return rc;
+}
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 8787c91..6d1264b 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -217,7 +217,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
-        return rv;
+    case NESTEDHVM_PAGEFAULT_RETRY:
     case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 4e1dda0..db8a0b6 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -315,6 +315,14 @@ guest_walk_to_page_order(walk_t *gw)
 #define GPT_RENAME2(_n, _l) _n ## _ ## _l ## _levels
 #define GPT_RENAME(_n, _l) GPT_RENAME2(_n, _l)
 #define guest_walk_tables GPT_RENAME(guest_walk_tables, GUEST_PAGING_LEVELS)
+#define map_domain_gfn GPT_RENAME(map_domain_gfn, GUEST_PAGING_LEVELS)
+
+extern void *map_domain_gfn(struct p2m_domain *p2m,
+                                   gfn_t gfn,
+                                   mfn_t *mfn,
+                                   p2m_type_t *p2mt,
+                                   p2m_query_t q,
+                                   uint32_t *rc);
 
 extern uint32_t 
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m, unsigned long va,
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 91fde0b..649c511 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -52,6 +52,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L1_ERROR   2
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
+#define NESTEDHVM_PAGEFAULT_RETRY      5
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ef2c9c9..9a728b6 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -194,6 +194,7 @@ extern u32 vmx_secondary_exec_control;
 
 extern bool_t cpu_has_vmx_ins_outs_instr_info;
 
+#define VMX_EPT_EXEC_ONLY_SUPPORTED             0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED         0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                  0x00000100
 #define VMX_EPT_MEMORY_TYPE_WB                  0x00004000
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index aa5b080..c73946f 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -51,6 +51,22 @@ typedef union {
     u64 epte;
 } ept_entry_t;
 
+typedef struct {
+    /* use lxe[0] to save the result */
+    ept_entry_t lxe[5];
+} ept_walk_t;
+
+typedef enum {
+    ept_access_n     = 0, /* No access permissions allowed */
+    ept_access_r     = 1, /* Read only */
+    ept_access_w     = 2, /* Write only */
+    ept_access_rw    = 3, /* Read & Write */
+    ept_access_x     = 4, /* Exec Only */
+    ept_access_rx    = 5, /* Read & Exec */
+    ept_access_wx    = 6, /* Write & Exec */
+    ept_access_all   = 7, /* Full permissions */
+} ept_access_t;
+
 #define EPT_TABLE_ORDER         9
 #define EPTE_SUPER_PAGE_MASK    0x80
 #define EPTE_MFN_MASK           0xffffffffff000ULL
@@ -60,6 +76,17 @@ typedef union {
 #define EPTE_AVAIL1_SHIFT       8
 #define EPTE_EMT_SHIFT          3
 #define EPTE_IGMT_SHIFT         6
+#define EPTE_RWX_MASK           0x7
+#define EPTE_FLAG_MASK          0x7f
+
+#define EPT_EMT_UC              0
+#define EPT_EMT_WC              1
+#define EPT_EMT_RSV0            2
+#define EPT_EMT_RSV1            3
+#define EPT_EMT_WT              4
+#define EPT_EMT_WP              5
+#define EPT_EMT_WB              6
+#define EPT_EMT_RSV2            7
 
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
@@ -191,6 +218,9 @@ void vmx_update_secondary_exec_control(struct vcpu *v);
 
 extern u64 vmx_ept_vpid_cap;
 
+#define cpu_has_vmx_ept_exec_only_supported        \
+    (vmx_ept_vpid_cap & VMX_EPT_EXEC_ONLY_SUPPORTED)
+
 #define cpu_has_vmx_ept_wl4_supported           \
     (vmx_ept_vpid_cap & VMX_EPT_WALK_LENGTH_4_SUPPORTED)
 #define cpu_has_vmx_ept_mt_uc                   \
@@ -419,6 +449,7 @@ void update_guest_eip(void);
 #define _EPT_GLA_FAULT              8
 #define EPT_GLA_FAULT               (1UL<<_EPT_GLA_FAULT)
 
+#define EPT_L4_PAGETABLE_SHIFT      39
 #define EPT_PAGETABLE_ENTRIES       512
 
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 422f006..97554bf 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -32,6 +32,10 @@ struct nestedvmx {
         unsigned long intr_info;
         u32           error_code;
     } intr;
+    struct {
+        uint32_t exit_reason;
+        uint32_t exit_qual;
+    } ept_exit;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -109,6 +113,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
+#define EPT_TRANSLATE_SUCCEED       0
+#define EPT_TRANSLATE_VIOLATION     1
+#define EPT_TRANSLATE_MISCONFIG     2
+#define EPT_TRANSLATE_RETRY         3
+
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
                       unsigned int *page_order,
@@ -192,5 +201,9 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
+                        unsigned int *page_order, uint32_t rwx_acc,
+                        unsigned long *l1gfn, uint64_t *exit_qual,
+                        uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zb-000245-4i; Mon, 24 Dec 2012 14:26:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zZ-00023z-Pz
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:26:58 +0000
Received: from [85.158.139.83:10033] by server-16.bemta-5.messagelabs.com id
	D2/55-09208-03668D05; Mon, 24 Dec 2012 14:26:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1356359215!27132525!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5952 invoked from network); 24 Dec 2012 14:26:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-182.messagelabs.com with SMTP;
	24 Dec 2012 14:26:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524896"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:52 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:24 +0800
Message-Id: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 00/10] Nested VMX: Add virtual EPT & VPID
	support to L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

With virtual EPT support, the L1 hypervisor can use EPT hardware for the L2 guest's memory virtualization.
In this way, the L2 guest's performance can be improved significantly.
According to our testing, some benchmarks show a > 5x performance gain.

Changes from v1:
Update the patches according to Tim's comments. 
1. Patch 03: Enhance the virtual EPT's walker logic.
2. Patch 04: Add a new field in struct p2m_domain, and use it to store
   EPT-specific data. For the host p2m, it saves the L1 VMM's EPT data,
   and for nested p2ms, it saves the nested EPT's data.
3. Patch 07: Strictly check the host's p2m access type.
4. Other patches: some whitespace mangling fixes.

Changes from v2:
Addressed comments from Jan and Jun:
1. Add Acked-by tags for the patches reviewed by Tim.
2. Fixed one whitespace mangling issue in PATCH 08.
3. Add some comments describing the meaning of the return value of
   hvm_hap_nested_page_fault in PATCH 05.
4. Add logic for handling the default case of two switch
   statements.

Changes from v3:
1. Re-check all patches' whitespace mangling issue.

2. Addressed Jan's comments in Patch08 and Patch09: once X86EMUL_EXCEPTION is
returned, the callee is responsible for handling the exception before it returns.

3. Addressed Tim's comments in Patch03, Patch04 and Patch07:
   Patch03: If the host doesn't support the exec-only capability, we shouldn't
            expose this feature to the L1 VMM.
            If mapping the guest's EPT table fails, inject an EPT
            misconfiguration error into L1.
   Patch04: Re-organize the p2m's and nested p2m's structure {init/teardown} logic.
   Patch07: Initialize p2ma_21 -> p2m_access_rwx, so as not to change SVM's behavior.

Zhang Xiantao (10):
  nestedhap: Change hostcr3 and p2m->cr3 to meaningful words
  nestedhap: Change nested p2m's walker to vendor-specific
  nested_ept: Implement guest ept's walker
  EPT: Make ept data structure or operations neutral
  nEPT: Try to enable EPT paging for L2 guest.
  nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
  nEPT: Use minimal permission for nested p2m.
  nEPT: handle invept instruction from L1 VMM
  nVMX: virtualize VPID capability to nested VMM.
  nEPT: expose EPT & VPID capabilities to L1 VMM

 xen/arch/x86/hvm/hvm.c                  |    7 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 ++++
 xen/arch/x86/hvm/svm/svm.c              |    3 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |    8 +-
 xen/arch/x86/hvm/vmx/vmx.c              |   91 ++++------
 xen/arch/x86/hvm/vmx/vvmx.c             |  208 ++++++++++++++++++++--
 xen/arch/x86/mm/guest_walk.c            |   16 +-
 xen/arch/x86/mm/hap/Makefile            |    1 +
 xen/arch/x86/mm/hap/nested_ept.c        |  298 +++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   96 ++++++-----
 xen/arch/x86/mm/mm-locks.h              |    2 +-
 xen/arch/x86/mm/p2m-ept.c               |  104 +++++++++---
 xen/arch/x86/mm/p2m.c                   |  159 +++++++++++------
 xen/include/asm-x86/guest_pt.h          |    8 +
 xen/include/asm-x86/hvm/hvm.h           |    9 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |    1 +
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 +
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   24 ++--
 xen/include/asm-x86/hvm/vmx/vmx.h       |   41 ++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |   28 +++-
 xen/include/asm-x86/p2m.h               |   20 ++-
 21 files changed, 932 insertions(+), 226 deletions(-)
 create mode 100644 xen/arch/x86/mm/hap/nested_ept.c



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zp-00029P-2f; Mon, 24 Dec 2012 14:27:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zn-000286-Cx
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:11 +0000
Received: from [85.158.143.99:16101] by server-1.bemta-4.messagelabs.com id
	B0/EE-28401-E3668D05; Mon, 24 Dec 2012 14:27:10 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8353 invoked from network); 24 Dec 2012 14:27:06 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:06 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524978"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:03 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:32 +0800
Message-Id: <1356359194-5321-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 08/10] nEPT: handle invept instruction from
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Add the INVEPT instruction emulation logic.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |    6 +++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   36 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 3 files changed, 42 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ed8d532..94cac17 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2573,10 +2573,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             update_guest_eip();
         break;
 
+    case EXIT_REASON_INVEPT:
+        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
+
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVEPT:
     case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f1f6af2..c31f7ba 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1390,6 +1390,42 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_invept(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long eptp;
+    u64 inv_type;
+
+    if ( decode_vmx_inst(regs, &decode, &eptp, 0) != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+
+    switch ( inv_type )
+    {
+    case INVEPT_SINGLE_CONTEXT:
+    {
+        struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
+        if ( p2m )
+        {
+	        p2m_flush(current, p2m);
+            ept_sync_domain(p2m);
+        }
+        break;
+    }
+    case INVEPT_ALL_CONTEXT:
+        p2m_flush_nestedp2m(current->domain);
+        __invept(INVEPT_ALL_CONTEXT, 0, 0);
+        break;
+    default:
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 1da0e77..e671635 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -189,6 +189,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs);
 int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+int nvmx_handle_invept(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zq-00029z-2P; Mon, 24 Dec 2012 14:27:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zo-000286-HP
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:12 +0000
Received: from [85.158.143.99:9602] by server-1.bemta-4.messagelabs.com id
	C2/EE-28401-04668D05; Mon, 24 Dec 2012 14:27:12 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!4
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8409 invoked from network); 24 Dec 2012 14:27:10 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:10 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:08 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524997"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:06 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:34 +0800
Message-Id: <1356359194-5321-11-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 10/10] nEPT: Expose EPT & VPID capabilities to
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Expose EPT's and VPID's basic features to the L1 VMM.
For EPT, the A/D bit feature is not supported.
For VPID, all features are exposed to the L1 VMM.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |   17 +++++++++++++++--
 xen/arch/x86/mm/hap/nested_ept.c   |   24 +++++++++++++++++-------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 ++
 3 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index c54ee44..5f12f03 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1513,6 +1513,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         break;
     case MSR_IA32_VMX_PROCBASED_CTLS:
     case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+    {
+        u32 default1_bits = VMX_PROCBASED_CTLS_DEFAULT1;
         /* 1-seetings */
         data = CPU_BASED_HLT_EXITING |
                CPU_BASED_VIRTUAL_INTR_PENDING |
@@ -1535,12 +1537,20 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                CPU_BASED_RDPMC_EXITING |
                CPU_BASED_TPR_SHADOW |
                CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
-        data = gen_vmx_msr(data, VMX_PROCBASED_CTLS_DEFAULT1, host_data);
+
+        if ( msr == MSR_IA32_VMX_TRUE_PROCBASED_CTLS )
+            default1_bits &= ~(CPU_BASED_CR3_LOAD_EXITING |
+                    CPU_BASED_CR3_STORE_EXITING | CPU_BASED_INVLPG_EXITING);
+
+        data = gen_vmx_msr(data, default1_bits, host_data);
         break;
+    }
     case MSR_IA32_VMX_PROCBASED_CTLS2:
         /* 1-seetings */
         data = SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
-               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+               SECONDARY_EXEC_ENABLE_VPID |
+               SECONDARY_EXEC_ENABLE_EPT;
         data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
@@ -1593,6 +1603,9 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_MISC:
         gdprintk(XENLOG_WARNING, "VMX MSR %x not fully supported yet.\n", msr);
         break;
+    case MSR_IA32_VMX_EPT_VPID_CAP:
+        data = nept_get_ept_vpid_cap();
+        break;
     default:
         r = 0;
         break;
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 4393065..83431e1 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -43,12 +43,17 @@
 #define EPT_MUST_RSV_BITS (((1ull << PADDR_BITS) -1) & \
                      ~((1ull << paddr_bits) - 1))
 
-/*
- *TODO: Just leave it as 0 here for compile pass, will
- * define real capabilities in the subsequent patches.
- */
-#define NEPT_VPID_CAP_BITS 0
-
+#define NEPT_CAP_BITS       \
+        (VMX_EPT_INVEPT_ALL_CONTEXT | VMX_EPT_INVEPT_SINGLE_CONTEXT |   \
+        VMX_EPT_INVEPT_INSTRUCTION | VMX_EPT_SUPERPAGE_1GB |            \
+        VMX_EPT_SUPERPAGE_2MB | VMX_EPT_MEMORY_TYPE_WB |                \
+        VMX_EPT_MEMORY_TYPE_UC | VMX_EPT_WALK_LENGTH_4_SUPPORTED |      \
+        VMX_EPT_EXEC_ONLY_SUPPORTED)
+
+#define NVPID_CAP_BITS \
+        (VMX_VPID_INVVPID_INSTRUCTION | VMX_VPID_INVVPID_INDIVIDUAL_ADDR |\
+        VMX_VPID_INVVPID_SINGLE_CONTEXT | VMX_VPID_INVVPID_ALL_CONTEXT |\
+        VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL)
 
 #define NEPT_1G_ENTRY_FLAG (1 << 11)
 #define NEPT_2M_ENTRY_FLAG (1 << 10)
@@ -111,10 +116,15 @@ static bool_t nept_non_present_check(ept_entry_t e)
 
 uint64_t nept_get_ept_vpid_cap(void)
 {
-    uint64_t caps = NEPT_VPID_CAP_BITS;
+    uint64_t caps = 0;
 
+    if ( cpu_has_vmx_ept )
+        caps |= NEPT_CAP_BITS;
     if ( !cpu_has_vmx_ept_exec_only_supported )
         caps &= ~VMX_EPT_EXEC_ONLY_SUPPORTED;
+    if ( cpu_has_vmx_vpid )
+        caps |= NVPID_CAP_BITS;
+
     return caps;
 }
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d1368a3..375c7f1 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -207,6 +207,8 @@ u64 nvmx_get_tsc_offset(struct vcpu *v);
 int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                           unsigned int exit_reason);
 
+uint64_t nept_get_ept_vpid_cap(void);
+
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
                         unsigned long *l1gfn, uint8_t *p2m_acc,
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zp-00029h-Fj; Mon, 24 Dec 2012 14:27:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zn-00028L-U0
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:12 +0000
Received: from [85.158.143.99:9547] by server-3.bemta-4.messagelabs.com id
	C5/79-18211-F3668D05; Mon, 24 Dec 2012 14:27:11 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8379 invoked from network); 24 Dec 2012 14:27:08 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:08 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:06 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524989"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:05 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:33 +0800
Message-Id: <1356359194-5321-10-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 09/10] nVMX: virtualize VPID capability to
	nested VMM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Virtualize VPID for the nested VMM: use the host's VPID to
emulate the guest's VPID. On each virtual vmentry, if the
guest's VPID has changed, allocate a new host VPID for the
L2 guest.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   11 ++++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   52 +++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 94cac17..0e479f8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2578,10 +2578,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             update_guest_eip();
         break;
 
+    case EXIT_REASON_INVVPID:
+        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
+
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
          * running in guest context, and the CPU checks that before getting
@@ -2699,8 +2703,11 @@ void vmx_vmenter_helper(void)
 
     if ( !cpu_has_vmx_vpid )
         goto out;
+    if ( nestedhvm_vcpu_in_guestmode(curr) )
+        p_asid = &vcpu_nestedhvm(curr).nv_n2asid;
+    else
+        p_asid = &curr->arch.hvm_vcpu.n1asid;
 
-    p_asid = &curr->arch.hvm_vcpu.n1asid;
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
     new_asid = p_asid->asid;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index c31f7ba..c54ee44 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -42,6 +42,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 	goto out;
     }
     nvmx->ept.enabled = 0;
+    nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -882,6 +883,16 @@ static uint64_t get_shadow_eptp(struct vcpu *v)
     return ept_get_eptp(ept);
 }
 
+static bool_t nvmx_vpid_enabled(struct nestedvcpu *nvcpu)
+{
+    uint32_t second_cntl;
+
+    second_cntl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
+    if ( second_cntl & SECONDARY_EXEC_ENABLE_VPID )
+        return 1;
+    return 0;
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -930,6 +941,18 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     if ( nestedhvm_paging_mode_hap(v) )
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
 
+    /* nested VPID support! */
+    if ( cpu_has_vmx_vpid && nvmx_vpid_enabled(nvcpu) )
+    {
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+        uint32_t new_vpid =  __get_vvmcs(vvmcs, VIRTUAL_PROCESSOR_ID);
+        if ( nvmx->guest_vpid != new_vpid )
+        {
+            hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
+            nvmx->guest_vpid = new_vpid;
+        }
+    }
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -1221,7 +1244,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
-        return X86EMUL_OKAY;        
+        return X86EMUL_OKAY;
     }
 
     launched = __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx,
@@ -1433,6 +1456,33 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     (((__emul_value(enable1, default1) & host_value) & (~0ul << 32)) | \
     ((uint32_t)(__emul_value(enable1, default1) | host_value)))
 
+int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long vpid;
+    u64 inv_type;
+
+    if ( decode_vmx_inst(regs, &decode, &vpid, 0) != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+
+    switch ( inv_type ) {
+    /* Just invalidate all tlb entries for all types! */
+    case INVVPID_INDIVIDUAL_ADDR:
+    case INVVPID_SINGLE_CONTEXT:
+    case INVVPID_ALL_CONTEXT:
+        hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(current).nv_n2asid);
+        break;
+    default:
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
 /*
  * Capability reporting
  */
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index e671635..d1368a3 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,6 +37,7 @@ struct nestedvmx {
         uint32_t exit_reason;
         uint32_t exit_qual;
     } ept;
+    uint32_t guest_vpid;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -190,6 +191,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
 int nvmx_handle_invept(struct cpu_user_regs *regs);
+int nvmx_handle_invvpid(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zp-00029P-2f; Mon, 24 Dec 2012 14:27:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zn-000286-Cx
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:11 +0000
Received: from [85.158.143.99:16101] by server-1.bemta-4.messagelabs.com id
	B0/EE-28401-E3668D05; Mon, 24 Dec 2012 14:27:10 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8353 invoked from network); 24 Dec 2012 14:27:06 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:06 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524978"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:03 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:32 +0800
Message-Id: <1356359194-5321-9-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 08/10] nEPT: handle invept instruction from
	L1 VMM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Add the INVEPT instruction emulation logic.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |    6 +++++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   36 ++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 3 files changed, 42 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ed8d532..94cac17 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2573,10 +2573,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             update_guest_eip();
         break;
 
+    case EXIT_REASON_INVEPT:
+        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
+            update_guest_eip();
+        break;
+
     case EXIT_REASON_MWAIT_INSTRUCTION:
     case EXIT_REASON_MONITOR_INSTRUCTION:
     case EXIT_REASON_GETSEC:
-    case EXIT_REASON_INVEPT:
     case EXIT_REASON_INVVPID:
         /*
          * We should never exit on GETSEC because CR4.SMXE is always 0 when
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f1f6af2..c31f7ba 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1390,6 +1390,42 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_invept(struct cpu_user_regs *regs)
+{
+    struct vmx_inst_decoded decode;
+    unsigned long eptp;
+    u64 inv_type;
+
+    if ( decode_vmx_inst(regs, &decode, &eptp, 0) != X86EMUL_OKAY )
+        return X86EMUL_EXCEPTION;
+
+    inv_type = reg_read(regs, decode.reg2);
+
+    switch ( inv_type )
+    {
+    case INVEPT_SINGLE_CONTEXT:
+    {
+        struct p2m_domain *p2m = vcpu_nestedhvm(current).nv_p2m;
+        if ( p2m )
+        {
+	        p2m_flush(current, p2m);
+            ept_sync_domain(p2m);
+        }
+        break;
+    }
+    case INVEPT_ALL_CONTEXT:
+        p2m_flush_nestedp2m(current->domain);
+        __invept(INVEPT_ALL_CONTEXT, 0, 0);
+        break;
+    default:
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+    vmreturn(regs, VMSUCCEED);
+    return X86EMUL_OKAY;
+}
+
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 1da0e77..e671635 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -189,6 +189,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs);
 int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
 int nvmx_handle_vmresume(struct cpu_user_regs *regs);
 int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
+int nvmx_handle_invept(struct cpu_user_regs *regs);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 int nvmx_msr_write_intercept(unsigned int msr,
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zo-000295-M7; Mon, 24 Dec 2012 14:27:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zn-000284-Bi
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:11 +0000
Received: from [85.158.143.99:16099] by server-2.bemta-4.messagelabs.com id
	F7/69-30861-E3668D05; Mon, 24 Dec 2012 14:27:10 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8323 invoked from network); 24 Dec 2012 14:27:03 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:03 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524956"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:01 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:30 +0800
Message-Id: <1356359194-5321-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 06/10] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the GUEST_PDPTR registers need to be synced on each
virtual vmentry.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f9699dc..7b48436 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -859,7 +859,15 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
     vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+                    (v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
+    {
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zi-00025w-La; Mon, 24 Dec 2012 14:27:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zh-000255-Dn
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:05 +0000
Received: from [85.158.143.35:36983] by server-3.bemta-4.messagelabs.com id
	7D/69-18211-83668D05; Mon, 24 Dec 2012 14:27:04 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356359219!16771313!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23105 invoked from network); 24 Dec 2012 14:27:00 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-21.messagelabs.com with SMTP;
	24 Dec 2012 14:27:00 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524939"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:58 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:28 +0800
Message-Id: <1356359194-5321-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 04/10] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case, by
making the related data structures and operations neutral
to both common EPT and nested EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    8 ++-
 xen/arch/x86/hvm/vmx/vmx.c         |   53 +-------------
 xen/arch/x86/mm/p2m-ept.c          |  104 ++++++++++++++++++++++------
 xen/arch/x86/mm/p2m.c              |  132 +++++++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/vmx/vmcs.h |   23 +++---
 xen/include/asm-x86/hvm/vmx/vmx.h  |   10 ++-
 xen/include/asm-x86/p2m.h          |    4 +
 7 files changed, 208 insertions(+), 126 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..de22e03 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -942,7 +942,13 @@ static int construct_vmcs(struct vcpu *v)
     }
 
     if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+        struct ept_data *ept = &p2m->ept;
+
+        ept->asr  = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        __vmwrite(EPT_POINTER, ept_get_eptp(ept));
+    }
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abfa90..d74aae0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -74,38 +74,19 @@ static void vmx_fpu_dirty_intercept(void);
 static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg_intercept(unsigned long vaddr);
-static void __ept_sync_domain(void *info);
 
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
 
-    /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
-
-    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
-
-    d->arch.hvm_domain.vmx.ept_control.asr  =
-        pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
-        return -ENOMEM;
-
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-    {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
         return rc;
-    }
 
     return 0;
 }
 
 static void vmx_domain_destroy(struct domain *d)
 {
-    if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +622,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = &p2m_get_hostp2m(d)->ept;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +632,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1216,33 +1198,6 @@ static void vmx_update_guest_efer(struct vcpu *v)
                    (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
-static void __ept_sync_domain(void *info)
-{
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
-}
-
-void ept_sync_domain(struct domain *d)
-{
-    /* Only if using EPT and this domain has some VCPUs to dirty. */
-    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
-        return;
-
-    ASSERT(local_irq_is_enabled());
-
-    /*
-     * Flush active cpus synchronously. Flush others the next time this domain
-     * is scheduled onto them. We accept the race of other CPUs adding to
-     * the ept_synced mask before on_selected_cpus() reads it, resulting in
-     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
-     */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
-                d->domain_dirty_cpumask, &cpu_online_map);
-
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
-}
-
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
             unsigned long intr_fields, int error_code)
 {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..e33f415 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept = &p2m->ept;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table.*/
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept = &p2m->ept;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept = &p2m->ept;
+    ept_entry_t *table =  map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,24 +784,76 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept = &p2m->ept;
+    if ( ept_get_asr(ept) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
+            ept_get_wl(ept), ot, nt);
+
+    ept_sync_domain(p2m);
+}
+
+static void __ept_sync_domain(void *info)
+{
+    struct ept_data *ept = &((struct p2m_domain *)info)->ept;
 
-    ept_sync_domain(d);
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept), 0);
 }
 
-void ept_p2m_init(struct p2m_domain *p2m)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept = &p2m->ept;
+    /* Only if using EPT and this domain has some VCPUs to dirty. */
+    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
+        return;
+
+    ASSERT(local_irq_is_enabled());
+
+    /*
+     * Flush active cpus synchronously. Flush others the next time this domain
+     * is scheduled onto them. We accept the race of other CPUs adding to
+     * the ept_synced mask before on_selected_cpus() reads it, resulting in
+     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
+     */
+    cpumask_and(ept_get_synced_mask(ept),
+                d->domain_dirty_cpumask, &cpu_online_map);
+
+    on_selected_cpus(ept_get_synced_mask(ept),
+                     __ept_sync_domain, p2m, 1);
+}
+
+int ept_p2m_init(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->audit_p2m = NULL;
+
+    /* Set the memory type used when accessing EPT paging structures. */
+    ept->ept_mt = EPT_DEFAULT_MT;
+
+    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
+    ept->ept_wl = 3;
+
+    if ( !zalloc_cpumask_var(&ept->synced_mask) )
+        return -ENOMEM;
+
+    on_each_cpu(__ept_sync_domain, p2m, 1);
+
+    return 0;
+}
+
+void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+    free_cpumask_var(ept->synced_mask);
 }
 
 static void ept_dump_p2m_table(unsigned char key)
@@ -811,6 +869,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept;
 
     for_each_domain(d)
     {
@@ -818,15 +877,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept = &p2m->ept;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 41a461b..49eb8af 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -57,8 +57,10 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 
 /* Init the datastructures for later use by the p2m code */
-static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
+    int ret = 0;
+
     mm_rwlock_init(&p2m->lock);
     mm_lock_init(&p2m->pod.lock);
     INIT_LIST_HEAD(&p2m->np2m_list);
@@ -72,27 +74,81 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
-        ept_p2m_init(p2m);
+        ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
 
-    return;
+    return ret;
+}
+
+static struct p2m_domain *p2m_init_one(struct domain *d)
+{
+    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+    if ( !p2m )
+        return NULL;
+
+    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+        goto free_p2m;
+
+    if ( p2m_initialise(d, p2m) )
+        goto free_cpumask;
+    return p2m;
+
+free_cpumask:
+    free_cpumask_var(p2m->dirty_cpumask);
+free_p2m:
+    xfree(p2m);
+    return NULL;
 }
 
-static int
-p2m_init_nestedp2m(struct domain *d)
+static void p2m_free_one(struct p2m_domain *p2m)
+{
+    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
+        ept_p2m_uninit(p2m);
+    free_cpumask_var(p2m->dirty_cpumask);
+    xfree(p2m);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_init_one(d);
+
+    if ( p2m )
+    {
+        d->arch.p2m = p2m;
+        return 0;
+    }
+    return -ENOMEM;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    /* Iterate over all p2m tables per domain */
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    if ( p2m ) {
+        p2m_free_one(p2m);
+        d->arch.p2m = NULL;
+    }
+}
+
+static void p2m_teardown_nestedp2m(struct domain *d);
+
+static int p2m_init_nestedp2m(struct domain *d)
 {
     uint8_t i;
     struct p2m_domain *p2m;
 
     mm_lock_init(&d->arch.nested_p2m_lock);
-    for (i = 0; i < MAX_NESTEDP2M; i++) {
-        d->arch.nested_p2m[i] = p2m = xzalloc(struct p2m_domain);
-        if (p2m == NULL)
-            return -ENOMEM;
-        if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+    for (i = 0; i < MAX_NESTEDP2M; i++)
+    {
+        d->arch.nested_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_nestedp2m(d);
             return -ENOMEM;
-        p2m_initialise(d, p2m);
+        }
         p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
     }
@@ -100,27 +156,37 @@ p2m_init_nestedp2m(struct domain *d)
     return 0;
 }
 
-int p2m_init(struct domain *d)
+static void p2m_teardown_nestedp2m(struct domain *d)
 {
+    uint8_t i;
     struct p2m_domain *p2m;
-    int rc;
 
-    p2m_get_hostp2m(d) = p2m = xzalloc(struct p2m_domain);
-    if ( p2m == NULL )
-        return -ENOMEM;
-    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+    for (i = 0; i < MAX_NESTEDP2M; i++)
     {
-        xfree(p2m);
-        return -ENOMEM;
+        if ( !d->arch.nested_p2m[i] )
+            continue;
+        p2m = d->arch.nested_p2m[i];
+        list_del(&p2m->np2m_list);
+        p2m_free_one(p2m);
+        d->arch.nested_p2m[i] = NULL;
     }
-    p2m_initialise(d, p2m);
+}
+
+int p2m_init(struct domain *d)
+{
+    int rc;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc )
+        return rc;
 
     /* Must initialise nestedp2m unconditionally
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
-        p2m_final_teardown(d);
+    if ( rc )
+        p2m_teardown_hostp2m(d);
+
     return rc;
 }
 
@@ -421,28 +487,12 @@ void p2m_teardown(struct p2m_domain *p2m)
     p2m_unlock(p2m);
 }
 
-static void p2m_teardown_nestedp2m(struct domain *d)
-{
-    uint8_t i;
-
-    for (i = 0; i < MAX_NESTEDP2M; i++) {
-        if ( !d->arch.nested_p2m[i] )
-            continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
-        d->arch.nested_p2m[i] = NULL;
-    }
-}
-
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    if ( d->arch.p2m )
-    {
-        free_cpumask_var(d->arch.p2m->dirty_cpumask);
-        xfree(d->arch.p2m);
-        d->arch.p2m = NULL;
-    }
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    if ( p2m )
+        p2m_teardown_hostp2m(d);
 
     /* We must teardown unconditionally because
      * we initialise them unconditionally.
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..2d38b43 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,27 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
+struct ept_data{
     union {
-        struct {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
-    cpumask_var_t ept_synced;
+    };
+    cpumask_var_t synced_mask;
+};
+
+struct vmx_domain {
+    unsigned long apic_access_mfn;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+#define ept_get_wl(ept)   ((ept)->ept_wl)
+#define ept_get_asr(ept)  ((ept)->asr)
+#define ept_get_eptp(ept) ((ept)->eptp)
+#define ept_get_synced_mask(ept) ((ept)->synced_mask)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c73946f..d4d6feb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -363,7 +363,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -425,12 +425,18 @@ void vmx_get_segment_register(struct vcpu *, enum x86_segment,
 void vmx_inject_extint(int trap);
 void vmx_inject_nmi(void);
 
-void ept_p2m_init(struct p2m_domain *p2m);
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ce26594..b6a84b6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,10 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    union {
+        struct ept_data ept;
+        /* NPT-equivalent structure could be added here. */
+    };
 };
 
 /* get host p2m table */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zd-00024L-HK; Mon, 24 Dec 2012 14:27:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zb-000244-E4
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:26:59 +0000
Received: from [85.158.139.83:57402] by server-1.bemta-5.messagelabs.com id
	AA/BA-12813-23668D05; Mon, 24 Dec 2012 14:26:58 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1356359215!27132525!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5968 invoked from network); 24 Dec 2012 14:26:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-182.messagelabs.com with SMTP;
	24 Dec 2012 14:26:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524907"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:54 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:25 +0800
Message-Id: <1356359194-5321-2-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 01/10] nestedhap: Change hostcr3 and p2m->cr3
	to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

VMX doesn't have the concept of a host CR3 for the nested p2m;
only SVM does. So rename it to vendor-neutral wording.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/hvm.c             |    6 +++---
 xen/arch/x86/hvm/svm/svm.c         |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    2 +-
 xen/arch/x86/mm/hap/nested_hap.c   |   15 ++++++++-------
 xen/arch/x86/mm/mm-locks.h         |    2 +-
 xen/arch/x86/mm/p2m.c              |   26 +++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h      |    4 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +-
 xen/include/asm-x86/p2m.h          |   16 ++++++++--------
 10 files changed, 39 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 40c1ab2..f63ee52 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4536,10 +4536,10 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v)
     return -EOPNOTSUPP;
 }
 
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v)
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v)
 {
-    if (hvm_funcs.nhvm_vcpu_hostcr3)
-        return hvm_funcs.nhvm_vcpu_hostcr3(v);
+    if ( hvm_funcs.nhvm_vcpu_p2m_base )
+        return hvm_funcs.nhvm_vcpu_p2m_base(v);
     return -EOPNOTSUPP;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 55a5ae5..2c8504a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2003,7 +2003,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vcpu_vmexit = nsvm_vcpu_vmexit_inject,
     .nhvm_vcpu_vmexit_trap = nsvm_vcpu_vmexit_trap,
     .nhvm_vcpu_guestcr3 = nsvm_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3 = nsvm_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base = nsvm_vcpu_hostcr3,
     .nhvm_vcpu_asid = nsvm_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index aee1f9e..98309da 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1504,7 +1504,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3    = nvmx_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b27d2d..6999c25 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -94,7 +94,7 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
     return 0;
 }
 
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v)
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
     /* TODO */
     ASSERT(0);
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 317875d..f9a5edc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -48,9 +48,10 @@
  *    1. If #NPF is from L1 guest, then we crash the guest VM (same as old 
  *       code)
  *    2. If #NPF is from L2 guest, then we continue from (3)
- *    3. Get h_cr3 from L1 guest. Map h_cr3 into L0 hypervisor address space.
- *    4. Walk the h_cr3 page table
- *    5.    - if not present, then we inject #NPF back to L1 guest and 
+ *    3. Get np2m base from L1 guest. Map np2m base into L0 hypervisor address space.
+ *    4. Walk the np2m's  page table
+ *    5.    - if not present or permission check failure, then we inject #NPF back to 
+ *    L1 guest and 
  *            re-launch L1 guest (L1 guest will either treat this #NPF as MMIO,
  *            or fix its p2m table for L2 guest)
  *    6.    - if present, then we will get the a new translated value L1-GPA 
@@ -89,7 +90,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
     if (old_flags & _PAGE_PRESENT)
         flush_tlb_mask(p2m->dirty_cpumask);
-    
+
     paging_unlock(d);
 }
 
@@ -110,7 +111,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     /* If this p2m table has been flushed or recycled under our feet, 
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
-    if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
+    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
     {
         unsigned long gfn, mask;
         mfn_t mfn;
@@ -186,7 +187,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
     
-    nested_cr3 = nhvm_vcpu_hostcr3(v);
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
 
     pfec = PFEC_user_mode | PFEC_page_present;
     if (access_w)
@@ -221,7 +222,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 3700e32..1817f81 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -249,7 +249,7 @@ declare_mm_order_constraint(per_page_sharing)
  * A per-domain lock that protects the mapping from nested-CR3 to 
  * nested-p2m.  In particular it covers:
  * - the array of nested-p2m tables, and all LRU activity therein; and
- * - setting the "cr3" field of any p2m table to a non-CR3_EADDR value. 
+ * - setting the "cr3" field of any p2m table to a non-P2M_BASE_EAADR value. 
  *   (i.e. assigning a p2m table to be the shadow of that cr3 */
 
 /* PoD lock (per-p2m-table)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 258f46e..41a461b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -69,7 +69,7 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
 
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ept_p2m_init(p2m);
@@ -1433,7 +1433,7 @@ p2m_flush_table(struct p2m_domain *p2m)
     ASSERT(page_list_empty(&p2m->pod.single));
 
     /* This is no longer a valid nested p2m for any address space */
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
     
     /* Zap the top level of the trie */
     top = mfn_to_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
@@ -1471,7 +1471,7 @@ p2m_flush_nestedp2m(struct domain *d)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
+p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
     /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
      * this may change within the loop by an other (v)cpu.
@@ -1480,8 +1480,8 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     struct domain *d;
     struct p2m_domain *p2m;
 
-    /* Mask out low bits; this avoids collisions with CR3_EADDR */
-    cr3 &= ~(0xfffull);
+    /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
+    np2m_base &= ~(0xfffull);
 
     if (nv->nv_flushp2m && nv->nv_p2m) {
         nv->nv_p2m = NULL;
@@ -1493,14 +1493,14 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR )
+        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             nv->nv_flushp2m = 0;
             p2m_getlru_nestedp2m(d, p2m);
             nv->nv_p2m = p2m;
-            if (p2m->cr3 == CR3_EADDR)
+            if ( p2m->np2m_base == P2M_BASE_EADDR )
                 hvm_asid_flush_vcpu(v);
-            p2m->cr3 = cr3;
+            p2m->np2m_base = np2m_base;
             cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
@@ -1515,7 +1515,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     nv->nv_p2m = p2m;
-    p2m->cr3 = cr3;
+    p2m->np2m_base = np2m_base;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
@@ -1531,7 +1531,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1549,15 +1549,15 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint32_t pfec_21 = *pfec;
-        uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
+        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, ncr3);
+        p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
+        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
                                        gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index fdb0f58..d3535b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -170,7 +170,7 @@ struct hvm_function_table {
                                 uint64_t exitcode);
     int (*nhvm_vcpu_vmexit_trap)(struct vcpu *v, struct hvm_trap *trap);
     uint64_t (*nhvm_vcpu_guestcr3)(struct vcpu *v);
-    uint64_t (*nhvm_vcpu_hostcr3)(struct vcpu *v);
+    uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v);
     uint32_t (*nhvm_vcpu_asid)(struct vcpu *v);
     int (*nhvm_vmcx_guest_intercepts_trap)(struct vcpu *v, 
                                unsigned int trapnr, int errcode);
@@ -475,7 +475,7 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v);
 /* returns l1 guest's cr3 that points to the page table used to
  * translate l2 guest physical address to l1 guest physical address.
  */
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v);
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v);
 /* returns the asid number l1 guest wants to use to run the l2 guest */
 uint32_t nhvm_vcpu_asid(struct vcpu *v);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..d97011d 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -99,7 +99,7 @@ int nvmx_vcpu_initialise(struct vcpu *v);
 void nvmx_vcpu_destroy(struct vcpu *v);
 int nvmx_vcpu_reset(struct vcpu *v);
 uint64_t nvmx_vcpu_guestcr3(struct vcpu *v);
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v);
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v);
 uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2bd2048..ce26594 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -197,17 +197,17 @@ struct p2m_domain {
 
     struct domain     *domain;   /* back pointer to domain */
 
-    /* Nested p2ms only: nested-CR3 value that this p2m shadows. 
-     * This can be cleared to CR3_EADDR under the per-p2m lock but
+    /* Nested p2ms only: nested p2m base value that this p2m shadows. 
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
      * needs both the per-p2m lock and the per-domain nestedp2m lock
      * to set it to any other value. */
-#define CR3_EADDR     (~0ULL)
-    uint64_t           cr3;
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
     * The host p2m holds the head of the list and the np2ms are 
      * threaded on in LRU order. */
-    struct list_head np2m_list; 
+    struct list_head   np2m_list; 
 
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
@@ -282,11 +282,11 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zi-00025w-La; Mon, 24 Dec 2012 14:27:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zh-000255-Dn
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:05 +0000
Received: from [85.158.143.35:36983] by server-3.bemta-4.messagelabs.com id
	7D/69-18211-83668D05; Mon, 24 Dec 2012 14:27:04 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1356359219!16771313!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23105 invoked from network); 24 Dec 2012 14:27:00 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-21.messagelabs.com with SMTP;
	24 Dec 2012 14:27:00 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524939"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:58 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:28 +0800
Message-Id: <1356359194-5321-5-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 04/10] EPT: Make ept data structure or
	operations neutral
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Share the current EPT logic with the nested EPT case by making
the related data structures and operations neutral to both
common EPT and nested EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |    8 ++-
 xen/arch/x86/hvm/vmx/vmx.c         |   53 +-------------
 xen/arch/x86/mm/p2m-ept.c          |  104 ++++++++++++++++++++++------
 xen/arch/x86/mm/p2m.c              |  132 +++++++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/vmx/vmcs.h |   23 +++---
 xen/include/asm-x86/hvm/vmx/vmx.h  |   10 ++-
 xen/include/asm-x86/p2m.h          |    4 +
 7 files changed, 208 insertions(+), 126 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9adc7a4..de22e03 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -942,7 +942,13 @@ static int construct_vmcs(struct vcpu *v)
     }
 
     if ( paging_mode_hap(d) )
-        __vmwrite(EPT_POINTER, d->arch.hvm_domain.vmx.ept_control.eptp);
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+        struct ept_data *ept = &p2m->ept;
+
+        ept->asr  = pagetable_get_pfn(p2m_get_pagetable(p2m));
+        __vmwrite(EPT_POINTER, ept_get_eptp(ept));
+    }
 
     if ( cpu_has_vmx_pat && paging_mode_hap(d) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abfa90..d74aae0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -74,38 +74,19 @@ static void vmx_fpu_dirty_intercept(void);
 static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content);
 static void vmx_invlpg_intercept(unsigned long vaddr);
-static void __ept_sync_domain(void *info);
 
 static int vmx_domain_initialise(struct domain *d)
 {
     int rc;
 
-    /* Set the memory type used when accessing EPT paging structures. */
-    d->arch.hvm_domain.vmx.ept_control.ept_mt = EPT_DEFAULT_MT;
-
-    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
-    d->arch.hvm_domain.vmx.ept_control.ept_wl = 3;
-
-    d->arch.hvm_domain.vmx.ept_control.asr  =
-        pagetable_get_pfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-
-    if ( !zalloc_cpumask_var(&d->arch.hvm_domain.vmx.ept_synced) )
-        return -ENOMEM;
-
     if ( (rc = vmx_alloc_vlapic_mapping(d)) != 0 )
-    {
-        free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
         return rc;
-    }
 
     return 0;
 }
 
 static void vmx_domain_destroy(struct domain *d)
 {
-    if ( paging_mode_hap(d) )
-        on_each_cpu(__ept_sync_domain, d, 1);
-    free_cpumask_var(d->arch.hvm_domain.vmx.ept_synced);
     vmx_free_vlapic_mapping(d);
 }
 
@@ -641,6 +622,7 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
 {
     struct domain *d = v->domain;
     unsigned long old_cr4 = read_cr4(), new_cr4 = mmu_cr4_features;
+    struct ept_data *ept_data = &p2m_get_hostp2m(d)->ept;
 
     /* HOST_CR4 in VMCS is always mmu_cr4_features. Sync CR4 now. */
     if ( old_cr4 != new_cr4 )
@@ -650,10 +632,10 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     {
         unsigned int cpu = smp_processor_id();
         /* Test-and-test-and-set this CPU in the EPT-is-synced mask. */
-        if ( !cpumask_test_cpu(cpu, d->arch.hvm_domain.vmx.ept_synced) &&
+        if ( !cpumask_test_cpu(cpu, ept_get_synced_mask(ept_data)) &&
              !cpumask_test_and_set_cpu(cpu,
-                                       d->arch.hvm_domain.vmx.ept_synced) )
-            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
+                                       ept_get_synced_mask(ept_data)) )
+            __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
     }
 
     vmx_restore_guest_msrs(v);
@@ -1216,33 +1198,6 @@ static void vmx_update_guest_efer(struct vcpu *v)
                    (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
-static void __ept_sync_domain(void *info)
-{
-    struct domain *d = info;
-    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(d), 0);
-}
-
-void ept_sync_domain(struct domain *d)
-{
-    /* Only if using EPT and this domain has some VCPUs to dirty. */
-    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
-        return;
-
-    ASSERT(local_irq_is_enabled());
-
-    /*
-     * Flush active cpus synchronously. Flush others the next time this domain
-     * is scheduled onto them. We accept the race of other CPUs adding to
-     * the ept_synced mask before on_selected_cpus() reads it, resulting in
-     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
-     */
-    cpumask_and(d->arch.hvm_domain.vmx.ept_synced,
-                d->domain_dirty_cpumask, &cpu_online_map);
-
-    on_selected_cpus(d->arch.hvm_domain.vmx.ept_synced,
-                     __ept_sync_domain, d, 1);
-}
-
 void nvmx_enqueue_n2_exceptions(struct vcpu *v, 
             unsigned long intr_fields, int error_code)
 {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index c964f54..e33f415 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -291,9 +291,11 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     int need_modify_vtd_table = 1;
     int vtd_pte_present = 0;
     int needs_sync = 1;
-    struct domain *d = p2m->domain;
     ept_entry_t old_entry = { .epte = 0 };
+    struct ept_data *ept = &p2m->ept;
+    struct domain *d = p2m->domain;
 
+    ASSERT(ept);
     /*
      * the caller must make sure:
      * 1. passing valid gfn and mfn at order boundary.
@@ -301,17 +303,17 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
      * 3. passing a valid order.
      */
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
-         ((u64)gfn >> ((ept_get_wl(d) + 1) * EPT_TABLE_ORDER)) ||
+         ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
         return 0;
 
-    ASSERT((target == 2 && hvm_hap_has_1gb(d)) ||
-           (target == 1 && hvm_hap_has_2mb(d)) ||
+    ASSERT((target == 2 && hvm_hap_has_1gb()) ||
+           (target == 1 && hvm_hap_has_2mb()) ||
            (target == 0));
 
-    table = map_domain_page(ept_get_asr(d));
+    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-    for ( i = ept_get_wl(d); i > target; i-- )
+    for ( i = ept_get_wl(ept); i > target; i-- )
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
@@ -439,9 +441,11 @@ out:
     unmap_domain_page(table);
 
     if ( needs_sync )
-        ept_sync_domain(p2m->domain);
+        ept_sync_domain(p2m);
 
-    if ( rv && iommu_enabled && need_iommu(p2m->domain) && need_modify_vtd_table )
+    /* For non-nested p2m, may need to change VT-d page table. */
+    if ( rv && !p2m_is_nestedp2m(p2m) && iommu_enabled && need_iommu(p2m->domain) &&
+                need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
             iommu_pte_flush(d, gfn, (u64*)ept_entry, order, vtd_pte_present);
@@ -488,14 +492,14 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order)
 {
-    struct domain *d = p2m->domain;
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
     int i;
     int ret = 0;
     mfn_t mfn = _mfn(INVALID_MFN);
+    struct ept_data *ept = &p2m->ept;
 
     *t = p2m_mmio_dm;
     *a = p2m_access_n;
@@ -506,7 +510,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     /* Should check if gfn obeys GAW here. */
 
-    for ( i = ept_get_wl(d); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
     retry:
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
@@ -588,19 +592,20 @@ out:
 static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
     unsigned long gfn, int *level)
 {
-    ept_entry_t *table = map_domain_page(ept_get_asr(p2m->domain));
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     ept_entry_t content = { .epte = 0 };
     u32 index;
     int i;
     int ret=0;
+    struct ept_data *ept = &p2m->ept;
 
     /* This pfn is higher than the highest the p2m map currently holds */
     if ( gfn > p2m->max_mapped_pfn )
         goto out;
 
-    for ( i = ept_get_wl(p2m->domain); i > 0; i-- )
+    for ( i = ept_get_wl(ept); i > 0; i-- )
     {
         ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
         if ( !ret || ret == GUEST_TABLE_POD_PAGE )
@@ -622,7 +627,8 @@ static ept_entry_t ept_get_entry_content(struct p2m_domain *p2m,
 void ept_walk_table(struct domain *d, unsigned long gfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    ept_entry_t *table = map_domain_page(ept_get_asr(d));
+    struct ept_data *ept = &p2m->ept;
+    ept_entry_t *table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
     unsigned long gfn_remainder = gfn;
 
     int i;
@@ -638,7 +644,7 @@ void ept_walk_table(struct domain *d, unsigned long gfn)
         goto out;
     }
 
-    for ( i = ept_get_wl(d); i >= 0; i-- )
+    for ( i = ept_get_wl(ept); i >= 0; i-- )
     {
         ept_entry_t *ept_entry, *next;
         u32 index;
@@ -778,24 +784,76 @@ static void ept_change_entry_type_page(mfn_t ept_page_mfn, int ept_page_level,
 static void ept_change_entry_type_global(struct p2m_domain *p2m,
                                          p2m_type_t ot, p2m_type_t nt)
 {
-    struct domain *d = p2m->domain;
-    if ( ept_get_asr(d) == 0 )
+    struct ept_data *ept = &p2m->ept;
+    if ( ept_get_asr(ept) == 0 )
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
     BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
 
-    ept_change_entry_type_page(_mfn(ept_get_asr(d)), ept_get_wl(d), ot, nt);
+    ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
+            ept_get_wl(ept), ot, nt);
+
+    ept_sync_domain(p2m);
+}
+
+static void __ept_sync_domain(void *info)
+{
+    struct ept_data *ept = &((struct p2m_domain *)info)->ept;
 
-    ept_sync_domain(d);
+    __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept), 0);
 }
 
-void ept_p2m_init(struct p2m_domain *p2m)
+void ept_sync_domain(struct p2m_domain *p2m)
 {
+    struct domain *d = p2m->domain;
+    struct ept_data *ept = &p2m->ept;
+    /* Only if using EPT and this domain has some VCPUs to dirty. */
+    if ( !paging_mode_hap(d) || !d->vcpu || !d->vcpu[0] )
+        return;
+
+    ASSERT(local_irq_is_enabled());
+
+    /*
+     * Flush active cpus synchronously. Flush others the next time this domain
+     * is scheduled onto them. We accept the race of other CPUs adding to
+     * the ept_synced mask before on_selected_cpus() reads it, resulting in
+     * unnecessary extra flushes, to avoid allocating a cpumask_t on the stack.
+     */
+    cpumask_and(ept_get_synced_mask(ept),
+                d->domain_dirty_cpumask, &cpu_online_map);
+
+    on_selected_cpus(ept_get_synced_mask(ept),
+                     __ept_sync_domain, p2m, 1);
+}
+
+int ept_p2m_init(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->audit_p2m = NULL;
+
+    /* Set the memory type used when accessing EPT paging structures. */
+    ept->ept_mt = EPT_DEFAULT_MT;
+
+    /* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
+    ept->ept_wl = 3;
+
+    if ( !zalloc_cpumask_var(&ept->synced_mask) )
+        return -ENOMEM;
+
+    on_each_cpu(__ept_sync_domain, p2m, 1);
+
+    return 0;
+}
+
+void ept_p2m_uninit(struct p2m_domain *p2m)
+{
+    struct ept_data *ept = &p2m->ept;
+    free_cpumask_var(ept->synced_mask);
 }
 
 static void ept_dump_p2m_table(unsigned char key)
@@ -811,6 +869,7 @@ static void ept_dump_p2m_table(unsigned char key)
     unsigned long gfn, gfn_remainder;
     unsigned long record_counter = 0;
     struct p2m_domain *p2m;
+    struct ept_data *ept;
 
     for_each_domain(d)
     {
@@ -818,15 +877,16 @@ static void ept_dump_p2m_table(unsigned char key)
             continue;
 
         p2m = p2m_get_hostp2m(d);
+        ept = &p2m->ept;
         printk("\ndomain%d EPT p2m table: \n", d->domain_id);
 
         for ( gfn = 0; gfn <= p2m->max_mapped_pfn; gfn += (1 << order) )
         {
             gfn_remainder = gfn;
             mfn = _mfn(INVALID_MFN);
-            table = map_domain_page(ept_get_asr(d));
+            table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
 
-            for ( i = ept_get_wl(d); i > 0; i-- )
+            for ( i = ept_get_wl(ept); i > 0; i-- )
             {
                 ret = ept_next_level(p2m, 1, &table, &gfn_remainder, i);
                 if ( ret != GUEST_TABLE_NORMAL_PAGE )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 41a461b..49eb8af 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -57,8 +57,10 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 
 /* Init the datastructures for later use by the p2m code */
-static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
+    int ret = 0;
+
     mm_rwlock_init(&p2m->lock);
     mm_lock_init(&p2m->pod.lock);
     INIT_LIST_HEAD(&p2m->np2m_list);
@@ -72,27 +74,81 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
-        ept_p2m_init(p2m);
+        ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
 
-    return;
+    return ret;
+}
+
+static struct p2m_domain *p2m_init_one(struct domain *d)
+{
+    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+    if ( !p2m )
+        return NULL;
+
+    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+        goto free_p2m;
+
+    if ( p2m_initialise(d, p2m) )
+        goto free_cpumask;
+    return p2m;
+
+free_cpumask:
+    free_cpumask_var(p2m->dirty_cpumask);
+free_p2m:
+    xfree(p2m);
+    return NULL;
 }
 
-static int
-p2m_init_nestedp2m(struct domain *d)
+static void p2m_free_one(struct p2m_domain *p2m)
+{
+    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
+        ept_p2m_uninit(p2m);
+    free_cpumask_var(p2m->dirty_cpumask);
+    xfree(p2m);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_init_one(d);
+
+    if ( p2m )
+    {
+        d->arch.p2m = p2m;
+        return 0;
+    }
+    return -ENOMEM;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    /* Iterate over all p2m tables per domain */
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    if ( p2m ) {
+        p2m_free_one(p2m);
+        d->arch.p2m = NULL;
+    }
+}
+
+static void p2m_teardown_nestedp2m(struct domain *d);
+
+static int p2m_init_nestedp2m(struct domain *d)
 {
     uint8_t i;
     struct p2m_domain *p2m;
 
     mm_lock_init(&d->arch.nested_p2m_lock);
-    for (i = 0; i < MAX_NESTEDP2M; i++) {
-        d->arch.nested_p2m[i] = p2m = xzalloc(struct p2m_domain);
-        if (p2m == NULL)
-            return -ENOMEM;
-        if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+    for (i = 0; i < MAX_NESTEDP2M; i++)
+    {
+        d->arch.nested_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_nestedp2m(d);
             return -ENOMEM;
-        p2m_initialise(d, p2m);
+        }
         p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
     }
@@ -100,27 +156,37 @@ p2m_init_nestedp2m(struct domain *d)
     return 0;
 }
 
-int p2m_init(struct domain *d)
+static void p2m_teardown_nestedp2m(struct domain *d)
 {
+    uint8_t i;
     struct p2m_domain *p2m;
-    int rc;
 
-    p2m_get_hostp2m(d) = p2m = xzalloc(struct p2m_domain);
-    if ( p2m == NULL )
-        return -ENOMEM;
-    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+    for (i = 0; i < MAX_NESTEDP2M; i++)
     {
-        xfree(p2m);
-        return -ENOMEM;
+        if ( !d->arch.nested_p2m[i] )
+            continue;
+        p2m = d->arch.nested_p2m[i];
+        list_del(&p2m->np2m_list);
+        p2m_free_one(p2m);
+        d->arch.nested_p2m[i] = NULL;
     }
-    p2m_initialise(d, p2m);
+}
+
+int p2m_init(struct domain *d)
+{
+    int rc;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc )
+        return rc;
 
     /* Must initialise nestedp2m unconditionally
      * since nestedhvm_enabled(d) returns false here.
      * (p2m_init runs too early for HVM_PARAM_* options) */
     rc = p2m_init_nestedp2m(d);
-    if ( rc ) 
-        p2m_final_teardown(d);
+    if ( rc )
+        p2m_teardown_hostp2m(d);
+
     return rc;
 }
 
@@ -421,28 +487,12 @@ void p2m_teardown(struct p2m_domain *p2m)
     p2m_unlock(p2m);
 }
 
-static void p2m_teardown_nestedp2m(struct domain *d)
-{
-    uint8_t i;
-
-    for (i = 0; i < MAX_NESTEDP2M; i++) {
-        if ( !d->arch.nested_p2m[i] )
-            continue;
-        free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
-        xfree(d->arch.nested_p2m[i]);
-        d->arch.nested_p2m[i] = NULL;
-    }
-}
-
 void p2m_final_teardown(struct domain *d)
 {
     /* Iterate over all p2m tables per domain */
-    if ( d->arch.p2m )
-    {
-        free_cpumask_var(d->arch.p2m->dirty_cpumask);
-        xfree(d->arch.p2m);
-        d->arch.p2m = NULL;
-    }
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    if ( p2m )
+        p2m_teardown_hostp2m(d);
 
     /* We must teardown unconditionally because
      * we initialise them unconditionally.
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 9a728b6..2d38b43 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -56,26 +56,27 @@ struct vmx_msr_state {
 
 #define EPT_DEFAULT_MT      MTRR_TYPE_WRBACK
 
-struct vmx_domain {
-    unsigned long apic_access_mfn;
+struct ept_data {
     union {
-        struct {
+    struct {
             u64 ept_mt :3,
                 ept_wl :3,
                 rsvd   :6,
                 asr    :52;
         };
         u64 eptp;
-    } ept_control;
-    cpumask_var_t ept_synced;
+    };
+    cpumask_var_t synced_mask;
+};
+
+struct vmx_domain {
+    unsigned long apic_access_mfn;
 };
 
-#define ept_get_wl(d)   \
-    ((d)->arch.hvm_domain.vmx.ept_control.ept_wl)
-#define ept_get_asr(d)  \
-    ((d)->arch.hvm_domain.vmx.ept_control.asr)
-#define ept_get_eptp(d) \
-    ((d)->arch.hvm_domain.vmx.ept_control.eptp)
+#define ept_get_wl(ept)   ((ept)->ept_wl)
+#define ept_get_asr(ept)  ((ept)->asr)
+#define ept_get_eptp(ept) ((ept)->eptp)
+#define ept_get_synced_mask(ept) ((ept)->synced_mask)
 
 struct arch_vmx_struct {
     /* Virtual address of VMCS. */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index c73946f..d4d6feb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -363,7 +363,7 @@ static inline void ept_sync_all(void)
     __invept(INVEPT_ALL_CONTEXT, 0, 0);
 }
 
-void ept_sync_domain(struct domain *d);
+void ept_sync_domain(struct p2m_domain *p2m);
 
 static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
 {
@@ -425,12 +425,18 @@ void vmx_get_segment_register(struct vcpu *, enum x86_segment,
 void vmx_inject_extint(int trap);
 void vmx_inject_nmi(void);
 
-void ept_p2m_init(struct p2m_domain *p2m);
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+
 void ept_walk_table(struct domain *d, unsigned long gfn);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
 
+int alloc_p2m_hap_data(struct p2m_domain *p2m);
+void free_p2m_hap_data(struct p2m_domain *p2m);
+void p2m_init_hap_data(struct p2m_domain *p2m);
+
 /* EPT violation qualifications definitions */
 #define _EPT_READ_VIOLATION         0
 #define EPT_READ_VIOLATION          (1UL<<_EPT_READ_VIOLATION)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ce26594..b6a84b6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -277,6 +277,10 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+    union {
+        struct ept_data ept;
+        /* NPT-equivalent structure could be added here. */
+    };
 };
 
 /* get host p2m table */
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zi-00025U-1c; Mon, 24 Dec 2012 14:27:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zf-00024G-JF
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:03 +0000
Received: from [85.158.137.99:13564] by server-15.bemta-3.messagelabs.com id
	7A/43-07921-73668D05; Mon, 24 Dec 2012 14:27:03 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356359217!12860986!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20783 invoked from network); 24 Dec 2012 14:27:02 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-217.messagelabs.com with SMTP;
	24 Dec 2012 14:27:02 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524947"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:59 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:29 +0800
Message-Id: <1356359194-5321-6-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 05/10] nEPT: Try to enable EPT paging for L2
	guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Once EPT is found to be enabled by the L1 VMM, enable nested EPT
support for the L2 guest.
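
The key step in this patch is latching L1's EPT-enable choice when merging the secondary execution controls. A minimal sketch of that latch, with illustrative struct and helper names (the real Xen code operates on the virtual VMCS via __get_vvmcs/__vmwrite):

```c
#include <stdint.h>
#include <stdbool.h>

/* "Enable EPT" is bit 1 of the secondary processor-based
 * VM-execution controls (Intel SDM). */
#define SECONDARY_EXEC_ENABLE_EPT (1u << 1)

struct nestedvmx_ept_sketch {
    bool enabled;
};

/* L1 wrote its secondary exec controls into the virtual VMCS
 * (shadow_cntrl). Before merging in L0's own required controls
 * (host_cntrl), record whether L1 asked for EPT; nvmx_ept_enabled()
 * later reports this to the generic nested-HVM code. */
static void update_secondary_exec_control(struct nestedvmx_ept_sketch *ept,
                                          uint32_t shadow_cntrl,
                                          uint32_t host_cntrl,
                                          uint32_t *merged)
{
    ept->enabled = (shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT) != 0;
    *merged = shadow_cntrl | host_cntrl;
}
```

This mirrors the change to nvmx_update_secondary_exec_control() in the hunk below: the EPT bit is sampled from L1's value before the host controls are OR-ed in.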

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vmx.c         |   16 +++++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        |   48 +++++++++++++++++++++++++++--------
 xen/include/asm-x86/hvm/vmx/vvmx.h |    5 +++-
 3 files changed, 54 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d74aae0..ed8d532 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1461,6 +1461,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
     .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
+    .nhvm_vmcx_hap_enabled = nvmx_ept_enabled,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
     .nhvm_intr_blocked    = nvmx_intr_blocked,
@@ -2003,6 +2004,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
+    int ret;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2017,18 +2019,26 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
         _d.gpa = gpa;
         _d.qualification = qualification;
         _d.mfn = mfn_x(get_gfn_query_unlocked(d, gfn, &_d.p2mt));
-        
+
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    if ( hvm_hap_nested_page_fault(gpa,
+    ret = hvm_hap_nested_page_fault(gpa,
                                    qualification & EPT_GLA_VALID       ? 1 : 0,
                                    qualification & EPT_GLA_VALID
                                      ? __vmread(GUEST_LINEAR_ADDRESS) : ~0ull,
                                    qualification & EPT_READ_VIOLATION  ? 1 : 0,
                                    qualification & EPT_WRITE_VIOLATION ? 1 : 0,
-                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0) )
+                                   qualification & EPT_EXEC_VIOLATION  ? 1 : 0);
+    switch ( ret ) {
+    case 0:         /* Unhandled L1 EPT violation */
+        break;
+    case 1:         /* This violation is handled completely */
         return;
+    case -1:        /* This violation should be injected into the L1 VMM */
+        vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+        return;
+    }
 
     /* Everything else is an error. */
     mfn = get_gfn_query_unlocked(d, gfn, &p2mt);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 1d3090d..f9699dc 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -41,6 +41,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         gdprintk(XENLOG_ERR, "nest: allocation for shadow vmcs failed\n");
 	goto out;
     }
+    nvmx->ept.enabled = 0;
     nvmx->vmxon_region_pa = 0;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
@@ -96,9 +97,11 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
 
 uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
-    /* TODO */
-    ASSERT(0);
-    return 0;
+    uint64_t eptp_base;
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+    eptp_base = __get_vvmcs(nvcpu->nv_vvmcx, EPT_POINTER);
+    return eptp_base & PAGE_MASK;
 }
 
 uint32_t nvmx_vcpu_asid(struct vcpu *v)
@@ -108,6 +111,13 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v)
     return 0;
 }
 
+bool_t nvmx_ept_enabled(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+
+    return !!(nvmx->ept.enabled);
+}
+
 static const enum x86_segment sreg_to_index[] = {
     [VMX_SREG_ES] = x86_seg_es,
     [VMX_SREG_CS] = x86_seg_cs,
@@ -502,14 +512,16 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 }
 
 void nvmx_update_secondary_exec_control(struct vcpu *v,
-                                            unsigned long value)
+                                            unsigned long host_cntrl)
 {
     u32 shadow_cntrl;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
     shadow_cntrl = __get_vvmcs(nvcpu->nv_vvmcx, SECONDARY_VM_EXEC_CONTROL);
-    shadow_cntrl |= value;
-    set_shadow_control(v, SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
+    nvmx->ept.enabled = !!(shadow_cntrl & SECONDARY_EXEC_ENABLE_EPT);
+    shadow_cntrl |= host_cntrl;
+    __vmwrite(SECONDARY_VM_EXEC_CONTROL, shadow_cntrl);
 }
 
 static void nvmx_update_pin_control(struct vcpu *v, unsigned long host_cntrl)
@@ -851,6 +863,17 @@ static void load_shadow_guest_state(struct vcpu *v)
     /* TODO: CR3 target control */
 }
 
+
+static uint64_t get_shadow_eptp(struct vcpu *v)
+{
+    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct ept_data *ept = &p2m->ept;
+
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    return ept_get_eptp(ept);
+}
+
 static void virtual_vmentry(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -895,7 +918,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /* updating host cr0 to sync TS bit */
     __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
 
-    /* TODO: EPT_POINTER */
+    /* Set up virtual EPT for L2 guest */
+    if ( nestedhvm_paging_mode_hap(v) )
+        __vmwrite(EPT_POINTER, get_shadow_eptp(v));
+
 }
 
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
@@ -948,8 +974,8 @@ static void sync_vvmcs_ro(struct vcpu *v)
     /* Adjust exit_reason/exit_qualifciation for violation case */
     if ( __get_vvmcs(vvmcs, VM_EXIT_REASON) == EXIT_REASON_EPT_VIOLATION )
     {
-        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept_exit.exit_qual);
-        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept_exit.exit_reason);
+        __set_vvmcs(vvmcs, EXIT_QUALIFICATION, nvmx->ept.exit_qual);
+        __set_vvmcs(vvmcs, VM_EXIT_REASON, nvmx->ept.exit_reason);
     }
 }
 
@@ -1515,8 +1541,8 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     case EPT_TRANSLATE_VIOLATION:
     case EPT_TRANSLATE_MISCONFIG:
         rc = NESTEDHVM_PAGEFAULT_INJECT;
-        nvmx->ept_exit.exit_reason = exit_reason;
-        nvmx->ept_exit.exit_qual = exit_qual;
+        nvmx->ept.exit_reason = exit_reason;
+        nvmx->ept.exit_qual = exit_qual;
         break;
     case EPT_TRANSLATE_RETRY:
         rc = NESTEDHVM_PAGEFAULT_RETRY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 97554bf..e3d1a22 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -33,9 +33,10 @@ struct nestedvmx {
         u32           error_code;
     } intr;
     struct {
+        bool_t   enabled;
         uint32_t exit_reason;
         uint32_t exit_qual;
-    } ept_exit;
+    } ept;
 };
 
 #define vcpu_2_nvmx(v)	(vcpu_nestedhvm(v).u.nvmx)
@@ -110,6 +111,8 @@ int nvmx_intercepts_exception(struct vcpu *v,
                               unsigned int trap, int error_code);
 void nvmx_domain_relinquish_resources(struct domain *d);
 
+bool_t nvmx_ept_enabled(struct vcpu *v);
+
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zo-000295-M7; Mon, 24 Dec 2012 14:27:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zn-000284-Bi
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:11 +0000
Received: from [85.158.143.99:16099] by server-2.bemta-4.messagelabs.com id
	F7/69-30861-E3668D05; Mon, 24 Dec 2012 14:27:10 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356359222!21178558!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8323 invoked from network); 24 Dec 2012 14:27:03 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-14.tower-216.messagelabs.com with SMTP;
	24 Dec 2012 14:27:03 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524956"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:01 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:30 +0800
Message-Id: <1356359194-5321-7-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 06/10] nEPT: Sync PDPTR fields if L2 guest in
	PAE paging mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

For a PAE L2 guest, the GUEST_PDPTR registers need to be synced on each
virtual vmentry.
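
The mechanism is a straight copy of the four PDPTE fields from L1's virtual VMCS into the shadow VMCS the hardware consumes, as sketched below (array indices stand in for the real VMCS field encodings; the actual patch uses vvmcs_to_shadow()):

```c
#include <stdint.h>

/* Illustrative field indices for the four PAE page-directory-pointer
 * entries; real Xen addresses these via VMCS field encodings. */
enum { GUEST_PDPTR0, GUEST_PDPTR1, GUEST_PDPTR2, GUEST_PDPTR3, PDPTR_FIELDS };

/* On each virtual vmentry for a PAE L2 guest, propagate L1's view of
 * the L2 PDPTEs from the virtual VMCS into the shadow VMCS. */
static void sync_pdptrs(const uint64_t vvmcs[PDPTR_FIELDS],
                        uint64_t shadow[PDPTR_FIELDS])
{
    for (int f = GUEST_PDPTR0; f <= GUEST_PDPTR3; f++)
        shadow[f] = vvmcs[f];
}
```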

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f9699dc..7b48436 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -859,7 +859,15 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
     vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
 
-    /* TODO: PDPTRs for nested ept */
+    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
+                    (v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
+    {
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR0);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR1);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR2);
+	    vvmcs_to_shadow(vvmcs, GUEST_PDPTR3);
+    }
+
     /* TODO: CR3 target control */
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zd-00024L-HK; Mon, 24 Dec 2012 14:27:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zb-000244-E4
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:26:59 +0000
Received: from [85.158.139.83:57402] by server-1.bemta-5.messagelabs.com id
	AA/BA-12813-23668D05; Mon, 24 Dec 2012 14:26:58 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1356359215!27132525!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5968 invoked from network); 24 Dec 2012 14:26:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-182.messagelabs.com with SMTP;
	24 Dec 2012 14:26:56 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524907"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:54 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:25 +0800
Message-Id: <1356359194-5321-2-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 01/10] nestedhap: Change hostcr3 and p2m->cr3
	to meaningful words
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

VMX doesn't have the concept of a host CR3 for the nested p2m;
only SVM does, so change it to neutral wording.
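
The rename reflects that the same hook roots the nested p2m in a different register on each vendor. A hypothetical sketch of the idea (struct and enum names are illustrative, not Xen's):

```c
#include <stdint.h>

enum vendor_sketch { VENDOR_SVM, VENDOR_VMX };

struct vcpu_sketch {
    enum vendor_sketch vendor;
    uint64_t svm_hcr3;   /* nested paging root on AMD (host CR3) */
    uint64_t vmx_eptp;   /* EPT pointer on Intel */
};

/* One vendor-neutral hook, two meanings: return whichever register
 * roots the L1 guest's nested p2m. This is why "p2m base" is a better
 * name than "host cr3". */
static uint64_t nhvm_vcpu_p2m_base_sketch(const struct vcpu_sketch *v)
{
    return v->vendor == VENDOR_SVM ? v->svm_hcr3 : v->vmx_eptp;
}
```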

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/hvm.c             |    6 +++---
 xen/arch/x86/hvm/svm/svm.c         |    2 +-
 xen/arch/x86/hvm/vmx/vmx.c         |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        |    2 +-
 xen/arch/x86/mm/hap/nested_hap.c   |   15 ++++++++-------
 xen/arch/x86/mm/mm-locks.h         |    2 +-
 xen/arch/x86/mm/p2m.c              |   26 +++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h      |    4 ++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |    2 +-
 xen/include/asm-x86/p2m.h          |   16 ++++++++--------
 10 files changed, 39 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 40c1ab2..f63ee52 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4536,10 +4536,10 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v)
     return -EOPNOTSUPP;
 }
 
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v)
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v)
 {
-    if (hvm_funcs.nhvm_vcpu_hostcr3)
-        return hvm_funcs.nhvm_vcpu_hostcr3(v);
+    if ( hvm_funcs.nhvm_vcpu_p2m_base )
+        return hvm_funcs.nhvm_vcpu_p2m_base(v);
     return -EOPNOTSUPP;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 55a5ae5..2c8504a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2003,7 +2003,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vcpu_vmexit = nsvm_vcpu_vmexit_inject,
     .nhvm_vcpu_vmexit_trap = nsvm_vcpu_vmexit_trap,
     .nhvm_vcpu_guestcr3 = nsvm_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3 = nsvm_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base = nsvm_vcpu_hostcr3,
     .nhvm_vcpu_asid = nsvm_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index aee1f9e..98309da 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1504,7 +1504,7 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
     .nhvm_vcpu_guestcr3   = nvmx_vcpu_guestcr3,
-    .nhvm_vcpu_hostcr3    = nvmx_vcpu_hostcr3,
+    .nhvm_vcpu_p2m_base   = nvmx_vcpu_eptp_base,
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b27d2d..6999c25 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -94,7 +94,7 @@ uint64_t nvmx_vcpu_guestcr3(struct vcpu *v)
     return 0;
 }
 
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v)
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v)
 {
     /* TODO */
     ASSERT(0);
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 317875d..f9a5edc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -48,9 +48,10 @@
  *    1. If #NPF is from L1 guest, then we crash the guest VM (same as old 
  *       code)
  *    2. If #NPF is from L2 guest, then we continue from (3)
- *    3. Get h_cr3 from L1 guest. Map h_cr3 into L0 hypervisor address space.
- *    4. Walk the h_cr3 page table
- *    5.    - if not present, then we inject #NPF back to L1 guest and 
+ *    3. Get the np2m base from the L1 guest and map it into L0's address space.
+ *    4. Walk the np2m's page table
+ *    5.    - if not present, or if the permission check fails, then we inject
+ *            #NPF back to the L1 guest and
  *            re-launch L1 guest (L1 guest will either treat this #NPF as MMIO,
  *            or fix its p2m table for L2 guest)
  *    6.    - if present, then we will get the a new translated value L1-GPA 
@@ -89,7 +90,7 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
     if (old_flags & _PAGE_PRESENT)
         flush_tlb_mask(p2m->dirty_cpumask);
-    
+
     paging_unlock(d);
 }
 
@@ -110,7 +111,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     /* If this p2m table has been flushed or recycled under our feet, 
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
-    if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
+    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
     {
         unsigned long gfn, mask;
         mfn_t mfn;
@@ -186,7 +187,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
     
-    nested_cr3 = nhvm_vcpu_hostcr3(v);
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
 
     pfec = PFEC_user_mode | PFEC_page_present;
     if (access_w)
@@ -221,7 +222,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 3700e32..1817f81 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -249,7 +249,7 @@ declare_mm_order_constraint(per_page_sharing)
  * A per-domain lock that protects the mapping from nested-CR3 to 
  * nested-p2m.  In particular it covers:
  * - the array of nested-p2m tables, and all LRU activity therein; and
- * - setting the "cr3" field of any p2m table to a non-CR3_EADDR value. 
+ * - setting the "cr3" field of any p2m table to a non-P2M_BASE_EADDR value.
  *   (i.e. assigning a p2m table to be the shadow of that cr3 */
 
 /* PoD lock (per-p2m-table)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 258f46e..41a461b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -69,7 +69,7 @@ static void p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
 
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
 
     if ( hap_enabled(d) && cpu_has_vmx )
         ept_p2m_init(p2m);
@@ -1433,7 +1433,7 @@ p2m_flush_table(struct p2m_domain *p2m)
     ASSERT(page_list_empty(&p2m->pod.single));
 
     /* This is no longer a valid nested p2m for any address space */
-    p2m->cr3 = CR3_EADDR;
+    p2m->np2m_base = P2M_BASE_EADDR;
     
     /* Zap the top level of the trie */
     top = mfn_to_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
@@ -1471,7 +1471,7 @@ p2m_flush_nestedp2m(struct domain *d)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
+p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
     /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
      * this may change within the loop by an other (v)cpu.
@@ -1480,8 +1480,8 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     struct domain *d;
     struct p2m_domain *p2m;
 
-    /* Mask out low bits; this avoids collisions with CR3_EADDR */
-    cr3 &= ~(0xfffull);
+    /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
+    np2m_base &= ~(0xfffull);
 
     if (nv->nv_flushp2m && nv->nv_p2m) {
         nv->nv_p2m = NULL;
@@ -1493,14 +1493,14 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR )
+        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             nv->nv_flushp2m = 0;
             p2m_getlru_nestedp2m(d, p2m);
             nv->nv_p2m = p2m;
-            if (p2m->cr3 == CR3_EADDR)
+            if ( p2m->np2m_base == P2M_BASE_EADDR )
                 hvm_asid_flush_vcpu(v);
-            p2m->cr3 = cr3;
+            p2m->np2m_base = np2m_base;
             cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
@@ -1515,7 +1515,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     nv->nv_p2m = p2m;
-    p2m->cr3 = cr3;
+    p2m->np2m_base = np2m_base;
     nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
@@ -1531,7 +1531,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
+    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1549,15 +1549,15 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
         uint32_t pfec_21 = *pfec;
-        uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
+        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, ncr3);
+        p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
+        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
                                        gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index fdb0f58..d3535b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -170,7 +170,7 @@ struct hvm_function_table {
                                 uint64_t exitcode);
     int (*nhvm_vcpu_vmexit_trap)(struct vcpu *v, struct hvm_trap *trap);
     uint64_t (*nhvm_vcpu_guestcr3)(struct vcpu *v);
-    uint64_t (*nhvm_vcpu_hostcr3)(struct vcpu *v);
+    uint64_t (*nhvm_vcpu_p2m_base)(struct vcpu *v);
     uint32_t (*nhvm_vcpu_asid)(struct vcpu *v);
     int (*nhvm_vmcx_guest_intercepts_trap)(struct vcpu *v, 
                                unsigned int trapnr, int errcode);
@@ -475,7 +475,7 @@ uint64_t nhvm_vcpu_guestcr3(struct vcpu *v);
 /* returns l1 guest's cr3 that points to the page table used to
  * translate l2 guest physical address to l1 guest physical address.
  */
-uint64_t nhvm_vcpu_hostcr3(struct vcpu *v);
+uint64_t nhvm_vcpu_p2m_base(struct vcpu *v);
 /* returns the asid number l1 guest wants to use to run the l2 guest */
 uint32_t nhvm_vcpu_asid(struct vcpu *v);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index dce2cd8..d97011d 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -99,7 +99,7 @@ int nvmx_vcpu_initialise(struct vcpu *v);
 void nvmx_vcpu_destroy(struct vcpu *v);
 int nvmx_vcpu_reset(struct vcpu *v);
 uint64_t nvmx_vcpu_guestcr3(struct vcpu *v);
-uint64_t nvmx_vcpu_hostcr3(struct vcpu *v);
+uint64_t nvmx_vcpu_eptp_base(struct vcpu *v);
 uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2bd2048..ce26594 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -197,17 +197,17 @@ struct p2m_domain {
 
     struct domain     *domain;   /* back pointer to domain */
 
-    /* Nested p2ms only: nested-CR3 value that this p2m shadows. 
-     * This can be cleared to CR3_EADDR under the per-p2m lock but
+    /* Nested p2ms only: nested p2m base value that this p2m shadows. 
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
      * needs both the per-p2m lock and the per-domain nestedp2m lock
      * to set it to any other value. */
-#define CR3_EADDR     (~0ULL)
-    uint64_t           cr3;
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
     * The host p2m holds the head of the list and the np2ms are 
      * threaded on in LRU order. */
-    struct list_head np2m_list; 
+    struct list_head   np2m_list; 
 
 
     /* Host p2m: when this flag is set, don't flush all the nested-p2m 
@@ -282,11 +282,11 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified cr3.
+/* Get p2m table (re)usable for specified np2m base.
  * Automatically destroys and re-initializes a p2m if none found.
- * If cr3 == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+ * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t cr3);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8ze-00024S-1K; Mon, 24 Dec 2012 14:27:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zd-00024G-1Y
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:01 +0000
Received: from [85.158.137.99:34593] by server-15.bemta-3.messagelabs.com id
	E2/43-07921-33668D05; Mon, 24 Dec 2012 14:26:59 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356359217!12860986!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20473 invoked from network); 24 Dec 2012 14:26:57 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-217.messagelabs.com with SMTP;
	24 Dec 2012 14:26:57 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524918"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:55 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:26 +0800
Message-Id: <1356359194-5321-3-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 02/10] nestedhap: Change nested p2m's walker
	to vendor-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

EPT and NPT adopt different formats for their per-level entries, so
make the walker functions vendor-specific.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c              |    1 +
 xen/arch/x86/hvm/vmx/vmx.c              |    3 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |   13 +++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   46 +++++++++++--------------------
 xen/include/asm-x86/hvm/hvm.h           |    5 +++
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 ++
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    5 +++
 8 files changed, 76 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index ed0faa6..c1c6fa7 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1171,6 +1171,37 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
     return vcpu_nestedsvm(v).ns_hap_enabled;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    uint32_t pfec;
+    unsigned long nested_cr3, gfn;
+
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
+
+    pfec = PFEC_user_mode | PFEC_page_present;
+    if ( access_w )
+        pfec |= PFEC_write_access;
+    if ( access_x )
+        pfec |= PFEC_insn_fetch;
+
+    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
+
+    if ( gfn == INVALID_GFN ) 
+        return NESTEDHVM_PAGEFAULT_INJECT;
+
+    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+    return NESTEDHVM_PAGEFAULT_DONE;
+}
+
+
 enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2c8504a..acd2d49 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2008,6 +2008,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
+    .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 98309da..4abfa90 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1511,7 +1511,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_intr_blocked    = nvmx_intr_blocked,
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap,
-    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled
+    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled,
+    .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6999c25..53f6a4d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1479,6 +1479,19 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
     return 1;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    /*TODO:*/
+    return 0;
+}
+
 void nvmx_idtv_handling(void)
 {
     struct vcpu *v = current;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /*Walk nested p2m  */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8zm-00027R-2X; Mon, 24 Dec 2012 14:27:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zj-00026F-Px
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:08 +0000
Received: from [193.109.254.147:33909] by server-8.bemta-14.messagelabs.com id
	E9/E1-26341-A3668D05; Mon, 24 Dec 2012 14:27:06 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1356359224!11159397!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24939 invoked from network); 24 Dec 2012 14:27:05 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-13.tower-27.messagelabs.com with SMTP;
	24 Dec 2012 14:27:05 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:27:04 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524968"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:27:02 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:31 +0800
Message-Id: <1356359194-5321-8-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 07/10] nEPT: Use minimal permission for
	nested p2m.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

Emulate the permission check for the nested p2m. The current solution
is to use the minimal permission; once a permission violation is met in
L0, determine whether it was caused by the guest EPT or the host EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |    4 +-
 xen/arch/x86/mm/hap/nested_ept.c        |    5 ++-
 xen/arch/x86/mm/hap/nested_hap.c        |   39 +++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/hvm.h           |    2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    6 ++--
 7 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index c1c6fa7..b8a93f4 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
  */
 int
 nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint32_t pfec;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b48436..f1f6af2 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1528,7 +1528,7 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
  */
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     int rc;
@@ -1538,7 +1538,7 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
-    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn, p2m_acc,
                                 &exit_qual, &exit_reason);
     switch ( rc )
     {
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 1463d81..4393065 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -224,8 +224,8 @@ out:
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason)
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason)
 {
     uint32_t rc, rwx_bits = 0;
     ept_walk_t gw;
@@ -262,6 +262,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
         if ( nept_permission_check(rwx_acc, rwx_bits) )
         {
             *l1gfn = gw.lxe[0].mfn;
+            *p2m_acc = (uint8_t)rwx_bits;
             break;
         }
         rc = EPT_TRANSLATE_VIOLATION;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 6d1264b..7722a2a 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -142,12 +142,12 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
  */
 static int
 nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
 
-    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order, p2m_acc,
         access_r, access_w, access_x);
 }
 
@@ -158,16 +158,15 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      p2m_type_t *p2mt,
+                      p2m_type_t *p2mt, p2m_access_t *p2ma,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
@@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
     p2m_type_t p2mt_10;
+    p2m_access_t p2ma_10 = p2m_access_rwx;
+    uint8_t p2ma_21 = p2m_access_rwx;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
@@ -229,7 +230,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     /* ==> we have to walk L0 P2M */
     rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
-        &p2mt_10, &page_order_10,
+        &p2mt_10, &p2ma_10, &page_order_10,
         access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
@@ -250,10 +251,30 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     page_order_20 = min(page_order_21, page_order_10);
 
+    ASSERT(p2ma_10 <= p2m_access_n2rwx);
+    /*NOTE: if assert fails, needs to handle new access type here */
+
+    switch ( p2ma_10 )
+    {
+    case p2m_access_n ... p2m_access_rwx:
+        break;
+    case p2m_access_rx2rw:
+        p2ma_10 = p2m_access_rx;
+        break;
+    case p2m_access_n2rwx:
+        p2ma_10 = p2m_access_n;
+        break;
+    default:
+        /* For safety, remove all permissions. */
+        gdprintk(XENLOG_ERR, "Unhandled p2m access type:%d\n", p2ma_10);
+        p2ma_10 = p2m_access_n;
+    }
+    /* Use minimal permission for nested p2m. */
+    p2ma_10 &= (p2m_access_t)p2ma_21;
+
     /* fix p2m_get_pagetable(nested_p2m) */
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
-        p2mt_10,
-        p2m_access_rwx /* FIXME: Should use minimum permission. */);
+        p2mt_10, p2ma_10);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 80f07e9..889e3c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -186,7 +186,7 @@ struct hvm_function_table {
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 0c90f30..748cc04 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -134,7 +134,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
 int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index e3d1a22..1da0e77 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -123,7 +123,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
@@ -206,7 +206,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason);
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Emulate the permission check for the nested p2m. The current solution is
to use the minimal permission; once a permission violation is met in L0,
determine whether it is caused by the guest EPT or the host EPT.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |    4 +-
 xen/arch/x86/mm/hap/nested_ept.c        |    5 ++-
 xen/arch/x86/mm/hap/nested_hap.c        |   39 +++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/hvm.h           |    2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    6 ++--
 7 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index c1c6fa7..b8a93f4 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1177,7 +1177,7 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
  */
 int
 nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     uint32_t pfec;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b48436..f1f6af2 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1528,7 +1528,7 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
  */
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     int rc;
@@ -1538,7 +1538,7 @@ nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
     uint32_t rwx_rights = (access_x << 2) | (access_w << 1) | access_r;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
-    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn,
+    rc = nept_translate_l2ga(v, L2_gpa, page_order, rwx_rights, &gfn, p2m_acc,
                                 &exit_qual, &exit_reason);
     switch ( rc )
     {
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 1463d81..4393065 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -224,8 +224,8 @@ out:
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason)
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason)
 {
     uint32_t rc, rwx_bits = 0;
     ept_walk_t gw;
@@ -262,6 +262,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga,
         if ( nept_permission_check(rwx_acc, rwx_bits) )
         {
             *l1gfn = gw.lxe[0].mfn;
+            *p2m_acc = (uint8_t)rwx_bits;
             break;
         }
         rc = EPT_TRANSLATE_VIOLATION;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 6d1264b..7722a2a 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -142,12 +142,12 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
  */
 static int
 nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
 
-    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order, p2m_acc,
         access_r, access_w, access_x);
 }
 
@@ -158,16 +158,15 @@ nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      p2m_type_t *p2mt,
+                      p2m_type_t *p2mt, p2m_access_t *p2ma,
                       unsigned int *page_order,
                       bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
@@ -206,12 +205,14 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
     p2m_type_t p2mt_10;
+    p2m_access_t p2ma_10 = p2m_access_rwx;
+    uint8_t p2ma_21 = p2m_access_rwx;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
@@ -229,7 +230,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     /* ==> we have to walk L0 P2M */
     rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
-        &p2mt_10, &page_order_10,
+        &p2mt_10, &p2ma_10, &page_order_10,
         access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
@@ -250,10 +251,30 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
 
     page_order_20 = min(page_order_21, page_order_10);
 
+    ASSERT(p2ma_10 <= p2m_access_n2rwx);
+    /* NOTE: if the assert fails, a new access type needs handling here */
+
+    switch ( p2ma_10 )
+    {
+    case p2m_access_n ... p2m_access_rwx:
+        break;
+    case p2m_access_rx2rw:
+        p2ma_10 = p2m_access_rx;
+        break;
+    case p2m_access_n2rwx:
+        p2ma_10 = p2m_access_n;
+        break;
+    default:
+        gdprintk(XENLOG_ERR, "Unhandled p2m access type:%d\n", p2ma_10);
+        /* For safety, remove all permissions. */
+        p2ma_10 = p2m_access_n;
+    }
+    /* Use minimal permission for nested p2m. */
+    p2ma_10 &= (p2m_access_t)p2ma_21;
+
     /* fix p2m_get_pagetable(nested_p2m) */
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
-        p2mt_10,
-        p2m_access_rwx /* FIXME: Should use minimum permission. */);
+        p2mt_10, p2ma_10);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 80f07e9..889e3c9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -186,7 +186,7 @@ struct hvm_function_table {
 
     /*Walk nested p2m  */
     int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 0c90f30..748cc04 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -134,7 +134,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
 int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index e3d1a22..1da0e77 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -123,7 +123,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
 
 int
 nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
+                      unsigned int *page_order, uint8_t *p2m_acc,
                       bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
@@ -206,7 +206,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
 
 int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, 
                         unsigned int *page_order, uint32_t rwx_acc,
-                        unsigned long *l1gfn, uint64_t *exit_qual,
-                        uint32_t *exit_reason);
+                        unsigned long *l1gfn, uint8_t *p2m_acc,
+                        uint64_t *exit_qual, uint32_t *exit_reason);
 #endif /* __ASM_X86_HVM_VVMX_H__ */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn8ze-00024S-1K; Mon, 24 Dec 2012 14:27:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1Tn8zd-00024G-1Y
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:27:01 +0000
Received: from [85.158.137.99:34593] by server-15.bemta-3.messagelabs.com id
	E2/43-07921-33668D05; Mon, 24 Dec 2012 14:26:59 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356359217!12860986!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMwMDA5Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20473 invoked from network); 24 Dec 2012 14:26:57 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-217.messagelabs.com with SMTP;
	24 Dec 2012 14:26:57 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 24 Dec 2012 06:26:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,347,1355126400"; d="scan'208";a="266524918"
Received: from hax-build.sh.intel.com ([10.239.48.54])
	by fmsmga001.fm.intel.com with ESMTP; 24 Dec 2012 06:26:55 -0800
From: Xiantao Zhang <xiantao.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 24 Dec 2012 22:26:26 +0800
Message-Id: <1356359194-5321-3-git-send-email-xiantao.zhang@intel.com>
X-Mailer: git-send-email 1.7.12.1
In-Reply-To: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
References: <1356359194-5321-1-git-send-email-xiantao.zhang@intel.com>
Cc: keir@xen.org, jun.nakajima@intel.com, tim@xen.org, eddie.dong@intel.com,
	JBeulich@suse.com, Zhang Xiantao <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v4 02/10] nestedhap: Change nested p2m's walker
	to vendor-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zhang Xiantao <xiantao.zhang@intel.com>

EPT and NPT adopt different formats for each level's entries,
so make the walker functions vendor-specific.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |   31 +++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c              |    1 +
 xen/arch/x86/hvm/vmx/vmx.c              |    3 +-
 xen/arch/x86/hvm/vmx/vvmx.c             |   13 +++++++++
 xen/arch/x86/mm/hap/nested_hap.c        |   46 +++++++++++--------------------
 xen/include/asm-x86/hvm/hvm.h           |    5 +++
 xen/include/asm-x86/hvm/svm/nestedsvm.h |    3 ++
 xen/include/asm-x86/hvm/vmx/vvmx.h      |    5 +++
 8 files changed, 76 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index ed0faa6..c1c6fa7 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1171,6 +1171,37 @@ nsvm_vmcb_hap_enabled(struct vcpu *v)
     return vcpu_nestedsvm(v).ns_hap_enabled;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    uint32_t pfec;
+    unsigned long nested_cr3, gfn;
+
+    nested_cr3 = nhvm_vcpu_p2m_base(v);
+
+    pfec = PFEC_user_mode | PFEC_page_present;
+    if ( access_w )
+        pfec |= PFEC_write_access;
+    if ( access_x )
+        pfec |= PFEC_insn_fetch;
+
+    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
+
+    if ( gfn == INVALID_GFN ) 
+        return NESTEDHVM_PAGEFAULT_INJECT;
+
+    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
+    return NESTEDHVM_PAGEFAULT_DONE;
+}
+
+
 enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2c8504a..acd2d49 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2008,6 +2008,7 @@ static struct hvm_function_table __read_mostly svm_function_table = {
     .nhvm_vmcx_guest_intercepts_trap = nsvm_vmcb_guest_intercepts_trap,
     .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
     .nhvm_intr_blocked = nsvm_intr_blocked,
+    .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 };
 
 void svm_vmexit_handler(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 98309da..4abfa90 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1511,7 +1511,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_intr_blocked    = nvmx_intr_blocked,
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap,
-    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled
+    .virtual_intr_delivery_enabled = vmx_virtual_intr_delivery_enabled,
+    .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 6999c25..53f6a4d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1479,6 +1479,19 @@ int nvmx_msr_write_intercept(unsigned int msr, u64 msr_content)
     return 1;
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    /*TODO:*/
+    return 0;
+}
+
 void nvmx_idtv_handling(void)
 {
     struct vcpu *v = current;
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index f9a5edc..8787c91 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -136,6 +136,22 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
     }
 }
 
+/* This function uses L2_gpa to walk the P2M page table in L1. If the 
+ * walk is successful, the translated value is returned in
+ * L1_gpa. The result value tells what to do next.
+ */
+static int
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
+{
+    ASSERT(hvm_funcs.nhvm_hap_walk_L1_p2m);
+
+    return hvm_funcs.nhvm_hap_walk_L1_p2m(v, L2_gpa, L1_gpa, page_order,
+        access_r, access_w, access_x);
+}
+
+
 /* This function uses L1_gpa to walk the P2M table in L0 hypervisor. If the
  * walk is successful, the translated value is returned in L0_gpa. The return 
  * value tells the upper level what to do.
@@ -175,36 +191,6 @@ out:
     return rc;
 }
 
-/* This function uses L2_gpa to walk the P2M page table in L1. If the 
- * walk is successful, the translated value is returned in
- * L1_gpa. The result value tells what to do next.
- */
-static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
-                      unsigned int *page_order,
-                      bool_t access_r, bool_t access_w, bool_t access_x)
-{
-    uint32_t pfec;
-    unsigned long nested_cr3, gfn;
-    
-    nested_cr3 = nhvm_vcpu_p2m_base(v);
-
-    pfec = PFEC_user_mode | PFEC_page_present;
-    if (access_w)
-        pfec |= PFEC_write_access;
-    if (access_x)
-        pfec |= PFEC_insn_fetch;
-
-    /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
-
-    if ( gfn == INVALID_GFN ) 
-        return NESTEDHVM_PAGEFAULT_INJECT;
-
-    *L1_gpa = (gfn << PAGE_SHIFT) + (L2_gpa & ~PAGE_MASK);
-    return NESTEDHVM_PAGEFAULT_DONE;
-}
-
 /*
  * The following function, nestedhap_page_fault(), is for steps (3)--(10).
  *
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d3535b6..80f07e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -183,6 +183,11 @@ struct hvm_function_table {
     /* Virtual interrupt delivery */
     void (*update_eoi_exit_bitmap)(struct vcpu *v, u8 vector, u8 trig);
     int (*virtual_intr_delivery_enabled)(void);
+
+    /*Walk nested p2m  */
+    int (*nhvm_hap_walk_L1_p2m)(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index fa83023..0c90f30 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -133,6 +133,9 @@ int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
 bool_t nestedsvm_gif_isset(struct vcpu *v);
+int nsvm_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 
 #define NSVM_INTR_NOTHANDLED     3
 #define NSVM_INTR_NOTINTERCEPTED 2
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d97011d..422f006 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -108,6 +108,11 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
+
+int
+nvmx_hap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x);
 /*
  * Virtual VMCS layout
  *
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 24 14:39:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Dec 2012 14:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tn9Bi-0003hc-Jd; Mon, 24 Dec 2012 14:39:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1Tn9Bh-0003hX-3n
	for xen-devel@lists.xen.org; Mon, 24 Dec 2012 14:39:29 +0000
Received: from [193.109.254.147:7012] by server-16.bemta-14.messagelabs.com id
	C5/2B-18932-02968D05; Mon, 24 Dec 2012 14:39:28 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1356359966!2414516!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTM1MDU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31539 invoked from network); 24 Dec 2012 14:39:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Dec 2012 14:39:27 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBOEdEH5006348
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Dec 2012 14:39:16 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBOEdD4i004814
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 24 Dec 2012 14:39:14 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBOEdDLG017958; Mon, 24 Dec 2012 08:39:13 -0600
MIME-Version: 1.0
Message-ID: <123f5797-0c7c-4270-aea3-6682f152772c@default>
Date: Mon, 24 Dec 2012 06:39:13 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <carsten@schiers.de>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org, james-xen@dingwall.me.uk, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] kernel log flooded with: xen_balloon:
 reserve_additional_memory: add_memory() failed: -17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

> I had the same messages flooding logs, but it was in DomUs. It came together with e.g.:
>
>     System RAM resource [mem 0x20000000-0x3fffffff] cannot be added
>
> I am not 100% sure, but it was only for DomUs with PCI and PCIe devices passed-through.
>
> Xen 4.2.1, Dom0&DomU Kernel 3.7.1. Xen commandline with dom0_mem=2G and dom0_mem=2G,max:2G.
> Machine is an Intel 64 Bit 16 GB installation.
>
> Difficult to check deeper, as it made me so nervous that I reverted to Xen 4.1, but I could
> certainly re-install it after Xmas, if it is helping...

Thanks for the update. I feel that it could be linked with the
recent balloon driver updates. I will take a look at that
bug in the new year.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 25 06:45:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Dec 2012 06:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnOG4-0007Vq-3K; Tue, 25 Dec 2012 06:45:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1TnOG2-0007Vl-S3
	for xen-devel@lists.xen.org; Tue, 25 Dec 2012 06:44:59 +0000
Received: from [193.109.254.147:8690] by server-14.bemta-14.messagelabs.com id
	D7/42-10022-A6B49D05; Tue, 25 Dec 2012 06:44:58 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-7.tower-27.messagelabs.com!1356417896!855325!1
X-Originating-IP: [209.85.216.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_TEST_2,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14904 invoked from network); 25 Dec 2012 06:44:57 -0000
Received: from mail-qc0-f169.google.com (HELO mail-qc0-f169.google.com)
	(209.85.216.169)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Dec 2012 06:44:57 -0000
Received: by mail-qc0-f169.google.com with SMTP id t2so4034710qcq.28
	for <xen-devel@lists.xen.org>; Mon, 24 Dec 2012 22:44:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:from:date:x-google-sender-auth
	:message-id:subject:to:content-type;
	bh=9RyGSeuDhChiNIoEar9pKTT0D2gjDObztv+ozB0euNs=;
	b=LnonhCnxJP7JBfmwdOqgXhqvkuZCXZUIhcuZe1Ukd1CWCjGV5Vrd9yo5Dhwr/IE6Pv
	xNJ5jRBsPKZ7hPNIe9XIDY4MpmPVeoULyEzCaQXri9x8KID+Ow6xlaBYfumFFCZAkpFm
	lZDK//emjoIlK6VOK9Iqs23GBhmYAINP/4SxQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:from:date:x-google-sender-auth
	:message-id:subject:to:content-type:x-gm-message-state;
	bh=9RyGSeuDhChiNIoEar9pKTT0D2gjDObztv+ozB0euNs=;
	b=bH/7vCxyTkrSvvEp584lBvjmxnJvLQ44nai7w/PKrxJ4tViHl0CyRimb/XUEEc/Man
	pziicEagIgMi7rtZ+tTp4A5jhb6KHmO1z7l26jHGFSd6+F43mt+bPsJ/xDq7gcKm91qu
	sXrl9ub+viykNmIKeuIvBnno0YpAVxnLhowxi2x3ODrTlffWsUXFpdf3qNgaQYN900N6
	DjaiNe6ZhLiIRx+nS51tsNa0Q7n60Wx7hGT+iUItA9Fo+T+MpNSQYGNrMNA366ctmYmC
	jMwanPDZuhy7xbMDxlA+VBPgW96xndj+lPjUjllTVx3CzrMRhIuGJ7yw/nZvRL9voIL5
	7tpA==
Received: by 10.49.96.1 with SMTP id do1mr13302472qeb.26.1356417895881; Mon,
	24 Dec 2012 22:44:55 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.96.200 with HTTP; Mon, 24 Dec 2012 22:44:40 -0800 (PST)
X-Originating-IP: [85.143.161.18]
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Tue, 25 Dec 2012 10:44:40 +0400
X-Google-Sender-Auth: 2yIox5cSBuoalvJYRudsOPSxqPM
Message-ID: <CACaajQt5FEMVdX5j7ASmDrJXJzespuAVzXpGsxgLONxyXmD6Zg@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Gm-Message-State: ALoCoQleCXkMr2POQ26QjmJs9XgPD02w6BNIZ4BFaQapIoZXSNS150dindgiy6tP4Nv73+h+azMj
Subject: [Xen-devel] blktap2 and blktap2-new
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello. I found that the openSUSE kernel-xen package has two blktap2 modules:
one named blktap2 and the other blktap2-new (the latter created by the
SUSE/openSUSE patches). What is the difference, and why are two modules needed?

xen10:~ # modinfo xen-blktap
filename:       /lib/modules/3.6.9-1-xen/kernel/drivers/xen/blktap2-new/xen-blktap.ko
alias:          xen-backend:tap2
alias:          devname:xen/blktap-2/control
license:        Dual BSD/GPL
srcversion:     A4659328DBBF5A422315CA5
depends:
intree:         Y
vermagic:       3.6.9-1-xen SMP mod_unload modversions Xen
xen10:~ # modinfo blktap2
filename:       /lib/modules/3.6.9-1-xen/kernel/drivers/xen/blktap2/blktap2.ko
alias:          xen-backend:tap2
alias:          devname:xen/blktap-2/control
license:        Dual BSD/GPL
srcversion:     8C21A63E6BCEEFFAD2B0DDA
depends:        blkback-pagemap
intree:         Y
vermagic:       3.6.9-1-xen SMP mod_unload modversions Xen
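Side note: both listings above export identical `xen-backend:tap2` and
`devname:xen/blktap-2/control` aliases, which is why only one of the two
modules can own the blktap2 control node at a time. A quick way to confirm
the overlap from the captured output (a Python sketch over the text above;
`parse_modinfo` is an illustrative helper for this message, not a real tool):

```python
def parse_modinfo(text):
    """Fold modinfo-style "key: value" lines into a dict of lists
    (a key such as `alias` may repeat)."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue
        fields.setdefault(key.strip(), []).append(value.strip())
    return fields

# The alias lines from the two modinfo listings above.
xen_blktap = parse_modinfo(
    "alias:          xen-backend:tap2\n"
    "alias:          devname:xen/blktap-2/control\n"
    "depends:\n"
)
blktap2 = parse_modinfo(
    "alias:          xen-backend:tap2\n"
    "alias:          devname:xen/blktap-2/control\n"
    "depends:        blkback-pagemap\n"
)

# Both modules advertise exactly the same aliases, so they compete for the
# same backend name and device node.
shared = set(xen_blktap["alias"]) & set(blktap2["alias"])
print(sorted(shared))
```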


--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Dec 25 18:57:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Dec 2012 18:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnZfo-0003Ut-RQ; Tue, 25 Dec 2012 18:56:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefan.kuhne@gmx.net>) id 1TnZfn-0003Uo-DJ
	for xen-devel@lists.xen.org; Tue, 25 Dec 2012 18:56:19 +0000
Received: from [193.109.254.147:11141] by server-7.bemta-14.messagelabs.com id
	82/1C-08102-2D6F9D05; Tue, 25 Dec 2012 18:56:18 +0000
X-Env-Sender: stefan.kuhne@gmx.net
X-Msg-Ref: server-4.tower-27.messagelabs.com!1356461775!8983250!1
X-Originating-IP: [212.227.15.19]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEyLjIyNy4xNS4xOSA9PiAxNDc5Mw==\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14508 invoked from network); 25 Dec 2012 18:56:16 -0000
Received: from mout.gmx.net (HELO mout.gmx.net) (212.227.15.19)
	by server-4.tower-27.messagelabs.com with SMTP;
	25 Dec 2012 18:56:16 -0000
Received: from mailout-de.gmx.net ([10.1.76.20]) by mrigmx.server.lan
	(mrigmx002) with ESMTP (Nemesis) id 0LrY9r-1T3xyM1Z9x-013OR3 for
	<xen-devel@lists.xen.org>; Tue, 25 Dec 2012 19:56:15 +0100
Received: (qmail invoked by alias); 25 Dec 2012 18:56:15 -0000
Received: from xdsl-81-173-176-208.netcologne.de (EHLO Earth.access.denied)
	[81.173.176.208]
	by mail.gmx.net (mp020) with SMTP; 25 Dec 2012 19:56:15 +0100
X-Authenticated: #6997022
X-Provags-ID: V01U2FsdGVkX19y4bIROqxy5VDrAPzdXhGNutg4K3JaMHH2rlvC7B
	GuksqDCpiR0kbI
Received: from blackbox.access.denied ([192.168.200.212])
	by Earth.access.denied with esmtpa (Exim 4.77)
	(envelope-from <bloebl@access.denied>) id 1TnZfg-0001Ie-Ih
	for xen-devel@lists.xen.org; Tue, 25 Dec 2012 19:56:12 +0100
Message-ID: <50D9F6A9.8030008@access.denied>
Date: Tue, 25 Dec 2012 19:55:37 +0100
From: "Stefan Kuhne" <stefan.kuhne@gmx.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:16.0) Gecko/20121026 Thunderbird/16.0.2
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Enigmail-Version: 1.4.6
X-Y-GMX-Trusted: 0
Subject: [Xen-devel] Using collectd: CPUFreq in dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2282008641029487439=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--===============2282008641029487439==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="------------enigD849841BB43F4B1132E72E12"

This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
--------------enigD849841BB43F4B1132E72E12
Content-Type: text/plain; charset=ISO-8859-15
Content-Transfer-Encoding: quoted-printable

Hello,

I've tried to get the CPUFreq plugin of collectd running in dom0,
but I noticed that it isn't so easy to get a file into sysfs in the
right place.

I know that the value has to come from the hypervisor, and I've seen the
code in xenpm.

The only value I need is
"/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", the file that
cpufreq.c would normally create.
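For reference, collectd's cpufreq plugin does little more than read that file
and parse the single integer (the frequency in kHz) it contains, so once the
file exists in dom0 the plugin should need no changes. A minimal sketch of
the consumer side, assuming the path quoted above (`read_cur_freq_hz` is an
illustrative helper, not collectd code):

```python
CPU0_FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

def parse_khz(text):
    """scaling_cur_freq holds a single integer: the current frequency in kHz."""
    return int(text.strip())

def read_cur_freq_hz(path=CPU0_FREQ):
    """Read the sysfs file and convert kHz to Hz, as a collectd-style
    consumer would."""
    with open(path) as f:
        return parse_khz(f.read()) * 1000

# Example with a value as the kernel would expose it:
print(parse_khz("2400000\n"))  # 2400000 kHz, i.e. 2.4 GHz
```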

Can anyone help me by giving me some hints?
I've read many websites and papers and coded many things.
I'd be glad to get the sysfs file working.

I've had a lot of trouble with "sysfs_create...", but "kobject_add" works fine.

Regards,
Stefan Kuhne


--------------enigD849841BB43F4B1132E72E12
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.17 (MingW32)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQEcBAEBAgAGBQJQ2fatAAoJEFLNPgL3IBVXiK0H/02nuW7aT9mh0aiwLU/Ur5sg
I6xrwfuDpTmSlCpNdfpN9rMma8XRFXHOuQBjkTfSADn8x2yzTWFlw6H8/mlZruas
dlfxUsaA7Zp/No0XX9YLzKkPF5d9F1QgJbUMBdZ/2ctAeYE7SvR5LOW61LIXcLEc
TeP8FymLmigL9OBnjueKLXcQ3ktud5jQTr17bXXkk1p+S/joexyiaTJelcmVrjGw
JHxB9RUfKpXTxp84ueI4IMl+dW5J/jAPf+iPDeuch+G6ixqG6rhdpkHqx9rDOkMQ
bqdijaAjWRDRhfyig1gzMNWC6kbZKK1YJ/a+7SGrdV9XWF1VlP+hSwgMH8cjW4k=
=YrEE
-----END PGP SIGNATURE-----

--------------enigD849841BB43F4B1132E72E12--


--===============2282008641029487439==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2282008641029487439==--


From xen-devel-bounces@lists.xen.org Tue Dec 25 19:50:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Dec 2012 19:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnaVn-0003r5-79; Tue, 25 Dec 2012 19:50:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bobooscar@gmail.com>) id 1TnVsE-00022h-P1
	for xen-devel@lists.xen.org; Tue, 25 Dec 2012 14:52:54 +0000
Received: from [85.158.138.51:50965] by server-16.bemta-3.messagelabs.com id
	39/82-27634-1CDB9D05; Tue, 25 Dec 2012 14:52:49 +0000
X-Env-Sender: bobooscar@gmail.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1356447169!24058532!1
X-Originating-IP: [74.125.82.175]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25975 invoked from network); 25 Dec 2012 14:52:49 -0000
Received: from mail-we0-f175.google.com (HELO mail-we0-f175.google.com)
	(74.125.82.175)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Dec 2012 14:52:49 -0000
Received: by mail-we0-f175.google.com with SMTP id z53so3766927wey.34
	for <xen-devel@lists.xen.org>; Tue, 25 Dec 2012 06:52:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=LIaUyanzL25F3DgW3hglWqU4/r5idB8xLmEjVefGV1w=;
	b=UrotvCAPAHKxKr4Z+KweWZ8eLIhK4HxzRVl2IC5iAYqywKOiOBqoHa3GO2gJ1zXOTh
	ziQPqG/jMsuhoAk9dthPfbbvqosLnzNhVkmMmdR87T15nhzpcm23oYOaBuKAAnHO41UZ
	1K0nhErHH2vQ7tnF7kQKZrswxHgsrXwla7pTdw+mRt9Kvm/QNvHmbl9TIK1XABjoX9d0
	66+ltgQo2qXSrkbKXT9Sww7S5v5Pw2KtPlTda/OHjUOtFBIBzdkLKZ0VUX+4iupjoU9Z
	L8a7Tqlb61indk5RSALCl+Nwi6LzHlOUO6D87m+RtZqMQ0o0ymBaTNvWQ4xAL+8vnDNT
	BswA==
MIME-Version: 1.0
Received: by 10.180.24.198 with SMTP id w6mr31245326wif.27.1356447168988; Tue,
	25 Dec 2012 06:52:48 -0800 (PST)
Received: by 10.216.74.135 with HTTP; Tue, 25 Dec 2012 06:52:48 -0800 (PST)
Date: Tue, 25 Dec 2012 22:52:48 +0800
Message-ID: <CAJuLoV+JajbArZ90xehN39wOLqxBPhxCk-ernnG4SHPACv3VAA@mail.gmail.com>
From: Oscar Ben <bobooscar@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Tue, 25 Dec 2012 19:50:02 +0000
Subject: [Xen-devel] It's said 'when the guest has PCI passthrough devices
 in use,
 operations like migration/save/restore are not possible.' where are the
 corresponding codes in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4691430272319021973=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4691430272319021973==
Content-Type: multipart/alternative; boundary=f46d04440346aa053c04d1ae779d

--f46d04440346aa053c04d1ae779d
Content-Type: text/plain; charset=ISO-8859-1

It's said that 'when the guest has PCI passthrough devices in use, operations
like migration/save/restore are not possible.' Where is the corresponding
code in Xen?

I'm wondering how xend/xenlight tells whether a guest has a PCI passthrough
device in use or not.
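On the toolstack side the check is conceptually just "is the guest's list of
assigned PCI devices non-empty". The sketch below illustrates that logic with
hypothetical names; it is not the actual xend/libxl code, which is where the
real checks live:

```python
def migration_allowed(domain_config):
    """A guest with PCI passthrough devices in use cannot be migrated,
    saved, or restored; everything else can."""
    # `domain_config` is a hypothetical dict; in the real toolstack the PCI
    # device list comes from the domain's configuration/xenstore state.
    return not domain_config.get("pci", [])

print(migration_allowed({"name": "plain-guest"}))                   # True
print(migration_allowed({"name": "gpu-guest",
                         "pci": ["0000:03:00.0"]}))                 # False
```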

Thank you in advance!

--f46d04440346aa053c04d1ae779d--


--===============4691430272319021973==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4691430272319021973==--


From xen-devel-bounces@lists.xen.org Tue Dec 25 20:10:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Dec 2012 20:10:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnapB-00048N-3T; Tue, 25 Dec 2012 20:10:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <genius.rsd@gmail.com>) id 1TnapA-00048I-14
	for xen-devel@lists.xen.org; Tue, 25 Dec 2012 20:10:04 +0000
Received: from [85.158.139.83:29265] by server-13.bemta-5.messagelabs.com id
	9F/13-10716-B180AD05; Tue, 25 Dec 2012 20:10:03 +0000
X-Env-Sender: genius.rsd@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1356466198!31063305!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: , surbl: (ASYNC_NO)
	c3VyYmxfcmVjaGVja19kZWxheTogMjc2ODY2MCAoYWJhbmRvbmVkOiB
	yb2hpdHNkYW1rb25kd2Fy\nLndvcmRwcmVzcy5jb20p\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12935 invoked from network); 25 Dec 2012 20:09:59 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Dec 2012 20:09:59 -0000
Received: by mail-ie0-f173.google.com with SMTP id e13so9967426iej.32
	for <xen-devel@lists.xen.org>; Tue, 25 Dec 2012 12:09:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=1r8PTrSsXBbKqMeuZu5EXTRFgcQ0cUINItTT6U+a79o=;
	b=P8Pgy9sDvKztj/ED6K+Lz5NskztwFktLpO9dlOovMlKWV9Ra44ZxLkDTG5R5HqGfP3
	RXRENKnkfmjLle2qlBBUlFneYUYEziFVM9fo138QkcDJ96LWPWY12nxlPMP/UCrdalvc
	Aeh7uYIB8fo2lAHiRimKKlisKqmDpgClV5M7PVsH9YWhxh+tndR93OGyWT/Ap6XwDTx4
	//cH+SBxYb5tc8J/heILzLzFfATQMkXfbKuotsLONt+LUxxACuA+CoURYbxjjzHu2SFf
	ZzOqrNeWK8fiy9ixi4WN6/kb0sWbEKOaTwl7JexT0Psk3okeIfRYkbjZD27bxYP8vG5h
	xH8g==
MIME-Version: 1.0
Received: by 10.50.89.163 with SMTP id bp3mr17965173igb.89.1356466198090; Tue,
	25 Dec 2012 12:09:58 -0800 (PST)
Received: by 10.64.68.47 with HTTP; Tue, 25 Dec 2012 12:09:58 -0800 (PST)
Date: Wed, 26 Dec 2012 01:39:58 +0530
Message-ID: <CAHEEu84ykxG9tv1rU2PS=u=sLD-h5a4xkNQo4_PS_0exmdkfZQ@mail.gmail.com>
From: Rohit Damkondwar <genius.rsd@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Error : libxenlight state driver is not active
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0114135982745786403=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0114135982745786403==
Content-Type: multipart/alternative; boundary=e89a8f3bafa3e3138f04d1b2e5d1

--e89a8f3bafa3e3138f04d1b2e5d1
Content-Type: text/plain; charset=ISO-8859-1

Hi all. I recently compiled Xen 4.1.3 on Fedora 17 (64-bit). The Xen entry
appears in GRUB. Despite booting Fedora with the Xen hypervisor entry,
when I start virt-manager I get an error:
" Internal Error : libxenlight state driver is not active "

Can anyone help me with this?

-- 
Rohit S Damkondwar
B.Tech Computer Engineering
CoEP
MyBlog <http://www.rohitsdamkondwar.wordpress.com>

--e89a8f3bafa3e3138f04d1b2e5d1
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div>Hi all. I recently compiled xen 4.1.3 on Fedora =
17(64 bit). The xen entry appears in the grub. Inspite of booting Fedora, w=
ith xen hypervisor entry , when I start virt-manager, I get an error <br></=
div>
&quot; Internal Error : libxenlight state driver is not active &quot;<br><b=
r></div>Can anyone help me on this? <br clear=3D"all"><div><div><div><br>--=
 <br>Rohit S Damkondwar<br>B.Tech Computer Engineering<br>CoEP<br><a href=
=3D"http://www.rohitsdamkondwar.wordpress.com" target=3D"_blank">MyBlog</a>=
<br>

</div></div></div></div>

--e89a8f3bafa3e3138f04d1b2e5d1--


--===============0114135982745786403==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0114135982745786403==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 04:00:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 04:00:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tni9q-0003D1-LQ; Wed, 26 Dec 2012 03:59:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>) id 1Tni9n-0003Cw-BW
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 03:59:52 +0000
Received: from [85.158.138.51:36055] by server-7.bemta-3.messagelabs.com id
	E4/FD-23008-6367AD05; Wed, 26 Dec 2012 03:59:50 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1356494386!27794793!1
X-Originating-IP: [220.181.15.8]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjggPT4gOTY2OA==\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjggPT4gOTY2OA==\n,HTML_30_40,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2274 invoked from network); 26 Dec 2012 03:59:48 -0000
Received: from m15-8.126.com (HELO m15-8.126.com) (220.181.15.8)
	by server-9.tower-174.messagelabs.com with SMTP;
	26 Dec 2012 03:59:48 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=EOM09bL3Bt3bJkUO+JSK7l7KMl9UNKfq/Noo
	a4n0nq4=; b=gHrpprRDHgLFZVBxWpFcMZ36QPm/SOr1l9bBbmM7etFfZyL6tRJX
	H35A9MnxhwlpQ1r7ZlrCmRTu0jd4wDontnu64eMeTc7o2NQAjDXLZ4NODKxrAqcf
	srj8slSn3X3qROaN5ePazxlUhlLM5VdxOM+g9RQLQIVhebAslLlBLzo=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr8
	(Coremail) ; Wed, 26 Dec 2012 11:59:44 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Wed, 26 Dec 2012 11:59:44 +0800 (CST)
From: hxkhust <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121219(21170.5156.5150) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: /04WB2Zvb3Rlcl9odG09NTk3Ojgx
MIME-Version: 1.0
Message-ID: <538356a3.19b15.13bd55dadba.Coremail.hxkhust@126.com>
X-CM-TRANSID: CMqowGDpkEIxdtpQiHseAA--.20980W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbi4wCRBU3Llq4ZqgAAs6
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] how to configure a vm so as to use
 .../tools/blktap/drivers/block-qcow2.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2788478931674788683=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2788478931674788683==
Content-Type: multipart/alternative; 
	boundary="----=_Part_401398_1145029613.1356494384570"

------=_Part_401398_1145029613.1356494384570
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hello, guys,

In order to use ...tools/blktap/drivers/block-qcow2.c I have tried many ways to set up a virtual machine, including PV and HVM, but I always failed. When an HVM guest is configured with "tap:qcow2:...", qemu-dm is running and handles the I/O instead. But how can I get block-qcow2.c in blktap running to handle reads and writes of the image file? I would appreciate help with how such a VM should be created, along with an example configuration file.
Thank you.




------=_Part_401398_1145029613.1356494384570
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: 7bit

<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial">hello,guys,<div><br></div><div>In order to use ...tools/blktap/drivers/block-qcow2.c &nbsp;I have tried many ways to set up a virtual machine including pvm/hvm, but I always failed.Under the situation that a hvm is configure with "tap:qcow2:...", qemu-dm is running and support the I/O process.But how to let block-qcow2.c in blktap running for image file write and read ?I need your help about the way on which a vm is created and a configure file example .</div><div>Thank you.</div><div><br></div><div><br></div></div><br><br><span title="neteasefooter"><span id="netease_mail_footer"></span></span>
------=_Part_401398_1145029613.1356494384570--



--===============2788478931674788683==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2788478931674788683==--



From xen-devel-bounces@lists.xen.org Wed Dec 26 09:54:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 09:54:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tnng9-0006QB-MM; Wed, 26 Dec 2012 09:53:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Tnng8-0006Q6-2b
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 09:53:36 +0000
Received: from [85.158.139.211:51377] by server-1.bemta-5.messagelabs.com id
	67/90-12813-E19CAD05; Wed, 26 Dec 2012 09:53:34 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356515613!19431985!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17084 invoked from network); 26 Dec 2012 09:53:33 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 09:53:33 -0000
Received: by mail-ea0-f176.google.com with SMTP id d13so3492464eaa.35
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 01:53:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=SrF3R5Er/54EaAfN1BBeuz0mvFvPyvbulslfE8vhGh0=;
	b=c+VRRb4YW3CIKxe2DPLnBlSo46jlwmGLDxZE9sNnLMJI7cSYOjeiRfoiu4wRwRwskT
	l/Y96jSmKKr/62TbSVWRm8bgn7lxV6q/FNB1TRvY4zyZpdyiQxAsjQLemTA6aY37E615
	FK7/ycG/jO7CxFn7+VYaejZXdC1idmxxsmyEtjsKQtIPLLJXfj7ovYAf/v+paGq1a/qy
	Y+0QayrQXI0KvxE58PIsoT3jXFf/fwqVxh9wQSFB/FnWA/8q3XqLtZlUfFsqgKgW09hZ
	MfH25tB6bg19nfQUHDWB8WHs2lUHZBRZULllVfsMBU/tqo9i/zqLWX81u/AhAdV4fH7j
	WP0A==
MIME-Version: 1.0
Received: by 10.14.214.132 with SMTP id c4mr68587126eep.18.1356515613252; Wed,
	26 Dec 2012 01:53:33 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 01:53:33 -0800 (PST)
In-Reply-To: <1344510017.32142.125.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
	<CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
	<1344508123.32142.118.camel@zakaz.uk.xensource.com>
	<1344508288.32142.119.camel@zakaz.uk.xensource.com>
	<CA+ePHTBkBQuhX4C9VZ2a_ARDkY=fvF-=UEBjkOt5ThFZCwa-+A@mail.gmail.com>
	<1344510017.32142.125.camel@zakaz.uk.xensource.com>
Date: Wed, 26 Dec 2012 17:53:33 +0800
Message-ID: <CA+ePHTDd=GKbOPxZmN3yhxpTw86GakPd63u2TEGr0w8yNpVZjg@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5433324095301067868=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5433324095301067868==
Content-Type: multipart/alternative; boundary=047d7b621df8429e4104d1be6763

--047d7b621df8429e4104d1be6763
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, Aug 9, 2012 at 7:00 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 11:49 +0100, 马磊 wrote:
>
> >         By the way, which driver uses the socket file as backend?...
>
> You mean who uses the unix domain socket interface to xenstored?
>
> libxenstore can optionally use this if it happens to be running in the
> same domain as the daemon. Otherwise it will use the /proc/xen/xenbus
> method.
>
> Please, try (harder) to use grep, google etc to find the answers to
> these things, you are asking a lot of questions which you really ought
> to be able to answer by studying the code a bit.
>
> Ian.
>
>
I got the Xen source code from http://www.xen.org/download/index_4.1.2.html .
When using `xl restore`, xc_evtchn_alloc_unbound raises this error:
xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 = No such
process): Internal error.
What does this mean, and which process does "No such process" refer to?
Looking forward to your reply.
Thanks.

--047d7b621df8429e4104d1be6763
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Thu, Aug 9, 2012 at 7:00 PM, Ian Camp=
bell <span dir=3D"ltr">&lt;<a href=3D"mailto:Ian.Campbell@citrix.com" targe=
t=3D"_blank">Ian.Campbell@citrix.com</a>&gt;</span> wrote:<br><blockquote c=
lass=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;=
padding-left:1ex">
<div class=3D"im">On Thu, 2012-08-09 at 11:49 +0100, =E9=A9=AC=E7=A3=8A wro=
te:<br>
<br>
&gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 By the way, which driver use the socket fi=
le as =C2=A0backend ?...<br>
<br>
</div>You mean who uses the unix domain socket interface to xenstored?<br>
<br>
libxenstore can optionally use this if it happens to be running in the<br>
same domain as the daemon. Otherwise it will use the /proc/xen/xenbus<br>
method.<br>
<br>
Please, try (harder) to use grep, google etc to find the answers to<br>
these things, you are asking a lot of questions which you really ought<br>
to be able to answer by studying the code a bit.<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
Ian.<br>
<br>
</font></span></blockquote></div><div><br></div>I got xen source code from=
=C2=A0
<a href=3D"http://www.xen.org/download/index_4.1.2.html">http://www.xen.org=
/download/index_4.1.2.html</a>=C2=A0.<br><div>when using `xl restore`=EF=BC=
=8Cxc_evtchn_alloc_unbound will raise this error:<font color=3D"#ff6666">=
=C2=A0xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 =
=3D No such process): Internal error.</font></div>
<div>what does this mean and what does such process refer to?</div><div>Loo=
king forward to your reply.</div><div>Thanks.</div>

--047d7b621df8429e4104d1be6763--


--===============5433324095301067868==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5433324095301067868==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 10:03:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 10:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnnpX-0006fT-P0; Wed, 26 Dec 2012 10:03:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1TnnpX-0006fO-0V
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 10:03:19 +0000
Received: from [85.158.139.83:38648] by server-11.bemta-5.messagelabs.com id
	EC/5C-31624-66BCAD05; Wed, 26 Dec 2012 10:03:18 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1356516197!30783305!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=1.7 required=7.0 tests=HTML_MESSAGE,
	HTML_OBFUSCATE_05_10,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3538 invoked from network); 26 Dec 2012 10:03:17 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 10:03:17 -0000
Received: by mail-ee0-f47.google.com with SMTP id e51so4104967eek.20
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 02:03:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=uA/HQIEi0uCyGETyLXXTM4P6uGRBrLJCWMQf65CQPH0=;
	b=LBteRP69shzPt8EdRClHHRKzwuaRlr48U+XEuaYhngYOvKl5DhwDAbZ78zgSNVBGY1
	1bqmfduGHqtUGWvF3CtMjhsPYz3n1UkLu59gQ0WhSJZJ1hBQT9JjbSg4Y6wrMFTFOlMM
	KfBZzRFt4jj9vJA49mcvxNNllIsOa6U6UePMI9sFiOMY8eMs9PvZPE1cXty+azAur3fw
	32lJ75IHNoKlnWBEOhA3sAQSmR2z8p0H1Hs2CePw3Ne8qYSFZW8mLTFlBF5FZ3pd4LFB
	OkKaCIhVdtiH7zb6sN1pG3DjDLzx7dENPDuKzPsb8+wf5Dy9Ve76tMKS7DjXbeCyRDwQ
	qd1A==
MIME-Version: 1.0
Received: by 10.14.214.132 with SMTP id c4mr68664547eep.18.1356516197389; Wed,
	26 Dec 2012 02:03:17 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 02:03:17 -0800 (PST)
Date: Wed, 26 Dec 2012 18:03:17 +0800
Message-ID: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6982250771898998655=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6982250771898998655==
Content-Type: multipart/alternative; boundary=047d7b621df813d4fe04d1be8a3c

--047d7b621df813d4fe04d1be8a3c
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi,
    I got xen source code from http://www.xen.org/download/index_4.1.2.html .
    When using `xl restore`, xc_evtchn_alloc_unbound raises this error:
xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 = No such
process): Internal error.
    What does this mean, and what does "such process" refer to?
    Looking forward to your reply.
    Thanks.

--047d7b621df813d4fe04d1be8a3c--


--===============6982250771898998655==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6982250771898998655==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 10:18:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 10:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tno49-00077E-SF; Wed, 26 Dec 2012 10:18:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.k.lengyel@gmail.com>) id 1Tnjw7-0004Ua-Sm
	for xen-devel@lists.xen.org; Wed, 26 Dec 2012 05:53:52 +0000
Received: from [85.158.137.99:64139] by server-9.bemta-3.messagelabs.com id
	57/36-11948-AE09AD05; Wed, 26 Dec 2012 05:53:46 +0000
X-Env-Sender: tamas.k.lengyel@gmail.com
X-Msg-Ref: server-15.tower-217.messagelabs.com!1356501224!18537736!1
X-Originating-IP: [209.85.216.47]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3078 invoked from network); 26 Dec 2012 05:53:45 -0000
Received: from mail-qa0-f47.google.com (HELO mail-qa0-f47.google.com)
	(209.85.216.47)
	by server-15.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 05:53:45 -0000
Received: by mail-qa0-f47.google.com with SMTP id a19so8040327qad.6
	for <xen-devel@lists.xen.org>; Tue, 25 Dec 2012 21:53:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=QRmmqVw2JVIVyqh0CILXyU6dlQTzF7/RMl7/w2Cd+Ug=;
	b=uDGX63Run0XKaF6W8mNVJxyv/rxtMxa/97qdbVkZJRjE1hqcxVr09ofgb92vvlQcb/
	jidec/BkCj79INwJv5MkCt1hPKzWq12JkoNKulwVpjv3PGbLmtxD5hfufNixfYOd0gez
	9HzDmuuoHkV5v3MIe0gUrJgBOHhvzzgcpi/0RK1WruKDQ0Mi6kiyX9lBk6C8amjb648w
	Jz4U2DmEWDGb28Cviy454x31GN03Wv2mJfpwqsaiKMmdQMXIUeIAE1JiFjy4/EUZfner
	lYIs3gNjjmCZghTIdq9KBPgjN5Ga30kIVcrY1O37SuHNWjCf/9k8Ds8H9wXdnnWXK1X3
	Bg5g==
MIME-Version: 1.0
Received: by 10.49.127.145 with SMTP id ng17mr14479873qeb.25.1356501223380;
	Tue, 25 Dec 2012 21:53:43 -0800 (PST)
Received: by 10.49.86.34 with HTTP; Tue, 25 Dec 2012 21:53:43 -0800 (PST)
Date: Wed, 26 Dec 2012 00:53:43 -0500
Message-ID: <CABfawhn5BN8RLU1wSxHxX62hB5fUQngXFhEKn9GNHcW7yBs4DA@mail.gmail.com>
From: Tamas Lengyel <tamas.k.lengyel@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 26 Dec 2012 10:18:24 +0000
Subject: [Xen-devel] XSM and privcmd_ioctl_mmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5861047929482942673=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5861047929482942673==
Content-Type: multipart/alternative; boundary=047d7b6dcee08e967604d1bb0d26

--047d7b6dcee08e967604d1bb0d26
Content-Type: text/plain; charset=ISO-8859-1

Hi,
I'm using the v6 XSM patch-set
(http://lists.xen.org/archives/html/xen-devel/2012-11/msg01920.html) to
perform dom0 disaggregation, but I came across two ioctl functions in the
Linux privcmd driver that were returning -EPERM errors in my secondary
control domU. The problem is that the permission checks are not coming from
XSM but from the kernel itself, when XSM should be in charge of access
control.

The two functions in the Linux 3.x kernel are privcmd_ioctl_mmap and
privcmd_ioctl_mmap_batch:

drivers/xen/privcmd.c@199 and @319 in Linux 3.7.0:

        if (!xen_initial_domain())
                return -EPERM;

Are these checks still needed when the XSM patches are applied?

Thanks,
Tamas

--047d7b6dcee08e967604d1bb0d26--


--===============5861047929482942673==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5861047929482942673==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 10:20:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 10:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tno5N-0007D2-Hk; Wed, 26 Dec 2012 10:19:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Tno5L-0007Ct-PD
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 10:19:40 +0000
Received: from [85.158.138.51:37448] by server-11.bemta-3.messagelabs.com id
	27/49-13335-A3FCAD05; Wed, 26 Dec 2012 10:19:38 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1356517177!11717688!1
X-Originating-IP: [209.85.215.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10229 invoked from network); 26 Dec 2012 10:19:37 -0000
Received: from mail-ea0-f172.google.com (HELO mail-ea0-f172.google.com)
	(209.85.215.172)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 10:19:37 -0000
Received: by mail-ea0-f172.google.com with SMTP id a1so3543143eaa.31
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 02:19:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=NzSGW25q5T+Z0+kvtAOpAgMCNHwikFCapR64/7Ge+B8=;
	b=leSkVxENWNVsrdONiSTmU9pVZFiPfcG/SES9l4qT3f4rxHHtd2c25I35wVGDOHoxJ7
	i8zJiXx+FLiw0TTURJjYMryi7edLoUctv3PgPggUKQH5Xn8MzcBfPQAxaQvVYdpxVfTb
	9KITx18udNf7Y6t+Dj/6YS9ogOpnrGwlGtLUKWxF0IDfh671ZWyBiOqMFzDC1vfsq8lH
	0Oa1UCdlFJHC/wLf9fcTugHsT13UeDUTg3xPVc85qB7votmtWAWpAVPCZRlfmcPy0vgs
	pvKsP63QoRqxTYG868shqim2HtdB5ApbAlPL4GtSv1UWFGiZoyBDSa+//WirDOCj5e/s
	JVMg==
MIME-Version: 1.0
Received: by 10.14.209.193 with SMTP id s41mr69276153eeo.9.1356517176819; Wed,
	26 Dec 2012 02:19:36 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 02:19:36 -0800 (PST)
In-Reply-To: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
Date: Wed, 26 Dec 2012 18:19:36 +0800
Message-ID: <CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6988643620029898910=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6988643620029898910==
Content-Type: multipart/alternative; boundary=e89a8f502b9274be7c04d1bec4e6

--e89a8f502b9274be7c04d1bec4e6
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 26, 2012 at 6:03 PM, 马磊 <aware.why@gmail.com> wrote:

> Hi,
>     I got xen source code from
> http://www.xen.org/download/index_4.1.2.html .
>     When using `xl restore`, xc_evtchn_alloc_unbound raises this error: xc:
> error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 = No such
> process): Internal error.
>     What does this mean, and what does "such process" refer to?
>     Looking forward to your reply.
>     Thanks.
>

The error occurs exactly at this point (tools/libxl/libxl_dom.c):

67  int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,
68                   libxl_domain_build_info *info, libxl_domain_build_state *state)
69  {
70      xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus);
71      xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
72      xc_domain_set_memmap_limit(ctx->xch, domid,
73              (info->hvm) ? info->max_memkb :
74              (info->max_memkb + info->u.pv.slack_memkb));
75      xc_domain_set_tsc_info(ctx->xch, domid, info->tsc_mode, 0, 0, 0);
76      if ( info->disable_migrate )
77          xc_domain_disable_migrate(ctx->xch, domid);
78
79      if (info->hvm) {
80          unsigned long shadow;
81          shadow = (info->shadow_memkb + 1023) / 1024;
82          xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &shadow, 0, NULL);
83      }
84
85      state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
86      state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
87      return 0;
88  }

--e89a8f502b9274be7c04d1bec4e6
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: base64

PGJyPjxicj48ZGl2IGNsYXNzPSJnbWFpbF9xdW90ZSI+T24gV2VkLCBEZWMgMjYsIDIwMTIgYXQg
NjowMyBQTSwg6ams56OKIDxzcGFuIGRpcj0ibHRyIj4mbHQ7PGEgaHJlZj0ibWFpbHRvOmF3YXJl
LndoeUBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5hd2FyZS53aHlAZ21haWwuY29tPC9hPiZn
dDs8L3NwYW4+IHdyb3RlOjxicj48YmxvY2txdW90ZSBjbGFzcz0iZ21haWxfcXVvdGUiIHN0eWxl
PSJtYXJnaW46MCAwIDAgLjhleDtib3JkZXItbGVmdDoxcHggI2NjYyBzb2xpZDtwYWRkaW5nLWxl
ZnQ6MWV4Ij4KSGnvvIw8ZGl2PsKgIMKgwqA8c3BhbiBzdHlsZT0iY29sb3I6cmdiKDM0LDM0LDM0
--e89a8f502b9274be7c04d1bec4e6--


--===============6988643620029898910==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6988643620029898910==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 10:20:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 10:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tno5N-0007D2-Hk; Wed, 26 Dec 2012 10:19:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Tno5L-0007Ct-PD
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 10:19:40 +0000
Received: from [85.158.138.51:37448] by server-11.bemta-3.messagelabs.com id
	27/49-13335-A3FCAD05; Wed, 26 Dec 2012 10:19:38 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1356517177!11717688!1
X-Originating-IP: [209.85.215.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10229 invoked from network); 26 Dec 2012 10:19:37 -0000
Received: from mail-ea0-f172.google.com (HELO mail-ea0-f172.google.com)
	(209.85.215.172)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 10:19:37 -0000
Received: by mail-ea0-f172.google.com with SMTP id a1so3543143eaa.31
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 02:19:36 -0800 (PST)
MIME-Version: 1.0
Received: by 10.14.209.193 with SMTP id s41mr69276153eeo.9.1356517176819; Wed,
	26 Dec 2012 02:19:36 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 02:19:36 -0800 (PST)
In-Reply-To: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
Date: Wed, 26 Dec 2012 18:19:36 +0800
Message-ID: <CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6988643620029898910=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6988643620029898910==
Content-Type: multipart/alternative; boundary=e89a8f502b9274be7c04d1bec4e6

--e89a8f502b9274be7c04d1bec4e6
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 26, 2012 at 6:03 PM, 马磊 <aware.why@gmail.com> wrote:

> Hi,
>     I got the Xen source code from
> http://www.xen.org/download/index_4.1.2.html .
>     When using `xl restore`, xc_evtchn_alloc_unbound raises this error:
> xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 = No
> such process): Internal error.
>     What does this mean, and what does "such process" refer to?
>     Looking forward to your reply.
>     Thanks.
>

The error occurs at exactly this point (tools/libxl/libxl_dom.c); the two
xc_evtchn_alloc_unbound calls on lines 85-86 are where it is raised:

 67 int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,
 68                      libxl_domain_build_info *info, libxl_domain_build_state *state)
 69 {
 70     xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus);
 71     xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
 72     xc_domain_set_memmap_limit(ctx->xch, domid,
 73             (info->hvm) ? info->max_memkb :
 74             (info->max_memkb + info->u.pv.slack_memkb));
 75     xc_domain_set_tsc_info(ctx->xch, domid, info->tsc_mode, 0, 0, 0);
 76     if ( info->disable_migrate )
 77         xc_domain_disable_migrate(ctx->xch, domid);
 78
 79     if (info->hvm) {
 80         unsigned long shadow;
 81         shadow = (info->shadow_memkb + 1023) / 1024;
 82         xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &shadow, 0, NULL);
 83     }
 84
 85     state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
 86     state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
 87     return 0;
 88 }

--e89a8f502b9274be7c04d1bec4e6--


--===============6988643620029898910==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--===============6988643620029898910==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 10:49:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 10:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnoXz-0007g7-Hj; Wed, 26 Dec 2012 10:49:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <maik.wessler@yahoo.com>) id 1TnoVc-0007fa-Jk
	for xen-devel@lists.xen.org; Wed, 26 Dec 2012 10:46:49 +0000
Received: from [85.158.138.51:64933] by server-15.bemta-3.messagelabs.com id
	01/FA-07921-795DAD05; Wed, 26 Dec 2012 10:46:47 +0000
X-Env-Sender: maik.wessler@yahoo.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1356518804!30348540!1
X-Originating-IP: [98.139.212.254]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15401 invoked from network); 26 Dec 2012 10:46:45 -0000
Received: from nm15-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm15-vm0.bullet.mail.bf1.yahoo.com) (98.139.212.254)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Dec 2012 10:46:45 -0000
Received: from [98.139.212.151] by nm15.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Dec 2012 10:46:43 -0000
Received: from [98.139.212.242] by tm8.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Dec 2012 10:46:43 -0000
Received: from [127.0.0.1] by omp1051.mail.bf1.yahoo.com with NNFMP;
	26 Dec 2012 10:46:43 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 817457.67037.bm@omp1051.mail.bf1.yahoo.com
Received: (qmail 91983 invoked by uid 60001); 26 Dec 2012 10:46:43 -0000
Received: from [82.115.115.252] by web162702.mail.bf1.yahoo.com via HTTP;
	Wed, 26 Dec 2012 02:46:43 PST
X-Mailer: YahooMailWebService/0.8.129.483
Message-ID: <1356518803.88953.YahooMailNeo@web162702.mail.bf1.yahoo.com>
Date: Wed, 26 Dec 2012 02:46:43 -0800 (PST)
From: Maik Wessler <maik.wessler@yahoo.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 26 Dec 2012 10:49:13 +0000
Subject: [Xen-devel] qemu-system-i386: memory leak?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maik Wessler <maik@mwessler.net>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9131452302534339185=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9131452302534339185==
Content-Type: multipart/alternative; boundary="-532260405-1245608948-1356518803=:88953"

---532260405-1245608948-1356518803=:88953
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Hi all,

I am using xen-4.2-testing.hg on debian 6.0.6 (x86_64) with Kernel 3.4.15 (tmem enabled). The problem is that /usr/lib/xen/bin/qemu-system-i386 uses more and more memory. After one week of uptime (depending on memory) the machine starts to swap...

Details:

root@dmw01:~# cat /etc/grub.d/09_linux_xen |grep mem
	multiboot ${rel_xen_dirname}/${xen_basename} placeholder ${xen_args} dom0_mem=1592M,max:1592M

root@dmw01:~# free
             total       used       free     shared    buffers     cached
Mem:       1523280    1408896     114384          0       9824      17496
-/+ buffers/cache:    1381576     141704
Swap:       505916     134592     371324

ps -e -orss=,args= | sort -b -k1,1n

Start:
28872 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385

End:
243472 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385

root@dmw01:~# ps aux|grep qemu|grep mgtmw01
root      3903  0.0 15.9 423876 243464 ?       Ssl  Dec18   9:39 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385

root@dmw01:~# pmap 3903
3903:   /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385
00007fe942b7e000   4112K rw---    [ anon ]
00007fe942f83000   1028K rw---    [ anon ]
00007fe943085000   1028K rw---    [ anon ]
00007fe943187000   1028K rw---    [ anon ]
00007fe943289000   2056K rw---    [ anon ]
00007fe943495000   1028K rw---    [ anon ]
00007fe9435a1000   1028K rw---    [ anon ]
00007fe9436ad000   1028K rw---    [ anon ]
00007fe9437b8000   5140K rw---    [ anon ]
00007fe943cbd000      4K -----    [ anon ]
00007fe943cbe000   8192K rw---    [ anon ]
00007fe9444be000     20K r-x--  /usr/lib/libXdmcp.so.6.0.0
00007fe9444c3000   2044K -----  /usr/lib/libXdmcp.so.6.0.0
00007fe9446c2000      4K rw---  /usr/lib/libXdmcp.so.6.0.0
00007fe9446c3000      8K r-x--  /usr/lib/libXau.so.6.0.0
00007fe9446c5000   2048K -----  /usr/lib/libXau.so.6.0.0
00007fe9448c5000      4K rw---  /usr/lib/libXau.so.6.0.0
00007fe9448c6000    124K r-x--  /lib/libx86.so.1
00007fe9448e5000   2048K -----  /lib/libx86.so.1
00007fe944ae5000      8K rw---  /lib/libx86.so.1
00007fe944ae7000      4K rw---    [ anon ]
00007fe944ae8000    128K r-x--  /usr/lib/liblzo2.so.2.0.0
00007fe944b08000   2044K -----  /usr/lib/liblzo2.so.2.0.0
00007fe944d07000      4K rw---  /usr/lib/liblzo2.so.2.0.0
00007fe944d08000    132K r-x--  /usr/lib/liblzma.so.2.0.0
00007fe944d29000   2048K -----  /usr/lib/liblzma.so.2.0.0
00007fe944f29000      4K rw---  /usr/lib/liblzma.so.2.0.0
00007fe944f2a000     60K r-x--  /lib/libbz2.so.1.0.4
00007fe944f39000   2044K -----  /lib/libbz2.so.1.0.4
00007fe945138000      8K rw---  /lib/libbz2.so.1.0.4
00007fe94513a000    112K r-x--  /usr/lib/libxcb.so.1.1.0
00007fe945156000   2044K -----  /usr/lib/libxcb.so.1.1.0
00007fe945355000      4K rw---  /usr/lib/libxcb.so.1.1.0
00007fe945356000    308K r-x--  /usr/lib/libvga.so.1.4.3
00007fe9453a3000   2044K -----  /usr/lib/libvga.so.1.4.3
00007fe9455a2000     36K rw---  /usr/lib/libvga.so.1.4.3
00007fe9455ab000     36K rw---    [ anon ]
00007fe9455b4000     88K r-x--  /usr/lib/libdirect-1.2.so.9.0.1
00007fe9455ca000   2044K -----  /usr/lib/libdirect-1.2.so.9.0.1
00007fe9457c9000      8K rw---  /usr/lib/libdirect-1.2.so.9.0.1
00007fe9457cb000     36K r-x--  /usr/lib/libfusion-1.2.so.9.0.1
00007fe9457d4000   2048K -----  /usr/lib/libfusion-1.2.so.9.0.1
00007fe9459d4000      4K rw---  /usr/lib/libfusion-1.2.so.9.0.1
00007fe9459d5000    508K r-x--  /usr/lib/libdirectfb-1.2.so.9.0.1
00007fe945a54000   2044K -----  /usr/lib/libdirectfb-1.2.so.9.0.1
00007fe945c53000     16K rw---  /usr/lib/libdirectfb-1.2.so.9.0.1
00007fe945c57000    888K r-x--  /usr/lib/libasound.so.2.0.0
00007fe945d35000   2044K -----  /usr/lib/libasound.so.2.0.0
00007fe945f34000     32K rw---  /usr/lib/libasound.so.2.0.0
00007fe945f3c000      8K r-x--  /lib/libdl-2.11.3.so
00007fe945f3e000   2048K -----  /lib/libdl-2.11.3.so
00007fe94613e000      4K r----  /lib/libdl-2.11.3.so
00007fe94613f000      4K rw---  /lib/libdl-2.11.3.so
00007fe946140000    192K r-x--  /lib/libpcre.so.3.12.1
00007fe946170000   2044K -----  /lib/libpcre.so.3.12.1
00007fe94636f000      4K rw---  /lib/libpcre.so.3.12.1
00007fe946370000   1380K r-x--  /lib/libc-2.11.3.so
00007fe9464c9000   2044K -----  /lib/libc-2.11.3.so
00007fe9466c8000     16K r----  /lib/libc-2.11.3.so
00007fe9466cc000      4K rw---  /lib/libc-2.11.3.so
00007fe9466cd000     20K rw---    [ anon ]
00007fe9466d2000     92K r-x--  /lib/libpthread-2.11.3.so
00007fe9466e9000   2044K -----  /lib/libpthread-2.11.3.so
00007fe9468e8000      4K r----  /lib/libpthread-2.11.3.so
00007fe9468e9000      4K rw---  /lib/libpthread-2.11.3.so
00007fe9468ea000     16K rw---    [ anon ]
00007fe9468ee000     92K r-x--  /usr/lib/libz.so.1.2.3.4
00007fe946905000   2044K -----  /usr/lib/libz.so.1.2.3.4
00007fe946b04000      4K rw---  /usr/lib/libz.so.1.2.3.4
00007fe946b05000    512K r-x--  /lib/libm-2.11.3.so
00007fe946b85000   2048K -----  /lib/libm-2.11.3.so
00007fe946d85000      4K r----  /lib/libm-2.11.3.so
00007fe946d86000      4K rw---  /lib/libm-2.11.3.so
00007fe946d87000      4K r-x--  /lib/libaio.so.1.0.1
00007fe946d88000   2044K -----  /lib/libaio.so.1.0.1
00007fe946f87000      4K rw---  /lib/libaio.so.1.0.1
00007fe946f88000    160K r-x--  /usr/lib/libxenguest.so.4.2.0
00007fe946fb0000   2048K -----  /usr/lib/libxenguest.so.4.2.0
00007fe9471b0000      8K rw---  /usr/lib/libxenguest.so.4.2.0
00007fe9471b2000    136K r-x--  /usr/lib/libxenctrl.so.4.2.0
00007fe9471d4000   2048K -----  /usr/lib/libxenctrl.so.4.2.0
00007fe9473d4000      4K rw---  /usr/lib/libxenctrl.so.4.2.0
00007fe9473d5000     24K r-x--  /usr/lib/libxenstore.so.3.0.2
00007fe9473db000   2044K -----  /usr/lib/libxenstore.so.3.0.2
00007fe9475da000      4K rw---  /usr/lib/libxenstore.so.3.0.2
00007fe9475db000     12K rw---    [ anon ]
00007fe9475de000   1236K r-x--  /usr/lib/libX11.so.6.3.0
00007fe947713000   2048K -----  /usr/lib/libX11.so.6.3.0
00007fe947913000     24K rw---  /usr/lib/libX11.so.6.3.0
00007fe947919000    432K r-x--  /usr/lib/libSDL-1.2.so.0.11.3
00007fe947985000   2048K -----  /usr/lib/libSDL-1.2.so.0.11.3
00007fe947b85000      8K rw---  /usr/lib/libSDL-1.2.so.0.11.3
00007fe947b87000    304K rw---    [ anon ]
00007fe947bd3000    140K r-x--  /usr/lib/libjpeg.so.62.0.0
00007fe947bf6000   2044K -----  /usr/lib/libjpeg.so.62.0.0
00007fe947df5000      4K rw---  /usr/lib/libjpeg.so.62.0.0
00007fe947df6000    148K r-x--  /lib/libpng12.so.0.44.0
00007fe947e1b000   2048K -----  /lib/libpng12.so.0.44.0
00007fe94801b000      4K rw---  /lib/libpng12.so.0.44.0
00007fe94801c000     16K r-x--  /lib/libuuid.so.1.3.0
00007fe948020000   2044K -----  /lib/libuuid.so.1.3.0
00007fe94821f000      4K rw---  /lib/libuuid.so.1.3.0
00007fe948220000    264K r-x--  /lib/libncurses.so.5.7
00007fe948262000   2044K -----  /lib/libncurses.so.5.7
00007fe948461000     20K rw---  /lib/libncurses.so.5.7
00007fe948466000      8K r-x--  /lib/libutil-2.11.3.so
00007fe948468000   2044K -----  /lib/libutil-2.11.3.so
00007fe948667000      4K r----  /lib/libutil-2.11.3.so
00007fe948668000      4K rw---  /lib/libutil-2.11.3.so
00007fe948669000    876K r-x--  /lib/libglib-2.0.so.0.2400.2
00007fe948744000   2044K -----  /lib/libglib-2.0.so.0.2400.2
00007fe948943000      8K rw---  /lib/libglib-2.0.so.0.2400.2
00007fe948945000      4K rw---    [ anon ]
00007fe948946000     16K r-x--  /usr/lib/libgthread-2.0.so.0.2400.2
00007fe94894a000   2044K -----  /usr/lib/libgthread-2.0.so.0.2400.2
00007fe948b49000      4K rw---  /usr/lib/libgthread-2.0.so.0.2400.2
00007fe948b4a000     28K r-x--  /lib/librt-2.11.3.so
00007fe948b51000   2044K -----  /lib/librt-2.11.3.so
00007fe948d50000      4K r----  /lib/librt-2.11.3.so
00007fe948d51000      4K rw---  /lib/librt-2.11.3.so
00007fe948d52000    120K r-x--  /lib/ld-2.11.3.so
00007fe948de5000   1536K rw---    [ anon ]
00007fe948f65000      4K rw-s-  /dev/xen/gntdev
00007fe948f66000      4K rw-s-  /dev/xen/gntdev
00007fe948f67000      8K rw---    [ anon ]
00007fe948f69000      4K -----    [ anon ]
00007fe948f6a000     20K rw---    [ anon ]
00007fe948f6f000      4K r----  /lib/ld-2.11.3.so
00007fe948f70000      4K rw---  /lib/ld-2.11.3.so
00007fe948f71000      4K rw---    [ anon ]
00007fe948f72000   3020K r-x--  /usr/lib/xen/bin/qemu-system-i386
00007fe949464000    816K r----  /usr/lib/xen/bin/qemu-system-i386
00007fe949530000    176K rw---  /usr/lib/xen/bin/qemu-system-i386
00007fe94955c000   8228K rw---    [ anon ]
00007fe94a5a7000 309936K rw---    [ anon ]
00007fff43e80000    132K rw---    [ stack ]
00007fff43fff000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]
 total           424016K

Can anyone help?

Regards,
   Maik
---532260405-1245608948-1356518803=:88953--


--===============9131452302534339185==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9131452302534339185==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 10:49:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 10:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnoXz-0007g7-Hj; Wed, 26 Dec 2012 10:49:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <maik.wessler@yahoo.com>) id 1TnoVc-0007fa-Jk
	for xen-devel@lists.xen.org; Wed, 26 Dec 2012 10:46:49 +0000
Received: from [85.158.138.51:64933] by server-15.bemta-3.messagelabs.com id
	01/FA-07921-795DAD05; Wed, 26 Dec 2012 10:46:47 +0000
X-Env-Sender: maik.wessler@yahoo.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1356518804!30348540!1
X-Originating-IP: [98.139.212.254]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15401 invoked from network); 26 Dec 2012 10:46:45 -0000
Received: from nm15-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm15-vm0.bullet.mail.bf1.yahoo.com) (98.139.212.254)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Dec 2012 10:46:45 -0000
Received: from [98.139.212.151] by nm15.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Dec 2012 10:46:43 -0000
Received: from [98.139.212.242] by tm8.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Dec 2012 10:46:43 -0000
Received: from [127.0.0.1] by omp1051.mail.bf1.yahoo.com with NNFMP;
	26 Dec 2012 10:46:43 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 817457.67037.bm@omp1051.mail.bf1.yahoo.com
Received: (qmail 91983 invoked by uid 60001); 26 Dec 2012 10:46:43 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1356518803; bh=e5Sc8DvBVRxhZ+Id5E36BYxhGZkT9Kyq+g1eqZv8gXM=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=h/2wqfqDmlkFHuh7es/8RsK2lQVvbob1tmv86wCQwVSEmQ0YcGYJ5iUmFAFCFJXsbBYk5oc9FM2wxeuRJtNmh+02MXH+Ws7X+dcoqH8O2f7FIEf5a/+5t2TIiRiDh7wys0IVOtTSBLV9f79OjFLnj/g1cEBRfBExPp7aKY2xlcI=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=cUnxE3RWejZIKizHrZijsVFhkB1ks9cZZaPPToaCMmIDOWORkbMfC0WeGBncWOQhQijvtwvcBNmuOyxeJEDDTO2TgXN73WZJRAf/LfsZdJ+BCZC0uefkvu2VT1BDpVkttZBd/3ucMcscmZbL9GWYdLodVp0MV8tZmUunvS5KjMg=;
X-YMail-OSG: v3100RoVM1kOKbuuo3VpiPJY3ct_dmu37xeI7pgCy1F4zH0
	cl0yXop2lNEGSJNmpmTewOIWNZ_RTzCla2sPw1wWbLeji9kqMusplZ9j6qSO
	w67BaDShppiwniudY8gCL_cEnsdH2Y5fUi2IY_s.hJlgp9uOft4V_taQYMRB
	HYZXwaxv44GV8JsAFM1qZtIDiHEQbg_PcYvS.q5GzR_N4BikhHKc3xwC8Tap
	8tLe0xoL0HUdnbs0frm1qszej4ooRbPuhzs0PoZFADprfU5GgijqKmj7bI3.
	58PWdbsRpDm9P3SlAmAktTuZFfaLaaZJhYOZhC09G7qycJn.iXGsDAsQjBk5
	eP7eltKQSN7ZIabJttVwXvw5pugAulGn9KSGjSY48i4SNp_JviYpCoLUSas.
	mu_gj1tsQ8FNxglGwr.s5CmmG8NpOCJf.ZuZCVVQkyizxtD7CDM3Vzwvk6tr
	DVyvNdYrQ5W52htodKYIyKpkQomJ242F8ZgpuCEKcv8uF
Received: from [82.115.115.252] by web162702.mail.bf1.yahoo.com via HTTP;
	Wed, 26 Dec 2012 02:46:43 PST
X-Rocket-MIMEInfo: 001.001,
	SGkgYWxsLAoKSSBhbSB1c2luZ8KgeGVuLTQuMi10ZXN0aW5nLmhnIG9uIGRlYmlhbiA2LjAuNiAoeDg2XzY0KSB3aXRoIEtlcm5lbCAzLjQuMTUgKHRtZW0gZW5hYmxlZCkuIFByb2JsZW0gaXMgdGhhdCB0aGXCoC91c3IvbGliL3hlbi9iaW4vcWVtdS1zeXN0ZW0taTM4Ngp1c2UgbW9yZSBhbmQgbW9yZSBtZW1vcnkuIEFmdGVyIG9uZSB3ZWVrIHVwdGltZSAoZGVwZW5kcyBvbiBtZW1vcnkpIHRoZSBtYWNoaW5lIHN0YXJ0cyB0byBzd2FwLi4uCgpEZXRhaWxzOgoKcm9vdEBkbXcwMTp.IyBjYXQgL2V0Yy9ncnUBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.129.483
Message-ID: <1356518803.88953.YahooMailNeo@web162702.mail.bf1.yahoo.com>
Date: Wed, 26 Dec 2012 02:46:43 -0800 (PST)
From: Maik Wessler <maik.wessler@yahoo.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 26 Dec 2012 10:49:13 +0000
Subject: [Xen-devel] qemu-system-i386: memory leak?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maik Wessler <maik@mwessler.net>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9131452302534339185=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9131452302534339185==
Content-Type: multipart/alternative; boundary="-532260405-1245608948-1356518803=:88953"

---532260405-1245608948-1356518803=:88953
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Hi all,

I am using xen-4.2-testing.hg on Debian 6.0.6 (x86_64) with kernel 3.4.15 (tmem enabled). The problem is that /usr/lib/xen/bin/qemu-system-i386 uses more and more memory. After about one week of uptime (depending on memory) the machine starts to swap...

Details:

root@dmw01:~# cat /etc/grub.d/09_linux_xen |grep mem
multiboot${rel_xen_dirname}/${xen_basename} placeholder ${xen_args} dom0_mem=1592M,max:1592M

root@dmw01:~# free
             total       used       free     shared    buffers     cached
Mem:       1523280    1408896     114384          0       9824      17496
-/+ buffers/cache:    1381576     141704
Swap:       505916     134592     371324

ps -e -orss=,args= | sort -b -k1,1n

Start:
28872 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385

End:
243472 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385

root@dmw01:~# ps aux|grep qemu|grep mgtmw01
root       3903  0.0 15.9 423876 243464 ?       Ssl  Dec18   9:39 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385

root@dmw01:~# pmap 3903
3903:   /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M xenpv -m 385
00007fe942b7e000   4112K rw---    [ anon ]
00007fe942f83000   1028K rw---    [ anon ]
00007fe943085000   1028K rw---    [ anon ]
00007fe943187000   1028K rw---    [ anon ]
00007fe943289000   2056K rw---    [ anon ]
00007fe943495000   1028K rw---    [ anon ]
00007fe9435a1000   1028K rw---    [ anon ]
00007fe9436ad000   1028K rw---    [ anon ]
00007fe9437b8000   5140K rw---    [ anon ]
00007fe943cbd000      4K -----    [ anon ]
00007fe943cbe000   8192K rw---    [ anon ]
00007fe9444be000     20K r-x--  /usr/lib/libXdmcp.so.6.0.0
00007fe9444c3000   2044K -----  /usr/lib/libXdmcp.so.6.0.0
00007fe9446c2000      4K rw---  /usr/lib/libXdmcp.so.6.0.0
00007fe9446c3000      8K r-x--  /usr/lib/libXau.so.6.0.0
00007fe9446c5000   2048K -----  /usr/lib/libXau.so.6.0.0
00007fe9448c5000      4K rw---  /usr/lib/libXau.so.6.0.0
00007fe9448c6000    124K r-x--  /lib/libx86.so.1
00007fe9448e5000   2048K -----  /lib/libx86.so.1
00007fe944ae5000      8K rw---  /lib/libx86.so.1
00007fe944ae7000      4K rw---    [ anon ]
00007fe944ae8000    128K r-x--  /usr/lib/liblzo2.so.2.0.0
00007fe944b08000   2044K -----  /usr/lib/liblzo2.so.2.0.0
00007fe944d07000      4K rw---  /usr/lib/liblzo2.so.2.0.0
00007fe944d08000    132K r-x--  /usr/lib/liblzma.so.2.0.0
00007fe944d29000   2048K -----  /usr/lib/liblzma.so.2.0.0
00007fe944f29000      4K rw---  /usr/lib/liblzma.so.2.0.0
00007fe944f2a000     60K r-x--  /lib/libbz2.so.1.0.4
00007fe944f39000   2044K -----  /lib/libbz2.so.1.0.4
00007fe945138000      8K rw---  /lib/libbz2.so.1.0.4
00007fe94513a000    112K r-x--  /usr/lib/libxcb.so.1.1.0
00007fe945156000   2044K -----  /usr/lib/libxcb.so.1.1.0
00007fe945355000      4K rw---  /usr/lib/libxcb.so.1.1.0
00007fe945356000    308K r-x--  /usr/lib/libvga.so.1.4.3
00007fe9453a3000   2044K -----  /usr/lib/libvga.so.1.4.3
00007fe9455a2000     36K rw---  /usr/lib/libvga.so.1.4.3
00007fe9455ab000     36K rw---    [ anon ]
00007fe9455b4000     88K r-x--  /usr/lib/libdirect-1.2.so.9.0.1
00007fe9455ca000   2044K -----  /usr/lib/libdirect-1.2.so.9.0.1
00007fe9457c9000      8K rw---  /usr/lib/libdirect-1.2.so.9.0.1
00007fe9457cb000     36K r-x--  /usr/lib/libfusion-1.2.so.9.0.1
00007fe9457d4000   2048K -----  /usr/lib/libfusion-1.2.so.9.0.1
00007fe9459d4000      4K rw---  /usr/lib/libfusion-1.2.so.9.0.1
00007fe9459d5000    508K r-x--  /usr/lib/libdirectfb-1.2.so.9.0.1
00007fe945a54000   2044K -----  /usr/lib/libdirectfb-1.2.so.9.0.1
00007fe945c53000     16K rw---  /usr/lib/libdirectfb-1.2.so.9.0.1
00007fe945c57000    888K r-x--  /usr/lib/libasound.so.2.0.0
00007fe945d35000   2044K -----  /usr/lib/libasound.so.2.0.0
00007fe945f34000     32K rw---  /usr/lib/libasound.so.2.0.0
00007fe945f3c000      8K r-x--  /lib/libdl-2.11.3.so
00007fe945f3e000   2048K -----  /lib/libdl-2.11.3.so
00007fe94613e000      4K r----  /lib/libdl-2.11.3.so
00007fe94613f000      4K rw---  /lib/libdl-2.11.3.so
00007fe946140000    192K r-x--  /lib/libpcre.so.3.12.1
00007fe946170000   2044K -----  /lib/libpcre.so.3.12.1
00007fe94636f000      4K rw---  /lib/libpcre.so.3.12.1
00007fe946370000   1380K r-x--  /lib/libc-2.11.3.so
00007fe9464c9000   2044K -----  /lib/libc-2.11.3.so
00007fe9466c8000     16K r----  /lib/libc-2.11.3.so
00007fe9466cc000      4K rw---  /lib/libc-2.11.3.so
00007fe9466cd000     20K rw---    [ anon ]
00007fe9466d2000     92K r-x--  /lib/libpthread-2.11.3.so
00007fe9466e9000   2044K -----  /lib/libpthread-2.11.3.so
00007fe9468e8000      4K r----  /lib/libpthread-2.11.3.so
00007fe9468e9000      4K rw---  /lib/libpthread-2.11.3.so
00007fe9468ea000     16K rw---    [ anon ]
00007fe9468ee000     92K r-x--  /usr/lib/libz.so.1.2.3.4
00007fe946905000   2044K -----  /usr/lib/libz.so.1.2.3.4
00007fe946b04000      4K rw---  /usr/lib/libz.so.1.2.3.4
00007fe946b05000    512K r-x--  /lib/libm-2.11.3.so
00007fe946b85000   2048K -----  /lib/libm-2.11.3.so
00007fe946d85000      4K r----  /lib/libm-2.11.3.so
00007fe946d86000      4K rw---  /lib/libm-2.11.3.so
00007fe946d87000      4K r-x--  /lib/libaio.so.1.0.1
00007fe946d88000   2044K -----  /lib/libaio.so.1.0.1
00007fe946f87000      4K rw---  /lib/libaio.so.1.0.1
00007fe946f88000    160K r-x--  /usr/lib/libxenguest.so.4.2.0
00007fe946fb0000   2048K -----  /usr/lib/libxenguest.so.4.2.0
00007fe9471b0000      8K rw---  /usr/lib/libxenguest.so.4.2.0
00007fe9471b2000    136K r-x--  /usr/lib/libxenctrl.so.4.2.0
00007fe9471d4000   2048K -----  /usr/lib/libxenctrl.so.4.2.0
00007fe9473d4000      4K rw---  /usr/lib/libxenctrl.so.4.2.0
00007fe9473d5000     24K r-x--  /usr/lib/libxenstore.so.3.0.2
00007fe9473db000   2044K -----  /usr/lib/libxenstore.so.3.0.2
00007fe9475da000      4K rw---  /usr/lib/libxenstore.so.3.0.2
00007fe9475db000     12K rw---    [ anon ]
00007fe9475de000   1236K r-x--  /usr/lib/libX11.so.6.3.0
00007fe947713000   2048K -----  /usr/lib/libX11.so.6.3.0
00007fe947913000     24K rw---  /usr/lib/libX11.so.6.3.0
00007fe947919000    432K r-x--  /usr/lib/libSDL-1.2.so.0.11.3
00007fe947985000   2048K -----  /usr/lib/libSDL-1.2.so.0.11.3
00007fe947b85000      8K rw---  /usr/lib/libSDL-1.2.so.0.11.3
00007fe947b87000    304K rw---    [ anon ]
00007fe947bd3000    140K r-x--  /usr/lib/libjpeg.so.62.0.0
00007fe947bf6000   2044K -----  /usr/lib/libjpeg.so.62.0.0
00007fe947df5000      4K rw---  /usr/lib/libjpeg.so.62.0.0
00007fe947df6000    148K r-x--  /lib/libpng12.so.0.44.0
00007fe947e1b000   2048K -----  /lib/libpng12.so.0.44.0
00007fe94801b000      4K rw---  /lib/libpng12.so.0.44.0
00007fe94801c000     16K r-x--  /lib/libuuid.so.1.3.0
00007fe948020000   2044K -----  /lib/libuuid.so.1.3.0
00007fe94821f000      4K rw---  /lib/libuuid.so.1.3.0
00007fe948220000    264K r-x--  /lib/libncurses.so.5.7
00007fe948262000   2044K -----  /lib/libncurses.so.5.7
00007fe948461000     20K rw---  /lib/libncurses.so.5.7
00007fe948466000      8K r-x--  /lib/libutil-2.11.3.so
00007fe948468000   2044K -----  /lib/libutil-2.11.3.so
00007fe948667000      4K r----  /lib/libutil-2.11.3.so
00007fe948668000      4K rw---  /lib/libutil-2.11.3.so
00007fe948669000    876K r-x--  /lib/libglib-2.0.so.0.2400.2
00007fe948744000   2044K -----  /lib/libglib-2.0.so.0.2400.2
00007fe948943000      8K rw---  /lib/libglib-2.0.so.0.2400.2
00007fe948945000      4K rw---    [ anon ]
00007fe948946000     16K r-x--  /usr/lib/libgthread-2.0.so.0.2400.2
00007fe94894a000   2044K -----  /usr/lib/libgthread-2.0.so.0.2400.2
00007fe948b49000      4K rw---  /usr/lib/libgthread-2.0.so.0.2400.2
00007fe948b4a000     28K r-x--  /lib/librt-2.11.3.so
00007fe948b51000   2044K -----  /lib/librt-2.11.3.so
00007fe948d50000      4K r----  /lib/librt-2.11.3.so
00007fe948d51000      4K rw---  /lib/librt-2.11.3.so
00007fe948d52000    120K r-x--  /lib/ld-2.11.3.so
00007fe948de5000   1536K rw---    [ anon ]
00007fe948f65000      4K rw-s-  /dev/xen/gntdev
00007fe948f66000      4K rw-s-  /dev/xen/gntdev
00007fe948f67000      8K rw---    [ anon ]
00007fe948f69000      4K -----    [ anon ]
00007fe948f6a000     20K rw---    [ anon ]
00007fe948f6f000      4K r----  /lib/ld-2.11.3.so
00007fe948f70000      4K rw---  /lib/ld-2.11.3.so
00007fe948f71000      4K rw---    [ anon ]
00007fe948f72000   3020K r-x--  /usr/lib/xen/bin/qemu-system-i386
00007fe949464000    816K r----  /usr/lib/xen/bin/qemu-system-i386
00007fe949530000    176K rw---  /usr/lib/xen/bin/qemu-system-i386
00007fe94955c000   8228K rw---    [ anon ]
00007fe94a5a7000 309936K rw---    [ anon ]
00007fff43e80000    132K rw---    [ stack ]
00007fff43fff000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]
 total           424016K

Can anyone help?

Regards,
   Maik
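
[Editorial note: the report above compares two manual RSS snapshots (28872K at start, 243472K after a week). A minimal sketch of how such samples could be collected periodically, e.g. from cron; the `rss_of` helper name is hypothetical, not part of the original report:]

```shell
# Sketch (not from the original report): sample the resident set size of a
# process so RSS growth can be logged over time.
rss_of() {
    # ps prints RSS in KiB, right-padded; strip the whitespace.
    ps -o rss= -p "$1" | tr -d ' '
}

# Example: sample the current shell. A qemu instance could be sampled with
# something like: rss_of "$(pgrep -f 'qemu-system-i386.*mgtmw01')"
echo "$(date '+%F %T') rss=$(rss_of $$)K"
```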
---532260405-1245608948-1356518803=:88953--


--===============9131452302534339185==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9131452302534339185==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 13:00:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 13:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnqaH-0000L4-Dk; Wed, 26 Dec 2012 12:59:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1TnqaG-0000Kx-4j
	for xen-devel@lists.xen.org; Wed, 26 Dec 2012 12:59:44 +0000
Received: from [85.158.143.35:7992] by server-1.bemta-4.messagelabs.com id
	64/66-28401-FB4FAD05; Wed, 26 Dec 2012 12:59:43 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356526781!16806786!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjA2MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7723 invoked from network); 26 Dec 2012 12:59:41 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Dec 2012 12:59:41 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id AB1642662;
	Wed, 26 Dec 2012 14:59:40 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 8CB4220067; Wed, 26 Dec 2012 14:59:40 +0200 (EET)
Date: Wed, 26 Dec 2012 14:59:40 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Rohit Damkondwar <genius.rsd@gmail.com>
Message-ID: <20121226125940.GQ8912@reaktio.net>
References: <CAHEEu84ykxG9tv1rU2PS=u=sLD-h5a4xkNQo4_PS_0exmdkfZQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAHEEu84ykxG9tv1rU2PS=u=sLD-h5a4xkNQo4_PS_0exmdkfZQ@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Error : libxenlight state driver is not active
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 26, 2012 at 01:39:58AM +0530, Rohit Damkondwar wrote:
>    Hi all. I recently compiled Xen 4.1.3 on Fedora 17 (64-bit). The Xen entry
>    appears in GRUB. Despite booting Fedora with the Xen hypervisor entry,
>    when I start virt-manager, I get an error:
>    "Internal Error: libxenlight state driver is not active"
> 
>    Can anyone help me on this?

Are you using the xm/xend toolstack? Is xend running? If so, configure libvirt to use the xm/xend driver.

I wouldn't recommend using xl/libxl (libxenlight) in Xen 4.1.x; it was a tech preview there,
and it's more mature in Xen 4.2.x+.
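[Editorial note: the checks Pasi suggests look roughly like the following. This is a sketch, not from the original thread; service names and paths vary by distro, and the `xen:///` URI selecting libvirt's xend-backed driver is an assumption about the libvirt setup of that era.]

```sh
# Is xend (the legacy toolstack daemon) actually running?
service xend status      # or: /etc/init.d/xend status
xm list                  # talks to xend; fails if xend is down

# Point libvirt/virt-manager at the xm/xend driver via the connection URI:
virsh -c xen:/// list    # assumed URI for the legacy Xen driver
```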

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 26 13:42:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 13:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnrEn-0000n9-Ra; Wed, 26 Dec 2012 13:41:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TnrEm-0000n4-Po
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 13:41:37 +0000
Received: from [85.158.137.99:58777] by server-7.bemta-3.messagelabs.com id
	46/85-23008-09EFAD05; Wed, 26 Dec 2012 13:41:36 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356529293!13026431!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTE5ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22230 invoked from network); 26 Dec 2012 13:41:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 13:41:34 -0000
X-IronPort-AV: E=Sophos;i="4.84,356,1355097600"; 
   d="scan'208";a="1862930"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	26 Dec 2012 13:41:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 26 Dec 2012 08:41:32 -0500
Received: from [10.80.3.80] (helo=iceland)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <Wei.Liu2@citrix.com>)	id
	1TnrEh-0007Fi-UI; Wed, 26 Dec 2012 13:41:31 +0000
Date: Wed, 26 Dec 2012 13:41:31 +0000
From: Wei Liu <Wei.Liu2@citrix.com>
To: =?utf-8?B?6ams56OK?= <aware.why@gmail.com>
Message-ID: <20121226134131.GA25087@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	wei.liu2@citrix.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
 =?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_error?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 26, 2012 at 10:19:36AM +0000, 马磊 wrote:
> 
> 
> On Wed, Dec 26, 2012 at 6:03 PM, 马磊 <aware.why@gmail.com<mailto:aware.why@gmail.com>> wrote:
> Hi，
>     I got xen source code from  http://www.xen.org/download/index_4.1.2.html .
>     when using `xl restore`，xc_evtchn_alloc_unbound will raise this error: xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 = No such process): Internal error.
>   what does this mean and what does such process refer to?
>     Looking forward to your reply.
>     Thanks.
> 
> The error exactly occurs at  this point: (tools/libxl/libxl_dom.c)
> 67-int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,
>  68              libxl_domain_build_info *info, libxl_domain_build_state *state)
>  69{
>  70    xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus);
>  71    xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
>  72    xc_domain_set_memmap_limit(ctx->xch, domid,
>  73            (info->hvm) ? info->max_memkb :
>  74            (info->max_memkb + info->u.pv.slack_memkb));
>  75    xc_domain_set_tsc_info(ctx->xch, domid, info->tsc_mode, 0, 0, 0);
>  76    if ( info->disable_migrate )
>  77        xc_domain_disable_migrate(ctx->xch, domid);
>  78
>  79    if (info->hvm) {
>  80        unsigned long shadow;
>  81        shadow = (info->shadow_memkb + 1023) / 1024;
>  82        xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &shadow, 0, NULL);
>  83    }
>  84
>  85    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
>  86    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
>  87    return 0;
>  88}

I played with save/restore several days ago but I never saw this error.

The relevant code in Xen is in
xen/common/event_channel.c:do_event_channel_op


Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 26 14:39:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 14:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tns8n-0001JP-G1; Wed, 26 Dec 2012 14:39:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1Tns8m-0001JK-Jf
	for xen-devel@lists.xen.org; Wed, 26 Dec 2012 14:39:28 +0000
Received: from [85.158.139.83:13555] by server-1.bemta-5.messagelabs.com id
	22/5F-12813-F1C0BD05; Wed, 26 Dec 2012 14:39:27 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1356532766!30635961!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16454 invoked from network); 26 Dec 2012 14:39:26 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-9.tower-182.messagelabs.com with SMTP;
	26 Dec 2012 14:39:26 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id qBQEdObD017227;
	Wed, 26 Dec 2012 08:39:24 -0600
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id qBQEdOFl017226;
	Wed, 26 Dec 2012 08:39:24 -0600
Date: Wed, 26 Dec 2012 08:39:24 -0600
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201212261439.qBQEdOFl017226@wind.enjellic.com>
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: xen-devel@lists.xen.org
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Wed, 26 Dec 2012 08:39:24 -0600 (CST)
Subject: [Xen-devel] i915 KMS framebuffer issues in 4.2.1/3.7.1.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Good morning, hope the week is going well for everyone.

We have been putting 4.2.1 through its paces and noted an issue which
is specific to the kernel running on top of the hypervisor.  It isn't
specific to 4.2.1 but we validated the issue on this platform and on
the 3.7.1 kernel for completeness.

An attempt to mmap the framebuffer provided by the i915 KMS driver
causes a subsequent fork() to fail with ENOMEM.  The same hardware and
kernel work fine on bare metal.  The 'fbterm' utility demonstrates the
problem, but we can also reproduce it with a minimal implementation of
'mmap + forkpty'.

We would be happy to test any patches or ideas if anyone has thoughts
on the issue.

Have a good day.

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"Experience is something you don't get until just after you need it."
                                -- Olivier

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 26 14:46:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 14:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnsEy-0001Uq-An; Wed, 26 Dec 2012 14:45:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1TnsEw-0001Uj-L1
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 14:45:51 +0000
Received: from [85.158.138.51:33451] by server-12.bemta-3.messagelabs.com id
	22/4C-27559-C9D0BD05; Wed, 26 Dec 2012 14:45:48 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1356533147!30365891!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22154 invoked from network); 26 Dec 2012 14:45:47 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 14:45:47 -0000
Received: by mail-ea0-f169.google.com with SMTP id a12so3460972eaa.14
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 06:45:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=e91rQKq9+kT+5xQzimMaPhpKLCuw7+p9v5KF4/E6Flc=;
	b=mHC8ZLYPwmZzth9Sz03Z3XLKGMIPymCQJ63QcYp5wqM+C3Uw6QkOBTV8eQSsEps7Vg
	YIj0pyiL/eF1jrT+zg5nsPsqlTu/av1i33WxMtzjtO0AyDW27j6NCEjhS10FexgeeRgn
	mWGRH+JMB4QR6B3slgr53I81wDgLoNDFnpuDsKCcIumiE/1wQ8RyxBkaf4ni9/gxgyAb
	XAQAkbXrh4y5wP0OXL5PSQCajM5NoY3gMo8oVXdhwfLHfM+xfxnU/Q7X9S4XuLEHKxrt
	4/JEkip4gGUG0H7mW8w8A3TD+zRQz84DyCN6EmwrOC4NhrtF4XOyyHqPw+45z/8rI3Yf
	9y2w==
MIME-Version: 1.0
Received: by 10.14.219.72 with SMTP id l48mr70583079eep.37.1356533146885; Wed,
	26 Dec 2012 06:45:46 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 06:45:46 -0800 (PST)
In-Reply-To: <20121226134131.GA25087@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
Date: Wed, 26 Dec 2012 22:45:46 +0800
Message-ID: <CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2649491933379723354=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2649491933379723354==
Content-Type: multipart/alternative; boundary=047d7b622124589e3404d1c27c7a

--047d7b622124589e3404d1c27c7a
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Wed, Dec 26, 2012 at 9:41 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Wed, Dec 26, 2012 at 10:19:36AM +0000, 马磊 wrote:
> >
> >
> > On Wed, Dec 26, 2012 at 6:03 PM, 马磊 <aware.why@gmail.com<mailto:
> aware.why@gmail.com>> wrote:
> > Hi，
> >     I got xen source code from
> http://www.xen.org/download/index_4.1.2.html .
> >     when using `xl restore`，xc_evtchn_alloc_unbound will raise this
> error: xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3 =
> No such process): Internal error.
> >   what does this mean and what does such process refer to?
> >     Looking forward to your reply.
> >     Thanks.
> >
> > The error exactly occurs at  this point: (tools/libxl/libxl_dom.c)
> > 67-int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,
> >  68              libxl_domain_build_info *info, libxl_domain_build_state
> *state)
> >  69{
> >  70    xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus);
> >  71    xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
> LIBXL_MAXMEM_CONSTANT);
> >  72    xc_domain_set_memmap_limit(ctx->xch, domid,
> >  73            (info->hvm) ? info->max_memkb :
> >  74            (info->max_memkb + info->u.pv.slack_memkb));
> >  75    xc_domain_set_tsc_info(ctx->xch, domid, info->tsc_mode, 0, 0, 0);
> >  76    if ( info->disable_migrate )
> >  77        xc_domain_disable_migrate(ctx->xch, domid);
> >  78
> >  79    if (info->hvm) {
> >  80        unsigned long shadow;
> >  81        shadow = (info->shadow_memkb + 1023) / 1024;
> >  82        xc_shadow_control(ctx->xch, domid,
> XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &shadow, 0, NULL);
> >  83    }
> >  84
> >  85    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
> >  86    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
> >  87    return 0;
> >  88}
>
> I played with save/restore several days ago but I never saw this error.
>
> The relevant code in Xen is in
> xen/common/event_channel.c:do_event_channel_op
>
>
> Wei.
>
I said `xl restore`, not `xm restore`, and the relevant code is in
src/tools/libxl.
It seems that yours aren't the same version as mine

--047d7b622124589e3404d1c27c7a
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Wed, Dec 26, 2012 at 9:41 PM, Wei Liu=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:Wei.Liu2@citrix.com" target=3D"_bl=
ank">Wei.Liu2@citrix.com</a>&gt;</span> wrote:<br><blockquote class=3D"gmai=
l_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left=
:1ex">
On Wed, Dec 26, 2012 at 10:19:36AM +0000, =E9=A9=AC=E7=A3=8A wrote:<br>
<div><div class=3D"h5">&gt;<br>
&gt;<br>
&gt; On Wed, Dec 26, 2012 at 6:03 PM, =E9=A9=AC=E7=A3=8A &lt;<a href=3D"mai=
lto:aware.why@gmail.com">aware.why@gmail.com</a>&lt;mailto:<a href=3D"mailt=
o:aware.why@gmail.com">aware.why@gmail.com</a>&gt;&gt; wrote:<br>
&gt; Hi=EF=BC=8C<br>
&gt; =C2=A0 =C2=A0 I got xen source code from =C2=A0<a href=3D"http://www.x=
en.org/download/index_4.1.2.html" target=3D"_blank">http://www.xen.org/down=
load/index_4.1.2.html</a> .<br>
&gt; =C2=A0 =C2=A0 when using `xl restore`=EF=BC=8Cxc_evtchn_alloc_unbound =
will raise this error: xc: error: do_evtchn_op: HYPERVISOR_event_channel_op=
 failed: -1 (3 =3D No such process): Internal error.<br>
&gt; =C2=A0 what does this mean and what does such process refer to?<br>
&gt; =C2=A0 =C2=A0 Looking forward to your reply.<br>
&gt; =C2=A0 =C2=A0 Thanks.<br>
&gt;<br>
&gt; The error exactly occurs at =C2=A0this point: (tools/libxl/libxl_dom.c=
)<br>
&gt; 67-int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,<br>
&gt; =C2=A068 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0libxl_domain_=
build_info *info, libxl_domain_build_state *state)<br>
&gt; =C2=A069{<br>
&gt; =C2=A070 =C2=A0 =C2=A0xc_domain_max_vcpus(ctx-&gt;xch, domid, info-&gt=
;max_vcpus);<br>
&gt; =C2=A071 =C2=A0 =C2=A0xc_domain_setmaxmem(ctx-&gt;xch, domid, info-&gt=
;target_memkb + LIBXL_MAXMEM_CONSTANT);<br>
&gt; =C2=A072 =C2=A0 =C2=A0xc_domain_set_memmap_limit(ctx-&gt;xch, domid,<b=
r>
&gt; =C2=A073 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0(info-&gt;hvm) ? inf=
o-&gt;max_memkb :<br>
&gt; =C2=A074 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0(info-&gt;max_memkb =
+ info-&gt;u.pv.slack_memkb));<br>
&gt; =C2=A075 =C2=A0 =C2=A0xc_domain_set_tsc_info(ctx-&gt;xch, domid, info-=
&gt;tsc_mode, 0, 0, 0);<br>
&gt; =C2=A076 =C2=A0 =C2=A0if ( info-&gt;disable_migrate )<br>
&gt; =C2=A077 =C2=A0 =C2=A0 =C2=A0 =C2=A0xc_domain_disable_migrate(ctx-&gt;=
xch, domid);<br>
&gt; =C2=A078<br>
&gt; =C2=A079 =C2=A0 =C2=A0if (info-&gt;hvm) {<br>
&gt; =C2=A080 =C2=A0 =C2=A0 =C2=A0 =C2=A0unsigned long shadow;<br>
&gt; =C2=A081 =C2=A0 =C2=A0 =C2=A0 =C2=A0shadow =3D (info-&gt;shadow_memkb =
+ 1023) / 1024;<br>
&gt; =C2=A082 =C2=A0 =C2=A0 =C2=A0 =C2=A0xc_shadow_control(ctx-&gt;xch, dom=
id, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &amp;shadow, 0, NULL);<br=
>
&gt; =C2=A083 =C2=A0 =C2=A0}<br>
&gt; =C2=A084<br>
&gt; =C2=A085 =C2=A0 =C2=A0state-&gt;store_port =3D xc_evtchn_alloc_unbound=
(ctx-&gt;xch, domid, 0);<br>
&gt; =C2=A086 =C2=A0 =C2=A0state-&gt;console_port =3D xc_evtchn_alloc_unbou=
nd(ctx-&gt;xch, domid, 0);<br>
&gt; =C2=A087 =C2=A0 =C2=A0return 0;<br>
&gt; =C2=A088}<br>
<br>
</div></div>I played with save/restore several days ago but I never saw thi=
s error.<br>
<br>
The relavent code in Xen is in<br>
xen/common/event_channel.c:do_event_channel_op<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
<br>
Wei.<br>
</font></span></blockquote></div>I said `xl restore`, not `xm restore`, and=
 the relevant code is in src/tools/libxl.<div>It seems that yours aren&#39;=
t the same version as mine</div>

--047d7b622124589e3404d1c27c7a--


--===============2649491933379723354==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2649491933379723354==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 14:46:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 14:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TnsEy-0001Uq-An; Wed, 26 Dec 2012 14:45:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1TnsEw-0001Uj-L1
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 14:45:51 +0000
Received: from [85.158.138.51:33451] by server-12.bemta-3.messagelabs.com id
	22/4C-27559-C9D0BD05; Wed, 26 Dec 2012 14:45:48 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1356533147!30365891!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22154 invoked from network); 26 Dec 2012 14:45:47 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 14:45:47 -0000
Received: by mail-ea0-f169.google.com with SMTP id a12so3460972eaa.14
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 06:45:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=e91rQKq9+kT+5xQzimMaPhpKLCuw7+p9v5KF4/E6Flc=;
	b=mHC8ZLYPwmZzth9Sz03Z3XLKGMIPymCQJ63QcYp5wqM+C3Uw6QkOBTV8eQSsEps7Vg
	YIj0pyiL/eF1jrT+zg5nsPsqlTu/av1i33WxMtzjtO0AyDW27j6NCEjhS10FexgeeRgn
	mWGRH+JMB4QR6B3slgr53I81wDgLoNDFnpuDsKCcIumiE/1wQ8RyxBkaf4ni9/gxgyAb
	XAQAkbXrh4y5wP0OXL5PSQCajM5NoY3gMo8oVXdhwfLHfM+xfxnU/Q7X9S4XuLEHKxrt
	4/JEkip4gGUG0H7mW8w8A3TD+zRQz84DyCN6EmwrOC4NhrtF4XOyyHqPw+45z/8rI3Yf
	9y2w==
MIME-Version: 1.0
Received: by 10.14.219.72 with SMTP id l48mr70583079eep.37.1356533146885; Wed,
	26 Dec 2012 06:45:46 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 06:45:46 -0800 (PST)
In-Reply-To: <20121226134131.GA25087@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
Date: Wed, 26 Dec 2012 22:45:46 +0800
Message-ID: <CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2649491933379723354=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2649491933379723354==
Content-Type: multipart/alternative; boundary=047d7b622124589e3404d1c27c7a

--047d7b622124589e3404d1c27c7a
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed, Dec 26, 2012 at 9:41 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Wed, Dec 26, 2012 at 10:19:36AM +0000, =E9=A9=AC=E7=A3=8A wrote:
> >
> >
> > On Wed, Dec 26, 2012 at 6:03 PM, =E9=A9=AC=E7=A3=8A <aware.why@gmail.co=
m<mailto:
> aware.why@gmail.com>> wrote:
> > Hi=EF=BC=8C
> >     I got xen source code from
> http://www.xen.org/download/index_4.1.2.html .
> >     when using `xl restore`=EF=BC=8Cxc_evtchn_alloc_unbound will raise =
this
> error: xc: error: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1 (3=
 =3D
> No such process): Internal error.
> >   what does this mean and what does such process refer to?
> >     Looking forward to your reply.
> >     Thanks.
> >
> > The error exactly occurs at  this point: (tools/libxl/libxl_dom.c)
> > 67-int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,
> >  68              libxl_domain_build_info *info, libxl_domain_build_stat=
e
> *state)
> >  69{
> >  70    xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus);
> >  71    xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
> LIBXL_MAXMEM_CONSTANT);
> >  72    xc_domain_set_memmap_limit(ctx->xch, domid,
> >  73            (info->hvm) ? info->max_memkb :
> >  74            (info->max_memkb + info->u.pv.slack_memkb));
> >  75    xc_domain_set_tsc_info(ctx->xch, domid, info->tsc_mode, 0, 0, 0)=
;
> >  76    if ( info->disable_migrate )
> >  77        xc_domain_disable_migrate(ctx->xch, domid);
> >  78
> >  79    if (info->hvm) {
> >  80        unsigned long shadow;
> >  81        shadow =3D (info->shadow_memkb + 1023) / 1024;
> >  82        xc_shadow_control(ctx->xch, domid,
> XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &shadow, 0, NULL);
> >  83    }
> >  84
> >  85    state->store_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid, 0=
);
> >  86    state->console_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid,=
 0);
> >  87    return 0;
> >  88}
>
> I played with save/restore several days ago but I never saw this error.
>
> The relevant code in Xen is in
> xen/common/event_channel.c:do_event_channel_op
>
>
> Wei.
>
I said `xl restore`, not `xm restore`, and the relevant code is in
src/tools/libxl.
It seems that yours isn't the same version as mine.

--047d7b622124589e3404d1c27c7a
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Wed, Dec 26, 2012 at 9:41 PM, Wei Liu=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:Wei.Liu2@citrix.com" target=3D"_bl=
ank">Wei.Liu2@citrix.com</a>&gt;</span> wrote:<br><blockquote class=3D"gmai=
l_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left=
:1ex">
On Wed, Dec 26, 2012 at 10:19:36AM +0000, =E9=A9=AC=E7=A3=8A wrote:<br>
<div><div class=3D"h5">&gt;<br>
&gt;<br>
&gt; On Wed, Dec 26, 2012 at 6:03 PM, =E9=A9=AC=E7=A3=8A &lt;<a href=3D"mai=
lto:aware.why@gmail.com">aware.why@gmail.com</a>&lt;mailto:<a href=3D"mailt=
o:aware.why@gmail.com">aware.why@gmail.com</a>&gt;&gt; wrote:<br>
&gt; Hi=EF=BC=8C<br>
&gt; =C2=A0 =C2=A0 I got xen source code from =C2=A0<a href=3D"http://www.x=
en.org/download/index_4.1.2.html" target=3D"_blank">http://www.xen.org/down=
load/index_4.1.2.html</a> .<br>
&gt; =C2=A0 =C2=A0 when using `xl restore`=EF=BC=8Cxc_evtchn_alloc_unbound =
will raise this error: xc: error: do_evtchn_op: HYPERVISOR_event_channel_op=
 failed: -1 (3 =3D No such process): Internal error.<br>
&gt; =C2=A0 what does this mean and what does such process refer to?<br>
&gt; =C2=A0 =C2=A0 Looking forward to your reply.<br>
&gt; =C2=A0 =C2=A0 Thanks.<br>
&gt;<br>
&gt; The error exactly occurs at =C2=A0this point: (tools/libxl/libxl_dom.c=
)<br>
&gt; 67-int libxl__build_pre(libxl_ctx *ctx, uint32_t domid,<br>
&gt; =C2=A068 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0libxl_domain_=
build_info *info, libxl_domain_build_state *state)<br>
&gt; =C2=A069{<br>
&gt; =C2=A070 =C2=A0 =C2=A0xc_domain_max_vcpus(ctx-&gt;xch, domid, info-&gt=
;max_vcpus);<br>
&gt; =C2=A071 =C2=A0 =C2=A0xc_domain_setmaxmem(ctx-&gt;xch, domid, info-&gt=
;target_memkb + LIBXL_MAXMEM_CONSTANT);<br>
&gt; =C2=A072 =C2=A0 =C2=A0xc_domain_set_memmap_limit(ctx-&gt;xch, domid,<b=
r>
&gt; =C2=A073 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0(info-&gt;hvm) ? inf=
o-&gt;max_memkb :<br>
&gt; =C2=A074 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0(info-&gt;max_memkb =
+ info-&gt;u.pv.slack_memkb));<br>
&gt; =C2=A075 =C2=A0 =C2=A0xc_domain_set_tsc_info(ctx-&gt;xch, domid, info-=
&gt;tsc_mode, 0, 0, 0);<br>
&gt; =C2=A076 =C2=A0 =C2=A0if ( info-&gt;disable_migrate )<br>
&gt; =C2=A077 =C2=A0 =C2=A0 =C2=A0 =C2=A0xc_domain_disable_migrate(ctx-&gt;=
xch, domid);<br>
&gt; =C2=A078<br>
&gt; =C2=A079 =C2=A0 =C2=A0if (info-&gt;hvm) {<br>
&gt; =C2=A080 =C2=A0 =C2=A0 =C2=A0 =C2=A0unsigned long shadow;<br>
&gt; =C2=A081 =C2=A0 =C2=A0 =C2=A0 =C2=A0shadow =3D (info-&gt;shadow_memkb =
+ 1023) / 1024;<br>
&gt; =C2=A082 =C2=A0 =C2=A0 =C2=A0 =C2=A0xc_shadow_control(ctx-&gt;xch, dom=
id, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &amp;shadow, 0, NULL);<br=
>
&gt; =C2=A083 =C2=A0 =C2=A0}<br>
&gt; =C2=A084<br>
&gt; =C2=A085 =C2=A0 =C2=A0state-&gt;store_port =3D xc_evtchn_alloc_unbound=
(ctx-&gt;xch, domid, 0);<br>
&gt; =C2=A086 =C2=A0 =C2=A0state-&gt;console_port =3D xc_evtchn_alloc_unbou=
nd(ctx-&gt;xch, domid, 0);<br>
&gt; =C2=A087 =C2=A0 =C2=A0return 0;<br>
&gt; =C2=A088}<br>
<br>
</div></div>I played with save/restore several days ago but I never saw thi=
s error.<br>
<br>
The relevant code in Xen is in<br>
xen/common/event_channel.c:do_event_channel_op<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
<br>
Wei.<br>
</font></span></blockquote></div>I said `xl restore`, not `xm restore`, an=
d the relevant code is in src/tools/libxl.<div>It seems that yours isn&#39=
;t the same version as mine.</div>

--047d7b622124589e3404d1c27c7a--


--===============2649491933379723354==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2649491933379723354==--


From xen-devel-bounces@lists.xen.org Wed Dec 26 17:19:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 17:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tnud4-0003NU-MW; Wed, 26 Dec 2012 17:18:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1Tnud2-0003NP-K2
	for xen-devel@lists.xen.org; Wed, 26 Dec 2012 17:18:52 +0000
Received: from [85.158.138.51:45365] by server-9.bemta-3.messagelabs.com id
	A8/97-11948-B713BD05; Wed, 26 Dec 2012 17:18:51 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1356542330!27849425!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21793 invoked from network); 26 Dec 2012 17:18:50 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-9.tower-174.messagelabs.com with SMTP;
	26 Dec 2012 17:18:50 -0000
Received: from [137.65.220.86] ([137.65.220.86])
	by mail.novell.com with ESMTP; Wed, 26 Dec 2012 10:18:35 -0700
Message-ID: <50DB316A.6040704@suse.com>
Date: Wed, 26 Dec 2012 10:18:34 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
References: <CAHEEu84ykxG9tv1rU2PS=u=sLD-h5a4xkNQo4_PS_0exmdkfZQ@mail.gmail.com>
	<20121226125940.GQ8912@reaktio.net>
In-Reply-To: <20121226125940.GQ8912@reaktio.net>
Cc: Rohit Damkondwar <genius.rsd@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Error : libxenlight state driver is not active
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pasi K=E4rkk=E4inen wrote:
> On Wed, Dec 26, 2012 at 01:39:58AM +0530, Rohit Damkondwar wrote:
>   =

>>    Hi all. I recently compiled xen 4.1.3 on Fedora 17(64 bit). The xen e=
ntry
>>    appears in the grub. Inspite of booting Fedora, with xen hypervisor e=
ntry
>>    , when I start virt-manager, I get an error
>>    " Internal Error : libxenlight state driver is not active "
>>
>>    Can anyone help me on this?
>>     =

>
> Are you using xm/xend toolstack? is xend running? if yes, then configure =
libvirt to use xm/xend driver.
>   =


FYI, there is no configuration knob in libvirt for selecting the legacy
xen driver (primarily xend-based) vs the libxl driver.  Assuming both
drivers are included in your libvirt installation, libvirtd will load
the legacy driver if xend is running, otherwise load the libxl driver. =

Changing the configured xen toolstack requires a libvirtd restart.

> I wouldn't recommend using xl/libxl (libxenlight) in Xen 4.1.x, it was a =
tech preview there,
> and it's more mature in Xen 4.2.x+
>   =


Agreed.  You will need libvirt >=3D 1.0.1 to work with Xen 4.2.x libxl. =

And recall that the libxl driver is missing some functionality wrt the
legacy xend driver, e.g. migration.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Dec 26 19:33:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Dec 2012 19:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tnwj7-0004W3-VX; Wed, 26 Dec 2012 19:33:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tnwj6-0004Vy-CF
	for xen-devel@lists.xensource.com; Wed, 26 Dec 2012 19:33:16 +0000
Received: from [85.158.139.83:16557] by server-3.bemta-5.messagelabs.com id
	D9/8F-25441-BF05BD05; Wed, 26 Dec 2012 19:33:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1356550393!29630074!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQ0Njg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12924 invoked from network); 26 Dec 2012 19:33:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Dec 2012 19:33:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,358,1355097600"; 
   d="scan'208";a="1795232"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	26 Dec 2012 19:33:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Wed, 26 Dec 2012 14:33:12 -0500
Received: from [10.80.3.80] (helo=iceland)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <Wei.Liu2@citrix.com>)	id
	1Tnwj2-0003zo-Fm; Wed, 26 Dec 2012 19:33:12 +0000
Date: Wed, 26 Dec 2012 19:33:12 +0000
From: Wei Liu <Wei.Liu2@citrix.com>
To: =?utf-8?B?6ams56OK?= <aware.why@gmail.com>
Message-ID: <20121226193312.GA28152@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	wei.liu2@citrix.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
 =?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_error?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gV2VkLCBEZWMgMjYsIDIwMTIgYXQgMDI6NDU6NDZQTSArMDAwMCwg6ams56OKIHdyb3RlOgo+
IEkgc2FpZCBgeGwgcmVzdG9yZWAsIG5vdCBgeG0gcmVzdG9yZWAsIGFuZCB0aGUgcmVsZXZhbnQg
Y29kZSBpcyBpbiBzcmMvdG9vbHMvbGlieGwuCj4gSXQgc2VlbXMgdGhhdCB5b3VycyBhcmVuJ3Qg
dGhlIHNhbWUgdmVyc2lvbiBhcyBtaW5lCgpJIGRpZG4ndCBzYXkgeG0gcmVzdG9yZSBlaXRoZXIu
IEkgd2FzIHJlZmVycmluZyB0byB0aGUgZ2VuZXJpYwpzYXZlL3Jlc3RvcmUgZnVuY3Rpb25hbGl0
eSwgdGhvdWdoIGluIGZhY3QgSSBkaWQgdXNlIHhsLiBBbmQgdGhlCmZ1bmN0aW9uIEkgdG9sZCB5
b3UgaXMgaW4gWGVuIGh5cGVydmlzb3Igc291cmNlIGNvZGUsIG5vdCBwYXJ0IG9mIGFueQp0b29s
IHN0YWNrLgoKV2hpY2hldmVyIHRvb2xzIHN0YWNrIHlvdSB1c2UsIGl0IHdpbGwgZXZlbnR1YWxs
eSBpc3N1ZSBoeXBlcmNhbGwgdG8gWGVuCnZpYSBsaWJ4Yy4gSSBkb24ndCB1c2UgdGhlIHNhbWUg
dmVyc2lvbiBhcyB5b3VycyBzaW5jZSBJIGFsd2F5cyBwbGF5cwp3aXRoIHhlbiBzdGFnaW5nIHVu
c3RhYmxlLiBIb3dldmVyIEkgZG9uJ3QgdGhpbmsgYWxsb2NhdGlvbiBjb2RlCmhhcyBwb3RlbnRp
YWwgZGlmZmVyZW5jZSBiZXR3ZWVuIDQuMS4yIGFuZCBzdGFnaW5nIHVuc3RhYmxlLgoKCldlaS4K
Cl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZl
bCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5v
cmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:13:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To2xU-0004es-JE; Thu, 27 Dec 2012 02:12:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1To2xT-0004en-2S
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:12:31 +0000
Received: from [85.158.138.51:34993] by server-7.bemta-3.messagelabs.com id
	E5/8B-23008-E8EABD05; Thu, 27 Dec 2012 02:12:30 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1356574349!24062283!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23074 invoked from network); 27 Dec 2012 02:12:29 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 02:12:29 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so4413664eek.18
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Dec 2012 18:12:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=TBiF7uQHvvN6DIsLxMA84/h1kcmG+RDmBapYqgJ+ZMU=;
	b=UOusDYc5FMh0oj6w0q5U4eQ02iCnIoMsaL0NPkRuUydRqQ5PpFUXGapF1OOFo9ZOYD
	hkmIL4pcCjohIBe6eK6syLvskmN89QMoXYca83+d+Q6aLlQJ6S0GFl/3ZNPh94uIswEu
	K3Rx6JZD6B+niQ9E1ZwJlf+bu8Z//YSV5axKT7/cZw8azIh7I5qd7pq04wKqQMxBnXL8
	C4bfx1mG+kah9lHPnuSc+5/UgRRb3qqIQN6FMSr0D1pEmkffYUmHlFFh8b8noz+ueJVw
	bLuGJ5lbpwNXRAz6cHOC5MBJRIj8/14pJCkKYZ55E/WW/Qdftr+Y40ZL76NvzKis87Tl
	VK0Q==
MIME-Version: 1.0
Received: by 10.14.225.194 with SMTP id z42mr74393048eep.22.1356574348831;
	Wed, 26 Dec 2012 18:12:28 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Wed, 26 Dec 2012 18:12:28 -0800 (PST)
In-Reply-To: <20121226193312.GA28152@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
Date: Thu, 27 Dec 2012 10:12:28 +0800
Message-ID: <CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4049107114575591363=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4049107114575591363==
Content-Type: multipart/alternative; boundary=047d7b66f24b2c6c1804d1cc14e8

--047d7b66f24b2c6c1804d1cc14e8
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 27, 2012 at 3:33 AM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Wed, Dec 26, 2012 at 02:45:46PM +0000, =E9=A9=AC=E7=A3=8A wrote:
> > I said `xl restore`, not `xm restore`, and the relevant code is in
> src/tools/libxl.
> > It seems that yours aren't the same version as mine
>
> I didn't say xm restore either. I was referring to the generic
> save/restore functionality, though in fact I did use xl. And the
> function I told you is in Xen hypervisor source code, not part of any
> tool stack.
>
> Whichever tools stack you use, it will eventually issue hypercall to Xen
> via libxc. I don't use the same version as yours since I always plays
> with xen staging unstable. However I don't think allocation code
> has potential difference between 4.1.2 and staging unstable.
>
>
> Wei.
>
I got it, but the error `  xc: error: do_evtchn_op:
HYPERVISOR_event_channel_op failed: -1 (3 =3D No such process): Internal
error. ` said no such process, the system error description
didn't seem to have anything to do with the following lines which raised it:
 85    state->store_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
 86    state->console_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid, 0);

--047d7b66f24b2c6c1804d1cc14e8
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Thu, Dec 27, 2012 at 3:33 AM, Wei Liu=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:Wei.Liu2@citrix.com" target=3D"_bl=
ank">Wei.Liu2@citrix.com</a>&gt;</span> wrote:<br><blockquote class=3D"gmai=
l_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left=
:1ex">
<div class=3D"im">On Wed, Dec 26, 2012 at 02:45:46PM +0000, =E9=A9=AC=E7=A3=
=8A wrote:<br>
&gt; I said `xl restore`, not `xm restore`, and the relevant code is in src=
/tools/libxl.<br>
&gt; It seems that yours aren&#39;t the same version as mine<br>
<br>
</div>I didn&#39;t say xm restore either. I was referring to the generic<br=
>
save/restore functionality, though in fact I did use xl. And the<br>
function I told you is in Xen hypervisor source code, not part of any<br>
tool stack.<br>
<br>
Whichever tools stack you use, it will eventually issue hypercall to Xen<br=
>
via libxc. I don&#39;t use the same version as yours since I always plays<b=
r>
with xen staging unstable. However I don&#39;t think allocation code<br>
has potential difference between 4.1.2 and staging unstable.<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
<br>
Wei.<br>
</font></span></blockquote></div>I got it, but the error `
<span style=3D"color:rgb(255,102,102);font-family:arial,sans-serif;font-siz=
e:13px;background-color:rgb(255,255,255)">=C2=A0xc: error: do_evtchn_op: HY=
PERVISOR_event_channel_op failed: -1 (3 =3D No such process): Internal erro=
r.</span>=C2=A0` said no such process, the system error description<div>
didn&#39;t seem to have anything to do with the following lines which raised=C2=
=A0</div><div>=C2=A0<span style=3D"color:rgb(80,0,80);font-family:arial,san=
s-serif;font-size:13px;background-color:rgb(255,255,255)">85 =C2=A0 =C2=A0s=
tate-&gt;store_port =3D xc_evtchn_alloc_unbound(ctx-&gt;</span><span style=
=3D"color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px;backgrou=
nd-color:rgb(255,255,255)">xch, domid, 0);</span></div>
<span style=3D"color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13=
px;background-color:rgb(255,255,255)">=C2=A086 =C2=A0 =C2=A0state-&gt;conso=
le_port =3D xc_evtchn_alloc_unbound(ctx-&gt;</span><span style=3D"color:rgb=
(80,0,80);font-family:arial,sans-serif;font-size:13px;background-color:rgb(=
255,255,255)">xch, domid, 0);</span>

--047d7b66f24b2c6c1804d1cc14e8--


--===============4049107114575591363==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4049107114575591363==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36A-0004vk-D2; Thu, 27 Dec 2012 02:21:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To367-0004u0-El
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:27 +0000
Received: from [85.158.139.211:12248] by server-15.bemta-5.messagelabs.com id
	5D/62-20523-6A0BBD05; Thu, 27 Dec 2012 02:21:26 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!10
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11690 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645336Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:54 +0100
Message-Id: <1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.006529
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 05/11] x86/xen: Register resources required
	by kexec-tools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Register resources required by kexec-tools.

v2 - suggestions/fixes:
   - change logging level
     (suggested by Konrad Rzeszutek Wilk).

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/xen/kexec.c |  150 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 150 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/xen/kexec.c

diff --git a/arch/x86/xen/kexec.c b/arch/x86/xen/kexec.c
new file mode 100644
index 0000000..7ec4c45
--- /dev/null
+++ b/arch/x86/xen/kexec.c
@@ -0,0 +1,150 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+#include <xen/interface/platform.h>
+#include <xen/interface/xen.h>
+#include <xen/xen.h>
+
+#include <asm/xen/hypercall.h>
+
+unsigned long xen_vmcoreinfo_maddr = 0;
+unsigned long xen_vmcoreinfo_max_size = 0;
+
+static int __init xen_init_kexec_resources(void)
+{
+	int rc;
+	static struct resource xen_hypervisor_res = {
+		.name = "Hypervisor code and data",
+		.flags = IORESOURCE_BUSY | IORESOURCE_MEM
+	};
+	struct resource *cpu_res;
+	struct xen_kexec_range xkr;
+	struct xen_platform_op cpuinfo_op;
+	uint32_t cpus, i;
+
+	if (!xen_initial_domain())
+		return 0;
+
+	if (strstr(boot_command_line, "crashkernel="))
+		pr_warn("kexec: Ignoring crashkernel option. "
+			"It should be passed to Xen hypervisor.\n");
+
+	/* Register Crash kernel resource. */
+	xkr.range = KEXEC_RANGE_MA_CRASH;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_kexec_op(KEXEC_RANGE_MA_CRASH)"
+			": %i\n", __func__, rc);
+		return rc;
+	}
+
+	if (!xkr.size)
+		return 0;
+
+	crashk_res.start = xkr.start;
+	crashk_res.end = xkr.start + xkr.size - 1;
+	insert_resource(&iomem_resource, &crashk_res);
+
+	/* Register Hypervisor code and data resource. */
+	xkr.range = KEXEC_RANGE_MA_XEN;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_kexec_op(KEXEC_RANGE_MA_XEN)"
+			": %i\n", __func__, rc);
+		return rc;
+	}
+
+	xen_hypervisor_res.start = xkr.start;
+	xen_hypervisor_res.end = xkr.start + xkr.size - 1;
+	insert_resource(&iomem_resource, &xen_hypervisor_res);
+
+	/* Determine maximum number of physical CPUs. */
+	cpuinfo_op.cmd = XENPF_get_cpuinfo;
+	cpuinfo_op.u.pcpu_info.xen_cpuid = 0;
+	rc = HYPERVISOR_dom0_op(&cpuinfo_op);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_dom0_op(): %i\n", __func__, rc);
+		return rc;
+	}
+
+	cpus = cpuinfo_op.u.pcpu_info.max_present + 1;
+
+	/* Register CPUs Crash note resources. */
+	cpu_res = kcalloc(cpus, sizeof(struct resource), GFP_KERNEL);
+
+	if (!cpu_res) {
+		pr_warn("kexec: %s: kcalloc(): %i\n", __func__, -ENOMEM);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < cpus; ++i) {
+		xkr.range = KEXEC_RANGE_MA_CPU;
+		xkr.nr = i;
+		rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+		if (rc) {
+			pr_warn("kexec: %s: cpu: %u: HYPERVISOR_kexec_op"
+				"(KEXEC_RANGE_MA_XEN): %i\n", __func__, i, rc);
+			continue;
+		}
+
+		cpu_res->name = "Crash note";
+		cpu_res->start = xkr.start;
+		cpu_res->end = xkr.start + xkr.size - 1;
+		cpu_res->flags = IORESOURCE_BUSY | IORESOURCE_MEM;
+		insert_resource(&iomem_resource, cpu_res++);
+	}
+
+	/* Get vmcoreinfo address and maximum allowed size. */
+	xkr.range = KEXEC_RANGE_MA_VMCOREINFO;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_kexec_op(KEXEC_RANGE_MA_VMCOREINFO)"
+			": %i\n", __func__, rc);
+		return rc;
+	}
+
+	xen_vmcoreinfo_maddr = xkr.start;
+	xen_vmcoreinfo_max_size = xkr.size;
+
+	return 0;
+}
+
+core_initcall(xen_init_kexec_resources);
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To369-0004v3-8z; Thu, 27 Dec 2012 02:21:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tR-MF
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:10385] by server-10.bemta-5.messagelabs.com id
	D1/ED-13383-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!6
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11577 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645342Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:19:00 +0100
Message-Id: <1356574740-6806-12-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-11-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-11-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.387740
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 11/11] x86: Add Xen kexec control code size
	check to linker script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add Xen kexec control code size check to linker script.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/kernel/vmlinux.lds.S |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 22a1530..f18786a 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -360,5 +360,10 @@ INIT_PER_CPU(irq_stack_union);
 
 . = ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE,
            "kexec control code size is too big");
-#endif
 
+#ifdef CONFIG_XEN
+. = ASSERT(xen_kexec_control_code_size - xen_relocate_kernel <=
+		KEXEC_CONTROL_CODE_MAX_SIZE,
+		"Xen kexec control code size is too big");
+#endif
+#endif
-- 
1.5.6.5
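
(For readers unfamiliar with the construct used in the patch: in a GNU ld
linker script, `. = ASSERT(expr, "message")` evaluates the expression at
link time and aborts the link with the given message if it is zero; it
does not move the location counter. A generic sketch of the pattern, with
made-up symbol names and an arbitrary budget:)

```ld
/* Illustration only: fail the link if a code region outgrows a budget. */
. = ASSERT(my_code_end - my_code_start <= 2048,
           "control code is too big");
```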


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


diff --git a/arch/x86/xen/kexec.c b/arch/x86/xen/kexec.c
new file mode 100644
index 0000000..7ec4c45
--- /dev/null
+++ b/arch/x86/xen/kexec.c
@@ -0,0 +1,150 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+#include <xen/interface/platform.h>
+#include <xen/interface/xen.h>
+#include <xen/xen.h>
+
+#include <asm/xen/hypercall.h>
+
+unsigned long xen_vmcoreinfo_maddr = 0;
+unsigned long xen_vmcoreinfo_max_size = 0;
+
+static int __init xen_init_kexec_resources(void)
+{
+	int rc;
+	static struct resource xen_hypervisor_res = {
+		.name = "Hypervisor code and data",
+		.flags = IORESOURCE_BUSY | IORESOURCE_MEM
+	};
+	struct resource *cpu_res;
+	struct xen_kexec_range xkr;
+	struct xen_platform_op cpuinfo_op;
+	uint32_t cpus, i;
+
+	if (!xen_initial_domain())
+		return 0;
+
+	if (strstr(boot_command_line, "crashkernel="))
+		pr_warn("kexec: Ignoring crashkernel option. "
+			"It should be passed to Xen hypervisor.\n");
+
+	/* Register Crash kernel resource. */
+	xkr.range = KEXEC_RANGE_MA_CRASH;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_kexec_op(KEXEC_RANGE_MA_CRASH)"
+			": %i\n", __func__, rc);
+		return rc;
+	}
+
+	if (!xkr.size)
+		return 0;
+
+	crashk_res.start = xkr.start;
+	crashk_res.end = xkr.start + xkr.size - 1;
+	insert_resource(&iomem_resource, &crashk_res);
+
+	/* Register Hypervisor code and data resource. */
+	xkr.range = KEXEC_RANGE_MA_XEN;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_kexec_op(KEXEC_RANGE_MA_XEN)"
+			": %i\n", __func__, rc);
+		return rc;
+	}
+
+	xen_hypervisor_res.start = xkr.start;
+	xen_hypervisor_res.end = xkr.start + xkr.size - 1;
+	insert_resource(&iomem_resource, &xen_hypervisor_res);
+
+	/* Determine maximum number of physical CPUs. */
+	cpuinfo_op.cmd = XENPF_get_cpuinfo;
+	cpuinfo_op.u.pcpu_info.xen_cpuid = 0;
+	rc = HYPERVISOR_dom0_op(&cpuinfo_op);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_dom0_op(): %i\n", __func__, rc);
+		return rc;
+	}
+
+	cpus = cpuinfo_op.u.pcpu_info.max_present + 1;
+
+	/* Register CPUs Crash note resources. */
+	cpu_res = kcalloc(cpus, sizeof(struct resource), GFP_KERNEL);
+
+	if (!cpu_res) {
+		pr_warn("kexec: %s: kcalloc(): %i\n", __func__, -ENOMEM);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < cpus; ++i) {
+		xkr.range = KEXEC_RANGE_MA_CPU;
+		xkr.nr = i;
+		rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+		if (rc) {
+			pr_warn("kexec: %s: cpu: %u: HYPERVISOR_kexec_op"
+				"(KEXEC_RANGE_MA_CPU): %i\n", __func__, i, rc);
+			continue;
+		}
+
+		cpu_res->name = "Crash note";
+		cpu_res->start = xkr.start;
+		cpu_res->end = xkr.start + xkr.size - 1;
+		cpu_res->flags = IORESOURCE_BUSY | IORESOURCE_MEM;
+		insert_resource(&iomem_resource, cpu_res++);
+	}
+
+	/* Get vmcoreinfo address and maximum allowed size. */
+	xkr.range = KEXEC_RANGE_MA_VMCOREINFO;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &xkr);
+
+	if (rc) {
+		pr_warn("kexec: %s: HYPERVISOR_kexec_op(KEXEC_RANGE_MA_VMCOREINFO)"
+			": %i\n", __func__, rc);
+		return rc;
+	}
+
+	xen_vmcoreinfo_maddr = xkr.start;
+	xen_vmcoreinfo_max_size = xkr.size;
+
+	return 0;
+}
+
+core_initcall(xen_init_kexec_resources);
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To369-0004v3-8z; Thu, 27 Dec 2012 02:21:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tR-MF
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:10385] by server-10.bemta-5.messagelabs.com id
	D1/ED-13383-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!6
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11577 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645342Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:19:00 +0100
Message-Id: <1356574740-6806-12-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-11-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-11-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.387740
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 11/11] x86: Add Xen kexec control code size
	check to linker script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add Xen kexec control code size check to linker script.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/kernel/vmlinux.lds.S |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 22a1530..f18786a 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -360,5 +360,10 @@ INIT_PER_CPU(irq_stack_union);
 
 . = ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE,
            "kexec control code size is too big");
-#endif
 
+#ifdef CONFIG_XEN
+. = ASSERT(xen_kexec_control_code_size - xen_relocate_kernel <=
+		KEXEC_CONTROL_CODE_MAX_SIZE,
+		"Xen kexec control code size is too big");
+#endif
+#endif
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To368-0004un-Rd; Thu, 27 Dec 2012 02:21:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tW-F6
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:48294] by server-9.bemta-5.messagelabs.com id
	5E/41-10690-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!7
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11626 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645340Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:58 +0100
Message-Id: <1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.281227
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 09/11] x86/xen/enlighten: Add init and crash
	kexec/kdump hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add init and crash kexec/kdump hooks.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/xen/enlighten.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 138e566..5025bba 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -31,6 +31,7 @@
 #include <linux/pci.h>
 #include <linux/gfp.h>
 #include <linux/memblock.h>
+#include <linux/kexec.h>
 
 #include <xen/xen.h>
 #include <xen/events.h>
@@ -1276,6 +1277,12 @@ static void xen_machine_power_off(void)
 
 static void xen_crash_shutdown(struct pt_regs *regs)
 {
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_crash_image) {
+		crash_save_cpu(regs, safe_smp_processor_id());
+		return;
+	}
+#endif
 	xen_reboot(SHUTDOWN_crash);
 }
 
@@ -1353,6 +1360,10 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_init_mmu_ops();
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	kexec_use_firmware = true;
+#endif
+
 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
 #if 0
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To366-0004tp-Rh; Thu, 27 Dec 2012 02:21:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To365-0004tK-8F
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:25 +0000
Received: from [85.158.139.211:48235] by server-3.bemta-5.messagelabs.com id
	00/3D-25441-4A0BBD05; Thu, 27 Dec 2012 02:21:24 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11519 invoked from network); 27 Dec 2012 02:21:22 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:22 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1623742Ab2L0CTA
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:00 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:49 +0100
Message-Id: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
X-Bogosity: No, spamicity=0.482930
Subject: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi,

This set of patches contains the initial kexec/kdump implementation for Xen, v3.
Currently only dom0 is supported; however, almost all of the infrastructure
required for domU support is ready.

Jan Beulich suggested merging the Xen x86 assembler code with the baremetal x86 code.
This could simplify the kernel code and reduce its size a bit. However, this solution
requires some changes to the baremetal x86 code. First of all, the code which establishes
the transition page table should be moved back from machine_kexec_$(BITS).c to
relocate_kernel_$(BITS).S. Another important thing which would have to change in that
case is the format of the page_list array: the Xen kexec hypercall requires physical
addresses to alternate with virtual ones. These and the other required changes have not
been made in this version because I am not sure that such a solution will be accepted
by the kexec/kdump maintainers. I hope that this email sparks a discussion on the topic.

Daniel

 arch/x86/Kconfig                     |    3 +
 arch/x86/include/asm/kexec.h         |   10 +-
 arch/x86/include/asm/xen/hypercall.h |    6 +
 arch/x86/include/asm/xen/kexec.h     |   79 ++++
 arch/x86/kernel/machine_kexec_64.c   |   12 +-
 arch/x86/kernel/vmlinux.lds.S        |    7 +-
 arch/x86/xen/Kconfig                 |    1 +
 arch/x86/xen/Makefile                |    3 +
 arch/x86/xen/enlighten.c             |   11 +
 arch/x86/xen/kexec.c                 |  150 +++++++
 arch/x86/xen/machine_kexec_32.c      |  226 +++++++++++
 arch/x86/xen/machine_kexec_64.c      |  318 +++++++++++++++
 arch/x86/xen/relocate_kernel_32.S    |  323 +++++++++++++++
 arch/x86/xen/relocate_kernel_64.S    |  309 ++++++++++++++
 drivers/xen/sys-hypervisor.c         |   42 ++-
 include/linux/kexec.h                |   26 ++-
 include/xen/interface/xen.h          |   33 ++
 kernel/Makefile                      |    1 +
 kernel/kexec-firmware.c              |  743 ++++++++++++++++++++++++++++++++++
 kernel/kexec.c                       |   46 ++-
 20 files changed, 2331 insertions(+), 18 deletions(-)

Daniel Kiper (11):
      kexec: introduce kexec firmware support
      x86/kexec: Add extra pointers to transition page table PGD, PUD, PMD and PTE
      xen: Introduce architecture independent data for kexec/kdump
      x86/xen: Introduce architecture dependent data for kexec/kdump
      x86/xen: Register resources required by kexec-tools
      x86/xen: Add i386 kexec/kdump implementation
      x86/xen: Add x86_64 kexec/kdump implementation
      x86/xen: Add kexec/kdump Kconfig and makefile rules
      x86/xen/enlighten: Add init and crash kexec/kdump hooks
      drivers/xen: Export vmcoreinfo through sysfs
      x86: Add Xen kexec control code size check to linker script

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To367-0004u4-8E; Thu, 27 Dec 2012 02:21:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To365-0004tL-NC
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:25 +0000
Received: from [85.158.139.211:48262] by server-13.bemta-5.messagelabs.com id
	42/4A-10716-4A0BBD05; Thu, 27 Dec 2012 02:21:24 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!3
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11548 invoked from network); 27 Dec 2012 02:21:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:24 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645312Ab2L0CTA
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:00 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:51 +0100
Message-Id: <1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.395697
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 02/11] x86/kexec: Add extra pointers to
	transition page table PGD, PUD, PMD and PTE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some implementations (e.g. Xen PVOPS) cannot reuse parts of the identity page table
to construct the transition page table. This means that they require separate PUDs,
PMDs and PTEs for the virtual and the physical (identity) mappings. To satisfy that
requirement, add extra pointers for the PGD, PUD, PMD and PTE levels and adjust the
existing code accordingly.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/include/asm/kexec.h       |   10 +++++++---
 arch/x86/kernel/machine_kexec_64.c |   12 ++++++------
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 6080d26..cedd204 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -157,9 +157,13 @@ struct kimage_arch {
 };
 #else
 struct kimage_arch {
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
+	pgd_t *pgd;
+	pud_t *pud0;
+	pud_t *pud1;
+	pmd_t *pmd0;
+	pmd_t *pmd1;
+	pte_t *pte0;
+	pte_t *pte1;
 };
 #endif
 
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index b3ea9db..976e54b 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -137,9 +137,9 @@ out:
 
 static void free_transition_pgtable(struct kimage *image)
 {
-	free_page((unsigned long)image->arch.pud);
-	free_page((unsigned long)image->arch.pmd);
-	free_page((unsigned long)image->arch.pte);
+	free_page((unsigned long)image->arch.pud0);
+	free_page((unsigned long)image->arch.pmd0);
+	free_page((unsigned long)image->arch.pte0);
 }
 
 static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
@@ -157,7 +157,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pud)
 			goto err;
-		image->arch.pud = pud;
+		image->arch.pud0 = pud;
 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
 	}
 	pud = pud_offset(pgd, vaddr);
@@ -165,7 +165,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pmd)
 			goto err;
-		image->arch.pmd = pmd;
+		image->arch.pmd0 = pmd;
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
 	pmd = pmd_offset(pud, vaddr);
@@ -173,7 +173,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pte)
 			goto err;
-		image->arch.pte = pte;
+		image->arch.pte0 = pte;
 		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
 	}
 	pte = pte_offset_kernel(pmd, vaddr);
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36B-0004wh-KR; Thu, 27 Dec 2012 02:21:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To368-0004uN-D9
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:28 +0000
Received: from [85.158.139.211:12260] by server-14.bemta-5.messagelabs.com id
	09/5F-09538-7A0BBD05; Thu, 27 Dec 2012 02:21:27 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!11
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11707 invoked from network); 27 Dec 2012 02:21:26 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:26 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645337Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:55 +0100
Message-Id: <1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.070114
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 06/11] x86/xen: Add i386 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add i386 kexec/kdump implementation.

v2 - suggestions/fixes:
   - allocate transition page table pages below 4 GiB
     (suggested by Jan Beulich).

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/xen/machine_kexec_32.c   |  226 ++++++++++++++++++++++++++
 arch/x86/xen/relocate_kernel_32.S |  323 +++++++++++++++++++++++++++++++++++++
 2 files changed, 549 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/xen/machine_kexec_32.c
 create mode 100644 arch/x86/xen/relocate_kernel_32.S

diff --git a/arch/x86/xen/machine_kexec_32.c b/arch/x86/xen/machine_kexec_32.c
new file mode 100644
index 0000000..011a5e8
--- /dev/null
+++ b/arch/x86/xen/machine_kexec_32.c
@@ -0,0 +1,226 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+
+#include <xen/xen.h>
+#include <xen/xen-ops.h>
+
+#include <asm/xen/hypercall.h>
+#include <asm/xen/kexec.h>
+#include <asm/xen/page.h>
+
+#define __ma(vaddr)	(virt_to_machine(vaddr).maddr)
+
+static void *alloc_pgtable_page(struct kimage *image)
+{
+	struct page *page;
+
+	page = firmware_kimage_alloc_control_pages(image, 0);
+
+	if (!page || !page_address(page))
+		return NULL;
+
+	memset(page_address(page), 0, PAGE_SIZE);
+
+	return page_address(page);
+}
+
+static int alloc_transition_pgtable(struct kimage *image)
+{
+	image->arch.pgd = alloc_pgtable_page(image);
+
+	if (!image->arch.pgd)
+		return -ENOMEM;
+
+	image->arch.pmd0 = alloc_pgtable_page(image);
+
+	if (!image->arch.pmd0)
+		return -ENOMEM;
+
+	image->arch.pmd1 = alloc_pgtable_page(image);
+
+	if (!image->arch.pmd1)
+		return -ENOMEM;
+
+	image->arch.pte0 = alloc_pgtable_page(image);
+
+	if (!image->arch.pte0)
+		return -ENOMEM;
+
+	image->arch.pte1 = alloc_pgtable_page(image);
+
+	if (!image->arch.pte1)
+		return -ENOMEM;
+
+	return 0;
+}
+
+struct page *mf_kexec_kimage_alloc_pages(gfp_t gfp_mask,
+						unsigned int order,
+						unsigned long limit)
+{
+	struct page *pages;
+	unsigned int address_bits, i;
+
+	pages = alloc_pages(gfp_mask, order);
+
+	if (!pages)
+		return NULL;
+
+	address_bits = (limit == ULONG_MAX) ? BITS_PER_LONG : ilog2(limit);
+
+	/* Relocate set of pages below given limit. */
+	if (xen_create_contiguous_region((unsigned long)page_address(pages),
+							order, address_bits)) {
+		__free_pages(pages, order);
+		return NULL;
+	}
+
+	BUG_ON(PagePrivate(pages));
+
+	pages->mapping = NULL;
+	set_page_private(pages, order);
+
+	for (i = 0; i < (1 << order); ++i)
+		SetPageReserved(pages + i);
+
+	return pages;
+}
+
+void mf_kexec_kimage_free_pages(struct page *page)
+{
+	unsigned int i, order;
+
+	order = page_private(page);
+
+	for (i = 0; i < (1 << order); ++i)
+		ClearPageReserved(page + i);
+
+	xen_destroy_contiguous_region((unsigned long)page_address(page), order);
+	__free_pages(page, order);
+}
+
+unsigned long mf_kexec_page_to_pfn(struct page *page)
+{
+	return pfn_to_mfn(page_to_pfn(page));
+}
+
+struct page *mf_kexec_pfn_to_page(unsigned long mfn)
+{
+	return pfn_to_page(mfn_to_pfn(mfn));
+}
+
+unsigned long mf_kexec_virt_to_phys(volatile void *address)
+{
+	return virt_to_machine(address).maddr;
+}
+
+void *mf_kexec_phys_to_virt(unsigned long address)
+{
+	return phys_to_virt(machine_to_phys(XMADDR(address)).paddr);
+}
+
+int mf_kexec_prepare(struct kimage *image)
+{
+#ifdef CONFIG_KEXEC_JUMP
+	if (image->preserve_context) {
+		pr_info_once("kexec: Context preservation is not "
+				"supported in Xen domains.\n");
+		return -ENOSYS;
+	}
+#endif
+
+	return alloc_transition_pgtable(image);
+}
+
+int mf_kexec_load(struct kimage *image)
+{
+	void *control_page;
+	struct xen_kexec_load xkl = {};
+
+	/* Image is unloaded, nothing to do. */
+	if (!image)
+		return 0;
+
+	control_page = page_address(image->control_code_page);
+	memcpy(control_page, xen_relocate_kernel, xen_kexec_control_code_size);
+
+	xkl.type = image->type;
+	xkl.image.page_list[XK_MA_CONTROL_PAGE] = __ma(control_page);
+	xkl.image.page_list[XK_MA_TABLE_PAGE] = 0; /* Unused. */
+	xkl.image.page_list[XK_MA_PGD_PAGE] = __ma(image->arch.pgd);
+	xkl.image.page_list[XK_MA_PUD0_PAGE] = 0; /* Unused. */
+	xkl.image.page_list[XK_MA_PUD1_PAGE] = 0; /* Unused. */
+	xkl.image.page_list[XK_MA_PMD0_PAGE] = __ma(image->arch.pmd0);
+	xkl.image.page_list[XK_MA_PMD1_PAGE] = __ma(image->arch.pmd1);
+	xkl.image.page_list[XK_MA_PTE0_PAGE] = __ma(image->arch.pte0);
+	xkl.image.page_list[XK_MA_PTE1_PAGE] = __ma(image->arch.pte1);
+	xkl.image.indirection_page = image->head;
+	xkl.image.start_address = image->start;
+
+	return HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load, &xkl);
+}
+
+void mf_kexec_cleanup(struct kimage *image)
+{
+}
+
+void mf_kexec_unload(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_load xkl = {};
+
+	if (!image)
+		return;
+
+	xkl.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_unload, &xkl);
+
+	WARN(rc, "kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+}
+
+void mf_kexec_shutdown(void)
+{
+}
+
+void mf_kexec(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_exec xke = {};
+
+	xke.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec, &xke);
+
+	pr_emerg("kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+	BUG();
+}
diff --git a/arch/x86/xen/relocate_kernel_32.S b/arch/x86/xen/relocate_kernel_32.S
new file mode 100644
index 0000000..0e81830
--- /dev/null
+++ b/arch/x86/xen/relocate_kernel_32.S
@@ -0,0 +1,323 @@
+/*
+ * Copyright (c) 2002-2005 Eric Biederman <ebiederm@xmission.com>
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/cache.h>
+#include <asm/page_types.h>
+#include <asm/pgtable_types.h>
+#include <asm/processor-flags.h>
+
+#include <asm/xen/kexec.h>
+
+#define ARG_INDIRECTION_PAGE	0x4
+#define ARG_PAGE_LIST		0x8
+#define ARG_START_ADDRESS	0xc
+
+#define PTR(x)	(x << 2)
+
+	.text
+	.align	PAGE_SIZE
+	.globl	xen_kexec_control_code_size, xen_relocate_kernel
+
+xen_relocate_kernel:
+	/*
+	 * Must be relocatable PIC code callable as a C function.
+	 *
+	 * This function is called by Xen, but by this point the
+	 * hypervisor is dead; we are running on bare metal.
+	 *
+	 * Every machine address passed to this function through
+	 * page_list (e.g. XK_MA_CONTROL_PAGE) is established
+	 * by dom0 during kexec load phase.
+	 *
+	 * Every virtual address passed to this function through page_list
+	 * (e.g. XK_VA_CONTROL_PAGE) is established by hypervisor during
+	 * HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load) hypercall.
+	 *
+	 * 0x4(%esp) - indirection_page,
+	 * 0x8(%esp) - page_list,
+	 * 0xc(%esp) - start_address,
+	 * 0x10(%esp) - cpu_has_pae (ignored),
+	 * 0x14(%esp) - preserve_context (ignored).
+	 */
+
+	/* Zero out flags, and disable interrupts. */
+	pushl	$0
+	popfl
+
+	/* Get page_list address. */
+	movl	ARG_PAGE_LIST(%esp), %esi
+
+	/*
+	 * Map the control page at its virtual address
+	 * in transition page table.
+	 */
+	movl	PTR(XK_VA_CONTROL_PAGE)(%esi), %eax
+
+	/* Get PGD address and PGD entry index. */
+	movl	PTR(XK_VA_PGD_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PGDIR_SHIFT, %ecx
+	andl	$(PTRS_PER_PGD - 1), %ecx
+
+	/* Fill PGD entry with PMD0 reference. */
+	movl	PTR(XK_MA_PMD0_PAGE)(%esi), %edx
+	orl	$_PAGE_PRESENT, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PMD0 address and PMD0 entry index. */
+	movl	PTR(XK_VA_PMD0_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PMD_SHIFT, %ecx
+	andl	$(PTRS_PER_PMD - 1), %ecx
+
+	/* Fill PMD0 entry with PTE0 reference. */
+	movl	PTR(XK_MA_PTE0_PAGE)(%esi), %edx
+	orl	$_KERNPG_TABLE, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PTE0 address and PTE0 entry index. */
+	movl	PTR(XK_VA_PTE0_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PAGE_SHIFT, %ecx
+	andl	$(PTRS_PER_PTE - 1), %ecx
+
+	/* Fill PTE0 entry with control page reference. */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %edx
+	orl	$__PAGE_KERNEL_EXEC, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/*
+	 * Identity map the control page at its machine address
+	 * in transition page table.
+	 */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %eax
+
+	/* Get PGD address and PGD entry index. */
+	movl	PTR(XK_VA_PGD_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PGDIR_SHIFT, %ecx
+	andl	$(PTRS_PER_PGD - 1), %ecx
+
+	/* Fill PGD entry with PMD1 reference. */
+	movl	PTR(XK_MA_PMD1_PAGE)(%esi), %edx
+	orl	$_PAGE_PRESENT, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PMD1 address and PMD1 entry index. */
+	movl	PTR(XK_VA_PMD1_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PMD_SHIFT, %ecx
+	andl	$(PTRS_PER_PMD - 1), %ecx
+
+	/* Fill PMD1 entry with PTE1 reference. */
+	movl	PTR(XK_MA_PTE1_PAGE)(%esi), %edx
+	orl	$_KERNPG_TABLE, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PTE1 address and PTE1 entry index. */
+	movl	PTR(XK_VA_PTE1_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PAGE_SHIFT, %ecx
+	andl	$(PTRS_PER_PTE - 1), %ecx
+
+	/* Fill PTE1 entry with control page reference. */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %edx
+	orl	$__PAGE_KERNEL_EXEC, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/*
+	 * Get machine address of control page now.
+	 * This is impossible after page table switch.
+	 */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %ebx
+
+	/* Get machine address of transition page table now too. */
+	movl	PTR(XK_MA_PGD_PAGE)(%esi), %ecx
+
+	/* Get start_address too. */
+	movl	ARG_START_ADDRESS(%esp), %edx
+
+	/* Get indirection_page address too. */
+	movl	ARG_INDIRECTION_PAGE(%esp), %edi
+
+	/* Switch to transition page table. */
+	movl	%ecx, %cr3
+
+	/* Load IDT. */
+	lidtl	(idt_48 - xen_relocate_kernel)(%ebx)
+
+	/* Load GDT. */
+	leal	(gdt - xen_relocate_kernel)(%ebx), %eax
+	movl	%eax, (gdt_48 - xen_relocate_kernel + 2)(%ebx)
+	lgdtl	(gdt_48 - xen_relocate_kernel)(%ebx)
+
+	/* Load data segment registers. */
+	movl	$(gdt_ds - gdt), %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	%eax, %fs
+	movl	%eax, %gs
+	movl	%eax, %ss
+
+	/* Set up a new stack at the end of the control page (at its machine address). */
+	leal	PAGE_SIZE(%ebx), %esp
+
+	/* Store start_address on the stack. */
+	pushl   %edx
+
+	/* Jump to identity mapped page. */
+	pushl	$0
+	pushl	$(gdt_cs - gdt)
+	addl	$(identity_mapped - xen_relocate_kernel), %ebx
+	pushl	%ebx
+	iretl
+
+identity_mapped:
+	/*
+	 * Set %cr0 to a known state:
+	 *   - disable alignment check,
+	 *   - disable floating point emulation,
+	 *   - disable paging,
+	 *   - no task switch,
+	 *   - disable write protect,
+	 *   - enable protected mode.
+	 */
+	movl	%cr0, %eax
+	andl	$~(X86_CR0_AM | X86_CR0_EM | X86_CR0_PG | X86_CR0_TS | X86_CR0_WP), %eax
+	orl	$(X86_CR0_PE), %eax
+	movl	%eax, %cr0
+
+	/* Set %cr4 to a known state. */
+	xorl	%eax, %eax
+	movl	%eax, %cr4
+
+	jmp	1f
+
+1:
+	/* Flush the TLB (needed?). */
+	movl	%eax, %cr3
+
+	/* Do the copies. */
+	movl	%edi, %ecx	/* Put the indirection_page in %ecx. */
+	xorl	%edi, %edi
+	xorl	%esi, %esi
+	jmp	1f
+
+0:
+	/*
+	 * Loop top: read the next doubleword from the
+	 * indirection page. The indirection page is an array
+	 * of source and destination address pairs. If all the
+	 * pairs do not fit in one page, the last entry of a
+	 * given indirection page points to the next one.
+	 * Copying stops when the done indicator is found
+	 * in the indirection page.
+	 */
+	movl	(%ebx), %ecx
+	addl	$4, %ebx
+
+1:
+	testl	$0x1, %ecx	/* Is it a destination page? */
+	jz	2f
+
+	movl	%ecx, %edi
+	andl	$PAGE_MASK, %edi
+	jmp	0b
+
+2:
+	testl	$0x2, %ecx	/* Is it an indirection page? */
+	jz	2f
+
+	movl	%ecx, %ebx
+	andl	$PAGE_MASK, %ebx
+	jmp	0b
+
+2:
+	testl	$0x4, %ecx	/* Is it the done indicator? */
+	jz	2f
+	jmp	3f
+
+2:
+	testl	$0x8, %ecx	/* Is it the source indicator? */
+	jz	0b		/* Ignore it otherwise. */
+
+	movl	%ecx, %esi
+	andl	$PAGE_MASK, %esi
+	movl	$1024, %ecx
+
+	/* Copy page. */
+	rep	movsl
+	jmp	0b
+
+3:
+	/*
+	 * To be certain of avoiding problems with self-modifying code
+	 * I need to execute a serializing instruction here.
+	 * So I flush the TLB by reloading %cr3 here, it's handy,
+	 * and not processor dependent.
+	 */
+	xorl	%eax, %eax
+	movl	%eax, %cr3
+
+	/*
+	 * Set all of the registers to known values.
+	 * Leave %esp alone.
+	 */
+	xorl	%ebx, %ebx
+	xorl    %ecx, %ecx
+	xorl    %edx, %edx
+	xorl    %esi, %esi
+	xorl    %edi, %edi
+	xorl    %ebp, %ebp
+
+	/* Jump to start_address. */
+	retl
+
+	.align	L1_CACHE_BYTES
+
+gdt:
+	.quad	0x0000000000000000	/* NULL descriptor. */
+
+gdt_cs:
+	.quad	0x00cf9a000000ffff	/* 4 GiB code segment at 0x00000000. */
+
+gdt_ds:
+	.quad	0x00cf92000000ffff	/* 4 GiB data segment at 0x00000000. */
+gdt_end:
+
+gdt_48:
+	.word	gdt_end - gdt - 1	/* GDT limit. */
+	.long	0			/* GDT base - filled in by code above. */
+
+idt_48:
+	.word	0			/* IDT limit. */
+	.long	0			/* IDT base. */
+
+xen_kexec_control_code_size:
+	.long	. - xen_relocate_kernel
-- 
1.5.6.5



From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To368-0004uP-0g; Thu, 27 Dec 2012 02:21:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tR-6E
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:48286] by server-10.bemta-5.messagelabs.com id
	71/ED-13383-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!5
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11566 invoked from network); 27 Dec 2012 02:21:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:24 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645341Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:59 +0100
Message-Id: <1356574740-6806-11-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.193071
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 10/11] drivers/xen: Export vmcoreinfo through
	sysfs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export vmcoreinfo through sysfs.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 drivers/xen/sys-hypervisor.c |   42 +++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 41 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 96453f8..9dd290c 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -368,6 +368,41 @@ static void xen_properties_destroy(void)
 	sysfs_remove_group(hypervisor_kobj, &xen_properties_group);
 }
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+static ssize_t vmcoreinfo_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+	return sprintf(buffer, "%lx %lx\n", xen_vmcoreinfo_maddr,
+						xen_vmcoreinfo_max_size);
+}
+
+HYPERVISOR_ATTR_RO(vmcoreinfo);
+
+static int __init xen_vmcoreinfo_init(void)
+{
+	if (!xen_vmcoreinfo_max_size)
+		return 0;
+
+	return sysfs_create_file(hypervisor_kobj, &vmcoreinfo_attr.attr);
+}
+
+static void xen_vmcoreinfo_destroy(void)
+{
+	if (!xen_vmcoreinfo_max_size)
+		return;
+
+	sysfs_remove_file(hypervisor_kobj, &vmcoreinfo_attr.attr);
+}
+#else
+static int __init xen_vmcoreinfo_init(void)
+{
+	return 0;
+}
+
+static void xen_vmcoreinfo_destroy(void)
+{
+}
+#endif
+
 static int __init hyper_sysfs_init(void)
 {
 	int ret;
@@ -390,9 +425,14 @@ static int __init hyper_sysfs_init(void)
 	ret = xen_properties_init();
 	if (ret)
 		goto prop_out;
+	ret = xen_vmcoreinfo_init();
+	if (ret)
+		goto vmcoreinfo_out;
 
 	goto out;
 
+vmcoreinfo_out:
+	xen_properties_destroy();
 prop_out:
 	xen_sysfs_uuid_destroy();
 uuid_out:
@@ -407,12 +447,12 @@ out:
 
 static void __exit hyper_sysfs_exit(void)
 {
+	xen_vmcoreinfo_destroy();
 	xen_properties_destroy();
 	xen_compilation_destroy();
 	xen_sysfs_uuid_destroy();
 	xen_sysfs_version_destroy();
 	xen_sysfs_type_destroy();
-
 }
 module_init(hyper_sysfs_init);
 module_exit(hyper_sysfs_exit);
-- 
1.5.6.5



From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To366-0004tp-Rh; Thu, 27 Dec 2012 02:21:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To365-0004tK-8F
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:25 +0000
Received: from [85.158.139.211:48235] by server-3.bemta-5.messagelabs.com id
	00/3D-25441-4A0BBD05; Thu, 27 Dec 2012 02:21:24 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11519 invoked from network); 27 Dec 2012 02:21:22 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:22 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1623742Ab2L0CTA
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:00 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:49 +0100
Message-Id: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
X-Bogosity: No, spamicity=0.482930
Subject: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi,

This set of patches contains the initial kexec/kdump implementation for Xen, v3.
Currently only dom0 is supported; however, almost all of the infrastructure
required for domU support is ready.

Jan Beulich suggested merging the Xen x86 assembler code with the bare-metal x86
code. This could simplify the code and reduce the kernel size a bit. However, this
solution requires some changes in the bare-metal x86 code. First of all, the code
which establishes the transition page table should be moved back from
machine_kexec_$(BITS).c to relocate_kernel_$(BITS).S. Another important thing which
would have to change in that case is the format of the page_list array: the Xen
kexec hypercall requires physical addresses to alternate with virtual ones. These
and the other required changes have not been made in this version because I am not
sure that such a solution would be accepted by the kexec/kdump maintainers. I hope
that this email sparks a discussion about the topic.

Daniel

 arch/x86/Kconfig                     |    3 +
 arch/x86/include/asm/kexec.h         |   10 +-
 arch/x86/include/asm/xen/hypercall.h |    6 +
 arch/x86/include/asm/xen/kexec.h     |   79 ++++
 arch/x86/kernel/machine_kexec_64.c   |   12 +-
 arch/x86/kernel/vmlinux.lds.S        |    7 +-
 arch/x86/xen/Kconfig                 |    1 +
 arch/x86/xen/Makefile                |    3 +
 arch/x86/xen/enlighten.c             |   11 +
 arch/x86/xen/kexec.c                 |  150 +++++++
 arch/x86/xen/machine_kexec_32.c      |  226 +++++++++++
 arch/x86/xen/machine_kexec_64.c      |  318 +++++++++++++++
 arch/x86/xen/relocate_kernel_32.S    |  323 +++++++++++++++
 arch/x86/xen/relocate_kernel_64.S    |  309 ++++++++++++++
 drivers/xen/sys-hypervisor.c         |   42 ++-
 include/linux/kexec.h                |   26 ++-
 include/xen/interface/xen.h          |   33 ++
 kernel/Makefile                      |    1 +
 kernel/kexec-firmware.c              |  743 ++++++++++++++++++++++++++++++++++
 kernel/kexec.c                       |   46 ++-
 20 files changed, 2331 insertions(+), 18 deletions(-)

Daniel Kiper (11):
      kexec: introduce kexec firmware support
      x86/kexec: Add extra pointers to transition page table PGD, PUD, PMD and PTE
      xen: Introduce architecture independent data for kexec/kdump
      x86/xen: Introduce architecture dependent data for kexec/kdump
      x86/xen: Register resources required by kexec-tools
      x86/xen: Add i386 kexec/kdump implementation
      x86/xen: Add x86_64 kexec/kdump implementation
      x86/xen: Add kexec/kdump Kconfig and makefile rules
      x86/xen/enlighten: Add init and crash kexec/kdump hooks
      drivers/xen: Export vmcoreinfo through sysfs
      x86: Add Xen kexec control code size check to linker script


From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To367-0004u4-8E; Thu, 27 Dec 2012 02:21:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To365-0004tL-NC
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:25 +0000
Received: from [85.158.139.211:48262] by server-13.bemta-5.messagelabs.com id
	42/4A-10716-4A0BBD05; Thu, 27 Dec 2012 02:21:24 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!3
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11548 invoked from network); 27 Dec 2012 02:21:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:24 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645312Ab2L0CTA
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:00 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:51 +0100
Message-Id: <1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.395697
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 02/11] x86/kexec: Add extra pointers to
	transition page table PGD, PUD, PMD and PTE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some implementations (e.g. Xen PVOPS) cannot reuse parts of the identity page
table to construct the transition page table. This means they require separate
PUDs, PMDs and PTEs for the virtual and the physical (identity) mapping. To
satisfy that requirement, add extra pointers for the PGD, PUD, PMD and PTE
levels and adjust the existing code accordingly.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/include/asm/kexec.h       |   10 +++++++---
 arch/x86/kernel/machine_kexec_64.c |   12 ++++++------
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 6080d26..cedd204 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -157,9 +157,13 @@ struct kimage_arch {
 };
 #else
 struct kimage_arch {
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
+	pgd_t *pgd;
+	pud_t *pud0;
+	pud_t *pud1;
+	pmd_t *pmd0;
+	pmd_t *pmd1;
+	pte_t *pte0;
+	pte_t *pte1;
 };
 #endif
 
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index b3ea9db..976e54b 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -137,9 +137,9 @@ out:
 
 static void free_transition_pgtable(struct kimage *image)
 {
-	free_page((unsigned long)image->arch.pud);
-	free_page((unsigned long)image->arch.pmd);
-	free_page((unsigned long)image->arch.pte);
+	free_page((unsigned long)image->arch.pud0);
+	free_page((unsigned long)image->arch.pmd0);
+	free_page((unsigned long)image->arch.pte0);
 }
 
 static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
@@ -157,7 +157,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pud)
 			goto err;
-		image->arch.pud = pud;
+		image->arch.pud0 = pud;
 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
 	}
 	pud = pud_offset(pgd, vaddr);
@@ -165,7 +165,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pmd)
 			goto err;
-		image->arch.pmd = pmd;
+		image->arch.pmd0 = pmd;
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
 	pmd = pmd_offset(pud, vaddr);
@@ -173,7 +173,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pte)
 			goto err;
-		image->arch.pte = pte;
+		image->arch.pte0 = pte;
 		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
 	}
 	pte = pte_offset_kernel(pmd, vaddr);
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To369-0004vH-OI; Thu, 27 Dec 2012 02:21:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tb-N4
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:12240] by server-11.bemta-5.messagelabs.com id
	DB/F8-31624-6A0BBD05; Thu, 27 Dec 2012 02:21:26 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!8
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11639 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645339Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:57 +0100
Message-Id: <1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.499592
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 08/11] x86/xen: Add kexec/kdump Kconfig and
	makefile rules
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add kexec/kdump Kconfig options and Makefile rules.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/Kconfig      |    3 +++
 arch/x86/xen/Kconfig  |    1 +
 arch/x86/xen/Makefile |    3 +++
 3 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 79795af..e2746c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1600,6 +1600,9 @@ config KEXEC_JUMP
 	  Jump between original kernel and kexeced kernel and invoke
 	  code in physical address mode via KEXEC
 
+config KEXEC_FIRMWARE
+	def_bool n
+
 config PHYSICAL_START
 	hex "Physical address where the kernel is loaded" if (EXPERT || CRASH_DUMP)
 	default "0x1000000"
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 131dacd..8469c1c 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -7,6 +7,7 @@ config XEN
 	select PARAVIRT
 	select PARAVIRT_CLOCK
 	select XEN_HAVE_PVMMU
+	select KEXEC_FIRMWARE if KEXEC
 	depends on X86_64 || (X86_32 && X86_PAE && !X86_VISWS)
 	depends on X86_TSC
 	help
diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
index 96ab2c0..99952d7 100644
--- a/arch/x86/xen/Makefile
+++ b/arch/x86/xen/Makefile
@@ -22,3 +22,6 @@ obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= spinlock.o
 obj-$(CONFIG_XEN_DEBUG_FS)	+= debugfs.o
 obj-$(CONFIG_XEN_DOM0)		+= apic.o vga.o
 obj-$(CONFIG_SWIOTLB_XEN)	+= pci-swiotlb-xen.o
+obj-$(CONFIG_KEXEC_FIRMWARE)	+= kexec.o
+obj-$(CONFIG_KEXEC_FIRMWARE)	+= machine_kexec_$(BITS).o
+obj-$(CONFIG_KEXEC_FIRMWARE)	+= relocate_kernel_$(BITS).o
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36A-0004w0-QX; Thu, 27 Dec 2012 02:21:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To368-0004u9-17
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:28 +0000
Received: from [85.158.139.211:12255] by server-1.bemta-5.messagelabs.com id
	FD/E6-12813-7A0BBD05; Thu, 27 Dec 2012 02:21:27 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!12
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11722 invoked from network); 27 Dec 2012 02:21:26 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:26 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645334Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:53 +0100
Message-Id: <1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.011299
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 04/11] x86/xen: Introduce architecture
	dependent data for kexec/kdump
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce the architecture-dependent constants, structures and
functions required by the Xen kexec/kdump implementation.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/include/asm/xen/hypercall.h |    6 +++
 arch/x86/include/asm/xen/kexec.h     |   79 ++++++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/include/asm/xen/kexec.h

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index c20d1ce..e76a1b8 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -459,6 +459,12 @@ HYPERVISOR_hvm_op(int op, void *arg)
 }
 
 static inline int
+HYPERVISOR_kexec_op(unsigned long op, void *args)
+{
+	return _hypercall2(int, kexec_op, op, args);
+}
+
+static inline int
 HYPERVISOR_tmem_op(
 	struct tmem_op *op)
 {
diff --git a/arch/x86/include/asm/xen/kexec.h b/arch/x86/include/asm/xen/kexec.h
new file mode 100644
index 0000000..d09b52f
--- /dev/null
+++ b/arch/x86/include/asm/xen/kexec.h
@@ -0,0 +1,79 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_X86_XEN_KEXEC_H
+#define _ASM_X86_XEN_KEXEC_H
+
+#define KEXEC_XEN_NO_PAGES	17
+
+#define XK_MA_CONTROL_PAGE	0
+#define XK_VA_CONTROL_PAGE	1
+#define XK_MA_PGD_PAGE		2
+#define XK_VA_PGD_PAGE		3
+#define XK_MA_PUD0_PAGE		4
+#define XK_VA_PUD0_PAGE		5
+#define XK_MA_PUD1_PAGE		6
+#define XK_VA_PUD1_PAGE		7
+#define XK_MA_PMD0_PAGE		8
+#define XK_VA_PMD0_PAGE		9
+#define XK_MA_PMD1_PAGE		10
+#define XK_VA_PMD1_PAGE		11
+#define XK_MA_PTE0_PAGE		12
+#define XK_VA_PTE0_PAGE		13
+#define XK_MA_PTE1_PAGE		14
+#define XK_VA_PTE1_PAGE		15
+#define XK_MA_TABLE_PAGE	16
+
+#ifndef __ASSEMBLY__
+struct xen_kexec_image {
+	unsigned long page_list[KEXEC_XEN_NO_PAGES];
+	unsigned long indirection_page;
+	unsigned long start_address;
+};
+
+struct xen_kexec_load {
+	int type;
+	struct xen_kexec_image image;
+};
+
+extern unsigned int xen_kexec_control_code_size;
+
+#ifdef CONFIG_X86_32
+extern void xen_relocate_kernel(unsigned long indirection_page,
+				unsigned long *page_list,
+				unsigned long start_address,
+				unsigned int has_pae,
+				unsigned int preserve_context);
+#else
+extern void xen_relocate_kernel(unsigned long indirection_page,
+				unsigned long *page_list,
+				unsigned long start_address,
+				unsigned int preserve_context);
+#endif
+#endif
+#endif /* _ASM_X86_XEN_KEXEC_H */
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36B-0004wK-6e; Thu, 27 Dec 2012 02:21:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To368-0004uG-7h
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:28 +0000
Received: from [85.158.139.211:10425] by server-12.bemta-5.messagelabs.com id
	BE/A8-02275-7A0BBD05; Thu, 27 Dec 2012 02:21:27 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!9
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11665 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645338Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:56 +0100
Message-Id: <1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.102230
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 07/11] x86/xen: Add x86_64 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add x86_64 kexec/kdump implementation.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/xen/machine_kexec_64.c   |  318 +++++++++++++++++++++++++++++++++++++
 arch/x86/xen/relocate_kernel_64.S |  309 +++++++++++++++++++++++++++++++++++
 2 files changed, 627 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/xen/machine_kexec_64.c
 create mode 100644 arch/x86/xen/relocate_kernel_64.S

diff --git a/arch/x86/xen/machine_kexec_64.c b/arch/x86/xen/machine_kexec_64.c
new file mode 100644
index 0000000..2600342
--- /dev/null
+++ b/arch/x86/xen/machine_kexec_64.c
@@ -0,0 +1,318 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+
+#include <xen/interface/memory.h>
+#include <xen/xen.h>
+
+#include <asm/xen/hypercall.h>
+#include <asm/xen/kexec.h>
+#include <asm/xen/page.h>
+
+#define __ma(vaddr)	(virt_to_machine(vaddr).maddr)
+
+static void init_level2_page(pmd_t *pmd, unsigned long addr)
+{
+	unsigned long end_addr = addr + PUD_SIZE;
+
+	while (addr < end_addr) {
+		native_set_pmd(pmd++, native_make_pmd(addr | __PAGE_KERNEL_LARGE_EXEC));
+		addr += PMD_SIZE;
+	}
+}
+
+static int init_level3_page(struct kimage *image, pud_t *pud,
+				unsigned long addr, unsigned long last_addr)
+{
+	pmd_t *pmd;
+	struct page *page;
+	unsigned long end_addr = addr + PGDIR_SIZE;
+
+	while ((addr < last_addr) && (addr < end_addr)) {
+		page = firmware_kimage_alloc_control_pages(image, 0);
+
+		if (!page)
+			return -ENOMEM;
+
+		pmd = page_address(page);
+		init_level2_page(pmd, addr);
+		native_set_pud(pud++, native_make_pud(__ma(pmd) | _KERNPG_TABLE));
+		addr += PUD_SIZE;
+	}
+
+	/* Clear the unused entries. */
+	while (addr < end_addr) {
+		native_pud_clear(pud++);
+		addr += PUD_SIZE;
+	}
+
+	return 0;
+}
+
+
+static int init_level4_page(struct kimage *image, pgd_t *pgd,
+				unsigned long addr, unsigned long last_addr)
+{
+	int rc;
+	pud_t *pud;
+	struct page *page;
+	unsigned long end_addr = addr + PTRS_PER_PGD * PGDIR_SIZE;
+
+	while ((addr < last_addr) && (addr < end_addr)) {
+		page = firmware_kimage_alloc_control_pages(image, 0);
+
+		if (!page)
+			return -ENOMEM;
+
+		pud = page_address(page);
+		rc = init_level3_page(image, pud, addr, last_addr);
+
+		if (rc)
+			return rc;
+
+		native_set_pgd(pgd++, native_make_pgd(__ma(pud) | _KERNPG_TABLE));
+		addr += PGDIR_SIZE;
+	}
+
+	/* Clear the unused entries. */
+	while (addr < end_addr) {
+		native_pgd_clear(pgd++);
+		addr += PGDIR_SIZE;
+	}
+
+	return 0;
+}
+
+static void free_transition_pgtable(struct kimage *image)
+{
+	free_page((unsigned long)image->arch.pgd);
+	free_page((unsigned long)image->arch.pud0);
+	free_page((unsigned long)image->arch.pud1);
+	free_page((unsigned long)image->arch.pmd0);
+	free_page((unsigned long)image->arch.pmd1);
+	free_page((unsigned long)image->arch.pte0);
+	free_page((unsigned long)image->arch.pte1);
+}
+
+static int alloc_transition_pgtable(struct kimage *image)
+{
+	image->arch.pgd = (pgd_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pgd)
+		goto err;
+
+	image->arch.pud0 = (pud_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pud0)
+		goto err;
+
+	image->arch.pud1 = (pud_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pud1)
+		goto err;
+
+	image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pmd0)
+		goto err;
+
+	image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pmd1)
+		goto err;
+
+	image->arch.pte0 = (pte_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pte0)
+		goto err;
+
+	image->arch.pte1 = (pte_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pte1)
+		goto err;
+
+	return 0;
+
+err:
+	free_transition_pgtable(image);
+
+	return -ENOMEM;
+}
+
+static int init_pgtable(struct kimage *image, pgd_t *pgd)
+{
+	int rc;
+	unsigned long max_mfn;
+
+	max_mfn = HYPERVISOR_memory_op(XENMEM_maximum_ram_page, NULL);
+
+	rc = init_level4_page(image, pgd, 0, PFN_PHYS(max_mfn));
+
+	if (rc)
+		return rc;
+
+	return alloc_transition_pgtable(image);
+}
+
+struct page *mf_kexec_kimage_alloc_pages(gfp_t gfp_mask,
+						unsigned int order,
+						unsigned long limit)
+{
+	struct page *pages;
+	unsigned int i;
+
+	pages = alloc_pages(gfp_mask, order);
+
+	if (!pages)
+		return NULL;
+
+	BUG_ON(PagePrivate(pages));
+
+	pages->mapping = NULL;
+	set_page_private(pages, order);
+
+	for (i = 0; i < (1 << order); ++i)
+		SetPageReserved(pages + i);
+
+	return pages;
+}
+
+void mf_kexec_kimage_free_pages(struct page *page)
+{
+	unsigned int i, order;
+
+	order = page_private(page);
+
+	for (i = 0; i < (1 << order); ++i)
+		ClearPageReserved(page + i);
+
+	__free_pages(page, order);
+}
+
+unsigned long mf_kexec_page_to_pfn(struct page *page)
+{
+	return pfn_to_mfn(page_to_pfn(page));
+}
+
+struct page *mf_kexec_pfn_to_page(unsigned long mfn)
+{
+	return pfn_to_page(mfn_to_pfn(mfn));
+}
+
+unsigned long mf_kexec_virt_to_phys(volatile void *address)
+{
+	return virt_to_machine(address).maddr;
+}
+
+void *mf_kexec_phys_to_virt(unsigned long address)
+{
+	return phys_to_virt(machine_to_phys(XMADDR(address)).paddr);
+}
+
+int mf_kexec_prepare(struct kimage *image)
+{
+#ifdef CONFIG_KEXEC_JUMP
+	if (image->preserve_context) {
+		pr_info_once("kexec: Context preservation is not "
+				"supported in Xen domains.\n");
+		return -ENOSYS;
+	}
+#endif
+
+	return init_pgtable(image, page_address(image->control_code_page));
+}
+
+int mf_kexec_load(struct kimage *image)
+{
+	void *control_page, *table_page;
+	struct xen_kexec_load xkl = {};
+
+	/* Image is unloaded, nothing to do. */
+	if (!image)
+		return 0;
+
+	table_page = page_address(image->control_code_page);
+	control_page = table_page + PAGE_SIZE;
+
+	memcpy(control_page, xen_relocate_kernel, xen_kexec_control_code_size);
+
+	xkl.type = image->type;
+	xkl.image.page_list[XK_MA_CONTROL_PAGE] = __ma(control_page);
+	xkl.image.page_list[XK_MA_TABLE_PAGE] = __ma(table_page);
+	xkl.image.page_list[XK_MA_PGD_PAGE] = __ma(image->arch.pgd);
+	xkl.image.page_list[XK_MA_PUD0_PAGE] = __ma(image->arch.pud0);
+	xkl.image.page_list[XK_MA_PUD1_PAGE] = __ma(image->arch.pud1);
+	xkl.image.page_list[XK_MA_PMD0_PAGE] = __ma(image->arch.pmd0);
+	xkl.image.page_list[XK_MA_PMD1_PAGE] = __ma(image->arch.pmd1);
+	xkl.image.page_list[XK_MA_PTE0_PAGE] = __ma(image->arch.pte0);
+	xkl.image.page_list[XK_MA_PTE1_PAGE] = __ma(image->arch.pte1);
+	xkl.image.indirection_page = image->head;
+	xkl.image.start_address = image->start;
+
+	return HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load, &xkl);
+}
+
+void mf_kexec_cleanup(struct kimage *image)
+{
+	free_transition_pgtable(image);
+}
+
+void mf_kexec_unload(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_load xkl = {};
+
+	if (!image)
+		return;
+
+	xkl.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_unload, &xkl);
+
+	WARN(rc, "kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+}
+
+void mf_kexec_shutdown(void)
+{
+}
+
+void mf_kexec(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_exec xke = {};
+
+	xke.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec, &xke);
+
+	pr_emerg("kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+	BUG();
+}
diff --git a/arch/x86/xen/relocate_kernel_64.S b/arch/x86/xen/relocate_kernel_64.S
new file mode 100644
index 0000000..8f641f1
--- /dev/null
+++ b/arch/x86/xen/relocate_kernel_64.S
@@ -0,0 +1,309 @@
+/*
+ * Copyright (c) 2002-2005 Eric Biederman <ebiederm@xmission.com>
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/page_types.h>
+#include <asm/pgtable_types.h>
+#include <asm/processor-flags.h>
+
+#include <asm/xen/kexec.h>
+
+#define PTR(x)	(x << 3)
+
+	.text
+	.code64
+	.globl	xen_kexec_control_code_size, xen_relocate_kernel
+
+xen_relocate_kernel:
+	/*
+	 * Must be relocatable PIC code callable as a C function.
+	 *
+	 * This function is called by Xen, but at this point the
+	 * hypervisor is dead and we are running on bare metal.
+	 *
+	 * Every machine address passed to this function through
+	 * page_list (e.g. XK_MA_CONTROL_PAGE) is established
+	 * by dom0 during kexec load phase.
+	 *
+	 * Every virtual address passed to this function through page_list
+	 * (e.g. XK_VA_CONTROL_PAGE) is established by hypervisor during
+	 * HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load) hypercall.
+	 *
+	 * %rdi - indirection_page,
+	 * %rsi - page_list,
+	 * %rdx - start_address,
+	 * %ecx - preserve_context (ignored).
+	 */
+
+	/* Zero out flags, and disable interrupts. */
+	pushq	$0
+	popfq
+
+	/*
+	 * Map the control page at its virtual address
+	 * in transition page table.
+	 */
+	movq	PTR(XK_VA_CONTROL_PAGE)(%rsi), %r8
+
+	/* Get PGD address and PGD entry index. */
+	movq	PTR(XK_VA_PGD_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PGDIR_SHIFT, %r10
+	andq	$(PTRS_PER_PGD - 1), %r10
+
+	/* Fill PGD entry with PUD0 reference. */
+	movq	PTR(XK_MA_PUD0_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PUD0 address and PUD0 entry index. */
+	movq	PTR(XK_VA_PUD0_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PUD_SHIFT, %r10
+	andq	$(PTRS_PER_PUD - 1), %r10
+
+	/* Fill PUD0 entry with PMD0 reference. */
+	movq	PTR(XK_MA_PMD0_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PMD0 address and PMD0 entry index. */
+	movq	PTR(XK_VA_PMD0_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PMD_SHIFT, %r10
+	andq	$(PTRS_PER_PMD - 1), %r10
+
+	/* Fill PMD0 entry with PTE0 reference. */
+	movq	PTR(XK_MA_PTE0_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PTE0 address and PTE0 entry index. */
+	movq	PTR(XK_VA_PTE0_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PAGE_SHIFT, %r10
+	andq	$(PTRS_PER_PTE - 1), %r10
+
+	/* Fill PTE0 entry with control page reference. */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r11
+	orq	$__PAGE_KERNEL_EXEC, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/*
+	 * Identity map the control page at its machine address
+	 * in transition page table.
+	 */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r8
+
+	/* Get PGD address and PGD entry index. */
+	movq	PTR(XK_VA_PGD_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PGDIR_SHIFT, %r10
+	andq	$(PTRS_PER_PGD - 1), %r10
+
+	/* Fill PGD entry with PUD1 reference. */
+	movq	PTR(XK_MA_PUD1_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PUD1 address and PUD1 entry index. */
+	movq	PTR(XK_VA_PUD1_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PUD_SHIFT, %r10
+	andq	$(PTRS_PER_PUD - 1), %r10
+
+	/* Fill PUD1 entry with PMD1 reference. */
+	movq	PTR(XK_MA_PMD1_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PMD1 address and PMD1 entry index. */
+	movq	PTR(XK_VA_PMD1_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PMD_SHIFT, %r10
+	andq	$(PTRS_PER_PMD - 1), %r10
+
+	/* Fill PMD1 entry with PTE1 reference. */
+	movq	PTR(XK_MA_PTE1_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PTE1 address and PTE1 entry index. */
+	movq	PTR(XK_VA_PTE1_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PAGE_SHIFT, %r10
+	andq	$(PTRS_PER_PTE - 1), %r10
+
+	/* Fill PTE1 entry with control page reference. */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r11
+	orq	$__PAGE_KERNEL_EXEC, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/*
+	 * Get machine address of control page now.
+	 * This is impossible after page table switch.
+	 */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r8
+
+	/* Get machine address of identity page table now too. */
+	movq	PTR(XK_MA_TABLE_PAGE)(%rsi), %r9
+
+	/* Get machine address of transition page table now too. */
+	movq	PTR(XK_MA_PGD_PAGE)(%rsi), %r10
+
+	/* Switch to transition page table. */
+	movq	%r10, %cr3
+
+	/* Set up a new stack at the end of the control page (machine address). */
+	leaq	PAGE_SIZE(%r8), %rsp
+
+	/* Store start_address on the stack. */
+	pushq   %rdx
+
+	/* Jump to identity mapped page. */
+	addq	$(identity_mapped - xen_relocate_kernel), %r8
+	jmpq	*%r8
+
+identity_mapped:
+	/* Switch to identity page table. */
+	movq	%r9, %cr3
+
+	/*
+	 * Set %cr0 to a known state:
+	 *   - disable alignment check,
+	 *   - disable floating point emulation,
+	 *   - no task switch,
+	 *   - disable write protect,
+	 *   - enable protected mode,
+	 *   - enable paging.
+	 */
+	movq	%cr0, %rax
+	andq	$~(X86_CR0_AM | X86_CR0_EM | X86_CR0_TS | X86_CR0_WP), %rax
+	orl	$(X86_CR0_PE | X86_CR0_PG), %eax
+	movq	%rax, %cr0
+
+	/*
+	 * Set %cr4 to a known state:
+	 *   - enable physical address extension.
+	 */
+	movq	$X86_CR4_PAE, %rax
+	movq	%rax, %cr4
+
+	jmp	1f
+
+1:
+	/* Flush the TLB (needed?). */
+	movq	%r9, %cr3
+
+	/* Do the copies. */
+	movq	%rdi, %rcx	/* Put the indirection_page in %rcx. */
+	xorq	%rdi, %rdi
+	xorq	%rsi, %rsi
+	jmp	1f
+
+0:
+	/*
+	 * Top of loop: read another quadword from the indirection page.
+	 * The indirection page is an array of source and destination
+	 * address pairs. If all pairs do not fit in one page, the last
+	 * entry of a given indirection page points to the next one.
+	 * Copying stops when the done indicator is found in the
+	 * indirection page.
+	 */
+	movq	(%rbx), %rcx
+	addq	$8, %rbx
+
+1:
+	testq	$0x1, %rcx	/* Is it a destination page? */
+	jz	2f
+
+	movq	%rcx, %rdi
+	andq	$PAGE_MASK, %rdi
+	jmp	0b
+
+2:
+	testq	$0x2, %rcx	/* Is it an indirection page? */
+	jz	2f
+
+	movq	%rcx, %rbx
+	andq	$PAGE_MASK, %rbx
+	jmp	0b
+
+2:
+	testq	$0x4, %rcx	/* Is it the done indicator? */
+	jz	2f
+	jmp	3f
+
+2:
+	testq	$0x8, %rcx	/* Is it the source indicator? */
+	jz	0b		/* Ignore it otherwise. */
+
+	movq	%rcx, %rsi
+	andq	$PAGE_MASK, %rsi
+	movq	$512, %rcx
+
+	/* Copy page. */
+	rep	movsq
+	jmp	0b
+
+3:
+	/*
+	 * To be certain of avoiding problems with self-modifying code
+	 * we need to execute a serializing instruction here.
+	 * Flushing the TLB by reloading %cr3 serves that purpose;
+	 * it is handy and not processor dependent.
+	 */
+	movq	%cr3, %rax
+	movq	%rax, %cr3
+
+	/*
+	 * Set all of the registers to known values.
+	 * Leave %rsp alone.
+	 */
+	xorq	%rax, %rax
+	xorq	%rbx, %rbx
+	xorq    %rcx, %rcx
+	xorq    %rdx, %rdx
+	xorq    %rsi, %rsi
+	xorq    %rdi, %rdi
+	xorq    %rbp, %rbp
+	xorq	%r8, %r8
+	xorq	%r9, %r9
+	xorq	%r10, %r10
+	xorq	%r11, %r11
+	xorq	%r12, %r12
+	xorq	%r13, %r13
+	xorq	%r14, %r14
+	xorq	%r15, %r15
+
+	/* Jump to start_address. */
+	retq
+
+xen_kexec_control_code_size:
+	.long	. - xen_relocate_kernel
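
For readers following the copy loop above: the indirection page walked by
xen_relocate_kernel tags each entry with a flag in its low bits (0x1
destination, 0x2 next indirection page, 0x4 done, 0x8 source). A minimal
Python sketch of that walk, purely illustrative (the function name and the
`pages` memory model are hypothetical, not part of the patch), assuming
4 KiB pages:

```python
PAGE_SIZE = 0x1000
PAGE_MASK = ~(PAGE_SIZE - 1)

IND_DESTINATION = 0x1   # entry names the next destination page
IND_INDIRECTION = 0x2   # entry points to the next indirection page
IND_DONE = 0x4          # entry terminates the walk
IND_SOURCE = 0x8        # entry names a source page to copy

def walk_indirection(pages, head):
    """Simulate the relocate_kernel copy loop.

    'pages' maps an indirection page address to its list of entries;
    'head' is the first entry (as passed in %rdi). Returns the list of
    (destination, source) page copies, in order.
    """
    copies = []
    dest = None
    ind = None
    idx = 0
    entry = head
    while True:
        if entry & IND_DESTINATION:
            dest = entry & PAGE_MASK        # remember where to copy to
        elif entry & IND_INDIRECTION:
            ind = entry & PAGE_MASK         # switch to next indirection page
            idx = 0
        elif entry & IND_DONE:
            return copies                   # done indicator: stop
        elif entry & IND_SOURCE:
            copies.append((dest, entry & PAGE_MASK))
            dest += PAGE_SIZE               # rep movsq leaves %rdi past the page
        entry = pages[ind][idx]             # read next quadword
        idx += 1
```

Note that the destination advances implicitly after each copy, mirroring
how `rep movsq` leaves %rdi pointing past the page just written.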
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To368-0004uP-0g; Thu, 27 Dec 2012 02:21:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tR-6E
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:48286] by server-10.bemta-5.messagelabs.com id
	71/ED-13383-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!5
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11566 invoked from network); 27 Dec 2012 02:21:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:24 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645341Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:59 +0100
Message-Id: <1356574740-6806-11-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.193071
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 10/11] drivers/xen: Export vmcoreinfo through
	sysfs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export the vmcoreinfo machine address and maximum size through sysfs.
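
Since the attribute is created on hypervisor_kobj, user space would read it
at /sys/hypervisor/vmcoreinfo. A small sketch of parsing the "%lx %lx" pair
written by vmcoreinfo_show() (the helper name below is hypothetical):

```python
def parse_vmcoreinfo(text):
    """Split the "%lx %lx" pair produced by vmcoreinfo_show():
    machine address and maximum size of the vmcoreinfo region."""
    maddr, max_size = text.split()
    return int(maddr, 16), int(max_size, 16)

# On a dom0 with this patch applied, one would use it roughly as:
#   with open("/sys/hypervisor/vmcoreinfo") as f:
#       maddr, size = parse_vmcoreinfo(f.read())
```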

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 drivers/xen/sys-hypervisor.c |   42 +++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 41 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 96453f8..9dd290c 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -368,6 +368,41 @@ static void xen_properties_destroy(void)
 	sysfs_remove_group(hypervisor_kobj, &xen_properties_group);
 }
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+static ssize_t vmcoreinfo_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+	return sprintf(buffer, "%lx %lx\n", xen_vmcoreinfo_maddr,
+						xen_vmcoreinfo_max_size);
+}
+
+HYPERVISOR_ATTR_RO(vmcoreinfo);
+
+static int __init xen_vmcoreinfo_init(void)
+{
+	if (!xen_vmcoreinfo_max_size)
+		return 0;
+
+	return sysfs_create_file(hypervisor_kobj, &vmcoreinfo_attr.attr);
+}
+
+static void xen_vmcoreinfo_destroy(void)
+{
+	if (!xen_vmcoreinfo_max_size)
+		return;
+
+	sysfs_remove_file(hypervisor_kobj, &vmcoreinfo_attr.attr);
+}
+#else
+static int __init xen_vmcoreinfo_init(void)
+{
+	return 0;
+}
+
+static void xen_vmcoreinfo_destroy(void)
+{
+}
+#endif
+
 static int __init hyper_sysfs_init(void)
 {
 	int ret;
@@ -390,9 +425,14 @@ static int __init hyper_sysfs_init(void)
 	ret = xen_properties_init();
 	if (ret)
 		goto prop_out;
+	ret = xen_vmcoreinfo_init();
+	if (ret)
+		goto vmcoreinfo_out;
 
 	goto out;
 
+vmcoreinfo_out:
+	xen_properties_destroy();
 prop_out:
 	xen_sysfs_uuid_destroy();
 uuid_out:
@@ -407,12 +447,12 @@ out:
 
 static void __exit hyper_sysfs_exit(void)
 {
+	xen_vmcoreinfo_destroy();
 	xen_properties_destroy();
 	xen_compilation_destroy();
 	xen_sysfs_uuid_destroy();
 	xen_sysfs_version_destroy();
 	xen_sysfs_type_destroy();
-
 }
 module_init(hyper_sysfs_init);
 module_exit(hyper_sysfs_exit);
-- 
1.5.6.5



From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36A-0004w0-QX; Thu, 27 Dec 2012 02:21:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To368-0004u9-17
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:28 +0000
Received: from [85.158.139.211:12255] by server-1.bemta-5.messagelabs.com id
	FD/E6-12813-7A0BBD05; Thu, 27 Dec 2012 02:21:27 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!12
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11722 invoked from network); 27 Dec 2012 02:21:26 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:26 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645334Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:53 +0100
Message-Id: <1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.011299
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 04/11] x86/xen: Introduce architecture
	dependent data for kexec/kdump
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce architecture dependent constants, structures and
functions required by Xen kexec/kdump implementation.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/include/asm/xen/hypercall.h |    6 +++
 arch/x86/include/asm/xen/kexec.h     |   79 ++++++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/include/asm/xen/kexec.h

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index c20d1ce..e76a1b8 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -459,6 +459,12 @@ HYPERVISOR_hvm_op(int op, void *arg)
 }
 
 static inline int
+HYPERVISOR_kexec_op(unsigned long op, void *args)
+{
+	return _hypercall2(int, kexec_op, op, args);
+}
+
+static inline int
 HYPERVISOR_tmem_op(
 	struct tmem_op *op)
 {
diff --git a/arch/x86/include/asm/xen/kexec.h b/arch/x86/include/asm/xen/kexec.h
new file mode 100644
index 0000000..d09b52f
--- /dev/null
+++ b/arch/x86/include/asm/xen/kexec.h
@@ -0,0 +1,79 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_X86_XEN_KEXEC_H
+#define _ASM_X86_XEN_KEXEC_H
+
+#define KEXEC_XEN_NO_PAGES	17
+
+#define XK_MA_CONTROL_PAGE	0
+#define XK_VA_CONTROL_PAGE	1
+#define XK_MA_PGD_PAGE		2
+#define XK_VA_PGD_PAGE		3
+#define XK_MA_PUD0_PAGE		4
+#define XK_VA_PUD0_PAGE		5
+#define XK_MA_PUD1_PAGE		6
+#define XK_VA_PUD1_PAGE		7
+#define XK_MA_PMD0_PAGE		8
+#define XK_VA_PMD0_PAGE		9
+#define XK_MA_PMD1_PAGE		10
+#define XK_VA_PMD1_PAGE		11
+#define XK_MA_PTE0_PAGE		12
+#define XK_VA_PTE0_PAGE		13
+#define XK_MA_PTE1_PAGE		14
+#define XK_VA_PTE1_PAGE		15
+#define XK_MA_TABLE_PAGE	16
+
+#ifndef __ASSEMBLY__
+struct xen_kexec_image {
+	unsigned long page_list[KEXEC_XEN_NO_PAGES];
+	unsigned long indirection_page;
+	unsigned long start_address;
+};
+
+struct xen_kexec_load {
+	int type;
+	struct xen_kexec_image image;
+};
+
+extern unsigned int xen_kexec_control_code_size;
+
+#ifdef CONFIG_X86_32
+extern void xen_relocate_kernel(unsigned long indirection_page,
+				unsigned long *page_list,
+				unsigned long start_address,
+				unsigned int has_pae,
+				unsigned int preserve_context);
+#else
+extern void xen_relocate_kernel(unsigned long indirection_page,
+				unsigned long *page_list,
+				unsigned long start_address,
+				unsigned int preserve_context);
+#endif
+#endif
+#endif /* _ASM_X86_XEN_KEXEC_H */
-- 
1.5.6.5



From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36B-0004wh-KR; Thu, 27 Dec 2012 02:21:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To368-0004uN-D9
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:28 +0000
Received: from [85.158.139.211:12260] by server-14.bemta-5.messagelabs.com id
	09/5F-09538-7A0BBD05; Thu, 27 Dec 2012 02:21:27 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!11
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11707 invoked from network); 27 Dec 2012 02:21:26 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:26 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645337Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:55 +0100
Message-Id: <1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.070114
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 06/11] x86/xen: Add i386 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add i386 kexec/kdump implementation.

v2 - suggestions/fixes:
   - allocate transition page table pages below 4 GiB
     (suggested by Jan Beulich).
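
The transition page table built by the relocate_kernel_32.S code added below
indexes each level with a shift-and-mask of the virtual address (the
shrl/andl pairs). A sketch of that computation, assuming the usual x86-32
PAE constants (PGDIR_SHIFT=30, PMD_SHIFT=21, PAGE_SHIFT=12; values mirror
pgtable-3level_types.h, not anything defined in this patch):

```python
# Assumed x86-32 PAE paging constants.
PGDIR_SHIFT, PMD_SHIFT, PAGE_SHIFT = 30, 21, 12
PTRS_PER_PGD, PTRS_PER_PMD, PTRS_PER_PTE = 4, 512, 512

def table_indices(vaddr):
    """Return (pgd, pmd, pte) entry indices for a 32-bit virtual
    address, mirroring the shrl/andl pairs in the assembly."""
    pgd = (vaddr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
    pmd = (vaddr >> PMD_SHIFT) & (PTRS_PER_PMD - 1)
    pte = (vaddr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
    return pgd, pmd, pte
```

Each resulting index is scaled by 8 in the assembly (the `(%ebx, %ecx, 8)`
addressing) because PAE entries are 64 bits wide.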

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/xen/machine_kexec_32.c   |  226 ++++++++++++++++++++++++++
 arch/x86/xen/relocate_kernel_32.S |  323 +++++++++++++++++++++++++++++++++++++
 2 files changed, 549 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/xen/machine_kexec_32.c
 create mode 100644 arch/x86/xen/relocate_kernel_32.S

diff --git a/arch/x86/xen/machine_kexec_32.c b/arch/x86/xen/machine_kexec_32.c
new file mode 100644
index 0000000..011a5e8
--- /dev/null
+++ b/arch/x86/xen/machine_kexec_32.c
@@ -0,0 +1,226 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+
+#include <xen/xen.h>
+#include <xen/xen-ops.h>
+
+#include <asm/xen/hypercall.h>
+#include <asm/xen/kexec.h>
+#include <asm/xen/page.h>
+
+#define __ma(vaddr)	(virt_to_machine(vaddr).maddr)
+
+static void *alloc_pgtable_page(struct kimage *image)
+{
+	struct page *page;
+
+	page = firmware_kimage_alloc_control_pages(image, 0);
+
+	if (!page || !page_address(page))
+		return NULL;
+
+	memset(page_address(page), 0, PAGE_SIZE);
+
+	return page_address(page);
+}
+
+static int alloc_transition_pgtable(struct kimage *image)
+{
+	image->arch.pgd = alloc_pgtable_page(image);
+
+	if (!image->arch.pgd)
+		return -ENOMEM;
+
+	image->arch.pmd0 = alloc_pgtable_page(image);
+
+	if (!image->arch.pmd0)
+		return -ENOMEM;
+
+	image->arch.pmd1 = alloc_pgtable_page(image);
+
+	if (!image->arch.pmd1)
+		return -ENOMEM;
+
+	image->arch.pte0 = alloc_pgtable_page(image);
+
+	if (!image->arch.pte0)
+		return -ENOMEM;
+
+	image->arch.pte1 = alloc_pgtable_page(image);
+
+	if (!image->arch.pte1)
+		return -ENOMEM;
+
+	return 0;
+}
+
+struct page *mf_kexec_kimage_alloc_pages(gfp_t gfp_mask,
+						unsigned int order,
+						unsigned long limit)
+{
+	struct page *pages;
+	unsigned int address_bits, i;
+
+	pages = alloc_pages(gfp_mask, order);
+
+	if (!pages)
+		return NULL;
+
+	address_bits = (limit == ULONG_MAX) ? BITS_PER_LONG : ilog2(limit);
+
+	/* Relocate the set of pages below the given limit. */
+	if (xen_create_contiguous_region((unsigned long)page_address(pages),
+							order, address_bits)) {
+		__free_pages(pages, order);
+		return NULL;
+	}
+
+	BUG_ON(PagePrivate(pages));
+
+	pages->mapping = NULL;
+	set_page_private(pages, order);
+
+	for (i = 0; i < (1 << order); ++i)
+		SetPageReserved(pages + i);
+
+	return pages;
+}
+
+void mf_kexec_kimage_free_pages(struct page *page)
+{
+	unsigned int i, order;
+
+	order = page_private(page);
+
+	for (i = 0; i < (1 << order); ++i)
+		ClearPageReserved(page + i);
+
+	xen_destroy_contiguous_region((unsigned long)page_address(page), order);
+	__free_pages(page, order);
+}
+
+unsigned long mf_kexec_page_to_pfn(struct page *page)
+{
+	return pfn_to_mfn(page_to_pfn(page));
+}
+
+struct page *mf_kexec_pfn_to_page(unsigned long mfn)
+{
+	return pfn_to_page(mfn_to_pfn(mfn));
+}
+
+unsigned long mf_kexec_virt_to_phys(volatile void *address)
+{
+	return virt_to_machine(address).maddr;
+}
+
+void *mf_kexec_phys_to_virt(unsigned long address)
+{
+	return phys_to_virt(machine_to_phys(XMADDR(address)).paddr);
+}
+
+int mf_kexec_prepare(struct kimage *image)
+{
+#ifdef CONFIG_KEXEC_JUMP
+	if (image->preserve_context) {
+		pr_info_once("kexec: Context preservation is not "
+				"supported in Xen domains.\n");
+		return -ENOSYS;
+	}
+#endif
+
+	return alloc_transition_pgtable(image);
+}
+
+int mf_kexec_load(struct kimage *image)
+{
+	void *control_page;
+	struct xen_kexec_load xkl = {};
+
+	/* Image is unloaded, nothing to do. */
+	if (!image)
+		return 0;
+
+	control_page = page_address(image->control_code_page);
+	memcpy(control_page, xen_relocate_kernel, xen_kexec_control_code_size);
+
+	xkl.type = image->type;
+	xkl.image.page_list[XK_MA_CONTROL_PAGE] = __ma(control_page);
+	xkl.image.page_list[XK_MA_TABLE_PAGE] = 0; /* Unused. */
+	xkl.image.page_list[XK_MA_PGD_PAGE] = __ma(image->arch.pgd);
+	xkl.image.page_list[XK_MA_PUD0_PAGE] = 0; /* Unused. */
+	xkl.image.page_list[XK_MA_PUD1_PAGE] = 0; /* Unused. */
+	xkl.image.page_list[XK_MA_PMD0_PAGE] = __ma(image->arch.pmd0);
+	xkl.image.page_list[XK_MA_PMD1_PAGE] = __ma(image->arch.pmd1);
+	xkl.image.page_list[XK_MA_PTE0_PAGE] = __ma(image->arch.pte0);
+	xkl.image.page_list[XK_MA_PTE1_PAGE] = __ma(image->arch.pte1);
+	xkl.image.indirection_page = image->head;
+	xkl.image.start_address = image->start;
+
+	return HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load, &xkl);
+}
+
+void mf_kexec_cleanup(struct kimage *image)
+{
+}
+
+void mf_kexec_unload(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_load xkl = {};
+
+	if (!image)
+		return;
+
+	xkl.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_unload, &xkl);
+
+	WARN(rc, "kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+}
+
+void mf_kexec_shutdown(void)
+{
+}
+
+void mf_kexec(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_exec xke = {};
+
+	xke.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec, &xke);
+
+	pr_emerg("kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+	BUG();
+}
diff --git a/arch/x86/xen/relocate_kernel_32.S b/arch/x86/xen/relocate_kernel_32.S
new file mode 100644
index 0000000..0e81830
--- /dev/null
+++ b/arch/x86/xen/relocate_kernel_32.S
@@ -0,0 +1,323 @@
+/*
+ * Copyright (c) 2002-2005 Eric Biederman <ebiederm@xmission.com>
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/cache.h>
+#include <asm/page_types.h>
+#include <asm/pgtable_types.h>
+#include <asm/processor-flags.h>
+
+#include <asm/xen/kexec.h>
+
+#define ARG_INDIRECTION_PAGE	0x4
+#define ARG_PAGE_LIST		0x8
+#define ARG_START_ADDRESS	0xc
+
+#define PTR(x)	(x << 2)
+
+	.text
+	.align	PAGE_SIZE
+	.globl	xen_kexec_control_code_size, xen_relocate_kernel
+
+xen_relocate_kernel:
+	/*
+	 * Must be relocatable PIC code callable as a C function.
+	 *
+	 * This function is called by Xen, but by the time it runs
+	 * the hypervisor is dead and we are running on bare metal.
+	 *
+	 * Every machine address passed to this function through
+	 * page_list (e.g. XK_MA_CONTROL_PAGE) is established
+	 * by dom0 during kexec load phase.
+	 *
+	 * Every virtual address passed to this function through page_list
+	 * (e.g. XK_VA_CONTROL_PAGE) is established by hypervisor during
+	 * HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load) hypercall.
+	 *
+	 * 0x4(%esp) - indirection_page,
+	 * 0x8(%esp) - page_list,
+	 * 0xc(%esp) - start_address,
+	 * 0x10(%esp) - cpu_has_pae (ignored),
+	 * 0x14(%esp) - preserve_context (ignored).
+	 */
+
+	/* Zero out flags, and disable interrupts. */
+	pushl	$0
+	popfl
+
+	/* Get page_list address. */
+	movl	ARG_PAGE_LIST(%esp), %esi
+
+	/*
+	 * Map the control page at its virtual address
+	 * in transition page table.
+	 */
+	movl	PTR(XK_VA_CONTROL_PAGE)(%esi), %eax
+
+	/* Get PGD address and PGD entry index. */
+	movl	PTR(XK_VA_PGD_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PGDIR_SHIFT, %ecx
+	andl	$(PTRS_PER_PGD - 1), %ecx
+
+	/* Fill PGD entry with PMD0 reference. */
+	movl	PTR(XK_MA_PMD0_PAGE)(%esi), %edx
+	orl	$_PAGE_PRESENT, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PMD0 address and PMD0 entry index. */
+	movl	PTR(XK_VA_PMD0_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PMD_SHIFT, %ecx
+	andl	$(PTRS_PER_PMD - 1), %ecx
+
+	/* Fill PMD0 entry with PTE0 reference. */
+	movl	PTR(XK_MA_PTE0_PAGE)(%esi), %edx
+	orl	$_KERNPG_TABLE, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PTE0 address and PTE0 entry index. */
+	movl	PTR(XK_VA_PTE0_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PAGE_SHIFT, %ecx
+	andl	$(PTRS_PER_PTE - 1), %ecx
+
+	/* Fill PTE0 entry with control page reference. */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %edx
+	orl	$__PAGE_KERNEL_EXEC, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/*
+	 * Identity-map the control page at its machine address
+	 * in the transition page table.
+	 */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %eax
+
+	/* Get PGD address and PGD entry index. */
+	movl	PTR(XK_VA_PGD_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PGDIR_SHIFT, %ecx
+	andl	$(PTRS_PER_PGD - 1), %ecx
+
+	/* Fill PGD entry with PMD1 reference. */
+	movl	PTR(XK_MA_PMD1_PAGE)(%esi), %edx
+	orl	$_PAGE_PRESENT, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PMD1 address and PMD1 entry index. */
+	movl	PTR(XK_VA_PMD1_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PMD_SHIFT, %ecx
+	andl	$(PTRS_PER_PMD - 1), %ecx
+
+	/* Fill PMD1 entry with PTE1 reference. */
+	movl	PTR(XK_MA_PTE1_PAGE)(%esi), %edx
+	orl	$_KERNPG_TABLE, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/* Get PTE1 address and PTE1 entry index. */
+	movl	PTR(XK_VA_PTE1_PAGE)(%esi), %ebx
+	movl	%eax, %ecx
+	shrl	$PAGE_SHIFT, %ecx
+	andl	$(PTRS_PER_PTE - 1), %ecx
+
+	/* Fill PTE1 entry with control page reference. */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %edx
+	orl	$__PAGE_KERNEL_EXEC, %edx
+	movl	%edx, (%ebx, %ecx, 8)
+
+	/*
+	 * Get the machine address of the control page now;
+	 * it cannot be read after the page table switch.
+	 */
+	movl	PTR(XK_MA_CONTROL_PAGE)(%esi), %ebx
+
+	/* Get the machine address of the transition page table now too. */
+	movl	PTR(XK_MA_PGD_PAGE)(%esi), %ecx
+
+	/* Get start_address too. */
+	movl	ARG_START_ADDRESS(%esp), %edx
+
+	/* Get indirection_page address too. */
+	movl	ARG_INDIRECTION_PAGE(%esp), %edi
+
+	/* Switch to transition page table. */
+	movl	%ecx, %cr3
+
+	/* Load IDT. */
+	lidtl	(idt_48 - xen_relocate_kernel)(%ebx)
+
+	/* Load GDT. */
+	leal	(gdt - xen_relocate_kernel)(%ebx), %eax
+	movl	%eax, (gdt_48 - xen_relocate_kernel + 2)(%ebx)
+	lgdtl	(gdt_48 - xen_relocate_kernel)(%ebx)
+
+	/* Load data segment registers. */
+	movl	$(gdt_ds - gdt), %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	%eax, %fs
+	movl	%eax, %gs
+	movl	%eax, %ss
+
+	/* Set up a new stack at the end of the control page (machine address). */
+	leal	PAGE_SIZE(%ebx), %esp
+
+	/* Store start_address on the stack. */
+	pushl   %edx
+
+	/* Jump to identity mapped page. */
+	pushl	$0
+	pushl	$(gdt_cs - gdt)
+	addl	$(identity_mapped - xen_relocate_kernel), %ebx
+	pushl	%ebx
+	iretl
+
+identity_mapped:
+	/*
+	 * Set %cr0 to a known state:
+	 *   - disable alignment check,
+	 *   - disable floating point emulation,
+	 *   - disable paging,
+	 *   - no task switch,
+	 *   - disable write protect,
+	 *   - enable protected mode.
+	 */
+	movl	%cr0, %eax
+	andl	$~(X86_CR0_AM | X86_CR0_EM | X86_CR0_PG | X86_CR0_TS | X86_CR0_WP), %eax
+	orl	$(X86_CR0_PE), %eax
+	movl	%eax, %cr0
+
+	/* Set %cr4 to a known state. */
+	xorl	%eax, %eax
+	movl	%eax, %cr4
+
+	jmp	1f
+
+1:
+	/* Flush the TLB (needed?). */
+	movl	%eax, %cr3
+
+	/* Do the copies. */
+	movl	%edi, %ecx	/* Put the indirection_page in %ecx. */
+	xorl	%edi, %edi
+	xorl	%esi, %esi
+	jmp	1f
+
+0:
+	/*
+	 * Loop top: read another doubleword from the indirection page.
+	 * The indirection page is an array of source and destination
+	 * address pairs. If all pairs do not fit in one page, the last
+	 * entry of a given indirection page points to the next one.
+	 * Copying stops when the done indicator is found in the
+	 * indirection page.
+	 */
+	movl	(%ebx), %ecx
+	addl	$4, %ebx
+
+1:
+	testl	$0x1, %ecx	/* Is it a destination page? */
+	jz	2f
+
+	movl	%ecx, %edi
+	andl	$PAGE_MASK, %edi
+	jmp	0b
+
+2:
+	testl	$0x2, %ecx	/* Is it an indirection page? */
+	jz	2f
+
+	movl	%ecx, %ebx
+	andl	$PAGE_MASK, %ebx
+	jmp	0b
+
+2:
+	testl	$0x4, %ecx	/* Is it the done indicator? */
+	jz	2f
+	jmp	3f
+
+2:
+	testl	$0x8, %ecx	/* Is it the source indicator? */
+	jz	0b		/* Ignore it otherwise. */
+
+	movl	%ecx, %esi
+	andl	$PAGE_MASK, %esi
+	movl	$1024, %ecx
+
+	/* Copy page. */
+	rep	movsl
+	jmp	0b
+
+3:
+	/*
+	 * To be certain of avoiding problems with self-modifying code,
+	 * I need to execute a serializing instruction here. So I flush
+	 * the TLB by reloading %cr3 here; it's handy and not processor
+	 * dependent.
+	 */
+	xorl	%eax, %eax
+	movl	%eax, %cr3
+
+	/*
+	 * Set all of the registers to known values.
+	 * Leave %esp alone.
+	 */
+	xorl	%ebx, %ebx
+	xorl    %ecx, %ecx
+	xorl    %edx, %edx
+	xorl    %esi, %esi
+	xorl    %edi, %edi
+	xorl    %ebp, %ebp
+
+	/* Jump to start_address. */
+	retl
+
+	.align	L1_CACHE_BYTES
+
+gdt:
+	.quad	0x0000000000000000	/* NULL descriptor. */
+
+gdt_cs:
+	.quad	0x00cf9a000000ffff	/* 4 GiB code segment at 0x00000000. */
+
+gdt_ds:
+	.quad	0x00cf92000000ffff	/* 4 GiB data segment at 0x00000000. */
+gdt_end:
+
+gdt_48:
+	.word	gdt_end - gdt - 1	/* GDT limit. */
+	.long	0			/* GDT base - filled in by code above. */
+
+idt_48:
+	.word	0			/* IDT limit. */
+	.long	0			/* IDT base. */
+
+xen_kexec_control_code_size:
+	.long	. - xen_relocate_kernel
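
The copy loop above walks the indirection page exactly like the native relocate_kernel code. As a minimal, hedged user-space C sketch of the same walk (the flag values 0x1/0x2/0x4/0x8 match the bits tested in the assembly; the toy page size, the 0xf mask, and the use of plain pointers instead of machine addresses are illustrative assumptions, not what the kernel does):

```c
#include <stdint.h>
#include <string.h>

/* Flag bits tested in the assembly loop above. */
#define IND_DESTINATION 0x1
#define IND_INDIRECTION 0x2
#define IND_DONE        0x4
#define IND_SOURCE      0x8

/* Toy page size so the example stays small; the real code copies PAGE_SIZE. */
#define TOY_PAGE 16

/* Walk an indirection list and copy pages, mirroring the asm state machine. */
static void walk_indirection(uintptr_t *entry)
{
	unsigned char *dest = 0;

	for (;;) {
		uintptr_t e = *entry++;

		if (e & IND_DESTINATION) {
			/* Remember where the next source pages should land. */
			dest = (unsigned char *)(e & ~(uintptr_t)0xf);
		} else if (e & IND_INDIRECTION) {
			/* Continue reading entries from the next indirection page. */
			entry = (uintptr_t *)(e & ~(uintptr_t)0xf);
		} else if (e & IND_DONE) {
			return;
		} else if (e & IND_SOURCE) {
			/* Copy one page and advance the destination. */
			memcpy(dest, (void *)(e & ~(uintptr_t)0xf), TOY_PAGE);
			dest += TOY_PAGE;
		}
		/* Entries with no known flag are ignored, as in the asm. */
	}
}
```

The asm keeps the same state in registers: the destination in %edi, the source in %esi, and the current indirection-page cursor in %ebx.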
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To369-0004vH-OI; Thu, 27 Dec 2012 02:21:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tb-N4
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:12240] by server-11.bemta-5.messagelabs.com id
	DB/F8-31624-6A0BBD05; Thu, 27 Dec 2012 02:21:26 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!8
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11639 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645339Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:57 +0100
Message-Id: <1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.499592
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 08/11] x86/xen: Add kexec/kdump Kconfig and
	makefile rules
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add kexec/kdump Kconfig and makefile rules.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/Kconfig      |    3 +++
 arch/x86/xen/Kconfig  |    1 +
 arch/x86/xen/Makefile |    3 +++
 3 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 79795af..e2746c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1600,6 +1600,9 @@ config KEXEC_JUMP
 	  Jump between original kernel and kexeced kernel and invoke
 	  code in physical address mode via KEXEC
 
+config KEXEC_FIRMWARE
+	def_bool n
+
 config PHYSICAL_START
 	hex "Physical address where the kernel is loaded" if (EXPERT || CRASH_DUMP)
 	default "0x1000000"
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 131dacd..8469c1c 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -7,6 +7,7 @@ config XEN
 	select PARAVIRT
 	select PARAVIRT_CLOCK
 	select XEN_HAVE_PVMMU
+	select KEXEC_FIRMWARE if KEXEC
 	depends on X86_64 || (X86_32 && X86_PAE && !X86_VISWS)
 	depends on X86_TSC
 	help
diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
index 96ab2c0..99952d7 100644
--- a/arch/x86/xen/Makefile
+++ b/arch/x86/xen/Makefile
@@ -22,3 +22,6 @@ obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= spinlock.o
 obj-$(CONFIG_XEN_DEBUG_FS)	+= debugfs.o
 obj-$(CONFIG_XEN_DOM0)		+= apic.o vga.o
 obj-$(CONFIG_SWIOTLB_XEN)	+= pci-swiotlb-xen.o
+obj-$(CONFIG_KEXEC_FIRMWARE)	+= kexec.o
+obj-$(CONFIG_KEXEC_FIRMWARE)	+= machine_kexec_$(BITS).o
+obj-$(CONFIG_KEXEC_FIRMWARE)	+= relocate_kernel_$(BITS).o
-- 
1.5.6.5



From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To36B-0004wK-6e; Thu, 27 Dec 2012 02:21:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To368-0004uG-7h
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:28 +0000
Received: from [85.158.139.211:10425] by server-12.bemta-5.messagelabs.com id
	BE/A8-02275-7A0BBD05; Thu, 27 Dec 2012 02:21:27 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!9
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11665 invoked from network); 27 Dec 2012 02:21:25 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:25 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645338Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:56 +0100
Message-Id: <1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.102230
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 07/11] x86/xen: Add x86_64 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add x86_64 kexec/kdump implementation.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 arch/x86/xen/machine_kexec_64.c   |  318 +++++++++++++++++++++++++++++++++++++
 arch/x86/xen/relocate_kernel_64.S |  309 +++++++++++++++++++++++++++++++++++
 2 files changed, 627 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/xen/machine_kexec_64.c
 create mode 100644 arch/x86/xen/relocate_kernel_64.S

diff --git a/arch/x86/xen/machine_kexec_64.c b/arch/x86/xen/machine_kexec_64.c
new file mode 100644
index 0000000..2600342
--- /dev/null
+++ b/arch/x86/xen/machine_kexec_64.c
@@ -0,0 +1,318 @@
+/*
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+
+#include <xen/interface/memory.h>
+#include <xen/xen.h>
+
+#include <asm/xen/hypercall.h>
+#include <asm/xen/kexec.h>
+#include <asm/xen/page.h>
+
+#define __ma(vaddr)	(virt_to_machine(vaddr).maddr)
+
+static void init_level2_page(pmd_t *pmd, unsigned long addr)
+{
+	unsigned long end_addr = addr + PUD_SIZE;
+
+	while (addr < end_addr) {
+		native_set_pmd(pmd++, native_make_pmd(addr | __PAGE_KERNEL_LARGE_EXEC));
+		addr += PMD_SIZE;
+	}
+}
+
+static int init_level3_page(struct kimage *image, pud_t *pud,
+				unsigned long addr, unsigned long last_addr)
+{
+	pmd_t *pmd;
+	struct page *page;
+	unsigned long end_addr = addr + PGDIR_SIZE;
+
+	while ((addr < last_addr) && (addr < end_addr)) {
+		page = firmware_kimage_alloc_control_pages(image, 0);
+
+		if (!page)
+			return -ENOMEM;
+
+		pmd = page_address(page);
+		init_level2_page(pmd, addr);
+		native_set_pud(pud++, native_make_pud(__ma(pmd) | _KERNPG_TABLE));
+		addr += PUD_SIZE;
+	}
+
+	/* Clear the unused entries. */
+	while (addr < end_addr) {
+		native_pud_clear(pud++);
+		addr += PUD_SIZE;
+	}
+
+	return 0;
+}
+
+static int init_level4_page(struct kimage *image, pgd_t *pgd,
+				unsigned long addr, unsigned long last_addr)
+{
+	int rc;
+	pud_t *pud;
+	struct page *page;
+	unsigned long end_addr = addr + PTRS_PER_PGD * PGDIR_SIZE;
+
+	while ((addr < last_addr) && (addr < end_addr)) {
+		page = firmware_kimage_alloc_control_pages(image, 0);
+
+		if (!page)
+			return -ENOMEM;
+
+		pud = page_address(page);
+		rc = init_level3_page(image, pud, addr, last_addr);
+
+		if (rc)
+			return rc;
+
+		native_set_pgd(pgd++, native_make_pgd(__ma(pud) | _KERNPG_TABLE));
+		addr += PGDIR_SIZE;
+	}
+
+	/* Clear the unused entries. */
+	while (addr < end_addr) {
+		native_pgd_clear(pgd++);
+		addr += PGDIR_SIZE;
+	}
+
+	return 0;
+}
+
+static void free_transition_pgtable(struct kimage *image)
+{
+	free_page((unsigned long)image->arch.pgd);
+	free_page((unsigned long)image->arch.pud0);
+	free_page((unsigned long)image->arch.pud1);
+	free_page((unsigned long)image->arch.pmd0);
+	free_page((unsigned long)image->arch.pmd1);
+	free_page((unsigned long)image->arch.pte0);
+	free_page((unsigned long)image->arch.pte1);
+}
+
+static int alloc_transition_pgtable(struct kimage *image)
+{
+	image->arch.pgd = (pgd_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pgd)
+		goto err;
+
+	image->arch.pud0 = (pud_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pud0)
+		goto err;
+
+	image->arch.pud1 = (pud_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pud1)
+		goto err;
+
+	image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pmd0)
+		goto err;
+
+	image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pmd1)
+		goto err;
+
+	image->arch.pte0 = (pte_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pte0)
+		goto err;
+
+	image->arch.pte1 = (pte_t *)get_zeroed_page(GFP_KERNEL);
+
+	if (!image->arch.pte1)
+		goto err;
+
+	return 0;
+
+err:
+	free_transition_pgtable(image);
+
+	return -ENOMEM;
+}
+
+static int init_pgtable(struct kimage *image, pgd_t *pgd)
+{
+	int rc;
+	unsigned long max_mfn;
+
+	max_mfn = HYPERVISOR_memory_op(XENMEM_maximum_ram_page, NULL);
+
+	rc = init_level4_page(image, pgd, 0, PFN_PHYS(max_mfn));
+
+	if (rc)
+		return rc;
+
+	return alloc_transition_pgtable(image);
+}
+
+struct page *mf_kexec_kimage_alloc_pages(gfp_t gfp_mask,
+						unsigned int order,
+						unsigned long limit)
+{
+	struct page *pages;
+	unsigned int i;
+
+	pages = alloc_pages(gfp_mask, order);
+
+	if (!pages)
+		return NULL;
+
+	BUG_ON(PagePrivate(pages));
+
+	pages->mapping = NULL;
+	set_page_private(pages, order);
+
+	for (i = 0; i < (1 << order); ++i)
+		SetPageReserved(pages + i);
+
+	return pages;
+}
+
+void mf_kexec_kimage_free_pages(struct page *page)
+{
+	unsigned int i, order;
+
+	order = page_private(page);
+
+	for (i = 0; i < (1 << order); ++i)
+		ClearPageReserved(page + i);
+
+	__free_pages(page, order);
+}
+
+unsigned long mf_kexec_page_to_pfn(struct page *page)
+{
+	return pfn_to_mfn(page_to_pfn(page));
+}
+
+struct page *mf_kexec_pfn_to_page(unsigned long mfn)
+{
+	return pfn_to_page(mfn_to_pfn(mfn));
+}
+
+unsigned long mf_kexec_virt_to_phys(volatile void *address)
+{
+	return virt_to_machine(address).maddr;
+}
+
+void *mf_kexec_phys_to_virt(unsigned long address)
+{
+	return phys_to_virt(machine_to_phys(XMADDR(address)).paddr);
+}
+
+int mf_kexec_prepare(struct kimage *image)
+{
+#ifdef CONFIG_KEXEC_JUMP
+	if (image->preserve_context) {
+		pr_info_once("kexec: Context preservation is not "
+				"supported in Xen domains.\n");
+		return -ENOSYS;
+	}
+#endif
+
+	return init_pgtable(image, page_address(image->control_code_page));
+}
+
+int mf_kexec_load(struct kimage *image)
+{
+	void *control_page, *table_page;
+	struct xen_kexec_load xkl = {};
+
+	/* Image is unloaded, nothing to do. */
+	if (!image)
+		return 0;
+
+	table_page = page_address(image->control_code_page);
+	control_page = table_page + PAGE_SIZE;
+
+	memcpy(control_page, xen_relocate_kernel, xen_kexec_control_code_size);
+
+	xkl.type = image->type;
+	xkl.image.page_list[XK_MA_CONTROL_PAGE] = __ma(control_page);
+	xkl.image.page_list[XK_MA_TABLE_PAGE] = __ma(table_page);
+	xkl.image.page_list[XK_MA_PGD_PAGE] = __ma(image->arch.pgd);
+	xkl.image.page_list[XK_MA_PUD0_PAGE] = __ma(image->arch.pud0);
+	xkl.image.page_list[XK_MA_PUD1_PAGE] = __ma(image->arch.pud1);
+	xkl.image.page_list[XK_MA_PMD0_PAGE] = __ma(image->arch.pmd0);
+	xkl.image.page_list[XK_MA_PMD1_PAGE] = __ma(image->arch.pmd1);
+	xkl.image.page_list[XK_MA_PTE0_PAGE] = __ma(image->arch.pte0);
+	xkl.image.page_list[XK_MA_PTE1_PAGE] = __ma(image->arch.pte1);
+	xkl.image.indirection_page = image->head;
+	xkl.image.start_address = image->start;
+
+	return HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load, &xkl);
+}
+
+void mf_kexec_cleanup(struct kimage *image)
+{
+	free_transition_pgtable(image);
+}
+
+void mf_kexec_unload(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_load xkl = {};
+
+	if (!image)
+		return;
+
+	xkl.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec_unload, &xkl);
+
+	WARN(rc, "kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+}
+
+void mf_kexec_shutdown(void)
+{
+}
+
+void mf_kexec(struct kimage *image)
+{
+	int rc;
+	struct xen_kexec_exec xke = {};
+
+	xke.type = image->type;
+	rc = HYPERVISOR_kexec_op(KEXEC_CMD_kexec, &xke);
+
+	pr_emerg("kexec: %s: HYPERVISOR_kexec_op(): %i\n", __func__, rc);
+	BUG();
+}
diff --git a/arch/x86/xen/relocate_kernel_64.S b/arch/x86/xen/relocate_kernel_64.S
new file mode 100644
index 0000000..8f641f1
--- /dev/null
+++ b/arch/x86/xen/relocate_kernel_64.S
@@ -0,0 +1,309 @@
+/*
+ * Copyright (c) 2002-2005 Eric Biederman <ebiederm@xmission.com>
+ * Copyright (c) 2011 Daniel Kiper
+ * Copyright (c) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * kexec/kdump implementation for Xen was written by Daniel Kiper.
+ * Initial work on it was sponsored by Google under Google Summer
+ * of Code 2011 program and Citrix. Konrad Rzeszutek Wilk from Oracle
+ * was the mentor for this project.
+ *
+ * Some ideas are taken from:
+ *   - native kexec/kdump implementation,
+ *   - kexec/kdump implementation for Xen Linux Kernel Ver. 2.6.18,
+ *   - PV-GRUB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/page_types.h>
+#include <asm/pgtable_types.h>
+#include <asm/processor-flags.h>
+
+#include <asm/xen/kexec.h>
+
+#define PTR(x)	(x << 3)
+
+	.text
+	.code64
+	.globl	xen_kexec_control_code_size, xen_relocate_kernel
+
+xen_relocate_kernel:
+	/*
+	 * Must be relocatable PIC code callable as a C function.
+	 *
+ * This function is called by Xen, but by this point the
+ * hypervisor is dead and we are running on bare metal.
+ *
+ * Every machine address passed to this function through
+ * page_list (e.g. XK_MA_CONTROL_PAGE) is established by
+ * dom0 during the kexec load phase.
+ *
+ * Every virtual address passed to this function through
+ * page_list (e.g. XK_VA_CONTROL_PAGE) is established by the
+ * hypervisor during the HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load)
+ * hypercall.
+	 *
+	 * %rdi - indirection_page,
+	 * %rsi - page_list,
+	 * %rdx - start_address,
+	 * %ecx - preserve_context (ignored).
+	 */
+
+	/* Zero out flags, and disable interrupts. */
+	pushq	$0
+	popfq
+
+	/*
+	 * Map the control page at its virtual address
+	 * in the transition page table.
+	 */
+	movq	PTR(XK_VA_CONTROL_PAGE)(%rsi), %r8
+
+	/* Get PGD address and PGD entry index. */
+	movq	PTR(XK_VA_PGD_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PGDIR_SHIFT, %r10
+	andq	$(PTRS_PER_PGD - 1), %r10
+
+	/* Fill PGD entry with PUD0 reference. */
+	movq	PTR(XK_MA_PUD0_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PUD0 address and PUD0 entry index. */
+	movq	PTR(XK_VA_PUD0_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PUD_SHIFT, %r10
+	andq	$(PTRS_PER_PUD - 1), %r10
+
+	/* Fill PUD0 entry with PMD0 reference. */
+	movq	PTR(XK_MA_PMD0_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PMD0 address and PMD0 entry index. */
+	movq	PTR(XK_VA_PMD0_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PMD_SHIFT, %r10
+	andq	$(PTRS_PER_PMD - 1), %r10
+
+	/* Fill PMD0 entry with PTE0 reference. */
+	movq	PTR(XK_MA_PTE0_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PTE0 address and PTE0 entry index. */
+	movq	PTR(XK_VA_PTE0_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PAGE_SHIFT, %r10
+	andq	$(PTRS_PER_PTE - 1), %r10
+
+	/* Fill PTE0 entry with control page reference. */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r11
+	orq	$__PAGE_KERNEL_EXEC, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/*
+	 * Identity-map the control page at its machine address
+	 * in the transition page table.
+	 */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r8
+
+	/* Get PGD address and PGD entry index. */
+	movq	PTR(XK_VA_PGD_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PGDIR_SHIFT, %r10
+	andq	$(PTRS_PER_PGD - 1), %r10
+
+	/* Fill PGD entry with PUD1 reference. */
+	movq	PTR(XK_MA_PUD1_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PUD1 address and PUD1 entry index. */
+	movq	PTR(XK_VA_PUD1_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PUD_SHIFT, %r10
+	andq	$(PTRS_PER_PUD - 1), %r10
+
+	/* Fill PUD1 entry with PMD1 reference. */
+	movq	PTR(XK_MA_PMD1_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PMD1 address and PMD1 entry index. */
+	movq	PTR(XK_VA_PMD1_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PMD_SHIFT, %r10
+	andq	$(PTRS_PER_PMD - 1), %r10
+
+	/* Fill PMD1 entry with PTE1 reference. */
+	movq	PTR(XK_MA_PTE1_PAGE)(%rsi), %r11
+	orq	$_KERNPG_TABLE, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/* Get PTE1 address and PTE1 entry index. */
+	movq	PTR(XK_VA_PTE1_PAGE)(%rsi), %r9
+	movq	%r8, %r10
+	shrq	$PAGE_SHIFT, %r10
+	andq	$(PTRS_PER_PTE - 1), %r10
+
+	/* Fill PTE1 entry with control page reference. */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r11
+	orq	$__PAGE_KERNEL_EXEC, %r11
+	movq	%r11, (%r9, %r10, 8)
+
+	/*
+	 * Get the machine address of the control page now;
+	 * it cannot be read after the page table switch.
+	 */
+	movq	PTR(XK_MA_CONTROL_PAGE)(%rsi), %r8
+
+	/* Get the machine address of the identity page table now too. */
+	movq	PTR(XK_MA_TABLE_PAGE)(%rsi), %r9
+
+	/* Get the machine address of the transition page table now too. */
+	movq	PTR(XK_MA_PGD_PAGE)(%rsi), %r10
+
+	/* Switch to transition page table. */
+	movq	%r10, %cr3
+
+	/* Set up a new stack at the end of the control page (machine address). */
+	leaq	PAGE_SIZE(%r8), %rsp
+
+	/* Store start_address on the stack. */
+	pushq   %rdx
+
+	/* Jump to identity mapped page. */
+	addq	$(identity_mapped - xen_relocate_kernel), %r8
+	jmpq	*%r8
+
+identity_mapped:
+	/* Switch to identity page table. */
+	movq	%r9, %cr3
+
+	/*
+	 * Set %cr0 to a known state:
+	 *   - disable alignment check,
+	 *   - disable floating point emulation,
+	 *   - no task switch,
+	 *   - disable write protect,
+	 *   - enable protected mode,
+	 *   - enable paging.
+	 */
+	movq	%cr0, %rax
+	andq	$~(X86_CR0_AM | X86_CR0_EM | X86_CR0_TS | X86_CR0_WP), %rax
+	orl	$(X86_CR0_PE | X86_CR0_PG), %eax
+	movq	%rax, %cr0
+
+	/*
+	 * Set %cr4 to a known state:
+	 *   - enable physical address extension.
+	 */
+	movq	$X86_CR4_PAE, %rax
+	movq	%rax, %cr4
+
+	jmp	1f
+
+1:
+	/* Flush the TLB (needed?). */
+	movq	%r9, %cr3
+
+	/* Do the copies. */
+	movq	%rdi, %rcx	/* Put the indirection_page in %rcx. */
+	xorq	%rdi, %rdi
+	xorq	%rsi, %rsi
+	jmp	1f
+
+0:
+	/*
+	 * Loop top: read another quadword from the indirection page.
+	 * The indirection page is an array of source and destination
+	 * address pairs. If all pairs do not fit in one page, the last
+	 * entry of a given indirection page points to the next one.
+	 * Copying stops when the done indicator is found in the
+	 * indirection page.
+	 */
+	movq	(%rbx), %rcx
+	addq	$8, %rbx
+
+1:
+	testq	$0x1, %rcx	/* Is it a destination page? */
+	jz	2f
+
+	movq	%rcx, %rdi
+	andq	$PAGE_MASK, %rdi
+	jmp	0b
+
+2:
+	testq	$0x2, %rcx	/* Is it an indirection page? */
+	jz	2f
+
+	movq	%rcx, %rbx
+	andq	$PAGE_MASK, %rbx
+	jmp	0b
+
+2:
+	testq	$0x4, %rcx	/* Is it the done indicator? */
+	jz	2f
+	jmp	3f
+
+2:
+	testq	$0x8, %rcx	/* Is it the source indicator? */
+	jz	0b		/* Ignore it otherwise. */
+
+	movq	%rcx, %rsi
+	andq	$PAGE_MASK, %rsi
+	movq	$512, %rcx
+
+	/* Copy page. */
+	rep	movsq
+	jmp	0b
+
+3:
+	/*
+	 * To be certain of avoiding problems with self-modifying code,
+	 * I need to execute a serializing instruction here. So I flush
+	 * the TLB by reloading %cr3 here; it's handy and not processor
+	 * dependent.
+	 */
+	movq	%cr3, %rax
+	movq	%rax, %cr3
+
+	/*
+	 * Set all of the registers to known values.
+	 * Leave %rsp alone.
+	 */
+	xorq	%rax, %rax
+	xorq	%rbx, %rbx
+	xorq    %rcx, %rcx
+	xorq    %rdx, %rdx
+	xorq    %rsi, %rsi
+	xorq    %rdi, %rdi
+	xorq    %rbp, %rbp
+	xorq	%r8, %r8
+	xorq	%r9, %r9
+	xorq	%r10, %r10
+	xorq	%r11, %r11
+	xorq	%r12, %r12
+	xorq	%r13, %r13
+	xorq	%r14, %r14
+	xorq	%r15, %r15
+
+	/* Jump to start_address. */
+	retq
+
+xen_kexec_control_code_size:
+	.long	. - xen_relocate_kernel
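
Each shr/and pair above computes a page-table index from a virtual or machine address. A hedged C sketch of that computation, using the standard x86_64 4-level paging shifts (PAGE_SHIFT 12, PMD_SHIFT 21, PUD_SHIFT 30, PGDIR_SHIFT 39, 512 entries per table; the helper name tbl_index is illustrative, not a kernel function):

```c
#include <stdint.h>

/* x86_64 4-level paging constants, as used by the shifts above. */
#define PAGE_SHIFT   12
#define PMD_SHIFT    21
#define PUD_SHIFT    30
#define PGDIR_SHIFT  39
#define PTRS_PER_TBL 512	/* PTRS_PER_PGD/PUD/PMD/PTE are all 512 */

/*
 * Index of an address at one paging level: shift the address right
 * by the level's shift, then mask to the table size. This is exactly
 * the shrq/andq pair repeated for each level in the assembly.
 */
static unsigned int tbl_index(uint64_t addr, unsigned int shift)
{
	return (addr >> shift) & (PTRS_PER_TBL - 1);
}
```

The assembly then stores `entry | _KERNPG_TABLE` (or `| __PAGE_KERNEL_EXEC` at the PTE level) at offset `index * 8` within the table page, which is what the `(%r9, %r10, 8)` addressing mode expresses.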
-- 
1.5.6.5



From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To367-0004uC-KD; Thu, 27 Dec 2012 02:21:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To365-0004tM-Rg
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:12216] by server-6.bemta-5.messagelabs.com id
	EB/02-30498-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!4
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11564 invoked from network); 27 Dec 2012 02:21:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:24 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645333Ab2L0CTB
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:01 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:52 +0100
Message-Id: <1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.415109
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 03/11] xen: Introduce architecture
	independent data for kexec/kdump
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce the architecture-independent constants and structures
required by the Xen kexec/kdump implementation.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 include/xen/interface/xen.h |   33 +++++++++++++++++++++++++++++++++
 1 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 886a5d8..09c16ab 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -57,6 +57,7 @@
 #define __HYPERVISOR_event_channel_op     32
 #define __HYPERVISOR_physdev_op           33
 #define __HYPERVISOR_hvm_op               34
+#define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 
 /* Architecture-specific hypercall definitions. */
@@ -231,7 +232,39 @@ DEFINE_GUEST_HANDLE_STRUCT(mmuext_op);
 #define VMASST_TYPE_pae_extended_cr3     3
 #define MAX_VMASST_TYPE 3
 
+/*
+ * Commands to HYPERVISOR_kexec_op().
+ */
+#define KEXEC_CMD_kexec			0
+#define KEXEC_CMD_kexec_load		1
+#define KEXEC_CMD_kexec_unload		2
+#define KEXEC_CMD_kexec_get_range	3
+
+/*
+ * Memory ranges for kdump (utilized by HYPERVISOR_kexec_op()).
+ */
+#define KEXEC_RANGE_MA_CRASH		0
+#define KEXEC_RANGE_MA_XEN		1
+#define KEXEC_RANGE_MA_CPU		2
+#define KEXEC_RANGE_MA_XENHEAP		3
+#define KEXEC_RANGE_MA_BOOT_PARAM	4
+#define KEXEC_RANGE_MA_EFI_MEMMAP	5
+#define KEXEC_RANGE_MA_VMCOREINFO	6
+
 #ifndef __ASSEMBLY__
+struct xen_kexec_exec {
+	int type;
+};
+
+struct xen_kexec_range {
+	int range;
+	int nr;
+	unsigned long size;
+	unsigned long start;
+};
+
+extern unsigned long xen_vmcoreinfo_maddr;
+extern unsigned long xen_vmcoreinfo_max_size;
 
 typedef uint16_t domid_t;
 
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To368-0004ua-DU; Thu, 27 Dec 2012 02:21:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tL-6u
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:10389] by server-13.bemta-5.messagelabs.com id
	33/4A-10716-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!2
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11530 invoked from network); 27 Dec 2012 02:21:23 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:23 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645138Ab2L0CTA
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:00 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:50 +0100
Message-Id: <1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.057879
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 01/11] kexec: introduce kexec firmware support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some kexec/kdump implementations (e.g. Xen PVOPS) cannot use the default
Linux infrastructure and require support from the firmware and/or the
hypervisor. To cope with that problem, the kexec firmware infrastructure
is introduced. It allows a developer to use all kexec/kdump features of
a given firmware or hypervisor.

v3 - suggestions/fixes:
   - replace kexec_ops struct by kexec firmware infrastructure
     (suggested by Eric Biederman).

v2 - suggestions/fixes:
   - add comment for kexec_ops.crash_alloc_temp_store member
     (suggested by Konrad Rzeszutek Wilk),
   - simplify kexec_ops usage
     (suggested by Konrad Rzeszutek Wilk).

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 include/linux/kexec.h   |   26 ++-
 kernel/Makefile         |    1 +
 kernel/kexec-firmware.c |  743 +++++++++++++++++++++++++++++++++++++++++++++++
 kernel/kexec.c          |   46 +++-
 4 files changed, 809 insertions(+), 7 deletions(-)
 create mode 100644 kernel/kexec-firmware.c

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index d0b8458..9568457 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -116,17 +116,34 @@ struct kimage {
 #endif
 };
 
-
-
 /* kexec interface functions */
 extern void machine_kexec(struct kimage *image);
 extern int machine_kexec_prepare(struct kimage *image);
 extern void machine_kexec_cleanup(struct kimage *image);
+extern struct page *mf_kexec_kimage_alloc_pages(gfp_t gfp_mask,
+						unsigned int order,
+						unsigned long limit);
+extern void mf_kexec_kimage_free_pages(struct page *page);
+extern unsigned long mf_kexec_page_to_pfn(struct page *page);
+extern struct page *mf_kexec_pfn_to_page(unsigned long mfn);
+extern unsigned long mf_kexec_virt_to_phys(volatile void *address);
+extern void *mf_kexec_phys_to_virt(unsigned long address);
+extern int mf_kexec_prepare(struct kimage *image);
+extern int mf_kexec_load(struct kimage *image);
+extern void mf_kexec_cleanup(struct kimage *image);
+extern void mf_kexec_unload(struct kimage *image);
+extern void mf_kexec_shutdown(void);
+extern void mf_kexec(struct kimage *image);
 extern asmlinkage long sys_kexec_load(unsigned long entry,
 					unsigned long nr_segments,
 					struct kexec_segment __user *segments,
 					unsigned long flags);
+extern long firmware_sys_kexec_load(unsigned long entry,
+					unsigned long nr_segments,
+					struct kexec_segment __user *segments,
+					unsigned long flags);
 extern int kernel_kexec(void);
+extern int firmware_kernel_kexec(void);
 #ifdef CONFIG_COMPAT
 extern asmlinkage long compat_sys_kexec_load(unsigned long entry,
 				unsigned long nr_segments,
@@ -135,7 +152,10 @@ extern asmlinkage long compat_sys_kexec_load(unsigned long entry,
 #endif
 extern struct page *kimage_alloc_control_pages(struct kimage *image,
 						unsigned int order);
+extern struct page *firmware_kimage_alloc_control_pages(struct kimage *image,
+							unsigned int order);
 extern void crash_kexec(struct pt_regs *);
+extern void firmware_crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 void crash_save_vmcoreinfo(void);
@@ -168,6 +188,8 @@ unsigned long paddr_vmcoreinfo_note(void);
 #define VMCOREINFO_CONFIG(name) \
 	vmcoreinfo_append_str("CONFIG_%s=y\n", #name)
 
+extern bool kexec_use_firmware;
+
 extern struct kimage *kexec_image;
 extern struct kimage *kexec_crash_image;
 
diff --git a/kernel/Makefile b/kernel/Makefile
index 6c072b6..bc96b2f 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -58,6 +58,7 @@ obj-$(CONFIG_MODULE_SIG) += module_signing.o modsign_pubkey.o modsign_certificat
 obj-$(CONFIG_KALLSYMS) += kallsyms.o
 obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
 obj-$(CONFIG_KEXEC) += kexec.o
+obj-$(CONFIG_KEXEC_FIRMWARE) += kexec-firmware.o
 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
 obj-$(CONFIG_COMPAT) += compat.o
 obj-$(CONFIG_CGROUPS) += cgroup.o
diff --git a/kernel/kexec-firmware.c b/kernel/kexec-firmware.c
new file mode 100644
index 0000000..f6ddd4c
--- /dev/null
+++ b/kernel/kexec-firmware.c
@@ -0,0 +1,743 @@
+/*
+ * Copyright (C) 2002-2004 Eric Biederman  <ebiederm@xmission.com>
+ * Copyright (C) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * Most of the code here is a copy of kernel/kexec.c.
+ *
+ * This source code is licensed under the GNU General Public License,
+ * Version 2.  See the file COPYING for more details.
+ */
+
+#include <linux/atomic.h>
+#include <linux/errno.h>
+#include <linux/highmem.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/reboot.h>
+#include <linux/slab.h>
+
+#include <asm/uaccess.h>
+
+/*
+ * KIMAGE_NO_DEST is an impossible destination address, used for
+ * allocating pages whose destination address we do not care about.
+ */
+#define KIMAGE_NO_DEST (-1UL)
+
+static int kimage_is_destination_range(struct kimage *image,
+				       unsigned long start, unsigned long end);
+static struct page *kimage_alloc_page(struct kimage *image,
+				       gfp_t gfp_mask,
+				       unsigned long dest);
+
+static int do_kimage_alloc(struct kimage **rimage, unsigned long entry,
+	                    unsigned long nr_segments,
+                            struct kexec_segment __user *segments)
+{
+	size_t segment_bytes;
+	struct kimage *image;
+	unsigned long i;
+	int result;
+
+	/* Allocate a controlling structure */
+	result = -ENOMEM;
+	image = kzalloc(sizeof(*image), GFP_KERNEL);
+	if (!image)
+		goto out;
+
+	image->head = 0;
+	image->entry = &image->head;
+	image->last_entry = &image->head;
+	image->control_page = ~0; /* By default this does not apply */
+	image->start = entry;
+	image->type = KEXEC_TYPE_DEFAULT;
+
+	/* Initialize the list of control pages */
+	INIT_LIST_HEAD(&image->control_pages);
+
+	/* Initialize the list of destination pages */
+	INIT_LIST_HEAD(&image->dest_pages);
+
+	/* Initialize the list of unusable pages */
+	INIT_LIST_HEAD(&image->unuseable_pages);
+
+	/* Read in the segments */
+	image->nr_segments = nr_segments;
+	segment_bytes = nr_segments * sizeof(*segments);
+	result = copy_from_user(image->segment, segments, segment_bytes);
+	if (result) {
+		result = -EFAULT;
+		goto out;
+	}
+
+	/*
+	 * Verify we have good destination addresses.  The caller is
+	 * responsible for making certain we don't attempt to load
+	 * the new image into invalid or reserved areas of RAM.  This
+	 * just verifies it is an address we can use.
+	 *
+	 * Since the kernel does everything in page size chunks ensure
+	 * the destination addresses are page aligned.  Too many
+	 * special cases crop up when we don't do this.  The most
+	 * insidious is getting overlapping destination addresses
+	 * simply because addresses are changed to page size
+	 * granularity.
+	 */
+	result = -EADDRNOTAVAIL;
+	for (i = 0; i < nr_segments; i++) {
+		unsigned long mstart, mend;
+
+		mstart = image->segment[i].mem;
+		mend   = mstart + image->segment[i].memsz;
+		if ((mstart & ~PAGE_MASK) || (mend & ~PAGE_MASK))
+			goto out;
+		if (mend >= KEXEC_DESTINATION_MEMORY_LIMIT)
+			goto out;
+	}
+
+	/* Verify our destination addresses do not overlap.
+	 * If we allowed overlapping destination addresses
+	 * through, very weird things can happen with no
+	 * easy explanation as one segment stops on another.
+	 */
+	result = -EINVAL;
+	for (i = 0; i < nr_segments; i++) {
+		unsigned long mstart, mend;
+		unsigned long j;
+
+		mstart = image->segment[i].mem;
+		mend   = mstart + image->segment[i].memsz;
+		for (j = 0; j < i; j++) {
+			unsigned long pstart, pend;
+			pstart = image->segment[j].mem;
+			pend   = pstart + image->segment[j].memsz;
+			/* Do the segments overlap ? */
+			if ((mend > pstart) && (mstart < pend))
+				goto out;
+		}
+	}
+
+	/* Ensure our buffer sizes are strictly less than
+	 * our memory sizes.  This should always be the case,
+	 * and it is easier to check up front than to be surprised
+	 * later on.
+	 */
+	result = -EINVAL;
+	for (i = 0; i < nr_segments; i++) {
+		if (image->segment[i].bufsz > image->segment[i].memsz)
+			goto out;
+	}
+
+	result = 0;
+out:
+	if (result == 0)
+		*rimage = image;
+	else
+		kfree(image);
+
+	return result;
+
+}
+
+static int kimage_normal_alloc(struct kimage **rimage, unsigned long entry,
+				unsigned long nr_segments,
+				struct kexec_segment __user *segments)
+{
+	int result;
+	struct kimage *image;
+
+	/* Allocate and initialize a controlling structure */
+	image = NULL;
+	result = do_kimage_alloc(&image, entry, nr_segments, segments);
+	if (result)
+		goto out;
+
+	*rimage = image;
+
+	/*
+	 * Find a location for the control code buffer, and add it
+	 * to the vector of segments so that its pages will also be
+	 * counted as destination pages.
+	 */
+	result = -ENOMEM;
+	image->control_code_page = firmware_kimage_alloc_control_pages(image,
+					   get_order(KEXEC_CONTROL_PAGE_SIZE));
+	if (!image->control_code_page) {
+		printk(KERN_ERR "Could not allocate control_code_buffer\n");
+		goto out;
+	}
+
+	image->swap_page = firmware_kimage_alloc_control_pages(image, 0);
+	if (!image->swap_page) {
+		printk(KERN_ERR "Could not allocate swap buffer\n");
+		goto out;
+	}
+
+	result = 0;
+ out:
+	if (result == 0)
+		*rimage = image;
+	else
+		kfree(image);
+
+	return result;
+}
+
+static int kimage_crash_alloc(struct kimage **rimage, unsigned long entry,
+				unsigned long nr_segments,
+				struct kexec_segment __user *segments)
+{
+	int result;
+	struct kimage *image;
+	unsigned long i;
+
+	image = NULL;
+	/* Verify we have a valid entry point */
+	if ((entry < crashk_res.start) || (entry > crashk_res.end)) {
+		result = -EADDRNOTAVAIL;
+		goto out;
+	}
+
+	/* Allocate and initialize a controlling structure */
+	result = do_kimage_alloc(&image, entry, nr_segments, segments);
+	if (result)
+		goto out;
+
+	/* Enable the special crash kernel control page
+	 * allocation policy.
+	 */
+	image->control_page = crashk_res.start;
+	image->type = KEXEC_TYPE_CRASH;
+
+	/*
+	 * Verify we have good destination addresses.  Normally
+	 * the caller is responsible for making certain we don't
+	 * attempt to load the new image into invalid or reserved
+	 * areas of RAM.  But crash kernels are preloaded into a
+	 * reserved area of RAM.  We must ensure the addresses
+	 * are in the reserved area otherwise preloading the
+	 * kernel could corrupt things.
+	 */
+	result = -EADDRNOTAVAIL;
+	for (i = 0; i < nr_segments; i++) {
+		unsigned long mstart, mend;
+
+		mstart = image->segment[i].mem;
+		mend = mstart + image->segment[i].memsz - 1;
+		/* Ensure we are within the crash kernel limits */
+		if ((mstart < crashk_res.start) || (mend > crashk_res.end))
+			goto out;
+	}
+
+	/*
+	 * Find a location for the control code buffer, and add it
+	 * to the vector of segments so that its pages will also be
+	 * counted as destination pages.
+	 */
+	result = -ENOMEM;
+	image->control_code_page = firmware_kimage_alloc_control_pages(image,
+					   get_order(KEXEC_CONTROL_PAGE_SIZE));
+	if (!image->control_code_page) {
+		printk(KERN_ERR "Could not allocate control_code_buffer\n");
+		goto out;
+	}
+
+	result = 0;
+out:
+	if (result == 0)
+		*rimage = image;
+	else
+		kfree(image);
+
+	return result;
+}
+
+static int kimage_is_destination_range(struct kimage *image,
+					unsigned long start,
+					unsigned long end)
+{
+	unsigned long i;
+
+	for (i = 0; i < image->nr_segments; i++) {
+		unsigned long mstart, mend;
+
+		mstart = image->segment[i].mem;
+		mend = mstart + image->segment[i].memsz;
+		if ((end > mstart) && (start < mend))
+			return 1;
+	}
+
+	return 0;
+}
+
+static void kimage_free_page_list(struct list_head *list)
+{
+	struct list_head *pos, *next;
+
+	list_for_each_safe(pos, next, list) {
+		struct page *page;
+
+		page = list_entry(pos, struct page, lru);
+		list_del(&page->lru);
+		mf_kexec_kimage_free_pages(page);
+	}
+}
+
+static struct page *kimage_alloc_normal_control_pages(struct kimage *image,
+							unsigned int order)
+{
+	/* Control pages are special; they are the intermediaries
+	 * that are needed while we copy the rest of the pages
+	 * to their final resting place.  As such they must
+	 * not conflict with either the destination addresses
+	 * or memory the kernel is already using.
+	 *
+	 * The only case where we really need more than one of
+	 * these are for architectures where we cannot disable
+	 * the MMU and must instead generate an identity mapped
+	 * page table for all of the memory.
+	 *
+	 * At worst this runs in O(N) of the image size.
+	 */
+	struct list_head extra_pages;
+	struct page *pages;
+	unsigned int count;
+
+	count = 1 << order;
+	INIT_LIST_HEAD(&extra_pages);
+
+	/* Loop while I can allocate a page and the page allocated
+	 * is a destination page.
+	 */
+	do {
+		unsigned long pfn, epfn, addr, eaddr;
+
+		pages = mf_kexec_kimage_alloc_pages(GFP_KERNEL, order,
+							KEXEC_CONTROL_MEMORY_LIMIT);
+		if (!pages)
+			break;
+		pfn   = mf_kexec_page_to_pfn(pages);
+		epfn  = pfn + count;
+		addr  = pfn << PAGE_SHIFT;
+		eaddr = epfn << PAGE_SHIFT;
+		if ((epfn >= (KEXEC_CONTROL_MEMORY_LIMIT >> PAGE_SHIFT)) ||
+			      kimage_is_destination_range(image, addr, eaddr)) {
+			list_add(&pages->lru, &extra_pages);
+			pages = NULL;
+		}
+	} while (!pages);
+
+	if (pages) {
+		/* Remember the allocated page... */
+		list_add(&pages->lru, &image->control_pages);
+
+		/* Because the page is already in its destination
+		 * location we will never allocate another page at
+		 * that address.  Therefore mf_kexec_kimage_alloc_pages
+		 * will not return it (again) and we don't need
+		 * to give it an entry in image->segment[].
+		 */
+	}
+	/* Deal with the destination pages I have inadvertently allocated.
+	 *
+	 * Ideally I would convert multi-page allocations into single
+	 * page allocations, and add everything to image->dest_pages.
+	 *
+	 * For now it is simpler to just free the pages.
+	 */
+	kimage_free_page_list(&extra_pages);
+
+	return pages;
+}
+
+struct page *firmware_kimage_alloc_control_pages(struct kimage *image,
+							unsigned int order)
+{
+	return kimage_alloc_normal_control_pages(image, order);
+}
+
+static int kimage_add_entry(struct kimage *image, kimage_entry_t entry)
+{
+	if (*image->entry != 0)
+		image->entry++;
+
+	if (image->entry == image->last_entry) {
+		kimage_entry_t *ind_page;
+		struct page *page;
+
+		page = kimage_alloc_page(image, GFP_KERNEL, KIMAGE_NO_DEST);
+		if (!page)
+			return -ENOMEM;
+
+		ind_page = page_address(page);
+		*image->entry = mf_kexec_virt_to_phys(ind_page) | IND_INDIRECTION;
+		image->entry = ind_page;
+		image->last_entry = ind_page +
+				      ((PAGE_SIZE/sizeof(kimage_entry_t)) - 1);
+	}
+	*image->entry = entry;
+	image->entry++;
+	*image->entry = 0;
+
+	return 0;
+}
+
+static int kimage_set_destination(struct kimage *image,
+				   unsigned long destination)
+{
+	int result;
+
+	destination &= PAGE_MASK;
+	result = kimage_add_entry(image, destination | IND_DESTINATION);
+	if (result == 0)
+		image->destination = destination;
+
+	return result;
+}
+
+
+static int kimage_add_page(struct kimage *image, unsigned long page)
+{
+	int result;
+
+	page &= PAGE_MASK;
+	result = kimage_add_entry(image, page | IND_SOURCE);
+	if (result == 0)
+		image->destination += PAGE_SIZE;
+
+	return result;
+}
+
+
+static void kimage_free_extra_pages(struct kimage *image)
+{
+	/* Walk through and free any extra destination pages I may have */
+	kimage_free_page_list(&image->dest_pages);
+
+	/* Walk through and free any unusable pages I have cached */
+	kimage_free_page_list(&image->unuseable_pages);
+
+}
+static void kimage_terminate(struct kimage *image)
+{
+	if (*image->entry != 0)
+		image->entry++;
+
+	*image->entry = IND_DONE;
+}
+
+#define for_each_kimage_entry(image, ptr, entry) \
+	for (ptr = &image->head; (entry = *ptr) && !(entry & IND_DONE); \
+		ptr = (entry & IND_INDIRECTION)? \
+			mf_kexec_phys_to_virt((entry & PAGE_MASK)): ptr +1)
+
+static void kimage_free_entry(kimage_entry_t entry)
+{
+	struct page *page;
+
+	page = mf_kexec_pfn_to_page(entry >> PAGE_SHIFT);
+	mf_kexec_kimage_free_pages(page);
+}
+
+static void kimage_free(struct kimage *image)
+{
+	kimage_entry_t *ptr, entry;
+	kimage_entry_t ind = 0;
+
+	if (!image)
+		return;
+
+	kimage_free_extra_pages(image);
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_INDIRECTION) {
+			/* Free the previous indirection page */
+			if (ind & IND_INDIRECTION)
+				kimage_free_entry(ind);
+			/* Save this indirection page until we are
+			 * done with it.
+			 */
+			ind = entry;
+		}
+		else if (entry & IND_SOURCE)
+			kimage_free_entry(entry);
+	}
+	/* Free the final indirection page */
+	if (ind & IND_INDIRECTION)
+		kimage_free_entry(ind);
+
+	/* Handle any machine specific cleanup */
+	mf_kexec_cleanup(image);
+
+	/* Free the kexec control pages... */
+	kimage_free_page_list(&image->control_pages);
+	kfree(image);
+}
+
+static kimage_entry_t *kimage_dst_used(struct kimage *image,
+					unsigned long page)
+{
+	kimage_entry_t *ptr, entry;
+	unsigned long destination = 0;
+
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION)
+			destination = entry & PAGE_MASK;
+		else if (entry & IND_SOURCE) {
+			if (page == destination)
+				return ptr;
+			destination += PAGE_SIZE;
+		}
+	}
+
+	return NULL;
+}
+
+static struct page *kimage_alloc_page(struct kimage *image,
+					gfp_t gfp_mask,
+					unsigned long destination)
+{
+	/*
+	 * Here we implement safeguards to ensure that a source page
+	 * is not copied to its destination page before the data on
+	 * the destination page is no longer useful.
+	 *
+	 * To do this we maintain the invariant that a source page is
+	 * either its own destination page, or it is not a
+	 * destination page at all.
+	 *
+	 * That is slightly stronger than required, but the proof
+	 * that no problems will occur is trivial, and the
+	 * implementation is simple to verify.
+	 *
+	 * When allocating all pages normally this algorithm will run
+	 * in O(N) time, but in the worst case it will run in O(N^2)
+	 * time.   If the runtime is a problem the data structures can
+	 * be fixed.
+	 */
+	struct page *page;
+	unsigned long addr;
+
+	/*
+	 * Walk through the list of destination pages, and see if I
+	 * have a match.
+	 */
+	list_for_each_entry(page, &image->dest_pages, lru) {
+		addr = mf_kexec_page_to_pfn(page) << PAGE_SHIFT;
+		if (addr == destination) {
+			list_del(&page->lru);
+			return page;
+		}
+	}
+	page = NULL;
+	while (1) {
+		kimage_entry_t *old;
+
+		/* Allocate a page, if we run out of memory give up */
+		page = mf_kexec_kimage_alloc_pages(gfp_mask, 0,
+							KEXEC_SOURCE_MEMORY_LIMIT);
+		if (!page)
+			return NULL;
+		/* If the page cannot be used, file it away */
+		if (mf_kexec_page_to_pfn(page) >
+				(KEXEC_SOURCE_MEMORY_LIMIT >> PAGE_SHIFT)) {
+			list_add(&page->lru, &image->unuseable_pages);
+			continue;
+		}
+		addr = mf_kexec_page_to_pfn(page) << PAGE_SHIFT;
+
+		/* If it is the destination page we want, use it */
+		if (addr == destination)
+			break;
+
+		/* If the page is not a destination page use it */
+		if (!kimage_is_destination_range(image, addr,
+						  addr + PAGE_SIZE))
+			break;
+
+		/*
+		 * I know that the page is someone's destination page.
+		 * See if there is already a source page for this
+		 * destination page.  And if so, swap the source pages.
+		 */
+		old = kimage_dst_used(image, addr);
+		if (old) {
+			/* If so move it */
+			unsigned long old_addr;
+			struct page *old_page;
+
+			old_addr = *old & PAGE_MASK;
+			old_page = mf_kexec_pfn_to_page(old_addr >> PAGE_SHIFT);
+			copy_highpage(page, old_page);
+			*old = addr | (*old & ~PAGE_MASK);
+
+			/* The old page I have found cannot be a
+		 * destination page, so return it if its
+			 * gfp_flags honor the ones passed in.
+			 */
+			if (!(gfp_mask & __GFP_HIGHMEM) &&
+			    PageHighMem(old_page)) {
+				mf_kexec_kimage_free_pages(old_page);
+				continue;
+			}
+			addr = old_addr;
+			page = old_page;
+			break;
+		}
+		else {
+			/* Place the page on the destination list; I
+			 * will use it later.
+			 */
+			list_add(&page->lru, &image->dest_pages);
+		}
+	}
+
+	return page;
+}
+
+static int kimage_load_normal_segment(struct kimage *image,
+					 struct kexec_segment *segment)
+{
+	unsigned long maddr;
+	unsigned long ubytes, mbytes;
+	int result;
+	unsigned char __user *buf;
+
+	result = 0;
+	buf = segment->buf;
+	ubytes = segment->bufsz;
+	mbytes = segment->memsz;
+	maddr = segment->mem;
+
+	result = kimage_set_destination(image, maddr);
+	if (result < 0)
+		goto out;
+
+	while (mbytes) {
+		struct page *page;
+		char *ptr;
+		size_t uchunk, mchunk;
+
+		page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
+		if (!page) {
+			result  = -ENOMEM;
+			goto out;
+		}
+		result = kimage_add_page(image, mf_kexec_page_to_pfn(page)
+								<< PAGE_SHIFT);
+		if (result < 0)
+			goto out;
+
+		ptr = kmap(page);
+		/* Start with a clear page */
+		clear_page(ptr);
+		ptr += maddr & ~PAGE_MASK;
+		mchunk = PAGE_SIZE - (maddr & ~PAGE_MASK);
+		if (mchunk > mbytes)
+			mchunk = mbytes;
+
+		uchunk = mchunk;
+		if (uchunk > ubytes)
+			uchunk = ubytes;
+
+		result = copy_from_user(ptr, buf, uchunk);
+		kunmap(page);
+		if (result) {
+			result = -EFAULT;
+			goto out;
+		}
+		ubytes -= uchunk;
+		maddr  += mchunk;
+		buf    += mchunk;
+		mbytes -= mchunk;
+	}
+out:
+	return result;
+}
+
+static int kimage_load_segment(struct kimage *image,
+				struct kexec_segment *segment)
+{
+	return kimage_load_normal_segment(image, segment);
+}
+
+long firmware_sys_kexec_load(unsigned long entry, unsigned long nr_segments,
+				struct kexec_segment __user *segments,
+				unsigned long flags)
+{
+	struct kimage **dest_image, *image = NULL;
+	int result = 0;
+
+	dest_image = &kexec_image;
+	if (flags & KEXEC_ON_CRASH)
+		dest_image = &kexec_crash_image;
+	if (nr_segments > 0) {
+		unsigned long i;
+
+		/* Loading another kernel to reboot into */
+		if ((flags & KEXEC_ON_CRASH) == 0)
+			result = kimage_normal_alloc(&image, entry,
+							nr_segments, segments);
+		/* Loading another kernel to switch to if this one crashes */
+		else if (flags & KEXEC_ON_CRASH) {
+			/* Free any current crash dump kernel before
+			 * we corrupt it.
+			 */
+			mf_kexec_unload(image);
+			kimage_free(xchg(&kexec_crash_image, NULL));
+			result = kimage_crash_alloc(&image, entry,
+						     nr_segments, segments);
+		}
+		if (result)
+			goto out;
+
+		if (flags & KEXEC_PRESERVE_CONTEXT)
+			image->preserve_context = 1;
+		result = mf_kexec_prepare(image);
+		if (result)
+			goto out;
+
+		for (i = 0; i < nr_segments; i++) {
+			result = kimage_load_segment(image, &image->segment[i]);
+			if (result)
+				goto out;
+		}
+		kimage_terminate(image);
+	}
+
+	result = mf_kexec_load(image);
+
+	if (result)
+		goto out;
+
+	/* Install the new kernel, and uninstall the old */
+	image = xchg(dest_image, image);
+
+out:
+	mf_kexec_unload(image);
+
+	kimage_free(image);
+
+	return result;
+}
+
+void firmware_crash_kexec(struct pt_regs *regs)
+{
+	struct pt_regs fixed_regs;
+
+	crash_setup_regs(&fixed_regs, regs);
+	crash_save_vmcoreinfo();
+	machine_crash_shutdown(&fixed_regs);
+	mf_kexec(kexec_crash_image);
+}
+
+int firmware_kernel_kexec(void)
+{
+	kernel_restart_prepare(NULL);
+	printk(KERN_EMERG "Starting new kernel\n");
+	mf_kexec_shutdown();
+	mf_kexec(kexec_image);
+
+	return 0;
+}
diff --git a/kernel/kexec.c b/kernel/kexec.c
index 5e4bd78..9f3b6cb 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -38,6 +38,10 @@
 #include <asm/io.h>
 #include <asm/sections.h>
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+bool kexec_use_firmware = false;
+#endif
+
 /* Per cpu memory for storing cpu states in case of system crash. */
 note_buf_t __percpu *crash_notes;
 
@@ -924,7 +928,7 @@ static int kimage_load_segment(struct kimage *image,
  *   the devices in a consistent state so a later kernel can
  *   reinitialize them.
  *
- * - A machine specific part that includes the syscall number
+ * - A machine/firmware specific part that includes the syscall number
  *   and the copies the image to it's final destination.  And
  *   jumps into the image at entry.
  *
@@ -978,6 +982,17 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 	if (!mutex_trylock(&kexec_mutex))
 		return -EBUSY;
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_use_firmware) {
+		result = firmware_sys_kexec_load(entry, nr_segments,
+							segments, flags);
+
+		mutex_unlock(&kexec_mutex);
+
+		return result;
+	}
+#endif
+
 	dest_image = &kexec_image;
 	if (flags & KEXEC_ON_CRASH)
 		dest_image = &kexec_crash_image;
@@ -1091,10 +1106,17 @@ void crash_kexec(struct pt_regs *regs)
 		if (kexec_crash_image) {
 			struct pt_regs fixed_regs;
 
-			crash_setup_regs(&fixed_regs, regs);
-			crash_save_vmcoreinfo();
-			machine_crash_shutdown(&fixed_regs);
-			machine_kexec(kexec_crash_image);
+#ifdef CONFIG_KEXEC_FIRMWARE
+			if (kexec_use_firmware)
+				firmware_crash_kexec(regs);
+			else
+#endif
+			{
+				crash_setup_regs(&fixed_regs, regs);
+				crash_save_vmcoreinfo();
+				machine_crash_shutdown(&fixed_regs);
+				machine_kexec(kexec_crash_image);
+			}
 		}
 		mutex_unlock(&kexec_mutex);
 	}
@@ -1132,6 +1154,13 @@ int crash_shrink_memory(unsigned long new_size)
 
 	mutex_lock(&kexec_mutex);
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_use_firmware) {
+		ret = -ENOSYS;
+		goto unlock;
+	}
+#endif
+
 	if (kexec_crash_image) {
 		ret = -ENOENT;
 		goto unlock;
@@ -1536,6 +1565,13 @@ int kernel_kexec(void)
 		goto Unlock;
 	}
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_use_firmware) {
+		error = firmware_kernel_kexec();
+		goto Unlock;
+	}
+#endif
+
 #ifdef CONFIG_KEXEC_JUMP
 	if (kexec_image->preserve_context) {
 		lock_system_sleep();
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 02:21:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 02:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To368-0004ua-DU; Thu, 27 Dec 2012 02:21:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl>)
	id 1To366-0004tL-6u
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 02:21:26 +0000
Received: from [85.158.139.211:10389] by server-13.bemta-5.messagelabs.com id
	33/4A-10716-5A0BBD05; Thu, 27 Dec 2012 02:21:25 +0000
X-Env-Sender: SRS0=iuTH=KV=oracle.com=daniel.kiper@net-space.pl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356574881!21973186!2
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11530 invoked from network); 27 Dec 2012 02:21:23 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-16.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 27 Dec 2012 02:21:23 -0000
Received: from dev-00.local.net-space.pl ([192.168.1.5]:41014 "EHLO
	localhost.localdomain" TLS-CIPHER: <none>)
	by router-fw-old.local.net-space.pl with ESMTP id S1645138Ab2L0CTA
	(ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Thu, 27 Dec 2012 03:19:00 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: andrew.cooper3@citrix.com, ebiederm@xmission.com, hpa@zytor.com,
	jbeulich@suse.com, konrad.wilk@oracle.com, maxim.uvarov@oracle.com,
	mingo@redhat.com, tglx@linutronix.de, vgoyal@redhat.com,
	x86@kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 03:18:50 +0100
Message-Id: <1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
X-Mailer: git-send-email 1.5.6.5
In-Reply-To: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
X-Bogosity: No, spamicity=0.057879
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Subject: [Xen-devel] [PATCH v3 01/11] kexec: introduce kexec firmware support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some kexec/kdump implementations (e.g. Xen PVOPS) cannot use the default
Linux infrastructure and require support from the firmware and/or
hypervisor. To cope with that problem, the kexec firmware infrastructure
was introduced. It allows a developer to use all kexec/kdump features
of a given firmware or hypervisor.

v3 - suggestions/fixes:
   - replace kexec_ops struct by kexec firmware infrastructure
     (suggested by Eric Biederman).

v2 - suggestions/fixes:
   - add comment for kexec_ops.crash_alloc_temp_store member
     (suggested by Konrad Rzeszutek Wilk),
   - simplify kexec_ops usage
     (suggested by Konrad Rzeszutek Wilk).

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
---
 include/linux/kexec.h   |   26 ++-
 kernel/Makefile         |    1 +
 kernel/kexec-firmware.c |  743 +++++++++++++++++++++++++++++++++++++++++++++++
 kernel/kexec.c          |   46 +++-
 4 files changed, 809 insertions(+), 7 deletions(-)
 create mode 100644 kernel/kexec-firmware.c

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index d0b8458..9568457 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -116,17 +116,34 @@ struct kimage {
 #endif
 };
 
-
-
 /* kexec interface functions */
 extern void machine_kexec(struct kimage *image);
 extern int machine_kexec_prepare(struct kimage *image);
 extern void machine_kexec_cleanup(struct kimage *image);
+extern struct page *mf_kexec_kimage_alloc_pages(gfp_t gfp_mask,
+						unsigned int order,
+						unsigned long limit);
+extern void mf_kexec_kimage_free_pages(struct page *page);
+extern unsigned long mf_kexec_page_to_pfn(struct page *page);
+extern struct page *mf_kexec_pfn_to_page(unsigned long mfn);
+extern unsigned long mf_kexec_virt_to_phys(volatile void *address);
+extern void *mf_kexec_phys_to_virt(unsigned long address);
+extern int mf_kexec_prepare(struct kimage *image);
+extern int mf_kexec_load(struct kimage *image);
+extern void mf_kexec_cleanup(struct kimage *image);
+extern void mf_kexec_unload(struct kimage *image);
+extern void mf_kexec_shutdown(void);
+extern void mf_kexec(struct kimage *image);
 extern asmlinkage long sys_kexec_load(unsigned long entry,
 					unsigned long nr_segments,
 					struct kexec_segment __user *segments,
 					unsigned long flags);
+extern long firmware_sys_kexec_load(unsigned long entry,
+					unsigned long nr_segments,
+					struct kexec_segment __user *segments,
+					unsigned long flags);
 extern int kernel_kexec(void);
+extern int firmware_kernel_kexec(void);
 #ifdef CONFIG_COMPAT
 extern asmlinkage long compat_sys_kexec_load(unsigned long entry,
 				unsigned long nr_segments,
@@ -135,7 +152,10 @@ extern asmlinkage long compat_sys_kexec_load(unsigned long entry,
 #endif
 extern struct page *kimage_alloc_control_pages(struct kimage *image,
 						unsigned int order);
+extern struct page *firmware_kimage_alloc_control_pages(struct kimage *image,
+							unsigned int order);
 extern void crash_kexec(struct pt_regs *);
+extern void firmware_crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 void crash_save_vmcoreinfo(void);
@@ -168,6 +188,8 @@ unsigned long paddr_vmcoreinfo_note(void);
 #define VMCOREINFO_CONFIG(name) \
 	vmcoreinfo_append_str("CONFIG_%s=y\n", #name)
 
+extern bool kexec_use_firmware;
+
 extern struct kimage *kexec_image;
 extern struct kimage *kexec_crash_image;
 
diff --git a/kernel/Makefile b/kernel/Makefile
index 6c072b6..bc96b2f 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -58,6 +58,7 @@ obj-$(CONFIG_MODULE_SIG) += module_signing.o modsign_pubkey.o modsign_certificat
 obj-$(CONFIG_KALLSYMS) += kallsyms.o
 obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
 obj-$(CONFIG_KEXEC) += kexec.o
+obj-$(CONFIG_KEXEC_FIRMWARE) += kexec-firmware.o
 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
 obj-$(CONFIG_COMPAT) += compat.o
 obj-$(CONFIG_CGROUPS) += cgroup.o
diff --git a/kernel/kexec-firmware.c b/kernel/kexec-firmware.c
new file mode 100644
index 0000000..f6ddd4c
--- /dev/null
+++ b/kernel/kexec-firmware.c
@@ -0,0 +1,743 @@
+/*
+ * Copyright (C) 2002-2004 Eric Biederman  <ebiederm@xmission.com>
+ * Copyright (C) 2012 Daniel Kiper, Oracle Corporation
+ *
+ * Most of the code here is a copy of kernel/kexec.c.
+ *
+ * This source code is licensed under the GNU General Public License,
+ * Version 2.  See the file COPYING for more details.
+ */
+
+#include <linux/atomic.h>
+#include <linux/errno.h>
+#include <linux/highmem.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/reboot.h>
+#include <linux/slab.h>
+
+#include <asm/uaccess.h>
+
+/*
+ * KIMAGE_NO_DEST is an impossible destination address, used for
+ * allocating pages whose destination address we do not care about.
+ */
+#define KIMAGE_NO_DEST (-1UL)
+
+static int kimage_is_destination_range(struct kimage *image,
+				       unsigned long start, unsigned long end);
+static struct page *kimage_alloc_page(struct kimage *image,
+				       gfp_t gfp_mask,
+				       unsigned long dest);
+
+static int do_kimage_alloc(struct kimage **rimage, unsigned long entry,
+	                    unsigned long nr_segments,
+                            struct kexec_segment __user *segments)
+{
+	size_t segment_bytes;
+	struct kimage *image;
+	unsigned long i;
+	int result;
+
+	/* Allocate a controlling structure */
+	result = -ENOMEM;
+	image = kzalloc(sizeof(*image), GFP_KERNEL);
+	if (!image)
+		goto out;
+
+	image->head = 0;
+	image->entry = &image->head;
+	image->last_entry = &image->head;
+	image->control_page = ~0; /* By default this does not apply */
+	image->start = entry;
+	image->type = KEXEC_TYPE_DEFAULT;
+
+	/* Initialize the list of control pages */
+	INIT_LIST_HEAD(&image->control_pages);
+
+	/* Initialize the list of destination pages */
+	INIT_LIST_HEAD(&image->dest_pages);
+
+	/* Initialize the list of unusable pages */
+	INIT_LIST_HEAD(&image->unuseable_pages);
+
+	/* Read in the segments */
+	image->nr_segments = nr_segments;
+	segment_bytes = nr_segments * sizeof(*segments);
+	result = copy_from_user(image->segment, segments, segment_bytes);
+	if (result) {
+		result = -EFAULT;
+		goto out;
+	}
+
+	/*
+	 * Verify we have good destination addresses.  The caller is
+	 * responsible for making certain we don't attempt to load
+	 * the new image into invalid or reserved areas of RAM.  This
+	 * just verifies it is an address we can use.
+	 *
+	 * Since the kernel does everything in page size chunks, ensure
+	 * the destination addresses are page aligned.  Too many
+	 * special cases crop up when we don't do this.  The most
+	 * insidious is getting overlapping destination addresses
+	 * simply because addresses are changed to page size
+	 * granularity.
+	 */
+	result = -EADDRNOTAVAIL;
+	for (i = 0; i < nr_segments; i++) {
+		unsigned long mstart, mend;
+
+		mstart = image->segment[i].mem;
+		mend   = mstart + image->segment[i].memsz;
+		if ((mstart & ~PAGE_MASK) || (mend & ~PAGE_MASK))
+			goto out;
+		if (mend >= KEXEC_DESTINATION_MEMORY_LIMIT)
+			goto out;
+	}
+
+	/* Verify our destination addresses do not overlap.
+	 * If we allowed overlapping destination addresses
+	 * through, very weird things could happen with no
+	 * easy explanation as one segment stomps on another.
+	 */
+	result = -EINVAL;
+	for (i = 0; i < nr_segments; i++) {
+		unsigned long mstart, mend;
+		unsigned long j;
+
+		mstart = image->segment[i].mem;
+		mend   = mstart + image->segment[i].memsz;
+		for (j = 0; j < i; j++) {
+			unsigned long pstart, pend;
+			pstart = image->segment[j].mem;
+			pend   = pstart + image->segment[j].memsz;
+			/* Do the segments overlap ? */
+			if ((mend > pstart) && (mstart < pend))
+				goto out;
+		}
+	}
+
+	/* Ensure our buffer sizes are strictly less than
+	 * our memory sizes.  This should always be the case,
+	 * and it is easier to check up front than to be surprised
+	 * later on.
+	 */
+	result = -EINVAL;
+	for (i = 0; i < nr_segments; i++) {
+		if (image->segment[i].bufsz > image->segment[i].memsz)
+			goto out;
+	}
+
+	result = 0;
+out:
+	if (result == 0)
+		*rimage = image;
+	else
+		kfree(image);
+
+	return result;
+
+}
+
+static int kimage_normal_alloc(struct kimage **rimage, unsigned long entry,
+				unsigned long nr_segments,
+				struct kexec_segment __user *segments)
+{
+	int result;
+	struct kimage *image;
+
+	/* Allocate and initialize a controlling structure */
+	image = NULL;
+	result = do_kimage_alloc(&image, entry, nr_segments, segments);
+	if (result)
+		goto out;
+
+	*rimage = image;
+
+	/*
+	 * Find a location for the control code buffer, and add it to
+	 * the vector of segments so that its pages will also be
+	 * counted as destination pages.
+	 */
+	result = -ENOMEM;
+	image->control_code_page = firmware_kimage_alloc_control_pages(image,
+					   get_order(KEXEC_CONTROL_PAGE_SIZE));
+	if (!image->control_code_page) {
+		printk(KERN_ERR "Could not allocate control_code_buffer\n");
+		goto out;
+	}
+
+	image->swap_page = firmware_kimage_alloc_control_pages(image, 0);
+	if (!image->swap_page) {
+		printk(KERN_ERR "Could not allocate swap buffer\n");
+		goto out;
+	}
+
+	result = 0;
+ out:
+	if (result == 0)
+		*rimage = image;
+	else
+		kfree(image);
+
+	return result;
+}
+
+static int kimage_crash_alloc(struct kimage **rimage, unsigned long entry,
+				unsigned long nr_segments,
+				struct kexec_segment __user *segments)
+{
+	int result;
+	struct kimage *image;
+	unsigned long i;
+
+	image = NULL;
+	/* Verify we have a valid entry point */
+	if ((entry < crashk_res.start) || (entry > crashk_res.end)) {
+		result = -EADDRNOTAVAIL;
+		goto out;
+	}
+
+	/* Allocate and initialize a controlling structure */
+	result = do_kimage_alloc(&image, entry, nr_segments, segments);
+	if (result)
+		goto out;
+
+	/* Enable the special crash kernel control page
+	 * allocation policy.
+	 */
+	image->control_page = crashk_res.start;
+	image->type = KEXEC_TYPE_CRASH;
+
+	/*
+	 * Verify we have good destination addresses.  Normally
+	 * the caller is responsible for making certain we don't
+	 * attempt to load the new image into invalid or reserved
+	 * areas of RAM.  But crash kernels are preloaded into a
+	 * reserved area of RAM.  We must ensure the addresses
+	 * are in the reserved area otherwise preloading the
+	 * kernel could corrupt things.
+	 */
+	result = -EADDRNOTAVAIL;
+	for (i = 0; i < nr_segments; i++) {
+		unsigned long mstart, mend;
+
+		mstart = image->segment[i].mem;
+		mend = mstart + image->segment[i].memsz - 1;
+		/* Ensure we are within the crash kernel limits */
+		if ((mstart < crashk_res.start) || (mend > crashk_res.end))
+			goto out;
+	}
+
+	/*
+	 * Find a location for the control code buffer, and add it to
+	 * the vector of segments so that its pages will also be
+	 * counted as destination pages.
+	 */
+	result = -ENOMEM;
+	image->control_code_page = firmware_kimage_alloc_control_pages(image,
+					   get_order(KEXEC_CONTROL_PAGE_SIZE));
+	if (!image->control_code_page) {
+		printk(KERN_ERR "Could not allocate control_code_buffer\n");
+		goto out;
+	}
+
+	result = 0;
+out:
+	if (result == 0)
+		*rimage = image;
+	else
+		kfree(image);
+
+	return result;
+}
+
+static int kimage_is_destination_range(struct kimage *image,
+					unsigned long start,
+					unsigned long end)
+{
+	unsigned long i;
+
+	for (i = 0; i < image->nr_segments; i++) {
+		unsigned long mstart, mend;
+
+		mstart = image->segment[i].mem;
+		mend = mstart + image->segment[i].memsz;
+		if ((end > mstart) && (start < mend))
+			return 1;
+	}
+
+	return 0;
+}
+
+static void kimage_free_page_list(struct list_head *list)
+{
+	struct list_head *pos, *next;
+
+	list_for_each_safe(pos, next, list) {
+		struct page *page;
+
+		page = list_entry(pos, struct page, lru);
+		list_del(&page->lru);
+		mf_kexec_kimage_free_pages(page);
+	}
+}
+
+static struct page *kimage_alloc_normal_control_pages(struct kimage *image,
+							unsigned int order)
+{
+	/* Control pages are special, they are the intermediaries
+	 * that are needed while we copy the rest of the pages
+	 * to their final resting place.  As such they must
+	 * not conflict with either the destination addresses
+	 * or memory the kernel is already using.
+	 *
+	 * The only case where we really need more than one of
+	 * these are for architectures where we cannot disable
+	 * the MMU and must instead generate an identity mapped
+	 * page table for all of the memory.
+	 *
+	 * At worst this runs in O(N) of the image size.
+	 */
+	struct list_head extra_pages;
+	struct page *pages;
+	unsigned int count;
+
+	count = 1 << order;
+	INIT_LIST_HEAD(&extra_pages);
+
+	/* Loop while I can allocate a page and the page allocated
+	 * is a destination page.
+	 */
+	do {
+		unsigned long pfn, epfn, addr, eaddr;
+
+		pages = mf_kexec_kimage_alloc_pages(GFP_KERNEL, order,
+							KEXEC_CONTROL_MEMORY_LIMIT);
+		if (!pages)
+			break;
+		pfn   = mf_kexec_page_to_pfn(pages);
+		epfn  = pfn + count;
+		addr  = pfn << PAGE_SHIFT;
+		eaddr = epfn << PAGE_SHIFT;
+		if ((epfn >= (KEXEC_CONTROL_MEMORY_LIMIT >> PAGE_SHIFT)) ||
+			      kimage_is_destination_range(image, addr, eaddr)) {
+			list_add(&pages->lru, &extra_pages);
+			pages = NULL;
+		}
+	} while (!pages);
+
+	if (pages) {
+		/* Remember the allocated page... */
+		list_add(&pages->lru, &image->control_pages);
+
+		/* Because the page is already in its destination
+		 * location we will never allocate another page at
+		 * that address.  Therefore mf_kexec_kimage_alloc_pages
+		 * will not return it (again) and we don't need
+		 * to give it an entry in image->segment[].
+		 */
+	}
+	/* Deal with the destination pages I have inadvertently allocated.
+	 *
+	 * Ideally I would convert multi-page allocations into single
+	 * page allocations, and add everything to image->dest_pages.
+	 *
+	 * For now it is simpler to just free the pages.
+	 */
+	kimage_free_page_list(&extra_pages);
+
+	return pages;
+}
+
+struct page *firmware_kimage_alloc_control_pages(struct kimage *image,
+							unsigned int order)
+{
+	return kimage_alloc_normal_control_pages(image, order);
+}
+
+static int kimage_add_entry(struct kimage *image, kimage_entry_t entry)
+{
+	if (*image->entry != 0)
+		image->entry++;
+
+	if (image->entry == image->last_entry) {
+		kimage_entry_t *ind_page;
+		struct page *page;
+
+		page = kimage_alloc_page(image, GFP_KERNEL, KIMAGE_NO_DEST);
+		if (!page)
+			return -ENOMEM;
+
+		ind_page = page_address(page);
+		*image->entry = mf_kexec_virt_to_phys(ind_page) | IND_INDIRECTION;
+		image->entry = ind_page;
+		image->last_entry = ind_page +
+				      ((PAGE_SIZE/sizeof(kimage_entry_t)) - 1);
+	}
+	*image->entry = entry;
+	image->entry++;
+	*image->entry = 0;
+
+	return 0;
+}
+
+static int kimage_set_destination(struct kimage *image,
+				   unsigned long destination)
+{
+	int result;
+
+	destination &= PAGE_MASK;
+	result = kimage_add_entry(image, destination | IND_DESTINATION);
+	if (result == 0)
+		image->destination = destination;
+
+	return result;
+}
+
+
+static int kimage_add_page(struct kimage *image, unsigned long page)
+{
+	int result;
+
+	page &= PAGE_MASK;
+	result = kimage_add_entry(image, page | IND_SOURCE);
+	if (result == 0)
+		image->destination += PAGE_SIZE;
+
+	return result;
+}
+
+
+static void kimage_free_extra_pages(struct kimage *image)
+{
+	/* Walk through and free any extra destination pages I may have */
+	kimage_free_page_list(&image->dest_pages);
+
+	/* Walk through and free any unusable pages I have cached */
+	kimage_free_page_list(&image->unuseable_pages);
+
+}
+static void kimage_terminate(struct kimage *image)
+{
+	if (*image->entry != 0)
+		image->entry++;
+
+	*image->entry = IND_DONE;
+}
+
+#define for_each_kimage_entry(image, ptr, entry) \
+	for (ptr = &image->head; (entry = *ptr) && !(entry & IND_DONE); \
+		ptr = (entry & IND_INDIRECTION)? \
+			mf_kexec_phys_to_virt((entry & PAGE_MASK)): ptr +1)
+
+static void kimage_free_entry(kimage_entry_t entry)
+{
+	struct page *page;
+
+	page = mf_kexec_pfn_to_page(entry >> PAGE_SHIFT);
+	mf_kexec_kimage_free_pages(page);
+}
+
+static void kimage_free(struct kimage *image)
+{
+	kimage_entry_t *ptr, entry;
+	kimage_entry_t ind = 0;
+
+	if (!image)
+		return;
+
+	kimage_free_extra_pages(image);
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_INDIRECTION) {
+			/* Free the previous indirection page */
+			if (ind & IND_INDIRECTION)
+				kimage_free_entry(ind);
+			/* Save this indirection page until we are
+			 * done with it.
+			 */
+			ind = entry;
+		}
+		else if (entry & IND_SOURCE)
+			kimage_free_entry(entry);
+	}
+	/* Free the final indirection page */
+	if (ind & IND_INDIRECTION)
+		kimage_free_entry(ind);
+
+	/* Handle any machine specific cleanup */
+	mf_kexec_cleanup(image);
+
+	/* Free the kexec control pages... */
+	kimage_free_page_list(&image->control_pages);
+	kfree(image);
+}
+
+static kimage_entry_t *kimage_dst_used(struct kimage *image,
+					unsigned long page)
+{
+	kimage_entry_t *ptr, entry;
+	unsigned long destination = 0;
+
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION)
+			destination = entry & PAGE_MASK;
+		else if (entry & IND_SOURCE) {
+			if (page == destination)
+				return ptr;
+			destination += PAGE_SIZE;
+		}
+	}
+
+	return NULL;
+}
+
+static struct page *kimage_alloc_page(struct kimage *image,
+					gfp_t gfp_mask,
+					unsigned long destination)
+{
+	/*
+	 * Here we implement safeguards to ensure that a source page
+	 * is not copied to its destination page before the data on
+	 * the destination page is no longer useful.
+	 *
+	 * To do this we maintain the invariant that a source page is
+	 * either its own destination page, or it is not a
+	 * destination page at all.
+	 *
+	 * That is slightly stronger than required, but the proof
+	 * that no problems will occur is trivial, and the
+	 * implementation is simple to verify.
+	 *
+	 * When allocating all pages normally this algorithm will run
+	 * in O(N) time, but in the worst case it will run in O(N^2)
+	 * time.   If the runtime is a problem the data structures can
+	 * be fixed.
+	 */
+	struct page *page;
+	unsigned long addr;
+
+	/*
+	 * Walk through the list of destination pages, and see if I
+	 * have a match.
+	 */
+	list_for_each_entry(page, &image->dest_pages, lru) {
+		addr = mf_kexec_page_to_pfn(page) << PAGE_SHIFT;
+		if (addr == destination) {
+			list_del(&page->lru);
+			return page;
+		}
+	}
+	page = NULL;
+	while (1) {
+		kimage_entry_t *old;
+
+		/* Allocate a page, if we run out of memory give up */
+		page = mf_kexec_kimage_alloc_pages(gfp_mask, 0,
+							KEXEC_SOURCE_MEMORY_LIMIT);
+		if (!page)
+			return NULL;
+		/* If the page cannot be used, file it away */
+		if (mf_kexec_page_to_pfn(page) >
+				(KEXEC_SOURCE_MEMORY_LIMIT >> PAGE_SHIFT)) {
+			list_add(&page->lru, &image->unuseable_pages);
+			continue;
+		}
+		addr = mf_kexec_page_to_pfn(page) << PAGE_SHIFT;
+
+		/* If it is the destination page we want, use it */
+		if (addr == destination)
+			break;
+
+		/* If the page is not a destination page use it */
+		if (!kimage_is_destination_range(image, addr,
+						  addr + PAGE_SIZE))
+			break;
+
+		/*
+		 * I know that the page is someone's destination page.
+		 * See if there is already a source page for this
+		 * destination page.  And if so swap the source pages.
+		 */
+		old = kimage_dst_used(image, addr);
+		if (old) {
+			/* If so move it */
+			unsigned long old_addr;
+			struct page *old_page;
+
+			old_addr = *old & PAGE_MASK;
+			old_page = mf_kexec_pfn_to_page(old_addr >> PAGE_SHIFT);
+			copy_highpage(page, old_page);
+			*old = addr | (*old & ~PAGE_MASK);
+
+			/* The old page I have found cannot be a
+			 * destination page, so return it if its
+			 * gfp_flags honor the ones passed in.
+			 */
+			if (!(gfp_mask & __GFP_HIGHMEM) &&
+			    PageHighMem(old_page)) {
+				mf_kexec_kimage_free_pages(old_page);
+				continue;
+			}
+			addr = old_addr;
+			page = old_page;
+			break;
+		}
+		else {
+			/* Place the page on the destination list; I
+			 * will use it later.
+			 */
+			list_add(&page->lru, &image->dest_pages);
+		}
+	}
+
+	return page;
+}
+
+static int kimage_load_normal_segment(struct kimage *image,
+					 struct kexec_segment *segment)
+{
+	unsigned long maddr;
+	unsigned long ubytes, mbytes;
+	int result;
+	unsigned char __user *buf;
+
+	result = 0;
+	buf = segment->buf;
+	ubytes = segment->bufsz;
+	mbytes = segment->memsz;
+	maddr = segment->mem;
+
+	result = kimage_set_destination(image, maddr);
+	if (result < 0)
+		goto out;
+
+	while (mbytes) {
+		struct page *page;
+		char *ptr;
+		size_t uchunk, mchunk;
+
+		page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
+		if (!page) {
+			result  = -ENOMEM;
+			goto out;
+		}
+		result = kimage_add_page(image, mf_kexec_page_to_pfn(page)
+								<< PAGE_SHIFT);
+		if (result < 0)
+			goto out;
+
+		ptr = kmap(page);
+		/* Start with a clear page */
+		clear_page(ptr);
+		ptr += maddr & ~PAGE_MASK;
+		mchunk = PAGE_SIZE - (maddr & ~PAGE_MASK);
+		if (mchunk > mbytes)
+			mchunk = mbytes;
+
+		uchunk = mchunk;
+		if (uchunk > ubytes)
+			uchunk = ubytes;
+
+		result = copy_from_user(ptr, buf, uchunk);
+		kunmap(page);
+		if (result) {
+			result = -EFAULT;
+			goto out;
+		}
+		ubytes -= uchunk;
+		maddr  += mchunk;
+		buf    += mchunk;
+		mbytes -= mchunk;
+	}
+out:
+	return result;
+}
+
+static int kimage_load_segment(struct kimage *image,
+				struct kexec_segment *segment)
+{
+	return kimage_load_normal_segment(image, segment);
+}
+
+long firmware_sys_kexec_load(unsigned long entry, unsigned long nr_segments,
+				struct kexec_segment __user *segments,
+				unsigned long flags)
+{
+	struct kimage **dest_image, *image = NULL;
+	int result = 0;
+
+	dest_image = &kexec_image;
+	if (flags & KEXEC_ON_CRASH)
+		dest_image = &kexec_crash_image;
+	if (nr_segments > 0) {
+		unsigned long i;
+
+		/* Loading another kernel to reboot into */
+		if ((flags & KEXEC_ON_CRASH) == 0)
+			result = kimage_normal_alloc(&image, entry,
+							nr_segments, segments);
+		/* Loading another kernel to switch to if this one crashes */
+		else if (flags & KEXEC_ON_CRASH) {
+			/* Free any current crash dump kernel before
+			 * we corrupt it.
+			 */
+			mf_kexec_unload(image);
+			kimage_free(xchg(&kexec_crash_image, NULL));
+			result = kimage_crash_alloc(&image, entry,
+						     nr_segments, segments);
+		}
+		if (result)
+			goto out;
+
+		if (flags & KEXEC_PRESERVE_CONTEXT)
+			image->preserve_context = 1;
+		result = mf_kexec_prepare(image);
+		if (result)
+			goto out;
+
+		for (i = 0; i < nr_segments; i++) {
+			result = kimage_load_segment(image, &image->segment[i]);
+			if (result)
+				goto out;
+		}
+		kimage_terminate(image);
+	}
+
+	result = mf_kexec_load(image);
+
+	if (result)
+		goto out;
+
+	/* Install the new kernel, and uninstall the old */
+	image = xchg(dest_image, image);
+
+out:
+	mf_kexec_unload(image);
+
+	kimage_free(image);
+
+	return result;
+}
+
+void firmware_crash_kexec(struct pt_regs *regs)
+{
+	struct pt_regs fixed_regs;
+
+	crash_setup_regs(&fixed_regs, regs);
+	crash_save_vmcoreinfo();
+	machine_crash_shutdown(&fixed_regs);
+	mf_kexec(kexec_crash_image);
+}
+
+int firmware_kernel_kexec(void)
+{
+	kernel_restart_prepare(NULL);
+	printk(KERN_EMERG "Starting new kernel\n");
+	mf_kexec_shutdown();
+	mf_kexec(kexec_image);
+
+	return 0;
+}
diff --git a/kernel/kexec.c b/kernel/kexec.c
index 5e4bd78..9f3b6cb 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -38,6 +38,10 @@
 #include <asm/io.h>
 #include <asm/sections.h>
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+bool kexec_use_firmware = false;
+#endif
+
 /* Per cpu memory for storing cpu states in case of system crash. */
 note_buf_t __percpu *crash_notes;
 
@@ -924,7 +928,7 @@ static int kimage_load_segment(struct kimage *image,
  *   the devices in a consistent state so a later kernel can
  *   reinitialize them.
  *
- * - A machine specific part that includes the syscall number
+ * - A machine/firmware specific part that includes the syscall number
  *   and the copies the image to it's final destination.  And
  *   jumps into the image at entry.
  *
@@ -978,6 +982,17 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 	if (!mutex_trylock(&kexec_mutex))
 		return -EBUSY;
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_use_firmware) {
+		result = firmware_sys_kexec_load(entry, nr_segments,
+							segments, flags);
+
+		mutex_unlock(&kexec_mutex);
+
+		return result;
+	}
+#endif
+
 	dest_image = &kexec_image;
 	if (flags & KEXEC_ON_CRASH)
 		dest_image = &kexec_crash_image;
@@ -1091,10 +1106,17 @@ void crash_kexec(struct pt_regs *regs)
 		if (kexec_crash_image) {
 			struct pt_regs fixed_regs;
 
-			crash_setup_regs(&fixed_regs, regs);
-			crash_save_vmcoreinfo();
-			machine_crash_shutdown(&fixed_regs);
-			machine_kexec(kexec_crash_image);
+#ifdef CONFIG_KEXEC_FIRMWARE
+			if (kexec_use_firmware)
+				firmware_crash_kexec(regs);
+			else
+#endif
+			{
+				crash_setup_regs(&fixed_regs, regs);
+				crash_save_vmcoreinfo();
+				machine_crash_shutdown(&fixed_regs);
+				machine_kexec(kexec_crash_image);
+			}
 		}
 		mutex_unlock(&kexec_mutex);
 	}
@@ -1132,6 +1154,13 @@ int crash_shrink_memory(unsigned long new_size)
 
 	mutex_lock(&kexec_mutex);
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_use_firmware) {
+		ret = -ENOSYS;
+		goto unlock;
+	}
+#endif
+
 	if (kexec_crash_image) {
 		ret = -ENOENT;
 		goto unlock;
@@ -1536,6 +1565,13 @@ int kernel_kexec(void)
 		goto Unlock;
 	}
 
+#ifdef CONFIG_KEXEC_FIRMWARE
+	if (kexec_use_firmware) {
+		error = firmware_kernel_kexec();
+		goto Unlock;
+	}
+#endif
+
 #ifdef CONFIG_KEXEC_JUMP
 	if (kexec_image->preserve_context) {
 		lock_system_sleep();
-- 
1.5.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
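The chunked copy loop at the top of kimage_load_normal_segment() in the patch above can be modelled stand-alone in user space. This is an illustrative sketch, not the kernel code: load_segment, PAGE_SIZE and the buffer handling here are stand-ins, assuming a pre-zeroed destination in place of the kernel's zeroed pages.

```c
/*
 * User-space model of the chunked copy loop in
 * kimage_load_normal_segment(): each iteration copies at most one
 * page worth of data (mchunk), clamped to the bytes the caller
 * actually supplied (uchunk); the tail of the last page is left as
 * zero padding, mirroring the kernel's maddr += mchunk advance.
 * Illustrative sketch only, not kernel API.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

static size_t load_segment(char *dst, const char *src,
			   size_t mbytes, size_t ubytes)
{
	size_t moff = 0, uoff = 0;

	while (mbytes) {
		/* room left up to the end of the current page */
		size_t mchunk = PAGE_SIZE - (moff % PAGE_SIZE);

		if (mchunk > mbytes)
			mchunk = mbytes;

		/* never copy more than the user actually provided */
		size_t uchunk = mchunk;
		if (uchunk > ubytes)
			uchunk = ubytes;

		memcpy(dst + moff, src + uoff, uchunk);

		ubytes -= uchunk;
		uoff   += uchunk;
		moff   += mchunk;	/* destination advances a full chunk */
		mbytes -= mchunk;
	}
	return uoff;
}
```

The destination offset advances by mchunk even on the final short copy, so any bytes past the user data stay zero, matching the padding behaviour of the kernel loop.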

From xen-devel-bounces@lists.xen.org Thu Dec 27 03:35:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 03:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To4F1-0007L9-LM; Thu, 27 Dec 2012 03:34:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1To4F0-0007L4-0N
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 03:34:42 +0000
Received: from [85.158.139.83:19629] by server-6.bemta-5.messagelabs.com id
	F2/DB-30498-1D1CBD05; Thu, 27 Dec 2012 03:34:41 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356579278!27493416!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17747 invoked from network); 27 Dec 2012 03:34:40 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Dec 2012 03:34:40 -0000
Received: from [162.162.161.83] (mf80536d0.tmodns.net [208.54.5.248])
	(authenticated bits=0)
	by mail.zytor.com (8.14.5/8.14.5) with ESMTP id qBR3Xkse007469
	(version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NO);
	Wed, 26 Dec 2012 19:33:50 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
MIME-Version: 1.0
From: "H. Peter Anvin" <hpa@zytor.com>
Date: Wed, 26 Dec 2012 19:33:37 -0800
To: Daniel Kiper <daniel.kiper@oracle.com>, andrew.cooper3@citrix.com,
	ebiederm@xmission.com, jbeulich@suse.com, konrad.wilk@oracle.com,
	maxim.uvarov@oracle.com, mingo@redhat.com, tglx@linutronix.de,
	vgoyal@redhat.com, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
	xen-devel@lists.xensource.com
Message-ID: <083de527-84b0-4450-8e89-055103305afa@email.android.com>
Subject: Re: [Xen-devel] [PATCH v3 02/11] x86/kexec: Add extra pointers to
	transition page table PGD, PUD, PMD and PTE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hmm... this code is being redone at the moment... this might conflict.

Daniel Kiper <daniel.kiper@oracle.com> wrote:

>Some implementations (e.g. Xen PVOPS) cannot use part of the identity
>page table to construct the transition page table. This means that they
>require separate PUDs, PMDs and PTEs for the virtual and physical
>(identity) mappings. To satisfy that requirement, add extra pointers
>for PGD, PUD, PMD and PTE and adjust the existing code.
>
>Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
>---
> arch/x86/include/asm/kexec.h       |   10 +++++++---
> arch/x86/kernel/machine_kexec_64.c |   12 ++++++------
> 2 files changed, 13 insertions(+), 9 deletions(-)
>
>diff --git a/arch/x86/include/asm/kexec.h
>b/arch/x86/include/asm/kexec.h
>index 6080d26..cedd204 100644
>--- a/arch/x86/include/asm/kexec.h
>+++ b/arch/x86/include/asm/kexec.h
>@@ -157,9 +157,13 @@ struct kimage_arch {
> };
> #else
> struct kimage_arch {
>-	pud_t *pud;
>-	pmd_t *pmd;
>-	pte_t *pte;
>+	pgd_t *pgd;
>+	pud_t *pud0;
>+	pud_t *pud1;
>+	pmd_t *pmd0;
>+	pmd_t *pmd1;
>+	pte_t *pte0;
>+	pte_t *pte1;
> };
> #endif
> 
>diff --git a/arch/x86/kernel/machine_kexec_64.c
>b/arch/x86/kernel/machine_kexec_64.c
>index b3ea9db..976e54b 100644
>--- a/arch/x86/kernel/machine_kexec_64.c
>+++ b/arch/x86/kernel/machine_kexec_64.c
>@@ -137,9 +137,9 @@ out:
> 
> static void free_transition_pgtable(struct kimage *image)
> {
>-	free_page((unsigned long)image->arch.pud);
>-	free_page((unsigned long)image->arch.pmd);
>-	free_page((unsigned long)image->arch.pte);
>+	free_page((unsigned long)image->arch.pud0);
>+	free_page((unsigned long)image->arch.pmd0);
>+	free_page((unsigned long)image->arch.pte0);
> }
> 
> static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
>@@ -157,7 +157,7 @@ static int init_transition_pgtable(struct kimage
>*image, pgd_t *pgd)
> 		pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
> 		if (!pud)
> 			goto err;
>-		image->arch.pud = pud;
>+		image->arch.pud0 = pud;
> 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
> 	}
> 	pud = pud_offset(pgd, vaddr);
>@@ -165,7 +165,7 @@ static int init_transition_pgtable(struct kimage
>*image, pgd_t *pgd)
> 		pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
> 		if (!pmd)
> 			goto err;
>-		image->arch.pmd = pmd;
>+		image->arch.pmd0 = pmd;
> 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
> 	}
> 	pmd = pmd_offset(pud, vaddr);
>@@ -173,7 +173,7 @@ static int init_transition_pgtable(struct kimage
>*image, pgd_t *pgd)
> 		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
> 		if (!pte)
> 			goto err;
>-		image->arch.pte = pte;
>+		image->arch.pte0 = pte;
> 		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
> 	}
> 	pte = pte_offset_kernel(pmd, vaddr);

-- 
Sent from my mobile phone. Please excuse brevity and lack of formatting.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
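The kimage_arch change in the quoted patch (pud0/pud1, pmd0/pmd1, pte0/pte1) amounts to keeping two sets of transition page-table pages: one for the kernel virtual mapping and one for the physical (identity) mapping. A minimal user-space model of that ownership might look as follows; the struct, TABLE_SIZE, and helper names are illustrative, not the actual kernel structures.

```c
/*
 * User-space model of the dual pointer sets added to kimage_arch:
 * index 0 backs the kernel virtual mapping, index 1 the physical
 * (identity) mapping, so implementations that cannot reuse the
 * identity tables get their own pages.  Illustrative only.
 */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 4096	/* stand-in for one page-table page */

struct kimage_arch_model {
	void *pud[2];
	void *pmd[2];
	void *pte[2];
};

static void free_transition_tables(struct kimage_arch_model *a)
{
	for (int i = 0; i < 2; i++) {
		free(a->pud[i]);
		free(a->pmd[i]);
		free(a->pte[i]);
	}
	memset(a, 0, sizeof(*a));
}

static int alloc_transition_tables(struct kimage_arch_model *a)
{
	memset(a, 0, sizeof(*a));
	for (int i = 0; i < 2; i++) {
		a->pud[i] = calloc(1, TABLE_SIZE);
		a->pmd[i] = calloc(1, TABLE_SIZE);
		a->pte[i] = calloc(1, TABLE_SIZE);
		if (!a->pud[i] || !a->pmd[i] || !a->pte[i]) {
			free_transition_tables(a);
			return -1;	/* all-or-nothing, no leaks */
		}
	}
	return 0;
}
```

Note that the quoted free_transition_pgtable() only frees the index-0 set; presumably a later patch in the series frees the second set as well.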

From xen-devel-bounces@lists.xen.org Thu Dec 27 04:01:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 04:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To4ed-0007oQ-DR; Thu, 27 Dec 2012 04:01:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1To4eb-0007oL-NF
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 04:01:09 +0000
Received: from [85.158.137.99:41749] by server-3.bemta-3.messagelabs.com id
	8A/64-31588-008CBD05; Thu, 27 Dec 2012 04:01:04 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1356580861!15840013!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31269 invoked from network); 27 Dec 2012 04:01:03 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-6.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Dec 2012 04:01:03 -0000
Received: from tazenda.hos.anvin.org
	([IPv6:2601:9:3300:78:e269:95ff:fe35:9f3c]) (authenticated bits=0)
	by mail.zytor.com (8.14.5/8.14.5) with ESMTP id qBR40jsq014884
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Wed, 26 Dec 2012 20:00:45 -0800
Message-ID: <50DBC7E8.8040302@zytor.com>
Date: Wed, 26 Dec 2012 20:00:40 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
In-Reply-To: <1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 06/11] x86/xen: Add i386 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/26/2012 06:18 PM, Daniel Kiper wrote:
> Add i386 kexec/kdump implementation.
>
> v2 - suggestions/fixes:
>     - allocate transition page table pages below 4 GiB
>       (suggested by Jan Beulich).
>

Why?

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 04:03:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 04:03:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To4gU-0007tJ-U9; Thu, 27 Dec 2012 04:03:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1To4gT-0007tB-JL
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 04:03:05 +0000
Received: from [193.109.254.147:27697] by server-2.bemta-14.messagelabs.com id
	AB/62-30744-878CBD05; Thu, 27 Dec 2012 04:03:04 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1356580982!11343991!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10539 invoked from network); 27 Dec 2012 04:03:04 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Dec 2012 04:03:04 -0000
Received: from tazenda.hos.anvin.org
	([IPv6:2601:9:3300:78:e269:95ff:fe35:9f3c]) (authenticated bits=0)
	by mail.zytor.com (8.14.5/8.14.5) with ESMTP id qBR42ZKA015060
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Wed, 26 Dec 2012 20:02:35 -0800
Message-ID: <50DBC856.6030208@zytor.com>
Date: Wed, 26 Dec 2012 20:02:30 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
In-Reply-To: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/26/2012 06:18 PM, Daniel Kiper wrote:
> Hi,
>
> This set of patches contains the initial kexec/kdump implementation for Xen, v3.
> Currently only dom0 is supported; however, almost all of the infrastructure
> required for domU support is ready.
>
> Jan Beulich suggested merging the Xen x86 assembler code with the baremetal x86
> code. This could simplify the code and reduce the kernel size a bit. However,
> this solution requires some changes in the baremetal x86 code. First of all,
> the code which establishes the transition page table should be moved back from
> machine_kexec_$(BITS).c to relocate_kernel_$(BITS).S. Another important thing
> which should be changed in that case is the format of the page_list array. The
> Xen kexec hypercall requires alternating physical addresses with virtual ones.
> These and other required changes have not been made in this version because I
> am not sure that this solution will be accepted by the kexec/kdump maintainers.
> I hope that this email sparks discussion about that topic.
>

I want a detailed list of the constraints that this assumes and 
therefore imposes on the native implementation as a result of this.  We 
have had way too many patches where Xen PV hacks effectively nailgun 
arbitrary, and sometimes poor, design decisions in place and now we 
can't fix them.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
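The page_list format change Daniel describes above (the Xen kexec hypercall wanting physical addresses alternated with virtual ones) could be pictured as below. This layout is a hypothetical illustration of the alternation, not the actual Xen hypercall ABI; NR_PAGES and fill_page_list are made-up names.

```c
/*
 * Hypothetical illustration of a page_list that alternates physical
 * and virtual addresses, as the cover letter describes for the Xen
 * kexec hypercall.  Even slots hold the machine/physical address,
 * odd slots the corresponding virtual address.  Not the real ABI.
 */
#include <assert.h>
#include <stdint.h>

#define NR_PAGES 4

static void fill_page_list(uint64_t list[2 * NR_PAGES],
			   const uint64_t phys[NR_PAGES],
			   const uint64_t virt[NR_PAGES])
{
	for (int i = 0; i < NR_PAGES; i++) {
		list[2 * i]     = phys[i];	/* physical address */
		list[2 * i + 1] = virt[i];	/* virtual address */
	}
}
```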

From xen-devel-bounces@lists.xen.org Thu Dec 27 04:47:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 04:47:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To5Mm-0008VR-P5; Thu, 27 Dec 2012 04:46:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1To5Ml-0008VM-E1
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 04:46:47 +0000
Received: from [85.158.139.83:27073] by server-1.bemta-5.messagelabs.com id
	C2/9A-12813-6B2DBD05; Thu, 27 Dec 2012 04:46:46 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1356583604!16614015!1
X-Originating-IP: [166.70.13.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23414 invoked from network); 27 Dec 2012 04:46:45 -0000
Received: from out01.mta.xmission.com (HELO out01.mta.xmission.com)
	(166.70.13.231)
	by server-8.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	27 Dec 2012 04:46:45 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out01.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1To5MZ-0004iA-Fe; Wed, 26 Dec 2012 21:46:35 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=eric-ThinkPad-X220.xmission.com)
	by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1To5MY-0008Cz-AY; Wed, 26 Dec 2012 21:46:35 -0700
From: ebiederm@xmission.com (Eric W. Biederman)
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
Date: Wed, 26 Dec 2012 20:46:27 -0800
In-Reply-To: <1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	(Daniel Kiper's message of "Thu, 27 Dec 2012 03:18:50 +0100")
Message-ID: <878v8k14rg.fsf@xmission.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)
MIME-Version: 1.0
X-XM-AID: U2FsdGVkX18rsRec3wATcqzfXwQLf4ni+qRkbNyrxEQ=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa06.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-3.9 required=8.0 tests=ALL_TRUSTED,BAYES_00,
	DCC_CHECK_NEGATIVE,T_TM2_M_HEADER_IN_MSG,XMSubLong autolearn=disabled
	version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -3.0 BAYES_00 BODY: Bayes spam probability is 0 to 1%
	*      [score: 0.0000]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa06 1397; Body=1 Fuz1=1 Fuz2=1]
X-Spam-DCC: XMission; sa06 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;Daniel Kiper <daniel.kiper@oracle.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: x86@kernel.org, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	hpa@zytor.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	xen-devel@lists.xensource.com, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 01/11] kexec: introduce kexec firmware
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel Kiper <daniel.kiper@oracle.com> writes:

> Some kexec/kdump implementations (e.g. Xen PVOPS) cannot use the default
> Linux infrastructure and require some support from firmware and/or a
> hypervisor. To cope with that problem, the kexec firmware infrastructure
> was introduced. It allows a developer to use all kexec/kdump features of
> a given firmware or hypervisor.

As it stands, this patch is wrong.

You need to pass an additional flag from userspace through /sbin/kexec
that says to load the kexec image in the firmware.  A global variable
here is not ok.

As I understand it, you are loading a kexec-on-Xen-panic image, which
is semantically different from a kexec-on-Linux-panic image.  It is not
ok to have a silly global variable like kexec_use_firmware.

Furthermore, it is not ok to have conditional code outside of header
files.

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 06:27:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 06:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To6vM-0001Q5-3d; Thu, 27 Dec 2012 06:26:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1To6vK-0001Pk-6Q
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 06:26:34 +0000
Received: from [85.158.139.83:33676] by server-16.bemta-5.messagelabs.com id
	2F/32-09208-91AEBD05; Thu, 27 Dec 2012 06:26:33 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1356589592!31435407!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM1Njg5Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2501 invoked from network); 27 Dec 2012 06:26:33 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	27 Dec 2012 06:26:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 26 Dec 2012 22:26:31 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,361,1355126400"; d="scan'208";a="267339223"
Received: from unknown (HELO localhost) ([10.239.36.67])
	by fmsmga001.fm.intel.com with ESMTP; 26 Dec 2012 22:26:30 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 14:16:12 +0800
Message-Id: <1356588973-13106-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
References: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 2/3] nested vmx: synchronize page fault error
	code match and mask
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Page faults are handled specially: not only via the exception bitmap,
but also with consideration of the page-fault error-code mask/match
values. Therefore, in the nested virtualization case, these two values
need to be synchronized from the virtual VMCS to the shadow VMCS.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0a876b0..0bcea19 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -603,6 +603,17 @@ static void nvmx_update_tpr_threshold(struct vcpu *v)
         __vmwrite(TPR_THRESHOLD, 0);
 }
 
+static void nvmx_update_pfec(struct vcpu *v)
+{
+    struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+    void *vvmcs = nvcpu->nv_vvmcx;
+
+    __vmwrite(PAGE_FAULT_ERROR_CODE_MASK,
+        __get_vvmcs(vvmcs, PAGE_FAULT_ERROR_CODE_MASK));
+    __vmwrite(PAGE_FAULT_ERROR_CODE_MATCH,
+        __get_vvmcs(vvmcs, PAGE_FAULT_ERROR_CODE_MATCH));
+}
+
 static void __clear_current_vvmcs(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
@@ -813,6 +824,7 @@ static void load_shadow_control(struct vcpu *v)
     nvmx_update_apic_access_address(v);
     nvmx_update_virtual_apic_address(v);
     nvmx_update_tpr_threshold(v);
+    nvmx_update_pfec(v);
 }
 
 static void load_shadow_guest_state(struct vcpu *v)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 06:27:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 06:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To6vO-0001QN-4l; Thu, 27 Dec 2012 06:26:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1To6vM-0001Q1-Bp
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 06:26:36 +0000
Received: from [85.158.143.35:23390] by server-1.bemta-4.messagelabs.com id
	67/84-28401-B1AEBD05; Thu, 27 Dec 2012 06:26:35 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1356589590!4496326!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzMzOTEw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30287 invoked from network); 27 Dec 2012 06:26:31 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-5.tower-21.messagelabs.com with SMTP;
	27 Dec 2012 06:26:31 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 26 Dec 2012 22:26:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,361,1355126400"; d="scan'208";a="263317344"
Received: from unknown (HELO localhost) ([10.239.36.67])
	by orsmga002.jf.intel.com with ESMTP; 26 Dec 2012 22:26:28 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 14:16:10 +0800
Message-Id: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH 0/3] nested vmx bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patchset fixes issues with IA32_VMX_MISC MSR emulation, synchronization
of the PAGE_FAULT_ERROR_CODE_MASK/PAGE_FAULT_ERROR_CODE_MATCH VMCS guest-area
fields, and CR0/CR4 emulation.

Please help review and pull.

Thanks,
Dongxiao

Dongxiao Xu (3):
  nested vmx: emulate IA32_VMX_MISC MSR
  nested vmx: synchronize page fault error code match and mask
  nested vmx: fix CR0/CR4 emulation

 xen/arch/x86/hvm/vmx/vvmx.c |  136 ++++++++++++++++++++++++++++++++++--------
 1 files changed, 110 insertions(+), 26 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 06:27:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 06:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To6vL-0001Px-N4; Thu, 27 Dec 2012 06:26:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1To6vJ-0001Pj-P0
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 06:26:33 +0000
Received: from [85.158.138.51:6819] by server-8.bemta-3.messagelabs.com id
	55/E5-01233-81AEBD05; Thu, 27 Dec 2012 06:26:32 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356589591!29257950!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjQwOTky\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26457 invoked from network); 27 Dec 2012 06:26:32 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-11.tower-174.messagelabs.com with SMTP;
	27 Dec 2012 06:26:32 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 26 Dec 2012 22:26:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,361,1355126400"; d="scan'208";a="237094263"
Received: from unknown (HELO localhost) ([10.239.36.67])
	by azsmga001.ch.intel.com with ESMTP; 26 Dec 2012 22:26:29 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 14:16:11 +0800
Message-Id: <1356588973-13106-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
References: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 1/3] nested vmx: emulate IA32_VMX_MISC MSR
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the host value to emulate the IA32_VMX_MISC MSR for the L1 VMM.
The CR3-target feature is not currently supported, so the CR3-target
count is reported as zero.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b27d2d..0a876b0 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1462,7 +1462,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = 0x267ff & ~X86_CR4_SMXE;
         break;
     case MSR_IA32_VMX_MISC:
-        gdprintk(XENLOG_WARNING, "VMX MSR %x not fully supported yet.\n", msr);
+        /* Do not support CR3-target feature now */
+        data = host_data & ~(0x1ff << 16);
         break;
     default:
         r = 0;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 06:27:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 06:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To6vL-0001Px-N4; Thu, 27 Dec 2012 06:26:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1To6vJ-0001Pj-P0
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 06:26:33 +0000
Received: from [85.158.138.51:6819] by server-8.bemta-3.messagelabs.com id
	55/E5-01233-81AEBD05; Thu, 27 Dec 2012 06:26:32 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356589591!29257950!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjQwOTky\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26457 invoked from network); 27 Dec 2012 06:26:32 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-11.tower-174.messagelabs.com with SMTP;
	27 Dec 2012 06:26:32 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 26 Dec 2012 22:26:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,361,1355126400"; d="scan'208";a="237094263"
Received: from unknown (HELO localhost) ([10.239.36.67])
	by azsmga001.ch.intel.com with ESMTP; 26 Dec 2012 22:26:29 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 14:16:11 +0800
Message-Id: <1356588973-13106-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
References: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 1/3] nested vmx: emulate IA32_VMX_MISC MSR
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the host value to emulate the IA32_VMX_MISC MSR for the L1 VMM.
The CR3-target feature is not currently supported, so report the
CR3-target count as zero.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 7b27d2d..0a876b0 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1462,7 +1462,8 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
         data = 0x267ff & ~X86_CR4_SMXE;
         break;
     case MSR_IA32_VMX_MISC:
-        gdprintk(XENLOG_WARNING, "VMX MSR %x not fully supported yet.\n", msr);
+        /* The CR3-target feature is not supported yet. */
+        data = host_data & ~(0x1ff << 16);
         break;
     default:
         r = 0;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 06:27:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 06:27:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To6vM-0001QD-HM; Thu, 27 Dec 2012 06:26:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1To6vK-0001Pj-Sn
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 06:26:35 +0000
Received: from [85.158.138.51:6871] by server-8.bemta-3.messagelabs.com id
	98/E5-01233-A1AEBD05; Thu, 27 Dec 2012 06:26:34 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1356589591!29257950!2
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjQwOTky\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26553 invoked from network); 27 Dec 2012 06:26:33 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-11.tower-174.messagelabs.com with SMTP;
	27 Dec 2012 06:26:33 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 26 Dec 2012 22:26:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,361,1355126400"; d="scan'208";a="237094278"
Received: from unknown (HELO localhost) ([10.239.36.67])
	by azsmga001.ch.intel.com with ESMTP; 26 Dec 2012 22:26:32 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xensource.com
Date: Thu, 27 Dec 2012 14:16:13 +0800
Message-Id: <1356588973-13106-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
References: <1356588973-13106-1-git-send-email-dongxiao.xu@intel.com>
Subject: [Xen-devel] [PATCH 3/3] nested vmx: fix CR0/CR4 emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While emulating CR0 and CR4 for nested virtualization, set the CR0/CR4
guest/host mask to 0xffffffff in the shadow VMCS, then calculate the
corresponding read shadow separately for CR0 and CR4. On a VM exit for
a CR0/CR4 access, check whether the L1 VMM owns the accessed bits. If
so, inject the VM exit into the L1 VMM. Otherwise, L0 handles the
access and syncs the value into the L1 virtual VMCS.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |  121 ++++++++++++++++++++++++++++++++++---------
 1 files changed, 96 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0bcea19..40edaf1 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -833,6 +833,7 @@ static void load_shadow_guest_state(struct vcpu *v)
     void *vvmcs = nvcpu->nv_vvmcx;
     int i;
     u32 control;
+    u64 cr_gh_mask, cr_read_shadow;
 
     /* vvmcs.gstate to shadow vmcs.gstate */
     for ( i = 0; i < ARRAY_SIZE(vmcs_gstate_field); i++ )
@@ -854,10 +855,20 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow(vvmcs, VM_ENTRY_EXCEPTION_ERROR_CODE);
     vvmcs_to_shadow(vvmcs, VM_ENTRY_INSTRUCTION_LEN);
 
-    vvmcs_to_shadow(vvmcs, CR0_READ_SHADOW);
-    vvmcs_to_shadow(vvmcs, CR4_READ_SHADOW);
-    vvmcs_to_shadow(vvmcs, CR0_GUEST_HOST_MASK);
-    vvmcs_to_shadow(vvmcs, CR4_GUEST_HOST_MASK);
+    /*
+     * While emulating CR0 and CR4 for nested virtualization, the CR0/CR4
+     * guest/host mask is set to 0xffffffff in the shadow VMCS (following
+     * the host L1 VMCS); compute the corresponding read shadows here.
+     */
+    cr_gh_mask = __get_vvmcs(vvmcs, CR0_GUEST_HOST_MASK);
+    cr_read_shadow = (__get_vvmcs(vvmcs, GUEST_CR0) & ~cr_gh_mask) |
+                          (__get_vvmcs(vvmcs, CR0_READ_SHADOW) & cr_gh_mask);
+    __vmwrite(CR0_READ_SHADOW, cr_read_shadow);
+
+    cr_gh_mask = __get_vvmcs(vvmcs, CR4_GUEST_HOST_MASK);
+    cr_read_shadow = (__get_vvmcs(vvmcs, GUEST_CR4) & ~cr_gh_mask) |
+                          (__get_vvmcs(vvmcs, CR4_READ_SHADOW) & cr_gh_mask);
+    __vmwrite(CR4_READ_SHADOW, cr_read_shadow);
 
     /* TODO: PDPTRs for nested ept */
     /* TODO: CR3 target control */
@@ -913,8 +924,6 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
 static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
 {
     int i;
-    unsigned long mask;
-    unsigned long cr;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     void *vvmcs = nvcpu->nv_vvmcx;
 
@@ -925,23 +934,6 @@ static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
     __set_vvmcs(vvmcs, GUEST_RIP, regs->eip);
     __set_vvmcs(vvmcs, GUEST_RSP, regs->esp);
 
-    /* SDM 20.6.6: L2 guest execution may change GUEST CR0/CR4 */
-    mask = __get_vvmcs(vvmcs, CR0_GUEST_HOST_MASK);
-    if ( ~mask )
-    {
-        cr = __get_vvmcs(vvmcs, GUEST_CR0);
-        cr = (cr & mask) | (__vmread(GUEST_CR0) & ~mask);
-        __set_vvmcs(vvmcs, GUEST_CR0, cr);
-    }
-
-    mask = __get_vvmcs(vvmcs, CR4_GUEST_HOST_MASK);
-    if ( ~mask )
-    {
-        cr = __get_vvmcs(vvmcs, GUEST_CR4);
-        cr = (cr & mask) | (__vmread(GUEST_CR4) & ~mask);
-        __set_vvmcs(vvmcs, GUEST_CR4, cr);
-    }
-
     /* CR3 sync if exec doesn't want cr3 load exiting: i.e. nested EPT */
     if ( !(__n2_exec_control(v) & CPU_BASED_CR3_LOAD_EXITING) )
         shadow_to_vvmcs(vvmcs, GUEST_CR3);
@@ -1745,8 +1737,87 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
                 nvcpu->nv_vmexit_pending = 1;
         }
         else  /* CR0, CR4, CLTS, LMSW */
-            nvcpu->nv_vmexit_pending = 1;
-
+        {
+            /*
+             * On a VM exit for a CR0/CR4 access, check whether the L1 VMM
+             * owns the accessed bits.
+             * If so, inject the VM exit into the L1 VMM.
+             * Otherwise, L0 handles it and syncs the value to the L1 vVMCS.
+             */
+            unsigned long old_val, val, changed_bits;
+            switch ( VMX_CONTROL_REG_ACCESS_TYPE(exit_qualification) )
+            {
+            case VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR:
+            {
+                unsigned long gp = VMX_CONTROL_REG_ACCESS_GPR(exit_qualification);
+                unsigned long *reg;
+                if ( (reg = decode_register(gp, guest_cpu_user_regs(), 0)) == NULL )
+                {
+                    gdprintk(XENLOG_ERR, "invalid gpr: %lx\n", gp);
+                    break;
+                }
+                val = *reg;
+                if ( cr == 0 )
+                {
+                    u64 cr0_gh_mask = __get_vvmcs(nvcpu->nv_vvmcx, CR0_GUEST_HOST_MASK);
+                    old_val = __vmread(CR0_READ_SHADOW);
+                    changed_bits = old_val ^ val;
+                    if ( changed_bits & cr0_gh_mask )
+                        nvcpu->nv_vmexit_pending = 1;
+                    else
+                    {
+                        u64 guest_cr0 = __get_vvmcs(nvcpu->nv_vvmcx, GUEST_CR0);
+                        __set_vvmcs(nvcpu->nv_vvmcx, GUEST_CR0, (guest_cr0 & cr0_gh_mask) | (val & ~cr0_gh_mask));
+                    }
+                }
+                else if ( cr == 4 )
+                {
+                    u64 cr4_gh_mask = __get_vvmcs(nvcpu->nv_vvmcx, CR4_GUEST_HOST_MASK);
+                    old_val = __vmread(CR4_READ_SHADOW);
+                    changed_bits = old_val ^ val;
+                    if ( changed_bits & cr4_gh_mask )
+                        nvcpu->nv_vmexit_pending = 1;
+                    else
+                    {
+                        u64 guest_cr4 = __get_vvmcs(nvcpu->nv_vvmcx, GUEST_CR4);
+                        __set_vvmcs(nvcpu->nv_vvmcx, GUEST_CR4, (guest_cr4 & cr4_gh_mask) | (val & ~cr4_gh_mask));
+                    }
+                }
+                else
+                    nvcpu->nv_vmexit_pending = 1;
+                break;
+            }
+            case VMX_CONTROL_REG_ACCESS_TYPE_CLTS:
+            {
+                u64 cr0_gh_mask = __get_vvmcs(nvcpu->nv_vvmcx, CR0_GUEST_HOST_MASK);
+                if ( cr0_gh_mask & X86_CR0_TS )
+                    nvcpu->nv_vmexit_pending = 1;
+                else
+                {
+                    u64 guest_cr0 = __get_vvmcs(nvcpu->nv_vvmcx, GUEST_CR0);
+                    __set_vvmcs(nvcpu->nv_vvmcx, GUEST_CR0, (guest_cr0 & ~X86_CR0_TS));
+                }
+                break;
+            }
+            case VMX_CONTROL_REG_ACCESS_TYPE_LMSW:
+            {
+                u64 cr0_gh_mask = __get_vvmcs(nvcpu->nv_vvmcx, CR0_GUEST_HOST_MASK);
+                old_val = __vmread(CR0_READ_SHADOW) & 0xf;
+                val = (exit_qualification >> 16) & 0xf;
+                changed_bits = old_val ^ val;
+                if ( changed_bits & cr0_gh_mask )
+                    nvcpu->nv_vmexit_pending = 1;
+                else
+                {
+                    u64 guest_cr0 = __get_vvmcs(nvcpu->nv_vvmcx, GUEST_CR0);
+                    __set_vvmcs(nvcpu->nv_vvmcx, GUEST_CR0, (guest_cr0 & cr0_gh_mask) | (val & ~cr0_gh_mask));
+                }
+                break;
+            }
+            default:
+                break;
+            }
+        }
         break;
     }
     case EXIT_REASON_APIC_ACCESS:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 07:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 07:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To8IE-0003Dw-39; Thu, 27 Dec 2012 07:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1To8IC-0003Dr-Vx
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 07:54:17 +0000
Received: from [85.158.137.99:44319] by server-3.bemta-3.messagelabs.com id
	46/87-31588-7AEFBD05; Thu, 27 Dec 2012 07:54:15 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1356594852!15857379!1
X-Originating-IP: [166.70.13.233]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26369 invoked from network); 27 Dec 2012 07:54:14 -0000
Received: from out03.mta.xmission.com (HELO out03.mta.xmission.com)
	(166.70.13.233)
	by server-6.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	27 Dec 2012 07:54:14 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out03.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1To8I2-0005uv-Ke; Thu, 27 Dec 2012 00:54:06 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=sidekick.int.ebiederm.org) by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1To8Hx-0001dr-H5; Thu, 27 Dec 2012 00:54:06 -0700
User-Agent: K-9 Mail for Android
In-Reply-To: <50DBC856.6030208@zytor.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<50DBC856.6030208@zytor.com>
MIME-Version: 1.0
From: "Eric W. Biederman" <ebiederm@xmission.com>
Date: Wed, 26 Dec 2012 23:53:49 -0800
To: "H. Peter Anvin" <hpa@zytor.com>,Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <791b4922-078f-4adc-b3f3-0651f2266147@email.android.com>
X-XM-AID: U2FsdGVkX199+ev+/tViEeWqlkGu3MFRoDsmM2/fLpA=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa06.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-0.9 required=8.0 tests=ALL_TRUSTED,BAYES_20,
	DCC_CHECK_NEGATIVE, T_TM2_M_HEADER_IN_MSG, T_XMDrugObfuBody_08,
	XMSubLong autolearn=disabled version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -0.0 BAYES_20 BODY: Bayes spam probability is 5 to 20%
	*      [score: 0.0942]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa06 1397; Body=1 Fuz1=1 Fuz2=1]
	*  0.0 T_XMDrugObfuBody_08 obfuscated drug references
X-Spam-DCC: XMission; sa06 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;"H. Peter Anvin" <hpa@zytor.com>,Daniel Kiper
	<daniel.kiper@oracle.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The syscall ABI still has the wrong semantics.

Aka totally unmaintainable and unmergeable.

The concept of domU support is also strange.  What does domU support even mean, when the dom0 support is loading a kernel to pick up Xen when Xen falls over?

I expect a lot of decisions about what code can be shared and what code can't are going to be driven by the simple question of what the syscall means.

Sharing machine_kexec.c and relocate_kernel.S does not make much sense to me when what you are doing is effectively passing your arguments through to the Xen version of kexec.

Either Xen has its own version of those routines or I expect the Xen version of kexec is buggy.   I can't imagine what sharing that code would mean.  By the same token I can't see any need to duplicate the code either.

Furthermore since this is just passing data from one version of the syscall to another I expect you can share the majority of the code across all architectures that implement Xen.  The only part I can see being arch specific is the Xen syscall stub.

With respect to the proposed semantics of silently giving the kexec system call a different meaning when running under Xen:
/sbin/kexec has to act somewhat differently when loading code into the Xen hypervisor, so there is no point in not making that explicit in the ABI.

Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 07:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 07:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To8IE-0003Dw-39; Thu, 27 Dec 2012 07:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1To8IC-0003Dr-Vx
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 07:54:17 +0000
Received: from [85.158.137.99:44319] by server-3.bemta-3.messagelabs.com id
	46/87-31588-7AEFBD05; Thu, 27 Dec 2012 07:54:15 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-6.tower-217.messagelabs.com!1356594852!15857379!1
X-Originating-IP: [166.70.13.233]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26369 invoked from network); 27 Dec 2012 07:54:14 -0000
Received: from out03.mta.xmission.com (HELO out03.mta.xmission.com)
	(166.70.13.233)
	by server-6.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	27 Dec 2012 07:54:14 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out03.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1To8I2-0005uv-Ke; Thu, 27 Dec 2012 00:54:06 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=sidekick.int.ebiederm.org) by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1To8Hx-0001dr-H5; Thu, 27 Dec 2012 00:54:06 -0700
User-Agent: K-9 Mail for Android
In-Reply-To: <50DBC856.6030208@zytor.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<50DBC856.6030208@zytor.com>
MIME-Version: 1.0
From: "Eric W. Biederman" <ebiederm@xmission.com>
Date: Wed, 26 Dec 2012 23:53:49 -0800
To: "H. Peter Anvin" <hpa@zytor.com>,Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <791b4922-078f-4adc-b3f3-0651f2266147@email.android.com>
X-XM-AID: U2FsdGVkX199+ev+/tViEeWqlkGu3MFRoDsmM2/fLpA=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa06.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-0.9 required=8.0 tests=ALL_TRUSTED,BAYES_20,
	DCC_CHECK_NEGATIVE, T_TM2_M_HEADER_IN_MSG, T_XMDrugObfuBody_08,
	XMSubLong autolearn=disabled version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -0.0 BAYES_20 BODY: Bayes spam probability is 5 to 20%
	*      [score: 0.0942]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa06 1397; Body=1 Fuz1=1 Fuz2=1]
	*  0.0 T_XMDrugObfuBody_08 obfuscated drug references
X-Spam-DCC: XMission; sa06 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;"H. Peter Anvin" <hpa@zytor.com>,Daniel Kiper
	<daniel.kiper@oracle.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The syscall ABI still has the wrong semantics.

Aka totally unmaintainable and unmergeable.

The concept of domU support is also strange.  What does domU support even mean, when the dom0 support is loading a kernel to take over when Xen falls over?

I expect a lot of the decisions about what code can and cannot be shared are going to be driven by the simple question: what does the syscall mean?

Sharing machine_kexec.c and relocate_kernel.S does not make much sense to me when what you are doing is effectively passing your arguments through to the Xen version of kexec.

Either Xen has its own version of those routines or I expect the Xen version of kexec is buggy.  I can't imagine what sharing that code would mean.  By the same token, I can't see any need to duplicate the code either.

Furthermore since this is just passing data from one version of the syscall to another I expect you can share the majority of the code across all architectures that implement Xen.  The only part I can see being arch specific is the Xen syscall stub.

With respect to the proposed semantics of silently giving the kexec system call a different meaning when running under Xen:
/sbin/kexec has to act somewhat differently when loading code into the Xen hypervisor, so there is no point in not making that explicit in the ABI.

Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 08:18:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 08:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To8fA-00047R-I2; Thu, 27 Dec 2012 08:18:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <genius.rsd@gmail.com>) id 1To8f8-00047M-KA
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 08:17:58 +0000
Received: from [85.158.139.211:9931] by server-10.bemta-5.messagelabs.com id
	50/5E-13383-5340CD05; Thu, 27 Dec 2012 08:17:57 +0000
X-Env-Sender: genius.rsd@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1356596268!21499872!1
X-Originating-IP: [209.85.223.178]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: , 
	async_handler: YXN5bmNfZGVsYXk6IDcwNDI3OTQgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20987 invoked from network); 27 Dec 2012 08:17:49 -0000
Received: from mail-ie0-f178.google.com (HELO mail-ie0-f178.google.com)
	(209.85.223.178)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 08:17:49 -0000
Received: by mail-ie0-f178.google.com with SMTP id c12so11068474ieb.23
	for <xen-devel@lists.xen.org>; Thu, 27 Dec 2012 00:17:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=5ePAJvZEDSFRdrGJhrU9pIeVBRF/LHlcE27Nk3n0yRs=;
	b=1B6uLESnw1SNIfgU/Pe7wijEA084r5domupIYbLRnxMGU78vZzLmkip/4VwU492cYb
	Z7lwncqh7gruvvIDb3iaiujHvi9ae77vInoy2QMYUU1SOZAMqAw8YN8+OVZruomcct86
	7eNl0xIj9qZWWABy1mowoI4FqHOe33OxtT8ioBD6zH5hRMnIBkItEZPp2HoORTk0g5da
	VL1ODYQx6pJpIB34moCmXM92cho2wUqjXtYTjWMS0RTLfTMY/WkaX3yNmuDcA+XFS7G8
	zdrPhLoBbvrwrRmT3ly6txvH7Tt3NmiGhsgoLpyBNBrzntliSAirZotd22G2llTgGEAl
	KYJQ==
MIME-Version: 1.0
Received: by 10.42.72.132 with SMTP id o4mr12384678icj.44.1356596268399; Thu,
	27 Dec 2012 00:17:48 -0800 (PST)
Received: by 10.64.68.47 with HTTP; Thu, 27 Dec 2012 00:17:48 -0800 (PST)
In-Reply-To: <50DB316A.6040704@suse.com>
References: <CAHEEu84ykxG9tv1rU2PS=u=sLD-h5a4xkNQo4_PS_0exmdkfZQ@mail.gmail.com>
	<20121226125940.GQ8912@reaktio.net> <50DB316A.6040704@suse.com>
Date: Thu, 27 Dec 2012 13:47:48 +0530
Message-ID: <CAHEEu86NkNgqnuYNVOVOtaryzt+OeOyVJ075p7wsHH6baje-vQ@mail.gmail.com>
From: Rohit Damkondwar <genius.rsd@gmail.com>
To: Jim Fehlig <jfehlig@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Error : libxenlight state driver is not active
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2513465349233802123=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2513465349233802123==
Content-Type: multipart/alternative; boundary=90e6ba3fcd55ae7adc04d1d12ecd

--90e6ba3fcd55ae7adc04d1d12ecd
Content-Type: text/plain; charset=ISO-8859-1

Problem solved. xend was not started. There were also some configuration
errors in xend-config. The main problem was the inability of virt-manager
to connect to the Xen hypervisor. Now it connects, somehow. Thank you all.


-- 
Rohit S Damkondwar
B.Tech Computer Engineering
CoEP
MyBlog <http://www.rohitsdamkondwar.wordpress.com>

--90e6ba3fcd55ae7adc04d1d12ecd
Content-Type: text/html; charset=ISO-8859-1

<div dir="ltr">Problem solved. xend was not started. There were also some configuration errors in xend-config. The main problem was the inability of virt-manager to connect to the Xen hypervisor. Now it connects, somehow. Thank you all.<br clear="all">
<div><div><div class="gmail_extra"><br><br>-- <br>Rohit S Damkondwar<br>B.Tech Computer Engineering<br>CoEP<br><a href="http://www.rohitsdamkondwar.wordpress.com" target="_blank">MyBlog</a><br>
</div></div></div></div>

--90e6ba3fcd55ae7adc04d1d12ecd--


--===============2513465349233802123==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2513465349233802123==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 08:47:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 08:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1To96s-0004a5-CE; Thu, 27 Dec 2012 08:46:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <genius.rsd@gmail.com>) id 1To96r-0004a0-5C
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 08:46:37 +0000
Received: from [85.158.139.83:26700] by server-2.bemta-5.messagelabs.com id
	15/F0-16162-CEA0CD05; Thu, 27 Dec 2012 08:46:36 +0000
X-Env-Sender: genius.rsd@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1356597994!30706777!1
X-Originating-IP: [209.85.210.169]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24844 invoked from network); 27 Dec 2012 08:46:35 -0000
Received: from mail-ia0-f169.google.com (HELO mail-ia0-f169.google.com)
	(209.85.210.169)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 08:46:35 -0000
Received: by mail-ia0-f169.google.com with SMTP id u20so1279108iag.14
	for <xen-devel@lists.xen.org>; Thu, 27 Dec 2012 00:46:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=CrW0n4Pg80NxeDTe/PNx7dkdW2jAna/MyuBtnG/3KC0=;
	b=Zq/kH/8Fgy3JekDvLxWyMbLk2pANSyq46i6p3/mqIsl7pI0mpgzSLv1kj6yivoIaRk
	417rNSPMcqBbPg+O0ilMbi68Pw3cmPwRPBBwLx+W8xgwkma/f/Zqij6EPUqnBj7OCvOw
	jYXpdiyA7T/TBcws/FXNoQmrwzCWL7+i7qqa8uNgAEOOT99qZ2IoneJlFw6kkKZNLKve
	NpZRYxYGzYFaJ+jiPDvCIXlKkcH485hCYZz6mKxB9+my7/H76M5hqOQOij5XR4ZxlNsU
	RqR/VNhlvkXyd0KPFBcvsB3OV7U7titRj0e3jM7T7ROOHp94ChLCCP53cw+SCRdZPXDW
	NgUA==
MIME-Version: 1.0
Received: by 10.50.190.199 with SMTP id gs7mr17073840igc.89.1356597993973;
	Thu, 27 Dec 2012 00:46:33 -0800 (PST)
Received: by 10.64.68.47 with HTTP; Thu, 27 Dec 2012 00:46:33 -0800 (PST)
Date: Thu, 27 Dec 2012 14:16:33 +0530
Message-ID: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
From: Rohit Damkondwar <genius.rsd@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] dynamically set bandwidth limits of a virtual interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3653790946583176224=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3653790946583176224==
Content-Type: multipart/alternative; boundary=f46d044794f788a83304d1d1956f

--f46d044794f788a83304d1d1956f
Content-Type: text/plain; charset=ISO-8859-1

Hi all. I want to set bandwidth limits on a virtual interface
dynamically (without restarting the virtual machine). I have been
browsing the Xen 4.1.3 source code. I looked into the libxen folder
(xen_vif.c) and the hotplug (Linux) folder. Earlier, in Xen 3.0, the
xenvif structure (driver/net/xen-netback/interface.c + common.h) and
the tx_add_credit function could be used to modify rate limits. I want
to change the bandwidth limits of a virtual interface dynamically in
Xen 4.1.3. Where should I look?
Please help.




-- 
Rohit S Damkondwar
B.Tech Computer Engineering
CoEP

--f46d044794f788a83304d1d1956f
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi all. I want to set bandwidth limits to a virtual interf=
ace dynamically(without restarting virtual machine). I have been browsing x=
en source code 4.1.3. I looked into libxen folder(xen_vif.c) and hotplug(li=
nux) folder. Earlier in xen 3.0 , xenvif struture (driver/net/xen-netback/<=
div>
interface.c + common.h) and tx_add_credit function could be used to modify =
rate limits. I want to change bandwidth limits dynamically of a virtual int=
erface in xen 4.1.3. Where should I look for in xen 4.1.3? <br clear=3D"all=
">
</div><div>Please help.<br><br></div><div><br><br><br>-- <br>Rohit S Damkon=
dwar<br>B.Tech Computer Engineering<br>CoEP<br><br>
</div></div>

--f46d044794f788a83304d1d1956f--


--===============3653790946583176224==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3653790946583176224==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 12:34:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 12:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToCeP-0006VU-Lp; Thu, 27 Dec 2012 12:33:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1ToCeO-0006VP-Qq
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 12:33:28 +0000
Received: from [85.158.143.99:10885] by server-1.bemta-4.messagelabs.com id
	0D/96-28401-8104CD05; Thu, 27 Dec 2012 12:33:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1356611606!25755747!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTIxNDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2878 invoked from network); 27 Dec 2012 12:33:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 12:33:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,362,1355097600"; 
   d="scan'208";a="1967620"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	27 Dec 2012 12:33:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 27 Dec 2012 07:33:20 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1ToCeG-0001Py-J6;
	Thu, 27 Dec 2012 12:33:20 +0000
Message-ID: <1356611599.19238.13.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Rohit Damkondwar <genius.rsd@gmail.com>
Date: Thu, 27 Dec 2012 12:33:19 +0000
In-Reply-To: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
 interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-27 at 08:46 +0000, Rohit Damkondwar wrote:
> Hi all. I want to set bandwidth limits to a virtual interface
> dynamically(without restarting virtual machine). I have been browsing
> xen source code 4.1.3. I looked into libxen folder(xen_vif.c) and
> hotplug(linux) folder. Earlier in xen 3.0 , xenvif struture
> (driver/net/xen-netback/
> interface.c + common.h) and tx_add_credit function could be used to
> modify rate limits. I want to change bandwidth limits dynamically of a
> virtual interface in xen 4.1.3. Where should I look for in xen 4.1.3? 
> 
> Please help.
> 

Xen vif has a parameter called 'rate'; I don't know whether it suits
you.
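
For example, something like the following in the domain config (the MAC
and rate values here are made up for illustration; check the vif
configuration documentation for the exact 'rate' syntax):

```
# Domain config fragment -- illustrative values only.
vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, rate=10Mb/s' ]
```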

Also, you can have a look at an external tool like tc(8). My vague
thought is that, since a vif is just another interface in dom0, tc(8)
should be able to traffic-shape it.
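
Roughly like this from dom0 (the vif name and rates below are invented
for illustration; since tc(8) needs root and a real vif device, this
sketch only prints the commands it would run):

```shell
# Sketch: traffic-shape a guest vif from dom0 with a token bucket filter.
# VIF, RATE and BURST are illustrative values, not from a real setup.
VIF=vif1.0
RATE=10mbit
BURST=32kbit

# Print rather than execute: tc(8) requires root and an existing device.
echo "tc qdisc add dev $VIF root tbf rate $RATE burst $BURST latency 50ms"
echo "tc qdisc change dev $VIF root tbf rate 20mbit burst $BURST latency 50ms"
echo "tc qdisc del dev $VIF root"
```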

Last but not least, patches are always welcome. ;-)


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 12:39:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 12:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToCjT-0006dJ-HH; Thu, 27 Dec 2012 12:38:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1ToCjS-0006dD-Bx
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 12:38:42 +0000
Received: from [85.158.137.99:4524] by server-10.bemta-3.messagelabs.com id
	FC/F6-07616-1514CD05; Thu, 27 Dec 2012 12:38:41 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-217.messagelabs.com!1356611920!18678720!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NTgwNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32672 invoked from network); 27 Dec 2012 12:38:41 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-217.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 12:38:41 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 5A8A32C16;
	Thu, 27 Dec 2012 14:38:40 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id D2EAB20067; Thu, 27 Dec 2012 14:38:39 +0200 (EET)
Date: Thu, 27 Dec 2012 14:38:39 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Wei Liu <Wei.Liu2@citrix.com>
Message-ID: <20121227123839.GR8912@reaktio.net>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
	<1356611599.19238.13.camel@iceland>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1356611599.19238.13.camel@iceland>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Rohit Damkondwar <genius.rsd@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
 interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 27, 2012 at 12:33:19PM +0000, Wei Liu wrote:
> On Thu, 2012-12-27 at 08:46 +0000, Rohit Damkondwar wrote:
> > Hi all. I want to set bandwidth limits on a virtual interface
> > dynamically (without restarting the virtual machine). I have been
> > browsing the Xen 4.1.3 source code. I looked into the libxen folder
> > (xen_vif.c) and the hotplug (linux) folder. Earlier, in Xen 3.0, the
> > xenvif structure (drivers/net/xen-netback/interface.c + common.h)
> > and the tx_add_credit function could be used to modify rate limits.
> > I want to change the bandwidth limits of a virtual interface
> > dynamically in Xen 4.1.3. Where should I look?
> > 
> > Please help.
> > 
> 
> The Xen vif has a parameter called 'rate'; I don't know whether it
> suits your needs.
> 
> Also, you can have a look at external tools like tc(8). My thought is
> that, since a vif is just another interface in dom0, tc(8) should be
> able to traffic-shape it.
> 

Yes, you can use the generic Linux QoS tools in dom0 to shape the vifs.
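
As a concrete (untested) sketch of the tc(8) approach: it assumes the
guest is domain 5, its first interface appears in dom0 as "vif5.0", and
the desired cap is 10 Mbit/s; those names and numbers are made up for
illustration. The commands are only printed so the sketch is
side-effect free, since actually running tc needs root and a live vif:

```shell
# Hedged sketch: shape a guest's vif from dom0 with tc(8).
# "vif5.0" (domain 5, first vif) and the 10 Mbit/s cap are assumptions.
VIF="vif5.0"
RATE="10mbit"

# Token-bucket filter as the root qdisc on the vif (run as root in dom0).
ADD_CMD="tc qdisc add dev ${VIF} root tbf rate ${RATE} burst 32kbit latency 400ms"

# Adjust the limit later without restarting the guest.
CHANGE_CMD="tc qdisc change dev ${VIF} root tbf rate 5mbit burst 32kbit latency 400ms"

# Printed rather than executed, so the sketch has no side effects.
echo "$ADD_CMD"
echo "$CHANGE_CMD"
```

The 'rate' vif parameter Wei mentions is instead set in the guest
configuration (a vif line containing something like rate=10Mb/s in
xm/xl syntax); check your toolstack's documentation for the exact form.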


> Last but not least, patches are always welcomed. ;-)
> 

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 12:42:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 12:42:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToCmW-0006mE-5L; Thu, 27 Dec 2012 12:41:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1ToCmV-0006m9-Ht
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 12:41:51 +0000
Received: from [85.158.139.83:45465] by server-15.bemta-5.messagelabs.com id
	96/17-20523-E024CD05; Thu, 27 Dec 2012 12:41:50 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1356612079!24187007!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTIxNDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3958 invoked from network); 27 Dec 2012 12:41:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 12:41:20 -0000
X-IronPort-AV: E=Sophos;i="4.84,362,1355097600"; 
   d="scan'208";a="1968212"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	27 Dec 2012 12:41:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 27 Dec 2012 07:41:18 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1ToCly-0001WY-7D;
	Thu, 27 Dec 2012 12:41:18 +0000
Message-ID: <1356612077.19238.20.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 27 Dec 2012 12:41:17 +0000
In-Reply-To: <CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
	<CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	wei.liu2@citrix.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
 =?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_error?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-12-27 at 02:12 +0000, 马磊 wrote:
> 
> I got it, but the error `  xc: error: do_evtchn_op:
> HYPERVISOR_event_channel_op failed: -1 (3 = No such process): Internal
> error. ` said no such process. The system error description
> didn't seem to have anything to do with the following lines which
> raised it:
>  85    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid,
> 0);
>  86    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid,
> 0);

The error code -1 is -EPERM, which means you don't have permission to
issue this operation. I don't think this is a bug. There might be some
problems with your setup.

If you need any pointers for reading the source code, I will be happy
to help.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 14:19:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 14:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToEIO-00009b-JX; Thu, 27 Dec 2012 14:18:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1ToEIN-00009W-84
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 14:18:51 +0000
Received: from [193.109.254.147:29151] by server-1.bemta-14.messagelabs.com id
	DF/42-15901-AC85CD05; Thu, 27 Dec 2012 14:18:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1356617929!11384743!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzNTQ1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13883 invoked from network); 27 Dec 2012 14:18:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 14:18:49 -0000
X-IronPort-AV: E=Sophos;i="4.84,362,1355097600"; 
   d="scan'208";a="352340"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	27 Dec 2012 14:18:49 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Thu, 27 Dec 2012 14:18:48 +0000
Message-ID: <50DC58C4.3000307@citrix.com>
Date: Thu, 27 Dec 2012 14:18:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "Eric W. Biederman" <ebiederm@xmission.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<50DBC856.6030208@zytor.com>
	<791b4922-078f-4adc-b3f3-0651f2266147@email.android.com>
In-Reply-To: <791b4922-078f-4adc-b3f3-0651f2266147@email.android.com>
Cc: "x86@kernel.org" <x86@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"kexec@lists.infradead.org" <kexec@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"maxim.uvarov@oracle.com" <maxim.uvarov@oracle.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"vgoyal@redhat.com" <vgoyal@redhat.com>
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/12/2012 07:53, Eric W. Biederman wrote:
> The syscall ABI still has the wrong semantics.
>
> Aka totally unmaintainable and unmergeable.
>
> The concept of domU support is also strange.  What does domU support even mean, when the dom0 support is loading a kernel to pick up Xen when Xen falls over?

There are two requirements pulling at this patch series, but I agree
that we need to clarify them.

When dom0 loads a crash kernel, it is loading one for Xen to use.  As a
dom0 crash causes a Xen crash, having dom0 set up a kdump kernel for
itself is completely useless.  This ability is present in "classic Xen
dom0" kernels, but the feature is currently missing in PVOPS.
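
For concreteness, staging a crash kernel with /sbin/kexec usually
looks roughly like the sketch below. The kernel/initrd paths and the
kernel command line are hypothetical placeholders, and the command is
printed rather than executed, since loading a panic kernel needs root:

```shell
# Sketch of staging a crash (kdump) kernel with kexec(8).
# /boot/vmlinuz-crash and /boot/initrd-crash are hypothetical paths;
# -p loads the kernel into the reserved crash-kernel region.
KERNEL="/boot/vmlinuz-crash"
INITRD="/boot/initrd-crash"
CMDLINE="root=/dev/xvda1 irqpoll maxcpus=1"

KEXEC_CMD="kexec -p ${KERNEL} --initrd=${INITRD} --append=\"${CMDLINE}\""

# Printed, not run: actually loading the panic kernel requires root.
echo "$KEXEC_CMD"
```

In the "classic Xen dom0" case described above, the equivalent load
goes through the hypervisor's kexec hypercall interface rather than
acting on the dom0 kernel itself, which is exactly the semantic
distinction under debate in this thread.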

Many cloud customers and service providers want a VM administrator to
be able to load a kdump/kexec kernel within a domain[1].  This allows
the VM administrator to take more proactive steps to isolate the cause
of a crash, whose state would otherwise most likely be discarded while
tearing down the domain.  The result is that, as far as Xen is
concerned, the domain is still alive, while the kdump
kernel/environment can work its usual magic.  I am not aware of any
feature like this existing in the past.

~Andrew

[1] http://lists.xen.org/archives/html/xen-devel/2012-11/msg01274.html

>
> I expect a lot of decisions about what code can and cannot be shared are going to be driven by the simple question of what the syscall means.
>
> Sharing machine_kexec.c and relocate_kernel.S does not make much sense to me when what you are doing is effectively passing your arguments through to the Xen version of kexec.
>
> Either Xen has its own version of those routines or I expect the Xen version of kexec is buggy.  I can't imagine what sharing that code would mean.  By the same token, I can't see any need to duplicate the code either.
>
> Furthermore since this is just passing data from one version of the syscall to another I expect you can share the majority of the code across all architectures that implement Xen.  The only part I can see being arch specific is the Xen syscall stub.
>
> With respect to the proposed semantics of silently giving the kexec system call a different meaning when running under Xen:
> /sbin/kexec has to act somewhat differently when loading code into the Xen hypervisor, so there is no point in not making that explicit in the ABI.
>
> Eric
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 14:55:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 14:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToEro-0000Yy-Md; Thu, 27 Dec 2012 14:55:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1ToErn-0000Yt-2H
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 14:55:27 +0000
Received: from [85.158.139.211:43333] by server-15.bemta-5.messagelabs.com id
	4D/25-20523-E516CD05; Thu, 27 Dec 2012 14:55:26 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1356620124!21170675!1
X-Originating-IP: [74.125.82.44]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24072 invoked from network); 27 Dec 2012 14:55:24 -0000
Received: from mail-wg0-f44.google.com (HELO mail-wg0-f44.google.com)
	(74.125.82.44)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 14:55:24 -0000
Received: by mail-wg0-f44.google.com with SMTP id dr12so4409698wgb.35
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Dec 2012 06:55:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=IL398LQHU6OCQTZB2KGW+/Zet1NlqdntlXzDcezERhg=;
	b=QvK79G4H4yl9gHiDRBuMIL9UEhBjQJXYjyjMIpAtTHMgwpLpfhO8fOa0KGs3Felxll
	Chk/FJDBEfxDkkVkl+cb+uKYwwEGYSdplpagl2laGUBpFcYyc8AqnjcUJKzeCYbalBMa
	b6teNRZIKQ3yqYo9k7DjwkssapUi+kAjsNWIg/FfRk9jyGzmbba9km/efjo6L+FT7/Bi
	qlQwefi8M/8r8pzVPPaOrP5tiBzR54wDaNMdYHmYP/ZBdRBYKbRnhOv49eZv8+PkcMQr
	g6zC1y0tAcIq68K7RrbCIsZuCOVNg5rnlyw+k+LPIyNnvcbiTlthyFQHdM56mbQ76ycU
	ZH6g==
MIME-Version: 1.0
Received: by 10.180.109.195 with SMTP id hu3mr40073365wib.31.1356620124260;
	Thu, 27 Dec 2012 06:55:24 -0800 (PST)
Received: by 10.194.170.167 with HTTP; Thu, 27 Dec 2012 06:55:24 -0800 (PST)
Date: Thu, 27 Dec 2012 20:25:24 +0530
Message-ID: <CANq0ewtxXMcbmdEFNasB7puYEhu_LSNxvRixcmbz0FKGpurpJg@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org, xen-devel@lists.xensource.com, 
	Lars P Kurth <lars.kurth@sky.com>
Subject: [Xen-devel] Performance analysis of transport protocol(TCP) using
	XEN during live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8498278797799675644=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8498278797799675644==
Content-Type: multipart/alternative; boundary=e89a8f3ba7fb9a0aca04d1d6bcdd

--e89a8f3ba7fb9a0aca04d1d6bcdd
Content-Type: text/plain; charset=ISO-8859-1

Hello,
         I want to analyse the behaviour of the transport protocol (TCP)
under Xen during live migration and compare it with the same under KVM.
What can I do to achieve this, and what types of analysis can I perform?

regards,
Digvijay

--e89a8f3ba7fb9a0aca04d1d6bcdd--


--===============8498278797799675644==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8498278797799675644==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 14:55:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 14:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToEro-0000Yy-Md; Thu, 27 Dec 2012 14:55:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1ToErn-0000Yt-2H
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 14:55:27 +0000
Received: from [85.158.139.211:43333] by server-15.bemta-5.messagelabs.com id
	4D/25-20523-E516CD05; Thu, 27 Dec 2012 14:55:26 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1356620124!21170675!1
X-Originating-IP: [74.125.82.44]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24072 invoked from network); 27 Dec 2012 14:55:24 -0000
Received: from mail-wg0-f44.google.com (HELO mail-wg0-f44.google.com)
	(74.125.82.44)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 14:55:24 -0000
Received: by mail-wg0-f44.google.com with SMTP id dr12so4409698wgb.35
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Dec 2012 06:55:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=IL398LQHU6OCQTZB2KGW+/Zet1NlqdntlXzDcezERhg=;
	b=QvK79G4H4yl9gHiDRBuMIL9UEhBjQJXYjyjMIpAtTHMgwpLpfhO8fOa0KGs3Felxll
	Chk/FJDBEfxDkkVkl+cb+uKYwwEGYSdplpagl2laGUBpFcYyc8AqnjcUJKzeCYbalBMa
	b6teNRZIKQ3yqYo9k7DjwkssapUi+kAjsNWIg/FfRk9jyGzmbba9km/efjo6L+FT7/Bi
	qlQwefi8M/8r8pzVPPaOrP5tiBzR54wDaNMdYHmYP/ZBdRBYKbRnhOv49eZv8+PkcMQr
	g6zC1y0tAcIq68K7RrbCIsZuCOVNg5rnlyw+k+LPIyNnvcbiTlthyFQHdM56mbQ76ycU
	ZH6g==
MIME-Version: 1.0
Received: by 10.180.109.195 with SMTP id hu3mr40073365wib.31.1356620124260;
	Thu, 27 Dec 2012 06:55:24 -0800 (PST)
Received: by 10.194.170.167 with HTTP; Thu, 27 Dec 2012 06:55:24 -0800 (PST)
Date: Thu, 27 Dec 2012 20:25:24 +0530
Message-ID: <CANq0ewtxXMcbmdEFNasB7puYEhu_LSNxvRixcmbz0FKGpurpJg@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org, xen-devel@lists.xensource.com, 
	Lars P Kurth <lars.kurth@sky.com>
Subject: [Xen-devel] Performance analysis of transport protocol(TCP) using
	XEN during live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8498278797799675644=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8498278797799675644==
Content-Type: multipart/alternative; boundary=e89a8f3ba7fb9a0aca04d1d6bcdd

--e89a8f3ba7fb9a0aca04d1d6bcdd
Content-Type: text/plain; charset=ISO-8859-1

Hello,
         I want to perform the analysis of transport protocol using XEN
during Live migration and compare it with the same with KVM, then what can
I do to achieve it. And what type of analysis can I do with this.

regards,
Digvijay

--e89a8f3ba7fb9a0aca04d1d6bcdd
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello,<br>=A0=A0=A0=A0=A0=A0=A0=A0 I want to perform the analysis of transp=
ort protocol using XEN during Live migration and compare it with the same w=
ith KVM, then what can I do to achieve it. And what type of analysis can I =
do with this.<br>
<br>regards,<br>Digvijay<br>

--e89a8f3ba7fb9a0aca04d1d6bcdd--


--===============8498278797799675644==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8498278797799675644==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 14:56:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 14:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToEs3-0000ZN-3Q; Thu, 27 Dec 2012 14:55:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1ToEs1-0000ZI-Ho
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 14:55:41 +0000
Received: from [85.158.143.99:31470] by server-1.bemta-4.messagelabs.com id
	F8/16-28401-C616CD05; Thu, 27 Dec 2012 14:55:40 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1356620124!27506995!1
X-Originating-IP: [209.85.212.177]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26545 invoked from network); 27 Dec 2012 14:55:24 -0000
Received: from mail-wi0-f177.google.com (HELO mail-wi0-f177.google.com)
	(209.85.212.177)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 14:55:24 -0000
Received: by mail-wi0-f177.google.com with SMTP id hm2so5404338wib.4
	for <xen-devel@lists.xen.org>; Thu, 27 Dec 2012 06:55:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=IL398LQHU6OCQTZB2KGW+/Zet1NlqdntlXzDcezERhg=;
	b=QvK79G4H4yl9gHiDRBuMIL9UEhBjQJXYjyjMIpAtTHMgwpLpfhO8fOa0KGs3Felxll
	Chk/FJDBEfxDkkVkl+cb+uKYwwEGYSdplpagl2laGUBpFcYyc8AqnjcUJKzeCYbalBMa
	b6teNRZIKQ3yqYo9k7DjwkssapUi+kAjsNWIg/FfRk9jyGzmbba9km/efjo6L+FT7/Bi
	qlQwefi8M/8r8pzVPPaOrP5tiBzR54wDaNMdYHmYP/ZBdRBYKbRnhOv49eZv8+PkcMQr
	g6zC1y0tAcIq68K7RrbCIsZuCOVNg5rnlyw+k+LPIyNnvcbiTlthyFQHdM56mbQ76ycU
	ZH6g==
MIME-Version: 1.0
Received: by 10.180.109.195 with SMTP id hu3mr40073365wib.31.1356620124260;
	Thu, 27 Dec 2012 06:55:24 -0800 (PST)
Received: by 10.194.170.167 with HTTP; Thu, 27 Dec 2012 06:55:24 -0800 (PST)
Date: Thu, 27 Dec 2012 20:25:24 +0530
Message-ID: <CANq0ewtxXMcbmdEFNasB7puYEhu_LSNxvRixcmbz0FKGpurpJg@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org, xen-devel@lists.xensource.com, 
	Lars P Kurth <lars.kurth@sky.com>
Subject: [Xen-devel] Performance analysis of transport protocol(TCP) using
	XEN during live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3760853939698509542=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3760853939698509542==
Content-Type: multipart/alternative; boundary=e89a8f3ba7fb9a0aca04d1d6bcdd

--e89a8f3ba7fb9a0aca04d1d6bcdd
Content-Type: text/plain; charset=ISO-8859-1

Hello,
         I want to perform the analysis of transport protocol using XEN
during Live migration and compare it with the same with KVM, then what can
I do to achieve it. And what type of analysis can I do with this.

regards,
Digvijay

--e89a8f3ba7fb9a0aca04d1d6bcdd
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello,<br>=A0=A0=A0=A0=A0=A0=A0=A0 I want to perform the analysis of transp=
ort protocol using XEN during Live migration and compare it with the same w=
ith KVM, then what can I do to achieve it. And what type of analysis can I =
do with this.<br>
<br>regards,<br>Digvijay<br>

--e89a8f3ba7fb9a0aca04d1d6bcdd--


--===============3760853939698509542==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3760853939698509542==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 15:30:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 15:30:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToFPk-0001AK-A8; Thu, 27 Dec 2012 15:30:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>)
	id 1ToFPi-0001AC-HD; Thu, 27 Dec 2012 15:30:30 +0000
Received: from [85.158.138.51:48785] by server-5.bemta-3.messagelabs.com id
	4F/23-15136-5996CD05; Thu, 27 Dec 2012 15:30:29 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-16.tower-174.messagelabs.com!1356622225!30441320!1
X-Originating-IP: [128.244.251.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3906 invoked from network); 27 Dec 2012 15:30:27 -0000
Received: from pilot.jhuapl.edu (HELO pilot.jhuapl.edu) (128.244.251.36)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 15:30:27 -0000
Received: from aplexcas2.dom1.jhuapl.edu (aplexcas2.dom1.jhuapl.edu
	[128.244.198.91]) by pilot.jhuapl.edu with smtp
	(TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 62ab_326f_22f0262c_eac6_4d16_9ab7_07035f749d1c;
	Thu, 27 Dec 2012 10:29:42 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas2.dom1.jhuapl.edu ([128.244.198.91]) with mapi; Thu, 27 Dec 2012
	10:28:49 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: gavin <gbtux@126.com>, =?iso-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Thu, 27 Dec 2012 10:27:33 -0500
Thread-Topic: [Xen-devel] How to use the vTPM backend driver in the pv-ops
	kernel
Thread-Index: Ac3hDikOb2wZEVqJQbOgFMA4eN9W0QDOIhJu
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48D30B4E78@aplesstripe.dom1.jhuapl.edu>
References: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
	<20121222220407.GP8912@reaktio.net>,
	<8fe6315.6fa2.13bc7dd8763.Coremail.gbtux@126.com>
In-Reply-To: <8fe6315.6fa2.13bc7dd8763.Coremail.gbtux@126.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The frontend driver is currently being ported to the latest kernel. You can =

find the patch cross listed here as well as the linux kernel mailing list.

I have no plans to port the backend driver. If you need it you'll have to g=
et it from the 2.6.18
kernel and port it yourself.

________________________________________
From: xen-devel-bounces@lists.xen.org [xen-devel-bounces@lists.xen.org] On =
Behalf Of gavin [gbtux@126.com]
Sent: Sunday, December 23, 2012 8:04 AM
To: Pasi K=E4rkk=E4inen
Cc: xen-users@lists.xen.org; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops k=
ernel

Hi Pasi,

Thank you very much for your information.

Best Regards,
Gavin

At 2012-12-23 06:04:08,"Pasi K=E4rkk=E4inen" <pasik@iki.fi> wrote:

>On Sun, Dec 23, 2012 at 01:50:16AM +0800, gavin wrote:
>>     Hi,
>>
>>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the
>>    config file of pv-ops kernel, such as kernel 2.6.32.50. However, this
>>    option exists in the config file of kernel version 2.6.18.8. I also c=
annot
>>    find the vTPM backed driver (such as
>>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
>>    So, how can I configure and use the vTPM backend driver in kernel 2.6=
.32?
>>    Thank you for any advice.
>>
>
>I don't think vtpm drivers were ported to 2.6.32 pvops.
>Recently there has been work on porting the drivers to upstream Linux 3.x,
>but they aren't merged yet iirc.
>
>If you need to use them with 2.6.32 you need to port them yourself..
>
>-- Pasi
>
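
A quick way to check for the option Gavin mentions (a sketch, not from the original thread: CONFIG_XEN_TPMDEV_BACKEND is the 2.6.18-era option name discussed above, and real config paths such as /boot/config-$(uname -r) vary by distro):

```shell
# check_vtpm FILE -> prints "yes" if the vTPM backend option is set, else "no"
check_vtpm() {
    if grep -q '^CONFIG_XEN_TPMDEV_BACKEND=' "$1"; then
        echo yes
    else
        echo no
    fi
}

# Demonstrate against a throwaway sample config (a stand-in for a real
# /boot/config-* file; on a pv-ops kernel this option will be absent).
cfg=$(mktemp)
printf 'CONFIG_XEN_TPMDEV_BACKEND=m\n' > "$cfg"
check_vtpm "$cfg"    # prints "yes"
rm -f "$cfg"
```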




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 15:32:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 15:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToFRX-0001If-UW; Thu, 27 Dec 2012 15:32:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mats.petersson@citrix.com>) id 1ToFRW-0001IY-0m
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 15:32:22 +0000
Received: from [85.158.143.35:38529] by server-2.bemta-4.messagelabs.com id
	8A/F7-30861-50A6CD05; Thu, 27 Dec 2012 15:32:21 +0000
X-Env-Sender: mats.petersson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1356622337!13379806!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQ2MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4215 invoked from network); 27 Dec 2012 15:32:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 15:32:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,363,1355097600"; 
   d="scan'208";a="1867169"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	27 Dec 2012 15:32:17 +0000
Received: from [10.80.3.146] (10.80.3.146) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Thu, 27 Dec 2012 10:32:17 -0500
Message-ID: <50DC6A00.6020500@citrix.com>
Date: Thu, 27 Dec 2012 15:32:16 +0000
From: Mats Petersson <mats.petersson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:16.0) Gecko/20121011 Thunderbird/16.0.1
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <CANq0ewtxXMcbmdEFNasB7puYEhu_LSNxvRixcmbz0FKGpurpJg@mail.gmail.com>
In-Reply-To: <CANq0ewtxXMcbmdEFNasB7puYEhu_LSNxvRixcmbz0FKGpurpJg@mail.gmail.com>
X-Originating-IP: [10.80.3.146]
Subject: Re: [Xen-devel] Performance analysis of transport protocol(TCP)
 using XEN during live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/12/12 14:55, digvijay chauhan wrote:
> Hello,
>          I want to perform the analysis of transport protocol using 
> XEN during Live migration and compare it with the same with KVM, then 
> what can I do to achieve it. And what type of analysis can I do with this.
>
> regards,
> Digvijay
Do you mean "measure the downtime" or some other form of analysis?

I have used "ping -i 0.1 -c 100 -n ${address} > ping.txt", and then 
checked how many pings went missing to analyse the downtime. I would 
suggest that you run ping on a separate machine, as in my experience, at 
least for localhost migration, the dom0 cpu-load is such that ping 
doesn't necessarily get to run when it wants/needs to.
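
The lost-ping counting above can be scripted; here is a minimal sketch (an illustration, not part of the original message) that parses the standard iputils summary line from ping.txt and turns lost probes into a rough downtime estimate:

```python
import re

def estimate_downtime(ping_output: str, interval: float = 0.1) -> float:
    """Estimate downtime as (lost probes) * (probe interval).

    Assumes the usual iputils summary line, e.g.
      100 packets transmitted, 93 received, 7% packet loss, time 9915ms
    This is coarse: resolution is one interval, and dom0 scheduling
    jitter (as noted above) can distort the count.
    """
    m = re.search(r"(\d+) packets transmitted, (\d+) (?:packets )?received",
                  ping_output)
    if m is None:
        raise ValueError("no ping summary line found")
    sent, received = int(m.group(1)), int(m.group(2))
    return (sent - received) * interval

# Hypothetical captured summary:
sample = "100 packets transmitted, 93 received, 7% packet loss, time 9915ms"
print(round(estimate_downtime(sample), 3))  # 0.7
```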

If you had some other analysis in mind, do explain further, and I'm sure 
someone can come up with a good suggestion.

--
Mats

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 15:35:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 15:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToFUh-0001UO-N3; Thu, 27 Dec 2012 15:35:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nai.xia@gmail.com>) id 1ToFUg-0001UH-JL
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 15:35:38 +0000
Received: from [85.158.139.211:11877] by server-6.bemta-5.messagelabs.com id
	DD/CB-30498-9CA6CD05; Thu, 27 Dec 2012 15:35:37 +0000
X-Env-Sender: nai.xia@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1356622535!19212276!1
X-Originating-IP: [209.85.160.48]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17811 invoked from network); 27 Dec 2012 15:35:36 -0000
Received: from mail-pb0-f48.google.com (HELO mail-pb0-f48.google.com)
	(209.85.160.48)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 15:35:36 -0000
Received: by mail-pb0-f48.google.com with SMTP id rq13so5439557pbb.21
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Dec 2012 07:35:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:message-id:date:from:reply-to:organization:user-agent
	:mime-version:to:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=7LW+y6OP0Hx7l5L9bET4ahheOIvaEMs7Spp2ZvYpdKQ=;
	b=lq1PcrFP9+JEPnbSGP/BOr09oQ1x/P20QZBUWkUl79YRqgvd1tyyHSVSIJX7a05hTF
	9UcjBrRQElpPEn4NxL73nRqKX+5mrR83u1SeISvbCvAXKF0RaATBaYrWJwHi82j/L5LI
	xYl/YuXMiktvZL8A9A3Iv8zrv0y2vN6amNFd243YpO7DpteCDH69c9XQ+46yM0Dc5wMm
	EQ0b/XXxTYTevXt0zw2Y194ZuteNyKNYFGc30NciIgih318NG6Fnq/YyqOYdsOqELKR2
	TIkUgx0yA9i35CQDhNWLAF9bJsGbdj+1yBSkeVv0u/g+Tg+w1qH0sbLC9ZtJ1PssCNK5
	NXGA==
X-Received: by 10.68.233.196 with SMTP id ty4mr96952090pbc.23.1356622534531;
	Thu, 27 Dec 2012 07:35:34 -0800 (PST)
Received: from [192.168.0.2] ([114.221.33.174])
	by mx.google.com with ESMTPS id o11sm17986595pby.8.2012.12.27.07.35.29
	(version=SSLv3 cipher=OTHER); Thu, 27 Dec 2012 07:35:33 -0800 (PST)
Message-ID: <50DC6ABF.7020003@gmail.com>
Date: Thu, 27 Dec 2012 23:35:27 +0800
From: Nai Xia <nai.xia@gmail.com>
Organization: NJU
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <4F14345B.4040807@gmail.com>
	<CAPQyPG5tW+Y2Snyf8qF8nn5pYcwrz=craTeedv_nAP8r8c9Q-A@mail.gmail.com>
	<20120117105323.GA74654@ocelot.phlegethon.org>
In-Reply-To: <20120117105323.GA74654@ocelot.phlegethon.org>
Cc: Lixiuchang <lixiuchang@huawei.com>, Xiaowei Yang <xiaowei.yang@huawei.com>,
	"Luohao \(brian\)" <brian.luohao@huawei.com>,
	xen-devel@lists.xensource.com, Grzegorz Milos <Grzegorz.Milos@citrix.com>
Subject: Re: [Xen-devel] Is this a racing bug in page_make_sharable()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: nai.xia@gmail.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

Last time I reported this bug. And I now see some changes in your Xen
git master branch.
However, I think the problem still remains for ref-checking in page
migration to dom_cow.

I think I can construct a bug by interleaving the two code paths:

in guest_remove_page()          |              in page_make_sharable()
-----------------------------------------------------------------------
if ( p2m_is_shared(p2mt) )                       .....
...                                              .....
page = mfn_to_page(mfn);                         .....
                                                 .....

                                                 if ( !get_page_and_type(page, d, PGT_shared_page) )    // success

                                                 .........
                                                 if ( page->count_info != (PGC_allocated | (2 + expected_refcnt)) ) // also pass


if ( unlikely(!get_page(page, d)) )

/* go on to remove page */                       /* go on to add page to cow domain */
-----------------------------------------------------------------------


Is there anything that can already prevent such racing, or can this really happen?


Thanks,

Nai Xia


On 2012-01-17 18:53, Tim Deegan wrote:
> At 22:43 +0800 on 16 Jan (1326753834), Nai Xia wrote:
>> Hi Grzegorz,
>>
>> As I understand, the purpose of the code in page_make_sharable()
>> checking the ref count is to ensure that nobody unexpected is working
>> on the page, and so we can migrate it to dom_cow, right?
>>
>> ====
>>      /* Check if the ref count is 2. The first from PGT_allocated, and
>>       * the second from get_page_and_type at the top of this function */
>>      if(page->count_info != (PGC_allocated | (2 + expected_refcnt)))
>>      {
>>          /* Return type count back to zero */
>>          put_page_and_type(page);
>>          spin_unlock(&d->page_alloc_lock);
>>          return -E2BIG;
>>      }
>> ====
>>
>> However, it seems to me that this ref check and the following page
>> migration is not atomic (although the operations for the type_info ref
>> check seem good), i.e. it's possible that it passed this ref
>> check but just before it goes to dom_cow, someone else gets this page?
>
> Yes, there are a number of races around the p2m code; Andres
> Lagar-Cavilla has been working on fixing the locking around p2m lookups
> to take care of this.
>
> Tim.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
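The non-atomic check-then-act pattern this thread discusses can be sketched with a toy Python model. This is not Xen code: `Page`, `make_sharable_racy`, and friends are illustrative stand-ins for the real `count_info` check and `get_page()` interleaving.

```python
import threading

class Page:
    """Toy stand-in for a Xen page: a reference count plus a shared flag."""
    def __init__(self):
        self.count_info = 2          # rough analogue of PGC_allocated | 2
        self.shared = False
        self.lock = threading.Lock()

def get_page(page):
    """Analogue of get_page(): some other path taking a reference."""
    page.count_info += 1

def make_sharable_racy(page, interleaved=None):
    """Non-atomic check-then-act, the pattern the report describes.

    `interleaved` models code (e.g. a guest_remove_page analogue) running
    in the window between the ref check and the state change.
    """
    if page.count_info != 2:         # ref check passes here...
        return False
    if interleaved is not None:
        interleaved(page)            # ...but a reference is taken in the gap
    page.shared = True               # acts on a now-stale check
    return True

def make_sharable_locked(page):
    """Hold one lock across check and act, closing the window."""
    with page.lock:
        if page.count_info != 2:
            return False
        page.shared = True
        return True
```

In the racy version the page ends up shared while a third reference is live, which is exactly the inconsistent state the interleaving constructs; the fix Tim mentions (locking around p2m lookups) serializes the check and the migration, as the locked variant does here in miniature.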

From xen-devel-bounces@lists.xen.org Thu Dec 27 16:12:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 16:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToG3a-0002QE-OC; Thu, 27 Dec 2012 16:11:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <digvijaych@gmail.com>) id 1ToG3Z-0002Q9-9U
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 16:11:41 +0000
Received: from [85.158.139.211:3629] by server-12.bemta-5.messagelabs.com id
	7A/48-02275-C337CD05; Thu, 27 Dec 2012 16:11:40 +0000
X-Env-Sender: digvijaych@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1356624699!22079179!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18552 invoked from network); 27 Dec 2012 16:11:40 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Dec 2012 16:11:40 -0000
Received: by mail-wi0-f178.google.com with SMTP id hn3so5447417wib.5
	for <xen-devel@lists.xen.org>; Thu, 27 Dec 2012 08:11:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=fDAY4rfkeOvAgO7yQKXF0AeK1zlsxrgLU5+i41RdPDQ=;
	b=cytTzWc3I70YzOcZklmde28SE4x8P2H3EDmj1jKeILvihpEoZsf5URmuBhW3Zehl6b
	QWuNxiQA/8q5fRXMXPb5fhYWBLndpRQ0pHtYpd+H0ba8hFJ9EsMwCcrAjuRm4SFKZkTW
	+9fQnOLAYJzj2AL+UCgmFC9gzl356ti5QrNvTh7JKkqR75IGHXMggIQxZm0aTr9G5BvY
	I5C46ci2u0nx1Bz3G+4xisMkUT77QP2rNc7NC/hkxdA0FIJ2GSHuaJWYexUf+k8Pm286
	qkWBcPEcsAnljcripVb2L0j3x73jEUpx5l987egYyenR/A+rRz/sR653O6hPyhZ85WjL
	tnBA==
MIME-Version: 1.0
Received: by 10.194.76.237 with SMTP id n13mr49752228wjw.57.1356624699736;
	Thu, 27 Dec 2012 08:11:39 -0800 (PST)
Received: by 10.194.170.167 with HTTP; Thu, 27 Dec 2012 08:11:39 -0800 (PST)
Date: Thu, 27 Dec 2012 21:41:39 +0530
Message-ID: <CANq0ewvRs7NMhaYpPfCCd9UCjbT-6dac5h=BWAXHs7pQzx8-rA@mail.gmail.com>
From: digvijay chauhan <digvijaych@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] comparison between Xen and KVM: Performance analysis of
 transport protocol (TCP) during live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7028289445973677738=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7028289445973677738==
Content-Type: multipart/alternative; boundary=047d7bfcef745240fb04d1d7cd57

--047d7bfcef745240fb04d1d7cd57
Content-Type: text/plain; charset=ISO-8859-1

Hello, I want to do a comparative study between Xen and KVM during live
migration of a VM: analysis of downtime, and also using the Wireshark and
netperf tools to perform the analysis. What else can be done?

--047d7bfcef745240fb04d1d7cd57
Content-Type: text/html; charset=ISO-8859-1

Hello, I want to do a comparative study between Xen and KVM during live migration of a VM: analysis of downtime, and also using the Wireshark and netperf tools to perform the analysis. What else can be done?<br>

--047d7bfcef745240fb04d1d7cd57--


--===============7028289445973677738==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7028289445973677738==--


From xen-devel-bounces@lists.xen.org Thu Dec 27 17:03:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 17:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToGrf-00045z-E2; Thu, 27 Dec 2012 17:03:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1ToGre-00045q-GD; Thu, 27 Dec 2012 17:03:26 +0000
Received: from [85.158.139.211:47674] by server-12.bemta-5.messagelabs.com id
	F4/A8-02275-D5F7CD05; Thu, 27 Dec 2012 17:03:25 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-206.messagelabs.com!1356627804!21250890!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NTgwNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18274 invoked from network); 27 Dec 2012 17:03:25 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 17:03:25 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 4FDE31A4E;
	Thu, 27 Dec 2012 19:03:23 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 2F41320067; Thu, 27 Dec 2012 19:03:23 +0200 (EET)
Date: Thu, 27 Dec 2012 19:03:23 +0200
From: Pasi Kärkkäinen <pasik@iki.fi>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Message-ID: <20121227170322.GS8912@reaktio.net>
References: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
	<20121222220407.GP8912@reaktio.net>
	<8fe6315.6fa2.13bc7dd8763.Coremail.gbtux@126.com>
	<068F06DC4D106941B297C0C5F9F446EA48D30B4E78@aplesstripe.dom1.jhuapl.edu>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48D30B4E78@aplesstripe.dom1.jhuapl.edu>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>, gavin <gbtux@126.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 27, 2012 at 10:27:33AM -0500, Fioravante, Matthew E. wrote:
> The frontend driver is currently being ported to the latest kernel. You can
> find the patch cross listed here as well as the linux kernel mailing list.
> 
> I have no plans to port the backend driver. If you need it you'll have to get it from the 2.6.18
> kernel and port it yourself.
> 

Hmm.. are you still using the 2.6.18 kernel in dom0 yourself?

-- Pasi

> ________________________________________
> From: xen-devel-bounces@lists.xen.org [xen-devel-bounces@lists.xen.org] On Behalf Of gavin [gbtux@126.com]
> Sent: Sunday, December 23, 2012 8:04 AM
> To: Pasi Kärkkäinen
> Cc: xen-users@lists.xen.org; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops kernel
> 
> Hi Pasi,
> 
> Thank you very much for your information.
> 
> Best Regards,
> Gavin
> 
> At 2012-12-23 06:04:08, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
> 
> >On Sun, Dec 23, 2012 at 01:50:16AM +0800, gavin wrote:
> >>     Hi,
> >>
> >>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the
> >>    config file of pv-ops kernel, such as kernel 2.6.32.50. However, this
> >>    option exists in the config file of kernel version 2.6.18.8. I also cannot
> >>    find the vTPM backend driver (such as
> >>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
> >>    So, how can I configure and use the vTPM backend driver in kernel 2.6.32?
> >>    Thank you for any advice.
> >>
> >
> >I don't think vtpm drivers were ported to 2.6.32 pvops.
> >Recently there has been work on porting the drivers to upstream Linux 3.x,
> >but they aren't merged yet iirc.
> >
> >If you need to use them with 2.6.32 you need to port them yourself..
> >
> >-- Pasi
> >
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 18:03:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 18:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToHnZ-0006G0-Br; Thu, 27 Dec 2012 18:03:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1ToHnX-0006Fh-V5
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 18:03:16 +0000
Received: from [85.158.137.99:33250] by server-9.bemta-3.messagelabs.com id
	F6/B5-11948-36D8CD05; Thu, 27 Dec 2012 18:03:15 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-7.tower-217.messagelabs.com!1356631393!13166514!1
X-Originating-IP: [166.70.13.233]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22101 invoked from network); 27 Dec 2012 18:03:14 -0000
Received: from out03.mta.xmission.com (HELO out03.mta.xmission.com)
	(166.70.13.233)
	by server-7.tower-217.messagelabs.com with AES256-SHA encrypted SMTP;
	27 Dec 2012 18:03:14 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out03.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1ToHnH-000897-Ad; Thu, 27 Dec 2012 11:02:59 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=eric-ThinkPad-X220.xmission.com)
	by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1ToHn8-0002z4-W9; Thu, 27 Dec 2012 11:02:59 -0700
From: ebiederm@xmission.com (Eric W. Biederman)
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<50DBC856.6030208@zytor.com>
	<791b4922-078f-4adc-b3f3-0651f2266147@email.android.com>
	<50DC58C4.3000307@citrix.com>
Date: Thu, 27 Dec 2012 10:02:44 -0800
In-Reply-To: <50DC58C4.3000307@citrix.com> (Andrew Cooper's message of "Thu,
	27 Dec 2012 14:18:44 +0000")
Message-ID: <874nj7qsor.fsf@xmission.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)
MIME-Version: 1.0
X-XM-AID: U2FsdGVkX1/bd4hRjyplMNYJdnZ6JPiVKfj7go4hnuA=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa06.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-3.9 required=8.0 tests=ALL_TRUSTED,BAYES_00,
	DCC_CHECK_NEGATIVE,T_TM2_M_HEADER_IN_MSG,XMSubLong autolearn=disabled
	version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -3.0 BAYES_00 BODY: Bayes spam probability is 0 to 1%
	*      [score: 0.0000]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa06 1397; Body=1 Fuz1=1 Fuz2=1]
X-Spam-DCC: XMission; sa06 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;Andrew Cooper <andrew.cooper3@citrix.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: "x86@kernel.org" <x86@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"kexec@lists.infradead.org" <kexec@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"maxim.uvarov@oracle.com" <maxim.uvarov@oracle.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"vgoyal@redhat.com" <vgoyal@redhat.com>
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper <andrew.cooper3@citrix.com> writes:

> On 27/12/2012 07:53, Eric W. Biederman wrote:
>> The syscall ABI still has the wrong semantics.
>>
>> Aka totally unmaintainable and unmergeable.
>>
>> The concept of domU support is also strange.  What does domU support even mean, when the dom0 support is loading a kernel to pick up Xen when Xen falls over.
>
> There are two requirements pulling at this patch series, but I agree
> that we need to clarify them.

It probably makes sense to split them apart a little, even.

> When dom0 loads a crash kernel, it is loading one for Xen to use.  As a
> dom0 crash causes a Xen crash, having dom0 set up a kdump kernel for
> itself is completely useless.  This ability is present in "classic Xen
> dom0" kernels, but the feature is currently missing in PVOPS.

> Many cloud customers and service providers want the ability for a VM
> administrator to be able to load a kdump/kexec kernel within a
> domain[1].  This allows the VM administrator to take more proactive
> steps to isolate the cause of a crash, the state of which is most likely
> discarded while tearing down the domain.  The result being that as far
> as Xen is concerned, the domain is still alive, while the kdump
> kernel/environment can work its usual magic.  I am not aware of any
> feature like this existing in the past.

Which makes domU support semantically just the normal kexec/kdump
support.  Got it.

The point of implementing domU is for those times when the hypervisor
admin and the kernel admin are different.

For domU support, modifying or adding alternate versions of
machine_kexec.c and relocate_kernel.S to add paravirtualization support
makes sense.

There is the practical argument that for implementation efficiency of
crash dumps it would be better if that support came from the hypervisor
or the hypervisor environment.  But this gets into the practical reality
that the hypervisor environment does not do that today.  Furthermore
kexec all by itself working in a paravirtualized environment under Xen
makes sense.

domU support is what Peter was worrying about for cleanliness, and
we need some x86 backend ops there, and generally to be careful.


For dom0 support we need to extend the kexec_load system call, and
get it right.

When we are done I expect both dom0 and domU support of kexec to work
in dom0.  I don't know if the normal kexec or kdump case will ever make
sense in dom0 but there is no reason for that case to be broken.

> ~Andrew
>
> [1] http://lists.xen.org/archives/html/xen-devel/2012-11/msg01274.html

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 18:54:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 18:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToIb0-0007Vx-Av; Thu, 27 Dec 2012 18:54:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1ToIay-0007Vs-Lp
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 18:54:20 +0000
Received: from [193.109.254.147:28503] by server-15.bemta-14.messagelabs.com
	id 53/8B-05116-B599CD05; Thu, 27 Dec 2012 18:54:19 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1356634456!2663047!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32627 invoked from network); 27 Dec 2012 18:54:18 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Dec 2012 18:54:18 -0000
Received: from tazenda.hos.anvin.org
	([IPv6:2601:9:3300:78:e269:95ff:fe35:9f3c]) (authenticated bits=0)
	by mail.zytor.com (8.14.5/8.14.5) with ESMTP id qBRIrlcE026572
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Thu, 27 Dec 2012 10:53:48 -0800
Message-ID: <50DC9935.6000201@zytor.com>
Date: Thu, 27 Dec 2012 10:53:41 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <1356574740-6806-1-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-2-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-3-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-4-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-5-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-6-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-7-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-8-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-9-git-send-email-daniel.kiper@oracle.com>
	<1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
In-Reply-To: <1356574740-6806-10-git-send-email-daniel.kiper@oracle.com>
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 09/11] x86/xen/enlighten: Add init and
 crash kexec/kdump hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/26/2012 06:18 PM, Daniel Kiper wrote:
> Add init and crash kexec/kdump hooks.
>
> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
> ---
>   arch/x86/xen/enlighten.c |   11 +++++++++++
>   1 files changed, 11 insertions(+), 0 deletions(-)

On the general issue of hooks:

Hooks need their pre- and post-semantics extremely well documented, lest
they end up being an impossible roadblock to development.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 19:33:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 19:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToJCf-00081n-Ok; Thu, 27 Dec 2012 19:33:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1ToJCe-00081i-PU
	for xen-devel@lists.xen.org; Thu, 27 Dec 2012 19:33:16 +0000
Received: from [85.158.143.35:36059] by server-2.bemta-4.messagelabs.com id
	09/A2-30861-C72ACD05; Thu, 27 Dec 2012 19:33:16 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1356636795!13302572!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24350 invoked from network); 27 Dec 2012 19:33:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 19:33:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Dec 2012 19:33:14 +0000
Message-Id: <50DCA2790200007800092135@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Thu, 27 Dec 2012 19:33:13 +0000
From: "Jan Beulich" <jbeulich@suse.com>
To: <xen-devel@lists.xen.org>,<v.tolstov@selfip.ru>
References: <CACaajQt5FEMVdX5j7ASmDrJXJzespuAVzXpGsxgLONxyXmD6Zg@mail.gmail.com>
In-Reply-To: <CACaajQt5FEMVdX5j7ASmDrJXJzespuAVzXpGsxgLONxyXmD6Zg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] blktap2 and blktap2-new
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Vasiliy Tolstov <v.tolstov@selfip.ru> 12/25/12 7:46 AM >>>
>Hello. I found that the openSUSE kernel-xen has two blktap2 modules: one
>blktap2 and the other blktap2-new. What is the difference, and why are two
>modules needed (blktap2-new is created by SUSE/openSUSE patches)?

If you looked at the sources, you certainly saw the comments at the top of
the respective patches - the two drivers come from different sources, but
are supposed to fulfill the same function.

Also, for the future I'd recommend asking openSUSE related questions on
openSUSE mailing lists.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 23:20:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 23:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToMju-0002ux-Nm; Thu, 27 Dec 2012 23:19:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1ToMjt-0002un-CO
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 23:19:49 +0000
Received: from [85.158.139.211:23192] by server-15.bemta-5.messagelabs.com id
	F3/C6-20523-497DCD05; Thu, 27 Dec 2012 23:19:48 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1356650386!20618128!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzk4OTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32157 invoked from network); 27 Dec 2012 23:19:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 23:19:47 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBRNJQAT006960
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Dec 2012 23:19:27 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBRNJPYG023594
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 27 Dec 2012 23:19:25 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBRNJONS008726; Thu, 27 Dec 2012 17:19:24 -0600
MIME-Version: 1.0
Message-ID: <71b94237-c365-47ba-8d75-5ee1d8caee73@default>
Date: Thu, 27 Dec 2012 15:19:24 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <hpa@zytor.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 02/11] x86/kexec: Add extra pointers to
 transition page table PGD, PUD, PMD and PTE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Hmm... this code is being redone at the moment... this might conflict.

Is this available somewhere? May I have a look at it?

Daniel

PS I am on holiday until 02/01/2013 and may not
   have access to my email. Please be patient.
   In the worst case I will reply when I am
   back in the office.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 23:23:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 23:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToMnQ-0003Aw-Ly; Thu, 27 Dec 2012 23:23:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1ToMnP-0003Ao-Ba
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 23:23:27 +0000
Received: from [85.158.138.51:13133] by server-10.bemta-3.messagelabs.com id
	67/D2-07616-E68DCD05; Thu, 27 Dec 2012 23:23:26 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1356650604!28580496!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTM3ODk2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19569 invoked from network); 27 Dec 2012 23:23:26 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Dec 2012 23:23:26 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBRNN46K012579
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Dec 2012 23:23:07 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBRNN3TA016709
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 27 Dec 2012 23:23:03 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBRNN3LZ010291; Thu, 27 Dec 2012 17:23:03 -0600
MIME-Version: 1.0
Message-ID: <5698071f-c96c-4891-81d6-a77c4e3b77c2@default>
Date: Thu, 27 Dec 2012 15:23:02 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <hpa@zytor.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 06/11] x86/xen: Add i386 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On 12/26/2012 06:18 PM, Daniel Kiper wrote:
> > Add i386 kexec/kdump implementation.
> >
> > v2 - suggestions/fixes:
> >     - allocate transition page table pages below 4 GiB
> >       (suggested by Jan Beulich).
>
> Why?

Sadly, all addresses are passed to the kexec
hypercall via an unsigned long variable.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 23:38:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 23:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToN20-0004Tk-Sx; Thu, 27 Dec 2012 23:38:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1ToN1z-0004TR-Ux
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 23:38:32 +0000
Received: from [85.158.137.99:45958] by server-7.bemta-3.messagelabs.com id
	9B/E2-23008-7FBDCD05; Thu, 27 Dec 2012 23:38:31 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-3.tower-217.messagelabs.com!1356651508!14833170!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21326 invoked from network); 27 Dec 2012 23:38:30 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-3.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Dec 2012 23:38:30 -0000
Received: from tazenda.hos.anvin.org
	([IPv6:2601:9:3300:78:e269:95ff:fe35:9f3c]) (authenticated bits=0)
	by mail.zytor.com (8.14.5/8.14.5) with ESMTP id qBRNc1Ak032229
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Thu, 27 Dec 2012 15:38:01 -0800
Message-ID: <50DCDBD3.1020703@zytor.com>
Date: Thu, 27 Dec 2012 15:37:55 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <5698071f-c96c-4891-81d6-a77c4e3b77c2@default>
In-Reply-To: <5698071f-c96c-4891-81d6-a77c4e3b77c2@default>
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 06/11] x86/xen: Add i386 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/27/2012 03:23 PM, Daniel Kiper wrote:
>> On 12/26/2012 06:18 PM, Daniel Kiper wrote:
>>> Add i386 kexec/kdump implementation.
>>>
>>> v2 - suggestions/fixes:
>>>      - allocate transition page table pages below 4 GiB
>>>        (suggested by Jan Beulich).
>>
>> Why?
>
> Sadly all addresses are passed via unsigned long
> variable to kexec hypercall.
>

So can you unf*ck your broken interface before imposing it on anyone else?

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 23:41:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 23:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToN4a-0004mB-FP; Thu, 27 Dec 2012 23:41:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1ToN4Y-0004lu-OC
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 23:41:10 +0000
Received: from [85.158.143.99:19722] by server-2.bemta-4.messagelabs.com id
	A9/B8-30861-69CDCD05; Thu, 27 Dec 2012 23:41:10 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1356651668!30081126!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTM3Mjkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30413 invoked from network); 27 Dec 2012 23:41:09 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 23:41:09 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBRNepKl022127
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Dec 2012 23:40:52 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBRNeplF009972
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 27 Dec 2012 23:40:51 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBRNeoxr032640; Thu, 27 Dec 2012 17:40:50 -0600
MIME-Version: 1.0
Message-ID: <bd591d7e-5e78-4df1-af68-5c0e596a80f6@default>
Date: Thu, 27 Dec 2012 15:40:50 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <hpa@zytor.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On 12/26/2012 06:18 PM, Daniel Kiper wrote:
> > Hi,
> >
> > This set of patches contains the initial kexec/kdump implementation for Xen v3.
> > Currently only dom0 is supported; however, almost all of the infrastructure
> > required for domU support is ready.
> >
> > Jan Beulich suggested merging the Xen x86 assembler code with the bare-metal x86 code.
> > This could simplify the code and slightly reduce the kernel's size. However, this solution
> > requires some changes in the bare-metal x86 code. First of all, the code which establishes
> > the transition page table should be moved back from machine_kexec_$(BITS).c to
> > relocate_kernel_$(BITS).S. Another important thing which should be changed in that
> > case is the format of the page_list array. The Xen kexec hypercall requires physical
> > addresses to alternate with virtual ones. These and other required changes have not been
> > made in this version because I am not sure that this solution will be accepted by the
> > kexec/kdump maintainers. I hope that this email sparks discussion on that topic.
>
> I want a detailed list of the constraints that this assumes and 
> therefore imposes on the native implementation as a result of this.  We 
> have had way too many patches where Xen PV hacks effectively nailgun 
> arbitrary, and sometimes poor, design decisions in place and now we 
> can't fix them.

OK, but I now think we should postpone this discussion
until all details of the generic kexec/kdump code
have been agreed. Sorry about that.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Dec 27 23:41:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Dec 2012 23:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToN4a-0004mB-FP; Thu, 27 Dec 2012 23:41:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1ToN4Y-0004lu-OC
	for xen-devel@lists.xensource.com; Thu, 27 Dec 2012 23:41:10 +0000
Received: from [85.158.143.99:19722] by server-2.bemta-4.messagelabs.com id
	A9/B8-30861-69CDCD05; Thu, 27 Dec 2012 23:41:10 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1356651668!30081126!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTM3Mjkw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30413 invoked from network); 27 Dec 2012 23:41:09 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Dec 2012 23:41:09 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBRNepKl022127
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Dec 2012 23:40:52 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBRNeplF009972
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 27 Dec 2012 23:40:51 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBRNeoxr032640; Thu, 27 Dec 2012 17:40:50 -0600
MIME-Version: 1.0
Message-ID: <bd591d7e-5e78-4df1-af68-5c0e596a80f6@default>
Date: Thu, 27 Dec 2012 15:40:50 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <hpa@zytor.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	ebiederm@xmission.com, jbeulich@suse.com,
	maxim.uvarov@oracle.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On 12/26/2012 06:18 PM, Daniel Kiper wrote:
> > Hi,
> >
> > This set of patches contains the initial kexec/kdump implementation for Xen (v3).
> > Currently only dom0 is supported; however, almost all of the infrastructure
> > required for domU support is ready.
> >
> > Jan Beulich suggested merging the Xen x86 assembler code with the bare-metal x86 code.
> > This could simplify the kernel code and reduce its size a bit. However, this solution
> > requires some changes in the bare-metal x86 code. First of all, the code which establishes
> > the transition page table should be moved back from machine_kexec_$(BITS).c to
> > relocate_kernel_$(BITS).S. Another important thing which would have to change in that
> > case is the format of the page_list array. The Xen kexec hypercall requires physical
> > addresses to alternate with virtual ones. This and the other required work has not been
> > done in this version because I am not sure that solution will be accepted by the
> > kexec/kdump maintainers. I hope that this email sparks discussion on that topic.
>
> I want a detailed list of the constraints that this assumes and 
> therefore imposes on the native implementation as a result of this.  We 
> have had way too many patches where Xen PV hacks effectively nailgun 
> arbitrary, and sometimes poor, design decisions in place and now we 
> can't fix them.

OK, but now I think we should postpone this discussion
until all details of the generic kexec/kdump code
have been agreed. Sorry about that.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 00:19:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 00:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToNfT-0006KI-C2; Fri, 28 Dec 2012 00:19:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1ToNfS-0006KD-6i
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 00:19:18 +0000
Received: from [85.158.139.83:5941] by server-8.bemta-5.messagelabs.com id
	67/16-15003-585ECD05; Fri, 28 Dec 2012 00:19:17 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1356653955!30953828!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMTM3ODk2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3560 invoked from network); 28 Dec 2012 00:19:16 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Dec 2012 00:19:16 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBS0Ixbx011913
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Dec 2012 00:19:00 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBS0Iw3W008864
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Dec 2012 00:18:59 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBS0IvaG022116; Thu, 27 Dec 2012 18:18:57 -0600
MIME-Version: 1.0
Message-ID: <2f3acfe3-c608-43d8-b76a-6ba69a872692@default>
Date: Thu, 27 Dec 2012 16:18:56 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <ebiederm@xmission.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com, andrew.cooper3@citrix.com, hpa@zytor.com,
	kexec@lists.infradead.org, x86@kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 01/11] kexec: introduce kexec firmware
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Daniel Kiper <daniel.kiper@oracle.com> writes:
>
> > Some kexec/kdump implementations (e.g. Xen PVOPS) cannot use the default
> > Linux infrastructure and require some support from the firmware and/or hypervisor.
> > To cope with that problem, the kexec firmware infrastructure was introduced.
> > It allows a developer to use all of the kexec/kdump features of a given firmware
> > or hypervisor.
>
> As this stands this patch is wrong.
>
> You need to pass an additional flag from userspace through /sbin/kexec
> that says load the kexec image in the firmware.  A global variable here
> is not ok.
>
> As I understand it you are loading a kexec-on-Xen-panic image, which
> is semantically different from a kexec-on-Linux-panic image.  It is not
> ok to have a silly global variable kexec_use_firmware.

Earlier we agreed that /sbin/kexec should call the kexec syscall with a
special flag. However, while working on the Xen kexec/kdump v3 patch
I found that this is insufficient, because e.g. crash_kexec() must also
execute different code when firmware support is in use.
Sadly, the syscall does not save this flag anywhere. Additionally, I found
that the kernel itself has the best knowledge of which code path should be
used (firmware or plain Linux). If this decision is left to userspace,
then in the worst case a simple kexec syscall could crash the system (e.g. when
plain Linux kexec is used where firmware kexec should be used).
However, if you wish I could add this flag to the syscall. Additionally, I could
add a function which enables firmware support, so that the kexec_use_firmware
variable would be global only within the kexec.c module.

> Furthermore it is not ok to have a conditional
> code outside of header files.

I agree, but how should we dispatch execution, e.g. in crash_kexec(),
if we would like (I suppose) to compile kexec firmware
support conditionally?

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 00:54:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 00:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToOCg-0006qg-8W; Fri, 28 Dec 2012 00:53:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1ToOCe-0006qb-By
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 00:53:36 +0000
Received: from [85.158.137.99:12584] by server-15.bemta-3.messagelabs.com id
	39/2C-07921-C8DECD05; Fri, 28 Dec 2012 00:53:32 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-4.tower-217.messagelabs.com!1356656010!21098497!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAxMzk4OTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7924 invoked from network); 28 Dec 2012 00:53:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-217.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Dec 2012 00:53:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id qBS0rC3Z000881
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Dec 2012 00:53:12 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	qBS0rAx5008811
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Dec 2012 00:53:11 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	qBS0rAgW016424; Thu, 27 Dec 2012 18:53:10 -0600
MIME-Version: 1.0
Message-ID: <45868bcd-baa9-4a4d-901f-23d1c35cebea@default>
Date: Thu, 27 Dec 2012 16:53:09 -0800 (PST)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <ebiederm@xmission.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com, andrew.cooper3@citrix.com, hpa@zytor.com,
	kexec@lists.infradead.org, x86@kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 00/11] xen: Initial kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Andrew Cooper <andrew.cooper3@citrix.com> writes:
>
> > On 27/12/2012 07:53, Eric W. Biederman wrote:
> >> The syscall ABI still has the wrong semantics.
> >>
> >> Aka totally unmaintainable and unmergeable.
> >>
> >> The concept of domU support is also strange.  What does domU support even mean, when the dom0 support is loading a kernel to pick up Xen when Xen falls over?
> >
> > There are two requirements pulling at this patch series, but I agree
> > that we need to clarify them.
>
> It probably even makes sense to split them apart a little.
>
> > When dom0 loads a crash kernel, it is loading one for Xen to use.  As a
> > dom0 crash causes a Xen crash, having dom0 set up a kdump kernel for
> > itself is completely useless.  This ability is present in "classic Xen
> > dom0" kernels, but the feature is currently missing in PVOPS.
>
> > Many cloud customers and service providers want the ability for a VM
> > administrator to be able to load a kdump/kexec kernel within a
> > domain[1].  This allows the VM administrator to take more proactive
> > steps to isolate the cause of a crash, the state of which is most likely
> > discarded while tearing down the domain.  The result being that as far
> > as Xen is concerned, the domain is still alive, while the kdump
> > kernel/environment can work its usual magic.  I am not aware of any
> > feature like this existing in the past.
>
> Which makes domU support semantically just the normal kexec/kdump
> support.  Got it.

To some extent. It is true for HVM and PV-on-HVM guests. However,
PV guests require a slightly different kexec/kdump implementation
than plain kexec/kdump. The proposed firmware support has almost
all of the required features. The (few) PV guest specific features
will be added later (after the generic firmware support, which is
sufficient at least for dom0, has been agreed).

It looks like I should replace domU with PV guest in the patch description.

> The point of implementing domU is for those times when the hypervisor
> admin and the kernel admin are different.

Right.

> For domU support, modifying or adding alternate versions of
> machine_kexec.c and relocate_kernel.S to add paravirtualization support
> makes sense.

That is not sufficient. Please see above.

> There is the practical argument that for implementation efficiency of
> crash dumps it would be better if that support came from the hypervisor
> or the hypervisor environment.  But this gets into the practical reality

I am thinking about that.

> that the hypervisor environment does not do that today.  Furthermore
> kexec all by itself working in a paravirtualized environment under Xen
> makes sense.
>
> domU support is what Peter was worrying about for cleanliness, and
> we need some x86 backend ops there, and generally to be careful.

As far as I know, we do not need any additional pv_ops stuff
if we place everything needed in the kexec firmware support.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 03:07:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 03:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToQHL-00041I-Hu; Fri, 28 Dec 2012 03:06:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1ToQHK-00041D-58
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 03:06:34 +0000
Received: from [85.158.143.99:51907] by server-3.bemta-4.messagelabs.com id
	58/9F-18211-9BC0DD05; Fri, 28 Dec 2012 03:06:33 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356663991!21509020!1
X-Originating-IP: [166.70.13.232]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAxNjYuNzAuMTMuMjMyID0+IDYyOTI=\n,sa_preprocessor: 
	QmFkIElQOiAxNjYuNzAuMTMuMjMyID0+IDYyOTI=\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 468 invoked from network); 28 Dec 2012 03:06:32 -0000
Received: from out02.mta.xmission.com (HELO out02.mta.xmission.com)
	(166.70.13.232)
	by server-14.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	28 Dec 2012 03:06:32 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out02.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1ToQHA-000194-4b; Thu, 27 Dec 2012 20:06:24 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=eric-ThinkPad-X220.xmission.com)
	by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1ToQH6-0006W6-Ph; Thu, 27 Dec 2012 20:06:23 -0700
From: ebiederm@xmission.com (Eric W. Biederman)
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <2f3acfe3-c608-43d8-b76a-6ba69a872692@default>
Date: Thu, 27 Dec 2012 19:06:13 -0800
In-Reply-To: <2f3acfe3-c608-43d8-b76a-6ba69a872692@default> (Daniel Kiper's
	message of "Thu, 27 Dec 2012 16:18:56 -0800 (PST)")
Message-ID: <874nj6ooyi.fsf@xmission.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)
MIME-Version: 1.0
X-XM-AID: U2FsdGVkX1/ucq/S2VajQy1XwEAoKK2rydGJt9Hy4IA=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa03.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-3.9 required=8.0 tests=ALL_TRUSTED,BAYES_00,
	DCC_CHECK_NEGATIVE, T_TM2_M_HEADER_IN_MSG, T_XMDrugObfuBody_08,
	XMSubLong autolearn=disabled version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -3.0 BAYES_00 BODY: Bayes spam probability is 0 to 1%
	*      [score: 0.0000]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa03 1397; Body=1 Fuz1=1 Fuz2=1]
	*  0.0 T_XMDrugObfuBody_08 obfuscated drug references
X-Spam-DCC: XMission; sa03 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;Daniel Kiper <daniel.kiper@oracle.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com, andrew.cooper3@citrix.com, hpa@zytor.com,
	kexec@lists.infradead.org, x86@kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 01/11] kexec: introduce kexec firmware
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel Kiper <daniel.kiper@oracle.com> writes:

>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>
>> > Some kexec/kdump implementations (e.g. Xen PVOPS) cannot use the default
>> > Linux infrastructure and require some support from the firmware and/or hypervisor.
>> > To cope with that problem, the kexec firmware infrastructure was introduced.
>> > It allows a developer to use all of the kexec/kdump features of a given firmware
>> > or hypervisor.
>>
>> As this stands this patch is wrong.
>>
>> You need to pass an additional flag from userspace through /sbin/kexec
>> that says load the kexec image in the firmware.  A global variable here
>> is not ok.
>>
>> As I understand it you are loading a kexec-on-Xen-panic image, which
>> is semantically different from a kexec-on-Linux-panic image.  It is not
>> ok to have a silly global variable kexec_use_firmware.
>
> Earlier we agreed that /sbin/kexec should call the kexec syscall with a
> special flag. However, while working on the Xen kexec/kdump v3 patch
> I found that this is insufficient, because e.g. crash_kexec() must also
> execute different code when firmware support is in use.

That implies you have the wrong model of userspace.

Very simply there is:
linux kexec pass through to xen kexec.

And
linux kexec (ultimately pv kexec because the pv machine is a slightly
different architecture).

> Sadly, the syscall does not save this flag anywhere.

> Additionally, I found
> that the kernel itself has the best knowledge of which code path should be
> used (firmware or plain Linux). If this decision is left to userspace,
> then in the worst case a simple kexec syscall could crash the system
> (e.g. when plain Linux kexec is used where firmware kexec should be
> used).

And that path-selection argument is nonsense.  You are advocating
hardcoding unnecessary policy in the kernel.

If for dom0 you need crash_kexec to do something different from domU
you should be able to load a small piece of code via kexec that makes
the hypervisor calls you need.

> However, if you wish I could add this flag to syscall.

I do wish.  We need to distinguish between the kexec firmware
pass-through and normal kexec.

> Additionally, I could
> add a function which enables firmware support, so that the kexec_use_firmware
> variable would be global only within the kexec.c module.

No.  kexec_use_firmware is the wrong mental model.

Do not mix the kexec pass through and the normal kexec case.

We most definitely need to call different code in the kexec firmware
pass through case.

For normal kexec we just need to use a paravirt aware version of
machine_kexec and machine_kexec_shutdown.

>> Furthermore it is not ok to have a conditional
>> code outside of header files.
>
> I agree, but how should we dispatch execution, e.g. in crash_kexec(),
> if we would like (I suppose) to compile kexec firmware
> support conditionally?

The classic pattern is to have the #ifdefs in the header and have an
noop function that is inlined when the functionality is compiled out.
This allows all of the logic to always be compiled.

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 03:07:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 03:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToQHL-00041I-Hu; Fri, 28 Dec 2012 03:06:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1ToQHK-00041D-58
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 03:06:34 +0000
Received: from [85.158.143.99:51907] by server-3.bemta-4.messagelabs.com id
	58/9F-18211-9BC0DD05; Fri, 28 Dec 2012 03:06:33 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1356663991!21509020!1
X-Originating-IP: [166.70.13.232]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAxNjYuNzAuMTMuMjMyID0+IDYyOTI=\n,sa_preprocessor: 
	QmFkIElQOiAxNjYuNzAuMTMuMjMyID0+IDYyOTI=\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 468 invoked from network); 28 Dec 2012 03:06:32 -0000
Received: from out02.mta.xmission.com (HELO out02.mta.xmission.com)
	(166.70.13.232)
	by server-14.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	28 Dec 2012 03:06:32 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out02.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1ToQHA-000194-4b; Thu, 27 Dec 2012 20:06:24 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=eric-ThinkPad-X220.xmission.com)
	by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1ToQH6-0006W6-Ph; Thu, 27 Dec 2012 20:06:23 -0700
From: ebiederm@xmission.com (Eric W. Biederman)
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <2f3acfe3-c608-43d8-b76a-6ba69a872692@default>
Date: Thu, 27 Dec 2012 19:06:13 -0800
In-Reply-To: <2f3acfe3-c608-43d8-b76a-6ba69a872692@default> (Daniel Kiper's
	message of "Thu, 27 Dec 2012 16:18:56 -0800 (PST)")
Message-ID: <874nj6ooyi.fsf@xmission.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)
MIME-Version: 1.0
X-XM-AID: U2FsdGVkX1/ucq/S2VajQy1XwEAoKK2rydGJt9Hy4IA=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa03.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-3.9 required=8.0 tests=ALL_TRUSTED,BAYES_00,
	DCC_CHECK_NEGATIVE, T_TM2_M_HEADER_IN_MSG, T_XMDrugObfuBody_08,
	XMSubLong autolearn=disabled version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -3.0 BAYES_00 BODY: Bayes spam probability is 0 to 1%
	*      [score: 0.0000]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa03 1397; Body=1 Fuz1=1 Fuz2=1]
	*  0.0 T_XMDrugObfuBody_08 obfuscated drug references
X-Spam-DCC: XMission; sa03 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;Daniel Kiper <daniel.kiper@oracle.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com, andrew.cooper3@citrix.com, hpa@zytor.com,
	kexec@lists.infradead.org, x86@kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 01/11] kexec: introduce kexec firmware
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel Kiper <daniel.kiper@oracle.com> writes:

>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>
>> > Some kexec/kdump implementations (e.g. Xen PVOPS) cannot use the
>> > default Linux infrastructure and require support from the firmware
>> > and/or hypervisor. To cope with that problem, the kexec firmware
>> > infrastructure was introduced. It allows a developer to use all
>> > kexec/kdump features of a given firmware or hypervisor.
>>
>> As this stands this patch is wrong.
>>
>> You need to pass an additional flag from userspace through /sbin/kexec
>> that says to load the kexec image in the firmware.  A global variable
>> here is not ok.
>>
>> As I understand it you are loading a kexec-on-Xen-panic image, which
>> is semantically different from a kexec-on-Linux-panic image.  It is
>> not ok to have a silly global variable kexec_use_firmware.
>
> Earlier we agreed that /sbin/kexec should call the kexec syscall with
> a special flag. However, during work on the Xen kexec/kdump v3 patch
> I found that this is insufficient because e.g. crash_kexec() also
> needs to execute different code when firmware support is in use.

That implies you have the wrong model of userspace.

Very simply, there are two cases:

1. Linux kexec passed through to Xen kexec.

2. Linux kexec proper (ultimately pv kexec, because the pv machine is a
slightly different architecture).

> Sadly, the syscall does not save this flag anywhere.

> Additionally, I argued
> that the kernel itself has the best knowledge of which code path should
> be used (firmware or plain Linux). If this decision is left to
> userspace then, in the worst case, a simple kexec syscall could crash
> the system (e.g. when plain Linux kexec is used where firmware kexec
> should be used).

And that path-selection argument is nonsense.  You are advocating
hardcoding unnecessary policy in the kernel.

If for dom0 you need crash_kexec to do something different from domU,
you should be able to load a small piece of code via kexec that makes
the hypervisor calls you need.

> However, if you wish, I could add this flag to the syscall.

I do wish.  We need to distinguish between the kexec firmware
pass-through and normal kexec.

> Additionally, I could add a function which enables firmware support;
> the kexec_use_firmware variable would then be global only within the
> kexec.c module.

No.  kexec_use_firmware is the wrong mental model.

Do not mix the kexec pass through and the normal kexec case.

We most definitely need to call different code in the kexec firmware
pass-through case.

For normal kexec we just need to use a paravirt-aware version of
machine_kexec and machine_kexec_shutdown.

>> Furthermore it is not ok to have conditional
>> code outside of header files.
>
> I agree, but how should execution be dispatched, e.g. in crash_kexec(),
> if we would like (I suppose) to compile kexec firmware
> support conditionally?

The classic pattern is to put the #ifdefs in the header and have a
no-op function that is inlined when the functionality is compiled out.
This allows all of the logic to always be compiled.

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 03:13:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 03:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToQO0-0004DA-El; Fri, 28 Dec 2012 03:13:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1ToQNz-0004D5-04
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 03:13:27 +0000
Received: from [85.158.139.211:49657] by server-15.bemta-5.messagelabs.com id
	29/B2-20523-65E0DD05; Fri, 28 Dec 2012 03:13:26 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1356664405!21592843!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7621 invoked from network); 28 Dec 2012 03:13:25 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 03:13:25 -0000
Received: by mail-ee0-f44.google.com with SMTP id b47so5134551eek.31
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Dec 2012 19:13:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=A5xAWSyDN8WQhBsgCKDlMWoUXYbHwRLp1GoUJsFNwjA=;
	b=Vrkp4O+ioixlGie3q49gtF7u4UddoVq/r2Hglmus6TmuTBuReGpaJgVMLn28gfnDYW
	LkyplyBKy1Kw7FGgxH3JFTlMQA6bDh3n3A33tsVH4KC1YnGD5f9xs3wBwuwLR2z9gvtx
	1khkQTI3ABkkd3lyCgKB+pvTSn30VqnDnKiBvbvThCv/m9/4xvq+x7UtjWVFWzVuFsS1
	KF+7Dniv6FDQVSY8RVwIsETbWRWVaI/pUTh+eIVK0TTFIVIZSB5/tIri3HLyfcyLZDJm
	/RbDygDafLWcENg5wkA98DquuPkEPk1sTyksxc16knIVIBZuFYwYJNx8BOGO20lBVNXe
	KPpQ==
MIME-Version: 1.0
Received: by 10.14.209.193 with SMTP id s41mr82922183eeo.9.1356664405261; Thu,
	27 Dec 2012 19:13:25 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Thu, 27 Dec 2012 19:13:25 -0800 (PST)
In-Reply-To: <1356612077.19238.20.camel@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
	<CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
	<1356612077.19238.20.camel@iceland>
Date: Fri, 28 Dec 2012 11:13:25 +0800
Message-ID: <CA+ePHTC4Fq0sqb5ie+YBiC5Ft_q6O2PkJxYqGXxbBbHSHxfbOA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
	=?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_e?=
	=?utf-8?q?rror?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7098957816635183019=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7098957816635183019==
Content-Type: multipart/alternative; boundary=e89a8f502b92f47cf604d1e10b48

--e89a8f502b92f47cf604d1e10b48
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, Dec 27, 2012 at 8:41 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Thu, 2012-12-27 at 02:12 +0000, =E9=A9=AC=E7=A3=8A wrote:
> >
> > I got it, but the error `  xc: error: do_evtchn_op:
> > HYPERVISOR_event_channel_op failed: -1 (3 =3D No such process): Interna=
l
> > error. ` said no such process, the system error description
> > didn't seem to have anything to do with the following lines which raised
> >  85    state->store_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid,
> > 0);
> >  86    state->console_port =3D xc_evtchn_alloc_unbound(ctx->xch, domid,
> > 0);
>
> The error code -1 is -EPERM, which means you don't have permission to
> issue this operation. I don't think this is a bug. There might be some
> problems with your setup.
>
> If you need any pointer in reading source code, I will be happy to help.
>
>
> Wei.
>
>     Thanks for your kindness!

    I looked into the logging functions. In this case, `3 =3D No such
process` came from `errno`, and `HYPERVISOR_event_channel_op failed: -1`
came from a hypervisor-level error (src/xen/common/event_channel.c).
In my opinion, that is to say, the error number -1 was produced by the
hypervisor; but where did the error number 3 come from -- dom0?
Do both error numbers refer to the descriptions defined in errno.h,
or does the hypervisor have its own error descriptions?

    I ran `xl restore` under sudo, so there should have been no reason
for `EPERM`.

--e89a8f502b92f47cf604d1e10b48
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Thu, Dec 27, 2012 at 8:41 PM, Wei Liu=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:Wei.Liu2@citrix.com" target=3D"_bl=
ank">Wei.Liu2@citrix.com</a>&gt;</span> wrote:<br><blockquote class=3D"gmai=
l_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left=
:1ex">
<div class=3D"im">On Thu, 2012-12-27 at 02:12 +0000, =E9=A9=AC=E7=A3=8A wro=
te:<br>
&gt;<br>
&gt; I got it, but the error ` =C2=A0xc: error: do_evtchn_op:<br>
&gt; HYPERVISOR_event_channel_op failed: -1 (3 =3D No such process): Intern=
al<br>
&gt; error. ` said no such process, the system error description<br>
&gt; didn&#39;t seem has anything to do with the following lines wich raise=
d<br>
&gt; =C2=A085 =C2=A0 =C2=A0state-&gt;store_port =3D xc_evtchn_alloc_unbound=
(ctx-&gt;xch, domid,<br>
&gt; 0);<br>
&gt; =C2=A086 =C2=A0 =C2=A0state-&gt;console_port =3D xc_evtchn_alloc_unbou=
nd(ctx-&gt;xch, domid,<br>
&gt; 0);<br>
<br>
</div>The error code -1 is -EPERM, which means you don&#39;t have permissio=
n to<br>
issue this operation. I don&#39;t think this is a bug. There might be some<=
br>
problems with your setup.<br>
<br>
If you need any pointer in reading source code, I will be happy to help.<br=
>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
<br>
Wei.<br>
<br></font></span></blockquote><div>=C2=A0 =C2=A0 Thanks for your kindness!=
=C2=A0</div><div><br></div></div>=C2=A0 =C2=A0 I looked into the functions =
for logging, in this case,=C2=A0<font color=3D"#ff6666">=C2=A0`3 =3D No suc=
h process` was from `errno`</font> and the<font color=3D"#ff6666"> `
<span style=3D"font-family:arial,sans-serif;font-size:13px;background-color=
:rgb(255,255,255)">HYPERVISOR_event_channel_op failed: -1</span>=C2=A0` was=
 from hypervisor-level error(src/xen/common/event_channel.c)</font>.<div>In=
 my option, that&#39;s to say, error number of -1 was caused by hypervisor;=
 but what was the error number of 3 caused by, =C2=A0dom0?</div>
<div>Do both the two error numbers refer to the description defined in errn=
o.h or else hypervisor has its own error description?</div><div><br></div><=
div><font color=3D"#ff6666">=C2=A0 =C2=A0 I run `xl restore` in sudo mode, =
it had no reason for `EPERM`.</font></div>

--e89a8f502b92f47cf604d1e10b48--


--===============7098957816635183019==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7098957816635183019==--


From xen-devel-bounces@lists.xen.org Fri Dec 28 03:17:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 03:17:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToQRW-0004MS-8h; Fri, 28 Dec 2012 03:17:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ebiederm@xmission.com>) id 1ToQRU-0004MK-LX
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 03:17:04 +0000
Received: from [193.109.254.147:36000] by server-12.bemta-14.messagelabs.com
	id 30/2C-06523-F2F0DD05; Fri, 28 Dec 2012 03:17:03 +0000
X-Env-Sender: ebiederm@xmission.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1356664622!11713577!1
X-Originating-IP: [166.70.13.233]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13689 invoked from network); 28 Dec 2012 03:17:03 -0000
Received: from out03.mta.xmission.com (HELO out03.mta.xmission.com)
	(166.70.13.233)
	by server-16.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	28 Dec 2012 03:17:03 -0000
Received: from in02.mta.xmission.com ([166.70.13.52])
	by out03.mta.xmission.com with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.76) (envelope-from <ebiederm@xmission.com>)
	id 1ToQRP-0007rW-UR; Thu, 27 Dec 2012 20:17:00 -0700
Received: from c-98-207-153-68.hsd1.ca.comcast.net ([98.207.153.68]
	helo=eric-ThinkPad-X220.xmission.com)
	by in02.mta.xmission.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.76)
	(envelope-from <ebiederm@xmission.com>)
	id 1ToQRJ-0007hM-IP; Thu, 27 Dec 2012 20:16:59 -0700
From: ebiederm@xmission.com (Eric W. Biederman)
To: "H. Peter Anvin" <hpa@zytor.com>
References: <5698071f-c96c-4891-81d6-a77c4e3b77c2@default>
	<50DCDBD3.1020703@zytor.com>
Date: Thu, 27 Dec 2012 19:16:46 -0800
In-Reply-To: <50DCDBD3.1020703@zytor.com> (H. Peter Anvin's message of "Thu,
	27 Dec 2012 15:37:55 -0800")
Message-ID: <87han6n9wh.fsf@xmission.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)
MIME-Version: 1.0
X-XM-AID: U2FsdGVkX1/F1tbWEEceGoJXUWc2fmBDWHR7nPNYdvE=
X-SA-Exim-Connect-IP: 98.207.153.68
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sa06.xmission.com
X-Spam-Level: 
X-Spam-Status: No, score=-3.9 required=8.0 tests=ALL_TRUSTED,BAYES_00,
	DCC_CHECK_NEGATIVE, T_TM2_M_HEADER_IN_MSG, T_XMDrugObfuBody_08,
	XMSubLong autolearn=disabled version=3.3.2
X-Spam-Report: * -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.1 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: T_TM2_M_HEADER_IN_MSG
	* -3.0 BAYES_00 BODY: Bayes spam probability is 0 to 1%
	*      [score: 0.0000]
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa06 1397; Body=1 Fuz1=1 Fuz2=1]
	*  0.0 T_XMDrugObfuBody_08 obfuscated drug references
X-Spam-DCC: XMission; sa06 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: ;"H. Peter Anvin" <hpa@zytor.com>
X-Spam-Relay-Country: 
X-SA-Exim-Version: 4.2.1 (built Sun, 08 Jan 2012 03:05:19 +0000)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)
Cc: xen-devel@lists.xensource.com, konrad.wilk@oracle.com,
	andrew.cooper3@citrix.com,
	Daniel Kiper <daniel.kiper@oracle.com>, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mingo@redhat.com,
	jbeulich@suse.com, maxim.uvarov@oracle.com, tglx@linutronix.de,
	vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 06/11] x86/xen: Add i386 kexec/kdump
	implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"H. Peter Anvin" <hpa@zytor.com> writes:

> On 12/27/2012 03:23 PM, Daniel Kiper wrote:
>>> On 12/26/2012 06:18 PM, Daniel Kiper wrote:
>>>> Add i386 kexec/kdump implementation.
>>>>
>>>> v2 - suggestions/fixes:
>>>>      - allocate transition page table pages below 4 GiB
>>>>        (suggested by Jan Beulich).
>>>
>>> Why?
>>
>> Sadly all addresses are passed via unsigned long
>> variable to kexec hypercall.
>>
>
> So can you unf*ck your broken interface before imposing it on anyone
> else?

Hasn't 32-bit dom0 been retired?

Until the kexec firmware pass-through and the normal kexec code paths
are separated, I can't make sense of this code.

The normal kexec code path should be doable without any special
constraints on the kernel, just leaning on some Xen paravirt calls.

The kexec pass-through probably should not even touch x86 arch code.

Certainly the same patch should not contain both the Xen kexec
pass-through code and the paravirt arch code for the normal kexec path.

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"H. Peter Anvin" <hpa@zytor.com> writes:

> On 12/27/2012 03:23 PM, Daniel Kiper wrote:
>>> On 12/26/2012 06:18 PM, Daniel Kiper wrote:
>>>> Add i386 kexec/kdump implementation.
>>>>
>>>> v2 - suggestions/fixes:
>>>>      - allocate transition page table pages below 4 GiB
>>>>        (suggested by Jan Beulich).
>>>
>>> Why?
>>
>> Sadly, all addresses are passed to the kexec
>> hypercall via an unsigned long variable.
>>
>
> So can you unf*ck your broken interface before imposing it on anyone
> else?

Hasn't 32-bit dom0 been retired?

Until the kexec firmware pass-through and the normal kexec code paths
are separated, I can't make sense of this code.

The normal kexec code path should be doable without any special
constraints on the kernel, just leaning on some Xen paravirt calls.

The kexec pass-through probably should not even touch x86 arch code.

Certainly the same patch should not contain both the Xen kexec
pass-through code and the paravirt arch code for the normal kexec path.

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 05:16:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 05:16:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToSIR-0005qY-5N; Fri, 28 Dec 2012 05:15:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tushar.behera@linaro.org>) id 1ToSIP-0005qT-CF
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 05:15:49 +0000
Received: from [85.158.139.83:29001] by server-14.bemta-5.messagelabs.com id
	F4/60-09538-40B2DD05; Fri, 28 Dec 2012 05:15:48 +0000
X-Env-Sender: tushar.behera@linaro.org
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356671745!27612359!1
X-Originating-IP: [209.85.220.42]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22227 invoked from network); 28 Dec 2012 05:15:47 -0000
Received: from mail-pa0-f42.google.com (HELO mail-pa0-f42.google.com)
	(209.85.220.42)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 05:15:47 -0000
Received: by mail-pa0-f42.google.com with SMTP id rl6so5841169pac.15
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Dec 2012 21:15:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=x-received:message-id:date:from:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding:x-gm-message-state;
	bh=647SpzB4d0KqZxI3TASSooDNLx9Dw9RGamVY0ubAdFk=;
	b=TBycvx9lJzX5XWUxSRkcM7dQ2Oqj9G+BkRa71XSGHJgg9V7/aKaLPUFKYkLbo7jqBK
	CmBMzx2NW4ScLCmCmJLKxuA7m1fy7YnnvjWeWLkECNS+AdEeQHmm6HjzE9ZCxM5GO3fH
	uwlrG4I1e1CVW2dX2lBgtLvNaeizi980JqpZOVQJWjwnRfpSycqsUdd8E/hnIVXp21I3
	h2bvf5FbgNdiBtyF+VGnuZ1P5+/3RcjxOxoWdDRICuC32MImdpwdCmrKMnxVIDNCHqay
	0Vzf2F2aWUS04nMWbgCxQc13V2d3vpg1nOfxRaOjRXAZKrPsrrJzbMwbbLmVMWGt00kl
	taQg==
X-Received: by 10.66.87.8 with SMTP id t8mr95951209paz.28.1356671745217;
	Thu, 27 Dec 2012 21:15:45 -0800 (PST)
Received: from [10.10.10.29] ([115.113.119.130])
	by mx.google.com with ESMTPS id hs2sm19025836pbc.22.2012.12.27.21.15.41
	(version=SSLv3 cipher=OTHER); Thu, 27 Dec 2012 21:15:43 -0800 (PST)
Message-ID: <50DD2AFB.6030701@linaro.org>
Date: Fri, 28 Dec 2012 10:45:39 +0530
From: Tushar Behera <tushar.behera@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1353048646-10935-1-git-send-email-tushar.behera@linaro.org>
	<1353048646-10935-9-git-send-email-tushar.behera@linaro.org>
	<1353057394.3499.159.camel@zakaz.uk.xensource.com>
In-Reply-To: <1353057394.3499.159.camel@zakaz.uk.xensource.com>
X-Gm-Message-State: ALoCoQk7ilBOMee+OiW0O/72qz2QNCllSf4GpzWpCL/yN+FDkP6gDP7hZazLCw1V5NWFrsTd6cVx
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"patches@linaro.org" <patches@linaro.org>
Subject: Re: [Xen-devel] [PATCH 08/14] xen: netback: Remove redundant check
 on unsigned variable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/16/2012 02:46 PM, Ian Campbell wrote:
> On Fri, 2012-11-16 at 06:50 +0000, Tushar Behera wrote:
>> No need to check whether an unsigned variable is less than 0.
>>
>> CC: Ian Campbell <ian.campbell@citrix.com>
>> CC: xen-devel@lists.xensource.com
>> CC: netdev@vger.kernel.org
>> Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Thanks.
> 

This patch was not picked up for 3.8-rc1. Any idea who should pick this up?

>> ---
>>  drivers/net/xen-netback/netback.c |    4 ++--
>>  1 files changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index aab8677..515e10c 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -190,14 +190,14 @@ static int get_page_ext(struct page *pg,
>>  
>>  	group = ext.e.group - 1;
>>  
>> -	if (group < 0 || group >= xen_netbk_group_nr)
>> +	if (group >= xen_netbk_group_nr)
>>  		return 0;
>>  
>>  	netbk = &xen_netbk[group];
>>  
>>  	idx = ext.e.idx;
>>  
>> -	if ((idx < 0) || (idx >= MAX_PENDING_REQS))
>> +	if (idx >= MAX_PENDING_REQS)
>>  		return 0;
>>  
>>  	if (netbk->mmap_pages[idx] != pg)
> 
> 


-- 
Tushar Behera

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 07:07:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 07:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToU1u-0007Cx-OK; Fri, 28 Dec 2012 07:06:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1ToU1s-0007Cs-JT
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 07:06:52 +0000
Received: from [85.158.139.211:13331] by server-9.bemta-5.messagelabs.com id
	FA/91-10690-B054DD05; Fri, 28 Dec 2012 07:06:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1356678410!22104027!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzNjY0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3057 invoked from network); 28 Dec 2012 07:06:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 07:06:50 -0000
X-IronPort-AV: E=Sophos;i="4.84,367,1355097600"; 
   d="scan'208";a="373994"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Dec 2012 07:06:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Fri, 28 Dec 2012 07:06:49 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1ToU1p-0008Sd-8g;
	Fri, 28 Dec 2012 07:06:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1ToU1o-0001w7-Qj;
	Fri, 28 Dec 2012 07:06:49 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14811-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Dec 2012 07:06:48 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14811: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14811 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14811/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14810
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14810

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  c4114a042410

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 07:47:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 07:47:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToUeJ-0007hi-68; Fri, 28 Dec 2012 07:46:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <genius.rsd@gmail.com>) id 1ToUeI-0007hd-FI
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 07:46:34 +0000
Received: from [85.158.139.211:60696] by server-1.bemta-5.messagelabs.com id
	E6/FF-12813-95E4DD05; Fri, 28 Dec 2012 07:46:33 +0000
X-Env-Sender: genius.rsd@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1356680789!20646400!1
X-Originating-IP: [209.85.210.179]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: , surbl: (ASYNC_NO)
	c3VyYmxfcmVjaGVja19kZWxheTogMjI2MTI4MCAoYWJhbmRvbmVkOiB
	yb2hpdHNkYW1rb25kd2Fy\nLndvcmRwcmVzcy5jb20p\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9916 invoked from network); 28 Dec 2012 07:46:30 -0000
Received: from mail-ia0-f179.google.com (HELO mail-ia0-f179.google.com)
	(209.85.210.179)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 07:46:30 -0000
Received: by mail-ia0-f179.google.com with SMTP id o25so8686222iad.10
	for <xen-devel@lists.xen.org>; Thu, 27 Dec 2012 23:46:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=eN/UKED5sl0T4GDbue/5EYgSLznl1DSetP9lX4F4Ggs=;
	b=q4WlJy2hd8dn9rwlip0aaGofwIh9ORW8A/lSDOBt2g9Ie1d9BghT0erwXcZWAV+yhI
	hi4th+YJdVibydffmvSkiE/r6FmTxkO81XpI+nZsjdYZlh27k1dd6EhUmbi5DtIN8+rv
	09UoLTWcbo1S4YhmKRr22BXBI5el1dJHo2HkfMOCXItnnT+HVvWvR6cVkbBb+MmC36Kx
	by1XuReQhE+d4XKOU/tEAoiMqNmsDioukjw3qqFxkLFuwTFDQqGzXCz8GJXGLufzKSPC
	IKGl1o/ZhmFtXulHJ7/e4XnTA7TmnqLjbl+r0wiI7ftoXcx3InXJk86f5d6dDAcdxcFo
	nNmA==
MIME-Version: 1.0
Received: by 10.50.212.3 with SMTP id ng3mr24046468igc.104.1356680788763; Thu,
	27 Dec 2012 23:46:28 -0800 (PST)
Received: by 10.64.68.47 with HTTP; Thu, 27 Dec 2012 23:46:28 -0800 (PST)
In-Reply-To: <1356611599.19238.13.camel@iceland>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
	<1356611599.19238.13.camel@iceland>
Date: Fri, 28 Dec 2012 13:16:28 +0530
Message-ID: <CAHEEu857jjBLACY=MNKzqEkpGbjXw1cbfqy9AOyR5fVsuOmagg@mail.gmail.com>
From: Rohit Damkondwar <genius.rsd@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6190198305537326003=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6190198305537326003==
Content-Type: multipart/alternative; boundary=14dae93409f77ce5e604d1e4dcf3

--14dae93409f77ce5e604d1e4dcf3
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Dec 27, 2012 at 6:03 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Thu, 2012-12-27 at 08:46 +0000, Rohit Damkondwar wrote:
> > Hi all. I want to set bandwidth limits on a virtual interface
> > dynamically (without restarting the virtual machine). I have been browsing
> > the Xen 4.1.3 source code. I looked into the libxen folder (xen_vif.c) and
> > the hotplug (linux) folder. Earlier, in Xen 3.0, the xenvif structure
> > (driver/net/xen-netback/interface.c + common.h) and the tx_add_credit
> > function could be used to modify rate limits. I want to change the
> > bandwidth limits of a virtual interface dynamically in Xen 4.1.3. Where
> > should I look in Xen 4.1.3?
> >
> > Please help.
> >
>
> Xen vif has a parameter called 'rate'; I don't know whether it suits
> you.
>
The rate parameter only restricts one-way traffic (probably only outgoing).


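For reference, the 'rate' parameter is set per-vif in the guest's configuration file. A minimal sketch follows; the bridge name and values are illustrative, and the optional credit-replenishment interval (the "@50ms" part) is taken from Xen's documented RATE@INTERVAL form:

```
vif = [ 'bridge=xenbr0, rate=10Mb/s@50ms' ]
```

This limits the guest's transmit rate via netback's credit scheduler, which matches the one-way behaviour noted above.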
> Also, you can have a look at an external tool like tc(8). My vague thought
> is that a vif is just another interface in Dom0, so tc(8) should be able to
> traffic-shape the vif.
>

Don't you think using an external tool may decrease efficiency? If Xen
itself had the capabilities (as provided by the tc tool), wouldn't it be more
efficient?
I have used this tool. It is good. It serves my purpose. But wouldn't it be
better to include the bandwidth-limiting capabilities in Xen itself? I am
not sure about this. Currently I am just browsing through the source code.
What do you think?
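The tc(8) approach Wei suggests can be sketched as follows, run in Dom0; the interface name vif1.0 is illustrative, so substitute the actual backend vif for your domain:

```shell
# Attach a token-bucket filter to the vif's root qdisc to cap traffic
# that Dom0 transmits on the vif (i.e. traffic toward the guest).
tc qdisc add dev vif1.0 root tbf rate 10mbit burst 32kbit latency 400ms

# Inspect, adjust, or remove the limit at any time, without touching
# the guest:
tc qdisc show dev vif1.0
tc qdisc change dev vif1.0 root tbf rate 5mbit burst 32kbit latency 400ms
tc qdisc del dev vif1.0 root
```

Since these commands only touch the Dom0 side of the interface, they can be applied and changed while the guest keeps running, which is the dynamic behaviour asked about above.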

I have seen the function "set_qos_algorithm_type" and the parameters
(qos/algorithm type, qos/algorithm params, qos/supported algorithms) in the
vif class. Would they be useful? Are they available only for Xen Enterprise?



> Last but not least, patches are always welcomed. ;-)
>
>
> Wei.
>
>


-- 
Rohit S Damkondwar
B.Tech Computer Engineering
CoEP
MyBlog <http://www.rohitsdamkondwar.wordpress.com>

--14dae93409f77ce5e604d1e4dcf3--


--===============6190198305537326003==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6190198305537326003==--


From xen-devel-bounces@lists.xen.org Fri Dec 28 07:56:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 07:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToUnU-0007vL-8G; Fri, 28 Dec 2012 07:56:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1ToUnS-0007vG-Cj
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 07:56:02 +0000
Received: from [85.158.137.99:14524] by server-14.bemta-3.messagelabs.com id
	5D/68-27443-1905DD05; Fri, 28 Dec 2012 07:56:01 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-2.tower-217.messagelabs.com!1356681358!20788834!1
X-Originating-IP: [209.85.216.42]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_TEST_2,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16781 invoked from network); 28 Dec 2012 07:55:59 -0000
Received: from mail-qa0-f42.google.com (HELO mail-qa0-f42.google.com)
	(209.85.216.42)
	by server-2.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 07:55:59 -0000
Received: by mail-qa0-f42.google.com with SMTP id hg5so9511996qab.1
	for <xen-devel@lists.xen.org>; Thu, 27 Dec 2012 23:55:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=74o9kssAg3wikW3G5CfPCXOB+1EKKFKKWg4UVwtTwUg=;
	b=Z/AVuockoSBcmI+AADvfGvx2AXb25y6tLuGN1he2iYYDFsw2sxh7DZrA3R/l3+4dxm
	z6p3IJM3zBE75lPv1bh6tpgJW2OlFocwqI+MmeSDLyQgeVgfOAfiWpVnzhKgJlisepDS
	GDvZSWrvbvBocxi+3gz7SsRkYtteOJWoaRhxM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:sender:x-originating-ip:in-reply-to:references:from
	:date:x-google-sender-auth:message-id:subject:to:cc:content-type
	:x-gm-message-state;
	bh=74o9kssAg3wikW3G5CfPCXOB+1EKKFKKWg4UVwtTwUg=;
	b=O6VQayjI76cixfFrc45UGnSAEJA43jMyYA3IjHC7e3OXVlyMCLB4VL+IkfxdxMqC8p
	EmQN84jMW6UWlBJQe2VT35M7DVuUeUYeEGRXU0o7sC5lnh0Zp/KK2ThybgYPSh6gxWZ/
	J33LUh2EbsAYmgFI+47FnaycCDSM6EspMlSBLHJj2y7QMp36zkWGwjGsV/uT9VctIy4K
	QC53ScnQ+uM6vnco5VQpCUORJFlgxWAPjCdOgLitqshLDQQrX2KDxyd0mPzt4fMyXGAH
	T/iaWcPXmm88/sof9ZYK72sM/52pXCog0I/aO+xGG3xavH7GZ/ClOPVvsIbLUiui0J35
	j87Q==
Received: by 10.224.183.75 with SMTP id cf11mr15383612qab.82.1356681357736;
	Thu, 27 Dec 2012 23:55:57 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.96.200 with HTTP; Thu, 27 Dec 2012 23:55:42 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <50DCA2790200007800092135@nat28.tlf.novell.com>
References: <CACaajQt5FEMVdX5j7ASmDrJXJzespuAVzXpGsxgLONxyXmD6Zg@mail.gmail.com>
	<50DCA2790200007800092135@nat28.tlf.novell.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 28 Dec 2012 11:55:42 +0400
X-Google-Sender-Auth: TTydImH0x1DMXUlMO9941mEL_oM
Message-ID: <CACaajQvxq-CL14dWaw=8AEiZWNw20XdvjuX2dzQYpN+Qs1EtvQ@mail.gmail.com>
To: Jan Beulich <jbeulich@suse.com>
X-Gm-Message-State: ALoCoQmAKCIqAMmNPQxcWDEQhZwMmnC9zbFJdn3Z8sbD2+X/IXamMRSVQNVIJ/zo3oGJRdZwYdip
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] blktap2 and blktap2-new
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ok. Thanks!

2012/12/27 Jan Beulich <jbeulich@suse.com>:
>>>> Vasiliy Tolstov <v.tolstov@selfip.ru> 12/25/12 7:46 AM >>>
>>Hello. I found that the openSUSE kernel-xen has two blktap2 modules: one
>>blktap2 and the other blktap2-new. What is the difference, and why are two
>>modules needed (blktap2-new is created by SUSE/openSUSE patches)?
>
> If you looked at the sources, you certainly saw the comments at the top of
> the respective patches - the two drivers come from different sources, but
> are supposed to fulfill the same function.
>
> Also, for the future I'd recommend asking openSUSE related questions on
> openSUSE mailing lists.
>
> Jan
>



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 08:22:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 08:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToVCz-0000N7-Sx; Fri, 28 Dec 2012 08:22:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1ToVCx-0000N2-12
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 08:22:24 +0000
Received: from [85.158.138.51:27499] by server-6.bemta-3.messagelabs.com id
	17/BF-12154-EB65DD05; Fri, 28 Dec 2012 08:22:22 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1356682938!22550061!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14876 invoked from network); 28 Dec 2012 08:22:18 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 08:22:18 -0000
Received: by mail-ee0-f50.google.com with SMTP id b45so5169360eek.23
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Dec 2012 00:22:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=2Uibw8eyuRFTg4hB6x+Ys7glNyYd+Lw7vxmNy2fd0vA=;
	b=MrhseFkN8PQnfSmXorsp6LdgSMMhJfPzTYlDo/8amJeUHSJabccreQJlbtRxhxFmYH
	McgrlPk5nL32nH+AxGmXZKyHL6J+yoTot6KdPpryiXSf78SVXhAbaVeaiXiEIp75dTkp
	3zyrfhZU7wq1YJ9qp6lBWxSikPmA5sf88aXudH96YT3d2XWrtucIPqaBbsDhsLWnnYVy
	TOOI0YR7EGMl8QaG1vBaSbKJeQjIvyxS4d9K86Sina537aQAQK9bVh49ga8yM2xbUCH8
	ntYBILGZ0GkWl2mNmHq7gLqZeiEFT+Z/WKk6Q9wBjO0pWGLU0feuCYc/86WdoKp0vP1n
	M/dw==
MIME-Version: 1.0
Received: by 10.14.225.194 with SMTP id z42mr84802387eep.22.1356682938218;
	Fri, 28 Dec 2012 00:22:18 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Fri, 28 Dec 2012 00:22:17 -0800 (PST)
Date: Fri, 28 Dec 2012 16:22:17 +0800
Message-ID: <CA+ePHTDo4743dodupmoHJGsuGqJCk83PJ1bcv-PwxbcdgiHq3Q@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7b66f24b9af9a904d1e55c43
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] patch for listing and reading files from
 qcow2-formated image file (for xen-4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b66f24b9af9a904d1e55c43
Content-Type: multipart/alternative; boundary=047d7b66f24b9af9a604d1e55c41

--047d7b66f24b9af9a604d1e55c41
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi,
    The final effect is as follows:

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ qemu-img-xen cat -f /1/boot.ini ~/vm-check.img
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /noexecute=optin /fastdetect

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ qemu-img-xen ls -l -d /1/ ~/vm-check.img
[name           size(bytes)   dir?   date         create-time]
AUTOEXEC.BAT    0             file   2010-12-22   17:30:37
boot.ini        211           file   2010-12-23   01:24:41
bootfont.bin    322730        file   2004-11-23   20:00:00

As shown above, the patch adds two sub-commands to qemu-img-xen: cat and ls.

For details of the patch, please check the attachment.
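As background for readers skimming the attachment, the general shape of such an extension is a new entry in a qemu-img-style command table plus a handler function. The sketch below is a minimal standalone illustration of that dispatch pattern only; all names in it are hypothetical and are not taken from the attached patch.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical command-table entry, loosely modelled on the dispatch
 * style qemu-img uses; names here are illustrative only. */
typedef struct {
    const char *name;
    int (*handler)(int argc, char **argv);
} img_cmd_t;

static int cmd_cat(int argc, char **argv)
{
    /* A real sub-command would open the image, walk the guest
     * filesystem read-only and stream the named file to stdout. */
    printf("cat: %s\n", argc > 0 ? argv[0] : "(no file)");
    return 0;
}

static int cmd_ls(int argc, char **argv)
{
    /* A real sub-command would enumerate directory entries. */
    printf("ls: %s\n", argc > 0 ? argv[0] : "/");
    return 0;
}

static const img_cmd_t img_cmds[] = {
    { "cat", cmd_cat },
    { "ls",  cmd_ls  },
    { NULL,  NULL    },
};

/* Look up a sub-command by name and run it; returns -1 if unknown. */
static int run_img_cmd(const char *name, int argc, char **argv)
{
    const img_cmd_t *c;
    for (c = img_cmds; c->name != NULL; c++) {
        if (strcmp(c->name, name) == 0)
            return c->handler(argc, argv);
    }
    return -1;
}
```

The actual patch wires the handlers into the existing qemu-img-xen command parsing rather than a separate table like this one.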

--047d7b66f24b9af9a604d1e55c41--
--047d7b66f24b9af9a904d1e55c43
Content-Type: application/octet-stream; name="qemu-imgfs-for-qcow2.patch"
Content-Disposition: attachment; filename="qemu-imgfs-for-qcow2.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hb91vt6j0

ZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11
LXhlbi9kZWJ1Zy5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2RlYnVnLmMKLS0t
IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2RlYnVnLmMJMTk3MC0wMS0wMSAwNzow
MDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVu
L2RlYnVnLmMJMjAxMi0xMi0yOCAxNjowMjo0MC45OTk5MzM5MjUgKzA4MDAKQEAgLTAsMCArMSwx
ODIgQEAKKyNpbmNsdWRlPHRpbWUuaD4KKyNpbmNsdWRlPHN5cy9zdGF0Lmg+CisjaW5jbHVkZTxz
dGRhcmcuaD4KKyNpbmNsdWRlPGZjbnRsLmg+CisjaW5jbHVkZSJkZWJ1Zy5oIgorI2luY2x1ZGUg
PHVuaXN0ZC5oPgorI2luY2x1ZGUgPHN0cmluZy5oPgorCisjZGVmaW5lIEtCKHgpICAgICgoeCkq
MTAyNCkKKworc3RhdGljIGludCBkYmdfdGVybSA9IDAsIGRiZ19maWxlID0gMCwgbG9nX2RheSA9
IDA7CitzdGF0aWMgRklMRSogZnBfbG9nID0gTlVMTDsKK3N0YXRpYyBjaGFyIGRpclsxMjhdPXsw
LH0sIGZpbGVuYW1lWzE2MF07CitzdGF0aWMgdm9pZCBpbml0X2ZpbGVfcGF0aCh2b2lkKTsKK3N0
YXRpYyBjaGFyIHByaW50YnVmWzEwMjRdPXt9OworaW50IG1rZGlyX3JlY3Vyc2l2ZShjaGFyKiBw
YXRoKTsKKworCit2b2lkIHByaW50X2Vycm9yKGNoYXIqIGZpbGUsIGNoYXIqIGZ1bmN0aW9uLCBp
bnQgbGluZSwgY29uc3QgY2hhciAqZm10LCAuLi4pCit7CisgIHZhX2xpc3QgYXJnczsKKyAgaW50
IGk7CisKKyAgaWYoICFkYmdfdGVybSAmJiAhZGJnX2ZpbGUgKQorICAgIHJldHVybjsKKworICB2
YV9zdGFydChhcmdzLCBmbXQpOworICBpPXZzcHJpbnRmKCBwcmludGJ1ZiwgZm10LCBhcmdzICk7
CisgIHByaW50YnVmW2ldID0gMDsKKyAgdmFfZW5kKGFyZ3MpOworCisgIGlmKCBkYmdfdGVybSAp
CisgICAgeworICAgICAgcHJpbnRmKCJbJXNdJXMoJWQpOlxuJXNcbiIsIGZpbGUsIGZ1bmN0aW9u
LCBsaW5lLCBwcmludGJ1Zik7CisgICAgfQorCisgIGlmKCBkYmdfZmlsZSApCisgICAgeworICAg
ICAgdGltZV90IHQgPSB0aW1lKCBOVUxMICk7CisgICAgICBzdHJ1Y3QgdG0qIHRtMSA9IGxvY2Fs
dGltZSgmdCk7CisgICAgICBpZiggIXRtMSApIHJldHVybjsKKyAgICAgIC8vaWYoIHRtMS0+dG1f
bWRheSAhPSBsb2dfZGF5ICkKKyAgICAgIHsKKwkvL2luaXRfZmlsZV9wYXRoKCk7CisgICAgICB9
CisgICAgICBjaGFyIHRtcFsxNl07CisgICAgICBzdHJmdGltZSggdG1wLCAxNSwgIiVYIiwgdG0x
ICk7CisgICAgICBmcHJpbnRmKCBmcF9sb2csICIlcyBbJXNdJXMoJWQpOiAlc1xuIiwgdG1wLCBm
aWxlLCBmdW5jdGlvbiwgbGluZSwgcHJpbnRidWYpOworICAgICAgZmZsdXNoKCBmcF9sb2cgKTsK
KyAgICB9Cit9CisKK3N0YXRpYyBjaGFyKiBoZXhfc3RyKHVuc2lnbmVkIGNoYXIgKmJ1ZiwgaW50
IGxlbiwgY2hhciogb3V0c3RyICkKK3sKKworICBjb25zdCBjaGFyICpzZXQgPSAiMDEyMzQ1Njc4
OWFiY2RlZiI7CisgIGNoYXIgKnRtcDsKKyAgdW5zaWduZWQgY2hhciAqZW5kOworICBpZiAobGVu
ID4gMTAyNCkKKyAgICBsZW4gPSAxMDI0OworICBlbmQgPSBidWYgKyBsZW47CisgIHRtcCA9ICZv
dXRzdHJbMF07CisgIHdoaWxlIChidWYgPCBlbmQpCisgICAgeworICAgICAgKnRtcCsrID0gc2V0
WyAoKmJ1ZikgPj4gNCBdOworICAgICAgKnRtcCsrID0gc2V0WyAoKmJ1ZikgJiAweEYgXTsKKyAg
ICAgICp0bXArKyA9ICcgJzsKKyAgICAgIGJ1ZiArKzsKKyAgICB9CisgICp0bXAgPSAnXDAnOwor
ICByZXR1cm4gb3V0c3RyOworfQorCit2b2lkIGhleF9kdW1wKCB1bnNpZ25lZCBjaGFyICogYnVm
LCBpbnQgbGVuICkKK3sKKyAgY2hhciBzdHJbS0IoNCldOworICBpZiggZGJnX3Rlcm0gKQorICAg
IHB1dHMoIGhleF9zdHIoIGJ1ZiwgbGVuLCBzdHIgKSApOworICBpZiggZGJnX2ZpbGUgKXsKKyAg
ICBmcHV0cyggaGV4X3N0ciggYnVmLCBsZW4sIHN0ciApLCBmcF9sb2cgKTsKKyAgICBmcHJpbnRm
KCBmcF9sb2csICJcbiIgKTsKKyAgICBmZmx1c2goIGZwX2xvZyApOworICB9CisgIC8vZnByaW50
Ziggc3RkZXJyLCBoZXhfc3RyKCBidWYsIGxlbiApICk7Cit9CisKK3ZvaWQgZGVidWdfdGVybV9v
bigpCit7CisgIGRiZ190ZXJtID0gMTsKK30KKwordm9pZCBkZWJ1Z190ZXJtX29mZigpCit7Cisg
IGRiZ190ZXJtID0gMDsKK30KKworCitpbnQgbWtkaXJfcmVjdXJzaXZlKCBjaGFyKiBwYXRoICkK
K3sKKyAgY2hhciAqcDsKKworICBpZiggYWNjZXNzKCBwYXRoLCAwICkgPT0gMCApCisgICAgcmV0
dXJuIDA7CisKKyAgZm9yKCBwPXBhdGg7ICpwOyBwKysgKQorICAgIHsKKyAgICAgIGlmKCBwPnBh
dGggJiYgKnAgPT0gJy8nICkKKwl7CisJICAqcCA9IDA7CisJICBpZiggYWNjZXNzKCBwYXRoLCAw
ICkgIT0gMCApCisJICAgIHsKKyNpZmRlZiBfX1dJTjMyX18KKwkgICAgICBta2RpciggcGF0aCAp
OworI2Vsc2UKKwkgICAgICBpZiggbWtkaXIoIHBhdGgsIFNfSVJXWFUgKSAhPSAwICkKKwkJcmV0
dXJuIC0xOworI2VuZGlmCisJICAgIH0KKwkgICpwID0gJy8nOworCX0KKyAgICB9CisjaWZkZWYg
X19XSU4zMl9fCisgIHJldHVybiBta2RpciggcGF0aCApOworI2Vsc2UKKyAgcmV0dXJuIG1rZGly
KCBwYXRoLCBTX0lSV1hVICk7CisjZW5kaWYKK30KKwordm9pZCBpbml0X2ZpbGVfcGF0aCgpCit7
CisgIGNoYXIgdG1wWzY0XTsKKyAgdGltZV90IHQgPSB0aW1lKCBOVUxMICk7CisgIHN0cnVjdCB0
bSogdG0xID0gbG9jYWx0aW1lKCZ0KTsKKyAgCisgIGlmKCAhdG0xICkKKyAgICB7CisgICAgICBw
ZXJyb3IoImRlYnVnLmMgaW5pdF9maWxlX3BhdGg6IEVSUk9SIEdFVFRJTkcgU1lTVEVNIFRJTUUu
Iik7CisgICAgfQorICBsb2dfZGF5ID0gdG0xLT50bV9tZGF5OworICBzdHJmdGltZSggdG1wLCA2
NCwgIi8lWS0lbS0lZC50eHQiLCB0bTEgKTsKKyAgCisgIGlmKCBhY2Nlc3MoIGRpciwgMCApIT0w
ICkKKyAgICB7CisgICAgICAobWtkaXJfcmVjdXJzaXZlKCBkaXIgKTwwKSA/IHBlcnJvcigibWtk
aXJfcmVjdXJzaXZlIGZhaWwhIVxuIikgOiAwOworICAgIH0KKyAgc3RyY3B5KCBmaWxlbmFtZSwg
ZGlyICk7CisgIHN0cmNhdCggZmlsZW5hbWUsIHRtcCApOworICBpZiggZnBfbG9nICkKKyAgICBm
Y2xvc2UoIGZwX2xvZyApOworICBmcF9sb2cgPSBmb3BlbiggZmlsZW5hbWUsICJ3IiApOworICBp
ZihmcF9sb2cpCisgICAgeworICAgICAgZnByaW50ZihmcF9sb2csIj09PT09PT09PT09PT09PT09
PT09PT1MT0cgIFNUQVJUPT09PT09PT09PT09PT09PT09PT09PT09XG4iKTsKKyAgICAgIGZjbG9z
ZShmcF9sb2cpOyAgZnBfbG9nPU5VTEw7CisgICAgICBmcF9sb2cgPSBmb3BlbiggZmlsZW5hbWUs
ICJhKyIgKTsKKyAgICAgIE5VTEw9PWZwX2xvZyA/IHByaW50ZigiaW5pdF9maWxlX3BhdGgoKTpm
b3BlbihhKykgZmFpbFxuIikgOiAwOworICAgIH0KK30KKwordm9pZCBkZWJ1Z19maWxlX29uKGNo
YXIgKnBhdGgpCit7CisgIGlmKCBkYmdfZmlsZSApCisgICAgcmV0dXJuOworICBkZWJ1Z19zZXRf
ZGlyKHBhdGgpOworICBpbml0X2ZpbGVfcGF0aCgpOworICBkYmdfZmlsZSA9IDE7Cit9CisKK3Zv
aWQgZGVidWdfZmlsZV9vZmYoKQoreworICBpZiggIWRiZ19maWxlICkKKyAgICByZXR1cm47Cisg
IGRiZ19maWxlID0gMDsKKyAgaWYoIGZwX2xvZyApCisgICAgZmNsb3NlKCBmcF9sb2cgKTsKK30K
Kwordm9pZCBkZWJ1Z19zZXRfZGlyKGNoYXIqIHN0cikKK3sKKyAgc3RyY3B5KCBkaXIsIHN0ciAp
OworfQorCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZGVidWcuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9kZWJ1
Zy5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9kZWJ1Zy5oCTE5NzAtMDEt
MDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1x
ZW11LXhlbi9kZWJ1Zy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDAwOTM0MzI3ICswODAwCkBAIC0w
LDAgKzEsMzQgQEAKKyNpZm5kZWYgX0RFQlVHX0gKKyNkZWZpbmUgX0RFQlVHX0gKKworI2luY2x1
ZGUgPHN0ZGlvLmg+CisjaW5jbHVkZSA8ZXJybm8uaD4KKyNpbmNsdWRlIDxhc3NlcnQuaD4KKwor
Ly8jZGVmaW5lIFJFTEVBU0UKKworI2lmbmRlZiBSRUxFQVNFCisjZGVmaW5lIERCRyhhcmdzIC4u
LikgXAorICBwcmludF9lcnJvciggKGNoYXIqKV9fRklMRV9fLCAoY2hhciopX19mdW5jX18sIF9f
TElORV9fLCAjI2FyZ3MgKQorI2Vsc2UKKyNkZWZpbmUgREJHKGFyZ3MgLi4uKQkJCQkgIFwKKyAg
ZG8JCQkJCQkgIFwKKyAgICB7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCitmcHJpbnRmKGxvZ2ZpbGUsIiVzOjpbJXNdOjooJWQpOlxuIiwgICAgICAgICAg
ICAgICAgXAorCShjaGFyKilfX0ZJTEVfXywgKGNoYXIqKV9fZnVuY19fLCBfX0xJTkVfXyk7ICAg
ICAgXAorZnByaW50Zihsb2dmaWxlLCAjI2FyZ3MpOwlmcHJpbnRmKGxvZ2ZpbGUsICJcbiIpOwkg
IFwKK30gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
XAord2hpbGUoMCkKKy8vI2RlZmluZSBEQkcgcHJpbnRmCisjZW5kaWYKKyNkZWZpbmUgTVNHcHJp
bnRmCit2b2lkIHByaW50X2Vycm9yKGNoYXIqIGZpbGUsIGNoYXIqIGZ1bmN0aW9uLCBpbnQgbGlu
ZSwgY29uc3QgY2hhciAqZm10LCAuLi4pOwordm9pZCBoZXhfZHVtcCggdW5zaWduZWQgY2hhciAq
IGJ1ZiwgaW50IGxlbiApOwordm9pZCBkZWJ1Z190ZXJtX29uKHZvaWQpOwordm9pZCBkZWJ1Z190
ZXJtX29mZih2b2lkKTsKK3ZvaWQgZGVidWdfZmlsZV9vbihjaGFyICpwYXRoKTsKK3ZvaWQgZGVi
dWdfZmlsZV9vZmYodm9pZCk7Cit2b2lkIGRlYnVnX3NldF9kaXIoY2hhciogc3RyKTsKKworI2Vu
ZGlmIC8vX0RFQlVHX0gKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1h
L3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZhdC5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUt
eGVuL2ZhdC5jCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9mYXQuYwkxOTcw
LTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZmF0LmMJMjAxMi0xMi0yOCAxNjowMjo0MS4wMDE5MzQ3MDkgKzA4MDAKQEAg
LTAsMCArMSw5MzYgQEAKKy8qIGZhdC5jIC0gRkFUIGZpbGVzeXN0ZW0gKi8KKy8qCisgKiAgR1JV
QiAgLS0gIEdSYW5kIFVuaWZpZWQgQm9vdGxvYWRlcgorICogIENvcHlyaWdodCAoQykgMjAwMCwy
MDAxLDIwMDIsMjAwMywyMDA0LDIwMDUsMjAwNywyMDA4LDIwMDkgIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbiwgSW5jLgorICoKKyAqICBHUlVCIGlzIGZyZWUgc29mdHdhcmU6IHlvdSBjYW4gcmVk
aXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkKKyAqICBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1Ymxpc2hlZCBieQorICogIHRoZSBGcmVl
IFNvZnR3YXJlIEZvdW5kYXRpb24sIGVpdGhlciB2ZXJzaW9uIDMgb2YgdGhlIExpY2Vuc2UsIG9y
CisgKiAgKGF0IHlvdXIgb3B0aW9uKSBhbnkgbGF0ZXIgdmVyc2lvbi4KKyAqCisgKiAgR1JVQiBp
cyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAorICogIGJ1
dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5
IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQ
T1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRl
dGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdO
VSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxvbmcgd2l0aCBHUlVCLiAgSWYgbm90LCBz
ZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+LgorICovCisjaW5jbHVkZSAibWlzYy5o
IgorI2luY2x1ZGUgImZhdC5oIgorI2luY2x1ZGUgImRlYnVnLmgiCisKKworaW50IGdfZXJyID0g
R1JVQl9FUlJfTk9ORTsKK2ludDY0X3Qgc19icGJfYnl0ZXNfcGVyX3NlY3RvcjsKK2ludDY0X3Qg
c19wYXJ0X29mZl9zZWN0b3I7CisKK3N0YXRpYyBpbnQgYmRydl9wcmVhZF9mcm9tX3NlY3Rvcl9v
Zl92b2x1bWUoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIGludDY0X3Qgb2Zmc2V0LAorICAgICAgICAg
ICAgICAgdm9pZCAqYnVmMSwgaW50IGNvdW50MSkKK3sKKyAgaW50NjRfdCBvZmYgPSBzX2JwYl9i
eXRlc19wZXJfc2VjdG9yICogc19wYXJ0X29mZl9zZWN0b3IgKyBvZmZzZXQ7CisgIHJldHVybiBi
ZHJ2X3ByZWFkKGJzLCBvZmYsIGJ1ZjEsIGNvdW50MSk7Cit9CisKKworc3RhdGljIGludAorZmF0
X2xvZzIgKHVuc2lnbmVkIHgpCit7CisgIGludCBpOworCisgIGlmICh4ID09IDApCisgICAgcmV0
dXJuIC0xOworCisgIGZvciAoaSA9IDA7ICh4ICYgMSkgPT0gMDsgaSsrKQorICAgIHggPj49IDE7
CisKKyAgaWYgKHggIT0gMSkKKyAgICByZXR1cm4gLTE7CisKKyAgcmV0dXJuIGk7Cit9CisKKwor
Y2hhciAqCitncnViX2ZhdF9maW5kX2RpciAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBn
cnViX2ZhdF9kYXRhICpkYXRhLAorCQkgICBjb25zdCBjaGFyICpwYXRoLAorCQkgICBpbnQgKCpo
b29rKSAoY29uc3QgY2hhciAqZmlsZW5hbWUsCisJCQkJY29uc3Qgc3RydWN0IGdydWJfZGlyaG9v
a19pbmZvICppbmZvLAorCQkJCXZvaWQgKmNsb3N1cmUpLAorCQkgICB2b2lkICpjbG9zdXJlKTsK
KworCisKK3N0cnVjdCBncnViX2ZhdF9kYXRhICoKK2dydWJfZmF0X21vdW50IChCbG9ja0RyaXZl
clN0YXRlICpicywgdWludDMyX3QgcGFydF9vZmZfc2VjdG9yKQoreworICBzdHJ1Y3QgZ3J1Yl9m
YXRfYnBiIGJwYjsKKyAgc3RydWN0IGdydWJfZmF0X2RhdGEgKmRhdGEgPSAwOworICBncnViX3Vp
bnQzMl90IGZpcnN0X2ZhdCwgbWFnaWM7CisgIGludDY0X3Qgb2ZmX2J5dGVzID0gKGludDY0X3Qp
cGFydF9vZmZfc2VjdG9yIDw8IEdSVUJfRElTS19TRUNUT1JfQklUUzsKKworICBpZiAoISBicykK
KyAgICBnb3RvIGZhaWw7CisKKyAgZGF0YSA9IChzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqKSBtYWxs
b2MgKHNpemVvZiAoKmRhdGEpKTsKKyAgaWYgKCEgZGF0YSkKKyAgICBnb3RvIGZhaWw7CisKKyAg
LyogUmVhZCB0aGUgQlBCLiAgKi8KKyAgaWYgKGJkcnZfcHJlYWQoYnMsIG9mZl9ieXRlcywgJmJw
Yiwgc2l6ZW9mKGJwYikpICE9IHNpemVvZihicGIpKQorICAgIHsKKyAgICAgIERCRygiYmRydl9w
cmVhZCBmYWlsLi4uLiIpOworICAgICAgZ290byBmYWlsOworICAgIH0KKyAgICAKKyAgaWYgKGdy
dWJfc3RybmNtcCgoY29uc3QgY2hhciAqKSBicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQxMl9vcl9m
YXQxNi5mc3R5cGUsCisJCSAgICJGQVQxMiIsIDUpCisgICAgICAmJiBncnViX3N0cm5jbXAoKGNv
bnN0IGNoYXIgKikgYnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MTJfb3JfZmF0MTYuZnN0eXBlLAor
CQkgICAgICAiRkFUMTYiLCA1KQorICAgICAgJiYgZ3J1Yl9zdHJuY21wKChjb25zdCBjaGFyICop
IGJwYi52ZXJzaW9uX3NwZWNpZmljLmZhdDMyLmZzdHlwZSwKKwkJICAgICAgIkZBVDMyIiwgNSkK
KyAgICAgICkKKyAgICB7CisgICAgICAKKyAgICAgIERCRygiZmFpbCBoZXJlLS0+Z3J1Yl9zdHJu
Y21wLi4uLi4uIik7CisgICAgICBnb3RvIGZhaWw7CisgICAgfQorCisgIC8qIEdldCB0aGUgc2l6
ZXMgb2YgbG9naWNhbCBzZWN0b3JzIGFuZCBjbHVzdGVycy4gICovCisgIHNfYnBiX2J5dGVzX3Bl
cl9zZWN0b3IgPSAoYnBiLmJ5dGVzX3Blcl9zZWN0b3IpOworICBzX3BhcnRfb2ZmX3NlY3RvciA9
IHBhcnRfb2ZmX3NlY3RvcjsKKyAgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyA9CisgICAgZmF0
X2xvZzIgKGdydWJfbGVfdG9fY3B1MTYgKGJwYi5ieXRlc19wZXJfc2VjdG9yKSk7CisgIERCRygi
YnBiLmJ5dGVzX3Blcl9zZWN0b3I9MHgleCwgbGVfdG9fY3B1MTY9MHgleCIsCisJIGJwYi5ieXRl
c19wZXJfc2VjdG9yLCBncnViX2xlX3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikpOwor
ICAKKworICBpZiAoZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyA8IEdSVUJfRElTS19TRUNUT1Jf
QklUUykKKyAgeworICAgIERCRygiZmFpbCBoZXJlLS0+bG9naWNhbF9zZWN0b3JfYml0cyIpOyAK
KyAgICBnb3RvIGZhaWw7CisgIH0KKyAgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyAtPSBHUlVC
X0RJU0tfU0VDVE9SX0JJVFM7CisKKyAgREJHKCJicGIuc2VjdG9yc19wZXJfY2x1c3Rlcj0ldSIs
IGJwYi5zZWN0b3JzX3Blcl9jbHVzdGVyKTsKKyAgZGF0YS0+Y2x1c3Rlcl9iaXRzID0gZmF0X2xv
ZzIgKGJwYi5zZWN0b3JzX3Blcl9jbHVzdGVyKTsKKyAgaWYgKGRhdGEtPmNsdXN0ZXJfYml0cyA8
IDApCisgICAgeworICAgICAgREJHKCJmYWlsIGhlcmUtLT5jbHVzdGVyX2JpdHMuLi4uLi5saW5l
WyV1XSIsIF9fTElORV9fKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorICBkYXRhLT5jbHVz
dGVyX2JpdHMgKz0gZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0czsKKworICAvKiBHZXQgaW5mb3Jt
YXRpb24gYWJvdXQgRkFUcy4gICovCisgIERCRygiYnBiLm51bV9yZXNlcnZlZF9zZWN0b3JzPSV1
LCIKKyAgICAgICJsZV90b19jcHUxNj0ldSIsCisgICAgICBicGIubnVtX3Jlc2VydmVkX3NlY3Rv
cnMsCisgICAgICBncnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2VydmVkX3NlY3RvcnMpKTsK
KyAgZGF0YS0+ZmF0X3NlY3RvciA9IChncnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2VydmVk
X3NlY3RvcnMpCisJCSAgICAgIDw8IGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBEQkco
ImRhdGEtPmZhdF9zZWN0b3I9JXUsIHBhcnRfb2ZmX3NlY3Rvcj0ldSIsCisgICAgICBkYXRhLT5m
YXRfc2VjdG9yLCBwYXJ0X29mZl9zZWN0b3IpOworICBpZiAoZGF0YS0+ZmF0X3NlY3RvciA9PSAw
KQorICAgIHsKKyAgICAgIERCRygiZmFpbCBoZXJlLS0+ZmF0X3NlY3Rvci4uLi4uLiIpOyAKKyAg
ICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGRhdGEtPnNlY3RvcnNfcGVyX2ZhdCA9ICgoYnBiLnNl
Y3RvcnNfcGVyX2ZhdF8xNgorCQkJICAgID8gZ3J1Yl9sZV90b19jcHUxNiAoYnBiLnNlY3RvcnNf
cGVyX2ZhdF8xNikKKwkJCSAgICA6IGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNp
ZmljLmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMikpCisJCQkgICA8PCBkYXRhLT5sb2dpY2FsX3Nl
Y3Rvcl9iaXRzKTsKKyAgREJHKCJicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5zZWN0b3JzX3Bl
cl9mYXRfMzI9JXVcbiIKKwkgImdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNpZmlj
LmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMik9JXUiLAorCSBicGIudmVyc2lvbl9zcGVjaWZpYy5m
YXQzMi5zZWN0b3JzX3Blcl9mYXRfMzIsCisJIGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9u
X3NwZWNpZmljLmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMikpOworICBpZiAoZGF0YS0+c2VjdG9y
c19wZXJfZmF0ID09IDApCisgICAgZ290byBmYWlsOworCisgIC8qIEdldCB0aGUgbnVtYmVyIG9m
IHNlY3RvcnMgaW4gdGhpcyB2b2x1bWUuICAqLworICBkYXRhLT5udW1fc2VjdG9ycyA9ICgoYnBi
Lm51bV90b3RhbF9zZWN0b3JzXzE2CisJCQk/IGdydWJfbGVfdG9fY3B1MTYgKGJwYi5udW1fdG90
YWxfc2VjdG9yc18xNikKKwkJCTogZ3J1Yl9sZV90b19jcHUzMiAoYnBiLm51bV90b3RhbF9zZWN0
b3JzXzMyKSkKKwkJICAgICAgIDw8IGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBpZiAo
ZGF0YS0+bnVtX3NlY3RvcnMgPT0gMCkKKyAgICB7CisgICAgICBEQkcoImZhaWwgaGVyZS0tPm51
bV9zZWN0b3JzLi4uLi4uIik7IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgLyogR2V0IGlu
Zm9ybWF0aW9uIGFib3V0IHRoZSByb290IGRpcmVjdG9yeS4gICovCisgIGlmIChicGIubnVtX2Zh
dHMgPT0gMCkKKyAgICB7CisgICAgICBEQkcoImZhaWwgaGVyZS0tPm51bV9mYXRzLi4uLi4uIik7
IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgZGF0YS0+cm9vdF9zZWN0b3IgPSBkYXRhLT5m
YXRfc2VjdG9yICsgYnBiLm51bV9mYXRzICogZGF0YS0+c2VjdG9yc19wZXJfZmF0OworICBkYXRh
LT5udW1fcm9vdF9zZWN0b3JzCisgICAgPSAoKCgoZ3J1Yl91aW50MzJfdCkgZ3J1Yl9sZV90b19j
cHUxNiAoYnBiLm51bV9yb290X2VudHJpZXMpCisJICogR1JVQl9GQVRfRElSX0VOVFJZX1NJWkUK
KwkgKyBncnViX2xlX3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikgLSAxKQorCT4+IChk
YXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzICsgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKSkKKyAgICAg
ICA8PCAoZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cykpOworICAvL2luIGZhdDMyIDogcm9vdCBp
cyBub3QgaW5jbHVkZWQgaW4gZmlsZSBjbHVzdGVyPz8KKyAgZGF0YS0+Y2x1c3Rlcl9zZWN0b3Ig
PSBkYXRhLT5yb290X3NlY3RvciArIGRhdGEtPm51bV9yb290X3NlY3RvcnM7CisgIGRhdGEtPm51
bV9jbHVzdGVycyA9ICgoKGRhdGEtPm51bV9zZWN0b3JzIC0gZGF0YS0+Y2x1c3Rlcl9zZWN0b3Ip
CisJCQkgPj4gKGRhdGEtPmNsdXN0ZXJfYml0cyArIGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMp
KQorCQkJKyAyKTsKKworICBpZiAoZGF0YS0+bnVtX2NsdXN0ZXJzIDw9IDIpCisgICAgeworICAg
ICAgREJHKCJmYWlsIGhlcmUtLT5udW1fY2x1c3RlcnMuLi4uLi4iKTsgCisgICAgICBnb3RvIGZh
aWw7CisgICAgfQorICBpZiAoISBicGIuc2VjdG9yc19wZXJfZmF0XzE2KQorICAgIHsKKyAgICAg
IC8qIEZBVDMyLiAgKi8KKyAgICAgIGdydWJfdWludDE2X3QgZmxhZ3MgPSBncnViX2xlX3RvX2Nw
dTE2IChicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5leHRlbmRlZF9mbGFncyk7CisKKyAgICAg
IGRhdGEtPnJvb3RfY2x1c3RlciA9IGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNp
ZmljLmZhdDMyLnJvb3RfY2x1c3Rlcik7CisgICAgICBkYXRhLT5mYXRfc2l6ZSA9IDMyOworICAg
ICAgZGF0YS0+Y2x1c3Rlcl9lb2ZfbWFyayA9IDB4MGZmZmZmZjg7CisKKyAgICAgIGlmIChmbGFn
cyAmIDB4ODApCisJeworCSAgLyogR2V0IGFuIGFjdGl2ZSBGQVQuICAqLworCSAgdW5zaWduZWQg
YWN0aXZlX2ZhdCA9IGZsYWdzICYgMHhmOworCisJICBpZiAoYWN0aXZlX2ZhdCA+IGJwYi5udW1f
ZmF0cykKKwkgICAgZ290byBmYWlsOworCisJICBkYXRhLT5mYXRfc2VjdG9yICs9IGFjdGl2ZV9m
YXQgKiBkYXRhLT5zZWN0b3JzX3Blcl9mYXQ7CisJfQorCisgICAgICBpZiAoYnBiLm51bV9yb290
X2VudHJpZXMgIT0gMCB8fCBicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5mc192ZXJzaW9uICE9
IDApCisJZ290byBmYWlsOworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIC8qIEZBVDEyIG9y
IEZBVDE2LiAgKi8KKyAgICAgIGRhdGEtPnJvb3RfY2x1c3RlciA9IH4wVTsKKworICAgICAgaWYg
KGRhdGEtPm51bV9jbHVzdGVycyA8PSA0MDg1ICsgMikKKwl7CisJICAvKiBGQVQxMi4gICovCisJ
ICBkYXRhLT5mYXRfc2l6ZSA9IDEyOworCSAgZGF0YS0+Y2x1c3Rlcl9lb2ZfbWFyayA9IDB4MGZm
ODsKKwl9CisgICAgICBlbHNlCisJeworCSAgLyogRkFUMTYuICAqLworCSAgZGF0YS0+ZmF0X3Np
emUgPSAxNjsKKwkgIGRhdGEtPmNsdXN0ZXJfZW9mX21hcmsgPSAweGZmZjg7CisJfQorICAgIH0K
KworICAvKiBNb3JlIHNhbml0eSBjaGVja3MuICAqLworICBpZiAoZGF0YS0+bnVtX3NlY3RvcnMg
PD0gZGF0YS0+ZmF0X3NlY3RvcikKKyAgICBnb3RvIGZhaWw7CisKKyAgCisgIERCRygiZGF0YS0+
ZmF0X3NlY3Rvcj0ldSwgZGF0YS0+c2VjdG9yc19wZXJfZmF0PSV1IiwKKwkgZGF0YS0+ZmF0X3Nl
Y3RvciwgZGF0YS0+c2VjdG9yc19wZXJfZmF0KTsKKyAgaWYgKGJkcnZfcHJlYWRfZnJvbV9zZWN0
b3Jfb2Zfdm9sdW1lKGJzLAorCQkgZGF0YS0+ZmF0X3NlY3RvciA8PCBHUlVCX0RJU0tfU0VDVE9S
X0JJVFMsCisJCSAmZmlyc3RfZmF0LAorCQkgc2l6ZW9mIChmaXJzdF9mYXQpKSAhPSBzaXplb2Yo
Zmlyc3RfZmF0KSkKKyAgICB7CisgICAgICBEQkcoImZhaWwgaGVyZS0tPmJkcnZfcHJlYWQuLi4u
Li4iKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorCisgIGZpcnN0X2ZhdCA9IGdydWJfbGVf
dG9fY3B1MzIgKGZpcnN0X2ZhdCk7CisKKyAgaWYgKGRhdGEtPmZhdF9zaXplID09IDMyKQorICAg
IHsKKyAgICAgIGZpcnN0X2ZhdCAmPSAweDBmZmZmZmZmOworICAgICAgbWFnaWMgPSAweDBmZmZm
ZjAwOworICAgIH0KKyAgZWxzZSBpZiAoZGF0YS0+ZmF0X3NpemUgPT0gMTYpCisgICAgeworICAg
ICAgZmlyc3RfZmF0ICY9IDB4MDAwMGZmZmY7CisgICAgICBtYWdpYyA9IDB4ZmYwMDsKKyAgICB9
CisgIGVsc2UKKyAgICB7CisgICAgICBmaXJzdF9mYXQgJj0gMHgwMDAwMGZmZjsKKyAgICAgIG1h
Z2ljID0gMHgwZjAwOworICAgIH0KKworICAvKiBTZXJpYWwgbnVtYmVyLiAgKi8KKyAgaWYgKGJw
Yi5zZWN0b3JzX3Blcl9mYXRfMTYpCisgICAgZGF0YS0+dXVpZCA9IGdydWJfbGVfdG9fY3B1MzIg
KGJwYi52ZXJzaW9uX3NwZWNpZmljLmZhdDEyX29yX2ZhdDE2Lm51bV9zZXJpYWwpOworICBlbHNl
CisgICAgZGF0YS0+dXVpZCA9IGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNpZmlj
LmZhdDMyLm51bV9zZXJpYWwpOworCisgIC8qIElnbm9yZSB0aGUgM3JkIGJpdCwgYmVjYXVzZSBz
b21lIEJJT1NlcyBhc3NpZ25zIDB4RjAgdG8gdGhlIG1lZGlhCisgICAgIGRlc2NyaXB0b3IsIGV2
ZW4gaWYgaXQgaXMgYSBzby1jYWxsZWQgc3VwZXJmbG9wcHkgKGUuZy4gYW4gVVNCIGtleSkuCisg
ICAgIFRoZSBjaGVjayBtYXkgYmUgdG9vIHN0cmljdCBmb3IgdGhpcyBraW5kIG9mIHN0dXBpZCBC
SU9TZXMsIGFzCisgICAgIHRoZXkgb3ZlcndyaXRlIHRoZSBtZWRpYSBkZXNjcmlwdG9yLiAgKi8K
KyAgaWYgKChmaXJzdF9mYXQgfCAweDgpICE9IChtYWdpYyB8IGJwYi5tZWRpYSB8IDB4OCkpCisg
ICAgeworICAgICAgREJHKCJmYWlsIGhlcmUtLT5maXJzdF9mYXQ9MHgleCwgbWFnaWM9MHgleCIs
CisJICAgICBmaXJzdF9mYXQsIG1hZ2ljKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorICAv
KiBTdGFydCBmcm9tIHRoZSByb290IGRpcmVjdG9yeS4gICovCisgIGRhdGEtPmZpbGVfY2x1c3Rl
ciA9IGRhdGEtPnJvb3RfY2x1c3RlcjsKKyAgZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtID0gfjBVOwor
ICBkYXRhLT5hdHRyID0gR1JVQl9GQVRfQVRUUl9ESVJFQ1RPUlk7CisgIERCRygiZGF0YS0+Zmls
ZV9jbHVzdGVyPSV1IFxuZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtPSV1IFxuZGF0YS0+YXR0cj0weCV4
XG4iCisJICJkYXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzPSV1XG4iCisJICJkYXRhLT5jbHVzdGVy
X2JpdHM9JXUiLAorCSBkYXRhLT5maWxlX2NsdXN0ZXIsIGRhdGEtPmN1cl9jbHVzdGVyX251bSwg
ZGF0YS0+YXR0ciwKKwkgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cywgZGF0YS0+Y2x1c3Rlcl9i
aXRzKTsKKyAgcmV0dXJuIGRhdGE7CisKKyBmYWlsOgorCisgIGZyZWUgKGRhdGEpOworICBwcmlu
dGYoIm5vdCBhIEZBVCBmaWxlc3lzdGVtIVxuIik7CisgIHJldHVybiAwOworfQorCisKKworLy+0
087EvP61xNa4tqjGq9LGb2Zmc2V019a92rSmtsHIoWxlbtfWvdq1xMr9vt21vWJ1ZgorLy/OxLz+
08lkYXRhLT5maWxlX2NsdXN0ZXLWuLaoCisvL2RhdGEtPmZpbGVfY2x1c3Rlcta4tqjBy87EvP61
xMbwyry02LrFCisvL8SsyM9kYXRhLT5maWxlX2NsdXN0ZXI9MqOstPqx7bj5xL/CvAorc3RhdGlj
IGdydWJfc3NpemVfdAorZ3J1Yl9mYXRfcmVhZF9kYXRhIChCbG9ja0RyaXZlclN0YXRlICpicywg
c3RydWN0IGdydWJfZmF0X2RhdGEgKmRhdGEsCisJCSAgICB2b2lkICgqcmVhZF9ob29rKSAoZ3J1
Yl9kaXNrX2FkZHJfdCBzZWN0b3IsCisJCQkJICAgICAgIHVuc2lnbmVkIG9mZnNldCwgdW5zaWdu
ZWQgbGVuZ3RoLAorCQkJCSAgICAgICB2b2lkICpjbG9zdXJlKSwKKwkJICAgIHZvaWQgKmNsb3N1
cmUsCisJCSAgICBncnViX29mZl90IG9mZnNldCwgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYp
Cit7CisgIGdydWJfc2l6ZV90IHNpemU7CisgIGdydWJfdWludDMyX3QgbG9naWNhbF9jbHVzdGVy
OworICB1bnNpZ25lZCBsb2dpY2FsX2NsdXN0ZXJfYml0czsKKyAgZ3J1Yl9zc2l6ZV90IHJldCA9
IDA7CisgIHVuc2lnbmVkIGxvbmcgc2VjdG9yOworICB1aW50NjRfdCBvZmZfYnl0ZXMgPSAwOyAK
KyAgLyogVGhpcyBpcyBhIHNwZWNpYWwgY2FzZS4gRkFUMTIgYW5kIEZBVDE2IGRvZXNuJ3QgaGF2
ZSB0aGUgcm9vdCBkaXJlY3RvcnkKKyAgICAgaW4gY2x1c3RlcnMuICAqLworICBpZiAoZGF0YS0+
ZmlsZV9jbHVzdGVyID09IH4wVSkKKyAgICB7CisgICAgICBzaXplID0gKGRhdGEtPm51bV9yb290
X3NlY3RvcnMgPDwgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKSAtIG9mZnNldDsKKyAgICAgIGlmIChz
aXplID4gbGVuKQorCXNpemUgPSBsZW47CisKKyAgICAgIG9mZl9ieXRlcyA9ICgodWludDY0X3Qp
ZGF0YS0+cm9vdF9zZWN0b3IgPDwgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKSArIG9mZnNldDsKKyAg
ICAgIGlmKGJkcnZfcHJlYWRfZnJvbV9zZWN0b3Jfb2Zfdm9sdW1lKGJzLCBvZmZfYnl0ZXMsIGJ1
Ziwgc2l6ZSApICE9IHNpemUpIAorCXJldHVybiAtMTsKKworICAgICAgcmV0dXJuIHNpemU7Cisg
ICAgfQorCisgIC8qIENhbGN1bGF0ZSB0aGUgbG9naWNhbCBjbHVzdGVyIG51bWJlciBhbmQgb2Zm
c2V0LiAgKi8KKyAgbG9naWNhbF9jbHVzdGVyX2JpdHMgPSAoZGF0YS0+Y2x1c3Rlcl9iaXRzCisJ
CQkgICsgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cworCQkJICArIEdSVUJfRElTS19TRUNUT1Jf
QklUUyk7CisgIGxvZ2ljYWxfY2x1c3RlciA9IG9mZnNldCA+PiBsb2dpY2FsX2NsdXN0ZXJfYml0
czsgICAgLy93aGljaCBjbHVzdGVyIHRvIHJlYWQgCisgIG9mZnNldCAmPSAoMSA8PCBsb2dpY2Fs
X2NsdXN0ZXJfYml0cykgLSAxOyAgICAgICAgICAgLy9tb2QKKworICBpZiAobG9naWNhbF9jbHVz
dGVyIDwgZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtKSAgIC8vCisgICAgeworICAgICAgZGF0YS0+Y3Vy
X2NsdXN0ZXJfbnVtID0gMDsKKyAgICAgIGRhdGEtPmN1cl9jbHVzdGVyID0gZGF0YS0+ZmlsZV9j
bHVzdGVyOyAvLyC12jK49mZhdLHtz+6/qsq8vMfCvMS/wry6zc7EvP4KKyAgICB9CisKKyAgd2hp
bGUgKGxlbikKKyAgICB7CisgICAgICB3aGlsZSAobG9naWNhbF9jbHVzdGVyID4gZGF0YS0+Y3Vy
X2NsdXN0ZXJfbnVtKQorCXsKKwkgIC8qIEZpbmQgbmV4dCBjbHVzdGVyLiAgKi8KKwkgIGdydWJf
dWludDMyX3QgbmV4dF9jbHVzdGVyOworCSAgdW5zaWduZWQgbG9uZyBmYXRfb2Zmc2V0OworCisJ
ICBzd2l0Y2ggKGRhdGEtPmZhdF9zaXplKQorCSAgICB7CisJICAgIGNhc2UgMzI6CisJICAgICAg
ZmF0X29mZnNldCA9IGRhdGEtPmN1cl9jbHVzdGVyIDw8IDI7CisJICAgICAgYnJlYWs7CisJICAg
IGNhc2UgMTY6CisJICAgICAgZmF0X29mZnNldCA9IGRhdGEtPmN1cl9jbHVzdGVyIDw8IDE7CisJ
ICAgICAgYnJlYWs7CisJICAgIGRlZmF1bHQ6CisJICAgICAgLyogY2FzZSAxMjogKi8KKwkgICAg
ICBmYXRfb2Zmc2V0ID0gZGF0YS0+Y3VyX2NsdXN0ZXIgKyAoZGF0YS0+Y3VyX2NsdXN0ZXIgPj4g
MSk7CisJICAgICAgYnJlYWs7CisJICAgIH0KKworCSAgLyogUmVhZCB0aGUgRkFULiAgKi8KKwkg
IGludCBsZW4gPSAoZGF0YS0+ZmF0X3NpemUgKyA3KSA+PiAzOworCSAgdWludDY0X3Qgb2ZmX2J5
dGVzID0gICgodWludDY0X3QpZGF0YS0+ZmF0X3NlY3RvciA8PCBHUlVCX0RJU0tfU0VDVE9SX0JJ
VFMpICsgZmF0X29mZnNldDsgCisJICBpZiAoYmRydl9wcmVhZF9mcm9tX3NlY3Rvcl9vZl92b2x1
bWUgKGJzLCBvZmZfYnl0ZXMsIAorCQkJICAoY2hhciAqKSAmbmV4dF9jbHVzdGVyLCAKKwkJCSAg
bGVuKSAhPSBsZW4pICAgLy+002ZhdLHttsHIobTYusUKKwkgICAgcmV0dXJuIC0xOworCisJICBu
ZXh0X2NsdXN0ZXIgPSBncnViX2xlX3RvX2NwdTMyIChuZXh0X2NsdXN0ZXIpOworCSAgc3dpdGNo
IChkYXRhLT5mYXRfc2l6ZSkKKwkgICAgeworCSAgICBjYXNlIDE2OgorCSAgICAgIG5leHRfY2x1
c3RlciAmPSAweEZGRkY7CisJICAgICAgYnJlYWs7CisJICAgIGNhc2UgMTI6CisJICAgICAgaWYg
KGRhdGEtPmN1cl9jbHVzdGVyICYgMSkKKwkJbmV4dF9jbHVzdGVyID4+PSA0OworCisJICAgICAg
bmV4dF9jbHVzdGVyICY9IDB4MEZGRjsKKwkgICAgICBicmVhazsKKwkgICAgfQorCisJICBEQkcg
KCJmYXRfc2l6ZT0lZCwgbmV4dF9jbHVzdGVyPSV1IiwKKwkJCWRhdGEtPmZhdF9zaXplLCBuZXh0
X2NsdXN0ZXIpOworCisJICAvKiBDaGVjayB0aGUgZW5kLiAgKi8KKwkgIGlmIChuZXh0X2NsdXN0
ZXIgPj0gZGF0YS0+Y2x1c3Rlcl9lb2ZfbWFyaykKKwkgICAgcmV0dXJuIHJldDsKKworCSAgaWYg
KG5leHRfY2x1c3RlciA8IDIgfHwgbmV4dF9jbHVzdGVyID49IGRhdGEtPm51bV9jbHVzdGVycykK
KwkgICAgeworCSAgICAgIERCRygiaW52YWxpZCBjbHVzdGVyICV1Li4uLi4uLi4uLi4uLi4uLiIs
CisJCQkgIG5leHRfY2x1c3Rlcik7CisJICAgICAgcmV0dXJuIC0xOworCSAgICB9CisKKwkgIGRh
dGEtPmN1cl9jbHVzdGVyID0gbmV4dF9jbHVzdGVyOworCSAgZGF0YS0+Y3VyX2NsdXN0ZXJfbnVt
Kys7CisJfQorCisgICAgICAvKiBSZWFkIHRoZSBkYXRhIGhlcmUuICAqLworICAgICAgLy/C37yt
tNjL+bbU06a1xL74ttTJyMf4CisgICAgICBzZWN0b3IgPSAoZGF0YS0+Y2x1c3Rlcl9zZWN0b3IK
KwkJKyAoKGRhdGEtPmN1cl9jbHVzdGVyIC0gMikKKwkJICAgPDwgKGRhdGEtPmNsdXN0ZXJfYml0
cyArIGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpKSk7IAorICAgICAgLy+++LbUycjH+NbQyKW1
9Mar0sa687XE19a92sr9CisgICAgICBzaXplID0gKDEgPDwgbG9naWNhbF9jbHVzdGVyX2JpdHMp
IC0gb2Zmc2V0OworICAgICAgaWYgKHNpemUgPiBsZW4pCisJc2l6ZSA9IGxlbjsKKworICAgICAg
Ly9kaXNrLT5yZWFkX2hvb2sgPSByZWFkX2hvb2s7CisgICAgICAvL2Rpc2stPmNsb3N1cmUgPSBj
bG9zdXJlOworICAgICAgaW50NjRfdCBvZmZfYnl0ZXMgPSAoKHVpbnQ2NF90KXNlY3RvciA8PCBH
UlVCX0RJU0tfU0VDVE9SX0JJVFMpICsgb2Zmc2V0OworICAgICAgLy9kaXNrLT5yZWFkX2hvb2sg
PSAwOworICAgICAgaWYgKGJkcnZfcHJlYWRfZnJvbV9zZWN0b3Jfb2Zfdm9sdW1lIChicywgb2Zm
X2J5dGVzLCBidWYsIHNpemUpICE9IHNpemUpCisJcmV0dXJuIC0xOworCisgICAgICBsZW4gLT0g
c2l6ZTsKKyAgICAgIGJ1ZiArPSBzaXplOworICAgICAgcmV0ICs9IHNpemU7CisgICAgICBsb2dp
Y2FsX2NsdXN0ZXIrKzsKKyAgICAgIG9mZnNldCA9IDA7ICAvL9LUuvO2wbXEtrzKx83q1fvJyMf4
CisgICAgfQorCisgIHJldHVybiByZXQ7Cit9CisKKy8vsenA+tPJZGF0YS0+ZmlsZV9jbHVzdGVy
1ri2qLXExL/CvAorc3RhdGljIGludAorZ3J1Yl9mYXRfaXRlcmF0ZV9kaXIgKEJsb2NrRHJpdmVy
U3RhdGUgKmJzLCBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSwKKwkJICAgICAgaW50ICgqaG9v
aykgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJCSAgIHN0cnVjdCBncnViX2ZhdF9kaXJfZW50
cnkgKmRpciwKKwkJCQkgICB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgdm9pZCAqY2xvc3VyZSkK
K3sKKyAgc3RydWN0IGdydWJfZmF0X2Rpcl9lbnRyeSBkaXI7CisgIGNoYXIgKmZpbGVuYW1lLCAq
ZmlsZXAgPSAwOworICBncnViX3VpbnQxNl90ICp1bmlidWY7CisgIGludCBzbG90ID0gLTEsIHNs
b3RzID0gLTE7CisgIGludCBjaGVja3N1bSA9IC0xOworICBncnViX3NzaXplX3Qgb2Zmc2V0ID0g
LXNpemVvZihkaXIpOworCisgIGlmICghIChkYXRhLT5hdHRyICYgR1JVQl9GQVRfQVRUUl9ESVJF
Q1RPUlkpKQorICAgIHJldHVybiBwcmludGYoIm5vdCBhIGRpcmVjdG9yeS4uLi4uLlxuIik7CisK
KyAgLyogQWxsb2NhdGUgc3BhY2UgZW5vdWdoIHRvIGhvbGQgYSBsb25nIG5hbWUuICAqLworICBm
aWxlbmFtZSA9IChjaGFyKiltYWxsb2MgKDB4NDAgKiAxMyAqIDQgKyAxKTsKKyAgdW5pYnVmID0g
KGdydWJfdWludDE2X3QgKikgbWFsbG9jICgweDQwICogMTMgKiAyKTsKKyAgY2hhciAqZ2JuYW1l
ID0gKGNoYXIqKW1hbGxvYygweDQwICogMTMgKiAyKTsKKyAgaWYgKCEgZmlsZW5hbWUgfHwgISB1
bmlidWYgfHwgIWdibmFtZSkKKyAgICB7CisgICAgICBmcmVlKGdibmFtZSk7CisgICAgICBmcmVl
IChmaWxlbmFtZSk7CisgICAgICBmcmVlICh1bmlidWYpOworICAgICAgcGVycm9yKCJpdGVyYXRl
OiBtYWxsb2MgZmFpbGVkIS4uLlxuIik7CisgICAgICByZXR1cm4gLTE7CisgICAgfQorCisgIAor
ICBpbnQgY291bnQgPSAwOworICB3aGlsZSAoMSkKKyAgICB7CisgICAgICB1bnNpZ25lZCBpOwor
CisgICAgICAvKiBBZGp1c3QgdGhlIG9mZnNldC4gICovCisgICAgICBvZmZzZXQgKz0gc2l6ZW9m
IChkaXIpOworICAgICAgREJHKCJcblslZF1vZmZzZXQ9JXUsIgorCSAgICAgImRhdGEtPmN1cl9j
bHVzdGVyX251bT0ldSxkYXRhLT5jdXJfY2x1c3Rlcj0ldSIsIAorCSAgICAgY291bnQrMSwgb2Zm
c2V0LCAKKwkgICAgIGRhdGEtPmN1cl9jbHVzdGVyX251bSwgZGF0YS0+Y3VyX2NsdXN0ZXIpOwor
ICAgICAgLyogUmVhZCBhIGRpcmVjdG9yeSBlbnRyeS4gICovCisgICAgICAvLzB4MLHtyr6/1cS/
wrwKKyAgICAgIGlmICgoZ3J1Yl9mYXRfcmVhZF9kYXRhIChicywgZGF0YSwgMCwgMCwKKwkJCSAg
ICAgICBvZmZzZXQsIHNpemVvZiAoZGlyKSwgKGNoYXIgKikgJmRpcikKKwkgICAhPSBzaXplb2Yg
KGRpcikgfHwgZGlyLm5hbWVbMF0gPT0gMCkpCisJeworCSAgREJHKCJicmVhay4uLmRpci5uYW1l
WzBdPT0lZCIsIGRpci5uYW1lWzBdKTsKKwkgIGJyZWFrOworCX0KKyAgICAgIC8qIEhhbmRsZSBs
b25nIG5hbWUgZW50cmllcy4gICovCisgICAgICBpZiAoZGlyLmF0dHIgPT0gR1JVQl9GQVRfQVRU
Ul9MT05HX05BTUUpCisJeworCSAgREJHKCJsb25nIG5hbWUuLi4iKTsKKwkgIHN0cnVjdCBncnVi
X2ZhdF9sb25nX25hbWVfZW50cnkgKmxvbmdfbmFtZQorCSAgICA9IChzdHJ1Y3QgZ3J1Yl9mYXRf
bG9uZ19uYW1lX2VudHJ5ICopICZkaXI7CisJICBncnViX3VpbnQ4X3QgaWQgPSBsb25nX25hbWUt
PmlkOworCisJICBpZiAoaWQgJiAweDQwKSAgLy90aGUgbGFzdCBpdGVtCisJICAgIHsKKwkgICAg
ICBpZCAmPSAweDNmOyAgIC8vaW5kZXggb3Igb3JkaW5hbCBudW1iZXIgIDF+MzEKKwkgICAgICBz
bG90cyA9IHNsb3QgPSBpZDsKKwkgICAgICBjaGVja3N1bSA9IGxvbmdfbmFtZS0+Y2hlY2tzdW07
CisJICAgICAgREJHKCJ0aGUgbGFzdCBvcmRpbmFsIG51bT0lZCEhISIsIGlkKTsKKwkgICAgfQor
CisJICBpZiAoaWQgIT0gc2xvdCB8fCBzbG90ID09IDAgfHwgY2hlY2tzdW0gIT0gbG9uZ19uYW1l
LT5jaGVja3N1bSkKKwkgICAgeworCSAgICAgIERCRygibm90IHZhbGlkIG9yZGluYWwgbnVtYmVy
ICxpZ25vcmUuLi5jb250aW51ZSIpOworCSAgICAgIGNoZWNrc3VtID0gLTE7CisJICAgICAgY29u
dGludWU7CisJICAgIH0KKworCSAgc2xvdC0tOworCSAgbWVtY3B5ICh1bmlidWYgKyBzbG90ICog
MTMsIGxvbmdfbmFtZS0+bmFtZTEsIDUgKiAyKTsKKwkgIG1lbWNweSAodW5pYnVmICsgc2xvdCAq
IDEzICsgNSwgbG9uZ19uYW1lLT5uYW1lMiwgNiAqIDIpOworCSAgbWVtY3B5ICh1bmlidWYgKyBz
bG90ICogMTMgKyAxMSwgbG9uZ19uYW1lLT5uYW1lMywgMiAqIDIpOworCSAgREJHKCJtZW1jcHku
Li5jb250aW51ZSIpOworCSAgY29udGludWU7CisJfQorCisgICAgICAKKyAgICAgIC8qIENoZWNr
IGlmIHRoaXMgZW50cnkgaXMgdmFsaWQuICAqLworICAgICAgLy9veGU1se3KvtLRvq2xu8m+s/0K
KyAgICAgIGlmIChkaXIubmFtZVswXSA9PSAweGU1IHx8IChkaXIuYXR0ciAmIH5HUlVCX0ZBVF9B
VFRSX1ZBTElEKSkKKwl7CisJICBEQkcoImRpci5uYW1lWzBdPTB4JXgsIGRpci5hdHRyPTB4JXgg
bm90IHZhbGlkLi4uY29udGludWUiLCAKKwkJIGRpci5uYW1lWzBdLCBkaXIuYXR0cik7CisJICBj
b250aW51ZTsKKwl9CisKKyAgICAgIERCRygiY2hlY2tzdW09JWQsIHNsb3Q9JWQiLCBjaGVja3N1
bSwgc2xvdCk7CisgICAgICAvKiBUaGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgSmFwYW5lc2UuICAq
LworICAgICAgaWYgKGRpci5uYW1lWzBdID09IDB4MDUpCisJZGlyLm5hbWVbMF0gPSAweGU1Owor
CisgICAgICBpZiAoY2hlY2tzdW0gIT0gLTEgJiYgc2xvdCA9PSAwKQorCXsKKwkgIERCRygiY2hl
Y2tzdW1pbmciKTsKKwkgIGdydWJfdWludDhfdCBzdW07CisKKwkgIGZvciAoc3VtID0gMCwgaSA9
IDA7IGkgPCBzaXplb2YgKGRpci5uYW1lKTsgaSsrKQorCSAgICBzdW0gPSAoKHN1bSA+PiAxKSB8
IChzdW0gPDwgNykpICsgZGlyLm5hbWVbaV07CisKKwkgIGlmIChzdW0gPT0gY2hlY2tzdW0pCisJ
ICAgIHsvL7Okw/ux7c/uuvPD5r30vdO2zMP7se3P7qOs0enWpLPJuabU8takw/fV5tX9ysezpMP7
19YKKwkgICAgICBpbnQgdTsKKworCSAgICAgIGZvciAodSA9IDA7IHUgPCBzbG90cyAqIDEzOyB1
KyspCisJCXVuaWJ1Zlt1XSA9IGdydWJfbGVfdG9fY3B1MTYgKHVuaWJ1Zlt1XSk7CisKKwkgICAg
ICAqZ3J1Yl91dGYxNl90b191dGY4ICgoZ3J1Yl91aW50OF90ICopIGZpbGVuYW1lLCB1bmlidWYs
CisJCQkJICAgc2xvdHMgKiAxMykgPSAnXDAnOworCisJICAgICAgCisJICAgICAgY2hlY2tzdW0g
PSAtMTsKKwkgICAgICBmb3IgKGkgPSAwOyBpIDwgc2l6ZW9mIChkaXIubmFtZSk7IGkrKykKKwkJ
REJHKCIweCV4ICAiLCBkaXIubmFtZVtpXSk7CisJICAgICAgCisJICAgICAgdTJnKGZpbGVuYW1l
LCBzdHJsZW4oZmlsZW5hbWUpLCBnYm5hbWUsIDB4NDAgKiAxMyAqIDIpOworCSAgICAgIERCRygi
XG5kaXIubmFtZT0lcywgZmlsZW5hbWU9JXMsIGRpci5hdHRyPTB4JXgsIgorCQkgICAgICJzdW09
PWNoZWNrc3VtLi4uY29udGludWUiLAorCQkgICAgIGRpci5uYW1lLCBnYm5hbWUsIGRpci5hdHRy
KTsKKwkgICAgICAKKwkgICAgICBjb3VudCsrOworCSAgICAgIAorCSAgICAgIGlmIChob29rICYm
IGhvb2sgKGdibmFtZSwgJmRpciwgY2xvc3VyZSkpCisJICAgICAgICBicmVhazsKKwkgICAgICAK
KwkgICAgICBjb250aW51ZTsKKwkgICAgfQorCisJICBjaGVja3N1bSA9IC0xOworCX0KKworICAg
ICAgLy+688PmtcS0psDt1eu21LfH1ebKtbOkw/u6zdXmyrW2zMP7CisgICAgICAvKiBDb252ZXJ0
IHRoZSA4LjMgZmlsZSBuYW1lLiAgKi8KKyAgICAgIC8vyKW19LbMw/u1xL/VuPGjrMiruMTOqtCh
0LQKKyAgICAgIGZpbGVwID0gZmlsZW5hbWU7CisgICAgICBpZiAoZGlyLmF0dHIgJiBHUlVCX0ZB
VF9BVFRSX1ZPTFVNRV9JRCkKKwl7CisJICBEQkcoIlZPTFVNRSIpOworCSAgZm9yIChpID0gMDsg
aSA8IHNpemVvZiAoZGlyLm5hbWUpICYmIGRpci5uYW1lW2ldCisJCSAmJiAhIGdydWJfaXNzcGFj
ZSAoZGlyLm5hbWVbaV0pOyBpKyspCisJICAgICpmaWxlcCsrID0gZGlyLm5hbWVbaV07CisJfQor
ICAgICAgZWxzZQorCXsKKwkgIGZvciAoaSA9IDA7IGkgPCA4ICYmIGRpci5uYW1lW2ldICYmICEg
Z3J1Yl9pc3NwYWNlIChkaXIubmFtZVtpXSk7IGkrKykKKwkgICAgKmZpbGVwKysgPSBncnViX3Rv
bG93ZXIgKGRpci5uYW1lW2ldKTsKKworCSAgKmZpbGVwID0gJy4nOworCisJICBmb3IgKGkgPSA4
OyBpIDwgMTEgJiYgZGlyLm5hbWVbaV0gJiYgISBncnViX2lzc3BhY2UgKGRpci5uYW1lW2ldKTsg
aSsrKQorCSAgICAqKytmaWxlcCA9IGdydWJfdG9sb3dlciAoZGlyLm5hbWVbaV0pOworCisJICBp
ZiAoKmZpbGVwICE9ICcuJykKKwkgICAgZmlsZXArKzsKKwl9CisgICAgICAqZmlsZXAgPSAnXDAn
OworICAgICAgCisgICAgICAvL2ZvciAoaSA9IDA7IGkgPCBzaXplb2YgKGRpci5uYW1lKTsgaSsr
KQorICAgICAgLy8JREJHKCIweCV4ICAiLCBkaXIubmFtZVtpXSk7CisgICAgICBEQkcoIlxuZGly
Lm5hbWU9JXMsIGZpbGVuYW1lPaG+JXOhvywgZGlyLmF0dHI9MHgleCwiCisJICAgICAiLi4ubmV4
dCB3aGlsZSIsCisJICAgICBkaXIubmFtZSwgZmlsZW5hbWUsIGRpci5hdHRyKTsKKyAgICAgIGNv
dW50Kys7CisgICAgICAvKmlmKHN0cmNtcChmaWxlbmFtZSwgIi4iKSAmJiBzdHJjbXAoZmlsZW5h
bWUsICIuLiIpKQorCXsKKwkgIERCRygiez09PT09PT09PT09PT09PiIpOworCSAgc3RydWN0IGdy
dWJfZmF0X2RhdGEgKmRhdGEyID0gTlVMTDsKKwkgIGRhdGEyID0gKHN0cnVjdCBncnViX2ZhdF9k
YXRhKiltYWxsb2Moc2l6ZW9mKCpkYXRhKSk7CisJICBtZW1jcHkoZGF0YTIsIGRhdGEsIHNpemVv
ZigqZGF0YSkpOworCSAgZGF0YTItPmF0dHIgPSBkaXIuYXR0cjsKKwkgIGRhdGEyLT5maWxlX3Np
emUgPSBncnViX2xlX3RvX2NwdTMyIChkaXIuZmlsZV9zaXplKTsKKwkgIGRhdGEyLT5maWxlX2Ns
dXN0ZXIgPSAoKGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2hpZ2gpIDw8IDE2
KQorCQkJCSB8IGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2xvdykpOworCSAg
ZGF0YTItPmN1cl9jbHVzdGVyX251bSA9IH4wVTsKKwkgIChncnViX2ZhdF9pdGVyYXRlX2Rpcihi
cywgZGF0YTIsIE5VTEwsIE5VTEwpIDwgMCkgPyBEQkcoImVycm9yICEhISEhISIpIDogMDsKKwkg
IGZyZWUoZGF0YTIpOworCSAgREJHKCI8PT09PT09PT09PT09PT09PT09PX0iKTsKKwl9CisgICAg
ICAqLworICAgICAgaWYgKGhvb2sgJiYgaG9vayAoZmlsZW5hbWUsICZkaXIsIGNsb3N1cmUpKQor
ICAgICAgICBicmVhazsKKyAgICB9CisKKyAgZnJlZShnYm5hbWUpOworICBmcmVlIChmaWxlbmFt
ZSk7CisgIGZyZWUgKHVuaWJ1Zik7CisKKyAgcmV0dXJuIDA7Cit9CisKKworCisvL7SruPhncnVi
X2ZhdF9maW5kX2hvb2u1xLLOyv1jbG9zdXJlCitzdHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xv
c3VyZQoreworICBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YTsKKyAgaW50ICgqaG9vaykgKGNv
bnN0IGNoYXIgKmZpbGVuYW1lLAorCSAgICAgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2lu
Zm8gKmluZm8sCisJICAgICAgIHZvaWQgKmNsb3N1cmUpOworICB2b2lkICpjbG9zdXJlOworICBj
aGFyICpkaXJuYW1lOworICBpbnQgY2FsbF9ob29rOworICBpbnQgZm91bmQ7Cit9OworCisKK3N0
YXRpYyBpbnQKK2dydWJfZmF0X2ZpbmRfZGlyX2hvb2sgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLCBz
dHJ1Y3QgZ3J1Yl9mYXRfZGlyX2VudHJ5ICpkaXIsCisJCQl2b2lkICpjbG9zdXJlKQoreworICBz
dHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZSAqYyA9IGNsb3N1cmU7CisgIHN0cnVjdCBn
cnViX2Rpcmhvb2tfaW5mbyBpbmZvOworICBtZW1zZXQgKCZpbmZvLCAwLCBzaXplb2YgKGluZm8p
KTsKKworICBpbmZvLmRpciA9ICEhIChkaXItPmF0dHIgJiBHUlVCX0ZBVF9BVFRSX0RJUkVDVE9S
WSk7CisgIGluZm8uY2FzZV9pbnNlbnNpdGl2ZSA9IDE7CisgIGluZm8ubXRpbWVzZXQgPSAoZGly
LT5jX2RhdGUgfHwgZGlyLT5jX3RpbWUpOworICBpbmZvLm10aW1lID0gKCgoZ3J1Yl91aW50MzJf
dClkaXItPmNfZGF0ZSA8PCAxNikgfCAoZGlyLT5jX3RpbWUpKTsKKyAgaW5mby5maWxlc2l6ZSA9
IGRpci0+ZmlsZV9zaXplOworICAKKyAgREJHKCJ0YXJnZXQgZmlsZSChviVzob89PT09PT0iLCBj
LT5kaXJuYW1lKTsKKyAgaWYgKGRpci0+YXR0ciAmIEdSVUJfRkFUX0FUVFJfVk9MVU1FX0lEKQor
ICAgIHsKKyAgICAgIERCRygidm9sdW1lIGlkICwgaWdub3JlPT09PT09Iik7CisgICAgICByZXR1
cm4gMDsKKyAgICB9CisgIAorICBpZiAoKihjLT5kaXJuYW1lKSA9PSAnXDAnICYmIChjLT5jYWxs
X2hvb2spKQorICAgIHsgLy+08r+qtcTKx8S/wrwgIC94L3BhdGgxL3BhdGgyLworICAgICAgLy+3
tbvYMKOsyMNpdGVyYXRlyrHWu8rHtPLTodDFz6KjrLb4srvNy7P2d2hpbGUKKyAgICAgIGMtPmZv
dW5kID0gMTsKKyAgICAgIGlmKCEoYy0+ZGF0YS0+YXR0ciAmIEdSVUJfRkFUX0FUVFJfRElSRUNU
T1JZKSkKKwl7CisJICBwcmludGYoIml0J3Mgbm90IGEgZGlyZWN0b3J5IVxuIik7CisJfQorICAg
ICAgREJHKCJsaXN0IHRoZSBkaXIgob4lc6G/PT09PT09PT09PT0iLAorCSAgKChzdHJ1Y3QgbHNf
Y3RybCopYy0+Y2xvc3VyZSktPmRpcm5hbWUpOworICAgICAgcmV0dXJuIGMtPmhvb2sgKGZpbGVu
YW1lLCAmaW5mbywgYy0+Y2xvc3VyZSk7CisgICAgfQorCisgIAorICBpZiAoZ3J1Yl9zdHJjYXNl
Y21wIChjLT5kaXJuYW1lLCBmaWxlbmFtZSkgPT0gMCkKKyAgICB7IC8vtPK/qrXEysfOxLz+IC94
L3BhdGgxL2ZpbGUKKyAgICAgIERCRygiZm91bmQ9PT09PT0iKTsKKyAgICAgIHN0cnVjdCBncnVi
X2ZhdF9kYXRhICpkYXRhID0gYy0+ZGF0YTsKKworICAgICAgYy0+Zm91bmQgPSAxOworICAgICAg
ZGF0YS0+YXR0ciA9IGRpci0+YXR0cjsKKyAgICAgIGRhdGEtPmZpbGVfc2l6ZSA9IGdydWJfbGVf
dG9fY3B1MzIgKGRpci0+ZmlsZV9zaXplKTsKKyAgICAgIGRhdGEtPmZpbGVfY2x1c3RlciA9ICgo
Z3J1Yl9sZV90b19jcHUxNiAoZGlyLT5maXJzdF9jbHVzdGVyX2hpZ2gpIDw8IDE2KQorCQkJICAg
ICAgIHwgZ3J1Yl9sZV90b19jcHUxNiAoZGlyLT5maXJzdF9jbHVzdGVyX2xvdykpOworICAgICAg
ZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtID0gfjBVOworCisgICAgICBpZiAoYy0+Y2FsbF9ob29rKQor
CWMtPmhvb2sgKGZpbGVuYW1lLCAmaW5mbywgYy0+Y2xvc3VyZSk7CisKKyAgICAgIHJldHVybiAx
OworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIERCRygibm90IG1hdGNoPT09PT09Iik7Cisg
ICAgfQorICByZXR1cm4gMDsKK30KKworCisvKiBGaW5kIHRoZSB1bmRlcmx5aW5nIGRpcmVjdG9y
eSBvciBmaWxlIGluIFBBVEggYW5kIHJldHVybiB0aGUKKyAgIG5leHQgcGF0aC4gSWYgdGhlcmUg
aXMgbm8gbmV4dCBwYXRoIG9yIGFuIGVycm9yIG9jY3VycywgcmV0dXJuIE5VTEwuCisgICBJZiBI
T09LIGlzIHNwZWNpZmllZCwgY2FsbCBpdCB3aXRoIGVhY2ggZmlsZSBuYW1lLiAgKi8KKy8v1NrT
yWRhdGHWuLaotcTEv8K8z8Ky6dXS08lwYXRowre+tta4tqi1xM7EvP680LvyzsS8/gorLy/V0rW9
1q66872708kgZ3J1Yl9mYXRfZmluZF9kaXJfaG9va7qvyv20psDto6zG5NbQY2xvc3VyZbLOyv3K
x7nYvPwKK2NoYXIgKgorZ3J1Yl9mYXRfZmluZF9kaXIgKEJsb2NrRHJpdmVyU3RhdGUgKmJzLCBz
dHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSwKKwkJICAgY29uc3QgY2hhciAqcGF0aCwKKwkJICAg
aW50ICgqaG9vaykgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJCWNvbnN0IHN0cnVjdCBncnVi
X2Rpcmhvb2tfaW5mbyAqaW5mbywKKwkJCQl2b2lkICpjbG9zdXJlKSwKKwkJICAgdm9pZCAqY2xv
c3VyZSkKK3sKKyAgY2hhciAqZGlybmFtZSwgKmRpcnA7CisgIHN0cnVjdCBncnViX2ZhdF9maW5k
X2Rpcl9jbG9zdXJlIGM7CisgIERCRygidG8gc2VhcmNoIFslc10uLi5pbiBkYXRhLT5hdHRyPTB4
JXgiLCBwYXRoLCBkYXRhLT5hdHRyKTsKKyAgaWYgKCEgKGRhdGEtPmF0dHIgJiBHUlVCX0ZBVF9B
VFRSX0RJUkVDVE9SWSkpCisgICAgeworICAgICAgcHJpbnRmKCJub3QgYSBkaXJlY3RvcnkuLi4u
Li4uLi4uLlxuIik7CisgICAgICByZXR1cm4gMDsKKyAgICB9CisKKyAgLyogRXh0cmFjdCBhIGRp
cmVjdG9yeSBuYW1lLiAgKi8KKyAgd2hpbGUgKCpwYXRoID09ICcvJykKKyAgICBwYXRoKys7CisK
KyAgZGlycCA9IGdydWJfc3RyY2hyIChwYXRoLCAnLycpOworICBpZiAoZGlycCkKKyAgICB7Cisg
ICAgICB1bnNpZ25lZCBsZW4gPSBkaXJwIC0gcGF0aDsKKworICAgICAgZGlybmFtZSA9IChjaGFy
KiltYWxsb2MgKGxlbiArIDEpOworICAgICAgaWYgKCEgZGlybmFtZSkKKwlyZXR1cm4gMDsKKwor
ICAgICAgbWVtY3B5IChkaXJuYW1lLCBwYXRoLCBsZW4pOworICAgICAgZGlybmFtZVtsZW5dID0g
J1wwJzsKKyAgICB9CisgIGVsc2UKKyAgICB7CisgICAgLyogVGhpcyBpcyBhY3R1YWxseSBhIGZp
bGUuICAqLworICAgICAgZGlybmFtZSA9IGdydWJfc3RyZHVwIChwYXRoKTsKKyAgICB9CisgIERC
Rygic2VhcmNoaW5nIFwiJXNcIj09PT09PSIsIGRpcm5hbWUpOworICBjLmRhdGEgPSBkYXRhOwor
ICBjLmhvb2sgPSBob29rOworICBjLmNsb3N1cmUgPSBjbG9zdXJlOworICBjLmRpcm5hbWUgPWRp
cm5hbWU7CisgIGMuZm91bmQgPSAwOworICBjLmNhbGxfaG9vayA9ICghIGRpcnAgJiYgaG9vayk7
ICAvL9XrttTEv8K8tcRob29rCisgIAorICBpbnQgcmV0ID0gZ3J1Yl9mYXRfaXRlcmF0ZV9kaXIg
KGJzLCBkYXRhLCBncnViX2ZhdF9maW5kX2Rpcl9ob29rLCAmYyk7CisgIGlmKDAgPT0gcmV0ICYm
ICFjLmZvdW5kKQorICAgIHsKKyAgICAgIGdfZXJyID0gR1JVQl9FUlJfTk9UX0ZPVU5EOyAKKyAg
ICAgIHByaW50ZigiZmlsZSBub3QgZm91bmQuLlxuIik7CisgICAgfQorICBlbHNlIGlmKHJldCA8
IDApCisgICAgeworICAgICAgZ19lcnIgPSBHUlVCX0VSUl9VTktOT1dOOworICAgICAgcHJpbnRm
KCJpdGVyYXRlIGVycm9yIVxuIik7CisgICAgfQorICAgIAorICAKKyAgZnJlZSAoZGlybmFtZSk7
CisKKyAgcmV0dXJuIChjLmZvdW5kICYmIDA9PXJldCkgPyBkaXJwIDogMDsKK30KKworCisKKwor
CitncnViX2Vycl90CitncnViX2ZhdF9vcGVuIChncnViX2ZpbGVfdCBmaWxlLCBjb25zdCBjaGFy
ICpuYW1lKQoreworICBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSA9IDA7CisgIGNoYXIgKnAg
PSAoY2hhciAqKSBuYW1lOworCisgIAorICBkYXRhID0gZ3J1Yl9mYXRfbW91bnQgKGZpbGUtPmJz
LCBmaWxlLT5wYXJ0X29mZl9zZWN0b3IpOworICBpZiAoISBkYXRhKQorICAgIHsKKyAgICAgIHBy
aW50ZigiWyVzXTogbW91bnQgZXJyb3IhXG4iLCBuYW1lKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAg
ICB9CisKKyAgaW50IGkgPSAwOworICBkbworICAgIHsKKyAgICAgIHAgPSBncnViX2ZhdF9maW5k
X2RpciAoZmlsZS0+YnMsIGRhdGEsIHAsIDAsIDApOworICAgICAgREJHKCIlZCBjeWNsZSBwYXN0
ob5wYXRoPSVzob8uLi4uLi4uIiwgaSsxLCBwKTsKKyAgICAgIC8vZXJyb3IganVkZ2UuLi4uLi4K
KyAgICB9CisgIHdoaWxlIChwKTsKKworICBEQkcoImV4aXQgd2hpbGU9PT09PT0iKTsKKyAKKyAg
aWYgKChHUlVCX0VSUl9OT05FID09IGdfZXJyKSAKKyAgICAgICYmIChkYXRhLT5hdHRyICYgR1JV
Ql9GQVRfQVRUUl9ESVJFQ1RPUlkpKQorICAgIHsKKyAgICAgIHByaW50ZiAoIlslc106IG5vdCBh
IGZpbGUhXG4iLCBuYW1lKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIAorICBpZihHUlVC
X0VSUl9OT05FID09IGdfZXJyKQorICAgIHsKKyAgICAgIERCRygiZm91bmQ9PT09PT0iKTsKKyAg
ICB9CisgIGVsc2UKKyAgICB7CisgICAgICBwcmludGYoIm5vdCBmb3VuZCBvciBlcnJvciFcbiIp
OworICAgICAgZ290byBmYWlsOworICAgIH0KKworICBEQkcoIjExMTExMTExMTExMTExMTExMTEx
MTExIik7CisgIGZpbGUtPmRhdGEgPSBkYXRhOworICBmaWxlLT5zaXplID0gZGF0YS0+ZmlsZV9z
aXplOworICByZXR1cm4gMDsKKworIGZhaWw6CisgIGZyZWUoZGF0YSk7IAorICBmaWxlLT5kYXRh
ID0gTlVMTDsKKyAgREJHKCIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyIik7CisgIHJldHVybiAxOwor
fQorCisKKyNkZWZpbmUgICAgVElNRV9CSVQgICAgMHhGRkZGCisjZGVmaW5lICAgIFRJTUVfSE9V
Ul9CSVQgICAgMHhGODAwCisjZGVmaW5lICAgIFRJTUVfTUlOVVRFX0JJVCAgICAweDA3RTAKKyNk
ZWZpbmUgICAgVElNRV9TRUNPTkRfQklUICAgIDB4MDAxRgorI2RlZmluZSAgICBEQVRFX0JJVCAg
ICAweEZGRkYwMDAwCisjZGVmaW5lICAgIERBVEVfWUVBUl9CSVQgICAgMHhGRTAwCisjZGVmaW5l
ICAgIERBVEVfTU9OVEhfQklUICAgIDB4MDFFMAorI2RlZmluZSAgICBEQVRFX0RBWV9CSVQgICAg
MHgwMDFGCitzdGF0aWMgIGludCBmaW5kX3RoZW5fbHNfaG9vayhjb25zdCBjaGFyICpmaWxlbmFt
ZSwKKwkJCSAgIGNvbnN0IHN0cnVjdCBncnViX2Rpcmhvb2tfaW5mbyAqaW5mbywgdm9pZCAqY2xv
c3VyZSkKK3sKKyAgc3RydWN0IGxzX2N0cmwqIGN0cmwgPSAoc3RydWN0IGxzX2N0cmwqKWNsb3N1
cmU7CisgIERCRygiZGV0YWlsPSVkIiwgY3RybC0+ZGV0YWlsKTsKKyAgcHJpbnRmKCIlcyIsIGZp
bGVuYW1lKTsKKyAgaWYoIWN0cmwtPmRldGFpbCkKKyAgICB7CisgICAgICBwcmludGYoIlxuIik7
CisgICAgICByZXR1cm4gMDsKKyAgICB9CisgIGVsc2UKKyAgICB7CisgICAgICBwcmludGYoIlx0
Iik7CisgICAgfQorCisKKyAgcHJpbnRmKCIldWJ5dGVzXHQiLCAoaW5mby0+ZmlsZXNpemUpKTsK
KyAgcHJpbnRmKCIlc1x0IiwgKGluZm8tPmRpciA/ICJkaXIiIDogImZpbGUiKSk7CisgIGdydWJf
dWludDE2X3QgdGltZSA9ICgoaW5mby0+bXRpbWUpICYgVElNRV9CSVQpOworICBncnViX3VpbnQx
Nl90IGRhdGUgPSAoKGluZm8tPm10aW1lKSAmIERBVEVfQklUKSA+PiAxNjsKKyAgCisgIHByaW50
ZigiJTA0ZC8lMDJkLyUwMmRcdCIsCisJICgoZGF0ZSAmIERBVEVfWUVBUl9CSVQpID4+IDkpICsg
MTk4MCwKKwkgKGRhdGUgJiBEQVRFX01PTlRIX0JJVCkgPj4gNSwKKwkgKGRhdGUgJiBEQVRFX0RB
WV9CSVQpKTsKKyAgcHJpbnRmKCIlMDJkOiUwMmQ6JTAyZFxuIiwgCisJICh0aW1lICYgVElNRV9I
T1VSX0JJVCkgPj4gMTEsCisJICh0aW1lICYgVElNRV9NSU5VVEVfQklUKSA+PiA1LAorCSB0aW1l
ICYgVElNRV9TRUNPTkRfQklUKSAqIDI7ICAKKyAgCisgIHJldHVybiAwOyAgLy8g1+7W1be1u9i4
+Gl0ZXJhdGUKK30KKworCitncnViX2Vycl90CitncnViX2ZhdF9scyAoZ3J1Yl9maWxlX3QgZmls
ZSwgY29uc3QgY2hhciAqcGF0aCwKKwkgICAgICBpbnQgKCpob29rKSAoY29uc3QgY2hhciAqZmls
ZW5hbWUsCisJCQkgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2luZm8gKmluZm8sIHZvaWQg
KmNsb3N1cmUpLAorCSAgICAgIHZvaWQgKmNsb3N1cmUpCit7CisgIHN0cnVjdCBncnViX2ZhdF9k
YXRhICpkYXRhID0gMDsKKyAgZ3J1Yl9zaXplX3QgbGVuOworICBjaGFyICpkaXJuYW1lID0gMDsK
KyAgY2hhciAqcDsKKyAgCisgIGRhdGEgPSBncnViX2ZhdF9tb3VudCAoZmlsZS0+YnMsIGZpbGUt
PnBhcnRfb2ZmX3NlY3Rvcik7CisgIGlmICghIGRhdGEpCisgICAgZ290byBmYWlsOworCisgIGZp
bGUtPmRhdGEgPSBkYXRhOworICAvKiBNYWtlIHN1cmUgdGhhdCBESVJOQU1FIHRlcm1pbmF0ZXMg
d2l0aCAnLycuICAqLworICBsZW4gPSBzdHJsZW4ocGF0aCk7CisgIGRpcm5hbWUgPSAoY2hhciop
bWFsbG9jIChsZW4gKyAxICsgMSk7CisgIGlmICghIGRpcm5hbWUpCisgICAgZ290byBmYWlsOwor
ICBtZW1jcHkgKGRpcm5hbWUsIHBhdGgsIGxlbik7CisgIHAgPSBkaXJuYW1lICsgbGVuOworICBp
ZiAocGF0aFtsZW4gLSAxXSAhPSAnLycpCisgICAgKnArKyA9ICcvJzsKKyAgKnAgPSAnXDAnOwor
ICBwID0gZGlybmFtZTsKKworICBkbworICAgIHsKKyAgICAgIHAgPSBncnViX2ZhdF9maW5kX2Rp
ciAoZmlsZS0+YnMsIGRhdGEsIHAsIGZpbmRfdGhlbl9sc19ob29rLCBjbG9zdXJlKTsKKyAgICB9
CisgIHdoaWxlIChwICYmIGdfZXJyID09IEdSVUJfRVJSX05PTkUpOworCisgIAorCisgZmFpbDoK
KworICBmcmVlIChkaXJuYW1lKTsKKyAgZnJlZSAoZGF0YSk7ICBmaWxlLT5kYXRhID0gTlVMTDsK
KyAgCisgIHJldHVybiBnX2VycjsKK30KKworCitncnViX2Vycl90IGdydWJfZmF0X2Nsb3NlKGdy
dWJfZmlsZV90IGZpbGUpCit7CisgIGZyZWUoZmlsZS0+ZGF0YSk7CisgIHJldHVybiBnX2VycjsK
K30KKworCitncnViX3NzaXplX3QgZ3J1Yl9mYXRfcmVhZChncnViX2ZpbGVfdCBmaWxlLCBncnVi
X29mZl90IG9mZnNldCwKKwkJCSAgIGdydWJfc2l6ZV90IGxlbiwgY2hhciAqYnVmKQoreworICBy
ZXR1cm4gZ3J1Yl9mYXRfcmVhZF9kYXRhKGZpbGUtPmJzLCBmaWxlLT5kYXRhLCBOVUxMLCBOVUxM
LCBvZmZzZXQsIGxlbiwgYnVmKTsKK30KKworCisKKworCisKKwpkaWZmIC0tZXhjbHVkZT0uc3Zu
IC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZhdC5oIHhlbi00LjEu
Mi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZhdC5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2Vt
dS1xZW11LXhlbi9mYXQuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysg
eGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZmF0LmgJMjAxMi0xMi0yOCAxNjowMjo0
MS4wMDI5MzgwMTkgKzA4MDAKQEAgLTAsMCArMSwxNjAgQEAKKyNpZm5kZWYgRlNfRkFUX0gKKyNk
ZWZpbmUgRlNfRkFUX0gKKworCisjaW5jbHVkZSAiZnMtdHlwZXMuaCIKKyNpbmNsdWRlICJibG9j
a19pbnQuaCIKKyNpbmNsdWRlICJmcy1jb21tLmgiCisjaW5jbHVkZSAiZ3J1Yl9lcnIuaCIKKwor
CisjZGVmaW5lIEdSVUJfRElTS19TRUNUT1JfQklUUyAgICAgIDkKKyNkZWZpbmUgR1JVQl9GQVRf
RElSX0VOVFJZX1NJWkUJMzIKKworI2RlZmluZSBHUlVCX0ZBVF9BVFRSX1JFQURfT05MWQkweDAx
CisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfSElEREVOCTB4MDIKKyNkZWZpbmUgR1JVQl9GQVRfQVRU
Ul9TWVNURU0JMHgwNAorI2RlZmluZSBHUlVCX0ZBVF9BVFRSX1ZPTFVNRV9JRAkweDA4CisjZGVm
aW5lIEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZCTB4MTAKKyNkZWZpbmUgR1JVQl9GQVRfQVRUUl9B
UkNISVZFCTB4MjAKKworI2RlZmluZSBHUlVCX0ZBVF9NQVhGSUxFCTI1NgorCisjZGVmaW5lIEdS
VUJfRkFUX0FUVFJfTE9OR19OQU1FCShHUlVCX0ZBVF9BVFRSX1JFQURfT05MWSBcCisJCQkJIHwg
R1JVQl9GQVRfQVRUUl9ISURERU4gXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfU1lTVEVNIFwKKwkJ
CQkgfCBHUlVCX0ZBVF9BVFRSX1ZPTFVNRV9JRCkKKyNkZWZpbmUgR1JVQl9GQVRfQVRUUl9WQUxJ
RAkoR1JVQl9GQVRfQVRUUl9SRUFEX09OTFkgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfSElEREVO
IFwKKwkJCQkgfCBHUlVCX0ZBVF9BVFRSX1NZU1RFTSBcCisJCQkJIHwgR1JVQl9GQVRfQVRUUl9E
SVJFQ1RPUlkgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfQVJDSElWRSBcCisJCQkJIHwgR1JVQl9G
QVRfQVRUUl9WT0xVTUVfSUQpCisKK3N0cnVjdCBncnViX2ZhdF9icGIKK3sKKyAgZ3J1Yl91aW50
OF90IGptcF9ib290WzNdOworICBncnViX3VpbnQ4X3Qgb2VtX25hbWVbOF07CisgIGdydWJfdWlu
dDE2X3QgYnl0ZXNfcGVyX3NlY3RvcjsKKyAgZ3J1Yl91aW50OF90IHNlY3RvcnNfcGVyX2NsdXN0
ZXI7CisgIGdydWJfdWludDE2X3QgbnVtX3Jlc2VydmVkX3NlY3RvcnM7CisgIGdydWJfdWludDhf
dCBudW1fZmF0czsKKyAgZ3J1Yl91aW50MTZfdCBudW1fcm9vdF9lbnRyaWVzOworICBncnViX3Vp
bnQxNl90IG51bV90b3RhbF9zZWN0b3JzXzE2OworICBncnViX3VpbnQ4X3QgbWVkaWE7CisgIGdy
dWJfdWludDE2X3Qgc2VjdG9yc19wZXJfZmF0XzE2OworICBncnViX3VpbnQxNl90IHNlY3RvcnNf
cGVyX3RyYWNrOworICBncnViX3VpbnQxNl90IG51bV9oZWFkczsKKyAgZ3J1Yl91aW50MzJfdCBu
dW1faGlkZGVuX3NlY3RvcnM7CisgIGdydWJfdWludDMyX3QgbnVtX3RvdGFsX3NlY3RvcnNfMzI7
CisgIHVuaW9uCisgIHsKKyAgICBzdHJ1Y3QKKyAgICB7CisgICAgICBncnViX3VpbnQ4X3QgbnVt
X3BoX2RyaXZlOworICAgICAgZ3J1Yl91aW50OF90IHJlc2VydmVkOworICAgICAgZ3J1Yl91aW50
OF90IGJvb3Rfc2lnOworICAgICAgZ3J1Yl91aW50MzJfdCBudW1fc2VyaWFsOworICAgICAgZ3J1
Yl91aW50OF90IGxhYmVsWzExXTsKKyAgICAgIGdydWJfdWludDhfdCBmc3R5cGVbOF07CisgICAg
fSBfX2F0dHJpYnV0ZV9fICgocGFja2VkKSkgZmF0MTJfb3JfZmF0MTY7CisgICAgc3RydWN0Cisg
ICAgeworICAgICAgZ3J1Yl91aW50MzJfdCBzZWN0b3JzX3Blcl9mYXRfMzI7CisgICAgICBncnVi
X3VpbnQxNl90IGV4dGVuZGVkX2ZsYWdzOworICAgICAgZ3J1Yl91aW50MTZfdCBmc192ZXJzaW9u
OworICAgICAgZ3J1Yl91aW50MzJfdCByb290X2NsdXN0ZXI7CisgICAgICBncnViX3VpbnQxNl90
IGZzX2luZm87CisgICAgICBncnViX3VpbnQxNl90IGJhY2t1cF9ib290X3NlY3RvcjsKKyAgICAg
IGdydWJfdWludDhfdCByZXNlcnZlZFsxMl07CisgICAgICBncnViX3VpbnQ4X3QgbnVtX3BoX2Ry
aXZlOworICAgICAgZ3J1Yl91aW50OF90IHJlc2VydmVkMTsKKyAgICAgIGdydWJfdWludDhfdCBi
b290X3NpZzsKKyAgICAgIGdydWJfdWludDMyX3QgbnVtX3NlcmlhbDsKKyAgICAgIGdydWJfdWlu
dDhfdCBsYWJlbFsxMV07CisgICAgICBncnViX3VpbnQ4X3QgZnN0eXBlWzhdOworICAgIH0gX19h
dHRyaWJ1dGVfXyAoKHBhY2tlZCkpIGZhdDMyOworICB9IF9fYXR0cmlidXRlX18gKChwYWNrZWQp
KSB2ZXJzaW9uX3NwZWNpZmljOworfSBfX2F0dHJpYnV0ZV9fICgocGFja2VkKSk7CisKK3N0cnVj
dCBncnViX2ZhdF9kaXJfZW50cnkKK3sKKyAgZ3J1Yl91aW50OF90IG5hbWVbMTFdOworICBncnVi
X3VpbnQ4X3QgYXR0cjsKKyAgZ3J1Yl91aW50OF90IG50X3Jlc2VydmVkOworICBncnViX3VpbnQ4
X3QgY190aW1lX3RlbnRoOworICBncnViX3VpbnQxNl90IGNfdGltZTsKKyAgZ3J1Yl91aW50MTZf
dCBjX2RhdGU7CisgIGdydWJfdWludDE2X3QgYV9kYXRlOworICBncnViX3VpbnQxNl90IGZpcnN0
X2NsdXN0ZXJfaGlnaDsKKyAgZ3J1Yl91aW50MTZfdCB3X3RpbWU7CisgIGdydWJfdWludDE2X3Qg
d19kYXRlOworICBncnViX3VpbnQxNl90IGZpcnN0X2NsdXN0ZXJfbG93OworICBncnViX3VpbnQz
Ml90IGZpbGVfc2l6ZTsKK30gX19hdHRyaWJ1dGVfXyAoKHBhY2tlZCkpOworCitzdHJ1Y3QgZ3J1
Yl9mYXRfbG9uZ19uYW1lX2VudHJ5Cit7CisgIGdydWJfdWludDhfdCBpZDsKKyAgZ3J1Yl91aW50
MTZfdCBuYW1lMVs1XTsKKyAgZ3J1Yl91aW50OF90IGF0dHI7CisgIGdydWJfdWludDhfdCByZXNl
cnZlZDsKKyAgZ3J1Yl91aW50OF90IGNoZWNrc3VtOworICBncnViX3VpbnQxNl90IG5hbWUyWzZd
OworICBncnViX3VpbnQxNl90IGZpcnN0X2NsdXN0ZXI7CisgIGdydWJfdWludDE2X3QgbmFtZTNb
Ml07Cit9IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKworc3RydWN0IGdydWJfZmF0X2RhdGEK
K3sKKyAgaW50IGxvZ2ljYWxfc2VjdG9yX2JpdHM7CisgIGdydWJfdWludDMyX3QgbnVtX3NlY3Rv
cnM7CisKKyAgZ3J1Yl91aW50MzJfdCBmYXRfc2VjdG9yOworICBncnViX3VpbnQzMl90IHNlY3Rv
cnNfcGVyX2ZhdDsKKyAgaW50IGZhdF9zaXplOworCisgIGdydWJfdWludDMyX3Qgcm9vdF9jbHVz
dGVyOworICBncnViX3VpbnQzMl90IHJvb3Rfc2VjdG9yOworICBncnViX3VpbnQzMl90IG51bV9y
b290X3NlY3RvcnM7CisKKyAgaW50IGNsdXN0ZXJfYml0czsKKyAgZ3J1Yl91aW50MzJfdCBjbHVz
dGVyX2VvZl9tYXJrOworICBncnViX3VpbnQzMl90IGNsdXN0ZXJfc2VjdG9yOworICBncnViX3Vp
bnQzMl90IG51bV9jbHVzdGVyczsKKworICBncnViX3VpbnQ4X3QgYXR0cjsKKyAgZ3J1Yl9zc2l6
ZV90IGZpbGVfc2l6ZTsKKyAgZ3J1Yl91aW50MzJfdCBmaWxlX2NsdXN0ZXI7CisgIGdydWJfdWlu
dDMyX3QgY3VyX2NsdXN0ZXJfbnVtOworICBncnViX3VpbnQzMl90IGN1cl9jbHVzdGVyOworCisg
IGdydWJfdWludDMyX3QgdXVpZDsKK307CisKKworCisKKworCisKK3N0cnVjdCBncnViX2ZhdF9k
YXRhKiAKK2dydWJfZmF0X21vdW50IChCbG9ja0RyaXZlclN0YXRlICpicywgZ3J1Yl91aW50MzJf
dCBwYXJ0X29mZl9zZWN0b3IpOworCitncnViX2Vycl90CitncnViX2ZhdF9vcGVuIChncnViX2Zp
bGVfdCBmaWxlLCBjb25zdCBjaGFyICpuYW1lKTsKKworZ3J1Yl9lcnJfdAorZ3J1Yl9mYXRfbHMg
KGdydWJfZmlsZV90IGZpbGUsIGNvbnN0IGNoYXIgKnBhdGgsCisJICAgICAgaW50ICgqaG9vaykg
KGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJICAgY29uc3Qgc3RydWN0IGdydWJfZGlyaG9va19p
bmZvICppbmZvLAorCQkJICAgdm9pZCAqY2xvc3VyZSksCisJICAgICB2b2lkICpjbG9zdXJlKTsK
KworZ3J1Yl9lcnJfdCBncnViX2ZhdF9jbG9zZShncnViX2ZpbGVfdCBmaWxlKTsKKworZ3J1Yl9z
c2l6ZV90IGdydWJfZmF0X3JlYWQoZ3J1Yl9maWxlX3QgZmlsZSwgZ3J1Yl9vZmZfdCBvZmZzZXQs
CisJCSAgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYpOworCisKKyNlbmRpZgpkaWZmIC0tZXhj
bHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzLWNv
bW0uaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9mcy1jb21tLmgKLS0tIHhlbi00
LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzLWNvbW0uaAkxOTcwLTAxLTAxIDA3OjAwOjAw
LjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnMt
Y29tbS5oCTIwMTItMTItMjggMTY6MDI6NDEuMDAzODQ2ODk3ICswODAwCkBAIC0wLDAgKzEsNjAg
QEAKKyNpZm5kZWYgX0ZTX0NPTU1fSAorI2RlZmluZSBfRlNfQ09NTV9ICisKKyNpbmNsdWRlICJm
cy10eXBlcy5oIgorI2luY2x1ZGUgImJsb2NrX2ludC5oIgorI2luY2x1ZGUgImdydWJfZXJyLmgi
CisjaW5jbHVkZSAiZGVidWcuaCIKKwordHlwZWRlZiBzdHJ1Y3QgIGdydWJfZmlsZQoreworICB2
b2lkICpkYXRhOworICBCbG9ja0RyaXZlclN0YXRlICpiczsKKyAgdWludDMyX3QgcGFydF9vZmZf
c2VjdG9yOworICBncnViX3NpemVfdCBzaXplOworICBncnViX29mZl90IG9mZnNldDsKKyAgLyog
VGhpcyBpcyBjYWxsZWQgd2hlbiBhIHNlY3RvciBpcyByZWFkLiBVc2VkIG9ubHkgZm9yIGEgZGlz
ayBkZXZpY2UuICAqLworICB2b2lkICgqcmVhZF9ob29rKSAoZ3J1Yl9kaXNrX2FkZHJfdCBzZWN0
b3IsCisJCSAgICAgdW5zaWduZWQgb2Zmc2V0LCB1bnNpZ25lZCBsZW5ndGgsIHZvaWQgKmNsb3N1
cmUpOworICB2b2lkICpjbG9zdXJlOworfSpncnViX2ZpbGVfdDsKKworc3RydWN0IGdydWJfZGly
aG9va19pbmZvCit7CisgIHVuc2lnbmVkIGRpcjoxOworICB1bnNpZ25lZCBtdGltZXNldDoxOwor
ICB1bnNpZ25lZCBjYXNlX2luc2Vuc2l0aXZlOjE7CisgIGdydWJfdWludDMyX3QgbXRpbWU7ICAg
IC8vKGRhdGUgfCB0aW1lKQorICBncnViX3VpbnQzMl90IGZpbGVzaXplOworICBncnViX3VpbnQ2
NF90IGZpbGVzaXplX250ZnM7CisgIGdydWJfdWludDY0X3QgdGltZV9udGZzOworfTsKKworc3Ry
dWN0IGxzX2N0cmwKK3sKKyAgdW5zaWduZWQgZGV0YWlsOjE7CisgIGNoYXIqIGRpcm5hbWU7Cit9
OworCisKKworCit0eXBlZGVmIGdydWJfZXJyX3QKKygqZ3J1Yl9vcGVuKSAoZ3J1Yl9maWxlX3Qg
ZmlsZSwgY29uc3QgY2hhciAqbmFtZSk7CisKK3R5cGVkZWYgZ3J1Yl9lcnJfdAorKCpncnViX2xz
KSAoZ3J1Yl9maWxlX3QgZmlsZSwgY29uc3QgY2hhciAqcGF0aCwKKwkgICAgICBpbnQgKCpob29r
KSAoY29uc3QgY2hhciAqZmlsZW5hbWUsCisJCQkgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29r
X2luZm8gKmluZm8sCisJCQkgICB2b2lkICpjbG9zdXJlKSwKKwkgICAgIHZvaWQgKmNsb3N1cmUp
OworCit0eXBlZGVmIGdydWJfZXJyX3QgCisoKmdydWJfY2xvc2UpIChncnViX2ZpbGVfdCBmaWxl
KTsKKwordHlwZWRlZiBncnViX3NzaXplX3QgCisoKmdydWJfcmVhZCkoZ3J1Yl9maWxlX3QgZmls
ZSwgZ3J1Yl9vZmZfdCBvZmZzZXQsCisJCSAgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYpOwor
CisKKyNlbmRpZgpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xz
L2lvZW11LXFlbXUteGVuL2ZzaGVscC5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVu
L2ZzaGVscC5jCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9mc2hlbHAuYwkx
OTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZnNoZWxwLmMJMjAxMi0xMi0yOCAxNjowMjo0MS4wMDQ5MzI0NTcgKzA4
MDAKQEAgLTAsMCArMSwzNjIgQEAKKy8qIGZzaGVscC5jIC0tIEZpbGVzeXN0ZW0gaGVscGVyIGZ1
bmN0aW9ucyAqLworLyoKKyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290bG9hZGVyCisg
KiAgQ29weXJpZ2h0IChDKSAyMDA0LDIwMDUsMjAwNiwyMDA3LDIwMDggIEZyZWUgU29mdHdhcmUg
Rm91bmRhdGlvbiwgSW5jLgorICoKKyAqICBHUlVCIGlzIGZyZWUgc29mdHdhcmU6IHlvdSBjYW4g
cmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkKKyAqICBpdCB1bmRlciB0aGUgdGVybXMgb2Yg
dGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1Ymxpc2hlZCBieQorICogIHRoZSBG
cmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIGVpdGhlciB2ZXJzaW9uIDMgb2YgdGhlIExpY2Vuc2Us
IG9yCisgKiAgKGF0IHlvdXIgb3B0aW9uKSBhbnkgbGF0ZXIgdmVyc2lvbi4KKyAqCisgKiAgR1JV
QiBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAorICog
IGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJh
bnR5IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQ
VVJQT1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3Jl
IGRldGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxvbmcgd2l0aCBHUlVCLiAgSWYgbm90
LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+LgorICovCisKKyNpbmNsdWRlICJl
cnIuaCIKKyNpbmNsdWRlICJtaXNjLmgiCisjaW5jbHVkZSAiYmxvY2tfaW50LmgiCisjaW5jbHVk
ZSAiZnNoZWxwLmgiCisjaW5jbHVkZSAibnRmcy5oIgorI2luY2x1ZGUgImRlYnVnLmgiCisKK3N0
cnVjdCBncnViX2ZzaGVscF9maW5kX2ZpbGVfY2xvc3VyZQoreworICBncnViX2ZzaGVscF9ub2Rl
X3Qgcm9vdG5vZGU7CisgIGludCAoKml0ZXJhdGVfZGlyKSAoZ3J1Yl9mc2hlbHBfbm9kZV90IGRp
ciwKKwkJICAgICAgaW50ICgqaG9vaykKKwkJICAgICAgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAor
CQkgICAgICAgZW51bSBncnViX2ZzaGVscF9maWxldHlwZSBmaWxldHlwZSwKKwkJICAgICAgIGdy
dWJfZnNoZWxwX25vZGVfdCBub2RlLCB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgdm9pZCAqY2xv
c3VyZSk7CisgIHZvaWQgKmNsb3N1cmU7CisgIGNoYXIgKigqcmVhZF9zeW1saW5rKSAoZ3J1Yl9m
c2hlbHBfbm9kZV90IG5vZGUpOworICBpbnQgc3ltbGlua25lc3Q7CisgIGVudW0gZ3J1Yl9mc2hl
bHBfZmlsZXR5cGUgZm91bmR0eXBlOworICBncnViX2ZzaGVscF9ub2RlX3QgY3VycnJvb3Q7Cit9
OworCitzdGF0aWMgdm9pZAorZnJlZV9ub2RlIChncnViX2ZzaGVscF9ub2RlX3Qgbm9kZSwgc3Ry
dWN0IGdydWJfZnNoZWxwX2ZpbmRfZmlsZV9jbG9zdXJlICpjKQoreworICBpZiAobm9kZSAhPSBj
LT5yb290bm9kZSAmJiBub2RlICE9IGMtPmN1cnJyb290KQorICAgIGdydWJfZnJlZSAobm9kZSk7
Cit9CisKK3N0cnVjdCBmaW5kX2ZpbGVfY2xvc3VyZQoreworICBjaGFyICpuYW1lOworICBlbnVt
IGdydWJfZnNoZWxwX2ZpbGV0eXBlICp0eXBlOworICBncnViX2ZzaGVscF9ub2RlX3QgKm9sZG5v
ZGU7CisgIGdydWJfZnNoZWxwX25vZGVfdCAqY3Vycm5vZGU7Cit9OworCitzdGF0aWMgaW50Citp
dGVyYXRlIChjb25zdCBjaGFyICpmaWxlbmFtZSwKKwkgZW51bSBncnViX2ZzaGVscF9maWxldHlw
ZSBmaWxldHlwZSwKKwkgZ3J1Yl9mc2hlbHBfbm9kZV90IG5vZGUsCisJIHZvaWQgKmNsb3N1cmUp
Cit7CisgIHN0cnVjdCBmaW5kX2ZpbGVfY2xvc3VyZSAqYyA9IGNsb3N1cmU7CisgIERCRygibGlz
dF9maWxlIGhvb2tlZCBieSBmc2hlbHA6aXRlcmF0ZSgpLCBmaWxlbmFtZT0lcyIsIGZpbGVuYW1l
KTsKKyAgaWYgKGZpbGV0eXBlID09IEdSVUJfRlNIRUxQX1VOS05PV04gfHwKKyAgICAgIChncnVi
X3N0cmNtcCAoYy0+bmFtZSwgZmlsZW5hbWUpICYmCisgICAgICAgKCEgKGZpbGV0eXBlICYgR1JV
Ql9GU0hFTFBfQ0FTRV9JTlNFTlNJVElWRSkgfHwKKwlncnViX3N0cm5jYXNlY21wIChjLT5uYW1l
LCBmaWxlbmFtZSwgR1JVQl9MT05HX01BWCkpKSkKKyAgICB7CisgICAgICBEQkcoIm5vdCBtYXRj
aCEhIT4+Pj4+PiIpOworICAgICAgZ3J1Yl9mcmVlIChub2RlKTsKKyAgICAgIHJldHVybiAwOwor
ICAgIH0KKworICAvKiBUaGUgbm9kZSBpcyBmb3VuZCwgc3RvcCBpdGVyYXRpbmcgb3ZlciB0aGUg
bm9kZXMuICAqLworICAqKGMtPnR5cGUpID0gZmlsZXR5cGUgJiB+R1JVQl9GU0hFTFBfQ0FTRV9J
TlNFTlNJVElWRTsKKyAgKihjLT5vbGRub2RlKSA9ICooYy0+Y3Vycm5vZGUpOworICAqKGMtPmN1
cnJub2RlKSA9IG5vZGU7CisgIERCRygiZm91bmQhIT4+Pj4+PiIpOworICByZXR1cm4gMTsKK30K
Kworc3RhdGljIGdydWJfZXJyX3QKK2ZpbmRfZmlsZSAoY29uc3QgY2hhciAqY3VycnBhdGgsIGdy
dWJfZnNoZWxwX25vZGVfdCBjdXJycm9vdCwKKwkgICBncnViX2ZzaGVscF9ub2RlX3QgKmN1cnJm
b3VuZCwKKwkgICBzdHJ1Y3QgZ3J1Yl9mc2hlbHBfZmluZF9maWxlX2Nsb3N1cmUgKmMpCit7Cisg
IGNoYXIgZnBhdGhbZ3J1Yl9zdHJsZW4gKGN1cnJwYXRoKSArIDFdOworICBjaGFyICpuYW1lID0g
ZnBhdGg7CisgIGNoYXIgKm5leHQ7CisgIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgdHlwZSA9
IEdSVUJfRlNIRUxQX0RJUjsKKyAgZ3J1Yl9mc2hlbHBfbm9kZV90IGN1cnJub2RlID0gY3VycnJv
b3Q7CisgIGdydWJfZnNoZWxwX25vZGVfdCBvbGRub2RlID0gY3VycnJvb3Q7CisKKyAgYy0+Y3Vy
cnJvb3QgPSBjdXJycm9vdDsKKworICBncnViX3N0cm5jcHkgKGZwYXRoLCBjdXJycGF0aCwgZ3J1
Yl9zdHJsZW4gKGN1cnJwYXRoKSArIDEpOworCisgIC8qIFJlbW92ZSBhbGwgbGVhZGluZyBzbGFz
aGVzLiAgKi8KKyAgd2hpbGUgKCpuYW1lID09ICcvJykKKyAgICBuYW1lKys7CisKKyAgaWYgKCEg
Km5hbWUpCisgICAgeworICAgICAgKmN1cnJmb3VuZCA9IGN1cnJub2RlOworICAgICAgcmV0dXJu
IDA7CisgICAgfQorCisgIGZvciAoOzspCisgICAgeworICAgICAgaW50IGZvdW5kOworICAgICAg
c3RydWN0IGZpbmRfZmlsZV9jbG9zdXJlIGNjOworCisgICAgICAvKiBFeHRyYWN0IHRoZSBhY3R1
YWwgcGFydCBmcm9tIHRoZSBwYXRobmFtZS4gICovCisgICAgICBuZXh0ID0gZ3J1Yl9zdHJjaHIg
KG5hbWUsICcvJyk7CisgICAgICBpZiAobmV4dCkKKwl7CisJICAvKiBSZW1vdmUgYWxsIGxlYWRp
bmcgc2xhc2hlcy4gICovCisJICB3aGlsZSAoKm5leHQgPT0gJy8nKQorCSAgICAqKG5leHQrKykg
PSAnXDAnOworCX0KKworICAgICAgLyogQXQgdGhpcyBwb2ludCBpdCBpcyBleHBlY3RlZCB0aGF0
IHRoZSBjdXJyZW50IG5vZGUgaXMgYQorCSBkaXJlY3RvcnksIGNoZWNrIGlmIHRoaXMgaXMgdHJ1
ZS4gICovCisgICAgICBpZiAodHlwZSAhPSBHUlVCX0ZTSEVMUF9ESVIpCisJeworCSAgZnJlZV9u
b2RlIChjdXJybm9kZSwgYyk7CisJICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJfQkFEX0ZJ
TEVfVFlQRSwgIm5vdCBhIGRpcmVjdG9yeSIpOworCX0KKworICAgICAgREJHKCJmaW5kX2ZpbGVf
Y2xvc3VyZSBjYy5uYW1lPaG+JXOhvyIsIG5hbWUpOworICAgICAgY2MubmFtZSA9IG5hbWU7Cisg
ICAgICBjYy50eXBlID0gJnR5cGU7CisgICAgICBjYy5vbGRub2RlID0gJm9sZG5vZGU7CisgICAg
ICBjYy5jdXJybm9kZSA9ICZjdXJybm9kZTsKKyAgICAgIC8qIEl0ZXJhdGUgb3ZlciB0aGUgZGly
ZWN0b3J5LiAgKi8KKyAgICAgIERCRygiKioqKioqZnNoZWxwOmZpbmRfZmlsZSBob29rZWQgYnkg
XCdncnViX250ZnNfaXRlcmF0ZV9kaXJcJywiCisJICAibmVzdGVkIGFub3RoZXIgaG9vayBcJ2Zz
aGVscDppdGVyYXRvclwnIik7CisgICAgICBmb3VuZCA9IGMtPml0ZXJhdGVfZGlyIChjdXJybm9k
ZSwgaXRlcmF0ZSwgJmNjKTsgCisgICAgICBpZiAoISBmb3VuZCkKKwl7CisJICBpZiAoZ3J1Yl9l
cnJubykKKwkgICAgcmV0dXJuIGdydWJfZXJybm87CisKKwkgIGJyZWFrOworCX0KKworICAgICAg
LyogUmVhZCBpbiB0aGUgc3ltbGluayBhbmQgZm9sbG93IGl0LiAgKi8KKyAgICAgIGlmICh0eXBl
ID09IEdSVUJfRlNIRUxQX1NZTUxJTkspCisJeworCSAgY2hhciAqc3ltbGluazsKKworCSAgLyog
VGVzdCBpZiB0aGUgc3ltbGluayBkb2VzIG5vdCBsb29wLiAgKi8KKwkgIGlmICgrKyhjLT5zeW1s
aW5rbmVzdCkgPT0gOCkKKwkgICAgeworCSAgICAgIGZyZWVfbm9kZSAoY3Vycm5vZGUsIGMpOwor
CSAgICAgIGZyZWVfbm9kZSAob2xkbm9kZSwgYyk7CisJICAgICAgcmV0dXJuIGdydWJfZXJyb3Ig
KEdSVUJfRVJSX1NZTUxJTktfTE9PUCwKKwkJCQkgInRvbyBkZWVwIG5lc3Rpbmcgb2Ygc3ltbGlu
a3MiKTsKKwkgICAgfQorCisJICBzeW1saW5rID0gYy0+cmVhZF9zeW1saW5rIChjdXJybm9kZSk7
CisJICBmcmVlX25vZGUgKGN1cnJub2RlLCBjKTsKKworCSAgaWYgKCFzeW1saW5rKQorCSAgICB7
CisJICAgICAgZnJlZV9ub2RlIChvbGRub2RlLCBjKTsKKwkgICAgICByZXR1cm4gZ3J1Yl9lcnJu
bzsKKwkgICAgfQorCisJICAvKiBUaGUgc3ltbGluayBpcyBhbiBhYnNvbHV0ZSBwYXRoLCBnbyBi
YWNrIHRvIHRoZSByb290IGlub2RlLiAgKi8KKwkgIGlmIChzeW1saW5rWzBdID09ICcvJykKKwkg
ICAgeworCSAgICAgIGZyZWVfbm9kZSAob2xkbm9kZSwgYyk7CisJICAgICAgb2xkbm9kZSA9IGMt
PnJvb3Rub2RlOworCSAgICB9CisKKwkgIC8qIExvb2t1cCB0aGUgbm9kZSB0aGUgc3ltbGluayBw
b2ludHMgdG8uICAqLworCSAgZmluZF9maWxlIChzeW1saW5rLCBvbGRub2RlLCAmY3Vycm5vZGUs
IGMpOworCSAgdHlwZSA9IGMtPmZvdW5kdHlwZTsKKwkgIGdydWJfZnJlZSAoc3ltbGluayk7CisK
KwkgIGlmIChncnViX2Vycm5vKQorCSAgICB7CisJICAgICAgZnJlZV9ub2RlIChvbGRub2RlLCBj
KTsKKwkgICAgICByZXR1cm4gZ3J1Yl9lcnJubzsKKwkgICAgfQorCX0KKworICAgICAgZnJlZV9u
b2RlIChvbGRub2RlLCBjKTsKKworICAgICAgLyogRm91bmQgdGhlIG5vZGUhICAqLworICAgICAg
aWYgKCEgbmV4dCB8fCAqbmV4dCA9PSAnXDAnKQorCXsKKwkgICpjdXJyZm91bmQgPSBjdXJybm9k
ZTsKKwkgIGMtPmZvdW5kdHlwZSA9IHR5cGU7CisJICByZXR1cm4gMDsKKwl9CisKKyAgICAgIG5h
bWUgPSBuZXh0OworICAgIH0KKworICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJfRklMRV9O
T1RfRk9VTkQsICJmaWxlIG5vdCBmb3VuZCIpOworfQorCisvKiBMb29rdXAgdGhlIG5vZGUgUEFU
SC4gIFRoZSBub2RlIFJPT1ROT0RFIGRlc2NyaWJlcyB0aGUgcm9vdCBvZiB0aGUKKyAgIGRpcmVj
dG9yeSB0cmVlLiAgVGhlIG5vZGUgZm91bmQgaXMgcmV0dXJuZWQgaW4gRk9VTkROT0RFLCB3aGlj
aCBpcworICAgZWl0aGVyIGEgUk9PVE5PREUgb3IgYSBuZXcgbWFsbG9jJ2VkIG5vZGUuICBJVEVS
QVRFX0RJUiBpcyB1c2VkIHRvCisgICBpdGVyYXRlIG92ZXIgYWxsIGRpcmVjdG9yeSBlbnRyaWVz
IGluIHRoZSBjdXJyZW50IG5vZGUuCisgICBSRUFEX1NZTUxJTksgaXMgdXNlZCB0byByZWFkIHRo
ZSBzeW1saW5rIGlmIGEgbm9kZSBpcyBhIHN5bWxpbmsuCisgICBFWFBFQ1RUWVBFIGlzIHRoZSB0
eXBlIG5vZGUgdGhhdCBpcyBleHBlY3RlZCBieSB0aGUgY2FsbGVkLCBhbgorICAgZXJyb3IgaXMg
Z2VuZXJhdGVkIGlmIHRoZSBub2RlIGlzIG5vdCBvZiB0aGUgZXhwZWN0ZWQgdHlwZS4gIE1ha2UK
KyAgIHN1cmUgeW91IHVzZSB0aGUgTkVTVEVEX0ZVTkNfQVRUUiBtYWNybyBmb3IgSE9PSywgdGhp
cyBpcyByZXF1aXJlZAorICAgYmVjYXVzZSBHQ0MgaGFzIGEgbmFzdHkgYnVnIHdoZW4gdXNpbmcg
cmVncGFybT0zLiAgKi8KK2dydWJfZXJyX3QKK2dydWJfZnNoZWxwX2ZpbmRfZmlsZSAoY29uc3Qg
Y2hhciAqcGF0aCwgZ3J1Yl9mc2hlbHBfbm9kZV90IHJvb3Rub2RlLAorCQkgICAgICAgZ3J1Yl9m
c2hlbHBfbm9kZV90ICpmb3VuZG5vZGUsCisJCSAgICAgICBpbnQgKCppdGVyYXRlX2RpcikgKGdy
dWJfZnNoZWxwX25vZGVfdCBkaXIsCisJCQkJCSAgIGludCAoKmhvb2spCisJCQkJCSAgIChjb25z
dCBjaGFyICpmaWxlbmFtZSwKKwkJCQkJICAgIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZmls
ZXR5cGUsCisJCQkJCSAgICBncnViX2ZzaGVscF9ub2RlX3Qgbm9kZSwKKwkJCQkJICAgIHZvaWQg
KmNsb3N1cmUpLAorCQkJCQkgICB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgIHZvaWQgKmNsb3N1
cmUsCisJCSAgICAgICBjaGFyICooKnJlYWRfc3ltbGluaykgKGdydWJfZnNoZWxwX25vZGVfdCBu
b2RlKSwKKwkJICAgICAgIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZXhwZWN0dHlwZSkKK3sK
KyAgZ3J1Yl9lcnJfdCBlcnI7CisgIHN0cnVjdCBncnViX2ZzaGVscF9maW5kX2ZpbGVfY2xvc3Vy
ZSBjOworCisgIGMucm9vdG5vZGUgPSByb290bm9kZTsKKyAgYy5pdGVyYXRlX2RpciA9IGl0ZXJh
dGVfZGlyOworICBjLmNsb3N1cmUgPSBjbG9zdXJlOworICBjLnJlYWRfc3ltbGluayA9IHJlYWRf
c3ltbGluazsKKyAgYy5zeW1saW5rbmVzdCA9IDA7CisgIGMuZm91bmR0eXBlID0gR1JVQl9GU0hF
TFBfRElSOworCisgIGlmICghcGF0aCB8fCBwYXRoWzBdICE9ICcvJykKKyAgICB7CisgICAgICBn
cnViX2Vycm9yIChHUlVCX0VSUl9CQURfRklMRU5BTUUsICJiYWQgZmlsZW5hbWUiKTsKKyAgICAg
IHJldHVybiBncnViX2Vycm5vOworICAgIH0KKyAgCisgIAorICBEQkcoImdvaW5nIHRvIGZpbmRf
ZmlsZVxuIik7CisgIGVyciA9IGZpbmRfZmlsZSAocGF0aCwgcm9vdG5vZGUsIGZvdW5kbm9kZSwg
JmMpOworICBpZiAoZXJyKQorICAgIHJldHVybiBlcnI7CisKKyAgLyogQ2hlY2sgaWYgdGhlIG5v
ZGUgdGhhdCB3YXMgZm91bmQgd2FzIG9mIHRoZSBleHBlY3RlZCB0eXBlLiAgKi8KKyAgaWYgKGV4
cGVjdHR5cGUgPT0gR1JVQl9GU0hFTFBfUkVHICYmIGMuZm91bmR0eXBlICE9IGV4cGVjdHR5cGUp
CisgICAgcmV0dXJuIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GSUxFX1RZUEUsICJub3QgYSBy
ZWd1bGFyIGZpbGUiKTsKKyAgZWxzZSBpZiAoZXhwZWN0dHlwZSA9PSBHUlVCX0ZTSEVMUF9ESVIg
JiYgYy5mb3VuZHR5cGUgIT0gZXhwZWN0dHlwZSkKKyAgICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JV
Ql9FUlJfQkFEX0ZJTEVfVFlQRSwgIm5vdCBhIGRpcmVjdG9yeSIpOworCisgIHJldHVybiAwOwor
fQorCisvKiBSZWFkIExFTiBieXRlcyBmcm9tIHRoZSBmaWxlIE5PREUgb24gZGlzayBESVNLIGlu
dG8gdGhlIGJ1ZmZlciBCVUYsCisgICBiZWdpbm5pbmcgd2l0aCB0aGUgYmxvY2sgUE9TLiAgUkVB
RF9IT09LIHNob3VsZCBiZSBzZXQgYmVmb3JlCisgICByZWFkaW5nIGEgYmxvY2sgZnJvbSB0aGUg
ZmlsZS4gIEdFVF9CTE9DSyBpcyB1c2VkIHRvIHRyYW5zbGF0ZSBmaWxlCisgICBibG9ja3MgdG8g
ZGlzayBibG9ja3MuICBUaGUgZmlsZSBpcyBGSUxFU0laRSBieXRlcyBiaWcgYW5kIHRoZQorICAg
YmxvY2tzIGhhdmUgYSBzaXplIG9mIExPRzJCTE9DS1NJWkUgKGluIGxvZzIpLiAgKi8KK2dydWJf
c3NpemVfdAorZ3J1Yl9mc2hlbHBfcmVhZF9maWxlIChCbG9ja0RyaXZlclN0YXRlKiBicywgZ3J1
Yl9mc2hlbHBfbm9kZV90IG5vZGUsCisJCSAgICAgICB2b2lkICgqcmVhZF9ob29rKSAoZ3J1Yl9k
aXNrX2FkZHJfdCBzZWN0b3IsCisJCQkJCSAgdW5zaWduZWQgb2Zmc2V0LAorCQkJCQkgIHVuc2ln
bmVkIGxlbmd0aCwKKwkJCQkJICB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgIHZvaWQgKmNsb3N1
cmUsCisJCSAgICAgICBncnViX29mZl90IHBvcywgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYs
CisJCSAgICAgICBncnViX2Rpc2tfYWRkcl90ICgqZ2V0X2Jsb2NrKSAoZ3J1Yl9mc2hlbHBfbm9k
ZV90IG5vZGUsCisJCQkJCQkgICAgICBncnViX2Rpc2tfYWRkcl90IGJsb2NrKSwKKwkJICAgICAg
IGdydWJfb2ZmX3QgZmlsZXNpemUsIGludCBsb2cyYmxvY2tzaXplKQoreworICBncnViX2Rpc2tf
YWRkcl90IGksIGJsb2NrY250OworICBncnViX29mZl90IG9mZl9ieXRlczsKKyAgaW50IGJsb2Nr
c2l6ZSA9IDEgPDwgKGxvZzJibG9ja3NpemUgKyBHUlVCX0RJU0tfU0VDVE9SX0JJVFMpOworCisg
IC8qIEFkanVzdCBMRU4gc28gaXQgd2UgY2FuJ3QgcmVhZCBwYXN0IHRoZSBlbmQgb2YgdGhlIGZp
bGUuICAqLworICBpZiAocG9zICsgbGVuID4gZmlsZXNpemUpCisgICAgbGVuID0gZmlsZXNpemUg
LSBwb3M7CisKKyAgYmxvY2tjbnQgPSAoKGxlbiArIHBvcykgKyBibG9ja3NpemUgLSAxKSA+Pgor
ICAgIChsb2cyYmxvY2tzaXplICsgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKTsKKworICBmb3IgKGkg
PSBwb3MgPj4gKGxvZzJibG9ja3NpemUgKyBHUlVCX0RJU0tfU0VDVE9SX0JJVFMpOyBpIDwgYmxv
Y2tjbnQ7IGkrKykKKyAgICB7CisgICAgICBncnViX2Rpc2tfYWRkcl90IGJsa25yOworICAgICAg
aW50IGJsb2Nrb2ZmID0gcG9zICYgKGJsb2Nrc2l6ZSAtIDEpOworICAgICAgaW50IGJsb2NrZW5k
ID0gYmxvY2tzaXplOworCisgICAgICBpbnQgc2tpcGZpcnN0ID0gMDsKKworICAgICAgYmxrbnIg
PSBnZXRfYmxvY2sgKG5vZGUsIGkpOworICAgICAgaWYgKGdydWJfZXJybm8pCisJcmV0dXJuIC0x
OworCisgICAgICBibGtuciA9IGJsa25yIDw8IGxvZzJibG9ja3NpemU7CisgICAgICBvZmZfYnl0
ZXMgPSBibGtuciA8PCBHUlVCX0RJU0tfU0VDVE9SX0JJVFM7CisKKyAgICAgIC8qIExhc3QgYmxv
Y2suICAqLworICAgICAgaWYgKGkgPT0gYmxvY2tjbnQgLSAxKQorCXsKKwkgIGJsb2NrZW5kID0g
KGxlbiArIHBvcykgJiAoYmxvY2tzaXplIC0gMSk7CisKKwkgIC8qIFRoZSBsYXN0IHBvcnRpb24g
aXMgZXhhY3RseSBibG9ja3NpemUuICAqLworCSAgaWYgKCEgYmxvY2tlbmQpCisJICAgIGJsb2Nr
ZW5kID0gYmxvY2tzaXplOworCX0KKworICAgICAgLyogRmlyc3QgYmxvY2suICAqLworICAgICAg
aWYgKGkgPT0gKHBvcyA+PiAobG9nMmJsb2Nrc2l6ZSArIEdSVUJfRElTS19TRUNUT1JfQklUUykp
KQorCXsKKwkgIHNraXBmaXJzdCA9IGJsb2Nrb2ZmOworCSAgYmxvY2tlbmQgLT0gc2tpcGZpcnN0
OworCX0KKworICAgICAgLyogSWYgdGhlIGJsb2NrIG51bWJlciBpcyAwIHRoaXMgYmxvY2sgaXMg
bm90IHN0b3JlZCBvbiBkaXNrIGJ1dAorCSBpcyB6ZXJvIGZpbGxlZCBpbnN0ZWFkLiAgKi8KKyAg
ICAgIGlmIChibGtucikKKwl7CisJICAvL2JzLT5yZWFkX2hvb2sgPSByZWFkX2hvb2s7CisJICAv
L2JzLT5jbG9zdXJlID0gY2xvc3VyZTsKKwkgIAorCSAgYmRydl9wcmVhZF9mcm9tX2xjbl9vZl92
b2x1bShicywgb2ZmX2J5dGVzICsgc2tpcGZpcnN0LAorCQkgICAgICBidWYsIGJsb2NrZW5kKTsK
KwkgIC8vYnMtPnJlYWRfaG9vayA9IDA7CisJICBpZiAoZ3J1Yl9lcnJubykKKwkgICAgcmV0dXJu
IC0xOworCX0KKyAgICAgIGVsc2UKKwlncnViX21lbXNldCAoYnVmLCAwLCBibG9ja2VuZCk7CisK
KyAgICAgIGJ1ZiArPSBibG9ja3NpemUgLSBza2lwZmlyc3Q7CisgICAgfQorCisgIHJldHVybiBs
ZW47Cit9CisKK3Vuc2lnbmVkIGludAorZ3J1Yl9mc2hlbHBfbG9nMmJsa3NpemUgKHVuc2lnbmVk
IGludCBibGtzaXplLCB1bnNpZ25lZCBpbnQgKnBvdykKK3sKKyAgaW50IG1vZDsKKworICAqcG93
ID0gMDsKKyAgd2hpbGUgKGJsa3NpemUgPiAxKQorICAgIHsKKyAgICAgIG1vZCA9IGJsa3NpemUg
LSAoKGJsa3NpemUgPj4gMSkgPDwgMSk7CisgICAgICBibGtzaXplID4+PSAxOworCisgICAgICAv
KiBDaGVjayBpZiBpdCByZWFsbHkgaXMgYSBwb3dlciBvZiB0d28uICAqLworICAgICAgaWYgKG1v
ZCkKKwlyZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJfQkFEX05VTUJFUiwKKwkJCSAgICJ0aGUg
YmxvY2tzaXplIGlzIG5vdCBhIHBvd2VyIG9mIHR3byIpOworICAgICAgKCpwb3cpKys7CisgICAg
fQorCisgIHJldHVybiBHUlVCX0VSUl9OT05FOworfQpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4g
LVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzaGVscC5oIHhlbi00LjEuMi1i
L3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzaGVscC5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2Vt
dS1xZW11LXhlbi9mc2hlbHAuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAor
KysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnNoZWxwLmgJMjAxMi0xMi0yOCAx
NjowMjo0MS4wMDQ5MzI0NTcgKzA4MDAKQEAgLTAsMCArMSw4NiBAQAorLyogZnNoZWxwLmggLS0g
RmlsZXN5c3RlbSBoZWxwZXIgZnVuY3Rpb25zICovCisvKgorICogIEdSVUIgIC0tICBHUmFuZCBV
bmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDQsMjAwNSwyMDA2LDIwMDcs
MjAwOCAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJl
ZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0
IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVi
bGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNp
b24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2
ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQg
d2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2
ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVT
UyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVi
bGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJl
Y2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9u
ZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4u
CisgKi8KKworI2lmbmRlZiBHUlVCX0ZTSEVMUF9IRUFERVIKKyNkZWZpbmUgR1JVQl9GU0hFTFBf
SEVBREVSCTEKKworI2luY2x1ZGUgImZzLXR5cGVzLmgiCisjaW5jbHVkZSAiZ3J1Yl9lcnIuaCIK
KyNpbmNsdWRlICJibG9ja19pbnQuaCIKK3R5cGVkZWYgc3RydWN0IGdydWJfZnNoZWxwX25vZGUg
KmdydWJfZnNoZWxwX25vZGVfdDsKKworI2RlZmluZSBHUlVCX0ZTSEVMUF9DQVNFX0lOU0VOU0lU
SVZFCTB4MTAwCisjZGVmaW5lIEdSVUJfRlNIRUxQX1RZUEVfTUFTSwkweGZmCisjZGVmaW5lIEdS
VUJfRlNIRUxQX0ZMQUdTX01BU0sJMHgxMDAKKworZW51bSBncnViX2ZzaGVscF9maWxldHlwZQor
ICB7CisgICAgR1JVQl9GU0hFTFBfVU5LTk9XTiwKKyAgICBHUlVCX0ZTSEVMUF9SRUcsCisgICAg
R1JVQl9GU0hFTFBfRElSLAorICAgIEdSVUJfRlNIRUxQX1NZTUxJTksKKyAgfTsKKworLyogTG9v
a3VwIHRoZSBub2RlIFBBVEguICBUaGUgbm9kZSBST09UTk9ERSBkZXNjcmliZXMgdGhlIHJvb3Qg
b2YgdGhlCisgICBkaXJlY3RvcnkgdHJlZS4gIFRoZSBub2RlIGZvdW5kIGlzIHJldHVybmVkIGlu
IEZPVU5ETk9ERSwgd2hpY2ggaXMKKyAgIGVpdGhlciBhIFJPT1ROT0RFIG9yIGEgbmV3IG1hbGxv
YydlZCBub2RlLiAgSVRFUkFURV9ESVIgaXMgdXNlZCB0bworICAgaXRlcmF0ZSBvdmVyIGFsbCBk
aXJlY3RvcnkgZW50cmllcyBpbiB0aGUgY3VycmVudCBub2RlLgorICAgUkVBRF9TWU1MSU5LIGlz
IHVzZWQgdG8gcmVhZCB0aGUgc3ltbGluayBpZiBhIG5vZGUgaXMgYSBzeW1saW5rLgorICAgRVhQ
RUNUVFlQRSBpcyB0aGUgdHlwZSBub2RlIHRoYXQgaXMgZXhwZWN0ZWQgYnkgdGhlIGNhbGxlZCwg
YW4KKyAgIGVycm9yIGlzIGdlbmVyYXRlZCBpZiB0aGUgbm9kZSBpcyBub3Qgb2YgdGhlIGV4cGVj
dGVkIHR5cGUuICBNYWtlCisgICBzdXJlIHlvdSB1c2UgdGhlIE5FU1RFRF9GVU5DX0FUVFIgbWFj
cm8gZm9yIEhPT0ssIHRoaXMgaXMgcmVxdWlyZWQKKyAgIGJlY2F1c2UgR0NDIGhhcyBhIG5hc3R5
IGJ1ZyB3aGVuIHVzaW5nIHJlZ3Bhcm09My4gICovCitncnViX2Vycl90IGdydWJfZnNoZWxwX2Zp
bmRfZmlsZSAoY29uc3QgY2hhciAqcGF0aCwKKwkJCQkgIGdydWJfZnNoZWxwX25vZGVfdCByb290
bm9kZSwKKwkJCQkgIGdydWJfZnNoZWxwX25vZGVfdCAqZm91bmRub2RlLAorCQkJCSAgaW50ICgq
aXRlcmF0ZV9kaXIpCisJCQkJICAoZ3J1Yl9mc2hlbHBfbm9kZV90IGRpciwKKwkJCQkgICBpbnQg
KCpob29rKQorCQkJCSAgIChjb25zdCBjaGFyICpmaWxlbmFtZSwKKwkJCQkgICAgZW51bSBncnVi
X2ZzaGVscF9maWxldHlwZSBmaWxldHlwZSwKKwkJCQkgICAgZ3J1Yl9mc2hlbHBfbm9kZV90IG5v
ZGUsCisJCQkJICAgIHZvaWQgKmNsb3N1cmUpLAorCQkJCSAgIHZvaWQgKmNsb3N1cmUpLAorCQkJ
CSAgdm9pZCAqY2xvc3VyZSwKKwkJCQkgIGNoYXIgKigqcmVhZF9zeW1saW5rKSAoZ3J1Yl9mc2hl
bHBfbm9kZV90IG5vZGUpLAorCQkJCSAgZW51bSBncnViX2ZzaGVscF9maWxldHlwZSBleHBlY3Qp
OworCisKKy8qIFJlYWQgTEVOIGJ5dGVzIGZyb20gdGhlIGZpbGUgTk9ERSBvbiBkaXNrIERJU0sg
aW50byB0aGUgYnVmZmVyIEJVRiwKKyAgIGJlZ2lubmluZyB3aXRoIHRoZSBibG9jayBQT1MuICBS
RUFEX0hPT0sgc2hvdWxkIGJlIHNldCBiZWZvcmUKKyAgIHJlYWRpbmcgYSBibG9jayBmcm9tIHRo
ZSBmaWxlLiAgR0VUX0JMT0NLIGlzIHVzZWQgdG8gdHJhbnNsYXRlIGZpbGUKKyAgIGJsb2NrcyB0
byBkaXNrIGJsb2Nrcy4gIFRoZSBmaWxlIGlzIEZJTEVTSVpFIGJ5dGVzIGJpZyBhbmQgdGhlCisg
ICBibG9ja3MgaGF2ZSBhIHNpemUgb2YgTE9HMkJMT0NLU0laRSAoaW4gbG9nMikuICAqLworZ3J1
Yl9zc2l6ZV90IGdydWJfZnNoZWxwX3JlYWRfZmlsZSAoQmxvY2tEcml2ZXJTdGF0ZSogYnMsIGdy
dWJfZnNoZWxwX25vZGVfdCBub2RlLAorCQkJCSAgICB2b2lkICgqcmVhZF9ob29rKQorCQkJCSAg
ICAoZ3J1Yl9kaXNrX2FkZHJfdCBzZWN0b3IsCisJCQkJICAgICB1bnNpZ25lZCBvZmZzZXQsCisJ
CQkJICAgICB1bnNpZ25lZCBsZW5ndGgsCisJCQkJICAgICB2b2lkICpjbG9zdXJlKSwKKwkJCQkg
ICAgdm9pZCAqY2xvc3VyZSwKKwkJCQkgICAgZ3J1Yl9vZmZfdCBwb3MsIGdydWJfc2l6ZV90IGxl
biwgY2hhciAqYnVmLAorCQkJCSAgICBncnViX2Rpc2tfYWRkcl90ICgqZ2V0X2Jsb2NrKQorCQkJ
CSAgICAoZ3J1Yl9mc2hlbHBfbm9kZV90IG5vZGUsCisJCQkJICAgICBncnViX2Rpc2tfYWRkcl90
IGJsb2NrKSwKKwkJCQkgICAgZ3J1Yl9vZmZfdCBmaWxlc2l6ZSwgaW50IGxvZzJibG9ja3NpemUp
OworCit1bnNpZ25lZCBpbnQgZ3J1Yl9mc2hlbHBfbG9nMmJsa3NpemUgKHVuc2lnbmVkIGludCBi
bGtzaXplLAorCQkJCSAgICAgIHVuc2lnbmVkIGludCAqcG93KTsKKworI2VuZGlmIC8qICEgR1JV
Ql9GU0hFTFBfSEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4y
LWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnMtdGltZS5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11
LXFlbXUteGVuL2ZzLXRpbWUuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4v
ZnMtdGltZS5jCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9mcy10aW1lLmMJMjAxMi0xMi0yOCAxNjowMjo0MS4w
MDU2ODU3OTggKzA4MDAKQEAgLTAsMCArMSw3NyBAQAorI2luY2x1ZGUgImZzLXRpbWUuaCIKKwor
CisKK3N0YXRpYyB1aW50NjRfdCBkaXY2NCh1aW50NjRfdCBhLCB1aW50MzJfdCBiLCB1aW50MzJf
dCBjKQoreworICAgIHVuaW9uIHsKKyAgICAgICAgdWludDY0X3QgbGw7CisgICAgICAgIHN0cnVj
dCB7CisjaWZkZWYgV09SRFNfQklHRU5ESUFOCisgICAgICAgICAgICB1aW50MzJfdCBoaWdoLCBs
b3c7CisjZWxzZQorICAgICAgICAgICAgdWludDMyX3QgbG93LCBoaWdoOworI2VuZGlmCisgICAg
ICAgIH0gbDsKKyAgICB9IHUsIHJlczsKKyAgICB1aW50NjRfdCBybCwgcmg7CisKKyAgICB1Lmxs
ID0gYTsKKyAgICBybCA9ICh1aW50NjRfdCl1LmwubG93ICogKHVpbnQ2NF90KWI7CisgICAgcmgg
PSAodWludDY0X3QpdS5sLmhpZ2ggKiAodWludDY0X3QpYjsKKyAgICByaCArPSAocmwgPj4gMzIp
OworICAgIHJlcy5sLmhpZ2ggPSByaCAvIGM7CisgICAgcmVzLmwubG93ID0gKCgocmggJSBjKSA8
PCAzMikgKyAocmwgJiAweGZmZmZmZmZmKSkgLyBjOworICAgIHJldHVybiByZXMubGw7Cit9CisK
K3N0YXRpYyB1aW50NjRfdCBzdWI2NCh1aW50NjRfdCBhLCB1aW50NjRfdCBiKQoreworICBzdHJ1
Y3QKKyAgeworI2lmZGVmIFdPUkRTX0JJR0VORElBTgorICAgICAgICAgICAgdWludDMyX3QgaGln
aCwgbG93OworI2Vsc2UKKyAgICAgICAgICAgIHVpbnQzMl90IGxvdywgaGlnaDsKKyNlbmRpZgor
ICB9YTEsYjEsYzsKKyAgCisgIGExLmhpZ2ggPSBhPj4zMjsKKyAgYTEubG93ID0gYSYweGZmZmZm
ZmZmOworICBiMS5oaWdoID0gYj4+MzI7CisgIGIxLmxvdyA9IGImMHhmZmZmZmZmZjsKKyAgCisg
IGlmKGExLmhpZ2ggPCBiMS5oaWdoKQorICAgIHsKKyAgICAgIGM9YjE7CisgICAgICBiMT1hMTsK
KyAgICAgIGExPWM7CisgICAgfQorICAKKyAgYTEuaGlnaCAtPSBiMS5oaWdoOworICBhMS5sb3cg
LT0gYjEubG93OworICBpZihhMS5sb3cgJiAweDgwMDAwMDAwKQorICAgIHsKKyAgICAgIGExLmxv
dyA9ICh+KGExLmxvdyAmIDB4N2ZmZmZmZmYpKSsxOworICAgICAgYTEuaGlnaCAtPSAxOworICAg
IH0KKyAgCisgIHVpbnQ2NF90IHJldCA9ICh1aW50NjRfdClhMS5oaWdoPDwzMiB8IGExLmxvdzsK
KyAgcmV0dXJuIHJldDsKK30KKworc3RydWN0IHRtKiBudGZzX3V0YzJsb2NhbChncnViX3VpbnQ2
NF90IHRpbWUsIHN0cnVjdCB0bSogcHRtKQoreworICAvL3RpbWVfdCB0aW1lMiA9IHN1YjY0KHRp
bWUsIE5URlNfVElNRV9PRkZTRVQpOworICB0aW1lX3QgdGltZTIgPSB0aW1lIC0gTlRGU19USU1F
X09GRlNFVDsKKyAgLypEQkcoInNpemVvZihpbnQpPSVkIiwgc2l6ZW9mKGludCkpOworICBEQkco
InNpemVvZihzaG9ydCk9JWQiLCBzaXplb2Yoc2hvcnQpKTsKKyAgREJHKCJzaXplb2YobG9uZyk9
JWQiLCBzaXplb2YobG9uZykpOworICBEQkcoInNpemVvZihsb25nIGxvbmcpPSVkIiwgc2l6ZW9m
KHVuc2lnbmVkIGxvbmcgbG9uZykpOworICBEQkcoInNpemVvZih0aW1lX3QpPSVkLCB0aW1lPSV6
dSwgdGltZTI9JXp1Iiwgc2l6ZW9mKHRpbWVfdCksIHRpbWUsIHRpbWUyKTsqLworICAvL3RpbWUy
ID0gZGl2NjQodGltZTIsMSwxMDAwMDAwMCk7CisgIHRpbWUyID0gdGltZTIgLyAxMDAwMDAwMDsK
KyAgREJHKCJzaXplb2YodGltZV90KT0lZCwgdGltZT0lenUsIHRpbWUyPSV6dSIsIHNpemVvZih0
aW1lX3QpLCB0aW1lLCB0aW1lMik7CisgIC8vLy90aW1lMiA9IDA7Ly90aW1lKE5VTEwpOworICBy
ZXR1cm4gbG9jYWx0aW1lX3IoJnRpbWUyLCBwdG0pOworfQpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1y
cE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzLXRpbWUuaCB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9mcy10aW1lLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xz
L2lvZW11LXFlbXUteGVuL2ZzLXRpbWUuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCAr
MDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnMtdGltZS5oCTIwMTIt
MTItMjggMTY6MDI6NDEuMDA1Njg1Nzk4ICswODAwCkBAIC0wLDAgKzEsMTIgQEAKKyNpZm5kZWYg
RlNfVElNRV9ICisjZGVmaW5lIEZTX1RJTUVfSAorCisjaW5jbHVkZSA8dGltZS5oPgorI2luY2x1
ZGUgImZzLWNvbW0uaCIKKyNkZWZpbmUgTlRGU19USU1FX09GRlNFVCAoKGdydWJfdWludDY0X3Qp
KDM2OSAqIDM2NSArIDg5KSAqIDI0ICogMzYwMCAqIDEwMDAwMDAwKQorCitzdHJ1Y3QgdG0qIG50
ZnNfdXRjMmxvY2FsKGdydWJfdWludDY0X3QgdGltZSwgc3RydWN0IHRtKiBwdG0pOworCisKKyNl
bmRpZgorCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZnMtdHlwZXMuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9m
cy10eXBlcy5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9mcy10eXBlcy5o
CTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9mcy10eXBlcy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDA2OTMyNDE3
ICswODAwCkBAIC0wLDAgKzEsMjM0IEBACisvKgorICogIEdSVUIgIC0tICBHUmFuZCBVbmlmaWVk
IEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDIsMjAwNSwyMDA2LDIwMDcsMjAwOCwy
MDA5ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVl
IHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQg
dW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJs
aXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lv
biAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChhdCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZl
cnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3
aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZl
biB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICogIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNT
IEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJs
aWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgorICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVj
ZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZQorICogIGFsb25n
IHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4K
KyAqLworCisjaWZuZGVmIEdSVUJfVFlQRVNfSEVBREVSCisjZGVmaW5lIEdSVUJfVFlQRVNfSEVB
REVSCTEKKworI2luY2x1ZGUgImdydWItY29uZmlnLmgiCisjaW5jbHVkZSAieDg2XzY0L3R5cGVz
LmgiCisKKyNpZmRlZiBHUlVCX1VUSUwKKyMgZGVmaW5lIEdSVUJfQ1BVX1NJWkVPRl9WT0lEX1AJ
U0laRU9GX1ZPSURfUAorIyBkZWZpbmUgR1JVQl9DUFVfU0laRU9GX0xPTkcJU0laRU9GX0xPTkcK
KyMgaWZkZWYgV09SRFNfQklHRU5ESUFOCisjICBkZWZpbmUgR1JVQl9DUFVfV09SRFNfQklHRU5E
SUFOCTEKKyMgZWxzZQorIyAgdW5kZWYgR1JVQl9DUFVfV09SRFNfQklHRU5ESUFOCisjIGVuZGlm
CisjZWxzZSAvKiAhIEdSVUJfVVRJTCAqLworIyBkZWZpbmUgR1JVQl9DUFVfU0laRU9GX1ZPSURf
UAlHUlVCX1RBUkdFVF9TSVpFT0ZfVk9JRF9QCisjIGRlZmluZSBHUlVCX0NQVV9TSVpFT0ZfTE9O
RwlHUlVCX1RBUkdFVF9TSVpFT0ZfTE9ORworIyBpZmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdF
TkRJQU4KKyMgIGRlZmluZSBHUlVCX0NQVV9XT1JEU19CSUdFTkRJQU4JMQorIyBlbHNlCisjICB1
bmRlZiBHUlVCX0NQVV9XT1JEU19CSUdFTkRJQU4KKyMgZW5kaWYKKyNlbmRpZiAvKiAhIEdSVUJf
VVRJTCAqLworCisjaWYgR1JVQl9DUFVfU0laRU9GX1ZPSURfUCAhPSA0ICYmIEdSVUJfQ1BVX1NJ
WkVPRl9WT0lEX1AgIT0gOAorIyBlcnJvciAiVGhpcyBhcmNoaXRlY3R1cmUgaXMgbm90IHN1cHBv
cnRlZCBiZWNhdXNlIHNpemVvZih2b2lkICopICE9IDQgYW5kIHNpemVvZih2b2lkICopICE9IDgi
CisjZW5kaWYKKworI2lmbmRlZiBHUlVCX1RBUkdFVF9XT1JEU0laRQorIyBpZiBHUlVCX1RBUkdF
VF9TSVpFT0ZfVk9JRF9QID09IDQKKyMgIGRlZmluZSBHUlVCX1RBUkdFVF9XT1JEU0laRSAzMgor
IyBlbGlmIEdSVUJfVEFSR0VUX1NJWkVPRl9WT0lEX1AgPT0gOAorIyAgZGVmaW5lIEdSVUJfVEFS
R0VUX1dPUkRTSVpFIDY0CisjIGVuZGlmCisjZW5kaWYKKworLyogRGVmaW5lIHZhcmlvdXMgd2lk
ZSBpbnRlZ2Vycy4gICovCit0eXBlZGVmIHNpZ25lZCBjaGFyCQlncnViX2ludDhfdDsKK3R5cGVk
ZWYgc2hvcnQJCQlncnViX2ludDE2X3Q7Cit0eXBlZGVmIGludAkJCWdydWJfaW50MzJfdDsKKyNp
ZiBHUlVCX0NQVV9TSVpFT0ZfTE9ORyA9PSA4Cit0eXBlZGVmIGxvbmcJCQlncnViX2ludDY0X3Q7
CisjZWxzZQordHlwZWRlZiBsb25nIGxvbmcJCWdydWJfaW50NjRfdDsKKyNlbmRpZgorCit0eXBl
ZGVmIHVuc2lnbmVkIGNoYXIJCWdydWJfdWludDhfdDsKK3R5cGVkZWYgdW5zaWduZWQgc2hvcnQJ
CWdydWJfdWludDE2X3Q7Cit0eXBlZGVmIHVuc2lnbmVkCQlncnViX3VpbnQzMl90OworI2lmIEdS
VUJfQ1BVX1NJWkVPRl9MT05HID09IDgKK3R5cGVkZWYgdW5zaWduZWQgbG9uZwkJZ3J1Yl91aW50
NjRfdDsKKyNlbHNlCit0eXBlZGVmIHVuc2lnbmVkIGxvbmcgbG9uZwlncnViX3VpbnQ2NF90Owor
I2VuZGlmCisKKy8qIE1pc2MgdHlwZXMuICAqLworI2lmIEdSVUJfVEFSR0VUX1NJWkVPRl9WT0lE
X1AgPT0gOAordHlwZWRlZiBncnViX3VpbnQ2NF90CWdydWJfdGFyZ2V0X2FkZHJfdDsKK3R5cGVk
ZWYgZ3J1Yl91aW50NjRfdAlncnViX3RhcmdldF9vZmZfdDsKK3R5cGVkZWYgZ3J1Yl91aW50NjRf
dAlncnViX3RhcmdldF9zaXplX3Q7Cit0eXBlZGVmIGdydWJfaW50NjRfdAlncnViX3RhcmdldF9z
c2l6ZV90OworI2Vsc2UKK3R5cGVkZWYgZ3J1Yl91aW50MzJfdAlncnViX3RhcmdldF9hZGRyX3Q7
Cit0eXBlZGVmIGdydWJfdWludDMyX3QJZ3J1Yl90YXJnZXRfb2ZmX3Q7Cit0eXBlZGVmIGdydWJf
dWludDMyX3QJZ3J1Yl90YXJnZXRfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDMyX3QJZ3J1Yl90
YXJnZXRfc3NpemVfdDsKKyNlbmRpZgorCisjaWYgR1JVQl9DUFVfU0laRU9GX1ZPSURfUCA9PSA4
Cit0eXBlZGVmIGdydWJfdWludDY0X3QJZ3J1Yl9hZGRyX3Q7Cit0eXBlZGVmIGdydWJfdWludDY0
X3QJZ3J1Yl9zaXplX3Q7Cit0eXBlZGVmIGdydWJfaW50NjRfdAlncnViX3NzaXplX3Q7CisjZWxz
ZQordHlwZWRlZiBncnViX3VpbnQzMl90CWdydWJfYWRkcl90OwordHlwZWRlZiBncnViX3VpbnQz
Ml90CWdydWJfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDMyX3QJZ3J1Yl9zc2l6ZV90OworI2Vu
ZGlmCisKKyNpZiBHUlVCX0NQVV9TSVpFT0ZfVk9JRF9QID09IDgKKyMgZGVmaW5lIEdSVUJfVUxP
TkdfTUFYIDE4NDQ2NzQ0MDczNzA5NTUxNjE1VUwKKyMgZGVmaW5lIEdSVUJfTE9OR19NQVggOTIy
MzM3MjAzNjg1NDc3NTgwN0wKKyMgZGVmaW5lIEdSVUJfTE9OR19NSU4gKC05MjIzMzcyMDM2ODU0
Nzc1ODA3TCAtIDEpCisjZWxzZQorIyBkZWZpbmUgR1JVQl9VTE9OR19NQVggNDI5NDk2NzI5NVVM
CisjIGRlZmluZSBHUlVCX0xPTkdfTUFYIDIxNDc0ODM2NDdMCisjIGRlZmluZSBHUlVCX0xPTkdf
TUlOICgtMjE0NzQ4MzY0N0wgLSAxKQorI2VuZGlmCisKKyNpZiBHUlVCX0NQVV9TSVpFT0ZfVk9J
RF9QID09IDQKKyNkZWZpbmUgVUlOVF9UT19QVFIoeCkgKCh2b2lkKikoZ3J1Yl91aW50MzJfdCko
eCkpCisjZGVmaW5lIFBUUl9UT19VSU5UNjQoeCkgKChncnViX3VpbnQ2NF90KShncnViX3VpbnQz
Ml90KSh4KSkKKyNkZWZpbmUgUFRSX1RPX1VJTlQzMih4KSAoKGdydWJfdWludDMyX3QpKHgpKQor
I2Vsc2UKKyNkZWZpbmUgVUlOVF9UT19QVFIoeCkgKCh2b2lkKikoZ3J1Yl91aW50NjRfdCkoeCkp
CisjZGVmaW5lIFBUUl9UT19VSU5UNjQoeCkgKChncnViX3VpbnQ2NF90KSh4KSkKKyNkZWZpbmUg
UFRSX1RPX1VJTlQzMih4KSAoKGdydWJfdWludDMyX3QpKGdydWJfdWludDY0X3QpKHgpKQorI2Vu
ZGlmCisKKy8qIFRoZSB0eXBlIGZvciByZXByZXNlbnRpbmcgYSBmaWxlIG9mZnNldC4gICovCit0
eXBlZGVmIGdydWJfdWludDY0X3QJZ3J1Yl9vZmZfdDsKKworLyogVGhlIHR5cGUgZm9yIHJlcHJl
c2VudGluZyBhIGRpc2sgYmxvY2sgYWRkcmVzcy4gICovCit0eXBlZGVmIGdydWJfdWludDY0X3QJ
Z3J1Yl9kaXNrX2FkZHJfdDsKKworLyogQnl0ZS1vcmRlcnMuICAqLworI2RlZmluZSBncnViX3N3
YXBfYnl0ZXMxNih4KQlcCisoeyBcCisgICBncnViX3VpbnQxNl90IF94ID0gKHgpOyBcCisgICAo
Z3J1Yl91aW50MTZfdCkgKChfeCA8PCA4KSB8IChfeCA+PiA4KSk7IFwKK30pCisKKyNpZiBkZWZp
bmVkKF9fR05VQ19fKSAmJiAoX19HTlVDX18gPiAzKSAmJiAoX19HTlVDX18gPiA0IHx8IF9fR05V
Q19NSU5PUl9fID49IDMpICYmIGRlZmluZWQoR1JVQl9UQVJHRVRfSTM4NikKK3N0YXRpYyBpbmxp
bmUgZ3J1Yl91aW50MzJfdCBncnViX3N3YXBfYnl0ZXMzMihncnViX3VpbnQzMl90IHgpCit7CisJ
cmV0dXJuIF9fYnVpbHRpbl9ic3dhcDMyKHgpOworfQorCitzdGF0aWMgaW5saW5lIGdydWJfdWlu
dDY0X3QgZ3J1Yl9zd2FwX2J5dGVzNjQoZ3J1Yl91aW50NjRfdCB4KQoreworCXJldHVybiBfX2J1
aWx0aW5fYnN3YXA2NCh4KTsKK30KKyNlbHNlCQkJCQkvKiBub3QgZ2NjIDQuMyBvciBuZXdlciAq
LworI2RlZmluZSBncnViX3N3YXBfYnl0ZXMzMih4KQlcCisoeyBcCisgICBncnViX3VpbnQzMl90
IF94ID0gKHgpOyBcCisgICAoZ3J1Yl91aW50MzJfdCkgKChfeCA8PCAyNCkgXAorICAgICAgICAg
ICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50MzJfdCkgMHhGRjAwVUwpIDw8IDgpIFwKKyAg
ICAgICAgICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDMyX3QpIDB4RkYwMDAwVUwpID4+
IDgpIFwKKyAgICAgICAgICAgICAgICAgICAgfCAoX3ggPj4gMjQpKTsgXAorfSkKKworI2RlZmlu
ZSBncnViX3N3YXBfYnl0ZXM2NCh4KQlcCisoeyBcCisgICBncnViX3VpbnQ2NF90IF94ID0gKHgp
OyBcCisgICAoZ3J1Yl91aW50NjRfdCkgKChfeCA8PCA1NikgXAorICAgICAgICAgICAgICAgICAg
ICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAwVUxMKSA8PCA0MCkgXAorICAgICAgICAg
ICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAwMDBVTEwpIDw8IDI0KSBc
CisgICAgICAgICAgICAgICAgICAgIHwgKChfeCAmIChncnViX3VpbnQ2NF90KSAweEZGMDAwMDAw
VUxMKSA8PCA4KSBcCisgICAgICAgICAgICAgICAgICAgIHwgKChfeCAmIChncnViX3VpbnQ2NF90
KSAweEZGMDAwMDAwMDBVTEwpID4+IDgpIFwKKyAgICAgICAgICAgICAgICAgICAgfCAoKF94ICYg
KGdydWJfdWludDY0X3QpIDB4RkYwMDAwMDAwMDAwVUxMKSA+PiAyNCkgXAorICAgICAgICAgICAg
ICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAwMDAwMDAwMDAwMFVMTCkgPj4g
NDApIFwKKyAgICAgICAgICAgICAgICAgICAgfCAoX3ggPj4gNTYpKTsgXAorfSkKKyNlbmRpZgkJ
CQkJLyogbm90IGdjYyA0LjMgb3IgbmV3ZXIgKi8KKworI2lmZGVmIEdSVUJfQ1BVX1dPUkRTX0JJ
R0VORElBTgorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGUxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4
KQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGUzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyBk
ZWZpbmUgZ3J1Yl9jcHVfdG9fbGU2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBkZWZpbmUg
Z3J1Yl9sZV90b19jcHUxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyBkZWZpbmUgZ3J1Yl9s
ZV90b19jcHUzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyBkZWZpbmUgZ3J1Yl9sZV90b19j
cHU2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fYmUxNih4
KQkoKGdydWJfdWludDE2X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2JlMzIoeCkJKChn
cnViX3VpbnQzMl90KSAoeCkpCisjIGRlZmluZSBncnViX2NwdV90b19iZTY0KHgpCSgoZ3J1Yl91
aW50NjRfdCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9iZV90b19jcHUxNih4KQkoKGdydWJfdWludDE2
X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MzIoeCkJKChncnViX3VpbnQzMl90KSAo
eCkpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTY0KHgpCSgoZ3J1Yl91aW50NjRfdCkgKHgpKQor
IyBpZmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4KKyMgIGRlZmluZSBncnViX3Rhcmdl
dF90b19ob3N0MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl90YXJn
ZXRfdG9faG9zdDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfdGFy
Z2V0X3RvX2hvc3Q2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgIGRlZmluZSBncnViX2hv
c3RfdG9fdGFyZ2V0MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJf
aG9zdF90b190YXJnZXQ2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgZWxzZSAvKiAhIEdS
VUJfVEFSR0VUX1dPUkRTX0JJR0VORElBTiAqLworIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3QxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3QzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3Q2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJn
ZXQxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJn
ZXQzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJn
ZXQ2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBlbmRpZgorI2Vsc2UgLyogISBXT1JEU19C
SUdFTkRJQU4gKi8KKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlMTYoeCkJKChncnViX3VpbnQxNl90
KSAoeCkpCisjIGRlZmluZSBncnViX2NwdV90b19sZTMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgp
KQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGU2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMg
ZGVmaW5lIGdydWJfbGVfdG9fY3B1MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjIGRlZmlu
ZSBncnViX2xlX3RvX2NwdTMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyBkZWZpbmUgZ3J1
Yl9sZV90b19jcHU2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfY3B1
X3RvX2JlMTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2Jl
MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2JlNjQoeCkJ
Z3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MTYoeCkJZ3J1Yl9z
d2FwX2J5dGVzMTYoeCkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MzIoeCkJZ3J1Yl9zd2FwX2J5
dGVzMzIoeCkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQo
eCkKKyMgaWZkZWYgR1JVQl9UQVJHRVRfV09SRFNfQklHRU5ESUFOCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDE2KHgpCWdydWJfc3dhcF9ieXRlczE2KHgpCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDMyKHgpCWdydWJfc3dhcF9ieXRlczMyKHgpCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDY0KHgpCWdydWJfc3dhcF9ieXRlczY0KHgpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDE2KHgpCWdydWJfc3dhcF9ieXRlczE2KHgpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDMyKHgpCWdydWJfc3dhcF9ieXRlczMyKHgpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDY0KHgpCWdydWJfc3dhcF9ieXRlczY0KHgpCisjIGVsc2UgLyogISBHUlVC
X1RBUkdFVF9XT1JEU19CSUdFTkRJQU4gKi8KKyMgIGRlZmluZSBncnViX3RhcmdldF90b19ob3N0
MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl90YXJnZXRfdG9faG9z
dDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3Q2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9fdGFy
Z2V0MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl9ob3N0X3RvX3Rh
cmdldDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190
YXJnZXQ2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgZW5kaWYKKyNlbmRpZiAvKiAhIFdP
UkRTX0JJR0VORElBTiAqLworCisjaWYgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUCA9PSA4Cisj
ICBkZWZpbmUgZ3J1Yl9ob3N0X3RvX3RhcmdldF9hZGRyKHgpIGdydWJfaG9zdF90b190YXJnZXQ2
NCh4KQorI2Vsc2UKKyMgIGRlZmluZSBncnViX2hvc3RfdG9fdGFyZ2V0X2FkZHIoeCkgZ3J1Yl9o
b3N0X3RvX3RhcmdldDMyKHgpCisjZW5kaWYKKworCisKKworCisKKworI2VuZGlmIC8qICEgR1JV
Ql9UWVBFU19IRUFERVIgKi8KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjIt
YS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWNvbmZpZy5oIHhlbi00LjEuMi1iL3Rvb2xzL2lv
ZW11LXFlbXUteGVuL2dydWItY29uZmlnLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFl
bXUteGVuL2dydWItY29uZmlnLmgJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAwMDAgKzA3MDAK
KysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItY29uZmlnLmgJMjAxMi0x
Mi0yOCAxNjowMjo0MS4wMDY5MzI0MTcgKzA4MDAKQEAgLTAsMCArMSwyNTEgQEAKKy8qIGNvbmZp
Zy5oLiAgR2VuZXJhdGVkIGZyb20gY29uZmlnLmguaW4gYnkgY29uZmlndXJlLiAgKi8KKy8qIGNv
bmZpZy5oLmluLiAgR2VuZXJhdGVkIGZyb20gY29uZmlndXJlLmFjIGJ5IGF1dG9oZWFkZXIuICAq
LworCisvKiBEZWZpbmUgaXQgaWYgR0FTIHJlcXVpcmVzIHRoYXQgYWJzb2x1dGUgaW5kaXJlY3Qg
Y2FsbHMvanVtcHMgYXJlIG5vdAorICAgcHJlZml4ZWQgd2l0aCBhbiBhc3RlcmlzayAqLworLyog
I3VuZGVmIEFCU09MVVRFX1dJVEhPVVRfQVNURVJJU0sgKi8KKworLyogRGVmaW5lIGl0IHRvIFwi
YWRkcjMyXCIgb3IgXCJhZGRyMzI7XCIgdG8gbWFrZSBHQVMgaGFwcHkgKi8KKyNkZWZpbmUgQURE
UjMyIGFkZHIzMgorCisvKiBEZWZpbmUgaXQgdG8gXCJkYXRhMzJcIiBvciBcImRhdGEzMjtcIiB0
byBtYWtlIEdBUyBoYXBweSAqLworI2RlZmluZSBEQVRBMzIgZGF0YTMyCisKKy8qIERlZmluZSB0
byAxIGlmIHRyYW5zbGF0aW9uIG9mIHByb2dyYW0gbWVzc2FnZXMgdG8gdGhlIHVzZXIncyBuYXRp
dmUKKyAgIGxhbmd1YWdlIGlzIHJlcXVlc3RlZC4gKi8KKyNkZWZpbmUgRU5BQkxFX05MUyAxCisK
Ky8qIERlZmluZSBpZiBDIHN5bWJvbHMgZ2V0IGFuIHVuZGVyc2NvcmUgYWZ0ZXIgY29tcGlsYXRp
b24gKi8KKy8qICN1bmRlZiBIQVZFX0FTTV9VU0NPUkUgKi8KKworLyogRGVmaW5lIHRvIDEgaWYg
eW91IGhhdmUgdGhlIGBhc3ByaW50ZicgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfQVNQUklO
VEYgMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgTWFjT1MgWCBmdW5jdGlvbiBD
RkxvY2FsZUNvcHlDdXJyZW50IGluIHRoZQorICAgQ29yZUZvdW5kYXRpb24gZnJhbWV3b3JrLiAq
LworLyogI3VuZGVmIEhBVkVfQ0ZMT0NBTEVDT1BZQ1VSUkVOVCAqLworCisvKiBEZWZpbmUgdG8g
MSBpZiB5b3UgaGF2ZSB0aGUgTWFjT1MgWCBmdW5jdGlvbiBDRlByZWZlcmVuY2VzQ29weUFwcFZh
bHVlIGluCisgICB0aGUgQ29yZUZvdW5kYXRpb24gZnJhbWV3b3JrLiAqLworLyogI3VuZGVmIEhB
VkVfQ0ZQUkVGRVJFTkNFU0NPUFlBUFBWQUxVRSAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3Ug
aGF2ZSB0aGUgPGN1cnNlcy5oPiBoZWFkZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX0NVUlNF
U19IICovCisKKy8qIERlZmluZSBpZiB0aGUgR05VIGRjZ2V0dGV4dCgpIGZ1bmN0aW9uIGlzIGFs
cmVhZHkgcHJlc2VudCBvciBwcmVpbnN0YWxsZWQuCisgICAqLworI2RlZmluZSBIQVZFX0RDR0VU
VEVYVCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8ZGlyZW50Lmg+IGhlYWRl
ciBmaWxlLCBhbmQgaXQgZGVmaW5lcyBgRElSJy4KKyAgICovCisjZGVmaW5lIEhBVkVfRElSRU5U
X0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPGZ0MmJ1aWxkLmg+IGhlYWRl
ciBmaWxlLiAqLworI2RlZmluZSBIQVZFX0ZUMkJVSUxEX0ggMQorCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgYGdldGdpZCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfR0VUR0lE
IDEKKworLyogRGVmaW5lIGlmIGdldHJhd3BhcnRpdGlvbigpIGluIC1sdXRpbCBjYW4gYmUgdXNl
ZCAqLworLyogI3VuZGVmIEhBVkVfR0VUUkFXUEFSVElUSU9OICovCisKKy8qIERlZmluZSBpZiB0
aGUgR05VIGdldHRleHQoKSBmdW5jdGlvbiBpcyBhbHJlYWR5IHByZXNlbnQgb3IgcHJlaW5zdGFs
bGVkLiAqLworI2RlZmluZSBIQVZFX0dFVFRFWFQgMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3Ug
aGF2ZSB0aGUgYGdldHVpZCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfR0VUVUlEIDEKKwor
LyogRGVmaW5lIGlmIHlvdSBoYXZlIHRoZSBpY29udigpIGZ1bmN0aW9uIGFuZCBpdCB3b3Jrcy4g
Ki8KKy8qICN1bmRlZiBIQVZFX0lDT05WICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZl
IHRoZSA8aW50dHlwZXMuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5lIEhBVkVfSU5UVFlQRVNf
SCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8bGltaXRzLmg+IGhlYWRlciBm
aWxlLiAqLworI2RlZmluZSBIQVZFX0xJTUlUU19IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91
IGhhdmUgdGhlIGBsc3RhdCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfTFNUQVQgMQorCisv
KiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPG1hbGxvYy5oPiBoZWFkZXIgZmlsZS4gKi8K
KyNkZWZpbmUgSEFWRV9NQUxMT0NfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRo
ZSBgbWVtYWxpZ24nIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX01FTUFMSUdOIDEKKworLyog
RGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBtZW1tb3ZlJyBmdW5jdGlvbi4gKi8KKyNkZWZp
bmUgSEFWRV9NRU1NT1ZFIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxtZW1v
cnkuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5lIEhBVkVfTUVNT1JZX0ggMQorCisvKiBEZWZp
bmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPG5jdXJzZXMvY3Vyc2VzLmg+IGhlYWRlciBmaWxlLiAq
LworLyogI3VuZGVmIEhBVkVfTkNVUlNFU19DVVJTRVNfSCAqLworCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgPG5jdXJzZXMuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYgSEFW
RV9OQ1VSU0VTX0ggKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxuZGlyLmg+
IGhlYWRlciBmaWxlLCBhbmQgaXQgZGVmaW5lcyBgRElSJy4gKi8KKy8qICN1bmRlZiBIQVZFX05E
SVJfSCAqLworCisvKiBEZWZpbmUgaWYgb3BlbmRpc2soKSBpbiAtbHV0aWwgY2FuIGJlIHVzZWQg
Ki8KKy8qICN1bmRlZiBIQVZFX09QRU5ESVNLICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBo
YXZlIHRoZSA8cGNpL3BjaS5oPiBoZWFkZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX1BDSV9Q
Q0lfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgYHBvc2l4X21lbWFsaWdu
JyBmdW5jdGlvbi4gKi8KKyNkZWZpbmUgSEFWRV9QT1NJWF9NRU1BTElHTiAxCisKKy8qIERlZmlu
ZSBpZiByZXR1cm5zX3R3aWNlIGF0dHJpYnV0ZSBpcyBzdXBwb3J0ZWQgKi8KKy8qICN1bmRlZiBI
QVZFX1JFVFVSTlNfVFdJQ0UgKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBz
YnJrJyBmdW5jdGlvbi4gKi8KKyNkZWZpbmUgSEFWRV9TQlJLIDEKKworLyogRGVmaW5lIHRvIDEg
aWYgeW91IGhhdmUgdGhlIDxTREwvU0RMLmg+IGhlYWRlciBmaWxlLiAqLworLyogI3VuZGVmIEhB
VkVfU0RMX1NETF9IICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RkaW50
Lmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1NURElOVF9IIDEKKworLyogRGVmaW5l
IHRvIDEgaWYgeW91IGhhdmUgdGhlIDxzdGRsaWIuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5l
IEhBVkVfU1RETElCX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgYHN0cmR1
cCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfU1RSRFVQIDEKKworLyogRGVmaW5lIHRvIDEg
aWYgeW91IGhhdmUgdGhlIDxzdHJpbmdzLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZF
X1NUUklOR1NfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RyaW5nLmg+
IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1NUUklOR19IIDEKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvZGlyLmg+IGhlYWRlciBmaWxlLCBhbmQgaXQgZGVmaW5l
cyBgRElSJy4KKyAgICovCisvKiAjdW5kZWYgSEFWRV9TWVNfRElSX0ggKi8KKworLyogRGVmaW5l
IHRvIDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvZmNudGwuaD4gaGVhZGVyIGZpbGUuICovCisjZGVm
aW5lIEhBVkVfU1lTX0ZDTlRMX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUg
PHN5cy9ta2Rldi5oPiBoZWFkZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX1NZU19NS0RFVl9I
ICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL25kaXIuaD4gaGVhZGVy
IGZpbGUsIGFuZCBpdCBkZWZpbmVzIGBESVInLgorICAgKi8KKy8qICN1bmRlZiBIQVZFX1NZU19O
RElSX0ggKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvc3RhdC5oPiBo
ZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9TWVNfU1RBVF9IIDEKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvc3lzbWFjcm9zLmg+IGhlYWRlciBmaWxlLiAqLworI2Rl
ZmluZSBIQVZFX1NZU19TWVNNQUNST1NfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZl
IHRoZSA8c3lzL3R5cGVzLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1NZU19UWVBF
U19IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDx0ZXJtaW9zLmg+IGhlYWRl
ciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1RFUk1JT1NfSCAxCisKKy8qIERlZmluZSB0byAxIGlm
IHlvdSBoYXZlIHRoZSA8dW5pc3RkLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1VO
SVNURF9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDx1c2IuaD4gaGVhZGVy
IGZpbGUuICovCisvKiAjdW5kZWYgSEFWRV9VU0JfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5
b3UgaGF2ZSB0aGUgYHZhc3ByaW50ZicgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfVkFTUFJJ
TlRGIDEKKworLyogRGVmaW5lIHRvIDEgaWYgYG1ham9yJywgYG1pbm9yJywgYW5kIGBtYWtlZGV2
JyBhcmUgZGVjbGFyZWQgaW4gPG1rZGV2Lmg+LgorICAgKi8KKy8qICN1bmRlZiBNQUpPUl9JTl9N
S0RFViAqLworCisvKiBEZWZpbmUgdG8gMSBpZiBgbWFqb3InLCBgbWlub3InLCBhbmQgYG1ha2Vk
ZXYnIGFyZSBkZWNsYXJlZCBpbgorICAgPHN5c21hY3Jvcy5oPi4gKi8KKy8qICN1bmRlZiBNQUpP
Ul9JTl9TWVNNQUNST1MgKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGVuYWJsZSBtZW1vcnkg
bWFuYWdlciBkZWJ1Z2dpbmcuICovCisvKiAjdW5kZWYgTU1fREVCVUcgKi8KKworLyogRGVmaW5l
IHRvIDEgaWYgR0NDIGdlbmVyYXRlcyBjYWxscyB0byBfX3JlZ2lzdGVyX2ZyYW1lX2luZm8oKSAq
LworLyogI3VuZGVmIE5FRURfUkVHSVNURVJfRlJBTUVfSU5GTyAqLworCisvKiBOYW1lIG9mIHBh
Y2thZ2UgKi8KKyNkZWZpbmUgUEFDS0FHRSAiYnVyZyIKKworLyogRGVmaW5lIHRvIHRoZSBhZGRy
ZXNzIHdoZXJlIGJ1ZyByZXBvcnRzIGZvciB0aGlzIHBhY2thZ2Ugc2hvdWxkIGJlIHNlbnQuICov
CisjZGVmaW5lIFBBQ0tBR0VfQlVHUkVQT1JUICJiZWFuMTIzY2hAZ21haWwuY29tIgorCisvKiBE
ZWZpbmUgdG8gdGhlIGZ1bGwgbmFtZSBvZiB0aGlzIHBhY2thZ2UuICovCisjZGVmaW5lIFBBQ0tB
R0VfTkFNRSAiQlVSRyIKKworLyogRGVmaW5lIHRvIHRoZSBmdWxsIG5hbWUgYW5kIHZlcnNpb24g
b2YgdGhpcyBwYWNrYWdlLiAqLworI2RlZmluZSBQQUNLQUdFX1NUUklORyAiQlVSRyAxLjk4Igor
CisvKiBEZWZpbmUgdG8gdGhlIG9uZSBzeW1ib2wgc2hvcnQgbmFtZSBvZiB0aGlzIHBhY2thZ2Uu
ICovCisjZGVmaW5lIFBBQ0tBR0VfVEFSTkFNRSAiYnVyZyIKKworLyogRGVmaW5lIHRvIHRoZSB2
ZXJzaW9uIG9mIHRoaXMgcGFja2FnZS4gKi8KKyNkZWZpbmUgUEFDS0FHRV9WRVJTSU9OICIxLjk4
IgorCisvKiBUaGUgc2l6ZSBvZiBgbG9uZycsIGFzIGNvbXB1dGVkIGJ5IHNpemVvZi4gKi8KKyNk
ZWZpbmUgU0laRU9GX0xPTkcgOAorCisvKiBUaGUgc2l6ZSBvZiBgdm9pZCAqJywgYXMgY29tcHV0
ZWQgYnkgc2l6ZW9mLiAqLworI2RlZmluZSBTSVpFT0ZfVk9JRF9QIDgKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIEFOU0kgQyBoZWFkZXIgZmlsZXMuICovCisjZGVmaW5lIFNURENf
SEVBREVSUyAxCisKKy8qIFZlcnNpb24gbnVtYmVyIG9mIHBhY2thZ2UgKi8KKyNkZWZpbmUgVkVS
U0lPTiAiMS45OCIKKworLyogRGVmaW5lIFdPUkRTX0JJR0VORElBTiB0byAxIGlmIHlvdXIgcHJv
Y2Vzc29yIHN0b3JlcyB3b3JkcyB3aXRoIHRoZSBtb3N0CisgICBzaWduaWZpY2FudCBieXRlIGZp
cnN0IChsaWtlIE1vdG9yb2xhIGFuZCBTUEFSQywgdW5saWtlIEludGVsIGFuZCBWQVgpLiAqLwor
I2lmIGRlZmluZWQgX19CSUdfRU5ESUFOX18KKyMgZGVmaW5lIFdPUkRTX0JJR0VORElBTiAxCisj
ZWxpZiAhIGRlZmluZWQgX19MSVRUTEVfRU5ESUFOX18KKy8qICMgdW5kZWYgV09SRFNfQklHRU5E
SUFOICovCisjZW5kaWYKKworLyogRGVmaW5lIHRvIDEgaWYgYGxleCcgZGVjbGFyZXMgYHl5dGV4
dCcgYXMgYSBgY2hhciAqJyBieSBkZWZhdWx0LCBub3QgYQorICAgYGNoYXJbXScuICovCisjZGVm
aW5lIFlZVEVYVF9QT0lOVEVSIDEKKworLyogTnVtYmVyIG9mIGJpdHMgaW4gYSBmaWxlIG9mZnNl
dCwgb24gaG9zdHMgd2hlcmUgdGhpcyBpcyBzZXR0YWJsZS4gKi8KKy8qICN1bmRlZiBfRklMRV9P
RkZTRVRfQklUUyAqLworCisvKiBEZWZpbmUgZm9yIGxhcmdlIGZpbGVzLCBvbiBBSVgtc3R5bGUg
aG9zdHMuICovCisvKiAjdW5kZWYgX0xBUkdFX0ZJTEVTICovCisKKy8qIERlZmluZSB0byAxIGlm
IG9uIE1JTklYLiAqLworLyogI3VuZGVmIF9NSU5JWCAqLworCisvKiBEZWZpbmUgdG8gMiBpZiB0
aGUgc3lzdGVtIGRvZXMgbm90IHByb3ZpZGUgUE9TSVguMSBmZWF0dXJlcyBleGNlcHQgd2l0aAor
ICAgdGhpcyBkZWZpbmVkLiAqLworLyogI3VuZGVmIF9QT1NJWF8xX1NPVVJDRSAqLworCisvKiBE
ZWZpbmUgdG8gMSBpZiB5b3UgbmVlZCB0byBpbiBvcmRlciBmb3IgYHN0YXQnIGFuZCBvdGhlciB0
aGluZ3MgdG8gd29yay4gKi8KKy8qICN1bmRlZiBfUE9TSVhfU09VUkNFICovCisKKy8qIEVuYWJs
ZSBleHRlbnNpb25zIG9uIEFJWCAzLCBJbnRlcml4LiAgKi8KKyNpZm5kZWYgX0FMTF9TT1VSQ0UK
KyMgZGVmaW5lIF9BTExfU09VUkNFIDEKKyNlbmRpZgorLyogRW5hYmxlIEdOVSBleHRlbnNpb25z
IG9uIHN5c3RlbXMgdGhhdCBoYXZlIHRoZW0uICAqLworI2lmbmRlZiBfR05VX1NPVVJDRQorIyBk
ZWZpbmUgX0dOVV9TT1VSQ0UgMQorI2VuZGlmCisvKiBFbmFibGUgdGhyZWFkaW5nIGV4dGVuc2lv
bnMgb24gU29sYXJpcy4gICovCisjaWZuZGVmIF9QT1NJWF9QVEhSRUFEX1NFTUFOVElDUworIyBk
ZWZpbmUgX1BPU0lYX1BUSFJFQURfU0VNQU5USUNTIDEKKyNlbmRpZgorLyogRW5hYmxlIGV4dGVu
c2lvbnMgb24gSFAgTm9uU3RvcC4gICovCisjaWZuZGVmIF9UQU5ERU1fU09VUkNFCisjIGRlZmlu
ZSBfVEFOREVNX1NPVVJDRSAxCisjZW5kaWYKKy8qIEVuYWJsZSBnZW5lcmFsIGV4dGVuc2lvbnMg
b24gU29sYXJpcy4gICovCisjaWZuZGVmIF9fRVhURU5TSU9OU19fCisjIGRlZmluZSBfX0VYVEVO
U0lPTlNfXyAxCisjZW5kaWYKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEu
Mi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWJfZXJyLmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZ3J1Yl9lcnIuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vZ3J1Yl9lcnIuYwkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVu
LTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yl9lcnIuYwkyMDEyLTEyLTI4IDE2OjAy
OjQxLjAwNzczNDE2NCArMDgwMApAQCAtMCwwICsxLDE4NiBAQAorLyogZXJyLmMgLSBlcnJvciBo
YW5kbGluZyByb3V0aW5lcyAqLworLyoKKyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290
bG9hZGVyCisgKiAgQ29weXJpZ2h0IChDKSAyMDAyLDIwMDUsMjAwNywyMDA4ICBGcmVlIFNvZnR3
YXJlIEZvdW5kYXRpb24sIEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVlIHNvZnR3YXJlOiB5b3Ug
Y2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1z
IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0
aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNl
bnNlLCBvcgorICogIChhdCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICog
IEdSVUIgaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwK
KyAqICBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3
YXJyYW50eSBvZgorICogIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VM
QVIgUFVSUE9TRS4gIFNlZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3Ig
bW9yZSBkZXRhaWxzLgorICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9m
IHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZQorICogIGFsb25nIHdpdGggR1JVQi4gIElm
IG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4KKyAqLworCisjaW5jbHVk
ZSAiZ3J1Yl9lcnIuaCIKKyNpbmNsdWRlICJtaXNjLmgiCisjaW5jbHVkZSA8c3RkYXJnLmg+Cisj
aW5jbHVkZSA8c3RkaW8uaD4KKyNpbmNsdWRlIDxzdGRsaWIuaD4KKworI2RlZmluZSBHUlVCX01B
WF9FUlJNU0cJCTI1NgorI2RlZmluZSBHUlVCX0VSUk9SX1NUQUNLX1NJWkUJMTAKKworZ3J1Yl9l
cnJfdCBncnViX2Vycm5vOworY2hhciBncnViX2Vycm1zZ1tHUlVCX01BWF9FUlJNU0ddOworCitz
dGF0aWMgc3RydWN0Cit7CisgIGdydWJfZXJyX3QgZXJybm87CisgIGNoYXIgZXJybXNnW0dSVUJf
TUFYX0VSUk1TR107Cit9IGdydWJfZXJyb3Jfc3RhY2tfaXRlbXNbR1JVQl9FUlJPUl9TVEFDS19T
SVpFXTsKKworc3RhdGljIGludCBncnViX2Vycm9yX3N0YWNrX3BvczsKK3N0YXRpYyBpbnQgZ3J1
Yl9lcnJvcl9zdGFja19hc3NlcnQ7CisKKworCitzdGF0aWMgaW50CitncnViX3ZzbnByaW50ZiAo
Y2hhciAqc3RyLCBncnViX3NpemVfdCBuLCBjb25zdCBjaGFyICpmbXQsIHZhX2xpc3QgYXApCit7
CisgIGdydWJfc2l6ZV90IHJldDsKKworICBpZiAoIW4pCisgICAgcmV0dXJuIDA7CisKKyAgCisg
IHJldCA9IHZzbnByaW50ZihzdHIsIG4sIGZtdCwgYXApOworICBwcmludGYoIiVzXG4iLCBzdHIp
OworICByZXR1cm4gcmV0IDwgbiA/IHJldCA6IG47Cit9CisKKworCitzdGF0aWMgaW50CitncnVi
X3ZwcmludGYgKGNvbnN0IGNoYXIgKmZtdCwgdmFfbGlzdCBhcmdzKQoreworICBpbnQgcmV0Owor
CisgIHJldCA9IGdydWJfdnNucHJpbnRmIChncnViX2Vycm1zZywgc2l6ZW9mIChncnViX2Vycm1z
ZyksIGZtdCwgYXJncyk7CisKKyAgcmV0dXJuIHJldDsKK30KKworaW50CitncnViX2Vycl9wcmlu
dGYgKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQoreworICB2YV9saXN0IGFwOworICBpbnQgcmV0Owor
CisgIHZhX3N0YXJ0IChhcCwgZm10KTsKKyAgcmV0ID0gZ3J1Yl92cHJpbnRmIChmbXQsIGFwKTsK
KyAgdmFfZW5kIChhcCk7CisKKyAgcmV0dXJuIHJldDsKK30KKworCitncnViX2Vycl90CitncnVi
X2Vycm9yIChncnViX2Vycl90IG4sIGNvbnN0IGNoYXIgKmZtdCwgLi4uKQoreworICB2YV9saXN0
IGFwOworCisgIGdydWJfZXJybm8gPSBuOworICB2YV9zdGFydCAoYXAsIGZtdCk7CisgIGdydWJf
dnNucHJpbnRmIChncnViX2Vycm1zZywgc2l6ZW9mIChncnViX2Vycm1zZyksIGZtdCwgYXApOwor
ICB2YV9lbmQgKGFwKTsKKyAgCisgIHJldHVybiBuOworfQorCit2b2lkCitncnViX2ZhdGFsIChj
b25zdCBjaGFyICpmbXQsIC4uLikKK3sKKyAgdmFfbGlzdCBhcDsKKworICB2YV9zdGFydCAoYXAs
IGZtdCk7CisgIGdydWJfdnByaW50ZiAoXyhmbXQpLCBhcCk7CisgIHZhX2VuZCAoYXApOworCisg
IGV4aXQoMSk7Cit9CisKK3ZvaWQKK2dydWJfZXJyb3JfcHVzaCAodm9pZCkKK3sKKyAgLyogT25s
eSBhZGQgaXRlbXMgdG8gc3RhY2ssIGlmIHRoZXJlIGlzIGVub3VnaCByb29tLiAgKi8KKyAgaWYg
KGdydWJfZXJyb3Jfc3RhY2tfcG9zIDwgR1JVQl9FUlJPUl9TVEFDS19TSVpFKQorICAgIHsKKyAg
ICAgIC8qIENvcHkgYWN0aXZlIGVycm9yIG1lc3NhZ2UgdG8gc3RhY2suICAqLworICAgICAgZ3J1
Yl9lcnJvcl9zdGFja19pdGVtc1tncnViX2Vycm9yX3N0YWNrX3Bvc10uZXJybm8gPSBncnViX2Vy
cm5vOworICAgICAgZ3J1Yl9tZW1jcHkgKGdydWJfZXJyb3Jfc3RhY2tfaXRlbXNbZ3J1Yl9lcnJv
cl9zdGFja19wb3NdLmVycm1zZywKKyAgICAgICAgICAgICAgICAgICBncnViX2Vycm1zZywKKyAg
ICAgICAgICAgICAgICAgICBzaXplb2YgKGdydWJfZXJybXNnKSk7CisKKyAgICAgIC8qIEFkdmFu
Y2UgdG8gbmV4dCBlcnJvciBzdGFjayBwb3NpdGlvbi4gICovCisgICAgICBncnViX2Vycm9yX3N0
YWNrX3BvcysrOworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIC8qIFRoZXJlIGlzIG5vIHJv
b20gZm9yIG5ldyBlcnJvciBtZXNzYWdlLiBEaXNjYXJkIG5ldyBlcnJvciBtZXNzYWdlCisgICAg
ICAgICBhbmQgbWFyayBlcnJvciBzdGFjayBhc3NlcnRpb24gZmxhZy4gICovCisgICAgICBncnVi
X2Vycm9yX3N0YWNrX2Fzc2VydCA9IDE7CisgICAgfQorCisgIC8qIEFsbG93IGZ1cnRoZXIgb3Bl
cmF0aW9uIG9mIG90aGVyIGNvbXBvbmVudHMgYnkgcmVzZXR0aW5nCisgICAgIGFjdGl2ZSBlcnJu
byB0byBHUlVCX0VSUl9OT05FLiAgKi8KKyAgZ3J1Yl9lcnJubyA9IEdSVUJfRVJSX05PTkU7Cit9
CisKK2ludAorZ3J1Yl9lcnJvcl9wb3AgKHZvaWQpCit7CisgIGlmIChncnViX2Vycm9yX3N0YWNr
X3BvcyA+IDApCisgICAgeworICAgICAgLyogUG9wIGVycm9yIG1lc3NhZ2UgZnJvbSBlcnJvciBz
dGFjayB0byBjdXJyZW50IGFjdGl2ZSBlcnJvci4gICovCisgICAgICBncnViX2Vycm9yX3N0YWNr
X3Bvcy0tOworCisgICAgICBncnViX2Vycm5vID0gZ3J1Yl9lcnJvcl9zdGFja19pdGVtc1tncnVi
X2Vycm9yX3N0YWNrX3Bvc10uZXJybm87CisgICAgICBncnViX21lbWNweSAoZ3J1Yl9lcnJtc2cs
CisgICAgICAgICAgICAgICAgICAgZ3J1Yl9lcnJvcl9zdGFja19pdGVtc1tncnViX2Vycm9yX3N0
YWNrX3Bvc10uZXJybXNnLAorICAgICAgICAgICAgICAgICAgIHNpemVvZiAoZ3J1Yl9lcnJtc2cp
KTsKKworICAgICAgcmV0dXJuIDE7CisgICAgfQorICBlbHNlCisgICAgeworICAgICAgLyogVGhl
cmUgaXMgbm8gbW9yZSBpdGVtcyBvbiBlcnJvciBzdGFjaywgcmVzZXQgdG8gbm8gZXJyb3Igc3Rh
dGUuICAqLworICAgICAgZ3J1Yl9lcnJubyA9IEdSVUJfRVJSX05PTkU7CisKKyAgICAgIHJldHVy
biAwOworICAgIH0KK30KKwordm9pZAorZ3J1Yl9wcmludF9lcnJvciAodm9pZCkKK3sKKyAgLyog
UHJpbnQgZXJyb3IgbWVzc2FnZXMgaW4gcmV2ZXJzZSBvcmRlci4gRmlyc3QgcHJpbnQgYWN0aXZl
IGVycm9yIG1lc3NhZ2UKKyAgICAgYW5kIHRoZW4gZW1wdHkgZXJyb3Igc3RhY2suICAqLworICBk
bworICAgIHsKKyAgICAgIGlmIChncnViX2Vycm5vICE9IEdSVUJfRVJSX05PTkUpCisgICAgICAg
IGdydWJfZXJyX3ByaW50ZiAoImVycm9yOiAlcy5cbiIsIGdydWJfZXJybXNnKTsKKyAgICB9Cisg
IHdoaWxlIChncnViX2Vycm9yX3BvcCAoKSk7CisKKyAgLyogSWYgdGhlcmUgd2FzIGFuIGFzc2Vy
dCB3aGlsZSB1c2luZyBlcnJvciBzdGFjaywgcmVwb3J0IGFib3V0IGl0LiAgKi8KKyAgaWYgKGdy
dWJfZXJyb3Jfc3RhY2tfYXNzZXJ0KQorICAgIHsKKyAgICAgIGdydWJfZXJyX3ByaW50ZiAoImFz
c2VydDogZXJyb3Igc3RhY2sgb3ZlcmZsb3cgZGV0ZWN0ZWQhXG4iKTsKKyAgICAgIGdydWJfZXJy
b3Jfc3RhY2tfYXNzZXJ0ID0gMDsKKyAgICB9Cit9CisKKworaW50IHRlc3RfZ3J1Yl9lcnIoKQor
eworICBncnViX2Vycm9yKDIyMiwgInRlc3QgJXNcbiIsICJncnViX2Vycm9yIik7CisgIGdydWJf
ZXJyX3ByaW50ZigidGVzdCAlc1xuIiwgImdydWJfZXJyX3ByaW50ZiIpOworICBncnViX2ZhdGFs
KCJ0ZXN0ICVzXG4iLCAiZ3J1Yl9mYXRhbCIpOworfQorCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJw
TiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yl9lcnIuaCB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViX2Vyci5oCi0tLSB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViX2Vyci5oCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAw
ICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViX2Vyci5oCTIw
MTItMTItMjggMTY6MDI6NDEuMDA3NzM0MTY0ICswODAwCkBAIC0wLDAgKzEsODEgQEAKKy8qIGVy
ci5oIC0gZXJyb3IgbnVtYmVycyBhbmQgcHJvdG90eXBlcyAqLworLyoKKyAqICBHUlVCICAtLSAg
R1JhbmQgVW5pZmllZCBCb290bG9hZGVyCisgKiAgQ29weXJpZ2h0IChDKSAyMDAyLDIwMDUsMjAw
NywyMDA4IEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgSW5jLgorICoKKyAqICBHUlVCIGlzIGZy
ZWUgc29mdHdhcmU6IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkKKyAqICBp
dCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1
Ymxpc2hlZCBieQorICogIHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIGVpdGhlciB2ZXJz
aW9uIDMgb2YgdGhlIExpY2Vuc2UsIG9yCisgKiAgKGF0IHlvdXIgb3B0aW9uKSBhbnkgbGF0ZXIg
dmVyc2lvbi4KKyAqCisgKiAgR1JVQiBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0
IHdpbGwgYmUgdXNlZnVsLAorICogIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBl
dmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5F
U1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1
YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSBy
ZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxv
bmcgd2l0aCBHUlVCLiAgSWYgbm90LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+
LgorICovCisKKyNpZm5kZWYgR1JVQl9FUlJfSEVBREVSCisjZGVmaW5lIEdSVUJfRVJSX0hFQURF
UgkxCisKKwordHlwZWRlZiBlbnVtCisgIHsKKyAgICBHUlVCX0VSUl9OT05FID0gMCwKKyAgICBH
UlVCX0VSUl9URVNUX0ZBSUxVUkUsCisgICAgR1JVQl9FUlJfQkFEX01PRFVMRSwKKyAgICBHUlVC
X0VSUl9PVVRfT0ZfTUVNT1JZLAorICAgIEdSVUJfRVJSX0JBRF9GSUxFX1RZUEUsCisgICAgR1JV
Ql9FUlJfRklMRV9OT1RfRk9VTkQsCisgICAgR1JVQl9FUlJfRklMRV9SRUFEX0VSUk9SLAorICAg
IEdSVUJfRVJSX0JBRF9GSUxFTkFNRSwKKyAgICBHUlVCX0VSUl9VTktOT1dOX0ZTLAorICAgIEdS
VUJfRVJSX0JBRF9GUywKKyAgICBHUlVCX0VSUl9CQURfTlVNQkVSLAorICAgIEdSVUJfRVJSX09V
VF9PRl9SQU5HRSwKKyAgICBHUlVCX0VSUl9VTktOT1dOX0RFVklDRSwKKyAgICBHUlVCX0VSUl9C
QURfREVWSUNFLAorICAgIEdSVUJfRVJSX1JFQURfRVJST1IsCisgICAgR1JVQl9FUlJfV1JJVEVf
RVJST1IsCisgICAgR1JVQl9FUlJfVU5LTk9XTl9DT01NQU5ELAorICAgIEdSVUJfRVJSX0lOVkFM
SURfQ09NTUFORCwKKyAgICBHUlVCX0VSUl9CQURfQVJHVU1FTlQsCisgICAgR1JVQl9FUlJfQkFE
X1BBUlRfVEFCTEUsCisgICAgR1JVQl9FUlJfVU5LTk9XTl9PUywKKyAgICBHUlVCX0VSUl9CQURf
T1MsCisgICAgR1JVQl9FUlJfTk9fS0VSTkVMLAorICAgIEdSVUJfRVJSX0JBRF9GT05ULAorICAg
IEdSVUJfRVJSX05PVF9JTVBMRU1FTlRFRF9ZRVQsCisgICAgR1JVQl9FUlJfU1lNTElOS19MT09Q
LAorICAgIEdSVUJfRVJSX0JBRF9HWklQX0RBVEEsCisgICAgR1JVQl9FUlJfTUVOVSwKKyAgICBH
UlVCX0VSUl9USU1FT1VULAorICAgIEdSVUJfRVJSX0lPLAorICAgIEdSVUJfRVJSX0FDQ0VTU19E
RU5JRUQsCisgICAgR1JVQl9FUlJfTUVOVV9FU0NBUEUsCisgICAgR1JVQl9FUlJfTk9UX0ZPVU5E
LAorICAgIEdSVUJfRVJSX1VOS05PV04KKworICB9CitncnViX2Vycl90OworCisKKyNpZm5kZWYg
XworIyBkZWZpbmUgXyhTdHJpbmcpIFN0cmluZworI2VuZGlmCisKK2V4dGVybiBncnViX2Vycl90
IGdydWJfZXJybm87CitleHRlcm4gY2hhciBncnViX2Vycm1zZ1tdOworCitncnViX2Vycl90IGdy
dWJfZXJyb3IgKGdydWJfZXJyX3QgbiwgY29uc3QgY2hhciAqZm10LCAuLi4pOwordm9pZCBncnVi
X2ZhdGFsIChjb25zdCBjaGFyICpmbXQsIC4uLikgX19hdHRyaWJ1dGVfXygobm9yZXR1cm4pKTsK
K3ZvaWQgZ3J1Yl9lcnJvcl9wdXNoICh2b2lkKTsKK2ludCBncnViX2Vycm9yX3BvcCAodm9pZCk7
Cit2b2lkIGdydWJfcHJpbnRfZXJyb3IgKHZvaWQpOworaW50IGdydWJfZXJyX3ByaW50ZiAoY29u
c3QgY2hhciAqZm10LCAuLi4pCisgICAgIF9fYXR0cmlidXRlX18gKChmb3JtYXQgKHByaW50Ziwg
MSwgMikpKTsKK2ludCB0ZXN0X2dydWJfZXJyKHZvaWQpOworCisjZW5kaWYgLyogISBHUlVCX0VS
Ul9IRUFERVIgKi8KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9jb25maWcuaCB4ZW4tNC4xLjItYi90b29scy9pb2Vt
dS1xZW11LXhlbi9ncnViLWZhdC9jb25maWcuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUt
cWVtdS14ZW4vZ3J1Yi1mYXQvY29uZmlnLmgJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAwMDAg
KzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItZmF0L2NvbmZp
Zy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDA4NjQwODM4ICswODAwCkBAIC0wLDAgKzEsMjUxIEBA
CisvKiBjb25maWcuaC4gIEdlbmVyYXRlZCBmcm9tIGNvbmZpZy5oLmluIGJ5IGNvbmZpZ3VyZS4g
ICovCisvKiBjb25maWcuaC5pbi4gIEdlbmVyYXRlZCBmcm9tIGNvbmZpZ3VyZS5hYyBieSBhdXRv
aGVhZGVyLiAgKi8KKworLyogRGVmaW5lIGl0IGlmIEdBUyByZXF1aXJlcyB0aGF0IGFic29sdXRl
IGluZGlyZWN0IGNhbGxzL2p1bXBzIGFyZSBub3QKKyAgIHByZWZpeGVkIHdpdGggYW4gYXN0ZXJp
c2sgKi8KKy8qICN1bmRlZiBBQlNPTFVURV9XSVRIT1VUX0FTVEVSSVNLICovCisKKy8qIERlZmlu
ZSBpdCB0byBcImFkZHIzMlwiIG9yIFwiYWRkcjMyO1wiIHRvIG1ha2UgR0FTIGhhcHB5ICovCisj
ZGVmaW5lIEFERFIzMiBhZGRyMzIKKworLyogRGVmaW5lIGl0IHRvIFwiZGF0YTMyXCIgb3IgXCJk
YXRhMzI7XCIgdG8gbWFrZSBHQVMgaGFwcHkgKi8KKyNkZWZpbmUgREFUQTMyIGRhdGEzMgorCisv
KiBEZWZpbmUgdG8gMSBpZiB0cmFuc2xhdGlvbiBvZiBwcm9ncmFtIG1lc3NhZ2VzIHRvIHRoZSB1
c2VyJ3MgbmF0aXZlCisgICBsYW5ndWFnZSBpcyByZXF1ZXN0ZWQuICovCisjZGVmaW5lIEVOQUJM
RV9OTFMgMQorCisvKiBEZWZpbmUgaWYgQyBzeW1ib2xzIGdldCBhbiB1bmRlcnNjb3JlIGFmdGVy
IGNvbXBpbGF0aW9uICovCisvKiAjdW5kZWYgSEFWRV9BU01fVVNDT1JFICovCisKKy8qIERlZmlu
ZSB0byAxIGlmIHlvdSBoYXZlIHRoZSBgYXNwcmludGYnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBI
QVZFX0FTUFJJTlRGIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIE1hY09TIFgg
ZnVuY3Rpb24gQ0ZMb2NhbGVDb3B5Q3VycmVudCBpbiB0aGUKKyAgIENvcmVGb3VuZGF0aW9uIGZy
YW1ld29yay4gKi8KKy8qICN1bmRlZiBIQVZFX0NGTE9DQUxFQ09QWUNVUlJFTlQgKi8KKworLyog
RGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIE1hY09TIFggZnVuY3Rpb24gQ0ZQcmVmZXJlbmNl
c0NvcHlBcHBWYWx1ZSBpbgorICAgdGhlIENvcmVGb3VuZGF0aW9uIGZyYW1ld29yay4gKi8KKy8q
ICN1bmRlZiBIQVZFX0NGUFJFRkVSRU5DRVNDT1BZQVBQVkFMVUUgKi8KKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIDxjdXJzZXMuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYg
SEFWRV9DVVJTRVNfSCAqLworCisvKiBEZWZpbmUgaWYgdGhlIEdOVSBkY2dldHRleHQoKSBmdW5j
dGlvbiBpcyBhbHJlYWR5IHByZXNlbnQgb3IgcHJlaW5zdGFsbGVkLgorICAgKi8KKyNkZWZpbmUg
SEFWRV9EQ0dFVFRFWFQgMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPGRpcmVu
dC5oPiBoZWFkZXIgZmlsZSwgYW5kIGl0IGRlZmluZXMgYERJUicuCisgICAqLworI2RlZmluZSBI
QVZFX0RJUkVOVF9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxmdDJidWls
ZC5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9GVDJCVUlMRF9IIDEKKworLyogRGVm
aW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBnZXRnaWQnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBI
QVZFX0dFVEdJRCAxCisKKy8qIERlZmluZSBpZiBnZXRyYXdwYXJ0aXRpb24oKSBpbiAtbHV0aWwg
Y2FuIGJlIHVzZWQgKi8KKy8qICN1bmRlZiBIQVZFX0dFVFJBV1BBUlRJVElPTiAqLworCisvKiBE
ZWZpbmUgaWYgdGhlIEdOVSBnZXR0ZXh0KCkgZnVuY3Rpb24gaXMgYWxyZWFkeSBwcmVzZW50IG9y
IHByZWluc3RhbGxlZC4gKi8KKyNkZWZpbmUgSEFWRV9HRVRURVhUIDEKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIGBnZXR1aWQnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX0dF
VFVJRCAxCisKKy8qIERlZmluZSBpZiB5b3UgaGF2ZSB0aGUgaWNvbnYoKSBmdW5jdGlvbiBhbmQg
aXQgd29ya3MuICovCisvKiAjdW5kZWYgSEFWRV9JQ09OViAqLworCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgPGludHR5cGVzLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZF
X0lOVFRZUEVTX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPGxpbWl0cy5o
PiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9MSU1JVFNfSCAxCisKKy8qIERlZmluZSB0
byAxIGlmIHlvdSBoYXZlIHRoZSBgbHN0YXQnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX0xT
VEFUIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxtYWxsb2MuaD4gaGVhZGVy
IGZpbGUuICovCisjZGVmaW5lIEhBVkVfTUFMTE9DX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5
b3UgaGF2ZSB0aGUgYG1lbWFsaWduJyBmdW5jdGlvbi4gKi8KKyNkZWZpbmUgSEFWRV9NRU1BTElH
TiAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSBgbWVtbW92ZScgZnVuY3Rpb24u
ICovCisjZGVmaW5lIEhBVkVfTUVNTU9WRSAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZl
IHRoZSA8bWVtb3J5Lmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX01FTU9SWV9IIDEK
KworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxuY3Vyc2VzL2N1cnNlcy5oPiBoZWFk
ZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX05DVVJTRVNfQ1VSU0VTX0ggKi8KKworLyogRGVm
aW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxuY3Vyc2VzLmg+IGhlYWRlciBmaWxlLiAqLworLyog
I3VuZGVmIEhBVkVfTkNVUlNFU19IICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRo
ZSA8bmRpci5oPiBoZWFkZXIgZmlsZSwgYW5kIGl0IGRlZmluZXMgYERJUicuICovCisvKiAjdW5k
ZWYgSEFWRV9ORElSX0ggKi8KKworLyogRGVmaW5lIGlmIG9wZW5kaXNrKCkgaW4gLWx1dGlsIGNh
biBiZSB1c2VkICovCisvKiAjdW5kZWYgSEFWRV9PUEVORElTSyAqLworCisvKiBEZWZpbmUgdG8g
MSBpZiB5b3UgaGF2ZSB0aGUgPHBjaS9wY2kuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYg
SEFWRV9QQ0lfUENJX0ggKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBwb3Np
eF9tZW1hbGlnbicgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfUE9TSVhfTUVNQUxJR04gMQor
CisvKiBEZWZpbmUgaWYgcmV0dXJuc190d2ljZSBhdHRyaWJ1dGUgaXMgc3VwcG9ydGVkICovCisv
KiAjdW5kZWYgSEFWRV9SRVRVUk5TX1RXSUNFICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBo
YXZlIHRoZSBgc2JyaycgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfU0JSSyAxCisKKy8qIERl
ZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8U0RML1NETC5oPiBoZWFkZXIgZmlsZS4gKi8KKy8q
ICN1bmRlZiBIQVZFX1NETF9TRExfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0
aGUgPHN0ZGludC5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9TVERJTlRfSCAxCisK
Ky8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RkbGliLmg+IGhlYWRlciBmaWxlLiAq
LworI2RlZmluZSBIQVZFX1NURExJQl9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUg
dGhlIGBzdHJkdXAnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX1NUUkRVUCAxCisKKy8qIERl
ZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RyaW5ncy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNk
ZWZpbmUgSEFWRV9TVFJJTkdTX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUg
PHN0cmluZy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9TVFJJTkdfSCAxCisKKy8q
IERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL2Rpci5oPiBoZWFkZXIgZmlsZSwgYW5k
IGl0IGRlZmluZXMgYERJUicuCisgICAqLworLyogI3VuZGVmIEhBVkVfU1lTX0RJUl9IICovCisK
Ky8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL2ZjbnRsLmg+IGhlYWRlciBmaWxl
LiAqLworI2RlZmluZSBIQVZFX1NZU19GQ05UTF9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91
IGhhdmUgdGhlIDxzeXMvbWtkZXYuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYgSEFWRV9T
WVNfTUtERVZfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPHN5cy9uZGly
Lmg+IGhlYWRlciBmaWxlLCBhbmQgaXQgZGVmaW5lcyBgRElSJy4KKyAgICovCisvKiAjdW5kZWYg
SEFWRV9TWVNfTkRJUl9IICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lz
L3N0YXQuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5lIEhBVkVfU1lTX1NUQVRfSCAxCisKKy8q
IERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL3N5c21hY3Jvcy5oPiBoZWFkZXIgZmls
ZS4gKi8KKyNkZWZpbmUgSEFWRV9TWVNfU1lTTUFDUk9TX0ggMQorCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgPHN5cy90eXBlcy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFW
RV9TWVNfVFlQRVNfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8dGVybWlv
cy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9URVJNSU9TX0ggMQorCisvKiBEZWZp
bmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPHVuaXN0ZC5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZp
bmUgSEFWRV9VTklTVERfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8dXNi
Lmg+IGhlYWRlciBmaWxlLiAqLworLyogI3VuZGVmIEhBVkVfVVNCX0ggKi8KKworLyogRGVmaW5l
IHRvIDEgaWYgeW91IGhhdmUgdGhlIGB2YXNwcmludGYnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBI
QVZFX1ZBU1BSSU5URiAxCisKKy8qIERlZmluZSB0byAxIGlmIGBtYWpvcicsIGBtaW5vcicsIGFu
ZCBgbWFrZWRldicgYXJlIGRlY2xhcmVkIGluIDxta2Rldi5oPi4KKyAgICovCisvKiAjdW5kZWYg
TUFKT1JfSU5fTUtERVYgKi8KKworLyogRGVmaW5lIHRvIDEgaWYgYG1ham9yJywgYG1pbm9yJywg
YW5kIGBtYWtlZGV2JyBhcmUgZGVjbGFyZWQgaW4KKyAgIDxzeXNtYWNyb3MuaD4uICovCisvKiAj
dW5kZWYgTUFKT1JfSU5fU1lTTUFDUk9TICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBlbmFi
bGUgbWVtb3J5IG1hbmFnZXIgZGVidWdnaW5nLiAqLworLyogI3VuZGVmIE1NX0RFQlVHICovCisK
Ky8qIERlZmluZSB0byAxIGlmIEdDQyBnZW5lcmF0ZXMgY2FsbHMgdG8gX19yZWdpc3Rlcl9mcmFt
ZV9pbmZvKCkgKi8KKy8qICN1bmRlZiBORUVEX1JFR0lTVEVSX0ZSQU1FX0lORk8gKi8KKworLyog
TmFtZSBvZiBwYWNrYWdlICovCisjZGVmaW5lIFBBQ0tBR0UgImJ1cmciCisKKy8qIERlZmluZSB0
byB0aGUgYWRkcmVzcyB3aGVyZSBidWcgcmVwb3J0cyBmb3IgdGhpcyBwYWNrYWdlIHNob3VsZCBi
ZSBzZW50LiAqLworI2RlZmluZSBQQUNLQUdFX0JVR1JFUE9SVCAiYmVhbjEyM2NoQGdtYWlsLmNv
bSIKKworLyogRGVmaW5lIHRvIHRoZSBmdWxsIG5hbWUgb2YgdGhpcyBwYWNrYWdlLiAqLworI2Rl
ZmluZSBQQUNLQUdFX05BTUUgIkJVUkciCisKKy8qIERlZmluZSB0byB0aGUgZnVsbCBuYW1lIGFu
ZCB2ZXJzaW9uIG9mIHRoaXMgcGFja2FnZS4gKi8KKyNkZWZpbmUgUEFDS0FHRV9TVFJJTkcgIkJV
UkcgMS45OCIKKworLyogRGVmaW5lIHRvIHRoZSBvbmUgc3ltYm9sIHNob3J0IG5hbWUgb2YgdGhp
cyBwYWNrYWdlLiAqLworI2RlZmluZSBQQUNLQUdFX1RBUk5BTUUgImJ1cmciCisKKy8qIERlZmlu
ZSB0byB0aGUgdmVyc2lvbiBvZiB0aGlzIHBhY2thZ2UuICovCisjZGVmaW5lIFBBQ0tBR0VfVkVS
U0lPTiAiMS45OCIKKworLyogVGhlIHNpemUgb2YgYGxvbmcnLCBhcyBjb21wdXRlZCBieSBzaXpl
b2YuICovCisjZGVmaW5lIFNJWkVPRl9MT05HIDgKKworLyogVGhlIHNpemUgb2YgYHZvaWQgKics
IGFzIGNvbXB1dGVkIGJ5IHNpemVvZi4gKi8KKyNkZWZpbmUgU0laRU9GX1ZPSURfUCA4CisKKy8q
IERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSBBTlNJIEMgaGVhZGVyIGZpbGVzLiAqLworI2Rl
ZmluZSBTVERDX0hFQURFUlMgMQorCisvKiBWZXJzaW9uIG51bWJlciBvZiBwYWNrYWdlICovCisj
ZGVmaW5lIFZFUlNJT04gIjEuOTgiCisKKy8qIERlZmluZSBXT1JEU19CSUdFTkRJQU4gdG8gMSBp
ZiB5b3VyIHByb2Nlc3NvciBzdG9yZXMgd29yZHMgd2l0aCB0aGUgbW9zdAorICAgc2lnbmlmaWNh
bnQgYnl0ZSBmaXJzdCAobGlrZSBNb3Rvcm9sYSBhbmQgU1BBUkMsIHVubGlrZSBJbnRlbCBhbmQg
VkFYKS4gKi8KKyNpZiBkZWZpbmVkIF9fQklHX0VORElBTl9fCisjIGRlZmluZSBXT1JEU19CSUdF
TkRJQU4gMQorI2VsaWYgISBkZWZpbmVkIF9fTElUVExFX0VORElBTl9fCisvKiAjIHVuZGVmIFdP
UkRTX0JJR0VORElBTiAqLworI2VuZGlmCisKKy8qIERlZmluZSB0byAxIGlmIGBsZXgnIGRlY2xh
cmVzIGB5eXRleHQnIGFzIGEgYGNoYXIgKicgYnkgZGVmYXVsdCwgbm90IGEKKyAgIGBjaGFyW10n
LiAqLworI2RlZmluZSBZWVRFWFRfUE9JTlRFUiAxCisKKy8qIE51bWJlciBvZiBiaXRzIGluIGEg
ZmlsZSBvZmZzZXQsIG9uIGhvc3RzIHdoZXJlIHRoaXMgaXMgc2V0dGFibGUuICovCisvKiAjdW5k
ZWYgX0ZJTEVfT0ZGU0VUX0JJVFMgKi8KKworLyogRGVmaW5lIGZvciBsYXJnZSBmaWxlcywgb24g
QUlYLXN0eWxlIGhvc3RzLiAqLworLyogI3VuZGVmIF9MQVJHRV9GSUxFUyAqLworCisvKiBEZWZp
bmUgdG8gMSBpZiBvbiBNSU5JWC4gKi8KKy8qICN1bmRlZiBfTUlOSVggKi8KKworLyogRGVmaW5l
IHRvIDIgaWYgdGhlIHN5c3RlbSBkb2VzIG5vdCBwcm92aWRlIFBPU0lYLjEgZmVhdHVyZXMgZXhj
ZXB0IHdpdGgKKyAgIHRoaXMgZGVmaW5lZC4gKi8KKy8qICN1bmRlZiBfUE9TSVhfMV9TT1VSQ0Ug
Ki8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IG5lZWQgdG8gaW4gb3JkZXIgZm9yIGBzdGF0JyBh
bmQgb3RoZXIgdGhpbmdzIHRvIHdvcmsuICovCisvKiAjdW5kZWYgX1BPU0lYX1NPVVJDRSAqLwor
CisvKiBFbmFibGUgZXh0ZW5zaW9ucyBvbiBBSVggMywgSW50ZXJpeC4gICovCisjaWZuZGVmIF9B
TExfU09VUkNFCisjIGRlZmluZSBfQUxMX1NPVVJDRSAxCisjZW5kaWYKKy8qIEVuYWJsZSBHTlUg
ZXh0ZW5zaW9ucyBvbiBzeXN0ZW1zIHRoYXQgaGF2ZSB0aGVtLiAgKi8KKyNpZm5kZWYgX0dOVV9T
T1VSQ0UKKyMgZGVmaW5lIF9HTlVfU09VUkNFIDEKKyNlbmRpZgorLyogRW5hYmxlIHRocmVhZGlu
ZyBleHRlbnNpb25zIG9uIFNvbGFyaXMuICAqLworI2lmbmRlZiBfUE9TSVhfUFRIUkVBRF9TRU1B
TlRJQ1MKKyMgZGVmaW5lIF9QT1NJWF9QVEhSRUFEX1NFTUFOVElDUyAxCisjZW5kaWYKKy8qIEVu
YWJsZSBleHRlbnNpb25zIG9uIEhQIE5vblN0b3AuICAqLworI2lmbmRlZiBfVEFOREVNX1NPVVJD
RQorIyBkZWZpbmUgX1RBTkRFTV9TT1VSQ0UgMQorI2VuZGlmCisvKiBFbmFibGUgZ2VuZXJhbCBl
eHRlbnNpb25zIG9uIFNvbGFyaXMuICAqLworI2lmbmRlZiBfX0VYVEVOU0lPTlNfXworIyBkZWZp
bmUgX19FWFRFTlNJT05TX18gMQorI2VuZGlmCisKZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1V
OCB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mYXQuYyB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mYXQuYwotLS0geGVuLTQuMS4yLWEv
dG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvZmF0LmMJMTk3MC0wMS0wMSAwNzowMDowMC4w
MDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWIt
ZmF0L2ZhdC5jCTIwMTItMTItMjggMTY6MDI6NDEuMDA4NjQwODM4ICswODAwCkBAIC0wLDAgKzEs
NzExIEBACisvKiBmYXQuYyAtIEZBVCBmaWxlc3lzdGVtICovCisvKgorICogIEdSVUIgIC0tICBH
UmFuZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDAsMjAwMSwyMDAy
LDIwMDMsMjAwNCwyMDA1LDIwMDcsMjAwOCwyMDA5ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24s
IEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0
ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2Fy
ZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChh
dCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJp
YnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9V
VCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICog
IE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNl
ZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgor
ICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJh
bCBQdWJsaWMgTGljZW5zZQorICogIGFsb25nIHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRw
Oi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4KKyAqLworI2luY2x1ZGUgIm1pc2MuaCIKKyNpbmNs
dWRlICJmYXQuaCIKKworCitzdGF0aWMgaW50CitmYXRfbG9nMiAodW5zaWduZWQgeCkKK3sKKyAg
aW50IGk7CisKKyAgaWYgKHggPT0gMCkKKyAgICByZXR1cm4gLTE7CisKKyAgZm9yIChpID0gMDsg
KHggJiAxKSA9PSAwOyBpKyspCisgICAgeCA+Pj0gMTsKKworICBpZiAoeCAhPSAxKQorICAgIHJl
dHVybiAtMTsKKworICByZXR1cm4gaTsKK30KKworCitzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqCitn
cnViX2ZhdF9tb3VudCAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHVpbnQzMl90IHBhcnRfb2ZmX3Nl
Y3RvcikKK3sKKyAgc3RydWN0IGdydWJfZmF0X2JwYiBicGI7CisgIHN0cnVjdCBncnViX2ZhdF9k
YXRhICpkYXRhID0gMDsKKyAgZ3J1Yl91aW50MzJfdCBmaXJzdF9mYXQsIG1hZ2ljOworICBpbnQ2
NF90IG9mZl9ieXRlcyA9IChpbnQ2NF90KXBhcnRfb2ZmX3NlY3RvciA8PCBHUlVCX0RJU0tfU0VD
VE9SX0JJVFM7CisKKyAgaWYgKCEgYnMpCisgICAgZ290byBmYWlsOworCisgIGRhdGEgPSAoc3Ry
dWN0IGdydWJfZmF0X2RhdGEgKikgbWFsbG9jIChzaXplb2YgKCpkYXRhKSk7CisgIGlmICghIGRh
dGEpCisgICAgZ290byBmYWlsOworCisgIC8qIFJlYWQgdGhlIEJQQi4gICovCisgIGlmIChiZHJ2
X3ByZWFkKGJzLCBvZmZfYnl0ZXMsICZicGIsIHNpemVvZihicGIpKSAhPSBzaXplb2YoYnBiKSkK
KyAgICB7CisgICAgICBwcmludGYoImJkcnZfcHJlYWQgZmFpbC4uLi5cbiIpOworICAgICAgZ290
byBmYWlsOworICAgIH0KKyAgICAKKyAgaWYgKGdydWJfc3RybmNtcCgoY29uc3QgY2hhciAqKSBi
cGIudmVyc2lvbl9zcGVjaWZpYy5mYXQxMl9vcl9mYXQxNi5mc3R5cGUsICJGQVQxMiIsIDUpCisg
ICAgICAmJiBncnViX3N0cm5jbXAoKGNvbnN0IGNoYXIgKikgYnBiLnZlcnNpb25fc3BlY2lmaWMu
ZmF0MTJfb3JfZmF0MTYuZnN0eXBlLCAiRkFUMTYiLCA1KQorICAgICAgJiYgZ3J1Yl9zdHJuY21w
KChjb25zdCBjaGFyICopIGJwYi52ZXJzaW9uX3NwZWNpZmljLmZhdDMyLmZzdHlwZSwgIkZBVDMy
IiwgNSkpCisgICAgeworICAgICAgCisgICAgICBwcmludGYoImZhaWwgaGVyZS0tPmdydWJfc3Ry
bmNtcC4uLi4uLmxpbmVbJXVdXG4iLCBfX0xJTkVfXyk7CisgICAgICBnb3RvIGZhaWw7CisgICAg
fQorCisgIC8qIEdldCB0aGUgc2l6ZXMgb2YgbG9naWNhbCBzZWN0b3JzIGFuZCBjbHVzdGVycy4g
ICovCisgIGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMgPQorICAgIGZhdF9sb2cyIChncnViX2xl
X3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikpOworICBwcmludGYoImJwYi5ieXRlc19w
ZXJfc2VjdG9yPTB4JXgsIGxlX3RvX2NwdTE2PTB4JXhcbiIsCisJIGJwYi5ieXRlc19wZXJfc2Vj
dG9yLCBncnViX2xlX3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikpOworICAKKworICBp
ZiAoZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyA8IEdSVUJfRElTS19TRUNUT1JfQklUUykKKyAg
eworICAgIHByaW50ZigiZmFpbCBoZXJlLS0+bG9naWNhbF9zZWN0b3JfYml0cy4uLi4uLmxpbmVb
JXVdXG4iLCBfX0xJTkVfXyk7IAorICAgIGdvdG8gZmFpbDsKKyAgfQorICBkYXRhLT5sb2dpY2Fs
X3NlY3Rvcl9iaXRzIC09IEdSVUJfRElTS19TRUNUT1JfQklUUzsKKworICBwcmludGYoImJwYi5z
ZWN0b3JzX3Blcl9jbHVzdGVyPSV1XG4iLCBicGIuc2VjdG9yc19wZXJfY2x1c3Rlcik7CisgIGRh
dGEtPmNsdXN0ZXJfYml0cyA9IGZhdF9sb2cyIChicGIuc2VjdG9yc19wZXJfY2x1c3Rlcik7Cisg
IGlmIChkYXRhLT5jbHVzdGVyX2JpdHMgPCAwKQorICAgIHsKKyAgICAgIHByaW50ZigiZmFpbCBo
ZXJlLS0+Y2x1c3Rlcl9iaXRzLi4uLi4ubGluZVsldV1cbiIsIF9fTElORV9fKTsgCisgICAgICBn
b3RvIGZhaWw7CisgICAgfQorICBkYXRhLT5jbHVzdGVyX2JpdHMgKz0gZGF0YS0+bG9naWNhbF9z
ZWN0b3JfYml0czsKKworICAvKiBHZXQgaW5mb3JtYXRpb24gYWJvdXQgRkFUcy4gICovCisgIHBy
aW50ZigiYnBiLm51bV9yZXNlcnZlZF9zZWN0b3JzPSV1LCBsZV90b19jcHUxNj0ldVxuIiwKKwkg
YnBiLm51bV9yZXNlcnZlZF9zZWN0b3JzLCBncnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2Vy
dmVkX3NlY3RvcnMpKTsKKyAgZGF0YS0+ZmF0X3NlY3RvciA9IHBhcnRfb2ZmX3NlY3RvciArIChn
cnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2VydmVkX3NlY3RvcnMpCisJCSAgICAgIDw8IGRh
dGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBwcmludGYoImRhdGEtPmZhdF9zZWN0b3I9JXVc
biIsIGRhdGEtPmZhdF9zZWN0b3IpOworICBpZiAoZGF0YS0+ZmF0X3NlY3RvciA9PSAwKQorICAg
IHsKKyAgICAgIHByaW50ZigiZmFpbCBoZXJlLS0+ZmF0X3NlY3Rvci4uLi4uLmxpbmVbJXVdXG4i
LCBfX0xJTkVfXyk7IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgZGF0YS0+c2VjdG9yc19w
ZXJfZmF0ID0gKChicGIuc2VjdG9yc19wZXJfZmF0XzE2CisJCQkgICAgPyBncnViX2xlX3RvX2Nw
dTE2IChicGIuc2VjdG9yc19wZXJfZmF0XzE2KQorCQkJICAgIDogZ3J1Yl9sZV90b19jcHUzMiAo
YnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIuc2VjdG9yc19wZXJfZmF0XzMyKSkKKwkJCSAgIDw8
IGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBwcmludGYoImJwYi52ZXJzaW9uX3NwZWNp
ZmljLmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMj0ldVxuIgorCSAiZ3J1Yl9sZV90b19jcHUzMiAo
YnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIuc2VjdG9yc19wZXJfZmF0XzMyKT0ldVxuIiwKKwkg
YnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIuc2VjdG9yc19wZXJfZmF0XzMyLAorCSBncnViX2xl
X3RvX2NwdTMyIChicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5zZWN0b3JzX3Blcl9mYXRfMzIp
KTsKKyAgaWYgKGRhdGEtPnNlY3RvcnNfcGVyX2ZhdCA9PSAwKQorICAgIGdvdG8gZmFpbDsKKwor
ICAvKiBHZXQgdGhlIG51bWJlciBvZiBzZWN0b3JzIGluIHRoaXMgdm9sdW1lLiAgKi8KKyAgZGF0
YS0+bnVtX3NlY3RvcnMgPSAoKGJwYi5udW1fdG90YWxfc2VjdG9yc18xNgorCQkJPyBncnViX2xl
X3RvX2NwdTE2IChicGIubnVtX3RvdGFsX3NlY3RvcnNfMTYpCisJCQk6IGdydWJfbGVfdG9fY3B1
MzIgKGJwYi5udW1fdG90YWxfc2VjdG9yc18zMikpCisJCSAgICAgICA8PCBkYXRhLT5sb2dpY2Fs
X3NlY3Rvcl9iaXRzKTsKKyAgaWYgKGRhdGEtPm51bV9zZWN0b3JzID09IDApCisgICAgeworICAg
ICAgcHJpbnRmKCJmYWlsIGhlcmUtLT5udW1fc2VjdG9ycy4uLi4uLmxpbmVbJXVdXG4iLCBfX0xJ
TkVfXyk7IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgLyogR2V0IGluZm9ybWF0aW9uIGFi
b3V0IHRoZSByb290IGRpcmVjdG9yeS4gICovCisgIGlmIChicGIubnVtX2ZhdHMgPT0gMCkKKyAg
ICB7CisgICAgICBwcmludGYoImZhaWwgaGVyZS0tPm51bV9mYXRzLi4uLi4ubGluZVsldV1cbiIs
IF9fTElORV9fKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorICBkYXRhLT5yb290X3NlY3Rv
ciA9IGRhdGEtPmZhdF9zZWN0b3IgKyBicGIubnVtX2ZhdHMgKiBkYXRhLT5zZWN0b3JzX3Blcl9m
YXQ7CisgIGRhdGEtPm51bV9yb290X3NlY3RvcnMKKyAgICA9ICgoKChncnViX3VpbnQzMl90KSBn
cnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jvb3RfZW50cmllcykKKwkgKiBHUlVCX0ZBVF9ESVJf
RU5UUllfU0laRQorCSArIGdydWJfbGVfdG9fY3B1MTYgKGJwYi5ieXRlc19wZXJfc2VjdG9yKSAt
IDEpCisJPj4gKGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMgKyBHUlVCX0RJU0tfU0VDVE9SX0JJ
VFMpKQorICAgICAgIDw8IChkYXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzKSk7CisKKyAgZGF0YS0+
Y2x1c3Rlcl9zZWN0b3IgPSBkYXRhLT5yb290X3NlY3RvciArIGRhdGEtPm51bV9yb290X3NlY3Rv
cnM7CisgIGRhdGEtPm51bV9jbHVzdGVycyA9ICgoKGRhdGEtPm51bV9zZWN0b3JzIC0gZGF0YS0+
Y2x1c3Rlcl9zZWN0b3IpCisJCQkgPj4gKGRhdGEtPmNsdXN0ZXJfYml0cyArIGRhdGEtPmxvZ2lj
YWxfc2VjdG9yX2JpdHMpKQorCQkJKyAyKTsKKworICBpZiAoZGF0YS0+bnVtX2NsdXN0ZXJzIDw9
IDIpCisgICAgeworICAgICAgcHJpbnRmKCJmYWlsIGhlcmUtLT5udW1fY2x1c3RlcnMuLi4uLi5s
aW5lWyV1XVxuIiwgX19MSU5FX18pOyAKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGlmICgh
IGJwYi5zZWN0b3JzX3Blcl9mYXRfMTYpCisgICAgeworICAgICAgLyogRkFUMzIuICAqLworICAg
ICAgZ3J1Yl91aW50MTZfdCBmbGFncyA9IGdydWJfbGVfdG9fY3B1MTYgKGJwYi52ZXJzaW9uX3Nw
ZWNpZmljLmZhdDMyLmV4dGVuZGVkX2ZsYWdzKTsKKworICAgICAgZGF0YS0+cm9vdF9jbHVzdGVy
ID0gZ3J1Yl9sZV90b19jcHUzMiAoYnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIucm9vdF9jbHVz
dGVyKTsKKyAgICAgIGRhdGEtPmZhdF9zaXplID0gMzI7CisgICAgICBkYXRhLT5jbHVzdGVyX2Vv
Zl9tYXJrID0gMHgwZmZmZmZmODsKKworICAgICAgaWYgKGZsYWdzICYgMHg4MCkKKwl7CisJICAv
KiBHZXQgYW4gYWN0aXZlIEZBVC4gICovCisJICB1bnNpZ25lZCBhY3RpdmVfZmF0ID0gZmxhZ3Mg
JiAweGY7CisKKwkgIGlmIChhY3RpdmVfZmF0ID4gYnBiLm51bV9mYXRzKQorCSAgICBnb3RvIGZh
aWw7CisKKwkgIGRhdGEtPmZhdF9zZWN0b3IgKz0gYWN0aXZlX2ZhdCAqIGRhdGEtPnNlY3RvcnNf
cGVyX2ZhdDsKKwl9CisKKyAgICAgIGlmIChicGIubnVtX3Jvb3RfZW50cmllcyAhPSAwIHx8IGJw
Yi52ZXJzaW9uX3NwZWNpZmljLmZhdDMyLmZzX3ZlcnNpb24gIT0gMCkKKwlnb3RvIGZhaWw7Cisg
ICAgfQorICBlbHNlCisgICAgeworICAgICAgLyogRkFUMTIgb3IgRkFUMTYuICAqLworICAgICAg
ZGF0YS0+cm9vdF9jbHVzdGVyID0gfjBVOworCisgICAgICBpZiAoZGF0YS0+bnVtX2NsdXN0ZXJz
IDw9IDQwODUgKyAyKQorCXsKKwkgIC8qIEZBVDEyLiAgKi8KKwkgIGRhdGEtPmZhdF9zaXplID0g
MTI7CisJICBkYXRhLT5jbHVzdGVyX2VvZl9tYXJrID0gMHgwZmY4OworCX0KKyAgICAgIGVsc2UK
Kwl7CisJICAvKiBGQVQxNi4gICovCisJICBkYXRhLT5mYXRfc2l6ZSA9IDE2OworCSAgZGF0YS0+
Y2x1c3Rlcl9lb2ZfbWFyayA9IDB4ZmZmODsKKwl9CisgICAgfQorCisgIC8qIE1vcmUgc2FuaXR5
IGNoZWNrcy4gICovCisgIGlmIChkYXRhLT5udW1fc2VjdG9ycyA8PSBkYXRhLT5mYXRfc2VjdG9y
KQorICAgIGdvdG8gZmFpbDsKKworICAKKyAgcHJpbnRmKCJkYXRhLT5mYXRfc2VjdG9yPSV1LCBk
YXRhLT5zZWN0b3JzX3Blcl9mYXQ9JXVcbiIsCisJIGRhdGEtPmZhdF9zZWN0b3IsIGRhdGEtPnNl
Y3RvcnNfcGVyX2ZhdCk7CisgIGlmIChiZHJ2X3ByZWFkKGJzLAorCQkgZGF0YS0+ZmF0X3NlY3Rv
ciA8PCBHUlVCX0RJU0tfU0VDVE9SX0JJVFMsCisJCSAmZmlyc3RfZmF0LAorCQkgc2l6ZW9mIChm
aXJzdF9mYXQpKSAhPSBzaXplb2YoZmlyc3RfZmF0KSkKKyAgICB7CisgICAgICBwcmludGYoImZh
aWwgaGVyZS0tPmJkcnZfcHJlYWQuLi4uLi5saW5lWyV1XVxuIiwgX19MSU5FX18pOyAKKyAgICAg
IGdvdG8gZmFpbDsKKyAgICB9CisKKyAgZmlyc3RfZmF0ID0gZ3J1Yl9sZV90b19jcHUzMiAoZmly
c3RfZmF0KTsKKworICBpZiAoZGF0YS0+ZmF0X3NpemUgPT0gMzIpCisgICAgeworICAgICAgZmly
c3RfZmF0ICY9IDB4MGZmZmZmZmY7CisgICAgICBtYWdpYyA9IDB4MGZmZmZmMDA7CisgICAgfQor
ICBlbHNlIGlmIChkYXRhLT5mYXRfc2l6ZSA9PSAxNikKKyAgICB7CisgICAgICBmaXJzdF9mYXQg
Jj0gMHgwMDAwZmZmZjsKKyAgICAgIG1hZ2ljID0gMHhmZjAwOworICAgIH0KKyAgZWxzZQorICAg
IHsKKyAgICAgIGZpcnN0X2ZhdCAmPSAweDAwMDAwZmZmOworICAgICAgbWFnaWMgPSAweDBmMDA7
CisgICAgfQorCisgIC8qIFNlcmlhbCBudW1iZXIuICAqLworICBpZiAoYnBiLnNlY3RvcnNfcGVy
X2ZhdF8xNikKKyAgICBkYXRhLT51dWlkID0gZ3J1Yl9sZV90b19jcHUzMiAoYnBiLnZlcnNpb25f
c3BlY2lmaWMuZmF0MTJfb3JfZmF0MTYubnVtX3NlcmlhbCk7CisgIGVsc2UKKyAgICBkYXRhLT51
dWlkID0gZ3J1Yl9sZV90b19jcHUzMiAoYnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIubnVtX3Nl
cmlhbCk7CisKKyAgLyogSWdub3JlIHRoZSAzcmQgYml0LCBiZWNhdXNlIHNvbWUgQklPU2VzIGFz
c2lnbnMgMHhGMCB0byB0aGUgbWVkaWEKKyAgICAgZGVzY3JpcHRvciwgZXZlbiBpZiBpdCBpcyBh
IHNvLWNhbGxlZCBzdXBlcmZsb3BweSAoZS5nLiBhbiBVU0Iga2V5KS4KKyAgICAgVGhlIGNoZWNr
IG1heSBiZSB0b28gc3RyaWN0IGZvciB0aGlzIGtpbmQgb2Ygc3R1cGlkIEJJT1NlcywgYXMKKyAg
ICAgdGhleSBvdmVyd3JpdGUgdGhlIG1lZGlhIGRlc2NyaXB0b3IuICAqLworICBpZiAoKGZpcnN0
X2ZhdCB8IDB4OCkgIT0gKG1hZ2ljIHwgYnBiLm1lZGlhIHwgMHg4KSkKKyAgICB7CisgICAgICBw
cmludGYoImZhaWwgaGVyZS0tPmZpcnN0X2ZhdD0weCV4LCBtYWdpYz0weCV4Li4uLi4ubGluZVsl
dV1cbiIsCisJICAgICBmaXJzdF9mYXQsIG1hZ2ljLCBfX0xJTkVfXyk7IAorICAgICAgZ290byBm
YWlsOworICAgIH0KKyAgLyogU3RhcnQgZnJvbSB0aGUgcm9vdCBkaXJlY3RvcnkuICAqLworICBk
YXRhLT5maWxlX2NsdXN0ZXIgPSBkYXRhLT5yb290X2NsdXN0ZXI7CisgIGRhdGEtPmN1cl9jbHVz
dGVyX251bSA9IH4wVTsKKyAgZGF0YS0+YXR0ciA9IEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZOwor
ICBwcmludGYoImRhdGEtPmZpbGVfY2x1c3Rlcj0ldSBcbmRhdGEtPmN1cl9jbHVzdGVyX251bT0l
dSBcbmRhdGEtPmF0dHI9MHgleFxuIgorCSAiZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cz0ldVxu
IgorCSAiZGF0YS0+Y2x1c3Rlcl9iaXRzPSV1XG4iLAorCSBkYXRhLT5maWxlX2NsdXN0ZXIsIGRh
dGEtPmN1cl9jbHVzdGVyX251bSwgZGF0YS0+YXR0ciwKKwkgZGF0YS0+bG9naWNhbF9zZWN0b3Jf
Yml0cywgZGF0YS0+Y2x1c3Rlcl9iaXRzKTsKKyAgcmV0dXJuIGRhdGE7CisKKyBmYWlsOgorCisg
IGZyZWUgKGRhdGEpOworICBlcnJ4ICgibm90IGEgRkFUIGZpbGVzeXN0ZW0uLi5cbiIpOworICBy
ZXR1cm4gMDsKK30KKworCisKKy8vtNPOxLz+tcTWuLaoxqvSxm9mZnNldNfWvdq0prbByKFsZW7X
1r3atcTK/b7dtb1idWYKKy8vzsS8/tPJZGF0YS0+ZmlsZV9jbHVzdGVy1ri2qAorLy9kYXRhLT5m
aWxlX2NsdXN0ZXLWuLaowcvOxLz+tcTG8Mq8tNi6xQorLy/ErMjPZGF0YS0+ZmlsZV9jbHVzdGVy
PTKjrLT6se24+cS/wrwKK3N0YXRpYyBncnViX3NzaXplX3QKK2dydWJfZmF0X3JlYWRfZGF0YSAo
QmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBncnViX2ZhdF9kYXRhICpkYXRhLAorCQkgICAg
dm9pZCAoKnJlYWRfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAorCQkJCSAgICAgICB1
bnNpZ25lZCBvZmZzZXQsIHVuc2lnbmVkIGxlbmd0aCwKKwkJCQkgICAgICAgdm9pZCAqY2xvc3Vy
ZSksCisJCSAgICB2b2lkICpjbG9zdXJlLAorCQkgICAgZ3J1Yl9vZmZfdCBvZmZzZXQsIGdydWJf
c2l6ZV90IGxlbiwgY2hhciAqYnVmKQoreworICBncnViX3NpemVfdCBzaXplOworICBncnViX3Vp
bnQzMl90IGxvZ2ljYWxfY2x1c3RlcjsKKyAgdW5zaWduZWQgbG9naWNhbF9jbHVzdGVyX2JpdHM7
CisgIGdydWJfc3NpemVfdCByZXQgPSAwOworICB1bnNpZ25lZCBsb25nIHNlY3RvcjsKKyAgdWlu
dDY0X3Qgb2ZmX2J5dGVzID0gMDsgCisgIC8qIFRoaXMgaXMgYSBzcGVjaWFsIGNhc2UuIEZBVDEy
IGFuZCBGQVQxNiBkb2Vzbid0IGhhdmUgdGhlIHJvb3QgZGlyZWN0b3J5CisgICAgIGluIGNsdXN0
ZXJzLiAgKi8KKyAgaWYgKGRhdGEtPmZpbGVfY2x1c3RlciA9PSB+MFUpCisgICAgeworICAgICAg
c2l6ZSA9IChkYXRhLT5udW1fcm9vdF9zZWN0b3JzIDw8IEdSVUJfRElTS19TRUNUT1JfQklUUykg
LSBvZmZzZXQ7CisgICAgICBpZiAoc2l6ZSA+IGxlbikKKwlzaXplID0gbGVuOworCisgICAgICBv
ZmZfYnl0ZXMgPSAoKHVpbnQ2NF90KWRhdGEtPnJvb3Rfc2VjdG9yIDw8IEdSVUJfRElTS19TRUNU
T1JfQklUUykgKyBvZmZzZXQ7CisgICAgICBpZihiZHJ2X3JlYWQoYnMsIG9mZl9ieXRlcywgYnVm
LCBzaXplICkgIT0gc2l6ZSkgCisJcmV0dXJuIC0xOworCisgICAgICByZXR1cm4gc2l6ZTsKKyAg
ICB9CisKKyAgLyogQ2FsY3VsYXRlIHRoZSBsb2dpY2FsIGNsdXN0ZXIgbnVtYmVyIGFuZCBvZmZz
ZXQuICAqLworICBsb2dpY2FsX2NsdXN0ZXJfYml0cyA9IChkYXRhLT5jbHVzdGVyX2JpdHMKKwkJ
CSAgKyBkYXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzCisJCQkgICsgR1JVQl9ESVNLX1NFQ1RPUl9C
SVRTKTsKKyAgbG9naWNhbF9jbHVzdGVyID0gb2Zmc2V0ID4+IGxvZ2ljYWxfY2x1c3Rlcl9iaXRz
OyAgICAvL3doaWNoIGNsdXN0ZXIgdG8gcmVhZCAKKyAgb2Zmc2V0ICY9ICgxIDw8IGxvZ2ljYWxf
Y2x1c3Rlcl9iaXRzKSAtIDE7ICAgICAgICAgICAvL21vZAorCisgIGlmIChsb2dpY2FsX2NsdXN0
ZXIgPCBkYXRhLT5jdXJfY2x1c3Rlcl9udW0pICAgLy8KKyAgICB7CisgICAgICBkYXRhLT5jdXJf
Y2x1c3Rlcl9udW0gPSAwOworICAgICAgZGF0YS0+Y3VyX2NsdXN0ZXIgPSBkYXRhLT5maWxlX2Ns
dXN0ZXI7IC8vILXaMrj2ZmF0se3P7r+qyry8x8K8xL/CvLrNzsS8/gorICAgIH0KKworICB3aGls
ZSAobGVuKQorICAgIHsKKyAgICAgIHdoaWxlIChsb2dpY2FsX2NsdXN0ZXIgPiBkYXRhLT5jdXJf
Y2x1c3Rlcl9udW0pCisJeworCSAgLyogRmluZCBuZXh0IGNsdXN0ZXIuICAqLworCSAgZ3J1Yl91
aW50MzJfdCBuZXh0X2NsdXN0ZXI7CisJICB1bnNpZ25lZCBsb25nIGZhdF9vZmZzZXQ7CisKKwkg
IHN3aXRjaCAoZGF0YS0+ZmF0X3NpemUpCisJICAgIHsKKwkgICAgY2FzZSAzMjoKKwkgICAgICBm
YXRfb2Zmc2V0ID0gZGF0YS0+Y3VyX2NsdXN0ZXIgPDwgMjsKKwkgICAgICBicmVhazsKKwkgICAg
Y2FzZSAxNjoKKwkgICAgICBmYXRfb2Zmc2V0ID0gZGF0YS0+Y3VyX2NsdXN0ZXIgPDwgMTsKKwkg
ICAgICBicmVhazsKKwkgICAgZGVmYXVsdDoKKwkgICAgICAvKiBjYXNlIDEyOiAqLworCSAgICAg
IGZhdF9vZmZzZXQgPSBkYXRhLT5jdXJfY2x1c3RlciArIChkYXRhLT5jdXJfY2x1c3RlciA+PiAx
KTsKKwkgICAgICBicmVhazsKKwkgICAgfQorCisJICAvKiBSZWFkIHRoZSBGQVQuICAqLworCSAg
aW50IGxlbiA9IChkYXRhLT5mYXRfc2l6ZSArIDcpID4+IDM7CisJICB1aW50NjRfdCBvZmZfYnl0
ZXMgPSAgKCh1aW50NjRfdClkYXRhLT5mYXRfc2VjdG9yIDw8IEdSVUJfRElTS19TRUNUT1JfQklU
UykgKyBmYXRfb2Zmc2V0OyAKKwkgIGlmIChiZHJ2X3ByZWFkIChicywgb2ZmX2J5dGVzLCAKKwkJ
CSAgKGNoYXIgKikgJm5leHRfY2x1c3RlciwgCisJCQkgIGxlbikgIT0gbGVuKSAgIC8vtNNmYXSx
7bbByKG02LrFCisJICAgIHJldHVybiAtMTsKKworCSAgbmV4dF9jbHVzdGVyID0gZ3J1Yl9sZV90
b19jcHUzMiAobmV4dF9jbHVzdGVyKTsKKwkgIHN3aXRjaCAoZGF0YS0+ZmF0X3NpemUpCisJICAg
IHsKKwkgICAgY2FzZSAxNjoKKwkgICAgICBuZXh0X2NsdXN0ZXIgJj0gMHhGRkZGOworCSAgICAg
IGJyZWFrOworCSAgICBjYXNlIDEyOgorCSAgICAgIGlmIChkYXRhLT5jdXJfY2x1c3RlciAmIDEp
CisJCW5leHRfY2x1c3RlciA+Pj0gNDsKKworCSAgICAgIG5leHRfY2x1c3RlciAmPSAweDBGRkY7
CisJICAgICAgYnJlYWs7CisJICAgIH0KKworCSAgcHJpbnRmICgiZmF0X3NpemU9JWQsIG5leHRf
Y2x1c3Rlcj0ldVxuIiwKKwkJCWRhdGEtPmZhdF9zaXplLCBuZXh0X2NsdXN0ZXIpOworCisJICAv
KiBDaGVjayB0aGUgZW5kLiAgKi8KKwkgIGlmIChuZXh0X2NsdXN0ZXIgPj0gZGF0YS0+Y2x1c3Rl
cl9lb2ZfbWFyaykKKwkgICAgcmV0dXJuIHJldDsKKworCSAgaWYgKG5leHRfY2x1c3RlciA8IDIg
fHwgbmV4dF9jbHVzdGVyID49IGRhdGEtPm51bV9jbHVzdGVycykKKwkgICAgeworCSAgICAgIHBy
aW50ZigiaW52YWxpZCBjbHVzdGVyICV1Li4uLi4uLi4uLi4uLi4uLlxuIiwKKwkJCSAgbmV4dF9j
bHVzdGVyKTsKKwkgICAgICByZXR1cm4gLTE7CisJICAgIH0KKworCSAgZGF0YS0+Y3VyX2NsdXN0
ZXIgPSBuZXh0X2NsdXN0ZXI7CisJICBkYXRhLT5jdXJfY2x1c3Rlcl9udW0rKzsKKwl9CisKKyAg
ICAgIC8qIFJlYWQgdGhlIGRhdGEgaGVyZS4gICovCisgICAgICAvL8LfvK202Mv5ttTTprXEvvi2
1MnIx/gKKyAgICAgIHNlY3RvciA9IChkYXRhLT5jbHVzdGVyX3NlY3RvcgorCQkrICgoZGF0YS0+
Y3VyX2NsdXN0ZXIgLSAyKQorCQkgICA8PCAoZGF0YS0+Y2x1c3Rlcl9iaXRzICsgZGF0YS0+bG9n
aWNhbF9zZWN0b3JfYml0cykpKTsgCisgICAgICAvL774ttTJyMf41tDIpbX0xqvSxrrztcTX1r3a
yv0KKyAgICAgIHNpemUgPSAoMSA8PCBsb2dpY2FsX2NsdXN0ZXJfYml0cykgLSBvZmZzZXQ7Cisg
ICAgICBpZiAoc2l6ZSA+IGxlbikKKwlzaXplID0gbGVuOworCisgICAgICAvL2Rpc2stPnJlYWRf
aG9vayA9IHJlYWRfaG9vazsKKyAgICAgIC8vZGlzay0+Y2xvc3VyZSA9IGNsb3N1cmU7CisgICAg
ICBpbnQ2NF90IG9mZl9ieXRlcyA9ICgodWludDY0X3Qpc2VjdG9yIDw8IEdSVUJfRElTS19TRUNU
T1JfQklUUykgKyBvZmZzZXQ7CisgICAgICAvL2Rpc2stPnJlYWRfaG9vayA9IDA7CisgICAgICBp
ZiAoYmRydl9wcmVhZCAoYnMsIG9mZl9ieXRlcywgYnVmLCBzaXplKSAhPSBzaXplKQorCXJldHVy
biAtMTsKKworICAgICAgbGVuIC09IHNpemU7CisgICAgICBidWYgKz0gc2l6ZTsKKyAgICAgIHJl
dCArPSBzaXplOworICAgICAgbG9naWNhbF9jbHVzdGVyKys7CisgICAgICBvZmZzZXQgPSAwOyAg
Ly/S1LrztsG1xLa8ysfN6tX7ycjH+AorICAgIH0KKworICByZXR1cm4gcmV0OworfQorCisvL7Hp
wPrTyWRhdGEtPmZpbGVfY2x1c3Rlcta4tqi1xMS/wrwKK2ludAorZ3J1Yl9mYXRfaXRlcmF0ZV9k
aXIgKEJsb2NrRHJpdmVyU3RhdGUgKmJzLCBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSkKK3sK
KyAgc3RydWN0IGdydWJfZmF0X2Rpcl9lbnRyeSBkaXI7CisgIGNoYXIgKmZpbGVuYW1lLCAqZmls
ZXAgPSAwOworICBncnViX3VpbnQxNl90ICp1bmlidWY7CisgIGludCBzbG90ID0gLTEsIHNsb3Rz
ID0gLTE7CisgIGludCBjaGVja3N1bSA9IC0xOworICBncnViX3NzaXplX3Qgb2Zmc2V0ID0gLXNp
emVvZihkaXIpOworCisgIGlmICghIChkYXRhLT5hdHRyICYgR1JVQl9GQVRfQVRUUl9ESVJFQ1RP
UlkpKQorICAgIHJldHVybiBwcmludGYoIm5vdCBhIGRpcmVjdG9yeS4uLi4uLlxuIik7CisKKyAg
LyogQWxsb2NhdGUgc3BhY2UgZW5vdWdoIHRvIGhvbGQgYSBsb25nIG5hbWUuICAqLworICBmaWxl
bmFtZSA9IChjaGFyKiltYWxsb2MgKDB4NDAgKiAxMyAqIDQgKyAxKTsKKyAgdW5pYnVmID0gKGdy
dWJfdWludDE2X3QgKikgbWFsbG9jICgweDQwICogMTMgKiAyKTsKKyAgaWYgKCEgZmlsZW5hbWUg
fHwgISB1bmlidWYpCisgICAgeworICAgICAgZnJlZSAoZmlsZW5hbWUpOworICAgICAgZnJlZSAo
dW5pYnVmKTsKKyAgICAgIHJldHVybiAtMTsKKyAgICB9CisKKworICBpbnQgY291bnQgPSAwOwor
ICB3aGlsZSAoMSkKKyAgICB7CisgICAgICB1bnNpZ25lZCBpOworCisgICAgICAvKiBBZGp1c3Qg
dGhlIG9mZnNldC4gICovCisgICAgICBvZmZzZXQgKz0gc2l6ZW9mIChkaXIpOworICAgICAgcHJp
bnRmKCJbJWRdb2Zmc2V0PSV1XG4iCisJICAgICAiZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtPSV1LGRh
dGEtPmN1cl9jbHVzdGVyPSV1XG4iLCAKKwkgICAgIGNvdW50KzEsIG9mZnNldCwgCisJICAgICBk
YXRhLT5jdXJfY2x1c3Rlcl9udW0sIGRhdGEtPmN1cl9jbHVzdGVyKTsKKyAgICAgIC8qIFJlYWQg
YSBkaXJlY3RvcnkgZW50cnkuICAqLworICAgICAgLy8weDCx7cq+v9XEv8K8CisgICAgICBpZiAo
KGdydWJfZmF0X3JlYWRfZGF0YSAoYnMsIGRhdGEsIDAsIDAsCisJCQkgICAgICAgb2Zmc2V0LCBz
aXplb2YgKGRpciksIChjaGFyICopICZkaXIpCisJICAgIT0gc2l6ZW9mIChkaXIpIHx8IGRpci5u
YW1lWzBdID09IDApKQorCXsKKwkgIHByaW50ZigiYnJlYWsuLi5kaXIubmFtZVswXT09JWRcbiIs
IGRpci5uYW1lWzBdKTsKKwkgIGJyZWFrOworCX0KKyAgICAgIC8qIEhhbmRsZSBsb25nIG5hbWUg
ZW50cmllcy4gICovCisgICAgICBpZiAoZGlyLmF0dHIgPT0gR1JVQl9GQVRfQVRUUl9MT05HX05B
TUUpCisJeworCSAgcHJpbnRmKCJsb25nIG5hbWUuLi5cbiIpOworCSAgc3RydWN0IGdydWJfZmF0
X2xvbmdfbmFtZV9lbnRyeSAqbG9uZ19uYW1lCisJICAgID0gKHN0cnVjdCBncnViX2ZhdF9sb25n
X25hbWVfZW50cnkgKikgJmRpcjsKKwkgIGdydWJfdWludDhfdCBpZCA9IGxvbmdfbmFtZS0+aWQ7
CisKKwkgIGlmIChpZCAmIDB4NDApICAvL3RoZSBsYXN0IGl0ZW0KKwkgICAgeworCSAgICAgIGlk
ICY9IDB4M2Y7ICAgLy9pbmRleCBvciBvcmRpbmFsIG51bWJlciAgMX4zMQorCSAgICAgIHNsb3Rz
ID0gc2xvdCA9IGlkOworCSAgICAgIGNoZWNrc3VtID0gbG9uZ19uYW1lLT5jaGVja3N1bTsKKwkg
ICAgICBwcmludGYoInRoZSBsYXN0IG9yZGluYWwgbnVtPSVkISEhXG4iLCBpZCk7CisJICAgIH0K
KworCSAgaWYgKGlkICE9IHNsb3QgfHwgc2xvdCA9PSAwIHx8IGNoZWNrc3VtICE9IGxvbmdfbmFt
ZS0+Y2hlY2tzdW0pCisJICAgIHsKKwkgICAgICBwcmludGYoIm5vdCB2YWxpZCBvcmRpbmFsIG51
bWJlciAsaWdub3JlLi4uY29udGludWVcbiIpOworCSAgICAgIGNoZWNrc3VtID0gLTE7CisJICAg
ICAgY29udGludWU7CisJICAgIH0KKworCSAgc2xvdC0tOworCSAgbWVtY3B5ICh1bmlidWYgKyBz
bG90ICogMTMsIGxvbmdfbmFtZS0+bmFtZTEsIDUgKiAyKTsKKwkgIG1lbWNweSAodW5pYnVmICsg
c2xvdCAqIDEzICsgNSwgbG9uZ19uYW1lLT5uYW1lMiwgNiAqIDIpOworCSAgbWVtY3B5ICh1bmli
dWYgKyBzbG90ICogMTMgKyAxMSwgbG9uZ19uYW1lLT5uYW1lMywgMiAqIDIpOworCSAgcHJpbnRm
KCJtZW1jcHkuLi5jb250aW51ZVxuIik7CisJICBjb250aW51ZTsKKwl9CisKKyAgICAgIAorICAg
ICAgLyogQ2hlY2sgaWYgdGhpcyBlbnRyeSBpcyB2YWxpZC4gICovCisgICAgICAvL294ZTWx7cq+
0tG+rbG7yb6z/QorICAgICAgaWYgKGRpci5uYW1lWzBdID09IDB4ZTUgfHwgKGRpci5hdHRyICYg
fkdSVUJfRkFUX0FUVFJfVkFMSUQpKQorCXsKKwkgIHByaW50ZigiZGlyLm5hbWVbMF09MHgleCwg
ZGlyLmF0dHI9MHgleCBub3QgdmFsaWQuLi5jb250aW51ZVxuIiwgCisJCSBkaXIubmFtZVswXSwg
ZGlyLmF0dHIpOworCSAgY29udGludWU7CisJfQorCisgICAgICBwcmludGYoImNoZWNrc3VtPSVk
LCBzbG90PSVkXG4iLCBjaGVja3N1bSwgc2xvdCk7CisgICAgICAvKiBUaGlzIGlzIGEgd29ya2Fy
b3VuZCBmb3IgSmFwYW5lc2UuICAqLworICAgICAgaWYgKGRpci5uYW1lWzBdID09IDB4MDUpCisJ
ZGlyLm5hbWVbMF0gPSAweGU1OworCisgICAgICBpZiAoY2hlY2tzdW0gIT0gLTEgJiYgc2xvdCA9
PSAwKQorCXsKKwkgIHByaW50ZigiY2hlY2tzdW1pbmdcbiIpOworCSAgZ3J1Yl91aW50OF90IHN1
bTsKKworCSAgZm9yIChzdW0gPSAwLCBpID0gMDsgaSA8IHNpemVvZiAoZGlyLm5hbWUpOyBpKysp
CisJICAgIHN1bSA9ICgoc3VtID4+IDEpIHwgKHN1bSA8PCA3KSkgKyBkaXIubmFtZVtpXTsKKwor
CSAgaWYgKHN1bSA9PSBjaGVja3N1bSkKKwkgICAgey8vs6TD+7Htz+6688PmvfS907bMw/ux7c/u
o6zR6daks8m5ptTy1qTD99Xm1f3Kx7Okw/vX1gorCSAgICAgIGludCB1OworCisJICAgICAgZm9y
ICh1ID0gMDsgdSA8IHNsb3RzICogMTM7IHUrKykKKwkJdW5pYnVmW3VdID0gZ3J1Yl9sZV90b19j
cHUxNiAodW5pYnVmW3VdKTsKKworCSAgICAgICpncnViX3V0ZjE2X3RvX3V0ZjggKChncnViX3Vp
bnQ4X3QgKikgZmlsZW5hbWUsIHVuaWJ1ZiwKKwkJCQkgICBzbG90cyAqIDEzKSA9ICdcMCc7CisK
KwkgICAgICAvL2lmIChob29rIChmaWxlbmFtZSwgJmRpciwgY2xvc3VyZSkpCisJICAgICAgICAv
L2JyZWFrOworCisJICAgICAgY2hlY2tzdW0gPSAtMTsKKwkgICAgICBmb3IgKGkgPSAwOyBpIDwg
c2l6ZW9mIChkaXIubmFtZSk7IGkrKykKKwkJcHJpbnRmKCIweCV4ICAiLCBkaXIubmFtZVtpXSk7
CisJICAgICAgY2hhciAqZ2JuYW1lID0gKGNoYXIqKW1hbGxvYygyNTYpOworCSAgICAgIHUyZyhm
aWxlbmFtZSwgc3RybGVuKGZpbGVuYW1lKSwgZ2JuYW1lLCAyNTYpOworCSAgICAgIHByaW50Zigi
XG5kaXIubmFtZT0lcywgZmlsZW5hbWU9JXMsIGRpci5hdHRyPTB4JXgsIgorCQkgICAgICJzdW09
PWNoZWNrc3VtLi4uY29udGludWVcbiIsCisJCSAgICAgZGlyLm5hbWUsIGdibmFtZSwgZGlyLmF0
dHIpOworCSAgICAgIGZyZWUoZ2JuYW1lKTsKKwkgICAgICBjb3VudCsrOworCSAgICAgIGNvbnRp
bnVlOworCSAgICB9CisKKwkgIGNoZWNrc3VtID0gLTE7CisJfQorCisgICAgICAvL7rzw+a1xLSm
wO3V67bUt8fV5sq1s6TD+7rN1ebKtbbMw/sKKyAgICAgIC8qIENvbnZlcnQgdGhlIDguMyBmaWxl
IG5hbWUuICAqLworICAgICAgLy/IpbX0tszD+7XEv9W48aOsyKu4xM6q0KHQtAorICAgICAgZmls
ZXAgPSBmaWxlbmFtZTsKKyAgICAgIGlmIChkaXIuYXR0ciAmIEdSVUJfRkFUX0FUVFJfVk9MVU1F
X0lEKQorCXsKKwkgIHByaW50ZigiVk9MVU1FXG4iKTsKKwkgIGZvciAoaSA9IDA7IGkgPCBzaXpl
b2YgKGRpci5uYW1lKSAmJiBkaXIubmFtZVtpXQorCQkgJiYgISBncnViX2lzc3BhY2UgKGRpci5u
YW1lW2ldKTsgaSsrKQorCSAgICAqZmlsZXArKyA9IGRpci5uYW1lW2ldOworCX0KKyAgICAgIGVs
c2UKKwl7CisJICBmb3IgKGkgPSAwOyBpIDwgOCAmJiBkaXIubmFtZVtpXSAmJiAhIGdydWJfaXNz
cGFjZSAoZGlyLm5hbWVbaV0pOyBpKyspCisJICAgICpmaWxlcCsrID0gZ3J1Yl90b2xvd2VyIChk
aXIubmFtZVtpXSk7CisKKwkgICpmaWxlcCA9ICcuJzsKKworCSAgZm9yIChpID0gODsgaSA8IDEx
ICYmIGRpci5uYW1lW2ldICYmICEgZ3J1Yl9pc3NwYWNlIChkaXIubmFtZVtpXSk7IGkrKykKKwkg
ICAgKisrZmlsZXAgPSBncnViX3RvbG93ZXIgKGRpci5uYW1lW2ldKTsKKworCSAgaWYgKCpmaWxl
cCAhPSAnLicpCisJICAgIGZpbGVwKys7CisJfQorICAgICAgKmZpbGVwID0gJ1wwJzsKKworICAg
ICAgCisgICAgICBmb3IgKGkgPSAwOyBpIDwgc2l6ZW9mIChkaXIubmFtZSk7IGkrKykKKwlwcmlu
dGYoIjB4JXggICIsIGRpci5uYW1lW2ldKTsKKyAgICAgIHByaW50ZigiXG5kaXIubmFtZT0lcywg
ZmlsZW5hbWU9ob4lc6G/LCBkaXIuYXR0cj0weCV4LCIKKwkgICAgICIuLi5uZXh0IHdoaWxlXG4i
LAorCSAgICAgZGlyLm5hbWUsIGZpbGVuYW1lLCBkaXIuYXR0cik7CisgICAgICBjb3VudCsrOwor
ICAgICAgLyppZihzdHJjbXAoZmlsZW5hbWUsICIuIikgJiYgc3RyY21wKGZpbGVuYW1lLCAiLi4i
KSkKKwl7CisJICBwcmludGYoIns9PT09PT09PT09PT09PT5cbiIpOworCSAgc3RydWN0IGdydWJf
ZmF0X2RhdGEgKmRhdGEyID0gTlVMTDsKKwkgIGRhdGEyID0gKHN0cnVjdCBncnViX2ZhdF9kYXRh
KiltYWxsb2Moc2l6ZW9mKCpkYXRhKSk7CisJICBtZW1jcHkoZGF0YTIsIGRhdGEsIHNpemVvZigq
ZGF0YSkpOworCSAgZGF0YTItPmF0dHIgPSBkaXIuYXR0cjsKKwkgIGRhdGEyLT5maWxlX3NpemUg
PSBncnViX2xlX3RvX2NwdTMyIChkaXIuZmlsZV9zaXplKTsKKwkgIGRhdGEyLT5maWxlX2NsdXN0
ZXIgPSAoKGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2hpZ2gpIDw8IDE2KQor
CQkJCSB8IGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2xvdykpOworCSAgZGF0
YTItPmN1cl9jbHVzdGVyX251bSA9IH4wVTsKKwkgIChncnViX2ZhdF9pdGVyYXRlX2Rpcihicywg
ZGF0YTIpIDwgMCkgPyBwcmludGYoImVycm9yICEhISEhIVxuIikgOiAwOworCSAgZnJlZShkYXRh
Mik7CisJICBwcmludGYoIjw9PT09PT09PT09PT09PT09PT09fVxuIik7CisJfQorICAgICAgKi8K
KyAgICAgIC8vaWYgKGhvb2sgKGZpbGVuYW1lLCAmZGlyLCBjbG9zdXJlKSkKKyAgICAgICAgLy9i
cmVhazsKKyAgICB9CisKKyAgZnJlZSAoZmlsZW5hbWUpOworICBmcmVlICh1bmlidWYpOworCisg
IHJldHVybiAwOworfQorCisKKy8qCitzdHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZQor
eworICBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YTsKKyAgaW50ICgqaG9vaykgKGNvbnN0IGNo
YXIgKmZpbGVuYW1lLAorCSAgICAgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2luZm8gKmlu
Zm8sCisJICAgICAgIHZvaWQgKmNsb3N1cmUpOworICB2b2lkICpjbG9zdXJlOworICBjaGFyICpk
aXJuYW1lOworICBpbnQgY2FsbF9ob29rOworICBpbnQgZm91bmQ7Cit9OworCisKK3N0YXRpYyBp
bnQKK2dydWJfZmF0X2ZpbmRfZGlyX2hvb2sgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLCBzdHJ1Y3Qg
Z3J1Yl9mYXRfZGlyX2VudHJ5ICpkaXIsCisJCQl2b2lkICpjbG9zdXJlKQoreworICBzdHJ1Y3Qg
Z3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZSAqYyA9IGNsb3N1cmU7CisgIHN0cnVjdCBncnViX2Rp
cmhvb2tfaW5mbyBpbmZvOworICBncnViX21lbXNldCAoJmluZm8sIDAsIHNpemVvZiAoaW5mbykp
OworCisgIGluZm8uZGlyID0gISEgKGRpci0+YXR0ciAmIEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZ
KTsKKyAgaW5mby5jYXNlX2luc2Vuc2l0aXZlID0gMTsKKworICBpZiAoZGlyLT5hdHRyICYgR1JV
Ql9GQVRfQVRUUl9WT0xVTUVfSUQpCisgICAgcmV0dXJuIDA7CisgIGlmICgqKGMtPmRpcm5hbWUp
ID09ICdcMCcgJiYgKGMtPmNhbGxfaG9vaykpCisgICAgcmV0dXJuIGMtPmhvb2sgKGZpbGVuYW1l
LCAmaW5mbywgYy0+Y2xvc3VyZSk7CisKKyAgaWYgKGdydWJfc3RyY2FzZWNtcCAoYy0+ZGlybmFt
ZSwgZmlsZW5hbWUpID09IDApCisgICAgeworICAgICAgc3RydWN0IGdydWJfZmF0X2RhdGEgKmRh
dGEgPSBjLT5kYXRhOworCisgICAgICBjLT5mb3VuZCA9IDE7CisgICAgICBkYXRhLT5hdHRyID0g
ZGlyLT5hdHRyOworICAgICAgZGF0YS0+ZmlsZV9zaXplID0gZ3J1Yl9sZV90b19jcHUzMiAoZGly
LT5maWxlX3NpemUpOworICAgICAgZGF0YS0+ZmlsZV9jbHVzdGVyID0gKChncnViX2xlX3RvX2Nw
dTE2IChkaXItPmZpcnN0X2NsdXN0ZXJfaGlnaCkgPDwgMTYpCisJCQkgICAgICAgfCBncnViX2xl
X3RvX2NwdTE2IChkaXItPmZpcnN0X2NsdXN0ZXJfbG93KSk7CisgICAgICBkYXRhLT5jdXJfY2x1
c3Rlcl9udW0gPSB+MFU7CisKKyAgICAgIGlmIChjLT5jYWxsX2hvb2spCisJYy0+aG9vayAoZmls
ZW5hbWUsICZpbmZvLCBjLT5jbG9zdXJlKTsKKworICAgICAgcmV0dXJuIDE7CisgICAgfQorICBy
ZXR1cm4gMDsKK30KKyovCisKKy8qIEZpbmQgdGhlIHVuZGVybHlpbmcgZGlyZWN0b3J5IG9yIGZp
bGUgaW4gUEFUSCBhbmQgcmV0dXJuIHRoZQorICAgbmV4dCBwYXRoLiBJZiB0aGVyZSBpcyBubyBu
ZXh0IHBhdGggb3IgYW4gZXJyb3Igb2NjdXJzLCByZXR1cm4gTlVMTC4KKyAgIElmIEhPT0sgaXMg
c3BlY2lmaWVkLCBjYWxsIGl0IHdpdGggZWFjaCBmaWxlIG5hbWUuICAqLworY2hhciAqCitncnVi
X2ZhdF9maW5kX2RpciAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBncnViX2ZhdF9kYXRh
ICpkYXRhLAorCQkgICBjb25zdCBjaGFyICpwYXRoLAorCQkgICBpbnQgKCpob29rKSAoY29uc3Qg
Y2hhciAqZmlsZW5hbWUsCisJCQkJY29uc3Qgc3RydWN0IGdydWJfZGlyaG9va19pbmZvICppbmZv
LAorCQkJCXZvaWQgKmNsb3N1cmUpLAorCQkgICB2b2lkICpjbG9zdXJlKQoreworICBjaGFyICpk
aXJuYW1lLCAqZGlycDsKKyAgLy9zdHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZSBjOwor
CisgIGlmICghIChkYXRhLT5hdHRyICYgR1JVQl9GQVRfQVRUUl9ESVJFQ1RPUlkpKQorICAgIHsK
KyAgICAgIHByaW50Zigibm90IGEgZGlyZWN0b3J5Li4uLi4uLi4uLi4uLlxuIik7CisgICAgICBy
ZXR1cm4gMDsKKyAgICB9CisKKyAgLyogRXh0cmFjdCBhIGRpcmVjdG9yeSBuYW1lLiAgKi8KKyAg
d2hpbGUgKCpwYXRoID09ICcvJykKKyAgICBwYXRoKys7CisKKyAgZGlycCA9IGdydWJfc3RyY2hy
IChwYXRoLCAnLycpOworICBpZiAoZGlycCkKKyAgICB7CisgICAgICB1bnNpZ25lZCBsZW4gPSBk
aXJwIC0gcGF0aDsKKworICAgICAgZGlybmFtZSA9IChjaGFyKiltYWxsb2MgKGxlbiArIDEpOwor
ICAgICAgaWYgKCEgZGlybmFtZSkKKwlyZXR1cm4gMDsKKworICAgICAgbWVtY3B5IChkaXJuYW1l
LCBwYXRoLCBsZW4pOworICAgICAgZGlybmFtZVtsZW5dID0gJ1wwJzsKKyAgICB9CisgIGVsc2UK
KyAgICB7CisgICAgLyogVGhpcyBpcyBhY3R1YWxseSBhIGZpbGUuICAqLworICAgICAgZGlybmFt
ZSA9IGdydWJfc3RyZHVwIChwYXRoKTsKKyAgICB9CisgIC8vYy5kYXRhID0gZGF0YTsKKyAgLy9j
Lmhvb2sgPSBob29rOworICAvL2MuY2xvc3VyZSA9IGNsb3N1cmU7CisgIC8vYy5kaXJuYW1lID1k
aXJuYW1lOworICAvL2MuZm91bmQgPSAwOworICAvL2MuY2FsbF9ob29rID0gKCEgZGlycCAmJiBo
b29rKTsKKyAgaWYoZ3J1Yl9mYXRfaXRlcmF0ZV9kaXIgKGJzLCBkYXRhKTwwKQorICAgIHsKKyAg
ICAgICBwcmludGYoImZpbGUgbm90IGZvdW5kLi5cbiIpOworICAgICAgIHJldHVybiAwOworICAg
IH0KKyAgICAKKyAgCisgIGZyZWUgKGRpcm5hbWUpOworCisgIHJldHVybiBkaXJwOworfQorCisK
KworCisKKworCisKKworCisKKworCisKKworCisKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4g
LVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItZmF0L2ZhdC5oIHhlbi00
LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItZmF0L2ZhdC5oCi0tLSB4ZW4tNC4xLjIt
YS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mYXQuaAkxOTcwLTAxLTAxIDA3OjAwOjAw
LjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1
Yi1mYXQvZmF0LmgJMjAxMi0xMi0yOCAxNjowMjo0MS4wMDk5Mzc3MzggKzA4MDAKQEAgLTAsMCAr
MSwxNDYgQEAKKyNpZm5kZWYgRlNfRkFUX0gKKyNkZWZpbmUgRlNfRkFUX0gKKworCisjaW5jbHVk
ZSAiZnMtdHlwZXMuaCIKKyNpbmNsdWRlICJibG9ja19pbnQuaCIKKworI2RlZmluZSBHUlVCX0RJ
U0tfU0VDVE9SX0JJVFMgICAgICA5CisjZGVmaW5lIEdSVUJfRkFUX0RJUl9FTlRSWV9TSVpFCTMy
CisKKyNkZWZpbmUgR1JVQl9GQVRfQVRUUl9SRUFEX09OTFkJMHgwMQorI2RlZmluZSBHUlVCX0ZB
VF9BVFRSX0hJRERFTgkweDAyCisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfU1lTVEVNCTB4MDQKKyNk
ZWZpbmUgR1JVQl9GQVRfQVRUUl9WT0xVTUVfSUQJMHgwOAorI2RlZmluZSBHUlVCX0ZBVF9BVFRS
X0RJUkVDVE9SWQkweDEwCisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfQVJDSElWRQkweDIwCisKKyNk
ZWZpbmUgR1JVQl9GQVRfTUFYRklMRQkyNTYKKworI2RlZmluZSBHUlVCX0ZBVF9BVFRSX0xPTkdf
TkFNRQkoR1JVQl9GQVRfQVRUUl9SRUFEX09OTFkgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfSElE
REVOIFwKKwkJCQkgfCBHUlVCX0ZBVF9BVFRSX1NZU1RFTSBcCisJCQkJIHwgR1JVQl9GQVRfQVRU
Ul9WT0xVTUVfSUQpCisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfVkFMSUQJKEdSVUJfRkFUX0FUVFJf
UkVBRF9PTkxZIFwKKwkJCQkgfCBHUlVCX0ZBVF9BVFRSX0hJRERFTiBcCisJCQkJIHwgR1JVQl9G
QVRfQVRUUl9TWVNURU0gXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZIFwKKwkJCQkg
fCBHUlVCX0ZBVF9BVFRSX0FSQ0hJVkUgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfVk9MVU1FX0lE
KQorCitzdHJ1Y3QgZ3J1Yl9mYXRfYnBiCit7CisgIGdydWJfdWludDhfdCBqbXBfYm9vdFszXTsK
KyAgZ3J1Yl91aW50OF90IG9lbV9uYW1lWzhdOworICBncnViX3VpbnQxNl90IGJ5dGVzX3Blcl9z
ZWN0b3I7CisgIGdydWJfdWludDhfdCBzZWN0b3JzX3Blcl9jbHVzdGVyOworICBncnViX3VpbnQx
Nl90IG51bV9yZXNlcnZlZF9zZWN0b3JzOworICBncnViX3VpbnQ4X3QgbnVtX2ZhdHM7CisgIGdy
dWJfdWludDE2X3QgbnVtX3Jvb3RfZW50cmllczsKKyAgZ3J1Yl91aW50MTZfdCBudW1fdG90YWxf
c2VjdG9yc18xNjsKKyAgZ3J1Yl91aW50OF90IG1lZGlhOworICBncnViX3VpbnQxNl90IHNlY3Rv
cnNfcGVyX2ZhdF8xNjsKKyAgZ3J1Yl91aW50MTZfdCBzZWN0b3JzX3Blcl90cmFjazsKKyAgZ3J1
Yl91aW50MTZfdCBudW1faGVhZHM7CisgIGdydWJfdWludDMyX3QgbnVtX2hpZGRlbl9zZWN0b3Jz
OworICBncnViX3VpbnQzMl90IG51bV90b3RhbF9zZWN0b3JzXzMyOworICB1bmlvbgorICB7Cisg
ICAgc3RydWN0CisgICAgeworICAgICAgZ3J1Yl91aW50OF90IG51bV9waF9kcml2ZTsKKyAgICAg
IGdydWJfdWludDhfdCByZXNlcnZlZDsKKyAgICAgIGdydWJfdWludDhfdCBib290X3NpZzsKKyAg
ICAgIGdydWJfdWludDMyX3QgbnVtX3NlcmlhbDsKKyAgICAgIGdydWJfdWludDhfdCBsYWJlbFsx
MV07CisgICAgICBncnViX3VpbnQ4X3QgZnN0eXBlWzhdOworICAgIH0gX19hdHRyaWJ1dGVfXyAo
KHBhY2tlZCkpIGZhdDEyX29yX2ZhdDE2OworICAgIHN0cnVjdAorICAgIHsKKyAgICAgIGdydWJf
dWludDMyX3Qgc2VjdG9yc19wZXJfZmF0XzMyOworICAgICAgZ3J1Yl91aW50MTZfdCBleHRlbmRl
ZF9mbGFnczsKKyAgICAgIGdydWJfdWludDE2X3QgZnNfdmVyc2lvbjsKKyAgICAgIGdydWJfdWlu
dDMyX3Qgcm9vdF9jbHVzdGVyOworICAgICAgZ3J1Yl91aW50MTZfdCBmc19pbmZvOworICAgICAg
Z3J1Yl91aW50MTZfdCBiYWNrdXBfYm9vdF9zZWN0b3I7CisgICAgICBncnViX3VpbnQ4X3QgcmVz
ZXJ2ZWRbMTJdOworICAgICAgZ3J1Yl91aW50OF90IG51bV9waF9kcml2ZTsKKyAgICAgIGdydWJf
dWludDhfdCByZXNlcnZlZDE7CisgICAgICBncnViX3VpbnQ4X3QgYm9vdF9zaWc7CisgICAgICBn
cnViX3VpbnQzMl90IG51bV9zZXJpYWw7CisgICAgICBncnViX3VpbnQ4X3QgbGFiZWxbMTFdOwor
ICAgICAgZ3J1Yl91aW50OF90IGZzdHlwZVs4XTsKKyAgICB9IF9fYXR0cmlidXRlX18gKChwYWNr
ZWQpKSBmYXQzMjsKKyAgfSBfX2F0dHJpYnV0ZV9fICgocGFja2VkKSkgdmVyc2lvbl9zcGVjaWZp
YzsKK30gX19hdHRyaWJ1dGVfXyAoKHBhY2tlZCkpOworCitzdHJ1Y3QgZ3J1Yl9mYXRfZGlyX2Vu
dHJ5Cit7CisgIGdydWJfdWludDhfdCBuYW1lWzExXTsKKyAgZ3J1Yl91aW50OF90IGF0dHI7Cisg
IGdydWJfdWludDhfdCBudF9yZXNlcnZlZDsKKyAgZ3J1Yl91aW50OF90IGNfdGltZV90ZW50aDsK
KyAgZ3J1Yl91aW50MTZfdCBjX3RpbWU7CisgIGdydWJfdWludDE2X3QgY19kYXRlOworICBncnVi
X3VpbnQxNl90IGFfZGF0ZTsKKyAgZ3J1Yl91aW50MTZfdCBmaXJzdF9jbHVzdGVyX2hpZ2g7Cisg
IGdydWJfdWludDE2X3Qgd190aW1lOworICBncnViX3VpbnQxNl90IHdfZGF0ZTsKKyAgZ3J1Yl91
aW50MTZfdCBmaXJzdF9jbHVzdGVyX2xvdzsKKyAgZ3J1Yl91aW50MzJfdCBmaWxlX3NpemU7Cit9
IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKworc3RydWN0IGdydWJfZmF0X2xvbmdfbmFtZV9l
bnRyeQoreworICBncnViX3VpbnQ4X3QgaWQ7CisgIGdydWJfdWludDE2X3QgbmFtZTFbNV07Cisg
IGdydWJfdWludDhfdCBhdHRyOworICBncnViX3VpbnQ4X3QgcmVzZXJ2ZWQ7CisgIGdydWJfdWlu
dDhfdCBjaGVja3N1bTsKKyAgZ3J1Yl91aW50MTZfdCBuYW1lMls2XTsKKyAgZ3J1Yl91aW50MTZf
dCBmaXJzdF9jbHVzdGVyOworICBncnViX3VpbnQxNl90IG5hbWUzWzJdOworfSBfX2F0dHJpYnV0
ZV9fICgocGFja2VkKSk7CisKK3N0cnVjdCBncnViX2ZhdF9kYXRhCit7CisgIGludCBsb2dpY2Fs
X3NlY3Rvcl9iaXRzOworICBncnViX3VpbnQzMl90IG51bV9zZWN0b3JzOworCisgIGdydWJfdWlu
dDE2X3QgZmF0X3NlY3RvcjsKKyAgZ3J1Yl91aW50MzJfdCBzZWN0b3JzX3Blcl9mYXQ7CisgIGlu
dCBmYXRfc2l6ZTsKKworICBncnViX3VpbnQzMl90IHJvb3RfY2x1c3RlcjsKKyAgZ3J1Yl91aW50
MzJfdCByb290X3NlY3RvcjsKKyAgZ3J1Yl91aW50MzJfdCBudW1fcm9vdF9zZWN0b3JzOworCisg
IGludCBjbHVzdGVyX2JpdHM7CisgIGdydWJfdWludDMyX3QgY2x1c3Rlcl9lb2ZfbWFyazsKKyAg
Z3J1Yl91aW50MzJfdCBjbHVzdGVyX3NlY3RvcjsKKyAgZ3J1Yl91aW50MzJfdCBudW1fY2x1c3Rl
cnM7CisKKyAgZ3J1Yl91aW50OF90IGF0dHI7CisgIGdydWJfc3NpemVfdCBmaWxlX3NpemU7Cisg
IGdydWJfdWludDMyX3QgZmlsZV9jbHVzdGVyOworICBncnViX3VpbnQzMl90IGN1cl9jbHVzdGVy
X251bTsKKyAgZ3J1Yl91aW50MzJfdCBjdXJfY2x1c3RlcjsKKworICBncnViX3VpbnQzMl90IHV1
aWQ7Cit9OworCisKKworCisKKworc3RydWN0IGdydWJfZmF0X2RhdGEqIGdydWJfZmF0X21vdW50
IChCbG9ja0RyaXZlclN0YXRlICpicywgdWludDMyX3QgcGFydF9vZmZfc2VjdG9yKTsKK2ludCBn
cnViX2ZhdF9pdGVyYXRlX2RpciAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBncnViX2Zh
dF9kYXRhICpkYXRhKTsKKworCisKKworCisKKworI2VuZGlmCmRpZmYgLS1leGNsdWRlPS5zdm4g
LXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvZnMtdHlw
ZXMuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mcy10eXBlcy5o
Ci0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mcy10eXBlcy5o
CTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mcy10eXBlcy5oCTIwMTItMTItMjggMTY6MDI6NDEu
MDA5OTM3NzM4ICswODAwCkBAIC0wLDAgKzEsMjI4IEBACisvKgorICogIEdSVUIgIC0tICBHUmFu
ZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDIsMjAwNSwyMDA2LDIw
MDcsMjAwOCwyMDA5ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4KKyAqCisgKiAgR1JV
QiBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5
CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5z
ZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBlaXRo
ZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChhdCB5b3VyIG9wdGlvbikgYW55
IGxhdGVyIHZlcnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUg
dGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdp
dGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICogIE1FUkNIQU5UQUJJTElUWSBv
ciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUKKyAqICBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgorICoKKyAqICBZb3Ugc2hvdWxk
IGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZQor
ICogIGFsb25nIHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGlj
ZW5zZXMvPi4KKyAqLworCisjaWZuZGVmIEdSVUJfVFlQRVNfSEVBREVSCisjZGVmaW5lIEdSVUJf
VFlQRVNfSEVBREVSCTEKKworI2luY2x1ZGUgPGNvbmZpZy5oPgorI2luY2x1ZGUgPHg4Nl82NC90
eXBlcy5oPgorCisjaWZkZWYgR1JVQl9VVElMCisjIGRlZmluZSBHUlVCX0NQVV9TSVpFT0ZfVk9J
RF9QCVNJWkVPRl9WT0lEX1AKKyMgZGVmaW5lIEdSVUJfQ1BVX1NJWkVPRl9MT05HCVNJWkVPRl9M
T05HCisjIGlmZGVmIFdPUkRTX0JJR0VORElBTgorIyAgZGVmaW5lIEdSVUJfQ1BVX1dPUkRTX0JJ
R0VORElBTgkxCisjIGVsc2UKKyMgIHVuZGVmIEdSVUJfQ1BVX1dPUkRTX0JJR0VORElBTgorIyBl
bmRpZgorI2Vsc2UgLyogISBHUlVCX1VUSUwgKi8KKyMgZGVmaW5lIEdSVUJfQ1BVX1NJWkVPRl9W
T0lEX1AJR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAorIyBkZWZpbmUgR1JVQl9DUFVfU0laRU9G
X0xPTkcJR1JVQl9UQVJHRVRfU0laRU9GX0xPTkcKKyMgaWZkZWYgR1JVQl9UQVJHRVRfV09SRFNf
QklHRU5ESUFOCisjICBkZWZpbmUgR1JVQl9DUFVfV09SRFNfQklHRU5ESUFOCTEKKyMgZWxzZQor
IyAgdW5kZWYgR1JVQl9DUFVfV09SRFNfQklHRU5ESUFOCisjIGVuZGlmCisjZW5kaWYgLyogISBH
UlVCX1VUSUwgKi8KKworI2lmIEdSVUJfQ1BVX1NJWkVPRl9WT0lEX1AgIT0gNCAmJiBHUlVCX0NQ
VV9TSVpFT0ZfVk9JRF9QICE9IDgKKyMgZXJyb3IgIlRoaXMgYXJjaGl0ZWN0dXJlIGlzIG5vdCBz
dXBwb3J0ZWQgYmVjYXVzZSBzaXplb2Yodm9pZCAqKSAhPSA0IGFuZCBzaXplb2Yodm9pZCAqKSAh
PSA4IgorI2VuZGlmCisKKyNpZm5kZWYgR1JVQl9UQVJHRVRfV09SRFNJWkUKKyMgaWYgR1JVQl9U
QVJHRVRfU0laRU9GX1ZPSURfUCA9PSA0CisjICBkZWZpbmUgR1JVQl9UQVJHRVRfV09SRFNJWkUg
MzIKKyMgZWxpZiBHUlVCX1RBUkdFVF9TSVpFT0ZfVk9JRF9QID09IDgKKyMgIGRlZmluZSBHUlVC
X1RBUkdFVF9XT1JEU0laRSA2NAorIyBlbmRpZgorI2VuZGlmCisKKy8qIERlZmluZSB2YXJpb3Vz
IHdpZGUgaW50ZWdlcnMuICAqLwordHlwZWRlZiBzaWduZWQgY2hhcgkJZ3J1Yl9pbnQ4X3Q7Cit0
eXBlZGVmIHNob3J0CQkJZ3J1Yl9pbnQxNl90OwordHlwZWRlZiBpbnQJCQlncnViX2ludDMyX3Q7
CisjaWYgR1JVQl9DUFVfU0laRU9GX0xPTkcgPT0gOAordHlwZWRlZiBsb25nCQkJZ3J1Yl9pbnQ2
NF90OworI2Vsc2UKK3R5cGVkZWYgbG9uZyBsb25nCQlncnViX2ludDY0X3Q7CisjZW5kaWYKKwor
dHlwZWRlZiB1bnNpZ25lZCBjaGFyCQlncnViX3VpbnQ4X3Q7Cit0eXBlZGVmIHVuc2lnbmVkIHNo
b3J0CQlncnViX3VpbnQxNl90OwordHlwZWRlZiB1bnNpZ25lZAkJZ3J1Yl91aW50MzJfdDsKKyNp
ZiBHUlVCX0NQVV9TSVpFT0ZfTE9ORyA9PSA4Cit0eXBlZGVmIHVuc2lnbmVkIGxvbmcJCWdydWJf
dWludDY0X3Q7CisjZWxzZQordHlwZWRlZiB1bnNpZ25lZCBsb25nIGxvbmcJZ3J1Yl91aW50NjRf
dDsKKyNlbmRpZgorCisvKiBNaXNjIHR5cGVzLiAgKi8KKyNpZiBHUlVCX1RBUkdFVF9TSVpFT0Zf
Vk9JRF9QID09IDgKK3R5cGVkZWYgZ3J1Yl91aW50NjRfdAlncnViX3RhcmdldF9hZGRyX3Q7Cit0
eXBlZGVmIGdydWJfdWludDY0X3QJZ3J1Yl90YXJnZXRfb2ZmX3Q7Cit0eXBlZGVmIGdydWJfdWlu
dDY0X3QJZ3J1Yl90YXJnZXRfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDY0X3QJZ3J1Yl90YXJn
ZXRfc3NpemVfdDsKKyNlbHNlCit0eXBlZGVmIGdydWJfdWludDMyX3QJZ3J1Yl90YXJnZXRfYWRk
cl90OwordHlwZWRlZiBncnViX3VpbnQzMl90CWdydWJfdGFyZ2V0X29mZl90OwordHlwZWRlZiBn
cnViX3VpbnQzMl90CWdydWJfdGFyZ2V0X3NpemVfdDsKK3R5cGVkZWYgZ3J1Yl9pbnQzMl90CWdy
dWJfdGFyZ2V0X3NzaXplX3Q7CisjZW5kaWYKKworI2lmIEdSVUJfQ1BVX1NJWkVPRl9WT0lEX1Ag
PT0gOAordHlwZWRlZiBncnViX3VpbnQ2NF90CWdydWJfYWRkcl90OwordHlwZWRlZiBncnViX3Vp
bnQ2NF90CWdydWJfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDY0X3QJZ3J1Yl9zc2l6ZV90Owor
I2Vsc2UKK3R5cGVkZWYgZ3J1Yl91aW50MzJfdAlncnViX2FkZHJfdDsKK3R5cGVkZWYgZ3J1Yl91
aW50MzJfdAlncnViX3NpemVfdDsKK3R5cGVkZWYgZ3J1Yl9pbnQzMl90CWdydWJfc3NpemVfdDsK
KyNlbmRpZgorCisjaWYgR1JVQl9DUFVfU0laRU9GX1ZPSURfUCA9PSA4CisjIGRlZmluZSBHUlVC
X1VMT05HX01BWCAxODQ0Njc0NDA3MzcwOTU1MTYxNVVMCisjIGRlZmluZSBHUlVCX0xPTkdfTUFY
IDkyMjMzNzIwMzY4NTQ3NzU4MDdMCisjIGRlZmluZSBHUlVCX0xPTkdfTUlOICgtOTIyMzM3MjAz
Njg1NDc3NTgwN0wgLSAxKQorI2Vsc2UKKyMgZGVmaW5lIEdSVUJfVUxPTkdfTUFYIDQyOTQ5Njcy
OTVVTAorIyBkZWZpbmUgR1JVQl9MT05HX01BWCAyMTQ3NDgzNjQ3TAorIyBkZWZpbmUgR1JVQl9M
T05HX01JTiAoLTIxNDc0ODM2NDdMIC0gMSkKKyNlbmRpZgorCisjaWYgR1JVQl9DUFVfU0laRU9G
X1ZPSURfUCA9PSA0CisjZGVmaW5lIFVJTlRfVE9fUFRSKHgpICgodm9pZCopKGdydWJfdWludDMy
X3QpKHgpKQorI2RlZmluZSBQVFJfVE9fVUlOVDY0KHgpICgoZ3J1Yl91aW50NjRfdCkoZ3J1Yl91
aW50MzJfdCkoeCkpCisjZGVmaW5lIFBUUl9UT19VSU5UMzIoeCkgKChncnViX3VpbnQzMl90KSh4
KSkKKyNlbHNlCisjZGVmaW5lIFVJTlRfVE9fUFRSKHgpICgodm9pZCopKGdydWJfdWludDY0X3Qp
KHgpKQorI2RlZmluZSBQVFJfVE9fVUlOVDY0KHgpICgoZ3J1Yl91aW50NjRfdCkoeCkpCisjZGVm
aW5lIFBUUl9UT19VSU5UMzIoeCkgKChncnViX3VpbnQzMl90KShncnViX3VpbnQ2NF90KSh4KSkK
KyNlbmRpZgorCisvKiBUaGUgdHlwZSBmb3IgcmVwcmVzZW50aW5nIGEgZmlsZSBvZmZzZXQuICAq
LwordHlwZWRlZiBncnViX3VpbnQ2NF90CWdydWJfb2ZmX3Q7CisKKy8qIFRoZSB0eXBlIGZvciBy
ZXByZXNlbnRpbmcgYSBkaXNrIGJsb2NrIGFkZHJlc3MuICAqLwordHlwZWRlZiBncnViX3VpbnQ2
NF90CWdydWJfZGlza19hZGRyX3Q7CisKKy8qIEJ5dGUtb3JkZXJzLiAgKi8KKyNkZWZpbmUgZ3J1
Yl9zd2FwX2J5dGVzMTYoeCkJXAorKHsgXAorICAgZ3J1Yl91aW50MTZfdCBfeCA9ICh4KTsgXAor
ICAgKGdydWJfdWludDE2X3QpICgoX3ggPDwgOCkgfCAoX3ggPj4gOCkpOyBcCit9KQorCisjaWYg
ZGVmaW5lZChfX0dOVUNfXykgJiYgKF9fR05VQ19fID4gMykgJiYgKF9fR05VQ19fID4gNCB8fCBf
X0dOVUNfTUlOT1JfXyA+PSAzKSAmJiBkZWZpbmVkKEdSVUJfVEFSR0VUX0kzODYpCitzdGF0aWMg
aW5saW5lIGdydWJfdWludDMyX3QgZ3J1Yl9zd2FwX2J5dGVzMzIoZ3J1Yl91aW50MzJfdCB4KQor
eworCXJldHVybiBfX2J1aWx0aW5fYnN3YXAzMih4KTsKK30KKworc3RhdGljIGlubGluZSBncnVi
X3VpbnQ2NF90IGdydWJfc3dhcF9ieXRlczY0KGdydWJfdWludDY0X3QgeCkKK3sKKwlyZXR1cm4g
X19idWlsdGluX2Jzd2FwNjQoeCk7Cit9CisjZWxzZQkJCQkJLyogbm90IGdjYyA0LjMgb3IgbmV3
ZXIgKi8KKyNkZWZpbmUgZ3J1Yl9zd2FwX2J5dGVzMzIoeCkJXAorKHsgXAorICAgZ3J1Yl91aW50
MzJfdCBfeCA9ICh4KTsgXAorICAgKGdydWJfdWludDMyX3QpICgoX3ggPDwgMjQpIFwKKyAgICAg
ICAgICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDMyX3QpIDB4RkYwMFVMKSA8PCA4KSBc
CisgICAgICAgICAgICAgICAgICAgIHwgKChfeCAmIChncnViX3VpbnQzMl90KSAweEZGMDAwMFVM
KSA+PiA4KSBcCisgICAgICAgICAgICAgICAgICAgIHwgKF94ID4+IDI0KSk7IFwKK30pCisKKyNk
ZWZpbmUgZ3J1Yl9zd2FwX2J5dGVzNjQoeCkJXAorKHsgXAorICAgZ3J1Yl91aW50NjRfdCBfeCA9
ICh4KTsgXAorICAgKGdydWJfdWludDY0X3QpICgoX3ggPDwgNTYpIFwKKyAgICAgICAgICAgICAg
ICAgICAgfCAoKF94ICYgKGdydWJfdWludDY0X3QpIDB4RkYwMFVMTCkgPDwgNDApIFwKKyAgICAg
ICAgICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDY0X3QpIDB4RkYwMDAwVUxMKSA8PCAy
NCkgXAorICAgICAgICAgICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAw
MDAwMFVMTCkgPDwgOCkgXAorICAgICAgICAgICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50
NjRfdCkgMHhGRjAwMDAwMDAwVUxMKSA+PiA4KSBcCisgICAgICAgICAgICAgICAgICAgIHwgKChf
eCAmIChncnViX3VpbnQ2NF90KSAweEZGMDAwMDAwMDAwMFVMTCkgPj4gMjQpIFwKKyAgICAgICAg
ICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDY0X3QpIDB4RkYwMDAwMDAwMDAwMDBVTEwp
ID4+IDQwKSBcCisgICAgICAgICAgICAgICAgICAgIHwgKF94ID4+IDU2KSk7IFwKK30pCisjZW5k
aWYJCQkJCS8qIG5vdCBnY2MgNC4zIG9yIG5ld2VyICovCisKKyNpZmRlZiBHUlVCX0NQVV9XT1JE
U19CSUdFTkRJQU4KKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlMTYoeCkJZ3J1Yl9zd2FwX2J5dGVz
MTYoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlMzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkK
KyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlNjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZGVm
aW5lIGdydWJfbGVfdG9fY3B1MTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgZGVmaW5lIGdy
dWJfbGVfdG9fY3B1MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgZGVmaW5lIGdydWJfbGVf
dG9fY3B1NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2Jl
MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjIGRlZmluZSBncnViX2NwdV90b19iZTMyKHgp
CSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fYmU2NCh4KQkoKGdy
dWJfdWludDY0X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MTYoeCkJKChncnViX3Vp
bnQxNl90KSAoeCkpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTMyKHgpCSgoZ3J1Yl91aW50MzJf
dCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9iZV90b19jcHU2NCh4KQkoKGdydWJfdWludDY0X3QpICh4
KSkKKyMgaWZkZWYgR1JVQl9UQVJHRVRfV09SRFNfQklHRU5ESUFOCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJf
dGFyZ2V0X3RvX2hvc3QzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBncnVi
X3RhcmdldF90b19ob3N0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjICBkZWZpbmUgZ3J1
Yl9ob3N0X3RvX3RhcmdldDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBn
cnViX2hvc3RfdG9fdGFyZ2V0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjIGVsc2UgLyog
ISBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4gKi8KKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0MTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9f
dGFyZ2V0MTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9f
dGFyZ2V0MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9f
dGFyZ2V0NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZW5kaWYKKyNlbHNlIC8qICEgV09S
RFNfQklHRU5ESUFOICovCisjIGRlZmluZSBncnViX2NwdV90b19sZTE2KHgpCSgoZ3J1Yl91aW50
MTZfdCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGUzMih4KQkoKGdydWJfdWludDMyX3Qp
ICh4KSkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlNjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkp
CisjIGRlZmluZSBncnViX2xlX3RvX2NwdTE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyBk
ZWZpbmUgZ3J1Yl9sZV90b19jcHUzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgZGVmaW5l
IGdydWJfbGVfdG9fY3B1NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjIGRlZmluZSBncnVi
X2NwdV90b19iZTE2KHgpCWdydWJfc3dhcF9ieXRlczE2KHgpCisjIGRlZmluZSBncnViX2NwdV90
b19iZTMyKHgpCWdydWJfc3dhcF9ieXRlczMyKHgpCisjIGRlZmluZSBncnViX2NwdV90b19iZTY0
KHgpCWdydWJfc3dhcF9ieXRlczY0KHgpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTE2KHgpCWdy
dWJfc3dhcF9ieXRlczE2KHgpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTMyKHgpCWdydWJfc3dh
cF9ieXRlczMyKHgpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTY0KHgpCWdydWJfc3dhcF9ieXRl
czY0KHgpCisjIGlmZGVmIEdSVUJfVEFSR0VUX1dPUkRTX0JJR0VORElBTgorIyAgZGVmaW5lIGdy
dWJfdGFyZ2V0X3RvX2hvc3QxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdy
dWJfdGFyZ2V0X3RvX2hvc3QzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdy
dWJfdGFyZ2V0X3RvX2hvc3Q2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQ2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBlbHNlIC8qICEg
R1JVQl9UQVJHRVRfV09SRFNfQklHRU5ESUFOICovCisjICBkZWZpbmUgZ3J1Yl90YXJnZXRfdG9f
aG9zdDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3Rv
X2hvc3QzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl9ob3N0X3Rv
X3RhcmdldDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfaG9zdF90
b190YXJnZXQzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBncnViX2hvc3Rf
dG9fdGFyZ2V0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjIGVuZGlmCisjZW5kaWYgLyog
ISBXT1JEU19CSUdFTkRJQU4gKi8KKworI2lmIEdSVUJfVEFSR0VUX1NJWkVPRl9WT0lEX1AgPT0g
OAorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJnZXRfYWRkcih4KSBncnViX2hvc3RfdG9fdGFy
Z2V0NjQoeCkKKyNlbHNlCisjICBkZWZpbmUgZ3J1Yl9ob3N0X3RvX3RhcmdldF9hZGRyKHgpIGdy
dWJfaG9zdF90b190YXJnZXQzMih4KQorI2VuZGlmCisKKyNlbmRpZiAvKiAhIEdSVUJfVFlQRVNf
SEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvaTM4Ni90eXBlcy5oIHhlbi00LjEuMi1iL3Rvb2xzL2lv
ZW11LXFlbXUteGVuL2dydWItZmF0L2kzODYvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvaTM4Ni90eXBlcy5oCTE5NzAtMDEtMDEgMDc6MDA6MDAu
MDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnVi
LWZhdC9pMzg2L3R5cGVzLmgJMjAxMi0xMi0yOCAxNjowMjo0MS4wMTA5Mzc2MTkgKzA4MDAKQEAg
LTAsMCArMSwzNSBAQAorLyoKKyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290bG9hZGVy
CisgKiAgQ29weXJpZ2h0IChDKSAyMDAyLDIwMDYsMjAwNyAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0
aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29m
dHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAq
ICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRp
c3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJ
VEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK
KyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0Uu
ICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWls
cy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdl
bmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8
aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX1RZUEVT
X0NQVV9IRUFERVIKKyNkZWZpbmUgR1JVQl9UWVBFU19DUFVfSEVBREVSCTEKKworLyogVGhlIHNp
emUgb2Ygdm9pZCAqLiAgKi8KKyNkZWZpbmUgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAk0CisK
Ky8qIFRoZSBzaXplIG9mIGxvbmcuICAqLworI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpFT0ZfTE9O
RwkJNAorCisvKiBpMzg2IGlzIGxpdHRsZS1lbmRpYW4uICAqLworI3VuZGVmIEdSVUJfVEFSR0VU
X1dPUkRTX0JJR0VORElBTgorCisjZGVmaW5lIEdSVUJfVEFSR0VUX0kzODYJCTEKKworI2RlZmlu
ZSBHUlVCX1RBUkdFVF9NSU5fQUxJR04JCTEKKworI2VuZGlmIC8qICEgR1JVQl9UWVBFU19DUFVf
SEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvbWlzYy5oIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFl
bXUteGVuL2dydWItZmF0L21pc2MuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vZ3J1Yi1mYXQvbWlzYy5oCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisr
KyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9taXNjLmgJMjAxMi0x
Mi0yOCAxNjowMjo0MS4wMTA5Mzc2MTkgKzA4MDAKQEAgLTAsMCArMSwxNyBAQAoraW50CitncnVi
X3N0cm5jbXAgKGNvbnN0IGNoYXIgKnMxLCBjb25zdCBjaGFyICpzMiwgZ3J1Yl9zaXplX3QgbikK
K3sKKyAgaWYgKG4gPT0gMCkKKyAgICByZXR1cm4gMDsKKworICB3aGlsZSAoKnMxICYmICpzMiAm
JiAtLW4pCisgICAgeworICAgICAgaWYgKCpzMSAhPSAqczIpCisgICAgICAgIGJyZWFrOworCisg
ICAgICBzMSsrOworICAgICAgczIrKzsKKyAgICB9CisKKyAgcmV0dXJuIChpbnQpICpzMSAtIChp
bnQpICpzMjsKK30KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC94ODZfNjQvdHlwZXMuaCB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC94ODZfNjQvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEv
dG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQveDg2XzY0L3R5cGVzLmgJMTk3MC0wMS0wMSAw
NzowMDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUt
eGVuL2dydWItZmF0L3g4Nl82NC90eXBlcy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDExNzY0MTU5
ICswODAwCkBAIC0wLDAgKzEsMzkgQEAKKy8qCisgKiAgR1JVQiAgLS0gIEdSYW5kIFVuaWZpZWQg
Qm9vdGxvYWRlcgorICogIENvcHlyaWdodCAoQykgMjAwOCAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0
aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29m
dHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAq
ICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRp
c3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJ
VEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK
KyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0Uu
ICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWls
cy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdl
bmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8
aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX1RZUEVT
X0NQVV9IRUFERVIKKyNkZWZpbmUgR1JVQl9UWVBFU19DUFVfSEVBREVSCTEKKworLyogVGhlIHNp
emUgb2Ygdm9pZCAqLiAgKi8KKyNkZWZpbmUgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAk4CisK
Ky8qIFRoZSBzaXplIG9mIGxvbmcuICAqLworI2lmZGVmIF9fTUlOR1czMl9fCisjZGVmaW5lIEdS
VUJfVEFSR0VUX1NJWkVPRl9MT05HCQk0CisjZWxzZQorI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpF
T0ZfTE9ORwkJOAorI2VuZGlmCisKKy8qIHg4Nl82NCBpcyBsaXR0bGUtZW5kaWFuLiAgKi8KKyN1
bmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4KKworI2RlZmluZSBHUlVCX1RBUkdFVF9Y
ODZfNjQJCTEKKworI2RlZmluZSBHUlVCX1RBUkdFVF9NSU5fQUxJR04JCTEKKworI2VuZGlmIC8q
ICEgR1JVQl9UWVBFU19DUFVfSEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTgg
eGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vaTM4Ni90eXBlcy5oIHhlbi00LjEuMi1i
L3Rvb2xzL2lvZW11LXFlbXUteGVuL2kzODYvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vaTM4Ni90eXBlcy5oCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAw
ICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9pMzg2L3R5cGVzLmgJ
MjAxMi0xMi0yOCAxNjowMjo0MS4wMTc4MDIzNzEgKzA4MDAKQEAgLTAsMCArMSwzNSBAQAorLyoK
KyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290bG9hZGVyCisgKiAgQ29weXJpZ2h0IChD
KSAyMDAyLDIwMDYsMjAwNyAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgorICog
IEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1v
ZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExp
Y2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwg
ZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRpb24p
IGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBo
b3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZ
OyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFCSUxJ
VFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91IHNo
b3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vu
c2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3Jn
L2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX1RZUEVTX0NQVV9IRUFERVIKKyNkZWZp
bmUgR1JVQl9UWVBFU19DUFVfSEVBREVSCTEKKworLyogVGhlIHNpemUgb2Ygdm9pZCAqLiAgKi8K
KyNkZWZpbmUgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAk0CisKKy8qIFRoZSBzaXplIG9mIGxv
bmcuICAqLworI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpFT0ZfTE9ORwkJNAorCisvKiBpMzg2IGlz
IGxpdHRsZS1lbmRpYW4uICAqLworI3VuZGVmIEdSVUJfVEFSR0VUX1dPUkRTX0JJR0VORElBTgor
CisjZGVmaW5lIEdSVUJfVEFSR0VUX0kzODYJCTEKKworI2RlZmluZSBHUlVCX1RBUkdFVF9NSU5f
QUxJR04JCTEKKworI2VuZGlmIC8qICEgR1JVQl9UWVBFU19DUFVfSEVBREVSICovCmRpZmYgLS1l
eGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vTWFr
ZWZpbGUgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vTWFrZWZpbGUKLS0tIHhlbi00
LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL01ha2VmaWxlCTIwMTEtMDItMTIgMDE6NTQ6NTEu
MDAwMDAwMDAwICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9NYWtl
ZmlsZQkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxMTc2NDE1OSArMDgwMApAQCAtMTg4LDE3ICsxODgs
MTggQEAgbGlicWVtdV9jb21tb24uYTogJChPQkpTKQogIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMKICMgVVNFUl9P
QkpTIGlzIGNvZGUgdXNlZCBieSBxZW11IHVzZXJzcGFjZSBlbXVsYXRpb24KIFVTRVJfT0JKUz1j
dXRpbHMubyAgY2FjaGUtdXRpbHMubwogCiBsaWJxZW11X3VzZXIuYTogJChVU0VSX09CSlMpCiAK
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMKIAotcWVtdS1pbWckKEVYRVNVRik6IHFlbXUtaW1nLm8gcWVtdS10b29s
Lm8gb3NkZXAubyAkKEJMT0NLX09CSlMpCisKK3FlbXUtaW1nJChFWEVTVUYpOmZzLXRpbWUubyBn
cnViX2Vyci5vIHBhcnRpdGlvbi5vIGZzaGVscC5vIG50ZnMubyBmYXQubyBtaXNjLm8gZGVidWcu
byBxZW11LWltZy5vIHFlbXUtdG9vbC5vIG9zZGVwLm8gJChCTE9DS19PQkpTKQogCiBxZW11LW5i
ZCQoRVhFU1VGKTogIHFlbXUtbmJkLm8gcWVtdS10b29sLm8gb3NkZXAubyAkKEJMT0NLX09CSlMp
CiAKIHFlbXUtaW1nJChFWEVTVUYpIHFlbXUtbmJkJChFWEVTVUYpOiBMSUJTICs9IC1segogCiAK
IGNsZWFuOgogIyBhdm9pZCBvbGQgYnVpbGQgcHJvYmxlbXMgYnkgcmVtb3ZpbmcgcG90ZW50aWFs
bHkgaW5jb3JyZWN0IG9sZCBmaWxlcwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00
LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL01ha2VmaWxlLm9yaWcgeGVuLTQuMS4yLWIvdG9v
bHMvaW9lbXUtcWVtdS14ZW4vTWFrZWZpbGUub3JpZwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vTWFrZWZpbGUub3JpZwkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCAr
MDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vTWFrZWZpbGUub3JpZwky
MDEyLTEyLTI4IDE1OjU5OjM1LjM1NDY4MTYzNCArMDgwMApAQCAtMCwwICsxLDM3MiBAQAorIyBN
YWtlZmlsZSBmb3IgUUVNVS4KKworaW5jbHVkZSBjb25maWctaG9zdC5tYWsKK2luY2x1ZGUgJChT
UkNfUEFUSCkvcnVsZXMubWFrCisKKy5QSE9OWTogYWxsIGNsZWFuIGNzY29wZSBkaXN0Y2xlYW4g
ZHZpIGh0bWwgaW5mbyBpbnN0YWxsIGluc3RhbGwtZG9jIFwKKwlyZWN1cnNlLWFsbCBzcGVlZCB0
YXIgdGFyYmluIHRlc3QKKworVlBBVEg9JChTUkNfUEFUSCk6JChTUkNfUEFUSCkvaHcKKworCitD
RkxBR1MgKz0gJChPU19DRkxBR1MpICQoQVJDSF9DRkxBR1MpCitMREZMQUdTICs9ICQoT1NfTERG
TEFHUykgJChBUkNIX0xERkxBR1MpCisKK0NQUEZMQUdTICs9IC1JLiAtSSQoU1JDX1BBVEgpIC1N
TUQgLU1QIC1NVCAkQAorQ1BQRkxBR1MgKz0gLURfR05VX1NPVVJDRSAtRF9GSUxFX09GRlNFVF9C
SVRTPTY0IC1EX0xBUkdFRklMRV9TT1VSQ0UKK0xJQlM9CitpZmRlZiBDT05GSUdfU1RBVElDCitM
REZMQUdTICs9IC1zdGF0aWMKK2VuZGlmCitpZmRlZiBCVUlMRF9ET0NTCitET0NTPXFlbXUtZG9j
Lmh0bWwgcWVtdS10ZWNoLmh0bWwgcWVtdS4xIHFlbXUtaW1nLjEgcWVtdS1uYmQuOAorZWxzZQor
RE9DUz0KK2VuZGlmCisKK0xJQlMrPSQoQUlPTElCUykKKworaWZkZWYgQ09ORklHX1NPTEFSSVMK
K0xJQlMrPS1sc29ja2V0IC1sbnNsIC1scmVzb2x2CitlbmRpZgorCitpZmRlZiBDT05GSUdfV0lO
MzIKK0xJQlMrPS1sd2lubW0gLWx3czJfMzIgLWxpcGhscGFwaQorZW5kaWYKKworYWxsOiAkKFRP
T0xTKSAkKERPQ1MpIHJlY3Vyc2UtYWxsCisKK1NVQkRJUl9SVUxFUz0kKHBhdHN1YnN0ICUsc3Vi
ZGlyLSUsICQoVEFSR0VUX0RJUlMpKQorCitzdWJkaXItJToKKwkkKGNhbGwgcXVpZXQtY29tbWFu
ZCwkKE1BS0UpIC1DICQqIFY9IiQoVikiIFRBUkdFVF9ESVI9IiQqLyIgYWxsLCkKKworJChmaWx0
ZXIgJS1zb2Z0bW11LCQoU1VCRElSX1JVTEVTKSk6IGxpYnFlbXVfY29tbW9uLmEKKyQoZmlsdGVy
ICUtdXNlciwkKFNVQkRJUl9SVUxFUykpOiBsaWJxZW11X3VzZXIuYQorCityZWN1cnNlLWFsbDog
JChTVUJESVJfUlVMRVMpCisKK0NQUEZMQUdTICs9IC1JJChYRU5fUk9PVCkvdG9vbHMvbGlieGMK
K0NQUEZMQUdTICs9IC1JJChYRU5fUk9PVCkvdG9vbHMvYmxrdGFwL2xpYgorQ1BQRkxBR1MgKz0g
LUkkKFhFTl9ST09UKS90b29scy94ZW5zdG9yZQorQ1BQRkxBR1MgKz0gLUkkKFhFTl9ST09UKS90
b29scy9pbmNsdWRlCisKK3RhcGRpc2staW9lbXU6IHRhcGRpc2staW9lbXUuYyBjdXRpbHMuYyBi
bG9jay5jIGJsb2NrLXJhdy5jIGJsb2NrLWNvdy5jIGJsb2NrLXFjb3cuYyBhZXMuYyBibG9jay12
bWRrLmMgYmxvY2stY2xvb3AuYyBibG9jay1kbWcuYyBibG9jay1ib2Nocy5jIGJsb2NrLXZwYy5j
IGJsb2NrLXZ2ZmF0LmMgYmxvY2stcWNvdzIuYyBody94ZW5fYmxrdGFwLmMgb3NkZXAuYworCSQo
Q0MpIC1EUUVNVV9UT09MICQoQ0ZMQUdTKSAkKENQUEZMQUdTKSAkKEJBU0VfQ0ZMQUdTKSAkKExE
RkxBR1MpICQoQkFTRV9MREZMQUdTKSAtbyAkQCAkXiAtbHogJChMSUJTKQorCisjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIworIyBCTE9DS19PQkpTIGlzIGNvZGUgdXNlZCBieSBib3RoIHFlbXUgc3lzdGVtIGVtdWxh
dGlvbiBhbmQgcWVtdS1pbWcKKworQkxPQ0tfT0JKUz1jdXRpbHMubyBvc2RlcC5vIHFlbXUtbWFs
bG9jLm8KK0JMT0NLX09CSlMrPWJsb2NrLWNvdy5vIGJsb2NrLXFjb3cubyBhZXMubyBibG9jay12
bWRrLm8gYmxvY2stY2xvb3AubworQkxPQ0tfT0JKUys9YmxvY2stZG1nLm8gYmxvY2stYm9jaHMu
byBibG9jay12cGMubyBibG9jay12dmZhdC5vCitCTE9DS19PQkpTKz1ibG9jay1xY293Mi5vIGJs
b2NrLXBhcmFsbGVscy5vIGJsb2NrLW5iZC5vCitCTE9DS19PQkpTKz1uYmQubyBibG9jay5vIGFp
by5vCisKK2lmZGVmIENPTkZJR19XSU4zMgorQkxPQ0tfT0JKUyArPSBibG9jay1yYXctd2luMzIu
bworZWxzZQoraWZkZWYgQ09ORklHX0FJTworQkxPQ0tfT0JKUyArPSBwb3NpeC1haW8tY29tcGF0
Lm8KK2VuZGlmCitCTE9DS19PQkpTICs9IGJsb2NrLXJhdy1wb3NpeC5vCitlbmRpZgorCisjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjCisjIGxpYnFlbXVfY29tbW9uLmE6IFRhcmdldCBpbmRlcGVuZGVudCBwYXJ0IG9m
IHN5c3RlbSBlbXVsYXRpb24uIFRoZQorIyBsb25nIHRlcm0gcGF0aCBpcyB0byBzdXBwcmVzcyAq
YWxsKiB0YXJnZXQgc3BlY2lmaWMgY29kZSBpbiBjYXNlIG9mCisjIHN5c3RlbSBlbXVsYXRpb24s
IGkuZS4gYSBzaW5nbGUgUUVNVSBleGVjdXRhYmxlIHNob3VsZCBzdXBwb3J0IGFsbAorIyBDUFVz
IGFuZCBtYWNoaW5lcy4KKworT0JKUz0kKEJMT0NLX09CSlMpCitPQkpTKz1yZWFkbGluZS5vIGNv
bnNvbGUubworCitPQkpTKz1pcnEubworT0JKUys9aTJjLm8gc21idXMubyBzbWJ1c19lZXByb20u
byBtYXg3MzEwLm8gbWF4MTExeC5vIHdtODc1MC5vCitPQkpTKz1zc2QwMzAzLm8gc3NkMDMyMy5v
IGFkczc4NDYubyBzdGVsbGFyaXNfaW5wdXQubyB0d2w5MjIzMC5vCitPQkpTKz10bXAxMDUubyBs
bTgzMngubworT0JKUys9c2NzaS1kaXNrLm8gY2Ryb20ubworT0JKUys9c2NzaS1nZW5lcmljLm8K
K09CSlMrPXVzYi5vIHVzYi1odWIubyB1c2ItJChIT1NUX1VTQikubyB1c2ItaGlkLm8gdXNiLW1z
ZC5vIHVzYi13YWNvbS5vCitPQkpTKz11c2Itc2VyaWFsLm8gdXNiLW5ldC5vCitPQkpTKz1zZC5v
IHNzaS1zZC5vCitPQkpTKz1idC5vIGJ0LWhvc3QubyBidC12aGNpLm8gYnQtbDJjYXAubyBidC1z
ZHAubyBidC1oY2kubyBidC1oaWQubyB1c2ItYnQubworT0JKUys9YnVmZmVyZWRfZmlsZS5vIG1p
Z3JhdGlvbi5vIG1pZ3JhdGlvbi10Y3AubyBuZXQubyBxZW11LXNvY2tldHMubworT0JKUys9cWVt
dS1jaGFyLm8gYWlvLm8gbmV0LWNoZWNrc3VtLm8gc2F2ZXZtLm8gY2FjaGUtdXRpbHMubworCitp
ZmRlZiBDT05GSUdfQlJMQVBJCitPQkpTKz0gYmF1bS5vCitMSUJTKz0tbGJybGFwaQorZW5kaWYK
KworaWZkZWYgQ09ORklHX1dJTjMyCitPQkpTKz10YXAtd2luMzIubworZWxzZQorT0JKUys9bWln
cmF0aW9uLWV4ZWMubworZW5kaWYKKworQVVESU9fT0JKUyA9IGF1ZGlvLm8gbm9hdWRpby5vIHdh
dmF1ZGlvLm8gbWl4ZW5nLm8KK2lmZGVmIENPTkZJR19TREwKK0FVRElPX09CSlMgKz0gc2RsYXVk
aW8ubworZW5kaWYKK2lmZGVmIENPTkZJR19PU1MKK0FVRElPX09CSlMgKz0gb3NzYXVkaW8ubwor
ZW5kaWYKK2lmZGVmIENPTkZJR19DT1JFQVVESU8KK0FVRElPX09CSlMgKz0gY29yZWF1ZGlvLm8K
K0FVRElPX1BUID0geWVzCitlbmRpZgoraWZkZWYgQ09ORklHX0FMU0EKK0FVRElPX09CSlMgKz0g
YWxzYWF1ZGlvLm8KK2VuZGlmCitpZmRlZiBDT05GSUdfRFNPVU5ECitBVURJT19PQkpTICs9IGRz
b3VuZGF1ZGlvLm8KK2VuZGlmCitpZmRlZiBDT05GSUdfRk1PRAorQVVESU9fT0JKUyArPSBmbW9k
YXVkaW8ubworYXVkaW8vYXVkaW8ubyBhdWRpby9mbW9kYXVkaW8ubzogQ1BQRkxBR1MgOj0gLUkk
KENPTkZJR19GTU9EX0lOQykgJChDUFBGTEFHUykKK2VuZGlmCitpZmRlZiBDT05GSUdfRVNECitB
VURJT19QVCA9IHllcworQVVESU9fUFRfSU5UID0geWVzCitBVURJT19PQkpTICs9IGVzZGF1ZGlv
Lm8KK2VuZGlmCitpZmRlZiBDT05GSUdfUEEKK0FVRElPX1BUID0geWVzCitBVURJT19QVF9JTlQg
PSB5ZXMKK0FVRElPX09CSlMgKz0gcGFhdWRpby5vCitlbmRpZgoraWZkZWYgQVVESU9fUFQKK0xE
RkxBR1MgKz0gLXB0aHJlYWQKK2VuZGlmCitpZmRlZiBBVURJT19QVF9JTlQKK0FVRElPX09CSlMg
Kz0gYXVkaW9fcHRfaW50Lm8KK2VuZGlmCitBVURJT19PQkpTKz0gd2F2Y2FwdHVyZS5vCitpZmRl
ZiBDT05GSUdfQVVESU8KK09CSlMrPSQoYWRkcHJlZml4IGF1ZGlvLywgJChBVURJT19PQkpTKSkK
K2VuZGlmCisKK2lmZGVmIENPTkZJR19TREwKK09CSlMrPXNkbC5vIHhfa2V5bWFwLm8KK2VuZGlm
CitpZmRlZiBDT05GSUdfQ1VSU0VTCitPQkpTKz1jdXJzZXMubworZW5kaWYKK09CSlMrPXZuYy5v
IGQzZGVzLm8KKworaWZkZWYgQ09ORklHX0NPQ09BCitPQkpTKz1jb2NvYS5vCitlbmRpZgorCitp
ZmRlZiBDT05GSUdfU0xJUlAKK0NQUEZMQUdTKz0tSSQoU1JDX1BBVEgpL3NsaXJwCitTTElSUF9P
QkpTPWNrc3VtLm8gaWYubyBpcF9pY21wLm8gaXBfaW5wdXQubyBpcF9vdXRwdXQubyBcCitzbGly
cC5vIG1idWYubyBtaXNjLm8gc2J1Zi5vIHNvY2tldC5vIHRjcF9pbnB1dC5vIHRjcF9vdXRwdXQu
byBcCit0Y3Bfc3Vici5vIHRjcF90aW1lci5vIHVkcC5vIGJvb3RwLm8gZGVidWcubyB0ZnRwLm8K
K09CSlMrPSQoYWRkcHJlZml4IHNsaXJwLywgJChTTElSUF9PQkpTKSkKK2VuZGlmCisKK0xJQlMr
PSQoVkRFX0xJQlMpCisKK2NvY29hLm86IGNvY29hLm0KKworc2RsLm86IHNkbC5jIGtleW1hcHMu
YyBzZGxfa2V5c3ltLmgKKworc2RsLm8gYXVkaW8vc2RsYXVkaW8ubzogQ0ZMQUdTICs9ICQoU0RM
X0NGTEFHUykKKwordm5jLm86IHZuYy5jIGtleW1hcHMuYyBzZGxfa2V5c3ltLmggdm5jaGV4dGls
ZS5oIGQzZGVzLmMgZDNkZXMuaAorCit2bmMubzogQ0ZMQUdTICs9ICQoQ09ORklHX1ZOQ19UTFNf
Q0ZMQUdTKQorCitjdXJzZXMubzogY3Vyc2VzLmMga2V5bWFwcy5jIGN1cnNlc19rZXlzLmgKKwor
YnQtaG9zdC5vOiBDRkxBR1MgKz0gJChDT05GSUdfQkxVRVpfQ0ZMQUdTKQorCitsaWJxZW11X2Nv
bW1vbi5hOiAkKE9CSlMpCisKKyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjCisjIFVTRVJfT0JKUyBpcyBjb2RlIHVz
ZWQgYnkgcWVtdSB1c2Vyc3BhY2UgZW11bGF0aW9uCitVU0VSX09CSlM9Y3V0aWxzLm8gIGNhY2hl
LXV0aWxzLm8KKworbGlicWVtdV91c2VyLmE6ICQoVVNFUl9PQkpTKQorCisjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
CisKK3FlbXUtaW1nJChFWEVTVUYpOiBxZW11LWltZy5vIHFlbXUtdG9vbC5vIG9zZGVwLm8gJChC
TE9DS19PQkpTKQorCitxZW11LW5iZCQoRVhFU1VGKTogIHFlbXUtbmJkLm8gcWVtdS10b29sLm8g
b3NkZXAubyAkKEJMT0NLX09CSlMpCisKK3FlbXUtaW1nJChFWEVTVUYpIHFlbXUtbmJkJChFWEVT
VUYpOiBMSUJTICs9IC1segorCisKK2NsZWFuOgorIyBhdm9pZCBvbGQgYnVpbGQgcHJvYmxlbXMg
YnkgcmVtb3ZpbmcgcG90ZW50aWFsbHkgaW5jb3JyZWN0IG9sZCBmaWxlcworCXJtIC1mIGNvbmZp
Zy5tYWsgY29uZmlnLmggb3AtaTM4Ni5oIG9wYy1pMzg2LmggZ2VuLW9wLWkzODYuaCBvcC1hcm0u
aCBvcGMtYXJtLmggZ2VuLW9wLWFybS5oCisJcm0gLWYgKi5vIC4qLmQgKi5hICQoVE9PTFMpIFRB
R1MgY3Njb3BlLiogKi5wb2QgKn4gKi8qfgorCXJtIC1mIHNsaXJwLyoubyBzbGlycC8uKi5kIGF1
ZGlvLyoubyBhdWRpby8uKi5kCisJJChNQUtFKSAtQyB0ZXN0cyBjbGVhbgorCWZvciBkIGluICQo
VEFSR0VUX0RJUlMpOyBkbyBcCisJJChNQUtFKSAtQyAkJGQgJEAgfHwgZXhpdCAxIDsgXAorICAg
ICAgICBkb25lCisKK2Rpc3RjbGVhbjogY2xlYW4KKwlybSAtZiBjb25maWctaG9zdC5tYWsgY29u
ZmlnLWhvc3QuaCAkKERPQ1MpCisJcm0gLWYgcWVtdS17ZG9jLHRlY2h9LntpbmZvLGF1eCxjcCxk
dmksZm4saW5mbyxreSxsb2cscGcsdG9jLHRwLHZyfQorCWZvciBkIGluICQoVEFSR0VUX0RJUlMp
OyBkbyBcCisJcm0gLXJmICQkZCB8fCBleGl0IDEgOyBcCisgICAgICAgIGRvbmUKKworS0VZTUFQ
Uz1kYSAgICAgZW4tZ2IgIGV0ICBmciAgICAgZnItY2ggIGlzICBsdCAgbW9kaWZpZXJzICBubyAg
cHQtYnIgIHN2IFwKK2FyICAgICAgZGUgICAgIGVuLXVzICBmaSAgZnItYmUgIGhyICAgICBpdCAg
bHYgIG5sICAgICAgICAgcGwgIHJ1ICAgICB0aCBcCitjb21tb24gIGRlLWNoICBlcyAgICAgZm8g
IGZyLWNhICBodSAgICAgamEgIG1rICBubC1iZSAgICAgIHB0ICBzbCAgICAgdHIKKworaWZkZWYg
SU5TVEFMTF9CTE9CUworQkxPQlM9Ymlvcy5iaW4gdmdhYmlvcy5iaW4gdmdhYmlvcy1jaXJydXMu
YmluIHBwY19yb20uYmluIFwKK3ZpZGVvLnggb3BlbmJpb3Mtc3BhcmMzMiBvcGVuYmlvcy1zcGFy
YzY0IG9wZW5iaW9zLXBwYyBcCitweGUtbmUya19wY2kuYmluIHB4ZS1ydGw4MTM5LmJpbiBweGUt
cGNuZXQuYmluIHB4ZS1lMTAwMC5iaW4gXAorYmFtYm9vLmR0YgorZWxzZQorQkxPQlM9CitlbmRp
ZgorCitpbnN0YWxsLWRvYzogJChET0NTKQorCW1rZGlyIC1wICIkKERFU1RESVIpJChkb2NkaXIp
IgorCSQoSU5TVEFMTCkgLW0gNjQ0IHFlbXUtZG9jLmh0bWwgIHFlbXUtdGVjaC5odG1sICIkKERF
U1RESVIpJChkb2NkaXIpIgoraWZuZGVmIENPTkZJR19XSU4zMgorCW1rZGlyIC1wICIkKERFU1RE
SVIpJChtYW5kaXIpL21hbjEiCisJJChJTlNUQUxMKSAtbSA2NDQgcWVtdS4xIHFlbXUtaW1nLjEg
IiQoREVTVERJUikkKG1hbmRpcikvbWFuMSIKKwlta2RpciAtcCAiJChERVNURElSKSQobWFuZGly
KS9tYW44IgorCSQoSU5TVEFMTCkgLW0gNjQ0IHFlbXUtbmJkLjggIiQoREVTVERJUikkKG1hbmRp
cikvbWFuOCIKK2VuZGlmCisKK2luc3RhbGw6IGFsbCAkKGlmICQoQlVJTERfRE9DUyksaW5zdGFs
bC1kb2MpCisJbWtkaXIgLXAgIiQoREVTVERJUikkKGJpbmRpcikiCitpZm5lcSAoJChUT09MUyks
KQorCSQoSU5TVEFMTCkgLW0gNzU1IC1zICQoVE9PTFMpICIkKERFU1RESVIpJChiaW5kaXIpIgor
ZW5kaWYKK2lmbmVxICgkKEJMT0JTKSwpCisJbWtkaXIgLXAgIiQoREVTVERJUikkKGRhdGFkaXIp
IgorCXNldCAtZTsgZm9yIHggaW4gJChCTE9CUyk7IGRvIFwKKwkJJChJTlNUQUxMKSAtbSA2NDQg
JChTUkNfUEFUSCkvcGMtYmlvcy8kJHggIiQoREVTVERJUikkKGRhdGFkaXIpIjsgXAorCWRvbmUK
K2VuZGlmCitpZm5kZWYgQ09ORklHX1dJTjMyCisJbWtkaXIgLXAgIiQoREVTVERJUikkKGRhdGFk
aXIpL2tleW1hcHMiCisJc2V0IC1lOyBmb3IgeCBpbiAkKEtFWU1BUFMpOyBkbyBcCisJCSQoSU5T
VEFMTCkgLW0gNjQ0ICQoU1JDX1BBVEgpL2tleW1hcHMvJCR4ICIkKERFU1RESVIpJChkYXRhZGly
KS9rZXltYXBzIjsgXAorCWRvbmUKK2VuZGlmCisJZm9yIGQgaW4gJChUQVJHRVRfRElSUyk7IGRv
IFwKKwkkKE1BS0UpIC1DICQkZCAkQCB8fCBleGl0IDEgOyBcCisgICAgICAgIGRvbmUKKworIyB2
YXJpb3VzIHRlc3QgdGFyZ2V0cwordGVzdCBzcGVlZDogYWxsCisJJChNQUtFKSAtQyB0ZXN0cyAk
QAorCitUQUdTOgorCWV0YWdzICouW2NoXSB0ZXN0cy8qLltjaF0KKworY3Njb3BlOgorCXJtIC1m
IC4vY3Njb3BlLioKKwlmaW5kIC4gLW5hbWUgIiouW2NoXSIgLXByaW50IHwgc2VkICdzLF5cLi8s
LCcgPiAuL2NzY29wZS5maWxlcworCWNzY29wZSAtYgorCisjIGRvY3VtZW50YXRpb24KKyUuaHRt
bDogJS50ZXhpCisJdGV4aTJodG1sIC1tb25vbGl0aGljIC1udW1iZXIgJDwKKworJS5pbmZvOiAl
LnRleGkKKwltYWtlaW5mbyAkPCAtbyAkQAorCislLmR2aTogJS50ZXhpCisJdGV4aTJkdmkgJDwK
KworcWVtdS4xOiBxZW11LWRvYy50ZXhpCisJJChTUkNfUEFUSCkvdGV4aTJwb2QucGwgJDwgcWVt
dS5wb2QKKwlwb2QybWFuIC0tc2VjdGlvbj0xIC0tY2VudGVyPSIgIiAtLXJlbGVhc2U9IiAiIHFl
bXUucG9kID4gJEAKKworcWVtdS1pbWcuMTogcWVtdS1pbWcudGV4aQorCSQoU1JDX1BBVEgpL3Rl
eGkycG9kLnBsICQ8IHFlbXUtaW1nLnBvZAorCXBvZDJtYW4gLS1zZWN0aW9uPTEgLS1jZW50ZXI9
IiAiIC0tcmVsZWFzZT0iICIgcWVtdS1pbWcucG9kID4gJEAKKworcWVtdS1uYmQuODogcWVtdS1u
YmQudGV4aQorCSQoU1JDX1BBVEgpL3RleGkycG9kLnBsICQ8IHFlbXUtbmJkLnBvZAorCXBvZDJt
YW4gLS1zZWN0aW9uPTggLS1jZW50ZXI9IiAiIC0tcmVsZWFzZT0iICIgcWVtdS1uYmQucG9kID4g
JEAKKworaW5mbzogcWVtdS1kb2MuaW5mbyBxZW11LXRlY2guaW5mbworCitkdmk6IHFlbXUtZG9j
LmR2aSBxZW11LXRlY2guZHZpCisKK2h0bWw6IHFlbXUtZG9jLmh0bWwgcWVtdS10ZWNoLmh0bWwK
KworcWVtdS1kb2MuZHZpIHFlbXUtZG9jLmh0bWwgcWVtdS1kb2MuaW5mbzogcWVtdS1pbWcudGV4
aSBxZW11LW5iZC50ZXhpCisKK1ZFUlNJT04gPz0gJChzaGVsbCBjYXQgVkVSU0lPTikKK0ZJTEUg
PSBxZW11LSQoVkVSU0lPTikKKworIyB0YXIgcmVsZWFzZSAodXNlICdtYWtlIC1rIHRhcicgb24g
YSBjaGVja291dGVkIHRyZWUpCit0YXI6CisJcm0gLXJmIC90bXAvJChGSUxFKQorCWNwIC1yIC4g
L3RtcC8kKEZJTEUpCisJY2QgL3RtcCAmJiB0YXIgemN2ZiB+LyQoRklMRSkudGFyLmd6ICQoRklM
RSkgLS1leGNsdWRlIENWUyAtLWV4Y2x1ZGUgLmdpdCAtLWV4Y2x1ZGUgLnN2bgorCXJtIC1yZiAv
dG1wLyQoRklMRSkKKworIyBnZW5lcmF0ZSBhIGJpbmFyeSBkaXN0cmlidXRpb24KK3RhcmJpbjoK
KwljZCAvICYmIHRhciB6Y3ZmIH4vcWVtdS0kKFZFUlNJT04pLSQoQVJDSCkudGFyLmd6IFwKKwkk
KGJpbmRpcikvcWVtdSBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLXg4Nl82NCBcCisJJChiaW5k
aXIpL3FlbXUtc3lzdGVtLWFybSBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLWNyaXMgXAorCSQo
YmluZGlyKS9xZW11LXN5c3RlbS1tNjhrIFwKKwkkKGJpbmRpcikvcWVtdS1zeXN0ZW0tbWlwcyBc
CisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLW1pcHNlbCBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVt
LW1pcHM2NCBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLW1pcHM2NGVsIFwKKwkkKGJpbmRpcikv
cWVtdS1zeXN0ZW0tcHBjIFwKKwkkKGJpbmRpcikvcWVtdS1zeXN0ZW0tcHBjZW1iIFwKKwkkKGJp
bmRpcikvcWVtdS1zeXN0ZW0tcHBjNjQgXAorCSQoYmluZGlyKS9xZW11LXN5c3RlbS1zaDQgXAor
CSQoYmluZGlyKS9xZW11LXN5c3RlbS1zaDRlYiBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLXNw
YXJjIFwKKwkkKGJpbmRpcikvcWVtdS1pMzg2IFwKKwkkKGJpbmRpcikvcWVtdS14ODZfNjQgXAor
CSQoYmluZGlyKS9xZW11LWFscGhhIFwKKwkkKGJpbmRpcikvcWVtdS1hcm0gXAorCSQoYmluZGly
KS9xZW11LWFybWViIFwKKwkkKGJpbmRpcikvcWVtdS1jcmlzIFwKKwkkKGJpbmRpcikvcWVtdS1t
NjhrIFwKKwkkKGJpbmRpcikvcWVtdS1taXBzIFwKKwkkKGJpbmRpcikvcWVtdS1taXBzZWwgXAor
CSQoYmluZGlyKS9xZW11LXBwYyBcCisJJChiaW5kaXIpL3FlbXUtcHBjNjQgXAorCSQoYmluZGly
KS9xZW11LXBwYzY0YWJpMzIgXAorCSQoYmluZGlyKS9xZW11LXNoNCBcCisJJChiaW5kaXIpL3Fl
bXUtc2g0ZWIgXAorCSQoYmluZGlyKS9xZW11LXNwYXJjIFwKKwkkKGJpbmRpcikvcWVtdS1zcGFy
YzY0IFwKKwkkKGJpbmRpcikvcWVtdS1zcGFyYzMycGx1cyBcCisJJChiaW5kaXIpL3FlbXUtaW1n
IFwKKwkkKGJpbmRpcikvcWVtdS1uYmQgXAorCSQoZGF0YWRpcikvYmlvcy5iaW4gXAorCSQoZGF0
YWRpcikvdmdhYmlvcy5iaW4gXAorCSQoZGF0YWRpcikvdmdhYmlvcy1jaXJydXMuYmluIFwKKwkk
KGRhdGFkaXIpL3BwY19yb20uYmluIFwKKwkkKGRhdGFkaXIpL3ZpZGVvLnggXAorCSQoZGF0YWRp
cikvb3BlbmJpb3Mtc3BhcmMzMiBcCisJJChkYXRhZGlyKS9vcGVuYmlvcy1zcGFyYzY0IFwKKwkk
KGRhdGFkaXIpL29wZW5iaW9zLXBwYyBcCisJJChkYXRhZGlyKS9weGUtbmUya19wY2kuYmluIFwK
KwkkKGRhdGFkaXIpL3B4ZS1ydGw4MTM5LmJpbiBcCisJJChkYXRhZGlyKS9weGUtcGNuZXQuYmlu
IFwKKwkkKGRhdGFkaXIpL3B4ZS1lMTAwMC5iaW4gXAorCSQoZG9jZGlyKS9xZW11LWRvYy5odG1s
IFwKKwkkKGRvY2RpcikvcWVtdS10ZWNoLmh0bWwgXAorCSQobWFuZGlyKS9tYW4xL3FlbXUuMSBc
CisJJChtYW5kaXIpL21hbjEvcWVtdS1pbWcuMSBcCisJJChtYW5kaXIpL21hbjgvcWVtdS1uYmQu
OAorCisjIEluY2x1ZGUgYXV0b21hdGljYWxseSBnZW5lcmF0ZWQgZGVwZW5kZW5jeSBmaWxlcwor
LWluY2x1ZGUgJCh3aWxkY2FyZCAuKi5kIGF1ZGlvLy4qLmQgc2xpcnAvLiouZCkKZGlmZiAtLWV4
Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9taXNj
LmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vbWlzYy5jCi0tLSB4ZW4tNC4xLjIt
YS90b29scy9pb2VtdS1xZW11LXhlbi9taXNjLmMJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAw
MDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL21pc2MuYwkyMDEy
LTEyLTI4IDE2OjAyOjQxLjAxMjkzNzg0NiArMDgwMApAQCAtMCwwICsxLDQzMiBAQAorI2luY2x1
ZGUgIm1pc2MuaCIKKyNpbmNsdWRlIDxzdGRsaWIuaD4KKyNpbmNsdWRlIDxzdGRpby5oPgorI2lu
Y2x1ZGUgPHN0cmluZy5oPgorI2luY2x1ZGUgImdydWJfZXJyLmgiCisKKworaW50CitncnViX2lz
c3BhY2UgKGludCBjKQoreworICByZXR1cm4gKGMgPT0gJ1xuJyB8fCBjID09ICdccicgfHwgYyA9
PSAnICcgfHwgYyA9PSAnXHQnKTsKK30KKworaW50CitncnViX3RvbG93ZXIgKGludCBjKQorewor
ICBpZiAoYyA+PSAnQScgJiYgYyA8PSAnWicpCisgICAgcmV0dXJuIGMgLSAnQScgKyAnYSc7CisK
KyAgcmV0dXJuIGM7Cit9CisKKworY2hhciAqCitncnViX3N0cmNociAoY29uc3QgY2hhciAqcywg
aW50IGMpCit7CisgIGRvCisgICAgeworICAgICAgaWYgKCpzID09IGMpCisJcmV0dXJuIChjaGFy
ICopIHM7CisgICAgfQorICB3aGlsZSAoKnMrKyk7CisKKyAgcmV0dXJuIDA7Cit9CisKKworZ3J1
Yl9zaXplX3QKK2dydWJfc3RybGVuIChjb25zdCBjaGFyICpzKQoreworICBjb25zdCBjaGFyICpw
ID0gczsKKworICB3aGlsZSAoKnApCisgICAgcCsrOworCisgIHJldHVybiBwIC0gczsKK30KKwor
CisKKworCitjaGFyICoKK2dydWJfc3RybmNweSAoY2hhciAqZGVzdCwgY29uc3QgY2hhciAqc3Jj
LCBpbnQgYykKK3sKKyAgY2hhciAqcCA9IGRlc3Q7CisKKyAgd2hpbGUgKCgqcCsrID0gKnNyYysr
KSAhPSAnXDAnICYmIC0tYykKKyAgICA7CisKKyAgcmV0dXJuIGRlc3Q7Cit9CisKK2NoYXIgKgor
Z3J1Yl9zdHJkdXAgKGNvbnN0IGNoYXIgKnMpCit7CisgIGdydWJfc2l6ZV90IGxlbjsKKyAgY2hh
ciAqcDsKKworICBsZW4gPSBncnViX3N0cmxlbiAocykgKyAxOworICBwID0gKGNoYXIgKikgbWFs
bG9jIChsZW4pOworICBpZiAoISBwKQorICAgIHJldHVybiAwOworCisgIHJldHVybiBtZW1jcHkg
KHAsIHMsIGxlbik7Cit9CisKKworCitjaGFyICoKK2dydWJfc3RybmR1cCAoY29uc3QgY2hhciAq
cywgZ3J1Yl9zaXplX3QgbikKK3sKKyAgZ3J1Yl9zaXplX3QgbGVuOworICBjaGFyICpwOworCisg
IGxlbiA9IGdydWJfc3RybGVuIChzKTsKKyAgaWYgKGxlbiA+IG4pCisgICAgbGVuID0gbjsKKyAg
cCA9IChjaGFyICopIG1hbGxvYyAobGVuICsgMSk7CisgIGlmICghIHApCisgICAgcmV0dXJuIDA7
CisKKyAgbWVtY3B5IChwLCBzLCBsZW4pOworICBwW2xlbl0gPSAnXDAnOworICByZXR1cm4gcDsK
K30KKworCisKKworaW50CitncnViX3N0cmNtcCAoY29uc3QgY2hhciAqczEsIGNvbnN0IGNoYXIg
KnMyKQoreworICB3aGlsZSAoKnMxICYmICpzMikKKyAgICB7CisgICAgICBpZiAoKnMxICE9ICpz
MikKKwlicmVhazsKKworICAgICAgczErKzsKKyAgICAgIHMyKys7CisgICAgfQorCisgIHJldHVy
biAoaW50KSAqczEgLSAoaW50KSAqczI7Cit9CisKK2ludAorZ3J1Yl9zdHJuY21wIChjb25zdCBj
aGFyICpzMSwgY29uc3QgY2hhciAqczIsIGdydWJfc2l6ZV90IG4pCit7CisgIGlmIChuID09IDAp
CisgICAgcmV0dXJuIDA7CisKKyAgd2hpbGUgKCpzMSAmJiAqczIgJiYgLS1uKQorICAgIHsKKyAg
ICAgIGlmICgqczEgIT0gKnMyKQorCWJyZWFrOworCisgICAgICBzMSsrOworICAgICAgczIrKzsK
KyAgICB9CisKKyAgcmV0dXJuIChpbnQpICpzMSAtIChpbnQpICpzMjsKK30KKworCitpbnQKK2dy
dWJfc3RyY2FzZWNtcCAoY29uc3QgY2hhciAqczEsIGNvbnN0IGNoYXIgKnMyKQoreworICB3aGls
ZSAoKnMxICYmICpzMikKKyAgICB7CisgICAgICBpZiAoZ3J1Yl90b2xvd2VyICgqczEpICE9IGdy
dWJfdG9sb3dlciAoKnMyKSkKKwlicmVhazsKKworICAgICAgczErKzsKKyAgICAgIHMyKys7Cisg
ICAgfQorCisgIHJldHVybiAoaW50KSBncnViX3RvbG93ZXIgKCpzMSkgLSAoaW50KSBncnViX3Rv
bG93ZXIgKCpzMik7Cit9CisKKworaW50CitncnViX3N0cm5jYXNlY21wIChjb25zdCBjaGFyICpz
MSwgY29uc3QgY2hhciAqczIsIGdydWJfc2l6ZV90IG4pCit7CisgIGlmIChuID09IDApCisgICAg
cmV0dXJuIDA7CisKKyAgd2hpbGUgKCpzMSAmJiAqczIgJiYgLS1uKQorICAgIHsKKyAgICAgIGlm
IChncnViX3RvbG93ZXIgKCpzMSkgIT0gZ3J1Yl90b2xvd2VyICgqczIpKQorCWJyZWFrOworCisg
ICAgICBzMSsrOworICAgICAgczIrKzsKKyAgICB9CisKKyAgcmV0dXJuIChpbnQpIGdydWJfdG9s
b3dlciAoKnMxKSAtIChpbnQpIGdydWJfdG9sb3dlciAoKnMyKTsKK30KKwordm9pZCAqCitncnVi
X21lbW1vdmUgKHZvaWQgKmRlc3QsIGNvbnN0IHZvaWQgKnNyYywgZ3J1Yl9zaXplX3QgbikKK3sK
KyAgY2hhciAqZCA9IChjaGFyICopIGRlc3Q7CisgIGNvbnN0IGNoYXIgKnMgPSAoY29uc3QgY2hh
ciAqKSBzcmM7CisKKyAgaWYgKGQgPCBzKQorICAgIHdoaWxlIChuLS0pCisgICAgICAqZCsrID0g
KnMrKzsKKyAgZWxzZQorICAgIHsKKyAgICAgIGQgKz0gbjsKKyAgICAgIHMgKz0gbjsKKworICAg
ICAgd2hpbGUgKG4tLSkKKwkqLS1kID0gKi0tczsKKyAgICB9CisKKyAgcmV0dXJuIGRlc3Q7Cit9
CisKKwordm9pZCAqCitncnViX21hbGxvYyAoZ3J1Yl9zaXplX3Qgc2l6ZSkKK3sKKyAgdm9pZCAq
cmV0OworICByZXQgPSBtYWxsb2MgKHNpemUpOworICBpZiAoIXJldCkKKyAgICBncnViX2Vycm9y
IChHUlVCX0VSUl9PVVRfT0ZfTUVNT1JZLCAib3V0IG9mIG1lbW9yeSIpOworICByZXR1cm4gcmV0
OworfQorCisKKwordm9pZAorZ3J1Yl9mcmVlICh2b2lkICpwdHIpCit7CisgIGZyZWUgKHB0cik7
Cit9CisKKwordm9pZCAqCitncnViX21lbXNldCAodm9pZCAqcywgaW50IGMsIGdydWJfc2l6ZV90
IG4pCit7CisgIHVuc2lnbmVkIGNoYXIgKnAgPSAodW5zaWduZWQgY2hhciAqKSBzOworCisgIHdo
aWxlIChuLS0pCisgICAgKnArKyA9ICh1bnNpZ25lZCBjaGFyKSBjOworCisgIHJldHVybiBzOwor
fQorCitpbnQKK2dydWJfbWVtY21wIChjb25zdCB2b2lkICpzMSwgY29uc3Qgdm9pZCAqczIsIGdy
dWJfc2l6ZV90IG4pCit7CisgIGNvbnN0IGNoYXIgKnQxID0gczE7CisgIGNvbnN0IGNoYXIgKnQy
ID0gczI7CisKKyAgd2hpbGUgKG4tLSkKKyAgICB7CisgICAgICBpZiAoKnQxICE9ICp0MikKKwly
ZXR1cm4gKGludCkgKnQxIC0gKGludCkgKnQyOworCisgICAgICB0MSsrOworICAgICAgdDIrKzsK
KyAgICB9CisKKyAgcmV0dXJuIDA7Cit9CisKKwordm9pZCAqCitncnViX3phbGxvYyAoZ3J1Yl9z
aXplX3Qgc2l6ZSkKK3sKKyAgdm9pZCAqcmV0OworCisgIHJldCA9IGdydWJfbWFsbG9jIChzaXpl
KTsKKyAgaWYgKCFyZXQpCisgICAgcmV0dXJuIE5VTEw7CisgIG1lbXNldCAocmV0LCAwLCBzaXpl
KTsKKyAgcmV0dXJuIHJldDsKK30KKworLyogRGl2aWRlIE4gYnkgRCwgcmV0dXJuIHRoZSBxdW90
aWVudCwgYW5kIHN0b3JlIHRoZSByZW1haW5kZXIgaW4gKlIuICAqLworZ3J1Yl91aW50NjRfdAor
Z3J1Yl9kaXZtb2Q2NCAoZ3J1Yl91aW50NjRfdCBuLCBncnViX3VpbnQzMl90IGQsIGdydWJfdWlu
dDMyX3QgKnIpCit7CisgIC8qIFRoaXMgYWxnb3JpdGhtIGlzIHR5cGljYWxseSBpbXBsZW1lbnRl
ZCBieSBoYXJkd2FyZS4gVGhlIGlkZWEKKyAgICAgaXMgdG8gZ2V0IHRoZSBoaWdoZXN0IGJpdCBp
biBOLCA2NCB0aW1lcywgYnkga2VlcGluZworICAgICB1cHBlcihOICogMl5pKSA9IHVwcGVyKChR
ICogMTAgKyBNKSAqIDJeaSksIHdoZXJlIHVwcGVyCisgICAgIHJlcHJlc2VudHMgdGhlIGhpZ2gg
NjQgYml0cyBpbiAxMjgtYml0cyBzcGFjZS4gICovCisgIHVuc2lnbmVkIGJpdHMgPSA2NDsKKyAg
dW5zaWduZWQgbG9uZyBsb25nIHEgPSAwOworICB1bnNpZ25lZCBtID0gMDsKKworICAvKiBTa2lw
IHRoZSBzbG93IGNvbXB1dGF0aW9uIGlmIDMyLWJpdCBhcml0aG1ldGljIGlzIHBvc3NpYmxlLiAg
Ki8KKyAgaWYgKG4gPCAweGZmZmZmZmZmKQorICAgIHsKKyAgICAgIGlmIChyKQorCSpyID0gKChn
cnViX3VpbnQzMl90KSBuKSAlIGQ7CisKKyAgICAgIHJldHVybiAoKGdydWJfdWludDMyX3QpIG4p
IC8gZDsKKyAgICB9CisKKyAgd2hpbGUgKGJpdHMtLSkKKyAgICB7CisgICAgICBtIDw8PSAxOwor
CisgICAgICBpZiAobiAmICgxVUxMIDw8IDYzKSkKKwltIHw9IDE7CisKKyAgICAgIHEgPDw9IDE7
CisgICAgICBuIDw8PSAxOworCisgICAgICBpZiAobSA+PSBkKQorCXsKKwkgIHEgfD0gMTsKKwkg
IG0gLT0gZDsKKwl9CisgICAgfQorCisgIGlmIChyKQorICAgICpyID0gbTsKKworICByZXR1cm4g
cTsKK30KKworCisKKy8qIENvbnZlcnQgVVRGLTE2IHRvIFVURi04LiAgKi8KK2dydWJfdWludDhf
dCAqCitncnViX3V0ZjE2X3RvX3V0ZjggKGdydWJfdWludDhfdCAqZGVzdCwgZ3J1Yl91aW50MTZf
dCAqc3JjLAorICAgICAgICAgICAgICAgICAgICBncnViX3NpemVfdCBzaXplKQoreworICBncnVi
X3VpbnQzMl90IGNvZGVfaGlnaCA9IDA7CisKKyAgd2hpbGUgKHNpemUtLSkKKyAgICB7CisgICAg
ICBncnViX3VpbnQzMl90IGNvZGUgPSAqc3JjKys7CisKKyAgICAgIGlmIChjb2RlX2hpZ2gpCisg
ICAgICAgIHsKKyAgICAgICAgICBpZiAoY29kZSA+PSAweERDMDAgJiYgY29kZSA8PSAweERGRkYp
CisgICAgICAgICAgICB7CisgICAgICAgICAgICAgIC8qIFN1cnJvZ2F0ZSBwYWlyLiAgKi8KKyAg
ICAgICAgICAgICAgY29kZSA9ICgoY29kZV9oaWdoIC0gMHhEODAwKSA8PCAxMikgKyAoY29kZSAt
IDB4REMwMCkgKyAweDEwMDAwOworCisgICAgICAgICAgICAgICpkZXN0KysgPSAoY29kZSA+PiAx
OCkgfCAweEYwOworICAgICAgICAgICAgICAqZGVzdCsrID0gKChjb2RlID4+IDEyKSAmIDB4M0Yp
IHwgMHg4MDsKKyAgICAgICAgICAgICAgKmRlc3QrKyA9ICgoY29kZSA+PiA2KSAmIDB4M0YpIHwg
MHg4MDsKKyAgICAgICAgICAgICAgKmRlc3QrKyA9IChjb2RlICYgMHgzRikgfCAweDgwOworICAg
ICAgICAgICAgfQorICAgICAgICAgIGVsc2UKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAg
LyogRXJyb3IuLi4gICovCisgICAgICAgICAgICAgICpkZXN0KysgPSAnPyc7CisgICAgICAgICAg
ICB9CisKKyAgICAgICAgICBjb2RlX2hpZ2ggPSAwOworICAgICAgICB9CisgICAgICBlbHNlCisg
ICAgICAgIHsKKyAgICAgICAgICBpZiAoY29kZSA8PSAweDAwN0YpCisgICAgICAgICAgICAqZGVz
dCsrID0gY29kZTsKKyAgICAgICAgICBlbHNlIGlmIChjb2RlIDw9IDB4MDdGRikKKyAgICAgICAg
ICAgIHsKKyAgICAgICAgICAgICAgKmRlc3QrKyA9IChjb2RlID4+IDYpIHwgMHhDMDsKKyAgICAg
ICAgICAgICAgKmRlc3QrKyA9IChjb2RlICYgMHgzRikgfCAweDgwOworICAgICAgICAgICAgfQor
ICAgICAgICAgIGVsc2UgaWYgKGNvZGUgPj0gMHhEODAwICYmIGNvZGUgPD0gMHhEQkZGKQorICAg
ICAgICAgICAgeworICAgICAgICAgICAgICBjb2RlX2hpZ2ggPSBjb2RlOworICAgICAgICAgICAg
ICBjb250aW51ZTsKKyAgICAgICAgICAgIH0KKyAgICAgICAgICBlbHNlIGlmIChjb2RlID49IDB4
REMwMCAmJiBjb2RlIDw9IDB4REZGRikKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgLyog
RXJyb3IuLi4gKi8KKyAgICAgICAgICAgICAgKmRlc3QrKyA9ICc/JzsKKyAgICAgICAgICAgIH0K
KyAgICAgICAgICBlbHNlCisgICAgICAgICAgICB7CisgICAgICAgICAgICAgICpkZXN0KysgPSAo
Y29kZSA+PiAxMikgfCAweEUwOworICAgICAgICAgICAgICAqZGVzdCsrID0gKChjb2RlID4+IDYp
ICYgMHgzRikgfCAweDgwOworICAgICAgICAgICAgICAqZGVzdCsrID0gKGNvZGUgJiAweDNGKSB8
IDB4ODA7CisgICAgICAgICAgICB9CisgICAgICAgIH0KKyAgICB9CisKKyAgcmV0dXJuIGRlc3Q7
Cit9CisKKworCisKKworCitzdGF0aWMgdm9pZCBwcmludF9ieXRlKGNoYXIgKnAsIGludCBsZW4p
Cit7CisgIHByaW50ZigiXG4qKioqKioqKioqKioqKioqc3RhcnQgcHJpbnQgJXMqKioqKioqKioq
KioqKioqKioqKlxuIiwgcCk7CisgIGludCBpOworICB1bnNpZ25lZCBjaGFyICpwYiA9ICh1bnNp
Z25lZCBjaGFyKilwOworICBmb3IoaSA9IDA7IGkgPCBsZW47IGkrKykKKyAgICB7CisgICAgICBw
cmludGYoIjB4JTAyeCwiLCBwYltpXSk7CisgICAgfQorICBwcmludGYoIlxuKioqKioqKioqKioq
KioqKioqKioqKmVuZCoqKioqKioqKioqKioqKioqKioqKioqKioqXG4iKTsKK30KKworCisKKyNp
bmNsdWRlICAgPGljb252Lmg+IAorI2RlZmluZSAgIE9VVExFTiAgIDI1NiAKKworCisvL7T6wuvX
qru7OrTT0rvW1rHgwuvXqs6qwe3Su9bWseDC6yAKK3N0YXRpYyBpbnQgICBjb2RlX2NvbnZlcnQo
Y29uc3QgY2hhciAgICpmcm9tX2NoYXJzZXQsIGNvbnN0IGNoYXIgICAqdG9fY2hhcnNldCwgCisJ
CSAgICBjaGFyICAgKmluYnVmLCBzaXplX3QgICBpbmxlbiwKKwkJICAgY2hhciAgICpvdXRidWYs
IHNpemVfdCAgIG91dGxlbikgCit7IAorICBpY29udl90ICAgY2Q7IAorICBpbnQgICByYzsgCisg
IGNoYXIgICAqKnBpbiAgID0gICAmaW5idWY7IAorICBjaGFyICAgKipwb3V0ICAgPSAgICZvdXRi
dWY7IAorCisgIC8vcHJpbnRmKCJzaXplb2YoaW50KT0lZCwgc2l6ZW9mKHNpemVfdCk9JWRcbiIs
IHNpemVvZihpbnQpLCBzaXplb2Yoc2l6ZV90KSk7CisgIGNkICAgPSAgIGljb252X29wZW4odG9f
Y2hhcnNldCwgZnJvbV9jaGFyc2V0KTsgCisgIGlmICAgKGNkPT0wKSAgIHJldHVybiAgIC0xOyAK
KyAgbWVtc2V0KG91dGJ1ZiwgMCwgb3V0bGVuKTsgCisgIGlmICAgKGljb252KGNkLCBwaW4sICZp
bmxlbiwgcG91dCwgJm91dGxlbik9PS0xKSAgIHJldHVybiAgIC0xOyAKKyAgaWNvbnZfY2xvc2Uo
Y2QpOyAKKyAgcmV0dXJuICAgMDsgCit9IAorLy9VTklDT0RFwuvXqs6qR0IyMzEywusgCitpbnQg
ICB1MmcoY2hhciAgICppbmJ1Ziwgc2l6ZV90ICAgaW5sZW4sIGNoYXIgICAqb3V0YnVmLCBzaXpl
X3QgIG91dGxlbikgCit7CisgIHJldHVybiAgIGNvZGVfY29udmVydCgidXRmLTgiLCAiZ2IyMzEy
IiwgaW5idWYsIGlubGVuLCBvdXRidWYsIG91dGxlbik7IAorfSAKKy8vR0IyMzEywuvXqs6qVU5J
Q09ERcLrIAoraW50ICAgZzJ1KGNoYXIgICAqaW5idWYsIHNpemVfdCAgIGlubGVuLCBjaGFyICAg
Km91dGJ1Ziwgc2l6ZV90ICAgb3V0bGVuKSAKK3sgCisgIHJldHVybiAgIGNvZGVfY29udmVydCgi
Z2IyMzEyIiwgInV0Zi04IiwgaW5idWYsIGlubGVuLCBvdXRidWYsIG91dGxlbik7IAorfSAKKwor
CisKK3ZvaWQgdGVzdCh2b2lkKSAKK3sKKyAgLy/X1rf7seDC69equ7s977u/5T/nrD8/Pz+9rD8/
CisgIGNoYXIgICBpbl91dGY4W10gICA9ICAgezB4ZTUsMHhhZCwweDk3LDB4ZTcsMHhhYywweGE2
LDB4ZTcsMHhiYywweDk2LDB4ZTcsMHhhMCwweDgxLDB4ZTgsMHhiZCwweGFjLDB4ZTYsMHg4ZCww
eGEyfTsgCisgIGNoYXIgICAqaW5fZ2IyMzEyICAgPSAoY2hhciopICAi19a3+7HgwuvXqru7Ijsg
CisgIGNoYXIgICBvdXRbT1VUTEVOXTsgCisgIGludCByYzsKKyAgLy91dGY4wuvXqs6qZ2IyMzEy
wusgCisgIHJjICAgPSAgIHUyZyhpbl91dGY4LCBzdHJsZW4oaW5fdXRmOCksIG91dCwgT1VUTEVO
KTsgCisgIHByaW50ZigidXRmOC0tPmdiMjMxMiAgIG91dD0lc1xuIiwgb3V0KTsKKyAgcHJpbnRf
Ynl0ZShvdXQsIHN0cmxlbihvdXQpKTsKKyAgLy9nYjIzMTLC69eqzqp1dGY4wusgCisgIHJjICAg
PSAgIGcydShpbl9nYjIzMTIsIHN0cmxlbihpbl9nYjIzMTIpLCBvdXQsIE9VVExFTik7IAorICBw
cmludF9ieXRlKG91dCwgc3RybGVuKG91dCkpOworICBwcmludGYoImdiMjMxMi0tPnV0ZjggICBv
dXQ9JXNcbiIsb3V0KTsgCit9IAorCisKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhl
bi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL21pc2MuaCB4ZW4tNC4xLjItYi90b29scy9p
b2VtdS1xZW11LXhlbi9taXNjLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVu
L21pc2MuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4y
LWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vbWlzYy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDEyOTM3
ODQ2ICswODAwCkBAIC0wLDAgKzEsODkgQEAKKyNpbmNsdWRlICJmcy10eXBlcy5oIgorI2luY2x1
ZGUgPHN0ZGRlZi5oPgorCisKKworCisKKyNkZWZpbmUgZ3J1Yl9tZW1jcHkoZCxzLG4pZ3J1Yl9t
ZW1tb3ZlICgoZCksIChzKSwgKG4pKQorCisKK2ludAorZ3J1Yl9pc3NwYWNlIChpbnQgYyk7CisK
K2ludAorZ3J1Yl90b2xvd2VyIChpbnQgYyk7CisKKworY2hhciAqCitncnViX3N0cmNociAoY29u
c3QgY2hhciAqcywgaW50IGMpOworCisKK2dydWJfc2l6ZV90CitncnViX3N0cmxlbiAoY29uc3Qg
Y2hhciAqcyk7CisKKworCitjaGFyICoKK2dydWJfc3RyZHVwIChjb25zdCBjaGFyICpzKTsKKwor
CisKK2NoYXIgKgorZ3J1Yl9zdHJuZHVwIChjb25zdCBjaGFyICpzLCBncnViX3NpemVfdCBuKTsK
KworCitjaGFyICoKK2dydWJfc3RybmNweSAoY2hhciAqZGVzdCwgY29uc3QgY2hhciAqc3JjLCBp
bnQgYyk7CisKKworaW50CitncnViX3N0cmNtcCAoY29uc3QgY2hhciAqczEsIGNvbnN0IGNoYXIg
KnMyKTsKKworaW50CitncnViX3N0cm5jbXAgKGNvbnN0IGNoYXIgKnMxLCBjb25zdCBjaGFyICpz
MiwgZ3J1Yl9zaXplX3Qgbik7CisKK2ludAorZ3J1Yl9zdHJjYXNlY21wIChjb25zdCBjaGFyICpz
MSwgY29uc3QgY2hhciAqczIpOworCitpbnQKK2dydWJfc3RybmNhc2VjbXAgKGNvbnN0IGNoYXIg
KnMxLCBjb25zdCBjaGFyICpzMiwgZ3J1Yl9zaXplX3Qgbik7CisKK3ZvaWQgKgorZ3J1Yl9tZW1t
b3ZlICh2b2lkICpkZXN0LCBjb25zdCB2b2lkICpzcmMsIGdydWJfc2l6ZV90IG4pOworCitpbnQK
K2dydWJfbWVtY21wIChjb25zdCB2b2lkICpzMSwgY29uc3Qgdm9pZCAqczIsIGdydWJfc2l6ZV90
IG4pOworCit2b2lkICoKK2dydWJfbWFsbG9jIChncnViX3NpemVfdCBzaXplKTsKKwordm9pZCAq
CitncnViX21lbXNldCAodm9pZCAqcywgaW50IGMsIGdydWJfc2l6ZV90IG4pOworCit2b2lkCitn
cnViX2ZyZWUgKHZvaWQgKnB0cik7CisKK3ZvaWQgKgorZ3J1Yl96YWxsb2MgKGdydWJfc2l6ZV90
IHNpemUpOworCitncnViX3VpbnQ2NF90CitncnViX2Rpdm1vZDY0IChncnViX3VpbnQ2NF90IG4s
IGdydWJfdWludDMyX3QgZCwgZ3J1Yl91aW50MzJfdCAqcik7CisKKy8qIENvbnZlcnQgVVRGLTE2
IHRvIFVURi04LiAgKi8KK2dydWJfdWludDhfdCAqCitncnViX3V0ZjE2X3RvX3V0ZjggKGdydWJf
dWludDhfdCAqZGVzdCwgZ3J1Yl91aW50MTZfdCAqc3JjLAorICAgICAgICAgICAgICAgICAgICBn
cnViX3NpemVfdCBzaXplKTsKKworCisvL1VOSUNPREXC69eqzqpHQjIzMTLC6yAKK2ludCAgIHUy
ZyhjaGFyICAgKmluYnVmLCBzaXplX3QgICBpbmxlbiwgY2hhciAgICpvdXRidWYsIHNpemVfdCAg
IG91dGxlbik7CisvL0dCMjMxMsLr16rOqlVOSUNPREXC6yAKK2ludCAgIGcydShjaGFyICAgKmlu
YnVmLCBzaXplX3QgICBpbmxlbiwgY2hhciAgICpvdXRidWYsIHNpemVfdCAgIG91dGxlbik7CisK
KworCit2b2lkIHRlc3Qodm9pZCk7CisKKworCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTgg
eGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vbnRmcy5jIHhlbi00LjEuMi1iL3Rvb2xz
L2lvZW11LXFlbXUteGVuL250ZnMuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vbnRmcy5jCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9udGZzLmMJMjAxMi0xMi0yOCAxNjowMjo0MS4wMTM5
MzcwMzggKzA4MDAKQEAgLTAsMCArMSwxMTg4IEBACisvKiBudGZzLmMgLSBOVEZTIGZpbGVzeXN0
ZW0gKi8KKy8qCisgKiAgR1JVQiAgLS0gIEdSYW5kIFVuaWZpZWQgQm9vdGxvYWRlcgorICogIENv
cHlyaWdodCAoQykgMjAwNywyMDA4LDIwMDkgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMu
CisgKgorICogIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJp
YnV0ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUg
R2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0
d2FyZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICog
IChhdCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICogIFRoaXMgcHJvZ3Jh
bSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAorICog
IGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJh
bnR5IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQ
VVJQT1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3Jl
IGRldGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxvbmcgd2l0aCB0aGlzIHByb2dyYW0u
ICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworCisj
aW5jbHVkZSAibWlzYy5oIgorI2luY2x1ZGUgImZzaGVscC5oIgorI2luY2x1ZGUgIm50ZnMuaCIK
KyNpbmNsdWRlICJkZWJ1Zy5oIgorI2luY2x1ZGUgImdydWJfZXJyLmgiCisjaW5jbHVkZSAiZXJy
LmgiCisKKyNkZWZpbmUgR1JVQl9ESVNLX1NFQ1RPUl9TSVpFIDB4MjAwCitudGZzY29tcF9mdW5j
X3QgZ3J1Yl9udGZzY29tcF9mdW5jOworCisvL2ltcG9ydGFudAorLy9sY24gaXMgcmVsYXRpdmUg
dG8gc3RhcnQgc2VjdG9yIG9mIHRoZSB2b2x1bWUKK3N0YXRpYyBncnViX29mZl90IHNfcGFydF9v
ZmZfc2VjdG9yOworc3RhdGljIGdydWJfdWludDMyX3Qgc19icGJfYnl0ZXNfcGVyX3NlY3RvcjsK
K2ludCBiZHJ2X3ByZWFkX2Zyb21fbGNuX29mX3ZvbHVtKEJsb2NrRHJpdmVyU3RhdGUgKmJzLCBp
bnQ2NF90IG9mZnNldCwKKwkJCQkgdm9pZCAqYnVmMSwgaW50IGNvdW50MSkKK3sKKyAgcmV0dXJu
IGJkcnZfcHJlYWQoYnMsIHNfcGFydF9vZmZfc2VjdG9yICogc19icGJfYnl0ZXNfcGVyX3NlY3Rv
ciArIG9mZnNldCwKKwkJIGJ1ZjEsIGNvdW50MSk7Cit9CisKKworc3RhdGljIGdydWJfZXJyX3QK
K2ZpeHVwIChzdHJ1Y3QgZ3J1Yl9udGZzX2RhdGEgKmRhdGEsIGNoYXIgKmJ1ZiwgaW50IGxlbiwg
Y2hhciAqbWFnaWMpCit7CisgIGludCBzczsKKyAgY2hhciAqcHU7CisgIGdydWJfdWludDE2X3Qg
dXM7CisgIAorICBEQkcoIiV4LSV4LSV4LSV4IiwgYnVmWzBdLCBidWZbMV0sIGJ1ZlsyXSwgYnVm
WzNdKTsKKyAgaWYgKGdydWJfbWVtY21wIChidWYsIG1hZ2ljLCA0KSkKKyAgICByZXR1cm4gZ3J1
Yl9lcnJvciAoR1JVQl9FUlJfQkFEX0ZTLCAiJXMgbGFiZWwgbm90IGZvdW5kIiwgbWFnaWMpOwor
CisgIHNzID0gdTE2YXQgKGJ1ZiwgNikgLSAxOworICBpZiAoc3MgKiAoaW50KSBkYXRhLT5ibG9j
a3NpemUgIT0gbGVuICogR1JVQl9ESVNLX1NFQ1RPUl9TSVpFKQorICAgIHJldHVybiBncnViX2Vy
cm9yIChHUlVCX0VSUl9CQURfRlMsICJzaXplIG5vdCBtYXRjaCIsCisJCSAgICAgICBzcyAqIChp
bnQpIGRhdGEtPmJsb2Nrc2l6ZSwKKwkJICAgICAgIGxlbiAqIEdSVUJfRElTS19TRUNUT1JfU0la
RSk7CisgIHB1ID0gYnVmICsgdTE2YXQgKGJ1ZiwgNCk7CisgIHVzID0gdTE2YXQgKHB1LCAwKTsK
KyAgYnVmIC09IDI7CisgIHdoaWxlIChzcyA+IDApCisgICAgeworICAgICAgYnVmICs9IGRhdGEt
PmJsb2Nrc2l6ZTsKKyAgICAgIHB1ICs9IDI7CisgICAgICBpZiAodTE2YXQgKGJ1ZiwgMCkgIT0g
dXMpCisJcmV0dXJuIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywgImZpeHVwIHNpZ25hdHVy
ZSBub3QgbWF0Y2giKTsKKyAgICAgIHYxNmF0IChidWYsIDApID0gdjE2YXQgKHB1LCAwKTsKKyAg
ICAgIHNzLS07CisgICAgfQorCisgIHJldHVybiAwOworfQorCitzdGF0aWMgZ3J1Yl9lcnJfdCBy
ZWFkX21mdCAoc3RydWN0IGdydWJfbnRmc19kYXRhICpkYXRhLCBjaGFyICpidWYsCisJCQkgICAg
Z3J1Yl91aW50MzJfdCBtZnRubyk7CitzdGF0aWMgZ3J1Yl9lcnJfdCByZWFkX2F0dHIgKHN0cnVj
dCBncnViX250ZnNfYXR0ciAqYXQsIGNoYXIgKmRlc3QsCisJCQkgICAgIGdydWJfZGlza19hZGRy
X3Qgb2ZzLCBncnViX3NpemVfdCBsZW4sCisJCQkgICAgIGludCBjYWNoZWQsCisJCQkgICAgIHZv
aWQgKCpyZWFkX2hvb2spIChncnViX2Rpc2tfYWRkcl90IHNlY3RvciwKKwkJCQkJCXVuc2lnbmVk
IG9mZnNldCwKKwkJCQkJCXVuc2lnbmVkIGxlbmd0aCwKKwkJCQkJCXZvaWQgKmNsb3N1cmUpLAor
CQkJICAgICB2b2lkICpjbG9zdXJlKTsKKworc3RhdGljIGdydWJfZXJyX3QgcmVhZF9kYXRhIChz
dHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0LCBjaGFyICpwYSwgY2hhciAqZGVzdCwKKwkJCSAgICAg
Z3J1Yl9kaXNrX2FkZHJfdCBvZnMsIGdydWJfc2l6ZV90IGxlbiwKKwkJCSAgICAgaW50IGNhY2hl
ZCwKKwkJCSAgICAgdm9pZCAoKnJlYWRfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAor
CQkJCQkJdW5zaWduZWQgb2Zmc2V0LAorCQkJCQkJdW5zaWduZWQgbGVuZ3RoLAorCQkJCQkJdm9p
ZCAqY2xvc3VyZSksCisJCQkgICAgIHZvaWQgKmNsb3N1cmUpOworCitzdGF0aWMgdm9pZAoraW5p
dF9hdHRyIChzdHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0LCBzdHJ1Y3QgZ3J1Yl9udGZzX2ZpbGUg
Km1mdCkKK3sKKyAgYXQtPm1mdCA9IG1mdDsKKyAgYXQtPmZsYWdzID0gKG1mdCA9PSAmbWZ0LT5k
YXRhLT5tbWZ0KSA/IEFGX01NRlQgOiAwOworICBhdC0+YXR0cl9ueHQgPSBtZnQtPmJ1ZiArIHUx
NmF0IChtZnQtPmJ1ZiwgMHgxNCk7CisgIGF0LT5hdHRyX2VuZCA9IGF0LT5lbWZ0X2J1ZiA9IGF0
LT5lZGF0X2J1ZiA9IGF0LT5zYnVmID0gTlVMTDsKK30KKworc3RhdGljIHZvaWQKK2ZyZWVfYXR0
ciAoc3RydWN0IGdydWJfbnRmc19hdHRyICphdCkKK3sKKyAgZ3J1Yl9mcmVlIChhdC0+ZW1mdF9i
dWYpOworICBncnViX2ZyZWUgKGF0LT5lZGF0X2J1Zik7CisgIGdydWJfZnJlZSAoYXQtPnNidWYp
OworfQorCitzdGF0aWMgY2hhciAqCitmaW5kX2F0dHIgKHN0cnVjdCBncnViX250ZnNfYXR0ciAq
YXQsIHVuc2lnbmVkIGNoYXIgYXR0cikKK3sKKyAgZ3J1Yl9vZmZfdCBvZmZfYnl0ZXMxOworICBn
cnViX29mZl90IG9mZl9ieXRlczI7CisgCisgIGlmIChhdC0+ZmxhZ3MgJiBBRl9BTFNUKQorICAg
IHsKKyAgICAgIERCRygiISEhISEhXG5pbiBhIGF0dHIgbGlzdD09PT09PSIpOworICAgIHJldHJ5
OgorICAgICAgd2hpbGUgKGF0LT5hdHRyX254dCA8IGF0LT5hdHRyX2VuZCkKKwl7CisJICBhdC0+
YXR0cl9jdXIgPSBhdC0+YXR0cl9ueHQ7CisJICBhdC0+YXR0cl9ueHQgKz0gdTE2YXQgKGF0LT5h
dHRyX2N1ciwgNCk7CisJICBpZiAoKCh1bnNpZ25lZCBjaGFyKSAqYXQtPmF0dHJfY3VyID09IGF0
dHIpIHx8IChhdHRyID09IDApKQorCSAgICB7CisJICAgICAgY2hhciAqbmV3X3BvczsKKworCSAg
ICAgIGlmIChhdC0+ZmxhZ3MgJiBBRl9NTUZUKQorCQl7CisJCSAgREJHKCJpbiBBRl9NTUZULi4u
Li4uIik7CisJCSAgb2ZmX2J5dGVzMSA9IChncnViX29mZl90KSh2MzJhdCAoYXQtPmF0dHJfY3Vy
LCAweDEwKSkgPDwgQkxLX1NIUjsKKwkJICBvZmZfYnl0ZXMyID0gKGdydWJfb2ZmX3QpKHYzMmF0
IChhdC0+YXR0cl9jdXIsIDB4MTQpKSA8PCBCTEtfU0hSOworCQkgIGlmICgoYmRydl9wcmVhZAor
CQkgICAgICAgKGF0LT5tZnQtPmRhdGEtPmJzLCBvZmZfYnl0ZXMxLAorCQkJYXQtPmVtZnRfYnVm
LCA1MTIpKQorCQkgICAgICB8fAorCQkgICAgICAoYmRydl9wcmVhZAorCQkgICAgICAgKGF0LT5t
ZnQtPmRhdGEtPmJzLCBvZmZfYnl0ZXMyLAorCQkJYXQtPmVtZnRfYnVmICsgNTEyLCA1MTIpKSkK
KwkJICAgIHJldHVybiBOVUxMOworCisJCSAgaWYgKGZpeHVwCisJCSAgICAgIChhdC0+bWZ0LT5k
YXRhLCBhdC0+ZW1mdF9idWYsIGF0LT5tZnQtPmRhdGEtPm1mdF9zaXplLAorCQkgICAgICAgKGNo
YXIqKSJGSUxFIikpCisJCSAgICByZXR1cm4gTlVMTDsKKwkJfQorCSAgICAgIGVsc2UKKwkJewor
CQkgIERCRygicmVhZCBleHRlbmQgbWZ0IEZSPT09PT09Iik7CisJCSAgaWYgKHJlYWRfbWZ0IChh
dC0+bWZ0LT5kYXRhLCBhdC0+ZW1mdF9idWYsCisJCQkJdTMyYXQgKGF0LT5hdHRyX2N1ciwgMHgx
MCkpKQorCQkgICAgcmV0dXJuIE5VTEw7CisJCX0KKworCSAgICAgIG5ld19wb3MgPSAmYXQtPmVt
ZnRfYnVmW3UxNmF0IChhdC0+ZW1mdF9idWYsIDB4MTQpXTsKKwkgICAgICB3aGlsZSAoKHVuc2ln
bmVkIGNoYXIpICpuZXdfcG9zICE9IDB4RkYpCisJCXsKKwkJICBEQkcoIm5ldyBwb3MgaW4gZXh0
ZW5kIG1mdD09PT09PSIpOworCQkgIGlmICgoKHVuc2lnbmVkIGNoYXIpICpuZXdfcG9zID09CisJ
CSAgICAgICAodW5zaWduZWQgY2hhcikgKmF0LT5hdHRyX2N1cikKKwkJICAgICAgJiYgKHUxNmF0
IChuZXdfcG9zLCAweEUpID09IHUxNmF0IChhdC0+YXR0cl9jdXIsIDB4MTgpKSkKKwkJICAgIHsK
KwkJICAgICAgcmV0dXJuIG5ld19wb3M7CisJCSAgICB9CisJCSAgbmV3X3BvcyArPSB1MTZhdCAo
bmV3X3BvcywgNCk7CisJCX0KKwkgICAgICBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsCisJ
CQkgICJjYW5cJ3QgZmluZCAweCVYIGluIGF0dHJpYnV0ZSBsaXN0IiwKKwkJCSAgKHVuc2lnbmVk
IGNoYXIpICphdC0+YXR0cl9jdXIpOworCSAgICAgIHJldHVybiBOVUxMOworCSAgICB9CisJfQor
ICAgICAgcmV0dXJuIE5VTEw7CisgICAgfQorCisKKyAgREJHKCJub3QgaW4gYSBhdHRyIGxpc3Q9
PT09PT0iKTsKKyAgYXQtPmF0dHJfY3VyID0gYXQtPmF0dHJfbnh0OworICB3aGlsZSAoKHVuc2ln
bmVkIGNoYXIpICphdC0+YXR0cl9jdXIgIT0gMHhGRikKKyAgICB7CisgICAgICBhdC0+YXR0cl9u
eHQgKz0gdTE2YXQgKGF0LT5hdHRyX2N1ciwgNCk7CisgICAgICBpZiAoKHVuc2lnbmVkIGNoYXIp
ICphdC0+YXR0cl9jdXIgPT0gQVRfQVRUUklCVVRFX0xJU1QpCisJYXQtPmF0dHJfZW5kID0gYXQt
PmF0dHJfY3VyOworICAgICAgaWYgKCgodW5zaWduZWQgY2hhcikgKmF0LT5hdHRyX2N1ciA9PSBh
dHRyKSB8fCAoYXR0ciA9PSAwKSkKKwl7CisJICBEQkcoImZvdW5kPT09PT09Iik7CisJICByZXR1
cm4gYXQtPmF0dHJfY3VyOworCX0KKyAgICAgIGF0LT5hdHRyX2N1ciA9IGF0LT5hdHRyX254dDsK
KyAgICB9CisgIAorICAKKyAgaWYgKGF0LT5hdHRyX2VuZCkKKyAgICB7CisgICAgICBEQkcoInNl
YXJjaGluZyBpbiBhdHRyIGxpc3Q9PT09PT0iKTsKKyAgICAgIGNoYXIgKnBhOworCisgICAgICBh
dC0+ZW1mdF9idWYgPSBncnViX21hbGxvYyAoYXQtPm1mdC0+ZGF0YS0+bWZ0X3NpemUgPDwgQkxL
X1NIUik7CisgICAgICBpZiAoYXQtPmVtZnRfYnVmID09IE5VTEwpCisJcmV0dXJuIE5VTEw7CisK
KyAgICAgIHBhID0gYXQtPmF0dHJfZW5kOworICAgICAgaWYgKHBhWzhdKQorCXsKKyAgICAgICAg
ICBpbnQgbjsKKworICAgICAgICAgIG4gPSAoKHUzMmF0IChwYSwgMHgzMCkgKyBHUlVCX0RJU0tf
U0VDVE9SX1NJWkUgLSAxKQorICAgICAgICAgICAgICAgJiAofihHUlVCX0RJU0tfU0VDVE9SX1NJ
WkUgLSAxKSkpOworCSAgYXQtPmF0dHJfY3VyID0gYXQtPmF0dHJfZW5kOworCSAgYXQtPmVkYXRf
YnVmID0gZ3J1Yl9tYWxsb2MgKG4pOworCSAgaWYgKCFhdC0+ZWRhdF9idWYpCisJICAgIHJldHVy
biBOVUxMOworCSAgaWYgKHJlYWRfZGF0YSAoYXQsIHBhLCBhdC0+ZWRhdF9idWYsIDAsIG4sIDAs
IDAsIDApKQorCSAgICB7CisJICAgICAgZ3J1Yl9lcnJvciAoR1JVQl9FUlJfQkFEX0ZTLAorCQkJ
ICAiZmFpbCB0byByZWFkIG5vbi1yZXNpZGVudCBhdHRyaWJ1dGUgbGlzdCIpOworCSAgICAgIHJl
dHVybiBOVUxMOworCSAgICB9CisJICBhdC0+YXR0cl9ueHQgPSBhdC0+ZWRhdF9idWY7CisJICBh
dC0+YXR0cl9lbmQgPSBhdC0+ZWRhdF9idWYgKyB1MzJhdCAocGEsIDB4MzApOworCX0KKyAgICAg
IGVsc2UKKwl7CisJICBhdC0+YXR0cl9ueHQgPSBhdC0+YXR0cl9lbmQgKyB1MTZhdCAocGEsIDB4
MTQpOworCSAgYXQtPmF0dHJfZW5kID0gYXQtPmF0dHJfZW5kICsgdTMyYXQgKHBhLCA0KTsKKwl9
CisgICAgICBhdC0+ZmxhZ3MgfD0gQUZfQUxTVDsKKyAgICAgIHdoaWxlIChhdC0+YXR0cl9ueHQg
PCBhdC0+YXR0cl9lbmQpCisJeworCSAgaWYgKCgodW5zaWduZWQgY2hhcikgKmF0LT5hdHRyX254
dCA9PSBhdHRyKSB8fCAoYXR0ciA9PSAwKSkKKwkgICAgYnJlYWs7CisJICBhdC0+YXR0cl9ueHQg
Kz0gdTE2YXQgKGF0LT5hdHRyX254dCwgNCk7CisJfQorICAgICAgaWYgKGF0LT5hdHRyX254dCA+
PSBhdC0+YXR0cl9lbmQpCisJeworCSAgREJHKCJub3QgZm91bmQgaW4gbGlzdCIpOworCSAgcmV0
dXJuIE5VTEw7CisJfQorICAgICAgREJHKCJmb3VuZCBpbiBhdHRyIGxpc3Q9PT09PT0iKTsKKyAg
ICAgIGlmICgoYXQtPmZsYWdzICYgQUZfTU1GVCkgJiYgKGF0dHIgPT0gQVRfREFUQSkpCisJewor
CSAgREJHKCJBRl9HUE9TISEhISEhPT09PT09Iik7CisJICBhdC0+ZmxhZ3MgfD0gQUZfR1BPUzsK
KwkgIGF0LT5hdHRyX2N1ciA9IGF0LT5hdHRyX254dDsKKwkgIHBhID0gYXQtPmF0dHJfY3VyOwor
CSAgdjMyYXQgKHBhLCAweDEwKSA9IGF0LT5tZnQtPmRhdGEtPm1mdF9zdGFydDsKKwkgIHYzMmF0
IChwYSwgMHgxNCkgPSBhdC0+bWZ0LT5kYXRhLT5tZnRfc3RhcnQgKyAxOworCSAgcGEgPSBhdC0+
YXR0cl9ueHQgKyB1MTZhdCAocGEsIDQpOworCSAgd2hpbGUgKHBhIDwgYXQtPmF0dHJfZW5kKQor
CSAgICB7CisJICAgICAgaWYgKCh1bnNpZ25lZCBjaGFyKSAqcGEgIT0gYXR0cikKKwkJYnJlYWs7
CisJICAgICAgaWYgKHJlYWRfYXR0cgorCQkgIChhdCwgcGEgKyAweDEwLAorCQkgICB1MzJhdCAo
cGEsIDB4MTApICogKGF0LT5tZnQtPmRhdGEtPm1mdF9zaXplIDw8IEJMS19TSFIpLAorCQkgICBh
dC0+bWZ0LT5kYXRhLT5tZnRfc2l6ZSA8PCBCTEtfU0hSLCAwLCAwLCAwKSkKKwkJcmV0dXJuIE5V
TEw7CisJICAgICAgcGEgKz0gdTE2YXQgKHBhLCA0KTsKKwkgICAgfQorCSAgYXQtPmF0dHJfbnh0
ID0gYXQtPmF0dHJfY3VyOworCSAgYXQtPmZsYWdzICY9IH5BRl9HUE9TOworCX0KKyAgICAgIAor
ICAgICAgREJHKCJnb3RvIHJldHJ5PT09PT09Iik7CisgICAgICBnb3RvIHJldHJ5OworICAgIH0K
KworICBEQkcoInJldHVybiBOVUxMIik7CisgIHJldHVybiBOVUxMOworfQorCitzdGF0aWMgY2hh
ciAqCitsb2NhdGVfYXR0ciAoc3RydWN0IGdydWJfbnRmc19hdHRyICphdCwgc3RydWN0IGdydWJf
bnRmc19maWxlICptZnQsCisJICAgICB1bnNpZ25lZCBjaGFyIGF0dHIpCit7CisgIAorCisgIGNo
YXIgKnBhOworICAKKyAgaW5pdF9hdHRyIChhdCwgbWZ0KTsKKworICBEQkcoIlxuISEhISEhXG5s
b2NhdGluZyBhdHRyPTB4JTAyeCwgYXQtPmZsYWc9MHglMDJ4PT09PT09PT09PT09IiwKKyAgICAg
IGF0dHIsIGF0LT5mbGFncyk7CisgIGlmICgocGEgPSBmaW5kX2F0dHIgKGF0LCBhdHRyKSkgPT0g
TlVMTCkKKyAgICB7CisgICAgICBEQkcoIjE9PT09PT09PT1ub3QgZm91bmQiKTsKKyAgICAgIHJl
dHVybiBOVUxMOworICAgIH0KKyAgaWYgKChhdC0+ZmxhZ3MgJiBBRl9BTFNUKSA9PSAwKQorICAg
IHsKKyAgICAgIERCRygiMj09PT09PT1ub3QgYSBhdHRyIGxpc3QsIGNvbnRpbnVlIHNlYXJjaGlu
ZyIpOworICAgICAgd2hpbGUgKDEpCisJeworCSAgaWYgKChwYSA9IGZpbmRfYXR0ciAoYXQsIGF0
dHIpKSA9PSBOVUxMKQorCSAgICBicmVhazsKKwkgIGlmIChhdC0+ZmxhZ3MgJiBBRl9BTFNUKQor
CSAgICB7CisJICAgICAgREJHKCIzPT09PT09PT09PWluIGEgYXR0ciBsaXN0LGZvdW5kIik7CisJ
ICAgICAgcmV0dXJuIHBhOworCSAgICB9CisJfQorICAgICAgREJHKCI0PT09PT09PT1zdGFydCBz
ZWFyY2hpbmcgYWxsIG92ZXIgYWdhaW4iKTsKKyAgICAgIGdydWJfZXJybm8gPSBHUlVCX0VSUl9O
T05FOworICAgICAgZnJlZV9hdHRyIChhdCk7CisgICAgICBpbml0X2F0dHIgKGF0LCBtZnQpOwor
ICAgICAgcGEgPSBmaW5kX2F0dHIgKGF0LCBhdHRyKTsKKyAgICB9CisgIERCRygibG9jYXRlIGZp
bmlzaD09PT09PVxuXG4iKTsKKyAgcmV0dXJuIHBhOworfQorCitzdGF0aWMgY2hhciAqCityZWFk
X3J1bl9kYXRhIChjaGFyICpydW4sIGludCBubiwgZ3J1Yl9kaXNrX2FkZHJfdCAqIHZhbCwgaW50
IHNpZykKK3sKKyAgZ3J1Yl9kaXNrX2FkZHJfdCByLCB2OworCisgIHIgPSAwOworICB2ID0gMTsK
KworICB3aGlsZSAobm4tLSkKKyAgICB7CisgICAgICByICs9IHYgKiAoKih1bnNpZ25lZCBjaGFy
ICopIChydW4rKykpOworICAgICAgdiA8PD0gODsKKyAgICB9CisKKyAgaWYgKChzaWcpICYmIChy
ICYgKHYgPj4gMSkpKQorICAgIHIgLT0gdjsKKworICAqdmFsID0gcjsKKyAgcmV0dXJuIHJ1bjsK
K30KKworZ3J1Yl9lcnJfdAorZ3J1Yl9udGZzX3JlYWRfcnVuX2xpc3QgKHN0cnVjdCBncnViX250
ZnNfcmxzdCAqIGN0eCkKK3sKKyAgREJHKCJyZWFkIHJ1biBsaXN0Iik7CisKKyAgaW50IGMxLCBj
MjsKKyAgZ3J1Yl9kaXNrX2FkZHJfdCB2YWw7CisgIGNoYXIgKnJ1bjsKKworICBydW4gPSBjdHgt
PmN1cl9ydW47CityZXRyeToKKyAgYzEgPSAoKHVuc2lnbmVkIGNoYXIpICgqcnVuKSAmIDB4Rik7
CisgIGMyID0gKCh1bnNpZ25lZCBjaGFyKSAoKnJ1bikgPj4gNCk7CisgIGlmICghYzEpCisgICAg
eworICAgICAgaWYgKChjdHgtPmF0dHIpICYmIChjdHgtPmF0dHItPmZsYWdzICYgQUZfQUxTVCkp
CisJeworCSAgdm9pZCAoKnNhdmVfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAorCQkJ
ICAgICB1bnNpZ25lZCBvZmZzZXQsCisJCQkgICAgIHVuc2lnbmVkIGxlbmd0aCwKKwkJCSAgICAg
dm9pZCAqY2xvc3VyZSk7CisJICAKKwkgIC8vc2F2ZV9ob29rID0gY3R4LT5jb21wLmJzLT5yZWFk
X2hvb2s7CisJICAvL2N0eC0+Y29tcC5icy0+cmVhZF9ob29rID0gMDsKKwkgIHJ1biA9IGZpbmRf
YXR0ciAoY3R4LT5hdHRyLCAodW5zaWduZWQgY2hhcikgKmN0eC0+YXR0ci0+YXR0cl9jdXIpOwor
CSAgLy9jdHgtPmNvbXAuYnMtPnJlYWRfaG9vayA9IHNhdmVfaG9vazsKKwkgIGlmIChydW4pCisJ
ICAgIHsKKwkgICAgICBpZiAocnVuWzhdID09IDApCisJCXJldHVybiBncnViX2Vycm9yIChHUlVC
X0VSUl9CQURfRlMsCisJCQkJICAgIiREQVRBIHNob3VsZCBiZSBub24tcmVzaWRlbnQiKTsKKwor
CSAgICAgIHJ1biArPSB1MTZhdCAocnVuLCAweDIwKTsKKwkgICAgICBjdHgtPmN1cnJfbGNuID0g
MDsKKwkgICAgICBnb3RvIHJldHJ5OworCSAgICB9CisJfQorICAgICAgcmV0dXJuIGdydWJfZXJy
b3IgKEdSVUJfRVJSX0JBRF9GUywgInJ1biBsaXN0IG92ZXJmbG93biIpOworICAgIH0KKyAgcnVu
ID0gcmVhZF9ydW5fZGF0YSAocnVuICsgMSwgYzEsICZ2YWwsIDApOwkvKiBsZW5ndGggb2YgY3Vy
cmVudCBWQ04gKi8KKyAgY3R4LT5jdXJyX3ZjbiA9IGN0eC0+bmV4dF92Y247CisgIGN0eC0+bmV4
dF92Y24gKz0gdmFsOworICBydW4gPSByZWFkX3J1bl9kYXRhIChydW4sIGMyLCAmdmFsLCAxKTsJ
Lyogb2Zmc2V0IHRvIHByZXZpb3VzIExDTiAqLworICBjdHgtPmN1cnJfbGNuICs9IHZhbDsKKyAg
aWYgKHZhbCA9PSAwKQorICAgIGN0eC0+ZmxhZ3MgfD0gUkZfQkxOSzsKKyAgZWxzZQorICAgIGN0
eC0+ZmxhZ3MgJj0gflJGX0JMTks7CisgIGN0eC0+Y3VyX3J1biA9IHJ1bjsKKyAgcmV0dXJuIDA7
Cit9CisKK3N0YXRpYyBncnViX2Rpc2tfYWRkcl90CitncnViX250ZnNfcmVhZF9ibG9jayAoZ3J1
Yl9mc2hlbHBfbm9kZV90IG5vZGUsIGdydWJfZGlza19hZGRyX3QgYmxvY2spCit7CisgIHN0cnVj
dCBncnViX250ZnNfcmxzdCAqY3R4OworCisgIGN0eCA9IChzdHJ1Y3QgZ3J1Yl9udGZzX3Jsc3Qg
Kikgbm9kZTsKKyAgaWYgKGJsb2NrID49IGN0eC0+bmV4dF92Y24pCisgICAgeworICAgICAgaWYg
KGdydWJfbnRmc19yZWFkX3J1bl9saXN0IChjdHgpKQorCXJldHVybiAtMTsKKyAgICAgIHJldHVy
biBjdHgtPmN1cnJfbGNuOworICAgIH0KKyAgZWxzZQorICAgIHJldHVybiAoY3R4LT5mbGFncyAm
IFJGX0JMTkspID8gMCA6IChibG9jayAtCisJCQkJCSBjdHgtPmN1cnJfdmNuICsgY3R4LT5jdXJy
X2xjbik7Cit9CisKK3N0YXRpYyBncnViX2Vycl90CityZWFkX2RhdGEgKHN0cnVjdCBncnViX250
ZnNfYXR0ciAqYXQsIGNoYXIgKnBhLCBjaGFyICpkZXN0LAorCSAgIGdydWJfZGlza19hZGRyX3Qg
b2ZzLCBncnViX3NpemVfdCBsZW4sIGludCBjYWNoZWQsCisJICAgdm9pZCAoKnJlYWRfaG9vaykg
KGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAorCQkJICAgICAgdW5zaWduZWQgb2Zmc2V0LAorCQkJ
ICAgICAgdW5zaWduZWQgbGVuZ3RoLAorCQkJICAgICAgdm9pZCAqY2xvc3VyZSksCisJICAgdm9p
ZCAqY2xvc3VyZSkKK3sKKyAgZ3J1Yl9kaXNrX2FkZHJfdCB2Y247CisgIHN0cnVjdCBncnViX250
ZnNfcmxzdCBjYywgKmN0eDsKKworICBpZiAobGVuID09IDApCisgICAgcmV0dXJuIDA7CisKKyAg
Z3J1Yl9tZW1zZXQgKCZjYywgMCwgc2l6ZW9mIChjYykpOworICBjdHggPSAmY2M7CisgIGN0eC0+
YXR0ciA9IGF0OworICBjdHgtPmNvbXAuc3BjID0gYXQtPm1mdC0+ZGF0YS0+c3BjOworICBjdHgt
PmNvbXAuYnMgPSBhdC0+bWZ0LT5kYXRhLT5iczsKKworICBpZiAocGFbOF0gPT0gMCkKKyAgICB7
CisgICAgICBpZiAob2ZzICsgbGVuID4gdTMyYXQgKHBhLCAweDEwKSkKKwlyZXR1cm4gZ3J1Yl9l
cnJvciAoR1JVQl9FUlJfQkFEX0ZTLCAicmVhZCBvdXQgb2YgcmFuZ2UiKTsKKyAgICAgIGdydWJf
bWVtY3B5IChkZXN0LCBwYSArIHUzMmF0IChwYSwgMHgxNCkgKyBvZnMsIGxlbik7CisgICAgICBy
ZXR1cm4gMDsKKyAgICB9CisKKyAgaWYgKHUxNmF0IChwYSwgMHhDKSAmIEZMQUdfQ09NUFJFU1NF
RCkKKyAgICBjdHgtPmZsYWdzIHw9IFJGX0NPTVA7CisgIGVsc2UKKyAgICBjdHgtPmZsYWdzICY9
IH5SRl9DT01QOworICBjdHgtPmN1cl9ydW4gPSBwYSArIHUxNmF0IChwYSwgMHgyMCk7CisKKyAg
aWYgKGN0eC0+ZmxhZ3MgJiBSRl9DT01QKQorICAgIHsKKyAgICAgIGlmICghY2FjaGVkKQorCXJl
dHVybiBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsICJhdHRyaWJ1dGUgY2FuXCd0IGJlIGNv
bXByZXNzZWQiKTsKKworICAgICAgaWYgKGF0LT5zYnVmKQorCXsKKwkgIGlmICgob2ZzICYgKH4o
Q09NX0xFTiAtIDEpKSkgPT0gYXQtPnNhdmVfcG9zKQorCSAgICB7CisJICAgICAgZ3J1Yl9kaXNr
X2FkZHJfdCBuOworCisJICAgICAgbiA9IENPTV9MRU4gLSAob2ZzIC0gYXQtPnNhdmVfcG9zKTsK
KwkgICAgICBpZiAobiA+IGxlbikKKwkJbiA9IGxlbjsKKworCSAgICAgIGdydWJfbWVtY3B5IChk
ZXN0LCBhdC0+c2J1ZiArIG9mcyAtIGF0LT5zYXZlX3Bvcywgbik7CisJICAgICAgaWYgKG4gPT0g
bGVuKQorCQlyZXR1cm4gMDsKKworCSAgICAgIGRlc3QgKz0gbjsKKwkgICAgICBsZW4gLT0gbjsK
KwkgICAgICBvZnMgKz0gbjsKKwkgICAgfQorCX0KKyAgICAgIGVsc2UKKwl7CisJICBhdC0+c2J1
ZiA9IGdydWJfbWFsbG9jIChDT01fTEVOKTsKKwkgIGlmIChhdC0+c2J1ZiA9PSBOVUxMKQorCSAg
ICByZXR1cm4gZ3J1Yl9lcnJubzsKKwkgIGF0LT5zYXZlX3BvcyA9IDE7CisJfQorCisgICAgICB2
Y24gPSBjdHgtPnRhcmdldF92Y24gPSAob2ZzID4+IENPTV9MT0dfTEVOKSAqIChDT01fU0VDIC8g
Y3R4LT5jb21wLnNwYyk7CisgICAgICBjdHgtPnRhcmdldF92Y24gJj0gfjB4RjsKKyAgICB9Cisg
IGVsc2UKKyAgICB2Y24gPSBjdHgtPnRhcmdldF92Y24gPSBncnViX2Rpdm1vZDY0IChvZnMgPj4g
QkxLX1NIUiwgY3R4LT5jb21wLnNwYywgMCk7CisKKyAgY3R4LT5uZXh0X3ZjbiA9IHUzMmF0IChw
YSwgMHgxMCk7CisgIGN0eC0+Y3Vycl9sY24gPSAwOworICB3aGlsZSAoY3R4LT5uZXh0X3ZjbiA8
PSBjdHgtPnRhcmdldF92Y24pCisgICAgeworICAgICAgaWYgKGdydWJfbnRmc19yZWFkX3J1bl9s
aXN0IChjdHgpKQorCXJldHVybiBncnViX2Vycm5vOworICAgIH0KKworICBpZiAoYXQtPmZsYWdz
ICYgQUZfR1BPUykKKyAgICB7CisgICAgICBncnViX2Rpc2tfYWRkcl90IHN0MCwgc3QxOworICAg
ICAgZ3J1Yl91aW50MzJfdCBtOworCisgICAgICBncnViX2Rpdm1vZDY0IChvZnMgPj4gQkxLX1NI
UiwgY3R4LT5jb21wLnNwYywgJm0pOworCisgICAgICBzdDAgPQorCShjdHgtPnRhcmdldF92Y24g
LSBjdHgtPmN1cnJfdmNuICsgY3R4LT5jdXJyX2xjbikgKiBjdHgtPmNvbXAuc3BjICsgbTsKKyAg
ICAgIHN0MSA9IHN0MCArIDE7CisgICAgICBpZiAoc3QxID09CisJICAoY3R4LT5uZXh0X3ZjbiAt
IGN0eC0+Y3Vycl92Y24gKyBjdHgtPmN1cnJfbGNuKSAqIGN0eC0+Y29tcC5zcGMpCisJeworCSAg
aWYgKGdydWJfbnRmc19yZWFkX3J1bl9saXN0IChjdHgpKQorCSAgICByZXR1cm4gZ3J1Yl9lcnJu
bzsKKwkgIHN0MSA9IGN0eC0+Y3Vycl9sY24gKiBjdHgtPmNvbXAuc3BjOworCX0KKyAgICAgIHYz
MmF0IChkZXN0LCAwKSA9IHN0MDsKKyAgICAgIHYzMmF0IChkZXN0LCA0KSA9IHN0MTsKKyAgICAg
IHJldHVybiAwOworICAgIH0KKworICBpZiAoIShjdHgtPmZsYWdzICYgUkZfQ09NUCkpCisgICAg
eworICAgICAgdW5zaWduZWQgaW50IHBvdzsKKworICAgICAgaWYgKCFncnViX2ZzaGVscF9sb2cy
Ymxrc2l6ZSAoY3R4LT5jb21wLnNwYywgJnBvdykpCisJZ3J1Yl9mc2hlbHBfcmVhZF9maWxlIChj
dHgtPmNvbXAuYnMsIChncnViX2ZzaGVscF9ub2RlX3QpIGN0eCwKKwkJCSAgICAgICByZWFkX2hv
b2ssIGNsb3N1cmUsIG9mcywgbGVuLCBkZXN0LAorCQkJICAgICAgIGdydWJfbnRmc19yZWFkX2Js
b2NrLCBvZnMgKyBsZW4sIHBvdyk7CisgICAgICByZXR1cm4gZ3J1Yl9lcnJubzsKKyAgICB9CisK
KyAgcmV0dXJuIChncnViX250ZnNjb21wX2Z1bmMpID8gZ3J1Yl9udGZzY29tcF9mdW5jIChhdCwg
ZGVzdCwgb2ZzLCBsZW4sIGN0eCwKKwkJCQkJCSAgICB2Y24pIDoKKyAgICBncnViX2Vycm9yIChH
UlVCX0VSUl9CQURfRlMsICJudGZzY29tcCBtb2R1bGUgbm90IGxvYWRlZCIpOworfQorCitzdGF0
aWMgZ3J1Yl9lcnJfdAorcmVhZF9hdHRyIChzdHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0LCBjaGFy
ICpkZXN0LCBncnViX2Rpc2tfYWRkcl90IG9mcywKKwkgICBncnViX3NpemVfdCBsZW4sIGludCBj
YWNoZWQsCisJICAgdm9pZCAoKnJlYWRfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAor
CQkJICAgICAgdW5zaWduZWQgb2Zmc2V0LAorCQkJICAgICAgdW5zaWduZWQgbGVuZ3RoLAorCQkJ
ICAgICAgdm9pZCAqY2xvc3VyZSksCisJICAgdm9pZCAqY2xvc3VyZSkKK3sKKyAgREJHKCJyZWFk
IGF0dHIiKTsKKyAgCisgIGNoYXIgKnNhdmVfY3VyOworICB1bnNpZ25lZCBjaGFyIGF0dHI7Cisg
IGNoYXIgKnBwOworICBncnViX2Vycl90IHJldDsKKworICBzYXZlX2N1ciA9IGF0LT5hdHRyX2N1
cjsKKyAgYXQtPmF0dHJfbnh0ID0gYXQtPmF0dHJfY3VyOworICBhdHRyID0gKHVuc2lnbmVkIGNo
YXIpICphdC0+YXR0cl9ueHQ7CisgIGlmIChhdC0+ZmxhZ3MgJiBBRl9BTFNUKQorICAgIHsKKyAg
ICAgIGNoYXIgKnBhOworICAgICAgZ3J1Yl9kaXNrX2FkZHJfdCB2Y247CisKKyAgICAgIHZjbiA9
IGdydWJfZGl2bW9kNjQgKG9mcywgYXQtPm1mdC0+ZGF0YS0+c3BjIDw8IEJMS19TSFIsIDApOwor
ICAgICAgcGEgPSBhdC0+YXR0cl9ueHQgKyB1MTZhdCAoYXQtPmF0dHJfbnh0LCA0KTsKKyAgICAg
IHdoaWxlIChwYSA8IGF0LT5hdHRyX2VuZCkKKwl7CisJICBpZiAoKHVuc2lnbmVkIGNoYXIpICpw
YSAhPSBhdHRyKQorCSAgICBicmVhazsKKwkgIGlmICh1MzJhdCAocGEsIDgpID4gdmNuKQorCSAg
ICBicmVhazsKKwkgIGF0LT5hdHRyX254dCA9IHBhOworCSAgcGEgKz0gdTE2YXQgKHBhLCA0KTsK
Kwl9CisgICAgfQorICBwcCA9IGZpbmRfYXR0ciAoYXQsIGF0dHIpOworICBpZiAocHApCisgICAg
cmV0ID0gcmVhZF9kYXRhIChhdCwgcHAsIGRlc3QsIG9mcywgbGVuLCBjYWNoZWQsIHJlYWRfaG9v
aywgY2xvc3VyZSk7CisgIGVsc2UKKyAgICByZXQgPQorICAgICAgKGdydWJfZXJybm8pID8gZ3J1
Yl9lcnJubyA6IGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywKKwkJCQkJICAgICAgImF0dHJp
YnV0ZSBub3QgZm91bmQiKTsKKyAgYXQtPmF0dHJfY3VyID0gc2F2ZV9jdXI7CisgIHJldHVybiBy
ZXQ7Cit9CisKK3N0YXRpYyBncnViX2Vycl90CityZWFkX21mdCAoc3RydWN0IGdydWJfbnRmc19k
YXRhICpkYXRhLCBjaGFyICpidWYsIGdydWJfdWludDMyX3QgbWZ0bm8pCit7CisgIGlmIChyZWFk
X2F0dHIKKyAgICAgICgmZGF0YS0+bW1mdC5hdHRyLCBidWYsIG1mdG5vICogKChncnViX2Rpc2tf
YWRkcl90KSBkYXRhLT5tZnRfc2l6ZSkgPDwgQkxLX1NIUiwKKyAgICAgICBkYXRhLT5tZnRfc2l6
ZSA8PCBCTEtfU0hSLCAwLCAwLCAwKSkKKyAgICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJf
QkFEX0ZTLCAiUmVhZCBNRlQgMHglWCBmYWlscyIsIG1mdG5vKTsKKyAgcmV0dXJuIGZpeHVwIChk
YXRhLCBidWYsIGRhdGEtPm1mdF9zaXplLCAoY2hhciopIkZJTEUiKTsKK30KKworc3RhdGljIGdy
dWJfZXJyX3QKK2luaXRfZmlsZSAoc3RydWN0IGdydWJfbnRmc19maWxlICptZnQsIGdydWJfdWlu
dDMyX3QgbWZ0bm8pCit7CisgIERCRygiaW5pdCBmaWxlIik7CisgIAorICB1bnNpZ25lZCBzaG9y
dCBmbGFnOworCisgIG1mdC0+aW5vZGVfcmVhZCA9IDE7CisKKyAgbWZ0LT5idWYgPSBncnViX21h
bGxvYyAobWZ0LT5kYXRhLT5tZnRfc2l6ZSA8PCBCTEtfU0hSKTsKKyAgaWYgKG1mdC0+YnVmID09
IE5VTEwpCisgICAgcmV0dXJuIGdydWJfZXJybm87CisKKyAgaWYgKHJlYWRfbWZ0IChtZnQtPmRh
dGEsIG1mdC0+YnVmLCBtZnRubykpCisgICAgcmV0dXJuIGdydWJfZXJybm87CisKKyAgZmxhZyA9
IHUxNmF0IChtZnQtPmJ1ZiwgMHgxNik7CisgIGlmICgoZmxhZyAmIDEpID09IDApCisgICAgcmV0
dXJuIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywgIk1GVCAweCVYIGlzIG5vdCBpbiB1c2Ui
LCBtZnRubyk7CisKKyAgaWYgKChmbGFnICYgMikgPT0gMCkKKyAgICB7CisgICAgICBjaGFyICpw
YTsKKworICAgICAgcGEgPSBsb2NhdGVfYXR0ciAoJm1mdC0+YXR0ciwgbWZ0LCBBVF9EQVRBKTsK
KyAgICAgIGlmIChwYSA9PSBOVUxMKQorCXJldHVybiBncnViX2Vycm9yIChHUlVCX0VSUl9CQURf
RlMsICJubyAkREFUQSBpbiBNRlQgMHglWCIsIG1mdG5vKTsKKworICAgICAgaWYgKCFwYVs4XSkK
KwltZnQtPnNpemUgPSB1MzJhdCAocGEsIDB4MTApOworICAgICAgZWxzZQorCW1mdC0+c2l6ZSA9
IHU2NGF0IChwYSwgMHgzMCk7CisKKyAgICAgIGlmICgobWZ0LT5hdHRyLmZsYWdzICYgQUZfQUxT
VCkgPT0gMCkKKwltZnQtPmF0dHIuYXR0cl9lbmQgPSAwOwkvKiAgRG9uJ3QganVtcCB0byBhdHRy
aWJ1dGUgbGlzdCAqLworICAgIH0KKyAgZWxzZQorICAgIGluaXRfYXR0ciAoJm1mdC0+YXR0ciwg
bWZ0KTsKKworICByZXR1cm4gMDsKK30KKworc3RhdGljIHZvaWQKK2ZyZWVfZmlsZSAoc3RydWN0
IGdydWJfbnRmc19maWxlICptZnQpCit7CisgIGZyZWVfYXR0ciAoJm1mdC0+YXR0cik7CisgIGdy
dWJfZnJlZSAobWZ0LT5idWYpOworfQorCitzdGF0aWMgaW50CitsaXN0X2ZpbGUgKHN0cnVjdCBn
cnViX250ZnNfZmlsZSAqZGlybywgY2hhciAqcG9zLAorCSAgIGludCAoKmhvb2spIChjb25zdCBj
aGFyICpmaWxlbmFtZSwKKwkJCWVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZmlsZXR5cGUsCisJ
CQlncnViX2ZzaGVscF9ub2RlX3Qgbm9kZSwKKwkJCXZvaWQgKmNsb3N1cmUpLAorCSAgIHZvaWQg
KmNsb3N1cmUpCit7CisgIGNoYXIgKm5wOworICBpbnQgbnM7CisKKyAgd2hpbGUgKDEpCisgICAg
eworICAgICAgY2hhciAqdXN0ciwgbmFtZXNwYWNlOworICAgICAgY2hhciogZ2JzdHI7CisKKyAg
ICAgIGlmIChwb3NbMHhDXSAmIDIpCQkvKiBlbmQgc2lnbmF0dXJlICovCisJYnJlYWs7CisKKyAg
ICAgIG5wID0gcG9zICsgMHg1MDsKKyAgICAgIG5zID0gKHVuc2lnbmVkIGNoYXIpICoobnArKyk7
CisgICAgICBuYW1lc3BhY2UgPSAqKG5wKyspOworCisgICAgICAvKgorICAgICAgICogIElnbm9y
ZSBmaWxlcyBpbiBET1MgbmFtZXNwYWNlLCBhcyB0aGV5IHdpbGwgcmVhcHBlYXIgYXMgV2luMzIK
KyAgICAgICAqICBuYW1lcy4KKyAgICAgICAqLworICAgICAgaWYgKChucykgJiYgKG5hbWVzcGFj
ZSAhPSAyKSkKKwl7CisJICBlbnVtIGdydWJfZnNoZWxwX2ZpbGV0eXBlIHR5cGU7CisJICBzdHJ1
Y3QgZ3J1Yl9udGZzX2ZpbGUgKmZkaXJvOworCisJICBpZiAodTE2YXQgKHBvcywgNCkpCisJICAg
IHsKKwkgICAgICBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsICI2NC1iaXQgTUZUIG51bWJl
ciIpOworCSAgICAgIHJldHVybiAwOworCSAgICB9CisKKwkgIHR5cGUgPQorCSAgICAodTMyYXQg
KHBvcywgMHg0OCkgJiBBVFRSX0RJUkVDVE9SWSkgPyBHUlVCX0ZTSEVMUF9ESVIgOgorCSAgICBH
UlVCX0ZTSEVMUF9SRUc7CisKKwkgIGZkaXJvID0gZ3J1Yl96YWxsb2MgKHNpemVvZiAoc3RydWN0
IGdydWJfbnRmc19maWxlKSk7CisJICBpZiAoIWZkaXJvKQorCSAgICByZXR1cm4gMDsKKworCSAg
ZmRpcm8tPmRhdGEgPSBkaXJvLT5kYXRhOworCSAgZmRpcm8tPmlubyA9IHUzMmF0IChwb3MsIDAp
OworCisJICB1c3RyID0gZ3J1Yl9tYWxsb2MgKG5zICogNCArIDEpOworCSAgZ2JzdHIgPSBncnVi
X21hbGxvYyhucyAqIDIgKyAxKTsKKwkgICAgaWYgKHVzdHIgPT0gTlVMTCB8fCBnYnN0ciA9PSBO
VUxMKQorCSAgICByZXR1cm4gMDsKKwkgICpncnViX3V0ZjE2X3RvX3V0ZjggKChncnViX3VpbnQ4
X3QgKikgdXN0ciwgKGdydWJfdWludDE2X3QgKikgbnAsCisJCQkgICAgICAgbnMpID0gJ1wwJzsK
KwkgIHUyZyh1c3RyLCBzdHJsZW4odXN0ciksIGdic3RyLCBucyAqIDIgKyAxKTsKKwkgIERCRygi
Z2JzdHI9JXMiLCBnYnN0cik7CisgICAgICAgICAgaWYgKG5hbWVzcGFjZSkKKyAgICAgICAgICAg
IHR5cGUgfD0gR1JVQl9GU0hFTFBfQ0FTRV9JTlNFTlNJVElWRTsKKworCSAgaWYgKGhvb2sgKGdi
c3RyLCB0eXBlLCBmZGlybywgY2xvc3VyZSkpCisJICAgIHsKKwkgICAgICBncnViX2ZyZWUgKHVz
dHIpOworCSAgICAgIGdydWJfZnJlZSAoZ2JzdHIpOworCSAgICAgIHJldHVybiAxOworCSAgICB9
CisJICBncnViX2ZyZWUoZ2JzdHIpOworCSAgZ3J1Yl9mcmVlICh1c3RyKTsKKwl9CisgICAgICBw
b3MgKz0gdTE2YXQgKHBvcywgOCk7CisgICAgfQorICByZXR1cm4gMDsKK30KKworc3RhdGljIGlu
dAorZ3J1Yl9udGZzX2l0ZXJhdGVfZGlyIChncnViX2ZzaGVscF9ub2RlX3QgZGlyLAorCQkgICAg
ICAgaW50ICgqaG9vaykgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJCSAgICBlbnVtIGdydWJf
ZnNoZWxwX2ZpbGV0eXBlIGZpbGV0eXBlLAorCQkJCSAgICBncnViX2ZzaGVscF9ub2RlX3Qgbm9k
ZSwKKwkJCQkgICAgdm9pZCAqY2xvc3VyZSksCisJCSAgICAgICB2b2lkICpjbG9zdXJlKQorewor
ICB1bnNpZ25lZCBjaGFyICpiaXRtYXA7CisgIHN0cnVjdCBncnViX250ZnNfYXR0ciBhdHRyLCAq
YXQ7CisgIGNoYXIgKmN1cl9wb3MsICppbmR4LCAqYm1wOworICBpbnQgcmV0ID0gMDsKKyAgZ3J1
Yl9zaXplX3QgYml0bWFwX2xlbjsKKyAgc3RydWN0IGdydWJfbnRmc19maWxlICptZnQ7CisKKyAg
bWZ0ID0gKHN0cnVjdCBncnViX250ZnNfZmlsZSAqKSBkaXI7CisKKyAgaWYgKCFtZnQtPmlub2Rl
X3JlYWQpCisgICAgeworICAgICAgaWYgKGluaXRfZmlsZSAobWZ0LCBtZnQtPmlubykpCisJcmV0
dXJuIDA7CisgICAgfQorCisgIGluZHggPSBOVUxMOworICBibXAgPSBOVUxMOworCisgIGF0ID0g
JmF0dHI7CisgIGluaXRfYXR0ciAoYXQsIG1mdCk7CisgIHdoaWxlICgxKQorICAgIHsKKyAgICAg
IGlmICgoY3VyX3BvcyA9IGZpbmRfYXR0ciAoYXQsIEFUX0lOREVYX1JPT1QpKSA9PSBOVUxMKQor
CXsKKwkgIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywgIm5vICRJTkRFWF9ST09UIik7CisJ
ICBnb3RvIGRvbmU7CisJfQorCisgICAgICAvKiBSZXNpZGVudCwgTmFtZWxlbj00LCBPZmZzZXQ9
MHgxOCwgRmxhZ3M9MHgwMCwgTmFtZT0iJEkzMCIgKi8KKyAgICAgIGlmICgodTMyYXQgKGN1cl9w
b3MsIDgpICE9IDB4MTgwNDAwKSB8fAorCSAgKHUzMmF0IChjdXJfcG9zLCAweDE4KSAhPSAweDQ5
MDAyNCkgfHwKKwkgICh1MzJhdCAoY3VyX3BvcywgMHgxQykgIT0gMHgzMDAwMzMpKQorCWNvbnRp
bnVlOworICAgICAgY3VyX3BvcyArPSB1MTZhdCAoY3VyX3BvcywgMHgxNCk7CisgICAgICBpZiAo
KmN1cl9wb3MgIT0gMHgzMCkJLyogTm90IGZpbGVuYW1lIGluZGV4ICovCisJY29udGludWU7Cisg
ICAgICBicmVhazsKKyAgICB9CisKKyAgY3VyX3BvcyArPSAweDEwOwkJLyogU2tpcCBpbmRleCBy
b290ICovCisgIHJldCA9IGxpc3RfZmlsZSAobWZ0LCBjdXJfcG9zICsgdTE2YXQgKGN1cl9wb3Ms
IDApLCBob29rLCBjbG9zdXJlKTsKKyAgaWYgKHJldCkKKyAgICBnb3RvIGRvbmU7CisgICAgCisK
KyAgYml0bWFwID0gTlVMTDsKKyAgYml0bWFwX2xlbiA9IDA7CisgIGZyZWVfYXR0ciAoYXQpOwor
ICBpbml0X2F0dHIgKGF0LCBtZnQpOworICB3aGlsZSAoKGN1cl9wb3MgPSBmaW5kX2F0dHIgKGF0
LCBBVF9CSVRNQVApKSAhPSBOVUxMKQorICAgIHsKKyAgICAgIGludCBvZnM7CisKKyAgICAgIG9m
cyA9ICh1bnNpZ25lZCBjaGFyKSBjdXJfcG9zWzB4QV07CisgICAgICAvKiBOYW1lbGVuPTQsIE5h
bWU9IiRJMzAiICovCisgICAgICBpZiAoKGN1cl9wb3NbOV0gPT0gNCkgJiYKKwkgICh1MzJhdCAo
Y3VyX3Bvcywgb2ZzKSA9PSAweDQ5MDAyNCkgJiYKKwkgICh1MzJhdCAoY3VyX3Bvcywgb2ZzICsg
NCkgPT0gMHgzMDAwMzMpKQorCXsKKyAgICAgICAgICBpbnQgaXNfcmVzaWRlbnQgPSAoY3VyX3Bv
c1s4XSA9PSAwKTsKKworICAgICAgICAgIGJpdG1hcF9sZW4gPSAoKGlzX3Jlc2lkZW50KSA/IHUz
MmF0IChjdXJfcG9zLCAweDEwKSA6CisgICAgICAgICAgICAgICAgICAgICAgICB1MzJhdCAoY3Vy
X3BvcywgMHgyOCkpOworCisgICAgICAgICAgYm1wID0gZ3J1Yl9tYWxsb2MgKGJpdG1hcF9sZW4p
OworICAgICAgICAgIGlmIChibXAgPT0gTlVMTCkKKyAgICAgICAgICAgIGdvdG8gZG9uZTsKKwor
CSAgaWYgKGlzX3Jlc2lkZW50KQorCSAgICB7CisgICAgICAgICAgICAgIGdydWJfbWVtY3B5IChi
bXAsIChjaGFyICopIChjdXJfcG9zICsgdTE2YXQgKGN1cl9wb3MsIDB4MTQpKSwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGJpdG1hcF9sZW4pOworCSAgICB9CisgICAgICAgICAgZWxzZQor
ICAgICAgICAgICAgeworICAgICAgICAgICAgICBpZiAocmVhZF9kYXRhIChhdCwgY3VyX3Bvcywg
Ym1wLCAwLCBiaXRtYXBfbGVuLCAwLCAwLCAwKSkKKyAgICAgICAgICAgICAgICB7CisgICAgICAg
ICAgICAgICAgICBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAiZmFpbHMgdG8gcmVhZCBub24tcmVzaWRlbnQgJEJJVE1BUCIpOworICAg
ICAgICAgICAgICAgICAgZ290byBkb25lOworICAgICAgICAgICAgICAgIH0KKyAgICAgICAgICAg
ICAgYml0bWFwX2xlbiA9IHUzMmF0IChjdXJfcG9zLCAweDMwKTsKKyAgICAgICAgICAgIH0KKwor
ICAgICAgICAgIGJpdG1hcCA9ICh1bnNpZ25lZCBjaGFyICopIGJtcDsKKwkgIGJyZWFrOworCX0K
KyAgICB9CisKKyAgZnJlZV9hdHRyIChhdCk7CisgIGN1cl9wb3MgPSBsb2NhdGVfYXR0ciAoYXQs
IG1mdCwgQVRfSU5ERVhfQUxMT0NBVElPTik7CisgIHdoaWxlIChjdXJfcG9zICE9IE5VTEwpCisg
ICAgeworICAgICAgLyogTm9uLXJlc2lkZW50LCBOYW1lbGVuPTQsIE9mZnNldD0weDQwLCBGbGFn
cz0wLCBOYW1lPSIkSTMwIiAqLworICAgICAgaWYgKCh1MzJhdCAoY3VyX3BvcywgOCkgPT0gMHg0
MDA0MDEpICYmCisJICAodTMyYXQgKGN1cl9wb3MsIDB4NDApID09IDB4NDkwMDI0KSAmJgorCSAg
KHUzMmF0IChjdXJfcG9zLCAweDQ0KSA9PSAweDMwMDAzMykpCisJYnJlYWs7CisgICAgICBjdXJf
cG9zID0gZmluZF9hdHRyIChhdCwgQVRfSU5ERVhfQUxMT0NBVElPTik7CisgICAgfQorCisgIGlm
ICgoIWN1cl9wb3MpICYmIChiaXRtYXApKQorICAgIHsKKyAgICAgIGdydWJfZXJyb3IgKEdSVUJf
RVJSX0JBRF9GUywgIiRCSVRNQVAgd2l0aG91dCAkSU5ERVhfQUxMT0NBVElPTiIpOworICAgICAg
Z290byBkb25lOworICAgIH0KKworICBpZiAoYml0bWFwKQorICAgIHsKKyAgICAgIGdydWJfZGlz
a19hZGRyX3QgdiwgaTsKKworICAgICAgaW5keCA9IGdydWJfbWFsbG9jIChtZnQtPmRhdGEtPmlk
eF9zaXplIDw8IEJMS19TSFIpOworICAgICAgaWYgKGluZHggPT0gTlVMTCkKKwlnb3RvIGRvbmU7
CisKKyAgICAgIHYgPSAxOworICAgICAgZm9yIChpID0gMDsgaSA8IChncnViX2Rpc2tfYWRkcl90
KWJpdG1hcF9sZW4gKiA4OyBpKyspCisJeworCSAgaWYgKCpiaXRtYXAgJiB2KQorCSAgICB7CisJ
ICAgICAgaWYgKChyZWFkX2F0dHIKKwkJICAgKGF0LCBpbmR4LCBpICogKG1mdC0+ZGF0YS0+aWR4
X3NpemUgPDwgQkxLX1NIUiksCisJCSAgICAobWZ0LT5kYXRhLT5pZHhfc2l6ZSA8PCBCTEtfU0hS
KSwgMCwgMCwgMCkpCisJCSAgfHwgKGZpeHVwIChtZnQtPmRhdGEsIGluZHgsIG1mdC0+ZGF0YS0+
aWR4X3NpemUsIChjaGFyKikiSU5EWCIpKSkKKwkJZ290byBkb25lOworCSAgICAgIHJldCA9IGxp
c3RfZmlsZSAobWZ0LCAmaW5keFsweDE4ICsgdTE2YXQgKGluZHgsIDB4MTgpXSwgaG9vaywKKwkJ
CSAgICAgICBjbG9zdXJlKTsKKwkgICAgICBpZiAocmV0KQorCQlnb3RvIGRvbmU7CisJICAgIH0K
KwkgIHYgPDw9IDE7CisJICBpZiAodiA+PSAweDEwMCkKKwkgICAgeworCSAgICAgIHYgPSAxOwor
CSAgICAgIGJpdG1hcCsrOworCSAgICB9CisJfQorICAgIH0KKworZG9uZToKKyAgZnJlZV9hdHRy
IChhdCk7CisgIGdydWJfZnJlZSAoaW5keCk7CisgIGdydWJfZnJlZSAoYm1wKTsKKworICByZXR1
cm4gcmV0OworfQorCisKK3N0cnVjdCBncnViX250ZnNfZGF0YSAqCitncnViX250ZnNfbW91bnQg
KEJsb2NrRHJpdmVyU3RhdGUqIGJzLCBncnViX3VpbnQzMl90IHBhcnRfb2ZmX3NlY3RvcikKK3sK
KyAgc3RydWN0IGdydWJfbnRmc19icGIgYnBiOworICBzdHJ1Y3QgZ3J1Yl9udGZzX2RhdGEgKmRh
dGEgPSAwOworICBncnViX29mZl90IG9mZl9ieXRlcyA9IChncnViX29mZl90KXBhcnRfb2ZmX3Nl
Y3RvciA8PCBCTEtfU0hSOyAKKyAgCisgIGlmICghYnMpCisgICAgZ290byBmYWlsOworCisgIGRh
dGEgPSAoc3RydWN0IGdydWJfbnRmc19kYXRhICopIGdydWJfemFsbG9jIChzaXplb2YgKCpkYXRh
KSk7CisgIGlmICghZGF0YSkKKyAgICBnb3RvIGZhaWw7CisKKyAgZGF0YS0+YnMgPSBiczsKKwor
ICAvKiBSZWFkIHRoZSBCUEIuICAqLworICBpZiAoYmRydl9wcmVhZCAoYnMsIG9mZl9ieXRlcywg
JmJwYiwgc2l6ZW9mIChicGIpKSAhPSBzaXplb2YoYnBiKSkKKyAgICB7CisgICAgICBEQkcoInJl
YWQgYnBiIGVyciEiKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGlmIChncnViX21lbWNt
cCAoKGNoYXIgKikgJmJwYi5vZW1fbmFtZSwgIk5URlMiLCA0KSkKKyAgICB7CisgICAgICBEQkco
ImJwYi5vZW1fbmFtZT0lcywgbm90IG50ZnMiLCBicGIub2VtX25hbWUpOworICAgICAgZ290byBm
YWlsOworICAgIH0KKyAgZGF0YS0+YmxvY2tzaXplID0gZ3J1Yl9sZV90b19jcHUxNiAoYnBiLmJ5
dGVzX3Blcl9zZWN0b3IpOworICBkYXRhLT5zcGMgPSBicGIuc2VjdG9yc19wZXJfY2x1c3RlciAq
IChkYXRhLT5ibG9ja3NpemUgPj4gQkxLX1NIUik7CisKKyAgaWYgKGJwYi5jbHVzdGVyc19wZXJf
bWZ0ID4gMCkKKyAgICBkYXRhLT5tZnRfc2l6ZSA9IGRhdGEtPnNwYyAqIGJwYi5jbHVzdGVyc19w
ZXJfbWZ0OworICBlbHNlCisgICAgZGF0YS0+bWZ0X3NpemUgPSAxIDw8ICgtYnBiLmNsdXN0ZXJz
X3Blcl9tZnQgLSBCTEtfU0hSKTsKKworICBpZiAoYnBiLmNsdXN0ZXJzX3Blcl9pbmRleCA+IDAp
CisgICAgZGF0YS0+aWR4X3NpemUgPSBkYXRhLT5zcGMgKiBicGIuY2x1c3RlcnNfcGVyX2luZGV4
OworICBlbHNlCisgICAgZGF0YS0+aWR4X3NpemUgPSAxIDw8ICgtYnBiLmNsdXN0ZXJzX3Blcl9p
bmRleCAtIEJMS19TSFIpOworCisgIGRhdGEtPm1mdF9zdGFydCA9IGdydWJfbGVfdG9fY3B1NjQg
KGJwYi5tZnRfbGNuKSAqIGRhdGEtPnNwYzsKKworICBpZiAoKGRhdGEtPm1mdF9zaXplID4gTUFY
X01GVCkgfHwgKGRhdGEtPmlkeF9zaXplID4gTUFYX0lEWCkpCisgICAgZ290byBmYWlsOworCisg
IGRhdGEtPm1tZnQuZGF0YSA9IGRhdGE7CisgIGRhdGEtPmNtZnQuZGF0YSA9IGRhdGE7CisKKyAg
ZGF0YS0+bW1mdC5idWYgPSBncnViX21hbGxvYyAoZGF0YS0+bWZ0X3NpemUgPDwgQkxLX1NIUik7
CisgIGlmICghZGF0YS0+bW1mdC5idWYpCisgICAgZ290byBmYWlsOworCisgIHNfYnBiX2J5dGVz
X3Blcl9zZWN0b3IgPSAoYnBiLmJ5dGVzX3Blcl9zZWN0b3IpOworICBzX3BhcnRfb2ZmX3NlY3Rv
ciA9IHBhcnRfb2ZmX3NlY3RvcjsKKyAgREJHKCJicGIuYnl0ZXNfcGVyX3NlY3Rvcj1ibG9ja3Np
emU9JXVcbiIKKyAgICAgICJicGIuc2VjdG9yX3Blcl9jbHVzdGVyPSV1XG4iCisgICAgICAiZGF0
YS0+YmxvY2tzaXplPSV1XG4iCisgICAgICAiZGF0YS0+c3BjPSV1XG4iCisgICAgICAiYnBiLmNs
dXN0ZXJzX3Blcl9tZnQ9JWRcbiIKKyAgICAgICJkYXRhLT5tZnRfc2l6ZT0ldVxuIgorICAgICAg
ImJwYi50b3RhbF9zZWN0b3JzPSV6ZFxuIgorICAgICAgImJwYi5tZnRfbGNuPSV6ZFxuIgorICAg
ICAgImRhdGEtPm1mdF9zdGFydD0ldVxuIiwKKyAgICAgIChicGIuYnl0ZXNfcGVyX3NlY3Rvciks
IChicGIuc2VjdG9yc19wZXJfY2x1c3RlciksCisgICAgICAoZGF0YS0+YmxvY2tzaXplKSwgKGRh
dGEtPnNwYyksCisgICAgICAoYnBiLmNsdXN0ZXJzX3Blcl9tZnQpLCAoZGF0YS0+bWZ0X3NpemUp
LAorICAgICAgKGJwYi5udW1fdG90YWxfc2VjdG9ycyksCisgICAgICAoZ3J1Yl9sZV90b19jcHU2
NChicGIubWZ0X2xjbikpLCAoZGF0YS0+bWZ0X3N0YXJ0KSk7CisgIAorICBvZmZfYnl0ZXMgPSAo
Z3J1Yl9vZmZfdClkYXRhLT5tZnRfc3RhcnQgPDwgQkxLX1NIUjsKKyAgZ3J1Yl91aW50MzJfdCBs
ZW4gPSBkYXRhLT5tZnRfc2l6ZSA8PCBCTEtfU0hSOworICBpZiAoYmRydl9wcmVhZF9mcm9tX2xj
bl9vZl92b2x1bShicywgb2ZmX2J5dGVzLAorCQkgZGF0YS0+bW1mdC5idWYsIGxlbikgIT0gbGVu
KQorICAgIHsKKyAgICAgIERCRygicmVhZCBtbWZ0IGVycm9yISIpOworICAgICAgZ290byBmYWls
OworICAgIH0KKyAgZGF0YS0+dXVpZCA9IGdydWJfbGVfdG9fY3B1NjQgKGJwYi5udW1fc2VyaWFs
KTsKKworICBpZiAoZml4dXAgKGRhdGEsIGRhdGEtPm1tZnQuYnVmLCBkYXRhLT5tZnRfc2l6ZSwg
KGNoYXIqKSJGSUxFIikpCisgICAgZ290byBmYWlsOworCisgIGlmICghbG9jYXRlX2F0dHIgKCZk
YXRhLT5tbWZ0LmF0dHIsICZkYXRhLT5tbWZ0LCBBVF9EQVRBKSkKKyAgICB7CisgICAgICBEQkco
ImxvY2F0ZV9hdHRyIEFUX0RBVEEgaW4gbW1mdCBmYWlsZWQhICIpOworICAgICAgZ290byBmYWls
OworICAgIH0KKyAgaWYgKGluaXRfZmlsZSAoJmRhdGEtPmNtZnQsIEZJTEVfUk9PVCkpCisgICAg
eworICAgICAgREJHKCJpbml0X2ZpbGUgRklMRV9ST09UIGZhaWxlZCEiKTsKKyAgICAgIGdvdG8g
ZmFpbDsKKyAgICB9CisgIHJldHVybiBkYXRhOworCitmYWlsOgorICBncnViX2Vycm9yIChHUlVC
X0VSUl9CQURfRlMsICJub3QgYW4gbnRmcyBmaWxlc3lzdGVtIik7CisKKyAgaWYgKGRhdGEpCisg
ICAgeworICAgICAgZnJlZV9maWxlICgmZGF0YS0+bW1mdCk7CisgICAgICBmcmVlX2ZpbGUgKCZk
YXRhLT5jbWZ0KTsKKyAgICAgIGdydWJfZnJlZSAoZGF0YSk7CisgICAgfQorICByZXR1cm4gMDsK
K30KKworc3RydWN0IGdydWJfbnRmc19kaXJfY2xvc3VyZQoreworICBpbnQgKCpob29rKSAoY29u
c3QgY2hhciAqZmlsZW5hbWUsCisJICAgICAgIGNvbnN0IHN0cnVjdCBncnViX2Rpcmhvb2tfaW5m
byAqaW5mbywKKwkgICAgICAgdm9pZCAqY2xvc3VyZSk7CisgIHZvaWQgKmNsb3N1cmU7CisgIHN0
cnVjdCBncnViX250ZnNfZmlsZSBmaWxlOworfTsKKworc3RhdGljIGludAoraXRlcmF0ZSAoY29u
c3QgY2hhciAqZmlsZW5hbWUsCisJIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZmlsZXR5cGUs
CisJIGdydWJfZnNoZWxwX25vZGVfdCBub2RlLAorCSB2b2lkICpjbG9zdXJlKQoreworICBzdHJ1
Y3QgZ3J1Yl9udGZzX2Rpcl9jbG9zdXJlICpjID0gY2xvc3VyZTsKKyAgc3RydWN0IGdydWJfZGly
aG9va19pbmZvIGluZm87CisgIGdydWJfbWVtc2V0ICgmaW5mbywgMCwgc2l6ZW9mIChpbmZvKSk7
CisgIGluZm8uZGlyID0gKChmaWxldHlwZSAmIEdSVUJfRlNIRUxQX1RZUEVfTUFTSykgPT0gR1JV
Ql9GU0hFTFBfRElSKTsKKyAgYy0+ZmlsZS5kYXRhID0gbm9kZS0+ZGF0YTsKKyAgYy0+ZmlsZS5p
bm8gPSBub2RlLT5pbm87CisgIGdydWJfZnJlZSAobm9kZSk7CisgIAorICAKKyAgaWYoaW5pdF9m
aWxlKCZjLT5maWxlLCBjLT5maWxlLmlubykpCisgICAgeworICAgICAgZXJyeCgxLCAiaXRlcmF0
ZSgpOiBpbml0X2ZpbGUgZXJyb3IhXG4iKTsKKyAgICB9CisgIGVsc2UKKyAgICB7CisgICAgICBE
QkcoIi4uLi4uLmN1cnJlbnQgZmlsZSBtZnQgcmVhZCBzdWNjZXNzZnVsbHkhXG4iKTsKKyAgICB9
CisgIGNoYXIgKnBhID0gbG9jYXRlX2F0dHIoJmMtPmZpbGUuYXR0ciwKKwkJCSAmYy0+ZmlsZSwg
QVRfU1RBTkRBUkRfSU5GT1JNQVRJT04pOworICBpZihOVUxMID09IHBhKQorICAgIHsKKyAgICAg
IGVycngoMiwgIm5vICRTVEFOREFSRF9JTkZPUk1BVElPTiBpbiBNRlQgMHgleFxuIiwgYy0+Zmls
ZS5pbm8pOworICAgIH0KKyAgZ3J1Yl91aW50NjRfdCBkYXRlPSAwOworICBpZihyZWFkX2F0dHIo
JmMtPmZpbGUuYXR0ciwgKGNoYXIqKSZkYXRlLCAwLCA4LCAxLCBOVUxMLCBOVUxMKSkKKyAgICB7
CisgICAgICBlcnJ4KDMsICJyZWFkIGRhdGUgZXJyb3JcbiIpOworICAgIH0KKyAgZWxzZQorICAg
IHsKKworICAgICAgaW5mby50aW1lX250ZnMgPSBkYXRlOworICAgICAgREJHKCIuLi4uLi5kYXRl
OiAlenVcbiIsIGRhdGUpOworICAgIH0KKyAgREJHKCIuLi4uLi5zaXplIG9mIFwnJXNcJzogJXp1
XG4iLCBmaWxlbmFtZSwgKGMtPmZpbGUuc2l6ZSkpOworICBpbmZvLmZpbGVzaXplX250ZnMgPSBj
LT5maWxlLnNpemU7CisgIGZyZWVfZmlsZSgmYy0+ZmlsZSk7CisgIHJldHVybiBjLT5ob29rIChm
aWxlbmFtZSwgJmluZm8sIGMtPmNsb3N1cmUpOworfQorCisKKyNpbmNsdWRlICJmcy10aW1lLmgi
CitzdGF0aWMgIGludCBmaW5kX3RoZW5fbHNfaG9vayhjb25zdCBjaGFyICpmaWxlbmFtZSwKKwkJ
CSAgIGNvbnN0IHN0cnVjdCBncnViX2Rpcmhvb2tfaW5mbyAqaW5mbywgdm9pZCAqY2xvc3VyZSkK
K3sKKyAgc3RydWN0IGxzX2N0cmwqIGN0cmwgPSAoc3RydWN0IGxzX2N0cmwqKWNsb3N1cmU7Cisg
IERCRygiZGV0YWlsPSVkIiwgY3RybC0+ZGV0YWlsKTsKKyAgaWYoJyQnID09ICpmaWxlbmFtZSkK
KyAgICBnb3RvIGRvbmU7CisKKyAgcHJpbnRmKCIlcyIsIGZpbGVuYW1lKTsKKyAgaWYoIWN0cmwt
PmRldGFpbCkKKyAgICB7CisgICAgICBwcmludGYoIlxuIik7CisgICAgICBnb3RvIGRvbmU7Cisg
ICAgfQorICBlbHNlCisgICAgeworICAgICAgcHJpbnRmKCJcdCIpOworICAgIH0KKyAgCisgIGNo
YXIgYnVmZmVyWzUwXT17fTsKKyAgc3RydWN0IHRtIHRtMDsgIAorICBzdHJ1Y3QgdG0qIHB0bT0g
bnRmc191dGMybG9jYWwoaW5mby0+dGltZV9udGZzLCAmdG0wKTsKKyAgaWYoTlVMTCA9PSBwdG0p
IGVycngoMSwgIm50ZnNfdXRjMmxvY2FsIGZhaWxcbiIpOworICAgICAgICAgICAKKyAgcHJpbnRm
KCIlenVcdCIsIGluZm8tPmZpbGVzaXplX250ZnMpOworICBwcmludGYoIiVzXHQiLCAoaW5mby0+
ZGlyID8gImRpciIgOiAiZmlsZSIpKTsKKyAgc3RyZnRpbWUoYnVmZmVyLCA1MCwgIiVZLSVtLSVk
XHQlSDolTTolUyIsIHB0bSk7CisgIHByaW50ZigiJXMiLCBidWZmZXIpOworICAvL3ByaW50Zigi
JWQtJWQtJWRcdCIsIHB0bS0+dG1feWVhciwgcHRtLT50bV9tb24sIHB0bS0+dG1fbWRheSk7Cisg
IC8vcHJpbnRmKCIlZDolZDolZFx0IiwgcHRtLT50bV9ob3VyLCBwdG0tPnRtX21pbiwgcHRtLT50
bV9zZWMpOworICBwcmludGYoIlxuIik7CisKKyBkb25lOgorICByZXR1cm4gMDsgIC8vINfu1tW3
tbvYuPhpdGVyYXRlCit9CisKKworCitncnViX2Vycl90CitncnViX250ZnNfbHMgKGdydWJfZmls
ZV90IGZpbGUsIGNvbnN0IGNoYXIgKnBhdGgsCisJICAgICAgIGludCAoKmhvb2spIChjb25zdCBj
aGFyICpmaWxlbmFtZSwKKwkJCSAgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2luZm8gKmlu
Zm8sCisJCQkgICAgdm9pZCAqY2xvc3VyZSksCisJICAgICAgIHZvaWQgKmNsb3N1cmUpCit7Cisg
IHN0cnVjdCBncnViX250ZnNfZGF0YSAqZGF0YSA9IDA7CisgIHN0cnVjdCBncnViX2ZzaGVscF9u
b2RlICpmZGlybyA9IDA7CisgIHN0cnVjdCBncnViX250ZnNfZGlyX2Nsb3N1cmUgYyA9IHswfTsK
KworICBkYXRhID0gZ3J1Yl9udGZzX21vdW50IChmaWxlLT5icywgZmlsZS0+cGFydF9vZmZfc2Vj
dG9yKTsKKyAgaWYgKCFkYXRhKQorICAgIHsKKyAgICAgIERCRygibW91bnQgZmFpbGVkISIpOwor
ICAgICAgZ290byBmYWlsOworICAgIH0KKyAgZ3J1Yl9mc2hlbHBfZmluZF9maWxlIChwYXRoLCAm
ZGF0YS0+Y21mdCwgJmZkaXJvLCBncnViX250ZnNfaXRlcmF0ZV9kaXIsIDAsCisJCQkgMCwgR1JV
Ql9GU0hFTFBfRElSKTsKKworICAKKyAgaWYgKGdydWJfZXJybm8pCisgICAgZ290byBmYWlsOwor
CisgIGMuaG9vayA9IChob29rID8gaG9vayA6IGZpbmRfdGhlbl9sc19ob29rKTsKKyAgYy5jbG9z
dXJlID0gY2xvc3VyZTsKKyAgZ3J1Yl9udGZzX2l0ZXJhdGVfZGlyIChmZGlybywgaXRlcmF0ZSwg
JmMpOworCitmYWlsOgorICBpZiAoKGZkaXJvKSAmJiAoZmRpcm8gIT0gJmRhdGEtPmNtZnQpKQor
ICAgIHsKKyAgICAgIGZyZWVfZmlsZSAoZmRpcm8pOworICAgICAgZ3J1Yl9mcmVlIChmZGlybyk7
CisgICAgfQorICBpZiAoZGF0YSkKKyAgICB7CisgICAgICBmcmVlX2ZpbGUgKCZkYXRhLT5tbWZ0
KTsKKyAgICAgIGZyZWVfZmlsZSAoJmRhdGEtPmNtZnQpOworICAgICAgZ3J1Yl9mcmVlIChkYXRh
KTsKKyAgICB9CisKKworICByZXR1cm4gZ3J1Yl9lcnJubzsKK30KKworZ3J1Yl9lcnJfdAorZ3J1
Yl9udGZzX29wZW4gKGdydWJfZmlsZV90IGZpbGUsIGNvbnN0IGNoYXIgKm5hbWUpCit7CisgIHN0
cnVjdCBncnViX250ZnNfZGF0YSAqZGF0YSA9IDA7CisgIHN0cnVjdCBncnViX2ZzaGVscF9ub2Rl
ICptZnQgPSAwOworCisKKyAgZGF0YSA9IGdydWJfbnRmc19tb3VudCAoZmlsZS0+YnMsIGZpbGUt
PnBhcnRfb2ZmX3NlY3Rvcik7CisgIGlmICghZGF0YSkKKyAgICB7CisgICAgICBEQkcoIm1vdW50
IGZhaWxlZCEiKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGdydWJfZnNoZWxwX2ZpbmRf
ZmlsZSAobmFtZSwgJmRhdGEtPmNtZnQsICZtZnQsIGdydWJfbnRmc19pdGVyYXRlX2RpciwgMCwK
KwkJCSAwLCBHUlVCX0ZTSEVMUF9SRUcpOworCisgIGlmIChncnViX2Vycm5vKQorICAgIGdvdG8g
ZmFpbDsKKworICBpZiAobWZ0ICE9ICZkYXRhLT5jbWZ0KQorICAgIHsKKyAgICAgIGZyZWVfZmls
ZSAoJmRhdGEtPmNtZnQpOworICAgICAgZ3J1Yl9tZW1jcHkgKCZkYXRhLT5jbWZ0LCBtZnQsIHNp
emVvZiAoKm1mdCkpOworICAgICAgZ3J1Yl9mcmVlIChtZnQpOworICAgICAgaWYgKCFkYXRhLT5j
bWZ0Lmlub2RlX3JlYWQpCisJeworCSAgaWYgKGluaXRfZmlsZSAoJmRhdGEtPmNtZnQsIGRhdGEt
PmNtZnQuaW5vKSkKKwkgICAgZ290byBmYWlsOworCX0KKyAgICB9CisKKyAgZmlsZS0+c2l6ZSA9
IGRhdGEtPmNtZnQuc2l6ZTsKKyAgZmlsZS0+ZGF0YSA9IGRhdGE7CisgIGZpbGUtPm9mZnNldCA9
IDA7CisKKyAgcmV0dXJuIDA7CisKK2ZhaWw6CisgIGlmIChkYXRhKQorICAgIHsKKyAgICAgIGZy
ZWVfZmlsZSAoJmRhdGEtPm1tZnQpOworICAgICAgZnJlZV9maWxlICgmZGF0YS0+Y21mdCk7Cisg
ICAgICBncnViX2ZyZWUgKGRhdGEpOworICAgIH0KKworCisgIHJldHVybiBncnViX2Vycm5vOwor
fQorCitncnViX3NzaXplX3QKK2dydWJfbnRmc19yZWFkIChncnViX2ZpbGVfdCBmaWxlLCBncnVi
X29mZl90IG9mZnNldCwgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYpCit7CisgIHN0cnVjdCBn
cnViX250ZnNfZmlsZSAqbWZ0OworCisgIG1mdCA9ICYoKHN0cnVjdCBncnViX250ZnNfZGF0YSAq
KSBmaWxlLT5kYXRhKS0+Y21mdDsKKyAgaWYgKGZpbGUtPnJlYWRfaG9vaykKKyAgICBtZnQtPmF0
dHIuc2F2ZV9wb3MgPSAxOworICAKKyAgcmVhZF9hdHRyICgmbWZ0LT5hdHRyLCBidWYsIG9mZnNl
dCwgbGVuLCAxLAorCSAgICAgZmlsZS0+cmVhZF9ob29rLCBmaWxlLT5jbG9zdXJlKTsKKyAgCisg
IHJldHVybiAoZ3J1Yl9lcnJubykgPyAwIDogbGVuOworfQorCitncnViX2Vycl90CitncnViX250
ZnNfY2xvc2UgKGdydWJfZmlsZV90IGZpbGUpCit7CisgIHN0cnVjdCBncnViX250ZnNfZGF0YSAq
ZGF0YTsKKworICBkYXRhID0gZmlsZS0+ZGF0YTsKKworICBpZiAoZGF0YSkKKyAgICB7CisgICAg
ICBmcmVlX2ZpbGUgKCZkYXRhLT5tbWZ0KTsKKyAgICAgIGZyZWVfZmlsZSAoJmRhdGEtPmNtZnQp
OworICAgICAgZ3J1Yl9mcmVlIChkYXRhKTsKKyAgICB9CisKKworICByZXR1cm4gZ3J1Yl9lcnJu
bzsKK30KKworCisKZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9udGZzLmggeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4v
bnRmcy5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9udGZzLmgJMTk3MC0w
MS0wMSAwNzowMDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11
LXFlbXUteGVuL250ZnMuaAkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNDkzNjcwMSArMDgwMApAQCAt
MCwwICsxLDIyNyBAQAorLyogbnRmcy5oIC0gaGVhZGVyIGZvciB0aGUgTlRGUyBmaWxlc3lzdGVt
ICovCisvKgorICogIEdSVUIgIC0tICBHUmFuZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5
cmlnaHQgKEMpIDIwMDcsMjAwOSAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgor
ICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29y
IG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlv
biwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRp
b24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRo
ZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJB
TlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFC
SUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAg
R05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91
IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExp
Y2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUu
b3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX05URlNfSAorI2RlZmluZSBHUlVC
X05URlNfSAkxCisKKworI2luY2x1ZGUgImJsb2NrX2ludC5oIgorI2luY2x1ZGUgImZzLXR5cGVz
LmgiCisjaW5jbHVkZSAiZ3J1Yl9lcnIuaCIKKyNpbmNsdWRlICJmcy1jb21tLmgiCisKKyNkZWZp
bmUgRklMRV9NRlQgICAgICAwCisjZGVmaW5lIEZJTEVfTUZUTUlSUiAgMQorI2RlZmluZSBGSUxF
X0xPR0ZJTEUgIDIKKyNkZWZpbmUgRklMRV9WT0xVTUUgICAzCisjZGVmaW5lIEZJTEVfQVRUUkRF
RiAgNAorI2RlZmluZSBGSUxFX1JPT1QgICAgIDUKKyNkZWZpbmUgRklMRV9CSVRNQVAgICA2Cisj
ZGVmaW5lIEZJTEVfQk9PVCAgICAgNworI2RlZmluZSBGSUxFX0JBRENMVVMgIDgKKyNkZWZpbmUg
RklMRV9RVU9UQSAgICA5CisjZGVmaW5lIEZJTEVfVVBDQVNFICAxMAorCisjZGVmaW5lIEFUX1NU
QU5EQVJEX0lORk9STUFUSU9OCTB4MTAKKyNkZWZpbmUgQVRfQVRUUklCVVRFX0xJU1QJMHgyMAor
I2RlZmluZSBBVF9GSUxFTkFNRQkJMHgzMAorI2RlZmluZSBBVF9PQkpFQ1RfSUQJCTB4NDAKKyNk
ZWZpbmUgQVRfU0VDVVJJVFlfREVTQ1JJUFRPUgkweDUwCisjZGVmaW5lIEFUX1ZPTFVNRV9OQU1F
CQkweDYwCisjZGVmaW5lIEFUX1ZPTFVNRV9JTkZPUk1BVElPTgkweDcwCisjZGVmaW5lIEFUX0RB
VEEJCQkweDgwCisjZGVmaW5lIEFUX0lOREVYX1JPT1QJCTB4OTAKKyNkZWZpbmUgQVRfSU5ERVhf
QUxMT0NBVElPTgkweEEwCisjZGVmaW5lIEFUX0JJVE1BUAkJMHhCMAorI2RlZmluZSBBVF9TWU1M
SU5LCQkweEMwCisjZGVmaW5lIEFUX0VBX0lORk9STUFUSU9OCTB4RDAKKyNkZWZpbmUgQVRfRUEJ
CQkweEUwCisKKyNkZWZpbmUgQVRUUl9SRUFEX09OTFkJCTB4MQorI2RlZmluZSBBVFRSX0hJRERF
TgkJMHgyCisjZGVmaW5lIEFUVFJfU1lTVEVNCQkweDQKKyNkZWZpbmUgQVRUUl9BUkNISVZFCQkw
eDIwCisjZGVmaW5lIEFUVFJfREVWSUNFCQkweDQwCisjZGVmaW5lIEFUVFJfTk9STUFMCQkweDgw
CisjZGVmaW5lIEFUVFJfVEVNUE9SQVJZCQkweDEwMAorI2RlZmluZSBBVFRSX1NQQVJTRQkJMHgy
MDAKKyNkZWZpbmUgQVRUUl9SRVBBUlNFCQkweDQwMAorI2RlZmluZSBBVFRSX0NPTVBSRVNTRUQJ
CTB4ODAwCisjZGVmaW5lIEFUVFJfT0ZGTElORQkJMHgxMDAwCisjZGVmaW5lIEFUVFJfTk9UX0lO
REVYRUQJMHgyMDAwCisjZGVmaW5lIEFUVFJfRU5DUllQVEVECQkweDQwMDAKKyNkZWZpbmUgQVRU
Ul9ESVJFQ1RPUlkJCTB4MTAwMDAwMDAKKyNkZWZpbmUgQVRUUl9JTkRFWF9WSUVXCQkweDIwMDAw
MDAwCisKKyNkZWZpbmUgRkxBR19DT01QUkVTU0VECQkxCisjZGVmaW5lIEZMQUdfRU5DUllQVEVE
CQkweDQwMDAKKyNkZWZpbmUgRkxBR19TUEFSU0UJCTB4ODAwMAorCisKKyNkZWZpbmUgR1JVQl9E
SVNLX1NFQ1RPUl9CSVRTICAgOQorI2RlZmluZSBCTEtfU0hSCQlHUlVCX0RJU0tfU0VDVE9SX0JJ
VFMKKworI2RlZmluZSBNQVhfTUZUCQkoMTAyNCA+PiBCTEtfU0hSKQorI2RlZmluZSBNQVhfSURY
CQkoMTYzODQgPj4gQkxLX1NIUikKKworI2RlZmluZSBDT01fTEVOCQk0MDk2CisjZGVmaW5lIENP
TV9MT0dfTEVOCTEyCisjZGVmaW5lIENPTV9TRUMJCShDT01fTEVOID4+IEJMS19TSFIpCisKKyNk
ZWZpbmUgQUZfQUxTVAkJMQorI2RlZmluZSBBRl9NTUZUCQkyCisjZGVmaW5lIEFGX0dQT1MJCTQK
KworI2RlZmluZSBSRl9DT01QCQkxCisjZGVmaW5lIFJGX0NCTEsJCTIKKyNkZWZpbmUgUkZfQkxO
SwkJNAorCisjZGVmaW5lIHZhbHVlYXQoYnVmLG9mcyx0eXBlKQkqKCh0eXBlKikoKChjaGFyKili
dWYpK29mcykpCisKKyNkZWZpbmUgdTE2YXQoYnVmLG9mcykJZ3J1Yl9sZV90b19jcHUxNih2YWx1
ZWF0KGJ1ZixvZnMsZ3J1Yl91aW50MTZfdCkpCisjZGVmaW5lIHUzMmF0KGJ1ZixvZnMpCWdydWJf
bGVfdG9fY3B1MzIodmFsdWVhdChidWYsb2ZzLGdydWJfdWludDMyX3QpKQorI2RlZmluZSB1NjRh
dChidWYsb2ZzKQlncnViX2xlX3RvX2NwdTY0KHZhbHVlYXQoYnVmLG9mcyxncnViX3VpbnQ2NF90
KSkKKworI2RlZmluZSB2MTZhdChidWYsb2ZzKQl2YWx1ZWF0KGJ1ZixvZnMsZ3J1Yl91aW50MTZf
dCkKKyNkZWZpbmUgdjMyYXQoYnVmLG9mcykJdmFsdWVhdChidWYsb2ZzLGdydWJfdWludDMyX3Qp
CisjZGVmaW5lIHY2NGF0KGJ1ZixvZnMpCXZhbHVlYXQoYnVmLG9mcyxncnViX3VpbnQ2NF90KQor
CitzdHJ1Y3QgZ3J1Yl9udGZzX2JwYgoreworICBncnViX3VpbnQ4X3Qgam1wX2Jvb3RbM107Cisg
IGdydWJfdWludDhfdCBvZW1fbmFtZVs4XTsKKyAgZ3J1Yl91aW50MTZfdCBieXRlc19wZXJfc2Vj
dG9yOworICBncnViX3VpbnQ4X3Qgc2VjdG9yc19wZXJfY2x1c3RlcjsKKyAgZ3J1Yl91aW50OF90
IHJlc2VydmVkXzFbN107CisgIGdydWJfdWludDhfdCBtZWRpYTsKKyAgZ3J1Yl91aW50MTZfdCBy
ZXNlcnZlZF8yOworICBncnViX3VpbnQxNl90IHNlY3RvcnNfcGVyX3RyYWNrOworICBncnViX3Vp
bnQxNl90IG51bV9oZWFkczsKKyAgZ3J1Yl91aW50MzJfdCBudW1faGlkZGVuX3NlY3RvcnM7Cisg
IGdydWJfdWludDMyX3QgcmVzZXJ2ZWRfM1syXTsKKyAgZ3J1Yl91aW50NjRfdCBudW1fdG90YWxf
c2VjdG9yczsKKyAgZ3J1Yl91aW50NjRfdCBtZnRfbGNuOworICBncnViX3VpbnQ2NF90IG1mdF9t
aXJyX2xjbjsKKyAgZ3J1Yl9pbnQ4X3QgY2x1c3RlcnNfcGVyX21mdDsKKyAgZ3J1Yl9pbnQ4X3Qg
cmVzZXJ2ZWRfNFszXTsKKyAgZ3J1Yl9pbnQ4X3QgY2x1c3RlcnNfcGVyX2luZGV4OworICBncnVi
X2ludDhfdCByZXNlcnZlZF81WzNdOworICBncnViX3VpbnQ2NF90IG51bV9zZXJpYWw7CisgIGdy
dWJfdWludDMyX3QgY2hlY2tzdW07Cit9IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKworI2Rl
ZmluZSBncnViX250ZnNfZmlsZSBncnViX2ZzaGVscF9ub2RlCisKK3N0cnVjdCBncnViX250ZnNf
YXR0cgoreworICBpbnQgZmxhZ3M7CisgIGNoYXIgKmVtZnRfYnVmLCAqZWRhdF9idWY7CisgIGNo
YXIgKmF0dHJfY3VyLCAqYXR0cl9ueHQsICphdHRyX2VuZDsKKyAgZ3J1Yl91aW50MzJfdCBzYXZl
X3BvczsKKyAgY2hhciAqc2J1ZjsKKyAgc3RydWN0IGdydWJfbnRmc19maWxlICptZnQ7Cit9Owor
CitzdHJ1Y3QgZ3J1Yl9mc2hlbHBfbm9kZQoreworICBzdHJ1Y3QgZ3J1Yl9udGZzX2RhdGEgKmRh
dGE7CisgIGNoYXIgKmJ1ZjsKKyAgZ3J1Yl91aW50NjRfdCBzaXplOworICBncnViX3VpbnQzMl90
IGlubzsKKyAgaW50IGlub2RlX3JlYWQ7CisgIHN0cnVjdCBncnViX250ZnNfYXR0ciBhdHRyOwor
fTsKKworc3RydWN0IGdydWJfbnRmc19kYXRhCit7CisgIHN0cnVjdCBncnViX250ZnNfZmlsZSBj
bWZ0OworICBzdHJ1Y3QgZ3J1Yl9udGZzX2ZpbGUgbW1mdDsKKyAgQmxvY2tEcml2ZXJTdGF0ZSog
YnM7CisgIGdydWJfdWludDMyX3QgbWZ0X3NpemU7CisgIGdydWJfdWludDMyX3QgaWR4X3NpemU7
CisgIGdydWJfdWludDMyX3Qgc3BjOworICBncnViX3VpbnQzMl90IGJsb2Nrc2l6ZTsKKyAgZ3J1
Yl91aW50MzJfdCBtZnRfc3RhcnQ7CisgIGdydWJfdWludDY0X3QgdXVpZDsKK307CisKK3N0cnVj
dCBncnViX250ZnNfY29tcAoreworICBCbG9ja0RyaXZlclN0YXRlKiBiczsKKyAgaW50IGNvbXBf
aGVhZCwgY29tcF90YWlsOworICBncnViX3VpbnQzMl90IGNvbXBfdGFibGVbMTZdWzJdOworICBn
cnViX3VpbnQzMl90IGNidWZfb2ZzLCBjYnVmX3Zjbiwgc3BjOworICBjaGFyICpjYnVmOworfTsK
Kworc3RydWN0IGdydWJfbnRmc19ybHN0Cit7CisgIGludCBmbGFnczsKKyAgZ3J1Yl9kaXNrX2Fk
ZHJfdCB0YXJnZXRfdmNuLCBjdXJyX3ZjbiwgbmV4dF92Y24sIGN1cnJfbGNuOworICBjaGFyICpj
dXJfcnVuOworICBzdHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0dHI7CisgIHN0cnVjdCBncnViX250
ZnNfY29tcCBjb21wOworfTsKKworCisKKworCisKKwordHlwZWRlZiBncnViX2Vycl90ICgqbnRm
c2NvbXBfZnVuY190KSAoc3RydWN0IGdydWJfbnRmc19hdHRyICogYXQsIGNoYXIgKmRlc3QsCisJ
CQkJICAgICAgIGdydWJfdWludDMyX3Qgb2ZzLCBncnViX3VpbnQzMl90IGxlbiwKKwkJCQkgICAg
ICAgc3RydWN0IGdydWJfbnRmc19ybHN0ICogY3R4LAorCQkJCSAgICAgICBncnViX3VpbnQzMl90
IHZjbik7CisKK2V4dGVybiBudGZzY29tcF9mdW5jX3QgZ3J1Yl9udGZzY29tcF9mdW5jOworCitn
cnViX2Vycl90IGdydWJfbnRmc19yZWFkX3J1bl9saXN0IChzdHJ1Y3QgZ3J1Yl9udGZzX3Jsc3Qg
KmN0eCk7CisKKworCisKK2ludCBiZHJ2X3ByZWFkX2Zyb21fbGNuX29mX3ZvbHVtKEJsb2NrRHJp
dmVyU3RhdGUgKmJzLCBpbnQ2NF90IG9mZnNldCwKKwkJCQkgdm9pZCAqYnVmMSwgaW50IGNvdW50
MSk7CisKK3N0cnVjdCBncnViX250ZnNfZGF0YSAqCitncnViX250ZnNfbW91bnQgKEJsb2NrRHJp
dmVyU3RhdGUqIGJzLCBncnViX3VpbnQzMl90IHBhcnRfb2ZmX3NlY3Rvcik7CisKKworZ3J1Yl9l
cnJfdAorZ3J1Yl9udGZzX2xzIChncnViX2ZpbGVfdCBmaWxlLCBjb25zdCBjaGFyICpwYXRoLAor
CSAgICAgICBpbnQgKCpob29rKSAoY29uc3QgY2hhciAqZmlsZW5hbWUsCisJCQkgICAgY29uc3Qg
c3RydWN0IGdydWJfZGlyaG9va19pbmZvICppbmZvLAorCQkJICAgIHZvaWQgKmNsb3N1cmUpLAor
CSAgICAgIHZvaWQgKmNsb3N1cmUpOworCitncnViX2Vycl90CitncnViX250ZnNfb3BlbiAoZ3J1
Yl9maWxlX3QgZmlsZSwgY29uc3QgY2hhciAqbmFtZSk7CisKKworZ3J1Yl9zc2l6ZV90CitncnVi
X250ZnNfcmVhZCAoZ3J1Yl9maWxlX3QgZmlsZSwgZ3J1Yl9vZmZfdCBvZmZzZXQsCisJCWdydWJf
c2l6ZV90IGxlbiwgY2hhciAqYnVmKTsKKworCitncnViX2Vycl90CitncnViX250ZnNfY2xvc2Ug
KGdydWJfZmlsZV90IGZpbGUpOworCisKKyNlbmRpZiAvKiAhIEdSVUJfTlRGU19IICovCmRpZmYg
LS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4v
cGFydGl0aW9uLmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vcGFydGl0aW9uLmMK
LS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL3BhcnRpdGlvbi5jCTE5NzAtMDEt
MDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1x
ZW11LXhlbi9wYXJ0aXRpb24uYwkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNDkzNjcwMSArMDgwMApA
QCAtMCwwICsxLDI0MCBAQAorI2luY2x1ZGUgInBhcnRpdGlvbi5oIgorI2luY2x1ZGUgPGVyci5o
PgorCitzdGF0aWMgaW50IGlzX2Z1bGxfemVybyh2b2lkICpwLCB1aW50IGJ5dGVzKQoreworICBp
bnQgaSA9IDA7CisgIHVpbnQ4X3QgKnAxID0gKHVpbnQ4X3QqKXA7CisgIHdoaWxlKGkgPCBieXRl
cykKKyAgeworICAgIGlmKCpwMSAhPSAwKQorICAgIHsKKyAgICAgIHJldHVybiAwOworICAgIH1l
bHNlCisgICAgeworICAgICAgaSsrOworICAgICAgcDErKzsKKyAgICB9CisgIH0KKyAgLy9wcmlu
dGYoIi4uLi4uLi4uLi5mdWxsIHplcm8uLi4uLi5cbiIpOworICByZXR1cm4gMTsKK30KKworc3Rh
dGljIHZvaWQgcmVhZF9wYXJ0aXRpb24odWludDhfdCAqcCwgc3RydWN0IHBhcnRpdGlvbl9yZWNv
cmQgKnIpCit7CisgICAgci0+Ym9vdGFibGUgPSBwWzBdOworICAgIHItPnN0YXJ0X2hlYWQgPSBw
WzFdOworICAgIHItPnN0YXJ0X2N5bGluZGVyID0gcFszXSB8ICgocFsyXSA8PCAyKSAmIDB4MDMw
MCk7CisgICAgci0+c3RhcnRfc2VjdG9yID0gcFsyXSAmIDB4M2Y7CisgICAgci0+c3lzdGVtID0g
cFs0XTsKKyAgICByLT5lbmRfaGVhZCA9IHBbNV07CisgICAgci0+ZW5kX2N5bGluZGVyID0gcFs3
XSB8ICgocFs2XSA8PCAyKSAmIDB4MzAwKTsKKyAgICByLT5lbmRfc2VjdG9yID0gcFs2XSAmIDB4
M2Y7CisgICAgci0+c3RhcnRfc2VjdG9yX2FicyA9IHBbOF0gfCBwWzldIDw8IDggfCBwWzEwXSA8
PCAxNiB8IHBbMTFdIDw8IDI0OworICAgIHItPm5iX3NlY3RvcnNfYWJzID0gcFsxMl0gfCBwWzEz
XSA8PCA4IHwgcFsxNF0gPDwgMTYgfCBwWzE1XSA8PCAyNDsKK30KKworCisKK2NoYXIqIGp1ZGdl
X2ZzKGxzX3BhcnRpdGlvbl90KiBwdCkKK3sKKyAgaWYocHQtPnBhcnQuc3lzdGVtPT0weDBiIHx8
IHB0LT5wYXJ0LnN5c3RlbT09MHgwMSkKKyAgICB7CisgICAgICBwdC0+ZnNfdHlwZSA9IEZTX0ZB
VDsKKyAgICAgIHJldHVybiAoY2hhciopIkZBVDMyIjsKKyAgICB9CisgIGVsc2UgaWYocHQtPnBh
cnQuc3lzdGVtPT0weDA3KQorICAgIHsKKyAgICAgIHB0LT5mc190eXBlID0gRlNfTlRGUzsKKyAg
ICAgIHJldHVybiAoY2hhciopIk5URlMiOworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIHB0
LT5mc190eXBlID0gRlNfVU5LTk9XTjsKKyAgICAgIHJldHVybiAgKGNoYXIqKSJVTktOT1dOIjsK
KyAgICB9Cit9CisKK2ludCBlbnVtX3BhcnRpdGlvbihCbG9ja0RyaXZlclN0YXRlICpicywgbHNf
cGFydGl0aW9uX3QqIHBhcnRzKQoreworICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIG1icls0
XTsKKyAgICB1aW50OF90IGRhdGFbNTEyXTsKKyAgICBpbnQgaTsKKyAgICBpbnQgZXh0X3BhcnRu
dW0gPSA0OworICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIGV4dFsxMF07CisgICAgdWludDhf
dCBkYXRhMVs1MTJdOworICAgIGludCBqID0gMDsKKworICAgIGlmIChiZHJ2X3JlYWQoYnMsIDAs
IGRhdGEsIDEpKQorICAgICAgICBlcnJ4KEVJTlZBTCwgImVycm9yIHdoaWxlIHJlYWRpbmciKTsK
KworICAgIGlmIChkYXRhWzUxMF0gIT0gMHg1NSB8fCBkYXRhWzUxMV0gIT0gMHhhYSkgCisgICAg
eworICAgICAgICBlcnJubyA9IC1FSU5WQUw7CisgICAgICAgIHJldHVybiAtMTsKKyAgICB9Cisg
ICAgCisgICAgaW50IGsgPSAwOworICAgIGZvciAoaSA9IDA7IGkgPCA0OyBpKyspIAorICAgIHsK
KyAgICAgICAgcmVhZF9wYXJ0aXRpb24oJmRhdGFbNDQ2ICsgMTYgKiBpXSwgJm1icltpXSk7CisK
KyAgICAgICAgaWYgKCFtYnJbaV0ubmJfc2VjdG9yc19hYnMpCisgICAgICAgICAgICBjb250aW51
ZTsKKwkvL3ByaW50ZigidGhlICVkIHBhcnRpdGlvbjpib290PTB4JXgsIHN0YXJ0PSV1LCBzeXN0
ZW09MHgleCwgdG90YWw9JXVcdCIsIAorCS8vICAgICAgIGkrMSwgbWJyW2ldLmJvb3RhYmxlLCBt
YnJbaV0uc3RhcnRfc2VjdG9yX2FicywgbWJyW2ldLnN5c3RlbSwgbWJyW2ldLm5iX3NlY3RvcnNf
YWJzKTsKKwlwYXJ0c1trXS5wYXJ0ID0gbWJyW2ldOworCXBhcnRzW2tdLmlkID0gaSsxOworCWsr
KzsKKyAgICAgICAgaWYgKG1icltpXS5zeXN0ZW0gPT0gMHhGIHx8IG1icltpXS5zeXN0ZW0gPT0g
MHg1KSAKKwl7CisJICAgIC8vcHJpbnRmKCJpcyBhIGV4dGVuZCBwYXJ0aXRpb24uLi4uLi5cbiIp
OworCSAgICBpZiAoYmRydl9yZWFkKGJzLCBtYnJbaV0uc3RhcnRfc2VjdG9yX2FicywgZGF0YTEs
IDEpKQorICAgICAgICAgICAgICAgIGVycngoRUlOVkFMLCAiZXJyb3Igd2hpbGUgcmVhZGluZyIp
OworCSAgICAvLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8KKwkgICAgLy9kdW1wIGVicgorCSAg
ICAvLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8KKwkgICAgdWludDMyX3QgZXh0X3N0YXJ0X3Nl
Y3RvciA9IG1icltpXS5zdGFydF9zZWN0b3JfYWJzOworCSAgICBzdHJ1Y3QgcGFydGl0aW9uX3Jl
Y29yZCBleHRfbmV4dCA9IHswfTsKKyAgICAgICAgICAgIHdoaWxlICgxKSAKKwkgICAgeworCSAg
ICAgICAgcmVhZF9wYXJ0aXRpb24oJmRhdGExWzQ0NiArIDE2ICogMF0sICZleHRbal0pOworCQkv
L3ByaW50ZigidGhlICVkdGggcGFydGl0aW9uOmJvb3Q9MHgleCwgc3RhcnQ9JXUsIHN5c3RlbT0w
eCV4LCB0b3RhbD0ldVx0IiwKKwkJLy8gICAgICAgZXh0X3BhcnRudW0raisxLCBleHRbal0uYm9v
dGFibGUsIGV4dFtqXS5zdGFydF9zZWN0b3JfYWJzK2V4dF9zdGFydF9zZWN0b3IsIAorCQkvLyAg
ICAgICBleHRbal0uc3lzdGVtLCBleHRbal0ubmJfc2VjdG9yc19hYnMpOworCQkKKwkJCisJCWlm
KDAgIT0gZXh0W2pdLm5iX3NlY3RvcnNfYWJzKSAKKwkJeworCQkgIGV4dFtqXS5zdGFydF9zZWN0
b3JfYWJzICs9IGV4dF9zdGFydF9zZWN0b3I7CisJCSAgaWYoaiA+IDApCisJCSAgICBleHRbal0u
c3RhcnRfc2VjdG9yX2FicyArPSBleHRfbmV4dC5zdGFydF9zZWN0b3JfYWJzOworCQkgIHBhcnRz
W2tdLnBhcnQgPSBleHRbal07CisJCSAgcGFydHNba10uaWQgPSBleHRfcGFydG51bSArIGogKzE7
CisJCSAgaysrOworCQkgIGorKzsKKwkgICAgICAgIH0KKwkJZWxzZQorCQl7CisJCSAgcHJpbnRm
KCJuYl9zZWN0b3JzX2Ficz0wPj4+Pj4+Pj4+Pj4+XG4iKTsKKwkJfQorCQkvLy8vLy8vLy8vLy8v
Ly8vLy8vLy8vCisJCWlmKGV4dFtqLTFdLnN5c3RlbSA9PSAweEYgKQorCQkgIHsKKwkJICAgIHBy
aW50ZigiLi4uLi4uLi4uLi4uLi4uYWdhaW4gZXh0ZW5kLi4uLi4uLi4uLi4uLlxuIik7CisJCSAg
ICBleHRfc3RhcnRfc2VjdG9yID0gZXh0W2otMV0uc3RhcnRfc2VjdG9yX2FicyArIGV4dF9zdGFy
dF9zZWN0b3I7CisJCSAgICBpZiAoYmRydl9yZWFkKGJzLCBleHRfc3RhcnRfc2VjdG9yLCBkYXRh
MSwgMSkpCisJCSAgICAgIGVycngoRUlOVkFMLCAiZXJyb3Igd2hpbGUgcmVhZGluZyIpOworCQkg
ICAgY29udGludWU7CisJCSAgfQorCQllbHNlCisJCSAgeworCQkgICAgOy8vcHJpbnRmKCJpcyBh
IGxvZ2ljYWwgcGFydFxuIik7CisJCSAgfQorCQkvLy8vLy8vLy8vLy8vLy8vLy8vLy8KKwkJcmVh
ZF9wYXJ0aXRpb24oJmRhdGExWzQ0NiArIDE2ICogMV0sICZleHRfbmV4dCk7CisJCWlmIChpc19m
dWxsX3plcm8oJmV4dF9uZXh0LCBzaXplb2YoZXh0X25leHQpKSkKKyAgICAgICAgICAgICAgICAg
ICAgYnJlYWs7CisKKwkJaWYgKGJkcnZfcmVhZChicywgZXh0X3N0YXJ0X3NlY3RvciArIGV4dF9u
ZXh0LnN0YXJ0X3NlY3Rvcl9hYnMgLCBkYXRhMSwgMSkpCisJCSAgZXJyeChFSU5WQUwsICJlcnJv
ciB3aGlsZSByZWFkaW5nIik7CisJICAgIH0KKwl9ZWxzZQorCXsKKwkgIDsvL3ByaW50ZigiaXMg
YSBtYWluIHBhcnRpdGlvbi4uLi4uLlxuIik7CisJfQorICAgIH0KKyAgICAKKyAgICByZXR1cm4g
azsKK30KKworCisKKworCitpbnQgZmluZF9wYXJ0aXRpb24oQmxvY2tEcml2ZXJTdGF0ZSAqYnMs
IGludCBwYXJ0aXRpb24sCisgICAgICAgICAgICAgICAgICAgICAgICAgIG9mZl90ICpvZmZzZXQs
IG9mZl90ICpzaXplKQoreworICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIG1icls0XTsKKyAg
ICB1aW50OF90IGRhdGFbNTEyXTsKKyAgICBpbnQgaTsKKyAgICBpbnQgZXh0X3BhcnRudW0gPSA0
OworCisKKyAgICBpZiAoYmRydl9yZWFkKGJzLCAwLCBkYXRhLCAxKSkKKyAgICAgICAgZXJyeChF
SU5WQUwsICJlcnJvciB3aGlsZSByZWFkaW5nIik7CisKKyAgICBpZiAoZGF0YVs1MTBdICE9IDB4
NTUgfHwgZGF0YVs1MTFdICE9IDB4YWEpIAorICAgIHsKKyAgICAgICAgZXJybm8gPSAtRUlOVkFM
OworICAgICAgICByZXR1cm4gLTE7CisgICAgfQorICAgIAorICAgIGludCBrID0gMDsKKyAgICBm
b3IgKGkgPSAwOyBpIDwgNDsgaSsrKSAKKyAgICB7CisgICAgICAgIHJlYWRfcGFydGl0aW9uKCZk
YXRhWzQ0NiArIDE2ICogaV0sICZtYnJbaV0pOworCisgICAgICAgIGlmICghbWJyW2ldLm5iX3Nl
Y3RvcnNfYWJzKQorICAgICAgICAgICAgY29udGludWU7CisJLy9wcmludGYoInRoZSAlZCBwYXJ0
aXRpb246IiwgaSsxKTsKKwkKKyAgICAgICAgaWYgKG1icltpXS5zeXN0ZW0gPT0gMHhGIHx8IG1i
cltpXS5zeXN0ZW0gPT0gMHg1KSAKKwl7CisJICAvL3ByaW50ZigiaXMgYSBleHRlbmQgcGFydGl0
aW9uLi4uLi4uXG4iKTsKKyAgICAgICAgICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIGV4dFsx
MF07CisgICAgICAgICAgICB1aW50OF90IGRhdGExWzUxMl07CisgICAgICAgICAgICBpbnQgaiA9
IDA7CisKKyAgICAgICAgICAgIGlmIChiZHJ2X3JlYWQoYnMsIG1icltpXS5zdGFydF9zZWN0b3Jf
YWJzLCBkYXRhMSwgMSkpCisgICAgICAgICAgICAgICAgZXJyeChFSU5WQUwsICJlcnJvciB3aGls
ZSByZWFkaW5nIik7CisJICAgIAorCSAgICB1aW50MzJfdCBleHRfc3RhcnRfc2VjdG9yID0gbWJy
W2ldLnN0YXJ0X3NlY3Rvcl9hYnM7CisJICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIGV4dF9u
ZXh0ID0gezB9OworICAgICAgICAgICAgd2hpbGUgKDEpIAorCSAgICB7CisJICAgICAgICByZWFk
X3BhcnRpdGlvbigmZGF0YTFbNDQ2ICsgMTYgKiAwXSwgJmV4dFtqXSk7CisJCXByaW50Zigic3Rh
cnQ9JXUsIHRvdGFsPSV1LCBzeXN0ZW09MHgleFx0IiwKKwkJICAgICAgIGV4dFtqXS5zdGFydF9z
ZWN0b3JfYWJzLCBleHRbal0ubmJfc2VjdG9yc19hYnMsIGV4dFtqXS5zeXN0ZW0pOworCQlwcmlu
dGYoInRoZSAlZHRoIHBhcnRpdGlvbiBpcyBhIGxvZ2ljYWwgcGFydFxuIiwgZXh0X3BhcnRudW0g
KyBqICsgMSk7CisJCQorCQlpZigwICE9IGV4dFtqXS5uYl9zZWN0b3JzX2FicykgCisJCXsKKwkJ
ICBpZiAoKGV4dF9wYXJ0bnVtICsgaiArIDEpID09IHBhcnRpdGlvbikgCisJCSAgeworCQkgICAg
ZXh0W2pdLnN0YXJ0X3NlY3Rvcl9hYnMgKz0gIGV4dF9zdGFydF9zZWN0b3I7CisJCSAgICBpZihq
ID4gMCkKKwkJICAgICAgZXh0W2pdLnN0YXJ0X3NlY3Rvcl9hYnMgKz0gZXh0X25leHQuc3RhcnRf
c2VjdG9yX2FiczsKKwkJICAgICpvZmZzZXQgPSAodWludDY0X3QpZXh0W2pdLnN0YXJ0X3NlY3Rv
cl9hYnMgPDwgOTsKKwkJICAgICpzaXplID0gKHVpbnQ2NF90KWV4dFtqXS5uYl9zZWN0b3JzX2Fi
cyA8PCA5OworCQkgICAgcmV0dXJuIDA7CisJCSAgfQorCQkgIGorKzsKKwkgICAgICAgIH0KKwkJ
CisJCXJlYWRfcGFydGl0aW9uKCZkYXRhMVs0NDYgKyAxNiAqIDFdLCAmZXh0X25leHQpOworCQlp
ZiAoaXNfZnVsbF96ZXJvKCZleHRfbmV4dCwgc2l6ZW9mKGV4dF9uZXh0KSkpCisgICAgICAgICAg
ICAgICAgICAgIGJyZWFrOworCQkvL2V4dF9zdGFydF9zZWN0b3IgKz0gZXh0X25leHQuc3RhcnRf
c2VjdG9yX2FiczsKKwkJaWYgKGJkcnZfcmVhZChicywgZXh0X3N0YXJ0X3NlY3RvciArIGV4dF9u
ZXh0LnN0YXJ0X3NlY3Rvcl9hYnMsIGRhdGExLCAxKSkKKwkJICAgIGVycngoRUlOVkFMLCAiZXJy
b3Igd2hpbGUgcmVhZGluZyIpOworCSAgICB9CisgICAgICAgICAgICAKKyAgICAgICAgfSAKKwll
bHNlIAorCXsKKwkgIC8vcHJpbnRmKCJpcyBhIG1haW4gcGFydGl0aW9uLi4uLi4uXG4iKTsKKwkg
ICAgaWYgKChpICsgMSkgPT0gcGFydGl0aW9uKSAKKwkgICAgeworCSAgICAgICpvZmZzZXQgPSAo
dWludDY0X3QpbWJyW2ldLnN0YXJ0X3NlY3Rvcl9hYnMgPDwgOTsKKwkgICAgICAqc2l6ZSA9ICh1
aW50NjRfdCltYnJbaV0ubmJfc2VjdG9yc19hYnMgPDwgOTsKKwkgICAgICByZXR1cm4gMDsKKwkg
ICAgfQorCX0KKyAgICB9CisKKyAgICBlcnJubyA9IC1FTk9FTlQ7CisgICAgcmV0dXJuIC0xOwor
fQorCisvLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8v
Ly8vCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUt
cWVtdS14ZW4vcGFydGl0aW9uLmggeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vcGFy
dGl0aW9uLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL3BhcnRpdGlvbi5o
CTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9wYXJ0aXRpb24uaAkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNTk0MDIy
NSArMDgwMApAQCAtMCwwICsxLDQ2IEBACisjaWZuZGVmIF9QQVJUSVRJT05fSAorI2RlZmluZSBf
UEFSVElUSU9OX0gKKworI2luY2x1ZGUgPHN0ZGludC5oPgorCit0eXBlZGVmIHN0cnVjdCBwYXJ0
aXRpb25fcmVjb3JkCit7CisgICAgdWludDhfdCBib290YWJsZTsKKyAgICB1aW50OF90IHN0YXJ0
X2hlYWQ7CisgICAgdWludDMyX3Qgc3RhcnRfY3lsaW5kZXI7CisgICAgdWludDhfdCBzdGFydF9z
ZWN0b3I7CisgICAgdWludDhfdCBzeXN0ZW07CisgICAgdWludDhfdCBlbmRfaGVhZDsKKyAgICB1
aW50OF90IGVuZF9jeWxpbmRlcjsKKyAgICB1aW50OF90IGVuZF9zZWN0b3I7CisgICAgdWludDMy
X3Qgc3RhcnRfc2VjdG9yX2FiczsKKyAgICB1aW50MzJfdCBuYl9zZWN0b3JzX2FiczsKK30gX19h
dHRyaWJ1dGVfXyAoKHBhY2tlZCkpIHBhcnRfcmVjb3JkX3Q7CisKKworCit0eXBlZGVmIGVudW0K
KyAgeworICAgIEZTX1VOS05PV04gPSAwLAorICAgIEZTX0ZBVCwKKyAgICBGU19OVEZTCisgIH1G
U19UWVBFOworCit0eXBlZGVmIHN0cnVjdCBsc19wYXJ0aXRpb24KK3sKKyAgcGFydF9yZWNvcmRf
dCBwYXJ0OworICBpbnQgaWQ7CisgIEZTX1RZUEUgZnNfdHlwZTsKK31sc19wYXJ0aXRpb25fdDsK
KworCitjaGFyKiBqdWRnZV9mcyhsc19wYXJ0aXRpb25fdCogcHQpOworCisjaW5jbHVkZSAiYmxv
Y2tfaW50LmgiCitpbnQgZW51bV9wYXJ0aXRpb24oQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIGxzX3Bh
cnRpdGlvbl90KiBwYXJ0cyk7CisKK2ludCBmaW5kX3BhcnRpdGlvbihCbG9ja0RyaXZlclN0YXRl
ICpicywgaW50IHBhcnRpdGlvbiwKKwkJICAgb2ZmX3QgKm9mZnNldCwgb2ZmX3QgKnNpemUpOwor
CisKKyNlbmRpZgpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xz
L2lvZW11LXFlbXUteGVuL3FlbXUtaW1nLmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vcWVtdS1pbWcuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vcWVtdS1p
bWcuYwkyMDExLTAyLTEyIDAxOjU0OjUxLjAwMDAwMDAwMCArMDgwMAorKysgeGVuLTQuMS4yLWIv
dG9vbHMvaW9lbXUtcWVtdS14ZW4vcWVtdS1pbWcuYwkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNjkz
MjYyMiArMDgwMApAQCAtMjAsMjMgKzIwLDM1IEBACiAgKiBMSUFCSUxJVFksIFdIRVRIRVIgSU4g
QU4gQUNUSU9OIE9GIENPTlRSQUNULCBUT1JUIE9SIE9USEVSV0lTRSwgQVJJU0lORyBGUk9NLAog
ICogT1VUIE9GIE9SIElOIENPTk5FQ1RJT04gV0lUSCBUSEUgU09GVFdBUkUgT1IgVEhFIFVTRSBP
UiBPVEhFUiBERUFMSU5HUyBJTgogICogVEhFIFNPRlRXQVJFLgogICovCiAjaW5jbHVkZSAicWVt
dS1jb21tb24uaCIKICNpbmNsdWRlICJvc2RlcC5oIgogI2luY2x1ZGUgImJsb2NrX2ludC5oIgog
I2luY2x1ZGUgPGFzc2VydC5oPgorI2luY2x1ZGUgPGVyci5oPgorCisKKyNpbmNsdWRlICJwYXJ0
aXRpb24uaCIKKyNpbmNsdWRlICJmcy1jb21tLmgiCisjaW5jbHVkZSAiZmF0LmgiCisjaW5jbHVk
ZSAibnRmcy5oIgorI2luY2x1ZGUgIm1pc2MuaCIKKworCisKIAogI2lmZGVmIF9XSU4zMgogI2Rl
ZmluZSBXSU4zMl9MRUFOX0FORF9NRUFOCiAjaW5jbHVkZSA8d2luZG93cy5oPgogI2VuZGlmCiAK
IC8qIERlZmF1bHQgdG8gY2FjaGU9d3JpdGViYWNrIGFzIGRhdGEgaW50ZWdyaXR5IGlzIG5vdCBp
bXBvcnRhbnQgZm9yIHFlbXUtdGNnLiAqLworI2RlZmluZSBNQVhfUEFSVElUSU9OUyAgICAyMAog
I2RlZmluZSBCUkRWX09fRkxBR1MgQkRSVl9PX0NBQ0hFX1dCCiAKIHN0YXRpYyB2b2lkIFFFTVVf
Tk9SRVRVUk4gZXJyb3IoY29uc3QgY2hhciAqZm10LCAuLi4pCiB7CiAgICAgdmFfbGlzdCBhcDsK
ICAgICB2YV9zdGFydChhcCwgZm10KTsKICAgICBmcHJpbnRmKHN0ZGVyciwgInFlbXUtaW1nOiAi
KTsKICAgICB2ZnByaW50ZihzdGRlcnIsIGZtdCwgYXApOwpAQCAtNTMsMTYgKzY1LDE4IEBAIHN0
YXRpYyB2b2lkIGZvcm1hdF9wcmludCh2b2lkICpvcGFxdWUsIGMKIC8qIFBsZWFzZSBrZWVwIGlu
IHN5bmNoIHdpdGggcWVtdS1pbWcudGV4aSAqLwogc3RhdGljIHZvaWQgaGVscCh2b2lkKQogewog
ICAgIHByaW50ZigicWVtdS1pbWcgdmVyc2lvbiAiIFFFTVVfVkVSU0lPTiAiLCBDb3B5cmlnaHQg
KGMpIDIwMDQtMjAwOCBGYWJyaWNlIEJlbGxhcmRcbiIKICAgICAgICAgICAgInVzYWdlOiBxZW11
LWltZyBjb21tYW5kIFtjb21tYW5kIG9wdGlvbnNdXG4iCiAgICAgICAgICAgICJRRU1VIGRpc2sg
aW1hZ2UgdXRpbGl0eVxuIgogICAgICAgICAgICAiXG4iCiAgICAgICAgICAgICJDb21tYW5kIHN5
bnRheDpcbiIKKyAgICAgICAgICAgIiAgbHMgWy12XSBbWy1sXSAtZCBkaXJlY3RvcnldIGltZ2Zp
bGVcbiIKKyAgICAgICAgICAgIiAgY2F0IFstdl0gLWYgZmlsZSBpbWdmaWxlXG4iCiAgICAgICAg
ICAgICIgIGNyZWF0ZSBbLWVdIFstNl0gWy1iIGJhc2VfaW1hZ2VdIFstZiBmbXRdIGZpbGVuYW1l
IFtzaXplXVxuIgogICAgICAgICAgICAiICBjb21taXQgWy1mIGZtdF0gZmlsZW5hbWVcbiIKICAg
ICAgICAgICAgIiAgY29udmVydCBbLWNdIFstZV0gWy02XSBbLWYgZm10XSBbLU8gb3V0cHV0X2Zt
dF0gWy1CIG91dHB1dF9iYXNlX2ltYWdlXSBmaWxlbmFtZSBbZmlsZW5hbWUyIFsuLi5dXSBvdXRw
dXRfZmlsZW5hbWVcbiIKICAgICAgICAgICAgIiAgaW5mbyBbLWYgZm10XSBmaWxlbmFtZVxuIgog
ICAgICAgICAgICAiICBzbmFwc2hvdCBbLWwgfCAtYSBzbmFwc2hvdCB8IC1jIHNuYXBzaG90IHwg
LWQgc25hcHNob3RdIGZpbGVuYW1lXG4iCiAgICAgICAgICAgICJcbiIKICAgICAgICAgICAgIkNv
bW1hbmQgcGFyYW1ldGVyczpcbiIKICAgICAgICAgICAgIiAgJ2ZpbGVuYW1lJyBpcyBhIGRpc2sg
aW1hZ2UgZmlsZW5hbWVcbiIKQEAgLTIwOSwxNiArMjIzLDM0MyBAQCBzdGF0aWMgQmxvY2tEcml2
ZXJTdGF0ZSAqYmRydl9uZXdfb3BlbihjCiAgICAgICAgIGlmIChyZWFkX3Bhc3N3b3JkKHBhc3N3
b3JkLCBzaXplb2YocGFzc3dvcmQpKSA8IDApCiAgICAgICAgICAgICBlcnJvcigiTm8gcGFzc3dv
cmQgZ2l2ZW4iKTsKICAgICAgICAgaWYgKGJkcnZfc2V0X2tleShicywgcGFzc3dvcmQpIDwgMCkK
ICAgICAgICAgICAgIGVycm9yKCJpbnZhbGlkIHBhc3N3b3JkIik7CiAgICAgfQogICAgIHJldHVy
biBiczsKIH0KIAorc3RhdGljIHZvaWQgZ2V0X3BhcnRpdGlvbl9wYXRoKGNvbnN0IGNoYXIgKmRp
ciwgaW50ICp3aGljaF9wYXJ0LCBjaGFyICoqcGF0aCkKK3sKKyAgICBzdGF0aWMgY2hhciBmdWxs
X3BhdGhbNTEyXTsKKyAgICBjaGFyIHBhcnRbNV09e307CisKKyAgICBzdHJuY3B5KGZ1bGxfcGF0
aCwgZGlyLCA1MTIpOworICAgIGZ1bGxfcGF0aFs1MTFdID0gJ1wwJzsKKworICAgIC8vz97WxtLU
L7+qzbcgveHOsgorICAgIGNoYXIgKnAxID0gZnVsbF9wYXRoICsgMTsKKyAgICBjaGFyICpwMiA9
IHN0cmNocihmdWxsX3BhdGggKyAxLCAnLycpOworICAgIGlmKCFwMikKKyAgICB7CisgICAgICAg
IGVycngoMSwgImNoZWNrIHRoZSBmaWxlIHBhdGghXG4iKTsKKyAgICB9CisKKyAgICAqcGF0aCA9
IHAyOworICAgIHN0cm5jcHkocGFydCwgcDEsIHAyLXAxKTsKKyAgICAqd2hpY2hfcGFydCA9IGF0
b2kocGFydCk7Cit9CisKK3R5cGVkZWYgc3RydWN0IGdydWJfZnMKK3sKKyAgICBncnViX29wZW4g
b3BlbjsKKyAgICBncnViX2xzIGxzOworICAgIGdydWJfcmVhZCByZWFkOworICAgIGdydWJfY2xv
c2UgY2xvc2U7Cit9Z3J1Yl9mc190OworCitzdGF0aWMgZ3J1Yl9mc190IGdydWJfZnNfcGx1Z1sx
MF0gPSB7fTsKKworc3RhdGljIHZvaWQgZ3J1Yl9mc19wbHVnaW4odm9pZCkKK3sKKyAgZ3J1Yl9m
c19wbHVnW0ZTX0ZBVF0ub3BlbiA9IGdydWJfZmF0X29wZW47CisgIGdydWJfZnNfcGx1Z1tGU19G
QVRdLnJlYWQgPSBncnViX2ZhdF9yZWFkOworICBncnViX2ZzX3BsdWdbRlNfRkFUXS5jbG9zZSA9
IGdydWJfZmF0X2Nsb3NlOworICBncnViX2ZzX3BsdWdbRlNfRkFUXS5scyA9IGdydWJfZmF0X2xz
OworCisgIGdydWJfZnNfcGx1Z1tGU19OVEZTXS5vcGVuID0gZ3J1Yl9udGZzX29wZW47CisgIGdy
dWJfZnNfcGx1Z1tGU19OVEZTXS5yZWFkID0gZ3J1Yl9udGZzX3JlYWQ7CisgIGdydWJfZnNfcGx1
Z1tGU19OVEZTXS5jbG9zZSA9IGdydWJfbnRmc19jbG9zZTsKKyAgZ3J1Yl9mc19wbHVnW0ZTX05U
RlNdLmxzID0gZ3J1Yl9udGZzX2xzOworfQorCitzdGF0aWMgaW50IGltZ19scyhpbnQgYXJnYywg
Y2hhciAqKmFyZ3YpCit7CisgICAgaW50IGMgPSAtMTsKKyAgICBjaGFyICppbWdmaWxlID0gTlVM
TDsKKyAgICBjaGFyICpkaXIgPSBOVUxMOworICAgIGNoYXIgdmVyYm9zZSA9IDA7CisgICAgc3Ry
dWN0IGxzX2N0cmwgY3RybD17fTsKKworICAgIGZvcig7OykgCisgICAgeworICAgICAgICBjID0g
Z2V0b3B0KGFyZ2MsIGFyZ3YsICJkOmhsdiIpOworICAgICAgICBpZiAoYyA9PSAtMSkKKyAgICAg
ICAgICAgIGJyZWFrOworCisgICAgICAgIHN3aXRjaChjKSAKKyAgICAgICAgeworICAgICAgICAg
ICAgY2FzZSAndic6CisgICAgICAgICAgICAgICAgdmVyYm9zZSA9IDE7CisgICAgICAgICAgICAg
ICAgYnJlYWs7CisgICAgICAgICAgICBjYXNlICdsJzoKKyAgICAgICAgICAgICAgICBjdHJsLmRl
dGFpbCA9IDE7CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAgICAgICAgICBjYXNlICdoJzoK
KyAgICAgICAgICAgICAgICBoZWxwKCk7CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAgICAg
ICAgICBjYXNlICdkJzoKKyAgICAgICAgICAgICAgICBkaXIgPSBvcHRhcmc7CisgICAgICAgICAg
ICAgICAgYnJlYWs7CisgICAgICAgICAgICBkZWZhdWx0OgorICAgICAgICAgICAgICAgIGJyZWFr
OworICAgICAgICB9CisgICAgfQorICAgIAorICAgIGltZ2ZpbGUgPSBhcmd2W29wdGluZCsrXTsK
KyAgICAKKyAgICBpZiAob3B0aW5kID4gYXJnYykKKyAgICAgIGhlbHAoKTsKKyAgICAKKyAgICBC
bG9ja0RyaXZlclN0YXRlICpicyA9IGJkcnZfbmV3KCIiKTsKKyAgICBpZighYnMpCisgICAgICAg
IGVycm9yKCJOb3QgZW5vdWdoIG1lbW9yeSBmb3IgYmRydl9uZXdcbiIpOworICAgIGlmKGJkcnZf
b3BlbihicywgaW1nZmlsZSwgQlJEVl9PX0ZMQUdTKSA8IDApCisgICAgICAgIGVycm9yKCJDb3Vs
ZCBub3Qgb3BlbiAnJXMnXG4iLCBpbWdmaWxlKTsKKyAgICAKKyAgICBvZmZfdCBvZmZfYnl0ZXMg
PSAwOworICAgIG9mZl90IHNpemVfYnl0ZXMgPSAwOworICAgIGludCBpID0gMDsKKyAgICBsc19w
YXJ0aXRpb25fdCogcGFydHMgPSAobHNfcGFydGl0aW9uX3QqKW1hbGxvYyhNQVhfUEFSVElUSU9O
UyAqIHNpemVvZihsc19wYXJ0aXRpb25fdCkpOworICAgIGludCBjb3VudCA9IGVudW1fcGFydGl0
aW9uKGJzLCBwYXJ0cyk7CisgICAgCisgICAgaWYoIWRpcikKKyAgICB7CisgICAgICAgIC8vZmlu
ZF9wYXJ0aXRpb24oYnMsIDE1LCAmb2ZmX2J5dGVzLCAmc2l6ZV9ieXRlcyk7CisgICAgICAgIHBy
aW50ZigiaWRcdGFjdGl2ZVx0dHlwZVx0ZnNcdHN0YXJ0X3NlY3Rvclx0dG90YWxfc2VjdG9yc1xu
Iik7CisgICAgICAgIGZvcihpID0gMDsgaSA8IGNvdW50OyBpKyspCisgICAgICAgIHsKKyAgICAg
ICAgICAgIHByaW50ZigiJWRcdCVzXHQlc1x0JXNcdCV1XHQldVxuIiwgCisgICAgICAgICAgICAg
ICAgICAgcGFydHNbaV0uaWQsIAorICAgICAgICAgICAgICAgICAgIHBhcnRzW2ldLnBhcnQuYm9v
dGFibGU9PTB4ODAgPyAiYWN0aXZlIiA6ICJub25lLWFjdGl2ZSIsCisgICAgICAgICAgICAgICAg
ICAgKHBhcnRzW2ldLnBhcnQuc3lzdGVtPT0weDBmIHx8IHBhcnRzW2ldLnBhcnQuc3lzdGVtPT0w
eDA1KSA/ICJleHRlbmQiIDogKHBhcnRzW2ldLmlkPj01ID8gImxvZ2ljYWwiIDogInByaW1hcnki
KSwKKyAgICAgICAgICAgICAgICAgICBqdWRnZV9mcygmcGFydHNbaV0pLAorICAgICAgICAgICAg
ICAgICAgIHBhcnRzW2ldLnBhcnQuc3RhcnRfc2VjdG9yX2FicywKKyAgICAgICAgICAgICAgICAg
ICBwYXJ0c1tpXS5wYXJ0Lm5iX3NlY3RvcnNfYWJzCisgICAgICAgICAgICAgICAgICAgKTsKKyAg
ICAgICAgfQorCisgICAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgICAgZWxzZQorICAgIHsKKyAg
ICAgICAgZ3J1Yl9mc19wbHVnaW4oKTsKKyAgICAgIAorICAgICAgICBncnViX2ZpbGVfdCBmaWxl
ID0gTlVMTDsKKyAgICAgICAgY2hhciAqcGF0aCA9IE5VTEw7CisgICAgICAgIGludCB3aGljaF9w
YXJ0ID0gMTsKKyAgICAgIAorICAgICAgICBmaWxlID0gKGdydWJfZmlsZV90KW1hbGxvYyhzaXpl
b2YoKmZpbGUpKTsKKyAgICAgICAgZmlsZS0+YnMgPSBiczsKKyAgICAgICAgZmlsZS0+ZGF0YSA9
IE5VTEw7CisKKyAgICAgICAgaWYoJy8nICE9IGRpcltzdHJsZW4oZGlyKSAtIDFdKQorICAgICAg
ICAgICAgc3RyY2F0KGRpciwgIi8iKTsKKworICAgICAgICBnZXRfcGFydGl0aW9uX3BhdGgoZGly
LCAmd2hpY2hfcGFydCwgJnBhdGgpOworICAgICAgICBpZih3aGljaF9wYXJ0IDwgMSB8fCB3aGlj
aF9wYXJ0ID4gY291bnQpCisgICAgICAgIHsKKyAgICAgICAgICAgIGZwcmludGYoc3RkZXJyLCAi
ZXJyb3I6IGNoZWNrIHRoZSBwYXJ0aXRpb24gbnVtYmVyIVxuIik7CisgICAgICAgICAgICBnb3Rv
IGZhaWw7CisgICAgICAgIH0KKworICAgICAgICBmaWxlLT5wYXJ0X29mZl9zZWN0b3IgPSBwYXJ0
c1t3aGljaF9wYXJ0IC0gMV0ucGFydC5zdGFydF9zZWN0b3JfYWJzOworICAgICAgICBjdHJsLmRp
cm5hbWUgPSBkaXI7CisgICAgICAgIHByaW50Zigiob5uYW1lXHQiCisgICAgICAgICAgICAgICAi
c2l6ZShieXRlcylcdCIKKyAgICAgICAgICAgICAgICJkaXI/XHQiCisgICAgICAgICAgICAgICAi
ZGF0ZVx0IgorICAgICAgICAgICAgICAgInRpbWWhv1xuIik7CisKKyAgICAgICAganVkZ2VfZnMo
JnBhcnRzW3doaWNoX3BhcnQgLSAxXSk7CisgICAgICAgIEZTX1RZUEUgZnNfdHlwZSA9IHBhcnRz
W3doaWNoX3BhcnQgLSAxXS5mc190eXBlOworICAgICAgICBpZiAoZnNfdHlwZSA9PSBGU19VTktO
T1dOKSAKKyAgICAgICAgeworICAgICAgICAgICAgZXJyeCgxLCAidW5rbm93biBmaWxlIHN5c3Rl
bSFcbiIpOworICAgICAgICB9CisKKyAgICAgICAgZ3J1Yl9mc19wbHVnW2ZzX3R5cGVdLmxzKGZp
bGUsIHBhdGgsIE5VTEwsICh2b2lkKikmY3RybCk7CisgICAgICAgIGZpbGUtPmRhdGEgPyBmcmVl
KGZpbGUtPmRhdGEpIDogMDsKKyAgICAgICAgZnJlZShmaWxlKTsKKyAgICB9CisgICAgCisgICAg
CisgIGZhaWw6CisgICAgYmRydl9kZWxldGUoYnMpOworICAgIGZyZWUocGFydHMpOworICAgIHJl
dHVybiAwOworfQorCisKKworc3RhdGljIGludCBpbWdfY2F0KGludCBhcmdjLCBjaGFyICoqYXJn
dikKK3sKKyAgICBpbnQgYyA9IC0xOworICAgIGNoYXIgKmltZ2ZpbGUgPSBOVUxMOworICAgIGNo
YXIgKmZpbGVuYW1lID0gTlVMTDsKKyAgICBjaGFyIHZlcmJvc2UgPSAwOworCisgICAgZm9yKDs7
KSB7CisgICAgICAgIGMgPSBnZXRvcHQoYXJnYywgYXJndiwgImY6aHYiKTsKKyAgICAgICAgaWYg
KGMgPT0gLTEpCisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgc3dpdGNoKGMpIHsKKyAgICAg
ICAgICAgIGNhc2UgJ3YnOgorICAgICAgICAgICAgICAgIHZlcmJvc2UgPSAxOworICAgICAgICAg
ICAgICAgIGJyZWFrOworICAgICAgICAgICAgY2FzZSAnaCc6CisgICAgICAgICAgICAgICAgaGVs
cCgpOworICAgICAgICAgICAgICAgIGJyZWFrOworICAgICAgICAgICAgY2FzZSAnZic6CisgICAg
ICAgICAgICAgICAgZmlsZW5hbWUgPSBvcHRhcmc7CisgICAgICAgICAgICAgICAgYnJlYWs7Cisg
ICAgICAgICAgICBkZWZhdWx0OgorICAgICAgICAgICAgICAgIGJyZWFrOworICAgICAgICB9Cisg
ICAgfQorICAgIAorICAgIGltZ2ZpbGUgPSBhcmd2W29wdGluZCsrXTsKKyAgICBpZiAob3B0aW5k
ID4gYXJnYykKKyAgICAgICAgaGVscCgpOworCisgICAgCisgICAgaWYoIWZpbGVuYW1lKQorICAg
IHsKKyAgICAgICAgcHJpbnRmKCJlcnJvcjogc3BlY2lmaWMgdGhlIGZpbGUgdG8gc2hvdyFcbiIp
OworICAgICAgICByZXR1cm4gLTE7CisgICAgfQorICAgICAgICAKKyAgICBCbG9ja0RyaXZlclN0
YXRlICpicyA9IGJkcnZfbmV3KCIiKTsKKyAgICBpZighYnMpCisgICAgICAgIGVycngoLTEsICJO
b3QgZW5vdWdoIG1lbW9yeSBmb3IgYmRydl9uZXdcbiIpOworICAgIGlmKGJkcnZfb3Blbihicywg
aW1nZmlsZSwgQlJEVl9PX0ZMQUdTKSA8IDApCisgICAgICAgIGVycngoLTEsICJDb3VsZCBub3Qg
b3BlbiAlc1xuIiwgaW1nZmlsZSk7CisgICAgCisKKyAgICB1aW50IGJ1Zl9zaXplID0gNDA5NjsK
KyAgICBjaGFyKiBidWYgPSAoY2hhciopbWFsbG9jKGJ1Zl9zaXplKTsKKyAgICBvZmZfdCBvZmZf
Ynl0ZXMgPSAwOworICAgIG9mZl90IHNpemVfYnl0ZXMgPSAwOworICAgIGludCBpID0gMDsKKyAg
ICBsc19wYXJ0aXRpb25fdCAqcGFydHMgPSAobHNfcGFydGl0aW9uX3QqKW1hbGxvYyhNQVhfUEFS
VElUSU9OUyAqIHNpemVvZihsc19wYXJ0aXRpb25fdCkpOworICAgIGludCBjb3VudCA9IGVudW1f
cGFydGl0aW9uKGJzLCBwYXJ0cyk7CisgICAgCisgICAgICAgCisgICAgeworICAgICAgICBncnVi
X2ZzX3BsdWdpbigpOworCisgICAgICAgIGdydWJfZmlsZV90IGZpbGUgPSBOVUxMOworICAgICAg
ICBjaGFyICpwYXRoID0gTlVMTDsKKyAgICAgICAgaW50IHdoaWNoX3BhcnQgPSAxOworICAgICAg
CisgICAgICAgIGZpbGUgPSAoZ3J1Yl9maWxlX3QpbWFsbG9jKHNpemVvZigqZmlsZSkpOworICAg
ICAgICBmaWxlLT5icyA9IGJzOworICAgICAgICBmaWxlLT5kYXRhID0gTlVMTDsKKyAgICAgIAor
ICAgICAgICBjaGFyKiBwID0gc3RyY2hyKGZpbGVuYW1lLCAnLycpOworICAgICAgICBpZighcCkK
KyAgICAgICAgeworICAgICAgICAgICAgZXJyeCgtMSwgInBsZWFzZSBjaGVjayB0aGUgZmlsZSBw
YXRoIVxuIik7CisgICAgICAgIH0KKyAgICAgICAgZWxzZQorICAgICAgICB7CisgICAgICAgICAg
ICBwID0gc3RyY2hyKHAsICcvJyk7CisgICAgICAgICAgICBpZighcCkgZXJyeCgtMSwgImNoZWNr
IHRoZSBmaWxlIHBhdGghIVxuIik7CisgICAgICAgIH0KKwkgIAorICAgICAgICBnZXRfcGFydGl0
aW9uX3BhdGgoZmlsZW5hbWUsICZ3aGljaF9wYXJ0LCAmcGF0aCk7CisgICAgICAgIERCRygicGFy
dD0lZCwgcGF0aD0lcyIsIHdoaWNoX3BhcnQsIHBhdGgpOworICAgICAgICBpZih3aGljaF9wYXJ0
IDwgMSB8fCB3aGljaF9wYXJ0ID4gY291bnQpCisgICAgICAgIHsKKyAgICAgICAgICAgIHByaW50
ZigiZXJyb3I6IGNoZWNrIHRoZSBwYXJ0aXRpb24gbnVtYmVyIVxuIik7CisgICAgICAgICAgICBn
b3RvIGZhaWw7CisgICAgICAgIH0KKyAgICAgICAgZmlsZS0+cGFydF9vZmZfc2VjdG9yID0gcGFy
dHNbd2hpY2hfcGFydCAtIDFdLnBhcnQuc3RhcnRfc2VjdG9yX2FiczsKKyAgICAgICAganVkZ2Vf
ZnMoJnBhcnRzW3doaWNoX3BhcnQgLSAxXSk7CisgICAgICAgIEZTX1RZUEUgZnNfdHlwZSA9IHBh
cnRzW3doaWNoX3BhcnQgLSAxXS5mc190eXBlOworICAgICAgICAoZnNfdHlwZSA9PSBGU19VTktO
T1dOKSA/IGVycngoMSwgInVua25vd24gZmlsZSBzeXN0ZW0hXG4iKSA6IDA7CisgICAgICAgIGdy
dWJfZnNfdCBncnViX2ZzX3BsZyA9IGdydWJfZnNfcGx1Z1tmc190eXBlXTsKKyAgIAorICAgICAg
ICBpZihncnViX2ZzX3BsZy5vcGVuKGZpbGUsIHBhdGgpID09IDApCisgICAgICAgIHsKKyAgICAg
ICAgICAgIC8vcHJpbnRmKCJmaWxlIHNpemU9JXpkIGJ5dGVzXG4iLCAoZmlsZS0+c2l6ZSkpOwor
CSAgCisgICAgICAgICAgICBncnViX3NpemVfdCBsZW4gPSBmaWxlLT5zaXplOworICAgICAgICAg
ICAgZ3J1Yl9vZmZfdCBvZmYgPSAwOworICAgICAgICAgICAgY2hhciAgdG1wZmlsZVsyNTZdPXt9
OworICAgICAgICAgICAgc3RybmNweSh0bXBmaWxlLCBnZXRlbnYoIkhPTUUiKSwgc2l6ZW9mKHRt
cGZpbGUpKTsKKyAgICAgICAgICAgIHRtcGZpbGVbc2l6ZW9mKHRtcGZpbGUpIC0gMV0gPSAnXDAn
OworICAgICAgICAgICAgc3RyY2F0KHRtcGZpbGUsICIvdG1wLmZpbGUiKTsKKwkgICAgCisgICAg
ICAgICAgICBpZighYnVmKQorICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIHBlcnJvcigi
bm90IGVub3VnaCBtZW1vcnkhXG4iKTsKKyAgICAgICAgICAgICAgICBnb3RvIGZhaWw7CisgICAg
ICAgICAgICB9CisgICAgICAgICAgICBlbHNlCisgICAgICAgICAgICB7CisgICAgICAgICAgICAg
ICAgZ3J1Yl9zaXplX3QgcmVhZGVkID0gMDsKKyAgICAgICAgICAgICAgICBncnViX3NpemVfdCBs
ZWZ0ICA9IGxlbjsKKyAgICAgICAgICAgICAgICBncnViX3NpemVfdCB0b3RhbCA9IDA7CisgICAg
ICAgICAgICAgICAgRklMRSogZiA9IGZvcGVuKHRtcGZpbGUgLCJ3Iik7CisgICAgICAgICAgICAg
ICAgaWYoIWYpCisgICAgICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZXJyb3Io
ImZvcGVuIGVycm9yIik7CisgICAgICAgICAgICAgICAgICAgIGdvdG8gZmFpbDsKKyAgICAgICAg
ICAgICAgICB9CisJICAgICAgCisJICAgICAgCisgICAgICAgICAgICAgICAgKGxlZnQgPiBidWZf
c2l6ZSkgPyAobGVmdCA9IGJ1Zl9zaXplKSA6IDA7CisgICAgICAgICAgICAgICAgd2hpbGUoKHJl
YWRlZCA9IGdydWJfZnNfcGxnLnJlYWQoZmlsZSwgb2ZmLCBsZWZ0LCBidWYpKQorICAgICAgICAg
ICAgICAgICAgICAgICYmIHRvdGFsIDw9IGxlbgorICAgICAgICAgICAgICAgICAgICAgICYmIHJl
YWRlZCA+IDApCisgICAgICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBEQkcoInJl
YWRlZD0lemQiLCByZWFkZWQpOworICAgICAgICAgICAgICAgICAgICB0b3RhbCArPSBmd3JpdGUo
YnVmLCAxLCByZWFkZWQsIGYpOworICAgICAgICAgICAgICAgICAgICBvZmYgPSB0b3RhbDsKKyAg
ICAgICAgICAgICAgICAgICAgbGVmdCA9IGxlbiAtIHRvdGFsOworICAgICAgICAgICAgICAgICAg
ICAobGVmdCA8PSBidWZfc2l6ZSkgPyAwICA6IChsZWZ0ID0gYnVmX3NpemUpOworICAgICAgICAg
ICAgICAgICAgICBEQkcoInRvdGFsPSV6ZCIsIHRvdGFsKTsKKyAgICAgICAgICAgICAgICB9Owor
ICAgICAgICAgICAgICAgIGZjbG9zZShmKTsKKwkgICAgICAKKyAgICAgICAgICAgICAgICBpZih0
b3RhbCAhPSBsZW4pCisgICAgICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZXJy
b3IoInJlYWQgZXJyb3IiKTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBmYWlsOworICAgICAg
ICAgICAgICAgIH0KKyAgICAgICAgICAgICAgICBlbHNlCisgICAgICAgICAgICAgICAgeworICAg
ICAgICAgICAgICAgICAgICBzcHJpbnRmKGJ1ZiwgImNhdCAlcyIsIHRtcGZpbGUpOworICAgICAg
ICAgICAgICAgICAgICBzeXN0ZW0oYnVmKTsKKyAgCisgICAgICAgICAgICAgICAgfQorICAgICAg
ICAgICAgfQorICAgICAgICB9CisgICAgICAgIGVsc2UKKyAgICAgICAgeworICAgICAgICAgICAg
cHJpbnRmKCJvcGVuIGZhaWxlZCFcbiIpOworICAgICAgICB9CisgICAgICAKKyAgICAgIAorICAg
ICAgICBncnViX2ZzX3BsZy5jbG9zZShmaWxlKTsKKyAgICAgICAgZnJlZShmaWxlKTsKKyAgICB9
CisgICAgCisgICAgCisgIGZhaWw6CisgICAgZnJlZShidWYpOworICAgIGJkcnZfZGVsZXRlKGJz
KTsKKyAgICBmcmVlKHBhcnRzKTsKKyAgICByZXR1cm4gMDsKK30KKworCisKIHN0YXRpYyBpbnQg
aW1nX2NyZWF0ZShpbnQgYXJnYywgY2hhciAqKmFyZ3YpCiB7CiAgICAgaW50IGMsIHJldCwgZmxh
Z3M7CiAgICAgY29uc3QgY2hhciAqZm10ID0gInJhdyI7CiAgICAgY29uc3QgY2hhciAqZmlsZW5h
bWU7CiAgICAgY29uc3QgY2hhciAqYmFzZV9maWxlbmFtZSA9IE5VTEw7CiAgICAgdWludDY0X3Qg
c2l6ZTsKICAgICBjb25zdCBjaGFyICpwOwpAQCAtODUwLDE2ICsxMTkxLDE3IEBAIHN0YXRpYyB2
b2lkIGltZ19zbmFwc2hvdChpbnQgYXJnYywgY2hhciAKICAgICB9CiAKICAgICAvKiBDbGVhbnVw
ICovCiAgICAgYmRydl9kZWxldGUoYnMpOwogfQogCiBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAq
KmFyZ3YpCiB7CisgIAogICAgIGNvbnN0IGNoYXIgKmNtZDsKIAogICAgIGJkcnZfaW5pdCgpOwog
ICAgIGlmIChhcmdjIDwgMikKICAgICAgICAgaGVscCgpOwogICAgIGNtZCA9IGFyZ3ZbMV07CiAg
ICAgYXJnYy0tOyBhcmd2Kys7CiAgICAgaWYgKCFzdHJjbXAoY21kLCAiY3JlYXRlIikpIHsKQEAg
LTg2NywxMyArMTIwOSwxOSBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqKmFyZ3YpCiAgICAg
fSBlbHNlIGlmICghc3RyY21wKGNtZCwgImNvbW1pdCIpKSB7CiAgICAgICAgIGltZ19jb21taXQo
YXJnYywgYXJndik7CiAgICAgfSBlbHNlIGlmICghc3RyY21wKGNtZCwgImNvbnZlcnQiKSkgewog
ICAgICAgICBpbWdfY29udmVydChhcmdjLCBhcmd2KTsKICAgICB9IGVsc2UgaWYgKCFzdHJjbXAo
Y21kLCAiaW5mbyIpKSB7CiAgICAgICAgIGltZ19pbmZvKGFyZ2MsIGFyZ3YpOwogICAgIH0gZWxz
ZSBpZiAoIXN0cmNtcChjbWQsICJzbmFwc2hvdCIpKSB7CiAgICAgICAgIGltZ19zbmFwc2hvdChh
cmdjLCBhcmd2KTsKLSAgICB9IGVsc2UgeworICAgIH0gZWxzZSBpZiAoIXN0cmNtcChjbWQsICJs
cyIpKSB7CisgICAgICAgIGltZ19scyhhcmdjLCBhcmd2KTsgICAgCisgICAgfSBlbHNlIGlmICgh
c3RyY21wKGNtZCwgImNhdCIpKSB7CisgICAgICAgIGltZ19jYXQoYXJnYywgYXJndik7CisgICAg
fQorICAgIGVsc2UgewogICAgICAgICBoZWxwKCk7CiAgICAgfQorICAgIAogICAgIHJldHVybiAw
OwogfQpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11
LXFlbXUteGVuL3R5cGVzLmggeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vdHlwZXMu
aAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vdHlwZXMuaAkxOTcwLTAxLTAx
IDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVt
dS14ZW4vdHlwZXMuaAkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNjkzMjYyMiArMDgwMApAQCAtMCww
ICsxLDM1IEBACisvKgorICogIEdSVUIgIC0tICBHUmFuZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAq
ICBDb3B5cmlnaHQgKEMpIDIwMDIsMjAwNiwyMDA3ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24s
IEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0
ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2Fy
ZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChh
dCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJp
YnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9V
VCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICog
IE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNl
ZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgor
ICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJh
bCBQdWJsaWMgTGljZW5zZQorICogIGFsb25nIHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRw
Oi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4KKyAqLworCisjaWZuZGVmIEdSVUJfVFlQRVNfQ1BV
X0hFQURFUgorI2RlZmluZSBHUlVCX1RZUEVTX0NQVV9IRUFERVIJMQorCisvKiBUaGUgc2l6ZSBv
ZiB2b2lkICouICAqLworI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpFT0ZfVk9JRF9QCTQKKworLyog
VGhlIHNpemUgb2YgbG9uZy4gICovCisjZGVmaW5lIEdSVUJfVEFSR0VUX1NJWkVPRl9MT05HCQk0
CisKKy8qIGkzODYgaXMgbGl0dGxlLWVuZGlhbi4gICovCisjdW5kZWYgR1JVQl9UQVJHRVRfV09S
RFNfQklHRU5ESUFOCisKKyNkZWZpbmUgR1JVQl9UQVJHRVRfSTM4NgkJMQorCisjZGVmaW5lIEdS
VUJfVEFSR0VUX01JTl9BTElHTgkJMQorCisjZW5kaWYgLyogISBHUlVCX1RZUEVTX0NQVV9IRUFE
RVIgKi8KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9pb2Vt
dS1xZW11LXhlbi94ODZfNjQvdHlwZXMuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhl
bi94ODZfNjQvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4veDg2
XzY0L3R5cGVzLmgJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00
LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL3g4Nl82NC90eXBlcy5oCTIwMTItMTItMjggMTY6
MDI6NDEuMDE3ODAyMzcxICswODAwCkBAIC0wLDAgKzEsMzkgQEAKKy8qCisgKiAgR1JVQiAgLS0g
IEdSYW5kIFVuaWZpZWQgQm9vdGxvYWRlcgorICogIENvcHlyaWdodCAoQykgMjAwOCAgRnJlZSBT
b2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTog
eW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0
ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5Cisg
KiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUg
TGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoK
KyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2Vm
dWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxp
ZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJU
SUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2Ug
Zm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29w
eSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIu
ICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lm
bmRlZiBHUlVCX1RZUEVTX0NQVV9IRUFERVIKKyNkZWZpbmUgR1JVQl9UWVBFU19DUFVfSEVBREVS
CTEKKworLyogVGhlIHNpemUgb2Ygdm9pZCAqLiAgKi8KKyNkZWZpbmUgR1JVQl9UQVJHRVRfU0la
RU9GX1ZPSURfUAk4CisKKy8qIFRoZSBzaXplIG9mIGxvbmcuICAqLworI2lmZGVmIF9fTUlOR1cz
Ml9fCisjZGVmaW5lIEdSVUJfVEFSR0VUX1NJWkVPRl9MT05HCQk0CisjZWxzZQorI2RlZmluZSBH
UlVCX1RBUkdFVF9TSVpFT0ZfTE9ORwkJOAorI2VuZGlmCisKKy8qIHg4Nl82NCBpcyBsaXR0bGUt
ZW5kaWFuLiAgKi8KKyN1bmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4KKworI2RlZmlu
ZSBHUlVCX1RBUkdFVF9YODZfNjQJCTEKKworI2RlZmluZSBHUlVCX1RBUkdFVF9NSU5fQUxJR04J
CTEKKworI2VuZGlmIC8qICEgR1JVQl9UWVBFU19DUFVfSEVBREVSICovCg==
--047d7b66f24b9af9a904d1e55c43
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b66f24b9af9a904d1e55c43--


From xen-devel-bounces@lists.xen.org Fri Dec 28 08:22:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 08:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToVCz-0000N7-Sx; Fri, 28 Dec 2012 08:22:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1ToVCx-0000N2-12
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 08:22:24 +0000
Received: from [85.158.138.51:27499] by server-6.bemta-3.messagelabs.com id
	17/BF-12154-EB65DD05; Fri, 28 Dec 2012 08:22:22 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1356682938!22550061!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14876 invoked from network); 28 Dec 2012 08:22:18 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 08:22:18 -0000
Received: by mail-ee0-f50.google.com with SMTP id b45so5169360eek.23
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Dec 2012 00:22:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=2Uibw8eyuRFTg4hB6x+Ys7glNyYd+Lw7vxmNy2fd0vA=;
	b=MrhseFkN8PQnfSmXorsp6LdgSMMhJfPzTYlDo/8amJeUHSJabccreQJlbtRxhxFmYH
	McgrlPk5nL32nH+AxGmXZKyHL6J+yoTot6KdPpryiXSf78SVXhAbaVeaiXiEIp75dTkp
	3zyrfhZU7wq1YJ9qp6lBWxSikPmA5sf88aXudH96YT3d2XWrtucIPqaBbsDhsLWnnYVy
	TOOI0YR7EGMl8QaG1vBaSbKJeQjIvyxS4d9K86Sina537aQAQK9bVh49ga8yM2xbUCH8
	ntYBILGZ0GkWl2mNmHq7gLqZeiEFT+Z/WKk6Q9wBjO0pWGLU0feuCYc/86WdoKp0vP1n
	M/dw==
MIME-Version: 1.0
Received: by 10.14.225.194 with SMTP id z42mr84802387eep.22.1356682938218;
	Fri, 28 Dec 2012 00:22:18 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Fri, 28 Dec 2012 00:22:17 -0800 (PST)
Date: Fri, 28 Dec 2012 16:22:17 +0800
Message-ID: <CA+ePHTDo4743dodupmoHJGsuGqJCk83PJ1bcv-PwxbcdgiHq3Q@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7b66f24b9af9a904d1e55c43
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] patch for listing and reading files from
 qcow2-formatted image file (for xen-4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b66f24b9af9a904d1e55c43
Content-Type: multipart/alternative; boundary=047d7b66f24b9af9a604d1e55c41

--047d7b66f24b9af9a604d1e55c41
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi,
    The final effect is as follows:

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ qemu-img-xen cat -f /1/boot.ini ~/vm-check.img
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /noexecute=optin /fastdetect

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ qemu-img-xen ls -l -d /1/ ~/vm-check.img
[name          size(bytes)  dir?  date        create-time]
AUTOEXEC.BAT   0            file  2010-12-22  17:30:37
boot.ini       211          file  2010-12-23  01:24:41
bootfont.bin   322730       file  2004-11-23  20:00:00

As shown above, the patch adds two sub-commands to qemu-img-xen: cat and ls.

For details of the patch, please check the attachment.

--047d7b66f24b9af9a604d1e55c41--
--047d7b66f24b9af9a904d1e55c43
Content-Type: application/octet-stream; name="qemu-imgfs-for-qcow2.patch"
Content-Disposition: attachment; filename="qemu-imgfs-for-qcow2.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hb91vt6j0

ZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11
LXhlbi9kZWJ1Zy5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2RlYnVnLmMKLS0t
IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2RlYnVnLmMJMTk3MC0wMS0wMSAwNzow
MDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVu
L2RlYnVnLmMJMjAxMi0xMi0yOCAxNjowMjo0MC45OTk5MzM5MjUgKzA4MDAKQEAgLTAsMCArMSwx
ODIgQEAKKyNpbmNsdWRlPHRpbWUuaD4KKyNpbmNsdWRlPHN5cy9zdGF0Lmg+CisjaW5jbHVkZTxz
dGRhcmcuaD4KKyNpbmNsdWRlPGZjbnRsLmg+CisjaW5jbHVkZSJkZWJ1Zy5oIgorI2luY2x1ZGUg
PHVuaXN0ZC5oPgorI2luY2x1ZGUgPHN0cmluZy5oPgorCisjZGVmaW5lIEtCKHgpICAgICgoeCkq
MTAyNCkKKworc3RhdGljIGludCBkYmdfdGVybSA9IDAsIGRiZ19maWxlID0gMCwgbG9nX2RheSA9
IDA7CitzdGF0aWMgRklMRSogZnBfbG9nID0gTlVMTDsKK3N0YXRpYyBjaGFyIGRpclsxMjhdPXsw
LH0sIGZpbGVuYW1lWzE2MF07CitzdGF0aWMgdm9pZCBpbml0X2ZpbGVfcGF0aCh2b2lkKTsKK3N0
YXRpYyBjaGFyIHByaW50YnVmWzEwMjRdPXt9OworaW50IG1rZGlyX3JlY3Vyc2l2ZShjaGFyKiBw
YXRoKTsKKworCit2b2lkIHByaW50X2Vycm9yKGNoYXIqIGZpbGUsIGNoYXIqIGZ1bmN0aW9uLCBp
bnQgbGluZSwgY29uc3QgY2hhciAqZm10LCAuLi4pCit7CisgIHZhX2xpc3QgYXJnczsKKyAgaW50
IGk7CisKKyAgaWYoICFkYmdfdGVybSAmJiAhZGJnX2ZpbGUgKQorICAgIHJldHVybjsKKworICB2
YV9zdGFydChhcmdzLCBmbXQpOworICBpPXZzcHJpbnRmKCBwcmludGJ1ZiwgZm10LCBhcmdzICk7
CisgIHByaW50YnVmW2ldID0gMDsKKyAgdmFfZW5kKGFyZ3MpOworCisgIGlmKCBkYmdfdGVybSAp
CisgICAgeworICAgICAgcHJpbnRmKCJbJXNdJXMoJWQpOlxuJXNcbiIsIGZpbGUsIGZ1bmN0aW9u
LCBsaW5lLCBwcmludGJ1Zik7CisgICAgfQorCisgIGlmKCBkYmdfZmlsZSApCisgICAgeworICAg
ICAgdGltZV90IHQgPSB0aW1lKCBOVUxMICk7CisgICAgICBzdHJ1Y3QgdG0qIHRtMSA9IGxvY2Fs
dGltZSgmdCk7CisgICAgICBpZiggIXRtMSApIHJldHVybjsKKyAgICAgIC8vaWYoIHRtMS0+dG1f
bWRheSAhPSBsb2dfZGF5ICkKKyAgICAgIHsKKwkvL2luaXRfZmlsZV9wYXRoKCk7CisgICAgICB9
CisgICAgICBjaGFyIHRtcFsxNl07CisgICAgICBzdHJmdGltZSggdG1wLCAxNSwgIiVYIiwgdG0x
ICk7CisgICAgICBmcHJpbnRmKCBmcF9sb2csICIlcyBbJXNdJXMoJWQpOiAlc1xuIiwgdG1wLCBm
aWxlLCBmdW5jdGlvbiwgbGluZSwgcHJpbnRidWYpOworICAgICAgZmZsdXNoKCBmcF9sb2cgKTsK
KyAgICB9Cit9CisKK3N0YXRpYyBjaGFyKiBoZXhfc3RyKHVuc2lnbmVkIGNoYXIgKmJ1ZiwgaW50
IGxlbiwgY2hhciogb3V0c3RyICkKK3sKKworICBjb25zdCBjaGFyICpzZXQgPSAiMDEyMzQ1Njc4
OWFiY2RlZiI7CisgIGNoYXIgKnRtcDsKKyAgdW5zaWduZWQgY2hhciAqZW5kOworICBpZiAobGVu
ID4gMTAyNCkKKyAgICBsZW4gPSAxMDI0OworICBlbmQgPSBidWYgKyBsZW47CisgIHRtcCA9ICZv
dXRzdHJbMF07CisgIHdoaWxlIChidWYgPCBlbmQpCisgICAgeworICAgICAgKnRtcCsrID0gc2V0
WyAoKmJ1ZikgPj4gNCBdOworICAgICAgKnRtcCsrID0gc2V0WyAoKmJ1ZikgJiAweEYgXTsKKyAg
ICAgICp0bXArKyA9ICcgJzsKKyAgICAgIGJ1ZiArKzsKKyAgICB9CisgICp0bXAgPSAnXDAnOwor
ICByZXR1cm4gb3V0c3RyOworfQorCit2b2lkIGhleF9kdW1wKCB1bnNpZ25lZCBjaGFyICogYnVm
LCBpbnQgbGVuICkKK3sKKyAgY2hhciBzdHJbS0IoNCldOworICBpZiggZGJnX3Rlcm0gKQorICAg
IHB1dHMoIGhleF9zdHIoIGJ1ZiwgbGVuLCBzdHIgKSApOworICBpZiggZGJnX2ZpbGUgKXsKKyAg
ICBmcHV0cyggaGV4X3N0ciggYnVmLCBsZW4sIHN0ciApLCBmcF9sb2cgKTsKKyAgICBmcHJpbnRm
KCBmcF9sb2csICJcbiIgKTsKKyAgICBmZmx1c2goIGZwX2xvZyApOworICB9CisgIC8vZnByaW50
Ziggc3RkZXJyLCBoZXhfc3RyKCBidWYsIGxlbiApICk7Cit9CisKK3ZvaWQgZGVidWdfdGVybV9v
bigpCit7CisgIGRiZ190ZXJtID0gMTsKK30KKwordm9pZCBkZWJ1Z190ZXJtX29mZigpCit7Cisg
IGRiZ190ZXJtID0gMDsKK30KKworCitpbnQgbWtkaXJfcmVjdXJzaXZlKCBjaGFyKiBwYXRoICkK
K3sKKyAgY2hhciAqcDsKKworICBpZiggYWNjZXNzKCBwYXRoLCAwICkgPT0gMCApCisgICAgcmV0
dXJuIDA7CisKKyAgZm9yKCBwPXBhdGg7ICpwOyBwKysgKQorICAgIHsKKyAgICAgIGlmKCBwPnBh
dGggJiYgKnAgPT0gJy8nICkKKwl7CisJICAqcCA9IDA7CisJICBpZiggYWNjZXNzKCBwYXRoLCAw
ICkgIT0gMCApCisJICAgIHsKKyNpZmRlZiBfX1dJTjMyX18KKwkgICAgICBta2RpciggcGF0aCAp
OworI2Vsc2UKKwkgICAgICBpZiggbWtkaXIoIHBhdGgsIFNfSVJXWFUgKSAhPSAwICkKKwkJcmV0
dXJuIC0xOworI2VuZGlmCisJICAgIH0KKwkgICpwID0gJy8nOworCX0KKyAgICB9CisjaWZkZWYg
X19XSU4zMl9fCisgIHJldHVybiBta2RpciggcGF0aCApOworI2Vsc2UKKyAgcmV0dXJuIG1rZGly
KCBwYXRoLCBTX0lSV1hVICk7CisjZW5kaWYKK30KKwordm9pZCBpbml0X2ZpbGVfcGF0aCgpCit7
CisgIGNoYXIgdG1wWzY0XTsKKyAgdGltZV90IHQgPSB0aW1lKCBOVUxMICk7CisgIHN0cnVjdCB0
bSogdG0xID0gbG9jYWx0aW1lKCZ0KTsKKyAgCisgIGlmKCAhdG0xICkKKyAgICB7CisgICAgICBw
ZXJyb3IoImRlYnVnLmMgaW5pdF9maWxlX3BhdGg6IEVSUk9SIEdFVFRJTkcgU1lTVEVNIFRJTUUu
Iik7CisgICAgfQorICBsb2dfZGF5ID0gdG0xLT50bV9tZGF5OworICBzdHJmdGltZSggdG1wLCA2
NCwgIi8lWS0lbS0lZC50eHQiLCB0bTEgKTsKKyAgCisgIGlmKCBhY2Nlc3MoIGRpciwgMCApIT0w
ICkKKyAgICB7CisgICAgICAobWtkaXJfcmVjdXJzaXZlKCBkaXIgKTwwKSA/IHBlcnJvcigibWtk
aXJfcmVjdXJzaXZlIGZhaWwhIVxuIikgOiAwOworICAgIH0KKyAgc3RyY3B5KCBmaWxlbmFtZSwg
ZGlyICk7CisgIHN0cmNhdCggZmlsZW5hbWUsIHRtcCApOworICBpZiggZnBfbG9nICkKKyAgICBm
Y2xvc2UoIGZwX2xvZyApOworICBmcF9sb2cgPSBmb3BlbiggZmlsZW5hbWUsICJ3IiApOworICBp
ZihmcF9sb2cpCisgICAgeworICAgICAgZnByaW50ZihmcF9sb2csIj09PT09PT09PT09PT09PT09
PT09PT1MT0cgIFNUQVJUPT09PT09PT09PT09PT09PT09PT09PT09XG4iKTsKKyAgICAgIGZjbG9z
ZShmcF9sb2cpOyAgZnBfbG9nPU5VTEw7CisgICAgICBmcF9sb2cgPSBmb3BlbiggZmlsZW5hbWUs
ICJhKyIgKTsKKyAgICAgIE5VTEw9PWZwX2xvZyA/IHByaW50ZigiaW5pdF9maWxlX3BhdGgoKTpm
b3BlbihhKykgZmFpbFxuIikgOiAwOworICAgIH0KK30KKwordm9pZCBkZWJ1Z19maWxlX29uKGNo
YXIgKnBhdGgpCit7CisgIGlmKCBkYmdfZmlsZSApCisgICAgcmV0dXJuOworICBkZWJ1Z19zZXRf
ZGlyKHBhdGgpOworICBpbml0X2ZpbGVfcGF0aCgpOworICBkYmdfZmlsZSA9IDE7Cit9CisKK3Zv
aWQgZGVidWdfZmlsZV9vZmYoKQoreworICBpZiggIWRiZ19maWxlICkKKyAgICByZXR1cm47Cisg
IGRiZ19maWxlID0gMDsKKyAgaWYoIGZwX2xvZyApCisgICAgZmNsb3NlKCBmcF9sb2cgKTsKK30K
Kwordm9pZCBkZWJ1Z19zZXRfZGlyKGNoYXIqIHN0cikKK3sKKyAgc3RyY3B5KCBkaXIsIHN0ciAp
OworfQorCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZGVidWcuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9kZWJ1
Zy5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9kZWJ1Zy5oCTE5NzAtMDEt
MDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1x
ZW11LXhlbi9kZWJ1Zy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDAwOTM0MzI3ICswODAwCkBAIC0w
LDAgKzEsMzQgQEAKKyNpZm5kZWYgX0RFQlVHX0gKKyNkZWZpbmUgX0RFQlVHX0gKKworI2luY2x1
ZGUgPHN0ZGlvLmg+CisjaW5jbHVkZSA8ZXJybm8uaD4KKyNpbmNsdWRlIDxhc3NlcnQuaD4KKwor
Ly8jZGVmaW5lIFJFTEVBU0UKKworI2lmbmRlZiBSRUxFQVNFCisjZGVmaW5lIERCRyhhcmdzIC4u
LikgXAorICBwcmludF9lcnJvciggKGNoYXIqKV9fRklMRV9fLCAoY2hhciopX19mdW5jX18sIF9f
TElORV9fLCAjI2FyZ3MgKQorI2Vsc2UKKyNkZWZpbmUgREJHKGFyZ3MgLi4uKQkJCQkgIFwKKyAg
ZG8JCQkJCQkgIFwKKyAgICB7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCitmcHJpbnRmKGxvZ2ZpbGUsIiVzOjpbJXNdOjooJWQpOlxuIiwgICAgICAgICAg
ICAgICAgXAorCShjaGFyKilfX0ZJTEVfXywgKGNoYXIqKV9fZnVuY19fLCBfX0xJTkVfXyk7ICAg
ICAgXAorZnByaW50Zihsb2dmaWxlLCAjI2FyZ3MpOwlmcHJpbnRmKGxvZ2ZpbGUsICJcbiIpOwkg
IFwKK30gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
XAord2hpbGUoMCkKKy8vI2RlZmluZSBEQkcgcHJpbnRmCisjZW5kaWYKKyNkZWZpbmUgTVNHcHJp
bnRmCit2b2lkIHByaW50X2Vycm9yKGNoYXIqIGZpbGUsIGNoYXIqIGZ1bmN0aW9uLCBpbnQgbGlu
ZSwgY29uc3QgY2hhciAqZm10LCAuLi4pOwordm9pZCBoZXhfZHVtcCggdW5zaWduZWQgY2hhciAq
IGJ1ZiwgaW50IGxlbiApOwordm9pZCBkZWJ1Z190ZXJtX29uKHZvaWQpOwordm9pZCBkZWJ1Z190
ZXJtX29mZih2b2lkKTsKK3ZvaWQgZGVidWdfZmlsZV9vbihjaGFyICpwYXRoKTsKK3ZvaWQgZGVi
dWdfZmlsZV9vZmYodm9pZCk7Cit2b2lkIGRlYnVnX3NldF9kaXIoY2hhciogc3RyKTsKKworI2Vu
ZGlmIC8vX0RFQlVHX0gKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1h
L3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZhdC5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUt
eGVuL2ZhdC5jCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9mYXQuYwkxOTcw
LTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZmF0LmMJMjAxMi0xMi0yOCAxNjowMjo0MS4wMDE5MzQ3MDkgKzA4MDAKQEAg
LTAsMCArMSw5MzYgQEAKKy8qIGZhdC5jIC0gRkFUIGZpbGVzeXN0ZW0gKi8KKy8qCisgKiAgR1JV
QiAgLS0gIEdSYW5kIFVuaWZpZWQgQm9vdGxvYWRlcgorICogIENvcHlyaWdodCAoQykgMjAwMCwy
MDAxLDIwMDIsMjAwMywyMDA0LDIwMDUsMjAwNywyMDA4LDIwMDkgIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbiwgSW5jLgorICoKKyAqICBHUlVCIGlzIGZyZWUgc29mdHdhcmU6IHlvdSBjYW4gcmVk
aXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkKKyAqICBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1Ymxpc2hlZCBieQorICogIHRoZSBGcmVl
IFNvZnR3YXJlIEZvdW5kYXRpb24sIGVpdGhlciB2ZXJzaW9uIDMgb2YgdGhlIExpY2Vuc2UsIG9y
CisgKiAgKGF0IHlvdXIgb3B0aW9uKSBhbnkgbGF0ZXIgdmVyc2lvbi4KKyAqCisgKiAgR1JVQiBp
cyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAorICogIGJ1
dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5
IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQ
T1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRl
dGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdO
VSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxvbmcgd2l0aCBHUlVCLiAgSWYgbm90LCBz
ZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+LgorICovCisjaW5jbHVkZSAibWlzYy5o
IgorI2luY2x1ZGUgImZhdC5oIgorI2luY2x1ZGUgImRlYnVnLmgiCisKKworaW50IGdfZXJyID0g
R1JVQl9FUlJfTk9ORTsKK2ludDY0X3Qgc19icGJfYnl0ZXNfcGVyX3NlY3RvcjsKK2ludDY0X3Qg
c19wYXJ0X29mZl9zZWN0b3I7CisKK3N0YXRpYyBpbnQgYmRydl9wcmVhZF9mcm9tX3NlY3Rvcl9v
Zl92b2x1bWUoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIGludDY0X3Qgb2Zmc2V0LAorICAgICAgICAg
ICAgICAgdm9pZCAqYnVmMSwgaW50IGNvdW50MSkKK3sKKyAgaW50NjRfdCBvZmYgPSBzX2JwYl9i
eXRlc19wZXJfc2VjdG9yICogc19wYXJ0X29mZl9zZWN0b3IgKyBvZmZzZXQ7CisgIHJldHVybiBi
ZHJ2X3ByZWFkKGJzLCBvZmYsIGJ1ZjEsIGNvdW50MSk7Cit9CisKKworc3RhdGljIGludAorZmF0
X2xvZzIgKHVuc2lnbmVkIHgpCit7CisgIGludCBpOworCisgIGlmICh4ID09IDApCisgICAgcmV0
dXJuIC0xOworCisgIGZvciAoaSA9IDA7ICh4ICYgMSkgPT0gMDsgaSsrKQorICAgIHggPj49IDE7
CisKKyAgaWYgKHggIT0gMSkKKyAgICByZXR1cm4gLTE7CisKKyAgcmV0dXJuIGk7Cit9CisKKwor
Y2hhciAqCitncnViX2ZhdF9maW5kX2RpciAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBn
cnViX2ZhdF9kYXRhICpkYXRhLAorCQkgICBjb25zdCBjaGFyICpwYXRoLAorCQkgICBpbnQgKCpo
b29rKSAoY29uc3QgY2hhciAqZmlsZW5hbWUsCisJCQkJY29uc3Qgc3RydWN0IGdydWJfZGlyaG9v
a19pbmZvICppbmZvLAorCQkJCXZvaWQgKmNsb3N1cmUpLAorCQkgICB2b2lkICpjbG9zdXJlKTsK
KworCisKK3N0cnVjdCBncnViX2ZhdF9kYXRhICoKK2dydWJfZmF0X21vdW50IChCbG9ja0RyaXZl
clN0YXRlICpicywgdWludDMyX3QgcGFydF9vZmZfc2VjdG9yKQoreworICBzdHJ1Y3QgZ3J1Yl9m
YXRfYnBiIGJwYjsKKyAgc3RydWN0IGdydWJfZmF0X2RhdGEgKmRhdGEgPSAwOworICBncnViX3Vp
bnQzMl90IGZpcnN0X2ZhdCwgbWFnaWM7CisgIGludDY0X3Qgb2ZmX2J5dGVzID0gKGludDY0X3Qp
cGFydF9vZmZfc2VjdG9yIDw8IEdSVUJfRElTS19TRUNUT1JfQklUUzsKKworICBpZiAoISBicykK
KyAgICBnb3RvIGZhaWw7CisKKyAgZGF0YSA9IChzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqKSBtYWxs
b2MgKHNpemVvZiAoKmRhdGEpKTsKKyAgaWYgKCEgZGF0YSkKKyAgICBnb3RvIGZhaWw7CisKKyAg
LyogUmVhZCB0aGUgQlBCLiAgKi8KKyAgaWYgKGJkcnZfcHJlYWQoYnMsIG9mZl9ieXRlcywgJmJw
Yiwgc2l6ZW9mKGJwYikpICE9IHNpemVvZihicGIpKQorICAgIHsKKyAgICAgIERCRygiYmRydl9w
cmVhZCBmYWlsLi4uLiIpOworICAgICAgZ290byBmYWlsOworICAgIH0KKyAgICAKKyAgaWYgKGdy
dWJfc3RybmNtcCgoY29uc3QgY2hhciAqKSBicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQxMl9vcl9m
YXQxNi5mc3R5cGUsCisJCSAgICJGQVQxMiIsIDUpCisgICAgICAmJiBncnViX3N0cm5jbXAoKGNv
bnN0IGNoYXIgKikgYnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MTJfb3JfZmF0MTYuZnN0eXBlLAor
CQkgICAgICAiRkFUMTYiLCA1KQorICAgICAgJiYgZ3J1Yl9zdHJuY21wKChjb25zdCBjaGFyICop
IGJwYi52ZXJzaW9uX3NwZWNpZmljLmZhdDMyLmZzdHlwZSwKKwkJICAgICAgIkZBVDMyIiwgNSkK
KyAgICAgICkKKyAgICB7CisgICAgICAKKyAgICAgIERCRygiZmFpbCBoZXJlLS0+Z3J1Yl9zdHJu
Y21wLi4uLi4uIik7CisgICAgICBnb3RvIGZhaWw7CisgICAgfQorCisgIC8qIEdldCB0aGUgc2l6
ZXMgb2YgbG9naWNhbCBzZWN0b3JzIGFuZCBjbHVzdGVycy4gICovCisgIHNfYnBiX2J5dGVzX3Bl
cl9zZWN0b3IgPSAoYnBiLmJ5dGVzX3Blcl9zZWN0b3IpOworICBzX3BhcnRfb2ZmX3NlY3RvciA9
IHBhcnRfb2ZmX3NlY3RvcjsKKyAgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyA9CisgICAgZmF0
X2xvZzIgKGdydWJfbGVfdG9fY3B1MTYgKGJwYi5ieXRlc19wZXJfc2VjdG9yKSk7CisgIERCRygi
YnBiLmJ5dGVzX3Blcl9zZWN0b3I9MHgleCwgbGVfdG9fY3B1MTY9MHgleCIsCisJIGJwYi5ieXRl
c19wZXJfc2VjdG9yLCBncnViX2xlX3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikpOwor
ICAKKworICBpZiAoZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyA8IEdSVUJfRElTS19TRUNUT1Jf
QklUUykKKyAgeworICAgIERCRygiZmFpbCBoZXJlLS0+bG9naWNhbF9zZWN0b3JfYml0cyIpOyAK
KyAgICBnb3RvIGZhaWw7CisgIH0KKyAgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyAtPSBHUlVC
X0RJU0tfU0VDVE9SX0JJVFM7CisKKyAgREJHKCJicGIuc2VjdG9yc19wZXJfY2x1c3Rlcj0ldSIs
IGJwYi5zZWN0b3JzX3Blcl9jbHVzdGVyKTsKKyAgZGF0YS0+Y2x1c3Rlcl9iaXRzID0gZmF0X2xv
ZzIgKGJwYi5zZWN0b3JzX3Blcl9jbHVzdGVyKTsKKyAgaWYgKGRhdGEtPmNsdXN0ZXJfYml0cyA8
IDApCisgICAgeworICAgICAgREJHKCJmYWlsIGhlcmUtLT5jbHVzdGVyX2JpdHMuLi4uLi5saW5l
WyV1XSIsIF9fTElORV9fKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorICBkYXRhLT5jbHVz
dGVyX2JpdHMgKz0gZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0czsKKworICAvKiBHZXQgaW5mb3Jt
YXRpb24gYWJvdXQgRkFUcy4gICovCisgIERCRygiYnBiLm51bV9yZXNlcnZlZF9zZWN0b3JzPSV1
LCIKKyAgICAgICJsZV90b19jcHUxNj0ldSIsCisgICAgICBicGIubnVtX3Jlc2VydmVkX3NlY3Rv
cnMsCisgICAgICBncnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2VydmVkX3NlY3RvcnMpKTsK
KyAgZGF0YS0+ZmF0X3NlY3RvciA9IChncnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2VydmVk
X3NlY3RvcnMpCisJCSAgICAgIDw8IGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBEQkco
ImRhdGEtPmZhdF9zZWN0b3I9JXUsIHBhcnRfb2ZmX3NlY3Rvcj0ldSIsCisgICAgICBkYXRhLT5m
YXRfc2VjdG9yLCBwYXJ0X29mZl9zZWN0b3IpOworICBpZiAoZGF0YS0+ZmF0X3NlY3RvciA9PSAw
KQorICAgIHsKKyAgICAgIERCRygiZmFpbCBoZXJlLS0+ZmF0X3NlY3Rvci4uLi4uLiIpOyAKKyAg
ICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGRhdGEtPnNlY3RvcnNfcGVyX2ZhdCA9ICgoYnBiLnNl
Y3RvcnNfcGVyX2ZhdF8xNgorCQkJICAgID8gZ3J1Yl9sZV90b19jcHUxNiAoYnBiLnNlY3RvcnNf
cGVyX2ZhdF8xNikKKwkJCSAgICA6IGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNp
ZmljLmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMikpCisJCQkgICA8PCBkYXRhLT5sb2dpY2FsX3Nl
Y3Rvcl9iaXRzKTsKKyAgREJHKCJicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5zZWN0b3JzX3Bl
cl9mYXRfMzI9JXVcbiIKKwkgImdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNpZmlj
LmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMik9JXUiLAorCSBicGIudmVyc2lvbl9zcGVjaWZpYy5m
YXQzMi5zZWN0b3JzX3Blcl9mYXRfMzIsCisJIGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9u
X3NwZWNpZmljLmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMikpOworICBpZiAoZGF0YS0+c2VjdG9y
c19wZXJfZmF0ID09IDApCisgICAgZ290byBmYWlsOworCisgIC8qIEdldCB0aGUgbnVtYmVyIG9m
IHNlY3RvcnMgaW4gdGhpcyB2b2x1bWUuICAqLworICBkYXRhLT5udW1fc2VjdG9ycyA9ICgoYnBi
Lm51bV90b3RhbF9zZWN0b3JzXzE2CisJCQk/IGdydWJfbGVfdG9fY3B1MTYgKGJwYi5udW1fdG90
YWxfc2VjdG9yc18xNikKKwkJCTogZ3J1Yl9sZV90b19jcHUzMiAoYnBiLm51bV90b3RhbF9zZWN0
b3JzXzMyKSkKKwkJICAgICAgIDw8IGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBpZiAo
ZGF0YS0+bnVtX3NlY3RvcnMgPT0gMCkKKyAgICB7CisgICAgICBEQkcoImZhaWwgaGVyZS0tPm51
bV9zZWN0b3JzLi4uLi4uIik7IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgLyogR2V0IGlu
Zm9ybWF0aW9uIGFib3V0IHRoZSByb290IGRpcmVjdG9yeS4gICovCisgIGlmIChicGIubnVtX2Zh
dHMgPT0gMCkKKyAgICB7CisgICAgICBEQkcoImZhaWwgaGVyZS0tPm51bV9mYXRzLi4uLi4uIik7
IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgZGF0YS0+cm9vdF9zZWN0b3IgPSBkYXRhLT5m
YXRfc2VjdG9yICsgYnBiLm51bV9mYXRzICogZGF0YS0+c2VjdG9yc19wZXJfZmF0OworICBkYXRh
LT5udW1fcm9vdF9zZWN0b3JzCisgICAgPSAoKCgoZ3J1Yl91aW50MzJfdCkgZ3J1Yl9sZV90b19j
cHUxNiAoYnBiLm51bV9yb290X2VudHJpZXMpCisJICogR1JVQl9GQVRfRElSX0VOVFJZX1NJWkUK
KwkgKyBncnViX2xlX3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikgLSAxKQorCT4+IChk
YXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzICsgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKSkKKyAgICAg
ICA8PCAoZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cykpOworICAvL2luIGZhdDMyIDogcm9vdCBp
cyBub3QgaW5jbHVkZWQgaW4gZmlsZSBjbHVzdGVyPz8KKyAgZGF0YS0+Y2x1c3Rlcl9zZWN0b3Ig
PSBkYXRhLT5yb290X3NlY3RvciArIGRhdGEtPm51bV9yb290X3NlY3RvcnM7CisgIGRhdGEtPm51
bV9jbHVzdGVycyA9ICgoKGRhdGEtPm51bV9zZWN0b3JzIC0gZGF0YS0+Y2x1c3Rlcl9zZWN0b3Ip
CisJCQkgPj4gKGRhdGEtPmNsdXN0ZXJfYml0cyArIGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMp
KQorCQkJKyAyKTsKKworICBpZiAoZGF0YS0+bnVtX2NsdXN0ZXJzIDw9IDIpCisgICAgeworICAg
ICAgREJHKCJmYWlsIGhlcmUtLT5udW1fY2x1c3RlcnMuLi4uLi4iKTsgCisgICAgICBnb3RvIGZh
aWw7CisgICAgfQorICBpZiAoISBicGIuc2VjdG9yc19wZXJfZmF0XzE2KQorICAgIHsKKyAgICAg
IC8qIEZBVDMyLiAgKi8KKyAgICAgIGdydWJfdWludDE2X3QgZmxhZ3MgPSBncnViX2xlX3RvX2Nw
dTE2IChicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5leHRlbmRlZF9mbGFncyk7CisKKyAgICAg
IGRhdGEtPnJvb3RfY2x1c3RlciA9IGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNp
ZmljLmZhdDMyLnJvb3RfY2x1c3Rlcik7CisgICAgICBkYXRhLT5mYXRfc2l6ZSA9IDMyOworICAg
ICAgZGF0YS0+Y2x1c3Rlcl9lb2ZfbWFyayA9IDB4MGZmZmZmZjg7CisKKyAgICAgIGlmIChmbGFn
cyAmIDB4ODApCisJeworCSAgLyogR2V0IGFuIGFjdGl2ZSBGQVQuICAqLworCSAgdW5zaWduZWQg
YWN0aXZlX2ZhdCA9IGZsYWdzICYgMHhmOworCisJICBpZiAoYWN0aXZlX2ZhdCA+IGJwYi5udW1f
ZmF0cykKKwkgICAgZ290byBmYWlsOworCisJICBkYXRhLT5mYXRfc2VjdG9yICs9IGFjdGl2ZV9m
YXQgKiBkYXRhLT5zZWN0b3JzX3Blcl9mYXQ7CisJfQorCisgICAgICBpZiAoYnBiLm51bV9yb290
X2VudHJpZXMgIT0gMCB8fCBicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5mc192ZXJzaW9uICE9
IDApCisJZ290byBmYWlsOworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIC8qIEZBVDEyIG9y
IEZBVDE2LiAgKi8KKyAgICAgIGRhdGEtPnJvb3RfY2x1c3RlciA9IH4wVTsKKworICAgICAgaWYg
KGRhdGEtPm51bV9jbHVzdGVycyA8PSA0MDg1ICsgMikKKwl7CisJICAvKiBGQVQxMi4gICovCisJ
ICBkYXRhLT5mYXRfc2l6ZSA9IDEyOworCSAgZGF0YS0+Y2x1c3Rlcl9lb2ZfbWFyayA9IDB4MGZm
ODsKKwl9CisgICAgICBlbHNlCisJeworCSAgLyogRkFUMTYuICAqLworCSAgZGF0YS0+ZmF0X3Np
emUgPSAxNjsKKwkgIGRhdGEtPmNsdXN0ZXJfZW9mX21hcmsgPSAweGZmZjg7CisJfQorICAgIH0K
KworICAvKiBNb3JlIHNhbml0eSBjaGVja3MuICAqLworICBpZiAoZGF0YS0+bnVtX3NlY3RvcnMg
PD0gZGF0YS0+ZmF0X3NlY3RvcikKKyAgICBnb3RvIGZhaWw7CisKKyAgCisgIERCRygiZGF0YS0+
ZmF0X3NlY3Rvcj0ldSwgZGF0YS0+c2VjdG9yc19wZXJfZmF0PSV1IiwKKwkgZGF0YS0+ZmF0X3Nl
Y3RvciwgZGF0YS0+c2VjdG9yc19wZXJfZmF0KTsKKyAgaWYgKGJkcnZfcHJlYWRfZnJvbV9zZWN0
b3Jfb2Zfdm9sdW1lKGJzLAorCQkgZGF0YS0+ZmF0X3NlY3RvciA8PCBHUlVCX0RJU0tfU0VDVE9S
X0JJVFMsCisJCSAmZmlyc3RfZmF0LAorCQkgc2l6ZW9mIChmaXJzdF9mYXQpKSAhPSBzaXplb2Yo
Zmlyc3RfZmF0KSkKKyAgICB7CisgICAgICBEQkcoImZhaWwgaGVyZS0tPmJkcnZfcHJlYWQuLi4u
Li4iKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorCisgIGZpcnN0X2ZhdCA9IGdydWJfbGVf
dG9fY3B1MzIgKGZpcnN0X2ZhdCk7CisKKyAgaWYgKGRhdGEtPmZhdF9zaXplID09IDMyKQorICAg
IHsKKyAgICAgIGZpcnN0X2ZhdCAmPSAweDBmZmZmZmZmOworICAgICAgbWFnaWMgPSAweDBmZmZm
ZjAwOworICAgIH0KKyAgZWxzZSBpZiAoZGF0YS0+ZmF0X3NpemUgPT0gMTYpCisgICAgeworICAg
ICAgZmlyc3RfZmF0ICY9IDB4MDAwMGZmZmY7CisgICAgICBtYWdpYyA9IDB4ZmYwMDsKKyAgICB9
CisgIGVsc2UKKyAgICB7CisgICAgICBmaXJzdF9mYXQgJj0gMHgwMDAwMGZmZjsKKyAgICAgIG1h
Z2ljID0gMHgwZjAwOworICAgIH0KKworICAvKiBTZXJpYWwgbnVtYmVyLiAgKi8KKyAgaWYgKGJw
Yi5zZWN0b3JzX3Blcl9mYXRfMTYpCisgICAgZGF0YS0+dXVpZCA9IGdydWJfbGVfdG9fY3B1MzIg
KGJwYi52ZXJzaW9uX3NwZWNpZmljLmZhdDEyX29yX2ZhdDE2Lm51bV9zZXJpYWwpOworICBlbHNl
CisgICAgZGF0YS0+dXVpZCA9IGdydWJfbGVfdG9fY3B1MzIgKGJwYi52ZXJzaW9uX3NwZWNpZmlj
LmZhdDMyLm51bV9zZXJpYWwpOworCisgIC8qIElnbm9yZSB0aGUgM3JkIGJpdCwgYmVjYXVzZSBz
b21lIEJJT1NlcyBhc3NpZ25zIDB4RjAgdG8gdGhlIG1lZGlhCisgICAgIGRlc2NyaXB0b3IsIGV2
ZW4gaWYgaXQgaXMgYSBzby1jYWxsZWQgc3VwZXJmbG9wcHkgKGUuZy4gYW4gVVNCIGtleSkuCisg
ICAgIFRoZSBjaGVjayBtYXkgYmUgdG9vIHN0cmljdCBmb3IgdGhpcyBraW5kIG9mIHN0dXBpZCBC
SU9TZXMsIGFzCisgICAgIHRoZXkgb3ZlcndyaXRlIHRoZSBtZWRpYSBkZXNjcmlwdG9yLiAgKi8K
KyAgaWYgKChmaXJzdF9mYXQgfCAweDgpICE9IChtYWdpYyB8IGJwYi5tZWRpYSB8IDB4OCkpCisg
ICAgeworICAgICAgREJHKCJmYWlsIGhlcmUtLT5maXJzdF9mYXQ9MHgleCwgbWFnaWM9MHgleCIs
CisJICAgICBmaXJzdF9mYXQsIG1hZ2ljKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorICAv
KiBTdGFydCBmcm9tIHRoZSByb290IGRpcmVjdG9yeS4gICovCisgIGRhdGEtPmZpbGVfY2x1c3Rl
ciA9IGRhdGEtPnJvb3RfY2x1c3RlcjsKKyAgZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtID0gfjBVOwor
ICBkYXRhLT5hdHRyID0gR1JVQl9GQVRfQVRUUl9ESVJFQ1RPUlk7CisgIERCRygiZGF0YS0+Zmls
ZV9jbHVzdGVyPSV1IFxuZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtPSV1IFxuZGF0YS0+YXR0cj0weCV4
XG4iCisJICJkYXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzPSV1XG4iCisJICJkYXRhLT5jbHVzdGVy
X2JpdHM9JXUiLAorCSBkYXRhLT5maWxlX2NsdXN0ZXIsIGRhdGEtPmN1cl9jbHVzdGVyX251bSwg
ZGF0YS0+YXR0ciwKKwkgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cywgZGF0YS0+Y2x1c3Rlcl9i
aXRzKTsKKyAgcmV0dXJuIGRhdGE7CisKKyBmYWlsOgorCisgIGZyZWUgKGRhdGEpOworICBwcmlu
dGYoIm5vdCBhIEZBVCBmaWxlc3lzdGVtIVxuIik7CisgIHJldHVybiAwOworfQorCisKKworLy+0
087EvP61xNa4tqjGq9LGb2Zmc2V019a92rSmtsHIoWxlbtfWvdq1xMr9vt21vWJ1ZgorLy/OxLz+
08lkYXRhLT5maWxlX2NsdXN0ZXLWuLaoCisvL2RhdGEtPmZpbGVfY2x1c3Rlcta4tqjBy87EvP61
xMbwyry02LrFCisvL8SsyM9kYXRhLT5maWxlX2NsdXN0ZXI9MqOstPqx7bj5xL/CvAorc3RhdGlj
IGdydWJfc3NpemVfdAorZ3J1Yl9mYXRfcmVhZF9kYXRhIChCbG9ja0RyaXZlclN0YXRlICpicywg
c3RydWN0IGdydWJfZmF0X2RhdGEgKmRhdGEsCisJCSAgICB2b2lkICgqcmVhZF9ob29rKSAoZ3J1
Yl9kaXNrX2FkZHJfdCBzZWN0b3IsCisJCQkJICAgICAgIHVuc2lnbmVkIG9mZnNldCwgdW5zaWdu
ZWQgbGVuZ3RoLAorCQkJCSAgICAgICB2b2lkICpjbG9zdXJlKSwKKwkJICAgIHZvaWQgKmNsb3N1
cmUsCisJCSAgICBncnViX29mZl90IG9mZnNldCwgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYp
Cit7CisgIGdydWJfc2l6ZV90IHNpemU7CisgIGdydWJfdWludDMyX3QgbG9naWNhbF9jbHVzdGVy
OworICB1bnNpZ25lZCBsb2dpY2FsX2NsdXN0ZXJfYml0czsKKyAgZ3J1Yl9zc2l6ZV90IHJldCA9
IDA7CisgIHVuc2lnbmVkIGxvbmcgc2VjdG9yOworICB1aW50NjRfdCBvZmZfYnl0ZXMgPSAwOyAK
KyAgLyogVGhpcyBpcyBhIHNwZWNpYWwgY2FzZS4gRkFUMTIgYW5kIEZBVDE2IGRvZXNuJ3QgaGF2
ZSB0aGUgcm9vdCBkaXJlY3RvcnkKKyAgICAgaW4gY2x1c3RlcnMuICAqLworICBpZiAoZGF0YS0+
ZmlsZV9jbHVzdGVyID09IH4wVSkKKyAgICB7CisgICAgICBzaXplID0gKGRhdGEtPm51bV9yb290
X3NlY3RvcnMgPDwgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKSAtIG9mZnNldDsKKyAgICAgIGlmIChz
aXplID4gbGVuKQorCXNpemUgPSBsZW47CisKKyAgICAgIG9mZl9ieXRlcyA9ICgodWludDY0X3Qp
ZGF0YS0+cm9vdF9zZWN0b3IgPDwgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKSArIG9mZnNldDsKKyAg
ICAgIGlmKGJkcnZfcHJlYWRfZnJvbV9zZWN0b3Jfb2Zfdm9sdW1lKGJzLCBvZmZfYnl0ZXMsIGJ1
Ziwgc2l6ZSApICE9IHNpemUpIAorCXJldHVybiAtMTsKKworICAgICAgcmV0dXJuIHNpemU7Cisg
ICAgfQorCisgIC8qIENhbGN1bGF0ZSB0aGUgbG9naWNhbCBjbHVzdGVyIG51bWJlciBhbmQgb2Zm
c2V0LiAgKi8KKyAgbG9naWNhbF9jbHVzdGVyX2JpdHMgPSAoZGF0YS0+Y2x1c3Rlcl9iaXRzCisJ
CQkgICsgZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cworCQkJICArIEdSVUJfRElTS19TRUNUT1Jf
QklUUyk7CisgIGxvZ2ljYWxfY2x1c3RlciA9IG9mZnNldCA+PiBsb2dpY2FsX2NsdXN0ZXJfYml0
czsgICAgLy93aGljaCBjbHVzdGVyIHRvIHJlYWQgCisgIG9mZnNldCAmPSAoMSA8PCBsb2dpY2Fs
X2NsdXN0ZXJfYml0cykgLSAxOyAgICAgICAgICAgLy9tb2QKKworICBpZiAobG9naWNhbF9jbHVz
dGVyIDwgZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtKSAgIC8vCisgICAgeworICAgICAgZGF0YS0+Y3Vy
X2NsdXN0ZXJfbnVtID0gMDsKKyAgICAgIGRhdGEtPmN1cl9jbHVzdGVyID0gZGF0YS0+ZmlsZV9j
bHVzdGVyOyAvLyC12jK49mZhdLHtz+6/qsq8vMfCvMS/wry6zc7EvP4KKyAgICB9CisKKyAgd2hp
bGUgKGxlbikKKyAgICB7CisgICAgICB3aGlsZSAobG9naWNhbF9jbHVzdGVyID4gZGF0YS0+Y3Vy
X2NsdXN0ZXJfbnVtKQorCXsKKwkgIC8qIEZpbmQgbmV4dCBjbHVzdGVyLiAgKi8KKwkgIGdydWJf
dWludDMyX3QgbmV4dF9jbHVzdGVyOworCSAgdW5zaWduZWQgbG9uZyBmYXRfb2Zmc2V0OworCisJ
ICBzd2l0Y2ggKGRhdGEtPmZhdF9zaXplKQorCSAgICB7CisJICAgIGNhc2UgMzI6CisJICAgICAg
ZmF0X29mZnNldCA9IGRhdGEtPmN1cl9jbHVzdGVyIDw8IDI7CisJICAgICAgYnJlYWs7CisJICAg
IGNhc2UgMTY6CisJICAgICAgZmF0X29mZnNldCA9IGRhdGEtPmN1cl9jbHVzdGVyIDw8IDE7CisJ
ICAgICAgYnJlYWs7CisJICAgIGRlZmF1bHQ6CisJICAgICAgLyogY2FzZSAxMjogKi8KKwkgICAg
ICBmYXRfb2Zmc2V0ID0gZGF0YS0+Y3VyX2NsdXN0ZXIgKyAoZGF0YS0+Y3VyX2NsdXN0ZXIgPj4g
MSk7CisJICAgICAgYnJlYWs7CisJICAgIH0KKworCSAgLyogUmVhZCB0aGUgRkFULiAgKi8KKwkg
IGludCBsZW4gPSAoZGF0YS0+ZmF0X3NpemUgKyA3KSA+PiAzOworCSAgdWludDY0X3Qgb2ZmX2J5
dGVzID0gICgodWludDY0X3QpZGF0YS0+ZmF0X3NlY3RvciA8PCBHUlVCX0RJU0tfU0VDVE9SX0JJ
VFMpICsgZmF0X29mZnNldDsgCisJICBpZiAoYmRydl9wcmVhZF9mcm9tX3NlY3Rvcl9vZl92b2x1
bWUgKGJzLCBvZmZfYnl0ZXMsIAorCQkJICAoY2hhciAqKSAmbmV4dF9jbHVzdGVyLCAKKwkJCSAg
bGVuKSAhPSBsZW4pICAgLy+002ZhdLHttsHIobTYusUKKwkgICAgcmV0dXJuIC0xOworCisJICBu
ZXh0X2NsdXN0ZXIgPSBncnViX2xlX3RvX2NwdTMyIChuZXh0X2NsdXN0ZXIpOworCSAgc3dpdGNo
IChkYXRhLT5mYXRfc2l6ZSkKKwkgICAgeworCSAgICBjYXNlIDE2OgorCSAgICAgIG5leHRfY2x1
c3RlciAmPSAweEZGRkY7CisJICAgICAgYnJlYWs7CisJICAgIGNhc2UgMTI6CisJICAgICAgaWYg
KGRhdGEtPmN1cl9jbHVzdGVyICYgMSkKKwkJbmV4dF9jbHVzdGVyID4+PSA0OworCisJICAgICAg
bmV4dF9jbHVzdGVyICY9IDB4MEZGRjsKKwkgICAgICBicmVhazsKKwkgICAgfQorCisJICBEQkcg
KCJmYXRfc2l6ZT0lZCwgbmV4dF9jbHVzdGVyPSV1IiwKKwkJCWRhdGEtPmZhdF9zaXplLCBuZXh0
X2NsdXN0ZXIpOworCisJICAvKiBDaGVjayB0aGUgZW5kLiAgKi8KKwkgIGlmIChuZXh0X2NsdXN0
ZXIgPj0gZGF0YS0+Y2x1c3Rlcl9lb2ZfbWFyaykKKwkgICAgcmV0dXJuIHJldDsKKworCSAgaWYg
KG5leHRfY2x1c3RlciA8IDIgfHwgbmV4dF9jbHVzdGVyID49IGRhdGEtPm51bV9jbHVzdGVycykK
KwkgICAgeworCSAgICAgIERCRygiaW52YWxpZCBjbHVzdGVyICV1Li4uLi4uLi4uLi4uLi4uLiIs
CisJCQkgIG5leHRfY2x1c3Rlcik7CisJICAgICAgcmV0dXJuIC0xOworCSAgICB9CisKKwkgIGRh
dGEtPmN1cl9jbHVzdGVyID0gbmV4dF9jbHVzdGVyOworCSAgZGF0YS0+Y3VyX2NsdXN0ZXJfbnVt
Kys7CisJfQorCisgICAgICAvKiBSZWFkIHRoZSBkYXRhIGhlcmUuICAqLworICAgICAgLy/C37yt
tNjL+bbU06a1xL74ttTJyMf4CisgICAgICBzZWN0b3IgPSAoZGF0YS0+Y2x1c3Rlcl9zZWN0b3IK
KwkJKyAoKGRhdGEtPmN1cl9jbHVzdGVyIC0gMikKKwkJICAgPDwgKGRhdGEtPmNsdXN0ZXJfYml0
cyArIGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpKSk7IAorICAgICAgLy+++LbUycjH+NbQyKW1
9Mar0sa687XE19a92sr9CisgICAgICBzaXplID0gKDEgPDwgbG9naWNhbF9jbHVzdGVyX2JpdHMp
IC0gb2Zmc2V0OworICAgICAgaWYgKHNpemUgPiBsZW4pCisJc2l6ZSA9IGxlbjsKKworICAgICAg
Ly9kaXNrLT5yZWFkX2hvb2sgPSByZWFkX2hvb2s7CisgICAgICAvL2Rpc2stPmNsb3N1cmUgPSBj
bG9zdXJlOworICAgICAgaW50NjRfdCBvZmZfYnl0ZXMgPSAoKHVpbnQ2NF90KXNlY3RvciA8PCBH
UlVCX0RJU0tfU0VDVE9SX0JJVFMpICsgb2Zmc2V0OworICAgICAgLy9kaXNrLT5yZWFkX2hvb2sg
PSAwOworICAgICAgaWYgKGJkcnZfcHJlYWRfZnJvbV9zZWN0b3Jfb2Zfdm9sdW1lIChicywgb2Zm
X2J5dGVzLCBidWYsIHNpemUpICE9IHNpemUpCisJcmV0dXJuIC0xOworCisgICAgICBsZW4gLT0g
c2l6ZTsKKyAgICAgIGJ1ZiArPSBzaXplOworICAgICAgcmV0ICs9IHNpemU7CisgICAgICBsb2dp
Y2FsX2NsdXN0ZXIrKzsKKyAgICAgIG9mZnNldCA9IDA7ICAvL9LUuvO2wbXEtrzKx83q1fvJyMf4
CisgICAgfQorCisgIHJldHVybiByZXQ7Cit9CisKKy8vsenA+tPJZGF0YS0+ZmlsZV9jbHVzdGVy
1ri2qLXExL/CvAorc3RhdGljIGludAorZ3J1Yl9mYXRfaXRlcmF0ZV9kaXIgKEJsb2NrRHJpdmVy
U3RhdGUgKmJzLCBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSwKKwkJICAgICAgaW50ICgqaG9v
aykgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJCSAgIHN0cnVjdCBncnViX2ZhdF9kaXJfZW50
cnkgKmRpciwKKwkJCQkgICB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgdm9pZCAqY2xvc3VyZSkK
K3sKKyAgc3RydWN0IGdydWJfZmF0X2Rpcl9lbnRyeSBkaXI7CisgIGNoYXIgKmZpbGVuYW1lLCAq
ZmlsZXAgPSAwOworICBncnViX3VpbnQxNl90ICp1bmlidWY7CisgIGludCBzbG90ID0gLTEsIHNs
b3RzID0gLTE7CisgIGludCBjaGVja3N1bSA9IC0xOworICBncnViX3NzaXplX3Qgb2Zmc2V0ID0g
LXNpemVvZihkaXIpOworCisgIGlmICghIChkYXRhLT5hdHRyICYgR1JVQl9GQVRfQVRUUl9ESVJF
Q1RPUlkpKQorICAgIHJldHVybiBwcmludGYoIm5vdCBhIGRpcmVjdG9yeS4uLi4uLlxuIik7CisK
KyAgLyogQWxsb2NhdGUgc3BhY2UgZW5vdWdoIHRvIGhvbGQgYSBsb25nIG5hbWUuICAqLworICBm
aWxlbmFtZSA9IChjaGFyKiltYWxsb2MgKDB4NDAgKiAxMyAqIDQgKyAxKTsKKyAgdW5pYnVmID0g
KGdydWJfdWludDE2X3QgKikgbWFsbG9jICgweDQwICogMTMgKiAyKTsKKyAgY2hhciAqZ2JuYW1l
ID0gKGNoYXIqKW1hbGxvYygweDQwICogMTMgKiAyKTsKKyAgaWYgKCEgZmlsZW5hbWUgfHwgISB1
bmlidWYgfHwgIWdibmFtZSkKKyAgICB7CisgICAgICBmcmVlKGdibmFtZSk7CisgICAgICBmcmVl
IChmaWxlbmFtZSk7CisgICAgICBmcmVlICh1bmlidWYpOworICAgICAgcGVycm9yKCJpdGVyYXRl
OiBtYWxsb2MgZmFpbGVkIS4uLlxuIik7CisgICAgICByZXR1cm4gLTE7CisgICAgfQorCisgIAor
ICBpbnQgY291bnQgPSAwOworICB3aGlsZSAoMSkKKyAgICB7CisgICAgICB1bnNpZ25lZCBpOwor
CisgICAgICAvKiBBZGp1c3QgdGhlIG9mZnNldC4gICovCisgICAgICBvZmZzZXQgKz0gc2l6ZW9m
IChkaXIpOworICAgICAgREJHKCJcblslZF1vZmZzZXQ9JXUsIgorCSAgICAgImRhdGEtPmN1cl9j
bHVzdGVyX251bT0ldSxkYXRhLT5jdXJfY2x1c3Rlcj0ldSIsIAorCSAgICAgY291bnQrMSwgb2Zm
c2V0LCAKKwkgICAgIGRhdGEtPmN1cl9jbHVzdGVyX251bSwgZGF0YS0+Y3VyX2NsdXN0ZXIpOwor
ICAgICAgLyogUmVhZCBhIGRpcmVjdG9yeSBlbnRyeS4gICovCisgICAgICAvLzB4MLHtyr6/1cS/
wrwKKyAgICAgIGlmICgoZ3J1Yl9mYXRfcmVhZF9kYXRhIChicywgZGF0YSwgMCwgMCwKKwkJCSAg
ICAgICBvZmZzZXQsIHNpemVvZiAoZGlyKSwgKGNoYXIgKikgJmRpcikKKwkgICAhPSBzaXplb2Yg
KGRpcikgfHwgZGlyLm5hbWVbMF0gPT0gMCkpCisJeworCSAgREJHKCJicmVhay4uLmRpci5uYW1l
WzBdPT0lZCIsIGRpci5uYW1lWzBdKTsKKwkgIGJyZWFrOworCX0KKyAgICAgIC8qIEhhbmRsZSBs
b25nIG5hbWUgZW50cmllcy4gICovCisgICAgICBpZiAoZGlyLmF0dHIgPT0gR1JVQl9GQVRfQVRU
Ul9MT05HX05BTUUpCisJeworCSAgREJHKCJsb25nIG5hbWUuLi4iKTsKKwkgIHN0cnVjdCBncnVi
X2ZhdF9sb25nX25hbWVfZW50cnkgKmxvbmdfbmFtZQorCSAgICA9IChzdHJ1Y3QgZ3J1Yl9mYXRf
bG9uZ19uYW1lX2VudHJ5ICopICZkaXI7CisJICBncnViX3VpbnQ4X3QgaWQgPSBsb25nX25hbWUt
PmlkOworCisJICBpZiAoaWQgJiAweDQwKSAgLy90aGUgbGFzdCBpdGVtCisJICAgIHsKKwkgICAg
ICBpZCAmPSAweDNmOyAgIC8vaW5kZXggb3Igb3JkaW5hbCBudW1iZXIgIDF+MzEKKwkgICAgICBz
bG90cyA9IHNsb3QgPSBpZDsKKwkgICAgICBjaGVja3N1bSA9IGxvbmdfbmFtZS0+Y2hlY2tzdW07
CisJICAgICAgREJHKCJ0aGUgbGFzdCBvcmRpbmFsIG51bT0lZCEhISIsIGlkKTsKKwkgICAgfQor
CisJICBpZiAoaWQgIT0gc2xvdCB8fCBzbG90ID09IDAgfHwgY2hlY2tzdW0gIT0gbG9uZ19uYW1l
LT5jaGVja3N1bSkKKwkgICAgeworCSAgICAgIERCRygibm90IHZhbGlkIG9yZGluYWwgbnVtYmVy
ICxpZ25vcmUuLi5jb250aW51ZSIpOworCSAgICAgIGNoZWNrc3VtID0gLTE7CisJICAgICAgY29u
dGludWU7CisJICAgIH0KKworCSAgc2xvdC0tOworCSAgbWVtY3B5ICh1bmlidWYgKyBzbG90ICog
MTMsIGxvbmdfbmFtZS0+bmFtZTEsIDUgKiAyKTsKKwkgIG1lbWNweSAodW5pYnVmICsgc2xvdCAq
IDEzICsgNSwgbG9uZ19uYW1lLT5uYW1lMiwgNiAqIDIpOworCSAgbWVtY3B5ICh1bmlidWYgKyBz
bG90ICogMTMgKyAxMSwgbG9uZ19uYW1lLT5uYW1lMywgMiAqIDIpOworCSAgREJHKCJtZW1jcHku
Li5jb250aW51ZSIpOworCSAgY29udGludWU7CisJfQorCisgICAgICAKKyAgICAgIC8qIENoZWNr
IGlmIHRoaXMgZW50cnkgaXMgdmFsaWQuICAqLworICAgICAgLy9veGU1se3KvtLRvq2xu8m+s/0K
KyAgICAgIGlmIChkaXIubmFtZVswXSA9PSAweGU1IHx8IChkaXIuYXR0ciAmIH5HUlVCX0ZBVF9B
VFRSX1ZBTElEKSkKKwl7CisJICBEQkcoImRpci5uYW1lWzBdPTB4JXgsIGRpci5hdHRyPTB4JXgg
bm90IHZhbGlkLi4uY29udGludWUiLCAKKwkJIGRpci5uYW1lWzBdLCBkaXIuYXR0cik7CisJICBj
b250aW51ZTsKKwl9CisKKyAgICAgIERCRygiY2hlY2tzdW09JWQsIHNsb3Q9JWQiLCBjaGVja3N1
bSwgc2xvdCk7CisgICAgICAvKiBUaGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgSmFwYW5lc2UuICAq
LworICAgICAgaWYgKGRpci5uYW1lWzBdID09IDB4MDUpCisJZGlyLm5hbWVbMF0gPSAweGU1Owor
CisgICAgICBpZiAoY2hlY2tzdW0gIT0gLTEgJiYgc2xvdCA9PSAwKQorCXsKKwkgIERCRygiY2hl
Y2tzdW1pbmciKTsKKwkgIGdydWJfdWludDhfdCBzdW07CisKKwkgIGZvciAoc3VtID0gMCwgaSA9
IDA7IGkgPCBzaXplb2YgKGRpci5uYW1lKTsgaSsrKQorCSAgICBzdW0gPSAoKHN1bSA+PiAxKSB8
IChzdW0gPDwgNykpICsgZGlyLm5hbWVbaV07CisKKwkgIGlmIChzdW0gPT0gY2hlY2tzdW0pCisJ
ICAgIHsvL7Okw/ux7c/uuvPD5r30vdO2zMP7se3P7qOs0enWpLPJuabU8takw/fV5tX9ysezpMP7
19YKKwkgICAgICBpbnQgdTsKKworCSAgICAgIGZvciAodSA9IDA7IHUgPCBzbG90cyAqIDEzOyB1
KyspCisJCXVuaWJ1Zlt1XSA9IGdydWJfbGVfdG9fY3B1MTYgKHVuaWJ1Zlt1XSk7CisKKwkgICAg
ICAqZ3J1Yl91dGYxNl90b191dGY4ICgoZ3J1Yl91aW50OF90ICopIGZpbGVuYW1lLCB1bmlidWYs
CisJCQkJICAgc2xvdHMgKiAxMykgPSAnXDAnOworCisJICAgICAgCisJICAgICAgY2hlY2tzdW0g
PSAtMTsKKwkgICAgICBmb3IgKGkgPSAwOyBpIDwgc2l6ZW9mIChkaXIubmFtZSk7IGkrKykKKwkJ
REJHKCIweCV4ICAiLCBkaXIubmFtZVtpXSk7CisJICAgICAgCisJICAgICAgdTJnKGZpbGVuYW1l
LCBzdHJsZW4oZmlsZW5hbWUpLCBnYm5hbWUsIDB4NDAgKiAxMyAqIDIpOworCSAgICAgIERCRygi
XG5kaXIubmFtZT0lcywgZmlsZW5hbWU9JXMsIGRpci5hdHRyPTB4JXgsIgorCQkgICAgICJzdW09
PWNoZWNrc3VtLi4uY29udGludWUiLAorCQkgICAgIGRpci5uYW1lLCBnYm5hbWUsIGRpci5hdHRy
KTsKKwkgICAgICAKKwkgICAgICBjb3VudCsrOworCSAgICAgIAorCSAgICAgIGlmIChob29rICYm
IGhvb2sgKGdibmFtZSwgJmRpciwgY2xvc3VyZSkpCisJICAgICAgICBicmVhazsKKwkgICAgICAK
KwkgICAgICBjb250aW51ZTsKKwkgICAgfQorCisJICBjaGVja3N1bSA9IC0xOworCX0KKworICAg
ICAgLy+688PmtcS0psDt1eu21LfH1ebKtbOkw/u6zdXmyrW2zMP7CisgICAgICAvKiBDb252ZXJ0
IHRoZSA4LjMgZmlsZSBuYW1lLiAgKi8KKyAgICAgIC8vyKW19LbMw/u1xL/VuPGjrMiruMTOqtCh
0LQKKyAgICAgIGZpbGVwID0gZmlsZW5hbWU7CisgICAgICBpZiAoZGlyLmF0dHIgJiBHUlVCX0ZB
VF9BVFRSX1ZPTFVNRV9JRCkKKwl7CisJICBEQkcoIlZPTFVNRSIpOworCSAgZm9yIChpID0gMDsg
aSA8IHNpemVvZiAoZGlyLm5hbWUpICYmIGRpci5uYW1lW2ldCisJCSAmJiAhIGdydWJfaXNzcGFj
ZSAoZGlyLm5hbWVbaV0pOyBpKyspCisJICAgICpmaWxlcCsrID0gZGlyLm5hbWVbaV07CisJfQor
ICAgICAgZWxzZQorCXsKKwkgIGZvciAoaSA9IDA7IGkgPCA4ICYmIGRpci5uYW1lW2ldICYmICEg
Z3J1Yl9pc3NwYWNlIChkaXIubmFtZVtpXSk7IGkrKykKKwkgICAgKmZpbGVwKysgPSBncnViX3Rv
bG93ZXIgKGRpci5uYW1lW2ldKTsKKworCSAgKmZpbGVwID0gJy4nOworCisJICBmb3IgKGkgPSA4
OyBpIDwgMTEgJiYgZGlyLm5hbWVbaV0gJiYgISBncnViX2lzc3BhY2UgKGRpci5uYW1lW2ldKTsg
aSsrKQorCSAgICAqKytmaWxlcCA9IGdydWJfdG9sb3dlciAoZGlyLm5hbWVbaV0pOworCisJICBp
ZiAoKmZpbGVwICE9ICcuJykKKwkgICAgZmlsZXArKzsKKwl9CisgICAgICAqZmlsZXAgPSAnXDAn
OworICAgICAgCisgICAgICAvL2ZvciAoaSA9IDA7IGkgPCBzaXplb2YgKGRpci5uYW1lKTsgaSsr
KQorICAgICAgLy8JREJHKCIweCV4ICAiLCBkaXIubmFtZVtpXSk7CisgICAgICBEQkcoIlxuZGly
Lm5hbWU9JXMsIGZpbGVuYW1lPaG+JXOhvywgZGlyLmF0dHI9MHgleCwiCisJICAgICAiLi4ubmV4
dCB3aGlsZSIsCisJICAgICBkaXIubmFtZSwgZmlsZW5hbWUsIGRpci5hdHRyKTsKKyAgICAgIGNv
dW50Kys7CisgICAgICAvKmlmKHN0cmNtcChmaWxlbmFtZSwgIi4iKSAmJiBzdHJjbXAoZmlsZW5h
bWUsICIuLiIpKQorCXsKKwkgIERCRygiez09PT09PT09PT09PT09PiIpOworCSAgc3RydWN0IGdy
dWJfZmF0X2RhdGEgKmRhdGEyID0gTlVMTDsKKwkgIGRhdGEyID0gKHN0cnVjdCBncnViX2ZhdF9k
YXRhKiltYWxsb2Moc2l6ZW9mKCpkYXRhKSk7CisJICBtZW1jcHkoZGF0YTIsIGRhdGEsIHNpemVv
ZigqZGF0YSkpOworCSAgZGF0YTItPmF0dHIgPSBkaXIuYXR0cjsKKwkgIGRhdGEyLT5maWxlX3Np
emUgPSBncnViX2xlX3RvX2NwdTMyIChkaXIuZmlsZV9zaXplKTsKKwkgIGRhdGEyLT5maWxlX2Ns
dXN0ZXIgPSAoKGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2hpZ2gpIDw8IDE2
KQorCQkJCSB8IGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2xvdykpOworCSAg
ZGF0YTItPmN1cl9jbHVzdGVyX251bSA9IH4wVTsKKwkgIChncnViX2ZhdF9pdGVyYXRlX2Rpcihi
cywgZGF0YTIsIE5VTEwsIE5VTEwpIDwgMCkgPyBEQkcoImVycm9yICEhISEhISIpIDogMDsKKwkg
IGZyZWUoZGF0YTIpOworCSAgREJHKCI8PT09PT09PT09PT09PT09PT09PX0iKTsKKwl9CisgICAg
ICAqLworICAgICAgaWYgKGhvb2sgJiYgaG9vayAoZmlsZW5hbWUsICZkaXIsIGNsb3N1cmUpKQor
ICAgICAgICBicmVhazsKKyAgICB9CisKKyAgZnJlZShnYm5hbWUpOworICBmcmVlIChmaWxlbmFt
ZSk7CisgIGZyZWUgKHVuaWJ1Zik7CisKKyAgcmV0dXJuIDA7Cit9CisKKworCisvL7SruPhncnVi
X2ZhdF9maW5kX2hvb2u1xLLOyv1jbG9zdXJlCitzdHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xv
c3VyZQoreworICBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YTsKKyAgaW50ICgqaG9vaykgKGNv
bnN0IGNoYXIgKmZpbGVuYW1lLAorCSAgICAgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2lu
Zm8gKmluZm8sCisJICAgICAgIHZvaWQgKmNsb3N1cmUpOworICB2b2lkICpjbG9zdXJlOworICBj
aGFyICpkaXJuYW1lOworICBpbnQgY2FsbF9ob29rOworICBpbnQgZm91bmQ7Cit9OworCisKK3N0
YXRpYyBpbnQKK2dydWJfZmF0X2ZpbmRfZGlyX2hvb2sgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLCBz
dHJ1Y3QgZ3J1Yl9mYXRfZGlyX2VudHJ5ICpkaXIsCisJCQl2b2lkICpjbG9zdXJlKQoreworICBz
dHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZSAqYyA9IGNsb3N1cmU7CisgIHN0cnVjdCBn
cnViX2Rpcmhvb2tfaW5mbyBpbmZvOworICBtZW1zZXQgKCZpbmZvLCAwLCBzaXplb2YgKGluZm8p
KTsKKworICBpbmZvLmRpciA9ICEhIChkaXItPmF0dHIgJiBHUlVCX0ZBVF9BVFRSX0RJUkVDVE9S
WSk7CisgIGluZm8uY2FzZV9pbnNlbnNpdGl2ZSA9IDE7CisgIGluZm8ubXRpbWVzZXQgPSAoZGly
LT5jX2RhdGUgfHwgZGlyLT5jX3RpbWUpOworICBpbmZvLm10aW1lID0gKCgoZ3J1Yl91aW50MzJf
dClkaXItPmNfZGF0ZSA8PCAxNikgfCAoZGlyLT5jX3RpbWUpKTsKKyAgaW5mby5maWxlc2l6ZSA9
IGRpci0+ZmlsZV9zaXplOworICAKKyAgREJHKCJ0YXJnZXQgZmlsZSChviVzob89PT09PT0iLCBj
LT5kaXJuYW1lKTsKKyAgaWYgKGRpci0+YXR0ciAmIEdSVUJfRkFUX0FUVFJfVk9MVU1FX0lEKQor
ICAgIHsKKyAgICAgIERCRygidm9sdW1lIGlkICwgaWdub3JlPT09PT09Iik7CisgICAgICByZXR1
cm4gMDsKKyAgICB9CisgIAorICBpZiAoKihjLT5kaXJuYW1lKSA9PSAnXDAnICYmIChjLT5jYWxs
X2hvb2spKQorICAgIHsgLy+08r+qtcTKx8S/wrwgIC94L3BhdGgxL3BhdGgyLworICAgICAgLy+3
tbvYMKOsyMNpdGVyYXRlyrHWu8rHtPLTodDFz6KjrLb4srvNy7P2d2hpbGUKKyAgICAgIGMtPmZv
dW5kID0gMTsKKyAgICAgIGlmKCEoYy0+ZGF0YS0+YXR0ciAmIEdSVUJfRkFUX0FUVFJfRElSRUNU
T1JZKSkKKwl7CisJICBwcmludGYoIml0J3Mgbm90IGEgZGlyZWN0b3J5IVxuIik7CisJfQorICAg
ICAgREJHKCJsaXN0IHRoZSBkaXIgob4lc6G/PT09PT09PT09PT0iLAorCSAgKChzdHJ1Y3QgbHNf
Y3RybCopYy0+Y2xvc3VyZSktPmRpcm5hbWUpOworICAgICAgcmV0dXJuIGMtPmhvb2sgKGZpbGVu
YW1lLCAmaW5mbywgYy0+Y2xvc3VyZSk7CisgICAgfQorCisgIAorICBpZiAoZ3J1Yl9zdHJjYXNl
Y21wIChjLT5kaXJuYW1lLCBmaWxlbmFtZSkgPT0gMCkKKyAgICB7IC8vtPK/qrXEysfOxLz+IC94
L3BhdGgxL2ZpbGUKKyAgICAgIERCRygiZm91bmQ9PT09PT0iKTsKKyAgICAgIHN0cnVjdCBncnVi
X2ZhdF9kYXRhICpkYXRhID0gYy0+ZGF0YTsKKworICAgICAgYy0+Zm91bmQgPSAxOworICAgICAg
ZGF0YS0+YXR0ciA9IGRpci0+YXR0cjsKKyAgICAgIGRhdGEtPmZpbGVfc2l6ZSA9IGdydWJfbGVf
dG9fY3B1MzIgKGRpci0+ZmlsZV9zaXplKTsKKyAgICAgIGRhdGEtPmZpbGVfY2x1c3RlciA9ICgo
Z3J1Yl9sZV90b19jcHUxNiAoZGlyLT5maXJzdF9jbHVzdGVyX2hpZ2gpIDw8IDE2KQorCQkJICAg
ICAgIHwgZ3J1Yl9sZV90b19jcHUxNiAoZGlyLT5maXJzdF9jbHVzdGVyX2xvdykpOworICAgICAg
ZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtID0gfjBVOworCisgICAgICBpZiAoYy0+Y2FsbF9ob29rKQor
CWMtPmhvb2sgKGZpbGVuYW1lLCAmaW5mbywgYy0+Y2xvc3VyZSk7CisKKyAgICAgIHJldHVybiAx
OworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIERCRygibm90IG1hdGNoPT09PT09Iik7Cisg
ICAgfQorICByZXR1cm4gMDsKK30KKworCisvKiBGaW5kIHRoZSB1bmRlcmx5aW5nIGRpcmVjdG9y
eSBvciBmaWxlIGluIFBBVEggYW5kIHJldHVybiB0aGUKKyAgIG5leHQgcGF0aC4gSWYgdGhlcmUg
aXMgbm8gbmV4dCBwYXRoIG9yIGFuIGVycm9yIG9jY3VycywgcmV0dXJuIE5VTEwuCisgICBJZiBI
T09LIGlzIHNwZWNpZmllZCwgY2FsbCBpdCB3aXRoIGVhY2ggZmlsZSBuYW1lLiAgKi8KKy8v1NrT
yWRhdGHWuLaotcTEv8K8z8Ky6dXS08lwYXRowre+tta4tqi1xM7EvP680LvyzsS8/gorLy/V0rW9
1q66872708kgZ3J1Yl9mYXRfZmluZF9kaXJfaG9va7qvyv20psDto6zG5NbQY2xvc3VyZbLOyv3K
x7nYvPwKK2NoYXIgKgorZ3J1Yl9mYXRfZmluZF9kaXIgKEJsb2NrRHJpdmVyU3RhdGUgKmJzLCBz
dHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSwKKwkJICAgY29uc3QgY2hhciAqcGF0aCwKKwkJICAg
aW50ICgqaG9vaykgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJCWNvbnN0IHN0cnVjdCBncnVi
X2Rpcmhvb2tfaW5mbyAqaW5mbywKKwkJCQl2b2lkICpjbG9zdXJlKSwKKwkJICAgdm9pZCAqY2xv
c3VyZSkKK3sKKyAgY2hhciAqZGlybmFtZSwgKmRpcnA7CisgIHN0cnVjdCBncnViX2ZhdF9maW5k
X2Rpcl9jbG9zdXJlIGM7CisgIERCRygidG8gc2VhcmNoIFslc10uLi5pbiBkYXRhLT5hdHRyPTB4
JXgiLCBwYXRoLCBkYXRhLT5hdHRyKTsKKyAgaWYgKCEgKGRhdGEtPmF0dHIgJiBHUlVCX0ZBVF9B
VFRSX0RJUkVDVE9SWSkpCisgICAgeworICAgICAgcHJpbnRmKCJub3QgYSBkaXJlY3RvcnkuLi4u
Li4uLi4uLlxuIik7CisgICAgICByZXR1cm4gMDsKKyAgICB9CisKKyAgLyogRXh0cmFjdCBhIGRp
cmVjdG9yeSBuYW1lLiAgKi8KKyAgd2hpbGUgKCpwYXRoID09ICcvJykKKyAgICBwYXRoKys7CisK
KyAgZGlycCA9IGdydWJfc3RyY2hyIChwYXRoLCAnLycpOworICBpZiAoZGlycCkKKyAgICB7Cisg
ICAgICB1bnNpZ25lZCBsZW4gPSBkaXJwIC0gcGF0aDsKKworICAgICAgZGlybmFtZSA9IChjaGFy
KiltYWxsb2MgKGxlbiArIDEpOworICAgICAgaWYgKCEgZGlybmFtZSkKKwlyZXR1cm4gMDsKKwor
ICAgICAgbWVtY3B5IChkaXJuYW1lLCBwYXRoLCBsZW4pOworICAgICAgZGlybmFtZVtsZW5dID0g
J1wwJzsKKyAgICB9CisgIGVsc2UKKyAgICB7CisgICAgLyogVGhpcyBpcyBhY3R1YWxseSBhIGZp
bGUuICAqLworICAgICAgZGlybmFtZSA9IGdydWJfc3RyZHVwIChwYXRoKTsKKyAgICB9CisgIERC
Rygic2VhcmNoaW5nIFwiJXNcIj09PT09PSIsIGRpcm5hbWUpOworICBjLmRhdGEgPSBkYXRhOwor
ICBjLmhvb2sgPSBob29rOworICBjLmNsb3N1cmUgPSBjbG9zdXJlOworICBjLmRpcm5hbWUgPWRp
cm5hbWU7CisgIGMuZm91bmQgPSAwOworICBjLmNhbGxfaG9vayA9ICghIGRpcnAgJiYgaG9vayk7
ICAvL9XrttTEv8K8tcRob29rCisgIAorICBpbnQgcmV0ID0gZ3J1Yl9mYXRfaXRlcmF0ZV9kaXIg
KGJzLCBkYXRhLCBncnViX2ZhdF9maW5kX2Rpcl9ob29rLCAmYyk7CisgIGlmKDAgPT0gcmV0ICYm
ICFjLmZvdW5kKQorICAgIHsKKyAgICAgIGdfZXJyID0gR1JVQl9FUlJfTk9UX0ZPVU5EOyAKKyAg
ICAgIHByaW50ZigiZmlsZSBub3QgZm91bmQuLlxuIik7CisgICAgfQorICBlbHNlIGlmKHJldCA8
IDApCisgICAgeworICAgICAgZ19lcnIgPSBHUlVCX0VSUl9VTktOT1dOOworICAgICAgcHJpbnRm
KCJpdGVyYXRlIGVycm9yIVxuIik7CisgICAgfQorICAgIAorICAKKyAgZnJlZSAoZGlybmFtZSk7
CisKKyAgcmV0dXJuIChjLmZvdW5kICYmIDA9PXJldCkgPyBkaXJwIDogMDsKK30KKworCisKKwor
CitncnViX2Vycl90CitncnViX2ZhdF9vcGVuIChncnViX2ZpbGVfdCBmaWxlLCBjb25zdCBjaGFy
ICpuYW1lKQoreworICBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSA9IDA7CisgIGNoYXIgKnAg
PSAoY2hhciAqKSBuYW1lOworCisgIAorICBkYXRhID0gZ3J1Yl9mYXRfbW91bnQgKGZpbGUtPmJz
LCBmaWxlLT5wYXJ0X29mZl9zZWN0b3IpOworICBpZiAoISBkYXRhKQorICAgIHsKKyAgICAgIHBy
aW50ZigiWyVzXTogbW91bnQgZXJyb3IhXG4iLCBuYW1lKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAg
ICB9CisKKyAgaW50IGkgPSAwOworICBkbworICAgIHsKKyAgICAgIHAgPSBncnViX2ZhdF9maW5k
X2RpciAoZmlsZS0+YnMsIGRhdGEsIHAsIDAsIDApOworICAgICAgREJHKCIlZCBjeWNsZSBwYXN0
ob5wYXRoPSVzob8uLi4uLi4uIiwgaSsxLCBwKTsKKyAgICAgIC8vZXJyb3IganVkZ2UuLi4uLi4K
KyAgICB9CisgIHdoaWxlIChwKTsKKworICBEQkcoImV4aXQgd2hpbGU9PT09PT0iKTsKKyAKKyAg
aWYgKChHUlVCX0VSUl9OT05FID09IGdfZXJyKSAKKyAgICAgICYmIChkYXRhLT5hdHRyICYgR1JV
Ql9GQVRfQVRUUl9ESVJFQ1RPUlkpKQorICAgIHsKKyAgICAgIHByaW50ZiAoIlslc106IG5vdCBh
IGZpbGUhXG4iLCBuYW1lKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIAorICBpZihHUlVC
X0VSUl9OT05FID09IGdfZXJyKQorICAgIHsKKyAgICAgIERCRygiZm91bmQ9PT09PT0iKTsKKyAg
ICB9CisgIGVsc2UKKyAgICB7CisgICAgICBwcmludGYoIm5vdCBmb3VuZCBvciBlcnJvciFcbiIp
OworICAgICAgZ290byBmYWlsOworICAgIH0KKworICBEQkcoIjExMTExMTExMTExMTExMTExMTEx
MTExIik7CisgIGZpbGUtPmRhdGEgPSBkYXRhOworICBmaWxlLT5zaXplID0gZGF0YS0+ZmlsZV9z
aXplOworICByZXR1cm4gMDsKKworIGZhaWw6CisgIGZyZWUoZGF0YSk7IAorICBmaWxlLT5kYXRh
ID0gTlVMTDsKKyAgREJHKCIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyIik7CisgIHJldHVybiAxOwor
fQorCisKKyNkZWZpbmUgICAgVElNRV9CSVQgICAgMHhGRkZGCisjZGVmaW5lICAgIFRJTUVfSE9V
Ul9CSVQgICAgMHhGODAwCisjZGVmaW5lICAgIFRJTUVfTUlOVVRFX0JJVCAgICAweDA3RTAKKyNk
ZWZpbmUgICAgVElNRV9TRUNPTkRfQklUICAgIDB4MDAxRgorI2RlZmluZSAgICBEQVRFX0JJVCAg
ICAweEZGRkYwMDAwCisjZGVmaW5lICAgIERBVEVfWUVBUl9CSVQgICAgMHhGRTAwCisjZGVmaW5l
ICAgIERBVEVfTU9OVEhfQklUICAgIDB4MDFFMAorI2RlZmluZSAgICBEQVRFX0RBWV9CSVQgICAg
MHgwMDFGCitzdGF0aWMgIGludCBmaW5kX3RoZW5fbHNfaG9vayhjb25zdCBjaGFyICpmaWxlbmFt
ZSwKKwkJCSAgIGNvbnN0IHN0cnVjdCBncnViX2Rpcmhvb2tfaW5mbyAqaW5mbywgdm9pZCAqY2xv
c3VyZSkKK3sKKyAgc3RydWN0IGxzX2N0cmwqIGN0cmwgPSAoc3RydWN0IGxzX2N0cmwqKWNsb3N1
cmU7CisgIERCRygiZGV0YWlsPSVkIiwgY3RybC0+ZGV0YWlsKTsKKyAgcHJpbnRmKCIlcyIsIGZp
bGVuYW1lKTsKKyAgaWYoIWN0cmwtPmRldGFpbCkKKyAgICB7CisgICAgICBwcmludGYoIlxuIik7
CisgICAgICByZXR1cm4gMDsKKyAgICB9CisgIGVsc2UKKyAgICB7CisgICAgICBwcmludGYoIlx0
Iik7CisgICAgfQorCisKKyAgcHJpbnRmKCIldWJ5dGVzXHQiLCAoaW5mby0+ZmlsZXNpemUpKTsK
KyAgcHJpbnRmKCIlc1x0IiwgKGluZm8tPmRpciA/ICJkaXIiIDogImZpbGUiKSk7CisgIGdydWJf
dWludDE2X3QgdGltZSA9ICgoaW5mby0+bXRpbWUpICYgVElNRV9CSVQpOworICBncnViX3VpbnQx
Nl90IGRhdGUgPSAoKGluZm8tPm10aW1lKSAmIERBVEVfQklUKSA+PiAxNjsKKyAgCisgIHByaW50
ZigiJTA0ZC8lMDJkLyUwMmRcdCIsCisJICgoZGF0ZSAmIERBVEVfWUVBUl9CSVQpID4+IDkpICsg
MTk4MCwKKwkgKGRhdGUgJiBEQVRFX01PTlRIX0JJVCkgPj4gNSwKKwkgKGRhdGUgJiBEQVRFX0RB
WV9CSVQpKTsKKyAgcHJpbnRmKCIlMDJkOiUwMmQ6JTAyZFxuIiwgCisJICh0aW1lICYgVElNRV9I
T1VSX0JJVCkgPj4gMTEsCisJICh0aW1lICYgVElNRV9NSU5VVEVfQklUKSA+PiA1LAorCSB0aW1l
ICYgVElNRV9TRUNPTkRfQklUKSAqIDI7ICAKKyAgCisgIHJldHVybiAwOyAgLy8g1+7W1be1u9i4
+Gl0ZXJhdGUKK30KKworCitncnViX2Vycl90CitncnViX2ZhdF9scyAoZ3J1Yl9maWxlX3QgZmls
ZSwgY29uc3QgY2hhciAqcGF0aCwKKwkgICAgICBpbnQgKCpob29rKSAoY29uc3QgY2hhciAqZmls
ZW5hbWUsCisJCQkgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2luZm8gKmluZm8sIHZvaWQg
KmNsb3N1cmUpLAorCSAgICAgIHZvaWQgKmNsb3N1cmUpCit7CisgIHN0cnVjdCBncnViX2ZhdF9k
YXRhICpkYXRhID0gMDsKKyAgZ3J1Yl9zaXplX3QgbGVuOworICBjaGFyICpkaXJuYW1lID0gMDsK
KyAgY2hhciAqcDsKKyAgCisgIGRhdGEgPSBncnViX2ZhdF9tb3VudCAoZmlsZS0+YnMsIGZpbGUt
PnBhcnRfb2ZmX3NlY3Rvcik7CisgIGlmICghIGRhdGEpCisgICAgZ290byBmYWlsOworCisgIGZp
bGUtPmRhdGEgPSBkYXRhOworICAvKiBNYWtlIHN1cmUgdGhhdCBESVJOQU1FIHRlcm1pbmF0ZXMg
d2l0aCAnLycuICAqLworICBsZW4gPSBzdHJsZW4ocGF0aCk7CisgIGRpcm5hbWUgPSAoY2hhciop
bWFsbG9jIChsZW4gKyAxICsgMSk7CisgIGlmICghIGRpcm5hbWUpCisgICAgZ290byBmYWlsOwor
ICBtZW1jcHkgKGRpcm5hbWUsIHBhdGgsIGxlbik7CisgIHAgPSBkaXJuYW1lICsgbGVuOworICBp
ZiAocGF0aFtsZW4gLSAxXSAhPSAnLycpCisgICAgKnArKyA9ICcvJzsKKyAgKnAgPSAnXDAnOwor
ICBwID0gZGlybmFtZTsKKworICBkbworICAgIHsKKyAgICAgIHAgPSBncnViX2ZhdF9maW5kX2Rp
ciAoZmlsZS0+YnMsIGRhdGEsIHAsIGZpbmRfdGhlbl9sc19ob29rLCBjbG9zdXJlKTsKKyAgICB9
CisgIHdoaWxlIChwICYmIGdfZXJyID09IEdSVUJfRVJSX05PTkUpOworCisgIAorCisgZmFpbDoK
KworICBmcmVlIChkaXJuYW1lKTsKKyAgZnJlZSAoZGF0YSk7ICBmaWxlLT5kYXRhID0gTlVMTDsK
KyAgCisgIHJldHVybiBnX2VycjsKK30KKworCitncnViX2Vycl90IGdydWJfZmF0X2Nsb3NlKGdy
dWJfZmlsZV90IGZpbGUpCit7CisgIGZyZWUoZmlsZS0+ZGF0YSk7CisgIHJldHVybiBnX2VycjsK
K30KKworCitncnViX3NzaXplX3QgZ3J1Yl9mYXRfcmVhZChncnViX2ZpbGVfdCBmaWxlLCBncnVi
X29mZl90IG9mZnNldCwKKwkJCSAgIGdydWJfc2l6ZV90IGxlbiwgY2hhciAqYnVmKQoreworICBy
ZXR1cm4gZ3J1Yl9mYXRfcmVhZF9kYXRhKGZpbGUtPmJzLCBmaWxlLT5kYXRhLCBOVUxMLCBOVUxM
LCBvZmZzZXQsIGxlbiwgYnVmKTsKK30KKworCisKKworCisKKwpkaWZmIC0tZXhjbHVkZT0uc3Zu
IC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZhdC5oIHhlbi00LjEu
Mi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZhdC5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2Vt
dS1xZW11LXhlbi9mYXQuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysg
eGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZmF0LmgJMjAxMi0xMi0yOCAxNjowMjo0
MS4wMDI5MzgwMTkgKzA4MDAKQEAgLTAsMCArMSwxNjAgQEAKKyNpZm5kZWYgRlNfRkFUX0gKKyNk
ZWZpbmUgRlNfRkFUX0gKKworCisjaW5jbHVkZSAiZnMtdHlwZXMuaCIKKyNpbmNsdWRlICJibG9j
a19pbnQuaCIKKyNpbmNsdWRlICJmcy1jb21tLmgiCisjaW5jbHVkZSAiZ3J1Yl9lcnIuaCIKKwor
CisjZGVmaW5lIEdSVUJfRElTS19TRUNUT1JfQklUUyAgICAgIDkKKyNkZWZpbmUgR1JVQl9GQVRf
RElSX0VOVFJZX1NJWkUJMzIKKworI2RlZmluZSBHUlVCX0ZBVF9BVFRSX1JFQURfT05MWQkweDAx
CisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfSElEREVOCTB4MDIKKyNkZWZpbmUgR1JVQl9GQVRfQVRU
Ul9TWVNURU0JMHgwNAorI2RlZmluZSBHUlVCX0ZBVF9BVFRSX1ZPTFVNRV9JRAkweDA4CisjZGVm
aW5lIEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZCTB4MTAKKyNkZWZpbmUgR1JVQl9GQVRfQVRUUl9B
UkNISVZFCTB4MjAKKworI2RlZmluZSBHUlVCX0ZBVF9NQVhGSUxFCTI1NgorCisjZGVmaW5lIEdS
VUJfRkFUX0FUVFJfTE9OR19OQU1FCShHUlVCX0ZBVF9BVFRSX1JFQURfT05MWSBcCisJCQkJIHwg
R1JVQl9GQVRfQVRUUl9ISURERU4gXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfU1lTVEVNIFwKKwkJ
CQkgfCBHUlVCX0ZBVF9BVFRSX1ZPTFVNRV9JRCkKKyNkZWZpbmUgR1JVQl9GQVRfQVRUUl9WQUxJ
RAkoR1JVQl9GQVRfQVRUUl9SRUFEX09OTFkgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfSElEREVO
IFwKKwkJCQkgfCBHUlVCX0ZBVF9BVFRSX1NZU1RFTSBcCisJCQkJIHwgR1JVQl9GQVRfQVRUUl9E
SVJFQ1RPUlkgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfQVJDSElWRSBcCisJCQkJIHwgR1JVQl9G
QVRfQVRUUl9WT0xVTUVfSUQpCisKK3N0cnVjdCBncnViX2ZhdF9icGIKK3sKKyAgZ3J1Yl91aW50
OF90IGptcF9ib290WzNdOworICBncnViX3VpbnQ4X3Qgb2VtX25hbWVbOF07CisgIGdydWJfdWlu
dDE2X3QgYnl0ZXNfcGVyX3NlY3RvcjsKKyAgZ3J1Yl91aW50OF90IHNlY3RvcnNfcGVyX2NsdXN0
ZXI7CisgIGdydWJfdWludDE2X3QgbnVtX3Jlc2VydmVkX3NlY3RvcnM7CisgIGdydWJfdWludDhf
dCBudW1fZmF0czsKKyAgZ3J1Yl91aW50MTZfdCBudW1fcm9vdF9lbnRyaWVzOworICBncnViX3Vp
bnQxNl90IG51bV90b3RhbF9zZWN0b3JzXzE2OworICBncnViX3VpbnQ4X3QgbWVkaWE7CisgIGdy
dWJfdWludDE2X3Qgc2VjdG9yc19wZXJfZmF0XzE2OworICBncnViX3VpbnQxNl90IHNlY3RvcnNf
cGVyX3RyYWNrOworICBncnViX3VpbnQxNl90IG51bV9oZWFkczsKKyAgZ3J1Yl91aW50MzJfdCBu
dW1faGlkZGVuX3NlY3RvcnM7CisgIGdydWJfdWludDMyX3QgbnVtX3RvdGFsX3NlY3RvcnNfMzI7
CisgIHVuaW9uCisgIHsKKyAgICBzdHJ1Y3QKKyAgICB7CisgICAgICBncnViX3VpbnQ4X3QgbnVt
X3BoX2RyaXZlOworICAgICAgZ3J1Yl91aW50OF90IHJlc2VydmVkOworICAgICAgZ3J1Yl91aW50
OF90IGJvb3Rfc2lnOworICAgICAgZ3J1Yl91aW50MzJfdCBudW1fc2VyaWFsOworICAgICAgZ3J1
Yl91aW50OF90IGxhYmVsWzExXTsKKyAgICAgIGdydWJfdWludDhfdCBmc3R5cGVbOF07CisgICAg
fSBfX2F0dHJpYnV0ZV9fICgocGFja2VkKSkgZmF0MTJfb3JfZmF0MTY7CisgICAgc3RydWN0Cisg
ICAgeworICAgICAgZ3J1Yl91aW50MzJfdCBzZWN0b3JzX3Blcl9mYXRfMzI7CisgICAgICBncnVi
X3VpbnQxNl90IGV4dGVuZGVkX2ZsYWdzOworICAgICAgZ3J1Yl91aW50MTZfdCBmc192ZXJzaW9u
OworICAgICAgZ3J1Yl91aW50MzJfdCByb290X2NsdXN0ZXI7CisgICAgICBncnViX3VpbnQxNl90
IGZzX2luZm87CisgICAgICBncnViX3VpbnQxNl90IGJhY2t1cF9ib290X3NlY3RvcjsKKyAgICAg
IGdydWJfdWludDhfdCByZXNlcnZlZFsxMl07CisgICAgICBncnViX3VpbnQ4X3QgbnVtX3BoX2Ry
aXZlOworICAgICAgZ3J1Yl91aW50OF90IHJlc2VydmVkMTsKKyAgICAgIGdydWJfdWludDhfdCBi
b290X3NpZzsKKyAgICAgIGdydWJfdWludDMyX3QgbnVtX3NlcmlhbDsKKyAgICAgIGdydWJfdWlu
dDhfdCBsYWJlbFsxMV07CisgICAgICBncnViX3VpbnQ4X3QgZnN0eXBlWzhdOworICAgIH0gX19h
dHRyaWJ1dGVfXyAoKHBhY2tlZCkpIGZhdDMyOworICB9IF9fYXR0cmlidXRlX18gKChwYWNrZWQp
KSB2ZXJzaW9uX3NwZWNpZmljOworfSBfX2F0dHJpYnV0ZV9fICgocGFja2VkKSk7CisKK3N0cnVj
dCBncnViX2ZhdF9kaXJfZW50cnkKK3sKKyAgZ3J1Yl91aW50OF90IG5hbWVbMTFdOworICBncnVi
X3VpbnQ4X3QgYXR0cjsKKyAgZ3J1Yl91aW50OF90IG50X3Jlc2VydmVkOworICBncnViX3VpbnQ4
X3QgY190aW1lX3RlbnRoOworICBncnViX3VpbnQxNl90IGNfdGltZTsKKyAgZ3J1Yl91aW50MTZf
dCBjX2RhdGU7CisgIGdydWJfdWludDE2X3QgYV9kYXRlOworICBncnViX3VpbnQxNl90IGZpcnN0
X2NsdXN0ZXJfaGlnaDsKKyAgZ3J1Yl91aW50MTZfdCB3X3RpbWU7CisgIGdydWJfdWludDE2X3Qg
d19kYXRlOworICBncnViX3VpbnQxNl90IGZpcnN0X2NsdXN0ZXJfbG93OworICBncnViX3VpbnQz
Ml90IGZpbGVfc2l6ZTsKK30gX19hdHRyaWJ1dGVfXyAoKHBhY2tlZCkpOworCitzdHJ1Y3QgZ3J1
Yl9mYXRfbG9uZ19uYW1lX2VudHJ5Cit7CisgIGdydWJfdWludDhfdCBpZDsKKyAgZ3J1Yl91aW50
MTZfdCBuYW1lMVs1XTsKKyAgZ3J1Yl91aW50OF90IGF0dHI7CisgIGdydWJfdWludDhfdCByZXNl
cnZlZDsKKyAgZ3J1Yl91aW50OF90IGNoZWNrc3VtOworICBncnViX3VpbnQxNl90IG5hbWUyWzZd
OworICBncnViX3VpbnQxNl90IGZpcnN0X2NsdXN0ZXI7CisgIGdydWJfdWludDE2X3QgbmFtZTNb
Ml07Cit9IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKworc3RydWN0IGdydWJfZmF0X2RhdGEK
K3sKKyAgaW50IGxvZ2ljYWxfc2VjdG9yX2JpdHM7CisgIGdydWJfdWludDMyX3QgbnVtX3NlY3Rv
cnM7CisKKyAgZ3J1Yl91aW50MzJfdCBmYXRfc2VjdG9yOworICBncnViX3VpbnQzMl90IHNlY3Rv
cnNfcGVyX2ZhdDsKKyAgaW50IGZhdF9zaXplOworCisgIGdydWJfdWludDMyX3Qgcm9vdF9jbHVz
dGVyOworICBncnViX3VpbnQzMl90IHJvb3Rfc2VjdG9yOworICBncnViX3VpbnQzMl90IG51bV9y
b290X3NlY3RvcnM7CisKKyAgaW50IGNsdXN0ZXJfYml0czsKKyAgZ3J1Yl91aW50MzJfdCBjbHVz
dGVyX2VvZl9tYXJrOworICBncnViX3VpbnQzMl90IGNsdXN0ZXJfc2VjdG9yOworICBncnViX3Vp
bnQzMl90IG51bV9jbHVzdGVyczsKKworICBncnViX3VpbnQ4X3QgYXR0cjsKKyAgZ3J1Yl9zc2l6
ZV90IGZpbGVfc2l6ZTsKKyAgZ3J1Yl91aW50MzJfdCBmaWxlX2NsdXN0ZXI7CisgIGdydWJfdWlu
dDMyX3QgY3VyX2NsdXN0ZXJfbnVtOworICBncnViX3VpbnQzMl90IGN1cl9jbHVzdGVyOworCisg
IGdydWJfdWludDMyX3QgdXVpZDsKK307CisKKworCisKKworCisKK3N0cnVjdCBncnViX2ZhdF9k
YXRhKiAKK2dydWJfZmF0X21vdW50IChCbG9ja0RyaXZlclN0YXRlICpicywgZ3J1Yl91aW50MzJf
dCBwYXJ0X29mZl9zZWN0b3IpOworCitncnViX2Vycl90CitncnViX2ZhdF9vcGVuIChncnViX2Zp
bGVfdCBmaWxlLCBjb25zdCBjaGFyICpuYW1lKTsKKworZ3J1Yl9lcnJfdAorZ3J1Yl9mYXRfbHMg
KGdydWJfZmlsZV90IGZpbGUsIGNvbnN0IGNoYXIgKnBhdGgsCisJICAgICAgaW50ICgqaG9vaykg
KGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJICAgY29uc3Qgc3RydWN0IGdydWJfZGlyaG9va19p
bmZvICppbmZvLAorCQkJICAgdm9pZCAqY2xvc3VyZSksCisJICAgICB2b2lkICpjbG9zdXJlKTsK
KworZ3J1Yl9lcnJfdCBncnViX2ZhdF9jbG9zZShncnViX2ZpbGVfdCBmaWxlKTsKKworZ3J1Yl9z
c2l6ZV90IGdydWJfZmF0X3JlYWQoZ3J1Yl9maWxlX3QgZmlsZSwgZ3J1Yl9vZmZfdCBvZmZzZXQs
CisJCSAgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYpOworCisKKyNlbmRpZgpkaWZmIC0tZXhj
bHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzLWNv
bW0uaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9mcy1jb21tLmgKLS0tIHhlbi00
LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzLWNvbW0uaAkxOTcwLTAxLTAxIDA3OjAwOjAw
LjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnMt
Y29tbS5oCTIwMTItMTItMjggMTY6MDI6NDEuMDAzODQ2ODk3ICswODAwCkBAIC0wLDAgKzEsNjAg
QEAKKyNpZm5kZWYgX0ZTX0NPTU1fSAorI2RlZmluZSBfRlNfQ09NTV9ICisKKyNpbmNsdWRlICJm
cy10eXBlcy5oIgorI2luY2x1ZGUgImJsb2NrX2ludC5oIgorI2luY2x1ZGUgImdydWJfZXJyLmgi
CisjaW5jbHVkZSAiZGVidWcuaCIKKwordHlwZWRlZiBzdHJ1Y3QgIGdydWJfZmlsZQoreworICB2
b2lkICpkYXRhOworICBCbG9ja0RyaXZlclN0YXRlICpiczsKKyAgdWludDMyX3QgcGFydF9vZmZf
c2VjdG9yOworICBncnViX3NpemVfdCBzaXplOworICBncnViX29mZl90IG9mZnNldDsKKyAgLyog
VGhpcyBpcyBjYWxsZWQgd2hlbiBhIHNlY3RvciBpcyByZWFkLiBVc2VkIG9ubHkgZm9yIGEgZGlz
ayBkZXZpY2UuICAqLworICB2b2lkICgqcmVhZF9ob29rKSAoZ3J1Yl9kaXNrX2FkZHJfdCBzZWN0
b3IsCisJCSAgICAgdW5zaWduZWQgb2Zmc2V0LCB1bnNpZ25lZCBsZW5ndGgsIHZvaWQgKmNsb3N1
cmUpOworICB2b2lkICpjbG9zdXJlOworfSpncnViX2ZpbGVfdDsKKworc3RydWN0IGdydWJfZGly
aG9va19pbmZvCit7CisgIHVuc2lnbmVkIGRpcjoxOworICB1bnNpZ25lZCBtdGltZXNldDoxOwor
ICB1bnNpZ25lZCBjYXNlX2luc2Vuc2l0aXZlOjE7CisgIGdydWJfdWludDMyX3QgbXRpbWU7ICAg
IC8vKGRhdGUgfCB0aW1lKQorICBncnViX3VpbnQzMl90IGZpbGVzaXplOworICBncnViX3VpbnQ2
NF90IGZpbGVzaXplX250ZnM7CisgIGdydWJfdWludDY0X3QgdGltZV9udGZzOworfTsKKworc3Ry
dWN0IGxzX2N0cmwKK3sKKyAgdW5zaWduZWQgZGV0YWlsOjE7CisgIGNoYXIqIGRpcm5hbWU7Cit9
OworCisKKworCit0eXBlZGVmIGdydWJfZXJyX3QKKygqZ3J1Yl9vcGVuKSAoZ3J1Yl9maWxlX3Qg
ZmlsZSwgY29uc3QgY2hhciAqbmFtZSk7CisKK3R5cGVkZWYgZ3J1Yl9lcnJfdAorKCpncnViX2xz
KSAoZ3J1Yl9maWxlX3QgZmlsZSwgY29uc3QgY2hhciAqcGF0aCwKKwkgICAgICBpbnQgKCpob29r
KSAoY29uc3QgY2hhciAqZmlsZW5hbWUsCisJCQkgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29r
X2luZm8gKmluZm8sCisJCQkgICB2b2lkICpjbG9zdXJlKSwKKwkgICAgIHZvaWQgKmNsb3N1cmUp
OworCit0eXBlZGVmIGdydWJfZXJyX3QgCisoKmdydWJfY2xvc2UpIChncnViX2ZpbGVfdCBmaWxl
KTsKKwordHlwZWRlZiBncnViX3NzaXplX3QgCisoKmdydWJfcmVhZCkoZ3J1Yl9maWxlX3QgZmls
ZSwgZ3J1Yl9vZmZfdCBvZmZzZXQsCisJCSAgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYpOwor
CisKKyNlbmRpZgpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xz
L2lvZW11LXFlbXUteGVuL2ZzaGVscC5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVu
L2ZzaGVscC5jCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9mc2hlbHAuYwkx
OTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZnNoZWxwLmMJMjAxMi0xMi0yOCAxNjowMjo0MS4wMDQ5MzI0NTcgKzA4
MDAKQEAgLTAsMCArMSwzNjIgQEAKKy8qIGZzaGVscC5jIC0tIEZpbGVzeXN0ZW0gaGVscGVyIGZ1
bmN0aW9ucyAqLworLyoKKyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290bG9hZGVyCisg
KiAgQ29weXJpZ2h0IChDKSAyMDA0LDIwMDUsMjAwNiwyMDA3LDIwMDggIEZyZWUgU29mdHdhcmUg
Rm91bmRhdGlvbiwgSW5jLgorICoKKyAqICBHUlVCIGlzIGZyZWUgc29mdHdhcmU6IHlvdSBjYW4g
cmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkKKyAqICBpdCB1bmRlciB0aGUgdGVybXMgb2Yg
dGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1Ymxpc2hlZCBieQorICogIHRoZSBG
cmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIGVpdGhlciB2ZXJzaW9uIDMgb2YgdGhlIExpY2Vuc2Us
IG9yCisgKiAgKGF0IHlvdXIgb3B0aW9uKSBhbnkgbGF0ZXIgdmVyc2lvbi4KKyAqCisgKiAgR1JV
QiBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAorICog
IGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJh
bnR5IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQ
VVJQT1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3Jl
IGRldGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxvbmcgd2l0aCBHUlVCLiAgSWYgbm90
LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+LgorICovCisKKyNpbmNsdWRlICJl
cnIuaCIKKyNpbmNsdWRlICJtaXNjLmgiCisjaW5jbHVkZSAiYmxvY2tfaW50LmgiCisjaW5jbHVk
ZSAiZnNoZWxwLmgiCisjaW5jbHVkZSAibnRmcy5oIgorI2luY2x1ZGUgImRlYnVnLmgiCisKK3N0
cnVjdCBncnViX2ZzaGVscF9maW5kX2ZpbGVfY2xvc3VyZQoreworICBncnViX2ZzaGVscF9ub2Rl
X3Qgcm9vdG5vZGU7CisgIGludCAoKml0ZXJhdGVfZGlyKSAoZ3J1Yl9mc2hlbHBfbm9kZV90IGRp
ciwKKwkJICAgICAgaW50ICgqaG9vaykKKwkJICAgICAgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAor
CQkgICAgICAgZW51bSBncnViX2ZzaGVscF9maWxldHlwZSBmaWxldHlwZSwKKwkJICAgICAgIGdy
dWJfZnNoZWxwX25vZGVfdCBub2RlLCB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgdm9pZCAqY2xv
c3VyZSk7CisgIHZvaWQgKmNsb3N1cmU7CisgIGNoYXIgKigqcmVhZF9zeW1saW5rKSAoZ3J1Yl9m
c2hlbHBfbm9kZV90IG5vZGUpOworICBpbnQgc3ltbGlua25lc3Q7CisgIGVudW0gZ3J1Yl9mc2hl
bHBfZmlsZXR5cGUgZm91bmR0eXBlOworICBncnViX2ZzaGVscF9ub2RlX3QgY3VycnJvb3Q7Cit9
OworCitzdGF0aWMgdm9pZAorZnJlZV9ub2RlIChncnViX2ZzaGVscF9ub2RlX3Qgbm9kZSwgc3Ry
dWN0IGdydWJfZnNoZWxwX2ZpbmRfZmlsZV9jbG9zdXJlICpjKQoreworICBpZiAobm9kZSAhPSBj
LT5yb290bm9kZSAmJiBub2RlICE9IGMtPmN1cnJyb290KQorICAgIGdydWJfZnJlZSAobm9kZSk7
Cit9CisKK3N0cnVjdCBmaW5kX2ZpbGVfY2xvc3VyZQoreworICBjaGFyICpuYW1lOworICBlbnVt
IGdydWJfZnNoZWxwX2ZpbGV0eXBlICp0eXBlOworICBncnViX2ZzaGVscF9ub2RlX3QgKm9sZG5v
ZGU7CisgIGdydWJfZnNoZWxwX25vZGVfdCAqY3Vycm5vZGU7Cit9OworCitzdGF0aWMgaW50Citp
dGVyYXRlIChjb25zdCBjaGFyICpmaWxlbmFtZSwKKwkgZW51bSBncnViX2ZzaGVscF9maWxldHlw
ZSBmaWxldHlwZSwKKwkgZ3J1Yl9mc2hlbHBfbm9kZV90IG5vZGUsCisJIHZvaWQgKmNsb3N1cmUp
Cit7CisgIHN0cnVjdCBmaW5kX2ZpbGVfY2xvc3VyZSAqYyA9IGNsb3N1cmU7CisgIERCRygibGlz
dF9maWxlIGhvb2tlZCBieSBmc2hlbHA6aXRlcmF0ZSgpLCBmaWxlbmFtZT0lcyIsIGZpbGVuYW1l
KTsKKyAgaWYgKGZpbGV0eXBlID09IEdSVUJfRlNIRUxQX1VOS05PV04gfHwKKyAgICAgIChncnVi
X3N0cmNtcCAoYy0+bmFtZSwgZmlsZW5hbWUpICYmCisgICAgICAgKCEgKGZpbGV0eXBlICYgR1JV
Ql9GU0hFTFBfQ0FTRV9JTlNFTlNJVElWRSkgfHwKKwlncnViX3N0cm5jYXNlY21wIChjLT5uYW1l
LCBmaWxlbmFtZSwgR1JVQl9MT05HX01BWCkpKSkKKyAgICB7CisgICAgICBEQkcoIm5vdCBtYXRj
aCEhIT4+Pj4+PiIpOworICAgICAgZ3J1Yl9mcmVlIChub2RlKTsKKyAgICAgIHJldHVybiAwOwor
ICAgIH0KKworICAvKiBUaGUgbm9kZSBpcyBmb3VuZCwgc3RvcCBpdGVyYXRpbmcgb3ZlciB0aGUg
bm9kZXMuICAqLworICAqKGMtPnR5cGUpID0gZmlsZXR5cGUgJiB+R1JVQl9GU0hFTFBfQ0FTRV9J
TlNFTlNJVElWRTsKKyAgKihjLT5vbGRub2RlKSA9ICooYy0+Y3Vycm5vZGUpOworICAqKGMtPmN1
cnJub2RlKSA9IG5vZGU7CisgIERCRygiZm91bmQhIT4+Pj4+PiIpOworICByZXR1cm4gMTsKK30K
Kworc3RhdGljIGdydWJfZXJyX3QKK2ZpbmRfZmlsZSAoY29uc3QgY2hhciAqY3VycnBhdGgsIGdy
dWJfZnNoZWxwX25vZGVfdCBjdXJycm9vdCwKKwkgICBncnViX2ZzaGVscF9ub2RlX3QgKmN1cnJm
b3VuZCwKKwkgICBzdHJ1Y3QgZ3J1Yl9mc2hlbHBfZmluZF9maWxlX2Nsb3N1cmUgKmMpCit7Cisg
IGNoYXIgZnBhdGhbZ3J1Yl9zdHJsZW4gKGN1cnJwYXRoKSArIDFdOworICBjaGFyICpuYW1lID0g
ZnBhdGg7CisgIGNoYXIgKm5leHQ7CisgIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgdHlwZSA9
IEdSVUJfRlNIRUxQX0RJUjsKKyAgZ3J1Yl9mc2hlbHBfbm9kZV90IGN1cnJub2RlID0gY3VycnJv
b3Q7CisgIGdydWJfZnNoZWxwX25vZGVfdCBvbGRub2RlID0gY3VycnJvb3Q7CisKKyAgYy0+Y3Vy
cnJvb3QgPSBjdXJycm9vdDsKKworICBncnViX3N0cm5jcHkgKGZwYXRoLCBjdXJycGF0aCwgZ3J1
Yl9zdHJsZW4gKGN1cnJwYXRoKSArIDEpOworCisgIC8qIFJlbW92ZSBhbGwgbGVhZGluZyBzbGFz
aGVzLiAgKi8KKyAgd2hpbGUgKCpuYW1lID09ICcvJykKKyAgICBuYW1lKys7CisKKyAgaWYgKCEg
Km5hbWUpCisgICAgeworICAgICAgKmN1cnJmb3VuZCA9IGN1cnJub2RlOworICAgICAgcmV0dXJu
IDA7CisgICAgfQorCisgIGZvciAoOzspCisgICAgeworICAgICAgaW50IGZvdW5kOworICAgICAg
c3RydWN0IGZpbmRfZmlsZV9jbG9zdXJlIGNjOworCisgICAgICAvKiBFeHRyYWN0IHRoZSBhY3R1
YWwgcGFydCBmcm9tIHRoZSBwYXRobmFtZS4gICovCisgICAgICBuZXh0ID0gZ3J1Yl9zdHJjaHIg
KG5hbWUsICcvJyk7CisgICAgICBpZiAobmV4dCkKKwl7CisJICAvKiBSZW1vdmUgYWxsIGxlYWRp
bmcgc2xhc2hlcy4gICovCisJICB3aGlsZSAoKm5leHQgPT0gJy8nKQorCSAgICAqKG5leHQrKykg
PSAnXDAnOworCX0KKworICAgICAgLyogQXQgdGhpcyBwb2ludCBpdCBpcyBleHBlY3RlZCB0aGF0
IHRoZSBjdXJyZW50IG5vZGUgaXMgYQorCSBkaXJlY3RvcnksIGNoZWNrIGlmIHRoaXMgaXMgdHJ1
ZS4gICovCisgICAgICBpZiAodHlwZSAhPSBHUlVCX0ZTSEVMUF9ESVIpCisJeworCSAgZnJlZV9u
b2RlIChjdXJybm9kZSwgYyk7CisJICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJfQkFEX0ZJ
TEVfVFlQRSwgIm5vdCBhIGRpcmVjdG9yeSIpOworCX0KKworICAgICAgREJHKCJmaW5kX2ZpbGVf
Y2xvc3VyZSBjYy5uYW1lPaG+JXOhvyIsIG5hbWUpOworICAgICAgY2MubmFtZSA9IG5hbWU7Cisg
ICAgICBjYy50eXBlID0gJnR5cGU7CisgICAgICBjYy5vbGRub2RlID0gJm9sZG5vZGU7CisgICAg
ICBjYy5jdXJybm9kZSA9ICZjdXJybm9kZTsKKyAgICAgIC8qIEl0ZXJhdGUgb3ZlciB0aGUgZGly
ZWN0b3J5LiAgKi8KKyAgICAgIERCRygiKioqKioqZnNoZWxwOmZpbmRfZmlsZSBob29rZWQgYnkg
XCdncnViX250ZnNfaXRlcmF0ZV9kaXJcJywiCisJICAibmVzdGVkIGFub3RoZXIgaG9vayBcJ2Zz
aGVscDppdGVyYXRvclwnIik7CisgICAgICBmb3VuZCA9IGMtPml0ZXJhdGVfZGlyIChjdXJybm9k
ZSwgaXRlcmF0ZSwgJmNjKTsgCisgICAgICBpZiAoISBmb3VuZCkKKwl7CisJICBpZiAoZ3J1Yl9l
cnJubykKKwkgICAgcmV0dXJuIGdydWJfZXJybm87CisKKwkgIGJyZWFrOworCX0KKworICAgICAg
LyogUmVhZCBpbiB0aGUgc3ltbGluayBhbmQgZm9sbG93IGl0LiAgKi8KKyAgICAgIGlmICh0eXBl
ID09IEdSVUJfRlNIRUxQX1NZTUxJTkspCisJeworCSAgY2hhciAqc3ltbGluazsKKworCSAgLyog
VGVzdCBpZiB0aGUgc3ltbGluayBkb2VzIG5vdCBsb29wLiAgKi8KKwkgIGlmICgrKyhjLT5zeW1s
aW5rbmVzdCkgPT0gOCkKKwkgICAgeworCSAgICAgIGZyZWVfbm9kZSAoY3Vycm5vZGUsIGMpOwor
CSAgICAgIGZyZWVfbm9kZSAob2xkbm9kZSwgYyk7CisJICAgICAgcmV0dXJuIGdydWJfZXJyb3Ig
KEdSVUJfRVJSX1NZTUxJTktfTE9PUCwKKwkJCQkgInRvbyBkZWVwIG5lc3Rpbmcgb2Ygc3ltbGlu
a3MiKTsKKwkgICAgfQorCisJICBzeW1saW5rID0gYy0+cmVhZF9zeW1saW5rIChjdXJybm9kZSk7
CisJICBmcmVlX25vZGUgKGN1cnJub2RlLCBjKTsKKworCSAgaWYgKCFzeW1saW5rKQorCSAgICB7
CisJICAgICAgZnJlZV9ub2RlIChvbGRub2RlLCBjKTsKKwkgICAgICByZXR1cm4gZ3J1Yl9lcnJu
bzsKKwkgICAgfQorCisJICAvKiBUaGUgc3ltbGluayBpcyBhbiBhYnNvbHV0ZSBwYXRoLCBnbyBi
YWNrIHRvIHRoZSByb290IGlub2RlLiAgKi8KKwkgIGlmIChzeW1saW5rWzBdID09ICcvJykKKwkg
ICAgeworCSAgICAgIGZyZWVfbm9kZSAob2xkbm9kZSwgYyk7CisJICAgICAgb2xkbm9kZSA9IGMt
PnJvb3Rub2RlOworCSAgICB9CisKKwkgIC8qIExvb2t1cCB0aGUgbm9kZSB0aGUgc3ltbGluayBw
b2ludHMgdG8uICAqLworCSAgZmluZF9maWxlIChzeW1saW5rLCBvbGRub2RlLCAmY3Vycm5vZGUs
IGMpOworCSAgdHlwZSA9IGMtPmZvdW5kdHlwZTsKKwkgIGdydWJfZnJlZSAoc3ltbGluayk7CisK
KwkgIGlmIChncnViX2Vycm5vKQorCSAgICB7CisJICAgICAgZnJlZV9ub2RlIChvbGRub2RlLCBj
KTsKKwkgICAgICByZXR1cm4gZ3J1Yl9lcnJubzsKKwkgICAgfQorCX0KKworICAgICAgZnJlZV9u
b2RlIChvbGRub2RlLCBjKTsKKworICAgICAgLyogRm91bmQgdGhlIG5vZGUhICAqLworICAgICAg
aWYgKCEgbmV4dCB8fCAqbmV4dCA9PSAnXDAnKQorCXsKKwkgICpjdXJyZm91bmQgPSBjdXJybm9k
ZTsKKwkgIGMtPmZvdW5kdHlwZSA9IHR5cGU7CisJICByZXR1cm4gMDsKKwl9CisKKyAgICAgIG5h
bWUgPSBuZXh0OworICAgIH0KKworICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJfRklMRV9O
T1RfRk9VTkQsICJmaWxlIG5vdCBmb3VuZCIpOworfQorCisvKiBMb29rdXAgdGhlIG5vZGUgUEFU
SC4gIFRoZSBub2RlIFJPT1ROT0RFIGRlc2NyaWJlcyB0aGUgcm9vdCBvZiB0aGUKKyAgIGRpcmVj
dG9yeSB0cmVlLiAgVGhlIG5vZGUgZm91bmQgaXMgcmV0dXJuZWQgaW4gRk9VTkROT0RFLCB3aGlj
aCBpcworICAgZWl0aGVyIGEgUk9PVE5PREUgb3IgYSBuZXcgbWFsbG9jJ2VkIG5vZGUuICBJVEVS
QVRFX0RJUiBpcyB1c2VkIHRvCisgICBpdGVyYXRlIG92ZXIgYWxsIGRpcmVjdG9yeSBlbnRyaWVz
IGluIHRoZSBjdXJyZW50IG5vZGUuCisgICBSRUFEX1NZTUxJTksgaXMgdXNlZCB0byByZWFkIHRo
ZSBzeW1saW5rIGlmIGEgbm9kZSBpcyBhIHN5bWxpbmsuCisgICBFWFBFQ1RUWVBFIGlzIHRoZSB0
eXBlIG5vZGUgdGhhdCBpcyBleHBlY3RlZCBieSB0aGUgY2FsbGVkLCBhbgorICAgZXJyb3IgaXMg
Z2VuZXJhdGVkIGlmIHRoZSBub2RlIGlzIG5vdCBvZiB0aGUgZXhwZWN0ZWQgdHlwZS4gIE1ha2UK
KyAgIHN1cmUgeW91IHVzZSB0aGUgTkVTVEVEX0ZVTkNfQVRUUiBtYWNybyBmb3IgSE9PSywgdGhp
cyBpcyByZXF1aXJlZAorICAgYmVjYXVzZSBHQ0MgaGFzIGEgbmFzdHkgYnVnIHdoZW4gdXNpbmcg
cmVncGFybT0zLiAgKi8KK2dydWJfZXJyX3QKK2dydWJfZnNoZWxwX2ZpbmRfZmlsZSAoY29uc3Qg
Y2hhciAqcGF0aCwgZ3J1Yl9mc2hlbHBfbm9kZV90IHJvb3Rub2RlLAorCQkgICAgICAgZ3J1Yl9m
c2hlbHBfbm9kZV90ICpmb3VuZG5vZGUsCisJCSAgICAgICBpbnQgKCppdGVyYXRlX2RpcikgKGdy
dWJfZnNoZWxwX25vZGVfdCBkaXIsCisJCQkJCSAgIGludCAoKmhvb2spCisJCQkJCSAgIChjb25z
dCBjaGFyICpmaWxlbmFtZSwKKwkJCQkJICAgIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZmls
ZXR5cGUsCisJCQkJCSAgICBncnViX2ZzaGVscF9ub2RlX3Qgbm9kZSwKKwkJCQkJICAgIHZvaWQg
KmNsb3N1cmUpLAorCQkJCQkgICB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgIHZvaWQgKmNsb3N1
cmUsCisJCSAgICAgICBjaGFyICooKnJlYWRfc3ltbGluaykgKGdydWJfZnNoZWxwX25vZGVfdCBu
b2RlKSwKKwkJICAgICAgIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZXhwZWN0dHlwZSkKK3sK
KyAgZ3J1Yl9lcnJfdCBlcnI7CisgIHN0cnVjdCBncnViX2ZzaGVscF9maW5kX2ZpbGVfY2xvc3Vy
ZSBjOworCisgIGMucm9vdG5vZGUgPSByb290bm9kZTsKKyAgYy5pdGVyYXRlX2RpciA9IGl0ZXJh
dGVfZGlyOworICBjLmNsb3N1cmUgPSBjbG9zdXJlOworICBjLnJlYWRfc3ltbGluayA9IHJlYWRf
c3ltbGluazsKKyAgYy5zeW1saW5rbmVzdCA9IDA7CisgIGMuZm91bmR0eXBlID0gR1JVQl9GU0hF
TFBfRElSOworCisgIGlmICghcGF0aCB8fCBwYXRoWzBdICE9ICcvJykKKyAgICB7CisgICAgICBn
cnViX2Vycm9yIChHUlVCX0VSUl9CQURfRklMRU5BTUUsICJiYWQgZmlsZW5hbWUiKTsKKyAgICAg
IHJldHVybiBncnViX2Vycm5vOworICAgIH0KKyAgCisgIAorICBEQkcoImdvaW5nIHRvIGZpbmRf
ZmlsZVxuIik7CisgIGVyciA9IGZpbmRfZmlsZSAocGF0aCwgcm9vdG5vZGUsIGZvdW5kbm9kZSwg
JmMpOworICBpZiAoZXJyKQorICAgIHJldHVybiBlcnI7CisKKyAgLyogQ2hlY2sgaWYgdGhlIG5v
ZGUgdGhhdCB3YXMgZm91bmQgd2FzIG9mIHRoZSBleHBlY3RlZCB0eXBlLiAgKi8KKyAgaWYgKGV4
cGVjdHR5cGUgPT0gR1JVQl9GU0hFTFBfUkVHICYmIGMuZm91bmR0eXBlICE9IGV4cGVjdHR5cGUp
CisgICAgcmV0dXJuIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GSUxFX1RZUEUsICJub3QgYSBy
ZWd1bGFyIGZpbGUiKTsKKyAgZWxzZSBpZiAoZXhwZWN0dHlwZSA9PSBHUlVCX0ZTSEVMUF9ESVIg
JiYgYy5mb3VuZHR5cGUgIT0gZXhwZWN0dHlwZSkKKyAgICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JV
Ql9FUlJfQkFEX0ZJTEVfVFlQRSwgIm5vdCBhIGRpcmVjdG9yeSIpOworCisgIHJldHVybiAwOwor
fQorCisvKiBSZWFkIExFTiBieXRlcyBmcm9tIHRoZSBmaWxlIE5PREUgb24gZGlzayBESVNLIGlu
dG8gdGhlIGJ1ZmZlciBCVUYsCisgICBiZWdpbm5pbmcgd2l0aCB0aGUgYmxvY2sgUE9TLiAgUkVB
RF9IT09LIHNob3VsZCBiZSBzZXQgYmVmb3JlCisgICByZWFkaW5nIGEgYmxvY2sgZnJvbSB0aGUg
ZmlsZS4gIEdFVF9CTE9DSyBpcyB1c2VkIHRvIHRyYW5zbGF0ZSBmaWxlCisgICBibG9ja3MgdG8g
ZGlzayBibG9ja3MuICBUaGUgZmlsZSBpcyBGSUxFU0laRSBieXRlcyBiaWcgYW5kIHRoZQorICAg
YmxvY2tzIGhhdmUgYSBzaXplIG9mIExPRzJCTE9DS1NJWkUgKGluIGxvZzIpLiAgKi8KK2dydWJf
c3NpemVfdAorZ3J1Yl9mc2hlbHBfcmVhZF9maWxlIChCbG9ja0RyaXZlclN0YXRlKiBicywgZ3J1
Yl9mc2hlbHBfbm9kZV90IG5vZGUsCisJCSAgICAgICB2b2lkICgqcmVhZF9ob29rKSAoZ3J1Yl9k
aXNrX2FkZHJfdCBzZWN0b3IsCisJCQkJCSAgdW5zaWduZWQgb2Zmc2V0LAorCQkJCQkgIHVuc2ln
bmVkIGxlbmd0aCwKKwkJCQkJICB2b2lkICpjbG9zdXJlKSwKKwkJICAgICAgIHZvaWQgKmNsb3N1
cmUsCisJCSAgICAgICBncnViX29mZl90IHBvcywgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYs
CisJCSAgICAgICBncnViX2Rpc2tfYWRkcl90ICgqZ2V0X2Jsb2NrKSAoZ3J1Yl9mc2hlbHBfbm9k
ZV90IG5vZGUsCisJCQkJCQkgICAgICBncnViX2Rpc2tfYWRkcl90IGJsb2NrKSwKKwkJICAgICAg
IGdydWJfb2ZmX3QgZmlsZXNpemUsIGludCBsb2cyYmxvY2tzaXplKQoreworICBncnViX2Rpc2tf
YWRkcl90IGksIGJsb2NrY250OworICBncnViX29mZl90IG9mZl9ieXRlczsKKyAgaW50IGJsb2Nr
c2l6ZSA9IDEgPDwgKGxvZzJibG9ja3NpemUgKyBHUlVCX0RJU0tfU0VDVE9SX0JJVFMpOworCisg
IC8qIEFkanVzdCBMRU4gc28gaXQgd2UgY2FuJ3QgcmVhZCBwYXN0IHRoZSBlbmQgb2YgdGhlIGZp
bGUuICAqLworICBpZiAocG9zICsgbGVuID4gZmlsZXNpemUpCisgICAgbGVuID0gZmlsZXNpemUg
LSBwb3M7CisKKyAgYmxvY2tjbnQgPSAoKGxlbiArIHBvcykgKyBibG9ja3NpemUgLSAxKSA+Pgor
ICAgIChsb2cyYmxvY2tzaXplICsgR1JVQl9ESVNLX1NFQ1RPUl9CSVRTKTsKKworICBmb3IgKGkg
PSBwb3MgPj4gKGxvZzJibG9ja3NpemUgKyBHUlVCX0RJU0tfU0VDVE9SX0JJVFMpOyBpIDwgYmxv
Y2tjbnQ7IGkrKykKKyAgICB7CisgICAgICBncnViX2Rpc2tfYWRkcl90IGJsa25yOworICAgICAg
aW50IGJsb2Nrb2ZmID0gcG9zICYgKGJsb2Nrc2l6ZSAtIDEpOworICAgICAgaW50IGJsb2NrZW5k
ID0gYmxvY2tzaXplOworCisgICAgICBpbnQgc2tpcGZpcnN0ID0gMDsKKworICAgICAgYmxrbnIg
PSBnZXRfYmxvY2sgKG5vZGUsIGkpOworICAgICAgaWYgKGdydWJfZXJybm8pCisJcmV0dXJuIC0x
OworCisgICAgICBibGtuciA9IGJsa25yIDw8IGxvZzJibG9ja3NpemU7CisgICAgICBvZmZfYnl0
ZXMgPSBibGtuciA8PCBHUlVCX0RJU0tfU0VDVE9SX0JJVFM7CisKKyAgICAgIC8qIExhc3QgYmxv
Y2suICAqLworICAgICAgaWYgKGkgPT0gYmxvY2tjbnQgLSAxKQorCXsKKwkgIGJsb2NrZW5kID0g
KGxlbiArIHBvcykgJiAoYmxvY2tzaXplIC0gMSk7CisKKwkgIC8qIFRoZSBsYXN0IHBvcnRpb24g
aXMgZXhhY3RseSBibG9ja3NpemUuICAqLworCSAgaWYgKCEgYmxvY2tlbmQpCisJICAgIGJsb2Nr
ZW5kID0gYmxvY2tzaXplOworCX0KKworICAgICAgLyogRmlyc3QgYmxvY2suICAqLworICAgICAg
aWYgKGkgPT0gKHBvcyA+PiAobG9nMmJsb2Nrc2l6ZSArIEdSVUJfRElTS19TRUNUT1JfQklUUykp
KQorCXsKKwkgIHNraXBmaXJzdCA9IGJsb2Nrb2ZmOworCSAgYmxvY2tlbmQgLT0gc2tpcGZpcnN0
OworCX0KKworICAgICAgLyogSWYgdGhlIGJsb2NrIG51bWJlciBpcyAwIHRoaXMgYmxvY2sgaXMg
bm90IHN0b3JlZCBvbiBkaXNrIGJ1dAorCSBpcyB6ZXJvIGZpbGxlZCBpbnN0ZWFkLiAgKi8KKyAg
ICAgIGlmIChibGtucikKKwl7CisJICAvL2JzLT5yZWFkX2hvb2sgPSByZWFkX2hvb2s7CisJICAv
L2JzLT5jbG9zdXJlID0gY2xvc3VyZTsKKwkgIAorCSAgYmRydl9wcmVhZF9mcm9tX2xjbl9vZl92
b2x1bShicywgb2ZmX2J5dGVzICsgc2tpcGZpcnN0LAorCQkgICAgICBidWYsIGJsb2NrZW5kKTsK
KwkgIC8vYnMtPnJlYWRfaG9vayA9IDA7CisJICBpZiAoZ3J1Yl9lcnJubykKKwkgICAgcmV0dXJu
IC0xOworCX0KKyAgICAgIGVsc2UKKwlncnViX21lbXNldCAoYnVmLCAwLCBibG9ja2VuZCk7CisK
KyAgICAgIGJ1ZiArPSBibG9ja3NpemUgLSBza2lwZmlyc3Q7CisgICAgfQorCisgIHJldHVybiBs
ZW47Cit9CisKK3Vuc2lnbmVkIGludAorZ3J1Yl9mc2hlbHBfbG9nMmJsa3NpemUgKHVuc2lnbmVk
IGludCBibGtzaXplLCB1bnNpZ25lZCBpbnQgKnBvdykKK3sKKyAgaW50IG1vZDsKKworICAqcG93
ID0gMDsKKyAgd2hpbGUgKGJsa3NpemUgPiAxKQorICAgIHsKKyAgICAgIG1vZCA9IGJsa3NpemUg
LSAoKGJsa3NpemUgPj4gMSkgPDwgMSk7CisgICAgICBibGtzaXplID4+PSAxOworCisgICAgICAv
KiBDaGVjayBpZiBpdCByZWFsbHkgaXMgYSBwb3dlciBvZiB0d28uICAqLworICAgICAgaWYgKG1v
ZCkKKwlyZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJfQkFEX05VTUJFUiwKKwkJCSAgICJ0aGUg
YmxvY2tzaXplIGlzIG5vdCBhIHBvd2VyIG9mIHR3byIpOworICAgICAgKCpwb3cpKys7CisgICAg
fQorCisgIHJldHVybiBHUlVCX0VSUl9OT05FOworfQpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4g
LVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzaGVscC5oIHhlbi00LjEuMi1i
L3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzaGVscC5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2Vt
dS1xZW11LXhlbi9mc2hlbHAuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAor
KysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnNoZWxwLmgJMjAxMi0xMi0yOCAx
NjowMjo0MS4wMDQ5MzI0NTcgKzA4MDAKQEAgLTAsMCArMSw4NiBAQAorLyogZnNoZWxwLmggLS0g
RmlsZXN5c3RlbSBoZWxwZXIgZnVuY3Rpb25zICovCisvKgorICogIEdSVUIgIC0tICBHUmFuZCBV
bmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDQsMjAwNSwyMDA2LDIwMDcs
MjAwOCAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJl
ZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0
IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVi
bGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNp
b24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2
ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQg
d2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2
ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVT
UyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVi
bGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJl
Y2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9u
ZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4u
CisgKi8KKworI2lmbmRlZiBHUlVCX0ZTSEVMUF9IRUFERVIKKyNkZWZpbmUgR1JVQl9GU0hFTFBf
SEVBREVSCTEKKworI2luY2x1ZGUgImZzLXR5cGVzLmgiCisjaW5jbHVkZSAiZ3J1Yl9lcnIuaCIK
KyNpbmNsdWRlICJibG9ja19pbnQuaCIKK3R5cGVkZWYgc3RydWN0IGdydWJfZnNoZWxwX25vZGUg
KmdydWJfZnNoZWxwX25vZGVfdDsKKworI2RlZmluZSBHUlVCX0ZTSEVMUF9DQVNFX0lOU0VOU0lU
SVZFCTB4MTAwCisjZGVmaW5lIEdSVUJfRlNIRUxQX1RZUEVfTUFTSwkweGZmCisjZGVmaW5lIEdS
VUJfRlNIRUxQX0ZMQUdTX01BU0sJMHgxMDAKKworZW51bSBncnViX2ZzaGVscF9maWxldHlwZQor
ICB7CisgICAgR1JVQl9GU0hFTFBfVU5LTk9XTiwKKyAgICBHUlVCX0ZTSEVMUF9SRUcsCisgICAg
R1JVQl9GU0hFTFBfRElSLAorICAgIEdSVUJfRlNIRUxQX1NZTUxJTksKKyAgfTsKKworLyogTG9v
a3VwIHRoZSBub2RlIFBBVEguICBUaGUgbm9kZSBST09UTk9ERSBkZXNjcmliZXMgdGhlIHJvb3Qg
b2YgdGhlCisgICBkaXJlY3RvcnkgdHJlZS4gIFRoZSBub2RlIGZvdW5kIGlzIHJldHVybmVkIGlu
IEZPVU5ETk9ERSwgd2hpY2ggaXMKKyAgIGVpdGhlciBhIFJPT1ROT0RFIG9yIGEgbmV3IG1hbGxv
YydlZCBub2RlLiAgSVRFUkFURV9ESVIgaXMgdXNlZCB0bworICAgaXRlcmF0ZSBvdmVyIGFsbCBk
aXJlY3RvcnkgZW50cmllcyBpbiB0aGUgY3VycmVudCBub2RlLgorICAgUkVBRF9TWU1MSU5LIGlz
IHVzZWQgdG8gcmVhZCB0aGUgc3ltbGluayBpZiBhIG5vZGUgaXMgYSBzeW1saW5rLgorICAgRVhQ
RUNUVFlQRSBpcyB0aGUgdHlwZSBub2RlIHRoYXQgaXMgZXhwZWN0ZWQgYnkgdGhlIGNhbGxlZCwg
YW4KKyAgIGVycm9yIGlzIGdlbmVyYXRlZCBpZiB0aGUgbm9kZSBpcyBub3Qgb2YgdGhlIGV4cGVj
dGVkIHR5cGUuICBNYWtlCisgICBzdXJlIHlvdSB1c2UgdGhlIE5FU1RFRF9GVU5DX0FUVFIgbWFj
cm8gZm9yIEhPT0ssIHRoaXMgaXMgcmVxdWlyZWQKKyAgIGJlY2F1c2UgR0NDIGhhcyBhIG5hc3R5
IGJ1ZyB3aGVuIHVzaW5nIHJlZ3Bhcm09My4gICovCitncnViX2Vycl90IGdydWJfZnNoZWxwX2Zp
bmRfZmlsZSAoY29uc3QgY2hhciAqcGF0aCwKKwkJCQkgIGdydWJfZnNoZWxwX25vZGVfdCByb290
bm9kZSwKKwkJCQkgIGdydWJfZnNoZWxwX25vZGVfdCAqZm91bmRub2RlLAorCQkJCSAgaW50ICgq
aXRlcmF0ZV9kaXIpCisJCQkJICAoZ3J1Yl9mc2hlbHBfbm9kZV90IGRpciwKKwkJCQkgICBpbnQg
KCpob29rKQorCQkJCSAgIChjb25zdCBjaGFyICpmaWxlbmFtZSwKKwkJCQkgICAgZW51bSBncnVi
X2ZzaGVscF9maWxldHlwZSBmaWxldHlwZSwKKwkJCQkgICAgZ3J1Yl9mc2hlbHBfbm9kZV90IG5v
ZGUsCisJCQkJICAgIHZvaWQgKmNsb3N1cmUpLAorCQkJCSAgIHZvaWQgKmNsb3N1cmUpLAorCQkJ
CSAgdm9pZCAqY2xvc3VyZSwKKwkJCQkgIGNoYXIgKigqcmVhZF9zeW1saW5rKSAoZ3J1Yl9mc2hl
bHBfbm9kZV90IG5vZGUpLAorCQkJCSAgZW51bSBncnViX2ZzaGVscF9maWxldHlwZSBleHBlY3Qp
OworCisKKy8qIFJlYWQgTEVOIGJ5dGVzIGZyb20gdGhlIGZpbGUgTk9ERSBvbiBkaXNrIERJU0sg
aW50byB0aGUgYnVmZmVyIEJVRiwKKyAgIGJlZ2lubmluZyB3aXRoIHRoZSBibG9jayBQT1MuICBS
RUFEX0hPT0sgc2hvdWxkIGJlIHNldCBiZWZvcmUKKyAgIHJlYWRpbmcgYSBibG9jayBmcm9tIHRo
ZSBmaWxlLiAgR0VUX0JMT0NLIGlzIHVzZWQgdG8gdHJhbnNsYXRlIGZpbGUKKyAgIGJsb2NrcyB0
byBkaXNrIGJsb2Nrcy4gIFRoZSBmaWxlIGlzIEZJTEVTSVpFIGJ5dGVzIGJpZyBhbmQgdGhlCisg
ICBibG9ja3MgaGF2ZSBhIHNpemUgb2YgTE9HMkJMT0NLU0laRSAoaW4gbG9nMikuICAqLworZ3J1
Yl9zc2l6ZV90IGdydWJfZnNoZWxwX3JlYWRfZmlsZSAoQmxvY2tEcml2ZXJTdGF0ZSogYnMsIGdy
dWJfZnNoZWxwX25vZGVfdCBub2RlLAorCQkJCSAgICB2b2lkICgqcmVhZF9ob29rKQorCQkJCSAg
ICAoZ3J1Yl9kaXNrX2FkZHJfdCBzZWN0b3IsCisJCQkJICAgICB1bnNpZ25lZCBvZmZzZXQsCisJ
CQkJICAgICB1bnNpZ25lZCBsZW5ndGgsCisJCQkJICAgICB2b2lkICpjbG9zdXJlKSwKKwkJCQkg
ICAgdm9pZCAqY2xvc3VyZSwKKwkJCQkgICAgZ3J1Yl9vZmZfdCBwb3MsIGdydWJfc2l6ZV90IGxl
biwgY2hhciAqYnVmLAorCQkJCSAgICBncnViX2Rpc2tfYWRkcl90ICgqZ2V0X2Jsb2NrKQorCQkJ
CSAgICAoZ3J1Yl9mc2hlbHBfbm9kZV90IG5vZGUsCisJCQkJICAgICBncnViX2Rpc2tfYWRkcl90
IGJsb2NrKSwKKwkJCQkgICAgZ3J1Yl9vZmZfdCBmaWxlc2l6ZSwgaW50IGxvZzJibG9ja3NpemUp
OworCit1bnNpZ25lZCBpbnQgZ3J1Yl9mc2hlbHBfbG9nMmJsa3NpemUgKHVuc2lnbmVkIGludCBi
bGtzaXplLAorCQkJCSAgICAgIHVuc2lnbmVkIGludCAqcG93KTsKKworI2VuZGlmIC8qICEgR1JV
Ql9GU0hFTFBfSEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4y
LWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnMtdGltZS5jIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11
LXFlbXUteGVuL2ZzLXRpbWUuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4v
ZnMtdGltZS5jCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9mcy10aW1lLmMJMjAxMi0xMi0yOCAxNjowMjo0MS4w
MDU2ODU3OTggKzA4MDAKQEAgLTAsMCArMSw3NyBAQAorI2luY2x1ZGUgImZzLXRpbWUuaCIKKwor
CisKK3N0YXRpYyB1aW50NjRfdCBkaXY2NCh1aW50NjRfdCBhLCB1aW50MzJfdCBiLCB1aW50MzJf
dCBjKQoreworICAgIHVuaW9uIHsKKyAgICAgICAgdWludDY0X3QgbGw7CisgICAgICAgIHN0cnVj
dCB7CisjaWZkZWYgV09SRFNfQklHRU5ESUFOCisgICAgICAgICAgICB1aW50MzJfdCBoaWdoLCBs
b3c7CisjZWxzZQorICAgICAgICAgICAgdWludDMyX3QgbG93LCBoaWdoOworI2VuZGlmCisgICAg
ICAgIH0gbDsKKyAgICB9IHUsIHJlczsKKyAgICB1aW50NjRfdCBybCwgcmg7CisKKyAgICB1Lmxs
ID0gYTsKKyAgICBybCA9ICh1aW50NjRfdCl1LmwubG93ICogKHVpbnQ2NF90KWI7CisgICAgcmgg
PSAodWludDY0X3QpdS5sLmhpZ2ggKiAodWludDY0X3QpYjsKKyAgICByaCArPSAocmwgPj4gMzIp
OworICAgIHJlcy5sLmhpZ2ggPSByaCAvIGM7CisgICAgcmVzLmwubG93ID0gKCgocmggJSBjKSA8
PCAzMikgKyAocmwgJiAweGZmZmZmZmZmKSkgLyBjOworICAgIHJldHVybiByZXMubGw7Cit9CisK
K3N0YXRpYyB1aW50NjRfdCBzdWI2NCh1aW50NjRfdCBhLCB1aW50NjRfdCBiKQoreworICBzdHJ1
Y3QKKyAgeworI2lmZGVmIFdPUkRTX0JJR0VORElBTgorICAgICAgICAgICAgdWludDMyX3QgaGln
aCwgbG93OworI2Vsc2UKKyAgICAgICAgICAgIHVpbnQzMl90IGxvdywgaGlnaDsKKyNlbmRpZgor
ICB9YTEsYjEsYzsKKyAgCisgIGExLmhpZ2ggPSBhPj4zMjsKKyAgYTEubG93ID0gYSYweGZmZmZm
ZmZmOworICBiMS5oaWdoID0gYj4+MzI7CisgIGIxLmxvdyA9IGImMHhmZmZmZmZmZjsKKyAgCisg
IGlmKGExLmhpZ2ggPCBiMS5oaWdoKQorICAgIHsKKyAgICAgIGM9YjE7CisgICAgICBiMT1hMTsK
KyAgICAgIGExPWM7CisgICAgfQorICAKKyAgYTEuaGlnaCAtPSBiMS5oaWdoOworICBhMS5sb3cg
LT0gYjEubG93OworICBpZihhMS5sb3cgJiAweDgwMDAwMDAwKQorICAgIHsKKyAgICAgIGExLmxv
dyA9ICh+KGExLmxvdyAmIDB4N2ZmZmZmZmYpKSsxOworICAgICAgYTEuaGlnaCAtPSAxOworICAg
IH0KKyAgCisgIHVpbnQ2NF90IHJldCA9ICh1aW50NjRfdClhMS5oaWdoPDwzMiB8IGExLmxvdzsK
KyAgcmV0dXJuIHJldDsKK30KKworc3RydWN0IHRtKiBudGZzX3V0YzJsb2NhbChncnViX3VpbnQ2
NF90IHRpbWUsIHN0cnVjdCB0bSogcHRtKQoreworICAvL3RpbWVfdCB0aW1lMiA9IHN1YjY0KHRp
bWUsIE5URlNfVElNRV9PRkZTRVQpOworICB0aW1lX3QgdGltZTIgPSB0aW1lIC0gTlRGU19USU1F
X09GRlNFVDsKKyAgLypEQkcoInNpemVvZihpbnQpPSVkIiwgc2l6ZW9mKGludCkpOworICBEQkco
InNpemVvZihzaG9ydCk9JWQiLCBzaXplb2Yoc2hvcnQpKTsKKyAgREJHKCJzaXplb2YobG9uZyk9
JWQiLCBzaXplb2YobG9uZykpOworICBEQkcoInNpemVvZihsb25nIGxvbmcpPSVkIiwgc2l6ZW9m
KHVuc2lnbmVkIGxvbmcgbG9uZykpOworICBEQkcoInNpemVvZih0aW1lX3QpPSVkLCB0aW1lPSV6
dSwgdGltZTI9JXp1Iiwgc2l6ZW9mKHRpbWVfdCksIHRpbWUsIHRpbWUyKTsqLworICAvL3RpbWUy
ID0gZGl2NjQodGltZTIsMSwxMDAwMDAwMCk7CisgIHRpbWUyID0gdGltZTIgLyAxMDAwMDAwMDsK
KyAgREJHKCJzaXplb2YodGltZV90KT0lZCwgdGltZT0lenUsIHRpbWUyPSV6dSIsIHNpemVvZih0
aW1lX3QpLCB0aW1lLCB0aW1lMik7CisgIC8vLy90aW1lMiA9IDA7Ly90aW1lKE5VTEwpOworICBy
ZXR1cm4gbG9jYWx0aW1lX3IoJnRpbWUyLCBwdG0pOworfQpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1y
cE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2ZzLXRpbWUuaCB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9mcy10aW1lLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xz
L2lvZW11LXFlbXUteGVuL2ZzLXRpbWUuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCAr
MDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZnMtdGltZS5oCTIwMTIt
MTItMjggMTY6MDI6NDEuMDA1Njg1Nzk4ICswODAwCkBAIC0wLDAgKzEsMTIgQEAKKyNpZm5kZWYg
RlNfVElNRV9ICisjZGVmaW5lIEZTX1RJTUVfSAorCisjaW5jbHVkZSA8dGltZS5oPgorI2luY2x1
ZGUgImZzLWNvbW0uaCIKKyNkZWZpbmUgTlRGU19USU1FX09GRlNFVCAoKGdydWJfdWludDY0X3Qp
KDM2OSAqIDM2NSArIDg5KSAqIDI0ICogMzYwMCAqIDEwMDAwMDAwKQorCitzdHJ1Y3QgdG0qIG50
ZnNfdXRjMmxvY2FsKGdydWJfdWludDY0X3QgdGltZSwgc3RydWN0IHRtKiBwdG0pOworCisKKyNl
bmRpZgorCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZnMtdHlwZXMuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9m
cy10eXBlcy5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9mcy10eXBlcy5o
CTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9mcy10eXBlcy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDA2OTMyNDE3
ICswODAwCkBAIC0wLDAgKzEsMjM0IEBACisvKgorICogIEdSVUIgIC0tICBHUmFuZCBVbmlmaWVk
IEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDIsMjAwNSwyMDA2LDIwMDcsMjAwOCwy
MDA5ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVl
IHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQg
dW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJs
aXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lv
biAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChhdCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZl
cnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3
aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZl
biB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICogIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNT
IEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJs
aWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgorICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVj
ZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZQorICogIGFsb25n
IHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4K
KyAqLworCisjaWZuZGVmIEdSVUJfVFlQRVNfSEVBREVSCisjZGVmaW5lIEdSVUJfVFlQRVNfSEVB
REVSCTEKKworI2luY2x1ZGUgImdydWItY29uZmlnLmgiCisjaW5jbHVkZSAieDg2XzY0L3R5cGVz
LmgiCisKKyNpZmRlZiBHUlVCX1VUSUwKKyMgZGVmaW5lIEdSVUJfQ1BVX1NJWkVPRl9WT0lEX1AJ
U0laRU9GX1ZPSURfUAorIyBkZWZpbmUgR1JVQl9DUFVfU0laRU9GX0xPTkcJU0laRU9GX0xPTkcK
KyMgaWZkZWYgV09SRFNfQklHRU5ESUFOCisjICBkZWZpbmUgR1JVQl9DUFVfV09SRFNfQklHRU5E
SUFOCTEKKyMgZWxzZQorIyAgdW5kZWYgR1JVQl9DUFVfV09SRFNfQklHRU5ESUFOCisjIGVuZGlm
CisjZWxzZSAvKiAhIEdSVUJfVVRJTCAqLworIyBkZWZpbmUgR1JVQl9DUFVfU0laRU9GX1ZPSURf
UAlHUlVCX1RBUkdFVF9TSVpFT0ZfVk9JRF9QCisjIGRlZmluZSBHUlVCX0NQVV9TSVpFT0ZfTE9O
RwlHUlVCX1RBUkdFVF9TSVpFT0ZfTE9ORworIyBpZmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdF
TkRJQU4KKyMgIGRlZmluZSBHUlVCX0NQVV9XT1JEU19CSUdFTkRJQU4JMQorIyBlbHNlCisjICB1
bmRlZiBHUlVCX0NQVV9XT1JEU19CSUdFTkRJQU4KKyMgZW5kaWYKKyNlbmRpZiAvKiAhIEdSVUJf
VVRJTCAqLworCisjaWYgR1JVQl9DUFVfU0laRU9GX1ZPSURfUCAhPSA0ICYmIEdSVUJfQ1BVX1NJ
WkVPRl9WT0lEX1AgIT0gOAorIyBlcnJvciAiVGhpcyBhcmNoaXRlY3R1cmUgaXMgbm90IHN1cHBv
cnRlZCBiZWNhdXNlIHNpemVvZih2b2lkICopICE9IDQgYW5kIHNpemVvZih2b2lkICopICE9IDgi
CisjZW5kaWYKKworI2lmbmRlZiBHUlVCX1RBUkdFVF9XT1JEU0laRQorIyBpZiBHUlVCX1RBUkdF
VF9TSVpFT0ZfVk9JRF9QID09IDQKKyMgIGRlZmluZSBHUlVCX1RBUkdFVF9XT1JEU0laRSAzMgor
IyBlbGlmIEdSVUJfVEFSR0VUX1NJWkVPRl9WT0lEX1AgPT0gOAorIyAgZGVmaW5lIEdSVUJfVEFS
R0VUX1dPUkRTSVpFIDY0CisjIGVuZGlmCisjZW5kaWYKKworLyogRGVmaW5lIHZhcmlvdXMgd2lk
ZSBpbnRlZ2Vycy4gICovCit0eXBlZGVmIHNpZ25lZCBjaGFyCQlncnViX2ludDhfdDsKK3R5cGVk
ZWYgc2hvcnQJCQlncnViX2ludDE2X3Q7Cit0eXBlZGVmIGludAkJCWdydWJfaW50MzJfdDsKKyNp
ZiBHUlVCX0NQVV9TSVpFT0ZfTE9ORyA9PSA4Cit0eXBlZGVmIGxvbmcJCQlncnViX2ludDY0X3Q7
CisjZWxzZQordHlwZWRlZiBsb25nIGxvbmcJCWdydWJfaW50NjRfdDsKKyNlbmRpZgorCit0eXBl
ZGVmIHVuc2lnbmVkIGNoYXIJCWdydWJfdWludDhfdDsKK3R5cGVkZWYgdW5zaWduZWQgc2hvcnQJ
CWdydWJfdWludDE2X3Q7Cit0eXBlZGVmIHVuc2lnbmVkCQlncnViX3VpbnQzMl90OworI2lmIEdS
VUJfQ1BVX1NJWkVPRl9MT05HID09IDgKK3R5cGVkZWYgdW5zaWduZWQgbG9uZwkJZ3J1Yl91aW50
NjRfdDsKKyNlbHNlCit0eXBlZGVmIHVuc2lnbmVkIGxvbmcgbG9uZwlncnViX3VpbnQ2NF90Owor
I2VuZGlmCisKKy8qIE1pc2MgdHlwZXMuICAqLworI2lmIEdSVUJfVEFSR0VUX1NJWkVPRl9WT0lE
X1AgPT0gOAordHlwZWRlZiBncnViX3VpbnQ2NF90CWdydWJfdGFyZ2V0X2FkZHJfdDsKK3R5cGVk
ZWYgZ3J1Yl91aW50NjRfdAlncnViX3RhcmdldF9vZmZfdDsKK3R5cGVkZWYgZ3J1Yl91aW50NjRf
dAlncnViX3RhcmdldF9zaXplX3Q7Cit0eXBlZGVmIGdydWJfaW50NjRfdAlncnViX3RhcmdldF9z
c2l6ZV90OworI2Vsc2UKK3R5cGVkZWYgZ3J1Yl91aW50MzJfdAlncnViX3RhcmdldF9hZGRyX3Q7
Cit0eXBlZGVmIGdydWJfdWludDMyX3QJZ3J1Yl90YXJnZXRfb2ZmX3Q7Cit0eXBlZGVmIGdydWJf
dWludDMyX3QJZ3J1Yl90YXJnZXRfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDMyX3QJZ3J1Yl90
YXJnZXRfc3NpemVfdDsKKyNlbmRpZgorCisjaWYgR1JVQl9DUFVfU0laRU9GX1ZPSURfUCA9PSA4
Cit0eXBlZGVmIGdydWJfdWludDY0X3QJZ3J1Yl9hZGRyX3Q7Cit0eXBlZGVmIGdydWJfdWludDY0
X3QJZ3J1Yl9zaXplX3Q7Cit0eXBlZGVmIGdydWJfaW50NjRfdAlncnViX3NzaXplX3Q7CisjZWxz
ZQordHlwZWRlZiBncnViX3VpbnQzMl90CWdydWJfYWRkcl90OwordHlwZWRlZiBncnViX3VpbnQz
Ml90CWdydWJfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDMyX3QJZ3J1Yl9zc2l6ZV90OworI2Vu
ZGlmCisKKyNpZiBHUlVCX0NQVV9TSVpFT0ZfVk9JRF9QID09IDgKKyMgZGVmaW5lIEdSVUJfVUxP
TkdfTUFYIDE4NDQ2NzQ0MDczNzA5NTUxNjE1VUwKKyMgZGVmaW5lIEdSVUJfTE9OR19NQVggOTIy
MzM3MjAzNjg1NDc3NTgwN0wKKyMgZGVmaW5lIEdSVUJfTE9OR19NSU4gKC05MjIzMzcyMDM2ODU0
Nzc1ODA3TCAtIDEpCisjZWxzZQorIyBkZWZpbmUgR1JVQl9VTE9OR19NQVggNDI5NDk2NzI5NVVM
CisjIGRlZmluZSBHUlVCX0xPTkdfTUFYIDIxNDc0ODM2NDdMCisjIGRlZmluZSBHUlVCX0xPTkdf
TUlOICgtMjE0NzQ4MzY0N0wgLSAxKQorI2VuZGlmCisKKyNpZiBHUlVCX0NQVV9TSVpFT0ZfVk9J
RF9QID09IDQKKyNkZWZpbmUgVUlOVF9UT19QVFIoeCkgKCh2b2lkKikoZ3J1Yl91aW50MzJfdCko
eCkpCisjZGVmaW5lIFBUUl9UT19VSU5UNjQoeCkgKChncnViX3VpbnQ2NF90KShncnViX3VpbnQz
Ml90KSh4KSkKKyNkZWZpbmUgUFRSX1RPX1VJTlQzMih4KSAoKGdydWJfdWludDMyX3QpKHgpKQor
I2Vsc2UKKyNkZWZpbmUgVUlOVF9UT19QVFIoeCkgKCh2b2lkKikoZ3J1Yl91aW50NjRfdCkoeCkp
CisjZGVmaW5lIFBUUl9UT19VSU5UNjQoeCkgKChncnViX3VpbnQ2NF90KSh4KSkKKyNkZWZpbmUg
UFRSX1RPX1VJTlQzMih4KSAoKGdydWJfdWludDMyX3QpKGdydWJfdWludDY0X3QpKHgpKQorI2Vu
ZGlmCisKKy8qIFRoZSB0eXBlIGZvciByZXByZXNlbnRpbmcgYSBmaWxlIG9mZnNldC4gICovCit0
eXBlZGVmIGdydWJfdWludDY0X3QJZ3J1Yl9vZmZfdDsKKworLyogVGhlIHR5cGUgZm9yIHJlcHJl
c2VudGluZyBhIGRpc2sgYmxvY2sgYWRkcmVzcy4gICovCit0eXBlZGVmIGdydWJfdWludDY0X3QJ
Z3J1Yl9kaXNrX2FkZHJfdDsKKworLyogQnl0ZS1vcmRlcnMuICAqLworI2RlZmluZSBncnViX3N3
YXBfYnl0ZXMxNih4KQlcCisoeyBcCisgICBncnViX3VpbnQxNl90IF94ID0gKHgpOyBcCisgICAo
Z3J1Yl91aW50MTZfdCkgKChfeCA8PCA4KSB8IChfeCA+PiA4KSk7IFwKK30pCisKKyNpZiBkZWZp
bmVkKF9fR05VQ19fKSAmJiAoX19HTlVDX18gPiAzKSAmJiAoX19HTlVDX18gPiA0IHx8IF9fR05V
Q19NSU5PUl9fID49IDMpICYmIGRlZmluZWQoR1JVQl9UQVJHRVRfSTM4NikKK3N0YXRpYyBpbmxp
bmUgZ3J1Yl91aW50MzJfdCBncnViX3N3YXBfYnl0ZXMzMihncnViX3VpbnQzMl90IHgpCit7CisJ
cmV0dXJuIF9fYnVpbHRpbl9ic3dhcDMyKHgpOworfQorCitzdGF0aWMgaW5saW5lIGdydWJfdWlu
dDY0X3QgZ3J1Yl9zd2FwX2J5dGVzNjQoZ3J1Yl91aW50NjRfdCB4KQoreworCXJldHVybiBfX2J1
aWx0aW5fYnN3YXA2NCh4KTsKK30KKyNlbHNlCQkJCQkvKiBub3QgZ2NjIDQuMyBvciBuZXdlciAq
LworI2RlZmluZSBncnViX3N3YXBfYnl0ZXMzMih4KQlcCisoeyBcCisgICBncnViX3VpbnQzMl90
IF94ID0gKHgpOyBcCisgICAoZ3J1Yl91aW50MzJfdCkgKChfeCA8PCAyNCkgXAorICAgICAgICAg
ICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50MzJfdCkgMHhGRjAwVUwpIDw8IDgpIFwKKyAg
ICAgICAgICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDMyX3QpIDB4RkYwMDAwVUwpID4+
IDgpIFwKKyAgICAgICAgICAgICAgICAgICAgfCAoX3ggPj4gMjQpKTsgXAorfSkKKworI2RlZmlu
ZSBncnViX3N3YXBfYnl0ZXM2NCh4KQlcCisoeyBcCisgICBncnViX3VpbnQ2NF90IF94ID0gKHgp
OyBcCisgICAoZ3J1Yl91aW50NjRfdCkgKChfeCA8PCA1NikgXAorICAgICAgICAgICAgICAgICAg
ICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAwVUxMKSA8PCA0MCkgXAorICAgICAgICAg
ICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAwMDBVTEwpIDw8IDI0KSBc
CisgICAgICAgICAgICAgICAgICAgIHwgKChfeCAmIChncnViX3VpbnQ2NF90KSAweEZGMDAwMDAw
VUxMKSA8PCA4KSBcCisgICAgICAgICAgICAgICAgICAgIHwgKChfeCAmIChncnViX3VpbnQ2NF90
KSAweEZGMDAwMDAwMDBVTEwpID4+IDgpIFwKKyAgICAgICAgICAgICAgICAgICAgfCAoKF94ICYg
KGdydWJfdWludDY0X3QpIDB4RkYwMDAwMDAwMDAwVUxMKSA+PiAyNCkgXAorICAgICAgICAgICAg
ICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAwMDAwMDAwMDAwMFVMTCkgPj4g
NDApIFwKKyAgICAgICAgICAgICAgICAgICAgfCAoX3ggPj4gNTYpKTsgXAorfSkKKyNlbmRpZgkJ
CQkJLyogbm90IGdjYyA0LjMgb3IgbmV3ZXIgKi8KKworI2lmZGVmIEdSVUJfQ1BVX1dPUkRTX0JJ
R0VORElBTgorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGUxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4
KQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGUzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyBk
ZWZpbmUgZ3J1Yl9jcHVfdG9fbGU2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBkZWZpbmUg
Z3J1Yl9sZV90b19jcHUxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyBkZWZpbmUgZ3J1Yl9s
ZV90b19jcHUzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyBkZWZpbmUgZ3J1Yl9sZV90b19j
cHU2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fYmUxNih4
KQkoKGdydWJfdWludDE2X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2JlMzIoeCkJKChn
cnViX3VpbnQzMl90KSAoeCkpCisjIGRlZmluZSBncnViX2NwdV90b19iZTY0KHgpCSgoZ3J1Yl91
aW50NjRfdCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9iZV90b19jcHUxNih4KQkoKGdydWJfdWludDE2
X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MzIoeCkJKChncnViX3VpbnQzMl90KSAo
eCkpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTY0KHgpCSgoZ3J1Yl91aW50NjRfdCkgKHgpKQor
IyBpZmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4KKyMgIGRlZmluZSBncnViX3Rhcmdl
dF90b19ob3N0MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl90YXJn
ZXRfdG9faG9zdDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfdGFy
Z2V0X3RvX2hvc3Q2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgIGRlZmluZSBncnViX2hv
c3RfdG9fdGFyZ2V0MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJf
aG9zdF90b190YXJnZXQ2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgZWxzZSAvKiAhIEdS
VUJfVEFSR0VUX1dPUkRTX0JJR0VORElBTiAqLworIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3QxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3QzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3Q2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJn
ZXQxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJn
ZXQzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJn
ZXQ2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBlbmRpZgorI2Vsc2UgLyogISBXT1JEU19C
SUdFTkRJQU4gKi8KKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlMTYoeCkJKChncnViX3VpbnQxNl90
KSAoeCkpCisjIGRlZmluZSBncnViX2NwdV90b19sZTMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgp
KQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGU2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMg
ZGVmaW5lIGdydWJfbGVfdG9fY3B1MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjIGRlZmlu
ZSBncnViX2xlX3RvX2NwdTMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyBkZWZpbmUgZ3J1
Yl9sZV90b19jcHU2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfY3B1
X3RvX2JlMTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2Jl
MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2JlNjQoeCkJ
Z3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MTYoeCkJZ3J1Yl9z
d2FwX2J5dGVzMTYoeCkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MzIoeCkJZ3J1Yl9zd2FwX2J5
dGVzMzIoeCkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQo
eCkKKyMgaWZkZWYgR1JVQl9UQVJHRVRfV09SRFNfQklHRU5ESUFOCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDE2KHgpCWdydWJfc3dhcF9ieXRlczE2KHgpCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDMyKHgpCWdydWJfc3dhcF9ieXRlczMyKHgpCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDY0KHgpCWdydWJfc3dhcF9ieXRlczY0KHgpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDE2KHgpCWdydWJfc3dhcF9ieXRlczE2KHgpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDMyKHgpCWdydWJfc3dhcF9ieXRlczMyKHgpCisjICBkZWZpbmUgZ3J1Yl9o
b3N0X3RvX3RhcmdldDY0KHgpCWdydWJfc3dhcF9ieXRlczY0KHgpCisjIGVsc2UgLyogISBHUlVC
X1RBUkdFVF9XT1JEU19CSUdFTkRJQU4gKi8KKyMgIGRlZmluZSBncnViX3RhcmdldF90b19ob3N0
MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl90YXJnZXRfdG9faG9z
dDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3RvX2hv
c3Q2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9fdGFy
Z2V0MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl9ob3N0X3RvX3Rh
cmdldDMyKHgpCSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfaG9zdF90b190
YXJnZXQ2NCh4KQkoKGdydWJfdWludDY0X3QpICh4KSkKKyMgZW5kaWYKKyNlbmRpZiAvKiAhIFdP
UkRTX0JJR0VORElBTiAqLworCisjaWYgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUCA9PSA4Cisj
ICBkZWZpbmUgZ3J1Yl9ob3N0X3RvX3RhcmdldF9hZGRyKHgpIGdydWJfaG9zdF90b190YXJnZXQ2
NCh4KQorI2Vsc2UKKyMgIGRlZmluZSBncnViX2hvc3RfdG9fdGFyZ2V0X2FkZHIoeCkgZ3J1Yl9o
b3N0X3RvX3RhcmdldDMyKHgpCisjZW5kaWYKKworCisKKworCisKKworI2VuZGlmIC8qICEgR1JV
Ql9UWVBFU19IRUFERVIgKi8KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjIt
YS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWNvbmZpZy5oIHhlbi00LjEuMi1iL3Rvb2xzL2lv
ZW11LXFlbXUteGVuL2dydWItY29uZmlnLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFl
bXUteGVuL2dydWItY29uZmlnLmgJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAwMDAgKzA3MDAK
KysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItY29uZmlnLmgJMjAxMi0x
Mi0yOCAxNjowMjo0MS4wMDY5MzI0MTcgKzA4MDAKQEAgLTAsMCArMSwyNTEgQEAKKy8qIGNvbmZp
Zy5oLiAgR2VuZXJhdGVkIGZyb20gY29uZmlnLmguaW4gYnkgY29uZmlndXJlLiAgKi8KKy8qIGNv
bmZpZy5oLmluLiAgR2VuZXJhdGVkIGZyb20gY29uZmlndXJlLmFjIGJ5IGF1dG9oZWFkZXIuICAq
LworCisvKiBEZWZpbmUgaXQgaWYgR0FTIHJlcXVpcmVzIHRoYXQgYWJzb2x1dGUgaW5kaXJlY3Qg
Y2FsbHMvanVtcHMgYXJlIG5vdAorICAgcHJlZml4ZWQgd2l0aCBhbiBhc3RlcmlzayAqLworLyog
I3VuZGVmIEFCU09MVVRFX1dJVEhPVVRfQVNURVJJU0sgKi8KKworLyogRGVmaW5lIGl0IHRvIFwi
YWRkcjMyXCIgb3IgXCJhZGRyMzI7XCIgdG8gbWFrZSBHQVMgaGFwcHkgKi8KKyNkZWZpbmUgQURE
UjMyIGFkZHIzMgorCisvKiBEZWZpbmUgaXQgdG8gXCJkYXRhMzJcIiBvciBcImRhdGEzMjtcIiB0
byBtYWtlIEdBUyBoYXBweSAqLworI2RlZmluZSBEQVRBMzIgZGF0YTMyCisKKy8qIERlZmluZSB0
byAxIGlmIHRyYW5zbGF0aW9uIG9mIHByb2dyYW0gbWVzc2FnZXMgdG8gdGhlIHVzZXIncyBuYXRp
dmUKKyAgIGxhbmd1YWdlIGlzIHJlcXVlc3RlZC4gKi8KKyNkZWZpbmUgRU5BQkxFX05MUyAxCisK
Ky8qIERlZmluZSBpZiBDIHN5bWJvbHMgZ2V0IGFuIHVuZGVyc2NvcmUgYWZ0ZXIgY29tcGlsYXRp
b24gKi8KKy8qICN1bmRlZiBIQVZFX0FTTV9VU0NPUkUgKi8KKworLyogRGVmaW5lIHRvIDEgaWYg
eW91IGhhdmUgdGhlIGBhc3ByaW50ZicgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfQVNQUklO
VEYgMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgTWFjT1MgWCBmdW5jdGlvbiBD
RkxvY2FsZUNvcHlDdXJyZW50IGluIHRoZQorICAgQ29yZUZvdW5kYXRpb24gZnJhbWV3b3JrLiAq
LworLyogI3VuZGVmIEhBVkVfQ0ZMT0NBTEVDT1BZQ1VSUkVOVCAqLworCisvKiBEZWZpbmUgdG8g
MSBpZiB5b3UgaGF2ZSB0aGUgTWFjT1MgWCBmdW5jdGlvbiBDRlByZWZlcmVuY2VzQ29weUFwcFZh
bHVlIGluCisgICB0aGUgQ29yZUZvdW5kYXRpb24gZnJhbWV3b3JrLiAqLworLyogI3VuZGVmIEhB
VkVfQ0ZQUkVGRVJFTkNFU0NPUFlBUFBWQUxVRSAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3Ug
aGF2ZSB0aGUgPGN1cnNlcy5oPiBoZWFkZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX0NVUlNF
U19IICovCisKKy8qIERlZmluZSBpZiB0aGUgR05VIGRjZ2V0dGV4dCgpIGZ1bmN0aW9uIGlzIGFs
cmVhZHkgcHJlc2VudCBvciBwcmVpbnN0YWxsZWQuCisgICAqLworI2RlZmluZSBIQVZFX0RDR0VU
VEVYVCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8ZGlyZW50Lmg+IGhlYWRl
ciBmaWxlLCBhbmQgaXQgZGVmaW5lcyBgRElSJy4KKyAgICovCisjZGVmaW5lIEhBVkVfRElSRU5U
X0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPGZ0MmJ1aWxkLmg+IGhlYWRl
ciBmaWxlLiAqLworI2RlZmluZSBIQVZFX0ZUMkJVSUxEX0ggMQorCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgYGdldGdpZCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfR0VUR0lE
IDEKKworLyogRGVmaW5lIGlmIGdldHJhd3BhcnRpdGlvbigpIGluIC1sdXRpbCBjYW4gYmUgdXNl
ZCAqLworLyogI3VuZGVmIEhBVkVfR0VUUkFXUEFSVElUSU9OICovCisKKy8qIERlZmluZSBpZiB0
aGUgR05VIGdldHRleHQoKSBmdW5jdGlvbiBpcyBhbHJlYWR5IHByZXNlbnQgb3IgcHJlaW5zdGFs
bGVkLiAqLworI2RlZmluZSBIQVZFX0dFVFRFWFQgMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3Ug
aGF2ZSB0aGUgYGdldHVpZCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfR0VUVUlEIDEKKwor
LyogRGVmaW5lIGlmIHlvdSBoYXZlIHRoZSBpY29udigpIGZ1bmN0aW9uIGFuZCBpdCB3b3Jrcy4g
Ki8KKy8qICN1bmRlZiBIQVZFX0lDT05WICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZl
IHRoZSA8aW50dHlwZXMuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5lIEhBVkVfSU5UVFlQRVNf
SCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8bGltaXRzLmg+IGhlYWRlciBm
aWxlLiAqLworI2RlZmluZSBIQVZFX0xJTUlUU19IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91
IGhhdmUgdGhlIGBsc3RhdCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfTFNUQVQgMQorCisv
KiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPG1hbGxvYy5oPiBoZWFkZXIgZmlsZS4gKi8K
KyNkZWZpbmUgSEFWRV9NQUxMT0NfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRo
ZSBgbWVtYWxpZ24nIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX01FTUFMSUdOIDEKKworLyog
RGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBtZW1tb3ZlJyBmdW5jdGlvbi4gKi8KKyNkZWZp
bmUgSEFWRV9NRU1NT1ZFIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxtZW1v
cnkuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5lIEhBVkVfTUVNT1JZX0ggMQorCisvKiBEZWZp
bmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPG5jdXJzZXMvY3Vyc2VzLmg+IGhlYWRlciBmaWxlLiAq
LworLyogI3VuZGVmIEhBVkVfTkNVUlNFU19DVVJTRVNfSCAqLworCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgPG5jdXJzZXMuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYgSEFW
RV9OQ1VSU0VTX0ggKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxuZGlyLmg+
IGhlYWRlciBmaWxlLCBhbmQgaXQgZGVmaW5lcyBgRElSJy4gKi8KKy8qICN1bmRlZiBIQVZFX05E
SVJfSCAqLworCisvKiBEZWZpbmUgaWYgb3BlbmRpc2soKSBpbiAtbHV0aWwgY2FuIGJlIHVzZWQg
Ki8KKy8qICN1bmRlZiBIQVZFX09QRU5ESVNLICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBo
YXZlIHRoZSA8cGNpL3BjaS5oPiBoZWFkZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX1BDSV9Q
Q0lfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgYHBvc2l4X21lbWFsaWdu
JyBmdW5jdGlvbi4gKi8KKyNkZWZpbmUgSEFWRV9QT1NJWF9NRU1BTElHTiAxCisKKy8qIERlZmlu
ZSBpZiByZXR1cm5zX3R3aWNlIGF0dHJpYnV0ZSBpcyBzdXBwb3J0ZWQgKi8KKy8qICN1bmRlZiBI
QVZFX1JFVFVSTlNfVFdJQ0UgKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBz
YnJrJyBmdW5jdGlvbi4gKi8KKyNkZWZpbmUgSEFWRV9TQlJLIDEKKworLyogRGVmaW5lIHRvIDEg
aWYgeW91IGhhdmUgdGhlIDxTREwvU0RMLmg+IGhlYWRlciBmaWxlLiAqLworLyogI3VuZGVmIEhB
VkVfU0RMX1NETF9IICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RkaW50
Lmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1NURElOVF9IIDEKKworLyogRGVmaW5l
IHRvIDEgaWYgeW91IGhhdmUgdGhlIDxzdGRsaWIuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5l
IEhBVkVfU1RETElCX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgYHN0cmR1
cCcgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfU1RSRFVQIDEKKworLyogRGVmaW5lIHRvIDEg
aWYgeW91IGhhdmUgdGhlIDxzdHJpbmdzLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZF
X1NUUklOR1NfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RyaW5nLmg+
IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1NUUklOR19IIDEKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvZGlyLmg+IGhlYWRlciBmaWxlLCBhbmQgaXQgZGVmaW5l
cyBgRElSJy4KKyAgICovCisvKiAjdW5kZWYgSEFWRV9TWVNfRElSX0ggKi8KKworLyogRGVmaW5l
IHRvIDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvZmNudGwuaD4gaGVhZGVyIGZpbGUuICovCisjZGVm
aW5lIEhBVkVfU1lTX0ZDTlRMX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUg
PHN5cy9ta2Rldi5oPiBoZWFkZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX1NZU19NS0RFVl9I
ICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL25kaXIuaD4gaGVhZGVy
IGZpbGUsIGFuZCBpdCBkZWZpbmVzIGBESVInLgorICAgKi8KKy8qICN1bmRlZiBIQVZFX1NZU19O
RElSX0ggKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvc3RhdC5oPiBo
ZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9TWVNfU1RBVF9IIDEKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIDxzeXMvc3lzbWFjcm9zLmg+IGhlYWRlciBmaWxlLiAqLworI2Rl
ZmluZSBIQVZFX1NZU19TWVNNQUNST1NfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZl
IHRoZSA8c3lzL3R5cGVzLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1NZU19UWVBF
U19IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDx0ZXJtaW9zLmg+IGhlYWRl
ciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1RFUk1JT1NfSCAxCisKKy8qIERlZmluZSB0byAxIGlm
IHlvdSBoYXZlIHRoZSA8dW5pc3RkLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX1VO
SVNURF9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDx1c2IuaD4gaGVhZGVy
IGZpbGUuICovCisvKiAjdW5kZWYgSEFWRV9VU0JfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5
b3UgaGF2ZSB0aGUgYHZhc3ByaW50ZicgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfVkFTUFJJ
TlRGIDEKKworLyogRGVmaW5lIHRvIDEgaWYgYG1ham9yJywgYG1pbm9yJywgYW5kIGBtYWtlZGV2
JyBhcmUgZGVjbGFyZWQgaW4gPG1rZGV2Lmg+LgorICAgKi8KKy8qICN1bmRlZiBNQUpPUl9JTl9N
S0RFViAqLworCisvKiBEZWZpbmUgdG8gMSBpZiBgbWFqb3InLCBgbWlub3InLCBhbmQgYG1ha2Vk
ZXYnIGFyZSBkZWNsYXJlZCBpbgorICAgPHN5c21hY3Jvcy5oPi4gKi8KKy8qICN1bmRlZiBNQUpP
Ul9JTl9TWVNNQUNST1MgKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGVuYWJsZSBtZW1vcnkg
bWFuYWdlciBkZWJ1Z2dpbmcuICovCisvKiAjdW5kZWYgTU1fREVCVUcgKi8KKworLyogRGVmaW5l
IHRvIDEgaWYgR0NDIGdlbmVyYXRlcyBjYWxscyB0byBfX3JlZ2lzdGVyX2ZyYW1lX2luZm8oKSAq
LworLyogI3VuZGVmIE5FRURfUkVHSVNURVJfRlJBTUVfSU5GTyAqLworCisvKiBOYW1lIG9mIHBh
Y2thZ2UgKi8KKyNkZWZpbmUgUEFDS0FHRSAiYnVyZyIKKworLyogRGVmaW5lIHRvIHRoZSBhZGRy
ZXNzIHdoZXJlIGJ1ZyByZXBvcnRzIGZvciB0aGlzIHBhY2thZ2Ugc2hvdWxkIGJlIHNlbnQuICov
CisjZGVmaW5lIFBBQ0tBR0VfQlVHUkVQT1JUICJiZWFuMTIzY2hAZ21haWwuY29tIgorCisvKiBE
ZWZpbmUgdG8gdGhlIGZ1bGwgbmFtZSBvZiB0aGlzIHBhY2thZ2UuICovCisjZGVmaW5lIFBBQ0tB
R0VfTkFNRSAiQlVSRyIKKworLyogRGVmaW5lIHRvIHRoZSBmdWxsIG5hbWUgYW5kIHZlcnNpb24g
b2YgdGhpcyBwYWNrYWdlLiAqLworI2RlZmluZSBQQUNLQUdFX1NUUklORyAiQlVSRyAxLjk4Igor
CisvKiBEZWZpbmUgdG8gdGhlIG9uZSBzeW1ib2wgc2hvcnQgbmFtZSBvZiB0aGlzIHBhY2thZ2Uu
ICovCisjZGVmaW5lIFBBQ0tBR0VfVEFSTkFNRSAiYnVyZyIKKworLyogRGVmaW5lIHRvIHRoZSB2
ZXJzaW9uIG9mIHRoaXMgcGFja2FnZS4gKi8KKyNkZWZpbmUgUEFDS0FHRV9WRVJTSU9OICIxLjk4
IgorCisvKiBUaGUgc2l6ZSBvZiBgbG9uZycsIGFzIGNvbXB1dGVkIGJ5IHNpemVvZi4gKi8KKyNk
ZWZpbmUgU0laRU9GX0xPTkcgOAorCisvKiBUaGUgc2l6ZSBvZiBgdm9pZCAqJywgYXMgY29tcHV0
ZWQgYnkgc2l6ZW9mLiAqLworI2RlZmluZSBTSVpFT0ZfVk9JRF9QIDgKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIEFOU0kgQyBoZWFkZXIgZmlsZXMuICovCisjZGVmaW5lIFNURENf
SEVBREVSUyAxCisKKy8qIFZlcnNpb24gbnVtYmVyIG9mIHBhY2thZ2UgKi8KKyNkZWZpbmUgVkVS
U0lPTiAiMS45OCIKKworLyogRGVmaW5lIFdPUkRTX0JJR0VORElBTiB0byAxIGlmIHlvdXIgcHJv
Y2Vzc29yIHN0b3JlcyB3b3JkcyB3aXRoIHRoZSBtb3N0CisgICBzaWduaWZpY2FudCBieXRlIGZp
cnN0IChsaWtlIE1vdG9yb2xhIGFuZCBTUEFSQywgdW5saWtlIEludGVsIGFuZCBWQVgpLiAqLwor
I2lmIGRlZmluZWQgX19CSUdfRU5ESUFOX18KKyMgZGVmaW5lIFdPUkRTX0JJR0VORElBTiAxCisj
ZWxpZiAhIGRlZmluZWQgX19MSVRUTEVfRU5ESUFOX18KKy8qICMgdW5kZWYgV09SRFNfQklHRU5E
SUFOICovCisjZW5kaWYKKworLyogRGVmaW5lIHRvIDEgaWYgYGxleCcgZGVjbGFyZXMgYHl5dGV4
dCcgYXMgYSBgY2hhciAqJyBieSBkZWZhdWx0LCBub3QgYQorICAgYGNoYXJbXScuICovCisjZGVm
aW5lIFlZVEVYVF9QT0lOVEVSIDEKKworLyogTnVtYmVyIG9mIGJpdHMgaW4gYSBmaWxlIG9mZnNl
dCwgb24gaG9zdHMgd2hlcmUgdGhpcyBpcyBzZXR0YWJsZS4gKi8KKy8qICN1bmRlZiBfRklMRV9P
RkZTRVRfQklUUyAqLworCisvKiBEZWZpbmUgZm9yIGxhcmdlIGZpbGVzLCBvbiBBSVgtc3R5bGUg
aG9zdHMuICovCisvKiAjdW5kZWYgX0xBUkdFX0ZJTEVTICovCisKKy8qIERlZmluZSB0byAxIGlm
IG9uIE1JTklYLiAqLworLyogI3VuZGVmIF9NSU5JWCAqLworCisvKiBEZWZpbmUgdG8gMiBpZiB0
aGUgc3lzdGVtIGRvZXMgbm90IHByb3ZpZGUgUE9TSVguMSBmZWF0dXJlcyBleGNlcHQgd2l0aAor
ICAgdGhpcyBkZWZpbmVkLiAqLworLyogI3VuZGVmIF9QT1NJWF8xX1NPVVJDRSAqLworCisvKiBE
ZWZpbmUgdG8gMSBpZiB5b3UgbmVlZCB0byBpbiBvcmRlciBmb3IgYHN0YXQnIGFuZCBvdGhlciB0
aGluZ3MgdG8gd29yay4gKi8KKy8qICN1bmRlZiBfUE9TSVhfU09VUkNFICovCisKKy8qIEVuYWJs
ZSBleHRlbnNpb25zIG9uIEFJWCAzLCBJbnRlcml4LiAgKi8KKyNpZm5kZWYgX0FMTF9TT1VSQ0UK
KyMgZGVmaW5lIF9BTExfU09VUkNFIDEKKyNlbmRpZgorLyogRW5hYmxlIEdOVSBleHRlbnNpb25z
IG9uIHN5c3RlbXMgdGhhdCBoYXZlIHRoZW0uICAqLworI2lmbmRlZiBfR05VX1NPVVJDRQorIyBk
ZWZpbmUgX0dOVV9TT1VSQ0UgMQorI2VuZGlmCisvKiBFbmFibGUgdGhyZWFkaW5nIGV4dGVuc2lv
bnMgb24gU29sYXJpcy4gICovCisjaWZuZGVmIF9QT1NJWF9QVEhSRUFEX1NFTUFOVElDUworIyBk
ZWZpbmUgX1BPU0lYX1BUSFJFQURfU0VNQU5USUNTIDEKKyNlbmRpZgorLyogRW5hYmxlIGV4dGVu
c2lvbnMgb24gSFAgTm9uU3RvcC4gICovCisjaWZuZGVmIF9UQU5ERU1fU09VUkNFCisjIGRlZmlu
ZSBfVEFOREVNX1NPVVJDRSAxCisjZW5kaWYKKy8qIEVuYWJsZSBnZW5lcmFsIGV4dGVuc2lvbnMg
b24gU29sYXJpcy4gICovCisjaWZuZGVmIF9fRVhURU5TSU9OU19fCisjIGRlZmluZSBfX0VYVEVO
U0lPTlNfXyAxCisjZW5kaWYKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEu
Mi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWJfZXJyLmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vZ3J1Yl9lcnIuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vZ3J1Yl9lcnIuYwkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVu
LTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yl9lcnIuYwkyMDEyLTEyLTI4IDE2OjAy
OjQxLjAwNzczNDE2NCArMDgwMApAQCAtMCwwICsxLDE4NiBAQAorLyogZXJyLmMgLSBlcnJvciBo
YW5kbGluZyByb3V0aW5lcyAqLworLyoKKyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290
bG9hZGVyCisgKiAgQ29weXJpZ2h0IChDKSAyMDAyLDIwMDUsMjAwNywyMDA4ICBGcmVlIFNvZnR3
YXJlIEZvdW5kYXRpb24sIEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVlIHNvZnR3YXJlOiB5b3Ug
Y2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1z
IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0
aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNl
bnNlLCBvcgorICogIChhdCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICog
IEdSVUIgaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwK
KyAqICBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3
YXJyYW50eSBvZgorICogIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VM
QVIgUFVSUE9TRS4gIFNlZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3Ig
bW9yZSBkZXRhaWxzLgorICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9m
IHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZQorICogIGFsb25nIHdpdGggR1JVQi4gIElm
IG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4KKyAqLworCisjaW5jbHVk
ZSAiZ3J1Yl9lcnIuaCIKKyNpbmNsdWRlICJtaXNjLmgiCisjaW5jbHVkZSA8c3RkYXJnLmg+Cisj
aW5jbHVkZSA8c3RkaW8uaD4KKyNpbmNsdWRlIDxzdGRsaWIuaD4KKworI2RlZmluZSBHUlVCX01B
WF9FUlJNU0cJCTI1NgorI2RlZmluZSBHUlVCX0VSUk9SX1NUQUNLX1NJWkUJMTAKKworZ3J1Yl9l
cnJfdCBncnViX2Vycm5vOworY2hhciBncnViX2Vycm1zZ1tHUlVCX01BWF9FUlJNU0ddOworCitz
dGF0aWMgc3RydWN0Cit7CisgIGdydWJfZXJyX3QgZXJybm87CisgIGNoYXIgZXJybXNnW0dSVUJf
TUFYX0VSUk1TR107Cit9IGdydWJfZXJyb3Jfc3RhY2tfaXRlbXNbR1JVQl9FUlJPUl9TVEFDS19T
SVpFXTsKKworc3RhdGljIGludCBncnViX2Vycm9yX3N0YWNrX3BvczsKK3N0YXRpYyBpbnQgZ3J1
Yl9lcnJvcl9zdGFja19hc3NlcnQ7CisKKworCitzdGF0aWMgaW50CitncnViX3ZzbnByaW50ZiAo
Y2hhciAqc3RyLCBncnViX3NpemVfdCBuLCBjb25zdCBjaGFyICpmbXQsIHZhX2xpc3QgYXApCit7
CisgIGdydWJfc2l6ZV90IHJldDsKKworICBpZiAoIW4pCisgICAgcmV0dXJuIDA7CisKKyAgCisg
IHJldCA9IHZzbnByaW50ZihzdHIsIG4sIGZtdCwgYXApOworICBwcmludGYoIiVzXG4iLCBzdHIp
OworICByZXR1cm4gcmV0IDwgbiA/IHJldCA6IG47Cit9CisKKworCitzdGF0aWMgaW50CitncnVi
X3ZwcmludGYgKGNvbnN0IGNoYXIgKmZtdCwgdmFfbGlzdCBhcmdzKQoreworICBpbnQgcmV0Owor
CisgIHJldCA9IGdydWJfdnNucHJpbnRmIChncnViX2Vycm1zZywgc2l6ZW9mIChncnViX2Vycm1z
ZyksIGZtdCwgYXJncyk7CisKKyAgcmV0dXJuIHJldDsKK30KKworaW50CitncnViX2Vycl9wcmlu
dGYgKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQoreworICB2YV9saXN0IGFwOworICBpbnQgcmV0Owor
CisgIHZhX3N0YXJ0IChhcCwgZm10KTsKKyAgcmV0ID0gZ3J1Yl92cHJpbnRmIChmbXQsIGFwKTsK
KyAgdmFfZW5kIChhcCk7CisKKyAgcmV0dXJuIHJldDsKK30KKworCitncnViX2Vycl90CitncnVi
X2Vycm9yIChncnViX2Vycl90IG4sIGNvbnN0IGNoYXIgKmZtdCwgLi4uKQoreworICB2YV9saXN0
IGFwOworCisgIGdydWJfZXJybm8gPSBuOworICB2YV9zdGFydCAoYXAsIGZtdCk7CisgIGdydWJf
dnNucHJpbnRmIChncnViX2Vycm1zZywgc2l6ZW9mIChncnViX2Vycm1zZyksIGZtdCwgYXApOwor
ICB2YV9lbmQgKGFwKTsKKyAgCisgIHJldHVybiBuOworfQorCit2b2lkCitncnViX2ZhdGFsIChj
b25zdCBjaGFyICpmbXQsIC4uLikKK3sKKyAgdmFfbGlzdCBhcDsKKworICB2YV9zdGFydCAoYXAs
IGZtdCk7CisgIGdydWJfdnByaW50ZiAoXyhmbXQpLCBhcCk7CisgIHZhX2VuZCAoYXApOworCisg
IGV4aXQoMSk7Cit9CisKK3ZvaWQKK2dydWJfZXJyb3JfcHVzaCAodm9pZCkKK3sKKyAgLyogT25s
eSBhZGQgaXRlbXMgdG8gc3RhY2ssIGlmIHRoZXJlIGlzIGVub3VnaCByb29tLiAgKi8KKyAgaWYg
KGdydWJfZXJyb3Jfc3RhY2tfcG9zIDwgR1JVQl9FUlJPUl9TVEFDS19TSVpFKQorICAgIHsKKyAg
ICAgIC8qIENvcHkgYWN0aXZlIGVycm9yIG1lc3NhZ2UgdG8gc3RhY2suICAqLworICAgICAgZ3J1
Yl9lcnJvcl9zdGFja19pdGVtc1tncnViX2Vycm9yX3N0YWNrX3Bvc10uZXJybm8gPSBncnViX2Vy
cm5vOworICAgICAgZ3J1Yl9tZW1jcHkgKGdydWJfZXJyb3Jfc3RhY2tfaXRlbXNbZ3J1Yl9lcnJv
cl9zdGFja19wb3NdLmVycm1zZywKKyAgICAgICAgICAgICAgICAgICBncnViX2Vycm1zZywKKyAg
ICAgICAgICAgICAgICAgICBzaXplb2YgKGdydWJfZXJybXNnKSk7CisKKyAgICAgIC8qIEFkdmFu
Y2UgdG8gbmV4dCBlcnJvciBzdGFjayBwb3NpdGlvbi4gICovCisgICAgICBncnViX2Vycm9yX3N0
YWNrX3BvcysrOworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIC8qIFRoZXJlIGlzIG5vIHJv
b20gZm9yIG5ldyBlcnJvciBtZXNzYWdlLiBEaXNjYXJkIG5ldyBlcnJvciBtZXNzYWdlCisgICAg
ICAgICBhbmQgbWFyayBlcnJvciBzdGFjayBhc3NlcnRpb24gZmxhZy4gICovCisgICAgICBncnVi
X2Vycm9yX3N0YWNrX2Fzc2VydCA9IDE7CisgICAgfQorCisgIC8qIEFsbG93IGZ1cnRoZXIgb3Bl
cmF0aW9uIG9mIG90aGVyIGNvbXBvbmVudHMgYnkgcmVzZXR0aW5nCisgICAgIGFjdGl2ZSBlcnJu
byB0byBHUlVCX0VSUl9OT05FLiAgKi8KKyAgZ3J1Yl9lcnJubyA9IEdSVUJfRVJSX05PTkU7Cit9
CisKK2ludAorZ3J1Yl9lcnJvcl9wb3AgKHZvaWQpCit7CisgIGlmIChncnViX2Vycm9yX3N0YWNr
X3BvcyA+IDApCisgICAgeworICAgICAgLyogUG9wIGVycm9yIG1lc3NhZ2UgZnJvbSBlcnJvciBz
dGFjayB0byBjdXJyZW50IGFjdGl2ZSBlcnJvci4gICovCisgICAgICBncnViX2Vycm9yX3N0YWNr
X3Bvcy0tOworCisgICAgICBncnViX2Vycm5vID0gZ3J1Yl9lcnJvcl9zdGFja19pdGVtc1tncnVi
X2Vycm9yX3N0YWNrX3Bvc10uZXJybm87CisgICAgICBncnViX21lbWNweSAoZ3J1Yl9lcnJtc2cs
CisgICAgICAgICAgICAgICAgICAgZ3J1Yl9lcnJvcl9zdGFja19pdGVtc1tncnViX2Vycm9yX3N0
YWNrX3Bvc10uZXJybXNnLAorICAgICAgICAgICAgICAgICAgIHNpemVvZiAoZ3J1Yl9lcnJtc2cp
KTsKKworICAgICAgcmV0dXJuIDE7CisgICAgfQorICBlbHNlCisgICAgeworICAgICAgLyogVGhl
cmUgaXMgbm8gbW9yZSBpdGVtcyBvbiBlcnJvciBzdGFjaywgcmVzZXQgdG8gbm8gZXJyb3Igc3Rh
dGUuICAqLworICAgICAgZ3J1Yl9lcnJubyA9IEdSVUJfRVJSX05PTkU7CisKKyAgICAgIHJldHVy
biAwOworICAgIH0KK30KKwordm9pZAorZ3J1Yl9wcmludF9lcnJvciAodm9pZCkKK3sKKyAgLyog
UHJpbnQgZXJyb3IgbWVzc2FnZXMgaW4gcmV2ZXJzZSBvcmRlci4gRmlyc3QgcHJpbnQgYWN0aXZl
IGVycm9yIG1lc3NhZ2UKKyAgICAgYW5kIHRoZW4gZW1wdHkgZXJyb3Igc3RhY2suICAqLworICBk
bworICAgIHsKKyAgICAgIGlmIChncnViX2Vycm5vICE9IEdSVUJfRVJSX05PTkUpCisgICAgICAg
IGdydWJfZXJyX3ByaW50ZiAoImVycm9yOiAlcy5cbiIsIGdydWJfZXJybXNnKTsKKyAgICB9Cisg
IHdoaWxlIChncnViX2Vycm9yX3BvcCAoKSk7CisKKyAgLyogSWYgdGhlcmUgd2FzIGFuIGFzc2Vy
dCB3aGlsZSB1c2luZyBlcnJvciBzdGFjaywgcmVwb3J0IGFib3V0IGl0LiAgKi8KKyAgaWYgKGdy
dWJfZXJyb3Jfc3RhY2tfYXNzZXJ0KQorICAgIHsKKyAgICAgIGdydWJfZXJyX3ByaW50ZiAoImFz
c2VydDogZXJyb3Igc3RhY2sgb3ZlcmZsb3cgZGV0ZWN0ZWQhXG4iKTsKKyAgICAgIGdydWJfZXJy
b3Jfc3RhY2tfYXNzZXJ0ID0gMDsKKyAgICB9Cit9CisKKworaW50IHRlc3RfZ3J1Yl9lcnIoKQor
eworICBncnViX2Vycm9yKDIyMiwgInRlc3QgJXNcbiIsICJncnViX2Vycm9yIik7CisgIGdydWJf
ZXJyX3ByaW50ZigidGVzdCAlc1xuIiwgImdydWJfZXJyX3ByaW50ZiIpOworICBncnViX2ZhdGFs
KCJ0ZXN0ICVzXG4iLCAiZ3J1Yl9mYXRhbCIpOworfQorCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJw
TiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yl9lcnIuaCB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViX2Vyci5oCi0tLSB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViX2Vyci5oCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAw
ICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViX2Vyci5oCTIw
MTItMTItMjggMTY6MDI6NDEuMDA3NzM0MTY0ICswODAwCkBAIC0wLDAgKzEsODEgQEAKKy8qIGVy
ci5oIC0gZXJyb3IgbnVtYmVycyBhbmQgcHJvdG90eXBlcyAqLworLyoKKyAqICBHUlVCICAtLSAg
R1JhbmQgVW5pZmllZCBCb290bG9hZGVyCisgKiAgQ29weXJpZ2h0IChDKSAyMDAyLDIwMDUsMjAw
NywyMDA4IEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgSW5jLgorICoKKyAqICBHUlVCIGlzIGZy
ZWUgc29mdHdhcmU6IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkKKyAqICBp
dCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1
Ymxpc2hlZCBieQorICogIHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIGVpdGhlciB2ZXJz
aW9uIDMgb2YgdGhlIExpY2Vuc2UsIG9yCisgKiAgKGF0IHlvdXIgb3B0aW9uKSBhbnkgbGF0ZXIg
dmVyc2lvbi4KKyAqCisgKiAgR1JVQiBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0
IHdpbGwgYmUgdXNlZnVsLAorICogIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBl
dmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5F
U1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1
YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSBy
ZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxv
bmcgd2l0aCBHUlVCLiAgSWYgbm90LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+
LgorICovCisKKyNpZm5kZWYgR1JVQl9FUlJfSEVBREVSCisjZGVmaW5lIEdSVUJfRVJSX0hFQURF
UgkxCisKKwordHlwZWRlZiBlbnVtCisgIHsKKyAgICBHUlVCX0VSUl9OT05FID0gMCwKKyAgICBH
UlVCX0VSUl9URVNUX0ZBSUxVUkUsCisgICAgR1JVQl9FUlJfQkFEX01PRFVMRSwKKyAgICBHUlVC
X0VSUl9PVVRfT0ZfTUVNT1JZLAorICAgIEdSVUJfRVJSX0JBRF9GSUxFX1RZUEUsCisgICAgR1JV
Ql9FUlJfRklMRV9OT1RfRk9VTkQsCisgICAgR1JVQl9FUlJfRklMRV9SRUFEX0VSUk9SLAorICAg
IEdSVUJfRVJSX0JBRF9GSUxFTkFNRSwKKyAgICBHUlVCX0VSUl9VTktOT1dOX0ZTLAorICAgIEdS
VUJfRVJSX0JBRF9GUywKKyAgICBHUlVCX0VSUl9CQURfTlVNQkVSLAorICAgIEdSVUJfRVJSX09V
VF9PRl9SQU5HRSwKKyAgICBHUlVCX0VSUl9VTktOT1dOX0RFVklDRSwKKyAgICBHUlVCX0VSUl9C
QURfREVWSUNFLAorICAgIEdSVUJfRVJSX1JFQURfRVJST1IsCisgICAgR1JVQl9FUlJfV1JJVEVf
RVJST1IsCisgICAgR1JVQl9FUlJfVU5LTk9XTl9DT01NQU5ELAorICAgIEdSVUJfRVJSX0lOVkFM
SURfQ09NTUFORCwKKyAgICBHUlVCX0VSUl9CQURfQVJHVU1FTlQsCisgICAgR1JVQl9FUlJfQkFE
X1BBUlRfVEFCTEUsCisgICAgR1JVQl9FUlJfVU5LTk9XTl9PUywKKyAgICBHUlVCX0VSUl9CQURf
T1MsCisgICAgR1JVQl9FUlJfTk9fS0VSTkVMLAorICAgIEdSVUJfRVJSX0JBRF9GT05ULAorICAg
IEdSVUJfRVJSX05PVF9JTVBMRU1FTlRFRF9ZRVQsCisgICAgR1JVQl9FUlJfU1lNTElOS19MT09Q
LAorICAgIEdSVUJfRVJSX0JBRF9HWklQX0RBVEEsCisgICAgR1JVQl9FUlJfTUVOVSwKKyAgICBH
UlVCX0VSUl9USU1FT1VULAorICAgIEdSVUJfRVJSX0lPLAorICAgIEdSVUJfRVJSX0FDQ0VTU19E
RU5JRUQsCisgICAgR1JVQl9FUlJfTUVOVV9FU0NBUEUsCisgICAgR1JVQl9FUlJfTk9UX0ZPVU5E
LAorICAgIEdSVUJfRVJSX1VOS05PV04KKworICB9CitncnViX2Vycl90OworCisKKyNpZm5kZWYg
XworIyBkZWZpbmUgXyhTdHJpbmcpIFN0cmluZworI2VuZGlmCisKK2V4dGVybiBncnViX2Vycl90
IGdydWJfZXJybm87CitleHRlcm4gY2hhciBncnViX2Vycm1zZ1tdOworCitncnViX2Vycl90IGdy
dWJfZXJyb3IgKGdydWJfZXJyX3QgbiwgY29uc3QgY2hhciAqZm10LCAuLi4pOwordm9pZCBncnVi
X2ZhdGFsIChjb25zdCBjaGFyICpmbXQsIC4uLikgX19hdHRyaWJ1dGVfXygobm9yZXR1cm4pKTsK
K3ZvaWQgZ3J1Yl9lcnJvcl9wdXNoICh2b2lkKTsKK2ludCBncnViX2Vycm9yX3BvcCAodm9pZCk7
Cit2b2lkIGdydWJfcHJpbnRfZXJyb3IgKHZvaWQpOworaW50IGdydWJfZXJyX3ByaW50ZiAoY29u
c3QgY2hhciAqZm10LCAuLi4pCisgICAgIF9fYXR0cmlidXRlX18gKChmb3JtYXQgKHByaW50Ziwg
MSwgMikpKTsKK2ludCB0ZXN0X2dydWJfZXJyKHZvaWQpOworCisjZW5kaWYgLyogISBHUlVCX0VS
Ul9IRUFERVIgKi8KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9jb25maWcuaCB4ZW4tNC4xLjItYi90b29scy9pb2Vt
dS1xZW11LXhlbi9ncnViLWZhdC9jb25maWcuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUt
cWVtdS14ZW4vZ3J1Yi1mYXQvY29uZmlnLmgJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAwMDAg
KzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItZmF0L2NvbmZp
Zy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDA4NjQwODM4ICswODAwCkBAIC0wLDAgKzEsMjUxIEBA
CisvKiBjb25maWcuaC4gIEdlbmVyYXRlZCBmcm9tIGNvbmZpZy5oLmluIGJ5IGNvbmZpZ3VyZS4g
ICovCisvKiBjb25maWcuaC5pbi4gIEdlbmVyYXRlZCBmcm9tIGNvbmZpZ3VyZS5hYyBieSBhdXRv
aGVhZGVyLiAgKi8KKworLyogRGVmaW5lIGl0IGlmIEdBUyByZXF1aXJlcyB0aGF0IGFic29sdXRl
IGluZGlyZWN0IGNhbGxzL2p1bXBzIGFyZSBub3QKKyAgIHByZWZpeGVkIHdpdGggYW4gYXN0ZXJp
c2sgKi8KKy8qICN1bmRlZiBBQlNPTFVURV9XSVRIT1VUX0FTVEVSSVNLICovCisKKy8qIERlZmlu
ZSBpdCB0byBcImFkZHIzMlwiIG9yIFwiYWRkcjMyO1wiIHRvIG1ha2UgR0FTIGhhcHB5ICovCisj
ZGVmaW5lIEFERFIzMiBhZGRyMzIKKworLyogRGVmaW5lIGl0IHRvIFwiZGF0YTMyXCIgb3IgXCJk
YXRhMzI7XCIgdG8gbWFrZSBHQVMgaGFwcHkgKi8KKyNkZWZpbmUgREFUQTMyIGRhdGEzMgorCisv
KiBEZWZpbmUgdG8gMSBpZiB0cmFuc2xhdGlvbiBvZiBwcm9ncmFtIG1lc3NhZ2VzIHRvIHRoZSB1
c2VyJ3MgbmF0aXZlCisgICBsYW5ndWFnZSBpcyByZXF1ZXN0ZWQuICovCisjZGVmaW5lIEVOQUJM
RV9OTFMgMQorCisvKiBEZWZpbmUgaWYgQyBzeW1ib2xzIGdldCBhbiB1bmRlcnNjb3JlIGFmdGVy
IGNvbXBpbGF0aW9uICovCisvKiAjdW5kZWYgSEFWRV9BU01fVVNDT1JFICovCisKKy8qIERlZmlu
ZSB0byAxIGlmIHlvdSBoYXZlIHRoZSBgYXNwcmludGYnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBI
QVZFX0FTUFJJTlRGIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIE1hY09TIFgg
ZnVuY3Rpb24gQ0ZMb2NhbGVDb3B5Q3VycmVudCBpbiB0aGUKKyAgIENvcmVGb3VuZGF0aW9uIGZy
YW1ld29yay4gKi8KKy8qICN1bmRlZiBIQVZFX0NGTE9DQUxFQ09QWUNVUlJFTlQgKi8KKworLyog
RGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIE1hY09TIFggZnVuY3Rpb24gQ0ZQcmVmZXJlbmNl
c0NvcHlBcHBWYWx1ZSBpbgorICAgdGhlIENvcmVGb3VuZGF0aW9uIGZyYW1ld29yay4gKi8KKy8q
ICN1bmRlZiBIQVZFX0NGUFJFRkVSRU5DRVNDT1BZQVBQVkFMVUUgKi8KKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIDxjdXJzZXMuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYg
SEFWRV9DVVJTRVNfSCAqLworCisvKiBEZWZpbmUgaWYgdGhlIEdOVSBkY2dldHRleHQoKSBmdW5j
dGlvbiBpcyBhbHJlYWR5IHByZXNlbnQgb3IgcHJlaW5zdGFsbGVkLgorICAgKi8KKyNkZWZpbmUg
SEFWRV9EQ0dFVFRFWFQgMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPGRpcmVu
dC5oPiBoZWFkZXIgZmlsZSwgYW5kIGl0IGRlZmluZXMgYERJUicuCisgICAqLworI2RlZmluZSBI
QVZFX0RJUkVOVF9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxmdDJidWls
ZC5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9GVDJCVUlMRF9IIDEKKworLyogRGVm
aW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBnZXRnaWQnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBI
QVZFX0dFVEdJRCAxCisKKy8qIERlZmluZSBpZiBnZXRyYXdwYXJ0aXRpb24oKSBpbiAtbHV0aWwg
Y2FuIGJlIHVzZWQgKi8KKy8qICN1bmRlZiBIQVZFX0dFVFJBV1BBUlRJVElPTiAqLworCisvKiBE
ZWZpbmUgaWYgdGhlIEdOVSBnZXR0ZXh0KCkgZnVuY3Rpb24gaXMgYWxyZWFkeSBwcmVzZW50IG9y
IHByZWluc3RhbGxlZC4gKi8KKyNkZWZpbmUgSEFWRV9HRVRURVhUIDEKKworLyogRGVmaW5lIHRv
IDEgaWYgeW91IGhhdmUgdGhlIGBnZXR1aWQnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX0dF
VFVJRCAxCisKKy8qIERlZmluZSBpZiB5b3UgaGF2ZSB0aGUgaWNvbnYoKSBmdW5jdGlvbiBhbmQg
aXQgd29ya3MuICovCisvKiAjdW5kZWYgSEFWRV9JQ09OViAqLworCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgPGludHR5cGVzLmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZF
X0lOVFRZUEVTX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPGxpbWl0cy5o
PiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9MSU1JVFNfSCAxCisKKy8qIERlZmluZSB0
byAxIGlmIHlvdSBoYXZlIHRoZSBgbHN0YXQnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX0xT
VEFUIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxtYWxsb2MuaD4gaGVhZGVy
IGZpbGUuICovCisjZGVmaW5lIEhBVkVfTUFMTE9DX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5
b3UgaGF2ZSB0aGUgYG1lbWFsaWduJyBmdW5jdGlvbi4gKi8KKyNkZWZpbmUgSEFWRV9NRU1BTElH
TiAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSBgbWVtbW92ZScgZnVuY3Rpb24u
ICovCisjZGVmaW5lIEhBVkVfTUVNTU9WRSAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZl
IHRoZSA8bWVtb3J5Lmg+IGhlYWRlciBmaWxlLiAqLworI2RlZmluZSBIQVZFX01FTU9SWV9IIDEK
KworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxuY3Vyc2VzL2N1cnNlcy5oPiBoZWFk
ZXIgZmlsZS4gKi8KKy8qICN1bmRlZiBIQVZFX05DVVJTRVNfQ1VSU0VTX0ggKi8KKworLyogRGVm
aW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIDxuY3Vyc2VzLmg+IGhlYWRlciBmaWxlLiAqLworLyog
I3VuZGVmIEhBVkVfTkNVUlNFU19IICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRo
ZSA8bmRpci5oPiBoZWFkZXIgZmlsZSwgYW5kIGl0IGRlZmluZXMgYERJUicuICovCisvKiAjdW5k
ZWYgSEFWRV9ORElSX0ggKi8KKworLyogRGVmaW5lIGlmIG9wZW5kaXNrKCkgaW4gLWx1dGlsIGNh
biBiZSB1c2VkICovCisvKiAjdW5kZWYgSEFWRV9PUEVORElTSyAqLworCisvKiBEZWZpbmUgdG8g
MSBpZiB5b3UgaGF2ZSB0aGUgPHBjaS9wY2kuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYg
SEFWRV9QQ0lfUENJX0ggKi8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUgdGhlIGBwb3Np
eF9tZW1hbGlnbicgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfUE9TSVhfTUVNQUxJR04gMQor
CisvKiBEZWZpbmUgaWYgcmV0dXJuc190d2ljZSBhdHRyaWJ1dGUgaXMgc3VwcG9ydGVkICovCisv
KiAjdW5kZWYgSEFWRV9SRVRVUk5TX1RXSUNFICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBo
YXZlIHRoZSBgc2JyaycgZnVuY3Rpb24uICovCisjZGVmaW5lIEhBVkVfU0JSSyAxCisKKy8qIERl
ZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8U0RML1NETC5oPiBoZWFkZXIgZmlsZS4gKi8KKy8q
ICN1bmRlZiBIQVZFX1NETF9TRExfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0
aGUgPHN0ZGludC5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9TVERJTlRfSCAxCisK
Ky8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RkbGliLmg+IGhlYWRlciBmaWxlLiAq
LworI2RlZmluZSBIQVZFX1NURExJQl9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91IGhhdmUg
dGhlIGBzdHJkdXAnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBIQVZFX1NUUkRVUCAxCisKKy8qIERl
ZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3RyaW5ncy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNk
ZWZpbmUgSEFWRV9TVFJJTkdTX0ggMQorCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUg
PHN0cmluZy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9TVFJJTkdfSCAxCisKKy8q
IERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL2Rpci5oPiBoZWFkZXIgZmlsZSwgYW5k
IGl0IGRlZmluZXMgYERJUicuCisgICAqLworLyogI3VuZGVmIEhBVkVfU1lTX0RJUl9IICovCisK
Ky8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL2ZjbnRsLmg+IGhlYWRlciBmaWxl
LiAqLworI2RlZmluZSBIQVZFX1NZU19GQ05UTF9IIDEKKworLyogRGVmaW5lIHRvIDEgaWYgeW91
IGhhdmUgdGhlIDxzeXMvbWtkZXYuaD4gaGVhZGVyIGZpbGUuICovCisvKiAjdW5kZWYgSEFWRV9T
WVNfTUtERVZfSCAqLworCisvKiBEZWZpbmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPHN5cy9uZGly
Lmg+IGhlYWRlciBmaWxlLCBhbmQgaXQgZGVmaW5lcyBgRElSJy4KKyAgICovCisvKiAjdW5kZWYg
SEFWRV9TWVNfTkRJUl9IICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lz
L3N0YXQuaD4gaGVhZGVyIGZpbGUuICovCisjZGVmaW5lIEhBVkVfU1lTX1NUQVRfSCAxCisKKy8q
IERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8c3lzL3N5c21hY3Jvcy5oPiBoZWFkZXIgZmls
ZS4gKi8KKyNkZWZpbmUgSEFWRV9TWVNfU1lTTUFDUk9TX0ggMQorCisvKiBEZWZpbmUgdG8gMSBp
ZiB5b3UgaGF2ZSB0aGUgPHN5cy90eXBlcy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFW
RV9TWVNfVFlQRVNfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8dGVybWlv
cy5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZpbmUgSEFWRV9URVJNSU9TX0ggMQorCisvKiBEZWZp
bmUgdG8gMSBpZiB5b3UgaGF2ZSB0aGUgPHVuaXN0ZC5oPiBoZWFkZXIgZmlsZS4gKi8KKyNkZWZp
bmUgSEFWRV9VTklTVERfSCAxCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSA8dXNi
Lmg+IGhlYWRlciBmaWxlLiAqLworLyogI3VuZGVmIEhBVkVfVVNCX0ggKi8KKworLyogRGVmaW5l
IHRvIDEgaWYgeW91IGhhdmUgdGhlIGB2YXNwcmludGYnIGZ1bmN0aW9uLiAqLworI2RlZmluZSBI
QVZFX1ZBU1BSSU5URiAxCisKKy8qIERlZmluZSB0byAxIGlmIGBtYWpvcicsIGBtaW5vcicsIGFu
ZCBgbWFrZWRldicgYXJlIGRlY2xhcmVkIGluIDxta2Rldi5oPi4KKyAgICovCisvKiAjdW5kZWYg
TUFKT1JfSU5fTUtERVYgKi8KKworLyogRGVmaW5lIHRvIDEgaWYgYG1ham9yJywgYG1pbm9yJywg
YW5kIGBtYWtlZGV2JyBhcmUgZGVjbGFyZWQgaW4KKyAgIDxzeXNtYWNyb3MuaD4uICovCisvKiAj
dW5kZWYgTUFKT1JfSU5fU1lTTUFDUk9TICovCisKKy8qIERlZmluZSB0byAxIGlmIHlvdSBlbmFi
bGUgbWVtb3J5IG1hbmFnZXIgZGVidWdnaW5nLiAqLworLyogI3VuZGVmIE1NX0RFQlVHICovCisK
Ky8qIERlZmluZSB0byAxIGlmIEdDQyBnZW5lcmF0ZXMgY2FsbHMgdG8gX19yZWdpc3Rlcl9mcmFt
ZV9pbmZvKCkgKi8KKy8qICN1bmRlZiBORUVEX1JFR0lTVEVSX0ZSQU1FX0lORk8gKi8KKworLyog
TmFtZSBvZiBwYWNrYWdlICovCisjZGVmaW5lIFBBQ0tBR0UgImJ1cmciCisKKy8qIERlZmluZSB0
byB0aGUgYWRkcmVzcyB3aGVyZSBidWcgcmVwb3J0cyBmb3IgdGhpcyBwYWNrYWdlIHNob3VsZCBi
ZSBzZW50LiAqLworI2RlZmluZSBQQUNLQUdFX0JVR1JFUE9SVCAiYmVhbjEyM2NoQGdtYWlsLmNv
bSIKKworLyogRGVmaW5lIHRvIHRoZSBmdWxsIG5hbWUgb2YgdGhpcyBwYWNrYWdlLiAqLworI2Rl
ZmluZSBQQUNLQUdFX05BTUUgIkJVUkciCisKKy8qIERlZmluZSB0byB0aGUgZnVsbCBuYW1lIGFu
ZCB2ZXJzaW9uIG9mIHRoaXMgcGFja2FnZS4gKi8KKyNkZWZpbmUgUEFDS0FHRV9TVFJJTkcgIkJV
UkcgMS45OCIKKworLyogRGVmaW5lIHRvIHRoZSBvbmUgc3ltYm9sIHNob3J0IG5hbWUgb2YgdGhp
cyBwYWNrYWdlLiAqLworI2RlZmluZSBQQUNLQUdFX1RBUk5BTUUgImJ1cmciCisKKy8qIERlZmlu
ZSB0byB0aGUgdmVyc2lvbiBvZiB0aGlzIHBhY2thZ2UuICovCisjZGVmaW5lIFBBQ0tBR0VfVkVS
U0lPTiAiMS45OCIKKworLyogVGhlIHNpemUgb2YgYGxvbmcnLCBhcyBjb21wdXRlZCBieSBzaXpl
b2YuICovCisjZGVmaW5lIFNJWkVPRl9MT05HIDgKKworLyogVGhlIHNpemUgb2YgYHZvaWQgKics
IGFzIGNvbXB1dGVkIGJ5IHNpemVvZi4gKi8KKyNkZWZpbmUgU0laRU9GX1ZPSURfUCA4CisKKy8q
IERlZmluZSB0byAxIGlmIHlvdSBoYXZlIHRoZSBBTlNJIEMgaGVhZGVyIGZpbGVzLiAqLworI2Rl
ZmluZSBTVERDX0hFQURFUlMgMQorCisvKiBWZXJzaW9uIG51bWJlciBvZiBwYWNrYWdlICovCisj
ZGVmaW5lIFZFUlNJT04gIjEuOTgiCisKKy8qIERlZmluZSBXT1JEU19CSUdFTkRJQU4gdG8gMSBp
ZiB5b3VyIHByb2Nlc3NvciBzdG9yZXMgd29yZHMgd2l0aCB0aGUgbW9zdAorICAgc2lnbmlmaWNh
bnQgYnl0ZSBmaXJzdCAobGlrZSBNb3Rvcm9sYSBhbmQgU1BBUkMsIHVubGlrZSBJbnRlbCBhbmQg
VkFYKS4gKi8KKyNpZiBkZWZpbmVkIF9fQklHX0VORElBTl9fCisjIGRlZmluZSBXT1JEU19CSUdF
TkRJQU4gMQorI2VsaWYgISBkZWZpbmVkIF9fTElUVExFX0VORElBTl9fCisvKiAjIHVuZGVmIFdP
UkRTX0JJR0VORElBTiAqLworI2VuZGlmCisKKy8qIERlZmluZSB0byAxIGlmIGBsZXgnIGRlY2xh
cmVzIGB5eXRleHQnIGFzIGEgYGNoYXIgKicgYnkgZGVmYXVsdCwgbm90IGEKKyAgIGBjaGFyW10n
LiAqLworI2RlZmluZSBZWVRFWFRfUE9JTlRFUiAxCisKKy8qIE51bWJlciBvZiBiaXRzIGluIGEg
ZmlsZSBvZmZzZXQsIG9uIGhvc3RzIHdoZXJlIHRoaXMgaXMgc2V0dGFibGUuICovCisvKiAjdW5k
ZWYgX0ZJTEVfT0ZGU0VUX0JJVFMgKi8KKworLyogRGVmaW5lIGZvciBsYXJnZSBmaWxlcywgb24g
QUlYLXN0eWxlIGhvc3RzLiAqLworLyogI3VuZGVmIF9MQVJHRV9GSUxFUyAqLworCisvKiBEZWZp
bmUgdG8gMSBpZiBvbiBNSU5JWC4gKi8KKy8qICN1bmRlZiBfTUlOSVggKi8KKworLyogRGVmaW5l
IHRvIDIgaWYgdGhlIHN5c3RlbSBkb2VzIG5vdCBwcm92aWRlIFBPU0lYLjEgZmVhdHVyZXMgZXhj
ZXB0IHdpdGgKKyAgIHRoaXMgZGVmaW5lZC4gKi8KKy8qICN1bmRlZiBfUE9TSVhfMV9TT1VSQ0Ug
Ki8KKworLyogRGVmaW5lIHRvIDEgaWYgeW91IG5lZWQgdG8gaW4gb3JkZXIgZm9yIGBzdGF0JyBh
bmQgb3RoZXIgdGhpbmdzIHRvIHdvcmsuICovCisvKiAjdW5kZWYgX1BPU0lYX1NPVVJDRSAqLwor
CisvKiBFbmFibGUgZXh0ZW5zaW9ucyBvbiBBSVggMywgSW50ZXJpeC4gICovCisjaWZuZGVmIF9B
TExfU09VUkNFCisjIGRlZmluZSBfQUxMX1NPVVJDRSAxCisjZW5kaWYKKy8qIEVuYWJsZSBHTlUg
ZXh0ZW5zaW9ucyBvbiBzeXN0ZW1zIHRoYXQgaGF2ZSB0aGVtLiAgKi8KKyNpZm5kZWYgX0dOVV9T
T1VSQ0UKKyMgZGVmaW5lIF9HTlVfU09VUkNFIDEKKyNlbmRpZgorLyogRW5hYmxlIHRocmVhZGlu
ZyBleHRlbnNpb25zIG9uIFNvbGFyaXMuICAqLworI2lmbmRlZiBfUE9TSVhfUFRIUkVBRF9TRU1B
TlRJQ1MKKyMgZGVmaW5lIF9QT1NJWF9QVEhSRUFEX1NFTUFOVElDUyAxCisjZW5kaWYKKy8qIEVu
YWJsZSBleHRlbnNpb25zIG9uIEhQIE5vblN0b3AuICAqLworI2lmbmRlZiBfVEFOREVNX1NPVVJD
RQorIyBkZWZpbmUgX1RBTkRFTV9TT1VSQ0UgMQorI2VuZGlmCisvKiBFbmFibGUgZ2VuZXJhbCBl
eHRlbnNpb25zIG9uIFNvbGFyaXMuICAqLworI2lmbmRlZiBfX0VYVEVOU0lPTlNfXworIyBkZWZp
bmUgX19FWFRFTlNJT05TX18gMQorI2VuZGlmCisKZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1V
OCB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mYXQuYyB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mYXQuYwotLS0geGVuLTQuMS4yLWEv
dG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvZmF0LmMJMTk3MC0wMS0wMSAwNzowMDowMC4w
MDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWIt
ZmF0L2ZhdC5jCTIwMTItMTItMjggMTY6MDI6NDEuMDA4NjQwODM4ICswODAwCkBAIC0wLDAgKzEs
NzExIEBACisvKiBmYXQuYyAtIEZBVCBmaWxlc3lzdGVtICovCisvKgorICogIEdSVUIgIC0tICBH
UmFuZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDAsMjAwMSwyMDAy
LDIwMDMsMjAwNCwyMDA1LDIwMDcsMjAwOCwyMDA5ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24s
IEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0
ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2Fy
ZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChh
dCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJp
YnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9V
VCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICog
IE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNl
ZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgor
ICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJh
bCBQdWJsaWMgTGljZW5zZQorICogIGFsb25nIHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRw
Oi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4KKyAqLworI2luY2x1ZGUgIm1pc2MuaCIKKyNpbmNs
dWRlICJmYXQuaCIKKworCitzdGF0aWMgaW50CitmYXRfbG9nMiAodW5zaWduZWQgeCkKK3sKKyAg
aW50IGk7CisKKyAgaWYgKHggPT0gMCkKKyAgICByZXR1cm4gLTE7CisKKyAgZm9yIChpID0gMDsg
KHggJiAxKSA9PSAwOyBpKyspCisgICAgeCA+Pj0gMTsKKworICBpZiAoeCAhPSAxKQorICAgIHJl
dHVybiAtMTsKKworICByZXR1cm4gaTsKK30KKworCitzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqCitn
cnViX2ZhdF9tb3VudCAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHVpbnQzMl90IHBhcnRfb2ZmX3Nl
Y3RvcikKK3sKKyAgc3RydWN0IGdydWJfZmF0X2JwYiBicGI7CisgIHN0cnVjdCBncnViX2ZhdF9k
YXRhICpkYXRhID0gMDsKKyAgZ3J1Yl91aW50MzJfdCBmaXJzdF9mYXQsIG1hZ2ljOworICBpbnQ2
NF90IG9mZl9ieXRlcyA9IChpbnQ2NF90KXBhcnRfb2ZmX3NlY3RvciA8PCBHUlVCX0RJU0tfU0VD
VE9SX0JJVFM7CisKKyAgaWYgKCEgYnMpCisgICAgZ290byBmYWlsOworCisgIGRhdGEgPSAoc3Ry
dWN0IGdydWJfZmF0X2RhdGEgKikgbWFsbG9jIChzaXplb2YgKCpkYXRhKSk7CisgIGlmICghIGRh
dGEpCisgICAgZ290byBmYWlsOworCisgIC8qIFJlYWQgdGhlIEJQQi4gICovCisgIGlmIChiZHJ2
X3ByZWFkKGJzLCBvZmZfYnl0ZXMsICZicGIsIHNpemVvZihicGIpKSAhPSBzaXplb2YoYnBiKSkK
KyAgICB7CisgICAgICBwcmludGYoImJkcnZfcHJlYWQgZmFpbC4uLi5cbiIpOworICAgICAgZ290
byBmYWlsOworICAgIH0KKyAgICAKKyAgaWYgKGdydWJfc3RybmNtcCgoY29uc3QgY2hhciAqKSBi
cGIudmVyc2lvbl9zcGVjaWZpYy5mYXQxMl9vcl9mYXQxNi5mc3R5cGUsICJGQVQxMiIsIDUpCisg
ICAgICAmJiBncnViX3N0cm5jbXAoKGNvbnN0IGNoYXIgKikgYnBiLnZlcnNpb25fc3BlY2lmaWMu
ZmF0MTJfb3JfZmF0MTYuZnN0eXBlLCAiRkFUMTYiLCA1KQorICAgICAgJiYgZ3J1Yl9zdHJuY21w
KChjb25zdCBjaGFyICopIGJwYi52ZXJzaW9uX3NwZWNpZmljLmZhdDMyLmZzdHlwZSwgIkZBVDMy
IiwgNSkpCisgICAgeworICAgICAgCisgICAgICBwcmludGYoImZhaWwgaGVyZS0tPmdydWJfc3Ry
bmNtcC4uLi4uLmxpbmVbJXVdXG4iLCBfX0xJTkVfXyk7CisgICAgICBnb3RvIGZhaWw7CisgICAg
fQorCisgIC8qIEdldCB0aGUgc2l6ZXMgb2YgbG9naWNhbCBzZWN0b3JzIGFuZCBjbHVzdGVycy4g
ICovCisgIGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMgPQorICAgIGZhdF9sb2cyIChncnViX2xl
X3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikpOworICBwcmludGYoImJwYi5ieXRlc19w
ZXJfc2VjdG9yPTB4JXgsIGxlX3RvX2NwdTE2PTB4JXhcbiIsCisJIGJwYi5ieXRlc19wZXJfc2Vj
dG9yLCBncnViX2xlX3RvX2NwdTE2IChicGIuYnl0ZXNfcGVyX3NlY3RvcikpOworICAKKworICBp
ZiAoZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cyA8IEdSVUJfRElTS19TRUNUT1JfQklUUykKKyAg
eworICAgIHByaW50ZigiZmFpbCBoZXJlLS0+bG9naWNhbF9zZWN0b3JfYml0cy4uLi4uLmxpbmVb
JXVdXG4iLCBfX0xJTkVfXyk7IAorICAgIGdvdG8gZmFpbDsKKyAgfQorICBkYXRhLT5sb2dpY2Fs
X3NlY3Rvcl9iaXRzIC09IEdSVUJfRElTS19TRUNUT1JfQklUUzsKKworICBwcmludGYoImJwYi5z
ZWN0b3JzX3Blcl9jbHVzdGVyPSV1XG4iLCBicGIuc2VjdG9yc19wZXJfY2x1c3Rlcik7CisgIGRh
dGEtPmNsdXN0ZXJfYml0cyA9IGZhdF9sb2cyIChicGIuc2VjdG9yc19wZXJfY2x1c3Rlcik7Cisg
IGlmIChkYXRhLT5jbHVzdGVyX2JpdHMgPCAwKQorICAgIHsKKyAgICAgIHByaW50ZigiZmFpbCBo
ZXJlLS0+Y2x1c3Rlcl9iaXRzLi4uLi4ubGluZVsldV1cbiIsIF9fTElORV9fKTsgCisgICAgICBn
b3RvIGZhaWw7CisgICAgfQorICBkYXRhLT5jbHVzdGVyX2JpdHMgKz0gZGF0YS0+bG9naWNhbF9z
ZWN0b3JfYml0czsKKworICAvKiBHZXQgaW5mb3JtYXRpb24gYWJvdXQgRkFUcy4gICovCisgIHBy
aW50ZigiYnBiLm51bV9yZXNlcnZlZF9zZWN0b3JzPSV1LCBsZV90b19jcHUxNj0ldVxuIiwKKwkg
YnBiLm51bV9yZXNlcnZlZF9zZWN0b3JzLCBncnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2Vy
dmVkX3NlY3RvcnMpKTsKKyAgZGF0YS0+ZmF0X3NlY3RvciA9IHBhcnRfb2ZmX3NlY3RvciArIChn
cnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jlc2VydmVkX3NlY3RvcnMpCisJCSAgICAgIDw8IGRh
dGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBwcmludGYoImRhdGEtPmZhdF9zZWN0b3I9JXVc
biIsIGRhdGEtPmZhdF9zZWN0b3IpOworICBpZiAoZGF0YS0+ZmF0X3NlY3RvciA9PSAwKQorICAg
IHsKKyAgICAgIHByaW50ZigiZmFpbCBoZXJlLS0+ZmF0X3NlY3Rvci4uLi4uLmxpbmVbJXVdXG4i
LCBfX0xJTkVfXyk7IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgZGF0YS0+c2VjdG9yc19w
ZXJfZmF0ID0gKChicGIuc2VjdG9yc19wZXJfZmF0XzE2CisJCQkgICAgPyBncnViX2xlX3RvX2Nw
dTE2IChicGIuc2VjdG9yc19wZXJfZmF0XzE2KQorCQkJICAgIDogZ3J1Yl9sZV90b19jcHUzMiAo
YnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIuc2VjdG9yc19wZXJfZmF0XzMyKSkKKwkJCSAgIDw8
IGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMpOworICBwcmludGYoImJwYi52ZXJzaW9uX3NwZWNp
ZmljLmZhdDMyLnNlY3RvcnNfcGVyX2ZhdF8zMj0ldVxuIgorCSAiZ3J1Yl9sZV90b19jcHUzMiAo
YnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIuc2VjdG9yc19wZXJfZmF0XzMyKT0ldVxuIiwKKwkg
YnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIuc2VjdG9yc19wZXJfZmF0XzMyLAorCSBncnViX2xl
X3RvX2NwdTMyIChicGIudmVyc2lvbl9zcGVjaWZpYy5mYXQzMi5zZWN0b3JzX3Blcl9mYXRfMzIp
KTsKKyAgaWYgKGRhdGEtPnNlY3RvcnNfcGVyX2ZhdCA9PSAwKQorICAgIGdvdG8gZmFpbDsKKwor
ICAvKiBHZXQgdGhlIG51bWJlciBvZiBzZWN0b3JzIGluIHRoaXMgdm9sdW1lLiAgKi8KKyAgZGF0
YS0+bnVtX3NlY3RvcnMgPSAoKGJwYi5udW1fdG90YWxfc2VjdG9yc18xNgorCQkJPyBncnViX2xl
X3RvX2NwdTE2IChicGIubnVtX3RvdGFsX3NlY3RvcnNfMTYpCisJCQk6IGdydWJfbGVfdG9fY3B1
MzIgKGJwYi5udW1fdG90YWxfc2VjdG9yc18zMikpCisJCSAgICAgICA8PCBkYXRhLT5sb2dpY2Fs
X3NlY3Rvcl9iaXRzKTsKKyAgaWYgKGRhdGEtPm51bV9zZWN0b3JzID09IDApCisgICAgeworICAg
ICAgcHJpbnRmKCJmYWlsIGhlcmUtLT5udW1fc2VjdG9ycy4uLi4uLmxpbmVbJXVdXG4iLCBfX0xJ
TkVfXyk7IAorICAgICAgZ290byBmYWlsOworICAgIH0KKyAgLyogR2V0IGluZm9ybWF0aW9uIGFi
b3V0IHRoZSByb290IGRpcmVjdG9yeS4gICovCisgIGlmIChicGIubnVtX2ZhdHMgPT0gMCkKKyAg
ICB7CisgICAgICBwcmludGYoImZhaWwgaGVyZS0tPm51bV9mYXRzLi4uLi4ubGluZVsldV1cbiIs
IF9fTElORV9fKTsgCisgICAgICBnb3RvIGZhaWw7CisgICAgfQorICBkYXRhLT5yb290X3NlY3Rv
ciA9IGRhdGEtPmZhdF9zZWN0b3IgKyBicGIubnVtX2ZhdHMgKiBkYXRhLT5zZWN0b3JzX3Blcl9m
YXQ7CisgIGRhdGEtPm51bV9yb290X3NlY3RvcnMKKyAgICA9ICgoKChncnViX3VpbnQzMl90KSBn
cnViX2xlX3RvX2NwdTE2IChicGIubnVtX3Jvb3RfZW50cmllcykKKwkgKiBHUlVCX0ZBVF9ESVJf
RU5UUllfU0laRQorCSArIGdydWJfbGVfdG9fY3B1MTYgKGJwYi5ieXRlc19wZXJfc2VjdG9yKSAt
IDEpCisJPj4gKGRhdGEtPmxvZ2ljYWxfc2VjdG9yX2JpdHMgKyBHUlVCX0RJU0tfU0VDVE9SX0JJ
VFMpKQorICAgICAgIDw8IChkYXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzKSk7CisKKyAgZGF0YS0+
Y2x1c3Rlcl9zZWN0b3IgPSBkYXRhLT5yb290X3NlY3RvciArIGRhdGEtPm51bV9yb290X3NlY3Rv
cnM7CisgIGRhdGEtPm51bV9jbHVzdGVycyA9ICgoKGRhdGEtPm51bV9zZWN0b3JzIC0gZGF0YS0+
Y2x1c3Rlcl9zZWN0b3IpCisJCQkgPj4gKGRhdGEtPmNsdXN0ZXJfYml0cyArIGRhdGEtPmxvZ2lj
YWxfc2VjdG9yX2JpdHMpKQorCQkJKyAyKTsKKworICBpZiAoZGF0YS0+bnVtX2NsdXN0ZXJzIDw9
IDIpCisgICAgeworICAgICAgcHJpbnRmKCJmYWlsIGhlcmUtLT5udW1fY2x1c3RlcnMuLi4uLi5s
aW5lWyV1XVxuIiwgX19MSU5FX18pOyAKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGlmICgh
IGJwYi5zZWN0b3JzX3Blcl9mYXRfMTYpCisgICAgeworICAgICAgLyogRkFUMzIuICAqLworICAg
ICAgZ3J1Yl91aW50MTZfdCBmbGFncyA9IGdydWJfbGVfdG9fY3B1MTYgKGJwYi52ZXJzaW9uX3Nw
ZWNpZmljLmZhdDMyLmV4dGVuZGVkX2ZsYWdzKTsKKworICAgICAgZGF0YS0+cm9vdF9jbHVzdGVy
ID0gZ3J1Yl9sZV90b19jcHUzMiAoYnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIucm9vdF9jbHVz
dGVyKTsKKyAgICAgIGRhdGEtPmZhdF9zaXplID0gMzI7CisgICAgICBkYXRhLT5jbHVzdGVyX2Vv
Zl9tYXJrID0gMHgwZmZmZmZmODsKKworICAgICAgaWYgKGZsYWdzICYgMHg4MCkKKwl7CisJICAv
KiBHZXQgYW4gYWN0aXZlIEZBVC4gICovCisJICB1bnNpZ25lZCBhY3RpdmVfZmF0ID0gZmxhZ3Mg
JiAweGY7CisKKwkgIGlmIChhY3RpdmVfZmF0ID4gYnBiLm51bV9mYXRzKQorCSAgICBnb3RvIGZh
aWw7CisKKwkgIGRhdGEtPmZhdF9zZWN0b3IgKz0gYWN0aXZlX2ZhdCAqIGRhdGEtPnNlY3RvcnNf
cGVyX2ZhdDsKKwl9CisKKyAgICAgIGlmIChicGIubnVtX3Jvb3RfZW50cmllcyAhPSAwIHx8IGJw
Yi52ZXJzaW9uX3NwZWNpZmljLmZhdDMyLmZzX3ZlcnNpb24gIT0gMCkKKwlnb3RvIGZhaWw7Cisg
ICAgfQorICBlbHNlCisgICAgeworICAgICAgLyogRkFUMTIgb3IgRkFUMTYuICAqLworICAgICAg
ZGF0YS0+cm9vdF9jbHVzdGVyID0gfjBVOworCisgICAgICBpZiAoZGF0YS0+bnVtX2NsdXN0ZXJz
IDw9IDQwODUgKyAyKQorCXsKKwkgIC8qIEZBVDEyLiAgKi8KKwkgIGRhdGEtPmZhdF9zaXplID0g
MTI7CisJICBkYXRhLT5jbHVzdGVyX2VvZl9tYXJrID0gMHgwZmY4OworCX0KKyAgICAgIGVsc2UK
Kwl7CisJICAvKiBGQVQxNi4gICovCisJICBkYXRhLT5mYXRfc2l6ZSA9IDE2OworCSAgZGF0YS0+
Y2x1c3Rlcl9lb2ZfbWFyayA9IDB4ZmZmODsKKwl9CisgICAgfQorCisgIC8qIE1vcmUgc2FuaXR5
IGNoZWNrcy4gICovCisgIGlmIChkYXRhLT5udW1fc2VjdG9ycyA8PSBkYXRhLT5mYXRfc2VjdG9y
KQorICAgIGdvdG8gZmFpbDsKKworICAKKyAgcHJpbnRmKCJkYXRhLT5mYXRfc2VjdG9yPSV1LCBk
YXRhLT5zZWN0b3JzX3Blcl9mYXQ9JXVcbiIsCisJIGRhdGEtPmZhdF9zZWN0b3IsIGRhdGEtPnNl
Y3RvcnNfcGVyX2ZhdCk7CisgIGlmIChiZHJ2X3ByZWFkKGJzLAorCQkgZGF0YS0+ZmF0X3NlY3Rv
ciA8PCBHUlVCX0RJU0tfU0VDVE9SX0JJVFMsCisJCSAmZmlyc3RfZmF0LAorCQkgc2l6ZW9mIChm
aXJzdF9mYXQpKSAhPSBzaXplb2YoZmlyc3RfZmF0KSkKKyAgICB7CisgICAgICBwcmludGYoImZh
aWwgaGVyZS0tPmJkcnZfcHJlYWQuLi4uLi5saW5lWyV1XVxuIiwgX19MSU5FX18pOyAKKyAgICAg
IGdvdG8gZmFpbDsKKyAgICB9CisKKyAgZmlyc3RfZmF0ID0gZ3J1Yl9sZV90b19jcHUzMiAoZmly
c3RfZmF0KTsKKworICBpZiAoZGF0YS0+ZmF0X3NpemUgPT0gMzIpCisgICAgeworICAgICAgZmly
c3RfZmF0ICY9IDB4MGZmZmZmZmY7CisgICAgICBtYWdpYyA9IDB4MGZmZmZmMDA7CisgICAgfQor
ICBlbHNlIGlmIChkYXRhLT5mYXRfc2l6ZSA9PSAxNikKKyAgICB7CisgICAgICBmaXJzdF9mYXQg
Jj0gMHgwMDAwZmZmZjsKKyAgICAgIG1hZ2ljID0gMHhmZjAwOworICAgIH0KKyAgZWxzZQorICAg
IHsKKyAgICAgIGZpcnN0X2ZhdCAmPSAweDAwMDAwZmZmOworICAgICAgbWFnaWMgPSAweDBmMDA7
CisgICAgfQorCisgIC8qIFNlcmlhbCBudW1iZXIuICAqLworICBpZiAoYnBiLnNlY3RvcnNfcGVy
X2ZhdF8xNikKKyAgICBkYXRhLT51dWlkID0gZ3J1Yl9sZV90b19jcHUzMiAoYnBiLnZlcnNpb25f
c3BlY2lmaWMuZmF0MTJfb3JfZmF0MTYubnVtX3NlcmlhbCk7CisgIGVsc2UKKyAgICBkYXRhLT51
dWlkID0gZ3J1Yl9sZV90b19jcHUzMiAoYnBiLnZlcnNpb25fc3BlY2lmaWMuZmF0MzIubnVtX3Nl
cmlhbCk7CisKKyAgLyogSWdub3JlIHRoZSAzcmQgYml0LCBiZWNhdXNlIHNvbWUgQklPU2VzIGFz
c2lnbnMgMHhGMCB0byB0aGUgbWVkaWEKKyAgICAgZGVzY3JpcHRvciwgZXZlbiBpZiBpdCBpcyBh
IHNvLWNhbGxlZCBzdXBlcmZsb3BweSAoZS5nLiBhbiBVU0Iga2V5KS4KKyAgICAgVGhlIGNoZWNr
IG1heSBiZSB0b28gc3RyaWN0IGZvciB0aGlzIGtpbmQgb2Ygc3R1cGlkIEJJT1NlcywgYXMKKyAg
ICAgdGhleSBvdmVyd3JpdGUgdGhlIG1lZGlhIGRlc2NyaXB0b3IuICAqLworICBpZiAoKGZpcnN0
X2ZhdCB8IDB4OCkgIT0gKG1hZ2ljIHwgYnBiLm1lZGlhIHwgMHg4KSkKKyAgICB7CisgICAgICBw
cmludGYoImZhaWwgaGVyZS0tPmZpcnN0X2ZhdD0weCV4LCBtYWdpYz0weCV4Li4uLi4ubGluZVsl
dV1cbiIsCisJICAgICBmaXJzdF9mYXQsIG1hZ2ljLCBfX0xJTkVfXyk7IAorICAgICAgZ290byBm
YWlsOworICAgIH0KKyAgLyogU3RhcnQgZnJvbSB0aGUgcm9vdCBkaXJlY3RvcnkuICAqLworICBk
YXRhLT5maWxlX2NsdXN0ZXIgPSBkYXRhLT5yb290X2NsdXN0ZXI7CisgIGRhdGEtPmN1cl9jbHVz
dGVyX251bSA9IH4wVTsKKyAgZGF0YS0+YXR0ciA9IEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZOwor
ICBwcmludGYoImRhdGEtPmZpbGVfY2x1c3Rlcj0ldSBcbmRhdGEtPmN1cl9jbHVzdGVyX251bT0l
dSBcbmRhdGEtPmF0dHI9MHgleFxuIgorCSAiZGF0YS0+bG9naWNhbF9zZWN0b3JfYml0cz0ldVxu
IgorCSAiZGF0YS0+Y2x1c3Rlcl9iaXRzPSV1XG4iLAorCSBkYXRhLT5maWxlX2NsdXN0ZXIsIGRh
dGEtPmN1cl9jbHVzdGVyX251bSwgZGF0YS0+YXR0ciwKKwkgZGF0YS0+bG9naWNhbF9zZWN0b3Jf
Yml0cywgZGF0YS0+Y2x1c3Rlcl9iaXRzKTsKKyAgcmV0dXJuIGRhdGE7CisKKyBmYWlsOgorCisg
IGZyZWUgKGRhdGEpOworICBlcnJ4ICgibm90IGEgRkFUIGZpbGVzeXN0ZW0uLi5cbiIpOworICBy
ZXR1cm4gMDsKK30KKworCisKKy8vtNPOxLz+tcTWuLaoxqvSxm9mZnNldNfWvdq0prbByKFsZW7X
1r3atcTK/b7dtb1idWYKKy8vzsS8/tPJZGF0YS0+ZmlsZV9jbHVzdGVy1ri2qAorLy9kYXRhLT5m
aWxlX2NsdXN0ZXLWuLaowcvOxLz+tcTG8Mq8tNi6xQorLy/ErMjPZGF0YS0+ZmlsZV9jbHVzdGVy
PTKjrLT6se24+cS/wrwKK3N0YXRpYyBncnViX3NzaXplX3QKK2dydWJfZmF0X3JlYWRfZGF0YSAo
QmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBncnViX2ZhdF9kYXRhICpkYXRhLAorCQkgICAg
dm9pZCAoKnJlYWRfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAorCQkJCSAgICAgICB1
bnNpZ25lZCBvZmZzZXQsIHVuc2lnbmVkIGxlbmd0aCwKKwkJCQkgICAgICAgdm9pZCAqY2xvc3Vy
ZSksCisJCSAgICB2b2lkICpjbG9zdXJlLAorCQkgICAgZ3J1Yl9vZmZfdCBvZmZzZXQsIGdydWJf
c2l6ZV90IGxlbiwgY2hhciAqYnVmKQoreworICBncnViX3NpemVfdCBzaXplOworICBncnViX3Vp
bnQzMl90IGxvZ2ljYWxfY2x1c3RlcjsKKyAgdW5zaWduZWQgbG9naWNhbF9jbHVzdGVyX2JpdHM7
CisgIGdydWJfc3NpemVfdCByZXQgPSAwOworICB1bnNpZ25lZCBsb25nIHNlY3RvcjsKKyAgdWlu
dDY0X3Qgb2ZmX2J5dGVzID0gMDsgCisgIC8qIFRoaXMgaXMgYSBzcGVjaWFsIGNhc2UuIEZBVDEy
IGFuZCBGQVQxNiBkb2Vzbid0IGhhdmUgdGhlIHJvb3QgZGlyZWN0b3J5CisgICAgIGluIGNsdXN0
ZXJzLiAgKi8KKyAgaWYgKGRhdGEtPmZpbGVfY2x1c3RlciA9PSB+MFUpCisgICAgeworICAgICAg
c2l6ZSA9IChkYXRhLT5udW1fcm9vdF9zZWN0b3JzIDw8IEdSVUJfRElTS19TRUNUT1JfQklUUykg
LSBvZmZzZXQ7CisgICAgICBpZiAoc2l6ZSA+IGxlbikKKwlzaXplID0gbGVuOworCisgICAgICBv
ZmZfYnl0ZXMgPSAoKHVpbnQ2NF90KWRhdGEtPnJvb3Rfc2VjdG9yIDw8IEdSVUJfRElTS19TRUNU
T1JfQklUUykgKyBvZmZzZXQ7CisgICAgICBpZihiZHJ2X3JlYWQoYnMsIG9mZl9ieXRlcywgYnVm
LCBzaXplICkgIT0gc2l6ZSkgCisJcmV0dXJuIC0xOworCisgICAgICByZXR1cm4gc2l6ZTsKKyAg
ICB9CisKKyAgLyogQ2FsY3VsYXRlIHRoZSBsb2dpY2FsIGNsdXN0ZXIgbnVtYmVyIGFuZCBvZmZz
ZXQuICAqLworICBsb2dpY2FsX2NsdXN0ZXJfYml0cyA9IChkYXRhLT5jbHVzdGVyX2JpdHMKKwkJ
CSAgKyBkYXRhLT5sb2dpY2FsX3NlY3Rvcl9iaXRzCisJCQkgICsgR1JVQl9ESVNLX1NFQ1RPUl9C
SVRTKTsKKyAgbG9naWNhbF9jbHVzdGVyID0gb2Zmc2V0ID4+IGxvZ2ljYWxfY2x1c3Rlcl9iaXRz
OyAgICAvL3doaWNoIGNsdXN0ZXIgdG8gcmVhZCAKKyAgb2Zmc2V0ICY9ICgxIDw8IGxvZ2ljYWxf
Y2x1c3Rlcl9iaXRzKSAtIDE7ICAgICAgICAgICAvL21vZAorCisgIGlmIChsb2dpY2FsX2NsdXN0
ZXIgPCBkYXRhLT5jdXJfY2x1c3Rlcl9udW0pICAgLy8KKyAgICB7CisgICAgICBkYXRhLT5jdXJf
Y2x1c3Rlcl9udW0gPSAwOworICAgICAgZGF0YS0+Y3VyX2NsdXN0ZXIgPSBkYXRhLT5maWxlX2Ns
dXN0ZXI7IC8vILXaMrj2ZmF0se3P7r+qyry8x8K8xL/CvLrNzsS8/gorICAgIH0KKworICB3aGls
ZSAobGVuKQorICAgIHsKKyAgICAgIHdoaWxlIChsb2dpY2FsX2NsdXN0ZXIgPiBkYXRhLT5jdXJf
Y2x1c3Rlcl9udW0pCisJeworCSAgLyogRmluZCBuZXh0IGNsdXN0ZXIuICAqLworCSAgZ3J1Yl91
aW50MzJfdCBuZXh0X2NsdXN0ZXI7CisJICB1bnNpZ25lZCBsb25nIGZhdF9vZmZzZXQ7CisKKwkg
IHN3aXRjaCAoZGF0YS0+ZmF0X3NpemUpCisJICAgIHsKKwkgICAgY2FzZSAzMjoKKwkgICAgICBm
YXRfb2Zmc2V0ID0gZGF0YS0+Y3VyX2NsdXN0ZXIgPDwgMjsKKwkgICAgICBicmVhazsKKwkgICAg
Y2FzZSAxNjoKKwkgICAgICBmYXRfb2Zmc2V0ID0gZGF0YS0+Y3VyX2NsdXN0ZXIgPDwgMTsKKwkg
ICAgICBicmVhazsKKwkgICAgZGVmYXVsdDoKKwkgICAgICAvKiBjYXNlIDEyOiAqLworCSAgICAg
IGZhdF9vZmZzZXQgPSBkYXRhLT5jdXJfY2x1c3RlciArIChkYXRhLT5jdXJfY2x1c3RlciA+PiAx
KTsKKwkgICAgICBicmVhazsKKwkgICAgfQorCisJICAvKiBSZWFkIHRoZSBGQVQuICAqLworCSAg
aW50IGxlbiA9IChkYXRhLT5mYXRfc2l6ZSArIDcpID4+IDM7CisJICB1aW50NjRfdCBvZmZfYnl0
ZXMgPSAgKCh1aW50NjRfdClkYXRhLT5mYXRfc2VjdG9yIDw8IEdSVUJfRElTS19TRUNUT1JfQklU
UykgKyBmYXRfb2Zmc2V0OyAKKwkgIGlmIChiZHJ2X3ByZWFkIChicywgb2ZmX2J5dGVzLCAKKwkJ
CSAgKGNoYXIgKikgJm5leHRfY2x1c3RlciwgCisJCQkgIGxlbikgIT0gbGVuKSAgIC8vtNNmYXSx
7bbByKG02LrFCisJICAgIHJldHVybiAtMTsKKworCSAgbmV4dF9jbHVzdGVyID0gZ3J1Yl9sZV90
b19jcHUzMiAobmV4dF9jbHVzdGVyKTsKKwkgIHN3aXRjaCAoZGF0YS0+ZmF0X3NpemUpCisJICAg
IHsKKwkgICAgY2FzZSAxNjoKKwkgICAgICBuZXh0X2NsdXN0ZXIgJj0gMHhGRkZGOworCSAgICAg
IGJyZWFrOworCSAgICBjYXNlIDEyOgorCSAgICAgIGlmIChkYXRhLT5jdXJfY2x1c3RlciAmIDEp
CisJCW5leHRfY2x1c3RlciA+Pj0gNDsKKworCSAgICAgIG5leHRfY2x1c3RlciAmPSAweDBGRkY7
CisJICAgICAgYnJlYWs7CisJICAgIH0KKworCSAgcHJpbnRmICgiZmF0X3NpemU9JWQsIG5leHRf
Y2x1c3Rlcj0ldVxuIiwKKwkJCWRhdGEtPmZhdF9zaXplLCBuZXh0X2NsdXN0ZXIpOworCisJICAv
KiBDaGVjayB0aGUgZW5kLiAgKi8KKwkgIGlmIChuZXh0X2NsdXN0ZXIgPj0gZGF0YS0+Y2x1c3Rl
cl9lb2ZfbWFyaykKKwkgICAgcmV0dXJuIHJldDsKKworCSAgaWYgKG5leHRfY2x1c3RlciA8IDIg
fHwgbmV4dF9jbHVzdGVyID49IGRhdGEtPm51bV9jbHVzdGVycykKKwkgICAgeworCSAgICAgIHBy
aW50ZigiaW52YWxpZCBjbHVzdGVyICV1Li4uLi4uLi4uLi4uLi4uLlxuIiwKKwkJCSAgbmV4dF9j
bHVzdGVyKTsKKwkgICAgICByZXR1cm4gLTE7CisJICAgIH0KKworCSAgZGF0YS0+Y3VyX2NsdXN0
ZXIgPSBuZXh0X2NsdXN0ZXI7CisJICBkYXRhLT5jdXJfY2x1c3Rlcl9udW0rKzsKKwl9CisKKyAg
ICAgIC8qIFJlYWQgdGhlIGRhdGEgaGVyZS4gICovCisgICAgICAvL8LfvK202Mv5ttTTprXEvvi2
1MnIx/gKKyAgICAgIHNlY3RvciA9IChkYXRhLT5jbHVzdGVyX3NlY3RvcgorCQkrICgoZGF0YS0+
Y3VyX2NsdXN0ZXIgLSAyKQorCQkgICA8PCAoZGF0YS0+Y2x1c3Rlcl9iaXRzICsgZGF0YS0+bG9n
aWNhbF9zZWN0b3JfYml0cykpKTsgCisgICAgICAvL774ttTJyMf41tDIpbX0xqvSxrrztcTX1r3a
yv0KKyAgICAgIHNpemUgPSAoMSA8PCBsb2dpY2FsX2NsdXN0ZXJfYml0cykgLSBvZmZzZXQ7Cisg
ICAgICBpZiAoc2l6ZSA+IGxlbikKKwlzaXplID0gbGVuOworCisgICAgICAvL2Rpc2stPnJlYWRf
aG9vayA9IHJlYWRfaG9vazsKKyAgICAgIC8vZGlzay0+Y2xvc3VyZSA9IGNsb3N1cmU7CisgICAg
ICBpbnQ2NF90IG9mZl9ieXRlcyA9ICgodWludDY0X3Qpc2VjdG9yIDw8IEdSVUJfRElTS19TRUNU
T1JfQklUUykgKyBvZmZzZXQ7CisgICAgICAvL2Rpc2stPnJlYWRfaG9vayA9IDA7CisgICAgICBp
ZiAoYmRydl9wcmVhZCAoYnMsIG9mZl9ieXRlcywgYnVmLCBzaXplKSAhPSBzaXplKQorCXJldHVy
biAtMTsKKworICAgICAgbGVuIC09IHNpemU7CisgICAgICBidWYgKz0gc2l6ZTsKKyAgICAgIHJl
dCArPSBzaXplOworICAgICAgbG9naWNhbF9jbHVzdGVyKys7CisgICAgICBvZmZzZXQgPSAwOyAg
Ly/S1LrztsG1xLa8ysfN6tX7ycjH+AorICAgIH0KKworICByZXR1cm4gcmV0OworfQorCisvL7Hp
wPrTyWRhdGEtPmZpbGVfY2x1c3Rlcta4tqi1xMS/wrwKK2ludAorZ3J1Yl9mYXRfaXRlcmF0ZV9k
aXIgKEJsb2NrRHJpdmVyU3RhdGUgKmJzLCBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YSkKK3sK
KyAgc3RydWN0IGdydWJfZmF0X2Rpcl9lbnRyeSBkaXI7CisgIGNoYXIgKmZpbGVuYW1lLCAqZmls
ZXAgPSAwOworICBncnViX3VpbnQxNl90ICp1bmlidWY7CisgIGludCBzbG90ID0gLTEsIHNsb3Rz
ID0gLTE7CisgIGludCBjaGVja3N1bSA9IC0xOworICBncnViX3NzaXplX3Qgb2Zmc2V0ID0gLXNp
emVvZihkaXIpOworCisgIGlmICghIChkYXRhLT5hdHRyICYgR1JVQl9GQVRfQVRUUl9ESVJFQ1RP
UlkpKQorICAgIHJldHVybiBwcmludGYoIm5vdCBhIGRpcmVjdG9yeS4uLi4uLlxuIik7CisKKyAg
LyogQWxsb2NhdGUgc3BhY2UgZW5vdWdoIHRvIGhvbGQgYSBsb25nIG5hbWUuICAqLworICBmaWxl
bmFtZSA9IChjaGFyKiltYWxsb2MgKDB4NDAgKiAxMyAqIDQgKyAxKTsKKyAgdW5pYnVmID0gKGdy
dWJfdWludDE2X3QgKikgbWFsbG9jICgweDQwICogMTMgKiAyKTsKKyAgaWYgKCEgZmlsZW5hbWUg
fHwgISB1bmlidWYpCisgICAgeworICAgICAgZnJlZSAoZmlsZW5hbWUpOworICAgICAgZnJlZSAo
dW5pYnVmKTsKKyAgICAgIHJldHVybiAtMTsKKyAgICB9CisKKworICBpbnQgY291bnQgPSAwOwor
ICB3aGlsZSAoMSkKKyAgICB7CisgICAgICB1bnNpZ25lZCBpOworCisgICAgICAvKiBBZGp1c3Qg
dGhlIG9mZnNldC4gICovCisgICAgICBvZmZzZXQgKz0gc2l6ZW9mIChkaXIpOworICAgICAgcHJp
bnRmKCJbJWRdb2Zmc2V0PSV1XG4iCisJICAgICAiZGF0YS0+Y3VyX2NsdXN0ZXJfbnVtPSV1LGRh
dGEtPmN1cl9jbHVzdGVyPSV1XG4iLCAKKwkgICAgIGNvdW50KzEsIG9mZnNldCwgCisJICAgICBk
YXRhLT5jdXJfY2x1c3Rlcl9udW0sIGRhdGEtPmN1cl9jbHVzdGVyKTsKKyAgICAgIC8qIFJlYWQg
YSBkaXJlY3RvcnkgZW50cnkuICAqLworICAgICAgLy8weDCx7cq+v9XEv8K8CisgICAgICBpZiAo
KGdydWJfZmF0X3JlYWRfZGF0YSAoYnMsIGRhdGEsIDAsIDAsCisJCQkgICAgICAgb2Zmc2V0LCBz
aXplb2YgKGRpciksIChjaGFyICopICZkaXIpCisJICAgIT0gc2l6ZW9mIChkaXIpIHx8IGRpci5u
YW1lWzBdID09IDApKQorCXsKKwkgIHByaW50ZigiYnJlYWsuLi5kaXIubmFtZVswXT09JWRcbiIs
IGRpci5uYW1lWzBdKTsKKwkgIGJyZWFrOworCX0KKyAgICAgIC8qIEhhbmRsZSBsb25nIG5hbWUg
ZW50cmllcy4gICovCisgICAgICBpZiAoZGlyLmF0dHIgPT0gR1JVQl9GQVRfQVRUUl9MT05HX05B
TUUpCisJeworCSAgcHJpbnRmKCJsb25nIG5hbWUuLi5cbiIpOworCSAgc3RydWN0IGdydWJfZmF0
X2xvbmdfbmFtZV9lbnRyeSAqbG9uZ19uYW1lCisJICAgID0gKHN0cnVjdCBncnViX2ZhdF9sb25n
X25hbWVfZW50cnkgKikgJmRpcjsKKwkgIGdydWJfdWludDhfdCBpZCA9IGxvbmdfbmFtZS0+aWQ7
CisKKwkgIGlmIChpZCAmIDB4NDApICAvL3RoZSBsYXN0IGl0ZW0KKwkgICAgeworCSAgICAgIGlk
ICY9IDB4M2Y7ICAgLy9pbmRleCBvciBvcmRpbmFsIG51bWJlciAgMX4zMQorCSAgICAgIHNsb3Rz
ID0gc2xvdCA9IGlkOworCSAgICAgIGNoZWNrc3VtID0gbG9uZ19uYW1lLT5jaGVja3N1bTsKKwkg
ICAgICBwcmludGYoInRoZSBsYXN0IG9yZGluYWwgbnVtPSVkISEhXG4iLCBpZCk7CisJICAgIH0K
KworCSAgaWYgKGlkICE9IHNsb3QgfHwgc2xvdCA9PSAwIHx8IGNoZWNrc3VtICE9IGxvbmdfbmFt
ZS0+Y2hlY2tzdW0pCisJICAgIHsKKwkgICAgICBwcmludGYoIm5vdCB2YWxpZCBvcmRpbmFsIG51
bWJlciAsaWdub3JlLi4uY29udGludWVcbiIpOworCSAgICAgIGNoZWNrc3VtID0gLTE7CisJICAg
ICAgY29udGludWU7CisJICAgIH0KKworCSAgc2xvdC0tOworCSAgbWVtY3B5ICh1bmlidWYgKyBz
bG90ICogMTMsIGxvbmdfbmFtZS0+bmFtZTEsIDUgKiAyKTsKKwkgIG1lbWNweSAodW5pYnVmICsg
c2xvdCAqIDEzICsgNSwgbG9uZ19uYW1lLT5uYW1lMiwgNiAqIDIpOworCSAgbWVtY3B5ICh1bmli
dWYgKyBzbG90ICogMTMgKyAxMSwgbG9uZ19uYW1lLT5uYW1lMywgMiAqIDIpOworCSAgcHJpbnRm
KCJtZW1jcHkuLi5jb250aW51ZVxuIik7CisJICBjb250aW51ZTsKKwl9CisKKyAgICAgIAorICAg
ICAgLyogQ2hlY2sgaWYgdGhpcyBlbnRyeSBpcyB2YWxpZC4gICovCisgICAgICAvL294ZTWx7cq+
0tG+rbG7yb6z/QorICAgICAgaWYgKGRpci5uYW1lWzBdID09IDB4ZTUgfHwgKGRpci5hdHRyICYg
fkdSVUJfRkFUX0FUVFJfVkFMSUQpKQorCXsKKwkgIHByaW50ZigiZGlyLm5hbWVbMF09MHgleCwg
ZGlyLmF0dHI9MHgleCBub3QgdmFsaWQuLi5jb250aW51ZVxuIiwgCisJCSBkaXIubmFtZVswXSwg
ZGlyLmF0dHIpOworCSAgY29udGludWU7CisJfQorCisgICAgICBwcmludGYoImNoZWNrc3VtPSVk
LCBzbG90PSVkXG4iLCBjaGVja3N1bSwgc2xvdCk7CisgICAgICAvKiBUaGlzIGlzIGEgd29ya2Fy
b3VuZCBmb3IgSmFwYW5lc2UuICAqLworICAgICAgaWYgKGRpci5uYW1lWzBdID09IDB4MDUpCisJ
ZGlyLm5hbWVbMF0gPSAweGU1OworCisgICAgICBpZiAoY2hlY2tzdW0gIT0gLTEgJiYgc2xvdCA9
PSAwKQorCXsKKwkgIHByaW50ZigiY2hlY2tzdW1pbmdcbiIpOworCSAgZ3J1Yl91aW50OF90IHN1
bTsKKworCSAgZm9yIChzdW0gPSAwLCBpID0gMDsgaSA8IHNpemVvZiAoZGlyLm5hbWUpOyBpKysp
CisJICAgIHN1bSA9ICgoc3VtID4+IDEpIHwgKHN1bSA8PCA3KSkgKyBkaXIubmFtZVtpXTsKKwor
CSAgaWYgKHN1bSA9PSBjaGVja3N1bSkKKwkgICAgey8vs6TD+7Htz+6688PmvfS907bMw/ux7c/u
o6zR6daks8m5ptTy1qTD99Xm1f3Kx7Okw/vX1gorCSAgICAgIGludCB1OworCisJICAgICAgZm9y
ICh1ID0gMDsgdSA8IHNsb3RzICogMTM7IHUrKykKKwkJdW5pYnVmW3VdID0gZ3J1Yl9sZV90b19j
cHUxNiAodW5pYnVmW3VdKTsKKworCSAgICAgICpncnViX3V0ZjE2X3RvX3V0ZjggKChncnViX3Vp
bnQ4X3QgKikgZmlsZW5hbWUsIHVuaWJ1ZiwKKwkJCQkgICBzbG90cyAqIDEzKSA9ICdcMCc7CisK
KwkgICAgICAvL2lmIChob29rIChmaWxlbmFtZSwgJmRpciwgY2xvc3VyZSkpCisJICAgICAgICAv
L2JyZWFrOworCisJICAgICAgY2hlY2tzdW0gPSAtMTsKKwkgICAgICBmb3IgKGkgPSAwOyBpIDwg
c2l6ZW9mIChkaXIubmFtZSk7IGkrKykKKwkJcHJpbnRmKCIweCV4ICAiLCBkaXIubmFtZVtpXSk7
CisJICAgICAgY2hhciAqZ2JuYW1lID0gKGNoYXIqKW1hbGxvYygyNTYpOworCSAgICAgIHUyZyhm
aWxlbmFtZSwgc3RybGVuKGZpbGVuYW1lKSwgZ2JuYW1lLCAyNTYpOworCSAgICAgIHByaW50Zigi
XG5kaXIubmFtZT0lcywgZmlsZW5hbWU9JXMsIGRpci5hdHRyPTB4JXgsIgorCQkgICAgICJzdW09
PWNoZWNrc3VtLi4uY29udGludWVcbiIsCisJCSAgICAgZGlyLm5hbWUsIGdibmFtZSwgZGlyLmF0
dHIpOworCSAgICAgIGZyZWUoZ2JuYW1lKTsKKwkgICAgICBjb3VudCsrOworCSAgICAgIGNvbnRp
bnVlOworCSAgICB9CisKKwkgIGNoZWNrc3VtID0gLTE7CisJfQorCisgICAgICAvL7rzw+a1xLSm
wO3V67bUt8fV5sq1s6TD+7rN1ebKtbbMw/sKKyAgICAgIC8qIENvbnZlcnQgdGhlIDguMyBmaWxl
IG5hbWUuICAqLworICAgICAgLy/IpbX0tszD+7XEv9W48aOsyKu4xM6q0KHQtAorICAgICAgZmls
ZXAgPSBmaWxlbmFtZTsKKyAgICAgIGlmIChkaXIuYXR0ciAmIEdSVUJfRkFUX0FUVFJfVk9MVU1F
X0lEKQorCXsKKwkgIHByaW50ZigiVk9MVU1FXG4iKTsKKwkgIGZvciAoaSA9IDA7IGkgPCBzaXpl
b2YgKGRpci5uYW1lKSAmJiBkaXIubmFtZVtpXQorCQkgJiYgISBncnViX2lzc3BhY2UgKGRpci5u
YW1lW2ldKTsgaSsrKQorCSAgICAqZmlsZXArKyA9IGRpci5uYW1lW2ldOworCX0KKyAgICAgIGVs
c2UKKwl7CisJICBmb3IgKGkgPSAwOyBpIDwgOCAmJiBkaXIubmFtZVtpXSAmJiAhIGdydWJfaXNz
cGFjZSAoZGlyLm5hbWVbaV0pOyBpKyspCisJICAgICpmaWxlcCsrID0gZ3J1Yl90b2xvd2VyIChk
aXIubmFtZVtpXSk7CisKKwkgICpmaWxlcCA9ICcuJzsKKworCSAgZm9yIChpID0gODsgaSA8IDEx
ICYmIGRpci5uYW1lW2ldICYmICEgZ3J1Yl9pc3NwYWNlIChkaXIubmFtZVtpXSk7IGkrKykKKwkg
ICAgKisrZmlsZXAgPSBncnViX3RvbG93ZXIgKGRpci5uYW1lW2ldKTsKKworCSAgaWYgKCpmaWxl
cCAhPSAnLicpCisJICAgIGZpbGVwKys7CisJfQorICAgICAgKmZpbGVwID0gJ1wwJzsKKworICAg
ICAgCisgICAgICBmb3IgKGkgPSAwOyBpIDwgc2l6ZW9mIChkaXIubmFtZSk7IGkrKykKKwlwcmlu
dGYoIjB4JXggICIsIGRpci5uYW1lW2ldKTsKKyAgICAgIHByaW50ZigiXG5kaXIubmFtZT0lcywg
ZmlsZW5hbWU9ob4lc6G/LCBkaXIuYXR0cj0weCV4LCIKKwkgICAgICIuLi5uZXh0IHdoaWxlXG4i
LAorCSAgICAgZGlyLm5hbWUsIGZpbGVuYW1lLCBkaXIuYXR0cik7CisgICAgICBjb3VudCsrOwor
ICAgICAgLyppZihzdHJjbXAoZmlsZW5hbWUsICIuIikgJiYgc3RyY21wKGZpbGVuYW1lLCAiLi4i
KSkKKwl7CisJICBwcmludGYoIns9PT09PT09PT09PT09PT5cbiIpOworCSAgc3RydWN0IGdydWJf
ZmF0X2RhdGEgKmRhdGEyID0gTlVMTDsKKwkgIGRhdGEyID0gKHN0cnVjdCBncnViX2ZhdF9kYXRh
KiltYWxsb2Moc2l6ZW9mKCpkYXRhKSk7CisJICBtZW1jcHkoZGF0YTIsIGRhdGEsIHNpemVvZigq
ZGF0YSkpOworCSAgZGF0YTItPmF0dHIgPSBkaXIuYXR0cjsKKwkgIGRhdGEyLT5maWxlX3NpemUg
PSBncnViX2xlX3RvX2NwdTMyIChkaXIuZmlsZV9zaXplKTsKKwkgIGRhdGEyLT5maWxlX2NsdXN0
ZXIgPSAoKGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2hpZ2gpIDw8IDE2KQor
CQkJCSB8IGdydWJfbGVfdG9fY3B1MTYgKGRpci5maXJzdF9jbHVzdGVyX2xvdykpOworCSAgZGF0
YTItPmN1cl9jbHVzdGVyX251bSA9IH4wVTsKKwkgIChncnViX2ZhdF9pdGVyYXRlX2Rpcihicywg
ZGF0YTIpIDwgMCkgPyBwcmludGYoImVycm9yICEhISEhIVxuIikgOiAwOworCSAgZnJlZShkYXRh
Mik7CisJICBwcmludGYoIjw9PT09PT09PT09PT09PT09PT09fVxuIik7CisJfQorICAgICAgKi8K
KyAgICAgIC8vaWYgKGhvb2sgKGZpbGVuYW1lLCAmZGlyLCBjbG9zdXJlKSkKKyAgICAgICAgLy9i
cmVhazsKKyAgICB9CisKKyAgZnJlZSAoZmlsZW5hbWUpOworICBmcmVlICh1bmlidWYpOworCisg
IHJldHVybiAwOworfQorCisKKy8qCitzdHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZQor
eworICBzdHJ1Y3QgZ3J1Yl9mYXRfZGF0YSAqZGF0YTsKKyAgaW50ICgqaG9vaykgKGNvbnN0IGNo
YXIgKmZpbGVuYW1lLAorCSAgICAgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2luZm8gKmlu
Zm8sCisJICAgICAgIHZvaWQgKmNsb3N1cmUpOworICB2b2lkICpjbG9zdXJlOworICBjaGFyICpk
aXJuYW1lOworICBpbnQgY2FsbF9ob29rOworICBpbnQgZm91bmQ7Cit9OworCisKK3N0YXRpYyBp
bnQKK2dydWJfZmF0X2ZpbmRfZGlyX2hvb2sgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLCBzdHJ1Y3Qg
Z3J1Yl9mYXRfZGlyX2VudHJ5ICpkaXIsCisJCQl2b2lkICpjbG9zdXJlKQoreworICBzdHJ1Y3Qg
Z3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZSAqYyA9IGNsb3N1cmU7CisgIHN0cnVjdCBncnViX2Rp
cmhvb2tfaW5mbyBpbmZvOworICBncnViX21lbXNldCAoJmluZm8sIDAsIHNpemVvZiAoaW5mbykp
OworCisgIGluZm8uZGlyID0gISEgKGRpci0+YXR0ciAmIEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZ
KTsKKyAgaW5mby5jYXNlX2luc2Vuc2l0aXZlID0gMTsKKworICBpZiAoZGlyLT5hdHRyICYgR1JV
Ql9GQVRfQVRUUl9WT0xVTUVfSUQpCisgICAgcmV0dXJuIDA7CisgIGlmICgqKGMtPmRpcm5hbWUp
ID09ICdcMCcgJiYgKGMtPmNhbGxfaG9vaykpCisgICAgcmV0dXJuIGMtPmhvb2sgKGZpbGVuYW1l
LCAmaW5mbywgYy0+Y2xvc3VyZSk7CisKKyAgaWYgKGdydWJfc3RyY2FzZWNtcCAoYy0+ZGlybmFt
ZSwgZmlsZW5hbWUpID09IDApCisgICAgeworICAgICAgc3RydWN0IGdydWJfZmF0X2RhdGEgKmRh
dGEgPSBjLT5kYXRhOworCisgICAgICBjLT5mb3VuZCA9IDE7CisgICAgICBkYXRhLT5hdHRyID0g
ZGlyLT5hdHRyOworICAgICAgZGF0YS0+ZmlsZV9zaXplID0gZ3J1Yl9sZV90b19jcHUzMiAoZGly
LT5maWxlX3NpemUpOworICAgICAgZGF0YS0+ZmlsZV9jbHVzdGVyID0gKChncnViX2xlX3RvX2Nw
dTE2IChkaXItPmZpcnN0X2NsdXN0ZXJfaGlnaCkgPDwgMTYpCisJCQkgICAgICAgfCBncnViX2xl
X3RvX2NwdTE2IChkaXItPmZpcnN0X2NsdXN0ZXJfbG93KSk7CisgICAgICBkYXRhLT5jdXJfY2x1
c3Rlcl9udW0gPSB+MFU7CisKKyAgICAgIGlmIChjLT5jYWxsX2hvb2spCisJYy0+aG9vayAoZmls
ZW5hbWUsICZpbmZvLCBjLT5jbG9zdXJlKTsKKworICAgICAgcmV0dXJuIDE7CisgICAgfQorICBy
ZXR1cm4gMDsKK30KKyovCisKKy8qIEZpbmQgdGhlIHVuZGVybHlpbmcgZGlyZWN0b3J5IG9yIGZp
bGUgaW4gUEFUSCBhbmQgcmV0dXJuIHRoZQorICAgbmV4dCBwYXRoLiBJZiB0aGVyZSBpcyBubyBu
ZXh0IHBhdGggb3IgYW4gZXJyb3Igb2NjdXJzLCByZXR1cm4gTlVMTC4KKyAgIElmIEhPT0sgaXMg
c3BlY2lmaWVkLCBjYWxsIGl0IHdpdGggZWFjaCBmaWxlIG5hbWUuICAqLworY2hhciAqCitncnVi
X2ZhdF9maW5kX2RpciAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBncnViX2ZhdF9kYXRh
ICpkYXRhLAorCQkgICBjb25zdCBjaGFyICpwYXRoLAorCQkgICBpbnQgKCpob29rKSAoY29uc3Qg
Y2hhciAqZmlsZW5hbWUsCisJCQkJY29uc3Qgc3RydWN0IGdydWJfZGlyaG9va19pbmZvICppbmZv
LAorCQkJCXZvaWQgKmNsb3N1cmUpLAorCQkgICB2b2lkICpjbG9zdXJlKQoreworICBjaGFyICpk
aXJuYW1lLCAqZGlycDsKKyAgLy9zdHJ1Y3QgZ3J1Yl9mYXRfZmluZF9kaXJfY2xvc3VyZSBjOwor
CisgIGlmICghIChkYXRhLT5hdHRyICYgR1JVQl9GQVRfQVRUUl9ESVJFQ1RPUlkpKQorICAgIHsK
KyAgICAgIHByaW50Zigibm90IGEgZGlyZWN0b3J5Li4uLi4uLi4uLi4uLlxuIik7CisgICAgICBy
ZXR1cm4gMDsKKyAgICB9CisKKyAgLyogRXh0cmFjdCBhIGRpcmVjdG9yeSBuYW1lLiAgKi8KKyAg
d2hpbGUgKCpwYXRoID09ICcvJykKKyAgICBwYXRoKys7CisKKyAgZGlycCA9IGdydWJfc3RyY2hy
IChwYXRoLCAnLycpOworICBpZiAoZGlycCkKKyAgICB7CisgICAgICB1bnNpZ25lZCBsZW4gPSBk
aXJwIC0gcGF0aDsKKworICAgICAgZGlybmFtZSA9IChjaGFyKiltYWxsb2MgKGxlbiArIDEpOwor
ICAgICAgaWYgKCEgZGlybmFtZSkKKwlyZXR1cm4gMDsKKworICAgICAgbWVtY3B5IChkaXJuYW1l
LCBwYXRoLCBsZW4pOworICAgICAgZGlybmFtZVtsZW5dID0gJ1wwJzsKKyAgICB9CisgIGVsc2UK
KyAgICB7CisgICAgLyogVGhpcyBpcyBhY3R1YWxseSBhIGZpbGUuICAqLworICAgICAgZGlybmFt
ZSA9IGdydWJfc3RyZHVwIChwYXRoKTsKKyAgICB9CisgIC8vYy5kYXRhID0gZGF0YTsKKyAgLy9j
Lmhvb2sgPSBob29rOworICAvL2MuY2xvc3VyZSA9IGNsb3N1cmU7CisgIC8vYy5kaXJuYW1lID1k
aXJuYW1lOworICAvL2MuZm91bmQgPSAwOworICAvL2MuY2FsbF9ob29rID0gKCEgZGlycCAmJiBo
b29rKTsKKyAgaWYoZ3J1Yl9mYXRfaXRlcmF0ZV9kaXIgKGJzLCBkYXRhKTwwKQorICAgIHsKKyAg
ICAgICBwcmludGYoImZpbGUgbm90IGZvdW5kLi5cbiIpOworICAgICAgIHJldHVybiAwOworICAg
IH0KKyAgICAKKyAgCisgIGZyZWUgKGRpcm5hbWUpOworCisgIHJldHVybiBkaXJwOworfQorCisK
KworCisKKworCisKKworCisKKworCisKKworCisKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4g
LVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItZmF0L2ZhdC5oIHhlbi00
LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL2dydWItZmF0L2ZhdC5oCi0tLSB4ZW4tNC4xLjIt
YS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mYXQuaAkxOTcwLTAxLTAxIDA3OjAwOjAw
LjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1
Yi1mYXQvZmF0LmgJMjAxMi0xMi0yOCAxNjowMjo0MS4wMDk5Mzc3MzggKzA4MDAKQEAgLTAsMCAr
MSwxNDYgQEAKKyNpZm5kZWYgRlNfRkFUX0gKKyNkZWZpbmUgRlNfRkFUX0gKKworCisjaW5jbHVk
ZSAiZnMtdHlwZXMuaCIKKyNpbmNsdWRlICJibG9ja19pbnQuaCIKKworI2RlZmluZSBHUlVCX0RJ
U0tfU0VDVE9SX0JJVFMgICAgICA5CisjZGVmaW5lIEdSVUJfRkFUX0RJUl9FTlRSWV9TSVpFCTMy
CisKKyNkZWZpbmUgR1JVQl9GQVRfQVRUUl9SRUFEX09OTFkJMHgwMQorI2RlZmluZSBHUlVCX0ZB
VF9BVFRSX0hJRERFTgkweDAyCisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfU1lTVEVNCTB4MDQKKyNk
ZWZpbmUgR1JVQl9GQVRfQVRUUl9WT0xVTUVfSUQJMHgwOAorI2RlZmluZSBHUlVCX0ZBVF9BVFRS
X0RJUkVDVE9SWQkweDEwCisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfQVJDSElWRQkweDIwCisKKyNk
ZWZpbmUgR1JVQl9GQVRfTUFYRklMRQkyNTYKKworI2RlZmluZSBHUlVCX0ZBVF9BVFRSX0xPTkdf
TkFNRQkoR1JVQl9GQVRfQVRUUl9SRUFEX09OTFkgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfSElE
REVOIFwKKwkJCQkgfCBHUlVCX0ZBVF9BVFRSX1NZU1RFTSBcCisJCQkJIHwgR1JVQl9GQVRfQVRU
Ul9WT0xVTUVfSUQpCisjZGVmaW5lIEdSVUJfRkFUX0FUVFJfVkFMSUQJKEdSVUJfRkFUX0FUVFJf
UkVBRF9PTkxZIFwKKwkJCQkgfCBHUlVCX0ZBVF9BVFRSX0hJRERFTiBcCisJCQkJIHwgR1JVQl9G
QVRfQVRUUl9TWVNURU0gXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfRElSRUNUT1JZIFwKKwkJCQkg
fCBHUlVCX0ZBVF9BVFRSX0FSQ0hJVkUgXAorCQkJCSB8IEdSVUJfRkFUX0FUVFJfVk9MVU1FX0lE
KQorCitzdHJ1Y3QgZ3J1Yl9mYXRfYnBiCit7CisgIGdydWJfdWludDhfdCBqbXBfYm9vdFszXTsK
KyAgZ3J1Yl91aW50OF90IG9lbV9uYW1lWzhdOworICBncnViX3VpbnQxNl90IGJ5dGVzX3Blcl9z
ZWN0b3I7CisgIGdydWJfdWludDhfdCBzZWN0b3JzX3Blcl9jbHVzdGVyOworICBncnViX3VpbnQx
Nl90IG51bV9yZXNlcnZlZF9zZWN0b3JzOworICBncnViX3VpbnQ4X3QgbnVtX2ZhdHM7CisgIGdy
dWJfdWludDE2X3QgbnVtX3Jvb3RfZW50cmllczsKKyAgZ3J1Yl91aW50MTZfdCBudW1fdG90YWxf
c2VjdG9yc18xNjsKKyAgZ3J1Yl91aW50OF90IG1lZGlhOworICBncnViX3VpbnQxNl90IHNlY3Rv
cnNfcGVyX2ZhdF8xNjsKKyAgZ3J1Yl91aW50MTZfdCBzZWN0b3JzX3Blcl90cmFjazsKKyAgZ3J1
Yl91aW50MTZfdCBudW1faGVhZHM7CisgIGdydWJfdWludDMyX3QgbnVtX2hpZGRlbl9zZWN0b3Jz
OworICBncnViX3VpbnQzMl90IG51bV90b3RhbF9zZWN0b3JzXzMyOworICB1bmlvbgorICB7Cisg
ICAgc3RydWN0CisgICAgeworICAgICAgZ3J1Yl91aW50OF90IG51bV9waF9kcml2ZTsKKyAgICAg
IGdydWJfdWludDhfdCByZXNlcnZlZDsKKyAgICAgIGdydWJfdWludDhfdCBib290X3NpZzsKKyAg
ICAgIGdydWJfdWludDMyX3QgbnVtX3NlcmlhbDsKKyAgICAgIGdydWJfdWludDhfdCBsYWJlbFsx
MV07CisgICAgICBncnViX3VpbnQ4X3QgZnN0eXBlWzhdOworICAgIH0gX19hdHRyaWJ1dGVfXyAo
KHBhY2tlZCkpIGZhdDEyX29yX2ZhdDE2OworICAgIHN0cnVjdAorICAgIHsKKyAgICAgIGdydWJf
dWludDMyX3Qgc2VjdG9yc19wZXJfZmF0XzMyOworICAgICAgZ3J1Yl91aW50MTZfdCBleHRlbmRl
ZF9mbGFnczsKKyAgICAgIGdydWJfdWludDE2X3QgZnNfdmVyc2lvbjsKKyAgICAgIGdydWJfdWlu
dDMyX3Qgcm9vdF9jbHVzdGVyOworICAgICAgZ3J1Yl91aW50MTZfdCBmc19pbmZvOworICAgICAg
Z3J1Yl91aW50MTZfdCBiYWNrdXBfYm9vdF9zZWN0b3I7CisgICAgICBncnViX3VpbnQ4X3QgcmVz
ZXJ2ZWRbMTJdOworICAgICAgZ3J1Yl91aW50OF90IG51bV9waF9kcml2ZTsKKyAgICAgIGdydWJf
dWludDhfdCByZXNlcnZlZDE7CisgICAgICBncnViX3VpbnQ4X3QgYm9vdF9zaWc7CisgICAgICBn
cnViX3VpbnQzMl90IG51bV9zZXJpYWw7CisgICAgICBncnViX3VpbnQ4X3QgbGFiZWxbMTFdOwor
ICAgICAgZ3J1Yl91aW50OF90IGZzdHlwZVs4XTsKKyAgICB9IF9fYXR0cmlidXRlX18gKChwYWNr
ZWQpKSBmYXQzMjsKKyAgfSBfX2F0dHJpYnV0ZV9fICgocGFja2VkKSkgdmVyc2lvbl9zcGVjaWZp
YzsKK30gX19hdHRyaWJ1dGVfXyAoKHBhY2tlZCkpOworCitzdHJ1Y3QgZ3J1Yl9mYXRfZGlyX2Vu
dHJ5Cit7CisgIGdydWJfdWludDhfdCBuYW1lWzExXTsKKyAgZ3J1Yl91aW50OF90IGF0dHI7Cisg
IGdydWJfdWludDhfdCBudF9yZXNlcnZlZDsKKyAgZ3J1Yl91aW50OF90IGNfdGltZV90ZW50aDsK
KyAgZ3J1Yl91aW50MTZfdCBjX3RpbWU7CisgIGdydWJfdWludDE2X3QgY19kYXRlOworICBncnVi
X3VpbnQxNl90IGFfZGF0ZTsKKyAgZ3J1Yl91aW50MTZfdCBmaXJzdF9jbHVzdGVyX2hpZ2g7Cisg
IGdydWJfdWludDE2X3Qgd190aW1lOworICBncnViX3VpbnQxNl90IHdfZGF0ZTsKKyAgZ3J1Yl91
aW50MTZfdCBmaXJzdF9jbHVzdGVyX2xvdzsKKyAgZ3J1Yl91aW50MzJfdCBmaWxlX3NpemU7Cit9
IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKworc3RydWN0IGdydWJfZmF0X2xvbmdfbmFtZV9l
bnRyeQoreworICBncnViX3VpbnQ4X3QgaWQ7CisgIGdydWJfdWludDE2X3QgbmFtZTFbNV07Cisg
IGdydWJfdWludDhfdCBhdHRyOworICBncnViX3VpbnQ4X3QgcmVzZXJ2ZWQ7CisgIGdydWJfdWlu
dDhfdCBjaGVja3N1bTsKKyAgZ3J1Yl91aW50MTZfdCBuYW1lMls2XTsKKyAgZ3J1Yl91aW50MTZf
dCBmaXJzdF9jbHVzdGVyOworICBncnViX3VpbnQxNl90IG5hbWUzWzJdOworfSBfX2F0dHJpYnV0
ZV9fICgocGFja2VkKSk7CisKK3N0cnVjdCBncnViX2ZhdF9kYXRhCit7CisgIGludCBsb2dpY2Fs
X3NlY3Rvcl9iaXRzOworICBncnViX3VpbnQzMl90IG51bV9zZWN0b3JzOworCisgIGdydWJfdWlu
dDE2X3QgZmF0X3NlY3RvcjsKKyAgZ3J1Yl91aW50MzJfdCBzZWN0b3JzX3Blcl9mYXQ7CisgIGlu
dCBmYXRfc2l6ZTsKKworICBncnViX3VpbnQzMl90IHJvb3RfY2x1c3RlcjsKKyAgZ3J1Yl91aW50
MzJfdCByb290X3NlY3RvcjsKKyAgZ3J1Yl91aW50MzJfdCBudW1fcm9vdF9zZWN0b3JzOworCisg
IGludCBjbHVzdGVyX2JpdHM7CisgIGdydWJfdWludDMyX3QgY2x1c3Rlcl9lb2ZfbWFyazsKKyAg
Z3J1Yl91aW50MzJfdCBjbHVzdGVyX3NlY3RvcjsKKyAgZ3J1Yl91aW50MzJfdCBudW1fY2x1c3Rl
cnM7CisKKyAgZ3J1Yl91aW50OF90IGF0dHI7CisgIGdydWJfc3NpemVfdCBmaWxlX3NpemU7Cisg
IGdydWJfdWludDMyX3QgZmlsZV9jbHVzdGVyOworICBncnViX3VpbnQzMl90IGN1cl9jbHVzdGVy
X251bTsKKyAgZ3J1Yl91aW50MzJfdCBjdXJfY2x1c3RlcjsKKworICBncnViX3VpbnQzMl90IHV1
aWQ7Cit9OworCisKKworCisKKworc3RydWN0IGdydWJfZmF0X2RhdGEqIGdydWJfZmF0X21vdW50
IChCbG9ja0RyaXZlclN0YXRlICpicywgdWludDMyX3QgcGFydF9vZmZfc2VjdG9yKTsKK2ludCBn
cnViX2ZhdF9pdGVyYXRlX2RpciAoQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIHN0cnVjdCBncnViX2Zh
dF9kYXRhICpkYXRhKTsKKworCisKKworCisKKworI2VuZGlmCmRpZmYgLS1leGNsdWRlPS5zdm4g
LXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvZnMtdHlw
ZXMuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mcy10eXBlcy5o
Ci0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mcy10eXBlcy5o
CTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9mcy10eXBlcy5oCTIwMTItMTItMjggMTY6MDI6NDEu
MDA5OTM3NzM4ICswODAwCkBAIC0wLDAgKzEsMjI4IEBACisvKgorICogIEdSVUIgIC0tICBHUmFu
ZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5cmlnaHQgKEMpIDIwMDIsMjAwNSwyMDA2LDIw
MDcsMjAwOCwyMDA5ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4KKyAqCisgKiAgR1JV
QiBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5
CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5z
ZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBlaXRo
ZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChhdCB5b3VyIG9wdGlvbikgYW55
IGxhdGVyIHZlcnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUg
dGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdp
dGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICogIE1FUkNIQU5UQUJJTElUWSBv
ciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUKKyAqICBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgorICoKKyAqICBZb3Ugc2hvdWxk
IGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZQor
ICogIGFsb25nIHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGlj
ZW5zZXMvPi4KKyAqLworCisjaWZuZGVmIEdSVUJfVFlQRVNfSEVBREVSCisjZGVmaW5lIEdSVUJf
VFlQRVNfSEVBREVSCTEKKworI2luY2x1ZGUgPGNvbmZpZy5oPgorI2luY2x1ZGUgPHg4Nl82NC90
eXBlcy5oPgorCisjaWZkZWYgR1JVQl9VVElMCisjIGRlZmluZSBHUlVCX0NQVV9TSVpFT0ZfVk9J
RF9QCVNJWkVPRl9WT0lEX1AKKyMgZGVmaW5lIEdSVUJfQ1BVX1NJWkVPRl9MT05HCVNJWkVPRl9M
T05HCisjIGlmZGVmIFdPUkRTX0JJR0VORElBTgorIyAgZGVmaW5lIEdSVUJfQ1BVX1dPUkRTX0JJ
R0VORElBTgkxCisjIGVsc2UKKyMgIHVuZGVmIEdSVUJfQ1BVX1dPUkRTX0JJR0VORElBTgorIyBl
bmRpZgorI2Vsc2UgLyogISBHUlVCX1VUSUwgKi8KKyMgZGVmaW5lIEdSVUJfQ1BVX1NJWkVPRl9W
T0lEX1AJR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAorIyBkZWZpbmUgR1JVQl9DUFVfU0laRU9G
X0xPTkcJR1JVQl9UQVJHRVRfU0laRU9GX0xPTkcKKyMgaWZkZWYgR1JVQl9UQVJHRVRfV09SRFNf
QklHRU5ESUFOCisjICBkZWZpbmUgR1JVQl9DUFVfV09SRFNfQklHRU5ESUFOCTEKKyMgZWxzZQor
IyAgdW5kZWYgR1JVQl9DUFVfV09SRFNfQklHRU5ESUFOCisjIGVuZGlmCisjZW5kaWYgLyogISBH
UlVCX1VUSUwgKi8KKworI2lmIEdSVUJfQ1BVX1NJWkVPRl9WT0lEX1AgIT0gNCAmJiBHUlVCX0NQ
VV9TSVpFT0ZfVk9JRF9QICE9IDgKKyMgZXJyb3IgIlRoaXMgYXJjaGl0ZWN0dXJlIGlzIG5vdCBz
dXBwb3J0ZWQgYmVjYXVzZSBzaXplb2Yodm9pZCAqKSAhPSA0IGFuZCBzaXplb2Yodm9pZCAqKSAh
PSA4IgorI2VuZGlmCisKKyNpZm5kZWYgR1JVQl9UQVJHRVRfV09SRFNJWkUKKyMgaWYgR1JVQl9U
QVJHRVRfU0laRU9GX1ZPSURfUCA9PSA0CisjICBkZWZpbmUgR1JVQl9UQVJHRVRfV09SRFNJWkUg
MzIKKyMgZWxpZiBHUlVCX1RBUkdFVF9TSVpFT0ZfVk9JRF9QID09IDgKKyMgIGRlZmluZSBHUlVC
X1RBUkdFVF9XT1JEU0laRSA2NAorIyBlbmRpZgorI2VuZGlmCisKKy8qIERlZmluZSB2YXJpb3Vz
IHdpZGUgaW50ZWdlcnMuICAqLwordHlwZWRlZiBzaWduZWQgY2hhcgkJZ3J1Yl9pbnQ4X3Q7Cit0
eXBlZGVmIHNob3J0CQkJZ3J1Yl9pbnQxNl90OwordHlwZWRlZiBpbnQJCQlncnViX2ludDMyX3Q7
CisjaWYgR1JVQl9DUFVfU0laRU9GX0xPTkcgPT0gOAordHlwZWRlZiBsb25nCQkJZ3J1Yl9pbnQ2
NF90OworI2Vsc2UKK3R5cGVkZWYgbG9uZyBsb25nCQlncnViX2ludDY0X3Q7CisjZW5kaWYKKwor
dHlwZWRlZiB1bnNpZ25lZCBjaGFyCQlncnViX3VpbnQ4X3Q7Cit0eXBlZGVmIHVuc2lnbmVkIHNo
b3J0CQlncnViX3VpbnQxNl90OwordHlwZWRlZiB1bnNpZ25lZAkJZ3J1Yl91aW50MzJfdDsKKyNp
ZiBHUlVCX0NQVV9TSVpFT0ZfTE9ORyA9PSA4Cit0eXBlZGVmIHVuc2lnbmVkIGxvbmcJCWdydWJf
dWludDY0X3Q7CisjZWxzZQordHlwZWRlZiB1bnNpZ25lZCBsb25nIGxvbmcJZ3J1Yl91aW50NjRf
dDsKKyNlbmRpZgorCisvKiBNaXNjIHR5cGVzLiAgKi8KKyNpZiBHUlVCX1RBUkdFVF9TSVpFT0Zf
Vk9JRF9QID09IDgKK3R5cGVkZWYgZ3J1Yl91aW50NjRfdAlncnViX3RhcmdldF9hZGRyX3Q7Cit0
eXBlZGVmIGdydWJfdWludDY0X3QJZ3J1Yl90YXJnZXRfb2ZmX3Q7Cit0eXBlZGVmIGdydWJfdWlu
dDY0X3QJZ3J1Yl90YXJnZXRfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDY0X3QJZ3J1Yl90YXJn
ZXRfc3NpemVfdDsKKyNlbHNlCit0eXBlZGVmIGdydWJfdWludDMyX3QJZ3J1Yl90YXJnZXRfYWRk
cl90OwordHlwZWRlZiBncnViX3VpbnQzMl90CWdydWJfdGFyZ2V0X29mZl90OwordHlwZWRlZiBn
cnViX3VpbnQzMl90CWdydWJfdGFyZ2V0X3NpemVfdDsKK3R5cGVkZWYgZ3J1Yl9pbnQzMl90CWdy
dWJfdGFyZ2V0X3NzaXplX3Q7CisjZW5kaWYKKworI2lmIEdSVUJfQ1BVX1NJWkVPRl9WT0lEX1Ag
PT0gOAordHlwZWRlZiBncnViX3VpbnQ2NF90CWdydWJfYWRkcl90OwordHlwZWRlZiBncnViX3Vp
bnQ2NF90CWdydWJfc2l6ZV90OwordHlwZWRlZiBncnViX2ludDY0X3QJZ3J1Yl9zc2l6ZV90Owor
I2Vsc2UKK3R5cGVkZWYgZ3J1Yl91aW50MzJfdAlncnViX2FkZHJfdDsKK3R5cGVkZWYgZ3J1Yl91
aW50MzJfdAlncnViX3NpemVfdDsKK3R5cGVkZWYgZ3J1Yl9pbnQzMl90CWdydWJfc3NpemVfdDsK
KyNlbmRpZgorCisjaWYgR1JVQl9DUFVfU0laRU9GX1ZPSURfUCA9PSA4CisjIGRlZmluZSBHUlVC
X1VMT05HX01BWCAxODQ0Njc0NDA3MzcwOTU1MTYxNVVMCisjIGRlZmluZSBHUlVCX0xPTkdfTUFY
IDkyMjMzNzIwMzY4NTQ3NzU4MDdMCisjIGRlZmluZSBHUlVCX0xPTkdfTUlOICgtOTIyMzM3MjAz
Njg1NDc3NTgwN0wgLSAxKQorI2Vsc2UKKyMgZGVmaW5lIEdSVUJfVUxPTkdfTUFYIDQyOTQ5Njcy
OTVVTAorIyBkZWZpbmUgR1JVQl9MT05HX01BWCAyMTQ3NDgzNjQ3TAorIyBkZWZpbmUgR1JVQl9M
T05HX01JTiAoLTIxNDc0ODM2NDdMIC0gMSkKKyNlbmRpZgorCisjaWYgR1JVQl9DUFVfU0laRU9G
X1ZPSURfUCA9PSA0CisjZGVmaW5lIFVJTlRfVE9fUFRSKHgpICgodm9pZCopKGdydWJfdWludDMy
X3QpKHgpKQorI2RlZmluZSBQVFJfVE9fVUlOVDY0KHgpICgoZ3J1Yl91aW50NjRfdCkoZ3J1Yl91
aW50MzJfdCkoeCkpCisjZGVmaW5lIFBUUl9UT19VSU5UMzIoeCkgKChncnViX3VpbnQzMl90KSh4
KSkKKyNlbHNlCisjZGVmaW5lIFVJTlRfVE9fUFRSKHgpICgodm9pZCopKGdydWJfdWludDY0X3Qp
KHgpKQorI2RlZmluZSBQVFJfVE9fVUlOVDY0KHgpICgoZ3J1Yl91aW50NjRfdCkoeCkpCisjZGVm
aW5lIFBUUl9UT19VSU5UMzIoeCkgKChncnViX3VpbnQzMl90KShncnViX3VpbnQ2NF90KSh4KSkK
KyNlbmRpZgorCisvKiBUaGUgdHlwZSBmb3IgcmVwcmVzZW50aW5nIGEgZmlsZSBvZmZzZXQuICAq
LwordHlwZWRlZiBncnViX3VpbnQ2NF90CWdydWJfb2ZmX3Q7CisKKy8qIFRoZSB0eXBlIGZvciBy
ZXByZXNlbnRpbmcgYSBkaXNrIGJsb2NrIGFkZHJlc3MuICAqLwordHlwZWRlZiBncnViX3VpbnQ2
NF90CWdydWJfZGlza19hZGRyX3Q7CisKKy8qIEJ5dGUtb3JkZXJzLiAgKi8KKyNkZWZpbmUgZ3J1
Yl9zd2FwX2J5dGVzMTYoeCkJXAorKHsgXAorICAgZ3J1Yl91aW50MTZfdCBfeCA9ICh4KTsgXAor
ICAgKGdydWJfdWludDE2X3QpICgoX3ggPDwgOCkgfCAoX3ggPj4gOCkpOyBcCit9KQorCisjaWYg
ZGVmaW5lZChfX0dOVUNfXykgJiYgKF9fR05VQ19fID4gMykgJiYgKF9fR05VQ19fID4gNCB8fCBf
X0dOVUNfTUlOT1JfXyA+PSAzKSAmJiBkZWZpbmVkKEdSVUJfVEFSR0VUX0kzODYpCitzdGF0aWMg
aW5saW5lIGdydWJfdWludDMyX3QgZ3J1Yl9zd2FwX2J5dGVzMzIoZ3J1Yl91aW50MzJfdCB4KQor
eworCXJldHVybiBfX2J1aWx0aW5fYnN3YXAzMih4KTsKK30KKworc3RhdGljIGlubGluZSBncnVi
X3VpbnQ2NF90IGdydWJfc3dhcF9ieXRlczY0KGdydWJfdWludDY0X3QgeCkKK3sKKwlyZXR1cm4g
X19idWlsdGluX2Jzd2FwNjQoeCk7Cit9CisjZWxzZQkJCQkJLyogbm90IGdjYyA0LjMgb3IgbmV3
ZXIgKi8KKyNkZWZpbmUgZ3J1Yl9zd2FwX2J5dGVzMzIoeCkJXAorKHsgXAorICAgZ3J1Yl91aW50
MzJfdCBfeCA9ICh4KTsgXAorICAgKGdydWJfdWludDMyX3QpICgoX3ggPDwgMjQpIFwKKyAgICAg
ICAgICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDMyX3QpIDB4RkYwMFVMKSA8PCA4KSBc
CisgICAgICAgICAgICAgICAgICAgIHwgKChfeCAmIChncnViX3VpbnQzMl90KSAweEZGMDAwMFVM
KSA+PiA4KSBcCisgICAgICAgICAgICAgICAgICAgIHwgKF94ID4+IDI0KSk7IFwKK30pCisKKyNk
ZWZpbmUgZ3J1Yl9zd2FwX2J5dGVzNjQoeCkJXAorKHsgXAorICAgZ3J1Yl91aW50NjRfdCBfeCA9
ICh4KTsgXAorICAgKGdydWJfdWludDY0X3QpICgoX3ggPDwgNTYpIFwKKyAgICAgICAgICAgICAg
ICAgICAgfCAoKF94ICYgKGdydWJfdWludDY0X3QpIDB4RkYwMFVMTCkgPDwgNDApIFwKKyAgICAg
ICAgICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDY0X3QpIDB4RkYwMDAwVUxMKSA8PCAy
NCkgXAorICAgICAgICAgICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50NjRfdCkgMHhGRjAw
MDAwMFVMTCkgPDwgOCkgXAorICAgICAgICAgICAgICAgICAgICB8ICgoX3ggJiAoZ3J1Yl91aW50
NjRfdCkgMHhGRjAwMDAwMDAwVUxMKSA+PiA4KSBcCisgICAgICAgICAgICAgICAgICAgIHwgKChf
eCAmIChncnViX3VpbnQ2NF90KSAweEZGMDAwMDAwMDAwMFVMTCkgPj4gMjQpIFwKKyAgICAgICAg
ICAgICAgICAgICAgfCAoKF94ICYgKGdydWJfdWludDY0X3QpIDB4RkYwMDAwMDAwMDAwMDBVTEwp
ID4+IDQwKSBcCisgICAgICAgICAgICAgICAgICAgIHwgKF94ID4+IDU2KSk7IFwKK30pCisjZW5k
aWYJCQkJCS8qIG5vdCBnY2MgNC4zIG9yIG5ld2VyICovCisKKyNpZmRlZiBHUlVCX0NQVV9XT1JE
U19CSUdFTkRJQU4KKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlMTYoeCkJZ3J1Yl9zd2FwX2J5dGVz
MTYoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlMzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkK
KyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlNjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZGVm
aW5lIGdydWJfbGVfdG9fY3B1MTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgZGVmaW5lIGdy
dWJfbGVfdG9fY3B1MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgZGVmaW5lIGdydWJfbGVf
dG9fY3B1NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2Jl
MTYoeCkJKChncnViX3VpbnQxNl90KSAoeCkpCisjIGRlZmluZSBncnViX2NwdV90b19iZTMyKHgp
CSgoZ3J1Yl91aW50MzJfdCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fYmU2NCh4KQkoKGdy
dWJfdWludDY0X3QpICh4KSkKKyMgZGVmaW5lIGdydWJfYmVfdG9fY3B1MTYoeCkJKChncnViX3Vp
bnQxNl90KSAoeCkpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTMyKHgpCSgoZ3J1Yl91aW50MzJf
dCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9iZV90b19jcHU2NCh4KQkoKGdydWJfdWludDY0X3QpICh4
KSkKKyMgaWZkZWYgR1JVQl9UQVJHRVRfV09SRFNfQklHRU5ESUFOCisjICBkZWZpbmUgZ3J1Yl90
YXJnZXRfdG9faG9zdDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJf
dGFyZ2V0X3RvX2hvc3QzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBncnVi
X3RhcmdldF90b19ob3N0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjICBkZWZpbmUgZ3J1
Yl9ob3N0X3RvX3RhcmdldDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBn
cnViX2hvc3RfdG9fdGFyZ2V0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjIGVsc2UgLyog
ISBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4gKi8KKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0MTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9f
dGFyZ2V0MTYoeCkJZ3J1Yl9zd2FwX2J5dGVzMTYoeCkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9f
dGFyZ2V0MzIoeCkJZ3J1Yl9zd2FwX2J5dGVzMzIoeCkKKyMgIGRlZmluZSBncnViX2hvc3RfdG9f
dGFyZ2V0NjQoeCkJZ3J1Yl9zd2FwX2J5dGVzNjQoeCkKKyMgZW5kaWYKKyNlbHNlIC8qICEgV09S
RFNfQklHRU5ESUFOICovCisjIGRlZmluZSBncnViX2NwdV90b19sZTE2KHgpCSgoZ3J1Yl91aW50
MTZfdCkgKHgpKQorIyBkZWZpbmUgZ3J1Yl9jcHVfdG9fbGUzMih4KQkoKGdydWJfdWludDMyX3Qp
ICh4KSkKKyMgZGVmaW5lIGdydWJfY3B1X3RvX2xlNjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkp
CisjIGRlZmluZSBncnViX2xlX3RvX2NwdTE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyBk
ZWZpbmUgZ3J1Yl9sZV90b19jcHUzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgZGVmaW5l
IGdydWJfbGVfdG9fY3B1NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjIGRlZmluZSBncnVi
X2NwdV90b19iZTE2KHgpCWdydWJfc3dhcF9ieXRlczE2KHgpCisjIGRlZmluZSBncnViX2NwdV90
b19iZTMyKHgpCWdydWJfc3dhcF9ieXRlczMyKHgpCisjIGRlZmluZSBncnViX2NwdV90b19iZTY0
KHgpCWdydWJfc3dhcF9ieXRlczY0KHgpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTE2KHgpCWdy
dWJfc3dhcF9ieXRlczE2KHgpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTMyKHgpCWdydWJfc3dh
cF9ieXRlczMyKHgpCisjIGRlZmluZSBncnViX2JlX3RvX2NwdTY0KHgpCWdydWJfc3dhcF9ieXRl
czY0KHgpCisjIGlmZGVmIEdSVUJfVEFSR0VUX1dPUkRTX0JJR0VORElBTgorIyAgZGVmaW5lIGdy
dWJfdGFyZ2V0X3RvX2hvc3QxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdy
dWJfdGFyZ2V0X3RvX2hvc3QzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdy
dWJfdGFyZ2V0X3RvX2hvc3Q2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQxNih4KQlncnViX3N3YXBfYnl0ZXMxNih4KQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQzMih4KQlncnViX3N3YXBfYnl0ZXMzMih4KQorIyAgZGVmaW5lIGdy
dWJfaG9zdF90b190YXJnZXQ2NCh4KQlncnViX3N3YXBfYnl0ZXM2NCh4KQorIyBlbHNlIC8qICEg
R1JVQl9UQVJHRVRfV09SRFNfQklHRU5ESUFOICovCisjICBkZWZpbmUgZ3J1Yl90YXJnZXRfdG9f
aG9zdDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfdGFyZ2V0X3Rv
X2hvc3QzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBncnViX3RhcmdldF90
b19ob3N0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjICBkZWZpbmUgZ3J1Yl9ob3N0X3Rv
X3RhcmdldDE2KHgpCSgoZ3J1Yl91aW50MTZfdCkgKHgpKQorIyAgZGVmaW5lIGdydWJfaG9zdF90
b190YXJnZXQzMih4KQkoKGdydWJfdWludDMyX3QpICh4KSkKKyMgIGRlZmluZSBncnViX2hvc3Rf
dG9fdGFyZ2V0NjQoeCkJKChncnViX3VpbnQ2NF90KSAoeCkpCisjIGVuZGlmCisjZW5kaWYgLyog
ISBXT1JEU19CSUdFTkRJQU4gKi8KKworI2lmIEdSVUJfVEFSR0VUX1NJWkVPRl9WT0lEX1AgPT0g
OAorIyAgZGVmaW5lIGdydWJfaG9zdF90b190YXJnZXRfYWRkcih4KSBncnViX2hvc3RfdG9fdGFy
Z2V0NjQoeCkKKyNlbHNlCisjICBkZWZpbmUgZ3J1Yl9ob3N0X3RvX3RhcmdldF9hZGRyKHgpIGdy
dWJfaG9zdF90b190YXJnZXQzMih4KQorI2VuZGlmCisKKyNlbmRpZiAvKiAhIEdSVUJfVFlQRVNf
SEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvaTM4Ni90eXBlcy5oIHhlbi00LjEuMi1iL3Rvb2xzL2lv
ZW11LXFlbXUteGVuL2dydWItZmF0L2kzODYvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvaTM4Ni90eXBlcy5oCTE5NzAtMDEtMDEgMDc6MDA6MDAu
MDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnVi
LWZhdC9pMzg2L3R5cGVzLmgJMjAxMi0xMi0yOCAxNjowMjo0MS4wMTA5Mzc2MTkgKzA4MDAKQEAg
LTAsMCArMSwzNSBAQAorLyoKKyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290bG9hZGVy
CisgKiAgQ29weXJpZ2h0IChDKSAyMDAyLDIwMDYsMjAwNyAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0
aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29m
dHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAq
ICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRp
c3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJ
VEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK
KyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0Uu
ICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWls
cy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdl
bmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8
aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX1RZUEVT
X0NQVV9IRUFERVIKKyNkZWZpbmUgR1JVQl9UWVBFU19DUFVfSEVBREVSCTEKKworLyogVGhlIHNp
emUgb2Ygdm9pZCAqLiAgKi8KKyNkZWZpbmUgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAk0CisK
Ky8qIFRoZSBzaXplIG9mIGxvbmcuICAqLworI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpFT0ZfTE9O
RwkJNAorCisvKiBpMzg2IGlzIGxpdHRsZS1lbmRpYW4uICAqLworI3VuZGVmIEdSVUJfVEFSR0VU
X1dPUkRTX0JJR0VORElBTgorCisjZGVmaW5lIEdSVUJfVEFSR0VUX0kzODYJCTEKKworI2RlZmlu
ZSBHUlVCX1RBUkdFVF9NSU5fQUxJR04JCTEKKworI2VuZGlmIC8qICEgR1JVQl9UWVBFU19DUFVf
SEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQvbWlzYy5oIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFl
bXUteGVuL2dydWItZmF0L21pc2MuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vZ3J1Yi1mYXQvbWlzYy5oCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisr
KyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC9taXNjLmgJMjAxMi0x
Mi0yOCAxNjowMjo0MS4wMTA5Mzc2MTkgKzA4MDAKQEAgLTAsMCArMSwxNyBAQAoraW50CitncnVi
X3N0cm5jbXAgKGNvbnN0IGNoYXIgKnMxLCBjb25zdCBjaGFyICpzMiwgZ3J1Yl9zaXplX3QgbikK
K3sKKyAgaWYgKG4gPT0gMCkKKyAgICByZXR1cm4gMDsKKworICB3aGlsZSAoKnMxICYmICpzMiAm
JiAtLW4pCisgICAgeworICAgICAgaWYgKCpzMSAhPSAqczIpCisgICAgICAgIGJyZWFrOworCisg
ICAgICBzMSsrOworICAgICAgczIrKzsKKyAgICB9CisKKyAgcmV0dXJuIChpbnQpICpzMSAtIChp
bnQpICpzMjsKK30KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC94ODZfNjQvdHlwZXMuaCB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9ncnViLWZhdC94ODZfNjQvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEv
dG9vbHMvaW9lbXUtcWVtdS14ZW4vZ3J1Yi1mYXQveDg2XzY0L3R5cGVzLmgJMTk3MC0wMS0wMSAw
NzowMDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUt
eGVuL2dydWItZmF0L3g4Nl82NC90eXBlcy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDExNzY0MTU5
ICswODAwCkBAIC0wLDAgKzEsMzkgQEAKKy8qCisgKiAgR1JVQiAgLS0gIEdSYW5kIFVuaWZpZWQg
Qm9vdGxvYWRlcgorICogIENvcHlyaWdodCAoQykgMjAwOCAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0
aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29m
dHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAq
ICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRp
c3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJ
VEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YK
KyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0Uu
ICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWls
cy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdl
bmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8
aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX1RZUEVT
X0NQVV9IRUFERVIKKyNkZWZpbmUgR1JVQl9UWVBFU19DUFVfSEVBREVSCTEKKworLyogVGhlIHNp
emUgb2Ygdm9pZCAqLiAgKi8KKyNkZWZpbmUgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAk4CisK
Ky8qIFRoZSBzaXplIG9mIGxvbmcuICAqLworI2lmZGVmIF9fTUlOR1czMl9fCisjZGVmaW5lIEdS
VUJfVEFSR0VUX1NJWkVPRl9MT05HCQk0CisjZWxzZQorI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpF
T0ZfTE9ORwkJOAorI2VuZGlmCisKKy8qIHg4Nl82NCBpcyBsaXR0bGUtZW5kaWFuLiAgKi8KKyN1
bmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4KKworI2RlZmluZSBHUlVCX1RBUkdFVF9Y
ODZfNjQJCTEKKworI2RlZmluZSBHUlVCX1RBUkdFVF9NSU5fQUxJR04JCTEKKworI2VuZGlmIC8q
ICEgR1JVQl9UWVBFU19DUFVfSEVBREVSICovCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTgg
eGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vaTM4Ni90eXBlcy5oIHhlbi00LjEuMi1i
L3Rvb2xzL2lvZW11LXFlbXUteGVuL2kzODYvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMv
aW9lbXUtcWVtdS14ZW4vaTM4Ni90eXBlcy5oCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAw
ICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9pMzg2L3R5cGVzLmgJ
MjAxMi0xMi0yOCAxNjowMjo0MS4wMTc4MDIzNzEgKzA4MDAKQEAgLTAsMCArMSwzNSBAQAorLyoK
KyAqICBHUlVCICAtLSAgR1JhbmQgVW5pZmllZCBCb290bG9hZGVyCisgKiAgQ29weXJpZ2h0IChD
KSAyMDAyLDIwMDYsMjAwNyAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgorICog
IEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1v
ZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExp
Y2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwg
ZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRpb24p
IGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBo
b3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZ
OyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFCSUxJ
VFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91IHNo
b3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vu
c2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3Jn
L2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX1RZUEVTX0NQVV9IRUFERVIKKyNkZWZp
bmUgR1JVQl9UWVBFU19DUFVfSEVBREVSCTEKKworLyogVGhlIHNpemUgb2Ygdm9pZCAqLiAgKi8K
KyNkZWZpbmUgR1JVQl9UQVJHRVRfU0laRU9GX1ZPSURfUAk0CisKKy8qIFRoZSBzaXplIG9mIGxv
bmcuICAqLworI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpFT0ZfTE9ORwkJNAorCisvKiBpMzg2IGlz
IGxpdHRsZS1lbmRpYW4uICAqLworI3VuZGVmIEdSVUJfVEFSR0VUX1dPUkRTX0JJR0VORElBTgor
CisjZGVmaW5lIEdSVUJfVEFSR0VUX0kzODYJCTEKKworI2RlZmluZSBHUlVCX1RBUkdFVF9NSU5f
QUxJR04JCTEKKworI2VuZGlmIC8qICEgR1JVQl9UWVBFU19DUFVfSEVBREVSICovCmRpZmYgLS1l
eGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vTWFr
ZWZpbGUgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vTWFrZWZpbGUKLS0tIHhlbi00
LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL01ha2VmaWxlCTIwMTEtMDItMTIgMDE6NTQ6NTEu
MDAwMDAwMDAwICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhlbi9NYWtl
ZmlsZQkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxMTc2NDE1OSArMDgwMApAQCAtMTg4LDE3ICsxODgs
MTggQEAgbGlicWVtdV9jb21tb24uYTogJChPQkpTKQogIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMKICMgVVNFUl9P
QkpTIGlzIGNvZGUgdXNlZCBieSBxZW11IHVzZXJzcGFjZSBlbXVsYXRpb24KIFVTRVJfT0JKUz1j
dXRpbHMubyAgY2FjaGUtdXRpbHMubwogCiBsaWJxZW11X3VzZXIuYTogJChVU0VSX09CSlMpCiAK
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMKIAotcWVtdS1pbWckKEVYRVNVRik6IHFlbXUtaW1nLm8gcWVtdS10b29s
Lm8gb3NkZXAubyAkKEJMT0NLX09CSlMpCisKK3FlbXUtaW1nJChFWEVTVUYpOmZzLXRpbWUubyBn
cnViX2Vyci5vIHBhcnRpdGlvbi5vIGZzaGVscC5vIG50ZnMubyBmYXQubyBtaXNjLm8gZGVidWcu
byBxZW11LWltZy5vIHFlbXUtdG9vbC5vIG9zZGVwLm8gJChCTE9DS19PQkpTKQogCiBxZW11LW5i
ZCQoRVhFU1VGKTogIHFlbXUtbmJkLm8gcWVtdS10b29sLm8gb3NkZXAubyAkKEJMT0NLX09CSlMp
CiAKIHFlbXUtaW1nJChFWEVTVUYpIHFlbXUtbmJkJChFWEVTVUYpOiBMSUJTICs9IC1segogCiAK
IGNsZWFuOgogIyBhdm9pZCBvbGQgYnVpbGQgcHJvYmxlbXMgYnkgcmVtb3ZpbmcgcG90ZW50aWFs
bHkgaW5jb3JyZWN0IG9sZCBmaWxlcwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00
LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL01ha2VmaWxlLm9yaWcgeGVuLTQuMS4yLWIvdG9v
bHMvaW9lbXUtcWVtdS14ZW4vTWFrZWZpbGUub3JpZwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9l
bXUtcWVtdS14ZW4vTWFrZWZpbGUub3JpZwkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCAr
MDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vTWFrZWZpbGUub3JpZwky
MDEyLTEyLTI4IDE1OjU5OjM1LjM1NDY4MTYzNCArMDgwMApAQCAtMCwwICsxLDM3MiBAQAorIyBN
YWtlZmlsZSBmb3IgUUVNVS4KKworaW5jbHVkZSBjb25maWctaG9zdC5tYWsKK2luY2x1ZGUgJChT
UkNfUEFUSCkvcnVsZXMubWFrCisKKy5QSE9OWTogYWxsIGNsZWFuIGNzY29wZSBkaXN0Y2xlYW4g
ZHZpIGh0bWwgaW5mbyBpbnN0YWxsIGluc3RhbGwtZG9jIFwKKwlyZWN1cnNlLWFsbCBzcGVlZCB0
YXIgdGFyYmluIHRlc3QKKworVlBBVEg9JChTUkNfUEFUSCk6JChTUkNfUEFUSCkvaHcKKworCitD
RkxBR1MgKz0gJChPU19DRkxBR1MpICQoQVJDSF9DRkxBR1MpCitMREZMQUdTICs9ICQoT1NfTERG
TEFHUykgJChBUkNIX0xERkxBR1MpCisKK0NQUEZMQUdTICs9IC1JLiAtSSQoU1JDX1BBVEgpIC1N
TUQgLU1QIC1NVCAkQAorQ1BQRkxBR1MgKz0gLURfR05VX1NPVVJDRSAtRF9GSUxFX09GRlNFVF9C
SVRTPTY0IC1EX0xBUkdFRklMRV9TT1VSQ0UKK0xJQlM9CitpZmRlZiBDT05GSUdfU1RBVElDCitM
REZMQUdTICs9IC1zdGF0aWMKK2VuZGlmCitpZmRlZiBCVUlMRF9ET0NTCitET0NTPXFlbXUtZG9j
Lmh0bWwgcWVtdS10ZWNoLmh0bWwgcWVtdS4xIHFlbXUtaW1nLjEgcWVtdS1uYmQuOAorZWxzZQor
RE9DUz0KK2VuZGlmCisKK0xJQlMrPSQoQUlPTElCUykKKworaWZkZWYgQ09ORklHX1NPTEFSSVMK
K0xJQlMrPS1sc29ja2V0IC1sbnNsIC1scmVzb2x2CitlbmRpZgorCitpZmRlZiBDT05GSUdfV0lO
MzIKK0xJQlMrPS1sd2lubW0gLWx3czJfMzIgLWxpcGhscGFwaQorZW5kaWYKKworYWxsOiAkKFRP
T0xTKSAkKERPQ1MpIHJlY3Vyc2UtYWxsCisKK1NVQkRJUl9SVUxFUz0kKHBhdHN1YnN0ICUsc3Vi
ZGlyLSUsICQoVEFSR0VUX0RJUlMpKQorCitzdWJkaXItJToKKwkkKGNhbGwgcXVpZXQtY29tbWFu
ZCwkKE1BS0UpIC1DICQqIFY9IiQoVikiIFRBUkdFVF9ESVI9IiQqLyIgYWxsLCkKKworJChmaWx0
ZXIgJS1zb2Z0bW11LCQoU1VCRElSX1JVTEVTKSk6IGxpYnFlbXVfY29tbW9uLmEKKyQoZmlsdGVy
ICUtdXNlciwkKFNVQkRJUl9SVUxFUykpOiBsaWJxZW11X3VzZXIuYQorCityZWN1cnNlLWFsbDog
JChTVUJESVJfUlVMRVMpCisKK0NQUEZMQUdTICs9IC1JJChYRU5fUk9PVCkvdG9vbHMvbGlieGMK
K0NQUEZMQUdTICs9IC1JJChYRU5fUk9PVCkvdG9vbHMvYmxrdGFwL2xpYgorQ1BQRkxBR1MgKz0g
LUkkKFhFTl9ST09UKS90b29scy94ZW5zdG9yZQorQ1BQRkxBR1MgKz0gLUkkKFhFTl9ST09UKS90
b29scy9pbmNsdWRlCisKK3RhcGRpc2staW9lbXU6IHRhcGRpc2staW9lbXUuYyBjdXRpbHMuYyBi
bG9jay5jIGJsb2NrLXJhdy5jIGJsb2NrLWNvdy5jIGJsb2NrLXFjb3cuYyBhZXMuYyBibG9jay12
bWRrLmMgYmxvY2stY2xvb3AuYyBibG9jay1kbWcuYyBibG9jay1ib2Nocy5jIGJsb2NrLXZwYy5j
IGJsb2NrLXZ2ZmF0LmMgYmxvY2stcWNvdzIuYyBody94ZW5fYmxrdGFwLmMgb3NkZXAuYworCSQo
Q0MpIC1EUUVNVV9UT09MICQoQ0ZMQUdTKSAkKENQUEZMQUdTKSAkKEJBU0VfQ0ZMQUdTKSAkKExE
RkxBR1MpICQoQkFTRV9MREZMQUdTKSAtbyAkQCAkXiAtbHogJChMSUJTKQorCisjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIworIyBCTE9DS19PQkpTIGlzIGNvZGUgdXNlZCBieSBib3RoIHFlbXUgc3lzdGVtIGVtdWxh
dGlvbiBhbmQgcWVtdS1pbWcKKworQkxPQ0tfT0JKUz1jdXRpbHMubyBvc2RlcC5vIHFlbXUtbWFs
bG9jLm8KK0JMT0NLX09CSlMrPWJsb2NrLWNvdy5vIGJsb2NrLXFjb3cubyBhZXMubyBibG9jay12
bWRrLm8gYmxvY2stY2xvb3AubworQkxPQ0tfT0JKUys9YmxvY2stZG1nLm8gYmxvY2stYm9jaHMu
byBibG9jay12cGMubyBibG9jay12dmZhdC5vCitCTE9DS19PQkpTKz1ibG9jay1xY293Mi5vIGJs
b2NrLXBhcmFsbGVscy5vIGJsb2NrLW5iZC5vCitCTE9DS19PQkpTKz1uYmQubyBibG9jay5vIGFp
by5vCisKK2lmZGVmIENPTkZJR19XSU4zMgorQkxPQ0tfT0JKUyArPSBibG9jay1yYXctd2luMzIu
bworZWxzZQoraWZkZWYgQ09ORklHX0FJTworQkxPQ0tfT0JKUyArPSBwb3NpeC1haW8tY29tcGF0
Lm8KK2VuZGlmCitCTE9DS19PQkpTICs9IGJsb2NrLXJhdy1wb3NpeC5vCitlbmRpZgorCisjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjCisjIGxpYnFlbXVfY29tbW9uLmE6IFRhcmdldCBpbmRlcGVuZGVudCBwYXJ0IG9m
IHN5c3RlbSBlbXVsYXRpb24uIFRoZQorIyBsb25nIHRlcm0gcGF0aCBpcyB0byBzdXBwcmVzcyAq
YWxsKiB0YXJnZXQgc3BlY2lmaWMgY29kZSBpbiBjYXNlIG9mCisjIHN5c3RlbSBlbXVsYXRpb24s
IGkuZS4gYSBzaW5nbGUgUUVNVSBleGVjdXRhYmxlIHNob3VsZCBzdXBwb3J0IGFsbAorIyBDUFVz
IGFuZCBtYWNoaW5lcy4KKworT0JKUz0kKEJMT0NLX09CSlMpCitPQkpTKz1yZWFkbGluZS5vIGNv
bnNvbGUubworCitPQkpTKz1pcnEubworT0JKUys9aTJjLm8gc21idXMubyBzbWJ1c19lZXByb20u
byBtYXg3MzEwLm8gbWF4MTExeC5vIHdtODc1MC5vCitPQkpTKz1zc2QwMzAzLm8gc3NkMDMyMy5v
IGFkczc4NDYubyBzdGVsbGFyaXNfaW5wdXQubyB0d2w5MjIzMC5vCitPQkpTKz10bXAxMDUubyBs
bTgzMngubworT0JKUys9c2NzaS1kaXNrLm8gY2Ryb20ubworT0JKUys9c2NzaS1nZW5lcmljLm8K
K09CSlMrPXVzYi5vIHVzYi1odWIubyB1c2ItJChIT1NUX1VTQikubyB1c2ItaGlkLm8gdXNiLW1z
ZC5vIHVzYi13YWNvbS5vCitPQkpTKz11c2Itc2VyaWFsLm8gdXNiLW5ldC5vCitPQkpTKz1zZC5v
IHNzaS1zZC5vCitPQkpTKz1idC5vIGJ0LWhvc3QubyBidC12aGNpLm8gYnQtbDJjYXAubyBidC1z
ZHAubyBidC1oY2kubyBidC1oaWQubyB1c2ItYnQubworT0JKUys9YnVmZmVyZWRfZmlsZS5vIG1p
Z3JhdGlvbi5vIG1pZ3JhdGlvbi10Y3AubyBuZXQubyBxZW11LXNvY2tldHMubworT0JKUys9cWVt
dS1jaGFyLm8gYWlvLm8gbmV0LWNoZWNrc3VtLm8gc2F2ZXZtLm8gY2FjaGUtdXRpbHMubworCitp
ZmRlZiBDT05GSUdfQlJMQVBJCitPQkpTKz0gYmF1bS5vCitMSUJTKz0tbGJybGFwaQorZW5kaWYK
KworaWZkZWYgQ09ORklHX1dJTjMyCitPQkpTKz10YXAtd2luMzIubworZWxzZQorT0JKUys9bWln
cmF0aW9uLWV4ZWMubworZW5kaWYKKworQVVESU9fT0JKUyA9IGF1ZGlvLm8gbm9hdWRpby5vIHdh
dmF1ZGlvLm8gbWl4ZW5nLm8KK2lmZGVmIENPTkZJR19TREwKK0FVRElPX09CSlMgKz0gc2RsYXVk
aW8ubworZW5kaWYKK2lmZGVmIENPTkZJR19PU1MKK0FVRElPX09CSlMgKz0gb3NzYXVkaW8ubwor
ZW5kaWYKK2lmZGVmIENPTkZJR19DT1JFQVVESU8KK0FVRElPX09CSlMgKz0gY29yZWF1ZGlvLm8K
K0FVRElPX1BUID0geWVzCitlbmRpZgoraWZkZWYgQ09ORklHX0FMU0EKK0FVRElPX09CSlMgKz0g
YWxzYWF1ZGlvLm8KK2VuZGlmCitpZmRlZiBDT05GSUdfRFNPVU5ECitBVURJT19PQkpTICs9IGRz
b3VuZGF1ZGlvLm8KK2VuZGlmCitpZmRlZiBDT05GSUdfRk1PRAorQVVESU9fT0JKUyArPSBmbW9k
YXVkaW8ubworYXVkaW8vYXVkaW8ubyBhdWRpby9mbW9kYXVkaW8ubzogQ1BQRkxBR1MgOj0gLUkk
KENPTkZJR19GTU9EX0lOQykgJChDUFBGTEFHUykKK2VuZGlmCitpZmRlZiBDT05GSUdfRVNECitB
VURJT19QVCA9IHllcworQVVESU9fUFRfSU5UID0geWVzCitBVURJT19PQkpTICs9IGVzZGF1ZGlv
Lm8KK2VuZGlmCitpZmRlZiBDT05GSUdfUEEKK0FVRElPX1BUID0geWVzCitBVURJT19QVF9JTlQg
PSB5ZXMKK0FVRElPX09CSlMgKz0gcGFhdWRpby5vCitlbmRpZgoraWZkZWYgQVVESU9fUFQKK0xE
RkxBR1MgKz0gLXB0aHJlYWQKK2VuZGlmCitpZmRlZiBBVURJT19QVF9JTlQKK0FVRElPX09CSlMg
Kz0gYXVkaW9fcHRfaW50Lm8KK2VuZGlmCitBVURJT19PQkpTKz0gd2F2Y2FwdHVyZS5vCitpZmRl
ZiBDT05GSUdfQVVESU8KK09CSlMrPSQoYWRkcHJlZml4IGF1ZGlvLywgJChBVURJT19PQkpTKSkK
K2VuZGlmCisKK2lmZGVmIENPTkZJR19TREwKK09CSlMrPXNkbC5vIHhfa2V5bWFwLm8KK2VuZGlm
CitpZmRlZiBDT05GSUdfQ1VSU0VTCitPQkpTKz1jdXJzZXMubworZW5kaWYKK09CSlMrPXZuYy5v
IGQzZGVzLm8KKworaWZkZWYgQ09ORklHX0NPQ09BCitPQkpTKz1jb2NvYS5vCitlbmRpZgorCitp
ZmRlZiBDT05GSUdfU0xJUlAKK0NQUEZMQUdTKz0tSSQoU1JDX1BBVEgpL3NsaXJwCitTTElSUF9P
QkpTPWNrc3VtLm8gaWYubyBpcF9pY21wLm8gaXBfaW5wdXQubyBpcF9vdXRwdXQubyBcCitzbGly
cC5vIG1idWYubyBtaXNjLm8gc2J1Zi5vIHNvY2tldC5vIHRjcF9pbnB1dC5vIHRjcF9vdXRwdXQu
byBcCit0Y3Bfc3Vici5vIHRjcF90aW1lci5vIHVkcC5vIGJvb3RwLm8gZGVidWcubyB0ZnRwLm8K
K09CSlMrPSQoYWRkcHJlZml4IHNsaXJwLywgJChTTElSUF9PQkpTKSkKK2VuZGlmCisKK0xJQlMr
PSQoVkRFX0xJQlMpCisKK2NvY29hLm86IGNvY29hLm0KKworc2RsLm86IHNkbC5jIGtleW1hcHMu
YyBzZGxfa2V5c3ltLmgKKworc2RsLm8gYXVkaW8vc2RsYXVkaW8ubzogQ0ZMQUdTICs9ICQoU0RM
X0NGTEFHUykKKwordm5jLm86IHZuYy5jIGtleW1hcHMuYyBzZGxfa2V5c3ltLmggdm5jaGV4dGls
ZS5oIGQzZGVzLmMgZDNkZXMuaAorCit2bmMubzogQ0ZMQUdTICs9ICQoQ09ORklHX1ZOQ19UTFNf
Q0ZMQUdTKQorCitjdXJzZXMubzogY3Vyc2VzLmMga2V5bWFwcy5jIGN1cnNlc19rZXlzLmgKKwor
YnQtaG9zdC5vOiBDRkxBR1MgKz0gJChDT05GSUdfQkxVRVpfQ0ZMQUdTKQorCitsaWJxZW11X2Nv
bW1vbi5hOiAkKE9CSlMpCisKKyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjCisjIFVTRVJfT0JKUyBpcyBjb2RlIHVz
ZWQgYnkgcWVtdSB1c2Vyc3BhY2UgZW11bGF0aW9uCitVU0VSX09CSlM9Y3V0aWxzLm8gIGNhY2hl
LXV0aWxzLm8KKworbGlicWVtdV91c2VyLmE6ICQoVVNFUl9PQkpTKQorCisjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
CisKK3FlbXUtaW1nJChFWEVTVUYpOiBxZW11LWltZy5vIHFlbXUtdG9vbC5vIG9zZGVwLm8gJChC
TE9DS19PQkpTKQorCitxZW11LW5iZCQoRVhFU1VGKTogIHFlbXUtbmJkLm8gcWVtdS10b29sLm8g
b3NkZXAubyAkKEJMT0NLX09CSlMpCisKK3FlbXUtaW1nJChFWEVTVUYpIHFlbXUtbmJkJChFWEVT
VUYpOiBMSUJTICs9IC1segorCisKK2NsZWFuOgorIyBhdm9pZCBvbGQgYnVpbGQgcHJvYmxlbXMg
YnkgcmVtb3ZpbmcgcG90ZW50aWFsbHkgaW5jb3JyZWN0IG9sZCBmaWxlcworCXJtIC1mIGNvbmZp
Zy5tYWsgY29uZmlnLmggb3AtaTM4Ni5oIG9wYy1pMzg2LmggZ2VuLW9wLWkzODYuaCBvcC1hcm0u
aCBvcGMtYXJtLmggZ2VuLW9wLWFybS5oCisJcm0gLWYgKi5vIC4qLmQgKi5hICQoVE9PTFMpIFRB
R1MgY3Njb3BlLiogKi5wb2QgKn4gKi8qfgorCXJtIC1mIHNsaXJwLyoubyBzbGlycC8uKi5kIGF1
ZGlvLyoubyBhdWRpby8uKi5kCisJJChNQUtFKSAtQyB0ZXN0cyBjbGVhbgorCWZvciBkIGluICQo
VEFSR0VUX0RJUlMpOyBkbyBcCisJJChNQUtFKSAtQyAkJGQgJEAgfHwgZXhpdCAxIDsgXAorICAg
ICAgICBkb25lCisKK2Rpc3RjbGVhbjogY2xlYW4KKwlybSAtZiBjb25maWctaG9zdC5tYWsgY29u
ZmlnLWhvc3QuaCAkKERPQ1MpCisJcm0gLWYgcWVtdS17ZG9jLHRlY2h9LntpbmZvLGF1eCxjcCxk
dmksZm4saW5mbyxreSxsb2cscGcsdG9jLHRwLHZyfQorCWZvciBkIGluICQoVEFSR0VUX0RJUlMp
OyBkbyBcCisJcm0gLXJmICQkZCB8fCBleGl0IDEgOyBcCisgICAgICAgIGRvbmUKKworS0VZTUFQ
Uz1kYSAgICAgZW4tZ2IgIGV0ICBmciAgICAgZnItY2ggIGlzICBsdCAgbW9kaWZpZXJzICBubyAg
cHQtYnIgIHN2IFwKK2FyICAgICAgZGUgICAgIGVuLXVzICBmaSAgZnItYmUgIGhyICAgICBpdCAg
bHYgIG5sICAgICAgICAgcGwgIHJ1ICAgICB0aCBcCitjb21tb24gIGRlLWNoICBlcyAgICAgZm8g
IGZyLWNhICBodSAgICAgamEgIG1rICBubC1iZSAgICAgIHB0ICBzbCAgICAgdHIKKworaWZkZWYg
SU5TVEFMTF9CTE9CUworQkxPQlM9Ymlvcy5iaW4gdmdhYmlvcy5iaW4gdmdhYmlvcy1jaXJydXMu
YmluIHBwY19yb20uYmluIFwKK3ZpZGVvLnggb3BlbmJpb3Mtc3BhcmMzMiBvcGVuYmlvcy1zcGFy
YzY0IG9wZW5iaW9zLXBwYyBcCitweGUtbmUya19wY2kuYmluIHB4ZS1ydGw4MTM5LmJpbiBweGUt
cGNuZXQuYmluIHB4ZS1lMTAwMC5iaW4gXAorYmFtYm9vLmR0YgorZWxzZQorQkxPQlM9CitlbmRp
ZgorCitpbnN0YWxsLWRvYzogJChET0NTKQorCW1rZGlyIC1wICIkKERFU1RESVIpJChkb2NkaXIp
IgorCSQoSU5TVEFMTCkgLW0gNjQ0IHFlbXUtZG9jLmh0bWwgIHFlbXUtdGVjaC5odG1sICIkKERF
U1RESVIpJChkb2NkaXIpIgoraWZuZGVmIENPTkZJR19XSU4zMgorCW1rZGlyIC1wICIkKERFU1RE
SVIpJChtYW5kaXIpL21hbjEiCisJJChJTlNUQUxMKSAtbSA2NDQgcWVtdS4xIHFlbXUtaW1nLjEg
IiQoREVTVERJUikkKG1hbmRpcikvbWFuMSIKKwlta2RpciAtcCAiJChERVNURElSKSQobWFuZGly
KS9tYW44IgorCSQoSU5TVEFMTCkgLW0gNjQ0IHFlbXUtbmJkLjggIiQoREVTVERJUikkKG1hbmRp
cikvbWFuOCIKK2VuZGlmCisKK2luc3RhbGw6IGFsbCAkKGlmICQoQlVJTERfRE9DUyksaW5zdGFs
bC1kb2MpCisJbWtkaXIgLXAgIiQoREVTVERJUikkKGJpbmRpcikiCitpZm5lcSAoJChUT09MUyks
KQorCSQoSU5TVEFMTCkgLW0gNzU1IC1zICQoVE9PTFMpICIkKERFU1RESVIpJChiaW5kaXIpIgor
ZW5kaWYKK2lmbmVxICgkKEJMT0JTKSwpCisJbWtkaXIgLXAgIiQoREVTVERJUikkKGRhdGFkaXIp
IgorCXNldCAtZTsgZm9yIHggaW4gJChCTE9CUyk7IGRvIFwKKwkJJChJTlNUQUxMKSAtbSA2NDQg
JChTUkNfUEFUSCkvcGMtYmlvcy8kJHggIiQoREVTVERJUikkKGRhdGFkaXIpIjsgXAorCWRvbmUK
K2VuZGlmCitpZm5kZWYgQ09ORklHX1dJTjMyCisJbWtkaXIgLXAgIiQoREVTVERJUikkKGRhdGFk
aXIpL2tleW1hcHMiCisJc2V0IC1lOyBmb3IgeCBpbiAkKEtFWU1BUFMpOyBkbyBcCisJCSQoSU5T
VEFMTCkgLW0gNjQ0ICQoU1JDX1BBVEgpL2tleW1hcHMvJCR4ICIkKERFU1RESVIpJChkYXRhZGly
KS9rZXltYXBzIjsgXAorCWRvbmUKK2VuZGlmCisJZm9yIGQgaW4gJChUQVJHRVRfRElSUyk7IGRv
IFwKKwkkKE1BS0UpIC1DICQkZCAkQCB8fCBleGl0IDEgOyBcCisgICAgICAgIGRvbmUKKworIyB2
YXJpb3VzIHRlc3QgdGFyZ2V0cwordGVzdCBzcGVlZDogYWxsCisJJChNQUtFKSAtQyB0ZXN0cyAk
QAorCitUQUdTOgorCWV0YWdzICouW2NoXSB0ZXN0cy8qLltjaF0KKworY3Njb3BlOgorCXJtIC1m
IC4vY3Njb3BlLioKKwlmaW5kIC4gLW5hbWUgIiouW2NoXSIgLXByaW50IHwgc2VkICdzLF5cLi8s
LCcgPiAuL2NzY29wZS5maWxlcworCWNzY29wZSAtYgorCisjIGRvY3VtZW50YXRpb24KKyUuaHRt
bDogJS50ZXhpCisJdGV4aTJodG1sIC1tb25vbGl0aGljIC1udW1iZXIgJDwKKworJS5pbmZvOiAl
LnRleGkKKwltYWtlaW5mbyAkPCAtbyAkQAorCislLmR2aTogJS50ZXhpCisJdGV4aTJkdmkgJDwK
KworcWVtdS4xOiBxZW11LWRvYy50ZXhpCisJJChTUkNfUEFUSCkvdGV4aTJwb2QucGwgJDwgcWVt
dS5wb2QKKwlwb2QybWFuIC0tc2VjdGlvbj0xIC0tY2VudGVyPSIgIiAtLXJlbGVhc2U9IiAiIHFl
bXUucG9kID4gJEAKKworcWVtdS1pbWcuMTogcWVtdS1pbWcudGV4aQorCSQoU1JDX1BBVEgpL3Rl
eGkycG9kLnBsICQ8IHFlbXUtaW1nLnBvZAorCXBvZDJtYW4gLS1zZWN0aW9uPTEgLS1jZW50ZXI9
IiAiIC0tcmVsZWFzZT0iICIgcWVtdS1pbWcucG9kID4gJEAKKworcWVtdS1uYmQuODogcWVtdS1u
YmQudGV4aQorCSQoU1JDX1BBVEgpL3RleGkycG9kLnBsICQ8IHFlbXUtbmJkLnBvZAorCXBvZDJt
YW4gLS1zZWN0aW9uPTggLS1jZW50ZXI9IiAiIC0tcmVsZWFzZT0iICIgcWVtdS1uYmQucG9kID4g
JEAKKworaW5mbzogcWVtdS1kb2MuaW5mbyBxZW11LXRlY2guaW5mbworCitkdmk6IHFlbXUtZG9j
LmR2aSBxZW11LXRlY2guZHZpCisKK2h0bWw6IHFlbXUtZG9jLmh0bWwgcWVtdS10ZWNoLmh0bWwK
KworcWVtdS1kb2MuZHZpIHFlbXUtZG9jLmh0bWwgcWVtdS1kb2MuaW5mbzogcWVtdS1pbWcudGV4
aSBxZW11LW5iZC50ZXhpCisKK1ZFUlNJT04gPz0gJChzaGVsbCBjYXQgVkVSU0lPTikKK0ZJTEUg
PSBxZW11LSQoVkVSU0lPTikKKworIyB0YXIgcmVsZWFzZSAodXNlICdtYWtlIC1rIHRhcicgb24g
YSBjaGVja291dGVkIHRyZWUpCit0YXI6CisJcm0gLXJmIC90bXAvJChGSUxFKQorCWNwIC1yIC4g
L3RtcC8kKEZJTEUpCisJY2QgL3RtcCAmJiB0YXIgemN2ZiB+LyQoRklMRSkudGFyLmd6ICQoRklM
RSkgLS1leGNsdWRlIENWUyAtLWV4Y2x1ZGUgLmdpdCAtLWV4Y2x1ZGUgLnN2bgorCXJtIC1yZiAv
dG1wLyQoRklMRSkKKworIyBnZW5lcmF0ZSBhIGJpbmFyeSBkaXN0cmlidXRpb24KK3RhcmJpbjoK
KwljZCAvICYmIHRhciB6Y3ZmIH4vcWVtdS0kKFZFUlNJT04pLSQoQVJDSCkudGFyLmd6IFwKKwkk
KGJpbmRpcikvcWVtdSBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLXg4Nl82NCBcCisJJChiaW5k
aXIpL3FlbXUtc3lzdGVtLWFybSBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLWNyaXMgXAorCSQo
YmluZGlyKS9xZW11LXN5c3RlbS1tNjhrIFwKKwkkKGJpbmRpcikvcWVtdS1zeXN0ZW0tbWlwcyBc
CisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLW1pcHNlbCBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVt
LW1pcHM2NCBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLW1pcHM2NGVsIFwKKwkkKGJpbmRpcikv
cWVtdS1zeXN0ZW0tcHBjIFwKKwkkKGJpbmRpcikvcWVtdS1zeXN0ZW0tcHBjZW1iIFwKKwkkKGJp
bmRpcikvcWVtdS1zeXN0ZW0tcHBjNjQgXAorCSQoYmluZGlyKS9xZW11LXN5c3RlbS1zaDQgXAor
CSQoYmluZGlyKS9xZW11LXN5c3RlbS1zaDRlYiBcCisJJChiaW5kaXIpL3FlbXUtc3lzdGVtLXNw
YXJjIFwKKwkkKGJpbmRpcikvcWVtdS1pMzg2IFwKKwkkKGJpbmRpcikvcWVtdS14ODZfNjQgXAor
CSQoYmluZGlyKS9xZW11LWFscGhhIFwKKwkkKGJpbmRpcikvcWVtdS1hcm0gXAorCSQoYmluZGly
KS9xZW11LWFybWViIFwKKwkkKGJpbmRpcikvcWVtdS1jcmlzIFwKKwkkKGJpbmRpcikvcWVtdS1t
NjhrIFwKKwkkKGJpbmRpcikvcWVtdS1taXBzIFwKKwkkKGJpbmRpcikvcWVtdS1taXBzZWwgXAor
CSQoYmluZGlyKS9xZW11LXBwYyBcCisJJChiaW5kaXIpL3FlbXUtcHBjNjQgXAorCSQoYmluZGly
KS9xZW11LXBwYzY0YWJpMzIgXAorCSQoYmluZGlyKS9xZW11LXNoNCBcCisJJChiaW5kaXIpL3Fl
bXUtc2g0ZWIgXAorCSQoYmluZGlyKS9xZW11LXNwYXJjIFwKKwkkKGJpbmRpcikvcWVtdS1zcGFy
YzY0IFwKKwkkKGJpbmRpcikvcWVtdS1zcGFyYzMycGx1cyBcCisJJChiaW5kaXIpL3FlbXUtaW1n
IFwKKwkkKGJpbmRpcikvcWVtdS1uYmQgXAorCSQoZGF0YWRpcikvYmlvcy5iaW4gXAorCSQoZGF0
YWRpcikvdmdhYmlvcy5iaW4gXAorCSQoZGF0YWRpcikvdmdhYmlvcy1jaXJydXMuYmluIFwKKwkk
KGRhdGFkaXIpL3BwY19yb20uYmluIFwKKwkkKGRhdGFkaXIpL3ZpZGVvLnggXAorCSQoZGF0YWRp
cikvb3BlbmJpb3Mtc3BhcmMzMiBcCisJJChkYXRhZGlyKS9vcGVuYmlvcy1zcGFyYzY0IFwKKwkk
KGRhdGFkaXIpL29wZW5iaW9zLXBwYyBcCisJJChkYXRhZGlyKS9weGUtbmUya19wY2kuYmluIFwK
KwkkKGRhdGFkaXIpL3B4ZS1ydGw4MTM5LmJpbiBcCisJJChkYXRhZGlyKS9weGUtcGNuZXQuYmlu
IFwKKwkkKGRhdGFkaXIpL3B4ZS1lMTAwMC5iaW4gXAorCSQoZG9jZGlyKS9xZW11LWRvYy5odG1s
IFwKKwkkKGRvY2RpcikvcWVtdS10ZWNoLmh0bWwgXAorCSQobWFuZGlyKS9tYW4xL3FlbXUuMSBc
CisJJChtYW5kaXIpL21hbjEvcWVtdS1pbWcuMSBcCisJJChtYW5kaXIpL21hbjgvcWVtdS1uYmQu
OAorCisjIEluY2x1ZGUgYXV0b21hdGljYWxseSBnZW5lcmF0ZWQgZGVwZW5kZW5jeSBmaWxlcwor
LWluY2x1ZGUgJCh3aWxkY2FyZCAuKi5kIGF1ZGlvLy4qLmQgc2xpcnAvLiouZCkKZGlmZiAtLWV4
Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9taXNj
LmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vbWlzYy5jCi0tLSB4ZW4tNC4xLjIt
YS90b29scy9pb2VtdS1xZW11LXhlbi9taXNjLmMJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAw
MDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL21pc2MuYwkyMDEy
LTEyLTI4IDE2OjAyOjQxLjAxMjkzNzg0NiArMDgwMApAQCAtMCwwICsxLDQzMiBAQAorI2luY2x1
ZGUgIm1pc2MuaCIKKyNpbmNsdWRlIDxzdGRsaWIuaD4KKyNpbmNsdWRlIDxzdGRpby5oPgorI2lu
Y2x1ZGUgPHN0cmluZy5oPgorI2luY2x1ZGUgImdydWJfZXJyLmgiCisKKworaW50CitncnViX2lz
c3BhY2UgKGludCBjKQoreworICByZXR1cm4gKGMgPT0gJ1xuJyB8fCBjID09ICdccicgfHwgYyA9
PSAnICcgfHwgYyA9PSAnXHQnKTsKK30KKworaW50CitncnViX3RvbG93ZXIgKGludCBjKQorewor
ICBpZiAoYyA+PSAnQScgJiYgYyA8PSAnWicpCisgICAgcmV0dXJuIGMgLSAnQScgKyAnYSc7CisK
KyAgcmV0dXJuIGM7Cit9CisKKworY2hhciAqCitncnViX3N0cmNociAoY29uc3QgY2hhciAqcywg
aW50IGMpCit7CisgIGRvCisgICAgeworICAgICAgaWYgKCpzID09IGMpCisJcmV0dXJuIChjaGFy
ICopIHM7CisgICAgfQorICB3aGlsZSAoKnMrKyk7CisKKyAgcmV0dXJuIDA7Cit9CisKKworZ3J1
Yl9zaXplX3QKK2dydWJfc3RybGVuIChjb25zdCBjaGFyICpzKQoreworICBjb25zdCBjaGFyICpw
ID0gczsKKworICB3aGlsZSAoKnApCisgICAgcCsrOworCisgIHJldHVybiBwIC0gczsKK30KKwor
CisKKworCitjaGFyICoKK2dydWJfc3RybmNweSAoY2hhciAqZGVzdCwgY29uc3QgY2hhciAqc3Jj
LCBpbnQgYykKK3sKKyAgY2hhciAqcCA9IGRlc3Q7CisKKyAgd2hpbGUgKCgqcCsrID0gKnNyYysr
KSAhPSAnXDAnICYmIC0tYykKKyAgICA7CisKKyAgcmV0dXJuIGRlc3Q7Cit9CisKK2NoYXIgKgor
Z3J1Yl9zdHJkdXAgKGNvbnN0IGNoYXIgKnMpCit7CisgIGdydWJfc2l6ZV90IGxlbjsKKyAgY2hh
ciAqcDsKKworICBsZW4gPSBncnViX3N0cmxlbiAocykgKyAxOworICBwID0gKGNoYXIgKikgbWFs
bG9jIChsZW4pOworICBpZiAoISBwKQorICAgIHJldHVybiAwOworCisgIHJldHVybiBtZW1jcHkg
KHAsIHMsIGxlbik7Cit9CisKKworCitjaGFyICoKK2dydWJfc3RybmR1cCAoY29uc3QgY2hhciAq
cywgZ3J1Yl9zaXplX3QgbikKK3sKKyAgZ3J1Yl9zaXplX3QgbGVuOworICBjaGFyICpwOworCisg
IGxlbiA9IGdydWJfc3RybGVuIChzKTsKKyAgaWYgKGxlbiA+IG4pCisgICAgbGVuID0gbjsKKyAg
cCA9IChjaGFyICopIG1hbGxvYyAobGVuICsgMSk7CisgIGlmICghIHApCisgICAgcmV0dXJuIDA7
CisKKyAgbWVtY3B5IChwLCBzLCBsZW4pOworICBwW2xlbl0gPSAnXDAnOworICByZXR1cm4gcDsK
K30KKworCisKKworaW50CitncnViX3N0cmNtcCAoY29uc3QgY2hhciAqczEsIGNvbnN0IGNoYXIg
KnMyKQoreworICB3aGlsZSAoKnMxICYmICpzMikKKyAgICB7CisgICAgICBpZiAoKnMxICE9ICpz
MikKKwlicmVhazsKKworICAgICAgczErKzsKKyAgICAgIHMyKys7CisgICAgfQorCisgIHJldHVy
biAoaW50KSAqczEgLSAoaW50KSAqczI7Cit9CisKK2ludAorZ3J1Yl9zdHJuY21wIChjb25zdCBj
aGFyICpzMSwgY29uc3QgY2hhciAqczIsIGdydWJfc2l6ZV90IG4pCit7CisgIGlmIChuID09IDAp
CisgICAgcmV0dXJuIDA7CisKKyAgd2hpbGUgKCpzMSAmJiAqczIgJiYgLS1uKQorICAgIHsKKyAg
ICAgIGlmICgqczEgIT0gKnMyKQorCWJyZWFrOworCisgICAgICBzMSsrOworICAgICAgczIrKzsK
KyAgICB9CisKKyAgcmV0dXJuIChpbnQpICpzMSAtIChpbnQpICpzMjsKK30KKworCitpbnQKK2dy
dWJfc3RyY2FzZWNtcCAoY29uc3QgY2hhciAqczEsIGNvbnN0IGNoYXIgKnMyKQoreworICB3aGls
ZSAoKnMxICYmICpzMikKKyAgICB7CisgICAgICBpZiAoZ3J1Yl90b2xvd2VyICgqczEpICE9IGdy
dWJfdG9sb3dlciAoKnMyKSkKKwlicmVhazsKKworICAgICAgczErKzsKKyAgICAgIHMyKys7Cisg
ICAgfQorCisgIHJldHVybiAoaW50KSBncnViX3RvbG93ZXIgKCpzMSkgLSAoaW50KSBncnViX3Rv
bG93ZXIgKCpzMik7Cit9CisKKworaW50CitncnViX3N0cm5jYXNlY21wIChjb25zdCBjaGFyICpz
MSwgY29uc3QgY2hhciAqczIsIGdydWJfc2l6ZV90IG4pCit7CisgIGlmIChuID09IDApCisgICAg
cmV0dXJuIDA7CisKKyAgd2hpbGUgKCpzMSAmJiAqczIgJiYgLS1uKQorICAgIHsKKyAgICAgIGlm
IChncnViX3RvbG93ZXIgKCpzMSkgIT0gZ3J1Yl90b2xvd2VyICgqczIpKQorCWJyZWFrOworCisg
ICAgICBzMSsrOworICAgICAgczIrKzsKKyAgICB9CisKKyAgcmV0dXJuIChpbnQpIGdydWJfdG9s
b3dlciAoKnMxKSAtIChpbnQpIGdydWJfdG9sb3dlciAoKnMyKTsKK30KKwordm9pZCAqCitncnVi
X21lbW1vdmUgKHZvaWQgKmRlc3QsIGNvbnN0IHZvaWQgKnNyYywgZ3J1Yl9zaXplX3QgbikKK3sK
KyAgY2hhciAqZCA9IChjaGFyICopIGRlc3Q7CisgIGNvbnN0IGNoYXIgKnMgPSAoY29uc3QgY2hh
ciAqKSBzcmM7CisKKyAgaWYgKGQgPCBzKQorICAgIHdoaWxlIChuLS0pCisgICAgICAqZCsrID0g
KnMrKzsKKyAgZWxzZQorICAgIHsKKyAgICAgIGQgKz0gbjsKKyAgICAgIHMgKz0gbjsKKworICAg
ICAgd2hpbGUgKG4tLSkKKwkqLS1kID0gKi0tczsKKyAgICB9CisKKyAgcmV0dXJuIGRlc3Q7Cit9
CisKKwordm9pZCAqCitncnViX21hbGxvYyAoZ3J1Yl9zaXplX3Qgc2l6ZSkKK3sKKyAgdm9pZCAq
cmV0OworICByZXQgPSBtYWxsb2MgKHNpemUpOworICBpZiAoIXJldCkKKyAgICBncnViX2Vycm9y
IChHUlVCX0VSUl9PVVRfT0ZfTUVNT1JZLCAib3V0IG9mIG1lbW9yeSIpOworICByZXR1cm4gcmV0
OworfQorCisKKwordm9pZAorZ3J1Yl9mcmVlICh2b2lkICpwdHIpCit7CisgIGZyZWUgKHB0cik7
Cit9CisKKwordm9pZCAqCitncnViX21lbXNldCAodm9pZCAqcywgaW50IGMsIGdydWJfc2l6ZV90
IG4pCit7CisgIHVuc2lnbmVkIGNoYXIgKnAgPSAodW5zaWduZWQgY2hhciAqKSBzOworCisgIHdo
aWxlIChuLS0pCisgICAgKnArKyA9ICh1bnNpZ25lZCBjaGFyKSBjOworCisgIHJldHVybiBzOwor
fQorCitpbnQKK2dydWJfbWVtY21wIChjb25zdCB2b2lkICpzMSwgY29uc3Qgdm9pZCAqczIsIGdy
dWJfc2l6ZV90IG4pCit7CisgIGNvbnN0IGNoYXIgKnQxID0gczE7CisgIGNvbnN0IGNoYXIgKnQy
ID0gczI7CisKKyAgd2hpbGUgKG4tLSkKKyAgICB7CisgICAgICBpZiAoKnQxICE9ICp0MikKKwly
ZXR1cm4gKGludCkgKnQxIC0gKGludCkgKnQyOworCisgICAgICB0MSsrOworICAgICAgdDIrKzsK
KyAgICB9CisKKyAgcmV0dXJuIDA7Cit9CisKKwordm9pZCAqCitncnViX3phbGxvYyAoZ3J1Yl9z
aXplX3Qgc2l6ZSkKK3sKKyAgdm9pZCAqcmV0OworCisgIHJldCA9IGdydWJfbWFsbG9jIChzaXpl
KTsKKyAgaWYgKCFyZXQpCisgICAgcmV0dXJuIE5VTEw7CisgIG1lbXNldCAocmV0LCAwLCBzaXpl
KTsKKyAgcmV0dXJuIHJldDsKK30KKworLyogRGl2aWRlIE4gYnkgRCwgcmV0dXJuIHRoZSBxdW90
aWVudCwgYW5kIHN0b3JlIHRoZSByZW1haW5kZXIgaW4gKlIuICAqLworZ3J1Yl91aW50NjRfdAor
Z3J1Yl9kaXZtb2Q2NCAoZ3J1Yl91aW50NjRfdCBuLCBncnViX3VpbnQzMl90IGQsIGdydWJfdWlu
dDMyX3QgKnIpCit7CisgIC8qIFRoaXMgYWxnb3JpdGhtIGlzIHR5cGljYWxseSBpbXBsZW1lbnRl
ZCBieSBoYXJkd2FyZS4gVGhlIGlkZWEKKyAgICAgaXMgdG8gZ2V0IHRoZSBoaWdoZXN0IGJpdCBp
biBOLCA2NCB0aW1lcywgYnkga2VlcGluZworICAgICB1cHBlcihOICogMl5pKSA9IHVwcGVyKChR
ICogMTAgKyBNKSAqIDJeaSksIHdoZXJlIHVwcGVyCisgICAgIHJlcHJlc2VudHMgdGhlIGhpZ2gg
NjQgYml0cyBpbiAxMjgtYml0cyBzcGFjZS4gICovCisgIHVuc2lnbmVkIGJpdHMgPSA2NDsKKyAg
dW5zaWduZWQgbG9uZyBsb25nIHEgPSAwOworICB1bnNpZ25lZCBtID0gMDsKKworICAvKiBTa2lw
IHRoZSBzbG93IGNvbXB1dGF0aW9uIGlmIDMyLWJpdCBhcml0aG1ldGljIGlzIHBvc3NpYmxlLiAg
Ki8KKyAgaWYgKG4gPCAweGZmZmZmZmZmKQorICAgIHsKKyAgICAgIGlmIChyKQorCSpyID0gKChn
cnViX3VpbnQzMl90KSBuKSAlIGQ7CisKKyAgICAgIHJldHVybiAoKGdydWJfdWludDMyX3QpIG4p
IC8gZDsKKyAgICB9CisKKyAgd2hpbGUgKGJpdHMtLSkKKyAgICB7CisgICAgICBtIDw8PSAxOwor
CisgICAgICBpZiAobiAmICgxVUxMIDw8IDYzKSkKKwltIHw9IDE7CisKKyAgICAgIHEgPDw9IDE7
CisgICAgICBuIDw8PSAxOworCisgICAgICBpZiAobSA+PSBkKQorCXsKKwkgIHEgfD0gMTsKKwkg
IG0gLT0gZDsKKwl9CisgICAgfQorCisgIGlmIChyKQorICAgICpyID0gbTsKKworICByZXR1cm4g
cTsKK30KKworCisKKy8qIENvbnZlcnQgVVRGLTE2IHRvIFVURi04LiAgKi8KK2dydWJfdWludDhf
dCAqCitncnViX3V0ZjE2X3RvX3V0ZjggKGdydWJfdWludDhfdCAqZGVzdCwgZ3J1Yl91aW50MTZf
dCAqc3JjLAorICAgICAgICAgICAgICAgICAgICBncnViX3NpemVfdCBzaXplKQoreworICBncnVi
X3VpbnQzMl90IGNvZGVfaGlnaCA9IDA7CisKKyAgd2hpbGUgKHNpemUtLSkKKyAgICB7CisgICAg
ICBncnViX3VpbnQzMl90IGNvZGUgPSAqc3JjKys7CisKKyAgICAgIGlmIChjb2RlX2hpZ2gpCisg
ICAgICAgIHsKKyAgICAgICAgICBpZiAoY29kZSA+PSAweERDMDAgJiYgY29kZSA8PSAweERGRkYp
CisgICAgICAgICAgICB7CisgICAgICAgICAgICAgIC8qIFN1cnJvZ2F0ZSBwYWlyLiAgKi8KKyAg
ICAgICAgICAgICAgY29kZSA9ICgoY29kZV9oaWdoIC0gMHhEODAwKSA8PCAxMikgKyAoY29kZSAt
IDB4REMwMCkgKyAweDEwMDAwOworCisgICAgICAgICAgICAgICpkZXN0KysgPSAoY29kZSA+PiAx
OCkgfCAweEYwOworICAgICAgICAgICAgICAqZGVzdCsrID0gKChjb2RlID4+IDEyKSAmIDB4M0Yp
IHwgMHg4MDsKKyAgICAgICAgICAgICAgKmRlc3QrKyA9ICgoY29kZSA+PiA2KSAmIDB4M0YpIHwg
MHg4MDsKKyAgICAgICAgICAgICAgKmRlc3QrKyA9IChjb2RlICYgMHgzRikgfCAweDgwOworICAg
ICAgICAgICAgfQorICAgICAgICAgIGVsc2UKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAg
LyogRXJyb3IuLi4gICovCisgICAgICAgICAgICAgICpkZXN0KysgPSAnPyc7CisgICAgICAgICAg
ICB9CisKKyAgICAgICAgICBjb2RlX2hpZ2ggPSAwOworICAgICAgICB9CisgICAgICBlbHNlCisg
ICAgICAgIHsKKyAgICAgICAgICBpZiAoY29kZSA8PSAweDAwN0YpCisgICAgICAgICAgICAqZGVz
dCsrID0gY29kZTsKKyAgICAgICAgICBlbHNlIGlmIChjb2RlIDw9IDB4MDdGRikKKyAgICAgICAg
ICAgIHsKKyAgICAgICAgICAgICAgKmRlc3QrKyA9IChjb2RlID4+IDYpIHwgMHhDMDsKKyAgICAg
ICAgICAgICAgKmRlc3QrKyA9IChjb2RlICYgMHgzRikgfCAweDgwOworICAgICAgICAgICAgfQor
ICAgICAgICAgIGVsc2UgaWYgKGNvZGUgPj0gMHhEODAwICYmIGNvZGUgPD0gMHhEQkZGKQorICAg
ICAgICAgICAgeworICAgICAgICAgICAgICBjb2RlX2hpZ2ggPSBjb2RlOworICAgICAgICAgICAg
ICBjb250aW51ZTsKKyAgICAgICAgICAgIH0KKyAgICAgICAgICBlbHNlIGlmIChjb2RlID49IDB4
REMwMCAmJiBjb2RlIDw9IDB4REZGRikKKyAgICAgICAgICAgIHsKKyAgICAgICAgICAgICAgLyog
RXJyb3IuLi4gKi8KKyAgICAgICAgICAgICAgKmRlc3QrKyA9ICc/JzsKKyAgICAgICAgICAgIH0K
KyAgICAgICAgICBlbHNlCisgICAgICAgICAgICB7CisgICAgICAgICAgICAgICpkZXN0KysgPSAo
Y29kZSA+PiAxMikgfCAweEUwOworICAgICAgICAgICAgICAqZGVzdCsrID0gKChjb2RlID4+IDYp
ICYgMHgzRikgfCAweDgwOworICAgICAgICAgICAgICAqZGVzdCsrID0gKGNvZGUgJiAweDNGKSB8
IDB4ODA7CisgICAgICAgICAgICB9CisgICAgICAgIH0KKyAgICB9CisKKyAgcmV0dXJuIGRlc3Q7
Cit9CisKKworCisKKworCitzdGF0aWMgdm9pZCBwcmludF9ieXRlKGNoYXIgKnAsIGludCBsZW4p
Cit7CisgIHByaW50ZigiXG4qKioqKioqKioqKioqKioqc3RhcnQgcHJpbnQgJXMqKioqKioqKioq
KioqKioqKioqKlxuIiwgcCk7CisgIGludCBpOworICB1bnNpZ25lZCBjaGFyICpwYiA9ICh1bnNp
Z25lZCBjaGFyKilwOworICBmb3IoaSA9IDA7IGkgPCBsZW47IGkrKykKKyAgICB7CisgICAgICBw
cmludGYoIjB4JTAyeCwiLCBwYltpXSk7CisgICAgfQorICBwcmludGYoIlxuKioqKioqKioqKioq
KioqKioqKioqKmVuZCoqKioqKioqKioqKioqKioqKioqKioqKioqXG4iKTsKK30KKworCisKKyNp
bmNsdWRlICAgPGljb252Lmg+IAorI2RlZmluZSAgIE9VVExFTiAgIDI1NiAKKworCisvL7T6wuvX
qru7OrTT0rvW1rHgwuvXqs6qwe3Su9bWseDC6yAKK3N0YXRpYyBpbnQgICBjb2RlX2NvbnZlcnQo
Y29uc3QgY2hhciAgICpmcm9tX2NoYXJzZXQsIGNvbnN0IGNoYXIgICAqdG9fY2hhcnNldCwgCisJ
CSAgICBjaGFyICAgKmluYnVmLCBzaXplX3QgICBpbmxlbiwKKwkJICAgY2hhciAgICpvdXRidWYs
IHNpemVfdCAgIG91dGxlbikgCit7IAorICBpY29udl90ICAgY2Q7IAorICBpbnQgICByYzsgCisg
IGNoYXIgICAqKnBpbiAgID0gICAmaW5idWY7IAorICBjaGFyICAgKipwb3V0ICAgPSAgICZvdXRi
dWY7IAorCisgIC8vcHJpbnRmKCJzaXplb2YoaW50KT0lZCwgc2l6ZW9mKHNpemVfdCk9JWRcbiIs
IHNpemVvZihpbnQpLCBzaXplb2Yoc2l6ZV90KSk7CisgIGNkICAgPSAgIGljb252X29wZW4odG9f
Y2hhcnNldCwgZnJvbV9jaGFyc2V0KTsgCisgIGlmICAgKGNkPT0wKSAgIHJldHVybiAgIC0xOyAK
KyAgbWVtc2V0KG91dGJ1ZiwgMCwgb3V0bGVuKTsgCisgIGlmICAgKGljb252KGNkLCBwaW4sICZp
bmxlbiwgcG91dCwgJm91dGxlbik9PS0xKSAgIHJldHVybiAgIC0xOyAKKyAgaWNvbnZfY2xvc2Uo
Y2QpOyAKKyAgcmV0dXJuICAgMDsgCit9IAorLy9VTklDT0RFwuvXqs6qR0IyMzEywusgCitpbnQg
ICB1MmcoY2hhciAgICppbmJ1Ziwgc2l6ZV90ICAgaW5sZW4sIGNoYXIgICAqb3V0YnVmLCBzaXpl
X3QgIG91dGxlbikgCit7CisgIHJldHVybiAgIGNvZGVfY29udmVydCgidXRmLTgiLCAiZ2IyMzEy
IiwgaW5idWYsIGlubGVuLCBvdXRidWYsIG91dGxlbik7IAorfSAKKy8vR0IyMzEywuvXqs6qVU5J
Q09ERcLrIAoraW50ICAgZzJ1KGNoYXIgICAqaW5idWYsIHNpemVfdCAgIGlubGVuLCBjaGFyICAg
Km91dGJ1Ziwgc2l6ZV90ICAgb3V0bGVuKSAKK3sgCisgIHJldHVybiAgIGNvZGVfY29udmVydCgi
Z2IyMzEyIiwgInV0Zi04IiwgaW5idWYsIGlubGVuLCBvdXRidWYsIG91dGxlbik7IAorfSAKKwor
CisKK3ZvaWQgdGVzdCh2b2lkKSAKK3sKKyAgLy/X1rf7seDC69equ7s977u/5T/nrD8/Pz+9rD8/
CisgIGNoYXIgICBpbl91dGY4W10gICA9ICAgezB4ZTUsMHhhZCwweDk3LDB4ZTcsMHhhYywweGE2
LDB4ZTcsMHhiYywweDk2LDB4ZTcsMHhhMCwweDgxLDB4ZTgsMHhiZCwweGFjLDB4ZTYsMHg4ZCww
eGEyfTsgCisgIGNoYXIgICAqaW5fZ2IyMzEyICAgPSAoY2hhciopICAi19a3+7HgwuvXqru7Ijsg
CisgIGNoYXIgICBvdXRbT1VUTEVOXTsgCisgIGludCByYzsKKyAgLy91dGY4wuvXqs6qZ2IyMzEy
wusgCisgIHJjICAgPSAgIHUyZyhpbl91dGY4LCBzdHJsZW4oaW5fdXRmOCksIG91dCwgT1VUTEVO
KTsgCisgIHByaW50ZigidXRmOC0tPmdiMjMxMiAgIG91dD0lc1xuIiwgb3V0KTsKKyAgcHJpbnRf
Ynl0ZShvdXQsIHN0cmxlbihvdXQpKTsKKyAgLy9nYjIzMTLC69eqzqp1dGY4wusgCisgIHJjICAg
PSAgIGcydShpbl9nYjIzMTIsIHN0cmxlbihpbl9nYjIzMTIpLCBvdXQsIE9VVExFTik7IAorICBw
cmludF9ieXRlKG91dCwgc3RybGVuKG91dCkpOworICBwcmludGYoImdiMjMxMi0tPnV0ZjggICBv
dXQ9JXNcbiIsb3V0KTsgCit9IAorCisKKwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhl
bi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL21pc2MuaCB4ZW4tNC4xLjItYi90b29scy9p
b2VtdS1xZW11LXhlbi9taXNjLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVu
L21pc2MuaAkxOTcwLTAxLTAxIDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4y
LWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vbWlzYy5oCTIwMTItMTItMjggMTY6MDI6NDEuMDEyOTM3
ODQ2ICswODAwCkBAIC0wLDAgKzEsODkgQEAKKyNpbmNsdWRlICJmcy10eXBlcy5oIgorI2luY2x1
ZGUgPHN0ZGRlZi5oPgorCisKKworCisKKyNkZWZpbmUgZ3J1Yl9tZW1jcHkoZCxzLG4pZ3J1Yl9t
ZW1tb3ZlICgoZCksIChzKSwgKG4pKQorCisKK2ludAorZ3J1Yl9pc3NwYWNlIChpbnQgYyk7CisK
K2ludAorZ3J1Yl90b2xvd2VyIChpbnQgYyk7CisKKworY2hhciAqCitncnViX3N0cmNociAoY29u
c3QgY2hhciAqcywgaW50IGMpOworCisKK2dydWJfc2l6ZV90CitncnViX3N0cmxlbiAoY29uc3Qg
Y2hhciAqcyk7CisKKworCitjaGFyICoKK2dydWJfc3RyZHVwIChjb25zdCBjaGFyICpzKTsKKwor
CisKK2NoYXIgKgorZ3J1Yl9zdHJuZHVwIChjb25zdCBjaGFyICpzLCBncnViX3NpemVfdCBuKTsK
KworCitjaGFyICoKK2dydWJfc3RybmNweSAoY2hhciAqZGVzdCwgY29uc3QgY2hhciAqc3JjLCBp
bnQgYyk7CisKKworaW50CitncnViX3N0cmNtcCAoY29uc3QgY2hhciAqczEsIGNvbnN0IGNoYXIg
KnMyKTsKKworaW50CitncnViX3N0cm5jbXAgKGNvbnN0IGNoYXIgKnMxLCBjb25zdCBjaGFyICpz
MiwgZ3J1Yl9zaXplX3Qgbik7CisKK2ludAorZ3J1Yl9zdHJjYXNlY21wIChjb25zdCBjaGFyICpz
MSwgY29uc3QgY2hhciAqczIpOworCitpbnQKK2dydWJfc3RybmNhc2VjbXAgKGNvbnN0IGNoYXIg
KnMxLCBjb25zdCBjaGFyICpzMiwgZ3J1Yl9zaXplX3Qgbik7CisKK3ZvaWQgKgorZ3J1Yl9tZW1t
b3ZlICh2b2lkICpkZXN0LCBjb25zdCB2b2lkICpzcmMsIGdydWJfc2l6ZV90IG4pOworCitpbnQK
K2dydWJfbWVtY21wIChjb25zdCB2b2lkICpzMSwgY29uc3Qgdm9pZCAqczIsIGdydWJfc2l6ZV90
IG4pOworCit2b2lkICoKK2dydWJfbWFsbG9jIChncnViX3NpemVfdCBzaXplKTsKKwordm9pZCAq
CitncnViX21lbXNldCAodm9pZCAqcywgaW50IGMsIGdydWJfc2l6ZV90IG4pOworCit2b2lkCitn
cnViX2ZyZWUgKHZvaWQgKnB0cik7CisKK3ZvaWQgKgorZ3J1Yl96YWxsb2MgKGdydWJfc2l6ZV90
IHNpemUpOworCitncnViX3VpbnQ2NF90CitncnViX2Rpdm1vZDY0IChncnViX3VpbnQ2NF90IG4s
IGdydWJfdWludDMyX3QgZCwgZ3J1Yl91aW50MzJfdCAqcik7CisKKy8qIENvbnZlcnQgVVRGLTE2
IHRvIFVURi04LiAgKi8KK2dydWJfdWludDhfdCAqCitncnViX3V0ZjE2X3RvX3V0ZjggKGdydWJf
dWludDhfdCAqZGVzdCwgZ3J1Yl91aW50MTZfdCAqc3JjLAorICAgICAgICAgICAgICAgICAgICBn
cnViX3NpemVfdCBzaXplKTsKKworCisvL1VOSUNPREXC69eqzqpHQjIzMTLC6yAKK2ludCAgIHUy
ZyhjaGFyICAgKmluYnVmLCBzaXplX3QgICBpbmxlbiwgY2hhciAgICpvdXRidWYsIHNpemVfdCAg
IG91dGxlbik7CisvL0dCMjMxMsLr16rOqlVOSUNPREXC6yAKK2ludCAgIGcydShjaGFyICAgKmlu
YnVmLCBzaXplX3QgICBpbmxlbiwgY2hhciAgICpvdXRidWYsIHNpemVfdCAgIG91dGxlbik7CisK
KworCit2b2lkIHRlc3Qodm9pZCk7CisKKworCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTgg
eGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vbnRmcy5jIHhlbi00LjEuMi1iL3Rvb2xz
L2lvZW11LXFlbXUteGVuL250ZnMuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vbnRmcy5jCTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4x
LjItYi90b29scy9pb2VtdS1xZW11LXhlbi9udGZzLmMJMjAxMi0xMi0yOCAxNjowMjo0MS4wMTM5
MzcwMzggKzA4MDAKQEAgLTAsMCArMSwxMTg4IEBACisvKiBudGZzLmMgLSBOVEZTIGZpbGVzeXN0
ZW0gKi8KKy8qCisgKiAgR1JVQiAgLS0gIEdSYW5kIFVuaWZpZWQgQm9vdGxvYWRlcgorICogIENv
cHlyaWdodCAoQykgMjAwNywyMDA4LDIwMDkgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMu
CisgKgorICogIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJp
YnV0ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUg
R2VuZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0
d2FyZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICog
IChhdCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICogIFRoaXMgcHJvZ3Jh
bSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAorICog
IGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJh
bnR5IG9mCisgKiAgTUVSQ0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQ
VVJQT1NFLiAgU2VlIHRoZQorICogIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3Jl
IGRldGFpbHMuCisgKgorICogIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlCisgKiAgYWxvbmcgd2l0aCB0aGlzIHByb2dyYW0u
ICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworCisj
aW5jbHVkZSAibWlzYy5oIgorI2luY2x1ZGUgImZzaGVscC5oIgorI2luY2x1ZGUgIm50ZnMuaCIK
KyNpbmNsdWRlICJkZWJ1Zy5oIgorI2luY2x1ZGUgImdydWJfZXJyLmgiCisjaW5jbHVkZSAiZXJy
LmgiCisKKyNkZWZpbmUgR1JVQl9ESVNLX1NFQ1RPUl9TSVpFIDB4MjAwCitudGZzY29tcF9mdW5j
X3QgZ3J1Yl9udGZzY29tcF9mdW5jOworCisvL2ltcG9ydGFudAorLy9sY24gaXMgcmVsYXRpdmUg
dG8gc3RhcnQgc2VjdG9yIG9mIHRoZSB2b2x1bWUKK3N0YXRpYyBncnViX29mZl90IHNfcGFydF9v
ZmZfc2VjdG9yOworc3RhdGljIGdydWJfdWludDMyX3Qgc19icGJfYnl0ZXNfcGVyX3NlY3RvcjsK
K2ludCBiZHJ2X3ByZWFkX2Zyb21fbGNuX29mX3ZvbHVtKEJsb2NrRHJpdmVyU3RhdGUgKmJzLCBp
bnQ2NF90IG9mZnNldCwKKwkJCQkgdm9pZCAqYnVmMSwgaW50IGNvdW50MSkKK3sKKyAgcmV0dXJu
IGJkcnZfcHJlYWQoYnMsIHNfcGFydF9vZmZfc2VjdG9yICogc19icGJfYnl0ZXNfcGVyX3NlY3Rv
ciArIG9mZnNldCwKKwkJIGJ1ZjEsIGNvdW50MSk7Cit9CisKKworc3RhdGljIGdydWJfZXJyX3QK
K2ZpeHVwIChzdHJ1Y3QgZ3J1Yl9udGZzX2RhdGEgKmRhdGEsIGNoYXIgKmJ1ZiwgaW50IGxlbiwg
Y2hhciAqbWFnaWMpCit7CisgIGludCBzczsKKyAgY2hhciAqcHU7CisgIGdydWJfdWludDE2X3Qg
dXM7CisgIAorICBEQkcoIiV4LSV4LSV4LSV4IiwgYnVmWzBdLCBidWZbMV0sIGJ1ZlsyXSwgYnVm
WzNdKTsKKyAgaWYgKGdydWJfbWVtY21wIChidWYsIG1hZ2ljLCA0KSkKKyAgICByZXR1cm4gZ3J1
Yl9lcnJvciAoR1JVQl9FUlJfQkFEX0ZTLCAiJXMgbGFiZWwgbm90IGZvdW5kIiwgbWFnaWMpOwor
CisgIHNzID0gdTE2YXQgKGJ1ZiwgNikgLSAxOworICBpZiAoc3MgKiAoaW50KSBkYXRhLT5ibG9j
a3NpemUgIT0gbGVuICogR1JVQl9ESVNLX1NFQ1RPUl9TSVpFKQorICAgIHJldHVybiBncnViX2Vy
cm9yIChHUlVCX0VSUl9CQURfRlMsICJzaXplIG5vdCBtYXRjaCIsCisJCSAgICAgICBzcyAqIChp
bnQpIGRhdGEtPmJsb2Nrc2l6ZSwKKwkJICAgICAgIGxlbiAqIEdSVUJfRElTS19TRUNUT1JfU0la
RSk7CisgIHB1ID0gYnVmICsgdTE2YXQgKGJ1ZiwgNCk7CisgIHVzID0gdTE2YXQgKHB1LCAwKTsK
KyAgYnVmIC09IDI7CisgIHdoaWxlIChzcyA+IDApCisgICAgeworICAgICAgYnVmICs9IGRhdGEt
PmJsb2Nrc2l6ZTsKKyAgICAgIHB1ICs9IDI7CisgICAgICBpZiAodTE2YXQgKGJ1ZiwgMCkgIT0g
dXMpCisJcmV0dXJuIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywgImZpeHVwIHNpZ25hdHVy
ZSBub3QgbWF0Y2giKTsKKyAgICAgIHYxNmF0IChidWYsIDApID0gdjE2YXQgKHB1LCAwKTsKKyAg
ICAgIHNzLS07CisgICAgfQorCisgIHJldHVybiAwOworfQorCitzdGF0aWMgZ3J1Yl9lcnJfdCBy
ZWFkX21mdCAoc3RydWN0IGdydWJfbnRmc19kYXRhICpkYXRhLCBjaGFyICpidWYsCisJCQkgICAg
Z3J1Yl91aW50MzJfdCBtZnRubyk7CitzdGF0aWMgZ3J1Yl9lcnJfdCByZWFkX2F0dHIgKHN0cnVj
dCBncnViX250ZnNfYXR0ciAqYXQsIGNoYXIgKmRlc3QsCisJCQkgICAgIGdydWJfZGlza19hZGRy
X3Qgb2ZzLCBncnViX3NpemVfdCBsZW4sCisJCQkgICAgIGludCBjYWNoZWQsCisJCQkgICAgIHZv
aWQgKCpyZWFkX2hvb2spIChncnViX2Rpc2tfYWRkcl90IHNlY3RvciwKKwkJCQkJCXVuc2lnbmVk
IG9mZnNldCwKKwkJCQkJCXVuc2lnbmVkIGxlbmd0aCwKKwkJCQkJCXZvaWQgKmNsb3N1cmUpLAor
CQkJICAgICB2b2lkICpjbG9zdXJlKTsKKworc3RhdGljIGdydWJfZXJyX3QgcmVhZF9kYXRhIChz
dHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0LCBjaGFyICpwYSwgY2hhciAqZGVzdCwKKwkJCSAgICAg
Z3J1Yl9kaXNrX2FkZHJfdCBvZnMsIGdydWJfc2l6ZV90IGxlbiwKKwkJCSAgICAgaW50IGNhY2hl
ZCwKKwkJCSAgICAgdm9pZCAoKnJlYWRfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAor
CQkJCQkJdW5zaWduZWQgb2Zmc2V0LAorCQkJCQkJdW5zaWduZWQgbGVuZ3RoLAorCQkJCQkJdm9p
ZCAqY2xvc3VyZSksCisJCQkgICAgIHZvaWQgKmNsb3N1cmUpOworCitzdGF0aWMgdm9pZAoraW5p
dF9hdHRyIChzdHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0LCBzdHJ1Y3QgZ3J1Yl9udGZzX2ZpbGUg
Km1mdCkKK3sKKyAgYXQtPm1mdCA9IG1mdDsKKyAgYXQtPmZsYWdzID0gKG1mdCA9PSAmbWZ0LT5k
YXRhLT5tbWZ0KSA/IEFGX01NRlQgOiAwOworICBhdC0+YXR0cl9ueHQgPSBtZnQtPmJ1ZiArIHUx
NmF0IChtZnQtPmJ1ZiwgMHgxNCk7CisgIGF0LT5hdHRyX2VuZCA9IGF0LT5lbWZ0X2J1ZiA9IGF0
LT5lZGF0X2J1ZiA9IGF0LT5zYnVmID0gTlVMTDsKK30KKworc3RhdGljIHZvaWQKK2ZyZWVfYXR0
ciAoc3RydWN0IGdydWJfbnRmc19hdHRyICphdCkKK3sKKyAgZ3J1Yl9mcmVlIChhdC0+ZW1mdF9i
dWYpOworICBncnViX2ZyZWUgKGF0LT5lZGF0X2J1Zik7CisgIGdydWJfZnJlZSAoYXQtPnNidWYp
OworfQorCitzdGF0aWMgY2hhciAqCitmaW5kX2F0dHIgKHN0cnVjdCBncnViX250ZnNfYXR0ciAq
YXQsIHVuc2lnbmVkIGNoYXIgYXR0cikKK3sKKyAgZ3J1Yl9vZmZfdCBvZmZfYnl0ZXMxOworICBn
cnViX29mZl90IG9mZl9ieXRlczI7CisgCisgIGlmIChhdC0+ZmxhZ3MgJiBBRl9BTFNUKQorICAg
IHsKKyAgICAgIERCRygiISEhISEhXG5pbiBhIGF0dHIgbGlzdD09PT09PSIpOworICAgIHJldHJ5
OgorICAgICAgd2hpbGUgKGF0LT5hdHRyX254dCA8IGF0LT5hdHRyX2VuZCkKKwl7CisJICBhdC0+
YXR0cl9jdXIgPSBhdC0+YXR0cl9ueHQ7CisJICBhdC0+YXR0cl9ueHQgKz0gdTE2YXQgKGF0LT5h
dHRyX2N1ciwgNCk7CisJICBpZiAoKCh1bnNpZ25lZCBjaGFyKSAqYXQtPmF0dHJfY3VyID09IGF0
dHIpIHx8IChhdHRyID09IDApKQorCSAgICB7CisJICAgICAgY2hhciAqbmV3X3BvczsKKworCSAg
ICAgIGlmIChhdC0+ZmxhZ3MgJiBBRl9NTUZUKQorCQl7CisJCSAgREJHKCJpbiBBRl9NTUZULi4u
Li4uIik7CisJCSAgb2ZmX2J5dGVzMSA9IChncnViX29mZl90KSh2MzJhdCAoYXQtPmF0dHJfY3Vy
LCAweDEwKSkgPDwgQkxLX1NIUjsKKwkJICBvZmZfYnl0ZXMyID0gKGdydWJfb2ZmX3QpKHYzMmF0
IChhdC0+YXR0cl9jdXIsIDB4MTQpKSA8PCBCTEtfU0hSOworCQkgIGlmICgoYmRydl9wcmVhZAor
CQkgICAgICAgKGF0LT5tZnQtPmRhdGEtPmJzLCBvZmZfYnl0ZXMxLAorCQkJYXQtPmVtZnRfYnVm
LCA1MTIpKQorCQkgICAgICB8fAorCQkgICAgICAoYmRydl9wcmVhZAorCQkgICAgICAgKGF0LT5t
ZnQtPmRhdGEtPmJzLCBvZmZfYnl0ZXMyLAorCQkJYXQtPmVtZnRfYnVmICsgNTEyLCA1MTIpKSkK
KwkJICAgIHJldHVybiBOVUxMOworCisJCSAgaWYgKGZpeHVwCisJCSAgICAgIChhdC0+bWZ0LT5k
YXRhLCBhdC0+ZW1mdF9idWYsIGF0LT5tZnQtPmRhdGEtPm1mdF9zaXplLAorCQkgICAgICAgKGNo
YXIqKSJGSUxFIikpCisJCSAgICByZXR1cm4gTlVMTDsKKwkJfQorCSAgICAgIGVsc2UKKwkJewor
CQkgIERCRygicmVhZCBleHRlbmQgbWZ0IEZSPT09PT09Iik7CisJCSAgaWYgKHJlYWRfbWZ0IChh
dC0+bWZ0LT5kYXRhLCBhdC0+ZW1mdF9idWYsCisJCQkJdTMyYXQgKGF0LT5hdHRyX2N1ciwgMHgx
MCkpKQorCQkgICAgcmV0dXJuIE5VTEw7CisJCX0KKworCSAgICAgIG5ld19wb3MgPSAmYXQtPmVt
ZnRfYnVmW3UxNmF0IChhdC0+ZW1mdF9idWYsIDB4MTQpXTsKKwkgICAgICB3aGlsZSAoKHVuc2ln
bmVkIGNoYXIpICpuZXdfcG9zICE9IDB4RkYpCisJCXsKKwkJICBEQkcoIm5ldyBwb3MgaW4gZXh0
ZW5kIG1mdD09PT09PSIpOworCQkgIGlmICgoKHVuc2lnbmVkIGNoYXIpICpuZXdfcG9zID09CisJ
CSAgICAgICAodW5zaWduZWQgY2hhcikgKmF0LT5hdHRyX2N1cikKKwkJICAgICAgJiYgKHUxNmF0
IChuZXdfcG9zLCAweEUpID09IHUxNmF0IChhdC0+YXR0cl9jdXIsIDB4MTgpKSkKKwkJICAgIHsK
KwkJICAgICAgcmV0dXJuIG5ld19wb3M7CisJCSAgICB9CisJCSAgbmV3X3BvcyArPSB1MTZhdCAo
bmV3X3BvcywgNCk7CisJCX0KKwkgICAgICBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsCisJ
CQkgICJjYW5cJ3QgZmluZCAweCVYIGluIGF0dHJpYnV0ZSBsaXN0IiwKKwkJCSAgKHVuc2lnbmVk
IGNoYXIpICphdC0+YXR0cl9jdXIpOworCSAgICAgIHJldHVybiBOVUxMOworCSAgICB9CisJfQor
ICAgICAgcmV0dXJuIE5VTEw7CisgICAgfQorCisKKyAgREJHKCJub3QgaW4gYSBhdHRyIGxpc3Q9
PT09PT0iKTsKKyAgYXQtPmF0dHJfY3VyID0gYXQtPmF0dHJfbnh0OworICB3aGlsZSAoKHVuc2ln
bmVkIGNoYXIpICphdC0+YXR0cl9jdXIgIT0gMHhGRikKKyAgICB7CisgICAgICBhdC0+YXR0cl9u
eHQgKz0gdTE2YXQgKGF0LT5hdHRyX2N1ciwgNCk7CisgICAgICBpZiAoKHVuc2lnbmVkIGNoYXIp
ICphdC0+YXR0cl9jdXIgPT0gQVRfQVRUUklCVVRFX0xJU1QpCisJYXQtPmF0dHJfZW5kID0gYXQt
PmF0dHJfY3VyOworICAgICAgaWYgKCgodW5zaWduZWQgY2hhcikgKmF0LT5hdHRyX2N1ciA9PSBh
dHRyKSB8fCAoYXR0ciA9PSAwKSkKKwl7CisJICBEQkcoImZvdW5kPT09PT09Iik7CisJICByZXR1
cm4gYXQtPmF0dHJfY3VyOworCX0KKyAgICAgIGF0LT5hdHRyX2N1ciA9IGF0LT5hdHRyX254dDsK
KyAgICB9CisgIAorICAKKyAgaWYgKGF0LT5hdHRyX2VuZCkKKyAgICB7CisgICAgICBEQkcoInNl
YXJjaGluZyBpbiBhdHRyIGxpc3Q9PT09PT0iKTsKKyAgICAgIGNoYXIgKnBhOworCisgICAgICBh
dC0+ZW1mdF9idWYgPSBncnViX21hbGxvYyAoYXQtPm1mdC0+ZGF0YS0+bWZ0X3NpemUgPDwgQkxL
X1NIUik7CisgICAgICBpZiAoYXQtPmVtZnRfYnVmID09IE5VTEwpCisJcmV0dXJuIE5VTEw7CisK
KyAgICAgIHBhID0gYXQtPmF0dHJfZW5kOworICAgICAgaWYgKHBhWzhdKQorCXsKKyAgICAgICAg
ICBpbnQgbjsKKworICAgICAgICAgIG4gPSAoKHUzMmF0IChwYSwgMHgzMCkgKyBHUlVCX0RJU0tf
U0VDVE9SX1NJWkUgLSAxKQorICAgICAgICAgICAgICAgJiAofihHUlVCX0RJU0tfU0VDVE9SX1NJ
WkUgLSAxKSkpOworCSAgYXQtPmF0dHJfY3VyID0gYXQtPmF0dHJfZW5kOworCSAgYXQtPmVkYXRf
YnVmID0gZ3J1Yl9tYWxsb2MgKG4pOworCSAgaWYgKCFhdC0+ZWRhdF9idWYpCisJICAgIHJldHVy
biBOVUxMOworCSAgaWYgKHJlYWRfZGF0YSAoYXQsIHBhLCBhdC0+ZWRhdF9idWYsIDAsIG4sIDAs
IDAsIDApKQorCSAgICB7CisJICAgICAgZ3J1Yl9lcnJvciAoR1JVQl9FUlJfQkFEX0ZTLAorCQkJ
ICAiZmFpbCB0byByZWFkIG5vbi1yZXNpZGVudCBhdHRyaWJ1dGUgbGlzdCIpOworCSAgICAgIHJl
dHVybiBOVUxMOworCSAgICB9CisJICBhdC0+YXR0cl9ueHQgPSBhdC0+ZWRhdF9idWY7CisJICBh
dC0+YXR0cl9lbmQgPSBhdC0+ZWRhdF9idWYgKyB1MzJhdCAocGEsIDB4MzApOworCX0KKyAgICAg
IGVsc2UKKwl7CisJICBhdC0+YXR0cl9ueHQgPSBhdC0+YXR0cl9lbmQgKyB1MTZhdCAocGEsIDB4
MTQpOworCSAgYXQtPmF0dHJfZW5kID0gYXQtPmF0dHJfZW5kICsgdTMyYXQgKHBhLCA0KTsKKwl9
CisgICAgICBhdC0+ZmxhZ3MgfD0gQUZfQUxTVDsKKyAgICAgIHdoaWxlIChhdC0+YXR0cl9ueHQg
PCBhdC0+YXR0cl9lbmQpCisJeworCSAgaWYgKCgodW5zaWduZWQgY2hhcikgKmF0LT5hdHRyX254
dCA9PSBhdHRyKSB8fCAoYXR0ciA9PSAwKSkKKwkgICAgYnJlYWs7CisJICBhdC0+YXR0cl9ueHQg
Kz0gdTE2YXQgKGF0LT5hdHRyX254dCwgNCk7CisJfQorICAgICAgaWYgKGF0LT5hdHRyX254dCA+
PSBhdC0+YXR0cl9lbmQpCisJeworCSAgREJHKCJub3QgZm91bmQgaW4gbGlzdCIpOworCSAgcmV0
dXJuIE5VTEw7CisJfQorICAgICAgREJHKCJmb3VuZCBpbiBhdHRyIGxpc3Q9PT09PT0iKTsKKyAg
ICAgIGlmICgoYXQtPmZsYWdzICYgQUZfTU1GVCkgJiYgKGF0dHIgPT0gQVRfREFUQSkpCisJewor
CSAgREJHKCJBRl9HUE9TISEhISEhPT09PT09Iik7CisJICBhdC0+ZmxhZ3MgfD0gQUZfR1BPUzsK
KwkgIGF0LT5hdHRyX2N1ciA9IGF0LT5hdHRyX254dDsKKwkgIHBhID0gYXQtPmF0dHJfY3VyOwor
CSAgdjMyYXQgKHBhLCAweDEwKSA9IGF0LT5tZnQtPmRhdGEtPm1mdF9zdGFydDsKKwkgIHYzMmF0
IChwYSwgMHgxNCkgPSBhdC0+bWZ0LT5kYXRhLT5tZnRfc3RhcnQgKyAxOworCSAgcGEgPSBhdC0+
YXR0cl9ueHQgKyB1MTZhdCAocGEsIDQpOworCSAgd2hpbGUgKHBhIDwgYXQtPmF0dHJfZW5kKQor
CSAgICB7CisJICAgICAgaWYgKCh1bnNpZ25lZCBjaGFyKSAqcGEgIT0gYXR0cikKKwkJYnJlYWs7
CisJICAgICAgaWYgKHJlYWRfYXR0cgorCQkgIChhdCwgcGEgKyAweDEwLAorCQkgICB1MzJhdCAo
cGEsIDB4MTApICogKGF0LT5tZnQtPmRhdGEtPm1mdF9zaXplIDw8IEJMS19TSFIpLAorCQkgICBh
dC0+bWZ0LT5kYXRhLT5tZnRfc2l6ZSA8PCBCTEtfU0hSLCAwLCAwLCAwKSkKKwkJcmV0dXJuIE5V
TEw7CisJICAgICAgcGEgKz0gdTE2YXQgKHBhLCA0KTsKKwkgICAgfQorCSAgYXQtPmF0dHJfbnh0
ID0gYXQtPmF0dHJfY3VyOworCSAgYXQtPmZsYWdzICY9IH5BRl9HUE9TOworCX0KKyAgICAgIAor
ICAgICAgREJHKCJnb3RvIHJldHJ5PT09PT09Iik7CisgICAgICBnb3RvIHJldHJ5OworICAgIH0K
KworICBEQkcoInJldHVybiBOVUxMIik7CisgIHJldHVybiBOVUxMOworfQorCitzdGF0aWMgY2hh
ciAqCitsb2NhdGVfYXR0ciAoc3RydWN0IGdydWJfbnRmc19hdHRyICphdCwgc3RydWN0IGdydWJf
bnRmc19maWxlICptZnQsCisJICAgICB1bnNpZ25lZCBjaGFyIGF0dHIpCit7CisgIAorCisgIGNo
YXIgKnBhOworICAKKyAgaW5pdF9hdHRyIChhdCwgbWZ0KTsKKworICBEQkcoIlxuISEhISEhXG5s
b2NhdGluZyBhdHRyPTB4JTAyeCwgYXQtPmZsYWc9MHglMDJ4PT09PT09PT09PT09IiwKKyAgICAg
IGF0dHIsIGF0LT5mbGFncyk7CisgIGlmICgocGEgPSBmaW5kX2F0dHIgKGF0LCBhdHRyKSkgPT0g
TlVMTCkKKyAgICB7CisgICAgICBEQkcoIjE9PT09PT09PT1ub3QgZm91bmQiKTsKKyAgICAgIHJl
dHVybiBOVUxMOworICAgIH0KKyAgaWYgKChhdC0+ZmxhZ3MgJiBBRl9BTFNUKSA9PSAwKQorICAg
IHsKKyAgICAgIERCRygiMj09PT09PT1ub3QgYSBhdHRyIGxpc3QsIGNvbnRpbnVlIHNlYXJjaGlu
ZyIpOworICAgICAgd2hpbGUgKDEpCisJeworCSAgaWYgKChwYSA9IGZpbmRfYXR0ciAoYXQsIGF0
dHIpKSA9PSBOVUxMKQorCSAgICBicmVhazsKKwkgIGlmIChhdC0+ZmxhZ3MgJiBBRl9BTFNUKQor
CSAgICB7CisJICAgICAgREJHKCIzPT09PT09PT09PWluIGEgYXR0ciBsaXN0LGZvdW5kIik7CisJ
ICAgICAgcmV0dXJuIHBhOworCSAgICB9CisJfQorICAgICAgREJHKCI0PT09PT09PT1zdGFydCBz
ZWFyY2hpbmcgYWxsIG92ZXIgYWdhaW4iKTsKKyAgICAgIGdydWJfZXJybm8gPSBHUlVCX0VSUl9O
T05FOworICAgICAgZnJlZV9hdHRyIChhdCk7CisgICAgICBpbml0X2F0dHIgKGF0LCBtZnQpOwor
ICAgICAgcGEgPSBmaW5kX2F0dHIgKGF0LCBhdHRyKTsKKyAgICB9CisgIERCRygibG9jYXRlIGZp
bmlzaD09PT09PVxuXG4iKTsKKyAgcmV0dXJuIHBhOworfQorCitzdGF0aWMgY2hhciAqCityZWFk
X3J1bl9kYXRhIChjaGFyICpydW4sIGludCBubiwgZ3J1Yl9kaXNrX2FkZHJfdCAqIHZhbCwgaW50
IHNpZykKK3sKKyAgZ3J1Yl9kaXNrX2FkZHJfdCByLCB2OworCisgIHIgPSAwOworICB2ID0gMTsK
KworICB3aGlsZSAobm4tLSkKKyAgICB7CisgICAgICByICs9IHYgKiAoKih1bnNpZ25lZCBjaGFy
ICopIChydW4rKykpOworICAgICAgdiA8PD0gODsKKyAgICB9CisKKyAgaWYgKChzaWcpICYmIChy
ICYgKHYgPj4gMSkpKQorICAgIHIgLT0gdjsKKworICAqdmFsID0gcjsKKyAgcmV0dXJuIHJ1bjsK
K30KKworZ3J1Yl9lcnJfdAorZ3J1Yl9udGZzX3JlYWRfcnVuX2xpc3QgKHN0cnVjdCBncnViX250
ZnNfcmxzdCAqIGN0eCkKK3sKKyAgREJHKCJyZWFkIHJ1biBsaXN0Iik7CisKKyAgaW50IGMxLCBj
MjsKKyAgZ3J1Yl9kaXNrX2FkZHJfdCB2YWw7CisgIGNoYXIgKnJ1bjsKKworICBydW4gPSBjdHgt
PmN1cl9ydW47CityZXRyeToKKyAgYzEgPSAoKHVuc2lnbmVkIGNoYXIpICgqcnVuKSAmIDB4Rik7
CisgIGMyID0gKCh1bnNpZ25lZCBjaGFyKSAoKnJ1bikgPj4gNCk7CisgIGlmICghYzEpCisgICAg
eworICAgICAgaWYgKChjdHgtPmF0dHIpICYmIChjdHgtPmF0dHItPmZsYWdzICYgQUZfQUxTVCkp
CisJeworCSAgdm9pZCAoKnNhdmVfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAorCQkJ
ICAgICB1bnNpZ25lZCBvZmZzZXQsCisJCQkgICAgIHVuc2lnbmVkIGxlbmd0aCwKKwkJCSAgICAg
dm9pZCAqY2xvc3VyZSk7CisJICAKKwkgIC8vc2F2ZV9ob29rID0gY3R4LT5jb21wLmJzLT5yZWFk
X2hvb2s7CisJICAvL2N0eC0+Y29tcC5icy0+cmVhZF9ob29rID0gMDsKKwkgIHJ1biA9IGZpbmRf
YXR0ciAoY3R4LT5hdHRyLCAodW5zaWduZWQgY2hhcikgKmN0eC0+YXR0ci0+YXR0cl9jdXIpOwor
CSAgLy9jdHgtPmNvbXAuYnMtPnJlYWRfaG9vayA9IHNhdmVfaG9vazsKKwkgIGlmIChydW4pCisJ
ICAgIHsKKwkgICAgICBpZiAocnVuWzhdID09IDApCisJCXJldHVybiBncnViX2Vycm9yIChHUlVC
X0VSUl9CQURfRlMsCisJCQkJICAgIiREQVRBIHNob3VsZCBiZSBub24tcmVzaWRlbnQiKTsKKwor
CSAgICAgIHJ1biArPSB1MTZhdCAocnVuLCAweDIwKTsKKwkgICAgICBjdHgtPmN1cnJfbGNuID0g
MDsKKwkgICAgICBnb3RvIHJldHJ5OworCSAgICB9CisJfQorICAgICAgcmV0dXJuIGdydWJfZXJy
b3IgKEdSVUJfRVJSX0JBRF9GUywgInJ1biBsaXN0IG92ZXJmbG93biIpOworICAgIH0KKyAgcnVu
ID0gcmVhZF9ydW5fZGF0YSAocnVuICsgMSwgYzEsICZ2YWwsIDApOwkvKiBsZW5ndGggb2YgY3Vy
cmVudCBWQ04gKi8KKyAgY3R4LT5jdXJyX3ZjbiA9IGN0eC0+bmV4dF92Y247CisgIGN0eC0+bmV4
dF92Y24gKz0gdmFsOworICBydW4gPSByZWFkX3J1bl9kYXRhIChydW4sIGMyLCAmdmFsLCAxKTsJ
Lyogb2Zmc2V0IHRvIHByZXZpb3VzIExDTiAqLworICBjdHgtPmN1cnJfbGNuICs9IHZhbDsKKyAg
aWYgKHZhbCA9PSAwKQorICAgIGN0eC0+ZmxhZ3MgfD0gUkZfQkxOSzsKKyAgZWxzZQorICAgIGN0
eC0+ZmxhZ3MgJj0gflJGX0JMTks7CisgIGN0eC0+Y3VyX3J1biA9IHJ1bjsKKyAgcmV0dXJuIDA7
Cit9CisKK3N0YXRpYyBncnViX2Rpc2tfYWRkcl90CitncnViX250ZnNfcmVhZF9ibG9jayAoZ3J1
Yl9mc2hlbHBfbm9kZV90IG5vZGUsIGdydWJfZGlza19hZGRyX3QgYmxvY2spCit7CisgIHN0cnVj
dCBncnViX250ZnNfcmxzdCAqY3R4OworCisgIGN0eCA9IChzdHJ1Y3QgZ3J1Yl9udGZzX3Jsc3Qg
Kikgbm9kZTsKKyAgaWYgKGJsb2NrID49IGN0eC0+bmV4dF92Y24pCisgICAgeworICAgICAgaWYg
KGdydWJfbnRmc19yZWFkX3J1bl9saXN0IChjdHgpKQorCXJldHVybiAtMTsKKyAgICAgIHJldHVy
biBjdHgtPmN1cnJfbGNuOworICAgIH0KKyAgZWxzZQorICAgIHJldHVybiAoY3R4LT5mbGFncyAm
IFJGX0JMTkspID8gMCA6IChibG9jayAtCisJCQkJCSBjdHgtPmN1cnJfdmNuICsgY3R4LT5jdXJy
X2xjbik7Cit9CisKK3N0YXRpYyBncnViX2Vycl90CityZWFkX2RhdGEgKHN0cnVjdCBncnViX250
ZnNfYXR0ciAqYXQsIGNoYXIgKnBhLCBjaGFyICpkZXN0LAorCSAgIGdydWJfZGlza19hZGRyX3Qg
b2ZzLCBncnViX3NpemVfdCBsZW4sIGludCBjYWNoZWQsCisJICAgdm9pZCAoKnJlYWRfaG9vaykg
KGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAorCQkJICAgICAgdW5zaWduZWQgb2Zmc2V0LAorCQkJ
ICAgICAgdW5zaWduZWQgbGVuZ3RoLAorCQkJICAgICAgdm9pZCAqY2xvc3VyZSksCisJICAgdm9p
ZCAqY2xvc3VyZSkKK3sKKyAgZ3J1Yl9kaXNrX2FkZHJfdCB2Y247CisgIHN0cnVjdCBncnViX250
ZnNfcmxzdCBjYywgKmN0eDsKKworICBpZiAobGVuID09IDApCisgICAgcmV0dXJuIDA7CisKKyAg
Z3J1Yl9tZW1zZXQgKCZjYywgMCwgc2l6ZW9mIChjYykpOworICBjdHggPSAmY2M7CisgIGN0eC0+
YXR0ciA9IGF0OworICBjdHgtPmNvbXAuc3BjID0gYXQtPm1mdC0+ZGF0YS0+c3BjOworICBjdHgt
PmNvbXAuYnMgPSBhdC0+bWZ0LT5kYXRhLT5iczsKKworICBpZiAocGFbOF0gPT0gMCkKKyAgICB7
CisgICAgICBpZiAob2ZzICsgbGVuID4gdTMyYXQgKHBhLCAweDEwKSkKKwlyZXR1cm4gZ3J1Yl9l
cnJvciAoR1JVQl9FUlJfQkFEX0ZTLCAicmVhZCBvdXQgb2YgcmFuZ2UiKTsKKyAgICAgIGdydWJf
bWVtY3B5IChkZXN0LCBwYSArIHUzMmF0IChwYSwgMHgxNCkgKyBvZnMsIGxlbik7CisgICAgICBy
ZXR1cm4gMDsKKyAgICB9CisKKyAgaWYgKHUxNmF0IChwYSwgMHhDKSAmIEZMQUdfQ09NUFJFU1NF
RCkKKyAgICBjdHgtPmZsYWdzIHw9IFJGX0NPTVA7CisgIGVsc2UKKyAgICBjdHgtPmZsYWdzICY9
IH5SRl9DT01QOworICBjdHgtPmN1cl9ydW4gPSBwYSArIHUxNmF0IChwYSwgMHgyMCk7CisKKyAg
aWYgKGN0eC0+ZmxhZ3MgJiBSRl9DT01QKQorICAgIHsKKyAgICAgIGlmICghY2FjaGVkKQorCXJl
dHVybiBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsICJhdHRyaWJ1dGUgY2FuXCd0IGJlIGNv
bXByZXNzZWQiKTsKKworICAgICAgaWYgKGF0LT5zYnVmKQorCXsKKwkgIGlmICgob2ZzICYgKH4o
Q09NX0xFTiAtIDEpKSkgPT0gYXQtPnNhdmVfcG9zKQorCSAgICB7CisJICAgICAgZ3J1Yl9kaXNr
X2FkZHJfdCBuOworCisJICAgICAgbiA9IENPTV9MRU4gLSAob2ZzIC0gYXQtPnNhdmVfcG9zKTsK
KwkgICAgICBpZiAobiA+IGxlbikKKwkJbiA9IGxlbjsKKworCSAgICAgIGdydWJfbWVtY3B5IChk
ZXN0LCBhdC0+c2J1ZiArIG9mcyAtIGF0LT5zYXZlX3Bvcywgbik7CisJICAgICAgaWYgKG4gPT0g
bGVuKQorCQlyZXR1cm4gMDsKKworCSAgICAgIGRlc3QgKz0gbjsKKwkgICAgICBsZW4gLT0gbjsK
KwkgICAgICBvZnMgKz0gbjsKKwkgICAgfQorCX0KKyAgICAgIGVsc2UKKwl7CisJICBhdC0+c2J1
ZiA9IGdydWJfbWFsbG9jIChDT01fTEVOKTsKKwkgIGlmIChhdC0+c2J1ZiA9PSBOVUxMKQorCSAg
ICByZXR1cm4gZ3J1Yl9lcnJubzsKKwkgIGF0LT5zYXZlX3BvcyA9IDE7CisJfQorCisgICAgICB2
Y24gPSBjdHgtPnRhcmdldF92Y24gPSAob2ZzID4+IENPTV9MT0dfTEVOKSAqIChDT01fU0VDIC8g
Y3R4LT5jb21wLnNwYyk7CisgICAgICBjdHgtPnRhcmdldF92Y24gJj0gfjB4RjsKKyAgICB9Cisg
IGVsc2UKKyAgICB2Y24gPSBjdHgtPnRhcmdldF92Y24gPSBncnViX2Rpdm1vZDY0IChvZnMgPj4g
QkxLX1NIUiwgY3R4LT5jb21wLnNwYywgMCk7CisKKyAgY3R4LT5uZXh0X3ZjbiA9IHUzMmF0IChw
YSwgMHgxMCk7CisgIGN0eC0+Y3Vycl9sY24gPSAwOworICB3aGlsZSAoY3R4LT5uZXh0X3ZjbiA8
PSBjdHgtPnRhcmdldF92Y24pCisgICAgeworICAgICAgaWYgKGdydWJfbnRmc19yZWFkX3J1bl9s
aXN0IChjdHgpKQorCXJldHVybiBncnViX2Vycm5vOworICAgIH0KKworICBpZiAoYXQtPmZsYWdz
ICYgQUZfR1BPUykKKyAgICB7CisgICAgICBncnViX2Rpc2tfYWRkcl90IHN0MCwgc3QxOworICAg
ICAgZ3J1Yl91aW50MzJfdCBtOworCisgICAgICBncnViX2Rpdm1vZDY0IChvZnMgPj4gQkxLX1NI
UiwgY3R4LT5jb21wLnNwYywgJm0pOworCisgICAgICBzdDAgPQorCShjdHgtPnRhcmdldF92Y24g
LSBjdHgtPmN1cnJfdmNuICsgY3R4LT5jdXJyX2xjbikgKiBjdHgtPmNvbXAuc3BjICsgbTsKKyAg
ICAgIHN0MSA9IHN0MCArIDE7CisgICAgICBpZiAoc3QxID09CisJICAoY3R4LT5uZXh0X3ZjbiAt
IGN0eC0+Y3Vycl92Y24gKyBjdHgtPmN1cnJfbGNuKSAqIGN0eC0+Y29tcC5zcGMpCisJeworCSAg
aWYgKGdydWJfbnRmc19yZWFkX3J1bl9saXN0IChjdHgpKQorCSAgICByZXR1cm4gZ3J1Yl9lcnJu
bzsKKwkgIHN0MSA9IGN0eC0+Y3Vycl9sY24gKiBjdHgtPmNvbXAuc3BjOworCX0KKyAgICAgIHYz
MmF0IChkZXN0LCAwKSA9IHN0MDsKKyAgICAgIHYzMmF0IChkZXN0LCA0KSA9IHN0MTsKKyAgICAg
IHJldHVybiAwOworICAgIH0KKworICBpZiAoIShjdHgtPmZsYWdzICYgUkZfQ09NUCkpCisgICAg
eworICAgICAgdW5zaWduZWQgaW50IHBvdzsKKworICAgICAgaWYgKCFncnViX2ZzaGVscF9sb2cy
Ymxrc2l6ZSAoY3R4LT5jb21wLnNwYywgJnBvdykpCisJZ3J1Yl9mc2hlbHBfcmVhZF9maWxlIChj
dHgtPmNvbXAuYnMsIChncnViX2ZzaGVscF9ub2RlX3QpIGN0eCwKKwkJCSAgICAgICByZWFkX2hv
b2ssIGNsb3N1cmUsIG9mcywgbGVuLCBkZXN0LAorCQkJICAgICAgIGdydWJfbnRmc19yZWFkX2Js
b2NrLCBvZnMgKyBsZW4sIHBvdyk7CisgICAgICByZXR1cm4gZ3J1Yl9lcnJubzsKKyAgICB9CisK
KyAgcmV0dXJuIChncnViX250ZnNjb21wX2Z1bmMpID8gZ3J1Yl9udGZzY29tcF9mdW5jIChhdCwg
ZGVzdCwgb2ZzLCBsZW4sIGN0eCwKKwkJCQkJCSAgICB2Y24pIDoKKyAgICBncnViX2Vycm9yIChH
UlVCX0VSUl9CQURfRlMsICJudGZzY29tcCBtb2R1bGUgbm90IGxvYWRlZCIpOworfQorCitzdGF0
aWMgZ3J1Yl9lcnJfdAorcmVhZF9hdHRyIChzdHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0LCBjaGFy
ICpkZXN0LCBncnViX2Rpc2tfYWRkcl90IG9mcywKKwkgICBncnViX3NpemVfdCBsZW4sIGludCBj
YWNoZWQsCisJICAgdm9pZCAoKnJlYWRfaG9vaykgKGdydWJfZGlza19hZGRyX3Qgc2VjdG9yLAor
CQkJICAgICAgdW5zaWduZWQgb2Zmc2V0LAorCQkJICAgICAgdW5zaWduZWQgbGVuZ3RoLAorCQkJ
ICAgICAgdm9pZCAqY2xvc3VyZSksCisJICAgdm9pZCAqY2xvc3VyZSkKK3sKKyAgREJHKCJyZWFk
IGF0dHIiKTsKKyAgCisgIGNoYXIgKnNhdmVfY3VyOworICB1bnNpZ25lZCBjaGFyIGF0dHI7Cisg
IGNoYXIgKnBwOworICBncnViX2Vycl90IHJldDsKKworICBzYXZlX2N1ciA9IGF0LT5hdHRyX2N1
cjsKKyAgYXQtPmF0dHJfbnh0ID0gYXQtPmF0dHJfY3VyOworICBhdHRyID0gKHVuc2lnbmVkIGNo
YXIpICphdC0+YXR0cl9ueHQ7CisgIGlmIChhdC0+ZmxhZ3MgJiBBRl9BTFNUKQorICAgIHsKKyAg
ICAgIGNoYXIgKnBhOworICAgICAgZ3J1Yl9kaXNrX2FkZHJfdCB2Y247CisKKyAgICAgIHZjbiA9
IGdydWJfZGl2bW9kNjQgKG9mcywgYXQtPm1mdC0+ZGF0YS0+c3BjIDw8IEJMS19TSFIsIDApOwor
ICAgICAgcGEgPSBhdC0+YXR0cl9ueHQgKyB1MTZhdCAoYXQtPmF0dHJfbnh0LCA0KTsKKyAgICAg
IHdoaWxlIChwYSA8IGF0LT5hdHRyX2VuZCkKKwl7CisJICBpZiAoKHVuc2lnbmVkIGNoYXIpICpw
YSAhPSBhdHRyKQorCSAgICBicmVhazsKKwkgIGlmICh1MzJhdCAocGEsIDgpID4gdmNuKQorCSAg
ICBicmVhazsKKwkgIGF0LT5hdHRyX254dCA9IHBhOworCSAgcGEgKz0gdTE2YXQgKHBhLCA0KTsK
Kwl9CisgICAgfQorICBwcCA9IGZpbmRfYXR0ciAoYXQsIGF0dHIpOworICBpZiAocHApCisgICAg
cmV0ID0gcmVhZF9kYXRhIChhdCwgcHAsIGRlc3QsIG9mcywgbGVuLCBjYWNoZWQsIHJlYWRfaG9v
aywgY2xvc3VyZSk7CisgIGVsc2UKKyAgICByZXQgPQorICAgICAgKGdydWJfZXJybm8pID8gZ3J1
Yl9lcnJubyA6IGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywKKwkJCQkJICAgICAgImF0dHJp
YnV0ZSBub3QgZm91bmQiKTsKKyAgYXQtPmF0dHJfY3VyID0gc2F2ZV9jdXI7CisgIHJldHVybiBy
ZXQ7Cit9CisKK3N0YXRpYyBncnViX2Vycl90CityZWFkX21mdCAoc3RydWN0IGdydWJfbnRmc19k
YXRhICpkYXRhLCBjaGFyICpidWYsIGdydWJfdWludDMyX3QgbWZ0bm8pCit7CisgIGlmIChyZWFk
X2F0dHIKKyAgICAgICgmZGF0YS0+bW1mdC5hdHRyLCBidWYsIG1mdG5vICogKChncnViX2Rpc2tf
YWRkcl90KSBkYXRhLT5tZnRfc2l6ZSkgPDwgQkxLX1NIUiwKKyAgICAgICBkYXRhLT5tZnRfc2l6
ZSA8PCBCTEtfU0hSLCAwLCAwLCAwKSkKKyAgICByZXR1cm4gZ3J1Yl9lcnJvciAoR1JVQl9FUlJf
QkFEX0ZTLCAiUmVhZCBNRlQgMHglWCBmYWlscyIsIG1mdG5vKTsKKyAgcmV0dXJuIGZpeHVwIChk
YXRhLCBidWYsIGRhdGEtPm1mdF9zaXplLCAoY2hhciopIkZJTEUiKTsKK30KKworc3RhdGljIGdy
dWJfZXJyX3QKK2luaXRfZmlsZSAoc3RydWN0IGdydWJfbnRmc19maWxlICptZnQsIGdydWJfdWlu
dDMyX3QgbWZ0bm8pCit7CisgIERCRygiaW5pdCBmaWxlIik7CisgIAorICB1bnNpZ25lZCBzaG9y
dCBmbGFnOworCisgIG1mdC0+aW5vZGVfcmVhZCA9IDE7CisKKyAgbWZ0LT5idWYgPSBncnViX21h
bGxvYyAobWZ0LT5kYXRhLT5tZnRfc2l6ZSA8PCBCTEtfU0hSKTsKKyAgaWYgKG1mdC0+YnVmID09
IE5VTEwpCisgICAgcmV0dXJuIGdydWJfZXJybm87CisKKyAgaWYgKHJlYWRfbWZ0IChtZnQtPmRh
dGEsIG1mdC0+YnVmLCBtZnRubykpCisgICAgcmV0dXJuIGdydWJfZXJybm87CisKKyAgZmxhZyA9
IHUxNmF0IChtZnQtPmJ1ZiwgMHgxNik7CisgIGlmICgoZmxhZyAmIDEpID09IDApCisgICAgcmV0
dXJuIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywgIk1GVCAweCVYIGlzIG5vdCBpbiB1c2Ui
LCBtZnRubyk7CisKKyAgaWYgKChmbGFnICYgMikgPT0gMCkKKyAgICB7CisgICAgICBjaGFyICpw
YTsKKworICAgICAgcGEgPSBsb2NhdGVfYXR0ciAoJm1mdC0+YXR0ciwgbWZ0LCBBVF9EQVRBKTsK
KyAgICAgIGlmIChwYSA9PSBOVUxMKQorCXJldHVybiBncnViX2Vycm9yIChHUlVCX0VSUl9CQURf
RlMsICJubyAkREFUQSBpbiBNRlQgMHglWCIsIG1mdG5vKTsKKworICAgICAgaWYgKCFwYVs4XSkK
KwltZnQtPnNpemUgPSB1MzJhdCAocGEsIDB4MTApOworICAgICAgZWxzZQorCW1mdC0+c2l6ZSA9
IHU2NGF0IChwYSwgMHgzMCk7CisKKyAgICAgIGlmICgobWZ0LT5hdHRyLmZsYWdzICYgQUZfQUxT
VCkgPT0gMCkKKwltZnQtPmF0dHIuYXR0cl9lbmQgPSAwOwkvKiAgRG9uJ3QganVtcCB0byBhdHRy
aWJ1dGUgbGlzdCAqLworICAgIH0KKyAgZWxzZQorICAgIGluaXRfYXR0ciAoJm1mdC0+YXR0ciwg
bWZ0KTsKKworICByZXR1cm4gMDsKK30KKworc3RhdGljIHZvaWQKK2ZyZWVfZmlsZSAoc3RydWN0
IGdydWJfbnRmc19maWxlICptZnQpCit7CisgIGZyZWVfYXR0ciAoJm1mdC0+YXR0cik7CisgIGdy
dWJfZnJlZSAobWZ0LT5idWYpOworfQorCitzdGF0aWMgaW50CitsaXN0X2ZpbGUgKHN0cnVjdCBn
cnViX250ZnNfZmlsZSAqZGlybywgY2hhciAqcG9zLAorCSAgIGludCAoKmhvb2spIChjb25zdCBj
aGFyICpmaWxlbmFtZSwKKwkJCWVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZmlsZXR5cGUsCisJ
CQlncnViX2ZzaGVscF9ub2RlX3Qgbm9kZSwKKwkJCXZvaWQgKmNsb3N1cmUpLAorCSAgIHZvaWQg
KmNsb3N1cmUpCit7CisgIGNoYXIgKm5wOworICBpbnQgbnM7CisKKyAgd2hpbGUgKDEpCisgICAg
eworICAgICAgY2hhciAqdXN0ciwgbmFtZXNwYWNlOworICAgICAgY2hhciogZ2JzdHI7CisKKyAg
ICAgIGlmIChwb3NbMHhDXSAmIDIpCQkvKiBlbmQgc2lnbmF0dXJlICovCisJYnJlYWs7CisKKyAg
ICAgIG5wID0gcG9zICsgMHg1MDsKKyAgICAgIG5zID0gKHVuc2lnbmVkIGNoYXIpICoobnArKyk7
CisgICAgICBuYW1lc3BhY2UgPSAqKG5wKyspOworCisgICAgICAvKgorICAgICAgICogIElnbm9y
ZSBmaWxlcyBpbiBET1MgbmFtZXNwYWNlLCBhcyB0aGV5IHdpbGwgcmVhcHBlYXIgYXMgV2luMzIK
KyAgICAgICAqICBuYW1lcy4KKyAgICAgICAqLworICAgICAgaWYgKChucykgJiYgKG5hbWVzcGFj
ZSAhPSAyKSkKKwl7CisJICBlbnVtIGdydWJfZnNoZWxwX2ZpbGV0eXBlIHR5cGU7CisJICBzdHJ1
Y3QgZ3J1Yl9udGZzX2ZpbGUgKmZkaXJvOworCisJICBpZiAodTE2YXQgKHBvcywgNCkpCisJICAg
IHsKKwkgICAgICBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsICI2NC1iaXQgTUZUIG51bWJl
ciIpOworCSAgICAgIHJldHVybiAwOworCSAgICB9CisKKwkgIHR5cGUgPQorCSAgICAodTMyYXQg
KHBvcywgMHg0OCkgJiBBVFRSX0RJUkVDVE9SWSkgPyBHUlVCX0ZTSEVMUF9ESVIgOgorCSAgICBH
UlVCX0ZTSEVMUF9SRUc7CisKKwkgIGZkaXJvID0gZ3J1Yl96YWxsb2MgKHNpemVvZiAoc3RydWN0
IGdydWJfbnRmc19maWxlKSk7CisJICBpZiAoIWZkaXJvKQorCSAgICByZXR1cm4gMDsKKworCSAg
ZmRpcm8tPmRhdGEgPSBkaXJvLT5kYXRhOworCSAgZmRpcm8tPmlubyA9IHUzMmF0IChwb3MsIDAp
OworCisJICB1c3RyID0gZ3J1Yl9tYWxsb2MgKG5zICogNCArIDEpOworCSAgZ2JzdHIgPSBncnVi
X21hbGxvYyhucyAqIDIgKyAxKTsKKwkgICAgaWYgKHVzdHIgPT0gTlVMTCB8fCBnYnN0ciA9PSBO
VUxMKQorCSAgICByZXR1cm4gMDsKKwkgICpncnViX3V0ZjE2X3RvX3V0ZjggKChncnViX3VpbnQ4
X3QgKikgdXN0ciwgKGdydWJfdWludDE2X3QgKikgbnAsCisJCQkgICAgICAgbnMpID0gJ1wwJzsK
KwkgIHUyZyh1c3RyLCBzdHJsZW4odXN0ciksIGdic3RyLCBucyAqIDIgKyAxKTsKKwkgIERCRygi
Z2JzdHI9JXMiLCBnYnN0cik7CisgICAgICAgICAgaWYgKG5hbWVzcGFjZSkKKyAgICAgICAgICAg
IHR5cGUgfD0gR1JVQl9GU0hFTFBfQ0FTRV9JTlNFTlNJVElWRTsKKworCSAgaWYgKGhvb2sgKGdi
c3RyLCB0eXBlLCBmZGlybywgY2xvc3VyZSkpCisJICAgIHsKKwkgICAgICBncnViX2ZyZWUgKHVz
dHIpOworCSAgICAgIGdydWJfZnJlZSAoZ2JzdHIpOworCSAgICAgIHJldHVybiAxOworCSAgICB9
CisJICBncnViX2ZyZWUoZ2JzdHIpOworCSAgZ3J1Yl9mcmVlICh1c3RyKTsKKwl9CisgICAgICBw
b3MgKz0gdTE2YXQgKHBvcywgOCk7CisgICAgfQorICByZXR1cm4gMDsKK30KKworc3RhdGljIGlu
dAorZ3J1Yl9udGZzX2l0ZXJhdGVfZGlyIChncnViX2ZzaGVscF9ub2RlX3QgZGlyLAorCQkgICAg
ICAgaW50ICgqaG9vaykgKGNvbnN0IGNoYXIgKmZpbGVuYW1lLAorCQkJCSAgICBlbnVtIGdydWJf
ZnNoZWxwX2ZpbGV0eXBlIGZpbGV0eXBlLAorCQkJCSAgICBncnViX2ZzaGVscF9ub2RlX3Qgbm9k
ZSwKKwkJCQkgICAgdm9pZCAqY2xvc3VyZSksCisJCSAgICAgICB2b2lkICpjbG9zdXJlKQorewor
ICB1bnNpZ25lZCBjaGFyICpiaXRtYXA7CisgIHN0cnVjdCBncnViX250ZnNfYXR0ciBhdHRyLCAq
YXQ7CisgIGNoYXIgKmN1cl9wb3MsICppbmR4LCAqYm1wOworICBpbnQgcmV0ID0gMDsKKyAgZ3J1
Yl9zaXplX3QgYml0bWFwX2xlbjsKKyAgc3RydWN0IGdydWJfbnRmc19maWxlICptZnQ7CisKKyAg
bWZ0ID0gKHN0cnVjdCBncnViX250ZnNfZmlsZSAqKSBkaXI7CisKKyAgaWYgKCFtZnQtPmlub2Rl
X3JlYWQpCisgICAgeworICAgICAgaWYgKGluaXRfZmlsZSAobWZ0LCBtZnQtPmlubykpCisJcmV0
dXJuIDA7CisgICAgfQorCisgIGluZHggPSBOVUxMOworICBibXAgPSBOVUxMOworCisgIGF0ID0g
JmF0dHI7CisgIGluaXRfYXR0ciAoYXQsIG1mdCk7CisgIHdoaWxlICgxKQorICAgIHsKKyAgICAg
IGlmICgoY3VyX3BvcyA9IGZpbmRfYXR0ciAoYXQsIEFUX0lOREVYX1JPT1QpKSA9PSBOVUxMKQor
CXsKKwkgIGdydWJfZXJyb3IgKEdSVUJfRVJSX0JBRF9GUywgIm5vICRJTkRFWF9ST09UIik7CisJ
ICBnb3RvIGRvbmU7CisJfQorCisgICAgICAvKiBSZXNpZGVudCwgTmFtZWxlbj00LCBPZmZzZXQ9
MHgxOCwgRmxhZ3M9MHgwMCwgTmFtZT0iJEkzMCIgKi8KKyAgICAgIGlmICgodTMyYXQgKGN1cl9w
b3MsIDgpICE9IDB4MTgwNDAwKSB8fAorCSAgKHUzMmF0IChjdXJfcG9zLCAweDE4KSAhPSAweDQ5
MDAyNCkgfHwKKwkgICh1MzJhdCAoY3VyX3BvcywgMHgxQykgIT0gMHgzMDAwMzMpKQorCWNvbnRp
bnVlOworICAgICAgY3VyX3BvcyArPSB1MTZhdCAoY3VyX3BvcywgMHgxNCk7CisgICAgICBpZiAo
KmN1cl9wb3MgIT0gMHgzMCkJLyogTm90IGZpbGVuYW1lIGluZGV4ICovCisJY29udGludWU7Cisg
ICAgICBicmVhazsKKyAgICB9CisKKyAgY3VyX3BvcyArPSAweDEwOwkJLyogU2tpcCBpbmRleCBy
b290ICovCisgIHJldCA9IGxpc3RfZmlsZSAobWZ0LCBjdXJfcG9zICsgdTE2YXQgKGN1cl9wb3Ms
IDApLCBob29rLCBjbG9zdXJlKTsKKyAgaWYgKHJldCkKKyAgICBnb3RvIGRvbmU7CisgICAgCisK
KyAgYml0bWFwID0gTlVMTDsKKyAgYml0bWFwX2xlbiA9IDA7CisgIGZyZWVfYXR0ciAoYXQpOwor
ICBpbml0X2F0dHIgKGF0LCBtZnQpOworICB3aGlsZSAoKGN1cl9wb3MgPSBmaW5kX2F0dHIgKGF0
LCBBVF9CSVRNQVApKSAhPSBOVUxMKQorICAgIHsKKyAgICAgIGludCBvZnM7CisKKyAgICAgIG9m
cyA9ICh1bnNpZ25lZCBjaGFyKSBjdXJfcG9zWzB4QV07CisgICAgICAvKiBOYW1lbGVuPTQsIE5h
bWU9IiRJMzAiICovCisgICAgICBpZiAoKGN1cl9wb3NbOV0gPT0gNCkgJiYKKwkgICh1MzJhdCAo
Y3VyX3Bvcywgb2ZzKSA9PSAweDQ5MDAyNCkgJiYKKwkgICh1MzJhdCAoY3VyX3Bvcywgb2ZzICsg
NCkgPT0gMHgzMDAwMzMpKQorCXsKKyAgICAgICAgICBpbnQgaXNfcmVzaWRlbnQgPSAoY3VyX3Bv
c1s4XSA9PSAwKTsKKworICAgICAgICAgIGJpdG1hcF9sZW4gPSAoKGlzX3Jlc2lkZW50KSA/IHUz
MmF0IChjdXJfcG9zLCAweDEwKSA6CisgICAgICAgICAgICAgICAgICAgICAgICB1MzJhdCAoY3Vy
X3BvcywgMHgyOCkpOworCisgICAgICAgICAgYm1wID0gZ3J1Yl9tYWxsb2MgKGJpdG1hcF9sZW4p
OworICAgICAgICAgIGlmIChibXAgPT0gTlVMTCkKKyAgICAgICAgICAgIGdvdG8gZG9uZTsKKwor
CSAgaWYgKGlzX3Jlc2lkZW50KQorCSAgICB7CisgICAgICAgICAgICAgIGdydWJfbWVtY3B5IChi
bXAsIChjaGFyICopIChjdXJfcG9zICsgdTE2YXQgKGN1cl9wb3MsIDB4MTQpKSwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGJpdG1hcF9sZW4pOworCSAgICB9CisgICAgICAgICAgZWxzZQor
ICAgICAgICAgICAgeworICAgICAgICAgICAgICBpZiAocmVhZF9kYXRhIChhdCwgY3VyX3Bvcywg
Ym1wLCAwLCBiaXRtYXBfbGVuLCAwLCAwLCAwKSkKKyAgICAgICAgICAgICAgICB7CisgICAgICAg
ICAgICAgICAgICBncnViX2Vycm9yIChHUlVCX0VSUl9CQURfRlMsCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAiZmFpbHMgdG8gcmVhZCBub24tcmVzaWRlbnQgJEJJVE1BUCIpOworICAg
ICAgICAgICAgICAgICAgZ290byBkb25lOworICAgICAgICAgICAgICAgIH0KKyAgICAgICAgICAg
ICAgYml0bWFwX2xlbiA9IHUzMmF0IChjdXJfcG9zLCAweDMwKTsKKyAgICAgICAgICAgIH0KKwor
ICAgICAgICAgIGJpdG1hcCA9ICh1bnNpZ25lZCBjaGFyICopIGJtcDsKKwkgIGJyZWFrOworCX0K
KyAgICB9CisKKyAgZnJlZV9hdHRyIChhdCk7CisgIGN1cl9wb3MgPSBsb2NhdGVfYXR0ciAoYXQs
IG1mdCwgQVRfSU5ERVhfQUxMT0NBVElPTik7CisgIHdoaWxlIChjdXJfcG9zICE9IE5VTEwpCisg
ICAgeworICAgICAgLyogTm9uLXJlc2lkZW50LCBOYW1lbGVuPTQsIE9mZnNldD0weDQwLCBGbGFn
cz0wLCBOYW1lPSIkSTMwIiAqLworICAgICAgaWYgKCh1MzJhdCAoY3VyX3BvcywgOCkgPT0gMHg0
MDA0MDEpICYmCisJICAodTMyYXQgKGN1cl9wb3MsIDB4NDApID09IDB4NDkwMDI0KSAmJgorCSAg
KHUzMmF0IChjdXJfcG9zLCAweDQ0KSA9PSAweDMwMDAzMykpCisJYnJlYWs7CisgICAgICBjdXJf
cG9zID0gZmluZF9hdHRyIChhdCwgQVRfSU5ERVhfQUxMT0NBVElPTik7CisgICAgfQorCisgIGlm
ICgoIWN1cl9wb3MpICYmIChiaXRtYXApKQorICAgIHsKKyAgICAgIGdydWJfZXJyb3IgKEdSVUJf
RVJSX0JBRF9GUywgIiRCSVRNQVAgd2l0aG91dCAkSU5ERVhfQUxMT0NBVElPTiIpOworICAgICAg
Z290byBkb25lOworICAgIH0KKworICBpZiAoYml0bWFwKQorICAgIHsKKyAgICAgIGdydWJfZGlz
a19hZGRyX3QgdiwgaTsKKworICAgICAgaW5keCA9IGdydWJfbWFsbG9jIChtZnQtPmRhdGEtPmlk
eF9zaXplIDw8IEJMS19TSFIpOworICAgICAgaWYgKGluZHggPT0gTlVMTCkKKwlnb3RvIGRvbmU7
CisKKyAgICAgIHYgPSAxOworICAgICAgZm9yIChpID0gMDsgaSA8IChncnViX2Rpc2tfYWRkcl90
KWJpdG1hcF9sZW4gKiA4OyBpKyspCisJeworCSAgaWYgKCpiaXRtYXAgJiB2KQorCSAgICB7CisJ
ICAgICAgaWYgKChyZWFkX2F0dHIKKwkJICAgKGF0LCBpbmR4LCBpICogKG1mdC0+ZGF0YS0+aWR4
X3NpemUgPDwgQkxLX1NIUiksCisJCSAgICAobWZ0LT5kYXRhLT5pZHhfc2l6ZSA8PCBCTEtfU0hS
KSwgMCwgMCwgMCkpCisJCSAgfHwgKGZpeHVwIChtZnQtPmRhdGEsIGluZHgsIG1mdC0+ZGF0YS0+
aWR4X3NpemUsIChjaGFyKikiSU5EWCIpKSkKKwkJZ290byBkb25lOworCSAgICAgIHJldCA9IGxp
c3RfZmlsZSAobWZ0LCAmaW5keFsweDE4ICsgdTE2YXQgKGluZHgsIDB4MTgpXSwgaG9vaywKKwkJ
CSAgICAgICBjbG9zdXJlKTsKKwkgICAgICBpZiAocmV0KQorCQlnb3RvIGRvbmU7CisJICAgIH0K
KwkgIHYgPDw9IDE7CisJICBpZiAodiA+PSAweDEwMCkKKwkgICAgeworCSAgICAgIHYgPSAxOwor
CSAgICAgIGJpdG1hcCsrOworCSAgICB9CisJfQorICAgIH0KKworZG9uZToKKyAgZnJlZV9hdHRy
IChhdCk7CisgIGdydWJfZnJlZSAoaW5keCk7CisgIGdydWJfZnJlZSAoYm1wKTsKKworICByZXR1
cm4gcmV0OworfQorCisKK3N0cnVjdCBncnViX250ZnNfZGF0YSAqCitncnViX250ZnNfbW91bnQg
KEJsb2NrRHJpdmVyU3RhdGUqIGJzLCBncnViX3VpbnQzMl90IHBhcnRfb2ZmX3NlY3RvcikKK3sK
KyAgc3RydWN0IGdydWJfbnRmc19icGIgYnBiOworICBzdHJ1Y3QgZ3J1Yl9udGZzX2RhdGEgKmRh
dGEgPSAwOworICBncnViX29mZl90IG9mZl9ieXRlcyA9IChncnViX29mZl90KXBhcnRfb2ZmX3Nl
Y3RvciA8PCBCTEtfU0hSOyAKKyAgCisgIGlmICghYnMpCisgICAgZ290byBmYWlsOworCisgIGRh
dGEgPSAoc3RydWN0IGdydWJfbnRmc19kYXRhICopIGdydWJfemFsbG9jIChzaXplb2YgKCpkYXRh
KSk7CisgIGlmICghZGF0YSkKKyAgICBnb3RvIGZhaWw7CisKKyAgZGF0YS0+YnMgPSBiczsKKwor
ICAvKiBSZWFkIHRoZSBCUEIuICAqLworICBpZiAoYmRydl9wcmVhZCAoYnMsIG9mZl9ieXRlcywg
JmJwYiwgc2l6ZW9mIChicGIpKSAhPSBzaXplb2YoYnBiKSkKKyAgICB7CisgICAgICBEQkcoInJl
YWQgYnBiIGVyciEiKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGlmIChncnViX21lbWNt
cCAoKGNoYXIgKikgJmJwYi5vZW1fbmFtZSwgIk5URlMiLCA0KSkKKyAgICB7CisgICAgICBEQkco
ImJwYi5vZW1fbmFtZT0lcywgbm90IG50ZnMiLCBicGIub2VtX25hbWUpOworICAgICAgZ290byBm
YWlsOworICAgIH0KKyAgZGF0YS0+YmxvY2tzaXplID0gZ3J1Yl9sZV90b19jcHUxNiAoYnBiLmJ5
dGVzX3Blcl9zZWN0b3IpOworICBkYXRhLT5zcGMgPSBicGIuc2VjdG9yc19wZXJfY2x1c3RlciAq
IChkYXRhLT5ibG9ja3NpemUgPj4gQkxLX1NIUik7CisKKyAgaWYgKGJwYi5jbHVzdGVyc19wZXJf
bWZ0ID4gMCkKKyAgICBkYXRhLT5tZnRfc2l6ZSA9IGRhdGEtPnNwYyAqIGJwYi5jbHVzdGVyc19w
ZXJfbWZ0OworICBlbHNlCisgICAgZGF0YS0+bWZ0X3NpemUgPSAxIDw8ICgtYnBiLmNsdXN0ZXJz
X3Blcl9tZnQgLSBCTEtfU0hSKTsKKworICBpZiAoYnBiLmNsdXN0ZXJzX3Blcl9pbmRleCA+IDAp
CisgICAgZGF0YS0+aWR4X3NpemUgPSBkYXRhLT5zcGMgKiBicGIuY2x1c3RlcnNfcGVyX2luZGV4
OworICBlbHNlCisgICAgZGF0YS0+aWR4X3NpemUgPSAxIDw8ICgtYnBiLmNsdXN0ZXJzX3Blcl9p
bmRleCAtIEJMS19TSFIpOworCisgIGRhdGEtPm1mdF9zdGFydCA9IGdydWJfbGVfdG9fY3B1NjQg
KGJwYi5tZnRfbGNuKSAqIGRhdGEtPnNwYzsKKworICBpZiAoKGRhdGEtPm1mdF9zaXplID4gTUFY
X01GVCkgfHwgKGRhdGEtPmlkeF9zaXplID4gTUFYX0lEWCkpCisgICAgZ290byBmYWlsOworCisg
IGRhdGEtPm1tZnQuZGF0YSA9IGRhdGE7CisgIGRhdGEtPmNtZnQuZGF0YSA9IGRhdGE7CisKKyAg
ZGF0YS0+bW1mdC5idWYgPSBncnViX21hbGxvYyAoZGF0YS0+bWZ0X3NpemUgPDwgQkxLX1NIUik7
CisgIGlmICghZGF0YS0+bW1mdC5idWYpCisgICAgZ290byBmYWlsOworCisgIHNfYnBiX2J5dGVz
X3Blcl9zZWN0b3IgPSAoYnBiLmJ5dGVzX3Blcl9zZWN0b3IpOworICBzX3BhcnRfb2ZmX3NlY3Rv
ciA9IHBhcnRfb2ZmX3NlY3RvcjsKKyAgREJHKCJicGIuYnl0ZXNfcGVyX3NlY3Rvcj1ibG9ja3Np
emU9JXVcbiIKKyAgICAgICJicGIuc2VjdG9yX3Blcl9jbHVzdGVyPSV1XG4iCisgICAgICAiZGF0
YS0+YmxvY2tzaXplPSV1XG4iCisgICAgICAiZGF0YS0+c3BjPSV1XG4iCisgICAgICAiYnBiLmNs
dXN0ZXJzX3Blcl9tZnQ9JWRcbiIKKyAgICAgICJkYXRhLT5tZnRfc2l6ZT0ldVxuIgorICAgICAg
ImJwYi50b3RhbF9zZWN0b3JzPSV6ZFxuIgorICAgICAgImJwYi5tZnRfbGNuPSV6ZFxuIgorICAg
ICAgImRhdGEtPm1mdF9zdGFydD0ldVxuIiwKKyAgICAgIChicGIuYnl0ZXNfcGVyX3NlY3Rvciks
IChicGIuc2VjdG9yc19wZXJfY2x1c3RlciksCisgICAgICAoZGF0YS0+YmxvY2tzaXplKSwgKGRh
dGEtPnNwYyksCisgICAgICAoYnBiLmNsdXN0ZXJzX3Blcl9tZnQpLCAoZGF0YS0+bWZ0X3NpemUp
LAorICAgICAgKGJwYi5udW1fdG90YWxfc2VjdG9ycyksCisgICAgICAoZ3J1Yl9sZV90b19jcHU2
NChicGIubWZ0X2xjbikpLCAoZGF0YS0+bWZ0X3N0YXJ0KSk7CisgIAorICBvZmZfYnl0ZXMgPSAo
Z3J1Yl9vZmZfdClkYXRhLT5tZnRfc3RhcnQgPDwgQkxLX1NIUjsKKyAgZ3J1Yl91aW50MzJfdCBs
ZW4gPSBkYXRhLT5tZnRfc2l6ZSA8PCBCTEtfU0hSOworICBpZiAoYmRydl9wcmVhZF9mcm9tX2xj
bl9vZl92b2x1bShicywgb2ZmX2J5dGVzLAorCQkgZGF0YS0+bW1mdC5idWYsIGxlbikgIT0gbGVu
KQorICAgIHsKKyAgICAgIERCRygicmVhZCBtbWZ0IGVycm9yISIpOworICAgICAgZ290byBmYWls
OworICAgIH0KKyAgZGF0YS0+dXVpZCA9IGdydWJfbGVfdG9fY3B1NjQgKGJwYi5udW1fc2VyaWFs
KTsKKworICBpZiAoZml4dXAgKGRhdGEsIGRhdGEtPm1tZnQuYnVmLCBkYXRhLT5tZnRfc2l6ZSwg
KGNoYXIqKSJGSUxFIikpCisgICAgZ290byBmYWlsOworCisgIGlmICghbG9jYXRlX2F0dHIgKCZk
YXRhLT5tbWZ0LmF0dHIsICZkYXRhLT5tbWZ0LCBBVF9EQVRBKSkKKyAgICB7CisgICAgICBEQkco
ImxvY2F0ZV9hdHRyIEFUX0RBVEEgaW4gbW1mdCBmYWlsZWQhICIpOworICAgICAgZ290byBmYWls
OworICAgIH0KKyAgaWYgKGluaXRfZmlsZSAoJmRhdGEtPmNtZnQsIEZJTEVfUk9PVCkpCisgICAg
eworICAgICAgREJHKCJpbml0X2ZpbGUgRklMRV9ST09UIGZhaWxlZCEiKTsKKyAgICAgIGdvdG8g
ZmFpbDsKKyAgICB9CisgIHJldHVybiBkYXRhOworCitmYWlsOgorICBncnViX2Vycm9yIChHUlVC
X0VSUl9CQURfRlMsICJub3QgYW4gbnRmcyBmaWxlc3lzdGVtIik7CisKKyAgaWYgKGRhdGEpCisg
ICAgeworICAgICAgZnJlZV9maWxlICgmZGF0YS0+bW1mdCk7CisgICAgICBmcmVlX2ZpbGUgKCZk
YXRhLT5jbWZ0KTsKKyAgICAgIGdydWJfZnJlZSAoZGF0YSk7CisgICAgfQorICByZXR1cm4gMDsK
K30KKworc3RydWN0IGdydWJfbnRmc19kaXJfY2xvc3VyZQoreworICBpbnQgKCpob29rKSAoY29u
c3QgY2hhciAqZmlsZW5hbWUsCisJICAgICAgIGNvbnN0IHN0cnVjdCBncnViX2Rpcmhvb2tfaW5m
byAqaW5mbywKKwkgICAgICAgdm9pZCAqY2xvc3VyZSk7CisgIHZvaWQgKmNsb3N1cmU7CisgIHN0
cnVjdCBncnViX250ZnNfZmlsZSBmaWxlOworfTsKKworc3RhdGljIGludAoraXRlcmF0ZSAoY29u
c3QgY2hhciAqZmlsZW5hbWUsCisJIGVudW0gZ3J1Yl9mc2hlbHBfZmlsZXR5cGUgZmlsZXR5cGUs
CisJIGdydWJfZnNoZWxwX25vZGVfdCBub2RlLAorCSB2b2lkICpjbG9zdXJlKQoreworICBzdHJ1
Y3QgZ3J1Yl9udGZzX2Rpcl9jbG9zdXJlICpjID0gY2xvc3VyZTsKKyAgc3RydWN0IGdydWJfZGly
aG9va19pbmZvIGluZm87CisgIGdydWJfbWVtc2V0ICgmaW5mbywgMCwgc2l6ZW9mIChpbmZvKSk7
CisgIGluZm8uZGlyID0gKChmaWxldHlwZSAmIEdSVUJfRlNIRUxQX1RZUEVfTUFTSykgPT0gR1JV
Ql9GU0hFTFBfRElSKTsKKyAgYy0+ZmlsZS5kYXRhID0gbm9kZS0+ZGF0YTsKKyAgYy0+ZmlsZS5p
bm8gPSBub2RlLT5pbm87CisgIGdydWJfZnJlZSAobm9kZSk7CisgIAorICAKKyAgaWYoaW5pdF9m
aWxlKCZjLT5maWxlLCBjLT5maWxlLmlubykpCisgICAgeworICAgICAgZXJyeCgxLCAiaXRlcmF0
ZSgpOiBpbml0X2ZpbGUgZXJyb3IhXG4iKTsKKyAgICB9CisgIGVsc2UKKyAgICB7CisgICAgICBE
QkcoIi4uLi4uLmN1cnJlbnQgZmlsZSBtZnQgcmVhZCBzdWNjZXNzZnVsbHkhXG4iKTsKKyAgICB9
CisgIGNoYXIgKnBhID0gbG9jYXRlX2F0dHIoJmMtPmZpbGUuYXR0ciwKKwkJCSAmYy0+ZmlsZSwg
QVRfU1RBTkRBUkRfSU5GT1JNQVRJT04pOworICBpZihOVUxMID09IHBhKQorICAgIHsKKyAgICAg
IGVycngoMiwgIm5vICRTVEFOREFSRF9JTkZPUk1BVElPTiBpbiBNRlQgMHgleFxuIiwgYy0+Zmls
ZS5pbm8pOworICAgIH0KKyAgZ3J1Yl91aW50NjRfdCBkYXRlPSAwOworICBpZihyZWFkX2F0dHIo
JmMtPmZpbGUuYXR0ciwgKGNoYXIqKSZkYXRlLCAwLCA4LCAxLCBOVUxMLCBOVUxMKSkKKyAgICB7
CisgICAgICBlcnJ4KDMsICJyZWFkIGRhdGUgZXJyb3JcbiIpOworICAgIH0KKyAgZWxzZQorICAg
IHsKKworICAgICAgaW5mby50aW1lX250ZnMgPSBkYXRlOworICAgICAgREJHKCIuLi4uLi5kYXRl
OiAlenVcbiIsIGRhdGUpOworICAgIH0KKyAgREJHKCIuLi4uLi5zaXplIG9mIFwnJXNcJzogJXp1
XG4iLCBmaWxlbmFtZSwgKGMtPmZpbGUuc2l6ZSkpOworICBpbmZvLmZpbGVzaXplX250ZnMgPSBj
LT5maWxlLnNpemU7CisgIGZyZWVfZmlsZSgmYy0+ZmlsZSk7CisgIHJldHVybiBjLT5ob29rIChm
aWxlbmFtZSwgJmluZm8sIGMtPmNsb3N1cmUpOworfQorCisKKyNpbmNsdWRlICJmcy10aW1lLmgi
CitzdGF0aWMgIGludCBmaW5kX3RoZW5fbHNfaG9vayhjb25zdCBjaGFyICpmaWxlbmFtZSwKKwkJ
CSAgIGNvbnN0IHN0cnVjdCBncnViX2Rpcmhvb2tfaW5mbyAqaW5mbywgdm9pZCAqY2xvc3VyZSkK
K3sKKyAgc3RydWN0IGxzX2N0cmwqIGN0cmwgPSAoc3RydWN0IGxzX2N0cmwqKWNsb3N1cmU7Cisg
IERCRygiZGV0YWlsPSVkIiwgY3RybC0+ZGV0YWlsKTsKKyAgaWYoJyQnID09ICpmaWxlbmFtZSkK
KyAgICBnb3RvIGRvbmU7CisKKyAgcHJpbnRmKCIlcyIsIGZpbGVuYW1lKTsKKyAgaWYoIWN0cmwt
PmRldGFpbCkKKyAgICB7CisgICAgICBwcmludGYoIlxuIik7CisgICAgICBnb3RvIGRvbmU7Cisg
ICAgfQorICBlbHNlCisgICAgeworICAgICAgcHJpbnRmKCJcdCIpOworICAgIH0KKyAgCisgIGNo
YXIgYnVmZmVyWzUwXT17fTsKKyAgc3RydWN0IHRtIHRtMDsgIAorICBzdHJ1Y3QgdG0qIHB0bT0g
bnRmc191dGMybG9jYWwoaW5mby0+dGltZV9udGZzLCAmdG0wKTsKKyAgaWYoTlVMTCA9PSBwdG0p
IGVycngoMSwgIm50ZnNfdXRjMmxvY2FsIGZhaWxcbiIpOworICAgICAgICAgICAKKyAgcHJpbnRm
KCIlenVcdCIsIGluZm8tPmZpbGVzaXplX250ZnMpOworICBwcmludGYoIiVzXHQiLCAoaW5mby0+
ZGlyID8gImRpciIgOiAiZmlsZSIpKTsKKyAgc3RyZnRpbWUoYnVmZmVyLCA1MCwgIiVZLSVtLSVk
XHQlSDolTTolUyIsIHB0bSk7CisgIHByaW50ZigiJXMiLCBidWZmZXIpOworICAvL3ByaW50Zigi
JWQtJWQtJWRcdCIsIHB0bS0+dG1feWVhciwgcHRtLT50bV9tb24sIHB0bS0+dG1fbWRheSk7Cisg
IC8vcHJpbnRmKCIlZDolZDolZFx0IiwgcHRtLT50bV9ob3VyLCBwdG0tPnRtX21pbiwgcHRtLT50
bV9zZWMpOworICBwcmludGYoIlxuIik7CisKKyBkb25lOgorICByZXR1cm4gMDsgIC8vINfu1tW3
tbvYuPhpdGVyYXRlCit9CisKKworCitncnViX2Vycl90CitncnViX250ZnNfbHMgKGdydWJfZmls
ZV90IGZpbGUsIGNvbnN0IGNoYXIgKnBhdGgsCisJICAgICAgIGludCAoKmhvb2spIChjb25zdCBj
aGFyICpmaWxlbmFtZSwKKwkJCSAgICBjb25zdCBzdHJ1Y3QgZ3J1Yl9kaXJob29rX2luZm8gKmlu
Zm8sCisJCQkgICAgdm9pZCAqY2xvc3VyZSksCisJICAgICAgIHZvaWQgKmNsb3N1cmUpCit7Cisg
IHN0cnVjdCBncnViX250ZnNfZGF0YSAqZGF0YSA9IDA7CisgIHN0cnVjdCBncnViX2ZzaGVscF9u
b2RlICpmZGlybyA9IDA7CisgIHN0cnVjdCBncnViX250ZnNfZGlyX2Nsb3N1cmUgYyA9IHswfTsK
KworICBkYXRhID0gZ3J1Yl9udGZzX21vdW50IChmaWxlLT5icywgZmlsZS0+cGFydF9vZmZfc2Vj
dG9yKTsKKyAgaWYgKCFkYXRhKQorICAgIHsKKyAgICAgIERCRygibW91bnQgZmFpbGVkISIpOwor
ICAgICAgZ290byBmYWlsOworICAgIH0KKyAgZ3J1Yl9mc2hlbHBfZmluZF9maWxlIChwYXRoLCAm
ZGF0YS0+Y21mdCwgJmZkaXJvLCBncnViX250ZnNfaXRlcmF0ZV9kaXIsIDAsCisJCQkgMCwgR1JV
Ql9GU0hFTFBfRElSKTsKKworICAKKyAgaWYgKGdydWJfZXJybm8pCisgICAgZ290byBmYWlsOwor
CisgIGMuaG9vayA9IChob29rID8gaG9vayA6IGZpbmRfdGhlbl9sc19ob29rKTsKKyAgYy5jbG9z
dXJlID0gY2xvc3VyZTsKKyAgZ3J1Yl9udGZzX2l0ZXJhdGVfZGlyIChmZGlybywgaXRlcmF0ZSwg
JmMpOworCitmYWlsOgorICBpZiAoKGZkaXJvKSAmJiAoZmRpcm8gIT0gJmRhdGEtPmNtZnQpKQor
ICAgIHsKKyAgICAgIGZyZWVfZmlsZSAoZmRpcm8pOworICAgICAgZ3J1Yl9mcmVlIChmZGlybyk7
CisgICAgfQorICBpZiAoZGF0YSkKKyAgICB7CisgICAgICBmcmVlX2ZpbGUgKCZkYXRhLT5tbWZ0
KTsKKyAgICAgIGZyZWVfZmlsZSAoJmRhdGEtPmNtZnQpOworICAgICAgZ3J1Yl9mcmVlIChkYXRh
KTsKKyAgICB9CisKKworICByZXR1cm4gZ3J1Yl9lcnJubzsKK30KKworZ3J1Yl9lcnJfdAorZ3J1
Yl9udGZzX29wZW4gKGdydWJfZmlsZV90IGZpbGUsIGNvbnN0IGNoYXIgKm5hbWUpCit7CisgIHN0
cnVjdCBncnViX250ZnNfZGF0YSAqZGF0YSA9IDA7CisgIHN0cnVjdCBncnViX2ZzaGVscF9ub2Rl
ICptZnQgPSAwOworCisKKyAgZGF0YSA9IGdydWJfbnRmc19tb3VudCAoZmlsZS0+YnMsIGZpbGUt
PnBhcnRfb2ZmX3NlY3Rvcik7CisgIGlmICghZGF0YSkKKyAgICB7CisgICAgICBEQkcoIm1vdW50
IGZhaWxlZCEiKTsKKyAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgIGdydWJfZnNoZWxwX2ZpbmRf
ZmlsZSAobmFtZSwgJmRhdGEtPmNtZnQsICZtZnQsIGdydWJfbnRmc19pdGVyYXRlX2RpciwgMCwK
KwkJCSAwLCBHUlVCX0ZTSEVMUF9SRUcpOworCisgIGlmIChncnViX2Vycm5vKQorICAgIGdvdG8g
ZmFpbDsKKworICBpZiAobWZ0ICE9ICZkYXRhLT5jbWZ0KQorICAgIHsKKyAgICAgIGZyZWVfZmls
ZSAoJmRhdGEtPmNtZnQpOworICAgICAgZ3J1Yl9tZW1jcHkgKCZkYXRhLT5jbWZ0LCBtZnQsIHNp
emVvZiAoKm1mdCkpOworICAgICAgZ3J1Yl9mcmVlIChtZnQpOworICAgICAgaWYgKCFkYXRhLT5j
bWZ0Lmlub2RlX3JlYWQpCisJeworCSAgaWYgKGluaXRfZmlsZSAoJmRhdGEtPmNtZnQsIGRhdGEt
PmNtZnQuaW5vKSkKKwkgICAgZ290byBmYWlsOworCX0KKyAgICB9CisKKyAgZmlsZS0+c2l6ZSA9
IGRhdGEtPmNtZnQuc2l6ZTsKKyAgZmlsZS0+ZGF0YSA9IGRhdGE7CisgIGZpbGUtPm9mZnNldCA9
IDA7CisKKyAgcmV0dXJuIDA7CisKK2ZhaWw6CisgIGlmIChkYXRhKQorICAgIHsKKyAgICAgIGZy
ZWVfZmlsZSAoJmRhdGEtPm1tZnQpOworICAgICAgZnJlZV9maWxlICgmZGF0YS0+Y21mdCk7Cisg
ICAgICBncnViX2ZyZWUgKGRhdGEpOworICAgIH0KKworCisgIHJldHVybiBncnViX2Vycm5vOwor
fQorCitncnViX3NzaXplX3QKK2dydWJfbnRmc19yZWFkIChncnViX2ZpbGVfdCBmaWxlLCBncnVi
X29mZl90IG9mZnNldCwgZ3J1Yl9zaXplX3QgbGVuLCBjaGFyICpidWYpCit7CisgIHN0cnVjdCBn
cnViX250ZnNfZmlsZSAqbWZ0OworCisgIG1mdCA9ICYoKHN0cnVjdCBncnViX250ZnNfZGF0YSAq
KSBmaWxlLT5kYXRhKS0+Y21mdDsKKyAgaWYgKGZpbGUtPnJlYWRfaG9vaykKKyAgICBtZnQtPmF0
dHIuc2F2ZV9wb3MgPSAxOworICAKKyAgcmVhZF9hdHRyICgmbWZ0LT5hdHRyLCBidWYsIG9mZnNl
dCwgbGVuLCAxLAorCSAgICAgZmlsZS0+cmVhZF9ob29rLCBmaWxlLT5jbG9zdXJlKTsKKyAgCisg
IHJldHVybiAoZ3J1Yl9lcnJubykgPyAwIDogbGVuOworfQorCitncnViX2Vycl90CitncnViX250
ZnNfY2xvc2UgKGdydWJfZmlsZV90IGZpbGUpCit7CisgIHN0cnVjdCBncnViX250ZnNfZGF0YSAq
ZGF0YTsKKworICBkYXRhID0gZmlsZS0+ZGF0YTsKKworICBpZiAoZGF0YSkKKyAgICB7CisgICAg
ICBmcmVlX2ZpbGUgKCZkYXRhLT5tbWZ0KTsKKyAgICAgIGZyZWVfZmlsZSAoJmRhdGEtPmNtZnQp
OworICAgICAgZ3J1Yl9mcmVlIChkYXRhKTsKKyAgICB9CisKKworICByZXR1cm4gZ3J1Yl9lcnJu
bzsKK30KKworCisKZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29s
cy9pb2VtdS1xZW11LXhlbi9udGZzLmggeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4v
bnRmcy5oCi0tLSB4ZW4tNC4xLjItYS90b29scy9pb2VtdS1xZW11LXhlbi9udGZzLmgJMTk3MC0w
MS0wMSAwNzowMDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2lvZW11
LXFlbXUteGVuL250ZnMuaAkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNDkzNjcwMSArMDgwMApAQCAt
MCwwICsxLDIyNyBAQAorLyogbnRmcy5oIC0gaGVhZGVyIGZvciB0aGUgTlRGUyBmaWxlc3lzdGVt
ICovCisvKgorICogIEdSVUIgIC0tICBHUmFuZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAqICBDb3B5
cmlnaHQgKEMpIDIwMDcsMjAwOSAgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgor
ICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTogeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29y
IG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5CisgKiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlv
biwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUgTGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRp
b24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoKKyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRo
ZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJB
TlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFC
SUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAg
R05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91
IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExp
Y2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIuICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUu
b3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lmbmRlZiBHUlVCX05URlNfSAorI2RlZmluZSBHUlVC
X05URlNfSAkxCisKKworI2luY2x1ZGUgImJsb2NrX2ludC5oIgorI2luY2x1ZGUgImZzLXR5cGVz
LmgiCisjaW5jbHVkZSAiZ3J1Yl9lcnIuaCIKKyNpbmNsdWRlICJmcy1jb21tLmgiCisKKyNkZWZp
bmUgRklMRV9NRlQgICAgICAwCisjZGVmaW5lIEZJTEVfTUZUTUlSUiAgMQorI2RlZmluZSBGSUxF
X0xPR0ZJTEUgIDIKKyNkZWZpbmUgRklMRV9WT0xVTUUgICAzCisjZGVmaW5lIEZJTEVfQVRUUkRF
RiAgNAorI2RlZmluZSBGSUxFX1JPT1QgICAgIDUKKyNkZWZpbmUgRklMRV9CSVRNQVAgICA2Cisj
ZGVmaW5lIEZJTEVfQk9PVCAgICAgNworI2RlZmluZSBGSUxFX0JBRENMVVMgIDgKKyNkZWZpbmUg
RklMRV9RVU9UQSAgICA5CisjZGVmaW5lIEZJTEVfVVBDQVNFICAxMAorCisjZGVmaW5lIEFUX1NU
QU5EQVJEX0lORk9STUFUSU9OCTB4MTAKKyNkZWZpbmUgQVRfQVRUUklCVVRFX0xJU1QJMHgyMAor
I2RlZmluZSBBVF9GSUxFTkFNRQkJMHgzMAorI2RlZmluZSBBVF9PQkpFQ1RfSUQJCTB4NDAKKyNk
ZWZpbmUgQVRfU0VDVVJJVFlfREVTQ1JJUFRPUgkweDUwCisjZGVmaW5lIEFUX1ZPTFVNRV9OQU1F
CQkweDYwCisjZGVmaW5lIEFUX1ZPTFVNRV9JTkZPUk1BVElPTgkweDcwCisjZGVmaW5lIEFUX0RB
VEEJCQkweDgwCisjZGVmaW5lIEFUX0lOREVYX1JPT1QJCTB4OTAKKyNkZWZpbmUgQVRfSU5ERVhf
QUxMT0NBVElPTgkweEEwCisjZGVmaW5lIEFUX0JJVE1BUAkJMHhCMAorI2RlZmluZSBBVF9TWU1M
SU5LCQkweEMwCisjZGVmaW5lIEFUX0VBX0lORk9STUFUSU9OCTB4RDAKKyNkZWZpbmUgQVRfRUEJ
CQkweEUwCisKKyNkZWZpbmUgQVRUUl9SRUFEX09OTFkJCTB4MQorI2RlZmluZSBBVFRSX0hJRERF
TgkJMHgyCisjZGVmaW5lIEFUVFJfU1lTVEVNCQkweDQKKyNkZWZpbmUgQVRUUl9BUkNISVZFCQkw
eDIwCisjZGVmaW5lIEFUVFJfREVWSUNFCQkweDQwCisjZGVmaW5lIEFUVFJfTk9STUFMCQkweDgw
CisjZGVmaW5lIEFUVFJfVEVNUE9SQVJZCQkweDEwMAorI2RlZmluZSBBVFRSX1NQQVJTRQkJMHgy
MDAKKyNkZWZpbmUgQVRUUl9SRVBBUlNFCQkweDQwMAorI2RlZmluZSBBVFRSX0NPTVBSRVNTRUQJ
CTB4ODAwCisjZGVmaW5lIEFUVFJfT0ZGTElORQkJMHgxMDAwCisjZGVmaW5lIEFUVFJfTk9UX0lO
REVYRUQJMHgyMDAwCisjZGVmaW5lIEFUVFJfRU5DUllQVEVECQkweDQwMDAKKyNkZWZpbmUgQVRU
Ul9ESVJFQ1RPUlkJCTB4MTAwMDAwMDAKKyNkZWZpbmUgQVRUUl9JTkRFWF9WSUVXCQkweDIwMDAw
MDAwCisKKyNkZWZpbmUgRkxBR19DT01QUkVTU0VECQkxCisjZGVmaW5lIEZMQUdfRU5DUllQVEVE
CQkweDQwMDAKKyNkZWZpbmUgRkxBR19TUEFSU0UJCTB4ODAwMAorCisKKyNkZWZpbmUgR1JVQl9E
SVNLX1NFQ1RPUl9CSVRTICAgOQorI2RlZmluZSBCTEtfU0hSCQlHUlVCX0RJU0tfU0VDVE9SX0JJ
VFMKKworI2RlZmluZSBNQVhfTUZUCQkoMTAyNCA+PiBCTEtfU0hSKQorI2RlZmluZSBNQVhfSURY
CQkoMTYzODQgPj4gQkxLX1NIUikKKworI2RlZmluZSBDT01fTEVOCQk0MDk2CisjZGVmaW5lIENP
TV9MT0dfTEVOCTEyCisjZGVmaW5lIENPTV9TRUMJCShDT01fTEVOID4+IEJMS19TSFIpCisKKyNk
ZWZpbmUgQUZfQUxTVAkJMQorI2RlZmluZSBBRl9NTUZUCQkyCisjZGVmaW5lIEFGX0dQT1MJCTQK
KworI2RlZmluZSBSRl9DT01QCQkxCisjZGVmaW5lIFJGX0NCTEsJCTIKKyNkZWZpbmUgUkZfQkxO
SwkJNAorCisjZGVmaW5lIHZhbHVlYXQoYnVmLG9mcyx0eXBlKQkqKCh0eXBlKikoKChjaGFyKili
dWYpK29mcykpCisKKyNkZWZpbmUgdTE2YXQoYnVmLG9mcykJZ3J1Yl9sZV90b19jcHUxNih2YWx1
ZWF0KGJ1ZixvZnMsZ3J1Yl91aW50MTZfdCkpCisjZGVmaW5lIHUzMmF0KGJ1ZixvZnMpCWdydWJf
bGVfdG9fY3B1MzIodmFsdWVhdChidWYsb2ZzLGdydWJfdWludDMyX3QpKQorI2RlZmluZSB1NjRh
dChidWYsb2ZzKQlncnViX2xlX3RvX2NwdTY0KHZhbHVlYXQoYnVmLG9mcyxncnViX3VpbnQ2NF90
KSkKKworI2RlZmluZSB2MTZhdChidWYsb2ZzKQl2YWx1ZWF0KGJ1ZixvZnMsZ3J1Yl91aW50MTZf
dCkKKyNkZWZpbmUgdjMyYXQoYnVmLG9mcykJdmFsdWVhdChidWYsb2ZzLGdydWJfdWludDMyX3Qp
CisjZGVmaW5lIHY2NGF0KGJ1ZixvZnMpCXZhbHVlYXQoYnVmLG9mcyxncnViX3VpbnQ2NF90KQor
CitzdHJ1Y3QgZ3J1Yl9udGZzX2JwYgoreworICBncnViX3VpbnQ4X3Qgam1wX2Jvb3RbM107Cisg
IGdydWJfdWludDhfdCBvZW1fbmFtZVs4XTsKKyAgZ3J1Yl91aW50MTZfdCBieXRlc19wZXJfc2Vj
dG9yOworICBncnViX3VpbnQ4X3Qgc2VjdG9yc19wZXJfY2x1c3RlcjsKKyAgZ3J1Yl91aW50OF90
IHJlc2VydmVkXzFbN107CisgIGdydWJfdWludDhfdCBtZWRpYTsKKyAgZ3J1Yl91aW50MTZfdCBy
ZXNlcnZlZF8yOworICBncnViX3VpbnQxNl90IHNlY3RvcnNfcGVyX3RyYWNrOworICBncnViX3Vp
bnQxNl90IG51bV9oZWFkczsKKyAgZ3J1Yl91aW50MzJfdCBudW1faGlkZGVuX3NlY3RvcnM7Cisg
IGdydWJfdWludDMyX3QgcmVzZXJ2ZWRfM1syXTsKKyAgZ3J1Yl91aW50NjRfdCBudW1fdG90YWxf
c2VjdG9yczsKKyAgZ3J1Yl91aW50NjRfdCBtZnRfbGNuOworICBncnViX3VpbnQ2NF90IG1mdF9t
aXJyX2xjbjsKKyAgZ3J1Yl9pbnQ4X3QgY2x1c3RlcnNfcGVyX21mdDsKKyAgZ3J1Yl9pbnQ4X3Qg
cmVzZXJ2ZWRfNFszXTsKKyAgZ3J1Yl9pbnQ4X3QgY2x1c3RlcnNfcGVyX2luZGV4OworICBncnVi
X2ludDhfdCByZXNlcnZlZF81WzNdOworICBncnViX3VpbnQ2NF90IG51bV9zZXJpYWw7CisgIGdy
dWJfdWludDMyX3QgY2hlY2tzdW07Cit9IF9fYXR0cmlidXRlX18gKChwYWNrZWQpKTsKKworI2Rl
ZmluZSBncnViX250ZnNfZmlsZSBncnViX2ZzaGVscF9ub2RlCisKK3N0cnVjdCBncnViX250ZnNf
YXR0cgoreworICBpbnQgZmxhZ3M7CisgIGNoYXIgKmVtZnRfYnVmLCAqZWRhdF9idWY7CisgIGNo
YXIgKmF0dHJfY3VyLCAqYXR0cl9ueHQsICphdHRyX2VuZDsKKyAgZ3J1Yl91aW50MzJfdCBzYXZl
X3BvczsKKyAgY2hhciAqc2J1ZjsKKyAgc3RydWN0IGdydWJfbnRmc19maWxlICptZnQ7Cit9Owor
CitzdHJ1Y3QgZ3J1Yl9mc2hlbHBfbm9kZQoreworICBzdHJ1Y3QgZ3J1Yl9udGZzX2RhdGEgKmRh
dGE7CisgIGNoYXIgKmJ1ZjsKKyAgZ3J1Yl91aW50NjRfdCBzaXplOworICBncnViX3VpbnQzMl90
IGlubzsKKyAgaW50IGlub2RlX3JlYWQ7CisgIHN0cnVjdCBncnViX250ZnNfYXR0ciBhdHRyOwor
fTsKKworc3RydWN0IGdydWJfbnRmc19kYXRhCit7CisgIHN0cnVjdCBncnViX250ZnNfZmlsZSBj
bWZ0OworICBzdHJ1Y3QgZ3J1Yl9udGZzX2ZpbGUgbW1mdDsKKyAgQmxvY2tEcml2ZXJTdGF0ZSog
YnM7CisgIGdydWJfdWludDMyX3QgbWZ0X3NpemU7CisgIGdydWJfdWludDMyX3QgaWR4X3NpemU7
CisgIGdydWJfdWludDMyX3Qgc3BjOworICBncnViX3VpbnQzMl90IGJsb2Nrc2l6ZTsKKyAgZ3J1
Yl91aW50MzJfdCBtZnRfc3RhcnQ7CisgIGdydWJfdWludDY0X3QgdXVpZDsKK307CisKK3N0cnVj
dCBncnViX250ZnNfY29tcAoreworICBCbG9ja0RyaXZlclN0YXRlKiBiczsKKyAgaW50IGNvbXBf
aGVhZCwgY29tcF90YWlsOworICBncnViX3VpbnQzMl90IGNvbXBfdGFibGVbMTZdWzJdOworICBn
cnViX3VpbnQzMl90IGNidWZfb2ZzLCBjYnVmX3Zjbiwgc3BjOworICBjaGFyICpjYnVmOworfTsK
Kworc3RydWN0IGdydWJfbnRmc19ybHN0Cit7CisgIGludCBmbGFnczsKKyAgZ3J1Yl9kaXNrX2Fk
ZHJfdCB0YXJnZXRfdmNuLCBjdXJyX3ZjbiwgbmV4dF92Y24sIGN1cnJfbGNuOworICBjaGFyICpj
dXJfcnVuOworICBzdHJ1Y3QgZ3J1Yl9udGZzX2F0dHIgKmF0dHI7CisgIHN0cnVjdCBncnViX250
ZnNfY29tcCBjb21wOworfTsKKworCisKKworCisKKwordHlwZWRlZiBncnViX2Vycl90ICgqbnRm
c2NvbXBfZnVuY190KSAoc3RydWN0IGdydWJfbnRmc19hdHRyICogYXQsIGNoYXIgKmRlc3QsCisJ
CQkJICAgICAgIGdydWJfdWludDMyX3Qgb2ZzLCBncnViX3VpbnQzMl90IGxlbiwKKwkJCQkgICAg
ICAgc3RydWN0IGdydWJfbnRmc19ybHN0ICogY3R4LAorCQkJCSAgICAgICBncnViX3VpbnQzMl90
IHZjbik7CisKK2V4dGVybiBudGZzY29tcF9mdW5jX3QgZ3J1Yl9udGZzY29tcF9mdW5jOworCitn
cnViX2Vycl90IGdydWJfbnRmc19yZWFkX3J1bl9saXN0IChzdHJ1Y3QgZ3J1Yl9udGZzX3Jsc3Qg
KmN0eCk7CisKKworCisKK2ludCBiZHJ2X3ByZWFkX2Zyb21fbGNuX29mX3ZvbHVtKEJsb2NrRHJp
dmVyU3RhdGUgKmJzLCBpbnQ2NF90IG9mZnNldCwKKwkJCQkgdm9pZCAqYnVmMSwgaW50IGNvdW50
MSk7CisKK3N0cnVjdCBncnViX250ZnNfZGF0YSAqCitncnViX250ZnNfbW91bnQgKEJsb2NrRHJp
dmVyU3RhdGUqIGJzLCBncnViX3VpbnQzMl90IHBhcnRfb2ZmX3NlY3Rvcik7CisKKworZ3J1Yl9l
cnJfdAorZ3J1Yl9udGZzX2xzIChncnViX2ZpbGVfdCBmaWxlLCBjb25zdCBjaGFyICpwYXRoLAor
CSAgICAgICBpbnQgKCpob29rKSAoY29uc3QgY2hhciAqZmlsZW5hbWUsCisJCQkgICAgY29uc3Qg
c3RydWN0IGdydWJfZGlyaG9va19pbmZvICppbmZvLAorCQkJICAgIHZvaWQgKmNsb3N1cmUpLAor
CSAgICAgIHZvaWQgKmNsb3N1cmUpOworCitncnViX2Vycl90CitncnViX250ZnNfb3BlbiAoZ3J1
Yl9maWxlX3QgZmlsZSwgY29uc3QgY2hhciAqbmFtZSk7CisKKworZ3J1Yl9zc2l6ZV90CitncnVi
X250ZnNfcmVhZCAoZ3J1Yl9maWxlX3QgZmlsZSwgZ3J1Yl9vZmZfdCBvZmZzZXQsCisJCWdydWJf
c2l6ZV90IGxlbiwgY2hhciAqYnVmKTsKKworCitncnViX2Vycl90CitncnViX250ZnNfY2xvc2Ug
KGdydWJfZmlsZV90IGZpbGUpOworCisKKyNlbmRpZiAvKiAhIEdSVUJfTlRGU19IICovCmRpZmYg
LS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4v
cGFydGl0aW9uLmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vcGFydGl0aW9uLmMK
LS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL3BhcnRpdGlvbi5jCTE5NzAtMDEt
MDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1x
ZW11LXhlbi9wYXJ0aXRpb24uYwkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNDkzNjcwMSArMDgwMApA
QCAtMCwwICsxLDI0MCBAQAorI2luY2x1ZGUgInBhcnRpdGlvbi5oIgorI2luY2x1ZGUgPGVyci5o
PgorCitzdGF0aWMgaW50IGlzX2Z1bGxfemVybyh2b2lkICpwLCB1aW50IGJ5dGVzKQoreworICBp
bnQgaSA9IDA7CisgIHVpbnQ4X3QgKnAxID0gKHVpbnQ4X3QqKXA7CisgIHdoaWxlKGkgPCBieXRl
cykKKyAgeworICAgIGlmKCpwMSAhPSAwKQorICAgIHsKKyAgICAgIHJldHVybiAwOworICAgIH1l
bHNlCisgICAgeworICAgICAgaSsrOworICAgICAgcDErKzsKKyAgICB9CisgIH0KKyAgLy9wcmlu
dGYoIi4uLi4uLi4uLi5mdWxsIHplcm8uLi4uLi5cbiIpOworICByZXR1cm4gMTsKK30KKworc3Rh
dGljIHZvaWQgcmVhZF9wYXJ0aXRpb24odWludDhfdCAqcCwgc3RydWN0IHBhcnRpdGlvbl9yZWNv
cmQgKnIpCit7CisgICAgci0+Ym9vdGFibGUgPSBwWzBdOworICAgIHItPnN0YXJ0X2hlYWQgPSBw
WzFdOworICAgIHItPnN0YXJ0X2N5bGluZGVyID0gcFszXSB8ICgocFsyXSA8PCAyKSAmIDB4MDMw
MCk7CisgICAgci0+c3RhcnRfc2VjdG9yID0gcFsyXSAmIDB4M2Y7CisgICAgci0+c3lzdGVtID0g
cFs0XTsKKyAgICByLT5lbmRfaGVhZCA9IHBbNV07CisgICAgci0+ZW5kX2N5bGluZGVyID0gcFs3
XSB8ICgocFs2XSA8PCAyKSAmIDB4MzAwKTsKKyAgICByLT5lbmRfc2VjdG9yID0gcFs2XSAmIDB4
M2Y7CisgICAgci0+c3RhcnRfc2VjdG9yX2FicyA9IHBbOF0gfCBwWzldIDw8IDggfCBwWzEwXSA8
PCAxNiB8IHBbMTFdIDw8IDI0OworICAgIHItPm5iX3NlY3RvcnNfYWJzID0gcFsxMl0gfCBwWzEz
XSA8PCA4IHwgcFsxNF0gPDwgMTYgfCBwWzE1XSA8PCAyNDsKK30KKworCisKK2NoYXIqIGp1ZGdl
X2ZzKGxzX3BhcnRpdGlvbl90KiBwdCkKK3sKKyAgaWYocHQtPnBhcnQuc3lzdGVtPT0weDBiIHx8
IHB0LT5wYXJ0LnN5c3RlbT09MHgwMSkKKyAgICB7CisgICAgICBwdC0+ZnNfdHlwZSA9IEZTX0ZB
VDsKKyAgICAgIHJldHVybiAoY2hhciopIkZBVDMyIjsKKyAgICB9CisgIGVsc2UgaWYocHQtPnBh
cnQuc3lzdGVtPT0weDA3KQorICAgIHsKKyAgICAgIHB0LT5mc190eXBlID0gRlNfTlRGUzsKKyAg
ICAgIHJldHVybiAoY2hhciopIk5URlMiOworICAgIH0KKyAgZWxzZQorICAgIHsKKyAgICAgIHB0
LT5mc190eXBlID0gRlNfVU5LTk9XTjsKKyAgICAgIHJldHVybiAgKGNoYXIqKSJVTktOT1dOIjsK
KyAgICB9Cit9CisKK2ludCBlbnVtX3BhcnRpdGlvbihCbG9ja0RyaXZlclN0YXRlICpicywgbHNf
cGFydGl0aW9uX3QqIHBhcnRzKQoreworICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIG1icls0
XTsKKyAgICB1aW50OF90IGRhdGFbNTEyXTsKKyAgICBpbnQgaTsKKyAgICBpbnQgZXh0X3BhcnRu
dW0gPSA0OworICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIGV4dFsxMF07CisgICAgdWludDhf
dCBkYXRhMVs1MTJdOworICAgIGludCBqID0gMDsKKworICAgIGlmIChiZHJ2X3JlYWQoYnMsIDAs
IGRhdGEsIDEpKQorICAgICAgICBlcnJ4KEVJTlZBTCwgImVycm9yIHdoaWxlIHJlYWRpbmciKTsK
KworICAgIGlmIChkYXRhWzUxMF0gIT0gMHg1NSB8fCBkYXRhWzUxMV0gIT0gMHhhYSkgCisgICAg
eworICAgICAgICBlcnJubyA9IC1FSU5WQUw7CisgICAgICAgIHJldHVybiAtMTsKKyAgICB9Cisg
ICAgCisgICAgaW50IGsgPSAwOworICAgIGZvciAoaSA9IDA7IGkgPCA0OyBpKyspIAorICAgIHsK
KyAgICAgICAgcmVhZF9wYXJ0aXRpb24oJmRhdGFbNDQ2ICsgMTYgKiBpXSwgJm1icltpXSk7CisK
KyAgICAgICAgaWYgKCFtYnJbaV0ubmJfc2VjdG9yc19hYnMpCisgICAgICAgICAgICBjb250aW51
ZTsKKwkvL3ByaW50ZigidGhlICVkIHBhcnRpdGlvbjpib290PTB4JXgsIHN0YXJ0PSV1LCBzeXN0
ZW09MHgleCwgdG90YWw9JXVcdCIsIAorCS8vICAgICAgIGkrMSwgbWJyW2ldLmJvb3RhYmxlLCBt
YnJbaV0uc3RhcnRfc2VjdG9yX2FicywgbWJyW2ldLnN5c3RlbSwgbWJyW2ldLm5iX3NlY3RvcnNf
YWJzKTsKKwlwYXJ0c1trXS5wYXJ0ID0gbWJyW2ldOworCXBhcnRzW2tdLmlkID0gaSsxOworCWsr
KzsKKyAgICAgICAgaWYgKG1icltpXS5zeXN0ZW0gPT0gMHhGIHx8IG1icltpXS5zeXN0ZW0gPT0g
MHg1KSAKKwl7CisJICAgIC8vcHJpbnRmKCJpcyBhIGV4dGVuZCBwYXJ0aXRpb24uLi4uLi5cbiIp
OworCSAgICBpZiAoYmRydl9yZWFkKGJzLCBtYnJbaV0uc3RhcnRfc2VjdG9yX2FicywgZGF0YTEs
IDEpKQorICAgICAgICAgICAgICAgIGVycngoRUlOVkFMLCAiZXJyb3Igd2hpbGUgcmVhZGluZyIp
OworCSAgICAvLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8KKwkgICAgLy9kdW1wIGVicgorCSAg
ICAvLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8KKwkgICAgdWludDMyX3QgZXh0X3N0YXJ0X3Nl
Y3RvciA9IG1icltpXS5zdGFydF9zZWN0b3JfYWJzOworCSAgICBzdHJ1Y3QgcGFydGl0aW9uX3Jl
Y29yZCBleHRfbmV4dCA9IHswfTsKKyAgICAgICAgICAgIHdoaWxlICgxKSAKKwkgICAgeworCSAg
ICAgICAgcmVhZF9wYXJ0aXRpb24oJmRhdGExWzQ0NiArIDE2ICogMF0sICZleHRbal0pOworCQkv
L3ByaW50ZigidGhlICVkdGggcGFydGl0aW9uOmJvb3Q9MHgleCwgc3RhcnQ9JXUsIHN5c3RlbT0w
eCV4LCB0b3RhbD0ldVx0IiwKKwkJLy8gICAgICAgZXh0X3BhcnRudW0raisxLCBleHRbal0uYm9v
dGFibGUsIGV4dFtqXS5zdGFydF9zZWN0b3JfYWJzK2V4dF9zdGFydF9zZWN0b3IsIAorCQkvLyAg
ICAgICBleHRbal0uc3lzdGVtLCBleHRbal0ubmJfc2VjdG9yc19hYnMpOworCQkKKwkJCisJCWlm
KDAgIT0gZXh0W2pdLm5iX3NlY3RvcnNfYWJzKSAKKwkJeworCQkgIGV4dFtqXS5zdGFydF9zZWN0
b3JfYWJzICs9IGV4dF9zdGFydF9zZWN0b3I7CisJCSAgaWYoaiA+IDApCisJCSAgICBleHRbal0u
c3RhcnRfc2VjdG9yX2FicyArPSBleHRfbmV4dC5zdGFydF9zZWN0b3JfYWJzOworCQkgIHBhcnRz
W2tdLnBhcnQgPSBleHRbal07CisJCSAgcGFydHNba10uaWQgPSBleHRfcGFydG51bSArIGogKzE7
CisJCSAgaysrOworCQkgIGorKzsKKwkgICAgICAgIH0KKwkJZWxzZQorCQl7CisJCSAgcHJpbnRm
KCJuYl9zZWN0b3JzX2Ficz0wPj4+Pj4+Pj4+Pj4+XG4iKTsKKwkJfQorCQkvLy8vLy8vLy8vLy8v
Ly8vLy8vLy8vCisJCWlmKGV4dFtqLTFdLnN5c3RlbSA9PSAweEYgKQorCQkgIHsKKwkJICAgIHBy
aW50ZigiLi4uLi4uLi4uLi4uLi4uYWdhaW4gZXh0ZW5kLi4uLi4uLi4uLi4uLlxuIik7CisJCSAg
ICBleHRfc3RhcnRfc2VjdG9yID0gZXh0W2otMV0uc3RhcnRfc2VjdG9yX2FicyArIGV4dF9zdGFy
dF9zZWN0b3I7CisJCSAgICBpZiAoYmRydl9yZWFkKGJzLCBleHRfc3RhcnRfc2VjdG9yLCBkYXRh
MSwgMSkpCisJCSAgICAgIGVycngoRUlOVkFMLCAiZXJyb3Igd2hpbGUgcmVhZGluZyIpOworCQkg
ICAgY29udGludWU7CisJCSAgfQorCQllbHNlCisJCSAgeworCQkgICAgOy8vcHJpbnRmKCJpcyBh
IGxvZ2ljYWwgcGFydFxuIik7CisJCSAgfQorCQkvLy8vLy8vLy8vLy8vLy8vLy8vLy8KKwkJcmVh
ZF9wYXJ0aXRpb24oJmRhdGExWzQ0NiArIDE2ICogMV0sICZleHRfbmV4dCk7CisJCWlmIChpc19m
dWxsX3plcm8oJmV4dF9uZXh0LCBzaXplb2YoZXh0X25leHQpKSkKKyAgICAgICAgICAgICAgICAg
ICAgYnJlYWs7CisKKwkJaWYgKGJkcnZfcmVhZChicywgZXh0X3N0YXJ0X3NlY3RvciArIGV4dF9u
ZXh0LnN0YXJ0X3NlY3Rvcl9hYnMgLCBkYXRhMSwgMSkpCisJCSAgZXJyeChFSU5WQUwsICJlcnJv
ciB3aGlsZSByZWFkaW5nIik7CisJICAgIH0KKwl9ZWxzZQorCXsKKwkgIDsvL3ByaW50ZigiaXMg
YSBtYWluIHBhcnRpdGlvbi4uLi4uLlxuIik7CisJfQorICAgIH0KKyAgICAKKyAgICByZXR1cm4g
azsKK30KKworCisKKworCitpbnQgZmluZF9wYXJ0aXRpb24oQmxvY2tEcml2ZXJTdGF0ZSAqYnMs
IGludCBwYXJ0aXRpb24sCisgICAgICAgICAgICAgICAgICAgICAgICAgIG9mZl90ICpvZmZzZXQs
IG9mZl90ICpzaXplKQoreworICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIG1icls0XTsKKyAg
ICB1aW50OF90IGRhdGFbNTEyXTsKKyAgICBpbnQgaTsKKyAgICBpbnQgZXh0X3BhcnRudW0gPSA0
OworCisKKyAgICBpZiAoYmRydl9yZWFkKGJzLCAwLCBkYXRhLCAxKSkKKyAgICAgICAgZXJyeChF
SU5WQUwsICJlcnJvciB3aGlsZSByZWFkaW5nIik7CisKKyAgICBpZiAoZGF0YVs1MTBdICE9IDB4
NTUgfHwgZGF0YVs1MTFdICE9IDB4YWEpIAorICAgIHsKKyAgICAgICAgZXJybm8gPSAtRUlOVkFM
OworICAgICAgICByZXR1cm4gLTE7CisgICAgfQorICAgIAorICAgIGludCBrID0gMDsKKyAgICBm
b3IgKGkgPSAwOyBpIDwgNDsgaSsrKSAKKyAgICB7CisgICAgICAgIHJlYWRfcGFydGl0aW9uKCZk
YXRhWzQ0NiArIDE2ICogaV0sICZtYnJbaV0pOworCisgICAgICAgIGlmICghbWJyW2ldLm5iX3Nl
Y3RvcnNfYWJzKQorICAgICAgICAgICAgY29udGludWU7CisJLy9wcmludGYoInRoZSAlZCBwYXJ0
aXRpb246IiwgaSsxKTsKKwkKKyAgICAgICAgaWYgKG1icltpXS5zeXN0ZW0gPT0gMHhGIHx8IG1i
cltpXS5zeXN0ZW0gPT0gMHg1KSAKKwl7CisJICAvL3ByaW50ZigiaXMgYSBleHRlbmQgcGFydGl0
aW9uLi4uLi4uXG4iKTsKKyAgICAgICAgICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIGV4dFsx
MF07CisgICAgICAgICAgICB1aW50OF90IGRhdGExWzUxMl07CisgICAgICAgICAgICBpbnQgaiA9
IDA7CisKKyAgICAgICAgICAgIGlmIChiZHJ2X3JlYWQoYnMsIG1icltpXS5zdGFydF9zZWN0b3Jf
YWJzLCBkYXRhMSwgMSkpCisgICAgICAgICAgICAgICAgZXJyeChFSU5WQUwsICJlcnJvciB3aGls
ZSByZWFkaW5nIik7CisJICAgIAorCSAgICB1aW50MzJfdCBleHRfc3RhcnRfc2VjdG9yID0gbWJy
W2ldLnN0YXJ0X3NlY3Rvcl9hYnM7CisJICAgIHN0cnVjdCBwYXJ0aXRpb25fcmVjb3JkIGV4dF9u
ZXh0ID0gezB9OworICAgICAgICAgICAgd2hpbGUgKDEpIAorCSAgICB7CisJICAgICAgICByZWFk
X3BhcnRpdGlvbigmZGF0YTFbNDQ2ICsgMTYgKiAwXSwgJmV4dFtqXSk7CisJCXByaW50Zigic3Rh
cnQ9JXUsIHRvdGFsPSV1LCBzeXN0ZW09MHgleFx0IiwKKwkJICAgICAgIGV4dFtqXS5zdGFydF9z
ZWN0b3JfYWJzLCBleHRbal0ubmJfc2VjdG9yc19hYnMsIGV4dFtqXS5zeXN0ZW0pOworCQlwcmlu
dGYoInRoZSAlZHRoIHBhcnRpdGlvbiBpcyBhIGxvZ2ljYWwgcGFydFxuIiwgZXh0X3BhcnRudW0g
KyBqICsgMSk7CisJCQorCQlpZigwICE9IGV4dFtqXS5uYl9zZWN0b3JzX2FicykgCisJCXsKKwkJ
ICBpZiAoKGV4dF9wYXJ0bnVtICsgaiArIDEpID09IHBhcnRpdGlvbikgCisJCSAgeworCQkgICAg
ZXh0W2pdLnN0YXJ0X3NlY3Rvcl9hYnMgKz0gIGV4dF9zdGFydF9zZWN0b3I7CisJCSAgICBpZihq
ID4gMCkKKwkJICAgICAgZXh0W2pdLnN0YXJ0X3NlY3Rvcl9hYnMgKz0gZXh0X25leHQuc3RhcnRf
c2VjdG9yX2FiczsKKwkJICAgICpvZmZzZXQgPSAodWludDY0X3QpZXh0W2pdLnN0YXJ0X3NlY3Rv
cl9hYnMgPDwgOTsKKwkJICAgICpzaXplID0gKHVpbnQ2NF90KWV4dFtqXS5uYl9zZWN0b3JzX2Fi
cyA8PCA5OworCQkgICAgcmV0dXJuIDA7CisJCSAgfQorCQkgIGorKzsKKwkgICAgICAgIH0KKwkJ
CisJCXJlYWRfcGFydGl0aW9uKCZkYXRhMVs0NDYgKyAxNiAqIDFdLCAmZXh0X25leHQpOworCQlp
ZiAoaXNfZnVsbF96ZXJvKCZleHRfbmV4dCwgc2l6ZW9mKGV4dF9uZXh0KSkpCisgICAgICAgICAg
ICAgICAgICAgIGJyZWFrOworCQkvL2V4dF9zdGFydF9zZWN0b3IgKz0gZXh0X25leHQuc3RhcnRf
c2VjdG9yX2FiczsKKwkJaWYgKGJkcnZfcmVhZChicywgZXh0X3N0YXJ0X3NlY3RvciArIGV4dF9u
ZXh0LnN0YXJ0X3NlY3Rvcl9hYnMsIGRhdGExLCAxKSkKKwkJICAgIGVycngoRUlOVkFMLCAiZXJy
b3Igd2hpbGUgcmVhZGluZyIpOworCSAgICB9CisgICAgICAgICAgICAKKyAgICAgICAgfSAKKwll
bHNlIAorCXsKKwkgIC8vcHJpbnRmKCJpcyBhIG1haW4gcGFydGl0aW9uLi4uLi4uXG4iKTsKKwkg
ICAgaWYgKChpICsgMSkgPT0gcGFydGl0aW9uKSAKKwkgICAgeworCSAgICAgICpvZmZzZXQgPSAo
dWludDY0X3QpbWJyW2ldLnN0YXJ0X3NlY3Rvcl9hYnMgPDwgOTsKKwkgICAgICAqc2l6ZSA9ICh1
aW50NjRfdCltYnJbaV0ubmJfc2VjdG9yc19hYnMgPDwgOTsKKwkgICAgICByZXR1cm4gMDsKKwkg
ICAgfQorCX0KKyAgICB9CisKKyAgICBlcnJubyA9IC1FTk9FTlQ7CisgICAgcmV0dXJuIC0xOwor
fQorCisvLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8v
Ly8vCmRpZmYgLS1leGNsdWRlPS5zdm4gLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUt
cWVtdS14ZW4vcGFydGl0aW9uLmggeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vcGFy
dGl0aW9uLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11LXFlbXUteGVuL3BhcnRpdGlvbi5o
CTE5NzAtMDEtMDEgMDc6MDA6MDAuMDAwMDAwMDAwICswNzAwCisrKyB4ZW4tNC4xLjItYi90b29s
cy9pb2VtdS1xZW11LXhlbi9wYXJ0aXRpb24uaAkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNTk0MDIy
NSArMDgwMApAQCAtMCwwICsxLDQ2IEBACisjaWZuZGVmIF9QQVJUSVRJT05fSAorI2RlZmluZSBf
UEFSVElUSU9OX0gKKworI2luY2x1ZGUgPHN0ZGludC5oPgorCit0eXBlZGVmIHN0cnVjdCBwYXJ0
aXRpb25fcmVjb3JkCit7CisgICAgdWludDhfdCBib290YWJsZTsKKyAgICB1aW50OF90IHN0YXJ0
X2hlYWQ7CisgICAgdWludDMyX3Qgc3RhcnRfY3lsaW5kZXI7CisgICAgdWludDhfdCBzdGFydF9z
ZWN0b3I7CisgICAgdWludDhfdCBzeXN0ZW07CisgICAgdWludDhfdCBlbmRfaGVhZDsKKyAgICB1
aW50OF90IGVuZF9jeWxpbmRlcjsKKyAgICB1aW50OF90IGVuZF9zZWN0b3I7CisgICAgdWludDMy
X3Qgc3RhcnRfc2VjdG9yX2FiczsKKyAgICB1aW50MzJfdCBuYl9zZWN0b3JzX2FiczsKK30gX19h
dHRyaWJ1dGVfXyAoKHBhY2tlZCkpIHBhcnRfcmVjb3JkX3Q7CisKKworCit0eXBlZGVmIGVudW0K
KyAgeworICAgIEZTX1VOS05PV04gPSAwLAorICAgIEZTX0ZBVCwKKyAgICBGU19OVEZTCisgIH1G
U19UWVBFOworCit0eXBlZGVmIHN0cnVjdCBsc19wYXJ0aXRpb24KK3sKKyAgcGFydF9yZWNvcmRf
dCBwYXJ0OworICBpbnQgaWQ7CisgIEZTX1RZUEUgZnNfdHlwZTsKK31sc19wYXJ0aXRpb25fdDsK
KworCitjaGFyKiBqdWRnZV9mcyhsc19wYXJ0aXRpb25fdCogcHQpOworCisjaW5jbHVkZSAiYmxv
Y2tfaW50LmgiCitpbnQgZW51bV9wYXJ0aXRpb24oQmxvY2tEcml2ZXJTdGF0ZSAqYnMsIGxzX3Bh
cnRpdGlvbl90KiBwYXJ0cyk7CisKK2ludCBmaW5kX3BhcnRpdGlvbihCbG9ja0RyaXZlclN0YXRl
ICpicywgaW50IHBhcnRpdGlvbiwKKwkJICAgb2ZmX3QgKm9mZnNldCwgb2ZmX3QgKnNpemUpOwor
CisKKyNlbmRpZgpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xz
L2lvZW11LXFlbXUteGVuL3FlbXUtaW1nLmMgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14
ZW4vcWVtdS1pbWcuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vcWVtdS1p
bWcuYwkyMDExLTAyLTEyIDAxOjU0OjUxLjAwMDAwMDAwMCArMDgwMAorKysgeGVuLTQuMS4yLWIv
dG9vbHMvaW9lbXUtcWVtdS14ZW4vcWVtdS1pbWcuYwkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNjkz
MjYyMiArMDgwMApAQCAtMjAsMjMgKzIwLDM1IEBACiAgKiBMSUFCSUxJVFksIFdIRVRIRVIgSU4g
QU4gQUNUSU9OIE9GIENPTlRSQUNULCBUT1JUIE9SIE9USEVSV0lTRSwgQVJJU0lORyBGUk9NLAog
ICogT1VUIE9GIE9SIElOIENPTk5FQ1RJT04gV0lUSCBUSEUgU09GVFdBUkUgT1IgVEhFIFVTRSBP
UiBPVEhFUiBERUFMSU5HUyBJTgogICogVEhFIFNPRlRXQVJFLgogICovCiAjaW5jbHVkZSAicWVt
dS1jb21tb24uaCIKICNpbmNsdWRlICJvc2RlcC5oIgogI2luY2x1ZGUgImJsb2NrX2ludC5oIgog
I2luY2x1ZGUgPGFzc2VydC5oPgorI2luY2x1ZGUgPGVyci5oPgorCisKKyNpbmNsdWRlICJwYXJ0
aXRpb24uaCIKKyNpbmNsdWRlICJmcy1jb21tLmgiCisjaW5jbHVkZSAiZmF0LmgiCisjaW5jbHVk
ZSAibnRmcy5oIgorI2luY2x1ZGUgIm1pc2MuaCIKKworCisKIAogI2lmZGVmIF9XSU4zMgogI2Rl
ZmluZSBXSU4zMl9MRUFOX0FORF9NRUFOCiAjaW5jbHVkZSA8d2luZG93cy5oPgogI2VuZGlmCiAK
IC8qIERlZmF1bHQgdG8gY2FjaGU9d3JpdGViYWNrIGFzIGRhdGEgaW50ZWdyaXR5IGlzIG5vdCBp
bXBvcnRhbnQgZm9yIHFlbXUtdGNnLiAqLworI2RlZmluZSBNQVhfUEFSVElUSU9OUyAgICAyMAog
I2RlZmluZSBCUkRWX09fRkxBR1MgQkRSVl9PX0NBQ0hFX1dCCiAKIHN0YXRpYyB2b2lkIFFFTVVf
Tk9SRVRVUk4gZXJyb3IoY29uc3QgY2hhciAqZm10LCAuLi4pCiB7CiAgICAgdmFfbGlzdCBhcDsK
ICAgICB2YV9zdGFydChhcCwgZm10KTsKICAgICBmcHJpbnRmKHN0ZGVyciwgInFlbXUtaW1nOiAi
KTsKICAgICB2ZnByaW50ZihzdGRlcnIsIGZtdCwgYXApOwpAQCAtNTMsMTYgKzY1LDE4IEBAIHN0
YXRpYyB2b2lkIGZvcm1hdF9wcmludCh2b2lkICpvcGFxdWUsIGMKIC8qIFBsZWFzZSBrZWVwIGlu
IHN5bmNoIHdpdGggcWVtdS1pbWcudGV4aSAqLwogc3RhdGljIHZvaWQgaGVscCh2b2lkKQogewog
ICAgIHByaW50ZigicWVtdS1pbWcgdmVyc2lvbiAiIFFFTVVfVkVSU0lPTiAiLCBDb3B5cmlnaHQg
KGMpIDIwMDQtMjAwOCBGYWJyaWNlIEJlbGxhcmRcbiIKICAgICAgICAgICAgInVzYWdlOiBxZW11
LWltZyBjb21tYW5kIFtjb21tYW5kIG9wdGlvbnNdXG4iCiAgICAgICAgICAgICJRRU1VIGRpc2sg
aW1hZ2UgdXRpbGl0eVxuIgogICAgICAgICAgICAiXG4iCiAgICAgICAgICAgICJDb21tYW5kIHN5
bnRheDpcbiIKKyAgICAgICAgICAgIiAgbHMgWy12XSBbWy1sXSAtZCBkaXJlY3RvcnldIGltZ2Zp
bGVcbiIKKyAgICAgICAgICAgIiAgY2F0IFstdl0gLWYgZmlsZSBpbWdmaWxlXG4iCiAgICAgICAg
ICAgICIgIGNyZWF0ZSBbLWVdIFstNl0gWy1iIGJhc2VfaW1hZ2VdIFstZiBmbXRdIGZpbGVuYW1l
IFtzaXplXVxuIgogICAgICAgICAgICAiICBjb21taXQgWy1mIGZtdF0gZmlsZW5hbWVcbiIKICAg
ICAgICAgICAgIiAgY29udmVydCBbLWNdIFstZV0gWy02XSBbLWYgZm10XSBbLU8gb3V0cHV0X2Zt
dF0gWy1CIG91dHB1dF9iYXNlX2ltYWdlXSBmaWxlbmFtZSBbZmlsZW5hbWUyIFsuLi5dXSBvdXRw
dXRfZmlsZW5hbWVcbiIKICAgICAgICAgICAgIiAgaW5mbyBbLWYgZm10XSBmaWxlbmFtZVxuIgog
ICAgICAgICAgICAiICBzbmFwc2hvdCBbLWwgfCAtYSBzbmFwc2hvdCB8IC1jIHNuYXBzaG90IHwg
LWQgc25hcHNob3RdIGZpbGVuYW1lXG4iCiAgICAgICAgICAgICJcbiIKICAgICAgICAgICAgIkNv
bW1hbmQgcGFyYW1ldGVyczpcbiIKICAgICAgICAgICAgIiAgJ2ZpbGVuYW1lJyBpcyBhIGRpc2sg
aW1hZ2UgZmlsZW5hbWVcbiIKQEAgLTIwOSwxNiArMjIzLDM0MyBAQCBzdGF0aWMgQmxvY2tEcml2
ZXJTdGF0ZSAqYmRydl9uZXdfb3BlbihjCiAgICAgICAgIGlmIChyZWFkX3Bhc3N3b3JkKHBhc3N3
b3JkLCBzaXplb2YocGFzc3dvcmQpKSA8IDApCiAgICAgICAgICAgICBlcnJvcigiTm8gcGFzc3dv
cmQgZ2l2ZW4iKTsKICAgICAgICAgaWYgKGJkcnZfc2V0X2tleShicywgcGFzc3dvcmQpIDwgMCkK
ICAgICAgICAgICAgIGVycm9yKCJpbnZhbGlkIHBhc3N3b3JkIik7CiAgICAgfQogICAgIHJldHVy
biBiczsKIH0KIAorc3RhdGljIHZvaWQgZ2V0X3BhcnRpdGlvbl9wYXRoKGNvbnN0IGNoYXIgKmRp
ciwgaW50ICp3aGljaF9wYXJ0LCBjaGFyICoqcGF0aCkKK3sKKyAgICBzdGF0aWMgY2hhciBmdWxs
X3BhdGhbNTEyXTsKKyAgICBjaGFyIHBhcnRbNV09e307CisKKyAgICBzdHJuY3B5KGZ1bGxfcGF0
aCwgZGlyLCA1MTIpOworICAgIGZ1bGxfcGF0aFs1MTFdID0gJ1wwJzsKKworICAgIC8vz97WxtLU
L7+qzbcgveHOsgorICAgIGNoYXIgKnAxID0gZnVsbF9wYXRoICsgMTsKKyAgICBjaGFyICpwMiA9
IHN0cmNocihmdWxsX3BhdGggKyAxLCAnLycpOworICAgIGlmKCFwMikKKyAgICB7CisgICAgICAg
IGVycngoMSwgImNoZWNrIHRoZSBmaWxlIHBhdGghXG4iKTsKKyAgICB9CisKKyAgICAqcGF0aCA9
IHAyOworICAgIHN0cm5jcHkocGFydCwgcDEsIHAyLXAxKTsKKyAgICAqd2hpY2hfcGFydCA9IGF0
b2kocGFydCk7Cit9CisKK3R5cGVkZWYgc3RydWN0IGdydWJfZnMKK3sKKyAgICBncnViX29wZW4g
b3BlbjsKKyAgICBncnViX2xzIGxzOworICAgIGdydWJfcmVhZCByZWFkOworICAgIGdydWJfY2xv
c2UgY2xvc2U7Cit9Z3J1Yl9mc190OworCitzdGF0aWMgZ3J1Yl9mc190IGdydWJfZnNfcGx1Z1sx
MF0gPSB7fTsKKworc3RhdGljIHZvaWQgZ3J1Yl9mc19wbHVnaW4odm9pZCkKK3sKKyAgZ3J1Yl9m
c19wbHVnW0ZTX0ZBVF0ub3BlbiA9IGdydWJfZmF0X29wZW47CisgIGdydWJfZnNfcGx1Z1tGU19G
QVRdLnJlYWQgPSBncnViX2ZhdF9yZWFkOworICBncnViX2ZzX3BsdWdbRlNfRkFUXS5jbG9zZSA9
IGdydWJfZmF0X2Nsb3NlOworICBncnViX2ZzX3BsdWdbRlNfRkFUXS5scyA9IGdydWJfZmF0X2xz
OworCisgIGdydWJfZnNfcGx1Z1tGU19OVEZTXS5vcGVuID0gZ3J1Yl9udGZzX29wZW47CisgIGdy
dWJfZnNfcGx1Z1tGU19OVEZTXS5yZWFkID0gZ3J1Yl9udGZzX3JlYWQ7CisgIGdydWJfZnNfcGx1
Z1tGU19OVEZTXS5jbG9zZSA9IGdydWJfbnRmc19jbG9zZTsKKyAgZ3J1Yl9mc19wbHVnW0ZTX05U
RlNdLmxzID0gZ3J1Yl9udGZzX2xzOworfQorCitzdGF0aWMgaW50IGltZ19scyhpbnQgYXJnYywg
Y2hhciAqKmFyZ3YpCit7CisgICAgaW50IGMgPSAtMTsKKyAgICBjaGFyICppbWdmaWxlID0gTlVM
TDsKKyAgICBjaGFyICpkaXIgPSBOVUxMOworICAgIGNoYXIgdmVyYm9zZSA9IDA7CisgICAgc3Ry
dWN0IGxzX2N0cmwgY3RybD17fTsKKworICAgIGZvcig7OykgCisgICAgeworICAgICAgICBjID0g
Z2V0b3B0KGFyZ2MsIGFyZ3YsICJkOmhsdiIpOworICAgICAgICBpZiAoYyA9PSAtMSkKKyAgICAg
ICAgICAgIGJyZWFrOworCisgICAgICAgIHN3aXRjaChjKSAKKyAgICAgICAgeworICAgICAgICAg
ICAgY2FzZSAndic6CisgICAgICAgICAgICAgICAgdmVyYm9zZSA9IDE7CisgICAgICAgICAgICAg
ICAgYnJlYWs7CisgICAgICAgICAgICBjYXNlICdsJzoKKyAgICAgICAgICAgICAgICBjdHJsLmRl
dGFpbCA9IDE7CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAgICAgICAgICBjYXNlICdoJzoK
KyAgICAgICAgICAgICAgICBoZWxwKCk7CisgICAgICAgICAgICAgICAgYnJlYWs7CisgICAgICAg
ICAgICBjYXNlICdkJzoKKyAgICAgICAgICAgICAgICBkaXIgPSBvcHRhcmc7CisgICAgICAgICAg
ICAgICAgYnJlYWs7CisgICAgICAgICAgICBkZWZhdWx0OgorICAgICAgICAgICAgICAgIGJyZWFr
OworICAgICAgICB9CisgICAgfQorICAgIAorICAgIGltZ2ZpbGUgPSBhcmd2W29wdGluZCsrXTsK
KyAgICAKKyAgICBpZiAob3B0aW5kID4gYXJnYykKKyAgICAgIGhlbHAoKTsKKyAgICAKKyAgICBC
bG9ja0RyaXZlclN0YXRlICpicyA9IGJkcnZfbmV3KCIiKTsKKyAgICBpZighYnMpCisgICAgICAg
IGVycm9yKCJOb3QgZW5vdWdoIG1lbW9yeSBmb3IgYmRydl9uZXdcbiIpOworICAgIGlmKGJkcnZf
b3BlbihicywgaW1nZmlsZSwgQlJEVl9PX0ZMQUdTKSA8IDApCisgICAgICAgIGVycm9yKCJDb3Vs
ZCBub3Qgb3BlbiAnJXMnXG4iLCBpbWdmaWxlKTsKKyAgICAKKyAgICBvZmZfdCBvZmZfYnl0ZXMg
PSAwOworICAgIG9mZl90IHNpemVfYnl0ZXMgPSAwOworICAgIGludCBpID0gMDsKKyAgICBsc19w
YXJ0aXRpb25fdCogcGFydHMgPSAobHNfcGFydGl0aW9uX3QqKW1hbGxvYyhNQVhfUEFSVElUSU9O
UyAqIHNpemVvZihsc19wYXJ0aXRpb25fdCkpOworICAgIGludCBjb3VudCA9IGVudW1fcGFydGl0
aW9uKGJzLCBwYXJ0cyk7CisgICAgCisgICAgaWYoIWRpcikKKyAgICB7CisgICAgICAgIC8vZmlu
ZF9wYXJ0aXRpb24oYnMsIDE1LCAmb2ZmX2J5dGVzLCAmc2l6ZV9ieXRlcyk7CisgICAgICAgIHBy
aW50ZigiaWRcdGFjdGl2ZVx0dHlwZVx0ZnNcdHN0YXJ0X3NlY3Rvclx0dG90YWxfc2VjdG9yc1xu
Iik7CisgICAgICAgIGZvcihpID0gMDsgaSA8IGNvdW50OyBpKyspCisgICAgICAgIHsKKyAgICAg
ICAgICAgIHByaW50ZigiJWRcdCVzXHQlc1x0JXNcdCV1XHQldVxuIiwgCisgICAgICAgICAgICAg
ICAgICAgcGFydHNbaV0uaWQsIAorICAgICAgICAgICAgICAgICAgIHBhcnRzW2ldLnBhcnQuYm9v
dGFibGU9PTB4ODAgPyAiYWN0aXZlIiA6ICJub25lLWFjdGl2ZSIsCisgICAgICAgICAgICAgICAg
ICAgKHBhcnRzW2ldLnBhcnQuc3lzdGVtPT0weDBmIHx8IHBhcnRzW2ldLnBhcnQuc3lzdGVtPT0w
eDA1KSA/ICJleHRlbmQiIDogKHBhcnRzW2ldLmlkPj01ID8gImxvZ2ljYWwiIDogInByaW1hcnki
KSwKKyAgICAgICAgICAgICAgICAgICBqdWRnZV9mcygmcGFydHNbaV0pLAorICAgICAgICAgICAg
ICAgICAgIHBhcnRzW2ldLnBhcnQuc3RhcnRfc2VjdG9yX2FicywKKyAgICAgICAgICAgICAgICAg
ICBwYXJ0c1tpXS5wYXJ0Lm5iX3NlY3RvcnNfYWJzCisgICAgICAgICAgICAgICAgICAgKTsKKyAg
ICAgICAgfQorCisgICAgICAgIGdvdG8gZmFpbDsKKyAgICB9CisgICAgZWxzZQorICAgIHsKKyAg
ICAgICAgZ3J1Yl9mc19wbHVnaW4oKTsKKyAgICAgIAorICAgICAgICBncnViX2ZpbGVfdCBmaWxl
ID0gTlVMTDsKKyAgICAgICAgY2hhciAqcGF0aCA9IE5VTEw7CisgICAgICAgIGludCB3aGljaF9w
YXJ0ID0gMTsKKyAgICAgIAorICAgICAgICBmaWxlID0gKGdydWJfZmlsZV90KW1hbGxvYyhzaXpl
b2YoKmZpbGUpKTsKKyAgICAgICAgZmlsZS0+YnMgPSBiczsKKyAgICAgICAgZmlsZS0+ZGF0YSA9
IE5VTEw7CisKKyAgICAgICAgaWYoJy8nICE9IGRpcltzdHJsZW4oZGlyKSAtIDFdKQorICAgICAg
ICAgICAgc3RyY2F0KGRpciwgIi8iKTsKKworICAgICAgICBnZXRfcGFydGl0aW9uX3BhdGgoZGly
LCAmd2hpY2hfcGFydCwgJnBhdGgpOworICAgICAgICBpZih3aGljaF9wYXJ0IDwgMSB8fCB3aGlj
aF9wYXJ0ID4gY291bnQpCisgICAgICAgIHsKKyAgICAgICAgICAgIGZwcmludGYoc3RkZXJyLCAi
ZXJyb3I6IGNoZWNrIHRoZSBwYXJ0aXRpb24gbnVtYmVyIVxuIik7CisgICAgICAgICAgICBnb3Rv
IGZhaWw7CisgICAgICAgIH0KKworICAgICAgICBmaWxlLT5wYXJ0X29mZl9zZWN0b3IgPSBwYXJ0
c1t3aGljaF9wYXJ0IC0gMV0ucGFydC5zdGFydF9zZWN0b3JfYWJzOworICAgICAgICBjdHJsLmRp
cm5hbWUgPSBkaXI7CisgICAgICAgIHByaW50Zigiob5uYW1lXHQiCisgICAgICAgICAgICAgICAi
c2l6ZShieXRlcylcdCIKKyAgICAgICAgICAgICAgICJkaXI/XHQiCisgICAgICAgICAgICAgICAi
ZGF0ZVx0IgorICAgICAgICAgICAgICAgInRpbWWhv1xuIik7CisKKyAgICAgICAganVkZ2VfZnMo
JnBhcnRzW3doaWNoX3BhcnQgLSAxXSk7CisgICAgICAgIEZTX1RZUEUgZnNfdHlwZSA9IHBhcnRz
W3doaWNoX3BhcnQgLSAxXS5mc190eXBlOworICAgICAgICBpZiAoZnNfdHlwZSA9PSBGU19VTktO
T1dOKSAKKyAgICAgICAgeworICAgICAgICAgICAgZXJyeCgxLCAidW5rbm93biBmaWxlIHN5c3Rl
bSFcbiIpOworICAgICAgICB9CisKKyAgICAgICAgZ3J1Yl9mc19wbHVnW2ZzX3R5cGVdLmxzKGZp
bGUsIHBhdGgsIE5VTEwsICh2b2lkKikmY3RybCk7CisgICAgICAgIGZpbGUtPmRhdGEgPyBmcmVl
KGZpbGUtPmRhdGEpIDogMDsKKyAgICAgICAgZnJlZShmaWxlKTsKKyAgICB9CisgICAgCisgICAg
CisgIGZhaWw6CisgICAgYmRydl9kZWxldGUoYnMpOworICAgIGZyZWUocGFydHMpOworICAgIHJl
dHVybiAwOworfQorCisKKworc3RhdGljIGludCBpbWdfY2F0KGludCBhcmdjLCBjaGFyICoqYXJn
dikKK3sKKyAgICBpbnQgYyA9IC0xOworICAgIGNoYXIgKmltZ2ZpbGUgPSBOVUxMOworICAgIGNo
YXIgKmZpbGVuYW1lID0gTlVMTDsKKyAgICBjaGFyIHZlcmJvc2UgPSAwOworCisgICAgZm9yKDs7
KSB7CisgICAgICAgIGMgPSBnZXRvcHQoYXJnYywgYXJndiwgImY6aHYiKTsKKyAgICAgICAgaWYg
KGMgPT0gLTEpCisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgc3dpdGNoKGMpIHsKKyAgICAg
ICAgICAgIGNhc2UgJ3YnOgorICAgICAgICAgICAgICAgIHZlcmJvc2UgPSAxOworICAgICAgICAg
ICAgICAgIGJyZWFrOworICAgICAgICAgICAgY2FzZSAnaCc6CisgICAgICAgICAgICAgICAgaGVs
cCgpOworICAgICAgICAgICAgICAgIGJyZWFrOworICAgICAgICAgICAgY2FzZSAnZic6CisgICAg
ICAgICAgICAgICAgZmlsZW5hbWUgPSBvcHRhcmc7CisgICAgICAgICAgICAgICAgYnJlYWs7Cisg
ICAgICAgICAgICBkZWZhdWx0OgorICAgICAgICAgICAgICAgIGJyZWFrOworICAgICAgICB9Cisg
ICAgfQorICAgIAorICAgIGltZ2ZpbGUgPSBhcmd2W29wdGluZCsrXTsKKyAgICBpZiAob3B0aW5k
ID4gYXJnYykKKyAgICAgICAgaGVscCgpOworCisgICAgCisgICAgaWYoIWZpbGVuYW1lKQorICAg
IHsKKyAgICAgICAgcHJpbnRmKCJlcnJvcjogc3BlY2lmaWMgdGhlIGZpbGUgdG8gc2hvdyFcbiIp
OworICAgICAgICByZXR1cm4gLTE7CisgICAgfQorICAgICAgICAKKyAgICBCbG9ja0RyaXZlclN0
YXRlICpicyA9IGJkcnZfbmV3KCIiKTsKKyAgICBpZighYnMpCisgICAgICAgIGVycngoLTEsICJO
b3QgZW5vdWdoIG1lbW9yeSBmb3IgYmRydl9uZXdcbiIpOworICAgIGlmKGJkcnZfb3Blbihicywg
aW1nZmlsZSwgQlJEVl9PX0ZMQUdTKSA8IDApCisgICAgICAgIGVycngoLTEsICJDb3VsZCBub3Qg
b3BlbiAlc1xuIiwgaW1nZmlsZSk7CisgICAgCisKKyAgICB1aW50IGJ1Zl9zaXplID0gNDA5NjsK
KyAgICBjaGFyKiBidWYgPSAoY2hhciopbWFsbG9jKGJ1Zl9zaXplKTsKKyAgICBvZmZfdCBvZmZf
Ynl0ZXMgPSAwOworICAgIG9mZl90IHNpemVfYnl0ZXMgPSAwOworICAgIGludCBpID0gMDsKKyAg
ICBsc19wYXJ0aXRpb25fdCAqcGFydHMgPSAobHNfcGFydGl0aW9uX3QqKW1hbGxvYyhNQVhfUEFS
VElUSU9OUyAqIHNpemVvZihsc19wYXJ0aXRpb25fdCkpOworICAgIGludCBjb3VudCA9IGVudW1f
cGFydGl0aW9uKGJzLCBwYXJ0cyk7CisgICAgCisgICAgICAgCisgICAgeworICAgICAgICBncnVi
X2ZzX3BsdWdpbigpOworCisgICAgICAgIGdydWJfZmlsZV90IGZpbGUgPSBOVUxMOworICAgICAg
ICBjaGFyICpwYXRoID0gTlVMTDsKKyAgICAgICAgaW50IHdoaWNoX3BhcnQgPSAxOworICAgICAg
CisgICAgICAgIGZpbGUgPSAoZ3J1Yl9maWxlX3QpbWFsbG9jKHNpemVvZigqZmlsZSkpOworICAg
ICAgICBmaWxlLT5icyA9IGJzOworICAgICAgICBmaWxlLT5kYXRhID0gTlVMTDsKKyAgICAgIAor
ICAgICAgICBjaGFyKiBwID0gc3RyY2hyKGZpbGVuYW1lLCAnLycpOworICAgICAgICBpZighcCkK
KyAgICAgICAgeworICAgICAgICAgICAgZXJyeCgtMSwgInBsZWFzZSBjaGVjayB0aGUgZmlsZSBw
YXRoIVxuIik7CisgICAgICAgIH0KKyAgICAgICAgZWxzZQorICAgICAgICB7CisgICAgICAgICAg
ICBwID0gc3RyY2hyKHAsICcvJyk7CisgICAgICAgICAgICBpZighcCkgZXJyeCgtMSwgImNoZWNr
IHRoZSBmaWxlIHBhdGghIVxuIik7CisgICAgICAgIH0KKwkgIAorICAgICAgICBnZXRfcGFydGl0
aW9uX3BhdGgoZmlsZW5hbWUsICZ3aGljaF9wYXJ0LCAmcGF0aCk7CisgICAgICAgIERCRygicGFy
dD0lZCwgcGF0aD0lcyIsIHdoaWNoX3BhcnQsIHBhdGgpOworICAgICAgICBpZih3aGljaF9wYXJ0
IDwgMSB8fCB3aGljaF9wYXJ0ID4gY291bnQpCisgICAgICAgIHsKKyAgICAgICAgICAgIHByaW50
ZigiZXJyb3I6IGNoZWNrIHRoZSBwYXJ0aXRpb24gbnVtYmVyIVxuIik7CisgICAgICAgICAgICBn
b3RvIGZhaWw7CisgICAgICAgIH0KKyAgICAgICAgZmlsZS0+cGFydF9vZmZfc2VjdG9yID0gcGFy
dHNbd2hpY2hfcGFydCAtIDFdLnBhcnQuc3RhcnRfc2VjdG9yX2FiczsKKyAgICAgICAganVkZ2Vf
ZnMoJnBhcnRzW3doaWNoX3BhcnQgLSAxXSk7CisgICAgICAgIEZTX1RZUEUgZnNfdHlwZSA9IHBh
cnRzW3doaWNoX3BhcnQgLSAxXS5mc190eXBlOworICAgICAgICAoZnNfdHlwZSA9PSBGU19VTktO
T1dOKSA/IGVycngoMSwgInVua25vd24gZmlsZSBzeXN0ZW0hXG4iKSA6IDA7CisgICAgICAgIGdy
dWJfZnNfdCBncnViX2ZzX3BsZyA9IGdydWJfZnNfcGx1Z1tmc190eXBlXTsKKyAgIAorICAgICAg
ICBpZihncnViX2ZzX3BsZy5vcGVuKGZpbGUsIHBhdGgpID09IDApCisgICAgICAgIHsKKyAgICAg
ICAgICAgIC8vcHJpbnRmKCJmaWxlIHNpemU9JXpkIGJ5dGVzXG4iLCAoZmlsZS0+c2l6ZSkpOwor
CSAgCisgICAgICAgICAgICBncnViX3NpemVfdCBsZW4gPSBmaWxlLT5zaXplOworICAgICAgICAg
ICAgZ3J1Yl9vZmZfdCBvZmYgPSAwOworICAgICAgICAgICAgY2hhciAgdG1wZmlsZVsyNTZdPXt9
OworICAgICAgICAgICAgc3RybmNweSh0bXBmaWxlLCBnZXRlbnYoIkhPTUUiKSwgc2l6ZW9mKHRt
cGZpbGUpKTsKKyAgICAgICAgICAgIHRtcGZpbGVbc2l6ZW9mKHRtcGZpbGUpIC0gMV0gPSAnXDAn
OworICAgICAgICAgICAgc3RyY2F0KHRtcGZpbGUsICIvdG1wLmZpbGUiKTsKKwkgICAgCisgICAg
ICAgICAgICBpZighYnVmKQorICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIHBlcnJvcigi
bm90IGVub3VnaCBtZW1vcnkhXG4iKTsKKyAgICAgICAgICAgICAgICBnb3RvIGZhaWw7CisgICAg
ICAgICAgICB9CisgICAgICAgICAgICBlbHNlCisgICAgICAgICAgICB7CisgICAgICAgICAgICAg
ICAgZ3J1Yl9zaXplX3QgcmVhZGVkID0gMDsKKyAgICAgICAgICAgICAgICBncnViX3NpemVfdCBs
ZWZ0ICA9IGxlbjsKKyAgICAgICAgICAgICAgICBncnViX3NpemVfdCB0b3RhbCA9IDA7CisgICAg
ICAgICAgICAgICAgRklMRSogZiA9IGZvcGVuKHRtcGZpbGUgLCJ3Iik7CisgICAgICAgICAgICAg
ICAgaWYoIWYpCisgICAgICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZXJyb3Io
ImZvcGVuIGVycm9yIik7CisgICAgICAgICAgICAgICAgICAgIGdvdG8gZmFpbDsKKyAgICAgICAg
ICAgICAgICB9CisJICAgICAgCisJICAgICAgCisgICAgICAgICAgICAgICAgKGxlZnQgPiBidWZf
c2l6ZSkgPyAobGVmdCA9IGJ1Zl9zaXplKSA6IDA7CisgICAgICAgICAgICAgICAgd2hpbGUoKHJl
YWRlZCA9IGdydWJfZnNfcGxnLnJlYWQoZmlsZSwgb2ZmLCBsZWZ0LCBidWYpKQorICAgICAgICAg
ICAgICAgICAgICAgICYmIHRvdGFsIDw9IGxlbgorICAgICAgICAgICAgICAgICAgICAgICYmIHJl
YWRlZCA+IDApCisgICAgICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBEQkcoInJl
YWRlZD0lemQiLCByZWFkZWQpOworICAgICAgICAgICAgICAgICAgICB0b3RhbCArPSBmd3JpdGUo
YnVmLCAxLCByZWFkZWQsIGYpOworICAgICAgICAgICAgICAgICAgICBvZmYgPSB0b3RhbDsKKyAg
ICAgICAgICAgICAgICAgICAgbGVmdCA9IGxlbiAtIHRvdGFsOworICAgICAgICAgICAgICAgICAg
ICAobGVmdCA8PSBidWZfc2l6ZSkgPyAwICA6IChsZWZ0ID0gYnVmX3NpemUpOworICAgICAgICAg
ICAgICAgICAgICBEQkcoInRvdGFsPSV6ZCIsIHRvdGFsKTsKKyAgICAgICAgICAgICAgICB9Owor
ICAgICAgICAgICAgICAgIGZjbG9zZShmKTsKKwkgICAgICAKKyAgICAgICAgICAgICAgICBpZih0
b3RhbCAhPSBsZW4pCisgICAgICAgICAgICAgICAgeworICAgICAgICAgICAgICAgICAgICBwZXJy
b3IoInJlYWQgZXJyb3IiKTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBmYWlsOworICAgICAg
ICAgICAgICAgIH0KKyAgICAgICAgICAgICAgICBlbHNlCisgICAgICAgICAgICAgICAgeworICAg
ICAgICAgICAgICAgICAgICBzcHJpbnRmKGJ1ZiwgImNhdCAlcyIsIHRtcGZpbGUpOworICAgICAg
ICAgICAgICAgICAgICBzeXN0ZW0oYnVmKTsKKyAgCisgICAgICAgICAgICAgICAgfQorICAgICAg
ICAgICAgfQorICAgICAgICB9CisgICAgICAgIGVsc2UKKyAgICAgICAgeworICAgICAgICAgICAg
cHJpbnRmKCJvcGVuIGZhaWxlZCFcbiIpOworICAgICAgICB9CisgICAgICAKKyAgICAgIAorICAg
ICAgICBncnViX2ZzX3BsZy5jbG9zZShmaWxlKTsKKyAgICAgICAgZnJlZShmaWxlKTsKKyAgICB9
CisgICAgCisgICAgCisgIGZhaWw6CisgICAgZnJlZShidWYpOworICAgIGJkcnZfZGVsZXRlKGJz
KTsKKyAgICBmcmVlKHBhcnRzKTsKKyAgICByZXR1cm4gMDsKK30KKworCisKIHN0YXRpYyBpbnQg
aW1nX2NyZWF0ZShpbnQgYXJnYywgY2hhciAqKmFyZ3YpCiB7CiAgICAgaW50IGMsIHJldCwgZmxh
Z3M7CiAgICAgY29uc3QgY2hhciAqZm10ID0gInJhdyI7CiAgICAgY29uc3QgY2hhciAqZmlsZW5h
bWU7CiAgICAgY29uc3QgY2hhciAqYmFzZV9maWxlbmFtZSA9IE5VTEw7CiAgICAgdWludDY0X3Qg
c2l6ZTsKICAgICBjb25zdCBjaGFyICpwOwpAQCAtODUwLDE2ICsxMTkxLDE3IEBAIHN0YXRpYyB2
b2lkIGltZ19zbmFwc2hvdChpbnQgYXJnYywgY2hhciAKICAgICB9CiAKICAgICAvKiBDbGVhbnVw
ICovCiAgICAgYmRydl9kZWxldGUoYnMpOwogfQogCiBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAq
KmFyZ3YpCiB7CisgIAogICAgIGNvbnN0IGNoYXIgKmNtZDsKIAogICAgIGJkcnZfaW5pdCgpOwog
ICAgIGlmIChhcmdjIDwgMikKICAgICAgICAgaGVscCgpOwogICAgIGNtZCA9IGFyZ3ZbMV07CiAg
ICAgYXJnYy0tOyBhcmd2Kys7CiAgICAgaWYgKCFzdHJjbXAoY21kLCAiY3JlYXRlIikpIHsKQEAg
LTg2NywxMyArMTIwOSwxOSBAQCBpbnQgbWFpbihpbnQgYXJnYywgY2hhciAqKmFyZ3YpCiAgICAg
fSBlbHNlIGlmICghc3RyY21wKGNtZCwgImNvbW1pdCIpKSB7CiAgICAgICAgIGltZ19jb21taXQo
YXJnYywgYXJndik7CiAgICAgfSBlbHNlIGlmICghc3RyY21wKGNtZCwgImNvbnZlcnQiKSkgewog
ICAgICAgICBpbWdfY29udmVydChhcmdjLCBhcmd2KTsKICAgICB9IGVsc2UgaWYgKCFzdHJjbXAo
Y21kLCAiaW5mbyIpKSB7CiAgICAgICAgIGltZ19pbmZvKGFyZ2MsIGFyZ3YpOwogICAgIH0gZWxz
ZSBpZiAoIXN0cmNtcChjbWQsICJzbmFwc2hvdCIpKSB7CiAgICAgICAgIGltZ19zbmFwc2hvdChh
cmdjLCBhcmd2KTsKLSAgICB9IGVsc2UgeworICAgIH0gZWxzZSBpZiAoIXN0cmNtcChjbWQsICJs
cyIpKSB7CisgICAgICAgIGltZ19scyhhcmdjLCBhcmd2KTsgICAgCisgICAgfSBlbHNlIGlmICgh
c3RyY21wKGNtZCwgImNhdCIpKSB7CisgICAgICAgIGltZ19jYXQoYXJnYywgYXJndik7CisgICAg
fQorICAgIGVsc2UgewogICAgICAgICBoZWxwKCk7CiAgICAgfQorICAgIAogICAgIHJldHVybiAw
OwogfQpkaWZmIC0tZXhjbHVkZT0uc3ZuIC1ycE4gLVU4IHhlbi00LjEuMi1hL3Rvb2xzL2lvZW11
LXFlbXUteGVuL3R5cGVzLmggeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVtdS14ZW4vdHlwZXMu
aAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4vdHlwZXMuaAkxOTcwLTAxLTAx
IDA3OjAwOjAwLjAwMDAwMDAwMCArMDcwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvaW9lbXUtcWVt
dS14ZW4vdHlwZXMuaAkyMDEyLTEyLTI4IDE2OjAyOjQxLjAxNjkzMjYyMiArMDgwMApAQCAtMCww
ICsxLDM1IEBACisvKgorICogIEdSVUIgIC0tICBHUmFuZCBVbmlmaWVkIEJvb3Rsb2FkZXIKKyAq
ICBDb3B5cmlnaHQgKEMpIDIwMDIsMjAwNiwyMDA3ICBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24s
IEluYy4KKyAqCisgKiAgR1JVQiBpcyBmcmVlIHNvZnR3YXJlOiB5b3UgY2FuIHJlZGlzdHJpYnV0
ZSBpdCBhbmQvb3IgbW9kaWZ5CisgKiAgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBhcyBwdWJsaXNoZWQgYnkKKyAqICB0aGUgRnJlZSBTb2Z0d2Fy
ZSBGb3VuZGF0aW9uLCBlaXRoZXIgdmVyc2lvbiAzIG9mIHRoZSBMaWNlbnNlLCBvcgorICogIChh
dCB5b3VyIG9wdGlvbikgYW55IGxhdGVyIHZlcnNpb24uCisgKgorICogIEdSVUIgaXMgZGlzdHJp
YnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKKyAqICBidXQgV0lUSE9V
VCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgorICog
IE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNl
ZSB0aGUKKyAqICBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgor
ICoKKyAqICBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJh
bCBQdWJsaWMgTGljZW5zZQorICogIGFsb25nIHdpdGggR1JVQi4gIElmIG5vdCwgc2VlIDxodHRw
Oi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4KKyAqLworCisjaWZuZGVmIEdSVUJfVFlQRVNfQ1BV
X0hFQURFUgorI2RlZmluZSBHUlVCX1RZUEVTX0NQVV9IRUFERVIJMQorCisvKiBUaGUgc2l6ZSBv
ZiB2b2lkICouICAqLworI2RlZmluZSBHUlVCX1RBUkdFVF9TSVpFT0ZfVk9JRF9QCTQKKworLyog
VGhlIHNpemUgb2YgbG9uZy4gICovCisjZGVmaW5lIEdSVUJfVEFSR0VUX1NJWkVPRl9MT05HCQk0
CisKKy8qIGkzODYgaXMgbGl0dGxlLWVuZGlhbi4gICovCisjdW5kZWYgR1JVQl9UQVJHRVRfV09S
RFNfQklHRU5ESUFOCisKKyNkZWZpbmUgR1JVQl9UQVJHRVRfSTM4NgkJMQorCisjZGVmaW5lIEdS
VUJfVEFSR0VUX01JTl9BTElHTgkJMQorCisjZW5kaWYgLyogISBHUlVCX1RZUEVTX0NQVV9IRUFE
RVIgKi8KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9pb2Vt
dS1xZW11LXhlbi94ODZfNjQvdHlwZXMuaCB4ZW4tNC4xLjItYi90b29scy9pb2VtdS1xZW11LXhl
bi94ODZfNjQvdHlwZXMuaAotLS0geGVuLTQuMS4yLWEvdG9vbHMvaW9lbXUtcWVtdS14ZW4veDg2
XzY0L3R5cGVzLmgJMTk3MC0wMS0wMSAwNzowMDowMC4wMDAwMDAwMDAgKzA3MDAKKysrIHhlbi00
LjEuMi1iL3Rvb2xzL2lvZW11LXFlbXUteGVuL3g4Nl82NC90eXBlcy5oCTIwMTItMTItMjggMTY6
MDI6NDEuMDE3ODAyMzcxICswODAwCkBAIC0wLDAgKzEsMzkgQEAKKy8qCisgKiAgR1JVQiAgLS0g
IEdSYW5kIFVuaWZpZWQgQm9vdGxvYWRlcgorICogIENvcHlyaWdodCAoQykgMjAwOCAgRnJlZSBT
b2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuCisgKgorICogIEdSVUIgaXMgZnJlZSBzb2Z0d2FyZTog
eW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQorICogIGl0IHVuZGVyIHRoZSB0
ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgYXMgcHVibGlzaGVkIGJ5Cisg
KiAgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgZWl0aGVyIHZlcnNpb24gMyBvZiB0aGUg
TGljZW5zZSwgb3IKKyAqICAoYXQgeW91ciBvcHRpb24pIGFueSBsYXRlciB2ZXJzaW9uLgorICoK
KyAqICBHUlVCIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2Vm
dWwsCisgKiAgYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxp
ZWQgd2FycmFudHkgb2YKKyAqICBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJU
SUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlCisgKiAgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2Ug
Zm9yIG1vcmUgZGV0YWlscy4KKyAqCisgKiAgWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29w
eSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UKKyAqICBhbG9uZyB3aXRoIEdSVUIu
ICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uCisgKi8KKworI2lm
bmRlZiBHUlVCX1RZUEVTX0NQVV9IRUFERVIKKyNkZWZpbmUgR1JVQl9UWVBFU19DUFVfSEVBREVS
CTEKKworLyogVGhlIHNpemUgb2Ygdm9pZCAqLiAgKi8KKyNkZWZpbmUgR1JVQl9UQVJHRVRfU0la
RU9GX1ZPSURfUAk4CisKKy8qIFRoZSBzaXplIG9mIGxvbmcuICAqLworI2lmZGVmIF9fTUlOR1cz
Ml9fCisjZGVmaW5lIEdSVUJfVEFSR0VUX1NJWkVPRl9MT05HCQk0CisjZWxzZQorI2RlZmluZSBH
UlVCX1RBUkdFVF9TSVpFT0ZfTE9ORwkJOAorI2VuZGlmCisKKy8qIHg4Nl82NCBpcyBsaXR0bGUt
ZW5kaWFuLiAgKi8KKyN1bmRlZiBHUlVCX1RBUkdFVF9XT1JEU19CSUdFTkRJQU4KKworI2RlZmlu
ZSBHUlVCX1RBUkdFVF9YODZfNjQJCTEKKworI2RlZmluZSBHUlVCX1RBUkdFVF9NSU5fQUxJR04J
CTEKKworI2VuZGlmIC8qICEgR1JVQl9UWVBFU19DUFVfSEVBREVSICovCg==
--047d7b66f24b9af9a904d1e55c43
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b66f24b9af9a904d1e55c43--


From xen-devel-bounces@lists.xen.org Fri Dec 28 09:27:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 09:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToWDJ-00014C-MM; Fri, 28 Dec 2012 09:26:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1ToWDH-000147-8a
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 09:26:48 +0000
Received: from [193.109.254.147:23784] by server-10.bemta-14.messagelabs.com
	id 0B/33-13263-6D56DD05; Fri, 28 Dec 2012 09:26:46 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1356686802!11375448!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1288 invoked from network); 28 Dec 2012 09:26:42 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 09:26:42 -0000
Received: by mail-ea0-f174.google.com with SMTP id e13so4148561eaa.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Dec 2012 01:26:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=r4w243Ax6LDeeFwVrnvYjCghcj9QyPcrcYI6yDCu5Co=;
	b=SVogj0x6lgDQhJ8VTgIAkrblxlpCpLmqlzt6JIPdulPlsKtwyo8b5pPN4q53A3u32Z
	rrJ6jAK2w+zY7m1dJTH4GcaUwiaVBsLrQeorDOZusXDqpmVvFN4pcl6gbljySGvh1lcD
	mfTklhwUISD/bTApiCD+Z3lExYuJ5qb28bYjEq0LrWxkAAk5WRSxV0R59yAZ3LE0F7Xh
	sCMBdXlV6ksljq2u26WSUPYGA5DSyV6eDbwhlDiBTcIpzwoH57C0qQiteJfYE9/tOcvT
	0BLFGcdY8SFweoMz5CKdU8KVcsi3JNBR86LyW2rnx0zfYWvCps33WJbeO+PCLBbISNKh
	604A==
MIME-Version: 1.0
Received: by 10.14.174.198 with SMTP id x46mr85108598eel.23.1356686802445;
	Fri, 28 Dec 2012 01:26:42 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Fri, 28 Dec 2012 01:26:42 -0800 (PST)
Date: Fri, 28 Dec 2012 17:26:42 +0800
Message-ID: <CA+ePHTAxJ6mWyrC-snd9=kQkJuAEzERdi6brHiq6nFcgMC3PAw@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7b62430aee63c904d1e642bf
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] separate xen checkpoint file as three parts (for
	xen-4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b62430aee63c904d1e642bf
Content-Type: multipart/alternative; boundary=047d7b62430aee63c004d1e642bd

--047d7b62430aee63c004d1e642bd
Content-Type: text/plain; charset=ISO-8859-1

Hi,
    We can separate the xen checkpoint file into three parts: the xl config
file, the memory dump file of the virtual machine, and the qemu state file.
    So I have added two xl sub-commands, `xl save2` and `xl restore2`; the
final effect is as follows:


[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ sudo xl save2
Usage: xl [-v] save2 [options] <Domain> <CheckpointFile1>
<CheckpointFile2> <CheckpointFile3> [<ConfigFile>]

Save a domain state as three separate files to restore later.

Options:

-h  Print this help.
-c  Leave domain running after creating the snapshot.

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ sudo xl restore2
Usage: xl [-v] restore2 [options] [<ConfigFile>] <CheckpointFile1>
<CheckpointFile2> <CheckpointFile3>

Restore a domain from a saved state.

Options:

-h  Print this help.
-p  Do not unpause domain after restoring it.
-e  Do not wait in the background for the death of the domain.
-d  Enable debug messages.

Besides, the `vncdisplay` option in the xl config file will determine the ID
of the domainU shown in `xl list`, so it is convenient to determine the vm-id
from the xl config file.

For more details, please check the attachment.
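For illustration, a hypothetical session with the new sub-commands might look
like the following. The domain name and checkpoint file names are invented for
this example (the patch does not prescribe them), and the commands naturally
require a Xen host with this patch applied:

```shell
# Save domain "guest1" into three separate checkpoint files; per the
# usage text above, an optional config file may follow the third file.
sudo xl save2 guest1 guest1.ckpt1 guest1.ckpt2 guest1.ckpt3

# Restore the domain later from the same three files:
sudo xl restore2 guest1.ckpt1 guest1.ckpt2 guest1.ckpt3
```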

--047d7b62430aee63c004d1e642bd--
--047d7b62430aee63c904d1e642bf
Content-Type: application/octet-stream; name="xl-save2-restore2.patch"
Content-Disposition: attachment; filename="xl-save2-restore2.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hb949f8g0

ZGlmZiAtLWV4Y2x1ZGU9LnN2biAtLWV4Y2x1ZGU9JyoucmVqJyAtLWV4Y2x1ZGU9Jyoub3JpZycg
LXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGlieGwuYyB4ZW4tNC4xLjItYi90b29s
cy9saWJ4bC9saWJ4bC5jCi0tLSB4ZW4tNC4xLjItYS90b29scy9saWJ4bC9saWJ4bC5jCTIwMTEt
MTAtMjEgMDE6MDU6NDIuMDAwMDAwMDAwICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9saWJ4
bC9saWJ4bC5jCTIwMTItMTItMjggMTc6MDA6MDMuNDg1NjYyMjgwICswODAwCkBAIC00NjEsMTYg
KzQ2MSwzMCBAQCBpbnQgbGlieGxfZG9tYWluX3N1c3BlbmQobGlieGxfY3R4ICpjdHgsCiAgICAg
aW50IHJjID0gMDsKIAogICAgIHJjID0gbGlieGxfX2RvbWFpbl9zdXNwZW5kX2NvbW1vbihjdHgs
IGRvbWlkLCBmZCwgaHZtLCBsaXZlLCBkZWJ1Zyk7CiAgICAgaWYgKCFyYyAmJiBodm0pCiAgICAg
ICAgIHJjID0gbGlieGxfX2RvbWFpbl9zYXZlX2RldmljZV9tb2RlbChjdHgsIGRvbWlkLCBmZCk7
CiAgICAgcmV0dXJuIHJjOwogfQogCitpbnQgbGlieGxfZG9tYWluX3N1c3BlbmQyKGxpYnhsX2N0
eCAqY3R4LCBsaWJ4bF9kb21haW5fc3VzcGVuZF9pbmZvICppbmZvLAorICAgICAgICAgICAgICAg
ICAgICAgICAgIHVpbnQzMl90IGRvbWlkLCBpbnQgZmQyLCBpbnQgZmQzKQoreworICAgIGludCBo
dm0gPSBsaWJ4bF9fZG9tYWluX2lzX2h2bShjdHgsIGRvbWlkKTsKKyAgICBpbnQgbGl2ZSA9IGlu
Zm8gIT0gTlVMTCAmJiBpbmZvLT5mbGFncyAmIFhMX1NVU1BFTkRfTElWRTsKKyAgICBpbnQgZGVi
dWcgPSBpbmZvICE9IE5VTEwgJiYgaW5mby0+ZmxhZ3MgJiBYTF9TVVNQRU5EX0RFQlVHOworICAg
IGludCByYyA9IDA7CisKKyAgICByYyA9IGxpYnhsX19kb21haW5fc3VzcGVuZF9jb21tb24oY3R4
LCBkb21pZCwgZmQyLCBodm0sIGxpdmUsIGRlYnVnKTsKKyAgICBpZiAoIXJjICYmIGh2bSkKKyAg
ICAgIHJjID0gbGlieGxfX2RvbWFpbl9zYXZlX2RldmljZV9tb2RlbDIoY3R4LCBkb21pZCwgZmQy
LCBmZDMpOworICAgIHJldHVybiByYzsKK30KKwogaW50IGxpYnhsX2RvbWFpbl9wYXVzZShsaWJ4
bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQpCiB7CiAgICAgaW50IHJldDsKICAgICByZXQgPSB4
Y19kb21haW5fcGF1c2UoY3R4LT54Y2gsIGRvbWlkKTsKICAgICBpZiAocmV0PDApIHsKICAgICAg
ICAgTElCWExfX0xPR19FUlJOTyhjdHgsIExJQlhMX19MT0dfRVJST1IsICJwYXVzaW5nIGRvbWFp
biAlZCIsIGRvbWlkKTsKICAgICAgICAgcmV0dXJuIEVSUk9SX0ZBSUw7CiAgICAgfQpkaWZmIC0t
ZXhjbHVkZT0uc3ZuIC0tZXhjbHVkZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1V
OCB4ZW4tNC4xLjItYS90b29scy9saWJ4bC9saWJ4bF9jcmVhdGUuYyB4ZW4tNC4xLjItYi90b29s
cy9saWJ4bC9saWJ4bF9jcmVhdGUuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGlieGxf
Y3JlYXRlLmMJMjAxMS0xMC0yMSAwMTowNTo0Mi4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEu
Mi1iL3Rvb2xzL2xpYnhsL2xpYnhsX2NyZWF0ZS5jCTIwMTItMTItMjggMTc6MDA6NDUuNjE3OTM1
MDE5ICswODAwCkBAIC0xOTYsMzMgKzE5Niw1MiBAQCBpbnQgbGlieGxfX2RvbWFpbl9idWlsZChs
aWJ4bF9jdHggKmN0eCwgCiAgICAgfQogICAgIHJldCA9IGxpYnhsX19idWlsZF9wb3N0KGN0eCwg
ZG9taWQsIGluZm8sIHN0YXRlLCB2bWVudHMsIGxvY2FsZW50cyk7CiBvdXQ6CiAKICAgICBsaWJ4
bF9fZnJlZV9hbGwoJmdjKTsKICAgICByZXR1cm4gcmV0OwogfQogCisKK3N0YXRpYyBpbnQgb3Zl
cnJpZGVfcWVtdV9zdGF0ZShsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgKmRvbWlkLCBpbnQgcmVz
dG9yZV9mZDMpOworCisKIHN0YXRpYyBpbnQgZG9tYWluX3Jlc3RvcmUobGlieGxfY3R4ICpjdHgs
IGxpYnhsX2RvbWFpbl9idWlsZF9pbmZvICppbmZvLAogICAgICAgICAgICAgICAgICAgICAgICAg
IHVpbnQzMl90IGRvbWlkLCBpbnQgZmQsIGxpYnhsX2RvbWFpbl9idWlsZF9zdGF0ZSAqc3RhdGUs
CiAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZGV2aWNlX21vZGVsX2luZm8gKmRtX2lu
Zm8pCiB7CiAgICAgbGlieGxfX2djIGdjID0gTElCWExfSU5JVF9HQyhjdHgpOwogICAgIGNoYXIg
Kip2bWVudHMgPSBOVUxMLCAqKmxvY2FsZW50cyA9IE5VTEw7CiAgICAgc3RydWN0IHRpbWV2YWwg
c3RhcnRfdGltZTsKICAgICBpbnQgaSwgcmV0LCBlc2F2ZSwgZmxhZ3M7CisgICAgaW50IHNlcGVy
YXRlZCA9IDA7CisgICAgaW50IHJlc3RvcmVfZmQzID0gLTE7CiAKICAgICByZXQgPSBsaWJ4bF9f
YnVpbGRfcHJlKGN0eCwgZG9taWQsIGluZm8sIHN0YXRlKTsKICAgICBpZiAocmV0KQogICAgICAg
ICBnb3RvIG91dDsKIAogICAgIHJldCA9IGxpYnhsX19kb21haW5fcmVzdG9yZV9jb21tb24oY3R4
LCBkb21pZCwgaW5mbywgc3RhdGUsIGZkKTsKICAgICBpZiAocmV0KQogICAgICAgICBnb3RvIG91
dDsKIAorICAgIGlmIChWRVJTSU9OX1JFU1RPUkVfRlJPTV8zRiA9PSBjdHgtPnhsX3NhdmVfcmVz
dG9yZS54bF9yZXN0b3JlX3ZlcnNpb24pCisgICAgICB7CisJSU5GTygiZ29pbmcgdG8gb3Zlcmlk
ZSBxZW11IHN0YXRlIGZpbGUgYXQgL3Zhci9saWIveGVuLyBhZnRlciB4Y19kb21haW5fcmVzdG9y
ZSEiKTsKKwlyZXN0b3JlX2ZkMyA9IG9wZW4oY3R4LT54bF9zYXZlX3Jlc3RvcmUuZjMsIE9fUkRP
TkxZKTsKKwlzZXBlcmF0ZWQgPSAxOworCXJldCA9IG92ZXJyaWRlX3FlbXVfc3RhdGUoY3R4LCAm
ZG9taWQsIHJlc3RvcmVfZmQzKTsKKwlpZiAocmV0KQorCSAgeworCSAgICBFUlJPUigib3Zlcmlk
ZSBxZW11IHN0YXRlIGZpbGUgZmFpbGVkISIpOworCSAgICBnb3RvIG91dDsKKwkgIH0KKyAgICAg
IH0KKwogICAgIGdldHRpbWVvZmRheSgmc3RhcnRfdGltZSwgTlVMTCk7CiAKICAgICBpZiAoaW5m
by0+aHZtKSB7CiAgICAgICAgIHZtZW50cyA9IGxpYnhsX19jYWxsb2MoJmdjLCA3LCBzaXplb2Yo
Y2hhciAqKSk7CiAgICAgICAgIHZtZW50c1swXSA9ICJydGMvdGltZW9mZnNldCI7CiAgICAgICAg
IHZtZW50c1sxXSA9IChpbmZvLT51Lmh2bS50aW1lb2Zmc2V0KSA/IGluZm8tPnUuaHZtLnRpbWVv
ZmZzZXQgOiAiIjsKICAgICAgICAgdm1lbnRzWzJdID0gImltYWdlL29zdHlwZSI7CiAgICAgICAg
IHZtZW50c1szXSA9ICJodm0iOwpAQCAtMjcxLDE2ICsyOTAsMjAgQEAgb3V0OgogICAgICAgICBm
bGFncyAmPSB+T19OT05CTE9DSzsKICAgICAgICAgaWYgKGZjbnRsKGZkLCBGX1NFVEZMLCBmbGFn
cykgPT0gLTEpCiAgICAgICAgICAgICBMSUJYTF9fTE9HX0VSUk5PKGN0eCwgTElCWExfX0xPR19F
UlJPUiwgInVuYWJsZSB0byBwdXQgcmVzdG9yZSBmZCIKICAgICAgICAgICAgICAgICAgICAgICAg
ICAiIGJhY2sgdG8gYmxvY2tpbmcgbW9kZSIpOwogICAgIH0KIAogICAgIGVycm5vID0gZXNhdmU7
CiAgICAgbGlieGxfX2ZyZWVfYWxsKCZnYyk7CisgICAgCisgICAgaWYgKC0xICE9IHJlc3RvcmVf
ZmQzKQorICAgICAgY2xvc2UocmVzdG9yZV9mZDMpOworCiAgICAgcmV0dXJuIHJldDsKIH0KIAog
aW50IGxpYnhsX19kb21haW5fbWFrZShsaWJ4bF9jdHggKmN0eCwgbGlieGxfZG9tYWluX2NyZWF0
ZV9pbmZvICppbmZvLAogICAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCAqZG9taWQpCiAg
Lyogb24gZW50cnksIGxpYnhsX2RvbWlkX3ZhbGlkX2d1ZXN0KGRvbWlkKSBtdXN0IGJlIGZhbHNl
OwogICAqIG9uIGV4aXQgKGV2ZW4gZXJyb3IgZXhpdCksIGRvbWlkIG1heSBiZSB2YWxpZCBhbmQg
cmVmZXIgdG8gYSBkb21haW4gKi8KIHsKQEAgLTI5MSwyOCArMzE0LDMzIEBAIGludCBsaWJ4bF9f
ZG9tYWluX21ha2UobGlieGxfY3R4ICpjdHgsIGwKICAgICBjaGFyICpyb19wYXRoc1tdID0geyAi
Y3B1IiwgIm1lbW9yeSIsICJkZXZpY2UiLCAiZXJyb3IiLCAiZHJpdmVycyIsCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgImNvbnRyb2wiLCAiYXR0ciIsICJtZXNzYWdlcyIgfTsKICAgICBjaGFy
ICpkb21fcGF0aCwgKnZtX3BhdGg7CiAgICAgc3RydWN0IHhzX3Blcm1pc3Npb25zIHJvcGVybVsy
XTsKICAgICBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMgcndwZXJtWzFdOwogICAgIHhzX3RyYW5zYWN0
aW9uX3QgdCA9IDA7CiAgICAgeGVuX2RvbWFpbl9oYW5kbGVfdCBoYW5kbGU7CiAKKyAgICB1aW50
MzJfdCByZWFsX2RvbWlkID0gKmRvbWlkOworICAgICpkb21pZCA9IDA7CisgICAgSU5GTygidGVt
cG9yYXJpbHkgcmV2ZXJ0IGRvbWlkIGFzIDAgdG8gbWFrZSBzdXJlIGludmFsaWQgZG9tYWluIGlk
IDogXjA8eDxtYXgiKTsKICAgICBhc3NlcnQoIWxpYnhsX2RvbWlkX3ZhbGlkX2d1ZXN0KCpkb21p
ZCkpOwogCiAgICAgdXVpZF9zdHJpbmcgPSBsaWJ4bF9fdXVpZDJzdHJpbmcoJmdjLCBpbmZvLT51
dWlkKTsKICAgICBpZiAoIXV1aWRfc3RyaW5nKSB7CiAgICAgICAgIHJjID0gRVJST1JfTk9NRU07
CiAgICAgICAgIGdvdG8gb3V0OwogICAgIH0KIAogICAgIGZsYWdzID0gaW5mby0+aHZtID8gWEVO
X0RPTUNUTF9DREZfaHZtX2d1ZXN0IDogMDsKICAgICBmbGFncyB8PSBpbmZvLT5oYXAgPyBYRU5f
RE9NQ1RMX0NERl9oYXAgOiAwOwogICAgIGZsYWdzIHw9IGluZm8tPm9vcyA/IDAgOiBYRU5fRE9N
Q1RMX0NERl9vb3Nfb2ZmOwogICAgICpkb21pZCA9IC0xOworICAgICpkb21pZCA9IHJlYWxfZG9t
aWQ7CisgICAgSU5GTygiY2hhbmdlIGRvbWlkIGJhY2sgdG8gdm5jZGlzcGxheSAldSAuLi4iLCAq
ZG9taWQpOwogCiAgICAgLyogVWx0aW1hdGVseSwgaGFuZGxlIGlzIGFuIGFycmF5IG9mIDE2IHVp
bnQ4X3QsIHNhbWUgYXMgdXVpZCAqLwogICAgIGxpYnhsX3V1aWRfY29weSgobGlieGxfdXVpZCAq
KWhhbmRsZSwgJmluZm8tPnV1aWQpOwogCiAgICAgcmV0ID0geGNfZG9tYWluX2NyZWF0ZShjdHgt
PnhjaCwgaW5mby0+c3NpZHJlZiwgaGFuZGxlLCBmbGFncywgZG9taWQpOwogICAgIGlmIChyZXQg
PCAwKSB7CiAgICAgICAgIExJQlhMX19MT0dfRVJSTk9WQUwoY3R4LCBMSUJYTF9fTE9HX0VSUk9S
LCByZXQsICJkb21haW4gY3JlYXRpb24gZmFpbCIpOwogICAgICAgICByYyA9IEVSUk9SX0ZBSUw7
CkBAIC00MDMsMjAgKzQzMSwyMyBAQCByZXRyeV90cmFuc2FjdGlvbjoKIAogc3RhdGljIGludCBk
b19kb21haW5fY3JlYXRlKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fY29uZmlnICpkX2Nv
bmZpZywKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9jb25zb2xlX3JlYWR5IGNi
LCB2b2lkICpwcml2LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90ICpkb21p
ZF9vdXQsIGludCByZXN0b3JlX2ZkKQogewogICAgIGxpYnhsX19kZXZpY2VfbW9kZWxfc3RhcnRp
bmcgKmRtX3N0YXJ0aW5nID0gMDsKICAgICBsaWJ4bF9kZXZpY2VfbW9kZWxfaW5mbyAqZG1faW5m
byA9ICZkX2NvbmZpZy0+ZG1faW5mbzsKICAgICBsaWJ4bF9kb21haW5fYnVpbGRfc3RhdGUgc3Rh
dGU7Ci0gICAgdWludDMyX3QgZG9taWQ7CisgICAgdWludDMyX3QgZG9taWQsIGN1c3RvbWl6ZV9k
b21pZDsKICAgICBpbnQgaSwgcmV0OwogCisgICAgY3VzdG9taXplX2RvbWlkID0gZF9jb25maWct
PmRtX2luZm8udm5jZGlzcGxheTsKICAgICBkb21pZCA9IDA7CisgICAgZG9taWQgPSBjdXN0b21p
emVfZG9taWQ7CisgICAgSU5GTygiY3VzdG9taXplIGN1cnJlbnQgZG9taWQgYXMgJXUuLi4iLCBk
b21pZCk7CiAKICAgICByZXQgPSBsaWJ4bF9fZG9tYWluX21ha2UoY3R4LCAmZF9jb25maWctPmNf
aW5mbywgJmRvbWlkKTsKICAgICBpZiAocmV0KSB7CiAgICAgICAgIGZwcmludGYoc3RkZXJyLCAi
Y2Fubm90IG1ha2UgZG9tYWluOiAlZFxuIiwgcmV0KTsKICAgICAgICAgcmV0ID0gRVJST1JfRkFJ
TDsKICAgICAgICAgZ290byBlcnJvcl9vdXQ7CiAgICAgfQogCkBAIC01NjAsOCArNTkxLDIzNyBA
QCBpbnQgbGlieGxfZG9tYWluX2NyZWF0ZV9uZXcobGlieGxfY3R4ICpjCiAgICAgcmV0dXJuIGRv
X2RvbWFpbl9jcmVhdGUoY3R4LCBkX2NvbmZpZywgY2IsIHByaXYsIGRvbWlkLCAtMSk7CiB9CiAK
IGludCBsaWJ4bF9kb21haW5fY3JlYXRlX3Jlc3RvcmUobGlieGxfY3R4ICpjdHgsIGxpYnhsX2Rv
bWFpbl9jb25maWcgKmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBs
aWJ4bF9jb25zb2xlX3JlYWR5IGNiLCB2b2lkICpwcml2LCB1aW50MzJfdCAqZG9taWQsIGludCBy
ZXN0b3JlX2ZkKQogewogICAgIHJldHVybiBkb19kb21haW5fY3JlYXRlKGN0eCwgZF9jb25maWcs
IGNiLCBwcml2LCBkb21pZCwgcmVzdG9yZV9mZCk7CiB9CisKKy8qCisgKnJlZmVyIHRvIGxpYnhj
L3hlbmd1ZXN0LmggCisgKmRlZmluZSBYQ19ERVZJQ0VfTU9ERUxfUkVTVE9SRV9GSUxFICIvdmFy
L2xpYi94ZW4vcWVtdS1yZXN1bWUiCisgKi8KKworI2RlZmluZSBSREVYQUNUKGZkLCBidWYsIHNp
emUpIHJkZXhhY3QoY3R4LCBmZCwgYnVmLCBzaXplKQorCitzdGF0aWMgc3NpemVfdCByZGV4YWN0
KGxpYnhsX2N0eCAqY3R4LCBpbnQgZmQsIHZvaWQqIGJ1Ziwgc2l6ZV90IHNpemUpCit7CisgIHNp
emVfdCBvZmZzZXQgPSAwOworICBzc2l6ZV90IGxlbjsKKyAgCisgIHdoaWxlIChvZmZzZXQgPCBz
aXplKQorICAgIHsKKyAgICAgIGxlbiA9IHJlYWQoZmQsIGJ1ZiArIG9mZnNldCwgc2l6ZSAtIG9m
ZnNldCk7CisgICAgICBpZiAoMCA9PSBsZW4pCisJeworCSAgTElCWExfX0xPR19FUlJOTyhjdHgs
IExJQlhMX19MT0dfRVJST1IsICJ4bCByZWFkIGV4YWN0IDogMC1MRU5HVEggcmVhZCEiKTsKKwkg
IHJldHVybiAtMjsKKwl9CisgICAgICBlbHNlIGlmIChsZW4gPCAwKQorCXsKKwkgIGlmICggLTEg
PT0gbGVuICYmIGVycm5vID09IEVJTlRSKQorCSAgICB7CisJICAgICAgY29udGludWU7CisJICAg
IH0KKwkgIExJQlhMX19MT0dfRVJSTk8oY3R4LCBMSUJYTF9fTE9HX0VSUk9SLCAieGwgcmVhZCBl
eGFjdCA6IG5lZ2F0aXZlIHJlYWQhIik7CisJICByZXR1cm4gLTE7CisJfQorICAgICAgb2Zmc2V0
ICs9IGxlbjsKKyAgICB9CisKKyAgcmV0dXJuIDA7Cit9CisKK3R5cGVkZWYgc3RydWN0IHFlbXVf
YnVmCit7CisgIGNoYXIgc2lnbmF0dXJlWzIxICsgMV07CisgIHVpbnQzMl90IGJ1ZnNpemU7Cisg
IHVpbnQ4X3QgKmJ1ZjsKK31xZW11X2J1Zl90OworCisKK3N0YXRpYyBpbnQgY29tcGF0X2J1ZmZl
cl9xZW11KGxpYnhsX2N0eCAqY3R4LCBpbnQgZmQsIHFlbXVfYnVmX3QgKnFlbXVidWYpCit7Cisg
IHVpbnQ4X3QgKnFidWYsICp0bXA7CisgIGludCBibGVuID0gMCwgZGxlbiA9IDA7CisgIGludCBy
YzsKKworICBibGVuID0gODE5MjsKKyAgaWYgKCAhKHFidWYgPSBtYWxsb2MoYmxlbikpKQorICAg
IHsKKyAgICAgIEVSUk9SKCJlcnJvciBhbGxvY2F0aW5nIFFFTVUgYnVmZmVyIik7CisgICAgICBy
ZXR1cm4gLTE7CisgICAgfSAKKworICB3aGlsZSggKHJjID0gcmVhZChmZCwgcWJ1ZitkbGVuLCBi
bGVuLWRsZW4pKSA+IDAgKQorICAgIHsKKyAgICAgIElORk8oInhsIHJlYWQgJWQgYnl0ZXMgUUVN
VSBEQVRBIiwgcmMpOworICAgICAgZGxlbiArPSByYzsKKyAgICAgIAorICAgICAgaWYgKGRsZW4g
PT0gYmxlbikKKwl7CisJICBJTkZPKCIlZC1ieXRlIFFFTVUgYnVmZmVyIGZ1bGwsIHJlYWxsb2Nh
dGluZy4uLiIsIGRsZW4pOworCSAgYmxlbiArPSA0MDk2OworCSAgdG1wID0gcmVhbGxvYyhxYnVm
LCBibGVuKTsKKwkgIGlmKCAhdG1wICkKKwkgICAgeworCSAgICAgIEVSUk9SKCJlcnJvciBncm93
aW5nIFFFTVUgYnVmZmVyIHRvICVkIGJ5dGVzIiwgYmxlbik7CisJICAgICAgZnJlZShxYnVmKTsK
KwkgICAgICByZXR1cm4gLTE7CisJICAgIH0KKwkgIHFidWYgPSB0bXA7CisJfQorICAgIH0KKyAg
CisgIGlmIChyYyA8IDApCisgICAgeworICAgICAgRVJST1IoImVycm9yIHJlYWRpbmcgUUVNVSBk
YXRhIik7CisgICAgICBmcmVlKHFidWYpOworICAgICAgcmV0dXJuIC0xOworICAgIH0KKworICBp
ZiggbWVtY21wKHFidWYsICJRRVZNIiwgNCkgKQorICAgIHsKKyAgICAgIEVSUk9SKCJpbnZhbGlk
IFFFTVUgbWFnaWMgOiAweCUwOHggKGV4cGVjdGVkIDogYFFFVk1gKSIsICoodW5zaWduZWQgaW50
KilxYnVmKTsKKyAgICAgIGZyZWUocWJ1Zik7CisgICAgICByZXR1cm4gLTE7CisgICAgfQorCisg
IHFlbXVidWYtPmJ1ZnNpemUgPSBkbGVuOworICBxZW11YnVmLT5idWYgPSBxYnVmOworCisgIHJl
dHVybiAwOworfQorCisKK3N0YXRpYyBpbnQgYnVmZmVyX3FlbXUobGlieGxfY3R4ICpjdHgsIGlu
dCBmZCwgcWVtdV9idWZfdCAqcWVtdWJ1ZikKK3sKKyAgdWludDMyX3QgcWxlbjsKKyAgdWludDhf
dCAqdG1wOworCisgIGlmKCBSREVYQUNUKGZkLCAmcWxlbiwgc2l6ZW9mKHFsZW4pKSApCisgICAg
eworICAgICAgRVJST1IoImVycm9yIHJlYWRpbmcgUUVNVSBkYXRhIGxlbmd0aCBmaWVsZCBpbiBo
ZWFkZXIiKTsKKyAgICAgIHJldHVybiAtMTsKKyAgICB9CisKKyAgaWYocWxlbiA+IHFlbXVidWYt
PmJ1ZnNpemUpCisgICAgeworICAgICAgaWYocWVtdWJ1Zi0+YnVmKQorCXsKKwkgIHRtcCA9IHJl
YWxsb2MocWVtdWJ1Zi0+YnVmLCBxbGVuKTsKKwkgIGlmKHRtcCkKKwkgICAgcWVtdWJ1Zi0+YnVm
ID0gdG1wOworCSAgZWxzZQorCSAgICB7CisJICAgICAgRVJST1IoImVycm9yIHJlYWxsb2NhdGlu
ZyBRRU1VIGJ1ZmZlciIpOworCSAgICAgIHJldHVybiAtMTsKKwkgICAgfQorCX0KKyAgICAgIGVs
c2UKKwl7CisJICBxZW11YnVmLT5idWYgPSBtYWxsb2MocWxlbik7CisJICBpZiAoICFxZW11YnVm
LT5idWYgKQorCSAgICB7CisJICAgICAgRVJST1IoImVycm9yIGFsbG9jYXRpbmcgUUVNVSBidWZm
ZXIiKTsKKwkgICAgICByZXR1cm4gLTE7CisJICAgIH0KKwl9CisgICAgfQorICAKKyAgcWVtdWJ1
Zi0+YnVmc2l6ZSA9IHFsZW47CisKKyAgaWYoIFJERVhBQ1QoZmQsIHFlbXVidWYtPmJ1ZiwgcWVt
dWJ1Zi0+YnVmc2l6ZSkgKQorICAgIHsKKyAgICAgIEVSUk9SKCJlcnJvciByZWFkaW5nIFFFTVUg
ZGF0YSIpOworICAgICAgcmV0dXJuIC0xOworICAgIH0KKyAgCisgIHJldHVybiAwOworfQorCisK
Kworc3RhdGljIGludCBvdmVycmlkZV9xZW11X3N0YXRlKGxpYnhsX2N0eCAqY3R4LCB1aW50MzJf
dCAqZG9taWQsIGludCByZXN0b3JlX2ZkMykKK3sKKyAgdWludDMyX3QgZG9taWRfciA9IDA7Cisg
IGNoYXIgcGF0aFsyNTZdPXt9OworICBxZW11X2J1Zl90IHFlbXVidWY9e307CisgIGludCBkbV9m
ZCA9IC0xOworICB1bnNpZ25lZCBjaGFyIHFlbXVzaWdbMjFdID0ge307CisgIGludCByZXQgPSAw
OworCisgIGRvbWlkX3IgPSAqZG9taWQ7CisgIHNwcmludGYocGF0aCwgWENfREVWSUNFX01PREVM
X1JFU1RPUkVfRklMRSIuJXUiLCBkb21pZF9yKTsKKworICAKKyAgICB7CisgICAgICBJTkZPKCJi
ZWdpbiB0byBvdmVycmlkZSAlcyB3aXRoIHBhcnQgSUlJIG9mIHRoZSBzYXZlZmlsZSIsIHBhdGgp
OworICAgICAgCisgICAgICBkbV9mZCA9IG9wZW4ocGF0aCwgT19DUkVBVHxPX1dST05MWXxPX1RS
VU5DKTsKKyAgICAgIGlmICgtMSAhPSBkbV9mZCkKKwl7CisJICBpZiggUkRFWEFDVChyZXN0b3Jl
X2ZkMywgcWVtdXNpZywgc2l6ZW9mKHFlbXVzaWcpKSApCisJICAgIHsKKwkgICAgICBFUlJPUigi
cmVhZCBxZW11IHNpZ25hdHVyZSBmYWlsIik7CisJICAgICAgcmV0dXJuIC0xOworCSAgICB9CisK
KwkgIGlmICggIW1lbWNtcChxZW11c2lnLCAiUWVtdURldmljZU1vZGVsUmVjb3JkIiwgc2l6ZW9m
KHFlbXVzaWcpKSApCisJICAgIHsKKwkgICAgICBtZW1jcHkocWVtdWJ1Zi5zaWduYXR1cmUsIHFl
bXVzaWcsIHNpemVvZihxZW11c2lnKSk7CisJICAgICAgcmV0ID0gY29tcGF0X2J1ZmZlcl9xZW11
KGN0eCwgcmVzdG9yZV9mZDMsICZxZW11YnVmKTsKKwkgICAgfQorCSAgZWxzZSBpZiAoICFtZW1j
bXAocWVtdXNpZywgIkRldmljZU1vZGVsUmVjb3JkMDAwMiIsIHNpemVvZihxZW11c2lnKSkgfHwK
KwkJICAgICFtZW1jbXAocWVtdXNpZywgIlJlbXVzRGV2aWNlTW9kZWxTdGF0ZSIsIHNpemVvZihx
ZW11c2lnKSkgKQorCSAgICB7CisJICAgICAgbWVtY3B5KHFlbXVidWYuc2lnbmF0dXJlLCBxZW11
c2lnLCBzaXplb2YocWVtdXNpZykpOworCSAgICAgIHJldCA9IGJ1ZmZlcl9xZW11KGN0eCwgcmVz
dG9yZV9mZDMsICZxZW11YnVmKTsKKwkgICAgfQorCSAgCisJICBpZiAocmV0KQorCSAgICB7CisJ
ICAgICAgRVJST1IoImVycm9yIHJlYWRpbmcgUUVNVSBkYXRhIGZyb20gc2F2ZSBzdGF0ZSBmaWxl
LTMiKTsKKwkgICAgICBpZiAocWVtdWJ1Zi5idWYpCisJCWZyZWUocWVtdWJ1Zi5idWYpOworCSAg
ICAgIHJldHVybiAtMTsKKyAgICAgCSAgICB9CisJICAKKwkgIGlmKCAtMSA9PSB3cml0ZShkbV9m
ZCwgcWVtdWJ1Zi5idWYsIHFlbXVidWYuYnVmc2l6ZSkgKQorCSAgICB7CisJICAgICAgRVJST1Io
ImVycm9yIHdyaXRpbmcgUUVNVSBkYXRhIHRvICVzIiwgcGF0aCk7CisJICAgICAgaWYgKHFlbXVi
dWYuYnVmKQorCQlmcmVlKHFlbXVidWYuYnVmKTsKKwkgICAgICByZXR1cm4gLTE7CisJICAgIH0K
Kwl9CisgICAgICBlbHNlCisJeworCSAgRVJST1IoImVycm9yIG9wZW4gJXMgZm9yIHdyaXRpbmcg
UUVNVSBzdGF0ZSIsIHBhdGgpOworCSAgcmV0dXJuIC0xOworCX0KKyAgICB9CisgCisgIElORk8o
IndyaXRlIFFFTVUgc3RhdGUgdG8gJXMgZG9uZSIsIHBhdGgpOworICAKKyAgcmV0dXJuIDA7Cit9
CisKKworCisKKworCitpbnQgbGlieGxfZG9tYWluX2NyZWF0ZV9yZXN0b3JlMihsaWJ4bF9jdHgg
KmN0eCwgbGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25maWcsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX2NvbnNvbGVfcmVhZHkgY2IsIHZvaWQgKnByaXYsIHVpbnQzMl90
ICpkb21pZCwgCisJCQkJIGludCByZXN0b3JlX2ZkMiwgaW50IHJlc3RvcmVfZmQzKQoreworICBp
bnQgcmV0ID0gMDsKKworICByZXQgPSBkb19kb21haW5fY3JlYXRlKGN0eCwgZF9jb25maWcsIGNi
LCBwcml2LCBkb21pZCwgcmVzdG9yZV9mZDIpOworCisgIGlmIChyZXQpCisgICAgcmV0dXJuIHJl
dDsKKworICByZXR1cm4gMDsKK30KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtLWV4Y2x1ZGU9JyoucmVq
JyAtLWV4Y2x1ZGU9Jyoub3JpZycgLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGli
eGxfZG9tLmMgeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvbGlieGxfZG9tLmMKLS0tIHhlbi00LjEu
Mi1hL3Rvb2xzL2xpYnhsL2xpYnhsX2RvbS5jCTIwMTEtMTAtMjEgMDE6MDU6NDIuMDAwMDAwMDAw
ICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9saWJ4bC9saWJ4bF9kb20uYwkyMDEyLTEyLTI4
IDE3OjAwOjAzLjQ4NTY2MjI4MCArMDgwMApAQCAtNTc1LDE2ICs1NzUsOTYgQEAgaW50IGxpYnhs
X19kb21haW5fc2F2ZV9kZXZpY2VfbW9kZWwobGlieAogICAgICAgICB9CiAgICAgfQogICAgIGNs
b3NlKGZkMik7CiAgICAgdW5saW5rKGZpbGVuYW1lKTsKICAgICBsaWJ4bF9fZnJlZV9hbGwoJmdj
KTsKICAgICByZXR1cm4gMDsKIH0KIAoraW50IGxpYnhsX19kb21haW5fc2F2ZV9kZXZpY2VfbW9k
ZWwyKGxpYnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwgaW50IGZkMiwgaW50IGZkMykKK3sK
KyAgICBsaWJ4bF9fZ2MgZ2MgPSBMSUJYTF9JTklUX0dDKGN0eCk7CisgICAgaW50IGZkX2RtLCBj
LCBjMjsKKyAgICBjaGFyIGJ1ZlsxMDI0XTsKKyAgICBjaGFyICpmaWxlbmFtZSA9IGxpYnhsX19z
cHJpbnRmKCZnYywgIi92YXIvbGliL3hlbi9xZW11LXNhdmUuJWQiLCBkb21pZCk7CisgICAgc3Ry
dWN0IHN0YXQgc3Q7CisgICAgdWludDMyX3QgcWVtdV9zdGF0ZV9sZW47CisKKyAgICBMSUJYTF9f
TE9HKGN0eCwgTElCWExfX0xPR19ERUJVRywgIlNhdmluZyBkZXZpY2UgbW9kZWwgc3RhdGUgdG8g
JXMiLCBmaWxlbmFtZSk7CisgICAgbGlieGxfX3hzX3dyaXRlKCZnYywgWEJUX05VTEwsIGxpYnhs
X19zcHJpbnRmKCZnYywgIi9sb2NhbC9kb21haW4vMC9kZXZpY2UtbW9kZWwvJWQvY29tbWFuZCIs
IGRvbWlkKSwgInNhdmUiKTsKKyAgICBsaWJ4bF9fd2FpdF9mb3JfZGV2aWNlX21vZGVsKGN0eCwg
ZG9taWQsICJwYXVzZWQiLCBOVUxMLCBOVUxMKTsKKworICAgIGlmIChzdGF0KGZpbGVuYW1lLCAm
c3QpIDwgMCkKKyAgICB7CisgICAgICAgIExJQlhMX19MT0coY3R4LCBMSUJYTF9fTE9HX0VSUk9S
LCAiVW5hYmxlIHRvIHN0YXQgcWVtdSBzYXZlIGZpbGVcbiIpOworICAgICAgICBsaWJ4bF9fZnJl
ZV9hbGwoJmdjKTsKKyAgICAgICAgcmV0dXJuIEVSUk9SX0ZBSUw7CisgICAgfQorCisgICAgcWVt
dV9zdGF0ZV9sZW4gPSBzdC5zdF9zaXplOworICAgIExJQlhMX19MT0coY3R4LCBMSUJYTF9fTE9H
X0RFQlVHLCAiUWVtdSBzdGF0ZSBpcyAlZCBieXRlc1xuIiwgcWVtdV9zdGF0ZV9sZW4pOworCisg
ICAgYyA9IGxpYnhsX3dyaXRlX2V4YWN0bHkoY3R4LCBmZDIsIFFFTVVfU0lHTkFUVVJFLCBzdHJs
ZW4oUUVNVV9TSUdOQVRVUkUpLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICJzYXZlZC1z
dGF0ZSBmaWxlIHBhcnQgMiIsICJxZW11IHNpZ25hdHVyZSIpOworICAgIGlmIChjKSB7CisgICAg
ICAgIGxpYnhsX19mcmVlX2FsbCgmZ2MpOworICAgICAgICByZXR1cm4gYzsKKyAgICB9CisgICAg
CisgICAgYyA9IGxpYnhsX3dyaXRlX2V4YWN0bHkoY3R4LCBmZDMsIFFFTVVfU0lHTkFUVVJFLCBz
dHJsZW4oUUVNVV9TSUdOQVRVUkUpLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICJzYXZl
ZC1zdGF0ZSBmaWxlIHBhcnQgMyIsICJxZW11IHNpZ25hdHVyZSIpOworICAgIGlmIChjKSB7Cisg
ICAgICAgIGxpYnhsX19mcmVlX2FsbCgmZ2MpOworICAgICAgICByZXR1cm4gYzsKKyAgICB9CisK
KyAgICBjID0gbGlieGxfd3JpdGVfZXhhY3RseShjdHgsIGZkMiwgJnFlbXVfc3RhdGVfbGVuLCBz
aXplb2YocWVtdV9zdGF0ZV9sZW4pLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICJzYXZl
ZC1zdGF0ZSBmaWxlIHBhcnQgMiIsICJzYXZlZC1zdGF0ZSBsZW5ndGgiKTsKKyAgICBpZiAoYykg
eworICAgICAgICBsaWJ4bF9fZnJlZV9hbGwoJmdjKTsKKyAgICAgICAgcmV0dXJuIGM7CisgICAg
fQorCisgICAgYyA9IGxpYnhsX3dyaXRlX2V4YWN0bHkoY3R4LCBmZDMsICZxZW11X3N0YXRlX2xl
biwgc2l6ZW9mKHFlbXVfc3RhdGVfbGVuKSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
c2F2ZWQtc3RhdGUgZmlsZSBwYXJ0IDMiLCAic2F2ZWQtc3RhdGUgbGVuZ3RoIik7CisgICAgaWYg
KGMpIHsKKyAgICAgICAgbGlieGxfX2ZyZWVfYWxsKCZnYyk7CisgICAgICAgIHJldHVybiBjOwor
ICAgIH0KKworCisgICAgZmRfZG0gPSBvcGVuKGZpbGVuYW1lLCBPX1JET05MWSk7CisgICAgd2hp
bGUgKChjID0gcmVhZChmZF9kbSwgYnVmLCBzaXplb2YoYnVmKSkpICE9IDApIHsKKyAgICAgICAg
aWYgKGMgPCAwKSB7CisgICAgICAgICAgICBpZiAoZXJybm8gPT0gRUlOVFIpCisgICAgICAgICAg
ICAgICAgY29udGludWU7CisgICAgICAgICAgICBsaWJ4bF9fZnJlZV9hbGwoJmdjKTsKKyAgICAg
ICAgICAgIHJldHVybiBlcnJubzsKKyAgICAgICAgfQorCWMyID0gYzsKKyAgICAgICAgYyA9IGxp
YnhsX3dyaXRlX2V4YWN0bHkoCisgICAgICAgICAgICBjdHgsIGZkMiwgYnVmLCBjMiwgInNhdmVk
LXN0YXRlIGZpbGUgcGFydCAyIiwgInFlbXUgc3RhdGUiKTsKKyAgICAgICAgaWYgKGMpIHsKKyAg
ICAgICAgICAgIGxpYnhsX19mcmVlX2FsbCgmZ2MpOworICAgICAgICAgICAgcmV0dXJuIGM7Cisg
ICAgICAgIH0KKwljID0gbGlieGxfd3JpdGVfZXhhY3RseSgKKyAgICAgICAgICAgIGN0eCwgZmQz
LCBidWYsIGMyLCAic2F2ZWQtc3RhdGUgZmlsZSBwYXJ0IDMiLCAicWVtdSBzdGF0ZSIpOworICAg
ICAgICBpZiAoYykgeworICAgICAgICAgICAgbGlieGxfX2ZyZWVfYWxsKCZnYyk7CisgICAgICAg
ICAgICByZXR1cm4gYzsKKyAgICAgICAgfQorICAgIH0KKyAgICBjbG9zZShmZF9kbSk7CisgICAg
dW5saW5rKGZpbGVuYW1lKTsKKyAgICBsaWJ4bF9fZnJlZV9hbGwoJmdjKTsKKyAgICByZXR1cm4g
MDsKK30KKwogY2hhciAqbGlieGxfX3V1aWQyc3RyaW5nKGxpYnhsX19nYyAqZ2MsIGNvbnN0IGxp
YnhsX3V1aWQgdXVpZCkKIHsKICAgICBjaGFyICpzID0gbGlieGxfX3NwcmludGYoZ2MsIExJQlhM
X1VVSURfRk1ULCBMSUJYTF9VVUlEX0JZVEVTKHV1aWQpKTsKICAgICBpZiAoIXMpCiAgICAgICAg
IExJQlhMX19MT0cobGlieGxfX2djX293bmVyKGdjKSwgTElCWExfX0xPR19FUlJPUiwgImNhbm5v
dCBhbGxvY2F0ZSBmb3IgdXVpZCIpOwogICAgIHJldHVybiBzOwogfQogCmRpZmYgLS1leGNsdWRl
PS5zdm4gLS1leGNsdWRlPScqLnJlaicgLS1leGNsdWRlPScqLm9yaWcnIC1ycE4gLVU4IHhlbi00
LjEuMi1hL3Rvb2xzL2xpYnhsL2xpYnhsLmggeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvbGlieGwu
aAotLS0geGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGlieGwuaAkyMDExLTEwLTIxIDAxOjA1OjQy
LjAwMDAwMDAwMCArMDgwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvbGlieGwuaAkyMDEy
LTEyLTI4IDE2OjU5OjM4LjAwODkzNDM5OSArMDgwMApAQCAtMjExLDI2ICsyMTEsNDIgQEAgdm9p
ZCBsaWJ4bF9maWxlX3JlZmVyZW5jZV9kZXN0cm95KGxpYnhsXwogdHlwZWRlZiBzdHJ1Y3QgbGli
eGxfX2NwdWlkX3BvbGljeSBsaWJ4bF9jcHVpZF9wb2xpY3k7CiB0eXBlZGVmIGxpYnhsX2NwdWlk
X3BvbGljeSAqIGxpYnhsX2NwdWlkX3BvbGljeV9saXN0Owogdm9pZCBsaWJ4bF9jcHVpZF9kZXN0
cm95KGxpYnhsX2NwdWlkX3BvbGljeV9saXN0ICpjcHVpZF9saXN0KTsKIAogI2RlZmluZSBMSUJY
TF9QQ0lfRlVOQ19BTEwgKH4wVSkKIAogI2luY2x1ZGUgIl9saWJ4bF90eXBlcy5oIgogCisKKyNk
ZWZpbmUgVkVSU0lPTl9SRVNUT1JFX0ZST01fMUYgMQorI2RlZmluZSBWRVJTSU9OX1JFU1RPUkVf
RlJPTV8zRiAyCisKK3N0cnVjdCBzYXZlX3Jlc3RvcmVfc3BlY2lmaWMKK3sKKyAgdWludDhfdCB4
bF9zYXZlX3ZlcnNpb247CisgIHVpbnQ4X3QgeGxfcmVzdG9yZV92ZXJzaW9uOworICBjb25zdCBj
aGFyICpmMTsKKyAgY29uc3QgY2hhciAqZjI7CisgIGNvbnN0IGNoYXIgKmYzOworfTsKKwogdHlw
ZWRlZiBzdHJ1Y3QgewogICAgIHhlbnRvb2xsb2dfbG9nZ2VyICpsZzsKICAgICB4Y19pbnRlcmZh
Y2UgKnhjaDsKICAgICBzdHJ1Y3QgeHNfaGFuZGxlICp4c2g7CiAKICAgICAvKiBmb3IgY2FsbGVy
cyB3aG8gcmVhcCBjaGlsZHJlbiB3aWxseS1uaWxseTsgY2FsbGVyIG11c3Qgb25seQogICAgICAq
IHNldCB0aGlzIGFmdGVyIGxpYnhsX2luaXQgYW5kIGJlZm9yZSBhbnkgb3RoZXIgY2FsbCAtIG9y
CiAgICAgICogbWF5IGxlYXZlIHRoZW0gdW50b3VjaGVkICovCiAgICAgaW50ICgqd2FpdHBpZF9p
bnN0ZWFkKShwaWRfdCBwaWQsIGludCAqc3RhdHVzLCBpbnQgZmxhZ3MpOwogICAgIGxpYnhsX3Zl
cnNpb25faW5mbyB2ZXJzaW9uX2luZm87CisgICAgCisgIC8qYW5vdGhlciBzYXZlIGFuZCByZXN0
b3JlIGZlYXR1cmUsIG9wZXJhdGUgb24gdGhyZWUgc3BlcmF0ZWQgZmlsZXMqLworICBzdHJ1Y3Qg
c2F2ZV9yZXN0b3JlX3NwZWNpZmljIHhsX3NhdmVfcmVzdG9yZTsKIH0gbGlieGxfY3R4OwogCiBj
b25zdCBsaWJ4bF92ZXJzaW9uX2luZm8qIGxpYnhsX2dldF92ZXJzaW9uX2luZm8obGlieGxfY3R4
ICpjdHgpOwogCiB0eXBlZGVmIHN0cnVjdCB7CiAjZGVmaW5lIFhMX1NVU1BFTkRfREVCVUcgMQog
I2RlZmluZSBYTF9TVVNQRU5EX0xJVkUgMgogICAgIGludCBmbGFnczsKQEAgLTI5MCwxOSArMzA2
LDI0IEBAIGludCBsaWJ4bF9jdHhfcG9zdGZvcmsobGlieGxfY3R4ICpjdHgpOwogCiAvKiBkb21h
aW4gcmVsYXRlZCBmdW5jdGlvbnMgKi8KIHZvaWQgbGlieGxfaW5pdF9jcmVhdGVfaW5mbyhsaWJ4
bF9kb21haW5fY3JlYXRlX2luZm8gKmNfaW5mbyk7CiB2b2lkIGxpYnhsX2luaXRfYnVpbGRfaW5m
byhsaWJ4bF9kb21haW5fYnVpbGRfaW5mbyAqYl9pbmZvLCBsaWJ4bF9kb21haW5fY3JlYXRlX2lu
Zm8gKmNfaW5mbyk7CiB2b2lkIGxpYnhsX2luaXRfZG1faW5mbyhsaWJ4bF9kZXZpY2VfbW9kZWxf
aW5mbyAqZG1faW5mbywgbGlieGxfZG9tYWluX2NyZWF0ZV9pbmZvICpjX2luZm8sIGxpYnhsX2Rv
bWFpbl9idWlsZF9pbmZvICpiX2luZm8pOwogdHlwZWRlZiBpbnQgKCpsaWJ4bF9jb25zb2xlX3Jl
YWR5KShsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIHZvaWQgKnByaXYpOwogaW50IGxp
YnhsX2RvbWFpbl9jcmVhdGVfbmV3KGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fY29uZmln
ICpkX2NvbmZpZywgbGlieGxfY29uc29sZV9yZWFkeSBjYiwgdm9pZCAqcHJpdiwgdWludDMyX3Qg
KmRvbWlkKTsKIGludCBsaWJ4bF9kb21haW5fY3JlYXRlX3Jlc3RvcmUobGlieGxfY3R4ICpjdHgs
IGxpYnhsX2RvbWFpbl9jb25maWcgKmRfY29uZmlnLCBsaWJ4bF9jb25zb2xlX3JlYWR5IGNiLCB2
b2lkICpwcml2LCB1aW50MzJfdCAqZG9taWQsIGludCByZXN0b3JlX2ZkKTsKK2ludCBsaWJ4bF9k
b21haW5fY3JlYXRlX3Jlc3RvcmUyKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fY29uZmln
ICpkX2NvbmZpZywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfY29uc29s
ZV9yZWFkeSBjYiwgdm9pZCAqcHJpdiwgdWludDMyX3QgKmRvbWlkLCAKKwkJCQkgaW50IHJlc3Rv
cmVfZmQyLCBpbnQgcmVzdG9yZV9mZDMpOwogdm9pZCBsaWJ4bF9kb21haW5fY29uZmlnX2Rlc3Ry
b3kobGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25maWcpOwogaW50IGxpYnhsX2RvbWFpbl9zdXNw
ZW5kKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fc3VzcGVuZF9pbmZvICppbmZvLAogICAg
ICAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCBkb21pZCwgaW50IGZkKTsKK2ludCBsaWJ4
bF9kb21haW5fc3VzcGVuZDIobGlieGxfY3R4ICpjdHgsIGxpYnhsX2RvbWFpbl9zdXNwZW5kX2lu
Zm8gKmluZm8sCisJCQkgdWludDMyX3QgZG9taWQsIGludCBmZDIsIGludCBmZDMpOwogaW50IGxp
YnhsX2RvbWFpbl9yZXN1bWUobGlieGxfY3R4ICpjdHgsIHVpbnQzMl90IGRvbWlkKTsKIGludCBs
aWJ4bF9kb21haW5fc2h1dGRvd24obGlieGxfY3R4ICpjdHgsIHVpbnQzMl90IGRvbWlkLCBpbnQg
cmVxKTsKIGludCBsaWJ4bF9kb21haW5fZGVzdHJveShsaWJ4bF9jdHggKmN0eCwgdWludDMyX3Qg
ZG9taWQsIGludCBmb3JjZSk7CiBpbnQgbGlieGxfZG9tYWluX3ByZXNlcnZlKGxpYnhsX2N0eCAq
Y3R4LCB1aW50MzJfdCBkb21pZCwgbGlieGxfZG9tYWluX2NyZWF0ZV9pbmZvICppbmZvLCBjb25z
dCBjaGFyICpuYW1lX3N1ZmZpeCwgbGlieGxfdXVpZCBuZXdfdXVpZCk7CiAKIC8qIGdldCBtYXgu
IG51bWJlciBvZiBjcHVzIHN1cHBvcnRlZCBieSBoeXBlcnZpc29yICovCiBpbnQgbGlieGxfZ2V0
X21heF9jcHVzKGxpYnhsX2N0eCAqY3R4KTsKIApkaWZmIC0tZXhjbHVkZT0uc3ZuIC0tZXhjbHVk
ZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9s
aWJ4bC9saWJ4bF9pbnRlcm5hbC5oIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVy
bmFsLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmgJMjAxMS0x
MC0yMSAwMTowNTo0My4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhs
L2xpYnhsX2ludGVybmFsLmgJMjAxMi0xMi0yOCAxNzowMDowMy40OTI2ODE2NzQgKzA4MDAKQEAg
LTE3MCwxNiArMTcwLDE3IEBAIF9oaWRkZW4gaW50IGxpYnhsX19idWlsZF9wdihsaWJ4bF9jdHgg
KmMKICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fYnVpbGRfaW5mbyAqaW5mbywgbGlieGxfZG9t
YWluX2J1aWxkX3N0YXRlICpzdGF0ZSk7CiBfaGlkZGVuIGludCBsaWJ4bF9fYnVpbGRfaHZtKGxp
YnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgbGlieGxfZG9tYWlu
X2J1aWxkX2luZm8gKmluZm8sIGxpYnhsX2RvbWFpbl9idWlsZF9zdGF0ZSAqc3RhdGUpOwogCiBf
aGlkZGVuIGludCBsaWJ4bF9fZG9tYWluX3Jlc3RvcmVfY29tbW9uKGxpYnhsX2N0eCAqY3R4LCB1
aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fYnVpbGRfaW5m
byAqaW5mbywgbGlieGxfZG9tYWluX2J1aWxkX3N0YXRlICpzdGF0ZSwgaW50IGZkKTsKIF9oaWRk
ZW4gaW50IGxpYnhsX19kb21haW5fc3VzcGVuZF9jb21tb24obGlieGxfY3R4ICpjdHgsIHVpbnQz
Ml90IGRvbWlkLCBpbnQgZmQsIGludCBodm0sIGludCBsaXZlLCBpbnQgZGVidWcpOwogX2hpZGRl
biBpbnQgbGlieGxfX2RvbWFpbl9zYXZlX2RldmljZV9tb2RlbChsaWJ4bF9jdHggKmN0eCwgdWlu
dDMyX3QgZG9taWQsIGludCBmZCk7CitfaGlkZGVuIGludCBsaWJ4bF9fZG9tYWluX3NhdmVfZGV2
aWNlX21vZGVsMihsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIGludCBmZDIsIGludCBm
ZDMpOwogX2hpZGRlbiB2b2lkIGxpYnhsX191c2VyZGF0YV9kZXN0cm95YWxsKGxpYnhsX2N0eCAq
Y3R4LCB1aW50MzJfdCBkb21pZCk7CiAKIC8qIGZyb20geGxfZGV2aWNlICovCiBfaGlkZGVuIGNo
YXIgKmxpYnhsX19kZXZpY2VfZGlza19zdHJpbmdfb2ZfYmFja2VuZChsaWJ4bF9kaXNrX2JhY2tl
bmQgYmFja2VuZCk7CiBfaGlkZGVuIGNoYXIgKmxpYnhsX19kZXZpY2VfZGlza19zdHJpbmdfb2Zf
Zm9ybWF0KGxpYnhsX2Rpc2tfZm9ybWF0IGZvcm1hdCk7CiAKIF9oaWRkZW4gaW50IGxpYnhsX19k
ZXZpY2VfcGh5c2Rpc2tfbWFqb3JfbWlub3IoY29uc3QgY2hhciAqcGh5c3BhdGgsIGludCAqbWFq
b3IsIGludCAqbWlub3IpOwogX2hpZGRlbiBpbnQgbGlieGxfX2RldmljZV9kaXNrX2Rldl9udW1i
ZXIoY29uc3QgY2hhciAqdmlydHBhdGgpOwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC0tZXhjbHVkZT0n
Ki5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9saWJ4
bC9saWJ4bF91dGlscy5oIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL2xpYnhsX3V0aWxzLmgKLS0t
IHhlbi00LjEuMi1hL3Rvb2xzL2xpYnhsL2xpYnhsX3V0aWxzLmgJMjAxMS0xMC0yMSAwMTowNTo0
My4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL2xpYnhsX3V0aWxz
LmgJMjAxMi0xMi0yOCAxNjo1OTo1NC40NjM5MzQzMTAgKzA4MDAKQEAgLTg0LDEwICs4NCwyMyBA
QCB2b2lkIGxpYnhsX2NwdW1hcF9yZXNldChsaWJ4bF9jcHVtYXAgKmNwCiAjZGVmaW5lIGxpYnhs
X2Zvcl9lYWNoX2NwdSh2YXIsIG1hcCkgZm9yICh2YXIgPSAwOyB2YXIgPCAobWFwKS5zaXplICog
ODsgdmFyKyspCiAKIGludCBsaWJ4bF9jcHVhcnJheV9hbGxvYyhsaWJ4bF9jdHggKmN0eCwgbGli
eGxfY3B1YXJyYXkgKmNwdWFycmF5KTsKIAogc3RhdGljIGlubGluZSB1aW50MzJfdCBsaWJ4bF9f
c2l6ZWtiX3RvX21iKHVpbnQzMl90IHMpIHsKICAgICByZXR1cm4gKHMgKyAxMDIzKSAvIDEwMjQ7
CiB9CiAKKworCisvKgorICpmb3Igc2ltcGxlIGRlYnVnIGxvZyBzdGF0ZW1lbnQKKyAqLworI2Rl
ZmluZSBJTkZPKF9tLCBfYS4uLikgTElCWExfX0xPRyhjdHgsIExJQlhMX19MT0dfSU5GTywgX20s
ICMjIF9hKTsKKyNkZWZpbmUgRVJST1IoX20sIF9hLi4uKSBMSUJYTF9fTE9HKGN0eCwgTElCWExf
X0xPR19FUlJPUiwgX20sICMjIF9hKTsKKworCisKKworCisKICNlbmRpZgogCmRpZmYgLS1leGNs
dWRlPS5zdm4gLS1leGNsdWRlPScqLnJlaicgLS1leGNsdWRlPScqLm9yaWcnIC1ycE4gLVU4IHhl
bi00LjEuMi1hL3Rvb2xzL2xpYnhsL01ha2VmaWxlIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL01h
a2VmaWxlCi0tLSB4ZW4tNC4xLjItYS90b29scy9saWJ4bC9NYWtlZmlsZQkyMDExLTEwLTIxIDAx
OjA1OjQyLjAwMDAwMDAwMCArMDgwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvTWFrZWZp
bGUJMjAxMi0xMi0yOCAxNzowMDo0NS42MTE5MzM4OTcgKzA4MDAKQEAgLTYsMTYgKzYsMTcgQEAg
WEVOX1JPT1QgPSAkKENVUkRJUikvLi4vLi4KIGluY2x1ZGUgJChYRU5fUk9PVCkvdG9vbHMvUnVs
ZXMubWsKIAogTUFKT1IgPSAxLjAKIE1JTk9SID0gMAogCiBYTFVNQUpPUiA9IDEuMAogWExVTUlO
T1IgPSAwCiAKKyMgQ0ZMQUdTICs9IC1nZ2RiIC1PMAogQ0ZMQUdTICs9IC1XZXJyb3IgLVduby1m
b3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucwogQ0ZMQUdTICs9IC1JLiAt
ZlBJQwogQ0ZMQUdTICs9ICQoQ0ZMQUdTX2xpYnhlbmN0cmwpICQoQ0ZMQUdTX2xpYnhlbmd1ZXN0
KSAkKENGTEFHU19saWJ4ZW5zdG9yZSkgJChDRkxBR1NfbGliYmxrdGFwY3RsKQogCiBMSUJTID0g
JChMRExJQlNfbGlieGVuY3RybCkgJChMRExJQlNfbGlieGVuZ3Vlc3QpICQoTERMSUJTX2xpYnhl
bnN0b3JlKSAkKExETElCU19saWJibGt0YXBjdGwpICQoVVRJTF9MSUJTKQogaWZlcSAoJChDT05G
SUdfTGludXgpLHkpCiBMSUJTICs9IC1sdXVpZAogZW5kaWYKZGlmZiAtLWV4Y2x1ZGU9LnN2biAt
LWV4Y2x1ZGU9JyoucmVqJyAtLWV4Y2x1ZGU9Jyoub3JpZycgLXJwTiAtVTggeGVuLTQuMS4yLWEv
dG9vbHMvbGlieGwveGxfY21kaW1wbC5jIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hsX2NtZGlt
cGwuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvbGlieGwveGxfY21kaW1wbC5jCTIwMTEtMTAtMjEg
MDE6MDU6NDMuMDAwMDAwMDAwICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9saWJ4bC94bF9j
bWRpbXBsLmMJMjAxMi0xMi0yOCAxNzowMDo0NS42MTE5MzM4OTcgKzA4MDAKQEAgLTUzLDE2ICs1
MywyMyBAQAogICAgICAgICBpbnQgbXVzdF9yYyA9IChjYWxsKTsgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBpZiAobXVzdF9yYyA8IDApIHsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAg
ICAgIGZwcmludGYoc3RkZXJyLCJ4bDogZmF0YWwgZXJyb3I6ICVzOiVkLCByYz0lZDogJXNcbiIs
ICAgICAgIFwKICAgICAgICAgICAgICAgICAgICAgX19GSUxFX18sX19MSU5FX18sIG11c3RfcmMs
ICNjYWxsKTsgICAgICAgICAgICAgICAgIFwKICAgICAgICAgICAgIGV4aXQoLW11c3RfcmMpOyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAgfSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFwKICAgICB9KQogCisjZGVmaW5lIENMT1NFRkQoX2ZkKSAoewkJCQlcCisgICAgICAgIGlm
IChfZmQgIT0gLTEpICAgICAgICAgICAgICAgICAgICAgICAgICBcCisJICB7ICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFwKKwkgICAgY2xvc2UoX2ZkKTsgICAgICAgICAgICAg
ICAgICAgICAgICAgXAorCSAgICBfZmQgPSAtMTsgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CisJICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICB9KQogCiBp
bnQgbG9nZmlsZSA9IDI7CiAKIC8qIGV2ZXJ5IGxpYnhsIGFjdGlvbiBpbiB4bCB1c2VzIHRoaXMg
c2FtZSBsaWJ4bCBjb250ZXh0ICovCiBsaWJ4bF9jdHggY3R4OwogCiAvKiB3aGVuIHdlIG9wZXJh
dGUgb24gYSBkb21haW4sIGl0IGlzIHRoaXMgb25lOiAqLwogc3RhdGljIHVpbnQzMl90IGRvbWlk
OwpAQCAtMTM2MywyOSArMTM3MCw1MSBAQCBzdGF0aWMgaW50IGNyZWF0ZV9kb21haW4oc3RydWN0
IGRvbWFpbl9jCiAgICAgbGlieGxfd2FpdGVyICp3MSA9IE5VTEwsICp3MiA9IE5VTEw7CiAgICAg
dm9pZCAqY29uZmlnX2RhdGEgPSAwOwogICAgIGludCBjb25maWdfbGVuID0gMDsKICAgICBpbnQg
cmVzdG9yZV9mZCA9IC0xOwogICAgIGludCBzdGF0dXMgPSAwOwogICAgIGxpYnhsX2NvbnNvbGVf
cmVhZHkgY2I7CiAgICAgcGlkX3QgY2hpbGRfY29uc29sZV9waWQgPSAtMTsKICAgICBzdHJ1Y3Qg
c2F2ZV9maWxlX2hlYWRlciBoZHI7CisgICAgaW50IHJlc3RvcmVfZmQxID0gLTEsIHJlc3RvcmVf
ZmQyID0gLTEsIHJlc3RvcmVfZmQzID0gLTEsIHNlcGVyYXRlZCA9IDA7CisgICAgaW50IHJlYWxf
cmVzdG9yZV9mZCA9IC0xOwogCiAgICAgbWVtc2V0KCZkX2NvbmZpZywgMHgwMCwgc2l6ZW9mKGRf
Y29uZmlnKSk7CiAKICAgICBpZiAocmVzdG9yZV9maWxlKSB7CiAgICAgICAgIHVpbnQ4X3QgKm9w
dGRhdGFfYmVnaW4gPSAwOwogICAgICAgICBjb25zdCB1aW50OF90ICpvcHRkYXRhX2hlcmUgPSAw
OwogICAgICAgICB1bmlvbiB7IHVpbnQzMl90IHUzMjsgY2hhciBiWzRdOyB9IHUzMmJ1ZjsKICAg
ICAgICAgdWludDMyX3QgYmFkZmxhZ3M7CiAKLSAgICAgICAgcmVzdG9yZV9mZCA9IG1pZ3JhdGVf
ZmQgPj0gMCA/IG1pZ3JhdGVfZmQgOgotICAgICAgICAgICAgb3BlbihyZXN0b3JlX2ZpbGUsIE9f
UkRPTkxZKTsKKwlpZiAoVkVSU0lPTl9SRVNUT1JFX0ZST01fM0YgPT0gY3R4LnhsX3NhdmVfcmVz
dG9yZS54bF9yZXN0b3JlX3ZlcnNpb24pCisJICB7CisJICAgIHJlc3RvcmVfZmQxID0gb3Blbihj
dHgueGxfc2F2ZV9yZXN0b3JlLmYxLCBPX1JET05MWSk7CisJICAgIHJlc3RvcmVfZmQyID0gb3Bl
bihjdHgueGxfc2F2ZV9yZXN0b3JlLmYyLCBPX1JET05MWSk7CisJICAgIHJlc3RvcmVfZmQzID0g
LTE7CisJICAgIHNlcGVyYXRlZCA9IDE7CisJICB9CisJZWxzZQorCSAgeworCSAgICBzZXBlcmF0
ZWQgPSAwOworCSAgICByZXN0b3JlX2ZkID0gbWlncmF0ZV9mZCA+PSAwID8gbWlncmF0ZV9mZCA6
CisJICAgICAgb3BlbihyZXN0b3JlX2ZpbGUsIE9fUkRPTkxZKTsKKwkgIH0KKwkKKwlpZiAoc2Vw
ZXJhdGVkKQorCSAgeworCSAgICByZWFsX3Jlc3RvcmVfZmQgPSByZXN0b3JlX2ZkMTsKKwkgIH0K
KwllbHNlCisJICB7CisJICAgIHJlYWxfcmVzdG9yZV9mZCA9IHJlc3RvcmVfZmQ7CisJICB9CiAK
LSAgICAgICAgQ0hLX0VSUk5PKCBsaWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgcmVzdG9yZV9mZCwg
JmhkciwKKyAgICAgICAgQ0hLX0VSUk5PKCBsaWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgcmVhbF9y
ZXN0b3JlX2ZkLCAmaGRyLAogICAgICAgICAgICAgICAgICAgIHNpemVvZihoZHIpLCByZXN0b3Jl
X2ZpbGUsICJoZWFkZXIiKSApOwogICAgICAgICBpZiAobWVtY21wKGhkci5tYWdpYywgc2F2ZWZp
bGVoZWFkZXJfbWFnaWMsIHNpemVvZihoZHIubWFnaWMpKSkgewogICAgICAgICAgICAgZnByaW50
ZihzdGRlcnIsICJGaWxlIGhhcyB3cm9uZyBtYWdpYyBudW1iZXIgLSIKICAgICAgICAgICAgICAg
ICAgICAgIiBjb3JydXB0IG9yIGZvciBhIGRpZmZlcmVudCB0b29sP1xuIik7CiAgICAgICAgICAg
ICByZXR1cm4gRVJST1JfSU5WQUw7CiAgICAgICAgIH0KICAgICAgICAgaWYgKGhkci5ieXRlb3Jk
ZXIgIT0gU0FWRUZJTEVfQllURU9SREVSX1ZBTFVFKSB7CiAgICAgICAgICAgICBmcHJpbnRmKHN0
ZGVyciwgIkZpbGUgaGFzIHdyb25nIGJ5dGUgb3JkZXJcbiIpOwpAQCAtMTQwMSwxNyArMTQzMCwx
NyBAQCBzdGF0aWMgaW50IGNyZWF0ZV9kb21haW4oc3RydWN0IGRvbWFpbl9jCiAgICAgICAgIGlm
IChiYWRmbGFncykgewogICAgICAgICAgICAgZnByaW50ZihzdGRlcnIsICJTYXZlZmlsZSBoYXMg
bWFuZGF0b3J5IGZsYWcocykgMHglIlBSSXgzMiIgIgogICAgICAgICAgICAgICAgICAgICAid2hp
Y2ggYXJlIG5vdCBzdXBwb3J0ZWQ7IG5lZWQgbmV3ZXIgeGxcbiIsCiAgICAgICAgICAgICAgICAg
ICAgIGJhZGZsYWdzKTsKICAgICAgICAgICAgIHJldHVybiBFUlJPUl9JTlZBTDsKICAgICAgICAg
fQogICAgICAgICBpZiAoaGRyLm9wdGlvbmFsX2RhdGFfbGVuKSB7CiAgICAgICAgICAgICBvcHRk
YXRhX2JlZ2luID0geG1hbGxvYyhoZHIub3B0aW9uYWxfZGF0YV9sZW4pOwotICAgICAgICAgICAg
Q0hLX0VSUk5PKCBsaWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgcmVzdG9yZV9mZCwgb3B0ZGF0YV9i
ZWdpbiwKKyAgICAgICAgICAgIENIS19FUlJOTyggbGlieGxfcmVhZF9leGFjdGx5KCZjdHgsIHJl
YWxfcmVzdG9yZV9mZCwgb3B0ZGF0YV9iZWdpbiwKICAgICAgICAgICAgICAgICAgICBoZHIub3B0
aW9uYWxfZGF0YV9sZW4sIHJlc3RvcmVfZmlsZSwgIm9wdGRhdGEiKSApOwogICAgICAgICB9CiAK
ICNkZWZpbmUgT1BUREFUQV9MRUZUICAoaGRyLm9wdGlvbmFsX2RhdGFfbGVuIC0gKG9wdGRhdGFf
aGVyZSAtIG9wdGRhdGFfYmVnaW4pKQogI2RlZmluZSBXSVRIX09QVERBVEEoYW10LCBib2R5KSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAgICAgIGlmIChPUFREQVRB
X0xFRlQgPCAoYW10KSkgeyAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICAg
ICAgZnByaW50ZihzdGRlcnIsICJTYXZlZmlsZSB0cnVuY2F0ZWQuXG4iKTsgICAgICAgXAogICAg
ICAgICAgICAgICAgIHJldHVybiBFUlJPUl9JTlZBTDsgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFwKQEAgLTE1MTIsMTkgKzE1NDEsMjkgQEAgc3RhcnQ6CiAKICAgICBpZiAoIGRvbV9pbmZv
LT5jb25zb2xlX2F1dG9jb25uZWN0ICkgewogICAgICAgICBjYiA9IGF1dG9jb25uZWN0X2NvbnNv
bGU7CiAgICAgfWVsc2V7CiAgICAgICAgIGNiID0gTlVMTDsKICAgICB9CiAKICAgICBpZiAoIHJl
c3RvcmVfZmlsZSApIHsKLSAgICAgICAgcmV0ID0gbGlieGxfZG9tYWluX2NyZWF0ZV9yZXN0b3Jl
KCZjdHgsICZkX2NvbmZpZywKKyAgICAgIGlmIChzZXBlcmF0ZWQpCisJeworCSAgcmV0ID0gbGli
eGxfZG9tYWluX2NyZWF0ZV9yZXN0b3JlMigmY3R4LCAmZF9jb25maWcsCisJCQkJCSAgICAgY2Is
ICZjaGlsZF9jb25zb2xlX3BpZCwKKwkJCQkJICAgICAmZG9taWQsIHJlc3RvcmVfZmQyLCByZXN0
b3JlX2ZkMyk7CisJfQorICAgICAgZWxzZQorCXsKKwkgIHJldCA9IGxpYnhsX2RvbWFpbl9jcmVh
dGVfcmVzdG9yZSgmY3R4LCAmZF9jb25maWcsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGNiLCAmY2hpbGRfY29uc29sZV9waWQsCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICZkb21pZCwgcmVzdG9yZV9mZCk7CisJfQorICAg
ICAgICAKICAgICB9ZWxzZXsKICAgICAgICAgcmV0ID0gbGlieGxfZG9tYWluX2NyZWF0ZV9uZXco
JmN0eCwgJmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGNiLCAmY2hpbGRfY29uc29sZV9waWQsICZkb21pZCk7CiAgICAgfQogICAgIGlmICggcmV0ICkK
ICAgICAgICAgZ290byBlcnJvcl9vdXQ7CiAKICAgICByZXQgPSBsaWJ4bF91c2VyZGF0YV9zdG9y
ZSgmY3R4LCBkb21pZCwgInhsIiwKQEAgLTE2OTgsMTYgKzE3MzcsMjAgQEAgZXJyb3Jfb3V0Ogog
CiBvdXQ6CiAgICAgaWYgKGxvZ2ZpbGUgIT0gMikKICAgICAgICAgY2xvc2UobG9nZmlsZSk7CiAK
ICAgICBsaWJ4bF9kb21haW5fY29uZmlnX2Rlc3Ryb3koJmRfY29uZmlnKTsKIAogICAgIGZyZWUo
Y29uZmlnX2RhdGEpOworICAgIENMT1NFRkQocmVzdG9yZV9mZCk7CisgICAgQ0xPU0VGRChyZXN0
b3JlX2ZkMSk7CisgICAgQ0xPU0VGRChyZXN0b3JlX2ZkMik7CisgICAgQ0xPU0VGRChyZXN0b3Jl
X2ZkMyk7CiAKIHdhaXRwaWRfb3V0OgogICAgIGlmIChjaGlsZF9jb25zb2xlX3BpZCA+IDAgJiYK
ICAgICAgICAgICAgIHdhaXRwaWQoY2hpbGRfY29uc29sZV9waWQsICZzdGF0dXMsIDApIDwgMCAm
JiBlcnJubyA9PSBFSU5UUikKICAgICAgICAgZ290byB3YWl0cGlkX291dDsKIAogICAgIC8qCiAg
ICAgICogSWYgd2UgaGF2ZSBkYWVtb25pemVkIHRoZW4gZG8gbm90IHJldHVybiB0byB0aGUgY2Fs
bGVyIC0tIHRoaXMgaGFzCkBAIC0yNDU1LDE2ICsyNDk4LDY2IEBAIHN0YXRpYyBpbnQgc2F2ZV9k
b21haW4oY29uc3QgY2hhciAqcCwgY28KICAgICBpZiAoY2hlY2twb2ludCkKICAgICAgICAgbGli
eGxfZG9tYWluX3VucGF1c2UoJmN0eCwgZG9taWQpOwogICAgIGVsc2UKICAgICAgICAgbGlieGxf
ZG9tYWluX2Rlc3Ryb3koJmN0eCwgZG9taWQsIDApOwogCiAgICAgZXhpdCgwKTsKIH0KIAorCitz
dGF0aWMgaW50IHNhdmVfZG9tYWluMihjb25zdCBjaGFyICpwLCBjb25zdCBjaGFyICpmMSwgY29u
c3QgY2hhciAqZjIsIGNvbnN0IGNoYXIgKmYzLAorCQkJaW50IGNoZWNrcG9pbnQsCisJCQljb25z
dCBjaGFyICpvdmVycmlkZV9jb25maWdfZmlsZSkKK3sKKyAgaW50IGZkMSwgZmQyLCBmZDM7Cisg
ICAgdWludDhfdCAqY29uZmlnX2RhdGE7CisgICAgaW50IGNvbmZpZ19sZW47CisKKyAgICBMT0co
InVzaW5nIHNhdmUyLi4uIik7CisKKyAgICBzYXZlX2RvbWFpbl9jb3JlX2JlZ2luKHAsIG92ZXJy
aWRlX2NvbmZpZ19maWxlLCAmY29uZmlnX2RhdGEsICZjb25maWdfbGVuKTsKKworICAgIGlmICgh
Y29uZmlnX2xlbikgeworICAgICAgICBmcHV0cygiIFNhdmVmaWxlIHdpbGwgbm90IGNvbnRhaW4g
eGwgZG9tYWluIGNvbmZpZ1xuIiwgc3RkZXJyKTsKKyAgICB9CisKKyAgICBmZDEgPSBvcGVuKGYx
LCBPX1dST05MWXxPX0NSRUFUfE9fVFJVTkMsIDA2NDQpOworICAgIGlmIChmZDEgPCAwKSB7Cisg
ICAgICAgIGZwcmludGYoc3RkZXJyLCAiRmFpbGVkIHRvIG9wZW4gdGVtcCBmaWxlICVzIGZvciB3
cml0aW5nXG4iLCBmMSk7CisgICAgICAgIGV4aXQoMik7CisgICAgfQorCisgICAgc2F2ZV9kb21h
aW5fY29yZV93cml0ZWNvbmZpZyhmZDEsIGYxLCBjb25maWdfZGF0YSwgY29uZmlnX2xlbik7Cisg
ICAgY2xvc2UoZmQxKTsKKworICAgICAgZmQyID0gb3BlbihmMiwgT19XUk9OTFl8T19DUkVBVHxP
X1RSVU5DLCAwNjQ0KTsKKyAgICBpZiAoZmQyIDwgMCkgeworICAgICAgICBmcHJpbnRmKHN0ZGVy
ciwgIkZhaWxlZCB0byBvcGVuIHRlbXAgZmlsZSAlcyBmb3Igd3JpdGluZ1xuIiwgZjIpOworICAg
ICAgICBleGl0KDIpOworICAgIH0KKyAgICBmZDMgPSBvcGVuKGYzLCBPX1dST05MWXxPX0NSRUFU
fE9fVFJVTkMsIDA2NDQpOworICAgIGlmIChmZDMgPCAwKSB7CisgICAgICAgIGZwcmludGYoc3Rk
ZXJyLCAiRmFpbGVkIHRvIG9wZW4gdGVtcCBmaWxlICVzIGZvciB3cml0aW5nXG4iLCBmMyk7Cisg
ICAgICAgIGV4aXQoMik7CisgICAgfQorICAgICAgQ0hLX0VSUk5PKGxpYnhsX2RvbWFpbl9zdXNw
ZW5kMigmY3R4LCBOVUxMLCBkb21pZCwgZmQyLCBmZDMpKTsKKyAgICBjbG9zZShmZDIpOworICAg
IGNsb3NlKGZkMyk7CisKKyAgICBpZiAoY2hlY2twb2ludCkKKyAgICAgICAgbGlieGxfZG9tYWlu
X3VucGF1c2UoJmN0eCwgZG9taWQpOworICAgIGVsc2UKKyAgICAgICAgbGlieGxfZG9tYWluX2Rl
c3Ryb3koJmN0eCwgZG9taWQsIDApOworCisgICAgZXhpdCgwKTsKK30KKworCisKIHN0YXRpYyBp
bnQgbWlncmF0ZV9yZWFkX2ZpeGVkbWVzc2FnZShpbnQgZmQsIGNvbnN0IHZvaWQgKm1zZywgaW50
IG1zZ3N6LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IGNoYXIg
KndoYXQsIGNvbnN0IGNoYXIgKnJ1bmUpIHsKICAgICBjaGFyIGJ1Zlttc2dzel07CiAgICAgY29u
c3QgY2hhciAqc3RyZWFtOwogICAgIGludCByYzsKIAogICAgIHN0cmVhbSA9IHJ1bmUgPyAibWln
cmF0aW9uIHJlY2VpdmVyIHN0cmVhbSIgOiAibWlncmF0aW9uIHN0cmVhbSI7CiAgICAgcmMgPSBs
aWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgZmQsIGJ1ZiwgbXNnc3osIHN0cmVhbSwgd2hhdCk7CkBA
IC0yODY0LDMyICsyOTU3LDEwNCBAQCBpbnQgbWFpbl9yZXN0b3JlKGludCBhcmdjLCBjaGFyICoq
YXJndikKICAgICB9IGVsc2UgaWYgKGFyZ2Mtb3B0aW5kID09IDIpIHsKICAgICAgICAgY29uZmln
X2ZpbGUgPSBhcmd2W29wdGluZF07CiAgICAgICAgIGNoZWNrcG9pbnRfZmlsZSA9IGFyZ3Zbb3B0
aW5kICsgMV07CiAgICAgfSBlbHNlIHsKICAgICAgICAgaGVscCgicmVzdG9yZSIpOwogICAgICAg
ICByZXR1cm4gMjsKICAgICB9CiAKKyAgICBjdHgueGxfc2F2ZV9yZXN0b3JlLnhsX3Jlc3RvcmVf
dmVyc2lvbiA9IFZFUlNJT05fUkVTVE9SRV9GUk9NXzFGOworCiAgICAgbWVtc2V0KCZkb21faW5m
bywgMCwgc2l6ZW9mKGRvbV9pbmZvKSk7CiAgICAgZG9tX2luZm8uZGVidWcgPSBkZWJ1ZzsKICAg
ICBkb21faW5mby5kYWVtb25pemUgPSBkYWVtb25pemU7CiAgICAgZG9tX2luZm8ucGF1c2VkID0g
cGF1c2VkOwogICAgIGRvbV9pbmZvLmNvbmZpZ19maWxlID0gY29uZmlnX2ZpbGU7CiAgICAgZG9t
X2luZm8ucmVzdG9yZV9maWxlID0gY2hlY2twb2ludF9maWxlOwogICAgIGRvbV9pbmZvLm1pZ3Jh
dGVfZmQgPSAtMTsKICAgICBkb21faW5mby5jb25zb2xlX2F1dG9jb25uZWN0ID0gY29uc29sZV9h
dXRvY29ubmVjdDsKIAogICAgIHJjID0gY3JlYXRlX2RvbWFpbigmZG9tX2luZm8pOwogICAgIGlm
IChyYyA8IDApCiAgICAgICAgIHJldHVybiAtcmM7CiAKICAgICByZXR1cm4gMDsKIH0KIAoraW50
IG1haW5fcmVzdG9yZTIoaW50IGFyZ2MsIGNoYXIgKiphcmd2KQoreworICBjb25zdCBjaGFyICpj
aGVja3BvaW50X2ZpbGUxID0gTlVMTDsKKyAgY29uc3QgY2hhciAqY2hlY2twb2ludF9maWxlMiA9
IE5VTEw7CisgIGNvbnN0IGNoYXIgKmNoZWNrcG9pbnRfZmlsZTMgPSBOVUxMOworICAgIGNvbnN0
IGNoYXIgKmNvbmZpZ19maWxlID0gTlVMTDsKKyAgICBzdHJ1Y3QgZG9tYWluX2NyZWF0ZSBkb21f
aW5mbzsKKyAgICBpbnQgcGF1c2VkID0gMCwgZGVidWcgPSAwLCBkYWVtb25pemUgPSAxLCBjb25z
b2xlX2F1dG9jb25uZWN0ID0gMDsKKyAgICBpbnQgb3B0LCByYzsKKworICAgIHdoaWxlICgob3B0
ID0gZ2V0b3B0KGFyZ2MsIGFyZ3YsICJjaHBkZSIpKSAhPSAtMSkgeworICAgICAgICBzd2l0Y2gg
KG9wdCkgeworICAgICAgICBjYXNlICdjJzoKKyAgICAgICAgICAgIGNvbnNvbGVfYXV0b2Nvbm5l
Y3QgPSAxOworICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGNhc2UgJ3AnOgorICAgICAgICAg
ICAgcGF1c2VkID0gMTsKKyAgICAgICAgICAgIGJyZWFrOworICAgICAgICBjYXNlICdkJzoKKyAg
ICAgICAgICAgIGRlYnVnID0gMTsKKyAgICAgICAgICAgIGJyZWFrOworICAgICAgICBjYXNlICdl
JzoKKyAgICAgICAgICAgIGRhZW1vbml6ZSA9IDA7CisgICAgICAgICAgICBicmVhazsKKyAgICAg
ICAgY2FzZSAnaCc6CisgICAgICAgICAgICBoZWxwKCJyZXN0b3JlMiIpOworICAgICAgICAgICAg
cmV0dXJuIDA7CisgICAgICAgIGRlZmF1bHQ6CisgICAgICAgICAgICBmcHJpbnRmKHN0ZGVyciwg
Im9wdGlvbiBgJWMnIG5vdCBzdXBwb3J0ZWQuXG4iLCBvcHRvcHQpOworICAgICAgICAgICAgYnJl
YWs7CisgICAgICAgIH0KKyAgICB9CisKKyAgICBpZiAoYXJnYy1vcHRpbmQgPT0gMykgeworICAg
ICAgICBjaGVja3BvaW50X2ZpbGUxID0gYXJndltvcHRpbmRdOworCWNoZWNrcG9pbnRfZmlsZTIg
PSBhcmd2W29wdGluZCArIDFdOworCWNoZWNrcG9pbnRfZmlsZTMgPSBhcmd2W29wdGluZCArIDJd
OworICAgIH0gZWxzZSBpZiAoYXJnYy1vcHRpbmQgPT0gNCkgeworICAgICAgICBjb25maWdfZmls
ZSA9IGFyZ3Zbb3B0aW5kXTsKKyAgICAgICAgY2hlY2twb2ludF9maWxlMSA9IGFyZ3Zbb3B0aW5k
ICsgMV07CisJY2hlY2twb2ludF9maWxlMiA9IGFyZ3Zbb3B0aW5kICsgMl07CisJY2hlY2twb2lu
dF9maWxlMyA9IGFyZ3Zbb3B0aW5kICsgM107CisgICAgfSBlbHNlIHsKKyAgICAgICAgaGVscCgi
cmVzdG9yZTIiKTsKKyAgICAgICAgcmV0dXJuIDI7CisgICAgfQorCisgICAgY3R4LnhsX3NhdmVf
cmVzdG9yZS54bF9yZXN0b3JlX3ZlcnNpb24gPSBWRVJTSU9OX1JFU1RPUkVfRlJPTV8zRjsKKyAg
ICBjdHgueGxfc2F2ZV9yZXN0b3JlLmYxID0gY2hlY2twb2ludF9maWxlMTsKKyAgICBjdHgueGxf
c2F2ZV9yZXN0b3JlLmYyID0gY2hlY2twb2ludF9maWxlMjsKKyAgICBjdHgueGxfc2F2ZV9yZXN0
b3JlLmYzID0gY2hlY2twb2ludF9maWxlMzsKKyAgCisgICAgbWVtc2V0KCZkb21faW5mbywgMCwg
c2l6ZW9mKGRvbV9pbmZvKSk7CisgICAgZG9tX2luZm8uZGVidWcgPSBkZWJ1ZzsKKyAgICBkb21f
aW5mby5kYWVtb25pemUgPSBkYWVtb25pemU7CisgICAgZG9tX2luZm8ucGF1c2VkID0gcGF1c2Vk
OworICAgIGRvbV9pbmZvLmNvbmZpZ19maWxlID0gY29uZmlnX2ZpbGU7CisgICAgZG9tX2luZm8u
cmVzdG9yZV9maWxlID0gY2hlY2twb2ludF9maWxlMTsKKyAgICBkb21faW5mby5taWdyYXRlX2Zk
ID0gLTE7CisgICAgZG9tX2luZm8uY29uc29sZV9hdXRvY29ubmVjdCA9IGNvbnNvbGVfYXV0b2Nv
bm5lY3Q7CisKKyAgICByYyA9IGNyZWF0ZV9kb21haW4oJmRvbV9pbmZvKTsKKyAgICBpZiAocmMg
PCAwKQorICAgICAgICByZXR1cm4gLXJjOworCisgICAgcmV0dXJuIDA7Cit9CisKKworCiBpbnQg
bWFpbl9taWdyYXRlX3JlY2VpdmUoaW50IGFyZ2MsIGNoYXIgKiphcmd2KQogewogICAgIGludCBk
ZWJ1ZyA9IDAsIGRhZW1vbml6ZSA9IDE7CiAgICAgaW50IG9wdDsKIAogICAgIHdoaWxlICgob3B0
ID0gZ2V0b3B0KGFyZ2MsIGFyZ3YsICJoZWQiKSkgIT0gLTEpIHsKICAgICAgICAgc3dpdGNoIChv
cHQpIHsKICAgICAgICAgY2FzZSAnaCc6CkBAIC0yOTIzLDE3ICszMDg4LDE3IEBAIGludCBtYWlu
X3NhdmUoaW50IGFyZ2MsIGNoYXIgKiphcmd2KQogICAgIGludCBjaGVja3BvaW50ID0gMDsKICAg
ICBpbnQgb3B0OwogCiAgICAgd2hpbGUgKChvcHQgPSBnZXRvcHQoYXJnYywgYXJndiwgImhjIikp
ICE9IC0xKSB7CiAgICAgICAgIHN3aXRjaCAob3B0KSB7CiAgICAgICAgIGNhc2UgJ2MnOgogICAg
ICAgICAgICAgY2hlY2twb2ludCA9IDE7CiAgICAgICAgICAgICBicmVhazsKLSAgICAgICAgY2Fz
ZSAnaCc6CisJY2FzZSAnaCc6CiAgICAgICAgICAgICBoZWxwKCJzYXZlIik7CiAgICAgICAgICAg
ICByZXR1cm4gMDsKICAgICAgICAgZGVmYXVsdDoKICAgICAgICAgICAgIGZwcmludGYoc3RkZXJy
LCAib3B0aW9uIGAlYycgbm90IHN1cHBvcnRlZC5cbiIsIG9wdG9wdCk7CiAgICAgICAgICAgICBi
cmVhazsKICAgICAgICAgfQogICAgIH0KIApAQCAtMjk0NCwxNiArMzEwOSw1MSBAQCBpbnQgbWFp
bl9zYXZlKGludCBhcmdjLCBjaGFyICoqYXJndikKIAogICAgIHAgPSBhcmd2W29wdGluZF07CiAg
ICAgZmlsZW5hbWUgPSBhcmd2W29wdGluZCArIDFdOwogICAgIGNvbmZpZ19maWxlbmFtZSA9IGFy
Z3Zbb3B0aW5kICsgMl07CiAgICAgc2F2ZV9kb21haW4ocCwgZmlsZW5hbWUsIGNoZWNrcG9pbnQs
IGNvbmZpZ19maWxlbmFtZSk7CiAgICAgcmV0dXJuIDA7CiB9CiAKK2ludCBtYWluX3NhdmUyKGlu
dCBhcmdjLCBjaGFyICoqYXJndikKK3sKKyAgY29uc3QgY2hhciAqc2YxID0gTlVMTCwgKnNmMiA9
IE5VTEwsICpzZjMgPSBOVUxMLCAqcCA9IE5VTEw7CisgICAgY29uc3QgY2hhciAqY29uZmlnX2Zp
bGVuYW1lOworICAgIGludCBjaGVja3BvaW50ID0gMDsKKyAgICBpbnQgb3B0OworCisgICAgd2hp
bGUgKChvcHQgPSBnZXRvcHQoYXJnYywgYXJndiwgImhjIikpICE9IC0xKSB7CisgICAgICAgIHN3
aXRjaCAob3B0KSB7CisgICAgICAgIGNhc2UgJ2MnOgorICAgICAgICAgICAgY2hlY2twb2ludCA9
IDE7CisgICAgICAgICAgICBicmVhazsKKwljYXNlICdoJzoKKyAgICAgICAgICAgIGhlbHAoInNh
dmUyIik7CisgICAgICAgICAgICByZXR1cm4gMDsKKyAgICAgICAgZGVmYXVsdDoKKyAgICAgICAg
ICAgIGZwcmludGYoc3RkZXJyLCAib3B0aW9uIGAlYycgbm90IHN1cHBvcnRlZC5cbiIsIG9wdG9w
dCk7CisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgfQorICAgIH0KKworICAgIGlmIChhcmdj
LW9wdGluZCA8IDMgfHwgYXJnYy1vcHRpbmQgPiA1KSB7CisgICAgICAgIGhlbHAoInNhdmUyIik7
CisgICAgICAgIHJldHVybiAyOworICAgIH0KKworICAgIHAgPSBhcmd2W29wdGluZF07CisgICAg
c2YxID0gYXJndltvcHRpbmQgKyAxXTsKKyAgICBzZjIgPSBhcmd2W29wdGluZCArIDJdOworICAg
IHNmMyA9IGFyZ3Zbb3B0aW5kICsgM107CisgICAgY29uZmlnX2ZpbGVuYW1lID0gYXJndltvcHRp
bmQgKyA0XTsKKyAgICBzYXZlX2RvbWFpbjIocCwgc2YxLCBzZjIsIHNmMywgY2hlY2twb2ludCwg
Y29uZmlnX2ZpbGVuYW1lKTsKKyAgICByZXR1cm4gMDsKK30KKwogaW50IG1haW5fbWlncmF0ZShp
bnQgYXJnYywgY2hhciAqKmFyZ3YpCiB7CiAgICAgY29uc3QgY2hhciAqcCA9IE5VTEw7CiAgICAg
Y29uc3QgY2hhciAqY29uZmlnX2ZpbGVuYW1lID0gTlVMTDsKICAgICBjb25zdCBjaGFyICpzc2hf
Y29tbWFuZCA9ICJzc2giOwogICAgIGNoYXIgKnJ1bmUgPSBOVUxMOwogICAgIGNoYXIgKmhvc3Q7
CiAgICAgaW50IG9wdCwgZGFlbW9uaXplID0gMSwgZGVidWcgPSAwOwpkaWZmIC0tZXhjbHVkZT0u
c3ZuIC0tZXhjbHVkZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4x
LjItYS90b29scy9saWJ4bC94bF9jbWR0YWJsZS5jIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hs
X2NtZHRhYmxlLmMKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2xpYnhsL3hsX2NtZHRhYmxlLmMJMjAx
MS0xMC0yMSAwMTowNTo0My4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2xp
YnhsL3hsX2NtZHRhYmxlLmMJMjAxMi0xMi0yOCAxNjo1OTozOC4wMDc5MzQ5NDIgKzA4MDAKQEAg
LTEwMCwxNiArMTAwLDIzIEBAIHN0cnVjdCBjbWRfc3BlYyBjbWRfdGFibGVbXSA9IHsKICAgICB9
LAogICAgIHsgInNhdmUiLAogICAgICAgJm1haW5fc2F2ZSwKICAgICAgICJTYXZlIGEgZG9tYWlu
IHN0YXRlIHRvIHJlc3RvcmUgbGF0ZXIiLAogICAgICAgIltvcHRpb25zXSA8RG9tYWluPiA8Q2hl
Y2twb2ludEZpbGU+IFs8Q29uZmlnRmlsZT5dIiwKICAgICAgICItaCAgUHJpbnQgdGhpcyBoZWxw
LlxuIgogICAgICAgIi1jICBMZWF2ZSBkb21haW4gcnVubmluZyBhZnRlciBjcmVhdGluZyB0aGUg
c25hcHNob3QuIgogICAgIH0sCisgICAgeyAic2F2ZTIiLAorICAgICAgJm1haW5fc2F2ZTIsCisg
ICAgICAiU2F2ZSBhIGRvbWFpbiBzdGF0ZSBhcyB0aHJlZSBzZXBlcmF0ZWQgZmlsZXMgdG8gcmVz
dG9yZSBsYXRlciIsCisgICAgICAiW29wdGlvbnNdIDxEb21haW4+IDxDaGVja3BvaW50RmlsZTE+
IDxDaGVja3BvaW50RmlsZTI+IDxDaGVja3BvaW50RmlsZTM+IFs8Q29uZmlnRmlsZT5dIiwKKyAg
ICAgICItaCAgUHJpbnQgdGhpcyBoZWxwLlxuIgorICAgICAgIi1jICBMZWF2ZSBkb21haW4gcnVu
bmluZyBhZnRlciBjcmVhdGluZyB0aGUgc25hcHNob3QuIgorICAgIH0sCiAgICAgeyAibWlncmF0
ZSIsCiAgICAgICAmbWFpbl9taWdyYXRlLAogICAgICAgIlNhdmUgYSBkb21haW4gc3RhdGUgdG8g
cmVzdG9yZSBsYXRlciIsCiAgICAgICAiW29wdGlvbnNdIDxEb21haW4+IDxob3N0PiIsCiAgICAg
ICAiLWggICAgICAgICAgICAgIFByaW50IHRoaXMgaGVscC5cbiIKICAgICAgICItQyA8Y29uZmln
PiAgICAgU2VuZCA8Y29uZmlnPiBpbnN0ZWFkIG9mIGNvbmZpZyBmaWxlIGZyb20gY3JlYXRpb24u
XG4iCiAgICAgICAiLXMgPHNzaGNvbW1hbmQ+IFVzZSA8c3NoY29tbWFuZD4gaW5zdGVhZCBvZiBz
c2guICBTdHJpbmcgd2lsbCBiZSBwYXNzZWRcbiIKICAgICAgICIgICAgICAgICAgICAgICAgdG8g
c2guIElmIGVtcHR5LCBydW4gPGhvc3Q+IGluc3RlYWQgb2Ygc3NoIDxob3N0PiB4bFxuIgpAQCAt
MTI2LDE2ICsxMzMsMjUgQEAgc3RydWN0IGNtZF9zcGVjIGNtZF90YWJsZVtdID0gewogICAgICAg
Jm1haW5fcmVzdG9yZSwKICAgICAgICJSZXN0b3JlIGEgZG9tYWluIGZyb20gYSBzYXZlZCBzdGF0
ZSIsCiAgICAgICAiW29wdGlvbnNdIFs8Q29uZmlnRmlsZT5dIDxDaGVja3BvaW50RmlsZT4iLAog
ICAgICAgIi1oICBQcmludCB0aGlzIGhlbHAuXG4iCiAgICAgICAiLXAgIERvIG5vdCB1bnBhdXNl
IGRvbWFpbiBhZnRlciByZXN0b3JpbmcgaXQuXG4iCiAgICAgICAiLWUgIERvIG5vdCB3YWl0IGlu
IHRoZSBiYWNrZ3JvdW5kIGZvciB0aGUgZGVhdGggb2YgdGhlIGRvbWFpbi5cbiIKICAgICAgICIt
ZCAgRW5hYmxlIGRlYnVnIG1lc3NhZ2VzLiIKICAgICB9LAorICAgIHsgInJlc3RvcmUyIiwKKyAg
ICAgICZtYWluX3Jlc3RvcmUyLAorICAgICAgIlJlc3RvcmUgYSBkb21haW4gZnJvbSBhIHNhdmVk
IHN0YXRlIiwKKyAgICAgICJbb3B0aW9uc10gWzxDb25maWdGaWxlPl0gPENoZWNrcG9pbnRGaWxl
MT4gPENoZWNrcG9pbnRGaWxlMj4gPENoZWNrcG9pbnRGaWxlMz4iLAorICAgICAgIi1oICBQcmlu
dCB0aGlzIGhlbHAuXG4iCisgICAgICAiLXAgIERvIG5vdCB1bnBhdXNlIGRvbWFpbiBhZnRlciBy
ZXN0b3JpbmcgaXQuXG4iCisgICAgICAiLWUgIERvIG5vdCB3YWl0IGluIHRoZSBiYWNrZ3JvdW5k
IGZvciB0aGUgZGVhdGggb2YgdGhlIGRvbWFpbi5cbiIKKyAgICAgICItZCAgRW5hYmxlIGRlYnVn
IG1lc3NhZ2VzLiIKKyAgICB9LAogICAgIHsgIm1pZ3JhdGUtcmVjZWl2ZSIsCiAgICAgICAmbWFp
bl9taWdyYXRlX3JlY2VpdmUsCiAgICAgICAiUmVzdG9yZSBhIGRvbWFpbiBmcm9tIGEgc2F2ZWQg
c3RhdGUiLAogICAgICAgIi0gZm9yIGludGVybmFsIHVzZSBvbmx5IiwKICAgICB9LAogICAgIHsg
ImNkLWluc2VydCIsCiAgICAgICAmbWFpbl9jZF9pbnNlcnQsCiAgICAgICAiSW5zZXJ0IGEgY2Ry
b20gaW50byBhIGd1ZXN0J3MgY2QgZHJpdmUiLApkaWZmIC0tZXhjbHVkZT0uc3ZuIC0tZXhjbHVk
ZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9s
aWJ4bC94bC5oIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hsLmgKLS0tIHhlbi00LjEuMi1hL3Rv
b2xzL2xpYnhsL3hsLmgJMjAxMS0xMC0yMSAwMTowNTo0My4wMDAwMDAwMDAgKzA4MDAKKysrIHhl
bi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hsLmgJMjAxMi0xMi0yOCAxNjo1OTozOC4wMjA4Mjg1MTMg
KzA4MDAKQEAgLTMxLDE4ICszMSwyMCBAQCBpbnQgbWFpbl9jZF9lamVjdChpbnQgYXJnYywgY2hh
ciAqKmFyZ3YpCiBpbnQgbWFpbl9jZF9pbnNlcnQoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGlu
dCBtYWluX2NvbnNvbGUoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX3ZuY3ZpZXdl
cihpbnQgYXJnYywgY2hhciAqKmFyZ3YpOwogaW50IG1haW5fcGNpbGlzdChpbnQgYXJnYywgY2hh
ciAqKmFyZ3YpOwogaW50IG1haW5fcGNpbGlzdF9hc3NpZ25hYmxlKGludCBhcmdjLCBjaGFyICoq
YXJndik7CiBpbnQgbWFpbl9wY2lkZXRhY2goaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBt
YWluX3BjaWF0dGFjaChpbnQgYXJnYywgY2hhciAqKmFyZ3YpOwogaW50IG1haW5fcmVzdG9yZShp
bnQgYXJnYywgY2hhciAqKmFyZ3YpOworaW50IG1haW5fcmVzdG9yZTIoaW50IGFyZ2MsIGNoYXIg
Kiphcmd2KTsKIGludCBtYWluX21pZ3JhdGVfcmVjZWl2ZShpbnQgYXJnYywgY2hhciAqKmFyZ3Yp
OwogaW50IG1haW5fc2F2ZShpbnQgYXJnYywgY2hhciAqKmFyZ3YpOworaW50IG1haW5fc2F2ZTIo
aW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX21pZ3JhdGUoaW50IGFyZ2MsIGNoYXIg
Kiphcmd2KTsKIGludCBtYWluX2R1bXBfY29yZShpbnQgYXJnYywgY2hhciAqKmFyZ3YpOwogaW50
IG1haW5fcGF1c2UoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX3VucGF1c2UoaW50
IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX2Rlc3Ryb3koaW50IGFyZ2MsIGNoYXIgKiph
cmd2KTsKIGludCBtYWluX3NodXRkb3duKGludCBhcmdjLCBjaGFyICoqYXJndik7CiBpbnQgbWFp
bl9yZWJvb3QoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX2xpc3QoaW50IGFyZ2Ms
IGNoYXIgKiphcmd2KTsK
--047d7b62430aee63c904d1e642bf
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b62430aee63c904d1e642bf--


From xen-devel-bounces@lists.xen.org Fri Dec 28 09:27:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 09:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToWDJ-00014C-MM; Fri, 28 Dec 2012 09:26:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1ToWDH-000147-8a
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 09:26:48 +0000
Received: from [193.109.254.147:23784] by server-10.bemta-14.messagelabs.com
	id 0B/33-13263-6D56DD05; Fri, 28 Dec 2012 09:26:46 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1356686802!11375448!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1288 invoked from network); 28 Dec 2012 09:26:42 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 09:26:42 -0000
Received: by mail-ea0-f174.google.com with SMTP id e13so4148561eaa.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Dec 2012 01:26:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=r4w243Ax6LDeeFwVrnvYjCghcj9QyPcrcYI6yDCu5Co=;
	b=SVogj0x6lgDQhJ8VTgIAkrblxlpCpLmqlzt6JIPdulPlsKtwyo8b5pPN4q53A3u32Z
	rrJ6jAK2w+zY7m1dJTH4GcaUwiaVBsLrQeorDOZusXDqpmVvFN4pcl6gbljySGvh1lcD
	mfTklhwUISD/bTApiCD+Z3lExYuJ5qb28bYjEq0LrWxkAAk5WRSxV0R59yAZ3LE0F7Xh
	sCMBdXlV6ksljq2u26WSUPYGA5DSyV6eDbwhlDiBTcIpzwoH57C0qQiteJfYE9/tOcvT
	0BLFGcdY8SFweoMz5CKdU8KVcsi3JNBR86LyW2rnx0zfYWvCps33WJbeO+PCLBbISNKh
	604A==
MIME-Version: 1.0
Received: by 10.14.174.198 with SMTP id x46mr85108598eel.23.1356686802445;
	Fri, 28 Dec 2012 01:26:42 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Fri, 28 Dec 2012 01:26:42 -0800 (PST)
Date: Fri, 28 Dec 2012 17:26:42 +0800
Message-ID: <CA+ePHTAxJ6mWyrC-snd9=kQkJuAEzERdi6brHiq6nFcgMC3PAw@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7b62430aee63c904d1e642bf
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] seperate xen checkpoint file as three parts(for
	xen-4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b62430aee63c904d1e642bf
Content-Type: multipart/alternative; boundary=047d7b62430aee63c004d1e642bd

--047d7b62430aee63c004d1e642bd
Content-Type: text/plain; charset=ISO-8859-1

Hi,
    We can separate the xen checkpoint file into three parts: the xl config
file, the memory dump of the virtual machine, and the qemu state file.
    So I have added two xl sub-commands, `xl save2` and `xl restore2`; the
final effect is as follows:

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ sudo xl save2
Usage: xl [-v] save2 [options] <Domain> <CheckpointFile1>
<CheckpointFile2> <CheckpointFile3> [<ConfigFile>]

Save a domain state as three seperated files to restore later.

Options:

-h  Print this help.
-c  Leave domain running after creating the snapshot.

[malei@xentest-4-1 Fri Dec 28 ~/honeypot/xen/xen-4.1.2]$ sudo xl restore2
Usage: xl [-v] restore2 [options] [<ConfigFile>] <CheckpointFile1>
<CheckpointFile2> <CheckpointFile3>

Restore a domain from a saved state.

Options:

-h  Print this help.
-p  Do not unpause domain after restoring it.
-e  Do not wait in the background for the death of the domain.
-d  Enable debug messages.
Besides, the `vncdisplay` option in the xl config file will determine the ID
of the domU shown in `xl list`, so it is convenient to determine the vm-id
from the xl config file.

For more details, please check the attachment.
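As an illustration, here is a minimal invocation sketch of the two new sub-commands. The domain name and all file names below are placeholders, not taken from the patch:

```shell
# Hypothetical example: "guest1" and the file names are placeholders.
# Save domain "guest1" into three separate checkpoint files, leaving it
# running afterwards (-c), and record its xl config file:
sudo xl save2 -c guest1 guest1.mem guest1.qemu2 guest1.qemu3 guest1.cfg

# Later, restore the domain from the config file and the same three parts:
sudo xl restore2 guest1.cfg guest1.mem guest1.qemu2 guest1.qemu3
```

These commands only work on a Xen dom0 with the patched xl toolstack installed; the argument order follows the usage strings shown above.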

--047d7b62430aee63c004d1e642bd--
--047d7b62430aee63c904d1e642bf
Content-Type: application/octet-stream; name="xl-save2-restore2.patch"
Content-Disposition: attachment; filename="xl-save2-restore2.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hb949f8g0

ZGlmZiAtLWV4Y2x1ZGU9LnN2biAtLWV4Y2x1ZGU9JyoucmVqJyAtLWV4Y2x1ZGU9Jyoub3JpZycg
LXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGlieGwuYyB4ZW4tNC4xLjItYi90b29s
cy9saWJ4bC9saWJ4bC5jCi0tLSB4ZW4tNC4xLjItYS90b29scy9saWJ4bC9saWJ4bC5jCTIwMTEt
MTAtMjEgMDE6MDU6NDIuMDAwMDAwMDAwICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9saWJ4
bC9saWJ4bC5jCTIwMTItMTItMjggMTc6MDA6MDMuNDg1NjYyMjgwICswODAwCkBAIC00NjEsMTYg
KzQ2MSwzMCBAQCBpbnQgbGlieGxfZG9tYWluX3N1c3BlbmQobGlieGxfY3R4ICpjdHgsCiAgICAg
aW50IHJjID0gMDsKIAogICAgIHJjID0gbGlieGxfX2RvbWFpbl9zdXNwZW5kX2NvbW1vbihjdHgs
IGRvbWlkLCBmZCwgaHZtLCBsaXZlLCBkZWJ1Zyk7CiAgICAgaWYgKCFyYyAmJiBodm0pCiAgICAg
ICAgIHJjID0gbGlieGxfX2RvbWFpbl9zYXZlX2RldmljZV9tb2RlbChjdHgsIGRvbWlkLCBmZCk7
CiAgICAgcmV0dXJuIHJjOwogfQogCitpbnQgbGlieGxfZG9tYWluX3N1c3BlbmQyKGxpYnhsX2N0
eCAqY3R4LCBsaWJ4bF9kb21haW5fc3VzcGVuZF9pbmZvICppbmZvLAorICAgICAgICAgICAgICAg
ICAgICAgICAgIHVpbnQzMl90IGRvbWlkLCBpbnQgZmQyLCBpbnQgZmQzKQoreworICAgIGludCBo
dm0gPSBsaWJ4bF9fZG9tYWluX2lzX2h2bShjdHgsIGRvbWlkKTsKKyAgICBpbnQgbGl2ZSA9IGlu
Zm8gIT0gTlVMTCAmJiBpbmZvLT5mbGFncyAmIFhMX1NVU1BFTkRfTElWRTsKKyAgICBpbnQgZGVi
dWcgPSBpbmZvICE9IE5VTEwgJiYgaW5mby0+ZmxhZ3MgJiBYTF9TVVNQRU5EX0RFQlVHOworICAg
IGludCByYyA9IDA7CisKKyAgICByYyA9IGxpYnhsX19kb21haW5fc3VzcGVuZF9jb21tb24oY3R4
LCBkb21pZCwgZmQyLCBodm0sIGxpdmUsIGRlYnVnKTsKKyAgICBpZiAoIXJjICYmIGh2bSkKKyAg
ICAgIHJjID0gbGlieGxfX2RvbWFpbl9zYXZlX2RldmljZV9tb2RlbDIoY3R4LCBkb21pZCwgZmQy
LCBmZDMpOworICAgIHJldHVybiByYzsKK30KKwogaW50IGxpYnhsX2RvbWFpbl9wYXVzZShsaWJ4
bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQpCiB7CiAgICAgaW50IHJldDsKICAgICByZXQgPSB4
Y19kb21haW5fcGF1c2UoY3R4LT54Y2gsIGRvbWlkKTsKICAgICBpZiAocmV0PDApIHsKICAgICAg
ICAgTElCWExfX0xPR19FUlJOTyhjdHgsIExJQlhMX19MT0dfRVJST1IsICJwYXVzaW5nIGRvbWFp
biAlZCIsIGRvbWlkKTsKICAgICAgICAgcmV0dXJuIEVSUk9SX0ZBSUw7CiAgICAgfQpkaWZmIC0t
ZXhjbHVkZT0uc3ZuIC0tZXhjbHVkZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1V
OCB4ZW4tNC4xLjItYS90b29scy9saWJ4bC9saWJ4bF9jcmVhdGUuYyB4ZW4tNC4xLjItYi90b29s
cy9saWJ4bC9saWJ4bF9jcmVhdGUuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGlieGxf
Y3JlYXRlLmMJMjAxMS0xMC0yMSAwMTowNTo0Mi4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEu
Mi1iL3Rvb2xzL2xpYnhsL2xpYnhsX2NyZWF0ZS5jCTIwMTItMTItMjggMTc6MDA6NDUuNjE3OTM1
MDE5ICswODAwCkBAIC0xOTYsMzMgKzE5Niw1MiBAQCBpbnQgbGlieGxfX2RvbWFpbl9idWlsZChs
aWJ4bF9jdHggKmN0eCwgCiAgICAgfQogICAgIHJldCA9IGxpYnhsX19idWlsZF9wb3N0KGN0eCwg
ZG9taWQsIGluZm8sIHN0YXRlLCB2bWVudHMsIGxvY2FsZW50cyk7CiBvdXQ6CiAKICAgICBsaWJ4
bF9fZnJlZV9hbGwoJmdjKTsKICAgICByZXR1cm4gcmV0OwogfQogCisKK3N0YXRpYyBpbnQgb3Zl
cnJpZGVfcWVtdV9zdGF0ZShsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgKmRvbWlkLCBpbnQgcmVz
dG9yZV9mZDMpOworCisKIHN0YXRpYyBpbnQgZG9tYWluX3Jlc3RvcmUobGlieGxfY3R4ICpjdHgs
IGxpYnhsX2RvbWFpbl9idWlsZF9pbmZvICppbmZvLAogICAgICAgICAgICAgICAgICAgICAgICAg
IHVpbnQzMl90IGRvbWlkLCBpbnQgZmQsIGxpYnhsX2RvbWFpbl9idWlsZF9zdGF0ZSAqc3RhdGUs
CiAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZGV2aWNlX21vZGVsX2luZm8gKmRtX2lu
Zm8pCiB7CiAgICAgbGlieGxfX2djIGdjID0gTElCWExfSU5JVF9HQyhjdHgpOwogICAgIGNoYXIg
Kip2bWVudHMgPSBOVUxMLCAqKmxvY2FsZW50cyA9IE5VTEw7CiAgICAgc3RydWN0IHRpbWV2YWwg
c3RhcnRfdGltZTsKICAgICBpbnQgaSwgcmV0LCBlc2F2ZSwgZmxhZ3M7CisgICAgaW50IHNlcGVy
YXRlZCA9IDA7CisgICAgaW50IHJlc3RvcmVfZmQzID0gLTE7CiAKICAgICByZXQgPSBsaWJ4bF9f
YnVpbGRfcHJlKGN0eCwgZG9taWQsIGluZm8sIHN0YXRlKTsKICAgICBpZiAocmV0KQogICAgICAg
ICBnb3RvIG91dDsKIAogICAgIHJldCA9IGxpYnhsX19kb21haW5fcmVzdG9yZV9jb21tb24oY3R4
LCBkb21pZCwgaW5mbywgc3RhdGUsIGZkKTsKICAgICBpZiAocmV0KQogICAgICAgICBnb3RvIG91
dDsKIAorICAgIGlmIChWRVJTSU9OX1JFU1RPUkVfRlJPTV8zRiA9PSBjdHgtPnhsX3NhdmVfcmVz
dG9yZS54bF9yZXN0b3JlX3ZlcnNpb24pCisgICAgICB7CisJSU5GTygiZ29pbmcgdG8gb3Zlcmlk
ZSBxZW11IHN0YXRlIGZpbGUgYXQgL3Zhci9saWIveGVuLyBhZnRlciB4Y19kb21haW5fcmVzdG9y
ZSEiKTsKKwlyZXN0b3JlX2ZkMyA9IG9wZW4oY3R4LT54bF9zYXZlX3Jlc3RvcmUuZjMsIE9fUkRP
TkxZKTsKKwlzZXBlcmF0ZWQgPSAxOworCXJldCA9IG92ZXJyaWRlX3FlbXVfc3RhdGUoY3R4LCAm
ZG9taWQsIHJlc3RvcmVfZmQzKTsKKwlpZiAocmV0KQorCSAgeworCSAgICBFUlJPUigib3Zlcmlk
ZSBxZW11IHN0YXRlIGZpbGUgZmFpbGVkISIpOworCSAgICBnb3RvIG91dDsKKwkgIH0KKyAgICAg
IH0KKwogICAgIGdldHRpbWVvZmRheSgmc3RhcnRfdGltZSwgTlVMTCk7CiAKICAgICBpZiAoaW5m
by0+aHZtKSB7CiAgICAgICAgIHZtZW50cyA9IGxpYnhsX19jYWxsb2MoJmdjLCA3LCBzaXplb2Yo
Y2hhciAqKSk7CiAgICAgICAgIHZtZW50c1swXSA9ICJydGMvdGltZW9mZnNldCI7CiAgICAgICAg
IHZtZW50c1sxXSA9IChpbmZvLT51Lmh2bS50aW1lb2Zmc2V0KSA/IGluZm8tPnUuaHZtLnRpbWVv
ZmZzZXQgOiAiIjsKICAgICAgICAgdm1lbnRzWzJdID0gImltYWdlL29zdHlwZSI7CiAgICAgICAg
IHZtZW50c1szXSA9ICJodm0iOwpAQCAtMjcxLDE2ICsyOTAsMjAgQEAgb3V0OgogICAgICAgICBm
bGFncyAmPSB+T19OT05CTE9DSzsKICAgICAgICAgaWYgKGZjbnRsKGZkLCBGX1NFVEZMLCBmbGFn
cykgPT0gLTEpCiAgICAgICAgICAgICBMSUJYTF9fTE9HX0VSUk5PKGN0eCwgTElCWExfX0xPR19F
UlJPUiwgInVuYWJsZSB0byBwdXQgcmVzdG9yZSBmZCIKICAgICAgICAgICAgICAgICAgICAgICAg
ICAiIGJhY2sgdG8gYmxvY2tpbmcgbW9kZSIpOwogICAgIH0KIAogICAgIGVycm5vID0gZXNhdmU7
CiAgICAgbGlieGxfX2ZyZWVfYWxsKCZnYyk7CisgICAgCisgICAgaWYgKC0xICE9IHJlc3RvcmVf
ZmQzKQorICAgICAgY2xvc2UocmVzdG9yZV9mZDMpOworCiAgICAgcmV0dXJuIHJldDsKIH0KIAog
aW50IGxpYnhsX19kb21haW5fbWFrZShsaWJ4bF9jdHggKmN0eCwgbGlieGxfZG9tYWluX2NyZWF0
ZV9pbmZvICppbmZvLAogICAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCAqZG9taWQpCiAg
Lyogb24gZW50cnksIGxpYnhsX2RvbWlkX3ZhbGlkX2d1ZXN0KGRvbWlkKSBtdXN0IGJlIGZhbHNl
OwogICAqIG9uIGV4aXQgKGV2ZW4gZXJyb3IgZXhpdCksIGRvbWlkIG1heSBiZSB2YWxpZCBhbmQg
cmVmZXIgdG8gYSBkb21haW4gKi8KIHsKQEAgLTI5MSwyOCArMzE0LDMzIEBAIGludCBsaWJ4bF9f
ZG9tYWluX21ha2UobGlieGxfY3R4ICpjdHgsIGwKICAgICBjaGFyICpyb19wYXRoc1tdID0geyAi
Y3B1IiwgIm1lbW9yeSIsICJkZXZpY2UiLCAiZXJyb3IiLCAiZHJpdmVycyIsCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgImNvbnRyb2wiLCAiYXR0ciIsICJtZXNzYWdlcyIgfTsKICAgICBjaGFy
ICpkb21fcGF0aCwgKnZtX3BhdGg7CiAgICAgc3RydWN0IHhzX3Blcm1pc3Npb25zIHJvcGVybVsy
XTsKICAgICBzdHJ1Y3QgeHNfcGVybWlzc2lvbnMgcndwZXJtWzFdOwogICAgIHhzX3RyYW5zYWN0
aW9uX3QgdCA9IDA7CiAgICAgeGVuX2RvbWFpbl9oYW5kbGVfdCBoYW5kbGU7CiAKKyAgICB1aW50
MzJfdCByZWFsX2RvbWlkID0gKmRvbWlkOworICAgICpkb21pZCA9IDA7CisgICAgSU5GTygidGVt
cG9yYXJpbHkgcmV2ZXJ0IGRvbWlkIGFzIDAgdG8gbWFrZSBzdXJlIGludmFsaWQgZG9tYWluIGlk
IDogXjA8eDxtYXgiKTsKICAgICBhc3NlcnQoIWxpYnhsX2RvbWlkX3ZhbGlkX2d1ZXN0KCpkb21p
ZCkpOwogCiAgICAgdXVpZF9zdHJpbmcgPSBsaWJ4bF9fdXVpZDJzdHJpbmcoJmdjLCBpbmZvLT51
dWlkKTsKICAgICBpZiAoIXV1aWRfc3RyaW5nKSB7CiAgICAgICAgIHJjID0gRVJST1JfTk9NRU07
CiAgICAgICAgIGdvdG8gb3V0OwogICAgIH0KIAogICAgIGZsYWdzID0gaW5mby0+aHZtID8gWEVO
X0RPTUNUTF9DREZfaHZtX2d1ZXN0IDogMDsKICAgICBmbGFncyB8PSBpbmZvLT5oYXAgPyBYRU5f
RE9NQ1RMX0NERl9oYXAgOiAwOwogICAgIGZsYWdzIHw9IGluZm8tPm9vcyA/IDAgOiBYRU5fRE9N
Q1RMX0NERl9vb3Nfb2ZmOwogICAgICpkb21pZCA9IC0xOworICAgICpkb21pZCA9IHJlYWxfZG9t
aWQ7CisgICAgSU5GTygiY2hhbmdlIGRvbWlkIGJhY2sgdG8gdm5jZGlzcGxheSAldSAuLi4iLCAq
ZG9taWQpOwogCiAgICAgLyogVWx0aW1hdGVseSwgaGFuZGxlIGlzIGFuIGFycmF5IG9mIDE2IHVp
bnQ4X3QsIHNhbWUgYXMgdXVpZCAqLwogICAgIGxpYnhsX3V1aWRfY29weSgobGlieGxfdXVpZCAq
KWhhbmRsZSwgJmluZm8tPnV1aWQpOwogCiAgICAgcmV0ID0geGNfZG9tYWluX2NyZWF0ZShjdHgt
PnhjaCwgaW5mby0+c3NpZHJlZiwgaGFuZGxlLCBmbGFncywgZG9taWQpOwogICAgIGlmIChyZXQg
PCAwKSB7CiAgICAgICAgIExJQlhMX19MT0dfRVJSTk9WQUwoY3R4LCBMSUJYTF9fTE9HX0VSUk9S
LCByZXQsICJkb21haW4gY3JlYXRpb24gZmFpbCIpOwogICAgICAgICByYyA9IEVSUk9SX0ZBSUw7
CkBAIC00MDMsMjAgKzQzMSwyMyBAQCByZXRyeV90cmFuc2FjdGlvbjoKIAogc3RhdGljIGludCBk
b19kb21haW5fY3JlYXRlKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fY29uZmlnICpkX2Nv
bmZpZywKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9jb25zb2xlX3JlYWR5IGNi
LCB2b2lkICpwcml2LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90ICpkb21p
ZF9vdXQsIGludCByZXN0b3JlX2ZkKQogewogICAgIGxpYnhsX19kZXZpY2VfbW9kZWxfc3RhcnRp
bmcgKmRtX3N0YXJ0aW5nID0gMDsKICAgICBsaWJ4bF9kZXZpY2VfbW9kZWxfaW5mbyAqZG1faW5m
byA9ICZkX2NvbmZpZy0+ZG1faW5mbzsKICAgICBsaWJ4bF9kb21haW5fYnVpbGRfc3RhdGUgc3Rh
dGU7Ci0gICAgdWludDMyX3QgZG9taWQ7CisgICAgdWludDMyX3QgZG9taWQsIGN1c3RvbWl6ZV9k
b21pZDsKICAgICBpbnQgaSwgcmV0OwogCisgICAgY3VzdG9taXplX2RvbWlkID0gZF9jb25maWct
PmRtX2luZm8udm5jZGlzcGxheTsKICAgICBkb21pZCA9IDA7CisgICAgZG9taWQgPSBjdXN0b21p
emVfZG9taWQ7CisgICAgSU5GTygiY3VzdG9taXplIGN1cnJlbnQgZG9taWQgYXMgJXUuLi4iLCBk
b21pZCk7CiAKICAgICByZXQgPSBsaWJ4bF9fZG9tYWluX21ha2UoY3R4LCAmZF9jb25maWctPmNf
aW5mbywgJmRvbWlkKTsKICAgICBpZiAocmV0KSB7CiAgICAgICAgIGZwcmludGYoc3RkZXJyLCAi
Y2Fubm90IG1ha2UgZG9tYWluOiAlZFxuIiwgcmV0KTsKICAgICAgICAgcmV0ID0gRVJST1JfRkFJ
TDsKICAgICAgICAgZ290byBlcnJvcl9vdXQ7CiAgICAgfQogCkBAIC01NjAsOCArNTkxLDIzNyBA
QCBpbnQgbGlieGxfZG9tYWluX2NyZWF0ZV9uZXcobGlieGxfY3R4ICpjCiAgICAgcmV0dXJuIGRv
X2RvbWFpbl9jcmVhdGUoY3R4LCBkX2NvbmZpZywgY2IsIHByaXYsIGRvbWlkLCAtMSk7CiB9CiAK
IGludCBsaWJ4bF9kb21haW5fY3JlYXRlX3Jlc3RvcmUobGlieGxfY3R4ICpjdHgsIGxpYnhsX2Rv
bWFpbl9jb25maWcgKmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBs
aWJ4bF9jb25zb2xlX3JlYWR5IGNiLCB2b2lkICpwcml2LCB1aW50MzJfdCAqZG9taWQsIGludCBy
ZXN0b3JlX2ZkKQogewogICAgIHJldHVybiBkb19kb21haW5fY3JlYXRlKGN0eCwgZF9jb25maWcs
IGNiLCBwcml2LCBkb21pZCwgcmVzdG9yZV9mZCk7CiB9CisKKy8qCisgKnJlZmVyIHRvIGxpYnhj
L3hlbmd1ZXN0LmggCisgKmRlZmluZSBYQ19ERVZJQ0VfTU9ERUxfUkVTVE9SRV9GSUxFICIvdmFy
L2xpYi94ZW4vcWVtdS1yZXN1bWUiCisgKi8KKworI2RlZmluZSBSREVYQUNUKGZkLCBidWYsIHNp
emUpIHJkZXhhY3QoY3R4LCBmZCwgYnVmLCBzaXplKQorCitzdGF0aWMgc3NpemVfdCByZGV4YWN0
KGxpYnhsX2N0eCAqY3R4LCBpbnQgZmQsIHZvaWQqIGJ1Ziwgc2l6ZV90IHNpemUpCit7CisgIHNp
emVfdCBvZmZzZXQgPSAwOworICBzc2l6ZV90IGxlbjsKKyAgCisgIHdoaWxlIChvZmZzZXQgPCBz
aXplKQorICAgIHsKKyAgICAgIGxlbiA9IHJlYWQoZmQsIGJ1ZiArIG9mZnNldCwgc2l6ZSAtIG9m
ZnNldCk7CisgICAgICBpZiAoMCA9PSBsZW4pCisJeworCSAgTElCWExfX0xPR19FUlJOTyhjdHgs
IExJQlhMX19MT0dfRVJST1IsICJ4bCByZWFkIGV4YWN0IDogMC1MRU5HVEggcmVhZCEiKTsKKwkg
IHJldHVybiAtMjsKKwl9CisgICAgICBlbHNlIGlmIChsZW4gPCAwKQorCXsKKwkgIGlmICggLTEg
PT0gbGVuICYmIGVycm5vID09IEVJTlRSKQorCSAgICB7CisJICAgICAgY29udGludWU7CisJICAg
IH0KKwkgIExJQlhMX19MT0dfRVJSTk8oY3R4LCBMSUJYTF9fTE9HX0VSUk9SLCAieGwgcmVhZCBl
eGFjdCA6IG5lZ2F0aXZlIHJlYWQhIik7CisJICByZXR1cm4gLTE7CisJfQorICAgICAgb2Zmc2V0
ICs9IGxlbjsKKyAgICB9CisKKyAgcmV0dXJuIDA7Cit9CisKK3R5cGVkZWYgc3RydWN0IHFlbXVf
YnVmCit7CisgIGNoYXIgc2lnbmF0dXJlWzIxICsgMV07CisgIHVpbnQzMl90IGJ1ZnNpemU7Cisg
IHVpbnQ4X3QgKmJ1ZjsKK31xZW11X2J1Zl90OworCisKK3N0YXRpYyBpbnQgY29tcGF0X2J1ZmZl
cl9xZW11KGxpYnhsX2N0eCAqY3R4LCBpbnQgZmQsIHFlbXVfYnVmX3QgKnFlbXVidWYpCit7Cisg
IHVpbnQ4X3QgKnFidWYsICp0bXA7CisgIGludCBibGVuID0gMCwgZGxlbiA9IDA7CisgIGludCBy
YzsKKworICBibGVuID0gODE5MjsKKyAgaWYgKCAhKHFidWYgPSBtYWxsb2MoYmxlbikpKQorICAg
IHsKKyAgICAgIEVSUk9SKCJlcnJvciBhbGxvY2F0aW5nIFFFTVUgYnVmZmVyIik7CisgICAgICBy
ZXR1cm4gLTE7CisgICAgfSAKKworICB3aGlsZSggKHJjID0gcmVhZChmZCwgcWJ1ZitkbGVuLCBi
bGVuLWRsZW4pKSA+IDAgKQorICAgIHsKKyAgICAgIElORk8oInhsIHJlYWQgJWQgYnl0ZXMgUUVN
VSBEQVRBIiwgcmMpOworICAgICAgZGxlbiArPSByYzsKKyAgICAgIAorICAgICAgaWYgKGRsZW4g
PT0gYmxlbikKKwl7CisJICBJTkZPKCIlZC1ieXRlIFFFTVUgYnVmZmVyIGZ1bGwsIHJlYWxsb2Nh
dGluZy4uLiIsIGRsZW4pOworCSAgYmxlbiArPSA0MDk2OworCSAgdG1wID0gcmVhbGxvYyhxYnVm
LCBibGVuKTsKKwkgIGlmKCAhdG1wICkKKwkgICAgeworCSAgICAgIEVSUk9SKCJlcnJvciBncm93
aW5nIFFFTVUgYnVmZmVyIHRvICVkIGJ5dGVzIiwgYmxlbik7CisJICAgICAgZnJlZShxYnVmKTsK
KwkgICAgICByZXR1cm4gLTE7CisJICAgIH0KKwkgIHFidWYgPSB0bXA7CisJfQorICAgIH0KKyAg
CisgIGlmIChyYyA8IDApCisgICAgeworICAgICAgRVJST1IoImVycm9yIHJlYWRpbmcgUUVNVSBk
YXRhIik7CisgICAgICBmcmVlKHFidWYpOworICAgICAgcmV0dXJuIC0xOworICAgIH0KKworICBp
ZiggbWVtY21wKHFidWYsICJRRVZNIiwgNCkgKQorICAgIHsKKyAgICAgIEVSUk9SKCJpbnZhbGlk
IFFFTVUgbWFnaWMgOiAweCUwOHggKGV4cGVjdGVkIDogYFFFVk1gKSIsICoodW5zaWduZWQgaW50
KilxYnVmKTsKKyAgICAgIGZyZWUocWJ1Zik7CisgICAgICByZXR1cm4gLTE7CisgICAgfQorCisg
IHFlbXVidWYtPmJ1ZnNpemUgPSBkbGVuOworICBxZW11YnVmLT5idWYgPSBxYnVmOworCisgIHJl
dHVybiAwOworfQorCisKK3N0YXRpYyBpbnQgYnVmZmVyX3FlbXUobGlieGxfY3R4ICpjdHgsIGlu
dCBmZCwgcWVtdV9idWZfdCAqcWVtdWJ1ZikKK3sKKyAgdWludDMyX3QgcWxlbjsKKyAgdWludDhf
dCAqdG1wOworCisgIGlmKCBSREVYQUNUKGZkLCAmcWxlbiwgc2l6ZW9mKHFsZW4pKSApCisgICAg
eworICAgICAgRVJST1IoImVycm9yIHJlYWRpbmcgUUVNVSBkYXRhIGxlbmd0aCBmaWVsZCBpbiBo
ZWFkZXIiKTsKKyAgICAgIHJldHVybiAtMTsKKyAgICB9CisKKyAgaWYocWxlbiA+IHFlbXVidWYt
PmJ1ZnNpemUpCisgICAgeworICAgICAgaWYocWVtdWJ1Zi0+YnVmKQorCXsKKwkgIHRtcCA9IHJl
YWxsb2MocWVtdWJ1Zi0+YnVmLCBxbGVuKTsKKwkgIGlmKHRtcCkKKwkgICAgcWVtdWJ1Zi0+YnVm
ID0gdG1wOworCSAgZWxzZQorCSAgICB7CisJICAgICAgRVJST1IoImVycm9yIHJlYWxsb2NhdGlu
ZyBRRU1VIGJ1ZmZlciIpOworCSAgICAgIHJldHVybiAtMTsKKwkgICAgfQorCX0KKyAgICAgIGVs
c2UKKwl7CisJICBxZW11YnVmLT5idWYgPSBtYWxsb2MocWxlbik7CisJICBpZiAoICFxZW11YnVm
LT5idWYgKQorCSAgICB7CisJICAgICAgRVJST1IoImVycm9yIGFsbG9jYXRpbmcgUUVNVSBidWZm
ZXIiKTsKKwkgICAgICByZXR1cm4gLTE7CisJICAgIH0KKwl9CisgICAgfQorICAKKyAgcWVtdWJ1
Zi0+YnVmc2l6ZSA9IHFsZW47CisKKyAgaWYoIFJERVhBQ1QoZmQsIHFlbXVidWYtPmJ1ZiwgcWVt
dWJ1Zi0+YnVmc2l6ZSkgKQorICAgIHsKKyAgICAgIEVSUk9SKCJlcnJvciByZWFkaW5nIFFFTVUg
ZGF0YSIpOworICAgICAgcmV0dXJuIC0xOworICAgIH0KKyAgCisgIHJldHVybiAwOworfQorCisK
Kworc3RhdGljIGludCBvdmVycmlkZV9xZW11X3N0YXRlKGxpYnhsX2N0eCAqY3R4LCB1aW50MzJf
dCAqZG9taWQsIGludCByZXN0b3JlX2ZkMykKK3sKKyAgdWludDMyX3QgZG9taWRfciA9IDA7Cisg
IGNoYXIgcGF0aFsyNTZdPXt9OworICBxZW11X2J1Zl90IHFlbXVidWY9e307CisgIGludCBkbV9m
ZCA9IC0xOworICB1bnNpZ25lZCBjaGFyIHFlbXVzaWdbMjFdID0ge307CisgIGludCByZXQgPSAw
OworCisgIGRvbWlkX3IgPSAqZG9taWQ7CisgIHNwcmludGYocGF0aCwgWENfREVWSUNFX01PREVM
X1JFU1RPUkVfRklMRSIuJXUiLCBkb21pZF9yKTsKKworICAKKyAgICB7CisgICAgICBJTkZPKCJi
ZWdpbiB0byBvdmVycmlkZSAlcyB3aXRoIHBhcnQgSUlJIG9mIHRoZSBzYXZlZmlsZSIsIHBhdGgp
OworICAgICAgCisgICAgICBkbV9mZCA9IG9wZW4ocGF0aCwgT19DUkVBVHxPX1dST05MWXxPX1RS
VU5DKTsKKyAgICAgIGlmICgtMSAhPSBkbV9mZCkKKwl7CisJICBpZiggUkRFWEFDVChyZXN0b3Jl
X2ZkMywgcWVtdXNpZywgc2l6ZW9mKHFlbXVzaWcpKSApCisJICAgIHsKKwkgICAgICBFUlJPUigi
cmVhZCBxZW11IHNpZ25hdHVyZSBmYWlsIik7CisJICAgICAgcmV0dXJuIC0xOworCSAgICB9CisK
KwkgIGlmICggIW1lbWNtcChxZW11c2lnLCAiUWVtdURldmljZU1vZGVsUmVjb3JkIiwgc2l6ZW9m
KHFlbXVzaWcpKSApCisJICAgIHsKKwkgICAgICBtZW1jcHkocWVtdWJ1Zi5zaWduYXR1cmUsIHFl
bXVzaWcsIHNpemVvZihxZW11c2lnKSk7CisJICAgICAgcmV0ID0gY29tcGF0X2J1ZmZlcl9xZW11
KGN0eCwgcmVzdG9yZV9mZDMsICZxZW11YnVmKTsKKwkgICAgfQorCSAgZWxzZSBpZiAoICFtZW1j
bXAocWVtdXNpZywgIkRldmljZU1vZGVsUmVjb3JkMDAwMiIsIHNpemVvZihxZW11c2lnKSkgfHwK
KwkJICAgICFtZW1jbXAocWVtdXNpZywgIlJlbXVzRGV2aWNlTW9kZWxTdGF0ZSIsIHNpemVvZihx
ZW11c2lnKSkgKQorCSAgICB7CisJICAgICAgbWVtY3B5KHFlbXVidWYuc2lnbmF0dXJlLCBxZW11
c2lnLCBzaXplb2YocWVtdXNpZykpOworCSAgICAgIHJldCA9IGJ1ZmZlcl9xZW11KGN0eCwgcmVz
dG9yZV9mZDMsICZxZW11YnVmKTsKKwkgICAgfQorCSAgCisJICBpZiAocmV0KQorCSAgICB7CisJ
ICAgICAgRVJST1IoImVycm9yIHJlYWRpbmcgUUVNVSBkYXRhIGZyb20gc2F2ZSBzdGF0ZSBmaWxl
LTMiKTsKKwkgICAgICBpZiAocWVtdWJ1Zi5idWYpCisJCWZyZWUocWVtdWJ1Zi5idWYpOworCSAg
ICAgIHJldHVybiAtMTsKKyAgICAgCSAgICB9CisJICAKKwkgIGlmKCAtMSA9PSB3cml0ZShkbV9m
ZCwgcWVtdWJ1Zi5idWYsIHFlbXVidWYuYnVmc2l6ZSkgKQorCSAgICB7CisJICAgICAgRVJST1Io
ImVycm9yIHdyaXRpbmcgUUVNVSBkYXRhIHRvICVzIiwgcGF0aCk7CisJICAgICAgaWYgKHFlbXVi
dWYuYnVmKQorCQlmcmVlKHFlbXVidWYuYnVmKTsKKwkgICAgICByZXR1cm4gLTE7CisJICAgIH0K
Kwl9CisgICAgICBlbHNlCisJeworCSAgRVJST1IoImVycm9yIG9wZW4gJXMgZm9yIHdyaXRpbmcg
UUVNVSBzdGF0ZSIsIHBhdGgpOworCSAgcmV0dXJuIC0xOworCX0KKyAgICB9CisgCisgIElORk8o
IndyaXRlIFFFTVUgc3RhdGUgdG8gJXMgZG9uZSIsIHBhdGgpOworICAKKyAgcmV0dXJuIDA7Cit9
CisKKworCisKKworCitpbnQgbGlieGxfZG9tYWluX2NyZWF0ZV9yZXN0b3JlMihsaWJ4bF9jdHgg
KmN0eCwgbGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25maWcsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGxpYnhsX2NvbnNvbGVfcmVhZHkgY2IsIHZvaWQgKnByaXYsIHVpbnQzMl90
ICpkb21pZCwgCisJCQkJIGludCByZXN0b3JlX2ZkMiwgaW50IHJlc3RvcmVfZmQzKQoreworICBp
bnQgcmV0ID0gMDsKKworICByZXQgPSBkb19kb21haW5fY3JlYXRlKGN0eCwgZF9jb25maWcsIGNi
LCBwcml2LCBkb21pZCwgcmVzdG9yZV9mZDIpOworCisgIGlmIChyZXQpCisgICAgcmV0dXJuIHJl
dDsKKworICByZXR1cm4gMDsKK30KZGlmZiAtLWV4Y2x1ZGU9LnN2biAtLWV4Y2x1ZGU9JyoucmVq
JyAtLWV4Y2x1ZGU9Jyoub3JpZycgLXJwTiAtVTggeGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGli
eGxfZG9tLmMgeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvbGlieGxfZG9tLmMKLS0tIHhlbi00LjEu
Mi1hL3Rvb2xzL2xpYnhsL2xpYnhsX2RvbS5jCTIwMTEtMTAtMjEgMDE6MDU6NDIuMDAwMDAwMDAw
ICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9saWJ4bC9saWJ4bF9kb20uYwkyMDEyLTEyLTI4
IDE3OjAwOjAzLjQ4NTY2MjI4MCArMDgwMApAQCAtNTc1LDE2ICs1NzUsOTYgQEAgaW50IGxpYnhs
X19kb21haW5fc2F2ZV9kZXZpY2VfbW9kZWwobGlieAogICAgICAgICB9CiAgICAgfQogICAgIGNs
b3NlKGZkMik7CiAgICAgdW5saW5rKGZpbGVuYW1lKTsKICAgICBsaWJ4bF9fZnJlZV9hbGwoJmdj
KTsKICAgICByZXR1cm4gMDsKIH0KIAoraW50IGxpYnhsX19kb21haW5fc2F2ZV9kZXZpY2VfbW9k
ZWwyKGxpYnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwgaW50IGZkMiwgaW50IGZkMykKK3sK
KyAgICBsaWJ4bF9fZ2MgZ2MgPSBMSUJYTF9JTklUX0dDKGN0eCk7CisgICAgaW50IGZkX2RtLCBj
LCBjMjsKKyAgICBjaGFyIGJ1ZlsxMDI0XTsKKyAgICBjaGFyICpmaWxlbmFtZSA9IGxpYnhsX19z
cHJpbnRmKCZnYywgIi92YXIvbGliL3hlbi9xZW11LXNhdmUuJWQiLCBkb21pZCk7CisgICAgc3Ry
dWN0IHN0YXQgc3Q7CisgICAgdWludDMyX3QgcWVtdV9zdGF0ZV9sZW47CisKKyAgICBMSUJYTF9f
TE9HKGN0eCwgTElCWExfX0xPR19ERUJVRywgIlNhdmluZyBkZXZpY2UgbW9kZWwgc3RhdGUgdG8g
JXMiLCBmaWxlbmFtZSk7CisgICAgbGlieGxfX3hzX3dyaXRlKCZnYywgWEJUX05VTEwsIGxpYnhs
X19zcHJpbnRmKCZnYywgIi9sb2NhbC9kb21haW4vMC9kZXZpY2UtbW9kZWwvJWQvY29tbWFuZCIs
IGRvbWlkKSwgInNhdmUiKTsKKyAgICBsaWJ4bF9fd2FpdF9mb3JfZGV2aWNlX21vZGVsKGN0eCwg
ZG9taWQsICJwYXVzZWQiLCBOVUxMLCBOVUxMKTsKKworICAgIGlmIChzdGF0KGZpbGVuYW1lLCAm
c3QpIDwgMCkKKyAgICB7CisgICAgICAgIExJQlhMX19MT0coY3R4LCBMSUJYTF9fTE9HX0VSUk9S
LCAiVW5hYmxlIHRvIHN0YXQgcWVtdSBzYXZlIGZpbGVcbiIpOworICAgICAgICBsaWJ4bF9fZnJl
ZV9hbGwoJmdjKTsKKyAgICAgICAgcmV0dXJuIEVSUk9SX0ZBSUw7CisgICAgfQorCisgICAgcWVt
dV9zdGF0ZV9sZW4gPSBzdC5zdF9zaXplOworICAgIExJQlhMX19MT0coY3R4LCBMSUJYTF9fTE9H
X0RFQlVHLCAiUWVtdSBzdGF0ZSBpcyAlZCBieXRlc1xuIiwgcWVtdV9zdGF0ZV9sZW4pOworCisg
ICAgYyA9IGxpYnhsX3dyaXRlX2V4YWN0bHkoY3R4LCBmZDIsIFFFTVVfU0lHTkFUVVJFLCBzdHJs
ZW4oUUVNVV9TSUdOQVRVUkUpLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICJzYXZlZC1z
dGF0ZSBmaWxlIHBhcnQgMiIsICJxZW11IHNpZ25hdHVyZSIpOworICAgIGlmIChjKSB7CisgICAg
ICAgIGxpYnhsX19mcmVlX2FsbCgmZ2MpOworICAgICAgICByZXR1cm4gYzsKKyAgICB9CisgICAg
CisgICAgYyA9IGxpYnhsX3dyaXRlX2V4YWN0bHkoY3R4LCBmZDMsIFFFTVVfU0lHTkFUVVJFLCBz
dHJsZW4oUUVNVV9TSUdOQVRVUkUpLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICJzYXZl
ZC1zdGF0ZSBmaWxlIHBhcnQgMyIsICJxZW11IHNpZ25hdHVyZSIpOworICAgIGlmIChjKSB7Cisg
ICAgICAgIGxpYnhsX19mcmVlX2FsbCgmZ2MpOworICAgICAgICByZXR1cm4gYzsKKyAgICB9CisK
KyAgICBjID0gbGlieGxfd3JpdGVfZXhhY3RseShjdHgsIGZkMiwgJnFlbXVfc3RhdGVfbGVuLCBz
aXplb2YocWVtdV9zdGF0ZV9sZW4pLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICJzYXZl
ZC1zdGF0ZSBmaWxlIHBhcnQgMiIsICJzYXZlZC1zdGF0ZSBsZW5ndGgiKTsKKyAgICBpZiAoYykg
eworICAgICAgICBsaWJ4bF9fZnJlZV9hbGwoJmdjKTsKKyAgICAgICAgcmV0dXJuIGM7CisgICAg
fQorCisgICAgYyA9IGxpYnhsX3dyaXRlX2V4YWN0bHkoY3R4LCBmZDMsICZxZW11X3N0YXRlX2xl
biwgc2l6ZW9mKHFlbXVfc3RhdGVfbGVuKSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
c2F2ZWQtc3RhdGUgZmlsZSBwYXJ0IDMiLCAic2F2ZWQtc3RhdGUgbGVuZ3RoIik7CisgICAgaWYg
KGMpIHsKKyAgICAgICAgbGlieGxfX2ZyZWVfYWxsKCZnYyk7CisgICAgICAgIHJldHVybiBjOwor
ICAgIH0KKworCisgICAgZmRfZG0gPSBvcGVuKGZpbGVuYW1lLCBPX1JET05MWSk7CisgICAgd2hp
bGUgKChjID0gcmVhZChmZF9kbSwgYnVmLCBzaXplb2YoYnVmKSkpICE9IDApIHsKKyAgICAgICAg
aWYgKGMgPCAwKSB7CisgICAgICAgICAgICBpZiAoZXJybm8gPT0gRUlOVFIpCisgICAgICAgICAg
ICAgICAgY29udGludWU7CisgICAgICAgICAgICBsaWJ4bF9fZnJlZV9hbGwoJmdjKTsKKyAgICAg
ICAgICAgIHJldHVybiBlcnJubzsKKyAgICAgICAgfQorCWMyID0gYzsKKyAgICAgICAgYyA9IGxp
YnhsX3dyaXRlX2V4YWN0bHkoCisgICAgICAgICAgICBjdHgsIGZkMiwgYnVmLCBjMiwgInNhdmVk
LXN0YXRlIGZpbGUgcGFydCAyIiwgInFlbXUgc3RhdGUiKTsKKyAgICAgICAgaWYgKGMpIHsKKyAg
ICAgICAgICAgIGxpYnhsX19mcmVlX2FsbCgmZ2MpOworICAgICAgICAgICAgcmV0dXJuIGM7Cisg
ICAgICAgIH0KKwljID0gbGlieGxfd3JpdGVfZXhhY3RseSgKKyAgICAgICAgICAgIGN0eCwgZmQz
LCBidWYsIGMyLCAic2F2ZWQtc3RhdGUgZmlsZSBwYXJ0IDMiLCAicWVtdSBzdGF0ZSIpOworICAg
ICAgICBpZiAoYykgeworICAgICAgICAgICAgbGlieGxfX2ZyZWVfYWxsKCZnYyk7CisgICAgICAg
ICAgICByZXR1cm4gYzsKKyAgICAgICAgfQorICAgIH0KKyAgICBjbG9zZShmZF9kbSk7CisgICAg
dW5saW5rKGZpbGVuYW1lKTsKKyAgICBsaWJ4bF9fZnJlZV9hbGwoJmdjKTsKKyAgICByZXR1cm4g
MDsKK30KKwogY2hhciAqbGlieGxfX3V1aWQyc3RyaW5nKGxpYnhsX19nYyAqZ2MsIGNvbnN0IGxp
YnhsX3V1aWQgdXVpZCkKIHsKICAgICBjaGFyICpzID0gbGlieGxfX3NwcmludGYoZ2MsIExJQlhM
X1VVSURfRk1ULCBMSUJYTF9VVUlEX0JZVEVTKHV1aWQpKTsKICAgICBpZiAoIXMpCiAgICAgICAg
IExJQlhMX19MT0cobGlieGxfX2djX293bmVyKGdjKSwgTElCWExfX0xPR19FUlJPUiwgImNhbm5v
dCBhbGxvY2F0ZSBmb3IgdXVpZCIpOwogICAgIHJldHVybiBzOwogfQogCmRpZmYgLS1leGNsdWRl
PS5zdm4gLS1leGNsdWRlPScqLnJlaicgLS1leGNsdWRlPScqLm9yaWcnIC1ycE4gLVU4IHhlbi00
LjEuMi1hL3Rvb2xzL2xpYnhsL2xpYnhsLmggeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvbGlieGwu
aAotLS0geGVuLTQuMS4yLWEvdG9vbHMvbGlieGwvbGlieGwuaAkyMDExLTEwLTIxIDAxOjA1OjQy
LjAwMDAwMDAwMCArMDgwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvbGlieGwuaAkyMDEy
LTEyLTI4IDE2OjU5OjM4LjAwODkzNDM5OSArMDgwMApAQCAtMjExLDI2ICsyMTEsNDIgQEAgdm9p
ZCBsaWJ4bF9maWxlX3JlZmVyZW5jZV9kZXN0cm95KGxpYnhsXwogdHlwZWRlZiBzdHJ1Y3QgbGli
eGxfX2NwdWlkX3BvbGljeSBsaWJ4bF9jcHVpZF9wb2xpY3k7CiB0eXBlZGVmIGxpYnhsX2NwdWlk
X3BvbGljeSAqIGxpYnhsX2NwdWlkX3BvbGljeV9saXN0Owogdm9pZCBsaWJ4bF9jcHVpZF9kZXN0
cm95KGxpYnhsX2NwdWlkX3BvbGljeV9saXN0ICpjcHVpZF9saXN0KTsKIAogI2RlZmluZSBMSUJY
TF9QQ0lfRlVOQ19BTEwgKH4wVSkKIAogI2luY2x1ZGUgIl9saWJ4bF90eXBlcy5oIgogCisKKyNk
ZWZpbmUgVkVSU0lPTl9SRVNUT1JFX0ZST01fMUYgMQorI2RlZmluZSBWRVJTSU9OX1JFU1RPUkVf
RlJPTV8zRiAyCisKK3N0cnVjdCBzYXZlX3Jlc3RvcmVfc3BlY2lmaWMKK3sKKyAgdWludDhfdCB4
bF9zYXZlX3ZlcnNpb247CisgIHVpbnQ4X3QgeGxfcmVzdG9yZV92ZXJzaW9uOworICBjb25zdCBj
aGFyICpmMTsKKyAgY29uc3QgY2hhciAqZjI7CisgIGNvbnN0IGNoYXIgKmYzOworfTsKKwogdHlw
ZWRlZiBzdHJ1Y3QgewogICAgIHhlbnRvb2xsb2dfbG9nZ2VyICpsZzsKICAgICB4Y19pbnRlcmZh
Y2UgKnhjaDsKICAgICBzdHJ1Y3QgeHNfaGFuZGxlICp4c2g7CiAKICAgICAvKiBmb3IgY2FsbGVy
cyB3aG8gcmVhcCBjaGlsZHJlbiB3aWxseS1uaWxseTsgY2FsbGVyIG11c3Qgb25seQogICAgICAq
IHNldCB0aGlzIGFmdGVyIGxpYnhsX2luaXQgYW5kIGJlZm9yZSBhbnkgb3RoZXIgY2FsbCAtIG9y
CiAgICAgICogbWF5IGxlYXZlIHRoZW0gdW50b3VjaGVkICovCiAgICAgaW50ICgqd2FpdHBpZF9p
bnN0ZWFkKShwaWRfdCBwaWQsIGludCAqc3RhdHVzLCBpbnQgZmxhZ3MpOwogICAgIGxpYnhsX3Zl
cnNpb25faW5mbyB2ZXJzaW9uX2luZm87CisgICAgCisgIC8qYW5vdGhlciBzYXZlIGFuZCByZXN0
b3JlIGZlYXR1cmUsIG9wZXJhdGUgb24gdGhyZWUgc3BlcmF0ZWQgZmlsZXMqLworICBzdHJ1Y3Qg
c2F2ZV9yZXN0b3JlX3NwZWNpZmljIHhsX3NhdmVfcmVzdG9yZTsKIH0gbGlieGxfY3R4OwogCiBj
b25zdCBsaWJ4bF92ZXJzaW9uX2luZm8qIGxpYnhsX2dldF92ZXJzaW9uX2luZm8obGlieGxfY3R4
ICpjdHgpOwogCiB0eXBlZGVmIHN0cnVjdCB7CiAjZGVmaW5lIFhMX1NVU1BFTkRfREVCVUcgMQog
I2RlZmluZSBYTF9TVVNQRU5EX0xJVkUgMgogICAgIGludCBmbGFnczsKQEAgLTI5MCwxOSArMzA2
LDI0IEBAIGludCBsaWJ4bF9jdHhfcG9zdGZvcmsobGlieGxfY3R4ICpjdHgpOwogCiAvKiBkb21h
aW4gcmVsYXRlZCBmdW5jdGlvbnMgKi8KIHZvaWQgbGlieGxfaW5pdF9jcmVhdGVfaW5mbyhsaWJ4
bF9kb21haW5fY3JlYXRlX2luZm8gKmNfaW5mbyk7CiB2b2lkIGxpYnhsX2luaXRfYnVpbGRfaW5m
byhsaWJ4bF9kb21haW5fYnVpbGRfaW5mbyAqYl9pbmZvLCBsaWJ4bF9kb21haW5fY3JlYXRlX2lu
Zm8gKmNfaW5mbyk7CiB2b2lkIGxpYnhsX2luaXRfZG1faW5mbyhsaWJ4bF9kZXZpY2VfbW9kZWxf
aW5mbyAqZG1faW5mbywgbGlieGxfZG9tYWluX2NyZWF0ZV9pbmZvICpjX2luZm8sIGxpYnhsX2Rv
bWFpbl9idWlsZF9pbmZvICpiX2luZm8pOwogdHlwZWRlZiBpbnQgKCpsaWJ4bF9jb25zb2xlX3Jl
YWR5KShsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIHZvaWQgKnByaXYpOwogaW50IGxp
YnhsX2RvbWFpbl9jcmVhdGVfbmV3KGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fY29uZmln
ICpkX2NvbmZpZywgbGlieGxfY29uc29sZV9yZWFkeSBjYiwgdm9pZCAqcHJpdiwgdWludDMyX3Qg
KmRvbWlkKTsKIGludCBsaWJ4bF9kb21haW5fY3JlYXRlX3Jlc3RvcmUobGlieGxfY3R4ICpjdHgs
IGxpYnhsX2RvbWFpbl9jb25maWcgKmRfY29uZmlnLCBsaWJ4bF9jb25zb2xlX3JlYWR5IGNiLCB2
b2lkICpwcml2LCB1aW50MzJfdCAqZG9taWQsIGludCByZXN0b3JlX2ZkKTsKK2ludCBsaWJ4bF9k
b21haW5fY3JlYXRlX3Jlc3RvcmUyKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fY29uZmln
ICpkX2NvbmZpZywKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfY29uc29s
ZV9yZWFkeSBjYiwgdm9pZCAqcHJpdiwgdWludDMyX3QgKmRvbWlkLCAKKwkJCQkgaW50IHJlc3Rv
cmVfZmQyLCBpbnQgcmVzdG9yZV9mZDMpOwogdm9pZCBsaWJ4bF9kb21haW5fY29uZmlnX2Rlc3Ry
b3kobGlieGxfZG9tYWluX2NvbmZpZyAqZF9jb25maWcpOwogaW50IGxpYnhsX2RvbWFpbl9zdXNw
ZW5kKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kb21haW5fc3VzcGVuZF9pbmZvICppbmZvLAogICAg
ICAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCBkb21pZCwgaW50IGZkKTsKK2ludCBsaWJ4
bF9kb21haW5fc3VzcGVuZDIobGlieGxfY3R4ICpjdHgsIGxpYnhsX2RvbWFpbl9zdXNwZW5kX2lu
Zm8gKmluZm8sCisJCQkgdWludDMyX3QgZG9taWQsIGludCBmZDIsIGludCBmZDMpOwogaW50IGxp
YnhsX2RvbWFpbl9yZXN1bWUobGlieGxfY3R4ICpjdHgsIHVpbnQzMl90IGRvbWlkKTsKIGludCBs
aWJ4bF9kb21haW5fc2h1dGRvd24obGlieGxfY3R4ICpjdHgsIHVpbnQzMl90IGRvbWlkLCBpbnQg
cmVxKTsKIGludCBsaWJ4bF9kb21haW5fZGVzdHJveShsaWJ4bF9jdHggKmN0eCwgdWludDMyX3Qg
ZG9taWQsIGludCBmb3JjZSk7CiBpbnQgbGlieGxfZG9tYWluX3ByZXNlcnZlKGxpYnhsX2N0eCAq
Y3R4LCB1aW50MzJfdCBkb21pZCwgbGlieGxfZG9tYWluX2NyZWF0ZV9pbmZvICppbmZvLCBjb25z
dCBjaGFyICpuYW1lX3N1ZmZpeCwgbGlieGxfdXVpZCBuZXdfdXVpZCk7CiAKIC8qIGdldCBtYXgu
IG51bWJlciBvZiBjcHVzIHN1cHBvcnRlZCBieSBoeXBlcnZpc29yICovCiBpbnQgbGlieGxfZ2V0
X21heF9jcHVzKGxpYnhsX2N0eCAqY3R4KTsKIApkaWZmIC0tZXhjbHVkZT0uc3ZuIC0tZXhjbHVk
ZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9s
aWJ4bC9saWJ4bF9pbnRlcm5hbC5oIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVy
bmFsLmgKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2xpYnhsL2xpYnhsX2ludGVybmFsLmgJMjAxMS0x
MC0yMSAwMTowNTo0My4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhs
L2xpYnhsX2ludGVybmFsLmgJMjAxMi0xMi0yOCAxNzowMDowMy40OTI2ODE2NzQgKzA4MDAKQEAg
LTE3MCwxNiArMTcwLDE3IEBAIF9oaWRkZW4gaW50IGxpYnhsX19idWlsZF9wdihsaWJ4bF9jdHgg
KmMKICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fYnVpbGRfaW5mbyAqaW5mbywgbGlieGxfZG9t
YWluX2J1aWxkX3N0YXRlICpzdGF0ZSk7CiBfaGlkZGVuIGludCBsaWJ4bF9fYnVpbGRfaHZtKGxp
YnhsX2N0eCAqY3R4LCB1aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgbGlieGxfZG9tYWlu
X2J1aWxkX2luZm8gKmluZm8sIGxpYnhsX2RvbWFpbl9idWlsZF9zdGF0ZSAqc3RhdGUpOwogCiBf
aGlkZGVuIGludCBsaWJ4bF9fZG9tYWluX3Jlc3RvcmVfY29tbW9uKGxpYnhsX2N0eCAqY3R4LCB1
aW50MzJfdCBkb21pZCwKICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fYnVpbGRfaW5m
byAqaW5mbywgbGlieGxfZG9tYWluX2J1aWxkX3N0YXRlICpzdGF0ZSwgaW50IGZkKTsKIF9oaWRk
ZW4gaW50IGxpYnhsX19kb21haW5fc3VzcGVuZF9jb21tb24obGlieGxfY3R4ICpjdHgsIHVpbnQz
Ml90IGRvbWlkLCBpbnQgZmQsIGludCBodm0sIGludCBsaXZlLCBpbnQgZGVidWcpOwogX2hpZGRl
biBpbnQgbGlieGxfX2RvbWFpbl9zYXZlX2RldmljZV9tb2RlbChsaWJ4bF9jdHggKmN0eCwgdWlu
dDMyX3QgZG9taWQsIGludCBmZCk7CitfaGlkZGVuIGludCBsaWJ4bF9fZG9tYWluX3NhdmVfZGV2
aWNlX21vZGVsMihsaWJ4bF9jdHggKmN0eCwgdWludDMyX3QgZG9taWQsIGludCBmZDIsIGludCBm
ZDMpOwogX2hpZGRlbiB2b2lkIGxpYnhsX191c2VyZGF0YV9kZXN0cm95YWxsKGxpYnhsX2N0eCAq
Y3R4LCB1aW50MzJfdCBkb21pZCk7CiAKIC8qIGZyb20geGxfZGV2aWNlICovCiBfaGlkZGVuIGNo
YXIgKmxpYnhsX19kZXZpY2VfZGlza19zdHJpbmdfb2ZfYmFja2VuZChsaWJ4bF9kaXNrX2JhY2tl
bmQgYmFja2VuZCk7CiBfaGlkZGVuIGNoYXIgKmxpYnhsX19kZXZpY2VfZGlza19zdHJpbmdfb2Zf
Zm9ybWF0KGxpYnhsX2Rpc2tfZm9ybWF0IGZvcm1hdCk7CiAKIF9oaWRkZW4gaW50IGxpYnhsX19k
ZXZpY2VfcGh5c2Rpc2tfbWFqb3JfbWlub3IoY29uc3QgY2hhciAqcGh5c3BhdGgsIGludCAqbWFq
b3IsIGludCAqbWlub3IpOwogX2hpZGRlbiBpbnQgbGlieGxfX2RldmljZV9kaXNrX2Rldl9udW1i
ZXIoY29uc3QgY2hhciAqdmlydHBhdGgpOwpkaWZmIC0tZXhjbHVkZT0uc3ZuIC0tZXhjbHVkZT0n
Ki5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9saWJ4
bC9saWJ4bF91dGlscy5oIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL2xpYnhsX3V0aWxzLmgKLS0t
IHhlbi00LjEuMi1hL3Rvb2xzL2xpYnhsL2xpYnhsX3V0aWxzLmgJMjAxMS0xMC0yMSAwMTowNTo0
My4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL2xpYnhsX3V0aWxz
LmgJMjAxMi0xMi0yOCAxNjo1OTo1NC40NjM5MzQzMTAgKzA4MDAKQEAgLTg0LDEwICs4NCwyMyBA
QCB2b2lkIGxpYnhsX2NwdW1hcF9yZXNldChsaWJ4bF9jcHVtYXAgKmNwCiAjZGVmaW5lIGxpYnhs
X2Zvcl9lYWNoX2NwdSh2YXIsIG1hcCkgZm9yICh2YXIgPSAwOyB2YXIgPCAobWFwKS5zaXplICog
ODsgdmFyKyspCiAKIGludCBsaWJ4bF9jcHVhcnJheV9hbGxvYyhsaWJ4bF9jdHggKmN0eCwgbGli
eGxfY3B1YXJyYXkgKmNwdWFycmF5KTsKIAogc3RhdGljIGlubGluZSB1aW50MzJfdCBsaWJ4bF9f
c2l6ZWtiX3RvX21iKHVpbnQzMl90IHMpIHsKICAgICByZXR1cm4gKHMgKyAxMDIzKSAvIDEwMjQ7
CiB9CiAKKworCisvKgorICpmb3Igc2ltcGxlIGRlYnVnIGxvZyBzdGF0ZW1lbnQKKyAqLworI2Rl
ZmluZSBJTkZPKF9tLCBfYS4uLikgTElCWExfX0xPRyhjdHgsIExJQlhMX19MT0dfSU5GTywgX20s
ICMjIF9hKTsKKyNkZWZpbmUgRVJST1IoX20sIF9hLi4uKSBMSUJYTF9fTE9HKGN0eCwgTElCWExf
X0xPR19FUlJPUiwgX20sICMjIF9hKTsKKworCisKKworCisKICNlbmRpZgogCmRpZmYgLS1leGNs
dWRlPS5zdm4gLS1leGNsdWRlPScqLnJlaicgLS1leGNsdWRlPScqLm9yaWcnIC1ycE4gLVU4IHhl
bi00LjEuMi1hL3Rvb2xzL2xpYnhsL01ha2VmaWxlIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL01h
a2VmaWxlCi0tLSB4ZW4tNC4xLjItYS90b29scy9saWJ4bC9NYWtlZmlsZQkyMDExLTEwLTIxIDAx
OjA1OjQyLjAwMDAwMDAwMCArMDgwMAorKysgeGVuLTQuMS4yLWIvdG9vbHMvbGlieGwvTWFrZWZp
bGUJMjAxMi0xMi0yOCAxNzowMDo0NS42MTE5MzM4OTcgKzA4MDAKQEAgLTYsMTYgKzYsMTcgQEAg
WEVOX1JPT1QgPSAkKENVUkRJUikvLi4vLi4KIGluY2x1ZGUgJChYRU5fUk9PVCkvdG9vbHMvUnVs
ZXMubWsKIAogTUFKT1IgPSAxLjAKIE1JTk9SID0gMAogCiBYTFVNQUpPUiA9IDEuMAogWExVTUlO
T1IgPSAwCiAKKyMgQ0ZMQUdTICs9IC1nZ2RiIC1PMAogQ0ZMQUdTICs9IC1XZXJyb3IgLVduby1m
b3JtYXQtemVyby1sZW5ndGggLVdtaXNzaW5nLWRlY2xhcmF0aW9ucwogQ0ZMQUdTICs9IC1JLiAt
ZlBJQwogQ0ZMQUdTICs9ICQoQ0ZMQUdTX2xpYnhlbmN0cmwpICQoQ0ZMQUdTX2xpYnhlbmd1ZXN0
KSAkKENGTEFHU19saWJ4ZW5zdG9yZSkgJChDRkxBR1NfbGliYmxrdGFwY3RsKQogCiBMSUJTID0g
JChMRExJQlNfbGlieGVuY3RybCkgJChMRExJQlNfbGlieGVuZ3Vlc3QpICQoTERMSUJTX2xpYnhl
bnN0b3JlKSAkKExETElCU19saWJibGt0YXBjdGwpICQoVVRJTF9MSUJTKQogaWZlcSAoJChDT05G
SUdfTGludXgpLHkpCiBMSUJTICs9IC1sdXVpZAogZW5kaWYKZGlmZiAtLWV4Y2x1ZGU9LnN2biAt
LWV4Y2x1ZGU9JyoucmVqJyAtLWV4Y2x1ZGU9Jyoub3JpZycgLXJwTiAtVTggeGVuLTQuMS4yLWEv
dG9vbHMvbGlieGwveGxfY21kaW1wbC5jIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hsX2NtZGlt
cGwuYwotLS0geGVuLTQuMS4yLWEvdG9vbHMvbGlieGwveGxfY21kaW1wbC5jCTIwMTEtMTAtMjEg
MDE6MDU6NDMuMDAwMDAwMDAwICswODAwCisrKyB4ZW4tNC4xLjItYi90b29scy9saWJ4bC94bF9j
bWRpbXBsLmMJMjAxMi0xMi0yOCAxNzowMDo0NS42MTE5MzM4OTcgKzA4MDAKQEAgLTUzLDE2ICs1
MywyMyBAQAogICAgICAgICBpbnQgbXVzdF9yYyA9IChjYWxsKTsgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgXAogICAgICAgICBpZiAobXVzdF9yYyA8IDApIHsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAg
ICAgIGZwcmludGYoc3RkZXJyLCJ4bDogZmF0YWwgZXJyb3I6ICVzOiVkLCByYz0lZDogJXNcbiIs
ICAgICAgIFwKICAgICAgICAgICAgICAgICAgICAgX19GSUxFX18sX19MSU5FX18sIG11c3RfcmMs
ICNjYWxsKTsgICAgICAgICAgICAgICAgIFwKICAgICAgICAgICAgIGV4aXQoLW11c3RfcmMpOyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAgfSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFwKICAgICB9KQogCisjZGVmaW5lIENMT1NFRkQoX2ZkKSAoewkJCQlcCisgICAgICAgIGlm
IChfZmQgIT0gLTEpICAgICAgICAgICAgICAgICAgICAgICAgICBcCisJICB7ICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFwKKwkgICAgY2xvc2UoX2ZkKTsgICAgICAgICAgICAg
ICAgICAgICAgICAgXAorCSAgICBfZmQgPSAtMTsgICAgICAgICAgICAgICAgICAgICAgICAgICBc
CisJICB9ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICB9KQogCiBp
bnQgbG9nZmlsZSA9IDI7CiAKIC8qIGV2ZXJ5IGxpYnhsIGFjdGlvbiBpbiB4bCB1c2VzIHRoaXMg
c2FtZSBsaWJ4bCBjb250ZXh0ICovCiBsaWJ4bF9jdHggY3R4OwogCiAvKiB3aGVuIHdlIG9wZXJh
dGUgb24gYSBkb21haW4sIGl0IGlzIHRoaXMgb25lOiAqLwogc3RhdGljIHVpbnQzMl90IGRvbWlk
OwpAQCAtMTM2MywyOSArMTM3MCw1MSBAQCBzdGF0aWMgaW50IGNyZWF0ZV9kb21haW4oc3RydWN0
IGRvbWFpbl9jCiAgICAgbGlieGxfd2FpdGVyICp3MSA9IE5VTEwsICp3MiA9IE5VTEw7CiAgICAg
dm9pZCAqY29uZmlnX2RhdGEgPSAwOwogICAgIGludCBjb25maWdfbGVuID0gMDsKICAgICBpbnQg
cmVzdG9yZV9mZCA9IC0xOwogICAgIGludCBzdGF0dXMgPSAwOwogICAgIGxpYnhsX2NvbnNvbGVf
cmVhZHkgY2I7CiAgICAgcGlkX3QgY2hpbGRfY29uc29sZV9waWQgPSAtMTsKICAgICBzdHJ1Y3Qg
c2F2ZV9maWxlX2hlYWRlciBoZHI7CisgICAgaW50IHJlc3RvcmVfZmQxID0gLTEsIHJlc3RvcmVf
ZmQyID0gLTEsIHJlc3RvcmVfZmQzID0gLTEsIHNlcGVyYXRlZCA9IDA7CisgICAgaW50IHJlYWxf
cmVzdG9yZV9mZCA9IC0xOwogCiAgICAgbWVtc2V0KCZkX2NvbmZpZywgMHgwMCwgc2l6ZW9mKGRf
Y29uZmlnKSk7CiAKICAgICBpZiAocmVzdG9yZV9maWxlKSB7CiAgICAgICAgIHVpbnQ4X3QgKm9w
dGRhdGFfYmVnaW4gPSAwOwogICAgICAgICBjb25zdCB1aW50OF90ICpvcHRkYXRhX2hlcmUgPSAw
OwogICAgICAgICB1bmlvbiB7IHVpbnQzMl90IHUzMjsgY2hhciBiWzRdOyB9IHUzMmJ1ZjsKICAg
ICAgICAgdWludDMyX3QgYmFkZmxhZ3M7CiAKLSAgICAgICAgcmVzdG9yZV9mZCA9IG1pZ3JhdGVf
ZmQgPj0gMCA/IG1pZ3JhdGVfZmQgOgotICAgICAgICAgICAgb3BlbihyZXN0b3JlX2ZpbGUsIE9f
UkRPTkxZKTsKKwlpZiAoVkVSU0lPTl9SRVNUT1JFX0ZST01fM0YgPT0gY3R4LnhsX3NhdmVfcmVz
dG9yZS54bF9yZXN0b3JlX3ZlcnNpb24pCisJICB7CisJICAgIHJlc3RvcmVfZmQxID0gb3Blbihj
dHgueGxfc2F2ZV9yZXN0b3JlLmYxLCBPX1JET05MWSk7CisJICAgIHJlc3RvcmVfZmQyID0gb3Bl
bihjdHgueGxfc2F2ZV9yZXN0b3JlLmYyLCBPX1JET05MWSk7CisJICAgIHJlc3RvcmVfZmQzID0g
LTE7CisJICAgIHNlcGVyYXRlZCA9IDE7CisJICB9CisJZWxzZQorCSAgeworCSAgICBzZXBlcmF0
ZWQgPSAwOworCSAgICByZXN0b3JlX2ZkID0gbWlncmF0ZV9mZCA+PSAwID8gbWlncmF0ZV9mZCA6
CisJICAgICAgb3BlbihyZXN0b3JlX2ZpbGUsIE9fUkRPTkxZKTsKKwkgIH0KKwkKKwlpZiAoc2Vw
ZXJhdGVkKQorCSAgeworCSAgICByZWFsX3Jlc3RvcmVfZmQgPSByZXN0b3JlX2ZkMTsKKwkgIH0K
KwllbHNlCisJICB7CisJICAgIHJlYWxfcmVzdG9yZV9mZCA9IHJlc3RvcmVfZmQ7CisJICB9CiAK
LSAgICAgICAgQ0hLX0VSUk5PKCBsaWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgcmVzdG9yZV9mZCwg
JmhkciwKKyAgICAgICAgQ0hLX0VSUk5PKCBsaWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgcmVhbF9y
ZXN0b3JlX2ZkLCAmaGRyLAogICAgICAgICAgICAgICAgICAgIHNpemVvZihoZHIpLCByZXN0b3Jl
X2ZpbGUsICJoZWFkZXIiKSApOwogICAgICAgICBpZiAobWVtY21wKGhkci5tYWdpYywgc2F2ZWZp
bGVoZWFkZXJfbWFnaWMsIHNpemVvZihoZHIubWFnaWMpKSkgewogICAgICAgICAgICAgZnByaW50
ZihzdGRlcnIsICJGaWxlIGhhcyB3cm9uZyBtYWdpYyBudW1iZXIgLSIKICAgICAgICAgICAgICAg
ICAgICAgIiBjb3JydXB0IG9yIGZvciBhIGRpZmZlcmVudCB0b29sP1xuIik7CiAgICAgICAgICAg
ICByZXR1cm4gRVJST1JfSU5WQUw7CiAgICAgICAgIH0KICAgICAgICAgaWYgKGhkci5ieXRlb3Jk
ZXIgIT0gU0FWRUZJTEVfQllURU9SREVSX1ZBTFVFKSB7CiAgICAgICAgICAgICBmcHJpbnRmKHN0
ZGVyciwgIkZpbGUgaGFzIHdyb25nIGJ5dGUgb3JkZXJcbiIpOwpAQCAtMTQwMSwxNyArMTQzMCwx
NyBAQCBzdGF0aWMgaW50IGNyZWF0ZV9kb21haW4oc3RydWN0IGRvbWFpbl9jCiAgICAgICAgIGlm
IChiYWRmbGFncykgewogICAgICAgICAgICAgZnByaW50ZihzdGRlcnIsICJTYXZlZmlsZSBoYXMg
bWFuZGF0b3J5IGZsYWcocykgMHglIlBSSXgzMiIgIgogICAgICAgICAgICAgICAgICAgICAid2hp
Y2ggYXJlIG5vdCBzdXBwb3J0ZWQ7IG5lZWQgbmV3ZXIgeGxcbiIsCiAgICAgICAgICAgICAgICAg
ICAgIGJhZGZsYWdzKTsKICAgICAgICAgICAgIHJldHVybiBFUlJPUl9JTlZBTDsKICAgICAgICAg
fQogICAgICAgICBpZiAoaGRyLm9wdGlvbmFsX2RhdGFfbGVuKSB7CiAgICAgICAgICAgICBvcHRk
YXRhX2JlZ2luID0geG1hbGxvYyhoZHIub3B0aW9uYWxfZGF0YV9sZW4pOwotICAgICAgICAgICAg
Q0hLX0VSUk5PKCBsaWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgcmVzdG9yZV9mZCwgb3B0ZGF0YV9i
ZWdpbiwKKyAgICAgICAgICAgIENIS19FUlJOTyggbGlieGxfcmVhZF9leGFjdGx5KCZjdHgsIHJl
YWxfcmVzdG9yZV9mZCwgb3B0ZGF0YV9iZWdpbiwKICAgICAgICAgICAgICAgICAgICBoZHIub3B0
aW9uYWxfZGF0YV9sZW4sIHJlc3RvcmVfZmlsZSwgIm9wdGRhdGEiKSApOwogICAgICAgICB9CiAK
ICNkZWZpbmUgT1BUREFUQV9MRUZUICAoaGRyLm9wdGlvbmFsX2RhdGFfbGVuIC0gKG9wdGRhdGFf
aGVyZSAtIG9wdGRhdGFfYmVnaW4pKQogI2RlZmluZSBXSVRIX09QVERBVEEoYW10LCBib2R5KSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICAgICAgIGlmIChPUFREQVRB
X0xFRlQgPCAoYW10KSkgeyAgICAgICAgICAgICAgICAgICAgICAgICBcCiAgICAgICAgICAgICAg
ICAgZnByaW50ZihzdGRlcnIsICJTYXZlZmlsZSB0cnVuY2F0ZWQuXG4iKTsgICAgICAgXAogICAg
ICAgICAgICAgICAgIHJldHVybiBFUlJPUl9JTlZBTDsgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIFwKQEAgLTE1MTIsMTkgKzE1NDEsMjkgQEAgc3RhcnQ6CiAKICAgICBpZiAoIGRvbV9pbmZv
LT5jb25zb2xlX2F1dG9jb25uZWN0ICkgewogICAgICAgICBjYiA9IGF1dG9jb25uZWN0X2NvbnNv
bGU7CiAgICAgfWVsc2V7CiAgICAgICAgIGNiID0gTlVMTDsKICAgICB9CiAKICAgICBpZiAoIHJl
c3RvcmVfZmlsZSApIHsKLSAgICAgICAgcmV0ID0gbGlieGxfZG9tYWluX2NyZWF0ZV9yZXN0b3Jl
KCZjdHgsICZkX2NvbmZpZywKKyAgICAgIGlmIChzZXBlcmF0ZWQpCisJeworCSAgcmV0ID0gbGli
eGxfZG9tYWluX2NyZWF0ZV9yZXN0b3JlMigmY3R4LCAmZF9jb25maWcsCisJCQkJCSAgICAgY2Is
ICZjaGlsZF9jb25zb2xlX3BpZCwKKwkJCQkJICAgICAmZG9taWQsIHJlc3RvcmVfZmQyLCByZXN0
b3JlX2ZkMyk7CisJfQorICAgICAgZWxzZQorCXsKKwkgIHJldCA9IGxpYnhsX2RvbWFpbl9jcmVh
dGVfcmVzdG9yZSgmY3R4LCAmZF9jb25maWcsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGNiLCAmY2hpbGRfY29uc29sZV9waWQsCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICZkb21pZCwgcmVzdG9yZV9mZCk7CisJfQorICAg
ICAgICAKICAgICB9ZWxzZXsKICAgICAgICAgcmV0ID0gbGlieGxfZG9tYWluX2NyZWF0ZV9uZXco
JmN0eCwgJmRfY29uZmlnLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGNiLCAmY2hpbGRfY29uc29sZV9waWQsICZkb21pZCk7CiAgICAgfQogICAgIGlmICggcmV0ICkK
ICAgICAgICAgZ290byBlcnJvcl9vdXQ7CiAKICAgICByZXQgPSBsaWJ4bF91c2VyZGF0YV9zdG9y
ZSgmY3R4LCBkb21pZCwgInhsIiwKQEAgLTE2OTgsMTYgKzE3MzcsMjAgQEAgZXJyb3Jfb3V0Ogog
CiBvdXQ6CiAgICAgaWYgKGxvZ2ZpbGUgIT0gMikKICAgICAgICAgY2xvc2UobG9nZmlsZSk7CiAK
ICAgICBsaWJ4bF9kb21haW5fY29uZmlnX2Rlc3Ryb3koJmRfY29uZmlnKTsKIAogICAgIGZyZWUo
Y29uZmlnX2RhdGEpOworICAgIENMT1NFRkQocmVzdG9yZV9mZCk7CisgICAgQ0xPU0VGRChyZXN0
b3JlX2ZkMSk7CisgICAgQ0xPU0VGRChyZXN0b3JlX2ZkMik7CisgICAgQ0xPU0VGRChyZXN0b3Jl
X2ZkMyk7CiAKIHdhaXRwaWRfb3V0OgogICAgIGlmIChjaGlsZF9jb25zb2xlX3BpZCA+IDAgJiYK
ICAgICAgICAgICAgIHdhaXRwaWQoY2hpbGRfY29uc29sZV9waWQsICZzdGF0dXMsIDApIDwgMCAm
JiBlcnJubyA9PSBFSU5UUikKICAgICAgICAgZ290byB3YWl0cGlkX291dDsKIAogICAgIC8qCiAg
ICAgICogSWYgd2UgaGF2ZSBkYWVtb25pemVkIHRoZW4gZG8gbm90IHJldHVybiB0byB0aGUgY2Fs
bGVyIC0tIHRoaXMgaGFzCkBAIC0yNDU1LDE2ICsyNDk4LDY2IEBAIHN0YXRpYyBpbnQgc2F2ZV9k
b21haW4oY29uc3QgY2hhciAqcCwgY28KICAgICBpZiAoY2hlY2twb2ludCkKICAgICAgICAgbGli
eGxfZG9tYWluX3VucGF1c2UoJmN0eCwgZG9taWQpOwogICAgIGVsc2UKICAgICAgICAgbGlieGxf
ZG9tYWluX2Rlc3Ryb3koJmN0eCwgZG9taWQsIDApOwogCiAgICAgZXhpdCgwKTsKIH0KIAorCitz
dGF0aWMgaW50IHNhdmVfZG9tYWluMihjb25zdCBjaGFyICpwLCBjb25zdCBjaGFyICpmMSwgY29u
c3QgY2hhciAqZjIsIGNvbnN0IGNoYXIgKmYzLAorCQkJaW50IGNoZWNrcG9pbnQsCisJCQljb25z
dCBjaGFyICpvdmVycmlkZV9jb25maWdfZmlsZSkKK3sKKyAgaW50IGZkMSwgZmQyLCBmZDM7Cisg
ICAgdWludDhfdCAqY29uZmlnX2RhdGE7CisgICAgaW50IGNvbmZpZ19sZW47CisKKyAgICBMT0co
InVzaW5nIHNhdmUyLi4uIik7CisKKyAgICBzYXZlX2RvbWFpbl9jb3JlX2JlZ2luKHAsIG92ZXJy
aWRlX2NvbmZpZ19maWxlLCAmY29uZmlnX2RhdGEsICZjb25maWdfbGVuKTsKKworICAgIGlmICgh
Y29uZmlnX2xlbikgeworICAgICAgICBmcHV0cygiIFNhdmVmaWxlIHdpbGwgbm90IGNvbnRhaW4g
eGwgZG9tYWluIGNvbmZpZ1xuIiwgc3RkZXJyKTsKKyAgICB9CisKKyAgICBmZDEgPSBvcGVuKGYx
LCBPX1dST05MWXxPX0NSRUFUfE9fVFJVTkMsIDA2NDQpOworICAgIGlmIChmZDEgPCAwKSB7Cisg
ICAgICAgIGZwcmludGYoc3RkZXJyLCAiRmFpbGVkIHRvIG9wZW4gdGVtcCBmaWxlICVzIGZvciB3
cml0aW5nXG4iLCBmMSk7CisgICAgICAgIGV4aXQoMik7CisgICAgfQorCisgICAgc2F2ZV9kb21h
aW5fY29yZV93cml0ZWNvbmZpZyhmZDEsIGYxLCBjb25maWdfZGF0YSwgY29uZmlnX2xlbik7Cisg
ICAgY2xvc2UoZmQxKTsKKworICAgICAgZmQyID0gb3BlbihmMiwgT19XUk9OTFl8T19DUkVBVHxP
X1RSVU5DLCAwNjQ0KTsKKyAgICBpZiAoZmQyIDwgMCkgeworICAgICAgICBmcHJpbnRmKHN0ZGVy
ciwgIkZhaWxlZCB0byBvcGVuIHRlbXAgZmlsZSAlcyBmb3Igd3JpdGluZ1xuIiwgZjIpOworICAg
ICAgICBleGl0KDIpOworICAgIH0KKyAgICBmZDMgPSBvcGVuKGYzLCBPX1dST05MWXxPX0NSRUFU
fE9fVFJVTkMsIDA2NDQpOworICAgIGlmIChmZDMgPCAwKSB7CisgICAgICAgIGZwcmludGYoc3Rk
ZXJyLCAiRmFpbGVkIHRvIG9wZW4gdGVtcCBmaWxlICVzIGZvciB3cml0aW5nXG4iLCBmMyk7Cisg
ICAgICAgIGV4aXQoMik7CisgICAgfQorICAgICAgQ0hLX0VSUk5PKGxpYnhsX2RvbWFpbl9zdXNw
ZW5kMigmY3R4LCBOVUxMLCBkb21pZCwgZmQyLCBmZDMpKTsKKyAgICBjbG9zZShmZDIpOworICAg
IGNsb3NlKGZkMyk7CisKKyAgICBpZiAoY2hlY2twb2ludCkKKyAgICAgICAgbGlieGxfZG9tYWlu
X3VucGF1c2UoJmN0eCwgZG9taWQpOworICAgIGVsc2UKKyAgICAgICAgbGlieGxfZG9tYWluX2Rl
c3Ryb3koJmN0eCwgZG9taWQsIDApOworCisgICAgZXhpdCgwKTsKK30KKworCisKIHN0YXRpYyBp
bnQgbWlncmF0ZV9yZWFkX2ZpeGVkbWVzc2FnZShpbnQgZmQsIGNvbnN0IHZvaWQgKm1zZywgaW50
IG1zZ3N6LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IGNoYXIg
KndoYXQsIGNvbnN0IGNoYXIgKnJ1bmUpIHsKICAgICBjaGFyIGJ1Zlttc2dzel07CiAgICAgY29u
c3QgY2hhciAqc3RyZWFtOwogICAgIGludCByYzsKIAogICAgIHN0cmVhbSA9IHJ1bmUgPyAibWln
cmF0aW9uIHJlY2VpdmVyIHN0cmVhbSIgOiAibWlncmF0aW9uIHN0cmVhbSI7CiAgICAgcmMgPSBs
aWJ4bF9yZWFkX2V4YWN0bHkoJmN0eCwgZmQsIGJ1ZiwgbXNnc3osIHN0cmVhbSwgd2hhdCk7CkBA
IC0yODY0LDMyICsyOTU3LDEwNCBAQCBpbnQgbWFpbl9yZXN0b3JlKGludCBhcmdjLCBjaGFyICoq
YXJndikKICAgICB9IGVsc2UgaWYgKGFyZ2Mtb3B0aW5kID09IDIpIHsKICAgICAgICAgY29uZmln
X2ZpbGUgPSBhcmd2W29wdGluZF07CiAgICAgICAgIGNoZWNrcG9pbnRfZmlsZSA9IGFyZ3Zbb3B0
aW5kICsgMV07CiAgICAgfSBlbHNlIHsKICAgICAgICAgaGVscCgicmVzdG9yZSIpOwogICAgICAg
ICByZXR1cm4gMjsKICAgICB9CiAKKyAgICBjdHgueGxfc2F2ZV9yZXN0b3JlLnhsX3Jlc3RvcmVf
dmVyc2lvbiA9IFZFUlNJT05fUkVTVE9SRV9GUk9NXzFGOworCiAgICAgbWVtc2V0KCZkb21faW5m
bywgMCwgc2l6ZW9mKGRvbV9pbmZvKSk7CiAgICAgZG9tX2luZm8uZGVidWcgPSBkZWJ1ZzsKICAg
ICBkb21faW5mby5kYWVtb25pemUgPSBkYWVtb25pemU7CiAgICAgZG9tX2luZm8ucGF1c2VkID0g
cGF1c2VkOwogICAgIGRvbV9pbmZvLmNvbmZpZ19maWxlID0gY29uZmlnX2ZpbGU7CiAgICAgZG9t
X2luZm8ucmVzdG9yZV9maWxlID0gY2hlY2twb2ludF9maWxlOwogICAgIGRvbV9pbmZvLm1pZ3Jh
dGVfZmQgPSAtMTsKICAgICBkb21faW5mby5jb25zb2xlX2F1dG9jb25uZWN0ID0gY29uc29sZV9h
dXRvY29ubmVjdDsKIAogICAgIHJjID0gY3JlYXRlX2RvbWFpbigmZG9tX2luZm8pOwogICAgIGlm
IChyYyA8IDApCiAgICAgICAgIHJldHVybiAtcmM7CiAKICAgICByZXR1cm4gMDsKIH0KIAoraW50
IG1haW5fcmVzdG9yZTIoaW50IGFyZ2MsIGNoYXIgKiphcmd2KQoreworICBjb25zdCBjaGFyICpj
aGVja3BvaW50X2ZpbGUxID0gTlVMTDsKKyAgY29uc3QgY2hhciAqY2hlY2twb2ludF9maWxlMiA9
IE5VTEw7CisgIGNvbnN0IGNoYXIgKmNoZWNrcG9pbnRfZmlsZTMgPSBOVUxMOworICAgIGNvbnN0
IGNoYXIgKmNvbmZpZ19maWxlID0gTlVMTDsKKyAgICBzdHJ1Y3QgZG9tYWluX2NyZWF0ZSBkb21f
aW5mbzsKKyAgICBpbnQgcGF1c2VkID0gMCwgZGVidWcgPSAwLCBkYWVtb25pemUgPSAxLCBjb25z
b2xlX2F1dG9jb25uZWN0ID0gMDsKKyAgICBpbnQgb3B0LCByYzsKKworICAgIHdoaWxlICgob3B0
ID0gZ2V0b3B0KGFyZ2MsIGFyZ3YsICJjaHBkZSIpKSAhPSAtMSkgeworICAgICAgICBzd2l0Y2gg
KG9wdCkgeworICAgICAgICBjYXNlICdjJzoKKyAgICAgICAgICAgIGNvbnNvbGVfYXV0b2Nvbm5l
Y3QgPSAxOworICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGNhc2UgJ3AnOgorICAgICAgICAg
ICAgcGF1c2VkID0gMTsKKyAgICAgICAgICAgIGJyZWFrOworICAgICAgICBjYXNlICdkJzoKKyAg
ICAgICAgICAgIGRlYnVnID0gMTsKKyAgICAgICAgICAgIGJyZWFrOworICAgICAgICBjYXNlICdl
JzoKKyAgICAgICAgICAgIGRhZW1vbml6ZSA9IDA7CisgICAgICAgICAgICBicmVhazsKKyAgICAg
ICAgY2FzZSAnaCc6CisgICAgICAgICAgICBoZWxwKCJyZXN0b3JlMiIpOworICAgICAgICAgICAg
cmV0dXJuIDA7CisgICAgICAgIGRlZmF1bHQ6CisgICAgICAgICAgICBmcHJpbnRmKHN0ZGVyciwg
Im9wdGlvbiBgJWMnIG5vdCBzdXBwb3J0ZWQuXG4iLCBvcHRvcHQpOworICAgICAgICAgICAgYnJl
YWs7CisgICAgICAgIH0KKyAgICB9CisKKyAgICBpZiAoYXJnYy1vcHRpbmQgPT0gMykgeworICAg
ICAgICBjaGVja3BvaW50X2ZpbGUxID0gYXJndltvcHRpbmRdOworCWNoZWNrcG9pbnRfZmlsZTIg
PSBhcmd2W29wdGluZCArIDFdOworCWNoZWNrcG9pbnRfZmlsZTMgPSBhcmd2W29wdGluZCArIDJd
OworICAgIH0gZWxzZSBpZiAoYXJnYy1vcHRpbmQgPT0gNCkgeworICAgICAgICBjb25maWdfZmls
ZSA9IGFyZ3Zbb3B0aW5kXTsKKyAgICAgICAgY2hlY2twb2ludF9maWxlMSA9IGFyZ3Zbb3B0aW5k
ICsgMV07CisJY2hlY2twb2ludF9maWxlMiA9IGFyZ3Zbb3B0aW5kICsgMl07CisJY2hlY2twb2lu
dF9maWxlMyA9IGFyZ3Zbb3B0aW5kICsgM107CisgICAgfSBlbHNlIHsKKyAgICAgICAgaGVscCgi
cmVzdG9yZTIiKTsKKyAgICAgICAgcmV0dXJuIDI7CisgICAgfQorCisgICAgY3R4LnhsX3NhdmVf
cmVzdG9yZS54bF9yZXN0b3JlX3ZlcnNpb24gPSBWRVJTSU9OX1JFU1RPUkVfRlJPTV8zRjsKKyAg
ICBjdHgueGxfc2F2ZV9yZXN0b3JlLmYxID0gY2hlY2twb2ludF9maWxlMTsKKyAgICBjdHgueGxf
c2F2ZV9yZXN0b3JlLmYyID0gY2hlY2twb2ludF9maWxlMjsKKyAgICBjdHgueGxfc2F2ZV9yZXN0
b3JlLmYzID0gY2hlY2twb2ludF9maWxlMzsKKyAgCisgICAgbWVtc2V0KCZkb21faW5mbywgMCwg
c2l6ZW9mKGRvbV9pbmZvKSk7CisgICAgZG9tX2luZm8uZGVidWcgPSBkZWJ1ZzsKKyAgICBkb21f
aW5mby5kYWVtb25pemUgPSBkYWVtb25pemU7CisgICAgZG9tX2luZm8ucGF1c2VkID0gcGF1c2Vk
OworICAgIGRvbV9pbmZvLmNvbmZpZ19maWxlID0gY29uZmlnX2ZpbGU7CisgICAgZG9tX2luZm8u
cmVzdG9yZV9maWxlID0gY2hlY2twb2ludF9maWxlMTsKKyAgICBkb21faW5mby5taWdyYXRlX2Zk
ID0gLTE7CisgICAgZG9tX2luZm8uY29uc29sZV9hdXRvY29ubmVjdCA9IGNvbnNvbGVfYXV0b2Nv
bm5lY3Q7CisKKyAgICByYyA9IGNyZWF0ZV9kb21haW4oJmRvbV9pbmZvKTsKKyAgICBpZiAocmMg
PCAwKQorICAgICAgICByZXR1cm4gLXJjOworCisgICAgcmV0dXJuIDA7Cit9CisKKworCiBpbnQg
bWFpbl9taWdyYXRlX3JlY2VpdmUoaW50IGFyZ2MsIGNoYXIgKiphcmd2KQogewogICAgIGludCBk
ZWJ1ZyA9IDAsIGRhZW1vbml6ZSA9IDE7CiAgICAgaW50IG9wdDsKIAogICAgIHdoaWxlICgob3B0
ID0gZ2V0b3B0KGFyZ2MsIGFyZ3YsICJoZWQiKSkgIT0gLTEpIHsKICAgICAgICAgc3dpdGNoIChv
cHQpIHsKICAgICAgICAgY2FzZSAnaCc6CkBAIC0yOTIzLDE3ICszMDg4LDE3IEBAIGludCBtYWlu
X3NhdmUoaW50IGFyZ2MsIGNoYXIgKiphcmd2KQogICAgIGludCBjaGVja3BvaW50ID0gMDsKICAg
ICBpbnQgb3B0OwogCiAgICAgd2hpbGUgKChvcHQgPSBnZXRvcHQoYXJnYywgYXJndiwgImhjIikp
ICE9IC0xKSB7CiAgICAgICAgIHN3aXRjaCAob3B0KSB7CiAgICAgICAgIGNhc2UgJ2MnOgogICAg
ICAgICAgICAgY2hlY2twb2ludCA9IDE7CiAgICAgICAgICAgICBicmVhazsKLSAgICAgICAgY2Fz
ZSAnaCc6CisJY2FzZSAnaCc6CiAgICAgICAgICAgICBoZWxwKCJzYXZlIik7CiAgICAgICAgICAg
ICByZXR1cm4gMDsKICAgICAgICAgZGVmYXVsdDoKICAgICAgICAgICAgIGZwcmludGYoc3RkZXJy
LCAib3B0aW9uIGAlYycgbm90IHN1cHBvcnRlZC5cbiIsIG9wdG9wdCk7CiAgICAgICAgICAgICBi
cmVhazsKICAgICAgICAgfQogICAgIH0KIApAQCAtMjk0NCwxNiArMzEwOSw1MSBAQCBpbnQgbWFp
bl9zYXZlKGludCBhcmdjLCBjaGFyICoqYXJndikKIAogICAgIHAgPSBhcmd2W29wdGluZF07CiAg
ICAgZmlsZW5hbWUgPSBhcmd2W29wdGluZCArIDFdOwogICAgIGNvbmZpZ19maWxlbmFtZSA9IGFy
Z3Zbb3B0aW5kICsgMl07CiAgICAgc2F2ZV9kb21haW4ocCwgZmlsZW5hbWUsIGNoZWNrcG9pbnQs
IGNvbmZpZ19maWxlbmFtZSk7CiAgICAgcmV0dXJuIDA7CiB9CiAKK2ludCBtYWluX3NhdmUyKGlu
dCBhcmdjLCBjaGFyICoqYXJndikKK3sKKyAgY29uc3QgY2hhciAqc2YxID0gTlVMTCwgKnNmMiA9
IE5VTEwsICpzZjMgPSBOVUxMLCAqcCA9IE5VTEw7CisgICAgY29uc3QgY2hhciAqY29uZmlnX2Zp
bGVuYW1lOworICAgIGludCBjaGVja3BvaW50ID0gMDsKKyAgICBpbnQgb3B0OworCisgICAgd2hp
bGUgKChvcHQgPSBnZXRvcHQoYXJnYywgYXJndiwgImhjIikpICE9IC0xKSB7CisgICAgICAgIHN3
aXRjaCAob3B0KSB7CisgICAgICAgIGNhc2UgJ2MnOgorICAgICAgICAgICAgY2hlY2twb2ludCA9
IDE7CisgICAgICAgICAgICBicmVhazsKKwljYXNlICdoJzoKKyAgICAgICAgICAgIGhlbHAoInNh
dmUyIik7CisgICAgICAgICAgICByZXR1cm4gMDsKKyAgICAgICAgZGVmYXVsdDoKKyAgICAgICAg
ICAgIGZwcmludGYoc3RkZXJyLCAib3B0aW9uIGAlYycgbm90IHN1cHBvcnRlZC5cbiIsIG9wdG9w
dCk7CisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgfQorICAgIH0KKworICAgIGlmIChhcmdj
LW9wdGluZCA8IDMgfHwgYXJnYy1vcHRpbmQgPiA1KSB7CisgICAgICAgIGhlbHAoInNhdmUyIik7
CisgICAgICAgIHJldHVybiAyOworICAgIH0KKworICAgIHAgPSBhcmd2W29wdGluZF07CisgICAg
c2YxID0gYXJndltvcHRpbmQgKyAxXTsKKyAgICBzZjIgPSBhcmd2W29wdGluZCArIDJdOworICAg
IHNmMyA9IGFyZ3Zbb3B0aW5kICsgM107CisgICAgY29uZmlnX2ZpbGVuYW1lID0gYXJndltvcHRp
bmQgKyA0XTsKKyAgICBzYXZlX2RvbWFpbjIocCwgc2YxLCBzZjIsIHNmMywgY2hlY2twb2ludCwg
Y29uZmlnX2ZpbGVuYW1lKTsKKyAgICByZXR1cm4gMDsKK30KKwogaW50IG1haW5fbWlncmF0ZShp
bnQgYXJnYywgY2hhciAqKmFyZ3YpCiB7CiAgICAgY29uc3QgY2hhciAqcCA9IE5VTEw7CiAgICAg
Y29uc3QgY2hhciAqY29uZmlnX2ZpbGVuYW1lID0gTlVMTDsKICAgICBjb25zdCBjaGFyICpzc2hf
Y29tbWFuZCA9ICJzc2giOwogICAgIGNoYXIgKnJ1bmUgPSBOVUxMOwogICAgIGNoYXIgKmhvc3Q7
CiAgICAgaW50IG9wdCwgZGFlbW9uaXplID0gMSwgZGVidWcgPSAwOwpkaWZmIC0tZXhjbHVkZT0u
c3ZuIC0tZXhjbHVkZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4x
LjItYS90b29scy9saWJ4bC94bF9jbWR0YWJsZS5jIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hs
X2NtZHRhYmxlLmMKLS0tIHhlbi00LjEuMi1hL3Rvb2xzL2xpYnhsL3hsX2NtZHRhYmxlLmMJMjAx
MS0xMC0yMSAwMTowNTo0My4wMDAwMDAwMDAgKzA4MDAKKysrIHhlbi00LjEuMi1iL3Rvb2xzL2xp
YnhsL3hsX2NtZHRhYmxlLmMJMjAxMi0xMi0yOCAxNjo1OTozOC4wMDc5MzQ5NDIgKzA4MDAKQEAg
LTEwMCwxNiArMTAwLDIzIEBAIHN0cnVjdCBjbWRfc3BlYyBjbWRfdGFibGVbXSA9IHsKICAgICB9
LAogICAgIHsgInNhdmUiLAogICAgICAgJm1haW5fc2F2ZSwKICAgICAgICJTYXZlIGEgZG9tYWlu
IHN0YXRlIHRvIHJlc3RvcmUgbGF0ZXIiLAogICAgICAgIltvcHRpb25zXSA8RG9tYWluPiA8Q2hl
Y2twb2ludEZpbGU+IFs8Q29uZmlnRmlsZT5dIiwKICAgICAgICItaCAgUHJpbnQgdGhpcyBoZWxw
LlxuIgogICAgICAgIi1jICBMZWF2ZSBkb21haW4gcnVubmluZyBhZnRlciBjcmVhdGluZyB0aGUg
c25hcHNob3QuIgogICAgIH0sCisgICAgeyAic2F2ZTIiLAorICAgICAgJm1haW5fc2F2ZTIsCisg
ICAgICAiU2F2ZSBhIGRvbWFpbiBzdGF0ZSBhcyB0aHJlZSBzZXBlcmF0ZWQgZmlsZXMgdG8gcmVz
dG9yZSBsYXRlciIsCisgICAgICAiW29wdGlvbnNdIDxEb21haW4+IDxDaGVja3BvaW50RmlsZTE+
IDxDaGVja3BvaW50RmlsZTI+IDxDaGVja3BvaW50RmlsZTM+IFs8Q29uZmlnRmlsZT5dIiwKKyAg
ICAgICItaCAgUHJpbnQgdGhpcyBoZWxwLlxuIgorICAgICAgIi1jICBMZWF2ZSBkb21haW4gcnVu
bmluZyBhZnRlciBjcmVhdGluZyB0aGUgc25hcHNob3QuIgorICAgIH0sCiAgICAgeyAibWlncmF0
ZSIsCiAgICAgICAmbWFpbl9taWdyYXRlLAogICAgICAgIlNhdmUgYSBkb21haW4gc3RhdGUgdG8g
cmVzdG9yZSBsYXRlciIsCiAgICAgICAiW29wdGlvbnNdIDxEb21haW4+IDxob3N0PiIsCiAgICAg
ICAiLWggICAgICAgICAgICAgIFByaW50IHRoaXMgaGVscC5cbiIKICAgICAgICItQyA8Y29uZmln
PiAgICAgU2VuZCA8Y29uZmlnPiBpbnN0ZWFkIG9mIGNvbmZpZyBmaWxlIGZyb20gY3JlYXRpb24u
XG4iCiAgICAgICAiLXMgPHNzaGNvbW1hbmQ+IFVzZSA8c3NoY29tbWFuZD4gaW5zdGVhZCBvZiBz
c2guICBTdHJpbmcgd2lsbCBiZSBwYXNzZWRcbiIKICAgICAgICIgICAgICAgICAgICAgICAgdG8g
c2guIElmIGVtcHR5LCBydW4gPGhvc3Q+IGluc3RlYWQgb2Ygc3NoIDxob3N0PiB4bFxuIgpAQCAt
MTI2LDE2ICsxMzMsMjUgQEAgc3RydWN0IGNtZF9zcGVjIGNtZF90YWJsZVtdID0gewogICAgICAg
Jm1haW5fcmVzdG9yZSwKICAgICAgICJSZXN0b3JlIGEgZG9tYWluIGZyb20gYSBzYXZlZCBzdGF0
ZSIsCiAgICAgICAiW29wdGlvbnNdIFs8Q29uZmlnRmlsZT5dIDxDaGVja3BvaW50RmlsZT4iLAog
ICAgICAgIi1oICBQcmludCB0aGlzIGhlbHAuXG4iCiAgICAgICAiLXAgIERvIG5vdCB1bnBhdXNl
IGRvbWFpbiBhZnRlciByZXN0b3JpbmcgaXQuXG4iCiAgICAgICAiLWUgIERvIG5vdCB3YWl0IGlu
IHRoZSBiYWNrZ3JvdW5kIGZvciB0aGUgZGVhdGggb2YgdGhlIGRvbWFpbi5cbiIKICAgICAgICIt
ZCAgRW5hYmxlIGRlYnVnIG1lc3NhZ2VzLiIKICAgICB9LAorICAgIHsgInJlc3RvcmUyIiwKKyAg
ICAgICZtYWluX3Jlc3RvcmUyLAorICAgICAgIlJlc3RvcmUgYSBkb21haW4gZnJvbSBhIHNhdmVk
IHN0YXRlIiwKKyAgICAgICJbb3B0aW9uc10gWzxDb25maWdGaWxlPl0gPENoZWNrcG9pbnRGaWxl
MT4gPENoZWNrcG9pbnRGaWxlMj4gPENoZWNrcG9pbnRGaWxlMz4iLAorICAgICAgIi1oICBQcmlu
dCB0aGlzIGhlbHAuXG4iCisgICAgICAiLXAgIERvIG5vdCB1bnBhdXNlIGRvbWFpbiBhZnRlciBy
ZXN0b3JpbmcgaXQuXG4iCisgICAgICAiLWUgIERvIG5vdCB3YWl0IGluIHRoZSBiYWNrZ3JvdW5k
IGZvciB0aGUgZGVhdGggb2YgdGhlIGRvbWFpbi5cbiIKKyAgICAgICItZCAgRW5hYmxlIGRlYnVn
IG1lc3NhZ2VzLiIKKyAgICB9LAogICAgIHsgIm1pZ3JhdGUtcmVjZWl2ZSIsCiAgICAgICAmbWFp
bl9taWdyYXRlX3JlY2VpdmUsCiAgICAgICAiUmVzdG9yZSBhIGRvbWFpbiBmcm9tIGEgc2F2ZWQg
c3RhdGUiLAogICAgICAgIi0gZm9yIGludGVybmFsIHVzZSBvbmx5IiwKICAgICB9LAogICAgIHsg
ImNkLWluc2VydCIsCiAgICAgICAmbWFpbl9jZF9pbnNlcnQsCiAgICAgICAiSW5zZXJ0IGEgY2Ry
b20gaW50byBhIGd1ZXN0J3MgY2QgZHJpdmUiLApkaWZmIC0tZXhjbHVkZT0uc3ZuIC0tZXhjbHVk
ZT0nKi5yZWonIC0tZXhjbHVkZT0nKi5vcmlnJyAtcnBOIC1VOCB4ZW4tNC4xLjItYS90b29scy9s
aWJ4bC94bC5oIHhlbi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hsLmgKLS0tIHhlbi00LjEuMi1hL3Rv
b2xzL2xpYnhsL3hsLmgJMjAxMS0xMC0yMSAwMTowNTo0My4wMDAwMDAwMDAgKzA4MDAKKysrIHhl
bi00LjEuMi1iL3Rvb2xzL2xpYnhsL3hsLmgJMjAxMi0xMi0yOCAxNjo1OTozOC4wMjA4Mjg1MTMg
KzA4MDAKQEAgLTMxLDE4ICszMSwyMCBAQCBpbnQgbWFpbl9jZF9lamVjdChpbnQgYXJnYywgY2hh
ciAqKmFyZ3YpCiBpbnQgbWFpbl9jZF9pbnNlcnQoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGlu
dCBtYWluX2NvbnNvbGUoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX3ZuY3ZpZXdl
cihpbnQgYXJnYywgY2hhciAqKmFyZ3YpOwogaW50IG1haW5fcGNpbGlzdChpbnQgYXJnYywgY2hh
ciAqKmFyZ3YpOwogaW50IG1haW5fcGNpbGlzdF9hc3NpZ25hYmxlKGludCBhcmdjLCBjaGFyICoq
YXJndik7CiBpbnQgbWFpbl9wY2lkZXRhY2goaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBt
YWluX3BjaWF0dGFjaChpbnQgYXJnYywgY2hhciAqKmFyZ3YpOwogaW50IG1haW5fcmVzdG9yZShp
bnQgYXJnYywgY2hhciAqKmFyZ3YpOworaW50IG1haW5fcmVzdG9yZTIoaW50IGFyZ2MsIGNoYXIg
Kiphcmd2KTsKIGludCBtYWluX21pZ3JhdGVfcmVjZWl2ZShpbnQgYXJnYywgY2hhciAqKmFyZ3Yp
OwogaW50IG1haW5fc2F2ZShpbnQgYXJnYywgY2hhciAqKmFyZ3YpOworaW50IG1haW5fc2F2ZTIo
aW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX21pZ3JhdGUoaW50IGFyZ2MsIGNoYXIg
Kiphcmd2KTsKIGludCBtYWluX2R1bXBfY29yZShpbnQgYXJnYywgY2hhciAqKmFyZ3YpOwogaW50
IG1haW5fcGF1c2UoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX3VucGF1c2UoaW50
IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX2Rlc3Ryb3koaW50IGFyZ2MsIGNoYXIgKiph
cmd2KTsKIGludCBtYWluX3NodXRkb3duKGludCBhcmdjLCBjaGFyICoqYXJndik7CiBpbnQgbWFp
bl9yZWJvb3QoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsKIGludCBtYWluX2xpc3QoaW50IGFyZ2Ms
IGNoYXIgKiphcmd2KTsK
--047d7b62430aee63c904d1e642bf
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b62430aee63c904d1e642bf--


From xen-devel-bounces@lists.xen.org Fri Dec 28 09:41:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 09:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToWQr-0001UQ-UP; Fri, 28 Dec 2012 09:40:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>)
	id 1ToWQq-0001UG-6b; Fri, 28 Dec 2012 09:40:48 +0000
Received: from [85.158.143.35:3064] by server-2.bemta-4.messagelabs.com id
	E8/D8-30861-F196DD05; Fri, 28 Dec 2012 09:40:47 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356687642!16971755!1
X-Originating-IP: [220.181.15.62]
X-SpamReason: No, hits=2.7 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDEyMTMw\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjYyID0+IDEyMTMw\n,HTML_20_30,HTML_MESSAGE,
	MANY_EXCLAMATIONS,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6453 invoked from network); 28 Dec 2012 09:40:44 -0000
Received: from m15-62.126.com (HELO m15-62.126.com) (220.181.15.62)
	by server-13.tower-21.messagelabs.com with SMTP;
	28 Dec 2012 09:40:44 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=hvh+zcyjqkXI8RuA1HVhYy72iVy0lKet/RPt
	FGQ+xAM=; b=X8fKc6FNb23JaU0eubY6FLuhr7WCR1yLCJpbxZXLjmSs3WxzqD2p
	qHfyLVQFkU2yBFyFjUQf9Aosbr/nk7TMJDFG8WdsvJgS6Sq5knyScVjEx5dWiI8z
	b97xxTbj5SF6o3jOe6drrvOqxVS1bpIXJxJzW0wMV3VK4TuhHc4Vg9A=
Received: from hxkhust$126.com ( [59.172.234.171] ) by ajax-webmail-wmsvr62
	(Coremail) ; Fri, 28 Dec 2012 17:40:40 +0800 (CST)
X-Originating-IP: [59.172.234.171]
Date: Fri, 28 Dec 2012 17:40:40 +0800 (CST)
From: hxkhust <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, 
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121219(21170.5156.5150) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: s4vl8GZvb3Rlcl9odG09MzM1Nzo4MQ==
MIME-Version: 1.0
Message-ID: <3db23816.2bb76.13be0e28820.Coremail.hxkhust@126.com>
X-CM-TRANSID: PsqowEBJckQZad1Q7sotAA--.2665W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbitRGTBUX9kQOtdwABsD
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!help!Problem with qcow2 image during a PVM's
	setting up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1519574698011099333=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1519574698011099333==
Content-Type: multipart/alternative; 
	boundary="----=_Part_676919_46794321.1356687640607"

------=_Part_676919_46794321.1356687640607
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

The following is what I did.
1) dd if=/dev/zero of=centos_raw.img bs=1024 count=8000000
2) installed CentOS 5.5 x64 in the image file centos_raw.img; the corresponding config file is:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
#maxmem = 768
name = "centos_raw_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:CE:EF,bridge=eth0']
disk = ['tap:aio:/home/pvm/centos_raw.img,xvda,w']
root = "/dev/xvda1 ro"
on_reboot = 'restart'
on_crash = 'restart'
3) qemu-img-xen create -b centos_raw.img -f qcow2 centos_raw_qcow2_1.img 5G
4) ran xm create centos_raw_qcow2_1.cfg, where centos_raw_qcow2_1.cfg looks like this:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
name = "centos_raw_qcow2_1_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:C1:EF,bridge=eth0']
disk = ['tap:qcow2:/home/pvm/centos_raw_qcow2_1.img,xvda,w']
root = "/dev/xvda1 ro"
on_reboot = 'restart'
on_crash = 'destroy'
 
However, it failed. The error message after the above command is entered is:
Using config file "./centos_raw_qcow2_1.cfg".
Error: Device 51712 (tap) could not be connected.Setting up the backend failed. See the log files in /var/log/xen/ for details.
 
So I tried another way.
After installing CentOS in centos_raw.img, I did the following:
qemu-img-xen convert -O qcow2 centos_raw.img centos_qcow2.img
This image file centos_qcow2.img runs normally with the config file:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
name = "centos_raw_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:CE:EF,bridge=eth0']
disk = ['tap:qcow2:/home/pvm/centos_qcow2.img,xvda,w']
root = "/dev/xvda1 ro"
on_reboot = 'restart'
on_crash = 'restart'
 
Then I ran the command below:
qemu-img-xen create -b centos_qcow2.img -f qcow2 centos_qcow2_qcow2.img 5G
and edited the config file centos_qcow2_qcow2.cfg:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
name = "centos_qcow2_qcow2_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:CE:1F,bridge=eth0']
boot="c"
disk = ['tap:qcow2:/home/pvm/centos_qcow2_qcow2.img,sda,w']
root = "/dev/sda1 ro"
on_reboot = 'restart'
on_crash = 'destroy'
 
and ran the command:
xm create centos_qcow2_qcow2.cfg
 
but the output was:
Using config file "./centos_qcow2_qcow2.cfg".
Error: Device 2048 (tap) could not be connected.Setting up the backend failed. See the log files in /var/log/xen/ for details.
 
I need to run a para-virtualized machine whose image file is in qcow2 format and is based on another image file. What can I do about this? I need your help.
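As an aside, the device numbers in the two error messages are the Linux block-device numbers of the requested targets, which is a quick way to confirm which disk entry failed. A small decoding sketch (the majors are the standard Linux ones for xvd* and sd*; this is an illustration, not output from the Xen logs):

```python
# Decode the virtual-device number reported in the Xen error messages:
# in the classic encoding it is (major << 8) | minor.
# xvd* devices use major 202, sd* devices use major 8.

def decode_devnum(devnum):
    """Return (major, minor) from a 16-bit Linux device number."""
    return devnum >> 8, devnum & 0xFF

# "Device 51712 (tap)" from the xvda config: 51712 == 202 << 8
assert decode_devnum(51712) == (202, 0)   # major 202 -> /dev/xvda

# "Device 2048 (tap)" from the sda config: 2048 == 8 << 8
assert decode_devnum(2048) == (8, 0)      # major 8 -> /dev/sda
```

In other words, both attempts name the intended device correctly; the failure is inside the blktap backend itself, so /var/log/xen/ (xend.log and the tapdisk logs) is where the real cause will show up.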
------=_Part_676919_46794321.1356687640607--



--===============1519574698011099333==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1519574698011099333==--



From xen-devel-bounces@lists.xen.org Fri Dec 28 10:36:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 10:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToXIL-0003Hb-UY; Fri, 28 Dec 2012 10:36:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1ToXIL-0003HW-D1
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 10:36:05 +0000
Received: from [85.158.137.99:20563] by server-16.bemta-3.messagelabs.com id
	CB/28-27634-4167DD05; Fri, 28 Dec 2012 10:36:04 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-217.messagelabs.com!1356690962!15720848!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQ4OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24551 invoked from network); 28 Dec 2012 10:36:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 10:36:04 -0000
X-IronPort-AV: E=Sophos;i="4.84,368,1355097600"; 
   d="scan'208";a="1969079"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	28 Dec 2012 10:35:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 28 Dec 2012 05:35:58 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1ToXIE-0002xP-43;
	Fri, 28 Dec 2012 10:35:58 +0000
Message-ID: <1356690956.2917.7.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Rohit Damkondwar <genius.rsd@gmail.com>
Date: Fri, 28 Dec 2012 10:35:56 +0000
In-Reply-To: <CAHEEu857jjBLACY=MNKzqEkpGbjXw1cbfqy9AOyR5fVsuOmagg@mail.gmail.com>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
	<1356611599.19238.13.camel@iceland>
	<CAHEEu857jjBLACY=MNKzqEkpGbjXw1cbfqy9AOyR5fVsuOmagg@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
 interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-28 at 07:46 +0000, Rohit Damkondwar wrote:
> On Thu, Dec 27, 2012 at 6:03 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:
>         On Thu, 2012-12-27 at 08:46 +0000, Rohit Damkondwar wrote:
>         > Hi all. I want to set bandwidth limits on a virtual
>         > interface dynamically (without restarting the virtual
>         > machine). I have been browsing the Xen 4.1.3 source code. I
>         > looked into the libxen folder (xen_vif.c) and the hotplug
>         > (linux) folder. Earlier, in Xen 3.0, the xenvif structure
>         > (driver/net/xen-netback/interface.c + common.h) and the
>         > tx_add_credit function could be used to modify rate limits.
>         > I want to change the bandwidth limits of a virtual interface
>         > dynamically in Xen 4.1.3. Where should I look in Xen 4.1.3?
>         >
>         > Please help.
>         >
>         
>         
>         The Xen vif has a parameter called 'rate'; I don't know
>         whether it suits you.
>         
>         
> The rate parameter only restricts one-way traffic (probably only
> outgoing). 
>  

Yes, you're right. So you need two-way shaping.
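For reference, the vif 'rate' limiter discussed here is a simple credit (token-bucket) scheme applied to transmit traffic only. A minimal sketch of the idea (hypothetical names; this is not the actual netback tx_add_credit code):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the idea behind vif 'rate'."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.credit = burst_bytes          # start with a full bucket
        self.last = time.monotonic()

    def _replenish(self):
        # Credit accrues with elapsed time, capped at the burst size.
        now = time.monotonic()
        self.credit = min(self.burst,
                          self.credit + self.rate * (now - self.last))
        self.last = now

    def allow(self, nbytes):
        """True if a packet of nbytes may be sent now."""
        self._replenish()
        if self.credit >= nbytes:
            self.credit -= nbytes
            return True
        return False

# 1000 B/s with a 1500-byte burst: the first full-size packet passes,
# an immediate second one is held back until credit accumulates.
tb = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
assert tb.allow(1500)
assert not tb.allow(1500)
```

Note that a limiter like this only shapes the direction in which it meters credit, which is why shaping the other direction needs something extra, such as ingress policing with tc(8) on the vif in Dom0.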

> 
>         Also, you can have a look at an external tool like tc(8). My
>         vague thought is that a vif is just another interface in
>         Dom0, so tc(8) should be able to traffic-shape it. 
> 
> 
> Don't you think using an external tool may decrease efficiency? If
> Xen itself had these capabilities (as provided by the tc tool),
> wouldn't it be more efficient? 
> 

Do you see significant performance degradation when using tc(8) or any
similar tool? If so, do report it with figures; that can help us
improve.

> I have used this tool. It is good. It serves my purpose. But wouldn't
> it be better to include the bandwidth-limiting capabilities in Xen
> itself? I am not sure about this. Currently I am just browsing through
> the source code. What do you think?
> 

TBH I'm not sure about this either. Again, comparisons and analysis of
the bottleneck would be helpful.

> 
> I have seen the function "set_qos_algorithm_type" and the parameters
> (qos/algorithm type, qos/algorithm params, qos/supported algorithms)
> in the vif class. Would they be useful? Are they available only for
> Xen Enterprise?
> 

Do you see those in the libxen source code? I don't think they are in
use now.


Wei.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 10:42:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 10:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToXO2-0003Ut-ON; Fri, 28 Dec 2012 10:41:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1ToXO1-0003Ul-1U
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 10:41:57 +0000
Received: from [85.158.139.83:44373] by server-6.bemta-5.messagelabs.com id
	A6/19-30498-4777DD05; Fri, 28 Dec 2012 10:41:56 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1356691309!29415530!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQ4OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3078 invoked from network); 28 Dec 2012 10:41:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 10:41:51 -0000
X-IronPort-AV: E=Sophos;i="4.84,368,1355097600"; 
   d="scan'208";a="1969310"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	28 Dec 2012 10:41:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 28 Dec 2012 05:41:33 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1ToXNd-000323-Dj;
	Fri, 28 Dec 2012 10:41:33 +0000
Message-ID: <1356691292.2917.9.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Tushar Behera <tushar.behera@linaro.org>
Date: Fri, 28 Dec 2012 10:41:32 +0000
In-Reply-To: <50DD2AFB.6030701@linaro.org>
References: <1353048646-10935-1-git-send-email-tushar.behera@linaro.org>
	<1353048646-10935-9-git-send-email-tushar.behera@linaro.org>
	<1353057394.3499.159.camel@zakaz.uk.xensource.com>
	<50DD2AFB.6030701@linaro.org>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	wei.liu2@citrix.com, Ian Campbell <Ian.Campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>, konrad.wilk@oracle.com,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 08/14] xen: netback: Remove redundant check
 on unsigned variable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-28 at 05:15 +0000, Tushar Behera wrote:
> On 11/16/2012 02:46 PM, Ian Campbell wrote:
> > On Fri, 2012-11-16 at 06:50 +0000, Tushar Behera wrote:
> >> No need to check whether unsigned variable is less than 0.
> >>
> >> CC: Ian Campbell <ian.campbell@citrix.com>
> >> CC: xen-devel@lists.xensource.com
> >> CC: netdev@vger.kernel.org
> >> Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Thanks.
> > 
> 
> This patch was not picked up for 3.8-rc1. Any idea, who should pick this up?

CC'ing Konrad.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 10:46:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 10:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToXSI-0003dC-EL; Fri, 28 Dec 2012 10:46:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1ToXSG-0003d6-Tr
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 10:46:21 +0000
Received: from [85.158.137.99:23672] by server-2.bemta-3.messagelabs.com id
	34/C0-11239-B787DD05; Fri, 28 Dec 2012 10:46:19 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1356691577!14857720!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTQ4OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11308 invoked from network); 28 Dec 2012 10:46:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 10:46:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,368,1355097600"; 
   d="scan'208";a="1969492"
Received: from unknown (HELO FTLPEX01CL02.citrite.net) ([10.13.107.79])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	28 Dec 2012 10:46:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.318.1;
	Fri, 28 Dec 2012 05:46:16 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1ToXSC-000365-Da;
	Fri, 28 Dec 2012 10:46:16 +0000
Message-ID: <1356691575.2917.13.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Fri, 28 Dec 2012 10:46:15 +0000
In-Reply-To: <CA+ePHTC4Fq0sqb5ie+YBiC5Ft_q6O2PkJxYqGXxbBbHSHxfbOA@mail.gmail.com>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
	<CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
	<1356612077.19238.20.camel@iceland>
	<CA+ePHTC4Fq0sqb5ie+YBiC5Ft_q6O2PkJxYqGXxbBbHSHxfbOA@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	wei.liu2@citrix.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
 =?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_error?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-12-28 at 03:13 +0000, 马磊 wrote:
> 
> 
> On Thu, Dec 27, 2012 at 8:41 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:
>         On Thu, 2012-12-27 at 02:12 +0000, 马磊 wrote:
>         >
>         > I got it, but the error `xc: error: do_evtchn_op:
>         > HYPERVISOR_event_channel_op failed: -1 (3 = No such process):
>         > Internal error.` said no such process; the system error
>         > description didn't seem to have anything to do with the
>         > following lines which raised it:
>         >  85    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
>         >  86    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
>         
>         The error code -1 is -EPERM, which means you don't have
>         permission to issue this operation. I don't think this is a
>         bug. There might be some problems with your setup.
>         
>         If you need any pointers for reading the source code, I will
>         be happy to help.
>         
>         Wei.
>         
>     Thanks for your kindness!
> 
>     I looked into the functions for logging; in this case, `3 = No
> such process` was from `errno`, and the `HYPERVISOR_event_channel_op
> failed: -1` was from a hypervisor-level error
> (src/xen/common/event_channel.c).
> In my opinion, that's to say, error number -1 was caused by the
> hypervisor; but what was error number 3 caused by, dom0?
> Do both error numbers refer to the descriptions defined in errno.h,
> or does the hypervisor have its own error descriptions?
> 

I think the two files are mostly the same, but to be sure you need to
look into the source files in both Linux and Xen. You should start from
the hypervisor level and find out why it returns -EPERM. Root user in
Dom0 has nothing to do with privilege at the hypervisor level.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 13:01:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 13:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToZY8-0006B4-53; Fri, 28 Dec 2012 13:00:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bp@alien8.de>) id 1ToZY6-0006Az-5F
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 13:00:30 +0000
Received: from [85.158.139.83:14175] by server-12.bemta-5.messagelabs.com id
	16/ED-02275-DE79DD05; Fri, 28 Dec 2012 13:00:29 +0000
X-Env-Sender: bp@alien8.de
X-Msg-Ref: server-2.tower-182.messagelabs.com!1356699627!29822763!1
X-Originating-IP: [78.46.96.112]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD,
	ML_RADAR_SPEW_LINKS_22,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4664 invoked from network); 28 Dec 2012 13:00:27 -0000
Received: from mail.skyhub.de (HELO mail.skyhub.de) (78.46.96.112)
	by server-2.tower-182.messagelabs.com with SMTP;
	28 Dec 2012 13:00:27 -0000
X-Virus-Scanned: Nedap ESD1 at mail.skyhub.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=alien8.de; s=alien8;
	t=1356699567; bh=SUVI1HBtdzKu7rL1utPFTBBkJZo7SCoCGSzOwAgKwY4=;
	h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version:
	Content-Type:In-Reply-To; b=Xd/9mcCpkJk+OMWB3lGUd8QcbAUHtIVjrEurK5
	s1u0Xpcg4Ii+nFfJzC+784QT32fntsaDoPeVKIE78NXoTbJy22Fm/q1zxqEJwrtlo84
	uURfeV5LKhfU66TQwd8eSIgHntnF8UKXJp5VQZArxB6WdNVg3fcD6akyit2eeBS4Kg=
Received: from mail.skyhub.de ([127.0.0.1])
	by localhost (door.skyhub.de [127.0.0.1]) (amavisd-new, port 10026)
	with ESMTP id iLKd0A+OutQd; Fri, 28 Dec 2012 13:59:27 +0100 (CET)
Received: from x1.localdomain (x1.visitor.congress.ccc.de [151.217.216.231])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id
	57F961D9CD1; Fri, 28 Dec 2012 13:59:27 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=alien8.de; s=alien8;
	t=1356699567; bh=SUVI1HBtdzKu7rL1utPFTBBkJZo7SCoCGSzOwAgKwY4=;
	h=Date:From:To:Cc:Subject:Message-ID:References:MIME-Version:
	Content-Type:In-Reply-To; b=Xd/9mcCpkJk+OMWB3lGUd8QcbAUHtIVjrEurK5
	s1u0Xpcg4Ii+nFfJzC+784QT32fntsaDoPeVKIE78NXoTbJy22Fm/q1zxqEJwrtlo84
	uURfeV5LKhfU66TQwd8eSIgHntnF8UKXJp5VQZArxB6WdNVg3fcD6akyit2eeBS4Kg=
Received: by x1.localdomain (Postfix, from userid 1000)
	id A451EC2006; Fri, 28 Dec 2012 13:59:27 +0100 (CET)
Date: Fri, 28 Dec 2012 13:59:27 +0100
From: Borislav Petkov <bp@alien8.de>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <20121228125926.GA13224@x1.alien8.de>
Mail-Followup-To: Borislav Petkov <bp@alien8.de>,
	Daniel Kiper <daniel.kiper@oracle.com>, hpa@zytor.com,
	kexec@lists.infradead.org, xen-devel@lists.xensource.com,
	konrad.wilk@oracle.com, tglx@linutronix.de, ebiederm@xmission.com,
	maxim.uvarov@oracle.com, andrew.cooper3@citrix.com,
	jbeulich@suse.com, mingo@redhat.com, x86@kernel.org,
	virtualization@lists.linux-foundation.org, vgoyal@redhat.com,
	linux-kernel@vger.kernel.org
References: <71b94237-c365-47ba-8d75-5ee1d8caee73@default>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <71b94237-c365-47ba-8d75-5ee1d8caee73@default>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	maxim.uvarov@oracle.com, kexec@lists.infradead.org,
	x86@kernel.org, virtualization@lists.linux-foundation.org,
	mingo@redhat.com, ebiederm@xmission.com, jbeulich@suse.com,
	hpa@zytor.com, tglx@linutronix.de, vgoyal@redhat.com
Subject: Re: [Xen-devel] [PATCH v3 02/11] x86/kexec: Add extra pointers to
 transition page table PGD, PUD, PMD and PTE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 27, 2012 at 03:19:24PM -0800, Daniel Kiper wrote:
> > Hmm... this code is being redone at the moment... this might conflict.
> 
> Is this available somewhere? May I have a look at it?

http://marc.info/?l=linux-kernel&m=135581534620383

The for-x86-boot-v7 and -v8 branches.

HTH.

-- 
Regards/Gruss,
Boris.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 14:33:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 14:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToazX-0007cX-4l; Fri, 28 Dec 2012 14:32:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1ToazV-0007cS-By
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 14:32:53 +0000
Received: from [85.158.143.35:4442] by server-3.bemta-4.messagelabs.com id
	9A/47-18211-49DADD05; Fri, 28 Dec 2012 14:32:52 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356705171!16996111!1
X-Originating-IP: [91.199.104.2]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4493 invoked from network); 28 Dec 2012 14:32:52 -0000
Received: from mail.bitdefender.com (HELO mail.bitdefender.com) (91.199.104.2)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Dec 2012 14:32:52 -0000
Received: (qmail 31637 invoked from network); 28 Dec 2012 16:32:50 +0200
Received: from rcojocaru.dsd.ro (HELO ?10.10.14.59?)
	(rcojocaru@bitdefender.com@10.10.14.59)
	by mail.bitdefender.com with AES256-SHA encrypted SMTP;
	28 Dec 2012 16:32:50 +0200
Message-ID: <50DDADDD.8070806@gmail.com>
Date: Fri, 28 Dec 2012 16:34:05 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-BitDefender-Scanner: Clean, Agent: BitDefender qmail 3.1.0 on
	elfie.dsd.hq, sigver: 7.44564
Subject: [Xen-devel] hvm_emulate_one() usage
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I have a dom0 userspace application that receives mem_events. A mem_event 
is delivered whenever a page fault occurs, and until I clear the page 
access rights I keep receiving the same event in a loop. If I do clear 
the page access rights, I no longer receive mem_events for that page.

What I thought I'd do was to add a new flag to the mem_event response 
(MEM_EVENT_FLAG_EMULATE_WRITE), and have this code execute in 
p2m_mem_access_resume() in xen/arch/x86/mm/p2m.c:

mem_event_get_response(d, &rsp);

if ( rsp.flags & MEM_EVENT_FLAG_EMULATE_WRITE )
{
    struct hvm_emulate_ctxt ctx[1] = {};
    struct vcpu *current_vcpu = current;

    /* Temporarily switch "current" to the vCPU that took the fault. */
    set_current(d->vcpu[rsp.vcpu_id]);

    /* Emulate the faulting instruction in place. */
    hvm_emulate_prepare(ctx, guest_cpu_user_regs());
    hvm_emulate_one(ctx);

    set_current(current_vcpu);
}

The code is supposed to step past the write instruction (without lifting 
the page access restrictions). What it actually achieves is this crash:

(XEN) ----[ Xen-4.1.2  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    6
(XEN) RIP:    e008:[<ffff82c4801bf4ea>] vmx_get_interrupt_shadow+0xa/0x10
(XEN) RFLAGS: 0000000000010203   CONTEXT: hypervisor
(XEN) rax: 0000000000004824   rbx: ffff83013c0c7ba0   rcx: 0000000000000008
(XEN) rdx: 0000000000000005   rsi: ffff83013c0c7f18   rdi: ffff8300bfca8000
(XEN) rbp: ffff83013c0c7f18   rsp: ffff83013c0c7b50   r8:  0000000000000002
(XEN) r9:  0000000000000002   r10: ffff82c48020af40   r11: 0000000000000282
(XEN) r12: ffff8300bfff2000   r13: ffff88012b478b18   r14: 00007fffd669c4c0
(XEN) r15: ffff83013c0c7e48   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000005d6c4000   cr2: 000000000221e538
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83013c0c7b50:
(XEN)    ffff82c4801a1a91 ffff83013f986000 ffff83013f986000 ffff83013c0c7f18
(XEN)    ffff82c4801ce0e1 0000000500050000 000000000003f31a 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff83013f986000 ffff8300bfff2000 ffff83013c0c7e48
(XEN)    ffff82c4801d1d81 ffff8300bfcac000 ffff82c4801d05c5 ffff83013c0c7f18
(XEN)    ffff82c4801a1447 ffff8300bfcac000 0000000000d7e004 0000000000d7e004
(XEN)    ffff83013c0c7e48 ffff88012b478b18 00007fffd669c4c0 ffff83013c0c7e48
(XEN)    ffff82c48014eb79 0000000000000000 000000000005d6f9 ffff82f600badf20
(XEN)    0000000000000000 4000000000000000 ffff82f600badf20 0000000000000000
(XEN)    ffff88012fc0b928 0000000000000001 ffff82c48016bc4b ffff82f600badf20
(XEN)    ffff82c48016c0b8 ffff83013c0ac000 ffff83013c0ac000 ffff82f600bb1940
(XEN)    000000000000000f ffff83013c0c7f18 ffff83013c0ac000 ffff82f600bb1940
(XEN)    fffffffffffffff3 0000000000d7e004 ffff83013c0c7e48 ffff88012b478b18
(XEN) Xen call trace:
(XEN)    [<ffff82c4801bf4ea>] vmx_get_interrupt_shadow+0xa/0x10
(XEN)    [<ffff82c4801a1a91>] hvm_emulate_prepare+0x31/0x80
(XEN)    [<ffff82c4801ce0e1>] p2m_mem_access_resume+0xe1/0x120
(XEN)    [<ffff82c4801d1d81>] mem_access_domctl+0x21/0x30
(XEN)    [<ffff82c4801d05c5>] mem_event_domctl+0x295/0x3b0
(XEN)    [<ffff82c4801a1447>] hvmemul_do_pio+0x27/0x30
(XEN)    [<ffff82c48014eb79>] arch_do_domctl+0x2e9/0x28a0
(XEN)    [<ffff82c48016bc4b>] get_page_type+0xb/0x20
(XEN)    [<ffff82c48016c0b8>] get_page_and_type_from_pagenr+0x78/0xe0
(XEN)    [<ffff82c4801025bb>] do_domctl+0xfb/0x10b0
(XEN)    [<ffff82c4801f2fa6>] ept_get_entry+0x136/0x250
(XEN)    [<ffff82c480180965>] copy_to_user+0x25/0x70
(XEN)    [<ffff82c4801f8778>] syscall_enter+0x88/0x8d
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 6:
(XEN) FATAL TRAP: vector = 6 (invalid opcode)
(XEN) ****************************************

I could find no documentation on either the hvm_*() or the CPU-related 
functions. The hvm_emulate_prepare() call is obviously what crashes the 
hypervisor, most likely because of the guest_cpu_user_regs() parameter: 
"regs" is not passed to p2m_mem_access_resume() (the way it is passed 
to p2m_mem_access_check()). I would appreciate your help in figuring 
out how to implement this.

Thanks, and happy holidays,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 14:49:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 14:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TobFW-0007rL-NK; Fri, 28 Dec 2012 14:49:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Matthew.Fioravante@jhuapl.edu>)
	id 1TobFV-0007rD-4w; Fri, 28 Dec 2012 14:49:25 +0000
Received: from [193.109.254.147:57825] by server-5.bemta-14.messagelabs.com id
	07/C2-32031-471BDD05; Fri, 28 Dec 2012 14:49:24 +0000
X-Env-Sender: Matthew.Fioravante@jhuapl.edu
X-Msg-Ref: server-2.tower-27.messagelabs.com!1356706160!9737046!1
X-Originating-IP: [128.244.251.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27405 invoked from network); 28 Dec 2012 14:49:22 -0000
Received: from piper.jhuapl.edu (HELO piper.jhuapl.edu) (128.244.251.37)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Dec 2012 14:49:22 -0000
Received: from aplexcas1.dom1.jhuapl.edu (unknown [128.244.198.90]) by
	piper.jhuapl.edu with smtp (TLS: TLSv1/SSLv3,128bits,RC4-MD5)
	id 3b02_a5ee_af38625d_7169_4548_9d97_441d854d721e;
	Fri, 28 Dec 2012 09:48:39 -0500
Received: from aplesstripe.dom1.jhuapl.edu ([128.244.198.211]) by
	aplexcas1.dom1.jhuapl.edu ([128.244.198.90]) with mapi; Fri, 28 Dec 2012
	09:48:37 -0500
From: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
To: Pasi Kärkkäinen <pasik@iki.fi>
Date: Fri, 28 Dec 2012 09:47:22 -0500
Thread-Topic: [Xen-devel] How to use the vTPM backend driver in the pv-ops
	kernel
Thread-Index: Ac3kVBgA2JAxssVBRYqDWVSJ4Uy5OgAtibHk
Message-ID: <068F06DC4D106941B297C0C5F9F446EA48D30B4E79@aplesstripe.dom1.jhuapl.edu>
References: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
	<20121222220407.GP8912@reaktio.net>
	<8fe6315.6fa2.13bc7dd8763.Coremail.gbtux@126.com>
	<068F06DC4D106941B297C0C5F9F446EA48D30B4E78@aplesstripe.dom1.jhuapl.edu>,
	<20121227170322.GS8912@reaktio.net>
In-Reply-To: <20121227170322.GS8912@reaktio.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>, gavin <gbtux@126.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

No, the vtpm manager is implemented in mini-os, and mini-os now has a driver
for direct access to the physical TPM.
dom0 no longer takes any part in the vtpm communication paths.
________________________________________
From: Pasi Kärkkäinen [pasik@iki.fi]
Sent: Thursday, December 27, 2012 12:03 PM
To: Fioravante, Matthew E.
Cc: gavin; xen-users@lists.xen.org; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops kernel

On Thu, Dec 27, 2012 at 10:27:33AM -0500, Fioravante, Matthew E. wrote:
> The frontend driver is currently being ported to the latest kernel. You can
> find the patch cross-listed here as well as on the Linux kernel mailing list.
>
> I have no plans to port the backend driver. If you need it, you'll have to
> get it from the 2.6.18 kernel and port it yourself.
>

Hmm.. are you still using 2.6.18 kernel in dom0 yourself?

-- Pasi

> ________________________________________
> From: xen-devel-bounces@lists.xen.org [xen-devel-bounces@lists.xen.org] On Behalf Of gavin [gbtux@126.com]
> Sent: Sunday, December 23, 2012 8:04 AM
> To: Pasi Kärkkäinen
> Cc: xen-users@lists.xen.org; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops kernel
>
> Hi Pasi,
>
> Thank you very much for your information.
>
> Best Regards,
> Gavin
>
> At 2012-12-23 06:04:08,"Pasi Kärkkäinen" <pasik@iki.fi> wrote:
>
> >On Sun, Dec 23, 2012 at 01:50:16AM +0800, gavin wrote:
> >>     Hi,
> >>
> >>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in the
> >>    config file of the pv-ops kernel, such as kernel 2.6.32.50. However, this
> >>    option exists in the config file of kernel version 2.6.18.8. I also cannot
> >>    find the vTPM backend driver (such as
> >>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
> >>    So, how can I configure and use the vTPM backend driver in kernel 2.6.32?
> >>    Thank you for any advice.
> >>
> >
> >I don't think vtpm drivers were ported to 2.6.32 pvops.
> >Recently there has been work on porting the drivers to upstream Linux 3.x,
> >but they aren't merged yet iirc.
> >
> >If you need to use them with 2.6.32 you need to port them yourself..
> >
> >-- Pasi
> >
>
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 15:11:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 15:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TobaK-0008PZ-Lq; Fri, 28 Dec 2012 15:10:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>)
	id 1TobaJ-0008PI-3B; Fri, 28 Dec 2012 15:10:55 +0000
Received: from [85.158.139.211:42993] by server-2.bemta-5.messagelabs.com id
	47/91-16162-E76BDD05; Fri, 28 Dec 2012 15:10:54 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356707348!19670380!1
X-Originating-IP: [220.181.15.47]
X-SpamReason: No, hits=3.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ3ID0+IDQxMjg=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ3ID0+IDQxMjg=\n,HTML_20_30,HTML_MESSAGE,
	MANY_EXCLAMATIONS,MIME_BASE64_TEXT,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8789 invoked from network); 28 Dec 2012 15:10:32 -0000
Received: from m15-47.126.com (HELO m15-47.126.com) (220.181.15.47)
	by server-4.tower-206.messagelabs.com with SMTP;
	28 Dec 2012 15:10:32 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=mQCHAtarX1lnbMUQfdtXnVVf4Jm7/75ur1wf
	TVSF/+s=; b=AodYnmR7at+JqvfSlbzuZCvh4vkt/cZLadAIXsnjubdp631augv/
	I21+j30yPxXyHGVWbJgrrjNbpv7m0DmC2hyLWM22WE/bb4deN8qu8+NVEu979v4H
	WuW1Z2Ga8uHvH/A/9Q06Yv7zgYe+vCWyhBMK3vhJsn+PTocCNPf9eAM=
Received: from hxkhust$126.com ( [59.174.46.57] ) by ajax-webmail-wmsvr47
	(Coremail) ; Fri, 28 Dec 2012 23:09:02 +0800 (CST)
X-Originating-IP: [59.174.46.57]
Date: Fri, 28 Dec 2012 23:09:02 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, 
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121219(21170.5156.5150) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: ZoXt12Zvb3Rlcl9odG09MzcxNjo4MQ==
MIME-Version: 1.0
Message-ID: <7d7c0264.18502.13be20f29d6.Coremail.hxkhust@126.com>
X-CM-TRANSID: L8qowEApmUIPtt1QmQYXAA--.4220W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbi6xKTBU0vONiQVQABsp
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!help!Problem with qcow2 image during a PVM's
	setting up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3556852658159723500=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3556852658159723500==
Content-Type: multipart/alternative; 
	boundary="----=_Part_377408_1001847886.1356707342806"

------=_Part_377408_1001847886.1356707342806
Content-Type: text/plain; charset=GBK

The following is what I did.
1) dd if=/dev/zero of=centos_raw.img bs=1024 count=8000000
2) Install CentOS 5.5 x64 in the image file centos_raw.img; the corresponding config file is:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
#maxmem = 768
name = "centos_raw_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:CE:EF,bridge=eth0']
disk = ['tap:aio:/home/pvm/centos_raw.img,xvda,w']
root = "/dev/xvda1 ro"
on_reboot = 'restart'
on_crash = 'restart'
3) qemu-img-xen create -b centos_raw.img -f qcow2 centos_raw_qcow2_1.img 5G
4) xm create centos_raw_qcow2_1.cfg, where centos_raw_qcow2_1.cfg looks like this:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
name = "centos_raw_qcow2_1_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:C1:EF,bridge=eth0']
disk = ['tap:qcow2:/home/pvm/centos_raw_qcow2_1.img,xvda,w']
root = "/dev/xvda1 ro"
on_reboot = 'restart'
on_crash = 'destroy'

However, this failed. The error message after the above command is entered is:
Using config file "./centos_raw_qcow2_1.cfg".
Error: Device 51712 (tap) could not be connected. Setting up the backend failed. See the log files in /var/log/xen/ for details.

So I tried another way.
After installing CentOS in centos_raw.img, I did the following:
qemu-img-xen convert -O qcow2 centos_raw.img centos_qcow2.img
This image file, centos_qcow2.img, runs normally with the config file:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
name = "centos_raw_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:CE:EF,bridge=eth0']
disk = ['tap:qcow2:/home/pvm/centos_qcow2.img,xvda,w']
root = "/dev/xvda1 ro"
on_reboot = 'restart'
on_crash = 'restart'

Then I entered the command below:
qemu-img-xen create -b centos_qcow2.img -f qcow2 centos_qcow2_qcow2.img 5G
and edited the config file centos_qcow2_qcow2.cfg:
kernel = "/home/pvm/vmlinuz-2.6.18-194.el5xen"
ramdisk = "/home/pvm/initrd-2.6.18-194.el5xen.img"
memory = 768
name = "centos_qcow2_qcow2_pv"
vcpus = 1
vif = ['mac=00:24:7C:3C:CE:1F,bridge=eth0']
boot="c"
disk = ['tap:qcow2:/home/pvm/centos_qcow2_qcow2.img,sda,w']
root = "/dev/sda1 ro"
on_reboot = 'restart'
on_crash = 'destroy'

and ran the command:
xm create centos_qcow2_qcow2.cfg

but what it printed was:
Using config file "./centos_qcow2_qcow2.cfg".
Error: Device 2048 (tap) could not be connected. Setting up the backend failed. See the log files in /var/log/xen/ for details.

I need to run a para-virtualized machine whose image file is in qcow2 format and is based on another image file. What can I do about this? I need your help.
In fact, I have no way forward. My boss just told me that if I cannot overcome this problem, the only thing I can do is die. HELP!
ANY suggestion would be fine!
PTAwOjI0OjdDOjNDOkNFOjFGLGJyaWRnZT1ldGgwJ108YnI+Ym9vdD0iYyI8YnI+ZGlzayA9IFsn
dGFwOnFjb3cyOi9ob21lL3B2bS9jZW50b3NfcWNvdzJfcWNvdzIuaW1nLHNkYSx3J108L2Rpdj4K
PGRpdj5yb290ID0gIi9kZXYvc2RhMSBybyI8YnI+b25fcmVib290ID0gJ3Jlc3RhcnQnPGJyPm9u
X2NyYXNoID0gJ2Rlc3Ryb3knPC9kaXY+CjxkaXY+Jm5ic3A7PC9kaXY+CjxkaXY+YW5kIGltcGxl
bWVudCB0aGUgY29tbWFuZDo8L2Rpdj4KPGRpdj54bSBjcmVhdGUgY2VudG9zX3Fjb3cyX3Fjb3cy
LmNmZzwvZGl2Pgo8ZGl2PiZuYnNwOzwvZGl2Pgo8ZGl2PmJ1dCB3aGF0IHdhcyBwb3N0ZWQgd2Vy
ZTo8L2Rpdj4KPGRpdj4KPGRpdj5Vc2luZyBjb25maWcgZmlsZSAiLi9jZW50b3NfcWNvdzJfcWNv
dzIuY2ZnIi48L2Rpdj4KPGRpdj5FcnJvcjogRGV2aWNlJm5ic3A7MjA0OCAodGFwKSBjb3VsZCBu
b3QgYmUgY29ubmVjdGVkLlNldHRpbmcgdXAgdGhlIGJhY2tlbmQgZmFpbGVkLiBTZWUgdGhlIGxv
ZyBmaWxlcyBpbiAvdmFyL2xvZy94ZW4vIGZvciBkZXRhaWxzLjwvZGl2Pgo8ZGl2PiZuYnNwOzwv
ZGl2Pgo8ZGl2PkkgbmVlZCB0byBydW4gYSBwYXJhLXZpcnR1YWxpemVkIG1hY2hpbmUgd2hvc2Ug
aW1hZ2UgZmlsZSBpcyBxY293MiBmb3JtYXQgYW5kIGlzIGJhc2VkIG9uIGFub3RoZXImbmJzcDsg
aW1hZ2UgZmlsZS5XaGF0IGNhbiBpIGRvIHdpdGggdGhpcz9JIG5lZWQgeW91ciBoZWxwLjwvZGl2
PjwvZGl2PjxkaXY+SW4gZmFjdKOsSSBoYXZlIG5vIHdheSB0byBnbyB0aHJvdWdoLk15IGJvc3Mg
anVzdCBub3cgdGVsbCBtZSB0aGF0IGlmIEkgY291bGQgbm90IG92ZXJjb21lIHRoaXMgcHJvYmxl
bSAsdGhlIG9ubHkgdGhpbmcgSSBjb3VsZCBkbyBpcyB0byBkaWUuSEVMUCE8L2Rpdj48ZGl2PkFO
WSBzdWdnZXN0aW9uIHdvdWxkIGJlIGZpbmUhPC9kaXY+PC9kaXY+PGJyPjxicj48c3BhbiB0aXRs
ZT0ibmV0ZWFzZWZvb3RlciI+PHNwYW4gaWQ9Im5ldGVhc2VfbWFpbF9mb290ZXIiPjwvc3Bhbj48
L3NwYW4+PC9kaXY+PGJyPjxicj48c3BhbiB0aXRsZT0ibmV0ZWFzZWZvb3RlciI+PHNwYW4gaWQ9
Im5ldGVhc2VfbWFpbF9mb290ZXIiPjwvc3Bhbj48L3NwYW4+
------=_Part_377408_1001847886.1356707342806--



--===============3556852658159723500==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3556852658159723500==--



From xen-devel-bounces@lists.xen.org Fri Dec 28 15:11:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 15:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TobaK-0008PZ-Lq; Fri, 28 Dec 2012 15:10:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hxkhust@126.com>)
	id 1TobaJ-0008PI-3B; Fri, 28 Dec 2012 15:10:55 +0000
Received: from [85.158.139.211:42993] by server-2.bemta-5.messagelabs.com id
	47/91-16162-E76BDD05; Fri, 28 Dec 2012 15:10:54 +0000
X-Env-Sender: hxkhust@126.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356707348!19670380!1
X-Originating-IP: [220.181.15.47]
X-SpamReason: No, hits=3.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ3ID0+IDQxMjg=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ3ID0+IDQxMjg=\n,HTML_20_30,HTML_MESSAGE,
	MANY_EXCLAMATIONS,MIME_BASE64_TEXT,PLING_PLING
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8789 invoked from network); 28 Dec 2012 15:10:32 -0000
Received: from m15-47.126.com (HELO m15-47.126.com) (220.181.15.47)
	by server-4.tower-206.messagelabs.com with SMTP;
	28 Dec 2012 15:10:32 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=mQCHAtarX1lnbMUQfdtXnVVf4Jm7/75ur1wf
	TVSF/+s=; b=AodYnmR7at+JqvfSlbzuZCvh4vkt/cZLadAIXsnjubdp631augv/
	I21+j30yPxXyHGVWbJgrrjNbpv7m0DmC2hyLWM22WE/bb4deN8qu8+NVEu979v4H
	WuW1Z2Ga8uHvH/A/9Q06Yv7zgYe+vCWyhBMK3vhJsn+PTocCNPf9eAM=
Received: from hxkhust$126.com ( [59.174.46.57] ) by ajax-webmail-wmsvr47
	(Coremail) ; Fri, 28 Dec 2012 23:09:02 +0800 (CST)
X-Originating-IP: [59.174.46.57]
Date: Fri, 28 Dec 2012 23:09:02 +0800 (CST)
From: hxkhust  <hxkhust@126.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, 
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20121219(21170.5156.5150) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: ZoXt12Zvb3Rlcl9odG09MzcxNjo4MQ==
MIME-Version: 1.0
Message-ID: <7d7c0264.18502.13be20f29d6.Coremail.hxkhust@126.com>
X-CM-TRANSID: L8qowEApmUIPtt1QmQYXAA--.4220W
X-CM-SenderInfo: xk0nx3lvw6ij2wof0z/1tbi6xKTBU0vONiQVQABsp
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] !!!!help!Problem with qcow2 image during a PVM's
	setting up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3556852658159723500=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3556852658159723500==
Content-Type: multipart/alternative; 
	boundary="----=_Part_377408_1001847886.1356707342806"

------=_Part_377408_1001847886.1356707342806
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: base64

VGhlIGZvbGxvd2luZyB3YXMgd2hhdCAgSSBkaWQuCjEpIGRkIGlmPS9kZXYvemVybyBvZiA9Y2Vu
dG9zX3Jhdy5pbWcgYnM9MTAyNCBjb3VudD04MDAwMDAwCjIpIGluc3RhbGwgdGhlIG9zIGNlbnRv
cyA1LjUgeDY0IGluIHRoZSBpbWFnZSBmaWxlIGNlbnRvc19yYXcuaW1nIGFuZCB0aGUgY29ycmVz
cG9uZ2RpbmcgY29uZmlnIGZpbGUgaXMgOgprZXJuZWwgPSAiL2hvbWUvcHZtL3ZtbGludXotMi42
LjE4LTE5NC5lbDV4ZW4iCnJhbWRpc2sgPSAiL2hvbWUvcHZtL2luaXRyZC0yLjYuMTgtMTk0LmVs
NXhlbi5pbWciCm1lbW9yeSA9IDc2OAojbWF4bWVtID0gNzY4Cm5hbWUgPSAiY2VudG9zX3Jhd19w
diIKdmNwdXMgPSAxCnZpZiA9IFsnbWFjPTAwOjI0OjdDOjNDOkNFOkVGLGJyaWRnZT1ldGgwJ10K
ZGlzayA9IFsndGFwOmFpbzovaG9tZS9wdm0vY2VudG9zX3Jhdy5pbWcseHZkYSx3J10Kcm9vdCA9
ICIvZGV2L3h2ZGExIHJvIgpvbl9yZWJvb3QgPSAncmVzdGFydCcKb25fY3Jhc2ggPSAncmVzdGFy
dCcKMykgcWVtdS1pbWcteGVuIGNyZWF0ZSAtYiBjZW50b3NfcmF3LmltZyAtZiBxY293MiBjZW50
b3NfcmF3X3Fjb3cyXzEuaW1nIDVHCjQpIHhtIGNyZWF0ZSBjZW50b3NfcmF3X3Fjb3cyXzEuY2Zn
IGFuZCB0aGUgY2VudG9zX3Jhd19xY293Ml8xLmNmZyBpcyBqdXN0IGxpa2UgdGhpczoKa2VybmVs
ID0gIi9ob21lL3B2bS92bWxpbnV6LTIuNi4xOC0xOTQuZWw1eGVuIgpyYW1kaXNrID0gIi9ob21l
L3B2bS9pbml0cmQtMi42LjE4LTE5NC5lbDV4ZW4uaW1nIgptZW1vcnkgPSA3NjgKbmFtZSA9ICJj
ZW50b3NfcmF3X3Fjb3cyXzFfcHYiCnZjcHVzID0gMQp2aWYgPSBbJ21hYz0wMDoyNDo3QzozQzpD
MTpFRixicmlkZ2U9ZXRoMCddCmRpc2sgPSBbJ3RhcDpxY293MjovaG9tZS9wdm0vY2VudG9zX3Jh
d19xY293Ml8xLmltZyx4dmRhLHcnXQpyb290ID0gIi9kZXYveHZkYTEgcm8iCm9uX3JlYm9vdCA9
ICdyZXN0YXJ0Jwpvbl9jcmFzaCA9ICdkZXN0cm95JwogCmhvd2V2ZXIgSSdtIGZhaWxlZC50aGUg
ZXJyb3IgbWVzc2FnZSBhZnRlciB0aGUgYWJvdmUgY29tbWFuZCBpcyBlbnRlcmVkIGlzIDoKVXNp
bmcgY29uZmlnIGZpbGUgIi4vY2VudG9zX3Jhd19xY293Ml8xLmNmZyIuCkVycm9yOiBEZXZpY2Ug
NTE3MTIgKHRhcCkgY291bGQgbm90IGJlIGNvbm5lY3RlZC5TZXR0aW5nIHVwIHRoZSBiYWNrZW5k
IGZhaWxlZC4gU2VlIHRoZSBsb2cgZmlsZXMgaW4gL3Zhci9sb2cveGVuLyBmb3IgZGV0YWlscy4K
IApTbyBJIGhhdmUgdHJpZWQgYW5vdGhlciB3YXkuCmFmdGVyIEkgaW5zdGFsbCBjZW50b3MgaW4g
Y2VudG9zX3Jhdy5pbWcsIEkgZGlkIHRoZSBmb2xsb3dpbmc6CnFlbXUtaW1nLXhlbiBjb252ZXJ0
IC1PIHFjb3cyIGNlbnRvc19yYXcuaW1nIGNlbnRvc19xY293Mi5pbWcKaGVyZSB0aGlzIGltYWdl
IGZpbGUgY2VudG9zX3Fjb3cyLmltZyBjYW4gYmUgcnVubmluZyBub3JtYWxseSB3aXRoIHRoZSBj
b25maWcgZmlsZSA6Cmtlcm5lbCA9ICIvaG9tZS9wdm0vdm1saW51ei0yLjYuMTgtMTk0LmVsNXhl
biIKcmFtZGlzayA9ICIvaG9tZS9wdm0vaW5pdHJkLTIuNi4xOC0xOTQuZWw1eGVuLmltZyIKbWVt
b3J5ID0gNzY4Cm5hbWUgPSAiY2VudG9zX3Jhd19wdiIKdmNwdXMgPSAxCnZpZiA9IFsnbWFjPTAw
OjI0OjdDOjNDOkNFOkVGLGJyaWRnZT1ldGgwJ10KZGlzayA9IFsndGFwOnFjb3cyOi9ob21lL3B2
bS9jZW50b3NfcWNvdzIuaW1nLHh2ZGEsdyddCnJvb3QgPSAiL2Rldi94dmRhMSBybyIKb25fcmVi
b290ID0gJ3Jlc3RhcnQnCm9uX2NyYXNoID0gJ3Jlc3RhcnQnCiAKVGhlbiBJIGlucHV0IHRoZSBj
b21tYW5kIGJlbG93OgpxZW11LWltZy14ZW4gY3JlYXRlIC1iIGNlbnRvc19xY293Mi5pbWcgLWYg
cWNvdzIgY2VudG9zX3Fjb3cyX3Fjb3cyLmltZyA1RwphbmQgSSBlZGl0ICB0aGUgY29uZmlnIGZp
bGUgY2VudG9zX3Fjb3cyX3Fjb3cyLmNmZzoKa2VybmVsID0gIi9ob21lL3B2bS92bWxpbnV6LTIu
Ni4xOC0xOTQuZWw1eGVuIgpyYW1kaXNrID0gIi9ob21lL3B2bS9pbml0cmQtMi42LjE4LTE5NC5l
bDV4ZW4uaW1nIgptZW1vcnkgPSA3NjgKbmFtZSA9ICJjZW50b3NfcWNvdzJfcWNvdzJfcHYiCnZj
cHVzID0gMQp2aWYgPSBbJ21hYz0wMDoyNDo3QzozQzpDRToxRixicmlkZ2U9ZXRoMCddCmJvb3Q9
ImMiCmRpc2sgPSBbJ3RhcDpxY293MjovaG9tZS9wdm0vY2VudG9zX3Fjb3cyX3Fjb3cyLmltZyxz
ZGEsdyddCnJvb3QgPSAiL2Rldi9zZGExIHJvIgpvbl9yZWJvb3QgPSAncmVzdGFydCcKb25fY3Jh
c2ggPSAnZGVzdHJveScKIAphbmQgaW1wbGVtZW50IHRoZSBjb21tYW5kOgp4bSBjcmVhdGUgY2Vu
dG9zX3Fjb3cyX3Fjb3cyLmNmZwogCmJ1dCB3aGF0IHdhcyBwb3N0ZWQgd2VyZToKVXNpbmcgY29u
ZmlnIGZpbGUgIi4vY2VudG9zX3Fjb3cyX3Fjb3cyLmNmZyIuCkVycm9yOiBEZXZpY2UgMjA0OCAo
dGFwKSBjb3VsZCBub3QgYmUgY29ubmVjdGVkLlNldHRpbmcgdXAgdGhlIGJhY2tlbmQgZmFpbGVk
LiBTZWUgdGhlIGxvZyBmaWxlcyBpbiAvdmFyL2xvZy94ZW4vIGZvciBkZXRhaWxzLgogCkkgbmVl
ZCB0byBydW4gYSBwYXJhLXZpcnR1YWxpemVkIG1hY2hpbmUgd2hvc2UgaW1hZ2UgZmlsZSBpcyBx
Y293MiBmb3JtYXQgYW5kIGlzIGJhc2VkIG9uIGFub3RoZXIgIGltYWdlIGZpbGUuV2hhdCBjYW4g
aSBkbyB3aXRoIHRoaXM/SSBuZWVkIHlvdXIgaGVscC4KSW4gZmFjdKOsSSBoYXZlIG5vIHdheSB0
byBnbyB0aHJvdWdoLk15IGJvc3MganVzdCBub3cgdGVsbCBtZSB0aGF0IGlmIEkgY291bGQgbm90
IG92ZXJjb21lIHRoaXMgcHJvYmxlbSAsdGhlIG9ubHkgdGhpbmcgSSBjb3VsZCBkbyBpcyB0byBk
aWUuSEVMUCEKQU5ZIHN1Z2dlc3Rpb24gd291bGQgYmUgZmluZSEKCgo=
------=_Part_377408_1001847886.1356707342806
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: base64

PGRpdiBzdHlsZT0ibGluZS1oZWlnaHQ6MS43O2NvbG9yOiMwMDAwMDA7Zm9udC1zaXplOjE0cHg7
Zm9udC1mYW1pbHk6YXJpYWwiPjxkaXYgc3R5bGU9ImxpbmUtaGVpZ2h0OjEuNztjb2xvcjojMDAw
MDAwO2ZvbnQtc2l6ZToxNHB4O2ZvbnQtZmFtaWx5OmFyaWFsIj48ZGl2PlRoZSBmb2xsb3dpbmcg
d2FzIHdoYXQmbmJzcDsgSSBkaWQuPC9kaXY+CjxkaXY+MSkgZGQgaWY9L2Rldi96ZXJvIG9mID1j
ZW50b3NfcmF3LmltZyBicz0xMDI0IGNvdW50PTgwMDAwMDA8L2Rpdj4KPGRpdj4yKSBpbnN0YWxs
IHRoZSBvcyBjZW50b3MgNS41IHg2NCBpbiB0aGUgaW1hZ2UgZmlsZSBjZW50b3NfcmF3LmltZyBh
bmQgdGhlIGNvcnJlc3BvbmdkaW5nIGNvbmZpZyBmaWxlIGlzIDo8L2Rpdj4KPGRpdj5rZXJuZWwg
PSAiL2hvbWUvcHZtL3ZtbGludXotMi42LjE4LTE5NC5lbDV4ZW4iPGJyPnJhbWRpc2sgPSAiL2hv
bWUvcHZtL2luaXRyZC0yLjYuMTgtMTk0LmVsNXhlbi5pbWciPGJyPm1lbW9yeSA9IDc2ODxicj4j
bWF4bWVtID0gNzY4PGJyPm5hbWUgPSAiY2VudG9zX3Jhd19wdiI8YnI+dmNwdXMgPSAxPGJyPnZp
ZiA9IFsnbWFjPTAwOjI0OjdDOjNDOkNFOkVGLGJyaWRnZT1ldGgwJ108YnI+ZGlzayA9IFsndGFw
OmFpbzovaG9tZS9wdm0vY2VudG9zX3Jhdy5pbWcseHZkYSx3J108L2Rpdj4KPGRpdj5yb290ID0g
Ii9kZXYveHZkYTEgcm8iPGJyPm9uX3JlYm9vdCA9ICdyZXN0YXJ0Jzxicj5vbl9jcmFzaCA9ICdy
ZXN0YXJ0JzwvZGl2Pgo8ZGl2PjMpIHFlbXUtaW1nLXhlbiBjcmVhdGUgLWIgY2VudG9zX3Jhdy5p
bWcgLWYgcWNvdzIgY2VudG9zX3Jhd19xY293Ml8xLmltZyA1RzwvZGl2Pgo8ZGl2PjQpIHhtIGNy
ZWF0ZSBjZW50b3NfcmF3X3Fjb3cyXzEuY2ZnIGFuZCB0aGUgY2VudG9zX3Jhd19xY293Ml8xLmNm
ZyBpcyBqdXN0IGxpa2UgdGhpczo8L2Rpdj4KPGRpdj5rZXJuZWwgPSAiL2hvbWUvcHZtL3ZtbGlu
dXotMi42LjE4LTE5NC5lbDV4ZW4iPGJyPnJhbWRpc2sgPSAiL2hvbWUvcHZtL2luaXRyZC0yLjYu
MTgtMTk0LmVsNXhlbi5pbWciPGJyPm1lbW9yeSA9IDc2ODxicj5uYW1lID0gImNlbnRvc19yYXdf
cWNvdzJfMV9wdiI8YnI+dmNwdXMgPSAxPGJyPnZpZiA9IFsnbWFjPTAwOjI0OjdDOjNDOkMxOkVG
LGJyaWRnZT1ldGgwJ108YnI+ZGlzayA9IFsndGFwOnFjb3cyOi9ob21lL3B2bS9jZW50b3NfcmF3
X3Fjb3cyXzEuaW1nLHh2ZGEsdyddPC9kaXY+CjxkaXY+cm9vdCA9ICIvZGV2L3h2ZGExIHJvIjxi
cj5vbl9yZWJvb3QgPSAncmVzdGFydCc8YnI+b25fY3Jhc2ggPSAnZGVzdHJveSc8L2Rpdj4KPGRp
dj4mbmJzcDs8L2Rpdj4KPGRpdj5ob3dldmVyIEknbSBmYWlsZWQudGhlIGVycm9yIG1lc3NhZ2Ug
YWZ0ZXIgdGhlIGFib3ZlIGNvbW1hbmQgaXMmbmJzcDtlbnRlcmVkJm5ic3A7aXMgOjwvZGl2Pgo8
ZGl2PlVzaW5nIGNvbmZpZyBmaWxlICIuL2NlbnRvc19yYXdfcWNvdzJfMS5jZmciLjwvZGl2Pgo8
ZGl2PkVycm9yOiBEZXZpY2UgNTE3MTIgKHRhcCkgY291bGQgbm90IGJlIGNvbm5lY3RlZC5TZXR0
aW5nIHVwIHRoZSBiYWNrZW5kIGZhaWxlZC4gU2VlIHRoZSBsb2cgZmlsZXMgaW4gL3Zhci9sb2cv
eGVuLyBmb3IgZGV0YWlscy48L2Rpdj4KPGRpdj4mbmJzcDs8L2Rpdj4KPGRpdj5TbyBJIGhhdmUg
dHJpZWQgYW5vdGhlciB3YXkuPC9kaXY+CjxkaXY+YWZ0ZXIgSSBpbnN0YWxsIGNlbnRvcyBpbiBj
ZW50b3NfcmF3LmltZywgSSBkaWQgdGhlIGZvbGxvd2luZzo8L2Rpdj4KPGRpdj5xZW11LWltZy14
ZW4gY29udmVydCAtTyBxY293MiBjZW50b3NfcmF3LmltZyBjZW50b3NfcWNvdzIuaW1nPC9kaXY+
CjxkaXY+aGVyZSB0aGlzIGltYWdlIGZpbGUgY2VudG9zX3Fjb3cyLmltZyBjYW4gYmUgcnVubmlu
ZyBub3JtYWxseSB3aXRoIHRoZSBjb25maWcgZmlsZSA6PC9kaXY+CjxkaXY+a2VybmVsID0gIi9o
b21lL3B2bS92bWxpbnV6LTIuNi4xOC0xOTQuZWw1eGVuIjxicj5yYW1kaXNrID0gIi9ob21lL3B2
bS9pbml0cmQtMi42LjE4LTE5NC5lbDV4ZW4uaW1nIjxicj5tZW1vcnkgPSA3Njg8YnI+bmFtZSA9
ICJjZW50b3NfcmF3X3B2Ijxicj52Y3B1cyA9IDE8YnI+dmlmID0gWydtYWM9MDA6MjQ6N0M6M0M6
Q0U6RUYsYnJpZGdlPWV0aDAnXTxicj5kaXNrID0gWyd0YXA6cWNvdzI6L2hvbWUvcHZtL2NlbnRv
c19xY293Mi5pbWcseHZkYSx3J108L2Rpdj4KPGRpdj5yb290ID0gIi9kZXYveHZkYTEgcm8iPGJy
Pm9uX3JlYm9vdCA9ICdyZXN0YXJ0Jzxicj5vbl9jcmFzaCA9ICdyZXN0YXJ0JzwvZGl2Pgo8ZGl2
PiZuYnNwOzwvZGl2Pgo8ZGl2PlRoZW4gSSBpbnB1dCB0aGUgY29tbWFuZCBiZWxvdzo8L2Rpdj4K
PGRpdj5xZW11LWltZy14ZW4gY3JlYXRlIC1iIGNlbnRvc19xY293Mi5pbWcgLWYgcWNvdzIgY2Vu
dG9zX3Fjb3cyX3Fjb3cyLmltZyA1RzwvZGl2Pgo8ZGl2PmFuZCBJJm5ic3A7ZWRpdCAmbmJzcDt0
aGUgY29uZmlnIGZpbGUgY2VudG9zX3Fjb3cyX3Fjb3cyLmNmZzo8L2Rpdj4KPGRpdj5rZXJuZWwg
PSAiL2hvbWUvcHZtL3ZtbGludXotMi42LjE4LTE5NC5lbDV4ZW4iPGJyPnJhbWRpc2sgPSAiL2hv
bWUvcHZtL2luaXRyZC0yLjYuMTgtMTk0LmVsNXhlbi5pbWciPGJyPm1lbW9yeSA9IDc2ODxicj5u
YW1lID0gImNlbnRvc19xY293Ml9xY293Ml9wdiI8YnI+dmNwdXMgPSAxPGJyPnZpZiA9IFsnbWFj
PTAwOjI0OjdDOjNDOkNFOjFGLGJyaWRnZT1ldGgwJ108YnI+Ym9vdD0iYyI8YnI+ZGlzayA9IFsn
dGFwOnFjb3cyOi9ob21lL3B2bS9jZW50b3NfcWNvdzJfcWNvdzIuaW1nLHNkYSx3J108L2Rpdj4K
PGRpdj5yb290ID0gIi9kZXYvc2RhMSBybyI8YnI+b25fcmVib290ID0gJ3Jlc3RhcnQnPGJyPm9u
X2NyYXNoID0gJ2Rlc3Ryb3knPC9kaXY+CjxkaXY+Jm5ic3A7PC9kaXY+CjxkaXY+YW5kIGltcGxl
bWVudCB0aGUgY29tbWFuZDo8L2Rpdj4KPGRpdj54bSBjcmVhdGUgY2VudG9zX3Fjb3cyX3Fjb3cy
LmNmZzwvZGl2Pgo8ZGl2PiZuYnNwOzwvZGl2Pgo8ZGl2PmJ1dCB3aGF0IHdhcyBwb3N0ZWQgd2Vy
ZTo8L2Rpdj4KPGRpdj4KPGRpdj5Vc2luZyBjb25maWcgZmlsZSAiLi9jZW50b3NfcWNvdzJfcWNv
dzIuY2ZnIi48L2Rpdj4KPGRpdj5FcnJvcjogRGV2aWNlJm5ic3A7MjA0OCAodGFwKSBjb3VsZCBu
b3QgYmUgY29ubmVjdGVkLlNldHRpbmcgdXAgdGhlIGJhY2tlbmQgZmFpbGVkLiBTZWUgdGhlIGxv
ZyBmaWxlcyBpbiAvdmFyL2xvZy94ZW4vIGZvciBkZXRhaWxzLjwvZGl2Pgo8ZGl2PiZuYnNwOzwv
ZGl2Pgo8ZGl2PkkgbmVlZCB0byBydW4gYSBwYXJhLXZpcnR1YWxpemVkIG1hY2hpbmUgd2hvc2Ug
aW1hZ2UgZmlsZSBpcyBxY293MiBmb3JtYXQgYW5kIGlzIGJhc2VkIG9uIGFub3RoZXImbmJzcDsg
aW1hZ2UgZmlsZS5XaGF0IGNhbiBpIGRvIHdpdGggdGhpcz9JIG5lZWQgeW91ciBoZWxwLjwvZGl2
PjwvZGl2PjxkaXY+SW4gZmFjdKOsSSBoYXZlIG5vIHdheSB0byBnbyB0aHJvdWdoLk15IGJvc3Mg
anVzdCBub3cgdGVsbCBtZSB0aGF0IGlmIEkgY291bGQgbm90IG92ZXJjb21lIHRoaXMgcHJvYmxl
bSAsdGhlIG9ubHkgdGhpbmcgSSBjb3VsZCBkbyBpcyB0byBkaWUuSEVMUCE8L2Rpdj48ZGl2PkFO
WSBzdWdnZXN0aW9uIHdvdWxkIGJlIGZpbmUhPC9kaXY+PC9kaXY+PGJyPjxicj48c3BhbiB0aXRs
ZT0ibmV0ZWFzZWZvb3RlciI+PHNwYW4gaWQ9Im5ldGVhc2VfbWFpbF9mb290ZXIiPjwvc3Bhbj48
L3NwYW4+PC9kaXY+PGJyPjxicj48c3BhbiB0aXRsZT0ibmV0ZWFzZWZvb3RlciI+PHNwYW4gaWQ9
Im5ldGVhc2VfbWFpbF9mb290ZXIiPjwvc3Bhbj48L3NwYW4+
------=_Part_377408_1001847886.1356707342806--



--===============3556852658159723500==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3556852658159723500==--



From xen-devel-bounces@lists.xen.org Fri Dec 28 15:11:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 15:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tobas-0008Vu-UG; Fri, 28 Dec 2012 15:11:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1Tobar-0008VX-Bp; Fri, 28 Dec 2012 15:11:29 +0000
Received: from [85.158.139.211:50853] by server-16.bemta-5.messagelabs.com id
	71/1F-09208-0A6BDD05; Fri, 28 Dec 2012 15:11:28 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-10.tower-206.messagelabs.com!1356707414!19320610!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NTk1NTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6592 invoked from network); 28 Dec 2012 15:11:27 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Dec 2012 15:11:27 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id D8A7329D7;
	Fri, 28 Dec 2012 17:10:00 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id B0F8BF4017; Fri, 28 Dec 2012 17:10:00 +0200 (EET)
Date: Fri, 28 Dec 2012 17:10:00 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Fioravante, Matthew E." <Matthew.Fioravante@jhuapl.edu>
Message-ID: <20121228151000.GT8912@reaktio.net>
References: <238048a.1c0.13bc3bc9be6.Coremail.gbtux@126.com>
	<20121222220407.GP8912@reaktio.net>
	<8fe6315.6fa2.13bc7dd8763.Coremail.gbtux@126.com>
	<068F06DC4D106941B297C0C5F9F446EA48D30B4E78@aplesstripe.dom1.jhuapl.edu>
	<20121227170322.GS8912@reaktio.net>
	<068F06DC4D106941B297C0C5F9F446EA48D30B4E79@aplesstripe.dom1.jhuapl.edu>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <068F06DC4D106941B297C0C5F9F446EA48D30B4E79@aplesstripe.dom1.jhuapl.edu>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>, gavin <gbtux@126.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops
 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 28, 2012 at 09:47:22AM -0500, Fioravante, Matthew E. wrote:
> No, the vtpm manager is implemented in mini-os, and mini-os now has a
> driver for direct access to the physical tpm.
> dom0 no longer takes any part in the vtpm communication paths.
>

Ok. that's nice. Thanks for the info :)

-- Pasi

> ________________________________________
> From: Pasi K=E4rkk=E4inen [pasik@iki.fi]
> Sent: Thursday, December 27, 2012 12:03 PM
> To: Fioravante, Matthew E.
> Cc: gavin; xen-users@lists.xen.org; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-ops=
 kernel
> =

> On Thu, Dec 27, 2012 at 10:27:33AM -0500, Fioravante, Matthew E. wrote:
> > The frontend driver is currently being ported to the latest kernel. You=
 can
> > find the patch cross listed here as well as the linux kernel mailing li=
st.
> >
> > I have no plans to port the backend driver. If you need it you'll have =
to get it from the 2.6.18
> > kernel and port it yourself.
> >
> =

> Hmm.. are you still using 2.6.18 kernel in dom0 yourself?
> =

> -- Pasi
> =

> > ________________________________________
> > From: xen-devel-bounces@lists.xen.org [xen-devel-bounces@lists.xen.org]=
 On Behalf Of gavin [gbtux@126.com]
> > Sent: Sunday, December 23, 2012 8:04 AM
> > To: Pasi K=E4rkk=E4inen
> > Cc: xen-users@lists.xen.org; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] How to use the vTPM backend driver in the pv-o=
ps kernel
> >
> > Hi Pasi,
> >
> > Thank you very much for your information.
> >
> > Best Regards,
> > Gavin
> >
> > At 2012-12-23 06:04:08,"Pasi K=E4rkk=E4inen" <pasik@iki.fi> wrote:
> >
> > >On Sun, Dec 23, 2012 at 01:50:16AM +0800, gavin wrote:
> > >>     Hi,
> > >>
> > >>    I cannot find the vTPM config option CONFIG_XEN_TPMDEV_BACKEND in=
 the
> > >>    config file of pv-ops kernel, such as kernel 2.6.32.50. However, =
this
> > >>    option exists in the config file of kernel version 2.6.18.8. I al=
so cannot
> > >>    find the vTPM backend driver (such as
> > >>    linux-2.6.18-xen.hg/drivers/xen/tpmback ) in the pv-ops kernel.
> > >>    So, how can I configure and use the vTPM backend driver in kernel=
 2.6.32?
> > >>    Thank you for any advice.
> > >>
> > >
> > >I don't think vtpm drivers were ported to 2.6.32 pvops.
> > >Recently there has been work on porting the drivers to upstream Linux =
3.x,
> > >but they aren't merged yet iirc.
> > >
> > >If you need to use them with 2.6.32 you need to port them yourself..
> > >
> > >-- Pasi
> > >
> >
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 15:38:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 15:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Toc15-0001I2-5H; Fri, 28 Dec 2012 15:38:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1Toc12-0001Hx-VR
	for xen-devel@lists.xensource.com; Fri, 28 Dec 2012 15:38:33 +0000
Received: from [85.158.138.51:37170] by server-4.bemta-3.messagelabs.com id
	DA/0C-31835-7FCBDD05; Fri, 28 Dec 2012 15:38:31 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-7.tower-174.messagelabs.com!1356709110!20653781!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RATWARE_GECKO_BUILD,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17216 invoked from network); 28 Dec 2012 15:38:30 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Dec 2012 15:38:30 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id 1D286400387;
	Fri, 28 Dec 2012 16:38:30 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id dtg0VJBqGfk9; Fri, 28 Dec 2012 16:38:29 +0100 (CET)
Received: from [192.168.178.50]
	(host123-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.123])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id 2680E40017B;
	Fri, 28 Dec 2012 16:38:29 +0100 (CET)
Message-ID: <50DDBCF0.5070200@tiscali.it>
Date: Fri, 28 Dec 2012 16:38:24 +0100
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>
References: <50D305F2.1060109@tiscali.it>
In-Reply-To: <50D305F2.1060109@tiscali.it>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Update Seabios on xen-unstable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2736960270111390062=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

--===============2736960270111390062==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms040403010903000301060500"

This is a digitally signed message in MIME format.

--------------ms040403010903000301060500
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20/12/2012 13:34, Fabio Fantoni wrote:
> I saw the good news about qemu-xen on xen-unstable being updated to the
> latest stable version (1.3); I have already started testing it and
> reporting the bugs found.
> I see no seabios updates on xen-unstable for now, although the 1.7.1
> upstream version, which includes all the vgabios variants, has been
> available for a few months.
> Would it be possible to update seabios on xen-unstable, please?
> I tried it some time ago, hoping to resolve the qxl vga problem, without
> result.
> It probably needs some particular settings or modifications, which I do
> not know of, for correct integration with xen regarding vgabios.
> Thanks for any reply.
>
I have updated the Debian seabios package to 1.7.1 (in a personal
repository for now) and tested it with wheezy and xen-unstable; it is
working.

Package repository: https://github.com/Fantu/pkg-seabios

Changelog of my build:
-------------------------------------------------------------------
seabios (1.7.1-0.1) experimental; urgency=low

   * Non-maintainer upload
   * New upstream release 1.7.1 featuring xen support (Closes: #678042)
   * Removed debian/patches/fix-==-in-shell.patch (applied upstream).
   * Updated debian/optionrom from qemu 1.3.0 keeping debian changes.

  -- Fabio Fantoni <fabio.fantoni@heliman.it>  Thu, 28 Dec 2012 16:18:06 +0100
-------------------------------------------------------------------

Could someone test it and tell me whether it is good enough to propose
uploading it to Debian, and hence enabling upstream qemu with Xen in the
experimental repository?


--------------ms040403010903000301060500
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: Firma crittografica S/MIME

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIINhjCC
BjQwggQcoAMCAQICASAwDQYJKoZIhvcNAQEFBQAwfTELMAkGA1UEBhMCSUwxFjAUBgNVBAoT
DVN0YXJ0Q29tIEx0ZC4xKzApBgNVBAsTIlNlY3VyZSBEaWdpdGFsIENlcnRpZmljYXRlIFNp
Z25pbmcxKTAnBgNVBAMTIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA3
MTAyNDIxMDI1NVoXDTE3MTAyNDIxMDI1NVowgYwxCzAJBgNVBAYTAklMMRYwFAYDVQQKEw1T
dGFydENvbSBMdGQuMSswKQYDVQQLEyJTZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0ZSBTaWdu
aW5nMTgwNgYDVQQDEy9TdGFydENvbSBDbGFzcyAyIFByaW1hcnkgSW50ZXJtZWRpYXRlIENs
aWVudCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMsohUWcASz7GfKrpTOM
KqANy9BV7V0igWdGxA8IU77L3aTxErQ+fcxtDYZ36Z6GH0YFn7fq5RADteP0AYzrCA+EQTfi
8q1+kA3m0nwtwXG94M5sIqsvs7lRP1aycBke/s5g9hJHryZ2acScnzczjBCAo7X1v5G3yw8M
DP2m2RCye0KfgZ4nODerZJVzhAlOD9YejvAXZqHksw56HzElVIoYSZ3q4+RJuPXXfIoyby+Y
2m1E+YzX5iCZXBx05gk6MKAW1vaw4/v2OOLy6FZH3XHHtOkzUreG//CsFnB9+uaYSlR65cdG
zTsmoIK8WH1ygoXhRBm98SD7Hf/r3FELNvUCAwEAAaOCAa0wggGpMA8GA1UdEwEB/wQFMAMB
Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSuVYNv7DHKufcd+q9rMfPIHeOsuzAfBgNV
HSMEGDAWgBROC+8apEBbpRdphzDKNGhD0EGu8jBmBggrBgEFBQcBAQRaMFgwJwYIKwYBBQUH
MAGGG2h0dHA6Ly9vY3NwLnN0YXJ0c3NsLmNvbS9jYTAtBggrBgEFBQcwAoYhaHR0cDovL3d3
dy5zdGFydHNzbC5jb20vc2ZzY2EuY3J0MFsGA1UdHwRUMFIwJ6AloCOGIWh0dHA6Ly93d3cu
c3RhcnRzc2wuY29tL3Nmc2NhLmNybDAnoCWgI4YhaHR0cDovL2NybC5zdGFydHNzbC5jb20v
c2ZzY2EuY3JsMIGABgNVHSAEeTB3MHUGCysGAQQBgbU3AQIBMGYwLgYIKwYBBQUHAgEWImh0
dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYwNAYIKwYBBQUHAgEWKGh0dHA6Ly93
d3cuc3RhcnRzc2wuY29tL2ludGVybWVkaWF0ZS5wZGYwDQYJKoZIhvcNAQEFBQADggIBADqp
Jw3I07QWke9plNBpxUxcffc7nUrIQpJHDci91DFG7fVhHRkMZ1J+BKg5UNUxIFJ2Z9B90Mic
c/NXcs7kPBRdn6XGO/vPc87Y6R+cWS9Nc9+fp3Enmsm94OxOwI9wn8qnr/6o3mD4noP9Jphw
UPTXwHovjavRnhUQHLfo/i2NG0XXgTHXS2Xm0kVUozXqpYpAdumMiB/vezj1QHQJDmUdPYMc
p+reg9901zkyT3fDW/ivJVv6pWtkh6Pw2ytZT7mvg7YhX3V50Nv860cV11mocUVcqBLv0gcT
+HBDYtbuvexNftwNQKD5193A7zN4vG7CTYkXxytSjKuXrpEatEiFPxWgb84nVj25SU5q/r1X
hwby6mLhkbaXslkVtwEWT3Van49rKjlK4XrUKYYWtnfzq6aSak5u0Vpxd1rY79tWhD3EdCvO
hNz/QplNa+VkIsrcp7+8ZhP1l1b2U6MaxIVteuVMD3X0vziIwr7jxYae9FZjbxlpUemqXjcC
0QaFfN7qI0JsQMALL7iGRBg7K0CoOBzECdD3fuZil5kU/LP9cr1BK31U0Uy651bFnAMMMkqh
AChIbn0ei72VnbpSsrrSdF0BAGYQ8vyHae5aCg+H75dVCV33K6FuxZrf09yTz+Vx/PkdRUYk
XmZz/OTfyJXsUOUXrym6KvI2rYpccSk5MIIHSjCCBjKgAwIBAgICHmMwDQYJKoZIhvcNAQEF
BQAwgYwxCzAJBgNVBAYTAklMMRYwFAYDVQQKEw1TdGFydENvbSBMdGQuMSswKQYDVQQLEyJT
ZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0ZSBTaWduaW5nMTgwNgYDVQQDEy9TdGFydENvbSBD
bGFzcyAyIFByaW1hcnkgSW50ZXJtZWRpYXRlIENsaWVudCBDQTAeFw0xMjAzMTgyMjE0MzBa
Fw0xNDAzMjAwODU3MDlaMIGMMRkwFwYDVQQNExBlQjZPRTM3UlJOUHlsNW0yMQswCQYDVQQG
EwJJVDEQMA4GA1UECBMHQmVyZ2FtbzEQMA4GA1UEBxMHUm92ZXR0YTEWMBQGA1UEAxMNRmFi
aW8gRmFudG9uaTEmMCQGCSqGSIb3DQEJARYXZmFudG9uaWZhYmlvQHRpc2NhbGkuaXQwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1XhckXsX23vgJq76s2f0KT8U8Msov5QgV
10eQBb2wL/TzcmqtZotI7ztKVhio3ehHg+mfu+3EqOkX9Umgut8rP0bPi7AGjkPXbOTT/cSU
Xz2Kw31VGOmiOVoUFGvpQitp3weCkhUJLBipI8EpNyBXpjtQ9yCpnIAqfuc77ybfSnCy7tTR
MBq1BUkfjH1+GL45riosuS4+F+MSUvlYzLiT4rAduAX1Y2IuORDsf9Bce8GBxa6syP9rCyzl
Vk7DIX5k8j2vlnyRATIypn5CQLQxGT6e0f6ac4gvWOHwO2QEBsmZKKs1ZidE4q/9OoNXYX6A
jnHtp1H1vcrek/vVcs19AgMBAAGjggOyMIIDrjAJBgNVHRMEAjAAMAsGA1UdDwQEAwIEsDAd
BgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwQwHQYDVR0OBBYEFFan8cbEWWBmSTWFtLk2
YNdAcGUbMB8GA1UdIwQYMBaAFK5Vg2/sMcq59x36r2sx88gd46y7MCIGA1UdEQQbMBmBF2Zh
bnRvbmlmYWJpb0B0aXNjYWxpLml0MIICIQYDVR0gBIICGDCCAhQwggIQBgsrBgEEAYG1NwEC
AjCCAf8wLgYIKwYBBQUHAgEWImh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYw
NAYIKwYBBQUHAgEWKGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL2ludGVybWVkaWF0ZS5wZGYw
gfcGCCsGAQUFBwICMIHqMCcWIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MAMC
AQEagb5UaGlzIGNlcnRpZmljYXRlIHdhcyBpc3N1ZWQgYWNjb3JkaW5nIHRvIHRoZSBDbGFz
cyAyIFZhbGlkYXRpb24gcmVxdWlyZW1lbnRzIG9mIHRoZSBTdGFydENvbSBDQSBwb2xpY3ks
IHJlbGlhbmNlIG9ubHkgZm9yIHRoZSBpbnRlbmRlZCBwdXJwb3NlIGluIGNvbXBsaWFuY2Ug
b2YgdGhlIHJlbHlpbmcgcGFydHkgb2JsaWdhdGlvbnMuMIGcBggrBgEFBQcCAjCBjzAnFiBT
dGFydENvbSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTADAgECGmRMaWFiaWxpdHkgYW5kIHdh
cnJhbnRpZXMgYXJlIGxpbWl0ZWQhIFNlZSBzZWN0aW9uICJMZWdhbCBhbmQgTGltaXRhdGlv
bnMiIG9mIHRoZSBTdGFydENvbSBDQSBwb2xpY3kuMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6
Ly9jcmwuc3RhcnRzc2wuY29tL2NydHUyLWNybC5jcmwwgY4GCCsGAQUFBwEBBIGBMH8wOQYI
KwYBBQUHMAGGLWh0dHA6Ly9vY3NwLnN0YXJ0c3NsLmNvbS9zdWIvY2xhc3MyL2NsaWVudC9j
YTBCBggrBgEFBQcwAoY2aHR0cDovL2FpYS5zdGFydHNzbC5jb20vY2VydHMvc3ViLmNsYXNz
Mi5jbGllbnQuY2EuY3J0MCMGA1UdEgQcMBqGGGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tLzAN
BgkqhkiG9w0BAQUFAAOCAQEAjzHNqifpDVMkH1TSPFZVIiQ4fh49/V5JMpstgqEZPDaDe5r8
h+fMBZtUa6LLMco03Z9BNEXlqlXKiFk8feVYB8obEjz7YYq1XhO9q7JUmkSs0WGIH4xU0XB1
kPC8T8H+5E//84poYSFHE4pA+Ff68UANP2/EuFJWMjegiefnOr8aM42OAcUkjEWSlautIIX8
oD2GizwQYjWdDDjEonbuMKFP6rY2xGI3PSLI3IVU2opb0/itNhQui3WRxafloJqTlriY8m8+
qSLr2HGftbBlbyzVWB8o//aW0H0LMabjkIvrm7Zmh2vcCxiSxGBwYASuSYXGuQiKAgGptUs1
XJLZuzGCA9owggPWAgEBMIGTMIGMMQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20g
THRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzE4MDYG
A1UEAxMvU3RhcnRDb20gQ2xhc3MgMiBQcmltYXJ5IEludGVybWVkaWF0ZSBDbGllbnQgQ0EC
Ah5jMAkGBSsOAwIaBQCgggIbMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcN
AQkFMQ8XDTEyMTIyODE1MzgyNFowIwYJKoZIhvcNAQkEMRYEFGKo//VshrRwXhkp8IAxI2Ra
zitjMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAKBggqhkiG
9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYIKoZIhvcN
AwICASgwgaQGCSsGAQQBgjcQBDGBljCBkzCBjDELMAkGA1UEBhMCSUwxFjAUBgNVBAoTDVN0
YXJ0Q29tIEx0ZC4xKzApBgNVBAsTIlNlY3VyZSBEaWdpdGFsIENlcnRpZmljYXRlIFNpZ25p
bmcxODA2BgNVBAMTL1N0YXJ0Q29tIENsYXNzIDIgUHJpbWFyeSBJbnRlcm1lZGlhdGUgQ2xp
ZW50IENBAgIeYzCBpgYLKoZIhvcNAQkQAgsxgZaggZMwgYwxCzAJBgNVBAYTAklMMRYwFAYD
VQQKEw1TdGFydENvbSBMdGQuMSswKQYDVQQLEyJTZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0
ZSBTaWduaW5nMTgwNgYDVQQDEy9TdGFydENvbSBDbGFzcyAyIFByaW1hcnkgSW50ZXJtZWRp
YXRlIENsaWVudCBDQQICHmMwDQYJKoZIhvcNAQEBBQAEggEAq5SlbNpvk5NW7n7nB8WpQ9O3
dH+MENrS/h9IBCALV51Vr5M/7PHFfsAHS5II9qe+encvWanZJUEPUmZ2QuP23yfgxjBp7nYO
SB2zfpxN8uApdcVkNaGHcFm+3eS/ih50I5aIYYrzyKUBh0aLt4Q7vXFfQLWazV6qMGwYFoYX
cD0gexCspvB73uwlvfPQK/YOIbYNqRu56LVM5aFOcyq/ZA/v1BVqPdA+MfNPp+Ug+t/bqcrd
9FdA1p2UXvT2l0nRS0Lwq/SpIBTj+rfGefvXcbcNy285WubMIxOfKURvrqU5NbENJrQv5baa
F/DBL/Ypz/FmSh/JKAkhjyBKxoSetAAAAAAAAA==
--------------ms040403010903000301060500--


--===============2736960270111390062==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2736960270111390062==--


From xen-devel-bounces@lists.xen.org Fri Dec 28 22:28:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 22:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToiPE-00064l-D7; Fri, 28 Dec 2012 22:27:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1ToiPC-00064g-N0
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 22:27:54 +0000
Received: from [85.158.143.99:21237] by server-1.bemta-4.messagelabs.com id
	B6/DC-28401-9EC1ED05; Fri, 28 Dec 2012 22:27:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1356733673!31087756!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzNzMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31072 invoked from network); 28 Dec 2012 22:27:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 22:27:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,372,1355097600"; 
   d="scan'208";a="384553"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Dec 2012 22:27:52 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 28 Dec 2012 22:27:52 +0000
Message-ID: <50DE1CE5.1000909@citrix.com>
Date: Fri, 28 Dec 2012 22:27:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50DDADDD.8070806@gmail.com>
In-Reply-To: <50DDADDD.8070806@gmail.com>
Subject: Re: [Xen-devel] hvm_emulate_one() usage
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/12/2012 14:34, Razvan Cojocaru wrote:
> Hello,
>
> I have a dom0 userspace application that receives mem_events. Mem_events 
> are being received if a page fault occurs, and until I clear the page 
> access rights I keep receiving the event in a loop. If I do clear the 
> page access rights, I will no longer receive mem_events for said page.
>
> What I thought I'd do was to add a new flag to the mem_event response 
> (MEM_EVENT_FLAG_EMULATE_WRITE), and have this code execute in 
> p2m_mem_access_resume() in xen/arch/x86/mm/p2m.c:
>
> mem_event_get_response(d, &rsp);
>
> if ( rsp.flags & MEM_EVENT_FLAG_EMULATE_WRITE )
> {
>      struct hvm_emulate_ctxt ctx[1] = {};
>      struct vcpu *current_vcpu = current;
>
>      set_current(d->vcpu[rsp.vcpu_id]);

Not that I can help you with your problem specifically, but
set_current() here ...

>
>      hvm_emulate_prepare(ctx, guest_cpu_user_regs());
>      hvm_emulate_one(ctx);
>
>      set_current(current_vcpu);

and here are absolutely wrong and will cause bad things to happen. (As
demonstrated by the crash below)

set_current() is only for use with scheduling, and sets which vcpu is
"current" on this pcpu.  As the code currently stands, there is a
thundering great race condition where this particular vcpu might be
current on 2 pcpus at once.

Other than the above, which will certainly break the scheduling code,
"current" is used everywhere in the Xen code, so your call to
hvm_emulate_prepare is using the real "current" vcpu's registers, with
information from the wrong "current" vcpu, including the cs and ss
segment registers, which are then going to be interpreted incorrectly as
they will be used with the wrong vmcs/gdt.

By this point, bets are certainly on that stuff will break.


Can you describe exactly what behaviour you are attempting to achieve
with this?  It seems to me that you want to step a paused HVM vcpu
forward by one instruction, triggered by a hypercall from dom0?

~Andrew

> }
>
> The code is supposed to go past the write instruction (without lifting 
> the page access restrictions). What it does seem to achieve is this:
>
> (XEN) ----[ Xen-4.1.2  x86_64  debug=n  Not tainted ]----
> (XEN) CPU:    6
> (XEN) RIP:    e008:[<ffff82c4801bf4ea>] vmx_get_interrupt_shadow+0xa/0x10
> (XEN) RFLAGS: 0000000000010203   CONTEXT: hypervisor
> (XEN) rax: 0000000000004824   rbx: ffff83013c0c7ba0   rcx: 0000000000000008
> (XEN) rdx: 0000000000000005   rsi: ffff83013c0c7f18   rdi: ffff8300bfca8000
> (XEN) rbp: ffff83013c0c7f18   rsp: ffff83013c0c7b50   r8:  0000000000000002
> (XEN) r9:  0000000000000002   r10: ffff82c48020af40   r11: 0000000000000282
> (XEN) r12: ffff8300bfff2000   r13: ffff88012b478b18   r14: 00007fffd669c4c0
> (XEN) r15: ffff83013c0c7e48   cr0: 0000000080050033   cr4: 00000000000026f0
> (XEN) cr3: 000000005d6c4000   cr2: 000000000221e538
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff83013c0c7b50:
> (XEN)    ffff82c4801a1a91 ffff83013f986000 ffff83013f986000 ffff83013c0c7f18
> (XEN)    ffff82c4801ce0e1 0000000500050000 000000000003f31a 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 ffff83013f986000 ffff8300bfff2000 ffff83013c0c7e48
> (XEN)    ffff82c4801d1d81 ffff8300bfcac000 ffff82c4801d05c5 ffff83013c0c7f18
> (XEN)    ffff82c4801a1447 ffff8300bfcac000 0000000000d7e004 0000000000d7e004
> (XEN)    ffff83013c0c7e48 ffff88012b478b18 00007fffd669c4c0 ffff83013c0c7e48
> (XEN)    ffff82c48014eb79 0000000000000000 000000000005d6f9 ffff82f600badf20
> (XEN)    0000000000000000 4000000000000000 ffff82f600badf20 0000000000000000
> (XEN)    ffff88012fc0b928 0000000000000001 ffff82c48016bc4b ffff82f600badf20
> (XEN)    ffff82c48016c0b8 ffff83013c0ac000 ffff83013c0ac000 ffff82f600bb1940
> (XEN)    000000000000000f ffff83013c0c7f18 ffff83013c0ac000 ffff82f600bb1940
> (XEN)    fffffffffffffff3 0000000000d7e004 ffff83013c0c7e48 ffff88012b478b18
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4801bf4ea>] vmx_get_interrupt_shadow+0xa/0x10
> (XEN)    [<ffff82c4801a1a91>] hvm_emulate_prepare+0x31/0x80
> (XEN)    [<ffff82c4801ce0e1>] p2m_mem_access_resume+0xe1/0x120
> (XEN)    [<ffff82c4801d1d81>] mem_access_domctl+0x21/0x30
> (XEN)    [<ffff82c4801d05c5>] mem_event_domctl+0x295/0x3b0
> (XEN)    [<ffff82c4801a1447>] hvmemul_do_pio+0x27/0x30
> (XEN)    [<ffff82c48014eb79>] arch_do_domctl+0x2e9/0x28a0
> (XEN)    [<ffff82c48016bc4b>] get_page_type+0xb/0x20
> (XEN)    [<ffff82c48016c0b8>] get_page_and_type_from_pagenr+0x78/0xe0
> (XEN)    [<ffff82c4801025bb>] do_domctl+0xfb/0x10b0
> (XEN)    [<ffff82c4801f2fa6>] ept_get_entry+0x136/0x250
> (XEN)    [<ffff82c480180965>] copy_to_user+0x25/0x70
> (XEN)    [<ffff82c4801f8778>] syscall_enter+0x88/0x8d
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 6:
> (XEN) FATAL TRAP: vector = 6 (invalid opcode)
> (XEN) ****************************************
>
> I could find no documentation on either the hvm_*() or the cpu-related 
> functions. Obviously the hvm_emulate_prepare() call crashes the 
> hypervisor, most likely because of the guest_cpu_user_regs() parameter, 
> but "regs" is not being passed to p2m_mem_access_resume() (like it is 
> being passed to p2m_mem_access_check()). I would appreciate your help in 
> figuring out how to implement this.
>
> Thanks, and happy holidays,
> Razvan Cojocaru
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 22:28:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 22:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1ToiPE-00064l-D7; Fri, 28 Dec 2012 22:27:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1ToiPC-00064g-N0
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 22:27:54 +0000
Received: from [85.158.143.99:21237] by server-1.bemta-4.messagelabs.com id
	B6/DC-28401-9EC1ED05; Fri, 28 Dec 2012 22:27:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1356733673!31087756!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzNzMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31072 invoked from network); 28 Dec 2012 22:27:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 22:27:53 -0000
X-IronPort-AV: E=Sophos;i="4.84,372,1355097600"; 
   d="scan'208";a="384553"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Dec 2012 22:27:52 +0000
Received: from [10.30.249.68] (10.30.249.68) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Fri, 28 Dec 2012 22:27:52 +0000
Message-ID: <50DE1CE5.1000909@citrix.com>
Date: Fri, 28 Dec 2012 22:27:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <50DDADDD.8070806@gmail.com>
In-Reply-To: <50DDADDD.8070806@gmail.com>
Subject: Re: [Xen-devel] hvm_emulate_one() usage
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/12/2012 14:34, Razvan Cojocaru wrote:
> Hello,
>
> I have a dom0 userspace application that receives mem_events. Mem_events 
> are being received if a page fault occurs, and until I clear the page 
> access rights I keep receiving the event in a loop. If I do clear the 
> page access rights, I will no longer receive mem_events for said page.
>
> What I thought I'd do was to add a new flag to the mem_event response 
> (MEM_EVENT_FLAG_EMULATE_WRITE), and have this code execute in 
> p2m_mem_access_resume() in xen/arch/x86/mm/p2m.c:
>
> mem_event_get_response(d, &rsp);
>
> if ( rsp.flags & MEM_EVENT_FLAG_EMULATE_WRITE )
> {
>      struct hvm_emulate_ctxt ctx[1] = {};
>      struct vcpu *current_vcpu = current;
>
>      set_current(d->vcpu[rsp.vcpu_id]);

Not that I can help you with your problem specifically, but
set_current() here ...

>
>      hvm_emulate_prepare(ctx, guest_cpu_user_regs());
>      hvm_emulate_one(ctx);
>
>      set_current(current_vcpu);

and here are absolutely wrong and will cause bad things to happen. (As
demonstrated by the crash below)

set_current() is only for use with scheduling, and sets which vcpu is
"current" on this pcpu.  As the code currently stands, there is a
thundering great race condition where this particular vcpu might be
current on 2 pcpus at once.

Other than the above, which will certainly break the scheduling code:
"current" is used everywhere in the Xen code, so your call to
hvm_emulate_prepare() is using the real "current" vcpu's registers,
with information from the wrong "current" vcpu, including the cs and
ss segment registers, which will then be interpreted incorrectly as
they will be looked up against the wrong VMCS/GDT.

By this point, it is a safe bet that things will break.


Can you describe exactly what behaviour you are attempting to achieve
with this?  It seems to me that you are wanting to step a paused HVM
vcpu on by one instruction based off a hypercall from dom0 ?

~Andrew

> }
>
> The code is supposed to go past the write instruction (without lifting 
> the page access restrictions). What it does seem to achieve is this:
>
> (XEN) ----[ Xen-4.1.2  x86_64  debug=n  Not tainted ]----
> (XEN) CPU:    6
> (XEN) RIP:    e008:[<ffff82c4801bf4ea>] vmx_get_interrupt_shadow+0xa/0x10
> (XEN) RFLAGS: 0000000000010203   CONTEXT: hypervisor
> (XEN) rax: 0000000000004824   rbx: ffff83013c0c7ba0   rcx: 0000000000000008
> (XEN) rdx: 0000000000000005   rsi: ffff83013c0c7f18   rdi: ffff8300bfca8000
> (XEN) rbp: ffff83013c0c7f18   rsp: ffff83013c0c7b50   r8:  0000000000000002
> (XEN) r9:  0000000000000002   r10: ffff82c48020af40   r11: 0000000000000282
> (XEN) r12: ffff8300bfff2000   r13: ffff88012b478b18   r14: 00007fffd669c4c0
> (XEN) r15: ffff83013c0c7e48   cr0: 0000000080050033   cr4: 00000000000026f0
> (XEN) cr3: 000000005d6c4000   cr2: 000000000221e538
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff83013c0c7b50:
> (XEN)    ffff82c4801a1a91 ffff83013f986000 ffff83013f986000 ffff83013c0c7f18
> (XEN)    ffff82c4801ce0e1 0000000500050000 000000000003f31a 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 ffff83013f986000 ffff8300bfff2000 ffff83013c0c7e48
> (XEN)    ffff82c4801d1d81 ffff8300bfcac000 ffff82c4801d05c5 ffff83013c0c7f18
> (XEN)    ffff82c4801a1447 ffff8300bfcac000 0000000000d7e004 0000000000d7e004
> (XEN)    ffff83013c0c7e48 ffff88012b478b18 00007fffd669c4c0 ffff83013c0c7e48
> (XEN)    ffff82c48014eb79 0000000000000000 000000000005d6f9 ffff82f600badf20
> (XEN)    0000000000000000 4000000000000000 ffff82f600badf20 0000000000000000
> (XEN)    ffff88012fc0b928 0000000000000001 ffff82c48016bc4b ffff82f600badf20
> (XEN)    ffff82c48016c0b8 ffff83013c0ac000 ffff83013c0ac000 ffff82f600bb1940
> (XEN)    000000000000000f ffff83013c0c7f18 ffff83013c0ac000 ffff82f600bb1940
> (XEN)    fffffffffffffff3 0000000000d7e004 ffff83013c0c7e48 ffff88012b478b18
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4801bf4ea>] vmx_get_interrupt_shadow+0xa/0x10
> (XEN)    [<ffff82c4801a1a91>] hvm_emulate_prepare+0x31/0x80
> (XEN)    [<ffff82c4801ce0e1>] p2m_mem_access_resume+0xe1/0x120
> (XEN)    [<ffff82c4801d1d81>] mem_access_domctl+0x21/0x30
> (XEN)    [<ffff82c4801d05c5>] mem_event_domctl+0x295/0x3b0
> (XEN)    [<ffff82c4801a1447>] hvmemul_do_pio+0x27/0x30
> (XEN)    [<ffff82c48014eb79>] arch_do_domctl+0x2e9/0x28a0
> (XEN)    [<ffff82c48016bc4b>] get_page_type+0xb/0x20
> (XEN)    [<ffff82c48016c0b8>] get_page_and_type_from_pagenr+0x78/0xe0
> (XEN)    [<ffff82c4801025bb>] do_domctl+0xfb/0x10b0
> (XEN)    [<ffff82c4801f2fa6>] ept_get_entry+0x136/0x250
> (XEN)    [<ffff82c480180965>] copy_to_user+0x25/0x70
> (XEN)    [<ffff82c4801f8778>] syscall_enter+0x88/0x8d
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 6:
> (XEN) FATAL TRAP: vector = 6 (invalid opcode)
> (XEN) ****************************************
>
> I could find no documentation on either the hvm_*(), or the cpu-related 
> functions. Obviously the hvm_emulate_prepare() call crashes the 
> hypervisor, most likely because of the guest_cpu_user_regs() parameter, 
> but "regs" is not being passed to p2m_mem_access_resume() (like it is 
> being passed to p2m_mem_access_check()). I would appreciate your help in 
> figuring out how to implement this.
>
> Thanks, and happy holidays,
> Razvan Cojocaru
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Dec 28 23:30:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Dec 2012 23:30:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TojN2-0006df-Br; Fri, 28 Dec 2012 23:29:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rzvncj@gmail.com>) id 1TojN1-0006da-G2
	for xen-devel@lists.xen.org; Fri, 28 Dec 2012 23:29:43 +0000
Received: from [85.158.143.99:30117] by server-2.bemta-4.messagelabs.com id
	F1/92-30861-66B2ED05; Fri, 28 Dec 2012 23:29:42 +0000
X-Env-Sender: rzvncj@gmail.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1356737381!30184162!1
X-Originating-IP: [74.125.83.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28881 invoked from network); 28 Dec 2012 23:29:42 -0000
Received: from mail-ee0-f52.google.com (HELO mail-ee0-f52.google.com)
	(74.125.83.52)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Dec 2012 23:29:42 -0000
Received: by mail-ee0-f52.google.com with SMTP id d17so5335303eek.39
	for <xen-devel@lists.xen.org>; Fri, 28 Dec 2012 15:29:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=x-received:message-id:date:from:user-agent:mime-version:to:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=vPEN90SLJTlGtQAM0zJ+GlbhxciFL/3JDYx2/7w9Q1I=;
	b=CYKhuMOCAH2ogZGVXUxnZPSaXYzubuqthY+wSMxoW2rTUp7fo+SYNwM38+E+pwUleJ
	IF8CPyqPCcg1rsUhCuocf/tYS24u9Z5w/zkw5aDLmE5zmS7y+cVHtF82YGWQnDYAwNwj
	FVuTdXvZhSQW7V3unrUIthhe62KTh7VtW1lPnsYU7TtHfzPxMMj7qlLP/+d6wf17k1Pb
	v4c6b1QiQFTXd2f+GlM4jD4tu7JjyYOwiDDivIQEn+KXNRAIYcsyNhMh1uOdvBMBWlVk
	zczqaz4eS6/yn8rj42xbJ/+bQLYYEZXQXmyUgduKK87MBd52lP2tdsVdzBGqnpIOK6Hp
	zklg==
X-Received: by 10.14.213.134 with SMTP id a6mr89808401eep.45.1356737380905;
	Fri, 28 Dec 2012 15:29:40 -0800 (PST)
Received: from [192.168.228.100] ([188.25.168.24])
	by mx.google.com with ESMTPS id z8sm68625043eeo.11.2012.12.28.15.29.39
	(version=SSLv3 cipher=OTHER); Fri, 28 Dec 2012 15:29:40 -0800 (PST)
Message-ID: <50DE2B61.8050606@gmail.com>
Date: Sat, 29 Dec 2012 01:29:37 +0200
From: Razvan Cojocaru <rzvncj@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.11) Gecko/20121128 Thunderbird/10.0.11
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <50DDADDD.8070806@gmail.com> <50DE1CE5.1000909@citrix.com>
In-Reply-To: <50DE1CE5.1000909@citrix.com>
Subject: Re: [Xen-devel] hvm_emulate_one() usage
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, thanks for the reply!

> Not that I can help you with your problem specifically, but
> set_current() here ...
> 
>>
>>      hvm_emulate_prepare(ctx, guest_cpu_user_regs());
>>      hvm_emulate_one(ctx);
>>
>>      set_current(current_vcpu);
> 
> and here are absolutely wrong and will cause bad things to happen. (As
> demonstrated by the crash below)

Right.

> "current" is used everywhere in the Xen code, so your call to
> hvm_emulate_prepare() is using the real "current" vcpu's registers,
> with information from the wrong "current" vcpu, including the cs and
> ss segment registers, which will then be interpreted incorrectly as
> they will be looked up against the wrong VMCS/GDT.

I see, that's what I was trying to avoid with the set_current() call - I
had hoped that it would tell guest_cpu_user_regs() what vcpu to use.

That was my only hope, as in the context of p2m_mem_access_resume() I
don't have the "struct cpu_user_regs *regs" parameter that I have access
to in p2m_mem_access_check().

> Can you describe exactly what behaviour you are attempting to achieve
> with this?  It seems to me that you are wanting to step a paused HVM
> vcpu on by one instruction based off a hypercall from dom0 ?

That's basically it, yes. In the hypervisor, tell dom0 that a mem_event
happened (a write attempt happened on a rx page), and let dom0 decide if
the write should happen or not (without dom0 setting the page to rwx and
losing future events on that same page). If dom0 decides that the write
should go ahead, it should signal this with a special flag in the
response it puts in the mem_event ring buffer, and the hypervisor should
then step the paused vcpu by one instruction (the write instruction).

This does work if I step in p2m_mem_access_check() (where I have access
to the "regs" parameter), before putting the mem_event request in the
ring buffer (and without any set_current() funny business), but that's
not acceptable behaviour because then dom0 gets notified _after_ the
write, and it's important for the notification to occur before the write
(so that dom0 could stop the write from happening if it needs to).

Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 29 07:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Dec 2012 07:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Toqy8-0007ey-If; Sat, 29 Dec 2012 07:36:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Toqy6-0007et-PJ
	for xen-devel@lists.xensource.com; Sat, 29 Dec 2012 07:36:31 +0000
Received: from [193.109.254.147:50905] by server-13.bemta-14.messagelabs.com
	id 22/15-01725-E7D9ED05; Sat, 29 Dec 2012 07:36:30 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1356766586!1180258!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzNzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4280 invoked from network); 29 Dec 2012 07:36:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Dec 2012 07:36:27 -0000
X-IronPort-AV: E=Sophos;i="4.84,375,1355097600"; 
   d="scan'208";a="386069"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Dec 2012 07:36:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sat, 29 Dec 2012 07:36:26 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Toqy2-00073f-7c;
	Sat, 29 Dec 2012 07:36:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Toqy1-0004Co-Pk;
	Sat, 29 Dec 2012 07:36:26 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14812-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 29 Dec 2012 07:36:25 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14812: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14812 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14812/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install     fail pass in 14811

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14811
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14811

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 14811 never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  c4114a042410

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Dec 29 16:22:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Dec 2012 16:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TozA7-0006Ys-QK; Sat, 29 Dec 2012 16:21:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1TozA6-0006Yk-0C; Sat, 29 Dec 2012 16:21:26 +0000
Received: from [85.158.137.99:45240] by server-5.bemta-3.messagelabs.com id
	4F/82-04992-4881FD05; Sat, 29 Dec 2012 16:21:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-12.tower-217.messagelabs.com!1356798082!14968312!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUxNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24667 invoked from network); 29 Dec 2012 16:21:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Dec 2012 16:21:24 -0000
X-IronPort-AV: E=Sophos;i="4.84,377,1355097600"; 
   d="scan'208";a="2059121"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	29 Dec 2012 16:21:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Sat, 29 Dec 2012 11:21:21 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TozA1-0002M0-3J;
	Sat, 29 Dec 2012 16:21:21 +0000
Message-ID: <1356798079.2917.19.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Sat, 29 Dec 2012 16:21:19 +0000
In-Reply-To: <1355745382.14620.56.camel@zakaz.uk.xensource.com>
References: <CABR7Q=ocj_k4SAw5FW_Rr4AyfRDnq0TsCPz8kFGpSo8y4TbHhw@mail.gmail.com>
	<1354874630.31710.10.camel@zakaz.uk.xensource.com>
	<CABR7Q=oO8rZ9fMdscBkrNUS4a9O6r0eMYdTdCYw2qoJ3mWOaVw@mail.gmail.com>
	<CABR7Q=oaVnoTQo4Up7DYbm8xYaSRrN5LV5=ajW0QYrByUg8x+Q@mail.gmail.com>
	<CABR7Q=oKbP6Xc0O8bmL-YhFnGy3ZwwgAEz2vBW0St-uaqBjGtA@mail.gmail.com>
	<1355402216.10554.125.camel@zakaz.uk.xensource.com>
	<CABR7Q=oY0ZWQi_NKW8OFqvb69F1xyza5VKR6eqAap9QomzoKaw@mail.gmail.com>
	<CABR7Q=q+U5g=w0_wPg4tXSfqZRk6NG=-y3GXM=+0-X3prhMH+w@mail.gmail.com>
	<1355411952.10554.138.camel@zakaz.uk.xensource.com>
	<CABR7Q=ojDBJA=xgeXkhPfTbUCCnKFAtr7Ds_QnE1954TkJLJ3A@mail.gmail.com>
	<CABR7Q=qNoL2szAQ7h9+nfBSzACkA8htmWLTmnxg1t_mXpU6EBQ@mail.gmail.com>
	<1355412947.10554.147.camel@zakaz.uk.xensource.com>
	<CABR7Q=oDbVkmWy=FUB7Zhxrcfzhcuip1kicGRhtgNkgygaEYjw@mail.gmail.com>
	<1355745382.14620.56.camel@zakaz.uk.xensource.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: Paul Harvey <jhebus@googlemail.com>, xen-devel <xen-devel@lists.xen.org>,
	wei.liu2@citrix.com, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] 1000 Domains: Not able to access Domu
 via xm console from Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-17 at 11:56 +0000, Ian Campbell wrote:
> On Fri, 2012-12-14 at 13:06 +0000, Paul Harvey wrote:
> > Program received signal SIGABRT, Aborted.
> > 0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> > (gdb) bt
> > #0  0x00007fe588ca8425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> > #1  0x00007fe588cabb8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
> > #2  0x00007fe588ce639e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> > #3  0x00007fe588d7c807 in __fortify_fail () from /lib/x86_64-linux-gnu/libc.so.6
> > #4  0x00007fe588d7b700 in __chk_fail () from /lib/x86_64-linux-gnu/libc.so.6
> > #5  0x00007fe588d7c7be in __fdelt_warn () from /lib/x86_64-linux-gnu/libc.so.6
> > #6  0x0000000000403ca8 in handle_io () at daemon/io.c:1059
> > #7  0x00000000004021c5 in main (argc=2, argv=0x7fff58691d48) at daemon/main.c:166
> 
> daemon/io.c:1059 in 4.1.2 is:
>                                     FD_ISSET(xc_evtchn_fd(d->xce_handle),
>                                              &readfds))
>                                         handle_ring_read(d);
> 
> I rather suspect this is overrunning the readfds array.
> http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_select.h.html suggests this is sized by FD_SETSIZE. On my system that appears to be statically 1024 (at least, strace doesn't show a syscall to determine it in a simple test app, although grepping /usr/include suggests it might be configurable on some systems).
> 
> It doesn't seem likely that there will be a simple solution to this. We
> probably need to switch to something other than select(2). poll(2) seems
> to handle arbitrary numbers of file descriptors. epoll(7) would be nice
> (it supposedly scales better than poll) but is Linux specific. Another
> option might be to fork multiple worker processes (might be a good idea
> if xenconsole becomes a bottleneck).

libevent wraps the different event APIs and provides a consistent
interface across OSes, but I don't know whether adding libevent as a
Xen tools dependency is a good idea.
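Ian's suggested move from select(2) to poll(2) can be sketched roughly as below. This is an illustrative fragment only, not xenconsoled's actual code: the struct domain, its ring_fd field, and wait_for_rings() are hypothetical stand-ins for the daemon's real per-domain state, with the real handle_ring_read() call left as a comment. The key point is that poll() takes an array of pollfd entries, so it is not limited by FD_SETSIZE the way the fd_set bitmap is:

```c
/* Sketch only: a poll(2)-based wait loop that is not bounded by
 * FD_SETSIZE.  "struct domain" and "ring_fd" are hypothetical
 * stand-ins for xenconsoled's real per-domain state. */
#include <poll.h>
#include <stdlib.h>

struct domain {
    int ring_fd;    /* fd to watch for console ring activity */
};

/* Block until at least one domain's fd is readable.
 * Returns poll()'s result: >0 ready fds, 0 timeout, -1 error. */
int wait_for_rings(struct domain *doms, size_t ndoms)
{
    struct pollfd *pfds = calloc(ndoms, sizeof(*pfds));
    if (!pfds)
        return -1;

    for (size_t i = 0; i < ndoms; i++) {
        pfds[i].fd = doms[i].ring_fd;   /* any fd value is fine here,
                                           unlike FD_SET's 0..FD_SETSIZE-1 */
        pfds[i].events = POLLIN;
    }

    int ready = poll(pfds, ndoms, -1); /* -1: wait indefinitely */

    if (ready > 0) {
        for (size_t i = 0; i < ndoms; i++) {
            if (pfds[i].revents & POLLIN) {
                /* handle_ring_read(&doms[i]) would go here */
            }
        }
    }

    free(pfds);
    return ready;
}
```

Unlike fd_set, the pollfd array's size is chosen by the caller, so the 1024-domain ceiling disappears; the cost is that poll() still scans the whole array on each call, which is why epoll(7) (or libevent on top of it) scales better for very large domain counts.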

> It seems likely (based on a quick grep) that xenstored (both the C
> and OCaml variants) will suffer from the same issue.
> 

Yes, I ran a test and hit this limit in both Xenstored and Xenconsoled.

> I'm not sure why we have an evtchn handle per guest, other than this
> comment which suggests it was simply expedient rather than a good
> design:
> 	/* Opening evtchn independently for each console is a bit
> 	 * wasteful, but that's how the code is structured... */
> 	dom->xce_handle = xc_evtchn_open(NULL, 0);
> 	if (dom->xce_handle == NULL) {
> 		err = errno;
> 		goto out;
> 	}
> However this is just one open fd which scales with number of domains
> (the others are the pty related ones) so just fixing this would just buy
> a bit more time but not fix the underlying issue.
> 

Even if you work around this problem, you will still hit the Xenstore
limit, so the underlying issue has to be fixed.


Wei.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 30 08:03:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Dec 2012 08:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpDrZ-0002o9-C5; Sun, 30 Dec 2012 08:03:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TpDrX-0002o4-S4
	for xen-devel@lists.xensource.com; Sun, 30 Dec 2012 08:03:16 +0000
Received: from [85.158.143.99:46403] by server-2.bemta-4.messagelabs.com id
	62/7A-30861-345FFD05; Sun, 30 Dec 2012 08:03:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1356854594!26114510!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODM4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7279 invoked from network); 30 Dec 2012 08:03:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Dec 2012 08:03:14 -0000
X-IronPort-AV: E=Sophos;i="4.84,379,1355097600"; 
   d="scan'208";a="389516"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Dec 2012 08:03:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Sun, 30 Dec 2012 08:03:13 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TpDrV-0005fR-Tu;
	Sun, 30 Dec 2012 08:03:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TpDrV-0002yf-Mv;
	Sun, 30 Dec 2012 08:03:13 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14813-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 30 Dec 2012 08:03:13 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14813: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14813 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14813/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 14812
 test-amd64-amd64-qemut-win    7 windows-install             fail pass in 14812
 test-amd64-amd64-xl-qemut-winxpsp3 12 guest-localmigrate/x10 fail pass in 14812
 test-amd64-amd64-xl-qemuu-win7-amd64 7 windows-install fail in 14812 pass in 14813

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 12 guest-saverestore.2      fail blocked in 14812
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14812
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore     fail in 14812 like 14811

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 14812 never pass
 test-amd64-amd64-qemut-win   16 leak-check/check      fail in 14812 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 14812 never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  c4114a042410

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 14812 never pass
 test-amd64-amd64-qemut-win   16 leak-check/check      fail in 14812 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 14812 never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  c4114a042410

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Dec 30 12:09:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Dec 2012 12:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpHgf-0004Th-ON; Sun, 30 Dec 2012 12:08:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <genius.rsd@gmail.com>) id 1TpHge-0004Tc-7Y
	for xen-devel@lists.xen.org; Sun, 30 Dec 2012 12:08:16 +0000
Received: from [85.158.138.51:55586] by server-14.bemta-3.messagelabs.com id
	2D/A6-27443-FAE20E05; Sun, 30 Dec 2012 12:08:15 +0000
X-Env-Sender: genius.rsd@gmail.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1356869290!30458481!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: , surbl: (ASYNC_NO)
	c3VyYmxfcmVjaGVja19kZWxheTogMjAxMzg5MyAoYWJhbmRvbmVkOiB
	yb2hpdHNkYW1rb25kd2Fy\nLndvcmRwcmVzcy5jb20p\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14998 invoked from network); 30 Dec 2012 12:08:11 -0000
Received: from mail-ia0-f173.google.com (HELO mail-ia0-f173.google.com)
	(209.85.210.173)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Dec 2012 12:08:11 -0000
Received: by mail-ia0-f173.google.com with SMTP id w21so9897335iac.4
	for <xen-devel@lists.xen.org>; Sun, 30 Dec 2012 04:08:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=bJlFncvL+n68d4nfFxAQdsH3Y+uvJ8MHTucJ2NO1qPw=;
	b=bEKHIJVZ/Cx446zuojj02YqPA0maHfWHV+LjvkQLEia7kLWE7h0UB39PDZHKE21bPN
	cdeYewkl3L1+z1iACWa1lM26h9PlKGcoQf6UTcxKI6IhtyEvat1kMsGxMO7bNYW1bl1t
	JS8UytdGQIwLjzIuEqUKu/1yuqqUZ2lcU5DSlXdUaw5X62uSLG8XZ5ELDG8O0jgal8Dn
	uQGB4zfqbiKi2iYVu+g6zcGP6nB5+zlPRK5hai1sJjFbbQbvLROctlbzxAqXj/za3U3y
	psi3C9EBrkRx+FvFIqr5Qzw268j1VaSLB2/27FB3cTlQM4MgKalmFD8VKGuCHXkMtktH
	vHHQ==
MIME-Version: 1.0
Received: by 10.50.36.164 with SMTP id r4mr33097357igj.57.1356869289994; Sun,
	30 Dec 2012 04:08:09 -0800 (PST)
Received: by 10.64.68.47 with HTTP; Sun, 30 Dec 2012 04:08:09 -0800 (PST)
In-Reply-To: <1356690956.2917.7.camel@iceland>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
	<1356611599.19238.13.camel@iceland>
	<CAHEEu857jjBLACY=MNKzqEkpGbjXw1cbfqy9AOyR5fVsuOmagg@mail.gmail.com>
	<1356690956.2917.7.camel@iceland>
Date: Sun, 30 Dec 2012 17:38:09 +0530
Message-ID: <CAHEEu860f4g7vZ0+hHXBtXe95J+7d33OzW9kJ2x5O2au6q3zHA@mail.gmail.com>
From: Rohit Damkondwar <genius.rsd@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4213290297787961755=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4213290297787961755==
Content-Type: multipart/alternative; boundary=14dae9340bc309686d04d210c0fc

--14dae9340bc309686d04d210c0fc
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Dec 28, 2012 at 4:05 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Fri, 2012-12-28 at 07:46 +0000, Rohit Damkondwar wrote:
> > On Thu, Dec 27, 2012 at 6:03 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:
> >         On Thu, 2012-12-27 at 08:46 +0000, Rohit Damkondwar wrote:
> >         > Hi all. I want to set bandwidth limits to a virtual
> >         interface
> >         > dynamically (without restarting the virtual machine). I have been
> >         browsing
> >         > xen source code 4.1.3. I looked into libxen
> >         folder(xen_vif.c) and
> >         > hotplug(linux) folder. Earlier in xen 3.0, the xenvif structure
> >         > (driver/net/xen-netback/
> >         > interface.c + common.h) and tx_add_credit function could be
> >         used to
> >         > modify rate limits. I want to change bandwidth limits
> >         dynamically of a
> >         > virtual interface in xen 4.1.3. Where should I look for in
> >         xen 4.1.3?
> >         >
> >         > Please help.
> >         >
> >
> >
> >         Xen vif has a parameter called 'rate', I don't know whether it
> >         suits
> >         you.
> >
> > The rate parameter only restricts one way traffic(probably only
> > outgoing).
> >
>
> Yes, you're right. So you need two way shaping.
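
For reference, the per-vif 'rate' setting mentioned above is normally given
statically in the guest's xm/xl domain configuration. A minimal sketch
follows; the MAC address, bridge name, and 10Mb/s figure are placeholders,
and the optional '@50ms' suffix is the credit replenishment interval:

```shell
# Sketch: append a rate-capped vif entry to a hypothetical domain config.
# All values here (MAC, bridge, rate, interval) are placeholders.
cat >> guest.cfg <<'EOF'
vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, rate=10Mb/s@50ms' ]
EOF
# The cap takes effect when the guest is started:
#   xl create guest.cfg
```

As noted in the thread, this only limits the guest's transmit path, which
is why two-way shaping needs something more.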
>
> >
> >         Also, you can have a look at external tool like tc(8). My
> >         vague thought
> >         is that Vif is just another interface in Dom0, tc(8) should be
> >         able to
> >         traffic-shape Vif.
> >
> >
> > Don't you think using an external tool may decrease efficiency? If
> > xen itself had the capabilities (provided by the tc tool), wouldn't it
> > be more efficient?
> >
>
> Do you see significant performance degradation when using tc(8) or any
> other tools alike? If so, do report with figures, it can help us
> improve.
>

It doesn't seem like tc gives bad performance. But I cannot say until
statistics prove that tc is better.
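
To make the tc(8) suggestion above concrete, here is a rough sketch of
shaping both directions on a vif from Dom0. The interface name vif1.0 and
the 10mbit/32k figures are placeholders; the commands need root and a real
vif to run:

```shell
# Hypothetical vif backend interface in Dom0; adjust to the real name.
VIF=vif1.0

# Egress on the vif = traffic Dom0 sends into the guest (guest receive
# path): a simple token-bucket filter.
tc qdisc add dev "$VIF" root tbf rate 10mbit burst 32kbit latency 50ms

# Ingress on the vif = traffic the guest transmits: police and drop
# packets over the rate.
tc qdisc add dev "$VIF" handle ffff: ingress
tc filter add dev "$VIF" parent ffff: protocol ip u32 \
    match u32 0 0 police rate 10mbit burst 32k drop flowid :1

# To undo:
#   tc qdisc del dev "$VIF" root
#   tc qdisc del dev "$VIF" ingress
```

The tbf qdisc can later be adjusted with 'tc qdisc change', so the limits
are tunable without restarting the guest.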

>
> > I have used this tool. It is good and serves my purpose. But wouldn't
> > it be better to include bandwidth-limiting capabilities in xen
> > itself? I am not sure about this. Currently I am just browsing through
> > the source code. What do you think?
> >
>
> TBH I'm not sure about this either. Again, comparisons and analysis of
> bottleneck would be helpful.
>
> >
> > I have seen the function "set_qos_algorithm_type" and parameters
> > (qos/algorithm type,qos/algorithm params, qos/supported algorithms) in
> > vif class. Would they be useful ? Are they available only for XEN
> > Enterprise ?
> >
>
> Do you see those in libxen source code? I don't think they are in use
> now.
>
I didn't know that. Then which library should I look into? I could find
the vif class in the libxen folder, so I thought it should be used. I read
that xl is not mature enough in xen 4.1.3. So which library should I look
into? Please help.

>
>
> Wei.
>
>
>


-- 
Rohit S Damkondwar
B.Tech Computer Engineering
CoEP
MyBlog <http://www.rohitsdamkondwar.wordpress.com>

--14dae9340bc309686d04d210c0fc--


--===============4213290297787961755==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4213290297787961755==--


From xen-devel-bounces@lists.xen.org Mon Dec 31 02:23:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 02:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpV1R-0007fT-KX; Mon, 31 Dec 2012 02:22:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1TpV1Q-0007fO-Gi
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 02:22:36 +0000
Received: from [85.158.143.99:13042] by server-3.bemta-4.messagelabs.com id
	35/C0-18211-BE6F0E05; Mon, 31 Dec 2012 02:22:35 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1356920554!30688469!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjQxNjY1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1000 invoked from network); 31 Dec 2012 02:22:34 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-3.tower-216.messagelabs.com with SMTP;
	31 Dec 2012 02:22:34 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 30 Dec 2012 18:22:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.84,383,1355126400"; d="scan'208";a="186289717"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 30 Dec 2012 18:22:32 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 30 Dec 2012 18:22:31 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 30 Dec 2012 18:22:30 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.88]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.9]) with mapi id
	14.01.0355.002; Mon, 31 Dec 2012 10:22:26 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Wu, GabrielX"
	<gabrielx.wu@intel.com>
Thread-Topic: [Xen-devel] VMX status report. Xen:26127 & Dom0:3.6.5
Thread-Index: AQHN38IHl51MigMvEkOQmFGTO3Io0ZgyOgYg
Date: Mon, 31 Dec 2012 02:22:25 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A1023ADFF@SHSMSX101.ccr.corp.intel.com>
References: <E4558C0C96688748837EB1B05BEED75A0FD83B81@SHSMSX102.ccr.corp.intel.com>
	<20121221212743.GB521@phenom.dumpdata.com>
In-Reply-To: <20121221212743.GB521@phenom.dumpdata.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Liu, RongrongX" <rongrongx.liu@intel.com>, "Liu,
	SongtaoX" <songtaox.liu@intel.com>, "Zhou, Chao" <chao.zhou@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] VMX status report. Xen:26127 & Dom0:3.6.5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Saturday, December 22, 2012 5:28 AM
> To: Wu, GabrielX
> Cc: xen-devel@lists.xen.org; Ren, Yongjie; Liu, RongrongX; Liu, SongtaoX;
> Zhou, Chao
> Subject: Re: [Xen-devel] VMX status report. Xen:26127 & Dom0:3.6.5
> 
> On Mon, Nov 12, 2012 at 02:16:44AM +0000, Wu, GabrielX wrote:
> > Hi all,
> >
> > This is a test report for the xen-unstable tree; no new issues were
> > found and no issues were fixed.
> >
> > Version Info:
> >
> > =================================================================
> > xen-changeset:   26127:bd78e5630a5b
> > Dom0:          linux.git  3.6.5
> >
> > =================================================================
> >
> > New issue(0)
> > ==============
> >
> > Fixed issue(0)
> > ==============
> >
> > Old issues(9)
> > ==============
> > 1. [ACPI] Dom0 can't resume from S3 sleep
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> > 2. [XL]"xl vcpu-set" causes dom0 crash or panic
> 
> This should be fixed in v3.8.
> 
Are the fixes already in Linus's linux.git tree, or not?
We ran a test with that tree and found there is no crash now, but increasing the number of Dom0 vCPUs still didn't work.
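For reference, the check we ran is along these lines (a hypothetical sketch: the domain ID and target vCPU count are illustrative, and `xl` of course requires a Xen host, so the sketch guards for its absence):

```shell
# Hypothetical reproduction sketch: try to bring more Dom0 vCPUs online.
# "xl vcpu-set" can only go up to the maximum vCPU count the domain booted with.
if command -v xl >/dev/null 2>&1; then
    xl vcpu-list 0       # current vCPU placement for Dom0 (domain 0)
    xl vcpu-set 0 8      # request 8 online vCPUs for Dom0
    xl vcpu-list 0       # check whether the extra vCPUs actually came online
else
    echo "xl not available on this host; run on a Xen Dom0"
fi
```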

> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> > 3. [VT-d] Fail to detach a NIC from a guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> > 4. Sometimes Xen panics on ia32pae SandyBridge when restoring a guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> > 5. After detaching a VF from a guest, shutting down the guest is very slow
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> > 6. Poor performance when doing guest save/restore and migration with
> >    a Linux 3.x Dom0
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
> 
> And this one as well.
Yes, there should be no performance loss now in the migration case.
We have already marked this bug as 'fixed' with a 3.6.7 Dom0 kernel.

> > 7. 'xl vcpu-set' can't decrease the vCPU number of an HVM guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
> 
> And this one too.
> > 8. Dom0 cannot be shut down before PCI device detachment from a guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> 
> > 9. xl pci-list shows one PCI device (PF or VF) can be assigned to two
> >    different guests
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1834
> >
> > Best Regards,
> > Ronghui Wu(Gabriel)
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 03:10:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 03:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpVlW-0008SB-LX; Mon, 31 Dec 2012 03:10:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1TpVlU-0008S6-Sc
	for xen-devel@lists.xensource.com; Mon, 31 Dec 2012 03:10:13 +0000
Received: from [193.109.254.147:16559] by server-11.bemta-14.messagelabs.com
	id 96/57-02659-41201E05; Mon, 31 Dec 2012 03:10:12 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1356923410!11639087!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29399 invoked from network); 31 Dec 2012 03:10:10 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 03:10:10 -0000
Received: by mail-ea0-f179.google.com with SMTP id i12so4932683eaa.10
	for <xen-devel@lists.xensource.com>;
	Sun, 30 Dec 2012 19:10:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qsNLarHhSLhvEQ7fWiMKDh+N3wCh0M90Ao9yjFccLPU=;
	b=EfOeHhQ59ZzLBxiF850EOg80T0wxeuPs3kwaxqSrZj/nWgFoWaL5MHQcILpm3X9nHJ
	8qk12JKqf9Odnt60KqC1RIjmskH13mGpIIAOk9+QD4piLyPs6mozXBSHoygOFvpxOsN6
	xZCEs/0Sxsrzv/092tmIaRPiPmLiEjwCGLjGwgRNJ7hDk8VNUlKyP58lit2mBvWvV1rT
	BajD4MsynAKedjJtzGWRx5MV9/5IXweqFcGK1hOCuiWV2gJ7ZD4e1BgLYwfiCXeiTjOL
	qElDDFML18mO6WgpRyGehjCe3SsCp1wBeFkV/GC14DruJMNDjiePSuPlxHin5DLnYUDD
	LiqQ==
MIME-Version: 1.0
Received: by 10.14.173.65 with SMTP id u41mr106955640eel.13.1356923409750;
	Sun, 30 Dec 2012 19:10:09 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Sun, 30 Dec 2012 19:10:09 -0800 (PST)
In-Reply-To: <1356691575.2917.13.camel@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
	<CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
	<1356612077.19238.20.camel@iceland>
	<CA+ePHTC4Fq0sqb5ie+YBiC5Ft_q6O2PkJxYqGXxbBbHSHxfbOA@mail.gmail.com>
	<1356691575.2917.13.camel@iceland>
Date: Mon, 31 Dec 2012 11:10:09 +0800
Message-ID: <CA+ePHTCaJRCopiM-WoWYDBhrNrcmkNz7rcUmjfXtcHcTt=niqQ@mail.gmail.com>
From: =?GB2312?B?wu3A2g==?= <aware.why@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel]
	=?gb2312?b?W0JVR106IHdoZW4gdXNpbmcgYHhsIHJlc3RvcmVg?=
	=?gb2312?b?o6x4Y19ldnRjaG5fYWxsb2NfdW5ib3VuZCB3aWxsIHJhaXNlIHRo?=
	=?gb2312?b?aXMgZXJyb3I=?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6054993727468398706=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6054993727468398706==
Content-Type: multipart/alternative; boundary=047d7b6226b0d35ba704d21d5966

--047d7b6226b0d35ba704d21d5966
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

On Fri, Dec 28, 2012 at 6:46 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Fri, 2012-12-28 at 03:13 +0000, 马雳 wrote:
> >
> >
> > On Thu, Dec 27, 2012 at 8:41 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:
> >         On Thu, 2012-12-27 at 02:12 +0000, 马雳 wrote:
> >         >
> >         > I got it, but the error `xc: error: do_evtchn_op:
> >         > HYPERVISOR_event_channel_op failed: -1 (3 = No such
> >         > process): Internal error.` said "no such process", and that
> >         > system error description didn't seem to have anything to do
> >         > with the following lines, which raised it:
> >         >  85    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
> >         >  86    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, 0);
> >
> >
> >         The error code -1 is -EPERM, which means you don't have
> >         permission to issue this operation. I don't think this is a
> >         bug. There might be some problems with your setup.
> >
> >         If you need any pointers for reading the source code, I will
> >         be happy to help.
> >
> >
> >         Wei.
> >
> >     Thanks for your kindness!
> >
> >     I looked into the logging functions. In this case, `3 = No such
> > process` came from `errno`, and `HYPERVISOR_event_channel_op
> > failed: -1` came from a hypervisor-level error
> > (src/xen/common/event_channel.c).
> > In my opinion, that is to say, the error number -1 was caused by the
> > hypervisor; but what caused the error number 3? Dom0?
> > Do both error numbers refer to the descriptions defined in errno.h,
> > or does the hypervisor have its own error descriptions?
> >
>
> I think the two files are mostly the same, but to be sure you need to
> look at the source files in both Linux and Xen. You should start from
> the hypervisor level and find out why it returns -EPERM. Being the root
> user in Dom0 has nothing to do with privilege at the hypervisor level.
>
>
> Wei.
>
The scenario: there are more than 10 processes, and about every 120
seconds each process calls `xl restore` to start a VM in which a virus
sample runs so that its behaviour can be observed.
After running for a long time, say 5 or 15 days, the Xen server hosting
these processes raises this error with a certain probability.
Maybe it has something to do with the hypervisor's stability.
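To untangle the two numbers in that log line, here is a minimal Python sketch (an illustration only, assuming a Linux Dom0; Xen's public error numbering mirrors the classic Linux errno values):

```python
import errno
import os

# The log line "HYPERVISOR_event_channel_op failed: -1 (3 = No such process)"
# carries two different numbers:
#   -1 is the raw hypercall return value (which Wei reads above as -EPERM),
#    3 is the saved errno value that libxc formatted into the message.
assert errno.ESRCH == 3
assert os.strerror(errno.ESRCH) == "No such process"

# -EPERM returned at the hypervisor level corresponds to errno 1 in Dom0 tools.
assert errno.EPERM == 1
print(os.strerror(errno.EPERM))
```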

--047d7b6226b0d35ba704d21d5966--


--===============6054993727468398706==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6054993727468398706==--


From xen-devel-bounces@lists.xen.org Mon Dec 31 03:10:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 03:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpVlW-0008SB-LX; Mon, 31 Dec 2012 03:10:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1TpVlU-0008S6-Sc
	for xen-devel@lists.xensource.com; Mon, 31 Dec 2012 03:10:13 +0000
Received: from [193.109.254.147:16559] by server-11.bemta-14.messagelabs.com
	id 96/57-02659-41201E05; Mon, 31 Dec 2012 03:10:12 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1356923410!11639087!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29399 invoked from network); 31 Dec 2012 03:10:10 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 03:10:10 -0000
Received: by mail-ea0-f179.google.com with SMTP id i12so4932683eaa.10
	for <xen-devel@lists.xensource.com>;
	Sun, 30 Dec 2012 19:10:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qsNLarHhSLhvEQ7fWiMKDh+N3wCh0M90Ao9yjFccLPU=;
	b=EfOeHhQ59ZzLBxiF850EOg80T0wxeuPs3kwaxqSrZj/nWgFoWaL5MHQcILpm3X9nHJ
	8qk12JKqf9Odnt60KqC1RIjmskH13mGpIIAOk9+QD4piLyPs6mozXBSHoygOFvpxOsN6
	xZCEs/0Sxsrzv/092tmIaRPiPmLiEjwCGLjGwgRNJ7hDk8VNUlKyP58lit2mBvWvV1rT
	BajD4MsynAKedjJtzGWRx5MV9/5IXweqFcGK1hOCuiWV2gJ7ZD4e1BgLYwfiCXeiTjOL
	qElDDFML18mO6WgpRyGehjCe3SsCp1wBeFkV/GC14DruJMNDjiePSuPlxHin5DLnYUDD
	LiqQ==
MIME-Version: 1.0
Received: by 10.14.173.65 with SMTP id u41mr106955640eel.13.1356923409750;
	Sun, 30 Dec 2012 19:10:09 -0800 (PST)
Received: by 10.223.36.65 with HTTP; Sun, 30 Dec 2012 19:10:09 -0800 (PST)
In-Reply-To: <1356691575.2917.13.camel@iceland>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
	<CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
	<1356612077.19238.20.camel@iceland>
	<CA+ePHTC4Fq0sqb5ie+YBiC5Ft_q6O2PkJxYqGXxbBbHSHxfbOA@mail.gmail.com>
	<1356691575.2917.13.camel@iceland>
Date: Mon, 31 Dec 2012 11:10:09 +0800
Message-ID: <CA+ePHTCaJRCopiM-WoWYDBhrNrcmkNz7rcUmjfXtcHcTt=niqQ@mail.gmail.com>
From: =?GB2312?B?wu3A2g==?= <aware.why@gmail.com>
To: Wei Liu <Wei.Liu2@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel]
	=?gb2312?b?W0JVR106IHdoZW4gdXNpbmcgYHhsIHJlc3RvcmVg?=
	=?gb2312?b?o6x4Y19ldnRjaG5fYWxsb2NfdW5ib3VuZCB3aWxsIHJhaXNlIHRo?=
	=?gb2312?b?aXMgZXJyb3I=?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6054993727468398706=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6054993727468398706==
Content-Type: multipart/alternative; boundary=047d7b6226b0d35ba704d21d5966

--047d7b6226b0d35ba704d21d5966
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

On Fri, Dec 28, 2012 at 6:46 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:

> On Fri, 2012-12-28 at 03:13 +0000, =C2=ED=C0=DA wrote:
> >
> >
> > On Thu, Dec 27, 2012 at 8:41 PM, Wei Liu <Wei.Liu2@citrix.com> wrote:
> >         On Thu, 2012-12-27 at 02:12 +0000, =C2=ED=C0=DA wrote:
> >         >
> >         > I got it, but the error `  xc: error: do_evtchn_op:
> >         > HYPERVISOR_event_channel_op failed: -1 (3 =3D No such
> >         process): Internal
> >         > error. ` said no such process, the system error description
> >         > didn't seem has anything to do with the following lines wich
> >         raised
> >         >  85    state->store_port =3D xc_evtchn_alloc_unbound(ctx->xch=
,
> >         domid,
> >         > 0);
> >         >  86    state->console_port =3D
> >         xc_evtchn_alloc_unbound(ctx->xch, domid,
> >         > 0);
> >
> >
> >         The error code -1 is -EPERM, which means you don't have
> >         permission to
> >         issue this operation. I don't think this is a bug. There might
> >         be some
> >         problems with your setup.
> >
> >         If you need any pointer in reading source code, I will be
> >         happy to help.
> >
> >
> >         Wei.
> >
> >     Thanks for your kindness!
> >
> >
> >     I looked into the functions for logging, in this case,  `3 =3D No
> > such process` was from `errno` and the ` HYPERVISOR_event_channel_op
> > failed: -1 ` was from hypervisor-level
> > error(src/xen/common/event_channel.c).
> > In my option, that's to say, error number of -1 was caused by
> > hypervisor; but what was the error number of 3 caused by,  dom0?
> > Do both the two error numbers refer to the description defined in
> > errno.h or else hypervisor has its own error description?
> >
>
> I think the two files are mostly the same, but to be sure you need to
> look into the source file in both Linux and Xen. You should start from
> the hypervisor level, find out why it returns -EPERM. Root user in Dom0
> has nothing to do with privilege in hypervisor level.
>
>
> Wei.
>
> The scene is that there are more than 10 processes, each process calls `x=
l
restore` to start a VM where a virus sample will run to detect the sample's
behaviour  about every 120 seconds.
For a long time, such as 5 days or 15 days, the xen server where the
processes run will raise this error with a certain probability.
Maybe it has something to do with the hypervisor's stability.

--047d7b6226b0d35ba704d21d5966
Content-Type: text/html; charset=GB2312
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Fri, Dec 28, 2012 at 6:46 PM, Wei Liu=
 <span dir=3D"ltr">&lt;<a href=3D"mailto:Wei.Liu2@citrix.com" target=3D"_bl=
ank">Wei.Liu2@citrix.com</a>&gt;</span> wrote:<br><blockquote class=3D"gmai=
l_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left=
:1ex">
<div class=3D"HOEnZb"><div class=3D"h5">On Fri, 2012-12-28 at 03:13 +0000, =
=C2=ED=C0=DA wrote:<br>
&gt;<br>
&gt;<br>
&gt; On Thu, Dec 27, 2012 at 8:41 PM, Wei Liu &lt;<a href=3D"mailto:Wei.Liu=
2@citrix.com">Wei.Liu2@citrix.com</a>&gt; wrote:<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; On Thu, 2012-12-27 at 02:12 +0000, =C2=ED=
=C0=DA wrote:<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; I got it, but the error ` &nbsp;xc: e=
rror: do_evtchn_op:<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; HYPERVISOR_event_channel_op failed: -=
1 (3 =3D No such<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; process): Internal<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; error. ` said no such process, the sy=
stem error description<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; didn&#39;t seem has anything to do wi=
th the following lines wich<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; raised<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; &nbsp;85 &nbsp; &nbsp;state-&gt;store=
_port =3D xc_evtchn_alloc_unbound(ctx-&gt;xch,<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; domid,<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; 0);<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; &nbsp;86 &nbsp; &nbsp;state-&gt;conso=
le_port =3D<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; xc_evtchn_alloc_unbound(ctx-&gt;xch, domid=
,<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; 0);<br>
&gt;<br>
&gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; The error code -1 is -EPERM, which means y=
ou don&#39;t have<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; permission to<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; issue this operation. I don&#39;t think th=
is is a bug. There might<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; be some<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; problems with your setup.<br>
&gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; If you need any pointer in reading source =
code, I will be<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; happy to help.<br>
&gt;<br>
&gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; Wei.<br>
&gt;<br>
&gt; &nbsp; &nbsp; Thanks for your kindness!<br>
&gt;<br>
&gt;<br>
&gt; &nbsp; &nbsp; I looked into the functions for logging, in this case, &=
nbsp;`3 =3D No<br>
&gt; such process` was from `errno` and the ` HYPERVISOR_event_channel_op<b=
r>
&gt; failed: -1 ` was from hypervisor-level<br>
&gt; error(src/xen/common/event_channel.c).<br>
&gt; In my option, that&#39;s to say, error number of -1 was caused by<br>
&gt; hypervisor; but what was the error number of 3 caused by, &nbsp;dom0?<=
br>
&gt; Do both the two error numbers refer to the description defined in<br>
&gt; errno.h or else hypervisor has its own error description?<br>
&gt;<br>

I think the two files are mostly the same, but to be sure you need to
look into the source files in both Linux and Xen. You should start from
the hypervisor level and find out why it returns -EPERM. Being root in
Dom0 has nothing to do with privilege at the hypervisor level.


Wei.

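A side note on the errno.h question above: on the Linux/dom0 side the two codes are ordinary errno values, which is quick to confirm with a minimal Python sketch (whether Xen's own error table matches Linux's is what still needs checking in the source):

```python
import errno
import os

# errno 3, logged as "No such process", is ESRCH;
# the hypervisor's -1 return corresponds to -EPERM, since EPERM == 1
print(errno.errorcode[3], "->", os.strerror(3))
print(errno.errorcode[1], "->", os.strerror(1))
```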
The scenario is that there are more than 10 processes; each process
calls `xl restore` about every 120 seconds to start a VM in which a
virus sample is run so that its behaviour can be observed.
After a long time, such as 5 or 15 days, the Xen server where these
processes run raises this error with a certain probability.
Maybe it has something to do with the hypervisor's stability.

--047d7b6226b0d35ba704d21d5966--


--===============6054993727468398706==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6054993727468398706==--


From xen-devel-bounces@lists.xen.org Mon Dec 31 07:29:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 07:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpZne-0001zW-32; Mon, 31 Dec 2012 07:28:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1TpZnb-0001zR-Ox
	for xen-devel@lists.xensource.com; Mon, 31 Dec 2012 07:28:40 +0000
Received: from [85.158.139.211:10381] by server-9.bemta-5.messagelabs.com id
	E0/92-10690-6AE31E05; Mon, 31 Dec 2012 07:28:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1356938918!21843848!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODUz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 932 invoked from network); 31 Dec 2012 07:28:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 07:28:38 -0000
X-IronPort-AV: E=Sophos;i="4.84,384,1355097600"; 
   d="scan'208";a="393098"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 07:28:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.5; Mon, 31 Dec 2012 07:28:37 +0000
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1TpZnZ-000414-K5;
	Mon, 31 Dec 2012 07:28:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1TpZnZ-0001MI-4x;
	Mon, 31 Dec 2012 07:28:37 +0000
To: xen-devel@lists.xensource.com
Message-ID: <osstest-14814-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 31 Dec 2012 07:28:37 +0000
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 14814: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 14814 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/14814/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 14812
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 14813

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-qemut-win-vcpus1 16 leak-check/check           fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win-vcpus1 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemut-win    16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-qemut-win   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemut-win 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c4114a042410
baseline version:
 xen                  c4114a042410

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-qemut-win-vcpus1                             fail    
 test-amd64-i386-xl-qemut-win-vcpus1                          fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-amd64-amd64-qemut-win                                   fail    
 test-amd64-i386-qemut-win                                    fail    
 test-amd64-amd64-xl-qemut-win                                fail    
 test-amd64-amd64-xl-win                                      fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 11:31:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 11:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpdZj-0004R1-Gz; Mon, 31 Dec 2012 11:30:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpdZi-0004Qw-1T
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 11:30:34 +0000
Received: from [85.158.139.83:9952] by server-2.bemta-5.messagelabs.com id
	4F/E0-16162-95771E05; Mon, 31 Dec 2012 11:30:33 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1356953430!31238409!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27564 invoked from network); 31 Dec 2012 11:30:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 11:30:31 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="2306551"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 11:30:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 06:30:29 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1TpdZc-000412-Sq;
	Mon, 31 Dec 2012 11:30:28 +0000
Message-ID: <1356953426.2917.36.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: Rohit Damkondwar <genius.rsd@gmail.com>
Date: Mon, 31 Dec 2012 11:30:26 +0000
In-Reply-To: <CAHEEu860f4g7vZ0+hHXBtXe95J+7d33OzW9kJ2x5O2au6q3zHA@mail.gmail.com>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
	<1356611599.19238.13.camel@iceland>
	<CAHEEu857jjBLACY=MNKzqEkpGbjXw1cbfqy9AOyR5fVsuOmagg@mail.gmail.com>
	<1356690956.2917.7.camel@iceland>
	<CAHEEu860f4g7vZ0+hHXBtXe95J+7d33OzW9kJ2x5O2au6q3zHA@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: wei.liu2@citrix.com, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
 interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-12-30 at 12:08 +0000, Rohit Damkondwar wrote:


>         TBH I'm not sure about this either. Again, comparisons and
>         analysis of
>         bottleneck would be helpful.
>         
>         >
>         > I have seen the function "set_qos_algorithm_type" and
>         > parameters
>         > (qos/algorithm type, qos/algorithm params, qos/supported
>         > algorithms) in
>         > vif class. Would they be useful ? Are they available only
>         for XEN
>         > Enterprise ?
>         >
>         
>         
>         Do you see those in libxen source code? I don't think they are
>         in use
>         now.
> I didn't know that. Then which library should I look into? I could
> find the vif class in the libxen folder, so I thought it should be
> used. I read that xl is not mature enough in xen 4.1.3. So which
> library should I look into? Please help.

There is a considerable amount of work to be done.

The tool stack side is relatively easy. You can try adding config
options to whatever tool stack you're using. You need to parse the
config option (which is easy, because every tool stack has a parser),
then write the option to xenstore so that the guest can pick it up.
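As an illustration of the parse-then-publish flow described above, here is a small Python sketch. The "NNMb/s[@NNms]" grammar and the (credit_bytes, credit_usec) pair are modelled loosely on the existing vif rate option; the names and grammar here are assumptions for illustration, not xl's actual implementation:

```python
import re

def parse_rate(spec, default_interval_us=50000):
    """Parse a hypothetical 'NNMb/s[@NNms]' rate spec into a
    (credit_bytes, credit_usec) pair a backend could consume.
    The grammar is illustrative, not xl's real one."""
    m = re.fullmatch(r"(\d+)([GMK]?)b/s(?:@(\d+)ms)?", spec)
    if not m:
        raise ValueError("bad rate spec: " + spec)
    num, unit, interval = m.groups()
    mult = {"G": 10**9, "M": 10**6, "K": 10**3, "": 1}[unit]
    bits_per_sec = int(num) * mult
    usec = int(interval) * 1000 if interval else default_interval_us
    # bytes the guest may transmit per replenish interval
    credit_bytes = bits_per_sec // 8 * usec // 10**6
    return credit_bytes, usec

# 10Mb/s replenished every 20ms -> 25000 bytes per 20000us window
print(parse_rate("10Mb/s@20ms"))
```

The resulting pair would then be written as a single "bytes,usec" string under the vif backend's xenstore directory, where the backend can watch for it.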

Modifying the kernel is a bit harder. Download the kernel package and
look for netback.c and friends, whose location varies depending on the
kernel you're using. Grep for 'rate' to have a look at how the tx rate
limit is implemented.

The last bit is to upstream your changes. Please post them to
xen-devel to see whether they are suitable for upstreaming. Of course
this is not mandatory, but by upstreaming your changes you can 1)
relieve your maintenance burden, and 2) benefit others.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 12:04:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 12:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpe6T-0004to-59; Mon, 31 Dec 2012 12:04:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tpe6R-0004td-AS
	for xen-devel@lists.xensource.com; Mon, 31 Dec 2012 12:04:23 +0000
Received: from [85.158.139.83:38618] by server-12.bemta-5.messagelabs.com id
	F5/00-02275-64F71E05; Mon, 31 Dec 2012 12:04:22 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1356955460!27734526!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27956 invoked from network); 31 Dec 2012 12:04:21 -0000
Received: from unknown (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 12:04:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="2307644"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 12:03:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 07:03:18 -0500
Received: from [10.80.3.80]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <Wei.Liu2@citrix.com>)	id 1Tpe5O-0004T1-5q;
	Mon, 31 Dec 2012 12:03:18 +0000
Message-ID: <1356955395.2917.41.camel@iceland>
From: Wei Liu <Wei.Liu2@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Mon, 31 Dec 2012 12:03:15 +0000
In-Reply-To: <CA+ePHTCaJRCopiM-WoWYDBhrNrcmkNz7rcUmjfXtcHcTt=niqQ@mail.gmail.com>
References: <CA+ePHTBx8iJaAz7PCzeM-jFcs9e_Cx2qAR3cZTSavbZs0hhEoA@mail.gmail.com>
	<CA+ePHTAcFUrPC3rUiBTps0JZ6PdBxUZfsZSgnn4Mym=po+NDpg@mail.gmail.com>
	<20121226134131.GA25087@iceland>
	<CA+ePHTAumsKKXNmcYEsY9OV5J3pyy7kygAktFEtjP84O9_ub0g@mail.gmail.com>
	<20121226193312.GA28152@iceland>
	<CA+ePHTCNA7EkOQvbWw+=t+v_q7M7XAnrF9a1rr_U+VikCyxffg@mail.gmail.com>
	<1356612077.19238.20.camel@iceland>
	<CA+ePHTC4Fq0sqb5ie+YBiC5Ft_q6O2PkJxYqGXxbBbHSHxfbOA@mail.gmail.com>
	<1356691575.2917.13.camel@iceland>
	<CA+ePHTCaJRCopiM-WoWYDBhrNrcmkNz7rcUmjfXtcHcTt=niqQ@mail.gmail.com>
X-Mailer: Evolution 3.6.0-0ubuntu3 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	wei.liu2@citrix.com
Subject: Re: [Xen-devel] =?utf-8?q?=5BBUG=5D=3A_when_using_=60xl_restore=60?=
 =?utf-8?q?=EF=BC=8Cxc=5Fevtchn=5Falloc=5Funbound_will_raise_this_error?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-12-31 at 03:10 +0000, 马磊 wrote:
>         
>         
>         
>         I think the two files are mostly the same, but to be sure you
>         need to
>         look into the source file in both Linux and Xen. You should
>         start from
>         the hypervisor level, find out why it returns -EPERM. Root
>         user in Dom0
>         has nothing to do with privilege in hypervisor level.
>         
>         
>         Wei.
>         
> The scene is that there are more than 10 processes, each process calls
> `xl restore` to start a VM where a virus sample will run to detect the
> sample's behaviour  about every 120 seconds.
> For a long time, such as 5 days or 15 days, the xen server where the
> processes run will raise this error with a certain probability.
> Maybe it has something to do with the hypervisor's stability.

This is not a usual scenario. Sorry I don't know how to reproduce this.

But the code path in hypervisor for evtchn_alloc_unbound is quite short,
you may try adding some debug output along the path to see which step
fails then report back.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 12:16:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 12:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpeI5-00057p-L4; Mon, 31 Dec 2012 12:16:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TpeI4-00057f-47
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 12:16:24 +0000
Received: from [85.158.139.211:50957] by server-12.bemta-5.messagelabs.com id
	F9/9A-02275-71281E05; Mon, 31 Dec 2012 12:16:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356956182!19886148!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22996 invoked from network); 31 Dec 2012 12:16:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 12:16:22 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="396011"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 12:16:22 +0000
Received: from mac.citrite.net (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 31 Dec 2012 12:16:21 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Date: Mon, 31 Dec 2012 13:16:13 +0100
Message-ID: <1356956174-23548-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
References: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 2/3] xen_disk: fix memory leak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ioreq_release the full ioreq was memset to 0, loosing all the data
and memory allocations inside the QEMUIOVector, which leads to a
memory leak. Create a new function to specifically reset ioreq.

Reported-by: Maik Wessler <maik.wessler@yahoo.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xen.org
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/xen_disk.c |   28 ++++++++++++++++++++++++++--
 1 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/hw/xen_disk.c b/hw/xen_disk.c
index a159ee5..1eb485a 100644
--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -113,6 +113,31 @@ struct XenBlkDev {
 
 /* ------------------------------------------------------------- */
 
+static void ioreq_reset(struct ioreq *ioreq)
+{
+    memset(&ioreq->req, 0, sizeof(ioreq->req));
+    ioreq->status = 0;
+    ioreq->start = 0;
+    ioreq->presync = 0;
+    ioreq->postsync = 0;
+    ioreq->mapped = 0;
+
+    memset(ioreq->domids, 0, sizeof(ioreq->domids));
+    memset(ioreq->refs, 0, sizeof(ioreq->refs));
+    ioreq->prot = 0;
+    memset(ioreq->page, 0, sizeof(ioreq->page));
+    ioreq->pages = NULL;
+
+    ioreq->aio_inflight = 0;
+    ioreq->aio_errors = 0;
+
+    ioreq->blkdev = NULL;
+    memset(&ioreq->list, 0, sizeof(ioreq->list));
+    memset(&ioreq->acct, 0, sizeof(ioreq->acct));
+
+    qemu_iovec_reset(&ioreq->v);
+}
+
 static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
 {
     struct ioreq *ioreq = NULL;
@@ -130,7 +155,6 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
         /* get one from freelist */
         ioreq = QLIST_FIRST(&blkdev->freelist);
         QLIST_REMOVE(ioreq, list);
-        qemu_iovec_reset(&ioreq->v);
     }
     QLIST_INSERT_HEAD(&blkdev->inflight, ioreq, list);
     blkdev->requests_inflight++;
@@ -154,7 +178,7 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
     struct XenBlkDev *blkdev = ioreq->blkdev;
 
     QLIST_REMOVE(ioreq, list);
-    memset(ioreq, 0, sizeof(*ioreq));
+    ioreq_reset(ioreq);
     ioreq->blkdev = blkdev;
     QLIST_INSERT_HEAD(&blkdev->freelist, ioreq, list);
     if (finish) {
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 12:16:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 12:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpeI6-00057w-0k; Mon, 31 Dec 2012 12:16:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TpeI4-00057g-9r
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 12:16:24 +0000
Received: from [85.158.139.211:36908] by server-14.bemta-5.messagelabs.com id
	56/29-09538-71281E05; Mon, 31 Dec 2012 12:16:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356956182!19886148!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22999 invoked from network); 31 Dec 2012 12:16:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 12:16:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="396012"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 12:16:22 +0000
Received: from mac.citrite.net (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 31 Dec 2012 12:16:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Date: Mon, 31 Dec 2012 13:16:14 +0100
Message-ID: <1356956174-23548-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
References: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 3/3] xen_disk: add persistent grant support
	to xen_disk backend
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBwcm90b2NvbCBleHRlbnNpb24gcmV1c2VzIHRoZSBzYW1lIHNldCBvZiBncmFudCBwYWdl
cyBmb3IgYWxsCnRyYW5zYWN0aW9ucyBiZXR3ZWVuIHRoZSBmcm9udC9iYWNrIGRyaXZlcnMsIGF2
b2lkaW5nIGV4cGVuc2l2ZSB0bGIKZmx1c2hlcywgZ3JhbnQgdGFibGUgbG9jayBjb250ZW50aW9u
IGFuZCBzd2l0Y2hlcyBiZXR3ZWVuIHVzZXJzcGFjZQphbmQga2VybmVsIHNwYWNlLiBUaGUgZnVs
bCBkZXNjcmlwdGlvbiBvZiB0aGUgcHJvdG9jb2wgY2FuIGJlIGZvdW5kIGluCnRoZSBwdWJsaWMg
YmxraWYuaCBoZWFkZXIuCgpTcGVlZCBpbXByb3ZlbWVudCB3aXRoIDE1IGd1ZXN0cyBwZXJmb3Jt
aW5nIEkvTyBpcyB+NDUwJS4KClNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2Vy
LnBhdUBjaXRyaXguY29tPgpDYzogeGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKQ2M6IFN0ZWZhbm8g
U3RhYmVsbGluaSA8U3RlZmFuby5TdGFiZWxsaW5pQGV1LmNpdHJpeC5jb20+CkNjOiBBbnRob255
IFBFUkFSRCA8YW50aG9ueS5wZXJhcmRAY2l0cml4LmNvbT4KLS0tClBlcmZvcm1hbmNlIGNvbXBh
cmlzb24gd2l0aCB0aGUgcHJldmlvdXMgaW1wbGVtZW50YXRpb24gY2FuIGJlIHNlZW4gaW4KdGhl
IGZvbGxvd2lnbiBncmFwaDoKCmh0dHA6Ly94ZW5iaXRzLnhlbi5vcmcvcGVvcGxlL3JveWdlci9w
ZXJzaXN0ZW50X3JlYWRfcWVtdS5wbmcKLS0tCiBody94ZW5fZGlzay5jIHwgIDE1NSArKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tLS0tLQogMSBmaWxl
cyBjaGFuZ2VkLCAxMzggaW5zZXJ0aW9ucygrKSwgMTcgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0
IGEvaHcveGVuX2Rpc2suYyBiL2h3L3hlbl9kaXNrLmMKaW5kZXggMWViNDg1YS4uYmFmZWNlYiAx
MDA2NDQKLS0tIGEvaHcveGVuX2Rpc2suYworKysgYi9ody94ZW5fZGlzay5jCkBAIC01Miw2ICs1
MiwxMSBAQCBzdGF0aWMgaW50IG1heF9yZXF1ZXN0cyA9IDMyOwogI2RlZmluZSBCTE9DS19TSVpF
ICA1MTIKICNkZWZpbmUgSU9DQl9DT1VOVCAgKEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVT
VCArIDIpCiAKK3N0cnVjdCBwZXJzaXN0ZW50X2dudCB7CisgICAgdm9pZCAqcGFnZTsKKyAgICBz
dHJ1Y3QgWGVuQmxrRGV2ICpibGtkZXY7Cit9OworCiBzdHJ1Y3QgaW9yZXEgewogICAgIGJsa2lm
X3JlcXVlc3RfdCAgICAgcmVxOwogICAgIGludDE2X3QgICAgICAgICAgICAgc3RhdHVzOwpAQCAt
NjksNiArNzQsNyBAQCBzdHJ1Y3QgaW9yZXEgewogICAgIGludCAgICAgICAgICAgICAgICAgcHJv
dDsKICAgICB2b2lkICAgICAgICAgICAgICAgICpwYWdlW0JMS0lGX01BWF9TRUdNRU5UU19QRVJf
UkVRVUVTVF07CiAgICAgdm9pZCAgICAgICAgICAgICAgICAqcGFnZXM7CisgICAgaW50ICAgICAg
ICAgICAgICAgICBudW1fdW5tYXA7CiAKICAgICAvKiBhaW8gc3RhdHVzICovCiAgICAgaW50ICAg
ICAgICAgICAgICAgICBhaW9faW5mbGlnaHQ7CkBAIC0xMDUsNiArMTExLDEyIEBAIHN0cnVjdCBY
ZW5CbGtEZXYgewogICAgIGludCAgICAgICAgICAgICAgICAgcmVxdWVzdHNfaW5mbGlnaHQ7CiAg
ICAgaW50ICAgICAgICAgICAgICAgICByZXF1ZXN0c19maW5pc2hlZDsKIAorICAgIC8qIFBlcnNp
c3RlbnQgZ3JhbnRzIGV4dGVuc2lvbiAqLworICAgIGdib29sZWFuICAgICAgICAgICAgZmVhdHVy
ZV9wZXJzaXN0ZW50OworICAgIEdUcmVlICAgICAgICAgICAgICAgKnBlcnNpc3RlbnRfZ250czsK
KyAgICB1bnNpZ25lZCBpbnQgICAgICAgIHBlcnNpc3RlbnRfZ250X2M7CisgICAgdW5zaWduZWQg
aW50ICAgICAgICBtYXhfZ3JhbnRzOworCiAgICAgLyogcWVtdSBibG9jayBkcml2ZXIgKi8KICAg
ICBEcml2ZUluZm8gICAgICAgICAgICpkaW5mbzsKICAgICBCbG9ja0RyaXZlclN0YXRlICAgICpi
czsKQEAgLTEzOCw2ICsxNTAsMjkgQEAgc3RhdGljIHZvaWQgaW9yZXFfcmVzZXQoc3RydWN0IGlv
cmVxICppb3JlcSkKICAgICBxZW11X2lvdmVjX3Jlc2V0KCZpb3JlcS0+dik7CiB9CiAKK3N0YXRp
YyBnaW50IGludF9jbXAoZ2NvbnN0cG9pbnRlciBhLCBnY29uc3Rwb2ludGVyIGIsIGdwb2ludGVy
IHVzZXJfZGF0YSkKK3sKKyAgICB1aW50IHVhID0gR1BPSU5URVJfVE9fVUlOVChhKTsKKyAgICB1
aW50IHViID0gR1BPSU5URVJfVE9fVUlOVChiKTsKKyAgICByZXR1cm4gKHVhID4gdWIpIC0gKHVh
IDwgdWIpOworfQorCitzdGF0aWMgdm9pZCBkZXN0cm95X2dyYW50KGdwb2ludGVyIHBnbnQpCit7
CisgICAgc3RydWN0IHBlcnNpc3RlbnRfZ250ICpncmFudCA9IHBnbnQ7CisgICAgWGVuR250dGFi
IGdudCA9IGdyYW50LT5ibGtkZXYtPnhlbmRldi5nbnR0YWJkZXY7CisKKyAgICBpZiAoeGNfZ250
dGFiX211bm1hcChnbnQsIGdyYW50LT5wYWdlLCAxKSAhPSAwKSB7CisgICAgICAgIHhlbl9iZV9w
cmludGYoJmdyYW50LT5ibGtkZXYtPnhlbmRldiwgMCwKKyAgICAgICAgICAgICAgICAgICAgICAi
eGNfZ250dGFiX211bm1hcCBmYWlsZWQ6ICVzXG4iLAorICAgICAgICAgICAgICAgICAgICAgIHN0
cmVycm9yKGVycm5vKSk7CisgICAgfQorICAgIGdyYW50LT5ibGtkZXYtPnBlcnNpc3RlbnRfZ250
X2MtLTsKKyAgICB4ZW5fYmVfcHJpbnRmKCZncmFudC0+YmxrZGV2LT54ZW5kZXYsIDMsCisgICAg
ICAgICAgICAgICAgICAidW5tYXBwZWQgZ3JhbnQgJXBcbiIsIGdyYW50LT5wYWdlKTsKKyAgICBn
X2ZyZWUoZ3JhbnQpOworfQorCiBzdGF0aWMgc3RydWN0IGlvcmVxICppb3JlcV9zdGFydChzdHJ1
Y3QgWGVuQmxrRGV2ICpibGtkZXYpCiB7CiAgICAgc3RydWN0IGlvcmVxICppb3JlcSA9IE5VTEw7
CkBAIC0yNjYsMjEgKzMwMSwyMSBAQCBzdGF0aWMgdm9pZCBpb3JlcV91bm1hcChzdHJ1Y3QgaW9y
ZXEgKmlvcmVxKQogICAgIFhlbkdudHRhYiBnbnQgPSBpb3JlcS0+YmxrZGV2LT54ZW5kZXYuZ250
dGFiZGV2OwogICAgIGludCBpOwogCi0gICAgaWYgKGlvcmVxLT52Lm5pb3YgPT0gMCB8fCBpb3Jl
cS0+bWFwcGVkID09IDApIHsKKyAgICBpZiAoaW9yZXEtPm51bV91bm1hcCA9PSAwIHx8IGlvcmVx
LT5tYXBwZWQgPT0gMCkgewogICAgICAgICByZXR1cm47CiAgICAgfQogICAgIGlmIChiYXRjaF9t
YXBzKSB7CiAgICAgICAgIGlmICghaW9yZXEtPnBhZ2VzKSB7CiAgICAgICAgICAgICByZXR1cm47
CiAgICAgICAgIH0KLSAgICAgICAgaWYgKHhjX2dudHRhYl9tdW5tYXAoZ250LCBpb3JlcS0+cGFn
ZXMsIGlvcmVxLT52Lm5pb3YpICE9IDApIHsKKyAgICAgICAgaWYgKHhjX2dudHRhYl9tdW5tYXAo
Z250LCBpb3JlcS0+cGFnZXMsIGlvcmVxLT5udW1fdW5tYXApICE9IDApIHsKICAgICAgICAgICAg
IHhlbl9iZV9wcmludGYoJmlvcmVxLT5ibGtkZXYtPnhlbmRldiwgMCwgInhjX2dudHRhYl9tdW5t
YXAgZmFpbGVkOiAlc1xuIiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RyZXJyb3IoZXJy
bm8pKTsKICAgICAgICAgfQotICAgICAgICBpb3JlcS0+YmxrZGV2LT5jbnRfbWFwIC09IGlvcmVx
LT52Lm5pb3Y7CisgICAgICAgIGlvcmVxLT5ibGtkZXYtPmNudF9tYXAgLT0gaW9yZXEtPm51bV91
bm1hcDsKICAgICAgICAgaW9yZXEtPnBhZ2VzID0gTlVMTDsKICAgICB9IGVsc2UgewotICAgICAg
ICBmb3IgKGkgPSAwOyBpIDwgaW9yZXEtPnYubmlvdjsgaSsrKSB7CisgICAgICAgIGZvciAoaSA9
IDA7IGkgPCBpb3JlcS0+bnVtX3VubWFwOyBpKyspIHsKICAgICAgICAgICAgIGlmICghaW9yZXEt
PnBhZ2VbaV0pIHsKICAgICAgICAgICAgICAgICBjb250aW51ZTsKICAgICAgICAgICAgIH0KQEAg
LTI5OCw0MSArMzMzLDEwNyBAQCBzdGF0aWMgdm9pZCBpb3JlcV91bm1hcChzdHJ1Y3QgaW9yZXEg
KmlvcmVxKQogc3RhdGljIGludCBpb3JlcV9tYXAoc3RydWN0IGlvcmVxICppb3JlcSkKIHsKICAg
ICBYZW5HbnR0YWIgZ250ID0gaW9yZXEtPmJsa2Rldi0+eGVuZGV2LmdudHRhYmRldjsKLSAgICBp
bnQgaTsKKyAgICB1aW50MzJfdCBkb21pZHNbQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNU
XTsKKyAgICB1aW50MzJfdCByZWZzW0JMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVF07Cisg
ICAgdm9pZCAqcGFnZVtCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JFUVVFU1RdOworICAgIGludCBp
LCBqLCBuZXdfbWFwcyA9IDA7CisgICAgc3RydWN0IHBlcnNpc3RlbnRfZ250ICpncmFudDsKIAog
ICAgIGlmIChpb3JlcS0+di5uaW92ID09IDAgfHwgaW9yZXEtPm1hcHBlZCA9PSAxKSB7CiAgICAg
ICAgIHJldHVybiAwOwogICAgIH0KLSAgICBpZiAoYmF0Y2hfbWFwcykgeworICAgIGlmIChpb3Jl
cS0+YmxrZGV2LT5mZWF0dXJlX3BlcnNpc3RlbnQpIHsKKyAgICAgICAgZm9yIChpID0gMDsgaSA8
IGlvcmVxLT52Lm5pb3Y7IGkrKykgeworICAgICAgICAgICAgZ3JhbnQgPSBnX3RyZWVfbG9va3Vw
KGlvcmVxLT5ibGtkZXYtPnBlcnNpc3RlbnRfZ250cywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIEdVSU5UX1RPX1BPSU5URVIoaW9yZXEtPnJlZnNbaV0pKTsKKworICAgICAg
ICAgICAgaWYgKGdyYW50ICE9IE5VTEwpIHsKKyAgICAgICAgICAgICAgICBwYWdlW2ldID0gZ3Jh
bnQtPnBhZ2U7CisgICAgICAgICAgICAgICAgeGVuX2JlX3ByaW50ZigmaW9yZXEtPmJsa2Rldi0+
eGVuZGV2LCAzLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgInVzaW5nIHBlcnNpc3Rl
bnQtZ3JhbnQgJSIgUFJJdTMyICJcbiIsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBp
b3JlcS0+cmVmc1tpXSk7CisgICAgICAgICAgICB9IGVsc2UgeworICAgICAgICAgICAgICAgICAg
ICAvKiBBZGQgdGhlIGdyYW50IHRvIHRoZSBsaXN0IG9mIGdyYW50cyB0aGF0CisgICAgICAgICAg
ICAgICAgICAgICAqIHNob3VsZCBiZSBtYXBwZWQKKyAgICAgICAgICAgICAgICAgICAgICovCisg
ICAgICAgICAgICAgICAgICAgIGRvbWlkc1tuZXdfbWFwc10gPSBpb3JlcS0+ZG9taWRzW2ldOwor
ICAgICAgICAgICAgICAgICAgICByZWZzW25ld19tYXBzXSA9IGlvcmVxLT5yZWZzW2ldOworICAg
ICAgICAgICAgICAgICAgICBwYWdlW2ldID0gTlVMTDsKKyAgICAgICAgICAgICAgICAgICAgbmV3
X21hcHMrKzsKKyAgICAgICAgICAgIH0KKyAgICAgICAgfQorICAgICAgICAvKiBTZXQgdGhlIHBy
b3RlY3Rpb24gdG8gUlcsIHNpbmNlIGdyYW50cyBtYXkgYmUgcmV1c2VkIGxhdGVyCisgICAgICAg
ICAqIHdpdGggYSBkaWZmZXJlbnQgcHJvdGVjdGlvbiB0aGFuIHRoZSBvbmUgbmVlZGVkIGZvciB0
aGlzIHJlcXVlc3QKKyAgICAgICAgICovCisgICAgICAgIGlvcmVxLT5wcm90ID0gUFJPVF9XUklU
RSB8IFBST1RfUkVBRDsKKyAgICB9IGVsc2UgeworICAgICAgICAvKiBBbGwgZ3JhbnRzIGluIHRo
ZSByZXF1ZXN0IHNob3VsZCBiZSBtYXBwZWQgKi8KKyAgICAgICAgbWVtY3B5KHJlZnMsIGlvcmVx
LT5yZWZzLCBzaXplb2YocmVmcykpOworICAgICAgICBtZW1jcHkoZG9taWRzLCBpb3JlcS0+ZG9t
aWRzLCBzaXplb2YoZG9taWRzKSk7CisgICAgICAgIG1lbXNldChwYWdlLCAwLCBzaXplb2YocGFn
ZSkpOworICAgICAgICBuZXdfbWFwcyA9IGlvcmVxLT52Lm5pb3Y7CisgICAgfQorCisgICAgaWYg
KGJhdGNoX21hcHMgJiYgbmV3X21hcHMpIHsKICAgICAgICAgaW9yZXEtPnBhZ2VzID0geGNfZ250
dGFiX21hcF9ncmFudF9yZWZzCi0gICAgICAgICAgICAoZ250LCBpb3JlcS0+di5uaW92LCBpb3Jl
cS0+ZG9taWRzLCBpb3JlcS0+cmVmcywgaW9yZXEtPnByb3QpOworICAgICAgICAgICAgKGdudCwg
bmV3X21hcHMsIGRvbWlkcywgcmVmcywgaW9yZXEtPnByb3QpOwogICAgICAgICBpZiAoaW9yZXEt
PnBhZ2VzID09IE5VTEwpIHsKICAgICAgICAgICAgIHhlbl9iZV9wcmludGYoJmlvcmVxLT5ibGtk
ZXYtPnhlbmRldiwgMCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgImNhbid0IG1hcCAlZCBn
cmFudCByZWZzICglcywgJWQgbWFwcylcbiIsCi0gICAgICAgICAgICAgICAgICAgICAgICAgIGlv
cmVxLT52Lm5pb3YsIHN0cmVycm9yKGVycm5vKSwgaW9yZXEtPmJsa2Rldi0+Y250X21hcCk7Cisg
ICAgICAgICAgICAgICAgICAgICAgICAgIG5ld19tYXBzLCBzdHJlcnJvcihlcnJubyksIGlvcmVx
LT5ibGtkZXYtPmNudF9tYXApOwogICAgICAgICAgICAgcmV0dXJuIC0xOwogICAgICAgICB9Ci0g
ICAgICAgIGZvciAoaSA9IDA7IGkgPCBpb3JlcS0+di5uaW92OyBpKyspIHsKLSAgICAgICAgICAg
IGlvcmVxLT52LmlvdltpXS5pb3ZfYmFzZSA9IGlvcmVxLT5wYWdlcyArIGkgKiBYQ19QQUdFX1NJ
WkUgKwotICAgICAgICAgICAgICAgICh1aW50cHRyX3QpaW9yZXEtPnYuaW92W2ldLmlvdl9iYXNl
OworICAgICAgICBmb3IgKGkgPSAwLCBqID0gMDsgaSA8IGlvcmVxLT52Lm5pb3Y7IGkrKykgewor
ICAgICAgICAgICAgaWYgKHBhZ2VbaV0gPT0gTlVMTCkKKyAgICAgICAgICAgICAgICBwYWdlW2ld
ID0gaW9yZXEtPnBhZ2VzICsgKGorKykgKiBYQ19QQUdFX1NJWkU7CiAgICAgICAgIH0KLSAgICAg
ICAgaW9yZXEtPmJsa2Rldi0+Y250X21hcCArPSBpb3JlcS0+di5uaW92OwotICAgIH0gZWxzZSAg
ewotICAgICAgICBmb3IgKGkgPSAwOyBpIDwgaW9yZXEtPnYubmlvdjsgaSsrKSB7CisgICAgICAg
IGlvcmVxLT5ibGtkZXYtPmNudF9tYXAgKz0gbmV3X21hcHM7CisgICAgfSBlbHNlIGlmIChuZXdf
bWFwcykgIHsKKyAgICAgICAgZm9yIChpID0gMDsgaSA8IG5ld19tYXBzOyBpKyspIHsKICAgICAg
ICAgICAgIGlvcmVxLT5wYWdlW2ldID0geGNfZ250dGFiX21hcF9ncmFudF9yZWYKLSAgICAgICAg
ICAgICAgICAoZ250LCBpb3JlcS0+ZG9taWRzW2ldLCBpb3JlcS0+cmVmc1tpXSwgaW9yZXEtPnBy
b3QpOworICAgICAgICAgICAgICAgIChnbnQsIGRvbWlkc1tpXSwgcmVmc1tpXSwgaW9yZXEtPnBy
b3QpOwogICAgICAgICAgICAgaWYgKGlvcmVxLT5wYWdlW2ldID09IE5VTEwpIHsKICAgICAgICAg
ICAgICAgICB4ZW5fYmVfcHJpbnRmKCZpb3JlcS0+YmxrZGV2LT54ZW5kZXYsIDAsCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAiY2FuJ3QgbWFwIGdyYW50IHJlZiAlZCAoJXMsICVkIG1h
cHMpXG4iLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW9yZXEtPnJlZnNbaV0sIHN0
cmVycm9yKGVycm5vKSwgaW9yZXEtPmJsa2Rldi0+Y250X21hcCk7CisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICByZWZzW2ldLCBzdHJlcnJvcihlcnJubyksIGlvcmVxLT5ibGtkZXYtPmNu
dF9tYXApOwogICAgICAgICAgICAgICAgIGlvcmVxX3VubWFwKGlvcmVxKTsKICAgICAgICAgICAg
ICAgICByZXR1cm4gLTE7CiAgICAgICAgICAgICB9Ci0gICAgICAgICAgICBpb3JlcS0+di5pb3Zb
aV0uaW92X2Jhc2UgPSBpb3JlcS0+cGFnZVtpXSArICh1aW50cHRyX3QpaW9yZXEtPnYuaW92W2ld
Lmlvdl9iYXNlOwogICAgICAgICAgICAgaW9yZXEtPmJsa2Rldi0+Y250X21hcCsrOwogICAgICAg
ICB9CisgICAgICAgIGZvciAoaSA9IDAsIGogPSAwOyBpIDwgaW9yZXEtPnYubmlvdjsgaSsrKSB7
CisgICAgICAgICAgICBpZiAocGFnZVtpXSA9PSBOVUxMKQorICAgICAgICAgICAgICAgIHBhZ2Vb
aV0gPSBpb3JlcS0+cGFnZVtqKytdOworICAgICAgICB9CisgICAgfQorICAgIGlmIChpb3JlcS0+
YmxrZGV2LT5mZWF0dXJlX3BlcnNpc3RlbnQpIHsKKyAgICAgICAgd2hpbGUoKGlvcmVxLT5ibGtk
ZXYtPnBlcnNpc3RlbnRfZ250X2MgPCBpb3JlcS0+YmxrZGV2LT5tYXhfZ3JhbnRzKSAmJgorICAg
ICAgICAgICAgICBuZXdfbWFwcykgeworICAgICAgICAgICAgLyogR28gdGhyb3VnaCB0aGUgbGlz
dCBvZiBuZXdseSBtYXBwZWQgZ3JhbnRzIGFuZCBhZGQgYXMgbWFueQorICAgICAgICAgICAgICog
YXMgcG9zc2libGUgdG8gdGhlIGxpc3Qgb2YgcGVyc2lzdGVudGx5IG1hcHBlZCBncmFudHMKKyAg
ICAgICAgICAgICAqLworICAgICAgICAgICAgZ3JhbnQgPSBnX21hbGxvYzAoc2l6ZW9mKCpncmFu
dCkpOworICAgICAgICAgICAgbmV3X21hcHMtLTsKKyAgICAgICAgICAgIGlmIChiYXRjaF9tYXBz
KQorICAgICAgICAgICAgICAgIGdyYW50LT5wYWdlID0gaW9yZXEtPnBhZ2VzICsgKG5ld19tYXBz
KSAqIFhDX1BBR0VfU0laRTsKKyAgICAgICAgICAgIGVsc2UKKyAgICAgICAgICAgICAgICBncmFu
dC0+cGFnZSA9IGlvcmVxLT5wYWdlW25ld19tYXBzXTsKKyAgICAgICAgICAgIGdyYW50LT5ibGtk
ZXYgPSBpb3JlcS0+YmxrZGV2OworICAgICAgICAgICAgeGVuX2JlX3ByaW50ZigmaW9yZXEtPmJs
a2Rldi0+eGVuZGV2LCAzLAorICAgICAgICAgICAgICAgICAgICAgICAgICAiYWRkaW5nIGdyYW50
ICUiIFBSSXUzMiAiIHBhZ2U6ICVwXG4iLAorICAgICAgICAgICAgICAgICAgICAgICAgICByZWZz
W25ld19tYXBzXSwgZ3JhbnQtPnBhZ2UpOworICAgICAgICAgICAgZ190cmVlX2luc2VydChpb3Jl
cS0+YmxrZGV2LT5wZXJzaXN0ZW50X2dudHMsCisgICAgICAgICAgICAgICAgICAgICAgICAgIEdV
SU5UX1RPX1BPSU5URVIocmVmc1tuZXdfbWFwc10pLAorICAgICAgICAgICAgICAgICAgICAgICAg
ICBncmFudCk7CisgICAgICAgICAgICBpb3JlcS0+YmxrZGV2LT5wZXJzaXN0ZW50X2dudF9jKys7
CisgICAgICAgIH0KKyAgICB9CisgICAgZm9yIChpID0gMDsgaSA8IGlvcmVxLT52Lm5pb3Y7IGkr
KykgeworICAgICAgICBpb3JlcS0+di5pb3ZbaV0uaW92X2Jhc2UgPSBwYWdlW2ldICsKKyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKHVpbnRwdHJfdClpb3JlcS0+di5pb3ZbaV0u
aW92X2Jhc2U7CiAgICAgfQogICAgIGlvcmVxLT5tYXBwZWQgPSAxOworICAgIGlvcmVxLT5udW1f
dW5tYXAgPSBuZXdfbWFwczsKICAgICByZXR1cm4gMDsKIH0KIApAQCAtNjkwLDYgKzc5MSw3IEBA
IHN0YXRpYyBpbnQgYmxrX2luaXQoc3RydWN0IFhlbkRldmljZSAqeGVuZGV2KQogCiAgICAgLyog
ZmlsbCBpbmZvICovCiAgICAgeGVuc3RvcmVfd3JpdGVfYmVfaW50KCZibGtkZXYtPnhlbmRldiwg
ImZlYXR1cmUtYmFycmllciIsIDEpOworICAgIHhlbnN0b3JlX3dyaXRlX2JlX2ludCgmYmxrZGV2
LT54ZW5kZXYsICJmZWF0dXJlLXBlcnNpc3RlbnQiLCAxKTsKICAgICB4ZW5zdG9yZV93cml0ZV9i
ZV9pbnQoJmJsa2Rldi0+eGVuZGV2LCAiaW5mbyIsICAgICAgICAgICAgaW5mbyk7CiAgICAgeGVu
c3RvcmVfd3JpdGVfYmVfaW50KCZibGtkZXYtPnhlbmRldiwgInNlY3Rvci1zaXplIiwgICAgIGJs
a2Rldi0+ZmlsZV9ibGspOwogICAgIHhlbnN0b3JlX3dyaXRlX2JlX2ludCgmYmxrZGV2LT54ZW5k
ZXYsICJzZWN0b3JzIiwKQEAgLTcxMyw2ICs4MTUsNyBAQCBvdXRfZXJyb3I6CiBzdGF0aWMgaW50
IGJsa19jb25uZWN0KHN0cnVjdCBYZW5EZXZpY2UgKnhlbmRldikKIHsKICAgICBzdHJ1Y3QgWGVu
QmxrRGV2ICpibGtkZXYgPSBjb250YWluZXJfb2YoeGVuZGV2LCBzdHJ1Y3QgWGVuQmxrRGV2LCB4
ZW5kZXYpOworICAgIGludCBwZXJzOwogCiAgICAgaWYgKHhlbnN0b3JlX3JlYWRfZmVfaW50KCZi
bGtkZXYtPnhlbmRldiwgInJpbmctcmVmIiwgJmJsa2Rldi0+cmluZ19yZWYpID09IC0xKSB7CiAg
ICAgICAgIHJldHVybiAtMTsKQEAgLTcyMSw2ICs4MjQsMTEgQEAgc3RhdGljIGludCBibGtfY29u
bmVjdChzdHJ1Y3QgWGVuRGV2aWNlICp4ZW5kZXYpCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICZibGtkZXYtPnhlbmRldi5yZW1vdGVfcG9ydCkgPT0gLTEpIHsKICAgICAgICAgcmV0dXJu
IC0xOwogICAgIH0KKyAgICBpZiAoeGVuc3RvcmVfcmVhZF9mZV9pbnQoJmJsa2Rldi0+eGVuZGV2
LCAiZmVhdHVyZS1wZXJzaXN0ZW50IiwgJnBlcnMpKSB7CisgICAgICAgIGJsa2Rldi0+ZmVhdHVy
ZV9wZXJzaXN0ZW50ID0gRkFMU0U7CisgICAgfSBlbHNlIHsKKyAgICAgICAgYmxrZGV2LT5mZWF0
dXJlX3BlcnNpc3RlbnQgPSAhIXBlcnM7CisgICAgfQogCiAgICAgYmxrZGV2LT5wcm90b2NvbCA9
IEJMS0lGX1BST1RPQ09MX05BVElWRTsKICAgICBpZiAoYmxrZGV2LT54ZW5kZXYucHJvdG9jb2wp
IHsKQEAgLTc2NCw2ICs4NzIsMTUgQEAgc3RhdGljIGludCBibGtfY29ubmVjdChzdHJ1Y3QgWGVu
RGV2aWNlICp4ZW5kZXYpCiAgICAgfQogICAgIH0KIAorICAgIGlmIChibGtkZXYtPmZlYXR1cmVf
cGVyc2lzdGVudCkgeworICAgICAgICAvKiBJbml0IHBlcnNpc3RlbnQgZ3JhbnRzICovCisgICAg
ICAgIGJsa2Rldi0+bWF4X2dyYW50cyA9IG1heF9yZXF1ZXN0cyAqIEJMS0lGX01BWF9TRUdNRU5U
U19QRVJfUkVRVUVTVDsKKyAgICAgICAgYmxrZGV2LT5wZXJzaXN0ZW50X2dudHMgPSBnX3RyZWVf
bmV3X2Z1bGwoKEdDb21wYXJlRGF0YUZ1bmMpaW50X2NtcCwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIE5VTEwsIE5VTEwsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAoR0Rlc3Ryb3lOb3RpZnkpZGVzdHJveV9ncmFudCk7
CisgICAgICAgIGJsa2Rldi0+cGVyc2lzdGVudF9nbnRfYyA9IDA7CisgICAgfQorCiAgICAgeGVu
X2JlX2JpbmRfZXZ0Y2huKCZibGtkZXYtPnhlbmRldik7CiAKICAgICB4ZW5fYmVfcHJpbnRmKCZi
bGtkZXYtPnhlbmRldiwgMSwgIm9rOiBwcm90byAlcywgcmluZy1yZWYgJWQsICIKQEAgLTgwNCw2
ICs5MjEsMTAgQEAgc3RhdGljIGludCBibGtfZnJlZShzdHJ1Y3QgWGVuRGV2aWNlICp4ZW5kZXYp
CiAgICAgICAgIGJsa19kaXNjb25uZWN0KHhlbmRldik7CiAgICAgfQogCisgICAgLyogRnJlZSBw
ZXJzaXN0ZW50IGdyYW50cyAqLworICAgIGlmIChibGtkZXYtPmZlYXR1cmVfcGVyc2lzdGVudCkK
KyAgICAgICAgZ190cmVlX2Rlc3Ryb3koYmxrZGV2LT5wZXJzaXN0ZW50X2dudHMpOworCiAgICAg
d2hpbGUgKCFRTElTVF9FTVBUWSgmYmxrZGV2LT5mcmVlbGlzdCkpIHsKICAgICAgICAgaW9yZXEg
PSBRTElTVF9GSVJTVCgmYmxrZGV2LT5mcmVlbGlzdCk7CiAgICAgICAgIFFMSVNUX1JFTU9WRShp
b3JlcSwgbGlzdCk7Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApY
ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Dec 31 12:16:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 12:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpeI6-00057w-0k; Mon, 31 Dec 2012 12:16:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TpeI4-00057g-9r
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 12:16:24 +0000
Received: from [85.158.139.211:36908] by server-14.bemta-5.messagelabs.com id
	56/29-09538-71281E05; Mon, 31 Dec 2012 12:16:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1356956182!19886148!2
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22999 invoked from network); 31 Dec 2012 12:16:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 12:16:23 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="396012"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 12:16:22 +0000
Received: from mac.citrite.net (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 31 Dec 2012 12:16:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Date: Mon, 31 Dec 2012 13:16:14 +0100
Message-ID: <1356956174-23548-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
References: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 3/3] xen_disk: add persistent grant support
	to xen_disk backend
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyBwcm90b2NvbCBleHRlbnNpb24gcmV1c2VzIHRoZSBzYW1lIHNldCBvZiBncmFudCBwYWdl
cyBmb3IgYWxsCnRyYW5zYWN0aW9ucyBiZXR3ZWVuIHRoZSBmcm9udC9iYWNrIGRyaXZlcnMsIGF2
b2lkaW5nIGV4cGVuc2l2ZSB0bGIKZmx1c2hlcywgZ3JhbnQgdGFibGUgbG9jayBjb250ZW50aW9u
IGFuZCBzd2l0Y2hlcyBiZXR3ZWVuIHVzZXJzcGFjZQphbmQga2VybmVsIHNwYWNlLiBUaGUgZnVs
bCBkZXNjcmlwdGlvbiBvZiB0aGUgcHJvdG9jb2wgY2FuIGJlIGZvdW5kIGluCnRoZSBwdWJsaWMg
YmxraWYuaCBoZWFkZXIuCgpTcGVlZCBpbXByb3ZlbWVudCB3aXRoIDE1IGd1ZXN0cyBwZXJmb3Jt
aW5nIEkvTyBpcyB+NDUwJS4KClNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2Vy
LnBhdUBjaXRyaXguY29tPgpDYzogeGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKQ2M6IFN0ZWZhbm8g
U3RhYmVsbGluaSA8U3RlZmFuby5TdGFiZWxsaW5pQGV1LmNpdHJpeC5jb20+CkNjOiBBbnRob255
IFBFUkFSRCA8YW50aG9ueS5wZXJhcmRAY2l0cml4LmNvbT4KLS0tClBlcmZvcm1hbmNlIGNvbXBh
cmlzb24gd2l0aCB0aGUgcHJldmlvdXMgaW1wbGVtZW50YXRpb24gY2FuIGJlIHNlZW4gaW4KdGhl
IGZvbGxvd2lnbiBncmFwaDoKCmh0dHA6Ly94ZW5iaXRzLnhlbi5vcmcvcGVvcGxlL3JveWdlci9w
ZXJzaXN0ZW50X3JlYWRfcWVtdS5wbmcKLS0tCiBody94ZW5fZGlzay5jIHwgIDE1NSArKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tLS0tLQogMSBmaWxl
cyBjaGFuZ2VkLCAxMzggaW5zZXJ0aW9ucygrKSwgMTcgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0
IGEvaHcveGVuX2Rpc2suYyBiL2h3L3hlbl9kaXNrLmMKaW5kZXggMWViNDg1YS4uYmFmZWNlYiAx
MDA2NDQKLS0tIGEvaHcveGVuX2Rpc2suYworKysgYi9ody94ZW5fZGlzay5jCkBAIC01Miw2ICs1
MiwxMSBAQCBzdGF0aWMgaW50IG1heF9yZXF1ZXN0cyA9IDMyOwogI2RlZmluZSBCTE9DS19TSVpF
ICA1MTIKICNkZWZpbmUgSU9DQl9DT1VOVCAgKEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVT
VCArIDIpCiAKK3N0cnVjdCBwZXJzaXN0ZW50X2dudCB7CisgICAgdm9pZCAqcGFnZTsKKyAgICBz
dHJ1Y3QgWGVuQmxrRGV2ICpibGtkZXY7Cit9OworCiBzdHJ1Y3QgaW9yZXEgewogICAgIGJsa2lm
X3JlcXVlc3RfdCAgICAgcmVxOwogICAgIGludDE2X3QgICAgICAgICAgICAgc3RhdHVzOwpAQCAt
NjksNiArNzQsNyBAQCBzdHJ1Y3QgaW9yZXEgewogICAgIGludCAgICAgICAgICAgICAgICAgcHJv
dDsKICAgICB2b2lkICAgICAgICAgICAgICAgICpwYWdlW0JMS0lGX01BWF9TRUdNRU5UU19QRVJf
UkVRVUVTVF07CiAgICAgdm9pZCAgICAgICAgICAgICAgICAqcGFnZXM7CisgICAgaW50ICAgICAg
ICAgICAgICAgICBudW1fdW5tYXA7CiAKICAgICAvKiBhaW8gc3RhdHVzICovCiAgICAgaW50ICAg
ICAgICAgICAgICAgICBhaW9faW5mbGlnaHQ7CkBAIC0xMDUsNiArMTExLDEyIEBAIHN0cnVjdCBY
ZW5CbGtEZXYgewogICAgIGludCAgICAgICAgICAgICAgICAgcmVxdWVzdHNfaW5mbGlnaHQ7CiAg
ICAgaW50ICAgICAgICAgICAgICAgICByZXF1ZXN0c19maW5pc2hlZDsKIAorICAgIC8qIFBlcnNp
c3RlbnQgZ3JhbnRzIGV4dGVuc2lvbiAqLworICAgIGdib29sZWFuICAgICAgICAgICAgZmVhdHVy
ZV9wZXJzaXN0ZW50OworICAgIEdUcmVlICAgICAgICAgICAgICAgKnBlcnNpc3RlbnRfZ250czsK
KyAgICB1bnNpZ25lZCBpbnQgICAgICAgIHBlcnNpc3RlbnRfZ250X2M7CisgICAgdW5zaWduZWQg
aW50ICAgICAgICBtYXhfZ3JhbnRzOworCiAgICAgLyogcWVtdSBibG9jayBkcml2ZXIgKi8KICAg
ICBEcml2ZUluZm8gICAgICAgICAgICpkaW5mbzsKICAgICBCbG9ja0RyaXZlclN0YXRlICAgICpi
czsKQEAgLTEzOCw2ICsxNTAsMjkgQEAgc3RhdGljIHZvaWQgaW9yZXFfcmVzZXQoc3RydWN0IGlv
cmVxICppb3JlcSkKICAgICBxZW11X2lvdmVjX3Jlc2V0KCZpb3JlcS0+dik7CiB9CiAKK3N0YXRp
YyBnaW50IGludF9jbXAoZ2NvbnN0cG9pbnRlciBhLCBnY29uc3Rwb2ludGVyIGIsIGdwb2ludGVy
IHVzZXJfZGF0YSkKK3sKKyAgICB1aW50IHVhID0gR1BPSU5URVJfVE9fVUlOVChhKTsKKyAgICB1
aW50IHViID0gR1BPSU5URVJfVE9fVUlOVChiKTsKKyAgICByZXR1cm4gKHVhID4gdWIpIC0gKHVh
IDwgdWIpOworfQorCitzdGF0aWMgdm9pZCBkZXN0cm95X2dyYW50KGdwb2ludGVyIHBnbnQpCit7
CisgICAgc3RydWN0IHBlcnNpc3RlbnRfZ250ICpncmFudCA9IHBnbnQ7CisgICAgWGVuR250dGFi
IGdudCA9IGdyYW50LT5ibGtkZXYtPnhlbmRldi5nbnR0YWJkZXY7CisKKyAgICBpZiAoeGNfZ250
dGFiX211bm1hcChnbnQsIGdyYW50LT5wYWdlLCAxKSAhPSAwKSB7CisgICAgICAgIHhlbl9iZV9w
cmludGYoJmdyYW50LT5ibGtkZXYtPnhlbmRldiwgMCwKKyAgICAgICAgICAgICAgICAgICAgICAi
eGNfZ250dGFiX211bm1hcCBmYWlsZWQ6ICVzXG4iLAorICAgICAgICAgICAgICAgICAgICAgIHN0
cmVycm9yKGVycm5vKSk7CisgICAgfQorICAgIGdyYW50LT5ibGtkZXYtPnBlcnNpc3RlbnRfZ250
X2MtLTsKKyAgICB4ZW5fYmVfcHJpbnRmKCZncmFudC0+YmxrZGV2LT54ZW5kZXYsIDMsCisgICAg
ICAgICAgICAgICAgICAidW5tYXBwZWQgZ3JhbnQgJXBcbiIsIGdyYW50LT5wYWdlKTsKKyAgICBn
X2ZyZWUoZ3JhbnQpOworfQorCiBzdGF0aWMgc3RydWN0IGlvcmVxICppb3JlcV9zdGFydChzdHJ1
Y3QgWGVuQmxrRGV2ICpibGtkZXYpCiB7CiAgICAgc3RydWN0IGlvcmVxICppb3JlcSA9IE5VTEw7
CkBAIC0yNjYsMjEgKzMwMSwyMSBAQCBzdGF0aWMgdm9pZCBpb3JlcV91bm1hcChzdHJ1Y3QgaW9y
ZXEgKmlvcmVxKQogICAgIFhlbkdudHRhYiBnbnQgPSBpb3JlcS0+YmxrZGV2LT54ZW5kZXYuZ250
dGFiZGV2OwogICAgIGludCBpOwogCi0gICAgaWYgKGlvcmVxLT52Lm5pb3YgPT0gMCB8fCBpb3Jl
cS0+bWFwcGVkID09IDApIHsKKyAgICBpZiAoaW9yZXEtPm51bV91bm1hcCA9PSAwIHx8IGlvcmVx
LT5tYXBwZWQgPT0gMCkgewogICAgICAgICByZXR1cm47CiAgICAgfQogICAgIGlmIChiYXRjaF9t
YXBzKSB7CiAgICAgICAgIGlmICghaW9yZXEtPnBhZ2VzKSB7CiAgICAgICAgICAgICByZXR1cm47
CiAgICAgICAgIH0KLSAgICAgICAgaWYgKHhjX2dudHRhYl9tdW5tYXAoZ250LCBpb3JlcS0+cGFn
ZXMsIGlvcmVxLT52Lm5pb3YpICE9IDApIHsKKyAgICAgICAgaWYgKHhjX2dudHRhYl9tdW5tYXAo
Z250LCBpb3JlcS0+cGFnZXMsIGlvcmVxLT5udW1fdW5tYXApICE9IDApIHsKICAgICAgICAgICAg
IHhlbl9iZV9wcmludGYoJmlvcmVxLT5ibGtkZXYtPnhlbmRldiwgMCwgInhjX2dudHRhYl9tdW5t
YXAgZmFpbGVkOiAlc1xuIiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RyZXJyb3IoZXJy
bm8pKTsKICAgICAgICAgfQotICAgICAgICBpb3JlcS0+YmxrZGV2LT5jbnRfbWFwIC09IGlvcmVx
LT52Lm5pb3Y7CisgICAgICAgIGlvcmVxLT5ibGtkZXYtPmNudF9tYXAgLT0gaW9yZXEtPm51bV91
bm1hcDsKICAgICAgICAgaW9yZXEtPnBhZ2VzID0gTlVMTDsKICAgICB9IGVsc2UgewotICAgICAg
ICBmb3IgKGkgPSAwOyBpIDwgaW9yZXEtPnYubmlvdjsgaSsrKSB7CisgICAgICAgIGZvciAoaSA9
IDA7IGkgPCBpb3JlcS0+bnVtX3VubWFwOyBpKyspIHsKICAgICAgICAgICAgIGlmICghaW9yZXEt
PnBhZ2VbaV0pIHsKICAgICAgICAgICAgICAgICBjb250aW51ZTsKICAgICAgICAgICAgIH0KQEAg
LTI5OCw0MSArMzMzLDEwNyBAQCBzdGF0aWMgdm9pZCBpb3JlcV91bm1hcChzdHJ1Y3QgaW9yZXEg
KmlvcmVxKQogc3RhdGljIGludCBpb3JlcV9tYXAoc3RydWN0IGlvcmVxICppb3JlcSkKIHsKICAg
ICBYZW5HbnR0YWIgZ250ID0gaW9yZXEtPmJsa2Rldi0+eGVuZGV2LmdudHRhYmRldjsKLSAgICBp
bnQgaTsKKyAgICB1aW50MzJfdCBkb21pZHNbQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNU
XTsKKyAgICB1aW50MzJfdCByZWZzW0JMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVF07Cisg
ICAgdm9pZCAqcGFnZVtCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JFUVVFU1RdOworICAgIGludCBp
LCBqLCBuZXdfbWFwcyA9IDA7CisgICAgc3RydWN0IHBlcnNpc3RlbnRfZ250ICpncmFudDsKIAog
ICAgIGlmIChpb3JlcS0+di5uaW92ID09IDAgfHwgaW9yZXEtPm1hcHBlZCA9PSAxKSB7CiAgICAg
ICAgIHJldHVybiAwOwogICAgIH0KLSAgICBpZiAoYmF0Y2hfbWFwcykgeworICAgIGlmIChpb3Jl
cS0+YmxrZGV2LT5mZWF0dXJlX3BlcnNpc3RlbnQpIHsKKyAgICAgICAgZm9yIChpID0gMDsgaSA8
IGlvcmVxLT52Lm5pb3Y7IGkrKykgeworICAgICAgICAgICAgZ3JhbnQgPSBnX3RyZWVfbG9va3Vw
KGlvcmVxLT5ibGtkZXYtPnBlcnNpc3RlbnRfZ250cywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIEdVSU5UX1RPX1BPSU5URVIoaW9yZXEtPnJlZnNbaV0pKTsKKworICAgICAg
ICAgICAgaWYgKGdyYW50ICE9IE5VTEwpIHsKKyAgICAgICAgICAgICAgICBwYWdlW2ldID0gZ3Jh
bnQtPnBhZ2U7CisgICAgICAgICAgICAgICAgeGVuX2JlX3ByaW50ZigmaW9yZXEtPmJsa2Rldi0+
eGVuZGV2LCAzLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgInVzaW5nIHBlcnNpc3Rl
bnQtZ3JhbnQgJSIgUFJJdTMyICJcbiIsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBp
b3JlcS0+cmVmc1tpXSk7CisgICAgICAgICAgICB9IGVsc2UgeworICAgICAgICAgICAgICAgICAg
ICAvKiBBZGQgdGhlIGdyYW50IHRvIHRoZSBsaXN0IG9mIGdyYW50cyB0aGF0CisgICAgICAgICAg
ICAgICAgICAgICAqIHNob3VsZCBiZSBtYXBwZWQKKyAgICAgICAgICAgICAgICAgICAgICovCisg
ICAgICAgICAgICAgICAgICAgIGRvbWlkc1tuZXdfbWFwc10gPSBpb3JlcS0+ZG9taWRzW2ldOwor
ICAgICAgICAgICAgICAgICAgICByZWZzW25ld19tYXBzXSA9IGlvcmVxLT5yZWZzW2ldOworICAg
ICAgICAgICAgICAgICAgICBwYWdlW2ldID0gTlVMTDsKKyAgICAgICAgICAgICAgICAgICAgbmV3
X21hcHMrKzsKKyAgICAgICAgICAgIH0KKyAgICAgICAgfQorICAgICAgICAvKiBTZXQgdGhlIHBy
b3RlY3Rpb24gdG8gUlcsIHNpbmNlIGdyYW50cyBtYXkgYmUgcmV1c2VkIGxhdGVyCisgICAgICAg
ICAqIHdpdGggYSBkaWZmZXJlbnQgcHJvdGVjdGlvbiB0aGFuIHRoZSBvbmUgbmVlZGVkIGZvciB0
aGlzIHJlcXVlc3QKKyAgICAgICAgICovCisgICAgICAgIGlvcmVxLT5wcm90ID0gUFJPVF9XUklU
RSB8IFBST1RfUkVBRDsKKyAgICB9IGVsc2UgeworICAgICAgICAvKiBBbGwgZ3JhbnRzIGluIHRo
ZSByZXF1ZXN0IHNob3VsZCBiZSBtYXBwZWQgKi8KKyAgICAgICAgbWVtY3B5KHJlZnMsIGlvcmVx
LT5yZWZzLCBzaXplb2YocmVmcykpOworICAgICAgICBtZW1jcHkoZG9taWRzLCBpb3JlcS0+ZG9t
aWRzLCBzaXplb2YoZG9taWRzKSk7CisgICAgICAgIG1lbXNldChwYWdlLCAwLCBzaXplb2YocGFn
ZSkpOworICAgICAgICBuZXdfbWFwcyA9IGlvcmVxLT52Lm5pb3Y7CisgICAgfQorCisgICAgaWYg
KGJhdGNoX21hcHMgJiYgbmV3X21hcHMpIHsKICAgICAgICAgaW9yZXEtPnBhZ2VzID0geGNfZ250
dGFiX21hcF9ncmFudF9yZWZzCi0gICAgICAgICAgICAoZ250LCBpb3JlcS0+di5uaW92LCBpb3Jl
cS0+ZG9taWRzLCBpb3JlcS0+cmVmcywgaW9yZXEtPnByb3QpOworICAgICAgICAgICAgKGdudCwg
bmV3X21hcHMsIGRvbWlkcywgcmVmcywgaW9yZXEtPnByb3QpOwogICAgICAgICBpZiAoaW9yZXEt
PnBhZ2VzID09IE5VTEwpIHsKICAgICAgICAgICAgIHhlbl9iZV9wcmludGYoJmlvcmVxLT5ibGtk
ZXYtPnhlbmRldiwgMCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgImNhbid0IG1hcCAlZCBn
cmFudCByZWZzICglcywgJWQgbWFwcylcbiIsCi0gICAgICAgICAgICAgICAgICAgICAgICAgIGlv
cmVxLT52Lm5pb3YsIHN0cmVycm9yKGVycm5vKSwgaW9yZXEtPmJsa2Rldi0+Y250X21hcCk7Cisg
ICAgICAgICAgICAgICAgICAgICAgICAgIG5ld19tYXBzLCBzdHJlcnJvcihlcnJubyksIGlvcmVx
LT5ibGtkZXYtPmNudF9tYXApOwogICAgICAgICAgICAgcmV0dXJuIC0xOwogICAgICAgICB9Ci0g
ICAgICAgIGZvciAoaSA9IDA7IGkgPCBpb3JlcS0+di5uaW92OyBpKyspIHsKLSAgICAgICAgICAg
IGlvcmVxLT52LmlvdltpXS5pb3ZfYmFzZSA9IGlvcmVxLT5wYWdlcyArIGkgKiBYQ19QQUdFX1NJ
WkUgKwotICAgICAgICAgICAgICAgICh1aW50cHRyX3QpaW9yZXEtPnYuaW92W2ldLmlvdl9iYXNl
OworICAgICAgICBmb3IgKGkgPSAwLCBqID0gMDsgaSA8IGlvcmVxLT52Lm5pb3Y7IGkrKykgewor
ICAgICAgICAgICAgaWYgKHBhZ2VbaV0gPT0gTlVMTCkKKyAgICAgICAgICAgICAgICBwYWdlW2ld
ID0gaW9yZXEtPnBhZ2VzICsgKGorKykgKiBYQ19QQUdFX1NJWkU7CiAgICAgICAgIH0KLSAgICAg
ICAgaW9yZXEtPmJsa2Rldi0+Y250X21hcCArPSBpb3JlcS0+di5uaW92OwotICAgIH0gZWxzZSAg
ewotICAgICAgICBmb3IgKGkgPSAwOyBpIDwgaW9yZXEtPnYubmlvdjsgaSsrKSB7CisgICAgICAg
IGlvcmVxLT5ibGtkZXYtPmNudF9tYXAgKz0gbmV3X21hcHM7CisgICAgfSBlbHNlIGlmIChuZXdf
bWFwcykgIHsKKyAgICAgICAgZm9yIChpID0gMDsgaSA8IG5ld19tYXBzOyBpKyspIHsKICAgICAg
ICAgICAgIGlvcmVxLT5wYWdlW2ldID0geGNfZ250dGFiX21hcF9ncmFudF9yZWYKLSAgICAgICAg
ICAgICAgICAoZ250LCBpb3JlcS0+ZG9taWRzW2ldLCBpb3JlcS0+cmVmc1tpXSwgaW9yZXEtPnBy
b3QpOworICAgICAgICAgICAgICAgIChnbnQsIGRvbWlkc1tpXSwgcmVmc1tpXSwgaW9yZXEtPnBy
b3QpOwogICAgICAgICAgICAgaWYgKGlvcmVxLT5wYWdlW2ldID09IE5VTEwpIHsKICAgICAgICAg
ICAgICAgICB4ZW5fYmVfcHJpbnRmKCZpb3JlcS0+YmxrZGV2LT54ZW5kZXYsIDAsCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAiY2FuJ3QgbWFwIGdyYW50IHJlZiAlZCAoJXMsICVkIG1h
cHMpXG4iLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW9yZXEtPnJlZnNbaV0sIHN0
cmVycm9yKGVycm5vKSwgaW9yZXEtPmJsa2Rldi0+Y250X21hcCk7CisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICByZWZzW2ldLCBzdHJlcnJvcihlcnJubyksIGlvcmVxLT5ibGtkZXYtPmNu
dF9tYXApOwogICAgICAgICAgICAgICAgIGlvcmVxX3VubWFwKGlvcmVxKTsKICAgICAgICAgICAg
ICAgICByZXR1cm4gLTE7CiAgICAgICAgICAgICB9Ci0gICAgICAgICAgICBpb3JlcS0+di5pb3Zb
aV0uaW92X2Jhc2UgPSBpb3JlcS0+cGFnZVtpXSArICh1aW50cHRyX3QpaW9yZXEtPnYuaW92W2ld
Lmlvdl9iYXNlOwogICAgICAgICAgICAgaW9yZXEtPmJsa2Rldi0+Y250X21hcCsrOwogICAgICAg
ICB9CisgICAgICAgIGZvciAoaSA9IDAsIGogPSAwOyBpIDwgaW9yZXEtPnYubmlvdjsgaSsrKSB7
CisgICAgICAgICAgICBpZiAocGFnZVtpXSA9PSBOVUxMKQorICAgICAgICAgICAgICAgIHBhZ2Vb
aV0gPSBpb3JlcS0+cGFnZVtqKytdOworICAgICAgICB9CisgICAgfQorICAgIGlmIChpb3JlcS0+
YmxrZGV2LT5mZWF0dXJlX3BlcnNpc3RlbnQpIHsKKyAgICAgICAgd2hpbGUoKGlvcmVxLT5ibGtk
ZXYtPnBlcnNpc3RlbnRfZ250X2MgPCBpb3JlcS0+YmxrZGV2LT5tYXhfZ3JhbnRzKSAmJgorICAg
ICAgICAgICAgICBuZXdfbWFwcykgeworICAgICAgICAgICAgLyogR28gdGhyb3VnaCB0aGUgbGlz
dCBvZiBuZXdseSBtYXBwZWQgZ3JhbnRzIGFuZCBhZGQgYXMgbWFueQorICAgICAgICAgICAgICog
YXMgcG9zc2libGUgdG8gdGhlIGxpc3Qgb2YgcGVyc2lzdGVudGx5IG1hcHBlZCBncmFudHMKKyAg
ICAgICAgICAgICAqLworICAgICAgICAgICAgZ3JhbnQgPSBnX21hbGxvYzAoc2l6ZW9mKCpncmFu
dCkpOworICAgICAgICAgICAgbmV3X21hcHMtLTsKKyAgICAgICAgICAgIGlmIChiYXRjaF9tYXBz
KQorICAgICAgICAgICAgICAgIGdyYW50LT5wYWdlID0gaW9yZXEtPnBhZ2VzICsgKG5ld19tYXBz
KSAqIFhDX1BBR0VfU0laRTsKKyAgICAgICAgICAgIGVsc2UKKyAgICAgICAgICAgICAgICBncmFu
dC0+cGFnZSA9IGlvcmVxLT5wYWdlW25ld19tYXBzXTsKKyAgICAgICAgICAgIGdyYW50LT5ibGtk
ZXYgPSBpb3JlcS0+YmxrZGV2OworICAgICAgICAgICAgeGVuX2JlX3ByaW50ZigmaW9yZXEtPmJs
a2Rldi0+eGVuZGV2LCAzLAorICAgICAgICAgICAgICAgICAgICAgICAgICAiYWRkaW5nIGdyYW50
ICUiIFBSSXUzMiAiIHBhZ2U6ICVwXG4iLAorICAgICAgICAgICAgICAgICAgICAgICAgICByZWZz
W25ld19tYXBzXSwgZ3JhbnQtPnBhZ2UpOworICAgICAgICAgICAgZ190cmVlX2luc2VydChpb3Jl
cS0+YmxrZGV2LT5wZXJzaXN0ZW50X2dudHMsCisgICAgICAgICAgICAgICAgICAgICAgICAgIEdV
SU5UX1RPX1BPSU5URVIocmVmc1tuZXdfbWFwc10pLAorICAgICAgICAgICAgICAgICAgICAgICAg
ICBncmFudCk7CisgICAgICAgICAgICBpb3JlcS0+YmxrZGV2LT5wZXJzaXN0ZW50X2dudF9jKys7
CisgICAgICAgIH0KKyAgICB9CisgICAgZm9yIChpID0gMDsgaSA8IGlvcmVxLT52Lm5pb3Y7IGkr
KykgeworICAgICAgICBpb3JlcS0+di5pb3ZbaV0uaW92X2Jhc2UgPSBwYWdlW2ldICsKKyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKHVpbnRwdHJfdClpb3JlcS0+di5pb3ZbaV0u
aW92X2Jhc2U7CiAgICAgfQogICAgIGlvcmVxLT5tYXBwZWQgPSAxOworICAgIGlvcmVxLT5udW1f
dW5tYXAgPSBuZXdfbWFwczsKICAgICByZXR1cm4gMDsKIH0KIApAQCAtNjkwLDYgKzc5MSw3IEBA
IHN0YXRpYyBpbnQgYmxrX2luaXQoc3RydWN0IFhlbkRldmljZSAqeGVuZGV2KQogCiAgICAgLyog
ZmlsbCBpbmZvICovCiAgICAgeGVuc3RvcmVfd3JpdGVfYmVfaW50KCZibGtkZXYtPnhlbmRldiwg
ImZlYXR1cmUtYmFycmllciIsIDEpOworICAgIHhlbnN0b3JlX3dyaXRlX2JlX2ludCgmYmxrZGV2
LT54ZW5kZXYsICJmZWF0dXJlLXBlcnNpc3RlbnQiLCAxKTsKICAgICB4ZW5zdG9yZV93cml0ZV9i
ZV9pbnQoJmJsa2Rldi0+eGVuZGV2LCAiaW5mbyIsICAgICAgICAgICAgaW5mbyk7CiAgICAgeGVu
c3RvcmVfd3JpdGVfYmVfaW50KCZibGtkZXYtPnhlbmRldiwgInNlY3Rvci1zaXplIiwgICAgIGJs
a2Rldi0+ZmlsZV9ibGspOwogICAgIHhlbnN0b3JlX3dyaXRlX2JlX2ludCgmYmxrZGV2LT54ZW5k
ZXYsICJzZWN0b3JzIiwKQEAgLTcxMyw2ICs4MTUsNyBAQCBvdXRfZXJyb3I6CiBzdGF0aWMgaW50
IGJsa19jb25uZWN0KHN0cnVjdCBYZW5EZXZpY2UgKnhlbmRldikKIHsKICAgICBzdHJ1Y3QgWGVu
QmxrRGV2ICpibGtkZXYgPSBjb250YWluZXJfb2YoeGVuZGV2LCBzdHJ1Y3QgWGVuQmxrRGV2LCB4
ZW5kZXYpOworICAgIGludCBwZXJzOwogCiAgICAgaWYgKHhlbnN0b3JlX3JlYWRfZmVfaW50KCZi
bGtkZXYtPnhlbmRldiwgInJpbmctcmVmIiwgJmJsa2Rldi0+cmluZ19yZWYpID09IC0xKSB7CiAg
ICAgICAgIHJldHVybiAtMTsKQEAgLTcyMSw2ICs4MjQsMTEgQEAgc3RhdGljIGludCBibGtfY29u
bmVjdChzdHJ1Y3QgWGVuRGV2aWNlICp4ZW5kZXYpCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICZibGtkZXYtPnhlbmRldi5yZW1vdGVfcG9ydCkgPT0gLTEpIHsKICAgICAgICAgcmV0dXJu
IC0xOwogICAgIH0KKyAgICBpZiAoeGVuc3RvcmVfcmVhZF9mZV9pbnQoJmJsa2Rldi0+eGVuZGV2
LCAiZmVhdHVyZS1wZXJzaXN0ZW50IiwgJnBlcnMpKSB7CisgICAgICAgIGJsa2Rldi0+ZmVhdHVy
ZV9wZXJzaXN0ZW50ID0gRkFMU0U7CisgICAgfSBlbHNlIHsKKyAgICAgICAgYmxrZGV2LT5mZWF0
dXJlX3BlcnNpc3RlbnQgPSAhIXBlcnM7CisgICAgfQogCiAgICAgYmxrZGV2LT5wcm90b2NvbCA9
IEJMS0lGX1BST1RPQ09MX05BVElWRTsKICAgICBpZiAoYmxrZGV2LT54ZW5kZXYucHJvdG9jb2wp
IHsKQEAgLTc2NCw2ICs4NzIsMTUgQEAgc3RhdGljIGludCBibGtfY29ubmVjdChzdHJ1Y3QgWGVu
RGV2aWNlICp4ZW5kZXYpCiAgICAgfQogICAgIH0KIAorICAgIGlmIChibGtkZXYtPmZlYXR1cmVf
cGVyc2lzdGVudCkgeworICAgICAgICAvKiBJbml0IHBlcnNpc3RlbnQgZ3JhbnRzICovCisgICAg
ICAgIGJsa2Rldi0+bWF4X2dyYW50cyA9IG1heF9yZXF1ZXN0cyAqIEJMS0lGX01BWF9TRUdNRU5U
U19QRVJfUkVRVUVTVDsKKyAgICAgICAgYmxrZGV2LT5wZXJzaXN0ZW50X2dudHMgPSBnX3RyZWVf
bmV3X2Z1bGwoKEdDb21wYXJlRGF0YUZ1bmMpaW50X2NtcCwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIE5VTEwsIE5VTEwsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAoR0Rlc3Ryb3lOb3RpZnkpZGVzdHJveV9ncmFudCk7
CisgICAgICAgIGJsa2Rldi0+cGVyc2lzdGVudF9nbnRfYyA9IDA7CisgICAgfQorCiAgICAgeGVu
X2JlX2JpbmRfZXZ0Y2huKCZibGtkZXYtPnhlbmRldik7CiAKICAgICB4ZW5fYmVfcHJpbnRmKCZi
bGtkZXYtPnhlbmRldiwgMSwgIm9rOiBwcm90byAlcywgcmluZy1yZWYgJWQsICIKQEAgLTgwNCw2
ICs5MjEsMTAgQEAgc3RhdGljIGludCBibGtfZnJlZShzdHJ1Y3QgWGVuRGV2aWNlICp4ZW5kZXYp
CiAgICAgICAgIGJsa19kaXNjb25uZWN0KHhlbmRldik7CiAgICAgfQogCisgICAgLyogRnJlZSBw
ZXJzaXN0ZW50IGdyYW50cyAqLworICAgIGlmIChibGtkZXYtPmZlYXR1cmVfcGVyc2lzdGVudCkK
KyAgICAgICAgZ190cmVlX2Rlc3Ryb3koYmxrZGV2LT5wZXJzaXN0ZW50X2dudHMpOworCiAgICAg
d2hpbGUgKCFRTElTVF9FTVBUWSgmYmxrZGV2LT5mcmVlbGlzdCkpIHsKICAgICAgICAgaW9yZXEg
PSBRTElTVF9GSVJTVCgmYmxrZGV2LT5mcmVlbGlzdCk7CiAgICAgICAgIFFMSVNUX1JFTU9WRShp
b3JlcSwgbGlzdCk7Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApY
ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Dec 31 12:16:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 12:16:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpeIC-00058V-Dx; Mon, 31 Dec 2012 12:16:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1TpeIA-00058E-Mb
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 12:16:30 +0000
Received: from [85.158.139.83:40348] by server-16.bemta-5.messagelabs.com id
	2C/0A-09208-E1281E05; Mon, 31 Dec 2012 12:16:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1356956181!31811685!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17458 invoked from network); 31 Dec 2012 12:16:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 12:16:21 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="396010"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 12:16:21 +0000
Received: from mac.citrite.net (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 31 Dec 2012 12:16:21 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Date: Mon, 31 Dec 2012 13:16:12 +0100
Message-ID: <1356956174-23548-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
References: <1356956174-23548-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC 1/3] xen_disk: handle disk files on
	ramfs/tmpfs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RmlsZXMgdGhhdCByZXNpZGUgb24gcmFtZnMgb3IgdG1wZnMgY2Fubm90IGJlIG9wZW5lZCB3aXRo
IE9fRElSRUNULAppZiBmaXJzdCBjYWxsIHRvIGJkcnZfb3BlbiBmYWlscyB3aXRoIGVycm5vID0g
RUlOVkFMLCB0cnkgYSBzZWNvbmQKY2FsbCB3aXRob3V0IEJEUlZfT19OT0NBQ0hFLgoKU2lnbmVk
LW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+CkNjOiB4ZW4t
ZGV2ZWxAbGlzdHMueGVuLm9yZwpDYzogU3RlZmFubyBTdGFiZWxsaW5pIDxTdGVmYW5vLlN0YWJl
bGxpbmlAZXUuY2l0cml4LmNvbT4KQ2M6IEFudGhvbnkgUEVSQVJEIDxhbnRob255LnBlcmFyZEBj
aXRyaXguY29tPgotLS0KIGh3L3hlbl9kaXNrLmMgfCAgIDE2ICsrKysrKysrKysrKystLS0KIDEg
ZmlsZXMgY2hhbmdlZCwgMTMgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1n
aXQgYS9ody94ZW5fZGlzay5jIGIvaHcveGVuX2Rpc2suYwppbmRleCBlNmJiMmYyLi5hMTU5ZWU1
IDEwMDY0NAotLS0gYS9ody94ZW5fZGlzay5jCisrKyBiL2h3L3hlbl9kaXNrLmMKQEAgLTU2Miw3
ICs1NjIsNyBAQCBzdGF0aWMgdm9pZCBibGtfYWxsb2Moc3RydWN0IFhlbkRldmljZSAqeGVuZGV2
KQogc3RhdGljIGludCBibGtfaW5pdChzdHJ1Y3QgWGVuRGV2aWNlICp4ZW5kZXYpCiB7CiAgICAg
c3RydWN0IFhlbkJsa0RldiAqYmxrZGV2ID0gY29udGFpbmVyX29mKHhlbmRldiwgc3RydWN0IFhl
bkJsa0RldiwgeGVuZGV2KTsKLSAgICBpbnQgaW5kZXgsIHFmbGFncywgaW5mbyA9IDA7CisgICAg
aW50IGluZGV4LCBxZmxhZ3MsIGluZm8gPSAwLCByYzsKIAogICAgIC8qIHJlYWQgeGVuc3RvcmUg
ZW50cmllcyAqLwogICAgIGlmIChibGtkZXYtPnBhcmFtcyA9PSBOVUxMKSB7CkBAIC02MjUsOCAr
NjI1LDE4IEBAIHN0YXRpYyBpbnQgYmxrX2luaXQoc3RydWN0IFhlbkRldmljZSAqeGVuZGV2KQog
ICAgICAgICB4ZW5fYmVfcHJpbnRmKCZibGtkZXYtPnhlbmRldiwgMiwgImNyZWF0ZSBuZXcgYmRy
diAoeGVuYnVzIHNldHVwKVxuIik7CiAgICAgICAgIGJsa2Rldi0+YnMgPSBiZHJ2X25ldyhibGtk
ZXYtPmRldik7CiAgICAgICAgIGlmIChibGtkZXYtPmJzKSB7Ci0gICAgICAgICAgICBpZiAoYmRy
dl9vcGVuKGJsa2Rldi0+YnMsIGJsa2Rldi0+ZmlsZW5hbWUsIHFmbGFncywKLSAgICAgICAgICAg
ICAgICAgICAgICAgIGJkcnZfZmluZF93aGl0ZWxpc3RlZF9mb3JtYXQoYmxrZGV2LT5maWxlcHJv
dG8pKSAhPSAwKSB7CisgICAgICAgICAgICByYyA9IGJkcnZfb3BlbihibGtkZXYtPmJzLCBibGtk
ZXYtPmZpbGVuYW1lLCBxZmxhZ3MsCisgICAgICAgICAgICAgICAgICAgICAgICBiZHJ2X2ZpbmRf
d2hpdGVsaXN0ZWRfZm9ybWF0KGJsa2Rldi0+ZmlsZXByb3RvKSk7CisgICAgICAgICAgICBpZiAo
cmMgIT0gMCAmJiBlcnJubyA9PSBFSU5WQUwpIHsKKyAgICAgICAgICAgICAgICAvKiBGaWxlcyBv
biByYW1mcyBvciB0bXBmcyBjYW5ub3QgYmUgb3BlbmVkIHdpdGggT19ESVJFQ1QsCisgICAgICAg
ICAgICAgICAgICogcmVtb3ZlIHRoZSBCRFJWX09fTk9DQUNIRSBmbGFnLCBhbmQgdHJ5IHRvIG9w
ZW4KKyAgICAgICAgICAgICAgICAgKiB0aGUgZmlsZSBhZ2Fpbi4KKyAgICAgICAgICAgICAgICAg
Ki8KKyAgICAgICAgICAgICAgICBxZmxhZ3MgJj0gfkJEUlZfT19OT0NBQ0hFOworICAgICAgICAg
ICAgICAgIHJjID0gYmRydl9vcGVuKGJsa2Rldi0+YnMsIGJsa2Rldi0+ZmlsZW5hbWUsIHFmbGFn
cywKKyAgICAgICAgICAgICAgICAgICAgICAgIGJkcnZfZmluZF93aGl0ZWxpc3RlZF9mb3JtYXQo
YmxrZGV2LT5maWxlcHJvdG8pKTsKKyAgICAgICAgICAgIH0KKyAgICAgICAgICAgIGlmIChyYyAh
PSAwKSB7CiAgICAgICAgICAgICAgICAgYmRydl9kZWxldGUoYmxrZGV2LT5icyk7CiAgICAgICAg
ICAgICAgICAgYmxrZGV2LT5icyA9IE5VTEw7CiAgICAgICAgICAgICB9Ci0tIAoxLjcuNy41IChB
cHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RmlsZXMgdGhhdCByZXNpZGUgb24gcmFtZnMgb3IgdG1wZnMgY2Fubm90IGJlIG9wZW5lZCB3aXRo
IE9fRElSRUNULAppZiBmaXJzdCBjYWxsIHRvIGJkcnZfb3BlbiBmYWlscyB3aXRoIGVycm5vID0g
RUlOVkFMLCB0cnkgYSBzZWNvbmQKY2FsbCB3aXRob3V0IEJEUlZfT19OT0NBQ0hFLgoKU2lnbmVk
LW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+CkNjOiB4ZW4t
ZGV2ZWxAbGlzdHMueGVuLm9yZwpDYzogU3RlZmFubyBTdGFiZWxsaW5pIDxTdGVmYW5vLlN0YWJl
bGxpbmlAZXUuY2l0cml4LmNvbT4KQ2M6IEFudGhvbnkgUEVSQVJEIDxhbnRob255LnBlcmFyZEBj
aXRyaXguY29tPgotLS0KIGh3L3hlbl9kaXNrLmMgfCAgIDE2ICsrKysrKysrKysrKystLS0KIDEg
ZmlsZXMgY2hhbmdlZCwgMTMgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1n
aXQgYS9ody94ZW5fZGlzay5jIGIvaHcveGVuX2Rpc2suYwppbmRleCBlNmJiMmYyLi5hMTU5ZWU1
IDEwMDY0NAotLS0gYS9ody94ZW5fZGlzay5jCisrKyBiL2h3L3hlbl9kaXNrLmMKQEAgLTU2Miw3
ICs1NjIsNyBAQCBzdGF0aWMgdm9pZCBibGtfYWxsb2Moc3RydWN0IFhlbkRldmljZSAqeGVuZGV2
KQogc3RhdGljIGludCBibGtfaW5pdChzdHJ1Y3QgWGVuRGV2aWNlICp4ZW5kZXYpCiB7CiAgICAg
c3RydWN0IFhlbkJsa0RldiAqYmxrZGV2ID0gY29udGFpbmVyX29mKHhlbmRldiwgc3RydWN0IFhl
bkJsa0RldiwgeGVuZGV2KTsKLSAgICBpbnQgaW5kZXgsIHFmbGFncywgaW5mbyA9IDA7CisgICAg
aW50IGluZGV4LCBxZmxhZ3MsIGluZm8gPSAwLCByYzsKIAogICAgIC8qIHJlYWQgeGVuc3RvcmUg
ZW50cmllcyAqLwogICAgIGlmIChibGtkZXYtPnBhcmFtcyA9PSBOVUxMKSB7CkBAIC02MjUsOCAr
NjI1LDE4IEBAIHN0YXRpYyBpbnQgYmxrX2luaXQoc3RydWN0IFhlbkRldmljZSAqeGVuZGV2KQog
ICAgICAgICB4ZW5fYmVfcHJpbnRmKCZibGtkZXYtPnhlbmRldiwgMiwgImNyZWF0ZSBuZXcgYmRy
diAoeGVuYnVzIHNldHVwKVxuIik7CiAgICAgICAgIGJsa2Rldi0+YnMgPSBiZHJ2X25ldyhibGtk
ZXYtPmRldik7CiAgICAgICAgIGlmIChibGtkZXYtPmJzKSB7Ci0gICAgICAgICAgICBpZiAoYmRy
dl9vcGVuKGJsa2Rldi0+YnMsIGJsa2Rldi0+ZmlsZW5hbWUsIHFmbGFncywKLSAgICAgICAgICAg
ICAgICAgICAgICAgIGJkcnZfZmluZF93aGl0ZWxpc3RlZF9mb3JtYXQoYmxrZGV2LT5maWxlcHJv
dG8pKSAhPSAwKSB7CisgICAgICAgICAgICByYyA9IGJkcnZfb3BlbihibGtkZXYtPmJzLCBibGtk
ZXYtPmZpbGVuYW1lLCBxZmxhZ3MsCisgICAgICAgICAgICAgICAgICAgICAgICBiZHJ2X2ZpbmRf
d2hpdGVsaXN0ZWRfZm9ybWF0KGJsa2Rldi0+ZmlsZXByb3RvKSk7CisgICAgICAgICAgICBpZiAo
cmMgIT0gMCAmJiBlcnJubyA9PSBFSU5WQUwpIHsKKyAgICAgICAgICAgICAgICAvKiBGaWxlcyBv
biByYW1mcyBvciB0bXBmcyBjYW5ub3QgYmUgb3BlbmVkIHdpdGggT19ESVJFQ1QsCisgICAgICAg
ICAgICAgICAgICogcmVtb3ZlIHRoZSBCRFJWX09fTk9DQUNIRSBmbGFnLCBhbmQgdHJ5IHRvIG9w
ZW4KKyAgICAgICAgICAgICAgICAgKiB0aGUgZmlsZSBhZ2Fpbi4KKyAgICAgICAgICAgICAgICAg
Ki8KKyAgICAgICAgICAgICAgICBxZmxhZ3MgJj0gfkJEUlZfT19OT0NBQ0hFOworICAgICAgICAg
ICAgICAgIHJjID0gYmRydl9vcGVuKGJsa2Rldi0+YnMsIGJsa2Rldi0+ZmlsZW5hbWUsIHFmbGFn
cywKKyAgICAgICAgICAgICAgICAgICAgICAgIGJkcnZfZmluZF93aGl0ZWxpc3RlZF9mb3JtYXQo
YmxrZGV2LT5maWxlcHJvdG8pKTsKKyAgICAgICAgICAgIH0KKyAgICAgICAgICAgIGlmIChyYyAh
PSAwKSB7CiAgICAgICAgICAgICAgICAgYmRydl9kZWxldGUoYmxrZGV2LT5icyk7CiAgICAgICAg
ICAgICAgICAgYmxrZGV2LT5icyA9IE5VTEw7CiAgICAgICAgICAgICB9Ci0tIAoxLjcuNy41IChB
cHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Dec 31 12:51:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 12:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpeq9-0005ky-Ka; Mon, 31 Dec 2012 12:51:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Tpeq7-0005kt-M1
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 12:51:36 +0000
Received: from [85.158.138.51:27674] by server-1.bemta-3.messagelabs.com id
	BD/1F-08906-65A81E05; Mon, 31 Dec 2012 12:51:34 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1356958291!24591763!1
X-Originating-IP: [209.85.210.170]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7578 invoked from network); 31 Dec 2012 12:51:32 -0000
Received: from mail-ia0-f170.google.com (HELO mail-ia0-f170.google.com)
	(209.85.210.170)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 12:51:32 -0000
Received: by mail-ia0-f170.google.com with SMTP id i1so10755124iaa.29
	for <xen-devel@lists.xen.org>; Mon, 31 Dec 2012 04:51:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=Vtd1JqZAMU7XrCrfWFiBVDcuRHt0+A0sFmu8Ny6ag/c=;
	b=ON/iE/U0W90gDJ7DJ9pdbXSGwnp99sdCJUV9rNrVVr6Oayq1KhzGYZ1HYuxW0lGZg+
	LG+dpJanc4DDQThWmOX0A0gx8zZ631qYqtC05o2/hbyIipYw8QJic8Vmv4cUxWIcNpgu
	0cdy6waSWeBNDAz2bn77DTQj9/qH1ttD+fKSs8JEAUq7Mzez8cE8a17ACALFjvkWQtV/
	89lAeOSQhQmuC1TVB1FsbkMLKhAlFr4LC/lKXbkTL9XAXzP6yWpd3B7pxeoBTJEj7KhE
	x/7OfyQLjeR1pgu7asOW/CcbbV7ti8RkJ9HnFwr/dIf836RK+/BsHS2yJsRntXUkROMA
	0cqQ==
MIME-Version: 1.0
Received: by 10.42.131.133 with SMTP id z5mr30848431ics.10.1356958290964; Mon,
	31 Dec 2012 04:51:30 -0800 (PST)
Received: by 10.231.93.74 with HTTP; Mon, 31 Dec 2012 04:51:30 -0800 (PST)
In-Reply-To: <CAOvdn6UOkaZFb3Dtu_=TRCs6ZhH5yM=JjH2a-yJHAUYSzdWqPw@mail.gmail.com>
References: <50B7AF8A.5010304@invisiblethingslab.com>
	<50B88B8402000078000ACC4C@nat28.tlf.novell.com>
	<50B8D9A9.60502@invisiblethingslab.com>
	<50B8DAEA0200007800090B69@nat28.tlf.novell.com>
	<50B8DC55.8000308@invisiblethingslab.com>
	<50BC653E02000078000AD28C@nat28.tlf.novell.com>
	<50BDFA38.7030009@invisiblethingslab.com>
	<50D335E6.902@invisiblethingslab.com>
	<CAOvdn6V1aeLr+VsA2rrizDa8a0dOUMoWzea8Le+hvM1QOdPFsA@mail.gmail.com>
	<50D39C73.906@invisiblethingslab.com>
	<50D3EB03.4000109@invisiblethingslab.com>
	<50D4322102000078000B1F80@nat28.tlf.novell.com>
	<50D46534.2010304@invisiblethingslab.com>
	<50D4757202000078000B2042@nat28.tlf.novell.com>
	<50D46B47.8000003@invisiblethingslab.com>
	<50D48090.6060603@invisiblethingslab.com>
	<50D4967602000078000B2114@nat28.tlf.novell.com>
	<3368417890369848263@unknownmsgid>
	<50D6713C.2000202@invisiblethingslab.com>
	<CAOvdn6UOkaZFb3Dtu_=TRCs6ZhH5yM=JjH2a-yJHAUYSzdWqPw@mail.gmail.com>
Date: Mon, 31 Dec 2012 07:51:30 -0500
Message-ID: <CAOvdn6Xf8qQKinDqmz8AtEZ5Bvi1qOpDrRguedKcVe+BziPRTA@mail.gmail.com>
From: Ben Guthro <ben.guthro@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Marek Marczykowski <marmarek@invisiblethingslab.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Only CPU0 active after ACPI S3, xen 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2886384838838103695=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2886384838838103695==
Content-Type: multipart/alternative; boundary=90e6ba212433e8720604d225784e

--90e6ba212433e8720604d225784e
Content-Type: text/plain; charset=ISO-8859-1

Sadly, the full revert of this changeset in Xen-4.2 did not solve the
resume issue I am seeing on the Lenovo T430.

My current suspicion is irq delivery, because of the following messages I
see on the console on the way down:

(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Broke affinity for irq 1
(XEN) Broke affinity for irq 9
(XEN) Broke affinity for irq 12
(XEN) Broke affinity for irq 26
(XEN) Broke affinity for irq 30
(XEN) Broke affinity for irq 1
(XEN) Broke affinity for irq 1
(XEN) Entering ACPI S3 state.

I am not currently running with dom0 vcpus pinned

Of course, this is merely a suspicion, and I don't have a lot of hard
evidence to back this up.

I've requested an ITP be budgeted to debug these issues on Intel SDPs, but
I think it may be some months before I see the results of that.


Jan - any suggestions on how to proceed with this? FWIW, Xen 4.0.y suspends
on this machine reliably.


/btg

On Sun, Dec 23, 2012 at 8:45 AM, Ben Guthro <ben.guthro@gmail.com> wrote:

> Interesting.
>
> I had started by reverting the commit entirely, but settled on only
> reverting the part causing the scheduling issues.
> I'm not sure if I was as thorough in my testing this fix, across a lot of
> laptop generations.
>
> I'll test reverting the full commit in the new year, and report back.
>
> I think that, at a minimum - the commit should get some scrutiny by people
> who might understand the subtleties, and/or unintended side effects better
> than I.
>
>
> -Ben
>
>
> On Sat, Dec 22, 2012 at 9:49 PM, Marek Marczykowski <
> marmarek@invisiblethingslab.com> wrote:
>
>> On 21.12.2012 17:18, Ben Guthro wrote:
>> > On Dec 21, 2012, at 11:03 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >
>> >>>>> On 21.12.12 at 16:30, Marek Marczykowski <
>> marmarek@invisiblethingslab.com> wrote:
>> >>> Next bisection (this time with sched_ratelimit_us=0) gives this
>> commit:
>> >>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/d67e4d12723f
>> >>
>> >> Ben, wasn't this where your bisection ended up too?
>> >
>> > Yes, for the dom0_pin_vcpus issue.
>>
>> Ok, I can confirm that on xen-4.1-testing tip with above commit reverted
>> completely problem has gone away, even without sched_ratelimit_us=0. With
>> Ben's patch (partially revert) no reboot observed, but still sometimes
>> only
>> pCPU0 is used after resume.
>>
>> --
>> Best Regards / Pozdrawiam,
>> Marek Marczykowski
>> Invisible Things Lab
>>
>>
>

--90e6ba212433e8720604d225784e--


--===============2886384838838103695==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2886384838838103695==--




From xen-devel-bounces@lists.xen.org Mon Dec 31 13:07:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 13:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpf52-000616-JC; Mon, 31 Dec 2012 13:07:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tpf50-000611-Qi
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 13:06:59 +0000
Received: from [85.158.137.99:49186] by server-4.bemta-3.messagelabs.com id
	54/0B-31835-1FD81E05; Mon, 31 Dec 2012 13:06:57 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1356959216!18307011!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 720 invoked from network); 31 Dec 2012 13:06:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 13:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="396477"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 13:06:56 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 31 Dec 2012 13:06:56 +0000
Message-ID: <50E18DEF.2090108@citrix.com>
Date: Mon, 31 Dec 2012 14:06:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Maik Wessler <maik@mwessler.net>
References: <1356518803.88953.YahooMailNeo@web162702.mail.bf1.yahoo.com>
In-Reply-To: <1356518803.88953.YahooMailNeo@web162702.mail.bf1.yahoo.com>
Cc: Maik Wessler <maik.wessler@yahoo.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] qemu-system-i386: memory leak?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/12/12 11:46, Maik Wessler wrote:
> Hi all,
> 
> I am using xen-4.2-testing.hg on debian 6.0.6 (x86_64) with Kernel
> 3.4.15 (tmem enabled). The problem is that /usr/lib/xen/bin/qemu-system-i386
> uses more and more memory. After one week of uptime (depending on memory) the
> machine starts to swap...
> 
> Details:
> 
> root@dmw01:~# cat /etc/grub.d/09_linux_xen |grep mem
> multiboot${rel_xen_dirname}/${xen_basename} placeholder ${xen_args}
> dom0_mem=1592M,max:1592M
> 
> 
> root@dmw01:~# free
>              total       used       free     shared    buffers     cached
> Mem:       1523280    1408896     114384          0       9824      17496
> -/+ buffers/cache:    1381576     141704
> Swap:       505916     134592     371324
> 
> 
> ps -e -orss=,args= | sort -b -k1,1n
> 
> Start:
> 28872 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 
> End:
> 243472 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 
> 
> 
> root@dmw01:~# ps aux|grep qemu|grep mgtmw01
> root      3903  0.0 15.9 423876 243464 ?       Ssl  Dec18   9:39
> /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 
> root@dmw01:~# pmap 3903
> 3903:   /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 00007fe942b7e000   4112K rw---    [ anon ]
> 00007fe942f83000   1028K rw---    [ anon ]
> 00007fe943085000   1028K rw---    [ anon ]
> 00007fe943187000   1028K rw---    [ anon ]
> 00007fe943289000   2056K rw---    [ anon ]
> 00007fe943495000   1028K rw---    [ anon ]
> 00007fe9435a1000   1028K rw---    [ anon ]
> 00007fe9436ad000   1028K rw---    [ anon ]
> 00007fe9437b8000   5140K rw---    [ anon ]
> 00007fe943cbd000      4K -----    [ anon ]
> 00007fe943cbe000   8192K rw---    [ anon ]
> 00007fe9444be000     20K r-x--  /usr/lib/libXdmcp.so.6.0.0
> 00007fe9444c3000   2044K -----  /usr/lib/libXdmcp.so.6.0.0
> 00007fe9446c2000      4K rw---  /usr/lib/libXdmcp.so.6.0.0
> 00007fe9446c3000      8K r-x--  /usr/lib/libXau.so.6.0.0
> 00007fe9446c5000   2048K -----  /usr/lib/libXau.so.6.0.0
> 00007fe9448c5000      4K rw---  /usr/lib/libXau.so.6.0.0
> 00007fe9448c6000    124K r-x--  /lib/libx86.so.1
> 00007fe9448e5000   2048K -----  /lib/libx86.so.1
> 00007fe944ae5000      8K rw---  /lib/libx86.so.1
> 00007fe944ae7000      4K rw---    [ anon ]
> 00007fe944ae8000    128K r-x--  /usr/lib/liblzo2.so.2.0.0
> 00007fe944b08000   2044K -----  /usr/lib/liblzo2.so.2.0.0
> 00007fe944d07000      4K rw---  /usr/lib/liblzo2.so.2.0.0
> 00007fe944d08000    132K r-x--  /usr/lib/liblzma.so.2.0.0
> 00007fe944d29000   2048K -----  /usr/lib/liblzma.so.2.0.0
> 00007fe944f29000      4K rw---  /usr/lib/liblzma.so.2.0.0
> 00007fe944f2a000     60K r-x--  /lib/libbz2.so.1.0.4
> 00007fe944f39000   2044K -----  /lib/libbz2.so.1.0.4
> 00007fe945138000      8K rw---  /lib/libbz2.so.1.0.4
> 00007fe94513a000    112K r-x--  /usr/lib/libxcb.so.1.1.0
> 00007fe945156000   2044K -----  /usr/lib/libxcb.so.1.1.0
> 00007fe945355000      4K rw---  /usr/lib/libxcb.so.1.1.0
> 00007fe945356000    308K r-x--  /usr/lib/libvga.so.1.4.3
> 00007fe9453a3000   2044K -----  /usr/lib/libvga.so.1.4.3
> 00007fe9455a2000     36K rw---  /usr/lib/libvga.so.1.4.3
> 00007fe9455ab000     36K rw---    [ anon ]
> 00007fe9455b4000     88K r-x--  /usr/lib/libdirect-1.2.so.9.0.1
> 00007fe9455ca000   2044K -----  /usr/lib/libdirect-1.2.so.9.0.1
> 00007fe9457c9000      8K rw---  /usr/lib/libdirect-1.2.so.9.0.1
> 00007fe9457cb000     36K r-x--  /usr/lib/libfusion-1.2.so.9.0.1
> 00007fe9457d4000   2048K -----  /usr/lib/libfusion-1.2.so.9.0.1
> 00007fe9459d4000      4K rw---  /usr/lib/libfusion-1.2.so.9.0.1
> 00007fe9459d5000    508K r-x--  /usr/lib/libdirectfb-1.2.so.9.0.1
> 00007fe945a54000   2044K -----  /usr/lib/libdirectfb-1.2.so.9.0.1
> 00007fe945c53000     16K rw---  /usr/lib/libdirectfb-1.2.so.9.0.1
> 00007fe945c57000    888K r-x--  /usr/lib/libasound.so.2.0.0
> 00007fe945d35000   2044K -----  /usr/lib/libasound.so.2.0.0
> 00007fe945f34000     32K rw---  /usr/lib/libasound.so.2.0.0
> 00007fe945f3c000      8K r-x--  /lib/libdl-2.11.3.so
> 00007fe945f3e000   2048K -----  /lib/libdl-2.11.3.so
> 00007fe94613e000      4K r----  /lib/libdl-2.11.3.so
> 00007fe94613f000      4K rw---  /lib/libdl-2.11.3.so
> 00007fe946140000    192K r-x--  /lib/libpcre.so.3.12.1
> 00007fe946170000   2044K -----  /lib/libpcre.so.3.12.1
> 00007fe94636f000      4K rw---  /lib/libpcre.so.3.12.1
> 00007fe946370000   1380K r-x--  /lib/libc-2.11.3.so
> 00007fe9464c9000   2044K -----  /lib/libc-2.11.3.so
> 00007fe9466c8000     16K r----  /lib/libc-2.11.3.so
> 00007fe9466cc000      4K rw---  /lib/libc-2.11.3.so
> 00007fe9466cd000     20K rw---    [ anon ]
> 00007fe9466d2000     92K r-x--  /lib/libpthread-2.11.3.so
> 00007fe9466e9000   2044K -----  /lib/libpthread-2.11.3.so
> 00007fe9468e8000      4K r----  /lib/libpthread-2.11.3.so
> 00007fe9468e9000      4K rw---  /lib/libpthread-2.11.3.so
> 00007fe9468ea000     16K rw---    [ anon ]
> 00007fe9468ee000     92K r-x--  /usr/lib/libz.so.1.2.3.4
> 00007fe946905000   2044K -----  /usr/lib/libz.so.1.2.3.4
> 00007fe946b04000      4K rw---  /usr/lib/libz.so.1.2.3.4
> 00007fe946b05000    512K r-x--  /lib/libm-2.11.3.so
> 00007fe946b85000   2048K -----  /lib/libm-2.11.3.so
> 00007fe946d85000      4K r----  /lib/libm-2.11.3.so
> 00007fe946d86000      4K rw---  /lib/libm-2.11.3.so
> 00007fe946d87000      4K r-x--  /lib/libaio.so.1.0.1
> 00007fe946d88000   2044K -----  /lib/libaio.so.1.0.1
> 00007fe946f87000      4K rw---  /lib/libaio.so.1.0.1
> 00007fe946f88000    160K r-x--  /usr/lib/libxenguest.so.4.2.0
> 00007fe946fb0000   2048K -----  /usr/lib/libxenguest.so.4.2.0
> 00007fe9471b0000      8K rw---  /usr/lib/libxenguest.so.4.2.0
> 00007fe9471b2000    136K r-x--  /usr/lib/libxenctrl.so.4.2.0
> 00007fe9471d4000   2048K -----  /usr/lib/libxenctrl.so.4.2.0
> 00007fe9473d4000      4K rw---  /usr/lib/libxenctrl.so.4.2.0
> 00007fe9473d5000     24K r-x--  /usr/lib/libxenstore.so.3.0.2
> 00007fe9473db000   2044K -----  /usr/lib/libxenstore.so.3.0.2
> 00007fe9475da000      4K rw---  /usr/lib/libxenstore.so.3.0.2
> 00007fe9475db000     12K rw---    [ anon ]
> 00007fe9475de000   1236K r-x--  /usr/lib/libX11.so.6.3.0
> 00007fe947713000   2048K -----  /usr/lib/libX11.so.6.3.0
> 00007fe947913000     24K rw---  /usr/lib/libX11.so.6.3.0
> 00007fe947919000    432K r-x--  /usr/lib/libSDL-1.2.so.0.11.3
> 00007fe947985000   2048K -----  /usr/lib/libSDL-1.2.so.0.11.3
> 00007fe947b85000      8K rw---  /usr/lib/libSDL-1.2.so.0.11.3
> 00007fe947b87000    304K rw---    [ anon ]
> 00007fe947bd3000    140K r-x--  /usr/lib/libjpeg.so.62.0.0
> 00007fe947bf6000   2044K -----  /usr/lib/libjpeg.so.62.0.0
> 00007fe947df5000      4K rw---  /usr/lib/libjpeg.so.62.0.0
> 00007fe947df6000    148K r-x--  /lib/libpng12.so.0.44.0
> 00007fe947e1b000   2048K -----  /lib/libpng12.so.0.44.0
> 00007fe94801b000      4K rw---  /lib/libpng12.so.0.44.0
> 00007fe94801c000     16K r-x--  /lib/libuuid.so.1.3.0
> 00007fe948020000   2044K -----  /lib/libuuid.so.1.3.0
> 00007fe94821f000      4K rw---  /lib/libuuid.so.1.3.0
> 00007fe948220000    264K r-x--  /lib/libncurses.so.5.7
> 00007fe948262000   2044K -----  /lib/libncurses.so.5.7
> 00007fe948461000     20K rw---  /lib/libncurses.so.5.7
> 00007fe948466000      8K r-x--  /lib/libutil-2.11.3.so
> 00007fe948468000   2044K -----  /lib/libutil-2.11.3.so
> 00007fe948667000      4K r----  /lib/libutil-2.11.3.so
> 00007fe948668000      4K rw---  /lib/libutil-2.11.3.so
> 00007fe948669000    876K r-x--  /lib/libglib-2.0.so.0.2400.2
> 00007fe948744000   2044K -----  /lib/libglib-2.0.so.0.2400.2
> 00007fe948943000      8K rw---  /lib/libglib-2.0.so.0.2400.2
> 00007fe948945000      4K rw---    [ anon ]
> 00007fe948946000     16K r-x--  /usr/lib/libgthread-2.0.so.0.2400.2
> 00007fe94894a000   2044K -----  /usr/lib/libgthread-2.0.so.0.2400.2
> 00007fe948b49000      4K rw---  /usr/lib/libgthread-2.0.so.0.2400.2
> 00007fe948b4a000     28K r-x--  /lib/librt-2.11.3.so
> 00007fe948b51000   2044K -----  /lib/librt-2.11.3.so
> 00007fe948d50000      4K r----  /lib/librt-2.11.3.so
> 00007fe948d51000      4K rw---  /lib/librt-2.11.3.so
> 00007fe948d52000    120K r-x--  /lib/ld-2.11.3.so
> 00007fe948de5000   1536K rw---    [ anon ]
> 00007fe948f65000      4K rw-s-  /dev/xen/gntdev
> 00007fe948f66000      4K rw-s-  /dev/xen/gntdev
> 00007fe948f67000      8K rw---    [ anon ]
> 00007fe948f69000      4K -----    [ anon ]
> 00007fe948f6a000     20K rw---    [ anon ]
> 00007fe948f6f000      4K r----  /lib/ld-2.11.3.so
> 00007fe948f70000      4K rw---  /lib/ld-2.11.3.so
> 00007fe948f71000      4K rw---    [ anon ]
> 00007fe948f72000   3020K r-x--  /usr/lib/xen/bin/qemu-system-i386
> 00007fe949464000    816K r----  /usr/lib/xen/bin/qemu-system-i386
> 00007fe949530000    176K rw---  /usr/lib/xen/bin/qemu-system-i386
> 00007fe94955c000   8228K rw---    [ anon ]
> 00007fe94a5a7000 309936K rw---    [ anon ]
> 00007fff43e80000    132K rw---    [ stack ]
> 00007fff43fff000      4K r-x--    [ anon ]
> ffffffffff600000      4K r-x--    [ anon ]
>  total           424016K
> 
> 
> Can anyone help? 

I've just posted a bug fix for a memory leak in Qemu Xen PV disk
backend, you can take a look at the patch at:
http://lists.nongnu.org/archive/html/qemu-devel/2012-12/msg03677.html.

There's also a memory leak in the linux gntdev device which is used by
Qemu, you should also take a look at the following linux kernel patch
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=a67baeb77375199bbd842fa308cb565164dd1f19.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 13:07:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 13:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpf52-000616-JC; Mon, 31 Dec 2012 13:07:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Tpf50-000611-Qi
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 13:06:59 +0000
Received: from [85.158.137.99:49186] by server-4.bemta-3.messagelabs.com id
	54/0B-31835-1FD81E05; Mon, 31 Dec 2012 13:06:57 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-217.messagelabs.com!1356959216!18307011!1
X-Originating-IP: [46.33.159.39]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNDYuMzMuMTU5LjM5ID0+IDEzODU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 720 invoked from network); 31 Dec 2012 13:06:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (46.33.159.39)
	by server-10.tower-217.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 13:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.84,385,1355097600"; 
   d="scan'208";a="396477"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Dec 2012 13:06:56 +0000
Received: from [192.168.1.30] (10.31.3.229) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.5;
	Mon, 31 Dec 2012 13:06:56 +0000
Message-ID: <50E18DEF.2090108@citrix.com>
Date: Mon, 31 Dec 2012 14:06:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Maik Wessler <maik@mwessler.net>
References: <1356518803.88953.YahooMailNeo@web162702.mail.bf1.yahoo.com>
In-Reply-To: <1356518803.88953.YahooMailNeo@web162702.mail.bf1.yahoo.com>
Cc: Maik Wessler <maik.wessler@yahoo.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] qemu-system-i386: memory leak?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/12/12 11:46, Maik Wessler wrote:
> Hi all,
> 
> I am using xen-4.2-testing.hg on Debian 6.0.6 (x86_64) with kernel
> 3.4.15 (tmem enabled). The problem is that /usr/lib/xen/bin/qemu-system-i386
> uses more and more memory. After one week of uptime (depending on memory)
> the machine starts to swap...
> 
> Details:
> 
> root@dmw01:~# cat /etc/grub.d/09_linux_xen |grep mem
> multiboot${rel_xen_dirname}/${xen_basename} placeholder ${xen_args}
> dom0_mem=1592M,max:1592M
> 
> 
> root@dmw01:~# free
>              total       used       free     shared    buffers     cached
> Mem:       1523280    1408896     114384          0       9824      17496
> -/+ buffers/cache:    1381576     141704
> Swap:       505916     134592     371324
> 
> 
> ps -e -orss=,args= | sort -b -k1,1n
> 
> Start:
> 28872 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 
> End:
> 243472 /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 
> 
> 
> root@dmw01:~# ps aux|grep qemu|grep mgtmw01
> root      3903  0.0 15.9 423876 243464 ?       Ssl  Dec18   9:39
> /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 
> root@dmw01:~# pmap 3903
> 3903:   /usr/lib/xen/bin/qemu-system-i386 -xen-domid 12 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-12,server,nowait -mon
> chardev=libxl-cmd,mode=control -xen-attach -name mgtmw01 -nographic -M
> xenpv -m 385
> 00007fe942b7e000   4112K rw---    [ anon ]
> 00007fe942f83000   1028K rw---    [ anon ]
> 00007fe943085000   1028K rw---    [ anon ]
> 00007fe943187000   1028K rw---    [ anon ]
> 00007fe943289000   2056K rw---    [ anon ]
> 00007fe943495000   1028K rw---    [ anon ]
> 00007fe9435a1000   1028K rw---    [ anon ]
> 00007fe9436ad000   1028K rw---    [ anon ]
> 00007fe9437b8000   5140K rw---    [ anon ]
> 00007fe943cbd000      4K -----    [ anon ]
> 00007fe943cbe000   8192K rw---    [ anon ]
> 00007fe9444be000     20K r-x--  /usr/lib/libXdmcp.so.6.0.0
> 00007fe9444c3000   2044K -----  /usr/lib/libXdmcp.so.6.0.0
> 00007fe9446c2000      4K rw---  /usr/lib/libXdmcp.so.6.0.0
> 00007fe9446c3000      8K r-x--  /usr/lib/libXau.so.6.0.0
> 00007fe9446c5000   2048K -----  /usr/lib/libXau.so.6.0.0
> 00007fe9448c5000      4K rw---  /usr/lib/libXau.so.6.0.0
> 00007fe9448c6000    124K r-x--  /lib/libx86.so.1
> 00007fe9448e5000   2048K -----  /lib/libx86.so.1
> 00007fe944ae5000      8K rw---  /lib/libx86.so.1
> 00007fe944ae7000      4K rw---    [ anon ]
> 00007fe944ae8000    128K r-x--  /usr/lib/liblzo2.so.2.0.0
> 00007fe944b08000   2044K -----  /usr/lib/liblzo2.so.2.0.0
> 00007fe944d07000      4K rw---  /usr/lib/liblzo2.so.2.0.0
> 00007fe944d08000    132K r-x--  /usr/lib/liblzma.so.2.0.0
> 00007fe944d29000   2048K -----  /usr/lib/liblzma.so.2.0.0
> 00007fe944f29000      4K rw---  /usr/lib/liblzma.so.2.0.0
> 00007fe944f2a000     60K r-x--  /lib/libbz2.so.1.0.4
> 00007fe944f39000   2044K -----  /lib/libbz2.so.1.0.4
> 00007fe945138000      8K rw---  /lib/libbz2.so.1.0.4
> 00007fe94513a000    112K r-x--  /usr/lib/libxcb.so.1.1.0
> 00007fe945156000   2044K -----  /usr/lib/libxcb.so.1.1.0
> 00007fe945355000      4K rw---  /usr/lib/libxcb.so.1.1.0
> 00007fe945356000    308K r-x--  /usr/lib/libvga.so.1.4.3
> 00007fe9453a3000   2044K -----  /usr/lib/libvga.so.1.4.3
> 00007fe9455a2000     36K rw---  /usr/lib/libvga.so.1.4.3
> 00007fe9455ab000     36K rw---    [ anon ]
> 00007fe9455b4000     88K r-x--  /usr/lib/libdirect-1.2.so.9.0.1
> 00007fe9455ca000   2044K -----  /usr/lib/libdirect-1.2.so.9.0.1
> 00007fe9457c9000      8K rw---  /usr/lib/libdirect-1.2.so.9.0.1
> 00007fe9457cb000     36K r-x--  /usr/lib/libfusion-1.2.so.9.0.1
> 00007fe9457d4000   2048K -----  /usr/lib/libfusion-1.2.so.9.0.1
> 00007fe9459d4000      4K rw---  /usr/lib/libfusion-1.2.so.9.0.1
> 00007fe9459d5000    508K r-x--  /usr/lib/libdirectfb-1.2.so.9.0.1
> 00007fe945a54000   2044K -----  /usr/lib/libdirectfb-1.2.so.9.0.1
> 00007fe945c53000     16K rw---  /usr/lib/libdirectfb-1.2.so.9.0.1
> 00007fe945c57000    888K r-x--  /usr/lib/libasound.so.2.0.0
> 00007fe945d35000   2044K -----  /usr/lib/libasound.so.2.0.0
> 00007fe945f34000     32K rw---  /usr/lib/libasound.so.2.0.0
> 00007fe945f3c000      8K r-x--  /lib/libdl-2.11.3.so
> 00007fe945f3e000   2048K -----  /lib/libdl-2.11.3.so
> 00007fe94613e000      4K r----  /lib/libdl-2.11.3.so
> 00007fe94613f000      4K rw---  /lib/libdl-2.11.3.so
> 00007fe946140000    192K r-x--  /lib/libpcre.so.3.12.1
> 00007fe946170000   2044K -----  /lib/libpcre.so.3.12.1
> 00007fe94636f000      4K rw---  /lib/libpcre.so.3.12.1
> 00007fe946370000   1380K r-x--  /lib/libc-2.11.3.so
> 00007fe9464c9000   2044K -----  /lib/libc-2.11.3.so
> 00007fe9466c8000     16K r----  /lib/libc-2.11.3.so
> 00007fe9466cc000      4K rw---  /lib/libc-2.11.3.so
> 00007fe9466cd000     20K rw---    [ anon ]
> 00007fe9466d2000     92K r-x--  /lib/libpthread-2.11.3.so
> 00007fe9466e9000   2044K -----  /lib/libpthread-2.11.3.so
> 00007fe9468e8000      4K r----  /lib/libpthread-2.11.3.so
> 00007fe9468e9000      4K rw---  /lib/libpthread-2.11.3.so
> 00007fe9468ea000     16K rw---    [ anon ]
> 00007fe9468ee000     92K r-x--  /usr/lib/libz.so.1.2.3.4
> 00007fe946905000   2044K -----  /usr/lib/libz.so.1.2.3.4
> 00007fe946b04000      4K rw---  /usr/lib/libz.so.1.2.3.4
> 00007fe946b05000    512K r-x--  /lib/libm-2.11.3.so
> 00007fe946b85000   2048K -----  /lib/libm-2.11.3.so
> 00007fe946d85000      4K r----  /lib/libm-2.11.3.so
> 00007fe946d86000      4K rw---  /lib/libm-2.11.3.so
> 00007fe946d87000      4K r-x--  /lib/libaio.so.1.0.1
> 00007fe946d88000   2044K -----  /lib/libaio.so.1.0.1
> 00007fe946f87000      4K rw---  /lib/libaio.so.1.0.1
> 00007fe946f88000    160K r-x--  /usr/lib/libxenguest.so.4.2.0
> 00007fe946fb0000   2048K -----  /usr/lib/libxenguest.so.4.2.0
> 00007fe9471b0000      8K rw---  /usr/lib/libxenguest.so.4.2.0
> 00007fe9471b2000    136K r-x--  /usr/lib/libxenctrl.so.4.2.0
> 00007fe9471d4000   2048K -----  /usr/lib/libxenctrl.so.4.2.0
> 00007fe9473d4000      4K rw---  /usr/lib/libxenctrl.so.4.2.0
> 00007fe9473d5000     24K r-x--  /usr/lib/libxenstore.so.3.0.2
> 00007fe9473db000   2044K -----  /usr/lib/libxenstore.so.3.0.2
> 00007fe9475da000      4K rw---  /usr/lib/libxenstore.so.3.0.2
> 00007fe9475db000     12K rw---    [ anon ]
> 00007fe9475de000   1236K r-x--  /usr/lib/libX11.so.6.3.0
> 00007fe947713000   2048K -----  /usr/lib/libX11.so.6.3.0
> 00007fe947913000     24K rw---  /usr/lib/libX11.so.6.3.0
> 00007fe947919000    432K r-x--  /usr/lib/libSDL-1.2.so.0.11.3
> 00007fe947985000   2048K -----  /usr/lib/libSDL-1.2.so.0.11.3
> 00007fe947b85000      8K rw---  /usr/lib/libSDL-1.2.so.0.11.3
> 00007fe947b87000    304K rw---    [ anon ]
> 00007fe947bd3000    140K r-x--  /usr/lib/libjpeg.so.62.0.0
> 00007fe947bf6000   2044K -----  /usr/lib/libjpeg.so.62.0.0
> 00007fe947df5000      4K rw---  /usr/lib/libjpeg.so.62.0.0
> 00007fe947df6000    148K r-x--  /lib/libpng12.so.0.44.0
> 00007fe947e1b000   2048K -----  /lib/libpng12.so.0.44.0
> 00007fe94801b000      4K rw---  /lib/libpng12.so.0.44.0
> 00007fe94801c000     16K r-x--  /lib/libuuid.so.1.3.0
> 00007fe948020000   2044K -----  /lib/libuuid.so.1.3.0
> 00007fe94821f000      4K rw---  /lib/libuuid.so.1.3.0
> 00007fe948220000    264K r-x--  /lib/libncurses.so.5.7
> 00007fe948262000   2044K -----  /lib/libncurses.so.5.7
> 00007fe948461000     20K rw---  /lib/libncurses.so.5.7
> 00007fe948466000      8K r-x--  /lib/libutil-2.11.3.so
> 00007fe948468000   2044K -----  /lib/libutil-2.11.3.so
> 00007fe948667000      4K r----  /lib/libutil-2.11.3.so
> 00007fe948668000      4K rw---  /lib/libutil-2.11.3.so
> 00007fe948669000    876K r-x--  /lib/libglib-2.0.so.0.2400.2
> 00007fe948744000   2044K -----  /lib/libglib-2.0.so.0.2400.2
> 00007fe948943000      8K rw---  /lib/libglib-2.0.so.0.2400.2
> 00007fe948945000      4K rw---    [ anon ]
> 00007fe948946000     16K r-x--  /usr/lib/libgthread-2.0.so.0.2400.2
> 00007fe94894a000   2044K -----  /usr/lib/libgthread-2.0.so.0.2400.2
> 00007fe948b49000      4K rw---  /usr/lib/libgthread-2.0.so.0.2400.2
> 00007fe948b4a000     28K r-x--  /lib/librt-2.11.3.so
> 00007fe948b51000   2044K -----  /lib/librt-2.11.3.so
> 00007fe948d50000      4K r----  /lib/librt-2.11.3.so
> 00007fe948d51000      4K rw---  /lib/librt-2.11.3.so
> 00007fe948d52000    120K r-x--  /lib/ld-2.11.3.so
> 00007fe948de5000   1536K rw---    [ anon ]
> 00007fe948f65000      4K rw-s-  /dev/xen/gntdev
> 00007fe948f66000      4K rw-s-  /dev/xen/gntdev
> 00007fe948f67000      8K rw---    [ anon ]
> 00007fe948f69000      4K -----    [ anon ]
> 00007fe948f6a000     20K rw---    [ anon ]
> 00007fe948f6f000      4K r----  /lib/ld-2.11.3.so
> 00007fe948f70000      4K rw---  /lib/ld-2.11.3.so
> 00007fe948f71000      4K rw---    [ anon ]
> 00007fe948f72000   3020K r-x--  /usr/lib/xen/bin/qemu-system-i386
> 00007fe949464000    816K r----  /usr/lib/xen/bin/qemu-system-i386
> 00007fe949530000    176K rw---  /usr/lib/xen/bin/qemu-system-i386
> 00007fe94955c000   8228K rw---    [ anon ]
> 00007fe94a5a7000 309936K rw---    [ anon ]
> 00007fff43e80000    132K rw---    [ stack ]
> 00007fff43fff000      4K r-x--    [ anon ]
> ffffffffff600000      4K r-x--    [ anon ]
>  total           424016K
> 
> 
> Can anyone help? 

I've just posted a bug fix for a memory leak in the QEMU Xen PV disk
backend; you can take a look at the patch at:
http://lists.nongnu.org/archive/html/qemu-devel/2012-12/msg03677.html

There's also a memory leak in the Linux gntdev device, which QEMU uses,
so you should also take a look at the following Linux kernel patch:
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=a67baeb77375199bbd842fa308cb565164dd1f19
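
[Editorial aside, not part of the original thread: the growth Maik shows
(28872K RSS at start vs. 243472K later for the same qemu-system-i386
process) can be watched with a small sketch like the one below. The helper
names (`rss_kb`, `grew`) and the 10% growth threshold are made up for
illustration.]

```shell
#!/bin/sh
# Illustrative helpers for watching a suspected leak; the names and the
# 10% growth threshold are assumptions, not from this thread.

# rss_kb PID -> resident set size of PID in kilobytes (same column as the
# "ps -e -orss=,args=" output quoted above)
rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

# grew BEFORE_KB AFTER_KB -> succeeds when AFTER is more than 10% above BEFORE
grew() {
    [ "$2" -gt $(( $1 + $1 / 10 )) ]
}

# Example use (PID 3903 is the qemu process from the pmap output above):
#   start=$(rss_kb 3903); sleep 3600; end=$(rss_kb 3903)
#   grew "$start" "$end" && echo "possible leak: ${start}K -> ${end}K"
```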


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 14:15:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 14:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpg8W-0006Zk-TR; Mon, 31 Dec 2012 14:14:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <alex@alex.org.uk>) id 1Tpg8V-0006Zf-V7
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 14:14:40 +0000
Received: from [85.158.143.35:16275] by server-2.bemta-4.messagelabs.com id
	32/35-30861-FCD91E05; Mon, 31 Dec 2012 14:14:39 +0000
X-Env-Sender: alex@alex.org.uk
X-Msg-Ref: server-10.tower-21.messagelabs.com!1356963277!10883828!1
X-Originating-IP: [89.16.176.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20150 invoked from network); 31 Dec 2012 14:14:38 -0000
Received: from mail.avalus.com (HELO mail.avalus.com) (89.16.176.221)
	by server-10.tower-21.messagelabs.com with SMTP;
	31 Dec 2012 14:14:38 -0000
Received: by mail.avalus.com (Postfix) with ESMTPSA id 5EBBDC56188;
	Mon, 31 Dec 2012 14:14:26 +0000 (GMT)
Date: Mon, 31 Dec 2012 14:14:25 +0000
From: Alex Bligh <alex@alex.org.uk>
To: =?UTF-8?Q?Pasi_K=C3=A4rkk=C3=A4inen?= <pasik@iki.fi>,
	Wei Liu <Wei.Liu2@citrix.com>
Message-ID: <8FFF92289CD7FCF3159DADCF@Ximines.local>
In-Reply-To: <20121227123839.GR8912@reaktio.net>
References: <CAHEEu86_NbEGWZnhm8Fcch3km0P-fLM+OzSd2vJYxP1xVfZ0wQ@mail.gmail.com>
	<1356611599.19238.13.camel@iceland> <20121227123839.GR8912@reaktio.net>
X-Mailer: Mulberry/4.0.8 (Mac OS X)
MIME-Version: 1.0
Content-Disposition: inline
Cc: Rohit Damkondwar <genius.rsd@gmail.com>, Alex Bligh <alex@alex.org.uk>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] dynamically set bandwidth limits of a virtual
 interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Alex Bligh <alex@alex.org.uk>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--On 27 December 2012 14:38:39 +0200 Pasi Kärkkäinen <pasik@iki.fi> wrote:

> Yes, You can use the generic Linux QoS tools in dom0 to shape the vifs.

Using tc for true shaping is slightly non-trivial as it shapes only on
egress (which confusingly is towards the VM), and will only police on
ingress. Using ifb is a way around this.

If tc alone won't do the job, I would have thought that rather than
reinvent the wheel and implement rate shaping in netback, it would be
better to hook into the kernel's existing qdisc infrastructure so tc like
tools can be used. This would give much more functionality than a simple
rate.

-- 
Alex Bligh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
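
[Editorial aside, not part of the original thread: the egress-shaping-plus-ifb
arrangement Alex describes might look roughly like the sketch below. The vif
name `vif12.0`, the rates, and the tbf parameters are assumed example values;
it prints the commands instead of running them, since applying them needs
root.]

```shell
#!/bin/sh
# Sketch of the tc + ifb recipe described above: shape vif egress (which,
# confusingly, is traffic towards the VM) directly, and redirect vif
# ingress through ifb0 so it can be shaped rather than merely policed.
# Prints the commands for review instead of executing them.
vif_shape_cmds() {
    vif=$1; rate=$2
    # Egress on the vif can be shaped directly with a tbf qdisc.
    echo "tc qdisc add dev $vif root tbf rate $rate burst 32kbit latency 400ms"
    # Ingress can only be policed on the vif itself, so mirror it to an
    # ifb device and shape there instead.
    echo "modprobe ifb numifbs=1"
    echo "ip link set ifb0 up"
    echo "tc qdisc add dev $vif handle ffff: ingress"
    echo "tc filter add dev $vif parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0"
    echo "tc qdisc add dev ifb0 root tbf rate $rate burst 32kbit latency 400ms"
}

vif_shape_cmds vif12.0 10mbit
```

Piping the output to sh as root would apply it; whether tbf or a classful
qdisc such as htb fits better depends on the setup.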


From xen-devel-bounces@lists.xen.org Mon Dec 31 18:24:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpk1Z-0000Rq-5s; Mon, 31 Dec 2012 18:23:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tpk1X-0000RR-QG
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:23:44 +0000
Received: from [85.158.143.35:4374] by server-2.bemta-4.messagelabs.com id
	4D/06-30861-F28D1E05; Mon, 31 Dec 2012 18:23:43 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356978219!17207303!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25239 invoked from network); 31 Dec 2012 18:23:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:23:42 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2180576"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:23:38 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1Tpk1S-0001E7-FY;
	Mon, 31 Dec 2012 18:23:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:22:33 +0000
Message-ID: <1356978155-18293-2-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
References: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, Wei Liu <liuw@liuw.name>
Subject: [Xen-devel] [RFC PATCH 1/3] Add a field in struct domain to
	indicate evtchn level.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <liuw@liuw.name>

Add an evtchn_level field to struct domain to record the level of the
domain's event channel lookup structure, and initialise it to 2 (the
current default) in evtchn_init().

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/event_channel.c |    1 +
 xen/include/xen/sched.h    |   15 ++++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 89f0ca7..87e422e 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1173,6 +1173,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 int evtchn_init(struct domain *d)
 {
     spin_lock_init(&d->event_lock);
+    d->evtchn_level = 2;
     if ( get_free_port(d) != 0 )
         return -EINVAL;
     evtchn_from_port(d, 0)->state = ECS_RESERVED;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 6c55039..1c43e0a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -50,7 +50,19 @@ extern struct domain *dom0;
 #else
 #define BITS_PER_EVTCHN_WORD(d) (has_32bit_shinfo(d) ? 32 : BITS_PER_LONG)
 #endif
-#define MAX_EVTCHNS(d) (BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d))
+#define MAX_EVTCHNS_L2(d) (BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d))
+#define MAX_EVTCHNS_L3(d) (MAX_EVTCHNS_L2(d) * BITS_PER_EVTCHN_WORD(d))
+#define MAX_EVTCHNS(d) ({ int __v = 0;          \
+            switch ( d->evtchn_level ) {        \
+            case 2:                             \
+                __v = MAX_EVTCHNS_L2(d); break; \
+            case 3:                             \
+                __v = MAX_EVTCHNS_L3(d); break; \
+            default:                            \
+                BUG();                          \
+            };                                  \
+            __v;})
+
 #define EVTCHNS_PER_BUCKET 128
 #define NR_EVTCHN_BUCKETS  (NR_EVENT_CHANNELS / EVTCHNS_PER_BUCKET)
 
@@ -262,6 +274,7 @@ struct domain
     /* Event channel information. */
     struct evtchn   *evtchn[NR_EVTCHN_BUCKETS];
     spinlock_t       event_lock;
+    unsigned int     evtchn_level;
 
     struct grant_table *grant_table;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:24:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpk1Y-0000Rj-P3; Mon, 31 Dec 2012 18:23:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tpk1X-0000RQ-6D
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:23:43 +0000
Received: from [85.158.143.35:40964] by server-3.bemta-4.messagelabs.com id
	85/39-18211-E28D1E05; Mon, 31 Dec 2012 18:23:42 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356978219!17207303!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25195 invoked from network); 31 Dec 2012 18:23:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:23:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2180575"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:23:38 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1Tpk1S-0001E7-Fy;
	Mon, 31 Dec 2012 18:23:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:22:34 +0000
Message-ID: <1356978155-18293-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
References: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, Wei Liu <liuw@liuw.name>
Subject: [Xen-devel] [RFC PATCH 2/3] Dynamically allocate domain->evtchn,
	also bump EVTCHNS_PER_BUCKET to 512.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <liuw@liuw.name>

Allocate the evtchn bucket array dynamically in evtchn_init() and free it
in evtchn_destroy(). Also bump EVTCHNS_PER_BUCKET from 128 to 512 to
reduce the number of buckets.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/event_channel.c |    8 ++++++++
 xen/include/xen/sched.h    |    4 ++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 87e422e..9898f8e 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1172,6 +1172,12 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
 int evtchn_init(struct domain *d)
 {
+    d->evtchn = (struct evtchn **)
+        xzalloc_array(struct evtchn *, NR_EVTCHN_BUCKETS);
+
+    if ( !d->evtchn )
+        return -ENOMEM;
+
     spin_lock_init(&d->event_lock);
     d->evtchn_level = 2;
     if ( get_free_port(d) != 0 )
@@ -1215,6 +1221,8 @@ void evtchn_destroy(struct domain *d)
     spin_unlock(&d->event_lock);
 
     clear_global_virq_handlers(d);
+
+    xfree(d->evtchn);
 }
 
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 1c43e0a..5f23213 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -63,7 +63,7 @@ extern struct domain *dom0;
             };                                  \
             __v;})
 
-#define EVTCHNS_PER_BUCKET 128
+#define EVTCHNS_PER_BUCKET 512
 #define NR_EVTCHN_BUCKETS  (NR_EVENT_CHANNELS / EVTCHNS_PER_BUCKET)
 
 struct evtchn
@@ -272,7 +272,7 @@ struct domain
     spinlock_t       rangesets_lock;
 
     /* Event channel information. */
-    struct evtchn   *evtchn[NR_EVTCHN_BUCKETS];
+    struct evtchn  **evtchn;
     spinlock_t       event_lock;
     unsigned int     evtchn_level;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:24:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpk1X-0000RS-Cv; Mon, 31 Dec 2012 18:23:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tpk1W-0000RL-Db
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:23:42 +0000
Received: from [85.158.143.35:40956] by server-1.bemta-4.messagelabs.com id
	E3/7A-28401-D28D1E05; Mon, 31 Dec 2012 18:23:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1356978219!17207303!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25157 invoked from network); 31 Dec 2012 18:23:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:23:41 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2180574"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:23:38 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1Tpk1S-0001E7-Di	for xen-devel@lists.xen.org;
	Mon, 31 Dec 2012 18:23:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:22:32 +0000
Message-ID: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Subject: [Xen-devel] Implement 3-level event channels in Xen.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series implements 3-level event channel routines in Xen.

The implementation is as follows:
  * Add an evtchn_level field to struct domain.
  * Add pointers in struct domain to the 3-level shared arrays.
  * Add a second-level selector to struct vcpu.
  * Add a new op to do_event_channel_op to register n-level evtchns.

The exposed interface for registering is extensible; however, only 3-level
is supported at the moment.

The routines for 3-level evtchns are more or less the same as the 2-level
ones.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:24:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpk1s-0000TR-JE; Mon, 31 Dec 2012 18:24:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tpk1q-0000TG-Mg
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:24:03 +0000
Received: from [85.158.139.83:12164] by server-2.bemta-5.messagelabs.com id
	9F/03-16162-148D1E05; Mon, 31 Dec 2012 18:24:01 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356978238!27905308!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3020 invoked from network); 31 Dec 2012 18:23:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:23:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2340565"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:23:38 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1Tpk1S-0001E7-GW;
	Mon, 31 Dec 2012 18:23:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:22:35 +0000
Message-ID: <1356978155-18293-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
References: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, Wei Liu <liuw@liuw.name>
Subject: [Xen-devel] [RFC PATCH 3/3] Implement 3-level event channel
	routines.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <liuw@liuw.name>

Add fields to struct domain to hold pointers to the 3-level shared arrays.
Also add a per-cpu second-level selector to struct vcpu.

These structures are mapped by a new evtchn op. Guests should fall back to
the 2-level event channel if mapping fails.

The routines are more or less the same as the 2-level ones.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain.c              |    2 +
 xen/arch/x86/irq.c                 |    2 +-
 xen/common/event_channel.c         |  520 ++++++++++++++++++++++++++++++++++--
 xen/common/keyhandler.c            |   12 +-
 xen/common/schedule.c              |    2 +-
 xen/include/public/event_channel.h |   24 ++
 xen/include/public/xen.h           |   17 +-
 xen/include/xen/event.h            |    4 +
 xen/include/xen/sched.h            |   12 +-
 9 files changed, 565 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7a07c06..b457b00 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2058,6 +2058,8 @@ int domain_relinquish_resources(struct domain *d)
             }
         }
 
+        event_channel_unmap_nlevel(d);
+
         d->arch.relmem = RELMEM_shared;
         /* fallthrough */
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 05cede5..d517e39 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1452,7 +1452,7 @@ int pirq_guest_unmask(struct domain *d)
         {
             pirq = pirqs[i]->pirq;
             if ( pirqs[i]->masked &&
-                 !test_bit(pirqs[i]->evtchn, &shared_info(d, evtchn_mask)) )
+                 !evtchn_is_masked(d, pirqs[i]->evtchn) )
                 pirq_guest_eoi(pirqs[i]);
         }
     } while ( ++pirq < d->nr_pirqs && n == ARRAY_SIZE(pirqs) );
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 9898f8e..fb3a7b4 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -27,6 +27,7 @@
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
 #include <asm/current.h>
+#include <xen/paging.h>
 
 #include <public/xen.h>
 #include <public/event_channel.h>
@@ -413,6 +414,85 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     return rc;
 }
 
+static inline int __evtchn_is_masked_l2(struct domain *d, int port)
+{
+    return test_bit(port, &shared_info(d, evtchn_mask));
+}
+
+static inline int __evtchn_is_masked_l3(struct domain *d, int port)
+{
+    return test_bit(port % EVTCHNS_PER_PAGE,
+                    d->evtchn_mask[port / EVTCHNS_PER_PAGE]);
+}
+
+int evtchn_is_masked(struct domain *d, int port)
+{
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        return __evtchn_is_masked_l2(d, port);
+    case 3:
+        return __evtchn_is_masked_l3(d, port);
+    default:
+        printk(" %s: unknown event channel level %d for domain %d \n",
+               __FUNCTION__, d->evtchn_level, d->domain_id);
+        return 1;
+    }
+}
+
+static inline int __evtchn_is_pending_l2(struct domain *d, int port)
+{
+    return test_bit(port, &shared_info(d, evtchn_pending));
+}
+
+static inline int __evtchn_is_pending_l3(struct domain *d, int port)
+{
+    return test_bit(port % EVTCHNS_PER_PAGE,
+                    d->evtchn_pending[port / EVTCHNS_PER_PAGE]);
+}
+
+int evtchn_is_pending(struct domain *d, int port)
+{
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        return __evtchn_is_pending_l2(d, port);
+    case 3:
+        return __evtchn_is_pending_l3(d, port);
+    default:
+        printk("%s: unknown event channel level %d for domain %d\n",
+               __func__, d->evtchn_level, d->domain_id);
+        return 0;
+    }
+}
+
+static inline void __evtchn_clear_pending_l2(struct domain *d, int port)
+{
+    clear_bit(port, &shared_info(d, evtchn_pending));
+}
+
+static inline void __evtchn_clear_pending_l3(struct domain *d, int port)
+{
+    clear_bit(port % EVTCHNS_PER_PAGE,
+              d->evtchn_pending[port / EVTCHNS_PER_PAGE]);
+}
+
+static void evtchn_clear_pending(struct domain *d, int port)
+{
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        __evtchn_clear_pending_l2(d, port);
+        break;
+    case 3:
+        __evtchn_clear_pending_l3(d, port);
+        break;
+    default:
+        printk("Cannot clear pending for %d level event channel"
+               " for domain %d, port %d\n",
+               d->evtchn_level, d->domain_id, port);
+    }
+}
 
 static long __evtchn_close(struct domain *d1, int port1)
 {
@@ -529,7 +609,7 @@ static long __evtchn_close(struct domain *d1, int port1)
     }
 
     /* Clear pending event to avoid unexpected behavior on re-bind. */
-    clear_bit(port1, &shared_info(d1, evtchn_pending));
+    evtchn_clear_pending(d1, port1);
 
     /* Reset binding to vcpu0 when the channel is freed. */
     chn1->state          = ECS_FREE;
@@ -606,16 +686,15 @@ int evtchn_send(struct domain *d, unsigned int lport)
         ret = -EINVAL;
     }
 
-out:
+ out:
     spin_unlock(&ld->event_lock);
 
     return ret;
 }
 
-static void evtchn_set_pending(struct vcpu *v, int port)
+static void __evtchn_set_pending_l2(struct vcpu *v, int port)
 {
     struct domain *d = v->domain;
-    int vcpuid;
 
     /*
      * The following bit operations must happen in strict order.
@@ -633,7 +712,50 @@ static void evtchn_set_pending(struct vcpu *v, int port)
     {
         vcpu_mark_events_pending(v);
     }
-    
+}
+
+static void __evtchn_set_pending_l3(struct vcpu *v, int port)
+{
+    struct domain *d = v->domain;
+
+    int page_no = port / EVTCHNS_PER_PAGE;
+    int offset = port % EVTCHNS_PER_PAGE;
+    int l1cb = BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d);
+    int l2cb = BITS_PER_EVTCHN_WORD(d);
+
+    if ( test_and_set_bit(offset, d->evtchn_pending[page_no]) )
+        return;
+
+    if ( !test_bit(offset, d->evtchn_mask[page_no]) &&
+         !test_and_set_bit(port / l2cb,
+                           v->evtchn_pending_sel_l2) &&
+         !test_and_set_bit(port / l1cb,
+                           &vcpu_info(v, evtchn_pending_sel)) )
+    {
+        vcpu_mark_events_pending(v);
+    }
+}
+
+static void evtchn_set_pending(struct vcpu *v, int port)
+{
+    struct domain *d = v->domain;
+    int vcpuid;
+
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        __evtchn_set_pending_l2(v, port);
+        break;
+    case 3:
+        __evtchn_set_pending_l3(v, port);
+        break;
+    default:
+        printk("Cannot set pending for %d level event channel"
+               " for domain %d, port %d\n",
+               d->evtchn_level, d->domain_id, port);
+        return;
+    }
+
     /* Check if some VCPU might be polling for this event. */
     if ( likely(bitmap_empty(d->poll_mask, d->max_vcpus)) )
         return;
@@ -916,21 +1038,16 @@ long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
 }
 
 
-int evtchn_unmask(unsigned int port)
+static int __evtchn_unmask_l2(unsigned int port)
 {
     struct domain *d = current->domain;
     struct vcpu   *v;
 
-    ASSERT(spin_is_locked(&d->event_lock));
-
-    if ( unlikely(!port_is_valid(d, port)) )
-        return -EINVAL;
-
     v = d->vcpu[evtchn_from_port(d, port)->notify_vcpu_id];
 
     /*
      * These operations must happen in strict order. Based on
-     * include/xen/event.h:evtchn_set_pending(). 
+     * __evtchn_set_pending_l2().
      */
     if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
          test_bit          (port, &shared_info(d, evtchn_pending)) &&
@@ -943,6 +1060,58 @@ int evtchn_unmask(unsigned int port)
     return 0;
 }
 
+static int __evtchn_unmask_l3(unsigned int port)
+{
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    int page_no = port / EVTCHNS_PER_PAGE;
+    int offset = port % EVTCHNS_PER_PAGE;
+    int l1cb = BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d);
+    int l2cb = BITS_PER_EVTCHN_WORD(d);
+
+    v = d->vcpu[evtchn_from_port(d, port)->notify_vcpu_id];
+
+    if ( test_and_clear_bit(offset, d->evtchn_mask[page_no]) &&
+         test_bit(offset, d->evtchn_pending[page_no]) &&
+         !test_and_set_bit(port / l2cb,
+                           v->evtchn_pending_sel_l2) &&
+         !test_and_set_bit(port / l1cb,
+                           &vcpu_info(v, evtchn_pending_sel)) )
+    {
+        vcpu_mark_events_pending(v);
+    }
+
+    return 0;
+}
+
+int evtchn_unmask(unsigned int port)
+{
+    struct domain *d = current->domain;
+    int rc = -EINVAL;
+
+    ASSERT(spin_is_locked(&d->event_lock));
+
+    if ( unlikely(!port_is_valid(d, port)) )
+        return -EINVAL;
+
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        rc = __evtchn_unmask_l2(port);
+        break;
+    case 3:
+        rc = __evtchn_unmask_l3(port);
+        break;
+    default:
+        printk("Cannot unmask port %d for %d level event channel"
+               " for domain %d\n", port,
+               d->evtchn_level, d->domain_id);
+    }
+
+    return rc;
+}
+
 
 static long evtchn_reset(evtchn_reset_t *r)
 {
@@ -969,6 +1138,290 @@ out:
     return rc;
 }
 
+static void __unmap_l2_sel(struct vcpu *v)
+{
+    unsigned long mfn;
+
+    if ( v->evtchn_pending_sel_l2 != 0 )
+    {
+        mfn = virt_to_mfn(v->evtchn_pending_sel_l2);
+        unmap_domain_page_global(v->evtchn_pending_sel_l2);
+        put_page_and_type(mfn_to_page(mfn));
+
+        v->evtchn_pending_sel_l2 = 0;
+    }
+}
+
+static int __map_l2_sel(struct vcpu *v, unsigned long gfn, unsigned long off)
+{
+    void *mapping;
+    int rc = -EINVAL;
+    struct page_info *page;
+    struct domain *d = v->domain;
+
+    if ( off >= PAGE_SIZE )
+        return rc;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        goto out;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        goto out;
+    }
+
+    mapping = __map_domain_page_global(page);
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        rc = -ENOMEM;
+        goto out;
+    }
+
+    v->evtchn_pending_sel_l2 = (unsigned long *)(mapping + off);
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+
+static void __unmap_l3_arrays(struct domain *d)
+{
+    int i;
+    unsigned long mfn;
+
+    for ( i = 0; i < MAX_L3_PAGES; i++ )
+    {
+        if ( d->evtchn_pending[i] != 0 )
+        {
+            mfn = virt_to_mfn(d->evtchn_pending[i]);
+            unmap_domain_page_global(d->evtchn_pending[i]);
+            put_page_and_type(mfn_to_page(mfn));
+            d->evtchn_pending[i] = 0;
+        }
+        if ( d->evtchn_mask[i] != 0 )
+        {
+            mfn = virt_to_mfn(d->evtchn_mask[i]);
+            unmap_domain_page_global(d->evtchn_mask[i]);
+            put_page_and_type(mfn_to_page(mfn));
+            d->evtchn_mask[i] = 0;
+        }
+    }
+}
+
+static int __map_l3_arrays(struct domain *d, unsigned long *pending,
+                           unsigned long *mask)
+{
+    int i;
+    void *pending_mapping, *mask_mapping;
+    struct page_info *pending_page, *mask_page;
+    unsigned long pending_gfn, mask_gfn;
+    int rc = -EINVAL;
+
+    for ( i = 0;
+          i < MAX_L3_PAGES && pending[i] != 0 && mask[i] != 0;
+          i++ )
+    {
+        pending_gfn = pending[i];
+        mask_gfn = mask[i];
+
+        pending_page = get_page_from_gfn(d, pending_gfn, NULL, P2M_ALLOC);
+        if ( !pending_page )
+            goto err;
+
+        mask_page = get_page_from_gfn(d, mask_gfn, NULL, P2M_ALLOC);
+        if ( !mask_page )
+        {
+            put_page(pending_page);
+            goto err;
+        }
+
+        if ( !get_page_type(pending_page, PGT_writable_page) )
+        {
+            put_page(pending_page);
+            put_page(mask_page);
+            goto err;
+        }
+
+        if ( !get_page_type(mask_page, PGT_writable_page) )
+        {
+            put_page_and_type(pending_page);
+            put_page(mask_page);
+            goto err;
+        }
+
+        pending_mapping = __map_domain_page_global(pending_page);
+        if ( !pending_mapping )
+        {
+            put_page_and_type(pending_page);
+            put_page_and_type(mask_page);
+            rc = -ENOMEM;
+            goto err;
+        }
+
+        mask_mapping = __map_domain_page_global(mask_page);
+        if ( !mask_mapping )
+        {
+            unmap_domain_page_global(pending_mapping);
+            put_page_and_type(pending_page);
+            put_page_and_type(mask_page);
+            rc = -ENOMEM;
+            goto err;
+        }
+
+        d->evtchn_pending[i] = pending_mapping;
+        d->evtchn_mask[i] = mask_mapping;
+    }
+
+    rc = 0;
+
+err:
+    return rc;
+}
+
+static void __evtchn_unmap_all_3level(struct domain *d)
+{
+    struct vcpu *v;
+    /* This is called when destroying a domain, so no pausing... */
+    for_each_vcpu ( d, v )
+        __unmap_l2_sel(v);
+    __unmap_l3_arrays(d);
+}
+
+void event_channel_unmap_nlevel(struct domain *d)
+{
+    switch ( d->evtchn_level )
+    {
+    case 3:
+        __evtchn_unmap_all_3level(d);
+        break;
+    default:
+        break;
+    }
+}
+
+static void __evtchn_migrate_bitmap_l3(struct domain *d)
+{
+    struct vcpu *v;
+
+    /* Easy way to migrate: move the existing selector down one level,
+     * then copy the pending array and mask array. */
+    for_each_vcpu ( d, v )
+    {
+        memcpy(&v->evtchn_pending_sel_l2[0],
+               &vcpu_info(v, evtchn_pending_sel),
+               sizeof(vcpu_info(v, evtchn_pending_sel)));
+
+        memset(&vcpu_info(v, evtchn_pending_sel), 0,
+               sizeof(vcpu_info(v, evtchn_pending_sel)));
+
+        set_bit(0, &vcpu_info(v, evtchn_pending_sel));
+    }
+
+    memcpy(d->evtchn_pending[0], &shared_info(d, evtchn_pending),
+           sizeof(shared_info(d, evtchn_pending)));
+    memcpy(d->evtchn_mask[0], &shared_info(d, evtchn_mask),
+           sizeof(shared_info(d, evtchn_mask)));
+}
+
+static long __evtchn_register_3level(struct evtchn_register_3level *r)
+{
+    struct domain *d = current->domain;
+    unsigned long mfns[r->nr_vcpus];
+    unsigned long offsets[r->nr_vcpus];
+    unsigned char was_up[r->nr_vcpus];
+    int rc, i;
+    struct vcpu *v;
+
+    if ( d->evtchn_level == 3 )
+        return -EINVAL;
+
+    if ( r->nr_vcpus > d->max_vcpus )
+        return -EINVAL;
+
+    for ( i = 0; i < MAX_L3_PAGES; i++ )
+        if ( d->evtchn_pending[i] || d->evtchn_mask[i] )
+            return -EINVAL;
+
+    for_each_vcpu ( d, v )
+        if ( v->evtchn_pending_sel_l2 )
+            return -EINVAL;
+
+    if ( copy_from_user(mfns, r->l2sel_mfn,
+                        sizeof(unsigned long)*r->nr_vcpus) )
+        return -EFAULT;
+
+    if ( copy_from_user(offsets, r->l2sel_offset,
+                        sizeof(unsigned long)*r->nr_vcpus) )
+        return -EFAULT;
+
+    /* Put vcpus offline. */
+    for_each_vcpu ( d, v )
+    {
+        if ( v == current )
+            was_up[v->vcpu_id] = 1;
+        else
+            was_up[v->vcpu_id] = !test_and_set_bit(_VPF_down,
+                                                   &v->pause_flags);
+    }
+
+    /* map evtchn pending array and evtchn mask array */
+    rc = __map_l3_arrays(d, r->evtchn_pending, r->evtchn_mask);
+    if ( rc )
+        goto out;
+
+    for_each_vcpu ( d, v )
+    {
+        if ( (rc = __map_l2_sel(v, mfns[v->vcpu_id], offsets[v->vcpu_id])) )
+        {
+            struct vcpu *v1;
+            for_each_vcpu ( d, v1 )
+                __unmap_l2_sel(v1);
+            __unmap_l3_arrays(d);
+            goto out;
+        }
+    }
+
+    /* Scan current bitmap and migrate all outstanding events to new bitmap */
+    __evtchn_migrate_bitmap_l3(d);
+
+    /* make sure all writes take effect before switching to new routines */
+    wmb();
+    d->evtchn_level = 3;
+
+ out:
+    /* Bring vcpus back online. */
+    for_each_vcpu ( d, v )
+        if ( was_up[v->vcpu_id] &&
+             test_and_clear_bit(_VPF_down, &v->pause_flags) )
+            vcpu_wake(v);
+
+    return rc;
+}
+
+static long evtchn_register_nlevel(evtchn_register_nlevel_t *r)
+{
+    struct domain *d = current->domain;
+    int rc;
+
+    spin_lock(&d->event_lock);
+
+    switch ( r->level )
+    {
+    case 3:
+        rc = __evtchn_register_3level(&r->u.l3);
+        break;
+    default:
+        rc = -EINVAL;
+    }
+
+    spin_unlock(&d->event_lock);
+
+    return rc;
+}
 
 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
@@ -1078,6 +1531,14 @@ long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case EVTCHNOP_register_nlevel: {
+        struct evtchn_register_nlevel reg;
+        if ( copy_from_guest(&reg, arg, 1) != 0 )
+            return -EFAULT;
+        rc = evtchn_register_nlevel(&reg);
+        break;
+    }
+
     default:
         rc = -ENOSYS;
         break;
@@ -1251,7 +1712,6 @@ void evtchn_move_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
-
 static void domain_dump_evtchn_info(struct domain *d)
 {
     unsigned int port;
@@ -1260,8 +1720,10 @@ static void domain_dump_evtchn_info(struct domain *d)
     bitmap_scnlistprintf(keyhandler_scratch, sizeof(keyhandler_scratch),
                          d->poll_mask, d->max_vcpus);
     printk("Event channel information for domain %d:\n"
+           "using %d-level event channel\n"
            "Polling vCPUs: {%s}\n"
-           "    port [p/m]\n", d->domain_id, keyhandler_scratch);
+           "    port [p/m]\n", d->domain_id, d->evtchn_level,
+           keyhandler_scratch);
 
     spin_lock(&d->event_lock);
 
@@ -1269,6 +1731,8 @@ static void domain_dump_evtchn_info(struct domain *d)
     {
         const struct evtchn *chn;
         char *ssid;
+        int page_no = port / EVTCHNS_PER_PAGE;
+        int offset = port % EVTCHNS_PER_PAGE;
 
         if ( !port_is_valid(d, port) )
             continue;
@@ -1276,11 +1740,28 @@ static void domain_dump_evtchn_info(struct domain *d)
         if ( chn->state == ECS_FREE )
             continue;
 
-        printk("    %4u [%d/%d]: s=%d n=%d x=%d",
-               port,
-               !!test_bit(port, &shared_info(d, evtchn_pending)),
-               !!test_bit(port, &shared_info(d, evtchn_mask)),
-               chn->state, chn->notify_vcpu_id, chn->xen_consumer);
+        printk("    %4u", port);
+
+        switch ( d->evtchn_level )
+        {
+        case 2:
+            printk(" [%d/%d]:",
+                   !!test_bit(port, &shared_info(d, evtchn_pending)),
+                   !!test_bit(port, &shared_info(d, evtchn_mask)));
+            break;
+        case 3:
+            printk(" [%d/%d]:",
+                   !!test_bit(offset, d->evtchn_pending[page_no]),
+                   !!test_bit(offset, d->evtchn_mask[page_no]));
+            break;
+        default:
+            printk(" %s: unknown event channel level %d for domain %d\n",
+                   __func__, d->evtchn_level, d->domain_id);
+            goto out;
+        }
+
+        printk(" s=%d n=%d x=%d", chn->state, chn->notify_vcpu_id,
+               chn->xen_consumer);
 
         switch ( chn->state )
         {
@@ -1310,6 +1791,7 @@ static void domain_dump_evtchn_info(struct domain *d)
         }
     }
 
+ out:
     spin_unlock(&d->event_lock);
 }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 2c5c230..294cca9 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -298,15 +298,15 @@ static void dump_domains(unsigned char key)
     {
         for_each_vcpu ( d, v )
         {
+            unsigned int i, bits = BITS_PER_EVTCHN_WORD(d);
+            for ( i = 2; i < d->evtchn_level; i++ )
+                bits *= BITS_PER_EVTCHN_WORD(d);
             printk("Notifying guest %d:%d (virq %d, port %d, stat %d/%d/%d)\n",
                    d->domain_id, v->vcpu_id,
                    VIRQ_DEBUG, v->virq_to_evtchn[VIRQ_DEBUG],
-                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG], 
-                            &shared_info(d, evtchn_pending)),
-                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG], 
-                            &shared_info(d, evtchn_mask)),
-                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG] /
-                            BITS_PER_EVTCHN_WORD(d),
+                   evtchn_is_pending(d, v->virq_to_evtchn[VIRQ_DEBUG]),
+                   evtchn_is_masked(d, v->virq_to_evtchn[VIRQ_DEBUG]),
+                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG] / bits,
                             &vcpu_info(v, evtchn_pending_sel)));
             send_guest_vcpu_virq(v, VIRQ_DEBUG);
         }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index ae798c9..b676c9c 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -693,7 +693,7 @@ static long do_poll(struct sched_poll *sched_poll)
             goto out;
 
         rc = 0;
-        if ( test_bit(port, &shared_info(d, evtchn_pending)) )
+        if ( evtchn_is_pending(d, port) )
             goto out;
     }
 
diff --git a/xen/include/public/event_channel.h b/xen/include/public/event_channel.h
index 07ff321..29cd6e9 100644
--- a/xen/include/public/event_channel.h
+++ b/xen/include/public/event_channel.h
@@ -71,6 +71,7 @@
 #define EVTCHNOP_bind_vcpu        8
 #define EVTCHNOP_unmask           9
 #define EVTCHNOP_reset           10
+#define EVTCHNOP_register_nlevel 11
 /* ` } */
 
 typedef uint32_t evtchn_port_t;
@@ -258,6 +259,29 @@ struct evtchn_reset {
 typedef struct evtchn_reset evtchn_reset_t;
 
 /*
+ * EVTCHNOP_register_nlevel: Register N-level event channels.
+ * NOTES:
+ *   1. Currently only 3-level is supported.
+ *   2. Guests should fall back to basic 2-level if this call fails.
+ */
+#define MAX_L3_PAGES 8
+struct evtchn_register_3level {
+    unsigned long evtchn_pending[MAX_L3_PAGES];
+    unsigned long evtchn_mask[MAX_L3_PAGES];
+    unsigned long *l2sel_mfn;
+    unsigned long *l2sel_offset;
+    uint32_t nr_vcpus;
+};
+
+struct evtchn_register_nlevel {
+    uint32_t level;
+    union {
+        struct evtchn_register_3level l3;
+    } u;
+};
+typedef struct evtchn_register_nlevel evtchn_register_nlevel_t;
+
+/*
  * ` enum neg_errnoval
  * ` HYPERVISOR_event_channel_op_compat(struct evtchn_op *op)
  * `
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 5593066..1d4ef2d 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -554,9 +554,24 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);
 
 /*
  * Event channel endpoints per domain:
+ * 2-level:
  *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
+ * 3-level:
+ *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
-#define NR_EVENT_CHANNELS (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * 64)
+#define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;   \
+            switch (x) {                                \
+            case 2:                                     \
+                __v = NR_EVENT_CHANNELS_L2; break;      \
+            case 3:                                     \
+                __v = NR_EVENT_CHANNELS_L3; break;      \
+            default:                                    \
+                BUG();                                  \
+            }                                           \
+            __v;})
+
 
 struct vcpu_time_info {
     /*
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..e7cd6be 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -102,4 +102,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
         mb(); /* set blocked status /then/ caller does his work */      \
     } while ( 0 )
 
+extern void event_channel_unmap_nlevel(struct domain *d);
+int evtchn_is_masked(struct domain *d, int port);
+int evtchn_is_pending(struct domain *d, int port);
+
 #endif /* __XEN_EVENT_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5f23213..ae78549 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -64,7 +64,8 @@ extern struct domain *dom0;
             __v;})
 
 #define EVTCHNS_PER_BUCKET 512
-#define NR_EVTCHN_BUCKETS  (NR_EVENT_CHANNELS / EVTCHNS_PER_BUCKET)
+#define NR_EVTCHN_BUCKETS  (NR_EVENT_CHANNELS_L3 / EVTCHNS_PER_BUCKET)
+#define EVTCHNS_PER_PAGE   (PAGE_SIZE * 8)
 
 struct evtchn
 {
@@ -104,7 +105,7 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
 struct waitqueue_vcpu;
 
-struct vcpu 
+struct vcpu
 {
     int              vcpu_id;
 
@@ -112,6 +113,9 @@ struct vcpu
 
     vcpu_info_t     *vcpu_info;
 
+    /* For 3-level event channel */
+    unsigned long   *evtchn_pending_sel_l2;
+
     struct domain   *domain;
 
     struct vcpu     *next_in_list;
@@ -275,6 +279,10 @@ struct domain
     struct evtchn  **evtchn;
     spinlock_t       event_lock;
     unsigned int     evtchn_level;
+#define L3_PAGES (NR_EVENT_CHANNELS_L3 / BITS_PER_LONG * sizeof(unsigned long) / PAGE_SIZE)
+    unsigned long   *evtchn_pending[L3_PAGES];
+    unsigned long   *evtchn_mask[L3_PAGES];
+#undef  L3_PAGES
 
     struct grant_table *grant_table;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:24:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Tpk1s-0000TR-JE; Mon, 31 Dec 2012 18:24:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Tpk1q-0000TG-Mg
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:24:03 +0000
Received: from [85.158.139.83:12164] by server-2.bemta-5.messagelabs.com id
	9F/03-16162-148D1E05; Mon, 31 Dec 2012 18:24:01 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1356978238!27905308!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3020 invoked from network); 31 Dec 2012 18:23:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:23:59 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2340565"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:23:38 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1Tpk1S-0001E7-GW;
	Mon, 31 Dec 2012 18:23:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:22:35 +0000
Message-ID: <1356978155-18293-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
References: <1356978155-18293-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, Wei Liu <liuw@liuw.name>
Subject: [Xen-devel] [RFC PATCH 3/3] Implement 3-level event channel
	routines.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <liuw@liuw.name>

Add fields to struct domain holding pointers to the 3-level shared
bitmap arrays, and a per-vcpu second-level selector to struct vcpu.

These structures are mapped by a new evtchn op. Guests should fall back
to the 2-level event channel if the mapping fails.

The routines are more or less the same as the 2-level ones.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain.c              |    2 +
 xen/arch/x86/irq.c                 |    2 +-
 xen/common/event_channel.c         |  520 ++++++++++++++++++++++++++++++++++--
 xen/common/keyhandler.c            |   12 +-
 xen/common/schedule.c              |    2 +-
 xen/include/public/event_channel.h |   24 ++
 xen/include/public/xen.h           |   17 +-
 xen/include/xen/event.h            |    4 +
 xen/include/xen/sched.h            |   12 +-
 9 files changed, 565 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7a07c06..b457b00 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2058,6 +2058,8 @@ int domain_relinquish_resources(struct domain *d)
             }
         }
 
+        event_channel_unmap_nlevel(d);
+
         d->arch.relmem = RELMEM_shared;
         /* fallthrough */
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 05cede5..d517e39 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1452,7 +1452,7 @@ int pirq_guest_unmask(struct domain *d)
         {
             pirq = pirqs[i]->pirq;
             if ( pirqs[i]->masked &&
-                 !test_bit(pirqs[i]->evtchn, &shared_info(d, evtchn_mask)) )
+                 !evtchn_is_masked(d, pirqs[i]->evtchn) )
                 pirq_guest_eoi(pirqs[i]);
         }
     } while ( ++pirq < d->nr_pirqs && n == ARRAY_SIZE(pirqs) );
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 9898f8e..fb3a7b4 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -27,6 +27,7 @@
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
 #include <asm/current.h>
+#include <xen/paging.h>
 
 #include <public/xen.h>
 #include <public/event_channel.h>
@@ -413,6 +414,85 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     return rc;
 }
 
+static inline int __evtchn_is_masked_l2(struct domain *d, int port)
+{
+    return test_bit(port, &shared_info(d, evtchn_mask));
+}
+
+static inline int __evtchn_is_masked_l3(struct domain *d, int port)
+{
+    return test_bit(port % EVTCHNS_PER_PAGE,
+                    d->evtchn_mask[port / EVTCHNS_PER_PAGE]);
+}
+
+int evtchn_is_masked(struct domain *d, int port)
+{
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        return __evtchn_is_masked_l2(d, port);
+    case 3:
+        return __evtchn_is_masked_l3(d, port);
+    default:
+        printk("%s: unknown event channel level %d for domain %d\n",
+               __func__, d->evtchn_level, d->domain_id);
+        return 1;
+    }
+}
+
+static inline int __evtchn_is_pending_l2(struct domain *d, int port)
+{
+    return test_bit(port, &shared_info(d, evtchn_pending));
+}
+
+static inline int __evtchn_is_pending_l3(struct domain *d, int port)
+{
+    return test_bit(port % EVTCHNS_PER_PAGE,
+                    d->evtchn_pending[port / EVTCHNS_PER_PAGE]);
+}
+
+int evtchn_is_pending(struct domain *d, int port)
+{
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        return __evtchn_is_pending_l2(d, port);
+    case 3:
+        return __evtchn_is_pending_l3(d, port);
+    default:
+        printk("%s: unknown event channel level %d for domain %d\n",
+               __func__, d->evtchn_level, d->domain_id);
+        return 0;
+    }
+}
+
+static inline void __evtchn_clear_pending_l2(struct domain *d, int port)
+{
+    clear_bit(port, &shared_info(d, evtchn_pending));
+}
+
+static inline void __evtchn_clear_pending_l3(struct domain *d, int port)
+{
+    clear_bit(port % EVTCHNS_PER_PAGE,
+              d->evtchn_pending[port / EVTCHNS_PER_PAGE]);
+}
+
+static void evtchn_clear_pending(struct domain *d, int port)
+{
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        __evtchn_clear_pending_l2(d, port);
+        break;
+    case 3:
+        __evtchn_clear_pending_l3(d, port);
+        break;
+    default:
+        printk("Cannot clear pending for %d level event channel"
+               " for domain %d, port %d\n",
+               d->evtchn_level, d->domain_id, port);
+    }
+}
 
 static long __evtchn_close(struct domain *d1, int port1)
 {
@@ -529,7 +609,7 @@ static long __evtchn_close(struct domain *d1, int port1)
     }
 
     /* Clear pending event to avoid unexpected behavior on re-bind. */
-    clear_bit(port1, &shared_info(d1, evtchn_pending));
+    evtchn_clear_pending(d1, port1);
 
     /* Reset binding to vcpu0 when the channel is freed. */
     chn1->state          = ECS_FREE;
@@ -606,16 +686,15 @@ int evtchn_send(struct domain *d, unsigned int lport)
         ret = -EINVAL;
     }
 
-out:
+ out:
     spin_unlock(&ld->event_lock);
 
     return ret;
 }
 
-static void evtchn_set_pending(struct vcpu *v, int port)
+static void __evtchn_set_pending_l2(struct vcpu *v, int port)
 {
     struct domain *d = v->domain;
-    int vcpuid;
 
     /*
      * The following bit operations must happen in strict order.
@@ -633,7 +712,50 @@ static void evtchn_set_pending(struct vcpu *v, int port)
     {
         vcpu_mark_events_pending(v);
     }
-    
+}
+
+static void __evtchn_set_pending_l3(struct vcpu *v, int port)
+{
+    struct domain *d = v->domain;
+
+    int page_no = port / EVTCHNS_PER_PAGE;
+    int offset = port % EVTCHNS_PER_PAGE;
+    int l1cb = BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d);
+    int l2cb = BITS_PER_EVTCHN_WORD(d);
+
+    if ( test_and_set_bit(offset, d->evtchn_pending[page_no]) )
+        return;
+
+    if ( !test_bit(offset, d->evtchn_mask[page_no]) &&
+         !test_and_set_bit(port / l2cb,
+                           v->evtchn_pending_sel_l2) &&
+         !test_and_set_bit(port / l1cb,
+                           &vcpu_info(v, evtchn_pending_sel)) )
+    {
+        vcpu_mark_events_pending(v);
+    }
+}
+
+static void evtchn_set_pending(struct vcpu *v, int port)
+{
+    struct domain *d = v->domain;
+    int vcpuid;
+
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        __evtchn_set_pending_l2(v, port);
+        break;
+    case 3:
+        __evtchn_set_pending_l3(v, port);
+        break;
+    default:
+        printk("Cannot set pending for %d level event channel"
+               " for domain %d, port %d\n",
+               d->evtchn_level, d->domain_id, port);
+        return;
+    }
+
     /* Check if some VCPU might be polling for this event. */
     if ( likely(bitmap_empty(d->poll_mask, d->max_vcpus)) )
         return;
@@ -916,21 +1038,16 @@ long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
 }
 
 
-int evtchn_unmask(unsigned int port)
+static int __evtchn_unmask_l2(unsigned int port)
 {
     struct domain *d = current->domain;
     struct vcpu   *v;
 
-    ASSERT(spin_is_locked(&d->event_lock));
-
-    if ( unlikely(!port_is_valid(d, port)) )
-        return -EINVAL;
-
     v = d->vcpu[evtchn_from_port(d, port)->notify_vcpu_id];
 
     /*
      * These operations must happen in strict order. Based on
-     * include/xen/event.h:evtchn_set_pending(). 
+     * __evtchn_set_pending_l2().
      */
     if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
          test_bit          (port, &shared_info(d, evtchn_pending)) &&
@@ -943,6 +1060,58 @@ int evtchn_unmask(unsigned int port)
     return 0;
 }
 
+static int __evtchn_unmask_l3(unsigned int port)
+{
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    int page_no = port / EVTCHNS_PER_PAGE;
+    int offset = port % EVTCHNS_PER_PAGE;
+    /* Ports covered by one bit of the L1 / L2 selector, respectively. */
+    int l1cb = BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d);
+    int l2cb = BITS_PER_EVTCHN_WORD(d);
+
+    v = d->vcpu[evtchn_from_port(d, port)->notify_vcpu_id];
+
+    if ( test_and_clear_bit(offset, d->evtchn_mask[page_no]) &&
+         test_bit(offset, d->evtchn_pending[page_no]) &&
+         !test_and_set_bit(port / l2cb,
+                           v->evtchn_pending_sel_l2) &&
+         !test_and_set_bit(port / l1cb,
+                           &vcpu_info(v, evtchn_pending_sel)) )
+    {
+        vcpu_mark_events_pending(v);
+    }
+
+    return 0;
+}
+
+int evtchn_unmask(unsigned int port)
+{
+    struct domain *d = current->domain;
+    int rc = -EINVAL;
+
+    ASSERT(spin_is_locked(&d->event_lock));
+
+    if ( unlikely(!port_is_valid(d, port)) )
+        return -EINVAL;
+
+    switch ( d->evtchn_level )
+    {
+    case 2:
+        rc = __evtchn_unmask_l2(port);
+        break;
+    case 3:
+        rc = __evtchn_unmask_l3(port);
+        break;
+    default:
+        printk("Cannot unmask port %d on a %d-level event channel"
+               " for domain %d\n", port,
+               d->evtchn_level, d->domain_id);
+    }
+
+    return rc;
+}
+
 
 static long evtchn_reset(evtchn_reset_t *r)
 {
@@ -969,6 +1138,290 @@ out:
     return rc;
 }
 
+static void __unmap_l2_sel(struct vcpu *v)
+{
+    unsigned long mfn;
+
+    if ( v->evtchn_pending_sel_l2 != 0 )
+    {
+        /* Compute the MFN before the mapping is torn down. */
+        mfn = virt_to_mfn(v->evtchn_pending_sel_l2);
+        unmap_domain_page_global(v->evtchn_pending_sel_l2);
+        put_page_and_type(mfn_to_page(mfn));
+
+        v->evtchn_pending_sel_l2 = 0;
+    }
+}
+
+static int __map_l2_sel(struct vcpu *v, unsigned long gfn, unsigned long off)
+{
+    void *mapping;
+    int rc = -EINVAL;
+    struct page_info *page;
+    struct domain *d = v->domain;
+
+    if ( off >= PAGE_SIZE )
+        return rc;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        goto out;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        goto out;
+    }
+
+    mapping = __map_domain_page_global(page);
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        rc = -ENOMEM;
+        goto out;
+    }
+
+    v->evtchn_pending_sel_l2 = (unsigned long *)(mapping + off);
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+
+static void __unmap_l3_arrays(struct domain *d)
+{
+    int i;
+    unsigned long mfn;
+
+    for ( i = 0; i < MAX_L3_PAGES; i++ )
+    {
+        if ( d->evtchn_pending[i] != 0 )
+        {
+            mfn = virt_to_mfn(d->evtchn_pending[i]);
+            unmap_domain_page_global(d->evtchn_pending[i]);
+            put_page_and_type(mfn_to_page(mfn));
+            d->evtchn_pending[i] = 0;
+        }
+        if ( d->evtchn_mask[i] != 0 )
+        {
+            mfn = virt_to_mfn(d->evtchn_mask[i]);
+            unmap_domain_page_global(d->evtchn_mask[i]);
+            put_page_and_type(mfn_to_page(mfn));
+            d->evtchn_mask[i] = 0;
+        }
+    }
+}
+
+static int __map_l3_arrays(struct domain *d, unsigned long *pending,
+                           unsigned long *mask)
+{
+    int i;
+    void *pending_mapping, *mask_mapping;
+    struct page_info *pending_page, *mask_page;
+    unsigned long pending_gfn, mask_gfn;
+    int rc = -EINVAL;
+
+    for ( i = 0;
+          i < MAX_L3_PAGES && pending[i] != 0 && mask[i] != 0;
+          i++ )
+    {
+        pending_gfn = pending[i];
+        mask_gfn = mask[i];
+
+        pending_page = get_page_from_gfn(d, pending_gfn, NULL, P2M_ALLOC);
+        if ( !pending_page )
+            goto err;
+
+        mask_page = get_page_from_gfn(d, mask_gfn, NULL, P2M_ALLOC);
+        if ( !mask_page )
+        {
+            put_page(pending_page);
+            goto err;
+        }
+
+        if ( !get_page_type(pending_page, PGT_writable_page) )
+        {
+            put_page(pending_page);
+            put_page(mask_page);
+            goto err;
+        }
+
+        if ( !get_page_type(mask_page, PGT_writable_page) )
+        {
+            put_page_and_type(pending_page);
+            put_page(mask_page);
+            goto err;
+        }
+
+        pending_mapping = __map_domain_page_global(pending_page);
+        if ( !pending_mapping )
+        {
+            put_page_and_type(pending_page);
+            put_page_and_type(mask_page);
+            rc = -ENOMEM;
+            goto err;
+        }
+
+        mask_mapping = __map_domain_page_global(mask_page);
+        if ( !mask_mapping )
+        {
+            unmap_domain_page_global(pending_mapping);
+            put_page_and_type(pending_page);
+            put_page_and_type(mask_page);
+            rc = -ENOMEM;
+            goto err;
+        }
+
+        d->evtchn_pending[i] = pending_mapping;
+        d->evtchn_mask[i] = mask_mapping;
+    }
+
+    rc = 0;
+
+ err:
+    return rc;
+}
+
+static void __evtchn_unmap_all_3level(struct domain *d)
+{
+    struct vcpu *v;
+    /* This is called when destroying a domain, so no pausing... */
+    for_each_vcpu ( d, v )
+        __unmap_l2_sel(v);
+    __unmap_l3_arrays(d);
+}
+
+void event_channel_unmap_nlevel(struct domain *d)
+{
+    switch ( d->evtchn_level )
+    {
+    case 3:
+        __evtchn_unmap_all_3level(d);
+        break;
+    default:
+        break;
+    }
+}
+
+static void __evtchn_migrate_bitmap_l3(struct domain *d)
+{
+    struct vcpu *v;
+
+    /*
+     * The easy way to migrate: move the existing selector down one
+     * level, then copy the pending and mask arrays.
+     */
+    for_each_vcpu ( d, v )
+    {
+        memcpy(&v->evtchn_pending_sel_l2[0],
+               &vcpu_info(v, evtchn_pending_sel),
+               sizeof(vcpu_info(v, evtchn_pending_sel)));
+
+        memset(&vcpu_info(v, evtchn_pending_sel), 0,
+               sizeof(vcpu_info(v, evtchn_pending_sel)));
+
+        set_bit(0, &vcpu_info(v, evtchn_pending_sel));
+    }
+
+    memcpy(d->evtchn_pending[0], &shared_info(d, evtchn_pending),
+           sizeof(shared_info(d, evtchn_pending)));
+    memcpy(d->evtchn_mask[0], &shared_info(d, evtchn_mask),
+           sizeof(shared_info(d, evtchn_mask)));
+}
+
+static long __evtchn_register_3level(struct evtchn_register_3level *r)
+{
+    struct domain *d = current->domain;
+    unsigned long mfns[r->nr_vcpus];
+    unsigned long offsets[r->nr_vcpus];
+    unsigned char was_up[r->nr_vcpus];
+    int rc, i;
+    struct vcpu *v;
+
+    if ( d->evtchn_level == 3 )
+        return -EINVAL;
+
+    if ( r->nr_vcpus > d->max_vcpus )
+        return -EINVAL;
+
+    for ( i = 0; i < MAX_L3_PAGES; i++ )
+        if ( d->evtchn_pending[i] || d->evtchn_mask[i] )
+            return -EINVAL;
+
+    for_each_vcpu ( d, v )
+        if ( v->evtchn_pending_sel_l2 )
+            return -EINVAL;
+
+    if ( copy_from_user(mfns, r->l2sel_mfn,
+                        sizeof(unsigned long)*r->nr_vcpus) )
+        return -EFAULT;
+
+    if ( copy_from_user(offsets, r->l2sel_offset,
+                        sizeof(unsigned long)*r->nr_vcpus) )
+        return -EFAULT;
+
+    /* Take the vcpus offline. */
+    for_each_vcpu ( d, v )
+    {
+        if ( v == current )
+            was_up[v->vcpu_id] = 1;
+        else
+            was_up[v->vcpu_id] = !test_and_set_bit(_VPF_down,
+                                                   &v->pause_flags);
+    }
+
+    /* map evtchn pending array and evtchn mask array */
+    rc = __map_l3_arrays(d, r->evtchn_pending, r->evtchn_mask);
+    if ( rc )
+        goto out;
+
+    for_each_vcpu ( d, v )
+    {
+        if ( (rc = __map_l2_sel(v, mfns[v->vcpu_id], offsets[v->vcpu_id])) )
+        {
+            struct vcpu *v1;
+            for_each_vcpu ( d, v1 )
+                __unmap_l2_sel(v1);
+            __unmap_l3_arrays(d);
+            goto out;
+        }
+    }
+
+    /* Scan current bitmap and migrate all outstanding events to new bitmap */
+    __evtchn_migrate_bitmap_l3(d);
+
+    /* make sure all writes take effect before switching to new routines */
+    wmb();
+    d->evtchn_level = 3;
+
+ out:
+    /* Bring the vcpus back online. */
+    for_each_vcpu ( d, v )
+        if ( was_up[v->vcpu_id] &&
+             test_and_clear_bit(_VPF_down, &v->pause_flags) )
+            vcpu_wake(v);
+
+    return rc;
+}
+
+static long evtchn_register_nlevel(evtchn_register_nlevel_t *r)
+{
+    struct domain *d = current->domain;
+    int rc;
+
+    spin_lock(&d->event_lock);
+
+    switch ( r->level )
+    {
+    case 3:
+        rc = __evtchn_register_3level(&r->u.l3);
+        break;
+    default:
+        rc = -EINVAL;
+    }
+
+    spin_unlock(&d->event_lock);
+
+    return rc;
+}
 
 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
@@ -1078,6 +1531,14 @@ long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case EVTCHNOP_register_nlevel: {
+        struct evtchn_register_nlevel reg;
+        if ( copy_from_guest(&reg, arg, 1) != 0 )
+            return -EFAULT;
+        rc = evtchn_register_nlevel(&reg);
+        break;
+    }
+
     default:
         rc = -ENOSYS;
         break;
@@ -1251,7 +1712,6 @@ void evtchn_move_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
-
 static void domain_dump_evtchn_info(struct domain *d)
 {
     unsigned int port;
@@ -1260,8 +1720,10 @@ static void domain_dump_evtchn_info(struct domain *d)
     bitmap_scnlistprintf(keyhandler_scratch, sizeof(keyhandler_scratch),
                          d->poll_mask, d->max_vcpus);
     printk("Event channel information for domain %d:\n"
+           "using %d-level event channel\n"
            "Polling vCPUs: {%s}\n"
-           "    port [p/m]\n", d->domain_id, keyhandler_scratch);
+           "    port [p/m]\n", d->domain_id, d->evtchn_level,
+           keyhandler_scratch);
 
     spin_lock(&d->event_lock);
 
@@ -1269,6 +1731,8 @@ static void domain_dump_evtchn_info(struct domain *d)
     {
         const struct evtchn *chn;
         char *ssid;
+        int page_no = port / EVTCHNS_PER_PAGE;
+        int offset = port % EVTCHNS_PER_PAGE;
 
         if ( !port_is_valid(d, port) )
             continue;
@@ -1276,11 +1740,28 @@ static void domain_dump_evtchn_info(struct domain *d)
         if ( chn->state == ECS_FREE )
             continue;
 
-        printk("    %4u [%d/%d]: s=%d n=%d x=%d",
-               port,
-               !!test_bit(port, &shared_info(d, evtchn_pending)),
-               !!test_bit(port, &shared_info(d, evtchn_mask)),
-               chn->state, chn->notify_vcpu_id, chn->xen_consumer);
+        printk("    %4u", port);
+
+        switch ( d->evtchn_level )
+        {
+        case 2:
+            printk(" [%d/%d]:",
+                   !!test_bit(port, &shared_info(d, evtchn_pending)),
+                   !!test_bit(port, &shared_info(d, evtchn_mask)));
+            break;
+        case 3:
+            printk(" [%d/%d]:",
+                   !!test_bit(offset, d->evtchn_pending[page_no]),
+                   !!test_bit(offset, d->evtchn_mask[page_no]));
+            break;
+        default:
+            printk(" %s: unknown event channel level %d for domain %d\n",
+                   __FUNCTION__, d->evtchn_level, d->domain_id);
+            goto out;
+        }
+
+        printk(" s=%d n=%d x=%d", chn->state, chn->notify_vcpu_id,
+               chn->xen_consumer);
 
         switch ( chn->state )
         {
@@ -1310,6 +1791,7 @@ static void domain_dump_evtchn_info(struct domain *d)
         }
     }
 
+ out:
     spin_unlock(&d->event_lock);
 }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 2c5c230..294cca9 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -298,15 +298,15 @@ static void dump_domains(unsigned char key)
     {
         for_each_vcpu ( d, v )
         {
+            unsigned int i, bits = BITS_PER_EVTCHN_WORD(d);
+            for ( i = 2; i < d->evtchn_level; i++ )
+                bits *= BITS_PER_EVTCHN_WORD(d);
             printk("Notifying guest %d:%d (virq %d, port %d, stat %d/%d/%d)\n",
                    d->domain_id, v->vcpu_id,
                    VIRQ_DEBUG, v->virq_to_evtchn[VIRQ_DEBUG],
-                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG], 
-                            &shared_info(d, evtchn_pending)),
-                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG], 
-                            &shared_info(d, evtchn_mask)),
-                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG] /
-                            BITS_PER_EVTCHN_WORD(d),
+                   evtchn_is_pending(d, v->virq_to_evtchn[VIRQ_DEBUG]),
+                   evtchn_is_masked(d, v->virq_to_evtchn[VIRQ_DEBUG]),
+                   test_bit(v->virq_to_evtchn[VIRQ_DEBUG] / bits,
                             &vcpu_info(v, evtchn_pending_sel)));
             send_guest_vcpu_virq(v, VIRQ_DEBUG);
         }
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index ae798c9..b676c9c 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -693,7 +693,7 @@ static long do_poll(struct sched_poll *sched_poll)
             goto out;
 
         rc = 0;
-        if ( test_bit(port, &shared_info(d, evtchn_pending)) )
+        if ( evtchn_is_pending(d, port) )
             goto out;
     }
 
diff --git a/xen/include/public/event_channel.h b/xen/include/public/event_channel.h
index 07ff321..29cd6e9 100644
--- a/xen/include/public/event_channel.h
+++ b/xen/include/public/event_channel.h
@@ -71,6 +71,7 @@
 #define EVTCHNOP_bind_vcpu        8
 #define EVTCHNOP_unmask           9
 #define EVTCHNOP_reset           10
+#define EVTCHNOP_register_nlevel 11
 /* ` } */
 
 typedef uint32_t evtchn_port_t;
@@ -258,6 +259,29 @@ struct evtchn_reset {
 typedef struct evtchn_reset evtchn_reset_t;
 
 /*
+ * EVTCHNOP_register_nlevel: Register N-level event channels.
+ * NOTES:
+ *   1. Currently only 3-level is supported.
+ *   2. Guests should fall back to the basic 2-level interface if this
+ *      call fails.
+ */
+#define MAX_L3_PAGES 8
+struct evtchn_register_3level {
+    unsigned long evtchn_pending[MAX_L3_PAGES];
+    unsigned long evtchn_mask[MAX_L3_PAGES];
+    unsigned long *l2sel_mfn;
+    unsigned long *l2sel_offset;
+    uint32_t nr_vcpus;
+};
+
+struct evtchn_register_nlevel {
+    uint32_t level;
+    union {
+        struct evtchn_register_3level l3;
+    } u;
+};
+typedef struct evtchn_register_nlevel evtchn_register_nlevel_t;
+
+/*
  * ` enum neg_errnoval
  * ` HYPERVISOR_event_channel_op_compat(struct evtchn_op *op)
  * `
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 5593066..1d4ef2d 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -554,9 +554,24 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);
 
 /*
  * Event channel endpoints per domain:
+ * 2-level:
  *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
+ * 3-level:
+ *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
-#define NR_EVENT_CHANNELS (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * 64)
+#define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;   \
+            switch (x) {                                \
+            case 2:                                     \
+                __v = NR_EVENT_CHANNELS_L2; break;      \
+            case 3:                                     \
+                __v = NR_EVENT_CHANNELS_L3; break;      \
+            default:                                    \
+                BUG();                                  \
+            }                                           \
+            __v;})
+
 
 struct vcpu_time_info {
     /*
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..e7cd6be 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -102,4 +102,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
         mb(); /* set blocked status /then/ caller does his work */      \
     } while ( 0 )
 
+extern void event_channel_unmap_nlevel(struct domain *d);
+int evtchn_is_masked(struct domain *d, int port);
+int evtchn_is_pending(struct domain *d, int port);
+
 #endif /* __XEN_EVENT_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5f23213..ae78549 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -64,7 +64,8 @@ extern struct domain *dom0;
             __v;})
 
 #define EVTCHNS_PER_BUCKET 512
-#define NR_EVTCHN_BUCKETS  (NR_EVENT_CHANNELS / EVTCHNS_PER_BUCKET)
+#define NR_EVTCHN_BUCKETS  (NR_EVENT_CHANNELS_L3 / EVTCHNS_PER_BUCKET)
+#define EVTCHNS_PER_PAGE   (PAGE_SIZE * 8)
 
 struct evtchn
 {
@@ -104,7 +105,7 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
 struct waitqueue_vcpu;
 
-struct vcpu 
+struct vcpu
 {
     int              vcpu_id;
 
@@ -112,6 +113,9 @@ struct vcpu
 
     vcpu_info_t     *vcpu_info;
 
+    /* For 3-level event channel */
+    unsigned long   *evtchn_pending_sel_l2;
+
     struct domain   *domain;
 
     struct vcpu     *next_in_list;
@@ -275,6 +279,10 @@ struct domain
     struct evtchn  **evtchn;
     spinlock_t       event_lock;
     unsigned int     evtchn_level;
+#define L3_PAGES (NR_EVENT_CHANNELS_L3/BITS_PER_LONG*sizeof(unsigned long)/PAGE_SIZE)
+    unsigned long   *evtchn_pending[L3_PAGES];
+    unsigned long   *evtchn_mask[L3_PAGES];
+#undef  L3_PAGES
 
     struct grant_table *grant_table;
 
-- 
1.7.10.4
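
For readers following the bitmap arithmetic in `__evtchn_set_pending_l3` and `__evtchn_unmask_l3` above: a 3-level port number indexes three structures at once. The sketch below is illustrative only and not part of the patch; it assumes a 64-bit `unsigned long` and 4 KiB pages, matching the patch's `EVTCHNS_PER_PAGE` and `NR_EVENT_CHANNELS_*` definitions, and the helper names are ours.

```c
#include <assert.h>
#include <stddef.h>

#define BITS_PER_LONG        64
#define PAGE_SIZE            4096
/* One guest page holds PAGE_SIZE * 8 pending/mask bits. */
#define EVTCHNS_PER_PAGE     (PAGE_SIZE * 8)
/* 2-level limit: 4096 ports with a 64-bit long (as in xen.h). */
#define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
/* 3-level limit: 64x more, i.e. 256k ports with a 64-bit long. */
#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * 64)

/* Which bitmap page, and which bit within it, hold this port's state. */
static int page_of(int port)   { return port / EVTCHNS_PER_PAGE; }
static int offset_of(int port) { return port % EVTCHNS_PER_PAGE; }

/* Which bit to set in the per-vcpu L2 selector and the L1 selector. */
static int l2_sel_bit(int port) { return port / BITS_PER_LONG; }
static int l1_sel_bit(int port) { return port / (BITS_PER_LONG * BITS_PER_LONG); }
```

For example, port 100000 lives at bit 1696 of bitmap page 3, and maps to L2 selector bit 1562 and L1 selector bit 24.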


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
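
The "strict order" comments in the patch describe one delivery protocol: mark the port pending first, then walk up the selector levels, and raise the vcpu upcall only if this event was the first to set each selector bit (a set selector bit means an upcall is already outstanding). A hedged user-space model of that path, using plain non-atomic helpers and a flat bitmap instead of the per-page arrays; the names are ours, not Xen's:

```c
#include <assert.h>
#include <limits.h>

#define WORD_BITS (sizeof(unsigned long) * CHAR_BIT)

/* Non-atomic stand-ins for Xen's test_and_set_bit()/test_bit(). */
static int test_and_set(unsigned long *bm, int bit)
{
    unsigned long msk = 1UL << (bit % WORD_BITS);
    unsigned long *w = &bm[bit / WORD_BITS];
    int old = !!(*w & msk);
    *w |= msk;
    return old;
}

static int test_bit_(const unsigned long *bm, int bit)
{
    return !!(bm[bit / WORD_BITS] & (1UL << (bit % WORD_BITS)));
}

/*
 * Mirrors the shape of __evtchn_set_pending_l3(): returns 1 iff the
 * vcpu upcall should fire.  A masked or already-pending port leaves
 * the selectors untouched thanks to short-circuit evaluation.
 */
static int set_pending_l3(unsigned long *pending, const unsigned long *mask,
                          unsigned long *sel_l2, unsigned long *sel_l1, int port)
{
    int l2cb = WORD_BITS, l1cb = WORD_BITS * WORD_BITS;

    if (test_and_set(pending, port))   /* already pending: nothing to do */
        return 0;
    return !test_bit_(mask, port) &&
           !test_and_set(sel_l2, port / l2cb) &&
           !test_and_set(sel_l1, port / l1cb);
}
```

Sending the same event twice triggers at most one upcall, and a masked port records its pending bit without touching either selector, which is exactly what `__evtchn_unmask_l3` later relies on.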

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:39:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkGA-00015z-NS; Mon, 31 Dec 2012 18:38:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkG8-00015k-IA
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:38:48 +0000
Received: from [85.158.143.99:42843] by server-2.bemta-4.messagelabs.com id
	9F/49-30861-7BBD1E05; Mon, 31 Dec 2012 18:38:47 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1356979124!31417659!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29042 invoked from network); 31 Dec 2012 18:38:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:38:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2341387"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:38:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:38:43 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkG3-0001R3-Bd;
	Mon, 31 Dec 2012 18:38:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:37:37 +0000
Message-ID: <1356979057-18442-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979057-18442-2-git-send-email-wei.liu2@citrix.com>
References: <1356979057-18442-2-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [RFC PATCH 3/3] Xen: implement 3-level event channel
	routines.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch/x86/xen/enlighten.c              |    7 +
 drivers/xen/events.c                  |  419 +++++++++++++++++++++++++++++++--
 include/xen/events.h                  |    2 +
 include/xen/interface/event_channel.h |   24 ++
 include/xen/interface/xen.h           |    2 +-
 5 files changed, 437 insertions(+), 17 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bc893e7..f471881 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -43,6 +43,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/events.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -195,6 +196,9 @@ void xen_vcpu_restore(void)
 		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
 			BUG();
 	}
+
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
 }
 
 static void __init xen_banner(void)
@@ -1028,6 +1032,9 @@ void xen_setup_vcpu_info_placement(void)
 	for_each_possible_cpu(cpu)
 		xen_vcpu_setup(cpu);
 
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
+
 	/* xen_vcpu_setup managed to place the vcpu_info within the
 	   percpu area for all cpus, so make use of it */
 	if (have_vcpu_info_placement) {
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index f60ba76..adb94e9 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,9 +52,15 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
+unsigned int evtchn_level_param = -1;
 unsigned int evtchn_level = 2;
 EXPORT_SYMBOL_GPL(evtchn_level);
 
+/* 3-level event channel */
+DEFINE_PER_CPU(unsigned long [sizeof(unsigned long)*8], evtchn_sel_l2);
+unsigned long evtchn_pending[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+unsigned long evtchn_mask[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
 					struct shared_info*, unsigned int);
@@ -142,6 +148,29 @@ static struct irq_chip xen_pirq_chip;
 static void enable_dynirq(struct irq_data *data);
 static void disable_dynirq(struct irq_data *data);
 
+static int __init parse_evtchn_level(char *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	if (strcmp(arg, "3") == 0)
+		evtchn_level_param = 3;
+
+	return 0;
+}
+early_param("evtchn_level", parse_evtchn_level);
+
+static inline int __is_masked_l2(int chn)
+{
+	struct shared_info *sh = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(chn, sh->evtchn_mask);
+}
+
+static inline int __is_masked_l3(int chn)
+{
+	return sync_test_and_set_bit(chn, evtchn_mask);
+}
+
 /* Get info for IRQ */
 static struct irq_info *info_for_irq(unsigned irq)
 {
@@ -311,6 +340,15 @@ static inline unsigned long __active_evtchns_l2(unsigned int cpu,
 		~sh->evtchn_mask[idx];
 }
 
+static inline unsigned long __active_evtchns_l3(unsigned int cpu,
+						struct shared_info *sh,
+						unsigned int idx)
+{
+	return evtchn_pending[idx] &
+		per_cpu(cpu_evtchn_mask, cpu)[idx] &
+		~evtchn_mask[idx];
+}
+
 static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 {
 	int irq = evtchn_to_irq[chn];
@@ -351,18 +389,33 @@ static inline void __clear_evtchn_l2(int port)
 	sync_clear_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __clear_evtchn_l3(int port)
+{
+	sync_clear_bit(port, &evtchn_pending[0]);
+}
+
 static inline void __set_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_set_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __set_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_pending[0]);
+}
+
 static inline int __test_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	return sync_test_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline int __test_evtchn_l3(int port)
+{
+	return sync_test_bit(port, &evtchn_pending[0]);
+}
+
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
  * @irq: irq of event channel to send event to
@@ -386,6 +439,11 @@ static void __mask_evtchn_l2(int port)
 	sync_set_bit(port, &s->evtchn_mask[0]);
 }
 
+static void __mask_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_mask[0]);
+}
+
 static void __unmask_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -416,6 +474,36 @@ static void __unmask_evtchn_l2(int port)
 	put_cpu();
 }
 
+static void __unmask_evtchn_l3(int port)
+{
+	unsigned int cpu = get_cpu();
+	/* Ports covered by one bit of the L1 / L2 selector, respectively. */
+	int l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	int l2cb = BITS_PER_LONG;
+
+	if (unlikely(cpu != cpu_from_evtchn(port))) {
+		struct evtchn_unmask unmask = { .port = port };
+		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
+	} else {
+		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+
+		sync_clear_bit(port, &evtchn_mask[0]);
+
+		/*
+		 * The following is basically the equivalent of
+		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
+		 * the interrupt edge' if the channel is masked.
+		 */
+		if (sync_test_bit(port, &evtchn_pending[0]) &&
+		    !sync_test_and_set_bit(port / l2cb,
+					   &per_cpu(evtchn_sel_l2, cpu)[0]) &&
+		    !sync_test_and_set_bit(port / l1cb,
+					   &vcpu_info->evtchn_pending_sel))
+			vcpu_info->evtchn_upcall_pending = 1;
+	}
+
+	put_cpu();
+}
+
 static void xen_irq_init(unsigned irq)
 {
 	struct irq_info *info;
@@ -1181,6 +1269,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
 	notify_remote_via_irq(irq);
 }
 
+static DEFINE_SPINLOCK(debug_lock);
 static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 {
 	struct shared_info *sh = HYPERVISOR_shared_info;
@@ -1188,7 +1277,6 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
 	int i;
 	unsigned long flags;
-	static DEFINE_SPINLOCK(debug_lock);
 	struct vcpu_info *v;
 
 	spin_lock_irqsave(&debug_lock, flags);
@@ -1196,13 +1284,13 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	printk("\nvcpu %d\n  ", cpu);
 
 	for_each_online_cpu(i) {
-		int pending;
+		int masked;
 		v = per_cpu(xen_vcpu, i);
-		pending = (get_irq_regs() && i == cpu)
+		masked = (get_irq_regs() && i == cpu)
 			? xen_irqs_disabled(get_irq_regs())
 			: v->evtchn_upcall_mask;
 		printk("%d: masked=%d pending=%d event_sel %0*lx\n  ", i,
-		       pending, v->evtchn_upcall_pending,
+		       masked, v->evtchn_upcall_pending,
 		       (int)(sizeof(v->evtchn_pending_sel)*2),
 		       v->evtchn_pending_sel);
 	}
@@ -1227,7 +1315,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(2)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1330,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(2); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1262,15 +1350,110 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static irqreturn_t __xen_debug_interrupt_l3(int irq, void *dev_id)
+{
+	int cpu = smp_processor_id();
+	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
+	int i, j;
+	unsigned long flags;
+	struct vcpu_info *v;
+
+	spin_lock_irqsave(&debug_lock, flags);
+
+	printk("\nvcpu %d\n  ", cpu);
+
+	for_each_online_cpu(i) {
+		int masked;
+
+		v = per_cpu(xen_vcpu, i);
+		masked = (get_irq_regs() && i == cpu)
+			? xen_irqs_disabled(get_irq_regs())
+			: v->evtchn_upcall_mask;
+		printk("%d: masked=%d pending=%d event_sel_l1 %0*lx\n  ", i,
+		       masked, v->evtchn_upcall_pending,
+		       (int)(sizeof(v->evtchn_pending_sel)*2),
+		       v->evtchn_pending_sel);
+
+		printk("\nevtchn_sel_l2:\n   ");
From xen-devel-bounces@lists.xen.org Mon Dec 31 18:39:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkGA-00015z-NS; Mon, 31 Dec 2012 18:38:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkG8-00015k-IA
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:38:48 +0000
Received: from [85.158.143.99:42843] by server-2.bemta-4.messagelabs.com id
	9F/49-30861-7BBD1E05; Mon, 31 Dec 2012 18:38:47 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1356979124!31417659!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29042 invoked from network); 31 Dec 2012 18:38:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:38:46 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2341387"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:38:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:38:43 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkG3-0001R3-Bd;
	Mon, 31 Dec 2012 18:38:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:37:37 +0000
Message-ID: <1356979057-18442-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979057-18442-2-git-send-email-wei.liu2@citrix.com>
References: <1356979057-18442-2-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [RFC PATCH 3/3] Xen: implement 3-level event channel
	routines.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch/x86/xen/enlighten.c              |    7 +
 drivers/xen/events.c                  |  419 +++++++++++++++++++++++++++++++--
 include/xen/events.h                  |    2 +
 include/xen/interface/event_channel.h |   24 ++
 include/xen/interface/xen.h           |    2 +-
 5 files changed, 437 insertions(+), 17 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bc893e7..f471881 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -43,6 +43,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/events.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -195,6 +196,9 @@ void xen_vcpu_restore(void)
 		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
 			BUG();
 	}
+
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
 }
 
 static void __init xen_banner(void)
@@ -1028,6 +1032,9 @@ void xen_setup_vcpu_info_placement(void)
 	for_each_possible_cpu(cpu)
 		xen_vcpu_setup(cpu);
 
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
+
 	/* xen_vcpu_setup managed to place the vcpu_info within the
 	   percpu area for all cpus, so make use of it */
 	if (have_vcpu_info_placement) {
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index f60ba76..adb94e9 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,9 +52,15 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
+unsigned int evtchn_level_param = -1;
 unsigned int evtchn_level = 2;
 EXPORT_SYMBOL_GPL(evtchn_level);
 
+/* 3-level event channel */
+DEFINE_PER_CPU(unsigned long [sizeof(unsigned long)*8], evtchn_sel_l2);
+unsigned long evtchn_pending[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+unsigned long evtchn_mask[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
 					struct shared_info*, unsigned int);
@@ -142,6 +148,29 @@ static struct irq_chip xen_pirq_chip;
 static void enable_dynirq(struct irq_data *data);
 static void disable_dynirq(struct irq_data *data);
 
+static int __init parse_evtchn_level(char *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	if (strcmp(arg, "3") == 0)
+		evtchn_level_param = 3;
+
+	return 0;
+}
+early_param("evtchn_level", parse_evtchn_level);
+
+static inline int __is_masked_l2(int chn)
+{
+	struct shared_info *sh = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(chn, sh->evtchn_mask);
+}
+
+static inline int __is_masked_l3(int chn)
+{
+	return sync_test_and_set_bit(chn, evtchn_mask);
+}
+
 /* Get info for IRQ */
 static struct irq_info *info_for_irq(unsigned irq)
 {
@@ -311,6 +340,15 @@ static inline unsigned long __active_evtchns_l2(unsigned int cpu,
 		~sh->evtchn_mask[idx];
 }
 
+static inline unsigned long __active_evtchns_l3(unsigned int cpu,
+						struct shared_info *sh,
+						unsigned int idx)
+{
+	return evtchn_pending[idx] &
+		per_cpu(cpu_evtchn_mask, cpu)[idx] &
+		~evtchn_mask[idx];
+}
+
 static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 {
 	int irq = evtchn_to_irq[chn];
@@ -351,18 +389,33 @@ static inline void __clear_evtchn_l2(int port)
 	sync_clear_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __clear_evtchn_l3(int port)
+{
+	sync_clear_bit(port, &evtchn_pending[0]);
+}
+
 static inline void __set_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_set_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __set_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_pending[0]);
+}
+
 static inline int __test_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	return sync_test_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline int __test_evtchn_l3(int port)
+{
+	return sync_test_bit(port, &evtchn_pending[0]);
+}
+
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
  * @irq: irq of event channel to send event to
@@ -386,6 +439,11 @@ static void __mask_evtchn_l2(int port)
 	sync_set_bit(port, &s->evtchn_mask[0]);
 }
 
+static void __mask_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_mask[0]);
+}
+
 static void __unmask_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -416,6 +474,36 @@ static void __unmask_evtchn_l2(int port)
 	put_cpu();
 }
 
+static void __unmask_evtchn_l3(int port)
+{
+	unsigned int cpu = get_cpu();
+	int l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	int l2cb = BITS_PER_LONG;
+
+	if (unlikely(cpu != cpu_from_evtchn(port))) {
+		struct evtchn_unmask unmask = { .port = port };
+		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
+	} else {
+		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+
+		sync_clear_bit(port, &evtchn_mask[0]);
+
+		/*
+		 * The following is basically the equivalent of
+		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
+		 * the interrupt edge' if the channel is masked.
+		 */
+		if (sync_test_bit(port, &evtchn_pending[0]) &&
+		    !sync_test_and_set_bit(port / l2cb,
+					   &per_cpu(evtchn_sel_l2, cpu)[0]) &&
+		    !sync_test_and_set_bit(port / l1cb,
+					   &vcpu_info->evtchn_pending_sel))
+			vcpu_info->evtchn_upcall_pending = 1;
+	}
+
+	put_cpu();
+}
+
 static void xen_irq_init(unsigned irq)
 {
 	struct irq_info *info;
@@ -1181,6 +1269,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
 	notify_remote_via_irq(irq);
 }
 
+static DEFINE_SPINLOCK(debug_lock);
 static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 {
 	struct shared_info *sh = HYPERVISOR_shared_info;
@@ -1188,7 +1277,6 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
 	int i;
 	unsigned long flags;
-	static DEFINE_SPINLOCK(debug_lock);
 	struct vcpu_info *v;
 
 	spin_lock_irqsave(&debug_lock, flags);
@@ -1196,13 +1284,13 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	printk("\nvcpu %d\n  ", cpu);
 
 	for_each_online_cpu(i) {
-		int pending;
+		int masked;
 		v = per_cpu(xen_vcpu, i);
-		pending = (get_irq_regs() && i == cpu)
+		masked = (get_irq_regs() && i == cpu)
 			? xen_irqs_disabled(get_irq_regs())
 			: v->evtchn_upcall_mask;
 		printk("%d: masked=%d pending=%d event_sel %0*lx\n  ", i,
-		       pending, v->evtchn_upcall_pending,
+		       masked, v->evtchn_upcall_pending,
 		       (int)(sizeof(v->evtchn_pending_sel)*2),
 		       v->evtchn_pending_sel);
 	}
@@ -1227,7 +1315,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(2)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1330,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(2); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1262,15 +1350,110 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static irqreturn_t __xen_debug_interrupt_l3(int irq, void *dev_id)
+{
+	int cpu = smp_processor_id();
+	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
+	int i, j;
+	unsigned long flags;
+	struct vcpu_info *v;
+
+	spin_lock_irqsave(&debug_lock, flags);
+
+	printk("\nvcpu %d\n  ", cpu);
+
+	for_each_online_cpu(i) {
+		int masked;
+
+		v = per_cpu(xen_vcpu, i);
+		masked = (get_irq_regs() && i == cpu)
+			? xen_irqs_disabled(get_irq_regs())
+			: v->evtchn_upcall_mask;
+		printk("%d: masked=%d pending=%d event_sel_l1 %0*lx\n  ", i,
+		       masked, v->evtchn_upcall_pending,
+		       (int)(sizeof(v->evtchn_pending_sel)*2),
+		       v->evtchn_pending_sel);
+
+		printk("\nevtchn_sel_l2:\n   ");
+		for (j = (sizeof(unsigned long)*8)-1; j >= 0; j--)
+			printk("%0*lx%s",
+			       (int)(sizeof(evtchn_sel_l2[0])*2),
+			       per_cpu(evtchn_sel_l2, i)[j],
+			       j % 8 == 0 ? "\n   " : " ");
+	}
+
+	v = per_cpu(xen_vcpu, cpu);
+
+	printk("\npending:\n   ");
+	for (i = ARRAY_SIZE(evtchn_pending)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_pending[0])*2),
+		       evtchn_pending[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nglobal mask:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       evtchn_mask[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nglobally unmasked:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       evtchn_pending[i] & ~evtchn_mask[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nlocal cpu%d mask:\n   ", cpu);
+	for (i = (NR_EVENT_CHANNELS(3)/BITS_PER_LONG)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
+		       cpu_evtchn[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nlocally unmasked:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--) {
+		unsigned long pending = evtchn_pending[i]
+			& ~evtchn_mask[i]
+			& cpu_evtchn[i];
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       pending, i % 8 == 0 ? "\n   " : " ");
+	}
+
+	printk("\npending list:\n");
+	for (i = 0; i < NR_EVENT_CHANNELS(3); i++) {
+		if (sync_test_bit(i, evtchn_pending)) {
+			int word_idx_l1 = i / (BITS_PER_LONG * BITS_PER_LONG);
+			int word_idx_l2 = i / BITS_PER_LONG;
+			printk("  %d: event %d -> irq %d%s%s%s%s\n",
+			       cpu_from_evtchn(i), i,
+			       evtchn_to_irq[i],
+			       sync_test_bit(word_idx_l1, &v->evtchn_pending_sel)
+					     ? "" : " l1-clear",
+			       sync_test_bit(word_idx_l2, per_cpu(evtchn_sel_l2, cpu))
+					     ? "" : " l2-clear",
+			       !sync_test_bit(i, evtchn_mask)
+					     ? "" : " globally-masked",
+			       sync_test_bit(i, cpu_evtchn)
+					     ? "" : " locally-masked");
+		}
+	}
+
+	spin_unlock_irqrestore(&debug_lock, flags);
+
+	return IRQ_HANDLED;
+}
+
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 {
 	return eops->xen_debug_interrupt(irq, dev_id);
 }
 
 static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+
+/* The 2-level event channel ABI does not use current_word_idx_l2. */
 static DEFINE_PER_CPU(unsigned int, current_word_idx);
+static DEFINE_PER_CPU(unsigned int, current_word_idx_l2);
 static DEFINE_PER_CPU(unsigned int, current_bit_idx);
 
+
 /*
  * Mask out the i least significant bits of w
  */
@@ -1303,7 +1486,8 @@ static void __xen_evtchn_do_upcall_l2(void)
 		if (__this_cpu_inc_return(xed_nesting_count) - 1)
 			goto out;
 
-#ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
+#ifndef CONFIG_X86
+		/* No need for a barrier -- XCHG is a barrier on x86. */
 		/* Clear master flag /before/ clearing selector flag. */
 		wmb();
 #endif
@@ -1392,6 +1576,155 @@ out:
 	put_cpu();
 }
 
+static void __xen_evtchn_do_upcall_l3(void)
+{
+	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+	unsigned count;
+	int start_word_idx_l1, start_word_idx_l2, start_bit_idx;
+	int word_idx_l1, word_idx_l2, bit_idx;
+	int i, j;
+	unsigned long l1cb, l2cb;
+	int cpu = get_cpu();
+
+	l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	l2cb = BITS_PER_LONG;
+
+	do {
+		unsigned long pending_words_l1;
+
+		vcpu_info->evtchn_upcall_pending = 0;
+
+		if (__this_cpu_inc_return(xed_nesting_count) - 1)
+			goto out;
+#ifndef CONFIG_X86
+		/* No need for a barrier -- XCHG is a barrier on x86. */
+		/* Clear master flag /before/ clearing selector flag. */
+		wmb();
+#endif
+		/* Fetch and clear the l1 pending selector. */
+		pending_words_l1 = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+		start_word_idx_l1 = __this_cpu_read(current_word_idx);
+		start_word_idx_l2 = __this_cpu_read(current_word_idx_l2);
+		start_bit_idx = __this_cpu_read(current_bit_idx);
+
+		word_idx_l1 = start_word_idx_l1;
+
+		/* Loop over pending l1 words, picking up their l2 selector words. */
+		for (i = 0; pending_words_l1 != 0; i++) {
+			unsigned long words_l1;
+			unsigned long pending_words_l2;
+			unsigned long pwl2idx;
+
+			words_l1 = MASK_LSBS(pending_words_l1, word_idx_l1);
+
+			if (words_l1 == 0) {
+				word_idx_l1 = 0;
+				start_word_idx_l2 = 0;
+				continue;
+			}
+
+			word_idx_l1 = __ffs(words_l1);
+
+			/* l1 bit n covers word n of the per-cpu l2 selector */
+			pwl2idx = word_idx_l1;
+
+			pending_words_l2 =
+				xchg(&per_cpu(evtchn_sel_l2, cpu)[pwl2idx],
+				     0);
+
+			word_idx_l2 = 0;
+			/*
+			 * Scan the starting l1 word in two parts: begin in
+			 * the middle on the first visit, then rescan the
+			 * whole word; bits handled in the first pass are
+			 * simply found clear the second time.
+			 */
+			if (word_idx_l1 == start_word_idx_l1 && i == 0)
+				word_idx_l2 = start_word_idx_l2;
+
+			for (j = 0; pending_words_l2 != 0; j++) {
+				unsigned long pending_bits;
+				unsigned long words_l2;
+				unsigned long idx;
+
+				words_l2 = MASK_LSBS(pending_words_l2,
+						     word_idx_l2);
+
+				if (words_l2 == 0) {
+					word_idx_l2 = 0;
+					bit_idx = 0;
+					continue;
+				}
+
+				word_idx_l2 = __ffs(words_l2);
+
+				idx = word_idx_l1*BITS_PER_LONG+word_idx_l2;
+				pending_bits =
+					eops->active_evtchns(cpu, NULL, idx);
+
+				bit_idx = 0;
+				/* Start mid-word only on the first visit;
+				 * on a rescan, already-handled bits are
+				 * simply found clear. */
+				if (word_idx_l2 == start_word_idx_l2 && j == 0)
+					bit_idx = start_bit_idx;
+
+				/* process port */
+				do {
+					unsigned long bits;
+					int port, irq;
+					struct irq_desc *desc;
+
+					bits = MASK_LSBS(pending_bits, bit_idx);
+
+					if (bits == 0)
+						break;
+
+					bit_idx = __ffs(bits);
+
+					port = word_idx_l1 * l1cb +
+						word_idx_l2 * l2cb +
+						bit_idx;
+
+					irq = evtchn_to_irq[port];
+
+					if (irq != -1) {
+						desc = irq_to_desc(irq);
+						if (desc)
+							generic_handle_irq_desc(irq, desc);
+					}
+
+					bit_idx = (bit_idx + 1) % BITS_PER_LONG;
+
+					__this_cpu_write(current_bit_idx, bit_idx);
+					__this_cpu_write(current_word_idx_l2,
+							 bit_idx ? word_idx_l2 :
+							 (word_idx_l2+1) % BITS_PER_LONG);
+					__this_cpu_write(current_word_idx,
+							 word_idx_l2 ? word_idx_l1 :
+							 (word_idx_l1+1) % BITS_PER_LONG);
+				} while (bit_idx != 0);
+
+				if ((word_idx_l2 != start_word_idx_l2) || (j != 0))
+					pending_words_l2 &= ~(1UL << word_idx_l2);
+
+				word_idx_l2 = (word_idx_l2 + 1) % BITS_PER_LONG;
+			}
+
+			if ((word_idx_l1 != start_word_idx_l1) || (i != 0))
+				pending_words_l1 &= ~(1UL << word_idx_l1);
+
+			word_idx_l1 = (word_idx_l1 + 1) % BITS_PER_LONG;
+		}
+
+		BUG_ON(!irqs_disabled());
+		count = __this_cpu_read(xed_nesting_count);
+		__this_cpu_write(xed_nesting_count, 0);
+	} while (count != 1 || vcpu_info->evtchn_upcall_pending);
+
+out:
+	put_cpu();
+}
+
 void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
@@ -1525,12 +1858,6 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
-static inline int __is_masked_l2(int chn)
-{
-	struct shared_info *sh = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(chn, sh->evtchn_mask);
-}
-
 static int retrigger_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -1821,14 +2148,74 @@ static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
 	.xen_debug_interrupt = __xen_debug_interrupt_l2,
 };
 
+static struct evtchn_ops evtchn_ops_l3 __read_mostly = {
+	.active_evtchns = __active_evtchns_l3,
+	.clear_evtchn = __clear_evtchn_l3,
+	.set_evtchn = __set_evtchn_l3,
+	.test_evtchn = __test_evtchn_l3,
+	.mask_evtchn = __mask_evtchn_l3,
+	.unmask_evtchn = __unmask_evtchn_l3,
+	.is_masked = __is_masked_l3,
+	.xen_evtchn_do_upcall = __xen_evtchn_do_upcall_l3,
+	.xen_debug_interrupt = __xen_debug_interrupt_l3,
+};
+
+int xen_event_channel_setup_3level(void)
+{
+	evtchn_register_nlevel_t reg;
+	int i, nr_pages, cpu;
+	unsigned long mfns[nr_cpu_ids];
+	unsigned long offsets[nr_cpu_ids];
+	int rc = -EINVAL;
+
+	memset(&reg, 0, sizeof(reg));
+
+	reg.level = 3;
+	nr_pages = (sizeof(unsigned long) == 4 ? 1 : 8);
+
+	for (i = 0; i < nr_pages; i++) {
+		unsigned long offset = PAGE_SIZE * i;
+		reg.u.l3.evtchn_pending[i] =
+			arbitrary_virt_to_mfn(
+				(void *)((unsigned long)evtchn_pending+offset));
+		reg.u.l3.evtchn_mask[i] =
+			arbitrary_virt_to_mfn(
+				(void *)((unsigned long)evtchn_mask+offset));
+	}
+
+	reg.u.l3.l2sel_mfn = mfns;
+	reg.u.l3.l2sel_offset = offsets;
+	reg.u.l3.nr_vcpus = nr_cpu_ids;
+
+	for_each_possible_cpu(cpu) {
+		reg.u.l3.l2sel_mfn[cpu] =
+			arbitrary_virt_to_mfn(&per_cpu(evtchn_sel_l2, cpu));
+		reg.u.l3.l2sel_offset[cpu] =
+			offset_in_page(&per_cpu(evtchn_sel_l2, cpu));
+	}
+
+	rc = HYPERVISOR_event_channel_op(EVTCHNOP_register_nlevel, &reg);
+
+	if (rc == 0)
+		evtchn_level = 3;
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(xen_event_channel_setup_3level);
+
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
 	int cpu;
 
-	/* Setup 2-level event channel */
-	eops = &evtchn_ops_l2;
-	evtchn_level = 2;
+	switch (evtchn_level) {
+	case 2:
+		eops = &evtchn_ops_l2;
+		break;
+	case 3:
+		eops = &evtchn_ops_l3;
+		break;
+	default:
+		BUG();
+	}
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
 				sizeof(*evtchn_to_irq),
diff --git a/include/xen/events.h b/include/xen/events.h
index bc10f22..87696fc 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -111,5 +111,7 @@ int xen_test_irq_shared(int irq);
 
 /* N-level event channels */
 extern unsigned int evtchn_level;
+extern unsigned int evtchn_level_param;
+int xen_event_channel_setup_3level(void);
 
 #endif	/* _XEN_EVENTS_H */
diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
index f494292..f764d21 100644
--- a/include/xen/interface/event_channel.h
+++ b/include/xen/interface/event_channel.h
@@ -190,6 +190,30 @@ struct evtchn_reset {
 };
 typedef struct evtchn_reset evtchn_reset_t;
 
+/*
+ * EVTCHNOP_register_nlevel: Register N-level event channels.
+ * NOTES:
+ *   1. currently only 3-level is supported.
+ *   2. should fall back to basic 2-level if this call fails.
+ */
+#define EVTCHNOP_register_nlevel 11
+#define MAX_L3_PAGES 8		/* 8 bitmap pages on 64-bit; 1 is used on 32-bit */
+struct evtchn_register_3level {
+	unsigned long evtchn_pending[MAX_L3_PAGES];
+	unsigned long evtchn_mask[MAX_L3_PAGES];
+	unsigned long *l2sel_mfn;
+	unsigned long *l2sel_offset;
+	unsigned int nr_vcpus;
+};
+
+struct evtchn_register_nlevel {
+	uint32_t level;
+	union {
+		struct evtchn_register_3level l3;
+	} u;
+};
+typedef struct evtchn_register_nlevel evtchn_register_nlevel_t;
+
 struct evtchn_op {
 	uint32_t cmd; /* EVTCHNOP_* */
 	union {
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index c66e1ff..7cb9d8f 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -289,7 +289,7 @@ DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
  *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
 #define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
-#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long))
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long) * 8)
 #define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;	\
 	switch (x) {					\
 	case 2:						\
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:39:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkGA-00015s-8q; Mon, 31 Dec 2012 18:38:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkG7-00015i-ML
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:38:47 +0000
Received: from [85.158.143.99:42826] by server-3.bemta-4.messagelabs.com id
	15/9C-18211-6BBD1E05; Mon, 31 Dec 2012 18:38:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1356979124!31417659!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28986 invoked from network); 31 Dec 2012 18:38:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:38:45 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2341386"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:38:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:38:43 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkG3-0001R3-B9;
	Mon, 31 Dec 2012 18:38:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:37:36 +0000
Message-ID: <1356979057-18442-2-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [RFC PATCH 2/3] Xen: rework NR_EVENT_CHANNELS related
	stuff.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/xen/events.c        |   44 +++++++++++++++++++++++++++++--------------
 drivers/xen/evtchn.c        |   16 +++++++++-------
 include/xen/events.h        |    3 +++
 include/xen/interface/xen.h |   17 ++++++++++++++++-
 4 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 835101f..f60ba76 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,7 +52,8 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
-static unsigned int evtchn_level = 2;
+unsigned int evtchn_level = 2;
+EXPORT_SYMBOL_GPL(evtchn_level);
 
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
@@ -130,8 +131,7 @@ static int *evtchn_to_irq;
 static unsigned long *pirq_eoi_map;
 static bool (*pirq_needs_eoi)(unsigned irq);
 
-static DEFINE_PER_CPU(unsigned long [NR_EVENT_CHANNELS/BITS_PER_LONG],
-		      cpu_evtchn_mask);
+static DEFINE_PER_CPU(unsigned long *, cpu_evtchn_mask);
 
 /* Xen will never allocate port zero for any purpose. */
 #define VALID_EVTCHN(chn)	((chn) != 0)
@@ -913,7 +913,7 @@ static int find_virq(unsigned int virq, unsigned int cpu)
 	int port, rc = -ENOENT;
 
 	memset(&status, 0, sizeof(status));
-	for (port = 0; port <= NR_EVENT_CHANNELS; port++) {
+	for (port = 0; port <= NR_EVENT_CHANNELS(evtchn_level); port++) {
 		status.dom = DOMID_SELF;
 		status.port = port;
 		rc = HYPERVISOR_event_channel_op(EVTCHNOP_status, &status);
@@ -1138,7 +1138,7 @@ int evtchn_get(unsigned int evtchn)
 	struct irq_info *info;
 	int err = -ENOENT;
 
-	if (evtchn >= NR_EVENT_CHANNELS)
+	if (evtchn >= NR_EVENT_CHANNELS(evtchn_level))
 		return -EINVAL;
 
 	mutex_lock(&irq_mapping_update_lock);
@@ -1227,7 +1227,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1242,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1709,14 +1709,14 @@ void xen_irq_resume(void)
 	init_evtchn_cpu_bindings();
 
 	/* New event-channel space is not 'live' yet. */
-	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
+	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS(evtchn_level); evtchn++)
 		eops->mask_evtchn(evtchn);
 
 	/* No IRQ <-> event-channel mappings. */
 	list_for_each_entry(info, &xen_irq_list_head, list)
 		info->evtchn = 0; /* zap event-channel binding */
 
-	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
+	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS(evtchn_level); evtchn++)
 		evtchn_to_irq[evtchn] = -1;
 
 	for_each_possible_cpu(cpu) {
@@ -1824,21 +1824,37 @@ static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
+	int cpu;
 
-	evtchn_level = 2;
+	/* Setup 2-level event channel */
 	eops = &evtchn_ops_l2;
+	evtchn_level = 2;
 
-	/* Setup 2-level event channel */
-	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
+	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
+				sizeof(*evtchn_to_irq),
 				GFP_KERNEL);
 	BUG_ON(!evtchn_to_irq);
-	for (i = 0; i < NR_EVENT_CHANNELS; i++)
+
+	for_each_possible_cpu(cpu) {
+		void *p;
+		unsigned int nr = NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG;
+		p = kzalloc_node(sizeof(unsigned long) * nr,
+				 GFP_KERNEL,
+				 cpu_to_node(cpu));
+		if (!p)
+			p = kzalloc(sizeof(unsigned long) * nr,
+				    GFP_KERNEL);
+		BUG_ON(!p);
+		per_cpu(cpu_evtchn_mask, cpu) = p;
+	}
+
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++)
 		evtchn_to_irq[i] = -1;
 
 	init_evtchn_cpu_bindings();
 
 	/* No event channels are 'live' right now. */
-	for (i = 0; i < NR_EVENT_CHANNELS; i++)
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++)
 		eops->mask_evtchn(i);
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index b1f60a0..cb45ecf 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -232,7 +232,7 @@ static ssize_t evtchn_write(struct file *file, const char __user *buf,
 	for (i = 0; i < (count/sizeof(evtchn_port_t)); i++) {
 		unsigned port = kbuf[i];
 
-		if (port < NR_EVENT_CHANNELS &&
+		if (port < NR_EVENT_CHANNELS(evtchn_level) &&
 		    get_port_user(port) == u &&
 		    !get_port_enabled(port)) {
 			set_port_enabled(port, true);
@@ -364,7 +364,7 @@ static long evtchn_ioctl(struct file *file,
 			break;
 
 		rc = -EINVAL;
-		if (unbind.port >= NR_EVENT_CHANNELS)
+		if (unbind.port >= NR_EVENT_CHANNELS(evtchn_level))
 			break;
 
 		spin_lock_irq(&port_user_lock);
@@ -392,7 +392,7 @@ static long evtchn_ioctl(struct file *file,
 		if (copy_from_user(&notify, uarg, sizeof(notify)))
 			break;
 
-		if (notify.port >= NR_EVENT_CHANNELS) {
+		if (notify.port >= NR_EVENT_CHANNELS(evtchn_level)) {
 			rc = -EINVAL;
 		} else if (get_port_user(notify.port) != u) {
 			rc = -ENOTCONN;
@@ -482,7 +482,7 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 
 	free_page((unsigned long)u->ring);
 
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (get_port_user(i) != u)
 			continue;
 
@@ -491,7 +491,7 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 
 	spin_unlock_irq(&port_user_lock);
 
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (get_port_user(i) != u)
 			continue;
 
@@ -528,7 +528,8 @@ static int __init evtchn_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	port_user = kcalloc(NR_EVENT_CHANNELS, sizeof(*port_user), GFP_KERNEL);
+	port_user = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
+			    sizeof(*port_user), GFP_KERNEL);
 	if (port_user == NULL)
 		return -ENOMEM;
 
@@ -541,7 +542,8 @@ static int __init evtchn_init(void)
 		return err;
 	}
 
-	printk(KERN_INFO "Event-channel device installed.\n");
+	printk(KERN_INFO "Event-channel device installed."
+	       " Event-channel level: %d\n", evtchn_level);
 
 	return 0;
 }
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..bc10f22 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,7 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* N-level event channels */
+extern unsigned int evtchn_level;
+
 #endif	/* _XEN_EVENTS_H */
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..c66e1ff 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -283,9 +283,24 @@ DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
 
 /*
  * Event channel endpoints per domain:
+ * 2-level:
  *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
+ * 3-level:
+ *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
-#define NR_EVENT_CHANNELS (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long))
+#define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;	\
+	switch (x) {					\
+	case 2:						\
+		__v = NR_EVENT_CHANNELS_L2; break;	\
+	case 3:						\
+		__v = NR_EVENT_CHANNELS_L3; break;	\
+	default:					\
+		BUG();					\
+	}						\
+	__v; })
+
 
 struct vcpu_time_info {
 	/*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:40:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkHg-0001Fz-7K; Mon, 31 Dec 2012 18:40:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkHe-0001FP-E8
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:40:22 +0000
Received: from [85.158.138.51:2111] by server-11.bemta-3.messagelabs.com id
	29/37-13335-51CD1E05; Mon, 31 Dec 2012 18:40:21 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1356979216!24491100!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5679 invoked from network); 31 Dec 2012 18:40:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:40:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2181484"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:40:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:40:15 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkHX-0001SJ-8q;
	Mon, 31 Dec 2012 18:40:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:38:56 +0000
Message-ID: <1356979137-18484-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
References: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, konrad.wilk@oracle.com
Subject: [Xen-devel] [RFC PATCH 2/3] Xen: rework NR_EVENT_CHANNELS related
	stuff.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/xen/events.c        |   44 +++++++++++++++++++++++++++++--------------
 drivers/xen/evtchn.c        |   16 +++++++++-------
 include/xen/events.h        |    3 +++
 include/xen/interface/xen.h |   17 ++++++++++++++++-
 4 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 835101f..f60ba76 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,7 +52,8 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
-static unsigned int evtchn_level = 2;
+unsigned int evtchn_level = 2;
+EXPORT_SYMBOL_GPL(evtchn_level);
 
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
@@ -130,8 +131,7 @@ static int *evtchn_to_irq;
 static unsigned long *pirq_eoi_map;
 static bool (*pirq_needs_eoi)(unsigned irq);
 
-static DEFINE_PER_CPU(unsigned long [NR_EVENT_CHANNELS/BITS_PER_LONG],
-		      cpu_evtchn_mask);
+static DEFINE_PER_CPU(unsigned long *, cpu_evtchn_mask);
 
 /* Xen will never allocate port zero for any purpose. */
 #define VALID_EVTCHN(chn)	((chn) != 0)
@@ -913,7 +913,7 @@ static int find_virq(unsigned int virq, unsigned int cpu)
 	int port, rc = -ENOENT;
 
 	memset(&status, 0, sizeof(status));
-	for (port = 0; port <= NR_EVENT_CHANNELS; port++) {
+	for (port = 0; port <= NR_EVENT_CHANNELS(evtchn_level); port++) {
 		status.dom = DOMID_SELF;
 		status.port = port;
 		rc = HYPERVISOR_event_channel_op(EVTCHNOP_status, &status);
@@ -1138,7 +1138,7 @@ int evtchn_get(unsigned int evtchn)
 	struct irq_info *info;
 	int err = -ENOENT;
 
-	if (evtchn >= NR_EVENT_CHANNELS)
+	if (evtchn >= NR_EVENT_CHANNELS(evtchn_level))
 		return -EINVAL;
 
 	mutex_lock(&irq_mapping_update_lock);
@@ -1227,7 +1227,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1242,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1709,14 +1709,14 @@ void xen_irq_resume(void)
 	init_evtchn_cpu_bindings();
 
 	/* New event-channel space is not 'live' yet. */
-	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
+	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS(evtchn_level); evtchn++)
 		eops->mask_evtchn(evtchn);
 
 	/* No IRQ <-> event-channel mappings. */
 	list_for_each_entry(info, &xen_irq_list_head, list)
 		info->evtchn = 0; /* zap event-channel binding */
 
-	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
+	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS(evtchn_level); evtchn++)
 		evtchn_to_irq[evtchn] = -1;
 
 	for_each_possible_cpu(cpu) {
@@ -1824,21 +1824,37 @@ static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
+	int cpu;
 
-	evtchn_level = 2;
+	/* Setup 2-level event channel */
 	eops = &evtchn_ops_l2;
+	evtchn_level = 2;
 
-	/* Setup 2-level event channel */
-	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
+	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
+				sizeof(*evtchn_to_irq),
 				GFP_KERNEL);
 	BUG_ON(!evtchn_to_irq);
-	for (i = 0; i < NR_EVENT_CHANNELS; i++)
+
+	for_each_possible_cpu(cpu) {
+		void *p;
+		unsigned int nr = NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG;
+		p = kzalloc_node(sizeof(unsigned long) * nr,
+				 GFP_KERNEL,
+				 cpu_to_node(cpu));
+		if (!p)
+			p = kzalloc(sizeof(unsigned long) * nr,
+				    GFP_KERNEL);
+		BUG_ON(!p);
+		per_cpu(cpu_evtchn_mask, cpu) = p;
+	}
+
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++)
 		evtchn_to_irq[i] = -1;
 
 	init_evtchn_cpu_bindings();
 
 	/* No event channels are 'live' right now. */
-	for (i = 0; i < NR_EVENT_CHANNELS; i++)
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++)
 		eops->mask_evtchn(i);
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index b1f60a0..cb45ecf 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -232,7 +232,7 @@ static ssize_t evtchn_write(struct file *file, const char __user *buf,
 	for (i = 0; i < (count/sizeof(evtchn_port_t)); i++) {
 		unsigned port = kbuf[i];
 
-		if (port < NR_EVENT_CHANNELS &&
+		if (port < NR_EVENT_CHANNELS(evtchn_level) &&
 		    get_port_user(port) == u &&
 		    !get_port_enabled(port)) {
 			set_port_enabled(port, true);
@@ -364,7 +364,7 @@ static long evtchn_ioctl(struct file *file,
 			break;
 
 		rc = -EINVAL;
-		if (unbind.port >= NR_EVENT_CHANNELS)
+		if (unbind.port >= NR_EVENT_CHANNELS(evtchn_level))
 			break;
 
 		spin_lock_irq(&port_user_lock);
@@ -392,7 +392,7 @@ static long evtchn_ioctl(struct file *file,
 		if (copy_from_user(&notify, uarg, sizeof(notify)))
 			break;
 
-		if (notify.port >= NR_EVENT_CHANNELS) {
+		if (notify.port >= NR_EVENT_CHANNELS(evtchn_level)) {
 			rc = -EINVAL;
 		} else if (get_port_user(notify.port) != u) {
 			rc = -ENOTCONN;
@@ -482,7 +482,7 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 
 	free_page((unsigned long)u->ring);
 
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (get_port_user(i) != u)
 			continue;
 
@@ -491,7 +491,7 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 
 	spin_unlock_irq(&port_user_lock);
 
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (get_port_user(i) != u)
 			continue;
 
@@ -528,7 +528,8 @@ static int __init evtchn_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	port_user = kcalloc(NR_EVENT_CHANNELS, sizeof(*port_user), GFP_KERNEL);
+	port_user = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
+			    sizeof(*port_user), GFP_KERNEL);
 	if (port_user == NULL)
 		return -ENOMEM;
 
@@ -541,7 +542,8 @@ static int __init evtchn_init(void)
 		return err;
 	}
 
-	printk(KERN_INFO "Event-channel device installed.\n");
+	printk(KERN_INFO "Event-channel device installed."
+	       " Event-channel level: %d\n", evtchn_level);
 
 	return 0;
 }
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..bc10f22 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,7 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* N-level event channels */
+extern unsigned int evtchn_level;
+
 #endif	/* _XEN_EVENTS_H */
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..c66e1ff 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -283,9 +283,24 @@ DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
 
 /*
  * Event channel endpoints per domain:
+ * 2-level:
  *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
+ * 3-level:
+ *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
-#define NR_EVENT_CHANNELS (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long) * 8)
+#define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;	\
+	switch (x) {					\
+	case 2:						\
+		__v = NR_EVENT_CHANNELS_L2; break;	\
+	case 3:						\
+		__v = NR_EVENT_CHANNELS_L3; break;	\
+	default:					\
+		BUG();					\
+	}						\
+	__v; })
+
 
 struct vcpu_time_info {
 	/*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:40:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkHg-0001Fz-7K; Mon, 31 Dec 2012 18:40:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkHe-0001FP-E8
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:40:22 +0000
Received: from [85.158.138.51:2111] by server-11.bemta-3.messagelabs.com id
	29/37-13335-51CD1E05; Mon, 31 Dec 2012 18:40:21 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1356979216!24491100!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5679 invoked from network); 31 Dec 2012 18:40:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:40:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2181484"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:40:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:40:15 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkHX-0001SJ-8q;
	Mon, 31 Dec 2012 18:40:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:38:56 +0000
Message-ID: <1356979137-18484-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
References: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, konrad.wilk@oracle.com
Subject: [Xen-devel] [RFC PATCH 2/3] Xen: rework NR_EVENT_CHANNELS related
	stuff.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/xen/events.c        |   44 +++++++++++++++++++++++++++++--------------
 drivers/xen/evtchn.c        |   16 +++++++++-------
 include/xen/events.h        |    3 +++
 include/xen/interface/xen.h |   17 ++++++++++++++++-
 4 files changed, 58 insertions(+), 22 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 835101f..f60ba76 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,7 +52,8 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
-static unsigned int evtchn_level = 2;
+unsigned int evtchn_level = 2;
+EXPORT_SYMBOL_GPL(evtchn_level);
 
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
@@ -130,8 +131,7 @@ static int *evtchn_to_irq;
 static unsigned long *pirq_eoi_map;
 static bool (*pirq_needs_eoi)(unsigned irq);
 
-static DEFINE_PER_CPU(unsigned long [NR_EVENT_CHANNELS/BITS_PER_LONG],
-		      cpu_evtchn_mask);
+static DEFINE_PER_CPU(unsigned long *, cpu_evtchn_mask);
 
 /* Xen will never allocate port zero for any purpose. */
 #define VALID_EVTCHN(chn)	((chn) != 0)
@@ -913,7 +913,7 @@ static int find_virq(unsigned int virq, unsigned int cpu)
 	int port, rc = -ENOENT;
 
 	memset(&status, 0, sizeof(status));
-	for (port = 0; port <= NR_EVENT_CHANNELS; port++) {
+	for (port = 0; port <= NR_EVENT_CHANNELS(evtchn_level); port++) {
 		status.dom = DOMID_SELF;
 		status.port = port;
 		rc = HYPERVISOR_event_channel_op(EVTCHNOP_status, &status);
@@ -1138,7 +1138,7 @@ int evtchn_get(unsigned int evtchn)
 	struct irq_info *info;
 	int err = -ENOENT;
 
-	if (evtchn >= NR_EVENT_CHANNELS)
+	if (evtchn >= NR_EVENT_CHANNELS(evtchn_level))
 		return -EINVAL;
 
 	mutex_lock(&irq_mapping_update_lock);
@@ -1227,7 +1227,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1242,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1709,14 +1709,14 @@ void xen_irq_resume(void)
 	init_evtchn_cpu_bindings();
 
 	/* New event-channel space is not 'live' yet. */
-	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
+	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS(evtchn_level); evtchn++)
 		eops->mask_evtchn(evtchn);
 
 	/* No IRQ <-> event-channel mappings. */
 	list_for_each_entry(info, &xen_irq_list_head, list)
 		info->evtchn = 0; /* zap event-channel binding */
 
-	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
+	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS(evtchn_level); evtchn++)
 		evtchn_to_irq[evtchn] = -1;
 
 	for_each_possible_cpu(cpu) {
@@ -1824,21 +1824,37 @@ static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
+	int cpu;
 
-	evtchn_level = 2;
+	/* Setup 2-level event channel */
 	eops = &evtchn_ops_l2;
+	evtchn_level = 2;
 
-	/* Setup 2-level event channel */
-	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
+	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
+				sizeof(*evtchn_to_irq),
 				GFP_KERNEL);
 	BUG_ON(!evtchn_to_irq);
-	for (i = 0; i < NR_EVENT_CHANNELS; i++)
+
+	for_each_possible_cpu(cpu) {
+		void *p;
+		unsigned int nr = NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG;
+		p = kzalloc_node(sizeof(unsigned long) * nr,
+				 GFP_KERNEL,
+				 cpu_to_node(cpu));
+		if (!p)
+			p = kzalloc(sizeof(unsigned long) * nr,
+				    GFP_KERNEL);
+		BUG_ON(!p);
+		per_cpu(cpu_evtchn_mask, cpu) = p;
+	}
+
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++)
 		evtchn_to_irq[i] = -1;
 
 	init_evtchn_cpu_bindings();
 
 	/* No event channels are 'live' right now. */
-	for (i = 0; i < NR_EVENT_CHANNELS; i++)
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++)
 		eops->mask_evtchn(i);
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index b1f60a0..cb45ecf 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -232,7 +232,7 @@ static ssize_t evtchn_write(struct file *file, const char __user *buf,
 	for (i = 0; i < (count/sizeof(evtchn_port_t)); i++) {
 		unsigned port = kbuf[i];
 
-		if (port < NR_EVENT_CHANNELS &&
+		if (port < NR_EVENT_CHANNELS(evtchn_level) &&
 		    get_port_user(port) == u &&
 		    !get_port_enabled(port)) {
 			set_port_enabled(port, true);
@@ -364,7 +364,7 @@ static long evtchn_ioctl(struct file *file,
 			break;
 
 		rc = -EINVAL;
-		if (unbind.port >= NR_EVENT_CHANNELS)
+		if (unbind.port >= NR_EVENT_CHANNELS(evtchn_level))
 			break;
 
 		spin_lock_irq(&port_user_lock);
@@ -392,7 +392,7 @@ static long evtchn_ioctl(struct file *file,
 		if (copy_from_user(&notify, uarg, sizeof(notify)))
 			break;
 
-		if (notify.port >= NR_EVENT_CHANNELS) {
+		if (notify.port >= NR_EVENT_CHANNELS(evtchn_level)) {
 			rc = -EINVAL;
 		} else if (get_port_user(notify.port) != u) {
 			rc = -ENOTCONN;
@@ -482,7 +482,7 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 
 	free_page((unsigned long)u->ring);
 
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (get_port_user(i) != u)
 			continue;
 
@@ -491,7 +491,7 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 
 	spin_unlock_irq(&port_user_lock);
 
-	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
 		if (get_port_user(i) != u)
 			continue;
 
@@ -528,7 +528,8 @@ static int __init evtchn_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	port_user = kcalloc(NR_EVENT_CHANNELS, sizeof(*port_user), GFP_KERNEL);
+	port_user = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
+			    sizeof(*port_user), GFP_KERNEL);
 	if (port_user == NULL)
 		return -ENOMEM;
 
@@ -541,7 +542,8 @@ static int __init evtchn_init(void)
 		return err;
 	}
 
-	printk(KERN_INFO "Event-channel device installed.\n");
+	printk(KERN_INFO "Event-channel device installed."
+	       " Event-channel level: %d\n", evtchn_level);
 
 	return 0;
 }
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..bc10f22 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,7 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* N-level event channels */
+extern unsigned int evtchn_level;
+
 #endif	/* _XEN_EVENTS_H */
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..c66e1ff 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -283,9 +283,24 @@ DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
 
 /*
  * Event channel endpoints per domain:
+ * 2-level:
  *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
+ * 3-level:
+ *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
-#define NR_EVENT_CHANNELS (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long) * 8)
+#define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;	\
+	switch (x) {					\
+	case 2:						\
+		__v = NR_EVENT_CHANNELS_L2; break;	\
+	case 3:						\
+		__v = NR_EVENT_CHANNELS_L3; break;	\
+	default:					\
+		BUG();					\
+	}						\
+	__v; })
+
 
 struct vcpu_time_info {
 	/*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:40:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkHd-0001FQ-QI; Mon, 31 Dec 2012 18:40:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkHc-0001Ez-Hx
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:40:20 +0000
Received: from [85.158.143.99:62262] by server-1.bemta-4.messagelabs.com id
	78/9E-28401-31CD1E05; Mon, 31 Dec 2012 18:40:19 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1356979216!27858251!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26337 invoked from network); 31 Dec 2012 18:40:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:40:18 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2341487"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:40:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:40:15 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkHX-0001SJ-8N;
	Mon, 31 Dec 2012 18:40:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:38:55 +0000
Message-ID: <1356979137-18484-2-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
References: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, konrad.wilk@oracle.com
Subject: [Xen-devel] [RFC PATCH 1/3] Xen: generalized event channel
	operations.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/xen/events.c |  110 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 76 insertions(+), 34 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 7595581..835101f 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -51,6 +51,23 @@
 #include <xen/interface/hvm/hvm_op.h>
 #include <xen/interface/hvm/params.h>
 
+/* N-level event channel, starting from 2 */
+static unsigned int evtchn_level = 2;
+
+struct evtchn_ops {
+	unsigned long (*active_evtchns)(unsigned int,
+					struct shared_info*, unsigned int);
+	void (*clear_evtchn)(int);
+	void (*set_evtchn)(int);
+	int (*test_evtchn)(int);
+	void (*mask_evtchn)(int);
+	void (*unmask_evtchn)(int);
+	int (*is_masked)(int);
+	void (*xen_evtchn_do_upcall)(void);
+	irqreturn_t (*xen_debug_interrupt)(int, void*);
+};
+static struct evtchn_ops *eops;
+
 /*
  * This lock protects updates to the following mapping and reference-count
  * arrays. The lock does not need to be acquired to read the mapping tables.
@@ -285,9 +302,9 @@ static bool pirq_needs_eoi_flag(unsigned irq)
 	return info->u.pirq.flags & PIRQ_NEEDS_EOI;
 }
 
-static inline unsigned long active_evtchns(unsigned int cpu,
-					   struct shared_info *sh,
-					   unsigned int idx)
+static inline unsigned long __active_evtchns_l2(unsigned int cpu,
+						struct shared_info *sh,
+						unsigned int idx)
 {
 	return sh->evtchn_pending[idx] &
 		per_cpu(cpu_evtchn_mask, cpu)[idx] &
@@ -309,6 +326,7 @@ static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 	info_for_irq(irq)->cpu = cpu;
 }
 
+
 static void init_evtchn_cpu_bindings(void)
 {
 	int i;
@@ -327,25 +345,24 @@ static void init_evtchn_cpu_bindings(void)
 		       (i == 0) ? ~0 : 0, sizeof(*per_cpu(cpu_evtchn_mask, i)));
 }
 
-static inline void clear_evtchn(int port)
+static inline void __clear_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_clear_bit(port, &s->evtchn_pending[0]);
 }
 
-static inline void set_evtchn(int port)
+static inline void __set_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_set_bit(port, &s->evtchn_pending[0]);
 }
 
-static inline int test_evtchn(int port)
+static inline int __test_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	return sync_test_bit(port, &s->evtchn_pending[0]);
 }
 
-
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
  * @irq: irq of event channel to send event to
@@ -363,13 +380,13 @@ void notify_remote_via_irq(int irq)
 }
 EXPORT_SYMBOL_GPL(notify_remote_via_irq);
 
-static void mask_evtchn(int port)
+static void __mask_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_set_bit(port, &s->evtchn_mask[0]);
 }
 
-static void unmask_evtchn(int port)
+static void __unmask_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	unsigned int cpu = get_cpu();
@@ -521,7 +538,7 @@ static void eoi_pirq(struct irq_data *data)
 	irq_move_irq(data);
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		eops->clear_evtchn(evtchn);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -567,7 +584,7 @@ static unsigned int __startup_pirq(unsigned int irq)
 	info->evtchn = evtchn;
 
 out:
-	unmask_evtchn(evtchn);
+	eops->unmask_evtchn(evtchn);
 	eoi_pirq(irq_get_irq_data(irq));
 
 	return 0;
@@ -590,7 +607,7 @@ static void shutdown_pirq(struct irq_data *data)
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	mask_evtchn(evtchn);
+	eops->mask_evtchn(evtchn);
 
 	close.port = evtchn;
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
@@ -1164,7 +1181,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
 	notify_remote_via_irq(irq);
 }
 
-irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
+static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 {
 	struct shared_info *sh = HYPERVISOR_shared_info;
 	int cpu = smp_processor_id();
@@ -1245,6 +1262,11 @@ irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
+{
+	return eops->xen_debug_interrupt(irq, dev_id);
+}
+
 static DEFINE_PER_CPU(unsigned, xed_nesting_count);
 static DEFINE_PER_CPU(unsigned int, current_word_idx);
 static DEFINE_PER_CPU(unsigned int, current_bit_idx);
@@ -1263,7 +1285,7 @@ static DEFINE_PER_CPU(unsigned int, current_bit_idx);
  * a bitset of words which contain pending event bits.  The second
  * level is a bitset of pending events themselves.
  */
-static void __xen_evtchn_do_upcall(void)
+static void __xen_evtchn_do_upcall_l2(void)
 {
 	int start_word_idx, start_bit_idx;
 	int word_idx, bit_idx;
@@ -1308,7 +1330,7 @@ static void __xen_evtchn_do_upcall(void)
 			}
 			word_idx = __ffs(words);
 
-			pending_bits = active_evtchns(cpu, s, word_idx);
+			pending_bits = eops->active_evtchns(cpu, s, word_idx);
 			bit_idx = 0; /* usually scan entire word from start */
 			if (word_idx == start_word_idx) {
 				/* We scan the starting word in two parts */
@@ -1377,7 +1399,7 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 	exit_idle();
 	irq_enter();
 
-	__xen_evtchn_do_upcall();
+	eops->xen_evtchn_do_upcall();
 
 	irq_exit();
 	set_irq_regs(old_regs);
@@ -1385,7 +1407,7 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 void xen_hvm_evtchn_do_upcall(void)
 {
-	__xen_evtchn_do_upcall();
+	eops->xen_evtchn_do_upcall();
 }
 EXPORT_SYMBOL_GPL(xen_hvm_evtchn_do_upcall);
 
@@ -1459,15 +1481,14 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 int resend_irq_on_evtchn(unsigned int irq)
 {
 	int masked, evtchn = evtchn_from_irq(irq);
-	struct shared_info *s = HYPERVISOR_shared_info;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 1;
 
-	masked = sync_test_and_set_bit(evtchn, s->evtchn_mask);
-	sync_set_bit(evtchn, s->evtchn_pending);
+	masked = eops->is_masked(evtchn);
+	eops->set_evtchn(evtchn);
 	if (!masked)
-		unmask_evtchn(evtchn);
+		eops->unmask_evtchn(evtchn);
 
 	return 1;
 }
@@ -1477,7 +1498,7 @@ static void enable_dynirq(struct irq_data *data)
 	int evtchn = evtchn_from_irq(data->irq);
 
 	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+		eops->unmask_evtchn(evtchn);
 }
 
 static void disable_dynirq(struct irq_data *data)
@@ -1485,7 +1506,7 @@ static void disable_dynirq(struct irq_data *data)
 	int evtchn = evtchn_from_irq(data->irq);
 
 	if (VALID_EVTCHN(evtchn))
-		mask_evtchn(evtchn);
+		eops->mask_evtchn(evtchn);
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1495,7 +1516,7 @@ static void ack_dynirq(struct irq_data *data)
 	irq_move_irq(data);
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		eops->clear_evtchn(evtchn);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1504,19 +1525,24 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static inline int __is_masked_l2(int chn)
+{
+	struct shared_info *sh = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(chn, sh->evtchn_mask);
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
-	struct shared_info *sh = HYPERVISOR_shared_info;
 	int ret = 0;
 
 	if (VALID_EVTCHN(evtchn)) {
 		int masked;
 
-		masked = sync_test_and_set_bit(evtchn, sh->evtchn_mask);
-		sync_set_bit(evtchn, sh->evtchn_pending);
+		masked = eops->is_masked(evtchn);
+		eops->set_evtchn(evtchn);
 		if (!masked)
-			unmask_evtchn(evtchn);
+			eops->unmask_evtchn(evtchn);
 		ret = 1;
 	}
 
@@ -1616,7 +1642,7 @@ void xen_clear_irq_pending(int irq)
 	int evtchn = evtchn_from_irq(irq);
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		eops->clear_evtchn(evtchn);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
@@ -1624,7 +1650,7 @@ void xen_set_irq_pending(int irq)
 	int evtchn = evtchn_from_irq(irq);
 
 	if (VALID_EVTCHN(evtchn))
-		set_evtchn(evtchn);
+		eops->set_evtchn(evtchn);
 }
 
 bool xen_test_irq_pending(int irq)
@@ -1633,7 +1659,7 @@ bool xen_test_irq_pending(int irq)
 	bool ret = false;
 
 	if (VALID_EVTCHN(evtchn))
-		ret = test_evtchn(evtchn);
+		ret = eops->test_evtchn(evtchn);
 
 	return ret;
 }
@@ -1684,7 +1710,7 @@ void xen_irq_resume(void)
 
 	/* New event-channel space is not 'live' yet. */
 	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
-		mask_evtchn(evtchn);
+		eops->mask_evtchn(evtchn);
 
 	/* No IRQ <-> event-channel mappings. */
 	list_for_each_entry(info, &xen_irq_list_head, list)
@@ -1783,12 +1809,28 @@ void xen_callback_vector(void)
 void xen_callback_vector(void) {}
 #endif
 
+static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
+	.active_evtchns = __active_evtchns_l2,
+	.clear_evtchn = __clear_evtchn_l2,
+	.set_evtchn = __set_evtchn_l2,
+	.test_evtchn = __test_evtchn_l2,
+	.mask_evtchn = __mask_evtchn_l2,
+	.unmask_evtchn = __unmask_evtchn_l2,
+	.is_masked = __is_masked_l2,
+	.xen_evtchn_do_upcall = __xen_evtchn_do_upcall_l2,
+	.xen_debug_interrupt = __xen_debug_interrupt_l2,
+};
+
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
 
+	evtchn_level = 2;
+	eops = &evtchn_ops_l2;
+
+	/* Setup 2-level event channel */
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
-				    GFP_KERNEL);
+				GFP_KERNEL);
 	BUG_ON(!evtchn_to_irq);
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
 		evtchn_to_irq[i] = -1;
@@ -1797,7 +1839,7 @@ void __init xen_init_IRQ(void)
 
 	/* No event channels are 'live' right now. */
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
-		mask_evtchn(i);
+		eops->mask_evtchn(i);
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 int resend_irq_on_evtchn(unsigned int irq)
 {
 	int masked, evtchn = evtchn_from_irq(irq);
-	struct shared_info *s = HYPERVISOR_shared_info;
 
 	if (!VALID_EVTCHN(evtchn))
 		return 1;
 
-	masked = sync_test_and_set_bit(evtchn, s->evtchn_mask);
-	sync_set_bit(evtchn, s->evtchn_pending);
+	masked = eops->is_masked(evtchn);
+	eops->set_evtchn(evtchn);
 	if (!masked)
-		unmask_evtchn(evtchn);
+		eops->unmask_evtchn(evtchn);
 
 	return 1;
 }
@@ -1477,7 +1498,7 @@ static void enable_dynirq(struct irq_data *data)
 	int evtchn = evtchn_from_irq(data->irq);
 
 	if (VALID_EVTCHN(evtchn))
-		unmask_evtchn(evtchn);
+		eops->unmask_evtchn(evtchn);
 }
 
 static void disable_dynirq(struct irq_data *data)
@@ -1485,7 +1506,7 @@ static void disable_dynirq(struct irq_data *data)
 	int evtchn = evtchn_from_irq(data->irq);
 
 	if (VALID_EVTCHN(evtchn))
-		mask_evtchn(evtchn);
+		eops->mask_evtchn(evtchn);
 }
 
 static void ack_dynirq(struct irq_data *data)
@@ -1495,7 +1516,7 @@ static void ack_dynirq(struct irq_data *data)
 	irq_move_irq(data);
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		eops->clear_evtchn(evtchn);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1504,19 +1525,24 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
+static inline int __is_masked_l2(int chn)
+{
+	struct shared_info *sh = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(chn, sh->evtchn_mask);
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
-	struct shared_info *sh = HYPERVISOR_shared_info;
 	int ret = 0;
 
 	if (VALID_EVTCHN(evtchn)) {
 		int masked;
 
-		masked = sync_test_and_set_bit(evtchn, sh->evtchn_mask);
-		sync_set_bit(evtchn, sh->evtchn_pending);
+		masked = eops->is_masked(evtchn);
+		eops->set_evtchn(evtchn);
 		if (!masked)
-			unmask_evtchn(evtchn);
+			eops->unmask_evtchn(evtchn);
 		ret = 1;
 	}
 
@@ -1616,7 +1642,7 @@ void xen_clear_irq_pending(int irq)
 	int evtchn = evtchn_from_irq(irq);
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		eops->clear_evtchn(evtchn);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
@@ -1624,7 +1650,7 @@ void xen_set_irq_pending(int irq)
 	int evtchn = evtchn_from_irq(irq);
 
 	if (VALID_EVTCHN(evtchn))
-		set_evtchn(evtchn);
+		eops->set_evtchn(evtchn);
 }
 
 bool xen_test_irq_pending(int irq)
@@ -1633,7 +1659,7 @@ bool xen_test_irq_pending(int irq)
 	bool ret = false;
 
 	if (VALID_EVTCHN(evtchn))
-		ret = test_evtchn(evtchn);
+		ret = eops->test_evtchn(evtchn);
 
 	return ret;
 }
@@ -1684,7 +1710,7 @@ void xen_irq_resume(void)
 
 	/* New event-channel space is not 'live' yet. */
 	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
-		mask_evtchn(evtchn);
+		eops->mask_evtchn(evtchn);
 
 	/* No IRQ <-> event-channel mappings. */
 	list_for_each_entry(info, &xen_irq_list_head, list)
@@ -1783,12 +1809,28 @@ void xen_callback_vector(void)
 void xen_callback_vector(void) {}
 #endif
 
+static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
+	.active_evtchns = __active_evtchns_l2,
+	.clear_evtchn = __clear_evtchn_l2,
+	.set_evtchn = __set_evtchn_l2,
+	.test_evtchn = __test_evtchn_l2,
+	.mask_evtchn = __mask_evtchn_l2,
+	.unmask_evtchn = __unmask_evtchn_l2,
+	.is_masked = __is_masked_l2,
+	.xen_evtchn_do_upcall = __xen_evtchn_do_upcall_l2,
+	.xen_debug_interrupt = __xen_debug_interrupt_l2,
+};
+
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
 
+	evtchn_level = 2;
+	eops = &evtchn_ops_l2;
+
+	/* Setup 2-level event channel */
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
-				    GFP_KERNEL);
+				GFP_KERNEL);
 	BUG_ON(!evtchn_to_irq);
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
 		evtchn_to_irq[i] = -1;
@@ -1797,7 +1839,7 @@ void __init xen_init_IRQ(void)
 
 	/* No event channels are 'live' right now. */
 	for (i = 0; i < NR_EVENT_CHANNELS; i++)
-		mask_evtchn(i);
+		eops->mask_evtchn(i);
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
-- 
1.7.10.4

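[Editor's note: the patch above routes every event-channel bitmap operation
through an `evtchn_ops` function-pointer table (`eops`), so an alternative ABI
can be selected at boot. A minimal user-space sketch of that dispatch pattern,
with plain arrays standing in for the shared-info pages and all names
illustrative rather than taken from the kernel:]

```c
#include <assert.h>
#include <limits.h>

#define NR_CHANNELS 1024
#define BPL (sizeof(unsigned long) * CHAR_BIT)

/* Stand-ins for the shared-info pending/mask pages. */
static unsigned long pending[NR_CHANNELS / BPL];
static unsigned long mask[NR_CHANNELS / BPL];

static void set_bit_in(unsigned long *bm, int port)
{
	bm[port / BPL] |= 1UL << (port % BPL);
}

static void clear_bit_in(unsigned long *bm, int port)
{
	bm[port / BPL] &= ~(1UL << (port % BPL));
}

static int test_bit_in(const unsigned long *bm, int port)
{
	return (bm[port / BPL] >> (port % BPL)) & 1;
}

/* Each ABI level supplies its own accessors through this table. */
struct evtchn_ops {
	void (*set_evtchn)(int port);
	void (*clear_evtchn)(int port);
	int  (*test_evtchn)(int port);
	void (*mask_evtchn)(int port);
	void (*unmask_evtchn)(int port);
};

static void l2_set(int p)    { set_bit_in(pending, p); }
static void l2_clear(int p)  { clear_bit_in(pending, p); }
static int  l2_test(int p)   { return test_bit_in(pending, p); }
static void l2_mask(int p)   { set_bit_in(mask, p); }
static void l2_unmask(int p) { clear_bit_in(mask, p); }

static const struct evtchn_ops evtchn_ops_l2 = {
	.set_evtchn   = l2_set,
	.clear_evtchn = l2_clear,
	.test_evtchn  = l2_test,
	.mask_evtchn  = l2_mask,
	.unmask_evtchn = l2_unmask,
};

/* Generic code only ever calls through 'eops', as in the patch. */
static const struct evtchn_ops *eops = &evtchn_ops_l2;
```

[The kernel patch also keeps the per-ABI upcall and debug handlers in the same
table; this sketch covers only the bitmap accessors.]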

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:40:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkHc-0001F1-DV; Mon, 31 Dec 2012 18:40:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkHb-0001En-5c
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:40:19 +0000
Received: from [85.158.143.99:46029] by server-3.bemta-4.messagelabs.com id
	B8/0D-18211-21CD1E05; Mon, 31 Dec 2012 18:40:18 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1356979216!27858251!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26323 invoked from network); 31 Dec 2012 18:40:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:40:17 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2341486"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:40:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:40:15 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkHX-0001SJ-6j;
	Mon, 31 Dec 2012 18:40:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:38:54 +0000
Message-ID: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
Cc: konrad.wilk@oracle.com
Subject: [Xen-devel] Implement 3-level event channel routines in Linux.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series implements 3-level event channel routines in the Linux kernel.

My thinking is that a 3-level event channel is only useful for Dom0 or a driver
domain, so it is not enabled by default. Enable it with evtchn_level=3 on the
kernel command line.

HVM is not supported at the moment, as it is unlikely to need this, and I
haven't found the right place to issue the hypercall.

My understanding is that PVH has more or less the same initialization process as
PV, so the current implementation should work for PVH as well. Please correct
me if I'm wrong.
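[Editor's note: in the 3-level scheme this series adds, a port number
decomposes into an L1 selector index, an L2 selector index, and a bit index
within one pending word, each level spanning BITS_PER_LONG entries. A small
sketch of that arithmetic, mirroring the l1cb/l2cb constants used in the patch;
the helper names here are invented for illustration:]

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG ((int)(sizeof(unsigned long) * CHAR_BIT))

/* Split a 3-level event channel port into its (l1, l2, bit) triple. */
static void port_to_idx(int port, int *l1, int *l2, int *bit)
{
	*l1  = port / (BITS_PER_LONG * BITS_PER_LONG);
	*l2  = (port / BITS_PER_LONG) % BITS_PER_LONG;
	*bit = port % BITS_PER_LONG;
}

/* Reassemble a port, as the upcall loop does when delivering events. */
static int idx_to_port(int l1, int l2, int bit)
{
	return l1 * BITS_PER_LONG * BITS_PER_LONG + l2 * BITS_PER_LONG + bit;
}
```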


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:40:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:40:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkHg-0001GA-Jj; Mon, 31 Dec 2012 18:40:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkHf-0001Fb-2z
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:40:23 +0000
Received: from [85.158.138.51:28768] by server-9.bemta-3.messagelabs.com id
	C4/2B-11948-61CD1E05; Mon, 31 Dec 2012 18:40:22 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1356979216!24491100!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5731 invoked from network); 31 Dec 2012 18:40:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:40:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2181485"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:40:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:40:15 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkHX-0001SJ-Aj;
	Mon, 31 Dec 2012 18:40:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:38:57 +0000
Message-ID: <1356979137-18484-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
References: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, konrad.wilk@oracle.com
Subject: [Xen-devel] [RFC PATCH 3/3] Xen: implement 3-level event channel
	routines.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch/x86/xen/enlighten.c              |    7 +
 drivers/xen/events.c                  |  419 +++++++++++++++++++++++++++++++--
 include/xen/events.h                  |    2 +
 include/xen/interface/event_channel.h |   24 ++
 include/xen/interface/xen.h           |    2 +-
 5 files changed, 437 insertions(+), 17 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bc893e7..f471881 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -43,6 +43,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/events.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -195,6 +196,9 @@ void xen_vcpu_restore(void)
 		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
 			BUG();
 	}
+
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
 }
 
 static void __init xen_banner(void)
@@ -1028,6 +1032,9 @@ void xen_setup_vcpu_info_placement(void)
 	for_each_possible_cpu(cpu)
 		xen_vcpu_setup(cpu);
 
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
+
 	/* xen_vcpu_setup managed to place the vcpu_info within the
 	   percpu area for all cpus, so make use of it */
 	if (have_vcpu_info_placement) {
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index f60ba76..adb94e9 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,9 +52,15 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
+unsigned int evtchn_level_param = -1;
 unsigned int evtchn_level = 2;
 EXPORT_SYMBOL_GPL(evtchn_level);
 
+/* 3-level event channel */
+DEFINE_PER_CPU(unsigned long [sizeof(unsigned long)*8], evtchn_sel_l2);
+unsigned long evtchn_pending[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+unsigned long evtchn_mask[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
 					struct shared_info*, unsigned int);
@@ -142,6 +148,29 @@ static struct irq_chip xen_pirq_chip;
 static void enable_dynirq(struct irq_data *data);
 static void disable_dynirq(struct irq_data *data);
 
+static int __init parse_evtchn_level(char *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	if (strcmp(arg, "3") == 0)
+		evtchn_level_param = 3;
+
+	return 0;
+}
+early_param("evtchn_level", parse_evtchn_level);
+
+static inline int __is_masked_l2(int chn)
+{
+	struct shared_info *sh = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(chn, sh->evtchn_mask);
+}
+
+static inline int __is_masked_l3(int chn)
+{
+	return sync_test_and_set_bit(chn, evtchn_mask);
+}
+
 /* Get info for IRQ */
 static struct irq_info *info_for_irq(unsigned irq)
 {
@@ -311,6 +340,15 @@ static inline unsigned long __active_evtchns_l2(unsigned int cpu,
 		~sh->evtchn_mask[idx];
 }
 
+static inline unsigned long __active_evtchns_l3(unsigned int cpu,
+						struct shared_info *sh,
+						unsigned int idx)
+{
+	return evtchn_pending[idx] &
+		per_cpu(cpu_evtchn_mask, cpu)[idx] &
+		~evtchn_mask[idx];
+}
+
 static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 {
 	int irq = evtchn_to_irq[chn];
@@ -351,18 +389,33 @@ static inline void __clear_evtchn_l2(int port)
 	sync_clear_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __clear_evtchn_l3(int port)
+{
+	sync_clear_bit(port, &evtchn_pending[0]);
+}
+
 static inline void __set_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_set_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __set_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_pending[0]);
+}
+
 static inline int __test_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	return sync_test_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline int __test_evtchn_l3(int port)
+{
+	return sync_test_bit(port, &evtchn_pending[0]);
+}
+
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
  * @irq: irq of event channel to send event to
@@ -386,6 +439,11 @@ static void __mask_evtchn_l2(int port)
 	sync_set_bit(port, &s->evtchn_mask[0]);
 }
 
+static void __mask_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_mask[0]);
+}
+
 static void __unmask_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -416,6 +474,36 @@ static void __unmask_evtchn_l2(int port)
 	put_cpu();
 }
 
+static void __unmask_evtchn_l3(int port)
+{
+	unsigned int cpu = get_cpu();
+	int l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	int l2cb = BITS_PER_LONG;
+
+	if (unlikely(cpu != cpu_from_evtchn(port))) {
+		struct evtchn_unmask unmask = { .port = port };
+		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
+	} else {
+		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+
+		sync_clear_bit(port, &evtchn_mask[0]);
+
+		/*
+		 * The following is basically the equivalent of
+		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
+		 * the interrupt edge' if the channel is masked.
+		 */
+		if (sync_test_bit(port, &evtchn_pending[0]) &&
+		    !sync_test_and_set_bit(port / l2cb,
+					   &per_cpu(evtchn_sel_l2, cpu)[0]) &&
+		    !sync_test_and_set_bit(port / l1cb,
+					   &vcpu_info->evtchn_pending_sel))
+			vcpu_info->evtchn_upcall_pending = 1;
+	}
+
+	put_cpu();
+}
+
 static void xen_irq_init(unsigned irq)
 {
 	struct irq_info *info;
@@ -1181,6 +1269,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
 	notify_remote_via_irq(irq);
 }
 
+static DEFINE_SPINLOCK(debug_lock);
 static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 {
 	struct shared_info *sh = HYPERVISOR_shared_info;
@@ -1188,7 +1277,6 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
 	int i;
 	unsigned long flags;
-	static DEFINE_SPINLOCK(debug_lock);
 	struct vcpu_info *v;
 
 	spin_lock_irqsave(&debug_lock, flags);
@@ -1196,13 +1284,13 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	printk("\nvcpu %d\n  ", cpu);
 
 	for_each_online_cpu(i) {
-		int pending;
+		int masked;
 		v = per_cpu(xen_vcpu, i);
-		pending = (get_irq_regs() && i == cpu)
+		masked = (get_irq_regs() && i == cpu)
 			? xen_irqs_disabled(get_irq_regs())
 			: v->evtchn_upcall_mask;
 		printk("%d: masked=%d pending=%d event_sel %0*lx\n  ", i,
-		       pending, v->evtchn_upcall_pending,
+		       masked, v->evtchn_upcall_pending,
 		       (int)(sizeof(v->evtchn_pending_sel)*2),
 		       v->evtchn_pending_sel);
 	}
@@ -1227,7 +1315,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(2)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1330,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(2); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1262,15 +1350,110 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static irqreturn_t __xen_debug_interrupt_l3(int irq, void *dev_id)
+{
+	int cpu = smp_processor_id();
+	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
+	int i, j;
+	unsigned long flags;
+	struct vcpu_info *v;
+
+	spin_lock_irqsave(&debug_lock, flags);
+
+	printk("\nvcpu %d\n  ", cpu);
+
+	for_each_online_cpu(i) {
+		int masked;
+
+		v = per_cpu(xen_vcpu, i);
+		masked = (get_irq_regs() && i == cpu)
+			? xen_irqs_disabled(get_irq_regs())
+			: v->evtchn_upcall_mask;
+		printk("%d: masked=%d pending=%d event_sel_l1 %0*lx\n  ", i,
+		       masked, v->evtchn_upcall_pending,
+		       (int)(sizeof(v->evtchn_pending_sel)*2),
+		       v->evtchn_pending_sel);
+
+		printk("\nevtchn_sel_l2:\n   ");
+		for (j = (sizeof(unsigned long)*8)-1; j >= 0; j--)
+			printk("%0*lx%s",
+			       (int)(sizeof(evtchn_sel_l2[0])*2),
+			       per_cpu(evtchn_sel_l2, i)[j],
+			       j % 8 == 0 ? "\n   " : " ");
+	}
+
+	v = per_cpu(xen_vcpu, cpu);
+
+	printk("\npending:\n   ");
+	for (i = ARRAY_SIZE(evtchn_pending)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_pending[0])*2),
+		       evtchn_pending[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nglobal mask:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       evtchn_mask[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nglobally unmasked:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       evtchn_pending[i] & ~evtchn_mask[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nlocal cpu%d mask:\n   ", cpu);
+	for (i = (NR_EVENT_CHANNELS(3)/BITS_PER_LONG)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
+		       cpu_evtchn[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nlocally unmasked:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--) {
+		unsigned long pending = evtchn_pending[i]
+			& ~evtchn_mask[i]
+			& cpu_evtchn[i];
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       pending, i % 8 == 0 ? "\n   " : " ");
+	}
+
+	printk("\npending list:\n");
+	for (i = 0; i < NR_EVENT_CHANNELS(3); i++) {
+		if (sync_test_bit(i, evtchn_pending)) {
+			int word_idx_l1 = i / (BITS_PER_LONG * BITS_PER_LONG);
+			int word_idx_l2 = i / BITS_PER_LONG;
+			printk("  %d: event %d -> irq %d%s%s%s%s\n",
+			       cpu_from_evtchn(i), i,
+			       evtchn_to_irq[i],
+			       sync_test_bit(word_idx_l1, &v->evtchn_pending_sel)
+					     ? "" : " l1-clear",
+			       sync_test_bit(word_idx_l2, per_cpu(evtchn_sel_l2, cpu))
+					     ? "" : " l2-clear",
+			       !sync_test_bit(i, evtchn_mask)
+					     ? "" : " globally-masked",
+			       sync_test_bit(i, cpu_evtchn)
+					     ? "" : " locally-masked");
+		}
+	}
+
+	spin_unlock_irqrestore(&debug_lock, flags);
+
+	return IRQ_HANDLED;
+}
+
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 {
 	return eops->xen_debug_interrupt(irq, dev_id);
 }
 
 static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+
+/* 2-level event channel does not use current_word_idx_l2 */
 static DEFINE_PER_CPU(unsigned int, current_word_idx);
+static DEFINE_PER_CPU(unsigned int, current_word_idx_l2);
 static DEFINE_PER_CPU(unsigned int, current_bit_idx);
 
+
 /*
  * Mask out the i least significant bits of w
  */
@@ -1303,7 +1486,8 @@ static void __xen_evtchn_do_upcall_l2(void)
 		if (__this_cpu_inc_return(xed_nesting_count) - 1)
 			goto out;
 
-#ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
+#ifndef CONFIG_X86
+		/* No need for a barrier -- XCHG is a barrier on x86. */
 		/* Clear master flag /before/ clearing selector flag. */
 		wmb();
 #endif
@@ -1392,6 +1576,155 @@ out:
 	put_cpu();
 }
 
+static void __xen_evtchn_do_upcall_l3(void)
+{
+	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+	unsigned count;
+	int start_word_idx_l1, start_word_idx_l2, start_bit_idx;
+	int word_idx_l1, word_idx_l2, bit_idx;
+	int i, j;
+	unsigned long l1cb, l2cb;
+	int cpu = get_cpu();
+
+	l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	l2cb = BITS_PER_LONG;
+
+	do {
+		unsigned long pending_words_l1;
+
+		vcpu_info->evtchn_upcall_pending = 0;
+
+		if (__this_cpu_inc_return(xed_nesting_count) - 1)
+			goto out;
+#ifndef CONFIG_X86
+		/* No need for a barrier -- XCHG is a barrier on x86. */
+		/* Clear master flag /before/ clearing selector flag. */
+		wmb();
+#endif
+		/* here we get l1 pending selector */
+		pending_words_l1 = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+		start_word_idx_l1 = __this_cpu_read(current_word_idx);
+		start_word_idx_l2 = __this_cpu_read(current_word_idx_l2);
+		start_bit_idx = __this_cpu_read(current_bit_idx);
+
+		word_idx_l1 = start_word_idx_l1;
+
+		/* loop through l1, try to pick up l2 */
+		for (i = 0; pending_words_l1 != 0; i++) {
+			unsigned long words_l1;
+			unsigned long pending_words_l2;
+			unsigned long pwl2idx;
+
+			words_l1 = MASK_LSBS(pending_words_l1, word_idx_l1);
+
+			if (words_l1 == 0) {
+				word_idx_l1 = 0;
+				start_word_idx_l2 = 0;
+				continue;
+			}
+
+			word_idx_l1 = __ffs(words_l1);
+
+			pwl2idx = word_idx_l1;
+
+			pending_words_l2 =
+				xchg(&per_cpu(evtchn_sel_l2, cpu)[pwl2idx],
+				     0);
+
+			word_idx_l2 = 0;
+			if (word_idx_l1 == start_word_idx_l1) {
+				if (i == 0)
+					word_idx_l2 = start_word_idx_l2;
+				else
+					pending_words_l2 &= (1UL << start_word_idx_l2) - 1;
+			}
+
+			for (j = 0; pending_words_l2 != 0; j++) {
+				unsigned long pending_bits;
+				unsigned long words_l2;
+				unsigned long idx;
+
+				words_l2 = MASK_LSBS(pending_words_l2,
+						     word_idx_l2);
+
+				if (words_l2 == 0) {
+					word_idx_l2 = 0;
+					bit_idx = 0;
+					continue;
+				}
+
+				word_idx_l2 = __ffs(words_l2);
+
+				idx = word_idx_l1*BITS_PER_LONG+word_idx_l2;
+				pending_bits =
+					eops->active_evtchns(cpu, NULL, idx);
+
+				bit_idx = 0;
+				if (word_idx_l2 == start_word_idx_l2) {
+					if (j == 0)
+						bit_idx = start_bit_idx;
+					else
+						pending_bits &= (1UL << start_bit_idx) - 1;
+				}
+
+				/* process port */
+				do {
+					unsigned long bits;
+					int port, irq;
+					struct irq_desc *desc;
+
+					bits = MASK_LSBS(pending_bits, bit_idx);
+
+					if (bits == 0)
+						break;
+
+					bit_idx = __ffs(bits);
+
+					port = word_idx_l1 * l1cb +
+						word_idx_l2 * l2cb +
+						bit_idx;
+
+					irq = evtchn_to_irq[port];
+
+					if (irq != -1) {
+						desc = irq_to_desc(irq);
+						if (desc)
+							generic_handle_irq_desc(irq, desc);
+					}
+
+					bit_idx = (bit_idx + 1) % BITS_PER_LONG;
+
+					__this_cpu_write(current_bit_idx, bit_idx);
+					__this_cpu_write(current_word_idx_l2,
+							 bit_idx ? word_idx_l2 :
+							 (word_idx_l2+1) % BITS_PER_LONG);
+					__this_cpu_write(current_word_idx,
+							 word_idx_l2 ? word_idx_l1 :
+							 (word_idx_l1+1) % BITS_PER_LONG);
+				} while (bit_idx != 0);
+
+				if ((word_idx_l2 != start_word_idx_l2) || (j != 0))
+					pending_words_l2 &= ~(1UL << word_idx_l2);
+
+				word_idx_l2 = (word_idx_l2 + 1) % BITS_PER_LONG;
+			}
+
+			if ((word_idx_l1 != start_word_idx_l1) || (i != 0))
+				pending_words_l1 &= ~(1UL << word_idx_l1);
+
+			word_idx_l1 = (word_idx_l1 + 1) % BITS_PER_LONG;
+		}
+
+		BUG_ON(!irqs_disabled());
+		count = __this_cpu_read(xed_nesting_count);
+		__this_cpu_write(xed_nesting_count, 0);
+	} while (count != 1 || vcpu_info->evtchn_upcall_pending);
+
+out:
+	put_cpu();
+}
+
 void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
@@ -1525,12 +1858,6 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
-static inline int __is_masked_l2(int chn)
-{
-	struct shared_info *sh = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(chn, sh->evtchn_mask);
-}
-
 static int retrigger_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -1821,14 +2148,74 @@ static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
 	.xen_debug_interrupt = __xen_debug_interrupt_l2,
 };
 
+static struct evtchn_ops evtchn_ops_l3 __read_mostly = {
+	.active_evtchns = __active_evtchns_l3,
+	.clear_evtchn = __clear_evtchn_l3,
+	.set_evtchn = __set_evtchn_l3,
+	.test_evtchn = __test_evtchn_l3,
+	.mask_evtchn = __mask_evtchn_l3,
+	.unmask_evtchn = __unmask_evtchn_l3,
+	.is_masked = __is_masked_l3,
+	.xen_evtchn_do_upcall = __xen_evtchn_do_upcall_l3,
+	.xen_debug_interrupt = __xen_debug_interrupt_l3,
+};
+
+int xen_event_channel_setup_3level(void)
+{
+	evtchn_register_nlevel_t reg;
+	int i, nr_pages, cpu;
+	unsigned long mfns[nr_cpu_ids];
+	unsigned long offsets[nr_cpu_ids];
+	int rc = -EINVAL;
+
+	memset(&reg, 0, sizeof(reg));
+
+	reg.level = 3;
+	nr_pages = (sizeof(unsigned long) == 4 ? 1 : 8);
+
+	for (i = 0; i < nr_pages; i++) {
+		unsigned long offset = PAGE_SIZE * i;
+		reg.u.l3.evtchn_pending[i] =
+			arbitrary_virt_to_mfn(
+				(void *)((unsigned long)evtchn_pending+offset));
+		reg.u.l3.evtchn_mask[i] =
+			arbitrary_virt_to_mfn(
+				(void *)((unsigned long)evtchn_mask+offset));
+	}
+
+	reg.u.l3.l2sel_mfn = mfns;
+	reg.u.l3.l2sel_offset = offsets;
+	reg.u.l3.nr_vcpus = nr_cpu_ids;
+
+	for_each_possible_cpu(cpu) {
+		reg.u.l3.l2sel_mfn[cpu] =
+			arbitrary_virt_to_mfn(&per_cpu(evtchn_sel_l2, cpu));
+		reg.u.l3.l2sel_offset[cpu] =
+			offset_in_page(&per_cpu(evtchn_sel_l2, cpu));
+	}
+
+	rc = HYPERVISOR_event_channel_op(EVTCHNOP_register_nlevel, &reg);
+
+	if (rc == 0)
+		evtchn_level = 3;
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(xen_event_channel_setup_3level);
+
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
 	int cpu;
 
-	/* Setup 2-level event channel */
-	eops = &evtchn_ops_l2;
-	evtchn_level = 2;
+	switch (evtchn_level) {
+	case 2:
+		eops = &evtchn_ops_l2; break;
+	case 3:
+		eops = &evtchn_ops_l3; break;
+	default:
+		BUG();
+	}
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
 				sizeof(*evtchn_to_irq),
diff --git a/include/xen/events.h b/include/xen/events.h
index bc10f22..87696fc 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -111,5 +111,7 @@ int xen_test_irq_shared(int irq);
 
 /* N-level event channels */
 extern unsigned int evtchn_level;
+extern unsigned int evtchn_level_param;
+int xen_event_channel_setup_3level(void);
 
 #endif	/* _XEN_EVENTS_H */
diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
index f494292..f764d21 100644
--- a/include/xen/interface/event_channel.h
+++ b/include/xen/interface/event_channel.h
@@ -190,6 +190,30 @@ struct evtchn_reset {
 };
 typedef struct evtchn_reset evtchn_reset_t;
 
+/*
+ * EVTCHNOP_register_nlevel: Register N level event channels.
+ * NOTES:
+ *   1. currently only 3-level is supported.
+ *   2. should fall back to basic 2-level if this call fails.
+ */
+#define EVTCHNOP_register_nlevel 11
+#define MAX_L3_PAGES 8		/* 8 pages for 64 bits */
+struct evtchn_register_3level {
+	unsigned long evtchn_pending[MAX_L3_PAGES];
+	unsigned long evtchn_mask[MAX_L3_PAGES];
+	unsigned long *l2sel_mfn;
+	unsigned long *l2sel_offset;
+	unsigned int nr_vcpus;
+};
+
+struct evtchn_register_nlevel {
+	uint32_t level;
+	union {
+		struct evtchn_register_3level l3;
+	} u;
+};
+typedef struct evtchn_register_nlevel evtchn_register_nlevel_t;
+
 struct evtchn_op {
 	uint32_t cmd; /* EVTCHNOP_* */
 	union {
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index c66e1ff..7cb9d8f 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -289,7 +289,7 @@ DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
  *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
 #define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
-#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long))
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * 64)
 #define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;	\
 	switch (x) {					\
 	case 2:						\
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 18:40:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 18:40:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpkHg-0001GA-Jj; Mon, 31 Dec 2012 18:40:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1TpkHf-0001Fb-2z
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 18:40:23 +0000
Received: from [85.158.138.51:28768] by server-9.bemta-3.messagelabs.com id
	C4/2B-11948-61CD1E05; Mon, 31 Dec 2012 18:40:22 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1356979216!24491100!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyOTUyOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5731 invoked from network); 31 Dec 2012 18:40:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 18:40:19 -0000
X-IronPort-AV: E=Sophos;i="4.84,386,1355097600"; 
   d="scan'208";a="2181485"
Received: from unknown (HELO FTLPEX01CL03.citrite.net) ([10.13.107.80])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	31 Dec 2012 18:40:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.318.1;
	Mon, 31 Dec 2012 13:40:15 -0500
Received: from [10.80.237.127] (helo=tbox.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1TpkHX-0001SJ-Aj;
	Mon, 31 Dec 2012 18:40:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 31 Dec 2012 18:38:57 +0000
Message-ID: <1356979137-18484-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
References: <1356979137-18484-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: Wei Liu <wei.liu2@citrix.com>, konrad.wilk@oracle.com
Subject: [Xen-devel] [RFC PATCH 3/3] Xen: implement 3-level event channel
	routines.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch/x86/xen/enlighten.c              |    7 +
 drivers/xen/events.c                  |  419 +++++++++++++++++++++++++++++++--
 include/xen/events.h                  |    2 +
 include/xen/interface/event_channel.h |   24 ++
 include/xen/interface/xen.h           |    2 +-
 5 files changed, 437 insertions(+), 17 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bc893e7..f471881 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -43,6 +43,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/events.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -195,6 +196,9 @@ void xen_vcpu_restore(void)
 		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
 			BUG();
 	}
+
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
 }
 
 static void __init xen_banner(void)
@@ -1028,6 +1032,9 @@ void xen_setup_vcpu_info_placement(void)
 	for_each_possible_cpu(cpu)
 		xen_vcpu_setup(cpu);
 
+	if (evtchn_level_param == 3)
+		xen_event_channel_setup_3level();
+
 	/* xen_vcpu_setup managed to place the vcpu_info within the
 	   percpu area for all cpus, so make use of it */
 	if (have_vcpu_info_placement) {
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index f60ba76..adb94e9 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -52,9 +52,15 @@
 #include <xen/interface/hvm/params.h>
 
 /* N-level event channel, starting from 2 */
+unsigned int evtchn_level_param = -1;
 unsigned int evtchn_level = 2;
 EXPORT_SYMBOL_GPL(evtchn_level);
 
+/* 3-level event channel */
+DEFINE_PER_CPU(unsigned long [sizeof(unsigned long)*8], evtchn_sel_l2);
+unsigned long evtchn_pending[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+unsigned long evtchn_mask[NR_EVENT_CHANNELS_L3/BITS_PER_LONG] __page_aligned_bss;
+
 struct evtchn_ops {
 	unsigned long (*active_evtchns)(unsigned int,
 					struct shared_info*, unsigned int);
@@ -142,6 +148,29 @@ static struct irq_chip xen_pirq_chip;
 static void enable_dynirq(struct irq_data *data);
 static void disable_dynirq(struct irq_data *data);
 
+static int __init parse_evtchn_level(char *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	if (strcmp(arg, "3") == 0)
+		evtchn_level_param = 3;
+
+	return 0;
+}
+early_param("evtchn_level", parse_evtchn_level);
+
+static inline int __is_masked_l2(int chn)
+{
+	struct shared_info *sh = HYPERVISOR_shared_info;
+	return sync_test_and_set_bit(chn, sh->evtchn_mask);
+}
+
+static inline int __is_masked_l3(int chn)
+{
+	return sync_test_and_set_bit(chn, evtchn_mask);
+}
+
 /* Get info for IRQ */
 static struct irq_info *info_for_irq(unsigned irq)
 {
@@ -311,6 +340,15 @@ static inline unsigned long __active_evtchns_l2(unsigned int cpu,
 		~sh->evtchn_mask[idx];
 }
 
+static inline unsigned long __active_evtchns_l3(unsigned int cpu,
+						struct shared_info *sh,
+						unsigned int idx)
+{
+	return evtchn_pending[idx] &
+		per_cpu(cpu_evtchn_mask, cpu)[idx] &
+		~evtchn_mask[idx];
+}
+
 static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
 {
 	int irq = evtchn_to_irq[chn];
@@ -351,18 +389,33 @@ static inline void __clear_evtchn_l2(int port)
 	sync_clear_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __clear_evtchn_l3(int port)
+{
+	sync_clear_bit(port, &evtchn_pending[0]);
+}
+
 static inline void __set_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	sync_set_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline void __set_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_pending[0]);
+}
+
 static inline int __test_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	return sync_test_bit(port, &s->evtchn_pending[0]);
 }
 
+static inline int __test_evtchn_l3(int port)
+{
+	return sync_test_bit(port, &evtchn_pending[0]);
+}
+
 /**
  * notify_remote_via_irq - send event to remote end of event channel via irq
  * @irq: irq of event channel to send event to
@@ -386,6 +439,11 @@ static void __mask_evtchn_l2(int port)
 	sync_set_bit(port, &s->evtchn_mask[0]);
 }
 
+static void __mask_evtchn_l3(int port)
+{
+	sync_set_bit(port, &evtchn_mask[0]);
+}
+
 static void __unmask_evtchn_l2(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
@@ -416,6 +474,36 @@ static void __unmask_evtchn_l2(int port)
 	put_cpu();
 }
 
+static void __unmask_evtchn_l3(int port)
+{
+	unsigned int cpu = get_cpu();
+	int l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	int l2cb = BITS_PER_LONG;
+
+	if (unlikely(cpu != cpu_from_evtchn(port))) {
+		struct evtchn_unmask unmask = { .port = port };
+		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
+	} else {
+		struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+
+		sync_clear_bit(port, &evtchn_mask[0]);
+
+		/*
+		 * The following is basically the equivalent of
+		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
+		 * the interrupt edge' if the channel is masked.
+		 */
+		if (sync_test_bit(port, &evtchn_pending[0]) &&
+		    !sync_test_and_set_bit(port / l2cb,
+					   &per_cpu(evtchn_sel_l2, cpu)[0]) &&
+		    !sync_test_and_set_bit(port / l1cb,
+					   &vcpu_info->evtchn_pending_sel))
+			vcpu_info->evtchn_upcall_pending = 1;
+	}
+
+	put_cpu();
+}
+
 static void xen_irq_init(unsigned irq)
 {
 	struct irq_info *info;
@@ -1181,6 +1269,7 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
 	notify_remote_via_irq(irq);
 }
 
+static DEFINE_SPINLOCK(debug_lock);
 static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 {
 	struct shared_info *sh = HYPERVISOR_shared_info;
@@ -1188,7 +1277,6 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
 	int i;
 	unsigned long flags;
-	static DEFINE_SPINLOCK(debug_lock);
 	struct vcpu_info *v;
 
 	spin_lock_irqsave(&debug_lock, flags);
@@ -1196,13 +1284,13 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	printk("\nvcpu %d\n  ", cpu);
 
 	for_each_online_cpu(i) {
-		int pending;
+		int masked;
 		v = per_cpu(xen_vcpu, i);
-		pending = (get_irq_regs() && i == cpu)
+		masked = (get_irq_regs() && i == cpu)
 			? xen_irqs_disabled(get_irq_regs())
 			: v->evtchn_upcall_mask;
 		printk("%d: masked=%d pending=%d event_sel %0*lx\n  ", i,
-		       pending, v->evtchn_upcall_pending,
+		       masked, v->evtchn_upcall_pending,
 		       (int)(sizeof(v->evtchn_pending_sel)*2),
 		       v->evtchn_pending_sel);
 	}
@@ -1227,7 +1315,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 		       i % 8 == 0 ? "\n   " : " ");
 
 	printk("\nlocal cpu%d mask:\n   ", cpu);
-	for (i = (NR_EVENT_CHANNELS(evtchn_level)/BITS_PER_LONG)-1; i >= 0; i--)
+	for (i = (NR_EVENT_CHANNELS(2)/BITS_PER_LONG)-1; i >= 0; i--)
 		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
 		       cpu_evtchn[i],
 		       i % 8 == 0 ? "\n   " : " ");
@@ -1242,7 +1330,7 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	}
 
 	printk("\npending list:\n");
-	for (i = 0; i < NR_EVENT_CHANNELS(evtchn_level); i++) {
+	for (i = 0; i < NR_EVENT_CHANNELS(2); i++) {
 		if (sync_test_bit(i, sh->evtchn_pending)) {
 			int word_idx = i / BITS_PER_LONG;
 			printk("  %d: event %d -> irq %d%s%s%s\n",
@@ -1262,15 +1350,110 @@ static irqreturn_t __xen_debug_interrupt_l2(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static irqreturn_t __xen_debug_interrupt_l3(int irq, void *dev_id)
+{
+	int cpu = smp_processor_id();
+	unsigned long *cpu_evtchn = per_cpu(cpu_evtchn_mask, cpu);
+	int i, j;
+	unsigned long flags;
+	struct vcpu_info *v;
+
+	spin_lock_irqsave(&debug_lock, flags);
+
+	printk("\nvcpu %d\n  ", cpu);
+
+	for_each_online_cpu(i) {
+		int masked;
+
+		v = per_cpu(xen_vcpu, i);
+		masked = (get_irq_regs() && i == cpu)
+			? xen_irqs_disabled(get_irq_regs())
+			: v->evtchn_upcall_mask;
+		printk("%d: masked=%d pending=%d event_sel_l1 %0*lx\n  ", i,
+		       masked, v->evtchn_upcall_pending,
+		       (int)(sizeof(v->evtchn_pending_sel)*2),
+		       v->evtchn_pending_sel);
+
+		printk("\nevtchn_sel_l2:\n   ");
+		for (j = (sizeof(unsigned long)*8)-1; j >= 0; j--)
+			printk("%0*lx%s",
+			       (int)(sizeof(evtchn_sel_l2[0])*2),
+			       per_cpu(evtchn_sel_l2, i)[j],
+			       j % 8 == 0 ? "\n   " : " ");
+	}
+
+	v = per_cpu(xen_vcpu, cpu);
+
+	printk("\npending:\n   ");
+	for (i = ARRAY_SIZE(evtchn_pending)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_pending[0])*2),
+		       evtchn_pending[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nglobal mask:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       evtchn_mask[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nglobally unmasked:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       evtchn_pending[i] & ~evtchn_mask[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nlocal cpu%d mask:\n   ", cpu);
+	for (i = (NR_EVENT_CHANNELS(3)/BITS_PER_LONG)-1; i >= 0; i--)
+		printk("%0*lx%s", (int)(sizeof(cpu_evtchn[0])*2),
+		       cpu_evtchn[i],
+		       i % 8 == 0 ? "\n   " : " ");
+
+	printk("\nlocally unmasked:\n   ");
+	for (i = ARRAY_SIZE(evtchn_mask)-1; i >= 0; i--) {
+		unsigned long pending = evtchn_pending[i]
+			& ~evtchn_mask[i]
+			& cpu_evtchn[i];
+		printk("%0*lx%s", (int)(sizeof(evtchn_mask[0])*2),
+		       pending, i % 8 == 0 ? "\n   " : " ");
+	}
+
+	printk("\npending list:\n");
+	for (i = 0; i < NR_EVENT_CHANNELS(3); i++) {
+		if (sync_test_bit(i, evtchn_pending)) {
+			int word_idx_l1 = i / (BITS_PER_LONG * BITS_PER_LONG);
+			int word_idx_l2 = i / BITS_PER_LONG;
+			printk("  %d: event %d -> irq %d%s%s%s%s\n",
+			       cpu_from_evtchn(i), i,
+			       evtchn_to_irq[i],
+			       sync_test_bit(word_idx_l1, &v->evtchn_pending_sel)
+					     ? "" : " l1-clear",
+			       sync_test_bit(word_idx_l2, per_cpu(evtchn_sel_l2, cpu))
+					     ? "" : " l2-clear",
+			       !sync_test_bit(i, evtchn_mask)
+					     ? "" : " globally-masked",
+			       sync_test_bit(i, cpu_evtchn)
+					     ? "" : " locally-masked");
+		}
+	}
+
+	spin_unlock_irqrestore(&debug_lock, flags);
+
+	return IRQ_HANDLED;
+}
+
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
 {
 	return eops->xen_debug_interrupt(irq, dev_id);
 }
 
 static DEFINE_PER_CPU(unsigned, xed_nesting_count);
+
+/* 2-level event channel does not use current_word_idx_l2 */
 static DEFINE_PER_CPU(unsigned int, current_word_idx);
+static DEFINE_PER_CPU(unsigned int, current_word_idx_l2);
 static DEFINE_PER_CPU(unsigned int, current_bit_idx);
 
+
 /*
  * Mask out the i least significant bits of w
  */
@@ -1303,7 +1486,8 @@ static void __xen_evtchn_do_upcall_l2(void)
 		if (__this_cpu_inc_return(xed_nesting_count) - 1)
 			goto out;
 
-#ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
+#ifndef CONFIG_X86
+		/* No need for a barrier -- XCHG is a barrier on x86. */
 		/* Clear master flag /before/ clearing selector flag. */
 		wmb();
 #endif
@@ -1392,6 +1576,155 @@ out:
 	put_cpu();
 }
 
+void __xen_evtchn_do_upcall_l3(void)
+{
+	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+	unsigned count;
+	int start_word_idx_l1, start_word_idx_l2, start_bit_idx;
+	int word_idx_l1, word_idx_l2, bit_idx;
+	int i, j;
+	unsigned long l1cb, l2cb;
+	int cpu = get_cpu();
+
+	l1cb = BITS_PER_LONG * BITS_PER_LONG;
+	l2cb = BITS_PER_LONG;
+
+	do {
+		unsigned long pending_words_l1;
+
+		vcpu_info->evtchn_upcall_pending = 0;
+
+		if (__this_cpu_inc_return(xed_nesting_count) - 1)
+			goto out;
+#ifndef CONFIG_X86
+		/* No need for a barrier -- XCHG is a barrier on x86. */
+		/* Clear master flag /before/ clearing selector flag. */
+		wmb();
+#endif
+		/* here we get l1 pending selector */
+		pending_words_l1 = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+		start_word_idx_l1 = __this_cpu_read(current_word_idx);
+		start_word_idx_l2 = __this_cpu_read(current_word_idx_l2);
+		start_bit_idx = __this_cpu_read(current_bit_idx);
+
+		word_idx_l1 = start_word_idx_l1;
+
+		/* loop through l1, try to pick up l2 */
+		for (i = 0; pending_words_l1 != 0; i++) {
+			unsigned long words_l1;
+			unsigned long pending_words_l2;
+			unsigned long pwl2idx;
+
+			words_l1 = MASK_LSBS(pending_words_l1, word_idx_l1);
+
+			if (words_l1 == 0) {
+				word_idx_l1 = 0;
+				start_word_idx_l2 = 0;
+				continue;
+			}
+
+			word_idx_l1 = __ffs(words_l1);
+
+			pwl2idx = word_idx_l1;
+
+			pending_words_l2 =
+				xchg(&per_cpu(evtchn_sel_l2, cpu)[pwl2idx],
+				     0);
+
+			word_idx_l2 = 0;
+			if (word_idx_l1 == start_word_idx_l1) {
+				if (i == 0)
+					word_idx_l2 = start_word_idx_l2;
+				else
+					word_idx_l2 &= (1UL << start_word_idx_l2) - 1;
+			}
+
+			for (j = 0; pending_words_l2 != 0; j++) {
+				unsigned long pending_bits;
+				unsigned long words_l2;
+				unsigned long idx;
+
+				words_l2 = MASK_LSBS(pending_words_l2,
+						     word_idx_l2);
+
+				if (words_l2 == 0) {
+					word_idx_l2 = 0;
+					bit_idx = 0;
+					continue;
+				}
+
+				word_idx_l2 = __ffs(words_l2);
+
+				idx = word_idx_l1*BITS_PER_LONG+word_idx_l2;
+				pending_bits =
+					eops->active_evtchns(cpu, NULL, idx);
+
+				bit_idx = 0;
+				if (word_idx_l2 == start_word_idx_l2) {
+					if (j == 0)
+						bit_idx = start_bit_idx;
+					else
+						bit_idx &= (1UL<<start_bit_idx)-1;
+				}
+
+				/* process port */
+				do {
+					unsigned long bits;
+					int port, irq;
+					struct irq_desc *desc;
+
+					bits = MASK_LSBS(pending_bits, bit_idx);
+
+					if (bits == 0)
+						break;
+
+					bit_idx = __ffs(bits);
+
+					port = word_idx_l1 * l1cb +
+						word_idx_l2 * l2cb +
+						bit_idx;
+
+					irq = evtchn_to_irq[port];
+
+					if (irq != -1) {
+						desc = irq_to_desc(irq);
+						if (desc)
+							generic_handle_irq_desc(irq, desc);
+					}
+
+					bit_idx = (bit_idx + 1) % BITS_PER_LONG;
+
+					__this_cpu_write(current_bit_idx, bit_idx);
+					__this_cpu_write(current_word_idx_l2,
+							 bit_idx ? word_idx_l2 :
+							 (word_idx_l2+1) % BITS_PER_LONG);
+					__this_cpu_write(current_word_idx,
+							 word_idx_l2 ? word_idx_l1 :
+							 (word_idx_l1+1) % BITS_PER_LONG);
+				} while (bit_idx != 0);
+
+				if ((word_idx_l2 != start_word_idx_l2) || (j != 0))
+					pending_words_l2 &= ~(1UL << word_idx_l2);
+
+				word_idx_l2 = (word_idx_l2 + 1) % BITS_PER_LONG;
+			}
+
+			if ((word_idx_l1 != start_word_idx_l1) || (i != 0))
+				pending_words_l1 &= ~(1UL << word_idx_l1);
+
+			word_idx_l1 = (word_idx_l1 + 1) % BITS_PER_LONG;
+		}
+
+		BUG_ON(!irqs_disabled());
+		count = __this_cpu_read(xed_nesting_count);
+		__this_cpu_write(xed_nesting_count, 0);
+	} while (count != 1 || vcpu_info->evtchn_upcall_pending);
+
+out:
+	put_cpu();
+}
+
 void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
@@ -1525,12 +1858,6 @@ static void mask_ack_dynirq(struct irq_data *data)
 	ack_dynirq(data);
 }
 
-static inline int __is_masked_l2(int chn)
-{
-	struct shared_info *sh = HYPERVISOR_shared_info;
-	return sync_test_and_set_bit(chn, sh->evtchn_mask);
-}
-
 static int retrigger_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -1821,14 +2148,74 @@ static struct evtchn_ops evtchn_ops_l2 __read_mostly = {
 	.xen_debug_interrupt = __xen_debug_interrupt_l2,
 };
 
+static struct evtchn_ops evtchn_ops_l3 __read_mostly = {
+	.active_evtchns = __active_evtchns_l3,
+	.clear_evtchn = __clear_evtchn_l3,
+	.set_evtchn = __set_evtchn_l3,
+	.test_evtchn = __test_evtchn_l3,
+	.mask_evtchn = __mask_evtchn_l3,
+	.unmask_evtchn = __unmask_evtchn_l3,
+	.is_masked = __is_masked_l3,
+	.xen_evtchn_do_upcall = __xen_evtchn_do_upcall_l3,
+	.xen_debug_interrupt = __xen_debug_interrupt_l3,
+};
+
+int xen_event_channel_setup_3level(void)
+{
+	evtchn_register_nlevel_t reg;
+	int i, nr_pages, cpu;
+	unsigned long mfns[nr_cpu_ids];
+	unsigned long offsets[nr_cpu_ids];
+	int rc = -EINVAL;
+
+	memset(&reg, 0, sizeof(reg));
+
+	reg.level = 3;
+	nr_pages = (sizeof(unsigned long) == 4 ? 1 : 8);
+
+	for (i = 0; i < nr_pages; i++) {
+		unsigned long offset = PAGE_SIZE * i;
+		reg.u.l3.evtchn_pending[i] =
+			arbitrary_virt_to_mfn(
+				(void *)((unsigned long)evtchn_pending+offset));
+		reg.u.l3.evtchn_mask[i] =
+			arbitrary_virt_to_mfn(
+				(void *)((unsigned long)evtchn_mask+offset));
+	}
+
+	reg.u.l3.l2sel_mfn = mfns;
+	reg.u.l3.l2sel_offset = offsets;
+	reg.u.l3.nr_vcpus = nr_cpu_ids;
+
+	for_each_possible_cpu(cpu) {
+		reg.u.l3.l2sel_mfn[cpu] =
+			arbitrary_virt_to_mfn(&per_cpu(evtchn_sel_l2, cpu));
+		reg.u.l3.l2sel_offset[cpu] =
+			offset_in_page(&per_cpu(evtchn_sel_l2, cpu));
+	}
+
+	rc = HYPERVISOR_event_channel_op(EVTCHNOP_register_nlevel, &reg);
+
+	if (rc == 0)
+		evtchn_level = 3;
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(xen_event_channel_setup_3level);
+
 void __init xen_init_IRQ(void)
 {
 	int i, rc;
 	int cpu;
 
-	/* Setup 2-level event channel */
-	eops = &evtchn_ops_l2;
-	evtchn_level = 2;
+	switch (evtchn_level) {
+	case 2:
+		eops = &evtchn_ops_l2; break;
+	case 3:
+		eops = &evtchn_ops_l3; break;
+	default:
+		BUG();
+	}
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS(evtchn_level),
 				sizeof(*evtchn_to_irq),
diff --git a/include/xen/events.h b/include/xen/events.h
index bc10f22..87696fc 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -111,5 +111,7 @@ int xen_test_irq_shared(int irq);
 
 /* N-level event channels */
 extern unsigned int evtchn_level;
+extern unsigned int evtchn_level_param;
+int xen_event_channel_setup_3level(void);
 
 #endif	/* _XEN_EVENTS_H */
diff --git a/include/xen/interface/event_channel.h b/include/xen/interface/event_channel.h
index f494292..f764d21 100644
--- a/include/xen/interface/event_channel.h
+++ b/include/xen/interface/event_channel.h
@@ -190,6 +190,30 @@ struct evtchn_reset {
 };
 typedef struct evtchn_reset evtchn_reset_t;
 
+/*
+ * EVTCHNOP_register_nlevel: Register N level event channels.
+ * NOTES:
+ *   1. currently only 3-level is supported.
+ *   2. should fall back to basic 2-level if this call fails.
+ */
+#define EVTCHNOP_register_nlevel 11
+#define MAX_L3_PAGES 8		/* 8 pages for 64 bits */
+struct evtchn_register_3level {
+	unsigned long evtchn_pending[MAX_L3_PAGES];
+	unsigned long evtchn_mask[MAX_L3_PAGES];
+	unsigned long *l2sel_mfn;
+	unsigned long *l2sel_offset;
+	unsigned int nr_vcpus;
+};
+
+struct evtchn_register_nlevel {
+	uint32_t level;
+	union {
+		struct evtchn_register_3level l3;
+	} u;
+};
+typedef struct evtchn_register_nlevel evtchn_register_nlevel_t;
+
 struct evtchn_op {
 	uint32_t cmd; /* EVTCHNOP_* */
 	union {
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index c66e1ff..7cb9d8f 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -289,7 +289,7 @@ DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
  *  32k if a long is 32 bits; 256k if a long is 64 bits.
  */
 #define NR_EVENT_CHANNELS_L2 (sizeof(unsigned long) * sizeof(unsigned long) * 64)
-#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * sizeof(unsigned long))
+#define NR_EVENT_CHANNELS_L3 (NR_EVENT_CHANNELS_L2 * 64)
 #define NR_EVENT_CHANNELS(x) ({ unsigned int __v = 0;	\
 	switch (x) {					\
 	case 2:						\
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 19:43:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 19:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TplGi-0002eU-B3; Mon, 31 Dec 2012 19:43:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1TplGh-0002eP-4Q
	for xen-devel@lists.xen.org; Mon, 31 Dec 2012 19:43:27 +0000
Received: from [85.158.139.83:39308] by server-3.bemta-5.messagelabs.com id
	7B/D9-25441-EDAE1E05; Mon, 31 Dec 2012 19:43:26 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-182.messagelabs.com!1356983005!30081073!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24093 invoked from network); 31 Dec 2012 19:43:25 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-182.messagelabs.com with SMTP;
	31 Dec 2012 19:43:25 -0000
X-TM-IMSS-Message-ID: <51f0d4ab0008a066@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 51f0d4ab0008a066 ;
	Mon, 31 Dec 2012 14:43:36 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id qBVJhLwb024161; 
	Mon, 31 Dec 2012 14:43:21 -0500
Message-ID: <50E1EAD9.1080401@tycho.nsa.gov>
Date: Mon, 31 Dec 2012 14:43:21 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Tamas Lengyel <tamas.k.lengyel@gmail.com>
References: <CABfawhn5BN8RLU1wSxHxX62hB5fUQngXFhEKn9GNHcW7yBs4DA@mail.gmail.com>
In-Reply-To: <CABfawhn5BN8RLU1wSxHxX62hB5fUQngXFhEKn9GNHcW7yBs4DA@mail.gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XSM and privcmd_ioctl_mmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/26/2012 12:53 AM, Tamas Lengyel wrote:
> Hi,
> I'm using the v6 XSM patch-set (
> http://lists.xen.org/archives/html/xen-devel/2012-11/msg01920.html) to
> perform dom0 disaggregation but I came across two ioctl functions in the
> Linux privcmd driver that were getting -EPERM errors in my secondary
> control domU. The problem is that the permission checks are not coming from
> XSM but from the kernel itself, when XSM should be in charge of access control.
> 
> The two functions in the Linux 3.x kernel are *privcmd_ioctl_mmap* and
> *privcmd_ioctl_mmap_batch*:
> 
> drivers/xen/privcmd.c@199 and @319 in Linux 3.7.0:
> 
> *        if (!xen_initial_domain())*
> *                return -EPERM;*
>
> Are these checks still needed when the XSM patches are applied?
> 
> Thanks,
> Tamas

No, those checks were never needed, with or without XSM; they were trying
to enforce hypervisor-level access control in Linux, which was at best a
slight performance improvement.  They should be removed; I will submit a
patch to remove them if you don't want to submit it yourself, since they
do break control domains.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> Linux privcmd driver that were getting -EPERM errors in my secondary
> control domU. The problem is that the permission checks are not coming from
> XSM but from the kernel itself, when XSM should be in charge of access control.
> 
> The two functions in the Linux 3.x kernel are privcmd_ioctl_mmap and
> privcmd_ioctl_mmap_batch:
> 
> drivers/xen/privcmd.c@199 and @319 in Linux 3.7.0:
> 
>         if (!xen_initial_domain())
>                 return -EPERM;
> Are these checks still needed when the XSM patches are applied?
> 
> Thanks,
> Tamas

No, those checks were never needed, with or without XSM; they were trying
to enforce hypervisor-level access control in Linux, which was at best a
slight performance improvement.  They should be removed; I will submit a
patch to remove them if you don't want to submit it yourself, since they
do break control domains.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Dec 31 20:45:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Dec 2012 20:45:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1TpmDu-0003F1-3k; Mon, 31 Dec 2012 20:44:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.lengyel@zentific.com>) id 1TpmDs-0003Ew-Tm
	for xen-devel@lists.xensource.com; Mon, 31 Dec 2012 20:44:37 +0000
Received: from [85.158.143.35:48017] by server-3.bemta-4.messagelabs.com id
	68/7B-18211-339F1E05; Mon, 31 Dec 2012 20:44:35 +0000
X-Env-Sender: tamas.lengyel@zentific.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1356986671!5913800!1
X-Originating-IP: [209.85.220.181]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.8; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14787 invoked from network); 31 Dec 2012 20:44:32 -0000
Received: from mail-vc0-f181.google.com (HELO mail-vc0-f181.google.com)
	(209.85.220.181)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2012 20:44:32 -0000
Received: by mail-vc0-f181.google.com with SMTP id gb30so13219015vcb.12
	for <xen-devel@lists.xensource.com>;
	Mon, 31 Dec 2012 12:44:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type
	:x-gm-message-state;
	bh=gXCe3NUgeylzFjpw8Bt5nDIDBHccSWmKJT5sT9PE7Jk=;
	b=WqqwlYjAed8m13wVSQpMMqauoQT/JLMH4gYqFSnkgvwWFe2xsQXI7wkDkd0FaUm+Jj
	PjWo0ZEC9DWSymQB+YsfMxdyo0lAcxsYcYya388Lyok1gejcy4w70LitndO4BqjrYI/5
	9LEohCDiRSZLu5wOsG2vV2xxYrhFgFOUxDzyPabo9k0iMxvA1ISFcdmFwqle1vJGuMTl
	6fOJECEUkz1K2Qna6NJLVB9g1oWitcjLN2HZSv/eH4rVhpZ9eh0fuqPkYHz7WD0V24lS
	ltenlWWV9OLr3+gaAHw3i7M9Jfdpf5UlvJsl3fjPFWbrHyCA0Tt2tcUKXIdx6hgHYvZE
	Q80g==
MIME-Version: 1.0
Received: by 10.58.95.170 with SMTP id dl10mr60419596veb.54.1356986670818;
	Mon, 31 Dec 2012 12:44:30 -0800 (PST)
Received: by 10.58.0.129 with HTTP; Mon, 31 Dec 2012 12:44:30 -0800 (PST)
Date: Mon, 31 Dec 2012 15:44:30 -0500
Message-ID: <CAErYnsgFhW3Cf-W1CQJGOUstcyfdQxWUcGkGbd0uxM1R=XYuSw@mail.gmail.com>
From: Tamas Lengyel <tamas.lengyel@zentific.com>
To: konrad.wilk@oracle.com, jeremy@goop.org
X-Gm-Message-State: ALoCoQmu/CCORAXEEfh1VgkTsMo3BerMAXk155D4Ucqgv5/xCg3ZpGxslBGRhOhQ8RHEkfZNP+x6
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH] Access control in Xen privcmd_ioctl_mmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In the privcmd Linux driver, two checks in the functions
privcmd_ioctl_mmap and privcmd_ioctl_mmap_batch are not needed, as they
attempt to enforce hypervisor-level access control inside the kernel.
They should be removed because they break secondary control domains when
performing dom0 disaggregation. Xen itself provides adequate security
controls around these hypercalls, and these checks prevent those
controls from functioning as intended.

The patch applies to the stable Linux 3.7.1 kernel.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: xen-devel@lists.xensource.com
Cc: linux-kernel@vger.kernel.org
---
 drivers/xen/privcmd.c |    6 ------
 1 files changed, 0 insertions(+), 6 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 71f5c45..adaa260 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -196,9 +196,6 @@ static long privcmd_ioctl_mmap(void __user *udata)
        LIST_HEAD(pagelist);
        struct mmap_mfn_state state;

-       if (!xen_initial_domain())
-               return -EPERM;
-
        if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
                return -EFAULT;

@@ -316,9 +313,6 @@ static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
        int *err_array = NULL;
        struct mmap_batch_state state;

-       if (!xen_initial_domain())
-               return -EPERM;
-
        switch (version) {
        case 1:
                if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

